docs: update long lines and tables (#1952)

Update long lines and tables formatting in markdown doc files.
Author: Yuan Teoh
Date: 2025-11-14 12:25:49 -08:00 (committed by GitHub)
Commit: 735cb760ea (parent: e5e9fb7f94)
127 changed files with 1104 additions and 703 deletions


@@ -93,7 +93,8 @@ implementation](https://github.com/googleapis/genai-toolbox/blob/main/internal/s
### Adding a New Tool
> [!NOTE]
> Please follow the tool naming convention detailed
> [here](./DEVELOPER.md#tool-naming-conventions).
We recommend looking at an [example tool
implementation](https://github.com/googleapis/genai-toolbox/tree/main/internal/tools/postgres/postgressql).
@@ -129,10 +130,10 @@ tools.
* **Add a test file** under a new directory `tests/newdb`.
* **Add pre-defined integration test suites** in the
`/tests/newdb/newdb_integration_test.go` that are **required** to be run as
long as your code contains related features. Please check each test suite for
the config defaults; if your source requires test suite config updates, refer
to the [config option](./tests/option.go):
1. [RunToolGetTest][tool-get]: tests for the `GET` endpoint that returns the
tool's manifest.
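Once a new suite is wired up, it can typically be run with standard Go tooling
(a sketch; the `newdb` path follows the layout above, and most suites also need
source-specific configuration or environment variables per the config options
mentioned above):

```bash
# run the new source's integration tests verbosely
go test -v ./tests/newdb/
```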


@@ -255,18 +255,25 @@ Follow these steps to preview documentation changes locally using a Hugo server:
There are 3 GHA workflows we use to achieve document versioning:
1. **Deploy In-development docs:**
This workflow is run on every commit merged into the main branch. It deploys
the built site to the `/dev/` subdirectory for the in-development
documentation.
1. **Deploy Versioned Docs:**
When a new GitHub Release is published, it performs two deployments based on
the new release tag: one to the new version subdirectory and one to the root
directory of the versioned-gh-pages branch.
**Note:** Before the release PR from release-please is merged, add the
newest version into the hugo.toml file.
1. **Deploy Previous Version Docs:**
This is a manual workflow, started from the GitHub Actions UI.
It rebuilds and redeploys documentation for versions that were released before
this new system was in place. Start the workflow from the UI by providing the
git version tag you want to build the documentation for. The specific versioned
subdirectory and the root docs are updated on the versioned-gh-pages branch.
#### Contributors
@@ -337,7 +344,9 @@ for instructions on developing Toolbox SDKs.
Team `@googleapis/senseai-eco` has been set as
[CODEOWNERS](.github/CODEOWNERS). The GitHub TeamSync tool is used to create
this team from MDB Group, `senseai-eco`. Additionally, database-specific GitHub
teams (e.g., `@googleapis/toolbox-alloydb`) have been created from MDB groups to
manage code ownership and review for individual database products.
Team `@googleapis/toolbox-contributors` has write access to this repo. They
can create branches and approve test runs. But they do not have the ability
@@ -441,7 +450,8 @@ Trigger pull request tests for external contributors by:
## Repo Setup & Automation
* .github/blunderbuss.yml - Auto-assign issues and PRs from GitHub teams. Use a
product label to assign to a product-specific team member.
* .github/renovate.json5 - Tooling for dependency updates. Dependabot is built
into the GitHub repo for GitHub security warnings.
* go/github-issue-mirror - GitHub issues are automatically mirrored into buganizer


@@ -1,10 +1,14 @@
This document helps you find and install the right Gemini CLI extension to
interact with your databases.
## How to Install an Extension
To install any of the extensions listed below, use the `gemini extensions
install` command followed by the extension's GitHub repository URL.
For complete instructions on finding, installing, and managing extensions,
please see the [official Gemini CLI extensions
documentation](https://github.com/google-gemini/gemini-cli/blob/main/docs/extensions/index.md).
**Example Installation Command:**
@@ -13,46 +17,63 @@ gemini extensions install https://github.com/gemini-cli-extensions/EXTENSION_NAM
```
Make sure the user knows:
* These commands are not supported from within the CLI
* These commands will only be reflected in active CLI sessions on restart
* Extensions require Application Default Credentials in your environment. See
[Set up ADC for a local development
environment](https://cloud.google.com/docs/authentication/set-up-adc-local-dev-environment)
to learn how you can provide either your user credentials or service account
credentials to ADC in a local development environment; see the example command
after this list.
* Most extensions require you to set environment variables to connect to a
database. If there is a link provided for the configuration, fetch the web
page and return the configuration.
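For the ADC requirement above, user credentials can typically be provided with
the following command (the same command these docs reference elsewhere for
setting up local Application Default Credentials):

```bash
# provide your user credentials to Application Default Credentials
gcloud auth application-default login
```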
-----
## Find Your Database Extension
Find your database or service in the list below to get the correct installation
command.
**Note on Observability:** Extensions with `-observability` in their name are
designed to help you understand the health and performance of your database
instances, often by analyzing metrics and logs.
### Google Cloud Managed Databases
#### BigQuery
* For data analytics and querying:
```bash
gemini extensions install https://github.com/gemini-cli-extensions/bigquery-data-analytics
```
Configuration:
https://github.com/gemini-cli-extensions/bigquery-data-analytics/tree/main?tab=readme-ov-file#configuration
* For conversational analytics (using natural language):
```bash
gemini extensions install https://github.com/gemini-cli-extensions/bigquery-conversational-analytics
```
Configuration: https://github.com/gemini-cli-extensions/bigquery-conversational-analytics/tree/main?tab=readme-ov-file#configuration
#### Cloud SQL for MySQL
* Main Extension:
```bash
gemini extensions install https://github.com/gemini-cli-extensions/cloud-sql-mysql
```
Configuration:
https://github.com/gemini-cli-extensions/cloud-sql-mysql/tree/main?tab=readme-ov-file#configuration
* Observability:
```bash
gemini extensions install https://github.com/gemini-cli-extensions/cloud-sql-mysql-observability
```
@@ -61,129 +82,166 @@ Find your database or service in the list below to get the correct installation
#### Cloud SQL for PostgreSQL
* Main Extension:
```bash
gemini extensions install https://github.com/gemini-cli-extensions/cloud-sql-postgresql
```
Configuration:
https://github.com/gemini-cli-extensions/cloud-sql-postgresql/tree/main?tab=readme-ov-file#configuration
* Observability:
```bash
gemini extensions install https://github.com/gemini-cli-extensions/cloud-sql-postgresql-observability
```
If you are looking for other PostgreSQL options, consider the `postgres`
extension for self-hosted instances, or the `alloydb` extension for AlloyDB
for PostgreSQL.
#### Cloud SQL for SQL Server
* Main Extension:
```bash
gemini extensions install https://github.com/gemini-cli-extensions/cloud-sql-sqlserver
```
Configuration:
https://github.com/gemini-cli-extensions/cloud-sql-sqlserver/tree/main?tab=readme-ov-file#configuration
* Observability:
```bash
gemini extensions install https://github.com/gemini-cli-extensions/cloud-sql-sqlserver-observability
```
If you are looking for self-hosted SQL Server, consider the `sql-server`
extension.
#### AlloyDB for PostgreSQL
* Main Extension:
```bash
gemini extensions install https://github.com/gemini-cli-extensions/alloydb
```
Configuration:
https://github.com/gemini-cli-extensions/alloydb/tree/main?tab=readme-ov-file#configuration
* Observability:
```bash
gemini extensions install https://github.com/gemini-cli-extensions/alloydb-observability
```
If you are looking for other PostgreSQL options, consider the `postgres`
extension for self-hosted instances, or the `cloud-sql-postgresql` extension
for Cloud SQL for PostgreSQL.
#### Spanner
* For querying Spanner databases:
```bash
gemini extensions install https://github.com/gemini-cli-extensions/spanner
```
Configuration:
https://github.com/gemini-cli-extensions/spanner/tree/main?tab=readme-ov-file#configuration
#### Firestore
* For querying Firestore in Native Mode:
```bash
gemini extensions install https://github.com/gemini-cli-extensions/firestore-native
```
Configuration:
https://github.com/gemini-cli-extensions/firestore-native/tree/main?tab=readme-ov-file#configuration
### Other Google Cloud Data Services
#### Dataplex
* For interacting with Dataplex data lakes and assets:
```bash
gemini extensions install https://github.com/gemini-cli-extensions/dataplex
```
Configuration:
https://github.com/gemini-cli-extensions/dataplex/tree/main?tab=readme-ov-file#configuration
#### Looker
* For querying Looker instances:
```bash
gemini extensions install https://github.com/gemini-cli-extensions/looker
```
Configuration:
https://github.com/gemini-cli-extensions/looker/tree/main?tab=readme-ov-file#configuration
### Other Database Engines
These extensions are for connecting to database instances not managed by Cloud
SQL (e.g., self-hosted on-prem, on a VM, or in another cloud).
* MySQL:
```bash
gemini extensions install https://github.com/gemini-cli-extensions/mysql
```
Configuration:
https://github.com/gemini-cli-extensions/mysql/tree/main?tab=readme-ov-file#configuration
If you are looking for Google Cloud managed MySQL, consider the
`cloud-sql-mysql` extension.
* PostgreSQL:
```bash
gemini extensions install https://github.com/gemini-cli-extensions/postgres
```
Configuration:
https://github.com/gemini-cli-extensions/postgres/tree/main?tab=readme-ov-file#configuration
If you are looking for Google Cloud managed PostgreSQL, consider the
`cloud-sql-postgresql` or `alloydb` extensions.
* SQL Server:
```bash
gemini extensions install https://github.com/gemini-cli-extensions/sql-server
```
Configuration:
https://github.com/gemini-cli-extensions/sql-server/tree/main?tab=readme-ov-file#configuration
If you are looking for Google Cloud managed SQL Server, consider the
`cloud-sql-sqlserver` extension.
### Custom Tools
#### MCP Toolbox
* For connecting to MCP Toolbox servers:
This extension can be used with any Google Cloud database to build custom
tools. For more information, see the [MCP Toolbox
documentation](https://googleapis.github.io/genai-toolbox/getting-started/introduction/).
```bash
gemini extensions install https://github.com/gemini-cli-extensions/mcp-toolbox
```
Configuration:
https://github.com/gemini-cli-extensions/mcp-toolbox/tree/main?tab=readme-ov-file#configuration


@@ -804,8 +804,6 @@ For more detailed instructions on using the Toolbox Core SDK, see the
For more detailed instructions on using the Toolbox Go SDK, see the
[project's README][toolbox-core-go-readme].
[toolbox-go]: https://pkg.go.dev/github.com/googleapis/mcp-toolbox-sdk-go/core
[toolbox-core-go-readme]: https://github.com/googleapis/mcp-toolbox-sdk-go/blob/main/core/README.md
</details>
</details>


@@ -183,11 +183,11 @@ Protocol (OTLP). If you would like to use a collector, please refer to this
The following flags are used to determine Toolbox's telemetry configuration:
| **flag** | **type** | **description** |
|----------------------------|----------|------------------------------------------------------------------------------------------------------------------|
| `--telemetry-gcp` | bool | Enable exporting directly to Google Cloud Monitoring. Default is `false`. |
| `--telemetry-otlp` | string | Enable exporting using OpenTelemetry Protocol (OTLP) to the specified endpoint (e.g. "<http://127.0.0.1:4318>"). |
| `--telemetry-service-name` | string | Sets the value of the `service.name` resource attribute. Default is `toolbox`. |
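For example, a minimal launch command combining these flags might look like the
following sketch (the `toolbox` binary name, endpoint, and service name are
placeholder assumptions):

```bash
# export telemetry via OTLP to a local collector under a custom service name
./toolbox --telemetry-otlp "http://127.0.0.1:4318" --telemetry-service-name "my-toolbox"
```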
In addition to the flags noted above, you can also make additional configuration
for OpenTelemetry via the [General SDK Configuration][sdk-configuration] through


@@ -99,7 +99,8 @@ my_second_toolset = client.load_toolset("my_second_toolset")
### Prompts
The `prompts` section of your `tools.yaml` defines the templates containing
structured messages and instructions for interacting with language models.
```yaml
prompts:


@@ -84,38 +84,46 @@ following instructions for your OS and CPU architecture.
{{< tabpane text=true >}}
{{% tab header="Linux (AMD64)" lang="en" %}}
To install Toolbox as a binary on Linux (AMD64):
```sh
# see releases page for other versions
export VERSION=0.20.0
curl -L -o toolbox https://storage.googleapis.com/genai-toolbox/v$VERSION/linux/amd64/toolbox
chmod +x toolbox
```
{{% /tab %}}
{{% tab header="macOS (Apple Silicon)" lang="en" %}}
To install Toolbox as a binary on macOS (Apple Silicon):
```sh
# see releases page for other versions
export VERSION=0.20.0
curl -L -o toolbox https://storage.googleapis.com/genai-toolbox/v$VERSION/darwin/arm64/toolbox
chmod +x toolbox
```
{{% /tab %}}
{{% tab header="macOS (Intel)" lang="en" %}}
To install Toolbox as a binary on macOS (Intel):
```sh
# see releases page for other versions
export VERSION=0.20.0
curl -L -o toolbox https://storage.googleapis.com/genai-toolbox/v$VERSION/darwin/amd64/toolbox
chmod +x toolbox
```
{{% /tab %}}
{{% tab header="Windows (AMD64)" lang="en" %}}
To install Toolbox as a binary on Windows (AMD64):
```powershell
# see releases page for other versions
$VERSION = "0.20.0"
Invoke-WebRequest -Uri "https://storage.googleapis.com/genai-toolbox/v$VERSION/windows/amd64/toolbox.exe" -OutFile "toolbox.exe"
```
{{% /tab %}}
{{< /tabpane >}}
{{% /tab %}}


@@ -25,12 +25,15 @@ This guide assumes you have already done the following:
[install-postgres]: https://www.postgresql.org/download/
### Cloud Setup (Optional)
{{< regionInclude "quickstart/shared/cloud_setup.md" "cloud_setup" >}}
## Step 1: Set up your database
{{< regionInclude "quickstart/shared/database_setup.md" "database_setup" >}}
## Step 2: Install and configure Toolbox
{{< regionInclude "quickstart/shared/configure_toolbox.md" "configure_toolbox" >}}
## Step 3: Connect your agent to Toolbox


@@ -59,7 +59,7 @@ npm install genkit @genkit-ai/googleai
npm install llamaindex @llamaindex/google @llamaindex/workflow
{{< /tab >}}
{{< tab header="GoogleGenAI" lang="bash" >}}
npm install @google/genai
{{< /tab >}}
{{< /tabpane >}}


@@ -254,6 +254,7 @@ curl -O https://storage.googleapis.com/genai-toolbox/v0.15.0/windows/amd64/toolb
}
}
```
{{% /tab %}}
{{% tab header="Gemini Code Assist" lang="en" %}}
@@ -278,6 +279,7 @@ curl -O https://storage.googleapis.com/genai-toolbox/v0.15.0/windows/amd64/toolb
}
}
```
{{% /tab %}}
{{< /tabpane >}}
@@ -287,6 +289,7 @@ Your AI tool is now connected to Cloud SQL for SQL Server using MCP.
The `cloud-sql-mssql-admin` server provides tools for managing your Cloud SQL
instances and interacting with your database:
* **create_instance**: Creates a new Cloud SQL for SQL Server instance.
* **get_instance**: Gets information about a Cloud SQL instance.
* **list_instances**: Lists Cloud SQL instances in a project.


@@ -254,6 +254,7 @@ curl -O https://storage.googleapis.com/genai-toolbox/v0.15.0/windows/amd64/toolb
}
}
```
{{% /tab %}}
{{% tab header="Gemini Code Assist" lang="en" %}}
@@ -278,6 +279,7 @@ curl -O https://storage.googleapis.com/genai-toolbox/v0.15.0/windows/amd64/toolb
}
}
```
{{% /tab %}}
{{< /tabpane >}}
@@ -287,6 +289,7 @@ Your AI tool is now connected to Cloud SQL for MySQL using MCP.
The `cloud-sql-mysql-admin` server provides tools for managing your Cloud SQL
instances and interacting with your database:
* **create_instance**: Creates a new Cloud SQL for MySQL instance.
* **get_instance**: Gets information about a Cloud SQL instance.
* **list_instances**: Lists Cloud SQL instances in a project.


@@ -254,6 +254,7 @@ curl -O https://storage.googleapis.com/genai-toolbox/v0.15.0/windows/amd64/toolb
}
}
```
{{% /tab %}}
{{% tab header="Gemini Code Assist" lang="en" %}}
@@ -278,6 +279,7 @@ curl -O https://storage.googleapis.com/genai-toolbox/v0.15.0/windows/amd64/toolb
}
}
```
{{% /tab %}}
{{< /tabpane >}}
@@ -287,6 +289,7 @@ Your AI tool is now connected to Cloud SQL for PostgreSQL using MCP.
The `cloud-sql-postgres-admin` server provides tools for managing your Cloud SQL
instances and interacting with your database:
* **create_instance**: Creates a new Cloud SQL for PostgreSQL instance.
* **get_instance**: Gets information about a Cloud SQL instance.
* **list_instances**: Lists Cloud SQL instances in a project.


@@ -46,6 +46,7 @@ to expose your developer assistant tools to a Looker instance:
v0.10.0+:
<!-- {x-release-please-start-version} -->
{{< tabpane persist=header >}}
{{< tab header="linux/amd64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.20.0/linux/amd64/toolbox
@@ -82,7 +83,8 @@ curl -O https://storage.googleapis.com/genai-toolbox/v0.20.0/windows/amd64/toolb
{{< tabpane text=true >}}
{{% tab header="Gemini-CLI" lang="en" %}}
1. Install
[Gemini-CLI](https://github.com/google-gemini/gemini-cli#install-globally-with-npm).
1. Create a directory `.gemini` in your home directory if it doesn't exist.
1. Create the file `.gemini/settings.json` if it doesn't exist.
1. Add the following configuration, or add the mcpServers stanza if you already
@@ -287,7 +289,8 @@ Your AI tool is now connected to Looker using MCP. Try asking your AI
assistant to list models, explores, dimensions, and measures. Run a
query, retrieve the SQL for a query, and run a saved Look.
The full tool list is available in the [Prebuilt Tools
Reference](../../reference/prebuilt-tools/#looker).
The following tools are available to the LLM:
@@ -314,8 +317,10 @@ instance and create new saved content.
1. **get_looks**: Return the saved Looks that match a title or description
1. **run_look**: Run a saved Look and return the data
1. **make_look**: Create a saved Look in Looker and return the URL
1. **get_dashboards**: Return the saved dashboards that match a title or
description
1. **run_dashbaord**: Run the queries associated with a dashboard and return the
data
1. **make_dashboard**: Create a saved dashboard in Looker and return the URL
1. **add_dashboard_element**: Add a tile to a dashboard
@@ -344,7 +349,8 @@ as well as get the database schema needed to write LookML effectively.
1. **get_connection_schemas**: Get the list of schemas for a connection
1. **get_connection_databases**: Get the list of databases for a connection
1. **get_connection_tables**: Get the list of tables for a connection
1. **get_connection_table_columns**: Get the list of columns for a table in a
connection
{{< notice note >}}
Prebuilt tools are pre-1.0, so expect some tool changes between versions. LLMs


@@ -217,6 +217,7 @@ curl -O https://storage.googleapis.com/genai-toolbox/v0.20.0/windows/amd64/toolb
}
}
```
{{% /tab %}}
{{% tab header="Windsurf" lang="en" %}}
@@ -243,6 +244,7 @@ curl -O https://storage.googleapis.com/genai-toolbox/v0.20.0/windows/amd64/toolb
}
}
```
{{% /tab %}}
{{% tab header="Gemini CLI" lang="en" %}}
@@ -270,6 +272,7 @@ curl -O https://storage.googleapis.com/genai-toolbox/v0.20.0/windows/amd64/toolb
}
}
```
{{% /tab %}}
{{% tab header="Gemini Code Assist" lang="en" %}}
@@ -299,6 +302,7 @@ curl -O https://storage.googleapis.com/genai-toolbox/v0.20.0/windows/amd64/toolb
}
}
```
{{% /tab %}}
{{< /tabpane >}}


@@ -215,6 +215,7 @@ curl -O https://storage.googleapis.com/genai-toolbox/v0.20.0/windows/amd64/toolb
}
}
```
{{% /tab %}}
{{% tab header="Windsurf" lang="en" %}}
@@ -241,6 +242,7 @@ curl -O https://storage.googleapis.com/genai-toolbox/v0.20.0/windows/amd64/toolb
}
}
```
{{% /tab %}}
{{% tab header="Gemini CLI" lang="en" %}}
@@ -268,6 +270,7 @@ curl -O https://storage.googleapis.com/genai-toolbox/v0.20.0/windows/amd64/toolb
}
}
```
{{% /tab %}}
{{% tab header="Gemini Code Assist" lang="en" %}}
@@ -297,6 +300,7 @@ curl -O https://storage.googleapis.com/genai-toolbox/v0.20.0/windows/amd64/toolb
}
}
```
{{% /tab %}}
{{< /tabpane >}}


@@ -79,10 +79,10 @@ curl -O https://storage.googleapis.com/genai-toolbox/v0.20.0/windows/amd64/toolb
{{% tab header="Claude code" lang="en" %}}
1. Install [Claude
Code](https://docs.anthropic.com/en/docs/agents-and-tools/claude-code/overview).
1. Create a `.mcp.json` file in your project root if it doesn't exist.
1. Add the following configuration, replace the environment variables with your
values, and save:
```json
{
@@ -108,7 +108,7 @@ curl -O https://storage.googleapis.com/genai-toolbox/v0.20.0/windows/amd64/toolb
1. Open [Claude desktop](https://claude.ai/download) and navigate to Settings.
1. Under the Developer tab, tap Edit Config to open the configuration file.
1. Add the following configuration, replace the environment variables with your
values, and save:
```json
{
@@ -129,15 +129,15 @@ curl -O https://storage.googleapis.com/genai-toolbox/v0.20.0/windows/amd64/toolb
1. Restart Claude desktop.
1. From the new chat screen, you should see a hammer (MCP) icon appear with the
new MCP server available.
{{% /tab %}}
{{% tab header="Cline" lang="en" %}}
1. Open the [Cline](https://github.com/cline/cline) extension in VS Code and
tap the **MCP Servers** icon.
1. Tap Configure MCP Servers to open the configuration file.
1. Add the following configuration, replace the environment variables with your
values, and save:
```json
{
@@ -156,13 +156,15 @@ curl -O https://storage.googleapis.com/genai-toolbox/v0.20.0/windows/amd64/toolb
}
```
1. You should see a green active status after the server is successfully
connected.
{{% /tab %}}
{{% tab header="Cursor" lang="en" %}}
1. Create a `.cursor` directory in your project root if it doesn't exist.
1. Create a `.cursor/mcp.json` file if it doesn't exist and open it.
1. Add the following configuration, replace the environment variables with your
values, and save:
```json
{
@@ -211,6 +213,7 @@ curl -O https://storage.googleapis.com/genai-toolbox/v0.20.0/windows/amd64/toolb
}
}
```
{{% /tab %}}
{{% tab header="Windsurf" lang="en" %}}
@@ -236,6 +239,7 @@ curl -O https://storage.googleapis.com/genai-toolbox/v0.20.0/windows/amd64/toolb
}
}
```
{{% /tab %}}
{{% tab header="Gemini CLI" lang="en" %}}
@@ -262,6 +266,7 @@ curl -O https://storage.googleapis.com/genai-toolbox/v0.20.0/windows/amd64/toolb
}
}
```
{{% /tab %}}
{{% tab header="Gemini Code Assist" lang="en" %}}
@@ -290,6 +295,7 @@ curl -O https://storage.googleapis.com/genai-toolbox/v0.20.0/windows/amd64/toolb
}
}
```
{{% /tab %}}
{{< /tabpane >}}


@@ -287,6 +287,7 @@ curl -O https://storage.googleapis.com/genai-toolbox/v0.20.0/windows/amd64/toolb
}
}
```
{{% /tab %}}
{{% tab header="Gemini Code Assist" lang="en" %}}
@@ -313,6 +314,7 @@ curl -O https://storage.googleapis.com/genai-toolbox/v0.20.0/windows/amd64/toolb
}
}
```
{{% /tab %}}
{{< /tabpane >}}


@@ -195,6 +195,7 @@ curl -O https://storage.googleapis.com/genai-toolbox/v0.20.0/windows/amd64/toolb
}
}
```
{{% /tab %}}
{{% tab header="Windsurf" lang="en" %}}
@@ -217,6 +218,7 @@ curl -O https://storage.googleapis.com/genai-toolbox/v0.20.0/windows/amd64/toolb
}
}
```
{{% /tab %}}
{{% tab header="Gemini CLI" lang="en" %}}
@@ -240,6 +242,7 @@ curl -O https://storage.googleapis.com/genai-toolbox/v0.20.0/windows/amd64/toolb
}
}
```
{{% /tab %}}
{{% tab header="Gemini Code Assist" lang="en" %}}
@@ -265,6 +268,7 @@ curl -O https://storage.googleapis.com/genai-toolbox/v0.20.0/windows/amd64/toolb
}
}
```
{{% /tab %}}
{{< /tabpane >}}


@@ -7,12 +7,20 @@ description: "Connect to Toolbox via Gemini CLI Extensions."
## Gemini CLI Extensions
[Gemini CLI][gemini-cli] is an open-source AI agent designed to support
development workflows by assisting with coding, debugging, data exploration,
and content creation. Its mission is to provide an agentic interface for
interacting with database and analytics services and popular open-source
databases.
### How extensions work
Gemini CLI is highly extensible, allowing for the addition of new tools and
capabilities through extensions. You can load the extensions from a GitHub URL,
a local directory, or a configurable registry. They provide new tools, slash
commands, and prompts to assist with your workflow.
Use the Gemini CLI Extensions to load prebuilt or custom tools to interact with
your databases.
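For example, installing one of the extensions listed below from its GitHub URL
(the same command form shown in the extension listings in these docs):

```bash
gemini extensions install https://github.com/gemini-cli-extensions/postgres
```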
[gemini-cli]: https://google-gemini.github.io/gemini-cli/
@@ -35,4 +43,4 @@ Below are a list of Gemini CLI Extensions powered by MCP Toolbox:
* [mysql](https://github.com/gemini-cli-extensions/mysql)
* [postgres](https://github.com/gemini-cli-extensions/postgres)
* [spanner](https://github.com/gemini-cli-extensions/spanner)
* [sql-server](https://github.com/gemini-cli-extensions/sql-server)


@@ -169,10 +169,10 @@ testing and debugging Toolbox server.
### Tested Clients
| Client | SSE Works | MCP Config Docs |
|--------------------|------------|---------------------------------------------------------------------------------|
| Claude Desktop | ✅ | <https://modelcontextprotocol.io/quickstart/user#1-download-claude-for-desktop> |
| MCP Inspector | ✅ | <https://github.com/modelcontextprotocol/inspector> |
| Cursor | ✅ | <https://docs.cursor.com/context/model-context-protocol> |
| Windsurf | ✅ | <https://docs.windsurf.com/windsurf/mcp> |
| VS Code (Insiders) | ✅ | <https://code.visualstudio.com/docs/copilot/chat/mcp-servers> |


@@ -164,7 +164,8 @@ You can connect to Toolbox Cloud Run instances directly through the SDK.
{{< tab header="Python" lang="python" >}}
from toolbox_core import ToolboxClient, auth_methods
# Replace with the Cloud Run service URL generated in the previous step.
URL = "https://cloud-run-url.app"
auth_token_provider = auth_methods.aget_google_id_token(URL) # can also use sync method
@@ -204,7 +205,6 @@ func main() {
{{< /tab >}}
{{< /tabpane >}}
Now, you can use this client to connect to the deployed Cloud Run instance!
## Troubleshooting
@@ -215,21 +215,21 @@ for your service in the Google Cloud Console's Cloud Run section. They often
contain the specific error message needed to diagnose the problem.
{{< /notice >}}
- **Deployment Fails with "Container failed to start":** This is almost always
caused by a port mismatch. Ensure your container's `--port` argument is set to
`8080` to match the `$PORT` environment variable provided by Cloud Run.
- **Client Receives Permission Denied Error (401 or 403):** If your client
application (e.g., your local SDK) gets a `401 Unauthorized` or `403
Forbidden` error when trying to call your Cloud Run service, it means the
client is not properly authenticated as an invoker.
- Ensure the user or service account calling the service has the **Cloud Run
Invoker** (`roles/run.invoker`) IAM role.
- If running locally, make sure your Application Default Credentials are set
up correctly by running `gcloud auth application-default login`.
- **Service Fails to Access Secrets (in logs):** If your application starts but
the logs show errors like "permission denied" when trying to access Secret
Manager, it means the Toolbox service account is missing permissions.
- Ensure the `toolbox-identity` service account has the **Secret Manager
Secret Accessor** (`roles/secretmanager.secretAccessor`) IAM role.


@@ -69,7 +69,7 @@ response field (e.g. empty string).
To edit headers, press the "Edit Headers" button to display the header modal.
Within this modal, users can make direct edits by typing into the header's text
area.
Toolbox UI validates that the headers are in correct JSON format. Other
header-related errors (e.g., incorrect header names or values required by the


@@ -32,10 +32,12 @@ description: >
### Transport Configuration
**Server Settings:**
- `--address`, `-a`: Server listening address (default: "127.0.0.1")
- `--port`, `-p`: Server listening port (default: 5000)
**STDIO:**
- `--stdio`: Run in MCP STDIO mode instead of HTTP server
#### Usage Examples
@@ -50,15 +52,19 @@ description: >
The CLI supports multiple mutually exclusive ways to specify tool configurations:
**Single File:** (default)
- `--tools-file`: Path to a single YAML configuration file (default: `tools.yaml`)
**Multiple Files:**
- `--tools-files`: Comma-separated list of YAML files to merge
**Directory:**
- `--tools-folder`: Directory containing YAML files to load and merge
**Prebuilt Configurations:**
- `--prebuilt`: Use predefined configurations for specific database types (e.g.,
'bigquery', 'postgres', 'spanner'). See [Prebuilt Tools
Reference](prebuilt-tools.md) for allowed values.
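For instance, the mutually exclusive options above might be invoked as follows
(a sketch; the `toolbox` binary name and the file and directory names are
placeholder assumptions):

```bash
# single file (default: tools.yaml)
./toolbox --tools-file my_tools.yaml

# merge multiple files
./toolbox --tools-files "sources.yaml,tools.yaml"

# load and merge every YAML file in a directory
./toolbox --tools-folder ./config

# use a prebuilt configuration for a specific database type
./toolbox --prebuilt postgres
```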
@@ -79,4 +85,4 @@ Toolbox enables dynamic reloading by default. To disable, use the
To launch Toolbox's interactive UI, use the `--ui` flag. This allows you to test
tools and toolsets with features such as authorized parameters. To learn more,
visit [Toolbox UI](../how-to/toolbox-ui/index.md).
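As a sketch (a default `tools.yaml` is assumed to be present), the UI can be
launched with:

```bash
./toolbox --ui
```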


@@ -9,7 +9,11 @@ description: >
A `prompt` represents a reusable prompt template that can be retrieved and used
by MCP clients.
A Prompt is essentially a template for a message or a series of messages that
can be sent to a Large Language Model (LLM). The Toolbox server implements the
`prompts/list` and `prompts/get` methods from the [Model Context Protocol
(MCP)](https://modelcontextprotocol.io/docs/getting-started/intro)
specification, allowing clients to discover and retrieve these prompts.
```yaml
prompts:
@@ -24,19 +28,19 @@ prompts:
## Prompt Schema
| **field** | **type** | **required** | **description** |
|-------------|--------------------------------|--------------|--------------------------------------------------------------------------|
| description | string | No | A brief explanation of what the prompt does. |
| kind | string | No | The kind of prompt. Defaults to `"custom"`. |
| messages | [][Message](#message-schema) | Yes | A list of one or more message objects that make up the prompt's content. |
| arguments | [][Argument](#argument-schema) | No | A list of arguments that can be interpolated into the prompt's content. |
## Message Schema
| **field** | **type** | **required** | **description** |
|-----------|----------|--------------|--------------------------------------------------------------------------------------------------------|
| role | string | No | The role of the sender. Can be `"user"` or `"assistant"`. Defaults to `"user"`. |
| content | string | Yes | The text of the message. You can include placeholders for arguments using `{{.argument_name}}` syntax. |
## Argument Schema
@@ -45,11 +49,17 @@ type. If the `type` field is not specified, it will default to `string`.
## Usage with Gemini CLI
Prompts defined in your `tools.yaml` can be seamlessly integrated with the
Gemini CLI to create [custom slash
commands](https://github.com/google-gemini/gemini-cli/blob/main/docs/tools/mcp-server.md#mcp-prompts-as-slash-commands).
The workflow is as follows:
1. **Discovery:** When the Gemini CLI connects to your Toolbox server, it
automatically calls `prompts/list` to discover all available prompts.
2. **Conversion:** Each discovered prompt is converted into a corresponding
slash command. For example, a prompt named `code_review` becomes the
`/code_review` command in the CLI.
3. **Execution:** You can execute the command as follows:
@@ -65,6 +75,7 @@ Prompts defined in your `tools.yaml` can be seamlessly integrated with the Gemin
Please review the following code for quality, correctness, and potential improvements: \ndef hello():\n print('world')
```
5. **Response:** This completed prompt is then sent to the Gemini model, and the
model's response is displayed back to you in the CLI.
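Putting the schemas above together, here is a minimal sketch of a prompt
definition (a hypothetical `code_review` prompt as in the workflow above; the
argument object's `name` and `description` fields are assumptions based on the
`{{.argument_name}}` placeholder syntax):

```bash
# write a hypothetical tools.yaml containing one prompt with one argument
cat > tools.yaml <<'EOF'
prompts:
  code_review:
    description: Review code for quality and correctness.
    messages:
      - role: user
        content: "Please review the following code: {{.code}}"
    arguments:
      - name: code
        description: The code to review.
EOF
```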
## Kinds of prompts


@@ -52,12 +52,12 @@ prompts:
### Prompt Schema
| **field** | **type** | **required** | **description** |
|-------------|--------------------------------|--------------|--------------------------------------------------------------------------|
| kind | string | No | The kind of prompt. Must be `"custom"`. |
| description | string | No | A brief explanation of what the prompt does. |
| messages | [][Message](#message-schema) | Yes | A list of one or more message objects that make up the prompt's content. |
| arguments | [][Argument](#argument-schema) | No | A list of arguments that can be interpolated into the prompt's content. |
### Message Schema
@@ -66,4 +66,3 @@ Refer to the default prompt [Message Schema](../_index.md#message-schema).
### Argument Schema
Refer to the default prompt [Argument Schema](../_index.md#argument-schema).


@@ -52,7 +52,7 @@ cluster][alloydb-free-trial].
List schemas in an AlloyDB for PostgreSQL database.
- [`postgres-database-overview`](../tools/postgres/postgres-database-overview.md)
Fetches the current state of the PostgreSQL server.
- [`postgres-list-triggers`](../tools/postgres/postgres-list-triggers.md)
List triggers in an AlloyDB for PostgreSQL database.


@@ -8,7 +8,10 @@ description: >
## About
[Apache Cassandra][cassandra-docs] is a NoSQL distributed database. By design,
NoSQL databases are lightweight, open-source, non-relational, and largely
distributed. Counted among their strengths are horizontal scalability,
distributed architectures, and a flexible approach to schema definition.
[cassandra-docs]: https://cassandra.apache.org/
@@ -17,7 +20,6 @@ description: >
- [`cassandra-cql`](../tools/cassandra/cassandra-cql.md)
Run parameterized CQL queries in Cassandra.
## Example
```yaml
@@ -43,15 +45,15 @@ instead of hardcoding your secrets into the configuration file.
## Reference
| **field** | **type** | **required** | **description** |
|------------------------|:--------:|:------------:|----------------------------------------------------------------------------------------------------------------------------------------------------|
| kind | string | true | Must be "cassandra". |
| hosts | string[] | true | List of IP addresses to connect to (e.g., ["192.168.1.1:9042", "192.168.1.2:9042","192.168.1.3:9042"]). The default port is 9042 if not specified. |
| keyspace | string | true | Name of the Cassandra keyspace to connect to (e.g., "my_keyspace"). |
| protoVersion | integer | false | Protocol version for the Cassandra connection (e.g., 4). |
| username | string | false | Name of the Cassandra user to connect as (e.g., "my-cassandra-user"). |
| password | string | false | Password of the Cassandra user (e.g., "my-password"). |
| caPath | string | false | Path to the CA certificate for SSL/TLS (e.g., "/path/to/ca.crt"). |
| certPath | string | false | Path to the client certificate for SSL/TLS (e.g., "/path/to/client.crt"). |
| keyPath | string | false | Path to the client key for SSL/TLS (e.g., "/path/to/client.key"). |
| enableHostVerification | boolean | false | Enable host verification for SSL/TLS (e.g., true). By default, host verification is disabled. |


@@ -21,7 +21,6 @@ description: >
- [`clickhouse-sql`](../tools/clickhouse/clickhouse-sql.md)
Execute SQL queries as prepared statements in ClickHouse.
## Requirements
### Database User


@@ -23,9 +23,10 @@ A dataset is a container in your Google Cloud project that holds modality-specif
healthcare data. Datasets contain other data stores, such as FHIR stores and DICOM
stores, which in turn hold their own types of healthcare data.
A single dataset can contain one or many data stores, and those stores can all
service the same modality or different modalities as application needs dictate.
Using multiple stores in the same dataset might be appropriate in various
situations.
If you are new to the Cloud Healthcare API, you can try to
[create and view datasets and stores using curl][healthcare-quickstart-curl].
@@ -85,8 +86,9 @@ If you are new to the Cloud Healthcare API, you can try to
### IAM Permissions
The Cloud Healthcare API uses [Identity and Access Management
(IAM)][iam-overview] to control user and group access to Cloud Healthcare
resources like projects, datasets, and stores.
### Authentication via Application Default Credentials (ADC)
@@ -96,9 +98,9 @@ By **default**, Toolbox will use your [Application Default Credentials
When using this method, you need to ensure the IAM identity associated with your
ADC (such as a service account) has the correct permissions for the queries you
intend to run. Common roles include `roles/healthcare.fhirResourceReader` (which
includes permissions to read and search for FHIR resources) or
`roles/healthcare.dicomViewer` (for retrieving DICOM images).
Follow this [guide][set-adc] to set up your ADC.
### Authentication via User's OAuth Access Token
@@ -106,8 +108,8 @@ Follow this [guide][set-adc] to set up your ADC.
If the `useClientOAuth` parameter is set to `true`, Toolbox will instead use the
OAuth access token for authentication. This token is parsed from the
`Authorization` header passed in with the tool invocation request. This method
allows Toolbox to make queries to the [Cloud Healthcare API][healthcare-docs] on
behalf of the client or the end-user.
When using this on-behalf-of authentication, you must ensure that the
identity used has been granted the correct IAM permissions.


@@ -15,6 +15,7 @@ Cloud Monitoring API](https://cloud.google.com/monitoring/api). This allows
tools to access cloud monitoring metrics explorer and run promql queries.
Authentication can be handled in two ways:
1. **Application Default Credentials (ADC):** By default, the source uses ADC
to authenticate with the API.
2. **Client-side OAuth:** If `useClientOAuth` is set to `true`, the source will


@@ -48,7 +48,7 @@ to a database by following these instructions][csql-pg-quickstart].
List schemas in a PostgreSQL database.
- [`postgres-database-overview`](../tools/postgres/postgres-database-overview.md)
Fetches the current state of the PostgreSQL server.
- [`postgres-list-triggers`](../tools/postgres/postgres-list-triggers.md)
List triggers in a PostgreSQL database.
@@ -65,7 +65,6 @@ to a database by following these instructions][csql-pg-quickstart].
MCP](https://googleapis.github.io/genai-toolbox/how-to/connect-ide/cloud_sql_pg_mcp/)
Connect your IDE to Cloud SQL for Postgres using Toolbox.
## Requirements
### IAM Permissions


@@ -30,24 +30,25 @@ sources:
```
{{< notice note >}}
For more details about alternate addresses and custom ports refer to [Managing
Connections](https://docs.couchbase.com/java-sdk/current/howtos/managing-connections.html).
{{< /notice >}}
## Reference
| **field** | **type** | **required** | **description** |
|----------------------|:--------:|:------------:|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| kind | string | true | Must be "couchbase". |
| connectionString | string | true | Connection string for the Couchbase cluster. |
| bucket | string | true | Name of the bucket to connect to. |
| scope | string | true | Name of the scope within the bucket. |
| username | string | false | Username for authentication. |
| password | string | false | Password for authentication. |
| clientCert | string | false | Path to client certificate file for TLS authentication. |
| clientCertPassword | string | false | Password for the client certificate. |
| clientKey | string | false | Path to client key file for TLS authentication. |
| clientKeyPassword | string | false | Password for the client key. |
| caCert | string | false | Path to CA certificate file. |
| noSslVerify | boolean | false | If true, skip server certificate verification. **Warning:** This option should only be used in development or testing environments. Disabling SSL verification poses significant security risks in production as it makes your connection vulnerable to man-in-the-middle attacks. |
| profile | string | false | Name of the connection profile to apply. |
| queryScanConsistency | integer | false | Query scan consistency. Controls the consistency guarantee for index scanning. Values: 1 for "not_bounded" (fastest option, but results may not include the most recent operations), 2 for "request_plus" (highest consistency level, includes all operations up until the query started, but incurs a performance penalty). If not specified, defaults to the Couchbase Go SDK default. |


@@ -321,4 +321,4 @@ Logical operators are case-sensitive. `OR` and `AND` are acceptable whereas `or`
| **field** | **type** | **required** | **description** |
|-----------|:--------:|:------------:|----------------------------------------------------------------------------------|
| kind | string | true | Must be "dataplex". |
| project | string | true | ID of the GCP project used for quota and billing purposes (e.g. "my-project-id").|


@@ -10,22 +10,27 @@ description: >
# Elasticsearch Source
[Elasticsearch][elasticsearch-docs] is a distributed, free and open search and
analytics engine for all types of data, including textual, numerical,
geospatial, structured, and unstructured.
If you are new to Elasticsearch, you can learn how to
[set up a cluster and start indexing data][elasticsearch-quickstart].
Elasticsearch uses [ES|QL][elasticsearch-esql] for querying data. ES|QL
is a powerful query language that allows you to search and aggregate data in
Elasticsearch.
See the [official documentation](https://www.elastic.co/guide/en/elasticsearch/reference/current/index.html) for more information.
See the [official
documentation](https://www.elastic.co/guide/en/elasticsearch/reference/current/index.html)
for more information.
[elasticsearch-docs]: https://www.elastic.co/guide/en/elasticsearch/reference/current/index.html
[elasticsearch-quickstart]: https://www.elastic.co/guide/en/elasticsearch/reference/current/getting-started.html
[elasticsearch-esql]: https://www.elastic.co/guide/en/elasticsearch/reference/current/esql.html
[elasticsearch-docs]:
https://www.elastic.co/guide/en/elasticsearch/reference/current/index.html
[elasticsearch-quickstart]:
https://www.elastic.co/guide/en/elasticsearch/reference/current/getting-started.html
[elasticsearch-esql]:
https://www.elastic.co/guide/en/elasticsearch/reference/current/esql.html
## Available Tools
@@ -44,9 +49,12 @@ ensure the API key has the correct permissions for the queries you intend to
run. See [API key management][api-key-management] for more information on
applying permissions to an API key.
[api-key]: https://www.elastic.co/guide/en/elasticsearch/reference/current/security-api-create-api-key.html
[set-api-key]: https://www.elastic.co/guide/en/elasticsearch/reference/current/security-api-create-api-key.html
[api-key-management]: https://www.elastic.co/guide/en/elasticsearch/reference/current/security-api-get-api-key.html
[api-key]:
https://www.elastic.co/guide/en/elasticsearch/reference/current/security-api-create-api-key.html
[set-api-key]:
https://www.elastic.co/guide/en/elasticsearch/reference/current/security-api-create-api-key.html
[api-key-management]:
https://www.elastic.co/guide/en/elasticsearch/reference/current/security-api-get-api-key.html
## Example
@@ -61,8 +69,8 @@ sources:
## Reference
| **field** | **type** | **required** | **description** |
|-----------|:--------:|:------------:|-------------------------------------------------------------------------------|
| kind | string | true | Must be "elasticsearch". |
| addresses | []string | true | List of Elasticsearch hosts to connect to. |
| apikey | string | true | The API key to use for authentication. |
| **field** | **type** | **required** | **description** |
|-----------|:--------:|:------------:|--------------------------------------------|
| kind | string | true | Must be "elasticsearch". |
| addresses | []string | true | List of Elasticsearch hosts to connect to. |
| apikey | string | true | The API key to use for authentication. |
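As a minimal sketch of how these three fields fit together (host and key
values are placeholders):
```yaml
sources:
  my-es-source:
    kind: elasticsearch
    addresses:
      - https://localhost:9200  # placeholder host
    apikey: ${ES_API_KEY}
```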

View File

@@ -60,4 +60,4 @@ instead of hardcoding your secrets into the configuration file.
| port | string | true | Port to connect to (e.g. "3050") |
| database | string | true | Path to the Firebird database file (e.g. "/var/lib/firebird/data/test.fdb"). |
| user | string | true | Name of the Firebird user to connect as (e.g. "SYSDBA"). |
| password | string | true | Password of the Firebird user (e.g. "masterkey"). |
| password | string | true | Password of the Firebird user (e.g. "masterkey"). |

View File

@@ -48,8 +48,8 @@ permissions):
- `roles/cloudaicompanion.user`
- `roles/geminidataanalytics.dataAgentStatelessUser`
To initialize the application default credential run `gcloud auth login --update-adc`
in your environment before starting MCP Toolbox.
To initialize the application default credential run `gcloud auth login
--update-adc` in your environment before starting MCP Toolbox.
[set-adc]: https://cloud.google.com/docs/authentication/provide-credentials-adc
@@ -81,7 +81,8 @@ The client id and client secret are seemingly random character sequences
assigned by the looker server. If you are using Looker OAuth you don't need
these settings
The `project` and `location` fields are utilized **only** when using the conversational analytics tool.
The `project` and `location` fields are utilized **only** when using the
conversational analytics tool.
{{< notice tip >}}
Use environment variable replacement with the format ${ENV_NAME}

View File

@@ -8,17 +8,32 @@ description: >
## About
[MindsDB][mindsdb-docs] is an AI federated database in the world. It allows you to combine information from hundreds of datasources as if they were SQL, supporting joins across datasources and enabling you to query all unstructured data as if it were structured.
[MindsDB][mindsdb-docs] is an AI federated database. It allows you to combine
information from hundreds of datasources as if they were SQL tables, supporting
joins across datasources and enabling you to query all unstructured data as if
it were structured.
MindsDB translates MySQL queries into whatever API is needed - whether it's REST APIs, GraphQL, or native database protocols. This means you can write standard SQL queries and MindsDB automatically handles the translation to APIs like Salesforce, Jira, GitHub, email systems, MongoDB, and hundreds of other datasources.
MindsDB translates MySQL queries into whatever API is needed - whether it's REST
APIs, GraphQL, or native database protocols. This means you can write standard
SQL queries and MindsDB automatically handles the translation to APIs like
Salesforce, Jira, GitHub, email systems, MongoDB, and hundreds of other
datasources.
MindsDB also enables you to use ML frameworks to train and use models as virtual tables from the data in those datasources. With MindsDB, the GenAI Toolbox can now expand to hundreds of datasources and leverage all of MindsDB's capabilities on ML and unstructured data.
MindsDB also enables you to use ML frameworks to train models on the data in
those datasources and use them as virtual tables. With MindsDB, the GenAI
Toolbox can now expand to hundreds of datasources and leverage all of MindsDB's
capabilities on ML and unstructured data.
**Key Features:**
- **Federated Database**: Connect and query hundreds of datasources through a single SQL interface
- **Cross-Datasource Joins**: Perform joins across different datasources seamlessly
- **API Translation**: Automatically translates MySQL queries into REST APIs, GraphQL, and native protocols
- **Unstructured Data Support**: Query unstructured data as if it were structured
- **Federated Database**: Connect and query hundreds of datasources through a
single SQL interface
- **Cross-Datasource Joins**: Perform joins across different datasources
seamlessly
- **API Translation**: Automatically translates MySQL queries into REST APIs,
GraphQL, and native protocols
- **Unstructured Data Support**: Query unstructured data as if it were
structured
- **ML as Virtual Tables**: Train and use ML models as virtual tables
- **MySQL Wire Protocol**: Compatible with standard MySQL clients and tools
@@ -30,6 +45,7 @@ MindsDB also enables you to use ML frameworks to train and use models as virtual
MindsDB supports hundreds of datasources, including:
### **Business Applications**
- **Salesforce**: Query leads, opportunities, accounts, and custom objects
- **Jira**: Access issues, projects, workflows, and team data
- **GitHub**: Query repositories, commits, pull requests, and issues
@@ -37,22 +53,23 @@ MindsDB supports hundreds of datasources, including:
- **HubSpot**: Query contacts, companies, deals, and marketing data
### **Databases & Storage**
- **MongoDB**: Query NoSQL collections as structured tables
- **Redis**: Key-value stores and caching layers
- **Elasticsearch**: Search and analytics data
- **S3/Google Cloud Storage**: File storage and data lakes
### **Communication & Email**
- **Gmail/Outlook**: Query emails, attachments, and metadata
- **Slack**: Access workspace data and conversations
- **Microsoft Teams**: Team communications and files
- **Discord**: Server data and message history
## Example Queries
### Cross-Datasource Analytics
```sql
-- Join Salesforce opportunities with GitHub activity
SELECT
@@ -67,6 +84,7 @@ GROUP BY s.opportunity_name, s.amount, g.repository_name;
```
### Email & Communication Analysis
```sql
-- Analyze email patterns with Slack activity
SELECT
@@ -81,6 +99,7 @@ GROUP BY e.sender, e.subject, s.channel_name;
```
### ML Model Predictions
```sql
-- Use ML model to predict customer churn
SELECT
@@ -96,9 +115,13 @@ WHERE predicted_churn_probability > 0.8;
### Database User
This source uses standard MySQL authentication since MindsDB implements the MySQL wire protocol. You will need to [create a MindsDB user][mindsdb-users] to login to the database with. If MindsDB is configured without authentication, you can omit the password field.
This source uses standard MySQL authentication since MindsDB implements the
MySQL wire protocol. You will need to [create a MindsDB user][mindsdb-users] to
log in to the database with. If MindsDB is configured without authentication,
you can omit the password field.
[mindsdb-users]: https://docs.mindsdb.com/
## Example
```yaml
@@ -136,26 +159,32 @@ instead of hardcoding your secrets into the configuration file.
With MindsDB integration, you can:
- **Query Multiple Datasources**: Connect to databases, APIs, file systems, and more through a single SQL interface
- **Cross-Datasource Analytics**: Perform joins and analytics across different data sources
- **ML Model Integration**: Use trained ML models as virtual tables for predictions and insights
- **Unstructured Data Processing**: Query documents, images, and other unstructured data as structured tables
- **Real-time Predictions**: Get real-time predictions from ML models through SQL queries
- **API Abstraction**: Write SQL queries that automatically translate to REST APIs, GraphQL, and native protocols
- **Query Multiple Datasources**: Connect to databases, APIs, file systems, and
more through a single SQL interface
- **Cross-Datasource Analytics**: Perform joins and analytics across different
data sources
- **ML Model Integration**: Use trained ML models as virtual tables for
predictions and insights
- **Unstructured Data Processing**: Query documents, images, and other
unstructured data as structured tables
- **Real-time Predictions**: Get real-time predictions from ML models through
SQL queries
- **API Abstraction**: Write SQL queries that automatically translate to REST
APIs, GraphQL, and native protocols
## Reference
| **field** | **type** | **required** | **description** |
| ------------ | :------: | :----------: | ----------------------------------------------------------------------------------------------- |
| kind | string | true | Must be "mindsdb". |
| host | string | true | IP address to connect to (e.g. "127.0.0.1"). |
| port | string | true | Port to connect to (e.g. "3306"). |
| database | string | true | Name of the MindsDB database to connect to (e.g. "my_db"). |
| user | string | true | Name of the MindsDB user to connect as (e.g. "my-mindsdb-user"). |
| **field** | **type** | **required** | **description** |
|--------------|:--------:|:------------:|--------------------------------------------------------------------------------------------------------------|
| kind | string | true | Must be "mindsdb". |
| host | string | true | IP address to connect to (e.g. "127.0.0.1"). |
| port | string | true | Port to connect to (e.g. "3306"). |
| database | string | true | Name of the MindsDB database to connect to (e.g. "my_db"). |
| user | string | true | Name of the MindsDB user to connect as (e.g. "my-mindsdb-user"). |
| password | string | false | Password of the MindsDB user (e.g. "my-password"). Optional if MindsDB is configured without authentication. |
| queryTimeout | string | false | Maximum time to wait for query execution (e.g. "30s", "2m"). By default, no timeout is applied. |
| queryTimeout | string | false | Maximum time to wait for query execution (e.g. "30s", "2m"). By default, no timeout is applied. |
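A minimal sketch combining the fields above (host, database, and timeout
values are illustrative):
```yaml
sources:
  my-mindsdb-source:
    kind: mindsdb
    host: 127.0.0.1
    port: "3306"
    database: mindsdb
    user: ${MINDSDB_USER}
    password: ${MINDSDB_PASS}  # omit if authentication is disabled
    queryTimeout: 30s          # optional; no timeout by default
```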
## Resources
- [MindsDB Documentation][mindsdb-docs] - Official documentation and guides
- [MindsDB GitHub][mindsdb-github] - Source code and community
- [MindsDB GitHub][mindsdb-github] - Source code and community

View File

@@ -33,7 +33,8 @@ amount of data through a structured format.
This source only uses standard authentication. You will need to [create a
SQL Server user][mssql-users] to log in to the database with.
[mssql-users]: https://learn.microsoft.com/en-us/sql/relational-databases/security/authentication-access/create-a-database-user?view=sql-server-ver16
[mssql-users]:
https://learn.microsoft.com/en-us/sql/relational-databases/security/authentication-access/create-a-database-user?view=sql-server-ver16
## Example
@@ -56,12 +57,12 @@ instead of hardcoding your secrets into the configuration file.
## Reference
| **field** | **type** | **required** | **description** |
|-----------|:--------:|:------------:|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| kind | string | true | Must be "mssql". |
| host | string | true | IP address to connect to (e.g. "127.0.0.1"). |
| port | string | true | Port to connect to (e.g. "1433"). |
| database | string | true | Name of the SQL Server database to connect to (e.g. "my_db"). |
| user | string | true | Name of the SQL Server user to connect as (e.g. "my-user"). |
| password | string | true | Password of the SQL Server user (e.g. "my-password"). |
| **field** | **type** | **required** | **description** |
|-----------|:--------:|:------------:|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| kind | string | true | Must be "mssql". |
| host | string | true | IP address to connect to (e.g. "127.0.0.1"). |
| port | string | true | Port to connect to (e.g. "1433"). |
| database | string | true | Name of the SQL Server database to connect to (e.g. "my_db"). |
| user | string | true | Name of the SQL Server user to connect as (e.g. "my-user"). |
| password | string | true | Password of the SQL Server user (e.g. "my-password"). |
| encrypt | string | false | Encryption level for data transmitted between the client and server (e.g., "strict"). If not specified, defaults to the [github.com/microsoft/go-mssqldb](https://github.com/microsoft/go-mssqldb?tab=readme-ov-file#common-parameters) package's default encrypt value. |
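For instance, a source entry that opts into strict encryption might look like
this (a sketch; values are placeholders):
```yaml
sources:
  my-mssql-source:
    kind: mssql
    host: 127.0.0.1
    port: "1433"
    database: my_db
    user: ${MSSQL_USER}
    password: ${MSSQL_PASS}
    encrypt: strict  # optional; defaults to the go-mssqldb package default
```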

View File

@@ -82,4 +82,4 @@ suitable for large-scale applications.
### Strong Consistency
OceanBase provides strong consistency guarantees, ensuring that all transactions
are ACID compliant.
are ACID compliant.

View File

@@ -8,7 +8,10 @@ description: >
## About
[Oracle Database][oracle-docs] is a multi-model database management system produced and marketed by Oracle Corporation. It is commonly used for running online transaction processing (OLTP), data warehousing (DW), and mixed (OLTP & DW) database workloads.
[Oracle Database][oracle-docs] is a multi-model database management system
produced and marketed by Oracle Corporation. It is commonly used for running
online transaction processing (OLTP), data warehousing (DW), and mixed (OLTP &
DW) database workloads.
[oracle-docs]: https://www.oracle.com/database/
@@ -24,33 +27,44 @@ description: >
### Database User
This source uses standard authentication. You will need to [create an Oracle user][oracle-users] to log in to the database with the necessary permissions.
This source uses standard authentication. You will need to [create an Oracle
user][oracle-users] to log in to the database with the necessary permissions.
[oracle-users]:
https://docs.oracle.com/en/database/oracle/oracle-database/21/sqlrf/CREATE-USER.html
## Connection Methods
You can configure the connection to your Oracle database using one of the following three methods. **You should only use one method** in your source configuration.
You can configure the connection to your Oracle database using one of the
following three methods. **You should only use one method** in your source
configuration.
### Basic Connection (Host/Port/Service Name)
This is the most straightforward method, where you provide the connection details as separate fields:
This is the most straightforward method, where you provide the connection
details as separate fields:
- `host`: The IP address or hostname of the database server.
- `port`: The port number the Oracle listener is running on (typically 1521).
- `serviceName`: The service name for the database instance you wish to connect to.
- `serviceName`: The service name for the database instance you wish to connect
to.
### Connection String
As an alternative, you can provide all the connection details in a single `connectionString`. This is a convenient way to consolidate the connection information. The typical format is `hostname:port/servicename`.
As an alternative, you can provide all the connection details in a single
`connectionString`. This is a convenient way to consolidate the connection
information. The typical format is `hostname:port/servicename`.
### TNS Alias
For environments that use a `tnsnames.ora` configuration file, you can connect using a TNS (Transparent Network Substrate) alias.
For environments that use a `tnsnames.ora` configuration file, you can connect
using a TNS (Transparent Network Substrate) alias.
- `tnsAlias`: Specify the alias name defined in your `tnsnames.ora` file.
- `tnsAdmin` (Optional): If your configuration file is not in a standard location, you can use this field to provide the path to the directory containing it. This setting will override the `TNS_ADMIN` environment variable.
- `tnsAdmin` (Optional): If your configuration file is not in a standard
location, you can use this field to provide the path to the directory
containing it. This setting will override the `TNS_ADMIN` environment
variable.
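Before the full example below, here is a hedged sketch of the three
alternatives side by side (hostnames, service names, and aliases are
hypothetical; configure only one method per source):
```yaml
sources:
  # Method 1: basic host/port/service name
  oracle-basic:
    kind: oracle
    host: db.example.com
    port: "1521"
    serviceName: ORCLPDB1
    user: ${ORACLE_USER}
    password: ${ORACLE_PASS}
  # Method 2: single connection string (hostname:port/servicename)
  oracle-connstr:
    kind: oracle
    connectionString: db.example.com:1521/ORCLPDB1
    user: ${ORACLE_USER}
    password: ${ORACLE_PASS}
  # Method 3: TNS alias resolved from tnsnames.ora
  oracle-tns:
    kind: oracle
    tnsAlias: MYDB_ALIAS
    tnsAdmin: /opt/oracle/network/admin  # optional; overrides TNS_ADMIN
    user: ${ORACLE_USER}
    password: ${ORACLE_PASS}
```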
## Example

View File

@@ -42,7 +42,7 @@ reputation for reliability, feature robustness, and performance.
List schemas in a PostgreSQL database.
- [`postgres-database-overview`](../tools/postgres/postgres-database-overview.md)
Fetches the current state of the PostgreSQL server.
Fetches the current state of the PostgreSQL server.
- [`postgres-list-triggers`](../tools/postgres/postgres-list-triggers.md)
List triggers in a PostgreSQL database.

View File

@@ -8,18 +8,22 @@ description: >
## About
[SingleStore][singlestore-docs] is a distributed SQL database built to power intelligent applications. It is both relational and multi-model, enabling developers to easily build and scale applications and workloads.
[SingleStore][singlestore-docs] is a distributed SQL database built to power
intelligent applications. It is both relational and multi-model, enabling
developers to easily build and scale applications and workloads.
SingleStore is built around Universal Storage which combines in-memory rowstore and on-disk columnstore data formats to deliver a single table type that is optimized to handle both transactional and analytical workloads.
SingleStore is built around Universal Storage, which combines in-memory
rowstore and on-disk columnstore data formats to deliver a single table type
that is optimized to handle both transactional and analytical workloads.
[singlestore-docs]: https://docs.singlestore.com/
## Available Tools
- [`singlestore-sql`](../tools/singlestore/singlestore-sql.md)
- [`singlestore-sql`](../tools/singlestore/singlestore-sql.md)
Execute pre-defined prepared SQL queries in SingleStore.
- [`singlestore-execute-sql`](../tools/singlestore/singlestore-execute-sql.md)
- [`singlestore-execute-sql`](../tools/singlestore/singlestore-execute-sql.md)
Run parameterized SQL queries in SingleStore.
## Requirements
@@ -29,7 +33,8 @@ SingleStore is built around Universal Storage which combines in-memory rowstore
This source only uses standard authentication. You will need to [create a
database user][singlestore-user] to log in to the database with.
[singlestore-user]: https://docs.singlestore.com/cloud/reference/sql-reference/security-management-commands/create-user/
[singlestore-user]:
https://docs.singlestore.com/cloud/reference/sql-reference/security-management-commands/create-user/
## Example
@@ -53,7 +58,7 @@ instead of hardcoding your secrets into the configuration file.
## Reference
| **field** | **type** | **required** | **description** |
| ------------ | :------: | :----------: | ----------------------------------------------------------------------------------------------- |
|--------------|:--------:|:------------:|-------------------------------------------------------------------------------------------------|
| kind | string | true | Must be "singlestore". |
| host | string | true | IP address to connect to (e.g. "127.0.0.1"). |
| port | string | true | Port to connect to (e.g. "3306"). |

View File

@@ -8,14 +8,19 @@ aliases: [/resources/tools/alloydb-create-cluster]
## About
The `alloydb-create-cluster` tool creates a new AlloyDB for PostgreSQL cluster in a specified project and location. It is compatible with [alloydb-admin](../../sources/alloydb-admin.md) source.
The `alloydb-create-cluster` tool creates a new AlloyDB for PostgreSQL cluster
in a specified project and location. It is compatible with the
[alloydb-admin](../../sources/alloydb-admin.md) source.
This tool provisions a cluster with a **private IP address** within the specified VPC network.
**Permissions & APIs Required:**
Before using, ensure the following on your GCP project:
1. The [AlloyDB API](https://console.cloud.google.com/apis/library/alloydb.googleapis.com) is enabled.
2. The user or service account executing the tool has one of the following IAM roles:
1. The [AlloyDB
API](https://console.cloud.google.com/apis/library/alloydb.googleapis.com) is
enabled.
2. The user or service account executing the tool has one of the following IAM
roles:
- `roles/alloydb.admin` (the AlloyDB Admin predefined IAM role)
- `roles/owner` (the Owner basic IAM role)
@@ -24,7 +29,7 @@ This tool provisions a cluster with a **private IP address** within the specifie
The tool takes the following input parameters:
| Parameter | Type | Description | Required |
| :--------- | :----- | :------------------------------------------------------------------------------------------------------------------------ | :------- |
|:-----------|:-------|:--------------------------------------------------------------------------------------------------------------------------|:---------|
| `project` | string | The GCP project ID where the cluster will be created. | Yes |
| `cluster` | string | A unique identifier for the new AlloyDB cluster. | Yes |
| `password` | string | A secure password for the initial user. | Yes |
@@ -44,8 +49,8 @@ tools:
## Reference
| **field** | **type** | **required** | **description** | |
| ----------- | :------: | :----------: | ---------------------------------------------------- | - |
| kind | string | true | Must be alloydb-create-cluster. | |
| source | string | true | The name of an `alloydb-admin` source. | |
| description | string | false | Description of the tool that is passed to the agent. | |
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|------------------------------------------------------|
| kind | string | true | Must be alloydb-create-cluster. |
| source | string | true | The name of an `alloydb-admin` source. |
| description | string | false | Description of the tool that is passed to the agent. |

View File

@@ -22,7 +22,6 @@ This tool provisions a new instance with a **public IP address**.
2. The user or service account executing the tool has one of the following IAM
roles:
- `roles/alloydb.admin` (the AlloyDB Admin predefined IAM role)
- `roles/owner` (the Owner basic IAM role)
- `roles/editor` (the Editor basic IAM role)

View File

@@ -8,10 +8,12 @@ aliases: [/resources/tools/alloydb-get-instance]
## About
The `alloydb-get-instance` tool retrieves detailed information for a single, specified AlloyDB instance. It is compatible with [alloydb-admin](../../sources/alloydb-admin.md) source.
The `alloydb-get-instance` tool retrieves detailed information for a single,
specified AlloyDB instance. It is compatible with the
[alloydb-admin](../../sources/alloydb-admin.md) source.
| Parameter | Type | Description | Required |
| :--------- | :----- | :-------------------------------------------------- | :------- |
|:-----------|:-------|:----------------------------------------------------|:---------|
| `project` | string | The GCP project ID to get instance for. | Yes |
| `location` | string | The location of the instance (e.g., 'us-central1'). | Yes |
| `cluster` | string | The ID of the cluster. | Yes |
@@ -30,7 +32,7 @@ tools:
## Reference
| **field** | **type** | **required** | **description** |
| ----------- | :------: | :----------: | ---------------------------------------------------- |
|-------------|:--------:|:------------:|------------------------------------------------------|
| kind | string | true | Must be alloydb-get-instance. |
| source | string | true | The name of an `alloydb-admin` source. |
| description | string | false | Description of the tool that is passed to the agent. |
| description | string | false | Description of the tool that is passed to the agent. |

View File

@@ -34,7 +34,8 @@ database layer.
{{< notice tip >}} AlloyDB AI natural language is currently in gated public
preview. For more information on availability and limitations, please see
[AlloyDB AI natural language overview](https://cloud.google.com/alloydb/docs/ai/natural-language-overview)
[AlloyDB AI natural language
overview](https://cloud.google.com/alloydb/docs/ai/natural-language-overview)
{{< /notice >}}
To enable AlloyDB AI natural language for your AlloyDB cluster, please follow
@@ -46,15 +47,17 @@ context for your application.
As of AlloyDB AI NL v1.0.3+, the signature of `execute_nl_query` has been
updated. Run `SELECT extversion FROM pg_extension WHERE extname =
'alloydb_ai_nl';` to check which version your instance is using.
AlloyDB AI NL v1.0.3+ is required for Toolbox v0.19.0+. Starting with Toolbox v0.19.0, users
who previously used the create_configuration operation for the natural language
configuration must update it. To do so, please drop the existing configuration
and redefine it using the instructions
AlloyDB AI NL v1.0.3+ is required for Toolbox v0.19.0+. Starting with Toolbox
v0.19.0, users who previously used the create_configuration operation for the
natural language configuration must update it. To do so, please drop the
existing configuration and redefine it using the instructions
[here](https://docs.cloud.google.com/alloydb/docs/ai/use-natural-language-generate-sql-queries#create-config).
{{< /notice >}}
[alloydb-ai-nl-overview]: https://cloud.google.com/alloydb/docs/ai/natural-language-overview
[alloydb-ai-gen-nl]: https://cloud.google.com/alloydb/docs/ai/generate-sql-queries-natural-language
[alloydb-ai-nl-overview]:
https://cloud.google.com/alloydb/docs/ai/natural-language-overview
[alloydb-ai-gen-nl]:
https://cloud.google.com/alloydb/docs/ai/generate-sql-queries-natural-language
## Configuration
@@ -84,12 +87,17 @@ Parameters](../#array-parameters) or Bound Parameters to provide secure
access to queries generated using natural language, as these parameters are not
visible to the LLM.
[alloydb-psv]: https://cloud.google.com/alloydb/docs/parameterized-secure-views-overview
[alloydb-psv]:
https://cloud.google.com/alloydb/docs/parameterized-secure-views-overview
{{< notice tip >}} Make sure to enable the `parameterized_views` extension
before running this tool. You can do so by running this command in the AlloyDB
studio:
{{< notice tip >}} Make sure to enable the `parameterized_views` extension before running this tool. You can do so by running this command in the AlloyDB studio:
```sql
CREATE EXTENSION IF NOT EXISTS parameterized_views;
```
{{< /notice >}}
## Example
@@ -112,12 +120,13 @@ tools:
- name: my_google_service
field: email
```
## Reference
| **field** | **type** | **required** | **description** |
|--------------------|:------------------------------------------:|:------------:|--------------------------------------------------------------------------|
| kind | string | true | Must be "alloydb-ai-nl". |
| source | string | true | Name of the AlloyDB source the natural language query should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |
| nlConfig | string | true | The name of the `nl_config` in AlloyDB |
| **field** | **type** | **required** | **description** |
|--------------------|:---------------------------------------:|:------------:|--------------------------------------------------------------------------|
| kind | string | true | Must be "alloydb-ai-nl". |
| source | string | true | Name of the AlloyDB source the natural language query should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |
| nlConfig | string | true | The name of the `nl_config` in AlloyDB |
| nlConfigParameters | [parameters](../#specifying-parameters) | true | List of PSV parameters defined in the `nl_config` |
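Pulling the reference fields together, here is a hedged sketch of a tool entry
(the config and parameter names are illustrative, echoing the auth-bound
example above):
```yaml
tools:
  ask_questions:
    kind: alloydb-ai-nl
    source: my-alloydb-source
    description: Ask questions about the database using natural language.
    nlConfig: my_nl_config
    nlConfigParameters:
      - name: user_email
        type: string
        description: PSV parameter populated from the auth service.
        authServices:
          - name: my_google_service
            field: email
```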

View File

@@ -39,20 +39,27 @@ It's compatible with the following sources:
insights. Can be `'NO_PRUNING'` or `'PRUNE_REDUNDANT_INSIGHTS'`. Defaults to
`'PRUNE_REDUNDANT_INSIGHTS'`.
The behavior of this tool is influenced by the `writeMode` setting on its `bigquery` source:
The behavior of this tool is influenced by the `writeMode` setting on its
`bigquery` source:
- **`allowed` (default) and `blocked`:** These modes do not impose any special restrictions on the `bigquery-analyze-contribution` tool.
- **`protected`:** This mode enables session-based execution. The tool will operate within the same BigQuery session as other
tools using the same source. This allows the `input_data` parameter to be a query that references temporary resources (e.g.,
`TEMP` tables) created within that session.
- **`allowed` (default) and `blocked`:** These modes do not impose any special
restrictions on the `bigquery-analyze-contribution` tool.
- **`protected`:** This mode enables session-based execution. The tool will
operate within the same BigQuery session as other tools using the same source.
This allows the `input_data` parameter to be a query that references temporary
resources (e.g., `TEMP` tables) created within that session.
The tool's behavior is also influenced by the `allowedDatasets` restriction on the `bigquery` source:
- **Without `allowedDatasets` restriction:** The tool can use any table or query for the `input_data` parameter.
- **With `allowedDatasets` restriction:** The tool verifies that the `input_data` parameter only accesses tables within the allowed datasets.
- If `input_data` is a table ID, the tool checks if the table's dataset is in the allowed list.
- If `input_data` is a query, the tool performs a dry run to analyze the query and rejects it if it accesses any table outside the allowed list.
The tool's behavior is also influenced by the `allowedDatasets` restriction on
the `bigquery` source:
- **Without `allowedDatasets` restriction:** The tool can use any table or query
for the `input_data` parameter.
- **With `allowedDatasets` restriction:** The tool verifies that the
`input_data` parameter only accesses tables within the allowed datasets.
- If `input_data` is a table ID, the tool checks if the table's dataset is in
the allowed list.
- If `input_data` is a query, the tool performs a dry run to analyze the query
and rejects it if it accesses any table outside the allowed list.
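To make these two source-level settings concrete, a `bigquery` source that
enables session-based execution and restricts dataset access might be declared
like this (a sketch; project and dataset names are placeholders, and the exact
`allowedDatasets` format is assumed):
```yaml
sources:
  my-bq-source:
    kind: bigquery
    project: my-gcp-project
    writeMode: protected  # session-based execution for TEMP resources
    allowedDatasets:      # input_data may only reference these datasets
      - bqml_tutorial
```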
## Example
@@ -65,6 +72,7 @@ tools:
```
## Sample Prompt
You can prepare a sample table by following
https://cloud.google.com/bigquery/docs/get-contribution-analysis-insights, and
use the following sample prompts to call this tool:
@@ -78,8 +86,8 @@ And use the following sample prompts to call this tool:
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|------------------------------------------------------------|
| kind | string | true | Must be "bigquery-analyze-contribution". |
| source | string | true | Name of the source the tool should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|----------------------------------------------------|
| kind | string | true | Must be "bigquery-analyze-contribution". |
| source | string | true | Name of the source the tool should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |

View File

@@ -37,15 +37,18 @@ It's compatible with the following sources:
conversation history and system instructions for context.
- **`table_references`:** A JSON string of a list of BigQuery tables to use as
context. Each object in the list must contain `projectId`, `datasetId`, and
`tableId`. Example: `'[{"projectId": "my-gcp-project", "datasetId": "my_dataset", "tableId": "my_table"}]'`
`tableId`. Example: `'[{"projectId": "my-gcp-project", "datasetId":
"my_dataset", "tableId": "my_table"}]'`
The tool's behavior regarding these parameters is influenced by the `allowedDatasets`
restriction on the `bigquery` source:
- **Without `allowedDatasets` restriction:** The tool can use tables from any
The tool's behavior regarding these parameters is influenced by the
`allowedDatasets` restriction on the `bigquery` source:
- **Without `allowedDatasets` restriction:** The tool can use tables from any
dataset specified in the `table_references` parameter.
- **With `allowedDatasets` restriction:** Before processing the request, the tool
verifies that every table in `table_references` belongs to a dataset in the allowed
list. If any table is from a dataset that is not in the list, the request is denied.
- **With `allowedDatasets` restriction:** Before processing the request, the
tool verifies that every table in `table_references` belongs to a dataset in
the allowed list. If any table is from a dataset that is not in the list, the
request is denied.
## Example

View File

@@ -16,23 +16,31 @@ It's compatible with the following sources:
- [bigquery](../../sources/bigquery.md)
`bigquery-execute-sql` accepts the following parameters:
- **`sql`** (required): The GoogleSQL statement to execute.
- **`dry_run`** (optional): If set to `true`, the query is validated but not run,
returning information about the execution instead. Defaults to `false`.
The behavior of this tool is influenced by the `writeMode` setting on its `bigquery` source:
- **`sql`** (required): The GoogleSQL statement to execute.
- **`dry_run`** (optional): If set to `true`, the query is validated but not
run, returning information about the execution instead. Defaults to `false`.
The behavior of this tool is influenced by the `writeMode` setting on its
`bigquery` source:
- **`allowed` (default):** All SQL statements are permitted.
- **`blocked`:** Only `SELECT` statements are allowed. Any other type of statement (e.g., `INSERT`, `UPDATE`, `CREATE`) will be rejected.
- **`protected`:** This mode enables session-based execution. `SELECT` statements can be used on all tables, while write operations are allowed only for the session's temporary dataset (e.g., `CREATE TEMP TABLE ...`). This prevents modifications to permanent datasets while allowing stateful, multi-step operations within a secure session.
- **`blocked`:** Only `SELECT` statements are allowed. Any other type of
statement (e.g., `INSERT`, `UPDATE`, `CREATE`) will be rejected.
- **`protected`:** This mode enables session-based execution. `SELECT`
statements can be used on all tables, while write operations are allowed only
for the session's temporary dataset (e.g., `CREATE TEMP TABLE ...`). This
prevents modifications to permanent datasets while allowing stateful,
multi-step operations within a secure session.
The tool's behavior is influenced by the `allowedDatasets` restriction on the
`bigquery` source. Similar to `writeMode`, this setting provides an additional layer of security by controlling which datasets can be accessed:
`bigquery` source. Similar to `writeMode`, this setting provides an additional
layer of security by controlling which datasets can be accessed:
- **Without `allowedDatasets` restriction:** The tool can execute any valid GoogleSQL
query.
- **With `allowedDatasets` restriction:** Before execution, the tool performs a dry run
to analyze the query.
- **Without `allowedDatasets` restriction:** The tool can execute any valid
GoogleSQL query.
- **With `allowedDatasets` restriction:** Before execution, the tool performs a
dry run to analyze the query.
It will reject the query if it attempts to access any table outside the
allowed `datasets` list. To enforce this restriction, the following operations
are also disallowed:
@@ -40,7 +48,8 @@ The tool's behavior is influenced by the `allowedDatasets` restriction on the
- **Unanalyzable operations** where the accessed tables cannot be determined
statically (e.g., `EXECUTE IMMEDIATE`, `CREATE PROCEDURE`, `CALL`).
> **Note:** This tool is intended for developer assistant workflows with human-in-the-loop and shouldn't be used for production agents.
> **Note:** This tool is intended for developer assistant workflows with
> human-in-the-loop and shouldn't be used for production agents.
## Example
@@ -54,8 +63,8 @@ tools:
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:------------------------------------------:|:------------:|--------------------------------------------------------------------------------------------------|
| kind | string | true | Must be "bigquery-execute-sql". |
| source | string | true | Name of the source the SQL should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|----------------------------------------------------|
| kind | string | true | Must be "bigquery-execute-sql". |
| source | string | true | Name of the source the SQL should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |
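Putting the reference together, a minimal tool entry might look like this (the
source name is a placeholder):
```yaml
tools:
  execute_sql_tool:
    kind: bigquery-execute-sql
    source: my-bq-source
    description: Use this tool to execute a BigQuery SQL statement.
```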

View File

@@ -33,19 +33,27 @@ query based on the provided parameters:
- **horizon** (integer, optional): The number of future time steps you want to
predict. It defaults to 10 if not specified.
The behavior of this tool is influenced by the `writeMode` setting on its `bigquery` source:
The behavior of this tool is influenced by the `writeMode` setting on its
`bigquery` source:
- **`allowed` (default) and `blocked`:** These modes do not impose any special restrictions on the `bigquery-forecast` tool.
- **`protected`:** This mode enables session-based execution. The tool will operate within the same BigQuery session as other
tools using the same source. This allows the `history_data` parameter to be a query that references temporary resources (e.g.,
`TEMP` tables) created within that session.
- **`allowed` (default) and `blocked`:** These modes do not impose any special
restrictions on the `bigquery-forecast` tool.
- **`protected`:** This mode enables session-based execution. The tool will
operate within the same BigQuery session as other tools using the same source.
This allows the `history_data` parameter to be a query that references
temporary resources (e.g., `TEMP` tables) created within that session.
The tool's behavior is also influenced by the `allowedDatasets` restriction on the `bigquery` source:
The tool's behavior is also influenced by the `allowedDatasets` restriction on
the `bigquery` source:
- **Without `allowedDatasets` restriction:** The tool can use any table or query for the `history_data` parameter.
- **With `allowedDatasets` restriction:** The tool verifies that the `history_data` parameter only accesses tables within the allowed datasets.
- If `history_data` is a table ID, the tool checks if the table's dataset is in the allowed list.
- If `history_data` is a query, the tool performs a dry run to analyze the query and rejects it if it accesses any table outside the allowed list.
- **Without `allowedDatasets` restriction:** The tool can use any table or query
for the `history_data` parameter.
- **With `allowedDatasets` restriction:** The tool verifies that the
`history_data` parameter only accesses tables within the allowed datasets.
- If `history_data` is a table ID, the tool checks if the table's dataset is
in the allowed list.
- If `history_data` is a query, the tool performs a dry run to analyze the
query and rejects it if it accesses any table outside the allowed list.
## Example
@@ -58,11 +66,13 @@ tools:
```
## Sample Prompt
You can use the following sample prompts to call this tool:
- Can you forecast the history time series data in bigquery table `bqml_tutorial.google_analytic`? Use project_id `myproject`.
- What are the future `total_visits` in bigquery table `bqml_tutorial.google_analytic`?
- Can you forecast the history time series data in bigquery table
`bqml_tutorial.google_analytic`? Use project_id `myproject`.
- What are the future `total_visits` in bigquery table
`bqml_tutorial.google_analytic`?
## Reference

View File

@@ -16,12 +16,14 @@ It's compatible with the following sources:
- [bigquery](../../sources/bigquery.md)
`bigquery-get-dataset-info` accepts the following parameters:
- **`dataset`** (required): Specifies the dataset for which to retrieve metadata.
- **`project`** (optional): Defines the Google Cloud project ID. If not provided,
the tool defaults to the project from the source configuration.
The tool's behavior regarding these parameters is influenced by the
`allowedDatasets` restriction on the `bigquery` source:
- **Without `allowedDatasets` restriction:** The tool can retrieve metadata for
any dataset specified by the `dataset` and `project` parameters.
- **With `allowedDatasets` restriction:** Before retrieving metadata, the tool

View File

@@ -16,6 +16,7 @@ It's compatible with the following sources:
- [bigquery](../../sources/bigquery.md)
`bigquery-get-table-info` accepts the following parameters:
- **`table`** (required): The name of the table for which to retrieve metadata.
- **`dataset`** (required): The dataset containing the specified table.
- **`project`** (optional): The Google Cloud project ID. If not provided, the
@@ -23,6 +24,7 @@ It's compatible with the following sources:
The tool's behavior regarding these parameters is influenced by the
`allowedDatasets` restriction on the `bigquery` source:
- **Without `allowedDatasets` restriction:** The tool can retrieve metadata for
any table specified by the `table`, `dataset`, and `project` parameters.
- **With `allowedDatasets` restriction:** Before retrieving metadata, the tool

View File

@@ -16,11 +16,13 @@ It's compatible with the following sources:
- [bigquery](../../sources/bigquery.md)
`bigquery-list-dataset-ids` accepts the following parameter:
- **`project`** (optional): Defines the Google Cloud project ID. If not provided,
the tool defaults to the project from the source configuration.
The tool's behavior regarding this parameter is influenced by the
`allowedDatasets` restriction on the `bigquery` source:
- **Without `allowedDatasets` restriction:** The tool can list datasets from any
project specified by the `project` parameter.
- **With `allowedDatasets` restriction:** The tool directly returns the

View File

@@ -15,10 +15,16 @@ the following sources:
- [bigquery](../../sources/bigquery.md)
The behavior of this tool is influenced by the `writeMode` setting on its `bigquery` source:
The behavior of this tool is influenced by the `writeMode` setting on its
`bigquery` source:
- **`allowed` (default) and `blocked`:** These modes do not impose any restrictions on the `bigquery-sql` tool. The pre-defined SQL statement will be executed as-is.
- **`protected`:** This mode enables session-based execution. The tool will operate within the same BigQuery session as other tools using the same source, allowing it to interact with temporary resources like `TEMP` tables created within that session.
- **`allowed` (default) and `blocked`:** These modes do not impose any
restrictions on the `bigquery-sql` tool. The pre-defined SQL statement will be
executed as-is.
- **`protected`:** This mode enables session-based execution. The tool will
operate within the same BigQuery session as other tools using the same source,
allowing it to interact with temporary resources like `TEMP` tables created
within that session.
### GoogleSQL
@@ -28,7 +34,8 @@ parameters can be inserted into the query. BigQuery supports both named paramete
(e.g., `@name`) and positional parameters (`?`), but they cannot be mixed in the
same query.
[bigquery-googlesql]: https://cloud.google.com/bigquery/docs/reference/standard-sql/
[bigquery-googlesql]:
https://cloud.google.com/bigquery/docs/reference/standard-sql/
## Example
@@ -100,11 +107,11 @@ tools:
## Reference
| **field** | **type** | **required** | **description** |
|--------------------|:------------------------------------------------:|:------------:|--------------------------------------------------------------------------------------------------------------------------------------------|
| kind | string | true | Must be "bigquery-sql". |
| source | string | true | Name of the source the GoogleSQL should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |
| statement | string | true | The GoogleSQL statement to execute. |
| parameters | [parameters](../#specifying-parameters) | false | List of [parameters](../#specifying-parameters) that will be inserted into the SQL statement. |
| **field** | **type** | **required** | **description** |
|--------------------|:---------------------------------------------:|:------------:|-----------------------------------------------------------------------------------------------------------------------------------------|
| kind | string | true | Must be "bigquery-sql". |
| source | string | true | Name of the source the GoogleSQL should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |
| statement | string | true | The GoogleSQL statement to execute. |
| parameters | [parameters](../#specifying-parameters) | false | List of [parameters](../#specifying-parameters) that will be inserted into the SQL statement. |
| templateParameters | [templateParameters](../#template-parameters) | false | List of [templateParameters](../#template-parameters) that will be inserted into the SQL statement before executing prepared statement. |

View File

@@ -100,13 +100,13 @@ tools:
## Reference
| **field** | **type** | **required** | **description** |
|--------------------|:------------------------------------------------:|:------------:|--------------------------------------------------------------------------------------------------------------------------------------------|
| kind | string | true | Must be "bigtable-sql". |
| source | string | true | Name of the source the SQL should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |
| statement | string | true | SQL statement to execute on. |
| parameters | [parameters](../#specifying-parameters) | false | List of [parameters](../#specifying-parameters) that will be inserted into the SQL statement. |
| **field** | **type** | **required** | **description** |
|--------------------|:--------------------------------------------:|:------------:|----------------------------------------------------------------------------------------------------------------------------------------|
| kind | string | true | Must be "bigtable-sql". |
| source | string | true | Name of the source the SQL should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |
| statement | string | true | SQL statement to execute on. |
| parameters | [parameters](../#specifying-parameters) | false | List of [parameters](../#specifying-parameters) that will be inserted into the SQL statement. |
| templateParameters | [templateParameters](..#template-parameters) | false | List of [templateParameters](..#template-parameters) that will be inserted into the SQL statement before executing prepared statement. |
## Tips
@@ -119,6 +119,8 @@ tools:
workaround would be to leverage Bigtable [Logical
Views][bigtable-logical-view] to rename the columns.
[bigtable-studio]: https://cloud.google.com/bigtable/docs/manage-data-using-console
[bigtable-logical-view]: https://cloud.google.com/bigtable/docs/create-manage-logical-views
[bigtable-studio]:
https://cloud.google.com/bigtable/docs/manage-data-using-console
[bigtable-logical-view]:
https://cloud.google.com/bigtable/docs/create-manage-logical-views
[bigtable-querybuilder]: https://cloud.google.com/bigtable/docs/query-builder

View File

@@ -16,17 +16,19 @@ database. It's compatible with any of the following sources:
- [cassandra](../../sources/cassandra.md)
The specified CQL statement is executed as a [prepared statement][cassandra-prepare],
and expects parameters in the CQL query to be in the form of placeholders `?`.
The specified CQL statement is executed as a [prepared
statement][cassandra-prepare], and expects parameters in the CQL query to be in
the form of placeholders `?`.
[cassandra-prepare]: https://docs.datastax.com/en/datastax-drivers/developing/prepared-statements.html
[cassandra-prepare]:
https://docs.datastax.com/en/datastax-drivers/developing/prepared-statements.html
## Example
> **Note:** This tool uses parameterized queries to prevent CQL injections.
> Query parameters can be used as substitutes for arbitrary expressions.
> Parameters cannot be used as substitutes for keyspaces, table names, column names,
> or other parts of the query.
> Parameters cannot be used as substitutes for keyspaces, table names, column
> names, or other parts of the query.
```yaml
tools:
@@ -85,12 +87,12 @@ tools:
## Reference
| **field** | **type** | **required** | **description** |
|--------------------|:------------------------------------------------:|:------------:|--------------------------------------------------------------------------------------------------------------------------------------------|
| kind | string | true | Must be "cassandra-cql". |
| source | string | true | Name of the source the CQL should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |
| statement | string | true | CQL statement to execute. |
| authRequired | []string | false | List of authentication requirements for the source. |
| parameters | [parameters](../#specifying-parameters) | false | List of [parameters](../#specifying-parameters) that will be inserted into the CQL statement. |
| **field** | **type** | **required** | **description** |
|--------------------|:---------------------------------------------:|:------------:|-----------------------------------------------------------------------------------------------------------------------------------------|
| kind | string | true | Must be "cassandra-cql". |
| source | string | true | Name of the source the CQL should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |
| statement | string | true | CQL statement to execute. |
| authRequired | []string | false | List of authentication requirements for the source. |
| parameters | [parameters](../#specifying-parameters) | false | List of [parameters](../#specifying-parameters) that will be inserted into the CQL statement. |
| templateParameters | [templateParameters](../#template-parameters) | false | List of [templateParameters](../#template-parameters) that will be inserted into the CQL statement before executing prepared statement. |
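As a sketch of the prepared-statement style described above, a tool binds one
`?` placeholder per parameter, in order (keyspace, table, and parameter names
are illustrative):
```yaml
tools:
  search_users_by_name:
    kind: cassandra-cql
    source: my-cassandra-source
    description: Search for user profiles by name.
    statement: SELECT * FROM users.profiles WHERE name = ? ALLOW FILTERING;
    parameters:
      - name: name
        type: string
        description: The name to search for.
```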

View File

@@ -44,4 +44,4 @@ tools:
|-------------|:--------:|:------------:|-------------------------------------------------------|
| kind | string | true | Must be "clickhouse-execute-sql". |
| source | string | true | Name of the ClickHouse source to execute SQL against. |
| description | string | true | Description of the tool that is passed to the LLM. |
| description | string | true | Description of the tool that is passed to the LLM. |

View File

@@ -10,11 +10,12 @@ aliases:
## About
A `clickhouse-list-tables` tool lists all available tables in a specified
ClickHouse database. It's compatible with the [clickhouse](../../sources/clickhouse.md) source.
A `clickhouse-list-tables` tool lists all available tables in a specified
ClickHouse database. It's compatible with the
[clickhouse](../../sources/clickhouse.md) source.
This tool executes the `SHOW TABLES FROM <database>` command and returns a list
of all tables in the specified database that are accessible to the configured
This tool executes the `SHOW TABLES FROM <database>` command and returns a list
of all tables in the specified database that are accessible to the configured
user, making it useful for schema exploration and table discovery tasks.
## Example
@@ -29,17 +30,19 @@ tools:
## Parameters
| **parameter** | **type** | **required** | **description** |
|---------------|:--------:|:------------:|---------------------------------------------|
| database | string | true | The database to list tables from. |
| **parameter** | **type** | **required** | **description** |
|---------------|:--------:|:------------:|-----------------------------------|
| database | string | true | The database to list tables from. |
## Return Value
The tool returns an array of objects, where each object contains:
- `name`: The name of the table
- `database`: The database the table belongs to
Example response:
```json
[
{"name": "users", "database": "analytics"},
@@ -51,10 +54,10 @@ Example response:
## Reference
| **field** | **type** | **required** | **description** |
|--------------------|:------------------:|:------------:|-----------------------------------------------------------|
| kind | string | true | Must be "clickhouse-list-tables". |
| source | string | true | Name of the ClickHouse source to list tables from. |
| description | string | true | Description of the tool that is passed to the LLM. |
| authRequired | array of string | false | Authentication services required to use this tool. |
| parameters | array of Parameter | false | Parameters for the tool (see Parameters section above). |
| **field** | **type** | **required** | **description** |
|--------------|:------------------:|:------------:|---------------------------------------------------------|
| kind | string | true | Must be "clickhouse-list-tables". |
| source | string | true | Name of the ClickHouse source to list tables from. |
| description | string | true | Description of the tool that is passed to the LLM. |
| authRequired | array of string | false | Authentication services required to use this tool. |
| parameters | array of Parameter | false | Parameters for the tool (see Parameters section above). |
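A minimal configuration matching the reference above might look like this (the
source name is a placeholder):
```yaml
tools:
  list_tables:
    kind: clickhouse-list-tables
    source: my-clickhouse-source
    description: List all tables in a ClickHouse database.
```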

View File

@@ -14,8 +14,8 @@ A `clickhouse-sql` tool executes SQL queries as prepared statements against a
ClickHouse database. It's compatible with the
[clickhouse](../../sources/clickhouse.md) source.
This tool supports both template parameters (for SQL statement customization)
and regular parameters (for prepared statement values), providing flexible
This tool supports both template parameters (for SQL statement customization)
and regular parameters (for prepared statement values), providing flexible
query execution capabilities.
## Example

View File

@@ -10,13 +10,15 @@ aliases:
## About
A `cloud-healthcare-fhir-fetch-page` tool fetches a page of FHIR resources from a given URL. It's
compatible with the following sources:
A `cloud-healthcare-fhir-fetch-page` tool fetches a page of FHIR resources from
a given URL. It's compatible with the following sources:
- [cloud-healthcare](../../sources/cloud-healthcare.md)
`cloud-healthcare-fhir-fetch-page` can be used for pagination when a previous tool call (like
`cloud-healthcare-fhir-patient-search` or `cloud-healthcare-fhir-patient-everything`) returns a 'next' link in the response bundle.
`cloud-healthcare-fhir-fetch-page` can be used for pagination when a previous
tool call (like `cloud-healthcare-fhir-patient-search` or
`cloud-healthcare-fhir-patient-everything`) returns a 'next' link in the
response bundle.
## Example

View File

@@ -10,14 +10,14 @@ aliases:
## About
A `cloud-healthcare-fhir-patient-everything` tool retrieves resources related to a given patient
from a FHIR store. It's compatible with the following sources:
A `cloud-healthcare-fhir-patient-everything` tool retrieves resources related to
a given patient from a FHIR store. It's compatible with the following sources:
- [cloud-healthcare](../../sources/cloud-healthcare.md)
`cloud-healthcare-fhir-patient-everything` returns all the information available for a given
patient ID. It can be configured to only return certain resource types, or only
resources that have been updated after a given time.
`cloud-healthcare-fhir-patient-everything` returns all the information available
for a given patient ID. It can be configured to only return certain resource
types, or only resources that have been updated after a given time.
## Example
@@ -46,4 +46,5 @@ tools:
| sinceFilter | string | false | If provided, only resources updated after this time are returned. The time uses the format YYYY-MM-DDThh:mm:ss.sss+zz:zz. The time must be specified to the second and include a time zone. For example, 2015-02-07T13:28:17.239+02:00 or 2017-01-01T00:00:00Z. |
| storeID | string | true* | The FHIR store ID to search in. |
*If the `allowedFHIRStores` in the source has length 1, then the `storeID` parameter is not needed.
*If the `allowedFHIRStores` in the source has length 1, then the `storeID`
parameter is not needed.

View File

@@ -10,12 +10,13 @@ aliases:
## About
A `cloud-healthcare-fhir-patient-search` tool searches for patients in a FHIR store based on a
set of criteria. It's compatible with the following sources:
A `cloud-healthcare-fhir-patient-search` tool searches for patients in a FHIR
store based on a set of criteria. It's compatible with the following sources:
- [cloud-healthcare](../../sources/cloud-healthcare.md)
`cloud-healthcare-fhir-patient-search` returns a list of patients that match the given criteria.
`cloud-healthcare-fhir-patient-search` returns a list of patients that match the
given criteria.
## Example
@@ -60,4 +61,5 @@ tools:
| summary | boolean | false | Requests the server to return a subset of the resource. True by default. |
| storeID | string | true* | The FHIR store ID to search in. |
*If the `allowedFHIRStores` in the source has length 1, then the `storeID` parameter is not needed.
*If the `allowedFHIRStores` in the source has length 1, then the `storeID`
parameter is not needed.

View File

@@ -10,8 +10,8 @@ aliases:
## About
A `cloud-healthcare-get-dicom-store-metrics` tool retrieves metrics for a DICOM store. It's
compatible with the following sources:
A `cloud-healthcare-get-dicom-store-metrics` tool retrieves metrics for a DICOM
store. It's compatible with the following sources:
- [cloud-healthcare](../../sources/cloud-healthcare.md)
@@ -41,4 +41,5 @@ tools:
|-----------|:--------:|:------------:|----------------------------------------|
| storeID | string | true* | The DICOM store ID to get metrics for. |
*If the `allowedDICOMStores` in the source has length 1, then the `storeID` parameter is not needed.
*If the `allowedDICOMStores` in the source has length 1, then the `storeID`
parameter is not needed.

View File

@@ -11,12 +11,14 @@ aliases:
## About
A `cloud-healthcare-get-fhir-resource` tool retrieves a specific FHIR resource from a FHIR store.
A `cloud-healthcare-get-fhir-resource` tool retrieves a specific FHIR resource
from a FHIR store.
It's compatible with the following sources:
- [cloud-healthcare](../../sources/cloud-healthcare.md)
`cloud-healthcare-get-fhir-resource` returns a single FHIR resource, identified by its type and ID.
`cloud-healthcare-get-fhir-resource` returns a single FHIR resource, identified
by its type and ID.
## Example
@@ -44,4 +46,5 @@ tools:
| resourceID | string | true | The ID of the FHIR resource to retrieve. |
| storeID | string | true* | The FHIR store ID to retrieve the resource from. |
*If the `allowedFHIRStores` in the source has length 1, then the `storeID` parameter is not needed.
*If the `allowedFHIRStores` in the source has length 1, then the `storeID`
parameter is not needed.

View File

@@ -10,13 +10,14 @@ aliases:
## About
A `cloud-healthcare-list-dicom-stores` lists the available DICOM stores in the healthcare dataset.
A `cloud-healthcare-list-dicom-stores` lists the available DICOM stores in the
healthcare dataset.
It's compatible with the following sources:
- [cloud-healthcare](../../sources/cloud-healthcare.md)
`cloud-healthcare-list-dicom-stores` returns the details of the available DICOM stores in the
dataset of the healthcare source. It takes no extra parameters.
`cloud-healthcare-list-dicom-stores` returns the details of the available DICOM
stores in the dataset of the healthcare source. It takes no extra parameters.
## Example

View File

@@ -10,13 +10,14 @@ aliases:
## About
A `cloud-healthcare-list-fhir-stores` lists the available FHIR stores in the healthcare dataset.
A `cloud-healthcare-list-fhir-stores` lists the available FHIR stores in the
healthcare dataset.
It's compatible with the following sources:
- [cloud-healthcare](../../sources/cloud-healthcare.md)
`cloud-healthcare-list-fhir-stores` returns the details of the available FHIR stores in the
dataset of the healthcare source. It takes no extra parameters.
`cloud-healthcare-list-fhir-stores` returns the details of the available FHIR
stores in the dataset of the healthcare source. It takes no extra parameters.
## Example

View File

@@ -10,12 +10,14 @@ aliases:
## About
A `cloud-healthcare-retrieve-rendered-dicom-instance` tool retrieves a rendered DICOM instance from a DICOM store.
A `cloud-healthcare-retrieve-rendered-dicom-instance` tool retrieves a rendered
DICOM instance from a DICOM store.
It's compatible with the following sources:
- [cloud-healthcare](../../sources/cloud-healthcare.md)
`cloud-healthcare-retrieve-rendered-dicom-instance` returns a base64 encoded string of the image in JPEG format.
`cloud-healthcare-retrieve-rendered-dicom-instance` returns a base64 encoded
string of the image in JPEG format.
## Example
@@ -45,4 +47,5 @@ tools:
| FrameNumber | integer | false | The frame number to retrieve (1-based). Only applicable to multi-frame instances. Defaults to 1. |
| storeID | string | true* | The DICOM store ID to retrieve from. |
*If the `allowedDICOMStores` in the source has length 1, then the `storeID` parameter is not needed.
*If the `allowedDICOMStores` in the source has length 1, then the `storeID`
parameter is not needed.

View File

@@ -10,12 +10,14 @@ aliases:
## About
A `cloud-healthcare-search-dicom-instances` tool searches for DICOM instances in a DICOM store based on a
set of criteria. It's compatible with the following sources:
A `cloud-healthcare-search-dicom-instances` tool searches for DICOM instances in
a DICOM store based on a set of criteria. It's compatible with the following
sources:
- [cloud-healthcare](../../sources/cloud-healthcare.md)
`search-dicom-instances` returns a list of DICOM instances that match the given criteria.
`cloud-healthcare-search-dicom-instances` returns a list of DICOM instances
that match the given criteria.
## Example
@@ -52,4 +54,5 @@ tools:
| includefield | []string | false | List of attributeIDs to include in the output, such as DICOM tag IDs or keywords. Set to `["all"]` to return all available tags. |
| storeID | string | true* | The DICOM store ID to search in. |
*If the `allowedDICOMStores` in the source has length 1, then the `storeID` parameter is not needed.
*If the `allowedDICOMStores` in the source has length 1, then the `storeID`
parameter is not needed.

View File

@@ -36,7 +36,7 @@ project:
- **Ad-hoc analysis:** Quickly investigate performance issues by executing
direct promql queries for a database instance.
- **Prebuilt Configs:** Use the already added prebuilt tools mentioned in
prebuilt-tools.md to query the databases system/query level metrics.
prebuilt-tools.md to query the database's system- and query-level metrics.
Here are some common use cases for the `cloud-monitoring-query-prometheus` tool:
@@ -54,7 +54,6 @@ Here are some common use cases for the `cloud-monitoring-query-prometheus` tool:
Here are some examples of how to use the `cloud-monitoring-query-prometheus`
tool.
```yaml
tools:
get_wait_time_metrics:
@@ -68,6 +67,7 @@ tools:
```
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|------------------------------------------------------|
| kind | string | true | Must be cloud-monitoring-query-prometheus. |

View File

@@ -11,7 +11,6 @@ long-running Cloud SQL operation to complete. It does this by polling the Cloud
SQL Admin API operation status endpoint until the operation is finished, using
exponential backoff.
## Example
```yaml

View File

@@ -89,12 +89,12 @@ tools:
## Reference
| **field** | **type** | **required** | **description** |
|--------------------|:------------------------------------------------:|:------------:|--------------------------------------------------------------------------------------------------------------------------------------------|
| kind | string | true | Must be "couchbase-sql". |
| source | string | true | Name of the source the SQL query should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |
| statement | string | true | SQL statement to execute |
| parameters | [parameters](../#specifying-parameters) | false | List of [parameters](../#specifying-parameters) that will be used with the SQL statement. |
| **field** | **type** | **required** | **description** |
|--------------------|:--------------------------------------------:|:------------:|----------------------------------------------------------------------------------------------------------------------------------------|
| kind | string | true | Must be "couchbase-sql". |
| source | string | true | Name of the source the SQL query should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |
| statement | string | true | SQL statement to execute |
| parameters | [parameters](../#specifying-parameters) | false | List of [parameters](../#specifying-parameters) that will be used with the SQL statement. |
| templateParameters | [templateParameters](..#template-parameters) | false | List of [templateParameters](..#template-parameters) that will be inserted into the SQL statement before executing prepared statement. |
| authRequired | array[string] | false | List of auth services that are required to use this tool. |
| authRequired | array[string] | false | List of auth services that are required to use this tool. |

View File

@@ -10,21 +10,26 @@ aliases:
## About
A `dataform-compile-local` tool runs the `dataform compile` command on a local Dataform project.
A `dataform-compile-local` tool runs the `dataform compile` command on a local
Dataform project.
It is a standalone tool and **is not** compatible with any sources.
At invocation time, the tool executes `dataform compile --json` in the specified project directory and returns the resulting JSON object from the CLI.
At invocation time, the tool executes `dataform compile --json` in the specified
project directory and returns the resulting JSON object from the CLI.
`dataform-compile-local` takes the following parameter:
- `project_dir` (string): The absolute or relative path to the local Dataform project directory. The server process must have read access to this path.
- `project_dir` (string): The absolute or relative path to the local Dataform
project directory. The server process must have read access to this path.
## Requirements
### Dataform CLI
This tool executes the `dataform` command-line interface (CLI) via a system call. You must have the **`dataform` CLI** installed and available in the server's system `PATH`.
This tool executes the `dataform` command-line interface (CLI) via a system
call. You must have the **`dataform` CLI** installed and available in the
server's system `PATH`.
You can typically install the CLI via `npm`:
@@ -32,7 +37,9 @@ You can typically install the CLI via `npm`:
npm install -g @dataform/cli
```
See the [official Dataform documentation](https://www.google.com/search?q=https://cloud.google.com/dataform/docs/install-dataform-cli) for more details.
See the [official Dataform
documentation](https://www.google.com/search?q=https://cloud.google.com/dataform/docs/install-dataform-cli)
for more details.
## Example
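
A minimal sketch; because the tool is standalone, no `source` field is needed,
and `project_dir` is passed at invocation time:

```yaml
tools:
  compile_dataform:
    kind: dataform-compile-local
    description: Compile a local Dataform project and return the compiled JSON.
```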
@@ -45,7 +52,7 @@ tools:
## Reference
| **field** | **type** | **required** | **description** |
| :---- | :---- | :---- | :---- |
| kind | string | true | Must be "dataform-compile-local". |
| description | string | true | Description of the tool that is passed to the LLM. |
| **field** | **type** | **required** | **description** |
|:------------|:---------|:-------------|:---------------------------------------------------|
| kind | string | true | Must be "dataform-compile-local". |
| description | string | true | Description of the tool that is passed to the LLM. |

View File

@@ -13,7 +13,9 @@ Execute ES|QL queries.
This tool allows you to execute ES|QL queries against your Elasticsearch
cluster. You can use this to perform complex searches and aggregations.
See the [official documentation](https://www.elastic.co/docs/reference/query-languages/esql/esql-getting-started) for more information.
See the [official
documentation](https://www.elastic.co/docs/reference/query-languages/esql/esql-getting-started)
for more information.
## Example
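
A rough sketch, assuming the tool kind is `elasticsearch-esql` and a source
named `my-elasticsearch`; the index pattern and query text are illustrative:

```yaml
tools:
  count_error_logs:
    kind: elasticsearch-esql
    source: my-elasticsearch   # assumed Elasticsearch source name
    description: Count error-level log entries per service.
    query: FROM logs-* | WHERE level == "error" | STATS count = COUNT(*) BY service
    format: json
    timeout: 60
```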
@@ -36,10 +38,9 @@ tools:
## Parameters
| **name** | **type** | **required** | **description** |
|------------|:--------:|:------------:|-----------------------------------------------------------------------------------------------------------------------------------------------------|
| query | string | false | The ES\|QL query to run. Can also be passed by parameters. |
| format | string | false | The format of the query. Default is json. Valid values are csv, json, tsv, txt, yaml, cbor, smile, or arrow. |
| timeout | integer | false | The timeout for the query in seconds. Default is 60 (1 minute). |
| **name** | **type** | **required** | **description** |
|------------|:---------------------------------------:|:------------:|-----------------------------------------------------------------------------------------------------------------------------------------------------|
| query | string | false | The ES\|QL query to run. Can also be passed by parameters. |
| format | string | false | The format of the query. Default is json. Valid values are csv, json, tsv, txt, yaml, cbor, smile, or arrow. |
| timeout | integer | false | The timeout for the query in seconds. Default is 60 (1 minute). |
| parameters | [parameters](../#specifying-parameters) | false | List of [parameters](../#specifying-parameters) that will be used with the ES\|QL query.<br/>Only supports “string”, “integer”, “float”, “boolean”. |

View File

@@ -18,8 +18,8 @@ database. It's compatible with the following source:
The specified SQL statement is executed as a [prepared statement][fb-prepare],
and supports both positional parameters (`?`) and named parameters (`:param_name`).
Parameters will be inserted according to their position or name. If template
parameters are included, they will be resolved before the execution of the
Parameters will be inserted according to their position or name. If template
parameters are included, they will be resolved before the execution of the
prepared statement.
[fb-prepare]: https://firebirdsql.org/refdocs/langrefupd25-psql-execstat.html
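
As a minimal sketch of the above using positional placeholders (the source name
`my-firebird` and the `USERS` table are assumptions):

```yaml
tools:
  get_user:
    kind: firebird-sql
    source: my-firebird   # assumed Firebird source name
    description: Look up a user by ID.
    statement: SELECT ID, NAME FROM USERS WHERE ID = ?
    parameters:
      - name: id
        type: integer
        description: ID of the user to fetch.
```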
@@ -125,11 +125,11 @@ tools:
## Reference
| **field** | **type** | **required** | **description** |
|--------------------|:------------------------------------------------:|:------------:|--------------------------------------------------------------------------------------------------------------------------------------------|
| kind | string | true | Must be "firebird-sql". |
| source | string | true | Name of the source the SQL should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |
| statement | string | true | SQL statement to execute on. |
| **field** | **type** | **required** | **description** |
|--------------------|:---------------------------------------------:|:------------:|-----------------------------------------------------------------------------------------------------------------------------------------|
| kind | string | true | Must be "firebird-sql". |
| source | string | true | Name of the source the SQL should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |
| statement | string | true | SQL statement to execute on. |
| parameters | [parameters](../#specifying-parameters) | false | List of [parameters](../#specifying-parameters) that will be inserted into the SQL statement. |
| templateParameters | [templateParameters](../#template-parameters) | false | List of [templateParameters](../#template-parameters) that will be inserted into the SQL statement before executing prepared statement. |

View File

@@ -38,6 +38,7 @@ The tool requires Firestore's native JSON format for document data. Each field
must be wrapped with its type indicator:
### Basic Types
- **String**: `{"stringValue": "your string"}`
- **Integer**: `{"integerValue": "123"}` or `{"integerValue": 123}`
- **Double**: `{"doubleValue": 123.45}`
@@ -47,6 +48,7 @@ must be wrapped with its type indicator:
- **Timestamp**: `{"timestampValue": "2025-01-07T10:00:00Z"}` (RFC3339 format)
### Complex Types
- **GeoPoint**: `{"geoPointValue": {"latitude": 34.052235, "longitude": -118.243683}}`
- **Array**: `{"arrayValue": {"values": [{"stringValue": "item1"}, {"integerValue": "2"}]}}`
- **Map**: `{"mapValue": {"fields": {"key1": {"stringValue": "value1"}, "key2": {"booleanValue": true}}}}`
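
Putting these indicators together, a hypothetical `documentData` value for a
single document might look like this sketch (all field names and values are
illustrative):

```json
{
  "name": {"stringValue": "Ada Lovelace"},
  "age": {"integerValue": "36"},
  "active": {"booleanValue": true},
  "location": {"geoPointValue": {"latitude": 51.5074, "longitude": -0.1278}},
  "tags": {"arrayValue": {"values": [{"stringValue": "math"}, {"stringValue": "computing"}]}},
  "profile": {"mapValue": {"fields": {"verified": {"booleanValue": true}}}}
}
```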
@@ -65,6 +67,7 @@ tools:
```
Usage:
```json
{
"collectionPath": "companies",

View File

@@ -74,11 +74,11 @@ deleted. To delete a field, include it in the `updateMask` but omit it from
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------------:|:------------:|----------------------------------------------------------|
| kind | string | true | Must be "firestore-update-document". |
| source | string | true | Name of the Firestore source to update documents in. |
| description | string | true | Description of the tool that is passed to the LLM. |
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|------------------------------------------------------|
| kind | string | true | Must be "firestore-update-document". |
| source | string | true | Name of the Firestore source to update documents in. |
| description | string | true | Description of the tool that is passed to the LLM. |
## Examples
@@ -91,7 +91,9 @@ tools:
source: my-firestore
description: Update a user document
```
Usage:
```json
{
"documentPath": "users/user123",
@@ -140,7 +142,8 @@ Usage:
### Update with Field Deletion
To delete fields, include them in the `updateMask` but omit them from `documentData`:
To delete fields, include them in the `updateMask` but omit them from
`documentData`:
```json
{
@@ -158,7 +161,8 @@ To delete fields, include them in the `updateMask` but omit them from `documentD
In this example:
- `name` will be updated to "John Smith"
- `temporaryField` and `obsoleteData` will be deleted from the document (they are in the mask but not in the data)
- `temporaryField` and `obsoleteData` will be deleted from the document (they
are in the mask but not in the data)
### Complex Update with Nested Data
@@ -317,28 +321,43 @@ Common errors include:
## Best Practices
1. **Use update masks for precision**: When you only need to update specific fields, use the `updateMask` parameter to avoid unintended changes
2. **Always use typed values**: Every field must be wrapped with its appropriate type indicator (e.g., `{"stringValue": "text"}`)
3. **Integer values can be strings**: The tool accepts integer values as strings (e.g., `{"integerValue": "1500"}`)
4. **Use returnData sparingly**: Only set to true when you need to verify the exact data after the update
5. **Validate data before sending**: Ensure your data matches Firestore's native JSON format
1. **Use update masks for precision**: When you only need to update specific
fields, use the `updateMask` parameter to avoid unintended changes
2. **Always use typed values**: Every field must be wrapped with its appropriate
type indicator (e.g., `{"stringValue": "text"}`)
3. **Integer values can be strings**: The tool accepts integer values as strings
(e.g., `{"integerValue": "1500"}`)
4. **Use returnData sparingly**: Only set to true when you need to verify the
exact data after the update
5. **Validate data before sending**: Ensure your data matches Firestore's native
JSON format
6. **Handle timestamps properly**: Use RFC3339 format for timestamp strings
7. **Base64 encode binary data**: Binary data must be base64 encoded in the `bytesValue` field
8. **Consider security rules**: Ensure your Firestore security rules allow document updates
9. **Delete fields using update mask**: To delete fields, include them in the `updateMask` but omit them from `documentData`
10. **Test with non-production data first**: Always test your updates on non-critical documents first
7. **Base64 encode binary data**: Binary data must be base64 encoded in the
`bytesValue` field
8. **Consider security rules**: Ensure your Firestore security rules allow
document updates
9. **Delete fields using update mask**: To delete fields, include them in the
`updateMask` but omit them from `documentData`
10. **Test with non-production data first**: Always test your updates on
non-critical documents first
## Differences from Add Documents
- **Purpose**: Updates existing documents vs. creating new ones
- **Document must exist**: For standard updates (though not using updateMask will create if missing with given document id)
- **Document must exist**: For standard updates (though an update without an
  `updateMask` will create the document if it is missing, using the given
  document ID)
- **Update mask support**: Allows selective field updates
- **Field deletion**: Supports removing specific fields by including them in the mask but not in the data
- **Field deletion**: Supports removing specific fields by including them in the
mask but not in the data
- **Returns updateTime**: Instead of createTime
## Related Tools
- [`firestore-add-documents`](firestore-add-documents.md) - Add new documents to Firestore
- [`firestore-get-documents`](firestore-get-documents.md) - Retrieve documents by their paths
- [`firestore-query-collection`](firestore-query-collection.md) - Query documents in a collection
- [`firestore-delete-documents`](firestore-delete-documents.md) - Delete documents from Firestore
- [`firestore-add-documents`](firestore-add-documents.md) - Add new documents to
Firestore
- [`firestore-get-documents`](firestore-get-documents.md) - Retrieve documents
by their paths
- [`firestore-query-collection`](firestore-query-collection.md) - Query
documents in a collection
- [`firestore-delete-documents`](firestore-delete-documents.md) - Delete
documents from Firestore

View File

@@ -247,17 +247,17 @@ my-http-tool:
## Reference
| **field** | **type** | **required** | **description** |
|--------------|:------------------------------------------:|:------------:|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| kind | string | true | Must be "http". |
| source | string | true | Name of the source the HTTP request should be sent to. |
| description | string | true | Description of the tool that is passed to the LLM. |
| path | string | true | The path of the HTTP request. You can include static query parameters in the path string. |
| method | string | true | The HTTP method to use (e.g., GET, POST, PUT, DELETE). |
| headers | map[string]string | false | A map of headers to include in the HTTP request (overrides source headers). |
| requestBody | string | false | The request body payload. Use [go template][go-template-doc] with the parameter name as the placeholder (e.g., `{{.id}}` will be replaced with the value of the parameter that has name `id` in the `bodyParams` section). |
| queryParams | [parameters](../#specifying-parameters) | false | List of [parameters](../#specifying-parameters) that will be inserted into the query string. |
| bodyParams | [parameters](../#specifying-parameters) | false | List of [parameters](../#specifying-parameters) that will be inserted into the request body payload. |
| headerParams | [parameters](../#specifying-parameters) | false | List of [parameters](../#specifying-parameters) that will be inserted as the request headers. |
| **field** | **type** | **required** | **description** |
|--------------|:---------------------------------------:|:------------:|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| kind | string | true | Must be "http". |
| source | string | true | Name of the source the HTTP request should be sent to. |
| description | string | true | Description of the tool that is passed to the LLM. |
| path | string | true | The path of the HTTP request. You can include static query parameters in the path string. |
| method | string | true | The HTTP method to use (e.g., GET, POST, PUT, DELETE). |
| headers | map[string]string | false | A map of headers to include in the HTTP request (overrides source headers). |
| requestBody | string | false | The request body payload. Use [go template][go-template-doc] with the parameter name as the placeholder (e.g., `{{.id}}` will be replaced with the value of the parameter that has name `id` in the `bodyParams` section). |
| queryParams | [parameters](../#specifying-parameters) | false | List of [parameters](../#specifying-parameters) that will be inserted into the query string. |
| bodyParams | [parameters](../#specifying-parameters) | false | List of [parameters](../#specifying-parameters) that will be inserted into the request body payload. |
| headerParams | [parameters](../#specifying-parameters) | false | List of [parameters](../#specifying-parameters) that will be inserted as the request headers. |
[go-template-doc]: <https://pkg.go.dev/text/template#pkg-overview>

View File

@@ -11,7 +11,8 @@ aliases:
## About
A `looker-conversational-analytics` tool allows you to ask questions about your Looker data.
A `looker-conversational-analytics` tool allows you to ask questions about your
Looker data.
It's compatible with the following sources:
@@ -19,9 +20,11 @@ It's compatible with the following sources:
`looker-conversational-analytics` accepts two parameters:
1. `user_query_with_context`: The question asked of the Conversational Analytics system.
2. `explore_references`: A list of one to five explores that can be queried to answer the
question. The form of the entry is `[{"model": "model name", "explore": "explore name"}, ...]`
1. `user_query_with_context`: The question asked of the Conversational Analytics
system.
2. `explore_references`: A list of one to five explores that can be queried to
answer the question. The form of the entry is `[{"model": "model name",
"explore": "explore name"}, ...]`
## Example
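
A minimal sketch, assuming a Looker source named `my-looker`; the two
parameters above are supplied by the model at call time rather than in the
config, and the kind string follows the reference table below:

```yaml
tools:
  ask_looker_data:
    kind: lookerca-conversational-analytics
    source: my-looker   # assumed Looker source name
    description: Ask natural-language questions about Looker explores.
```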
@@ -42,4 +45,4 @@ tools:
|-------------|:--------:|:------------:|----------------------------------------------------|
| kind | string | true | Must be "lookerca-conversational-analytics". |
| source | string | true | Name of the source the SQL should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |
| description | string | true | Description of the tool that is passed to the LLM. |

View File

@@ -42,4 +42,4 @@ tools:
|-------------|:--------:|:------------:|----------------------------------------------------|
| kind | string | true | Must be "looker-create-project-file". |
| source | string | true | Name of the source Looker instance. |
| description | string | true | Description of the tool that is passed to the LLM. |
| description | string | true | Description of the tool that is passed to the LLM. |

View File

@@ -41,4 +41,4 @@ tools:
|-------------|:--------:|:------------:|----------------------------------------------------|
| kind | string | true | Must be "looker-delete-project-file". |
| source | string | true | Name of the source Looker instance. |
| description | string | true | Description of the tool that is passed to the LLM. |
| description | string | true | Description of the tool that is passed to the LLM. |

View File

@@ -16,8 +16,8 @@ It's compatible with the following sources:
- [looker](../../sources/looker.md)
`looker-dev-mode` accepts a boolean parameter, true to enter dev mode and false to exit dev mode.
`looker-dev-mode` accepts a boolean parameter, true to enter dev mode and false
to exit dev mode.
## Example
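
A minimal sketch, assuming a Looker source named `my-looker`; the boolean is
passed at invocation time:

```yaml
tools:
  toggle_dev_mode:
    kind: looker-dev-mode
    source: my-looker   # assumed Looker source name
    description: Enter or exit Looker development mode.
```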
@@ -39,4 +39,4 @@ tools:
|-------------|:--------:|:------------:|----------------------------------------------------|
| kind | string | true | Must be "looker-dev-mode". |
| source | string | true | Name of the source Looker instance. |
| description | string | true | Description of the tool that is passed to the LLM. |
| description | string | true | Description of the tool that is passed to the LLM. |

View File

@@ -10,7 +10,10 @@ aliases:
## About
The `looker-generate-embed-url` tool generates an embeddable URL for a given piece of Looker content. The url generated is created for the user authenticated to the Looker source. When opened in the browser it will create a Looker Embed session.
The `looker-generate-embed-url` tool generates an embeddable URL for a given
piece of Looker content. The generated URL is created for the user
authenticated to the Looker source. When opened in the browser, it will create
a Looker Embed session.
It's compatible with the following sources:
@@ -21,7 +24,9 @@ It's compatible with the following sources:
1. the `type` of content (e.g., "dashboards", "looks", "query-visualization")
2. the `id` of the content
It's recommended to use other tools from the Looker MCP toolbox with this tool to do things like fetch dashboard id's, generate a query, etc that can be supplied to this tool.
It's recommended to use other tools from the Looker MCP toolbox with this tool
to do things like fetch dashboard IDs or generate a query, which can then be
supplied to this tool.
## Example
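
A minimal sketch, assuming a Looker source named `my-looker`; the `type` and
`id` values are supplied per call (for example, `{"type": "dashboards", "id":
"42"}`, with illustrative values):

```yaml
tools:
  generate_embed_url:
    kind: looker-generate-embed-url
    source: my-looker   # assumed Looker source name
    description: Generate an embed URL for a piece of Looker content.
```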
@@ -42,6 +47,6 @@ tools:
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|----------------------------------------------------|
| kind | string | true | Must be "looker-generate-embed-url" |
| kind | string | true | Must be "looker-generate-embed-url" |
| source | string | true | Name of the source the SQL should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |

View File

@@ -38,4 +38,4 @@ tools:
|-------------|:--------:|:------------:|----------------------------------------------------|
| kind | string | true | Must be "looker-get-connection-databases". |
| source | string | true | Name of the source Looker instance. |
| description | string | true | Description of the tool that is passed to the LLM. |
| description | string | true | Description of the tool that is passed to the LLM. |

View File

@@ -38,4 +38,4 @@ tools:
|-------------|:--------:|:------------:|----------------------------------------------------|
| kind | string | true | Must be "looker-get-connection-schemas". |
| source | string | true | Name of the source Looker instance. |
| description | string | true | Description of the tool that is passed to the LLM. |
| description | string | true | Description of the tool that is passed to the LLM. |

View File

@@ -12,7 +12,6 @@ aliases:
A `looker-get-connection-table-columns` tool returns all the columns for each table specified.
It's compatible with the following sources:
- [looker](../../sources/looker.md)
@@ -40,4 +39,4 @@ tools:
|-------------|:--------:|:------------:|----------------------------------------------------|
| kind | string | true | Must be "looker-get-connection-table-columns". |
| source | string | true | Name of the source Looker instance. |
| description | string | true | Description of the tool that is passed to the LLM. |
| description | string | true | Description of the tool that is passed to the LLM. |

View File

@@ -16,7 +16,8 @@ It's compatible with the following sources:
- [looker](../../sources/looker.md)
`looker-get-connection-tables` accepts a `conn` parameter, a `schema` parameter, and an optional `db` parameter.
`looker-get-connection-tables` accepts a `conn` parameter, a `schema` parameter,
and an optional `db` parameter.
## Example
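
A minimal sketch, assuming a Looker source named `my-looker`; `conn`, `schema`,
and the optional `db` are passed at invocation time:

```yaml
tools:
  get_connection_tables:
    kind: looker-get-connection-tables
    source: my-looker   # assumed Looker source name
    description: List the tables available on a Looker database connection.
```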
@@ -38,4 +39,4 @@ tools:
|-------------|:--------:|:------------:|----------------------------------------------------|
| kind | string | true | Must be "looker-get-connection-tables". |
| source | string | true | Name of the source Looker instance. |
| description | string | true | Description of the tool that is passed to the LLM. |
| description | string | true | Description of the tool that is passed to the LLM. |

View File

@@ -39,4 +39,4 @@ tools:
|-------------|:--------:|:------------:|----------------------------------------------------|
| kind | string | true | Must be "looker-get-connections". |
| source | string | true | Name of the source Looker instance. |
| description | string | true | Description of the tool that is passed to the LLM. |
| description | string | true | Description of the tool that is passed to the LLM. |

View File

@@ -31,6 +31,7 @@ The return type is an array of maps, each map is formatted like:
"group_label": "group label"
}
```
## Example
```yaml

View File

@@ -52,7 +52,6 @@ The response is a json array with the following elements:
}
```
## Reference
| **field** | **type** | **required** | **description** |

View File

@@ -58,7 +58,6 @@ The response is a json array with the following elements:
}
```
## Reference
| **field** | **type** | **required** | **description** |

View File

@@ -52,7 +52,6 @@ The response is a json array with the following elements:
}
```
## Reference
| **field** | **type** | **required** | **description** |

View File

@@ -38,4 +38,4 @@ tools:
|-------------|:--------:|:------------:|----------------------------------------------------|
| kind | string | true | Must be "looker-get-project-files". |
| source | string | true | Name of the source Looker instance. |
| description | string | true | Description of the tool that is passed to the LLM. |
| description | string | true | Description of the tool that is passed to the LLM. |

View File

@@ -38,4 +38,4 @@ tools:
|-------------|:--------:|:------------:|----------------------------------------------------|
| kind | string | true | Must be "looker-get-projects". |
| source | string | true | Name of the source Looker instance. |
| description | string | true | Description of the tool that is passed to the LLM. |
| description | string | true | Description of the tool that is passed to the LLM. |

View File

@@ -10,11 +10,18 @@ aliases:
## About
The `looker-health-analyze` tool performs various analysis tasks on a Looker instance. The `action` parameter selects the type of analysis to perform:
The `looker-health-analyze` tool performs various analysis tasks on a Looker
instance. The `action` parameter selects the type of analysis to perform:
- `projects`: Analyzes all projects or a specified project, reporting on the number of models and view files, as well as Git connection and validation status.
- `models`: Analyzes all models or a specified model, providing a count of explores, unused explores, and total query counts.
- `explores`: Analyzes all explores or a specified explore, reporting on the number of joins, unused joins, fields, unused fields, and query counts. Being classified as **Unused** is determined by whether a field has been used as a field or filter within the past 90 days in production.
- `projects`: Analyzes all projects or a specified project, reporting on the
number of models and view files, as well as Git connection and validation
status.
- `models`: Analyzes all models or a specified model, providing a count of
explores, unused explores, and total query counts.
- `explores`: Analyzes all explores or a specified explore, reporting on the
number of joins, unused joins, fields, unused fields, and query counts. Being
classified as **Unused** is determined by whether a field has been used as a
field or filter within the past 90 days in production.
## Parameters
@@ -54,4 +61,4 @@ tools:
|-------------|:--------:|:------------:|----------------------------------------------------|
| kind | string | true | Must be "looker-health-analyze" |
| source | string | true | Looker source name |
| description | string | true | Description of the tool that is passed to the LLM. |
| description | string | true | Description of the tool that is passed to the LLM. |

View File

@@ -10,31 +10,36 @@ aliases:
## About
The `looker-health-pulse` tool performs health checks on a Looker instance. The `action` parameter selects the type of check to perform:
The `looker-health-pulse` tool performs health checks on a Looker instance. The
`action` parameter selects the type of check to perform:
- `check_db_connections`: Checks all database connections, runs supported tests, and reports query counts.
- `check_dashboard_performance`: Finds dashboards with slow running queries in the last 7 days.
- `check_dashboard_errors`: Lists dashboards with erroring queries in the last 7 days.
- `check_explore_performance`: Lists the slowest explores in the last 7 days and reports average query runtime.
- `check_schedule_failures`: Lists schedules that have failed in the last 7 days.
- `check_legacy_features`: Lists enabled legacy features. (*To note, this function is not
available in Looker Core.*)
- `check_db_connections`: Checks all database connections, runs supported tests,
and reports query counts.
- `check_dashboard_performance`: Finds dashboards with slow running queries in
the last 7 days.
- `check_dashboard_errors`: Lists dashboards with erroring queries in the last 7
days.
- `check_explore_performance`: Lists the slowest explores in the last 7 days and
reports average query runtime.
- `check_schedule_failures`: Lists schedules that have failed in the last 7
days.
- `check_legacy_features`: Lists enabled legacy features. (*To note, this
function is not available in Looker Core.*)
## Parameters
| **field** | **type** | **required** | **description** |
|---------------|:--------:|:------------:|---------------------------------------------|
| action | string | true | The health check to perform |
| **field** | **type** | **required** | **description** |
|-----------|:--------:|:------------:|-----------------------------|
| action | string | true | The health check to perform |
| **action** | **description** |
|---------------------------|--------------------------------------------------------------------------------|
| check_db_connections | Checks all database connections and reports query counts and errors |
| check_dashboard_performance | Finds dashboards with slow queries (>30s) in the last 7 days |
| check_dashboard_errors | Lists dashboards with erroring queries in the last 7 days |
| check_explore_performance | Lists slowest explores and average query runtime |
| check_schedule_failures | Lists failed schedules in the last 7 days |
| check_legacy_features | Lists enabled legacy features |
| **action** | **description** |
|-----------------------------|---------------------------------------------------------------------|
| check_db_connections | Checks all database connections and reports query counts and errors |
| check_dashboard_performance | Finds dashboards with slow queries (>30s) in the last 7 days |
| check_dashboard_errors | Lists dashboards with erroring queries in the last 7 days |
| check_explore_performance | Lists slowest explores and average query runtime |
| check_schedule_failures | Lists failed schedules in the last 7 days |
| check_legacy_features | Lists enabled legacy features |
## Example
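
A minimal sketch, assuming a Looker source named `my-looker`; the `action`
value (one of the checks listed above) is chosen per call:

```yaml
tools:
  looker_pulse:
    kind: looker-health-pulse
    source: my-looker   # assumed Looker source name
    description: Run health checks against the Looker instance.
```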
@@ -66,4 +71,4 @@ tools:
|-------------|:--------:|:------------:|----------------------------------------------------|
| kind | string | true | Must be "looker-health-pulse" |
| source | string | true | Looker source name |
| description | string | true | Description of the tool that is passed to the LLM. |
| description | string | true | Description of the tool that is passed to the LLM. |

View File

@@ -10,7 +10,9 @@ aliases:
## About
The `looker-health-vacuum` tool helps you identify unused LookML objects such as models, explores, joins, and fields. The `action` parameter selects the type of vacuum to perform:
The `looker-health-vacuum` tool helps you identify unused LookML objects such as
models, explores, joins, and fields. The `action` parameter selects the type of
vacuum to perform:
- `models`: Identifies unused explores within a model.
- `explores`: Identifies unused joins and fields within an explore.
@@ -28,7 +30,8 @@ The `looker-health-vacuum` tool helps you identify unused LookML objects such as
## Example
Identify unnused fields (*in this case, less than 1 query in the last 20 days*) and joins in the `order_items` explore and `thelook` model
Identify unused fields (*in this case, fields with fewer than 1 query in the
last 20 days*) and joins in the `order_items` explore and `thelook` model:
```yaml
tools:
@@ -52,9 +55,8 @@ tools:
The result is a list of objects that are candidates for deletion.
```
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|----------------------------------------------------|
| kind | string | true | Must be "looker-health-vacuum" |
| source | string | true | Looker source name |
| description | string | true | Description of the tool that is passed to the LLM. |
| description | string | true | Description of the tool that is passed to the LLM. |

View File

@@ -40,4 +40,4 @@ tools:
|-------------|:--------:|:------------:|----------------------------------------------------|
| kind | string | true | Must be "looker-run-dashboard" |
| source | string | true | Name of the source the SQL should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |
| description | string | true | Description of the tool that is passed to the LLM. |

View File

@@ -42,4 +42,4 @@ tools:
|-------------|:--------:|:------------:|----------------------------------------------------|
| kind | string | true | Must be "looker-update-project-file". |
| source | string | true | Name of the source Looker instance. |
| description | string | true | Description of the tool that is passed to the LLM. |
| description | string | true | Description of the tool that is passed to the LLM. |

View File

@@ -8,24 +8,36 @@ description: >
## About
MindsDB is the most widely adopted AI federated database that enables you to query hundreds of datasources and ML models through a single SQL interface. The following tools work with MindsDB databases:
MindsDB is the most widely adopted AI federated database that enables you to
query hundreds of datasources and ML models through a single SQL interface. The
following tools work with MindsDB databases:
- [mindsdb-execute-sql](mindsdb-execute-sql.md) - Execute SQL queries directly on MindsDB
- [mindsdb-execute-sql](mindsdb-execute-sql.md) - Execute SQL queries directly
on MindsDB
- [mindsdb-sql](mindsdb-sql.md) - Execute parameterized SQL queries on MindsDB
These tools leverage MindsDB's capabilities to:
- **Connect to Multiple Datasources**: Query databases, APIs, file systems, and more through SQL
- **Cross-Datasource Operations**: Perform joins and analytics across different data sources
- **ML Model Integration**: Use trained ML models as virtual tables for predictions
- **Unstructured Data Processing**: Query documents, images, and other unstructured data as structured tables
- **Real-time Predictions**: Get real-time predictions from ML models through SQL
- **API Translation**: Automatically translate SQL queries into REST APIs, GraphQL, and native protocols
- **Connect to Multiple Datasources**: Query databases, APIs, file systems, and
more through SQL
- **Cross-Datasource Operations**: Perform joins and analytics across different
data sources
- **ML Model Integration**: Use trained ML models as virtual tables for
predictions
- **Unstructured Data Processing**: Query documents, images, and other
unstructured data as structured tables
- **Real-time Predictions**: Get real-time predictions from ML models through
SQL
- **API Translation**: Automatically translate SQL queries into REST APIs,
GraphQL, and native protocols
## Supported Datasources
MindsDB automatically translates your SQL queries into the appropriate APIs for hundreds of datasources:
MindsDB automatically translates your SQL queries into the appropriate APIs for
hundreds of datasources:
### **Business Applications**
- **Salesforce**: Query leads, opportunities, accounts, and custom objects
- **Jira**: Access issues, projects, workflows, and team data
- **GitHub**: Query repositories, commits, pull requests, and issues
@@ -33,6 +45,7 @@ MindsDB automatically translates your SQL queries into the appropriate APIs for
- **HubSpot**: Query contacts, companies, deals, and marketing data
### **Databases & Storage**
- **MongoDB**: Query NoSQL collections as structured tables
- **PostgreSQL/MySQL**: Standard relational databases
- **Redis**: Key-value stores and caching layers
@@ -40,11 +53,13 @@ MindsDB automatically translates your SQL queries into the appropriate APIs for
- **S3/Google Cloud Storage**: File storage and data lakes
### **Communication & Email**
- **Gmail/Outlook**: Query emails, attachments, and metadata
- **Microsoft Teams**: Team communications and files
- **Discord**: Server data and message history
### **Analytics & Monitoring**
- **Google Analytics**: Website traffic and user behavior
- **Mixpanel**: Product analytics and user events
- **Datadog**: Infrastructure monitoring and logs
@@ -53,6 +68,7 @@ MindsDB automatically translates your SQL queries into the appropriate APIs for
## Example Use Cases
### Cross-Datasource Analytics
```sql
-- Join Salesforce opportunities with GitHub activity
SELECT
@@ -66,6 +82,7 @@ WHERE s.stage = 'Closed Won';
```
### Email & Communication Analysis
```sql
-- Analyze email patterns with Slack activity
SELECT
@@ -79,6 +96,7 @@ WHERE e.date >= '2024-01-01';
```
### ML Model Predictions
```sql
-- Use ML model to predict customer churn
SELECT
@@ -90,7 +108,9 @@ FROM customer_churn_model
WHERE predicted_churn_probability > 0.8;
```
Since MindsDB implements the MySQL wire protocol, these tools are functionally compatible with MySQL tools while providing access to MindsDB's advanced federated database capabilities.
Since MindsDB implements the MySQL wire protocol, these tools are functionally
compatible with MySQL tools while providing access to MindsDB's advanced
federated database capabilities.
## Working Configuration Example
@@ -113,4 +133,4 @@ tools:
Execute SQL queries directly on MindsDB database.
Use this tool to run any SQL statement against your MindsDB instance.
Example: SELECT * FROM my_table LIMIT 10
```
```

View File

@@ -19,16 +19,23 @@ federated database. It's compatible with any of the following sources:
`mindsdb-execute-sql` takes one input parameter `sql` and runs the SQL
statement against the `source`. This tool enables you to:
- **Query Multiple Datasources**: Execute SQL across hundreds of connected datasources
- **Cross-Datasource Joins**: Perform joins between different databases, APIs, and file systems
- **ML Model Predictions**: Query ML models as virtual tables for real-time predictions
- **Unstructured Data**: Query documents, images, and other unstructured data as structured tables
- **Federated Analytics**: Perform analytics across multiple datasources simultaneously
- **API Translation**: Automatically translate SQL queries into REST APIs, GraphQL, and native protocols
- **Query Multiple Datasources**: Execute SQL across hundreds of connected
datasources
- **Cross-Datasource Joins**: Perform joins between different databases, APIs,
and file systems
- **ML Model Predictions**: Query ML models as virtual tables for real-time
predictions
- **Unstructured Data**: Query documents, images, and other unstructured data as
structured tables
- **Federated Analytics**: Perform analytics across multiple datasources
simultaneously
- **API Translation**: Automatically translate SQL queries into REST APIs,
GraphQL, and native protocols
## Example Queries
### Cross-Datasource Analytics
```sql
-- Join Salesforce opportunities with GitHub activity
SELECT
@@ -43,6 +50,7 @@ GROUP BY s.opportunity_name, s.amount, g.repository_name;
```
### Email & Communication Analysis
```sql
-- Analyze email patterns with Slack activity
SELECT
@@ -57,6 +65,7 @@ GROUP BY e.sender, e.subject, s.channel_name;
```
### ML Model Predictions
```sql
-- Use ML model to predict customer churn
SELECT
@@ -69,6 +78,7 @@ WHERE predicted_churn_probability > 0.8;
```
### MongoDB Query
```sql
-- Query MongoDB collections as structured tables
SELECT
@@ -119,8 +129,8 @@ tools:
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:------------------------------------------:|:------------:|--------------------------------------------------------------------------------------------------|
| kind | string | true | Must be "mindsdb-execute-sql". |
| source | string | true | Name of the source the SQL should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|----------------------------------------------------|
| kind | string | true | Must be "mindsdb-execute-sql". |
| source | string | true | Name of the source the SQL should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |

Some files were not shown because too many files have changed in this diff.