docs: fix docs linting (#1520)

This commit is contained in:
Yuan Teoh
2025-09-18 16:30:46 -07:00
committed by GitHub
parent 7450482bb2
commit 3d8a041782
89 changed files with 1393 additions and 808 deletions

View File

@@ -27,7 +27,6 @@ This project follows
> [!NOTE]
> New contributions should always include both unit and integration tests.
All submissions, including submissions by project members, require review. We
use GitHub pull requests for this purpose. Consult
@@ -37,14 +36,14 @@ information on using pull requests.
### Code reviews
* Within 2-5 days, a reviewer will review your PR. They may approve it, or request
changes.
* When requesting changes, reviewers should self-assign the PR to ensure
they are aware of any updates.
* If additional changes are needed, push additional commits to your PR branch -
this helps the reviewer know which parts of the PR have changed.
* Commits will be
squashed when merged.
* Please follow up with changes promptly.
* If a PR is awaiting changes by the
author for more than 10 days, maintainers may mark that PR as Draft. PRs that
are inactive for more than 30 days may be closed.
@@ -53,12 +52,16 @@ are inactive for more than 30 days may be closed.
Please create an
[issue](https://github.com/googleapis/genai-toolbox/issues) before
implementation to ensure we can accept the contribution and avoid duplicated work.
This issue should include an overview of the API design. If you have any
questions, reach out on our [Discord](https://discord.gg/Dmm69peqjh) to chat
directly with the team.
> [!NOTE]
> New tools can be added for [pre-existing data
> sources](https://github.com/googleapis/genai-toolbox/tree/main/internal/sources).
> However, any new database source should also include at least one new tool
> type.
### Adding a New Database Source
@@ -196,7 +199,7 @@ detailed description of your changes and any requests for long term testing
resources.
* **Title:** All pull request titles should follow the formatting of
[Conventional
Commit](https://www.conventionalcommits.org/) guidelines: `<type>[optional
scope]: description`. For example, if you are adding a new field in postgres
source, the title should be `feat(source/postgres): add support for

View File

@@ -59,12 +59,14 @@ cancel_hotel: <- tool name
Tool name is the identifier used by a Large Language Model (LLM) to invoke a
specific tool.
* Custom tools: The user can define any name they want. The guidelines below
do not apply.
* Pre-built tools: The tool name is predefined and cannot be changed. It
should follow the guidelines.
The following guidelines apply to tool names:
* Should use underscores over hyphens (e.g., `list_collections` instead of
`list-collections`).
* Should not have the product name in the name (e.g., `list_collections` instead
@@ -79,6 +81,7 @@ The following guidelines apply to tool names:
Tool kind serves as a category or type that a user can assign to a tool.
The following guidelines apply to tool kinds:
* Should use hyphens over underscores (e.g., `firestore-list-collections` over
`firestore_list_collections`).
* Should use the product name in the name (e.g., `firestore-list-collections` over

View File

@@ -175,7 +175,8 @@ To run Toolbox from binary:
```
**NOTE:**
Toolbox enables dynamic reloading by default. To disable, use the
`--disable-reload` flag.
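For example, a minimal sketch (assuming the binary and `tools.yaml` sit in the
current directory):

```sh
# Start Toolbox with dynamic reloading disabled.
./toolbox --tools-file "tools.yaml" --disable-reload
```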
</details>
@@ -194,7 +195,8 @@ us-central1-docker.pkg.dev/database-toolbox/toolbox/toolbox:$VERSION \
```
**NOTE:**
The `-v` flag mounts your local `tools.yaml` into the container, and `-p` maps
the container's port `5000` to your host's port `5000`.
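Putting the note's flags together, a sketch of the full command (the
in-container mount path and the `$VERSION` value are assumptions; adjust them
to the image you pulled):

```sh
export VERSION=0.15.0  # assumed: substitute the release you intend to run
docker run \
  -v "$(pwd)/tools.yaml:/app/tools.yaml" \
  -p 5000:5000 \
  us-central1-docker.pkg.dev/database-toolbox/toolbox/toolbox:$VERSION \
  --tools-file /app/tools.yaml
```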
</details>
@@ -202,14 +204,18 @@ The `-v` flag mounts your local `tools.yaml` into the container, and `-p` maps t
<summary>Source</summary>
To run the server directly from source, navigate to the project root directory
and run:
```sh
go run .
```
**NOTE:**
This command runs the project from source, and is more suitable for development
and testing. It does **not** compile a binary into your `$GOPATH`. If you want
to compile a binary instead, refer to the [Developer
Documentation](./DEVELOPER.md#building-the-binary).
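As a rough sketch of that alternative (the output name is an assumption; the
Developer Documentation above is authoritative):

```sh
# Compile a standalone binary from the project root instead of using `go run`.
go build -o toolbox .
```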
</details>
@@ -217,7 +223,9 @@ This command runs the project from source, and is more suitable for development
<summary>Homebrew</summary>
If you installed Toolbox using [Homebrew](https://brew.sh/), the `toolbox`
binary is available in your system path. You can start the server with the same
command:
```sh
toolbox --tools-file "tools.yaml"
@@ -232,7 +240,6 @@ For more detailed documentation on deploying to different environments, check
out the resources in the [How-to
section](https://googleapis.github.io/genai-toolbox/how-to/)
### Integrating your application
Once your server is up and running, you can load the tools into your
@@ -777,6 +784,7 @@ Since the project is in a pre-release stage (version `0.x.y`), we follow the
standard conventions for initial development:
### Pre-1.0.0 Versioning
While the major version is `0`, the public API should be considered unstable.
The version will be incremented as follows:
@@ -786,6 +794,7 @@ The version will be incremented as follows:
backward-compatible bug fixes.
### Post-1.0.0 Versioning
Once the project reaches a stable `1.0.0` release, the versioning will follow
the more common convention:

View File

@@ -22,6 +22,7 @@ etc., you could use environment variables instead with the format `${ENV_NAME}`.
user: ${USER_NAME}
password: ${PASSWORD}
```
A default value can be specified like `${ENV_NAME:default}`.
```yaml
user: ${USER_NAME}
password: ${PASSWORD:defaultpass} # assumed example: "defaultpass" is used when PASSWORD is unset
```

View File

@@ -108,6 +108,7 @@ To install Toolbox using Homebrew on macOS or Linux:
```sh
brew install mcp-toolbox
```
{{% /tab %}}
{{% tab header="Compile from source" lang="en" %}}
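The body of this tab is cut off by the hunk above; a minimal sketch of a
source build (the repository URL is this project's, but treat the exact steps
as an assumption and see the tab's full text for the canonical ones):

```sh
# Clone the repository and compile the binary from source.
git clone https://github.com/googleapis/genai-toolbox.git
cd genai-toolbox
go build -o toolbox .
```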
@@ -138,8 +139,9 @@ Toolbox enables dynamic reloading by default. To disable, use the
#### Launching Toolbox UI
To launch Toolbox's interactive UI, use the `--ui` flag. This allows you to test
tools and toolsets with features such as authorized parameters. To learn more,
visit [Toolbox UI](../../how-to/toolbox-ui/index.md).
```sh
./toolbox --ui
@@ -147,7 +149,8 @@ with features such as authorized parameters. To learn more, visit [Toolbox UI](.
#### Homebrew Users
If you installed Toolbox using Homebrew, the `toolbox` binary is available in
your system path. You can start the server with the same command:
```sh
toolbox --tools-file "tools.yaml"
@@ -185,7 +188,8 @@ async with ToolboxClient("http://127.0.0.1:5000") as client:
{{< /highlight >}}
For more detailed instructions on using the Toolbox Core SDK, see the
[project's
README](https://github.com/googleapis/mcp-toolbox-sdk-python/blob/main/packages/toolbox-core/README.md).
{{% /tab %}}
{{% tab header="LangChain" lang="en" %}}
@@ -206,7 +210,8 @@ async with ToolboxClient("http://127.0.0.1:5000") as client:
{{< /highlight >}}
For more detailed instructions on using the Toolbox LangChain SDK, see the
[project's
README](https://github.com/googleapis/mcp-toolbox-sdk-python/blob/main/packages/toolbox-langchain/README.md).
{{% /tab %}}
{{% tab header="Llamaindex" lang="en" %}}
@@ -228,7 +233,8 @@ async with ToolboxClient("http://127.0.0.1:5000") as client:
{{< /highlight >}}
For more detailed instructions on using the Toolbox Llamaindex SDK, see the
[project's
README](https://github.com/googleapis/genai-toolbox-llamaindex-python/blob/main/README.md).
{{% /tab %}}
{{< /tabpane >}}
@@ -343,7 +349,8 @@ const tools = toolboxTools.map(getTool);
{{< /tabpane >}}
For more detailed instructions on using the Toolbox Core SDK, see the
[project's
README](https://github.com/googleapis/mcp-toolbox-sdk-js/blob/main/packages/toolbox-core/README.md).
#### Go
@@ -590,7 +597,8 @@ func main() {
{{< /tabpane >}}
For more detailed instructions on using the Toolbox Go SDK, see the
[project's
README](https://github.com/googleapis/mcp-toolbox-sdk-go/blob/main/core/README.md).
For end-to-end samples on using the Toolbox Go SDK with orchestration
frameworks, see the [project's

View File

@@ -14,12 +14,14 @@ description: >
This guide assumes you have already done the following:
1. Installed [Python 3.9+][install-python] (including [pip][install-pip] and
your preferred virtual environment tool for managing dependencies e.g.
[venv][install-venv]).
1. Installed [PostgreSQL 16+ and the `psql` client][install-postgres].
[install-python]: https://wiki.python.org/moin/BeginnersGuide/Download
[install-pip]: https://pip.pypa.io/en/stable/installation/
[install-venv]:
https://packaging.python.org/en/latest/tutorials/installing-packages/#creating-virtual-environments
[install-postgres]: https://www.postgresql.org/download/
### Cloud Setup (Optional)
@@ -36,9 +38,10 @@ This guide assumes you have already done the following:
In this section, we will write and run an agent that will load the Tools
from Toolbox.
{{< notice tip>}}
If you prefer to experiment within a Google Colab environment, you can connect
to a [local
runtime](https://research.google.com/colaboratory/local-runtimes.html).
{{< /notice >}}
1. In a new terminal, install the SDK package.
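   The install command itself falls outside this hunk; a minimal sketch,
   assuming the Python core SDK package (`toolbox-core`) referenced at the end
   of this guide:

   ```bash
   pip install toolbox-core
   ```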
@@ -148,5 +151,6 @@ Documentation](https://github.com/googleapis/python-genai?tab=readme-ov-file#man
```
{{< notice info >}}
For more information, visit the [Python SDK
repo](https://github.com/googleapis/mcp-toolbox-sdk-python).
{{</ notice >}}

View File

@@ -17,12 +17,15 @@ This guide assumes you have already done the following:
[install-postgres]: https://www.postgresql.org/download/
### Cloud Setup (Optional)
{{< regionInclude "quickstart/shared/cloud_setup.md" "cloud_setup" >}}
## Step 1: Set up your database
{{< regionInclude "quickstart/shared/database_setup.md" "database_setup" >}}
## Step 2: Install and configure Toolbox
{{< regionInclude "quickstart/shared/configure_toolbox.md" "configure_toolbox" >}}
## Step 3: Connect your agent to Toolbox
@@ -51,14 +54,12 @@ from Toolbox.
{{< include "quickstart/go/langchain/quickstart.go" >}}
{{< /tab >}}
{{< tab header="Genkit Go" lang="go" >}}
{{< include "quickstart/go/genkit/quickstart.go" >}}
{{< /tab >}}
{{< tab header="Go GenAI" lang="go" >}}
@@ -71,7 +72,6 @@ from Toolbox.
{{< include "quickstart/go/openAI/quickstart.go" >}}
{{< /tab >}}
{{< /tabpane >}}

View File

@@ -17,12 +17,15 @@ This guide assumes you have already done the following:
[install-postgres]: https://www.postgresql.org/download/
### Cloud Setup (Optional)
{{< regionInclude "quickstart/shared/cloud_setup.md" "cloud_setup" >}}
## Step 1: Set up your database
{{< regionInclude "quickstart/shared/database_setup.md" "database_setup" >}}
## Step 2: Install and configure Toolbox
{{< regionInclude "quickstart/shared/configure_toolbox.md" "configure_toolbox" >}}
## Step 3: Connect your agent to Toolbox
@@ -36,7 +39,8 @@ from Toolbox.
npm init -y
```
1. In a new terminal, install the
[SDK](https://www.npmjs.com/package/@toolbox-sdk/core).
```bash
npm install @toolbox-sdk/core
@@ -59,7 +63,8 @@ npm install @google/genai
{{< /tab >}}
{{< /tabpane >}}
1. Create a new file named `hotelAgent.js` and copy the following code to create
an agent:
{{< tabpane persist=header >}}
{{< tab header="LangChain" lang="js" >}}
@@ -95,5 +100,6 @@ npm install @google/genai
```
{{< notice info >}}
For more information, visit the [JS SDK
repo](https://github.com/googleapis/mcp-toolbox-sdk-js).
{{</ notice >}}

View File

@@ -5,7 +5,8 @@ If you plan to use **Google Cloud's Vertex AI** with your agent (e.g., using
local development:
1. [Install the Google Cloud CLI](https://cloud.google.com/sdk/docs/install)
1. [Set up Application Default Credentials
(ADC)](https://cloud.google.com/docs/authentication/set-up-adc-local-dev-environment)
1. Set your project and enable Vertex AI
```bash
@@ -13,8 +14,4 @@ local development:
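# Assumed from the step title: select your project before enabling the API.
gcloud config set project PROJECT_ID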
gcloud services enable aiplatform.googleapis.com
```
<!-- [END cloud_setup] -->

View File

@@ -11,4 +11,4 @@ description: >
<link rel="canonical" href="https://cloud.google.com/alloydb/docs/create-database-with-mcp-toolbox"/>
<meta http-equiv="refresh" content="0;url=https://cloud.google.com/alloydb/docs/create-database-with-mcp-toolbox"/>
</head>
</html>

View File

@@ -6,8 +6,9 @@ description: >
Create and manage Cloud SQL for SQL Server (Admin) using Toolbox.
---
This guide covers how to use [MCP Toolbox for Databases][toolbox] to expose your
developer assistant tools to create and manage Cloud SQL for SQL Server
instances, databases, and users:
* [Cursor][cursor]
* [Windsurf][windsurf] (Codium)
@@ -235,8 +236,10 @@ curl -O https://storage.googleapis.com/genai-toolbox/v0.15.0/windows/amd64/toolb
{{% tab header="Gemini CLI" lang="en" %}}
1. Install the [Gemini
CLI](https://github.com/google-gemini/gemini-cli?tab=readme-ov-file#quickstart).
1. In your working directory, create a folder named `.gemini`. Within it,
create a `settings.json` file.
1. Add the following configuration and save:
```json
@@ -255,9 +258,12 @@ curl -O https://storage.googleapis.com/genai-toolbox/v0.15.0/windows/amd64/toolb
{{% tab header="Gemini Code Assist" lang="en" %}}
1. Install the [Gemini Code
Assist](https://marketplace.visualstudio.com/items?itemName=Google.geminicodeassist)
extension in Visual Studio Code.
1. Enable Agent Mode in Gemini Code Assist chat.
1. In your working directory, create a folder named `.gemini`. Within it,
create a `settings.json` file.
1. Add the following configuration and save:
```json
@@ -279,7 +285,8 @@ curl -O https://storage.googleapis.com/genai-toolbox/v0.15.0/windows/amd64/toolb
Your AI tool is now connected to Cloud SQL for SQL Server using MCP.
The `cloud-sql-mssql-admin` server provides tools for managing your Cloud SQL
instances and interacting with your database:
* **create_instance**: Creates a new Cloud SQL for SQL Server instance.
* **get_instance**: Gets information about a Cloud SQL instance.
* **list_instances**: Lists Cloud SQL instances in a project.

View File

@@ -6,8 +6,9 @@ description: >
Create and manage Cloud SQL for MySQL (Admin) using Toolbox.
---
This guide covers how to use [MCP Toolbox for Databases][toolbox] to expose your
developer assistant tools to create and manage Cloud SQL for MySQL
instances, databases, and users:
* [Cursor][cursor]
* [Windsurf][windsurf] (Codium)
@@ -235,8 +236,10 @@ curl -O https://storage.googleapis.com/genai-toolbox/v0.15.0/windows/amd64/toolb
{{% tab header="Gemini CLI" lang="en" %}}
1. Install the [Gemini
CLI](https://github.com/google-gemini/gemini-cli?tab=readme-ov-file#quickstart).
1. In your working directory, create a folder named `.gemini`. Within it,
create a `settings.json` file.
1. Add the following configuration and save:
```json
@@ -255,9 +258,12 @@ curl -O https://storage.googleapis.com/genai-toolbox/v0.15.0/windows/amd64/toolb
{{% tab header="Gemini Code Assist" lang="en" %}}
1. Install the [Gemini Code
Assist](https://marketplace.visualstudio.com/items?itemName=Google.geminicodeassist)
extension in Visual Studio Code.
1. Enable Agent Mode in Gemini Code Assist chat.
1. In your working directory, create a folder named `.gemini`. Within it,
create a `settings.json` file.
1. Add the following configuration and save:
```json
@@ -279,7 +285,8 @@ curl -O https://storage.googleapis.com/genai-toolbox/v0.15.0/windows/amd64/toolb
Your AI tool is now connected to Cloud SQL for MySQL using MCP.
The `cloud-sql-mysql-admin` server provides tools for managing your Cloud SQL
instances and interacting with your database:
* **create_instance**: Creates a new Cloud SQL for MySQL instance.
* **get_instance**: Gets information about a Cloud SQL instance.
* **list_instances**: Lists Cloud SQL instances in a project.

View File

@@ -6,8 +6,9 @@ description: >
Create and manage Cloud SQL for PostgreSQL (Admin) using Toolbox.
---
This guide covers how to use [MCP Toolbox for Databases][toolbox] to expose your
developer assistant tools to create and manage Cloud SQL for PostgreSQL
instances, databases, and users:
* [Cursor][cursor]
* [Windsurf][windsurf] (Codium)
@@ -235,8 +236,10 @@ curl -O https://storage.googleapis.com/genai-toolbox/v0.15.0/windows/amd64/toolb
{{% tab header="Gemini CLI" lang="en" %}}
1. Install the [Gemini
CLI](https://github.com/google-gemini/gemini-cli?tab=readme-ov-file#quickstart).
1. In your working directory, create a folder named `.gemini`. Within it,
create a `settings.json` file.
1. Add the following configuration and save:
```json
@@ -255,9 +258,12 @@ curl -O https://storage.googleapis.com/genai-toolbox/v0.15.0/windows/amd64/toolb
{{% tab header="Gemini Code Assist" lang="en" %}}
1. Install the [Gemini Code
Assist](https://marketplace.visualstudio.com/items?itemName=Google.geminicodeassist)
extension in Visual Studio Code.
1. Enable Agent Mode in Gemini Code Assist chat.
1. In your working directory, create a folder named `.gemini`. Within it,
create a `settings.json` file.
1. Add the following configuration and save:
```json
@@ -279,7 +285,8 @@ curl -O https://storage.googleapis.com/genai-toolbox/v0.15.0/windows/amd64/toolb
Your AI tool is now connected to Cloud SQL for PostgreSQL using MCP.
The `cloud-sql-postgres-admin` server provides tools for managing your Cloud SQL
instances and interacting with your database:
* **create_instance**: Creates a new Cloud SQL for PostgreSQL instance.
* **get_instance**: Gets information about a Cloud SQL instance.
* **list_instances**: Lists Cloud SQL instances in a project.

View File

@@ -5,7 +5,11 @@ weight: 2
description: "Connect your IDE to SQL Server using Toolbox."
---
[Model Context Protocol (MCP)](https://modelcontextprotocol.io/introduction) is
an open protocol for connecting Large Language Models (LLMs) to data sources
like SQL Server. This guide covers how to use [MCP Toolbox for
Databases][toolbox] to expose your developer assistant tools to a SQL Server
instance:
* [Cursor][cursor]
* [Windsurf][windsurf] (Codium)
@@ -28,11 +32,15 @@ description: "Connect your IDE to SQL Server using Toolbox."
## Set up the database
1. [Create or select a SQL Server
instance.](https://www.microsoft.com/en-us/sql-server/sql-server-downloads)
## Install MCP Toolbox
1. Download the latest version of Toolbox as a binary. Select the [correct
binary](https://github.com/googleapis/genai-toolbox/releases) corresponding
to your OS and CPU architecture. You are required to use Toolbox version
v0.10.0+:
<!-- {x-release-please-start-version} -->
{{< tabpane persist=header >}}
@@ -71,9 +79,11 @@ curl -O https://storage.googleapis.com/genai-toolbox/v0.14.0/windows/amd64/toolb
{{< tabpane text=true >}}
{{% tab header="Claude code" lang="en" %}}
1. Install [Claude
Code](https://docs.anthropic.com/en/docs/agents-and-tools/claude-code/overview).
1. Create a `.mcp.json` file in your project root if it doesn't exist.
1. Add the following configuration, replace the environment variables with your
values, and save:
```json
{
@@ -99,7 +109,8 @@ curl -O https://storage.googleapis.com/genai-toolbox/v0.14.0/windows/amd64/toolb
1. Open [Claude desktop](https://claude.ai/download) and navigate to Settings.
1. Under the Developer tab, tap Edit Config to open the configuration file.
1. Add the following configuration, replace the environment variables with your
values, and save:
```json
{
@@ -120,13 +131,16 @@ curl -O https://storage.googleapis.com/genai-toolbox/v0.14.0/windows/amd64/toolb
```
1. Restart Claude desktop.
1. From the new chat screen, you should see a hammer (MCP) icon appear with the
new MCP server available.
{{% /tab %}}
{{% tab header="Cline" lang="en" %}}
1. Open the [Cline](https://github.com/cline/cline) extension in VS Code and
tap the **MCP Servers** icon.
1. Tap Configure MCP Servers to open the configuration file.
1. Add the following configuration, replace the environment variables with your
values, and save:
```json
{
@@ -146,13 +160,15 @@ curl -O https://storage.googleapis.com/genai-toolbox/v0.14.0/windows/amd64/toolb
}
```
1. You should see a green active status after the server is successfully
connected.
{{% /tab %}}
{{% tab header="Cursor" lang="en" %}}
1. Create a `.cursor` directory in your project root if it doesn't exist.
1. Create a `.cursor/mcp.json` file if it doesn't exist and open it.
1. Add the following configuration, replace the environment variables with your
values, and save:
```json
{
@@ -172,13 +188,17 @@ curl -O https://storage.googleapis.com/genai-toolbox/v0.14.0/windows/amd64/toolb
}
```
1. Open [Cursor](https://www.cursor.com/) and navigate to **Settings > Cursor
Settings > MCP**. You should see a green active status after the server is
successfully connected.
{{% /tab %}}
{{% tab header="Visual Studio Code (Copilot)" lang="en" %}}
1. Open [VS Code](https://code.visualstudio.com/docs/copilot/overview) and
create a `.vscode` directory in your project root if it doesn't exist.
1. Create a `.vscode/mcp.json` file if it doesn't exist and open it.
1. Add the following configuration, replace the environment variables with your
values, and save:
```json
{
@@ -200,9 +220,11 @@ curl -O https://storage.googleapis.com/genai-toolbox/v0.14.0/windows/amd64/toolb
{{% /tab %}}
{{% tab header="Windsurf" lang="en" %}}
1. Open [Windsurf](https://docs.codeium.com/windsurf) and navigate to the
Cascade assistant.
1. Tap on the hammer (MCP) icon, then Configure to open the configuration file.
1. Add the following configuration, replace the environment variables with your
values, and save:
```json
{
@@ -224,9 +246,12 @@ curl -O https://storage.googleapis.com/genai-toolbox/v0.14.0/windows/amd64/toolb
{{% /tab %}}
{{% tab header="Gemini CLI" lang="en" %}}
1. Install the [Gemini
CLI](https://github.com/google-gemini/gemini-cli?tab=readme-ov-file#quickstart).
1. In your working directory, create a folder named `.gemini`. Within it,
create a `settings.json` file.
1. Add the following configuration, replace the environment variables with your
values, and then save:
```json
{
@@ -248,10 +273,14 @@ curl -O https://storage.googleapis.com/genai-toolbox/v0.14.0/windows/amd64/toolb
{{% /tab %}}
{{% tab header="Gemini Code Assist" lang="en" %}}
1. Install the [Gemini Code
Assist](https://marketplace.visualstudio.com/items?itemName=Google.geminicodeassist)
extension in Visual Studio Code.
1. Enable Agent Mode in Gemini Code Assist chat.
1. In your working directory, create a folder named `.gemini`. Within it,
create a `settings.json` file.
1. Add the following configuration, replace the environment variables with your
values, and then save:
```json
{
@@ -275,7 +304,9 @@ curl -O https://storage.googleapis.com/genai-toolbox/v0.14.0/windows/amd64/toolb
## Use Tools
Your AI tool is now connected to SQL Server using MCP. Try asking your AI
assistant to list tables, create a table, or define and execute other SQL
statements.
The following tools are available to the LLM:
@@ -283,5 +314,6 @@ The following tools are available to the LLM:
1. **execute_sql**: execute any SQL statement
{{< notice note >}}
Prebuilt tools are pre-1.0, so expect some tool changes between versions. LLMs
will adapt to the tools available, so this shouldn't affect most users.
{{< /notice >}}

View File

@@ -5,7 +5,10 @@ weight: 2
description: "Connect your IDE to MySQL using Toolbox."
---
[Model Context Protocol (MCP)](https://modelcontextprotocol.io/introduction) is
an open protocol for connecting Large Language Models (LLMs) to data sources
like MySQL. This guide covers how to use [MCP Toolbox for Databases][toolbox] to
expose your developer assistant tools to a MySQL instance:
* [Cursor][cursor]
* [Windsurf][windsurf] (Codium)
@@ -32,7 +35,10 @@ description: "Connect your IDE to MySQL using Toolbox."
## Install MCP Toolbox
1. Download the latest version of Toolbox as a binary. Select the [correct
binary](https://github.com/googleapis/genai-toolbox/releases) corresponding
to your OS and CPU architecture. You are required to use Toolbox version
v0.10.0+:
<!-- {x-release-please-start-version} -->
{{< tabpane persist=header >}}
@@ -71,9 +77,11 @@ curl -O https://storage.googleapis.com/genai-toolbox/v0.14.0/windows/amd64/toolb
{{< tabpane text=true >}}
{{% tab header="Claude code" lang="en" %}}
1. Install [Claude
Code](https://docs.anthropic.com/en/docs/agents-and-tools/claude-code/overview).
1. Create a `.mcp.json` file in your project root if it doesn't exist.
1. Add the following configuration, replace the environment variables with your
values, and save:
```json
{
@@ -99,7 +107,8 @@ curl -O https://storage.googleapis.com/genai-toolbox/v0.14.0/windows/amd64/toolb
1. Open [Claude desktop](https://claude.ai/download) and navigate to Settings.
1. Under the Developer tab, tap Edit Config to open the configuration file.
1. Add the following configuration, replace the environment variables with your
values, and save:
```json
{
@@ -120,13 +129,16 @@ curl -O https://storage.googleapis.com/genai-toolbox/v0.14.0/windows/amd64/toolb
```
1. Restart Claude desktop.
1. From the new chat screen, you should see a hammer (MCP) icon appear with the
new MCP server available.
{{% /tab %}}
{{% tab header="Cline" lang="en" %}}
1. Open the [Cline](https://github.com/cline/cline) extension in VS Code and
tap the **MCP Servers** icon.
1. Tap Configure MCP Servers to open the configuration file.
1. Add the following configuration, replace the environment variables with your
values, and save:
```json
{
@@ -146,13 +158,15 @@ curl -O https://storage.googleapis.com/genai-toolbox/v0.14.0/windows/amd64/toolb
}
```
1. You should see a green active status after the server is successfully
connected.
{{% /tab %}}
{{% tab header="Cursor" lang="en" %}}
1. Create a `.cursor` directory in your project root if it doesn't exist.
1. Create a `.cursor/mcp.json` file if it doesn't exist and open it.
1. Add the following configuration, replace the environment variables with your
values, and save:
```json
{
@@ -172,13 +186,17 @@ curl -O https://storage.googleapis.com/genai-toolbox/v0.14.0/windows/amd64/toolb
}
```
1. Open [Cursor](https://www.cursor.com/) and navigate to **Settings > Cursor
Settings > MCP**. You should see a green active status after the server is
successfully connected.
{{% /tab %}}
{{% tab header="Visual Studio Code (Copilot)" lang="en" %}}
1. Open [VS Code](https://code.visualstudio.com/docs/copilot/overview) and
create a `.vscode` directory in your project root if it doesn't exist.
1. Create a `.vscode/mcp.json` file if it doesn't exist and open it.
1. Add the following configuration, replace the environment variables with your
values, and save:
```json
{
@@ -200,9 +218,11 @@ curl -O https://storage.googleapis.com/genai-toolbox/v0.14.0/windows/amd64/toolb
{{% /tab %}}
{{% tab header="Windsurf" lang="en" %}}
1. Open [Windsurf](https://docs.codeium.com/windsurf) and navigate to the
Cascade assistant.
1. Tap on the hammer (MCP) icon, then Configure to open the configuration file.
1. Add the following configuration, replace the environment variables with your
values, and save:
```json
{
@@ -224,9 +244,12 @@ curl -O https://storage.googleapis.com/genai-toolbox/v0.14.0/windows/amd64/toolb
{{% /tab %}}
{{% tab header="Gemini CLI" lang="en" %}}
1. Install the [Gemini
CLI](https://github.com/google-gemini/gemini-cli?tab=readme-ov-file#quickstart).
1. In your working directory, create a folder named `.gemini`. Within it,
create a `settings.json` file.
1. Add the following configuration, replace the environment variables with your
values, and then save:
```json
{
@@ -248,10 +271,14 @@ curl -O https://storage.googleapis.com/genai-toolbox/v0.14.0/windows/amd64/toolb
{{% /tab %}}
{{% tab header="Gemini Code Assist" lang="en" %}}
1. Install the [Gemini Code
Assist](https://marketplace.visualstudio.com/items?itemName=Google.geminicodeassist)
extension in Visual Studio Code.
1. Enable Agent Mode in Gemini Code Assist chat.
1. In your working directory, create a folder named `.gemini`. Within it,
create a `settings.json` file.
1. Add the following configuration, replace the environment variables with your
values, and then save:
```json
{
@@ -275,7 +302,8 @@ curl -O https://storage.googleapis.com/genai-toolbox/v0.14.0/windows/amd64/toolb
## Use Tools
Your AI tool is now connected to MySQL using MCP. Try asking your AI assistant
to list tables, create a table, or define and execute other SQL statements.
The following tools are available to the LLM:
@@ -283,5 +311,6 @@ The following tools are available to the LLM:
1. **execute_sql**: execute any SQL statement
{{< notice note >}}
Prebuilt tools are pre-1.0, so expect some tool changes between versions. LLMs
will adapt to the tools available, so this shouldn't affect most users.
{{< /notice >}}

View File

@@ -5,7 +5,10 @@ weight: 2
description: "Connect your IDE to Neo4j using Toolbox."
---
[Model Context Protocol (MCP)](https://modelcontextprotocol.io/introduction) is
an open protocol for connecting Large Language Models (LLMs) to data sources
like Neo4j. This guide covers how to use [MCP Toolbox for Databases][toolbox] to
expose your developer assistant tools to a Neo4j instance:
* [Cursor][cursor]
* [Windsurf][windsurf] (Codium)
@@ -28,11 +31,15 @@ description: "Connect your IDE to Neo4j using Toolbox."
## Set up the database
1. [Create or select a Neo4j
instance.](https://neo4j.com/cloud/platform/aura-graph-database/)
## Install MCP Toolbox
1. Download the latest version of Toolbox as a binary. Select the [correct
binary](https://github.com/googleapis/genai-toolbox/releases) corresponding
to your OS and CPU architecture. You are required to use Toolbox version
v0.15.0+:
<!-- {x-release-please-start-version} -->
{{< tabpane persist=header >}}
@@ -71,9 +78,11 @@ curl -O https://storage.googleapis.com/genai-toolbox/v0.15.0/windows/amd64/toolb
{{< tabpane text=true >}}
{{% tab header="Claude code" lang="en" %}}
1. Install [Claude
Code](https://docs.anthropic.com/en/docs/agents-and-tools/claude-code/overview).
1. Create a `.mcp.json` file in your project root if it doesn't exist.
1. Add the following configuration, replace the environment variables with your
values, and save:
```json
{
@@ -98,7 +107,8 @@ curl -O https://storage.googleapis.com/genai-toolbox/v0.15.0/windows/amd64/toolb
1. Open [Claude desktop](https://claude.ai/download) and navigate to Settings.
1. Under the Developer tab, tap Edit Config to open the configuration file.
1. Add the following configuration, replace the environment variables with your
values, and save:
```json
{
@@ -118,13 +128,16 @@ curl -O https://storage.googleapis.com/genai-toolbox/v0.15.0/windows/amd64/toolb
```
1. Restart Claude desktop.
1. From the new chat screen, you should see a hammer (MCP) icon appear with the
new MCP server available.
{{% /tab %}}
{{% tab header="Cline" lang="en" %}}
1. Open the [Cline](https://github.com/cline/cline) extension in VS Code and
tap the **MCP Servers** icon.
1. Tap Configure MCP Servers to open the configuration file.
1. Add the following configuration, replace the environment variables with your
values, and save:
```json
{
@@ -168,13 +181,17 @@ curl -O https://storage.googleapis.com/genai-toolbox/v0.15.0/windows/amd64/toolb
}
```
1. Open [Cursor](https://www.cursor.com/) and navigate to **Settings > Cursor
Settings > MCP**. You should see a green active status after the server is
successfully connected.
{{% /tab %}}
{{% tab header="Visual Studio Code (Copilot)" lang="en" %}}
1. Open [VS Code](https://code.visualstudio.com/docs/copilot/overview) and
create a `.vscode` directory in your project root if it doesn't exist.
1. Create a `.vscode/mcp.json` file if it doesn't exist and open it.
1. Add the following configuration, replace the environment variables with your
values, and save:
```json
{
@@ -197,9 +214,11 @@ curl -O https://storage.googleapis.com/genai-toolbox/v0.15.0/windows/amd64/toolb
{{% /tab %}}
{{% tab header="Windsurf" lang="en" %}}
1. Open [Windsurf](https://docs.codeium.com/windsurf) and navigate to the
Cascade assistant.
1. Tap on the hammer (MCP) icon, then Configure to open the configuration file.
1. Add the following configuration, replace the environment variables with your
values, and save:
```json
{
@@ -220,9 +239,12 @@ curl -O https://storage.googleapis.com/genai-toolbox/v0.15.0/windows/amd64/toolb
{{% /tab %}}
{{% tab header="Gemini CLI" lang="en" %}}
1. Install the [Gemini
CLI](https://github.com/google-gemini/gemini-cli?tab=readme-ov-file#quickstart).
1. In your working directory, create a folder named `.gemini`. Within it,
create a `settings.json` file.
1. Add the following configuration, replace the environment variables with your
values, and then save:
```json
{
@@ -243,10 +265,14 @@ curl -O https://storage.googleapis.com/genai-toolbox/v0.15.0/windows/amd64/toolb
{{% /tab %}}
{{% tab header="Gemini Code Assist" lang="en" %}}
1. Install the [Gemini Code
Assist](https://marketplace.visualstudio.com/items?itemName=Google.geminicodeassist)
extension in Visual Studio Code.
1. Enable Agent Mode in Gemini Code Assist chat.
1. In your working directory, create a folder named `.gemini`. Within it,
create a `settings.json` file.
1. Add the following configuration, replace the environment variables with your
values, and then save:
```json
{
@@ -269,13 +295,16 @@ curl -O https://storage.googleapis.com/genai-toolbox/v0.15.0/windows/amd64/toolb
## Use Tools
Your AI tool is now connected to Neo4j using MCP. Try asking your AI assistant
to get the graph schema or execute Cypher statements.
The following tools are available to the LLM:
1. **get_schema**: extracts the complete database schema, including details
about node labels, relationships, properties, constraints, and indexes.
1. **execute_cypher**: executes any arbitrary Cypher statement.
{{< notice note >}}
Prebuilt tools are pre-1.0, so expect some tool changes between versions. LLMs
will adapt to the tools available, so this shouldn't affect most users.
{{< /notice >}}

View File

@@ -5,7 +5,10 @@ weight: 2
description: "Connect your IDE to SQLite using Toolbox."
---
[Model Context Protocol (MCP)](https://modelcontextprotocol.io/introduction) is
an open protocol for connecting Large Language Models (LLMs) to data sources
like SQLite. This guide covers how to use [MCP Toolbox for Databases][toolbox]
to expose your developer assistant tools to a SQLite instance:
* [Cursor][cursor]
* [Windsurf][windsurf] (Codium)
@@ -32,7 +35,10 @@ description: "Connect your IDE to SQLite using Toolbox."
## Install MCP Toolbox
1. Download the latest version of Toolbox as a binary. Select the [correct
binary](https://github.com/googleapis/genai-toolbox/releases) corresponding
to your OS and CPU architecture. You are required to use Toolbox version
v0.10.0+:
<!-- {x-release-please-start-version} -->
{{< tabpane persist=header >}}
@@ -71,9 +77,11 @@ curl -O https://storage.googleapis.com/genai-toolbox/v1.0.0/windows/amd64/toolbo
{{< tabpane text=true >}}
{{% tab header="Claude code" lang="en" %}}
1. Install [Claude
Code](https://docs.anthropic.com/en/docs/agents-and-tools/claude-code/overview).
1. Create a `.mcp.json` file in your project root if it doesn't exist.
1. Add the following configuration, replace the environment variables with your
values, and save:
```json
{
@@ -95,7 +103,8 @@ curl -O https://storage.googleapis.com/genai-toolbox/v1.0.0/windows/amd64/toolbo
1. Open [Claude desktop](https://claude.ai/download) and navigate to Settings.
1. Under the Developer tab, tap Edit Config to open the configuration file.
1. Add the following configuration, replace the environment variables with your
values, and save:
```json
{
@@ -112,13 +121,16 @@ curl -O https://storage.googleapis.com/genai-toolbox/v1.0.0/windows/amd64/toolbo
```
1. Restart Claude desktop.
1. From the new chat screen, you should see a hammer (MCP) icon appear with the
new MCP server available.
{{% /tab %}}
{{% tab header="Cline" lang="en" %}}
1. Open the [Cline](https://github.com/cline/cline) extension in VS Code and
tap the **MCP Servers** icon.
1. Tap Configure MCP Servers to open the configuration file.
1. Add the following configuration, replace the environment variables with your
values, and save:
```json
{
@@ -134,13 +146,15 @@ curl -O https://storage.googleapis.com/genai-toolbox/v1.0.0/windows/amd64/toolbo
}
```
1. You should see a green active status after the server is successfully
connected.
{{% /tab %}}
{{% tab header="Cursor" lang="en" %}}
1. Create a `.cursor` directory in your project root if it doesn't exist.
1. Create a `.cursor/mcp.json` file if it doesn't exist and open it.
1. Add the following configuration, replace the environment variables with your
values, and save:
```json
{
@@ -156,13 +170,17 @@ curl -O https://storage.googleapis.com/genai-toolbox/v1.0.0/windows/amd64/toolbo
}
```
1. Open [Cursor](https://www.cursor.com/) and navigate to **Settings > Cursor
Settings > MCP**. You should see a green active status after the server is
successfully connected.
{{% /tab %}}
{{% tab header="Visual Studio Code (Copilot)" lang="en" %}}
1. Open [VS Code](https://code.visualstudio.com/docs/copilot/overview) and
create a `.vscode` directory in your project root if it doesn't exist.
1. Create a `.vscode/mcp.json` file if it doesn't exist and open it.
1. Add the following configuration, replace the environment variables with your
values, and save:
```json
{
@@ -180,9 +198,11 @@ curl -O https://storage.googleapis.com/genai-toolbox/v1.0.0/windows/amd64/toolbo
{{% /tab %}}
{{% tab header="Windsurf" lang="en" %}}
1. Open [Windsurf](https://docs.codeium.com/windsurf) and navigate to the
Cascade assistant.
1. Tap on the hammer (MCP) icon, then Configure to open the configuration file.
1. Add the following configuration, replace the environment variables with your
values, and save:
```json
{
@@ -200,9 +220,12 @@ curl -O https://storage.googleapis.com/genai-toolbox/v1.0.0/windows/amd64/toolbo
{{% /tab %}}
{{% tab header="Gemini CLI" lang="en" %}}
1. Install the [Gemini
CLI](https://github.com/google-gemini/gemini-cli?tab=readme-ov-file#quickstart).
1. In your working directory, create a folder named `.gemini`. Within it,
create a `settings.json` file.
1. Add the following configuration, replace the environment variables with your
values, and then save:
```json
{
@@ -220,10 +243,14 @@ curl -O https://storage.googleapis.com/genai-toolbox/v1.0.0/windows/amd64/toolbo
{{% /tab %}}
{{% tab header="Gemini Code Assist" lang="en" %}}
1. Install the [Gemini Code
Assist](https://marketplace.visualstudio.com/items?itemName=Google.geminicodeassist)
extension in Visual Studio Code.
1. Enable Agent Mode in Gemini Code Assist chat.
1. In your working directory, create a folder named `.gemini`. Within it,
create a `settings.json` file.
1. Add the following configuration, replace the environment variables with your
values, and then save:
```json
{
@@ -243,7 +270,8 @@ curl -O https://storage.googleapis.com/genai-toolbox/v1.0.0/windows/amd64/toolbo
## Use Tools
Your AI tool is now connected to SQLite using MCP. Try asking your AI assistant
to list tables, create a table, or define and execute other SQL statements.
The following tools are available to the LLM:
@@ -251,5 +279,6 @@ The following tools are available to the LLM:
1. **execute_sql**: execute any SQL statement
{{< notice note >}}
Prebuilt tools are pre-1.0, so expect some tool changes between versions. LLMs
will adapt to the tools available, so this shouldn't affect most users.
{{< /notice >}}

View File

@@ -103,10 +103,11 @@ section.
```bash
export IMAGE=us-central1-docker.pkg.dev/database-toolbox/toolbox/toolbox:latest
```
{{< notice note >}}
**The `$PORT` Environment Variable**
Google Cloud Run dictates the port your application must listen on by setting
the `$PORT` environment variable inside your container. This value defaults to
**8080**. Your application's `--port` argument **must** be set to listen on this
port. If there is a mismatch, the container will fail to start and the
deployment will time out.
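As a sketch, a deploy command consistent with this requirement might pass the
flags like so (the service name and remaining flags are illustrative; the full
deployment steps are below):

```bash
# Illustrative only: bind Toolbox to all interfaces on Cloud Run's $PORT.
gcloud run deploy toolbox \
  --image $IMAGE \
  --args="--address=0.0.0.0,--port=8080"
```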
@@ -209,18 +210,26 @@ Now, you can use this client to connect to the deployed Cloud Run instance!
## Troubleshooting
{{< notice note >}}
For any deployment or runtime error, the best first step is to check the logs for your service in the Google Cloud Console's Cloud Run section. They often contain the specific error message needed to diagnose the problem.
For any deployment or runtime error, the best first step is to check the logs
for your service in the Google Cloud Console's Cloud Run section. They often
contain the specific error message needed to diagnose the problem.
{{< /notice >}}
* **Deployment Fails with "Container failed to start":** This is almost always
caused by a port mismatch. Ensure your container's `--port` argument is set to
`8080` to match the `$PORT` environment variable provided by Cloud Run.
* **Client Receives Permission Denied Error (401 or 403):** If your client application (e.g., your local SDK) gets a `401 Unauthorized` or `403 Forbidden` error when trying to call your Cloud Run service, it means the client is not properly authenticated as an invoker.
* Ensure the user or service account calling the service has the **Cloud Run Invoker** (`roles/run.invoker`) IAM role.
* If running locally, make sure your Application Default Credentials are set up correctly by running `gcloud auth application-default login`.
* **Service Fails to Access Secrets (in logs):** If your application starts but the logs show errors like "permission denied" when trying to access Secret Manager, it means the Toolbox service account is missing permissions.
* Ensure the `toolbox-identity` service account has the **Secret Manager Secret Accessor** (`roles/secretmanager.secretAccessor`) IAM role.
* **Client Receives Permission Denied Error (401 or 403):** If your client
application (e.g., your local SDK) gets a `401 Unauthorized` or `403
Forbidden` error when trying to call your Cloud Run service, it means the
client is not properly authenticated as an invoker.
* Ensure the user or service account calling the service has the **Cloud Run
Invoker** (`roles/run.invoker`) IAM role.
* If running locally, make sure your Application Default Credentials are set
up correctly by running `gcloud auth application-default login`.
* **Service Fails to Access Secrets (in logs):** If your application starts but
the logs show errors like "permission denied" when trying to access Secret
Manager, it means the Toolbox service account is missing permissions.
* Ensure the `toolbox-identity` service account has the **Secret Manager
Secret Accessor** (`roles/secretmanager.secretAccessor`) IAM role.
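If a missing role above is the culprit, granting it is typically a single
`gcloud` command. A sketch, with placeholder principals and resource names:

```bash
# Let a caller invoke the Cloud Run service.
gcloud run services add-iam-policy-binding toolbox \
  --member="user:you@example.com" \
  --role="roles/run.invoker"

# Let the Toolbox service account read secrets in the project.
gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member="serviceAccount:toolbox-identity@$PROJECT_ID.iam.gserviceaccount.com" \
  --role="roles/secretmanager.secretAccessor"
```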

View File

@@ -6,7 +6,8 @@ description: >
How to effectively use Toolbox UI.
---
Toolbox UI is a built-in web interface that allows users to visually inspect and test out configured resources such as tools and toolsets.
Toolbox UI is a built-in web interface that allows users to visually inspect and
test out configured resources such as tools and toolsets.
## Launching Toolbox UI
@@ -16,8 +17,9 @@ To launch Toolbox's interactive UI, use the `--ui` flag.
./toolbox --ui
```
Toolbox UI will be served from the same host and port as the Toolbox Server, with the `/ui` suffix. Once Toolbox
is launched, the following INFO log with Toolbox UI's url will be shown:
Toolbox UI will be served from the same host and port as the Toolbox Server,
with the `/ui` suffix. Once Toolbox is launched, the following INFO log with
Toolbox UI's URL will be shown:
```bash
INFO "Toolbox UI is up and running at: http://localhost:5000/ui"
@@ -25,11 +27,13 @@ INFO "Toolbox UI is up and running at: http://localhost:5000/ui"
## Navigating the Tools Page
The tools page shows all tools loaded from your configuration file. This corresponds to the default toolset (represented by an empty string). Each tool's name on this page will exactly match its name in the configuration
file.
The tools page shows all tools loaded from your configuration file. This
corresponds to the default toolset (represented by an empty string). Each tool's
name on this page will exactly match its name in the configuration file.
To view details for a specific tool, click on the tool name. The main content area will be populated
with the tool name, description, and available parameters.
To view details for a specific tool, click on the tool name. The main content
area will be populated with the tool name, description, and available
parameters.
![Tools Page](./tools.png)
@@ -45,12 +49,17 @@ with the tool name, description, and available parameters.
### Optional Parameters
Toolbox allows users to add [optional parameters](../../resources/tools/#basic-parameters) with or without a default value.
Toolbox allows users to add [optional
parameters](../../resources/tools/#basic-parameters) with or without a default
value.
To exclude a parameter, uncheck the box to the right of an associated parameter, and that parameter will not be
included in the request body. If the parameter is not sent, Toolbox will either use it as `nil` value or the `default` value, if configured. If the parameter is required, Toolbox will throw an error.
To exclude a parameter, uncheck the box to the right of an associated parameter,
and that parameter will not be included in the request body. If the parameter is
not sent, Toolbox will either use a `nil` value or the configured `default`
value. If the parameter is required, Toolbox will throw an error.
When the box is checked, parameter will be sent exactly as entered in the response field (e.g. empty string).
When the box is checked, the parameter will be sent exactly as entered in the
field (e.g., an empty string).
![Optional Parameter checked example](./optional-param-checked.png)
@@ -58,34 +67,41 @@ When the box is checked, parameter will be sent exactly as entered in the respon
### Editing Headers
To edit headers, press the "Edit Headers" button to display the header modal. Within this modal,
users can make direct edits by typing into the header's text area.
To edit headers, press the "Edit Headers" button to display the header modal.
Within this modal, users can make direct edits by typing into the header's text
area.
Toolbox UI validates that the headers are in correct JSON format. Other header-related errors (e.g.,
incorrect header names or values required by the tool) will be reported in the Response section
after running the tool.
Toolbox UI validates that the headers are in correct JSON format. Other
header-related errors (e.g., incorrect header names or values required by the
tool) will be reported in the Response section after running the tool.
![Edit Headers](./edit-headers.png)
#### Google OAuth
Currently, Toolbox supports Google OAuth 2.0 as an AuthService, which allows tools to utilize
authorized parameters. When a tool uses an authorized parameter, the parameter will be displayed
but not editable, as it will be populated from the authentication token.
Currently, Toolbox supports Google OAuth 2.0 as an AuthService, which allows
tools to utilize authorized parameters. When a tool uses an authorized
parameter, the parameter will be displayed but not editable, as it will be
populated from the authentication token.
To provide the token, add your Google OAuth ID Token to the request header using the "Edit Headers"
button and modal described above. The key should be the name of your AuthService as defined in
your tool configuration file, suffixed with `_token`. The value should be your ID token as a string.
To provide the token, add your Google OAuth ID Token to the request header using
the "Edit Headers" button and modal described above. The key should be the name
of your AuthService as defined in your tool configuration file, suffixed with
`_token`. The value should be your ID token as a string.
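As a sketch, assuming an AuthService named `my-google-auth` in your tools
configuration file, the header entry would take the following shape. Note that
an ID token printed by `gcloud` carries gcloud's own audience, which may not
match your configured Client ID; the "Auto Setup" flow below is the more
reliable path.

```bash
# One manual way to obtain a Google ID token (requires prior gcloud login).
gcloud auth print-identity-token

# The resulting header entry would look like (hypothetical AuthService name):
#   {"my-google-auth_token": "<ID_TOKEN>"}
```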
1. Select a tool that requires [authenticated parameters]()
1. The auth parameter's text field is greyed out. This is because it cannot be entered manually and will
be parsed from the resolved auth token
1. The auth parameter's text field is greyed out. This is because it cannot be
entered manually and will be parsed from the resolved auth token
1. To update request headers with the token, select "Edit Headers"
1. (Optional) If you wish to manually edit the header, checkout the dropdown "How to extract Google OAuth ID Token manually" for guidance on retrieving ID token
1. To edit the header automatically, click the "Auto Setup" button that is associated with your Auth Profile
1. (Optional) If you wish to manually edit the header, check out the dropdown
"How to extract Google OAuth ID Token manually" for guidance on retrieving
the ID token
1. To edit the header automatically, click the "Auto Setup" button that is
associated with your Auth Profile
1. Enter the Client ID defined in your tools configuration file
1. Click "Continue"
1. Click "Sign in With Google" and login with your associated google account. This should automatically populate the header text area with your token
1. Click "Sign in With Google" and login with your associated google account.
This should automatically populate the header text area with your token
1. Click "Save"
1. Click "Run Tool"
@@ -100,10 +116,11 @@ be parsed from the resolved auth token
## Navigating the Toolsets Page
Through the toolsets page, users can search for a specific toolset to retrieve tools from. Simply
enter the toolset name in the search bar, and press "Enter" to retrieve the associated tools.
Through the toolsets page, users can search for a specific toolset to retrieve
tools from. Simply enter the toolset name in the search bar, and press "Enter"
to retrieve the associated tools.
If the toolset name is not defined within the tools configuration file, an error message will be
displayed.
If the toolset name is not defined within the tools configuration file, an error
message will be displayed.
![Toolsets Page](./toolsets.png)

View File

@@ -8,24 +8,24 @@ description: >
## Reference
| Flag (Short) | Flag (Long) | Description | Default |
|---|---|---|---|
| `-a` | `--address` | Address of the interface the server will listen on. | `127.0.0.1` |
| | `--disable-reload` | Disables dynamic reloading of tools file. | |
| `-h` | `--help` | help for toolbox | |
| | `--log-level` | Specify the minimum level logged. Allowed: 'DEBUG', 'INFO', 'WARN', 'ERROR'. | `info` |
| | `--logging-format` | Specify logging format to use. Allowed: 'standard' or 'JSON'. | `standard` |
| `-p` | `--port` | Port the server will listen on. | `5000` |
| | `--prebuilt` | Use a prebuilt tool configuration by source type. Cannot be used with --tools-file. See [Prebuilt Tools Reference](prebuilt-tools.md) for allowed values. | |
| | `--stdio` | Listens via MCP STDIO instead of acting as a remote HTTP server. | |
| | `--telemetry-gcp` | Enable exporting directly to Google Cloud Monitoring. | |
| | `--telemetry-otlp` | Enable exporting using OpenTelemetry Protocol (OTLP) to the specified endpoint (e.g. 'http://127.0.0.1:4318') | |
| | `--telemetry-service-name` | Sets the value of the service.name resource attribute for telemetry data. | `toolbox` |
| | `--tools-file` | File path specifying the tool configuration. Cannot be used with --prebuilt, --tools-files, or --tools-folder. | |
| | `--tools-files` | Multiple file paths specifying tool configurations. Files will be merged. Cannot be used with --prebuilt, --tools-file, or --tools-folder. | |
| | `--tools-folder` | Directory path containing YAML tool configuration files. All .yaml and .yml files in the directory will be loaded and merged. Cannot be used with --prebuilt, --tools-file, or --tools-files. | |
| | `--ui` | Launches the Toolbox UI web server. | |
| `-v` | `--version` | version for toolbox | |
| Flag (Short) | Flag (Long) | Description | Default |
|--------------|----------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------|
| `-a` | `--address` | Address of the interface the server will listen on. | `127.0.0.1` |
| | `--disable-reload` | Disables dynamic reloading of tools file. | |
| `-h` | `--help` | help for toolbox | |
| | `--log-level` | Specify the minimum level logged. Allowed: 'DEBUG', 'INFO', 'WARN', 'ERROR'. | `info` |
| | `--logging-format` | Specify logging format to use. Allowed: 'standard' or 'JSON'. | `standard` |
| `-p` | `--port` | Port the server will listen on. | `5000` |
| | `--prebuilt` | Use a prebuilt tool configuration by source type. Cannot be used with --tools-file. See [Prebuilt Tools Reference](prebuilt-tools.md) for allowed values. | |
| | `--stdio` | Listens via MCP STDIO instead of acting as a remote HTTP server. | |
| | `--telemetry-gcp` | Enable exporting directly to Google Cloud Monitoring. | |
| | `--telemetry-otlp` | Enable exporting using OpenTelemetry Protocol (OTLP) to the specified endpoint (e.g. 'http://127.0.0.1:4318') | |
| | `--telemetry-service-name` | Sets the value of the service.name resource attribute for telemetry data. | `toolbox` |
| | `--tools-file` | File path specifying the tool configuration. Cannot be used with --prebuilt, --tools-files, or --tools-folder. | |
| | `--tools-files` | Multiple file paths specifying tool configurations. Files will be merged. Cannot be used with --prebuilt, --tools-file, or --tools-folder. | |
| | `--tools-folder` | Directory path containing YAML tool configuration files. All .yaml and .yml files in the directory will be loaded and merged. Cannot be used with --prebuilt, --tools-file, or --tools-files. | |
| | `--ui` | Launches the Toolbox UI web server. | |
| `-v` | `--version` | version for toolbox | |
## Examples
@@ -59,10 +59,15 @@ The CLI supports multiple mutually exclusive ways to specify tool configurations
- `--tools-folder`: Directory containing YAML files to load and merge
**Prebuilt Configurations:**
- `--prebuilt`: Use predefined configurations for specific database types (e.g., 'bigquery', 'postgres', 'spanner'). See [Prebuilt Tools Reference](prebuilt-tools.md) for allowed values.
- `--prebuilt`: Use predefined configurations for specific database types (e.g.,
'bigquery', 'postgres', 'spanner'). See [Prebuilt Tools
Reference](prebuilt-tools.md) for allowed values.
{{< notice tip >}}
The CLI enforces mutual exclusivity between configuration source flags, preventing simultaneous use of `--prebuilt` with file-based options, and ensuring only one of `--tools-file`, `--tools-files`, or `--tools-folder` is used at a time.
The CLI enforces mutual exclusivity between configuration source flags,
preventing simultaneous use of `--prebuilt` with file-based options, and
ensuring only one of `--tools-file`, `--tools-files`, or `--tools-folder` is
used at a time.
{{< /notice >}}
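For example (file and folder names are placeholders; the list syntax for
`--tools-files` assumes comma separation):

```bash
./toolbox --tools-file tools.yaml                       # OK: one file
./toolbox --tools-files one.yaml,two.yaml               # OK: files merged
./toolbox --tools-folder ./configs                      # OK: folder of YAML files
./toolbox --prebuilt postgres --tools-file tools.yaml   # Error: mutually exclusive
```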
### Hot Reload
@@ -72,4 +77,6 @@ Toolbox enables dynamic reloading by default. To disable, use the
### Toolbox UI
To launch Toolbox's interactive UI, use the `--ui` flag. This allows you to test tools and toolsets with features such as authorized parameters. To learn more, visit [Toolbox UI](../how-to/toolbox-ui/index.md).
To launch Toolbox's interactive UI, use the `--ui` flag. This allows you to test
tools and toolsets with features such as authorized parameters. To learn more,
visit [Toolbox UI](../how-to/toolbox-ui/index.md).
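A minimal sketch, combining the UI with a tools file (the file name is a
placeholder):

```bash
# The UI is then served at http://localhost:5000/ui by default.
./toolbox --tools-file tools.yaml --ui
```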

View File

@@ -6,9 +6,12 @@ description: >
This page lists all the prebuilt tools available.
---
Prebuilt tools are reusable, pre-packaged toolsets that are designed to extend the capabilities of agents. These tools are built to be generic and adaptable, allowing developers to interact with and take action on databases.
Prebuilt tools are reusable, pre-packaged toolsets that are designed to extend
the capabilities of agents. These tools are built to be generic and adaptable,
allowing developers to interact with and take action on databases.
See guides, [Connect from your IDE](../how-to/connect-ide/_index.md), for details on how to connect your AI tools (IDEs) to databases via Toolbox and MCP.
See guides, [Connect from your IDE](../how-to/connect-ide/_index.md), for
details on how to connect your AI tools (IDEs) to databases via Toolbox and MCP.
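As a quick sketch, running a prebuilt configuration is an environment variable
plus a flag; for example, using the SQLite entry documented below:

```bash
export SQLITE_DATABASE="./sample.db"   # placeholder path
./toolbox --prebuilt sqlite
```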
## AlloyDB Postgres
@@ -19,17 +22,23 @@ See guides, [Connect from your IDE](../how-to/connect-ide/_index.md), for detail
* `ALLOYDB_POSTGRES_CLUSTER`: The ID of your AlloyDB cluster.
* `ALLOYDB_POSTGRES_INSTANCE`: The ID of your AlloyDB instance.
* `ALLOYDB_POSTGRES_DATABASE`: The name of the database to connect to.
* `ALLOYDB_POSTGRES_USER`: (Optional) The database username. Defaults to IAM authentication if unspecified.
* `ALLOYDB_POSTGRES_PASSWORD`: (Optional) The password for the database user. Defaults to IAM authentication if unspecified.
* `ALLOYDB_POSTGRES_IP_TYPE`: (Optional) The IP type i.e. "Public" or "Private" (Default: Public).
* `ALLOYDB_POSTGRES_USER`: (Optional) The database username. Defaults to
IAM authentication if unspecified.
* `ALLOYDB_POSTGRES_PASSWORD`: (Optional) The password for the database
user. Defaults to IAM authentication if unspecified.
* `ALLOYDB_POSTGRES_IP_TYPE`: (Optional) The IP type i.e. "Public" or
"Private" (Default: Public).
* **Permissions:**
* **AlloyDB Client** (`roles/alloydb.client`) to connect to the instance.
* Database-level permissions (e.g., `SELECT`, `INSERT`) are required to execute queries.
* Database-level permissions (e.g., `SELECT`, `INSERT`) are required to
execute queries.
* **Tools:**
* `execute_sql`: Executes a SQL query.
* `list_tables`: Lists tables in the database.
* `list_autovacuum_configurations`: Lists autovacuum configurations in the database.
* `list_memory_configurations`: Lists memory-related configurations in the database.
* `list_autovacuum_configurations`: Lists autovacuum configurations in the
database.
* `list_memory_configurations`: Lists memory-related configurations in the
database.
* `list_top_bloated_tables`: Lists the top bloated tables in the database.
* `list_replication_slots`: Lists replication slots in the database.
* `list_invalid_indexes`: Lists invalid indexes in the database.
@@ -39,7 +48,8 @@ See guides, [Connect from your IDE](../how-to/connect-ide/_index.md), for detail
* `--prebuilt` value: `alloydb-postgres-admin`
* **Permissions:**
* **AlloyDB Viewer** (`roles/alloydb.viewer`) is required for `list` and `get` tools.
* **AlloyDB Viewer** (`roles/alloydb.viewer`) is required for `list` and
`get` tools.
* **AlloyDB Admin** (`roles/alloydb.admin`) is required for `create` tools.
* **Tools:**
* `create_cluster`: Creates a new AlloyDB cluster.
@@ -50,17 +60,23 @@ See guides, [Connect from your IDE](../how-to/connect-ide/_index.md), for detail
* `get_instance`: Gets information about a specified AlloyDB instance.
* `create_user`: Creates a new database user in an AlloyDB cluster.
* `list_users`: Lists all database users within an AlloyDB cluster.
* `get_user`: Gets information about a specified database user in an AlloyDB cluster.
* `wait_for_operation`: Polls the operations API to track the status of long-running operations.
* `get_user`: Gets information about a specified database user in an
AlloyDB cluster.
* `wait_for_operation`: Polls the operations API to track the status of
long-running operations.
## AlloyDB Postgres Observability
* `--prebuilt` value: `alloydb-postgres-observability`
* **Permissions:**
* **Monitoring Viewer** (`roles/monitoring.viewer`) is required on the project to view monitoring data.
* **Monitoring Viewer** (`roles/monitoring.viewer`) is required on the
project to view monitoring data.
* **Tools:**
* `get_system_metrics`: Fetches system level cloud monitoring data (timeseries metrics) for an AlloyDB instance using a PromQL query.
* `get_query_metrics`: Fetches query level cloud monitoring data (timeseries metrics) for queries running in an AlloyDB instance using a PromQL query.
* `get_system_metrics`: Fetches system level cloud monitoring data
(timeseries metrics) for an AlloyDB instance using a PromQL query.
* `get_query_metrics`: Fetches query level cloud monitoring data
(timeseries metrics) for queries running in an AlloyDB instance using a
PromQL query.
## BigQuery
@@ -69,13 +85,23 @@ See guides, [Connect from your IDE](../how-to/connect-ide/_index.md), for detail
* `BIGQUERY_PROJECT`: The GCP project ID.
* `BIGQUERY_LOCATION`: (Optional) The dataset location.
* **Permissions:**
* **BigQuery User** (`roles/bigquery.user`) to execute queries and view metadata.
* **BigQuery Metadata Viewer** (`roles/bigquery.metadataViewer`) to view all datasets.
* **BigQuery Data Editor** (`roles/bigquery.dataEditor`) to create or modify datasets and tables.
* **Gemini for Google Cloud** (`roles/cloudaicompanion.user`) to use the conversational analytics API.
* **BigQuery User** (`roles/bigquery.user`) to execute queries and view
metadata.
* **BigQuery Metadata Viewer** (`roles/bigquery.metadataViewer`) to view
all datasets.
* **BigQuery Data Editor** (`roles/bigquery.dataEditor`) to create or
modify datasets and tables.
* **Gemini for Google Cloud** (`roles/cloudaicompanion.user`) to use the
conversational analytics API.
* **Tools:**
* `analyze_contribution`: Use this tool to perform contribution analysis, also called key driver analysis.
* `ask_data_insights`: Use this tool to perform data analysis, get insights, or answer complex questions about the contents of specific BigQuery tables. For more information on required roles, API setup, and IAM configuration, see the setup and authentication section of the [Conversational Analytics API documentation](https://cloud.google.com/gemini/docs/conversational-analytics-api/overview).
* `analyze_contribution`: Use this tool to perform contribution analysis,
also called key driver analysis.
* `ask_data_insights`: Use this tool to perform data analysis, get
insights, or answer complex questions about the contents of specific
BigQuery tables. For more information on required roles, API setup, and
IAM configuration, see the setup and authentication section of the
[Conversational Analytics API
documentation](https://cloud.google.com/gemini/docs/conversational-analytics-api/overview).
* `execute_sql`: Executes a SQL statement.
* `forecast`: Use this tool to forecast time series data.
* `get_dataset_info`: Gets dataset metadata.
@@ -97,35 +123,45 @@ See guides, [Connect from your IDE](../how-to/connect-ide/_index.md), for detail
* `CLOUD_SQL_MYSQL_IP_TYPE`: The IP type i.e. "Public"
or "Private" (Default: Public).
* **Permissions:**
* **Cloud SQL Client** (`roles/cloudsql.client`) to connect to the instance.
* Database-level permissions (e.g., `SELECT`, `INSERT`) are required to execute queries.
* **Cloud SQL Client** (`roles/cloudsql.client`) to connect to the
instance.
* Database-level permissions (e.g., `SELECT`, `INSERT`) are required to
execute queries.
* **Tools:**
* `execute_sql`: Executes a SQL query.
* `list_tables`: Lists tables in the database.
* `get_query_plan`: Provides information about how MySQL executes a SQL statement.
* `get_query_plan`: Provides information about how MySQL executes a SQL
statement.
## Cloud SQL for MySQL Observability
* `--prebuilt` value: `cloud-sql-mysql-observability`
* **Permissions:**
* **Monitoring Viewer** (`roles/monitoring.viewer`) is required on the project to view monitoring data.
* **Monitoring Viewer** (`roles/monitoring.viewer`) is required on the
project to view monitoring data.
* **Tools:**
* `get_system_metrics`: Fetches system level cloud monitoring data (timeseries metrics) for a MySQL instance using a PromQL query.
* `get_query_metrics`: Fetches query level cloud monitoring data (timeseries metrics) for queries running in a MySQL instance using a PromQL query.
* `get_system_metrics`: Fetches system level cloud monitoring data
(timeseries metrics) for a MySQL instance using a PromQL query.
* `get_query_metrics`: Fetches query level cloud monitoring data
(timeseries metrics) for queries running in a MySQL instance using a
PromQL query.
## Cloud SQL for MySQL Admin
* `--prebuilt` value: `cloud-sql-mysql-admin`
* **Permissions:**
* **Cloud SQL Viewer** (`roles/cloudsql.viewer`): Provides read-only access to resources.
* **Cloud SQL Viewer** (`roles/cloudsql.viewer`): Provides read-only
access to resources.
* `get_instance`
* `list_instances`
* `list_databases`
* `wait_for_operation`
* **Cloud SQL Editor** (`roles/cloudsql.editor`): Provides permissions to manage existing resources.
* **Cloud SQL Editor** (`roles/cloudsql.editor`): Provides permissions to
manage existing resources.
* All `viewer` tools
* `create_database`
* **Cloud SQL Admin** (`roles/cloudsql.admin`): Provides full control over all resources.
* **Cloud SQL Admin** (`roles/cloudsql.admin`): Provides full control over
all resources.
* All `editor` and `viewer` tools
* `create_instance`
* `create_user`
@@ -146,17 +182,24 @@ See guides, [Connect from your IDE](../how-to/connect-ide/_index.md), for detail
* `CLOUD_SQL_POSTGRES_REGION`: The region of your Cloud SQL instance.
* `CLOUD_SQL_POSTGRES_INSTANCE`: The ID of your Cloud SQL instance.
* `CLOUD_SQL_POSTGRES_DATABASE`: The name of the database to connect to.
* `CLOUD_SQL_POSTGRES_USER`: (Optional) The database username. Defaults to IAM authentication if unspecified.
* `CLOUD_SQL_POSTGRES_PASSWORD`: (Optional) The password for the database user. Defaults to IAM authentication if unspecified.
* `CLOUD_SQL_POSTGRES_IP_TYPE`: (Optional) The IP type i.e. "Public" or "Private" (Default: Public).
* `CLOUD_SQL_POSTGRES_USER`: (Optional) The database username. Defaults to
IAM authentication if unspecified.
* `CLOUD_SQL_POSTGRES_PASSWORD`: (Optional) The password for the database
user. Defaults to IAM authentication if unspecified.
* `CLOUD_SQL_POSTGRES_IP_TYPE`: (Optional) The IP type i.e. "Public" or
"Private" (Default: Public).
* **Permissions:**
* **Cloud SQL Client** (`roles/cloudsql.client`) to connect to the instance.
* Database-level permissions (e.g., `SELECT`, `INSERT`) are required to execute queries.
* **Cloud SQL Client** (`roles/cloudsql.client`) to connect to the
instance.
* Database-level permissions (e.g., `SELECT`, `INSERT`) are required to
execute queries.
* **Tools:**
* `execute_sql`: Executes a SQL query.
* `list_tables`: Lists tables in the database.
* `list_autovacuum_configurations`: Lists autovacuum configurations in the database.
* `list_memory_configurations`: Lists memory-related configurations in the database.
* `list_autovacuum_configurations`: Lists autovacuum configurations in the
database.
* `list_memory_configurations`: Lists memory-related configurations in the
database.
* `list_top_bloated_tables`: Lists the top bloated tables in the database.
* `list_replication_slots`: Lists replication slots in the database.
* `list_invalid_indexes`: Lists invalid indexes in the database.
@@ -166,24 +209,31 @@ See guides, [Connect from your IDE](../how-to/connect-ide/_index.md), for detail
* `--prebuilt` value: `cloud-sql-postgres-observability`
* **Permissions:**
* **Monitoring Viewer** (`roles/monitoring.viewer`) is required on the project to view monitoring data.
* **Monitoring Viewer** (`roles/monitoring.viewer`) is required on the
project to view monitoring data.
* **Tools:**
* `get_system_metrics`: Fetches system level cloud monitoring data (timeseries metrics) for a Postgres instance using a PromQL query.
* `get_query_metrics`: Fetches query level cloud monitoring data (timeseries metrics) for queries running in Postgres instance using a PromQL query.
* `get_system_metrics`: Fetches system level cloud monitoring data
(timeseries metrics) for a Postgres instance using a PromQL query.
* `get_query_metrics`: Fetches query level cloud monitoring data
(timeseries metrics) for queries running in a Postgres instance using a
PromQL query.
## Cloud SQL for PostgreSQL Admin
* `--prebuilt` value: `cloud-sql-postgres-admin`
* **Permissions:**
* **Cloud SQL Viewer** (`roles/cloudsql.viewer`): Provides read-only access to resources.
* **Cloud SQL Viewer** (`roles/cloudsql.viewer`): Provides read-only
access to resources.
* `get_instance`
* `list_instances`
* `list_databases`
* `wait_for_operation`
* **Cloud SQL Editor** (`roles/cloudsql.editor`): Provides permissions to manage existing resources.
* **Cloud SQL Editor** (`roles/cloudsql.editor`): Provides permissions to
manage existing resources.
* All `viewer` tools
* `create_database`
* **Cloud SQL Admin** (`roles/cloudsql.admin`): Provides full control over all resources.
* **Cloud SQL Admin** (`roles/cloudsql.admin`): Provides full control over
all resources.
* All `editor` and `viewer` tools
* `create_instance`
* `create_user`
@@ -207,10 +257,13 @@ See guides, [Connect from your IDE](../how-to/connect-ide/_index.md), for detail
* `CLOUD_SQL_MSSQL_IP_ADDRESS`: The IP address of the Cloud SQL instance.
* `CLOUD_SQL_MSSQL_USER`: The database username.
* `CLOUD_SQL_MSSQL_PASSWORD`: The password for the database user.
* `CLOUD_SQL_MSSQL_IP_TYPE`: (Optional) The IP type i.e. "Public" or "Private" (Default: Public).
* `CLOUD_SQL_MSSQL_IP_TYPE`: (Optional) The IP type i.e. "Public" or
"Private" (Default: Public).
* **Permissions:**
* **Cloud SQL Client** (`roles/cloudsql.client`) to connect to the instance.
* Database-level permissions (e.g., `SELECT`, `INSERT`) are required to execute queries.
* **Cloud SQL Client** (`roles/cloudsql.client`) to connect to the
instance.
* Database-level permissions (e.g., `SELECT`, `INSERT`) are required to
execute queries.
* **Tools:**
* `execute_sql`: Executes a SQL query.
* `list_tables`: Lists tables in the database.
@@ -219,23 +272,28 @@ See guides, [Connect from your IDE](../how-to/connect-ide/_index.md), for detail
* `--prebuilt` value: `cloud-sql-mssql-observability`
* **Permissions:**
* **Monitoring Viewer** (`roles/monitoring.viewer`) is required on the project to view monitoring data.
* **Monitoring Viewer** (`roles/monitoring.viewer`) is required on the
project to view monitoring data.
* **Tools:**
* `get_system_metrics`: Fetches system level cloud monitoring data (timeseries metrics) for a SQL Server instance using a PromQL query.
* `get_system_metrics`: Fetches system level cloud monitoring data
(timeseries metrics) for a SQL Server instance using a PromQL query.
## Cloud SQL for SQL Server Admin
* `--prebuilt` value: `cloud-sql-mssql-admin`
* **Permissions:**
* **Cloud SQL Viewer** (`roles/cloudsql.viewer`): Provides read-only access to resources.
* **Cloud SQL Viewer** (`roles/cloudsql.viewer`): Provides read-only
access to resources.
* `get_instance`
* `list_instances`
* `list_databases`
* `wait_for_operation`
* **Cloud SQL Editor** (`roles/cloudsql.editor`): Provides permissions to manage existing resources.
* **Cloud SQL Editor** (`roles/cloudsql.editor`): Provides permissions to
manage existing resources.
* All `viewer` tools
* `create_database`
* **Cloud SQL Admin** (`roles/cloudsql.admin`): Provides full control over all resources.
* **Cloud SQL Admin** (`roles/cloudsql.admin`): Provides full control over
all resources.
* All `editor` and `viewer` tools
* `create_instance`
* `create_user`
@@ -254,31 +312,39 @@ See guides, [Connect from your IDE](../how-to/connect-ide/_index.md), for detail
* **Environment Variables:**
* `DATAPLEX_PROJECT`: The GCP project ID.
* **Permissions:**
* **Dataplex Reader** (`roles/dataplex.viewer`) to search and look up entries.
* **Dataplex Reader** (`roles/dataplex.viewer`) to search and look up
entries.
* **Dataplex Editor** (`roles/dataplex.editor`) to modify entries.
* **Tools:**
* `dataplex_search_entries`: Searches for entries in Dataplex Catalog.
* `dataplex_lookup_entry`: Retrieves a specific entry from Dataplex Catalog.
* `dataplex_search_aspect_types`: Finds aspect types relevant to the query.
* `dataplex_lookup_entry`: Retrieves a specific entry from Dataplex
Catalog.
* `dataplex_search_aspect_types`: Finds aspect types relevant to the
query.
## Firestore
* `--prebuilt` value: `firestore`
* **Environment Variables:**
* `FIRESTORE_PROJECT`: The GCP project ID.
* `FIRESTORE_DATABASE`: (Optional) The Firestore database ID. Defaults to "(default)".
* `FIRESTORE_DATABASE`: (Optional) The Firestore database ID. Defaults to
"(default)".
* **Permissions:**
* **Cloud Datastore User** (`roles/datastore.user`) to get documents, list collections, and query collections.
* **Firebase Rules Viewer** (`roles/firebaserules.viewer`) to get and validate Firestore rules.
* **Cloud Datastore User** (`roles/datastore.user`) to get documents, list
collections, and query collections.
* **Firebase Rules Viewer** (`roles/firebaserules.viewer`) to get and
validate Firestore rules.
* **Tools:**
* `get_documents`: Gets multiple documents from Firestore by their paths.
* `add_documents`: Adds a new document to a Firestore collection.
* `update_document`: Updates an existing document in Firestore.
* `list_collections`: Lists Firestore collections for a given parent path.
* `delete_documents`: Deletes multiple documents from Firestore.
* `query_collection`: Retrieves one or more Firestore documents from a collection.
* `query_collection`: Retrieves one or more Firestore documents from a
collection.
* `get_rules`: Retrieves the active Firestore security rules.
* `validate_rules`: Checks the provided Firestore Rules source for syntax and validation errors.
* `validate_rules`: Checks the provided Firestore Rules source for syntax
and validation errors.
## Looker
@@ -289,7 +355,8 @@ See guides, [Connect from your IDE](../how-to/connect-ide/_index.md), for detail
* `LOOKER_CLIENT_SECRET`: The client secret for the Looker API.
* `LOOKER_VERIFY_SSL`: Whether to verify SSL certificates.
* **Permissions:**
* A Looker account with permissions to access the desired models, explores, and data is required.
* A Looker account with permissions to access the desired models,
explores, and data is required.
* **Tools:**
* `get_models`: Retrieves the list of LookML models.
* `get_explores`: Retrieves the list of explores in a model.
@@ -317,7 +384,8 @@ See guides, [Connect from your IDE](../how-to/connect-ide/_index.md), for detail
* `MSSQL_USER`: The database username.
* `MSSQL_PASSWORD`: The password for the database user.
* **Permissions:**
* Database-level permissions (e.g., `SELECT`, `INSERT`) are required to execute queries.
* Database-level permissions (e.g., `SELECT`, `INSERT`) are required to
execute queries.
* **Tools:**
* `execute_sql`: Executes a SQL query.
* `list_tables`: Lists tables in the database.
@@ -332,11 +400,13 @@ See guides, [Connect from your IDE](../how-to/connect-ide/_index.md), for detail
* `MYSQL_USER`: The database username.
* `MYSQL_PASSWORD`: The password for the database user.
* **Permissions:**
* Database-level permissions (e.g., `SELECT`, `INSERT`) are required to execute queries.
* Database-level permissions (e.g., `SELECT`, `INSERT`) are required to
execute queries.
* **Tools:**
* `execute_sql`: Executes a SQL query.
* `list_tables`: Lists tables in the database.
* `get_query_plan`: Provides information about how MySQL executes a SQL statement.
* `get_query_plan`: Provides information about how MySQL executes a SQL
statement.
## OceanBase
@@ -348,7 +418,8 @@ See guides, [Connect from your IDE](../how-to/connect-ide/_index.md), for detail
* `OCEANBASE_USER`: The database username.
* `OCEANBASE_PASSWORD`: The password for the database user.
* **Permissions:**
* Database-level permissions (e.g., `SELECT`, `INSERT`) are required to execute queries.
* Database-level permissions (e.g., `SELECT`, `INSERT`) are required to
execute queries.
* **Tools:**
* `execute_sql`: Executes a SQL query.
* `list_tables`: Lists tables in the database.
@@ -362,14 +433,18 @@ See guides, [Connect from your IDE](../how-to/connect-ide/_index.md), for detail
* `POSTGRES_DATABASE`: The name of the database to connect to.
* `POSTGRES_USER`: The database username.
* `POSTGRES_PASSWORD`: The password for the database user.
* `POSTGRES_QUERY_PARAMS`: (Optional) Raw query to be added to the db connection string.
* `POSTGRES_QUERY_PARAMS`: (Optional) Raw query to be added to the db
connection string.
* **Permissions:**
* Database-level permissions (e.g., `SELECT`, `INSERT`) are required to execute queries.
* Database-level permissions (e.g., `SELECT`, `INSERT`) are required to
execute queries.
* **Tools:**
* `execute_sql`: Executes a SQL query.
* `list_tables`: Lists tables in the database.
* `list_autovacuum_configurations`: Lists autovacuum configurations in the database.
* `list_memory_configurations`: Lists memory-related configurations in the database.
* `list_autovacuum_configurations`: Lists autovacuum configurations in the
database.
* `list_memory_configurations`: Lists memory-related configurations in the
database.
* `list_top_bloated_tables`: Lists the top bloated tables in the database.
* `list_replication_slots`: Lists replication slots in the database.
* `list_invalid_indexes`: Lists invalid indexes in the database.
@@ -383,8 +458,10 @@ See guides, [Connect from your IDE](../how-to/connect-ide/_index.md), for detail
* `SPANNER_INSTANCE`: The Spanner instance ID.
* `SPANNER_DATABASE`: The Spanner database ID.
* **Permissions:**
* **Cloud Spanner Database Reader** (`roles/spanner.databaseReader`) to execute DQL queries and list tables.
* **Cloud Spanner Database User** (`roles/spanner.databaseUser`) to execute DML queries.
* **Cloud Spanner Database Reader** (`roles/spanner.databaseReader`) to
execute DQL queries and list tables.
* **Cloud Spanner Database User** (`roles/spanner.databaseUser`) to
execute DML queries.
* **Tools:**
* `execute_sql`: Executes a DML SQL query.
* `execute_sql_dql`: Executes a DQL SQL query.
@@ -398,18 +475,23 @@ See guides, [Connect from your IDE](../how-to/connect-ide/_index.md), for detail
* `SPANNER_INSTANCE`: The Spanner instance ID.
* `SPANNER_DATABASE`: The Spanner database ID.
* **Permissions:**
* **Cloud Spanner Database Reader** (`roles/spanner.databaseReader`) to execute DQL queries and list tables.
* **Cloud Spanner Database User** (`roles/spanner.databaseUser`) to execute DML queries.
* **Cloud Spanner Database Reader** (`roles/spanner.databaseReader`) to
execute DQL queries and list tables.
* **Cloud Spanner Database User** (`roles/spanner.databaseUser`) to
execute DML queries.
* **Tools:**
* `execute_sql`: Executes a DML SQL query using the PostgreSQL interface for Spanner.
* `execute_sql_dql`: Executes a DQL SQL query using the PostgreSQL interface for Spanner.
* `execute_sql`: Executes a DML SQL query using the PostgreSQL interface
for Spanner.
* `execute_sql_dql`: Executes a DQL SQL query using the PostgreSQL
interface for Spanner.
* `list_tables`: Lists tables in the database.
## SQLite
* `--prebuilt` value: `sqlite`
* **Environment Variables:**
* `SQLITE_DATABASE`: The path to the SQLite database file (e.g., `./sample.db`).
* `SQLITE_DATABASE`: The path to the SQLite database file (e.g.,
`./sample.db`).
* **Permissions:**
* File system read/write permissions for the specified database file.
* **Tools:**
@@ -420,7 +502,8 @@ See guides, [Connect from your IDE](../how-to/connect-ide/_index.md), for detail
* `--prebuilt` value: `neo4j`
* **Environment Variables:**
* `NEO4J_URI`: The URI of the Neo4j instance (e.g., `bolt://localhost:7687`).
* `NEO4J_URI`: The URI of the Neo4j instance (e.g.,
`bolt://localhost:7687`).
* `NEO4J_DATABASE`: The name of the Neo4j database to connect to.
* `NEO4J_USERNAME`: The username for the Neo4j instance.
* `NEO4J_PASSWORD`: The password for the Neo4j instance.

View File

@@ -200,7 +200,9 @@ func main() {
### Specifying tokens for existing tools
#### Python
Use the [Python SDK](https://github.com/googleapis/mcp-toolbox-sdk-python/tree/main).
Use the [Python
SDK](https://github.com/googleapis/mcp-toolbox-sdk-python/tree/main).
{{< tabpane persist=header >}}
{{< tab header="Core" lang="Python" >}}

View File

@@ -11,11 +11,17 @@ aliases:
## About
The `alloydb-admin` source provides a client to interact with the [Google AlloyDB API](https://cloud.google.com/alloydb/docs/reference/rest). This allows tools to perform administrative tasks on AlloyDB resources, such as managing clusters, instances, and users.
The `alloydb-admin` source provides a client to interact with the [Google
AlloyDB API](https://cloud.google.com/alloydb/docs/reference/rest). This allows
tools to perform administrative tasks on AlloyDB resources, such as managing
clusters, instances, and users.
Authentication can be handled in two ways:
1. **Application Default Credentials (ADC):** By default, the source uses ADC to authenticate with the API.
2. **Client-side OAuth:** If `useClientOAuth` is set to `true`, the source will expect an OAuth 2.0 access token to be provided by the client (e.g., a web browser) for each request.
1. **Application Default Credentials (ADC):** By default, the source uses ADC
to authenticate with the API.
2. **Client-side OAuth:** If `useClientOAuth` is set to `true`, the source will
expect an OAuth 2.0 access token to be provided by the client (e.g., a web
browser) for each request.
## Example
@@ -30,7 +36,7 @@ sources:
```
## Reference
| **field** | **type** | **required** | **description** |
|----------------|:--------:|:------------:|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| kind | string | true | Must be "alloydb-admin". |
| **field** | **type** | **required** | **description** |
|----------------|:--------:|:------------:|------------------------------------------------------------------------------------------------------------------------------------------------|
| kind | string | true | Must be "alloydb-admin". |
| useClientOAuth | boolean | false | If true, the source will use client-side OAuth for authorization. Otherwise, it will use Application Default Credentials. Defaults to `false`. |
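A sketch of a source entry that opts into client-side OAuth (the source name is
arbitrary):

```bash
cat >> tools.yaml <<'EOF'
sources:
  my-alloydb-admin:
    kind: alloydb-admin
    useClientOAuth: true
EOF
```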

View File

@@ -31,8 +31,10 @@ apply when creating SQL queries to run against your BigQuery data, such as
avoiding full table scans or complex filters.
[bigquery-docs]: https://cloud.google.com/bigquery/docs
[bigquery-quickstart-cli]: https://cloud.google.com/bigquery/docs/quickstarts/quickstart-command-line
[bigquery-googlesql]: https://cloud.google.com/bigquery/docs/reference/standard-sql/
[bigquery-quickstart-cli]:
https://cloud.google.com/bigquery/docs/quickstarts/quickstart-command-line
[bigquery-googlesql]:
https://cloud.google.com/bigquery/docs/reference/standard-sql/
## Available Tools
@@ -64,12 +66,14 @@ avoiding full table scans or complex filters.
Run SQL queries directly against BigQuery datasets.
- [`bigquery-search-catalog`](../tools/bigquery/bigquery-search_catalog.md)
List all entries in Dataplex Catalog (e.g. tables, views, models) that matches given user query.
List all entries in Dataplex Catalog (e.g. tables, views, models) that match a
given user query.
### Pre-built Configurations
- [BigQuery using MCP](https://googleapis.github.io/genai-toolbox/how-to/connect-ide/bigquery_mcp/)
Connect your IDE to BigQuery using Toolbox.
- [BigQuery using
MCP](https://googleapis.github.io/genai-toolbox/how-to/connect-ide/bigquery_mcp/)
Connect your IDE to BigQuery using Toolbox.
## Requirements
@@ -80,7 +84,9 @@ user and group access to BigQuery resources like projects, datasets, and tables.
### Authentication via Application Default Credentials (ADC)
By **default**, Toolbox will use your [Application Default Credentials (ADC)][adc] to authorize and authenticate when interacting with [BigQuery][bigquery-docs].
By **default**, Toolbox will use your [Application Default Credentials
(ADC)][adc] to authorize and authenticate when interacting with
[BigQuery][bigquery-docs].
When using this method, you need to ensure the IAM identity associated with your
ADC (such as a service account) has the correct permissions for the queries you

View File

@@ -9,7 +9,7 @@ description: >
## About
[ClickHouse][clickhouse-docs] is a fast, open-source, column-oriented database
[ClickHouse][clickhouse-docs] is a fast, open-source, column-oriented database.
[clickhouse-docs]: https://clickhouse.com/docs
@@ -27,10 +27,12 @@ description: >
### Database User
This source uses standard ClickHouse authentication. You will need to [create a
ClickHouse user][clickhouse-users] (or with [ClickHouse Cloud][clickhouse-cloud]) to connect to the database with. The user
should have appropriate permissions for the operations you plan to perform.
ClickHouse user][clickhouse-users] (or one in [ClickHouse
Cloud][clickhouse-cloud]) to connect to the database with. The user should have
appropriate permissions for the operations you plan to perform.
[clickhouse-cloud]: https://clickhouse.com/docs/getting-started/quick-start/cloud#connect-with-your-app
[clickhouse-cloud]:
https://clickhouse.com/docs/getting-started/quick-start/cloud#connect-with-your-app
[clickhouse-users]: https://clickhouse.com/docs/en/sql-reference/statements/create/user
### Network Access
@@ -79,13 +81,13 @@ instead of hardcoding your secrets into the configuration file.
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|------------------------------------------------------------------------------------|
| kind | string | true | Must be "clickhouse". |
| host | string | true | IP address or hostname to connect to (e.g. "127.0.0.1" or "clickhouse.example.com") |
| port | string | true | Port to connect to (e.g. "8443" for HTTPS, "8123" for HTTP) |
| database | string | true | Name of the ClickHouse database to connect to (e.g. "my_database"). |
| user | string | true | Name of the ClickHouse user to connect as (e.g. "analytics_user"). |
| password | string | false | Password of the ClickHouse user (e.g. "my-password"). |
| protocol | string | false | Connection protocol: "https" (default) or "http". |
| secure | boolean | false | Whether to use a secure connection (TLS). Default: false. |
| **field** | **type** | **required** | **description** |
|-----------|:--------:|:------------:|-------------------------------------------------------------------------------------|
| kind | string | true | Must be "clickhouse". |
| host | string | true | IP address or hostname to connect to (e.g. "127.0.0.1" or "clickhouse.example.com") |
| port | string | true | Port to connect to (e.g. "8443" for HTTPS, "8123" for HTTP) |
| database | string | true | Name of the ClickHouse database to connect to (e.g. "my_database"). |
| user | string | true | Name of the ClickHouse user to connect as (e.g. "analytics_user"). |
| password | string | false | Password of the ClickHouse user (e.g. "my-password"). |
| protocol | string | false | Connection protocol: "https" (default) or "http". |
| secure | boolean | false | Whether to use a secure connection (TLS). Default: false. |

View File

@@ -10,11 +10,16 @@ aliases:
## About
The `cloud-monitoring` source provides a client to interact with the [Google Cloud Monitoring API](https://cloud.google.com/monitoring/api). This allows tools to access cloud monitoring metrics explorer and run promql queries.
The `cloud-monitoring` source provides a client to interact with the [Google
Cloud Monitoring API](https://cloud.google.com/monitoring/api). This allows
tools to access the Cloud Monitoring Metrics Explorer and run PromQL queries.
Authentication can be handled in two ways:
1. **Application Default Credentials (ADC):** By default, the source uses ADC to authenticate with the API.
2. **Client-side OAuth:** If `useClientOAuth` is set to `true`, the source will expect an OAuth 2.0 access token to be provided by the client (e.g., a web browser) for each request.
1. **Application Default Credentials (ADC):** By default, the source uses ADC
to authenticate with the API.
2. **Client-side OAuth:** If `useClientOAuth` is set to `true`, the source will
expect an OAuth 2.0 access token to be provided by the client (e.g., a web
browser) for each request.
## Example
@@ -30,7 +35,7 @@ sources:
## Reference
| **field** | **type** | **required** | **description** |
|----------------|:--------:|:------------:|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| kind | string | true | Must be "cloud-monitoring". |
| **field** | **type** | **required** | **description** |
|----------------|:--------:|:------------:|------------------------------------------------------------------------------------------------------------------------------------------------|
| kind | string | true | Must be "cloud-monitoring". |
| useClientOAuth | boolean | false | If true, the source will use client-side OAuth for authorization. Otherwise, it will use Application Default Credentials. Defaults to `false`. |

View File

@@ -10,11 +10,17 @@ aliases:
## About
The `cloud-sql-admin` source provides a client to interact with the [Google Cloud SQL Admin API](https://cloud.google.com/sql/docs/mysql/admin-api/v1). This allows tools to perform administrative tasks on Cloud SQL instances, such as creating users and databases.
The `cloud-sql-admin` source provides a client to interact with the [Google
Cloud SQL Admin API](https://cloud.google.com/sql/docs/mysql/admin-api/v1). This
allows tools to perform administrative tasks on Cloud SQL instances, such as
creating users and databases.
Authentication can be handled in two ways:
1. **Application Default Credentials (ADC):** By default, the source uses ADC to authenticate with the API.
2. **Client-side OAuth:** If `useClientOAuth` is set to `true`, the source will expect an OAuth 2.0 access token to be provided by the client (e.g., a web browser) for each request.
1. **Application Default Credentials (ADC):** By default, the source uses ADC
to authenticate with the API.
2. **Client-side OAuth:** If `useClientOAuth` is set to `true`, the source will
expect an OAuth 2.0 access token to be provided by the client (e.g., a web
browser) for each request.
## Example
@@ -30,7 +36,7 @@ sources:
## Reference
| **field** | **type** | **required** | **description** |
|----------------|:--------:|:------------:|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| kind | string | true | Must be "cloud-sql-admin". |
| **field** | **type** | **required** | **description** |
|----------------|:--------:|:------------:|------------------------------------------------------------------------------------------------------------------------------------------------|
| kind | string | true | Must be "cloud-sql-admin". |
| useClientOAuth | boolean | false | If true, the source will use client-side OAuth for authorization. Otherwise, it will use Application Default Credentials. Defaults to `false`. |

View File

@@ -42,8 +42,9 @@ to a database by following these instructions][csql-mysql-quickstart].
### Pre-built Configurations
- [Cloud SQL for MySQL using MCP](https://googleapis.github.io/genai-toolbox/how-to/connect-ide/cloud_sql_mysql_mcp/)
Connect your IDE to Cloud SQL for MySQL using Toolbox.
- [Cloud SQL for MySQL using
MCP](https://googleapis.github.io/genai-toolbox/how-to/connect-ide/cloud_sql_mysql_mcp/)
Connect your IDE to Cloud SQL for MySQL using Toolbox.
## Requirements

View File

@@ -18,7 +18,8 @@ If you are new to Cloud SQL for PostgreSQL, you can try [creating and connecting
to a database by following these instructions][csql-pg-quickstart].
[csql-pg-docs]: https://cloud.google.com/sql/docs/postgres
[csql-pg-quickstart]: https://cloud.google.com/sql/docs/postgres/connect-instance-local-computer
[csql-pg-quickstart]:
https://cloud.google.com/sql/docs/postgres/connect-instance-local-computer
## Available Tools
@@ -42,7 +43,8 @@ to a database by following these instructions][csql-pg-quickstart].
### Pre-built Configurations
- [Cloud SQL for Postgres using MCP](https://googleapis.github.io/genai-toolbox/how-to/connect-ide/cloud_sql_pg_mcp/)
- [Cloud SQL for Postgres using
MCP](https://googleapis.github.io/genai-toolbox/how-to/connect-ide/cloud_sql_pg_mcp/)
Connect your IDE to Cloud SQL for Postgres using Toolbox.

View File

@@ -46,7 +46,7 @@ Your primary objective is to help discover, organize and manage metadata related
Example (Incorrect): Hi there! I see that you are looking for...
Example (Correct): This problem likely stems from...
3. Do not reiterate or summarize the question in the answer.
4. Crucially, always convey a tone of uncertainty and caution. Since you are interpreting metadata and have no way to externally verify your answers, never express complete confidence. Frame your responses as interpretations based solely on the provided metadata. Use a suggestive tone, not a prescriptive one:
4. Crucially, always convey a tone of uncertainty and caution. Since you are interpreting metadata and have no way to externally verify your answers, never express complete confidence. Frame your responses as interpretations based solely on the provided metadata. Use a suggestive tone, not a prescriptive one:
Example (Correct): "The entry describes..."
Example (Correct): "According to catalog,..."
Example (Correct): "Based on the metadata,..."

View File

@@ -9,7 +9,10 @@ description: >
## About
[Firebird][fb-docs] is a relational database management system offering many ANSI SQL standard features that runs on Linux, Windows, and a variety of Unix platforms. It is known for its small footprint, powerful features, and easy maintenance.
[Firebird][fb-docs] is a relational database management system offering many
ANSI SQL standard features that runs on Linux, Windows, and a variety of Unix
platforms. It is known for its small footprint, powerful features, and easy
maintenance.
[fb-docs]: https://firebirdsql.org/
@@ -25,7 +28,8 @@ description: >
### Database User
This source uses standard authentication. You will need to [create a Firebird user][fb-users] to login to the database with.
This source uses standard authentication. You will need to [create a Firebird
user][fb-users] to log in to the database with.
[fb-users]: https://firebirdsql.org/refdocs/langrefupd25-sql-create-user.html
@@ -49,11 +53,11 @@ instead of hardcoding your secrets into the configuration file.
## Reference
| **field** | **type** | **required** | **description** |
|-----------|:--------:|:------------:|------------------------------------------------------------------------|
| kind | string | true | Must be "firebird". |
| host | string | true | IP address to connect to (e.g. "127.0.0.1") |
| port | string | true | Port to connect to (e.g. "3050") |
| **field** | **type** | **required** | **description** |
|-----------|:--------:|:------------:|------------------------------------------------------------------------------|
| kind | string | true | Must be "firebird". |
| host | string | true | IP address to connect to (e.g. "127.0.0.1") |
| port | string | true | Port to connect to (e.g. "3050") |
| database | string | true | Path to the Firebird database file (e.g. "/var/lib/firebird/data/test.fdb"). |
| user | string | true | Name of the Firebird user to connect as (e.g. "SYSDBA"). |
| password | string | true | Password of the Firebird user (e.g. "masterkey"). |
| user | string | true | Name of the Firebird user to connect as (e.g. "SYSDBA"). |
| password | string | true | Password of the Firebird user (e.g. "masterkey"). |
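A sketch of a matching source entry, keeping the password out of the file via
environment-variable substitution (all values are placeholders):

```bash
cat >> tools.yaml <<'EOF'
sources:
  my-firebird:
    kind: firebird
    host: 127.0.0.1
    port: "3050"
    database: /var/lib/firebird/data/test.fdb
    user: SYSDBA
    password: ${FIREBIRD_PASSWORD}
EOF
```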

View File

@@ -8,7 +8,11 @@ description: >
## About
[OceanBase][oceanbase-docs] is a distributed relational database management system (RDBMS) that provides high availability, scalability, and strong consistency. It's designed to handle large-scale data processing and is compatible with MySQL, making it easy for developers to migrate from MySQL to OceanBase.
[OceanBase][oceanbase-docs] is a distributed relational database management
system (RDBMS) that provides high availability, scalability, and strong
consistency. It's designed to handle large-scale data processing and is
compatible with MySQL, making it easy for developers to migrate from MySQL to
OceanBase.
[oceanbase-docs]: https://www.oceanbase.com/
@@ -16,11 +20,15 @@ description: >
### Database User
This source only uses standard authentication. You will need to create an OceanBase user to login to the database with. OceanBase supports MySQL-compatible user management syntax.
This source only uses standard authentication. You will need to create an
OceanBase user to log in to the database with. OceanBase supports
MySQL-compatible user management syntax.
### Network Connectivity
Ensure that your application can connect to the OceanBase cluster. OceanBase typically runs on ports 2881 (for MySQL protocol) or 3881 (for MySQL protocol with SSL).
Ensure that your application can connect to the OceanBase cluster. OceanBase
typically runs on ports 2881 (for MySQL protocol) or 3881 (for MySQL protocol
with SSL).
## Example
@@ -57,16 +65,21 @@ instead of hardcoding your secrets into the configuration file.
### MySQL Compatibility
OceanBase is highly compatible with MySQL, supporting most MySQL SQL syntax, data types, and functions. This makes it easy to migrate existing MySQL applications to OceanBase.
OceanBase is highly compatible with MySQL, supporting most MySQL SQL syntax,
data types, and functions. This makes it easy to migrate existing MySQL
applications to OceanBase.
### High Availability
OceanBase provides automatic failover and data replication across multiple nodes, ensuring high availability and data durability.
OceanBase provides automatic failover and data replication across multiple
nodes, ensuring high availability and data durability.
### Scalability
OceanBase can scale horizontally by adding more nodes to the cluster, making it suitable for large-scale applications.
OceanBase can scale horizontally by adding more nodes to the cluster, making it
suitable for large-scale applications.
### Strong Consistency
OceanBase provides strong consistency guarantees, ensuring that all transactions are ACID compliant.
OceanBase provides strong consistency guarantees, ensuring that all transactions
are ACID compliant.
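
Tying the connection details above together, a source entry might look roughly
like the sketch below. The field names are assumed to mirror the other
MySQL-compatible sources, and `${OB_PASSWORD}` assumes environment-variable
substitution rather than a hardcoded secret.

```yaml
sources:
  my-oceanbase-db:
    kind: oceanbase
    host: 127.0.0.1
    port: 2881                # MySQL protocol port; 3881 for MySQL protocol with SSL
    user: oceanbase_user      # assumed field name; check the reference table
    password: ${OB_PASSWORD}
    database: my_db
```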

View File

@@ -9,7 +9,10 @@ description: >
## About
[TiDB][tidb-docs] is an open-source distributed SQL database that supports Hybrid Transactional and Analytical Processing (HTAP) workloads. It is MySQL-compatible and features horizontal scalability, strong consistency, and high availability.
[TiDB][tidb-docs] is an open-source distributed SQL database that supports
Hybrid Transactional and Analytical Processing (HTAP) workloads. It is
MySQL-compatible and features horizontal scalability, strong consistency, and
high availability.
[tidb-docs]: https://docs.pingcap.com/tidb/stable
@@ -17,9 +20,11 @@ description: >
### Database User
This source uses standard MySQL protocol authentication. You will need to [create a TiDB user][tidb-users] to login to the database with.
This source uses standard MySQL protocol authentication. You will need to
[create a TiDB user][tidb-users] to login to the database with.
For TiDB Cloud users, you can create database users through the TiDB Cloud console.
For TiDB Cloud users, you can create database users through the TiDB Cloud
console.
[tidb-users]: https://docs.pingcap.com/tidb/stable/user-account-management
@@ -27,11 +32,14 @@ For TiDB Cloud users, you can create database users through the TiDB Cloud conso
- TiDB Cloud
For TiDB Cloud instances, SSL is automatically enabled when the hostname matches the TiDB Cloud pattern (`gateway*.*.*.tidbcloud.com`). You don't need to explicitly set `ssl: true` for TiDB Cloud connections.
For TiDB Cloud instances, SSL is automatically enabled when the hostname
matches the TiDB Cloud pattern (`gateway*.*.*.tidbcloud.com`). You don't
need to explicitly set `ssl: true` for TiDB Cloud connections.
- Self-Hosted TiDB
For self-hosted TiDB instances, you can optionally enable SSL by setting `ssl: true` in your configuration.
For self-hosted TiDB instances, you can optionally enable SSL by setting
`ssl: true` in your configuration.
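
As an illustration, a self-hosted source entry that opts into SSL might look
like the sketch below; the field names are assumed to follow the other
MySQL-compatible sources, and port 4000 is TiDB's conventional default.

```yaml
sources:
  my-tidb-db:
    kind: tidb
    host: 127.0.0.1
    port: "4000"              # conventional TiDB default port
    user: tidb_user           # assumed field name; check the reference table
    password: ${TIDB_PASSWORD}
    database: test
    ssl: true                 # optional for self-hosted; automatic for TiDB Cloud hostnames
```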
## Example

View File

@@ -8,7 +8,9 @@ description: >
## About
[Trino][trino-docs] is a distributed SQL query engine designed for fast analytic queries against data of any size. It allows you to query data where it lives, including Hive, Cassandra, relational databases or even proprietary data stores.
[Trino][trino-docs] is a distributed SQL query engine designed for fast analytic
queries against data of any size. It allows you to query data where it lives,
including Hive, Cassandra, relational databases or even proprietary data stores.
[trino-docs]: https://trino.io/docs/
@@ -24,7 +26,8 @@ description: >
### Trino Cluster
You need access to a running Trino cluster with appropriate user permissions for the catalogs and schemas you want to query.
You need access to a running Trino cluster with appropriate user permissions for
the catalogs and schemas you want to query.
## Example
@@ -47,16 +50,16 @@ instead of hardcoding your secrets into the configuration file.
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:------------------:|:------------:|------------------------------------------------------------------------|
| kind | string | true | Must be "trino". |
| host | string | true | Trino coordinator hostname (e.g. "trino.example.com") |
| port | string | true | Trino coordinator port (e.g. "8080", "8443") |
| user | string | false | Username for authentication (e.g. "analyst"). Optional for anonymous access. |
| password | string | false | Password for basic authentication |
| catalog | string | true | Default catalog to use for queries (e.g. "hive") |
| schema | string | true | Default schema to use for queries (e.g. "default") |
| queryTimeout| string | false | Query timeout duration (e.g. "30m", "1h") |
| accessToken | string | false | JWT access token for authentication |
| kerberosEnabled | boolean | false | Enable Kerberos authentication (default: false) |
| sslEnabled | boolean | false | Enable SSL/TLS (default: false) |
| **field** | **type** | **required** | **description** |
|-----------------|:--------:|:------------:|------------------------------------------------------------------------------|
| kind | string | true | Must be "trino". |
| host | string | true | Trino coordinator hostname (e.g. "trino.example.com") |
| port | string | true | Trino coordinator port (e.g. "8080", "8443") |
| user | string | false | Username for authentication (e.g. "analyst"). Optional for anonymous access. |
| password | string | false | Password for basic authentication |
| catalog | string | true | Default catalog to use for queries (e.g. "hive") |
| schema | string | true | Default schema to use for queries (e.g. "default") |
| queryTimeout | string | false | Query timeout duration (e.g. "30m", "1h") |
| accessToken | string | false | JWT access token for authentication |
| kerberosEnabled | boolean | false | Enable Kerberos authentication (default: false) |
| sslEnabled | boolean | false | Enable SSL/TLS (default: false) |
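
As a sketch, a source entry combining the example values from the reference
table might look like the following; the source name is illustrative, and the
optional fields can simply be omitted.

```yaml
sources:
  my-trino-source:
    kind: trino
    host: trino.example.com
    port: "8080"
    user: analyst         # optional; omit for anonymous access
    catalog: hive
    schema: default
    queryTimeout: 30m     # optional
```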

View File

@@ -8,7 +8,9 @@ description: >
## About
[YugabyteDB][yugabytedb] is a high-performance, distributed SQL database designed for global, internet-scale applications, with full PostgreSQL compatibility.
[YugabyteDB][yugabytedb] is a high-performance, distributed SQL database
designed for global, internet-scale applications, with full PostgreSQL
compatibility.
[yugabytedb]: https://www.yugabyte.com/
@@ -29,16 +31,16 @@ sources:
## Reference
| **field** | **type** | **required** | **description** |
|-----------------------------------|:--------:|:------------:|------------------------------------------------------------------------|
| kind | string | true | Must be "yugabytedb". |
| host | string | true | IP address to connect to. |
| port | integer | true | Port to connect to. The default port is 5433. |
| database | string | true | Name of the YugabyteDB database to connect to. The default database name is yugabyte. |
| user | string | true | Name of the YugabyteDB user to connect as. The default user is yugabyte. |
| password | string | true | Password of the YugabyteDB user. The default password is yugabyte. |
| loadBalance | boolean | false | If true, enable uniform load balancing. The default loadBalance value is false. |
| topologyKeys | string | false | Comma-separated geo-locations in the form cloud.region.zone:priority to enable topology-aware load balancing. Ignored if loadBalance is false. It is null by default. |
| ybServersRefreshInterval | integer | false | The interval (in seconds) to refresh the servers list; ignored if loadBalance is false. The default value of ybServersRefreshInterval is 300. |
| fallbackToTopologyKeysOnly | boolean | false | If set to true and topologyKeys are specified, only connect to nodes specified in topologyKeys. By defualt, this is set to false. |
| failedHostReconnectDelaySecs | integer | false | Time (in seconds) to wait before trying to connect to failed nodes. The default value of is 5. |
| **field** | **type** | **required** | **description** |
|------------------------------|:--------:|:------------:|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| kind | string | true | Must be "yugabytedb". |
| host | string | true | IP address to connect to. |
| port | integer | true | Port to connect to. The default port is 5433. |
| database | string | true | Name of the YugabyteDB database to connect to. The default database name is yugabyte. |
| user | string | true | Name of the YugabyteDB user to connect as. The default user is yugabyte. |
| password | string | true | Password of the YugabyteDB user. The default password is yugabyte. |
| loadBalance | boolean | false | If true, enable uniform load balancing. The default loadBalance value is false. |
| topologyKeys | string | false | Comma-separated geo-locations in the form cloud.region.zone:priority to enable topology-aware load balancing. Ignored if loadBalance is false. It is null by default. |
| ybServersRefreshInterval | integer | false | The interval (in seconds) to refresh the servers list; ignored if loadBalance is false. The default value of ybServersRefreshInterval is 300. |
| fallbackToTopologyKeysOnly    | boolean  | false        | If set to true and topologyKeys are specified, only connect to nodes specified in topologyKeys. By default, this is set to false.                        |
| failedHostReconnectDelaySecs  | integer  | false        | Time (in seconds) to wait before trying to connect to failed nodes. The default value is 5.                                                              |
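
A source entry leaning on the defaults from the table above might look like
the sketch below; the topology value is illustrative, and `${YB_PASSWORD}`
assumes environment-variable substitution instead of a hardcoded secret.

```yaml
sources:
  my-yugabytedb-source:
    kind: yugabytedb
    host: 127.0.0.1
    port: 5433
    database: yugabyte
    user: yugabyte
    password: ${YB_PASSWORD}
    loadBalance: true                     # optional; defaults to false
    topologyKeys: cloud1.region1.zone1:1  # optional; ignored if loadBalance is false
```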

View File

@@ -8,14 +8,19 @@ aliases: [/resources/tools/alloydb-create-instance]
## About
The `alloydb-create-instance` tool creates a new AlloyDB instance (PRIMARY or READ_POOL) within a specified cluster. It is compatible with [alloydb-admin](../../sources/alloydb-admin.md) source.
The `alloydb-create-instance` tool creates a new AlloyDB instance (PRIMARY or
READ_POOL) within a specified cluster. It is compatible with
the [alloydb-admin](../../sources/alloydb-admin.md) source.
This tool provisions a new instance with a **public IP address**.
**Permissions & APIs Required:**
Before using, ensure the following on your GCP project:
1. The [AlloyDB API](https://console.cloud.google.com/apis/library/alloydb.googleapis.com) is enabled.
2. The user or service account executing the tool has one of the following IAM roles:
1. The [AlloyDB
API](https://console.cloud.google.com/apis/library/alloydb.googleapis.com)
is enabled.
2. The user or service account executing the tool has one of the following IAM
roles:
- `roles/alloydb.admin` (the AlloyDB Admin predefined IAM role)
@@ -35,7 +40,8 @@ The tool takes the following input parameters:
| `nodeCount` | int | The number of nodes for a read pool. Required only if `instanceType` is `READ_POOL`. Default: `1` | No |
> Note
> The tool sets the `password.enforce_complexity` database flag to `on`, requiring new database passwords to meet complexity rules.
> The tool sets the `password.enforce_complexity` database flag to `on`,
> requiring new database passwords to meet complexity rules.
## Example

View File

@@ -8,13 +8,18 @@ aliases: [/resources/tools/alloydb-create-user]
## About
The `alloydb-create-user` tool creates a new database user (`ALLOYDB_BUILT_IN` or `ALLOYDB_IAM_USER`) within a specified cluster. It is compatible with [alloydb-admin](../../sources/alloydb-admin.md) source.
The `alloydb-create-user` tool creates a new database user (`ALLOYDB_BUILT_IN`
or `ALLOYDB_IAM_USER`) within a specified cluster. It is compatible with
the [alloydb-admin](../../sources/alloydb-admin.md) source.
**Permissions & APIs Required:**
Before using, ensure the following on your GCP project:
1. The [AlloyDB API](https://console.cloud.google.com/apis/library/alloydb.googleapis.com) is enabled.
2. The user or service account executing the tool has one of the following IAM roles:
1. The [AlloyDB
API](https://console.cloud.google.com/apis/library/alloydb.googleapis.com)
is enabled.
2. The user or service account executing the tool has one of the following IAM
roles:
- `roles/alloydb.admin` (the AlloyDB Admin predefined IAM role)
- `roles/owner` (the Owner basic IAM role)
- `roles/editor` (the Editor basic IAM role)

View File

@@ -8,7 +8,9 @@ aliases: [/resources/tools/alloydb-get-cluster]
## About
The `alloydb-get-cluster` tool retrieves detailed information for a single, specified AlloyDB cluster. It is compatible with [alloydb-admin](../../sources/alloydb-admin.md) source.
The `alloydb-get-cluster` tool retrieves detailed information for a single,
specified AlloyDB cluster. It is compatible with
the [alloydb-admin](../../sources/alloydb-admin.md) source.
| Parameter | Type | Description | Required |
| :--------- | :----- | :------------------------------------------------- | :------- |

View File

@@ -8,7 +8,9 @@ aliases: [/resources/tools/alloydb-get-user]
## About
The `alloydb-get-user` tool retrieves detailed information for a single, specified AlloyDB user. It is compatible with [alloydb-admin](../../sources/alloydb-admin.md) source.
The `alloydb-get-user` tool retrieves detailed information for a single,
specified AlloyDB user. It is compatible with
the [alloydb-admin](../../sources/alloydb-admin.md) source.
| Parameter | Type | Description | Required |
| :--------- | :----- | :------------------------------------------------- | :------- |

View File

@@ -8,9 +8,13 @@ aliases: [/resources/tools/alloydb-list-clusters]
## About
The `alloydb-list-clusters` tool retrieves AlloyDB cluster information for all or specified locations in a given project. It is compatible with [alloydb-admin](../../sources/alloydb-admin.md) source.
The `alloydb-list-clusters` tool retrieves AlloyDB cluster information for all
or specified locations in a given project. It is compatible with
the [alloydb-admin](../../sources/alloydb-admin.md) source.
`alloydb-list-clusters` tool lists the detailed information of AlloyDB cluster(cluster name, state, configuration, etc) for a given project and location. The tool takes the following input parameters:
The `alloydb-list-clusters` tool lists detailed information about AlloyDB
clusters (cluster name, state, configuration, etc.) for a given project and
location. The tool takes the following input parameters:
| Parameter | Type | Description | Required |
| :--------- | :----- | :----------------------------------------------------------------------------------------------- | :------- |

View File

@@ -8,9 +8,14 @@ aliases: [/resources/tools/alloydb-list-instances]
## About
The `alloydb-list-instances` tool retrieves AlloyDB instance information for all or specified clusters and locations in a given project. It is compatible with [alloydb-admin](../../sources/alloydb-admin.md) source.
The `alloydb-list-instances` tool retrieves AlloyDB instance information for all
or specified clusters and locations in a given project. It is compatible with
the [alloydb-admin](../../sources/alloydb-admin.md) source.
`alloydb-list-instances` tool lists the detailed information of AlloyDB instances (instance name, type, IP address, state, configuration, etc) for a given project, cluster and location. The tool takes the following input parameters:
The `alloydb-list-instances` tool lists detailed information about AlloyDB
instances (instance name, type, IP address, state, configuration, etc.) for a
given project, cluster, and location. The tool takes the following input
parameters:
| Parameter | Type | Description | Required |
| :--------- | :----- | :--------------------------------------------------------------------------------------------------------- | :------- |

View File

@@ -8,7 +8,9 @@ aliases: [/resources/tools/alloydb-list-users]
## About
The `alloydb-list-users` tool lists all database users within an AlloyDB cluster. It is compatible with [alloydb-admin](../../sources/alloydb-admin.md) source.
The `alloydb-list-users` tool lists all database users within an AlloyDB
cluster. It is compatible with the
[alloydb-admin](../../sources/alloydb-admin.md) source.
The tool takes the following input parameters:
| Parameter | Type | Description | Required |

View File

@@ -8,7 +8,8 @@ description: "Wait for a long-running AlloyDB operation to complete.\n"
The `alloydb-wait-for-operation` tool is a utility tool that waits for a
long-running AlloyDB operation to complete. It does this by polling the AlloyDB
Admin API operation status endpoint until the operation is finished, using
exponential backoff. It is compatible with [alloydb-admin](../../sources/alloydb-admin.md) source.
exponential backoff. It is compatible with
the [alloydb-admin](../../sources/alloydb-admin.md) source.
| Parameter | Type | Description | Required |
| :---------- | :----- | :--------------------------------------------------- | :------- |

View File

@@ -10,7 +10,9 @@ aliases:
## About
A `bigquery-analyze-contribution` tool performs contribution analysis in BigQuery by creating a temporary `CONTRIBUTION_ANALYSIS` model and then querying it with `ML.GET_INSIGHTS` to find top contributors for a given metric.
A `bigquery-analyze-contribution` tool performs contribution analysis in
BigQuery by creating a temporary `CONTRIBUTION_ANALYSIS` model and then querying
it with `ML.GET_INSIGHTS` to find top contributors for a given metric.
It's compatible with the following sources:
@@ -18,12 +20,24 @@ It's compatible with the following sources:
`bigquery-analyze-contribution` takes the following parameters:
- **input_data** (string, required): The data that contain the test and control data to analyze. This can be a fully qualified BigQuery table ID (e.g., `my-project.my_dataset.my_table`) or a SQL query that returns the data.
- **contribution_metric** (string, required): The name of the column that contains the metric to analyze. This can be SUM(metric_column_name), SUM(numerator_metric_column_name)/SUM(denominator_metric_column_name) or SUM(metric_sum_column_name)/COUNT(DISTINCT categorical_column_name) depending the type of metric to analyze.
- **is_test_col** (string, required): The name of the column that identifies whether a row is in the test or control group. The column must contain boolean values.
- **dimension_id_cols** (array of strings, optional): An array of column names that uniquely identify each dimension.
- **top_k_insights_by_apriori_support** (integer, optional): The number of top insights to return, ranked by apriori support. Default to '30'.
- **pruning_method** (string, optional): The method to use for pruning redundant insights. Can be `'NO_PRUNING'` or `'PRUNE_REDUNDANT_INSIGHTS'`. Defaults to `'PRUNE_REDUNDANT_INSIGHTS'`.
- **input_data** (string, required): The data that contain the test and control
data to analyze. This can be a fully qualified BigQuery table ID (e.g.,
`my-project.my_dataset.my_table`) or a SQL query that returns the data.
- **contribution_metric** (string, required): The name of the column that
contains the metric to analyze. This can be SUM(metric_column_name),
SUM(numerator_metric_column_name)/SUM(denominator_metric_column_name) or
  SUM(metric_sum_column_name)/COUNT(DISTINCT categorical_column_name) depending
  on the type of metric to analyze.
- **is_test_col** (string, required): The name of the column that identifies
whether a row is in the test or control group. The column must contain boolean
values.
- **dimension_id_cols** (array of strings, optional): An array of column names
that uniquely identify each dimension.
- **top_k_insights_by_apriori_support** (integer, optional): The number of top
  insights to return, ranked by apriori support. Defaults to '30'.
- **pruning_method** (string, optional): The method to use for pruning redundant
insights. Can be `'NO_PRUNING'` or `'PRUNE_REDUNDANT_INSIGHTS'`. Defaults to
`'PRUNE_REDUNDANT_INSIGHTS'`.
## Example
@@ -37,11 +51,16 @@ tools:
```
## Sample Prompt
You can prepare a sample table following https://cloud.google.com/bigquery/docs/get-contribution-analysis-insights.
You can prepare a sample table following
https://cloud.google.com/bigquery/docs/get-contribution-analysis-insights.
Then use the following sample prompts to call this tool:
- What drives the changes in sales in the table `bqml_tutorial.iowa_liquor_sales_sum_data`? Use the project id myproject.
- Analyze the contribution for the `total_sales` metric in the table `bqml_tutorial.iowa_liquor_sales_sum_data`. The test group is identified by the `is_test` column. The dimensions are `store_name`, `city`, `vendor_name`, `category_name` and `item_description`.
- What drives the changes in sales in the table
`bqml_tutorial.iowa_liquor_sales_sum_data`? Use the project id myproject.
- Analyze the contribution for the `total_sales` metric in the table
`bqml_tutorial.iowa_liquor_sales_sum_data`. The test group is identified by
the `is_test` column. The dimensions are `store_name`, `city`, `vendor_name`,
`category_name` and `item_description`.
## Reference

View File

@@ -10,18 +10,21 @@ aliases:
## About
A `bigquery-conversational-analytics` tool allows you to ask questions about your data in natural language.
A `bigquery-conversational-analytics` tool allows you to ask questions about
your data in natural language.
This function takes a user's question (which can include conversational history for context)
and references to specific BigQuery tables, and sends them to a stateless conversational API.
This function takes a user's question (which can include conversational history
for context) and references to specific BigQuery tables, and sends them to a
stateless conversational API.
The API uses a GenAI agent to understand the question, generate and execute SQL queries
and Python code, and formulate an answer. This function returns a detailed, sequential
log of this entire process, which includes any generated SQL or Python code, the data
retrieved, and the final text answer.
The API uses a GenAI agent to understand the question, generate and execute SQL
queries and Python code, and formulate an answer. This function returns a
detailed, sequential log of this entire process, which includes any generated
SQL or Python code, the data retrieved, and the final text answer.
**Note**: This tool requires additional setup in your project. Please refer to the
official [Conversational Analytics API documentation](https://cloud.google.com/gemini/docs/conversational-analytics-api/overview)
**Note**: This tool requires additional setup in your project. Please refer to
the official [Conversational Analytics API
documentation](https://cloud.google.com/gemini/docs/conversational-analytics-api/overview)
for instructions.
It's compatible with the following sources:
@@ -30,11 +33,11 @@ It's compatible with the following sources:
`bigquery-conversational-analytics` accepts the following parameters:
- **`user_query_with_context`:** The user's question, potentially including conversation history and system
instructions for context.
- **`table_references`:** A JSON string of a list of BigQuery tables to use as context.
Each object in the list must contain `projectId`, `datasetId`, and `tableId`. Example:
`'[{"projectId": "my-gcp-project", "datasetId": "my_dataset", "tableId": "my_table"}]'`
- **`user_query_with_context`:** The user's question, potentially including
conversation history and system instructions for context.
- **`table_references`:** A JSON string of a list of BigQuery tables to use as
context. Each object in the list must contain `projectId`, `datasetId`, and
`tableId`. Example: `'[{"projectId": "my-gcp-project", "datasetId": "my_dataset", "tableId": "my_table"}]'`
The tool's behavior regarding these parameters is influenced by the `allowedDatasets`
restriction on the `bigquery` source:
@@ -57,9 +60,9 @@ tools:
```
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:------------------------------------------:|:------------:|--------------------------------------------------------------------------------------------------|
| kind | string | true | Must be "bigquery-conversational-analytics". |
| source | string | true | Name of the source for chat. |
| description | string | true | Description of the tool
that is passed to the LLM. |
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|----------------------------------------------------|
| kind | string | true | Must be "bigquery-conversational-analytics". |
| source | string | true | Name of the source for chat. |
| description | string | true | Description of the tool that is passed to the LLM. |

View File

@@ -15,13 +15,23 @@ It's compatible with the following sources:
- [bigquery](../../sources/bigquery.md)
`bigquery-forecast` constructs and executes a `SELECT * FROM AI.FORECAST(...)` query based on the provided parameters:
`bigquery-forecast` constructs and executes a `SELECT * FROM AI.FORECAST(...)`
query based on the provided parameters:
- **history_data** (string, required): This specifies the source of the historical time series data. It can be either a fully qualified BigQuery table ID (e.g., my-project.my_dataset.my_table) or a SQL query that returns the data.
- **timestamp_col** (string, required): The name of the column in your history_data that contains the timestamps.
- **data_col** (string, required): The name of the column in your history_data that contains the numeric values to be forecasted.
- **id_cols** (array of strings, optional): If you are forecasting multiple time series at once (e.g., sales for different products), this parameter takes an array of column names that uniquely identify each series. It defaults to an empty array if not provided.
- **horizon** (integer, optional): The number of future time steps you want to predict. It defaults to 10 if not specified.
- **history_data** (string, required): This specifies the source of the
historical time series data. It can be either a fully qualified BigQuery table
ID (e.g., my-project.my_dataset.my_table) or a SQL query that returns the
data.
- **timestamp_col** (string, required): The name of the column in your
history_data that contains the timestamps.
- **data_col** (string, required): The name of the column in your history_data
that contains the numeric values to be forecasted.
- **id_cols** (array of strings, optional): If you are forecasting multiple time
series at once (e.g., sales for different products), this parameter takes an
array of column names that uniquely identify each series. It defaults to an
empty array if not provided.
- **horizon** (integer, optional): The number of future time steps you want to
predict. It defaults to 10 if not specified.
## Example
@@ -42,8 +52,8 @@ You can use the following sample prompts to call this tool:
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:------------------------------------------:|:------------:|--------------------------------------------------------------------------------------------------|
| kind | string | true | Must be "bigquery-forecast". |
| source | string | true | Name of the source the forecast tool should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|---------------------------------------------------------|
| kind | string | true | Must be "bigquery-forecast". |
| source | string | true | Name of the source the forecast tool should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |

View File

@@ -15,11 +15,13 @@ It's compatible with the following sources:
- [bigquery](../../sources/bigquery.md)
`bigquery-search-catalog` takes a required `query` parameter based on which
entries are filtered and returned to the user. It also optionally accepts following parameters:
entries are filtered and returned to the user. It also optionally accepts the
following parameters:
- `datasetIds` - The IDs of the BigQuery datasets.
- `projectIds` - The IDs of the BigQuery projects.
- `types` - The type of the data. Accepted values are: CONNECTION, POLICY, DATASET, MODEL, ROUTINE, TABLE, VIEW.
- `types` - The type of the data. Accepted values are: CONNECTION, POLICY,
DATASET, MODEL, ROUTINE, TABLE, VIEW.
- `pageSize` - Number of results in the search page. Defaults to `5`.
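
A minimal tool configuration might look like the sketch below; the tool and
source names are illustrative.

```yaml
tools:
  search_catalog:
    kind: bigquery-search-catalog
    source: my-bigquery-source
    description: Use this tool to search for entries in the BigQuery catalog.
```
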
## Requirements

View File

@@ -12,7 +12,8 @@ aliases:
## About
A `clickhouse-execute-sql` tool executes a SQL statement against a ClickHouse
database. It's compatible with the [clickhouse](../../sources/clickhouse.md) source.
database. It's compatible with the [clickhouse](../../sources/clickhouse.md)
source.
`clickhouse-execute-sql` takes one input parameter `sql` and runs the SQL
statement against the specified `source`. This tool includes query logging
@@ -33,14 +34,14 @@ tools:
## Parameters
| **parameter** | **type** | **required** | **description** |
|---------------|:--------:|:------------:|----------------------------------------------------|
| **parameter** | **type** | **required** | **description** |
|---------------|:--------:|:------------:|---------------------------------------------------|
| sql | string | true | The SQL statement to execute against the database |
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|---------------------------------------------------------|
| kind | string | true | Must be "clickhouse-execute-sql". |
| source | string | true | Name of the ClickHouse source to execute SQL against. |
| description | string | true | Description of the tool that is passed to the LLM. |
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|-------------------------------------------------------|
| kind | string | true | Must be "clickhouse-execute-sql". |
| source | string | true | Name of the ClickHouse source to execute SQL against. |
| description | string | true | Description of the tool that is passed to the LLM. |

View File

@@ -10,8 +10,9 @@ aliases:
## About
A `clickhouse-list-databases` tool lists all available databases in a
ClickHouse instance. It's compatible with the [clickhouse](../../sources/clickhouse.md) source.
A `clickhouse-list-databases` tool lists all available databases in a ClickHouse
instance. It's compatible with the [clickhouse](../../sources/clickhouse.md)
source.
This tool executes the `SHOW DATABASES` command and returns a list of all
databases accessible to the configured user, making it useful for database
@@ -44,10 +45,10 @@ Example response:
## Reference
| **field** | **type** | **required** | **description** |
|--------------------|:------------------:|:------------:|-----------------------------------------------------------|
| kind | string | true | Must be "clickhouse-list-databases". |
| source | string | true | Name of the ClickHouse source to list databases from. |
| description | string | true | Description of the tool that is passed to the LLM. |
| authRequired | array of string | false | Authentication services required to use this tool. |
| parameters | array of Parameter | false | Parameters for the tool (typically not used). |
| **field** | **type** | **required** | **description** |
|--------------|:------------------:|:------------:|-------------------------------------------------------|
| kind | string | true | Must be "clickhouse-list-databases". |
| source | string | true | Name of the ClickHouse source to list databases from. |
| description | string | true | Description of the tool that is passed to the LLM. |
| authRequired | array of string | false | Authentication services required to use this tool. |
| parameters | array of Parameter | false | Parameters for the tool (typically not used). |

View File

@@ -10,8 +10,9 @@ aliases:
## About
A `clickhouse-sql` tool executes SQL queries as prepared statements against a
ClickHouse database. It's compatible with the [clickhouse](../../sources/clickhouse.md) source.
A `clickhouse-sql` tool executes SQL queries as prepared statements against a
ClickHouse database. It's compatible with the
[clickhouse](../../sources/clickhouse.md) source.
This tool supports both template parameters (for SQL statement customization)
and regular parameters (for prepared statement values), providing flexible
@@ -71,11 +72,11 @@ tools:
## Reference
| **field** | **type** | **required** | **description** |
|--------------------|:------------------:|:------------:|-----------------------------------------------------------|
| kind | string | true | Must be "clickhouse-sql". |
| source | string | true | Name of the ClickHouse source to execute SQL against. |
| description | string | true | Description of the tool that is passed to the LLM. |
| statement | string | true | The SQL statement template to execute. |
| parameters | array of Parameter | false | Parameters for prepared statement values. |
| templateParameters | array of Parameter | false | Parameters for SQL statement template customization. |
| **field** | **type** | **required** | **description** |
|--------------------|:------------------:|:------------:|-------------------------------------------------------|
| kind | string | true | Must be "clickhouse-sql". |
| source | string | true | Name of the ClickHouse source to execute SQL against. |
| description | string | true | Description of the tool that is passed to the LLM. |
| statement | string | true | The SQL statement template to execute. |
| parameters | array of Parameter | false | Parameters for prepared statement values. |
| templateParameters | array of Parameter | false | Parameters for SQL statement template customization. |
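
To make the split between the two parameter kinds concrete, a configuration
might look roughly like the sketch below. The placeholder syntax shown is an
assumption (a Go-template `{{.table}}` for the template parameter and a
positional `?` for the prepared-statement value); consult the linked parameter
documentation for the exact conventions.

```yaml
tools:
  search_users:
    kind: clickhouse-sql
    source: my-clickhouse-source
    description: Search users in a given table by country.
    statement: SELECT id, name FROM {{.table}} WHERE country = ? LIMIT 10
    templateParameters:
      - name: table
        type: string
        description: Table name substituted into the statement before preparation.
    parameters:
      - name: country
        type: string
        description: Country code bound as a prepared-statement value.
```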

View File

@@ -5,42 +5,54 @@ weight: 1
description: The "cloud-monitoring-query-prometheus" tool fetches time series metrics for a project using a given prometheus query.
---
The `cloud-monitoring-query-prometheus` tool fetches timeseries metrics data from Google Cloud Monitoring for a project using a given prometheus query.
The `cloud-monitoring-query-prometheus` tool fetches timeseries metrics data
from Google Cloud Monitoring for a project using a given prometheus query.
## About
The `cloud-monitoring-query-prometheus` tool allows you to query all metrics available in Google Cloud Monitoring using the Prometheus Query Language (PromQL).
The `cloud-monitoring-query-prometheus` tool allows you to query all metrics
available in Google Cloud Monitoring using the Prometheus Query Language
(PromQL).
It's compatible with any of the following sources:
- [cloud-monitoring](../../sources/cloud-monitoring.md)
## Prerequisites
To use this tool, you need to have the following IAM role on your Google Cloud project:
To use this tool, you need to have the following IAM role on your Google Cloud
project:
- `roles/monitoring.viewer`
## Arguments
| Name | Type | Description |
| ----------- | ------ | ----------------------------------------- |
| `projectId` | string | The Google Cloud project ID. |
| `query` | string | The Prometheus query to execute. |
| Name | Type | Description |
|-------------|--------|----------------------------------|
| `projectId` | string | The Google Cloud project ID. |
| `query` | string | The Prometheus query to execute. |
## Use Cases
- **Ad-hoc analysis:** Quickly investigate performance issues by executing direct promql queries for a database instance.
- **Prebuilt Configs:** Use the already added prebuilt tools mentioned in prebuilt-tools.md to query the databases system/query level metrics.
- **Ad-hoc analysis:** Quickly investigate performance issues by executing
  direct PromQL queries for a database instance.
- **Prebuilt Configs:** Use the prebuilt tools mentioned in
  prebuilt-tools.md to query database system- and query-level metrics.
Here are some common use cases for the `cloud-monitoring-query-prometheus` tool:
- **Monitoring resource utilization:** Track CPU, memory, and disk usage for your database instance (Can use the [prebuilt tools](../../../reference/prebuilt-tools.md)).
- **Monitoring query performance:** Monitor latency, execution_time, wait_time for database instance or even for the queries running (Can use the [prebuilt tools](../../../reference/prebuilt-tools.md)).
- **System Health:** Get the overall system health for the database instance (Can use the [prebuilt tools](../../../reference/prebuilt-tools.md)).
- **Monitoring resource utilization:** Track CPU, memory, and disk usage for
your database instance (Can use the [prebuilt
tools](../../../reference/prebuilt-tools.md)).
- **Monitoring query performance:** Monitor latency, execution_time, wait_time
for database instance or even for the queries running (Can use the [prebuilt
tools](../../../reference/prebuilt-tools.md)).
- **System Health:** Get the overall system health for the database instance
(Can use the [prebuilt tools](../../../reference/prebuilt-tools.md)).
## Examples
Here are some examples of how to use the `cloud-monitoring-query-prometheus` tool.
Here are some examples of how to use the `cloud-monitoring-query-prometheus`
tool.
```yaml
@@ -56,8 +68,8 @@ tools:
```
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:------------------------------------------:|:------------:|--------------------------------------------------------------------------------------------------|
| kind | string | true | Must be cloud-monitoring-query-prometheus. |
| source | string | true | The name of an `cloud-monitoring` source. |
| description | string | true | Description of the tool that is passed to the agent. |
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|------------------------------------------------------|
| kind | string | true | Must be cloud-monitoring-query-prometheus. |
| source | string | true | The name of an `cloud-monitoring` source. |
| description | string | true | Description of the tool that is passed to the agent. |

View File

@@ -6,7 +6,8 @@ description: >
Create a new database in a Cloud SQL instance.
---
The `cloud-sql-create-database` tool creates a new database in a specified Cloud SQL instance.
The `cloud-sql-create-database` tool creates a new database in a specified Cloud
SQL instance.
{{< notice info >}}
This tool uses a `source` of kind `cloud-sql-admin`.

View File

@@ -6,7 +6,8 @@ description: >
Create a new user in a Cloud SQL instance.
---
The `cloud-sql-create-users` tool creates a new user in a specified Cloud SQL instance. It can create both built-in and IAM users.
The `cloud-sql-create-users` tool creates a new user in a specified Cloud SQL
instance. It can create both built-in and IAM users.
{{< notice info >}}
This tool uses a `source` of kind `cloud-sql-admin`.

View File

@@ -6,7 +6,8 @@ description: >
Get a Cloud SQL instance resource.
---
The `cloud-sql-get-instance` tool retrieves a Cloud SQL instance resource using the Cloud SQL Admin API.
The `cloud-sql-get-instance` tool retrieves a Cloud SQL instance resource using
the Cloud SQL Admin API.
{{< notice info >}}
This tool uses a `source` of kind `cloud-sql-admin`.

View File

@@ -9,13 +9,14 @@ The `cloud-sql-list-instances` tool lists all Cloud SQL instances in a specified
Google Cloud project.
{{< notice info >}}
This tool uses the `cloud-sql-admin` source, which automatically handles authentication on behalf of the user.
This tool uses the `cloud-sql-admin` source, which automatically handles
authentication on behalf of the user.
{{< /notice >}}
## Configuration
Here is an example of how to configure the `cloud-sql-list-instances` tool in your
`tools.yaml` file:
Here is an example of how to configure the `cloud-sql-list-instances` tool in
your `tools.yaml` file:
```yaml
sources:
@@ -39,8 +40,8 @@ The `cloud-sql-list-instances` tool has one required parameter:
## Reference
| **field** | **type** | **required** | **description** |
| ------------ | :-------: | :----------: | ----------------------------------------------------------------------------------- |
| kind | string | true | Must be "cloud-sql-list-instances". |
| description | string | false | Description of the tool that is passed to the agent. |
| source | string | true | The name of the `cloud-sql-admin` source to use for this tool. |
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|----------------------------------------------------------------|
| kind | string | true | Must be "cloud-sql-list-instances". |
| description | string | false | Description of the tool that is passed to the agent. |
| source | string | true | The name of the `cloud-sql-admin` source to use for this tool. |

View File

@@ -5,7 +5,8 @@ weight: 10
description: "Create a Cloud SQL for SQL Server instance."
---
The `cloud-sql-mssql-create-instance` tool creates a Cloud SQL for SQL Server instance using the Cloud SQL Admin API.
The `cloud-sql-mssql-create-instance` tool creates a Cloud SQL for SQL Server
instance using the Cloud SQL Admin API.
{{< notice info >}}
This tool uses a `source` of kind `cloud-sql-admin`.
@@ -34,9 +35,9 @@ tools:
### Tool Inputs
| **parameter** | **type** | **required** | **description** |
| --------------- | :------: | :----------: | -------------------------------------------------------------------------------------------------------------------------------------------------------- |
|-----------------|:--------:|:------------:|----------------------------------------------------------------------------------------------------------------------------------------------------------|
| project | string | true | The project ID. |
| name | string | true | The name of the instance. |
| databaseVersion | string | false | The database version for SQL Server. If not specified, defaults to the latest available version (e.g., SQLSERVER_2022_STANDARD). |
| databaseVersion | string | false | The database version for SQL Server. If not specified, defaults to the latest available version (e.g., SQLSERVER_2022_STANDARD). |
| rootPassword | string | true | The root password for the instance. |
| editionPreset | string | false | The edition of the instance. Can be `Production` or `Development`. This determines the default machine type and availability. Defaults to `Development`. |

View File

@@ -5,7 +5,8 @@ weight: 2
description: "Create a Cloud SQL for MySQL instance."
---
The `cloud-sql-mysql-create-instance` tool creates a new Cloud SQL for MySQL instance in a specified Google Cloud project.
The `cloud-sql-mysql-create-instance` tool creates a new Cloud SQL for MySQL
instance in a specified Google Cloud project.
{{< notice info >}}
This tool uses the `cloud-sql-admin` source.
@@ -13,7 +14,8 @@ This tool uses the `cloud-sql-admin` source.
## Configuration
Here is an example of how to configure the `cloud-sql-mysql-create-instance` tool in your `tools.yaml` file:
Here is an example of how to configure the `cloud-sql-mysql-create-instance`
tool in your `tools.yaml` file:
```yaml
sources:

View File

@@ -5,7 +5,8 @@ weight: 10
description: Create a Cloud SQL for PostgreSQL instance.
---
The `cloud-sql-postgres-create-instance` tool creates a Cloud SQL for PostgreSQL instance using the Cloud SQL Admin API.
The `cloud-sql-postgres-create-instance` tool creates a Cloud SQL for PostgreSQL
instance using the Cloud SQL Admin API.
{{< notice info >}}
This tool uses a `source` of kind `cloud-sql-admin`.
@@ -34,9 +35,9 @@ tools:
### Tool Inputs
| **parameter** | **type** | **required** | **description** |
| --------------- | :------: | :----------: | -------------------------------------------------------------------------------------------------------------------------------------------------------- |
|-----------------|:--------:|:------------:|----------------------------------------------------------------------------------------------------------------------------------------------------------|
| project | string | true | The project ID. |
| name | string | true | The name of the instance. |
| databaseVersion | string | false | The database version for Postgres. If not specified, defaults to the latest available version (e.g., POSTGRES_17). |
| databaseVersion | string | false | The database version for Postgres. If not specified, defaults to the latest available version (e.g., POSTGRES_17). |
| rootPassword | string | true | The root password for the instance. |
| editionPreset | string | false | The edition of the instance. Can be `Production` or `Development`. This determines the default machine type and availability. Defaults to `Development`. |
| editionPreset | string | false | The edition of the instance. Can be `Production` or `Development`. This determines the default machine type and availability. Defaults to `Development`. |

View File

@@ -10,15 +10,26 @@ aliases:
## About
A `dataplex-lookup-entry` tool returns details of a particular entry in Dataplex Catalog.
It's compatible with the following sources:
A `dataplex-lookup-entry` tool returns details of a particular entry in Dataplex
Catalog. It's compatible with the following sources:
- [dataplex](../sources/dataplex.md)
`dataplex-lookup-entry` takes a required `name` parameter which contains the project and location to which the request should be attributed in the following form: projects/{project}/locations/{location} and also a required `entry` parameter which is the resource name of the entry in the following form: projects/{project}/locations/{location}/entryGroups/{entryGroup}/entries/{entry}. It also optionally accepts following parameters:
- `view` - View to control which parts of an entry the service should return. It takes integer values from 1-4 corresponding to type of view - BASIC, FULL, CUSTOM, ALL
- `aspectTypes` - Limits the aspects returned to the provided aspect types in the format `projects/{project}/locations/{location}/aspectTypes/{aspectType}`. It only works for CUSTOM view.
- `paths` - Limits the aspects returned to those associated with the provided paths within the Entry. It only works for CUSTOM view.
`dataplex-lookup-entry` takes a required `name` parameter which contains the
project and location to which the request should be attributed in the following
form: projects/{project}/locations/{location} and also a required `entry`
parameter which is the resource name of the entry in the following form:
projects/{project}/locations/{location}/entryGroups/{entryGroup}/entries/{entry}.
It also optionally accepts the following parameters:
- `view` - View to control which parts of an entry the service should return.
  It takes integer values from 1-4 corresponding to the view type: BASIC,
  FULL, CUSTOM, or ALL.
- `aspectTypes` - Limits the aspects returned to the provided aspect types in
the format
`projects/{project}/locations/{location}/aspectTypes/{aspectType}`. It only
works for CUSTOM view.
- `paths` - Limits the aspects returned to those associated with the provided
paths within the Entry. It only works for CUSTOM view.
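
For example, a CUSTOM-view lookup restricted to a single aspect type might pass
values shaped like the following (all identifiers are hypothetical):

```yaml
name: projects/my-project/locations/us-central1
entry: projects/my-project/locations/us-central1/entryGroups/my-group/entries/my-entry
view: 3    # CUSTOM
aspectTypes:
  - projects/my-project/locations/us-central1/aspectTypes/my-aspect-type
```
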
## Requirements
@@ -53,8 +64,8 @@ tools:
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:------------------------------------------:|:------------:|--------------------------------------------------------------------------------------------------|
| kind | string | true | Must be "dataplex-lookup-entry". |
| source | string | true | Name of the source the tool should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|----------------------------------------------------|
| kind | string | true | Must be "dataplex-lookup-entry". |
| source | string | true | Name of the source the tool should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |

View File

@@ -10,16 +10,19 @@ aliases:
## About
A `dataplex-search-aspect-types` tool allows to fetch the metadata template of aspect types based on search query.
A `dataplex-search-aspect-types` tool allows to fetch the metadata template of
aspect types based on search query.
It's compatible with the following sources:
- [dataplex](../../sources/dataplex.md)
`dataplex-search-aspect-types` optionally accepts the following parameters:
- `query` - Narrows down the search of aspect types to value of this parameter. If not provided, it fetches all aspect types available to the user.
- `query` - Narrows down the search of aspect types to the value of this
  parameter. If not provided, it fetches all aspect types available to the user.
- `pageSize` - Number of returned aspect types in the search page. Defaults to `5`.
- `orderBy` - Specifies the ordering of results. Supported values are: relevance (default), last_modified_timestamp, last_modified_timestamp asc.
- `orderBy` - Specifies the ordering of results. Supported values are: relevance
(default), last_modified_timestamp, last_modified_timestamp asc.
## Requirements
@@ -55,8 +58,8 @@ tools:
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:------------------------------------------:|:------------:|--------------------------------------------------------------------------------------------------|
| kind | string | true | Must be "dataplex-search-aspect-types". |
| source | string | true | Name of the source the tool should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|----------------------------------------------------|
| kind | string | true | Must be "dataplex-search-aspect-types". |
| source | string | true | Name of the source the tool should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |

View File

@@ -17,7 +17,8 @@ It's compatible with the following sources:
- [dataplex](../../sources/dataplex.md)
`dataplex-search-entries` takes a required `query` parameter based on which
entries are filtered and returned to the user. It also optionally accepts following parameters:
entries are filtered and returned to the user. It also optionally accepts the
following parameters:
- `pageSize` - Number of results in the search page. Defaults to `5`.
- `orderBy` - Specifies the ordering of results. Supported values are: relevance
@@ -57,8 +58,8 @@ tools:
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:------------------------------------------:|:------------:|--------------------------------------------------------------------------------------------------|
| kind | string | true | Must be "dataplex-search-entries". |
| source | string | true | Name of the source the tool should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|----------------------------------------------------|
| kind | string | true | Must be "dataplex-search-entries". |
| source | string | true | Name of the source the tool should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |

View File

@@ -113,12 +113,12 @@ tools:
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:------------------------------------------:|:------------:|----------------------------------------------------------------------------------------------|
| kind | string | true | Must be "dgraph-dql". |
| source | string | true | Name of the source the dql query should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |
| statement | string | true | dql statement to execute |
| isQuery | boolean | false | To run statement as query set true otherwise false |
| timeout | string | false | To set timeout for query |
| **field** | **type** | **required** | **description** |
|-------------|:---------------------------------------:|:------------:|-------------------------------------------------------------------------------------------|
| kind | string | true | Must be "dgraph-dql". |
| source      | string                                  | true         | Name of the source the DQL query should execute on.                                        |
| description | string                                  | true         | Description of the tool that is passed to the LLM.                                         |
| statement   | string                                  | true         | The DQL statement to execute.                                                              |
| isQuery     | boolean                                 | false        | Set to true to run the statement as a query; false otherwise.                              |
| timeout     | string                                  | false        | Timeout for the query.                                                                     |
| parameters | [parameters](../#specifying-parameters) | false | List of [parameters](../#specifying-parameters) that will be used with the dql statement. |

View File

@@ -34,8 +34,8 @@ tools:
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:------------------------------------------:|:------------:|--------------------------------------------------------------------------------------------------|
| kind | string | true | Must be "firebird-execute-sql". |
| source | string | true | Name of the source the SQL should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|----------------------------------------------------|
| kind | string | true | Must be "firebird-execute-sql". |
| source | string | true | Name of the source the SQL should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |

View File

@@ -125,11 +125,11 @@ tools:
## Reference
| **field** | **type** | **required** | **description** |
|---------------------|:---------------------------------------------------------:|:------------:|--------------------------------------------------------------------------------------------------------------------------------------------|
| kind | string | true | Must be "firebird-sql". |
| source | string | true | Name of the source the SQL should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |
| statement | string | true | SQL statement to execute on. |
| parameters | [parameters](_index#specifying-parameters) | false | List of [parameters](_index#specifying-parameters) that will be inserted into the SQL statement. |
| templateParameters | [templateParameters](_index#template-parameters) | false | List of [templateParameters](_index#template-parameters) that will be inserted into the SQL statement before executing prepared statement. |
| **field** | **type** | **required** | **description** |
|--------------------|:------------------------------------------------:|:------------:|--------------------------------------------------------------------------------------------------------------------------------------------|
| kind | string | true | Must be "firebird-sql". |
| source | string | true | Name of the source the SQL should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |
| statement | string | true | SQL statement to execute on. |
| parameters | [parameters](_index#specifying-parameters) | false | List of [parameters](_index#specifying-parameters) that will be inserted into the SQL statement. |
| templateParameters | [templateParameters](_index#template-parameters) | false | List of [templateParameters](_index#template-parameters) that will be inserted into the SQL statement before executing prepared statement. |

View File

@@ -9,29 +9,33 @@ aliases:
---
## Description
The `firestore-add-documents` tool allows you to add new documents to a Firestore collection. It supports all Firestore data types using Firestore's native JSON format. The tool automatically generates a unique document ID for each new document.
The `firestore-add-documents` tool allows you to add new documents to a
Firestore collection. It supports all Firestore data types using Firestore's
native JSON format. The tool automatically generates a unique document ID for
each new document.
## Parameters
| Parameter | Type | Required | Description |
|------------------|---------|----------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `collectionPath` | string | Yes | The path of the collection where the document will be added |
| `documentData` | map | Yes | The data to be added as a document to the given collection. Must use [Firestore's native JSON format](https://cloud.google.com/firestore/docs/reference/rest/Shared.Types/ArrayValue#Value) with typed values |
| `returnData` | boolean | No | If set to true, the output will include the data of the created document. Defaults to false to help avoid overloading the context |
## Output
The tool returns a map containing:
| Field | Type | Description |
|----------------|--------|--------------------------------------------------------------------------------------------------------------------------------|
| `documentPath` | string | The full resource name of the created document (e.g., `projects/{projectId}/databases/{databaseId}/documents/{document_path}`) |
| `createTime` | string | The timestamp when the document was created |
| `documentData` | map | The data that was added (only included when `returnData` is true) |
## Data Type Format
The tool requires Firestore's native JSON format for document data. Each field
must be wrapped with its type indicator:
### Basic Types
- **String**: `{"stringValue": "your string"}`
@@ -259,16 +263,25 @@ Common errors include:
## Best Practices
1. **Always use typed values**: Every field must be wrapped with its appropriate
type indicator (e.g., `{"stringValue": "text"}`)
2. **Integer values can be strings**: The tool accepts integer values as strings
(e.g., `{"integerValue": "1500"}`)
3. **Use returnData sparingly**: Only set to true when you need to verify the
exact data that was written
4. **Validate data before sending**: Ensure your data matches Firestore's native
JSON format
5. **Handle timestamps properly**: Use RFC3339 format for timestamp strings
6. **Base64 encode binary data**: Binary data must be base64 encoded in the
`bytesValue` field
7. **Consider security rules**: Ensure your Firestore security rules allow
document creation in the target collection
## Related Tools
- [`firestore-get-documents`](firestore-get-documents.md) - Retrieve documents
by their paths
- [`firestore-query-collection`](firestore-query-collection.md) - Query
documents in a collection
- [`firestore-delete-documents`](firestore-delete-documents.md) - Delete
documents from Firestore

View File

@@ -19,9 +19,9 @@ It's compatible with the following sources:
- [firestore](../../sources/firestore.md)
`firestore-list-collections` takes an optional `parentPath` parameter to specify
a document path. If provided, it lists all subcollections of that document. If
not provided, it lists all root-level collections in the database.
## Example

View File

@@ -10,18 +10,28 @@ aliases:
## Overview
The `firestore-query` tool allows you to query Firestore collections with
dynamic, parameterizable filters that support Firestore's native JSON value
types. This tool is designed for querying a single collection, which is the
standard pattern in Firestore. The collection path itself can be parameterized,
making it flexible for various use cases. This tool is particularly useful when
you need to create reusable query templates with parameters that can be
substituted at runtime.
**Developer Note**: This tool serves as the general querying foundation that
developers can use to create custom tools with specific query patterns.
## Key Features
- **Parameterizable Queries**: Use Go template syntax to create dynamic queries
- **Dynamic Collection Paths**: The collection path can be parameterized for
flexibility
- **Native JSON Value Types**: Support for Firestore's typed values
(stringValue, integerValue, doubleValue, etc.)
- **Complex Filter Logic**: Support for AND/OR logical operators in filters
- **Template Substitution**: Dynamic collection paths, filters, and ordering
- **Query Analysis**: Optional query performance analysis with explain metrics
(non-parameterizable)
## Configuration
@@ -115,22 +125,23 @@ tools:
### Configuration Parameters
| Parameter | Type | Required | Description |
|------------------|---------|----------|-------------------------------------------------------------------------------------------------------------|
| `kind` | string | Yes | Must be `firestore-query` |
| `source` | string | Yes | Name of the Firestore source to use |
| `description` | string | Yes | Description of what this tool does |
| `collectionPath` | string | Yes | Path to the collection to query (supports templates) |
| `filters` | string | No | JSON string defining query filters (supports templates) |
| `select`         | array   | No       | Fields to select from documents (supports templates - string or array)                                       |
| `orderBy`        | object  | No       | Ordering configuration with `field` and `direction` (supports templates for the value of field or direction) |
| `limit` | integer | No | Maximum number of documents to return (default: 100) (supports templates) |
| `analyzeQuery` | boolean | No | Whether to analyze query performance (default: false) |
| `parameters` | array | Yes | Parameter definitions for template substitution |
### Runtime Parameters
Runtime parameters are defined in the `parameters` array and can be used in
templates throughout the configuration.
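
As an illustrative sketch, a templated configuration might look like the
following; the collection path, field names, and the `DESCENDING` direction
value are assumptions for this example, not prescribed values:

```yaml
tools:
  list_user_orders:
    kind: firestore-query
    source: my-firestore-source  # placeholder source name
    description: List a user's orders, newest first.
    collectionPath: "users/{{.userId}}/orders"  # templated collection path
    orderBy:
      field: createdAt       # hypothetical field name
      direction: DESCENDING
    limit: 20
    parameters:
      - name: userId
        type: string
        description: The user document ID.
```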
## Filter Format
@@ -182,17 +193,17 @@ Runtime parameters are defined in the `parameters` array and can be used in temp
The tool supports all Firestore native JSON value types:
| Type | Format | Example |
|-----------|------------------------------------------------------|----------------------------------------------------------------|
| String | `{"stringValue": "text"}` | `{"stringValue": "{{.name}}"}` |
| Integer | `{"integerValue": "123"}` or `{"integerValue": 123}` | `{"integerValue": "{{.age}}"}` or `{"integerValue": {{.age}}}` |
| Double | `{"doubleValue": 45.67}` | `{"doubleValue": {{.price}}}` |
| Boolean | `{"booleanValue": true}` | `{"booleanValue": {{.active}}}` |
| Null | `{"nullValue": null}` | `{"nullValue": null}` |
| Timestamp | `{"timestampValue": "RFC3339"}` | `{"timestampValue": "{{.date}}"}` |
| GeoPoint | `{"geoPointValue": {"latitude": 0, "longitude": 0}}` | See below |
| Array | `{"arrayValue": {"values": [...]}}` | See below |
| Map | `{"mapValue": {"fields": {...}}}` | See below |
### Complex Type Examples
@@ -391,11 +402,15 @@ curl -X POST http://localhost:5000/api/tool/your-tool-name/invoke \
## Best Practices
1. **Use Typed Values**: Always use Firestore's native JSON value types for
proper type handling
2. **String Numbers for Large Integers**: Use string representation for large
integers to avoid precision loss
3. **Template Security**: Validate all template parameters to prevent injection
attacks
4. **Index Optimization**: Use `analyzeQuery` to identify missing indexes
5. **Limit Results**: Always set a reasonable `limit` to prevent excessive data
retrieval
6. **Field Selection**: Use `select` to retrieve only necessary fields
## Technical Notes
@@ -407,6 +422,8 @@ curl -X POST http://localhost:5000/api/tool/your-tool-name/invoke \
## See Also
- [firestore-query-collection](firestore-query-collection.md) -
Non-parameterizable query tool
- [Firestore Source Configuration](../../sources/firestore.md)
- [Firestore Query
Documentation](https://firebase.google.com/docs/firestore/query-data/queries)

View File

@@ -9,30 +9,36 @@ aliases:
---
## Description
The `firestore-update-document` tool allows you to update existing documents in
Firestore. It supports all Firestore data types using Firestore's native JSON
format. The tool can perform both full document updates (replacing all fields)
or selective field updates using an update mask. When using an update mask,
fields referenced in the mask but not present in the document data will be
deleted from the document, following Firestore's native behavior.
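
A minimal configuration sketch (the source name is a placeholder) might look
like:

```yaml
tools:
  update_document:
    kind: firestore-update-document
    source: my-firestore-source  # placeholder source name
    description: Update an existing document in Firestore.
```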
## Parameters
| Parameter | Type | Required | Description |
|----------------|---------|----------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `documentPath` | string | Yes | The path of the document which needs to be updated |
| `documentData` | map | Yes | The data to update in the document. Must use [Firestore's native JSON format](https://cloud.google.com/firestore/docs/reference/rest/Shared.Types/ArrayValue#Value) with typed values |
| `updateMask` | array | No | The selective fields to update. If not provided, all fields in documentData will be updated. When provided, only the specified fields will be updated. Fields referenced in the mask but not present in documentData will be deleted from the document |
| `returnData` | boolean | No | If set to true, the output will include the data of the updated document. Defaults to false to help avoid overloading the context |
## Output
The tool returns a map containing:
| Field | Type | Description |
|----------------|--------|---------------------------------------------------------------------------------------------|
| `documentPath` | string | The full path of the updated document |
| `updateTime` | string | The timestamp when the document was updated |
| `documentData` | map | The current data of the document after the update (only included when `returnData` is true) |
## Data Type Format
The tool requires Firestore's native JSON format for document data. Each field
must be wrapped with its type indicator:
### Basic Types
- **String**: `{"stringValue": "your string"}`
@@ -53,11 +59,16 @@ The tool requires Firestore's native JSON format for document data. Each field m
### Full Document Update (Merge All)
When `updateMask` is not provided, the tool performs a merge operation that
updates all fields specified in `documentData` while preserving other existing
fields in the document.
### Selective Field Update
When `updateMask` is provided, only the fields listed in the mask are updated.
This allows for precise control over which fields are modified, added, or
deleted. To delete a field, include it in the `updateMask` but omit it from
`documentData`.
## Reference

View File

@@ -47,8 +47,8 @@ tools:
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|----------------------------------------------------|
| kind | string | true | Must be "looker-get-explores". |
| source | string | true | Name of the source the SQL should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |

View File

@@ -55,8 +55,8 @@ The response is a json array with the following elements:
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|----------------------------------------------------|
| kind | string | true | Must be "looker-get-filters". |
| source | string | true | Name of the source the SQL should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |

View File

@@ -61,8 +61,8 @@ The response is a json array with the following elements:
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|----------------------------------------------------|
| kind | string | true | Must be "looker-get-measures". |
| source | string | true | Name of the source the SQL should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |

View File

@@ -55,8 +55,8 @@ The response is a json array with the following elements:
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|----------------------------------------------------|
| kind | string | true | Must be "looker-get-parameters". |
| source | string | true | Name of the source the SQL should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |

View File

@@ -10,17 +10,24 @@ aliases:
## About
The `mssql-list-tables` tool retrieves schema information for all or specified
tables in a SQL Server database. It is compatible with any of the following
sources:
- [cloud-sql-mssql](../../sources/cloud-sql-mssql.md)
- [mssql](../../sources/mssql.md)
`mssql-list-tables` lists detailed schema information (object type, columns,
constraints, indexes, triggers, owner, comment) as JSON for user-created tables
(ordinary or partitioned).
The tool takes the following input parameters:
- **`table_names`** (string, optional): Filters by a comma-separated list of
names. By default, it lists all tables in user schemas. Default: `""`.
- **`output_format`** (string, optional): Indicate the output format of table
schema. `simple` will return only the table names, `detailed` will return the
full table information. Default: `detailed`.
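
As a sketch, a minimal configuration (placeholder source name) could be:

```yaml
tools:
  list_tables:
    kind: mssql-list-tables
    source: my-mssql-source  # placeholder source name
    description: Lists schema information for tables in the database.
```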
## Example
@@ -34,8 +41,8 @@ tools:
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|------------------------------------------------------|
| kind | string | true | Must be "mssql-list-tables". |
| source | string | true | Name of the source the SQL should execute on. |
| description | string | true | Description of the tool that is passed to the agent. |

View File

@@ -10,15 +10,18 @@ aliases:
## About
A `mysql-list-active-queries` tool retrieves information about active queries in
a MySQL database. It's compatible with:
- [cloud-sql-mysql](../../sources/cloud-sql-mysql.md)
- [mysql](../../sources/mysql.md)
`mysql-list-active-queries` outputs detailed information as JSON for current
active queries, ordered by execution time in descending order.
This tool takes 2 optional input parameters:
- `min_duration_secs` (optional): Only show queries running for at least this
long in seconds, default `0`.
- `limit` (optional): max number of queries to return, default `10`.
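
A minimal configuration sketch, assuming a source named `my-mysql-source`:

```yaml
tools:
  list_active_queries:
    kind: mysql-list-active-queries
    source: my-mysql-source  # placeholder source name
    description: Lists currently active queries, longest-running first.
```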
## Example
@@ -52,8 +55,8 @@ The response is a json array with the following fields:
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|----------------------------------------------------|
| kind | string | true | Must be "mysql-list-active-queries". |
| source | string | true | Name of the source the SQL should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |

View File

@@ -10,17 +10,25 @@ aliases:
## About
A `mysql-list-table-fragmentation` tool checks table fragmentation of MySQL
tables by calculating the size of the data and index files in bytes and
comparing with free space allocated to each table. This tool calculates
`fragmentation_percentage` which represents the proportion of free space
relative to the total data and index size. It's compatible with
- [cloud-sql-mysql](../../sources/cloud-sql-mysql.md)
- [mysql](../../sources/mysql.md)
`mysql-list-table-fragmentation` outputs detailed information as JSON, ordered
by the fragmentation percentage in descending order.
This tool takes 4 optional input parameters:
- `table_schema` (optional): The database where fragmentation check is to be
executed. Check all tables visible to the current user if not specified.
- `table_name` (optional): Name of the table to be checked. Check all tables
visible to the current user if not specified.
- `data_free_threshold_bytes` (optional): Only show tables with at least this
much free space in bytes. Default 1.
- `limit` (optional): Max rows to return, default 10.
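
For instance, a hedged configuration sketch (placeholder source name):

```yaml
tools:
  list_table_fragmentation:
    kind: mysql-list-table-fragmentation
    source: my-mysql-source  # placeholder source name
    description: Reports table fragmentation, most fragmented first.
```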
## Example
@@ -46,8 +54,8 @@ The response is a json array with the following fields:
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|----------------------------------------------------|
| kind | string | true | Must be "mysql-list-table-fragmentation". |
| source | string | true | Name of the source the SQL should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |

View File

@@ -10,14 +10,18 @@ aliases:
## About
A `mysql-list-tables-missing-unique-indexes` tool searches tables that do not
have primary or unique indices in a MySQL database. It's compatible with:
- [cloud-sql-mysql](../../sources/cloud-sql-mysql.md)
- [mysql](../../sources/mysql.md)
`mysql-list-tables-missing-unique-indexes` outputs table names, including
`table_schema` and `table_name` in JSON format. It takes 2 optional input
parameters:
- `table_schema` (optional): Only check tables in this specific schema/database.
Search all visible tables in all visible databases if not specified.
- `limit` (optional): max number of queries to return, default `50`.
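
A minimal configuration sketch (placeholder source name):

```yaml
tools:
  find_tables_missing_unique_indexes:
    kind: mysql-list-tables-missing-unique-indexes
    source: my-mysql-source  # placeholder source name
    description: Finds tables without a primary or unique index.
```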
## Example
@@ -39,8 +43,8 @@ The response is a json array with the following fields:
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|----------------------------------------------------|
| kind        | string   | true         | Must be "mysql-list-tables-missing-unique-indexes". |
| source | string | true | Name of the source the SQL should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |

View File

@@ -10,18 +10,24 @@ aliases:
## About
The `mysql-list-tables` tool retrieves schema information for all or specified
tables in a MySQL database. It is compatible with any of the following sources:
- [cloud-sql-mysql](../../sources/cloud-sql-mysql.md)
- [mysql](../../sources/mysql.md)
`mysql-list-tables` lists detailed schema information (object type, columns,
constraints, indexes, triggers, owner, comment) as JSON for user-created tables
(ordinary or partitioned). Filters by a comma-separated list of names. If names
are omitted, it lists all tables in user schemas. The output format can be set
to `simple` which will return only the table names or `detailed` which is the
default.
The tool takes the following input parameters:
| Parameter | Type | Description | Required |
|:----------------|:-------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------|
| `table_names` | string | Filters by a comma-separated list of names. By default, it lists all tables in user schemas. Default: `""` | No |
| `output_format` | string | Indicate the output format of table schema. `simple` will return only the table names, `detailed` will return the full table information. Default: `detailed`. | No |
## Example
@@ -36,8 +42,8 @@ tools:
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|------------------------------------------------------|
| kind | string | true | Must be "mysql-list-tables". |
| source | string | true | Name of the source the SQL should execute on. |
| description | string | true | Description of the tool that is passed to the agent. |

View File

@@ -11,11 +11,19 @@ aliases:
## About
A `neo4j-schema` tool connects to a Neo4j database and extracts its complete
schema information. It runs multiple queries concurrently to efficiently gather
details about node labels, relationships, properties, constraints, and indexes.
The tool automatically detects if the APOC (Awesome Procedures on Cypher)
library is available. If so, it uses APOC procedures like `apoc.meta.schema` for
a highly detailed overview of the database structure; otherwise, it falls back
to using native Cypher queries.
The extracted schema is **cached** to improve performance for subsequent
requests. The output is a structured JSON object containing all the schema
details, which can be invaluable for providing database context to an LLM. This
tool is compatible with a `neo4j` source and takes no parameters.
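
A configuration sketch using the optional cache setting (the source name and
the value of `cacheExpireMinutes` are illustrative):

```yaml
tools:
  get_neo4j_schema:
    kind: neo4j-schema
    source: my-neo4j-source  # placeholder source name
    description: Extracts the full database schema as JSON.
    cacheExpireMinutes: 120  # optional; defaults to 60
```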
## Example
@@ -34,9 +42,9 @@ tools:
```
## Reference
| **field** | **type** | **required** | **description** |
|--------------------|:--------:|:------------:|---------------------------------------------------------|
| kind | string | true | Must be `neo4j-schema`. |
| source | string | true | Name of the source the schema should be extracted from. |
| description | string | true | Description of the tool that is passed to the LLM. |
| cacheExpireMinutes | integer | false | Cache expiration time in minutes. Defaults to 60. |

View File

@@ -10,13 +10,16 @@ aliases:
## About
An `oceanbase-execute-sql` tool executes a SQL statement against an OceanBase
database. It's compatible with the following source:
- [oceanbase](../sources/oceanbase.md)
`oceanbase-execute-sql` takes one input parameter `sql` and runs the SQL
statement against the `source`.
> **Note:** This tool is intended for developer assistant workflows with
> human-in-the-loop and shouldn't be used for production agents.
## Example
@@ -30,8 +33,8 @@ tools:
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|----------------------------------------------------|
| kind | string | true | Must be "oceanbase-execute-sql". |
| source | string | true | Name of the source the SQL should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |

View File

@@ -10,11 +10,14 @@ aliases:
## About
An `oceanbase-sql` tool executes a pre-defined SQL statement against an
OceanBase database. It's compatible with the following source:
- [oceanbase](../sources/oceanbase.md)
The specified SQL statement is executed as a [prepared
statement][mysql-prepare], and expects parameters in the SQL query to be in the
form of placeholders `?`.
[mysql-prepare]: https://dev.mysql.com/doc/refman/8.4/en/sql-prepared-statements.html
@@ -22,7 +25,8 @@ The specified SQL statement is executed as a [prepared statement][mysql-prepare]
> **Note:** This tool uses parameterized queries to prevent SQL injections.
> Query parameters can be used as substitutes for arbitrary expressions.
> Parameters cannot be used as substitutes for identifiers, column names, table
> names, or other parts of the query.
```yaml
tools:
@@ -54,7 +58,10 @@ tools:
### Example with Template Parameters
> **Note:** This tool allows direct modifications to the SQL statement,
> including identifiers, column names, and table names. **This makes it more
> vulnerable to SQL injections**. Using basic parameters only (see above) is
> recommended for performance and safety reasons.
```yaml
tools:
@@ -112,11 +119,11 @@ tools:
## Reference
| **field** | **type** | **required** | **description** |
|--------------------|:------------------------------------------------:|:------------:|--------------------------------------------------------------------------------------------------------------------------------------------|
| kind | string | true | Must be "oceanbase-sql". |
| source | string | true | Name of the source the SQL should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |
| statement | string | true | SQL statement to execute on. |
| parameters | [parameters](_index#specifying-parameters) | false | List of [parameters](_index#specifying-parameters) that will be inserted into the SQL statement. |
| templateParameters | [templateParameters](_index#template-parameters) | false | List of [templateParameters](_index#template-parameters) that will be inserted into the SQL statement before executing prepared statement. |

View File

@@ -10,16 +10,24 @@ aliases:
## About
The `postgres-list-active-queries` tool retrieves information about currently
active queries in a Postgres database. It's compatible with any of the following
sources:
- [alloydb-postgres](../../sources/alloydb-pg.md)
- [cloud-sql-postgres](../../sources/cloud-sql-pg.md)
- [postgres](../../sources/postgres.md)
`postgres-list-active-queries` lists detailed information as JSON for currently
active queries. The tool takes the following input parameters:
- `min_duration` (optional): Only show queries running at least this long (e.g.,
  '1 minute', '1 second', '2 seconds'). Default: '1 minute'.
- `exclude_application_names` (optional): A comma-separated list of application
names to exclude from the query results. This is useful for filtering out
queries from specific applications (e.g., 'psql', 'pgAdmin', 'DBeaver'). The
match is case-sensitive. Whitespace around commas and names is automatically
handled. If this parameter is omitted, no applications are excluded.
- `limit` (optional): The maximum number of rows to return. Default: `50`.
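
A minimal configuration sketch (placeholder source name):

```yaml
tools:
  list_active_queries:
    kind: postgres-list-active-queries
    source: my-postgres-source  # placeholder source name
    description: Lists currently active queries as JSON.
```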
## Example
@@ -54,8 +62,8 @@ The response is a json array with the following elements:
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|----------------------------------------------------|
| kind | string | true | Must be "postgres-list-active-queries". |
| source | string | true | Name of the source the SQL should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |

View File

@@ -10,13 +10,17 @@ aliases:
## About
The `postgres-list-available-extensions` tool retrieves all PostgreSQL
extensions available for installation on a Postgres database. It's compatible
with any of the following sources:
- [alloydb-postgres](../../sources/alloydb-pg.md)
- [cloud-sql-postgres](../../sources/cloud-sql-pg.md)
- [postgres](../../sources/postgres.md)
`postgres-list-available-extensions` lists all PostgreSQL extensions available
for installation (extension name, default version, description) as JSON. The
tool does not support any input parameters.
## Example
@@ -30,9 +34,9 @@ tools:
## Reference
| **name** | **default_version** | **description** |
|----------------------|---------------------|---------------------------------------------------------------------------------------------------------------------|
| address_standardizer | 3.5.2 | Used to parse an address into constituent elements. Generally used to support geocoding address normalization step. |
| amcheck | 1.4 | functions for verifying relation integrity |
| anon | 1.0.0 | Data anonymization tools |
| autoinc | 1.0 | functions for autoincrementing fields |

View File

@@ -10,13 +10,17 @@ aliases:
## About
The `postgres-list-installed-extensions` tool retrieves all PostgreSQL
extensions installed on a Postgres database. It's compatible with any of the
following sources:
- [alloydb-postgres](../../sources/alloydb-pg.md)
- [cloud-sql-postgres](../../sources/cloud-sql-pg.md)
- [postgres](../../sources/postgres.md)
`postgres-list-installed-extensions` lists all installed PostgreSQL extensions
(extension name, version, schema, owner, description) as JSON. The tool does
not support any input parameters.
## Example
@@ -30,8 +34,8 @@ tools:
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|----------------------------------------------------|
| kind        | string   | true         | Must be "postgres-list-installed-extensions".      |
| source | string | true | Name of the source the SQL should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |

View File

@@ -10,15 +10,22 @@ aliases:
## About
The `postgres-list-tables` tool retrieves schema information for all or
specified tables in a Postgres database. It's compatible with any of the
following sources:
- [alloydb-postgres](../../sources/alloydb-pg.md)
- [cloud-sql-postgres](../../sources/cloud-sql-pg.md)
- [postgres](../../sources/postgres.md)
`postgres-list-tables` lists detailed schema information (object type, columns,
constraints, indexes, triggers, owner, comment) as JSON for user-created tables
(ordinary or partitioned). The tool takes the following input parameters:
* `table_names` (optional): Filters by a comma-separated list of names. By
default, it lists all tables in user schemas.
* `output_format` (optional): Indicate the output format of table schema.
`simple` will return only the table names, `detailed` will return the full
table information. Default: `detailed`.
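
As a rough sketch (placeholder source name):

```yaml
tools:
  list_tables:
    kind: postgres-list-tables
    source: my-postgres-source  # placeholder source name
    description: Lists detailed schema information for user tables.
```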
## Example
@@ -32,8 +39,8 @@ tools:
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|------------------------------------------------------|
| kind | string | true | Must be "postgres-list-tables". |
| source | string | true | Name of the source the SQL should execute on. |
| description | string | true | Description of the tool that is passed to the agent. |

View File

@@ -71,10 +71,10 @@ tools:
The tool accepts two optional parameters:
| **parameter** | **type** | **default** | **description** |
|---------------|:--------:|:-----------:|------------------------------------------------------------------------------------------------------|
| table_names | string | "" | Comma-separated list of table names to filter. If empty, lists all tables in user-accessible schemas |
| output_format | string | "detailed" | Output format: "simple" returns only table names, "detailed" returns full schema information |
## Output Format
@@ -99,7 +99,8 @@ When `output_format` is set to "simple", the tool returns a minimal JSON structu
### Detailed Format
When `output_format` is set to "detailed" (default), the tool returns
comprehensive schema information:
```json
[
@@ -194,12 +195,12 @@ tools:
## Reference
| **field** | **type** | **required** | **description** |
|--------------|:--------:|:------------:|----------------------------------------------------|
| kind | string | true | Must be "spanner-list-tables" |
| source | string | true | Name of the Spanner source to query |
| description | string | false | Description of the tool that is passed to the LLM |
| authRequired | string[] | false | List of auth services required to invoke this tool |
## Notes

View File

@@ -157,12 +157,12 @@ tools:
## Reference
| **field** | **type** | **required** | **description** |
|--------------------|:--------------------------------------------:|:------------:|----------------------------------------------------------------------------------------------------------------------------------------|
| kind | string | true | Must be "spanner-sql". |
| source | string | true | Name of the source the SQL should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |
| statement | string | true | SQL statement to execute on. |
| parameters | [parameters](../#specifying-parameters) | false | List of [parameters](../#specifying-parameters) that will be inserted into the SQL statement. |
| readOnly | bool | false | When set to `true`, the `statement` is run as a read-only transaction. Default: `false`. |
| templateParameters | [templateParameters](..#template-parameters) | false | List of [templateParameters](..#template-parameters) that will be inserted into the SQL statement before executing prepared statement. |

View File

@@ -10,12 +10,14 @@ aliases:
## About
A `sqlite-execute-sql` tool executes a single SQL statement against a SQLite
database. It's compatible with any of the following sources:
- [sqlite](../../sources/sqlite.md)
This tool is designed for direct execution of SQL statements. It takes a single
`sql` input parameter and runs the SQL statement against the configured SQLite
`source`.
> **Note:** This tool is intended for developer assistant workflows with
> human-in-the-loop and shouldn't be used for production agents.
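
A minimal configuration sketch (placeholder source name):

```yaml
tools:
  execute_sql:
    kind: sqlite-execute-sql
    source: my-sqlite-source  # placeholder source name
    description: Execute a single SQL statement against the database.
```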
@@ -32,8 +34,8 @@ tools:
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|----------------------------------------------------|
| kind | string | true | Must be "sqlite-execute-sql". |
| source | string | true | Name of the source the SQL should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |
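
Since the SQL itself is supplied as the tool's single input at call time, the
configuration stays minimal; a sketch with placeholder names:

```yaml
tools:
  execute_sql:
    kind: sqlite-execute-sql
    source: my-sqlite-source
    description: Executes an arbitrary SQL statement against the SQLite database.
```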

View File

@@ -65,10 +65,11 @@ tools:
### Example with Template Parameters
> **Note:** This tool allows direct modifications to the SQL statement,
> including identifiers, column names, and table names. **This makes it more
> vulnerable to SQL injections**. Using basic parameters only (see above) is
> recommended for performance and safety reasons. For more details, please check
> [templateParameters](_index#template-parameters).
```yaml
tools:
@@ -91,11 +92,11 @@ tools:
## Reference
| **field** | **type** | **required** | **description** |
|--------------------|:------------------------------------------------:|:------------:|--------------------------------------------------------------------------------------------------------------------------------------------|
| kind | string | true | Must be "yugabytedb-sql". |
| source | string | true | Name of the source the SQL should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |
| statement          | string                                           | true         | SQL statement to execute.                                                                                                                    |
| parameters | [parameters](_index#specifying-parameters) | false | List of [parameters](_index#specifying-parameters) that will be inserted into the SQL statement. |
| templateParameters | [templateParameters](_index#template-parameters) | false        | List of [templateParameters](_index#template-parameters) that will be inserted into the SQL statement before executing the prepared statement. |
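
For example, a basic-parameters-only configuration (recommended above for
safety) might look roughly like the following sketch; the names are
illustrative, and YugabyteDB uses PostgreSQL-style `$1` placeholders:

```yaml
tools:
  search_users:
    kind: yugabytedb-sql
    source: my-yugabytedb-source
    description: Looks up users by country.
    statement: SELECT id, name FROM users WHERE country = $1;
    parameters:
      - name: country
        type: string
        description: Country to filter users by.
```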

View File

@@ -16,10 +16,18 @@ on how to [connect to Toolbox via MCP](../../how-to/connect_via_mcp.md).
This guide assumes you have already done the following:
1. [Create an AlloyDB cluster and
instance](https://cloud.google.com/alloydb/docs/cluster-create) with a
database and user.
1. Connect to the instance using [AlloyDB
Studio](https://cloud.google.com/alloydb/docs/manage-data-using-studio),
[`psql` command-line tool](https://www.postgresql.org/download/), or any
other PostgreSQL client.
1. Enable the `pgvector` and `google_ml_integration`
[extensions](https://cloud.google.com/alloydb/docs/ai). These are required
for Semantic Search and Natural Language to SQL tools. Run the following SQL
commands:
```sql
CREATE EXTENSION IF NOT EXISTS "vector";
@@ -30,7 +38,8 @@ This guide assumes you have already done the following:
## Step 1: Set up your AlloyDB database
In this section, we will create the necessary tables and functions in your
AlloyDB instance.
1. Create tables using the following commands:
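
   (The DDL itself is elided at this hunk boundary. As a rough, hypothetical
   sketch of the shape such tables take in this guide, note the `vector`
   column that the semantic search tools rely on:)

```sql
-- Hypothetical sketch; the guide's actual schema is elided by this hunk.
CREATE TABLE products (
  id SERIAL PRIMARY KEY,
  name TEXT NOT NULL,
  description TEXT,
  price NUMERIC(10, 2),
  embedding vector(768)  -- pgvector column used for semantic search
);

CREATE TABLE cart_items (
  id SERIAL PRIMARY KEY,
  cart_id TEXT NOT NULL,
  product_id INTEGER REFERENCES products(id)
);
```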
@@ -127,9 +136,11 @@ In this section, we will download and install the Toolbox binary.
## Step 3: Configure the tools
Create a `tools.yaml` file and add the following content. You must replace the
placeholders with your actual AlloyDB configuration.
First, define the data source for your tools. This tells Toolbox how to connect
to your AlloyDB instance.
```yaml
sources:
@@ -144,11 +155,14 @@ sources:
password: YOUR_PASSWORD
```
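
(The hunk above elides most of the source definition. For orientation, an
`alloydb-postgres` source typically looks roughly like the following sketch;
every value is a placeholder:)

```yaml
sources:
  my-alloydb-source:
    kind: alloydb-postgres
    project: YOUR_PROJECT_ID
    region: YOUR_REGION
    cluster: YOUR_CLUSTER
    instance: YOUR_INSTANCE
    database: YOUR_DATABASE
    user: YOUR_USER
    password: YOUR_PASSWORD
```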
Next, define the tools the agent can use. We will categorize them into three
types:
### 1. Structured Queries Tools
These tools execute predefined SQL statements. They are ideal for common,
structured queries like managing a shopping cart. Add the following to your
`tools.yaml` file:
```yaml
tools:
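  # (Sketch only; the actual definitions are elided at this hunk. A
  # structured-query tool for the cart might look roughly like this,
  # with hypothetical names, SQL, and parameters.)
  add-to-cart:
    kind: postgres-sql
    source: my-alloydb-source
    description: Add a product to the shopping cart.
    statement: INSERT INTO cart_items (cart_id, product_id) VALUES ($1, $2);
    parameters:
      - name: cart_id
        type: string
        description: ID of the shopping cart.
      - name: product_id
        type: integer
        description: ID of the product to add.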
@@ -225,7 +239,9 @@ tools:
### 2. Semantic Search Tools
These tools use vector embeddings to find the most relevant results based on the
meaning of a user's query, rather than just keywords. Append the following tools
to the `tools` section in your `tools.yaml`:
```yaml
search-product-recommendations:
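    # (Sketch only; the real definition is elided by this hunk. A semantic
    # search tool presumably pairs postgres-sql with a pgvector similarity
    # query along these lines; the embedding model name is an assumption.)
    kind: postgres-sql
    source: my-alloydb-source
    description: Recommend products matching a natural-language request.
    statement: |
      SELECT id, name, description
      FROM products
      ORDER BY embedding <=> embedding('text-embedding-005', $1)::vector
      LIMIT 5;
    parameters:
      - name: query
        type: string
        description: Natural-language description of what the user wants.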
@@ -253,14 +269,21 @@ These tools use vector embeddings to find the most relevant results based on the
### 3. Natural Language to SQL (NL2SQL) Tools
1. Create a [natural language
configuration](https://cloud.google.com/alloydb/docs/ai/use-natural-language-generate-sql-queries#create-config)
for your AlloyDB cluster.
{{< notice tip >}}Before using NL2SQL tools,
you must first install the `alloydb_ai_nl` extension and
create the [semantic
layer](https://cloud.google.com/alloydb/docs/ai/natural-language-overview)
under a configuration named `flower_shop`.
{{< /notice >}}
2. Configure your NL2SQL tool to use your configuration. These tools translate
natural language questions into SQL queries, allowing users to interact with
the database conversationally. Append the following tool to the `tools`
section:
```yaml
ask-questions-about-products:
@@ -273,7 +296,8 @@ These tools use vector embeddings to find the most relevant results based on the
Always SELECT the IDs of objects when generating queries.
```
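
(Most of that tool definition is elided by the hunk above. A sketch of its
likely shape, assuming the `alloydb-ai-nl` tool kind and the `flower_shop`
configuration created earlier; the names are placeholders:)

```yaml
ask-questions-about-products:
  kind: alloydb-ai-nl
  source: my-alloydb-source
  nlConfig: flower_shop
  description: |
    Answer natural-language questions about products.
    Always SELECT the IDs of objects when generating queries.
```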
Finally, group the tools into a `toolset` to make them available to the model.
Add the following to the end of your `tools.yaml` file:
```yaml
toolsets:
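  # (Sketch only; the actual toolset is elided at this hunk. A toolset is a
  # named list of the tools defined above, for example:)
  flower-shop-tools:
    - add-to-cart
    - search-product-recommendations
    - ask-questions-about-products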
@@ -306,7 +330,8 @@ Run the Toolbox server, pointing to the `tools.yaml` file created earlier:
1. Type `y` when it asks to install the inspector package.
1. It should show the following when the MCP Inspector is up and running (please
take note of `<YOUR_SESSION_TOKEN>`):
```bash
Starting MCP inspector...