Compare commits

...

32 Commits
sample ... sig

Author SHA1 Message Date
duwenxin99
da20532fbe typo 2025-10-02 14:08:38 -04:00
duwenxin99
c3b080641a test trigger 2025-10-02 14:06:38 -04:00
duwenxin99
e17fc8a882 test 2025-10-02 13:44:08 -04:00
duwenxin
afe5b785e5 ci: add code signing signatures to binary releases 2025-09-15 22:53:59 -04:00
Mend Renovate
fd00fef5b7 chore(deps): update module cloud.google.com/go/spanner to v1.85.1 (#1451)
This PR contains the following updates:

| Package | Change | Age | Confidence |
|---|---|---|---|
|
[cloud.google.com/go/spanner](https://redirect.github.com/googleapis/google-cloud-go)
| `v1.84.1` -> `v1.85.1` |
[![age](https://developer.mend.io/api/mc/badges/age/go/cloud.google.com%2fgo%2fspanner/v1.85.1?slim=true)](https://docs.renovatebot.com/merge-confidence/)
|
[![confidence](https://developer.mend.io/api/mc/badges/confidence/go/cloud.google.com%2fgo%2fspanner/v1.84.1/v1.85.1?slim=true)](https://docs.renovatebot.com/merge-confidence/)
|

---

### Configuration

📅 **Schedule**: Branch creation - At any time (no schedule defined),
Automerge - At any time (no schedule defined).

🚦 **Automerge**: Disabled by config. Please merge this manually once you
are satisfied.

♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the
rebase/retry checkbox.

🔕 **Ignore**: Close this PR and you won't be reminded about this update
again.

---

- [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check
this box

---

This PR was generated by [Mend Renovate](https://mend.io/renovate/).
View the [repository job
log](https://developer.mend.io/github/googleapis/genai-toolbox).


Co-authored-by: Yuan Teoh <45984206+Yuan325@users.noreply.github.com>
2025-09-15 17:15:55 -07:00
Mend Renovate
e4957af011 chore(deps): update module modernc.org/sqlite to v1.39.0 (#1450)
This PR contains the following updates:

| Package | Change | Age | Confidence |
|---|---|---|---|
| [modernc.org/sqlite](https://gitlab.com/cznic/sqlite) | `v1.38.2` ->
`v1.39.0` |
[![age](https://developer.mend.io/api/mc/badges/age/go/modernc.org%2fsqlite/v1.39.0?slim=true)](https://docs.renovatebot.com/merge-confidence/)
|
[![confidence](https://developer.mend.io/api/mc/badges/confidence/go/modernc.org%2fsqlite/v1.38.2/v1.39.0?slim=true)](https://docs.renovatebot.com/merge-confidence/)
|

---

### Release Notes

<details>
<summary>cznic/sqlite (modernc.org/sqlite)</summary>

###
[`v1.39.0`](https://gitlab.com/cznic/sqlite/compare/v1.38.2...v1.39.0)

[Compare
Source](https://gitlab.com/cznic/sqlite/compare/v1.38.2...v1.39.0)

</details>

---

### Configuration

📅 **Schedule**: Branch creation - At any time (no schedule defined),
Automerge - At any time (no schedule defined).

🚦 **Automerge**: Disabled by config. Please merge this manually once you
are satisfied.

♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the
rebase/retry checkbox.

🔕 **Ignore**: Close this PR and you won't be reminded about this update
again.

---

- [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check
this box

---

This PR was generated by [Mend Renovate](https://mend.io/renovate/).
View the [repository job
log](https://developer.mend.io/github/googleapis/genai-toolbox).


Co-authored-by: Yuan Teoh <45984206+Yuan325@users.noreply.github.com>
2025-09-15 16:56:59 -07:00
Haoming Chen
aa5486b8b2 docs: add missing doc for bigquery/analyze-contribution (#1447)
The feature was merged via
https://github.com/googleapis/genai-toolbox/pull/1223 but some docs were
not updated.

Co-authored-by: Huan Chen <142538604+Genesis929@users.noreply.github.com>
2025-09-15 23:45:44 +00:00
Mend Renovate
61b9041f1a chore(deps): update module github.com/clickhouse/clickhouse-go/v2 to v2.40.3 (#1444)
This PR contains the following updates:

| Package | Change | Age | Confidence |
|---|---|---|---|
|
[github.com/ClickHouse/clickhouse-go/v2](https://redirect.github.com/ClickHouse/clickhouse-go)
| `v2.40.1` -> `v2.40.3` |
[![age](https://developer.mend.io/api/mc/badges/age/go/github.com%2fClickHouse%2fclickhouse-go%2fv2/v2.40.3?slim=true)](https://docs.renovatebot.com/merge-confidence/)
|
[![confidence](https://developer.mend.io/api/mc/badges/confidence/go/github.com%2fClickHouse%2fclickhouse-go%2fv2/v2.40.1/v2.40.3?slim=true)](https://docs.renovatebot.com/merge-confidence/)
|

---

### Release Notes

<details>
<summary>ClickHouse/clickhouse-go
(github.com/ClickHouse/clickhouse-go/v2)</summary>

###
[`v2.40.3`](https://redirect.github.com/ClickHouse/clickhouse-go/blob/HEAD/CHANGELOG.md#v2403-2025-09-13----Release-notes-generated-using-configuration-in-githubreleaseyml-at-main---)

[Compare
Source](https://redirect.github.com/ClickHouse/clickhouse-go/compare/v2.40.2...v2.40.3)

#### What's Changed

##### Other Changes 🛠

- bug: deserializing into nullable field by
[@&#8203;rbroggi](https://redirect.github.com/rbroggi) in
[#&#8203;1649](https://redirect.github.com/ClickHouse/clickhouse-go/pull/1649)
- Fixes for
[#&#8203;1649](https://redirect.github.com/ClickHouse/clickhouse-go/issues/1649)
by [@&#8203;SpencerTorres](https://redirect.github.com/SpencerTorres) in
[#&#8203;1654](https://redirect.github.com/ClickHouse/clickhouse-go/pull/1654)

#### New Contributors

- [@&#8203;rbroggi](https://redirect.github.com/rbroggi) made their
first contribution in
[#&#8203;1649](https://redirect.github.com/ClickHouse/clickhouse-go/pull/1649)

**Full Changelog**:
<https://github.com/ClickHouse/clickhouse-go/compare/v2.40.2...v2.40.3>

###
[`v2.40.2`](https://redirect.github.com/ClickHouse/clickhouse-go/blob/HEAD/CHANGELOG.md#v2402-2025-09-13----Release-notes-generated-using-configuration-in-githubreleaseyml-at-main---)

[Compare
Source](https://redirect.github.com/ClickHouse/clickhouse-go/compare/v2.40.1...v2.40.2)

#### What's Changed

##### Other Changes 🛠

- Bump golang.org/x/net from 0.42.0 to 0.43.0 by
[@&#8203;dependabot](https://redirect.github.com/dependabot)\[bot] in
[#&#8203;1634](https://redirect.github.com/ClickHouse/clickhouse-go/pull/1634)
- Bump github.com/ClickHouse/ch-go from 0.67.0 to 0.68.0 by
[@&#8203;dependabot](https://redirect.github.com/dependabot)\[bot] in
[#&#8203;1639](https://redirect.github.com/ClickHouse/clickhouse-go/pull/1639)
- Bump github.com/stretchr/testify from 1.10.0 to 1.11.1 by
[@&#8203;dependabot](https://redirect.github.com/dependabot)\[bot] in
[#&#8203;1641](https://redirect.github.com/ClickHouse/clickhouse-go/pull/1641)
- Bump go.opentelemetry.io/otel/trace from 1.37.0 to 1.38.0 by
[@&#8203;dependabot](https://redirect.github.com/dependabot)\[bot] in
[#&#8203;1642](https://redirect.github.com/ClickHouse/clickhouse-go/pull/1642)
- Bump github.com/docker/docker from 28.3.3+incompatible to
28.4.0+incompatible by
[@&#8203;dependabot](https://redirect.github.com/dependabot)\[bot] in
[#&#8203;1646](https://redirect.github.com/ClickHouse/clickhouse-go/pull/1646)
- Bump golang.org/x/net from 0.43.0 to 0.44.0 by
[@&#8203;dependabot](https://redirect.github.com/dependabot)\[bot] in
[#&#8203;1647](https://redirect.github.com/ClickHouse/clickhouse-go/pull/1647)
- chore: migrate to maintained YAML library by
[@&#8203;joschi](https://redirect.github.com/joschi) in
[#&#8203;1651](https://redirect.github.com/ClickHouse/clickhouse-go/pull/1651)
- skip random tests on Go 1.25 by
[@&#8203;SpencerTorres](https://redirect.github.com/SpencerTorres) in
[#&#8203;1652](https://redirect.github.com/ClickHouse/clickhouse-go/pull/1652)
- bug: headers map can be nil by
[@&#8203;r0bobo](https://redirect.github.com/r0bobo) in
[#&#8203;1650](https://redirect.github.com/ClickHouse/clickhouse-go/pull/1650)

#### New Contributors

- [@&#8203;joschi](https://redirect.github.com/joschi) made their first
contribution in
[#&#8203;1651](https://redirect.github.com/ClickHouse/clickhouse-go/pull/1651)
- [@&#8203;r0bobo](https://redirect.github.com/r0bobo) made their first
contribution in
[#&#8203;1650](https://redirect.github.com/ClickHouse/clickhouse-go/pull/1650)

**Full Changelog**:
<https://github.com/ClickHouse/clickhouse-go/compare/v2.40.1...v2.40.2>

</details>

---

### Configuration

📅 **Schedule**: Branch creation - At any time (no schedule defined),
Automerge - At any time (no schedule defined).

🚦 **Automerge**: Disabled by config. Please merge this manually once you
are satisfied.

♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the
rebase/retry checkbox.

🔕 **Ignore**: Close this PR and you won't be reminded about this update
again.

---

- [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check
this box

---

This PR was generated by [Mend Renovate](https://mend.io/renovate/).
View the [repository job
log](https://developer.mend.io/github/googleapis/genai-toolbox).


Co-authored-by: Yuan Teoh <45984206+Yuan325@users.noreply.github.com>
2025-09-15 16:31:45 -07:00
nester-neo4j
4babc4e11b fix(tools/neo4j): Implement value conversion from Neo4j types to JSON-compatible values (#1428)
This pull request introduces a utility function to standardize the
conversion of Neo4j driver values into JSON-compatible types. The new
`ConvertValue` function is added to the `helpers` package, and is now
used in both the `neo4jcypher` and `neo4jexecutecypher` tools to ensure
consistent output formatting. Comprehensive unit tests for this function
are also included. Additionally, a new `ValueType` interface is defined
to generalize Neo4j value stringification.

**Helpers and Value Conversion:**

* Added the `ConvertValue` function to
`internal/tools/neo4j/neo4jschema/helpers/helpers.go` to recursively
convert Neo4j driver types (including nodes, relationships, paths,
points, and temporal types) into JSON-compatible Go values. This ensures
proper serialization of complex Neo4j types.
* Defined a `ValueType` interface in
`internal/tools/neo4j/neo4jschema/types/types.go` to generalize the
stringification of Neo4j value types.

**Integration with Tools:**

* Updated the `Invoke` methods in both `neo4jcypher` and
`neo4jexecutecypher` tools to use `helpers.ConvertValue` when processing
Neo4j query results, ensuring consistent and correct output formatting.
[[1]](diffhunk://#diff-b3e792b742cb92c92d1f5136b444c8fe0a7ec0376920868182dc88f13002e8eeL138-R139)
[[2]](diffhunk://#diff-de7fdd7e68c95ea9813c704a89fffb8fd6de34e81b43a484623fdff7683e18f3L160-R161)
* Added the necessary imports for the helpers package in the affected
files.
[[1]](diffhunk://#diff-b3e792b742cb92c92d1f5136b444c8fe0a7ec0376920868182dc88f13002e8eeR23)
[[2]](diffhunk://#diff-de7fdd7e68c95ea9813c704a89fffb8fd6de34e81b43a484623fdff7683e18f3R26)
[[3]](diffhunk://#diff-97a47eb63017102fc7084aa79689e22ef8cb6d1952945177058f118d57be306fR26)

**Testing:**

* Added extensive tests for `ConvertValue` in `helpers_test.go`,
covering all supported Neo4j types, primitives, slices, maps, and
unhandled types.
* Included required imports in the test file for Neo4j driver and time
handling.

**Relates to:** https://github.com/googleapis/genai-toolbox/issues/1344
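
A minimal sketch of the conversion pattern described above, assuming the
neo4j-go-driver v5 `dbtype` types; the actual `helpers.ConvertValue`
signature and its type coverage may differ:

```go
package helpers

import (
	"fmt"
	"time"

	"github.com/neo4j/neo4j-go-driver/v5/neo4j/dbtype"
)

// convertValue recursively maps Neo4j driver values onto JSON-friendly
// Go values (a simplified stand-in for the PR's ConvertValue).
func convertValue(v any) any {
	switch val := v.(type) {
	case nil, bool, int64, float64, string:
		return val // already JSON-compatible
	case dbtype.Node:
		return map[string]any{
			"elementId":  val.ElementId,
			"labels":     val.Labels,
			"properties": convertValue(val.Props),
		}
	case dbtype.Relationship:
		return map[string]any{
			"elementId":  val.ElementId,
			"type":       val.Type,
			"properties": convertValue(val.Props),
		}
	case time.Time:
		return val.Format(time.RFC3339Nano)
	case []any:
		out := make([]any, len(val))
		for i, item := range val {
			out[i] = convertValue(item)
		}
		return out
	case map[string]any:
		out := make(map[string]any, len(val))
		for k, item := range val {
			out[k] = convertValue(item)
		}
		return out
	default:
		// Unhandled driver types (points, durations, paths, ...) fall
		// back to stringification.
		return fmt.Sprintf("%v", val)
	}
}
```

The recursive calls on `Props`, slices, and maps are what keep nested
query results serializable end to end.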

Co-authored-by: Yuan Teoh <45984206+Yuan325@users.noreply.github.com>
2025-09-15 14:30:44 -07:00
Dr. Strangelove
2036c8efd2 feat(tools/looker): Query tracking for MCP Toolbox in Looker System Activity views (#1410)
## Description

---

Customers have requested that queries to Looker from MCP Toolbox show up
in the Looker System Activity under a separate category. To do this we
need to create the category `MCP Toolbox` on the Looker side; that
should happen by the Looker 25.18 release. With 25.18 the request goes
to a new undocumented API endpoint. If that endpoint results in an
error, the Toolbox falls back to the documented API endpoint.
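
A minimal sketch of that fallback, assuming placeholder endpoint paths
and a `runQuery` helper; the real Looker SDK calls differ:

```go
package looker

import "context"

// Query and Result stand in for the real Looker request/response types.
type Query struct{ SQL string }
type Result struct{ Rows []map[string]any }

// runQuery is a placeholder for the HTTP call into the Looker API.
var runQuery = func(ctx context.Context, endpoint string, q Query) (Result, error) {
	return Result{}, nil
}

// runWithTracking prefers the newer, undocumented endpoint that tags
// queries with the "MCP Toolbox" category, and falls back on any error.
func runWithTracking(ctx context.Context, q Query) (Result, error) {
	// Available from Looker 25.18; hypothetical path.
	res, err := runQuery(ctx, "/api/internal/queries/run_with_source", q)
	if err == nil {
		return res, nil
	}
	// Older Looker versions reject the new endpoint; the documented
	// endpoint still runs the query, just without the category.
	return runQuery(ctx, "/api/4.0/queries/run/json", q)
}
```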

---

- [x] Make sure you reviewed

[CONTRIBUTING.md](https://github.com/googleapis/genai-toolbox/blob/main/CONTRIBUTING.md)
- [x] Make sure to open an issue as a

[bug/issue](https://github.com/googleapis/genai-toolbox/issues/new/choose)
before writing your code! That way we can discuss the change, evaluate
  designs, and agree on the general idea
- [x] Ensure the tests and linter pass
- [x] Code coverage does not decrease (if any source code was changed)
- [x] Appropriate docs were updated (if necessary)
- [x] Make sure to add `!` if this involves a breaking change

🛠️ Fixes #1409
2025-09-15 16:31:41 -04:00
Sri Varshitha
971001400f feat(source/alloydb-admin): Add user agent and attach alloydb api in alloydb-admin source (#1448)
## Description

---

- This PR introduces a userAgentRoundTripper that prepends our custom
user agent to the existing User-Agent header (see the sketch below)
- Moves the AlloyDB API client to the `alloydb-admin` source
- Updates the AlloyDB control plane tools (`alloydb-get-cluster`,
`alloydb-list-clusters`, `alloydb-list-instances`, `alloydb-list-users`)
accordingly.
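
A minimal sketch of such a round tripper; the field names and the exact
User-Agent value are assumptions, not the PR's actual code:

```go
package alloydbadmin

import "net/http"

// userAgentRoundTripper prepends a custom product token to whatever
// User-Agent the underlying client already set.
type userAgentRoundTripper struct {
	next      http.RoundTripper // e.g. http.DefaultTransport
	userAgent string            // e.g. "genai-toolbox/x.y.z" (assumed value)
}

func (t userAgentRoundTripper) RoundTrip(req *http.Request) (*http.Response, error) {
	r := req.Clone(req.Context()) // never mutate the caller's request
	if existing := r.Header.Get("User-Agent"); existing != "" {
		r.Header.Set("User-Agent", t.userAgent+" "+existing)
	} else {
		r.Header.Set("User-Agent", t.userAgent)
	}
	return t.next.RoundTrip(r)
}
```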

## PR Checklist

---
> Thank you for opening a Pull Request! Before submitting your PR, there
are a
> few things you can do to make sure it goes smoothly:

- [x] Make sure you reviewed

[CONTRIBUTING.md](https://github.com/googleapis/genai-toolbox/blob/main/CONTRIBUTING.md)
- [ ] Make sure to open an issue as a

[bug/issue](https://github.com/googleapis/genai-toolbox/issues/new/choose)
before writing your code! That way we can discuss the change, evaluate
  designs, and agree on the general idea
- [x] Ensure the tests and linter pass
- [x] Code coverage does not decrease (if any source code was changed)
- [x] Appropriate docs were updated (if necessary)
- [ ] Make sure to add `!` if this involves a breaking change

🛠️ Fixes #<issue_number_goes_here>

---------

Co-authored-by: Averi Kitsch <akitsch@google.com>
2025-09-15 23:07:25 +05:30
prernakakkar-google
56b6574fc2 feat(source/cloud-sql-admin): Add user agent and attach sqladmin in cloud-sql-admin source. (#1441)
## Description

---
1. This change introduces a userAgentRoundTripper that correctly
prepends our custom user agent to the existing User-Agent header.
2. Moves the sqladmin client to the source.
3. Updates the Cloud SQL tools to support the above.
4. Adds test cases to validate the user agent.

## PR Checklist

---
> Thank you for opening a Pull Request! Before submitting your PR, there
are a
> few things you can do to make sure it goes smoothly:

- [x] Make sure you reviewed

[CONTRIBUTING.md](https://github.com/googleapis/genai-toolbox/blob/main/CONTRIBUTING.md)
- [x] Make sure to open an issue as a

[bug/issue](https://github.com/googleapis/genai-toolbox/issues/new/choose)
before writing your code! That way we can discuss the change, evaluate
  designs, and agree on the general idea
- [ ] Ensure the tests and linter pass
- [ ] Code coverage does not decrease (if any source code was changed)
- [x] Appropriate docs were updated (if necessary)
- [ ] Make sure to add `!` if this involves a breaking change

🛠️ Fixes #<issue_number_goes_here>
2025-09-15 16:31:38 +00:00
trehanshakuntG
7d384dc28f feat(tools/spanner-list-tables): Add new tool spanner-list-tables (#1404)
## Description

This PR introduces a new tool `spanner-list-tables` and includes
comprehensive tests and documentation.

### Features

- __New Tool__: Adds the `spanner-list-tables` tool, which lists tables
in a Spanner database.
- __Dialect-aware__: The tool automatically detects whether the Spanner
database uses the `GoogleSQL` or `PostgreSQL` dialect and executes the
appropriate query (see the sketch after the example below).
- __Refactoring__: The integration tests for the new tool have been
refactored to improve maintainability and reduce code duplication by
using shared helper functions.
- __Simpler prebuilt tool__: The Spanner prebuilt tool now uses this tool.

Example tool usage:
```yaml
list_tables:
    kind: spanner-list-tables
    source: spanner-source
    description: "Lists detailed schema information (object type, columns, constraints, indexes) as JSON for user-created tables (ordinary or partitioned). Filters by a comma-separated list of names. If names are omitted, lists all tables in user schemas."
```
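
A minimal sketch of the dialect dispatch, with simplified
`INFORMATION_SCHEMA` queries; the tool's real queries also return
columns, constraints, and indexes as JSON:

```go
package spannerlisttables

// listTablesQuery picks the table listing for the detected dialect
// (simplified: the real queries gather much more schema detail).
func listTablesQuery(dialect string) string {
	switch dialect {
	case "POSTGRESQL":
		// The PostgreSQL dialect keeps user tables in the public schema.
		return `SELECT table_name FROM information_schema.tables
		        WHERE table_schema = 'public' AND table_type = 'BASE TABLE'`
	default: // GoogleSQL
		// GoogleSQL user tables live in the unnamed default schema.
		return `SELECT table_name FROM information_schema.tables
		        WHERE table_schema = '' AND table_type = 'BASE TABLE'`
	}
}
```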


## PR Checklist

---
> Thank you for opening a Pull Request! Before submitting your PR, there
are a
> few things you can do to make sure it goes smoothly:

- [x] Make sure you reviewed

[CONTRIBUTING.md](https://github.com/googleapis/genai-toolbox/blob/main/CONTRIBUTING.md)
- [x] Make sure to open an issue as a

[bug/issue](https://github.com/googleapis/genai-toolbox/issues/new/choose)
before writing your code! That way we can discuss the change, evaluate
  designs, and agree on the general idea
- [x] Ensure the tests and linter pass
- [x] Code coverage does not decrease (if any source code was changed)
- [x] Appropriate docs were updated (if necessary)
- [x] Make sure to add `!` if this involves a breaking change

---------

Co-authored-by: Averi Kitsch <akitsch@google.com>
2025-09-15 15:18:26 +05:30
trehanshakuntG
60b26608dd feat!: update prebuilt-tool names to use consistent guidance (#1421)
## Description

---
This PR updates the naming conventions across prebuilt tool
configuration files to improve consistency:

__Changes:__

- __Toolset names__: Changed from hyphen-separated to
underscore-separated format (e.g., `firestore-database-tools` →
`firestore_database_tools`)
- __Tool names__: Removed product name prefixes for cleaner naming and
using underscore-separated format (e.g., `firestore-get-documents` →
`get_documents`, `alloydb-create-cluster` → `create_cluster`)
- __Copyright headers__: Added missing copyright headers to
configuration files
- __Test updates__: Updated `cmd/root_test.go` to reflect the new naming
conventions


## PR Checklist

---
> Thank you for opening a Pull Request! Before submitting your PR, there
are a
> few things you can do to make sure it goes smoothly:

- [x] Make sure you reviewed

[CONTRIBUTING.md](https://github.com/googleapis/genai-toolbox/blob/main/CONTRIBUTING.md)
- [x] Make sure to open an issue as a

[bug/issue](https://github.com/googleapis/genai-toolbox/issues/new/choose)
before writing your code! That way we can discuss the change, evaluate
  designs, and agree on the general idea
- [x] Ensure the tests and linter pass
- [x] Code coverage does not decrease (if any source code was changed)
- [x] Appropriate docs were updated (if necessary)
- [x] Make sure to add `!` if this involves a breaking change
2025-09-15 14:38:09 +05:30
Ajaykumar Yadav
681c2b4f3a feat(prebuilt/sqlite): prebuilt tools for SQLite. (#1227)
## Description
Prebuilt tools for SQLite:
- [x] `list_table` with simple and detailed (trigger, index, column)
output for each table
- [x] `execute-sql` for executing any SQL statement against SQLite
- [x] Added tests and made the required config changes
- [x] **Documentation update**: done

## PR Checklist

- [x] Make sure you reviewed

[CONTRIBUTING.md](https://github.com/googleapis/genai-toolbox/blob/main/CONTRIBUTING.md)
- [x] Make sure to open an issue as a

[bug/issue](https://github.com/googleapis/langchain-google-alloydb-pg-python/issues/new/choose)
before writing your code! That way we can discuss the change, evaluate
  designs, and agree on the general idea
- [x] Ensure the tests and linter pass
- [x] Code coverage does not decrease (if any source code was changed)
- [ ] Appropriate docs were updated (if necessary)
- [x] Make sure to add `!` if this involves a breaking change

🛠️ Fixes: https://github.com/googleapis/genai-toolbox/issues/1226

---------

Co-authored-by: Averi Kitsch <akitsch@google.com>
2025-09-15 10:09:34 +05:30
Huan Chen
caba2ef829 chore: add usage tracker for bigquery-conversational-analytics (#1442)
## Description

---
Add client_id_enum for usage tracking.

## PR Checklist

---
> Thank you for opening a Pull Request! Before submitting your PR, there
are a
> few things you can do to make sure it goes smoothly:

- [ ] Make sure you reviewed

[CONTRIBUTING.md](https://github.com/googleapis/genai-toolbox/blob/main/CONTRIBUTING.md)
- [ ] Make sure to open an issue as a

[bug/issue](https://github.com/googleapis/genai-toolbox/issues/new/choose)
before writing your code! That way we can discuss the change, evaluate
  designs, and agree on the general idea
- [ ] Ensure the tests and linter pass
- [ ] Code coverage does not decrease (if any source code was changed)
- [ ] Appropriate docs were updated (if necessary)
- [ ] Make sure to add `!` if this involves a breaking change

🛠️ Fixes #<issue_number_goes_here>

---------

Co-authored-by: Yuan Teoh <45984206+Yuan325@users.noreply.github.com>
2025-09-12 22:33:58 +00:00
Huan Chen
2fda200066 chore: add retry to flaky integration tests in conversational analytics api (#1430)
## Description

Added retries to the bigquery-conversational-analytics API tests for
timeout errors (25s in the test) and service-unavailable errors (code
503). Other errors are not retried.
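
A minimal sketch of that retry policy, assuming errors surface as
`googleapi.Error` for the 503 case; the names and attempt budget are
illustrative:

```go
package catest

import (
	"context"
	"errors"
	"net/http"
	"time"

	"google.golang.org/api/googleapi"
)

// invokeWithRetry retries only on deadline expiry or HTTP 503; any
// other error fails the test immediately.
func invokeWithRetry(ctx context.Context, attempts int, call func(context.Context) error) error {
	var err error
	for i := 0; i < attempts; i++ {
		cctx, cancel := context.WithTimeout(ctx, 25*time.Second)
		err = call(cctx)
		cancel()
		if err == nil {
			return nil
		}
		var gerr *googleapi.Error
		retryable := errors.Is(err, context.DeadlineExceeded) ||
			(errors.As(err, &gerr) && gerr.Code == http.StatusServiceUnavailable)
		if !retryable {
			return err // fail fast on everything else
		}
	}
	return err
}
```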

## PR Checklist

---
> Thank you for opening a Pull Request! Before submitting your PR, there
are a
> few things you can do to make sure it goes smoothly:

- [ ] Make sure you reviewed

[CONTRIBUTING.md](https://github.com/googleapis/genai-toolbox/blob/main/CONTRIBUTING.md)
- [ ] Make sure to open an issue as a

[bug/issue](https://github.com/googleapis/genai-toolbox/issues/new/choose)
before writing your code! That way we can discuss the change, evaluate
  designs, and agree on the general idea
- [ ] Ensure the tests and linter pass
- [ ] Code coverage does not decrease (if any source code was changed)
- [ ] Appropriate docs were updated (if necessary)
- [ ] Make sure to add `!` if this involves a breaking change

🛠️ Fixes #<issue_number_goes_here>
2025-09-12 22:09:44 +00:00
Wenxin Du
fe2999a691 feat(tools/bigquery): Add useClientOAuth to BigQuery prebuilt source config (#1431)
Allow users to set `useClientOAuth` in the BQ prebuilt config
2025-09-12 20:25:04 +00:00
prernakakkar-google
3a6b51752f feat(prebuilt/cloudsql): Add create user tool for cloud sql (#1406)
## Description

---
This pull request introduces a new tool, `cloud-sql-create-users`, which
allows creating both built-in and IAM users in a Cloud SQL
instance.
<img width="518" height="846" alt="image"
src="https://github.com/user-attachments/assets/2f96f0be-658b-46d1-9de6-f47db2804274"
/>
<img width="518" height="956" alt="image"
src="https://github.com/user-attachments/assets/2a7d80d4-eab2-4e91-b08b-5fb78c150319"
/>
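
A hypothetical sketch of the two user shapes using the Go client for
the Cloud SQL Admin API (`google.golang.org/api/sqladmin/v1`); the
tool's actual request construction and the user names here are
assumptions:

```go
package cloudsqlcreateusers

import (
	"context"

	sqladmin "google.golang.org/api/sqladmin/v1"
)

// createUsers creates one built-in and one IAM user on an instance.
func createUsers(ctx context.Context, project, instance string) error {
	svc, err := sqladmin.NewService(ctx) // Application Default Credentials
	if err != nil {
		return err
	}
	// Built-in user: authenticates with a password.
	builtIn := &sqladmin.User{Name: "app_user", Password: "change-me", Type: "BUILT_IN"}
	if _, err := svc.Users.Insert(project, instance, builtIn).Context(ctx).Do(); err != nil {
		return err
	}
	// IAM user: the name is the principal's email; no password.
	iam := &sqladmin.User{Name: "dev@example.com", Type: "CLOUD_IAM_USER"}
	_, err = svc.Users.Insert(project, instance, iam).Context(ctx).Do()
	return err
}
```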


## PR Checklist

---
> Thank you for opening a Pull Request! Before submitting your PR, there
are a
> few things you can do to make sure it goes smoothly:

- [x] Make sure you reviewed

[CONTRIBUTING.md](https://github.com/googleapis/genai-toolbox/blob/main/CONTRIBUTING.md)
- [x] Make sure to open an issue as a

[bug/issue](https://github.com/googleapis/genai-toolbox/issues/new/choose)
before writing your code! That way we can discuss the change, evaluate
  designs, and agree on the general idea
- [ ] Ensure the tests and linter pass
- [ ] Code coverage does not decrease (if any source code was changed)
- [x] Appropriate docs were updated (if necessary)
- [ ] Make sure to add `!` if this involves a breaking change

🛠️ Fixes #<issue_number_goes_here>
2025-09-12 15:49:42 +00:00
prernakakkar-google
77919c7d8e feat(prebuilt/cloudsql): Add cloud-sql-get-instances tool (#1383)
## Description

---
This pull request introduces the `cloud-sql-get-instances` tool, which
enables users to retrieve detailed information about a specified Cloud
SQL instance. This tool enhances the toolbox by providing a direct and
authenticated way to interact with the Google Cloud SQL Admin API.
Authentication is handled automatically by generating a bearer token
from the environment's Application Default Credentials with the
`https://www.googleapis.com/auth/sqlservice.admin` scope.
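
A minimal sketch of that authentication flow against the documented
REST endpoint; the tool's own request construction may differ:

```go
package cloudsqlgetinstances

import (
	"context"
	"fmt"
	"io"
	"net/http"

	"golang.org/x/oauth2/google"
)

// getInstance fetches instance metadata with an ADC-derived bearer token.
func getInstance(ctx context.Context, project, instance string) ([]byte, error) {
	ts, err := google.DefaultTokenSource(ctx,
		"https://www.googleapis.com/auth/sqlservice.admin")
	if err != nil {
		return nil, err
	}
	tok, err := ts.Token()
	if err != nil {
		return nil, err
	}
	url := fmt.Sprintf("https://sqladmin.googleapis.com/v1/projects/%s/instances/%s",
		project, instance)
	req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
	if err != nil {
		return nil, err
	}
	tok.SetAuthHeader(req) // Authorization: Bearer <token>
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	return io.ReadAll(resp.Body)
}
```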


<img width="282" height="1064" alt="image"
src="https://github.com/user-attachments/assets/253d3939-7de2-4324-bc2b-8a2eb20eb133"
/>



## PR Checklist

---
> Thank you for opening a Pull Request! Before submitting your PR, there
are a
> few things you can do to make sure it goes smoothly:

- [x] Make sure you reviewed

[CONTRIBUTING.md](https://github.com/googleapis/genai-toolbox/blob/main/CONTRIBUTING.md)
- [x] Make sure to open an issue as a

[bug/issue](https://github.com/googleapis/genai-toolbox/issues/new/choose)
before writing your code! That way we can discuss the change, evaluate
  designs, and agree on the general idea - Tracked internally
- [x] Ensure the tests and linter pass
- [x] Code coverage does not decrease (if any source code was changed)
- [x] Appropriate docs were updated (if necessary)
- [ ] Make sure to add `!` if this involves a breaking change

🛠️ Fixes #<issue_number_goes_here>
2025-09-12 15:02:57 +00:00
prernakakkar-google
01712284b4 feat(prebuilt/cloudsql): Add list instances tool for cloudsql (#1310)
## Description
---
This change introduces a new tool, `cloudsqllistinstances`, to the
`cloudsql` toolset. This tool allows users to list all Cloud SQL
instances within a specified GCP project.

The implementation includes:

- The `cloudsqllistinstances` tool definition and logic, which makes an
authenticated GET request to the `sqladmin.googleapis.com` API.
- The tool takes a single required parameter: `project`.
<img width="654" height="1406" alt="image"
src="https://github.com/user-attachments/assets/7c129a54-acb7-4695-9a0b-215914a6a273"
/>



## PR Checklist
---
> Thank you for opening a Pull Request! Before submitting your PR, there
are a
> few things you can do to make sure it goes smoothly:
- [x] Make sure you reviewed

[CONTRIBUTING.md](https://github.com/googleapis/genai-toolbox/blob/main/CONTRIBUTING.md)
- [x] Make sure to open an issue as a

[bug/issue](https://github.com/googleapis/langchain-google-alloydb-pg-python/issues/new/choose)
before writing your code! That way we can discuss the change, evaluate
  designs, and agree on the general idea - tracked internally
- [x] Ensure the tests and linter pass
- [x] Code coverage does not decrease (if any source code was changed)
- [x] Appropriate docs were updated (if necessary)
- [ ] Make sure to add `!` if this involves a breaking change

🛠️ Fixes #<issue_number_goes_here>
2025-09-12 10:50:06 +00:00
Sri Varshitha
c181dabc91 feat(tools/alloydb-get-cluster): Add get-cluster tool for alloydb (#1420)
## Description

This pull request introduces a new custom tool kind
`alloydb-get-cluster` that retrieves detailed information for a specific
AlloyDB cluster.

### Example Configuration

```yaml
tools:
  get_cluster:
    kind: alloydb-get-cluster
    source: alloydb-admin-source
    description: Use this tool to retrieve detailed information for a specific AlloyDB cluster.
```

### Example Request
```
curl -X POST http://127.0.0.1:5000/api/tool/get_cluster/invoke \
-H "Content-Type: application/json" \
-d '{
    "projectId": "example-project",
    "locationId": "us-central1",
    "clusterId": "my-alloydb-cluster"
}'
```
## PR Checklist

---
> Thank you for opening a Pull Request! Before submitting your PR, there
are a
> few things you can do to make sure it goes smoothly:

- [x] Make sure you reviewed

[CONTRIBUTING.md](https://github.com/googleapis/genai-toolbox/blob/main/CONTRIBUTING.md)
- [ ] Make sure to open an issue as a

[bug/issue](https://github.com/googleapis/genai-toolbox/issues/new/choose)
before writing your code! That way we can discuss the change, evaluate
  designs, and agree on the general idea
- [x] Ensure the tests and linter pass
- [x] Code coverage does not decrease (if any source code was changed)
- [x] Appropriate docs were updated (if necessary)
- [x] Make sure to add `!` if this involves a breaking change

🛠️ Fixes #<issue_number_goes_here>
2025-09-12 15:39:45 +05:30
Sri Varshitha
93c1b30fce feat(tools/alloydb-list-instances): Add custom tool kind for AlloyDB list_instances (#1357)
## Description

---
This pull request introduces a new custom tool kind
`alloydb-list-instances` that allows users to list the AlloyDB instances
in a given project, cluster and location.

### Example Configuration

```yaml
tools:
  list_instances:
    kind: alloydb-list-instances
    source: alloydb-admin-source
    description: Use this tool to list all AlloyDB instances for a given project, cluster and location.
```

### Example Request
```
curl -X POST http://127.0.0.1:5000/api/tool/list_instances/invoke \
-H "Content-Type: application/json" \
-d '{
    "projectId": "example-project",
    "locationId": "us-central1",
    "clusterId": "example-cluster"
}'
```

## PR Checklist

---
> Thank you for opening a Pull Request! Before submitting your PR, there
are a
> few things you can do to make sure it goes smoothly:

- [x] Make sure you reviewed

[CONTRIBUTING.md](https://github.com/googleapis/genai-toolbox/blob/main/CONTRIBUTING.md)
- [ ] Make sure to open an issue as a

[bug/issue](https://github.com/googleapis/genai-toolbox/issues/new/choose)
before writing your code! That way we can discuss the change, evaluate
  designs, and agree on the general idea
- [x] Ensure the tests and linter pass
- [x] Code coverage does not decrease (if any source code was changed)
- [x] Appropriate docs were updated (if necessary)
- [x] Make sure to add `!` if this involves a breaking change

🛠️ Fixes #<issue_number_goes_here>
2025-09-12 15:08:23 +05:30
Sri Varshitha
3a8a65ceaa feat(tools/alloydb-list-users): Add new custom tool kind for AlloyDB list_users (#1377)
## Description

---
This pull request introduces a new custom tool kind `alloydb-list-users`
that allows users to list all database users within an AlloyDB cluster.

### Example Configuration

```yaml
tools:
  list_users:
    kind: alloydb-list-users
    source: alloydb-admin-source
    description: Use this tool to list all database users within an AlloyDB cluster
```

### Example Request
```
curl -X POST http://127.0.0.1:5000/api/tool/list_users/invoke \
-H "Content-Type: application/json" \
-d '{
    "projectId": "example-project",
    "locationId": "us-central1",
    "clusterId": "example-cluster"
}'
```
## PR Checklist

---
> Thank you for opening a Pull Request! Before submitting your PR, there
are a
> few things you can do to make sure it goes smoothly:

- [x] Make sure you reviewed

[CONTRIBUTING.md](https://github.com/googleapis/genai-toolbox/blob/main/CONTRIBUTING.md)
- [ ] Make sure to open an issue as a

[bug/issue](https://github.com/googleapis/genai-toolbox/issues/new/choose)
before writing your code! That way we can discuss the change, evaluate
  designs, and agree on the general idea
- [x] Ensure the tests and linter pass
- [x] Code coverage does not decrease (if any source code was changed)
- [x] Appropriate docs were updated (if necessary)
- [x] Make sure to add `!` if this involves a breaking change

🛠️ Fixes #<issue_number_goes_here>

---------

Co-authored-by: Averi Kitsch <akitsch@google.com>
2025-09-12 14:44:21 +05:30
Manu Paliwal
236be89961 feat(observability): add cloud sql observability tools (#1425)
## Description

---
This PR introduces new observability prebuilt configs that allow
fetching system- and query-level metrics from the Cloud Monitoring API
for Cloud SQL PostgreSQL, MySQL, and SQL Server.

The key changes include:
- 3 new configs using the cloud-monitoring-query-prometheus tool and
the cloud-monitoring source
- Manual testing was also done by running the server/tools locally and
integrating with Gemini CLI

## Followup Changes

---
- Documentation around the tools.

## PR Checklist

---
> Thank you for opening a Pull Request! Before submitting your PR, there
are a
> few things you can do to make sure it goes smoothly:

- [x] Make sure you reviewed

[CONTRIBUTING.md](https://github.com/googleapis/genai-toolbox/blob/main/CONTRIBUTING.md)
- [x] Make sure to open an issue as a

[bug/issue](https://github.com/googleapis/langchain-google-alloydb-pg-python/issues/new/choose)
before writing your code! That way we can discuss the change, evaluate
  designs, and agree on the general idea
- [x] Ensure the tests and linter pass
- [x] Code coverage does not decrease (if any source code was changed)
- [ ] Appropriate docs were updated (if necessary)
- [ ] Make sure to add `!` if this involves a breaking change

---------

Co-authored-by: prernakakkar-google <158031829+prernakakkar-google@users.noreply.github.com>
2025-09-12 08:48:04 +00:00
prernakakkar-google
3aef2bb7be feat(tool/cloudsql): Add cloud sql wait for operation tool with exponential backoff (#1306)
## Description
---
This pull request introduces a new tool, `cloudsql-wait-for-operation`,
to improve the handling of long-running operations in Google Cloud SQL.

__Key Features:__

- __Asynchronous Operation Polling:__ The `cloudsql-wait-for-operation`
tool polls the Cloud SQL operations API at a specified interval until
the operation completes or fails. This is essential for managing
asynchronous tasks like instance and database creation, which can take
several minutes.
- __Configurable Retries:__ The tool includes configurable retry logic
with exponential backoff (`delay`, `maxDelay`, `multiplier`,
`maxRetries`) to handle transient network issues and make the polling
mechanism more resilient (see the sketch after this list).
- __Improved User Experience:__ By waiting for operations to complete,
this tool provides a more synchronous-like experience for the user, who
can be confident that a resource is ready before the next step in a
workflow is executed.
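
A minimal sketch of the polling loop; the parameter names mirror the
config knobs above, and the completion check is a placeholder:

```go
package cloudsqlwaitforoperation

import (
	"context"
	"fmt"
	"time"
)

// waitForOperation polls isDone with exponential backoff until the
// operation finishes, the context is cancelled, or retries run out.
func waitForOperation(ctx context.Context, isDone func(context.Context) (bool, error),
	delay, maxDelay time.Duration, multiplier float64, maxRetries int) error {
	for attempt := 0; attempt <= maxRetries; attempt++ {
		done, err := isDone(ctx)
		if err == nil && done {
			return nil
		}
		// Transient errors and "still running" both fall through to a retry.
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(delay):
		}
		// Grow the delay, capped at maxDelay.
		if delay = time.Duration(float64(delay) * multiplier); delay > maxDelay {
			delay = maxDelay
		}
	}
	return fmt.Errorf("operation did not complete after %d retries", maxRetries)
}
```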

Tested:
<img width="592" height="1118" alt="image"
src="https://github.com/user-attachments/assets/fd64d367-0fba-4d6a-a6f1-8fc642132208"
/>



## PR Checklist
---
> Thank you for opening a Pull Request! Before submitting your PR, there
are a
> few things you can do to make sure it goes smoothly:
- [X] Make sure you reviewed

[CONTRIBUTING.md](https://github.com/googleapis/genai-toolbox/blob/main/CONTRIBUTING.md)
- [X] Make sure to open an issue as a

[bug/issue](https://github.com/googleapis/langchain-google-alloydb-pg-python/issues/new/choose)
before writing your code! That way we can discuss the change, evaluate
  designs, and agree on the general idea - (Internal bug)
- [x] Ensure the tests and linter pass
- [x] Code coverage does not decrease (if any source code was changed)
- [X] Appropriate docs were updated (if necessary)
- [ ] Make sure to add `!` if this involves a breaking change

🛠️ Fixes #<issue_number_goes_here>
2025-09-12 08:33:48 +00:00
Mend Renovate
ce736defb0 chore(deps): update module google.golang.org/genai to v1.24.0 (#1417)
This PR contains the following updates:

| Package | Change | Age | Confidence |
|---|---|---|---|
|
[google.golang.org/genai](https://redirect.github.com/googleapis/go-genai)
| `v1.23.0` -> `v1.24.0` |
[![age](https://developer.mend.io/api/mc/badges/age/go/google.golang.org%2fgenai/v1.24.0?slim=true)](https://docs.renovatebot.com/merge-confidence/)
|
[![confidence](https://developer.mend.io/api/mc/badges/confidence/go/google.golang.org%2fgenai/v1.23.0/v1.24.0?slim=true)](https://docs.renovatebot.com/merge-confidence/)
|

---

### Release Notes

<details>
<summary>googleapis/go-genai (google.golang.org/genai)</summary>

###
[`v1.24.0`](https://redirect.github.com/googleapis/go-genai/releases/tag/v1.24.0)

[Compare
Source](https://redirect.github.com/googleapis/go-genai/compare/v1.23.0...v1.24.0)

##### Features

- \[Python] Implement async embedding batches for MLDev.
([f32fb26](f32fb26a12))
- Add labels to create tuning job config
([c13a2a5](c13a2a5f68))
- generate the function\_call class's converters
([995a3ac](995a3acc0a))
- Support Veo 2 Editing on Vertex
([7fd6940](7fd694074b))

##### Bug Fixes

- Enable `id` field in `FunctionCall` for Vertex AI.
([a3f3c2b](a3f3c2b37e))

</details>

---

### Configuration

📅 **Schedule**: Branch creation - At any time (no schedule defined),
Automerge - At any time (no schedule defined).

🚦 **Automerge**: Disabled by config. Please merge this manually once you
are satisfied.

♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the
rebase/retry checkbox.

🔕 **Ignore**: Close this PR and you won't be reminded about this update
again.

---

- [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check
this box

---

This PR was generated by [Mend Renovate](https://mend.io/renovate/).
View the [repository job
log](https://developer.mend.io/github/googleapis/genai-toolbox).


Co-authored-by: Twisha Bansal <58483338+twishabansal@users.noreply.github.com>
2025-09-12 05:24:00 +00:00
Mend Renovate
02aa376df8 chore(deps): update module cloud.google.com/go/dataplex to v1.27.0 (#1429)
This PR contains the following updates:

| Package | Change | Age | Confidence |
|---|---|---|---|
|
[cloud.google.com/go/dataplex](https://redirect.github.com/googleapis/google-cloud-go)
| `v1.26.0` -> `v1.27.0` |
[![age](https://developer.mend.io/api/mc/badges/age/go/cloud.google.com%2fgo%2fdataplex/v1.27.0?slim=true)](https://docs.renovatebot.com/merge-confidence/)
|
[![confidence](https://developer.mend.io/api/mc/badges/confidence/go/cloud.google.com%2fgo%2fdataplex/v1.26.0/v1.27.0?slim=true)](https://docs.renovatebot.com/merge-confidence/)
|

---

### Configuration

📅 **Schedule**: Branch creation - At any time (no schedule defined),
Automerge - At any time (no schedule defined).

🚦 **Automerge**: Disabled by config. Please merge this manually once you
are satisfied.

♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the
rebase/retry checkbox.

🔕 **Ignore**: Close this PR and you won't be reminded about this update
again.

---

- [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check
this box

---

This PR was generated by [Mend Renovate](https://mend.io/renovate/).
View the [repository job
log](https://developer.mend.io/github/googleapis/genai-toolbox).

2025-09-11 17:46:15 -07:00
Mend Renovate
7f97442045 chore(deps): update module github.com/redis/go-redis/v9 to v9.14.0 (#1405)
This PR contains the following updates:

| Package | Change | Age | Confidence |
|---|---|---|---|
|
[github.com/redis/go-redis/v9](https://redirect.github.com/redis/go-redis)
| `v9.13.0` -> `v9.14.0` |
[![age](https://developer.mend.io/api/mc/badges/age/go/github.com%2fredis%2fgo-redis%2fv9/v9.14.0?slim=true)](https://docs.renovatebot.com/merge-confidence/)
|
[![confidence](https://developer.mend.io/api/mc/badges/confidence/go/github.com%2fredis%2fgo-redis%2fv9/v9.13.0/v9.14.0?slim=true)](https://docs.renovatebot.com/merge-confidence/)
|

---

### Release Notes

<details>
<summary>redis/go-redis (github.com/redis/go-redis/v9)</summary>

###
[`v9.14.0`](https://redirect.github.com/redis/go-redis/releases/tag/v9.14.0):
9.14.0

[Compare
Source](https://redirect.github.com/redis/go-redis/compare/v9.13.0...v9.14.0)

#### Highlights

- Added batch process method to the pipeline
([#&#8203;3510](https://redirect.github.com/redis/go-redis/pull/3510))

### Changes

#### 🚀 New Features

- Added batch process method to the pipeline
([#&#8203;3510](https://redirect.github.com/redis/go-redis/pull/3510))

#### 🐛 Bug Fixes

- fix: SetErr on Cmd if the command cannot be queued correctly in
multi/exec
([#&#8203;3509](https://redirect.github.com/redis/go-redis/pull/3509))

#### 🧰 Maintenance

- Updates release drafter config to exclude dependabot
([#&#8203;3511](https://redirect.github.com/redis/go-redis/pull/3511))
- chore(deps): bump actions/setup-go from 5 to 6
([#&#8203;3504](https://redirect.github.com/redis/go-redis/pull/3504))

#### Contributors

We'd like to thank all the contributors who worked on this release!

[@&#8203;elena-kolevska](https://redirect.github.com/elena-kolevksa),
[@&#8203;htemelski-redis](https://redirect.github.com/htemelski-redis)
and [@&#8203;ndyakov](https://redirect.github.com/ndyakov)

</details>

---

### Configuration

📅 **Schedule**: Branch creation - At any time (no schedule defined),
Automerge - At any time (no schedule defined).

🚦 **Automerge**: Disabled by config. Please merge this manually once you
are satisfied.

♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the
rebase/retry checkbox.

🔕 **Ignore**: Close this PR and you won't be reminded about this update
again.

---

- [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check
this box

---

This PR was generated by [Mend Renovate](https://mend.io/renovate/).
View the [repository job
log](https://developer.mend.io/github/googleapis/genai-toolbox).


Co-authored-by: Yuan Teoh <45984206+Yuan325@users.noreply.github.com>
2025-09-11 15:44:19 -07:00
Twisha Bansal
c761dfa5aa fix: update mcp implementation comments (#1285)
Co-authored-by: Yuan Teoh <45984206+Yuan325@users.noreply.github.com>
2025-09-11 21:35:32 +00:00
Yuan Teoh
38eae15e16 docs: fix broken links in python quickstart (#1427) 2025-09-11 14:17:38 -07:00
Sri Varshitha
d4a9eb0ce2 feat(tools/alloydb-list-cluster): Add custom tool kind for AlloyDB list_clusters (#1319)
## Description
---
This pull request introduces a new custom tool kind
`alloydb-list-clusters` that lists all AlloyDB clusters in a given
project and location.

### Example Configuration

```yaml
tools:
  list_clusters:
    kind: alloydb-list-clusters
    source: alloydb-admin-source
    description: Use this tool to list all AlloyDB clusters in a given project and location.
```

### Example Request
``` 
curl -X POST http://127.0.0.1:5000/api/tool/list_clusters/invoke \
-H "Content-Type: application/json" \
-d '{
    "projectId": "example-project",
    "locationId": "us-central1"
}'
```

## PR Checklist
---
> Thank you for opening a Pull Request! Before submitting your PR, there
are a
> few things you can do to make sure it goes smoothly:
- [x] Make sure you reviewed

[CONTRIBUTING.md](https://github.com/googleapis/genai-toolbox/blob/main/CONTRIBUTING.md)
- [ ] Make sure to open an issue as a

[bug/issue](https://github.com/googleapis/langchain-google-alloydb-pg-python/issues/new/choose)
before writing your code! That way we can discuss the change, evaluate
  designs, and agree on the general idea
- [x] Ensure the tests and linter pass
- [x] Code coverage does not decrease (if any source code was changed)
- [x] Appropriate docs were updated (if necessary)
- [x] Make sure to add `!` if this involves a breaking change

🛠️ Fixes #<issue_number_goes_here>

---------

Co-authored-by: Averi Kitsch <akitsch@google.com>
2025-09-11 14:03:17 -07:00
96 changed files with 7902 additions and 472 deletions

View File

@@ -20,9 +20,9 @@ DESCRIPTIONS=(
)
# Write the table header
ROW_FMT="| %-105s | %-120s | %-67s |\n"
output_string+=$(printf "$ROW_FMT" "**OS/Architecture**" "**Description**" "**SHA256 Hash**")$'\n'
output_string+=$(printf "$ROW_FMT" "$(printf -- '-%0.s' {1..105})" "$(printf -- '-%0.s' {1..120})" "$(printf -- '-%0.s' {1..67})")$'\n'
ROW_FMT="| %-105s | %-120s | %-67s | %-108s |\n"
output_string+=$(printf "$ROW_FMT" "**OS/Architecture**" "**Description**" "**SHA256 Hash**" "**Signature**")$'\n'
output_string+=$(printf "$ROW_FMT" "$(printf -- '-%0.s' {1..105})" "$(printf -- '-%0.s' {1..120})" "$(printf -- '-%0.s' {1..67})" "$(printf -- '-%0.s' {1..67})")$'\n'
# Loop through all files matching the pattern "toolbox.*.*"
@@ -43,16 +43,19 @@ do
URL="https://storage.googleapis.com/genai-toolbox/$VERSION/$OS/$ARCH/toolbox"
fi
# Generate the signature URL & link
SIG_URL="${URL}.sig"
SIG_LINK="[.sig]($SIG_URL)"
curl "$URL" --fail --output toolbox || exit 1
# Calculate the SHA256 checksum of the file
SHA256=$(shasum -a 256 toolbox | awk '{print $1}')
# Write the table row
output_string+=$(printf "$ROW_FMT" "[$OS/$ARCH]($URL)" "$description_text" "$SHA256")$'\n'
output_string+=$(printf "$ROW_FMT" "[$OS/$ARCH]($URL)" "$description_text" "$SHA256" "$SIG_LINK")$'\n'
rm toolbox
done
printf "$output_string\n"

View File

@@ -62,6 +62,28 @@ steps:
postgressql \
postgresexecutesql
- id: "alloydb"
name: golang:1
waitFor: ["compile-test-binary"]
entrypoint: /bin/bash
env:
- "GOPATH=/gopath"
- "ALLOYDB_PROJECT=$PROJECT_ID"
- "ALLOYDB_CLUSTER=$_ALLOYDB_POSTGRES_CLUSTER"
- "ALLOYDB_INSTANCE=$_ALLOYDB_POSTGRES_INSTANCE"
- "ALLOYDB_REGION=$_REGION"
secretEnv: ["ALLOYDB_POSTGRES_USER"]
volumes:
- name: "go"
path: "/gopath"
args:
- -c
- |
.ci/test_with_coverage.sh \
"AlloyDB" \
alloydb \
alloydb
- id: "alloydb-pg"
name: golang:1
waitFor: ["compile-test-binary"]
@@ -531,6 +553,24 @@ steps:
utility \
utility/alloydbwaitforoperation
- id: "cloud-sql"
name: golang:1
waitFor: ["compile-test-binary"]
entrypoint: /bin/bash
env:
- "GOPATH=/gopath"
secretEnv: ["CLIENT_ID"]
volumes:
- name: "go"
path: "/gopath"
args:
- -c
- |
.ci/test_with_coverage.sh \
"Cloud SQL Wait for Operation" \
cloudsql \
cloudsql
- id: "tidb"
name: golang:1
waitFor: ["compile-test-binary"]

View File

@@ -17,6 +17,7 @@ steps:
waitFor: ['-']
script: |
#!/usr/bin/env bash
set -e
export VERSION=$(cat ./cmd/version.txt)
docker buildx create --name container-builder --driver docker-container --bootstrap --use
@@ -26,6 +27,41 @@ steps:
fi
docker buildx build --platform linux/amd64,linux/arm64 --build-arg BUILD_TYPE=container.release --build-arg COMMIT_SHA=$(git rev-parse HEAD) $TAGS --push .
- id: "generate-token"
name: "gcr.io/cloud-builders/gcloud"
waitFor: ['-']
script: |
#!/usr/bin/env bash
set -e
gcloud auth print-identity-token --audiences=sigstore > /workspace/token
- id: "get-docker-digest"
name: "gcr.io/cloud-builders/gcloud"
waitFor:
- "build-docker"
script: |
#!/usr/bin/env bash
set -e
export VERSION=$(cat ./cmd/version.txt)
IMAGE_DIGEST=$(\
gcloud container images describe ${_DOCKER_URI}:$VERSION \
--format='get(image_summary.fully_qualified_digest)'\
)
echo $IMAGE_DIGEST > /workspace/image_digest
- id: "sign-docker"
name: "gcr.io/projectsigstore/cosign"
waitFor:
- "get-docker-digest"
- "generate-token"
env:
- 'SIGSTORE_NO_CACHE=true'
script: |
#!/busybox/sh
set -e
IMAGE_DIGEST=$(cat /workspace/image_digest)
cosign sign --identity-token=$(cat /workspace/token) $IMAGE_DIGEST -y
- id: "install-dependencies"
name: golang:1
waitFor: ['-']
@@ -52,14 +88,31 @@ steps:
CGO_ENABLED=0 GOOS=linux GOARCH=amd64 \
go build -ldflags "-X github.com/googleapis/genai-toolbox/cmd.buildType=binary -X github.com/googleapis/genai-toolbox/cmd.commitSha=$(git rev-parse HEAD)" -o toolbox.linux.amd64
- id: "sign-linux-amd64"
name: "gcr.io/projectsigstore/cosign"
waitFor:
- "build-linux-amd64"
- "generate-token"
env:
- 'SIGSTORE_NO_CACHE=true'
script: |
#!/busybox/sh
set -e
cosign sign-blob --identity-token=$(cat /workspace/token) --bundle=toolbox.linux.amd64.sig ./toolbox.linux.amd64 -y
- id: "store-linux-amd64"
name: "gcr.io/cloud-builders/gcloud:latest"
waitFor:
- "build-linux-amd64"
- "sign-linux-amd64"
script: |
#!/usr/bin/env bash
set -e
export VERSION=v$(cat ./cmd/version.txt)
gcloud storage cp toolbox.linux.amd64 gs://$_BUCKET_NAME/$VERSION/linux/amd64/toolbox
gcloud storage cp toolbox.linux.amd64 gs://$_BUCKET_NAME/test/$VERSION/linux/amd64/toolbox
gcloud storage cp toolbox.linux.amd64.sig gs://$_BUCKET_NAME/test/$VERSION/linux/amd64/toolbox.sig
- id: "build-darwin-arm64"
name: golang:1
@@ -76,14 +129,30 @@ steps:
CGO_ENABLED=0 GOOS=darwin GOARCH=arm64 \
go build -ldflags "-X github.com/googleapis/genai-toolbox/cmd.buildType=binary -X github.com/googleapis/genai-toolbox/cmd.commitSha=$(git rev-parse HEAD)" -o toolbox.darwin.arm64
- id: "sign-darwin-arm64"
name: "gcr.io/projectsigstore/cosign"
waitFor:
- "build-darwin-arm64"
- "generate-token"
env:
- 'SIGSTORE_NO_CACHE=true'
script: |
#!/busybox/sh
set -e
cosign sign-blob --identity-token=$(cat /workspace/token) --bundle=toolbox.darwin.arm64.sig ./toolbox.darwin.arm64 -y
- id: "store-darwin-arm64"
name: "gcr.io/cloud-builders/gcloud:latest"
waitFor:
- "build-darwin-arm64"
- "sign-darwin-arm64"
script: |
#!/usr/bin/env bash
set -e
export VERSION=v$(cat ./cmd/version.txt)
gcloud storage cp toolbox.darwin.arm64 gs://$_BUCKET_NAME/$VERSION/darwin/arm64/toolbox
gcloud storage cp toolbox.darwin.arm64 gs://$_BUCKET_NAME/test/$VERSION/darwin/arm64/toolbox
gcloud storage cp toolbox.darwin.arm64.sig gs://$_BUCKET_NAME/test/$VERSION/darwin/arm64/toolbox.sig
- id: "build-darwin-amd64"
name: golang:1
@@ -100,14 +169,30 @@ steps:
CGO_ENABLED=0 GOOS=darwin GOARCH=amd64 \
go build -ldflags "-X github.com/googleapis/genai-toolbox/cmd.buildType=binary -X github.com/googleapis/genai-toolbox/cmd.commitSha=$(git rev-parse HEAD)" -o toolbox.darwin.amd64
- id: "sign-darwin-amd64"
name: "gcr.io/projectsigstore/cosign"
waitFor:
- "build-darwin-amd64"
- "generate-token"
env:
- 'SIGSTORE_NO_CACHE=true'
script: |
#!/busybox/sh
set -e
cosign sign-blob --identity-token=$(cat /workspace/token) --bundle=toolbox.darwin.amd64.sig ./toolbox.darwin.amd64 -y
- id: "store-darwin-amd64"
name: "gcr.io/cloud-builders/gcloud:latest"
waitFor:
- "build-darwin-amd64"
- "sign-darwin-amd64"
script: |
#!/usr/bin/env bash
set -e
export VERSION=v$(cat ./cmd/version.txt)
gcloud storage cp toolbox.darwin.amd64 gs://$_BUCKET_NAME/$VERSION/darwin/amd64/toolbox
gcloud storage cp toolbox.darwin.amd64 gs://$_BUCKET_NAME/test/$VERSION/darwin/amd64/toolbox
gcloud storage cp toolbox.darwin.amd64.sig gs://$_BUCKET_NAME/test/$VERSION/darwin/amd64/toolbox.sig
- id: "build-windows-amd64"
name: golang:1
@@ -124,14 +209,30 @@ steps:
CGO_ENABLED=0 GOOS=windows GOARCH=amd64 \
go build -ldflags "-X github.com/googleapis/genai-toolbox/cmd.buildType=binary -X github.com/googleapis/genai-toolbox/cmd.commitSha=$(git rev-parse HEAD)" -o toolbox.windows.amd64
- id: "sign-windows-amd64"
name: "gcr.io/projectsigstore/cosign"
waitFor:
- "build-windows-amd64"
- "generate-token"
env:
- 'SIGSTORE_NO_CACHE=true'
script: |
#!/busybox/sh
set -e
cosign sign-blob --identity-token=$(cat /workspace/token) --bundle=toolbox.windows.amd64.sig ./toolbox.windows.amd64 -y
- id: "store-windows-amd64"
name: "gcr.io/cloud-builders/gcloud:latest"
waitFor:
- "build-windows-amd64"
- "sign-windows-amd64"
script: |
#!/usr/bin/env bash
set -e
export VERSION=v$(cat ./cmd/version.txt)
gcloud storage cp toolbox.windows.amd64 gs://$_BUCKET_NAME/$VERSION/windows/amd64/toolbox.exe
gcloud storage cp toolbox.windows.amd64 gs://$_BUCKET_NAME/test/$VERSION/windows/amd64/toolbox.exe
gcloud storage cp toolbox.windows.amd64.sig gs://$_BUCKET_NAME/test/$VERSION/windows/amd64/toolbox.exe.sig
options:
automapSubstitutions: true
@@ -144,5 +245,5 @@ substitutions:
_AR_HOSTNAME: ${_REGION}-docker.pkg.dev
_AR_REPO_NAME: toolbox
_BUCKET_NAME: genai-toolbox
_DOCKER_URI: ${_AR_HOSTNAME}/${PROJECT_ID}/${_AR_REPO_NAME}/toolbox
_DOCKER_URI: ${_AR_HOSTNAME}/${PROJECT_ID}/${_AR_REPO_NAME}/test
_PUSH_LATEST: "true"

View File

@@ -36,4 +36,5 @@ extraFiles: [
"docs/en/how-to/connect-ide/mssql_mcp.md",
"docs/en/how-to/connect-ide/postgres_mcp.md",
"docs/en/how-to/connect-ide/neo4j_mcp.md",
"docs/en/how-to/connect-ide/sqlite_mcp.md",
]

View File

@@ -42,6 +42,10 @@ import (
"github.com/googleapis/genai-toolbox/internal/util"
// Import tool packages for side effect of registration
_ "github.com/googleapis/genai-toolbox/internal/tools/alloydb/alloydbgetcluster"
_ "github.com/googleapis/genai-toolbox/internal/tools/alloydb/alloydblistclusters"
_ "github.com/googleapis/genai-toolbox/internal/tools/alloydb/alloydblistinstances"
_ "github.com/googleapis/genai-toolbox/internal/tools/alloydb/alloydblistusers"
_ "github.com/googleapis/genai-toolbox/internal/tools/alloydbainl"
_ "github.com/googleapis/genai-toolbox/internal/tools/bigquery/bigqueryanalyzecontribution"
_ "github.com/googleapis/genai-toolbox/internal/tools/bigquery/bigqueryconversationalanalytics"
@@ -57,6 +61,10 @@ import (
_ "github.com/googleapis/genai-toolbox/internal/tools/clickhouse/clickhouselistdatabases"
_ "github.com/googleapis/genai-toolbox/internal/tools/clickhouse/clickhousesql"
_ "github.com/googleapis/genai-toolbox/internal/tools/cloudmonitoring"
_ "github.com/googleapis/genai-toolbox/internal/tools/cloudsql/cloudsqlcreateusers"
_ "github.com/googleapis/genai-toolbox/internal/tools/cloudsql/cloudsqlgetinstances"
_ "github.com/googleapis/genai-toolbox/internal/tools/cloudsql/cloudsqllistinstances"
_ "github.com/googleapis/genai-toolbox/internal/tools/cloudsql/cloudsqlwaitforoperation"
_ "github.com/googleapis/genai-toolbox/internal/tools/couchbase"
_ "github.com/googleapis/genai-toolbox/internal/tools/dataplex/dataplexlookupentry"
_ "github.com/googleapis/genai-toolbox/internal/tools/dataplex/dataplexsearchaspecttypes"
@@ -113,8 +121,10 @@ import (
_ "github.com/googleapis/genai-toolbox/internal/tools/postgres/postgressql"
_ "github.com/googleapis/genai-toolbox/internal/tools/redis"
_ "github.com/googleapis/genai-toolbox/internal/tools/spanner/spannerexecutesql"
_ "github.com/googleapis/genai-toolbox/internal/tools/spanner/spannerlisttables"
_ "github.com/googleapis/genai-toolbox/internal/tools/spanner/spannersql"
_ "github.com/googleapis/genai-toolbox/internal/tools/sqlitesql"
_ "github.com/googleapis/genai-toolbox/internal/tools/sqlite/sqliteexecutesql"
_ "github.com/googleapis/genai-toolbox/internal/tools/sqlite/sqlitesql"
_ "github.com/googleapis/genai-toolbox/internal/tools/tidb/tidbexecutesql"
_ "github.com/googleapis/genai-toolbox/internal/tools/tidb/tidbsql"
_ "github.com/googleapis/genai-toolbox/internal/tools/trino/trinoexecutesql"

View File

@@ -1244,6 +1244,7 @@ func TestPrebuiltTools(t *testing.T) {
postgresconfig, _ := prebuiltconfigs.Get("postgres")
spanner_config, _ := prebuiltconfigs.Get("spanner")
spannerpg_config, _ := prebuiltconfigs.Get("spanner-postgres")
sqlite_config, _ := prebuiltconfigs.Get("sqlite")
neo4jconfig, _ := prebuiltconfigs.Get("neo4j")
// Set environment variables
@@ -1319,6 +1320,8 @@ func TestPrebuiltTools(t *testing.T) {
t.Setenv("LOOKER_CLIENT_SECRET", "your_looker_client_secret")
t.Setenv("LOOKER_VERIFY_SSL", "true")
t.Setenv("SQLITE_DATABASE", "test.db")
t.Setenv("NEO4J_URI", "bolt://localhost:7687")
t.Setenv("NEO4J_DATABASE", "neo4j")
t.Setenv("NEO4J_USERNAME", "your_neo4j_user")
@@ -1337,9 +1340,9 @@ func TestPrebuiltTools(t *testing.T) {
name: "alloydb postgres admin prebuilt tools",
in: alloydb_admin_config,
wantToolset: server.ToolsetConfigs{
"alloydb-postgres-admin-tools": tools.ToolsetConfig{
Name: "alloydb-postgres-admin-tools",
ToolNames: []string{"alloydb-create-cluster", "alloydb-operations-get", "alloydb-create-instance", "alloydb-list-clusters", "alloydb-list-instances", "alloydb-list-users", "alloydb-create-user"},
"alloydb_postgres_admin_tools": tools.ToolsetConfig{
Name: "alloydb_postgres_admin_tools",
ToolNames: []string{"create_cluster", "wait_for_operation", "create_instance", "list_clusters", "list_instances", "list_users", "create_user", "get_cluster"},
},
},
},
@@ -1347,8 +1350,8 @@ func TestPrebuiltTools(t *testing.T) {
name: "alloydb prebuilt tools",
in: alloydb_config,
wantToolset: server.ToolsetConfigs{
"alloydb-postgres-database-tools": tools.ToolsetConfig{
Name: "alloydb-postgres-database-tools",
"alloydb_postgres_database_tools": tools.ToolsetConfig{
Name: "alloydb_postgres_database_tools",
ToolNames: []string{"execute_sql", "list_tables"},
},
},
@@ -1357,8 +1360,8 @@ func TestPrebuiltTools(t *testing.T) {
name: "bigquery prebuilt tools",
in: bigquery_config,
wantToolset: server.ToolsetConfigs{
"bigquery-database-tools": tools.ToolsetConfig{
Name: "bigquery-database-tools",
"bigquery_database_tools": tools.ToolsetConfig{
Name: "bigquery_database_tools",
ToolNames: []string{"analyze_contribution", "ask_data_insights", "execute_sql", "forecast", "get_dataset_info", "get_table_info", "list_dataset_ids", "list_table_ids"},
},
},
@@ -1367,8 +1370,8 @@ func TestPrebuiltTools(t *testing.T) {
name: "clickhouse prebuilt tools",
in: clickhouse_config,
wantToolset: server.ToolsetConfigs{
"clickhouse-database-tools": tools.ToolsetConfig{
Name: "clickhouse-database-tools",
"clickhouse_database_tools": tools.ToolsetConfig{
Name: "clickhouse_database_tools",
ToolNames: []string{"execute_sql", "list_databases"},
},
},
@@ -1377,8 +1380,8 @@ func TestPrebuiltTools(t *testing.T) {
name: "cloudsqlpg prebuilt tools",
in: cloudsqlpg_config,
wantToolset: server.ToolsetConfigs{
"cloud-sql-postgres-database-tools": tools.ToolsetConfig{
Name: "cloud-sql-postgres-database-tools",
"cloud_sql_postgres_database_tools": tools.ToolsetConfig{
Name: "cloud_sql_postgres_database_tools",
ToolNames: []string{"execute_sql", "list_tables"},
},
},
@@ -1387,8 +1390,8 @@ func TestPrebuiltTools(t *testing.T) {
name: "cloudsqlmysql prebuilt tools",
in: cloudsqlmysql_config,
wantToolset: server.ToolsetConfigs{
"cloud-sql-mysql-database-tools": tools.ToolsetConfig{
Name: "cloud-sql-mysql-database-tools",
"cloud_sql_mysql_database_tools": tools.ToolsetConfig{
Name: "cloud_sql_mysql_database_tools",
ToolNames: []string{"execute_sql", "list_tables"},
},
},
@@ -1397,8 +1400,8 @@ func TestPrebuiltTools(t *testing.T) {
name: "cloudsqlmssql prebuilt tools",
in: cloudsqlmssql_config,
wantToolset: server.ToolsetConfigs{
"cloud-sql-mssql-database-tools": tools.ToolsetConfig{
Name: "cloud-sql-mssql-database-tools",
"cloud_sql_mssql_database_tools": tools.ToolsetConfig{
Name: "cloud_sql_mssql_database_tools",
ToolNames: []string{"execute_sql", "list_tables"},
},
},
@@ -1407,9 +1410,9 @@ func TestPrebuiltTools(t *testing.T) {
name: "dataplex prebuilt tools",
in: dataplex_config,
wantToolset: server.ToolsetConfigs{
"dataplex-tools": tools.ToolsetConfig{
Name: "dataplex-tools",
ToolNames: []string{"dataplex_search_entries", "dataplex_lookup_entry", "dataplex_search_aspect_types"},
"dataplex_tools": tools.ToolsetConfig{
Name: "dataplex_tools",
ToolNames: []string{"search_entries", "lookup_entry", "search_aspect_types"},
},
},
},
@@ -1417,9 +1420,9 @@ func TestPrebuiltTools(t *testing.T) {
name: "firestore prebuilt tools",
in: firestoreconfig,
wantToolset: server.ToolsetConfigs{
"firestore-database-tools": tools.ToolsetConfig{
Name: "firestore-database-tools",
ToolNames: []string{"firestore-get-documents", "firestore-add-documents", "firestore-update-document", "firestore-list-collections", "firestore-delete-documents", "firestore-query-collection", "firestore-get-rules", "firestore-validate-rules"},
"firestore_database_tools": tools.ToolsetConfig{
Name: "firestore_database_tools",
ToolNames: []string{"get_documents", "add_documents", "update_document", "list_collections", "delete_documents", "query_collection", "get_rules", "validate_rules"},
},
},
},
@@ -1427,8 +1430,8 @@ func TestPrebuiltTools(t *testing.T) {
name: "mysql prebuilt tools",
in: mysql_config,
wantToolset: server.ToolsetConfigs{
"mysql-database-tools": tools.ToolsetConfig{
Name: "mysql-database-tools",
"mysql_database_tools": tools.ToolsetConfig{
Name: "mysql_database_tools",
ToolNames: []string{"execute_sql", "list_tables"},
},
},
@@ -1437,8 +1440,8 @@ func TestPrebuiltTools(t *testing.T) {
name: "mssql prebuilt tools",
in: mssql_config,
wantToolset: server.ToolsetConfigs{
"mssql-database-tools": tools.ToolsetConfig{
Name: "mssql-database-tools",
"mssql_database_tools": tools.ToolsetConfig{
Name: "mssql_database_tools",
ToolNames: []string{"execute_sql", "list_tables"},
},
},
@@ -1447,8 +1450,8 @@ func TestPrebuiltTools(t *testing.T) {
name: "looker prebuilt tools",
in: looker_config,
wantToolset: server.ToolsetConfigs{
"looker-tools": tools.ToolsetConfig{
Name: "looker-tools",
"looker_tools": tools.ToolsetConfig{
Name: "looker_tools",
ToolNames: []string{"get_models", "get_explores", "get_dimensions", "get_measures", "get_filters", "get_parameters", "query", "query_sql", "query_url", "get_looks", "run_look", "make_look", "get_dashboards", "make_dashboard", "add_dashboard_element"},
},
},
@@ -1457,8 +1460,8 @@ func TestPrebuiltTools(t *testing.T) {
name: "postgres prebuilt tools",
in: postgresconfig,
wantToolset: server.ToolsetConfigs{
"postgres-database-tools": tools.ToolsetConfig{
Name: "postgres-database-tools",
"postgres_database_tools": tools.ToolsetConfig{
Name: "postgres_database_tools",
ToolNames: []string{"execute_sql", "list_tables"},
},
},
@@ -1477,18 +1480,28 @@ func TestPrebuiltTools(t *testing.T) {
name: "spanner pg prebuilt tools",
in: spannerpg_config,
wantToolset: server.ToolsetConfigs{
"spanner-postgres-database-tools": tools.ToolsetConfig{
Name: "spanner-postgres-database-tools",
"spanner_postgres_database_tools": tools.ToolsetConfig{
Name: "spanner_postgres_database_tools",
ToolNames: []string{"execute_sql", "execute_sql_dql", "list_tables"},
},
},
},
{
name: "sqlite prebuilt tools",
in: sqlite_config,
wantToolset: server.ToolsetConfigs{
"sqlite_database_tools": tools.ToolsetConfig{
Name: "sqlite_database_tools",
ToolNames: []string{"execute_sql", "list_tables"},
},
},
},
{
name: "neo4j prebuilt tools",
in: neo4jconfig,
wantToolset: server.ToolsetConfigs{
"neo4j-database-tools": tools.ToolsetConfig{
Name: "neo4j-database-tools",
"neo4j_database_tools": tools.ToolsetConfig{
Name: "neo4j_database_tools",
ToolNames: []string{"execute_cypher", "get_schema"},
},
},

View File

@@ -17,6 +17,11 @@ This guide assumes you have already done the following:
your preferred virtual environment tool for managing dependencies, e.g. [venv][install-venv]).
1. Installed [PostgreSQL 16+ and the `psql` client][install-postgres].
[install-python]: https://wiki.python.org/moin/BeginnersGuide/Download
[install-pip]: https://pip.pypa.io/en/stable/installation/
[install-venv]: https://packaging.python.org/en/latest/tutorials/installing-packages/#creating-virtual-environments
[install-postgres]: https://www.postgresql.org/download/
### Cloud Setup (Optional)
{{< regionInclude "quickstart/shared/cloud_setup.md" "cloud_setup" >}}

View File

@@ -4,7 +4,7 @@ go 1.24.6
require (
github.com/googleapis/mcp-toolbox-sdk-go v0.3.0
google.golang.org/genai v1.23.0
google.golang.org/genai v1.24.0
)
require (

View File

@@ -0,0 +1,255 @@
---
title: SQLite using MCP
type: docs
weight: 2
description: "Connect your IDE to SQLite using Toolbox."
---
[Model Context Protocol (MCP)](https://modelcontextprotocol.io/introduction) is an open protocol for connecting Large Language Models (LLMs) to data sources like SQLite. This guide covers how to use [MCP Toolbox for Databases][toolbox] to expose your developer assistant tools to a SQLite instance:
* [Cursor][cursor]
* [Windsurf][windsurf] (Codium)
* [Visual Studio Code][vscode] (Copilot)
* [Cline][cline] (VS Code extension)
* [Claude desktop][claudedesktop]
* [Claude code][claudecode]
* [Gemini CLI][geminicli]
* [Gemini Code Assist][geminicodeassist]
[toolbox]: https://github.com/googleapis/genai-toolbox
[cursor]: #configure-your-mcp-client
[windsurf]: #configure-your-mcp-client
[vscode]: #configure-your-mcp-client
[cline]: #configure-your-mcp-client
[claudedesktop]: #configure-your-mcp-client
[claudecode]: #configure-your-mcp-client
[geminicli]: #configure-your-mcp-client
[geminicodeassist]: #configure-your-mcp-client
## Set up the database
1. [Create or select a SQLite database file.](https://www.sqlite.org/download.html)
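If you don't have a database file yet, a minimal `sample.db` can be created with the `sqlite3` CLI (a sketch, assuming `sqlite3` is installed; the `hotels` schema is a hypothetical example, while the `sample.db` filename matches the configuration examples below):
```bash
# Create sample.db with a small illustrative table (hypothetical schema).
sqlite3 sample.db <<'EOF'
CREATE TABLE IF NOT EXISTS hotels (
  id INTEGER PRIMARY KEY,
  name TEXT NOT NULL,
  location TEXT
);
INSERT INTO hotels (name, location) VALUES ('Hotel Basel', 'Basel');
EOF
```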
## Install MCP Toolbox
1. Download the latest version of Toolbox as a binary. Select the [correct binary](https://github.com/googleapis/genai-toolbox/releases) corresponding to your OS and CPU architecture. You are required to use Toolbox version v0.10.0 or later:
<!-- {x-release-please-start-version} -->
{{< tabpane persist=header >}}
{{< tab header="linux/amd64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v1.0.0/linux/amd64/toolbox
{{< /tab >}}
{{< tab header="darwin/arm64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v1.0.0/darwin/arm64/toolbox
{{< /tab >}}
{{< tab header="darwin/amd64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v1.0.0/darwin/amd64/toolbox
{{< /tab >}}
{{< tab header="windows/amd64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v1.0.0/windows/amd64/toolbox.exe
{{< /tab >}}
{{< /tabpane >}}
<!-- {x-release-please-end} -->
1. Make the binary executable:
```bash
chmod +x toolbox
```
1. Verify the installation:
```bash
./toolbox --version
```
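1. Optionally, smoke-test the prebuilt SQLite tools before wiring up a client (a sketch; the flags and the `SQLITE_DATABASE` variable are the same ones used in the client configurations below, and `./sample.db` is an illustrative path). Press `Ctrl+C` to exit:
```bash
SQLITE_DATABASE=./sample.db ./toolbox --prebuilt sqlite --stdio
```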
## Configure your MCP Client
{{< tabpane text=true >}}
{{% tab header="Claude code" lang="en" %}}
1. Install [Claude Code](https://docs.anthropic.com/en/docs/agents-and-tools/claude-code/overview).
1. Create a `.mcp.json` file in your project root if it doesn't exist.
1. Add the following configuration, replace the environment variables with your values, and save:
```json
{
"mcpServers": {
"sqlite": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt", "sqlite", "--stdio"],
"env": {
"SQLITE_DATABASE": "./sample.db"
}
}
}
}
```
1. Restart Claude code to apply the new configuration.
{{% /tab %}}
{{% tab header="Claude desktop" lang="en" %}}
1. Open [Claude desktop](https://claude.ai/download) and navigate to Settings.
1. Under the Developer tab, tap Edit Config to open the configuration file.
1. Add the following configuration, replace the environment variables with your values, and save:
```json
{
"mcpServers": {
"sqlite": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt", "sqlite", "--stdio"],
"env": {
"SQLITE_DATABASE": "./sample.db"
}
}
}
}
```
1. Restart Claude desktop.
1. From the new chat screen, you should see a hammer (MCP) icon appear with the new MCP server available.
{{% /tab %}}
{{% tab header="Cline" lang="en" %}}
1. Open the [Cline](https://github.com/cline/cline) extension in VS Code and tap the **MCP Servers** icon.
1. Tap Configure MCP Servers to open the configuration file.
1. Add the following configuration, replace the environment variables with your values, and save:
```json
{
"mcpServers": {
"sqlite": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt", "sqlite", "--stdio"],
"env": {
"SQLITE_DATABASE": "./sample.db"
}
}
}
}
```
1. You should see a green active status after the server is successfully connected.
{{% /tab %}}
{{% tab header="Cursor" lang="en" %}}
1. Create a `.cursor` directory in your project root if it doesn't exist.
1. Create a `.cursor/mcp.json` file if it doesn't exist and open it.
1. Add the following configuration, replace the environment variables with your values, and save:
```json
{
"mcpServers": {
"sqlite": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt", "sqlite", "--stdio"],
"env": {
"SQLITE_DATABASE": "./sample.db"
}
}
}
}
```
1. Open [Cursor](https://www.cursor.com/) and navigate to **Settings > Cursor Settings > MCP**. You should see a green active status after the server is successfully connected.
{{% /tab %}}
{{% tab header="Visual Studio Code (Copilot)" lang="en" %}}
1. Open [VS Code](https://code.visualstudio.com/docs/copilot/overview) and create a `.vscode` directory in your project root if it doesn't exist.
1. Create a `.vscode/mcp.json` file if it doesn't exist and open it.
1. Add the following configuration, replace the environment variables with your values, and save:
```json
{
"servers": {
"sqlite": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","sqlite","--stdio"],
"env": {
"SQLITE_DATABASE": "./sample.db"
}
}
}
}
```
{{% /tab %}}
{{% tab header="Windsurf" lang="en" %}}
1. Open [Windsurf](https://docs.codeium.com/windsurf) and navigate to the Cascade assistant.
1. Tap on the hammer (MCP) icon, then Configure to open the configuration file.
1. Add the following configuration, replace the environment variables with your values, and save:
```json
{
"mcpServers": {
"sqlite": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","sqlite","--stdio"],
"env": {
"SQLITE_DATABASE": "./sample.db"
}
}
}
}
```
{{% /tab %}}
{{% tab header="Gemini CLI" lang="en" %}}
1. Install the [Gemini CLI](https://github.com/google-gemini/gemini-cli?tab=readme-ov-file#quickstart).
1. In your working directory, create a folder named `.gemini`. Within it, create a `settings.json` file.
1. Add the following configuration, replace the environment variables with your values, and then save:
```json
{
"mcpServers": {
"sqlite": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","sqlite","--stdio"],
"env": {
"SQLITE_DATABASE": "./sample.db"
}
}
}
}
```
{{% /tab %}}
{{% tab header="Gemini Code Assist" lang="en" %}}
1. Install the [Gemini Code Assist](https://marketplace.visualstudio.com/items?itemName=Google.geminicodeassist) extension in Visual Studio Code.
1. Enable Agent Mode in Gemini Code Assist chat.
1. In your working directory, create a folder named `.gemini`. Within it, create a `settings.json` file.
1. Add the following configuration, replace the environment variables with your values, and then save:
```json
{
"mcpServers": {
"sqlite": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","sqlite","--stdio"],
"env": {
"SQLITE_DATABASE": "./sample.db"
}
}
}
}
```
{{% /tab %}}
{{< /tabpane >}}
## Use Tools
Your AI tool is now connected to SQLite using MCP. Try asking your AI assistant to list tables, create a table, or define and execute other SQL statements.
The following tools are available to the LLM:
1. **list_tables**: lists tables and their descriptions
1. **execute_sql**: executes any SQL statement
{{< notice note >}}
Prebuilt tools are pre-1.0, so expect some tool changes between versions. LLMs will adapt to the tools available, so this shouldn't affect most users.
{{< /notice >}}

View File

@@ -57,6 +57,7 @@ See guides, [Connect from your IDE](../how-to/connect-ide/_index.md), for detail
* **BigQuery Data Editor** (`roles/bigquery.dataEditor`) to create or modify datasets and tables.
* **Gemini for Google Cloud** (`roles/cloudaicompanion.user`) to use the conversational analytics API.
* **Tools:**
* `analyze_contribution`: Use this tool to perform contribution analysis, also called key driver analysis.
* `ask_data_insights`: Use this tool to perform data analysis, get insights, or answer complex questions about the contents of specific BigQuery tables. For more information on required roles, API setup, and IAM configuration, see the setup and authentication section of the [Conversational Analytics API documentation](https://cloud.google.com/gemini/docs/conversational-analytics-api/overview).
* `execute_sql`: Executes a SQL statement.
* `forecast`: Use this tool to forecast time series data.
@@ -269,6 +270,17 @@ See guides, [Connect from your IDE](../how-to/connect-ide/_index.md), for detail
* `execute_sql_dql`: Executes a DQL SQL query using the PostgreSQL interface for Spanner.
* `list_tables`: Lists tables in the database.
## SQLite
* `--prebuilt` value: `sqlite`
* **Environment Variables:**
* `SQLITE_DATABASE`: The path to the SQLite database file (e.g., `./sample.db`).
* **Permissions:**
* File system read/write permissions for the specified database file.
* **Tools:**
* `execute_sql`: Executes a SQL query.
* `list_tables`: Lists tables in the database.
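For example, a minimal sketch of serving these prebuilt tools (the `./sample.db` path is an illustrative placeholder; add `--stdio` when connecting an MCP client over stdio):
```bash
export SQLITE_DATABASE=./sample.db
./toolbox --prebuilt sqlite
```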
## Neo4j
* `--prebuilt` value: `neo4j`

View File

@@ -36,6 +36,9 @@ avoiding full table scans or complex filters.
## Available Tools
- [`bigquery-analyze-contribution`](../tools/bigquery/bigquery-analyze-contribution.md)
Performs contribution analysis, also called key driver analysis, in BigQuery.
- [`bigquery-conversational-analytics`](../tools/bigquery/bigquery-conversational-analytics.md)
Allows conversational interaction with a BigQuery source.
@@ -128,10 +131,10 @@ sources:
## Reference
| **field** | **type** | **required** | **description** |
|----------------|:--------:|:------------:|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| kind | string | true | Must be "bigquery". |
| project | string | true | Id of the Google Cloud project to use for billing and as the default project for BigQuery resources. |
| location | string | false | Specifies the location (e.g., 'us', 'asia-northeast1') in which to run the query job. This location must match the location of any tables referenced in the query. Defaults to the table's location or 'US' if the location cannot be determined. [Learn More](https://cloud.google.com/bigquery/docs/locations) |
| allowedDatasets | []string | false | An optional list of dataset IDs that tools using this source are allowed to access. If provided, any tool operation attempting to access a dataset not in this list will be rejected. To enforce this, two types of operations are also disallowed: 1) Dataset-level operations (e.g., `CREATE SCHEMA`), and 2) operations where table access cannot be statically analyzed (e.g., `EXECUTE IMMEDIATE`, `CREATE PROCEDURE`). If a single dataset is provided, it will be treated as the default for prebuilt tools. |
| useClientOAuth | bool | false | If true, forwards the client's OAuth access token from the "Authorization" header to downstream queries. |
| **field** | **type** | **required** | **description** |
|-----------------|:--------:|:------------:|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| kind | string | true | Must be "bigquery". |
| project | string | true | Id of the Google Cloud project to use for billing and as the default project for BigQuery resources. |
| location | string | false | Specifies the location (e.g., 'us', 'asia-northeast1') in which to run the query job. This location must match the location of any tables referenced in the query. Defaults to the table's location or 'US' if the location cannot be determined. [Learn More](https://cloud.google.com/bigquery/docs/locations) |
| allowedDatasets | []string | false | An optional list of dataset IDs that tools using this source are allowed to access. If provided, any tool operation attempting to access a dataset not in this list will be rejected. To enforce this, two types of operations are also disallowed: 1) Dataset-level operations (e.g., `CREATE SCHEMA`), and 2) operations where table access cannot be statically analyzed (e.g., `EXECUTE IMMEDIATE`, `CREATE PROCEDURE`). If a single dataset is provided, it will be treated as the default for prebuilt tools. |
| useClientOAuth | bool | false | If true, forwards the client's OAuth access token from the "Authorization" header to downstream queries. |

View File

@@ -26,6 +26,14 @@ SQLite has the following notable characteristics:
- [`sqlite-sql`](../tools/sqlite/sqlite-sql.md)
Run SQL queries against a local SQLite database.
- [`sqlite-execute-sql`](../tools/sqlite/sqlite-execute-sql.md)
Run parameterized SQL statements in SQLite.
### Pre-built Configurations
- [SQLite using MCP](../../how-to/connect-ide/sqlite_mcp.md)
Connect your IDE to SQLite using Toolbox.
## Requirements

View File

@@ -0,0 +1,7 @@
---
title: "AlloyDB for PostgreSQL"
type: docs
weight: 1
description: >
Tools for AlloyDB for PostgreSQL.
---

View File

@@ -0,0 +1,37 @@
---
title: "alloydb-get-cluster"
type: docs
weight: 1
description: >
The "alloydb-get-cluster" tool retrieves details for a specific AlloyDB cluster.
aliases:
- /resources/tools/alloydb-get-cluster
---
## About
The `alloydb-get-cluster` tool retrieves detailed information for a single, specified AlloyDB cluster. It is compatible with the [alloydb-admin](../../sources/alloydb-admin.md) source. The tool takes the following input parameters:
| Parameter | Type | Description | Required |
| :--------- | :----- | :--------------------------------------------------------------------------------------- | :------- |
| `projectId` | string | The GCP project ID to get the cluster for. | Yes |
| `locationId` | string | The location of the cluster (e.g., 'us-central1'). | Yes |
| `clusterId` | string | The ID of the cluster to retrieve. | Yes |
> **Note**
> This tool authenticates using the credentials configured in its [alloydb-admin](../../sources/alloydb-admin.md) source which can be either [Application Default Credentials](https://cloud.google.com/docs/authentication/application-default-credentials) or client-side OAuth.
## Example
```yaml
tools:
get_specific_cluster:
kind: alloydb-get-cluster
source: my-alloydb-admin-source
description: Use this tool to retrieve details for a specific AlloyDB cluster.
```
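Once Toolbox is running with this configuration, the tool can be invoked directly for testing (a sketch, assuming Toolbox's default address of `127.0.0.1:5000` and its `/api/tool/<name>/invoke` HTTP endpoint; the project, location, and cluster values are placeholders):
```bash
curl -X POST http://127.0.0.1:5000/api/tool/get_specific_cluster/invoke \
  -H "Content-Type: application/json" \
  -d '{"projectId": "my-project", "locationId": "us-central1", "clusterId": "my-cluster"}'
```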
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:------------------------------------------:|:------------:|--------------------------------------------------------------------------------------------------|
| kind | string | true | Must be alloydb-get-cluster. |
| source | string | true | The name of an `alloydb-admin` source. |
| description | string | true | Description of the tool that is passed to the agent. |

View File

@@ -0,0 +1,38 @@
---
title: "alloydb-list-clusters"
type: docs
weight: 1
description: >
The "alloydb-list-clusters" tool lists the AlloyDB clusters in a given project and location.
aliases:
- /resources/tools/alloydb-list-clusters
---
## About
The `alloydb-list-clusters` tool retrieves AlloyDB cluster information for all or specified locations in a given project. It is compatible with the [alloydb-admin](../../sources/alloydb-admin.md) source.
The tool lists detailed information about each AlloyDB cluster (cluster name, state, configuration, etc.) for a given project and location. It takes the following input parameters:
| Parameter | Type | Description | Required |
| :--------- | :----- | :--------------------------------------------------------------------------------------- | :------- |
| `projectId` | string | The GCP project ID to list clusters for. | Yes |
| `locationId` | string | The location to list clusters in (e.g., 'us-central1'). Use `-` for all locations. Default: `-`.| No |
> **Note**
> This tool authenticates using the credentials configured in its [alloydb-admin](../../sources/alloydb-admin.md) source which can be either [Application Default Credentials](https://cloud.google.com/docs/authentication/application-default-credentials) or client-side OAuth.
## Example
```yaml
tools:
list_clusters:
kind: alloydb-list-clusters
source: alloydb-admin-source
description: Use this tool to list all AlloyDB clusters in a given project and location.
```
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:------------------------------------------:|:------------:|--------------------------------------------------------------------------------------------------|
| kind | string | true | Must be alloydb-list-clusters. |
| source | string | true | The name of an `alloydb-admin` source. |
| description | string | true | Description of the tool that is passed to the agent. |

View File

@@ -0,0 +1,39 @@
---
title: "alloydb-list-instances"
type: docs
weight: 1
description: >
The "alloydb-list-instances" tool lists the AlloyDB instances for a given project, cluster and location.
aliases:
- /resources/tools/alloydb-list-instances
---
## About
The `alloydb-list-instances` tool retrieves AlloyDB instance information for all or specified clusters and locations in a given project. It is compatible with the [alloydb-admin](../../sources/alloydb-admin.md) source.
The tool lists detailed information about AlloyDB instances (instance name, type, IP address, state, configuration, etc.) for a given project, cluster, and location. It takes the following input parameters:
| Parameter | Type | Description | Required |
| :--------- | :----- | :--------------------------------------------------------------------------------------- | :------- |
| `projectId` | string | The GCP project ID to list instances for. | Yes |
| `clusterId` | string | The ID of the cluster to list instances from. Use '-' to get results for all clusters. Default: `-`.| No |
| `locationId` | string | The location of the cluster (e.g., 'us-central1'). Use '-' to get results for all locations. Default: `-`.| No |
> **Note**
> This tool authenticates using the credentials configured in its [alloydb-admin](../../sources/alloydb-admin.md) source which can be either [Application Default Credentials](https://cloud.google.com/docs/authentication/application-default-credentials) or client-side OAuth.
## Example
```yaml
tools:
list_instances:
kind: alloydb-list-instances
source: alloydb-admin-source
description: Use this tool to list all AlloyDB instances for a given project, cluster and location.
```
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:------------------------------------------:|:------------:|--------------------------------------------------------------------------------------------------|
| kind | string | true | Must be alloydb-list-instances. |
| source | string | true | The name of an `alloydb-admin` source. |
| description | string | true | Description of the tool that is passed to the agent. |

View File

@@ -0,0 +1,38 @@
---
title: "alloydb-list-users"
type: docs
weight: 1
description: >
The "alloydb-list-users" tool lists all database users within an AlloyDB cluster.
aliases:
- /resources/tools/alloydb-list-users
---
## About
The `alloydb-list-users` tool lists all database users within an AlloyDB cluster. It is compatible with the [alloydb-admin](../../sources/alloydb-admin.md) source.
The tool takes the following input parameters:
| Parameter | Type | Description | Required |
| :--------- | :----- | :--------------------------------------------------------------------------------------- | :------- |
| `projectId` | string | The GCP project ID to list users for. | Yes |
| `clusterId` | string | The ID of the cluster to list users from. | Yes |
| `locationId` | string | The location of the cluster (e.g., 'us-central1'). | Yes |
> **Note**
> This tool authenticates using the credentials configured in its [alloydb-admin](../../sources/alloydb-admin.md) source which can be either [Application Default Credentials](https://cloud.google.com/docs/authentication/application-default-credentials) or client-side OAuth.
## Example
```yaml
tools:
list_users:
kind: alloydb-list-users
source: alloydb-admin-source
description: Use this tool to list all database users within an AlloyDB cluster
```
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:------------------------------------------:|:------------:|--------------------------------------------------------------------------------------------------|
| kind | string | true | Must be alloydb-list-users. |
| source | string | true | The name of an `alloydb-admin` source. |
| description | string | true | Description of the tool that is passed to the agent. |

View File

@@ -0,0 +1,7 @@
---
title: "Cloud SQL"
type: docs
weight: 1
description: >
Tools that work with Cloud SQL.
---

View File

@@ -0,0 +1,31 @@
---
title: cloud-sql-create-users
type: docs
weight: 10
description: >
Create a new user in a Cloud SQL instance.
---
The `cloud-sql-create-users` tool creates a new user in a specified Cloud SQL instance. It can create both built-in and IAM users.
{{< notice info >}}
This tool uses a `source` of kind `cloud-sql-admin`.
{{< /notice >}}
## Example
```yaml
tools:
create-cloud-sql-user:
kind: cloud-sql-create-users
source: my-cloud-sql-admin-source
description: "Creates a new user in a Cloud SQL instance. Both built-in and IAM users are supported. IAM users require an email account as the user name. IAM is the more secure and recommended way to manage users. The agent should always ask the user what type of user they want to create. For more information, see https://cloud.google.com/sql/docs/postgres/add-manage-iam-users"
```
## Reference
| **field** | **type** | **required** | **description** |
| ------------ | :-------: | :----------: | ------------------------------------------------ |
| kind | string | true | Must be "cloud-sql-create-users". |
| description | string | false | A description of the tool. |
| source | string | true | The name of the `cloud-sql-admin` source to use. |

View File

@@ -0,0 +1,31 @@
---
title: "cloud-sql-get-instance"
type: docs
weight: 10
description: >
Get a Cloud SQL instance resource.
---
The `cloud-sql-get-instance` tool retrieves a Cloud SQL instance resource using the Cloud SQL Admin API.
{{< notice info >}}
This tool uses a `source` of kind `cloud-sql-admin`.
{{< /notice >}}
## Example
```yaml
tools:
get-sql-instance:
kind: cloud-sql-get-instance
source: my-cloud-sql-admin-source
description: "Gets a particular Cloud SQL instance."
```
## Reference
| **field** | **type** | **required** | **description** |
| ----------- | :------: | :----------: | ------------------------------------------------ |
| kind | string | true | Must be "cloud-sql-get-instance". |
| source | string | true | The name of the `cloud-sql-admin` source to use. |
| description | string | false | A description of the tool. |

View File

@@ -0,0 +1,46 @@
---
title: Cloud SQL List Instances
type: docs
weight: 1
description: "List Cloud SQL instances in a project.\n"
---
The `cloud-sql-list-instances` tool lists all Cloud SQL instances in a specified
Google Cloud project.
{{< notice info >}}
This tool uses the `cloud-sql-admin` source, which automatically handles authentication on behalf of the user.
{{< /notice >}}
## Configuration
Here is an example of how to configure the `cloud-sql-list-instances` tool in your
`tools.yaml` file:
```yaml
sources:
my-cloud-sql-admin-source:
kind: cloud-sql-admin
tools:
list_my_instances:
kind: cloud-sql-list-instances
source: my-cloud-sql-admin-source
description: Use this tool to list all Cloud SQL instances in a project.
```
## Parameters
The `cloud-sql-list-instances` tool has one required parameter:
| **field** | **type** | **required** | **description** |
| --------- | :------: | :----------: | ---------------------------- |
| project | string | true | The Google Cloud project ID. |
## Reference
| **field** | **type** | **required** | **description** |
| ------------ | :-------: | :----------: | ----------------------------------------------------------------------------------- |
| kind | string | true | Must be "cloud-sql-list-instances". |
| description | string | false | Description of the tool that is passed to the agent. |
| source | string | true | The name of the `cloud-sql-admin` source to use for this tool. |

View File

@@ -0,0 +1,39 @@
---
title: "cloud-sql-wait-for-operation"
type: docs
weight: 10
description: >
Wait for a long-running Cloud SQL operation to complete.
---
The `cloud-sql-wait-for-operation` tool is a utility tool that waits for a
long-running Cloud SQL operation to complete. It does this by polling the Cloud
SQL Admin API operation status endpoint until the operation is finished, using
exponential backoff. With the defaults (3s initial delay, 2.0 multiplier, 4m
maximum delay), for example, successive polls are spaced roughly 3s, 6s, 12s,
24s, and so on apart, capped at 4 minutes.
## Example
```yaml
tools:
cloudsql-operations-get:
kind: cloud-sql-wait-for-operation
source: my-cloud-sql-source
description: "This will poll the operations API until the operation is done. Checking operation status requires a projectId and an operationId. Once the instance is created, give follow-up steps on how to use the variables to bring the data plane MCP server up in local and remote setups."
delay: 1s
maxDelay: 4m
multiplier: 2
maxRetries: 10
```
## Reference
| **field** | **type** | **required** | **description** |
| ----------- | :------: | :----------: | ---------------------------------------------------------------------------------------------------------------- |
| kind | string | true | Must be "cloud-sql-wait-for-operation". |
| source | string | true | The name of a `cloud-sql-admin` source to use for authentication. |
| description | string | false | A description of the tool. |
| delay | duration | false | The initial delay between polling requests (e.g., `3s`). Defaults to 3 seconds. |
| maxDelay | duration | false | The maximum delay between polling requests (e.g., `4m`). Defaults to 4 minutes. |
| multiplier | float | false | The multiplier for the polling delay. The delay is multiplied by this value after each request. Defaults to 2.0. |
| maxRetries | int | false | The maximum number of polling attempts before giving up. Defaults to 10. |

View File

@@ -29,6 +29,10 @@ It's compatible with the following sources:
7. an optional `limit`
8. an optional `tz`
Starting in Looker v25.18, these queries can be identified in Looker's
System Activity. In the History explore, use the API Client Name field
to find MCP Toolbox queries.
## Example
```yaml

View File

@@ -29,6 +29,10 @@ It's compatible with the following sources:
7. an optional `limit`
8. an optional `tz`
Starting in Looker v25.18, these queries can be identified in Looker's
System Activity. In the History explore, use the API Client Name field
to find MCP Toolbox queries.
## Example
```yaml

View File

@@ -0,0 +1,208 @@
---
title: "spanner-list-tables"
type: docs
weight: 3
description: >
A "spanner-list-tables" tool retrieves schema information about tables in a
Google Cloud Spanner database.
---
## About
A `spanner-list-tables` tool retrieves comprehensive schema information about
tables in a Cloud Spanner database. It automatically adapts to the database
dialect (GoogleSQL or PostgreSQL) and returns detailed metadata including
columns, constraints, and indexes. It's compatible with:
- [spanner](../../sources/spanner.md)
This tool is read-only and executes pre-defined SQL queries against the
`INFORMATION_SCHEMA` tables to gather metadata. The tool automatically detects
the database dialect from the source configuration and uses the appropriate SQL
syntax.
## Features
- **Automatic Dialect Detection**: Adapts queries based on whether the database
uses GoogleSQL or PostgreSQL dialect
- **Comprehensive Schema Information**: Returns columns, data types, constraints,
indexes, and table relationships
- **Flexible Filtering**: Can list all tables or filter by specific table names
- **Output Format Options**: Choose between simple (table names only) or detailed
(full schema information) output
## Example
### Basic Usage - List All Tables
```yaml
sources:
my-spanner-db:
kind: spanner
project: ${SPANNER_PROJECT}
instance: ${SPANNER_INSTANCE}
database: ${SPANNER_DATABASE}
dialect: googlesql # or postgresql
tools:
list_all_tables:
kind: spanner-list-tables
source: my-spanner-db
description: Lists all tables with their complete schema information
```
### List Specific Tables
```yaml
tools:
list_specific_tables:
kind: spanner-list-tables
source: my-spanner-db
description: |
Lists schema information for specific tables.
Example usage:
{
"table_names": "users,orders,products",
"output_format": "detailed"
}
```
## Parameters
The tool accepts two optional parameters:
| **parameter** | **type** | **default** | **description** |
|-----------------|:--------:|:-----------:|------------------------------------------------------------------------------------------------------|
| table_names | string | "" | Comma-separated list of table names to filter. If empty, lists all tables in user-accessible schemas |
| output_format | string | "detailed" | Output format: "simple" returns only table names, "detailed" returns full schema information |
## Output Format
### Simple Format
When `output_format` is set to "simple", the tool returns a minimal JSON structure:
```json
[
{
"schema_name": "public",
"object_name": "users",
"object_details": "{\"name\":\"users\"}"
},
{
"schema_name": "public",
"object_name": "orders",
"object_details": "{\"name\":\"orders\"}"
}
]
```
### Detailed Format
When `output_format` is set to "detailed" (default), the tool returns comprehensive schema information:
```json
[
{
"schema_name": "public",
"object_name": "users",
"object_details": "{
\"schema_name\": \"public\",
\"object_name\": \"users\",
\"object_type\": \"BASE TABLE\",
\"columns\": [
{
\"column_name\": \"id\",
\"data_type\": \"INT64\",
\"ordinal_position\": 1,
\"is_not_nullable\": true,
\"column_default\": null
},
{
\"column_name\": \"email\",
\"data_type\": \"STRING(255)\",
\"ordinal_position\": 2,
\"is_not_nullable\": true,
\"column_default\": null
}
],
\"constraints\": [
{
\"constraint_name\": \"PK_users\",
\"constraint_type\": \"PRIMARY KEY\",
\"constraint_definition\": \"PRIMARY KEY (id)\",
\"constraint_columns\": [\"id\"],
\"foreign_key_referenced_table\": null,
\"foreign_key_referenced_columns\": []
}
],
\"indexes\": [
{
\"index_name\": \"idx_users_email\",
\"index_type\": \"INDEX\",
\"is_unique\": true,
\"is_null_filtered\": false,
\"interleaved_in_table\": null,
\"index_key_columns\": [
{\"column_name\": \"email\", \"ordering\": \"ASC\"}
],
\"storing_columns\": []
}
]
}"
}
]
```
## Use Cases
1. **Database Documentation**: Generate comprehensive documentation of your
database schema
2. **Schema Validation**: Verify that expected tables and columns exist
3. **Migration Planning**: Understand the current schema before making changes
4. **Development Tools**: Build tools that need to understand database structure
5. **Audit and Compliance**: Track schema changes and ensure compliance with
data governance policies
## Example with Agent Integration
```yaml
sources:
spanner-db:
kind: spanner
project: my-project
instance: my-instance
database: my-database
dialect: googlesql
tools:
schema_inspector:
kind: spanner-list-tables
source: spanner-db
description: |
Use this tool to inspect database schema information.
You can:
- List all tables by leaving table_names empty
- Get specific table schemas by providing comma-separated table names
- Choose between simple (names only) or detailed (full schema) output
Examples:
1. List all tables with details: {"output_format": "detailed"}
2. Get specific tables: {"table_names": "users,orders", "output_format": "detailed"}
3. Just get table names: {"output_format": "simple"}
```
## Reference
| **field** | **type** | **required** | **description** |
|---------------|:--------:|:------------:|--------------------------------------------------------------------|
| kind | string | true | Must be "spanner-list-tables" |
| source | string | true | Name of the Spanner source to query |
| description | string | false | Description of the tool that is passed to the LLM |
| authRequired | string[] | false | List of auth services required to invoke this tool |
## Notes
- This tool is read-only and does not modify any data
- The tool automatically handles both GoogleSQL and PostgreSQL dialects
- Large databases with many tables may take longer to query

View File

@@ -0,0 +1,39 @@
---
title: "sqlite-execute-sql"
type: docs
weight: 1
description: >
A "sqlite-execute-sql" tool executes a single SQL statement against a SQLite database.
aliases:
- /resources/tools/sqlite-execute-sql
---
## About
A `sqlite-execute-sql` tool executes a single SQL statement against a SQLite database.
It's compatible with any of the following sources:
- [sqlite](../../sources/sqlite.md)
This tool is designed for direct execution of SQL statements. It takes a single `sql` input parameter and runs the SQL statement against the configured SQLite `source`.
> **Note:** This tool is intended for developer assistant workflows with
> human-in-the-loop and shouldn't be used for production agents.
## Example
```yaml
tools:
execute_sql_tool:
kind: sqlite-execute-sql
source: my-sqlite-db
description: Use this tool to execute a SQL statement.
```
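With Toolbox running, the tool can be exercised directly (a sketch, assuming the default address of `127.0.0.1:5000` and the `/api/tool/<name>/invoke` HTTP endpoint). This example lists user tables via SQLite's `sqlite_master` catalog:
```bash
curl -X POST http://127.0.0.1:5000/api/tool/execute_sql_tool/invoke \
  -H "Content-Type: application/json" \
  -d "{\"sql\": \"SELECT name FROM sqlite_master WHERE type = 'table';\"}"
```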
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:------------------------------------------:|:------------:|----------------------------------------------------|
| kind | string | true | Must be "sqlite-execute-sql". |
| source | string | true | Name of the source the SQL should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |

36
go.mod
View File

@@ -9,10 +9,10 @@ require (
cloud.google.com/go/bigquery v1.70.0
cloud.google.com/go/bigtable v1.39.0
cloud.google.com/go/cloudsqlconn v1.18.1
cloud.google.com/go/dataplex v1.26.0
cloud.google.com/go/dataplex v1.27.0
cloud.google.com/go/firestore v1.18.0
cloud.google.com/go/spanner v1.84.1
github.com/ClickHouse/clickhouse-go/v2 v2.40.1
cloud.google.com/go/spanner v1.85.1
github.com/ClickHouse/clickhouse-go/v2 v2.40.3
github.com/GoogleCloudPlatform/opentelemetry-operations-go/exporter/metric v0.53.0
github.com/GoogleCloudPlatform/opentelemetry-operations-go/exporter/trace v1.29.0
github.com/cenkalti/backoff/v5 v5.0.3
@@ -30,11 +30,11 @@ require (
github.com/google/uuid v1.6.0
github.com/jackc/pgx/v5 v5.7.6
github.com/json-iterator/go v1.1.12
github.com/looker-open-source/sdk-codegen/go v0.25.10
github.com/looker-open-source/sdk-codegen/go v0.25.11
github.com/microsoft/go-mssqldb v1.9.3
github.com/nakagami/firebirdsql v0.9.15
github.com/neo4j/neo4j-go-driver/v5 v5.28.3
github.com/redis/go-redis/v9 v9.13.0
github.com/redis/go-redis/v9 v9.14.0
github.com/spf13/cobra v1.9.1
github.com/thlib/go-timezone-local v0.0.7
github.com/trinodb/trino-go-client v0.328.0
@@ -42,21 +42,21 @@ require (
github.com/yugabyte/pgx/v5 v5.5.3-yb-5
go.mongodb.org/mongo-driver v1.17.4
go.opentelemetry.io/contrib/propagators/autoprop v0.62.0
go.opentelemetry.io/otel v1.37.0
go.opentelemetry.io/otel v1.38.0
go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp v1.37.0
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.37.0
go.opentelemetry.io/otel/metric v1.37.0
go.opentelemetry.io/otel/metric v1.38.0
go.opentelemetry.io/otel/sdk v1.37.0
go.opentelemetry.io/otel/sdk/metric v1.37.0
go.opentelemetry.io/otel/trace v1.37.0
go.opentelemetry.io/otel/trace v1.38.0
golang.org/x/oauth2 v0.31.0
google.golang.org/api v0.249.0
google.golang.org/genproto v0.0.0-20250826171959-ef028d996bc1
modernc.org/sqlite v1.38.2
modernc.org/sqlite v1.39.0
)
require (
github.com/ClickHouse/ch-go v0.67.0 // indirect
github.com/ClickHouse/ch-go v0.68.0 // indirect
github.com/andybalholm/brotli v1.2.0 // indirect
github.com/andybalholm/cascadia v1.3.3 // indirect
github.com/go-faster/city v1.0.1 // indirect
@@ -65,7 +65,6 @@ require (
github.com/segmentio/asm v1.2.0 // indirect
github.com/shopspring/decimal v1.4.0 // indirect
golang.org/x/exp v0.0.0-20250620022241-b7579e27df2b // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
)
require (
@@ -163,14 +162,15 @@ require (
go.opentelemetry.io/proto/otlp v1.7.0 // indirect
go.uber.org/multierr v1.11.0 // indirect
go.uber.org/zap v1.27.0 // indirect
golang.org/x/crypto v0.41.0 // indirect
golang.org/x/mod v0.26.0 // indirect
golang.org/x/net v0.43.0 // indirect
golang.org/x/sync v0.16.0 // indirect
golang.org/x/sys v0.35.0 // indirect
golang.org/x/text v0.28.0 // indirect
go.yaml.in/yaml/v3 v3.0.4 // indirect
golang.org/x/crypto v0.42.0 // indirect
golang.org/x/mod v0.27.0 // indirect
golang.org/x/net v0.44.0 // indirect
golang.org/x/sync v0.17.0 // indirect
golang.org/x/sys v0.36.0 // indirect
golang.org/x/text v0.29.0 // indirect
golang.org/x/time v0.12.0 // indirect
golang.org/x/tools v0.35.0 // indirect
golang.org/x/tools v0.36.0 // indirect
golang.org/x/xerrors v0.0.0-20240903120638-7835f813f4da // indirect
google.golang.org/genproto/googleapis/api v0.0.0-20250818200422-3122310a409c // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20250818200422-3122310a409c // indirect

80
go.sum
View File

@@ -236,8 +236,8 @@ cloud.google.com/go/dataplex v1.3.0/go.mod h1:hQuRtDg+fCiFgC8j0zV222HvzFQdRd+SVX
cloud.google.com/go/dataplex v1.4.0/go.mod h1:X51GfLXEMVJ6UN47ESVqvlsRplbLhcsAt0kZCCKsU0A=
cloud.google.com/go/dataplex v1.5.2/go.mod h1:cVMgQHsmfRoI5KFYq4JtIBEUbYwc3c7tXmIDhRmNNVQ=
cloud.google.com/go/dataplex v1.6.0/go.mod h1:bMsomC/aEJOSpHXdFKFGQ1b0TDPIeL28nJObeO1ppRs=
cloud.google.com/go/dataplex v1.26.0 h1:nu8/KrLR5v62L1lApGNgm61Oq+xaa2bS9rgc1csjqE0=
cloud.google.com/go/dataplex v1.26.0/go.mod h1:12R9nlLUzxOscbb2HgoYnkGNibmv4sXEVMXxrdw2a90=
cloud.google.com/go/dataplex v1.27.0 h1:k6gf5DnX7YHD/hilShxVP9ExmGrEWZFjfBZ7rHt4JlM=
cloud.google.com/go/dataplex v1.27.0/go.mod h1:VB+xlYJiJ5kreonXsa2cHPj0A3CfPh/mgiHG4JFhbUA=
cloud.google.com/go/dataproc v1.7.0/go.mod h1:CKAlMjII9H90RXaMpSxQ8EU6dQx6iAYNPcYPOkSbi8s=
cloud.google.com/go/dataproc v1.8.0/go.mod h1:5OW+zNAH0pMpw14JVrPONsxMQYMBqJuzORhIBfBn9uI=
cloud.google.com/go/dataproc v1.12.0/go.mod h1:zrF3aX0uV3ikkMz6z4uBbIKyhRITnxvr4i3IjKsKrw4=
@@ -544,8 +544,8 @@ cloud.google.com/go/shell v1.6.0/go.mod h1:oHO8QACS90luWgxP3N9iZVuEiSF84zNyLytb+
cloud.google.com/go/spanner v1.41.0/go.mod h1:MLYDBJR/dY4Wt7ZaMIQ7rXOTLjYrmxLE/5ve9vFfWos=
cloud.google.com/go/spanner v1.44.0/go.mod h1:G8XIgYdOK+Fbcpbs7p2fiprDw4CaZX63whnSMLVBxjk=
cloud.google.com/go/spanner v1.45.0/go.mod h1:FIws5LowYz8YAE1J8fOS7DJup8ff7xJeetWEo5REA2M=
cloud.google.com/go/spanner v1.84.1 h1:ShH4Y3YeDtmHa55dFiSS3YtQ0dmCuP0okfAoHp/d68w=
cloud.google.com/go/spanner v1.84.1/go.mod h1:3GMEIjOcXINJSvb42H3M6TdlGCDzaCFpiiNQpjHPlCM=
cloud.google.com/go/spanner v1.85.1 h1:cJx1ZD//C2QIfFQl8hSTn4twL8amAXtnayyflRIjj40=
cloud.google.com/go/spanner v1.85.1/go.mod h1:bbwCXbM+zljwSPLZ44wZOdzcdmy89hbUGmM/r9sD0ws=
cloud.google.com/go/speech v1.6.0/go.mod h1:79tcr4FHCimOp56lwC01xnt/WPJZc4v3gzyT7FoBkCM=
cloud.google.com/go/speech v1.7.0/go.mod h1:KptqL+BAQIhMsj1kOP2la5DSEEerPDuOP/2mmkhHhZQ=
cloud.google.com/go/speech v1.8.0/go.mod h1:9bYIl1/tjsAnMgKGHKmBZzXKEkGgtU+MpdDPTE9f7y0=
@@ -655,10 +655,10 @@ github.com/AzureAD/microsoft-authentication-library-for-go v1.4.2 h1:oygO0locgZJ
github.com/AzureAD/microsoft-authentication-library-for-go v1.4.2/go.mod h1:wP83P5OoQ5p6ip3ScPr0BAq0BvuPAvacpEuSzyouqAI=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/BurntSushi/xgb v0.0.0-20160522181843-27f122750802/go.mod h1:IVnqGOEym/WlBOVXweHU+Q+/VP0lqqI8lqeDx9IjBqo=
github.com/ClickHouse/ch-go v0.67.0 h1:18MQF6vZHj+4/hTRaK7JbS/TIzn4I55wC+QzO24uiqc=
github.com/ClickHouse/ch-go v0.67.0/go.mod h1:2MSAeyVmgt+9a2k2SQPPG1b4qbTPzdGDpf1+bcHh+18=
github.com/ClickHouse/clickhouse-go/v2 v2.40.1 h1:PbwsHBgqXRydU7jKULD1C8CHmifczffvQqmFvltM2W4=
github.com/ClickHouse/clickhouse-go/v2 v2.40.1/go.mod h1:GDzSBLVhladVm8V01aEB36IoBOVLLICfyeuiIp/8Ezc=
github.com/ClickHouse/ch-go v0.68.0 h1:zd2VD8l2aVYnXFRyhTyKCrxvhSz1AaY4wBUXu/f0GiU=
github.com/ClickHouse/ch-go v0.68.0/go.mod h1:C89Fsm7oyck9hr6rRo5gqqiVtaIY6AjdD0WFMyNRQ5s=
github.com/ClickHouse/clickhouse-go/v2 v2.40.3 h1:46jB4kKwVDUOnECpStKMVXxvR0Cg9zeV9vdbPjtn6po=
github.com/ClickHouse/clickhouse-go/v2 v2.40.3/go.mod h1:qO0HwvjCnTB4BPL/k6EE3l4d9f/uF+aoimAhJX70eKA=
github.com/GoogleCloudPlatform/grpc-gcp-go/grpcgcp v1.5.3 h1:2afWGsMzkIcN8Qm4mgPJKZWyroE5QBszMiDMYEBrnfw=
github.com/GoogleCloudPlatform/grpc-gcp-go/grpcgcp v1.5.3/go.mod h1:dppbR7CwXD4pgtV9t3wD1812RaLDcBjtblcDF5f1vI0=
github.com/GoogleCloudPlatform/opentelemetry-operations-go/detectors/gcp v1.29.0 h1:UQUsRi8WTzhZntp5313l+CHIAT95ojUI2lpP/ExlZa4=
@@ -802,8 +802,8 @@ github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f h1:lO4WD4F/r
github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f/go.mod h1:cuUVRXasLTGF7a8hSLbxyZXjz+1KgoB3wDUb6vlszIc=
github.com/docker/cli v26.1.4+incompatible h1:I8PHdc0MtxEADqYJZvhBrW9bo8gawKwwenxRM7/rLu8=
github.com/docker/cli v26.1.4+incompatible/go.mod h1:JLrzqnKDaYBop7H2jaqPtU4hHvMKP+vjCwu2uszcLI8=
github.com/docker/docker v28.3.3+incompatible h1:Dypm25kh4rmk49v1eiVbsAtpAsYURjYkaKubwuBdxEI=
github.com/docker/docker v28.3.3+incompatible/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk=
github.com/docker/docker v28.4.0+incompatible h1:KVC7bz5zJY/4AZe/78BIvCnPsLaC9T/zh72xnlrTTOk=
github.com/docker/docker v28.4.0+incompatible/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk=
github.com/docker/go-connections v0.5.0 h1:USnMq7hx7gwdVZq1L49hLXaFtUdTADjXGp+uj1Br63c=
github.com/docker/go-connections v0.5.0/go.mod h1:ov60Kzw0kKElRwhNs9UlUHAE/F9Fe6GLaXnqyDdmEXc=
github.com/docker/go-units v0.5.0 h1:69rxXcBk27SvSaaxTtLh/8llcHD8vYHT7WSdRZ/jvr4=
@@ -895,7 +895,6 @@ github.com/goccy/go-json v0.10.5 h1:Fq85nIqj+gXn/S5ahsiTlK3TmC85qgirsdTP/+DeaC4=
github.com/goccy/go-json v0.10.5/go.mod h1:oq7eo15ShAhp70Anwd5lgX2pLfOS3QCiwU/PULtXL6M=
github.com/goccy/go-yaml v1.18.0 h1:8W7wMFS12Pcas7KU+VVkaiCng+kG8QiFeFwzFb+rwuw=
github.com/goccy/go-yaml v1.18.0/go.mod h1:XBurs7gK8ATbW4ZPGKgcbrY1Br56PdM69F7LkFRi1kA=
github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q=
github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q=
github.com/golang-jwt/jwt/v5 v5.2.2 h1:Rl4B7itRWVtYIHFrSNd7vhTiz9UpLdi6gZhZ3wEeDy8=
github.com/golang-jwt/jwt/v5 v5.2.2/go.mod h1:pqrtFR0X4osieyHYxtmOUWsAWrfe1Q5UVIyoH402zdk=
@@ -1121,8 +1120,8 @@ github.com/kylelemons/godebug v1.1.0 h1:RPNrshWIDI6G2gRW9EHilWtl7Z6Sb1BR0xunSBf0
github.com/kylelemons/godebug v1.1.0/go.mod h1:9/0rRGxNHcop5bhtWyNeEfOS8JIWk580+fNqagV/RAw=
github.com/leodido/go-urn v1.4.0 h1:WT9HwE9SGECu3lg4d/dIA+jxlljEa1/ffXKmRjqdmIQ=
github.com/leodido/go-urn v1.4.0/go.mod h1:bvxc+MVxLKB4z00jd1z+Dvzr47oO32F/QSNjSBOlFxI=
github.com/looker-open-source/sdk-codegen/go v0.25.10 h1:ltBbwkwZrQEHEIKrE5QbF+EtBlweKN0RZpQR0w2GIqo=
github.com/looker-open-source/sdk-codegen/go v0.25.10/go.mod h1:YM/IYSsTPk7I54j4l6PduNJYgXyOShuaMi7mD6xic8E=
github.com/looker-open-source/sdk-codegen/go v0.25.11 h1:IPxG3eTqz8ICd1ImqffEKQVd8G9/IAbjH7dg4IhiQtU=
github.com/looker-open-source/sdk-codegen/go v0.25.11/go.mod h1:Br1ntSiruDJ/4nYNjpYyWyCbqJ7+GQceWbIgn0hYims=
github.com/lyft/protoc-gen-star v0.6.0/go.mod h1:TGAoBVkt8w7MPG72TrKIu85MIdXwDuzJYeZuUPFPNwA=
github.com/lyft/protoc-gen-star v0.6.1/go.mod h1:TGAoBVkt8w7MPG72TrKIu85MIdXwDuzJYeZuUPFPNwA=
github.com/lyft/protoc-gen-star/v2 v2.0.1/go.mod h1:RcCdONR2ScXaYnQC5tUzxzlpA3WVYF7/opLeUgcQs/o=
@@ -1195,8 +1194,8 @@ github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2/go.mod h1:iKH
github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/client_model v0.2.0/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/client_model v0.3.0/go.mod h1:LDGWKZIo7rky3hgvBe+caln+Dr3dPggB5dvjtD7w9+w=
github.com/redis/go-redis/v9 v9.13.0 h1:PpmlVykE0ODh8P43U0HqC+2NXHXwG+GUtQyz+MPKGRg=
github.com/redis/go-redis/v9 v9.13.0/go.mod h1:huWgSWd8mW6+m0VPhJjSSQ+d6Nh1VICQ6Q5lHuCH/Iw=
github.com/redis/go-redis/v9 v9.14.0 h1:u4tNCjXOyzfgeLN+vAZaW1xUooqWDqVEsZN0U01jfAE=
github.com/redis/go-redis/v9 v9.14.0/go.mod h1:huWgSWd8mW6+m0VPhJjSSQ+d6Nh1VICQ6Q5lHuCH/Iw=
github.com/remyoudompheng/bigfft v0.0.0-20200410134404-eec4a21b6bb0/go.mod h1:qqbHyh8v60DhA7CoWK5oRCqLrMHRGoxYCSS9EjAz6Eo=
github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec h1:W09IVJc94icq4NjY3clb7Lk8O1qJ8BdBEF8z0ibU0rE=
github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec/go.mod h1:qqbHyh8v60DhA7CoWK5oRCqLrMHRGoxYCSS9EjAz6Eo=
@@ -1242,8 +1241,8 @@ github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/
github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
github.com/stretchr/testify v1.8.1/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
github.com/stretchr/testify v1.8.3/go.mod h1:sz/lmYIOXD/1dqDmKjjqLyZ2RngseejIcXlSw2iwfAo=
github.com/stretchr/testify v1.10.0 h1:Xv5erBjTwe/5IxqUQTdXv5kgmIvbHo3QQyRwhJsOfJA=
github.com/stretchr/testify v1.10.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U=
github.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U=
github.com/thlib/go-timezone-local v0.0.7 h1:fX8zd3aJydqLlTs/TrROrIIdztzsdFV23OzOQx31jII=
github.com/thlib/go-timezone-local v0.0.7/go.mod h1:/Tnicc6m/lsJE0irFMA0LfIwTBo4QP7A8IfyIv4zZKI=
github.com/tidwall/pretty v1.0.0/go.mod h1:XNkn88O1ChpSDQmQeStsy+sBenx6DDtFZJxhVysOjyk=
@@ -1317,22 +1316,22 @@ go.opentelemetry.io/contrib/propagators/jaeger v1.37.0 h1:pW+qDVo0jB0rLsNeaP85xL
go.opentelemetry.io/contrib/propagators/jaeger v1.37.0/go.mod h1:x7bd+t034hxLTve1hF9Yn9qQJlO/pP8H5pWIt7+gsFM=
go.opentelemetry.io/contrib/propagators/ot v1.37.0 h1:tVjnBF6EiTDMXoq2Xuc2vK0I7MTbEs05II/0j9mMK+E=
go.opentelemetry.io/contrib/propagators/ot v1.37.0/go.mod h1:MQjyNXtxAC8PGN9gzPtO4GY5zuP+RI3XX53uWbCTvEQ=
go.opentelemetry.io/otel v1.37.0 h1:9zhNfelUvx0KBfu/gb+ZgeAfAgtWrfHJZcAqFC228wQ=
go.opentelemetry.io/otel v1.37.0/go.mod h1:ehE/umFRLnuLa/vSccNq9oS1ErUlkkK71gMcN34UG8I=
go.opentelemetry.io/otel v1.38.0 h1:RkfdswUDRimDg0m2Az18RKOsnI8UDzppJAtj01/Ymk8=
go.opentelemetry.io/otel v1.38.0/go.mod h1:zcmtmQ1+YmQM9wrNsTGV/q/uyusom3P8RxwExxkZhjM=
go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp v1.37.0 h1:9PgnL3QNlj10uGxExowIDIZu66aVBwWhXmbOp1pa6RA=
go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp v1.37.0/go.mod h1:0ineDcLELf6JmKfuo0wvvhAVMuxWFYvkTin2iV4ydPQ=
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.37.0 h1:Ahq7pZmv87yiyn3jeFz/LekZmPLLdKejuO3NcK9MssM=
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.37.0/go.mod h1:MJTqhM0im3mRLw1i8uGHnCvUEeS7VwRyxlLC78PA18M=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.37.0 h1:bDMKF3RUSxshZ5OjOTi8rsHGaPKsAt76FaqgvIUySLc=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.37.0/go.mod h1:dDT67G/IkA46Mr2l9Uj7HsQVwsjASyV9SjGofsiUZDA=
go.opentelemetry.io/otel/metric v1.37.0 h1:mvwbQS5m0tbmqML4NqK+e3aDiO02vsf/WgbsdpcPoZE=
go.opentelemetry.io/otel/metric v1.37.0/go.mod h1:04wGrZurHYKOc+RKeye86GwKiTb9FKm1WHtO+4EVr2E=
go.opentelemetry.io/otel/metric v1.38.0 h1:Kl6lzIYGAh5M159u9NgiRkmoMKjvbsKtYRwgfrA6WpA=
go.opentelemetry.io/otel/metric v1.38.0/go.mod h1:kB5n/QoRM8YwmUahxvI3bO34eVtQf2i4utNVLr9gEmI=
go.opentelemetry.io/otel/sdk v1.37.0 h1:ItB0QUqnjesGRvNcmAcU0LyvkVyGJ2xftD29bWdDvKI=
go.opentelemetry.io/otel/sdk v1.37.0/go.mod h1:VredYzxUvuo2q3WRcDnKDjbdvmO0sCzOvVAiY+yUkAg=
go.opentelemetry.io/otel/sdk/metric v1.37.0 h1:90lI228XrB9jCMuSdA0673aubgRobVZFhbjxHHspCPc=
go.opentelemetry.io/otel/sdk/metric v1.37.0/go.mod h1:cNen4ZWfiD37l5NhS+Keb5RXVWZWpRE+9WyVCpbo5ps=
go.opentelemetry.io/otel/trace v1.37.0 h1:HLdcFNbRQBE2imdSEgm/kwqmQj1Or1l/7bW6mxVK7z4=
go.opentelemetry.io/otel/trace v1.37.0/go.mod h1:TlgrlQ+PtQO5XFerSPUYG0JSgGyryXewPGyayAWSBS0=
go.opentelemetry.io/otel/trace v1.38.0 h1:Fxk5bKrDZJUH+AMyyIXGcFAPah0oRcT+LuNtJrmcNLE=
go.opentelemetry.io/otel/trace v1.38.0/go.mod h1:j1P9ivuFsTceSWe1oY+EeW3sc+Pp42sO++GHkg4wwhs=
go.opentelemetry.io/proto/otlp v0.7.0/go.mod h1:PqfVotwruBrMGOCsRd/89rSnXhoiJIqeYNgFYFoEGnI=
go.opentelemetry.io/proto/otlp v0.15.0/go.mod h1:H7XAot3MsfNsj7EXtrA2q5xSNQ10UqI405h3+duxN4U=
go.opentelemetry.io/proto/otlp v0.19.0/go.mod h1:H7XAot3MsfNsj7EXtrA2q5xSNQ10UqI405h3+duxN4U=
@@ -1348,6 +1347,8 @@ go.uber.org/multierr v1.11.0/go.mod h1:20+QtiLqy0Nd6FdQB9TLXag12DsQkrbs3htMFfDN8
go.uber.org/zap v1.18.1/go.mod h1:xg/QME4nWcxGxrpdeYfq7UvYrLh66cuVKdrbD1XF/NI=
go.uber.org/zap v1.27.0 h1:aJMhYGrd5QSmlpLMr2MftRKl7t8J8PTZPA732ud/XR8=
go.uber.org/zap v1.27.0/go.mod h1:GB2qFLM7cTU87MWRP2mPIjqfIDnGu+VIO4V/SdhGo2E=
go.yaml.in/yaml/v3 v3.0.4 h1:tfq32ie2Jv2UxXFdLJdh3jXuOzWiL1fo0bu/FbuKpbc=
go.yaml.in/yaml/v3 v3.0.4/go.mod h1:DhzuOOF2ATzADvBadXxruRBLzYTpT36CKvDb3+aBEFg=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20190510104115-cbcb75029529/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20190605123033-f99c8df09eb5/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
@@ -1363,8 +1364,8 @@ golang.org/x/crypto v0.13.0/go.mod h1:y6Z2r+Rw4iayiXXAIxJIDAJ1zMW4yaTpebo8fPOliY
golang.org/x/crypto v0.19.0/go.mod h1:Iy9bg/ha4yyC70EfRS8jz+B6ybOBKMaSxLj6P6oBDfU=
golang.org/x/crypto v0.23.0/go.mod h1:CKFgDieR+mRhux2Lsu27y0fO304Db0wZe70UKqHu0v8=
golang.org/x/crypto v0.31.0/go.mod h1:kDsLvtWBEx7MV9tJOj9bnXsPbxwJQ6csT/x4KIN4Ssk=
golang.org/x/crypto v0.41.0 h1:WKYxWedPGCTVVl5+WHSSrOBT0O8lx32+zxmHxijgXp4=
golang.org/x/crypto v0.41.0/go.mod h1:pO5AFd7FA68rFak7rOAGVuygIISepHftHnr8dr6+sUc=
golang.org/x/crypto v0.42.0 h1:chiH31gIWm57EkTXpwnqf8qeuMUi0yekh6mT2AvFlqI=
golang.org/x/crypto v0.42.0/go.mod h1:4+rDnOTJhQCx2q7/j6rAN5XDw8kPjeaXEUR2eL94ix8=
golang.org/x/exp v0.0.0-20180321215751-8460e604b9de/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20180807140117-3d87b88a115f/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
@@ -1427,8 +1428,8 @@ golang.org/x/mod v0.9.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
golang.org/x/mod v0.12.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
golang.org/x/mod v0.15.0/go.mod h1:hTbmBsO62+eylJbnUtE2MGJUyE7QWk4xUqPFrRgJ+7c=
golang.org/x/mod v0.17.0/go.mod h1:hTbmBsO62+eylJbnUtE2MGJUyE7QWk4xUqPFrRgJ+7c=
golang.org/x/mod v0.26.0 h1:EGMPT//Ezu+ylkCijjPc+f4Aih7sZvaAr+O3EHBxvZg=
golang.org/x/mod v0.26.0/go.mod h1:/j6NAhSk8iQ723BGAUyoAcn7SlD7s15Dp9Nd/SfeaFQ=
golang.org/x/mod v0.27.0 h1:kb+q2PyFnEADO2IEF935ehFUXlWiNjJWtRNgBLSfbxQ=
golang.org/x/mod v0.27.0/go.mod h1:rWI627Fq0DEoudcK+MBkNkCe0EetEaDSwJJkCcjpazc=
golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190108225652-1e06a53dbb7e/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
@@ -1492,8 +1493,8 @@ golang.org/x/net v0.15.0/go.mod h1:idbUs1IY1+zTqbi8yxTbhexhEEk5ur9LInksu6HrEpk=
golang.org/x/net v0.21.0/go.mod h1:bIjVDfnllIU7BJ2DNgfnXvpSvtn8VRwhlsaeUTyUS44=
golang.org/x/net v0.25.0/go.mod h1:JkAGAh7GEvH74S6FOH42FLoXpXbE/aqXSrIQjXgsiwM=
golang.org/x/net v0.33.0/go.mod h1:HXLR5J+9DxmrqMwG9qjGCxZ+zKXxBru04zlTvWlWuN4=
golang.org/x/net v0.43.0 h1:lat02VYK2j4aLzMzecihNvTlJNQUq316m2Mr9rnM6YE=
golang.org/x/net v0.43.0/go.mod h1:vhO1fvI4dGsIjh73sWfUVjj3N7CA9WkKJNQm2svM6Jg=
golang.org/x/net v0.44.0 h1:evd8IRDyfNBMBTTY5XRF1vaZlD+EmWx6x8PkhR04H/I=
golang.org/x/net v0.44.0/go.mod h1:ECOoLqd5U3Lhyeyo/QDCEVQ4sNgYsqvCZ722XogGieY=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
@@ -1545,8 +1546,8 @@ golang.org/x/sync v0.3.0/go.mod h1:FU7BRWz2tNW+3quACPkgCx/L+uEAv1htQ0V83Z9Rj+Y=
golang.org/x/sync v0.6.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sync v0.7.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sync v0.10.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sync v0.16.0 h1:ycBJEhp9p4vXvUZNszeOq0kGTPghopOL8q0fq3vstxw=
golang.org/x/sync v0.16.0/go.mod h1:1dzgHSNfp02xaA81J2MS99Qcpr2w7fw1gpm99rleRqA=
golang.org/x/sync v0.17.0 h1:l60nONMj9l5drqw6jlhIELNv9I0A4OFgRsG9k2oT9Ug=
golang.org/x/sync v0.17.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI=
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190312061237-fead79001313/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
@@ -1632,8 +1633,8 @@ golang.org/x/sys v0.12.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.17.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.20.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.28.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.35.0 h1:vz1N37gP5bs89s7He8XuIYXpyY0+QlsKmzipCbUtyxI=
golang.org/x/sys v0.35.0/go.mod h1:BJP2sWEmIv4KK5OTEluFJCKSidICx8ciO85XgH3Ak8k=
golang.org/x/sys v0.36.0 h1:KVRy2GtZBrk1cBYA7MKu5bEZFxQk4NIDV6RLVcC8o0k=
golang.org/x/sys v0.36.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
golang.org/x/telemetry v0.0.0-20240228155512-f48c80bd79b2/go.mod h1:TeRTkGYfJXctD9OcfyVLyj2J3IxLnKwHJR8f4D8a3YE=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
@@ -1668,8 +1669,8 @@ golang.org/x/text v0.13.0/go.mod h1:TvPlkZtksWOMsz7fbANvkp4WM8x/WCo/om8BMLbz+aE=
golang.org/x/text v0.14.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU=
golang.org/x/text v0.15.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU=
golang.org/x/text v0.21.0/go.mod h1:4IBbMaMmOPCJ8SecivzSH54+73PCFmPWxNTLm+vZkEQ=
golang.org/x/text v0.28.0 h1:rhazDwis8INMIwQ4tpjLDzUhx6RlXqZNPEM0huQojng=
golang.org/x/text v0.28.0/go.mod h1:U8nCwOR8jO/marOQ0QbDiOngZVEBB7MAiitBuMjXiNU=
golang.org/x/text v0.29.0 h1:1neNs90w9YzJ9BocxfsQNHKuAT4pkghyXc4nhZ6sJvk=
golang.org/x/text v0.29.0/go.mod h1:7MhJOA9CD2qZyOKYazxdYMF85OwPdEr9jTtBpO7ydH4=
golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20191024005414-555d28b269f0/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
@@ -1744,8 +1745,8 @@ golang.org/x/tools v0.6.0/go.mod h1:Xwgl3UAJ/d3gWutnCtw505GrjyAbvKui8lOU390QaIU=
golang.org/x/tools v0.7.0/go.mod h1:4pg6aUX35JBAogB10C9AtvVL+qowtN4pT3CGSQex14s=
golang.org/x/tools v0.13.0/go.mod h1:HvlwmtVNQAhOuCjW7xxvovg8wbNq7LwfXh/k7wXUl58=
golang.org/x/tools v0.21.1-0.20240508182429-e35e4ccd0d2d/go.mod h1:aiJjzUbINMkxbQROHiO6hDPo2LHcIPhhQsa9DLh0yGk=
golang.org/x/tools v0.35.0 h1:mBffYraMEf7aa0sB+NuKnuCy8qI/9Bughn8dC2Gu5r0=
golang.org/x/tools v0.35.0/go.mod h1:NKdj5HkL/73byiZSJjqJgKn3ep7KjFkBOkR/Hps3VPw=
golang.org/x/tools v0.36.0 h1:kWS0uv/zsvHEle1LbV5LE8QujrxB3wfQyxHfhOk0Qkg=
golang.org/x/tools v0.36.0/go.mod h1:WBDiHKJK8YgLHlcQPYQzNCkUxUypCaa5ZegCVutKm+s=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
@@ -2039,7 +2040,6 @@ gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q=
gopkg.in/errgo.v2 v2.1.0/go.mod h1:hNsd1EY+bozCKY1Ytp96fpM3vjJbqLJn88ws8XvfDNI=
gopkg.in/ini.v1 v1.61.0/go.mod h1:pNLf8WUiyNEtQjuu5G5vTm06TEv9tsIgeAvK8hOrP4k=
gopkg.in/ini.v1 v1.67.0 h1:Dgnx+6+nfE+IfzjUEISNeydPJh9AXNNsWbGP9KzCsOA=
gopkg.in/ini.v1 v1.67.0/go.mod h1:pNLf8WUiyNEtQjuu5G5vTm06TEv9tsIgeAvK8hOrP4k=
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
@@ -2108,8 +2108,8 @@ modernc.org/opt v0.1.4/go.mod h1:03fq9lsNfvkYSfxrfUhZCWPk1lm4cq4N+Bh//bEtgns=
modernc.org/sortutil v1.2.1 h1:+xyoGf15mM3NMlPDnFqrteY07klSFxLElE2PVuWIJ7w=
modernc.org/sortutil v1.2.1/go.mod h1:7ZI3a3REbai7gzCLcotuw9AC4VZVpYMjDzETGsSMqJE=
modernc.org/sqlite v1.18.1/go.mod h1:6ho+Gow7oX5V+OiOQ6Tr4xeqbx13UZ6t+Fw9IRUG4d4=
modernc.org/sqlite v1.38.2 h1:Aclu7+tgjgcQVShZqim41Bbw9Cho0y/7WzYptXqkEek=
modernc.org/sqlite v1.38.2/go.mod h1:cPTJYSlgg3Sfg046yBShXENNtPrWrDX8bsbAQBzgQ5E=
modernc.org/sqlite v1.39.0 h1:6bwu9Ooim0yVYA7IZn9demiQk/Ejp0BtTjBWFLymSeY=
modernc.org/sqlite v1.39.0/go.mod h1:cPTJYSlgg3Sfg046yBShXENNtPrWrDX8bsbAQBzgQ5E=
modernc.org/strutil v1.1.1/go.mod h1:DE+MQQ/hjKBZS2zNInV5hhcipt5rLPWkmpbGeW5mmdw=
modernc.org/strutil v1.1.3/go.mod h1:MEHNA7PdEnEwLvspRMtWTNnp2nnyvMfkimT1NKNAGbw=
modernc.org/strutil v1.2.1 h1:UneZBkQA+DX2Rp35KcM69cSsNES9ly8mQWD71HKlOA0=


@@ -26,8 +26,11 @@ var expectedToolSources = []string{
"alloydb-postgres",
"bigquery",
"clickhouse",
"cloud-sql-mssql-observability",
"cloud-sql-mssql",
"cloud-sql-mysql-observability",
"cloud-sql-mysql",
"cloud-sql-postgres-observability",
"cloud-sql-postgres",
"dataplex",
"firestore",
@@ -39,6 +42,7 @@ var expectedToolSources = []string{
"postgres",
"spanner-postgres",
"spanner",
"sqlite",
}
func TestGetPrebuiltSources(t *testing.T) {
@@ -90,8 +94,11 @@ func TestGetPrebuiltTool(t *testing.T) {
alloydb_config, _ := Get("alloydb-postgres")
bigquery_config, _ := Get("bigquery")
clickhouse_config, _ := Get("clickhouse")
cloudsqlpg_observability_config, _ := Get("cloud-sql-postgres-observability")
cloudsqlpg_config, _ := Get("cloud-sql-postgres")
cloudsqlmysql_observability_config, _ := Get("cloud-sql-mysql-observability")
cloudsqlmysql_config, _ := Get("cloud-sql-mysql")
cloudsqlmssql_observability_config, _ := Get("cloud-sql-mssql-observability")
cloudsqlmssql_config, _ := Get("cloud-sql-mssql")
dataplex_config, _ := Get("dataplex")
firestoreconfig, _ := Get("firestore")
@@ -101,6 +108,7 @@ func TestGetPrebuiltTool(t *testing.T) {
postgresconfig, _ := Get("postgres")
spanner_config, _ := Get("spanner")
spannerpg_config, _ := Get("spanner-postgres")
sqlite_config, _ := Get("sqlite")
neo4jconfig, _ := Get("neo4j")
if len(alloydb_admin_config) <= 0 {
t.Fatalf("unexpected error: could not fetch alloydb prebuilt tools yaml")
@@ -117,12 +125,21 @@ func TestGetPrebuiltTool(t *testing.T) {
if len(clickhouse_config) <= 0 {
t.Fatalf("unexpected error: could not fetch clickhouse prebuilt tools yaml")
}
if len(cloudsqlpg_observability_config) <= 0 {
t.Fatalf("unexpected error: could not fetch cloud sql pg observability prebuilt tools yaml")
}
if len(cloudsqlpg_config) <= 0 {
t.Fatalf("unexpected error: could not fetch cloud sql pg prebuilt tools yaml")
}
if len(cloudsqlmysql_observability_config) <= 0 {
t.Fatalf("unexpected error: could not fetch cloud sql mysql observability prebuilt tools yaml")
}
if len(cloudsqlmysql_config) <= 0 {
t.Fatalf("unexpected error: could not fetch cloud sql mysql prebuilt tools yaml")
}
if len(cloudsqlmssql_observability_config) <= 0 {
t.Fatalf("unexpected error: could not fetch cloud sql mssql observability prebuilt tools yaml")
}
if len(cloudsqlmssql_config) <= 0 {
t.Fatalf("unexpected error: could not fetch cloud sql mssql prebuilt tools yaml")
}
@@ -150,6 +167,9 @@ func TestGetPrebuiltTool(t *testing.T) {
if len(spannerpg_config) <= 0 {
t.Fatalf("unexpected error: could not fetch spanner pg prebuilt tools yaml")
}
if len(sqlite_config) <= 0 {
t.Fatalf("unexpected error: could not fetch sqlite prebuilt tools yaml")
}
if len(neo4jconfig) <= 0 {
t.Fatalf("unexpected error: could not fetch neo4j prebuilt tools yaml")
}
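The additions above follow the registry pattern used throughout this test: each prebuilt source name maps to an embedded tools YAML, and Get returns its raw bytes. A minimal usage sketch in Go, assuming Get lives in a prebuiltconfigs package and returns the config bytes plus an error; the import path and error semantics are assumptions, since the diff only shows the call site:

package main

import (
	"fmt"
	"log"

	// Hypothetical import path; the diff shows only the Get call,
	// not where the package lives.
	"github.com/googleapis/genai-toolbox/internal/prebuiltconfigs"
)

func main() {
	// Fetch the embedded tools YAML for the newly added "sqlite" source.
	cfg, err := prebuiltconfigs.Get("sqlite")
	if err != nil {
		log.Fatalf("could not fetch sqlite prebuilt tools yaml: %v", err)
	}
	if len(cfg) == 0 {
		log.Fatal("sqlite prebuilt tools yaml is empty")
	}
	fmt.Printf("loaded %d bytes of sqlite prebuilt config\n", len(cfg))
}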


@@ -1,3 +1,17 @@
# Copyright 2025 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
sources:
alloydb-api-source:
kind: http
@@ -5,8 +19,10 @@ sources:
headers:
Authorization: Bearer ${API_KEY}
Content-Type: application/json
alloydb-admin-source:
kind: alloydb-admin
tools:
alloydb-create-cluster:
create_cluster:
kind: http
source: alloydb-api-source
method: POST
@@ -48,14 +64,14 @@ tools:
- name: user
type: string
description: "The name for the initial superuser. If not provided, it defaults to 'postgres'. The initial database will always be named 'postgres'."
alloydb-operations-get:
wait_for_operation:
kind: alloydb-wait-for-operation
description: "This will poll on operations API until the operation is done. For checking operation status we need projectId, locationID and operationId. Once instance is created give follow up steps on how to use the variables to bring data plane MCP server up in local and remote setup."
delay: 1s
maxDelay: 4m
multiplier: 2
maxRetries: 10
alloydb-create-instance:
create_instance:
kind: http
source: alloydb-api-source
method: POST
@@ -114,56 +130,19 @@ tools:
type: string
description: "The full resource name of the primary cluster for a SECONDARY instance. Required only if instanceType is SECONDARY. Otherwise don't ask"
default: ""
alloydb-list-clusters:
kind: http
source: alloydb-api-source
method: GET
path: /v1/projects/{{.projectId}}/locations/{{.locationId}}/clusters
list_clusters:
kind: alloydb-list-clusters
source: alloydb-admin-source
description: "Lists all AlloyDB clusters in a given project and location."
pathParams:
- name: projectId
type: string
description: "The GCP project ID to list clusters for."
- name: locationId
type: string
description: "The location to list clusters in (e.g., 'us-central1'). Use '-' to list clusters across all locations."
default: "-"
alloydb-list-instances:
kind: http
source: alloydb-api-source
method: GET
path: /v1/projects/{{.projectId}}/locations/{{.locationId}}/clusters/{{.clusterId}}/instances
list_instances:
kind: alloydb-list-instances
source: alloydb-admin-source
description: "Lists all AlloyDB instances within a specific cluster."
pathParams:
- name: projectId
type: string
description: "The GCP project ID."
- name: locationId
type: string
description: "The location of the cluster (e.g., 'us-central1'). Use '-' to get results for all regions."
default: "-"
- name: clusterId
type: string
description: "The ID of the cluster to list instances from. Use '-' to get results for all clusters."
default: "-"
alloydb-list-users:
kind: http
source: alloydb-api-source
method: GET
path: /v1/projects/{{.projectId}}/locations/{{.locationId}}/clusters/{{.clusterId}}/users
list_users:
kind: alloydb-list-users
source: alloydb-admin-source
description: "Lists all database users within a specific AlloyDB cluster."
pathParams:
- name: projectId
type: string
description: "The GCP project ID."
- name: locationId
type: string
description: "The location of the cluster (e.g., 'us-central1')."
default: us-central1
- name: clusterId
type: string
description: "The ID of the cluster to list users from."
alloydb-create-user:
create_user:
kind: http
source: alloydb-api-source
method: POST
@@ -210,13 +189,18 @@ tools:
type: string
description: "The type of user to create. Valid values are: USER_TYPE_UNSPECIFIED, ALLOYDB_BUILT_IN, ALLOYDB_IAM_USER."
default: "ALLOYDB_BUILT_IN"
get_cluster:
kind: alloydb-get-cluster
source: alloydb-admin-source
description: "Retrieves details of a specific AlloyDB cluster."
toolsets:
alloydb-postgres-admin-tools:
- alloydb-create-cluster
- alloydb-operations-get
- alloydb-create-instance
- alloydb-list-clusters
- alloydb-list-instances
- alloydb-list-users
- alloydb-create-user
alloydb_postgres_admin_tools:
- create_cluster
- wait_for_operation
- create_instance
- list_clusters
- list_instances
- list_users
- create_user
- get_cluster
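For reference, the wait_for_operation settings above (delay: 1s, multiplier: 2, maxDelay: 4m, maxRetries: 10) describe a capped exponential backoff. A minimal Go sketch of that polling loop, assuming a hypothetical checkOperation callback; the real logic lives in the alloydb-wait-for-operation tool kind and may differ:

package main

import (
	"errors"
	"log"
	"time"
)

// pollOperation mirrors the backoff parameters configured above:
// delay 1s, multiplier 2, maxDelay 4m, maxRetries 10.
// checkOperation is a hypothetical callback reporting whether the
// long-running operation has finished.
func pollOperation(checkOperation func() (bool, error)) error {
	delay := 1 * time.Second
	const (
		multiplier = 2
		maxDelay   = 4 * time.Minute
		maxRetries = 10
	)
	for attempt := 0; attempt < maxRetries; attempt++ {
		done, err := checkOperation()
		if err != nil {
			return err
		}
		if done {
			return nil
		}
		// Wait, then double the delay, capped at maxDelay.
		time.Sleep(delay)
		delay *= multiplier
		if delay > maxDelay {
			delay = maxDelay
		}
	}
	return errors.New("operation did not complete within the retry budget")
}

func main() {
	polls := 0
	err := pollOperation(func() (bool, error) {
		polls++
		return polls >= 3, nil // pretend the operation finishes on the third poll
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("operation done after %d polls", polls)
}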


@@ -116,6 +116,6 @@ tools:
18. `alloydb.googleapis.com/database/postgresql/insights/pertag/row_count`: The number of retrieved or affected rows since the last sample per tag. `alloydb.googleapis.com/Database`. `user`, `client_addr`, `action`, `application`, `controller`, `db_driver`, `framework`, `route`, `tag_hash`.
toolsets:
alloydb-postgres-cloud-monitoring-tools:
alloydb_postgres_cloud_monitoring_tools:
- get_system_metrics
- get_query_metrics


@@ -36,6 +36,6 @@ tools:
description: "Lists detailed schema information (object type, columns, constraints, indexes, triggers, owner, comment) as JSON for user-created tables (ordinary or partitioned). Filters by a comma-separated list of names. If names are omitted, lists all tables in user schemas."
toolsets:
alloydb-postgres-database-tools:
alloydb_postgres_database_tools:
- execute_sql
- list_tables


@@ -17,6 +17,7 @@ sources:
kind: "bigquery"
project: ${BIGQUERY_PROJECT}
location: ${BIGQUERY_LOCATION:}
useClientOAuth: ${BIGQUERY_USE_CLIENT_OAUTH:false}
tools:
analyze_contribution:
@@ -63,7 +64,7 @@ tools:
description: Use this tool to list tables.
toolsets:
bigquery-database-tools:
bigquery_database_tools:
- analyze_contribution
- ask_data_insights
- execute_sql
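The ${BIGQUERY_USE_CLIENT_OAUTH:false} value above uses the same ${VAR:default} convention as ${BIGQUERY_LOCATION:}: substitute the environment variable when set, otherwise fall back to the text after the colon. A small illustrative Go expander for that syntax; this is a sketch of the convention, not the toolbox's actual substitution code:

package main

import (
	"fmt"
	"os"
	"regexp"
)

// placeholder matches ${VAR} and ${VAR:default} placeholders.
var placeholder = regexp.MustCompile(`\$\{([A-Za-z_][A-Za-z0-9_]*)(?::([^}]*))?\}`)

// expandEnvDefaults substitutes each placeholder with the environment
// variable's value when set, or with the default after the colon
// (empty string when no default is given, as in ${BIGQUERY_LOCATION:}).
func expandEnvDefaults(s string) string {
	return placeholder.ReplaceAllStringFunc(s, func(m string) string {
		groups := placeholder.FindStringSubmatch(m)
		if v, ok := os.LookupEnv(groups[1]); ok {
			return v
		}
		return groups[2]
	})
}

func main() {
	os.Setenv("BIGQUERY_PROJECT", "my-project")
	fmt.Println(expandEnvDefaults("project: ${BIGQUERY_PROJECT}"))
	fmt.Println(expandEnvDefaults("useClientOAuth: ${BIGQUERY_USE_CLIENT_OAUTH:false}"))
}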


@@ -33,6 +33,6 @@ tools:
description: Use this tool to list all databases in ClickHouse.
toolsets:
clickhouse-database-tools:
clickhouse_database_tools:
- execute_sql
- list_databases


@@ -0,0 +1,76 @@
# Copyright 2025 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
sources:
cloud-monitoring-source:
kind: cloud-monitoring
tools:
get_system_metrics:
kind: cloud-monitoring-query-prometheus
source: cloud-monitoring-source
description: |
Fetches system-level Cloud Monitoring data (time-series metrics) for a SQL Server instance using a PromQL query. Take the projectId and instanceId for which the metrics need to be fetched from the user.
To use this tool, you must provide the Google Cloud `projectId` and a PromQL `query`.
Generate the PromQL `query` for SQL Server system metrics. Use the provided metrics and rules to construct queries; get labels like `instance_id` from user intent.
Defaults:
1. Interval: Use a default interval of `5m` for `_over_time` aggregation functions unless a different window is specified by the user.
PromQL Query Examples:
1. Basic Time Series: `avg_over_time({"__name__"="cloudsql.googleapis.com/database/cpu/utilization","monitored_resource"="cloudsql_database","project_id"="my-projectId","database_id"="my-projectId:my-instanceId"}[5m])`
2. Top K: `topk(30, avg_over_time({"__name__"="cloudsql.googleapis.com/database/cpu/utilization","monitored_resource"="cloudsql_database","project_id"="my-projectId","database_id"="my-projectId:my-instanceId"}[5m]))`
3. Mean: `avg(avg_over_time({"__name__"="cloudsql.googleapis.com/database/cpu/utilization","monitored_resource"="cloudsql_database","project_id"="my-projectId","database_id"="my-projectId:my-instanceId"}[5m]))`
4. Minimum: `min(min_over_time({"__name__"="cloudsql.googleapis.com/database/cpu/utilization","monitored_resource"="cloudsql_database","project_id"="my-projectId","database_id"="my-projectId:my-instanceId"}[5m]))`
5. Maximum: `max(max_over_time({"__name__"="cloudsql.googleapis.com/database/cpu/utilization","monitored_resource"="cloudsql_database","project_id"="my-projectId","database_id"="my-projectId:my-instanceId"}[5m]))`
6. Sum: `sum(avg_over_time({"__name__"="cloudsql.googleapis.com/database/cpu/utilization","monitored_resource"="cloudsql_database","project_id"="my-projectId","database_id"="my-projectId:my-instanceId"}[5m]))`
7. Count streams: `count(avg_over_time({"__name__"="cloudsql.googleapis.com/database/cpu/utilization","monitored_resource"="cloudsql_database","project_id"="my-projectId","database_id"="my-projectId:my-instanceId"}[5m]))`
8. Percentile with groupby on database_id: `quantile by ("database_id")(0.99,avg_over_time({"__name__"="cloudsql.googleapis.com/database/cpu/utilization","monitored_resource"="cloudsql_database","project_id"="my-projectId","database_id"="my-projectId:my-instanceId"}[5m]))`
Available Metrics List (each entry: metric name. description. monitored resource. labels.). Note that database_id is actually the instance id, in the format `project_id:instance_id`.
1. `cloudsql.googleapis.com/database/cpu/utilization`: Current CPU utilization as a percentage of the reserved CPU. `cloudsql_database`. `database`, `project_id`, `database_id`.
2. `cloudsql.googleapis.com/database/memory/usage`: RAM usage in bytes, excluding buffer/cache. `cloudsql_database`. `database`, `project_id`, `database_id`.
3. `cloudsql.googleapis.com/database/memory/total_usage`: Total RAM usage in bytes, including buffer/cache. `cloudsql_database`. `database`, `project_id`, `database_id`.
4. `cloudsql.googleapis.com/database/disk/bytes_used`: Data utilization in bytes. `cloudsql_database`. `database`, `project_id`, `database_id`.
5. `cloudsql.googleapis.com/database/disk/quota`: Maximum data disk size in bytes. `cloudsql_database`. `database`, `project_id`, `database_id`.
6. `cloudsql.googleapis.com/database/disk/read_ops_count`: Delta count of data disk read IO operations. `cloudsql_database`. `database`, `project_id`, `database_id`.
7. `cloudsql.googleapis.com/database/disk/write_ops_count`: Delta count of data disk write IO operations. `cloudsql_database`. `database`, `project_id`, `database_id`.
8. `cloudsql.googleapis.com/database/network/received_bytes_count`: Delta count of bytes received through the network. `cloudsql_database`. `database`, `project_id`, `database_id`.
9. `cloudsql.googleapis.com/database/network/sent_bytes_count`: Delta count of bytes sent through the network. `cloudsql_database`. `destination`, `database`, `project_id`, `database_id`.
10. `cloudsql.googleapis.com/database/sqlserver/memory/buffer_cache_hit_ratio`: Current percentage of pages found in the buffer cache without reading from disk. `cloudsql_database`. `database`, `project_id`, `database_id`.
11. `cloudsql.googleapis.com/database/sqlserver/memory/memory_grants_pending`: Current number of processes waiting for a workspace memory grant. `cloudsql_database`. `database`, `project_id`, `database_id`.
12. `cloudsql.googleapis.com/database/sqlserver/memory/free_list_stall_count`: Total number of requests that waited for a free page. `cloudsql_database`. `database`, `project_id`, `database_id`.
13. `cloudsql.googleapis.com/database/swap/pages_swapped_in_count`: Total count of pages swapped in from disk since the system was booted. `cloudsql_database`. `database`, `project_id`, `database_id`.
14. `cloudsql.googleapis.com/database/swap/pages_swapped_out_count`: Total count of pages swapped out to disk since the system was booted. `cloudsql_database`. `database`, `project_id`, `database_id`.
15. `cloudsql.googleapis.com/database/sqlserver/memory/checkpoint_page_count`: Total number of pages flushed to disk by a checkpoint. `cloudsql_database`. `database`, `project_id`, `database_id`.
16. `cloudsql.googleapis.com/database/sqlserver/memory/lazy_write_count`: Total number of buffers written by the buffer manager's lazy writer. `cloudsql_database`. `database`, `project_id`, `database_id`.
17. `cloudsql.googleapis.com/database/sqlserver/memory/page_life_expectancy`: Current number of seconds a page will stay in the buffer pool. `cloudsql_database`. `database`, `project_id`, `database_id`.
18. `cloudsql.googleapis.com/database/sqlserver/memory/page_operation_count`: Total number of physical database page reads or writes. `cloudsql_database`. `operation`, `database`, `project_id`, `database_id`.
19. `cloudsql.googleapis.com/database/sqlserver/transactions/page_split_count`: Total number of page splits from overflowing index pages. `cloudsql_database`. `database`, `project_id`, `database_id`.
20. `cloudsql.googleapis.com/database/sqlserver/transactions/deadlock_count`: Total number of lock requests that resulted in a deadlock. `cloudsql_database`. `locked_resource`, `database`, `project_id`, `database_id`.
21. `cloudsql.googleapis.com/database/sqlserver/transactions/transaction_count`: Total number of transactions started. `cloudsql_database`. `database`, `project_id`, `database_id`.
22. `cloudsql.googleapis.com/database/sqlserver/transactions/batch_request_count`: Total number of Transact-SQL command batches received. `cloudsql_database`. `database`, `project_id`, `database_id`.
23. `cloudsql.googleapis.com/database/sqlserver/transactions/sql_compilation_count`: Total number of SQL compilations. `cloudsql_database`. `database`, `project_id`, `database_id`.
24. `cloudsql.googleapis.com/database/sqlserver/transactions/sql_recompilation_count`: Total number of SQL recompilations. `cloudsql_database`. `database`, `project_id`, `database_id`.
25. `cloudsql.googleapis.com/database/sqlserver/connections/processes_blocked`: Current number of blocked processes. `cloudsql_database`. `database`, `project_id`, `database_id`.
26. `cloudsql.googleapis.com/database/sqlserver/transactions/lock_wait_time`: Total time lock requests were waiting for locks. `cloudsql_database`. `locked_resource`, `database`, `project_id`, `database_id`.
27. `cloudsql.googleapis.com/database/sqlserver/transactions/lock_wait_count`: Total number of lock requests that required the caller to wait. `cloudsql_database`. `locked_resource`, `database`, `project_id`, `database_id`.
28. `cloudsql.googleapis.com/database/network/connections`: Number of connections to databases on the instance. `cloudsql_database`. `database`, `project_id`, `database_id`.
29. `cloudsql.googleapis.com/database/sqlserver/connections/login_attempt_count`: Total number of login attempts since the last server restart. `cloudsql_database`. `database`, `project_id`, `database_id`.
30. `cloudsql.googleapis.com/database/sqlserver/connections/logout_count`: Total number of logout operations since the last server restart. `cloudsql_database`. `database`, `project_id`, `database_id`.
31. `cloudsql.googleapis.com/database/sqlserver/connections/connection_reset_count`: Total number of logins started from the connection pool since the last server restart. `cloudsql_database`. `database`, `project_id`, `database_id`.
32. `cloudsql.googleapis.com/database/sqlserver/transactions/full_scan_count`: Total number of unrestricted full scans (base-table or full-index). `cloudsql_database`. `database`, `project_id`, `database_id`.
toolsets:
cloud-sql-mssql-cloud-monitoring-tools:
- get_system_metrics


@@ -11,6 +11,7 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
sources:
cloudsql-mssql-source:
kind: cloud-sql-mssql
@@ -289,6 +290,6 @@ tools:
default: "detailed"
toolsets:
cloud-sql-mssql-database-tools:
cloud_sql_mssql_database_tools:
- execute_sql
- list_tables


@@ -0,0 +1,111 @@
# Copyright 2025 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
sources:
cloud-monitoring-source:
kind: cloud-monitoring
tools:
get_system_metrics:
kind: cloud-monitoring-query-prometheus
source: cloud-monitoring-source
description: |
Fetches system-level Cloud Monitoring data (time-series metrics) for a MySQL instance using a PromQL query. Take the projectId and instanceId for which the metrics need to be fetched from the user.
To use this tool, you must provide the Google Cloud `projectId` and a PromQL `query`.
Generate the PromQL `query` for MySQL system metrics. Use the provided metrics and rules to construct queries; get labels like `instance_id` from user intent.
Defaults:
1. Interval: Use a default interval of `5m` for `_over_time` aggregation functions unless a different window is specified by the user.
PromQL Query Examples:
1. Basic Time Series: `avg_over_time({"__name__"="cloudsql.googleapis.com/database/cpu/utilization","monitored_resource"="cloudsql_database","project_id"="my-projectId","database_id"="my-projectId:my-instanceId"}[5m])`
2. Top K: `topk(30, avg_over_time({"__name__"="cloudsql.googleapis.com/database/cpu/utilization","monitored_resource"="cloudsql_database","project_id"="my-projectId","database_id"="my-projectId:my-instanceId"}[5m]))`
3. Mean: `avg(avg_over_time({"__name__"="cloudsql.googleapis.com/database/cpu/utilization","monitored_resource"="cloudsql_database","project_id"="my-projectId","database_id"="my-projectId:my-instanceId"}[5m]))`
4. Minimum: `min(min_over_time({"__name__"="cloudsql.googleapis.com/database/cpu/utilization","monitored_resource"="cloudsql_database","project_id"="my-projectId","database_id"="my-projectId:my-instanceId"}[5m]))`
5. Maximum: `max(max_over_time({"__name__"="cloudsql.googleapis.com/database/cpu/utilization","monitored_resource"="cloudsql_database","project_id"="my-projectId","database_id"="my-projectId:my-instanceId"}[5m]))`
6. Sum: `sum(avg_over_time({"__name__"="cloudsql.googleapis.com/database/cpu/utilization","monitored_resource"="cloudsql_database","project_id"="my-projectId","database_id"="my-projectId:my-instanceId"}[5m]))`
7. Count streams: `count(avg_over_time({"__name__"="cloudsql.googleapis.com/database/cpu/utilization","monitored_resource"="cloudsql_database","project_id"="my-projectId","database_id"="my-projectId:my-instanceId"}[5m]))`
8. Percentile with groupby on database_id: `quantile by ("database_id")(0.99,avg_over_time({"__name__"="cloudsql.googleapis.com/database/cpu/utilization","monitored_resource"="cloudsql_database","project_id"="my-projectId","database_id"="my-projectId:my-instanceId"}[5m]))`
Available Metrics List (each entry: metric name. description. monitored resource. labels.). Note that database_id is actually the instance id, in the format `project_id:instance_id`.
1. `cloudsql.googleapis.com/database/cpu/utilization`: Current CPU utilization as a percentage of reserved CPU. `cloudsql_database`. `database`, `project_id`, `database_id`.
2. `cloudsql.googleapis.com/database/network/connections`: Number of connections to the database instance. `cloudsql_database`. `database`, `project_id`, `database_id`.
3. `cloudsql.googleapis.com/database/network/received_bytes_count`: Delta count of bytes received through the network. `cloudsql_database`. `database`, `project_id`, `database_id`.
4. `cloudsql.googleapis.com/database/network/sent_bytes_count`: Delta count of bytes sent through the network. `cloudsql_database`. `destination`, `database`, `project_id`, `database_id`.
5. `cloudsql.googleapis.com/database/memory/components`: Memory usage for components like usage, cache, and free memory. `cloudsql_database`. `component`, `database`, `project_id`, `database_id`.
6. `cloudsql.googleapis.com/database/disk/bytes_used_by_data_type`: Data utilization in bytes. `cloudsql_database`. `data_type`, `database`, `project_id`, `database_id`.
7. `cloudsql.googleapis.com/database/disk/read_ops_count`: Delta count of data disk read IO operations. `cloudsql_database`. `database`, `project_id`, `database_id`.
8. `cloudsql.googleapis.com/database/disk/write_ops_count`: Delta count of data disk write IO operations. `cloudsql_database`. `database`, `project_id`, `database_id`.
9. `cloudsql.googleapis.com/database/mysql/queries`: Delta count of statements executed by the server. `cloudsql_database`. `database`, `project_id`, `database_id`.
10. `cloudsql.googleapis.com/database/mysql/questions`: Delta count of statements sent by the client. `cloudsql_database`. `database`, `project_id`, `database_id`.
11. `cloudsql.googleapis.com/database/mysql/received_bytes_count`: Delta count of bytes received by MySQL process. `cloudsql_database`. `database`, `project_id`, `database_id`.
12. `cloudsql.googleapis.com/database/mysql/sent_bytes_count`: Delta count of bytes sent by MySQL process. `cloudsql_database`. `database`, `project_id`, `database_id`.
13. `cloudsql.googleapis.com/database/mysql/innodb_buffer_pool_pages_dirty`: Number of unflushed pages in the InnoDB buffer pool. `cloudsql_database`. `database`, `project_id`, `database_id`.
14. `cloudsql.googleapis.com/database/mysql/innodb_buffer_pool_pages_free`: Number of unused pages in the InnoDB buffer pool. `cloudsql_database`. `database`, `project_id`, `database_id`.
15. `cloudsql.googleapis.com/database/mysql/innodb_buffer_pool_pages_total`: Total number of pages in the InnoDB buffer pool. `cloudsql_database`. `database`, `project_id`, `database_id`.
16. `cloudsql.googleapis.com/database/mysql/innodb_data_fsyncs`: Delta count of InnoDB fsync() calls. `cloudsql_database`. `database`, `project_id`, `database_id`.
17. `cloudsql.googleapis.com/database/mysql/innodb_os_log_fsyncs`: Delta count of InnoDB fsync() calls to the log file. `cloudsql_database`. `database`, `project_id`, `database_id`.
18. `cloudsql.googleapis.com/database/mysql/innodb_pages_read`: Delta count of InnoDB pages read. `cloudsql_database`. `database`, `project_id`, `database_id`.
19. `cloudsql.googleapis.com/database/mysql/innodb_pages_written`: Delta count of InnoDB pages written. `cloudsql_database`. `database`, `project_id`, `database_id`.
20. `cloudsql.googleapis.com/database/mysql/open_tables`: The number of tables that are currently open. `cloudsql_database`. `database`, `project_id`, `database_id`.
21. `cloudsql.googleapis.com/database/mysql/opened_table_count`: The number of tables opened since the last sample. `cloudsql_database`. `database`, `project_id`, `database_id`.
22. `cloudsql.googleapis.com/database/mysql/open_table_definitions`: The number of table definitions currently cached. `cloudsql_database`. `database`, `project_id`, `database_id`.
23. `cloudsql.googleapis.com/database/mysql/opened_table_definitions_count`: The number of table definitions cached since the last sample. `cloudsql_database`. `database`, `project_id`, `database_id`.
24. `cloudsql.googleapis.com/database/mysql/innodb/dictionary_memory`: Memory allocated for the InnoDB dictionary cache. `cloudsql_database`. `database`, `project_id`, `database_id`.
get_query_metrics:
kind: cloud-monitoring-query-prometheus
source: cloud-monitoring-source
description: |
Fetches query-level Cloud Monitoring data (time-series metrics) for queries running in a MySQL instance using a PromQL query. Take the projectId and instanceId for which the metrics need to be fetched from the user.
To use this tool, you must provide the Google Cloud `projectId` and a PromQL `query`.
Generate the PromQL `query` for MySQL query metrics. Use the provided metrics and rules to construct queries; get labels like `instance_id` and `query_hash` from user intent. If a query_hash is provided, use the perquery metrics. Query hash and query id are the same.
Defaults:
1. Interval: Use a default interval of `5m` for `_over_time` aggregation functions unless a different window is specified by the user.
PromQL Query Examples:
1. Basic Time Series: `avg_over_time({"__name__"="dbinsights.googleapis.com/aggregate/execution_time","monitored_resource"="cloudsql_instance_database","project_id"="my-projectId","resource_id"="my-projectId:my-instanceId"}[5m])`
2. Top K: `topk(30, avg_over_time({"__name__"="dbinsights.googleapis.com/aggregate/execution_time","monitored_resource"="cloudsql_instance_database","project_id"="my-projectId","resource_id"="my-projectId:my-instanceId"}[5m]))`
3. Mean: `avg(avg_over_time({"__name__"="dbinsights.googleapis.com/aggregate/execution_time","monitored_resource"="cloudsql_instance_database","project_id"="my-projectId","resource_id"="my-projectId:my-instanceId"}[5m]))`
4. Minimum: `min(min_over_time({"__name__"="dbinsights.googleapis.com/aggregate/execution_time","monitored_resource"="cloudsql_instance_database","project_id"="my-projectId","resource_id"="my-projectId:my-instanceId"}[5m]))`
5. Maximum: `max(max_over_time({"__name__"="dbinsights.googleapis.com/aggregate/execution_time","monitored_resource"="cloudsql_instance_database","project_id"="my-projectId","resource_id"="my-projectId:my-instanceId"}[5m]))`
6. Sum: `sum(avg_over_time({"__name__"="dbinsights.googleapis.com/aggregate/execution_time","monitored_resource"="cloudsql_instance_database","project_id"="my-projectId","resource_id"="my-projectId:my-instanceId"}[5m]))`
7. Count streams: `count(avg_over_time({"__name__"="dbinsights.googleapis.com/aggregate/execution_time","monitored_resource"="cloudsql_instance_database","project_id"="my-projectId","resource_id"="my-projectId:my-instanceId"}[5m]))`
8. Percentile with groupby on resource_id, database: `quantile by ("resource_id","database")(0.99,avg_over_time({"__name__"="dbinsights.googleapis.com/aggregate/execution_time","monitored_resource"="cloudsql_instance_database","project_id"="my-projectId","resource_id"="my-projectId:my-instanceId"}[5m]))`
Available Metrics List (each entry: metric name. description. monitored resource. labels.). The resource_id label has the format `project_id:instance_id` and is effectively just the instance id. The aggregate metrics are aggregated values over all query stats; use them when no query id is provided. For perquery metrics, do not fetch the querystring unless the user specifically asks for it; aggregate on query hash to avoid fetching the querystring. Do not use latency metrics for anything.
1. `dbinsights.googleapis.com/aggregate/latencies`: Cumulative query latency distribution per user and database. `cloudsql_instance_database`. `user`, `client_addr`, `database`, `project_id`, `resource_id`.
2. `dbinsights.googleapis.com/aggregate/execution_time`: Cumulative query execution time per user and database. `cloudsql_instance_database`. `user`, `client_addr`, `database`, `project_id`, `resource_id`.
3. `dbinsights.googleapis.com/aggregate/execution_count`: Total number of query executions per user and database. `cloudsql_instance_database`. `user`, `client_addr`, `database`, `project_id`, `resource_id`.
4. `dbinsights.googleapis.com/aggregate/lock_time`: Cumulative lock wait time per user and database. `cloudsql_instance_database`. `user`, `client_addr`, `lock_type`, `database`, `project_id`, `resource_id`.
5. `dbinsights.googleapis.com/aggregate/io_time`: Cumulative IO wait time per user and database. `cloudsql_instance_database`. `user`, `client_addr`, `database`, `project_id`, `resource_id`.
6. `dbinsights.googleapis.com/aggregate/row_count`: Total number of rows affected during query execution. `cloudsql_instance_database`. `user`, `client_addr`, `row_status`, `database`, `project_id`, `resource_id`.
7. `dbinsights.googleapis.com/perquery/latencies`: Cumulative query latency distribution per user, database, and query. `cloudsql_instance_database`. `querystring`, `user`, `client_addr`, `query_hash`, `database`, `project_id`, `resource_id`.
8. `dbinsights.googleapis.com/perquery/execution_time`: Cumulative query execution time per user, database, and query. `cloudsql_instance_database`. `querystring`, `user`, `client_addr`, `query_hash`, `database`, `project_id`, `resource_id`.
9. `dbinsights.googleapis.com/perquery/execution_count`: Total number of query executions per user, database, and query. `cloudsql_instance_database`. `querystring`, `user`, `client_addr`, `query_hash`, `database`, `project_id`, `resource_id`.
10. `dbinsights.googleapis.com/perquery/lock_time`: Cumulative lock wait time per user, database, and query. `cloudsql_instance_database`. `querystring`, `user`, `client_addr`, `lock_type`, `query_hash`, `database`, `project_id`, `resource_id`.
11. `dbinsights.googleapis.com/perquery/io_time`: Cumulative io wait time per user, database, and query. `cloudsql_instance_database`. `querystring`, `user`, `client_addr`, `query_hash`, `database`, `project_id`, `resource_id`.
12. `dbinsights.googleapis.com/perquery/row_count`: Total number of rows affected during query execution. `cloudsql_instance_database`. `querystring`, `user`, `client_addr`, `query_hash`, `row_status`, `database`, `project_id`, `resource_id`.
13. `dbinsights.googleapis.com/pertag/latencies`: Cumulative query latency distribution per user, database, and tag. `cloudsql_instance_database`. `user`, `client_addr`, `action`, `application`, `controller`, `db_driver`, `framework`, `route`, `tag_hash`, `database`, `project_id`, `resource_id`.
14. `dbinsights.googleapis.com/pertag/execution_time`: Cumulative query execution time per user, database, and tag. `cloudsql_instance_database`. `user`, `client_addr`, `action`, `application`, `controller`, `db_driver`, `framework`, `route`, `tag_hash`, `database`, `project_id`, `resource_id`.
15. `dbinsights.googleapis.com/pertag/execution_count`: Total number of query executions per user, database, and tag. `cloudsql_instance_database`. `user`, `client_addr`, `action`, `application`, `controller`, `db_driver`, `framework`, `route`, `tag_hash`, `database`, `project_id`, `resource_id`.
16. `dbinsights.googleapis.com/pertag/lock_time`: Cumulative lock wait time per user, database and tag. `cloudsql_instance_database`. `user`, `client_addr`, `action`, `application`, `controller`, `db_driver`, `framework`, `route`, `lock_type`, `tag_hash`, `database`, `project_id`, `resource_id`.
17. `dbinsights.googleapis.com/pertag/io_time`: Cumulative IO wait time per user, database and tag. `cloudsql_instance_database`. `user`, `client_addr`, `action`, `application`, `controller`, `db_driver`, `framework`, `route`, `tag_hash`, `database`, `project_id`, `resource_id`.
18. `dbinsights.googleapis.com/pertag/row_count`: Total number of rows affected during query execution. `cloudsql_instance_database`. `user`, `client_addr`, `action`, `application`, `controller`, `db_driver`, `framework`, `route`, `tag_hash`, `row_status`, `database`, `project_id`, `resource_id`.
toolsets:
cloud-sql-mysql-cloud-monitoring-tools:
- get_system_metrics
- get_query_metrics


@@ -33,6 +33,6 @@ tools:
description: "Lists detailed schema information (object type, columns, constraints, indexes, triggers, comment) as JSON for user-created tables (ordinary or partitioned). Filters by a comma-separated list of names. If names are omitted, lists all tables in user schemas."
toolsets:
cloud-sql-mysql-database-tools:
cloud_sql_mysql_database_tools:
- execute_sql
- list_tables


@@ -0,0 +1,113 @@
# Copyright 2025 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
sources:
cloud-monitoring-source:
kind: cloud-monitoring
tools:
get_system_metrics:
kind: cloud-monitoring-query-prometheus
source: cloud-monitoring-source
description: |
Fetches system-level Cloud Monitoring data (time-series metrics) for a Postgres instance using a PromQL query. Take the projectId and instanceId for which the metrics need to be fetched from the user.
To use this tool, you must provide the Google Cloud `projectId` and a PromQL `query`.
Generate the PromQL `query` for Postgres system metrics. Use the provided metrics and rules to construct queries; get labels like `instance_id` from user intent.
Defaults:
1. Interval: Use a default interval of `5m` for `_over_time` aggregation functions unless a different window is specified by the user.
PromQL Query Examples:
1. Basic Time Series: `avg_over_time({"__name__"="cloudsql.googleapis.com/database/cpu/utilization","monitored_resource"="cloudsql_database","project_id"="my-projectId","database_id"="my-projectId:my-instanceId"}[5m])`
2. Top K: `topk(30, avg_over_time({"__name__"="cloudsql.googleapis.com/database/cpu/utilization","monitored_resource"="cloudsql_database","project_id"="my-projectId","database_id"="my-projectId:my-instanceId"}[5m]))`
3. Mean: `avg(avg_over_time({"__name__"="cloudsql.googleapis.com/database/cpu/utilization","monitored_resource"="cloudsql_database","project_id"="my-projectId","database_id"="my-projectId:my-instanceId"}[5m]))`
4. Minimum: `min(min_over_time({"__name__"="cloudsql.googleapis.com/database/cpu/utilization","monitored_resource"="cloudsql_database","project_id"="my-projectId","database_id"="my-projectId:my-instanceId"}[5m]))`
5. Maximum: `max(max_over_time({"__name__"="cloudsql.googleapis.com/database/cpu/utilization","monitored_resource"="cloudsql_database","project_id"="my-projectId","database_id"="my-projectId:my-instanceId"}[5m]))`
6. Sum: `sum(avg_over_time({"__name__"="cloudsql.googleapis.com/database/cpu/utilization","monitored_resource"="cloudsql_database","project_id"="my-projectId","database_id"="my-projectId:my-instanceId"}[5m]))`
7. Count streams: `count(avg_over_time({"__name__"="cloudsql.googleapis.com/database/cpu/utilization","monitored_resource"="cloudsql_database","project_id"="my-projectId","database_id"="my-projectId:my-instanceId"}[5m]))`
8. Percentile with groupby on database_id: `quantile by ("database_id")(0.99,avg_over_time({"__name__"="cloudsql.googleapis.com/database/cpu/utilization","monitored_resource"="cloudsql_database","project_id"="my-projectId","database_id"="my-projectId:my-instanceId"}[5m]))`
Available Metrics List (each entry: metric name. description. monitored resource. labels.). Note that database_id is actually the instance id, in the format `project_id:instance_id`.
1. `cloudsql.googleapis.com/database/postgresql/new_connection_count`: Count of new connections added to the postgres instance. `cloudsql_database`. `database`, `project_id`, `database_id`.
2. `cloudsql.googleapis.com/database/postgresql/backends_in_wait`: Number of backends in wait in postgres instance. `cloudsql_database`. `backend_type`, `wait_event`, `wait_event_type`, `project_id`, `database_id`.
3. `cloudsql.googleapis.com/database/postgresql/transaction_count`: Delta count of number of transactions. `cloudsql_database`. `database`, `transaction_type`, `project_id`, `database_id`.
4. `cloudsql.googleapis.com/database/memory/components`: Memory stats components in percentage as usage, cache and free memory for the database. `cloudsql_database`. `component`, `project_id`, `database_id`.
5. `cloudsql.googleapis.com/database/postgresql/external_sync/max_replica_byte_lag`: Replication lag in bytes for Postgres External Server (ES) replicas. Aggregated across all DBs on the replica. `cloudsql_database`. `project_id`, `database_id`.
6. `cloudsql.googleapis.com/database/cpu/utilization`: Current CPU utilization represented as a percentage of the reserved CPU that is currently in use. Values are typically numbers between 0.0 and 1.0 (but might exceed 1.0). Charts display the values as a percentage between 0% and 100% (or more). `cloudsql_database`. `project_id`, `database_id`.
7. `cloudsql.googleapis.com/database/disk/bytes_used_by_data_type`: Data utilization in bytes. `cloudsql_database`. `data_type`, `project_id`, `database_id`.
8. `cloudsql.googleapis.com/database/disk/read_ops_count`: Delta count of data disk read IO operations. `cloudsql_database`. `project_id`, `database_id`.
9. `cloudsql.googleapis.com/database/disk/write_ops_count`: Delta count of data disk write IO operations. `cloudsql_database`. `project_id`, `database_id`.
10. `cloudsql.googleapis.com/database/postgresql/num_backends_by_state`: Number of connections to the Cloud SQL PostgreSQL instance, grouped by its state. `cloudsql_database`. `database`, `state`, `project_id`, `database_id`.
11. `cloudsql.googleapis.com/database/postgresql/num_backends`: Number of connections to the Cloud SQL PostgreSQL instance. `cloudsql_database`. `database`, `project_id`, `database_id`.
12. `cloudsql.googleapis.com/database/network/received_bytes_count`: Delta count of bytes received through the network. `cloudsql_database`. `project_id`, `database_id`.
13. `cloudsql.googleapis.com/database/network/sent_bytes_count`: Delta count of bytes sent through the network. `cloudsql_database`. `destination`, `project_id`, `database_id`.
14. `cloudsql.googleapis.com/database/postgresql/deadlock_count`: Number of deadlocks detected for this database. `cloudsql_database`. `database`, `project_id`, `database_id`.
15. `cloudsql.googleapis.com/database/postgresql/blocks_read_count`: Number of disk blocks read by this database. The source field distinguishes actual reads from disk versus reads from buffer cache. `cloudsql_database`. `database`, `source`, `project_id`, `database_id`.
16. `cloudsql.googleapis.com/database/postgresql/tuples_processed_count`: Number of tuples(rows) processed for a given database for operations like insert, update or delete. `cloudsql_database`. `operation_type`, `database`, `project_id`, `database_id`.
17. `cloudsql.googleapis.com/database/postgresql/tuple_size`: Number of tuples (rows) in the database. `cloudsql_database`. `database`, `tuple_state`, `project_id`, `database_id`.
18. `cloudsql.googleapis.com/database/postgresql/vacuum/oldest_transaction_age`: Age of the oldest transaction yet to be vacuumed in the Cloud SQL PostgreSQL instance, measured in number of transactions that have happened since the oldest transaction. `cloudsql_database`. `oldest_transaction_type`, `project_id`, `database_id`.
19. `cloudsql.googleapis.com/database/replication/log_archive_success_count`: Number of successful attempts for archiving replication log files. `cloudsql_database`. `project_id`, `database_id`.
20. `cloudsql.googleapis.com/database/replication/log_archive_failure_count`: Number of failed attempts for archiving replication log files. `cloudsql_database`. `project_id`, `database_id`.
21. `cloudsql.googleapis.com/database/postgresql/transaction_id_utilization`: Current utilization represented as a percentage of transaction IDs consumed by the Cloud SQL PostgreSQL instance. Values are typically numbers between 0.0 and 1.0. Charts display the values as a percentage between 0% and 100% . `cloudsql_database`. `project_id`, `database_id`.
22. `cloudsql.googleapis.com/database/postgresql/num_backends_by_application`: Number of connections to the Cloud SQL PostgreSQL instance, grouped by applications. `cloudsql_database`. `application`, `project_id`, `database_id`.
23. `cloudsql.googleapis.com/database/postgresql/tuples_fetched_count`: Total number of rows fetched as a result of queries per database in the PostgreSQL instance. `cloudsql_database`. `database`, `project_id`, `database_id`.
24. `cloudsql.googleapis.com/database/postgresql/tuples_returned_count`: Total number of rows scanned while processing the queries per database in the PostgreSQL instance. `cloudsql_database`. `database`, `project_id`, `database_id`.
25. `cloudsql.googleapis.com/database/postgresql/temp_bytes_written_count`: Total amount of data (in bytes) written to temporary files by the queries per database. `cloudsql_database`. `database`, `project_id`, `database_id`.
26. `cloudsql.googleapis.com/database/postgresql/temp_files_written_count`: Total number of temporary files used for writing data while performing algorithms such as join and sort. `cloudsql_database`. `database`, `project_id`, `database_id`.
get_query_metrics:
kind: cloud-monitoring-query-prometheus
source: cloud-monitoring-source
description: |
Fetches query-level Cloud Monitoring data (time-series metrics) for queries running in a Postgres instance using a PromQL query. Take the projectId and instanceId for which the metrics need to be fetched from the user.
To use this tool, you must provide the Google Cloud `projectId` and a PromQL `query`.
Generate the PromQL `query` for Postgres query metrics. Use the provided metrics and rules to construct queries; get labels like `instance_id` and `query_hash` from user intent. If a query_hash is provided, use the perquery metrics. Query hash and query id are the same.
Defaults:
1. Interval: Use a default interval of `5m` for `_over_time` aggregation functions unless a different window is specified by the user.
PromQL Query Examples:
1. Basic Time Series: `avg_over_time({"__name__"="cloudsql.googleapis.com/database/postgresql/insights/aggregate/execution_time","monitored_resource"="cloudsql_instance_database","project_id"="my-projectId","resource_id"="my-projectId:my-instanceId"}[5m])`
2. Top K: `topk(30, avg_over_time({"__name__"="cloudsql.googleapis.com/database/postgresql/insights/aggregate/execution_time","monitored_resource"="cloudsql_instance_database","project_id"="my-projectId","resource_id"="my-projectId:my-instanceId"}[5m]))`
3. Mean: `avg(avg_over_time({"__name__"="cloudsql.googleapis.com/database/postgresql/insights/aggregate/execution_time","monitored_resource"="cloudsql_instance_database","project_id"="my-projectId","resource_id"="my-projectId:my-instanceId"}[5m]))`
4. Minimum: `min(min_over_time({"__name__"="cloudsql.googleapis.com/database/postgresql/insights/aggregate/execution_time","monitored_resource"="cloudsql_instance_database","project_id"="my-projectId","resource_id"="my-projectId:my-instanceId"}[5m]))`
5. Maximum: `max(max_over_time({"__name__"="cloudsql.googleapis.com/database/postgresql/insights/aggregate/execution_time","monitored_resource"="cloudsql_instance_database","project_id"="my-projectId","resource_id"="my-projectId:my-instanceId"}[5m]))`
6. Sum: `sum(avg_over_time({"__name__"="cloudsql.googleapis.com/database/postgresql/insights/aggregate/execution_time","monitored_resource"="cloudsql_instance_database","project_id"="my-projectId","resource_id"="my-projectId:my-instanceId"}[5m]))`
7. Count streams: `count(avg_over_time({"__name__"="cloudsql.googleapis.com/database/postgresql/insights/aggregate/execution_time","monitored_resource"="cloudsql_instance_database","project_id"="my-projectId","resource_id"="my-projectId:my-instanceId"}[5m]))`
8. Percentile with groupby on resource_id, database: `quantile by ("resource_id","database")(0.99,avg_over_time({"__name__"="cloudsql.googleapis.com/database/postgresql/insights/aggregate/execution_time","monitored_resource"="cloudsql_instance_database","project_id"="my-projectId","resource_id"="my-projectId:my-instanceId"}[5m]))`
Available Metrics List (each entry: metric name. description. monitored resource. labels.). The resource_id label has the format `project_id:instance_id` and is effectively just the instance id. The aggregate metrics are aggregated values over all query stats; use them when no query id is provided. For perquery metrics, do not fetch the querystring unless the user specifically asks for it; aggregate on query hash to avoid fetching the querystring. Do not use latency metrics for anything.
1. `cloudsql.googleapis.com/database/postgresql/insights/aggregate/latencies`: Aggregated query latency distribution. `cloudsql_instance_database`. `user`, `client_addr`, `project_id`, `resource_id`.
2. `cloudsql.googleapis.com/database/postgresql/insights/aggregate/execution_time`: Accumulated aggregated query execution time since the last sample. Resource type: `cloudsql_instance_database`. Labels: `user`, `client_addr`, `project_id`, `resource_id`.
3. `cloudsql.googleapis.com/database/postgresql/insights/aggregate/io_time`: Accumulated aggregated IO time since the last sample. Resource type: `cloudsql_instance_database`. Labels: `user`, `client_addr`, `io_type`, `project_id`, `resource_id`.
4. `cloudsql.googleapis.com/database/postgresql/insights/aggregate/lock_time`: Accumulated aggregated lock wait time since the last sample. Resource type: `cloudsql_instance_database`. Labels: `user`, `client_addr`, `lock_type`, `project_id`, `resource_id`.
5. `cloudsql.googleapis.com/database/postgresql/insights/aggregate/row_count`: Aggregated number of retrieved or affected rows since the last sample. Resource type: `cloudsql_instance_database`. Labels: `user`, `client_addr`, `project_id`, `resource_id`.
6. `cloudsql.googleapis.com/database/postgresql/insights/aggregate/shared_blk_access_count`: Aggregated shared blocks accessed by statement execution. Resource type: `cloudsql_instance_database`. Labels: `user`, `client_addr`, `access_type`, `project_id`, `resource_id`.
7. `cloudsql.googleapis.com/database/postgresql/insights/perquery/latencies`: Per query latency distribution. Resource type: `cloudsql_instance_database`. Labels: `user`, `client_addr`, `querystring`, `query_hash`, `project_id`, `resource_id`.
8. `cloudsql.googleapis.com/database/postgresql/insights/perquery/execution_time`: Accumulated execution times per user per database per query. Resource type: `cloudsql_instance_database`. Labels: `user`, `client_addr`, `querystring`, `query_hash`, `project_id`, `resource_id`.
9. `cloudsql.googleapis.com/database/postgresql/insights/perquery/io_time`: Accumulated IO time since the last sample per query. Resource type: `cloudsql_instance_database`. Labels: `user`, `client_addr`, `io_type`, `querystring`, `query_hash`, `project_id`, `resource_id`.
10. `cloudsql.googleapis.com/database/postgresql/insights/perquery/lock_time`: Accumulated lock wait time since the last sample per query. Resource type: `cloudsql_instance_database`. Labels: `user`, `client_addr`, `lock_type`, `querystring`, `query_hash`, `project_id`, `resource_id`.
11. `cloudsql.googleapis.com/database/postgresql/insights/perquery/row_count`: The number of retrieved or affected rows since the last sample per query. Resource type: `cloudsql_instance_database`. Labels: `user`, `client_addr`, `querystring`, `query_hash`, `project_id`, `resource_id`.
12. `cloudsql.googleapis.com/database/postgresql/insights/perquery/shared_blk_access_count`: Shared blocks accessed by statement execution per query. Resource type: `cloudsql_instance_database`. Labels: `user`, `client_addr`, `access_type`, `querystring`, `query_hash`, `project_id`, `resource_id`.
13. `cloudsql.googleapis.com/database/postgresql/insights/pertag/latencies`: Query latency distribution. Resource type: `cloudsql_instance_database`. Labels: `user`, `client_addr`, `action`, `application`, `controller`, `db_driver`, `framework`, `route`, `tag_hash`, `project_id`, `resource_id`.
14. `cloudsql.googleapis.com/database/postgresql/insights/pertag/execution_time`: Accumulated execution times since the last sample. Resource type: `cloudsql_instance_database`. Labels: `user`, `client_addr`, `action`, `application`, `controller`, `db_driver`, `framework`, `route`, `tag_hash`, `project_id`, `resource_id`.
15. `cloudsql.googleapis.com/database/postgresql/insights/pertag/io_time`: Accumulated IO time since the last sample per tag. Resource type: `cloudsql_instance_database`. Labels: `user`, `client_addr`, `action`, `application`, `controller`, `db_driver`, `framework`, `route`, `io_type`, `tag_hash`, `project_id`, `resource_id`.
16. `cloudsql.googleapis.com/database/postgresql/insights/pertag/lock_time`: Accumulated lock wait time since the last sample per tag. Resource type: `cloudsql_instance_database`. Labels: `user`, `client_addr`, `action`, `application`, `controller`, `db_driver`, `framework`, `route`, `lock_type`, `tag_hash`, `project_id`, `resource_id`.
17. `cloudsql.googleapis.com/database/postgresql/insights/pertag/shared_blk_access_count`: Shared blocks accessed by statement execution per tag. Resource type: `cloudsql_instance_database`. Labels: `user`, `client_addr`, `action`, `application`, `controller`, `db_driver`, `framework`, `route`, `access_type`, `tag_hash`, `project_id`, `resource_id`.
18. `cloudsql.googleapis.com/database/postgresql/insights/pertag/row_count`: The number of retrieved or affected rows since the last sample per tag. Resource type: `cloudsql_instance_database`. Labels: `user`, `client_addr`, `action`, `application`, `controller`, `db_driver`, `framework`, `route`, `tag_hash`, `project_id`, `resource_id`.
toolsets:
cloud-sql-postgres-cloud-monitoring-tools:
- get_system_metrics
- get_query_metrics
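
For illustration (not part of this change): a minimal Go sketch of reading one of the metrics above directly with the Cloud Monitoring API, assuming the Cloud Monitoring Go client; the project ID and the ten-minute window are placeholder assumptions.

// A minimal sketch, assuming the Cloud Monitoring Go client
// (cloud.google.com/go/monitoring/apiv3/v2); the project ID and the
// ten-minute window are placeholder assumptions.
package main

import (
	"context"
	"fmt"
	"time"

	monitoring "cloud.google.com/go/monitoring/apiv3/v2"
	"cloud.google.com/go/monitoring/apiv3/v2/monitoringpb"
	"google.golang.org/api/iterator"
	"google.golang.org/protobuf/types/known/timestamppb"
)

func main() {
	ctx := context.Background()
	client, err := monitoring.NewMetricClient(ctx)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	now := time.Now()
	it := client.ListTimeSeries(ctx, &monitoringpb.ListTimeSeriesRequest{
		Name:   "projects/my-project", // placeholder project
		Filter: `metric.type="cloudsql.googleapis.com/database/postgresql/insights/aggregate/execution_time"`,
		Interval: &monitoringpb.TimeInterval{
			StartTime: timestamppb.New(now.Add(-10 * time.Minute)),
			EndTime:   timestamppb.New(now),
		},
	})
	for {
		ts, err := it.Next()
		if err == iterator.Done {
			break
		}
		if err != nil {
			panic(err)
		}
		// Each series carries the labels listed above, e.g. user and client_addr.
		fmt.Println(ts.GetMetric().GetLabels()["user"], len(ts.GetPoints()))
	}
}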

View File

@@ -35,6 +35,6 @@ tools:
description: "Lists detailed schema information (object type, columns, constraints, indexes, triggers, owner, comment) as JSON for user-created tables (ordinary or partitioned). Filters by a comma-separated list of names. If names are omitted, lists all tables in user schemas."
toolsets:
cloud-sql-postgres-database-tools:
cloud_sql_postgres_database_tools:
- execute_sql
- list_tables

View File

@@ -1,24 +1,38 @@
# Copyright 2025 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
sources:
dataplex-source:
kind: "dataplex"
project: ${DATAPLEX_PROJECT}
tools:
dataplex_search_entries:
search_entries:
kind: dataplex-search-entries
source: dataplex-source
description: Use this tool to search for entries in Dataplex Catalog based on the provided search query.
dataplex_lookup_entry:
lookup_entry:
kind: dataplex-lookup-entry
source: dataplex-source
description: Use this tool to retrieve a specific entry from Dataplex Catalog.
dataplex_search_aspect_types:
search_aspect_types:
kind: dataplex-search-aspect-types
source: dataplex-source
description: Use this tool to find aspect types relevant to the query.
toolsets:
dataplex-tools:
- dataplex_search_entries
- dataplex_lookup_entry
- dataplex_search_aspect_types
dataplex_tools:
- search_entries
- lookup_entry
- search_aspect_types

View File

@@ -11,6 +11,7 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
sources:
firestore-source:
kind: firestore
@@ -18,11 +19,11 @@ sources:
database: ${FIRESTORE_DATABASE:}
tools:
firestore-get-documents:
get_documents:
kind: firestore-get-documents
source: firestore-source
description: Gets multiple documents from Firestore by their paths
firestore-add-documents:
add_documents:
kind: firestore-add-documents
source: firestore-source
description: |
@@ -34,7 +35,7 @@ tools:
5. Handle timestamps properly: Use RFC3339 format for timestamp strings
6. Base64 encode binary data: Binary data must be base64 encoded in the bytesValue field
7. Consider security rules: Ensure your Firestore security rules allow document creation in the target collection
firestore-update-document:
update_document:
kind: firestore-update-document
source: firestore-source
description: |
@@ -46,36 +47,36 @@ tools:
5. Use returnData sparingly: Only set to true when you need to verify the exact data after the update
6. Handle timestamps properly: Use RFC3339 format for timestamp strings
7. Consider security rules: Ensure your Firestore security rules allow document updates
firestore-list-collections:
list_collections:
kind: firestore-list-collections
source: firestore-source
description: List Firestore collections for a given parent path
firestore-delete-documents:
delete_documents:
kind: firestore-delete-documents
source: firestore-source
description: Delete multiple documents from Firestore
firestore-query-collection:
query_collection:
kind: firestore-query-collection
source: firestore-source
description: |
Retrieves one or more Firestore documents from a collection in a database in the current project by a collection with a full document path.
Use this if you know the exact path of a collection and the filtering clause you would like for the document.
firestore-get-rules:
get_rules:
kind: firestore-get-rules
source: firestore-source
description: Retrieves the active Firestore security rules for the current project
firestore-validate-rules:
validate_rules:
kind: firestore-validate-rules
source: firestore-source
description: Checks the provided Firestore Rules source for syntax and validation errors. Provide the source code to validate.
toolsets:
firestore-database-tools:
- firestore-get-documents
- firestore-add-documents
- firestore-update-document
- firestore-list-collections
- firestore-delete-documents
- firestore-query-collection
- firestore-get-rules
- firestore-validate-rules
firestore_database_tools:
- get_documents
- add_documents
- update_document
- list_collections
- delete_documents
- query_collection
- get_rules
- validate_rules

View File

@@ -696,7 +696,7 @@ tools:
and the resulting tiles will be added in order.
toolsets:
looker-tools:
looker_tools:
- get_models
- get_explores
- get_dimensions

View File

@@ -1,3 +1,17 @@
# Copyright 2025 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
sources:
mssql-source:
kind: mssql
@@ -273,6 +287,6 @@ tools:
default: "detailed"
toolsets:
mssql-database-tools:
mssql_database_tools:
- execute_sql
- list_tables
- list_tables

View File

@@ -37,6 +37,6 @@ tools:
description: "Lists detailed schema information (object type, columns, constraints, indexes, triggers, comment) as JSON for user-created tables (ordinary or partitioned). Filters by a comma-separated list of names. If names are omitted, lists all tables in user schemas."
toolsets:
mysql-database-tools:
mysql_database_tools:
- execute_sql
- list_tables
- list_tables

View File

@@ -32,6 +32,6 @@ tools:
description: Use this tool to get the database schema.
toolsets:
neo4j-database-tools:
neo4j_database_tools:
- execute_cypher
- get_schema
- get_schema

View File

@@ -1,3 +1,17 @@
# Copyright 2025 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
sources:
oceanbase-source:
kind: oceanbase
@@ -164,6 +178,6 @@ tools:
type: string
description: "Optional: A comma-separated list of table names. If empty, details for all tables in user-accessible schemas will be listed."
toolsets:
oceanbase-database-tools:
oceanbase_database_tools:
- execute_sql
- list_tables
- list_tables

View File

@@ -34,6 +34,6 @@ tools:
description: "Lists detailed schema information (object type, columns, constraints, indexes, triggers, owner, comment) as JSON for user-created tables (ordinary or partitioned). Filters by a comma-separated list of names. If names are omitted, lists all tables in user schemas."
toolsets:
postgres-database-tools:
postgres_database_tools:
- execute_sql
- list_tables

View File

@@ -1,3 +1,17 @@
# Copyright 2025 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
sources:
spanner-source:
kind: "spanner"
@@ -216,7 +230,7 @@ tools:
default: "detailed"
toolsets:
spanner-postgres-database-tools:
spanner_postgres_database_tools:
- execute_sql
- execute_sql_dql
- list_tables

View File

@@ -1,218 +1,41 @@
# Copyright 2025 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
sources:
spanner-source:
kind: spanner
project: ${SPANNER_PROJECT}
instance: ${SPANNER_INSTANCE}
database: ${SPANNER_DATABASE}
dialect: ${SPANNER_DIALECT:googlesql}
tools:
execute_sql:
kind: spanner-execute-sql
source: spanner-source
description: Use this tool to execute DML SQL
description: Use this tool to execute DML SQL. Please use the ${SPANNER_DIALECT:googlesql} interface for Spanner.
execute_sql_dql:
kind: spanner-execute-sql
source: spanner-source
description: Use this tool to execute DQL SQL
description: Use this tool to execute DQL SQL. Please use the ${SPANNER_DIALECT:googlesql} interface for Spanner.
readOnly: true
list_tables:
kind: spanner-sql
kind: spanner-list-tables
source: spanner-source
readOnly: true
description: "Lists detailed schema information (object type, columns, constraints, indexes) as JSON for user-created tables (ordinary or partitioned). Filters by a comma-separated list of names. If names are omitted, lists all tables in user schemas."
statement: |
WITH FilterTableNames AS (
SELECT DISTINCT TRIM(name) AS TABLE_NAME
FROM UNNEST(IF(@table_names = '' OR @table_names IS NULL, ['%'], SPLIT(@table_names, ','))) AS name
),
-- 1. Table Information
table_info_cte AS (
SELECT
T.TABLE_SCHEMA,
T.TABLE_NAME,
T.TABLE_TYPE,
T.PARENT_TABLE_NAME, -- For interleaved tables
T.ON_DELETE_ACTION -- For interleaved tables
FROM INFORMATION_SCHEMA.TABLES AS T
WHERE
T.TABLE_SCHEMA = ''
AND T.TABLE_TYPE = 'BASE TABLE'
AND (EXISTS (SELECT 1 FROM FilterTableNames WHERE FilterTableNames.TABLE_NAME = '%') OR T.TABLE_NAME IN (SELECT TABLE_NAME FROM FilterTableNames))
),
-- 2. Column Information (with JSON string for each column)
columns_info_cte AS (
SELECT
C.TABLE_SCHEMA,
C.TABLE_NAME,
ARRAY_AGG(
CONCAT(
'{',
'"column_name":"', IFNULL(C.COLUMN_NAME, ''), '",',
'"data_type":"', IFNULL(C.SPANNER_TYPE, ''), '",',
'"ordinal_position":', CAST(C.ORDINAL_POSITION AS STRING), ',',
'"is_not_nullable":', IF(C.IS_NULLABLE = 'NO', 'true', 'false'), ',',
'"column_default":', IF(C.COLUMN_DEFAULT IS NULL, 'null', CONCAT('"', C.COLUMN_DEFAULT, '"')),
'}'
) ORDER BY C.ORDINAL_POSITION
) AS columns_json_array_elements
FROM INFORMATION_SCHEMA.COLUMNS AS C
WHERE EXISTS (SELECT 1 FROM table_info_cte TI WHERE C.TABLE_SCHEMA = TI.TABLE_SCHEMA AND C.TABLE_NAME = TI.TABLE_NAME)
GROUP BY C.TABLE_SCHEMA, C.TABLE_NAME
),
-- Helper CTE for aggregating constraint columns
constraint_columns_agg_cte AS (
SELECT
CONSTRAINT_CATALOG,
CONSTRAINT_SCHEMA,
CONSTRAINT_NAME,
ARRAY_AGG(CONCAT('"', COLUMN_NAME, '"') ORDER BY ORDINAL_POSITION) AS column_names_json_list
FROM INFORMATION_SCHEMA.KEY_COLUMN_USAGE
GROUP BY CONSTRAINT_CATALOG, CONSTRAINT_SCHEMA, CONSTRAINT_NAME
),
-- 3. Constraint Information (with JSON string for each constraint)
constraints_info_cte AS (
SELECT
TC.TABLE_SCHEMA,
TC.TABLE_NAME,
ARRAY_AGG(
CONCAT(
'{',
'"constraint_name":"', IFNULL(TC.CONSTRAINT_NAME, ''), '",',
'"constraint_type":"', IFNULL(TC.CONSTRAINT_TYPE, ''), '",',
'"constraint_definition":',
CASE TC.CONSTRAINT_TYPE
WHEN 'CHECK' THEN IF(CC.CHECK_CLAUSE IS NULL, 'null', CONCAT('"', CC.CHECK_CLAUSE, '"'))
WHEN 'PRIMARY KEY' THEN CONCAT('"', 'PRIMARY KEY (', ARRAY_TO_STRING(COALESCE(KeyCols.column_names_json_list, []), ', '), ')', '"')
WHEN 'UNIQUE' THEN CONCAT('"', 'UNIQUE (', ARRAY_TO_STRING(COALESCE(KeyCols.column_names_json_list, []), ', '), ')', '"')
WHEN 'FOREIGN KEY' THEN CONCAT('"', 'FOREIGN KEY (', ARRAY_TO_STRING(COALESCE(KeyCols.column_names_json_list, []), ', '), ') REFERENCES ',
IFNULL(RefKeyTable.TABLE_NAME, ''),
' (', ARRAY_TO_STRING(COALESCE(RefKeyCols.column_names_json_list, []), ', '), ')', '"')
ELSE 'null'
END, ',',
'"constraint_columns":[', ARRAY_TO_STRING(COALESCE(KeyCols.column_names_json_list, []), ','), '],',
'"foreign_key_referenced_table":', IF(RefKeyTable.TABLE_NAME IS NULL, 'null', CONCAT('"', RefKeyTable.TABLE_NAME, '"')), ',',
'"foreign_key_referenced_columns":[', ARRAY_TO_STRING(COALESCE(RefKeyCols.column_names_json_list, []), ','), ']',
'}'
) ORDER BY TC.CONSTRAINT_NAME
) AS constraints_json_array_elements
FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS AS TC
LEFT JOIN INFORMATION_SCHEMA.CHECK_CONSTRAINTS AS CC
ON TC.CONSTRAINT_CATALOG = CC.CONSTRAINT_CATALOG AND TC.CONSTRAINT_SCHEMA = CC.CONSTRAINT_SCHEMA AND TC.CONSTRAINT_NAME = CC.CONSTRAINT_NAME
LEFT JOIN INFORMATION_SCHEMA.REFERENTIAL_CONSTRAINTS AS RC
ON TC.CONSTRAINT_CATALOG = RC.CONSTRAINT_CATALOG AND TC.CONSTRAINT_SCHEMA = RC.CONSTRAINT_SCHEMA AND TC.CONSTRAINT_NAME = RC.CONSTRAINT_NAME
LEFT JOIN INFORMATION_SCHEMA.TABLE_CONSTRAINTS AS RefConstraint
ON RC.UNIQUE_CONSTRAINT_CATALOG = RefConstraint.CONSTRAINT_CATALOG AND RC.UNIQUE_CONSTRAINT_SCHEMA = RefConstraint.CONSTRAINT_SCHEMA AND RC.UNIQUE_CONSTRAINT_NAME = RefConstraint.CONSTRAINT_NAME
LEFT JOIN INFORMATION_SCHEMA.TABLES AS RefKeyTable
ON RefConstraint.TABLE_CATALOG = RefKeyTable.TABLE_CATALOG AND RefConstraint.TABLE_SCHEMA = RefKeyTable.TABLE_SCHEMA AND RefConstraint.TABLE_NAME = RefKeyTable.TABLE_NAME
LEFT JOIN constraint_columns_agg_cte AS KeyCols
ON TC.CONSTRAINT_CATALOG = KeyCols.CONSTRAINT_CATALOG AND TC.CONSTRAINT_SCHEMA = KeyCols.CONSTRAINT_SCHEMA AND TC.CONSTRAINT_NAME = KeyCols.CONSTRAINT_NAME
LEFT JOIN constraint_columns_agg_cte AS RefKeyCols
ON RC.UNIQUE_CONSTRAINT_CATALOG = RefKeyCols.CONSTRAINT_CATALOG AND RC.UNIQUE_CONSTRAINT_SCHEMA = RefKeyCols.CONSTRAINT_SCHEMA AND RC.UNIQUE_CONSTRAINT_NAME = RefKeyCols.CONSTRAINT_NAME AND TC.CONSTRAINT_TYPE = 'FOREIGN KEY'
WHERE EXISTS (SELECT 1 FROM table_info_cte TI WHERE TC.TABLE_SCHEMA = TI.TABLE_SCHEMA AND TC.TABLE_NAME = TI.TABLE_NAME)
GROUP BY TC.TABLE_SCHEMA, TC.TABLE_NAME
),
-- Helper CTE for aggregating index key columns (as JSON strings)
index_key_columns_agg_cte AS (
SELECT
TABLE_CATALOG,
TABLE_SCHEMA,
TABLE_NAME,
INDEX_NAME,
ARRAY_AGG(
CONCAT(
'{"column_name":"', IFNULL(COLUMN_NAME, ''), '",',
'"ordering":"', IFNULL(COLUMN_ORDERING, ''), '"}'
) ORDER BY ORDINAL_POSITION
) AS key_column_json_details
FROM INFORMATION_SCHEMA.INDEX_COLUMNS
WHERE ORDINAL_POSITION IS NOT NULL -- Key columns
GROUP BY TABLE_CATALOG, TABLE_SCHEMA, TABLE_NAME, INDEX_NAME
),
-- Helper CTE for aggregating index storing columns (as JSON strings)
index_storing_columns_agg_cte AS (
SELECT
TABLE_CATALOG,
TABLE_SCHEMA,
TABLE_NAME,
INDEX_NAME,
ARRAY_AGG(CONCAT('"', COLUMN_NAME, '"') ORDER BY COLUMN_NAME) AS storing_column_json_names
FROM INFORMATION_SCHEMA.INDEX_COLUMNS
WHERE ORDINAL_POSITION IS NULL -- Storing columns
GROUP BY TABLE_CATALOG, TABLE_SCHEMA, TABLE_NAME, INDEX_NAME
),
-- 4. Index Information (with JSON string for each index)
indexes_info_cte AS (
SELECT
I.TABLE_SCHEMA,
I.TABLE_NAME,
ARRAY_AGG(
CONCAT(
'{',
'"index_name":"', IFNULL(I.INDEX_NAME, ''), '",',
'"index_type":"', IFNULL(I.INDEX_TYPE, ''), '",',
'"is_unique":', IF(I.IS_UNIQUE, 'true', 'false'), ',',
'"is_null_filtered":', IF(I.IS_NULL_FILTERED, 'true', 'false'), ',',
'"interleaved_in_table":', IF(I.PARENT_TABLE_NAME IS NULL, 'null', CONCAT('"', I.PARENT_TABLE_NAME, '"')), ',',
'"index_key_columns":[', ARRAY_TO_STRING(COALESCE(KeyIndexCols.key_column_json_details, []), ','), '],',
'"storing_columns":[', ARRAY_TO_STRING(COALESCE(StoringIndexCols.storing_column_json_names, []), ','), ']',
'}'
) ORDER BY I.INDEX_NAME
) AS indexes_json_array_elements
FROM INFORMATION_SCHEMA.INDEXES AS I
LEFT JOIN index_key_columns_agg_cte AS KeyIndexCols
ON I.TABLE_CATALOG = KeyIndexCols.TABLE_CATALOG AND I.TABLE_SCHEMA = KeyIndexCols.TABLE_SCHEMA AND I.TABLE_NAME = KeyIndexCols.TABLE_NAME AND I.INDEX_NAME = KeyIndexCols.INDEX_NAME
LEFT JOIN index_storing_columns_agg_cte AS StoringIndexCols
ON I.TABLE_CATALOG = StoringIndexCols.TABLE_CATALOG AND I.TABLE_SCHEMA = StoringIndexCols.TABLE_SCHEMA AND I.TABLE_NAME = StoringIndexCols.TABLE_NAME AND I.INDEX_NAME = StoringIndexCols.INDEX_NAME AND I.INDEX_TYPE = 'INDEX'
WHERE EXISTS (SELECT 1 FROM table_info_cte TI WHERE I.TABLE_SCHEMA = TI.TABLE_SCHEMA AND I.TABLE_NAME = TI.TABLE_NAME)
GROUP BY I.TABLE_SCHEMA, I.TABLE_NAME
)
-- Final SELECT to build the JSON output
SELECT
TI.TABLE_SCHEMA AS schema_name,
TI.TABLE_NAME AS object_name,
CASE
WHEN @output_format = 'simple' THEN
-- IF format is 'simple', return basic JSON
CONCAT('{"name":"', IFNULL(REPLACE(TI.TABLE_NAME, '"', '\"'), ''), '"}')
ELSE
CONCAT(
'{',
'"schema_name":"', IFNULL(TI.TABLE_SCHEMA, ''), '",',
'"object_name":"', IFNULL(TI.TABLE_NAME, ''), '",',
'"object_type":"', IFNULL(TI.TABLE_TYPE, ''), '",',
'"columns":[', ARRAY_TO_STRING(COALESCE(CI.columns_json_array_elements, []), ','), '],',
'"constraints":[', ARRAY_TO_STRING(COALESCE(CONSI.constraints_json_array_elements, []), ','), '],',
'"indexes":[', ARRAY_TO_STRING(COALESCE(II.indexes_json_array_elements, []), ','), '],',
'}'
)
END AS object_details
FROM table_info_cte AS TI
LEFT JOIN columns_info_cte AS CI
ON TI.TABLE_SCHEMA = CI.TABLE_SCHEMA AND TI.TABLE_NAME = CI.TABLE_NAME
LEFT JOIN constraints_info_cte AS CONSI
ON TI.TABLE_SCHEMA = CONSI.TABLE_SCHEMA AND TI.TABLE_NAME = CONSI.TABLE_NAME
LEFT JOIN indexes_info_cte AS II
ON TI.TABLE_SCHEMA = II.TABLE_SCHEMA AND TI.TABLE_NAME = II.TABLE_NAME
ORDER BY TI.TABLE_SCHEMA, TI.TABLE_NAME;
parameters:
- name: table_names
type: string
description: "Optional: A comma-separated list of table names. If empty, details for all tables in user-accessible schemas will be listed."
- name: output_format
type: string
description: "Optional: Use 'simple' to return table names only or use 'detailed' to return the full information schema."
default: "detailed"
toolsets:
spanner-database-tools:

View File

@@ -0,0 +1,112 @@
# Copyright 2025 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
sources:
sqlite-source:
kind: sqlite
database: ${SQLITE_DATABASE}
tools:
execute_sql:
kind: sqlite-execute-sql
source: sqlite-source
description: Use this tool to execute SQL.
list_tables:
kind: sqlite-sql
source: sqlite-source
description: "Lists SQLite tables. Use 'output_format' ('simple'/'detailed') and 'table_names' (comma-separated or empty) to control output."
statement: |
WITH table_columns AS (
SELECT
m.name AS table_name,
json_group_array(json_object('column_name', ti.name, 'data_type', ti.type, 'ordinal_position', ti.cid, 'is_not_nullable', ti."notnull" = 1, 'column_default', ti.dflt_value, 'is_primary_key', ti.pk > 0)) AS details
FROM sqlite_master AS m, pragma_table_info(m.name) AS ti
WHERE m.type = 'table' AND m.name NOT LIKE 'sqlite_%'
GROUP BY m.name
),
table_constraints AS (
SELECT
table_name,
json_group_array(json(details)) AS details
FROM (
SELECT m.name AS table_name, json_object('constraint_name', 'PRIMARY', 'constraint_type', 'PRIMARY KEY', 'constraint_columns', json_group_array(T.name)) AS details
FROM sqlite_master AS m, pragma_table_info(m.name) AS T
WHERE m.type = 'table' AND T.pk > 0
GROUP BY m.name
HAVING COUNT(T.name) > 0
UNION ALL
SELECT m.name, json_object('constraint_name', 'fk_' || m.name || '_' || F.id, 'constraint_type', 'FOREIGN KEY', 'constraint_columns', json_group_array(F."from"), 'foreign_key_referenced_table', F."table", 'foreign_key_referenced_columns', json_group_array(F."to"))
FROM sqlite_master AS m, pragma_foreign_key_list(m.name) AS F
WHERE m.type = 'table'
GROUP BY m.name, F.id
UNION ALL
SELECT m.name, json_object('constraint_name', I.name, 'constraint_type', 'UNIQUE', 'constraint_columns', (SELECT json_group_array(C.name) FROM pragma_index_info(I.name) AS C ORDER BY C.seqno))
FROM sqlite_master AS m, pragma_index_list(m.name) AS I
WHERE m.type = 'table' AND I."unique" = 1 AND I.origin != 'pk'
)
GROUP BY table_name
),
table_indexes AS (
SELECT
m.name AS table_name,
json_group_array(json_object('index_name', il.name, 'is_unique', il."unique" = 1, 'is_primary', il.origin = 'pk', 'index_columns', (SELECT json_group_array(ii.name) FROM pragma_index_info(il.name) AS ii))) AS details
FROM sqlite_master AS m, pragma_index_list(m.name) AS il
WHERE m.type = 'table' AND m.name NOT LIKE 'sqlite_%'
GROUP BY m.name
),
table_triggers AS (
SELECT
tbl_name AS table_name,
json_group_array(json_object('trigger_name', name, 'trigger_definition', sql)) AS details
FROM sqlite_master
WHERE type = 'trigger'
GROUP BY tbl_name
)
SELECT
CASE
WHEN '{{.output_format}}' = 'simple' THEN json_object('name', m.name)
ELSE json_object(
'schema_name', 'main',
'object_name', m.name,
'object_type', m.type,
'columns', json(COALESCE(tc.details, '[]')),
'constraints', json(COALESCE(tcons.details, '[]')),
'indexes', json(COALESCE(ti.details, '[]')),
'triggers', json(COALESCE(tt.details, '[]'))
)
END AS object_details
FROM
sqlite_master AS m
LEFT JOIN table_columns tc ON m.name = tc.table_name
LEFT JOIN table_constraints tcons ON m.name = tcons.table_name
LEFT JOIN table_indexes ti ON m.name = ti.table_name
LEFT JOIN table_triggers tt ON m.name = tt.table_name
WHERE
m.type = 'table'
AND m.name NOT LIKE 'sqlite_%'
{{if .table_names}}
AND instr(',' || '{{.table_names}}' || ',', ',' || m.name || ',') > 0
{{end}};
templateParameters:
- name: output_format
type: string
description: "Optional: Use 'simple' to return table names only or use 'detailed' to return the full information schema."
default: "detailed"
- name: table_names
type: string
description: "Optional: A comma-separated list of table names. If empty, details for all tables in user-accessible schemas will be listed."
default: ""
toolsets:
sqlite_database_tools:
- execute_sql
- list_tables
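
The `{{.output_format}}` and `{{if .table_names}}` placeholders above are Go text/template actions. A minimal sketch of how such a templated statement expands, assuming the server renders templateParameters before binding ordinary SQL parameters (the statement here is trimmed to the template-relevant part):

// A minimal sketch, assuming templateParameters are expanded with
// text/template before the SQL reaches SQLite; the statement is trimmed.
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

func main() {
	const stmt = `SELECT name FROM sqlite_master
WHERE type = 'table' AND name NOT LIKE 'sqlite_%'
{{if .table_names}}AND instr(',' || '{{.table_names}}' || ',', ',' || name || ',') > 0{{end}};`

	tmpl := template.Must(template.New("list_tables").Parse(stmt))
	var buf bytes.Buffer
	if err := tmpl.Execute(&buf, map[string]string{"table_names": "users,orders"}); err != nil {
		panic(err)
	}
	fmt.Println(buf.String()) // the {{if}} branch expands with the literal table list
}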

View File

@@ -355,14 +355,14 @@ func httpHandler(s *Server, w http.ResponseWriter, r *http.Request) {
}
// check if client has `Mcp-Session-Id` header
// if `Mcp-Session-Id` header is set, we are using v2025-03-26 since
// previous version doesn't use this header.
// `Mcp-Session-Id` is only set for v2025-03-26 in Toolbox
headerSessionId := r.Header.Get("Mcp-Session-Id")
if headerSessionId != "" {
protocolVersion = v20250326.PROTOCOL_VERSION
}
// check if client has `MCP-Protocol-Version` header
// Only supported for v2025-06-18+.
headerProtocolVersion := r.Header.Get("MCP-Protocol-Version")
if headerProtocolVersion != "" {
if !mcp.VerifyProtocolVersion(headerProtocolVersion) {

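The header checks above amount to a small precedence ladder for picking the MCP protocol version. A sketch of that order, where the fallback constant and the helper name are assumptions rather than the server's actual identifiers:

// Sketch of the negotiation order implied above; the fallback version
// and this function's name are assumptions, not Toolbox's actual code.
func negotiatedProtocolVersion(r *http.Request) string {
	v := "2024-11-05" // assumed fallback when neither header is present
	if r.Header.Get("Mcp-Session-Id") != "" {
		v = "2025-03-26" // Toolbox only sets session IDs for v2025-03-26
	}
	if pv := r.Header.Get("MCP-Protocol-Version"); pv != "" {
		v = pv // v2025-06-18+ clients declare their version explicitly
	}
	return v
}
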
View File

@@ -24,11 +24,32 @@ import (
"go.opentelemetry.io/otel/trace"
"golang.org/x/oauth2"
"golang.org/x/oauth2/google"
"google.golang.org/api/option"
alloydbrestapi "google.golang.org/api/alloydb/v1"
)
const SourceKind string = "alloydb-admin"
type userAgentRoundTripper struct {
userAgent string
next http.RoundTripper
}
func (rt *userAgentRoundTripper) RoundTrip(req *http.Request) (*http.Response, error) {
newReq := *req
newReq.Header = make(http.Header)
for k, v := range req.Header {
newReq.Header[k] = v
}
ua := newReq.Header.Get("User-Agent")
if ua == "" {
newReq.Header.Set("User-Agent", rt.userAgent)
} else {
newReq.Header.Set("User-Agent", rt.userAgent+" "+ua)
}
return rt.next.RoundTrip(&newReq)
}
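
A quick way to see what the wrapper does is a round trip against httptest. This test sketch is not part of the change, and the user-agent strings are placeholders:

// Illustrative test sketch: the wrapper prepends its user agent to any
// caller-supplied value; both user-agent strings are placeholders.
package alloydbadmin

import (
	"io"
	"net/http"
	"net/http/httptest"
	"testing"
)

func TestUserAgentRoundTripperPrepends(t *testing.T) {
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		io.WriteString(w, r.Header.Get("User-Agent"))
	}))
	defer srv.Close()

	client := &http.Client{Transport: &userAgentRoundTripper{
		userAgent: "genai-toolbox/1.0",
		next:      http.DefaultTransport,
	}}
	req, err := http.NewRequest(http.MethodGet, srv.URL, nil)
	if err != nil {
		t.Fatal(err)
	}
	req.Header.Set("User-Agent", "my-agent/2.0")
	resp, err := client.Do(req)
	if err != nil {
		t.Fatal(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if got, want := string(body), "genai-toolbox/1.0 my-agent/2.0"; got != want {
		t.Errorf("User-Agent = %q, want %q", got, want)
	}
}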
// validate interface
var _ sources.SourceConfig = Config{}
@@ -64,22 +85,36 @@ func (r Config) Initialize(ctx context.Context, tracer trace.Tracer) (sources.So
var client *http.Client
if r.UseClientOAuth {
client = nil
client = &http.Client{
Transport: &userAgentRoundTripper{
userAgent: ua,
next: http.DefaultTransport,
},
}
} else {
// Use Application Default Credentials
creds, err := google.FindDefaultCredentials(ctx, alloydbrestapi.CloudPlatformScope)
if err != nil {
return nil, fmt.Errorf("failed to find default credentials: %w", err)
}
client = oauth2.NewClient(ctx, creds.TokenSource)
baseClient := oauth2.NewClient(ctx, creds.TokenSource)
baseClient.Transport = &userAgentRoundTripper{
userAgent: ua,
next: baseClient.Transport,
}
client = baseClient
}
service, err := alloydbrestapi.NewService(ctx, option.WithHTTPClient(client))
if err != nil {
return nil, fmt.Errorf("error creating new alloydb service: %w", err)
}
s := &Source{
Name: r.Name,
Kind: SourceKind,
BaseURL: "https://alloydb.googleapis.com",
Client: client,
UserAgent: ua,
Service: service,
UseClientOAuth: r.UseClientOAuth,
}
@@ -92,8 +127,7 @@ type Source struct {
Name string `yaml:"name"`
Kind string `yaml:"kind"`
BaseURL string
Client *http.Client
UserAgent string
Service *alloydbrestapi.Service
UseClientOAuth bool
}
@@ -101,15 +135,17 @@ func (s *Source) SourceKind() string {
return SourceKind
}
func (s *Source) GetClient(ctx context.Context, accessToken string) (*http.Client, error) {
func (s *Source) GetService(ctx context.Context, accessToken string) (*alloydbrestapi.Service, error) {
if s.UseClientOAuth {
if accessToken == "" {
return nil, fmt.Errorf("client-side OAuth is enabled but no access token was provided")
}
token := &oauth2.Token{AccessToken: accessToken}
return oauth2.NewClient(ctx, oauth2.StaticTokenSource(token)), nil
client := oauth2.NewClient(ctx, oauth2.StaticTokenSource(token))
service, err := alloydbrestapi.NewService(ctx, option.WithHTTPClient(client))
if err != nil {
return nil, fmt.Errorf("error creating new alloydb service: %w", err)
}
return service, nil
}
return s.Client, nil
return s.Service, nil
}
func (s *Source) UseClientAuthorization() bool {

View File

@@ -24,11 +24,32 @@ import (
"go.opentelemetry.io/otel/trace"
"golang.org/x/oauth2"
"golang.org/x/oauth2/google"
"google.golang.org/api/option"
sqladmin "google.golang.org/api/sqladmin/v1"
)
const SourceKind string = "cloud-sql-admin"
type userAgentRoundTripper struct {
userAgent string
next http.RoundTripper
}
func (rt *userAgentRoundTripper) RoundTrip(req *http.Request) (*http.Response, error) {
newReq := *req
newReq.Header = make(http.Header)
for k, v := range req.Header {
newReq.Header[k] = v
}
ua := newReq.Header.Get("User-Agent")
if ua == "" {
newReq.Header.Set("User-Agent", rt.userAgent)
} else {
newReq.Header.Set("User-Agent", rt.userAgent+" "+ua)
}
return rt.next.RoundTrip(&newReq)
}
// validate interface
var _ sources.SourceConfig = Config{}
@@ -65,22 +86,36 @@ func (r Config) Initialize(ctx context.Context, tracer trace.Tracer) (sources.So
var client *http.Client
if r.UseClientOAuth {
client = nil
client = &http.Client{
Transport: &userAgentRoundTripper{
userAgent: ua,
next: http.DefaultTransport,
},
}
} else {
// Use Application Default Credentials
creds, err := google.FindDefaultCredentials(ctx, sqladmin.SqlserviceAdminScope)
if err != nil {
return nil, fmt.Errorf("failed to find default credentials: %w", err)
}
client = oauth2.NewClient(ctx, creds.TokenSource)
baseClient := oauth2.NewClient(ctx, creds.TokenSource)
baseClient.Transport = &userAgentRoundTripper{
userAgent: ua,
next: baseClient.Transport,
}
client = baseClient
}
service, err := sqladmin.NewService(ctx, option.WithHTTPClient(client))
if err != nil {
return nil, fmt.Errorf("error creating new sqladmin service: %w", err)
}
s := &Source{
Name: r.Name,
Kind: SourceKind,
BaseURL: "https://sqladmin.googleapis.com",
Client: client,
UserAgent: ua,
Service: service,
UseClientOAuth: r.UseClientOAuth,
}
return s, nil
@@ -92,8 +127,7 @@ type Source struct {
Name string `yaml:"name"`
Kind string `yaml:"kind"`
BaseURL string
Client *http.Client
UserAgent string
Service *sqladmin.Service
UseClientOAuth bool
}
@@ -101,15 +135,17 @@ func (s *Source) SourceKind() string {
return SourceKind
}
func (s *Source) GetClient(ctx context.Context, accessToken string) (*http.Client, error) {
func (s *Source) GetService(ctx context.Context, accessToken string) (*sqladmin.Service, error) {
if s.UseClientOAuth {
if accessToken == "" {
return nil, fmt.Errorf("client-side OAuth is enabled but no access token was provided")
}
token := &oauth2.Token{AccessToken: accessToken}
return oauth2.NewClient(ctx, oauth2.StaticTokenSource(token)), nil
client := oauth2.NewClient(ctx, oauth2.StaticTokenSource(token))
service, err := sqladmin.NewService(ctx, option.WithHTTPClient(client))
if err != nil {
return nil, fmt.Errorf("error creating new sqladmin service: %w", err)
}
return service, nil
}
return s.Client, nil
return s.Service, nil
}
func (s *Source) UseClientAuthorization() bool {

View File

@@ -0,0 +1,165 @@
// Copyright 2025 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package alloydbgetcluster
import (
"context"
"fmt"
yaml "github.com/goccy/go-yaml"
"github.com/googleapis/genai-toolbox/internal/sources"
alloydbadmin "github.com/googleapis/genai-toolbox/internal/sources/alloydbadmin"
"github.com/googleapis/genai-toolbox/internal/tools"
)
const kind string = "alloydb-get-cluster"
func init() {
if !tools.Register(kind, newConfig) {
panic(fmt.Sprintf("tool kind %q already registered", kind))
}
}
func newConfig(ctx context.Context, name string, decoder *yaml.Decoder) (tools.ToolConfig, error) {
actual := Config{Name: name}
if err := decoder.DecodeContext(ctx, &actual); err != nil {
return nil, err
}
return actual, nil
}
// Configuration for the get-cluster tool.
type Config struct {
Name string `yaml:"name" validate:"required"`
Kind string `yaml:"kind" validate:"required"`
Source string `yaml:"source" validate:"required"`
Description string `yaml:"description" validate:"required"`
AuthRequired []string `yaml:"authRequired"`
BaseURL string `yaml:"baseURL"`
}
// validate interface
var _ tools.ToolConfig = Config{}
// ToolConfigKind returns the kind of the tool.
func (cfg Config) ToolConfigKind() string {
return kind
}
// Initialize initializes the tool from the configuration.
func (cfg Config) Initialize(srcs map[string]sources.Source) (tools.Tool, error) {
rawS, ok := srcs[cfg.Source]
if !ok {
return nil, fmt.Errorf("source %q not found", cfg.Source)
}
s, ok := rawS.(*alloydbadmin.Source)
if !ok {
return nil, fmt.Errorf("invalid source for %q tool: source kind must be `%s`", kind, alloydbadmin.SourceKind)
}
allParameters := tools.Parameters{
tools.NewStringParameter("projectId", "The GCP project ID."),
tools.NewStringParameter("locationId", "The location of the cluster (e.g., 'us-central1')."),
tools.NewStringParameter("clusterId", "The ID of the cluster."),
}
paramManifest := allParameters.Manifest()
inputSchema := allParameters.McpManifest()
inputSchema.Required = []string{"projectId", "locationId", "clusterId"}
mcpManifest := tools.McpManifest{
Name: cfg.Name,
Description: cfg.Description,
InputSchema: inputSchema,
}
return Tool{
Name: cfg.Name,
Kind: kind,
Source: s,
AllParams: allParameters,
manifest: tools.Manifest{Description: cfg.Description, Parameters: paramManifest},
mcpManifest: mcpManifest,
}, nil
}
// Tool represents the get-cluster tool.
type Tool struct {
Name string `yaml:"name"`
Kind string `yaml:"kind"`
Source *alloydbadmin.Source
AllParams tools.Parameters
manifest tools.Manifest
mcpManifest tools.McpManifest
}
// Invoke executes the tool's logic.
func (t Tool) Invoke(ctx context.Context, params tools.ParamValues, accessToken tools.AccessToken) (any, error) {
paramsMap := params.AsMap()
projectId, ok := paramsMap["projectId"].(string)
if !ok {
return nil, fmt.Errorf("invalid or missing 'projectId' parameter; expected a string")
}
locationId, ok := paramsMap["locationId"].(string)
if !ok {
return nil, fmt.Errorf("invalid 'locationId' parameter; expected a string")
}
clusterId, ok := paramsMap["clusterId"].(string)
if !ok {
return nil, fmt.Errorf("invalid 'clusterId' parameter; expected a string")
}
service, err := t.Source.GetService(ctx, string(accessToken))
if err != nil {
return nil, err
}
urlString := fmt.Sprintf("projects/%s/locations/%s/clusters/%s", projectId, locationId, clusterId)
resp, err := service.Projects.Locations.Clusters.Get(urlString).Do()
if err != nil {
return nil, fmt.Errorf("error getting AlloyDB cluster: %w", err)
}
return resp, nil
}
// ParseParams parses the parameters for the tool.
func (t Tool) ParseParams(data map[string]any, claims map[string]map[string]any) (tools.ParamValues, error) {
return tools.ParseParams(t.AllParams, data, claims)
}
// Manifest returns the tool's manifest.
func (t Tool) Manifest() tools.Manifest {
return t.manifest
}
// McpManifest returns the tool's MCP manifest.
func (t Tool) McpManifest() tools.McpManifest {
return t.mcpManifest
}
// Authorized checks if the tool is authorized.
func (t Tool) Authorized(verifiedAuthServices []string) bool {
return true
}
func (t Tool) RequiresClientAuthorization() bool {
return t.Source.UseClientAuthorization()
}
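
Taken together, ParseParams and Invoke give the call pattern for this tool. A hypothetical driver sketch, where the parameter values and the helper name are placeholders and the empty access token assumes Application Default Credentials rather than client OAuth:

// Hypothetical driver sketch, not the server's actual wiring.
func runGetCluster(ctx context.Context, t Tool) (any, error) {
	vals, err := t.ParseParams(map[string]any{
		"projectId":  "my-project", // placeholder values
		"locationId": "us-central1",
		"clusterId":  "my-cluster",
	}, nil)
	if err != nil {
		return nil, err
	}
	// Empty token: assumes the source uses Application Default Credentials.
	return t.Invoke(ctx, vals, "")
}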

View File

@@ -0,0 +1,94 @@
// Copyright 2025 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package alloydbgetcluster_test
import (
"testing"
yaml "github.com/goccy/go-yaml"
"github.com/google/go-cmp/cmp"
"github.com/googleapis/genai-toolbox/internal/server"
"github.com/googleapis/genai-toolbox/internal/testutils"
alloydbgetcluster "github.com/googleapis/genai-toolbox/internal/tools/alloydb/alloydbgetcluster"
)
func TestParseFromYaml(t *testing.T) {
ctx, err := testutils.ContextWithNewLogger()
if err != nil {
t.Fatalf("unexpected error: %s", err)
}
tcs := []struct {
desc string
in string
want server.ToolConfigs
}{
{
desc: "basic example",
in: `
tools:
get-my-cluster:
kind: alloydb-get-cluster
source: my-alloydb-admin-source
description: some description
`,
want: server.ToolConfigs{
"get-my-cluster": alloydbgetcluster.Config{
Name: "get-my-cluster",
Kind: "alloydb-get-cluster",
Source: "my-alloydb-admin-source",
Description: "some description",
AuthRequired: []string{},
},
},
},
{
desc: "with auth required",
in: `
tools:
get-my-cluster-auth:
kind: alloydb-get-cluster
source: my-alloydb-admin-source
description: some description
authRequired:
- my-google-auth-service
- other-auth-service
`,
want: server.ToolConfigs{
"get-my-cluster-auth": alloydbgetcluster.Config{
Name: "get-my-cluster-auth",
Kind: "alloydb-get-cluster",
Source: "my-alloydb-admin-source",
Description: "some description",
AuthRequired: []string{"my-google-auth-service", "other-auth-service"},
},
},
},
}
for _, tc := range tcs {
t.Run(tc.desc, func(t *testing.T) {
got := struct {
Tools server.ToolConfigs `yaml:"tools"`
}{}
// Parse contents
err := yaml.UnmarshalContext(ctx, testutils.FormatYaml(tc.in), &got)
if err != nil {
t.Fatalf("unable to unmarshal: %s", err)
}
if diff := cmp.Diff(tc.want, got.Tools); diff != "" {
t.Fatalf("incorrect parse: diff %v", diff)
}
})
}
}

View File

@@ -0,0 +1,161 @@
// Copyright 2025 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package alloydblistclusters
import (
"context"
"fmt"
yaml "github.com/goccy/go-yaml"
"github.com/googleapis/genai-toolbox/internal/sources"
alloydbadmin "github.com/googleapis/genai-toolbox/internal/sources/alloydbadmin"
"github.com/googleapis/genai-toolbox/internal/tools"
)
const kind string = "alloydb-list-clusters"
func init() {
if !tools.Register(kind, newConfig) {
panic(fmt.Sprintf("tool kind %q already registered", kind))
}
}
func newConfig(ctx context.Context, name string, decoder *yaml.Decoder) (tools.ToolConfig, error) {
actual := Config{Name: name}
if err := decoder.DecodeContext(ctx, &actual); err != nil {
return nil, err
}
return actual, nil
}
// Configuration for the list-clusters tool.
type Config struct {
Name string `yaml:"name" validate:"required"`
Kind string `yaml:"kind" validate:"required"`
Source string `yaml:"source" validate:"required"`
Description string `yaml:"description" validate:"required"`
AuthRequired []string `yaml:"authRequired"`
BaseURL string `yaml:"baseURL"`
}
// validate interface
var _ tools.ToolConfig = Config{}
// ToolConfigKind returns the kind of the tool.
func (cfg Config) ToolConfigKind() string {
return kind
}
// Initialize initializes the tool from the configuration.
func (cfg Config) Initialize(srcs map[string]sources.Source) (tools.Tool, error) {
rawS, ok := srcs[cfg.Source]
if !ok {
return nil, fmt.Errorf("source %q not found", cfg.Source)
}
s, ok := rawS.(*alloydbadmin.Source)
if !ok {
return nil, fmt.Errorf("invalid source for %q tool: source kind must be `%s`", kind, alloydbadmin.SourceKind)
}
allParameters := tools.Parameters{
tools.NewStringParameter("projectId", "The GCP project ID to list clusters for."),
tools.NewStringParameterWithDefault("locationId", "-", "Optional: The location to list clusters in (e.g., 'us-central1'). Use '-' to list clusters across all locations.(Default: '-')"),
}
paramManifest := allParameters.Manifest()
inputSchema := allParameters.McpManifest()
inputSchema.Required = []string{"projectId", "locationId"}
mcpManifest := tools.McpManifest{
Name: cfg.Name,
Description: cfg.Description,
InputSchema: inputSchema,
}
return Tool{
Name: cfg.Name,
Kind: kind,
Source: s,
AllParams: allParameters,
manifest: tools.Manifest{Description: cfg.Description, Parameters: paramManifest},
mcpManifest: mcpManifest,
}, nil
}
// Tool represents the list-clusters tool.
type Tool struct {
Name string `yaml:"name"`
Kind string `yaml:"kind"`
Description string `yaml:"description"`
Source *alloydbadmin.Source
AllParams tools.Parameters `yaml:"allParams"`
manifest tools.Manifest
mcpManifest tools.McpManifest
}
// Invoke executes the tool's logic.
func (t Tool) Invoke(ctx context.Context, params tools.ParamValues, accessToken tools.AccessToken) (any, error) {
paramsMap := params.AsMap()
projectId, ok := paramsMap["projectId"].(string)
if !ok {
return nil, fmt.Errorf("invalid or missing 'projectId' parameter; expected a string")
}
locationId, ok := paramsMap["locationId"].(string)
if !ok {
return nil, fmt.Errorf("invalid 'locationId' parameter; expected a string")
}
service, err := t.Source.GetService(ctx, string(accessToken))
if err != nil {
return nil, err
}
urlString := fmt.Sprintf("projects/%s/locations/%s", projectId, locationId)
resp, err := service.Projects.Locations.Clusters.List(urlString).Do()
if err != nil {
return nil, fmt.Errorf("error listing AlloyDB clusters: %w", err)
}
return resp, nil
}
// ParseParams parses the parameters for the tool.
func (t Tool) ParseParams(data map[string]any, claims map[string]map[string]any) (tools.ParamValues, error) {
return tools.ParseParams(t.AllParams, data, claims)
}
// Manifest returns the tool's manifest.
func (t Tool) Manifest() tools.Manifest {
return t.manifest
}
// McpManifest returns the tool's MCP manifest.
func (t Tool) McpManifest() tools.McpManifest {
return t.mcpManifest
}
// Authorized checks if the tool is authorized.
func (t Tool) Authorized(verifiedAuthServices []string) bool {
return true
}
func (t Tool) RequiresClientAuthorization() bool {
return t.Source.UseClientAuthorization()
}

View File

@@ -0,0 +1,94 @@
// Copyright 2025 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package alloydblistclusters_test
import (
"testing"
yaml "github.com/goccy/go-yaml"
"github.com/google/go-cmp/cmp"
"github.com/googleapis/genai-toolbox/internal/server"
"github.com/googleapis/genai-toolbox/internal/testutils"
alloydblistclusters "github.com/googleapis/genai-toolbox/internal/tools/alloydb/alloydblistclusters"
)
func TestParseFromYaml(t *testing.T) {
ctx, err := testutils.ContextWithNewLogger()
if err != nil {
t.Fatalf("unexpected error: %s", err)
}
tcs := []struct {
desc string
in string
want server.ToolConfigs
}{
{
desc: "basic example",
in: `
tools:
list-my-clusters:
kind: alloydb-list-clusters
source: my-alloydb-admin-source
description: some description
`,
want: server.ToolConfigs{
"list-my-clusters": alloydblistclusters.Config{
Name: "list-my-clusters",
Kind: "alloydb-list-clusters",
Source: "my-alloydb-admin-source",
Description: "some description",
AuthRequired: []string{},
},
},
},
{
desc: "with auth required",
in: `
tools:
list-my-clusters-auth:
kind: alloydb-list-clusters
source: my-alloydb-admin-source
description: some description
authRequired:
- my-google-auth-service
- other-auth-service
`,
want: server.ToolConfigs{
"list-my-clusters-auth": alloydblistclusters.Config{
Name: "list-my-clusters-auth",
Kind: "alloydb-list-clusters",
Source: "my-alloydb-admin-source",
Description: "some description",
AuthRequired: []string{"my-google-auth-service", "other-auth-service"},
},
},
},
}
for _, tc := range tcs {
t.Run(tc.desc, func(t *testing.T) {
got := struct {
Tools server.ToolConfigs `yaml:"tools"`
}{}
// Parse contents
err := yaml.UnmarshalContext(ctx, testutils.FormatYaml(tc.in), &got)
if err != nil {
t.Fatalf("unable to unmarshal: %s", err)
}
if diff := cmp.Diff(tc.want, got.Tools); diff != "" {
t.Fatalf("incorrect parse: diff %v", diff)
}
})
}
}

View File

@@ -0,0 +1,166 @@
// Copyright 2025 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package alloydblistinstances
import (
"context"
"fmt"
yaml "github.com/goccy/go-yaml"
"github.com/googleapis/genai-toolbox/internal/sources"
alloydbadmin "github.com/googleapis/genai-toolbox/internal/sources/alloydbadmin"
"github.com/googleapis/genai-toolbox/internal/tools"
)
const kind string = "alloydb-list-instances"
func init() {
if !tools.Register(kind, newConfig) {
panic(fmt.Sprintf("tool kind %q already registered", kind))
}
}
func newConfig(ctx context.Context, name string, decoder *yaml.Decoder) (tools.ToolConfig, error) {
actual := Config{Name: name}
if err := decoder.DecodeContext(ctx, &actual); err != nil {
return nil, err
}
return actual, nil
}
// Configuration for the list-instances tool.
type Config struct {
Name string `yaml:"name" validate:"required"`
Kind string `yaml:"kind" validate:"required"`
Source string `yaml:"source" validate:"required"`
Description string `yaml:"description" validate:"required"`
AuthRequired []string `yaml:"authRequired"`
BaseURL string `yaml:"baseURL"`
}
// validate interface
var _ tools.ToolConfig = Config{}
// ToolConfigKind returns the kind of the tool.
func (cfg Config) ToolConfigKind() string {
return kind
}
// Initialize initializes the tool from the configuration.
func (cfg Config) Initialize(srcs map[string]sources.Source) (tools.Tool, error) {
rawS, ok := srcs[cfg.Source]
if !ok {
return nil, fmt.Errorf("source %q not found", cfg.Source)
}
s, ok := rawS.(*alloydbadmin.Source)
if !ok {
return nil, fmt.Errorf("invalid source for %q tool: source kind must be `%s`", kind, alloydbadmin.SourceKind)
}
allParameters := tools.Parameters{
tools.NewStringParameter("projectId", "The GCP project ID to list instances for."),
tools.NewStringParameterWithDefault("locationId", "-", "Optional: The location of the cluster (e.g., 'us-central1'). Use '-' to get results for all regions.(Default: '-')"),
tools.NewStringParameterWithDefault("clusterId", "-", "Optional: The ID of the cluster to list instances from. Use '-' to get results for all clusters.(Default: '-')"),
}
paramManifest := allParameters.Manifest()
inputSchema := allParameters.McpManifest()
inputSchema.Required = []string{"projectId"}
mcpManifest := tools.McpManifest{
Name: cfg.Name,
Description: cfg.Description,
InputSchema: inputSchema,
}
return Tool{
Name: cfg.Name,
Kind: kind,
Source: s,
AllParams: allParameters,
manifest: tools.Manifest{Description: cfg.Description, Parameters: paramManifest},
mcpManifest: mcpManifest,
}, nil
}
// Tool represents the list-instances tool.
type Tool struct {
Name string `yaml:"name"`
Kind string `yaml:"kind"`
Description string `yaml:"description"`
Source *alloydbadmin.Source
AllParams tools.Parameters `yaml:"allParams"`
manifest tools.Manifest
mcpManifest tools.McpManifest
}
// Invoke executes the tool's logic.
func (t Tool) Invoke(ctx context.Context, params tools.ParamValues, accessToken tools.AccessToken) (any, error) {
paramsMap := params.AsMap()
projectId, ok := paramsMap["projectId"].(string)
if !ok {
return nil, fmt.Errorf("invalid or missing 'projectId' parameter; expected a string")
}
locationId, ok := paramsMap["locationId"].(string)
if !ok {
return nil, fmt.Errorf("invalid 'locationId' parameter; expected a string")
}
clusterId, ok := paramsMap["clusterId"].(string)
if !ok {
return nil, fmt.Errorf("invalid 'clusterId' parameter; expected a string")
}
service, err := t.Source.GetService(ctx, string(accessToken))
if err != nil {
return nil, err
}
urlString := fmt.Sprintf("projects/%s/locations/%s/clusters/%s", projectId, locationId, clusterId)
resp, err := service.Projects.Locations.Clusters.Instances.List(urlString).Do()
if err != nil {
return nil, fmt.Errorf("error listing AlloyDB instances: %w", err)
}
return resp, nil
}
// ParseParams parses the parameters for the tool.
func (t Tool) ParseParams(data map[string]any, claims map[string]map[string]any) (tools.ParamValues, error) {
return tools.ParseParams(t.AllParams, data, claims)
}
// Manifest returns the tool's manifest.
func (t Tool) Manifest() tools.Manifest {
return t.manifest
}
// McpManifest returns the tool's MCP manifest.
func (t Tool) McpManifest() tools.McpManifest {
return t.mcpManifest
}
// Authorized checks if the tool is authorized.
func (t Tool) Authorized(verifiedAuthServices []string) bool {
return true
}
func (t Tool) RequiresClientAuthorization() bool {
return t.Source.UseClientAuthorization()
}

View File

@@ -0,0 +1,94 @@
// Copyright 2025 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package alloydblistinstances_test
import (
"testing"
yaml "github.com/goccy/go-yaml"
"github.com/google/go-cmp/cmp"
"github.com/googleapis/genai-toolbox/internal/server"
"github.com/googleapis/genai-toolbox/internal/testutils"
alloydblistinstances "github.com/googleapis/genai-toolbox/internal/tools/alloydb/alloydblistinstances"
)
func TestParseFromYaml(t *testing.T) {
ctx, err := testutils.ContextWithNewLogger()
if err != nil {
t.Fatalf("unexpected error: %s", err)
}
tcs := []struct {
desc string
in string
want server.ToolConfigs
}{
{
desc: "basic example",
in: `
tools:
list-my-instances:
kind: alloydb-list-instances
source: my-alloydb-admin-source
description: some description
`,
want: server.ToolConfigs{
"list-my-instances": alloydblistinstances.Config{
Name: "list-my-instances",
Kind: "alloydb-list-instances",
Source: "my-alloydb-admin-source",
Description: "some description",
AuthRequired: []string{},
},
},
},
{
desc: "with auth required",
in: `
tools:
list-my-instances-auth:
kind: alloydb-list-instances
source: my-alloydb-admin-source
description: some description
authRequired:
- my-google-auth-service
- other-auth-service
`,
want: server.ToolConfigs{
"list-my-instances-auth": alloydblistinstances.Config{
Name: "list-my-instances-auth",
Kind: "alloydb-list-instances",
Source: "my-alloydb-admin-source",
Description: "some description",
AuthRequired: []string{"my-google-auth-service", "other-auth-service"},
},
},
},
}
for _, tc := range tcs {
t.Run(tc.desc, func(t *testing.T) {
got := struct {
Tools server.ToolConfigs `yaml:"tools"`
}{}
// Parse contents
err := yaml.UnmarshalContext(ctx, testutils.FormatYaml(tc.in), &got)
if err != nil {
t.Fatalf("unable to unmarshal: %s", err)
}
if diff := cmp.Diff(tc.want, got.Tools); diff != "" {
t.Fatalf("incorrect parse: diff %v", diff)
}
})
}
}

View File

@@ -0,0 +1,166 @@
// Copyright 2025 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package alloydblistusers
import (
"context"
"fmt"
yaml "github.com/goccy/go-yaml"
"github.com/googleapis/genai-toolbox/internal/sources"
alloydbadmin "github.com/googleapis/genai-toolbox/internal/sources/alloydbadmin"
"github.com/googleapis/genai-toolbox/internal/tools"
)
const kind string = "alloydb-list-users"
func init() {
if !tools.Register(kind, newConfig) {
panic(fmt.Sprintf("tool kind %q already registered", kind))
}
}
func newConfig(ctx context.Context, name string, decoder *yaml.Decoder) (tools.ToolConfig, error) {
actual := Config{Name: name}
if err := decoder.DecodeContext(ctx, &actual); err != nil {
return nil, err
}
return actual, nil
}
// Configuration for the list-users tool.
type Config struct {
Name string `yaml:"name" validate:"required"`
Kind string `yaml:"kind" validate:"required"`
Source string `yaml:"source" validate:"required"`
Description string `yaml:"description" validate:"required"`
AuthRequired []string `yaml:"authRequired"`
BaseURL string `yaml:"baseURL"`
}
// validate interface
var _ tools.ToolConfig = Config{}
// ToolConfigKind returns the kind of the tool.
func (cfg Config) ToolConfigKind() string {
return kind
}
// Initialize initializes the tool from the configuration.
func (cfg Config) Initialize(srcs map[string]sources.Source) (tools.Tool, error) {
rawS, ok := srcs[cfg.Source]
if !ok {
return nil, fmt.Errorf("source %q not found", cfg.Source)
}
s, ok := rawS.(*alloydbadmin.Source)
if !ok {
return nil, fmt.Errorf("invalid source for %q tool: source kind must be `%s`", kind, alloydbadmin.SourceKind)
}
allParameters := tools.Parameters{
tools.NewStringParameter("projectId", "The GCP project ID."),
tools.NewStringParameter("locationId", "The location of the cluster (e.g., 'us-central1')."),
tools.NewStringParameter("clusterId", "The ID of the cluster to list users from."),
}
paramManifest := allParameters.Manifest()
inputSchema := allParameters.McpManifest()
inputSchema.Required = []string{"projectId", "locationId", "clusterId"}
mcpManifest := tools.McpManifest{
Name: cfg.Name,
Description: cfg.Description,
InputSchema: inputSchema,
}
return Tool{
Name: cfg.Name,
Kind: kind,
Source: s,
AllParams: allParameters,
manifest: tools.Manifest{Description: cfg.Description, Parameters: paramManifest},
mcpManifest: mcpManifest,
}, nil
}
// Tool represents the list-users tool.
type Tool struct {
Name string `yaml:"name"`
Kind string `yaml:"kind"`
Description string `yaml:"description"`
Source *alloydbadmin.Source
AllParams tools.Parameters `yaml:"allParams"`
manifest tools.Manifest
mcpManifest tools.McpManifest
}
// Invoke executes the tool's logic.
func (t Tool) Invoke(ctx context.Context, params tools.ParamValues, accessToken tools.AccessToken) (any, error) {
paramsMap := params.AsMap()
projectId, ok := paramsMap["projectId"].(string)
if !ok {
return nil, fmt.Errorf("invalid or missing 'projectId' parameter; expected a string")
}
locationId, ok := paramsMap["locationId"].(string)
if !ok {
return nil, fmt.Errorf("invalid 'locationId' parameter; expected a string")
}
clusterId, ok := paramsMap["clusterId"].(string)
if !ok {
return nil, fmt.Errorf("invalid 'clusterId' parameter; expected a string")
}
service, err := t.Source.GetService(ctx, string(accessToken))
if err != nil {
return nil, err
}
urlString := fmt.Sprintf("projects/%s/locations/%s/clusters/%s", projectId, locationId, clusterId)
resp, err := service.Projects.Locations.Clusters.Users.List(urlString).Do()
if err != nil {
return nil, fmt.Errorf("error listing AlloyDB users: %w", err)
}
return resp, nil
}
// ParseParams parses the parameters for the tool.
func (t Tool) ParseParams(data map[string]any, claims map[string]map[string]any) (tools.ParamValues, error) {
return tools.ParseParams(t.AllParams, data, claims)
}
// Manifest returns the tool's manifest.
func (t Tool) Manifest() tools.Manifest {
return t.manifest
}
// McpManifest returns the tool's MCP manifest.
func (t Tool) McpManifest() tools.McpManifest {
return t.mcpManifest
}
// Authorized checks if the tool is authorized.
func (t Tool) Authorized(verifiedAuthServices []string) bool {
return true
}
func (t Tool) RequiresClientAuthorization() bool {
return t.Source.UseClientAuthorization()
}

View File

@@ -0,0 +1,94 @@
// Copyright 2025 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package alloydblistusers_test
import (
"testing"
yaml "github.com/goccy/go-yaml"
"github.com/google/go-cmp/cmp"
"github.com/googleapis/genai-toolbox/internal/server"
"github.com/googleapis/genai-toolbox/internal/testutils"
alloydblistusers "github.com/googleapis/genai-toolbox/internal/tools/alloydb/alloydblistusers"
)
func TestParseFromYaml(t *testing.T) {
ctx, err := testutils.ContextWithNewLogger()
if err != nil {
t.Fatalf("unexpected error: %s", err)
}
tcs := []struct {
desc string
in string
want server.ToolConfigs
}{
{
desc: "basic example",
in: `
tools:
list-my-users:
kind: alloydb-list-users
source: my-alloydb-admin-source
description: some description
`,
want: server.ToolConfigs{
"list-my-users": alloydblistusers.Config{
Name: "list-my-users",
Kind: "alloydb-list-users",
Source: "my-alloydb-admin-source",
Description: "some description",
AuthRequired: []string{},
},
},
},
{
desc: "with auth required",
in: `
tools:
list-my-users-auth:
kind: alloydb-list-users
source: my-alloydb-admin-source
description: some description
authRequired:
- my-google-auth-service
- other-auth-service
`,
want: server.ToolConfigs{
"list-my-users-auth": alloydblistusers.Config{
Name: "list-my-users-auth",
Kind: "alloydb-list-users",
Source: "my-alloydb-admin-source",
Description: "some description",
AuthRequired: []string{"my-google-auth-service", "other-auth-service"},
},
},
},
}
for _, tc := range tcs {
t.Run(tc.desc, func(t *testing.T) {
got := struct {
Tools server.ToolConfigs `yaml:"tools"`
}{}
// Parse contents
err := yaml.UnmarshalContext(ctx, testutils.FormatYaml(tc.in), &got)
if err != nil {
t.Fatalf("unable to unmarshal: %s", err)
}
if diff := cmp.Diff(tc.want, got.Tools); diff != "" {
t.Fatalf("incorrect parse: diff %v", diff)
}
})
}
}


@@ -98,6 +98,7 @@ type CAPayload struct {
Project string `json:"project"`
Messages []Message `json:"messages"`
InlineContext InlineContext `json:"inlineContext"`
ClientIdEnum string `json:"clientIdEnum"`
}
// validate compatible sources are still compatible
@@ -243,6 +244,7 @@ func (t Tool) Invoke(ctx context.Context, params tools.ParamValues, accessToken
},
Options: Options{Chart: ChartOptions{Image: ImageOptions{NoImage: map[string]any{}}}},
},
ClientIdEnum: "GENAI_TOOLBOX",
}
// Call the streaming API


@@ -0,0 +1,187 @@
// Copyright 2025 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package cloudsqlcreateusers
import (
"context"
"fmt"
"github.com/goccy/go-yaml"
"github.com/googleapis/genai-toolbox/internal/sources"
"github.com/googleapis/genai-toolbox/internal/sources/cloudsqladmin"
"github.com/googleapis/genai-toolbox/internal/tools"
sqladmin "google.golang.org/api/sqladmin/v1"
)
const kind string = "cloud-sql-create-users"
func init() {
if !tools.Register(kind, newConfig) {
panic(fmt.Sprintf("tool kind %q already registered", kind))
}
}
func newConfig(ctx context.Context, name string, decoder *yaml.Decoder) (tools.ToolConfig, error) {
actual := Config{Name: name}
if err := decoder.DecodeContext(ctx, &actual); err != nil {
return nil, err
}
return actual, nil
}
// Config defines the configuration for the create-user tool.
type Config struct {
Name string `yaml:"name" validate:"required"`
Kind string `yaml:"kind" validate:"required"`
Source string `yaml:"source" validate:"required"`
Description string `yaml:"description"`
AuthRequired []string `yaml:"authRequired"`
}
// validate interface
var _ tools.ToolConfig = Config{}
// ToolConfigKind returns the kind of the tool.
func (cfg Config) ToolConfigKind() string {
return kind
}
// Initialize initializes the tool from the configuration.
func (cfg Config) Initialize(srcs map[string]sources.Source) (tools.Tool, error) {
rawS, ok := srcs[cfg.Source]
if !ok {
return nil, fmt.Errorf("no source named %q configured", cfg.Source)
}
s, ok := rawS.(*cloudsqladmin.Source)
if !ok {
return nil, fmt.Errorf("invalid source for %q tool: source kind must be `cloud-sql-admin`", kind)
}
allParameters := tools.Parameters{
tools.NewStringParameter("project", "The project ID"),
tools.NewStringParameter("instance", "The ID of the instance where the user will be created."),
tools.NewStringParameter("name", "The name for the new user. Must be unique within the instance."),
tools.NewStringParameterWithRequired("password", "A secure password for the new user. Not required for IAM users.", false),
tools.NewBooleanParameter("iamUser", "Set to true to create a Cloud IAM user."),
}
paramManifest := allParameters.Manifest()
inputSchema := allParameters.McpManifest()
description := cfg.Description
if description == "" {
description = "Creates a new user in a Cloud SQL instance. Both built-in and IAM users are supported. IAM users require an email account as the user name. IAM is the more secure and recommended way to manage users. The agent should always ask the user what type of user they want to create. For more information, see https://cloud.google.com/sql/docs/postgres/add-manage-iam-users"
}
mcpManifest := tools.McpManifest{
Name: cfg.Name,
Description: description,
InputSchema: inputSchema,
}
return Tool{
Name: cfg.Name,
Kind: kind,
AuthRequired: cfg.AuthRequired,
Source: s,
AllParams: allParameters,
manifest: tools.Manifest{Description: description, Parameters: paramManifest, AuthRequired: cfg.AuthRequired},
mcpManifest: mcpManifest,
}, nil
}
// Tool represents the create-user tool.
type Tool struct {
Name string `yaml:"name"`
Kind string `yaml:"kind"`
Description string `yaml:"description"`
AuthRequired []string `yaml:"authRequired"`
Source *cloudsqladmin.Source
AllParams tools.Parameters `yaml:"allParams"`
manifest tools.Manifest
mcpManifest tools.McpManifest
}
// Invoke executes the tool's logic.
func (t Tool) Invoke(ctx context.Context, params tools.ParamValues, accessToken tools.AccessToken) (any, error) {
paramsMap := params.AsMap()
project, ok := paramsMap["project"].(string)
if !ok {
return nil, fmt.Errorf("missing 'project' parameter")
}
instance, ok := paramsMap["instance"].(string)
if !ok {
return nil, fmt.Errorf("missing 'instance' parameter")
}
name, ok := paramsMap["name"].(string)
if !ok {
return nil, fmt.Errorf("missing 'name' parameter")
}
iamUser, _ := paramsMap["iamUser"].(bool)
user := sqladmin.User{
Name: name,
}
if iamUser {
user.Type = "CLOUD_IAM_USER"
} else {
user.Type = "BUILT_IN"
password, ok := paramsMap["password"].(string)
if !ok || password == "" {
return nil, fmt.Errorf("missing 'password' parameter for non-IAM user")
}
user.Password = password
}
service, err := t.Source.GetService(ctx, string(accessToken))
if err != nil {
return nil, err
}
resp, err := service.Users.Insert(project, instance, &user).Do()
if err != nil {
return nil, fmt.Errorf("error creating user: %w", err)
}
return resp, nil
}
// ParseParams parses the parameters for the tool.
func (t Tool) ParseParams(data map[string]any, claims map[string]map[string]any) (tools.ParamValues, error) {
return tools.ParseParams(t.AllParams, data, claims)
}
// Manifest returns the tool's manifest.
func (t Tool) Manifest() tools.Manifest {
return t.manifest
}
// McpManifest returns the tool's MCP manifest.
func (t Tool) McpManifest() tools.McpManifest {
return t.mcpManifest
}
// Authorized checks if the tool is authorized.
func (t Tool) Authorized(verifiedAuthServices []string) bool {
return true
}
func (t Tool) RequiresClientAuthorization() bool {
return t.Source.UseClientAuthorization()
}
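
A minimal sketch of the two user shapes this tool builds, assuming the cloudsqladmin source wraps google.golang.org/api/sqladmin/v1 (the Users.Insert call above matches that client). The project, instance, user names, and password are hypothetical.

```go
package main

import (
	"context"
	"log"

	sqladmin "google.golang.org/api/sqladmin/v1"
)

func main() {
	ctx := context.Background()
	service, err := sqladmin.NewService(ctx) // Application Default Credentials
	if err != nil {
		log.Fatal(err)
	}
	// Built-in user: a password is required.
	builtIn := &sqladmin.User{Name: "app-user", Type: "BUILT_IN", Password: "a-secure-password"}
	// IAM user: the name is an email account and no password is set.
	iam := &sqladmin.User{Name: "alice@example.com", Type: "CLOUD_IAM_USER"}
	for _, u := range []*sqladmin.User{builtIn, iam} {
		if _, err := service.Users.Insert("my-project", "my-instance", u).Do(); err != nil {
			log.Printf("error creating user %q: %v", u.Name, err)
		}
	}
}
```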


@@ -0,0 +1,72 @@
// Copyright 2025 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package cloudsqlcreateusers_test
import (
"testing"
"github.com/goccy/go-yaml"
"github.com/google/go-cmp/cmp"
"github.com/googleapis/genai-toolbox/internal/server"
"github.com/googleapis/genai-toolbox/internal/testutils"
"github.com/googleapis/genai-toolbox/internal/tools/cloudsql/cloudsqlcreateusers"
)
func TestParseFromYaml(t *testing.T) {
ctx, err := testutils.ContextWithNewLogger()
if err != nil {
t.Fatalf("unexpected error: %s", err)
}
tcs := []struct {
desc string
in string
want server.ToolConfigs
}{
{
desc: "basic example",
in: `
tools:
create-user:
kind: cloud-sql-create-users
source: my-source
description: some description
`,
want: server.ToolConfigs{
"create-user": cloudsqlcreateusers.Config{
Name: "create-user",
Kind: "cloud-sql-create-users",
Source: "my-source",
Description: "some description",
AuthRequired: []string{},
},
},
},
}
for _, tc := range tcs {
t.Run(tc.desc, func(t *testing.T) {
got := struct {
Tools server.ToolConfigs `yaml:"tools"`
}{}
// Parse contents
err := yaml.UnmarshalContext(ctx, testutils.FormatYaml(tc.in), &got)
if err != nil {
t.Fatalf("unable to unmarshal: %s", err)
}
if diff := cmp.Diff(tc.want, got.Tools); diff != "" {
t.Fatalf("incorrect parse: diff %v", diff)
}
})
}
}


@@ -0,0 +1,164 @@
// Copyright 2025 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package cloudsqlgetinstances
import (
"context"
"fmt"
yaml "github.com/goccy/go-yaml"
"github.com/googleapis/genai-toolbox/internal/sources"
"github.com/googleapis/genai-toolbox/internal/sources/cloudsqladmin"
"github.com/googleapis/genai-toolbox/internal/tools"
)
const kind string = "cloud-sql-get-instance"
func init() {
if !tools.Register(kind, newConfig) {
panic(fmt.Sprintf("tool kind %q already registered", kind))
}
}
func newConfig(ctx context.Context, name string, decoder *yaml.Decoder) (tools.ToolConfig, error) {
actual := Config{Name: name}
if err := decoder.DecodeContext(ctx, &actual); err != nil {
return nil, err
}
return actual, nil
}
// Config defines the configuration for the get-instances tool.
type Config struct {
Name string `yaml:"name" validate:"required"`
Kind string `yaml:"kind" validate:"required"`
Description string `yaml:"description"`
Source string `yaml:"source" validate:"required"`
AuthRequired []string `yaml:"authRequired"`
}
// validate interface
var _ tools.ToolConfig = Config{}
// ToolConfigKind returns the kind of the tool.
func (cfg Config) ToolConfigKind() string {
return kind
}
// Initialize initializes the tool from the configuration.
func (cfg Config) Initialize(srcs map[string]sources.Source) (tools.Tool, error) {
rawS, ok := srcs[cfg.Source]
if !ok {
return nil, fmt.Errorf("no source named %q configured", cfg.Source)
}
s, ok := rawS.(*cloudsqladmin.Source)
if !ok {
return nil, fmt.Errorf("invalid source for %q tool: source kind must be `cloud-sql-admin`", kind)
}
allParameters := tools.Parameters{
tools.NewStringParameter("projectId", "The project ID"),
tools.NewStringParameter("instanceId", "The instance ID"),
}
paramManifest := allParameters.Manifest()
inputSchema := allParameters.McpManifest()
inputSchema.Required = []string{"projectId", "instanceId"}
description := cfg.Description
if description == "" {
description = "Gets a particular cloud sql instance."
}
mcpManifest := tools.McpManifest{
Name: cfg.Name,
Description: description,
InputSchema: inputSchema,
}
return Tool{
Name: cfg.Name,
Kind: kind,
AuthRequired: cfg.AuthRequired,
Source: s,
AllParams: allParameters,
manifest: tools.Manifest{Description: description, Parameters: paramManifest, AuthRequired: cfg.AuthRequired},
mcpManifest: mcpManifest,
}, nil
}
// Tool represents the get-instances tool.
type Tool struct {
Name string `yaml:"name"`
Kind string `yaml:"kind"`
Description string `yaml:"description"`
AuthRequired []string `yaml:"authRequired"`
Source *cloudsqladmin.Source
AllParams tools.Parameters `yaml:"allParams"`
manifest tools.Manifest
mcpManifest tools.McpManifest
}
// Invoke executes the tool's logic.
func (t Tool) Invoke(ctx context.Context, params tools.ParamValues, accessToken tools.AccessToken) (any, error) {
paramsMap := params.AsMap()
projectId, ok := paramsMap["projectId"].(string)
if !ok {
return nil, fmt.Errorf("missing 'projectId' parameter")
}
instanceId, ok := paramsMap["instanceId"].(string)
if !ok {
return nil, fmt.Errorf("missing 'instanceId' parameter")
}
service, err := t.Source.GetService(ctx, string(accessToken))
if err != nil {
return nil, err
}
resp, err := service.Instances.Get(projectId, instanceId).Do()
if err != nil {
return nil, fmt.Errorf("error getting instance: %w", err)
}
return resp, nil
}
// ParseParams parses the parameters for the tool.
func (t Tool) ParseParams(data map[string]any, claims map[string]map[string]any) (tools.ParamValues, error) {
return tools.ParseParams(t.AllParams, data, claims)
}
// Manifest returns the tool's manifest.
func (t Tool) Manifest() tools.Manifest {
return t.manifest
}
// McpManifest returns the tool's MCP manifest.
func (t Tool) McpManifest() tools.McpManifest {
return t.mcpManifest
}
// Authorized checks if the tool is authorized.
func (t Tool) Authorized(verifiedAuthServices []string) bool {
return true
}
func (t Tool) RequiresClientAuthorization() bool {
return t.Source.UseClientAuthorization()
}


@@ -0,0 +1,72 @@
// Copyright 2025 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package cloudsqlgetinstances_test
import (
"testing"
yaml "github.com/goccy/go-yaml"
"github.com/google/go-cmp/cmp"
"github.com/googleapis/genai-toolbox/internal/server"
"github.com/googleapis/genai-toolbox/internal/testutils"
cloudsqlgetinstances "github.com/googleapis/genai-toolbox/internal/tools/cloudsql/cloudsqlgetinstances"
)
func TestParseFromYaml(t *testing.T) {
ctx, err := testutils.ContextWithNewLogger()
if err != nil {
t.Fatalf("unexpected error: %s", err)
}
tcs := []struct {
desc string
in string
want server.ToolConfigs
}{
{
desc: "basic example",
in: `
tools:
get-instances:
kind: cloud-sql-get-instance
description: "A tool to get cloud sql instances"
source: "my-gcp-source"
`,
want: server.ToolConfigs{
"get-instances": cloudsqlgetinstances.Config{
Name: "get-instances",
Kind: "cloud-sql-get-instance",
Description: "A tool to get cloud sql instances",
Source: "my-gcp-source",
AuthRequired: []string{},
},
},
},
}
for _, tc := range tcs {
t.Run(tc.desc, func(t *testing.T) {
got := struct {
Tools server.ToolConfigs `yaml:"tools"`
}{}
// Parse contents
err := yaml.UnmarshalContext(ctx, testutils.FormatYaml(tc.in), &got)
if err != nil {
t.Fatalf("unable to unmarshal: %s", err)
}
if diff := cmp.Diff(tc.want, got.Tools); diff != "" {
t.Fatalf("incorrect parse: diff %v", diff)
}
})
}
}


@@ -0,0 +1,175 @@
// Copyright 2025 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package cloudsqllistinstances
import (
"context"
"fmt"
"github.com/goccy/go-yaml"
"github.com/googleapis/genai-toolbox/internal/sources"
cloudsqladminsrc "github.com/googleapis/genai-toolbox/internal/sources/cloudsqladmin"
"github.com/googleapis/genai-toolbox/internal/tools"
)
const kind string = "cloud-sql-list-instances"
func init() {
if !tools.Register(kind, newConfig) {
panic(fmt.Sprintf("tool kind %q already registered", kind))
}
}
func newConfig(ctx context.Context, name string, decoder *yaml.Decoder) (tools.ToolConfig, error) {
actual := Config{Name: name}
if err := decoder.DecodeContext(ctx, &actual); err != nil {
return nil, err
}
return actual, nil
}
// Config defines the configuration for the list-instance tool.
type Config struct {
Name string `yaml:"name" validate:"required"`
Kind string `yaml:"kind" validate:"required"`
Source string `yaml:"source" validate:"required"`
Description string `yaml:"description"`
AuthRequired []string `yaml:"authRequired"`
}
// validate interface
var _ tools.ToolConfig = Config{}
// ToolConfigKind returns the kind of the tool.
func (cfg Config) ToolConfigKind() string {
return kind
}
// Initialize initializes the tool from the configuration.
func (cfg Config) Initialize(srcs map[string]sources.Source) (tools.Tool, error) {
rawS, ok := srcs[cfg.Source]
if !ok {
return nil, fmt.Errorf("no source named %q configured", cfg.Source)
}
s, ok := rawS.(*cloudsqladminsrc.Source)
if !ok {
return nil, fmt.Errorf("invalid source for %q tool: source kind must be `cloud-sql-admin`", kind)
}
allParameters := tools.Parameters{
tools.NewStringParameter("project", "The project ID"),
}
paramManifest := allParameters.Manifest()
inputSchema := allParameters.McpManifest()
inputSchema.Required = []string{"project"}
description := cfg.Description
if description == "" {
description = "Lists all type of Cloud SQL instances for a project."
}
mcpManifest := tools.McpManifest{
Name: cfg.Name,
Description: description,
InputSchema: inputSchema,
}
return Tool{
Name: cfg.Name,
Kind: kind,
AuthRequired: cfg.AuthRequired,
source: s,
AllParams: allParameters,
manifest: tools.Manifest{Description: description, Parameters: paramManifest, AuthRequired: cfg.AuthRequired},
mcpManifest: mcpManifest,
}, nil
}
// Tool represents the list-instance tool.
type Tool struct {
Name string `yaml:"name"`
Kind string `yaml:"kind"`
Description string `yaml:"description"`
AuthRequired []string `yaml:"authRequired"`
AllParams tools.Parameters `yaml:"allParams"`
source *cloudsqladminsrc.Source
manifest tools.Manifest
mcpManifest tools.McpManifest
}
// Invoke executes the tool's logic.
func (t Tool) Invoke(ctx context.Context, params tools.ParamValues, accessToken tools.AccessToken) (any, error) {
paramsMap := params.AsMap()
project, ok := paramsMap["project"].(string)
if !ok {
return nil, fmt.Errorf("missing 'project' parameter")
}
service, err := t.source.GetService(ctx, string(accessToken))
if err != nil {
return nil, err
}
resp, err := service.Instances.List(project).Do()
if err != nil {
return nil, fmt.Errorf("error listing instances: %w", err)
}
if resp.Items == nil {
return []any{}, nil
}
type instanceInfo struct {
Name string `json:"name"`
InstanceType string `json:"instanceType"`
}
var instances []instanceInfo
for _, item := range resp.Items {
instances = append(instances, instanceInfo{
Name: item.Name,
InstanceType: item.InstanceType,
})
}
return instances, nil
}
// ParseParams parses the parameters for the tool.
func (t Tool) ParseParams(data map[string]any, claims map[string]map[string]any) (tools.ParamValues, error) {
return tools.ParseParams(t.AllParams, data, claims)
}
// Manifest returns the tool's manifest.
func (t Tool) Manifest() tools.Manifest {
return t.manifest
}
// McpManifest returns the tool's MCP manifest.
func (t Tool) McpManifest() tools.McpManifest {
return t.mcpManifest
}
// Authorized checks if the tool is authorized.
func (t Tool) Authorized(verifiedAuthServices []string) bool {
return true
}
func (t Tool) RequiresClientAuthorization() bool {
return t.source.UseClientAuthorization()
}
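
A small self-contained sketch of the response trimming Invoke performs, run against a canned sqladmin.InstancesListResponse rather than a live API call; the instance names are made up.

```go
package main

import (
	"encoding/json"
	"fmt"

	sqladmin "google.golang.org/api/sqladmin/v1"
)

type instanceInfo struct {
	Name         string `json:"name"`
	InstanceType string `json:"instanceType"`
}

func main() {
	// Canned response standing in for service.Instances.List(project).Do().
	resp := &sqladmin.InstancesListResponse{
		Items: []*sqladmin.DatabaseInstance{
			{Name: "orders-db", InstanceType: "CLOUD_SQL_INSTANCE"},
			{Name: "orders-db-replica", InstanceType: "READ_REPLICA_INSTANCE"},
		},
	}
	// Keep only the fields the tool exposes.
	var instances []instanceInfo
	for _, item := range resp.Items {
		instances = append(instances, instanceInfo{Name: item.Name, InstanceType: item.InstanceType})
	}
	b, _ := json.Marshal(instances)
	fmt.Println(string(b)) // [{"name":"orders-db","instanceType":"CLOUD_SQL_INSTANCE"},...]
}
```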


@@ -0,0 +1,71 @@
// Copyright 2025 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package cloudsqllistinstances
import (
"testing"
"github.com/goccy/go-yaml"
"github.com/google/go-cmp/cmp"
"github.com/googleapis/genai-toolbox/internal/server"
"github.com/googleapis/genai-toolbox/internal/testutils"
)
func TestParseFromYaml(t *testing.T) {
ctx, err := testutils.ContextWithNewLogger()
if err != nil {
t.Fatalf("unexpected error: %s", err)
}
tcs := []struct {
desc string
in string
want server.ToolConfigs
}{
{
desc: "basic example",
in: `
tools:
list-my-instances:
kind: cloud-sql-list-instances
description: some description
source: some-source
`,
want: server.ToolConfigs{
"list-my-instances": Config{
Name: "list-my-instances",
Kind: "cloud-sql-list-instances",
Description: "some description",
AuthRequired: []string{},
Source: "some-source",
},
},
},
}
for _, tc := range tcs {
t.Run(tc.desc, func(t *testing.T) {
got := struct {
Tools server.ToolConfigs `yaml:"tools"`
}{}
// Parse contents
err := yaml.UnmarshalContext(ctx, testutils.FormatYaml(tc.in), &got)
if err != nil {
t.Fatalf("unable to unmarshal: %s", err)
}
if diff := cmp.Diff(tc.want, got.Tools); diff != "" {
t.Fatalf("incorrect parse: diff %v", diff)
}
})
}
}


@@ -0,0 +1,410 @@
// Copyright 2025 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package cloudsqlwaitforoperation
import (
"context"
"encoding/json"
"fmt"
"regexp"
"strings"
"text/template"
"time"
yaml "github.com/goccy/go-yaml"
"github.com/googleapis/genai-toolbox/internal/sources"
"github.com/googleapis/genai-toolbox/internal/sources/cloudsqladmin"
"github.com/googleapis/genai-toolbox/internal/tools"
)
const kind string = "cloud-sql-wait-for-operation"
var cloudSQLConnectionMessageTemplate = `Your Cloud SQL resource is ready.
To connect, please configure your environment. The method depends on how you are running the toolbox:
**If running locally via stdio:**
Update the MCP server configuration with the following environment variables:
` + "```json" + `
{
"mcpServers": {
"cloud-sql-{{.DBType}}": {
"command": "./PATH/TO/toolbox",
"args": ["--prebuilt","cloud-sql-{{.DBType}}","--stdio"],
"env": {
"CLOUD_SQL_{{.DBTypeUpper}}_PROJECT": "{{.Project}}",
"CLOUD_SQL_{{.DBTypeUpper}}_REGION": "{{.Region}}",
"CLOUD_SQL_{{.DBTypeUpper}}_INSTANCE": "{{.Instance}}",
"CLOUD_SQL_{{.DBTypeUpper}}_DATABASE": "{{.Database}}",
"CLOUD_SQL_{{.DBTypeUpper}}_USER": "<your-user>",
"CLOUD_SQL_{{.DBTypeUpper}}_PASSWORD": "<your-password>"
}
}
}
}
` + "```" + `
**If running remotely:**
For remote deployments, you will need to set the following environment variables in your deployment configuration:
` + "```" + `
CLOUD_SQL_{{.DBTypeUpper}}_PROJECT={{.Project}}
CLOUD_SQL_{{.DBTypeUpper}}_REGION={{.Region}}
CLOUD_SQL_{{.DBTypeUpper}}_INSTANCE={{.Instance}}
CLOUD_SQL_{{.DBTypeUpper}}_DATABASE={{.Database}}
CLOUD_SQL_{{.DBTypeUpper}}_USER=<your-user>
CLOUD_SQL_{{.DBTypeUpper}}_PASSWORD=<your-password>
` + "```" + `
Please refer to the official documentation for guidance on deploying the toolbox:
- Deploying the Toolbox: https://googleapis.github.io/genai-toolbox/how-to/deploy_toolbox/
- Deploying on GKE: https://googleapis.github.io/genai-toolbox/how-to/deploy_gke/
`
func init() {
if !tools.Register(kind, newConfig) {
panic(fmt.Sprintf("tool kind %q already registered", kind))
}
}
func newConfig(ctx context.Context, name string, decoder *yaml.Decoder) (tools.ToolConfig, error) {
actual := Config{Name: name}
if err := decoder.DecodeContext(ctx, &actual); err != nil {
return nil, err
}
return actual, nil
}
// Config defines the configuration for the wait-for-operation tool.
type Config struct {
Name string `yaml:"name" validate:"required"`
Kind string `yaml:"kind" validate:"required"`
Source string `yaml:"source" validate:"required"`
Description string `yaml:"description"`
AuthRequired []string `yaml:"authRequired"`
BaseURL string `yaml:"baseURL"`
// Polling configuration
Delay string `yaml:"delay"`
MaxDelay string `yaml:"maxDelay"`
Multiplier float64 `yaml:"multiplier"`
MaxRetries int `yaml:"maxRetries"`
}
// validate interface
var _ tools.ToolConfig = Config{}
// ToolConfigKind returns the kind of the tool.
func (cfg Config) ToolConfigKind() string {
return kind
}
// Initialize initializes the tool from the configuration.
func (cfg Config) Initialize(srcs map[string]sources.Source) (tools.Tool, error) {
rawS, ok := srcs[cfg.Source]
if !ok {
return nil, fmt.Errorf("no source named %q configured", cfg.Source)
}
s, ok := rawS.(*cloudsqladmin.Source)
if !ok {
return nil, fmt.Errorf("invalid source for %q tool: source kind must be `cloud-sql-admin`", kind)
}
allParameters := tools.Parameters{
tools.NewStringParameter("project", "The project ID"),
tools.NewStringParameter("operation", "The operation ID"),
}
paramManifest := allParameters.Manifest()
inputSchema := allParameters.McpManifest()
inputSchema.Required = []string{"project", "operation"}
description := cfg.Description
if description == "" {
description = "This will poll on operations API until the operation is done. For checking operation status we need projectId and operationId. Once instance is created give follow up steps on how to use the variables to bring data plane MCP server up in local and remote setup."
}
mcpManifest := tools.McpManifest{
Name: cfg.Name,
Description: description,
InputSchema: inputSchema,
}
var delay time.Duration
if cfg.Delay == "" {
delay = 3 * time.Second
} else {
var err error
delay, err = time.ParseDuration(cfg.Delay)
if err != nil {
return nil, fmt.Errorf("invalid value for delay: %w", err)
}
}
var maxDelay time.Duration
if cfg.MaxDelay == "" {
maxDelay = 4 * time.Minute
} else {
var err error
maxDelay, err = time.ParseDuration(cfg.MaxDelay)
if err != nil {
return nil, fmt.Errorf("invalid value for maxDelay: %w", err)
}
}
multiplier := cfg.Multiplier
if multiplier == 0 {
multiplier = 2.0
}
maxRetries := cfg.MaxRetries
if maxRetries == 0 {
maxRetries = 10
}
return Tool{
Name: cfg.Name,
Kind: kind,
AuthRequired: cfg.AuthRequired,
Source: s,
AllParams: allParameters,
manifest: tools.Manifest{Description: description, Parameters: paramManifest, AuthRequired: cfg.AuthRequired},
mcpManifest: mcpManifest,
Delay: delay,
MaxDelay: maxDelay,
Multiplier: multiplier,
MaxRetries: maxRetries,
}, nil
}
// Tool represents the wait-for-operation tool.
type Tool struct {
Name string `yaml:"name"`
Kind string `yaml:"kind"`
Description string `yaml:"description"`
AuthRequired []string `yaml:"authRequired"`
Source *cloudsqladmin.Source
AllParams tools.Parameters `yaml:"allParams"`
// Polling configuration
Delay time.Duration
MaxDelay time.Duration
Multiplier float64
MaxRetries int
manifest tools.Manifest
mcpManifest tools.McpManifest
}
// Invoke executes the tool's logic.
func (t Tool) Invoke(ctx context.Context, params tools.ParamValues, accessToken tools.AccessToken) (any, error) {
paramsMap := params.AsMap()
project, ok := paramsMap["project"].(string)
if !ok {
return nil, fmt.Errorf("missing 'project' parameter")
}
operationID, ok := paramsMap["operation"].(string)
if !ok {
return nil, fmt.Errorf("missing 'operation' parameter")
}
service, err := t.Source.GetService(ctx, string(accessToken))
if err != nil {
return nil, err
}
ctx, cancel := context.WithTimeout(ctx, 30*time.Minute)
defer cancel()
delay := t.Delay
maxDelay := t.MaxDelay
multiplier := t.Multiplier
maxRetries := t.MaxRetries
retries := 0
for retries < maxRetries {
select {
case <-ctx.Done():
return nil, fmt.Errorf("timed out waiting for operation: %w", ctx.Err())
default:
}
op, err := service.Operations.Get(project, operationID).Do()
if err != nil {
fmt.Printf("error getting operation: %s, retrying in %v\n", err, delay)
} else {
if op.Status == "DONE" {
if op.Error != nil {
var errorBytes []byte
errorBytes, err = json.Marshal(op.Error)
if err != nil {
return nil, fmt.Errorf("operation finished with error but could not marshal error object: %w", err)
}
return nil, fmt.Errorf("operation finished with error: %s", string(errorBytes))
}
var opBytes []byte
opBytes, err = op.MarshalJSON()
if err != nil {
return nil, fmt.Errorf("could not marshal operation: %w", err)
}
var data map[string]any
if err := json.Unmarshal(opBytes, &data); err != nil {
return nil, fmt.Errorf("could not unmarshal operation: %w", err)
}
if msg, ok := t.generateCloudSQLConnectionMessage(data); ok {
return msg, nil
}
return string(opBytes), nil
}
fmt.Printf("Operation not complete, retrying in %v\n", delay)
}
time.Sleep(delay)
delay = time.Duration(float64(delay) * multiplier)
if delay > maxDelay {
delay = maxDelay
}
retries++
}
return nil, fmt.Errorf("exceeded max retries waiting for operation")
}
// ParseParams parses the parameters for the tool.
func (t Tool) ParseParams(data map[string]any, claims map[string]map[string]any) (tools.ParamValues, error) {
return tools.ParseParams(t.AllParams, data, claims)
}
// Manifest returns the tool's manifest.
func (t Tool) Manifest() tools.Manifest {
return t.manifest
}
// McpManifest returns the tool's MCP manifest.
func (t Tool) McpManifest() tools.McpManifest {
return t.mcpManifest
}
// Authorized checks if the tool is authorized.
func (t Tool) Authorized(verifiedAuthServices []string) bool {
return true
}
func (t Tool) RequiresClientAuthorization() bool {
return t.Source.UseClientAuthorization()
}
func (t Tool) generateCloudSQLConnectionMessage(opResponse map[string]any) (string, bool) {
operationType, ok := opResponse["operationType"].(string)
if !ok || operationType != "CREATE_DATABASE" {
return "", false
}
targetLink, ok := opResponse["targetLink"].(string)
if !ok {
return "", false
}
r := regexp.MustCompile(`/projects/([^/]+)/instances/([^/]+)/databases/([^/]+)`)
matches := r.FindStringSubmatch(targetLink)
if len(matches) < 4 {
return "", false
}
project := matches[1]
instance := matches[2]
database := matches[3]
instanceData, err := t.fetchInstanceData(context.Background(), project, instance)
if err != nil {
fmt.Printf("error fetching instance data: %v\n", err)
return "", false
}
region, ok := instanceData["region"].(string)
if !ok {
return "", false
}
databaseVersion, ok := instanceData["databaseVersion"].(string)
if !ok {
return "", false
}
var dbType string
if strings.Contains(databaseVersion, "POSTGRES") {
dbType = "postgres"
} else if strings.Contains(databaseVersion, "MYSQL") {
dbType = "mysql"
} else if strings.Contains(databaseVersion, "SQLSERVER") {
dbType = "mssql"
} else {
return "", false
}
tmpl, err := template.New("cloud-sql-connection").Parse(cloudSQLConnectionMessageTemplate)
if err != nil {
return fmt.Sprintf("template parsing error: %v", err), false
}
data := struct {
Project string
Region string
Instance string
DBType string
DBTypeUpper string
Database string
}{
Project: project,
Region: region,
Instance: instance,
DBType: dbType,
DBTypeUpper: strings.ToUpper(dbType),
Database: database,
}
var b strings.Builder
if err := tmpl.Execute(&b, data); err != nil {
return fmt.Sprintf("template execution error: %v", err), false
}
return b.String(), true
}
func (t Tool) fetchInstanceData(ctx context.Context, project, instance string) (map[string]any, error) {
service, err := t.Source.GetService(ctx, "")
if err != nil {
return nil, err
}
resp, err := service.Instances.Get(project, instance).Do()
if err != nil {
return nil, fmt.Errorf("error getting instance: %w", err)
}
var data map[string]any
var b []byte
b, err = resp.MarshalJSON()
if err != nil {
return nil, fmt.Errorf("error marshalling response: %w", err)
}
if err := json.Unmarshal(b, &data); err != nil {
return nil, fmt.Errorf("error unmarshalling response body: %w", err)
}
return data, nil
}
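
A pure-stdlib sketch of the backoff schedule Invoke follows: the delay is multiplied after every attempt and capped at maxDelay. With the defaults above (3s delay, 2.0 multiplier, 4m cap, 10 retries) the sleeps are 3s, 6s, 12s, and so on up to the 4m cap.

```go
package main

import (
	"fmt"
	"time"
)

// backoffSchedule returns the sequence of sleeps the polling loop performs,
// mirroring the delay/multiplier/cap logic in Invoke.
func backoffSchedule(delay, maxDelay time.Duration, multiplier float64, maxRetries int) []time.Duration {
	var schedule []time.Duration
	for i := 0; i < maxRetries; i++ {
		schedule = append(schedule, delay)
		delay = time.Duration(float64(delay) * multiplier)
		if delay > maxDelay {
			delay = maxDelay
		}
	}
	return schedule
}

func main() {
	fmt.Println(backoffSchedule(3*time.Second, 4*time.Minute, 2.0, 10))
	// Output: [3s 6s 12s 24s 48s 1m36s 3m12s 4m0s 4m0s 4m0s]
}
```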


@@ -0,0 +1,80 @@
// Copyright 2025 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package cloudsqlwaitforoperation_test
import (
"testing"
yaml "github.com/goccy/go-yaml"
"github.com/google/go-cmp/cmp"
"github.com/googleapis/genai-toolbox/internal/server"
"github.com/googleapis/genai-toolbox/internal/testutils"
cloudsqlwaitforoperation "github.com/googleapis/genai-toolbox/internal/tools/cloudsql/cloudsqlwaitforoperation"
)
func TestParseFromYaml(t *testing.T) {
ctx, err := testutils.ContextWithNewLogger()
if err != nil {
t.Fatalf("unexpected error: %s", err)
}
tcs := []struct {
desc string
in string
want server.ToolConfigs
}{
{
desc: "basic example",
in: `
tools:
wait-for-thing:
kind: cloud-sql-wait-for-operation
source: some-source
description: some description
delay: 1s
maxDelay: 5s
multiplier: 1.5
maxRetries: 5
`,
want: server.ToolConfigs{
"wait-for-thing": cloudsqlwaitforoperation.Config{
Name: "wait-for-thing",
Kind: "cloud-sql-wait-for-operation",
Source: "some-source",
Description: "some description",
AuthRequired: []string{},
Delay: "1s",
MaxDelay: "5s",
Multiplier: 1.5,
MaxRetries: 5,
},
},
},
}
for _, tc := range tcs {
t.Run(tc.desc, func(t *testing.T) {
got := struct {
Tools server.ToolConfigs `yaml:"tools"`
}{}
// Parse contents
err := yaml.UnmarshalContext(ctx, testutils.FormatYaml(tc.in), &got)
if err != nil {
t.Fatalf("unable to unmarshal: %s", err)
}
if diff := cmp.Diff(tc.want, got.Tools); diff != "" {
t.Fatalf("incorrect parse: diff %v", diff)
}
})
}
}


@@ -262,3 +262,25 @@ func ProcessQueryArgs(ctx context.Context, params tools.ParamValues) (*v4.WriteQ
}
return &wq, nil
}
type QueryApiClientContext struct {
Name string `json:"name"`
Attributes map[string]string `json:"attributes,omitempty"`
ExtraAttributes map[string]string `json:"extra_attributes,omitempty"`
}
type RenderOptions struct {
Format string `json:"format"`
}
type RequestRunInlineQuery2 struct {
Query v4.WriteQuery `json:"query"`
RenderOpts RenderOptions `json:"render_options"`
QueryApiClientCtx QueryApiClientContext `json:"query_api_client_context"`
}
func RunInlineQuery2(l *v4.LookerSDK, request RequestRunInlineQuery2, options *rtl.ApiSettings) (string, error) {
var result string
err := l.AuthSession.Do(&result, "POST", "/4.0", "/queries/run_inline", nil, request, options)
return result, err
}


@@ -15,6 +15,7 @@
package lookercommon_test
import (
"encoding/json"
"testing"
"github.com/google/go-cmp/cmp"
@@ -169,3 +170,32 @@ func TestExtractLookerFieldPropertiesWithNilFields(t *testing.T) {
t.Fatalf("incorrect result: diff %v", diff)
}
}
func TestRequestRunInlineQuery2(t *testing.T) {
fields := make([]string, 1)
fields[0] = "foo.bar"
wq := v4.WriteQuery{
Model: "model",
View: "explore",
Fields: &fields,
}
req2 := lookercommon.RequestRunInlineQuery2{
Query: wq,
RenderOpts: lookercommon.RenderOptions{
Format: "json",
},
QueryApiClientCtx: lookercommon.QueryApiClientContext{
Name: "MCP Toolbox",
},
}
json, err := json.Marshal(req2)
if err != nil {
t.Fatalf("Could not marshall req2 as json")
}
got := string(json)
want := `{"query":{"model":"model","view":"explore","fields":["foo.bar"]},"render_options":{"format":"json"},"query_api_client_context":{"name":"MCP Toolbox"}}`
if diff := cmp.Diff(want, got); diff != "" {
t.Fatalf("incorrect result: diff %v", diff)
}
}


@@ -131,9 +131,22 @@ func (t Tool) Invoke(ctx context.Context, params tools.ParamValues, accessToken
Body: *wq,
ResultFormat: "json",
}
resp, err := sdk.RunInlineQuery(req, t.ApiSettings)
req2 := lookercommon.RequestRunInlineQuery2{
Query: *wq,
RenderOpts: lookercommon.RenderOptions{
Format: "json",
},
QueryApiClientCtx: lookercommon.QueryApiClientContext{
Name: "MCP Toolbox",
},
}
resp, err := lookercommon.RunInlineQuery2(sdk, req2, t.ApiSettings)
if err != nil {
return nil, fmt.Errorf("error making query request: %s", err)
logger.DebugContext(ctx, "error querying with new endpoint, trying again with original", err)
resp, err = sdk.RunInlineQuery(req, t.ApiSettings)
if err != nil {
return nil, fmt.Errorf("error making query request: %s", err)
}
}
logger.DebugContext(ctx, "resp = ", resp)


@@ -130,9 +130,22 @@ func (t Tool) Invoke(ctx context.Context, params tools.ParamValues, accessToken
Body: *wq,
ResultFormat: "sql",
}
resp, err := sdk.RunInlineQuery(req, t.ApiSettings)
req2 := lookercommon.RequestRunInlineQuery2{
Query: *wq,
RenderOpts: lookercommon.RenderOptions{
Format: "sql",
},
QueryApiClientCtx: lookercommon.QueryApiClientContext{
Name: "MCP Toolbox",
},
}
resp, err := lookercommon.RunInlineQuery2(sdk, req2, t.ApiSettings)
if err != nil {
return nil, fmt.Errorf("error making query_sql request: %s", err)
logger.DebugContext(ctx, "error querying with new endpoint, trying again with original", err)
resp, err = sdk.RunInlineQuery(req, t.ApiSettings)
if err != nil {
return nil, fmt.Errorf("error making query_sql request: %s", err)
}
}
logger.DebugContext(ctx, "resp = ", resp)


@@ -20,6 +20,7 @@ import (
"github.com/goccy/go-yaml"
neo4jsc "github.com/googleapis/genai-toolbox/internal/sources/neo4j"
"github.com/googleapis/genai-toolbox/internal/tools/neo4j/neo4jschema/helpers"
"github.com/neo4j/neo4j-go-driver/v5/neo4j"
"github.com/googleapis/genai-toolbox/internal/sources"
@@ -135,7 +136,7 @@ func (t Tool) Invoke(ctx context.Context, params tools.ParamValues, accessToken
for _, record := range records {
vMap := make(map[string]any)
for col, value := range record.Values {
vMap[keys[col]] = value
vMap[keys[col]] = helpers.ConvertValue(value)
}
out = append(out, vMap)
}


@@ -23,6 +23,7 @@ import (
neo4jsc "github.com/googleapis/genai-toolbox/internal/sources/neo4j"
"github.com/googleapis/genai-toolbox/internal/tools"
"github.com/googleapis/genai-toolbox/internal/tools/neo4j/neo4jexecutecypher/classifier"
"github.com/googleapis/genai-toolbox/internal/tools/neo4j/neo4jschema/helpers"
"github.com/neo4j/neo4j-go-driver/v5/neo4j"
)
@@ -157,7 +158,7 @@ func (t Tool) Invoke(ctx context.Context, params tools.ParamValues, accessToken
for _, record := range records {
vMap := make(map[string]any)
for col, value := range record.Values {
vMap[keys[col]] = value
vMap[keys[col]] = helpers.ConvertValue(value)
}
out = append(out, vMap)
}


@@ -23,6 +23,7 @@ import (
"github.com/goccy/go-yaml"
"github.com/googleapis/genai-toolbox/internal/tools/neo4j/neo4jschema/types"
"github.com/neo4j/neo4j-go-driver/v5/neo4j"
)
// ConvertToStringSlice converts a slice of any type to a slice of strings.
@@ -289,3 +290,73 @@ func sortAndClean(nodeLabels []types.NodeLabel, relationships []types.Relationsh
stats.PropertiesByRelType = nil
}
}
// ConvertValue converts Neo4j value to JSON-compatible value.
func ConvertValue(value any) any {
switch v := value.(type) {
case nil, neo4j.InvalidValue:
return nil
case bool, string, int, int8, int16, int32, int64, float32, float64:
return v
case neo4j.Date, neo4j.LocalTime, neo4j.Time,
neo4j.LocalDateTime, neo4j.Duration:
if iv, ok := v.(types.ValueType); ok {
return iv.String()
}
case neo4j.Node:
return map[string]any{
"elementId": v.GetElementId(),
"labels": v.Labels,
"properties": ConvertValue(v.GetProperties()),
}
case neo4j.Relationship:
return map[string]any{
"elementId": v.GetElementId(),
"type": v.Type,
"startElementId": v.StartElementId,
"endElementId": v.EndElementId,
"properties": ConvertValue(v.GetProperties()),
}
case neo4j.Entity:
return map[string]any{
"elementId": v.GetElementId(),
"properties": ConvertValue(v.GetProperties()),
}
case neo4j.Path:
var nodes []any
var relationships []any
for _, r := range v.Relationships {
relationships = append(relationships, ConvertValue(r))
}
for _, n := range v.Nodes {
nodes = append(nodes, ConvertValue(n))
}
return map[string]any{
"nodes": nodes,
"relationships": relationships,
}
case neo4j.Record:
m := make(map[string]any)
for i, key := range v.Keys {
m[key] = ConvertValue(v.Values[i])
}
return m
case neo4j.Point2D:
return map[string]any{"x": v.X, "y": v.Y, "srid": v.SpatialRefId}
case neo4j.Point3D:
return map[string]any{"x": v.X, "y": v.Y, "z": v.Z, "srid": v.SpatialRefId}
case []any:
arr := make([]any, len(v))
for i, elem := range v {
arr[i] = ConvertValue(elem)
}
return arr
case map[string]any:
m := make(map[string]any)
for key, val := range v {
m[key] = ConvertValue(val)
}
return m
}
return fmt.Sprintf("%v", value)
}
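
A short sketch of why this normalization is needed (it must live inside this repository, since the helpers package is internal): encoding/json renders neo4j.Date, a defined type over time.Time with no exported fields and no MarshalJSON of its own, as an empty object, while ConvertValue yields the date string the tests below expect.

```go
package main

import (
	"encoding/json"
	"fmt"
	"time"

	"github.com/googleapis/genai-toolbox/internal/tools/neo4j/neo4jschema/helpers"
	"github.com/neo4j/neo4j-go-driver/v5/neo4j"
)

func main() {
	record := map[string]any{
		"name":  "Alice",
		"since": neo4j.Date(time.Date(2024, 6, 1, 0, 0, 0, 0, time.UTC)),
	}
	raw, _ := json.Marshal(record)
	fmt.Println(string(raw)) // {"name":"Alice","since":{}} (the date is lost)
	clean, _ := json.Marshal(helpers.ConvertValue(record))
	fmt.Println(string(clean)) // {"name":"Alice","since":"2024-06-01"}
}
```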


@@ -16,9 +16,11 @@ package helpers
import (
"testing"
"time"
"github.com/google/go-cmp/cmp"
"github.com/googleapis/genai-toolbox/internal/tools/neo4j/neo4jschema/types"
"github.com/neo4j/neo4j-go-driver/v5/neo4j"
)
func TestHelperFunctions(t *testing.T) {
@@ -382,3 +384,176 @@ func TestProcessNonAPOCSchema(t *testing.T) {
}
})
}
func TestConvertValue(t *testing.T) {
tests := []struct {
name string
input any
want any
}{
{
name: "nil value",
input: nil,
want: nil,
},
{
name: "neo4j.InvalidValue",
input: neo4j.InvalidValue{},
want: nil,
},
{
name: "primitive bool",
input: true,
want: true,
},
{
name: "primitive int",
input: int64(42),
want: int64(42),
},
{
name: "primitive float",
input: 3.14,
want: 3.14,
},
{
name: "primitive string",
input: "hello",
want: "hello",
},
{
name: "neo4j.Date",
input: neo4j.Date(time.Date(2024, 6, 1, 0, 0, 0, 0, time.UTC)),
want: "2024-06-01",
},
{
name: "neo4j.LocalTime",
input: neo4j.LocalTime(time.Date(0, 0, 0, 12, 34, 56, 0, time.Local)),
want: "12:34:56",
},
{
name: "neo4j.Time",
input: neo4j.Time(time.Date(0, 0, 0, 1, 2, 3, 0, time.UTC)),
want: "01:02:03Z",
},
{
name: "neo4j.LocalDateTime",
input: neo4j.LocalDateTime(time.Date(2024, 6, 1, 10, 20, 30, 0, time.Local)),
want: "2024-06-01T10:20:30",
},
{
name: "neo4j.Duration",
input: neo4j.Duration{Months: 1, Days: 2, Seconds: 3, Nanos: 4},
want: "P1M2DT3.000000004S",
},
{
name: "neo4j.Point2D",
input: neo4j.Point2D{X: 1.1, Y: 2.2, SpatialRefId: 1234},
want: map[string]any{"x": 1.1, "y": 2.2, "srid": uint32(1234)},
},
{
name: "neo4j.Point3D",
input: neo4j.Point3D{X: 1.1, Y: 2.2, Z: 3.3, SpatialRefId: 5467},
want: map[string]any{"x": 1.1, "y": 2.2, "z": 3.3, "srid": uint32(5467)},
},
{
name: "neo4j.Node (handled by Entity case, losing labels)",
input: neo4j.Node{
ElementId: "element-1",
Labels: []string{"Person"},
Props: map[string]any{"name": "Alice"},
},
want: map[string]any{
"elementId": "element-1",
"labels": []string{"Person"},
"properties": map[string]any{"name": "Alice"},
},
},
{
name: "neo4j.Relationship (handled by Entity case, losing type/endpoints)",
input: neo4j.Relationship{
ElementId: "element-2",
StartElementId: "start-1",
EndElementId: "end-1",
Type: "KNOWS",
Props: map[string]any{"since": 2024},
},
want: map[string]any{
"elementId": "element-2",
"properties": map[string]any{"since": 2024},
"startElementId": "start-1",
"endElementId": "end-1",
"type": "KNOWS",
},
},
{
name: "neo4j.Path (elements handled by Entity case)",
input: func() neo4j.Path {
node1 := neo4j.Node{ElementId: "n10", Labels: []string{"A"}, Props: map[string]any{"p1": "v1"}}
node2 := neo4j.Node{ElementId: "n11", Labels: []string{"B"}, Props: map[string]any{"p2": "v2"}}
rel1 := neo4j.Relationship{ElementId: "r12", StartElementId: "n10", EndElementId: "n11", Type: "REL", Props: map[string]any{"p3": "v3"}}
return neo4j.Path{
Nodes: []neo4j.Node{node1, node2},
Relationships: []neo4j.Relationship{rel1},
}
}(),
want: map[string]any{
"nodes": []any{
map[string]any{
"elementId": "n10",
"properties": map[string]any{"p1": "v1"},
"labels": []string{"A"},
},
map[string]any{
"elementId": "n11",
"properties": map[string]any{"p2": "v2"},
"labels": []string{"B"},
},
},
"relationships": []any{
map[string]any{
"elementId": "r12",
"properties": map[string]any{"p3": "v3"},
"startElementId": "n10",
"endElementId": "n11",
"type": "REL",
},
},
},
},
{
name: "slice of primitives",
input: []any{"a", 1, true},
want: []any{"a", 1, true},
},
{
name: "slice of mixed types",
input: []any{"a", neo4j.Date(time.Date(2024, 6, 1, 0, 0, 0, 0, time.UTC))},
want: []any{"a", "2024-06-01"},
},
{
name: "map of primitives",
input: map[string]any{"foo": 1, "bar": "baz"},
want: map[string]any{"foo": 1, "bar": "baz"},
},
{
name: "map with nested neo4j type",
input: map[string]any{"date": neo4j.Date(time.Date(2024, 6, 1, 0, 0, 0, 0, time.UTC))},
want: map[string]any{"date": "2024-06-01"},
},
{
name: "unhandled type",
input: struct{ X int }{X: 5},
want: "{5}",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got := ConvertValue(tt.input)
if !cmp.Equal(got, tt.want) {
t.Errorf("ConvertValue() mismatch (-want +got):\n%s", cmp.Diff(tt.want, got))
}
})
}
}


@@ -15,6 +15,11 @@
// Package types contains the shared data structures for Neo4j schema representation.
package types
// ValueType is an interface representing a Neo4j value that can render itself as a string.
type ValueType interface {
String() string
}
// SchemaInfo represents the complete database schema.
type SchemaInfo struct {
NodeLabels []NodeLabel `json:"nodeLabels"`


@@ -0,0 +1,606 @@
// Copyright 2025 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package spannerlisttables
import (
"context"
"fmt"
"strings"
"cloud.google.com/go/spanner"
yaml "github.com/goccy/go-yaml"
"github.com/googleapis/genai-toolbox/internal/sources"
spannerdb "github.com/googleapis/genai-toolbox/internal/sources/spanner"
"github.com/googleapis/genai-toolbox/internal/tools"
"google.golang.org/api/iterator"
)
const kind string = "spanner-list-tables"
func init() {
if !tools.Register(kind, newConfig) {
panic(fmt.Sprintf("tool kind %q already registered", kind))
}
}
func newConfig(ctx context.Context, name string, decoder *yaml.Decoder) (tools.ToolConfig, error) {
actual := Config{Name: name}
if err := decoder.DecodeContext(ctx, &actual); err != nil {
return nil, err
}
return actual, nil
}
type compatibleSource interface {
SpannerClient() *spanner.Client
DatabaseDialect() string
}
// validate compatible sources are still compatible
var _ compatibleSource = &spannerdb.Source{}
var compatibleSources = [...]string{spannerdb.SourceKind}
type Config struct {
Name string `yaml:"name" validate:"required"`
Kind string `yaml:"kind" validate:"required"`
Source string `yaml:"source" validate:"required"`
Description string `yaml:"description"`
AuthRequired []string `yaml:"authRequired"`
}
// validate interface
var _ tools.ToolConfig = Config{}
func (cfg Config) ToolConfigKind() string {
return kind
}
func (cfg Config) Initialize(srcs map[string]sources.Source) (tools.Tool, error) {
// verify source exists
rawS, ok := srcs[cfg.Source]
if !ok {
return nil, fmt.Errorf("no source named %q configured", cfg.Source)
}
// verify the source is compatible
s, ok := rawS.(compatibleSource)
if !ok {
return nil, fmt.Errorf("invalid source for %q tool: source kind must be one of %q", kind, compatibleSources)
}
// Define parameters for the tool
allParameters := tools.Parameters{
tools.NewStringParameterWithDefault(
"table_names",
"",
"Optional: A comma-separated list of table names. If empty, details for all tables in user-accessible schemas will be listed.",
),
tools.NewStringParameterWithDefault(
"output_format",
"detailed",
"Optional: Use 'simple' to return table names only or use 'detailed' to return the full information schema.",
),
}
description := cfg.Description
if description == "" {
description = "Lists detailed schema information (object type, columns, constraints, indexes) as JSON for user-created tables. Filters by a comma-separated list of names. If names are omitted, lists all tables in user schemas."
}
mcpManifest := tools.McpManifest{
Name: cfg.Name,
Description: description,
InputSchema: allParameters.McpManifest(),
}
// finish tool setup
t := Tool{
Name: cfg.Name,
Kind: kind,
AllParams: allParameters,
AuthRequired: cfg.AuthRequired,
Client: s.SpannerClient(),
dialect: s.DatabaseDialect(),
manifest: tools.Manifest{Description: description, Parameters: allParameters.Manifest(), AuthRequired: cfg.AuthRequired},
mcpManifest: mcpManifest,
}
return t, nil
}
// validate interface
var _ tools.Tool = Tool{}
type Tool struct {
Name string `yaml:"name"`
Kind string `yaml:"kind"`
AuthRequired []string `yaml:"authRequired"`
AllParams tools.Parameters `yaml:"allParams"`
Client *spanner.Client
dialect string
manifest tools.Manifest
mcpManifest tools.McpManifest
}
// processRows iterates over the spanner.RowIterator and converts each row to a map[string]any.
func processRows(iter *spanner.RowIterator) ([]any, error) {
var out []any
defer iter.Stop()
for {
row, err := iter.Next()
if err == iterator.Done {
break
}
if err != nil {
return nil, fmt.Errorf("unable to parse row: %w", err)
}
vMap := make(map[string]any)
cols := row.ColumnNames()
for i, c := range cols {
vMap[c] = row.ColumnValue(i)
}
out = append(out, vMap)
}
return out, nil
}
func (t Tool) getStatement() string {
switch strings.ToLower(t.dialect) {
case "postgresql":
return postgresqlStatement
case "googlesql":
return googleSQLStatement
default:
// Default to GoogleSQL
return googleSQLStatement
}
}
func (t Tool) Invoke(ctx context.Context, params tools.ParamValues, accessToken tools.AccessToken) (any, error) {
paramsMap := params.AsMap()
// Get the appropriate SQL statement based on dialect
statement := t.getStatement()
// Prepare parameters based on dialect
var stmtParams map[string]interface{}
tableNames, _ := paramsMap["table_names"].(string)
outputFormat, _ := paramsMap["output_format"].(string)
if outputFormat == "" {
outputFormat = "detailed"
}
switch strings.ToLower(t.dialect) {
case "postgresql":
// PostgreSQL uses positional parameters ($1, $2)
stmtParams = map[string]interface{}{
"p1": tableNames,
"p2": outputFormat,
}
case "googlesql":
// GoogleSQL uses named parameters (@table_names, @output_format)
stmtParams = map[string]interface{}{
"table_names": tableNames,
"output_format": outputFormat,
}
default:
return nil, fmt.Errorf("unsupported dialect: %s", t.dialect)
}
stmt := spanner.Statement{
SQL: statement,
Params: stmtParams,
}
// Execute the query (read-only)
iter := t.Client.Single().Query(ctx, stmt)
results, err := processRows(iter)
if err != nil {
return nil, fmt.Errorf("unable to execute query: %w", err)
}
return results, nil
}
func (t Tool) ParseParams(data map[string]any, claims map[string]map[string]any) (tools.ParamValues, error) {
return tools.ParseParams(t.AllParams, data, claims)
}
func (t Tool) Manifest() tools.Manifest {
return t.manifest
}
func (t Tool) McpManifest() tools.McpManifest {
return t.mcpManifest
}
func (t Tool) Authorized(verifiedAuthServices []string) bool {
return tools.IsAuthorized(t.AuthRequired, verifiedAuthServices)
}
func (t Tool) RequiresClientAuthorization() bool {
return false
}
// PostgreSQL statement for listing tables
const postgresqlStatement = `
WITH table_info_cte AS (
SELECT
T.TABLE_SCHEMA,
T.TABLE_NAME,
T.TABLE_TYPE,
T.PARENT_TABLE_NAME,
T.ON_DELETE_ACTION
FROM INFORMATION_SCHEMA.TABLES AS T
WHERE
T.TABLE_SCHEMA = 'public'
AND T.TABLE_TYPE = 'BASE TABLE'
AND (
NULLIF(TRIM($1), '') IS NULL OR
T.TABLE_NAME IN (
SELECT table_name
FROM UNNEST(regexp_split_to_array($1, '\s*,\s*')) AS table_name)
)
),
columns_info_cte AS (
SELECT
C.TABLE_SCHEMA,
C.TABLE_NAME,
ARRAY_AGG(
CONCAT(
'{',
'"column_name":"', COALESCE(REPLACE(C.COLUMN_NAME, '"', '\"'), ''), '",',
'"data_type":"', COALESCE(REPLACE(C.SPANNER_TYPE, '"', '\"'), ''), '",',
'"ordinal_position":', C.ORDINAL_POSITION::TEXT, ',',
'"is_not_nullable":', CASE WHEN C.IS_NULLABLE = 'NO' THEN 'true' ELSE 'false' END, ',',
'"column_default":', CASE WHEN C.COLUMN_DEFAULT IS NULL THEN 'null' ELSE CONCAT('"', REPLACE(C.COLUMN_DEFAULT::text, '"', '\"'), '"') END,
'}'
) ORDER BY C.ORDINAL_POSITION
) AS columns_json_array_elements
FROM INFORMATION_SCHEMA.COLUMNS AS C
WHERE C.TABLE_SCHEMA = 'public'
AND EXISTS (SELECT 1 FROM table_info_cte TI WHERE C.TABLE_SCHEMA = TI.TABLE_SCHEMA AND C.TABLE_NAME = TI.TABLE_NAME)
GROUP BY C.TABLE_SCHEMA, C.TABLE_NAME
),
constraint_columns_agg_cte AS (
SELECT
CONSTRAINT_CATALOG,
CONSTRAINT_SCHEMA,
CONSTRAINT_NAME,
ARRAY_AGG(REPLACE(COLUMN_NAME, '"', '\"') ORDER BY ORDINAL_POSITION) AS column_names_json_list
FROM INFORMATION_SCHEMA.KEY_COLUMN_USAGE
WHERE CONSTRAINT_SCHEMA = 'public'
GROUP BY CONSTRAINT_CATALOG, CONSTRAINT_SCHEMA, CONSTRAINT_NAME
),
constraints_info_cte AS (
SELECT
TC.TABLE_SCHEMA,
TC.TABLE_NAME,
ARRAY_AGG(
CONCAT(
'{',
'"constraint_name":"', COALESCE(REPLACE(TC.CONSTRAINT_NAME, '"', '\"'), ''), '",',
'"constraint_type":"', COALESCE(REPLACE(TC.CONSTRAINT_TYPE, '"', '\"'), ''), '",',
'"constraint_definition":',
CASE TC.CONSTRAINT_TYPE
WHEN 'CHECK' THEN CASE WHEN CC.CHECK_CLAUSE IS NULL THEN 'null' ELSE CONCAT('"', REPLACE(CC.CHECK_CLAUSE, '"', '\"'), '"') END
WHEN 'PRIMARY KEY' THEN CONCAT('"', 'PRIMARY KEY (', array_to_string(COALESCE(KeyCols.column_names_json_list, ARRAY[]::text[]), ', '), ')', '"')
WHEN 'UNIQUE' THEN CONCAT('"', 'UNIQUE (', array_to_string(COALESCE(KeyCols.column_names_json_list, ARRAY[]::text[]), ', '), ')', '"')
WHEN 'FOREIGN KEY' THEN CONCAT('"', 'FOREIGN KEY (', array_to_string(COALESCE(KeyCols.column_names_json_list, ARRAY[]::text[]), ', '), ') REFERENCES ',
COALESCE(REPLACE(RefKeyTable.TABLE_NAME, '"', '\"'), ''),
' (', array_to_string(COALESCE(RefKeyCols.column_names_json_list, ARRAY[]::text[]), ', '), ')', '"')
ELSE 'null'
END, ',',
'"constraint_columns":["', array_to_string(COALESCE(KeyCols.column_names_json_list, ARRAY[]::text[]), ','), '"],',
'"foreign_key_referenced_table":', CASE WHEN RefKeyTable.TABLE_NAME IS NULL THEN 'null' ELSE CONCAT('"', REPLACE(RefKeyTable.TABLE_NAME, '"', '\"'), '"') END, ',',
'"foreign_key_referenced_columns":["', array_to_string(COALESCE(RefKeyCols.column_names_json_list, ARRAY[]::text[]), ','), '"]',
'}'
) ORDER BY TC.CONSTRAINT_NAME
) AS constraints_json_array_elements
FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS AS TC
LEFT JOIN INFORMATION_SCHEMA.CHECK_CONSTRAINTS AS CC
ON TC.CONSTRAINT_CATALOG = CC.CONSTRAINT_CATALOG AND TC.CONSTRAINT_SCHEMA = CC.CONSTRAINT_SCHEMA AND TC.CONSTRAINT_NAME = CC.CONSTRAINT_NAME
LEFT JOIN INFORMATION_SCHEMA.REFERENTIAL_CONSTRAINTS AS RC
ON TC.CONSTRAINT_CATALOG = RC.CONSTRAINT_CATALOG AND TC.CONSTRAINT_SCHEMA = RC.CONSTRAINT_SCHEMA AND TC.CONSTRAINT_NAME = RC.CONSTRAINT_NAME
LEFT JOIN INFORMATION_SCHEMA.TABLE_CONSTRAINTS AS RefConstraint
ON RC.UNIQUE_CONSTRAINT_CATALOG = RefConstraint.CONSTRAINT_CATALOG AND RC.UNIQUE_CONSTRAINT_SCHEMA = RefConstraint.CONSTRAINT_SCHEMA AND RC.UNIQUE_CONSTRAINT_NAME = RefConstraint.CONSTRAINT_NAME
LEFT JOIN INFORMATION_SCHEMA.TABLES AS RefKeyTable
ON RefConstraint.TABLE_CATALOG = RefKeyTable.TABLE_CATALOG AND RefConstraint.TABLE_SCHEMA = RefKeyTable.TABLE_SCHEMA AND RefConstraint.TABLE_NAME = RefKeyTable.TABLE_NAME
LEFT JOIN constraint_columns_agg_cte AS KeyCols
ON TC.CONSTRAINT_CATALOG = KeyCols.CONSTRAINT_CATALOG AND TC.CONSTRAINT_SCHEMA = KeyCols.CONSTRAINT_SCHEMA AND TC.CONSTRAINT_NAME = KeyCols.CONSTRAINT_NAME
LEFT JOIN constraint_columns_agg_cte AS RefKeyCols
ON RC.UNIQUE_CONSTRAINT_CATALOG = RefKeyCols.CONSTRAINT_CATALOG AND RC.UNIQUE_CONSTRAINT_SCHEMA = RefKeyCols.CONSTRAINT_SCHEMA AND RC.UNIQUE_CONSTRAINT_NAME = RefKeyCols.CONSTRAINT_NAME AND TC.CONSTRAINT_TYPE = 'FOREIGN KEY'
WHERE TC.TABLE_SCHEMA = 'public'
AND EXISTS (SELECT 1 FROM table_info_cte TI WHERE TC.TABLE_SCHEMA = TI.TABLE_SCHEMA AND TC.TABLE_NAME = TI.TABLE_NAME)
GROUP BY TC.TABLE_SCHEMA, TC.TABLE_NAME
),
index_key_columns_agg_cte AS (
SELECT
TABLE_CATALOG,
TABLE_SCHEMA,
TABLE_NAME,
INDEX_NAME,
ARRAY_AGG(
CONCAT(
'{"column_name":"', COALESCE(REPLACE(COLUMN_NAME, '"', '\"'), ''), '",',
'"ordering":"', COALESCE(REPLACE(COLUMN_ORDERING, '"', '\"'), ''), '"}'
) ORDER BY ORDINAL_POSITION
) AS key_column_json_details
FROM INFORMATION_SCHEMA.INDEX_COLUMNS
WHERE ORDINAL_POSITION IS NOT NULL
AND TABLE_SCHEMA = 'public'
GROUP BY TABLE_CATALOG, TABLE_SCHEMA, TABLE_NAME, INDEX_NAME
),
index_storing_columns_agg_cte AS (
SELECT
TABLE_CATALOG,
TABLE_SCHEMA,
TABLE_NAME,
INDEX_NAME,
ARRAY_AGG(CONCAT('"', REPLACE(COLUMN_NAME, '"', '\"'), '"') ORDER BY COLUMN_NAME) AS storing_column_json_names
FROM INFORMATION_SCHEMA.INDEX_COLUMNS
WHERE ORDINAL_POSITION IS NULL
AND TABLE_SCHEMA = 'public'
GROUP BY TABLE_CATALOG, TABLE_SCHEMA, TABLE_NAME, INDEX_NAME
),
indexes_info_cte AS (
SELECT
I.TABLE_SCHEMA,
I.TABLE_NAME,
ARRAY_AGG(
CONCAT(
'{',
'"index_name":"', COALESCE(REPLACE(I.INDEX_NAME, '"', '\"'), ''), '",',
'"index_type":"', COALESCE(REPLACE(I.INDEX_TYPE, '"', '\"'), ''), '",',
'"is_unique":', CASE WHEN I.IS_UNIQUE = 'YES' THEN 'true' ELSE 'false' END, ',',
'"is_null_filtered":', CASE WHEN I.IS_NULL_FILTERED = 'YES' THEN 'true' ELSE 'false' END, ',',
'"interleaved_in_table":', CASE WHEN I.PARENT_TABLE_NAME IS NULL OR I.PARENT_TABLE_NAME = '' THEN 'null' ELSE CONCAT('"', REPLACE(I.PARENT_TABLE_NAME, '"', '\"'), '"') END, ',',
'"index_key_columns":[', COALESCE(array_to_string(KeyIndexCols.key_column_json_details, ','), ''), '],',
'"storing_columns":[', COALESCE(array_to_string(StoringIndexCols.storing_column_json_names, ','), ''), ']',
'}'
) ORDER BY I.INDEX_NAME
) AS indexes_json_array_elements
FROM INFORMATION_SCHEMA.INDEXES AS I
LEFT JOIN index_key_columns_agg_cte AS KeyIndexCols
ON I.TABLE_CATALOG = KeyIndexCols.TABLE_CATALOG AND I.TABLE_SCHEMA = KeyIndexCols.TABLE_SCHEMA AND I.TABLE_NAME = KeyIndexCols.TABLE_NAME AND I.INDEX_NAME = KeyIndexCols.INDEX_NAME
LEFT JOIN index_storing_columns_agg_cte AS StoringIndexCols
ON I.TABLE_CATALOG = StoringIndexCols.TABLE_CATALOG AND I.TABLE_SCHEMA = StoringIndexCols.TABLE_SCHEMA AND I.TABLE_NAME = StoringIndexCols.TABLE_NAME AND I.INDEX_NAME = StoringIndexCols.INDEX_NAME
AND I.INDEX_TYPE IN ('LOCAL', 'GLOBAL')
WHERE I.TABLE_SCHEMA = 'public'
AND EXISTS (SELECT 1 FROM table_info_cte TI WHERE I.TABLE_SCHEMA = TI.TABLE_SCHEMA AND I.TABLE_NAME = TI.TABLE_NAME)
GROUP BY I.TABLE_SCHEMA, I.TABLE_NAME
)
SELECT
TI.TABLE_SCHEMA AS schema_name,
TI.TABLE_NAME AS object_name,
CASE
WHEN $2 = 'simple' THEN
-- If the format is 'simple', return basic JSON
CONCAT('{"name":"', COALESCE(REPLACE(TI.TABLE_NAME, '"', '\"'), ''), '"}')
ELSE
CONCAT(
'{',
'"schema_name":"', COALESCE(REPLACE(TI.TABLE_SCHEMA, '"', '\"'), ''), '",',
'"object_name":"', COALESCE(REPLACE(TI.TABLE_NAME, '"', '\"'), ''), '",',
'"object_type":"', COALESCE(REPLACE(TI.TABLE_TYPE, '"', '\"'), ''), '",',
'"columns":[', COALESCE(array_to_string(CI.columns_json_array_elements, ','), ''), '],',
'"constraints":[', COALESCE(array_to_string(CONSI.constraints_json_array_elements, ','), ''), '],',
'"indexes":[', COALESCE(array_to_string(II.indexes_json_array_elements, ','), ''), ']',
'}'
)
END AS object_details
FROM table_info_cte AS TI
LEFT JOIN columns_info_cte AS CI
ON TI.TABLE_SCHEMA = CI.TABLE_SCHEMA AND TI.TABLE_NAME = CI.TABLE_NAME
LEFT JOIN constraints_info_cte AS CONSI
ON TI.TABLE_SCHEMA = CONSI.TABLE_SCHEMA AND TI.TABLE_NAME = CONSI.TABLE_NAME
LEFT JOIN indexes_info_cte AS II
ON TI.TABLE_SCHEMA = II.TABLE_SCHEMA AND TI.TABLE_NAME = II.TABLE_NAME
ORDER BY TI.TABLE_SCHEMA, TI.TABLE_NAME`
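// Illustrative sketch (not part of the original file): with the Spanner Go
// client, the positional parameters above ($1 = optional comma-separated
// table filter, $2 = output format) would typically be bound as "p1"/"p2"
// against a PostgreSQL-dialect database; `client` is an assumed *spanner.Client.
//
//	stmt := spanner.Statement{
//		SQL:    postgresqlStatement,
//		Params: map[string]interface{}{"p1": "albums,singers", "p2": "detailed"},
//	}
//	iter := client.Single().Query(ctx, stmt)
//	defer iter.Stop()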
// GoogleSQL statement for listing tables in a Spanner database
const googleSQLStatement = `
WITH FilterTableNames AS (
SELECT DISTINCT TRIM(name) AS TABLE_NAME
FROM UNNEST(IF(@table_names = '' OR @table_names IS NULL, ['%'], SPLIT(@table_names, ','))) AS name
),
-- 1. Table Information
table_info_cte AS (
SELECT
T.TABLE_SCHEMA,
T.TABLE_NAME,
T.TABLE_TYPE,
T.PARENT_TABLE_NAME, -- For interleaved tables
T.ON_DELETE_ACTION -- For interleaved tables
FROM INFORMATION_SCHEMA.TABLES AS T
WHERE
T.TABLE_SCHEMA = ''
AND T.TABLE_TYPE = 'BASE TABLE'
AND (EXISTS (SELECT 1 FROM FilterTableNames WHERE FilterTableNames.TABLE_NAME = '%') OR T.TABLE_NAME IN (SELECT TABLE_NAME FROM FilterTableNames))
),
-- 2. Column Information (with JSON string for each column)
columns_info_cte AS (
SELECT
C.TABLE_SCHEMA,
C.TABLE_NAME,
ARRAY_AGG(
CONCAT(
'{',
'"column_name":"', IFNULL(C.COLUMN_NAME, ''), '",',
'"data_type":"', IFNULL(C.SPANNER_TYPE, ''), '",',
'"ordinal_position":', CAST(C.ORDINAL_POSITION AS STRING), ',',
'"is_not_nullable":', IF(C.IS_NULLABLE = 'NO', 'true', 'false'), ',',
'"column_default":', IF(C.COLUMN_DEFAULT IS NULL, 'null', CONCAT('"', C.COLUMN_DEFAULT, '"')),
'}'
) ORDER BY C.ORDINAL_POSITION
) AS columns_json_array_elements
FROM INFORMATION_SCHEMA.COLUMNS AS C
WHERE EXISTS (SELECT 1 FROM table_info_cte TI WHERE C.TABLE_SCHEMA = TI.TABLE_SCHEMA AND C.TABLE_NAME = TI.TABLE_NAME)
GROUP BY C.TABLE_SCHEMA, C.TABLE_NAME
),
-- Helper CTE for aggregating constraint columns
constraint_columns_agg_cte AS (
SELECT
CONSTRAINT_CATALOG,
CONSTRAINT_SCHEMA,
CONSTRAINT_NAME,
ARRAY_AGG(REPLACE(COLUMN_NAME, '"', '\"') ORDER BY ORDINAL_POSITION) AS column_names_json_list
FROM INFORMATION_SCHEMA.KEY_COLUMN_USAGE
GROUP BY CONSTRAINT_CATALOG, CONSTRAINT_SCHEMA, CONSTRAINT_NAME
),
-- 3. Constraint Information (with JSON string for each constraint)
constraints_info_cte AS (
SELECT
TC.TABLE_SCHEMA,
TC.TABLE_NAME,
ARRAY_AGG(
CONCAT(
'{',
'"constraint_name":"', IFNULL(TC.CONSTRAINT_NAME, ''), '",',
'"constraint_type":"', IFNULL(TC.CONSTRAINT_TYPE, ''), '",',
'"constraint_definition":',
CASE TC.CONSTRAINT_TYPE
WHEN 'CHECK' THEN IF(CC.CHECK_CLAUSE IS NULL, 'null', CONCAT('"', CC.CHECK_CLAUSE, '"'))
WHEN 'PRIMARY KEY' THEN CONCAT('"', 'PRIMARY KEY (', ARRAY_TO_STRING(COALESCE(KeyCols.column_names_json_list, []), ', '), ')', '"')
WHEN 'UNIQUE' THEN CONCAT('"', 'UNIQUE (', ARRAY_TO_STRING(COALESCE(KeyCols.column_names_json_list, []), ', '), ')', '"')
WHEN 'FOREIGN KEY' THEN CONCAT('"', 'FOREIGN KEY (', ARRAY_TO_STRING(COALESCE(KeyCols.column_names_json_list, []), ', '), ') REFERENCES ',
IFNULL(RefKeyTable.TABLE_NAME, ''),
' (', ARRAY_TO_STRING(COALESCE(RefKeyCols.column_names_json_list, []), ', '), ')', '"')
ELSE 'null'
END, ',',
'"constraint_columns":["', ARRAY_TO_STRING(COALESCE(KeyCols.column_names_json_list, []), ','), '"],',
'"foreign_key_referenced_table":', IF(RefKeyTable.TABLE_NAME IS NULL, 'null', CONCAT('"', RefKeyTable.TABLE_NAME, '"')), ',',
'"foreign_key_referenced_columns":["', ARRAY_TO_STRING(COALESCE(RefKeyCols.column_names_json_list, []), ','), '"]',
'}'
) ORDER BY TC.CONSTRAINT_NAME
) AS constraints_json_array_elements
FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS AS TC
LEFT JOIN INFORMATION_SCHEMA.CHECK_CONSTRAINTS AS CC
ON TC.CONSTRAINT_CATALOG = CC.CONSTRAINT_CATALOG AND TC.CONSTRAINT_SCHEMA = CC.CONSTRAINT_SCHEMA AND TC.CONSTRAINT_NAME = CC.CONSTRAINT_NAME
LEFT JOIN INFORMATION_SCHEMA.REFERENTIAL_CONSTRAINTS AS RC
ON TC.CONSTRAINT_CATALOG = RC.CONSTRAINT_CATALOG AND TC.CONSTRAINT_SCHEMA = RC.CONSTRAINT_SCHEMA AND TC.CONSTRAINT_NAME = RC.CONSTRAINT_NAME
LEFT JOIN INFORMATION_SCHEMA.TABLE_CONSTRAINTS AS RefConstraint
ON RC.UNIQUE_CONSTRAINT_CATALOG = RefConstraint.CONSTRAINT_CATALOG AND RC.UNIQUE_CONSTRAINT_SCHEMA = RefConstraint.CONSTRAINT_SCHEMA AND RC.UNIQUE_CONSTRAINT_NAME = RefConstraint.CONSTRAINT_NAME
LEFT JOIN INFORMATION_SCHEMA.TABLES AS RefKeyTable
ON RefConstraint.TABLE_CATALOG = RefKeyTable.TABLE_CATALOG AND RefConstraint.TABLE_SCHEMA = RefKeyTable.TABLE_SCHEMA AND RefConstraint.TABLE_NAME = RefKeyTable.TABLE_NAME
LEFT JOIN constraint_columns_agg_cte AS KeyCols
ON TC.CONSTRAINT_CATALOG = KeyCols.CONSTRAINT_CATALOG AND TC.CONSTRAINT_SCHEMA = KeyCols.CONSTRAINT_SCHEMA AND TC.CONSTRAINT_NAME = KeyCols.CONSTRAINT_NAME
LEFT JOIN constraint_columns_agg_cte AS RefKeyCols
ON RC.UNIQUE_CONSTRAINT_CATALOG = RefKeyCols.CONSTRAINT_CATALOG AND RC.UNIQUE_CONSTRAINT_SCHEMA = RefKeyCols.CONSTRAINT_SCHEMA AND RC.UNIQUE_CONSTRAINT_NAME = RefKeyCols.CONSTRAINT_NAME AND TC.CONSTRAINT_TYPE = 'FOREIGN KEY'
WHERE EXISTS (SELECT 1 FROM table_info_cte TI WHERE TC.TABLE_SCHEMA = TI.TABLE_SCHEMA AND TC.TABLE_NAME = TI.TABLE_NAME)
GROUP BY TC.TABLE_SCHEMA, TC.TABLE_NAME
),
-- Helper CTE for aggregating index key columns (as JSON strings)
index_key_columns_agg_cte AS (
SELECT
TABLE_CATALOG,
TABLE_SCHEMA,
TABLE_NAME,
INDEX_NAME,
ARRAY_AGG(
CONCAT(
'{"column_name":"', IFNULL(COLUMN_NAME, ''), '",',
'"ordering":"', IFNULL(COLUMN_ORDERING, ''), '"}'
) ORDER BY ORDINAL_POSITION
) AS key_column_json_details
FROM INFORMATION_SCHEMA.INDEX_COLUMNS
WHERE ORDINAL_POSITION IS NOT NULL -- Key columns
GROUP BY TABLE_CATALOG, TABLE_SCHEMA, TABLE_NAME, INDEX_NAME
),
-- Helper CTE for aggregating index storing columns (as JSON strings)
index_storing_columns_agg_cte AS (
SELECT
TABLE_CATALOG,
TABLE_SCHEMA,
TABLE_NAME,
INDEX_NAME,
ARRAY_AGG(CONCAT('"', COLUMN_NAME, '"') ORDER BY COLUMN_NAME) AS storing_column_json_names
FROM INFORMATION_SCHEMA.INDEX_COLUMNS
WHERE ORDINAL_POSITION IS NULL -- Storing columns
GROUP BY TABLE_CATALOG, TABLE_SCHEMA, TABLE_NAME, INDEX_NAME
),
-- 4. Index Information (with JSON string for each index)
indexes_info_cte AS (
SELECT
I.TABLE_SCHEMA,
I.TABLE_NAME,
ARRAY_AGG(
CONCAT(
'{',
'"index_name":"', IFNULL(I.INDEX_NAME, ''), '",',
'"index_type":"', IFNULL(I.INDEX_TYPE, ''), '",',
'"is_unique":', IF(I.IS_UNIQUE, 'true', 'false'), ',',
'"is_null_filtered":', IF(I.IS_NULL_FILTERED, 'true', 'false'), ',',
'"interleaved_in_table":', IF(I.PARENT_TABLE_NAME IS NULL, 'null', CONCAT('"', I.PARENT_TABLE_NAME, '"')), ',',
'"index_key_columns":[', ARRAY_TO_STRING(COALESCE(KeyIndexCols.key_column_json_details, []), ','), '],',
'"storing_columns":[', ARRAY_TO_STRING(COALESCE(StoringIndexCols.storing_column_json_names, []), ','), ']',
'}'
) ORDER BY I.INDEX_NAME
) AS indexes_json_array_elements
FROM INFORMATION_SCHEMA.INDEXES AS I
LEFT JOIN index_key_columns_agg_cte AS KeyIndexCols
ON I.TABLE_CATALOG = KeyIndexCols.TABLE_CATALOG AND I.TABLE_SCHEMA = KeyIndexCols.TABLE_SCHEMA AND I.TABLE_NAME = KeyIndexCols.TABLE_NAME AND I.INDEX_NAME = KeyIndexCols.INDEX_NAME
LEFT JOIN index_storing_columns_agg_cte AS StoringIndexCols
ON I.TABLE_CATALOG = StoringIndexCols.TABLE_CATALOG AND I.TABLE_SCHEMA = StoringIndexCols.TABLE_SCHEMA AND I.TABLE_NAME = StoringIndexCols.TABLE_NAME AND I.INDEX_NAME = StoringIndexCols.INDEX_NAME AND I.INDEX_TYPE = 'INDEX'
WHERE EXISTS (SELECT 1 FROM table_info_cte TI WHERE I.TABLE_SCHEMA = TI.TABLE_SCHEMA AND I.TABLE_NAME = TI.TABLE_NAME)
GROUP BY I.TABLE_SCHEMA, I.TABLE_NAME
)
-- Final SELECT to build the JSON output
SELECT
TI.TABLE_SCHEMA AS schema_name,
TI.TABLE_NAME AS object_name,
CASE
WHEN @output_format = 'simple' THEN
-- If the format is 'simple', return basic JSON
CONCAT('{"name":"', IFNULL(REPLACE(TI.TABLE_NAME, '"', '\"'), ''), '"}')
ELSE
CONCAT(
'{',
'"schema_name":"', IFNULL(TI.TABLE_SCHEMA, ''), '",',
'"object_name":"', IFNULL(TI.TABLE_NAME, ''), '",',
'"object_type":"', IFNULL(TI.TABLE_TYPE, ''), '",',
'"columns":[', ARRAY_TO_STRING(COALESCE(CI.columns_json_array_elements, []), ','), '],',
'"constraints":[', ARRAY_TO_STRING(COALESCE(CONSI.constraints_json_array_elements, []), ','), '],',
'"indexes":[', ARRAY_TO_STRING(COALESCE(II.indexes_json_array_elements, []), ','), ']',
'}'
)
END AS object_details
FROM table_info_cte AS TI
LEFT JOIN columns_info_cte AS CI
ON TI.TABLE_SCHEMA = CI.TABLE_SCHEMA AND TI.TABLE_NAME = CI.TABLE_NAME
LEFT JOIN constraints_info_cte AS CONSI
ON TI.TABLE_SCHEMA = CONSI.TABLE_SCHEMA AND TI.TABLE_NAME = CONSI.TABLE_NAME
LEFT JOIN indexes_info_cte AS II
ON TI.TABLE_SCHEMA = II.TABLE_SCHEMA AND TI.TABLE_NAME = II.TABLE_NAME
ORDER BY TI.TABLE_SCHEMA, TI.TABLE_NAME`
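// Illustrative sketch (not part of the original file): the GoogleSQL variant
// uses named parameters, so the equivalent hedged invocation (again assuming
// a *spanner.Client named `client`) would be:
//
//	stmt := spanner.Statement{
//		SQL: googleSQLStatement,
//		Params: map[string]interface{}{
//			"table_names":   "Albums,Singers",
//			"output_format": "simple",
//		},
//	}
//	iter := client.Single().Query(ctx, stmt)
//	defer iter.Stop()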

View File

@@ -0,0 +1,112 @@
// Copyright 2025 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package spannerlisttables_test
import (
"testing"
yaml "github.com/goccy/go-yaml"
"github.com/google/go-cmp/cmp"
"github.com/googleapis/genai-toolbox/internal/server"
"github.com/googleapis/genai-toolbox/internal/testutils"
"github.com/googleapis/genai-toolbox/internal/tools/spanner/spannerlisttables"
)
func TestParseFromYamlListTables(t *testing.T) {
ctx, err := testutils.ContextWithNewLogger()
if err != nil {
t.Fatalf("unexpected error: %s", err)
}
tcs := []struct {
desc string
in string
want server.ToolConfigs
}{
{
desc: "basic example",
in: `
tools:
example_tool:
kind: spanner-list-tables
source: my-spanner-instance
description: Lists tables in the database
`,
want: server.ToolConfigs{
"example_tool": spannerlisttables.Config{
Name: "example_tool",
Kind: "spanner-list-tables",
Source: "my-spanner-instance",
Description: "Lists tables in the database",
AuthRequired: []string{},
},
},
},
{
desc: "with auth required",
in: `
tools:
example_tool:
kind: spanner-list-tables
source: my-spanner-instance
description: Lists tables in the database
authRequired:
- auth1
- auth2
`,
want: server.ToolConfigs{
"example_tool": spannerlisttables.Config{
Name: "example_tool",
Kind: "spanner-list-tables",
Source: "my-spanner-instance",
Description: "Lists tables in the database",
AuthRequired: []string{"auth1", "auth2"},
},
},
},
{
desc: "minimal config",
in: `
tools:
example_tool:
kind: spanner-list-tables
source: my-spanner-instance
`,
want: server.ToolConfigs{
"example_tool": spannerlisttables.Config{
Name: "example_tool",
Kind: "spanner-list-tables",
Source: "my-spanner-instance",
Description: "",
AuthRequired: []string{},
},
},
},
}
for _, tc := range tcs {
t.Run(tc.desc, func(t *testing.T) {
got := struct {
Tools server.ToolConfigs `yaml:"tools"`
}{}
// Parse contents
err := yaml.UnmarshalContext(ctx, testutils.FormatYaml(tc.in), &got)
if err != nil {
t.Fatalf("unable to unmarshal: %s", err)
}
if diff := cmp.Diff(tc.want, got.Tools); diff != "" {
t.Fatalf("incorrect parse: diff %v", diff)
}
})
}
}

View File

@@ -0,0 +1,213 @@
// Copyright 2025 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package sqliteexecutesql
import (
"context"
"database/sql"
"encoding/json"
"fmt"
yaml "github.com/goccy/go-yaml"
"github.com/googleapis/genai-toolbox/internal/sources"
"github.com/googleapis/genai-toolbox/internal/sources/sqlite"
"github.com/googleapis/genai-toolbox/internal/tools"
"github.com/googleapis/genai-toolbox/internal/util"
)
const kind string = "sqlite-execute-sql"
func init() {
if !tools.Register(kind, newConfig) {
panic(fmt.Sprintf("tool kind %q already registered", kind))
}
}
func newConfig(ctx context.Context, name string, decoder *yaml.Decoder) (tools.ToolConfig, error) {
actual := Config{Name: name}
if err := decoder.DecodeContext(ctx, &actual); err != nil {
return nil, err
}
return actual, nil
}
type compatibleSource interface {
SQLiteDB() *sql.DB
}
// validate compatible sources are still compatible
var _ compatibleSource = &sqlite.Source{}
var compatibleSources = [...]string{sqlite.SourceKind}
type Config struct {
Name string `yaml:"name" validate:"required"`
Kind string `yaml:"kind" validate:"required"`
Source string `yaml:"source" validate:"required"`
Description string `yaml:"description" validate:"required"`
AuthRequired []string `yaml:"authRequired"`
}
// validate interface
var _ tools.ToolConfig = Config{}
func (cfg Config) ToolConfigKind() string {
return kind
}
func (cfg Config) Initialize(srcs map[string]sources.Source) (tools.Tool, error) {
// verify source exists
rawS, ok := srcs[cfg.Source]
if !ok {
return nil, fmt.Errorf("no source named %q configured", cfg.Source)
}
// verify the source is compatible
s, ok := rawS.(compatibleSource)
if !ok {
return nil, fmt.Errorf("invalid source for %q tool: source kind must be one of %q", kind, compatibleSources)
}
sqlParameter := tools.NewStringParameter("sql", "The sql to execute.")
parameters := tools.Parameters{sqlParameter}
mcpManifest := tools.McpManifest{
Name: cfg.Name,
Description: cfg.Description,
InputSchema: parameters.McpManifest(),
}
// finish tool setup
t := Tool{
Name: cfg.Name,
Kind: kind,
Parameters: parameters,
AuthRequired: cfg.AuthRequired,
DB: s.SQLiteDB(),
manifest: tools.Manifest{Description: cfg.Description, Parameters: parameters.Manifest(), AuthRequired: cfg.AuthRequired},
mcpManifest: mcpManifest,
}
return t, nil
}
// validate interface
var _ tools.Tool = Tool{}
type Tool struct {
Name string `yaml:"name"`
Kind string `yaml:"kind"`
AuthRequired []string `yaml:"authRequired"`
Parameters tools.Parameters `yaml:"parameters"`
DB *sql.DB
manifest tools.Manifest
mcpManifest tools.McpManifest
}
func (t Tool) Invoke(ctx context.Context, params tools.ParamValues, accessToken tools.AccessToken) (any, error) {
sql, ok := params.AsMap()["sql"].(string)
if !ok {
return nil, fmt.Errorf("missing or invalid 'sql' parameter")
}
if sql == "" {
return nil, fmt.Errorf("sql parameter cannot be empty")
}
// Log the query executed for debugging.
logger, err := util.LoggerFromContext(ctx)
if err != nil {
return nil, fmt.Errorf("error getting logger: %s", err)
}
logger.DebugContext(ctx, "executing `%s` tool query: %s", kind, sql)
results, err := t.DB.QueryContext(ctx, sql)
if err != nil {
return nil, fmt.Errorf("unable to execute query: %w", err)
}
cols, err := results.Columns()
if err != nil {
return nil, fmt.Errorf("unable to retrieve rows column name: %w", err)
}
// The sqlite driver does not support ColumnTypes, so we can't get the
// underlying database type of the columns. We'll have to rely on the
// generic `any` type and then handle the JSON data separately.
// create an array of values for each column, which can be re-used to scan each row
rawValues := make([]any, len(cols))
values := make([]any, len(cols))
for i := range rawValues {
values[i] = &rawValues[i]
}
defer results.Close()
var out []any
for results.Next() {
err := results.Scan(values...)
if err != nil {
return nil, fmt.Errorf("unable to parse row: %w", err)
}
vMap := make(map[string]any)
for i, name := range cols {
val := rawValues[i]
if val == nil {
vMap[name] = nil
continue
}
// Handle JSON data
if jsonString, ok := val.(string); ok {
var unmarshaledData any
if json.Unmarshal([]byte(jsonString), &unmarshaledData) == nil {
vMap[name] = unmarshaledData
continue
}
}
vMap[name] = val
}
out = append(out, vMap)
}
if err := results.Err(); err != nil {
return nil, fmt.Errorf("errors encountered during row iteration: %w", err)
}
if len(out) == 0 {
return nil, nil
}
return out, nil
}
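// Worked example (illustrative, not part of the original file): because rows
// are scanned into generic `any` values, a TEXT cell holding valid JSON is
// decoded by the json.Unmarshal probe above. Assuming a hypothetical column
// named "doc", a row (1, '{"a": 1}') would be returned as:
//
//	[]any{map[string]any{"id": int64(1), "doc": map[string]any{"a": float64(1)}}}
//
// Non-JSON strings fail the probe and are returned verbatim.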
func (t Tool) ParseParams(data map[string]any, claims map[string]map[string]any) (tools.ParamValues, error) {
return tools.ParseParams(t.Parameters, data, claims)
}
func (t Tool) Manifest() tools.Manifest {
return t.manifest
}
func (t Tool) McpManifest() tools.McpManifest {
return t.mcpManifest
}
func (t Tool) Authorized(verifiedAuthServices []string) bool {
return tools.IsAuthorized(t.AuthRequired, verifiedAuthServices)
}
func (t Tool) RequiresClientAuthorization() bool {
return false
}
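// Example configuration (illustrative; names are placeholders, mirroring the
// YAML exercised in the tests for this package):
//
//	tools:
//	  execute_sql_tool:
//	    kind: sqlite-execute-sql
//	    source: my-instance
//	    description: Execute arbitrary SQL against the SQLite source.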

View File

@@ -0,0 +1,299 @@
// Copyright 2025 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package sqliteexecutesql_test
import (
"context"
"database/sql"
"reflect"
"testing"
_ "modernc.org/sqlite"
yaml "github.com/goccy/go-yaml"
"github.com/google/go-cmp/cmp"
"github.com/googleapis/genai-toolbox/internal/server"
"github.com/googleapis/genai-toolbox/internal/testutils"
"github.com/googleapis/genai-toolbox/internal/tools"
"github.com/googleapis/genai-toolbox/internal/tools/sqlite/sqliteexecutesql"
)
func TestParseFromYamlExecuteSql(t *testing.T) {
ctx, err := testutils.ContextWithNewLogger()
if err != nil {
t.Fatalf("unexpected error: %s", err)
}
tcs := []struct {
desc string
in string
want server.ToolConfigs
}{
{
desc: "basic example",
in: `
tools:
example_tool:
kind: sqlite-execute-sql
source: my-instance
description: some description
authRequired:
- my-google-auth-service
- other-auth-service
`,
want: server.ToolConfigs{
"example_tool": sqliteexecutesql.Config{
Name: "example_tool",
Kind: "sqlite-execute-sql",
Source: "my-instance",
Description: "some description",
AuthRequired: []string{"my-google-auth-service", "other-auth-service"},
},
},
},
}
for _, tc := range tcs {
t.Run(tc.desc, func(t *testing.T) {
got := struct {
Tools server.ToolConfigs `yaml:"tools"`
}{}
// Parse contents
err := yaml.UnmarshalContext(ctx, testutils.FormatYaml(tc.in), &got)
if err != nil {
t.Fatalf("unable to unmarshal: %s", err)
}
if diff := cmp.Diff(tc.want, got.Tools); diff != "" {
t.Fatalf("incorrect parse: diff %v", diff)
}
})
}
}
func setupTestDB(t *testing.T) *sql.DB {
db, err := sql.Open("sqlite", ":memory:")
if err != nil {
t.Fatalf("Failed to open in-memory database: %v", err)
}
return db
}
func TestTool_Invoke(t *testing.T) {
ctx, err := testutils.ContextWithNewLogger()
if err != nil {
t.Fatalf("unexpected error: %s", err)
}
type fields struct {
Name string
Kind string
AuthRequired []string
Parameters tools.Parameters
DB *sql.DB
}
type args struct {
ctx context.Context
params tools.ParamValues
accessToken tools.AccessToken
}
tests := []struct {
name string
fields fields
args args
want any
wantErr bool
}{
{
name: "create table",
fields: fields{
DB: setupTestDB(t),
},
args: args{
ctx: ctx,
params: []tools.ParamValue{
{Name: "sql", Value: "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, age INTEGER)"},
},
},
want: nil,
wantErr: false,
},
{
name: "insert data",
fields: fields{
DB: setupTestDB(t),
},
args: args{
ctx: ctx,
params: []tools.ParamValue{
{Name: "sql", Value: "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, age INTEGER); INSERT INTO users (id, name, age) VALUES (1, 'Alice', 30), (2, 'Bob', 25)"},
},
},
want: nil,
wantErr: false,
},
{
name: "select data",
fields: fields{
DB: func() *sql.DB {
db := setupTestDB(t)
if _, err := db.Exec("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, age INTEGER); INSERT INTO users (id, name, age) VALUES (1, 'Alice', 30), (2, 'Bob', 25)"); err != nil {
t.Fatalf("Failed to set up database for select: %v", err)
}
return db
}(),
},
args: args{
ctx: ctx,
params: []tools.ParamValue{
{Name: "sql", Value: "SELECT * FROM users"},
},
},
want: []any{
map[string]any{"id": int64(1), "name": "Alice", "age": int64(30)},
map[string]any{"id": int64(2), "name": "Bob", "age": int64(25)},
},
wantErr: false,
},
{
name: "drop table",
fields: fields{
DB: func() *sql.DB {
db := setupTestDB(t)
if _, err := db.Exec("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, age INTEGER)"); err != nil {
t.Fatalf("Failed to set up database for drop: %v", err)
}
return db
}(),
},
args: args{
ctx: ctx,
params: []tools.ParamValue{
{Name: "sql", Value: "DROP TABLE users"},
},
},
want: nil,
wantErr: false,
},
{
name: "invalid sql",
fields: fields{
DB: setupTestDB(t),
},
args: args{
ctx: ctx,
params: []tools.ParamValue{
{Name: "sql", Value: "SELECT * FROM non_existent_table"},
},
},
want: nil,
wantErr: true,
},
{
name: "empty sql",
fields: fields{
DB: setupTestDB(t),
},
args: args{
ctx: ctx,
params: []tools.ParamValue{
{Name: "sql", Value: ""},
},
},
want: nil,
wantErr: true,
},
{
name: "data types",
fields: fields{
DB: func() *sql.DB {
db := setupTestDB(t)
if _, err := db.Exec("CREATE TABLE data_types (id INTEGER PRIMARY KEY, null_col TEXT, blob_col BLOB)"); err != nil {
t.Fatalf("Failed to set up database for data types: %v", err)
}
if _, err := db.Exec("INSERT INTO data_types (id, null_col, blob_col) VALUES (1, NULL, ?)", []byte{1, 2, 3}); err != nil {
t.Fatalf("Failed to insert data for data types: %v", err)
}
return db
}(),
},
args: args{
ctx: ctx,
params: []tools.ParamValue{
{Name: "sql", Value: "SELECT * FROM data_types"},
},
},
want: []any{
map[string]any{"id": int64(1), "null_col": nil, "blob_col": []byte{1, 2, 3}},
},
wantErr: false,
},
{
name: "join operation",
fields: fields{
DB: func() *sql.DB {
db := setupTestDB(t)
if _, err := db.Exec("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, age INTEGER)"); err != nil {
t.Fatalf("Failed to set up database for join: %v", err)
}
if _, err := db.Exec("INSERT INTO users (id, name, age) VALUES (1, 'Alice', 30), (2, 'Bob', 25)"); err != nil {
t.Fatalf("Failed to insert data for join: %v", err)
}
if _, err := db.Exec("CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, item TEXT)"); err != nil {
t.Fatalf("Failed to set up database for join: %v", err)
}
if _, err := db.Exec("INSERT INTO orders (id, user_id, item) VALUES (1, 1, 'Laptop'), (2, 2, 'Keyboard')"); err != nil {
t.Fatalf("Failed to insert data for join: %v", err)
}
return db
}(),
},
args: args{
ctx: ctx,
params: []tools.ParamValue{
{Name: "sql", Value: "SELECT u.name, o.item FROM users u JOIN orders o ON u.id = o.user_id"},
},
},
want: []any{
map[string]any{"name": "Alice", "item": "Laptop"},
map[string]any{"name": "Bob", "item": "Keyboard"},
},
wantErr: false,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
tr := &sqliteexecutesql.Tool{
Name: tt.fields.Name,
Kind: tt.fields.Kind,
AuthRequired: tt.fields.AuthRequired,
Parameters: tt.fields.Parameters,
DB: tt.fields.DB,
}
got, err := tr.Invoke(tt.args.ctx, tt.args.params, tt.args.accessToken)
if (err != nil) != tt.wantErr {
t.Errorf("Tool.Invoke() error = %v, wantErr %v", err, tt.wantErr)
return
}
// Special-case empty slices: reflect.DeepEqual reports false for nil vs.
// empty, and comma-ok assertions avoid panicking when want is nil.
gotSlice, gotOk := got.([]any)
wantSlice, wantOk := tt.want.([]any)
isEqual := false
if gotOk && wantOk && len(gotSlice) == 0 && len(wantSlice) == 0 {
isEqual = true
} else {
isEqual = reflect.DeepEqual(got, tt.want)
}
if !isEqual {
t.Errorf("Tool.Invoke() = %v, want %v", got, tt.want)
}
})
}
}

View File

@@ -17,6 +17,7 @@ package sqlitesql
import (
"context"
"database/sql"
"encoding/json"
"fmt"
yaml "github.com/goccy/go-yaml"
@@ -150,45 +151,50 @@ func (t Tool) Invoke(ctx context.Context, params tools.ParamValues, accessToken
return nil, fmt.Errorf("unable to get column names: %w", err)
}
// The sqlite driver does not support ColumnTypes, so we can't get the
// underlying database type of the columns. We'll have to rely on the
// generic `any` type and then handle the JSON data separately.
rawValues := make([]any, len(cols))
values := make([]any, len(cols))
for i := range rawValues {
values[i] = &rawValues[i]
}
var out []any
for rows.Next() {
if err := rows.Scan(values...); err != nil {
return nil, fmt.Errorf("unable to scan row: %w", err)
}
vMap := make(map[string]any)
for i, name := range cols {
val := rawValues[i]
// Handle nil values
if val == nil {
vMap[name] = nil
continue
}
// Handle JSON data
if jsonString, ok := val.(string); ok {
var unmarshaledData any
if json.Unmarshal([]byte(jsonString), &unmarshaledData) == nil {
vMap[name] = unmarshaledData
continue
}
}
// Store the value in the map
vMap[name] = val
}
out = append(out, vMap)
}
if err = rows.Err(); err != nil {
return nil, fmt.Errorf("error iterating rows: %w", err)
}
return out, nil
}
func (t Tool) ParseParams(data map[string]any, claims map[string]map[string]any) (tools.ParamValues, error) {

View File

@@ -15,14 +15,18 @@
package sqlitesql_test
import (
"context"
"database/sql"
"reflect"
"testing"
_ "modernc.org/sqlite"
yaml "github.com/goccy/go-yaml"
"github.com/google/go-cmp/cmp"
"github.com/googleapis/genai-toolbox/internal/server"
"github.com/googleapis/genai-toolbox/internal/testutils"
"github.com/googleapis/genai-toolbox/internal/tools"
"github.com/googleapis/genai-toolbox/internal/tools/sqlitesql"
"github.com/googleapis/genai-toolbox/internal/tools/sqlite/sqlitesql"
)
func TestParseFromYamlSQLite(t *testing.T) {
@@ -174,3 +178,146 @@ func TestParseFromYamlWithTemplateSqlite(t *testing.T) {
})
}
}
func setupTestDB(t *testing.T) *sql.DB {
db, err := sql.Open("sqlite", ":memory:")
if err != nil {
t.Fatalf("Failed to open in-memory database: %v", err)
}
createTable := `
CREATE TABLE users (
id INTEGER PRIMARY KEY,
name TEXT,
age INTEGER
);`
if _, err := db.Exec(createTable); err != nil {
t.Fatalf("Failed to create table: %v", err)
}
insertData := `
INSERT INTO users (id, name, age) VALUES
(1, 'Alice', 30),
(2, 'Bob', 25);`
if _, err := db.Exec(insertData); err != nil {
t.Fatalf("Failed to insert data: %v", err)
}
return db
}
func TestTool_Invoke(t *testing.T) {
type fields struct {
Name string
Kind string
AuthRequired []string
Parameters tools.Parameters
TemplateParameters tools.Parameters
AllParams tools.Parameters
Db *sql.DB
Statement string
}
type args struct {
ctx context.Context
params tools.ParamValues
accessToken tools.AccessToken
}
tests := []struct {
name string
fields fields
args args
want any
wantErr bool
}{
{
name: "simple select",
fields: fields{
Db: setupTestDB(t),
Statement: "SELECT * FROM users",
},
args: args{
ctx: context.Background(),
},
want: []any{
map[string]any{"id": int64(1), "name": "Alice", "age": int64(30)},
map[string]any{"id": int64(2), "name": "Bob", "age": int64(25)},
},
wantErr: false,
},
{
name: "select with parameter",
fields: fields{
Db: setupTestDB(t),
Statement: "SELECT * FROM users WHERE name = ?",
Parameters: []tools.Parameter{
tools.NewStringParameter("name", "user name"),
},
},
args: args{
ctx: context.Background(),
params: []tools.ParamValue{
{Name: "name", Value: "Alice"},
},
},
want: []any{
map[string]any{"id": int64(1), "name": "Alice", "age": int64(30)},
},
wantErr: false,
},
{
name: "select with template parameter",
fields: fields{
Db: setupTestDB(t),
Statement: "SELECT * FROM {{.tableName}}",
TemplateParameters: []tools.Parameter{
tools.NewStringParameter("tableName", "table name"),
},
},
args: args{
ctx: context.Background(),
params: []tools.ParamValue{
{Name: "tableName", Value: "users"},
},
},
want: []any{
map[string]any{"id": int64(1), "name": "Alice", "age": int64(30)},
map[string]any{"id": int64(2), "name": "Bob", "age": int64(25)},
},
wantErr: false,
},
{
name: "invalid sql",
fields: fields{
Db: setupTestDB(t),
Statement: "SELECT * FROM non_existent_table",
},
args: args{
ctx: context.Background(),
},
want: nil,
wantErr: true,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
tr := sqlitesql.Tool{
Name: tt.fields.Name,
Kind: tt.fields.Kind,
AuthRequired: tt.fields.AuthRequired,
Parameters: tt.fields.Parameters,
TemplateParameters: tt.fields.TemplateParameters,
AllParams: tt.fields.AllParams,
Db: tt.fields.Db,
Statement: tt.fields.Statement,
}
got, err := tr.Invoke(tt.args.ctx, tt.args.params, tt.args.accessToken)
if (err != nil) != tt.wantErr {
t.Errorf("Tool.Invoke() error = %v, wantErr %v", err, tt.wantErr)
return
}
if !reflect.DeepEqual(got, tt.want) {
t.Errorf("Tool.Invoke() = %v, want %v", got, tt.want)
}
})
}
}

View File

@@ -0,0 +1,779 @@
// Copyright 2025 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package alloydb
import (
"bytes"
"context"
"encoding/json"
"fmt"
"io"
"net/http"
"os"
"reflect"
"regexp"
"sort"
"strings"
"testing"
"time"
"github.com/google/go-cmp/cmp"
"github.com/googleapis/genai-toolbox/internal/server/mcp/jsonrpc"
"github.com/googleapis/genai-toolbox/internal/testutils"
"github.com/googleapis/genai-toolbox/tests"
)
var (
AlloyDBProject = os.Getenv("ALLOYDB_PROJECT")
AlloyDBLocation = os.Getenv("ALLOYDB_REGION")
AlloyDBCluster = os.Getenv("ALLOYDB_CLUSTER")
AlloyDBInstance = os.Getenv("ALLOYDB_INSTANCE")
AlloyDBUser = os.Getenv("ALLOYDB_POSTGRES_USER")
)
func getAlloyDBVars(t *testing.T) map[string]string {
if AlloyDBProject == "" {
t.Fatal("'ALLOYDB_PROJECT' not set")
}
if AlloyDBLocation == "" {
t.Fatal("'ALLOYDB_REGION' not set")
}
if AlloyDBCluster == "" {
t.Fatal("'ALLOYDB_CLUSTER' not set")
}
if AlloyDBInstance == "" {
t.Fatal("'ALLOYDB_INSTANCE' not set")
}
if AlloyDBUser == "" {
t.Fatal("'ALLOYDB_USER' not set")
}
return map[string]string{
"projectId": AlloyDBProject,
"locationId": AlloyDBLocation,
"clusterId": AlloyDBCluster,
"instanceId": AlloyDBInstance,
"user": AlloyDBUser,
}
}
func getAlloyDBToolsConfig() map[string]any {
return map[string]any{
"sources": map[string]any{
"alloydb-admin-source": map[string]any{
"kind": "alloydb-admin",
},
},
"tools" : map[string]any{
// Tool for RunAlloyDBToolGetTest
"my-simple-tool": map[string]any{
"kind": "alloydb-list-clusters",
"source": "alloydb-admin-source",
"description": "Simple tool to test end to end functionality.",
},
// Tool for MCP test
"my-param-tool": map[string]any{
"kind": "alloydb-list-clusters",
"source": "alloydb-admin-source",
"description": "Tool to list clusters",
},
// Tool for MCP test that fails
"my-fail-tool": map[string]any{
"kind": "alloydb-list-clusters",
"source": "alloydb-admin-source",
"description": "Tool that will fail",
},
// AlloyDB specific tools
"alloydb-list-clusters": map[string]any{
"kind": "alloydb-list-clusters",
"source": "alloydb-admin-source",
"description": "Lists all AlloyDB clusters in a given project and location.",
},
"alloydb-list-users": map[string]any{
"kind": "alloydb-list-users",
"source": "alloydb-admin-source",
"description": "Lists all AlloyDB users within a specific cluster.",
},
"alloydb-list-instances": map[string]any{
"kind": "alloydb-list-instances",
"source": "alloydb-admin-source",
"description": "Lists all AlloyDB instances within a specific cluster.",
},
"alloydb-get-cluster": map[string]any{
"kind": "alloydb-get-cluster",
"source": "alloydb-admin-source",
"description": "Retrieves details of a specific AlloyDB cluster.",
},
},
}
}
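// For reference (illustrative only): the map above is the in-memory
// equivalent of a tools file along these lines:
//
//	sources:
//	  alloydb-admin-source:
//	    kind: alloydb-admin
//	tools:
//	  alloydb-list-clusters:
//	    kind: alloydb-list-clusters
//	    source: alloydb-admin-source
//	    description: Lists all AlloyDB clusters in a given project and location.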
func TestAlloyDBToolEndpoints(t *testing.T) {
vars := getAlloyDBVars(t)
ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
defer cancel()
var args []string
toolsFile := getAlloyDBToolsConfig()
cmd, cleanup, err := tests.StartCmd(ctx, toolsFile, args...)
if err != nil {
t.Fatalf("command initialization returned an error: %v", err)
}
defer cleanup()
waitCtx, cancelWait := context.WithTimeout(ctx, 20*time.Second)
defer cancelWait()
out, err := testutils.WaitForString(waitCtx, regexp.MustCompile(`Server ready to serve`), cmd.Out)
if err != nil {
t.Logf("toolbox command logs: \n%s", out)
t.Fatalf("toolbox didn't start successfully: %v", err)
}
runAlloyDBToolGetTest(t)
runAlloyDBMCPToolCallMethod(t, vars)
// Run tool-specific invoke tests
runAlloyDBListClustersTest(t, vars)
runAlloyDBListUsersTest(t, vars)
runAlloyDBListInstancesTest(t, vars)
runAlloyDBGetClusterTest(t, vars)
}
func runAlloyDBToolGetTest(t *testing.T) {
tcs := []struct {
name string
api string
want map[string]any
}{
{
name: "get my-simple-tool",
api: "http://127.0.0.1:5000/api/tool/my-simple-tool/",
want: map[string]any{
"my-simple-tool": map[string]any{
"description": "Simple tool to test end to end functionality.",
"parameters": []any{
map[string]any{"name": "projectId", "type": "string", "description": "The GCP project ID to list clusters for.", "required": true, "authSources": []any{}},
map[string]any{"name": "locationId", "type": "string", "description": "Optional: The location to list clusters in (e.g., 'us-central1'). Use '-' to list clusters across all locations.(Default: '-')", "required": false, "authSources": []any{}},
},
"authRequired": nil,
},
},
},
}
for _, tc := range tcs {
t.Run(tc.name, func(t *testing.T) {
resp, err := http.Get(tc.api)
if err != nil {
t.Fatalf("error when sending a request: %s", err)
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
t.Fatalf("response status code is not 200")
}
var body map[string]interface{}
if err := json.NewDecoder(resp.Body).Decode(&body); err != nil {
t.Fatalf("error parsing response body: %v", err)
}
got, ok := body["tools"]
if !ok {
t.Fatalf("unable to find 'tools' in response body")
}
if diff := cmp.Diff(tc.want, got); diff != "" {
t.Errorf("response mismatch (-want +got):\n%s", diff)
}
})
}
}
func runAlloyDBMCPToolCallMethod(t *testing.T, vars map[string]string) {
sessionId := tests.RunInitialize(t, "2024-11-05")
header := map[string]string{}
if sessionId != "" {
header["Mcp-Session-Id"] = sessionId
}
invokeTcs := []struct {
name string
requestBody jsonrpc.JSONRPCRequest
wantContains string
isErr bool
}{
{
name: "MCP Invoke my-param-tool",
requestBody: jsonrpc.JSONRPCRequest{
Jsonrpc: "2.0",
Id: "my-param-tool-mcp",
Request: jsonrpc.Request{Method: "tools/call"},
Params: map[string]any{
"name": "my-param-tool",
"arguments": map[string]any{
"projectId": vars["projectId"],
"locationId": vars["locationId"],
},
},
},
wantContains: fmt.Sprintf(`"name\":\"projects/%s/locations/%s/clusters/%s\"`, vars["projectId"], vars["locationId"], vars["clusterId"]),
isErr: false,
},
{
name: "MCP Invoke my-fail-tool",
requestBody: jsonrpc.JSONRPCRequest{
Jsonrpc: "2.0",
Id: "invoke-fail-tool",
Request: jsonrpc.Request{Method: "tools/call"},
Params: map[string]any{
"name": "my-fail-tool",
"arguments": map[string]any{
"locationId": vars["locationId"],
},
},
},
wantContains: `parameter \"projectId\" is required`,
isErr: true,
},
{
name: "MCP Invoke invalid tool",
requestBody: jsonrpc.JSONRPCRequest{
Jsonrpc: "2.0",
Id: "invalid-tool-mcp",
Request: jsonrpc.Request{Method: "tools/call"},
Params: map[string]any{
"name": "non-existent-tool",
"arguments": map[string]any{},
},
},
wantContains: `tool with name \"non-existent-tool\" does not exist`,
isErr: true,
},
{
name: "MCP Invoke tool without required parameters",
requestBody: jsonrpc.JSONRPCRequest{
Jsonrpc: "2.0",
Id: "invoke-without-params-mcp",
Request: jsonrpc.Request{Method: "tools/call"},
Params: map[string]any{
"name": "my-param-tool",
"arguments": map[string]any{"locationId": vars["locationId"]},
},
},
wantContains: `parameter \"projectId\" is required`,
isErr: true,
},
{
name: "MCP Invoke my-auth-required-tool",
requestBody: jsonrpc.JSONRPCRequest{
Jsonrpc: "2.0",
Id: "invoke my-auth-required-tool",
Request: jsonrpc.Request{Method: "tools/call"},
Params: map[string]any{
"name": "my-auth-required-tool",
"arguments": map[string]any{},
},
},
wantContains: `tool with name \"my-auth-required-tool\" does not exist`,
isErr: true,
},
}
for _, tc := range invokeTcs {
t.Run(tc.name, func(t *testing.T) {
api := "http://127.0.0.1:5000/mcp"
reqMarshal, err := json.Marshal(tc.requestBody)
if err != nil {
t.Fatalf("unexpected error during marshaling of request body: %v", err)
}
req, err := http.NewRequest(http.MethodPost, api, bytes.NewBuffer(reqMarshal))
if err != nil {
t.Fatalf("unable to create request: %s", err)
}
req.Header.Add("Content-type", "application/json")
resp, err := http.DefaultClient.Do(req)
if err != nil {
t.Fatalf("unable to send request: %s", err)
}
defer resp.Body.Close()
respBody, err := io.ReadAll(resp.Body)
if err != nil {
t.Fatalf("unable to read request body: %s", err)
}
got := string(bytes.TrimSpace(respBody))
if !strings.Contains(got, tc.wantContains) {
t.Fatalf("Expected substring not found:\ngot: %q\nwant: %q (to be contained within got)", got, tc.wantContains)
}
})
}
}
func runAlloyDBListClustersTest(t *testing.T, vars map[string]string) {
type ListClustersResponse struct {
Clusters []struct {
Name string `json:"name"`
} `json:"clusters"`
}
type ToolResponse struct {
Result string `json:"result"`
}
// NOTE: If clusters are added, removed or changed in the test project,
// this list must be updated for the "list clusters specific locations" test to pass
wantForSpecificLocation := []string{
fmt.Sprintf("projects/%s/locations/us-central1/clusters/alloydb-ai-nl-testing", vars["projectId"]),
fmt.Sprintf("projects/%s/locations/us-central1/clusters/alloydb-pg-testing", vars["projectId"]),
}
// NOTE: If clusters are added, removed, or changed in the test project,
// this list must be updated for the "list clusters all locations" test to pass
wantForAllLocations := []string{
fmt.Sprintf("projects/%s/locations/us-central1/clusters/alloydb-ai-nl-testing", vars["projectId"]),
fmt.Sprintf("projects/%s/locations/us-central1/clusters/alloydb-pg-testing", vars["projectId"]),
fmt.Sprintf("projects/%s/locations/us-east4/clusters/alloydb-private-pg-testing", vars["projectId"]),
fmt.Sprintf("projects/%s/locations/us-east4/clusters/colab-testing", vars["projectId"]),
}
invokeTcs := []struct {
name string
requestBody io.Reader
want []string
wantStatusCode int
}{
{
name: "list clusters for all locations",
requestBody: bytes.NewBufferString(fmt.Sprintf(`{"projectId": "%s", "locationId": "-"}`, vars["projectId"])),
want: wantForAllLocations,
wantStatusCode: http.StatusOK,
},
{
name: "list clusters specific location",
requestBody: bytes.NewBufferString(fmt.Sprintf(`{"projectId": "%s", "locationId": "us-central1"}`, vars["projectId"])),
want: wantForSpecificLocation,
wantStatusCode: http.StatusOK,
},
{
name: "list clusters missing project",
requestBody: bytes.NewBufferString(fmt.Sprintf(`{"locationId": "%s"}`, vars["locationId"])),
wantStatusCode: http.StatusBadRequest,
},
{
name: "list clusters non-existent location",
requestBody: bytes.NewBufferString(fmt.Sprintf(`{"projectId": "%s", "locationId": "abcd"}`, vars["projectId"])),
wantStatusCode: http.StatusInternalServerError,
},
{
name: "list clusters non-existent project",
requestBody: bytes.NewBufferString(fmt.Sprintf(`{"projectId": "non-existent-project", "locationId": "%s"}`, vars["locationId"])),
wantStatusCode: http.StatusInternalServerError,
},
{
name: "list clusters empty project",
requestBody: bytes.NewBufferString(fmt.Sprintf(`{"projectId": "", "locationId": "%s"}`, vars["locationId"])),
wantStatusCode: http.StatusBadRequest,
},
{
name: "list clusters empty location",
requestBody: bytes.NewBufferString(fmt.Sprintf(`{"projectId": "%s", "locationId": ""}`, vars["projectId"])),
wantStatusCode: http.StatusBadRequest,
},
}
for _, tc := range invokeTcs {
t.Run(tc.name, func(t *testing.T) {
api := "http://127.0.0.1:5000/api/tool/alloydb-list-clusters/invoke"
req, err := http.NewRequest(http.MethodPost, api, tc.requestBody)
if err != nil {
t.Fatalf("unable to create request: %s", err)
}
req.Header.Add("Content-type", "application/json")
resp, err := http.DefaultClient.Do(req)
if err != nil {
t.Fatalf("unable to send request: %s", err)
}
defer resp.Body.Close()
if resp.StatusCode != tc.wantStatusCode {
bodyBytes, _ := io.ReadAll(resp.Body)
t.Fatalf("response status code is not %d, got %d: %s", tc.wantStatusCode, resp.StatusCode, string(bodyBytes))
}
if tc.wantStatusCode == http.StatusOK {
var body ToolResponse
if err := json.NewDecoder(resp.Body).Decode(&body); err != nil {
t.Fatalf("error parsing outer response body: %v", err)
}
var clustersData ListClustersResponse
if err := json.Unmarshal([]byte(body.Result), &clustersData); err != nil {
t.Fatalf("error parsing nested result JSON: %v", err)
}
var got []string
for _, cluster := range clustersData.Clusters {
got = append(got, cluster.Name)
}
sort.Strings(got)
sort.Strings(tc.want)
if !reflect.DeepEqual(got, tc.want) {
t.Errorf("cluster list mismatch:\n got: %v\nwant: %v", got, tc.want)
}
}
})
}
}
func runAlloyDBListUsersTest(t *testing.T, vars map[string]string) {
type UsersResponse struct {
Users []struct {
Name string `json:"name"`
} `json:"users"`
}
type ToolResponse struct {
Result string `json:"result"`
}
invokeTcs := []struct {
name string
requestBody io.Reader
wantContains string
wantCount int
wantStatusCode int
}{
{
name: "list users success",
requestBody: bytes.NewBufferString(fmt.Sprintf(`{"projectId": "%s", "locationId": "%s", "clusterId": "%s"}`, vars["projectId"], vars["locationId"], vars["clusterId"])),
wantContains: fmt.Sprintf("projects/%s/locations/%s/clusters/%s/users/%s", vars["projectId"], vars["locationId"], vars["clusterId"], AlloyDBUser),
wantCount: 3, // NOTE: If users are added or removed in the test project, this count must be updated for the test to pass
wantStatusCode: http.StatusOK,
},
{
name: "list users missing project",
requestBody: bytes.NewBufferString(fmt.Sprintf(`{"locationId": "%s", "clusterId": "%s"}`, vars["locationId"], vars["clusterId"])),
wantStatusCode: http.StatusBadRequest,
},
{
name: "list users missing location",
requestBody: bytes.NewBufferString(fmt.Sprintf(`{"projectId": "%s", "clusterId": "%s"}`, vars["projectId"], vars["clusterId"])),
wantStatusCode: http.StatusBadRequest,
},
{
name: "list users missing cluster",
requestBody: bytes.NewBufferString(fmt.Sprintf(`{"projectId": "%s", "locationId": "%s"}`, vars["projectId"], vars["locationId"])),
wantStatusCode: http.StatusBadRequest,
},
{
name: "list users non-existent project",
requestBody: bytes.NewBufferString(fmt.Sprintf(`{"projectId": "non-existent-project", "locationId": "%s", "clusterId": "%s"}`, vars["locationId"], vars["clusterId"])),
wantStatusCode: http.StatusInternalServerError,
},
{
name: "list users non-existent location",
requestBody: bytes.NewBufferString(fmt.Sprintf(`{"projectId": "%s", "locationId": "non-existent-location", "clusterId": "%s"}`, vars["projectId"], vars["clusterId"])),
wantStatusCode: http.StatusInternalServerError,
},
{
name: "list users non-existent cluster",
requestBody: bytes.NewBufferString(fmt.Sprintf(`{"projectId": "%s", "locationId": "%s", "clusterId": "non-existent-cluster"}`, vars["projectId"], vars["locationId"])),
wantStatusCode: http.StatusBadRequest,
},
}
for _, tc := range invokeTcs {
t.Run(tc.name, func(t *testing.T) {
api := "http://127.0.0.1:5000/api/tool/alloydb-list-users/invoke"
req, err := http.NewRequest(http.MethodPost, api, tc.requestBody)
if err != nil {
t.Fatalf("unable to create request: %s", err)
}
req.Header.Add("Content-type", "application/json")
resp, err := http.DefaultClient.Do(req)
if err != nil {
t.Fatalf("unable to send request: %s", err)
}
defer resp.Body.Close()
if resp.StatusCode != tc.wantStatusCode {
bodyBytes, _ := io.ReadAll(resp.Body)
t.Fatalf("response status code is not %d, got %d: %s", tc.wantStatusCode, resp.StatusCode, string(bodyBytes))
}
if tc.wantStatusCode == http.StatusOK {
var body ToolResponse
if err := json.NewDecoder(resp.Body).Decode(&body); err != nil {
t.Fatalf("error parsing outer response body: %v", err)
}
var usersData UsersResponse
if err := json.Unmarshal([]byte(body.Result), &usersData); err != nil {
t.Fatalf("error parsing nested result JSON: %v", err)
}
var got []string
for _, user := range usersData.Users {
got = append(got, user.Name)
}
sort.Strings(got)
if len(got) != tc.wantCount {
t.Errorf("user count mismatch:\n got: %v\nwant: %v", len(got), tc.wantCount)
}
found := false
for _, g := range got {
if g == tc.wantContains {
found = true
break
}
}
if !found {
t.Errorf("wantContains not found in response:\n got: %v\nwant: %v", got, tc.wantContains)
}
}
})
}
}
func runAlloyDBListInstancesTest(t *testing.T, vars map[string]string) {
type ListInstancesResponse struct {
Instances []struct {
Name string `json:"name"`
} `json:"instances"`
}
type ToolResponse struct {
Result string `json:"result"`
}
wantForSpecificClusterAndLocation := []string{
fmt.Sprintf("projects/%s/locations/%s/clusters/%s/instances/%s", vars["projectId"], vars["locationId"], vars["clusterId"], vars["instanceId"]),
}
// NOTE: If clusters or instances are added, removed or changed in the test project,
// the below lists must be updated for the tests to pass.
wantForAllClustersSpecificLocation := []string{
fmt.Sprintf("projects/%s/locations/%s/clusters/alloydb-ai-nl-testing/instances/alloydb-ai-nl-testing-instance", vars["projectId"], vars["locationId"]),
fmt.Sprintf("projects/%s/locations/%s/clusters/alloydb-pg-testing/instances/alloydb-pg-testing-instance", vars["projectId"], vars["locationId"]),
}
wantForAllClustersAllLocations := []string{
fmt.Sprintf("projects/%s/locations/us-central1/clusters/alloydb-ai-nl-testing/instances/alloydb-ai-nl-testing-instance", vars["projectId"]),
fmt.Sprintf("projects/%s/locations/us-central1/clusters/alloydb-pg-testing/instances/alloydb-pg-testing-instance", vars["projectId"]),
fmt.Sprintf("projects/%s/locations/us-east4/clusters/alloydb-private-pg-testing/instances/alloydb-private-pg-testing-instance", vars["projectId"]),
fmt.Sprintf("projects/%s/locations/us-east4/clusters/colab-testing/instances/colab-testing-primary", vars["projectId"]),
}
invokeTcs := []struct {
name string
requestBody io.Reader
want []string
wantStatusCode int
}{
{
name: "list instances for a specific cluster and location",
requestBody: bytes.NewBufferString(fmt.Sprintf(`{"projectId": "%s", "locationId": "%s", "clusterId": "%s"}`, vars["projectId"], vars["locationId"], vars["clusterId"])),
want: wantForSpecificClusterAndLocation,
wantStatusCode: http.StatusOK,
},
{
name: "list instances for all clusters and specific location",
requestBody: bytes.NewBufferString(fmt.Sprintf(`{"projectId": "%s", "locationId": "%s", "clusterId": "-"}`, vars["projectId"], vars["locationId"])),
want: wantForAllClustersSpecificLocation,
wantStatusCode: http.StatusOK,
},
{
name: "list instances for all clusters and all locations",
requestBody: bytes.NewBufferString(fmt.Sprintf(`{"projectId": "%s", "locationId": "-", "clusterId": "-"}`, vars["projectId"])),
want: wantForAllClustersAllLocations,
wantStatusCode: http.StatusOK,
},
{
name: "list instances missing project",
requestBody: bytes.NewBufferString(fmt.Sprintf(`{"locationId": "%s", "clusterId": "%s"}`, vars["locationId"], vars["clusterId"])),
wantStatusCode: http.StatusBadRequest,
},
{
name: "list instances non-existent project",
requestBody: bytes.NewBufferString(fmt.Sprintf(`{"projectId": "non-existent-project", "locationId": "%s", "clusterId": "%s"}`, vars["locationId"], vars["clusterId"])),
wantStatusCode: http.StatusInternalServerError,
},
{
name: "list instances non-existent location",
requestBody: bytes.NewBufferString(fmt.Sprintf(`{"projectId": "%s", "locationId": "non-existent-location", "clusterId": "%s"}`, vars["projectId"], vars["clusterId"])),
wantStatusCode: http.StatusInternalServerError,
},
{
name: "list instances non-existent cluster",
requestBody: bytes.NewBufferString(fmt.Sprintf(`{"projectId": "%s", "locationId": "%s", "clusterId": "non-existent-cluster"}`, vars["projectId"], vars["locationId"])),
wantStatusCode: http.StatusBadRequest,
},
}
for _, tc := range invokeTcs {
t.Run(tc.name, func(t *testing.T) {
api := "http://127.0.0.1:5000/api/tool/alloydb-list-instances/invoke"
req, err := http.NewRequest(http.MethodPost, api, tc.requestBody)
if err != nil {
t.Fatalf("unable to create request: %s", err)
}
req.Header.Add("Content-type", "application/json")
resp, err := http.DefaultClient.Do(req)
if err != nil {
t.Fatalf("unable to send request: %s", err)
}
defer resp.Body.Close()
if resp.StatusCode != tc.wantStatusCode {
bodyBytes, _ := io.ReadAll(resp.Body)
t.Fatalf("response status code is not %d, got %d: %s", tc.wantStatusCode, resp.StatusCode, string(bodyBytes))
}
if tc.wantStatusCode == http.StatusOK {
var body ToolResponse
if err := json.NewDecoder(resp.Body).Decode(&body); err != nil {
t.Fatalf("error parsing outer response body: %v", err)
}
var instancesData ListInstancesResponse
if err := json.Unmarshal([]byte(body.Result), &instancesData); err != nil {
t.Fatalf("error parsing nested result JSON: %v", err)
}
var got []string
for _, instance := range instancesData.Instances {
got = append(got, instance.Name)
}
sort.Strings(got)
sort.Strings(tc.want)
if !reflect.DeepEqual(got, tc.want) {
t.Errorf("instance list mismatch:\n got: %v\nwant: %v", got, tc.want)
}
}
})
}
}
func runAlloyDBGetClusterTest(t *testing.T, vars map[string]string) {
type ToolResponse struct {
Result string `json:"result"`
}
invokeTcs := []struct {
name string
requestBody io.Reader
want map[string]any
wantStatusCode int
}{
{
name: "get cluster success",
requestBody: bytes.NewBufferString(fmt.Sprintf(`{"projectId": "%s", "locationId": "%s", "clusterId": "%s"}`, vars["projectId"], vars["locationId"], vars["clusterId"])),
want: map[string]any{
"clusterType": "PRIMARY",
"name": fmt.Sprintf("projects/%s/locations/%s/clusters/%s", vars["projectId"], vars["locationId"], vars["clusterId"]),
},
wantStatusCode: http.StatusOK,
},
{
name: "get cluster missing project",
requestBody: bytes.NewBufferString(fmt.Sprintf(`{"locationId": "%s", "clusterId": "%s"}`, vars["locationId"], vars["clusterId"])),
wantStatusCode: http.StatusBadRequest,
},
{
name: "get cluster missing location",
requestBody: bytes.NewBufferString(fmt.Sprintf(`{"projectId": "%s", "clusterId": "%s"}`, vars["projectId"], vars["clusterId"])),
wantStatusCode: http.StatusBadRequest,
},
{
name: "get cluster missing clusterId",
requestBody: bytes.NewBufferString(fmt.Sprintf(`{"projectId": "%s", "locationId": "%s"}`, vars["projectId"], vars["locationId"])),
wantStatusCode: http.StatusBadRequest,
},
{
name: "get cluster non-existent cluster",
requestBody: bytes.NewBufferString(fmt.Sprintf(`{"projectId": "%s", "locationId": "%s", "clusterId": "non-existent-cluster"}`, vars["projectId"], vars["locationId"])),
wantStatusCode: http.StatusBadRequest,
},
}
for _, tc := range invokeTcs {
t.Run(tc.name, func(t *testing.T) {
api := "http://127.0.0.1:5000/api/tool/alloydb-get-cluster/invoke"
req, err := http.NewRequest(http.MethodPost, api, tc.requestBody)
if err != nil {
t.Fatalf("unable to create request: %s", err)
}
req.Header.Add("Content-type", "application/json")
resp, err := http.DefaultClient.Do(req)
if err != nil {
t.Fatalf("unable to send request: %s", err)
}
defer resp.Body.Close()
if resp.StatusCode != tc.wantStatusCode {
bodyBytes, _ := io.ReadAll(resp.Body)
t.Fatalf("response status code is not %d, got %d: %s", tc.wantStatusCode, resp.StatusCode, string(bodyBytes))
}
if tc.wantStatusCode == http.StatusOK {
var body ToolResponse
if err := json.NewDecoder(resp.Body).Decode(&body); err != nil {
t.Fatalf("error parsing response body: %v", err)
}
if tc.want != nil {
var gotMap map[string]any
if err := json.Unmarshal([]byte(body.Result), &gotMap); err != nil {
t.Fatalf("failed to unmarshal JSON result into map: %v", err)
}
got := make(map[string]any)
for key := range tc.want {
if value, ok := gotMap[key]; ok {
got[key] = value
}
}
if diff := cmp.Diff(tc.want, got); diff != "" {
t.Errorf("Unexpected result: got %#v, want: %#v", got, tc.want)
}
}
}
})
}
}
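Both AlloyDB helpers above decode the tool output in two steps: the HTTP body is an envelope of the form {"result": "<json>"}, and the result field itself carries JSON that must be unmarshaled a second time. A minimal, self-contained sketch of that double decode, using illustrative values rather than anything from the test run:

package main

import (
	"encoding/json"
	"fmt"
)

// toolResponse mirrors the outer envelope returned by the /invoke endpoint.
type toolResponse struct {
	Result string `json:"result"`
}

func main() {
	// The result field is a string holding another JSON document.
	raw := `{"result": "{\"clusterType\": \"PRIMARY\"}"}`
	var outer toolResponse
	if err := json.Unmarshal([]byte(raw), &outer); err != nil {
		panic(err)
	}
	var inner map[string]any
	if err := json.Unmarshal([]byte(outer.Result), &inner); err != nil {
		panic(err)
	}
	fmt.Println(inner["clusterType"]) // PRIMARY
}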


@@ -272,4 +272,4 @@ func TestAlloyDBPgIAMConnection(t *testing.T) {
}
})
}
}
}


@@ -74,7 +74,7 @@ func initBigQueryConnection(project string) (*bigqueryapi.Client, error) {
func TestBigQueryToolEndpoints(t *testing.T) {
sourceConfig := getBigQueryVars(t)
- ctx, cancel := context.WithTimeout(context.Background(), 3*time.Minute)
+ ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
defer cancel()
var args []string
@@ -1873,6 +1873,9 @@ func runBigQueryGetTableInfoToolInvokeTest(t *testing.T, datasetName, tableName,
}
func runBigQueryConversationalAnalyticsInvokeTest(t *testing.T, datasetName, tableName, dataInsightsWant string) {
// Each test is expected to complete in under 10s; we set a 25s timeout with retries to avoid flaky tests.
const maxRetries = 3
const requestTimeout = 25 * time.Second
// Get ID token
idToken, err := tests.GetGoogleIdToken(tests.ClientId)
if err != nil {
@@ -1960,18 +1963,53 @@ func runBigQueryConversationalAnalyticsInvokeTest(t *testing.T, datasetName, tab
}
for _, tc := range invokeTcs {
t.Run(tc.name, func(t *testing.T) {
- // Send Tool invocation request
- req, err := http.NewRequest(http.MethodPost, tc.api, tc.requestBody)
+ var resp *http.Response
+ var err error
+ bodyBytes, err := io.ReadAll(tc.requestBody)
+ if err != nil {
+ t.Fatalf("failed to read request body: %v", err)
+ }
+ req, err := http.NewRequest(http.MethodPost, tc.api, nil)
if err != nil {
t.Fatalf("unable to create request: %s", err)
}
- req.Header.Add("Content-type", "application/json")
+ req.Header.Set("Content-type", "application/json")
for k, v := range tc.requestHeader {
req.Header.Add(k, v)
}
- resp, err := http.DefaultClient.Do(req)
+ for i := 0; i < maxRetries; i++ {
+ ctx, cancel := context.WithTimeout(context.Background(), requestTimeout)
+ defer cancel()
+ req.Body = io.NopCloser(bytes.NewReader(bodyBytes))
+ req.GetBody = func() (io.ReadCloser, error) {
+ return io.NopCloser(bytes.NewReader(bodyBytes)), nil
+ }
+ reqWithCtx := req.WithContext(ctx)
+ resp, err = http.DefaultClient.Do(reqWithCtx)
+ if err != nil {
+ // Retry on timeout.
+ if os.IsTimeout(err) {
+ t.Logf("Request timed out (attempt %d/%d), retrying...", i+1, maxRetries)
+ time.Sleep(5 * time.Second)
+ continue
+ }
+ t.Fatalf("unable to send request: %s", err)
+ }
+ if resp.StatusCode == http.StatusServiceUnavailable {
+ t.Logf("Received 503 Service Unavailable (attempt %d/%d), retrying...", i+1, maxRetries)
+ time.Sleep(15 * time.Second)
+ continue
+ }
+ break
+ }
if err != nil {
- t.Fatalf("unable to send request: %s", err)
+ t.Fatalf("Request failed after %d retries: %v", maxRetries, err)
}
defer resp.Body.Close()
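An http.Request body is consumed on the first send, so the retry loop above buffers the bytes once and reinstalls Body and GetBody before every attempt. A condensed sketch of just that replay mechanism, pointed at a placeholder endpoint:

package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
)

func main() {
	payload := []byte(`{"question": "ping"}`)
	// Placeholder URL; in the test this is the tool's /invoke endpoint.
	req, err := http.NewRequest(http.MethodPost, "http://127.0.0.1:5000/api/tool/example/invoke", nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Content-Type", "application/json")
	for attempt := 1; attempt <= 3; attempt++ {
		// Rebuild the body from the buffered bytes so every attempt sends the full payload.
		req.Body = io.NopCloser(bytes.NewReader(payload))
		req.GetBody = func() (io.ReadCloser, error) {
			return io.NopCloser(bytes.NewReader(payload)), nil
		}
		resp, err := http.DefaultClient.Do(req)
		if err != nil {
			fmt.Printf("attempt %d failed: %v\n", attempt, err)
			continue
		}
		resp.Body.Close()
		break
	}
}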


@@ -0,0 +1,242 @@
// Copyright 2025 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package cloudsql
import (
"bytes"
"context"
"encoding/json"
"fmt"
"io"
"net/http"
"net/http/httptest"
"net/url"
"reflect"
"regexp"
"strings"
"testing"
"time"
"github.com/google/go-cmp/cmp"
"github.com/googleapis/genai-toolbox/internal/testutils"
"github.com/googleapis/genai-toolbox/tests"
)
var (
createUserToolKind = "cloud-sql-create-users"
)
type createUsersTransport struct {
transport http.RoundTripper
url *url.URL
}
func (t *createUsersTransport) RoundTrip(req *http.Request) (*http.Response, error) {
if strings.HasPrefix(req.URL.String(), "https://sqladmin.googleapis.com") {
req.URL.Scheme = t.url.Scheme
req.URL.Host = t.url.Host
}
return t.transport.RoundTrip(req)
}
type userCreateRequest struct {
Name string `json:"name"`
Password string `json:"password,omitempty"`
Type string `json:"type,omitempty"`
}
type masterCreateUserHandler struct {
t *testing.T
}
func (h *masterCreateUserHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
if !strings.Contains(r.UserAgent(), "genai-toolbox/") {
h.t.Errorf("User-Agent header not found")
}
var body userCreateRequest
if err := json.NewDecoder(r.Body).Decode(&body); err != nil {
h.t.Fatalf("failed to decode request body: %v", err)
}
var expectedBody userCreateRequest
var response any
var statusCode int
switch body.Name {
case "test-user":
expectedBody = userCreateRequest{Name: "test-user", Password: "password", Type: "BUILT_IN"}
response = map[string]any{"name": "op1", "status": "PENDING"}
statusCode = http.StatusOK
case "iam-user":
expectedBody = userCreateRequest{Name: "iam-user", Type: "CLOUD_IAM_USER"}
response = map[string]any{"name": "op2", "status": "PENDING"}
statusCode = http.StatusOK
default:
http.Error(w, fmt.Sprintf("unhandled user name: %s", body.Name), http.StatusInternalServerError)
return
}
// For IAM user, password is not expected
if body.Type == "CLOUD_IAM_USER" {
expectedBody.Password = ""
}
if diff := cmp.Diff(expectedBody, body); diff != "" {
h.t.Errorf("unexpected request body (-want +got):\n%s", diff)
}
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(statusCode)
if err := json.NewEncoder(w).Encode(response); err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
}
}
func TestCreateUsersToolEndpoints(t *testing.T) {
ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
defer cancel()
handler := &masterCreateUserHandler{t: t}
server := httptest.NewServer(handler)
defer server.Close()
serverURL, err := url.Parse(server.URL)
if err != nil {
t.Fatalf("failed to parse server URL: %v", err)
}
originalTransport := http.DefaultClient.Transport
if originalTransport == nil {
originalTransport = http.DefaultTransport
}
http.DefaultClient.Transport = &createUsersTransport{
transport: originalTransport,
url: serverURL,
}
t.Cleanup(func() {
http.DefaultClient.Transport = originalTransport
})
var args []string
toolsFile := getCreateUsersToolsConfig()
cmd, cleanup, err := tests.StartCmd(ctx, toolsFile, args...)
if err != nil {
t.Fatalf("command initialization returned an error: %s", err)
}
defer cleanup()
waitCtx, cancel := context.WithTimeout(ctx, 30*time.Second)
defer cancel()
out, err := testutils.WaitForString(waitCtx, regexp.MustCompile(`Server ready to serve`), cmd.Out)
if err != nil {
t.Logf("toolbox command logs: \n%s", out)
t.Fatalf("toolbox didn't start successfully: %s", err)
}
tcs := []struct {
name string
toolName string
body string
want string
expectError bool
errorStatus int
}{
{
name: "successful built-in user creation",
toolName: "create-user",
body: `{"project": "p1", "instance": "i1", "name": "test-user", "password": "password", "iamUser": false}`,
want: `{"name":"op1","status":"PENDING"}`,
},
{
name: "successful iam user creation",
toolName: "create-user",
body: `{"project": "p1", "instance": "i1", "name": "iam-user", "iamUser": true}`,
want: `{"name":"op2","status":"PENDING"}`,
},
{
name: "missing password for built-in user",
toolName: "create-user",
body: `{"project": "p1", "instance": "i1", "name": "test-user", "iamUser": false}`,
expectError: true,
errorStatus: http.StatusBadRequest,
},
}
for _, tc := range tcs {
tc := tc
t.Run(tc.name, func(t *testing.T) {
api := fmt.Sprintf("http://127.0.0.1:5000/api/tool/%s/invoke", tc.toolName)
req, err := http.NewRequest(http.MethodPost, api, bytes.NewBufferString(tc.body))
if err != nil {
t.Fatalf("unable to create request: %s", err)
}
req.Header.Add("Content-type", "application/json")
resp, err := http.DefaultClient.Do(req)
if err != nil {
t.Fatalf("unable to send request: %s", err)
}
defer resp.Body.Close()
if tc.expectError {
if resp.StatusCode != tc.errorStatus {
bodyBytes, _ := io.ReadAll(resp.Body)
t.Fatalf("expected status %d but got %d: %s", tc.errorStatus, resp.StatusCode, string(bodyBytes))
}
return
}
if resp.StatusCode != http.StatusOK {
bodyBytes, _ := io.ReadAll(resp.Body)
t.Fatalf("response status code is not 200, got %d: %s", resp.StatusCode, string(bodyBytes))
}
var result struct {
Result string `json:"result"`
}
if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
t.Fatalf("failed to decode response: %v", err)
}
var got, want map[string]any
if err := json.Unmarshal([]byte(result.Result), &got); err != nil {
t.Fatalf("failed to unmarshal result: %v", err)
}
if err := json.Unmarshal([]byte(tc.want), &want); err != nil {
t.Fatalf("failed to unmarshal want: %v", err)
}
if !reflect.DeepEqual(got, want) {
t.Fatalf("unexpected result: got %+v, want %+v", got, want)
}
})
}
}
func getCreateUsersToolsConfig() map[string]any {
return map[string]any{
"sources": map[string]any{
"my-cloud-sql-source": map[string]any{
"kind": "cloud-sql-admin",
},
},
"tools": map[string]any{
"create-user": map[string]any{
"kind": createUserToolKind,
"source": "my-cloud-sql-source",
},
},
}
}
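createUsersTransport, getInstancesTransport, and the other transport types in these files all rely on the same trick: replace http.DefaultClient's RoundTripper with one that rewrites any request addressed to sqladmin.googleapis.com onto a local httptest server. A self-contained sketch of the pattern (the handler's response is illustrative):

package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
	"net/url"
	"strings"
)

// rewriteTransport reroutes requests for the real API host to a test server.
type rewriteTransport struct {
	inner http.RoundTripper
	test  *url.URL
}

func (t *rewriteTransport) RoundTrip(req *http.Request) (*http.Response, error) {
	if strings.HasPrefix(req.URL.String(), "https://sqladmin.googleapis.com") {
		req.URL.Scheme = t.test.Scheme
		req.URL.Host = t.test.Host
	}
	return t.inner.RoundTrip(req)
}

func main() {
	server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprint(w, `{"ok": true}`)
	}))
	defer server.Close()
	testURL, _ := url.Parse(server.URL)
	http.DefaultClient.Transport = &rewriteTransport{inner: http.DefaultTransport, test: testURL}
	// This request never leaves the machine: the transport reroutes it.
	resp, err := http.Get("https://sqladmin.googleapis.com/v1/projects/p1/instances")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body)) // {"ok": true}
}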


@@ -0,0 +1,254 @@
// Copyright 2025 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package cloudsql
import (
"bytes"
"context"
"encoding/json"
"fmt"
"io"
"net/http"
"net/http/httptest"
"net/url"
"reflect"
"regexp"
"strings"
"sync"
"testing"
"time"
"github.com/googleapis/genai-toolbox/internal/testutils"
"github.com/googleapis/genai-toolbox/tests"
)
var (
getInstancesToolKind = "cloud-sql-get-instance"
)
type getInstancesTransport struct {
transport http.RoundTripper
url *url.URL
}
func (t *getInstancesTransport) RoundTrip(req *http.Request) (*http.Response, error) {
if strings.HasPrefix(req.URL.String(), "https://sqladmin.googleapis.com") {
req.URL.Scheme = t.url.Scheme
req.URL.Host = t.url.Host
}
return t.transport.RoundTrip(req)
}
type instance struct {
Name string `json:"name"`
Kind string `json:"kind"`
}
type handler struct {
mu sync.Mutex
instances map[string]*instance
t *testing.T
}
func (h *handler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
h.mu.Lock()
defer h.mu.Unlock()
if !strings.Contains(r.UserAgent(), "genai-toolbox/") {
h.t.Errorf("User-Agent header not found")
}
if !strings.HasPrefix(r.URL.Path, "/v1/projects/") {
http.Error(w, "unexpected path", http.StatusBadRequest)
return
}
// The format is /v1/projects/{project}/instances/{instance_name}
// We only care about the instance_name for the test
parts := regexp.MustCompile("/").Split(r.URL.Path, -1)
instanceName := parts[len(parts)-1]
inst, ok := h.instances[instanceName]
if !ok {
http.NotFound(w, r)
return
}
w.Header().Set("Content-Type", "application/json")
if err := json.NewEncoder(w).Encode(inst); err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
}
}
func TestGetInstancesToolEndpoints(t *testing.T) {
h := &handler{
instances: map[string]*instance{
"instance-1": {Name: "instance-1", Kind: "sql#instance"},
},
t: t,
}
server := httptest.NewServer(h)
defer server.Close()
serverURL, err := url.Parse(server.URL)
if err != nil {
t.Fatalf("failed to parse server URL: %v", err)
}
originalTransport := http.DefaultClient.Transport
if originalTransport == nil {
originalTransport = http.DefaultTransport
}
http.DefaultClient.Transport = &getInstancesTransport{
transport: originalTransport,
url: serverURL,
}
t.Cleanup(func() {
http.DefaultClient.Transport = originalTransport
})
ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
defer cancel()
var args []string
toolsFile := getToolsConfig()
cmd, cleanup, err := tests.StartCmd(ctx, toolsFile, args...)
if err != nil {
t.Fatalf("command initialization returned an error: %s", err)
}
defer cleanup()
waitCtx, cancel := context.WithTimeout(ctx, 10*time.Second)
defer cancel()
out, err := testutils.WaitForString(waitCtx, regexp.MustCompile(`Server ready to serve`), cmd.Out)
if err != nil {
t.Logf("toolbox command logs: \n%s", out)
t.Fatalf("toolbox didn't start successfully: %s", err)
}
tcs := []struct {
name string
toolName string
body string
want string
expectError bool
wantSubstring bool
}{
{
name: "successful get instance",
toolName: "get-instance-1",
body: `{"projectId": "p1", "instanceId": "instance-1"}`,
want: `{"name":"instance-1","kind":"sql#instance"}`,
},
{
name: "failed get instance",
toolName: "get-instance-2",
body: `{"projectId": "p1", "instanceId": "instance-2"}`,
expectError: true,
},
}
for _, tc := range tcs {
t.Run(tc.name, func(t *testing.T) {
api := fmt.Sprintf("http://127.0.0.1:5000/api/tool/%s/invoke", tc.toolName)
req, err := http.NewRequest(http.MethodPost, api, bytes.NewBufferString(tc.body))
if err != nil {
t.Fatalf("unable to create request: %s", err)
}
req.Header.Add("Content-type", "application/json")
resp, err := http.DefaultClient.Do(req)
if err != nil {
t.Fatalf("unable to send request: %s", err)
}
defer resp.Body.Close()
if tc.expectError {
if resp.StatusCode == http.StatusOK {
t.Fatal("expected error but got status 200")
}
return
}
if resp.StatusCode != http.StatusOK {
bodyBytes, _ := io.ReadAll(resp.Body)
t.Fatalf("response status code is not 200, got %d: %s", resp.StatusCode, string(bodyBytes))
}
var result struct {
Result any `json:"result"`
}
if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
t.Fatalf("failed to decode response: %v", err)
}
var gotBytes []byte
if s, ok := result.Result.(string); ok {
gotBytes = []byte(s)
} else {
var err error
gotBytes, err = json.Marshal(result.Result)
if err != nil {
t.Fatalf("failed to marshal result: %v", err)
}
}
if tc.wantSubstring {
if !bytes.Contains(gotBytes, []byte(tc.want)) {
t.Fatalf("unexpected result: got %q, want substring %q", string(gotBytes), tc.want)
}
return
}
var got, want map[string]any
if err := json.Unmarshal(gotBytes, &got); err != nil {
t.Fatalf("failed to unmarshal result: %v", err)
}
if err := json.Unmarshal([]byte(tc.want), &want); err != nil {
t.Fatalf("failed to unmarshal want: %v", err)
}
if !reflect.DeepEqual(got, want) {
t.Fatalf("unexpected result: got %+v, want %+v", got, want)
}
})
}
}
func getToolsConfig() map[string]any {
return map[string]any{
"sources": map[string]any{
"my-cloud-sql-source": map[string]any{
"kind": "cloud-sql-admin",
},
"my-invalid-cloud-sql-source": map[string]any{
"kind": "cloud-sql-admin",
"useClientOAuth": true,
},
},
"tools": map[string]any{
"get-instance-1": map[string]any{
"kind": getInstancesToolKind,
"description": "get instance 1",
"source": "my-cloud-sql-source",
},
"get-instance-2": map[string]any{
"kind": getInstancesToolKind,
"description": "get instance 2",
"source": "my-invalid-cloud-sql-source",
},
},
}
}
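The handler above splits the URL path with a compiled regexp to recover the trailing instance name; since only the final segment matters, path.Base from the standard library expresses the same step more directly. A sketch of that alternative, not what the committed handlers use:

package main

import (
	"fmt"
	"path"
)

func main() {
	// path.Base returns the final path element, equivalent to parts[len(parts)-1].
	fmt.Println(path.Base("/v1/projects/p1/instances/instance-1")) // instance-1
}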


@@ -0,0 +1,193 @@
// Copyright 2025 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package cloudsql
import (
"bytes"
"context"
"encoding/json"
"fmt"
"io"
"net/http"
"net/http/httptest"
"net/url"
"reflect"
"regexp"
"strings"
"testing"
"time"
"github.com/googleapis/genai-toolbox/internal/testutils"
_ "github.com/googleapis/genai-toolbox/internal/tools/cloudsql/cloudsqllistinstances"
"github.com/googleapis/genai-toolbox/tests"
)
type transport struct {
transport http.RoundTripper
url *url.URL
}
func (t *transport) RoundTrip(req *http.Request) (*http.Response, error) {
if strings.HasPrefix(req.URL.String(), "https://sqladmin.googleapis.com") {
req.URL.Scheme = t.url.Scheme
req.URL.Host = t.url.Host
}
return t.transport.RoundTrip(req)
}
func TestListInstance(t *testing.T) {
server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
if !strings.Contains(r.UserAgent(), "genai-toolbox/") {
t.Errorf("User-Agent header not found")
}
if r.URL.Path != "/v1/projects/test-project/instances" {
http.Error(w, fmt.Sprintf("unexpected path: got %q", r.URL.Path), http.StatusBadRequest)
return
}
w.Header().Set("Content-Type", "application/json")
fmt.Fprintln(w, `{"items": [{"name": "test-instance", "instanceType": "CLOUD_SQL_INSTANCE"}]}`)
}))
defer server.Close()
serverURL, err := url.Parse(server.URL)
if err != nil {
t.Fatalf("failed to parse server URL: %v", err)
}
originalTransport := http.DefaultClient.Transport
if originalTransport == nil {
originalTransport = http.DefaultTransport
}
http.DefaultClient.Transport = &transport{
transport: originalTransport,
url: serverURL,
}
t.Cleanup(func() {
http.DefaultClient.Transport = originalTransport
})
ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
defer cancel()
var args []string
toolsFile := getListInstanceToolsConfig()
cmd, cleanup, err := tests.StartCmd(ctx, toolsFile, args...)
if err != nil {
t.Fatalf("command initialization returned an error: %s", err)
}
defer cleanup()
waitCtx, cancel := context.WithTimeout(ctx, 10*time.Second)
defer cancel()
out, err := testutils.WaitForString(waitCtx, regexp.MustCompile("Server ready to serve"), cmd.Out)
if err != nil {
t.Logf("toolbox command logs: \n%s", out)
t.Fatalf("toolbox didn't start successfully: %s", err)
}
tcs := []struct {
name string
toolName string
body string
want string
expectError bool
}{
{
name: "successful operation",
toolName: "list-instances",
body: `{"project": "test-project"}`,
want: `[{"name":"test-instance","instanceType":"CLOUD_SQL_INSTANCE"}]`,
},
{
name: "failed operation",
toolName: "list-instances-fail",
body: `{"project": "test-project"}`,
expectError: true,
},
}
for _, tc := range tcs {
t.Run(tc.name, func(t *testing.T) {
api := fmt.Sprintf("http://127.0.0.1:5000/api/tool/%s/invoke", tc.toolName)
req, err := http.NewRequest(http.MethodPost, api, bytes.NewBufferString(tc.body))
if err != nil {
t.Fatalf("unable to create request: %s", err)
}
req.Header.Add("Content-type", "application/json")
resp, err := http.DefaultClient.Do(req)
if err != nil {
t.Fatalf("unable to send request: %s", err)
}
defer resp.Body.Close()
if tc.expectError {
if resp.StatusCode == http.StatusOK {
t.Fatal("expected error but got status 200")
}
return
}
if resp.StatusCode != http.StatusOK {
bodyBytes, _ := io.ReadAll(resp.Body)
t.Fatalf("response status code is not 200, got %d: %s", resp.StatusCode, string(bodyBytes))
}
var result struct {
Result string `json:"result"`
}
if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
t.Fatalf("failed to decode response: %v", err)
}
var got, want any
if err := json.Unmarshal([]byte(result.Result), &got); err != nil {
t.Fatalf("failed to unmarshal result: %v", err)
}
if err := json.Unmarshal([]byte(tc.want), &want); err != nil {
t.Fatalf("failed to unmarshal want: %v", err)
}
if !reflect.DeepEqual(got, want) {
t.Fatalf("unexpected result: got %+v, want %+v", got, want)
}
})
}
}
func getListInstanceToolsConfig() map[string]any {
return map[string]any{
"sources": map[string]any{
"my-cloud-sql-source": map[string]any{
"kind": "cloud-sql-admin",
},
"my-invalid-cloud-sql-source": map[string]any{
"kind": "cloud-sql-admin",
"useClientOAuth": true,
},
},
"tools": map[string]any{
"list-instances": map[string]any{
"kind": "cloud-sql-list-instances",
"source": "my-cloud-sql-source",
},
"list-instances-fail": map[string]any{
"kind": "cloud-sql-list-instances",
"description": "list instances",
"source": "my-invalid-cloud-sql-source",
},
},
}
}


@@ -0,0 +1,315 @@
// Copyright 2025 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package cloudsql
import (
"bytes"
"context"
"encoding/json"
"fmt"
"io"
"net/http"
"net/http/httptest"
"net/url"
"reflect"
"regexp"
"strings"
"sync"
"testing"
"time"
"github.com/googleapis/genai-toolbox/internal/testutils"
"github.com/googleapis/genai-toolbox/tests"
_ "github.com/googleapis/genai-toolbox/internal/tools/cloudsql/cloudsqlwaitforoperation"
)
var (
cloudsqlWaitToolKind = "cloud-sql-wait-for-operation"
)
type waitForOperationTransport struct {
transport http.RoundTripper
url *url.URL
}
func (t *waitForOperationTransport) RoundTrip(req *http.Request) (*http.Response, error) {
if strings.HasPrefix(req.URL.String(), "https://sqladmin.googleapis.com") {
req.URL.Scheme = t.url.Scheme
req.URL.Host = t.url.Host
}
return t.transport.RoundTrip(req)
}
type cloudsqlOperation struct {
Name string `json:"name"`
Status string `json:"status"`
TargetLink string `json:"targetLink"`
OperationType string `json:"operationType"`
Error *struct {
Errors []struct {
Code string `json:"code"`
Message string `json:"message"`
} `json:"errors"`
} `json:"error,omitempty"`
}
type cloudsqlInstance struct {
Region string `json:"region"`
DatabaseVersion string `json:"databaseVersion"`
}
type cloudsqlHandler struct {
mu sync.Mutex
operations map[string]*cloudsqlOperation
instances map[string]*cloudsqlInstance
t *testing.T
}
func (h *cloudsqlHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
h.mu.Lock()
defer h.mu.Unlock()
if !strings.Contains(r.UserAgent(), "genai-toolbox/") {
h.t.Errorf("User-Agent header not found")
}
if match, _ := regexp.MatchString("/v1/projects/p1/operations/.*", r.URL.Path); match {
parts := regexp.MustCompile("/").Split(r.URL.Path, -1)
opName := parts[len(parts)-1]
op, ok := h.operations[opName]
if !ok {
http.NotFound(w, r)
return
}
if op.Status != "DONE" {
op.Status = "DONE"
}
w.Header().Set("Content-Type", "application/json")
if err := json.NewEncoder(w).Encode(op); err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
}
} else if match, _ := regexp.MatchString("/v1/projects/p1/instances/.*", r.URL.Path); match {
parts := regexp.MustCompile("/").Split(r.URL.Path, -1)
instanceName := parts[len(parts)-1]
instance, ok := h.instances[instanceName]
if !ok {
http.NotFound(w, r)
return
}
w.Header().Set("Content-Type", "application/json")
if err := json.NewEncoder(w).Encode(instance); err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
}
} else {
http.NotFound(w, r)
}
}
func TestCloudSQLWaitToolEndpoints(t *testing.T) {
h := &cloudsqlHandler{
operations: map[string]*cloudsqlOperation{
"op1": {Name: "op1", Status: "PENDING", OperationType: "CREATE_DATABASE"},
"op2": {Name: "op2", Status: "PENDING", OperationType: "CREATE_DATABASE", Error: &struct {
Errors []struct {
Code string `json:"code"`
Message string `json:"message"`
} `json:"errors"`
}{
Errors: []struct {
Code string `json:"code"`
Message string `json:"message"`
}{
{Code: "ERROR_CODE", Message: "failed"},
},
}},
"op3": {Name: "op3", Status: "PENDING", OperationType: "CREATE"},
},
instances: map[string]*cloudsqlInstance{
"i1": {Region: "r1", DatabaseVersion: "POSTGRES_13"},
},
t: t,
}
server := httptest.NewServer(h)
defer server.Close()
h.operations["op1"].TargetLink = "https://sqladmin.googleapis.com/v1/projects/p1/instances/i1/databases/d1"
h.operations["op2"].TargetLink = "https://sqladmin.googleapis.com/v1/projects/p1/instances/i2/databases/d2"
h.operations["op3"].TargetLink = "https://sqladmin.googleapis.com/v1/projects/p1/instances/i1"
serverURL, err := url.Parse(server.URL)
if err != nil {
t.Fatalf("failed to parse server URL: %v", err)
}
originalTransport := http.DefaultClient.Transport
if originalTransport == nil {
originalTransport = http.DefaultTransport
}
http.DefaultClient.Transport = &waitForOperationTransport{
transport: originalTransport,
url: serverURL,
}
t.Cleanup(func() {
http.DefaultClient.Transport = originalTransport
})
ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
defer cancel()
var args []string
toolsFile := getCloudSQLWaitToolsConfig()
cmd, cleanup, err := tests.StartCmd(ctx, toolsFile, args...)
if err != nil {
t.Fatalf("command initialization returned an error: %s", err)
}
defer cleanup()
waitCtx, cancel := context.WithTimeout(ctx, 10*time.Second)
defer cancel()
out, err := testutils.WaitForString(waitCtx, regexp.MustCompile(`Server ready to serve`), cmd.Out)
if err != nil {
t.Logf("toolbox command logs: \n%s", out)
t.Fatalf("toolbox didn't start successfully: %s", err)
}
tcs := []struct {
name string
toolName string
body string
want string
expectError bool
wantSubstring bool
}{
{
name: "successful operation",
toolName: "wait-for-op1",
body: `{"project": "p1", "operation": "op1"}`,
want: "Your Cloud SQL resource is ready",
wantSubstring: true,
},
{
name: "failed operation",
toolName: "wait-for-op2",
body: `{"project": "p1", "operation": "op2"}`,
expectError: true,
},
{
name: "non-database create operation",
toolName: "wait-for-op3",
body: `{"project": "p1", "operation": "op3"}`,
want: `{"name":"op3","status":"DONE","targetLink":"` + h.operations["op3"].TargetLink + `","operationType":"CREATE"}`,
},
}
for _, tc := range tcs {
t.Run(tc.name, func(t *testing.T) {
api := fmt.Sprintf("http://127.0.0.1:5000/api/tool/%s/invoke", tc.toolName)
req, err := http.NewRequest(http.MethodPost, api, bytes.NewBufferString(tc.body))
if err != nil {
t.Fatalf("unable to create request: %s", err)
}
req.Header.Add("Content-type", "application/json")
resp, err := http.DefaultClient.Do(req)
if err != nil {
t.Fatalf("unable to send request: %s", err)
}
defer resp.Body.Close()
if tc.expectError {
if resp.StatusCode == http.StatusOK {
t.Fatal("expected error but got status 200")
}
return
}
if resp.StatusCode != http.StatusOK {
bodyBytes, _ := io.ReadAll(resp.Body)
t.Fatalf("response status code is not 200, got %d: %s", resp.StatusCode, string(bodyBytes))
}
if tc.wantSubstring {
var result struct {
Result string `json:"result"`
}
if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
t.Fatalf("failed to decode response: %v", err)
}
if !bytes.Contains([]byte(result.Result), []byte(tc.want)) {
t.Fatalf("unexpected result: got %q, want substring %q", result.Result, tc.want)
}
return
}
var result struct {
Result string `json:"result"`
}
if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
t.Fatalf("failed to decode response: %v", err)
}
var tempString string
if err := json.Unmarshal([]byte(result.Result), &tempString); err != nil {
t.Fatalf("failed to unmarshal outer JSON string: %v", err)
}
var got, want map[string]any
if err := json.Unmarshal([]byte(tempString), &got); err != nil {
t.Fatalf("failed to unmarshal inner JSON object: %v", err)
}
if err := json.Unmarshal([]byte(tc.want), &want); err != nil {
t.Fatalf("failed to unmarshal want: %v", err)
}
if !reflect.DeepEqual(got, want) {
t.Fatalf("unexpected result: got %+v, want %+v", got, want)
}
})
}
}
func getCloudSQLWaitToolsConfig() map[string]any {
return map[string]any{
"sources": map[string]any{
"my-cloud-sql-source": map[string]any{
"kind": "cloud-sql-admin",
},
},
"tools": map[string]any{
"wait-for-op1": map[string]any{
"kind": cloudsqlWaitToolKind,
"source": "my-cloud-sql-source",
"description": "wait for op1",
},
"wait-for-op2": map[string]any{
"kind": cloudsqlWaitToolKind,
"source": "my-cloud-sql-source",
"description": "wait for op2",
},
"wait-for-op3": map[string]any{
"kind": cloudsqlWaitToolKind,
"source": "my-cloud-sql-source",
"description": "wait for op3",
},
},
}
}
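In the op3 case the test unwraps the payload twice: the decoded result field is a JSON string whose contents are themselves the JSON object, so json.Unmarshal runs once per layer before the map comparison. A small sketch of peeling off both layers, with a made-up operation literal:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Layer 1: a JSON-encoded string. Layer 2: the object encoded inside it.
	result := `"{\"name\":\"op3\",\"status\":\"DONE\"}"`
	var inner string
	if err := json.Unmarshal([]byte(result), &inner); err != nil {
		panic(err)
	}
	var obj map[string]any
	if err := json.Unmarshal([]byte(inner), &obj); err != nil {
		panic(err)
	}
	fmt.Println(obj["name"], obj["status"]) // op3 DONE
}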


@@ -133,6 +133,7 @@ func TestSpannerToolEndpoints(t *testing.T) {
toolsFile = addSpannerExecuteSqlConfig(t, toolsFile)
toolsFile = addSpannerReadOnlyConfig(t, toolsFile)
toolsFile = addTemplateParamConfig(t, toolsFile)
toolsFile = addSpannerListTablesConfig(t, toolsFile)
cmd, cleanup, err := tests.StartCmd(ctx, toolsFile, args...)
if err != nil {
@@ -174,6 +175,7 @@ func TestSpannerToolEndpoints(t *testing.T) {
)
runSpannerSchemaToolInvokeTest(t, accessSchemaWant)
runSpannerExecuteSqlToolInvokeTest(t, select1Want, invokeParamWant, tableNameParam, tableNameAuth)
runSpannerListTablesTest(t, tableNameParam, tableNameAuth, tableNameTemplateParam)
}
// getSpannerToolInfo returns statements and param for my-tool for spanner-sql kind
@@ -303,6 +305,24 @@ func addSpannerReadOnlyConfig(t *testing.T, config map[string]any) map[string]an
return config
}
// addSpannerListTablesConfig adds the spanner-list-tables tool configuration
func addSpannerListTablesConfig(t *testing.T, config map[string]any) map[string]any {
tools, ok := config["tools"].(map[string]any)
if !ok {
t.Fatalf("unable to get tools from config")
}
// Add spanner-list-tables tool
tools["list-tables-tool"] = map[string]any{
"kind": "spanner-list-tables",
"source": "my-instance",
"description": "Lists tables with their schema information",
}
config["tools"] = tools
return config
}
func addTemplateParamConfig(t *testing.T, config map[string]any) map[string]any {
toolsMap, ok := config["tools"].(map[string]any)
if !ok {
@@ -527,6 +547,119 @@ func runSpannerExecuteSqlToolInvokeTest(t *testing.T, select1Want, invokeParamWa
}
}
// Helper function to verify table list results
func verifyTableListResult(t *testing.T, body map[string]interface{}, expectedTables []string, expectedSimpleFormat bool) {
// Parse the result
result, ok := body["result"].(string)
if !ok {
t.Fatalf("unable to find result in response body")
}
var tables []interface{}
err := json.Unmarshal([]byte(result), &tables)
if err != nil {
t.Fatalf("unable to parse result as JSON array: %s", err)
}
// If we expect specific tables, verify they exist
if len(expectedTables) > 0 {
tableNames := make(map[string]bool)
requiredKeys := []string{"schema_name", "object_name", "object_type", "columns", "constraints", "indexes"}
if expectedSimpleFormat {
requiredKeys = []string{"name"}
}
for _, table := range tables {
tableMap, ok := table.(map[string]interface{})
if !ok {
continue
}
// Parse object_details JSON string into map[string]interface{}
if objectDetailsStr, ok := tableMap["object_details"].(string); ok {
var objectDetails map[string]interface{}
if err := json.Unmarshal([]byte(objectDetailsStr), &objectDetails); err != nil {
t.Errorf("failed to parse object_details JSON: %v for %v", err, objectDetailsStr)
continue
}
for _, reqKey := range requiredKeys {
if _, hasKey := objectDetails[reqKey]; !hasKey {
t.Errorf("missing required key '%s', for object_details: %v",reqKey, objectDetails)
}
}
}
if name, ok := tableMap["object_name"].(string); ok {
tableNames[name] = true
}
}
for _, expected := range expectedTables {
if !tableNames[expected] {
t.Errorf("expected table %s not found in results", expected)
}
}
}
}
// runSpannerListTablesTest tests the spanner-list-tables tool
func runSpannerListTablesTest(t *testing.T, tableNameParam, tableNameAuth, tableNameTemplateParam string) {
invokeTcs := []struct {
name string
requestBody io.Reader
expectedTables []string // empty means don't check specific tables
useSimpleFormat bool
}{
{
name: "list all tables with detailed format",
requestBody: bytes.NewBuffer([]byte(`{}`)),
expectedTables: []string{tableNameParam, tableNameAuth, tableNameTemplateParam},
},
{
name: "list tables with simple format",
requestBody: bytes.NewBuffer([]byte(`{"output_format": "simple"}`)),
expectedTables: []string{tableNameParam, tableNameAuth, tableNameTemplateParam},
useSimpleFormat: true,
},
{
name: "list specific tables",
requestBody: bytes.NewBuffer([]byte(fmt.Sprintf(`{"table_names": "%s,%s"}`, tableNameParam, tableNameAuth))),
expectedTables: []string{tableNameParam, tableNameAuth},
},
{
name: "list non-existent table",
requestBody: bytes.NewBuffer([]byte(`{"table_names": "non_existent_table_xyz"}`)),
expectedTables: []string{},
},
}
for _, tc := range invokeTcs {
t.Run(tc.name, func(t *testing.T) {
// Use RunRequest helper function from tests package
url := "http://127.0.0.1:5000/api/tool/list-tables-tool/invoke"
headers := map[string]string{}
resp, respBody := tests.RunRequest(t, http.MethodPost, url, tc.requestBody, headers)
if resp.StatusCode != http.StatusOK {
t.Fatalf("response status code is not 200, got %d: %s", resp.StatusCode, string(respBody))
}
// Check response body
var body map[string]interface{}
err := json.Unmarshal(respBody, &body)
if err != nil {
t.Fatalf("error parsing response body: %s", err)
}
verifyTableListResult(t, body, tc.expectedTables, tc.useSimpleFormat)
})
}
}
func runSpannerSchemaToolInvokeTest(t *testing.T, accessSchemaWant string) {
invokeTcs := []struct {
name string


@@ -18,6 +18,8 @@ import (
"context"
"database/sql"
"fmt"
"io"
"net/http"
"os"
"regexp"
"strings"
@@ -164,3 +166,106 @@ func TestSQLiteToolEndpoint(t *testing.T) {
tests.RunMCPToolCallMethod(t, mcpMyFailToolWant, mcpSelect1Want)
tests.RunToolInvokeWithTemplateParameters(t, tableNameTemplateParam)
}
func TestSQLiteExecuteSqlTool(t *testing.T) {
db, teardownDb, sqliteDb, err := initSQLiteDb(t, SQLiteDatabase)
if err != nil {
t.Fatal(err)
}
defer teardownDb(t)
defer db.Close()
sourceConfig := getSQLiteVars(t)
sourceConfig["database"] = sqliteDb
ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
defer cancel()
// Create a table and insert data
tableName := "exec_table_" + strings.ReplaceAll(uuid.New().String(), "-", "")
createStmt := fmt.Sprintf("CREATE TABLE IF NOT EXISTS %s (id INTEGER PRIMARY KEY, name TEXT);", tableName)
insertStmt := fmt.Sprintf("INSERT INTO %s (name) VALUES (?);", tableName)
params := []any{"Bob"}
setupSQLiteTestDB(t, ctx, db, createStmt, insertStmt, tableName, params)
// Add sqlite-execute-sql tool config
toolConfig := map[string]any{
"tools": map[string]any{
"my-exec-sql-tool": map[string]any{
"kind": "sqlite-execute-sql",
"source": "my-instance",
"description": "Tool to execute SQL statements",
},
},
"sources": map[string]any{
"my-instance": sourceConfig,
},
}
cmd, cleanup, err := tests.StartCmd(ctx, toolConfig)
if err != nil {
t.Fatalf("command initialization returned an error: %s", err)
}
defer cleanup()
waitCtx, cancel := context.WithTimeout(ctx, 10*time.Second)
defer cancel()
out, err := testutils.WaitForString(waitCtx, regexp.MustCompile(`Server ready to serve`), cmd.Out)
if err != nil {
t.Logf("toolbox command logs: \n%s", out)
t.Fatalf("toolbox didn't start successfully: %s", err)
}
// Table-driven test cases
testCases := []struct {
name string
sql string
wantStatus int
wantBody string
}{
{
name: "select existing row",
sql: fmt.Sprintf("SELECT name FROM %s WHERE id = 1", tableName),
wantStatus: 200,
wantBody: "Bob",
},
{
name: "select no rows",
sql: fmt.Sprintf("SELECT name FROM %s WHERE id = 999", tableName),
wantStatus: 200,
wantBody: "null",
},
{
name: "invalid SQL",
sql: "SELEC name FROM not_a_table",
wantStatus: 400,
wantBody: "SQL logic error",
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
api := "http://127.0.0.1:5000/api/tool/my-exec-sql-tool/invoke"
reqBody := strings.NewReader(fmt.Sprintf(`{"sql":"%s"}`, tc.sql))
req, err := http.NewRequest("POST", api, reqBody)
if err != nil {
t.Fatalf("unable to create request: %s", err)
}
req.Header.Set("Content-Type", "application/json")
resp, err := http.DefaultClient.Do(req)
if err != nil {
t.Fatalf("unable to send request: %s", err)
}
defer resp.Body.Close()
bodyBytes, err := io.ReadAll(resp.Body)
if err != nil {
t.Fatalf("unable to read response: %s", err)
}
if resp.StatusCode != tc.wantStatus {
t.Fatalf("unexpected status: %d, body: %s", resp.StatusCode, string(bodyBytes))
}
if tc.wantBody != "" && !strings.Contains(string(bodyBytes), tc.wantBody) {
t.Fatalf("expected body to contain %q, got: %s", tc.wantBody, string(bodyBytes))
}
})
}
}
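The invoke body above is built with fmt.Sprintf, which holds up because none of the statements contain double quotes or backslashes; for arbitrary SQL, marshaling a map keeps the JSON valid. A short sketch of that safer construction, reusing the endpoint from the test (the SQL literal is illustrative):

package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// json.Marshal escapes quotes and backslashes that would break a Sprintf-built body.
	payload, err := json.Marshal(map[string]string{"sql": `SELECT 'it''s quoted' AS note;`})
	if err != nil {
		panic(err)
	}
	fmt.Println(string(payload)) // {"sql":"SELECT 'it''s quoted' AS note;"}
	req, err := http.NewRequest(http.MethodPost, "http://127.0.0.1:5000/api/tool/my-exec-sql-tool/invoke", bytes.NewReader(payload))
	if err != nil {
		panic(err)
	}
	req.Header.Set("Content-Type", "application/json")
	_ = req // in the test this would be sent with http.DefaultClient.Do(req)
}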


@@ -792,7 +792,7 @@ func RunInitialize(t *testing.T, protocolVersion string) string {
t.Fatalf("unexpected error during marshaling of body")
}
- resp, _ := runRequest(t, http.MethodPost, url, bytes.NewBuffer(reqMarshal), nil)
+ resp, _ := RunRequest(t, http.MethodPost, url, bytes.NewBuffer(reqMarshal), nil)
if resp.StatusCode != 200 {
t.Fatalf("response status code is not 200")
}
@@ -817,7 +817,7 @@ func RunInitialize(t *testing.T, protocolVersion string) string {
t.Fatalf("unexpected error during marshaling of notifications body")
}
- _, _ = runRequest(t, http.MethodPost, url, bytes.NewBuffer(notiMarshal), header)
+ _, _ = RunRequest(t, http.MethodPost, url, bytes.NewBuffer(notiMarshal), header)
return sessionId
}
@@ -1089,7 +1089,7 @@ func RunMCPToolCallMethod(t *testing.T, myFailToolWant, select1Want string, opti
headers[key] = value
}
- httpResponse, respBody := runRequest(t, http.MethodPost, tc.api, bytes.NewBuffer(reqMarshal), headers)
+ httpResponse, respBody := RunRequest(t, http.MethodPost, tc.api, bytes.NewBuffer(reqMarshal), headers)
// Check status code
if httpResponse.StatusCode != tc.wantStatusCode {
@@ -1105,7 +1105,8 @@ func RunMCPToolCallMethod(t *testing.T, myFailToolWant, select1Want string, opti
}
}
- func runRequest(t *testing.T, method, url string, body io.Reader, headers map[string]string) (*http.Response, []byte) {
+ // RunRequest is a helper function to send HTTP requests and return the response
+ func RunRequest(t *testing.T, method, url string, body io.Reader, headers map[string]string) (*http.Response, []byte) {
// Send request
req, err := http.NewRequest(method, url, body)
if err != nil {