Compare commits


13 Commits

Author SHA1 Message Date
release-please[bot]
466aef024f chore(main): release 0.23.0 (#2138)
🤖 I have created a release *beep* *boop*
---


## [0.23.0](https://github.com/googleapis/genai-toolbox/compare/v0.22.0...v0.23.0) (2025-12-11)


### ⚠ BREAKING CHANGES

* **serverless-spark:** add URLs to create batch tool outputs
* **serverless-spark:** add URLs to list_batches output
* **serverless-spark:** add Cloud Console and Logging URLs to get_batch
* **tools/postgres:** Add additional filter params for existing postgres
tools ([#2033](https://github.com/googleapis/genai-toolbox/issues/2033))

### Features

* **tools/postgres:** Add list-table-stats-tool to list table
statistics.
([#2055](https://github.com/googleapis/genai-toolbox/issues/2055))
([78b02f0](78b02f08c3))
* **looker/tools:** Enhance dashboard creation with dashboard filters
([#2133](https://github.com/googleapis/genai-toolbox/issues/2133))
([285aa46](285aa46b88))
* **serverless-spark:** Add Cloud Console and Logging URLs to get_batch
([e29c061](e29c0616d6))
* **serverless-spark:** Add URLs to create batch tool outputs
([c6ccf4b](c6ccf4bd87))
* **serverless-spark:** Add URLs to list_batches output
([5605eab](5605eabd69))
* **sources/mariadb:** Add MariaDB source and MySQL tools integration
([#1908](https://github.com/googleapis/genai-toolbox/issues/1908))
([3b40fea](3b40fea25e))
* **tools/postgres:** Add additional filter params for existing postgres
tools ([#2033](https://github.com/googleapis/genai-toolbox/issues/2033))
([489117d](489117d747))
* **tools/postgres:** Add list_pg_settings, list_database_stats tools
for postgres
([#2030](https://github.com/googleapis/genai-toolbox/issues/2030))
([32367a4](32367a472f))
* **tools/postgres:** Add new postgres-list-roles tool
([#2038](https://github.com/googleapis/genai-toolbox/issues/2038))
([bea9705](bea9705450))


### Bug Fixes

* List tables tools null fix
([#2107](https://github.com/googleapis/genai-toolbox/issues/2107))
([2b45266](2b45266598))
* **tools/mongodb:** Removed sortPayload and sortParams
([#1238](https://github.com/googleapis/genai-toolbox/issues/1238))
([c5a6daa](c5a6daa768))


### Miscellaneous Chores

* **looker:** Upgrade to latest go sdk
([#2159](https://github.com/googleapis/genai-toolbox/issues/2159))
([78e015d](78e015d7df))
---
This PR was generated with [Release
Please](https://github.com/googleapis/release-please). See
[documentation](https://github.com/googleapis/release-please#release-please).

---------

Co-authored-by: release-please[bot] <55107282+release-please[bot]@users.noreply.github.com>
Co-authored-by: Wenxin Du <117315983+duwenxin99@users.noreply.github.com>
2025-12-11 22:26:26 +00:00
Wenxin Du
a6830744fc chore: release 0.23.0 (#2160)
Release-As: 0.23.0
2025-12-11 22:02:23 +00:00
Wenxin Du
615b5f0130 chore: add v0.23.0 doc version (#2161) 2025-12-11 21:27:01 +00:00
gRedHeadphone
2b45266598 fix: list tables tools null fix (#2107)
## Description

Return empty list instead of null in list tables tools when no tables
found
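
For context, in Go (this repository's language) `encoding/json` marshals a nil slice to JSON `null` and an empty slice to `[]`, which is the kind of distinction a fix like this hinges on. A minimal standalone sketch, not the Toolbox code itself:

```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	var nilTables []string    // nil slice: marshals to JSON null
	emptyTables := []string{} // empty slice: marshals to JSON []

	a, _ := json.Marshal(nilTables)
	b, _ := json.Marshal(emptyTables)
	fmt.Println(string(a)) // null
	fmt.Println(string(b)) // []
}
```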

## PR Checklist

- [x] Make sure you reviewed
  [CONTRIBUTING.md](https://github.com/googleapis/genai-toolbox/blob/main/CONTRIBUTING.md)
- [x] Make sure to open an issue as a
  [bug/issue](https://github.com/googleapis/genai-toolbox/issues/new/choose)
  before writing your code! That way we can discuss the change, evaluate
  designs, and agree on the general idea
- [x] Ensure the tests and linter pass
- [x] Code coverage does not decrease (if any source code was changed)
- [x] Appropriate docs were updated (if necessary)
- [x] Make sure to add `!` if this involves a breaking change

🛠️ Fixes #2027

---------

Co-authored-by: Averi Kitsch <akitsch@google.com>
2025-12-11 20:58:52 +00:00
Twisha Bansal
26ead2ed78 docs: include npx method to run server (#2094)
## Description

> Should include a concise description of the changes (bug or feature), its
> impact, along with a summary of the solution

## PR Checklist

> Thank you for opening a Pull Request! Before submitting your PR, there are a
> few things you can do to make sure it goes smoothly:

- [ ] Make sure you reviewed
  [CONTRIBUTING.md](https://github.com/googleapis/genai-toolbox/blob/main/CONTRIBUTING.md)
- [ ] Make sure to open an issue as a
  [bug/issue](https://github.com/googleapis/genai-toolbox/issues/new/choose)
  before writing your code! That way we can discuss the change, evaluate
  designs, and agree on the general idea
- [ ] Ensure the tests and linter pass
- [ ] Code coverage does not decrease (if any source code was changed)
- [ ] Appropriate docs were updated (if necessary)
- [ ] Make sure to add `!` if this involves a breaking change

🛠️ Fixes #<issue_number_goes_here>

---------

Co-authored-by: Averi Kitsch <akitsch@google.com>
Co-authored-by: Anubhav Dhawan <anubhavdhawan@google.com>
Co-authored-by: Wenxin Du <117315983+duwenxin99@users.noreply.github.com>
2025-12-11 18:25:12 +00:00
Twisha Bansal
1f31c2c9b2 docs: add prompts quickstart using gemini cli (#2158)
## Description

> Should include a concise description of the changes (bug or feature), its
> impact, along with a summary of the solution

## PR Checklist

> Thank you for opening a Pull Request! Before submitting your PR, there are a
> few things you can do to make sure it goes smoothly:

- [ ] Make sure you reviewed
  [CONTRIBUTING.md](https://github.com/googleapis/genai-toolbox/blob/main/CONTRIBUTING.md)
- [ ] Make sure to open an issue as a
  [bug/issue](https://github.com/googleapis/genai-toolbox/issues/new/choose)
  before writing your code! That way we can discuss the change, evaluate
  designs, and agree on the general idea
- [ ] Ensure the tests and linter pass
- [ ] Code coverage does not decrease (if any source code was changed)
- [ ] Appropriate docs were updated (if necessary)
- [ ] Make sure to add `!` if this involves a breaking change

🛠️ Fixes #<issue_number_goes_here>

---------

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2025-12-11 23:16:16 +05:30
Dr. Strangelove
78e015d7df fix(looker): upgrade to latest go sdk (#2159)
## Description

Upgrade to latest version of Looker sdk with fix for expiring
credentials.

## PR Checklist

> Thank you for opening a Pull Request! Before submitting your PR, there are a
> few things you can do to make sure it goes smoothly:

- [x] Make sure you reviewed
  [CONTRIBUTING.md](https://github.com/googleapis/genai-toolbox/blob/main/CONTRIBUTING.md)
- [x] Make sure to open an issue as a
  [bug/issue](https://github.com/googleapis/genai-toolbox/issues/new/choose)
  before writing your code! That way we can discuss the change, evaluate
  designs, and agree on the general idea
- [x] Ensure the tests and linter pass
- [x] Code coverage does not decrease (if any source code was changed)
- [x] Appropriate docs were updated (if necessary)
- [x] Make sure to add `!` if this involves a breaking change

🛠️ Fixes #1597
2025-12-11 17:11:48 +00:00
Dave Borowitz
c6ccf4bd87 feat(serverless-spark)!: add URLs to create batch tool outputs 2025-12-10 15:10:40 -08:00
Dave Borowitz
5605eabd69 feat(serverless-spark)!: add URLs to list_batches output
Unlike get_batch, in this case we are not returning a JSON type directly
from the server, so we can add the new fields in our top-level object
rather than wrapping.
2025-12-10 15:10:40 -08:00
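The distinction drawn in this commit message — wrapping a raw server response versus adding fields to a tool-built object — in a minimal Go sketch (type names here are hypothetical, not the actual genai-toolbox structs):

```go
// Package sketch illustrates the two output shapes described above.
package sketch

import "encoding/json"

// get_batch passes the server's JSON response through untouched, so the
// new URL fields have to wrap it in a new top-level object.
type getBatchOutput struct {
	Batch      json.RawMessage `json:"batch"` // raw server response
	ConsoleURL string          `json:"consoleUrl"`
	LoggingURL string          `json:"loggingUrl"`
}

// list_batches builds its own top-level object, so URL fields can sit
// alongside the existing fields without wrapping.
type listBatchesOutput struct {
	Batches    []json.RawMessage `json:"batches"`
	ConsoleURL string            `json:"consoleUrl"`
}
```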
Dave Borowitz
e29c0616d6 feat(serverless-spark)!: add Cloud Console and Logging URLs to get_batch
These are useful links for humans to follow for more information
(output, metrics, logs) that's not readily available via MCP.
2025-12-10 15:10:40 -08:00
Dr. Strangelove
285aa46b88 feat(looker/tools): Enhance dashboard creation with dashboard filters (#2133)
## Description

Enhance dashboard creation with dashboard level filters. Also improve
tool descriptions.

## PR Checklist

> Thank you for opening a Pull Request! Before submitting your PR, there are a
> few things you can do to make sure it goes smoothly:

- [X] Make sure you reviewed
  [CONTRIBUTING.md](https://github.com/googleapis/genai-toolbox/blob/main/CONTRIBUTING.md)
- [x] Make sure to open an issue as a
  [bug/issue](https://github.com/googleapis/genai-toolbox/issues/new/choose)
  before writing your code! That way we can discuss the change, evaluate
  designs, and agree on the general idea
- [x] Ensure the tests and linter pass
- [x] Code coverage does not decrease (if any source code was changed)
- [x] Appropriate docs were updated (if necessary)
- [x] Make sure to add `!` if this involves a breaking change
2025-12-10 13:30:20 -08:00
Ganga4060
c5a6daa768 fix: removed sortPayload and sortParams from the reference (#1238)
Removed sortPayload and sortParams from the reference

---------

Co-authored-by: Averi Kitsch <akitsch@google.com>
2025-12-10 19:19:07 +00:00
Siddharth Ravi
78b02f08c3 feat: add list-table-stats-tool to list table statistics. (#2055)
Adds the following tools for Postgres:
(1) list_table_stats: Lists table statistics in the database.

<img width="3446" height="1304" alt="image"
src="https://github.com/user-attachments/assets/68951edc-8d99-460e-a1ac-2d3da9388baf"
/>

<img width="2870" height="1338" alt="image"
src="https://github.com/user-attachments/assets/100a3b7d-202d-4dfd-b046-5dab4390ba41"
/>


> Should include a concise description of the changes (bug or feature), its
> impact, along with a summary of the solution

## PR Checklist

> Thank you for opening a Pull Request! Before submitting your PR, there are a
> few things you can do to make sure it goes smoothly:

- [x] Make sure you reviewed
  [CONTRIBUTING.md](https://github.com/googleapis/genai-toolbox/blob/main/CONTRIBUTING.md)
- [x] Make sure to open an issue as a
  [bug/issue](https://github.com/googleapis/genai-toolbox/issues/new/choose)
  before writing your code! That way we can discuss the change, evaluate
  designs, and agree on the general idea
- [x] Ensure the tests and linter pass
- [x] Code coverage does not decrease (if any source code was changed)
- [x] Appropriate docs were updated (if necessary)
- [x] Make sure to add `!` if this involves a breaking change

🛠️ Fixes #1738
2025-12-10 11:11:33 +00:00
103 changed files with 2909 additions and 788 deletions

View File

@@ -51,6 +51,10 @@ ignoreFiles = ["quickstart/shared", "quickstart/python", "quickstart/js", "quick
# Add a new version block here before every release
# The order of versions in this file is mirrored into the dropdown
[[params.versions]]
version = "v0.23.0"
url = "https://googleapis.github.io/genai-toolbox/v0.23.0/"
[[params.versions]]
version = "v0.22.0"
url = "https://googleapis.github.io/genai-toolbox/v0.22.0/"

View File

@@ -1,5 +1,37 @@
# Changelog
## [0.23.0](https://github.com/googleapis/genai-toolbox/compare/v0.22.0...v0.23.0) (2025-12-11)
### ⚠ BREAKING CHANGES
* **serverless-spark:** add URLs to create batch tool outputs
* **serverless-spark:** add URLs to list_batches output
* **serverless-spark:** add Cloud Console and Logging URLs to get_batch
* **tools/postgres:** Add additional filter params for existing postgres tools ([#2033](https://github.com/googleapis/genai-toolbox/issues/2033))
### Features
* **tools/postgres:** Add list-table-stats-tool to list table statistics. ([#2055](https://github.com/googleapis/genai-toolbox/issues/2055)) ([78b02f0](https://github.com/googleapis/genai-toolbox/commit/78b02f08c3cc3062943bb2f91cf60d5149c8d28d))
* **looker/tools:** Enhance dashboard creation with dashboard filters ([#2133](https://github.com/googleapis/genai-toolbox/issues/2133)) ([285aa46](https://github.com/googleapis/genai-toolbox/commit/285aa46b887d9acb2da8766e107bbf1ab75b8812))
* **serverless-spark:** Add Cloud Console and Logging URLs to get_batch ([e29c061](https://github.com/googleapis/genai-toolbox/commit/e29c0616d6b9ecda2badcaf7b69614e511ac031b))
* **serverless-spark:** Add URLs to create batch tool outputs ([c6ccf4b](https://github.com/googleapis/genai-toolbox/commit/c6ccf4bd87026484143a2d0f5527b2edab03b54a))
* **serverless-spark:** Add URLs to list_batches output ([5605eab](https://github.com/googleapis/genai-toolbox/commit/5605eabd696696ade07f52431a28ef65c0fb1f77))
* **sources/mariadb:** Add MariaDB source and MySQL tools integration ([#1908](https://github.com/googleapis/genai-toolbox/issues/1908)) ([3b40fea](https://github.com/googleapis/genai-toolbox/commit/3b40fea25edae607e02c1e8fc2b0c957fa2c8e9a))
* **tools/postgres:** Add additional filter params for existing postgres tools ([#2033](https://github.com/googleapis/genai-toolbox/issues/2033)) ([489117d](https://github.com/googleapis/genai-toolbox/commit/489117d74711ac9260e7547163ca463eb45eeaa2))
* **tools/postgres:** Add list_pg_settings, list_database_stats tools for postgres ([#2030](https://github.com/googleapis/genai-toolbox/issues/2030)) ([32367a4](https://github.com/googleapis/genai-toolbox/commit/32367a472fae9653fed7f126428eba0252978bd5))
* **tools/postgres:** Add new postgres-list-roles tool ([#2038](https://github.com/googleapis/genai-toolbox/issues/2038)) ([bea9705](https://github.com/googleapis/genai-toolbox/commit/bea97054502cfa236aa10e2ebc8ff58eb00ad035))
### Bug Fixes
* List tables tools null fix ([#2107](https://github.com/googleapis/genai-toolbox/issues/2107)) ([2b45266](https://github.com/googleapis/genai-toolbox/commit/2b452665983154041d4cd0ed7d82532e4af682eb))
* **tools/mongodb:** Removed sortPayload and sortParams ([#1238](https://github.com/googleapis/genai-toolbox/issues/1238)) ([c5a6daa](https://github.com/googleapis/genai-toolbox/commit/c5a6daa7683d2f9be654300d977692c368e55e31))
### Miscellaneous Chores
* **looker:** Upgrade to latest go sdk ([#2159](https://github.com/googleapis/genai-toolbox/issues/2159)) ([78e015d](https://github.com/googleapis/genai-toolbox/commit/78e015d7dfd9cce7e2b444ed934da17eb355bc86))
## [0.22.0](https://github.com/googleapis/genai-toolbox/compare/v0.21.0...v0.22.0) (2025-12-04)

View File

@@ -105,6 +105,21 @@ redeploying your application.
## Getting Started
### (Non-production) Running Toolbox
You can run Toolbox directly with a [configuration file](#configuration):
```sh
npx @toolbox-sdk/server --tools-file tools.yaml
```
This runs the latest version of the Toolbox server with your configuration file.
> [!NOTE]
> This method should only be used for non-production use cases such as
> experimentation. For any production use-cases, please consider [Installing the
> server](#installing-the-server) and then [running it](#running-the-server).
### Installing the server
For the latest version, check the [releases page][releases] and use the
@@ -125,7 +140,7 @@ To install Toolbox as a binary:
>
> ```sh
> # see releases page for other versions
> export VERSION=0.22.0
> export VERSION=0.23.0
> curl -L -o toolbox https://storage.googleapis.com/genai-toolbox/v$VERSION/linux/amd64/toolbox
> chmod +x toolbox
> ```
@@ -138,7 +153,7 @@ To install Toolbox as a binary:
>
> ```sh
> # see releases page for other versions
> export VERSION=0.22.0
> export VERSION=0.23.0
> curl -L -o toolbox https://storage.googleapis.com/genai-toolbox/v$VERSION/darwin/arm64/toolbox
> chmod +x toolbox
> ```
@@ -151,7 +166,7 @@ To install Toolbox as a binary:
>
> ```sh
> # see releases page for other versions
> export VERSION=0.22.0
> export VERSION=0.23.0
> curl -L -o toolbox https://storage.googleapis.com/genai-toolbox/v$VERSION/darwin/amd64/toolbox
> chmod +x toolbox
> ```
@@ -164,7 +179,7 @@ To install Toolbox as a binary:
>
> ```cmd
> :: see releases page for other versions
> set VERSION=0.22.0
> set VERSION=0.23.0
> curl -o toolbox.exe "https://storage.googleapis.com/genai-toolbox/v%VERSION%/windows/amd64/toolbox.exe"
> ```
>
@@ -176,7 +191,7 @@ To install Toolbox as a binary:
>
> ```powershell
> # see releases page for other versions
> $VERSION = "0.21.0"
> $VERSION = "0.23.0"
> curl.exe -o toolbox.exe "https://storage.googleapis.com/genai-toolbox/v$VERSION/windows/amd64/toolbox.exe"
> ```
>
@@ -189,7 +204,7 @@ You can also install Toolbox as a container:
```sh
# see releases page for other versions
export VERSION=0.22.0
export VERSION=0.23.0
docker pull us-central1-docker.pkg.dev/database-toolbox/toolbox/toolbox:$VERSION
```
@@ -213,7 +228,7 @@ To install from source, ensure you have the latest version of
[Go installed](https://go.dev/doc/install), and then run the following command:
```sh
go install github.com/googleapis/genai-toolbox@v0.22.0
go install github.com/googleapis/genai-toolbox@v0.23.0
```
<!-- {x-release-please-end} -->
@@ -303,6 +318,16 @@ toolbox --tools-file "tools.yaml"
</details>
<details>
<summary>NPM</summary>
To run Toolbox directly without manually downloading the binary (requires Node.js):
```sh
npx @toolbox-sdk/server --tools-file tools.yaml
```
</details>
<details>
<summary>Gemini CLI</summary>

View File

@@ -120,6 +120,7 @@ import (
_ "github.com/googleapis/genai-toolbox/internal/tools/firestore/firestorevalidaterules"
_ "github.com/googleapis/genai-toolbox/internal/tools/http"
_ "github.com/googleapis/genai-toolbox/internal/tools/looker/lookeradddashboardelement"
_ "github.com/googleapis/genai-toolbox/internal/tools/looker/lookeradddashboardfilter"
_ "github.com/googleapis/genai-toolbox/internal/tools/looker/lookerconversationalanalytics"
_ "github.com/googleapis/genai-toolbox/internal/tools/looker/lookercreateprojectfile"
_ "github.com/googleapis/genai-toolbox/internal/tools/looker/lookerdeleteprojectfile"
@@ -196,6 +197,7 @@ import (
_ "github.com/googleapis/genai-toolbox/internal/tools/postgres/postgreslistsequences"
_ "github.com/googleapis/genai-toolbox/internal/tools/postgres/postgreslisttables"
_ "github.com/googleapis/genai-toolbox/internal/tools/postgres/postgreslisttablespaces"
_ "github.com/googleapis/genai-toolbox/internal/tools/postgres/postgreslisttablestats"
_ "github.com/googleapis/genai-toolbox/internal/tools/postgres/postgreslisttriggers"
_ "github.com/googleapis/genai-toolbox/internal/tools/postgres/postgreslistviews"
_ "github.com/googleapis/genai-toolbox/internal/tools/postgres/postgreslongrunningtransactions"

View File

@@ -1488,7 +1488,7 @@ func TestPrebuiltTools(t *testing.T) {
wantToolset: server.ToolsetConfigs{
"alloydb_postgres_database_tools": tools.ToolsetConfig{
Name: "alloydb_postgres_database_tools",
ToolNames: []string{"execute_sql", "list_tables", "list_active_queries", "list_available_extensions", "list_installed_extensions", "list_autovacuum_configurations", "list_memory_configurations", "list_top_bloated_tables", "list_replication_slots", "list_invalid_indexes", "get_query_plan", "list_views", "list_schemas", "database_overview", "list_triggers", "list_indexes", "list_sequences", "long_running_transactions", "list_locks", "replication_stats", "list_query_stats", "get_column_cardinality", "list_publication_tables", "list_tablespaces", "list_pg_settings", "list_database_stats", "list_roles"},
ToolNames: []string{"execute_sql", "list_tables", "list_active_queries", "list_available_extensions", "list_installed_extensions", "list_autovacuum_configurations", "list_memory_configurations", "list_top_bloated_tables", "list_replication_slots", "list_invalid_indexes", "get_query_plan", "list_views", "list_schemas", "database_overview", "list_triggers", "list_indexes", "list_sequences", "long_running_transactions", "list_locks", "replication_stats", "list_query_stats", "get_column_cardinality", "list_publication_tables", "list_tablespaces", "list_pg_settings", "list_database_stats", "list_roles", "list_table_stats"},
},
},
},
@@ -1518,7 +1518,7 @@ func TestPrebuiltTools(t *testing.T) {
wantToolset: server.ToolsetConfigs{
"cloud_sql_postgres_database_tools": tools.ToolsetConfig{
Name: "cloud_sql_postgres_database_tools",
ToolNames: []string{"execute_sql", "list_tables", "list_active_queries", "list_available_extensions", "list_installed_extensions", "list_autovacuum_configurations", "list_memory_configurations", "list_top_bloated_tables", "list_replication_slots", "list_invalid_indexes", "get_query_plan", "list_views", "list_schemas", "database_overview", "list_triggers", "list_indexes", "list_sequences", "long_running_transactions", "list_locks", "replication_stats", "list_query_stats", "get_column_cardinality", "list_publication_tables", "list_tablespaces", "list_pg_settings", "list_database_stats", "list_roles"},
ToolNames: []string{"execute_sql", "list_tables", "list_active_queries", "list_available_extensions", "list_installed_extensions", "list_autovacuum_configurations", "list_memory_configurations", "list_top_bloated_tables", "list_replication_slots", "list_invalid_indexes", "get_query_plan", "list_views", "list_schemas", "database_overview", "list_triggers", "list_indexes", "list_sequences", "long_running_transactions", "list_locks", "replication_stats", "list_query_stats", "get_column_cardinality", "list_publication_tables", "list_tablespaces", "list_pg_settings", "list_database_stats", "list_roles", "list_table_stats"},
},
},
},
@@ -1598,7 +1598,7 @@ func TestPrebuiltTools(t *testing.T) {
wantToolset: server.ToolsetConfigs{
"looker_tools": tools.ToolsetConfig{
Name: "looker_tools",
ToolNames: []string{"get_models", "get_explores", "get_dimensions", "get_measures", "get_filters", "get_parameters", "query", "query_sql", "query_url", "get_looks", "run_look", "make_look", "get_dashboards", "run_dashboard", "make_dashboard", "add_dashboard_element", "health_pulse", "health_analyze", "health_vacuum", "dev_mode", "get_projects", "get_project_files", "get_project_file", "create_project_file", "update_project_file", "delete_project_file", "get_connections", "get_connection_schemas", "get_connection_databases", "get_connection_tables", "get_connection_table_columns"},
ToolNames: []string{"get_models", "get_explores", "get_dimensions", "get_measures", "get_filters", "get_parameters", "query", "query_sql", "query_url", "get_looks", "run_look", "make_look", "get_dashboards", "run_dashboard", "make_dashboard", "add_dashboard_element", "add_dashboard_filter", "generate_embed_url", "health_pulse", "health_analyze", "health_vacuum", "dev_mode", "get_projects", "get_project_files", "get_project_file", "create_project_file", "update_project_file", "delete_project_file", "get_connections", "get_connection_schemas", "get_connection_databases", "get_connection_tables", "get_connection_table_columns"},
},
},
},
@@ -1618,7 +1618,7 @@ func TestPrebuiltTools(t *testing.T) {
wantToolset: server.ToolsetConfigs{
"postgres_database_tools": tools.ToolsetConfig{
Name: "postgres_database_tools",
ToolNames: []string{"execute_sql", "list_tables", "list_active_queries", "list_available_extensions", "list_installed_extensions", "list_autovacuum_configurations", "list_memory_configurations", "list_top_bloated_tables", "list_replication_slots", "list_invalid_indexes", "get_query_plan", "list_views", "list_schemas", "database_overview", "list_triggers", "list_indexes", "list_sequences", "long_running_transactions", "list_locks", "replication_stats", "list_query_stats", "get_column_cardinality", "list_publication_tables", "list_tablespaces", "list_pg_settings", "list_database_stats", "list_roles"},
ToolNames: []string{"execute_sql", "list_tables", "list_active_queries", "list_available_extensions", "list_installed_extensions", "list_autovacuum_configurations", "list_memory_configurations", "list_top_bloated_tables", "list_replication_slots", "list_invalid_indexes", "get_query_plan", "list_views", "list_schemas", "database_overview", "list_triggers", "list_indexes", "list_sequences", "long_running_transactions", "list_locks", "replication_stats", "list_query_stats", "get_column_cardinality", "list_publication_tables", "list_tablespaces", "list_pg_settings", "list_database_stats", "list_roles", "list_table_stats"},
},
},
},

View File

@@ -1 +1 @@
0.22.0
0.23.0

View File

@@ -234,7 +234,7 @@
},
"outputs": [],
"source": [
"version = \"0.22.0\" # x-release-please-version\n",
"version = \"0.23.0\" # x-release-please-version\n",
"! curl -O https://storage.googleapis.com/genai-toolbox/v{version}/linux/amd64/toolbox\n",
"\n",
"# Make the binary executable\n",

View File

@@ -71,6 +71,22 @@ redeploying your application.
## Getting Started
### (Non-production) Running Toolbox
You can run Toolbox directly with a [configuration file](../configure.md):
```sh
npx @toolbox-sdk/server --tools-file tools.yaml
```
This runs the latest version of the Toolbox server with your configuration file.
{{< notice note >}}
This method should only be used for non-production use cases such as
experimentation. For any production use-cases, please consider [Installing the
server](#installing-the-server) and then [running it](#running-the-server).
{{< /notice >}}
### Installing the server
For the latest version, check the [releases page][releases] and use the
@@ -87,7 +103,7 @@ To install Toolbox as a binary on Linux (AMD64):
```sh
# see releases page for other versions
export VERSION=0.22.0
export VERSION=0.23.0
curl -L -o toolbox https://storage.googleapis.com/genai-toolbox/v$VERSION/linux/amd64/toolbox
chmod +x toolbox
```
@@ -98,7 +114,7 @@ To install Toolbox as a binary on macOS (Apple Silicon):
```sh
# see releases page for other versions
export VERSION=0.22.0
export VERSION=0.23.0
curl -L -o toolbox https://storage.googleapis.com/genai-toolbox/v$VERSION/darwin/arm64/toolbox
chmod +x toolbox
```
@@ -109,7 +125,7 @@ To install Toolbox as a binary on macOS (Intel):
```sh
# see releases page for other versions
export VERSION=0.22.0
export VERSION=0.23.0
curl -L -o toolbox https://storage.googleapis.com/genai-toolbox/v$VERSION/darwin/amd64/toolbox
chmod +x toolbox
```
@@ -120,7 +136,7 @@ To install Toolbox as a binary on Windows (Command Prompt):
```cmd
:: see releases page for other versions
set VERSION=0.22.0
set VERSION=0.23.0
curl -o toolbox.exe "https://storage.googleapis.com/genai-toolbox/v%VERSION%/windows/amd64/toolbox.exe"
```
@@ -130,7 +146,7 @@ To install Toolbox as a binary on Windows (PowerShell):
```powershell
# see releases page for other versions
$VERSION = "0.21.0"
$VERSION = "0.23.0"
curl.exe -o toolbox.exe "https://storage.googleapis.com/genai-toolbox/v$VERSION/windows/amd64/toolbox.exe"
```
@@ -142,7 +158,7 @@ You can also install Toolbox as a container:
```sh
# see releases page for other versions
export VERSION=0.22.0
export VERSION=0.23.0
docker pull us-central1-docker.pkg.dev/database-toolbox/toolbox/toolbox:$VERSION
```
@@ -161,7 +177,7 @@ To install from source, ensure you have the latest version of
[Go installed](https://go.dev/doc/install), and then run the following command:
```sh
go install github.com/googleapis/genai-toolbox@v0.22.0
go install github.com/googleapis/genai-toolbox@v0.23.0
```
{{% /tab %}}

View File

@@ -105,7 +105,7 @@ In this section, we will download Toolbox, configure our tools in a
<!-- {x-release-please-start-version} -->
```bash
export OS="linux/amd64" # one of linux/amd64, darwin/arm64, darwin/amd64, or windows/amd64
curl -O https://storage.googleapis.com/genai-toolbox/v0.22.0/$OS/toolbox
curl -O https://storage.googleapis.com/genai-toolbox/v0.23.0/$OS/toolbox
```
<!-- {x-release-please-end} -->

View File

@@ -0,0 +1,245 @@
---
title: "Prompts using Gemini CLI"
type: docs
weight: 5
description: >
How to get started using Toolbox prompts locally with PostgreSQL and [Gemini CLI](https://pypi.org/project/gemini-cli/).
---
## Before you begin
This guide assumes you have already done the following:
1. Installed [PostgreSQL 16+ and the `psql` client][install-postgres].
[install-postgres]: https://www.postgresql.org/download/
## Step 1: Set up your database
In this section, we will create a database, insert some data that needs to be
accessed by our agent, and create a database user for Toolbox to connect with.
1. Connect to postgres using the `psql` command:
```bash
psql -h 127.0.0.1 -U postgres
```
Here, `postgres` denotes the default postgres superuser.
{{< notice info >}}
#### **Having trouble connecting?**
* **Password Prompt:** If you are prompted for a password for the `postgres`
user and do not know it (or a blank password doesn't work), your PostgreSQL
installation might require a password or a different authentication method.
* **`FATAL: role "postgres" does not exist`:** This error means the default
`postgres` superuser role isn't available under that name on your system.
* **`Connection refused`:** Ensure your PostgreSQL server is actually running.
You can typically check with `sudo systemctl status postgresql` and start it
with `sudo systemctl start postgresql` on Linux systems.
<br/>
#### **Common Solution**
For password issues or if the `postgres` role seems inaccessible directly, try
switching to the `postgres` operating system user first. This user often has
permission to connect without a password for local connections (this is called
peer authentication).
```bash
sudo -i -u postgres
psql -h 127.0.0.1
```
Once you are in the `psql` shell using this method, you can proceed with the
database creation steps below. Afterwards, type `\q` to exit `psql`, and then
`exit` to return to your normal user shell.
If desired, once connected to `psql` as the `postgres` OS user, you can set a
password for the `postgres` *database* user using: `ALTER USER postgres WITH
PASSWORD 'your_chosen_password';`. This would allow direct connection with `-U
postgres` and a password next time.
{{< /notice >}}
1. Create a new database and a new user:
{{< notice tip >}}
For a real application, it's best to follow the principle of least privilege
and only grant the privileges your application needs.
{{< /notice >}}
```sql
CREATE USER toolbox_user WITH PASSWORD 'my-password';
CREATE DATABASE toolbox_db;
GRANT ALL PRIVILEGES ON DATABASE toolbox_db TO toolbox_user;
ALTER DATABASE toolbox_db OWNER TO toolbox_user;
```
1. End the database session:
```bash
\q
```
(If you used `sudo -i -u postgres` and then `psql`, remember you might also
need to type `exit` after `\q` to leave the `postgres` user's shell
session.)
1. Connect to your database with your new user:
```bash
psql -h 127.0.0.1 -U toolbox_user -d toolbox_db
```
1. Create the required tables using the following commands:
```sql
CREATE TABLE users (
id SERIAL PRIMARY KEY,
username VARCHAR(50) NOT NULL,
email VARCHAR(100) UNIQUE NOT NULL,
created_at TIMESTAMPTZ DEFAULT NOW()
);
CREATE TABLE restaurants (
id SERIAL PRIMARY KEY,
name VARCHAR(100) NOT NULL,
location VARCHAR(100)
);
CREATE TABLE reviews (
id SERIAL PRIMARY KEY,
user_id INT REFERENCES users(id),
restaurant_id INT REFERENCES restaurants(id),
rating INT CHECK (rating >= 1 AND rating <= 5),
review_text TEXT,
is_published BOOLEAN DEFAULT false,
moderation_status VARCHAR(50) DEFAULT 'pending_manual_review',
created_at TIMESTAMPTZ DEFAULT NOW()
);
```
1. Insert dummy data into the tables.
```sql
INSERT INTO users (id, username, email) VALUES
(123, 'jane_d', 'jane.d@example.com'),
(124, 'john_s', 'john.s@example.com'),
(125, 'sam_b', 'sam.b@example.com');
INSERT INTO restaurants (id, name, location) VALUES
(455, 'Pizza Palace', '123 Main St'),
(456, 'The Corner Bistro', '456 Oak Ave'),
(457, 'Sushi Spot', '789 Pine Ln');
INSERT INTO reviews (user_id, restaurant_id, rating, review_text, is_published, moderation_status) VALUES
(124, 455, 5, 'Best pizza in town! The crust was perfect.', true, 'approved'),
(125, 457, 4, 'Great sushi, very fresh. A bit pricey but worth it.', true, 'approved'),
(123, 457, 5, 'Absolutely loved the dragon roll. Will be back!', true, 'approved'),
(123, 456, 4, 'The atmosphere was lovely and the food was great. My photo upload might have been weird though.', false, 'pending_manual_review'),
(125, 456, 1, 'This review contains inappropriate language.', false, 'rejected');
```
1. End the database session:
```bash
\q
```
## Step 2: Configure Toolbox
Create a file named `tools.yaml`. This file defines the database connection, the
SQL tools available, and the prompts the agents will use.
```yaml
sources:
my-foodiefind-db:
kind: postgres
host: 127.0.0.1
port: 5432
database: toolbox_db
user: toolbox_user
password: my-password
tools:
find_user_by_email:
kind: postgres-sql
source: my-foodiefind-db
description: Find a user's ID by their email address.
parameters:
- name: email
type: string
description: The email address of the user to find.
statement: SELECT id FROM users WHERE email = $1;
find_restaurant_by_name:
kind: postgres-sql
source: my-foodiefind-db
description: Find a restaurant's ID by its exact name.
parameters:
- name: name
type: string
description: The name of the restaurant to find.
statement: SELECT id FROM restaurants WHERE name = $1;
find_review_by_user_and_restaurant:
kind: postgres-sql
source: my-foodiefind-db
description: Find the full record for a specific review using the user's ID and the restaurant's ID.
parameters:
- name: user_id
type: integer
description: The numerical ID of the user.
- name: restaurant_id
type: integer
description: The numerical ID of the restaurant.
statement: SELECT * FROM reviews WHERE user_id = $1 AND restaurant_id = $2;
prompts:
investigate_missing_review:
description: "Investigates a user's missing review by finding the user, restaurant, and the review itself, then analyzing its status."
arguments:
- name: "user_email"
description: "The email of the user who wrote the review."
- name: "restaurant_name"
description: "The name of the restaurant being reviewed."
messages:
- content: >-
**Goal:** Find the review written by the user with email '{{.user_email}}' for the restaurant named '{{.restaurant_name}}' and understand its status.
**Workflow:**
1. Use the `find_user_by_email` tool with the email '{{.user_email}}' to get the `user_id`.
2. Use the `find_restaurant_by_name` tool with the name '{{.restaurant_name}}' to get the `restaurant_id`.
3. Use the `find_review_by_user_and_restaurant` tool with the `user_id` and `restaurant_id` you just found.
4. Analyze the results from the final tool call. Examine the `is_published` and `moderation_status` fields and explain the review's status to the user in a clear, human-readable sentence.
```
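The `{{.user_email}}` placeholders above use Go's `text/template` syntax. A minimal sketch of how such substitution behaves (an illustration only; Toolbox's actual prompt rendering may differ):

```go
package main

import (
	"os"
	"text/template"
)

func main() {
	// Same placeholder syntax as the prompt's content field.
	const msg = "Find the review by '{{.user_email}}' for '{{.restaurant_name}}'.\n"
	t := template.Must(template.New("prompt").Parse(msg))

	// Argument values supplied when the prompt is invoked.
	args := map[string]string{
		"user_email":      "jane.d@example.com",
		"restaurant_name": "The Corner Bistro",
	}
	_ = t.Execute(os.Stdout, args)
	// Output: Find the review by 'jane.d@example.com' for 'The Corner Bistro'.
}
```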
## Step 3: Connect to Gemini CLI
Configure the Gemini CLI to talk to your local Toolbox MCP server.
1. Open or create your Gemini settings file: `~/.gemini/settings.json`.
2. Add the following configuration to the file:
```json
{
"mcpServers": {
"MCPToolbox": {
"httpUrl": "http://localhost:5000/mcp"
}
},
"mcp": {
"allowed": ["MCPToolbox"]
}
}
```
3. Start Gemini CLI using
```sh
gemini
```
In case Gemini CLI is already running, use `/mcp refresh` to refresh the MCP server.
4. Use gemini slash commands to run your prompt:
```sh
/investigate_missing_review --user_email="jane.d@example.com" --restaurant_name="The Corner Bistro"
```

View File

@@ -13,7 +13,7 @@ In this section, we will download Toolbox, configure our tools in a
<!-- {x-release-please-start-version} -->
```bash
export OS="linux/amd64" # one of linux/amd64, darwin/arm64, darwin/amd64, or windows/amd64
curl -O https://storage.googleapis.com/genai-toolbox/v0.22.0/$OS/toolbox
curl -O https://storage.googleapis.com/genai-toolbox/v0.23.0/$OS/toolbox
```
<!-- {x-release-please-end} -->

View File

@@ -49,19 +49,19 @@ to expose your developer assistant tools to a Looker instance:
{{< tabpane persist=header >}}
{{< tab header="linux/amd64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.22.0/linux/amd64/toolbox
curl -O https://storage.googleapis.com/genai-toolbox/v0.23.0/linux/amd64/toolbox
{{< /tab >}}
{{< tab header="darwin/arm64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.22.0/darwin/arm64/toolbox
curl -O https://storage.googleapis.com/genai-toolbox/v0.23.0/darwin/arm64/toolbox
{{< /tab >}}
{{< tab header="darwin/amd64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.22.0/darwin/amd64/toolbox
curl -O https://storage.googleapis.com/genai-toolbox/v0.23.0/darwin/amd64/toolbox
{{< /tab >}}
{{< tab header="windows/amd64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.22.0/windows/amd64/toolbox.exe
curl -O https://storage.googleapis.com/genai-toolbox/v0.23.0/windows/amd64/toolbox.exe
{{< /tab >}}
{{< /tabpane >}}
<!-- {x-release-please-end} -->
@@ -323,6 +323,8 @@ instance and create new saved content.
data
1. **make_dashboard**: Create a saved dashboard in Looker and return the URL
1. **add_dashboard_element**: Add a tile to a dashboard
1. **add_dashboard_filter**: Add a filter to a dashboard
1. **generate_embed_url**: Generate an embed url for content
### Looker Instance Health Tools

View File

@@ -45,19 +45,19 @@ instance:
<!-- {x-release-please-start-version} -->
{{< tabpane persist=header >}}
{{< tab header="linux/amd64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.22.0/linux/amd64/toolbox
curl -O https://storage.googleapis.com/genai-toolbox/v0.23.0/linux/amd64/toolbox
{{< /tab >}}
{{< tab header="darwin/arm64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.22.0/darwin/arm64/toolbox
curl -O https://storage.googleapis.com/genai-toolbox/v0.23.0/darwin/arm64/toolbox
{{< /tab >}}
{{< tab header="darwin/amd64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.22.0/darwin/amd64/toolbox
curl -O https://storage.googleapis.com/genai-toolbox/v0.23.0/darwin/amd64/toolbox
{{< /tab >}}
{{< tab header="windows/amd64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.22.0/windows/amd64/toolbox.exe
curl -O https://storage.googleapis.com/genai-toolbox/v0.23.0/windows/amd64/toolbox.exe
{{< /tab >}}
{{< /tabpane >}}
<!-- {x-release-please-end} -->

View File

@@ -43,19 +43,19 @@ expose your developer assistant tools to a MySQL instance:
<!-- {x-release-please-start-version} -->
{{< tabpane persist=header >}}
{{< tab header="linux/amd64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.22.0/linux/amd64/toolbox
curl -O https://storage.googleapis.com/genai-toolbox/v0.23.0/linux/amd64/toolbox
{{< /tab >}}
{{< tab header="darwin/arm64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.22.0/darwin/arm64/toolbox
curl -O https://storage.googleapis.com/genai-toolbox/v0.23.0/darwin/arm64/toolbox
{{< /tab >}}
{{< tab header="darwin/amd64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.22.0/darwin/amd64/toolbox
curl -O https://storage.googleapis.com/genai-toolbox/v0.23.0/darwin/amd64/toolbox
{{< /tab >}}
{{< tab header="windows/amd64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.22.0/windows/amd64/toolbox.exe
curl -O https://storage.googleapis.com/genai-toolbox/v0.23.0/windows/amd64/toolbox.exe
{{< /tab >}}
{{< /tabpane >}}
<!-- {x-release-please-end} -->

View File

@@ -44,19 +44,19 @@ expose your developer assistant tools to a Neo4j instance:
<!-- {x-release-please-start-version} -->
{{< tabpane persist=header >}}
{{< tab header="linux/amd64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.22.0/linux/amd64/toolbox
curl -O https://storage.googleapis.com/genai-toolbox/v0.23.0/linux/amd64/toolbox
{{< /tab >}}
{{< tab header="darwin/arm64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.22.0/darwin/arm64/toolbox
curl -O https://storage.googleapis.com/genai-toolbox/v0.23.0/darwin/arm64/toolbox
{{< /tab >}}
{{< tab header="darwin/amd64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.22.0/darwin/amd64/toolbox
curl -O https://storage.googleapis.com/genai-toolbox/v0.23.0/darwin/amd64/toolbox
{{< /tab >}}
{{< tab header="windows/amd64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.22.0/windows/amd64/toolbox.exe
curl -O https://storage.googleapis.com/genai-toolbox/v0.23.0/windows/amd64/toolbox.exe
{{< /tab >}}
{{< /tabpane >}}
<!-- {x-release-please-end} -->

View File

@@ -56,19 +56,19 @@ Omni](https://cloud.google.com/alloydb/omni/current/docs/overview).
<!-- {x-release-please-start-version} -->
{{< tabpane persist=header >}}
{{< tab header="linux/amd64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.22.0/linux/amd64/toolbox
curl -O https://storage.googleapis.com/genai-toolbox/v0.23.0/linux/amd64/toolbox
{{< /tab >}}
{{< tab header="darwin/arm64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.22.0/darwin/arm64/toolbox
curl -O https://storage.googleapis.com/genai-toolbox/v0.23.0/darwin/arm64/toolbox
{{< /tab >}}
{{< tab header="darwin/amd64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.22.0/darwin/amd64/toolbox
curl -O https://storage.googleapis.com/genai-toolbox/v0.23.0/darwin/amd64/toolbox
{{< /tab >}}
{{< tab header="windows/amd64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.22.0/windows/amd64/toolbox.exe
curl -O https://storage.googleapis.com/genai-toolbox/v0.23.0/windows/amd64/toolbox.exe
{{< /tab >}}
{{< /tabpane >}}
<!-- {x-release-please-end} -->

View File

@@ -43,19 +43,19 @@ to expose your developer assistant tools to a SQLite instance:
<!-- {x-release-please-start-version} -->
{{< tabpane persist=header >}}
{{< tab header="linux/amd64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.22.0/linux/amd64/toolbox
curl -O https://storage.googleapis.com/genai-toolbox/v0.23.0/linux/amd64/toolbox
{{< /tab >}}
{{< tab header="darwin/arm64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.22.0/darwin/arm64/toolbox
curl -O https://storage.googleapis.com/genai-toolbox/v0.23.0/darwin/arm64/toolbox
{{< /tab >}}
{{< tab header="darwin/amd64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.22.0/darwin/amd64/toolbox
curl -O https://storage.googleapis.com/genai-toolbox/v0.23.0/darwin/amd64/toolbox
{{< /tab >}}
{{< tab header="windows/amd64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.22.0/windows/amd64/toolbox.exe
curl -O https://storage.googleapis.com/genai-toolbox/v0.23.0/windows/amd64/toolbox.exe
{{< /tab >}}
{{< /tabpane >}}
<!-- {x-release-please-end} -->

View File

@@ -416,6 +416,8 @@ details on how to connect your AI tools (IDEs) to databases via Toolbox and MCP.
* `run_dashboard`: Runs the queries associated with a dashboard.
* `make_dashboard`: Creates a new dashboard.
* `add_dashboard_element`: Adds a tile to a dashboard.
* `add_dashboard_filter`: Adds a filter to a dashboard.
* `generate_embed_url`: Generate an embed url for content.
* `health_pulse`: Test the health of a Looker instance.
* `health_analyze`: Analyze the LookML usage of a Looker instance.
* `health_vacuum`: Suggest LookML elements that can be removed.

View File

@@ -77,6 +77,9 @@ cluster][alloydb-free-trial].
- [`postgres-get-column-cardinality`](../tools/postgres/postgres-get-column-cardinality.md)
List cardinality of columns in a table in a PostgreSQL database.
- [`postgres-list-table-stats`](../tools/postgres/postgres-list-table-stats.md)
List statistics of a table in a PostgreSQL database.
- [`postgres-list-publication-tables`](../tools/postgres/postgres-list-publication-tables.md)
List publication tables in a PostgreSQL database.

View File

@@ -58,6 +58,7 @@ to a database by following these instructions][csql-pg-quickstart].
- [`postgres-list-sequences`](../tools/postgres/postgres-list-sequences.md)
List sequences in a PostgreSQL database.
- [`postgres-long-running-transactions`](../tools/postgres/postgres-long-running-transactions.md)
List long running transactions in a PostgreSQL database.
@@ -73,6 +74,9 @@ to a database by following these instructions][csql-pg-quickstart].
- [`postgres-get-column-cardinality`](../tools/postgres/postgres-get-column-cardinality.md)
List cardinality of columns in a table in a PostgreSQL database.
- [`postgres-list-table-stats`](../tools/postgres/postgres-list-table-stats.md)
List statistics of a table in a PostgreSQL database.
- [`postgres-list-publication-tables`](../tools/postgres/postgres-list-publication-tables.md)
List publication tables in a PostgreSQL database.

View File

@@ -91,18 +91,17 @@ instead of hardcoding your secrets into the configuration file.
## Reference
| **field** | **type** | **required** | **description** |
|----------------------|:--------:|:------------:|-------------------------------------------------------------------------------------------|
| kind | string | true | Must be "looker". |
| base_url | string | true | The URL of your Looker server with no trailing /. |
| client_id | string | false | The client id assigned by Looker. |
| client_secret | string | false | The client secret assigned by Looker. |
| verify_ssl | string | false | Whether to check the ssl certificate of the server. |
| project | string | false | The project id to use in Google Cloud. |
| location | string | false | The location to use in Google Cloud. (default: us) |
| timeout | string | false | Maximum time to wait for query execution (e.g. "30s", "2m"). By default, 120s is applied. |
| use_client_oauth | string | false | Use OAuth tokens instead of client_id and client_secret. (default: false) If a header |
| | | | name is provided, it will be used instead of "Authorization". |
| show_hidden_models | string | false | Show or hide hidden models. (default: true) |
| show_hidden_explores | string | false | Show or hide hidden explores. (default: true) |
| show_hidden_fields | string | false | Show or hide hidden fields. (default: true) |
| **field** | **type** | **required** | **description** |
|----------------------|:--------:|:------------:|-----------------------------------------------------------------------------------------------------------------------------------------------------|
| kind | string | true | Must be "looker". |
| base_url | string | true | The URL of your Looker server with no trailing /. |
| client_id | string | false | The client id assigned by Looker. |
| client_secret | string | false | The client secret assigned by Looker. |
| verify_ssl | string | false | Whether to check the ssl certificate of the server. |
| project | string | false | The project id to use in Google Cloud. |
| location | string | false | The location to use in Google Cloud. (default: us) |
| timeout | string | false | Maximum time to wait for query execution (e.g. "30s", "2m"). By default, 120s is applied. |
| use_client_oauth | string | false | Use OAuth tokens instead of client_id and client_secret. (default: false) If a header name is provided, it will be used instead of "Authorization". |
| show_hidden_models | string | false | Show or hide hidden models. (default: true) |
| show_hidden_explores | string | false | Show or hide hidden explores. (default: true) |
| show_hidden_fields | string | false | Show or hide hidden fields. (default: true) |

View File

@@ -68,6 +68,9 @@ reputation for reliability, feature robustness, and performance.
- [`postgres-get-column-cardinality`](../tools/postgres/postgres-get-column-cardinality.md)
List cardinality of columns in a table in a PostgreSQL database.
- [`postgres-list-table-stats`](../tools/postgres/postgres-list-table-stats.md)
List statistics of a table in a PostgreSQL database.
- [`postgres-list-publication-tables`](../tools/postgres/postgres-list-publication-tables.md)
List publication tables in a PostgreSQL database.

View File

@@ -10,27 +10,18 @@ aliases:
## About
The `looker-add-dashboard-element` creates a dashboard element
in the given dashboard.
The `looker-add-dashboard-element` tool creates a new tile (element) within an existing Looker dashboard.
Tiles are added in the order this tool is called for a given `dashboard_id`.
CRITICAL ORDER OF OPERATIONS:
1. Create the dashboard using `make_dashboard`.
2. Add any dashboard-level filters using `add_dashboard_filter`.
3. Then, add elements (tiles) using this tool.
It's compatible with the following sources:
- [looker](../../sources/looker.md)
`looker-add-dashboard-element` takes eleven parameters:
1. the `model`
2. the `explore`
3. the `fields` list
4. an optional set of `filters`
5. an optional set of `pivots`
6. an optional set of `sorts`
7. an optional `limit`
8. an optional `tz`
9. an optional `vis_config`
10. the `title`
11. the `dashboard_id`
## Example
```yaml
@@ -39,24 +30,37 @@ tools:
kind: looker-add-dashboard-element
source: looker-source
description: |
add_dashboard_element Tool
This tool creates a new tile (element) within an existing Looker dashboard.
Tiles are added in the order this tool is called for a given `dashboard_id`.
This tool creates a new tile in a Looker dashboard using
the query parameters and the vis_config specified.
CRITICAL ORDER OF OPERATIONS:
1. Create the dashboard using `make_dashboard`.
2. Add any dashboard-level filters using `add_dashboard_filter`.
3. Then, add elements (tiles) using this tool.
Most of the parameters are the same as the query_url
tool. In addition, there is a title that may be provided.
The dashboard_id must be specified. That is obtained
from calling make_dashboard.
Required Parameters:
- dashboard_id: The ID of the target dashboard, obtained from `make_dashboard`.
- model_name, explore_name, fields: These query parameters are inherited
from the `query` tool and are required to define the data for the tile.
This tool can be called many times for one dashboard_id
and the resulting tiles will be added in order.
Optional Parameters:
- title: An optional title for the dashboard tile.
- pivots, filters, sorts, limit, query_timezone: These query parameters are
inherited from the `query` tool and can be used to customize the tile's query.
- vis_config: A JSON object defining the visualization settings for this tile.
The structure and options are the same as for the `query_url` tool's `vis_config`.
Connecting to Dashboard Filters:
A dashboard element can be connected to one or more dashboard filters (created with
`add_dashboard_filter`). To do this, specify the `name` of the dashboard filter
and the `field` from the element's query that the filter should apply to.
The format for specifying the field is `view_name.field_name`.
```
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|----------------------------------------------------|
| kind | string | true | Must be "looker-add-dashboard-element" |
| source | string | true | Name of the source the SQL should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |
|:------------|:--------:|:------------:|----------------------------------------------------|
| kind | string | true | Must be "looker-add-dashboard-element". |
| source | string | true | Name of the source the SQL should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |

View File

@@ -0,0 +1,75 @@
---
title: "looker-add-dashboard-filter"
type: docs
weight: 1
description: >
The "looker-add-dashboard-filter" tool adds a filter to a specified dashboard.
aliases:
- /resources/tools/looker-add-dashboard-filter
---
## About
The `looker-add-dashboard-filter` tool adds a filter to a specified Looker dashboard.
CRITICAL ORDER OF OPERATIONS:
1. Create a dashboard using `make_dashboard`.
2. Add all desired filters using this tool (`add_dashboard_filter`).
3. Finally, add dashboard elements (tiles) using `add_dashboard_element`.
It's compatible with the following sources:
- [looker](../../sources/looker.md)
## Parameters
| **parameter** | **type** | **required** | **default** | **description** |
|:----------------------|:--------:|:-----------------:|:--------------:|-------------------------------------------------------------------------------------------------------------------------------|
| dashboard_id | string | true | none | The ID of the dashboard to add the filter to, obtained from `make_dashboard`. |
| name | string | true | none | A unique internal identifier for the filter. This name is used later in `add_dashboard_element` to bind tiles to this filter. |
| title | string | true | none | The label displayed to users in the Looker UI. |
| filter_type            | string   | true              | `field_filter` | The type of filter. Can be `date_filter`, `number_filter`, `string_filter`, or `field_filter`.                                  |
| default_value | string | false | none | The initial value for the filter. |
| model | string | if `field_filter` | none | The name of the LookML model, obtained from `get_models`. |
| explore | string | if `field_filter` | none | The name of the explore within the model, obtained from `get_explores`. |
| dimension | string | if `field_filter` | none | The name of the field (e.g., `view_name.field_name`) to base the filter on, obtained from `get_dimensions`. |
| allow_multiple_values  | boolean  | false             | true           | Whether the dashboard filter should allow multiple values.                                                                      |
| required               | boolean  | false             | false          | Whether the dashboard filter is required to run the dashboard.                                                                  |
## Example
```yaml
tools:
add_dashboard_filter:
kind: looker-add-dashboard-filter
source: looker-source
description: |
This tool adds a filter to a Looker dashboard.
CRITICAL ORDER OF OPERATIONS:
1. Create a dashboard using `make_dashboard`.
2. Add all desired filters using this tool (`add_dashboard_filter`).
3. Finally, add dashboard elements (tiles) using `add_dashboard_element`.
Parameters:
- dashboard_id (required): The ID from `make_dashboard`.
- name (required): A unique internal identifier for the filter. You will use this `name` later in `add_dashboard_element` to bind tiles to this filter.
- title (required): The label displayed to users in the UI.
- filter_type (required): One of `date_filter`, `number_filter`, `string_filter`, or `field_filter`.
- default_value (optional): The initial value for the filter.
Field Filters (`filter_type: field_filter`):
If creating a field filter, you must also provide:
- model
- explore
- dimension
The filter will inherit suggestions and type information from this LookML field.
```
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------:|:------------:|----------------------------------------------------|
| kind | string | true | Must be "looker-add-dashboard-filter". |
| source | string | true | Name of the source the SQL should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |

View File

@@ -34,9 +34,10 @@ tools:
kind: looker-conversational-analytics
source: looker-source
description: |
Use this tool to perform data analysis, get insights,
or answer complex questions about the contents of specific
Looker explores.
Use this tool to ask questions about your data using the Looker Conversational
Analytics API. You must provide a natural language query and a list of
1 to 5 model and explore combinations (e.g. [{'model': 'the_model', 'explore': 'the_explore'}]).
Use the 'get_models' and 'get_explores' tools to discover available models and explores.
```
## Reference

View File

@@ -27,13 +27,18 @@ tools:
kind: looker-create-project-file
source: looker-source
description: |
create_project_file Tool
This tool creates a new LookML file within a specified project, populating
it with the provided content.
Given a project_id and a file path within the project, as well as the content
of a LookML file, this tool will create a new file within the project.
Prerequisite: The Looker session must be in Development Mode. Use `dev_mode: true` first.
This tool must be called after the dev_mode tool has changed the session to
dev mode.
Parameters:
- project_id (required): The unique ID of the LookML project.
- file_path (required): The desired path and filename for the new file within the project.
- content (required): The full LookML content to write into the new file.
Output:
A confirmation message upon successful file creation.
```
## Reference

View File

@@ -26,13 +26,17 @@ tools:
kind: looker-delete-project-file
source: looker-source
description: |
delete_project_file Tool
This tool permanently deletes a specified LookML file from within a project.
Use with caution, as this action cannot be undone through the API.
Given a project_id and a file path within the project, this tool will delete
the file from the project.
Prerequisite: The Looker session must be in Development Mode. Use `dev_mode: true` first.
This tool must be called after the dev_mode tool has changed the session to
dev mode.
Parameters:
- project_id (required): The unique ID of the LookML project.
- file_path (required): The exact path to the LookML file to delete within the project.
Output:
A confirmation message upon successful file deletion.
```
## Reference

View File

@@ -27,10 +27,13 @@ tools:
kind: looker-dev-mode
source: looker-source
description: |
dev_mode Tool
This tool allows toggling the Looker IDE session between Development Mode and Production Mode.
Development Mode enables making and testing changes to LookML projects.
Passing true to this tool switches the session to dev mode. Passing false to this tool switches the
session to production mode.
Parameters:
- enable (required): A boolean value.
- `true`: Switches the current session to Development Mode.
- `false`: Switches the current session to Production Mode.
```
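For example, switching the session into Development Mode before editing project files:
```json
{
  "enable": true
}
```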
## Reference

View File

@@ -36,11 +36,17 @@ tools:
kind: looker-generate-embed-url
source: looker-source
description: |
generate_embed_url Tool
This tool generates a signed, private embed URL for specific Looker content,
allowing users to access it directly.
This tool generates an embeddable URL for Looker content.
You need to provide the type of content (e.g., 'dashboards', 'looks', 'query-visualization')
and the ID of the content.
Parameters:
- type (required): The type of content to embed. Common values include:
- `dashboards`
- `looks`
- `explore`
- id (required): The unique identifier for the content.
- For dashboards and looks, use the numeric ID (e.g., "123").
- For explores, use the format "model_name/explore_name".
```
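An illustrative request for an explore, which uses the combined "model_name/explore_name" ID format (values hypothetical); for a dashboard or Look, `type` would be `dashboards` or `looks` with a numeric `id`:
```json
{
  "type": "explore",
  "id": "thelook/orders"
}
```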
## Reference

View File

@@ -26,10 +26,16 @@ tools:
kind: looker-get-connection-databases
source: looker-source
description: |
get_connection_databases Tool
This tool retrieves a list of databases available through a specified Looker connection.
This is only applicable for connections that support multiple databases.
Use `get_connections` to check if a connection supports multiple databases.
This tool will list the databases available from a connection if the connection
supports multiple databases.
Parameters:
- connection_name (required): The name of the database connection, obtained from `get_connections`.
Output:
A JSON array of strings, where each string is the name of an available database.
If the connection does not support multiple databases, an empty list or an error will be returned.
```
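An illustrative request (connection name hypothetical):
```json
{
  "connection_name": "my_bigquery_connection"
}
```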
## Reference

View File

@@ -26,10 +26,16 @@ tools:
kind: looker-get-connection-schemas
source: looker-source
description: |
get_connection_schemas Tool
This tool retrieves a list of database schemas available through a specified
Looker connection.
This tool will list the schemas available from a connection, filtered by
an optional database name.
Parameters:
- connection_name (required): The name of the database connection, obtained from `get_connections`.
- database (optional): An optional database name to filter the schemas.
Only applicable for connections that support multiple databases.
Output:
A JSON array of strings, where each string is the name of an available schema.
```
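An illustrative request that also narrows to one database (values hypothetical):
```json
{
  "connection_name": "my_connection",
  "database": "analytics"
}
```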
## Reference

View File

@@ -26,11 +26,20 @@ tools:
kind: looker-get-connection-table-columns
source: looker-source
description: |
get_connection_table_columns Tool
This tool retrieves a list of columns for one or more specified tables within a
given database schema and connection.
This tool will list the columns available from a connection, for all the tables
given in a comma separated list of table names, filtered by the
schema name and optional database name.
Parameters:
- connection_name (required): The name of the database connection, obtained from `get_connections`.
- schema (required): The name of the schema where the tables reside, obtained from `get_connection_schemas`.
- tables (required): A comma-separated string of table names for which to retrieve columns
(e.g., "users,orders,products"), obtained from `get_connection_tables`.
- database (optional): The name of the database to filter by. Only applicable for connections
that support multiple databases (check with `get_connections`).
Output:
A JSON array of objects, where each object represents a column and contains details
such as `table_name`, `column_name`, `data_type`, and `is_nullable`.
```
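An illustrative request (values hypothetical); note that `tables` is a single comma-separated string, not an array:
```json
{
  "connection_name": "my_connection",
  "schema": "public",
  "tables": "users,orders,products"
}
```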
## Reference

View File

@@ -27,10 +27,17 @@ tools:
kind: looker-get-connection-tables
source: looker-source
description: |
get_connection_tables Tool
This tool retrieves a list of tables available within a specified database schema
through a Looker connection.
This tool will list the tables available from a connection, filtered by the
schema name and optional database name.
Parameters:
- connection_name (required): The name of the database connection, obtained from `get_connections`.
- schema (required): The name of the schema to list tables from, obtained from `get_connection_schemas`.
- database (optional): The name of the database to filter by. Only applicable for connections
that support multiple databases (check with `get_connections`).
Output:
A JSON array of strings, where each string is the name of an available table.
```
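An illustrative request (values hypothetical):
```json
{
  "connection_name": "my_connection",
  "schema": "public"
}
```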
## Reference

View File

@@ -26,11 +26,18 @@ tools:
kind: looker-get-connections
source: looker-source
description: |
get_connections Tool
This tool retrieves a list of all database connections configured in the Looker system.
This tool will list all the connections available in the Looker system, as
well as the dialect name, the default schema, the database if applicable,
and whether the connection supports multiple databases.
Parameters:
This tool takes no parameters.
Output:
A JSON array of objects, each representing a database connection and including details such as:
- `name`: The connection's unique identifier.
- `dialect`: The database dialect (e.g., "mysql", "postgresql", "bigquery").
- `default_schema`: The default schema for the connection.
- `database`: The associated database name (if applicable).
- `supports_multiple_databases`: A boolean indicating if the connection can access multiple databases.
```
## Reference

View File

@@ -29,25 +29,29 @@ default to 100 and 0.
```yaml
tools:
get_dashboards:
kind: looker-get-dashboards
source: looker-source
description: |
get_dashboards Tool
This tool is used to search for saved dashboards in a Looker instance.
String search params use case-insensitive matching. String search
params can contain % and '_' as SQL LIKE pattern match wildcard
expressions. example="dan%" will match "danger" and "Danzig" but
not "David" example="D_m%" will match "Damage" and "dump".
Most search params can accept "IS NULL" and "NOT NULL" as special
expressions to match or exclude (respectively) rows where the
column is null.
The limit and offset are used to paginate the results.
The result of the get_dashboards tool is a list of json objects.
get_dashboards:
kind: looker-get-dashboards
source: looker-source
description: |
This tool searches for saved dashboards in a Looker instance. It returns a list of JSON objects, each representing a dashboard.
Search Parameters:
- title (optional): Filter by dashboard title (supports wildcards).
- folder_id (optional): Filter by the ID of the folder where the dashboard is saved.
- user_id (optional): Filter by the ID of the user who created the dashboard.
- description (optional): Filter by description content (supports wildcards).
- id (optional): Filter by specific dashboard ID.
- limit (optional): Maximum number of results to return. Defaults to a system limit.
- offset (optional): Starting point for pagination.
String Search Behavior:
- Case-insensitive matching.
- Supports SQL LIKE pattern match wildcards:
- `%`: Matches any sequence of zero or more characters. (e.g., `"finan%"` matches "financial", "finance")
- `_`: Matches any single character. (e.g., `"s_les"` matches "sales")
- Special expressions for null checks:
- `"IS NULL"`: Matches dashboards where the field is null.
- `"NOT NULL"`: Excludes dashboards where the field is null.
```
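A sketch of a search combining a wildcard with a null-check expression (values hypothetical):
```json
{
  "title": "finan%",
  "folder_id": "NOT NULL",
  "limit": 25,
  "offset": 0
}
```
This would match dashboards whose titles start with "finan" (case-insensitively) and that are saved in some folder.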
## Reference

View File

@@ -28,16 +28,20 @@ tools:
kind: looker-get-dimensions
source: looker-source
description: |
The get_dimensions tool retrieves the list of dimensions defined in
an explore.
This tool retrieves a list of dimensions defined within a specific Looker explore.
Dimensions are non-aggregatable attributes or characteristics of your data
(e.g., product name, order date, customer city) that can be used for grouping,
filtering, or segmenting query results.
It takes two parameters, the model_name looked up from get_models and the
explore_name looked up from get_explores.
Parameters:
- model_name (required): The name of the LookML model, obtained from `get_models`.
- explore_name (required): The name of the explore within the model, obtained from `get_explores`.
If this returns a suggestions field for a dimension, the contents of suggestions
can be used as filters for this field. If this returns a suggest_explore and
suggest_dimension, a query against that explore and dimension can be used to find
valid filters for this field.
Output Details:
- If a dimension includes a `suggestions` field, its contents are valid values
that can be used directly as filters for that dimension.
- If a `suggest_explore` and `suggest_dimension` are provided, you can query
that specified explore and dimension to retrieve a list of valid filter values.
```

View File

@@ -40,10 +40,13 @@ tools:
kind: looker-get-explores
source: looker-source
description: |
The get_explores tool retrieves the list of explores defined in a LookML model
in the Looker system.
This tool retrieves a list of explores defined within a specific LookML model.
Explores represent a curated view of your data, typically joining several
tables together to allow for focused analysis on a particular subject area.
The output provides details like the explore's `name` and `label`.
It takes one parameter, the model_name looked up from get_models.
Parameters:
- model_name (required): The name of the LookML model, obtained from `get_models`.
```
## Reference

View File

@@ -24,15 +24,22 @@ It's compatible with the following sources:
```yaml
tools:
get_dimensions:
get_filters:
kind: looker-get-filters
source: looker-source
description: |
The get_filters tool retrieves the list of filters defined in
an explore.
This tool retrieves a list of "filter-only fields" defined within a specific
Looker explore. These are special fields defined in LookML specifically to
create user-facing filter controls that do not directly affect the `GROUP BY`
clause of the SQL query. They are often used in conjunction with liquid templating
to create dynamic queries.
It takes two parameters, the model_name looked up from get_models and the
explore_name looked up from get_explores.
Note: Regular dimensions and measures can also be used as filters in a query.
This tool *only* returns fields explicitly defined as `filter:` in LookML.
Parameters:
- model_name (required): The name of the LookML model, obtained from `get_models`.
- explore_name (required): The name of the explore within the model, obtained from `get_explores`.
```
The response is a JSON array with the following elements:

View File

@@ -34,21 +34,26 @@ tools:
kind: looker-get-looks
source: looker-source
description: |
get_looks Tool
This tool searches for saved Looks (pre-defined queries and visualizations)
in a Looker instance. It returns a list of JSON objects, each representing a Look.
This tool is used to search for saved looks in a Looker instance.
String search params use case-insensitive matching. String search
params can contain % and '_' as SQL LIKE pattern match wildcard
expressions. example="dan%" will match "danger" and "Danzig" but
not "David" example="D_m%" will match "Damage" and "dump".
Search Parameters:
- title (optional): Filter by Look title (supports wildcards).
- folder_id (optional): Filter by the ID of the folder where the Look is saved.
- user_id (optional): Filter by the ID of the user who created the Look.
- description (optional): Filter by description content (supports wildcards).
- id (optional): Filter by specific Look ID.
- limit (optional): Maximum number of results to return. Defaults to a system limit.
- offset (optional): Starting point for pagination.
Most search params can accept "IS NULL" and "NOT NULL" as special
expressions to match or exclude (respectively) rows where the
column is null.
The limit and offset are used to paginate the results.
The result of the get_looks tool is a list of json objects.
String Search Behavior:
- Case-insensitive matching.
- Supports SQL LIKE pattern match wildcards:
- `%`: Matches any sequence of zero or more characters. (e.g., `"dan%"` matches "danger", "Danzig")
- `_`: Matches any single character. (e.g., `"D_m%"` matches "Damage", "dump")
- Special expressions for null checks:
- `"IS NULL"`: Matches Looks where the field is null.
- `"NOT NULL"`: Excludes Looks where the field is null.
```
## Reference

View File

@@ -28,16 +28,19 @@ tools:
kind: looker-get-measures
source: looker-source
description: |
The get_measures tool retrieves the list of measures defined in
an explore.
This tool retrieves a list of measures defined within a specific Looker explore.
Measures are aggregatable metrics (e.g., total sales, average price, count of users)
that are used for calculations and quantitative analysis in your queries.
It takes two parameters, the model_name looked up from get_models and the
explore_name looked up from get_explores.
Parameters:
- model_name (required): The name of the LookML model, obtained from `get_models`.
- explore_name (required): The name of the explore within the model, obtained from `get_explores`.
If this returns a suggestions field for a measure, the contents of suggestions
can be used as filters for this field. If this returns a suggest_explore and
suggest_dimension, a query against that explore and dimension can be used to find
valid filters for this field.
Output Details:
- If a measure includes a `suggestions` field, its contents are valid values
that can be used directly as filters for that measure.
- If a `suggest_explore` and `suggest_dimension` are provided, you can query
that specified explore and dimension to retrieve a list of valid filter values.
```

View File

@@ -26,9 +26,12 @@ tools:
kind: looker-get-models
source: looker-source
description: |
The get_models tool retrieves the list of LookML models in the Looker system.
This tool retrieves a list of available LookML models in the Looker instance.
LookML models define the data structure and relationships that users can query.
The output includes details like the model's `name` and `label`, which are
essential for subsequent calls to tools like `get_explores` or `query`.
It takes no parameters.
This tool takes no parameters.
```
## Reference

View File

@@ -28,11 +28,15 @@ tools:
kind: looker-get-parameters
source: looker-source
description: |
The get_parameters tool retrieves the list of parameters defined in
an explore.
This tool retrieves a list of parameters defined within a specific Looker explore.
LookML parameters are dynamic input fields that allow users to influence query
behavior without directly modifying the underlying LookML. They are often used
with `liquid` templating to create flexible dashboards and reports, enabling
users to choose dimensions, measures, or other query components at runtime.
It takes two parameters, the model_name looked up from get_models and the
explore_name looked up from get_explores.
Parameters:
- model_name (required): The name of the LookML model, obtained from `get_models`.
- explore_name (required): The name of the explore within the model, obtained from `get_explores`.
```
The response is a JSON array with the following elements:

View File

@@ -26,10 +26,15 @@ tools:
kind: looker-get-project-file
source: looker-source
description: |
get_project_file Tool
This tool retrieves the raw content of a specific LookML file from within a project.
Given a project_id and a file path within the project, this tool returns
the contents of the LookML file.
Parameters:
- project_id (required): The unique ID of the LookML project, obtained from `get_projects`.
- file_path (required): The path to the LookML file within the project,
typically obtained from `get_project_files`.
Output:
The raw text content of the specified LookML file.
```
## Reference

View File

@@ -26,10 +26,15 @@ tools:
kind: looker-get-project-files
source: looker-source
description: |
get_project_files Tool
This tool retrieves a list of all LookML files within a specified project,
providing details about each file.
Given a project_id this tool returns the details about
the LookML files that make up that project.
Parameters:
- project_id (required): The unique ID of the LookML project, obtained from `get_projects`.
Output:
A JSON array of objects, each representing a LookML file and containing
details such as `path`, `id`, `type`, and `git_status`.
```
## Reference

View File

@@ -26,10 +26,16 @@ tools:
kind: looker-get-projects
source: looker-source
description: |
get_projects Tool
This tool retrieves a list of all LookML projects available on the Looker instance.
It is useful for identifying projects before performing actions like retrieving
project files or making modifications.
This tool returns the project_id and project_name for
all the LookML projects on the Looker instance.
Parameters:
This tool takes no parameters.
Output:
A JSON array of objects, each containing the `project_id` and `project_name`
for a LookML project.
```
## Reference

View File

@@ -42,17 +42,18 @@ tools:
kind: looker-health-analyze
source: looker-source
description: |
health-analyze Tool
This tool calculates the usage statistics for Looker projects, models, and explores.
This tool calculates the usage of projects, models and explores.
Parameters:
- action (required): The type of resource to analyze. Can be `"projects"`, `"models"`, or `"explores"`.
- project (optional): The specific project ID to analyze.
- model (optional): The specific model name to analyze. Requires `project` if used without `explore`.
- explore (optional): The specific explore name to analyze. Requires `model` if used.
- timeframe (optional): The lookback period in days for usage data. Defaults to `90` days.
- min_queries (optional): The minimum number of queries for a resource to be considered active. Defaults to `1`.
It accepts 6 parameters:
1. `action`: can be "projects", "models", or "explores"
2. `project`: the project to analyze (optional)
3. `model`: the model to analyze (optional)
4. `explore`: the explore to analyze (optional)
5. `timeframe`: the lookback period in days, default is 90
6. `min_queries`: the minimum number of queries to consider a resource as active, default is 1
Output:
The result is a JSON object containing usage metrics for the specified resources.
```
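An illustrative request analyzing the explores of one model (project and model names hypothetical):
```json
{
  "action": "explores",
  "project": "thelook",
  "model": "ecommerce",
  "timeframe": 30,
  "min_queries": 5
}
```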
## Reference

View File

@@ -49,20 +49,22 @@ tools:
kind: looker-health-pulse
source: looker-source
description: |
health-pulse Tool
This tool performs various health checks on a Looker instance.
This tool takes the pulse of a Looker instance by taking
one of the following actions:
1. `check_db_connections`,
2. `check_dashboard_performance`,
3. `check_dashboard_errors`,
4. `check_explore_performance`,
5. `check_schedule_failures`, or
6. `check_legacy_features`
The `check_legacy_features` action is only available in Looker Core. If
it is called on a non-Looker Core instance, you will get a notice. That notice
should not be reported as an error.
Parameters:
- action (required): Specifies the type of health check to perform.
Choose one of the following:
- `check_db_connections`: Verifies database connectivity.
- `check_dashboard_performance`: Assesses dashboard loading performance.
- `check_dashboard_errors`: Identifies errors within dashboards.
- `check_explore_performance`: Evaluates explore query performance.
- `check_schedule_failures`: Reports on failed scheduled deliveries.
- `check_legacy_features`: Checks for the usage of legacy features.
Note on `check_legacy_features`:
This action is exclusively available in Looker Core instances. If invoked
on a non-Looker Core instance, it will return a notice rather than an error.
This notice should be considered normal behavior and not an indication of an issue.
```
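For example, to check dashboard performance:
```json
{
  "action": "check_dashboard_performance"
}
```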
## Reference

View File

@@ -39,20 +39,19 @@ tools:
kind: looker-health-vacuum
source: looker-source
description: |
health-vacuum Tool
This tool identifies and suggests LookML models or explores that can be
safely removed due to inactivity or low usage.
This tool suggests models or explores that can be removed
because they are unused.
Parameters:
- action (required): The type of resource to analyze for removal candidates. Can be `"models"` or `"explores"`.
- project (optional): The specific project ID to consider.
- model (optional): The specific model name to consider. Requires `project` if used without `explore`.
- explore (optional): The specific explore name to consider. Requires `model` if used.
- timeframe (optional): The lookback period in days to assess usage. Defaults to `90` days.
- min_queries (optional): The minimum number of queries for a resource to be considered active. Defaults to `1`.
It accepts 6 parameters:
1. `action`: can be "models" or "explores"
2. `project`: the project to vacuum (optional)
3. `model`: the model to vacuum (optional)
4. `explore`: the explore to vacuum (optional)
5. `timeframe`: the lookback period in days, default is 90
6. `min_queries`: the minimum number of queries to consider a resource as active, default is 1
The result is a list of objects that are candidates for deletion.
Output:
A JSON array of objects, each representing a model or explore that is a candidate for deletion due to low usage.
```
| **field** | **type** | **required** | **description** |

View File

@@ -30,18 +30,19 @@ tools:
kind: looker-make-dashboard
source: looker-source
description: |
make_dashboard Tool
This tool creates a new, empty dashboard in Looker. Dashboards are stored
in the user's personal folder, and the dashboard name must be unique.
After creation, use `add_dashboard_filter` to add filters and
`add_dashboard_element` to add content tiles.
This tool creates a new dashboard in Looker. The dashboard is
initially empty and the add_dashboard_element tool is used to
add content to the dashboard.
Required Parameters:
- title (required): A unique title for the new dashboard.
- description (required): A brief description of the dashboard's purpose.
The newly created dashboard will be created in the user's
personal folder in Looker. The dashboard name must be unique.
The result is a json document with a link to the newly
created dashboard and the id of the dashboard. Use the id
when calling add_dashboard_element.
Output:
A JSON object containing a link (`url`) to the newly created dashboard and
its unique `id`. This `dashboard_id` is crucial for subsequent calls to
`add_dashboard_filter` and `add_dashboard_element`.
```
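An illustrative request (values hypothetical):
```json
{
  "title": "Weekly Sales Overview",
  "description": "Revenue and order volume by region, refreshed weekly."
}
```
The returned `id` is then passed to `add_dashboard_filter` and `add_dashboard_element`.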
## Reference

View File

@@ -40,20 +40,24 @@ tools:
kind: looker-make-look
source: looker-source
description: |
make_look Tool
This tool creates a new Look (saved query with visualization) in Looker.
The Look will be saved in the user's personal folder, and its name must be unique.
This tool creates a new look in Looker, using the query
parameters and the vis_config specified.
Required Parameters:
- title: A unique title for the new Look.
- description: A brief description of the Look's purpose.
- model_name: The name of the LookML model (from `get_models`).
- explore_name: The name of the explore (from `get_explores`).
- fields: A list of field names (dimensions, measures, filters, or parameters) to include in the query.
Most of the parameters are the same as the query_url
tool. In addition, there is a title and a description
that must be provided.
Optional Parameters:
- pivots, filters, sorts, limit, query_timezone: These parameters are identical
to those described for the `query` tool.
- vis_config: A JSON object defining the visualization settings for the Look.
The structure and options are the same as for the `query_url` tool's `vis_config`.
The newly created look will be created in the user's
personal folder in Looker. The look name must be unique.
The result is a json document with a link to the newly
created look.
Output:
A JSON object containing a link (`url`) to the newly created Look, along with its `id` and `slug`.
```
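A sketch of a request (model, explore, and field names are hypothetical; `looker_column` is one of Looker's standard visualization types):
```json
{
  "title": "Orders by Status",
  "description": "Order counts grouped by current status.",
  "model_name": "thelook",
  "explore_name": "orders",
  "fields": ["orders.status", "orders.count"],
  "sorts": ["orders.count desc"],
  "vis_config": {"type": "looker_column"}
}
```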
## Reference

View File

@@ -41,38 +41,17 @@ tools:
kind: looker-query-sql
source: looker-source
description: |
Query SQL Tool
This tool generates the underlying SQL query that Looker would execute
against the database for a given set of parameters. It is useful for
understanding how Looker translates a request into SQL.
This tool is used to generate a sql query against the LookML model. The
model, explore, and fields list must be specified. Pivots,
filters and sorts are optional.
Parameters:
All parameters for this tool are identical to those of the `query` tool.
This includes `model_name`, `explore_name`, `fields` (required),
and optional parameters like `pivots`, `filters`, `sorts`, `limit`, and `query_timezone`.
The model can be found from the get_models tool. The explore
can be found from the get_explores tool passing in the model.
The fields can be found from the get_dimensions, get_measures,
get_filters, and get_parameters tools, passing in the model
and the explore.
Provide a model_id and explore_name, then a list
of fields. Optionally a list of pivots can be provided.
The pivots must also be included in the fields list.
Filters are provided as a map of {"field.id": "condition",
"field.id2": "condition2", ...}. Do not put the field.id in
quotes. Filter expressions can be found at
https://cloud.google.com/looker/docs/filter-expressions.
Sorts can be specified like [ "field.id desc 0" ].
An optional row limit can be added. If not provided the limit
will default to 500. "-1" can be specified for unlimited.
An optional query timezone can be added. The query_timezone
will default to that of the workstation where this MCP server
is running, or Etc/UTC if that can't be determined. Not all
models support custom timezones.
The result of the query tool is the sql string.
Output:
The result of this tool is the raw SQL text.
```
## Reference

View File

@@ -37,17 +37,21 @@ tools:
kind: looker-query-url
source: looker-source
description: |
Query URL Tool
This tool generates a shareable URL for a Looker query, allowing users to
explore the query further within the Looker UI. It returns the generated URL,
along with the `query_id` and `slug`.
This tool is used to generate the URL of a query in Looker.
The user can then explore the query further inside Looker.
The tool also returns the query_id and slug. The parameters
are the same as the query tool with an additional vis_config
parameter.
Parameters:
All query parameters (e.g., `model_name`, `explore_name`, `fields`, `pivots`,
`filters`, `sorts`, `limit`, `query_timezone`) are the same as the `query` tool.
The vis_config is optional. If provided, it will be used to
control the default visualization for the query. Here are
some notes on making visualizations.
Additionally, it accepts an optional `vis_config` parameter:
- vis_config (optional): A JSON object that controls the default visualization
settings for the generated query.
vis_config Details:
The `vis_config` object supports a wide range of properties for various chart types.
Here are some notes on making visualizations.
### Cartesian Charts (Area, Bar, Column, Line, Scatter)

View File

@@ -41,38 +41,24 @@ tools:
kind: looker-query
source: looker-source
description: |
Query Tool
This tool runs a query against a LookML model and returns the results in JSON format.
This tool is used to run a query against the LookML model. The
model, explore, and fields list must be specified. Pivots,
filters and sorts are optional.
Required Parameters:
- model_name: The name of the LookML model (from `get_models`).
- explore_name: The name of the explore (from `get_explores`).
- fields: A list of field names (dimensions, measures, filters, or parameters) to include in the query.
The model can be found from the get_models tool. The explore
can be found from the get_explores tool passing in the model.
The fields can be found from the get_dimensions, get_measures,
get_filters, and get_parameters tools, passing in the model
and the explore.
Optional Parameters:
- pivots: A list of fields to pivot the results by. These fields must also be included in the `fields` list.
- filters: A map of filter expressions, e.g., `{"view.field": "value", "view.date": "7 days"}`.
- Do not quote field names.
- Use `not null` instead of `-NULL`.
- If a value contains a comma, enclose it in single quotes (e.g., "'New York, NY'").
- sorts: A list of fields to sort by, optionally including direction (e.g., `["view.field desc"]`).
- limit: Row limit (default 500). Use "-1" for unlimited.
- query_timezone: specific timezone for the query (e.g. `America/Los_Angeles`).
Provide a model_id and explore_name, then a list
of fields. Optionally a list of pivots can be provided.
The pivots must also be included in the fields list.
Filters are provided as a map of {"field.id": "condition",
"field.id2": "condition2", ...}. Do not put the field.id in
quotes. Filter expressions can be found at
https://cloud.google.com/looker/docs/filter-expressions.
If the condition is a string that contains a comma, use a second
set of quotes. For example, {"user.city": "'New York, NY'"}.
Sorts can be specified like [ "field.id desc 0" ].
An optional row limit can be added. If not provided the limit
will default to 500. "-1" can be specified for unlimited.
An optional query timezone can be added. The query_timezone
will default to that of the workstation where this MCP server
is running, or Etc/UTC if that can't be determined. Not all
models support custom timezones.
Note: Use `get_dimensions`, `get_measures`, `get_filters`, and `get_parameters` to find valid fields.
The result of the query tool is JSON.
```
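Putting the rules above together, a sketch of a pivoted, filtered query (model and field names hypothetical); note that the pivot field also appears in `fields`, and the comma-containing filter value is wrapped in single quotes:
```json
{
  "model_name": "thelook",
  "explore_name": "order_items",
  "fields": ["products.category", "order_items.created_month", "order_items.total_revenue"],
  "pivots": ["order_items.created_month"],
  "filters": {
    "order_items.created_date": "90 days",
    "users.city": "'New York, NY'"
  },
  "sorts": ["order_items.total_revenue desc"],
  "limit": "500",
  "query_timezone": "America/Los_Angeles"
}
```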

View File

@@ -27,11 +27,15 @@ tools:
kind: looker-run-dashboard
source: looker-source
description: |
run_dashboard Tool
This tool executes the queries associated with each tile in a specified dashboard
and returns the aggregated data in a JSON structure.
This tool runs the query associated with each tile in a dashboard
and returns the data in a JSON structure. It accepts the dashboard_id
as the parameter.
Parameters:
- dashboard_id (required): The unique identifier of the dashboard to run,
typically obtained from the `get_dashboards` tool.
Output:
The data from all dashboard tiles is returned as a JSON object.
```
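For example, with a dashboard ID found via `get_dashboards`:
```json
{
  "dashboard_id": "123"
}
```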
## Reference

View File

@@ -27,11 +27,15 @@ tools:
kind: looker-run-look
source: looker-source
description: |
run_look Tool
This tool executes the query associated with a saved Look and
returns the resulting data in a JSON structure.
This tool runs the query associated with a look and returns
the data in a JSON structure. It accepts the look_id as the
parameter.
Parameters:
- look_id (required): The unique identifier of the Look to run,
typically obtained from the `get_looks` tool.
Output:
The query results are returned as a JSON object.
```
## Reference

View File

@@ -27,13 +27,17 @@ tools:
kind: looker-update-project-file
source: looker-source
description: |
update_project_file Tool
This tool modifies the content of an existing LookML file within a specified project.
Given a project_id and a file path within the project, as well as the content
of a LookML file, this tool will modify the file within the project.
Prerequisite: The Looker session must be in Development Mode. Use `dev_mode: true` first.
This tool must be called after the dev_mode tool has changed the session to
dev mode.
Parameters:
- project_id (required): The unique ID of the LookML project.
- file_path (required): The exact path to the LookML file to modify within the project.
- content (required): The new, complete LookML content to overwrite the existing file.
Output:
A confirmation message upon successful file modification.
```
## Reference

View File

@@ -64,5 +64,3 @@ tools:
| filterParams | list | false | A list of parameter objects that define the variables used in the `filterPayload`. |
| projectPayload | string | false | An optional MongoDB projection document to specify which fields to include (1) or exclude (0) in the result. |
| projectParams | list | false | A list of parameter objects for the `projectPayload`. |
| sortPayload | string | false | An optional MongoDB sort document. Useful for selecting which document to return if the filter matches multiple (e.g., get the most recent). |
| sortParams | list | false | A list of parameter objects for the `sortPayload`. |

View File

@@ -0,0 +1,171 @@
---
title: "postgres-list-table-stats"
type: docs
weight: 1
description: >
The "postgres-list-table-stats" tool reports table statistics including size, scan metrics, and bloat indicators for PostgreSQL tables.
aliases:
- /resources/tools/postgres-list-table-stats
---
## About
The `postgres-list-table-stats` tool queries `pg_stat_all_tables` to provide comprehensive statistics about tables in the database. It calculates useful metrics like index scan ratio and dead row ratio to help identify performance issues and table bloat.
Compatible sources:
- [alloydb-postgres](../../sources/alloydb-pg.md)
- [cloud-sql-postgres](../../sources/cloud-sql-pg.md)
- [postgres](../../sources/postgres.md)
The tool returns a JSON array where each element represents statistics for a table, including scan metrics, row counts, and vacuum history. Results are sorted by sequential scans by default and limited to 50 rows.
## Example
```yaml
tools:
list_table_stats:
kind: postgres-list-table-stats
source: postgres-source
description: "Lists table statistics including size, scans, and bloat metrics."
```
### Example Requests
**List default tables in public schema:**
```json
{}
```
**Filter by specific table name:**
```json
{
"table_name": "users"
}
```
**Filter by owner and sort by size:**
```json
{
"owner": "app_user",
"sort_by": "size",
"limit": 10
}
```
**Find tables with high dead row ratio:**
```json
{
"sort_by": "dead_rows",
"limit": 20
}
```
### Example Response
```json
[
{
"schema_name": "public",
"table_name": "users",
"owner": "postgres",
"total_size_bytes": 8388608,
"seq_scan": 150,
"idx_scan": 450,
"idx_scan_ratio_percent": 75.0,
"live_rows": 50000,
"dead_rows": 1200,
"dead_row_ratio_percent": 2.34,
"n_tup_ins": 52000,
"n_tup_upd": 12500,
"n_tup_del": 800,
"last_vacuum": "2025-11-27T10:30:00Z",
"last_autovacuum": "2025-11-27T09:15:00Z",
"last_autoanalyze": "2025-11-27T09:16:00Z"
},
{
"schema_name": "public",
"table_name": "orders",
"owner": "postgres",
"total_size_bytes": 16777216,
"seq_scan": 50,
"idx_scan": 1200,
"idx_scan_ratio_percent": 96.0,
"live_rows": 100000,
"dead_rows": 5000,
"dead_row_ratio_percent": 4.76,
"n_tup_ins": 120000,
"n_tup_upd": 45000,
"n_tup_del": 15000,
"last_vacuum": "2025-11-26T14:22:00Z",
"last_autovacuum": "2025-11-27T02:30:00Z",
"last_autoanalyze": "2025-11-27T02:31:00Z"
}
]
```
## Parameters
| parameter | type | required | default | description |
|-------------|---------|----------|---------|-------------|
| schema_name | string | false | "public" | Optional: A specific schema name to filter by (supports partial matching) |
| table_name | string | false | null | Optional: A specific table name to filter by (supports partial matching) |
| owner | string | false | null | Optional: A specific owner to filter by (supports partial matching) |
| sort_by | string | false | null | Optional: The column to sort by. Valid values: `size`, `dead_rows`, `seq_scan`, `idx_scan` (defaults to `seq_scan`) |
| limit | integer | false | 50 | Optional: The maximum number of results to return |
## Output Fields Reference
| field | type | description |
|------------------------|-----------|-------------|
| schema_name | string | Name of the schema containing the table. |
| table_name | string | Name of the table. |
| owner | string | PostgreSQL user who owns the table. |
| total_size_bytes | integer | Total size of the table including all indexes in bytes. |
| seq_scan | integer | Number of sequential (full table) scans performed on this table. |
| idx_scan | integer | Number of index scans performed on this table. |
| idx_scan_ratio_percent | decimal | Percentage of total scans (seq_scan + idx_scan) that used an index. A low ratio may indicate missing or ineffective indexes. |
| live_rows | integer | Number of live (non-deleted) rows in the table. |
| dead_rows | integer | Number of dead (deleted but not yet vacuumed) rows in the table. |
| dead_row_ratio_percent | decimal | Percentage of dead rows relative to total rows. High values indicate potential table bloat. |
| n_tup_ins | integer | Total number of rows inserted into this table. |
| n_tup_upd | integer | Total number of rows updated in this table. |
| n_tup_del | integer | Total number of rows deleted from this table. |
| last_vacuum | timestamp | Timestamp of the last manual VACUUM operation on this table (null if never manually vacuumed). |
| last_autovacuum | timestamp | Timestamp of the last automatic vacuum operation on this table. |
| last_autoanalyze | timestamp | Timestamp of the last automatic analyze operation on this table. |
## Interpretation Guide
### Index Scan Ratio (`idx_scan_ratio_percent`)
- **High ratio (> 80%)**: Table queries are efficiently using indexes. This is typically desirable.
- **Low ratio (< 20%)**: Many sequential scans indicate missing indexes or queries that cannot use existing indexes effectively. Consider adding indexes to frequently searched columns.
- **0%**: No index scans performed; all queries performed sequential scans. May warrant index investigation.
### Dead Row Ratio (`dead_row_ratio_percent`)
- **< 2%**: Healthy table with minimal bloat.
- **2-5%**: Moderate bloat; consider running VACUUM if not recent.
- **> 5%**: High bloat; may benefit from manual VACUUM or VACUUM FULL.
### Vacuum History
- **Null `last_vacuum`**: Table has never been manually vacuumed; relies on autovacuum.
- **Recent `last_autovacuum`**: Autovacuum is actively managing the table.
- **Stale timestamps**: Consider running manual VACUUM and ANALYZE if maintenance windows exist.
## Performance Considerations
- Statistics are collected from `pg_stat_all_tables`, which resets on PostgreSQL restart.
- Run `ANALYZE` on tables to update statistics for accurate query planning.
- The tool defaults to limiting results to 50 rows; adjust the `limit` parameter for larger result sets.
- Filtering by schema, table name, or owner uses `LIKE` pattern matching (supports partial matches).
## Use Cases
- **Finding ineffective indexes**: Identify tables with low `idx_scan_ratio_percent` to evaluate index strategy.
- **Detecting table bloat**: Sort by `dead_rows` to find tables needing VACUUM.
- **Monitoring growth**: Track `total_size_bytes` over time for capacity planning.
- **Audit maintenance**: Check `last_autovacuum` and `last_autoanalyze` timestamps to ensure maintenance tasks are running.
- **Understanding workload**: Examine `seq_scan` vs `idx_scan` ratios to understand query patterns.

View File

@@ -57,24 +57,31 @@ tools:
## Response Format
The response is an [operation](https://docs.cloud.google.com/dataproc-serverless/docs/reference/rest/v1/projects.locations.operations#resource:-operation) metadata JSON
object corresponding to [batch operation metadata](https://pkg.go.dev/cloud.google.com/go/dataproc/v2/apiv1/dataprocpb#BatchOperationMetadata)
Example:
The response contains the
[operation](https://docs.cloud.google.com/dataproc-serverless/docs/reference/rest/v1/projects.locations.operations#resource:-operation)
metadata JSON object corresponding to [batch operation
metadata](https://pkg.go.dev/cloud.google.com/go/dataproc/v2/apiv1/dataprocpb#BatchOperationMetadata),
plus additional fields `consoleUrl` and `logsUrl` where a human can go for more
detailed information.
```json
{
"batch": "projects/myproject/locations/us-central1/batches/aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee",
"batchUuid": "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee",
"createTime": "2025-11-19T16:36:47.607119Z",
"description": "Batch",
"labels": {
"goog-dataproc-batch-uuid": "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee",
"goog-dataproc-location": "us-central1"
"opMetadata": {
"batch": "projects/myproject/locations/us-central1/batches/aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee",
"batchUuid": "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee",
"createTime": "2025-11-19T16:36:47.607119Z",
"description": "Batch",
"labels": {
"goog-dataproc-batch-uuid": "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee",
"goog-dataproc-location": "us-central1"
},
"operationType": "BATCH",
"warnings": [
"No runtime version specified. Using the default runtime version."
]
},
"operationType": "BATCH",
"warnings": [
"No runtime version specified. Using the default runtime version."
]
"consoleUrl": "https://console.cloud.google.com/dataproc/batches/...",
"logsUrl": "https://console.cloud.google.com/logs/viewer?..."
}
```

View File

@@ -62,26 +62,31 @@ tools:
## Response Format
The response is an
The response contains the
[operation](https://docs.cloud.google.com/dataproc-serverless/docs/reference/rest/v1/projects.locations.operations#resource:-operation)
metadata JSON object corresponding to [batch operation
metadata](https://pkg.go.dev/cloud.google.com/go/dataproc/v2/apiv1/dataprocpb#BatchOperationMetadata)
Example:
metadata](https://pkg.go.dev/cloud.google.com/go/dataproc/v2/apiv1/dataprocpb#BatchOperationMetadata),
plus additional fields `consoleUrl` and `logsUrl` where a human can go for more
detailed information.
```json
{
"batch": "projects/myproject/locations/us-central1/batches/aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee",
"batchUuid": "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee",
"createTime": "2025-11-19T16:36:47.607119Z",
"description": "Batch",
"labels": {
"goog-dataproc-batch-uuid": "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee",
"goog-dataproc-location": "us-central1"
"opMetadata": {
"batch": "projects/myproject/locations/us-central1/batches/aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee",
"batchUuid": "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee",
"createTime": "2025-11-19T16:36:47.607119Z",
"description": "Batch",
"labels": {
"goog-dataproc-batch-uuid": "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee",
"goog-dataproc-location": "us-central1"
},
"operationType": "BATCH",
"warnings": [
"No runtime version specified. Using the default runtime version."
]
},
"operationType": "BATCH",
"warnings": [
"No runtime version specified. Using the default runtime version."
]
"consoleUrl": "https://console.cloud.google.com/dataproc/batches/...",
"logsUrl": "https://console.cloud.google.com/logs/viewer?..."
}
```

View File

@@ -34,43 +34,50 @@ tools:
## Response Format
The response is a full Batch JSON object as defined in the [API
spec](https://cloud.google.com/dataproc-serverless/docs/reference/rest/v1/projects.locations.batches#Batch).
Example with a reduced set of fields:
The response contains the full Batch object as defined in the [API
spec](https://cloud.google.com/dataproc-serverless/docs/reference/rest/v1/projects.locations.batches#Batch),
plus additional fields `consoleUrl` and `logsUrl` where a human can go for more
detailed information.
```json
{
"createTime": "2025-10-10T15:15:21.303146Z",
"creator": "alice@example.com",
"labels": {
"goog-dataproc-batch-uuid": "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee",
"goog-dataproc-location": "us-central1"
},
"name": "projects/google.com:hadoop-cloud-dev/locations/us-central1/batches/alice-20251010-abcd",
"operation": "projects/google.com:hadoop-cloud-dev/regions/us-central1/operations/11111111-2222-3333-4444-555555555555",
"runtimeConfig": {
"properties": {
"spark:spark.driver.cores": "4",
"spark:spark.driver.memory": "12200m"
}
},
"sparkBatch": {
"jarFileUris": ["file:///usr/lib/spark/examples/jars/spark-examples.jar"],
"mainClass": "org.apache.spark.examples.SparkPi"
},
"state": "SUCCEEDED",
"stateHistory": [
{
"state": "PENDING",
"stateStartTime": "2025-10-10T15:15:21.303146Z"
"batch": {
"createTime": "2025-10-10T15:15:21.303146Z",
"creator": "alice@example.com",
"labels": {
"goog-dataproc-batch-uuid": "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee",
"goog-dataproc-location": "us-central1"
},
{
"state": "RUNNING",
"stateStartTime": "2025-10-10T15:16:41.291747Z"
}
],
"stateTime": "2025-10-10T15:17:21.265493Z",
"uuid": "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"
"name": "projects/google.com:hadoop-cloud-dev/locations/us-central1/batches/alice-20251010-abcd",
"operation": "projects/google.com:hadoop-cloud-dev/regions/us-central1/operations/11111111-2222-3333-4444-555555555555",
"runtimeConfig": {
"properties": {
"spark:spark.driver.cores": "4",
"spark:spark.driver.memory": "12200m"
}
},
"sparkBatch": {
"jarFileUris": [
"file:///usr/lib/spark/examples/jars/spark-examples.jar"
],
"mainClass": "org.apache.spark.examples.SparkPi"
},
"state": "SUCCEEDED",
"stateHistory": [
{
"state": "PENDING",
"stateStartTime": "2025-10-10T15:15:21.303146Z"
},
{
"state": "RUNNING",
"stateStartTime": "2025-10-10T15:16:41.291747Z"
}
],
"stateTime": "2025-10-10T15:17:21.265493Z",
"uuid": "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"
},
"consoleUrl": "https://console.cloud.google.com/dataproc/batches/...",
"logsUrl": "https://console.cloud.google.com/logs/viewer?..."
}
```

View File

@@ -50,14 +50,18 @@ tools:
"uuid": "a1b2c3d4-e5f6-7890-1234-567890abcdef",
"state": "SUCCEEDED",
"creator": "alice@example.com",
"createTime": "2023-10-27T10:00:00Z"
"createTime": "2023-10-27T10:00:00Z",
"consoleUrl": "https://console.cloud.google.com/dataproc/batches/us-central1/batch-abc-123/summary?project=my-project",
"logsUrl": "https://console.cloud.google.com/logs/viewer?advancedFilter=resource.type%3D%22cloud_dataproc_batch%22%0Aresource.labels.project_id%3D%22my-project%22%0Aresource.labels.location%3D%22us-central1%22%0Aresource.labels.batch_id%3D%22batch-abc-123%22%0Atimestamp%3E%3D%222023-10-27T09%3A59%3A00Z%22%0Atimestamp%3C%3D%222023-10-27T10%3A10%3A00Z%22&project=my-project&resource=cloud_dataproc_batch%2Fbatch_id%2Fbatch-abc-123"
},
{
"name": "projects/my-project/locations/us-central1/batches/batch-def-456",
"uuid": "b2c3d4e5-f6a7-8901-2345-678901bcdefa",
"state": "FAILED",
"creator": "alice@example.com",
"createTime": "2023-10-27T11:30:00Z"
"createTime": "2023-10-27T11:30:00Z",
"consoleUrl": "https://console.cloud.google.com/dataproc/batches/us-central1/batch-def-456/summary?project=my-project",
"logsUrl": "https://console.cloud.google.com/logs/viewer?advancedFilter=resource.type%3D%22cloud_dataproc_batch%22%0Aresource.labels.project_id%3D%22my-project%22%0Aresource.labels.location%3D%22us-central1%22%0Aresource.labels.batch_id%3D%22batch-def-456%22%0Atimestamp%3E%3D%222023-10-27T11%3A29%3A00Z%22%0Atimestamp%3C%3D%222023-10-27T11%3A40%3A00Z%22&project=my-project&resource=cloud_dataproc_batch%2Fbatch_id%2Fbatch-def-456"
}
],
"nextPageToken": "abcd1234"

View File

@@ -771,7 +771,7 @@
},
"outputs": [],
"source": [
"version = \"0.22.0\" # x-release-please-version\n",
"version = \"0.23.0\" # x-release-please-version\n",
"! curl -L -o /content/toolbox https://storage.googleapis.com/genai-toolbox/v{version}/linux/amd64/toolbox\n",
"\n",
"# Make the binary executable\n",

View File

@@ -123,7 +123,7 @@ In this section, we will download and install the Toolbox binary.
<!-- {x-release-please-start-version} -->
```bash
export OS="linux/amd64" # one of linux/amd64, darwin/arm64, darwin/amd64, or windows/amd64
export VERSION="0.22.0"
export VERSION="0.23.0"
curl -O https://storage.googleapis.com/genai-toolbox/v$VERSION/$OS/toolbox
```
<!-- {x-release-please-end} -->

View File

@@ -220,7 +220,7 @@
},
"outputs": [],
"source": [
"version = \"0.22.0\" # x-release-please-version\n",
"version = \"0.23.0\" # x-release-please-version\n",
"! curl -O https://storage.googleapis.com/genai-toolbox/v{version}/linux/amd64/toolbox\n",
"\n",
"# Make the binary executable\n",

View File

@@ -179,7 +179,7 @@ to use BigQuery, and then run the Toolbox server.
<!-- {x-release-please-start-version} -->
```bash
export OS="linux/amd64" # one of linux/amd64, darwin/arm64, darwin/amd64, or windows/amd64
curl -O https://storage.googleapis.com/genai-toolbox/v0.22.0/$OS/toolbox
curl -O https://storage.googleapis.com/genai-toolbox/v0.23.0/$OS/toolbox
```
<!-- {x-release-please-end} -->

View File

@@ -98,7 +98,7 @@ In this section, we will download Toolbox, configure our tools in a
<!-- {x-release-please-start-version} -->
```bash
export OS="linux/amd64" # one of linux/amd64, darwin/arm64, darwin/amd64, or windows/amd64
curl -O https://storage.googleapis.com/genai-toolbox/v0.22.0/$OS/toolbox
curl -O https://storage.googleapis.com/genai-toolbox/v0.23.0/$OS/toolbox
```
<!-- {x-release-please-end} -->

View File

@@ -34,7 +34,7 @@ In this section, we will download Toolbox and run the Toolbox server.
<!-- {x-release-please-start-version} -->
```bash
export OS="linux/amd64" # one of linux/amd64, darwin/arm64, darwin/amd64, or windows/amd64
curl -O https://storage.googleapis.com/genai-toolbox/v0.22.0/$OS/toolbox
curl -O https://storage.googleapis.com/genai-toolbox/v0.23.0/$OS/toolbox
```
<!-- {x-release-please-end} -->

View File

@@ -48,7 +48,7 @@ In this section, we will download Toolbox and run the Toolbox server.
<!-- {x-release-please-start-version} -->
```bash
export OS="linux/amd64" # one of linux/amd64, darwin/arm64, darwin/amd64, or windows/amd64
curl -O https://storage.googleapis.com/genai-toolbox/v0.22.0/$OS/toolbox
curl -O https://storage.googleapis.com/genai-toolbox/v0.23.0/$OS/toolbox
```
<!-- {x-release-please-end} -->

View File

@@ -34,7 +34,7 @@ In this section, we will download Toolbox and run the Toolbox server.
<!-- {x-release-please-start-version} -->
```bash
export OS="linux/amd64" # one of linux/amd64, darwin/arm64, darwin/amd64, or windows/amd64
curl -O https://storage.googleapis.com/genai-toolbox/v0.22.0/$OS/toolbox
curl -O https://storage.googleapis.com/genai-toolbox/v0.23.0/$OS/toolbox
```
<!-- {x-release-please-end} -->

View File

@@ -1,6 +1,6 @@
{
"name": "mcp-toolbox-for-databases",
"version": "0.22.0",
"version": "0.23.0",
"description": "MCP Toolbox for Databases is an open-source MCP server for more than 30 different datasources.",
"contextFileName": "MCP-TOOLBOX-EXTENSION.md"
}

go.mod
View File

@@ -37,7 +37,7 @@ require (
github.com/google/uuid v1.6.0
github.com/jackc/pgx/v5 v5.7.6
github.com/json-iterator/go v1.1.12
github.com/looker-open-source/sdk-codegen/go v0.25.18
github.com/looker-open-source/sdk-codegen/go v0.25.21
github.com/microsoft/go-mssqldb v1.9.3
github.com/nakagami/firebirdsql v0.9.15
github.com/neo4j/neo4j-go-driver/v5 v5.28.4

go.sum
View File

@@ -1134,8 +1134,8 @@ github.com/kylelemons/godebug v1.1.0 h1:RPNrshWIDI6G2gRW9EHilWtl7Z6Sb1BR0xunSBf0
github.com/kylelemons/godebug v1.1.0/go.mod h1:9/0rRGxNHcop5bhtWyNeEfOS8JIWk580+fNqagV/RAw=
github.com/leodido/go-urn v1.4.0 h1:WT9HwE9SGECu3lg4d/dIA+jxlljEa1/ffXKmRjqdmIQ=
github.com/leodido/go-urn v1.4.0/go.mod h1:bvxc+MVxLKB4z00jd1z+Dvzr47oO32F/QSNjSBOlFxI=
github.com/looker-open-source/sdk-codegen/go v0.25.18 h1:me1JBFRnOBCrDWwpoSUVDVDFcFmcYMR2ijbx6ATtwTs=
github.com/looker-open-source/sdk-codegen/go v0.25.18/go.mod h1:Br1ntSiruDJ/4nYNjpYyWyCbqJ7+GQceWbIgn0hYims=
github.com/looker-open-source/sdk-codegen/go v0.25.21 h1:nlZ1nz22SKluBNkzplrMHBPEVgJO3zVLF6aAws1rrRA=
github.com/looker-open-source/sdk-codegen/go v0.25.21/go.mod h1:Br1ntSiruDJ/4nYNjpYyWyCbqJ7+GQceWbIgn0hYims=
github.com/lyft/protoc-gen-star v0.6.0/go.mod h1:TGAoBVkt8w7MPG72TrKIu85MIdXwDuzJYeZuUPFPNwA=
github.com/lyft/protoc-gen-star v0.6.1/go.mod h1:TGAoBVkt8w7MPG72TrKIu85MIdXwDuzJYeZuUPFPNwA=
github.com/lyft/protoc-gen-star/v2 v2.0.1/go.mod h1:RcCdONR2ScXaYnQC5tUzxzlpA3WVYF7/opLeUgcQs/o=

View File

@@ -200,6 +200,10 @@ tools:
kind: postgres-get-column-cardinality
source: alloydb-pg-source
list_table_stats:
kind: postgres-list-table-stats
source: alloydb-pg-source
list_publication_tables:
kind: postgres-list-publication-tables
source: alloydb-pg-source
@@ -249,3 +253,4 @@ toolsets:
- list_pg_settings
- list_database_stats
- list_roles
- list_table_stats

View File

@@ -201,6 +201,10 @@ tools:
get_column_cardinality:
kind: postgres-get-column-cardinality
source: cloudsql-pg-source
list_table_stats:
kind: postgres-list-table-stats
source: cloudsql-pg-source
list_publication_tables:
kind: postgres-list-publication-tables
@@ -251,3 +255,4 @@ toolsets:
- list_pg_settings
- list_database_stats
- list_roles
- list_table_stats

View File

@@ -29,26 +29,37 @@ tools:
kind: looker-conversational-analytics
source: looker-source
description: |
Use this tool to perform data analysis, get insights,
or answer complex questions about the contents of specific
Looker explores.
Use this tool to ask questions about your data using the Looker Conversational
Analytics API. You must provide a natural language query and a list of
1 to 5 model and explore combinations (e.g. [{'model': 'the_model', 'explore': 'the_explore'}]).
Use the 'get_models' and 'get_explores' tools to discover available models and explores.
get_models:
kind: looker-get-models
source: looker-source
description: |
The get_models tool retrieves the list of LookML models in the Looker system.
get_models Tool
It takes no parameters.
This tool retrieves a list of available LookML models in the Looker instance.
LookML models define the data structure and relationships that users can query.
The output includes details like the model's `name` and `label`, which are
essential for subsequent calls to tools like `get_explores` or `query`.
This tool takes no parameters.
get_explores:
kind: looker-get-explores
source: looker-source
description: |
The get_explores tool retrieves the list of explores defined in a LookML model
in the Looker system.
get_explores Tool
It takes one parameter, the model_name looked up from get_models.
This tool retrieves a list of explores defined within a specific LookML model.
Explores represent a curated view of your data, typically joining several
tables together to allow for focused analysis on a particular subject area.
The output provides details like the explore's `name` and `label`.
Parameters:
- model_name (required): The name of the LookML model, obtained from `get_models`.
toolsets:
looker_conversational_analytics_tools:

View File

@@ -30,136 +30,151 @@ tools:
kind: looker-get-models
source: looker-source
description: |
The get_models tool retrieves the list of LookML models in the Looker system.
This tool retrieves a list of available LookML models in the Looker instance.
LookML models define the data structure and relationships that users can query.
The output includes details like the model's `name` and `label`, which are
essential for subsequent calls to tools like `get_explores` or `query`.
It takes no parameters.
This tool takes no parameters.
get_explores:
kind: looker-get-explores
source: looker-source
description: |
The get_explores tool retrieves the list of explores defined in a LookML model
in the Looker system.
This tool retrieves a list of explores defined within a specific LookML model.
Explores represent a curated view of your data, typically joining several
tables together to allow for focused analysis on a particular subject area.
The output provides details like the explore's `name` and `label`.
It takes one parameter, the model_name looked up from get_models.
Parameters:
- model_name (required): The name of the LookML model, obtained from `get_models`.
get_dimensions:
kind: looker-get-dimensions
source: looker-source
description: |
The get_dimensions tool retrieves the list of dimensions defined in
an explore.
This tool retrieves a list of dimensions defined within a specific Looker explore.
Dimensions are non-aggregatable attributes or characteristics of your data
(e.g., product name, order date, customer city) that can be used for grouping,
filtering, or segmenting query results.
It takes two parameters, the model_name looked up from get_models and the
explore_name looked up from get_explores.
Parameters:
- model_name (required): The name of the LookML model, obtained from `get_models`.
- explore_name (required): The name of the explore within the model, obtained from `get_explores`.
If this returns a suggestions field for a dimension, the contents of suggestions
can be used as filters for this field. If this returns a suggest_explore and
suggest_dimension, a query against that explore and dimension can be used to find
valid filters for this field.
Output Details:
- If a dimension includes a `suggestions` field, its contents are valid values
that can be used directly as filters for that dimension.
- If a `suggest_explore` and `suggest_dimension` are provided, you can query
that specified explore and dimension to retrieve a list of valid filter values.
get_measures:
kind: looker-get-measures
source: looker-source
description: |
The get_measures tool retrieves the list of measures defined in
an explore.
This tool retrieves a list of measures defined within a specific Looker explore.
Measures are aggregatable metrics (e.g., total sales, average price, count of users)
that are used for calculations and quantitative analysis in your queries.
It takes two parameters, the model_name looked up from get_models and the
explore_name looked up from get_explores.
Parameters:
- model_name (required): The name of the LookML model, obtained from `get_models`.
- explore_name (required): The name of the explore within the model, obtained from `get_explores`.
Output Details:
- If a measure includes a `suggestions` field, its contents are valid values
that can be used directly as filters for that measure.
- If a `suggest_explore` and `suggest_dimension` are provided, you can query
that specified explore and dimension to retrieve a list of valid filter values.
get_filters:
kind: looker-get-filters
source: looker-source
description: |
This tool retrieves a list of "filter-only fields" defined within a specific
Looker explore. These are special fields defined in LookML specifically to
create user-facing filter controls that do not directly affect the `GROUP BY`
clause of the SQL query. They are often used in conjunction with liquid templating
to create dynamic queries.
Note: Regular dimensions and measures can also be used as filters in a query.
This tool *only* returns fields explicitly defined as `filter:` in LookML.
Parameters:
- model_name (required): The name of the LookML model, obtained from `get_models`.
- explore_name (required): The name of the explore within the model, obtained from `get_explores`.
get_parameters:
kind: looker-get-parameters
source: looker-source
description: |
This tool retrieves a list of parameters defined within a specific Looker explore.
LookML parameters are dynamic input fields that allow users to influence query
behavior without directly modifying the underlying LookML. They are often used
with `liquid` templating to create flexible dashboards and reports, enabling
users to choose dimensions, measures, or other query components at runtime.
Parameters:
- model_name (required): The name of the LookML model, obtained from `get_models`.
- explore_name (required): The name of the explore within the model, obtained from `get_explores`.
query:
kind: looker-query
source: looker-source
description: |
This tool runs a query against a LookML model and returns the results in JSON format.
Required Parameters:
- model_name: The name of the LookML model (from `get_models`).
- explore_name: The name of the explore (from `get_explores`).
- fields: A list of field names (dimensions, measures, filters, or parameters) to include in the query.
Optional Parameters:
- pivots: A list of fields to pivot the results by. These fields must also be included in the `fields` list.
- filters: A map of filter expressions, e.g., `{"view.field": "value", "view.date": "7 days"}`.
- Do not quote field names.
- Use `not null` instead of `-NULL`.
- If a value contains a comma, enclose it in single quotes (e.g., "'New York, NY'").
- sorts: A list of fields to sort by, optionally including direction (e.g., `["view.field desc"]`).
- limit: Row limit (default 500). Use "-1" for unlimited.
- query_timezone: A specific timezone for the query (e.g. `America/Los_Angeles`). Defaults to the
timezone of the machine running this MCP server, or Etc/UTC if that cannot be determined.
Not all models support custom timezones.
Note: Use `get_dimensions`, `get_measures`, `get_filters`, and `get_parameters` to find valid fields.
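For illustration, a call to this tool might pass parameters like the following
(the model, explore, and field names here are hypothetical):
  {
    "model_name": "thelook",
    "explore_name": "order_items",
    "fields": ["users.city", "order_items.total_revenue"],
    "filters": {"order_items.status": "complete", "users.city": "'New York, NY'"},
    "sorts": ["order_items.total_revenue desc"],
    "limit": "10"
  }
Note the single quotes wrapping the comma-containing filter value.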
query_sql:
kind: looker-query-sql
source: looker-source
description: |
This tool generates the underlying SQL query that Looker would execute
against the database for a given set of parameters. It is useful for
understanding how Looker translates a request into SQL.
Parameters:
All parameters for this tool are identical to those of the `query` tool.
This includes `model_name`, `explore_name`, `fields` (required),
and optional parameters like `pivots`, `filters`, `sorts`, `limit`, and `query_timezone`.
Output:
The result of this tool is the raw SQL text.
query_url:
kind: looker-query-url
source: looker-source
description: |
This tool generates a shareable URL for a Looker query, allowing users to
explore the query further within the Looker UI. It returns the generated URL,
along with the `query_id` and `slug`.
Parameters:
All query parameters (e.g., `model_name`, `explore_name`, `fields`, `pivots`,
`filters`, `sorts`, `limit`, `query_timezone`) are the same as the `query` tool.
Additionally, it accepts an optional `vis_config` parameter:
- vis_config (optional): A JSON object that controls the default visualization
settings for the generated query.
vis_config Details:
The `vis_config` object supports a wide range of properties for various chart types.
Here are some notes on making visualizations.
### Cartesian Charts (Area, Bar, Column, Line, Scatter)
@@ -599,286 +614,432 @@ tools:
kind: looker-get-looks
source: looker-source
description: |
This tool searches for saved Looks (pre-defined queries and visualizations)
in a Looker instance. It returns a list of JSON objects, each representing a Look.
Search Parameters:
- title (optional): Filter by Look title (supports wildcards).
- folder_id (optional): Filter by the ID of the folder where the Look is saved.
- user_id (optional): Filter by the ID of the user who created the Look.
- description (optional): Filter by description content (supports wildcards).
- id (optional): Filter by specific Look ID.
- limit (optional): Maximum number of results to return. Defaults to a system limit.
- offset (optional): Starting point for pagination.
String Search Behavior:
- Case-insensitive matching.
- Supports SQL LIKE pattern match wildcards:
- `%`: Matches any sequence of zero or more characters. (e.g., `"dan%"` matches "danger", "Danzig")
- `_`: Matches any single character. (e.g., `"D_m%"` matches "Damage", "dump")
- Special expressions for null checks:
- `"IS NULL"`: Matches Looks where the field is null.
- `"NOT NULL"`: Excludes Looks where the field is null.
run_look:
kind: looker-run-look
source: looker-source
description: |
This tool executes the query associated with a saved Look and
returns the resulting data in a JSON structure.
Parameters:
- look_id (required): The unique identifier of the Look to run,
typically obtained from the `get_looks` tool.
Output:
The query results are returned as a JSON object.
make_look:
kind: looker-make-look
source: looker-source
description: |
This tool creates a new Look (saved query with visualization) in Looker.
The Look will be saved in the user's personal folder, and its name must be unique.
Required Parameters:
- title: A unique title for the new Look.
- description: A brief description of the Look's purpose.
- model_name: The name of the LookML model (from `get_models`).
- explore_name: The name of the explore (from `get_explores`).
- fields: A list of field names (dimensions, measures, filters, or parameters) to include in the query.
Optional Parameters:
- pivots, filters, sorts, limit, query_timezone: These parameters are identical
to those described for the `query` tool.
- vis_config: A JSON object defining the visualization settings for the Look.
The structure and options are the same as for the `query_url` tool's `vis_config`.
Output:
A JSON object containing a link (`url`) to the newly created Look, along with its `id` and `slug`.
get_dashboards:
kind: looker-get-dashboards
source: looker-source
description: |
This tool searches for saved dashboards in a Looker instance. It returns a list of JSON objects, each representing a dashboard.
Search Parameters:
- title (optional): Filter by dashboard title (supports wildcards).
- folder_id (optional): Filter by the ID of the folder where the dashboard is saved.
- user_id (optional): Filter by the ID of the user who created the dashboard.
- description (optional): Filter by description content (supports wildcards).
- id (optional): Filter by specific dashboard ID.
- limit (optional): Maximum number of results to return. Defaults to a system limit.
- offset (optional): Starting point for pagination.
String Search Behavior:
- Case-insensitive matching.
- Supports SQL LIKE pattern match wildcards:
- `%`: Matches any sequence of zero or more characters. (e.g., `"finan%"` matches "financial", "finance")
- `_`: Matches any single character. (e.g., `"s_les"` matches "sales")
- Special expressions for null checks:
- `"IS NULL"`: Matches dashboards where the field is null.
- `"NOT NULL"`: Excludes dashboards where the field is null.
run_dashboard:
kind: looker-run-dashboard
source: looker-source
description: |
This tool executes the queries associated with each tile in a specified dashboard
and returns the aggregated data in a JSON structure.
Parameters:
- dashboard_id (required): The unique identifier of the dashboard to run,
typically obtained from the `get_dashboards` tool.
Output:
The data from all dashboard tiles is returned as a JSON object.
make_dashboard:
kind: looker-make-dashboard
source: looker-source
description: |
This tool creates a new, empty dashboard in Looker. Dashboards are stored
in the user's personal folder, and the dashboard name must be unique.
After creation, use `add_dashboard_filter` to add filters and
`add_dashboard_element` to add content tiles.
Required Parameters:
- title: A unique title for the new dashboard.
- description: A brief description of the dashboard's purpose.
Output:
A JSON object containing a link (`url`) to the newly created dashboard and
its unique `id`. This `dashboard_id` is crucial for subsequent calls to
`add_dashboard_filter` and `add_dashboard_element`.
add_dashboard_element:
kind: looker-add-dashboard-element
source: looker-source
description: |
This tool creates a new tile (element) within an existing Looker dashboard.
Tiles are added in the order this tool is called for a given `dashboard_id`.
CRITICAL ORDER OF OPERATIONS:
1. Create the dashboard using `make_dashboard`.
2. Add any dashboard-level filters using `add_dashboard_filter`.
3. Then, add elements (tiles) using this tool.
Required Parameters:
- dashboard_id: The ID of the target dashboard, obtained from `make_dashboard`.
- model_name, explore_name, fields: These query parameters are inherited
from the `query` tool and are required to define the data for the tile.
Optional Parameters:
- title: An optional title for the dashboard tile.
- pivots, filters, sorts, limit, query_timezone: These query parameters are
inherited from the `query` tool and can be used to customize the tile's query.
- vis_config: A JSON object defining the visualization settings for this tile.
The structure and options are the same as for the `query_url` tool's `vis_config`.
Connecting to Dashboard Filters:
A dashboard element can be connected to one or more dashboard filters (created with
`add_dashboard_filter`). To do this, specify the `name` of the dashboard filter
and the `field` from the element's query that the filter should apply to.
The format for specifying the field is `view_name.field_name`.
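For illustration, a tile bound to an existing dashboard filter might be created
with parameters like these (the model, field, and filter names are hypothetical):
  {
    "dashboard_id": "42",
    "title": "Revenue by City",
    "model_name": "thelook",
    "explore_name": "order_items",
    "fields": ["users.city", "order_items.total_revenue"],
    "dashboard_filters": [{"dashboard_filter_name": "Status", "field": "order_items.status"}]
  }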
add_dashboard_filter:
kind: looker-add-dashboard-filter
source: looker-source
description: |
This tool adds a filter to a Looker dashboard.
CRITICAL ORDER OF OPERATIONS:
1. Create a dashboard using `make_dashboard`.
2. Add all desired filters using this tool (`add_dashboard_filter`).
3. Finally, add dashboard elements (tiles) using `add_dashboard_element`.
Parameters:
- dashboard_id (required): The ID from `make_dashboard`.
- name (required): A unique internal identifier for the filter. You will use this `name` later in `add_dashboard_element` to bind tiles to this filter.
- title (required): The label displayed to users in the UI.
- filter_type (required): One of `date_filter`, `number_filter`, `string_filter`, or `field_filter`.
- default_value (optional): The initial value for the filter.
Field Filters (`filter_type: field_filter`):
If creating a field filter, you must also provide:
- model
- explore
- dimension
The filter will inherit suggestions and type information from this LookML field.
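For illustration, a field filter might be created with parameters like these
(the model, explore, and dimension names are hypothetical):
  {
    "dashboard_id": "42",
    "name": "Status",
    "title": "Order Status",
    "filter_type": "field_filter",
    "model": "thelook",
    "explore": "order_items",
    "dimension": "order_items.status",
    "default_value": "complete"
  }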
generate_embed_url:
kind: looker-generate-embed-url
source: looker-source
description: |
This tool generates a signed, private embed URL for specific Looker content,
allowing users to access it directly.
Parameters:
- type (required): The type of content to embed. Common values include:
- `dashboards`
- `looks`
- `explore`
- id (required): The unique identifier for the content.
- For dashboards and looks, use the numeric ID (e.g., "123").
- For explores, use the format "model_name/explore_name".
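For example, to embed a dashboard: {"type": "dashboards", "id": "123"}.
To embed an explore (model and explore names hypothetical):
{"type": "explore", "id": "thelook/order_items"}.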
health_pulse:
kind: looker-health-pulse
source: looker-source
description: |
This tool performs various health checks on a Looker instance.
Parameters:
- action (required): Specifies the type of health check to perform.
Choose one of the following:
- `check_db_connections`: Verifies database connectivity.
- `check_dashboard_performance`: Assesses dashboard loading performance.
- `check_dashboard_errors`: Identifies errors within dashboards.
- `check_explore_performance`: Evaluates explore query performance.
- `check_schedule_failures`: Reports on failed scheduled deliveries.
- `check_legacy_features`: Checks for the usage of legacy features.
Note on `check_legacy_features`:
This action is exclusively available in Looker Core instances. If invoked
on a non-Looker Core instance, it will return a notice rather than an error.
This notice should be considered normal behavior and not an indication of an issue.
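For example, to verify database connectivity:
  {"action": "check_db_connections"}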
health_analyze:
kind: looker-health-analyze
source: looker-source
description: |
This tool calculates the usage statistics for Looker projects, models, and explores.
Parameters:
- action (required): The type of resource to analyze. Can be `"projects"`, `"models"`, or `"explores"`.
- project (optional): The specific project ID to analyze.
- model (optional): The specific model name to analyze. Requires `project` if used without `explore`.
- explore (optional): The specific explore name to analyze. Requires `model` if used.
- timeframe (optional): The lookback period in days for usage data. Defaults to `90` days.
- min_queries (optional): The minimum number of queries for a resource to be considered active. Defaults to `1`.
Output:
The result is a JSON object containing usage metrics for the specified resources.
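For illustration (project and model names hypothetical), analyzing one model
over the last 30 days:
  {"action": "models", "project": "thelook", "model": "ecommerce", "timeframe": 30}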
health_vacuum:
kind: looker-health-vacuum
source: looker-source
description: |
This tool identifies and suggests LookML models or explores that can be
safely removed due to inactivity or low usage.
Parameters:
- action (required): The type of resource to analyze for removal candidates. Can be `"models"` or `"explores"`.
- project (optional): The specific project ID to consider.
- model (optional): The specific model name to consider. Requires `project` if used without `explore`.
- explore (optional): The specific explore name to consider. Requires `model` if used.
- timeframe (optional): The lookback period in days to assess usage. Defaults to `90` days.
- min_queries (optional): The minimum number of queries for a resource to be considered active. Defaults to `1`.
Output:
A JSON array of objects, each representing a model or explore that is a candidate for deletion due to low usage.
dev_mode:
kind: looker-dev-mode
source: looker-source
description: |
This tool allows toggling the Looker IDE session between Development Mode and Production Mode.
Development Mode enables making and testing changes to LookML projects.
Parameters:
- enable (required): A boolean value.
- `true`: Switches the current session to Development Mode.
- `false`: Switches the current session to Production Mode.
get_projects:
kind: looker-get-projects
source: looker-source
description: |
This tool retrieves a list of all LookML projects available on the Looker instance.
It is useful for identifying projects before performing actions like retrieving
project files or making modifications.
Parameters:
This tool takes no parameters.
Output:
A JSON array of objects, each containing the `project_id` and `project_name`
for a LookML project.
get_project_files:
kind: looker-get-project-files
source: looker-source
description: |
This tool retrieves a list of all LookML files within a specified project,
providing details about each file.
Parameters:
- project_id (required): The unique ID of the LookML project, obtained from `get_projects`.
Output:
A JSON array of objects, each representing a LookML file and containing
details such as `path`, `id`, `type`, and `git_status`.
get_project_file:
kind: looker-get-project-file
source: looker-source
description: |
This tool retrieves the raw content of a specific LookML file from within a project.
Parameters:
- project_id (required): The unique ID of the LookML project, obtained from `get_projects`.
- file_path (required): The path to the LookML file within the project,
typically obtained from `get_project_files`.
Output:
The raw text content of the specified LookML file.
create_project_file:
kind: looker-create-project-file
source: looker-source
description: |
This tool creates a new LookML file within a specified project, populating
it with the provided content.
Prerequisite: The Looker session must be in Development Mode. Use `dev_mode: true` first.
Parameters:
- project_id (required): The unique ID of the LookML project.
- file_path (required): The desired path and filename for the new file within the project.
- content (required): The full LookML content to write into the new file.
Output:
A confirmation message upon successful file creation.
update_project_file:
kind: looker-update-project-file
source: looker-source
description: |
This tool modifies the content of an existing LookML file within a specified project.
Prerequisite: The Looker session must be in Development Mode. Use `dev_mode: true` first.
Parameters:
- project_id (required): The unique ID of the LookML project.
- file_path (required): The exact path to the LookML file to modify within the project.
- content (required): The new, complete LookML content to overwrite the existing file.
Output:
A confirmation message upon successful file modification.
delete_project_file:
kind: looker-delete-project-file
source: looker-source
description: |
This tool permanently deletes a specified LookML file from within a project.
Use with caution, as this action cannot be undone through the API.
Prerequisite: The Looker session must be in Development Mode. Use `dev_mode: true` first.
Parameters:
- project_id (required): The unique ID of the LookML project.
- file_path (required): The exact path to the LookML file to delete within the project.
Output:
A confirmation message upon successful file deletion.
get_connections:
kind: looker-get-connections
source: looker-source
description: |
This tool retrieves a list of all database connections configured in the Looker system.
Parameters:
This tool takes no parameters.
Output:
A JSON array of objects, each representing a database connection and including details such as:
- `name`: The connection's unique identifier.
- `dialect`: The database dialect (e.g., "mysql", "postgresql", "bigquery").
- `default_schema`: The default schema for the connection.
- `database`: The associated database name (if applicable).
- `supports_multiple_databases`: A boolean indicating if the connection can access multiple databases.
get_connection_schemas:
kind: looker-get-connection-schemas
source: looker-source
description: |
This tool retrieves a list of database schemas available through a specified
Looker connection.
Parameters:
- connection_name (required): The name of the database connection, obtained from `get_connections`.
- database (optional): An optional database name to filter the schemas.
Only applicable for connections that support multiple databases.
Output:
A JSON array of strings, where each string is the name of an available schema.
get_connection_databases:
kind: looker-get-connection-databases
source: looker-source
description: |
This tool retrieves a list of databases available through a specified Looker connection.
This is only applicable for connections that support multiple databases.
Use `get_connections` to check if a connection supports multiple databases.
Parameters:
- connection_name (required): The name of the database connection, obtained from `get_connections`.
Output:
A JSON array of strings, where each string is the name of an available database.
If the connection does not support multiple databases, an empty list or an error will be returned.
get_connection_tables:
kind: looker-get-connection-tables
source: looker-source
description: |
This tool retrieves a list of tables available within a specified database schema
through a Looker connection.
Parameters:
- connection_name (required): The name of the database connection, obtained from `get_connections`.
- schema (required): The name of the schema to list tables from, obtained from `get_connection_schemas`.
- database (optional): The name of the database to filter by. Only applicable for connections
that support multiple databases (check with `get_connections`).
Output:
A JSON array of strings, where each string is the name of an available table.
get_connection_table_columns:
kind: looker-get-connection-table-columns
source: looker-source
description: |
This tool retrieves a list of columns for one or more specified tables within a
given database schema and connection.
Parameters:
- connection_name (required): The name of the database connection, obtained from `get_connections`.
- schema (required): The name of the schema where the tables reside, obtained from `get_connection_schemas`.
- tables (required): A comma-separated string of table names for which to retrieve columns
(e.g., "users,orders,products"), obtained from `get_connection_tables`.
- database (optional): The name of the database to filter by. Only applicable for connections
that support multiple databases (check with `get_connections`).
Output:
A JSON array of objects, where each object represents a column and contains details
such as `table_name`, `column_name`, `data_type`, and `is_nullable`.
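For illustration (connection and table names hypothetical):
  {"connection_name": "my_warehouse", "schema": "public", "tables": "users,orders"}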
toolsets:
@@ -899,6 +1060,8 @@ toolsets:
- run_dashboard
- make_dashboard
- add_dashboard_element
- add_dashboard_filter
- generate_embed_url
- health_pulse
- health_analyze
- health_vacuum

View File

@@ -201,6 +201,10 @@ tools:
kind: postgres-get-column-cardinality
source: postgresql-source
list_table_stats:
kind: postgres-list-table-stats
source: postgresql-source
list_publication_tables:
kind: postgres-list-publication-tables
source: postgresql-source
@@ -250,3 +254,4 @@ toolsets:
- list_pg_settings
- list_database_stats
- list_roles
- list_table_stats

View File

@@ -121,7 +121,7 @@ func (t Tool) Invoke(ctx context.Context, resourceMgr tools.SourceProvider, para
}
defer results.Close()
var tables []map[string]any
tables := []map[string]any{}
for results.Next() {
var tableName string
err := results.Scan(&tableName)

View File

@@ -86,6 +86,16 @@ func (cfg Config) Initialize(srcs map[string]sources.Source) (tools.Tool, error)
"",
)
params = append(params, vizParameter)
dashFilters := parameters.NewArrayParameterWithRequired("dashboard_filters",
`An array of dashboard filters like [{"dashboard_filter_name": "name", "field": "view_name.field_name"}, ...]`,
false,
parameters.NewMapParameterWithDefault("dashboard_filter",
map[string]any{},
`A dashboard filter like {"dashboard_filter_name": "name", "field": "view_name.field_name"}`,
"",
),
)
params = append(params, dashFilters)
annotations := cfg.Annotations
if annotations == nil {
@@ -142,7 +152,9 @@ func (t Tool) Invoke(ctx context.Context, resourceMgr tools.SourceProvider, para
if err != nil {
return nil, fmt.Errorf("unable to get logger from ctx: %s", err)
}
logger.DebugContext(ctx, "params = ", params)
wq, err := lookercommon.ProcessQueryArgs(ctx, params)
if err != nil {
return nil, fmt.Errorf("error building query request: %w", err)
@@ -155,23 +167,64 @@ func (t Tool) Invoke(ctx context.Context, resourceMgr tools.SourceProvider, para
visConfig := paramsMap["vis_config"].(map[string]any)
wq.VisConfig = &visConfig
qrespFields := "id"
sdk, err := lookercommon.GetLookerSDK(t.UseClientOAuth, t.ApiSettings, t.Client, accessToken)
if err != nil {
return nil, fmt.Errorf("error getting sdk: %w", err)
}
qresp, err := sdk.CreateQuery(*wq, qrespFields, t.ApiSettings)
qresp, err := sdk.CreateQuery(*wq, "id", t.ApiSettings)
if err != nil {
return nil, fmt.Errorf("error making create query request: %w", err)
}
dashFilters := []any{}
if v, ok := paramsMap["dashboard_filters"]; ok {
if v != nil {
dashFilters = paramsMap["dashboard_filters"].([]any)
}
}
var filterables []v4.ResultMakerFilterables
for _, m := range dashFilters {
f := m.(map[string]any)
name, ok := f["dashboard_filter_name"].(string)
if !ok {
return nil, fmt.Errorf("error processing dashboard filter: %w", err)
}
field, ok := f["field"].(string)
if !ok {
return nil, fmt.Errorf("error processing dashboard filter: %w", err)
}
listener := v4.ResultMakerFilterablesListen{
DashboardFilterName: &name,
Field: &field,
}
listeners := []v4.ResultMakerFilterablesListen{listener}
filter := v4.ResultMakerFilterables{
Listen: &listeners,
}
filterables = append(filterables, filter)
}
if len(filterables) == 0 {
filterables = nil
}
wrm := v4.WriteResultMakerWithIdVisConfigAndDynamicFields{
Query: wq,
VisConfig: &visConfig,
Filterables: &filterables,
}
wde := v4.WriteDashboardElement{
DashboardId: &dashboard_id,
Title: &title,
ResultMaker: &wrm,
Query: wq,
QueryId: qresp.Id,
}
switch len(visConfig) {
case 0:
wde.Type = &dataType

View File

@@ -0,0 +1,248 @@
// Copyright 2025 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package lookeradddashboardfilter
import (
"context"
"fmt"
yaml "github.com/goccy/go-yaml"
"github.com/googleapis/genai-toolbox/internal/sources"
lookersrc "github.com/googleapis/genai-toolbox/internal/sources/looker"
"github.com/googleapis/genai-toolbox/internal/tools"
"github.com/googleapis/genai-toolbox/internal/tools/looker/lookercommon"
"github.com/googleapis/genai-toolbox/internal/util"
"github.com/googleapis/genai-toolbox/internal/util/parameters"
"github.com/looker-open-source/sdk-codegen/go/rtl"
v4 "github.com/looker-open-source/sdk-codegen/go/sdk/v4"
)
const kind string = "looker-add-dashboard-filter"
func init() {
if !tools.Register(kind, newConfig) {
panic(fmt.Sprintf("tool kind %q already registered", kind))
}
}
func newConfig(ctx context.Context, name string, decoder *yaml.Decoder) (tools.ToolConfig, error) {
actual := Config{Name: name}
if err := decoder.DecodeContext(ctx, &actual); err != nil {
return nil, err
}
return actual, nil
}
type Config struct {
Name string `yaml:"name" validate:"required"`
Kind string `yaml:"kind" validate:"required"`
Source string `yaml:"source" validate:"required"`
Description string `yaml:"description" validate:"required"`
AuthRequired []string `yaml:"authRequired"`
Annotations *tools.ToolAnnotations `yaml:"annotations,omitempty"`
}
// validate interface
var _ tools.ToolConfig = Config{}
func (cfg Config) ToolConfigKind() string {
return kind
}
func (cfg Config) Initialize(srcs map[string]sources.Source) (tools.Tool, error) {
// verify source exists
rawS, ok := srcs[cfg.Source]
if !ok {
return nil, fmt.Errorf("no source named %q configured", cfg.Source)
}
// verify the source is compatible
s, ok := rawS.(*lookersrc.Source)
if !ok {
return nil, fmt.Errorf("invalid source for %q tool: source kind must be `looker`", kind)
}
params := parameters.Parameters{}
dashIdParameter := parameters.NewStringParameter("dashboard_id", "The id of the dashboard where this filter will exist")
params = append(params, dashIdParameter)
nameParameter := parameters.NewStringParameter("name", "The name of the Dashboard Filter")
params = append(params, nameParameter)
titleParameter := parameters.NewStringParameter("title", "The title of the Dashboard Filter")
params = append(params, titleParameter)
filterTypeParameter := parameters.NewStringParameterWithDefault("filter_type", "field_filter", "The filter_type of the Dashboard Filter: date_filter, number_filter, string_filter, field_filter (default field_filter)")
params = append(params, filterTypeParameter)
defaultParameter := parameters.NewStringParameterWithRequired("default_value", "The default_value of the Dashboard Filter (optional)", false)
params = append(params, defaultParameter)
modelParameter := parameters.NewStringParameterWithRequired("model", "The model of a field type Dashboard Filter (required if type field)", false)
params = append(params, modelParameter)
exploreParameter := parameters.NewStringParameterWithRequired("explore", "The explore of a field type Dashboard Filter (required if type field)", false)
params = append(params, exploreParameter)
dimensionParameter := parameters.NewStringParameterWithRequired("dimension", "The dimension of a field type Dashboard Filter (required if type field)", false)
params = append(params, dimensionParameter)
multiValueParameter := parameters.NewBooleanParameterWithDefault("allow_multiple_values", true, "The Dashboard Filter should allow multiple values (default true)")
params = append(params, multiValueParameter)
requiredParameter := parameters.NewBooleanParameterWithDefault("required", false, "The Dashboard Filter is required to run dashboard (default false)")
params = append(params, requiredParameter)
annotations := cfg.Annotations
if annotations == nil {
readOnlyHint := false
annotations = &tools.ToolAnnotations{
ReadOnlyHint: &readOnlyHint,
}
}
mcpManifest := tools.GetMcpManifest(cfg.Name, cfg.Description, cfg.AuthRequired, params, annotations)
// finish tool setup
return Tool{
Config: cfg,
Name: cfg.Name,
Kind: kind,
UseClientOAuth: s.UseClientAuthorization(),
AuthTokenHeaderName: s.GetAuthTokenHeaderName(),
Client: s.Client,
ApiSettings: s.ApiSettings,
Parameters: params,
manifest: tools.Manifest{
Description: cfg.Description,
Parameters: params.Manifest(),
AuthRequired: cfg.AuthRequired,
},
mcpManifest: mcpManifest,
}, nil
}
// validate interface
var _ tools.Tool = Tool{}
type Tool struct {
Config
Name string `yaml:"name"`
Kind string `yaml:"kind"`
UseClientOAuth bool
AuthTokenHeaderName string
Client *v4.LookerSDK
ApiSettings *rtl.ApiSettings
AuthRequired []string `yaml:"authRequired"`
Parameters parameters.Parameters `yaml:"parameters"`
manifest tools.Manifest
mcpManifest tools.McpManifest
}
func (t Tool) ToConfig() tools.ToolConfig {
return t.Config
}
func (t Tool) Invoke(ctx context.Context, resourceMgr tools.SourceProvider, params parameters.ParamValues, accessToken tools.AccessToken) (any, error) {
logger, err := util.LoggerFromContext(ctx)
if err != nil {
return nil, fmt.Errorf("unable to get logger from ctx: %s", err)
}
logger.DebugContext(ctx, "params = ", params)
paramsMap := params.AsMap()
dashboard_id := paramsMap["dashboard_id"].(string)
name := paramsMap["name"].(string)
title := paramsMap["title"].(string)
filterType := paramsMap["filter_type"].(string)
switch filterType {
case "date_filter":
case "number_filter":
case "string_filter":
case "field_filter":
default:
return nil, fmt.Errorf("invalid filter type: %s. Must be one of date_filter, number_filter, string_filter, field_filter", filterType)
}
allowMultipleValues := paramsMap["allow_multiple_values"].(bool)
required := paramsMap["required"].(bool)
req := v4.WriteCreateDashboardFilter{
DashboardId: dashboard_id,
Name: name,
Title: title,
Type: filterType,
AllowMultipleValues: &allowMultipleValues,
Required: &required,
}
if v, ok := paramsMap["default_value"]; ok {
if v != nil {
defaultValue := paramsMap["default_value"].(string)
req.DefaultValue = &defaultValue
}
}
if filterType == "field_filter" {
model, ok := paramsMap["model"].(string)
if !ok || model == "" {
return nil, fmt.Errorf("model must be specified for field_filter type")
}
explore, ok := paramsMap["explore"].(string)
if !ok || explore == "" {
return nil, fmt.Errorf("explore must be specified for field_filter type")
}
dimension, ok := paramsMap["dimension"].(string)
if !ok || dimension == "" {
return nil, fmt.Errorf("dimension must be specified for field_filter type")
}
req.Model = &model
req.Explore = &explore
req.Dimension = &dimension
}
sdk, err := lookercommon.GetLookerSDK(t.UseClientOAuth, t.ApiSettings, t.Client, accessToken)
if err != nil {
return nil, fmt.Errorf("error getting sdk: %w", err)
}
resp, err := sdk.CreateDashboardFilter(req, "name", t.ApiSettings)
if err != nil {
return nil, fmt.Errorf("error making create dashboard filter request: %s", err)
}
logger.DebugContext(ctx, "resp = %v", resp)
data := make(map[string]any)
data["result"] = fmt.Sprintf("Dashboard filter \"%s\" added to dashboard %s", *resp.Name, dashboard_id)
return data, nil
}
func (t Tool) ParseParams(data map[string]any, claims map[string]map[string]any) (parameters.ParamValues, error) {
return parameters.ParseParams(t.Parameters, data, claims)
}
func (t Tool) Manifest() tools.Manifest {
return t.manifest
}
func (t Tool) McpManifest() tools.McpManifest {
return t.mcpManifest
}
func (t Tool) Authorized(verifiedAuthServices []string) bool {
return tools.IsAuthorized(t.AuthRequired, verifiedAuthServices)
}
func (t Tool) RequiresClientAuthorization(resourceMgr tools.SourceProvider) bool {
return t.UseClientOAuth
}
func (t Tool) GetAuthTokenHeaderName() string {
return t.AuthTokenHeaderName
}

View File

@@ -0,0 +1,116 @@
// Copyright 2025 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package lookeradddashboardfilter_test
import (
"strings"
"testing"
yaml "github.com/goccy/go-yaml"
"github.com/google/go-cmp/cmp"
"github.com/googleapis/genai-toolbox/internal/server"
"github.com/googleapis/genai-toolbox/internal/testutils"
lkr "github.com/googleapis/genai-toolbox/internal/tools/looker/lookeradddashboardfilter"
)
func TestParseFromYamlLookerAddDashboardFilter(t *testing.T) {
ctx, err := testutils.ContextWithNewLogger()
if err != nil {
t.Fatalf("unexpected error: %s", err)
}
tcs := []struct {
desc string
in string
want server.ToolConfigs
}{
{
desc: "basic example",
in: `
tools:
example_tool:
kind: looker-add-dashboard-filter
source: my-instance
description: some description
`,
want: server.ToolConfigs{
"example_tool": lkr.Config{
Name: "example_tool",
Kind: "looker-add-dashboard-filter",
Source: "my-instance",
Description: "some description",
AuthRequired: []string{},
},
},
},
}
for _, tc := range tcs {
t.Run(tc.desc, func(t *testing.T) {
got := struct {
Tools server.ToolConfigs `yaml:"tools"`
}{}
// Parse contents
err := yaml.UnmarshalContext(ctx, testutils.FormatYaml(tc.in), &got)
if err != nil {
t.Fatalf("unable to unmarshal: %s", err)
}
if diff := cmp.Diff(tc.want, got.Tools); diff != "" {
t.Fatalf("incorrect parse: diff %v", diff)
}
})
}
}
func TestFailParseFromYamlLookerAddDashboardFilter(t *testing.T) {
ctx, err := testutils.ContextWithNewLogger()
if err != nil {
t.Fatalf("unexpected error: %s", err)
}
tcs := []struct {
desc string
in string
err string
}{
{
desc: "Invalid method",
in: `
tools:
example_tool:
kind: looker-add-dashboard-filter
source: my-instance
method: GOT
description: some description
`,
err: "unable to parse tool \"example_tool\" as kind \"looker-add-dashboard-filter\": [4:1] unknown field \"method\"\n 1 | authRequired: []\n 2 | description: some description\n 3 | kind: looker-add-dashboard-filter\n> 4 | method: GOT\n ^\n 5 | source: my-instance",
},
}
for _, tc := range tcs {
t.Run(tc.desc, func(t *testing.T) {
got := struct {
Tools server.ToolConfigs `yaml:"tools"`
}{}
// Parse contents
err := yaml.UnmarshalContext(ctx, testutils.FormatYaml(tc.in), &got)
if err == nil {
t.Fatalf("expect parsing to fail")
}
errStr := err.Error()
if !strings.Contains(errStr, tc.err) {
t.Fatalf("unexpected error string: got %q, want substring %q", errStr, tc.err)
}
})
}
}

View File

@@ -58,8 +58,6 @@ type Config struct {
FilterParams parameters.Parameters `yaml:"filterParams"`
ProjectPayload string `yaml:"projectPayload"`
ProjectParams parameters.Parameters `yaml:"projectParams"`
SortPayload string `yaml:"sortPayload"`
SortParams parameters.Parameters `yaml:"sortParams"`
}
// validate interface
@@ -83,7 +81,7 @@ func (cfg Config) Initialize(srcs map[string]sources.Source) (tools.Tool, error)
}
// Create a slice for all parameters
allParameters := slices.Concat(cfg.FilterParams, cfg.ProjectParams, cfg.SortParams)
allParameters := slices.Concat(cfg.FilterParams, cfg.ProjectParams)
// Verify no duplicate parameter names
err := parameters.CheckDuplicateParameters(allParameters)
@@ -123,34 +121,6 @@ type Tool struct {
mcpManifest tools.McpManifest
}
func getOptions(sortParameters parameters.Parameters, projectPayload string, paramsMap map[string]any) (*options.FindOneOptions, error) {
opts := options.FindOne()
sort := bson.M{}
for _, p := range sortParameters {
sort[p.GetName()] = paramsMap[p.GetName()]
}
opts = opts.SetSort(sort)
if len(projectPayload) == 0 {
return opts, nil
}
result, err := parameters.PopulateTemplateWithJSON("MongoDBFindOneProjectString", projectPayload, paramsMap)
if err != nil {
return nil, fmt.Errorf("error populating project payload: %s", err)
}
var projection any
err = bson.UnmarshalExtJSON([]byte(result), false, &projection)
if err != nil {
return nil, fmt.Errorf("error unmarshalling projection: %s", err)
}
opts = opts.SetProjection(projection)
return opts, nil
}
func (t Tool) Invoke(ctx context.Context, resourceMgr tools.SourceProvider, params parameters.ParamValues, accessToken tools.AccessToken) (any, error) {
paramsMap := params.AsMap()
@@ -160,9 +130,18 @@ func (t Tool) Invoke(ctx context.Context, resourceMgr tools.SourceProvider, para
return nil, fmt.Errorf("error populating filter: %s", err)
}
opts, err := getOptions(t.SortParams, t.ProjectPayload, paramsMap)
if err != nil {
return nil, fmt.Errorf("error populating options: %s", err)
opts := options.FindOne()
if len(t.ProjectPayload) > 0 {
result, err := parameters.PopulateTemplateWithJSON("MongoDBFindOneProjectString", t.ProjectPayload, paramsMap)
if err != nil {
return nil, fmt.Errorf("error populating project payload: %s", err)
}
var projection any
err = bson.UnmarshalExtJSON([]byte(result), false, &projection)
if err != nil {
return nil, fmt.Errorf("error unmarshalling projection: %s", err)
}
opts = opts.SetProjection(projection)
}
var filter = bson.D{}

View File

@@ -56,9 +56,6 @@ func TestParseFromYamlMongoQuery(t *testing.T) {
projectPayload: |
{ name: 1, age: 1 }
projectParams: []
sortPayload: |
{ timestamp: -1 }
sortParams: []
`,
want: server.ToolConfigs{
"example_tool": mongodbfindone.Config{
@@ -81,8 +78,6 @@ func TestParseFromYamlMongoQuery(t *testing.T) {
},
ProjectPayload: "{ name: 1, age: 1 }\n",
ProjectParams: parameters.Parameters{},
SortPayload: "{ timestamp: -1 }\n",
SortParams: parameters.Parameters{},
},
},
},

View File

@@ -391,7 +391,7 @@ func (t Tool) Invoke(ctx context.Context, resourceMgr tools.SourceProvider, para
values[i] = &rawValues[i]
}
var out []any
out := []any{}
for rows.Next() {
err = rows.Scan(values...)
if err != nil {

View File

@@ -300,7 +300,7 @@ func (t Tool) Invoke(ctx context.Context, resourceMgr tools.SourceProvider, para
return nil, fmt.Errorf("unable to get column types: %w", err)
}
var out []any
out := []any{}
for results.Next() {
err := results.Scan(values...)
if err != nil {

View File

@@ -210,7 +210,7 @@ func (t Tool) Invoke(ctx context.Context, resourceMgr tools.SourceProvider, para
defer results.Close()
fields := results.FieldDescriptions()
var out []map[string]any
out := []map[string]any{}
for results.Next() {
values, err := results.Values()

View File

@@ -0,0 +1,245 @@
// Copyright 2025 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package postgreslisttablestats
import (
"context"
"fmt"
yaml "github.com/goccy/go-yaml"
"github.com/googleapis/genai-toolbox/internal/sources"
"github.com/googleapis/genai-toolbox/internal/sources/alloydbpg"
"github.com/googleapis/genai-toolbox/internal/sources/cloudsqlpg"
"github.com/googleapis/genai-toolbox/internal/sources/postgres"
"github.com/googleapis/genai-toolbox/internal/tools"
"github.com/googleapis/genai-toolbox/internal/util/parameters"
"github.com/jackc/pgx/v5/pgxpool"
)
const kind string = "postgres-list-table-stats"
const listTableStats = `
WITH table_stats AS (
SELECT
s.schemaname AS schema_name,
s.relname AS table_name,
pg_catalog.pg_get_userbyid(c.relowner) AS owner,
pg_total_relation_size(s.relid) AS total_size_bytes,
s.seq_scan,
s.idx_scan,
-- Ratio of index scans to total scans
CASE
WHEN (s.seq_scan + s.idx_scan) = 0 THEN 0
ELSE round((s.idx_scan * 100.0) / (s.seq_scan + s.idx_scan), 2)
END AS idx_scan_ratio_percent,
s.n_live_tup AS live_rows,
s.n_dead_tup AS dead_rows,
-- Percentage of rows that are "dead" (bloat)
CASE
WHEN (s.n_live_tup + s.n_dead_tup) = 0 THEN 0
ELSE round((s.n_dead_tup * 100.0) / (s.n_live_tup + s.n_dead_tup), 2)
END AS dead_row_ratio_percent,
s.n_tup_ins,
s.n_tup_upd,
s.n_tup_del,
s.last_vacuum,
s.last_autovacuum,
s.last_autoanalyze
FROM pg_stat_all_tables s
JOIN pg_catalog.pg_class c ON s.relid = c.oid
)
SELECT *
FROM table_stats
WHERE
($1::text IS NULL OR schema_name LIKE '%' || $1::text || '%')
AND ($2::text IS NULL OR table_name LIKE '%' || $2::text || '%')
AND ($3::text IS NULL OR owner LIKE '%' || $3::text || '%')
ORDER BY
CASE
WHEN $4::text = 'size' THEN total_size_bytes
WHEN $4::text = 'dead_rows' THEN dead_rows
WHEN $4::text = 'seq_scan' THEN seq_scan
WHEN $4::text = 'idx_scan' THEN idx_scan
ELSE seq_scan
END DESC
LIMIT COALESCE($5::int, 50);
`
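// The positional parameters above bind, in order, to the tool's inputs:
//   $1 = schema_name, $2 = table_name, $3 = owner (substring filters; NULL disables a filter),
//   $4 = sort_by ("size", "dead_rows", "seq_scan", or "idx_scan"; any other value sorts by seq_scan),
//   $5 = limit (COALESCE applies the default of 50 when NULL).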
func init() {
if !tools.Register(kind, newConfig) {
panic(fmt.Sprintf("tool kind %q already registered", kind))
}
}
func newConfig(ctx context.Context, name string, decoder *yaml.Decoder) (tools.ToolConfig, error) {
actual := Config{Name: name}
if err := decoder.DecodeContext(ctx, &actual); err != nil {
return nil, err
}
return actual, nil
}
type compatibleSource interface {
PostgresPool() *pgxpool.Pool
}
// validate compatible sources are still compatible
var _ compatibleSource = &alloydbpg.Source{}
var _ compatibleSource = &cloudsqlpg.Source{}
var _ compatibleSource = &postgres.Source{}
var compatibleSources = [...]string{alloydbpg.SourceKind, cloudsqlpg.SourceKind, postgres.SourceKind}
type Config struct {
Name string `yaml:"name" validate:"required"`
Kind string `yaml:"kind" validate:"required"`
Source string `yaml:"source" validate:"required"`
Description string `yaml:"description"`
AuthRequired []string `yaml:"authRequired"`
}
// validate interface
var _ tools.ToolConfig = Config{}
func (cfg Config) ToolConfigKind() string {
return kind
}
func (cfg Config) Initialize(srcs map[string]sources.Source) (tools.Tool, error) {
// verify source exists
rawS, ok := srcs[cfg.Source]
if !ok {
return nil, fmt.Errorf("no source named %q configured", cfg.Source)
}
// verify the source is compatible
s, ok := rawS.(compatibleSource)
if !ok {
return nil, fmt.Errorf("invalid source for %q tool: source kind must be one of %q", kind, compatibleSources)
}
allParameters := parameters.Parameters{
parameters.NewStringParameterWithDefault("schema_name", "public", "Optional: A specific schema name to filter by"),
parameters.NewStringParameterWithRequired("table_name", "Optional: A specific table name to filter by", false),
parameters.NewStringParameterWithRequired("owner", "Optional: A specific owner to filter by", false),
parameters.NewStringParameterWithRequired("sort_by", "Optional: The column to sort by", false),
parameters.NewIntParameterWithDefault("limit", 50, "Optional: The maximum number of results to return"),
}
paramManifest := allParameters.Manifest()
if cfg.Description == "" {
cfg.Description = `Lists the user table statistics in the database ordered by number of
sequential scans with a default limit of 50 rows. Returns the following
columns: schema name, table name, table size in bytes, number of
sequential scans, number of index scans, idx_scan_ratio_percent (showing
the percentage of total scans that utilized an index, where a low ratio
indicates missing or ineffective indexes), number of live rows, number
of dead rows, dead_row_ratio_percent (indicating potential table bloat),
total number of rows inserted, updated, and deleted, and the timestamps
for the last_vacuum, last_autovacuum, and last_autoanalyze operations.`
}
mcpManifest := tools.GetMcpManifest(cfg.Name, cfg.Description, cfg.AuthRequired, allParameters, nil)
// finish tool setup
return Tool{
name: cfg.Name,
kind: cfg.Kind,
authRequired: cfg.AuthRequired,
allParams: allParameters,
pool: s.PostgresPool(),
manifest: tools.Manifest{
Description: cfg.Description,
Parameters: paramManifest,
AuthRequired: cfg.AuthRequired,
},
mcpManifest: mcpManifest,
}, nil
}
// validate interface
var _ tools.Tool = Tool{}
type Tool struct {
Config
name string `yaml:"name"`
kind string `yaml:"kind"`
authRequired []string `yaml:"authRequired"`
allParams parameters.Parameters `yaml:"allParams"`
pool *pgxpool.Pool
manifest tools.Manifest
mcpManifest tools.McpManifest
}
func (t Tool) ToConfig() tools.ToolConfig {
return t.Config
}
func (t Tool) Invoke(ctx context.Context, resourceMgr tools.SourceProvider, params parameters.ParamValues, accessToken tools.AccessToken) (any, error) {
paramsMap := params.AsMap()
newParams, err := parameters.GetParams(t.allParams, paramsMap)
if err != nil {
return nil, fmt.Errorf("unable to extract standard params %w", err)
}
sliceParams := newParams.AsSlice()
results, err := t.pool.Query(ctx, listTableStats, sliceParams...)
if err != nil {
return nil, fmt.Errorf("unable to execute query: %w", err)
}
defer results.Close()
fields := results.FieldDescriptions()
out := []map[string]any{}
for results.Next() {
values, err := results.Values()
if err != nil {
return nil, fmt.Errorf("unable to parse row: %w", err)
}
rowMap := make(map[string]any)
for i, field := range fields {
rowMap[string(field.Name)] = values[i]
}
out = append(out, rowMap)
}
return out, nil
}
func (t Tool) ParseParams(data map[string]any, claims map[string]map[string]any) (parameters.ParamValues, error) {
return parameters.ParseParams(t.allParams, data, claims)
}
func (t Tool) Manifest() tools.Manifest {
return t.manifest
}
func (t Tool) McpManifest() tools.McpManifest {
return t.mcpManifest
}
func (t Tool) Authorized(verifiedAuthServices []string) bool {
return tools.IsAuthorized(t.authRequired, verifiedAuthServices)
}
func (t Tool) RequiresClientAuthorization(resourceMgr tools.SourceProvider) bool {
return false
}
func (t Tool) GetAuthTokenHeaderName() string {
return "Authorization"
}

View File

@@ -0,0 +1,95 @@
// Copyright 2025 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package postgreslisttablestats_test
import (
"testing"
yaml "github.com/goccy/go-yaml"
"github.com/google/go-cmp/cmp"
"github.com/googleapis/genai-toolbox/internal/server"
"github.com/googleapis/genai-toolbox/internal/testutils"
"github.com/googleapis/genai-toolbox/internal/tools/postgres/postgreslisttablestats"
)
func TestParseFromYamlPostgresListTableStats(t *testing.T) {
ctx, err := testutils.ContextWithNewLogger()
if err != nil {
t.Fatalf("unexpected error: %s", err)
}
tcs := []struct {
desc string
in string
want server.ToolConfigs
}{
{
desc: "basic example",
in: `
tools:
example_tool:
kind: postgres-list-table-stats
source: my-postgres-instance
description: some description
authRequired:
- my-google-auth-service
- other-auth-service
`,
want: server.ToolConfigs{
"example_tool": postgreslisttablestats.Config{
Name: "example_tool",
Kind: "postgres-list-table-stats",
Source: "my-postgres-instance",
Description: "some description",
AuthRequired: []string{"my-google-auth-service", "other-auth-service"},
},
},
},
{
desc: "basic example",
in: `
tools:
example_tool:
kind: postgres-list-table-stats
source: my-postgres-instance
description: some description
`,
want: server.ToolConfigs{
"example_tool": postgreslisttablestats.Config{
Name: "example_tool",
Kind: "postgres-list-table-stats",
Source: "my-postgres-instance",
Description: "some description",
AuthRequired: []string{},
},
},
},
}
for _, tc := range tcs {
t.Run(tc.desc, func(t *testing.T) {
got := struct {
Tools server.ToolConfigs `yaml:"tools"`
}{}
// Parse contents
err := yaml.UnmarshalContext(ctx, testutils.FormatYaml(tc.in), &got)
if err != nil {
t.Fatalf("unable to unmarshal: %s", err)
}
if diff := cmp.Diff(tc.want, got.Tools); diff != "" {
t.Fatalf("incorrect parse: diff %v", diff)
}
})
}
}

View File

@@ -0,0 +1,91 @@
// Copyright 2025 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package common
import (
"fmt"
"net/url"
"regexp"
"time"
"cloud.google.com/go/dataproc/v2/apiv1/dataprocpb"
)
const (
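// Padding applied to the Cloud Logging time window so that entries emitted
// slightly before batch creation or shortly after the final state change
// still fall inside the linked view.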
logTimeBufferBefore = 1 * time.Minute
logTimeBufferAfter = 10 * time.Minute
)
var batchFullNameRegex = regexp.MustCompile(`projects/(?P<project>[^/]+)/locations/(?P<location>[^/]+)/batches/(?P<batch_id>[^/]+)`)
// ExtractBatchDetails extracts the project ID, location, and batch ID from a fully qualified batch name.
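// For example, the (hypothetical) name
// "projects/demo-proj/locations/us-west1/batches/b-123" yields
// ("demo-proj", "us-west1", "b-123", nil); a name that does not match the
// pattern returns an error instead.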
func ExtractBatchDetails(batchName string) (projectID, location, batchID string, err error) {
matches := batchFullNameRegex.FindStringSubmatch(batchName)
if len(matches) < 4 {
return "", "", "", fmt.Errorf("failed to parse batch name: %s", batchName)
}
return matches[1], matches[2], matches[3], nil
}
// BatchConsoleURLFromProto builds a URL to the Google Cloud Console linking to the batch summary page.
func BatchConsoleURLFromProto(batchPb *dataprocpb.Batch) (string, error) {
projectID, location, batchID, err := ExtractBatchDetails(batchPb.GetName())
if err != nil {
return "", err
}
return BatchConsoleURL(projectID, location, batchID), nil
}
// BatchLogsURLFromProto builds a URL to the Google Cloud Console showing Cloud Logging for the given batch and time range.
func BatchLogsURLFromProto(batchPb *dataprocpb.Batch) (string, error) {
projectID, location, batchID, err := ExtractBatchDetails(batchPb.GetName())
if err != nil {
return "", err
}
createTime := batchPb.GetCreateTime().AsTime()
stateTime := batchPb.GetStateTime().AsTime()
return BatchLogsURL(projectID, location, batchID, createTime, stateTime), nil
}
// BatchConsoleURL builds a URL to the Google Cloud Console linking to the batch summary page.
func BatchConsoleURL(projectID, location, batchID string) string {
return fmt.Sprintf("https://console.cloud.google.com/dataproc/batches/%s/%s/summary?project=%s", location, batchID, projectID)
}
// BatchLogsURL builds a URL to the Google Cloud Console showing Cloud Logging for the given batch and time range.
//
// The implementation adds some buffer before and after the provided times.
func BatchLogsURL(projectID, location, batchID string, startTime, endTime time.Time) string {
advancedFilterTemplate := `resource.type="cloud_dataproc_batch"
resource.labels.project_id="%s"
resource.labels.location="%s"
resource.labels.batch_id="%s"`
advancedFilter := fmt.Sprintf(advancedFilterTemplate, projectID, location, batchID)
if !startTime.IsZero() {
actualStart := startTime.Add(-1 * logTimeBufferBefore)
advancedFilter += fmt.Sprintf("\ntimestamp>=\"%s\"", actualStart.Format(time.RFC3339Nano))
}
if !endTime.IsZero() {
actualEnd := endTime.Add(logTimeBufferAfter)
advancedFilter += fmt.Sprintf("\ntimestamp<=\"%s\"", actualEnd.Format(time.RFC3339Nano))
}
v := url.Values{}
v.Add("resource", "cloud_dataproc_batch/batch_id/"+batchID)
v.Add("advancedFilter", advancedFilter)
v.Add("project", projectID)
return "https://console.cloud.google.com/logs/viewer?" + v.Encode()
}

View File

@@ -0,0 +1,119 @@
// Copyright 2025 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package common
import (
"testing"
"time"
"cloud.google.com/go/dataproc/v2/apiv1/dataprocpb"
"google.golang.org/protobuf/types/known/timestamppb"
)
func TestExtractBatchDetails_Success(t *testing.T) {
batchName := "projects/my-project/locations/us-central1/batches/my-batch"
projectID, location, batchID, err := ExtractBatchDetails(batchName)
if err != nil {
t.Errorf("ExtractBatchDetails() error = %v, want no error", err)
return
}
wantProject := "my-project"
wantLocation := "us-central1"
wantBatchID := "my-batch"
if projectID != wantProject {
t.Errorf("ExtractBatchDetails() projectID = %v, want %v", projectID, wantProject)
}
if location != wantLocation {
t.Errorf("ExtractBatchDetails() location = %v, want %v", location, wantLocation)
}
if batchID != wantBatchID {
t.Errorf("ExtractBatchDetails() batchID = %v, want %v", batchID, wantBatchID)
}
}
func TestExtractBatchDetails_Failure(t *testing.T) {
batchName := "invalid-name"
_, _, _, err := ExtractBatchDetails(batchName)
wantErr := "failed to parse batch name: invalid-name"
if err == nil || err.Error() != wantErr {
t.Errorf("ExtractBatchDetails() error = %v, want %v", err, wantErr)
}
}
func TestBatchConsoleURL(t *testing.T) {
got := BatchConsoleURL("my-project", "us-central1", "my-batch")
want := "https://console.cloud.google.com/dataproc/batches/us-central1/my-batch/summary?project=my-project"
if got != want {
t.Errorf("BatchConsoleURL() = %v, want %v", got, want)
}
}
func TestBatchLogsURL(t *testing.T) {
startTime := time.Date(2025, 10, 1, 5, 0, 0, 0, time.UTC)
endTime := time.Date(2025, 10, 1, 6, 0, 0, 0, time.UTC)
got := BatchLogsURL("my-project", "us-central1", "my-batch", startTime, endTime)
want := "https://console.cloud.google.com/logs/viewer?advancedFilter=" +
"resource.type%3D%22cloud_dataproc_batch%22" +
"%0Aresource.labels.project_id%3D%22my-project%22" +
"%0Aresource.labels.location%3D%22us-central1%22" +
"%0Aresource.labels.batch_id%3D%22my-batch%22" +
"%0Atimestamp%3E%3D%222025-10-01T04%3A59%3A00Z%22" + // Minus 1 minute
"%0Atimestamp%3C%3D%222025-10-01T06%3A10%3A00Z%22" + // Plus 10 minutes
"&project=my-project" +
"&resource=cloud_dataproc_batch%2Fbatch_id%2Fmy-batch"
if got != want {
t.Errorf("BatchLogsURL() = %v, want %v", got, want)
}
}
func TestBatchConsoleURLFromProto(t *testing.T) {
batchPb := &dataprocpb.Batch{
Name: "projects/my-project/locations/us-central1/batches/my-batch",
}
got, err := BatchConsoleURLFromProto(batchPb)
if err != nil {
t.Fatalf("BatchConsoleURLFromProto() error = %v", err)
}
want := "https://console.cloud.google.com/dataproc/batches/us-central1/my-batch/summary?project=my-project"
if got != want {
t.Errorf("BatchConsoleURLFromProto() = %v, want %v", got, want)
}
}
func TestBatchLogsURLFromProto(t *testing.T) {
createTime := time.Date(2025, 10, 1, 5, 0, 0, 0, time.UTC)
stateTime := time.Date(2025, 10, 1, 6, 0, 0, 0, time.UTC)
batchPb := &dataprocpb.Batch{
Name: "projects/my-project/locations/us-central1/batches/my-batch",
CreateTime: timestamppb.New(createTime),
StateTime: timestamppb.New(stateTime),
}
got, err := BatchLogsURLFromProto(batchPb)
if err != nil {
t.Fatalf("BatchLogsURLFromProto() error = %v", err)
}
want := "https://console.cloud.google.com/logs/viewer?advancedFilter=" +
"resource.type%3D%22cloud_dataproc_batch%22" +
"%0Aresource.labels.project_id%3D%22my-project%22" +
"%0Aresource.labels.location%3D%22us-central1%22" +
"%0Aresource.labels.batch_id%3D%22my-batch%22" +
"%0Atimestamp%3E%3D%222025-10-01T04%3A59%3A00Z%22" + // Minus 1 minute
"%0Atimestamp%3C%3D%222025-10-01T06%3A10%3A00Z%22" + // Plus 10 minutes
"&project=my-project" +
"&resource=cloud_dataproc_batch%2Fbatch_id%2Fmy-batch"
if got != want {
t.Errorf("BatchLogsURLFromProto() = %v, want %v", got, want)
}
}

View File

@@ -18,11 +18,13 @@ import (
"context"
"encoding/json"
"fmt"
"time"
dataproc "cloud.google.com/go/dataproc/v2/apiv1/dataprocpb"
"github.com/googleapis/genai-toolbox/internal/sources"
"github.com/googleapis/genai-toolbox/internal/sources/serverlessspark"
"github.com/googleapis/genai-toolbox/internal/tools"
"github.com/googleapis/genai-toolbox/internal/tools/serverlessspark/common"
"github.com/googleapis/genai-toolbox/internal/util/parameters"
"google.golang.org/protobuf/encoding/protojson"
"google.golang.org/protobuf/proto"
@@ -131,7 +133,20 @@ func (t *Tool) Invoke(ctx context.Context, resourceMgr tools.SourceProvider, par
return nil, fmt.Errorf("failed to unmarshal create batch op metadata JSON: %w", err)
}
-return result, nil
+projectID, location, batchID, err := common.ExtractBatchDetails(meta.GetBatch())
+if err != nil {
+return nil, fmt.Errorf("error extracting batch details from name %q: %v", meta.GetBatch(), err)
+}
+consoleUrl := common.BatchConsoleURL(projectID, location, batchID)
+logsUrl := common.BatchLogsURL(projectID, location, batchID, meta.GetCreateTime().AsTime(), time.Time{})
+wrappedResult := map[string]any{
+"opMetadata": meta,
+"consoleUrl": consoleUrl,
+"logsUrl": logsUrl,
+}
+return wrappedResult, nil
}
func (t *Tool) ParseParams(data map[string]any, claims map[string]map[string]any) (parameters.ParamValues, error) {

View File

@@ -25,6 +25,7 @@ import (
"github.com/googleapis/genai-toolbox/internal/sources"
"github.com/googleapis/genai-toolbox/internal/sources/serverlessspark"
"github.com/googleapis/genai-toolbox/internal/tools"
"github.com/googleapis/genai-toolbox/internal/tools/serverlessspark/common"
"github.com/googleapis/genai-toolbox/internal/util/parameters"
"google.golang.org/protobuf/encoding/protojson"
)
@@ -142,9 +143,23 @@ func (t Tool) Invoke(ctx context.Context, resourceMgr tools.SourceProvider, para
return nil, fmt.Errorf("failed to unmarshal batch JSON: %w", err)
}
-return result, nil
-}
+consoleUrl, err := common.BatchConsoleURLFromProto(batchPb)
+if err != nil {
+return nil, fmt.Errorf("error generating console url: %v", err)
+}
+logsUrl, err := common.BatchLogsURLFromProto(batchPb)
+if err != nil {
+return nil, fmt.Errorf("error generating logs url: %v", err)
+}
+wrappedResult := map[string]any{
+"consoleUrl": consoleUrl,
+"logsUrl": logsUrl,
+"batch": result,
+}
+return wrappedResult, nil
+}
func (t Tool) ParseParams(data map[string]any, claims map[string]map[string]any) (parameters.ParamValues, error) {
return parameters.ParseParams(t.Parameters, data, claims)
}

View File

@@ -24,6 +24,7 @@ import (
"github.com/googleapis/genai-toolbox/internal/sources"
"github.com/googleapis/genai-toolbox/internal/sources/serverlessspark"
"github.com/googleapis/genai-toolbox/internal/tools"
"github.com/googleapis/genai-toolbox/internal/tools/serverlessspark/common"
"github.com/googleapis/genai-toolbox/internal/util/parameters"
"google.golang.org/api/iterator"
)
@@ -124,6 +125,8 @@ type Batch struct {
Creator string `json:"creator"`
CreateTime string `json:"createTime"`
Operation string `json:"operation"`
+ConsoleURL string `json:"consoleUrl"`
+LogsURL string `json:"logsUrl"`
}
// Invoke executes the tool's operation.
@@ -159,15 +162,26 @@ func (t Tool) Invoke(ctx context.Context, resourceMgr tools.SourceProvider, para
return nil, fmt.Errorf("failed to list batches: %w", err)
}
-batches := ToBatches(batchPbs)
+batches, err := ToBatches(batchPbs)
+if err != nil {
+return nil, err
+}
return ListBatchesResponse{Batches: batches, NextPageToken: nextPageToken}, nil
}
// ToBatches converts a slice of protobuf Batch messages to a slice of Batch structs.
-func ToBatches(batchPbs []*dataprocpb.Batch) []Batch {
+func ToBatches(batchPbs []*dataprocpb.Batch) ([]Batch, error) {
batches := make([]Batch, 0, len(batchPbs))
for _, batchPb := range batchPbs {
+consoleUrl, err := common.BatchConsoleURLFromProto(batchPb)
+if err != nil {
+return nil, fmt.Errorf("error generating console url: %v", err)
+}
+logsUrl, err := common.BatchLogsURLFromProto(batchPb)
+if err != nil {
+return nil, fmt.Errorf("error generating logs url: %v", err)
+}
batch := Batch{
Name: batchPb.Name,
UUID: batchPb.Uuid,
@@ -175,10 +189,12 @@ func ToBatches(batchPbs []*dataprocpb.Batch) []Batch {
Creator: batchPb.Creator,
CreateTime: batchPb.CreateTime.AsTime().Format(time.RFC3339),
Operation: batchPb.Operation,
+ConsoleURL: consoleUrl,
+LogsURL: logsUrl,
}
batches = append(batches, batch)
}
-return batches
+return batches, nil
}
func (t Tool) ParseParams(data map[string]any, claims map[string]map[string]any) (parameters.ParamValues, error) {

View File

@@ -129,7 +129,7 @@ type Tool struct {
// processRows iterates over the spanner.RowIterator and converts each row to a map[string]any.
func processRows(iter *spanner.RowIterator) ([]any, error) {
-var out []any
+out := []any{}
defer iter.Stop()
for {

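The one-line change above, like the `want: []objectDetails{}` updates in the MariaDB tests below, exists because encoding/json serializes a nil slice as `null` but an empty slice as `[]`. A self-contained illustration:

```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	var nilSlice []any // zero value: nil
	empty := []any{}   // allocated, length zero

	a, _ := json.Marshal(nilSlice)
	b, _ := json.Marshal(empty)
	fmt.Println(string(a)) // null
	fmt.Println(string(b)) // []
}
```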
View File

@@ -14,11 +14,11 @@
"url": "https://github.com/googleapis/genai-toolbox",
"source": "github"
},
"version": "0.22.0",
"version": "0.23.0",
"packages": [
{
"registryType": "oci",
"identifier": "us-central1-docker.pkg.dev/database-toolbox/toolbox/toolbox:0.22.0",
"identifier": "us-central1-docker.pkg.dev/database-toolbox/toolbox/toolbox:0.23.0",
"transport": {
"type": "streamable-http",
"url": "http://{host}:{port}/mcp"

View File

@@ -195,6 +195,7 @@ func TestAlloyDBPgToolEndpoints(t *testing.T) {
tests.RunPostgresLongRunningTransactionsTest(t, ctx, pool)
tests.RunPostgresListQueryStatsTest(t, ctx, pool)
tests.RunPostgresGetColumnCardinalityTest(t, ctx, pool)
+tests.RunPostgresListTableStatsTest(t, ctx, pool)
tests.RunPostgresListPublicationTablesTest(t, ctx, pool)
tests.RunPostgresListTableSpacesTest(t)
tests.RunPostgresListPgSettingsTest(t, ctx, pool)

View File

@@ -179,6 +179,7 @@ func TestCloudSQLPgSimpleToolEndpoints(t *testing.T) {
tests.RunPostgresLongRunningTransactionsTest(t, ctx, pool)
tests.RunPostgresListQueryStatsTest(t, ctx, pool)
tests.RunPostgresGetColumnCardinalityTest(t, ctx, pool)
+tests.RunPostgresListTableStatsTest(t, ctx, pool)
tests.RunPostgresListPublicationTablesTest(t, ctx, pool)
tests.RunPostgresListTableSpacesTest(t)
tests.RunPostgresListPgSettingsTest(t, ctx, pool)

View File

@@ -207,6 +207,7 @@ func AddPostgresPrebuiltConfig(t *testing.T, config map[string]any) map[string]a
PostgresReplicationStatsToolKind = "postgres-replication-stats"
PostgresListQueryStatsToolKind = "postgres-list-query-stats"
PostgresGetColumnCardinalityToolKind = "postgres-get-column-cardinality"
+PostgresListTableStats = "postgres-list-table-stats"
PostgresListPublicationTablesToolKind = "postgres-list-publication-tables"
PostgresListTablespacesToolKind = "postgres-list-tablespaces"
PostgresListPGSettingsToolKind = "postgres-list-pg-settings"
@@ -286,6 +287,12 @@ func AddPostgresPrebuiltConfig(t *testing.T, config map[string]any) map[string]a
"kind": PostgresGetColumnCardinalityToolKind,
"source": "my-instance",
}
tools["list_table_stats"] = map[string]any{
"kind": PostgresListTableStats,
"source": "my-instance",
}
tools["list_tablespaces"] = map[string]any{
"kind": PostgresListTablespacesToolKind,
"source": "my-instance",

View File

@@ -250,7 +250,7 @@ func RunMariDBListTablesTest(t *testing.T, databaseName, tableNameParam, tableNa
name: "invoke list_tables with non-existent table",
requestBody: bytes.NewBufferString(`{"table_names": "non_existent_table"}`),
wantStatusCode: http.StatusOK,
-want: nil,
+want: []objectDetails{},
},
}
for _, tc := range invokeTcs {
@@ -282,7 +282,7 @@ func RunMariDBListTablesTest(t *testing.T, databaseName, tableNameParam, tableNa
if err := json.Unmarshal([]byte(resultString), &tables); err != nil {
t.Fatalf("failed to unmarshal outer JSON array into []tableInfo: %v", err)
}
-var details []map[string]any
+details := []map[string]any{}
for _, table := range tables {
var d map[string]any
if err := json.Unmarshal([]byte(table.ObjectDetails), &d); err != nil {
@@ -292,23 +292,19 @@ func RunMariDBListTablesTest(t *testing.T, databaseName, tableNameParam, tableNa
}
got = details
} else {
if resultString == "null" {
got = nil
} else {
var tables []tableInfo
if err := json.Unmarshal([]byte(resultString), &tables); err != nil {
t.Fatalf("failed to unmarshal outer JSON array into []tableInfo: %v", err)
}
var details []objectDetails
for _, table := range tables {
var d objectDetails
if err := json.Unmarshal([]byte(table.ObjectDetails), &d); err != nil {
t.Fatalf("failed to unmarshal nested ObjectDetails string: %v", err)
}
details = append(details, d)
}
got = details
var tables []tableInfo
if err := json.Unmarshal([]byte(resultString), &tables); err != nil {
t.Fatalf("failed to unmarshal outer JSON array into []tableInfo: %v", err)
}
details := []objectDetails{}
for _, table := range tables {
var d objectDetails
if err := json.Unmarshal([]byte(table.ObjectDetails), &d); err != nil {
t.Fatalf("failed to unmarshal nested ObjectDetails string: %v", err)
}
details = append(details, d)
}
got = details
}
opts := []cmp.Option{
@@ -319,7 +315,7 @@ func RunMariDBListTablesTest(t *testing.T, databaseName, tableNameParam, tableNa
// Checking only the current database where the test tables are created to avoid brittle tests.
if tc.isAllTables {
-var filteredGot []objectDetails
+filteredGot := []objectDetails{}
if got != nil {
for _, item := range got.([]objectDetails) {
if item.SchemaName == databaseName {
@@ -327,11 +323,7 @@ func RunMariDBListTablesTest(t *testing.T, databaseName, tableNameParam, tableNa
}
}
}
-if len(filteredGot) == 0 {
-got = nil
-} else {
-got = filteredGot
-}
+got = filteredGot
}
if diff := cmp.Diff(tc.want, got, opts...); diff != "" {

Some files were not shown because too many files have changed in this diff.