## Description
Update the command for running unit tests in DEVELOPER.md to exclude integration tests.
## PR Checklist
- [x] Make sure you reviewed
[CONTRIBUTING.md](https://github.com/googleapis/genai-toolbox/blob/main/CONTRIBUTING.md)
- [ ] Make sure to open an issue as a
[bug/issue](https://github.com/googleapis/genai-toolbox/issues/new/choose)
before writing your code! That way we can discuss the change, evaluate
designs, and agree on the general idea
- [x] Ensure the tests and linter pass
- [x] Code coverage does not decrease (if any source code was changed)
- [x] Appropriate docs were updated (if necessary)
- [x] Make sure to add `!` if this involves a breaking change
Co-authored-by: Averi Kitsch <akitsch@google.com>
## Description
Adds a PostgreSQL custom `list_roles` tool that lists all user-created roles in the instance and provides details about each role's attributes and memberships.
<img width="1065" height="145" alt="Screenshot 2025-11-26 at 12 59
56 AM"
src="https://github.com/user-attachments/assets/d90131b1-d369-4108-b4db-ee5dc9aafe38"
/>
## PR Checklist
> Thank you for opening a Pull Request! Before submitting your PR, there
are a
> few things you can do to make sure it goes smoothly:
- [x] Make sure you reviewed
[CONTRIBUTING.md](https://github.com/googleapis/genai-toolbox/blob/main/CONTRIBUTING.md)
- [x] Make sure to open an issue as a
[bug/issue](https://github.com/googleapis/genai-toolbox/issues/new/choose)
before writing your code! That way we can discuss the change, evaluate
designs, and agree on the general idea
- [x] Ensure the tests and linter pass
- [x] Code coverage does not decrease (if any source code was changed)
- [x] Appropriate docs were updated (if necessary)
- [ ] Make sure to add `!` if this involves a breaking change
🛠️ Fixes #1738
Co-authored-by: Wenxin Du <117315983+duwenxin99@users.noreply.github.com>
## Description
Add additional filter parameters for existing PostgreSQL tools:
1. `list_views`:
- Add a new optional `"schema_name"` filter parameter to return results
based on a specific schema name pattern.
- Add an additional column `"definition"` to return the view definition.
2. `list_schemas`:
- Add a new optional `"owner"` filter parameter to return results based
on a specific owner name pattern.
- Add a new optional `"limit"` parameter to return a specific number of
rows.
3. `list_indexes`:
- Add a new optional `"only_unused"` filter parameter to return only
unused indexes.
list_views
<img width="1531" height="763" alt="Screenshot 2025-11-25 at 1 36 39 PM"
src="https://github.com/user-attachments/assets/bd6805b3-43d2-46c7-adc8-62d3a4521d36"
/>
list_schemas
<img width="1519" height="755" alt="Screenshot 2025-11-25 at 1 35 54 PM"
src="https://github.com/user-attachments/assets/62d3e987-b64e-442b-ba1a-84def1df7a58"
/>
list_indexes
<img width="1523" height="774" alt="Screenshot 2025-11-25 at 1 35 32 PM"
src="https://github.com/user-attachments/assets/c6f73b3f-f8a2-4b76-9218-64d7011a2241"
/>
## PR Checklist
> Thank you for opening a Pull Request! Before submitting your PR, there
are a
> few things you can do to make sure it goes smoothly:
- [x] Make sure you reviewed
[CONTRIBUTING.md](https://github.com/googleapis/genai-toolbox/blob/main/CONTRIBUTING.md)
- [x] Make sure to open an issue as a
[bug/issue](https://github.com/googleapis/genai-toolbox/issues/new/choose)
before writing your code! That way we can discuss the change, evaluate
designs, and agree on the general idea
- [x] Ensure the tests and linter pass
- [x] Code coverage does not decrease (if any source code was changed)
- [x] Appropriate docs were updated (if necessary)
- [x] Make sure to add `!` if this involves a breaking change
🛠️ Fixes #1738
Co-authored-by: Averi Kitsch <akitsch@google.com>
## Description
This PR
1. Adds **MariaDB** as a source. The implementation is similar to the **MySQL** source.
2. Utilises the pre-implemented **MySQL** tools:
- `mysql-execute-sql`
- `mysql-list-active-queries`
- `mysql-list-table-fragmentation`
- `mysql-list-tables`
- `mysql-list-tables-missing-unique-indexes`
- `mysql-sql`
**Note:** After discussion with @duwenxin99 in issue #1768, I initially
assumed MariaDB required new tools due to different metadata structures
and system tables. That is true for older MariaDB versions, but current
MySQL tooling already works with MariaDB (verified), so a separate tool
set was not needed.
3. Adds a **MariaDB** source doc to the documentation.
4. Adds MariaDB integration tests using the existing MySQL test flow.
Note: The test file is based on the MySQL integration test, but
`GetMariaDBWants()` and
`RunMariDBListTablesTest()` are implemented because MariaDB returns
different metadata
and list-tables output, so the assertions must be MariaDB-specific.
5. Updates CI
Lastly, I considered adding a MariaDB-exclusive Galera cluster monitoring tool, but skipped it because it requires a multi-node Galera setup for integration testing and would significantly increase CI complexity with unclear usage demand.
## PR Checklist
- [x] Make sure you reviewed
[CONTRIBUTING.md](https://github.com/googleapis/genai-toolbox/blob/main/CONTRIBUTING.md)
- [x] Make sure to open an issue as a
[bug/issue](https://github.com/googleapis/genai-toolbox/issues/new/choose)
before writing your code! That way we can discuss the change, evaluate
designs, and agree on the general idea
- [x] Ensure the tests and linter pass
- [x] Code coverage does not decrease (if any source code was changed)
- [x] Appropriate docs were updated (if necessary)
- [ ] Make sure to add `!` if this involves a breaking change
🛠️ Fixes #1712, #1768
---------
Co-authored-by: Wenxin Du <117315983+duwenxin99@users.noreply.github.com>
Bumps [golang.org/x/crypto](https://github.com/golang/crypto) from
0.43.0 to 0.45.0.
<details>
<summary>Commits</summary>
<ul>
<li><a
href="4e0068c009"><code>4e0068c</code></a>
go.mod: update golang.org/x dependencies</li>
<li><a
href="e79546e28b"><code>e79546e</code></a>
ssh: curb GSSAPI DoS risk by limiting number of specified OIDs</li>
<li><a
href="f91f7a7c31"><code>f91f7a7</code></a>
ssh/agent: prevent panic on malformed constraint</li>
<li><a
href="2df4153a03"><code>2df4153</code></a>
acme/autocert: let automatic renewal work with short lifetime certs</li>
<li><a
href="bcf6a849ef"><code>bcf6a84</code></a>
acme: pass context to request</li>
<li><a
href="b4f2b62076"><code>b4f2b62</code></a>
ssh: fix error message on unsupported cipher</li>
<li><a
href="79ec3a51fc"><code>79ec3a5</code></a>
ssh: allow to bind to a hostname in remote forwarding</li>
<li><a
href="122a78f140"><code>122a78f</code></a>
go.mod: update golang.org/x dependencies</li>
<li><a
href="c0531f9c34"><code>c0531f9</code></a>
all: eliminate vet diagnostics</li>
<li><a
href="0997000b45"><code>0997000</code></a>
all: fix some comments</li>
<li>Additional commits viewable in <a
href="https://github.com/golang/crypto/compare/v0.43.0...v0.45.0">compare
view</a></li>
</ul>
</details>
<br />
[Dependabot compatibility score](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the
[Security Alerts
page](https://github.com/googleapis/genai-toolbox/network/alerts).
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Averi Kitsch <akitsch@google.com>
## Description
Adds a PostgreSQL custom `list_tablespaces` tool that returns details of the tablespaces present in the database.
<img width="1719" height="698" alt="Screenshot 2025-11-12 at 9 11 13 AM"
src="https://github.com/user-attachments/assets/03964a1b-27e0-4da8-85a2-57db905163ed"
/>
<img width="1077" height="141" alt="Screenshot 2025-11-12 at 9 12 42 AM"
src="https://github.com/user-attachments/assets/f93f5692-eb62-4f30-8192-40c8873d4d00"
/>
Lists all tablespaces in the database. Returns the tablespace name,
owner name, size in bytes, internal object ID, the access control list
regarding permissions, and any specific tablespace options.
## PR Checklist
> Thank you for opening a Pull Request! Before submitting your PR, there
are a
> few things you can do to make sure it goes smoothly:
- [x] Make sure you reviewed
[CONTRIBUTING.md](https://github.com/googleapis/genai-toolbox/blob/main/CONTRIBUTING.md)
- [x] Make sure to open an issue as a
[bug/issue](https://github.com/googleapis/genai-toolbox/issues/new/choose)
before writing your code! That way we can discuss the change, evaluate
designs, and agree on the general idea
- [x] Ensure the tests and linter pass
- [x] Code coverage does not decrease (if any source code was changed)
- [x] Appropriate docs were updated (if necessary)
- [x] Make sure to add `!` if this involves a breaking change
🛠️ Fixes #1738
## Description
Adds a PostgreSQL custom `list_publication_tables` tool that returns details of the publication tables present in the database.
Test Output:
<img width="845" height="239" alt="Screenshot 2025-11-11 at 12 50 59 AM"
src="https://github.com/user-attachments/assets/b7606e44-c5f6-4fc7-865e-7efadd112eff"
/>
<img width="1529" height="648" alt="Screenshot 2025-11-11 at 1 15 18 AM"
src="https://github.com/user-attachments/assets/6192b772-f0bc-4fb4-8032-ca487434d77c"
/>
## PR Checklist
> Thank you for opening a Pull Request! Before submitting your PR, there
are a
> few things you can do to make sure it goes smoothly:
- [x] Make sure you reviewed
[CONTRIBUTING.md](https://github.com/googleapis/genai-toolbox/blob/main/CONTRIBUTING.md)
- [x] Make sure to open an issue as a
[bug/issue](https://github.com/googleapis/genai-toolbox/issues/new/choose)
before writing your code! That way we can discuss the change, evaluate
designs, and agree on the general idea
- [x] Ensure the tests and linter pass
- [x] Code coverage does not decrease (if any source code was changed)
- [x] Appropriate docs were updated (if necessary)
- [x] Make sure to add `!` if this involves a breaking change
🛠️ Fixes #1738
Co-authored-by: Averi Kitsch <akitsch@google.com>
Co-authored-by: Wenxin Du <117315983+duwenxin99@users.noreply.github.com>
This tool is almost identical to create_pyspark_batch, but for Java
Spark batches. There are some minor differences in how the main files
and args are provided.
We are planning to add several very similar tools for creating batches
like the existing pyspark batches: spark (Java), R, etc. They will use
an identical approach of specifying environment and runtime config in
the YAML, differing only in how language-specific args are passed. We
can streamline this.
This tool creates a PySpark batch from a minimal set of parameters, to
keep things simple for the LLM. Advanced runtime and environment config
can be specified in tools.yaml.
The `required` tag raises a validation failure error when a boolean field is set to `false`:
```
ERROR "unable to parse tool file at "mongodb_tools.yaml": unable to parse tool "insert-one-device" as kind "mongodb-insert-one": [2:12] Key: 'Config.Canonical' Error:Field validation for 'Canonical' failed on the 'required' tag\n 1 | authRequired: []\n> 2 | canonical: false\n ^\n 3 | collection: Device\n 4 | database: xiar\n 5 | description: Inserts a single new document into the Device collection. The 'data' parameter must be a string containing the JSON object to insert.\n 6 | "
```
All the `required` tags are removed from the boolean `canonical` field
of the MongoDB tools. Unit tests are added.
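For context, the error format above matches the go-playground/validator library; the sketch below (with an illustrative `Config` struct, not the toolbox's actual type) shows why a `required` tag rejects an explicit `false`: the boolean zero value is indistinguishable from an unset field.

```go
package main

import (
	"fmt"

	"github.com/go-playground/validator/v10"
)

// Config is a hypothetical stand-in for the MongoDB tool config above.
type Config struct {
	// `required` treats the zero value (false) as missing, so an explicit
	// `canonical: false` fails validation.
	Canonical bool `yaml:"canonical" validate:"required"`
}

// Fixed drops the `required` tag, as this change does for the boolean field.
type Fixed struct {
	Canonical bool `yaml:"canonical"`
}

func main() {
	v := validator.New()
	fmt.Println(v.Struct(Config{Canonical: false})) // validation fails on the 'required' tag for 'Canonical'
	fmt.Println(v.Struct(Fixed{Canonical: false}))  // <nil>
}
```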
## Description
This change updates both `bigquery-sql` and `bigquery-execute-sql` tools
to format `NUMERIC` and `BIGNUMERIC` values as decimal strings (e.g.,
"9.5") instead of rational fractions (e.g., "19/2"). This ensures the
tools' output matches the BigQuery REST API JSON format.
Key changes:
- Added `NormalizeValue` function in
`internal/tools/bigquery/bigquerycommon` to handle `*big.Rat` conversion
with 38-digit precision and trailing zero trimming.
- Updated `bigquery-sql` and `bigquery-execute-sql` to use
`NormalizeValue`.
- Added comprehensive tests in
`internal/tools/bigquery/bigquerycommon/conversion_test.go`.
With these changes the formatting for NUMERIC and BIGNUMERIC is fixed.
**Before:**
```
[
{
"id": 3,
"numeric_value": "1"
},
{
"id": 2,
"numeric_value": "333333333/1000000000"
},
{
"id": 4,
"numeric_value": "12341/10"
},
{
"id": 1,
"numeric_value": "19/2"
}
]
```
**After:**
```
[
{
"id": 3,
"numeric_value": "1"
},
{
"id": 2,
"numeric_value": "0.333333333"
},
{
"id": 4,
"numeric_value": "1234.1"
},
{
"id": 1,
"numeric_value": "9.5"
}
]
```
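A minimal sketch of the conversion described above; the real `NormalizeValue` lives in `internal/tools/bigquery/bigquerycommon`, and the exact signature and edge-case handling here are assumptions based on this description.

```go
package main

import (
	"fmt"
	"math/big"
	"strings"
)

// normalizeValue renders NUMERIC/BIGNUMERIC values (returned by the BigQuery
// client as *big.Rat) as decimal strings, e.g. "9.5" instead of "19/2".
// Other values are passed through unchanged.
func normalizeValue(v any) any {
	rat, ok := v.(*big.Rat)
	if !ok {
		return v
	}
	// BIGNUMERIC supports up to 38 fractional digits.
	s := rat.FloatString(38)
	// Trim trailing zeros, then a dangling decimal point: "9.50...0" -> "9.5".
	s = strings.TrimRight(s, "0")
	s = strings.TrimRight(s, ".")
	return s
}

func main() {
	fmt.Println(normalizeValue(big.NewRat(19, 2)))                 // 9.5
	fmt.Println(normalizeValue(big.NewRat(12341, 10)))             // 1234.1
	fmt.Println(normalizeValue(big.NewRat(333333333, 1000000000))) // 0.333333333
	fmt.Println(normalizeValue("unchanged"))                       // unchanged
}
```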
## PR Checklist
> Thank you for opening a Pull Request! Before submitting your PR, there
are a
> few things you can do to make sure it goes smoothly:
- [x] Make sure you reviewed
[CONTRIBUTING.md](https://github.com/googleapis/genai-toolbox/blob/main/CONTRIBUTING.md)
- [ ] Make sure to open an issue as a
[bug/issue](https://github.com/googleapis/genai-toolbox/issues/new/choose)
before writing your code! That way we can discuss the change, evaluate
designs, and agree on the general idea
- [x] Ensure the tests and linter pass
- [x] Code coverage does not decrease (if any source code was changed)
- [ ] Appropriate docs were updated (if necessary)
- [ ] Make sure to add `!` if this involves a breaking change
🛠️ Fixes #1194
---------
Co-authored-by: google-labs-jules[bot] <161369871+google-labs-jules[bot]@users.noreply.github.com>
Co-authored-by: Averi Kitsch <akitsch@google.com>
## Description
## PR Checklist
> Thank you for opening a Pull Request! Before submitting your PR, there
are a
> few things you can do to make sure it goes smoothly:
- [x] Make sure you reviewed
[CONTRIBUTING.md](https://github.com/googleapis/genai-toolbox/blob/main/CONTRIBUTING.md)
- [ ] Make sure to open an issue as a
[bug/issue](https://github.com/googleapis/genai-toolbox/issues/new/choose)
before writing your code! That way we can discuss the change, evaluate
designs, and agree on the general idea
- [x] Ensure the tests and linter pass
- [ ] Code coverage does not decrease (if any source code was changed)
- [x] Appropriate docs were updated (if necessary)
- [ ] Make sure to add `!` if this involves a breaking change
🛠️ Fixes #<issue_number_goes_here>
This PR introduces the configuration file (.gemini/config.yaml) for the
Gemini Code Assist GitHub App. The goal is to enable AI code reviews
while preventing quota exhaustion caused by high-volume automated PRs
(e.g., Renovate/Dependabot).
See
[schema](https://developers.google.com/gemini-code-assist/docs/customize-gemini-behavior-github#config.yaml-schema)
## Description
## PR Checklist
> Thank you for opening a Pull Request! Before submitting your PR, there
are a
> few things you can do to make sure it goes smoothly:
- [x] Make sure you reviewed
[CONTRIBUTING.md](https://github.com/googleapis/genai-toolbox/blob/main/CONTRIBUTING.md)
- [ ] Make sure to open an issue as a
[bug/issue](https://github.com/googleapis/genai-toolbox/issues/new/choose)
before writing your code! That way we can discuss the change, evaluate
designs, and agree on the general idea
- [x] Ensure the tests and linter pass
- [x] Code coverage does not decrease (if any source code was changed)
- [ ] Appropriate docs were updated (if necessary)
- [ ] Make sure to add `!` if this involves a breaking change
🛠️ Fixes #<issue_number_goes_here>
This PR updates the documentation across the repository to reflect the
new installation workflow using `npx` and Node.js, replacing the
previous binary download instructions. It also standardizes the
prerequisites and adds helpful configuration notes for Windows users.
These changes simplify the setup process for users by leveraging `npx`
for executing the tools, ensuring they always use the latest version
without manual binary management. It also addresses feedback from PR
#2079 regarding installation clarity and Windows support.
---------
Co-authored-by: Twisha Bansal <twishabansal07@gmail.com>
## Description
Adds Spanner list graphs to the prebuilt configs.
## PR Checklist
- [x] Make sure you reviewed
[CONTRIBUTING.md](https://github.com/googleapis/genai-toolbox/blob/main/CONTRIBUTING.md)
- [x] Make sure to open an issue as a
[bug/issue](https://github.com/googleapis/genai-toolbox/issues/new/choose)
before writing your code! That way we can discuss the change, evaluate
designs, and agree on the general idea
- [x] Ensure the tests and linter pass
- [x] Code coverage does not decrease (if any source code was changed)
- [x] Appropriate docs were updated (if necessary)
- [x] Make sure to add `!` if this involves a breaking change
🛠️ Fixes #2051
---------
Co-authored-by: Averi Kitsch <akitsch@google.com>
## Description
This sets the default annotations (readOnlyHint, destructiveHint, etc.) in the Looker tool code.
If desired, a custom tool can override the annotations in YAML like so:
```yaml
tool_name:
kind: tool-kind
source: tool-source
annotations:
readOnlyHint: false
destructiveHint: true
description: |
Tool Description
```
## PR Checklist
> Thank you for opening a Pull Request! Before submitting your PR, there
are a
> few things you can do to make sure it goes smoothly:
- [x] Make sure you reviewed
[CONTRIBUTING.md](https://github.com/googleapis/genai-toolbox/blob/main/CONTRIBUTING.md)
- [x] Make sure to open an issue as a
[bug/issue](https://github.com/googleapis/genai-toolbox/issues/new/choose)
before writing your code! That way we can discuss the change, evaluate
designs, and agree on the general idea
- [x] Ensure the tests and linter pass
- [x] Code coverage does not decrease (if any source code was changed)
- [x] Appropriate docs were updated (if necessary)
- [x] Make sure to add `!` if this involves a breaking change
Co-authored-by: Averi Kitsch <akitsch@google.com>
## Description
This Pull Request addresses an issue where the `postgres-list-tables`
tool was including the `google_ml` schema in the results when listing
tables from an AlloyDB PostgreSQL instance.
The primary change updates the SQL query in [postgreslisttables.go](https://github.com/googleapis/genai-toolbox/blob/main/internal/tools/postgres/postgreslisttables/postgreslisttables.go) to explicitly exclude the `google_ml` schema, alongside other system schemas.
The `WHERE` clause in the `listTablesStatement` has been modified from:
```sql
-- Original WHERE clause snippet
AND ns.nspname NOT IN ('pg_catalog', 'information_schema', 'pg_toast')
AND ns.nspname NOT LIKE 'pg_temp_%' AND ns.nspname NOT LIKE 'pg_toast_temp_%'
```
To:
```sql
-- Updated WHERE clause snippet
AND ns.nspname NOT IN ('pg_catalog', 'information_schema', 'pg_toast','google_ml')
AND ns.nspname NOT LIKE 'pg_temp_%' AND ns.nspname NOT LIKE 'pg_toast_temp_%'
```
This ensures that tables residing in the google_ml schema are no longer
returned by the tool, improving the clarity and relevance of the table
listings for users.
## Testing
The changes were validated end-to-end by:
* Building a custom `toolbox` binary from the modified source.
* Configuring `~/.gemini/settings.json` to use this binary as an
`alloydb-test` MCP server.
* Launching `gemini` CLI and confirming the `alloydb-test` server was `🟢
Ready` via `/mcp`.
* Invoking the `postgres-list-tables` tool with `list tables`.
* Verifying that `google_ml` schema tables were successfully excluded
from the results.
Testing screenshot :
<img width="2809" height="1779" alt="Screenshot 2025-11-25 3 47 43 PM"
src="https://github.com/user-attachments/assets/126cf5be-30d7-4ec1-9d31-652b5219c0ce"
/>
## PR Checklist
> Thank you for opening a Pull Request! Before submitting your PR, there
are a
> few things you can do to make sure it goes smoothly:
- [x] Make sure you reviewed
[CONTRIBUTING.md](https://github.com/googleapis/genai-toolbox/blob/main/CONTRIBUTING.md)
- [x] Make sure to open an issue as a
[bug/issue](https://github.com/googleapis/genai-toolbox/issues/new/choose)
before writing your code! That way we can discuss the change, evaluate
designs, and agree on the general idea
- [x] Ensure the tests and linter pass
- [ ] Code coverage does not decrease (if any source code was changed)
- [ ] Appropriate docs were updated (if necessary)
- [ ] Make sure to add `!` if this involves a breaking change
🛠️ Fixes #2009
Co-authored-by: prernakakkar-google <158031829+prernakakkar-google@users.noreply.github.com>
## Description
---
This pull request adds a new tool, cloud-sql-clone-instance, which
enables cloning a Cloud SQL instance from the toolbox using the Cloud
SQL Admin API. The tool supports both standard cloning and point-in-time
recovery (PITR). It also supports specifying preferred zones for cloned
instances via the preferredZone and preferredSecondaryZone fields.
Key features:
- **Instance cloning:** Clone a Cloud SQL instance by specifying the source and destination instance names.
- **Point-in-time recovery (PITR):** Providing a `pointInTime` timestamp creates a clone of the instance as it existed at a specific moment.
- **High availability configuration:** The `preferredZone` and `preferredSecondaryZone` parameters configure the cloned instance for high availability.
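For illustration, a minimal sketch of the underlying clone call against the Cloud SQL Admin API Go client; the project, instance names, zones, and timestamp are hypothetical, and the field set assumes a recent `google.golang.org/api/sqladmin/v1` release rather than the tool's actual implementation.

```go
package main

import (
	"context"
	"log"

	sqladmin "google.golang.org/api/sqladmin/v1"
)

func main() {
	ctx := context.Background()
	svc, err := sqladmin.NewService(ctx) // uses Application Default Credentials
	if err != nil {
		log.Fatal(err)
	}

	req := &sqladmin.InstancesCloneRequest{
		CloneContext: &sqladmin.CloneContext{
			DestinationInstanceName: "cloned-instance",          // required
			PointInTime:             "2025-11-10T12:00:00.000Z", // optional: PITR timestamp
			PreferredZone:           "us-central1-a",            // optional
			PreferredSecondaryZone:  "us-central1-b",            // optional
		},
	}

	// Clone "source-instance" in "my-project" and wait on the returned operation.
	op, err := svc.Instances.Clone("my-project", "source-instance", req).Do()
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("clone operation started: %s", op.Name)
}
```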
Tested:
<img width="1182" height="446" alt="Screenshot 2025-11-11 at 12 21
47 PM"
src="https://github.com/user-attachments/assets/7f39a5a3-3967-43d0-8041-f1d47b4fbcd9"
/>
## PR Checklist
> Thank you for opening a Pull Request! Before submitting your PR, there
are a
> few things you can do to make sure it goes smoothly:
- [x] Make sure you reviewed
[CONTRIBUTING.md](https://github.com/googleapis/genai-toolbox/blob/main/CONTRIBUTING.md)
- [x] Make sure to open an issue as a
[bug/issue](https://github.com/googleapis/genai-toolbox/issues/new/choose)
before writing your code! That way we can discuss the change, evaluate
designs, and agree on the general idea
- [x] Ensure the tests and linter pass
- [x] Code coverage does not decrease (if any source code was changed)
- [x] Appropriate docs were updated (if necessary)
- [ ] Make sure to add `!` if this involves a breaking change
🛠️ Fixes #1915
Co-authored-by: prernakakkar-google <158031829+prernakakkar-google@users.noreply.github.com>
Co-authored-by: Averi Kitsch <akitsch@google.com>
## Description
## PR Checklist
> Thank you for opening a Pull Request! Before submitting your PR, there
are a
> few things you can do to make sure it goes smoothly:
- [ ] Make sure you reviewed
[CONTRIBUTING.md](https://github.com/googleapis/genai-toolbox/blob/main/CONTRIBUTING.md)
- [ ] Make sure to open an issue as a
[bug/issue](https://github.com/googleapis/genai-toolbox/issues/new/choose)
before writing your code! That way we can discuss the change, evaluate
designs, and agree on the general idea
- [ ] Ensure the tests and linter pass
- [ ] Code coverage does not decrease (if any source code was changed)
- [ ] Appropriate docs were updated (if necessary)
- [ ] Make sure to add `!` if this involves a breaking change
🛠️ Fixes #<issue_number_goes_here>
## Description
This pull request resolves issue
[#36](https://github.com/gemini-cli-extensions/sql-server/issues/36) by
introducing default connection parameters for the MSSQL prebuilt tool.
If a user does not specify a host or port, the tool now defaults to `localhost:1433`.
**Changes Implemented**
The modification is in `internal/prebuiltconfigs/tools/mssql.yaml`, where the `host` and `port` fields now include default values: `host: ${MSSQL_HOST:localhost}` and `port: ${MSSQL_PORT:1433}`.
This configuration allows the tool to function using the defaults, but
users can still set the MSSQL_HOST or MSSQL_PORT environment variables
to override them.
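For illustration, a minimal sketch (not the toolbox's actual parser) of how a `${VAR:default}` placeholder can be expanded against the environment:

```go
package main

import (
	"fmt"
	"os"
	"regexp"
)

// placeholder matches ${VAR:default} style references.
var placeholder = regexp.MustCompile(`\$\{([A-Z0-9_]+):([^}]*)\}`)

// expand replaces each placeholder with the value of the environment
// variable, falling back to the default when the variable is unset.
func expand(s string) string {
	return placeholder.ReplaceAllStringFunc(s, func(m string) string {
		parts := placeholder.FindStringSubmatch(m)
		if v, ok := os.LookupEnv(parts[1]); ok {
			return v
		}
		return parts[2]
	})
}

func main() {
	fmt.Println(expand("host: ${MSSQL_HOST:localhost}")) // host: localhost (if MSSQL_HOST is unset)
	fmt.Println(expand("port: ${MSSQL_PORT:1433}"))      // port: 1433
}
```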
**Validation Process**
Validated changes by running the toolbox against a Microsoft SQL Server
instance hosted in a Docker container.
- **Database Setup:** A `testdb` database containing a `products` table was initialized.
- **CLI Configuration:** The `~/.gemini/settings.json` file was updated to point to my local toolbox build.
- **Tool Launch:** The UI was started using `go run . --prebuilt mssql --ui`.
- **Testing:** Confirmed the connection logic by testing two scenarios: one with the environment variables set and one without (to confirm the default logic).
**1.** Default Value Connection: For this test, the MSSQL_HOST and
MSSQL_PORT environment variables were not defined. The application
correctly utilized the new default values (localhost:1433) to connect to
the testdb.
<img width="2175" height="1144" alt="Screenshot 2025-11-12 6 33 05 PM"
src="https://github.com/user-attachments/assets/bacfc9bf-8b35-42e1-ad53-4af3aef27125"
/>
**2.** Explicit Variable Connection: For the second test, MSSQL_HOST and
MSSQL_PORT were set to specific values. The application correctly
prioritized these variables over the new defaults and connected
successfully.
<img width="2175" height="1144" alt="Screenshot 2025-11-12 6 37 02 PM"
src="https://github.com/user-attachments/assets/61254849-211d-41f4-8c7d-ff92cf64a51c"
/>
Result: Both connection methods were verified by executing the
list_tables prebuilt tool via the UI. In each scenario, the tool
successfully retrieved the UserDetails table, confirming the changes.
<img width="2250" height="1144" alt="Screenshot 2025-11-12 6 38 12 PM"
src="https://github.com/user-attachments/assets/87085777-897e-4a74-9e3f-f36cc8a33305"
/>
## PR Checklist
> Thank you for opening a Pull Request! Before submitting your PR, there
are a
> few things you can do to make sure it goes smoothly:
- [ ] Make sure you reviewed
[CONTRIBUTING.md](https://github.com/googleapis/genai-toolbox/blob/main/CONTRIBUTING.md)
- [ ] Make sure to open an issue as a
[bug/issue](https://github.com/googleapis/genai-toolbox/issues/new/choose)
before writing your code! That way we can discuss the change, evaluate
designs, and agree on the general idea
- [ ] Ensure the tests and linter pass
- [ ] Code coverage does not decrease (if any source code was changed)
- [ ] Appropriate docs were updated (if necessary)
- [ ] Make sure to add `!` if this involves a breaking change
🛠️ Fixes
[#36](https://github.com/gemini-cli-extensions/sql-server/issues/36)
## Description
Tool `invoke()` and `RequiresClientAuthorization()` take a new input argument -- the resource manager. The resource manager will be used to retrieve the Source in the next step.
To achieve this, the PR implements the following:
* Moves the resource manager from the server package into a new package to prevent an import cycle (between server and mcp).
* Adds a new interface in `tools.go` to prevent an import cycle (between the resources and tools packages); see the sketch below.
* Adds the new input argument to all tools.
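A self-contained sketch of the interface technique referenced above; the type and method names are hypothetical, not the toolbox's actual API. The tools side depends only on a small interface, so it never imports the package that implements the resource manager.

```go
package main

import "fmt"

// SourceProvider is a hypothetical stand-in for the small interface added
// in tools.go: tools depend only on this interface, breaking the import
// cycle with the package that implements the resource manager.
type SourceProvider interface {
	GetSource(name string) (any, bool)
}

// resourceManager conceptually lives in the resources package; it satisfies
// SourceProvider without the tools side importing it.
type resourceManager struct {
	sources map[string]any
}

func (r *resourceManager) GetSource(name string) (any, bool) {
	s, ok := r.sources[name]
	return s, ok
}

// invokeTool mimics a tool Invoke that now takes the resource manager as an
// argument (the new parameter described in this PR).
func invokeTool(rm SourceProvider, sourceName string) error {
	if _, ok := rm.GetSource(sourceName); !ok {
		return fmt.Errorf("source %q not configured", sourceName)
	}
	return nil
}

func main() {
	rm := &resourceManager{sources: map[string]any{"my-pg": struct{}{}}}
	fmt.Println(invokeTool(rm, "my-pg"))   // <nil>
	fmt.Println(invokeTool(rm, "missing")) // error
}
```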
## PR Checklist
> Thank you for opening a Pull Request! Before submitting your PR, there
are a
> few things you can do to make sure it goes smoothly:
- [x] Make sure you reviewed
[CONTRIBUTING.md](https://github.com/googleapis/genai-toolbox/blob/main/CONTRIBUTING.md)
- [x] Make sure to open an issue as a
[bug/issue](https://github.com/googleapis/genai-toolbox/issues/new/choose)
before writing your code! That way we can discuss the change, evaluate
designs, and agree on the general idea
- [x] Ensure the tests and linter pass
- [x] Code coverage does not decrease (if any source code was changed)
- [x] Appropriate docs were updated (if necessary)
- [x] Make sure to add `!` if this involves a breaking change
## Description
Adds the import statement for firebirdsql. Our integration test runs on Linux and did not catch this error. It seems the Windows build surfaces the missing import: `unable to create pool: unable to create connection pool: sql: unknown driver \"firebirdsql\" (forgotten import?)`
## PR Checklist
> Thank you for opening a Pull Request! Before submitting your PR, there
are a
> few things you can do to make sure it goes smoothly:
- [x] Make sure you reviewed
[CONTRIBUTING.md](https://github.com/googleapis/genai-toolbox/blob/main/CONTRIBUTING.md)
- [x] Make sure to open an issue as a
[bug/issue](https://github.com/googleapis/genai-toolbox/issues/new/choose)
before writing your code! That way we can discuss the change, evaluate
designs, and agree on the general idea
- [x] Ensure the tests and linter pass
- [x] Code coverage does not decrease (if any source code was changed)
- [x] Appropriate docs were updated (if necessary)
- [x] Make sure to add `!` if this involves a breaking change
🛠️ Fixes #2014
## Description
Updates the error log to include more details when initialization fails:
* When a toolset name is not valid, show the toolset name (similar to promptset).
* When a tool does not exist, show the name of the missing tool (similar to prompt).
## PR Checklist
> Thank you for opening a Pull Request! Before submitting your PR, there
are a
> few things you can do to make sure it goes smoothly:
- [x] Make sure you reviewed
[CONTRIBUTING.md](https://github.com/googleapis/genai-toolbox/blob/main/CONTRIBUTING.md)
- [x] Make sure to open an issue as a
[bug/issue](https://github.com/googleapis/genai-toolbox/issues/new/choose)
before writing your code! That way we can discuss the change, evaluate
designs, and agree on the general idea
- [x] Ensure the tests and linter pass
- [x] Code coverage does not decrease (if any source code was changed)
- [x] Appropriate docs were updated (if necessary)
- [x] Make sure to add `!` if this involves a breaking change
🛠️ Fixes #1579
Supports an `allowed-origins` flag to allow secure deployment of Toolbox. Currently Toolbox is **insecure by default**, allowing all origins (`*`). This PR also updates the Cloud Run, Kubernetes, and Docker deployment docs to note the new `allowed-origins` flag.
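For illustration, a minimal sketch (not the toolbox's actual middleware or flag wiring) of rejecting requests whose `Origin` header is not in the allow-list:

```go
package main

import (
	"log"
	"net/http"
	"slices"
)

// allowOrigins only serves cross-origin requests whose Origin header is in
// the allow-list; an illustrative sketch of the behavior behind an
// --allowed-origins flag, not the toolbox's implementation.
func allowOrigins(next http.Handler, allowed []string) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		origin := r.Header.Get("Origin")
		if origin != "" && !slices.Contains(allowed, origin) {
			http.Error(w, "origin not allowed", http.StatusForbidden)
			return
		}
		if origin != "" {
			w.Header().Set("Access-Control-Allow-Origin", origin)
		}
		next.ServeHTTP(w, r)
	})
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/mcp", func(w http.ResponseWriter, _ *http.Request) {
		w.Write([]byte("ok"))
	})
	log.Fatal(http.ListenAndServe(":5000", allowOrigins(mux, []string{"http://localhost:8000"})))
}
```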
This PR was tested manually by mocking browser access:
1. Created an HTML file with a JavaScript `fetch` call, named `malicious-client.html`:
```
<!DOCTYPE html>
<html>
<head>
<title>Malicious CORS Test</title>
</head>
<body>
<h1>Attempting to access API at http://127.0.0.1:5000/mcp</h1>
<p>Check the **Chrome Developer Console** (F12 -> Console tab) for the result.</p>
<script>
fetch('http://127.0.0.1:5000/mcp', {
method: 'POST',
headers: {
'Content-Type': 'application/json',
// The browser automatically adds the 'Origin' header based on where this HTML is served from (http://localhost:8000)
},
body: JSON.stringify({
"jsonrpc": "2.0",
"id": 1,
"method": "tools/list"
})
})
.then(response => {
console.log('Success (but check console for CORS enforcement details):', response);
return response.json();
})
.then(data => console.log('Data received (only if CORS passes):', data))
.catch(error => console.error('Fetch Error:', error));
</script>
</body>
</html>
```
2. Ran `python3 -m http.server 8000`.
3. Opened `http://localhost:8000/malicious-client.html` in a browser.
4. Tried without the `--allowed-origins` flag -- success.
Tried with `--allowed-origins=http://localhost:8000` -- success.
Tried with `--allowed-origins=http://foo.com` -- unsuccessful.
---------
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Co-authored-by: Averi Kitsch <akitsch@google.com>
## Description
The MCP spec supports tool annotations with the structure below in the 2025-06-18 version of the spec.
https://modelcontextprotocol.io/specification/2025-06-18/schema#toolannotations
```
{
destructiveHint?: boolean;
idempotentHint?: boolean;
openWorldHint?: boolean;
readOnlyHint?: boolean;
}
```
Added a ToolAnnotations structure, an Annotations member to the
McpManifest structure, and a nil initializer for the Annotations member
to all calls to GetMcpManifest.
The ToolAnnotations structure and its annotation members are all defined as pointers so that they are omitted when not set. The zero value is sometimes meaningful, so this was the only way to ensure we distinguish between not setting an annotation and setting it to its zero value.
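A minimal sketch of the pointer-plus-`omitempty` approach described above; the struct definitions follow the MCP schema but are assumptions about the toolbox's actual Go types.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// ToolAnnotations mirrors the MCP 2025-06-18 tool annotations. Pointer fields
// plus omitempty distinguish "not set" (nil, omitted from JSON) from an
// explicit false/zero value.
type ToolAnnotations struct {
	DestructiveHint *bool `json:"destructiveHint,omitempty"`
	IdempotentHint  *bool `json:"idempotentHint,omitempty"`
	OpenWorldHint   *bool `json:"openWorldHint,omitempty"`
	ReadOnlyHint    *bool `json:"readOnlyHint,omitempty"`
}

// McpManifest sketch: Annotations itself is a pointer so the whole block is
// omitted when no annotations are set.
type McpManifest struct {
	Name        string           `json:"name"`
	Annotations *ToolAnnotations `json:"annotations,omitempty"`
}

func main() {
	f := false
	unset, _ := json.Marshal(McpManifest{Name: "query_tool"})
	set, _ := json.Marshal(McpManifest{Name: "query_tool", Annotations: &ToolAnnotations{ReadOnlyHint: &f}})
	fmt.Println(string(unset)) // {"name":"query_tool"}
	fmt.Println(string(set))   // {"name":"query_tool","annotations":{"readOnlyHint":false}}
}
```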
## PR Checklist
> Thank you for opening a Pull Request! Before submitting your PR, there
are a
> few things you can do to make sure it goes smoothly:
- [x] Make sure you reviewed
[CONTRIBUTING.md](https://github.com/googleapis/genai-toolbox/blob/main/CONTRIBUTING.md)
- [x] Make sure to open an issue as a
[bug/issue](https://github.com/googleapis/genai-toolbox/issues/new/choose)
before writing your code! That way we can discuss the change, evaluate
designs, and agree on the general idea
- [x] Ensure the tests and linter pass
- [x] Code coverage does not decrease (if any source code was changed)
- [x] Appropriate docs were updated (if necessary)
- [x] Make sure to add `!` if this involves a breaking change
🛠️ Fixes #927
---------
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Reopening PR from https://github.com/googleapis/genai-toolbox/pull/2019
but on `main`.
## Description
This PR introduces a new How-to guide to deploy ADK Agent to Google
Cloud.
Following the updates to the ADK with Toolbox Local Quickstart (in
https://github.com/googleapis/genai-toolbox/pull/1962), this guide
provides the necessary steps to take a locally developed ADK agent and
deploy it to a production-like cloud environment.
The new guide covers the following workflow:
* Instructions (via link) to deploy the Toolbox server to Cloud Run.
* Using `uvx agent-starter-pack enhance` to scaffold deployment
configuration and adding `toolbox-core` as a dependency.
* Updating the agent code to connect to the remote Cloud Run URL.
* Running `make backend` to deploy the agent to Vertex AI Agent Engine.
* Verifying the deployment using the Agent Engine Playground.
This completes the user journey from local development to a fully
deployed architecture on Google Cloud.
🛠️ Addresses https://github.com/googleapis/genai-toolbox/issues/1705
## Description
## PR Checklist
> Thank you for opening a Pull Request! Before submitting your PR, there
are a
> few things you can do to make sure it goes smoothly:
- [ ] Make sure you reviewed
[CONTRIBUTING.md](https://github.com/googleapis/genai-toolbox/blob/main/CONTRIBUTING.md)
- [ ] Make sure to open an issue as a
[bug/issue](https://github.com/googleapis/genai-toolbox/issues/new/choose)
before writing your code! That way we can discuss the change, evaluate
designs, and agree on the general idea
- [ ] Ensure the tests and linter pass
- [ ] Code coverage does not decrease (if any source code was changed)
- [ ] Appropriate docs were updated (if necessary)
- [ ] Make sure to add `!` if this involves a breaking change
🛠️ Fixes #<issue_number_goes_here>
Bumps [golang.org/x/crypto](https://github.com/golang/crypto) from
0.43.0 to 0.45.0.
<details>
<summary>Commits</summary>
<ul>
<li><a
href="4e0068c009"><code>4e0068c</code></a>
go.mod: update golang.org/x dependencies</li>
<li><a
href="e79546e28b"><code>e79546e</code></a>
ssh: curb GSSAPI DoS risk by limiting number of specified OIDs</li>
<li><a
href="f91f7a7c31"><code>f91f7a7</code></a>
ssh/agent: prevent panic on malformed constraint</li>
<li><a
href="2df4153a03"><code>2df4153</code></a>
acme/autocert: let automatic renewal work with short lifetime certs</li>
<li><a
href="bcf6a849ef"><code>bcf6a84</code></a>
acme: pass context to request</li>
<li><a
href="b4f2b62076"><code>b4f2b62</code></a>
ssh: fix error message on unsupported cipher</li>
<li><a
href="79ec3a51fc"><code>79ec3a5</code></a>
ssh: allow to bind to a hostname in remote forwarding</li>
<li><a
href="122a78f140"><code>122a78f</code></a>
go.mod: update golang.org/x dependencies</li>
<li><a
href="c0531f9c34"><code>c0531f9</code></a>
all: eliminate vet diagnostics</li>
<li><a
href="0997000b45"><code>0997000</code></a>
all: fix some comments</li>
<li>Additional commits viewable in <a
href="https://github.com/golang/crypto/compare/v0.43.0...v0.45.0">compare
view</a></li>
</ul>
</details>
<br />
[Dependabot compatibility score](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the
[Security Alerts
page](https://github.com/googleapis/genai-toolbox/network/alerts).
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Yuan Teoh <45984206+Yuan325@users.noreply.github.com>
Bumps [golang.org/x/crypto](https://github.com/golang/crypto) from
0.43.0 to 0.45.0.
<details>
<summary>Commits</summary>
<ul>
<li><a
href="4e0068c009"><code>4e0068c</code></a>
go.mod: update golang.org/x dependencies</li>
<li><a
href="e79546e28b"><code>e79546e</code></a>
ssh: curb GSSAPI DoS risk by limiting number of specified OIDs</li>
<li><a
href="f91f7a7c31"><code>f91f7a7</code></a>
ssh/agent: prevent panic on malformed constraint</li>
<li><a
href="2df4153a03"><code>2df4153</code></a>
acme/autocert: let automatic renewal work with short lifetime certs</li>
<li><a
href="bcf6a849ef"><code>bcf6a84</code></a>
acme: pass context to request</li>
<li><a
href="b4f2b62076"><code>b4f2b62</code></a>
ssh: fix error message on unsupported cipher</li>
<li><a
href="79ec3a51fc"><code>79ec3a5</code></a>
ssh: allow to bind to a hostname in remote forwarding</li>
<li><a
href="122a78f140"><code>122a78f</code></a>
go.mod: update golang.org/x dependencies</li>
<li><a
href="c0531f9c34"><code>c0531f9</code></a>
all: eliminate vet diagnostics</li>
<li><a
href="0997000b45"><code>0997000</code></a>
all: fix some comments</li>
<li>Additional commits viewable in <a
href="https://github.com/golang/crypto/compare/v0.43.0...v0.45.0">compare
view</a></li>
</ul>
</details>
<br />
[Dependabot compatibility score](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the
[Security Alerts
page](https://github.com/googleapis/genai-toolbox/network/alerts).
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Yuan Teoh <45984206+Yuan325@users.noreply.github.com>
## Description
## PR Checklist
> Thank you for opening a Pull Request! Before submitting your PR, there
are a
> few things you can do to make sure it goes smoothly:
- [ ] Make sure you reviewed
[CONTRIBUTING.md](https://github.com/googleapis/genai-toolbox/blob/main/CONTRIBUTING.md)
- [ ] Make sure to open an issue as a
[bug/issue](https://github.com/googleapis/genai-toolbox/issues/new/choose)
before writing your code! That way we can discuss the change, evaluate
designs, and agree on the general idea
- [ ] Ensure the tests and linter pass
- [ ] Code coverage does not decrease (if any source code was changed)
- [ ] Appropriate docs were updated (if necessary)
- [ ] Make sure to add `!` if this involves a breaking change
🛠️ Fixes #<issue_number_goes_here>
## Description
## PR Checklist
> Thank you for opening a Pull Request! Before submitting your PR, there
are a
> few things you can do to make sure it goes smoothly:
- [ ] Make sure you reviewed
[CONTRIBUTING.md](https://github.com/googleapis/genai-toolbox/blob/main/CONTRIBUTING.md)
- [ ] Make sure to open an issue as a
[bug/issue](https://github.com/googleapis/genai-toolbox/issues/new/choose)
before writing your code! That way we can discuss the change, evaluate
designs, and agree on the general idea
- [ ] Ensure the tests and linter pass
- [ ] Code coverage does not decrease (if any source code was changed)
- [ ] Appropriate docs were updated (if necessary)
- [ ] Make sure to add `!` if this involves a breaking change
🛠️ Fixes #<issue_number_goes_here>
---------
Co-authored-by: prernakakkar-google <158031829+prernakakkar-google@users.noreply.github.com>
## Description
Spanner List Graphs tool; similar to list tables, it can be used to get details for all graphs or a specific graph.
## PR Checklist
- [x] Make sure you reviewed
[CONTRIBUTING.md](https://github.com/googleapis/genai-toolbox/blob/main/CONTRIBUTING.md)
- [x] Make sure to open an issue as a
[bug/issue](https://github.com/googleapis/genai-toolbox/issues/new/choose)
before writing your code! That way we can discuss the change, evaluate
designs, and agree on the general idea
- [x] Ensure the tests and linter pass
- [x] Code coverage does not decrease (if any source code was changed)
- [x] Appropriate docs were updated (if necessary)
- [x] Make sure to add `!` if this involves a breaking change
🛠️ Fixes #1916
---------
Co-authored-by: Averi Kitsch <akitsch@google.com>
Refactors the Python example to use an async main function: `ToolboxClient` is an async client and must be called from an `async` function, so this PR wraps the call in an async `main` function.
## Description
## PR Checklist
> Thank you for opening a Pull Request! Before submitting your PR, there
are a
> few things you can do to make sure it goes smoothly:
- [x] Make sure you reviewed
[CONTRIBUTING.md](https://github.com/googleapis/genai-toolbox/blob/main/CONTRIBUTING.md)
- [ ] Make sure to open an issue as a
[bug/issue](https://github.com/googleapis/genai-toolbox/issues/new/choose)
before writing your code! That way we can discuss the change, evaluate
designs, and agree on the general idea
- [x] Ensure the tests and linter pass
- [ ] Code coverage does not decrease (if any source code was changed)
- [ ] Appropriate docs were updated (if necessary)
- [ ] Make sure to add `!` if this involves a breaking change
🛠️ Fixes #<issue_number_goes_here>
## Description
Implements the `postgres-upgrade-precheck` tool to let users validate instance readiness for major version upgrades of Cloud SQL for PostgreSQL.
This includes the tool implementation, unit tests for YAML parsing,
integration tests for tool invocation, and documentation. The tool is
also added to the CloudSQL PostgreSQL prebuilt set.
TEST output:
<img width="3406" height="1646" alt="image"
src="https://github.com/user-attachments/assets/6abaa535-285d-4645-9dd3-7ebcd447d448"
/>
<img width="3532" height="1490" alt="image"
src="https://github.com/user-attachments/assets/4d512af1-51fd-4187-b80f-be13198aba68"
/>
## PR Checklist
> Thank you for opening a Pull Request! Before submitting your PR, there
are a
> few things you can do to make sure it goes smoothly:
- [x] Make sure you reviewed
[CONTRIBUTING.md](https://github.com/googleapis/genai-toolbox/blob/main/CONTRIBUTING.md)
- [x] Make sure to open an issue as a
[bug/issue](https://github.com/googleapis/genai-toolbox/issues/new/choose)
before writing your code! That way we can discuss the change, evaluate
designs, and agree on the general idea
- [x] Ensure the tests and linter pass
- [x] Code coverage does not decrease (if any source code was changed)
- [x] Appropriate docs were updated (if necessary)
- [x] Make sure to add `!` if this involves a breaking change
🛠️ Fixes #1721
---------
Co-authored-by: Averi Kitsch <akitsch@google.com>
## Description
This PR updates the installation guides and documentation to reflect
that Python 3.9 is no longer supported. Users are now instructed to
install Python 3.10+.
## Context
This is a follow-up to
https://github.com/googleapis/mcp-toolbox-sdk-python/pull/422, which
officially removed support for Python 3.9 from the Python SDKs codebase.
This change ensures the documentation aligns with the current package
requirements.
- Support CGO cross-compilation for multiple architectures using Zig.
- Download and link the MacOSX SDK as needed for macOS cross-compilation. There is no official standalone release of the MacOSX SDK, so it is downloaded from a third-party repo.
- Update the Dockerfile from `gcr.io/distroless/static:nonroot` to `gcr.io/distroless/cc-debian12:nonroot` for the C libraries needed for dynamic linking.
This PR updates the documentation so that the installation commands are compatible with the standard Windows Command Prompt (`cmd.exe`). Previously they targeted PowerShell, which many Windows users may not use.
This PR contains the following updates:
| Package | Change | Age | Confidence |
|---|---|---|---|
| [google-adk](https://redirect.github.com/google/adk-python) ([changelog](https://redirect.github.com/google/adk-python/blob/main/CHANGELOG.md)) | `==1.18.0` -> `==1.19.0` | [age](https://docs.renovatebot.com/merge-confidence/) | [confidence](https://docs.renovatebot.com/merge-confidence/) |
---
> [!WARNING]
> Some dependencies could not be looked up. Check the Dependency
Dashboard for more information.
---
### Release Notes
<details>
<summary>google/adk-python (google-adk)</summary>
###
[`v1.19.0`](https://redirect.github.com/google/adk-python/blob/HEAD/CHANGELOG.md#1190-2025-11-19)
[Compare
Source](https://redirect.github.com/google/adk-python/compare/v1.18.0...v1.19.0)
##### Features
- **\[Core]**
- Add `id` and `custom_metadata` fields to `MemoryEntry`
([4dd28a3](4dd28a3970))
- Add progressive SSE streaming feature
([a5ac1d5](a5ac1d5e14))
- Add a2a\_request\_meta\_provider to RemoteAgent init
([d12468e](d12468ee5a))
- Add feature decorator for the feature registry system
([871da73](871da731f1))
- Breaking: Raise minimum Python version to 3\_10
([8402832](840283228e))
- Refactor and rename BigQuery agent analytics plugin
([6b14f88](6b14f88726))
- Pass custom\_metadata through forwarding artifact service
([c642f13](c642f13f21))
- Update save\_files\_as\_artifacts\_plugin to never keep inline data
([857de04](857de04deb))
- **\[Evals]**
- Add support for InOrder and AnyOrder match in ToolTrajectoryAvgScore
Metric
([e2d3b2d](e2d3b2d862))
- **\[Integrations]**
- Enhance BQ Plugin Schema, Error Handling, and Logging
([5ac5129](5ac5129fb0))
- Schema Enhancements with Descriptions, Partitioning, and Truncation
Indicator
([7c993b0](7c993b01d1))
- **\[Services]**
- Add file-backed artifact service
([99ca6aa](99ca6aa6e6))
- Add service factory for configurable session and artifact backends
([a12ae81](a12ae812d3))
- Add SqliteSessionService and a migration script to migrate existing DB
using DatabaseSessionService to SqliteSessionService
([e218254](e218254495))
- Add transcription fields to session events
([3ad30a5](3ad30a58f9))
- Full async implementation of DatabaseSessionService
([7495941](74959414d8))
- **\[Models]**
- Add experimental feature to use `parameters_json_schema` and
`response_json_schema` for McpTool
([1dd97f5](1dd97f5b45))
- Add support for parsing inline JSON tool calls in LiteLLM responses
([22eb7e5](22eb7e5b06))
- Expose artifact URLs to the model when available
([e3caf79](e3caf79139))
- **\[Tools]**
- Add BigQuery related label handling
([ffbab4c](ffbab4cf4e))
- Allow setting max\_billed\_bytes in BigQuery tools config
([ffbb0b3](ffbb0b37e1))
- Propagate `application_name` set for the BigQuery Tools as BigQuery
job labels
([f13a11e](f13a11e1dc))
- Set per-tool user agent in BQ calls and tool label in BQ jobs
([c0be1df](c0be1df052))
- **\[Observability]**
- Migrate BigQuery logging to Storage Write API
([a2ce34a](a2ce34a0b9))
##### Bug Fixes
- Add `jsonschema` dependency for Agent Builder config validation
([0fa7e46](0fa7e4619d))
- Add None check for `event` in `remote_a2a_agent.py`
([744f94f](744f94f0c8))
- Add vertexai initialization for code being deployed to AgentEngine
([b8e4aed](b8e4aedfbf))
- Change LiteLLM content and tool parameter handling
([a19be12](a19be12c1f))
- Change name for builder agent
([131d39c](131d39c3db))
- Ensure event compaction completes by awaiting task
([b5f5df9](b5f5df9fa8))
- Fix deploy to cloud run on Windows
([29fea7e](29fea7ec1f))
- Fix error handling when MCP server is unreachable
([ee8106b](ee8106be77))
- Fix error when query job destination is None
([0ccc43c](0ccc43cf49))
- Fix Improve logic for checking if a MCP session is disconnected
([a754c96](a754c96d3c))
- Fix McpToolset crashing with anyio.BrokenResourceError
([8e0648d](8e0648df23))
- Fix Safely handle `FunctionDeclaration` without a `required` attribute
([93aad61](93aad61198))
- Fix status code in error message in RestApiTool
([9b75456](9b754564b3))
- Fix Use `async for` to loop through event iterator to get all events
in vertex\_ai\_session\_service
([9211f4c](9211f4ce8c))
- Fix: Fixes DeprecationWarning when using send method
([2882995](2882995289))
- Improve logic for checking if a MCP session is disconnected
([a48a1a9](a48a1a9e88))
- Improve handling of partial and complete transcriptions in live calls
([1819ecb](1819ecb4b8))
- Keep vertex session event after the session update time
([0ec0195](0ec01956e8))
- Let part converters also return multiple parts so they can support
more usecases
([824ab07](824ab07212))
- Load agent/app before creating session
([236f562](236f562cd2))
- Remove app name from FileArtifactService directory structure
([12db84f](12db84f5cd))
- Remove hardcoded `google-cloud-aiplatform` version in agent engine
requirements
([e15e19d](e15e19da05))
- Stop updating write mode in the global settings during tool execution
([5adbf95](5adbf95a0a))
- Update description for `load_artifacts` tool
([c485889](c4858896ff))
##### Improvements
- Add BigQuery related label handling
([ffbab4c](ffbab4cf4e))
- Add demo for rewind
([8eb1bdb](8eb1bdbc58))
- Add debug logging for live connection
([5d5708b](5d5708b2ab))
- Add debug logging for missing function call events
([f3d6fcf](f3d6fcf444))
- Add default retry options as fall back to llm\_request that are made
during evals
([696852a](696852a280))
- Add plugin for returning GenAI Parts from tools into the model request
([116b26c](116b26c33e))
- Add support for abstract types in AFC
([2efc184](2efc184a46))
- Add support for structured output schemas in LiteLLM models
([7ea4aed](7ea4aed35b))
- Add tests for `max_query_result_rows` in BigQuery tool config
([fd33610](fd33610e96))
- Add type hints in `cleanup_unused_files.py`
([2dea573](2dea5733b7))
- Add util to build our llms.txt and llms-full.txt files
- ADK changes
([f1f4467](f1f44675e4))
- Defer import of `google.cloud.storage` in `GCSArtifactService`
([999af55](999af55880))
- Defer import of `live`, `Client` and `_transformers` in `google.genai`
([22c6dbe](22c6dbe83c))
- Enhance the messaging with possible fixes for RESOURCE\_EXHAUSTED
errors from Gemini
([b2c45f8](b2c45f8d91))
- Improve gepa tau-bench colab for external use
([e02f177](e02f177790))
- Improve gepa voter agent demo colab
([d118479](d118479ccf))
- Lazy import DatabaseSessionService in the adk/sessions/ module
([5f05749](5f057498a2))
- Move adk\_agent\_builder\_assistant to built\_in\_agents
([b2b7f2d](b2b7f2d6aa))
- Plumb memory service from LocalEvalService to EvaluationGenerator
([dc3f60c](dc3f60cc93))
- Removes the unrealistic todo comment of visibility management
([e511eb1](e511eb1f70))
- Returns agent state regardless if ctx.is\_resumable
([d6b928b](d6b928bdf7))
- Stop logging the full content of LLM blobs
([0826755](082675546f))
- Update ADK web to match main branch
([14e3802](14e3802643))
- Update agent instructions and retry limit in
`plugin_reflect_tool_retry` sample
([01bac62](01bac62f0c))
- Update conformance test CLI to handle long-running tool calls
([dd706bd](dd706bdc45))
- Update Gemini Live model names in live bidi streaming sample
([aa77834](aa77834e2e))
</details>
---
### Configuration
📅 **Schedule**: Branch creation - At any time (no schedule defined),
Automerge - At any time (no schedule defined).
🚦 **Automerge**: Disabled by config. Please merge this manually once you
are satisfied.
♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the
rebase/retry checkbox.
🔕 **Ignore**: Close this PR and you won't be reminded about this update
again.
---
- [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check
this box
---
This PR was generated by [Mend Renovate](https://mend.io/renovate/).
View the [repository job
log](https://developer.mend.io/github/googleapis/genai-toolbox).
Co-authored-by: Harsh Jha <83023263+rapid-killer-9@users.noreply.github.com>
## Description
This PR deletes an older GitHub Actions workflow which deploys the
in-development docs to the `gh-pages` branch. This workflow is not
needed going forward, as we have switched to versioned documentation in
the `versioned-gh-pages` branch.
## PR Checklist
> Thank you for opening a Pull Request! Before submitting your PR, there
are a
> few things you can do to make sure it goes smoothly:
- [x] Make sure you reviewed
[CONTRIBUTING.md](https://github.com/googleapis/genai-toolbox/blob/main/CONTRIBUTING.md)
- [ ] Make sure to open an issue as a
[bug/issue](https://github.com/googleapis/genai-toolbox/issues/new/choose)
before writing your code! That way we can discuss the change, evaluate
designs, and agree on the general idea
- [x] Ensure the tests and linter pass
- [x] Code coverage does not decrease (if any source code was changed)
- [x] Appropriate docs were updated (if necessary)
- [x] Make sure to add `!` if this involve a breaking change
🛠️ Fixes #<issue_number_goes_here>
## Description
> Should include a concise description of the changes (bug or feature), its
> impact, along with a summary of the solution
## PR Checklist
> Thank you for opening a Pull Request! Before submitting your PR, there
are a
> few things you can do to make sure it goes smoothly:
- [ ] Make sure you reviewed
[CONTRIBUTING.md](https://github.com/googleapis/genai-toolbox/blob/main/CONTRIBUTING.md)
- [ ] Make sure to open an issue as a
[bug/issue](https://github.com/googleapis/genai-toolbox/issues/new/choose)
before writing your code! That way we can discuss the change, evaluate
designs, and agree on the general idea
- [ ] Ensure the tests and linter pass
- [ ] Code coverage does not decrease (if any source code was changed)
- [ ] Appropriate docs were updated (if necessary)
- [ ] Make sure to add `!` if this involve a breaking change
🛠️ Fixes #<issue_number_goes_here>
---------
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
This PR contains the following updates:
| Package | Type | Update | Change |
|---|---|---|---|
| [go](https://go.dev/) ([source](https://redirect.github.com/golang/go)) | toolchain | patch | `1.25.2` -> `1.25.3` |
---
### Release Notes
<details>
<summary>golang/go (go)</summary>
###
[`v1.25.3`](https://redirect.github.com/golang/go/compare/go1.25.2...go1.25.3)
</details>
---
### Configuration
📅 **Schedule**: Branch creation - At any time (no schedule defined),
Automerge - At any time (no schedule defined).
🚦 **Automerge**: Disabled by config. Please merge this manually once you
are satisfied.
♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the
rebase/retry checkbox.
🔕 **Ignore**: Close this PR and you won't be reminded about this update
again.
---
- [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check
this box
---
This PR was generated by [Mend Renovate](https://mend.io/renovate/).
View the [repository job
log](https://developer.mend.io/github/googleapis/genai-toolbox).
Co-authored-by: Wenxin Du <117315983+duwenxin99@users.noreply.github.com>
## Description
> Should include a concise description of the changes (bug or feature), its
> impact, along with a summary of the solution
## PR Checklist
> Thank you for opening a Pull Request! Before submitting your PR, there
are a
> few things you can do to make sure it goes smoothly:
- [ ] Make sure you reviewed
[CONTRIBUTING.md](https://github.com/googleapis/genai-toolbox/blob/main/CONTRIBUTING.md)
- [ ] Make sure to open an issue as a
[bug/issue](https://github.com/googleapis/genai-toolbox/issues/new/choose)
before writing your code! That way we can discuss the change, evaluate
designs, and agree on the general idea
- [ ] Ensure the tests and linter pass
- [ ] Code coverage does not decrease (if any source code was changed)
- [ ] Appropriate docs were updated (if necessary)
- [ ] Make sure to add `!` if this involve a breaking change
🛠️ Fixes #<issue_number_goes_here>
---------
## Description
This commit allows a tool to pull an alternate authorization
token from a header of the HTTP request.
This is initially being built for the Looker integration. Looker
uses its own OAuth token. When deploying MCP Toolbox to Cloud
Run, the default token in the "Authorization" header is for
authentication with Cloud Run. An alternate token can be put into
another header by a client such as ADK or any other client that
can programmatically set HTTP headers. This token will be used
to authenticate with Looker.
If needed, other sources can use this by setting the header name
in the source config, passing it into the tool config, and returning
the header name in the tool's `GetAuthTokenHeaderName()` function.
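For illustration only, here is a minimal Go sketch of a client attaching such an alternate token; the header name `X-Looker-Token` and the URL are purely hypothetical placeholders, not values defined by this PR.

```go
package main

import (
	"fmt"
	"net/http"
)

// addAltAuthHeader attaches an alternate auth token under a custom header,
// leaving the standard "Authorization" header free for Cloud Run auth.
// The header name would come from the source config; "X-Looker-Token" is
// only a placeholder for this sketch.
func addAltAuthHeader(req *http.Request, headerName, token string) {
	req.Header.Set(headerName, token)
}

func main() {
	req, err := http.NewRequest("GET", "https://toolbox.example.com/api/tool/run", nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Authorization", "Bearer <cloud-run-id-token>")  // Cloud Run auth
	addAltAuthHeader(req, "X-Looker-Token", "<looker-oauth-token>") // Looker auth
	fmt.Println(req.Header)
}
```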
## PR Checklist
> Thank you for opening a Pull Request! Before submitting your PR, there
are a
> few things you can do to make sure it goes smoothly:
- [x] Make sure you reviewed
[CONTRIBUTING.md](https://github.com/googleapis/genai-toolbox/blob/main/CONTRIBUTING.md)
- [x] Make sure to open an issue as a
[bug/issue](https://github.com/googleapis/genai-toolbox/issues/new/choose)
before writing your code! That way we can discuss the change, evaluate
designs, and agree on the general idea
- [x] Ensure the tests and linter pass
- [x] Code coverage does not decrease (if any source code was changed)
- [x] Appropriate docs were updated (if necessary)
- [x] Make sure to add `!` if this involve a breaking change
🛠️ Fixes #1540
## Description
A previous merge conflict caused parameter duplications.
## PR Checklist
> Thank you for opening a Pull Request! Before submitting your PR, there
are a
> few things you can do to make sure it goes smoothly:
- [x] Make sure you reviewed
[CONTRIBUTING.md](https://github.com/googleapis/genai-toolbox/blob/main/CONTRIBUTING.md)
- [x] Make sure to open an issue as a
[bug/issue](https://github.com/googleapis/genai-toolbox/issues/new/choose)
before writing your code! That way we can discuss the change, evaluate
designs, and agree on the general idea
- [x] Ensure the tests and linter pass
- [x] Code coverage does not decrease (if any source code was changed)
- [x] Appropriate docs were updated (if necessary)
- [x] Make sure to add `!` if this involve a breaking change
🛠️ Fixes https://github.com/gemini-cli-extensions/alloydb/issues/97
Bumps [glob](https://github.com/isaacs/node-glob) from 10.4.5 to 10.5.0.
<details>
<summary>Commits</summary>
<ul>
<li><a
href="56774ef73b"><code>56774ef</code></a>
10.5.0</li>
<li><a
href="1e4e297342"><code>1e4e297</code></a>
bin: Do not expose filenames to shell expansion</li>
<li>See full diff in <a
href="https://github.com/isaacs/node-glob/compare/v10.4.5...v10.5.0">compare
view</a></li>
</ul>
</details>
<br />
[Dependabot compatibility score](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the
[Security Alerts
page](https://github.com/googleapis/genai-toolbox/network/alerts).
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Anmol Shukla <shuklaanmol@google.com>
Co-authored-by: Twisha Bansal <58483338+twishabansal@users.noreply.github.com>
## Description
> Should include a concise description of the changes (bug or feature), its
> impact, along with a summary of the solution
## PR Checklist
> Thank you for opening a Pull Request! Before submitting your PR, there
are a
> few things you can do to make sure it goes smoothly:
- [x] Make sure you reviewed
[CONTRIBUTING.md](https://github.com/googleapis/genai-toolbox/blob/main/CONTRIBUTING.md)
- [ ] Make sure to open an issue as a
[bug/issue](https://github.com/googleapis/genai-toolbox/issues/new/choose)
before writing your code! That way we can discuss the change, evaluate
designs, and agree on the general idea
- [x] Ensure the tests and linter pass
- [ ] Code coverage does not decrease (if any source code was changed)
- [ ] Appropriate docs were updated (if necessary)
- [ ] Make sure to add `!` if this involve a breaking change
🛠️ Fixes #<issue_number_goes_here>
To keep a persistent backend storage for configuration, we will have to
keep a single source of truth. This involves supporting bi-directional
conversion for Prompt and Promptsets.
This PR makes the following changes (a minimal sketch of the pattern follows this list):
* Separate Prompt from PromptConfig
* Embed Config in Resources (for both prompt and promptset)
* Add `ToConfig()` to extract Config from Resources (for both prompt and
promptset)
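A rough Go sketch of the embed-plus-`ToConfig()` pattern described above; the `PromptConfig` fields here are hypothetical placeholders, not the actual toolbox types.

```go
package main

import "fmt"

// PromptConfig is the serializable configuration; the field names are
// hypothetical and only illustrate the pattern.
type PromptConfig struct {
	Name        string
	Description string
	Template    string
}

// Prompt embeds its config so the runtime resource and the stored
// configuration share a single source of truth.
type Prompt struct {
	PromptConfig
	// runtime-only state would live here
}

// ToConfig extracts the configuration back out of the resource, enabling
// bi-directional conversion between resource and persisted config.
func (p *Prompt) ToConfig() PromptConfig {
	return p.PromptConfig
}

func main() {
	p := &Prompt{PromptConfig: PromptConfig{Name: "greet", Template: "Hello, {{name}}!"}}
	cfg := p.ToConfig()
	fmt.Printf("%+v\n", cfg)
}
```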
## Description
Temporarily move `server.json` out from the `.registry/` folder. The
mcp-publisher library does not support the `--file` flag yet. A
[PR](https://github.com/modelcontextprotocol/registry/pull/771) was
submitted to add flags to the publish subcommand.
## PR Checklist
> Thank you for opening a Pull Request! Before submitting your PR, there
are a
> few things you can do to make sure it goes smoothly:
- [x] Make sure you reviewed
[CONTRIBUTING.md](https://github.com/googleapis/genai-toolbox/blob/main/CONTRIBUTING.md)
- [x] Make sure to open an issue as a
[bug/issue](https://github.com/googleapis/genai-toolbox/issues/new/choose)
before writing your code! That way we can discuss the change, evaluate
designs, and agree on the general idea
- [x] Ensure the tests and linter pass
- [x] Code coverage does not decrease (if any source code was changed)
- [x] Appropriate docs were updated (if necessary)
- [x] Make sure to add `!` if this involve a breaking change
🛠️ Fixes #<issue_number_goes_here>
## Description
Fixed a small typo.
## PR Checklist
> Thank you for opening a Pull Request! Before submitting your PR, there
are a
> few things you can do to make sure it goes smoothly:
- [x] Make sure you reviewed
[CONTRIBUTING.md](https://github.com/googleapis/genai-toolbox/blob/main/CONTRIBUTING.md)
- [x] Make sure to open an issue as a
[bug/issue](https://github.com/googleapis/genai-toolbox/issues/new/choose)
before writing your code! That way we can discuss the change, evaluate
designs, and agree on the general idea
- [x] Ensure the tests and linter pass
- [x] Code coverage does not decrease (if any source code was changed)
- [x] Appropriate docs were updated (if necessary)
- [x] Make sure to add `!` if this involve a breaking change
🛠️ Fixes #1944
## Description
Return an error if one occurs during query execution. The AlloyDB AI NL tool
was failing queries silently; when this happened, the user was just
receiving a `nil` response and was not informed of what the error was.
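A minimal, runnable Go sketch of the general fix pattern only (not the tool's actual code): the execution error is wrapped and returned instead of being swallowed.

```go
package main

import (
	"errors"
	"fmt"
)

// runNLQuery stands in for the tool's query execution; the real tool calls
// AlloyDB. Here it just simulates a failure so the sketch is runnable.
func runNLQuery(q string) ([]map[string]any, error) {
	return nil, errors.New("nl config not found")
}

// executeNL shows the fix pattern: surface the execution error to the caller
// instead of silently returning a nil result.
func executeNL(q string) ([]map[string]any, error) {
	rows, err := runNLQuery(q)
	if err != nil {
		return nil, fmt.Errorf("unable to execute query: %w", err)
	}
	return rows, nil
}

func main() {
	if _, err := executeNL("show me all flights to SFO"); err != nil {
		fmt.Println("error:", err) // the user now sees the cause instead of nil
	}
}
```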
## PR Checklist
> Thank you for opening a Pull Request! Before submitting your PR, there
are a
> few things you can do to make sure it goes smoothly:
- [x] Make sure you reviewed
[CONTRIBUTING.md](https://github.com/googleapis/genai-toolbox/blob/main/CONTRIBUTING.md)
- [x] Make sure to open an issue as a
[bug/issue](https://github.com/googleapis/genai-toolbox/issues/new/choose)
before writing your code! That way we can discuss the change, evaluate
designs, and agree on the general idea
- [x] Ensure the tests and linter pass
- [x] Code coverage does not decrease (if any source code was changed)
- [x] Appropriate docs were updated (if necessary)
- [x] Make sure to add `!` if this involve a breaking change
🛠️ Fixes #1949
## Description
Update blunderbuss to include dishaprakash after OOO
## PR Checklist
> Thank you for opening a Pull Request! Before submitting your PR, there
are a
> few things you can do to make sure it goes smoothly:
- [x] Make sure you reviewed
[CONTRIBUTING.md](https://github.com/googleapis/genai-toolbox/blob/main/CONTRIBUTING.md)
- [ ] Make sure to open an issue as a
[bug/issue](https://github.com/googleapis/genai-toolbox/issues/new/choose)
before writing your code! That way we can discuss the change, evaluate
designs, and agree on the general idea
- [x] Ensure the tests and linter pass
- [x] Code coverage does not decrease (if any source code was changed)
- [x] Appropriate docs were updated (if necessary)
- [x] Make sure to add `!` if this involve a breaking change
🛠️ Fixes #<issue_number_goes_here>
## Description
Unmarshal the `object_details` JSON string into a map so that the
`spannerlisttables` response contains nested JSON instead of a nested JSON string.
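A small Go sketch of the idea, with hypothetical column and field names: the JSON-encoded `object_details` string is unmarshaled into a map before the response is marshaled, so callers receive nested JSON rather than a quoted JSON string.

```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Simulated row: object_details arrives as a JSON-encoded string.
	row := map[string]any{
		"object_name":    "Albums",
		"object_details": `{"columns":[{"name":"AlbumId","type":"INT64"}]}`,
	}

	// Unmarshal the string into a map so the final response nests real JSON
	// instead of a quoted JSON string.
	var details map[string]any
	if s, ok := row["object_details"].(string); ok {
		if err := json.Unmarshal([]byte(s), &details); err == nil {
			row["object_details"] = details
		}
	}

	out, _ := json.MarshalIndent(row, "", "  ")
	fmt.Println(string(out))
}
```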
## PR Checklist
- [x] Make sure you reviewed
[CONTRIBUTING.md](https://github.com/googleapis/genai-toolbox/blob/main/CONTRIBUTING.md)
- [x] Make sure to open an issue as a
[bug/issue](https://github.com/googleapis/genai-toolbox/issues/new/choose)
before writing your code! That way we can discuss the change, evaluate
designs, and agree on the general idea
- [x] Ensure the tests and linter pass
- [x] Code coverage does not decrease (if any source code was changed)
- [x] Appropriate docs were updated (if necessary)
- [x] Make sure to add `!` if this involve a breaking change
🛠️ Fixes #1838
---------
Co-authored-by: Averi Kitsch <akitsch@google.com>
This PR contains the following updates:
| Package | Type | Update | Change |
|---|---|---|---|
| [actions/checkout](https://redirect.github.com/actions/checkout) | action | pinDigest | -> `08c6903` |
---
### Configuration
📅 **Schedule**: Branch creation - At any time (no schedule defined),
Automerge - At any time (no schedule defined).
🚦 **Automerge**: Disabled by config. Please merge this manually once you
are satisfied.
♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the
rebase/retry checkbox.
🔕 **Ignore**: Close this PR and you won't be reminded about this update
again.
---
- [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check
this box
---
This PR was generated by [Mend Renovate](https://mend.io/renovate/).
View the [repository job
log](https://developer.mend.io/github/googleapis/genai-toolbox).
Co-authored-by: Averi Kitsch <akitsch@google.com>
To keep a persistent backend storage for configuration, we will have to
keep a single source of truth. This involves supporting bi-directional
conversion between ToolsetConfig and Toolset.
This PR makes the following changes:
* Embed ToolsetConfig in Toolset
* Add `ToConfig()` to extract ToolsetConfig from Toolset.
To keep a persistent backend storage for configuration, we will have to
keep a single source of truth. This involves supporting bi-directional
conversion between Config and AuthService.
This PR makes the following changes:
* Embed Config in AuthService
* Add `ToConfig()` to extract Config from AuthService.
To keep a persistent backend storage for configuration, we will have to
keep a single source of truth. This involves supporting bi-directional
conversion between Config and Tool.
This PR makes the following changes:
* Embed Config in Tool
* Add `ToConfig()` to extract Config from Tool.
Jules PR
---
*PR created automatically by Jules for task
[11947649751737965380](https://jules.google.com/task/11947649751737965380)*
---------
Co-authored-by: google-labs-jules[bot] <161369871+google-labs-jules[bot]@users.noreply.github.com>
Co-authored-by: Yuan Teoh <yuanteoh@google.com>
To keep a persistent backend storage for configuration, we will have to
keep a single source of truth. This involves supporting bi-directional
conversion between Config and Source.
This PR makes the following changes:
* Embed Config in Source
* Add `ToConfig()` to extract Config from Source.
To facilitate the transition of moving the invocation implementation to
Source, we will have to move parameters to `internal/util`. This approach
is crucial because certain parameters may not be fully resolvable
pre-implementation. Since both `internal/sources` and `internal/tools`
will need access to `parameters`, it makes more sense to move the
parameters implementation to a shared util package.
## Description
Set up product-specific automation
## PR Checklist
> Thank you for opening a Pull Request! Before submitting your PR, there
are a
> few things you can do to make sure it goes smoothly:
- [ ] Make sure you reviewed
[CONTRIBUTING.md](https://github.com/googleapis/genai-toolbox/blob/main/CONTRIBUTING.md)
- [ ] Make sure to open an issue as a
[bug/issue](https://github.com/googleapis/genai-toolbox/issues/new/choose)
before writing your code! That way we can discuss the change, evaluate
designs, and agree on the general idea
- [ ] Ensure the tests and linter pass
- [ ] Code coverage does not decrease (if any source code was changed)
- [ ] Appropriate docs were updated (if necessary)
- [ ] Make sure to add `!` if this involve a breaking change
🛠️ Fixes #<issue_number_goes_here>
## Description
> Should include a concise description of the changes (bug or feature), its
> impact, along with a summary of the solution
This PR adds a new tool to the Looker MCP Toolbox that enables the user
authenticated to the Looker source to generate authenticated embed URLs
for dashboards, looks, and queries. When combined with other tools that
already exist in the Looker toolbox, this enables searching existing
content and providing authenticated URLs to them, or creating
authenticated URLs from queries generated via natural language. The
embed URLs will create an embed session for the user and can be opened
directly or added to an iframe `src` attribute to provide a smooth
Embedded Analytics setup.
Additionally, this PR adds a new optional parameter to the Looker source
called `SessionLength`, which an admin setting up the Looker source can
set to determine how long the embed sessions last.
## PR Checklist
> Thank you for opening a Pull Request! Before submitting your PR, there
are a
> few things you can do to make sure it goes smoothly:
- [x] Make sure you reviewed
[CONTRIBUTING.md](https://github.com/googleapis/genai-toolbox/blob/main/CONTRIBUTING.md)
- [x] Make sure to open an issue as a
[bug/issue](https://github.com/googleapis/genai-toolbox/issues/new/choose)
before writing your code! That way we can discuss the change, evaluate
designs, and agree on the general idea
- [ ] Ensure the tests and linter pass
- [x] Code coverage does not decrease (if any source code was changed)
- [x] Appropriate docs were updated (if necessary)
- [x] Make sure to add `!` if this involve a breaking change
🛠️ Fixes #1876 (https://github.com/googleapis/genai-toolbox/issues/1876)
---------
Co-authored-by: Luka Fontanilla <maluka@google.com>
Co-authored-by: Dr. Strangelove <drstrangelove@google.com>
## Description
As the codebase gets bigger, the linter takes longer to run. Currently
it runs in an average of 3 minutes and 30 seconds or so, but
occasionally goes to 4 minutes, which causes a failure since it hits the
timeout.
The unit tests all take 10 to 15 minutes to run, so setting the timeout
to 10m will give the linter plenty of time to run.
## Description
> Should include a concise description of the changes (bug or feature), its
> impact, along with a summary of the solution
## PR Checklist
> Thank you for opening a Pull Request! Before submitting your PR, there
are a
> few things you can do to make sure it goes smoothly:
- [x] Make sure you reviewed
[CONTRIBUTING.md](https://github.com/googleapis/genai-toolbox/blob/main/CONTRIBUTING.md)
- [ ] Make sure to open an issue as a
[bug/issue](https://github.com/googleapis/genai-toolbox/issues/new/choose)
before writing your code! That way we can discuss the change, evaluate
designs, and agree on the general idea
- [x] Ensure the tests and linter pass
- [x] Code coverage does not decrease (if any source code was changed)
- [ ] Appropriate docs were updated (if necessary)
- [ ] Make sure to add `!` if this involve a breaking change
🛠️ Fixes #<issue_number_goes_here>
## Description
This PR adds a new [github
action](https://github.com/modelcontextprotocol/registry/blob/main/docs/guides/publishing/github-actions.md)
workflow to publish Toolbox to the mcp-registry. This only publishes the
custom tools server; prebuilt tools will be added in a different PR once
this is successful.
The workflow will be triggered by (1) any new release tag (v*) or (2)
manually via GHA workflow dispatch. It will grab the version number from
the `server.json` file.
The `server.json` file will be updated by release-please during every
release.
Note: servers that are published in the mcp-registry are immutable. In
cases where we successfully published a server and would like to make an
update, we will have to update the `version` field in
`.registry/server.json` and manually publish another server with a new
version.
## PR Checklist
> Thank you for opening a Pull Request! Before submitting your PR, there
are a
> few things you can do to make sure it goes smoothly:
- [x] Make sure you reviewed
[CONTRIBUTING.md](https://github.com/googleapis/genai-toolbox/blob/main/CONTRIBUTING.md)
- [x] Make sure to open an issue as a
[bug/issue](https://github.com/googleapis/genai-toolbox/issues/new/choose)
before writing your code! That way we can discuss the change, evaluate
designs, and agree on the general idea
- [x] Ensure the tests and linter pass
- [x] Code coverage does not decrease (if any source code was changed)
- [x] Appropriate docs were updated (if necessary)
- [x] Make sure to add `!` if this involve a breaking change
🛠️ Fixes #1659
Issue at mcp-registry to support GAR
([#427](https://github.com/modelcontextprotocol/registry/issues/427))
## Description
This pull request addresses issue
[#29](https://github.com/gemini-cli-extensions/mysql/issues/29) by
implementing default connection values for the MySQL data source. When a
user does not specify a host or port, the connection will now
automatically default to `localhost:3306`.
### Detailed Changes
1. **internal/prebuiltconfigs/tools/mysql.yaml**
- Added default values for host and port. Specifically, they are now
`host: ${MYSQL_HOST:localhost}` and `port: ${MYSQL_PORT:3306}`. This allows
the configuration loader to directly use `localhost` or `3306` as a
fallback if the `MYSQL_HOST` or `MYSQL_PORT` environment variables are not
set (a rough sketch of this fallback behavior follows below).
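A rough, runnable Go sketch of the `${VAR:default}` fallback semantics described above; this is not the toolbox's actual configuration parser, just an illustration of the behavior.

```go
package main

import (
	"fmt"
	"os"
	"regexp"
)

// placeholder matches ${VAR:default} patterns.
var placeholder = regexp.MustCompile(`\$\{([A-Z0-9_]+):([^}]*)\}`)

// expandWithDefault resolves ${VAR:default}: it uses the environment
// variable when set and falls back to the default otherwise.
func expandWithDefault(s string) string {
	return placeholder.ReplaceAllStringFunc(s, func(m string) string {
		parts := placeholder.FindStringSubmatch(m)
		if v, ok := os.LookupEnv(parts[1]); ok && v != "" {
			return v
		}
		return parts[2]
	})
}

func main() {
	// With MYSQL_HOST and MYSQL_PORT unset, these resolve to localhost:3306.
	host := expandWithDefault("${MYSQL_HOST:localhost}")
	port := expandWithDefault("${MYSQL_PORT:3306}")
	fmt.Printf("connecting to %s:%s\n", host, port)
}
```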
**Testing and Validation**
To ensure the changes work correctly and are compatible with MySQL-like
databases, the feature was manually tested against a MariaDB instance
running in a Docker container.
The testing process involved the following steps:
1. A local testdb database with a products table was created in the
MariaDB container.
2. For Gemini CLI integration, the ~/.gemini/settings.json file was
configured to point to a local build of the toolbox executable.
3. The toolbox was launched in UI mode using the `./toolbox --prebuilt=mysql --ui` command.
4. Two connection scenarios were tested to validate the new logic.
**Test 1: Default Connection (Host/Port Unset)**
The MYSQL_HOST and MYSQL_PORT environment variables were unset. The
application correctly defaulted to localhost:3306 and successfully
connected to the testdb database.
<img width="2087" height="1102" alt="Screenshot 2025-11-11 11 31 45 PM"
src="https://github.com/user-attachments/assets/a8ef5f92-eaca-472f-a0df-e2b3c0c027da"
/>
**Test 2: Explicit Connection (Host/Port Set)**
The MYSQL_HOST and MYSQL_PORT environment variables were explicitly set.
The application correctly used these values, overriding the defaults and
establishing a successful connection.
<img width="2073" height="958" alt="Screenshot 2025-11-11 6 12 44 PM"
src="https://github.com/user-attachments/assets/4b9b8838-091f-4c78-9e3b-97768323693c"
/>
**Result:**
In both scenarios, the list_tables prebuilt tool was executed via the
Toolbox UI, which successfully returned the products table from the
testdb database, confirming the changes work as expected.
Screenshot of list_tables execution from the Toolbox UI:
<img width="2251" height="1240" alt="result"
src="https://github.com/user-attachments/assets/f1c5372d-acc0-4551-af2d-fa1ee4b228d7"
/>
## PR Checklist
> Thank you for opening a Pull Request! Before submitting your PR, there
are a
> few things you can do to make sure it goes smoothly:
- [x] Make sure you reviewed
[CONTRIBUTING.md](https://github.com/googleapis/genai-toolbox/blob/main/CONTRIBUTING.md)
- [x] Make sure to open an issue as a
[bug/issue](https://github.com/googleapis/genai-toolbox/issues/new/choose)
before writing your code! That way we can discuss the change, evaluate
designs, and agree on the general idea
- [ ] Ensure the tests and linter pass
- [ ] Code coverage does not decrease (if any source code was changed)
- [ ] Appropriate docs were updated (if necessary)
- [ ] Make sure to add `!` if this involve a breaking change
🛠️ Fixes [#29](https://github.com/gemini-cli-extensions/mysql/issues/29)
---------
Co-authored-by: prernakakkar-google <158031829+prernakakkar-google@users.noreply.github.com>
### Description
This pull request addresses issue #41 by implementing default connection
values for the Postgresql data source. When a user does not specify a
host or port, the connection will now automatically default to
`localhost:5432`.
---
### Detailed Changes
- **docs/en/reference/prebuilt-tools.md**
- Updated the `host` and `port` environment variables to be optional in the doc
- **internal/prebuiltconfigs/tools/postgres.yaml**
- Added the default values to host and port. Specifically, they are now
`host: ${POSTGRES_HOST:localhost}` and `port: ${POSTGRES_PORT:5432}`.
This allows the configuration loader to directly use `"localhost"` or
`"5432"` as a fallback if the `POSTGRES_HOST` or `POSTGRES_PORT`
environment variables are not set.
---
### PR Checklist
Thank you for opening a Pull Request! Before submitting your PR, there
are a few things you can do to make sure it goes smoothly:
- [ ] Make sure you reviewed
CONTRIBUTING.md
- [ ] Make sure to open an issue as a
bug/issue
before writing your code! That way we can discuss the change, evaluate
designs, and agree on the general idea
- [ ] Ensure the tests and linter pass
- [ ] Code coverage does not decrease (if any source code was changed)
- [ ] Appropriate docs were updated (if necessary)
- [ ] Make sure to add ! if this involve a breaking change
🛠️ **Fixes** #41
---------
Co-authored-by: Averi Kitsch <akitsch@google.com>
## Description
> Should include a concise description of the changes (bug or feature), its
> impact, along with a summary of the solution
## PR Checklist
> Thank you for opening a Pull Request! Before submitting your PR, there
are a
> few things you can do to make sure it goes smoothly:
- [x] Make sure you reviewed
[CONTRIBUTING.md](https://github.com/googleapis/genai-toolbox/blob/main/CONTRIBUTING.md)
- [ ] Make sure to open an issue as a
[bug/issue](https://github.com/googleapis/genai-toolbox/issues/new/choose)
before writing your code! That way we can discuss the change, evaluate
designs, and agree on the general idea
- [x] Ensure the tests and linter pass
- [ ] Code coverage does not decrease (if any source code was changed)
- [ ] Appropriate docs were updated (if necessary)
- [ ] Make sure to add `!` if this involve a breaking change
🛠️ Fixes #<issue_number_goes_here>
## Description
> Should include a concise description of the changes (bug or feature), its
> impact, along with a summary of the solution
## PR Checklist
> Thank you for opening a Pull Request! Before submitting your PR, there
are a
> few things you can do to make sure it goes smoothly:
- [x] Make sure you reviewed
[CONTRIBUTING.md](https://github.com/googleapis/genai-toolbox/blob/main/CONTRIBUTING.md)
- [ ] Make sure to open an issue as a
[bug/issue](https://github.com/googleapis/genai-toolbox/issues/new/choose)
before writing your code! That way we can discuss the change, evaluate
designs, and agree on the general idea
- [x] Ensure the tests and linter pass
- [ ] Code coverage does not decrease (if any source code was changed)
- [ ] Appropriate docs were updated (if necessary)
- [ ] Make sure to add `!` if this involve a breaking change
🛠️ Fixes #<issue_number_goes_here>
## Description
---
This change introduces the `DefaultProject` field for the
`alloydb-admin` and `cloud-sql-admin` sources. This field allows the
AlloyDB and Cloud SQL control plane tools to use the project value from
the environment variables (e.g. `ALLOYDB_POSTGRES_PROJECT`) if it is
already set, instead of asking the user.
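A short Go sketch of the intended resolution order (an explicitly provided project first, then the environment-derived default); the function and names here are illustrative, not the actual source implementation.

```go
package main

import (
	"fmt"
	"os"
)

// resolveProject prefers an explicitly provided project and otherwise falls
// back to the source's default project, which in this sketch comes from an
// environment variable. This is only an illustration of the behavior.
func resolveProject(explicit, defaultProject string) (string, error) {
	if explicit != "" {
		return explicit, nil
	}
	if defaultProject != "" {
		return defaultProject, nil
	}
	return "", fmt.Errorf("project is required: pass it explicitly or set ALLOYDB_POSTGRES_PROJECT")
}

func main() {
	defaultProject := os.Getenv("ALLOYDB_POSTGRES_PROJECT")
	project, err := resolveProject("", defaultProject)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("using project:", project)
}
```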
## PR Checklist
---
> Thank you for opening a Pull Request! Before submitting your PR, there
are a
> few things you can do to make sure it goes smoothly:
- [x] Make sure you reviewed
[CONTRIBUTING.md](https://github.com/googleapis/genai-toolbox/blob/main/CONTRIBUTING.md)
- [x] Make sure to open an issue as a
[bug/issue](https://github.com/googleapis/genai-toolbox/issues/new/choose)
before writing your code! That way we can discuss the change, evaluate
designs, and agree on the general idea
- [x] Ensure the tests and linter pass
- [x] Code coverage does not decrease (if any source code was changed)
- [x] Appropriate docs were updated (if necessary)
- [x] Make sure to add `!` if this involve a breaking change
🛠️ Fixes: https://github.com/gemini-cli-extensions/alloydb/issues/47
---------
Co-authored-by: Averi Kitsch <akitsch@google.com>
This PR contains the following updates:
| Package | Change | Age | Confidence |
|---|---|---|---|
| [pytest](https://redirect.github.com/pytest-dev/pytest) ([changelog](https://docs.pytest.org/en/stable/changelog.html)) | `==8.4.2` -> `==9.0.0` | [age](https://docs.renovatebot.com/merge-confidence/) | [confidence](https://docs.renovatebot.com/merge-confidence/) |
---
### Release Notes
<details>
<summary>pytest-dev/pytest (pytest)</summary>
###
[`v9.0.0`](https://redirect.github.com/pytest-dev/pytest/releases/tag/9.0.0)
[Compare
Source](https://redirect.github.com/pytest-dev/pytest/compare/8.4.2...9.0.0)
### pytest 9.0.0 (2025-11-05)
#### New features
-
[#​1367](https://redirect.github.com/pytest-dev/pytest/issues/1367):
**Support for subtests** has been added.
`subtests <subtests>` are an alternative to parametrization, useful in
situations where the parametrization values are not all known at
collection time.
Example:
```python
def contains_docstring(p: Path) -> bool:
"""Return True if the given Python file contains a top-level
docstring."""
...
def test_py_files_contain_docstring(subtests: pytest.Subtests) -> None:
for path in Path.cwd().glob("*.py"):
with subtests.test(path=str(path)):
assert contains_docstring(path)
```
Each assert failure or error is caught by the context manager and
reported individually, giving a clear picture of all files that are
missing a docstring.
In addition, `unittest.TestCase.subTest` is now also supported.
This feature was originally implemented as a separate plugin in
[pytest-subtests](https://redirect.github.com/pytest-dev/pytest-subtests),
but since then has been merged into the core.
> \[!NOTE]
> This feature is experimental and will likely evolve in future
releases. By that we mean that we might change how subtests are reported
on failure, but the functionality and how to use it are stable.
-
[#​13743](https://redirect.github.com/pytest-dev/pytest/issues/13743):
Added support for **native TOML configuration files**.
While pytest, since version 6, supports configuration in
`pyproject.toml` files under `[tool.pytest.ini_options]`,
it does so in an "INI compatibility mode", where all configuration
values are treated as strings or list of strings.
Now, pytest supports the native TOML data model.
In `pyproject.toml`, the native TOML configuration is under the
`[tool.pytest]` table.
```toml
# pyproject.toml
[tool.pytest]
minversion = "9.0"
addopts = ["-ra", "-q"]
testpaths = [
"tests",
"integration",
]
```
The `[tool.pytest.ini_options]` table remains supported, but both tables
cannot be used at the same time.
If you prefer to use a separate configuration file, or don't use
`pyproject.toml`, you can use `pytest.toml` or `.pytest.toml`:
```toml
# pytest.toml or .pytest.toml
[pytest]
minversion = "9.0"
addopts = ["-ra", "-q"]
testpaths = [
"tests",
"integration",
]
```
The documentation now (sometimes) shows configuration snippets in both
TOML and INI formats, in a tabbed interface.
See `config file formats` for full details.
-
[#​13823](https://redirect.github.com/pytest-dev/pytest/issues/13823):
Added a **"strict mode"** enabled by the `strict` configuration option.
When set to `true`, the `strict` option currently enables
- `strict_config`
- `strict_markers`
- `strict_parametrization_ids`
- `strict_xfail`
The individual strictness options can be explicitly set to override the
global `strict` setting.
The previously-deprecated `--strict` command-line flag now enables
strict mode.
If pytest adds new strictness options in the future, they will also be
enabled in strict mode.
Therefore, you should only enable strict mode if you use a pinned/locked
version of pytest,
or if you want to proactively adopt new strictness options as they are
added.
See `strict mode` for more details.
-
[#​13737](https://redirect.github.com/pytest-dev/pytest/issues/13737):
Added the `strict_parametrization_ids` configuration option.
When set, pytest emits an error if it detects non-unique parameter set
IDs,
rather than automatically making the IDs unique by appending `0`, `1`, ... to them.
This can be particularly useful for catching unintended duplicates.
-
[#​13072](https://redirect.github.com/pytest-dev/pytest/issues/13072):
Added support for displaying test session **progress in the terminal
tab** using the [OSC
9;4;](https://conemu.github.io/en/AnsiEscapeCodes.html#ConEmu_specific_OSC)
ANSI sequence.
When pytest runs in a supported terminal emulator like ConEmu, Gnome
Terminal, Ptyxis, Windows Terminal, Kitty or Ghostty,
you'll see the progress in the terminal tab or window,
allowing you to monitor pytest's progress at a glance.
This feature is automatically enabled when running in a TTY. It is
implemented as an internal plugin. If needed, it can be disabled as
follows:
- On a user level, using `-p no:terminalprogress` on the command line or
via an environment variable `PYTEST_ADDOPTS='-p no:terminalprogress'`.
- On a project configuration level, using `addopts = "-p
no:terminalprogress"`.
-
[#​478](https://redirect.github.com/pytest-dev/pytest/issues/478):
Support PEP420 (implicit namespace packages) as `--pyargs` target when
`consider_namespace_packages` is `true` in the config.
Previously, this option only impacted package imports, now it also
impacts tests discovery.
-
[#​13678](https://redirect.github.com/pytest-dev/pytest/issues/13678):
Added a new `faulthandler_exit_on_timeout` configuration option set to
"false" by default to let <span class="title-ref">faulthandler</span>
interrupt the <span class="title-ref">pytest</span> process after a
timeout in case of deadlock.
Previously, a <span class="title-ref">faulthandler</span> timeout would
only dump the traceback of all threads to stderr, but would not
interrupt the <span class="title-ref">pytest</span> process.
\-- by `ogrisel`.
-
[#​13829](https://redirect.github.com/pytest-dev/pytest/issues/13829):
Added support for configuration option aliases via the `aliases`
parameter in `Parser.addini() <pytest.Parser.addini>`.
Plugins can now register alternative names for configuration options,
allowing for more flexibility in configuration naming and supporting
backward compatibility when renaming options.
The canonical name always takes precedence if both the canonical name
and an alias are specified in the configuration file.
#### Improvements in existing functionality
-
[#​13330](https://redirect.github.com/pytest-dev/pytest/issues/13330):
Having pytest configuration spread over more than one file (for example
having both a `pytest.ini` file and `pyproject.toml` with a
`[tool.pytest.ini_options]` table) will now print a warning to make it
clearer to the user that only one of them is actually used.
-- by `sgaist`
-
[#​13574](https://redirect.github.com/pytest-dev/pytest/issues/13574):
The single argument `--version` no longer loads the entire plugin
infrastructure, making it faster and more reliable when displaying only
the pytest version.
Passing `--version` twice (e.g., `pytest --version --version`) retains
the original behavior, showing both the pytest version and plugin
information.
> \[!NOTE]
> Since `--version` is now processed early, it only takes effect when
passed directly via the command line. It will not work if set through
other mechanisms, such as `PYTEST_ADDOPTS` or `addopts`.
-
[#​13823](https://redirect.github.com/pytest-dev/pytest/issues/13823):
Added `strict_xfail` as an alias to the `xfail_strict` option,
`strict_config` as an alias to the `--strict-config` flag,
and `strict_markers` as an alias to the `--strict-markers` flag.
This makes all strictness options consistently have configuration
options with the prefix `strict_`.
-
[#​13700](https://redirect.github.com/pytest-dev/pytest/issues/13700):
<span class="title-ref">--junitxml</span> no longer prints the <span
class="title-ref">generated xml file</span> summary at the end of the
pytest session when <span class="title-ref">--quiet</span> is given.
-
[#​13732](https://redirect.github.com/pytest-dev/pytest/issues/13732):
Previously, when filtering warnings, pytest would fail if the filter
referenced a class that could not be imported. Now, this only outputs a
message indicating the problem.
-
[#​13859](https://redirect.github.com/pytest-dev/pytest/issues/13859):
Clarify the error message for `pytest.raises()` when a regex `match` fails.
-
[#​13861](https://redirect.github.com/pytest-dev/pytest/issues/13861):
Better sentence structure in a test's expected error message.
Previously, the error message would be "expected exception must be
\<expected>, but got \<actual>". Now, it is "Expected \<expected>, but
got \<actual>".
#### Removals and backward incompatible breaking changes
-
[#​12083](https://redirect.github.com/pytest-dev/pytest/issues/12083):
Fixed a bug where an invocation such as `pytest a/ a/b` would cause only
tests from `a/b` to run, and not other tests under `a/`.
The fix entails a few breaking changes to how such overlapping arguments
and duplicates are handled:
1. `pytest a/b a/` or `pytest a/ a/b` are equivalent to `pytest a`; if an
argument overlaps another argument, only the prefix remains.
2. `pytest x.py x.py` is equivalent to `pytest x.py`; previously such an
invocation was taken as an explicit request to run the tests from the
file twice.
If you rely on these behaviors, consider using `--keep-duplicates`, which
retains its existing behavior (including the bug).
-
[#​13719](https://redirect.github.com/pytest-dev/pytest/issues/13719):
Support for Python 3.9 is dropped following its end of life.
-
[#​13766](https://redirect.github.com/pytest-dev/pytest/issues/13766):
Previously, pytest would assume it was running in a CI/CD environment if
either of the environment variables `$CI` or `$BUILD_NUMBER` was defined;
now, CI mode is only activated if at least one of those variables is
defined and set to a *non-empty* value.
-
[#​13779](https://redirect.github.com/pytest-dev/pytest/issues/13779):
**PytestRemovedIn9Warning deprecation warnings are now errors by
default.**
Following our plan to remove deprecated features with as little
disruption as
possible, all warnings of type `PytestRemovedIn9Warning` now generate
errors
instead of warning messages by default.
**The affected features will be effectively removed in pytest 9.1**, so
please consult the
`deprecations` section in the docs for directions on how to update
existing code.
In the pytest `9.0.X` series, it is possible to change the errors back
into warnings as a
stopgap measure by adding this to your `pytest.ini` file:
```ini
[pytest]
filterwarnings =
ignore::pytest.PytestRemovedIn9Warning
```
But this will stop working when pytest `9.1` is released.
**If you have concerns** about the removal of a specific feature, please
add a
comment to issue #13779.
#### Deprecations (removal in next major release)
-
[#​13807](https://redirect.github.com/pytest-dev/pytest/issues/13807):
`monkeypatch.syspath_prepend()` (`pytest.MonkeyPatch.syspath_prepend`) now
issues a deprecation warning when the prepended path contains legacy
namespace packages (those using `pkg_resources.declare_namespace()`).
Users should migrate to native namespace packages (PEP 420).
See `monkeypatch-fixup-namespace-packages` for details.
#### Bug fixes
-
[#​13445](https://redirect.github.com/pytest-dev/pytest/issues/13445):
Made the type annotations of `pytest.skip` and friends more
spec-compliant to have them work across more type checkers.
-
[#​13537](https://redirect.github.com/pytest-dev/pytest/issues/13537):
Fixed a bug in which `ExceptionGroup` with only `Skipped` exceptions in
teardown was not handled correctly and showed as error.
-
[#​13598](https://redirect.github.com/pytest-dev/pytest/issues/13598):
Fixed possible collection confusion on Windows when short paths and
symlinks are involved.
-
[#​13716](https://redirect.github.com/pytest-dev/pytest/issues/13716):
Fixed a bug where a nonsensical invocation like `pytest x.py[a]` (a file
cannot be parametrized) was silently treated as `pytest x.py`. This is
now a usage error.
-
[#​13722](https://redirect.github.com/pytest-dev/pytest/issues/13722):
Fixed a misleading assertion failure message when using `pytest.approx`
on mappings with differing lengths.
-
[#​13773](https://redirect.github.com/pytest-dev/pytest/issues/13773):
Fixed the static fixture closure calculation to properly consider
transitive dependencies requested by overridden fixtures.
-
[#​13816](https://redirect.github.com/pytest-dev/pytest/issues/13816):
Fixed `pytest.approx` which now returns a clearer error message when
comparing mappings with different keys.
-
[#​13849](https://redirect.github.com/pytest-dev/pytest/issues/13849):
Hidden `.pytest.ini` files are now picked up as the config file even if
empty.
This was an inconsistency with non-hidden `pytest.ini`.
-
[#​13865](https://redirect.github.com/pytest-dev/pytest/issues/13865):
Fixed `--show-capture` with `--tb=line`.
-
[#​13522](https://redirect.github.com/pytest-dev/pytest/issues/13522):
Fixed `pytester` in subprocess mode ignoring all `pytester.plugins`
(`pytest.Pytester.plugins`) except the first.
Fixed `pytester` in subprocess mode silently ignoring non-str
`pytester.plugins`.
Now it errors instead.
If you are affected by this, specify the plugin by name, or switch the
affected tests to use `pytester.runpytest_inprocess`
(`pytest.Pytester.runpytest_inprocess`) explicitly instead.
#### Packaging updates and notes for downstreams
-
[#​13791](https://redirect.github.com/pytest-dev/pytest/issues/13791):
Minimum requirements on `iniconfig` and `packaging` were bumped to
`1.0.1` and `22.0.0`, respectively.
#### Contributor-facing changes
-
[#​12244](https://redirect.github.com/pytest-dev/pytest/issues/12244):
Fixed self-test failures when `TERM=dumb`.
-
[#​12474](https://redirect.github.com/pytest-dev/pytest/issues/12474):
Added scheduled GitHub Action Workflow to run Sphinx linkchecks in repo
documentation.
-
[#​13621](https://redirect.github.com/pytest-dev/pytest/issues/13621):
pytest's own testsuite now handles the `lsof` command hanging (e.g. due
to unreachable network filesystems), with the affected selftests being
skipped after 10 seconds.
-
[#​13638](https://redirect.github.com/pytest-dev/pytest/issues/13638):
Fixed deprecated `gh pr new` command in `scripts/prepare-release-pr.py`.
The script now uses `gh pr create` which is compatible with GitHub CLI
v2.0+.
-
[#​13695](https://redirect.github.com/pytest-dev/pytest/issues/13695):
Flush `stdout` and `stderr` in `Pytester.run` to avoid truncated outputs in
`test_faulthandler.py::test_timeout` on CI -- by `ogrisel`.
-
[#​13771](https://redirect.github.com/pytest-dev/pytest/issues/13771):
Skip `test_do_not_collect_symlink_siblings` on Windows environments without
symlink support to avoid false negatives.
-
[#​13841](https://redirect.github.com/pytest-dev/pytest/issues/13841):
`tox>=4` is now required when contributing to pytest.
-
[#​13625](https://redirect.github.com/pytest-dev/pytest/issues/13625):
Added missing docstrings to `pytest_addoption()`, `pytest_configure()`,
and `cacheshow()` functions in `cacheprovider.py`.
#### Miscellaneous internal changes
-
[#​13830](https://redirect.github.com/pytest-dev/pytest/issues/13830):
Configuration overrides (`-o`/`--override-ini`) are now processed during
startup rather than during `config.getini() <pytest.Config.getini>`.
</details>
---
### Configuration
📅 **Schedule**: Branch creation - At any time (no schedule defined),
Automerge - At any time (no schedule defined).
🚦 **Automerge**: Disabled by config. Please merge this manually once you
are satisfied.
♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the
rebase/retry checkbox.
🔕 **Ignore**: Close this PR and you won't be reminded about this update
again.
---
- [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check
this box
---
This PR was generated by [Mend Renovate](https://mend.io/renovate/).
View the [repository job
log](https://developer.mend.io/github/googleapis/genai-toolbox).
Added a new worker pool in the release Cloud Build that uses an
`n2d-standard-64` machine with a 100GB disk. Removed the
`machineType` field since it is indicated in the worker pool.
Release-As: 0.20.0
Add an additional binary build step for the Gemini CLI. The purpose of this
is to mimic the build steps of versioned.release; however, we will not
be storing that additional build.
## Description
Update blunderbuss for OOO
## PR Checklist
> Thank you for opening a Pull Request! Before submitting your PR, there
are a
> few things you can do to make sure it goes smoothly:
- [x] Make sure you reviewed
[CONTRIBUTING.md](https://github.com/googleapis/genai-toolbox/blob/main/CONTRIBUTING.md)
- [ ] Make sure to open an issue as a
[bug/issue](https://github.com/googleapis/genai-toolbox/issues/new/choose)
before writing your code! That way we can discuss the change, evaluate
designs, and agree on the general idea
- [x] Ensure the tests and linter pass
- [x] Code coverage does not decrease (if any source code was changed)
- [x] Appropriate docs were updated (if necessary)
- [x] Make sure to add `!` if this involve a breaking change
🛠️ Fixes #<issue_number_goes_here>
## Description
Add support for the healthcare source, tool, and prebuilt config. This branch
consists of all previously approved PRs.
🛠️ Fixes #1648
---------
Co-authored-by: Marwan Tammam <15021613+Quarz0@users.noreply.github.com>
Update AlloyDB AI NL integration test's database back to
`test_database`.
Previously a new database was created with new NL configuration in order
to not break existing integration tests before merging PR #1753.
`test_database` is updated with v1.0.4 AlloyDB AI NL extension.
Move postgres prebuilt integration tests to `common.go` and `tool.go`.
Run those tests from alloydbpg and cloudsqlpg as well.
The alloydbpg and cloudsqlpg integration test coverage is calculated against
the whole `internal/tools/postgres/` folder. If these tests are not added,
the coverage will eventually drop below the minimum requirement.
Add a commit SHA tag to the continuous release image. Currently, we only
include the ref_name tag (which will always show as `main` since we run
the continuous release from the main branch); however, this will be
replaced during every PR merge, and it will be tough to find images of
previous versions. The commit SHA tag will help users find the
exact image that they are looking for.
Assign the job iterator values to an array instead of a map to preserve
column order.
When assigning incoming values to a map, the result is sometimes not in the
order of the statement. E.g. `SELECT id, name from ...` might turn into
`{"name": "name_value", "id":1}` rather than `{"id":1, "name":
"name_value"}`. Previously, during JSON marshaling, the map keys were ALWAYS
ordered alphabetically, so that wasn't an issue.
With the implementation of `orderedmap` (#1852), the BigQuery execute
SQL tool now preserves the column order during the marshaling
process. Because of this, BigQuery's integration test was flaky and failed
when the map was reordered. This update assigns incoming values to an array
instead, preserving the actual order (a short illustration follows below).
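A small runnable Go illustration of why a map loses column order while an ordered array keeps it; the query, column names, and row representation are hypothetical, not the tool's actual types.

```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Row from a hypothetical `SELECT word, count FROM ...`. Go maps are
	// unordered and encoding/json sorts map keys alphabetically, so the
	// statement's column order ("word" before "count") is lost.
	asMap := map[string]any{"word": "hello", "count": 3}
	m, _ := json.Marshal(asMap)
	fmt.Println(string(m)) // {"count":3,"word":"hello"} – alphabetical, not statement order

	// Holding the row as an ordered slice of column/value pairs preserves
	// statement order through marshaling.
	type cell struct {
		Column string `json:"column"`
		Value  any    `json:"value"`
	}
	asArray := []cell{{"word", "hello"}, {"count", 3}}
	a, _ := json.Marshal(asArray)
	fmt.Println(string(a)) // columns appear in statement order
}
```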
## Description
The run_dashboard tool will run the query associated with each tile of
the dashboard and return the full set of query results. It enables the
agent to answer questions like "Summarize this dashboard for me".
---------
Co-authored-by: Yuan Teoh <45984206+Yuan325@users.noreply.github.com>
## Description
> Should include a concise description of the changes (bug or feature), its
> impact, along with a summary of the solution
## PR Checklist
> Thank you for opening a Pull Request! Before submitting your PR, there
are a
> few things you can do to make sure it goes smoothly:
- [ ] Make sure you reviewed
[CONTRIBUTING.md](https://github.com/googleapis/genai-toolbox/blob/main/CONTRIBUTING.md)
- [ ] Make sure to open an issue as a
[bug/issue](https://github.com/googleapis/genai-toolbox/issues/new/choose)
before writing your code! That way we can discuss the change, evaluate
designs, and agree on the general idea
- [ ] Ensure the tests and linter pass
- [ ] Code coverage does not decrease (if any source code was changed)
- [ ] Appropriate docs were updated (if necessary)
- [ ] Make sure to add `!` if this involve a breaking change
🛠️ Fixes #<issue_number_goes_here>
This PR contains the following updates:
| Package | Change | Age | Confidence |
|---|---|---|---|
| [google-adk](https://redirect.github.com/google/adk-python) ([changelog](https://redirect.github.com/google/adk-python/blob/main/CHANGELOG.md)) | `==1.15.1` -> `==1.18.0` | [age](https://docs.renovatebot.com/merge-confidence/) | [confidence](https://docs.renovatebot.com/merge-confidence/) |
---
### Release Notes
<details>
<summary>google/adk-python (google-adk)</summary>
###
[`v1.18.0`](https://redirect.github.com/google/adk-python/blob/HEAD/CHANGELOG.md#1180-2025-11-05)
[Compare
Source](https://redirect.github.com/google/adk-python/compare/v1.17.0...v1.18.0)
##### Features
- **\[ADK Visual Agent Builder]**
- Core Features
- Visual workflow designer for agent creation
- Support for multiple agent types (LLM, Sequential, Parallel, Loop,
Workflow)
- Agent tool support with nested agent tools
- Built-in and custom tool integration
- Callback management for all ADK callback types (before/after agent,
model, tool)
- Assistant to help you build your agents with natural language
- Assistant proposes and writes agent configuration yaml files for you
- Save to test with chat interfaces as normal
- Build and debug at the same time in adk web!
- **\[Core]**
- Add support for extracting cache-related token counts from LiteLLM
usage
([4f85e86](4f85e86fc3))
- Expose the Python code run by the code interpreter in the logs
([a2c6a8a](a2c6a8a85c))
- Add run\_debug() helper method for quick agent experimentation
([0487eea](0487eea2ab))
- Allow injecting a custom Runner into `agent_to_a2a`
([156d235](156d235479))
- Support MCP prompts via the McpInstructionProvider class
([88032cf](88032cf5c5))
- **\[Models]**
- Add model tracking to LiteLlm and introduce a LiteLLM with fallbacks
demo
([d4c63fc](d4c63fc562))
- Add ApigeeLlm as a model that lets ADK Agent developers to connect
with an Apigee proxy
([87dcb3f](87dcb3f7ba))
- **\[Integrations]**
- Add example and fix for loading and upgrading old ADK session
databases
([338c3c8](338c3c89c6))
- Add support for specifying logging level for adk eval cli command
([b1ff85f](b1ff85fb23))
- Propagate LiteLLM finish\_reason to LlmResponse for use in callbacks
([71aa564](71aa5645f6))
- Allow LLM request to override the model used in the generate content
async method in LiteLLM
([ce8f674](ce8f674a28))
- Add api key argument to Vertex Session and Memory services for Express
Mode support
([9014a84](9014a849ea))
- Added support for enums as arguments for function tools
([240ef5b](240ef5beea))
- Implement artifact\_version related methods in GcsArtifactService
([e194ebb](e194ebb33c))
- **\[Services]**
- Add support for Vertex AI Express Mode when deploying to Agent Engine
([d4b2a8b](d4b2a8b49f))
- Remove custom polling logic for Vertex AI Session Service since LRO
polling is supported in express mode
([546c2a6](546c2a6816))
- Make VertexAiSessionService fully asynchronous
([f7e2a7a](f7e2a7a40e))
- **\[Tools]**
- Add Bigquery detect\_anomalies tool
([9851340](9851340ad1))
- Extend Bigquery detect\_anomalies tool to support future data anomaly
detection
([38ea749](38ea749c9c))
- Add get\_job\_info tool to BigQuery toolset
([6429457](64294572c1))
- **\[Evals]**
- Add "final\_session\_state" to the EvalCase data model
([2274c4f](2274c4f304))
- Marked expected\_invocation as optional field on evaluator interface
([b17c8f1](b17c8f19e5))
- Adds LLM-backed user simulator
([54c4ecc](54c4ecc733))
- **\[Observability]**
- Add BigQueryLoggingPlugin for event logging to BigQuery
([b7dbfed](b7dbfed4a3))
- **\[Live]**
- Add token usage to live events for bidi streaming
([6e5c0eb](6e5c0eb6e0))
##### Bug Fixes
- Reduce logging spam for MCP tools without authentication
([11571c3](11571c37ab))
- Fix typo in several files
([d2888a3](d2888a3766))
- Disable SetModelResponseTool workaround for Vertex AI Gemini 2+ models
([6a94af2](6a94af24bf))
- Bug when callback\_context\_invocation\_context is missing in
GlobalInstructionPlugin
([f81ebdb](f81ebdb622))
- Support models slash prefix in model name extraction
([8dff850](8dff85099d))
- Do not consider events with state delta and no content as final
response
([1ee93c8](1ee93c8bcb))
- Parameter filtering for CrewAI functions with \*\*kwargs
([74a3500](74a3500fc5))
- Do not treat FinishReason.STOP as error case for LLM responses
containing candidates with empty contents
([2f72ceb](2f72ceb49b))
- Fixes null check for reflect\_retry plugin sample
([86f0155](86f01550bd))
- Creates evalset directory on evalset create
([6c3882f](6c3882f2d6))
- Add ADK\_DISABLE\_LOAD\_DOTENV environment variable that disables
automatic loading of .env when running ADK cli, if set to true or 1
([15afbcd](15afbcd158))
- Allow tenacity 9.0.0
([ee8acc5](ee8acc58be))
- Output file uploading to artifact service should handle both base64
encoded and raw bytes
([496f8cd](496f8cd6bb))
- Correct message part ordering in A2A history
([5eca72f](5eca72f9bf))
- Change instruction insertion to respect tool call/response pairs
([1e6a9da](1e6a9daa63))
- DynamicPickleType to support MySQL dialect
([fc15c9a](fc15c9a0c3))
- Enable usage metadata in LiteLLM streaming
([f9569bb](f9569bbb1a))
- Fix issue with MCP tools throwing an error
([1a4261a](1a4261ad4b))
- Remove redundant `format` field from LiteLLM content objects
([489c39d](489c39db01))
- Update the contribution analysis tool to use original write mode
([54db3d4](54db3d4434))
- Fix agent evaluations detailed output rows wrapping
issue([4284c61](4284c61901))
- Update dependency version constraints to be based on PyPI
versions([0b1784e](0b1784e0e4))
##### Improvements
- Add Community Repo section to README
([432d30a](432d30af48))
- Undo adding MCP tools output schema to FunctionDeclaration
([92a7d19](92a7d19573))
- Refactor ADK README for clarity and consistency
([b0017ae](b0017aed44))
- Add support for reversed proxy in adk web
([a0df75b](a0df75b6fa))
- Avoid rendering empty columns as part of detailed results rendering of
eval results
([5cb35db](5cb35db921))
- Clear the behavior of disallow\_transfer\_to\_parent
([48ddd07](48ddd07894))
- Disable the scheduled execution for issue triage workflow
([a02f321](a02f321f1b))
- Include delimiter when matching events from parent nodes in content
processor
([b8a2b6c](b8a2b6c570))
- Improve Tau-bench ADK colab stability
([04dbc42](04dbc42e50))
- Implement ADK-based agent factory for Tau-bench
([c0c67c8](c0c67c8698))
- Add util to run ADK LLM Agent with simulation environment
([87f415a](87f415a7c3))
- Demonstrate CodeExecutor customization for environment setup
([8eeff35](8eeff35b35))
- Add sample agent for VertexAiCodeExecutor
([edfe553](edfe553942))
- Adds a new sample agent that demonstrates how to integrate PostgreSQL
databases using the Model Context Protocol (MCP)
([45a2168](45a2168e0e))
- Add example for using ADK with Fast MCP sampling
([d3796f9](d3796f9b33))
- Refactor gepa sample code and clean-up user demo
colab([63353b2](63353b2b74))
###
[`v1.17.0`](https://redirect.github.com/google/adk-python/blob/HEAD/CHANGELOG.md#1170-2025-10-22)
[Compare
Source](https://redirect.github.com/google/adk-python/compare/v1.16.0...v1.17.0)
##### Features
- **\[Core]**
- Add a service registry to provide a generic way to register custom
service implementations to be used in FastAPI server. See short
instruction
[here](https://redirect.github.com/google/adk-python/discussions/3175#discussioncomment-14745120).
([391628f](391628fcdc))
- Add the ability to rewind a session to before a previous invocation
([9dce06f](9dce06f9b0))
- Support resuming a parallel agent with multiple branches paused on
tool confirmation requests
([9939e0b](9939e0b087))
- Support content union as static instruction
([cc24d61](cc24d616f8))
- **\[Evals]**
- ADK cli allows developers to create an eval set and add an eval case
([ae139bb](ae139bb461))
- **\[Integrations]**
- Allow custom request and event converters in A2aAgentExecutor
([a17f3b2](a17f3b2e6d))
- **\[Observability]**
- Env variable for disabling llm\_request and llm\_response in spans
([e50f05a](e50f05a9fc))
- **\[Services]**
- Allow passing extra kwargs to create\_session of
VertexAiSessionService
([6a5eac0](6a5eac0bdc))
- Implement new methods in in-memory artifact service to support custom
metadata, artifact versions, etc.
([5a543c0](5a543c00df))
- Add create\_time and mime\_type to ArtifactVersion
([2c7a342](2c7a342593))
- Support returning all sessions when user id is none
([141318f](141318f775))
- **\[Tools]**
- Support additional headers for Google API toolset
([ed37e34](ed37e343f0))
- Introduces a new AgentEngineSandboxCodeExecutor class that supports
executing agent-generated code using the Vertex AI Code Execution
Sandbox API
([ee39a89](ee39a89110))
- Support dynamic per-request headers in MCPToolset
([6dcbb5a](6dcbb5aca6))
- Add `bypass_multi_tools_limit` option to GoogleSearchTool and
VertexAiSearchTool
([9a6b850](9a6b8507f0),
[6da7274](6da7274858))
- Extend `ReflectAndRetryToolPlugin` to support hallucinating function
calls
([f51380f](f51380f9ea))
- Add require\_confirmation param for MCP tool/toolset
([78e74b5](78e74b5bf2))
- **\[UI]**
- Granular per agent speech configuration
([409df13](409df1378f))
##### Bug Fixes
- Returns dict as result from McpTool to comply with BaseTool
expectations
([4df9263](4df926388b))
- Fixes the identity prompt to be one line
([7d5c6b9](7d5c6b9acf))
- Fix the broken langchain importing caused by their 1.0.0 release
([c850da3](c850da3a07))
- Fix BuiltInCodeExecutor to support visualizations
([ce3418a](ce3418a69d))
- Relax runner app-name enforcement and improve agent origin inference
([dc4975d](dc4975dea9))
- Improve error message when adk web is run in wrong directory
([4a842c5](4a842c5a13))
- Handle App objects in eval and graph endpoints
([0b73a69](0b73a6937b))
- Exclude `additionalProperties` from Gemini schemas
([307896a](307896aece))
- Overall eval status should be NOT\_EVALUATED if no invocations were
evaluated
([9fbed0b](9fbed0b15a))
- Create context cache only when prefix matches with previous request
([9e0b1fb](9e0b1fb62b))
- Handle `App` instances returned by `agent_loader.load_agent`
([847df16](847df1638c))
- Add support for file URIs in LiteLLM content conversion
([85ed500](85ed500871))
- Only exclude scores that are None
([998264a](998264a5b1))
- Better handling the A2A streaming tasks
([bddc70b](bddc70b5d0))
- Correctly populate context\_id in remote\_a2a\_agent library
([2158b3c](2158b3c915))
- Remove unnecessary Aclosing
([2f4f561](2f4f5611bd))
- Fix pickle data was truncated error in database session using MySql
([36c96ec](36c96ec5b3))
##### Improvements
- Improve hint message in agent loader
([fe1fc75](fe1fc75c15))
- Fixes MCPToolset --> McpToolset in various places
([d4dc645](d4dc645478))
- Add span for context caching handling and new cache creation
([a2d9f13](a2d9f13fa1))
- Checks gemini version for `2 and above` for gemini-builtin tools
([0df6759](0df67599c0))
- Refactor and fix state management in the session service
([8b3ed05](8b3ed059c2))
- Update agent builder instructions and remove run command details
([89344da](89344da813))
- Clarify how to use adk built-in tool in instruction
([d22b8bf](d22b8bf890))
- Delegate the agent state reset logic to LoopAgent
([bb1ea74](bb1ea74924))
- Adjust the instruction about default model
([214986e](214986ebeb))
- Migrate invocation\_context to callback\_context
([e2072af](e2072af69f))
- Correct the callback signatures
([fa84bcb](fa84bcb575))
- Set default for `bypass_multi_tools_limit` to False for
GoogleSearchTool and VertexAiSearchTool
([6da7274](6da7274858))
- Add more clear instruction to the doc updater agent about one PR for
each recommended change
([b21d0a5](b21d0a50d6))
- Add a guideline to avoid content deletion
([16b030b](16b030b2b2))
- Add a sample agent for the `ReflectAndRetryToolPlugin`
([9b8a4aa](9b8a4aad6f))
- Disable the scheduled execution for issue triage workflow
([bae2102](bae21027d9))
##### Documentation
- Format README.md for samples
([0bdba30](0bdba30263))
- Bump models in llms and llms-full to Gemini 2.5
([ce46386](ce4638651f))
- Update gemini\_llm\_connection.py - typo spelling correction
([e6e2767](e6e2767c39))
- Announce the first ADK Community Call in the README
([731bb90](731bb9078d))
###
[`v1.16.0`](https://redirect.github.com/google/adk-python/blob/HEAD/CHANGELOG.md#1160-2025-10-08)
[Compare
Source](https://redirect.github.com/google/adk-python/compare/v1.15.1...v1.16.0)
##### Features
- **\[Core]**
- Implementation of LLM context compaction
([e0dd06f](e0dd06ff04))
- Support pause and resume an invocation in ADK
([ce9c39f](ce9c39f5a8),
[2f1040f](2f1040f296),
[1ee01cc](1ee01cc05a),
[f005414](f005414895),
[fbf7576](fbf75761bb))
- **\[Models]**
- Add `citation_metadata` to `LlmResponse`
([3f28e30](3f28e30c6d))
- Add support for gemma model via gemini api
([2b5acb9](2b5acb98f5))
- **\[Tools]**
- Add `dry_run` functionality to BigQuery `execute_sql` tool
([960eda3](960eda3d1f))
- Add BigQuery analyze\_contribution tool
([4bb089d](4bb089d386))
- Spanner ADK toolset supports customizable template SQL and
parameterized SQL
([da62700](da62700d73))
- Support Oauth2 client credentials grant type
([5c6cdcd](5c6cdcd197))
- Add `ReflectRetryToolPlugin` to reflect from errors and retry with
different arguments when tool errors
([e55b894](e55b8946d6))
- Support using `VertexAiSearchTool` built-in tool with other tools in
the same agent
([4485379](4485379a04))
- Support using google search built-in tool with other tools in the same
agent
([d3148da](d3148dacc9))
- **\[Evals]**
- Add HallucinationsV1 evaluation metric
([8c73d29](8c73d29c75))
- Add Rubric based tool use metric
([c984b9e](c984b9e552))
- **\[UI]**
- Adds `adk web` options for custom logo
([822efe0](822efe0065))
- **\[Observability]**
- **otel:** Switch CloudTraceSpanExporter to telemetry.googleapis.com
([bd76b46](bd76b46ce2))
##### Bug Fixes
- Adapt to new computer use tool name in genai sdk 1.41.0
([c6dd444](c6dd444fc9))
- Add AuthConfig json serialization in vertex ai session service
([636def3](636def3687))
- Added more agent instructions for doc content changes
([7459962](745996212d))
- Convert argument to pydantic model when tool declares it accepts
pydantic model as argument
([571c802](571c802fba))
- Do not re-create `App` object when loader returns an `App`
([d5c46e4](d5c46e4960))
- Fix compaction logic
([3f2b457](3f2b457efd))
- Fix the instruction in workflow\_triage example agent
([8f3ca03](8f3ca0359e))
- Fixes a bug that causes intermittent `pydantic` validation errors when
uploading files
([e680063](e68006386f))
- Handle A2A Task Status Update Event when streaming in
remote\_a2a\_agent
([a5cf80b](a5cf80b952))
- Make compactor optional in Events Compaction Config and add a default
([3f4bd67](3f4bd67b49))
- Rename SlidingWindowCompactor to LlmEventSummarizer and refine its
docstring
([f1abdb1](f1abdb1938))
- Rollback compaction handling from \_get\_contents
([84f2f41](84f2f417f7))
- Set `max_output_tokens` for the agent builder
([2e2d61b](2e2d61b6fe))
- Set default response modality to AUDIO in run\_session
([68402bd](68402bda49))
- Update remote\_a2a\_agent to better handle streaming events and avoid
duplicate responses
([8e5f361](8e5f361264))
- Update the load\_artifacts tool so that the model can reliably call it
for follow up questions about the same artifact
([238472d](238472d083))
- Fix VertexAiSessionService base\_url override to preserve initialized
http\_options
([8110e41](8110e41b36),
[c51ea0b](c51ea0b52e))
- Handle `App` instances returned by `agent_loader.load_agent`
([847df16](847df1638c))
##### Improvements
- Migrate VertexAiSessionService to use Agent Engine SDK
([90d4c19](90d4c19c51))
- Migrate VertexAiMemoryBankService to use Agent Engine SDK
([d1efc84](d1efc8461e),
[97b950b](97b950b36b),
[83fd045](83fd045718))
- Add support for resolving $ref and $defs in OpenAPI schemas
([a239716](a239716930))
##### Documentation
- Update BigQuery samples README
([3021266](30212669ff))
</details>
---
### Configuration
📅 **Schedule**: Branch creation - At any time (no schedule defined),
Automerge - At any time (no schedule defined).
🚦 **Automerge**: Disabled by config. Please merge this manually once you
are satisfied.
♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the
rebase/retry checkbox.
🔕 **Ignore**: Close this PR and you won't be reminded about this update
again.
---
- [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check
this box
---
This PR was generated by [Mend Renovate](https://mend.io/renovate/).
View the [repository job
log](https://developer.mend.io/github/googleapis/genai-toolbox).
<!--renovate-debug:eyJjcmVhdGVkSW5WZXIiOiI0MS4xNTkuNCIsInVwZGF0ZWRJblZlciI6IjQxLjE1OS40IiwidGFyZ2V0QnJhbmNoIjoibWFpbiIsImxhYmVscyI6W119-->
Co-authored-by: Averi Kitsch <akitsch@google.com>
## Description
---
- This PR adds SingleStore database source and tools. The code is mostly
based on MySQL source and tools, and it uses the same go-mysql driver.
- https://github.com/singlestore-labs/singlestoredb-dev-image can be
used to deploy a test SingleStore instance. In this PR the default port
is set to 3308 so the command would be
```
docker run \
  -d --name singlestoredb-dev \
  -e ROOT_PASSWORD="YOUR SINGLESTORE ROOT PASSWORD" \
  -p 3308:3306 ghcr.io/singlestore-labs/singlestoredb-dev:latest
```
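Since the source reuses the go-mysql driver, a minimal connectivity check against that dev container might look like the sketch below (the DSN values are placeholders mirroring the command above, not part of this PR):
```
package main

import (
	"database/sql"
	"log"

	_ "github.com/go-sql-driver/mysql" // SingleStore speaks the MySQL wire protocol
)

func main() {
	// Placeholder DSN matching the dev container started above (host port 3308).
	dsn := "root:YOUR SINGLESTORE ROOT PASSWORD@tcp(127.0.0.1:3308)/"

	db, err := sql.Open("mysql", dsn)
	if err != nil {
		log.Fatalf("open: %v", err)
	}
	defer db.Close()

	// Ping verifies that the connection to the SingleStore instance works.
	if err := db.Ping(); err != nil {
		log.Fatalf("ping: %v", err)
	}
	log.Println("connected to SingleStore")
}
```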
## PR Checklist
---
> Thank you for opening a Pull Request! Before submitting your PR, there
are a
> few things you can do to make sure it goes smoothly:
- [x] Make sure you reviewed
[CONTRIBUTING.md](https://github.com/googleapis/genai-toolbox/blob/main/CONTRIBUTING.md)
- [x] Make sure to open an issue as a
[bug/issue](https://github.com/googleapis/langchain-google-alloydb-pg-python/issues/new/choose)
before writing your code! That way we can discuss the change, evaluate
designs, and agree on the general idea
- [x] Ensure the tests and linter pass
- [x] Code coverage does not decrease (if any source code was changed)
- [x] Appropriate docs were updated (if necessary)
- [x] Make sure to add `!` if this involve a breaking change
🛠️ Fixes https://github.com/googleapis/genai-toolbox/issues/1348
---------
Co-authored-by: Wenxin Du <117315983+duwenxin99@users.noreply.github.com>
## Description
This PR adds documentation for the new `tbadk` and its usage with ADK
Go
## PR Checklist
> Thank you for opening a Pull Request! Before submitting your PR, there
are a
> few things you can do to make sure it goes smoothly:
- [x] Make sure you reviewed
[CONTRIBUTING.md](https://github.com/googleapis/genai-toolbox/blob/main/CONTRIBUTING.md)
- [ ] Make sure to open an issue as a
[bug/issue](https://github.com/googleapis/genai-toolbox/issues/new/choose)
before writing your code! That way we can discuss the change, evaluate
designs, and agree on the general idea
- [x] Ensure the tests and linter pass
- [x] Code coverage does not decrease (if any source code was changed)
- [x] Appropriate docs were updated (if necessary)
- [x] Make sure to add `!` if this involve a breaking change
🛠️ Fixes #<issue_number_goes_here>
## Description
Add a read-only PostgreSQL custom list_schemas tool, that returns the
schemas present in the database excluding system and temporary schemas.
Returns the schema name, schema owner, grants, number of functions,
number of tables, and number of views within each schema.
<img width="1985" height="1043" alt="Screenshot 2025-10-20 at 7 45
45 PM"
src="https://github.com/user-attachments/assets/8c4f0bb8-587c-489a-8795-efa79e92b06f"
/>
<img width="3372" height="1694" alt="3NpZG7W6h3XGsM7"
src="https://github.com/user-attachments/assets/370b5440-cc48-4c4e-82ea-4fd508cbcf2b"
/>
> Should include a concise description of the changes (bug or feature),
it's
> impact, along with a summary of the solution
## PR Checklist
> Thank you for opening a Pull Request! Before submitting your PR, there
are a
> few things you can do to make sure it goes smoothly:
- [x] Make sure you reviewed
[CONTRIBUTING.md](https://github.com/googleapis/genai-toolbox/blob/main/CONTRIBUTING.md)
- [x] Make sure to open an issue as a
[bug/issue](https://github.com/googleapis/genai-toolbox/issues/new/choose)
before writing your code! That way we can discuss the change, evaluate
designs, and agree on the general idea
- [x] Ensure the tests and linter pass
- [x] Code coverage does not decrease (if any source code was changed)
- [x] Appropriate docs were updated (if necessary)
- [x] Make sure to add `!` if this involve a breaking change
🛠️ Fixes #<issue_number_goes_here>
Co-authored-by: Yuan Teoh <45984206+Yuan325@users.noreply.github.com>
Update bigquery test to include column order for SELECT statement.
Update mindsdb tests to drop the table before creating it. The whole
integration test run pauses when any one of the integration tests fails.
If the run pauses after `CREATE` and before `DROP`, the integration test
will fail when it is run again.
In general the tests should be parallelizable since they interact only
with a deterministic set of batches. The exception is list-batches,
especially with pagination, so that one runs sequentially.
This doesn't make much difference for the current set of tests, but in
the near future we will have tests that create batches, which take tens
of seconds to even start running.
Rearrange subtests to be primarily organized by tool, which is more
understandable and easier to filter with `-run`. Test helper methods can
still be called multiple times in different subtests for different
tools.
Sample test output showing the new structure:
```
--- PASS: TestServerlessSparkToolEndpoints (2.01s)
--- PASS: TestServerlessSparkToolEndpoints/list-batches (1.78s)
--- PASS: TestServerlessSparkToolEndpoints/list-batches/success (1.23s)
--- PASS: TestServerlessSparkToolEndpoints/list-batches/success/filtered (0.34s)
--- PASS: TestServerlessSparkToolEndpoints/list-batches/success/empty (0.40s)
--- PASS: TestServerlessSparkToolEndpoints/list-batches/success/omit_page_size (0.42s)
--- PASS: TestServerlessSparkToolEndpoints/list-batches/success/one_page (0.43s)
--- PASS: TestServerlessSparkToolEndpoints/list-batches/success/20_batches (0.44s)
--- PASS: TestServerlessSparkToolEndpoints/list-batches/success/two_pages (0.54s)
--- PASS: TestServerlessSparkToolEndpoints/list-batches/errors (0.00s)
--- PASS: TestServerlessSparkToolEndpoints/list-batches/errors/negative_page_size (0.01s)
--- PASS: TestServerlessSparkToolEndpoints/list-batches/errors/zero_page_size (0.01s)
--- PASS: TestServerlessSparkToolEndpoints/list-batches/auth (0.77s)
--- PASS: TestServerlessSparkToolEndpoints/list-batches/auth/no_auth_token (0.00s)
--- PASS: TestServerlessSparkToolEndpoints/list-batches/auth/invalid_auth_token (0.00s)
--- PASS: TestServerlessSparkToolEndpoints/list-batches/auth/valid_auth_token (0.18s)
--- PASS: TestServerlessSparkToolEndpoints/parallel-tool-tests (0.00s)
--- PASS: TestServerlessSparkToolEndpoints/parallel-tool-tests/get-batch (0.09s)
--- PASS: TestServerlessSparkToolEndpoints/parallel-tool-tests/get-batch/errors (0.00s)
--- PASS: TestServerlessSparkToolEndpoints/parallel-tool-tests/get-batch/errors/full_batch_name (0.01s)
--- PASS: TestServerlessSparkToolEndpoints/parallel-tool-tests/get-batch/errors/missing_batch (0.11s)
--- PASS: TestServerlessSparkToolEndpoints/parallel-tool-tests/get-batch/success (0.21s)
--- PASS: TestServerlessSparkToolEndpoints/parallel-tool-tests/get-batch/success/found_batch (0.11s)
--- PASS: TestServerlessSparkToolEndpoints/parallel-tool-tests/get-batch/auth (0.60s)
--- PASS: TestServerlessSparkToolEndpoints/parallel-tool-tests/get-batch/auth/invalid_auth_token (0.00s)
--- PASS: TestServerlessSparkToolEndpoints/parallel-tool-tests/get-batch/auth/no_auth_token (0.00s)
--- PASS: TestServerlessSparkToolEndpoints/parallel-tool-tests/get-batch/auth/valid_auth_token (0.11s)
```
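For illustration only, the subtest layout above could be expressed roughly like this sketch (helper bodies elided; the real integration test helpers differ):
```
package spark_test

import "testing"

func TestServerlessSparkToolEndpoints(t *testing.T) {
	// list-batches exercises pagination over the shared set of batches,
	// so its subtests run sequentially before anything else.
	t.Run("list-batches", func(t *testing.T) {
		t.Run("success", func(t *testing.T) { /* ... */ })
		t.Run("errors", func(t *testing.T) { /* ... */ })
		t.Run("auth", func(t *testing.T) { /* ... */ })
	})

	// The remaining per-tool subtests only read a deterministic set of
	// batches, so each one opts into parallel execution.
	t.Run("parallel-tool-tests", func(t *testing.T) {
		t.Run("get-batch", func(t *testing.T) {
			t.Parallel()
			t.Run("errors", func(t *testing.T) { /* ... */ })
			t.Run("success", func(t *testing.T) { /* ... */ })
			t.Run("auth", func(t *testing.T) { /* ... */ })
		})
	})
}
```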
This commit introduces a new `orderedmap` package to preserve the column
order of SQL query results when they are marshaled to JSON.
The default Go `json.Marshal` function sorts map keys, which was causing
the column order to be lost in the output of the database tools.
This commit updates the following tools to use the new `orderedmap`
package:
- `mysqlexecutesql`
- `mssqlexecutesql`
- `postgresexecutesql`
- `spannerexecutesql`
- `sqliteexecutesql`
- `bigqueryexecutesql`
A new test has been added to the `mysqlexecutesql` tool to verify that
the column order is preserved.
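As a rough illustration of the idea (not the actual `orderedmap` package in this repo), a map type that records insertion order and implements `json.Marshaler` could look like this:
```
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
)

// OrderedMap preserves insertion order of keys when marshaled to JSON.
type OrderedMap struct {
	keys   []string
	values map[string]any
}

func NewOrderedMap() *OrderedMap {
	return &OrderedMap{values: make(map[string]any)}
}

// Set records the key on first insertion so its position is remembered.
func (m *OrderedMap) Set(key string, value any) {
	if _, exists := m.values[key]; !exists {
		m.keys = append(m.keys, key)
	}
	m.values[key] = value
}

// MarshalJSON emits keys in insertion order instead of the sorted order
// that encoding/json uses for plain maps.
func (m *OrderedMap) MarshalJSON() ([]byte, error) {
	var buf bytes.Buffer
	buf.WriteByte('{')
	for i, k := range m.keys {
		if i > 0 {
			buf.WriteByte(',')
		}
		kb, err := json.Marshal(k)
		if err != nil {
			return nil, err
		}
		buf.Write(kb)
		buf.WriteByte(':')
		vb, err := json.Marshal(m.values[k])
		if err != nil {
			return nil, err
		}
		buf.Write(vb)
	}
	buf.WriteByte('}')
	return buf.Bytes(), nil
}

func main() {
	row := NewOrderedMap()
	row.Set("zeta", 1) // a plain map would sort this after "alpha"
	row.Set("alpha", 2)
	out, _ := json.Marshal(row)
	fmt.Println(string(out)) // {"zeta":1,"alpha":2}
}
```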
## Description
> Should include a concise description of the changes (bug or feature),
it's
> impact, along with a summary of the solution
## PR Checklist
> Thank you for opening a Pull Request! Before submitting your PR, there
are a
> few things you can do to make sure it goes smoothly:
- [ ] Make sure you reviewed
[CONTRIBUTING.md](https://github.com/googleapis/genai-toolbox/blob/main/CONTRIBUTING.md)
- [ ] Make sure to open an issue as a
[bug/issue](https://github.com/googleapis/genai-toolbox/issues/new/choose)
before writing your code! That way we can discuss the change, evaluate
designs, and agree on the general idea
- [ ] Ensure the tests and linter pass
- [ ] Code coverage does not decrease (if any source code was changed)
- [ ] Appropriate docs were updated (if necessary)
- [ ] Make sure to add `!` if this involve a breaking change
🛠️Fixes#1492
---------
Co-authored-by: google-labs-jules[bot] <161369871+google-labs-jules[bot]@users.noreply.github.com>
Co-authored-by: Yuan Teoh <yuanteoh@google.com>
Co-authored-by: Yuan Teoh <45984206+Yuan325@users.noreply.github.com>
## Description
Corrects an issue where the `cloud-monitoring-query-prometheus` tool
would fail to populate the `authRequired` field in its generated
manifest.
## PR Checklist
> Thank you for opening a Pull Request! Before submitting your PR, there
are a
> few things you can do to make sure it goes smoothly:
- [x] Make sure you reviewed
[CONTRIBUTING.md](https://github.com/googleapis/genai-toolbox/blob/main/CONTRIBUTING.md)
- [ ] Make sure to open an issue as a
[bug/issue](https://github.com/googleapis/genai-toolbox/issues/new/choose)
before writing your code! That way we can discuss the change, evaluate
designs, and agree on the general idea
- [x] Ensure the tests and linter pass
- [x] Code coverage does not decrease (if any source code was changed)
- [ ] Appropriate docs were updated (if necessary)
- [ ] Make sure to add `!` if this involve a breaking change
🛠️ Fixes #<issue_number_goes_here>
🚀 Add MindsDB Integration: Expand Toolbox to Hundreds of Datasources
Overview
This PR introduces comprehensive MindsDB integration to the Google GenAI
Toolbox, enabling SQL queries across hundreds of datasources through a
unified interface. MindsDB is the most widely adopted AI federated
database that automatically translates MySQL queries into REST APIs,
GraphQL, and native protocols.
🎯 Key Value for Google GenAI Toolbox Ecosystem
1. Massive Datasource Expansion
   - Before: Toolbox limited to ~15 traditional databases
   - After: Access to hundreds of datasources including Salesforce, Jira, GitHub, MongoDB, Gmail, Slack, and more
   - Impact: Dramatically expands the toolbox's reach and utility for enterprise users
2. Cross-Datasource Analytics
   - New Capability: Perform joins and analytics across different datasources seamlessly
   - Example: Join Salesforce opportunities with GitHub activity to correlate sales with development activity
   - Value: Enables comprehensive data analysis that was previously impossible
3. API Abstraction Layer
   - Innovation: Write standard SQL queries that automatically translate to any API
   - Benefit: Developers can query REST APIs, GraphQL, and native protocols using familiar SQL syntax
   - Impact: Reduces complexity and learning curve for accessing diverse datasources
4. ML Model Integration
   - Enhanced Capability: Use ML models as virtual tables for real-time predictions
   - Example: Query customer churn predictions directly through SQL
   - Value: Brings AI/ML capabilities into the standard SQL workflow
🔧 Technical Implementation
Source Layer
✅ New MindsDB source implementation using MySQL wire protocol
✅ Comprehensive test coverage with integration tests
✅ Updated existing MySQL tools to support MindsDB sources
✅ Created dedicated MindsDB tools for enhanced functionality
Tools Layer
✅ mindsdb-execute-sql: Direct SQL execution across federated datasources
✅ mindsdb-sql: Parameterized SQL queries with template support
✅ Backward compatibility with existing MySQL tools
Documentation & Configuration
✅ Comprehensive documentation with real-world examples
✅ Prebuilt configuration for easy setup
✅ Updated CLI help text and command-line options
📊 Supported Datasources
Business Applications
Salesforce (leads, opportunities, accounts)
Jira (issues, projects, workflows)
GitHub (repositories, commits, PRs)
Slack (channels, messages, teams)
HubSpot (contacts, companies, deals)
Databases & Storage
MongoDB (NoSQL collections as structured tables)
Redis (key-value stores)
Elasticsearch (search and analytics)
S3, filesystems, etc (file storage)
Communication & Email
Gmail/Outlook (emails, attachments)
Microsoft Teams (communications, files)
Discord (server data, messages)
🎯 Example Use Cases
Cross-Datasource Analytics
```
-- Join Salesforce opportunities with GitHub activity
SELECT
s.opportunity_name,
s.amount,
g.repository_name,
COUNT(g.commits) as commit_count
FROM salesforce.opportunities s
JOIN github.repositories g ON s.account_id = g.owner_id
WHERE s.stage = 'Closed Won';
```
Email & Communication Analysis
```
-- Analyze email patterns with Slack activity
SELECT
e.sender,
e.subject,
s.channel_name,
COUNT(s.messages) as message_count
FROM gmail.emails e
JOIN slack.messages s ON e.sender = s.user_name
WHERE e.date >= '2024-01-01';
```
🚀 Benefits for Google GenAI Toolbox
Enterprise Adoption: Enables access to enterprise datasources
(Salesforce, Jira, etc.)
Developer Productivity: Familiar SQL interface for any datasource
AI/ML Integration: Seamless integration of ML models into SQL workflows
Scalability: Single interface for hundreds of datasources
Competitive Advantage: Unique federated database capabilities in the
toolbox ecosystem
📈 Impact Metrics
Datasource Coverage: +1000% increase in supported datasources
API Abstraction: Eliminates need to learn individual API syntaxes
Cross-Platform Analytics: Enables previously impossible data
correlations
ML Integration: Brings AI capabilities into standard SQL workflows
🔗 Resources
MindsDB Documentation
MindsDB GitHub
Updated Toolbox Documentation
✅ Testing
✅ Unit tests for MindsDB source implementation
✅ Integration tests with real datasource examples
✅ Backward compatibility with existing MySQL tools
✅ Documentation examples tested and verified
This integration transforms the Google GenAI Toolbox from a traditional
database tool into a comprehensive federated data platform, enabling
users to query and analyze data across their entire technology stack
through a unified SQL interface.
---------
Co-authored-by: duwenxin <duwenxin@google.com>
Co-authored-by: setohe0909 <setohe.09@gmail.com>
Co-authored-by: Kurtis Van Gent <31518063+kurtisvg@users.noreply.github.com>
Co-authored-by: Wenxin Du <117315983+duwenxin99@users.noreply.github.com>
Co-authored-by: Yuan Teoh <45984206+Yuan325@users.noreply.github.com>
## Description
The order of parameters in alloydb_ai_nl.execute_nl_query changed, which
broke the alloydbainl tool. This adds named parameters to the statement
in the tool, which fixes this.
This will be a breaking change for existing users that defined their
natural language configuration with the `create_configure` operation.
The `execute_nl_query` input argument names and order were updated
recently, so users that define their configuration with the latest
instructions cannot use Toolbox without this change. This update is
unavoidable.
Previously, users would create the configuration with the following:
```
CALL google_ml.create_model(model_id => 'gemini-2_0_flash', ...);
SELECT alloydb_ai_nl.g_manage_configuration(
'create_configuration', -- operation
'my_nl_config', -- configuration_id_in
'gemini-2_0_flash' -- model_id_in
);
SELECT alloydb_ai_nl.g_manage_configuration(
operation => 'register_table_view',
configuration_id_in => 'my_nl_config',
table_views_in=>'{auth_psv}');
```
Currently, users create the configuration with the following:
```
SELECT alloydb_ai_nl.g_create_configuration(configuration_id =>'my_nl_config');
SELECT alloydb_ai_nl.g_manage_configuration(
operation => 'register_table_view',
configuration_id_in => 'my_nl_config',
table_views_in=>'{auth_psv}'
);
```
This PR also updates the nl_question from "return 1" to "return the
number 1" to provide more context to the model.
A new `ainl_update_testing` database was created with the new NL
configuration so that existing integration tests are not broken before
this PR is merged. Once it is merged, the existing `test_database`
database will be updated with the new configuration and the integration
test's database will be switched back.
## PR Checklist
> Thank you for opening a Pull Request! Before submitting your PR, there
are a
> few things you can do to make sure it goes smoothly:
- [X] Make sure you reviewed
[CONTRIBUTING.md](https://github.com/googleapis/genai-toolbox/blob/main/CONTRIBUTING.md)
- [X] Make sure to open an issue as a
[bug/issue](https://github.com/googleapis/genai-toolbox/issues/new/choose)
before writing your code! That way we can discuss the change, evaluate
designs, and agree on the general idea
- [X] Ensure the tests and linter pass
- [X] Code coverage does not decrease (if any source code was changed)
- [X] Appropriate docs were updated (if necessary)
- [X] Make sure to add `!` if this involve a breaking change
🛠️Fixes#1752
---------
Co-authored-by: Yuan Teoh <yuanteoh@google.com>
Co-authored-by: Yuan Teoh <45984206+Yuan325@users.noreply.github.com>
## Description
Update CONTRIBUTING.md with correct file name conventions
## PR Checklist
- [x] Make sure you reviewed
[CONTRIBUTING.md](https://github.com/googleapis/genai-toolbox/blob/main/CONTRIBUTING.md)
- [ ] Make sure to open an issue as a
[bug/issue](https://github.com/googleapis/genai-toolbox/issues/new/choose)
before writing your code! That way we can discuss the change, evaluate
designs, and agree on the general idea
- [x] Ensure the tests and linter pass
- [x] Code coverage does not decrease (if any source code was changed)
- [x] Appropriate docs were updated (if necessary)
- [x] Make sure to add `!` if this involve a breaking change
Co-authored-by: Averi Kitsch <akitsch@google.com>
Add client cache and automatic cache cleanup.
The cache is managed by a map with OAuth access tokens as the keys. Upon
a user tool invocation, the client is fetched from the existing cache or
a new one is created.
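A minimal sketch of such a token-keyed cache with periodic cleanup is shown below; the generic `C` client type, the TTL, and all names are illustrative assumptions rather than the toolbox's actual implementation:
```
package clientcache

import (
	"sync"
	"time"
)

// cachedClient pairs a client with the time it was last used, so stale
// entries can be evicted. C stands in for whatever API client the source builds.
type cachedClient[C any] struct {
	client   C
	lastUsed time.Time
}

// Cache keys clients by OAuth access token and evicts entries that have
// not been used within ttl.
type Cache[C any] struct {
	mu      sync.Mutex
	entries map[string]*cachedClient[C]
	ttl     time.Duration
}

func New[C any](ttl time.Duration) *Cache[C] {
	c := &Cache[C]{entries: make(map[string]*cachedClient[C]), ttl: ttl}
	go c.cleanupLoop()
	return c
}

// Get returns the cached client for the token, or builds and caches a new one.
func (c *Cache[C]) Get(token string, build func() (C, error)) (C, error) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if e, ok := c.entries[token]; ok {
		e.lastUsed = time.Now()
		return e.client, nil
	}
	client, err := build()
	if err != nil {
		var zero C
		return zero, err
	}
	c.entries[token] = &cachedClient[C]{client: client, lastUsed: time.Now()}
	return client, nil
}

// cleanupLoop periodically removes entries that have gone unused past the TTL.
func (c *Cache[C]) cleanupLoop() {
	for range time.Tick(c.ttl) {
		c.mu.Lock()
		for token, e := range c.entries {
			if time.Since(e.lastUsed) > c.ttl {
				delete(c.entries, token)
			}
		}
		c.mu.Unlock()
	}
}
```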
## Description
The debug context logger does not accept value placeholders, so
statements must first be formatted into a string.
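For illustration, using the standard library's `log/slog` as a stand-in for the toolbox's context logger (an assumption), the statement is formatted into a string before being passed to `DebugContext`:
```
package main

import (
	"context"
	"fmt"
	"log/slog"
	"os"
)

func main() {
	// Stand-in logger; the toolbox uses its own context logger interface.
	logger := slog.New(slog.NewTextHandler(os.Stderr,
		&slog.HandlerOptions{Level: slog.LevelDebug}))
	ctx := context.Background()

	statement := "SELECT * FROM users WHERE id = $1"

	// DebugContext does not interpolate value placeholders itself, so the
	// final message is built as a plain string first.
	logger.DebugContext(ctx, fmt.Sprintf("executing statement: %s", statement))
}
```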
## PR Checklist
> Thank you for opening a Pull Request! Before submitting your PR, there
are a
> few things you can do to make sure it goes smoothly:
- [ ] Make sure you reviewed
[CONTRIBUTING.md](https://github.com/googleapis/genai-toolbox/blob/main/CONTRIBUTING.md)
- [ ] Make sure to open an issue as a
[bug/issue](https://github.com/googleapis/genai-toolbox/issues/new/choose)
before writing your code! That way we can discuss the change, evaluate
designs, and agree on the general idea
- [ ] Ensure the tests and linter pass
- [ ] Code coverage does not decrease (if any source code was changed)
- [ ] Appropriate docs were updated (if necessary)
- [ ] Make sure to add `!` if this involve a breaking change
🛠️ Fixes #<issue_number_goes_here>
## Description
Spanner source page in docs was missing spanner-list-tables tool.
## PR Checklist
- [x] Make sure you reviewed
[CONTRIBUTING.md](https://github.com/googleapis/genai-toolbox/blob/main/CONTRIBUTING.md)
- [x] Make sure to open an issue as a
[bug/issue](https://github.com/googleapis/genai-toolbox/issues/new/choose)
before writing your code! That way we can discuss the change, evaluate
designs, and agree on the general idea
- [x] Ensure the tests and linter pass
- [x] Code coverage does not decrease (if any source code was changed)
- [x] Appropriate docs were updated (if necessary)
- [x] Make sure to add `!` if this involve a breaking change
🛠️Fixes#1836
Co-authored-by: Averi Kitsch <akitsch@google.com>
## Description
* Add new database metadata tools to list.
* Break the list down into useful sections.
* Provide a link to the generated prebuilt tools page.
## Description
Add new tools to get metadata from databases through Looker
* get_connections
* get_connection_schemas
* get_connection_databases
* get_connection_tables
* get_connection_table_columns
This PR contains the following updates:
| Package | Type | Update | Change |
|---|---|---|---|
|
[github.com/googleapis/mcp-toolbox-sdk-go](https://redirect.github.com/googleapis/mcp-toolbox-sdk-go)
| require | digest | `eb73e0c` -> `f1f6a9f` |
---
### Configuration
📅 **Schedule**: Branch creation - At any time (no schedule defined),
Automerge - At any time (no schedule defined).
🚦 **Automerge**: Disabled by config. Please merge this manually once you
are satisfied.
♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the
rebase/retry checkbox.
🔕 **Ignore**: Close this PR and you won't be reminded about this update
again.
---
- [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check
this box
---
This PR was generated by [Mend Renovate](https://mend.io/renovate/).
View the [repository job
log](https://developer.mend.io/github/googleapis/genai-toolbox).
<!--renovate-debug:eyJjcmVhdGVkSW5WZXIiOiI0MS4xNTkuNCIsInVwZGF0ZWRJblZlciI6IjQxLjE1OS40IiwidGFyZ2V0QnJhbmNoIjoibWFpbiIsImxhYmVscyI6W119-->
Co-authored-by: Averi Kitsch <akitsch@google.com>
## Description
Invalid SQL, such as selecting from nonexistent tables or granting bad
permissions, resulted in a null response because the error was not
returned.
## PR Checklist
> Thank you for opening a Pull Request! Before submitting your PR, there
are a
> few things you can do to make sure it goes smoothly:
- [ ] Make sure you reviewed
[CONTRIBUTING.md](https://github.com/googleapis/genai-toolbox/blob/main/CONTRIBUTING.md)
- [ ] Make sure to open an issue as a
[bug/issue](https://github.com/googleapis/genai-toolbox/issues/new/choose)
before writing your code! That way we can discuss the change, evaluate
designs, and agree on the general idea
- [ ] Ensure the tests and linter pass
- [ ] Code coverage does not decrease (if any source code was changed)
- [ ] Appropriate docs were updated (if necessary)
- [ ] Make sure to add `!` if this involve a breaking change
🛠️Fixes#1638
## Description
This change adds service account impersonation support to BigQuery.
Users can now optionally supply an `impersonateServiceAccount` field in
their `bigquery-source` config to enable impersonation.
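As a sketch of what impersonation can look like on the client side (using `google.golang.org/api/option`; the project and service account values below are placeholders, and the toolbox's actual wiring may differ):
```
package main

import (
	"context"
	"log"

	"cloud.google.com/go/bigquery"
	"google.golang.org/api/option"
)

func main() {
	ctx := context.Background()

	// Placeholder values; the toolbox reads these from the bigquery-source config.
	project := "my-project-id"
	impersonateSA := "toolbox-sa@my-project-id.iam.gserviceaccount.com"

	// ImpersonateCredentials wraps the default credentials so that requests
	// are issued as the target service account.
	client, err := bigquery.NewClient(ctx, project,
		option.ImpersonateCredentials(impersonateSA))
	if err != nil {
		log.Fatalf("bigquery.NewClient: %v", err)
	}
	defer client.Close()
	log.Println("BigQuery client created with impersonated credentials")
}
```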
---
> Should include a concise description of the changes (bug or feature),
it's
> impact, along with a summary of the solution
## PR Checklist
---
> Thank you for opening a Pull Request! Before submitting your PR, there
are a
> few things you can do to make sure it goes smoothly:
- [x] Make sure you reviewed
[CONTRIBUTING.md](https://github.com/googleapis/genai-toolbox/blob/main/CONTRIBUTING.md)
- [x] Make sure to open an issue as a
[bug/issue](https://github.com/googleapis/genai-toolbox/issues/new/choose)
before writing your code! That way we can discuss the change, evaluate
designs, and agree on the general idea
- [x] Ensure the tests and linter pass
- [x] Code coverage does not decrease (if any source code was changed)
- [x] Appropriate docs were updated (if necessary)
- [x] Make sure to add `!` if this involve a breaking change
🛠️Fixes#906
## Description
Removing the `ipAddress` field since it is not an input for the Cloud
SQL SQL Server source.
The variable is kept in the Source's config but removed from everywhere
else in the code. This will PREVENT a breaking change since the
validator won't flag it as an "extra field".
**Will have to update the following as well:**
(1) Cloud docs
https://cloud.google.com/sql/docs/sqlserver/pre-built-tools-with-mcp-toolbox
(2) gemini-cli-extensions
https://github.com/gemini-cli-extensions/cloud-sql-sqlserver
## PR Checklist
> Thank you for opening a Pull Request! Before submitting your PR, there
are a
> few things you can do to make sure it goes smoothly:
- [x] Make sure you reviewed
[CONTRIBUTING.md](https://github.com/googleapis/genai-toolbox/blob/main/CONTRIBUTING.md)
- [x] Make sure to open an issue as a
[bug/issue](https://github.com/googleapis/genai-toolbox/issues/new/choose)
before writing your code! That way we can discuss the change, evaluate
designs, and agree on the general idea
- [x] Ensure the tests and linter pass
- [x] Code coverage does not decrease (if any source code was changed)
- [x] Appropriate docs were updated (if necessary)
- [x] Make sure to add `!` if this involve a breaking change
🛠️Fixes#1549
## Description
> Should include a concise description of the changes (bug or feature),
it's
> impact, along with a summary of the solution
## PR Checklist
> Thank you for opening a Pull Request! Before submitting your PR, there
are a
> few things you can do to make sure it goes smoothly:
- [ ] Make sure you reviewed
[CONTRIBUTING.md](https://github.com/googleapis/genai-toolbox/blob/main/CONTRIBUTING.md)
- [ ] Make sure to open an issue as a
[bug/issue](https://github.com/googleapis/genai-toolbox/issues/new/choose)
before writing your code! That way we can discuss the change, evaluate
designs, and agree on the general idea
- [ ] Ensure the tests and linter pass
- [ ] Code coverage does not decrease (if any source code was changed)
- [ ] Appropriate docs were updated (if necessary)
- [ ] Make sure to add `!` if this involve a breaking change
🛠️ Fixes #<issue_number_goes_here>
---------
Co-authored-by: Yuan Teoh <45984206+Yuan325@users.noreply.github.com>
## Description
---
This introduces a breaking change. The bigquery-get-dataset-info tool
will now enforce the allowed datasets setting from its BigQuery source
configuration. Previously, this setting had no effect on the tool.
## PR Checklist
---
> Thank you for opening a Pull Request! Before submitting your PR, there
are a
> few things you can do to make sure it goes smoothly:
- [ ] Make sure you reviewed
[CONTRIBUTING.md](https://github.com/googleapis/genai-toolbox/blob/main/CONTRIBUTING.md)
- [ ] Make sure to open an issue as a
[bug/issue](https://github.com/googleapis/genai-toolbox/issues/new/choose)
before writing your code! That way we can discuss the change, evaluate
designs, and agree on the general idea
- [ ] Ensure the tests and linter pass
- [ ] Code coverage does not decrease (if any source code was changed)
- [ ] Appropriate docs were updated (if necessary)
- [ ] Make sure to add `!` if this involve a breaking change
🛠️ Part of https://github.com/googleapis/genai-toolbox/issues/873
This PR contains the following updates:
| Package | Change | Age | Confidence |
|---|---|---|---|
| [google-genai](https://redirect.github.com/googleapis/python-genai) |
`==1.46.0` -> `==1.47.0` |
[](https://docs.renovatebot.com/merge-confidence/)
|
[](https://docs.renovatebot.com/merge-confidence/)
|
---
### Release Notes
<details>
<summary>googleapis/python-genai (google-genai)</summary>
###
[`v1.47.0`](https://redirect.github.com/googleapis/python-genai/blob/HEAD/CHANGELOG.md#1470-2025-10-29)
[Compare
Source](https://redirect.github.com/googleapis/python-genai/compare/v1.46.0...v1.47.0)
##### Features
- Add safety\_filter\_level and person\_generation for Imagen upscaling
([6196b1b](6196b1b425))
- Add support for preference optimization tuning in the SDK.
([4540f9d](4540f9d25f))
- Pass file name to the backend when uploading with a file path
([4fa2edd](4fa2edd927))
- Support default global location when not using api key with vertexai
backend
([6340ce0](6340ce0cf0))
- Support retries in API requests
([ac70ecd](ac70ecdb02))
##### Bug Fixes
- Check environment Vertex AI api key for credential precedence
([9bd758c](9bd758c50c))
- Correct pydantic version range (bytes fields are broken with
pydantic<=2.8).
([d24cb56](d24cb5634e))
- Make `__del__` methods more robust in `_api_client` and `client`.
([64cab58](64cab58b38))
- Setting custom httpx async client will ensure using httpx client even
if aiohttp is installed
([7bd1bde](7bd1bdef36))
- Support custom\_base\_url for Live and other APIs when
project/location are unset and even when project/location are set in the
environment, and avoid Automatic Default Credentials
([7bd1bde](7bd1bdef36))
##### Documentation
- Add docstring for classes and fields that are not supported in Gemini
or Vertex API
([4a6c6af](4a6c6af190))
- Add docstring for enum classes that are not supported in Gemini or
Vertex API
([909f26b](909f26b926))
- Add documentation for the retry behavior
([ff12b46](ff12b46294))
- Update Codegen Instructions to include newer models and use consistent
formatting.
([f0b0a94](f0b0a94aa1))
- Update README.md and index.rst to use consistent spacing for Python
Samples
([2e5aa1f](2e5aa1f933))
</details>
---
### Configuration
📅 **Schedule**: Branch creation - At any time (no schedule defined),
Automerge - At any time (no schedule defined).
🚦 **Automerge**: Disabled by config. Please merge this manually once you
are satisfied.
♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the
rebase/retry checkbox.
🔕 **Ignore**: Close this PR and you won't be reminded about this update
again.
---
- [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check
this box
---
This PR was generated by [Mend Renovate](https://mend.io/renovate/).
View the [repository job
log](https://developer.mend.io/github/googleapis/genai-toolbox).
<!--renovate-debug:eyJjcmVhdGVkSW5WZXIiOiI0MS4xNTkuNCIsInVwZGF0ZWRJblZlciI6IjQxLjE1OS40IiwidGFyZ2V0QnJhbmNoIjoibWFpbiIsImxhYmVscyI6W119-->
Co-authored-by: Averi Kitsch <akitsch@google.com>
This PR contains the following updates:
| Package | Type | Update | Change |
|---|---|---|---|
|
[google.golang.org/genproto](https://redirect.github.com/googleapis/go-genproto)
| require | digest | `49b9836` -> `3a174f9` |
---
### Configuration
📅 **Schedule**: Branch creation - At any time (no schedule defined),
Automerge - At any time (no schedule defined).
🚦 **Automerge**: Disabled by config. Please merge this manually once you
are satisfied.
♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the
rebase/retry checkbox.
🔕 **Ignore**: Close this PR and you won't be reminded about this update
again.
---
- [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check
this box
---
This PR was generated by [Mend Renovate](https://mend.io/renovate/).
View the [repository job
log](https://developer.mend.io/github/googleapis/genai-toolbox).
<!--renovate-debug:eyJjcmVhdGVkSW5WZXIiOiI0MS4xNTkuNCIsInVwZGF0ZWRJblZlciI6IjQxLjE1OS40IiwidGFyZ2V0QnJhbmNoIjoibWFpbiIsImxhYmVscyI6W119-->
Co-authored-by: Yuan Teoh <45984206+Yuan325@users.noreply.github.com>
## Description
This PR introduces a new `excludedValues` field for tool parameters,
enhancing validation capabilities. This field allows developers to
specify a list of values that are not allowed for a parameter.
The excludedValues field supports both exact value matching and regular
expression matching.
The changes include:
- Updating the tool parameter documentation to include the
excludedValues field.
- Adding the excludedValues field to the CommonParameter struct.
- Implementing the logic to check for excluded values in the Parse
method of each parameter type.
- Updating the MatchStringOrRegex function to support non-string inputs
by converting them to strings before regex matching. This makes the
allowedValues and excludedValues checks more robust.
- Adding unit tests for allowedValues to verify the MatchStringOrRegex
change on parameters.
- Adding unit tests to verify the excludedValues functionality.
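The exact-or-regex matching described above might look roughly like this sketch (names and behavior here are illustrative, not the toolbox's actual `MatchStringOrRegex`):
```
package main

import (
	"fmt"
	"regexp"
)

// matchStringOrRegex reports whether value matches any entry either exactly
// or as a regular expression (exact match is tried first, then regex).
func matchStringOrRegex(value string, patterns []string) bool {
	for _, p := range patterns {
		if value == p {
			return true
		}
		if re, err := regexp.Compile(p); err == nil && re.MatchString(value) {
			return true
		}
	}
	return false
}

func main() {
	excluded := []string{"drop table", "^admin.*"}
	for _, v := range []string{"admin_user", "regular_user"} {
		if matchStringOrRegex(v, excluded) {
			fmt.Printf("%q is rejected by excludedValues\n", v)
		} else {
			fmt.Printf("%q is allowed\n", v)
		}
	}
}
```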
## PR Checklist
- [x] Make sure you reviewed
[CONTRIBUTING.md](https://github.com/googleapis/genai-toolbox/blob/main/CONTRIBUTING.md)
- [x] Make sure to open an issue as a
[bug/issue](https://github.com/googleapis/genai-toolbox/issues/new/choose)
before writing your code! That way we can discuss the change, evaluate
designs, and agree on the general idea
- [x] Ensure the tests and linter pass
- [x] Code coverage does not decrease (if any source code was changed)
- [x] Appropriate docs were updated (if necessary)
- [ ] Make sure to add `!` if this involve a breaking change
🛠️Fixes#1792
Co-authored-by: Averi Kitsch <akitsch@google.com>
* **tools/postgres:** Add allowed-origins flag ([#1984](https://github.com/googleapis/genai-toolbox/issues/1984)) ([862868f](https://github.com/googleapis/genai-toolbox/commit/862868f28476ea981575ce412faa7d6a03138f31))
* **tools/postgres:** Add list-query-stats and get-column-cardinality functions ([#1976](https://github.com/googleapis/genai-toolbox/issues/1976)) ([9f76026](https://github.com/googleapis/genai-toolbox/commit/9f760269253a8cc92a357e995c6993ccc4a0fb7b))
* **tools/spanner:** Add spanner list graphs to prebuiltconfigs ([#2056](https://github.com/googleapis/genai-toolbox/issues/2056)) ([0e7fbf4](https://github.com/googleapis/genai-toolbox/commit/0e7fbf465c488397aa9d8cab2e55165fff4eb53c))
* Support alternate accessToken header name ([#1968](https://github.com/googleapis/genai-toolbox/issues/1968)) ([18017d6](https://github.com/googleapis/genai-toolbox/commit/18017d6545335a6fc1c472617101c35254d9a597))
* Support for annotations ([#2007](https://github.com/googleapis/genai-toolbox/issues/2007)) ([ac21335](https://github.com/googleapis/genai-toolbox/commit/ac21335f4e88ca52d954d7f8143a551a35661b94))
* **tool/mssql:** Set default host and port for MSSQL source ([#1943](https://github.com/googleapis/genai-toolbox/issues/1943)) ([7a9cc63](https://github.com/googleapis/genai-toolbox/commit/7a9cc633768d9ae9a7ff8230002da69d6a36ca86))
* Add import for firebirdsql ([#2045](https://github.com/googleapis/genai-toolbox/issues/2045)) ([fb7aae9](https://github.com/googleapis/genai-toolbox/commit/fb7aae9d35b760d3471d8379642f835a0d84ec41))
* Correct FAQ to mention HTTP tools ([#2036](https://github.com/googleapis/genai-toolbox/issues/2036)) ([7b44237](https://github.com/googleapis/genai-toolbox/commit/7b44237d4a21bfbf8d3cebe4d32a15affa29584d))
* Format BigQuery numeric output as decimal strings ([#2084](https://github.com/googleapis/genai-toolbox/issues/2084)) ([155bff8](https://github.com/googleapis/genai-toolbox/commit/155bff80c1da4fae1e169e425fd82e1dc3373041))
* Set default annotations for tools in code if annotation not provided in yaml ([#2049](https://github.com/googleapis/genai-toolbox/issues/2049)) ([565460c](https://github.com/googleapis/genai-toolbox/commit/565460c4ea8953dbe80070a8e469f957c0f7a70c))
* **tools/alloydb-postgres-list-tables:** Exclude google_ml schema from list_tables ([#2046](https://github.com/googleapis/genai-toolbox/issues/2046)) ([a03984c](https://github.com/googleapis/genai-toolbox/commit/a03984cc15254c928f30085f8fa509ded6a79a0c))
* **tools/alloydbcreateuser:** Remove duplication of project param ([#2028](https://github.com/googleapis/genai-toolbox/issues/2028)) ([730ac6d](https://github.com/googleapis/genai-toolbox/commit/730ac6d22805fd50b4a675b74c1865f4e7689e7c))
* **tools/mongodb:** Remove `required` tag from the `canonical` field ([#2099](https://github.com/googleapis/genai-toolbox/issues/2099)) ([744214e](https://github.com/googleapis/genai-toolbox/commit/744214e04cd12b11d166e6eb7da8ce4714904abc))
* **tools/spanner-list-tables:** Unmarshal `object_details` json string into map to make response have nested json ([#1894](https://github.com/googleapis/genai-toolbox/issues/1894)) ([446d62a](https://github.com/googleapis/genai-toolbox/commit/446d62acd995d5128f52e9db254dd1c7138227c6))
### Features
* **tools/postgres:** Add `long_running_transactions`, `list_locks` and `replication_stats` tools ([#1751](https://github.com/googleapis/genai-toolbox/issues/1751)) ([5abad5d](https://github.com/googleapis/genai-toolbox/commit/5abad5d56c6cc5ba86adc5253b948bf8230fa830))
* Added prompt support for toolbox ([#1798](https://github.com/googleapis/genai-toolbox/issues/1798)) ([cd56ea4](https://github.com/googleapis/genai-toolbox/commit/cd56ea44fbdd149fcb92324e70ee36ac747635db))
* **source/alloydb, source/cloud-sql-postgres,source/cloud-sql-mysql,source/cloud-sql-mssql:** Use project from env for alloydb and cloud sql control plane tools ([#1588](https://github.com/googleapis/genai-toolbox/issues/1588)) ([12bdd95](https://github.com/googleapis/genai-toolbox/commit/12bdd954597e49d3ec6b247cc104584c5a4d1943))
* **source/mysql:** Set default host and port for MySQL source ([#1922](https://github.com/googleapis/genai-toolbox/issues/1922)) ([2c228ef](https://github.com/googleapis/genai-toolbox/commit/2c228ef4f2d4cb8dfc41e845466bfe3566d141a1))
* **source/Postgresql:** Set default host and port for Postgresql source ([#1927](https://github.com/googleapis/genai-toolbox/issues/1927)) ([7e6e88a](https://github.com/googleapis/genai-toolbox/commit/7e6e88a21f2b9b60e0d645cdde33a95892d31a04))
* **tools/alloydbainl:** update AlloyDB AI NL statement order ([#1753](https://github.com/googleapis/genai-toolbox/issues/1753))
* **tools/bigquery-analyze-contribution:** Add allowed dataset support ([#1675](https://github.com/googleapis/genai-toolbox/issues/1675)) ([ef28e39](https://github.com/googleapis/genai-toolbox/commit/ef28e39e90b21287ca8e69b99f4e792c78e9d31f))
* **tools/bigquery-get-dataset-info:** add allowed dataset support ([#1654](https://github.com/googleapis/genai-toolbox/issues/1654))
### Features
* Support `excludeValues` for parameters ([#1818](https://github.com/googleapis/genai-toolbox/issues/1818)) ([a8e98dc](https://github.com/googleapis/genai-toolbox/commit/a8e98dc99d208e8b37a3bc4d1ab4749b5154ed36))
* **elasticsearch:** Add Elasticsearch source and tools ([#1109](https://github.com/googleapis/genai-toolbox/issues/1109)) ([5367285](https://github.com/googleapis/genai-toolbox/commit/5367285e91ddda99ae7261d8ed4b025f975d1453))
* **mindsdb:** Add MindsDB Source and Tools ([#878](https://github.com/googleapis/genai-toolbox/issues/878)) ([1b2cca9](https://github.com/googleapis/genai-toolbox/commit/1b2cca9faa6f7bacbeb5d25634ce3bf61067de16))
* **cloud-healthcare:** Add support for healthcare source, tool and prebuilt config ([#1853](https://github.com/googleapis/genai-toolbox/issues/1853)) ([1f833fb](https://github.com/googleapis/genai-toolbox/commit/1f833fb1a124e23819ddfec476f2e63ef31dd22f))
* **singlestore:** Add SingleStore Source and Tools ([#1333](https://github.com/googleapis/genai-toolbox/issues/1333)) ([40b9dba](https://github.com/googleapis/genai-toolbox/commit/40b9dbab088add05a66995abb1c71a9345b8f7e5))
* **source/bigquery:** Add client cache for user-passed credentials ([#1119](https://github.com/googleapis/genai-toolbox/issues/1119)) ([cf7012a](https://github.com/googleapis/genai-toolbox/commit/cf7012a82bb5c77309da3a26e563a5015786aa69))
* **source/bigquery:** Add service account impersonation support for bigquery ([#1641](https://github.com/googleapis/genai-toolbox/issues/1641)) ([e09d182](https://github.com/googleapis/genai-toolbox/commit/e09d182f88bf697a169428f477aebc9f1741e35f))
* **tools/bigquery-analyze-contribution:** Add allowed dataset support ([#1675](https://github.com/googleapis/genai-toolbox/issues/1675)) ([ef28e39](https://github.com/googleapis/genai-toolbox/commit/ef28e39e90b21287ca8e69b99f4e792c78e9d31f))
* **tools/bigquery-get-dataset-info:** Add allowed dataset support ([#1654](https://github.com/googleapis/genai-toolbox/issues/1654)) ([a2006ad](https://github.com/googleapis/genai-toolbox/commit/a2006ad57718ebad3de5c6850e9c6a5a763808ec))
* **tools/looker-run-dashboard:** New `run_dashboard` tool ([#1858](https://github.com/googleapis/genai-toolbox/issues/1858)) ([30857c2](https://github.com/googleapis/genai-toolbox/commit/30857c2294bb14961d3be49e2c368c69ecf844ec))
* **tools/looker-run-look:** Modify run_look to show query origin ([#1860](https://github.com/googleapis/genai-toolbox/issues/1860)) ([991e539](https://github.com/googleapis/genai-toolbox/commit/991e539f9c7978fa618ada3179a0b656c33ff501))
* **tools/looker:** Tools to retrieve the connections, schemas, databases, and column metadata from a looker system. ([#1804](https://github.com/googleapis/genai-toolbox/issues/1804)) ([d7d1b03](https://github.com/googleapis/genai-toolbox/commit/d7d1b03f3b746ed748d67f67e833457d55c846ab))
* **tools/mongodb:** Make MongoDB tools' `filterParams` field optional ([#1614](https://github.com/googleapis/genai-toolbox/issues/1614)) ([208ab92](https://github.com/googleapis/genai-toolbox/commit/208ab92eb377b538a99330a415ecc18790b077b7))
* **tools/serverless-spark:** Add serverless-spark source with list_batches tool ([#1690](https://github.com/googleapis/genai-toolbox/pull/1690))([816dbce](https://github.com/googleapis/genai-toolbox/commit/816dbce268392046e54767732bd31488c6e89bdb))
### Bug Fixes
* Bigquery execute_sql to assign values to array ([#1884](https://github.com/googleapis/genai-toolbox/issues/1884)) ([559e2a2](https://github.com/googleapis/genai-toolbox/commit/559e2a22e0db20bb947702e13140ce869b5865a7))
* **cloudmonitoring:** Populate `authRequired` in tool manifest ([#1800](https://github.com/googleapis/genai-toolbox/issues/1800)) ([954152c](https://github.com/googleapis/genai-toolbox/commit/954152c7928bf0da9be221e011e32f74bc4cebbc))
* Instructions to quote filters that include commas ([#1794](https://github.com/googleapis/genai-toolbox/issues/1794)) ([4b01720](https://github.com/googleapis/genai-toolbox/commit/4b0172083c0dd4c71098d4e0ab5fa0b16ea0d830))
* **source/cloud-sql-mssql:** Remove `ipAddress` field ([#1822](https://github.com/googleapis/genai-toolbox/issues/1822)) ([38d535d](https://github.com/googleapis/genai-toolbox/commit/38d535de34cfedd6828a01d9dcd25daf1bad7306))
* **tools/alloydbainl:** AlloyDB AI NL execute_sql statement order ([#1753](https://github.com/googleapis/genai-toolbox/issues/1753)) ([9723cad](https://github.com/googleapis/genai-toolbox/commit/9723cadaa181a76d8fdda65a6254f6c887c3cf57))
* **tools/postgres-execute-sql:** Do not ignore SQL failure ([#1829](https://github.com/googleapis/genai-toolbox/issues/1829)) ([8984287](https://github.com/googleapis/genai-toolbox/commit/898428759c2a1a384bea8939605cf0914d129bec))
author for more than 10 days, maintainers may mark that PR as Draft. PRs that
are inactive for more than 30 days may be closed.
### Automated Code Reviews
This repository uses **Gemini Code Assist** to provide automated code reviews on Pull Requests. While this does not replace human review, it provides immediate feedback on code quality and potential issues.
You can manually trigger the bot by commenting on your Pull Request:
* `/gemini`: Manually invokes Gemini Code Assist in comments
* `/gemini review`: Posts a code review of the changes in the pull request
* `/gemini summary`: Posts a summary of the changes in the pull request
* `/gemini help`: Overview of the available commands
* **Internal Contributors:** Testing workflows should trigger automatically.
* **External Contributors:** Request Toolbox maintainers to trigger the testing
workflows on your PR.
* Maintainers can comment `/gcbrun` to execute the integration tests.
* Maintainers can add the label `tests:run` to execute the unit tests.
#### Test Resources
* Couchbase - setup in the test project via the Marketplace
* DGraph - using the public dgraph interface <https://play.dgraph.io> for
testing
* Looker
* The Cloud Build service account is a user for conversational analytics
* The Looker instance runs under google.com:looker-sandbox.
* Memorystore Redis - setup in the test project using a Memorystore for Redis
standalone instance
* Memorystore Redis Cluster, Memorystore Valkey standalone, and Memorystore
There are 3 GHA workflows we use to achieve document versioning:
1. **Deploy In-development docs:**
This workflow is run on every commit merged into the main branch. It deploys
the built site to the `/dev/` subdirectory for the in-development
documentation.
1. **Deploy Versioned Docs:**
When a new GitHub Release is published, it performs two deployments based on the new release tag.
One to the new version subdirectory and one to the root directory of the versioned-gh-pages branch.
**Note:** Before the release PR from release-please is merged, add the newest version into the hugo.toml file.
1. **Deploy Previous Version Docs:**
This is a manual workflow, started from the GitHub Actions UI.
It rebuilds and redeploys documentation for versions that were released before
this new system was in place. Provide the git version tag you want to build
documentation for when starting the workflow. The specific versioned
subdirectory and the root docs are updated on the versioned-gh-pages branch.
#### Contributors
### Team
Team `@googleapis/senseai-eco` has been set as
[CODEOWNERS](.github/CODEOWNERS). The GitHub TeamSync tool is used to create
this team from MDB Group, `senseai-eco`. Additionally, database-specific GitHub
teams (e.g., `@googleapis/toolbox-alloydb`) have been created from MDB groups to
manage code ownership and review for individual database products.
Team `@googleapis/toolbox-contributors` has write access to this repo. Its
members can create branches and approve test runs, but they cannot approve PRs
to main. TeamSync is used to create this team from the MDB Group
`toolbox-contributors`. Googlers who are developing for MCP-Toolbox but aren't
part of the core team should join this group.
This document helps you find and install the right Gemini CLI extension to interact with your databases.
## How to Install an Extension
To install any of the extensions listed below, use the `gemini extensions install` command followed by the extension's GitHub repository URL.
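For example, an install command looks like the sketch below. The repository URL is illustrative only (it assumes a `gemini-cli-extensions` GitHub organization hosting a `postgres` extension); substitute the URL listed for your database further down.

```bash
# Install an extension directly from its GitHub repository URL.
# The URL below is a placeholder; use the one listed for your database.
gemini extensions install https://github.com/gemini-cli-extensions/postgres

# List installed extensions to confirm it was registered
# (assumes the `list` subcommand is available in your CLI version).
gemini extensions list
```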
For complete instructions on finding, installing, and managing extensions, please see the [official Gemini CLI extensions documentation](https://github.com/google-gemini/gemini-cli/blob/main/docs/extensions/index.md).
* These commands are not supported from within the CLI
* Changes made by these commands are only reflected in active CLI sessions after a restart
* Extensions require Application Default Credentials in your environment. See [Set up ADC for a local development environment](https://cloud.google.com/docs/authentication/set-up-adc-local-dev-environment) to learn how you can provide either your user credentials or service account credentials to ADC in a local development environment (see the example after this list).
* Most extensions require you to set environment variables to connect to a database. If there is a link provided for the configuration, fetch the web page and return the configuration.
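A minimal ADC setup sketch, assuming the gcloud CLI is installed and you have access to the relevant Google Cloud project (the project ID is a placeholder):

```bash
# Write your user credentials to Application Default Credentials
# for local development.
gcloud auth application-default login

# Optionally set a default project so extensions resolve resources against it.
gcloud config set project YOUR_PROJECT_ID
```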
-----
## Find Your Database Extension
Find your database or service in the list below to get the correct installation command.
**Note on Observability:** Extensions with `-observability` in their name are designed to help you understand the health and performance of your database instances, often by analyzing metrics and logs.
If you are looking for other PostgreSQL options, consider the `postgres` extension for self-hosted instances, or the `alloydb` extension for AlloyDB for PostgreSQL.
If you are looking for other PostgreSQL options, consider the `postgres` extension for self-hosted instances, or the `cloud-sql-postgresql` extension for Cloud SQL for PostgreSQL.
This extension can be used with any Google Cloud database to build custom tools. For more information, see the [MCP Toolbox documentation](https://googleapis.github.io/genai-toolbox/getting-started/introduction/).
flags.BoolVar(&cmd.cfg.TelemetryGCP, "telemetry-gcp", false, "Enable exporting directly to Google Cloud Monitoring.")
flags.StringVar(&cmd.cfg.TelemetryOTLP, "telemetry-otlp", "", "Enable exporting using OpenTelemetry Protocol (OTLP) to the specified endpoint (e.g. 'http://127.0.0.1:4318')")
flags.StringVar(&cmd.cfg.TelemetryServiceName, "telemetry-service-name", "toolbox", "Sets the value of the service.name resource attribute for telemetry data.")
// Fetch prebuilt tools sources to customize the help description
prebuiltHelp := fmt.Sprintf(
	"Use a prebuilt tool configuration by source type. Cannot be used with --tools-file. Allowed: '%s'.",
flags.BoolVar(&cmd.cfg.Stdio, "stdio", false, "Listens via MCP STDIO instead of acting as a remote HTTP server.")
flags.BoolVar(&cmd.cfg.DisableReload, "disable-reload", false, "Disables dynamic reloading of tools file.")
flags.BoolVar(&cmd.cfg.UI, "ui", false, "Launches the Toolbox UI web server.")
flags.StringSliceVar(&cmd.cfg.AllowedOrigins, "allowed-origins", []string{"*"}, "Specifies a list of origins permitted to access this server. Defaults to '*'.")
// wrap RunE command so that we have access to original Command object
return ToolsFile{}, fmt.Errorf("resource conflicts detected:\n - %s\n\nPlease ensure each source, authService, tool, toolset and prompt has a unique name across all files", strings.Join(conflicts, "\n - "))
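These flag definitions correspond to command-line options on the Toolbox server. The sketch below shows how they might be combined; it assumes the binary is invoked as `toolbox` and that `--tools-file` and `--prebuilt postgres` behave as described in the help text above.

```bash
# Serve tools from a local tools.yaml, exporting telemetry over OTLP
# and enabling the web UI.
toolbox --tools-file tools.yaml \
  --telemetry-otlp "http://127.0.0.1:4318" \
  --telemetry-service-name "toolbox" \
  --ui

# Alternatively, speak MCP over STDIO with a prebuilt configuration
# (no remote HTTP server).
toolbox --prebuilt postgres --stdio
```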
The AlloyDB Model Context Protocol (MCP) Server gives AI-powered development tools the ability to work with your Google Cloud AlloyDB for PostgreSQL resources. It supports full lifecycle control, from creating clusters and instances to exploring schemas and running queries.
## Features
An editor configured to use the AlloyDB MCP server can use its AI capabilities to help you:
* **Provision & Manage Infrastructure**: Create and manage AlloyDB clusters, instances, and users
To connect to the database to explore and query data, search the MCP store for the AlloyDB for PostgreSQL MCP Server.
## Prerequisites
* [Node.js](https://nodejs.org/) installed.
* A Google Cloud project with the **AlloyDB API** enabled.
* Ensure [Application Default Credentials](https://cloud.google.com/docs/authentication/gcloud) are available in your environment.
* Service Usage Consumer (`roles/serviceusage.serviceUsageConsumer`)
## Install & Configuration
In the Antigravity MCP Store, click the "Install" button.
You'll now be able to see all enabled tools in the "Tools" tab.
> [!NOTE]
> If you encounter issues with Windows Defender blocking the execution, you may need to configure an allowlist. See [Configure exclusions for Microsoft Defender Antivirus](https://learn.microsoft.com/en-us/microsoft-365/security/defender-endpoint/configure-exclusions-microsoft-defender-antivirus?view=o365-worldwide) for more details.
## Usage
Once configured, the MCP server will automatically provide AlloyDB capabilities to your AI assistant. You can:
* "Create a new AlloyDB cluster named 'e-commerce-prod' in the 'my-gcp-project' project."
* "Add a read-only instance to my 'e-commerce-prod' cluster."
* "Create a new user named 'analyst' with read access to all tables."
## Server Capabilities
The AlloyDB MCP server provides the following tools:
The AlloyDB Model Context Protocol (MCP) Server gives AI-powered development tools the ability to work with your Google Cloud AlloyDB for PostgreSQL resources. It supports full lifecycle control, from exploring schemas and running queries to monitoring your database.
## Features
An editor configured to use the AlloyDB MCP server can use its AI capabilities to help you:
- **Explore Schemas and Data** - List tables, get table details, and view data
- **Execute SQL** - Run SQL queries directly from your editor
- **Monitor Performance** - View active queries, query plans, and other performance metrics (via observability tools)
- **Manage Extensions** - List available and installed PostgreSQL extensions
For AlloyDB infrastructure management, search the MCP store for the AlloyDB for PostgreSQL Admin MCP Server.
## Prerequisites
* [Node.js](https://nodejs.org/) installed.
* A Google Cloud project with the **AlloyDB API** enabled.
* Ensure [Application Default Credentials](https://cloud.google.com/docs/authentication/gcloud) are available in your environment.
* IAM Permissions:
* AlloyDB Client (`roles/alloydb.client`) (for connecting and querying)
* Service Usage Consumer (`roles/serviceusage.serviceUsageConsumer`)
> **Note:** If your AlloyDB instance uses private IPs, you must run the MCP server in the same Virtual Private Cloud (VPC) network.
## Install & Configuration
1. In the Antigravity MCP Store, click the "Install" button.
2. Add the required inputs for your [cluster](https://docs.cloud.google.com/alloydb/docs/cluster-list) in the configuration pop-up, then click "Save". You can update this configuration at any time in the "Configure" tab.
> [!NOTE]
> If you encounter issues with Windows Defender blocking the execution, you may need to configure an allowlist. See [Configure exclusions for Microsoft Defender Antivirus](https://learn.microsoft.com/en-us/microsoft-365/security/defender-endpoint/configure-exclusions-microsoft-defender-antivirus?view=o365-worldwide) for more details.
You'll now be able to see all enabled tools in the "Tools" tab.
## Usage
Once configured, the MCP server will automatically provide AlloyDB capabilities to your AI assistant. You can:
* "Show me all tables in the 'orders' database."
* "What are the columns in the 'products' table?"
* "How many orders were placed in the last 30 days?"
## Server Capabilities
The AlloyDB MCP server provides the following tools:
The BigQuery Model Context Protocol (MCP) Server enables AI-powered development tools to seamlessly connect, interact, and generate data insights with your BigQuery datasets and data using natural language commands.
## Features
An editor configured to use the BigQuery MCP server can use its AI capabilities to help you:
- **Natural Language to Data Analytics:** Easily find required BigQuery tables and ask analytical questions in plain English.
- **Seamless Workflow:** Stay within your CLI, eliminating the need to constantly switch to the GCP console for generating analytical insights.
- **Run Advanced Analytics:** Generate forecasts and perform contribution analysis using built-in advanced tools.
## Prerequisites
* [Node.js](https://nodejs.org/) installed.
* A Google Cloud project with the **BigQuery API** enabled.
* Ensure [Application Default Credentials](https://cloud.google.com/docs/authentication/gcloud) are available in your environment.
* IAM Permissions:
* BigQuery User (`roles/bigquery.user`)
## Install & Configuration
1. In the Antigravity MCP Store, click the "Install" button.
2. Add the required inputs in the configuration pop-up, then click "Save". You can update this configuration at any time in the "Configure" tab.
> [!NOTE]
> If you encounter issues with Windows Defender blocking the execution, you may need to configure an allowlist. See [Configure exclusions for Microsoft Defender Antivirus](https://learn.microsoft.com/en-us/microsoft-365/security/defender-endpoint/configure-exclusions-microsoft-defender-antivirus?view=o365-worldwide) for more details.
You'll now be able to see all enabled tools in the "Tools" tab.
## Usage
Once configured, the MCP server will automatically provide BigQuery capabilities to your AI assistant. You can:
* **Find Data:**
* "Find tables related to PyPi downloads"
* "Find tables related to Google analytics data in the dataset bigquery-public-data"
* **Generate Analytics and Insights:**
* "Using bigquery-public-data.pypi.file_downloads show me the top 10 downloaded pypi packages this month."
* "Using bigquery-public-data.pypi.file_downloads can you forecast downloads for the last four months of 2025 for package urllib3?"
## Server Capabilities
The BigQuery MCP server provides the following tools:
The Cloud SQL for SQL Server Model Context Protocol (MCP) Server gives AI-powered development tools the ability to work with your Google Cloud SQL for SQL Server databases. It supports connecting to instances, exploring schemas, and running queries.
## Features
An editor configured to use the Cloud SQL for SQL Server MCP server can use its AI capabilities to help you:
- **Provision & Manage Infrastructure** - Create and manage Cloud SQL instances and users
To connect to the database to explore and query data, search the MCP store for the Cloud SQL for SQL Server MCP Server.
## Prerequisites
* [Node.js](https://nodejs.org/) installed.
* A Google Cloud project with the **Cloud SQL Admin API** enabled.
* Ensure [Application Default Credentials](https://cloud.google.com/docs/authentication/gcloud) are available in your environment.
* IAM Permissions:
* Cloud SQL Viewer (`roles/cloudsql.viewer`)
* Cloud SQL Admin (`roles/cloudsql.admin`)
## Install & Configuration
In the Antigravity MCP Store, click the "Install" button.
You'll now be able to see all enabled tools in the "Tools" tab.
> [!NOTE]
> If you encounter issues with Windows Defender blocking the execution, you may need to configure an allowlist. See [Configure exclusions for Microsoft Defender Antivirus](https://learn.microsoft.com/en-us/microsoft-365/security/defender-endpoint/configure-exclusions-microsoft-defender-antivirus?view=o365-worldwide) for more details.
## Usage
Once configured, the MCP server will automatically provide Cloud SQL for SQL Server capabilities to your AI assistant. You can:
* "Create a new Cloud SQL for SQL Server instance named 'e-commerce-prod' in the 'my-gcp-project' project."
* "Create a new user named 'analyst' with read access to all tables."
## Server Capabilities
The Cloud SQL for SQL Server MCP server provides the following tools:
The Cloud SQL for SQL Server Model Context Protocol (MCP) Server gives AI-powered development tools the ability to work with your Google Cloud SQL for SQL Server databases. It supports connecting to instances, exploring schemas, and running queries.
## Features
An editor configured to use the Cloud SQL for SQL Server MCP server can use its AI capabilities to help you:
- **Query Data** - Execute SQL queries
- **Explore Schema** - List tables and view schema details
For Cloud SQL infrastructure management, search the MCP store for the Cloud SQL for SQL Server Admin MCP Server.
## Prerequisites
* [Node.js](https://nodejs.org/) installed.
* A Google Cloud project with the **Cloud SQL Admin API** enabled.
* Ensure [Application Default Credentials](https://cloud.google.com/docs/authentication/gcloud) are available in your environment.
* IAM Permissions:
* Cloud SQL Client (`roles/cloudsql.client`)
> **Note:** If your instance uses private IPs, you must run the MCP server in the same Virtual Private Cloud (VPC) network.
## Install & Configuration
1. In the Antigravity MCP Store, click the "Install" button.
2. Add the required inputs for your [instance](https://cloud.google.com/sql/docs/sqlserver/instance-info) in the configuration pop-up, then click "Save". You can update this configuration at any time in the "Configure" tab.
You'll now be able to see all enabled tools in the "Tools" tab.
> [!NOTE]
> If you encounter issues with Windows Defender blocking the execution, you may need to configure an allowlist. See [Configure exclusions for Microsoft Defender Antivirus](https://learn.microsoft.com/en-us/microsoft-365/security/defender-endpoint/configure-exclusions-microsoft-defender-antivirus?view=o365-worldwide) for more details.
## Usage
Once configured, the MCP server will automatically provide Cloud SQL for SQL Server capabilities to your AI assistant. You can:
* "Select top 10 rows from the customers table."
* "List all tables in the database."
## Server Capabilities
The Cloud SQL for SQL Server MCP server provides the following tools:
The Cloud SQL for MySQL Model Context Protocol (MCP) Server gives AI-powered development tools the ability to work with your Google Cloud SQL for MySQL databases. It supports connecting to instances, exploring schemas, and running queries.
## Features
An editor configured to use the Cloud SQL for MySQL MCP server can use its AI capabilities to help you:
- **Provision & Manage Infrastructure** - Create and manage Cloud SQL instances and users
To connect to the database to explore and query data, search the MCP store for the Cloud SQL for MySQL MCP Server.
## Prerequisites
* [Node.js](https://nodejs.org/) installed.
* A Google Cloud project with the **Cloud SQL Admin API** enabled.
* Ensure [Application Default Credentials](https://cloud.google.com/docs/authentication/gcloud) are available in your environment.
* IAM Permissions:
* Cloud SQL Viewer (`roles/cloudsql.viewer`)
* Cloud SQL Admin (`roles/cloudsql.admin`)
## Install & Configuration
In the Antigravity MCP Store, click the "Install" button.
You'll now be able to see all enabled tools in the "Tools" tab.
> [!NOTE]
> If you encounter issues with Windows Defender blocking the execution, you may need to configure an allowlist. See [Configure exclusions for Microsoft Defender Antivirus](https://learn.microsoft.com/en-us/microsoft-365/security/defender-endpoint/configure-exclusions-microsoft-defender-antivirus?view=o365-worldwide) for more details.
## Usage
Once configured, the MCP server will automatically provide Cloud SQL for MySQL capabilities to your AI assistant. You can:
* "Create a new Cloud SQL for MySQL instance named 'e-commerce-prod' in the 'my-gcp-project' project."
* "Create a new user named 'analyst' with read access to all tables."
## Server Capabilities
The Cloud SQL for MySQL MCP server provides the following tools:
The Cloud SQL for MySQL Model Context Protocol (MCP) Server gives AI-powered development tools the ability to work with your Google Cloud SQL for MySQL databases. It supports connecting to instances, exploring schemas, and running queries.
## Features
An editor configured to use the Cloud SQL for MySQL MCP server can use its AI capabilities to help you:
- **Explore Schema** - List tables and view schema details
- **Database Maintenance** - Check for fragmentation and missing indexes
- **Monitor Performance** - View active queries
For Cloud SQL infrastructure management, search the MCP store for the Cloud SQL for MySQL Admin MCP Server.
## Prerequisites
* [Node.js](https://nodejs.org/) installed.
* A Google Cloud project with the **Cloud SQL Admin API** enabled.
* Ensure [Application Default Credentials](https://cloud.google.com/docs/authentication/gcloud) are available in your environment.
* IAM Permissions:
* Cloud SQL Client (`roles/cloudsql.client`)
> **Note:** If your instance uses private IPs, you must run the MCP server in the same Virtual Private Cloud (VPC) network.
## Install & Configuration
1. In the Antigravity MCP Store, click the "Install" button.
2. Add the required inputs for your [instance](https://cloud.google.com/sql/docs/mysql/instance-info) in the configuration pop-up, then click "Save". You can update this configuration at any time in the "Configure" tab.
You'll now be able to see all enabled tools in the "Tools" tab.
> [!NOTE]
> If you encounter issues with Windows Defender blocking the execution, you may need to configure an allowlist. See [Configure exclusions for Microsoft Defender Antivirus](https://learn.microsoft.com/en-us/microsoft-365/security/defender-endpoint/configure-exclusions-microsoft-defender-antivirus?view=o365-worldwide) for more details.
## Usage
Once configured, the MCP server will automatically provide Cloud SQL for MySQL capabilities to your AI assistant. You can:
* "Show me the schema for the 'orders' table."
* "List the top 10 active queries."
* "Check for tables missing unique indexes."
## Server Capabilities
The Cloud SQL for MySQL MCP server provides the following tools:
The Cloud SQL for PostgreSQL Model Context Protocol (MCP) Server gives AI-powered development tools the ability to work with your Google Cloud SQL for PostgreSQL databases. It supports connecting to instances, exploring schemas, running queries, and analyzing performance.
## Features
An editor configured to use the Cloud SQL for PostgreSQL MCP server can use its AI capabilities to help you:
- **Provision & Manage Infrastructure** - Create and manage Cloud SQL instances and users
To connect to the database to explore and query data, search the MCP store for the Cloud SQL for PostgreSQL MCP Server.
## Prerequisites
* [Node.js](https://nodejs.org/) installed.
* A Google Cloud project with the **Cloud SQL Admin API** enabled.
* Ensure [Application Default Credentials](https://cloud.google.com/docs/authentication/gcloud) are available in your environment.
* IAM Permissions:
* Cloud SQL Viewer (`roles/cloudsql.viewer`)
* Cloud SQL Admin (`roles/cloudsql.admin`)
## Install & Configuration
In the Antigravity MCP Store, click the "Install" button.
You'll now be able to see all enabled tools in the "Tools" tab.
> [!NOTE]
> If you encounter issues with Windows Defender blocking the execution, you may need to configure an allowlist. See [Configure exclusions for Microsoft Defender Antivirus](https://learn.microsoft.com/en-us/microsoft-365/security/defender-endpoint/configure-exclusions-microsoft-defender-antivirus?view=o365-worldwide) for more details.
## Usage
Once configured, the MCP server will automatically provide Cloud SQL for PostgreSQL capabilities to your AI assistant. You can:
* "Create a new Cloud SQL for Postgres instance named 'e-commerce-prod' in the 'my-gcp-project' project."
* "Create a new user named 'analyst' with read access to all tables."
## Server Capabilities
The Cloud SQL for PostgreSQL MCP server provides the following tools:
The Cloud SQL for PostgreSQL Model Context Protocol (MCP) Server gives AI-powered development tools the ability to work with your Google Cloud SQL for PostgreSQL databases. It supports connecting to instances, exploring schemas, running queries, and analyzing performance.
## Features
An editor configured to use the Cloud SQL for PostgreSQL MCP server can use its AI capabilities to help you:
- **Explore Schema** - List tables, views, indexes, and triggers
- **Monitor Performance** - View active queries, bloat, and memory configurations
- **Manage Extensions** - List available and installed extensions
For Cloud SQL infrastructure management, search the MCP store for the Cloud SQL for PostgreSQL Admin MCP Server.
## Prerequisites
* [Node.js](https://nodejs.org/) installed.
* A Google Cloud project with the **Cloud SQL Admin API** enabled.
* Ensure [Application Default Credentials](https://cloud.google.com/docs/authentication/gcloud) are available in your environment.
* IAM Permissions:
* Cloud SQL Client (`roles/cloudsql.client`)
> **Note:** If your instance uses private IPs, you must run the MCP server in the same Virtual Private Cloud (VPC) network.
## Install & Configuration
1. In the Antigravity MCP Store, click the "Install" button.
2. Add the required inputs for your [instance](https://cloud.google.com/sql/docs/postgres/instance-info) in the configuration pop-up, then click "Save". You can update this configuration at any time in the "Configure" tab.
You'll now be able to see all enabled tools in the "Tools" tab.
> [!NOTE]
> If you encounter issues with Windows Defender blocking the execution, you may need to configure an allowlist. See [Configure exclusions for Microsoft Defender Antivirus](https://learn.microsoft.com/en-us/microsoft-365/security/defender-endpoint/configure-exclusions-microsoft-defender-antivirus?view=o365-worldwide) for more details.
## Usage
Once configured, the MCP server will automatically provide Cloud SQL for PostgreSQL capabilities to your AI assistant. You can:
* "Show me the top 5 bloated tables."
* "List all installed extensions."
* "Explain the query plan for SELECT * FROM users."
## Server Capabilities
The Cloud SQL for PostgreSQL MCP server provides the following tools:
The Dataplex Model Context Protocol (MCP) Server gives AI-powered development tools the ability to work with your Google Cloud Dataplex Catalog. It supports searching and looking up entries and aspect types.
## Features
An editor configured to use the Dataplex MCP server can use its AI capabilities to help you:
- **Search Catalog** - Search for entries in Dataplex Catalog
- **Explore Metadata** - Lookup specific entries and search aspect types
## Prerequisites
* [Node.js](https://nodejs.org/) installed.
* A Google Cloud project with the **Dataplex API** enabled.
* Ensure [Application Default Credentials](https://cloud.google.com/docs/authentication/gcloud) are available in your environment.
* IAM Permissions:
* Dataplex Viewer (`roles/dataplex.viewer`) or equivalent permissions to read catalog entries.
## Install & Configuration
1. In the Antigravity MCP Store, click the "Install" button.
2. Add the required inputs in the configuration pop-up, then click "Save". You can update this configuration at any time in the "Configure" tab.
You'll now be able to see all enabled tools in the "Tools" tab.
> [!NOTE]
> If you encounter issues with Windows Defender blocking the execution, you may need to configure an allowlist. See [Configure exclusions for Microsoft Defender Antivirus](https://learn.microsoft.com/en-us/microsoft-365/security/defender-endpoint/configure-exclusions-microsoft-defender-antivirus?view=o365-worldwide) for more details.
## Usage
Once configured, the MCP server will automatically provide Dataplex capabilities to your AI assistant. You can:
* "Search for entries related to 'sales' in Dataplex."
* "Look up details for the entry 'projects/my-project/locations/us-central1/entryGroups/my-group/entries/my-entry'."
## Server Capabilities
The Dataplex MCP server provides the following tools:
The Looker Model Context Protocol (MCP) Server gives AI-powered development tools the ability to work with your Looker instance. It supports exploring models, running queries, managing dashboards, and more.
## Features
An editor configured to use the Looker MCP server can use its AI capabilities to help you:
- **Explore Models** - Get models, explores, dimensions, measures, filters, and parameters
- **Manage Dashboards** - Create, run, and modify dashboards
- **Manage Looks** - Search for and run saved looks
- **Health Checks** - Analyze instance health and performance
## Prerequisites
* [Node.js](https://nodejs.org/) installed.
* Access to a Looker instance.
* API Credentials (`Client ID` and `Client Secret`) or OAuth configuration.
## Install & Configuration
1. In the Antigravity MCP Store, click the "Install" button.
2. Add the required inputs for your [instance](https://docs.cloud.google.com/looker/docs/set-up-and-administer-looker) in the configuration pop-up, then click "Save". You can update this configuration at any time in the "Configure" tab.
You'll now be able to see all enabled tools in the "Tools" tab.
> [!NOTE]
> If you encounter issues with Windows Defender blocking the execution, you may need to configure an allowlist. See [Configure exclusions for Microsoft Defender Antivirus](https://learn.microsoft.com/en-us/microsoft-365/security/defender-endpoint/configure-exclusions-microsoft-defender-antivirus?view=o365-worldwide) for more details.
## Usage
Once configured, the MCP server will automatically provide Looker capabilities to your AI assistant. You can:
* "Find explores in the 'ecommerce' model."
* "Run a query to show total sales by month."
* "Create a new dashboard named 'Sales Overview'."
## Server Capabilities
The Looker MCP server provides a wide range of tools. Here are some of the key capabilities:
The Cloud Spanner Model Context Protocol (MCP) Server gives AI-powered development tools the ability to work with your Google Cloud Spanner databases. It supports executing SQL queries and exploring schemas.
## Features
An editor configured to use the Cloud Spanner MCP server can use its AI capabilities to help you:
- **Query Data** - Execute DML and DQL SQL queries
- **Explore Schema** - List tables and view schema details
## Prerequisites
* [Node.js](https://nodejs.org/) installed.
* A Google Cloud project with the **Cloud Spanner API** enabled.
* Ensure [Application Default Credentials](https://cloud.google.com/docs/authentication/gcloud) are available in your environment.
* IAM Permissions:
* Cloud Spanner Database User (`roles/spanner.databaseUser`) (for data access)
## Install & Configuration
1. In the Antigravity MCP Store, click the "Install" button.
2. Add the required inputs for your [instance](https://docs.cloud.google.com/spanner/docs/instances) in the configuration pop-up, then click "Save". You can update this configuration at any time in the "Configure" tab.
You'll now be able to see all enabled tools in the "Tools" tab.
> [!NOTE]
> If you encounter issues with Windows Defender blocking the execution, you may need to configure an allowlist. See [Configure exclusions for Microsoft Defender Antivirus](https://learn.microsoft.com/en-us/microsoft-365/security/defender-endpoint/configure-exclusions-microsoft-defender-antivirus?view=o365-worldwide) for more details.
## Usage
Once configured, the MCP server will automatically provide Cloud Spanner capabilities to your AI assistant. You can:
* "Execute a DML query to update customer names."
* "List all tables in the `my-database`."
* "Execute a DQL query to select data from `orders` table."
## Server Capabilities
The Cloud Spanner MCP server provides the following tools:
The MCP Toolbox for Databases Server gives AI-powered development tools the ability to work with your custom tools. It is designed to simplify and secure the development of tools for interacting with databases.
## Prerequisites
* [Node.js](https://nodejs.org/) installed.
* A Google Cloud project with relevant APIs enabled.
* Ensure [Application Default Credentials](https://cloud.google.com/docs/authentication/gcloud) are available in your environment.
## Install & Configuration
1. In the Antigravity MCP Store, click the "Install" button.
2. Create your [`tools.yaml` configuration file](https://googleapis.github.io/genai-toolbox/getting-started/configure/).
3. Click "View raw config" and update the `tools.yaml` path with the full absolute path to your file.
> [!NOTE]
> If you encounter issues with Windows Defender blocking the execution, you may need to configure an allowlist. See [Configure exclusions for Microsoft Defender Antivirus](https://learn.microsoft.com/en-us/microsoft-365/security/defender-endpoint/configure-exclusions-microsoft-defender-antivirus?view=o365-worldwide) for more details.
## Usage
Interact with your custom tools using natural language.
## Gemini CLI Extensions
[Gemini CLI][gemini-cli] is an open-source AI agent designed to assist with development workflows by assisting with coding, debugging, data exploration, and content creation. Its mission is to provide an agentic interface for interacting with database and analytics services and popular open-source databases.
### How extensions work
Gemini CLI is highly extensible, allowing for the addition of new tools and capabilities through extensions. You can load the extensions from a GitHub URL, a local directory, or a configurable registry. They provide new tools, slash commands, and prompts to assist with your workflow.
Use the Gemini CLI Extensions to load prebuilt or custom tools to interact with your databases.
To edit headers, press the "Edit Headers" button to display the header modal.
Within this modal, users can make direct edits by typing into the header's text
area.
Toolbox UI validates that the headers are in correct JSON format. Other
header-related errors (e.g., incorrect header names or values required by the