## Description
Add support for healthcare source, tool, and prebuilt config. This branch
consists of all previously approved PRs.
🛠️ Fixes #1648
---------
Co-authored-by: Marwan Tammam <15021613+Quarz0@users.noreply.github.com>
Update AlloyDB AI NL integration test's database back to
`test_database`.
Previously, a new database was created with the new NL configuration so that
existing integration tests would not break before PR #1753 was merged.
`test_database` has been updated with the v1.0.4 AlloyDB AI NL extension.
Move postgres prebuilt integration tests to `common.go` and `tool.go`.
Run those tests from alloydbpg and cloudsqlpg as well.
alloydbpg and cloudsqlpg integration test coverage is calculated against the
whole `internal/tools/postgres/` folder. Without these tests, the coverage
would eventually drop below the minimum requirement.
Add a commit SHA tag to the continuous release image. Currently we only
include the ref_name tag (which always shows as `main`, since the continuous
release runs from the main branch); that tag is overwritten on every PR merge,
making it hard to find images from previous releases. The commit SHA tag helps
users find the exact image they are looking for.
Assign the job iterator values to an array instead of a map to preserve
column order.
When assigning incoming values to a map, the result is sometimes not in the
order of the statement. E.g. `SELECT id, name from ...` might turn into
`{"name": "name_value", "id":1}` rather than `{"id":1, "name":
"name_value"}`. Previously this wasn't an issue, because JSON marshaling
ALWAYS ordered the map keys alphabetically.
With the implementation of `orderedmap` (#1852), the bigquery execute sql
tool now preserves the column order during the marshaling process. Because of
this, bigquery's integration test became flaky and failed when the map was
reordered. This update assigns incoming values to an array instead, preserving
the actual order.
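A minimal, self-contained sketch of the difference (illustrative only, not the tool's actual code):
```
// Illustrative only: reading each row into a positional structure keeps the
// columns in statement order, whereas a plain map loses it (and encoding/json
// sorts map keys alphabetically regardless).
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Simulated row for `SELECT id, name FROM ...`.
	columns := []string{"id", "name"}
	values := []any{1, "name_value"}

	// Map form: key order is unspecified, and json.Marshal sorts keys, which
	// only happens to match the statement order for this particular query.
	asMap := map[string]any{}
	for i, c := range columns {
		asMap[c] = values[i]
	}
	m, _ := json.Marshal(asMap)
	fmt.Println(string(m)) // {"id":1,"name":"name_value"}

	// Array form: position encodes the column, so statement order is preserved.
	type cell struct {
		Column string `json:"column"`
		Value  any    `json:"value"`
	}
	row := make([]cell, 0, len(columns))
	for i, c := range columns {
		row = append(row, cell{Column: c, Value: values[i]})
	}
	a, _ := json.Marshal(row)
	fmt.Println(string(a)) // [{"column":"id","value":1},{"column":"name","value":"name_value"}]
}
```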
## Description
The run_dashboard tool will run the query associated with each tile of
the dashboard and return the full set of query results. It enables the
agent to answer questions like "Summarize this dashboard for me".
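A hypothetical sketch of that flow (the type and function names are illustrative, not the tool's real API):
```
// Hypothetical sketch: run the query behind each dashboard tile and return the
// full set of results for the agent to summarize.
package dashboard

import (
	"context"
	"fmt"
)

type Tile struct {
	Title string
	Query string
}

type TileResult struct {
	Title string           `json:"title"`
	Rows  []map[string]any `json:"rows"`
}

// runDashboard executes each tile's query and collects every result set.
func runDashboard(ctx context.Context, tiles []Tile,
	runQuery func(context.Context, string) ([]map[string]any, error)) ([]TileResult, error) {
	results := make([]TileResult, 0, len(tiles))
	for _, t := range tiles {
		rows, err := runQuery(ctx, t.Query)
		if err != nil {
			return nil, fmt.Errorf("tile %q: %w", t.Title, err)
		}
		results = append(results, TileResult{Title: t.Title, Rows: rows})
	}
	return results, nil
}
```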
---------
Co-authored-by: Yuan Teoh <45984206+Yuan325@users.noreply.github.com>
## Description
> Should include a concise description of the changes (bug or feature), its
> impact, along with a summary of the solution
## PR Checklist
> Thank you for opening a Pull Request! Before submitting your PR, there
are a
> few things you can do to make sure it goes smoothly:
- [ ] Make sure you reviewed
[CONTRIBUTING.md](https://github.com/googleapis/genai-toolbox/blob/main/CONTRIBUTING.md)
- [ ] Make sure to open an issue as a
[bug/issue](https://github.com/googleapis/genai-toolbox/issues/new/choose)
before writing your code! That way we can discuss the change, evaluate
designs, and agree on the general idea
- [ ] Ensure the tests and linter pass
- [ ] Code coverage does not decrease (if any source code was changed)
- [ ] Appropriate docs were updated (if necessary)
- [ ] Make sure to add `!` if this involves a breaking change
🛠️ Fixes #<issue_number_goes_here>
This PR contains the following updates:
| Package | Change |
|---|---|
| [google-adk](https://redirect.github.com/google/adk-python) ([changelog](https://redirect.github.com/google/adk-python/blob/main/CHANGELOG.md)) | `==1.15.1` -> `==1.18.0` |
---
### Release Notes
<details>
<summary>google/adk-python (google-adk)</summary>
###
[`v1.18.0`](https://redirect.github.com/google/adk-python/blob/HEAD/CHANGELOG.md#1180-2025-11-05)
[Compare
Source](https://redirect.github.com/google/adk-python/compare/v1.17.0...v1.18.0)
##### Features
- **\[ADK Visual Agent Builder]**
- Core Features
- Visual workflow designer for agent creation
- Support for multiple agent types (LLM, Sequential, Parallel, Loop,
Workflow)
- Agent tool support with nested agent tools
- Built-in and custom tool integration
- Callback management for all ADK callback types (before/after agent,
model, tool)
- Assistant to help you build your agents with natural language
- Assistant proposes and writes agent configuration yaml files for you
- Save to test with chat interfaces as normal
- Build and debug at the same time in adk web!
- **\[Core]**
- Add support for extracting cache-related token counts from LiteLLM
usage
([4f85e86](4f85e86fc3))
- Expose the Python code run by the code interpreter in the logs
([a2c6a8a](a2c6a8a85c))
- Add run\_debug() helper method for quick agent experimentation
([0487eea](0487eea2ab))
- Allow injecting a custom Runner into `agent_to_a2a`
([156d235](156d235479))
- Support MCP prompts via the McpInstructionProvider class
([88032cf](88032cf5c5))
- **\[Models]**
- Add model tracking to LiteLlm and introduce a LiteLLM with fallbacks
demo
([d4c63fc](d4c63fc562))
- Add ApigeeLlm as a model that lets ADK Agent developers connect with
an Apigee proxy
([87dcb3f](87dcb3f7ba))
- **\[Integrations]**
- Add example and fix for loading and upgrading old ADK session
databases
([338c3c8](338c3c89c6))
- Add support for specifying logging level for adk eval cli command
([b1ff85f](b1ff85fb23))
- Propagate LiteLLM finish\_reason to LlmResponse for use in callbacks
([71aa564](71aa5645f6))
- Allow LLM request to override the model used in the generate content
async method in LiteLLM
([ce8f674](ce8f674a28))
- Add api key argument to Vertex Session and Memory services for Express
Mode support
([9014a84](9014a849ea))
- Added support for enums as arguments for function tools
([240ef5b](240ef5beea))
- Implement artifact\_version related methods in GcsArtifactService
([e194ebb](e194ebb33c))
- **\[Services]**
- Add support for Vertex AI Express Mode when deploying to Agent Engine
([d4b2a8b](d4b2a8b49f))
- Remove custom polling logic for Vertex AI Session Service since LRO
polling is supported in express mode
([546c2a6](546c2a6816))
- Make VertexAiSessionService fully asynchronous
([f7e2a7a](f7e2a7a40e))
- **\[Tools]**
- Add Bigquery detect\_anomalies tool
([9851340](9851340ad1))
- Extend Bigquery detect\_anomalies tool to support future data anomaly
detection
([38ea749](38ea749c9c))
- Add get\_job\_info tool to BigQuery toolset
([6429457](64294572c1))
- **\[Evals]**
- Add "final\_session\_state" to the EvalCase data model
([2274c4f](2274c4f304))
- Marked expected\_invocation as optional field on evaluator interface
([b17c8f1](b17c8f19e5))
- Adds LLM-backed user simulator
([54c4ecc](54c4ecc733))
- **\[Observability]**
- Add BigQueryLoggingPlugin for event logging to BigQuery
([b7dbfed](b7dbfed4a3))
- **\[Live]**
- Add token usage to live events for bidi streaming
([6e5c0eb](6e5c0eb6e0))
##### Bug Fixes
- Reduce logging spam for MCP tools without authentication
([11571c3](11571c37ab))
- Fix typo in several files
([d2888a3](d2888a3766))
- Disable SetModelResponseTool workaround for Vertex AI Gemini 2+ models
([6a94af2](6a94af24bf))
- Bug when callback\_context\_invocation\_context is missing in
GlobalInstructionPlugin
([f81ebdb](f81ebdb622))
- Support models slash prefix in model name extraction
([8dff850](8dff85099d))
- Do not consider events with state delta and no content as final
response
([1ee93c8](1ee93c8bcb))
- Parameter filtering for CrewAI functions with \*\*kwargs
([74a3500](74a3500fc5))
- Do not treat FinishReason.STOP as error case for LLM responses
containing candidates with empty contents
([2f72ceb](2f72ceb49b))
- Fixes null check for reflect\_retry plugin sample
([86f0155](86f01550bd))
- Creates evalset directory on evalset create
([6c3882f](6c3882f2d6))
- Add ADK\_DISABLE\_LOAD\_DOTENV environment variable that disables
automatic loading of .env when running ADK cli, if set to true or 1
([15afbcd](15afbcd158))
- Allow tenacity 9.0.0
([ee8acc5](ee8acc58be))
- Output file uploading to artifact service should handle both base64
encoded and raw bytes
([496f8cd](496f8cd6bb))
- Correct message part ordering in A2A history
([5eca72f](5eca72f9bf))
- Change instruction insertion to respect tool call/response pairs
([1e6a9da](1e6a9daa63))
- DynamicPickleType to support MySQL dialect
([fc15c9a](fc15c9a0c3))
- Enable usage metadata in LiteLLM streaming
([f9569bb](f9569bbb1a))
- Fix issue with MCP tools throwing an error
([1a4261a](1a4261ad4b))
- Remove redundant `format` field from LiteLLM content objects
([489c39d](489c39db01))
- Update the contribution analysis tool to use original write mode
([54db3d4](54db3d4434))
- Fix agent evaluations detailed output rows wrapping issue
([4284c61](4284c61901))
- Update dependency version constraints to be based on PyPI versions
([0b1784e](0b1784e0e4))
##### Improvements
- Add Community Repo section to README
([432d30a](432d30af48))
- Undo adding MCP tools output schema to FunctionDeclaration
([92a7d19](92a7d19573))
- Refactor ADK README for clarity and consistency
([b0017ae](b0017aed44))
- Add support for reverse proxy in adk web
([a0df75b](a0df75b6fa))
- Avoid rendering empty columns as part of detailed results rendering of
eval results
([5cb35db](5cb35db921))
- Clear the behavior of disallow\_transfer\_to\_parent
([48ddd07](48ddd07894))
- Disable the scheduled execution for issue triage workflow
([a02f321](a02f321f1b))
- Include delimiter when matching events from parent nodes in content
processor
([b8a2b6c](b8a2b6c570))
- Improve Tau-bench ADK colab stability
([04dbc42](04dbc42e50))
- Implement ADK-based agent factory for Tau-bench
([c0c67c8](c0c67c8698))
- Add util to run ADK LLM Agent with simulation environment
([87f415a](87f415a7c3))
- Demonstrate CodeExecutor customization for environment setup
([8eeff35](8eeff35b35))
- Add sample agent for VertexAiCodeExecutor
([edfe553](edfe553942))
- Adds a new sample agent that demonstrates how to integrate PostgreSQL
databases using the Model Context Protocol (MCP)
([45a2168](45a2168e0e))
- Add example for using ADK with Fast MCP sampling
([d3796f9](d3796f9b33))
- Refactor gepa sample code and clean up user demo colab
([63353b2](63353b2b74))
###
[`v1.17.0`](https://redirect.github.com/google/adk-python/blob/HEAD/CHANGELOG.md#1170-2025-10-22)
[Compare
Source](https://redirect.github.com/google/adk-python/compare/v1.16.0...v1.17.0)
##### Features
- **\[Core]**
- Add a service registry to provide a generic way to register custom
service implementations to be used in FastAPI server. See short
instruction
[here](https://redirect.github.com/google/adk-python/discussions/3175#discussioncomment-14745120).
([391628f](391628fcdc))
- Add the ability to rewind a session to before a previous invocation
([9dce06f](9dce06f9b0))
- Support resuming a parallel agent with multiple branches paused on
tool confirmation requests
([9939e0b](9939e0b087))
- Support content union as static instruction
([cc24d61](cc24d616f8))
- **\[Evals]**
- ADK cli allows developers to create an eval set and add an eval case
([ae139bb](ae139bb461))
- **\[Integrations]**
- Allow custom request and event converters in A2aAgentExecutor
([a17f3b2](a17f3b2e6d))
- **\[Observability]**
- Env variable for disabling llm\_request and llm\_response in spans
([e50f05a](e50f05a9fc))
- **\[Services]**
- Allow passing extra kwargs to create\_session of
VertexAiSessionService
([6a5eac0](6a5eac0bdc))
- Implement new methods in in-memory artifact service to support custom
metadata, artifact versions, etc.
([5a543c0](5a543c00df))
- Add create\_time and mime\_type to ArtifactVersion
([2c7a342](2c7a342593))
- Support returning all sessions when user id is none
([141318f](141318f775))
- **\[Tools]**
- Support additional headers for Google API toolset
([ed37e34](ed37e343f0))
- Introduces a new AgentEngineSandboxCodeExecutor class that supports
executing agent-generated code using the Vertex AI Code Execution
Sandbox API
([ee39a89](ee39a89110))
- Support dynamic per-request headers in MCPToolset
([6dcbb5a](6dcbb5aca6))
- Add `bypass_multi_tools_limit` option to GoogleSearchTool and
VertexAiSearchTool
([9a6b850](9a6b8507f0),
[6da7274](6da7274858))
- Extend `ReflectAndRetryToolPlugin` to support hallucinating function
calls
([f51380f](f51380f9ea))
- Add require\_confirmation param for MCP tool/toolset
([78e74b5](78e74b5bf2))
- **\[UI]**
- Granular per agent speech configuration
([409df13](409df1378f))
##### Bug Fixes
- Returns dict as result from McpTool to comply with BaseTool
expectations
([4df9263](4df926388b))
- Fixes the identity prompt to be one line
([7d5c6b9](7d5c6b9acf))
- Fix the broken langchain importing caused by their 1.0.0 release
([c850da3](c850da3a07))
- Fix BuiltInCodeExecutor to support visualizations
([ce3418a](ce3418a69d))
- Relax runner app-name enforcement and improve agent origin inference
([dc4975d](dc4975dea9))
- Improve error message when adk web is run in wrong directory
([4a842c5](4a842c5a13))
- Handle App objects in eval and graph endpoints
([0b73a69](0b73a6937b))
- Exclude `additionalProperties` from Gemini schemas
([307896a](307896aece))
- Overall eval status should be NOT\_EVALUATED if no invocations were
evaluated
([9fbed0b](9fbed0b15a))
- Create context cache only when prefix matches with previous request
([9e0b1fb](9e0b1fb62b))
- Handle `App` instances returned by `agent_loader.load_agent`
([847df16](847df1638c))
- Add support for file URIs in LiteLLM content conversion
([85ed500](85ed500871))
- Only exclude scores that are None
([998264a](998264a5b1))
- Better handling the A2A streaming tasks
([bddc70b](bddc70b5d0))
- Correctly populate context\_id in remote\_a2a\_agent library
([2158b3c](2158b3c915))
- Remove unnecessary Aclosing
([2f4f561](2f4f5611bd))
- Fix pickle data was truncated error in database session using MySql
([36c96ec](36c96ec5b3))
##### Improvements
- Improve hint message in agent loader
([fe1fc75](fe1fc75c15))
- Fixes MCPToolset --> McpToolset in various places
([d4dc645](d4dc645478))
- Add span for context caching handling and new cache creation
([a2d9f13](a2d9f13fa1))
- Checks gemini version for `2 and above` for gemini-builtin tools
([0df6759](0df67599c0))
- Refactor and fix state management in the session service
([8b3ed05](8b3ed059c2))
- Update agent builder instructions and remove run command details
([89344da](89344da813))
- Clarify how to use adk built-in tool in instruction
([d22b8bf](d22b8bf890))
- Delegate the agent state reset logic to LoopAgent
([bb1ea74](bb1ea74924))
- Adjust the instruction about default model
([214986e](214986ebeb))
- Migrate invocation\_context to callback\_context
([e2072af](e2072af69f))
- Correct the callback signatures
([fa84bcb](fa84bcb575))
- Set default for `bypass_multi_tools_limit` to False for
GoogleSearchTool and VertexAiSearchTool
([6da7274](6da7274858))
- Add more clear instruction to the doc updater agent about one PR for
each recommended change
([b21d0a5](b21d0a50d6))
- Add a guideline to avoid content deletion
([16b030b](16b030b2b2))
- Add a sample agent for the `ReflectAndRetryToolPlugin`
([9b8a4aa](9b8a4aad6f))
- Disable the scheduled execution for issue triage workflow
([bae2102](bae21027d9))
##### Documentation
- Format README.md for samples
([0bdba30](0bdba30263))
- Bump models in llms and llms-full to Gemini 2.5
([ce46386](ce4638651f))
- Update gemini\_llm\_connection.py - typo spelling correction
([e6e2767](e6e2767c39))
- Announce the first ADK Community Call in the README
([731bb90](731bb9078d))
###
[`v1.16.0`](https://redirect.github.com/google/adk-python/blob/HEAD/CHANGELOG.md#1160-2025-10-08)
[Compare
Source](https://redirect.github.com/google/adk-python/compare/v1.15.1...v1.16.0)
##### Features
- **\[Core]**
- Implementation of LLM context compaction
([e0dd06f](e0dd06ff04))
- Support pause and resume an invocation in ADK
([ce9c39f](ce9c39f5a8),
[2f1040f](2f1040f296),
[1ee01cc](1ee01cc05a),
[f005414](f005414895),
[fbf7576](fbf75761bb))
- **\[Models]**
- Add `citation_metadata` to `LlmResponse`
([3f28e30](3f28e30c6d))
- Add support for gemma model via gemini api
([2b5acb9](2b5acb98f5))
- **\[Tools]**
- Add `dry_run` functionality to BigQuery `execute_sql` tool
([960eda3](960eda3d1f))
- Add BigQuery analyze\_contribution tool
([4bb089d](4bb089d386))
- Spanner ADK toolset supports customizable template SQL and
parameterized SQL
([da62700](da62700d73))
- Support Oauth2 client credentials grant type
([5c6cdcd](5c6cdcd197))
- Add `ReflectRetryToolPlugin` to reflect from errors and retry with
different arguments when tool errors
([e55b894](e55b8946d6))
- Support using `VertexAiSearchTool` built-in tool with other tools in
the same agent
([4485379](4485379a04))
- Support using google search built-in tool with other tools in the same
agent
([d3148da](d3148dacc9))
- **\[Evals]**
- Add HallucinationsV1 evaluation metric
([8c73d29](8c73d29c75))
- Add Rubric based tool use metric
([c984b9e](c984b9e552))
- **\[UI]**
- Adds `adk web` options for custom logo
([822efe0](822efe0065))
- **\[Observability]**
- **otel:** Switch CloudTraceSpanExporter to telemetry.googleapis.com
([bd76b46](bd76b46ce2))
##### Bug Fixes
- Adapt to new computer use tool name in genai sdk 1.41.0
([c6dd444](c6dd444fc9))
- Add AuthConfig json serialization in vertex ai session service
([636def3](636def3687))
- Added more agent instructions for doc content changes
([7459962](745996212d))
- Convert argument to pydantic model when tool declares it accepts
pydantic model as argument
([571c802](571c802fba))
- Do not re-create `App` object when loader returns an `App`
([d5c46e4](d5c46e4960))
- Fix compaction logic
([3f2b457](3f2b457efd))
- Fix the instruction in workflow\_triage example agent
([8f3ca03](8f3ca0359e))
- Fixes a bug that causes intermittent `pydantic` validation errors when
uploading files
([e680063](e68006386f))
- Handle A2A Task Status Update Event when streaming in
remote\_a2a\_agent
([a5cf80b](a5cf80b952))
- Make compactor optional in Events Compaction Config and add a default
([3f4bd67](3f4bd67b49))
- Rename SlidingWindowCompactor to LlmEventSummarizer and refine its
docstring
([f1abdb1](f1abdb1938))
- Rollback compaction handling from \_get\_contents
([84f2f41](84f2f417f7))
- Set `max_output_tokens` for the agent builder
([2e2d61b](2e2d61b6fe))
- Set default response modality to AUDIO in run\_session
([68402bd](68402bda49))
- Update remote\_a2a\_agent to better handle streaming events and avoid
duplicate responses
([8e5f361](8e5f361264))
- Update the load\_artifacts tool so that the model can reliably call it
for follow up questions about the same artifact
([238472d](238472d083))
- Fix VertexAiSessionService base\_url override to preserve initialized
http\_options
([8110e41](8110e41b36),
[c51ea0b](c51ea0b52e))
- Handle `App` instances returned by `agent_loader.load_agent`
([847df16](847df1638c))
##### Improvements
- Migrate VertexAiSessionService to use Agent Engine SDK
([90d4c19](90d4c19c51))
- Migrate VertexAiMemoryBankService to use Agent Engine SDK
([d1efc84](d1efc8461e),
[97b950b](97b950b36b),
[83fd045](83fd045718))
- Add support for resolving $ref and $defs in OpenAPI schemas
([a239716](a239716930))
##### Documentation
- Update BigQuery samples README
([3021266](30212669ff))
</details>
---
### Configuration
📅 **Schedule**: Branch creation - At any time (no schedule defined),
Automerge - At any time (no schedule defined).
🚦 **Automerge**: Disabled by config. Please merge this manually once you
are satisfied.
♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the
rebase/retry checkbox.
🔕 **Ignore**: Close this PR and you won't be reminded about this update
again.
---
- [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check
this box
---
This PR was generated by [Mend Renovate](https://mend.io/renovate/).
View the [repository job
log](https://developer.mend.io/github/googleapis/genai-toolbox).
Co-authored-by: Averi Kitsch <akitsch@google.com>
## Description
---
- This PR adds SingleStore database source and tools. The code is mostly
based on MySQL source and tools, and it uses the same go-mysql driver.
- https://github.com/singlestore-labs/singlestoredb-dev-image can be
used to deploy a test SingleStore instance. In this PR the default port
is set to 3308 so the command would be
```
docker run \
  -d --name singlestoredb-dev \
  -e ROOT_PASSWORD="YOUR SINGLESTORE ROOT PASSWORD" \
  -p 3308:3306 ghcr.io/singlestore-labs/singlestoredb-dev:latest
```
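A minimal sketch of connecting to that container with the same go-sql-driver/mysql driver the MySQL source uses (the DSN values are assumptions based on the command above):
```
// Sketch only: connect to the local SingleStore dev container on port 3308.
package main

import (
	"database/sql"
	"log"

	_ "github.com/go-sql-driver/mysql"
)

func main() {
	// 3308 matches the -p 3308:3306 mapping; the password matches ROOT_PASSWORD.
	dsn := "root:YOUR_SINGLESTORE_ROOT_PASSWORD@tcp(127.0.0.1:3308)/information_schema"
	db, err := sql.Open("mysql", dsn)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
	if err := db.Ping(); err != nil {
		log.Fatal(err)
	}
	log.Println("connected to SingleStore")
}
```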
## PR Checklist
---
> Thank you for opening a Pull Request! Before submitting your PR, there
are a
> few things you can do to make sure it goes smoothly:
- [x] Make sure you reviewed
[CONTRIBUTING.md](https://github.com/googleapis/genai-toolbox/blob/main/CONTRIBUTING.md)
- [x] Make sure to open an issue as a
[bug/issue](https://github.com/googleapis/genai-toolbox/issues/new/choose)
before writing your code! That way we can discuss the change, evaluate
designs, and agree on the general idea
- [x] Ensure the tests and linter pass
- [x] Code coverage does not decrease (if any source code was changed)
- [x] Appropriate docs were updated (if necessary)
- [x] Make sure to add `!` if this involves a breaking change
🛠️ Fixes https://github.com/googleapis/genai-toolbox/issues/1348
---------
Co-authored-by: Wenxin Du <117315983+duwenxin99@users.noreply.github.com>
## Description
This PR adds documentation for the new `tbadk` and its usage with ADK Go.
## PR Checklist
> Thank you for opening a Pull Request! Before submitting your PR, there
are a
> few things you can do to make sure it goes smoothly:
- [x] Make sure you reviewed
[CONTRIBUTING.md](https://github.com/googleapis/genai-toolbox/blob/main/CONTRIBUTING.md)
- [ ] Make sure to open an issue as a
[bug/issue](https://github.com/googleapis/genai-toolbox/issues/new/choose)
before writing your code! That way we can discuss the change, evaluate
designs, and agree on the general idea
- [x] Ensure the tests and linter pass
- [x] Code coverage does not decrease (if any source code was changed)
- [x] Appropriate docs were updated (if necessary)
- [x] Make sure to add `!` if this involves a breaking change
🛠️ Fixes #<issue_number_goes_here>
## Description
Add a read-only PostgreSQL custom list_schemas tool that returns the schemas
present in the database, excluding system and temporary schemas.
Returns the schema name, schema owner, grants, number of functions,
number of tables, and number of views within each schema.
<img width="1985" height="1043" alt="Screenshot 2025-10-20 at 7 45
45 PM"
src="https://github.com/user-attachments/assets/8c4f0bb8-587c-489a-8795-efa79e92b06f"
/>
<img width="3372" height="1694" alt="3NpZG7W6h3XGsM7"
src="https://github.com/user-attachments/assets/370b5440-cc48-4c4e-82ea-4fd508cbcf2b"
/>
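Roughly the kind of catalog query such a tool might run; this is an illustrative sketch only, and the tool's actual SQL and wiring may differ:
```
// Illustrative sketch: list non-system, non-temporary schemas with their owners.
package main

import (
	"context"
	"fmt"
	"log"
	"os"

	"github.com/jackc/pgx/v5"
)

const listSchemas = `
SELECT n.nspname AS schema_name,
       pg_get_userbyid(n.nspowner) AS schema_owner
FROM pg_catalog.pg_namespace n
WHERE n.nspname NOT IN ('pg_catalog', 'information_schema')
  AND n.nspname NOT LIKE 'pg_temp%'
  AND n.nspname NOT LIKE 'pg_toast%'
ORDER BY n.nspname;`

func main() {
	ctx := context.Background()
	conn, err := pgx.Connect(ctx, os.Getenv("DATABASE_URL"))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close(ctx)

	rows, err := conn.Query(ctx, listSchemas)
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()
	for rows.Next() {
		var name, owner string
		if err := rows.Scan(&name, &owner); err != nil {
			log.Fatal(err)
		}
		fmt.Println(name, owner)
	}
}
```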
## PR Checklist
> Thank you for opening a Pull Request! Before submitting your PR, there
are a
> few things you can do to make sure it goes smoothly:
- [x] Make sure you reviewed
[CONTRIBUTING.md](https://github.com/googleapis/genai-toolbox/blob/main/CONTRIBUTING.md)
- [x] Make sure to open an issue as a
[bug/issue](https://github.com/googleapis/genai-toolbox/issues/new/choose)
before writing your code! That way we can discuss the change, evaluate
designs, and agree on the general idea
- [x] Ensure the tests and linter pass
- [x] Code coverage does not decrease (if any source code was changed)
- [x] Appropriate docs were updated (if necessary)
- [x] Make sure to add `!` if this involves a breaking change
🛠️ Fixes #<issue_number_goes_here>
Co-authored-by: Yuan Teoh <45984206+Yuan325@users.noreply.github.com>
Update bigquery test to include column order for SELECT statement.
Update mindsdb tests to drop the table before creating it. The whole
integration test run pauses when any one of the integration tests fails. If
the run pauses after `CREATE` and before `DROP`, the integration test will
fail the next time it is run.
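A minimal sketch of the drop-before-create pattern (the table name and schema are illustrative):
```
// Sketch only: dropping first keeps the test re-runnable even if a previous run
// stopped between CREATE and DROP.
package mindsdbtest

import "database/sql"

func setupTestTable(db *sql.DB, table string) error {
	if _, err := db.Exec("DROP TABLE IF EXISTS " + table); err != nil {
		return err
	}
	_, err := db.Exec("CREATE TABLE " + table + " (id INT, name TEXT)")
	return err
}
```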
In general, tests should be parallelizable since they interact only with a
deterministic set of batches. The exception is list-batches, especially
with pagination, so run that one sequentially.
This doesn't make much difference for the current set of tests, but in
the near future we will have tests that create batches, which take tens
of seconds to even start running.
Rearrange subtests to be primarily organized by tool, which is more
understandable and easier to filter with `-run`. Test helper methods can
still be called multiple times in different subtests for different
tools.
Sample test output showing the new structure:
```
--- PASS: TestServerlessSparkToolEndpoints (2.01s)
--- PASS: TestServerlessSparkToolEndpoints/list-batches (1.78s)
--- PASS: TestServerlessSparkToolEndpoints/list-batches/success (1.23s)
--- PASS: TestServerlessSparkToolEndpoints/list-batches/success/filtered (0.34s)
--- PASS: TestServerlessSparkToolEndpoints/list-batches/success/empty (0.40s)
--- PASS: TestServerlessSparkToolEndpoints/list-batches/success/omit_page_size (0.42s)
--- PASS: TestServerlessSparkToolEndpoints/list-batches/success/one_page (0.43s)
--- PASS: TestServerlessSparkToolEndpoints/list-batches/success/20_batches (0.44s)
--- PASS: TestServerlessSparkToolEndpoints/list-batches/success/two_pages (0.54s)
--- PASS: TestServerlessSparkToolEndpoints/list-batches/errors (0.00s)
--- PASS: TestServerlessSparkToolEndpoints/list-batches/errors/negative_page_size (0.01s)
--- PASS: TestServerlessSparkToolEndpoints/list-batches/errors/zero_page_size (0.01s)
--- PASS: TestServerlessSparkToolEndpoints/list-batches/auth (0.77s)
--- PASS: TestServerlessSparkToolEndpoints/list-batches/auth/no_auth_token (0.00s)
--- PASS: TestServerlessSparkToolEndpoints/list-batches/auth/invalid_auth_token (0.00s)
--- PASS: TestServerlessSparkToolEndpoints/list-batches/auth/valid_auth_token (0.18s)
--- PASS: TestServerlessSparkToolEndpoints/parallel-tool-tests (0.00s)
--- PASS: TestServerlessSparkToolEndpoints/parallel-tool-tests/get-batch (0.09s)
--- PASS: TestServerlessSparkToolEndpoints/parallel-tool-tests/get-batch/errors (0.00s)
--- PASS: TestServerlessSparkToolEndpoints/parallel-tool-tests/get-batch/errors/full_batch_name (0.01s)
--- PASS: TestServerlessSparkToolEndpoints/parallel-tool-tests/get-batch/errors/missing_batch (0.11s)
--- PASS: TestServerlessSparkToolEndpoints/parallel-tool-tests/get-batch/success (0.21s)
--- PASS: TestServerlessSparkToolEndpoints/parallel-tool-tests/get-batch/success/found_batch (0.11s)
--- PASS: TestServerlessSparkToolEndpoints/parallel-tool-tests/get-batch/auth (0.60s)
--- PASS: TestServerlessSparkToolEndpoints/parallel-tool-tests/get-batch/auth/invalid_auth_token (0.00s)
--- PASS: TestServerlessSparkToolEndpoints/parallel-tool-tests/get-batch/auth/no_auth_token (0.00s)
--- PASS: TestServerlessSparkToolEndpoints/parallel-tool-tests/get-batch/auth/valid_auth_token (0.11s)
```
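A sketch of how that layout might look in Go test code (the helper names are illustrative):
```
// Sketch only: list-batches runs sequentially because its pagination cases need
// a stable view of the batches, while the other tools' subtests run in parallel.
package spark

import "testing"

func runListBatchesTests(t *testing.T)       { t.Log("success / errors / auth cases") }
func runToolTests(t *testing.T, tool string) { t.Logf("%s: success / errors / auth cases", tool) }

func TestServerlessSparkToolEndpoints(t *testing.T) {
	// Sequential: pagination assertions depend on the full, deterministic batch list.
	t.Run("list-batches", runListBatchesTests)

	t.Run("parallel-tool-tests", func(t *testing.T) {
		for _, tool := range []string{"get-batch"} {
			tool := tool
			t.Run(tool, func(t *testing.T) {
				t.Parallel() // safe: these subtests only read a deterministic set of batches
				runToolTests(t, tool)
			})
		}
	})
}
```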
This commit introduces a new `orderedmap` package to preserve the column
order of SQL query results when they are marshaled to JSON.
The default Go `json.Marshal` function sorts map keys, which was causing
the column order to be lost in the output of the database tools.
This commit updates the following tools to use the new `orderedmap`
package:
- `mysqlexecutesql`
- `mssqlexecutesql`
- `postgresexecutesql`
- `spannerexecutesql`
- `sqliteexecutesql`
- `bigqueryexecutesql`
A new test has been added to the `mysqlexecutesql` tool to verify that
the column order is preserved.
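An illustrative sketch of the idea behind such a package (not the package's actual API): keep keys in insertion order and implement `json.Marshaler` so `encoding/json` cannot re-sort them.
```
// Sketch only: a map that remembers insertion order and marshals in that order.
package orderedmap

import (
	"bytes"
	"encoding/json"
)

type Map struct {
	keys   []string
	values map[string]any
}

func New() *Map { return &Map{values: map[string]any{}} }

func (m *Map) Set(key string, value any) {
	if _, ok := m.values[key]; !ok {
		m.keys = append(m.keys, key)
	}
	m.values[key] = value
}

// MarshalJSON writes entries in insertion order instead of the alphabetical
// order encoding/json uses for plain maps.
func (m *Map) MarshalJSON() ([]byte, error) {
	var buf bytes.Buffer
	buf.WriteByte('{')
	for i, k := range m.keys {
		if i > 0 {
			buf.WriteByte(',')
		}
		kb, err := json.Marshal(k)
		if err != nil {
			return nil, err
		}
		vb, err := json.Marshal(m.values[k])
		if err != nil {
			return nil, err
		}
		buf.Write(kb)
		buf.WriteByte(':')
		buf.Write(vb)
	}
	buf.WriteByte('}')
	return buf.Bytes(), nil
}
```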
## Description
> Should include a concise description of the changes (bug or feature), its
> impact, along with a summary of the solution
## PR Checklist
> Thank you for opening a Pull Request! Before submitting your PR, there
are a
> few things you can do to make sure it goes smoothly:
- [ ] Make sure you reviewed
[CONTRIBUTING.md](https://github.com/googleapis/genai-toolbox/blob/main/CONTRIBUTING.md)
- [ ] Make sure to open an issue as a
[bug/issue](https://github.com/googleapis/genai-toolbox/issues/new/choose)
before writing your code! That way we can discuss the change, evaluate
designs, and agree on the general idea
- [ ] Ensure the tests and linter pass
- [ ] Code coverage does not decrease (if any source code was changed)
- [ ] Appropriate docs were updated (if necessary)
- [ ] Make sure to add `!` if this involves a breaking change
🛠️ Fixes #1492
---------
Co-authored-by: google-labs-jules[bot] <161369871+google-labs-jules[bot]@users.noreply.github.com>
Co-authored-by: Yuan Teoh <yuanteoh@google.com>
Co-authored-by: Yuan Teoh <45984206+Yuan325@users.noreply.github.com>
## Description
Corrects an issue where the `cloud-monitoring-query-prometheus` tool
would fail to populate the `authRequired` field in its generated
manifest.
## PR Checklist
> Thank you for opening a Pull Request! Before submitting your PR, there
are a
> few things you can do to make sure it goes smoothly:
- [x] Make sure you reviewed
[CONTRIBUTING.md](https://github.com/googleapis/genai-toolbox/blob/main/CONTRIBUTING.md)
- [ ] Make sure to open an issue as a
[bug/issue](https://github.com/googleapis/genai-toolbox/issues/new/choose)
before writing your code! That way we can discuss the change, evaluate
designs, and agree on the general idea
- [x] Ensure the tests and linter pass
- [x] Code coverage does not decrease (if any source code was changed)
- [ ] Appropriate docs were updated (if necessary)
- [ ] Make sure to add `!` if this involves a breaking change
🛠️ Fixes #<issue_number_goes_here>
## 🚀 Add MindsDB Integration: Expand Toolbox to Hundreds of Datasources

### Overview
This PR introduces comprehensive MindsDB integration to the Google GenAI
Toolbox, enabling SQL queries across hundreds of datasources through a
unified interface. MindsDB is the most widely adopted AI federated database
that automatically translates MySQL queries into REST APIs, GraphQL, and
native protocols.

### 🎯 Key Value for Google GenAI Toolbox Ecosystem

**1. Massive Datasource Expansion**
- **Before**: Toolbox limited to ~15 traditional databases
- **After**: Access to hundreds of datasources including Salesforce, Jira, GitHub, MongoDB, Gmail, Slack, and more
- **Impact**: Dramatically expands the toolbox's reach and utility for enterprise users

**2. Cross-Datasource Analytics**
- **New Capability**: Perform joins and analytics across different datasources seamlessly
- **Example**: Join Salesforce opportunities with GitHub activity to correlate sales with development activity
- **Value**: Enables comprehensive data analysis that was previously impossible

**3. API Abstraction Layer**
- **Innovation**: Write standard SQL queries that automatically translate to any API
- **Benefit**: Developers can query REST APIs, GraphQL, and native protocols using familiar SQL syntax
- **Impact**: Reduces complexity and learning curve for accessing diverse datasources

**4. ML Model Integration**
- **Enhanced Capability**: Use ML models as virtual tables for real-time predictions
- **Example**: Query customer churn predictions directly through SQL
- **Value**: Brings AI/ML capabilities into the standard SQL workflow
### 🔧 Technical Implementation

**Source Layer**
- ✅ New MindsDB source implementation using the MySQL wire protocol
- ✅ Comprehensive test coverage with integration tests
- ✅ Updated existing MySQL tools to support MindsDB sources
- ✅ Created dedicated MindsDB tools for enhanced functionality

**Tools Layer**
- ✅ `mindsdb-execute-sql`: Direct SQL execution across federated datasources
- ✅ `mindsdb-sql`: Parameterized SQL queries with template support
- ✅ Backward compatibility with existing MySQL tools

**Documentation & Configuration**
- ✅ Comprehensive documentation with real-world examples
- ✅ Prebuilt configuration for easy setup
- ✅ Updated CLI help text and command-line options
### 📊 Supported Datasources

**Business Applications**
- Salesforce (leads, opportunities, accounts)
- Jira (issues, projects, workflows)
- GitHub (repositories, commits, PRs)
- Slack (channels, messages, teams)
- HubSpot (contacts, companies, deals)

**Databases & Storage**
- MongoDB (NoSQL collections as structured tables)
- Redis (key-value stores)
- Elasticsearch (search and analytics)
- S3, filesystems, etc. (file storage)

**Communication & Email**
- Gmail/Outlook (emails, attachments)
- Microsoft Teams (communications, files)
- Discord (server data, messages)
### 🎯 Example Use Cases

**Cross-Datasource Analytics**
```
-- Join Salesforce opportunities with GitHub activity
SELECT
  s.opportunity_name,
  s.amount,
  g.repository_name,
  COUNT(g.commits) as commit_count
FROM salesforce.opportunities s
JOIN github.repositories g ON s.account_id = g.owner_id
WHERE s.stage = 'Closed Won'
GROUP BY s.opportunity_name, s.amount, g.repository_name;
```
**Email & Communication Analysis**
```
-- Analyze email patterns with Slack activity
SELECT
  e.sender,
  e.subject,
  s.channel_name,
  COUNT(s.messages) as message_count
FROM gmail.emails e
JOIN slack.messages s ON e.sender = s.user_name
WHERE e.date >= '2024-01-01'
GROUP BY e.sender, e.subject, s.channel_name;
```
### 🚀 Benefits for Google GenAI Toolbox
- **Enterprise Adoption**: Enables access to enterprise datasources (Salesforce, Jira, etc.)
- **Developer Productivity**: Familiar SQL interface for any datasource
- **AI/ML Integration**: Seamless integration of ML models into SQL workflows
- **Scalability**: Single interface for hundreds of datasources
- **Competitive Advantage**: Unique federated database capabilities in the toolbox ecosystem

### 📈 Impact Metrics
- **Datasource Coverage**: +1000% increase in supported datasources
- **API Abstraction**: Eliminates the need to learn individual API syntaxes
- **Cross-Platform Analytics**: Enables previously impossible data correlations
- **ML Integration**: Brings AI capabilities into standard SQL workflows

### 🔗 Resources
- MindsDB Documentation
- MindsDB GitHub
- Updated Toolbox Documentation

### ✅ Testing
- ✅ Unit tests for MindsDB source implementation
- ✅ Integration tests with real datasource examples
- ✅ Backward compatibility with existing MySQL tools
- ✅ Documentation examples tested and verified

This integration transforms the Google GenAI Toolbox from a traditional
database tool into a comprehensive federated data platform, enabling users to
query and analyze data across their entire technology stack through a unified
SQL interface.
---------
Co-authored-by: duwenxin <duwenxin@google.com>
Co-authored-by: setohe0909 <setohe.09@gmail.com>
Co-authored-by: Kurtis Van Gent <31518063+kurtisvg@users.noreply.github.com>
Co-authored-by: Wenxin Du <117315983+duwenxin99@users.noreply.github.com>
Co-authored-by: Yuan Teoh <45984206+Yuan325@users.noreply.github.com>
## Description
The order of parameters in alloydb_ai_nl.execute_nl_query changed, which
broke the alloydbainl tool. This change adds named parameters to the
statement in the tool, which fixes the issue.
This is a breaking change for existing users who defined their natural
language configuration with the `create_configuration` operation. The
`execute_nl_query` input argument parameters and their order were updated
recently, so users who define their configuration with the latest
instructions would otherwise not be able to use Toolbox. This update is
unavoidable.
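A rough sketch of the named-argument form of the statement; the parameter names here are assumed from the AlloyDB AI NL documentation and may not match the tool's exact statement:
```
// Assumed parameter names; the tool's actual statement may differ. Named
// arguments keep the call working even if the positional order of the
// execute_nl_query parameters changes again.
package alloydbainl

const executeNLQueryStmt = `
SELECT alloydb_ai_nl.execute_nl_query(
	nl_question  => $1,
	nl_config_id => $2
);`
```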
Previously, users created the configuration with the following:
```
CALL google_ml.create_model(model_id => 'gemini-2_0_flash', ...);
SELECT alloydb_ai_nl.g_manage_configuration(
'create_configuration', -- operation
'my_nl_config', -- configuration_id_in
'gemini-2_0_flash' -- model_id_in
);
SELECT alloydb_ai_nl.g_manage_configuration(
operation => 'register_table_view',
configuration_id_in => 'my_nl_config',
table_views_in=>'{auth_psv}');
```
Currently, users create the configuration with the following:
```
SELECT alloydb_ai_nl.g_create_configuration(configuration_id =>'my_nl_config');
SELECT alloydb_ai_nl.g_manage_configuration(
operation => 'register_table_view',
configuration_id_in => 'my_nl_config',
table_views_in=>'{auth_psv}'
);
```
This PR also updates the nl_question from "return 1" to "return the
number 1" to provide more context to the model.
A new `ainl_update_testing` database was created with the new NL
configuration so that existing integration tests are not broken before this
PR is merged. Once this is merged, the existing `test_database` database will
be updated and the integration tests will point back to it.
## PR Checklist
> Thank you for opening a Pull Request! Before submitting your PR, there
are a
> few things you can do to make sure it goes smoothly:
- [X] Make sure you reviewed
[CONTRIBUTING.md](https://github.com/googleapis/genai-toolbox/blob/main/CONTRIBUTING.md)
- [X] Make sure to open an issue as a
[bug/issue](https://github.com/googleapis/genai-toolbox/issues/new/choose)
before writing your code! That way we can discuss the change, evaluate
designs, and agree on the general idea
- [X] Ensure the tests and linter pass
- [X] Code coverage does not decrease (if any source code was changed)
- [X] Appropriate docs were updated (if necessary)
- [X] Make sure to add `!` if this involves a breaking change
🛠️ Fixes #1752
---------
Co-authored-by: Yuan Teoh <yuanteoh@google.com>
Co-authored-by: Yuan Teoh <45984206+Yuan325@users.noreply.github.com>
## Description
Update CONTRIBUTING.md with correct file name conventions
## PR Checklist
- [x] Make sure you reviewed
[CONTRIBUTING.md](https://github.com/googleapis/genai-toolbox/blob/main/CONTRIBUTING.md)
- [ ] Make sure to open an issue as a
[bug/issue](https://github.com/googleapis/genai-toolbox/issues/new/choose)
before writing your code! That way we can discuss the change, evaluate
designs, and agree on the general idea
- [x] Ensure the tests and linter pass
- [x] Code coverage does not decrease (if any source code was changed)
- [x] Appropriate docs were updated (if necessary)
- [x] Make sure to add `!` if this involves a breaking change
Co-authored-by: Averi Kitsch <akitsch@google.com>
Add client cache and automatic cache cleanup.
The cache is managed by a map with OAuth access tokens as the keys.
Upon user tool invocation, the client is fetched from the existing cache or a
new one is created.
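A minimal sketch of the caching pattern (types, field names, and TTL handling are illustrative, not the actual implementation):
```
// Sketch only: clients keyed by OAuth access token, reused on later
// invocations, and evicted by a periodic cleanup pass.
package cache

import (
	"net/http"
	"sync"
	"time"
)

type clientCache struct {
	mu      sync.Mutex
	entries map[string]*cacheEntry // key: OAuth access token
	ttl     time.Duration
}

type cacheEntry struct {
	client   *http.Client
	lastUsed time.Time
}

// get returns the cached client for the token, creating one if necessary.
func (c *clientCache) get(token string, newClient func() *http.Client) *http.Client {
	c.mu.Lock()
	defer c.mu.Unlock()
	if e, ok := c.entries[token]; ok {
		e.lastUsed = time.Now()
		return e.client
	}
	e := &cacheEntry{client: newClient(), lastUsed: time.Now()}
	c.entries[token] = e
	return e.client
}

// cleanup drops entries that have not been used within the TTL.
func (c *clientCache) cleanup() {
	c.mu.Lock()
	defer c.mu.Unlock()
	for token, e := range c.entries {
		if time.Since(e.lastUsed) > c.ttl {
			delete(c.entries, token)
		}
	}
}
```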
## Description
The debug context logger does not take value placeholders. The statements
must first be converted to a string.
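A small sketch of the fix, using Go's log/slog as a stand-in for the repo's logger:
```
// Sketch only: DebugContext takes a plain message, so format the statement into
// the message first instead of passing printf-style placeholders.
package main

import (
	"context"
	"fmt"
	"log/slog"
)

func debugStatement(ctx context.Context, logger *slog.Logger, statement string) {
	// Wrong: logger.DebugContext(ctx, "executing statement: %s", statement)
	// would treat the statement as a stray structured-logging argument.
	logger.DebugContext(ctx, fmt.Sprintf("executing statement: %s", statement))
}

func main() {
	debugStatement(context.Background(), slog.Default(), "SELECT 1")
}
```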
## PR Checklist
> Thank you for opening a Pull Request! Before submitting your PR, there
are a
> few things you can do to make sure it goes smoothly:
- [ ] Make sure you reviewed
[CONTRIBUTING.md](https://github.com/googleapis/genai-toolbox/blob/main/CONTRIBUTING.md)
- [ ] Make sure to open an issue as a
[bug/issue](https://github.com/googleapis/genai-toolbox/issues/new/choose)
before writing your code! That way we can discuss the change, evaluate
designs, and agree on the general idea
- [ ] Ensure the tests and linter pass
- [ ] Code coverage does not decrease (if any source code was changed)
- [ ] Appropriate docs were updated (if necessary)
- [ ] Make sure to add `!` if this involves a breaking change
🛠️ Fixes #<issue_number_goes_here>
## Description
The Spanner source page in the docs was missing the spanner-list-tables tool.
## PR Checklist
- [x] Make sure you reviewed
[CONTRIBUTING.md](https://github.com/googleapis/genai-toolbox/blob/main/CONTRIBUTING.md)
- [x] Make sure to open an issue as a
[bug/issue](https://github.com/googleapis/genai-toolbox/issues/new/choose)
before writing your code! That way we can discuss the change, evaluate
designs, and agree on the general idea
- [x] Ensure the tests and linter pass
- [x] Code coverage does not decrease (if any source code was changed)
- [x] Appropriate docs were updated (if necessary)
- [x] Make sure to add `!` if this involves a breaking change
🛠️ Fixes #1836
Co-authored-by: Averi Kitsch <akitsch@google.com>
## Description
* Add new database metadata tools to list.
* Break the list down into useful sections.
* Provide a link to the generated prebuilt tools page.
## Description
Add new tools to get metadata from databases through Looker
* get_connections
* get_connection_schemas
* get_connection_databases
* get_connection_tables
* get_connection_table_columns
This PR contains the following updates:
| Package | Type | Update | Change |
|---|---|---|---|
| [github.com/googleapis/mcp-toolbox-sdk-go](https://redirect.github.com/googleapis/mcp-toolbox-sdk-go) | require | digest | `eb73e0c` -> `f1f6a9f` |
---
### Configuration
📅 **Schedule**: Branch creation - At any time (no schedule defined),
Automerge - At any time (no schedule defined).
🚦 **Automerge**: Disabled by config. Please merge this manually once you
are satisfied.
♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the
rebase/retry checkbox.
🔕 **Ignore**: Close this PR and you won't be reminded about this update
again.
---
- [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check
this box
---
This PR was generated by [Mend Renovate](https://mend.io/renovate/).
View the [repository job
log](https://developer.mend.io/github/googleapis/genai-toolbox).
Co-authored-by: Averi Kitsch <akitsch@google.com>
## Description
Invalid SQL, such as selecting from invalid tables or granting bad
permissions, resulted in a null result because of a missing error statement.
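An illustrative sketch of the fix (the function shape is hypothetical):
```
// Sketch only: return the execution error instead of silently producing a null
// result for invalid SQL.
package executesql

import (
	"context"
	"database/sql"
	"fmt"
)

func executeSQL(ctx context.Context, db *sql.DB, statement string) ([]map[string]any, error) {
	rows, err := db.QueryContext(ctx, statement)
	if err != nil {
		// Previously this error was not surfaced, so invalid tables or bad
		// grants showed up as a null result instead of an error message.
		return nil, fmt.Errorf("unable to execute statement: %w", err)
	}
	defer rows.Close()

	var results []map[string]any
	// ... scan each row into a map and append it to results ...
	return results, rows.Err()
}
```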
## PR Checklist
> Thank you for opening a Pull Request! Before submitting your PR, there
are a
> few things you can do to make sure it goes smoothly:
- [ ] Make sure you reviewed
[CONTRIBUTING.md](https://github.com/googleapis/genai-toolbox/blob/main/CONTRIBUTING.md)
- [ ] Make sure to open an issue as a
[bug/issue](https://github.com/googleapis/genai-toolbox/issues/new/choose)
before writing your code! That way we can discuss the change, evaluate
designs, and agree on the general idea
- [ ] Ensure the tests and linter pass
- [ ] Code coverage does not decrease (if any source code was changed)
- [ ] Appropriate docs were updated (if necessary)
- [ ] Make sure to add `!` if this involves a breaking change
🛠️ Fixes #1638
## Description
This change adds service account impersonation support to BigQuery.
Users can now optionally supply an `impersonateServiceAccount` field in
their `bigquery-source` config to enable impersonation.
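For reference, this is roughly how impersonation is typically wired for a BigQuery client in Go (an illustrative sketch, not the source's exact code); the service account email would come from the new `impersonateServiceAccount` field:
```
// Sketch only: build an impersonated token source and hand it to the client.
package main

import (
	"context"
	"log"

	"cloud.google.com/go/bigquery"
	"google.golang.org/api/impersonate"
	"google.golang.org/api/option"
)

func newImpersonatedClient(ctx context.Context, project, serviceAccount string) (*bigquery.Client, error) {
	ts, err := impersonate.CredentialsTokenSource(ctx, impersonate.CredentialsConfig{
		TargetPrincipal: serviceAccount,
		Scopes:          []string{bigquery.Scope},
	})
	if err != nil {
		return nil, err
	}
	return bigquery.NewClient(ctx, project, option.WithTokenSource(ts))
}

func main() {
	ctx := context.Background()
	client, err := newImpersonatedClient(ctx, "my-project", "sa@my-project.iam.gserviceaccount.com")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
}
```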
---
## PR Checklist
---
> Thank you for opening a Pull Request! Before submitting your PR, there
are a
> few things you can do to make sure it goes smoothly:
- [x] Make sure you reviewed
[CONTRIBUTING.md](https://github.com/googleapis/genai-toolbox/blob/main/CONTRIBUTING.md)
- [x] Make sure to open an issue as a
[bug/issue](https://github.com/googleapis/genai-toolbox/issues/new/choose)
before writing your code! That way we can discuss the change, evaluate
designs, and agree on the general idea
- [x] Ensure the tests and linter pass
- [x] Code coverage does not decrease (if any source code was changed)
- [x] Appropriate docs were updated (if necessary)
- [x] Make sure to add `!` if this involves a breaking change
🛠️ Fixes #906
## Description
Remove the `ipAddress` field since it is not an input for the Cloud SQL
SQL Server source.
The variable is kept in the source's config but removed from everywhere else
in the code. This PREVENTS a breaking change, since the validator won't flag
it as an "extra field".
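A hypothetical sketch of the approach (the field set is illustrative):
```
// Sketch only: the field stays in the source config so existing YAML that still
// sets ipAddress is not rejected by the validator, but nothing reads it anymore.
package cloudsqlmssql

type Config struct {
	Project  string `yaml:"project"`
	Region   string `yaml:"region"`
	Instance string `yaml:"instance"`
	Database string `yaml:"database"`
	User     string `yaml:"user"`
	Password string `yaml:"password"`
	// Retained only for backward compatibility: ignored everywhere else in the
	// code, but kept so configs with ipAddress are not flagged as an "extra field".
	IPAddress string `yaml:"ipAddress"`
}
```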
**Will have to update the following as well:**
(1) Cloud docs
https://cloud.google.com/sql/docs/sqlserver/pre-built-tools-with-mcp-toolbox
(2) gemini-cli-extensions
https://github.com/gemini-cli-extensions/cloud-sql-sqlserver
## PR Checklist
> Thank you for opening a Pull Request! Before submitting your PR, there
are a
> few things you can do to make sure it goes smoothly:
- [x] Make sure you reviewed
[CONTRIBUTING.md](https://github.com/googleapis/genai-toolbox/blob/main/CONTRIBUTING.md)
- [x] Make sure to open an issue as a
[bug/issue](https://github.com/googleapis/genai-toolbox/issues/new/choose)
before writing your code! That way we can discuss the change, evaluate
designs, and agree on the general idea
- [x] Ensure the tests and linter pass
- [x] Code coverage does not decrease (if any source code was changed)
- [x] Appropriate docs were updated (if necessary)
- [x] Make sure to add `!` if this involves a breaking change
🛠️ Fixes #1549
## Description
> Should include a concise description of the changes (bug or feature), its
> impact, along with a summary of the solution
## PR Checklist
> Thank you for opening a Pull Request! Before submitting your PR, there
are a
> few things you can do to make sure it goes smoothly:
- [ ] Make sure you reviewed
[CONTRIBUTING.md](https://github.com/googleapis/genai-toolbox/blob/main/CONTRIBUTING.md)
- [ ] Make sure to open an issue as a
[bug/issue](https://github.com/googleapis/genai-toolbox/issues/new/choose)
before writing your code! That way we can discuss the change, evaluate
designs, and agree on the general idea
- [ ] Ensure the tests and linter pass
- [ ] Code coverage does not decrease (if any source code was changed)
- [ ] Appropriate docs were updated (if necessary)
- [ ] Make sure to add `!` if this involves a breaking change
🛠️ Fixes #<issue_number_goes_here>
---------
Co-authored-by: Yuan Teoh <45984206+Yuan325@users.noreply.github.com>
## Description
---
This introduces a breaking change. The bigquery-get-dataset-info tool
will now enforce the allowed datasets setting from its BigQuery source
configuration. Previously, this setting had no effect on the tool.
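An illustrative sketch of the enforcement (not the tool's actual code):
```
// Sketch only: reject dataset IDs outside the source's allowed-datasets list
// before any metadata is fetched.
package tools

import "fmt"

func checkDatasetAllowed(datasetID string, allowed []string) error {
	if len(allowed) == 0 {
		return nil // no allowed-datasets restriction configured on the source
	}
	for _, d := range allowed {
		if d == datasetID {
			return nil
		}
	}
	return fmt.Errorf("dataset %q is not in the configured allowed datasets", datasetID)
}
```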
## PR Checklist
---
> Thank you for opening a Pull Request! Before submitting your PR, there
are a
> few things you can do to make sure it goes smoothly:
- [ ] Make sure you reviewed
[CONTRIBUTING.md](https://github.com/googleapis/genai-toolbox/blob/main/CONTRIBUTING.md)
- [ ] Make sure to open an issue as a
[bug/issue](https://github.com/googleapis/genai-toolbox/issues/new/choose)
before writing your code! That way we can discuss the change, evaluate
designs, and agree on the general idea
- [ ] Ensure the tests and linter pass
- [ ] Code coverage does not decrease (if any source code was changed)
- [ ] Appropriate docs were updated (if necessary)
- [ ] Make sure to add `!` if this involves a breaking change
🛠️ Part of https://github.com/googleapis/genai-toolbox/issues/873
This PR contains the following updates:
| Package | Change |
|---|---|
| [google-genai](https://redirect.github.com/googleapis/python-genai) | `==1.46.0` -> `==1.47.0` |
---
### Release Notes
<details>
<summary>googleapis/python-genai (google-genai)</summary>
###
[`v1.47.0`](https://redirect.github.com/googleapis/python-genai/blob/HEAD/CHANGELOG.md#1470-2025-10-29)
[Compare
Source](https://redirect.github.com/googleapis/python-genai/compare/v1.46.0...v1.47.0)
##### Features
- Add safety\_filter\_level and person\_generation for Imagen upscaling
([6196b1b](6196b1b425))
- Add support for preference optimization tuning in the SDK.
([4540f9d](4540f9d25f))
- Pass file name to the backend when uploading with a file path
([4fa2edd](4fa2edd927))
- Support default global location when not using api key with vertexai
backend
([6340ce0](6340ce0cf0))
- Support retries in API requests
([ac70ecd](ac70ecdb02))
##### Bug Fixes
- Check environment Vertex AI api key for credential precedence
([9bd758c](9bd758c50c))
- Correct pydantic version range (bytes fields are broken with
pydantic<=2.8).
([d24cb56](d24cb5634e))
- Make `__del__` methods more robust in `_api_client` and `client`.
([64cab58](64cab58b38))
- Setting custom httpx async client will ensure using httpx client even
if aiohttp is installed
([7bd1bde](7bd1bdef36))
- Support custom\_base\_url for Live and other APIs when
project/location are unset and even when project/location are set in the
environment, and avoid Automatic Default Credentials
([7bd1bde](7bd1bdef36))
##### Documentation
- Add docstring for classes and fields that are not supported in Gemini
or Vertex API
([4a6c6af](4a6c6af190))
- Add docstring for enum classes that are not supported in Gemini or
Vertex API
([909f26b](909f26b926))
- Add documentation for the retry behavior
([ff12b46](ff12b46294))
- Update Codegen Instructions to include newer models and use consistent
formatting.
([f0b0a94](f0b0a94aa1))
- Update README.md and index.rst to use consistent spacing for Python
Samples
([2e5aa1f](2e5aa1f933))
</details>
---
### Configuration
📅 **Schedule**: Branch creation - At any time (no schedule defined),
Automerge - At any time (no schedule defined).
🚦 **Automerge**: Disabled by config. Please merge this manually once you
are satisfied.
♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the
rebase/retry checkbox.
🔕 **Ignore**: Close this PR and you won't be reminded about this update
again.
---
- [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check
this box
---
This PR was generated by [Mend Renovate](https://mend.io/renovate/).
View the [repository job
log](https://developer.mend.io/github/googleapis/genai-toolbox).
Co-authored-by: Averi Kitsch <akitsch@google.com>