Example:
```
alloydb-operations-get:
  kind: wait-for-operation
  source: alloydb-api-source
  method: GET
  path: /v1/projects/{{.projectId}}/locations/{{.locationId}}/operations/{{.operationId}}
  description: "Makes an API call to check whether the operation is done. Run this tool first, then the wait-for tool; if the operation is still in the create phase, trigger this tool again after 3 minutes and print a message saying it is still not done and that we will notify once it is done."
  pathParams:
    - name: projectId
      type: string
      description: The dynamic path parameter
    - name: locationId
      type: string
      description: The dynamic path parameter
      default: us-central1
    - name: operationId
      type: string
      description: The ID of the operation from the previous task whose status should be checked
```
This pull request introduces a new tool for executing arbitrary Cypher
queries against a Neo4j database (`neo4j-execute-cypher`) and implements
robust query classification functionality to distinguish between read
and write operations. The changes include updates to documentation, the
addition of a query classifier, and comprehensive test coverage for the
classifier.
### Addition of `neo4j-execute-cypher` tool:
- **Documentation**: Added a new markdown file `neo4j-execute-cypher.md`
that explains the tool's functionality, usage, and configuration
options, including the ability to enforce read-only mode for security.
- **Import statement**: Registered the new tool in the `cmd/root.go`
file to make it available in the toolbox.
### Query classification functionality:
- **Query classifier implementation**: Added `QueryClassifier` in
`classifier.go`, which classifies Cypher queries into read or write
operations based on keywords, procedures, and subquery analysis. It
supports handling edge cases like nested subqueries, multi-word
keywords, and invalid syntax.
- **Test coverage**: Created extensive tests in `classifier_test.go` to
validate the classifier's behavior across various query types, including
abuse cases, subqueries, and procedure calls. Tests ensure the
classifier is robust and does not panic on malformed queries.
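For illustration only, a minimal sketch of how such a tool might appear in `tools.yaml`; the source name and the read-only flag shown here are assumptions based on the description above, not the tool's confirmed schema:
```
tools:
  execute-cypher:
    kind: neo4j-execute-cypher
    source: my-neo4j-source   # hypothetical source name
    description: Executes an arbitrary Cypher query against the Neo4j database.
    readOnly: true            # assumed flag name; queries classified as writes would be rejected
```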
---------
Co-authored-by: Yuan Teoh <45984206+Yuan325@users.noreply.github.com>
Previously, code snippets showing how to use the new Toolbox Go Core SDK were added to the README.
Recently we released a `tbgenkit` client-side package to make integration with Genkit Go easier.
This PR updates the code snippet to use the newer package.
---------
Co-authored-by: Yuan Teoh <45984206+Yuan325@users.noreply.github.com>
### Description
Enhance the [JavaScript
Quickstart](https://googleapis.github.io/genai-toolbox/getting-started/local_quickstart_js/)
by adding a new tab dedicated to integrating the MCP Toolbox with
LlamaIndex. This provides users with a direct, runnable example of how
to utilize Toolbox's capabilities within a LlamaIndex-powered LLM agent
for hotel search and booking operations.
The changes include:
- Addition of a new `LlamaIndex` tab in `docs/quickstarts/js-local.md`.
- Corresponding `hotelAgent.js` code snippet for LlamaIndex integration.
### Related Issue
This PR addresses and resolves #930.
---------
Signed-off-by: Anushka Saxena <anushka.saxena-it-2k19-05@iips.edu.in>
Co-authored-by: Twisha Bansal <58483338+twishabansal@users.noreply.github.com>
Support both generic and typed maps. Config example:
```
parameters:
  - name: user_scores
    type: map
    description: A map of user IDs to their scores. All scores must be integers.
    valueType: integer # This enforces the value type for all entries. Leave it blank for a generic map.
Represented as `Object` with `additionalProperties` in manifests.
Added a util function to convert json.Number (string type) to int/float
types to address the problem where int/float values are converted to
strings for the generic map.
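For illustration, roughly how the `user_scores` parameter above might surface in a manifest (shown here as YAML for readability; the exact field layout is an assumption):
```
user_scores:
  type: object
  description: A map of user IDs to their scores. All scores must be integers.
  additionalProperties:
    type: integer   # omitted for a generic map, so any value type is accepted
```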
Firestore is a NoSQL document database built for automatic scaling, high
performance, and ease of application development. It's a fully managed,
serverless database that supports mobile, web, and server development.
This change adds Firestore as a source in toolbox.
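A hedged sketch of what a Firestore source entry might look like in `tools.yaml`; the field names (`project`, `database`) are assumptions, not the confirmed schema:
```
sources:
  my-firestore-source:
    kind: firestore
    project: my-gcp-project   # assumed field name for the GCP project ID
    database: "(default)"     # assumed field name; Firestore's default database
```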
```
tools:
  wait_for_tool:
    kind: wait-for
    description: "Puts the chat to sleep for a given timeout, for example 3m, 10s, or 5m. For tasks such as cluster creation there is no need to wait; instance creation takes longer, for example 8m. Use the timeout value as the default if nothing is provided."
    timeout: 10s
```
Output:
```
prernakakkar@prernakakkar:~/senseai/genai-toolbox$ curl -X POST -H "Content-Type: application/json" -d '{"duration": "5m"}' http://127.0.0.1:5000/api/tool/wait_for_tool/invoke
{"result":"[\"Wait for 5m0s completed successfully.\"]"}
```
This PR updates the docs, specifically the quickstart for working with MCP
Inspector. The tool recently added a Session Token for authorization
that can easily be missed; this change updates the guide to make the
Session Token easy to find and use.
---------
Co-authored-by: Nick Acosta <nick.acosta@getcollate.io>
Co-authored-by: Yuan Teoh <45984206+Yuan325@users.noreply.github.com>
feat(docs): Enhance Bigtable GoogleSQL DML clarity
**Description:**
This PR updates the `bigtable-sql` tool documentation to provide a
clearer understanding of GoogleSQL's DML capabilities within Bigtable.
**Changes Made:**
- Added a note indicating that Bigtable's GoogleSQL dialect only
supports `SELECT` DML operations.
- Explicitly stated that `INSERT`, `UPDATE`, and `DELETE` DML statements
are not supported.
- Updated the external link to directly reference the "Use cases"
section in the official Google Cloud Bigtable GoogleSQL overview, where
this limitation is detailed.
**Reasoning:**
The previous documentation, while linking to the overview, did not
explicitly highlight the DML limitations. This clarification will
prevent potential confusion for users expecting full DML support,
ensuring the documentation accurately reflects Bigtable's capabilities
with GoogleSQL.
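For context, a sketch of a `bigtable-sql` tool that stays within the documented limitation; the source name, table, and statement are hypothetical:
```
tools:
  list-recent-events:
    kind: bigtable-sql
    source: my-bigtable-source   # hypothetical source name
    description: Lists recent events. Only SELECT statements are supported; INSERT, UPDATE, and DELETE are not.
    statement: SELECT event_id, event_time FROM events LIMIT 100
```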
---------
Co-authored-by: Twisha Bansal <58483338+twishabansal@users.noreply.github.com>
This PR adds the Go SDK code snippets to the docsite, following the same
pattern as Python and JS.
The code snippets are for:
- Core usage
- LangChain Go
- Genkit Go
- Go GenAI
- OpenAI Go
This PR adds redirection links to the actual SDK content pages.
This is added because Hugo still generates a regular HTML site page for
each of the client SDK files, even though it points to the external
GitHub link.
In case a user finds themselves at the internal URL pointing to these
files, this should still redirect them to the specific GitHub repos.
## Summary
- Added configurable query timeout to MySQL source configuration (see the sketch after this list)
- Updated connection DSN to include readTimeout parameter
- Added documentation and example usage
- Added test coverage for queryTimeout configuration
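A hedged sketch of the configuration described in the summary above; `queryTimeout` is the field this PR adds, while the other source fields follow the usual MySQL source layout and are shown only for illustration:
```
sources:
  my-mysql-source:
    kind: mysql
    host: 127.0.0.1
    port: 3306
    database: my_db
    user: ${MYSQL_USER}
    password: ${MYSQL_PASSWORD}
    queryTimeout: 30s   # added to the connection DSN as the readTimeout parameter
```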
## Test plan
- [x] Added unit test for queryTimeout configuration parsing
- [x] Updated documentation with queryTimeout field description
- [x] Verified timeout parameter is correctly added to DSN when
specified
## Caveat
When queryTimeout is exceeded, we get an obscure error message
([screenshot](https://github.com/user-attachments/assets/fd292f91-328d-4ebc-9a87-2d92e9887300)):
```
unable to execute query: invalid connection
```
This seems to be a problem with the mysql-go library:
https://stackoverflow.com/q/65253798/10720618
I tried to use
[MAX_EXECUTION_TIME](https://dev.mysql.com/doc/refman/8.0/en/optimizer-hints.html#optimizer-hints-execution-time),
but it didn't work as expected (my `sleep(MAX_EXECUTION_TIME+3)` query
finished successfully after MAX_EXECUTION_TIME milliseconds).
Any ideas on what can be done here? The error message is very
misleading. My goal with adding timeouts is to communicate to the LLM
when it has issued a slow query and force it to adjust (e.g. query
indexes and write a more optimized query) but this defeats the purpose.
---------
Co-authored-by: Wenxin Du <117315983+duwenxin99@users.noreply.github.com>
Updated the link in the Cloud Run deployment guide for `tools.yaml`
setup. The previous link incorrectly pointed to a `localhost` source
example, which caused confusion and deployment failures. The new link
directs users to the guide for configuring cloud-based sources, ensuring
a correct setup.
Allow Toolbox server to automatically update when users modify their
tool configuration file(s), instead of requiring a restart.
This feature is automatically enabled, but can be turned off with the
flag `--disable-reload`.
Optional projectID parameter enables dynamic, cross-project resource
access in BigQuery tools.
This allows a single tool configuration to target different projects at
runtime, rather than being fixed to the project in its source
configuration.
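Purely as an illustration of the idea, one possible shape for such a tool; the kind, parameter name, and defaulting behavior here are assumptions, not the confirmed configuration:
```
tools:
  run-bq-query:
    kind: bigquery-execute-sql   # assumed tool kind
    source: my-bigquery-source
    description: Runs a SQL query, optionally against a project other than the source's.
    parameters:
      - name: projectId          # assumed parameter name
        type: string
        description: Optional project to run the query in; defaults to the source's project.
```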
---------
Co-authored-by: Yuan Teoh <45984206+Yuan325@users.noreply.github.com>
Co-authored-by: Wenxin Du <117315983+duwenxin99@users.noreply.github.com>
## Problem
Users attempting to filter results in a SQL query based on an array
parameter from a tool may intuitively write a `statement` using the `IN`
clause, like so:
```sql
SELECT * FROM flights WHERE preferred_airlines IN ($1);
```
When this query is executed with an array argument (e.g., `["Delta",
"United"]`), it fails with a cryptic error message from the database
driver:
```
Exception: error while invoking tool: unable to execute query: failed to encode args[0]: unable to encode []interface {}{"Delta", "United"} into text format for text (OID 25): cannot find encode plan
```
This error occurs because the driver does not automatically expand the
single `$1` placeholder into a list of values `('Delta', 'United')`.
Instead, it tries to encode the entire Go slice `[]interface{}` as a
single text value, which fails. This creates a point of friction, as the
correct syntax is not immediately obvious and can lead to user
frustration and debugging time.
## Solution
This PR updates our documentation and example usage to demonstrate the
correct SQL syntax for handling array parameters. The proper way to
check for a value's existence in an array parameter is by using
PostgreSQL's `ANY()` operator:
```sql
SELECT * FROM flights WHERE preferred_airlines = ANY($1);
```
When this syntax is used, the database driver correctly interprets the
Go slice passed as `$1` as a PostgreSQL array, and the query executes as
intended.
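For reference, a sketch of a tool definition that uses the corrected syntax with an array parameter; the source name and surrounding fields are illustrative assumptions:
```
tools:
  search-flights-by-airline:
    kind: postgres-sql
    source: my-pg-source   # hypothetical source name
    description: Lists flights whose preferred airline is in the given list.
    statement: SELECT * FROM flights WHERE preferred_airlines = ANY($1);
    parameters:
      - name: airlines
        type: array
        description: Airline names to match.
        items:
          name: airline
          type: string
          description: A single airline name.
```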
## Impact
Saves developers time they would otherwise spend troubleshooting a
non-obvious database driver behavior.