Compare commits

...

15 Commits

Author SHA1 Message Date
Averi Kitsch
ba3f371f65 Merge branch 'main' into akitsch-prebuilt 2025-07-25 11:12:38 -07:00
Yuan Teoh
90d4558a8e docs: update docs lint (#995) 2025-07-25 17:26:28 +00:00
prernakakkar-google
7791c6f87e docs: Minor documentation fixes for AlloyDB Admin API using MCP (#1003) 2025-07-25 09:17:55 -07:00
Pranava B
8ff60ca430 feat: add homebrew installation support for toolbox (#936)
Fixes #820 

- Added installation and upgrade support for toolbox via Homebrew
  (https://github.com/Homebrew/homebrew-core/pull/231149,
  https://github.com/Homebrew/homebrew-core/pull/230590)
- This PR updates the documentation files accordingly.

Install toolbox via Homebrew with:
```
brew install mcp-toolbox
```

Start the server using the command:
```
toolbox --tools-file "tools.yaml"
```
2025-07-25 14:17:22 +00:00
release-please[bot]
c45390e6f7 chore(main): release 0.10.0 (#886)
🤖 I have created a release *beep* *boop*
---


## [0.10.0](https://github.com/googleapis/genai-toolbox/compare/v0.9.0...v0.10.0) (2025-07-25)


### Features

* Add `Map` parameters support ([#928](https://github.com/googleapis/genai-toolbox/issues/928)) ([4468bc9](4468bc920b))
* Add Dataplex source and tool ([#847](https://github.com/googleapis/genai-toolbox/issues/847)) ([30c16a5](30c16a559e))
* Add Looker source and tool ([#923](https://github.com/googleapis/genai-toolbox/issues/923)) ([c67e01b](c67e01bcf9))
* Add support for null optional parameter ([#802](https://github.com/googleapis/genai-toolbox/issues/802)) ([a817b12](a817b120ca)), closes [#736](https://github.com/googleapis/genai-toolbox/issues/736)
* **prebuilt/alloydb-admin-config:** Add alloydb control plane as a prebuilt config ([#937](https://github.com/googleapis/genai-toolbox/issues/937)) ([0b28b72](0b28b72aa0))
* **prebuilt/mysql,prebuilt/mssql:** Add generic mysql and mssql prebuilt tools ([#983](https://github.com/googleapis/genai-toolbox/issues/983)) ([c600c30](c600c30374))
* **server/mcp:** Support MCP version 2025-06-18 ([#898](https://github.com/googleapis/genai-toolbox/issues/898)) ([313d3ca](313d3ca0d0))
* **sources/mssql:** Add support for encrypt connection parameter ([#874](https://github.com/googleapis/genai-toolbox/issues/874)) ([14a868f](14a868f2a0))
* **sources/firestore:** Add Firestore as Source ([#786](https://github.com/googleapis/genai-toolbox/issues/786)) ([2bb790e](2bb790e4f8))
* **sources/mongodb:** Add MongoDB Source ([#969](https://github.com/googleapis/genai-toolbox/issues/969)) ([74dbd61](74dbd6124d))
* **tools/alloydb-wait-for-operation:** Add wait for operation tool with exponential backoff ([#920](https://github.com/googleapis/genai-toolbox/issues/920)) ([3f6ec29](3f6ec2944e))
* **tools/mongodb-aggregate:** Add MongoDB `aggregate` Tools ([#977](https://github.com/googleapis/genai-toolbox/issues/977)) ([bd399bb](bd399bb0fb))
* **tools/mongodb-delete:** Add MongoDB `delete` Tools ([#974](https://github.com/googleapis/genai-toolbox/issues/974)) ([78e9752](78e9752f62))
* **tools/mongodb-find:** Add MongoDB `find` Tools ([#970](https://github.com/googleapis/genai-toolbox/issues/970)) ([a747475](a7474752d8))
* **tools/mongodb-insert:** Add MongoDB `insert` Tools ([#975](https://github.com/googleapis/genai-toolbox/issues/975)) ([4c63f0c](4c63f0c1e4))
* **tools/mongodb-update:** Add MongoDB `update` Tools ([#972](https://github.com/googleapis/genai-toolbox/issues/972)) ([dfde52c](dfde52ca9a))
* **tools/neo4j-execute-cypher:** Add neo4j-execute-cypher for Neo4j sources ([#946](https://github.com/googleapis/genai-toolbox/issues/946)) ([81d0505](81d05053b2))
* **tools/neo4j-schema:** Add neo4j-schema tool ([#978](https://github.com/googleapis/genai-toolbox/issues/978)) ([be7db3d](be7db3dff2))
* **tools/wait:** Create wait for tool ([#885](https://github.com/googleapis/genai-toolbox/issues/885)) ([ed5ef4c](ed5ef4caea))


### Bug Fixes

* Fix document preview pipeline for forked PRs ([#950](https://github.com/googleapis/genai-toolbox/issues/950)) ([481cc60](481cc608ba))
* **prebuilt/firestore:** Mark database field as required in the firestore prebuilt tools ([#959](https://github.com/googleapis/genai-toolbox/issues/959)) ([15417d4](15417d4e0c))
* **prebuilt/cloud-sql-mssql:** Correct source reference for execute_sql tool in cloud-sql-mssql.yaml prebuilt config ([#938](https://github.com/googleapis/genai-toolbox/issues/938)) ([d16728e](d16728e5c6))
* **prebuilt/cloud-sql-mysql:** Update list_table tool ([#924](https://github.com/googleapis/genai-toolbox/issues/924)) ([2083ba5](2083ba5048))
* Replace 'float' with 'number' in McpManifest ([#985](https://github.com/googleapis/genai-toolbox/issues/985)) ([59e23e1](59e23e1725))
* **server/api:** Add logger to context in tool invoke handler ([#891](https://github.com/googleapis/genai-toolbox/issues/891)) ([8ce311f](8ce311f256))
* **sources/looker:** Add agent tag to Looker API calls ([#966](https://github.com/googleapis/genai-toolbox/issues/966)) ([f55dd6f](f55dd6fcd0))
* **tools/bigquery-execute-sql:** Ensure invoke always returns a non-null value ([#925](https://github.com/googleapis/genai-toolbox/issues/925)) ([9a55b80](9a55b80482))
* **tools/mysqlsql:** Unmarshal json data from database during invoke ([#979](https://github.com/googleapis/genai-toolbox/issues/979)) ([ccc3498](ccc3498cf0)), closes [#840](https://github.com/googleapis/genai-toolbox/issues/840)

---
This PR was generated with [Release
Please](https://github.com/googleapis/release-please). See
[documentation](https://github.com/googleapis/release-please#release-please).

---------

Co-authored-by: release-please[bot] <55107282+release-please[bot]@users.noreply.github.com>
Co-authored-by: Yuan Teoh <45984206+Yuan325@users.noreply.github.com>
2025-07-24 17:58:37 -07:00
nester-neo4j
be7db3dff2 feat(tools/neo4j-schema): add neo4j-schema tool (#978)
This pull request introduces a new tool, `neo4j-schema`, for extracting
and processing comprehensive schema information from Neo4j databases. It
includes updates to the documentation, implementation of caching
mechanisms, helper utilities for schema transformation, and
corresponding unit tests. The most important changes are grouped by
theme below:

### Tool Integration:
- **`cmd/root.go`**: Added import for the new `neo4j-schema` tool to
integrate it into the application.

### Documentation:
- **`docs/en/resources/tools/neo4j/neo4j-schema.md`**: Added detailed
documentation for the `neo4j-schema` tool, describing its functionality,
caching behavior, and usage examples.

### Caching Implementation:
- **`internal/tools/neo4j/neo4jschema/cache/cache.go`**: Implemented a
thread-safe, in-memory cache with expiration and optional janitor for
cleaning expired items.
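
  The commit message doesn't inline the implementation, but a minimal Go
  sketch of such a cache — a hypothetical API for illustration, not the
  actual `neo4jschema/cache` package — could look like:

```go
package cache

import (
	"sync"
	"time"
)

// item pairs a cached value with its expiration time.
type item struct {
	value   any
	expires time.Time
}

// Cache is a thread-safe in-memory cache with per-entry expiration.
type Cache struct {
	mu    sync.RWMutex
	items map[string]item
}

func New() *Cache {
	return &Cache{items: make(map[string]item)}
}

// Set stores a value with a time-to-live.
func (c *Cache) Set(key string, value any, ttl time.Duration) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.items[key] = item{value: value, expires: time.Now().Add(ttl)}
}

// Get returns the value if present and not yet expired.
func (c *Cache) Get(key string) (any, bool) {
	c.mu.RLock()
	defer c.mu.RUnlock()
	it, ok := c.items[key]
	if !ok || time.Now().After(it.expires) {
		return nil, false
	}
	return it.value, true
}

// StartJanitor periodically evicts expired entries until stop is closed.
func (c *Cache) StartJanitor(interval time.Duration, stop <-chan struct{}) {
	go func() {
		ticker := time.NewTicker(interval)
		defer ticker.Stop()
		for {
			select {
			case <-ticker.C:
				c.mu.Lock()
				now := time.Now()
				for k, it := range c.items {
					if now.After(it.expires) {
						delete(c.items, k)
					}
				}
				c.mu.Unlock()
			case <-stop:
				return
			}
		}
	}()
}
```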

### Unit Tests:
- **`internal/tools/neo4j/neo4jschema/cache/cache_test.go`**: Added
comprehensive tests for the caching system, including functionality for
setting, retrieving, expiration, janitor cleanup, and concurrent access.

### Helper Utilities:
- **`internal/tools/neo4j/neo4jschema/helpers/helpers.go`**: Added
utility functions for processing schema data, including support for APOC
and native Cypher queries, and converting raw query results into
structured formats.
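
For orientation, a `tools.yaml` entry wiring the new tool to a Neo4j source
might look like the sketch below; the source field names (`uri`, `user`,
`password`, `database`) are assumptions for illustration, not confirmed by
this PR:

```yaml
sources:
  my-neo4j:
    kind: neo4j
    uri: "neo4j://localhost:7687"  # assumed field names for the neo4j source
    user: neo4j
    password: my-password
    database: neo4j

tools:
  get_schema:
    kind: neo4j-schema
    source: my-neo4j
    description: Extract labels, relationship types, and constraints from the graph.
```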

---------

Co-authored-by: Yuan Teoh <45984206+Yuan325@users.noreply.github.com>
2025-07-25 00:40:16 +00:00
Yuan Teoh
7e7d55c5d1 chore: add new docs to release please extraFiles (#994)
Add additional docs files and sort extraFiles list in alphabetical
order.
2025-07-24 17:10:09 -07:00
Anuj Jhunjhunwala
30c16a559e feat: add Dataplex source and tool (#847)
- Users are free to choose their preferred client. The example below uses the
  Gemini CLI.

- Users can use the prebuilt Dataplex tools by creating a `settings.json`
  file under the `.gemini` directory. The contents of `settings.json` would
  be as follows:

```
{
  "mcpServers": {
    "dataplex": {
      "command": "./toolbox",
      "args": ["--prebuilt","dataplex","--stdio"],
      "env": {
          "DATAPLEX_PROJECT": "test-project"
      }
    }
  }
}
```

Fixes #831

---------

Co-authored-by: Wenxin Du <117315983+duwenxin99@users.noreply.github.com>
Co-authored-by: Mateusz Nowak <matnow@google.com>
Co-authored-by: Mateusz Nowak <kontakt@mateusznowak.pl>
2025-07-24 15:31:35 -07:00
ShanQincheng
14a868f2a0 feat(sources/mssql): add support for encrypt connection parameter (#874)
## 1. Why do we need to support the `encrypt` parameter?
MSSQL databases that `genai-toolbox` connects to may have different
encryption requirements. For example, an MSSQL database used for testing or
demos may not require encryption at all. However, `genai-toolbox` currently
connects with the default encryption parameter (`encrypt=false`) and throws
an error:
```
ERROR "toolbox failed to initialize: unable to initialize configs: unable to initialize source "my-mssql-source": unable to connect successfully: TLS Handshake failed: cannot read handshake packet: EOF"
```
> In this case, the encryption parameter should be set to `encrypt=disable`.
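
With this change, the source entry can pass the parameter through — a
minimal sketch, where the field name `encrypt` follows this PR's title and
the value mirrors the note above:

```yaml
sources:
  my-mssql-source:
    kind: mssql
    host: localhost
    port: 20256
    database: master
    user: sa
    password: 'hellopassword!'
    # New in this PR: forwarded to go-mssqldb; use `disable` for
    # servers that do not support TLS.
    encrypt: disable
```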


## 2. Is this a necessary feature?
`genai-toolbox` uses the `github.com/microsoft/go-mssqldb` package as a
dependency to connect to MSSQL databases. According to the
[README](https://github.com/microsoft/go-mssqldb?tab=readme-ov-file#common-parameters)
of the `github.com/microsoft/go-mssqldb` package, `encrypt` is one of
the common parameters. Therefore, I believe supporting the `encrypt`
parameter in `genai-toolbox` is necessary.

## 3. How to replicate the error mentioned above?
### 3.1 Use this `docker-compose.yaml` file to start a demo MSSQL instance
```
services:
  demo-mssql-database:
    image: mcr.microsoft.com/mssql/server:2017-CU1-ubuntu
    ports:
      - "20256:1433"
    environment:
      ACCEPT_EULA: "Y"
      MSSQL_SA_PASSWORD: "hellopassword!"
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "/opt/mssql-tools/bin/sqlcmd", "-S", "localhost", "-U", "sa", "-P", "hellopassword!", "-Q", "SELECT 1"]
      interval: 5s
      retries: 10

  demo-mssql-database-init:
    image: mcr.microsoft.com/mssql/server:2017-CU1-ubuntu
    network_mode: service:demo-mssql-database
    command: >
      /bin/bash -c "/opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P hellopassword! -d master -Q 'CREATE DATABASE DemoDatabase;'"
    depends_on:
      demo-mssql-database:
        condition: service_healthy
```

### 3.2 Use `genai-toolbox` to connect to the demo MSSQL database above with this `tools.yaml` configuration file:
```
sources:
  my-mssql-source:
    kind: mssql
    host: localhost
    port: 20256
    database: master
    user: sa
    password: 'hellopassword!'
```

### 3.3 You should see the error:
```
ERROR "toolbox failed to initialize: unable to initialize configs: unable to initialize source "my-mssql-source": unable to connect successfully: TLS Handshake failed: cannot read handshake packet: EOF"
```

---------

Co-authored-by: Yuan Teoh <45984206+Yuan325@users.noreply.github.com>
2025-07-24 21:51:25 +00:00
Averi Kitsch
3746dbae65 docs: fix typos in MCP docs for Postgres (#991)
Co-authored-by: Yuan Teoh <45984206+Yuan325@users.noreply.github.com>
2025-07-24 21:38:55 +00:00
Averi Kitsch
25a0bb7a37 docs: fix typos in MCP docs (#990)
Co-authored-by: Yuan Teoh <45984206+Yuan325@users.noreply.github.com>
2025-07-24 21:14:37 +00:00
Wenxin Du
bd399bb0fb ci: Add MongoDB aggregate Tool and integration test (#977)
Co-authored-by: Dennis Geurts <dennisg@dennisg.nl>
2025-07-24 16:49:41 -04:00
Wenxin Du
4c63f0c1e4 feat: Add MongoDB insert Tools (#975)
Add MongoDB `insert` Tools:

- mongodb-insert-one
- mongodb-insert-many
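
A rough `tools.yaml` sketch of wiring one of these up; the `uri`,
`database`, and `collection` fields are illustrative assumptions, not
confirmed field names:

```yaml
sources:
  my-mongodb:
    kind: mongodb
    uri: "mongodb://localhost:27017"  # assumed connection field name

tools:
  insert_hotel:
    kind: mongodb-insert-one
    source: my-mongodb
    description: Insert a single hotel document.
    database: hotels-db   # assumed: database/collection may be set here
    collection: hotels
```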

---------
Co-authored-by: Dennis Geurts <dennisg@dennisg.nl>
2025-07-24 15:54:12 -04:00
Averi Kitsch
5b6883974c update 2025-07-23 11:58:59 -07:00
Averi Kitsch
a474dcfbc9 docs: add Prebuilt documentation 2025-07-23 11:03:32 -07:00
89 changed files with 5847 additions and 299 deletions

View File

@@ -153,6 +153,26 @@ steps:
"BigQuery" \
bigquery \
bigquery
- id: "dataplex"
name: golang:1
waitFor: ["compile-test-binary"]
entrypoint: /bin/bash
env:
- "GOPATH=/gopath"
- "DATAPLEX_PROJECT=$PROJECT_ID"
- "SERVICE_ACCOUNT_EMAIL=$SERVICE_ACCOUNT_EMAIL"
secretEnv: ["CLIENT_ID"]
volumes:
- name: "go"
path: "/gopath"
args:
- -c
- |
.ci/test_with_coverage.sh \
"Dataplex" \
dataplex \
dataplex
- id: "postgres"
name: golang:1

View File

@@ -18,6 +18,7 @@ releaseType: simple
versionFile: "cmd/version.txt"
extraFiles: [
"README.md",
"docs/en/getting-started/colab_quickstart.ipynb",
"docs/en/getting-started/introduction/_index.md",
"docs/en/getting-started/local_quickstart.md",
"docs/en/getting-started/local_quickstart_js.md",
@@ -25,13 +26,17 @@ extraFiles: [
"docs/en/getting-started/mcp_quickstart/_index.md",
"docs/en/samples/bigquery/local_quickstart.md",
"docs/en/samples/bigquery/mcp_quickstart/_index.md",
"docs/en/getting-started/colab_quickstart.ipynb",
"docs/en/samples/bigquery/colab_quickstart_bigquery.ipynb",
"docs/en/how-to/connect-ide/bigquery_mcp.md",
"docs/en/how-to/connect-ide/spanner_mcp.md",
"docs/en/samples/looker/looker_gemini.md",
"docs/en/samples/looker/looker_mcp_inspector.md",
"docs/en/how-to/connect-ide/alloydb_pg_mcp.md",
"docs/en/how-to/connect-ide/cloud_sql_mysql_mcp.md",
"docs/en/how-to/connect-ide/alloydb_pg_admin_mcp.md",
"docs/en/how-to/connect-ide/bigquery_mcp.md",
"docs/en/how-to/connect-ide/cloud_sql_pg_mcp.md",
"docs/en/how-to/connect-ide/postgres_mcp.md",
"docs/en/how-to/connect-ide/cloud_sql_mssql_mcp.md",
"docs/en/how-to/connect-ide/cloud_sql_mysql_mcp.md",
"docs/en/how-to/connect-ide/firestore_mcp.md",
"docs/en/how-to/connect-ide/looker_mcp.md",
"docs/en/how-to/connect-ide/postgres_mcp.md",
"docs/en/how-to/connect-ide/spanner_mcp.md",
]

.gitignore
View File

@@ -20,4 +20,4 @@ node_modules
# executable
genai-toolbox
toolbox
toolbox

View File

@@ -1,5 +1,43 @@
# Changelog
## [0.10.0](https://github.com/googleapis/genai-toolbox/compare/v0.9.0...v0.10.0) (2025-07-25)
### Features
* Add `Map` parameters support ([#928](https://github.com/googleapis/genai-toolbox/issues/928)) ([4468bc9](https://github.com/googleapis/genai-toolbox/commit/4468bc920bbf27dce4ab160197587b7c12fcd20f))
* Add Dataplex source and tool ([#847](https://github.com/googleapis/genai-toolbox/issues/847)) ([30c16a5](https://github.com/googleapis/genai-toolbox/commit/30c16a559e8d49a9a717935269e69b97ec25519a))
* Add Looker source and tool ([#923](https://github.com/googleapis/genai-toolbox/issues/923)) ([c67e01b](https://github.com/googleapis/genai-toolbox/commit/c67e01bcf998e7b884be30ebb1fd277c89ed6ffc))
* Add support for null optional parameter ([#802](https://github.com/googleapis/genai-toolbox/issues/802)) ([a817b12](https://github.com/googleapis/genai-toolbox/commit/a817b120ca5e09ce80eb8d7544ebbe81fc28b082)), closes [#736](https://github.com/googleapis/genai-toolbox/issues/736)
* **prebuilt/alloydb-admin-config:** Add alloydb control plane as a prebuilt config ([#937](https://github.com/googleapis/genai-toolbox/issues/937)) ([0b28b72](https://github.com/googleapis/genai-toolbox/commit/0b28b72aa0ca2cdc87afbddbeb7f4dbb9688593d))
* **prebuilt/mysql,prebuilt/mssql:** Add generic mysql and mssql prebuilt tools ([#983](https://github.com/googleapis/genai-toolbox/issues/983)) ([c600c30](https://github.com/googleapis/genai-toolbox/commit/c600c30374443b6106c1f10b60cd334fd202789b))
* **server/mcp:** Support MCP version 2025-06-18 ([#898](https://github.com/googleapis/genai-toolbox/issues/898)) ([313d3ca](https://github.com/googleapis/genai-toolbox/commit/313d3ca0d084a3a6e7ac9a21a862aa31bf3edadd))
* **sources/mssql:** Add support for encrypt connection parameter ([#874](https://github.com/googleapis/genai-toolbox/issues/874)) ([14a868f](https://github.com/googleapis/genai-toolbox/commit/14a868f2a0780b94c2ca104419b2ff098778303b))
* **sources/firestore:** Add Firestore as Source ([#786](https://github.com/googleapis/genai-toolbox/issues/786)) ([2bb790e](https://github.com/googleapis/genai-toolbox/commit/2bb790e4f8194b677fe0ba40122d409d0e3e687e))
* **sources/mongodb:** Add MongoDB Source ([#969](https://github.com/googleapis/genai-toolbox/issues/969)) ([74dbd61](https://github.com/googleapis/genai-toolbox/commit/74dbd6124daab6192dd880dbd1d15f36861abf74))
* **tools/alloydb-wait-for-operation:** Add wait for operation tool with exponential backoff ([#920](https://github.com/googleapis/genai-toolbox/issues/920)) ([3f6ec29](https://github.com/googleapis/genai-toolbox/commit/3f6ec2944ede18ee02b10157cc048145bdaec87a))
* **tools/mongodb-aggregate:** Add MongoDB `aggregate` Tools ([#977](https://github.com/googleapis/genai-toolbox/issues/977)) ([bd399bb](https://github.com/googleapis/genai-toolbox/commit/bd399bb0fb7134469345ed9a1111ea4209440867))
* **tools/mongodb-delete:** Add MongoDB `delete` Tools ([#974](https://github.com/googleapis/genai-toolbox/issues/974)) ([78e9752](https://github.com/googleapis/genai-toolbox/commit/78e9752f620e065246f3e7b9d37062e492247c8a))
* **tools/mongodb-find:** Add MongoDB `find` Tools ([#970](https://github.com/googleapis/genai-toolbox/issues/970)) ([a747475](https://github.com/googleapis/genai-toolbox/commit/a7474752d8d7ea7af1e80a3c4533d2fd4154d897))
* **tools/mongodb-insert:** Add MongoDB `insert` Tools ([#975](https://github.com/googleapis/genai-toolbox/issues/975)) ([4c63f0c](https://github.com/googleapis/genai-toolbox/commit/4c63f0c1e402817a0c8fec611635e99290308d0e))
* **tools/mongodb-update:** Add MongoDB `update` Tools ([#972](https://github.com/googleapis/genai-toolbox/issues/972)) ([dfde52c](https://github.com/googleapis/genai-toolbox/commit/dfde52ca9a8e25e2f3944f52b4c2e307072b6c37))
* **tools/neo4j-execute-cypher:** Add neo4j-execute-cypher for Neo4j sources ([#946](https://github.com/googleapis/genai-toolbox/issues/946)) ([81d0505](https://github.com/googleapis/genai-toolbox/commit/81d05053b2e08338fd6eabe4849c309064f76b6b))
* **tools/neo4j-schema:** Add neo4j-schema tool ([#978](https://github.com/googleapis/genai-toolbox/issues/978)) ([be7db3d](https://github.com/googleapis/genai-toolbox/commit/be7db3dff263625ce64fdb726e81164996b7a708))
* **tools/wait:** Create wait for tool ([#885](https://github.com/googleapis/genai-toolbox/issues/885)) ([ed5ef4c](https://github.com/googleapis/genai-toolbox/commit/ed5ef4caea10ba1dbc49c0fc0a0d2b91cf341d3b))
### Bug Fixes
* Fix document preview pipeline for forked PRs ([#950](https://github.com/googleapis/genai-toolbox/issues/950)) ([481cc60](https://github.com/googleapis/genai-toolbox/commit/481cc608bae807d9e92497bc8863066916f7ef21))
* **prebuilt/firestore:** Mark database field as required in the firestore prebuilt tools ([#959](https://github.com/googleapis/genai-toolbox/issues/959)) ([15417d4](https://github.com/googleapis/genai-toolbox/commit/15417d4e0c7b173e81edbbeb672e53884d186104))
* **prebuilt/cloud-sql-mssql:** Correct source reference for execute_sql tool in cloud-sql-mssql.yaml prebuilt config ([#938](https://github.com/googleapis/genai-toolbox/issues/938)) ([d16728e](https://github.com/googleapis/genai-toolbox/commit/d16728e5c603eab37700876a6ddacbf709fd5823))
* **prebuilt/cloud-sql-mysql:** Update list_table tool ([#924](https://github.com/googleapis/genai-toolbox/issues/924)) ([2083ba5](https://github.com/googleapis/genai-toolbox/commit/2083ba50483951e9ee6101bb832aa68823cd96a5))
* Replace 'float' with 'number' in McpManifest ([#985](https://github.com/googleapis/genai-toolbox/issues/985)) ([59e23e1](https://github.com/googleapis/genai-toolbox/commit/59e23e17250a516e3931996114f32ac6526a4f8e))
* **server/api:** Add logger to context in tool invoke handler ([#891](https://github.com/googleapis/genai-toolbox/issues/891)) ([8ce311f](https://github.com/googleapis/genai-toolbox/commit/8ce311f256481e8f11ecb4aa505b95a562f394ef))
* **sources/looker:** Add agent tag to Looker API calls. ([#966](https://github.com/googleapis/genai-toolbox/issues/966)) ([f55dd6f](https://github.com/googleapis/genai-toolbox/commit/f55dd6fcd099f23bd89df62b268c4a53d16f3bac))
* **tools/bigquery-execute-sql:** Ensure invoke always returns a non-null value ([#925](https://github.com/googleapis/genai-toolbox/issues/925)) ([9a55b80](https://github.com/googleapis/genai-toolbox/commit/9a55b804821a6ccfcd157bcfaee7e599c4a5cb63))
* **tools/mysqlsql:** Unmarshal json data from database during invoke ([#979](https://github.com/googleapis/genai-toolbox/issues/979)) ([ccc3498](https://github.com/googleapis/genai-toolbox/commit/ccc3498cf0a4c43eb909e3850b9e6f582cd48f2a)), closes [#840](https://github.com/googleapis/genai-toolbox/issues/840)
## [0.9.0](https://github.com/googleapis/genai-toolbox/compare/v0.8.0...v0.9.0) (2025-07-11)

View File

@@ -114,7 +114,7 @@ To install Toolbox as a binary:
<!-- {x-release-please-start-version} -->
```sh
# see releases page for other versions
export VERSION=0.9.0
export VERSION=0.10.0
curl -O https://storage.googleapis.com/genai-toolbox/v$VERSION/linux/amd64/toolbox
chmod +x toolbox
```
@@ -127,12 +127,23 @@ You can also install Toolbox as a container:
```sh
# see releases page for other versions
export VERSION=0.9.0
export VERSION=0.10.0
docker pull us-central1-docker.pkg.dev/database-toolbox/toolbox/toolbox:$VERSION
```
</details>
<details>
<summary>Homebrew</summary>
To install Toolbox using Homebrew on macOS or Linux:
```sh
brew install mcp-toolbox
```
</details>
<details>
<summary>Compile from source</summary>
@@ -140,7 +151,7 @@ To install from source, ensure you have the latest version of
[Go installed](https://go.dev/doc/install), and then run the following command:
```sh
go install github.com/googleapis/genai-toolbox@v0.9.0
go install github.com/googleapis/genai-toolbox@v0.10.0
```
<!-- {x-release-please-end} -->
@@ -154,8 +165,18 @@ execute `toolbox` to start the server:
```sh
./toolbox --tools-file "tools.yaml"
```
> [!NOTE]
> Toolbox enables dynamic reloading by default. To disable, use the `--disable-reload` flag.
> Toolbox enables dynamic reloading by default. To disable, use the
> `--disable-reload` flag.
#### Homebrew Users
If you installed Toolbox using Homebrew, the `toolbox` binary is available in your system path. You can start the server with the same command:
```sh
toolbox --tools-file "tools.yaml"
```
You can use `toolbox help` for a full list of flags! To stop the server, send a
terminate signal (`ctrl+c` on most platforms).
@@ -509,9 +530,9 @@ For more detailed instructions on using the Toolbox Core SDK, see the
// Convert the tool using the tbgenkit package
// Use this tool with Genkit Go
genkitTool, err := tbgenkit.ToGenkitTool(tool, g)
if err != nil {
log.Fatalf("Failed to convert tool: %v\n", err)
}
if err != nil {
log.Fatalf("Failed to convert tool: %v\n", err)
}
}
```

cmd/BUILD
View File

@@ -0,0 +1,20 @@
load("//tools/build_defs/go:go_library.bzl", "go_library")
load("//tools/build_defs/go:go_test.bzl", "go_test")
go_library(
name = "cmd",
srcs = [
"options.go",
"root.go",
],
embedsrcs = ["version.txt"],
)
go_test(
name = "cmd_test",
srcs = [
"options_test.go",
"root_test.go",
],
library = ":cmd",
)

View File

@@ -51,6 +51,7 @@ import (
_ "github.com/googleapis/genai-toolbox/internal/tools/bigquery/bigquerysql"
_ "github.com/googleapis/genai-toolbox/internal/tools/bigtable"
_ "github.com/googleapis/genai-toolbox/internal/tools/couchbase"
_ "github.com/googleapis/genai-toolbox/internal/tools/dataplex/dataplexsearchentries"
_ "github.com/googleapis/genai-toolbox/internal/tools/dgraph"
_ "github.com/googleapis/genai-toolbox/internal/tools/firestore/firestoredeletedocuments"
_ "github.com/googleapis/genai-toolbox/internal/tools/firestore/firestoregetdocuments"
@@ -69,10 +70,13 @@ import (
_ "github.com/googleapis/genai-toolbox/internal/tools/looker/lookerquery"
_ "github.com/googleapis/genai-toolbox/internal/tools/looker/lookerquerysql"
_ "github.com/googleapis/genai-toolbox/internal/tools/looker/lookerrunlook"
_ "github.com/googleapis/genai-toolbox/internal/tools/mongodb/mongodbaggregate"
_ "github.com/googleapis/genai-toolbox/internal/tools/mongodb/mongodbdeletemany"
_ "github.com/googleapis/genai-toolbox/internal/tools/mongodb/mongodbdeleteone"
_ "github.com/googleapis/genai-toolbox/internal/tools/mongodb/mongodbfind"
_ "github.com/googleapis/genai-toolbox/internal/tools/mongodb/mongodbfindone"
_ "github.com/googleapis/genai-toolbox/internal/tools/mongodb/mongodbinsertmany"
_ "github.com/googleapis/genai-toolbox/internal/tools/mongodb/mongodbinsertone"
_ "github.com/googleapis/genai-toolbox/internal/tools/mongodb/mongodbupdatemany"
_ "github.com/googleapis/genai-toolbox/internal/tools/mongodb/mongodbupdateone"
_ "github.com/googleapis/genai-toolbox/internal/tools/mssql/mssqlexecutesql"
@@ -81,6 +85,7 @@ import (
_ "github.com/googleapis/genai-toolbox/internal/tools/mysql/mysqlsql"
_ "github.com/googleapis/genai-toolbox/internal/tools/neo4j/neo4jcypher"
_ "github.com/googleapis/genai-toolbox/internal/tools/neo4j/neo4jexecutecypher"
_ "github.com/googleapis/genai-toolbox/internal/tools/neo4j/neo4jschema"
_ "github.com/googleapis/genai-toolbox/internal/tools/postgres/postgresexecutesql"
_ "github.com/googleapis/genai-toolbox/internal/tools/postgres/postgressql"
_ "github.com/googleapis/genai-toolbox/internal/tools/redis"
@@ -100,6 +105,7 @@ import (
_ "github.com/googleapis/genai-toolbox/internal/sources/cloudsqlmysql"
_ "github.com/googleapis/genai-toolbox/internal/sources/cloudsqlpg"
_ "github.com/googleapis/genai-toolbox/internal/sources/couchbase"
_ "github.com/googleapis/genai-toolbox/internal/sources/dataplex"
_ "github.com/googleapis/genai-toolbox/internal/sources/dgraph"
_ "github.com/googleapis/genai-toolbox/internal/sources/firestore"
_ "github.com/googleapis/genai-toolbox/internal/sources/http"
@@ -210,7 +216,7 @@ func NewCommand(opts ...Option) *Command {
flags.BoolVar(&cmd.cfg.TelemetryGCP, "telemetry-gcp", false, "Enable exporting directly to Google Cloud Monitoring.")
flags.StringVar(&cmd.cfg.TelemetryOTLP, "telemetry-otlp", "", "Enable exporting using OpenTelemetry Protocol (OTLP) to the specified endpoint (e.g. 'http://127.0.0.1:4318')")
flags.StringVar(&cmd.cfg.TelemetryServiceName, "telemetry-service-name", "toolbox", "Sets the value of the service.name resource attribute for telemetry data.")
flags.StringVar(&cmd.prebuiltConfig, "prebuilt", "", "Use a prebuilt tool configuration by source type. Cannot be used with --tools-file. Allowed: 'alloydb-postgres-admin', alloydb-postgres', 'bigquery', 'cloud-sql-mysql', 'cloud-sql-postgres', 'cloud-sql-mssql', 'firestore', 'mssql', 'mysql', 'postgres', 'spanner', 'spanner-postgres'.")
flags.StringVar(&cmd.prebuiltConfig, "prebuilt", "", "Use a prebuilt tool configuration by source type. Cannot be used with --tools-file. Allowed: 'alloydb-postgres-admin', alloydb-postgres', 'bigquery', 'cloud-sql-mysql', 'cloud-sql-postgres', 'cloud-sql-mssql', 'dataplex', 'firestore', 'mssql', 'mysql', 'postgres', 'spanner', 'spanner-postgres'.")
flags.BoolVar(&cmd.cfg.Stdio, "stdio", false, "Listens via MCP STDIO instead of acting as a remote HTTP server.")
flags.BoolVar(&cmd.cfg.DisableReload, "disable-reload", false, "Disables dynamic reloading of tools file.")

View File

@@ -1167,6 +1167,7 @@ func TestPrebuiltTools(t *testing.T) {
cloudsqlpg_config, _ := prebuiltconfigs.Get("cloud-sql-postgres")
cloudsqlmysql_config, _ := prebuiltconfigs.Get("cloud-sql-mysql")
cloudsqlmssql_config, _ := prebuiltconfigs.Get("cloud-sql-mssql")
dataplex_config, _ := prebuiltconfigs.Get("dataplex")
firestoreconfig, _ := prebuiltconfigs.Get("firestore")
mysql_config, _ := prebuiltconfigs.Get("mysql")
mssql_config, _ := prebuiltconfigs.Get("mssql")
@@ -1243,6 +1244,16 @@ func TestPrebuiltTools(t *testing.T) {
},
},
},
{
name: "dataplex prebuilt tools",
in: dataplex_config,
wantToolset: server.ToolsetConfigs{
"dataplex-tools": tools.ToolsetConfig{
Name: "dataplex-tools",
ToolNames: []string{"dataplex_search_entries"},
},
},
},
{
name: "firestore prebuilt tools",
in: firestoreconfig,

View File

@@ -1 +1 @@
0.9.0
0.10.0

View File

@@ -234,7 +234,7 @@
},
"outputs": [],
"source": [
"version = \"0.9.0\" # x-release-please-version\n",
"version = \"0.10.0\" # x-release-please-version\n",
"! curl -O https://storage.googleapis.com/genai-toolbox/v{version}/linux/amd64/toolbox\n",
"\n",
"# Make the binary executable\n",

View File

@@ -86,7 +86,7 @@ To install Toolbox as a binary:
```sh
# see releases page for other versions
export VERSION=0.9.0
export VERSION=0.10.0
curl -O https://storage.googleapis.com/genai-toolbox/v$VERSION/linux/amd64/toolbox
chmod +x toolbox
```
@@ -97,10 +97,17 @@ You can also install Toolbox as a container:
```sh
# see releases page for other versions
export VERSION=0.9.0
export VERSION=0.10.0
docker pull us-central1-docker.pkg.dev/database-toolbox/toolbox/toolbox:$VERSION
```
{{% /tab %}}
{{% tab header="Homebrew" lang="en" %}}
To install Toolbox using Homebrew on macOS or Linux:
```sh
brew install mcp-toolbox
```
{{% /tab %}}
{{% tab header="Compile from source" lang="en" %}}
@@ -108,7 +115,7 @@ To install from source, ensure you have the latest version of
[Go installed](https://go.dev/doc/install), and then run the following command:
```sh
go install github.com/googleapis/genai-toolbox@v0.9.0
go install github.com/googleapis/genai-toolbox@v0.10.0
```
{{% /tab %}}
@@ -123,10 +130,20 @@ execute `toolbox` to start the server:
```sh
./toolbox --tools-file "tools.yaml"
```
{{< notice note >}}
Toolbox enables dynamic reloading by default. To disable, use the `--disable-reload` flag.
Toolbox enables dynamic reloading by default. To disable, use the
`--disable-reload` flag.
{{< /notice >}}
#### Homebrew Users
If you installed Toolbox using Homebrew, the `toolbox` binary is available in your system path. You can start the server with the same command:
```sh
toolbox --tools-file "tools.yaml"
```
You can use `toolbox help` for a full list of flags! To stop the server, send a
terminate signal (`ctrl+c` on most platforms).
@@ -139,6 +156,7 @@ Once your server is up and running, you can load the tools into your
application. See below the list of Client SDKs for using various frameworks:
#### Python
{{< tabpane text=true persist=header >}}
{{% tab header="Core" lang="en" %}}
@@ -151,7 +169,7 @@ from toolbox_core import ToolboxClient
# update the url to point to your server
async with ToolboxClient("<http://127.0.0.1:5000>") as client:
async with ToolboxClient("http://127.0.0.1:5000") as client:
# these tools can be passed to your application!
tools = await client.load_toolset("toolset_name")
@@ -172,7 +190,7 @@ from toolbox_langchain import ToolboxClient
# update the url to point to your server
async with ToolboxClient("<http://127.0.0.1:5000>") as client:
async with ToolboxClient("http://127.0.0.1:5000") as client:
# these tools can be passed to your application!
tools = client.load_toolset()
@@ -193,7 +211,7 @@ from toolbox_llamaindex import ToolboxClient
# update the url to point to your server
async with ToolboxClient("<http://127.0.0.1:5000>") as client:
async with ToolboxClient("http://127.0.0.1:5000") as client:
# these tools can be passed to your application
@@ -565,4 +583,6 @@ func main() {
For more detailed instructions on using the Toolbox Go SDK, see the
[project's README](https://github.com/googleapis/mcp-toolbox-sdk-go/blob/main/core/README.md).
For end-to-end samples on using the Toolbox Go SDK with orchestration frameworks, see the [project's samples](https://github.com/googleapis/mcp-toolbox-sdk-go/tree/main/core/samples)
For end-to-end samples on using the Toolbox Go SDK with orchestration
frameworks, see the [project's
samples](https://github.com/googleapis/mcp-toolbox-sdk-go/tree/main/core/samples)

View File

@@ -19,7 +19,9 @@ This guide assumes you have already done the following:
### Cloud Setup (Optional)
If you plan to use **Google Clouds Vertex AI** with your agent (e.g., using `vertexai=True` or a Google GenAI model), follow these one-time setup steps for local development:
If you plan to use **Google Clouds Vertex AI** with your agent (e.g., using
`vertexai=True` or a Google GenAI model), follow these one-time setup steps for
local development:
1. [Install the Google Cloud CLI](https://cloud.google.com/sdk/docs/install)
1. [Set up Application Default Credentials (ADC)](https://cloud.google.com/docs/authentication/set-up-adc-local-dev-environment)
@@ -154,7 +156,6 @@ postgres` and a password next time.
\q
```
## Step 2: Install and configure Toolbox
In this section, we will download Toolbox, configure our tools in a
@@ -170,7 +171,7 @@ In this section, we will download Toolbox, configure our tools in a
<!-- {x-release-please-start-version} -->
```bash
export OS="linux/amd64" # one of linux/amd64, darwin/arm64, darwin/amd64, or windows/amd64
curl -O https://storage.googleapis.com/genai-toolbox/v0.9.0/$OS/toolbox
curl -O https://storage.googleapis.com/genai-toolbox/v0.10.0/$OS/toolbox
```
<!-- {x-release-please-end} -->
@@ -271,8 +272,10 @@ In this section, we will download Toolbox, configure our tools in a
```bash
./toolbox --tools-file "tools.yaml"
```
{{< notice note >}}
Toolbox enables dynamic reloading by default. To disable, use the `--disable-reload` flag.
Toolbox enables dynamic reloading by default. To disable, use the
`--disable-reload` flag.
{{< /notice >}}
## Step 3: Connect your agent to Toolbox

View File

@@ -15,7 +15,8 @@ This guide assumes you have already done the following:
### Cloud Setup (Optional)
If you plan to use **Google Clouds Vertex AI** with your agent (e.g., using Gemini or PaLM models), follow these one-time setup steps:
If you plan to use **Google Clouds Vertex AI** with your agent (e.g., using
Gemini or PaLM models), follow these one-time setup steps:
1. [Install the Google Cloud CLI]
1. [Set up Application Default Credentials (ADC)]
@@ -29,8 +30,8 @@ If you plan to use **Google Clouds Vertex AI** with your agent (e.g., using G
[Go (v1.24.2 or higher)]: https://go.dev/doc/install
[install-postgres]: https://www.postgresql.org/download/
[Install the Google Cloud CLI]: https://cloud.google.com/sdk/docs/install
[Set up Application Default Credentials (ADC)]: https://cloud.google.com/docs/authentication/set-up-adc-local-dev-environment
[Set up Application Default Credentials (ADC)]:
https://cloud.google.com/docs/authentication/set-up-adc-local-dev-environment
## Step 1: Set up your database
@@ -166,7 +167,7 @@ In this section, we will download Toolbox, configure our tools in a
<!-- {x-release-please-start-version} -->
```bash
export OS="linux/amd64" # one of linux/amd64, darwin/arm64, darwin/amd64, or windows/amd64
curl -O https://storage.googleapis.com/genai-toolbox/v0.9.0/$OS/toolbox
curl -O https://storage.googleapis.com/genai-toolbox/v0.10.0/$OS/toolbox
```
<!-- {x-release-please-end} -->
@@ -267,8 +268,10 @@ In this section, we will download Toolbox, configure our tools in a
```bash
./toolbox --tools-file "tools.yaml"
```
{{< notice note >}}
Toolbox enables dynamic reloading by default. To disable, use the `--disable-reload` flag.
Toolbox enables dynamic reloading by default. To disable, use the
`--disable-reload` flag.
{{< /notice >}}
## Step 3: Connect your agent to Toolbox
@@ -282,13 +285,15 @@ from Toolbox.
go mod init main
```
1. In a new terminal, install the [SDK](https://pkg.go.dev/github.com/googleapis/mcp-toolbox-sdk-go).
1. In a new terminal, install the
[SDK](https://pkg.go.dev/github.com/googleapis/mcp-toolbox-sdk-go).
```bash
go get github.com/googleapis/mcp-toolbox-sdk-go
```
1. Create a new file named `hotelagent.go` and copy the following code to create an agent:
1. Create a new file named `hotelagent.go` and copy the following code to create
an agent:
{{< tabpane persist=header >}}
{{< tab header="LangChain Go" lang="go" >}}
@@ -917,5 +922,6 @@ func main() {
```
{{< notice info >}}
For more information, visit the [Go SDK repo](https://github.com/googleapis/mcp-toolbox-sdk-go).
{{</ notice >}}
For more information, visit the [Go SDK
repo](https://github.com/googleapis/mcp-toolbox-sdk-go).
{{</ notice >}}

View File

@@ -15,7 +15,8 @@ This guide assumes you have already done the following:
### Cloud Setup (Optional)
If you plan to use **Google Clouds Vertex AI** with your agent (e.g., using Gemini or PaLM models), follow these one-time setup steps:
If you plan to use **Google Clouds Vertex AI** with your agent (e.g., using
Gemini or PaLM models), follow these one-time setup steps:
1. [Install the Google Cloud CLI]
1. [Set up Application Default Credentials (ADC)]
@@ -29,8 +30,8 @@ If you plan to use **Google Clouds Vertex AI** with your agent (e.g., using G
[Node.js (v18 or higher)]: https://nodejs.org/
[install-postgres]: https://www.postgresql.org/download/
[Install the Google Cloud CLI]: https://cloud.google.com/sdk/docs/install
[Set up Application Default Credentials (ADC)]: https://cloud.google.com/docs/authentication/set-up-adc-local-dev-environment
[Set up Application Default Credentials (ADC)]:
https://cloud.google.com/docs/authentication/set-up-adc-local-dev-environment
## Step 1: Set up your database
@@ -166,7 +167,7 @@ In this section, we will download Toolbox, configure our tools in a
<!-- {x-release-please-start-version} -->
```bash
export OS="linux/amd64" # one of linux/amd64, darwin/arm64, darwin/amd64, or windows/amd64
curl -O https://storage.googleapis.com/genai-toolbox/v0.9.0/$OS/toolbox
curl -O https://storage.googleapis.com/genai-toolbox/v0.10.0/$OS/toolbox
```
<!-- {x-release-please-end} -->
@@ -267,6 +268,7 @@ In this section, we will download Toolbox, configure our tools in a
```bash
./toolbox --tools-file "tools.yaml"
```
{{< notice note >}}
Toolbox enables dynamic reloading by default. To disable, use the `--disable-reload` flag.
{{< /notice >}}
@@ -338,7 +340,6 @@ async function runApplication() {
model: "gemini-2.0-flash",
});
const client = new ToolboxClient("http://127.0.0.1:5000");
const toolboxTools = await client.loadToolset("my-toolset");
@@ -363,7 +364,6 @@ async function runApplication() {
},
};
for (const query of queries) {
const agentOutput = await agent.invoke(
{
@@ -575,4 +575,4 @@ main();
{{< notice info >}}
For more information, visit the [JS SDK repo](https://github.com/googleapis/mcp-toolbox-sdk-js).
{{</ notice >}}
{{</ notice >}}

View File

@@ -105,7 +105,7 @@ In this section, we will download Toolbox, configure our tools in a
<!-- {x-release-please-start-version} -->
```bash
export OS="linux/amd64" # one of linux/amd64, darwin/arm64, darwin/amd64, or windows/amd64
curl -O https://storage.googleapis.com/genai-toolbox/v0.9.0/$OS/toolbox
curl -O https://storage.googleapis.com/genai-toolbox/v0.10.0/$OS/toolbox
```
<!-- {x-release-please-end} -->
@@ -218,7 +218,8 @@ In this section, we will download Toolbox, configure our tools in a
1. Type `y` when it asks to install the inspector package.
1. It should show the following when the MCP Inspector is up and running (please take note of `<YOUR_SESSION_TOKEN>`):
1. It should show the following when the MCP Inspector is up and running (please
take note of `<YOUR_SESSION_TOKEN>`):
```bash
Starting MCP inspector...
@@ -236,7 +237,8 @@ In this section, we will download Toolbox, configure our tools in a
1. For `URL`, type in `http://127.0.0.1:5000/mcp`.
1. For `Configuration` -> `Proxy Session Token`, make sure `<YOUR_SESSION_TOKEN>` is present.
1. For `Configuration` -> `Proxy Session Token`, make sure
`<YOUR_SESSION_TOKEN>` is present.
1. Click Connect.
@@ -246,4 +248,4 @@ In this section, we will download Toolbox, configure our tools in a
![inspector_tools](./inspector_tools.png)
1. Test out your tools here!
1. Test out your tools here!

View File

@@ -7,3 +7,56 @@ description: >
aliases:
- /how-to/connect_tools_using_mcp
---
## `--prebuilt` Flag
The `--prebuilt` flag allows you to use predefined tool configurations for common database types without creating a custom `tools.yaml` file.
### Usage
```bash
./toolbox --prebuilt <source-type> [other-flags]
```
### Supported Source Types
The following prebuilt configurations are available:
- `alloydb-postgres` - AlloyDB PostgreSQL with execute_sql and list_tables tools
- `bigquery` - BigQuery with execute_sql, get_dataset_info, get_table_info, list_dataset_ids, and list_table_ids tools
- `cloud-sql-mysql` - Cloud SQL MySQL with execute_sql and list_tables tools
- `cloud-sql-postgres` - Cloud SQL PostgreSQL with execute_sql and list_tables tools
- `cloud-sql-mssql` - Cloud SQL SQL Server with execute_sql and list_tables tools
- `postgres` - PostgreSQL with execute_sql and list_tables tools
- `spanner` - Spanner (GoogleSQL) with execute_sql, execute_sql_dql, and list_tables tools
- `spanner-postgres` - Spanner (PostgreSQL) with execute_sql, execute_sql_dql, and list_tables tools
### Examples
#### PostgreSQL with STDIO transport
```bash
./toolbox --prebuilt postgres --stdio
```
This is commonly used in MCP client configurations:
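As an illustration, a client configuration analogous to the Dataplex
`settings.json` example earlier on this page might look like the following
sketch (the server name, paths, and values are placeholders):
```json
{
  "mcpServers": {
    "postgres": {
      "command": "./toolbox",
      "args": ["--prebuilt", "postgres", "--stdio"],
      "env": {
        "POSTGRES_HOST": "127.0.0.1",
        "POSTGRES_PORT": "5432",
        "POSTGRES_DATABASE": "mydb",
        "POSTGRES_USER": "postgres",
        "POSTGRES_PASSWORD": "my-password"
      }
    }
  }
}
```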
#### BigQuery remote HTTP transport
```bash
./toolbox --prebuilt bigquery [--port 8080]
```
### Environment Variables
When using `--prebuilt`, you still need to provide database connection details through environment variables. The specific variables depend on the source type; see the per-database documentation below for the complete list.
For PostgreSQL-based sources:
- `POSTGRES_HOST`
- `POSTGRES_PORT`
- `POSTGRES_DATABASE`
- `POSTGRES_USER`
- `POSTGRES_PASSWORD`
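For the HTTP-server route, a minimal sketch of exporting these before
starting the server (placeholder values):
```bash
export POSTGRES_HOST="127.0.0.1"
export POSTGRES_PORT="5432"
export POSTGRES_DATABASE="mydb"
export POSTGRES_USER="postgres"
export POSTGRES_PASSWORD="my-password"

./toolbox --prebuilt postgres
```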
## Notes
The `--prebuilt` flag was added in version 0.6.0.

View File

@@ -6,11 +6,12 @@ description: >
Create your AlloyDB database with MCP Toolbox.
---
This guide covers how to use [MCP Toolbox for Databases][toolbox] to create AlloyDB clusters and instances from IDE enabling their E2E journey.
This guide covers how to use [MCP Toolbox for Databases][toolbox] to create
AlloyDB clusters and instances from IDE enabling their E2E journey.
- [Cursor][cursor]
- [Windsurf][windsurf] (Codium)
- [Visual Studio Code ][vscode] (Copilot)
- [Visual Studio Code][vscode] (Copilot)
- [Cline][cline] (VS Code extension)
- [Claude desktop][claudedesktop]
- [Claude code][claudecode]
@@ -29,20 +30,25 @@ This guide covers how to use [MCP Toolbox for Databases][toolbox] to create Allo
## Before you begin
1. In the Google Cloud console, on the [project selector page](https://console.cloud.google.com/projectselector2/home/dashboard), select or create a Google Cloud project.
1. In the Google Cloud console, on the [project selector
page](https://console.cloud.google.com/projectselector2/home/dashboard),
select or create a Google Cloud project.
1. [Make sure that billing is enabled for your Google Cloud project](https://cloud.google.com/billing/docs/how-to/verify-billing-enabled#confirm_billing_is_enabled_on_a_project).
1. [Make sure that billing is enabled for your Google Cloud
project](https://cloud.google.com/billing/docs/how-to/verify-billing-enabled#confirm_billing_is_enabled_on_a_project).
## Install MCP Toolbox
1. Download the latest version of Toolbox as a binary. Select the [correct binary](https://github.com/googleapis/genai-toolbox/releases) corresponding to your OS and CPU architecture. You are required to use Toolbox version V0.10.0+:
<!-- {x-release-please-start-version} -->
1. Download the latest version of Toolbox as a binary. Select the [correct
binary](https://github.com/googleapis/genai-toolbox/releases) corresponding
to your OS and CPU architecture. You are required to use Toolbox version
V0.10.0+:
<!-- {x-release-please-start-version} -->
{{< tabpane persist=header >}}
{{< tab header="linux/amd64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.10.0/linux/amd64/toolbox
{{< /tab >}}
{{< tab header="linux/amd64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.10.0/linux/amd64/toolbox
{{< /tab >}}
{{< tab header="darwin/arm64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.10.0/darwin/arm64/toolbox
@@ -53,11 +59,10 @@ curl -O https://storage.googleapis.com/genai-toolbox/v0.10.0/darwin/amd64/toolbo
{{< /tab >}}
{{< tab header="windows/amd64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.10.0/windows/amd64/toolbox
curl -O https://storage.googleapis.com/genai-toolbox/v0.10.0/windows/amd64/toolbox.exe
{{< /tab >}}
{{< /tabpane >}}
<!-- {x-release-please-end} -->
<!-- {x-release-please-end} -->
1. Make the binary executable:
@@ -76,13 +81,16 @@ curl -O https://storage.googleapis.com/genai-toolbox/v0.10.0/windows/amd64/toolb
{{< tabpane text=true >}}
{{% tab header="Claude code" lang="en" %}}
1. Install [Claude Code](https://docs.anthropic.com/en/docs/agents-and-tools/claude-code/overview).
1. Install [Claude
Code](https://docs.anthropic.com/en/docs/agents-and-tools/claude-code/overview).
1. Create a `.mcp.json` file in your project root if it doesn't exist.
1. Generate Access token to be used as API_KEY using `gcloud auth print-access-token`.
1. Generate Access token to be used as API_KEY using `gcloud auth
print-access-token`.
> **Note:** The lifetime of token is 1 hour.
> **Note:** The lifetime of token is 1 hour.
1. Add the following configuration, replace the environment variables with your values, and save:
1. Add the following configuration, replace the environment variables with your
values, and save:
```json
{
@@ -105,11 +113,13 @@ curl -O https://storage.googleapis.com/genai-toolbox/v0.10.0/windows/amd64/toolb
1. Open [Claude desktop](https://claude.ai/download) and navigate to Settings.
1. Under the Developer tab, tap Edit Config to open the configuration file.
1. Generate Access token to be used as API_KEY using `gcloud auth print-access-token`.
1. Generate Access token to be used as API_KEY using `gcloud auth
print-access-token`.
> **Note:** The lifetime of token is 1 hour.
> **Note:** The lifetime of token is 1 hour.
1. Add the following configuration, replace the environment variables with your values, and save:
1. Add the following configuration, replace the environment variables with your
values, and save:
```json
{
@@ -126,18 +136,22 @@ curl -O https://storage.googleapis.com/genai-toolbox/v0.10.0/windows/amd64/toolb
```
1. Restart Claude desktop.
1. From the new chat screen, you should see a hammer (MCP) icon appear with the new MCP server available.
1. From the new chat screen, you should see a hammer (MCP) icon appear with the
new MCP server available.
{{% /tab %}}
{{% tab header="Cline" lang="en" %}}
1. Open the [Cline](https://github.com/cline/cline) extension in VS Code and tap the **MCP Servers** icon.
1. Open the [Cline](https://github.com/cline/cline) extension in VS Code and tap
the **MCP Servers** icon.
1. Tap Configure MCP Servers to open the configuration file.
1. Generate Access token to be used as API_KEY using `gcloud auth print-access-token`.
1. Generate Access token to be used as API_KEY using `gcloud auth
print-access-token`.
> **Note:** The lifetime of token is 1 hour.
> **Note:** The lifetime of token is 1 hour.
1. Add the following configuration, replace the environment variables with your values, and save:
1. Add the following configuration, replace the environment variables with your
values, and save:
```json
{
@@ -153,18 +167,21 @@ curl -O https://storage.googleapis.com/genai-toolbox/v0.10.0/windows/amd64/toolb
}
```
1. You should see a green active status after the server is successfully connected.
1. You should see a green active status after the server is successfully
connected.
{{% /tab %}}
{{% tab header="Cursor" lang="en" %}}
1. Create a `.cursor` directory in your project root if it doesn't exist.
1. Create a `.cursor/mcp.json` file if it doesn't exist and open it.
1. Generate Access token to be used as API_KEY using `gcloud auth print-access-token`.
1. Generate Access token to be used as API_KEY using `gcloud auth
print-access-token`.
> **Note:** The lifetime of token is 1 hour.
> **Note:** The lifetime of token is 1 hour.
1. Add the following configuration, replace the environment variables with your values, and save:
1. Add the following configuration, replace the environment variables with your
values, and save:
```json
{
@@ -180,18 +197,23 @@ curl -O https://storage.googleapis.com/genai-toolbox/v0.10.0/windows/amd64/toolb
}
```
1. [Cursor](https://www.cursor.com/) and navigate to **Settings > Cursor Settings > MCP**. You should see a green active status after the server is successfully connected.
1. [Cursor](https://www.cursor.com/) and navigate to **Settings > Cursor
Settings > MCP**. You should see a green active status after the server is
successfully connected.
{{% /tab %}}
{{% tab header="Visual Studio Code (Copilot)" lang="en" %}}
1. Open [VS Code](https://code.visualstudio.com/docs/copilot/overview) and create a `.vscode` directory in your project root if it doesn't exist.
1. Create a `.vscode/mcp.json` file if it doesn't exist and open it.
1. Generate Access token to be used as API_KEY using `gcloud auth print-access-token`.
1. Open [VS Code](https://code.visualstudio.com/docs/copilot/overview) and
create a `.vscode` directory in your project root if it doesn't exist.
1. Create a `.vscode/mcp.json` file if it doesn't exist and open it.
1. Generate Access token to be used as API_KEY using `gcloud auth
print-access-token`.
> **Note:** The lifetime of token is 1 hour.
> **Note:** The lifetime of token is 1 hour.
1. Add the following configuration, replace the environment variables with your values, and save:
1. Add the following configuration, replace the environment variables with your
values, and save:
```json
{
@@ -211,13 +233,16 @@ curl -O https://storage.googleapis.com/genai-toolbox/v0.10.0/windows/amd64/toolb
{{% tab header="Windsurf" lang="en" %}}
1. Open [Windsurf](https://docs.codeium.com/windsurf) and navigate to the Cascade assistant.
1. Tap on the hammer (MCP) icon, then Configure to open the configuration file.
1. Generate Access token to be used as API_KEY using `gcloud auth print-access-token`.
1. Open [Windsurf](https://docs.codeium.com/windsurf) and navigate to the
Cascade assistant.
1. Tap on the hammer (MCP) icon, then Configure to open the configuration file.
1. Generate Access token to be used as API_KEY using `gcloud auth
print-access-token`.
> **Note:** The lifetime of token is 1 hour.
> **Note:** The lifetime of token is 1 hour.
1. Add the following configuration, replace the environment variables with your values, and save:
1. Add the following configuration, replace the environment variables with your
values, and save:
```json
{
@@ -236,13 +261,16 @@ curl -O https://storage.googleapis.com/genai-toolbox/v0.10.0/windows/amd64/toolb
{{% /tab %}}
{{% tab header="Gemini CLI" lang="en" %}}
1. Install the [Gemini CLI](https://github.com/google-gemini/gemini-cli?tab=readme-ov-file#quickstart).
1. In your working directory, create a folder named `.gemini`. Within it, create a `settings.json` file.
1. Generate Access token to be used as API_KEY using `gcloud auth print-access-token`.
1. Install the [Gemini
CLI](https://github.com/google-gemini/gemini-cli?tab=readme-ov-file#quickstart).
1. In your working directory, create a folder named `.gemini`. Within it, create
a `settings.json` file.
1. Generate Access token to be used as API_KEY using `gcloud auth print-access-token`.
> **Note:** The lifetime of token is 1 hour.
> **Note:** The lifetime of token is 1 hour.
1. Add the following configuration, replace the environment variables with your values, and save:
1. Add the following configuration, replace the environment variables with your
values, and save:
```json
{
@@ -261,14 +289,18 @@ curl -O https://storage.googleapis.com/genai-toolbox/v0.10.0/windows/amd64/toolb
{{% /tab %}}
{{% tab header="Gemini Code Assist" lang="en" %}}
1. Install the [Gemini Code Assist](https://marketplace.visualstudio.com/items?itemName=Google.geminicodeassist) extension in Visual Studio Code.
1. Enable Agent Mode in Gemini Code Assist chat.
1. In your working directory, create a folder named `.gemini`. Within it, create a `settings.json` file.
1. Generate Access token to be used as API_KEY using `gcloud auth print-access-token`.
1. Install the [Gemini Code
Assist](https://marketplace.visualstudio.com/items?itemName=Google.geminicodeassist)
extension in Visual Studio Code.
1. Enable Agent Mode in Gemini Code Assist chat.
1. In your working directory, create a folder named `.gemini`. Within it, create
a `settings.json` file.
1. Generate Access token to be used as API_KEY using `gcloud auth print-access-token`.
> **Note:** The lifetime of token is 1 hour.
> **Note:** The lifetime of token is 1 hour.
1. Add the following configuration, replace the environment variables with your values, and save:
1. Add the following configuration, replace the environment variables with your
values, and save:
```json
{
@@ -289,7 +321,8 @@ curl -O https://storage.googleapis.com/genai-toolbox/v0.10.0/windows/amd64/toolb
## Use Tools
Your AI tool is now connected to AlloyDB using MCP. Try asking your AI assistant to create a database, cluster or instance.
Your AI tool is now connected to AlloyDB using MCP. Try asking your AI assistant
to create a database, cluster or instance.
The following tools are available to the LLM:
@@ -298,9 +331,12 @@ The following tools are available to the LLM:
1. **alloydb-get-operation**: polls on operations API until the operation is done.
{{< notice note >}}
Prebuilt tools are pre-1.0, so expect some tool changes between versions. LLMs will adapt to the tools available, so this shouldn't affect most users.
Prebuilt tools are pre-1.0, so expect some tool changes between versions. LLMs
will adapt to the tools available, so this shouldn't affect most users.
{{< /notice >}}
## Connect to your Data
After setting up an AlloyDB cluster and instance, you can [connect your IDE to the database](https://cloud.google.com/alloydb/docs/pre-built-tools-with-mcp-toolbox).
After setting up an AlloyDB cluster and instance, you can [connect your IDE to
the
database](https://cloud.google.com/alloydb/docs/pre-built-tools-with-mcp-toolbox).

View File

@@ -28,18 +28,24 @@ to expose your developer assistant tools to a Firestore instance:
[claudedesktop]: #configure-your-mcp-client
[claudecode]: #configure-your-mcp-client
[geminicli]: #configure-your-mcp-client
[geminicodeassist]: #configure-your-mcp-client]
[geminicodeassist]: #configure-your-mcp-client
## Set up Firestore
1. Create or select a Google Cloud project.
* [Create a new project](https://cloud.google.com/resource-manager/docs/creating-managing-projects)
* [Select an existing project](https://cloud.google.com/resource-manager/docs/creating-managing-projects#identifying_projects)
* [Create a new
project](https://cloud.google.com/resource-manager/docs/creating-managing-projects)
* [Select an existing
project](https://cloud.google.com/resource-manager/docs/creating-managing-projects#identifying_projects)
1. [Enable the Firestore API](https://console.cloud.google.com/apis/library/firestore.googleapis.com) for your project.
1. [Enable the Firestore
API](https://console.cloud.google.com/apis/library/firestore.googleapis.com)
for your project.
1. [Create a Firestore database](https://cloud.google.com/firestore/docs/create-database-web-mobile-client-library) if you haven't already.
1. [Create a Firestore
database](https://cloud.google.com/firestore/docs/create-database-web-mobile-client-library)
if you haven't already.
1. Set up authentication for your local environment.
@@ -247,9 +253,13 @@ curl -O https://storage.googleapis.com/genai-toolbox/v0.10.0/windows/amd64/toolb
{{% /tab %}}
{{% tab header="Gemini CLI" lang="en" %}}
1. Install the [Gemini CLI](https://github.com/google-gemini/gemini-cli?tab=readme-ov-file#quickstart).
1. In your working directory, create a folder named `.gemini`. Within it, create a `settings.json` file.
1. Add the following configuration, replace the environment variables with your values, and then save:
1. Install the [Gemini
CLI](https://github.com/google-gemini/gemini-cli?tab=readme-ov-file#quickstart).
1. In your working directory, create a folder named `.gemini`. Within it, create
a `settings.json` file.
1. Add the following configuration, replace the environment variables with your
values, and then save:
```json
{
"mcpServers": {
@@ -269,10 +279,15 @@ curl -O https://storage.googleapis.com/genai-toolbox/v0.10.0/windows/amd64/toolb
{{% /tab %}}
{{% tab header="Gemini Code Assist" lang="en" %}}
1. Install the [Gemini Code Assist](https://marketplace.visualstudio.com/items?itemName=Google.geminicodeassist) extension in Visual Studio Code.
1. Enable Agent Mode in Gemini Code Assist chat.
1. In your working directory, create a folder named `.gemini`. Within it, create a `settings.json` file.
1. Add the following configuration, replace the environment variables with your values, and then save:
1. Install the [Gemini Code
Assist](https://marketplace.visualstudio.com/items?itemName=Google.geminicodeassist)
extension in Visual Studio Code.
1. Enable Agent Mode in Gemini Code Assist chat.
1. In your working directory, create a folder named `.gemini`. Within it, create
a `settings.json` file.
1. Add the following configuration, replace the environment variables with your
values, and then save:
```json
{
"mcpServers": {
@@ -300,12 +315,17 @@ security rules.
The following tools are available to the LLM:
1. **firestore-get-documents**: Gets multiple documents from Firestore by their
paths
1. **firestore-list-collections**: List Firestore collections for a given parent
path
1. **firestore-delete-documents**: Delete multiple documents from Firestore
Prebuilt tools are pre-1.0, so expect some tool changes between versions. LLMs
will adapt to the tools available, so this shouldn't affect most users.
{{< /notice >}}
1. **firestore-query-collection**: Query documents from a collection with
filtering, ordering, and limit options
1. **firestore-get-rules**: Retrieves the active Firestore security rules for
the current project
1. **firestore-validate-rules**: Validates Firestore security rules syntax and
errors
{{< notice note >}}
Prebuilt tools are pre-1.0, so expect some tool changes between versions. LLMs


@@ -46,19 +46,19 @@ to expose your developer assistant tools to a Looker instance:
<!-- {x-release-please-start-version} -->
{{< tabpane persist=header >}}
{{< tab header="linux/amd64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.10.0/linux/amd64/toolbox
{{< /tab >}}
{{< tab header="darwin/arm64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.10.0/darwin/arm64/toolbox
{{< /tab >}}
{{< tab header="darwin/amd64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.10.0/darwin/amd64/toolbox
{{< /tab >}}
{{< tab header="windows/amd64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.10.0/windows/amd64/toolbox.exe
{{< /tab >}}
{{< /tabpane >}}
<!-- {x-release-please-end} -->
@@ -90,12 +90,8 @@ curl -O <https://storage.googleapis.com/genai-toolbox/v0.10.0/windows/amd64/tool
{
"mcpServers": {
"looker-toolbox": {
"command": "/PATH/TO/toolbox",
"args": [
"--stdio",
"--prebuilt",
"looker"
],
"command": "./PATH/TO/toolbox",
"args": ["--stdio", "--prebuilt", "looker"],
"env": {
"LOOKER_BASE_URL": "https://looker.example.com",
"LOOKER_CLIENT_ID": "",
@@ -121,12 +117,8 @@ curl -O <https://storage.googleapis.com/genai-toolbox/v0.10.0/windows/amd64/tool
{
"mcpServers": {
"looker-toolbox": {
"command": "/PATH/TO/toolbox",
"args": [
"--stdio",
"--prebuilt",
"looker"
],
"command": "./PATH/TO/toolbox",
"args": ["--stdio", "--prebuilt", "looker"],
"env": {
"LOOKER_BASE_URL": "https://looker.example.com",
"LOOKER_CLIENT_ID": "",
@@ -155,12 +147,8 @@ curl -O <https://storage.googleapis.com/genai-toolbox/v0.10.0/windows/amd64/tool
{
"mcpServers": {
"looker-toolbox": {
"command": "/PATH/TO/toolbox",
"args": [
"--stdio",
"--prebuilt",
"looker"
],
"command": "./PATH/TO/toolbox",
"args": ["--stdio", "--prebuilt", "looker"],
"env": {
"LOOKER_BASE_URL": "https://looker.example.com",
"LOOKER_CLIENT_ID": "",
@@ -187,12 +175,8 @@ curl -O <https://storage.googleapis.com/genai-toolbox/v0.10.0/windows/amd64/tool
{
"mcpServers": {
"looker-toolbox": {
"command": "/PATH/TO/toolbox",
"args": [
"--stdio",
"--prebuilt",
"looker"
],
"command": "./PATH/TO/toolbox",
"args": ["--stdio", "--prebuilt", "looker"],
"env": {
"LOOKER_BASE_URL": "https://looker.example.com",
"LOOKER_CLIENT_ID": "",
@@ -221,12 +205,8 @@ curl -O <https://storage.googleapis.com/genai-toolbox/v0.10.0/windows/amd64/tool
{
"mcpServers": {
"looker-toolbox": {
"command": "/PATH/TO/toolbox",
"args": [
"--stdio",
"--prebuilt",
"looker"
],
"command": "./PATH/TO/toolbox",
"args": ["--stdio", "--prebuilt", "looker"],
"env": {
"LOOKER_BASE_URL": "https://looker.example.com",
"LOOKER_CLIENT_ID": "",
@@ -252,12 +232,8 @@ curl -O <https://storage.googleapis.com/genai-toolbox/v0.10.0/windows/amd64/tool
{
"mcpServers": {
"looker-toolbox": {
"command": "/PATH/TO/toolbox",
"args": [
"--stdio",
"--prebuilt",
"looker"
],
"command": "./PATH/TO/toolbox",
"args": ["--stdio", "--prebuilt", "looker"],
"env": {
"LOOKER_BASE_URL": "https://looker.example.com",
"LOOKER_CLIENT_ID": "",


@@ -52,19 +52,19 @@ Omni](https://cloud.google.com/alloydb/omni/current/docs/overview).
<!-- {x-release-please-start-version} -->
{{< tabpane persist=header >}}
{{< tab header="linux/amd64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.10.0/linux/amd64/toolbox
{{< /tab >}}
{{< tab header="darwin/arm64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.10.0/darwin/arm64/toolbox
{{< /tab >}}
{{< tab header="darwin/amd64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.10.0/darwin/amd64/toolbox
{{< /tab >}}
{{< tab header="windows/amd64" lang="bash" >}}
curl -O https://storage.googleapis.com/genai-toolbox/v0.10.0/windows/amd64/toolbox.exe
{{< /tab >}}
{{< /tabpane >}}
<!-- {x-release-please-end} -->


@@ -28,8 +28,9 @@ Toolbox currently supports the following versions of MCP specification:
The auth implementation in Toolbox is not supported in MCP's auth specification.
This includes:
* [Authenticated Parameters](../resources/tools/_index.md#authenticated-parameters)
* [Authorized Invocations](../resources/tools/_index.md#authorized-invocations)
## Connecting to Toolbox with an MCP client
@@ -62,7 +63,8 @@ remote HTTP server. Logs will be set to the `warn` level by default. `debug` and
`info` logs are not supported with stdio.
{{< notice note >}}
Toolbox enables dynamic reloading by default. To disable, use the
`--disable-reload` flag.
{{< /notice >}}
### Connecting via HTTP


@@ -0,0 +1,93 @@
---
title: "Dataplex"
type: docs
weight: 1
description: >
Dataplex Universal Catalog is a unified, intelligent governance solution for data and AI assets in Google Cloud. Dataplex Universal Catalog powers AI, analytics, and business intelligence at scale.
---
# Dataplex Source
[Dataplex][dataplex-docs] Universal Catalog is a unified, intelligent governance
solution for data and AI assets in Google Cloud. Dataplex Universal Catalog
powers AI, analytics, and business intelligence at scale.
At the heart of these governance capabilities is a catalog that contains a
centralized inventory of the data assets in your organization. Dataplex
Universal Catalog holds business, technical, and runtime metadata for all of
your data. It helps you discover relationships and semantics in the metadata by
applying artificial intelligence and machine learning.
[dataplex-docs]: https://cloud.google.com/dataplex/docs
## Example
```yaml
sources:
my-dataplex-source:
kind: "dataplex"
project: "my-project-id"
```
## Sample System Prompt
You can use the following system prompt as "Custom Instructions" in your client
application.
```
Whenever you receive a response from the dataplex_search_entries tool, decide what to do by following these steps:
1. If there are multiple search results found
1.1. Present the list of search results
1.2. Format the output as a nested ordered list, for example:
Given
```
{
results: [
{
name: "projects/test-project/locations/us/entryGroups/@bigquery-aws-us-east-1/entries/users"
entrySource: {
displayName: "Users"
description: "Table contains list of users."
location: "aws-us-east-1"
system: "BigQuery"
}
},
{
name: "projects/another_project/locations/us-central1/entryGroups/@bigquery/entries/top_customers"
entrySource: {
displayName: "Top customers",
description: "Table contains list of best customers."
location: "us-central1"
system: "BigQuery"
}
},
]
}
```
Return output formatted as markdown nested list:
```
* Users:
- projectId: test_project
- location: aws-us-east-1
- description: Table contains list of users.
* Top customers:
- projectId: another_project
- location: us-central1
- description: Table contains list of best customers.
```
1.3. Ask to select one of the presented search results
2. If there is only one search result found
2.1. Present the search result immediately.
3. If no search results are found
3.1. Explain that no search results were found
3.2. Suggest providing a more specific search query.
Do not try to search within search results on your own.
```
## Reference
| **field** | **type** | **required** | **description** |
|-----------|:--------:|:------------:|----------------------------------------------------------------------------------|
| kind | string | true | Must be "dataplex". |
| project | string | true | ID of the GCP project used for quota and billing purposes (e.g. "my-project-id"). |


@@ -33,9 +33,13 @@ with [Firestore][firestore-docs].
In addition to [setting the ADC for your server][set-adc], you need to ensure
the IAM identity has been given the correct IAM permissions for accessing
Firestore. Common roles include:
- `roles/datastore.user` - Read and write access to Firestore
- `roles/datastore.viewer` - Read-only access to Firestore
- `roles/firebaserules.admin` - Full management of Firebase Security Rules for
Firestore. This role is required for operations that involve creating,
updating, or managing Firestore security rules (see [Firebase Security Rules
roles][firebaserules-roles])
See [Firestore access control][firestore-iam] for more information on
applying IAM permissions and roles to an identity.
@@ -44,7 +48,8 @@ applying IAM permissions and roles to an identity.
[adc]: https://cloud.google.com/docs/authentication#adc
[set-adc]: https://cloud.google.com/docs/authentication/provide-credentials-adc
[firestore-iam]: https://cloud.google.com/firestore/docs/security/iam
[firebaserules-roles]:
https://cloud.google.com/iam/docs/roles-permissions/firebaserules
### Database Selection


@@ -21,7 +21,8 @@ in the cloud, on GCP, or on premises.
This source only uses API authentication. You will need to
[create an API user][looker-user] to login to Looker.
[looker-user]:
https://cloud.google.com/looker/docs/api-auth#authentication_with_an_sdk
## Example
@@ -36,9 +37,10 @@ sources:
timeout: 600s
```
The Looker base URL will look like "https://looker.example.com"; don't include
a trailing "/". In some cases, especially if your Looker is deployed
on-premises, you may need to add the API port number, like
"https://looker.example.com:19999".
Verify ssl should almost always be "true" (all lower case) unless you are using
a self-signed ssl certificate for the Looker server. Anything other than "true"


@@ -9,7 +9,8 @@ description: >
## About
[MongoDB][mongodb-docs] is a popular NoSQL database that stores data in
flexible, JSON-like documents, making it easy to develop and scale applications.
[mongodb-docs]: https://www.mongodb.com/docs/atlas/getting-started/


@@ -43,6 +43,7 @@ sources:
database: my_db
user: ${USER_NAME}
password: ${PASSWORD}
# encrypt: strict
```
{{< notice tip >}}
@@ -52,11 +53,12 @@ instead of hardcoding your secrets into the configuration file.
## Reference
| **field** | **type** | **required** | **description** |
|-----------|:--------:|:------------:|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| kind | string | true | Must be "mssql". |
| host | string | true | IP address to connect to (e.g. "127.0.0.1"). |
| port | string | true | Port to connect to (e.g. "1433"). |
| database | string | true | Name of the SQL Server database to connect to (e.g. "my_db"). |
| user | string | true | Name of the SQL Server user to connect as (e.g. "my-user"). |
| password | string | true | Password of the SQL Server user (e.g. "my-password"). |
| encrypt | string | false | Encryption level for data transmitted between the client and server (e.g., "strict"). If not specified, defaults to the [github.com/microsoft/go-mssqldb](https://github.com/microsoft/go-mssqldb?tab=readme-ov-file#common-parameters) package's default encrypt value. |


@@ -114,20 +114,24 @@ in the list using the items field:
| items | parameter object | true | Specify a Parameter object for the type of the values in the array. |
{{< notice note >}}
Items in an array should not have a `default` or `required` value. If provided,
it will be ignored.
{{< /notice >}}
### Map Parameters
The map type is a collection of key-value pairs. It can be configured in two
ways:
- Generic Map: By default, it accepts values of any primitive type (string,
integer, float, boolean), allowing for mixed data.
- Typed Map: By setting the valueType field, you can enforce that all values
within the map must be of the same specified type.
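For example, a typed map might look like the following minimal sketch; the
parameter name and description are illustrative, and `valueType` is the field
described above:

```yaml
parameters:
  - name: user_scores
    type: map
    description: Map of user IDs to integer scores.
    # valueType enforces that every value in this map is an integer
    valueType: integer
```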
#### Generic Map (Mixed Value Types)
This is the default behavior when valueType is omitted. It's useful for passing
a flexible group of settings.
```yaml
parameters:
@@ -179,7 +183,7 @@ user's ID token.
| **field** | **type** | **required** | **description** |
|-----------|:--------:|:------------:|-----------------------------------------------------------------------------------------|
| name | string | true | Name of the [authServices](../authservices) used to verify the OIDC auth token. |
| field | string | true | Claim field decoded from the OIDC token used to auto-populate this parameter. |
### Template Parameters


@@ -16,7 +16,7 @@ It's compatible with the following sources:
- [bigquery](../sources/bigquery.md)
`bigquery-get-dataset-info` takes a `dataset` parameter to specify the dataset
on the given source. It also optionally accepts a `project` parameter to
define the Google Cloud project ID. If the `project` parameter is not provided,
the tool defaults to using the project defined in the source configuration.
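A minimal configuration sketch (the tool and source names are illustrative;
`dataset` and `project` are supplied at call time):

```yaml
tools:
  get_dataset_info:
    kind: bigquery-get-dataset-info
    source: my-bigquery-source
    description: Gets metadata about a BigQuery dataset.
```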


@@ -16,8 +16,8 @@ It's compatible with the following sources:
- [bigquery](../sources/bigquery.md)
`bigquery-get-table-info` takes `dataset` and `table` parameters to specify
the target table. It also optionally accepts a `project` parameter to define
the Google Cloud project ID. If the `project` parameter is not provided, the
tool defaults to using the project defined in the source configuration.
## Example
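A minimal configuration sketch (the tool and source names are illustrative;
`dataset`, `table`, and `project` are supplied at call time):

```yaml
tools:
  get_table_info:
    kind: bigquery-get-table-info
    source: my-bigquery-source
    description: Gets metadata about a BigQuery table.
```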


@@ -15,8 +15,8 @@ It's compatible with the following sources:
- [bigquery](../sources/bigquery.md)
`bigquery-list-dataset-ids` optionally accepts a `project` parameter to define
the Google Cloud project ID. If the `project` parameter is not provided, the
tool defaults to using the project defined in the source configuration.
## Example
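A minimal configuration sketch (the tool and source names are illustrative;
`project` is supplied at call time):

```yaml
tools:
  list_dataset_ids:
    kind: bigquery-list-dataset-ids
    source: my-bigquery-source
    description: Lists dataset IDs in the project.
```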


@@ -16,8 +16,8 @@ It's compatible with the following sources:
- [bigquery](../sources/bigquery.md)
`bigquery-list-table-ids` takes a required `dataset` parameter to specify the
dataset from which to list table IDs. It also optionally accepts a `project`
parameter to define the Google Cloud project ID. If the `project` parameter is
not provided, the
tool defaults to using the project defined in the source configuration.
## Example
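A minimal sketch, assuming the kind for this tool is `bigquery-list-table-ids`
(the other names are illustrative; `dataset` is supplied at call time):

```yaml
tools:
  list_table_ids:
    kind: bigquery-list-table-ids
    source: my-bigquery-source
    description: Lists table IDs in the given dataset.
```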


@@ -20,10 +20,14 @@ instance. It's compatible with any of the following sources:
Bigtable supports SQL queries. The integration with Toolbox supports `googlesql`
dialect; the specified SQL statement is executed as a [data manipulation
language (DML)][bigtable-googlesql] statement, and specified parameters will be
inserted according to their name, e.g. `@name`.
{{<notice note>}}
Bigtable's GoogleSQL support for DML statements might be limited to certain
query types. For detailed information on supported DML statements and use
cases, refer to the [Bigtable GoogleSQL use
cases](https://cloud.google.com/bigtable/docs/googlesql-overview#use-cases).
{{</notice>}}
[bigtable-googlesql]: https://cloud.google.com/bigtable/docs/googlesql-overview
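To illustrate how a named parameter such as `@name` lines up with a declared
parameter, here is a hedged sketch following the pattern of the other SQL tools
in these docs; the tool kind, statement, and names are assumptions, not a
verbatim example:

```yaml
tools:
  update_user_name:
    kind: bigtable-sql
    source: my-bigtable-instance
    description: Updates a user's display name by row key.
    statement: |
      UPDATE users SET name = @name WHERE _key = @id
    parameters:
      - name: id
        type: string
        description: Row key of the user to update.
      - name: name
        type: string
        description: New display name, inserted as @name.
```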


@@ -0,0 +1,7 @@
---
title: "Dataplex"
type: docs
weight: 1
description: >
Tools that work with Dataplex Sources.
---


@@ -0,0 +1,75 @@
---
title: "dataplex-search-entries"
type: docs
weight: 1
description: >
A "dataplex-search-entries" tool allows to search for entries based on the provided query.
aliases:
- /resources/tools/dataplex-search-entries
---
## About
A `dataplex-search-entries` tool returns all entries in Dataplex Catalog (e.g.
tables, views, models) that match a given user query.
It's compatible with the following sources:
- [dataplex](../sources/dataplex.md)
`dataplex-search-entries` takes a required `query` parameter, based on which
entries are filtered and returned to the user, and a required `name` parameter,
which is constructed from the source's project if the user does not provide it
explicitly and has the following format: `projects/{project}/locations/global`.
It also optionally accepts the following parameters:
- `pageSize` - Number of results in the search page. Defaults to `5`.
- `pageToken` - Page token received from a previous locations.searchEntries
call.
- `orderBy` - Specifies the ordering of results. Supported values are: relevance
(default), last_modified_timestamp, last_modified_timestamp asc
- `semanticSearch` - Specifies whether the search should understand the meaning
and intent behind the query, rather than just matching keywords. Defaults to
`true`.
- `scope` - The scope under which the search should be operating. Since this
parameter is not exposed to the toolbox user, it defaults to the organization
where the project provided in name is located.
## Requirements
### IAM Permissions
Dataplex uses [Identity and Access Management (IAM)][iam-overview] to control
user and group access to Dataplex resources. Toolbox will use your
[Application Default Credentials (ADC)][adc] to authorize and authenticate when
interacting with [Dataplex][dataplex-docs].
In addition to [setting the ADC for your server][set-adc], you need to ensure
the IAM identity has been given the correct IAM permissions for the tasks you
intend to perform. See [Dataplex Universal Catalog IAM permissions][iam-permissions]
and [Dataplex Universal Catalog IAM roles][iam-roles] for more information on
applying IAM permissions and roles to an identity.
[iam-overview]: https://cloud.google.com/dataplex/docs/iam-and-access-control
[adc]: https://cloud.google.com/docs/authentication#adc
[set-adc]: https://cloud.google.com/docs/authentication/provide-credentials-adc
[iam-permissions]: https://cloud.google.com/dataplex/docs/iam-permissions
[iam-roles]: https://cloud.google.com/dataplex/docs/iam-roles
[dataplex-docs]: https://cloud.google.com/dataplex
## Example
```yaml
tools:
dataplex-search-entries:
kind: dataplex-search-entries
source: my-dataplex-source
description: Use this tool to get all the entries based on the provided query.
```
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:------------------------------------------:|:------------:|--------------------------------------------------------------------------------------------------|
| kind | string | true | Must be "dataplex-search-entries". |
| source | string | true | Name of the source the tool should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |


@@ -10,14 +10,15 @@ aliases:
## About
A `firestore-delete-documents` tool deletes multiple documents from Firestore by
their paths.
It's compatible with the following sources:
- [firestore](../sources/firestore.md)
`firestore-delete-documents` takes one input parameter `documentPaths` which is
an array of document paths to delete. The tool uses Firestore's BulkWriter for
efficient batch deletion and returns the success status for each document.
## Example
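A minimal configuration sketch using only the fields from the reference table
below (the tool and source names are illustrative); `documentPaths` is supplied
at call time:

```yaml
tools:
  delete_documents:
    kind: firestore-delete-documents
    source: my-firestore-source
    description: Deletes the documents at the given paths.
```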
@@ -31,8 +32,8 @@ tools:
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------------:|:------------:|----------------------------------------------------------|
| kind | string | true | Must be "firestore-delete-documents". |
| source | string | true | Name of the Firestore source to delete documents from. |
| description | string | true | Description of the tool that is passed to the LLM. |


@@ -10,14 +10,15 @@ aliases:
## About
A `firestore-get-documents` tool retrieves multiple documents from Firestore by
their paths.
It's compatible with the following sources:
- [firestore](../sources/firestore.md)
`firestore-get-documents` takes one input parameter `documentPaths` which is an
array of document paths, and returns the documents' data along with metadata
such as existence status, creation time, update time, and read time.
## Example
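A minimal configuration sketch using the fields from the reference table below
(the tool and source names are illustrative); `documentPaths` is supplied at
call time:

```yaml
tools:
  get_documents:
    kind: firestore-get-documents
    source: my-firestore-source
    description: Retrieves documents by their paths.
```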
@@ -31,8 +32,8 @@ tools:
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:--------------:|:------------:|------------------------------------------------------------|
| kind | string | true | Must be "firestore-get-documents". |
| source | string | true | Name of the Firestore source to retrieve documents from. |
| description | string | true | Description of the tool that is passed to the LLM. |


@@ -10,13 +10,15 @@ aliases:
## About
A `firestore-get-rules` tool retrieves the active [Firestore security
rules](https://firebase.google.com/docs/firestore/security/get-started) for the
current project.
It's compatible with the following sources:
- [firestore](../sources/firestore.md)
`firestore-get-rules` takes no input parameters and returns the security rules
content along with metadata such as the ruleset name and timestamps.
## Example
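A minimal configuration sketch using the fields from the reference table below
(the tool and source names are illustrative):

```yaml
tools:
  get_firestore_rules:
    kind: firestore-get-rules
    source: my-firestore-source
    description: Retrieves the active Firestore security rules.
```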
@@ -30,8 +32,8 @@ tools:
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:-------------:|:------------:|-------------------------------------------------------|
| kind | string | true | Must be "firestore-get-rules". |
| source | string | true | Name of the Firestore source to retrieve rules from. |
| description | string | true | Description of the tool that is passed to the LLM. |


@@ -10,7 +10,11 @@ aliases:
## About
A `firestore-list-collections` tool lists
[collections](https://firebase.google.com/docs/firestore/data-model#collections)
in Firestore, either at the root level or as
[subcollections](https://firebase.google.com/docs/firestore/data-model#subcollections)
of a specific document.
It's compatible with the following sources:
- [firestore](../sources/firestore.md)
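A minimal configuration sketch using the fields from the reference table below
(the tool and source names are illustrative); the parent document path, when
needed, is supplied at call time:

```yaml
tools:
  list_collections:
    kind: firestore-list-collections
    source: my-firestore-source
    description: Lists collections at the root or under a document.
```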
@@ -31,8 +35,8 @@ tools:
## Reference
| **field** | **type** | **required** | **description** |
|-------------|:----------------:|:------------:|--------------------------------------------------------|
| kind | string | true | Must be "firestore-list-collections". |
| source | string | true | Name of the Firestore source to list collections from. |
| description | string | true | Description of the tool that is passed to the LLM. |


@@ -1,6 +1,17 @@
---
title: "firestore-query-collection"
type: docs
weight: 1
description: >
A "firestore-query-collection" tool allow to query collections in Firestore.
aliases:
- /resources/tools/firestore-query-collection
---
## About
The `firestore-query-collection` tool allows you to query Firestore collections
with filters, ordering, and limit capabilities.
## Configuration
@@ -10,9 +21,8 @@ To use this tool, you need to configure it in your YAML configuration file:
sources:
my-firestore:
kind: firestore
project: my-gcp-project
database: "(default)"
tools:
query_collection:
@@ -23,17 +33,18 @@ tools:
## Parameters
| **parameters** | **type** | **required** | **default** | **description** |
|------------------|:------------:|:------------:|:-----------:|-----------------------------------------------------------------------|
| `collectionPath` | string       | true         | -           | The path to the Firestore collection to query                          |
| `filters` | array | false | - | Array of filter objects (as JSON strings) to apply to the query |
| `orderBy` | string | false | - | JSON string specifying field and direction to order results |
| `limit` | integer | false | 100 | Maximum number of documents to return |
| `analyzeQuery` | boolean | false | false | If true, returns query explain metrics including execution statistics |
### Filter Format
Each filter in the `filters` array should be a JSON string with the following
structure:
```json
{
@@ -44,6 +55,7 @@ Each filter in the `filters` array should be a JSON string with the following st
```
Supported operators:
- `<` - Less than
- `<=` - Less than or equal to
- `>` - Greater than
@@ -56,10 +68,12 @@ Supported operators:
- `not-in` - Field value is not in the specified array
Value types supported:
- String: `"value": "text"`
- Number: `"value": 123` or `"value": 45.67`
- Boolean: `"value": true` or `"value": false`
- Array: `"value": ["item1", "item2"]` (for `in`, `not-in`, `array-contains-any` operators)
- Array: `"value": ["item1", "item2"]` (for `in`, `not-in`, `array-contains-any`
operators)
### OrderBy Format
@@ -73,6 +87,7 @@ The `orderBy` parameter should be a JSON string with the following structure:
```
Direction values:
- `ASCENDING`
- `DESCENDING`
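Putting the pieces together, an invocation's parameters might look like the
following hedged sketch, rendered here as YAML; the field names come from the
parameter table above, and the filter and orderBy values are JSON strings as
described:

```yaml
collectionPath: "users"
filters:
  - '{"field": "age", "op": ">", "value": 21}'   # numeric comparison
  - '{"field": "status", "op": "in", "value": ["active", "trial"]}'
orderBy: '{"field": "age", "direction": "DESCENDING"}'
limit: 10
analyzeQuery: false
```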
@@ -154,7 +169,8 @@ The tool returns an array of documents, where each document includes:
### Response with Query Analysis (analyzeQuery = true)
When `analyzeQuery` is set to true, the tool returns a single object containing
documents and explain metrics:
```json
{
@@ -191,6 +207,7 @@ When `analyzeQuery` is set to true, the tool returns a single object containing
## Error Handling
The tool will return errors for:
- Invalid collection path
- Malformed filter JSON
- Unsupported operators


@@ -10,7 +10,9 @@ aliases:
## Overview
The `firestore-validate-rules` tool validates Firestore security rules syntax
and semantic correctness without deploying them. It provides detailed error
reporting with source positions and code snippets.
## Configuration
@@ -28,9 +30,9 @@ This tool requires authentication if the source requires authentication.
## Parameters
| **parameters** | **type** | **required** | **description** |
|-----------------|:------------:|:------------:|----------------------------------------------|
| source | string | true | The Firestore Rules source code to validate |
## Response
@@ -102,6 +104,7 @@ The tool returns a `ValidationResult` object containing:
## Error Handling
The tool will return errors for:
- Missing or empty `source` parameter
- API errors when calling the Firebase Rules service
- Network connectivity issues
@@ -115,5 +118,7 @@ The tool will return errors for:
## Related Tools
- [firestore-get-rules]({{< ref "firestore-get-rules" >}}): Retrieve current
active rules
- [firestore-query-collection]({{< ref "firestore-query-collection" >}}): Test
rules by querying collections


@@ -19,6 +19,7 @@ It's compatible with the following sources:
- [looker](../sources/looker.md)
`looker-query` takes eight parameters:
1. the `model`
2. the `explore`
3. the `fields` list


@@ -19,6 +19,7 @@ It's compatible with the following sources:
- [looker](../sources/looker.md)
`looker-query-sql` takes eight parameters:
1. the `model`
2. the `explore`
3. the `fields` list


@@ -0,0 +1,81 @@
---
title: "mongodb-aggregate"
type: docs
weight: 1
description: >
A "mongodb-aggregate" tool executes a multi-stage aggregation pipeline against a MongoDB collection.
aliases:
- /resources/tools/mongodb-aggregate
---
## About
The `mongodb-aggregate` tool is the most powerful query tool for MongoDB,
allowing you to process data through a multi-stage pipeline. Each stage
transforms the documents as they pass through, enabling complex operations like
grouping, filtering, reshaping documents, and performing calculations.
The core of this tool is the `pipelinePayload`, which must be a string
containing a **JSON array of pipeline stage documents**. The tool returns a JSON
array of documents produced by the final stage of the pipeline.
A `readOnly` flag can be set to `true` as a safety measure to ensure the
pipeline does not contain any write stages (like `$out` or `$merge`).
This tool is compatible with the following source kind:
* [`mongodb`](../../sources/mongodb.md)
## Example
Here is an example that calculates the average price and total count of products
for each category, but only for products with an "active" status.
```yaml
tools:
get_category_stats:
kind: mongodb-aggregate
source: my-mongo-source
description: Calculates average price and count of products, grouped by category.
database: ecommerce
collection: products
readOnly: true
pipelinePayload: |
[
{
"$match": {
"status": {{json .status_filter}}
}
},
{
"$group": {
"_id": "$category",
"average_price": { "$avg": "$price" },
"item_count": { "$sum": 1 }
}
},
{
"$sort": {
"average_price": -1
}
}
]
pipelineParams:
- name: status_filter
type: string
description: The product status to filter by (e.g., "active").
```
## Reference
| **field** | **type** | **required** | **description** |
|:----------------|:---------|:-------------|:---------------------------------------------------------------------------------------------------------------|
| kind | string | true | Must be `mongodb-aggregate`. |
| source | string | true | The name of the `mongodb` source to use. |
| description | string | true | A description of the tool that is passed to the LLM. |
| database | string | true | The name of the MongoDB database containing the collection. |
| collection | string | true | The name of the MongoDB collection to run the aggregation on. |
| pipelinePayload | string | true | A JSON array of aggregation stage documents, provided as a string. Uses `{{json .param_name}}` for templating. |
| pipelineParams | list | true | A list of parameter objects that define the variables used in the `pipelinePayload`. |
| canonical | bool | false | Determines if the pipeline string is parsed using MongoDB's Canonical or Relaxed Extended JSON format. |
| readOnly | bool | false | If `true`, the tool will fail if the pipeline contains write stages (`$out` or `$merge`). Defaults to `false`. |


@@ -10,9 +10,12 @@ aliases:
## About
The `mongodb-delete-many` tool performs a **bulk destructive operation**,
deleting **ALL** documents from a collection that match a specified filter.
The tool returns the total count of documents that were deleted. If the filter
does not match any documents (i.e., the deleted count is 0), the tool will
return an error.
This tool is compatible with the following source kind:
@@ -22,7 +25,8 @@ This tool is compatible with the following source kind:
## Example
Here is an example that performs a cleanup task by deleting all products from
the `inventory` collection that belong to a discontinued brand.
```yaml
tools:


@@ -10,11 +10,16 @@ aliases:
## About
The `mongodb-delete-one` tool performs a destructive operation, deleting the
**first single document** that matches a specified filter from a MongoDB
collection.
If the filter matches multiple documents, only the first one found by the
database will be deleted. This tool is useful for removing specific entries,
such as a user account or a single item from an inventory based on a unique ID.
The tool returns the number of documents deleted, which will be either `1` if a
document was found and deleted, or `0` if no matching document was found.
This tool is compatible with the following source kind:
@@ -24,7 +29,8 @@ This tool is compatible with the following source kind:
## Example
Here is an example that deletes a specific user account from the `users`
collection by matching their unique email address. This is a permanent action.
```yaml
tools:


@@ -10,9 +10,13 @@ aliases:
## About
A `mongodb-find-one` tool is used to retrieve the **first single document** that
matches a specified filter from a MongoDB collection. If multiple documents
match the filter, you can use `sort` options to control which document is
returned. Otherwise, the selection is not guaranteed.
The tool returns a single JSON object representing the document, wrapped in a
JSON array.
This tool is compatible with the following source kind:
@@ -22,7 +26,9 @@ This tool is compatible with the following source kind:
## Example
Here's a common use case: finding a specific user by their unique email address
and returning their profile information, while excluding sensitive fields like
the password hash.
```yaml
tools:


@@ -10,7 +10,11 @@ aliases:
## About
A `mongodb-find` tool is used to query a MongoDB collection and retrieve
documents that match a specified filter. It's a flexible tool that allows you to
shape the output by selecting specific fields (**projection**), ordering the
results (**sorting**), and restricting the number of documents returned
(**limiting**).
The tool returns a JSON array of the documents found.
@@ -20,7 +24,9 @@ This tool is compatible with the following source kind:
## Example
Here's an example that finds up to 10 users from the `customers` collection who
live in a specific city. The results are sorted by their last name, and only
their first name, last name, and email are returned.
```yaml
tools:


@@ -0,0 +1,58 @@
---
title: "mongodb-insert-many"
type: docs
weight: 1
description: >
A "mongodb-insert-many" tool inserts multiple new documents into a MongoDB collection.
aliases:
- /resources/tools/mongodb-insert-many
---
## About
The `mongodb-insert-many` tool inserts **multiple new documents** into a
specified MongoDB collection in a single bulk operation. This is highly
efficient for adding large amounts of data at once.
This tool takes one required parameter named `data`. This `data` parameter must
be a string containing a **JSON array of document objects**. Upon successful
insertion, the tool returns a JSON array containing the unique `_id` of **each**
new document that was created.
This tool is compatible with the following source kind:
* [`mongodb`](../../sources/mongodb.md)
---
## Example
Here is an example configuration for a tool that logs multiple events at once.
```yaml
tools:
log_batch_events:
kind: mongodb-insert-many
source: my-mongo-source
description: Inserts a batch of event logs into the database.
database: logging
collection: events
canonical: true
```
An LLM would call this tool by providing an array of documents as a JSON string
in the `data` parameter, like this:
`tool_code: log_batch_events(data='[{"event": "login", "user": "user1"}, {"event": "click", "user": "user2"}, {"event": "logout", "user": "user1"}]')`
---
## Reference
| **field** | **type** | **required** | **description** |
|:------------|:---------|:-------------|:---------------------------------------------------------------------------------------------------|
| kind | string | true | Must be `mongodb-insert-many`. |
| source | string | true | The name of the `mongodb` source to use. |
| description | string | true | A description of the tool that is passed to the LLM. |
| database | string | true | The name of the MongoDB database containing the collection. |
| collection | string | true | The name of the MongoDB collection into which the documents will be inserted. |
| canonical | bool | true | Determines if the data string is parsed using MongoDB's Canonical or Relaxed Extended JSON format. |


@@ -0,0 +1,53 @@
---
title: "mongodb-insert-one"
type: docs
weight: 1
description: >
A "mongodb-insert-one" tool inserts a single new document into a MongoDB collection.
aliases:
- /resources/tools/mongodb-insert-one
---
## About
The `mongodb-insert-one` tool inserts a **single new document** into a specified
MongoDB collection.
This tool takes one required parameter named `data`, which must be a string
containing the JSON object you want to insert. Upon successful insertion, the
tool returns the unique `_id` of the newly created document.
This tool is compatible with the following source kind:
* [`mongodb`](../../sources/mongodb.md)
## Example
Here is an example configuration for a tool that adds a new user to a `users`
collection.
```yaml
tools:
create_new_user:
kind: mongodb-insert-one
source: my-mongo-source
description: Creates a new user record in the database.
database: user_data
collection: users
canonical: false
```
An LLM would call this tool by providing the document as a JSON string in the
`data` parameter, like this:
`tool_code: create_new_user(data='{"email": "new.user@example.com", "name": "Jane Doe", "status": "active"}')`
## Reference
| **field** | **type** | **required** | **description** |
|:------------|:---------|:-------------|:---------------------------------------------------------------------------------------------------|
| kind | string | true | Must be `mongodb-insert-one`. |
| source | string | true | The name of the `mongodb` source to use. |
| description | string | true | A description of the tool that is passed to the LLM. |
| database | string | true | The name of the MongoDB database containing the collection. |
| collection | string | true | The name of the MongoDB collection into which the document will be inserted. |
| canonical | bool | true | Determines if the data string is parsed using MongoDB's Canonical or Relaxed Extended JSON format. |


@@ -10,9 +10,12 @@ aliases:
## About
A `mongodb-update-many` tool updates **all** documents within a specified
MongoDB collection that match a given filter. It locates the documents using a
`filterPayload` and applies the modifications defined in an `updatePayload`.
The tool returns an array of three integers: `[ModifiedCount, UpsertedCount,
MatchedCount]`.
This tool is compatible with the following source kind:
@@ -22,7 +25,8 @@ This tool is compatible with the following source kind:
## Example
Here's an example configuration. This tool applies a discount to all items
within a specific category and also marks them as being on sale.
```yaml
tools:


@@ -10,7 +10,10 @@ aliases:
## About
A `mongodb-update-one` tool updates a single document within a specified MongoDB
collection. It locates the document to be updated using a `filterPayload` and
applies modifications defined in an `updatePayload`. If the filter matches
multiple documents, only the first one found will be updated.
This tool is compatible with the following source kind:
@@ -20,7 +23,10 @@ This tool is compatible with the following source kind:
## Example
Here's an example of a `mongodb-update-one` tool configuration. This tool
updates the `stock` and `status` fields of a document in the `inventory`
collection where the `item` field matches a provided value. If no matching
document is found, the `upsert: true` option will create a new one.
```yaml
tools:


@@ -11,15 +11,24 @@ aliases:
## About
A `neo4j-execute-cypher` tool executes an arbitrary Cypher query provided as a
string parameter against a Neo4j database. It's designed to be a flexible tool
for interacting with the database when a pre-defined query is not sufficient.
This tool is compatible with any of the following sources:
- [neo4j](../sources/neo4j.md)
For security, the tool can be configured to be read-only. If the `readOnly` flag
is set to `true`, the tool will analyze the incoming Cypher query and reject any
write operations (like `CREATE`, `MERGE`, `DELETE`, etc.) before execution.
The Cypher query uses standard [Neo4j
Cypher](https://neo4j.com/docs/cypher-manual/current/queries/) syntax and
supports all Cypher features, including pattern matching, filtering, and
aggregation.
`neo4j-execute-cypher` takes one input parameter `cypher` and runs the Cypher
query against the `source`.
> **Note:** This tool is intended for developer assistant workflows with
> human-in-the-loop and shouldn't be used for production agents.
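A minimal configuration sketch using the fields from the reference table below
(the tool and source names are illustrative):

```yaml
tools:
  execute_cypher:
    kind: neo4j-execute-cypher
    source: my-neo4j-instance
    description: Executes an arbitrary Cypher query.
    # Reject CREATE, MERGE, DELETE, and other write clauses before execution
    readOnly: true
```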
@@ -50,4 +59,3 @@ tools:
| source | string | true | Name of the source the Cypher query should execute on. |
| description | string | true | Description of the tool that is passed to the LLM. |
| readOnly | boolean | false | If set to `true`, the tool will reject any write operations in the Cypher query. Default is `false`. |


@@ -0,0 +1,42 @@
---
title: "neo4j-schema"
type: "docs"
weight: 1
description: >
A "neo4j-schema" tool extracts a comprehensive schema from a Neo4j
database.
aliases:
- /resources/tools/neo4j-schema
---
## About
A `neo4j-schema` tool connects to a Neo4j database and extracts its complete schema information. It runs multiple queries concurrently to efficiently gather details about node labels, relationships, properties, constraints, and indexes.
The tool automatically detects if the APOC (Awesome Procedures on Cypher) library is available. If so, it uses APOC procedures like `apoc.meta.schema` for a highly detailed overview of the database structure; otherwise, it falls back to using native Cypher queries.
The extracted schema is **cached** to improve performance for subsequent requests. The output is a structured JSON object containing all the schema details, which can be invaluable for providing database context to an LLM. This tool is compatible with a `neo4j` source and takes no parameters.
## Example
```yaml
tools:
get_movie_db_schema:
kind: neo4j-schema
source: my-neo4j-movies-instance
description: |
Use this tool to get the full schema of the movie database.
This provides information on all available node labels (like Movie, Person),
relationships (like ACTED_IN), and the properties on each.
This tool takes no parameters.
# Optional configuration to cache the schema for 2 hours
cacheExpireMinutes: 120
```
## Reference
| **field** | **type** | **required** | **description** |
|---------------------|:----------:|:------------:|-------------------------------------------------------------------------------------------------|
| kind                | string     | true         | Must be `neo4j-schema`.                                                                           |
| source | string | true | Name of the source the schema should be extracted from. |
| description | string | true | Description of the tool that is passed to the LLM. |
| cacheExpireMinutes | integer | false | Cache expiration time in minutes. Defaults to 60. |


@@ -28,7 +28,8 @@ inserted according to their name: e.g. `@name`.
> Parameters cannot be used as substitutes for identifiers, column names, table
> names, or other parts of the query.
[gsql-dml]:
https://cloud.google.com/spanner/docs/reference/standard-sql/dml-syntax
### PostgreSQL


@@ -6,10 +6,14 @@ description: >
Wait for a long-running AlloyDB operation to complete.
---
The `alloydb-wait-for-operation` tool is a utility tool that waits for a
long-running AlloyDB operation to complete. It does this by polling the AlloyDB
Admin API operation status endpoint until the operation is finished, using
exponential backoff.
{{< notice info >}}
This tool is intended for developer assistant workflows with human-in-the-loop
and shouldn't be used for production agents.
{{< /notice >}}
## Example
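A minimal configuration sketch using the fields from the reference table below
(the tool and source names are illustrative); the backoff comments show how
`delay` and `multiplier` interact:

```yaml
tools:
  wait_for_operation:
    kind: alloydb-wait-for-operation
    source: alloydb-admin-source
    description: Polls an AlloyDB operation until it completes.
    delay: 3s        # first poll after 3 seconds
    multiplier: 2.0  # then 6s, 12s, 24s, ...
    maxDelay: 4m     # capped at 4 minutes between polls
```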
@@ -40,7 +44,7 @@ tools:
| ----------- | :------: | :----------: | ---------------------------------------------------------------------------------------------------------------- |
| kind | string | true | Must be "alloydb-wait-for-operation". |
| source | string | true | Name of the source the HTTP request should be sent to. |
| description | string | true | A description of the tool. |
| delay | duration | false | The initial delay between polling requests (e.g., `3s`). Defaults to 3 seconds. |
| maxDelay | duration | false | The maximum delay between polling requests (e.g., `4m`). Defaults to 4 minutes. |
| multiplier | float | false | The multiplier for the polling delay. The delay is multiplied by this value after each request. Defaults to 2.0. |


@@ -10,12 +10,15 @@ aliases:
## About
A `wait` tool pauses execution for a specified duration. This can be useful in
workflows where a delay is needed between steps.
`wait` takes one input parameter `duration` which is a string representing the time to wait (e.g., "10s", "2m", "1h").
`wait` takes one input parameter `duration` which is a string representing the
time to wait (e.g., "10s", "2m", "1h").
{{< notice info >}}
This tool is intended for developer assistant workflows with human-in-the-loop
and shouldn't be used for production agents.
{{< /notice >}}
## Example
@@ -30,8 +33,8 @@ tools:
## Reference
| **field**   |    **type**    | **required** | **description**                                        |
|-------------|:--------------:|:------------:|--------------------------------------------------------|
| kind        |     string     |     true     | Must be "wait".                                        |
| description |     string     |     true     | Description of the tool that is passed to the LLM.     |
| timeout     |     string     |     true     | The default duration the tool can wait for.            |
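The duration strings accepted by `wait` follow Go's duration syntax; below is a minimal sketch of how such strings parse, assuming the tool relies on `time.ParseDuration` or an equivalent.

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Duration strings in the same format the `wait` tool accepts.
	for _, s := range []string{"10s", "2m", "1h"} {
		d, err := time.ParseDuration(s)
		if err != nil {
			fmt.Println("invalid duration:", s)
			continue
		}
		fmt.Printf("%s parses to %v\n", s, d)
	}
}
```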

View File

@@ -220,7 +220,7 @@
},
"outputs": [],
"source": [
"version = \"0.9.0\" # x-release-please-version\n",
"version = \"0.10.0\" # x-release-please-version\n",
"! curl -O https://storage.googleapis.com/genai-toolbox/v{version}/linux/amd64/toolbox\n",
"\n",
"# Make the binary executable\n",

View File

@@ -179,7 +179,7 @@ to use BigQuery, and then run the Toolbox server.
<!-- {x-release-please-start-version} -->
```bash
export OS="linux/amd64" # one of linux/amd64, darwin/arm64, darwin/amd64, or windows/amd64
curl -O https://storage.googleapis.com/genai-toolbox/v0.10.0/$OS/toolbox
```
<!-- {x-release-please-end} -->
@@ -292,8 +292,10 @@ to use BigQuery, and then run the Toolbox server.
```bash
./toolbox --tools-file "tools.yaml"
```
{{< notice note >}}
Toolbox enables dynamic reloading by default. To disable, use the
`--disable-reload` flag.
{{< /notice >}}
## Step 3: Connect your agent to Toolbox

View File

@@ -98,7 +98,7 @@ In this section, we will download Toolbox, configure our tools in a
<!-- {x-release-please-start-version} -->
```bash
export OS="linux/amd64" # one of linux/amd64, darwin/arm64, darwin/amd64, or windows/amd64
curl -O https://storage.googleapis.com/genai-toolbox/v0.10.0/$OS/toolbox
```
<!-- {x-release-please-end} -->
@@ -208,7 +208,8 @@ In this section, we will download Toolbox, configure our tools in a
1. Type `y` when it asks to install the inspector package.
1. It should show the following when the MCP Inspector is up and running (please
   take note of `<YOUR_SESSION_TOKEN>`):
```bash
Starting MCP inspector...
@@ -226,7 +227,8 @@ In this section, we will download Toolbox, configure our tools in a
1. For `URL`, type in `http://127.0.0.1:5000/mcp`.
1. For `Configuration` -> `Proxy Session Token`, make sure
   `<YOUR_SESSION_TOKEN>` is present.
1. Click Connect.
@@ -236,4 +238,4 @@ In this section, we will download Toolbox, configure our tools in a
![inspector_tools](./inspector_tools.png)
1. Test out your tools here!

View File

@@ -100,6 +100,7 @@ In this section, we will download Toolbox and run the Toolbox server.
- looker-toolbox__get_dimensions
- looker-toolbox__run_look
```
1. Start exploring your Looker instance with commands like
`Find an explore to see orders` or `show me my current
inventory broken down by item category`.

1
go.mod
View File

@@ -9,6 +9,7 @@ require (
cloud.google.com/go/bigquery v1.69.0
cloud.google.com/go/bigtable v1.38.0
cloud.google.com/go/cloudsqlconn v1.17.3
cloud.google.com/go/dataplex v1.26.0
cloud.google.com/go/firestore v1.18.0
cloud.google.com/go/spanner v1.83.0
github.com/GoogleCloudPlatform/opentelemetry-operations-go/exporter/metric v0.53.0

2
go.sum
View File

@@ -236,6 +236,8 @@ cloud.google.com/go/dataplex v1.3.0/go.mod h1:hQuRtDg+fCiFgC8j0zV222HvzFQdRd+SVX
cloud.google.com/go/dataplex v1.4.0/go.mod h1:X51GfLXEMVJ6UN47ESVqvlsRplbLhcsAt0kZCCKsU0A=
cloud.google.com/go/dataplex v1.5.2/go.mod h1:cVMgQHsmfRoI5KFYq4JtIBEUbYwc3c7tXmIDhRmNNVQ=
cloud.google.com/go/dataplex v1.6.0/go.mod h1:bMsomC/aEJOSpHXdFKFGQ1b0TDPIeL28nJObeO1ppRs=
cloud.google.com/go/dataplex v1.26.0 h1:nu8/KrLR5v62L1lApGNgm61Oq+xaa2bS9rgc1csjqE0=
cloud.google.com/go/dataplex v1.26.0/go.mod h1:12R9nlLUzxOscbb2HgoYnkGNibmv4sXEVMXxrdw2a90=
cloud.google.com/go/dataproc v1.7.0/go.mod h1:CKAlMjII9H90RXaMpSxQ8EU6dQx6iAYNPcYPOkSbi8s=
cloud.google.com/go/dataproc v1.8.0/go.mod h1:5OW+zNAH0pMpw14JVrPONsxMQYMBqJuzORhIBfBn9uI=
cloud.google.com/go/dataproc v1.12.0/go.mod h1:zrF3aX0uV3ikkMz6z4uBbIKyhRITnxvr4i3IjKsKrw4=

View File

@@ -29,6 +29,7 @@ func TestLoadPrebuiltToolYAMLs(t *testing.T) {
"cloud-sql-mssql",
"cloud-sql-mysql",
"cloud-sql-postgres",
"dataplex",
"firestore",
"looker",
"mssql",
@@ -74,6 +75,7 @@ func TestGetPrebuiltTool(t *testing.T) {
cloudsqlpg_config, _ := Get("cloud-sql-postgres")
cloudsqlmysql_config, _ := Get("cloud-sql-mysql")
cloudsqlmssql_config, _ := Get("cloud-sql-mssql")
dataplex_config, _ := Get("dataplex")
firestoreconfig, _ := Get("firestore")
mysql_config, _ := Get("mysql")
mssql_config, _ := Get("mssql")
@@ -98,6 +100,9 @@ func TestGetPrebuiltTool(t *testing.T) {
if len(cloudsqlmssql_config) <= 0 {
t.Fatalf("unexpected error: could not fetch cloud sql mssql prebuilt tools yaml")
}
if len(dataplex_config) <= 0 {
t.Fatalf("unexpected error: could not fetch dataplex prebuilt tools yaml")
}
if len(firestoreconfig) <= 0 {
t.Fatalf("unexpected error: could not fetch firestore prebuilt tools yaml")
}

View File

@@ -0,0 +1,15 @@
sources:
dataplex-source:
kind: "dataplex"
project: ${DATAPLEX_PROJECT}
tools:
dataplex_search_entries:
kind: dataplex-search-entries
source: dataplex-source
description: |
Use this tool to search for entries in Dataplex Catalog that represent data assets (e.g. tables, views, models) based on the provided search query.
toolsets:
dataplex-tools:
- dataplex_search_entries

View File

@@ -0,0 +1,125 @@
// Copyright 2025 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package dataplex
import (
"context"
"fmt"
dataplexapi "cloud.google.com/go/dataplex/apiv1"
"github.com/goccy/go-yaml"
"github.com/googleapis/genai-toolbox/internal/sources"
"github.com/googleapis/genai-toolbox/internal/util"
"go.opentelemetry.io/otel/trace"
"golang.org/x/oauth2/google"
"google.golang.org/api/option"
)
const SourceKind string = "dataplex"
// validate interface
var _ sources.SourceConfig = Config{}
func init() {
if !sources.Register(SourceKind, newConfig) {
panic(fmt.Sprintf("source kind %q already registered", SourceKind))
}
}
func newConfig(ctx context.Context, name string, decoder *yaml.Decoder) (sources.SourceConfig, error) {
actual := Config{Name: name}
if err := decoder.DecodeContext(ctx, &actual); err != nil {
return nil, err
}
return actual, nil
}
type Config struct {
// Dataplex configs
Name string `yaml:"name" validate:"required"`
Kind string `yaml:"kind" validate:"required"`
Project string `yaml:"project" validate:"required"`
}
func (r Config) SourceConfigKind() string {
// Returns Dataplex source kind
return SourceKind
}
func (r Config) Initialize(ctx context.Context, tracer trace.Tracer) (sources.Source, error) {
// Initializes a Dataplex source
client, err := initDataplexConnection(ctx, tracer, r.Name, r.Project)
if err != nil {
return nil, err
}
s := &Source{
Name: r.Name,
Kind: SourceKind,
Client: client,
Project: r.Project,
}
return s, nil
}
var _ sources.Source = &Source{}
type Source struct {
// Source struct with Dataplex client
Name string `yaml:"name"`
Kind string `yaml:"kind"`
Client *dataplexapi.CatalogClient
Project string `yaml:"project"`
Location string `yaml:"location"`
}
func (s *Source) SourceKind() string {
// Returns Dataplex source kind
return SourceKind
}
func (s *Source) ProjectID() string {
return s.Project
}
func (s *Source) CatalogClient() *dataplexapi.CatalogClient {
return s.Client
}
func initDataplexConnection(
ctx context.Context,
tracer trace.Tracer,
name string,
project string,
) (*dataplexapi.CatalogClient, error) {
ctx, span := sources.InitConnectionSpan(ctx, tracer, SourceKind, name)
defer span.End()
cred, err := google.FindDefaultCredentials(ctx)
if err != nil {
return nil, fmt.Errorf("failed to find default Google Cloud credentials: %w", err)
}
userAgent, err := util.UserAgentFromContext(ctx)
if err != nil {
return nil, err
}
client, err := dataplexapi.NewCatalogClient(ctx, option.WithUserAgent(userAgent), option.WithCredentials(cred))
if err != nil {
return nil, fmt.Errorf("failed to create Dataplex client for project %q: %w", project, err)
}
return client, nil
}

View File

@@ -0,0 +1,111 @@
// Copyright 2025 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package dataplex_test
import (
"testing"
yaml "github.com/goccy/go-yaml"
"github.com/google/go-cmp/cmp"
"github.com/googleapis/genai-toolbox/internal/server"
"github.com/googleapis/genai-toolbox/internal/sources/dataplex"
"github.com/googleapis/genai-toolbox/internal/testutils"
)
func TestParseFromYamlDataplex(t *testing.T) {
tcs := []struct {
desc string
in string
want server.SourceConfigs
}{
{
desc: "basic example",
in: `
sources:
my-instance:
kind: dataplex
project: my-project
`,
want: server.SourceConfigs{
"my-instance": dataplex.Config{
Name: "my-instance",
Kind: dataplex.SourceKind,
Project: "my-project",
},
},
},
}
for _, tc := range tcs {
t.Run(tc.desc, func(t *testing.T) {
got := struct {
Sources server.SourceConfigs `yaml:"sources"`
}{}
// Parse contents
err := yaml.Unmarshal(testutils.FormatYaml(tc.in), &got)
if err != nil {
t.Fatalf("unable to unmarshal: %s", err)
}
if !cmp.Equal(tc.want, got.Sources) {
t.Fatalf("incorrect parse: want %v, got %v", tc.want, got.Sources)
}
})
}
}
func TestFailParseFromYaml(t *testing.T) {
tcs := []struct {
desc string
in string
err string
}{
{
desc: "extra field",
in: `
sources:
my-instance:
kind: dataplex
project: my-project
foo: bar
`,
err: "unable to parse source \"my-instance\" as \"dataplex\": [1:1] unknown field \"foo\"\n> 1 | foo: bar\n ^\n 2 | kind: dataplex\n 3 | project: my-project",
},
{
desc: "missing required field",
in: `
sources:
my-instance:
kind: dataplex
`,
err: "unable to parse source \"my-instance\" as \"dataplex\": Key: 'Config.Project' Error:Field validation for 'Project' failed on the 'required' tag",
},
}
for _, tc := range tcs {
t.Run(tc.desc, func(t *testing.T) {
got := struct {
Sources server.SourceConfigs `yaml:"sources"`
}{}
// Parse contents
err := yaml.Unmarshal(testutils.FormatYaml(tc.in), &got)
if err == nil {
t.Fatalf("expect parsing to fail")
}
errStr := err.Error()
if errStr != tc.err {
t.Fatalf("unexpected error: got %q, want %q", errStr, tc.err)
}
})
}
}

View File

@@ -54,6 +54,7 @@ type Config struct {
User string `yaml:"user" validate:"required"`
Password string `yaml:"password" validate:"required"`
Database string `yaml:"database" validate:"required"`
Encrypt string `yaml:"encrypt"`
}
func (r Config) SourceConfigKind() string {
@@ -63,7 +64,7 @@ func (r Config) SourceConfigKind() string {
func (r Config) Initialize(ctx context.Context, tracer trace.Tracer) (sources.Source, error) {
// Initializes a MSSQL source
	db, err := initMssqlConnection(ctx, tracer, r.Name, r.Host, r.Port, r.User, r.Password, r.Database, r.Encrypt)
if err != nil {
return nil, fmt.Errorf("unable to create db connection: %w", err)
}
@@ -101,7 +102,14 @@ func (s *Source) MSSQLDB() *sql.DB {
return s.Db
}
func initMssqlConnection(
	ctx context.Context,
	tracer trace.Tracer,
	name, host, port, user, pass, dbname, encrypt string,
) (
	*sql.DB,
	error,
) {
//nolint:all // Reassigned ctx
ctx, span := sources.InitConnectionSpan(ctx, tracer, SourceKind, name)
defer span.End()
@@ -109,6 +117,10 @@ func initMssqlConnection(ctx context.Context, tracer trace.Tracer, name, host, p
// Create dsn
query := url.Values{}
query.Add("database", dbname)
if encrypt != "" {
query.Add("encrypt", encrypt)
}
url := &url.URL{
Scheme: "sqlserver",
User: url.UserPassword(user, pass),
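The net effect of the new `encrypt` option is an extra query parameter on the SQL Server DSN. A minimal sketch of the resulting connection URL, with placeholder host and credentials:

```go
package main

import (
	"fmt"
	"net/url"
)

func main() {
	// Mirror of the DSN construction above, with the new optional
	// encrypt setting appended as a query parameter.
	query := url.Values{}
	query.Add("database", "my_db")
	if encrypt := "strict"; encrypt != "" {
		query.Add("encrypt", encrypt)
	}
	u := &url.URL{
		Scheme:   "sqlserver",
		User:     url.UserPassword("my_user", "my_pass"),
		Host:     "localhost:1433",
		RawQuery: query.Encode(),
	}
	fmt.Println(u.String()) // sqlserver://my_user:my_pass@localhost:1433?database=my_db&encrypt=strict
}
```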

View File

@@ -54,6 +54,32 @@ func TestParseFromYamlMssql(t *testing.T) {
},
},
},
{
desc: "with encrypt field",
in: `
sources:
my-mssql-instance:
kind: mssql
host: 0.0.0.0
port: my-port
database: my_db
user: my_user
password: my_pass
encrypt: strict
`,
want: server.SourceConfigs{
"my-mssql-instance": mssql.Config{
Name: "my-mssql-instance",
Kind: mssql.SourceKind,
Host: "0.0.0.0",
Port: "my-port",
Database: "my_db",
User: "my_user",
Password: "my_pass",
Encrypt: "strict",
},
},
},
}
for _, tc := range tcs {
t.Run(tc.desc, func(t *testing.T) {

View File

@@ -104,7 +104,7 @@ func PopulateTemplateWithJSON(templateName, templateString string, data map[stri
return result.String(), nil
}
// CheckDuplicateParameters verifies there are no duplicate parameter names
func CheckDuplicateParameters(ps Parameters) error {
seenNames := make(map[string]bool)
for _, p := range ps {

View File

@@ -0,0 +1,175 @@
// Copyright 2025 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package dataplexsearchentries
import (
"context"
"fmt"
dataplexapi "cloud.google.com/go/dataplex/apiv1"
dataplexpb "cloud.google.com/go/dataplex/apiv1/dataplexpb"
"github.com/goccy/go-yaml"
"github.com/googleapis/genai-toolbox/internal/sources"
dataplexds "github.com/googleapis/genai-toolbox/internal/sources/dataplex"
"github.com/googleapis/genai-toolbox/internal/tools"
)
const kind string = "dataplex-search-entries"
func init() {
if !tools.Register(kind, newConfig) {
panic(fmt.Sprintf("tool kind %q already registered", kind))
}
}
func newConfig(ctx context.Context, name string, decoder *yaml.Decoder) (tools.ToolConfig, error) {
actual := Config{Name: name}
if err := decoder.DecodeContext(ctx, &actual); err != nil {
return nil, err
}
return actual, nil
}
type compatibleSource interface {
CatalogClient() *dataplexapi.CatalogClient
ProjectID() string
}
// validate compatible sources are still compatible
var _ compatibleSource = &dataplexds.Source{}
var compatibleSources = [...]string{dataplexds.SourceKind}
type Config struct {
Name string `yaml:"name" validate:"required"`
Kind string `yaml:"kind" validate:"required"`
Source string `yaml:"source" validate:"required"`
Description string `yaml:"description"`
AuthRequired []string `yaml:"authRequired"`
}
// validate interface
var _ tools.ToolConfig = Config{}
func (cfg Config) ToolConfigKind() string {
return kind
}
func (cfg Config) Initialize(srcs map[string]sources.Source) (tools.Tool, error) {
// Initialize the search configuration with the provided sources
rawS, ok := srcs[cfg.Source]
if !ok {
return nil, fmt.Errorf("no source named %q configured", cfg.Source)
}
// verify the source is compatible
s, ok := rawS.(compatibleSource)
if !ok {
return nil, fmt.Errorf("invalid source for %q tool: source kind must be one of %q", kind, compatibleSources)
}
query := tools.NewStringParameter("query", "The query against which entries in scope should be matched.")
name := tools.NewStringParameterWithDefault("name", fmt.Sprintf("projects/%s/locations/global", s.ProjectID()), "The project to which the request should be attributed in the following form: projects/{project}/locations/global")
pageSize := tools.NewIntParameterWithDefault("pageSize", 5, "Number of results in the search page.")
pageToken := tools.NewStringParameterWithDefault("pageToken", "", "Page token received from a previous locations.searchEntries call. Provide this to retrieve the subsequent page.")
orderBy := tools.NewStringParameterWithDefault("orderBy", "relevance", "Specifies the ordering of results. Supported values are: relevance, last_modified_timestamp, last_modified_timestamp asc")
semanticSearch := tools.NewBooleanParameterWithDefault("semanticSearch", true, "Whether to use semantic search for the query. If true, the query will be processed using semantic search capabilities.")
parameters := tools.Parameters{query, name, pageSize, pageToken, orderBy, semanticSearch}
mcpManifest := tools.McpManifest{
Name: cfg.Name,
Description: cfg.Description,
InputSchema: parameters.McpManifest(),
}
t := &SearchTool{
Name: cfg.Name,
Kind: kind,
Parameters: parameters,
AuthRequired: cfg.AuthRequired,
CatalogClient: s.CatalogClient(),
ProjectID: s.ProjectID(),
manifest: tools.Manifest{
Description: cfg.Description,
Parameters: parameters.Manifest(),
AuthRequired: cfg.AuthRequired,
},
mcpManifest: mcpManifest,
}
return t, nil
}
type SearchTool struct {
Name string
Kind string
Parameters tools.Parameters
AuthRequired []string
CatalogClient *dataplexapi.CatalogClient
ProjectID string
manifest tools.Manifest
mcpManifest tools.McpManifest
}
func (t *SearchTool) Authorized(verifiedAuthServices []string) bool {
return tools.IsAuthorized(t.AuthRequired, verifiedAuthServices)
}
func (t *SearchTool) Invoke(ctx context.Context, params tools.ParamValues) (any, error) {
paramsMap := params.AsMap()
query, _ := paramsMap["query"].(string)
name, _ := paramsMap["name"].(string)
pageSize, _ := paramsMap["pageSize"].(int32)
pageToken, _ := paramsMap["pageToken"].(string)
orderBy, _ := paramsMap["orderBy"].(string)
semanticSearch, _ := paramsMap["semanticSearch"].(bool)
req := &dataplexpb.SearchEntriesRequest{
Query: query,
Name: name,
PageSize: pageSize,
PageToken: pageToken,
OrderBy: orderBy,
SemanticSearch: semanticSearch,
}
it := t.CatalogClient.SearchEntries(ctx, req)
if it == nil {
return nil, fmt.Errorf("failed to create search entries iterator for project %q", t.ProjectID)
}
var results []*dataplexpb.SearchEntriesResult
	for {
		entry, err := it.Next()
		if err == iterator.Done {
			break
		}
		if err != nil {
			return nil, fmt.Errorf("failed to iterate search results: %w", err)
		}
		results = append(results, entry)
	}
return results, nil
}
func (t *SearchTool) ParseParams(data map[string]any, claims map[string]map[string]any) (tools.ParamValues, error) {
// Parse parameters from the provided data
return tools.ParseParams(t.Parameters, data, claims)
}
func (t *SearchTool) Manifest() tools.Manifest {
// Returns the tool manifest
return t.manifest
}
func (t *SearchTool) McpManifest() tools.McpManifest {
// Returns the tool MCP manifest
return t.mcpManifest
}

View File

@@ -0,0 +1,73 @@
// Copyright 2025 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package dataplexsearchentries_test
import (
"testing"
yaml "github.com/goccy/go-yaml"
"github.com/google/go-cmp/cmp"
"github.com/googleapis/genai-toolbox/internal/server"
"github.com/googleapis/genai-toolbox/internal/testutils"
"github.com/googleapis/genai-toolbox/internal/tools/dataplex/dataplexsearchentries"
)
func TestParseFromYamlDataplexSearchEntries(t *testing.T) {
ctx, err := testutils.ContextWithNewLogger()
if err != nil {
t.Fatalf("unexpected error: %s", err)
}
tcs := []struct {
desc string
in string
want server.ToolConfigs
}{
{
desc: "basic example",
in: `
tools:
example_tool:
kind: dataplex-search-entries
source: my-instance
description: some description
`,
want: server.ToolConfigs{
"example_tool": dataplexsearchentries.Config{
Name: "example_tool",
Kind: "dataplex-search-entries",
Source: "my-instance",
Description: "some description",
AuthRequired: []string{},
},
},
},
}
for _, tc := range tcs {
t.Run(tc.desc, func(t *testing.T) {
got := struct {
Tools server.ToolConfigs `yaml:"tools"`
}{}
// Parse contents
err := yaml.UnmarshalContext(ctx, testutils.FormatYaml(tc.in), &got)
if err != nil {
t.Fatalf("unable to unmarshal: %s", err)
}
if diff := cmp.Diff(tc.want, got.Tools); diff != "" {
t.Fatalf("incorrect parse: diff %v", diff)
}
})
}
}

View File

@@ -0,0 +1,204 @@
// Copyright 2025 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package mongodbaggregate
import (
"context"
"encoding/json"
"fmt"
"slices"
"github.com/goccy/go-yaml"
mongosrc "github.com/googleapis/genai-toolbox/internal/sources/mongodb"
"go.mongodb.org/mongo-driver/bson"
"go.mongodb.org/mongo-driver/mongo"
"github.com/googleapis/genai-toolbox/internal/sources"
"github.com/googleapis/genai-toolbox/internal/tools"
)
const kind string = "mongodb-aggregate"
func init() {
if !tools.Register(kind, newConfig) {
panic(fmt.Sprintf("tool kind %q already registered", kind))
}
}
func newConfig(ctx context.Context, name string, decoder *yaml.Decoder) (tools.ToolConfig, error) {
actual := Config{Name: name}
if err := decoder.DecodeContext(ctx, &actual); err != nil {
return nil, err
}
return actual, nil
}
type Config struct {
Name string `yaml:"name" validate:"required"`
Kind string `yaml:"kind" validate:"required"`
Source string `yaml:"source" validate:"required"`
AuthRequired []string `yaml:"authRequired" validate:"required"`
Description string `yaml:"description" validate:"required"`
Database string `yaml:"database" validate:"required"`
Collection string `yaml:"collection" validate:"required"`
PipelinePayload string `yaml:"pipelinePayload" validate:"required"`
PipelineParams tools.Parameters `yaml:"pipelineParams" validate:"required"`
Canonical bool `yaml:"canonical"`
ReadOnly bool `yaml:"readOnly"`
}
// validate interface
var _ tools.ToolConfig = Config{}
func (cfg Config) ToolConfigKind() string {
return kind
}
func (cfg Config) Initialize(srcs map[string]sources.Source) (tools.Tool, error) {
// verify source exists
rawS, ok := srcs[cfg.Source]
if !ok {
return nil, fmt.Errorf("no source named %q configured", cfg.Source)
}
// verify the source is compatible
s, ok := rawS.(*mongosrc.Source)
if !ok {
return nil, fmt.Errorf("invalid source for %q tool: source kind must be `mongodb`", kind)
}
// Create a slice for all parameters
allParameters := slices.Concat(cfg.PipelineParams)
// Create Toolbox manifest
paramManifest := allParameters.Manifest()
if paramManifest == nil {
paramManifest = make([]tools.ParameterManifest, 0)
}
// Create MCP manifest
mcpManifest := tools.McpManifest{
Name: cfg.Name,
Description: cfg.Description,
InputSchema: allParameters.McpManifest(),
}
// finish tool setup
return Tool{
Name: cfg.Name,
Kind: kind,
AuthRequired: cfg.AuthRequired,
Collection: cfg.Collection,
PipelinePayload: cfg.PipelinePayload,
PipelineParams: cfg.PipelineParams,
Canonical: cfg.Canonical,
ReadOnly: cfg.ReadOnly,
AllParams: allParameters,
database: s.Client.Database(cfg.Database),
manifest: tools.Manifest{Description: cfg.Description, Parameters: paramManifest, AuthRequired: cfg.AuthRequired},
mcpManifest: mcpManifest,
}, nil
}
// validate interface
var _ tools.Tool = Tool{}
type Tool struct {
Name string `yaml:"name"`
Kind string `yaml:"kind"`
Description string `yaml:"description"`
AuthRequired []string `yaml:"authRequired"`
Collection string `yaml:"collection"`
PipelinePayload string `yaml:"pipelinePayload"`
PipelineParams tools.Parameters `yaml:"pipelineParams"`
Canonical bool `yaml:"canonical"`
ReadOnly bool `yaml:"readOnly"`
AllParams tools.Parameters `yaml:"allParams"`
database *mongo.Database
manifest tools.Manifest
mcpManifest tools.McpManifest
}
func (t Tool) Invoke(ctx context.Context, params tools.ParamValues) (any, error) {
paramsMap := params.AsMap()
pipelineString, err := tools.PopulateTemplateWithJSON("MongoDBAggregatePipeline", t.PipelinePayload, paramsMap)
if err != nil {
return nil, fmt.Errorf("error populating pipeline: %s", err)
}
var pipeline = []bson.M{}
err = bson.UnmarshalExtJSON([]byte(pipelineString), t.Canonical, &pipeline)
if err != nil {
return nil, err
}
if t.ReadOnly {
		// Fail if the pipeline contains a $merge or $out stage, which would write data.
for _, stage := range pipeline {
for key := range stage {
if key == "$merge" || key == "$out" {
return nil, fmt.Errorf("this is not a read-only pipeline: %+v", stage)
}
}
}
}
cur, err := t.database.Collection(t.Collection).Aggregate(ctx, pipeline)
if err != nil {
return nil, err
}
defer cur.Close(ctx)
var data = []any{}
err = cur.All(ctx, &data)
if err != nil {
return nil, err
}
if len(data) == 0 {
return []any{}, nil
}
var final []any
for _, item := range data {
		tmp, err := bson.MarshalExtJSON(item, false, false)
		if err != nil {
			return nil, err
		}
var tmp2 any
err = json.Unmarshal(tmp, &tmp2)
if err != nil {
return nil, err
}
final = append(final, tmp2)
}
	return final, nil
}
func (t Tool) ParseParams(data map[string]any, claims map[string]map[string]any) (tools.ParamValues, error) {
return tools.ParseParams(t.AllParams, data, claims)
}
func (t Tool) Manifest() tools.Manifest {
return t.manifest
}
func (t Tool) McpManifest() tools.McpManifest {
return t.mcpManifest
}
func (t Tool) Authorized(verifiedAuthServices []string) bool {
return tools.IsAuthorized(t.AuthRequired, verifiedAuthServices)
}

View File

@@ -0,0 +1,142 @@
// Copyright 2025 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package mongodbaggregate_test
import (
"strings"
"testing"
"github.com/googleapis/genai-toolbox/internal/tools/mongodb/mongodbaggregate"
yaml "github.com/goccy/go-yaml"
"github.com/google/go-cmp/cmp"
"github.com/googleapis/genai-toolbox/internal/server"
"github.com/googleapis/genai-toolbox/internal/testutils"
"github.com/googleapis/genai-toolbox/internal/tools"
)
func TestParseFromYamlMongoQuery(t *testing.T) {
ctx, err := testutils.ContextWithNewLogger()
if err != nil {
t.Fatalf("unexpected error: %s", err)
}
tcs := []struct {
desc string
in string
want server.ToolConfigs
}{
{
desc: "basic example",
in: `
tools:
example_tool:
kind: mongodb-aggregate
source: my-instance
description: some description
database: test_db
collection: test_coll
readOnly: true
pipelinePayload: |
[{ $match: { name: {{json .name}} }}]
pipelineParams:
- name: name
type: string
description: small description
`,
want: server.ToolConfigs{
"example_tool": mongodbaggregate.Config{
Name: "example_tool",
Kind: "mongodb-aggregate",
Source: "my-instance",
AuthRequired: []string{},
Database: "test_db",
Collection: "test_coll",
Description: "some description",
PipelinePayload: "[{ $match: { name: {{json .name}} }}]\n",
PipelineParams: tools.Parameters{
&tools.StringParameter{
CommonParameter: tools.CommonParameter{
Name: "name",
Type: "string",
Desc: "small description",
},
},
},
ReadOnly: true,
},
},
},
}
for _, tc := range tcs {
t.Run(tc.desc, func(t *testing.T) {
got := struct {
Tools server.ToolConfigs `yaml:"tools"`
}{}
// Parse contents
err := yaml.UnmarshalContext(ctx, testutils.FormatYaml(tc.in), &got)
if err != nil {
t.Fatalf("unable to unmarshal: %s", err)
}
if diff := cmp.Diff(tc.want, got.Tools); diff != "" {
t.Fatalf("incorrect parse: diff %v", diff)
}
})
}
}
func TestFailParseFromYamlMongoQuery(t *testing.T) {
ctx, err := testutils.ContextWithNewLogger()
if err != nil {
t.Fatalf("unexpected error: %s", err)
}
tcs := []struct {
desc string
in string
err string
}{
{
desc: "Invalid method",
in: `
tools:
example_tool:
kind: mongodb-aggregate
source: my-instance
description: some description
collection: test_coll
pipelinePayload: |
[{ $match: { name : {{json .name}} }}]
`,
err: `unable to parse tool "example_tool" as kind "mongodb-aggregate"`,
},
}
for _, tc := range tcs {
t.Run(tc.desc, func(t *testing.T) {
got := struct {
Tools server.ToolConfigs `yaml:"tools"`
}{}
// Parse contents
err := yaml.UnmarshalContext(ctx, testutils.FormatYaml(tc.in), &got)
if err == nil {
t.Fatalf("expect parsing to fail")
}
errStr := err.Error()
if !strings.Contains(errStr, tc.err) {
t.Fatalf("unexpected error string: got %q, want substring %q", errStr, tc.err)
}
})
}
}

View File

@@ -0,0 +1,167 @@
// Copyright 2025 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package mongodbinsertmany
import (
"context"
"errors"
"fmt"
"github.com/goccy/go-yaml"
"github.com/googleapis/genai-toolbox/internal/sources"
mongosrc "github.com/googleapis/genai-toolbox/internal/sources/mongodb"
"github.com/googleapis/genai-toolbox/internal/tools"
"go.mongodb.org/mongo-driver/bson"
"go.mongodb.org/mongo-driver/mongo"
"go.mongodb.org/mongo-driver/mongo/options"
)
const kind string = "mongodb-insert-many"
const paramDataKey = "data"
func init() {
if !tools.Register(kind, newConfig) {
panic(fmt.Sprintf("tool kind %q already registered", kind))
}
}
func newConfig(ctx context.Context, name string, decoder *yaml.Decoder) (tools.ToolConfig, error) {
actual := Config{Name: name}
if err := decoder.DecodeContext(ctx, &actual); err != nil {
return nil, err
}
return actual, nil
}
type Config struct {
Name string `yaml:"name" validate:"required"`
Kind string `yaml:"kind" validate:"required"`
Source string `yaml:"source" validate:"required"`
AuthRequired []string `yaml:"authRequired" validate:"required"`
Description string `yaml:"description" validate:"required"`
Database string `yaml:"database" validate:"required"`
Collection string `yaml:"collection" validate:"required"`
	Canonical       bool   `yaml:"canonical" validate:"required"` // required so the user must explicitly choose canonical vs. relaxed Extended JSON
}
// validate interface
var _ tools.ToolConfig = Config{}
func (cfg Config) ToolConfigKind() string {
return kind
}
func (cfg Config) Initialize(srcs map[string]sources.Source) (tools.Tool, error) {
// verify source exists
rawS, ok := srcs[cfg.Source]
if !ok {
return nil, fmt.Errorf("no source named %q configured", cfg.Source)
}
// verify the source is compatible
s, ok := rawS.(*mongosrc.Source)
if !ok {
return nil, fmt.Errorf("invalid source for %q tool: source kind must be `mongodb`", kind)
}
dataParam := tools.NewStringParameterWithRequired(paramDataKey, "the JSON payload to insert, should be a JSON array of documents", true)
allParameters := tools.Parameters{dataParam}
// Create Toolbox manifest
paramManifest := allParameters.Manifest()
if paramManifest == nil {
paramManifest = make([]tools.ParameterManifest, 0)
}
// Create MCP manifest
mcpManifest := tools.McpManifest{
Name: cfg.Name,
Description: cfg.Description,
InputSchema: allParameters.McpManifest(),
}
// finish tool setup
return Tool{
Name: cfg.Name,
Kind: kind,
AuthRequired: cfg.AuthRequired,
Collection: cfg.Collection,
Canonical: cfg.Canonical,
PayloadParams: allParameters,
database: s.Client.Database(cfg.Database),
manifest: tools.Manifest{Description: cfg.Description, Parameters: paramManifest, AuthRequired: cfg.AuthRequired},
mcpManifest: mcpManifest,
}, nil
}
// validate interface
var _ tools.Tool = Tool{}
type Tool struct {
Name string `yaml:"name"`
Kind string `yaml:"kind"`
AuthRequired []string `yaml:"authRequired"`
Description string `yaml:"description"`
Collection string `yaml:"collection"`
	Canonical     bool `yaml:"canonical" validation:"required"` // required so the user must explicitly choose canonical vs. relaxed Extended JSON
PayloadParams tools.Parameters
database *mongo.Database
manifest tools.Manifest
mcpManifest tools.McpManifest
}
func (t Tool) Invoke(ctx context.Context, params tools.ParamValues) (any, error) {
if len(params) == 0 {
return nil, errors.New("no input found")
}
paramsMap := params.AsMap()
var jsonData, ok = paramsMap[paramDataKey].(string)
if !ok {
return nil, errors.New("no input found")
}
var data = []any{}
err := bson.UnmarshalExtJSON([]byte(jsonData), t.Canonical, &data)
if err != nil {
return nil, err
}
res, err := t.database.Collection(t.Collection).InsertMany(ctx, data, options.InsertMany())
if err != nil {
return nil, err
}
return res.InsertedIDs, nil
}
func (t Tool) ParseParams(data map[string]any, claims map[string]map[string]any) (tools.ParamValues, error) {
return tools.ParseParams(t.PayloadParams, data, claims)
}
func (t Tool) Manifest() tools.Manifest {
return t.manifest
}
func (t Tool) McpManifest() tools.McpManifest {
return t.mcpManifest
}
func (t Tool) Authorized(verifiedAuthServices []string) bool {
return tools.IsAuthorized(t.AuthRequired, verifiedAuthServices)
}

View File

@@ -0,0 +1,123 @@
// Copyright 2025 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package mongodbinsertmany_test
import (
"strings"
"testing"
"github.com/googleapis/genai-toolbox/internal/tools/mongodb/mongodbinsertmany"
yaml "github.com/goccy/go-yaml"
"github.com/google/go-cmp/cmp"
"github.com/googleapis/genai-toolbox/internal/server"
"github.com/googleapis/genai-toolbox/internal/testutils"
)
func TestParseFromYamlMongoQuery(t *testing.T) {
ctx, err := testutils.ContextWithNewLogger()
if err != nil {
t.Fatalf("unexpected error: %s", err)
}
tcs := []struct {
desc string
in string
want server.ToolConfigs
}{
{
desc: "basic example",
in: `
tools:
example_tool:
kind: mongodb-insert-many
source: my-instance
description: some description
database: test_db
collection: test_coll
canonical: true
`,
want: server.ToolConfigs{
"example_tool": mongodbinsertmany.Config{
Name: "example_tool",
Kind: "mongodb-insert-many",
Source: "my-instance",
AuthRequired: []string{},
Database: "test_db",
Collection: "test_coll",
Description: "some description",
Canonical: true,
},
},
},
}
for _, tc := range tcs {
t.Run(tc.desc, func(t *testing.T) {
got := struct {
Tools server.ToolConfigs `yaml:"tools"`
}{}
// Parse contents
err := yaml.UnmarshalContext(ctx, testutils.FormatYaml(tc.in), &got)
if err != nil {
t.Fatalf("unable to unmarshal: %s", err)
}
if diff := cmp.Diff(tc.want, got.Tools); diff != "" {
t.Fatalf("incorrect parse: diff %v", diff)
}
})
}
}
func TestFailParseFromYamlMongoQuery(t *testing.T) {
ctx, err := testutils.ContextWithNewLogger()
if err != nil {
t.Fatalf("unexpected error: %s", err)
}
tcs := []struct {
desc string
in string
err string
}{
{
desc: "Invalid method",
in: `
tools:
example_tool:
kind: mongodb-insert-many
source: my-instance
description: some description
collection: test_coll
`,
err: `unable to parse tool "example_tool" as kind "mongodb-insert-many"`,
},
}
for _, tc := range tcs {
t.Run(tc.desc, func(t *testing.T) {
got := struct {
Tools server.ToolConfigs `yaml:"tools"`
}{}
// Parse contents
err := yaml.UnmarshalContext(ctx, testutils.FormatYaml(tc.in), &got)
if err == nil {
t.Fatalf("expect parsing to fail")
}
errStr := err.Error()
if !strings.Contains(errStr, tc.err) {
t.Fatalf("unexpected error string: got %q, want substring %q", errStr, tc.err)
}
})
}
}

View File

@@ -0,0 +1,166 @@
// Copyright 2025 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package mongodbinsertone
import (
"context"
"errors"
"fmt"
"github.com/goccy/go-yaml"
"github.com/googleapis/genai-toolbox/internal/sources"
mongosrc "github.com/googleapis/genai-toolbox/internal/sources/mongodb"
"github.com/googleapis/genai-toolbox/internal/tools"
"go.mongodb.org/mongo-driver/bson"
"go.mongodb.org/mongo-driver/mongo"
"go.mongodb.org/mongo-driver/mongo/options"
)
const kind string = "mongodb-insert-one"
const dataParamsKey = "data"
func init() {
if !tools.Register(kind, newConfig) {
panic(fmt.Sprintf("tool kind %q already registered", kind))
}
}
func newConfig(ctx context.Context, name string, decoder *yaml.Decoder) (tools.ToolConfig, error) {
actual := Config{Name: name}
if err := decoder.DecodeContext(ctx, &actual); err != nil {
return nil, err
}
return actual, nil
}
type Config struct {
Name string `yaml:"name" validate:"required"`
Kind string `yaml:"kind" validate:"required"`
Source string `yaml:"source" validate:"required"`
AuthRequired []string `yaml:"authRequired" validate:"required"`
Description string `yaml:"description" validate:"required"`
Database string `yaml:"database" validate:"required"`
Collection string `yaml:"collection" validate:"required"`
	Canonical    bool   `yaml:"canonical" validate:"required"` // required so the user must explicitly choose canonical vs. relaxed Extended JSON
}
// validate interface
var _ tools.ToolConfig = Config{}
func (cfg Config) ToolConfigKind() string {
return kind
}
func (cfg Config) Initialize(srcs map[string]sources.Source) (tools.Tool, error) {
// verify source exists
rawS, ok := srcs[cfg.Source]
if !ok {
return nil, fmt.Errorf("no source named %q configured", cfg.Source)
}
// verify the source is compatible
s, ok := rawS.(*mongosrc.Source)
if !ok {
return nil, fmt.Errorf("invalid source for %q tool: source kind must be `mongodb`", kind)
}
payloadParams := tools.NewStringParameterWithRequired(dataParamsKey, "the JSON payload to insert, should be a JSON object", true)
allParameters := tools.Parameters{payloadParams}
// Create Toolbox manifest
paramManifest := allParameters.Manifest()
if paramManifest == nil {
paramManifest = make([]tools.ParameterManifest, 0)
}
// Create MCP manifest
mcpManifest := tools.McpManifest{
Name: cfg.Name,
Description: cfg.Description,
InputSchema: allParameters.McpManifest(),
}
// finish tool setup
return Tool{
Name: cfg.Name,
Kind: kind,
AuthRequired: cfg.AuthRequired,
Collection: cfg.Collection,
Canonical: cfg.Canonical,
PayloadParams: allParameters,
database: s.Client.Database(cfg.Database),
manifest: tools.Manifest{Description: cfg.Description, Parameters: paramManifest, AuthRequired: cfg.AuthRequired},
mcpManifest: mcpManifest,
}, nil
}
// validate interface
var _ tools.Tool = Tool{}
type Tool struct {
Name string `yaml:"name"`
Kind string `yaml:"kind"`
AuthRequired []string `yaml:"authRequired"`
Description string `yaml:"description"`
Collection string `yaml:"collection"`
Canonical bool `yaml:"canonical" validation:"required"`
PayloadParams tools.Parameters `yaml:"payloadParams" validate:"required"`
database *mongo.Database
manifest tools.Manifest
mcpManifest tools.McpManifest
}
func (t Tool) Invoke(ctx context.Context, params tools.ParamValues) (any, error) {
if len(params) == 0 {
return nil, errors.New("no input found")
}
	// Use the first parameter value, which is expected to be the JSON string payload.
var jsonData, ok = params[0].Value.(string)
if !ok {
return nil, errors.New("no input found")
}
var data any
err := bson.UnmarshalExtJSON([]byte(jsonData), t.Canonical, &data)
if err != nil {
return nil, err
}
res, err := t.database.Collection(t.Collection).InsertOne(ctx, data, options.InsertOne())
if err != nil {
return nil, err
}
return res.InsertedID, nil
}
func (t Tool) ParseParams(data map[string]any, claims map[string]map[string]any) (tools.ParamValues, error) {
return tools.ParseParams(t.PayloadParams, data, claims)
}
func (t Tool) Manifest() tools.Manifest {
return t.manifest
}
func (t Tool) McpManifest() tools.McpManifest {
return t.mcpManifest
}
func (t Tool) Authorized(verifiedAuthServices []string) bool {
return tools.IsAuthorized(t.AuthRequired, verifiedAuthServices)
}

View File

@@ -0,0 +1,124 @@
// Copyright 2025 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package mongodbinsertone_test
import (
"strings"
"testing"
"github.com/googleapis/genai-toolbox/internal/tools/mongodb/mongodbinsertone"
yaml "github.com/goccy/go-yaml"
"github.com/google/go-cmp/cmp"
"github.com/googleapis/genai-toolbox/internal/server"
"github.com/googleapis/genai-toolbox/internal/testutils"
)
func TestParseFromYamlMongoQuery(t *testing.T) {
ctx, err := testutils.ContextWithNewLogger()
if err != nil {
t.Fatalf("unexpected error: %s", err)
}
tcs := []struct {
desc string
in string
want server.ToolConfigs
}{
{
desc: "basic example",
in: `
tools:
example_tool:
kind: mongodb-insert-one
source: my-instance
description: some description
database: test_db
collection: test_coll
canonical: true
`,
want: server.ToolConfigs{
"example_tool": mongodbinsertone.Config{
Name: "example_tool",
Kind: "mongodb-insert-one",
Source: "my-instance",
AuthRequired: []string{},
Database: "test_db",
Collection: "test_coll",
Canonical: true,
Description: "some description",
},
},
},
}
for _, tc := range tcs {
t.Run(tc.desc, func(t *testing.T) {
got := struct {
Tools server.ToolConfigs `yaml:"tools"`
}{}
// Parse contents
err := yaml.UnmarshalContext(ctx, testutils.FormatYaml(tc.in), &got)
if err != nil {
t.Fatalf("unable to unmarshal: %s", err)
}
if diff := cmp.Diff(tc.want, got.Tools); diff != "" {
t.Fatalf("incorrect parse: diff %v", diff)
}
})
}
}
func TestFailParseFromYamlMongoQuery(t *testing.T) {
ctx, err := testutils.ContextWithNewLogger()
if err != nil {
t.Fatalf("unexpected error: %s", err)
}
tcs := []struct {
desc string
in string
err string
}{
{
desc: "Invalid method",
in: `
tools:
example_tool:
kind: mongodb-insert-one
source: my-instance
description: some description
collection: test_coll
canonical: true
`,
err: `unable to parse tool "example_tool" as kind "mongodb-insert-one"`,
},
}
for _, tc := range tcs {
t.Run(tc.desc, func(t *testing.T) {
got := struct {
Tools server.ToolConfigs `yaml:"tools"`
}{}
// Parse contents
err := yaml.UnmarshalContext(ctx, testutils.FormatYaml(tc.in), &got)
if err == nil {
t.Fatalf("expect parsing to fail")
}
errStr := err.Error()
if !strings.Contains(errStr, tc.err) {
t.Fatalf("unexpected error string: got %q, want substring %q", errStr, tc.err)
}
})
}
}

View File

@@ -55,7 +55,7 @@ type Config struct {
FilterParams tools.Parameters `yaml:"filterParams" validate:"required"`
UpdatePayload string `yaml:"updatePayload" validate:"required"`
UpdateParams tools.Parameters `yaml:"updateParams" validate:"required"`
	Canonical       bool             `yaml:"canonical" validate:"required"`
Upsert bool `yaml:"upsert"`
}
@@ -135,7 +135,7 @@ type Tool struct {
UpdatePayload string `yaml:"updatePayload" validate:"required"`
UpdateParams tools.Parameters `yaml:"updateParams" validate:"required"`
AllParams tools.Parameters `yaml:"allParams"`
	Canonical     bool             `yaml:"canonical" validation:"required"`
Upsert bool `yaml:"upsert"`
database *mongo.Database

View File

@@ -56,7 +56,7 @@ type Config struct {
UpdatePayload string `yaml:"updatePayload" validate:"required"`
UpdateParams tools.Parameters `yaml:"updateParams" validate:"required"`
	Canonical bool `yaml:"canonical" validate:"required"`
Upsert bool `yaml:"upsert"`
}

View File

@@ -0,0 +1,204 @@
// Copyright 2025 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
/*
Package cache provides a simple, thread-safe, in-memory key-value store.
It features item expiration and an optional background process (janitor) that
periodically removes expired items.
*/
package cache
import (
"sync"
"time"
)
const (
// DefaultJanitorInterval is the default interval at which the janitor
// runs to clean up expired cache items.
DefaultJanitorInterval = 1 * time.Minute
// DefaultExpiration is the default time-to-live for a cache item.
// Note: This constant is defined but not used in the current implementation,
// as expiration is set on a per-item basis.
DefaultExpiration = 60
)
// CacheItem represents a value stored in the cache, along with its expiration time.
type CacheItem struct {
Value any // The actual value being stored.
Expiration int64 // The time when the item expires, as a Unix nano timestamp. 0 means no expiration.
}
// isExpired checks if the cache item has passed its expiration time.
// It returns true if the item is expired, and false otherwise.
func (item CacheItem) isExpired() bool {
// If Expiration is 0, the item is considered to never expire.
if item.Expiration == 0 {
return false
}
return time.Now().UnixNano() > item.Expiration
}
// Cache is a thread-safe, in-memory key-value store with self-cleaning capabilities.
type Cache struct {
items map[string]CacheItem // The underlying map that stores the cache items.
mu sync.RWMutex // A read/write mutex to ensure thread safety for concurrent access.
stop chan struct{} // A channel used to signal the janitor goroutine to stop.
}
// NewCache creates and returns a new Cache instance.
// The janitor for cleaning up expired items is not started by default.
// Use the WithJanitor method to start the cleanup process.
//
// Example:
//
// c := cache.NewCache()
// c.Set("myKey", "myValue", 5*time.Minute)
func NewCache() *Cache {
return &Cache{
items: make(map[string]CacheItem),
}
}
// WithJanitor starts a background goroutine (janitor) that periodically cleans up
// expired items from the cache. If a janitor is already running, it will be
// stopped and a new one will be started with the specified interval.
//
// The interval parameter defines how often the janitor should run. If a non-positive
// interval is provided, it defaults to DefaultJanitorInterval (1 minute).
//
// It returns a pointer to the Cache to allow for method chaining.
//
// Example:
//
// // Create a cache that cleans itself every 10 minutes.
// c := cache.NewCache().WithJanitor(10 * time.Minute)
// defer c.Stop() // It's important to stop the janitor when the cache is no longer needed.
func (c *Cache) WithJanitor(interval time.Duration) *Cache {
c.mu.Lock()
defer c.mu.Unlock()
if c.stop != nil {
// If a janitor is already running, stop it before starting a new one.
close(c.stop)
}
c.stop = make(chan struct{})
// Use the default interval if an invalid one is provided.
if interval <= 0 {
interval = DefaultJanitorInterval
}
// Start the janitor in a new goroutine.
go c.janitor(interval, c.stop)
return c
}
// Get retrieves an item from the cache by its key.
// It returns the item's value and a boolean. The boolean is true if the key
// was found and the item has not expired. Otherwise, it is false.
//
// Example:
//
// v, found := c.Get("myKey")
// if found {
// fmt.Printf("Found value: %v\n", v)
// } else {
// fmt.Println("Key not found or expired.")
// }
func (c *Cache) Get(key string) (any, bool) {
c.mu.RLock()
defer c.mu.RUnlock()
item, found := c.items[key]
// Return false if the item is not found or if it is found but has expired.
if !found || item.isExpired() {
return nil, false
}
return item.Value, true
}
// Set adds an item to the cache, replacing any existing item with the same key.
//
// The `ttl` (time-to-live) parameter specifies how long the item should remain
// in the cache. If `ttl` is positive, the item will expire after that duration.
// If `ttl` is zero or negative, the item will never expire.
//
// Example:
//
// // Add a key that expires in 5 minutes.
// c.Set("sessionToken", "xyz123", 5*time.Minute)
//
// // Add a key that never expires.
// c.Set("appConfig", "configValue", 0)
func (c *Cache) Set(key string, value any, ttl time.Duration) {
c.mu.Lock()
defer c.mu.Unlock()
var expiration int64
// Calculate the expiration time only if ttl is positive.
if ttl > 0 {
expiration = time.Now().Add(ttl).UnixNano()
}
c.items[key] = CacheItem{
Value: value,
Expiration: expiration,
}
}
// Stop terminates the background janitor goroutine.
// It is safe to call Stop even if the janitor was never started or has already
// been stopped. This is useful for cleaning up resources.
func (c *Cache) Stop() {
c.mu.Lock()
defer c.mu.Unlock()
if c.stop != nil {
close(c.stop)
c.stop = nil
}
}
// janitor is the background cleanup worker. It runs in a separate goroutine.
// It uses a time.Ticker to periodically trigger the deletion of expired items.
func (c *Cache) janitor(interval time.Duration, stopCh chan struct{}) {
ticker := time.NewTicker(interval)
defer ticker.Stop()
for {
select {
case <-ticker.C:
// Time to clean up expired items.
c.deleteExpired()
case <-stopCh:
// Stop signal received, exit the goroutine.
return
}
}
}
// deleteExpired scans the cache and removes all items that have expired.
// This function acquires a write lock on the cache to ensure safe mutation.
func (c *Cache) deleteExpired() {
c.mu.Lock()
defer c.mu.Unlock()
for k, v := range c.items {
if v.isExpired() {
delete(c.items, k)
}
}
}

View File

@@ -0,0 +1,170 @@
// Copyright 2025 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package cache
import (
"sync"
"testing"
"time"
)
// TestCache_SetAndGet verifies the basic functionality of setting a value
// and immediately retrieving it.
func TestCache_SetAndGet(t *testing.T) {
cache := NewCache()
defer cache.Stop()
key := "testKey"
value := "testValue"
cache.Set(key, value, 1*time.Minute)
retrievedValue, found := cache.Get(key)
if !found {
t.Errorf("Expected to find key %q, but it was not found", key)
}
if retrievedValue != value {
t.Errorf("Expected value %q, but got %q", value, retrievedValue)
}
}
// TestCache_GetExpired tests that an item is not retrievable after it has expired.
func TestCache_GetExpired(t *testing.T) {
cache := NewCache()
defer cache.Stop()
key := "expiredKey"
value := "expiredValue"
// Set an item with a very short TTL.
cache.Set(key, value, 1*time.Millisecond)
time.Sleep(2 * time.Millisecond) // Wait for the item to expire.
// Attempt to get the expired item.
_, found := cache.Get(key)
if found {
t.Errorf("Expected key %q to be expired, but it was found", key)
}
}
// TestCache_SetNoExpiration ensures that an item with a TTL of 0 or less
// does not expire.
func TestCache_SetNoExpiration(t *testing.T) {
cache := NewCache()
defer cache.Stop()
key := "noExpireKey"
value := "noExpireValue"
cache.Set(key, value, 0) // Setting with 0 should mean no expiration.
time.Sleep(5 * time.Millisecond)
retrievedValue, found := cache.Get(key)
if !found {
t.Errorf("Expected to find key %q, but it was not found", key)
}
if retrievedValue != value {
t.Errorf("Expected value %q, but got %q", value, retrievedValue)
}
}
// TestCache_Janitor verifies that the janitor goroutine automatically removes
// expired items from the cache.
func TestCache_Janitor(t *testing.T) {
// Initialize cache with a very short janitor interval for quick testing.
cache := NewCache().WithJanitor(10 * time.Millisecond)
defer cache.Stop()
expiredKey := "expired"
activeKey := "active"
// Set one item that will expire and one that will not.
cache.Set(expiredKey, "value", 1*time.Millisecond)
cache.Set(activeKey, "value", 1*time.Hour)
// Wait longer than the janitor interval to ensure it has a chance to run.
time.Sleep(20 * time.Millisecond)
// Check that the expired key has been removed.
_, found := cache.Get(expiredKey)
if found {
t.Errorf("Expected janitor to clean up expired key %q, but it was found", expiredKey)
}
// Check that the active key is still present.
_, found = cache.Get(activeKey)
if !found {
t.Errorf("Expected active key %q to be present, but it was not found", activeKey)
}
}
// TestCache_Stop ensures that calling the Stop method does not cause a panic,
// regardless of whether the janitor is running or not. It also tests idempotency.
func TestCache_Stop(t *testing.T) {
t.Run("Stop without janitor", func(t *testing.T) {
cache := NewCache()
// Test that calling Stop multiple times on a cache without a janitor is safe.
cache.Stop()
cache.Stop()
})
t.Run("Stop with janitor", func(t *testing.T) {
cache := NewCache().WithJanitor(1 * time.Minute)
// Test that calling Stop multiple times on a cache with a janitor is safe.
cache.Stop()
cache.Stop()
})
}
// TestCache_Concurrent performs a stress test on the cache with concurrent
// reads and writes to check for race conditions.
func TestCache_Concurrent(t *testing.T) {
cache := NewCache().WithJanitor(100 * time.Millisecond)
defer cache.Stop()
var wg sync.WaitGroup
numGoroutines := 100
numOperations := 1000
// Start concurrent writer goroutines.
for i := 0; i < numGoroutines; i++ {
wg.Add(1)
go func(g int) {
defer wg.Done()
for j := 0; j < numOperations; j++ {
key := strconv.Itoa(g*numOperations + j) // decimal keys avoid rune-range collisions
value := g*numOperations + j
cache.Set(key, value, 10*time.Second)
}
}(i)
}
// Start concurrent reader goroutines.
for i := 0; i < numGoroutines; i++ {
wg.Add(1)
go func(g int) {
defer wg.Done()
for j := 0; j < numOperations; j++ {
key := strconv.Itoa(g*numOperations + j)
cache.Get(key) // We don't check the result, just that access is safe.
}
}(i)
}
// Wait for all goroutines to complete. If a race condition exists, the Go
// race detector (`go test -race`) will likely catch it.
wg.Wait()
}

View File

@@ -0,0 +1,291 @@
// Copyright 2025 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Package helpers provides utility functions for transforming and processing Neo4j
// schema data. It includes functions for converting raw query results from both
// APOC and native Cypher queries into a standardized, structured format.
package helpers
import (
"fmt"
"sort"
"github.com/goccy/go-yaml"
"github.com/googleapis/genai-toolbox/internal/tools/neo4j/neo4jschema/types"
)
// ConvertToStringSlice converts a slice of any type to a slice of strings.
// It uses fmt.Sprintf to perform the conversion for each element.
// Example:
//
// input: []any{"user", 123, true}
// output: []string{"user", "123", "true"}
func ConvertToStringSlice(slice []any) []string {
result := make([]string, len(slice))
for i, v := range slice {
result[i] = fmt.Sprintf("%v", v)
}
return result
}
// GetStringValue safely converts any value to its string representation.
// If the input value is nil, it returns an empty string.
func GetStringValue(val any) string {
if val == nil {
return ""
}
return fmt.Sprintf("%v", val)
}
// MapToAPOCSchema converts a raw map from a Cypher query into a structured
// APOCSchemaResult. This is a workaround for database drivers that may return
// complex nested structures as `map[string]any` instead of unmarshalling
// directly into a struct. It achieves this by marshalling the map to YAML and
// then unmarshalling into the target struct.
func MapToAPOCSchema(schemaMap map[string]any) (*types.APOCSchemaResult, error) {
schemaBytes, err := yaml.Marshal(schemaMap)
if err != nil {
return nil, fmt.Errorf("failed to marshal schema map: %w", err)
}
var entities map[string]types.APOCEntity
if err = yaml.Unmarshal(schemaBytes, &entities); err != nil {
return nil, fmt.Errorf("failed to unmarshal schema map into entities: %w", err)
}
return &types.APOCSchemaResult{Value: entities}, nil
}
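// Illustrative round-trip (the label and count are hypothetical; the shape
// mirrors the "simple node schema" case in the package tests):
//
//  raw := map[string]any{
//      "Person": map[string]any{"type": "node", "count": int64(150)},
//  }
//  res, err := MapToAPOCSchema(raw)
//  if err != nil { /* handle error */ }
//  _ = res.Value["Person"].Count // 150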
// ProcessAPOCSchema transforms the nested result from the `apoc.meta.schema()`
// procedure into flat lists of node labels and relationships, along with
// aggregated database statistics. It iterates through entities, processes nodes,
// and extracts outgoing relationship information nested within those nodes.
func ProcessAPOCSchema(apocSchema *types.APOCSchemaResult) ([]types.NodeLabel, []types.Relationship, *types.Statistics) {
var nodeLabels []types.NodeLabel
relMap := make(map[string]*types.Relationship)
stats := &types.Statistics{
NodesByLabel: make(map[string]int64),
RelationshipsByType: make(map[string]int64),
PropertiesByLabel: make(map[string]int64),
PropertiesByRelType: make(map[string]int64),
}
for name, entity := range apocSchema.Value {
// We only process top-level entities of type "node". Relationship info is
// derived from the "relationships" field within each node entity.
if entity.Type != "node" {
continue
}
nodeLabel := types.NodeLabel{
Name: name,
Count: entity.Count,
Properties: extractAPOCProperties(entity.Properties),
}
nodeLabels = append(nodeLabels, nodeLabel)
// Aggregate statistics for the node.
stats.NodesByLabel[name] = entity.Count
stats.TotalNodes += entity.Count
propCount := int64(len(nodeLabel.Properties))
stats.PropertiesByLabel[name] = propCount
stats.TotalProperties += propCount * entity.Count
// Extract relationship information from the node.
for relName, relInfo := range entity.Relationships {
// Only process outgoing relationships to avoid double-counting.
if relInfo.Direction != "out" {
continue
}
rel, exists := relMap[relName]
if !exists {
rel = &types.Relationship{
Type: relName,
Properties: extractAPOCProperties(relInfo.Properties),
}
if len(relInfo.Labels) > 0 {
rel.EndNode = relInfo.Labels[0]
}
rel.StartNode = name
relMap[relName] = rel
}
rel.Count += relInfo.Count
}
}
// Consolidate the relationships from the map into a slice and update stats.
relationships := make([]types.Relationship, 0, len(relMap))
for _, rel := range relMap {
relationships = append(relationships, *rel)
stats.RelationshipsByType[rel.Type] = rel.Count
stats.TotalRelationships += rel.Count
propCount := int64(len(rel.Properties))
stats.PropertiesByRelType[rel.Type] = propCount
stats.TotalProperties += propCount * rel.Count
}
sortAndClean(nodeLabels, relationships, stats)
// Set empty maps and lists to nil for cleaner output.
if len(nodeLabels) == 0 {
nodeLabels = nil
}
if len(relationships) == 0 {
relationships = nil
}
return nodeLabels, relationships, stats
}
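// Worked example of the aggregation above (mirroring the "simple node only"
// test case): a "Person" label with Count=100 and two properties yields
// TotalNodes=100, PropertiesByLabel["Person"]=2, and
// TotalProperties = 2 * 100 = 200.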
// ProcessNonAPOCSchema serves as an alternative to ProcessAPOCSchema for environments
// where APOC procedures are not available. It converts schema data gathered from
// multiple separate, native Cypher queries (providing node counts, property maps, etc.)
// into the same standardized, structured format.
func ProcessNonAPOCSchema(
nodeCounts map[string]int64,
nodePropsMap map[string]map[string]map[string]bool,
relCounts map[string]int64,
relPropsMap map[string]map[string]map[string]bool,
relConnectivity map[string]types.RelConnectivityInfo,
) ([]types.NodeLabel, []types.Relationship, *types.Statistics) {
stats := &types.Statistics{
NodesByLabel: make(map[string]int64),
RelationshipsByType: make(map[string]int64),
PropertiesByLabel: make(map[string]int64),
PropertiesByRelType: make(map[string]int64),
}
// Process node information.
nodeLabels := make([]types.NodeLabel, 0, len(nodeCounts))
for label, count := range nodeCounts {
properties := make([]types.PropertyInfo, 0)
if props, ok := nodePropsMap[label]; ok {
for key, typeSet := range props {
typeList := make([]string, 0, len(typeSet))
for tp := range typeSet {
typeList = append(typeList, tp)
}
sort.Strings(typeList)
properties = append(properties, types.PropertyInfo{Name: key, Types: typeList})
}
}
sort.Slice(properties, func(i, j int) bool { return properties[i].Name < properties[j].Name })
nodeLabels = append(nodeLabels, types.NodeLabel{Name: label, Count: count, Properties: properties})
// Aggregate node statistics.
stats.NodesByLabel[label] = count
stats.TotalNodes += count
propCount := int64(len(properties))
stats.PropertiesByLabel[label] = propCount
stats.TotalProperties += propCount * count
}
// Process relationship information.
relationships := make([]types.Relationship, 0, len(relCounts))
for relType, count := range relCounts {
properties := make([]types.PropertyInfo, 0)
if props, ok := relPropsMap[relType]; ok {
for key, typeSet := range props {
typeList := make([]string, 0, len(typeSet))
for tp := range typeSet {
typeList = append(typeList, tp)
}
sort.Strings(typeList)
properties = append(properties, types.PropertyInfo{Name: key, Types: typeList})
}
}
sort.Slice(properties, func(i, j int) bool { return properties[i].Name < properties[j].Name })
conn := relConnectivity[relType]
relationships = append(relationships, types.Relationship{
Type: relType,
Count: count,
StartNode: conn.StartNode,
EndNode: conn.EndNode,
Properties: properties,
})
// Aggregate relationship statistics.
stats.RelationshipsByType[relType] = count
stats.TotalRelationships += count
propCount := int64(len(properties))
stats.PropertiesByRelType[relType] = propCount
stats.TotalProperties += propCount * count
}
sortAndClean(nodeLabels, relationships, stats)
// Set empty maps and lists to nil for cleaner output.
if len(nodeLabels) == 0 {
nodeLabels = nil
}
if len(relationships) == 0 {
relationships = nil
}
return nodeLabels, relationships, stats
}
// extractAPOCProperties is a helper that converts a map of APOC property
// information into a slice of standardized PropertyInfo structs. The resulting
// slice is sorted by property name for consistent ordering.
func extractAPOCProperties(props map[string]types.APOCProperty) []types.PropertyInfo {
properties := make([]types.PropertyInfo, 0, len(props))
for name, info := range props {
properties = append(properties, types.PropertyInfo{
Name: name,
Types: []string{info.Type},
Indexed: info.Indexed,
Unique: info.Unique,
Mandatory: info.Existence,
})
}
sort.Slice(properties, func(i, j int) bool {
return properties[i].Name < properties[j].Name
})
return properties
}
// sortAndClean performs final processing on the schema data. It sorts node and
// relationship slices for consistent output, primarily by count (descending) and
// secondarily by name/type. It also sets any empty maps in the statistics
// struct to nil, which can simplify downstream serialization (e.g., omitting
// empty fields in JSON).
func sortAndClean(nodeLabels []types.NodeLabel, relationships []types.Relationship, stats *types.Statistics) {
// Sort nodes by count (desc) then name (asc).
sort.Slice(nodeLabels, func(i, j int) bool {
if nodeLabels[i].Count != nodeLabels[j].Count {
return nodeLabels[i].Count > nodeLabels[j].Count
}
return nodeLabels[i].Name < nodeLabels[j].Name
})
// Sort relationships by count (desc) then type (asc).
sort.Slice(relationships, func(i, j int) bool {
if relationships[i].Count != relationships[j].Count {
return relationships[i].Count > relationships[j].Count
}
return relationships[i].Type < relationships[j].Type
})
// Nil out empty maps for cleaner output.
if len(stats.NodesByLabel) == 0 {
stats.NodesByLabel = nil
}
if len(stats.RelationshipsByType) == 0 {
stats.RelationshipsByType = nil
}
if len(stats.PropertiesByLabel) == 0 {
stats.PropertiesByLabel = nil
}
if len(stats.PropertiesByRelType) == 0 {
stats.PropertiesByRelType = nil
}
}
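// Ordering sketch (matching the "nodes and relationships" test case): given
// Post with Count=200 and Person with Count=100, sortAndClean orders
// nodeLabels as [Post, Person]; labels with equal counts fall back to
// ascending name order.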

View File

@@ -0,0 +1,384 @@
// Copyright 2025 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package helpers
import (
"testing"
"github.com/google/go-cmp/cmp"
"github.com/googleapis/genai-toolbox/internal/tools/neo4j/neo4jschema/types"
)
func TestHelperFunctions(t *testing.T) {
t.Run("ConvertToStringSlice", func(t *testing.T) {
tests := []struct {
name string
input []any
want []string
}{
{
name: "empty slice",
input: []any{},
want: []string{},
},
{
name: "string values",
input: []any{"a", "b", "c"},
want: []string{"a", "b", "c"},
},
{
name: "mixed types",
input: []any{"string", 123, true, 45.67},
want: []string{"string", "123", "true", "45.67"},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got := ConvertToStringSlice(tt.input)
if diff := cmp.Diff(tt.want, got); diff != "" {
t.Errorf("ConvertToStringSlice() mismatch (-want +got):\n%s", diff)
}
})
}
})
t.Run("GetStringValue", func(t *testing.T) {
tests := []struct {
name string
input any
want string
}{
{
name: "nil value",
input: nil,
want: "",
},
{
name: "string value",
input: "test",
want: "test",
},
{
name: "int value",
input: 42,
want: "42",
},
{
name: "bool value",
input: true,
want: "true",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got := GetStringValue(tt.input)
if got != tt.want {
t.Errorf("GetStringValue() got %q, want %q", got, tt.want)
}
})
}
})
}
func TestMapToAPOCSchema(t *testing.T) {
tests := []struct {
name string
input map[string]any
want *types.APOCSchemaResult
wantErr bool
}{
{
name: "simple node schema",
input: map[string]any{
"Person": map[string]any{
"type": "node",
"count": int64(150),
"properties": map[string]any{
"name": map[string]any{
"type": "STRING",
"unique": false,
"indexed": true,
"existence": false,
},
},
},
},
want: &types.APOCSchemaResult{
Value: map[string]types.APOCEntity{
"Person": {
Type: "node",
Count: 150,
Properties: map[string]types.APOCProperty{
"name": {
Type: "STRING",
Unique: false,
Indexed: true,
Existence: false,
},
},
},
},
},
wantErr: false,
},
{
name: "empty input",
input: map[string]any{},
want: &types.APOCSchemaResult{Value: map[string]types.APOCEntity{}},
wantErr: false,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got, err := MapToAPOCSchema(tt.input)
if (err != nil) != tt.wantErr {
t.Errorf("MapToAPOCSchema() error = %v, wantErr %v", err, tt.wantErr)
return
}
if diff := cmp.Diff(tt.want, got); diff != "" {
t.Errorf("MapToAPOCSchema() mismatch (-want +got):\n%s", diff)
}
})
}
}
func TestProcessAPOCSchema(t *testing.T) {
tests := []struct {
name string
input *types.APOCSchemaResult
wantNodes []types.NodeLabel
wantRels []types.Relationship
wantStats *types.Statistics
statsAreEmpty bool
}{
{
name: "empty schema",
input: &types.APOCSchemaResult{
Value: map[string]types.APOCEntity{},
},
wantNodes: nil,
wantRels: nil,
statsAreEmpty: true,
},
{
name: "simple node only",
input: &types.APOCSchemaResult{
Value: map[string]types.APOCEntity{
"Person": {
Type: "node",
Count: 100,
Properties: map[string]types.APOCProperty{
"name": {Type: "STRING", Indexed: true},
"age": {Type: "INTEGER"},
},
},
},
},
wantNodes: []types.NodeLabel{
{
Name: "Person",
Count: 100,
Properties: []types.PropertyInfo{
{Name: "age", Types: []string{"INTEGER"}},
{Name: "name", Types: []string{"STRING"}, Indexed: true},
},
},
},
wantRels: nil,
wantStats: &types.Statistics{
NodesByLabel: map[string]int64{"Person": 100},
PropertiesByLabel: map[string]int64{"Person": 2},
TotalNodes: 100,
TotalProperties: 200,
},
},
{
name: "nodes and relationships",
input: &types.APOCSchemaResult{
Value: map[string]types.APOCEntity{
"Person": {
Type: "node",
Count: 100,
Properties: map[string]types.APOCProperty{
"name": {Type: "STRING", Unique: true, Indexed: true, Existence: true},
},
Relationships: map[string]types.APOCRelationshipInfo{
"KNOWS": {
Direction: "out",
Count: 50,
Labels: []string{"Person"},
Properties: map[string]types.APOCProperty{
"since": {Type: "INTEGER"},
},
},
},
},
"Post": {
Type: "node",
Count: 200,
Properties: map[string]types.APOCProperty{"content": {Type: "STRING"}},
},
"FOLLOWS": {Type: "relationship", Count: 80},
},
},
wantNodes: []types.NodeLabel{
{
Name: "Post",
Count: 200,
Properties: []types.PropertyInfo{
{Name: "content", Types: []string{"STRING"}},
},
},
{
Name: "Person",
Count: 100,
Properties: []types.PropertyInfo{
{Name: "name", Types: []string{"STRING"}, Unique: true, Indexed: true, Mandatory: true},
},
},
},
wantRels: []types.Relationship{
{
Type: "KNOWS",
StartNode: "Person",
EndNode: "Person",
Count: 50,
Properties: []types.PropertyInfo{
{Name: "since", Types: []string{"INTEGER"}},
},
},
},
wantStats: &types.Statistics{
NodesByLabel: map[string]int64{"Person": 100, "Post": 200},
RelationshipsByType: map[string]int64{"KNOWS": 50},
PropertiesByLabel: map[string]int64{"Person": 1, "Post": 1},
PropertiesByRelType: map[string]int64{"KNOWS": 1},
TotalNodes: 300,
TotalRelationships: 50,
TotalProperties: 350, // (100*1 + 200*1) for nodes + (50*1) for rels
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
gotNodes, gotRels, gotStats := ProcessAPOCSchema(tt.input)
if diff := cmp.Diff(tt.wantNodes, gotNodes); diff != "" {
t.Errorf("ProcessAPOCSchema() node labels mismatch (-want +got):\n%s", diff)
}
if diff := cmp.Diff(tt.wantRels, gotRels); diff != "" {
t.Errorf("ProcessAPOCSchema() relationships mismatch (-want +got):\n%s", diff)
}
if tt.statsAreEmpty {
tt.wantStats = &types.Statistics{}
}
if diff := cmp.Diff(tt.wantStats, gotStats); diff != "" {
t.Errorf("ProcessAPOCSchema() statistics mismatch (-want +got):\n%s", diff)
}
})
}
}
func TestProcessNonAPOCSchema(t *testing.T) {
t.Run("full schema processing", func(t *testing.T) {
nodeCounts := map[string]int64{"Person": 10, "City": 5}
nodePropsMap := map[string]map[string]map[string]bool{
"Person": {"name": {"STRING": true}, "age": {"INTEGER": true}},
"City": {"name": {"STRING": true, "TEXT": true}},
}
relCounts := map[string]int64{"LIVES_IN": 8}
relPropsMap := map[string]map[string]map[string]bool{
"LIVES_IN": {"since": {"DATE": true}},
}
relConnectivity := map[string]types.RelConnectivityInfo{
"LIVES_IN": {StartNode: "Person", EndNode: "City", Count: 8},
}
wantNodes := []types.NodeLabel{
{
Name: "Person",
Count: 10,
Properties: []types.PropertyInfo{
{Name: "age", Types: []string{"INTEGER"}},
{Name: "name", Types: []string{"STRING"}},
},
},
{
Name: "City",
Count: 5,
Properties: []types.PropertyInfo{
{Name: "name", Types: []string{"STRING", "TEXT"}},
},
},
}
wantRels := []types.Relationship{
{
Type: "LIVES_IN",
Count: 8,
StartNode: "Person",
EndNode: "City",
Properties: []types.PropertyInfo{
{Name: "since", Types: []string{"DATE"}},
},
},
}
wantStats := &types.Statistics{
TotalNodes: 15,
TotalRelationships: 8,
TotalProperties: 33, // (10*2 + 5*1) for nodes + (8*1) for rels
NodesByLabel: map[string]int64{"Person": 10, "City": 5},
RelationshipsByType: map[string]int64{"LIVES_IN": 8},
PropertiesByLabel: map[string]int64{"Person": 2, "City": 1},
PropertiesByRelType: map[string]int64{"LIVES_IN": 1},
}
gotNodes, gotRels, gotStats := ProcessNonAPOCSchema(nodeCounts, nodePropsMap, relCounts, relPropsMap, relConnectivity)
if diff := cmp.Diff(wantNodes, gotNodes); diff != "" {
t.Errorf("ProcessNonAPOCSchema() nodes mismatch (-want +got):\n%s", diff)
}
if diff := cmp.Diff(wantRels, gotRels); diff != "" {
t.Errorf("ProcessNonAPOCSchema() relationships mismatch (-want +got):\n%s", diff)
}
if diff := cmp.Diff(wantStats, gotStats); diff != "" {
t.Errorf("ProcessNonAPOCSchema() stats mismatch (-want +got):\n%s", diff)
}
})
t.Run("empty schema", func(t *testing.T) {
gotNodes, gotRels, gotStats := ProcessNonAPOCSchema(
map[string]int64{},
map[string]map[string]map[string]bool{},
map[string]int64{},
map[string]map[string]map[string]bool{},
map[string]types.RelConnectivityInfo{},
)
if len(gotNodes) != 0 {
t.Errorf("expected 0 nodes, got %d", len(gotNodes))
}
if len(gotRels) != 0 {
t.Errorf("expected 0 relationships, got %d", len(gotRels))
}
if diff := cmp.Diff(&types.Statistics{}, gotStats); diff != "" {
t.Errorf("ProcessNonAPOCSchema() stats mismatch (-want +got):\n%s", diff)
}
})
}

View File

@@ -0,0 +1,712 @@
// Copyright 2025 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package neo4jschema
import (
"context"
"fmt"
"sync"
"time"
"github.com/goccy/go-yaml"
"github.com/googleapis/genai-toolbox/internal/sources"
neo4jsc "github.com/googleapis/genai-toolbox/internal/sources/neo4j"
"github.com/googleapis/genai-toolbox/internal/tools"
"github.com/googleapis/genai-toolbox/internal/tools/neo4j/neo4jschema/cache"
"github.com/googleapis/genai-toolbox/internal/tools/neo4j/neo4jschema/helpers"
"github.com/googleapis/genai-toolbox/internal/tools/neo4j/neo4jschema/types"
"github.com/neo4j/neo4j-go-driver/v5/neo4j"
)
// kind defines the unique identifier for this tool.
const kind string = "neo4j-schema"
// init registers the tool with the application's tool registry when the package is initialized.
func init() {
if !tools.Register(kind, newConfig) {
panic(fmt.Sprintf("tool kind %q already registered", kind))
}
}
// newConfig decodes a YAML configuration into a Config struct.
// This function is called by the tool registry to create a new configuration object.
func newConfig(ctx context.Context, name string, decoder *yaml.Decoder) (tools.ToolConfig, error) {
actual := Config{Name: name}
if err := decoder.DecodeContext(ctx, &actual); err != nil {
return nil, err
}
return actual, nil
}
// compatibleSource defines the interface a data source must implement to be used by this tool.
// It ensures that the source can provide a Neo4j driver and database name.
type compatibleSource interface {
Neo4jDriver() neo4j.DriverWithContext
Neo4jDatabase() string
}
// Statically verify that our compatible source implementation is valid.
var _ compatibleSource = &neo4jsc.Source{}
// compatibleSources lists the kinds of sources that are compatible with this tool.
var compatibleSources = [...]string{neo4jsc.SourceKind}
// Config holds the configuration settings for the Neo4j schema tool.
// These settings are typically read from a YAML file.
type Config struct {
Name string `yaml:"name" validate:"required"`
Kind string `yaml:"kind" validate:"required"`
Source string `yaml:"source" validate:"required"`
Description string `yaml:"description" validate:"required"`
AuthRequired []string `yaml:"authRequired"`
CacheExpireMinutes *int `yaml:"cacheExpireMinutes,omitempty"` // Cache expiration time in minutes.
}
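// Example YAML (illustrative; mirrors the parser test in this change; the
// source name "my-neo4j-instance" is a placeholder):
//
//  tools:
//    example_tool:
//      kind: neo4j-schema
//      source: my-neo4j-instance
//      description: some tool description
//      cacheExpireMinutes: 30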
// Statically verify that Config implements the tools.ToolConfig interface.
var _ tools.ToolConfig = Config{}
// ToolConfigKind returns the kind of this tool configuration.
func (cfg Config) ToolConfigKind() string {
return kind
}
// Initialize sets up the tool with its dependencies and returns a ready-to-use Tool instance.
func (cfg Config) Initialize(srcs map[string]sources.Source) (tools.Tool, error) {
// Verify that the specified source exists.
rawS, ok := srcs[cfg.Source]
if !ok {
return nil, fmt.Errorf("no source named %q configured", cfg.Source)
}
// Verify the source is of a compatible kind.
s, ok := rawS.(compatibleSource)
if !ok {
return nil, fmt.Errorf("invalid source for %q tool: source kind must be one of %q", kind, compatibleSources)
}
parameters := tools.Parameters{}
mcpManifest := tools.McpManifest{
Name: cfg.Name,
Description: cfg.Description,
InputSchema: parameters.McpManifest(),
}
// Set a default cache expiration if not provided in the configuration.
if cfg.CacheExpireMinutes == nil {
defaultExpiration := cache.DefaultExpiration // Default to 60 minutes
cfg.CacheExpireMinutes = &defaultExpiration
}
// Finish tool setup by creating the Tool instance.
t := Tool{
Name: cfg.Name,
Kind: kind,
AuthRequired: cfg.AuthRequired,
Driver: s.Neo4jDriver(),
Database: s.Neo4jDatabase(),
cache: cache.NewCache(),
cacheExpireMinutes: cfg.CacheExpireMinutes,
manifest: tools.Manifest{Description: cfg.Description, Parameters: parameters.Manifest(), AuthRequired: cfg.AuthRequired},
mcpManifest: mcpManifest,
}
return t, nil
}
// Statically verify that Tool implements the tools.Tool interface.
var _ tools.Tool = Tool{}
// Tool represents the Neo4j schema extraction tool.
// It holds the Neo4j driver, database information, and a cache for the schema.
type Tool struct {
Name string `yaml:"name"`
Kind string `yaml:"kind"`
AuthRequired []string `yaml:"authRequired"`
Driver neo4j.DriverWithContext
Database string
cache *cache.Cache
cacheExpireMinutes *int
manifest tools.Manifest
mcpManifest tools.McpManifest
}
// Invoke executes the tool's main logic: fetching the Neo4j schema.
// It first checks the cache for a valid schema before extracting it from the database.
func (t Tool) Invoke(ctx context.Context, params tools.ParamValues) (any, error) {
// Check if a valid schema is already in the cache.
if cachedSchema, ok := t.cache.Get("schema"); ok {
if schema, ok := cachedSchema.(*types.SchemaInfo); ok {
return schema, nil
}
}
// If not cached, extract the schema from the database.
schema, err := t.extractSchema(ctx)
if err != nil {
return nil, fmt.Errorf("failed to extract database schema: %w", err)
}
// Cache the newly extracted schema for future use.
expiration := time.Duration(*t.cacheExpireMinutes) * time.Minute
t.cache.Set("schema", schema, expiration)
return schema, nil
}
// ParseParams is a placeholder as this tool does not require input parameters.
func (t Tool) ParseParams(data map[string]any, claimsMap map[string]map[string]any) (tools.ParamValues, error) {
return tools.ParamValues{}, nil
}
// Manifest returns the tool's manifest, which describes its purpose and parameters.
func (t Tool) Manifest() tools.Manifest {
return t.manifest
}
// McpManifest returns the machine-consumable manifest for the tool.
func (t Tool) McpManifest() tools.McpManifest {
return t.mcpManifest
}
// Authorized checks if the tool is authorized to run based on the provided authentication services.
func (t Tool) Authorized(verifiedAuthServices []string) bool {
return tools.IsAuthorized(t.AuthRequired, verifiedAuthServices)
}
// checkAPOCProcedures verifies if essential APOC procedures are available in the database.
// It returns true only if all required procedures are found.
func (t Tool) checkAPOCProcedures(ctx context.Context) (bool, error) {
proceduresToCheck := []string{"apoc.meta.schema", "apoc.meta.cypher.types"}
session := t.Driver.NewSession(ctx, neo4j.SessionConfig{DatabaseName: t.Database})
defer session.Close(ctx)
// This query efficiently counts how many of the specified procedures exist.
query := "SHOW PROCEDURES YIELD name WHERE name IN $procs RETURN count(name) AS procCount"
params := map[string]any{"procs": proceduresToCheck}
result, err := session.Run(ctx, query, params)
if err != nil {
return false, fmt.Errorf("failed to execute procedure check query: %w", err)
}
record, err := result.Single(ctx)
if err != nil {
return false, fmt.Errorf("failed to retrieve single result for procedure check: %w", err)
}
rawCount, found := record.Get("procCount")
if !found {
return false, fmt.Errorf("field 'procCount' not found in result record")
}
procCount, ok := rawCount.(int64)
if !ok {
return false, fmt.Errorf("expected 'procCount' to be of type int64, but got %T", rawCount)
}
// Return true only if the number of found procedures matches the number we were looking for.
return procCount == int64(len(proceduresToCheck)), nil
}
// extractSchema orchestrates the concurrent extraction of different parts of the database schema.
// It runs several extraction tasks in parallel for efficiency.
func (t Tool) extractSchema(ctx context.Context) (*types.SchemaInfo, error) {
schema := &types.SchemaInfo{}
var mu sync.Mutex
// Define the different schema extraction tasks.
tasks := []struct {
name string
fn func() error
}{
{
name: "database-info",
fn: func() error {
dbInfo, err := t.extractDatabaseInfo(ctx)
if err != nil {
return fmt.Errorf("failed to extract database info: %w", err)
}
mu.Lock()
defer mu.Unlock()
schema.DatabaseInfo = *dbInfo
return nil
},
},
{
name: "schema-extraction",
fn: func() error {
// Check if APOC procedures are available.
hasAPOC, err := t.checkAPOCProcedures(ctx)
if err != nil {
return fmt.Errorf("failed to check APOC procedures: %w", err)
}
var nodeLabels []types.NodeLabel
var relationships []types.Relationship
var stats *types.Statistics
// Use APOC if available for a more detailed schema; otherwise, use native queries.
if hasAPOC {
nodeLabels, relationships, stats, err = t.GetAPOCSchema(ctx)
} else {
nodeLabels, relationships, stats, err = t.GetSchemaWithoutAPOC(ctx, 100)
}
if err != nil {
return fmt.Errorf("failed to get schema: %w", err)
}
mu.Lock()
defer mu.Unlock()
schema.NodeLabels = nodeLabels
schema.Relationships = relationships
schema.Statistics = *stats
return nil
},
},
{
name: "constraints",
fn: func() error {
constraints, err := t.extractConstraints(ctx)
if err != nil {
return fmt.Errorf("failed to extract constraints: %w", err)
}
mu.Lock()
defer mu.Unlock()
schema.Constraints = constraints
return nil
},
},
{
name: "indexes",
fn: func() error {
indexes, err := t.extractIndexes(ctx)
if err != nil {
return fmt.Errorf("failed to extract indexes: %w", err)
}
mu.Lock()
defer mu.Unlock()
schema.Indexes = indexes
return nil
},
},
}
var wg sync.WaitGroup
errCh := make(chan error, len(tasks))
// Execute all tasks concurrently.
for _, task := range tasks {
wg.Add(1)
go func(task struct {
name string
fn func() error
}) {
defer wg.Done()
if err := task.fn(); err != nil {
errCh <- err
}
}(task)
}
wg.Wait()
close(errCh)
// Collect any errors that occurred during the concurrent tasks.
for err := range errCh {
if err != nil {
schema.Errors = append(schema.Errors, err.Error())
}
}
return schema, nil
}
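// Note on the pattern above: each task runs in its own goroutine, failures
// are funneled through a buffered error channel sized to the task count, and
// any errors are recorded in schema.Errors instead of aborting, so a single
// failing sub-query still yields a partial schema.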
// GetAPOCSchema extracts schema information using the APOC library, which provides detailed metadata.
func (t Tool) GetAPOCSchema(ctx context.Context) ([]types.NodeLabel, []types.Relationship, *types.Statistics, error) {
var nodeLabels []types.NodeLabel
var relationships []types.Relationship
stats := &types.Statistics{
NodesByLabel: make(map[string]int64),
RelationshipsByType: make(map[string]int64),
PropertiesByLabel: make(map[string]int64),
PropertiesByRelType: make(map[string]int64),
}
var mu sync.Mutex
var firstErr error
ctx, cancel := context.WithCancel(ctx)
defer cancel()
handleError := func(err error) {
mu.Lock()
defer mu.Unlock()
if firstErr == nil {
firstErr = err
cancel() // Cancel other operations on the first error.
}
}
tasks := []struct {
name string
fn func(session neo4j.SessionWithContext) error
}{
{
name: "apoc-schema",
fn: func(session neo4j.SessionWithContext) error {
result, err := session.Run(ctx, "CALL apoc.meta.schema({sample: 10}) YIELD value RETURN value", nil)
if err != nil {
return fmt.Errorf("failed to run APOC schema query: %w", err)
}
if !result.Next(ctx) {
return fmt.Errorf("no results from APOC schema query")
}
schemaMap, ok := result.Record().Values[0].(map[string]any)
if !ok {
return fmt.Errorf("unexpected result format from APOC schema query: %T", result.Record().Values[0])
}
apocSchema, err := helpers.MapToAPOCSchema(schemaMap)
if err != nil {
return fmt.Errorf("failed to convert schema map to APOCSchemaResult: %w", err)
}
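// Relationships are collected by the apoc-relationships task; only nodes
// and node statistics are taken from apoc.meta.schema here.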
nodes, _, apocStats := helpers.ProcessAPOCSchema(apocSchema)
mu.Lock()
defer mu.Unlock()
nodeLabels = nodes
stats.TotalNodes = apocStats.TotalNodes
stats.TotalProperties += apocStats.TotalProperties
stats.NodesByLabel = apocStats.NodesByLabel
stats.PropertiesByLabel = apocStats.PropertiesByLabel
return nil
},
},
{
name: "apoc-relationships",
fn: func(session neo4j.SessionWithContext) error {
query := `
MATCH (startNode)-[rel]->(endNode)
WITH
labels(startNode)[0] AS startNode,
type(rel) AS relType,
apoc.meta.cypher.types(rel) AS relProperties,
labels(endNode)[0] AS endNode,
count(*) AS count
RETURN relType, startNode, endNode, relProperties, count`
result, err := session.Run(ctx, query, nil)
if err != nil {
return fmt.Errorf("failed to extract relationships: %w", err)
}
for result.Next(ctx) {
record := result.Record()
relType, startNode, endNode := record.Values[0].(string), record.Values[1].(string), record.Values[2].(string)
properties, count := record.Values[3].(map[string]any), record.Values[4].(int64)
if relType == "" || count == 0 {
continue
}
relationship := types.Relationship{Type: relType, StartNode: startNode, EndNode: endNode, Count: count, Properties: []types.PropertyInfo{}}
for prop, propType := range properties {
relationship.Properties = append(relationship.Properties, types.PropertyInfo{Name: prop, Types: []string{propType.(string)}})
}
mu.Lock()
relationships = append(relationships, relationship)
stats.RelationshipsByType[relType] += count
stats.TotalRelationships += count
propCount := int64(len(relationship.Properties))
stats.TotalProperties += propCount
stats.PropertiesByRelType[relType] += propCount
mu.Unlock()
}
mu.Lock()
defer mu.Unlock()
if len(stats.RelationshipsByType) == 0 {
stats.RelationshipsByType = nil
}
if len(stats.PropertiesByRelType) == 0 {
stats.PropertiesByRelType = nil
}
return nil
},
},
}
var wg sync.WaitGroup
wg.Add(len(tasks))
for _, task := range tasks {
go func(task struct {
name string
fn func(session neo4j.SessionWithContext) error
}) {
defer wg.Done()
session := t.Driver.NewSession(ctx, neo4j.SessionConfig{DatabaseName: t.Database})
defer session.Close(ctx)
if err := task.fn(session); err != nil {
handleError(fmt.Errorf("task %s failed: %w", task.name, err))
}
}(task)
}
wg.Wait()
if firstErr != nil {
return nil, nil, nil, firstErr
}
return nodeLabels, relationships, stats, nil
}
// GetSchemaWithoutAPOC extracts schema information using native Cypher queries.
// This serves as a fallback for databases without APOC installed.
func (t Tool) GetSchemaWithoutAPOC(ctx context.Context, sampleSize int) ([]types.NodeLabel, []types.Relationship, *types.Statistics, error) {
nodePropsMap := make(map[string]map[string]map[string]bool)
relPropsMap := make(map[string]map[string]map[string]bool)
nodeCounts := make(map[string]int64)
relCounts := make(map[string]int64)
relConnectivity := make(map[string]types.RelConnectivityInfo)
var mu sync.Mutex
var firstErr error
ctx, cancel := context.WithCancel(ctx)
defer cancel()
handleError := func(err error) {
mu.Lock()
defer mu.Unlock()
if firstErr == nil {
firstErr = err
cancel()
}
}
tasks := []struct {
name string
fn func(session neo4j.SessionWithContext) error
}{
{
name: "node-schema",
fn: func(session neo4j.SessionWithContext) error {
countResult, err := session.Run(ctx, `MATCH (n) UNWIND labels(n) AS label RETURN label, count(*) AS count ORDER BY count DESC`, nil)
if err != nil {
return fmt.Errorf("node count query failed: %w", err)
}
var labelsList []string
mu.Lock()
for countResult.Next(ctx) {
record := countResult.Record()
label, count := record.Values[0].(string), record.Values[1].(int64)
nodeCounts[label] = count
labelsList = append(labelsList, label)
}
mu.Unlock()
if err = countResult.Err(); err != nil {
return fmt.Errorf("node count result error: %w", err)
}
for _, label := range labelsList {
propQuery := fmt.Sprintf(`MATCH (n:%s) WITH n LIMIT $sampleSize UNWIND keys(n) AS key WITH key, n[key] AS value WHERE value IS NOT NULL RETURN key, COLLECT(DISTINCT valueType(value)) AS types`, label)
propResult, err := session.Run(ctx, propQuery, map[string]any{"sampleSize": sampleSize})
if err != nil {
return fmt.Errorf("node properties query for label %s failed: %w", label, err)
}
mu.Lock()
if nodePropsMap[label] == nil {
nodePropsMap[label] = make(map[string]map[string]bool)
}
for propResult.Next(ctx) {
record := propResult.Record()
key, types := record.Values[0].(string), record.Values[1].([]any)
if nodePropsMap[label][key] == nil {
nodePropsMap[label][key] = make(map[string]bool)
}
for _, tp := range types {
nodePropsMap[label][key][tp.(string)] = true
}
}
mu.Unlock()
if err = propResult.Err(); err != nil {
return fmt.Errorf("node properties result error for label %s: %w", label, err)
}
}
return nil
},
},
{
name: "relationship-schema",
fn: func(session neo4j.SessionWithContext) error {
relQuery := `
MATCH (start)-[r]->(end)
WITH type(r) AS relType, labels(start) AS startLabels, labels(end) AS endLabels, count(*) AS count
RETURN relType, CASE WHEN size(startLabels) > 0 THEN startLabels[0] ELSE null END AS startLabel, CASE WHEN size(endLabels) > 0 THEN endLabels[0] ELSE null END AS endLabel, sum(count) AS totalCount
ORDER BY totalCount DESC`
relResult, err := session.Run(ctx, relQuery, nil)
if err != nil {
return fmt.Errorf("relationship count query failed: %w", err)
}
var relTypesList []string
mu.Lock()
for relResult.Next(ctx) {
record := relResult.Record()
relType := record.Values[0].(string)
startLabel := ""
if record.Values[1] != nil {
startLabel = record.Values[1].(string)
}
endLabel := ""
if record.Values[2] != nil {
endLabel = record.Values[2].(string)
}
count := record.Values[3].(int64)
relCounts[relType] = count
relTypesList = append(relTypesList, relType)
if existing, ok := relConnectivity[relType]; !ok || count > existing.Count {
relConnectivity[relType] = types.RelConnectivityInfo{StartNode: startLabel, EndNode: endLabel, Count: count}
}
}
mu.Unlock()
if err = relResult.Err(); err != nil {
return fmt.Errorf("relationship count result error: %w", err)
}
for _, relType := range relTypesList {
propQuery := fmt.Sprintf(`MATCH ()-[r:%s]->() WITH r LIMIT $sampleSize WHERE size(keys(r)) > 0 UNWIND keys(r) AS key WITH key, r[key] AS value WHERE value IS NOT NULL RETURN key, COLLECT(DISTINCT valueType(value)) AS types`, relType)
propResult, err := session.Run(ctx, propQuery, map[string]any{"sampleSize": sampleSize})
if err != nil {
return fmt.Errorf("relationship properties query for type %s failed: %w", relType, err)
}
mu.Lock()
if relPropsMap[relType] == nil {
relPropsMap[relType] = make(map[string]map[string]bool)
}
for propResult.Next(ctx) {
record := propResult.Record()
key, propTypes := record.Values[0].(string), record.Values[1].([]any)
if relPropsMap[relType][key] == nil {
relPropsMap[relType][key] = make(map[string]bool)
}
for _, t := range propTypes {
relPropsMap[relType][key][t.(string)] = true
}
}
mu.Unlock()
if err = propResult.Err(); err != nil {
return fmt.Errorf("relationship properties result error for type %s: %w", relType, err)
}
}
return nil
},
},
}
var wg sync.WaitGroup
wg.Add(len(tasks))
for _, task := range tasks {
go func(task struct {
name string
fn func(session neo4j.SessionWithContext) error
}) {
defer wg.Done()
session := t.Driver.NewSession(ctx, neo4j.SessionConfig{DatabaseName: t.Database})
defer session.Close(ctx)
if err := task.fn(session); err != nil {
handleError(fmt.Errorf("task %s failed: %w", task.name, err))
}
}(task)
}
wg.Wait()
if firstErr != nil {
return nil, nil, nil, firstErr
}
nodeLabels, relationships, stats := helpers.ProcessNonAPOCSchema(nodeCounts, nodePropsMap, relCounts, relPropsMap, relConnectivity)
return nodeLabels, relationships, stats, nil
}
// extractDatabaseInfo retrieves general information about the Neo4j database instance.
func (t Tool) extractDatabaseInfo(ctx context.Context) (*types.DatabaseInfo, error) {
session := t.Driver.NewSession(ctx, neo4j.SessionConfig{DatabaseName: t.Database})
defer session.Close(ctx)
result, err := session.Run(ctx, "CALL dbms.components() YIELD name, versions, edition", nil)
if err != nil {
return nil, err
}
dbInfo := &types.DatabaseInfo{}
if result.Next(ctx) {
record := result.Record()
dbInfo.Name = record.Values[0].(string)
if versions, ok := record.Values[1].([]any); ok && len(versions) > 0 {
dbInfo.Version = versions[0].(string)
}
dbInfo.Edition = record.Values[2].(string)
}
return dbInfo, result.Err()
}
// extractConstraints fetches all schema constraints from the database.
func (t Tool) extractConstraints(ctx context.Context) ([]types.Constraint, error) {
session := t.Driver.NewSession(ctx, neo4j.SessionConfig{DatabaseName: t.Database})
defer session.Close(ctx)
result, err := session.Run(ctx, "SHOW CONSTRAINTS", nil)
if err != nil {
return nil, err
}
var constraints []types.Constraint
for result.Next(ctx) {
record := result.Record().AsMap()
constraint := types.Constraint{
Name: helpers.GetStringValue(record["name"]),
Type: helpers.GetStringValue(record["type"]),
EntityType: helpers.GetStringValue(record["entityType"]),
}
if labels, ok := record["labelsOrTypes"].([]any); ok && len(labels) > 0 {
constraint.Label = labels[0].(string)
}
if props, ok := record["properties"].([]any); ok {
constraint.Properties = helpers.ConvertToStringSlice(props)
}
constraints = append(constraints, constraint)
}
return constraints, result.Err()
}
// extractIndexes fetches all schema indexes from the database.
func (t Tool) extractIndexes(ctx context.Context) ([]types.Index, error) {
session := t.Driver.NewSession(ctx, neo4j.SessionConfig{DatabaseName: t.Database})
defer session.Close(ctx)
result, err := session.Run(ctx, "SHOW INDEXES", nil)
if err != nil {
return nil, err
}
var indexes []types.Index
for result.Next(ctx) {
record := result.Record().AsMap()
index := types.Index{
Name: helpers.GetStringValue(record["name"]),
State: helpers.GetStringValue(record["state"]),
Type: helpers.GetStringValue(record["type"]),
EntityType: helpers.GetStringValue(record["entityType"]),
}
if labels, ok := record["labelsOrTypes"].([]any); ok && len(labels) > 0 {
index.Label = labels[0].(string)
}
if props, ok := record["properties"].([]any); ok {
index.Properties = helpers.ConvertToStringSlice(props)
}
indexes = append(indexes, index)
}
return indexes, result.Err()
}

View File

@@ -0,0 +1,99 @@
// Copyright 2025 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package neo4jschema
import (
"testing"
"github.com/goccy/go-yaml"
"github.com/google/go-cmp/cmp"
"github.com/googleapis/genai-toolbox/internal/server"
"github.com/googleapis/genai-toolbox/internal/testutils"
)
func TestParseFromYamlNeo4j(t *testing.T) {
ctx, err := testutils.ContextWithNewLogger()
if err != nil {
t.Fatalf("unexpected error: %s", err)
}
exp := 30
tcs := []struct {
desc string
in string
want server.ToolConfigs
}{
{
desc: "basic example with default cache expiration",
in: `
tools:
example_tool:
kind: neo4j-schema
source: my-neo4j-instance
description: some tool description
authRequired:
- my-google-auth-service
- other-auth-service
`,
want: server.ToolConfigs{
"example_tool": Config{
Name: "example_tool",
Kind: "neo4j-schema",
Source: "my-neo4j-instance",
Description: "some tool description",
AuthRequired: []string{"my-google-auth-service", "other-auth-service"},
CacheExpireMinutes: nil,
},
},
},
{
desc: "cache expire minutes set explicitly",
in: `
tools:
example_tool:
kind: neo4j-schema
source: my-neo4j-instance
description: some tool description
cacheExpireMinutes: 30
`,
want: server.ToolConfigs{
"example_tool": Config{
Name: "example_tool",
Kind: "neo4j-schema",
Source: "my-neo4j-instance",
Description: "some tool description",
AuthRequired: []string{}, // Expect an empty slice, not nil.
CacheExpireMinutes: &exp,
},
},
},
}
for _, tc := range tcs {
t.Run(tc.desc, func(t *testing.T) {
got := struct {
Tools server.ToolConfigs `yaml:"tools"`
}{}
// Parse contents
err = yaml.UnmarshalContext(ctx, testutils.FormatYaml(tc.in), &got)
if err != nil {
t.Fatalf("unable to unmarshal: %s", err)
}
if diff := cmp.Diff(tc.want, got.Tools); diff != "" {
t.Fatalf("incorrect parse: diff %v", diff)
}
})
}
}

View File

@@ -0,0 +1,127 @@
// Copyright 2025 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Package types contains the shared data structures for Neo4j schema representation.
package types
// SchemaInfo represents the complete database schema.
type SchemaInfo struct {
NodeLabels []NodeLabel `json:"nodeLabels"`
Relationships []Relationship `json:"relationships"`
Constraints []Constraint `json:"constraints"`
Indexes []Index `json:"indexes"`
DatabaseInfo DatabaseInfo `json:"databaseInfo"`
Statistics Statistics `json:"statistics"`
Errors []string `json:"errors,omitempty"`
}
// NodeLabel represents a node label with its properties.
type NodeLabel struct {
Name string `json:"name"`
Properties []PropertyInfo `json:"properties"`
Count int64 `json:"count"`
}
// RelConnectivityInfo holds information about a relationship's start and end nodes,
// primarily used during schema extraction without APOC procedures.
type RelConnectivityInfo struct {
StartNode string
EndNode string
Count int64
}
// Relationship represents a relationship type with its properties.
type Relationship struct {
Type string `json:"type"`
Properties []PropertyInfo `json:"properties"`
StartNode string `json:"startNode,omitempty"`
EndNode string `json:"endNode,omitempty"`
Count int64 `json:"count"`
}
// PropertyInfo represents a property with its data types.
type PropertyInfo struct {
Name string `json:"name"`
Types []string `json:"types"`
Mandatory bool `json:"-"`
Unique bool `json:"-"`
Indexed bool `json:"-"`
}
// Constraint represents a database constraint.
type Constraint struct {
Name string `json:"name"`
Type string `json:"type"`
EntityType string `json:"entityType"`
Label string `json:"label,omitempty"`
Properties []string `json:"properties"`
}
// Index represents a database index.
type Index struct {
Name string `json:"name"`
State string `json:"state"`
Type string `json:"type"`
EntityType string `json:"entityType"`
Label string `json:"label,omitempty"`
Properties []string `json:"properties"`
}
// DatabaseInfo contains general database information.
type DatabaseInfo struct {
Name string `json:"name"`
Version string `json:"version"`
Edition string `json:"edition,omitempty"`
}
// Statistics contains database statistics.
type Statistics struct {
TotalNodes int64 `json:"totalNodes"`
TotalRelationships int64 `json:"totalRelationships"`
TotalProperties int64 `json:"totalProperties"`
NodesByLabel map[string]int64 `json:"nodesByLabel"`
RelationshipsByType map[string]int64 `json:"relationshipsByType"`
PropertiesByLabel map[string]int64 `json:"propertiesByLabel"`
PropertiesByRelType map[string]int64 `json:"propertiesByRelType"`
}
// APOCSchemaResult represents the result from apoc.meta.schema().
type APOCSchemaResult struct {
Value map[string]APOCEntity `json:"value"`
}
// APOCEntity represents a node or relationship in APOC schema.
type APOCEntity struct {
Type string `json:"type"`
Count int64 `json:"count"`
Labels []string `json:"labels,omitempty"`
Properties map[string]APOCProperty `json:"properties"`
Relationships map[string]APOCRelationshipInfo `json:"relationships,omitempty"`
}
// APOCProperty represents property info from APOC.
type APOCProperty struct {
Type string `json:"type"`
Indexed bool `json:"indexed"`
Unique bool `json:"unique"`
Existence bool `json:"existence"`
}
// APOCRelationshipInfo represents relationship info from APOC.
type APOCRelationshipInfo struct {
Count int64 `json:"count"`
Direction string `json:"direction"`
Labels []string `json:"labels"`
Properties map[string]APOCProperty `json:"properties"`
}
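// Illustrative apoc.meta.schema() value that maps onto APOCSchemaResult
// (shape taken from the helper tests; the label and values are arbitrary):
//
//  {
//    "Person": {
//      "type": "node",
//      "count": 150,
//      "properties": {
//        "name": {"type": "STRING", "indexed": true, "unique": false, "existence": false}
//      }
//    }
//  }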

View File

@@ -0,0 +1,317 @@
// Copyright 2025 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package dataplex
import (
"bytes"
"context"
"encoding/json"
"fmt"
"net/http"
"os"
"regexp"
"strings"
"testing"
"time"
bigqueryapi "cloud.google.com/go/bigquery"
"github.com/google/uuid"
"github.com/googleapis/genai-toolbox/internal/testutils"
"github.com/googleapis/genai-toolbox/tests"
"golang.org/x/oauth2/google"
"google.golang.org/api/googleapi"
"google.golang.org/api/iterator"
"google.golang.org/api/option"
)
var (
DataplexSourceKind = "dataplex"
DataplexSearchEntriesToolKind = "dataplex-search-entries"
DataplexProject = os.Getenv("DATAPLEX_PROJECT")
)
func getDataplexVars(t *testing.T) map[string]any {
switch "" {
case DataplexProject:
t.Fatal("'DATAPLEX_PROJECT' not set")
}
return map[string]any{
"kind": DataplexSourceKind,
"project": DataplexProject,
}
}
// Copied over from bigquery.go
func initBigQueryConnection(ctx context.Context, project string) (*bigqueryapi.Client, error) {
cred, err := google.FindDefaultCredentials(ctx, bigqueryapi.Scope)
if err != nil {
return nil, fmt.Errorf("failed to find default Google Cloud credentials with scope %q: %w", bigqueryapi.Scope, err)
}
client, err := bigqueryapi.NewClient(ctx, project, option.WithCredentials(cred))
if err != nil {
return nil, fmt.Errorf("failed to create BigQuery client for project %q: %w", project, err)
}
return client, nil
}
func TestDataplexToolEndpoints(t *testing.T) {
sourceConfig := getDataplexVars(t)
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
defer cancel()
var args []string
bigqueryClient, err := initBigQueryConnection(ctx, DataplexProject)
if err != nil {
t.Fatalf("unable to create Cloud SQL connection pool: %s", err)
}
// create table name with UUID
datasetName := fmt.Sprintf("temp_toolbox_test_%s", strings.ReplaceAll(uuid.New().String(), "-", ""))
tableName := fmt.Sprintf("param_table_%s", strings.ReplaceAll(uuid.New().String(), "-", ""))
teardownTable1 := setupBigQueryTable(t, ctx, bigqueryClient, datasetName, tableName)
defer teardownTable1(t)
toolsFile := getDataplexToolsConfig(sourceConfig)
cmd, cleanup, err := tests.StartCmd(ctx, toolsFile, args...)
if err != nil {
t.Fatalf("command initialization returned an error: %s", err)
}
defer cleanup()
waitCtx, cancel := context.WithTimeout(ctx, 5*time.Minute)
defer cancel()
out, err := testutils.WaitForString(waitCtx, regexp.MustCompile(`Server ready to serve`), cmd.Out)
if err != nil {
t.Logf("toolbox command logs: \n%s", out)
t.Fatalf("toolbox didn't start successfully: %s", err)
}
runDataplexSearchEntriesToolGetTest(t)
runDataplexSearchEntriesToolInvokeTest(t, tableName, datasetName)
}
func setupBigQueryTable(t *testing.T, ctx context.Context, client *bigqueryapi.Client, datasetName string, tableName string) func(*testing.T) {
// Create dataset
dataset := client.Dataset(datasetName)
_, err := dataset.Metadata(ctx)
if err != nil {
apiErr, ok := err.(*googleapi.Error)
if !ok || apiErr.Code != 404 {
t.Fatalf("Failed to check dataset %q existence: %v", datasetName, err)
}
metadataToCreate := &bigqueryapi.DatasetMetadata{Name: datasetName}
if err := dataset.Create(ctx, metadataToCreate); err != nil {
t.Fatalf("Failed to create dataset %q: %v", datasetName, err)
}
}
// Create table
tab := client.Dataset(datasetName).Table(tableName)
meta := &bigqueryapi.TableMetadata{}
if err := tab.Create(ctx, meta); err != nil {
t.Fatalf("Create table job for %s failed: %v", tableName, err)
}
time.Sleep(2 * time.Minute) // wait for the table to be ingested into the Dataplex catalog
return func(t *testing.T) {
// tear down table
dropSQL := fmt.Sprintf("drop table %s.%s", datasetName, tableName)
dropJob, err := client.Query(dropSQL).Run(ctx)
if err != nil {
t.Errorf("Failed to start drop table job for %s: %v", tableName, err)
return
}
dropStatus, err := dropJob.Wait(ctx)
if err != nil {
t.Errorf("Failed to wait for drop table job for %s: %v", tableName, err)
return
}
if err := dropStatus.Err(); err != nil {
t.Errorf("Error dropping table %s: %v", tableName, err)
}
// tear down dataset
datasetToTeardown := client.Dataset(datasetName)
tablesIterator := datasetToTeardown.Tables(ctx)
_, err = tablesIterator.Next()
if err == iterator.Done {
if err := datasetToTeardown.Delete(ctx); err != nil {
t.Errorf("Failed to delete dataset %s: %v", datasetName, err)
}
} else if err != nil {
t.Errorf("Failed to list tables in dataset %s to check emptiness: %v.", datasetName, err)
}
}
}
func getDataplexToolsConfig(sourceConfig map[string]any) map[string]any {
// Write config into a file and pass it to command
toolsFile := map[string]any{
"sources": map[string]any{
"my-dataplex-instance": sourceConfig,
},
"tools": map[string]any{
"my-search-entries-tool": map[string]any{
"kind": DataplexSearchEntriesToolKind,
"source": "my-dataplex-instance",
"description": "Simple tool to test end to end functionality.",
},
},
}
return toolsFile
}
func runDataplexSearchEntriesToolGetTest(t *testing.T) {
resp, err := http.Get("http://127.0.0.1:5000/api/tool/my-search-entries-tool/")
if err != nil {
t.Fatalf("error making GET request: %s", err)
}
defer resp.Body.Close()
if resp.StatusCode != 200 {
t.Fatalf("expected status code 200, got %d", resp.StatusCode)
}
var body map[string]interface{}
if err := json.NewDecoder(resp.Body).Decode(&body); err != nil {
t.Fatalf("error decoding response body: %s", err)
}
got, ok := body["tools"]
if !ok {
t.Fatalf("unable to find 'tools' key in response body")
}
toolsMap, ok := got.(map[string]interface{})
if !ok {
t.Fatalf("tools is not a map")
}
tool, ok := toolsMap["my-search-entries-tool"].(map[string]interface{})
if !ok {
t.Fatalf("tool not found in manifest")
}
params, ok := tool["parameters"].([]interface{})
if !ok {
t.Fatalf("parameters not found")
}
paramNames := []string{}
for _, param := range params {
paramMap, ok := param.(map[string]interface{})
if ok {
paramNames = append(paramNames, paramMap["name"].(string))
}
}
expected := []string{"name", "pageSize", "pageToken", "orderBy", "query"}
for _, want := range expected {
found := false
for _, got := range paramNames {
if got == want {
found = true
break
}
}
if !found {
t.Fatalf("expected parameter %q not found in tool parameters", want)
}
}
}
func runDataplexSearchEntriesToolInvokeTest(t *testing.T, tableName string, datasetName string) {
testCases := []struct {
name string
tableName string
datasetName string
wantStatusCode int
expectResult bool
wantContentKey string
}{
{
name: "Success - Entry Found",
tableName: tableName,
datasetName: datasetName,
wantStatusCode: 200,
expectResult: true,
wantContentKey: "dataplex_entry",
},
{
name: "Failure - Entry Not Found",
tableName: "",
datasetName: "",
wantStatusCode: 200,
expectResult: false,
wantContentKey: "",
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
query := fmt.Sprintf("displayname=\"%s\" system=bigquery parent:\"%s\"", tc.tableName, tc.datasetName)
reqBodyMap := map[string]string{"query": query}
reqBodyBytes, err := json.Marshal(reqBodyMap)
if err != nil {
t.Fatalf("error marshalling request body: %s", err)
}
resp, err := http.Post("http://127.0.0.1:5000/api/tool/my-search-entries-tool/invoke", "application/json", bytes.NewBuffer(reqBodyBytes))
if err != nil {
t.Fatalf("error making POST request: %s", err)
}
defer resp.Body.Close()
if resp.StatusCode != tc.wantStatusCode {
t.Fatalf("response status code is not %d.", tc.wantStatusCode)
}
var result map[string]interface{}
if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
t.Fatalf("error parsing response body: %s", err)
}
resultStr, ok := result["result"].(string)
if !ok {
if result["result"] == nil && !tc.expectResult {
return
}
t.Fatalf("expected 'result' field to be a string, got %T", result["result"])
}
if !tc.expectResult && (resultStr == "" || resultStr == "[]") {
return
}
var entries []interface{}
if err := json.Unmarshal([]byte(resultStr), &entries); err != nil {
t.Fatalf("error unmarshalling result string: %v", err)
}
if tc.expectResult {
if len(entries) == 0 {
t.Fatal("expected at least one entry, but got 0")
}
entry, ok := entries[0].(map[string]interface{})
if !ok {
t.Fatalf("expected first entry to be a map, got %T", entries[0])
}
if _, ok := entry[tc.wantContentKey]; !ok {
t.Fatalf("expected entry to have key '%s', but it was not found in %v", tc.wantContentKey, entry)
}
} else {
if len(entries) != 0 {
t.Fatalf("expected 0 entries, but got %d", len(entries))
}
}
})
}
}
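The invoke test builds its Dataplex search predicate inline with fmt.Sprintf; as a standalone sketch, a hypothetical helper producing the same query string could look like this:

// buildSearchQuery is a hypothetical helper mirroring the predicate built in
// runDataplexSearchEntriesToolInvokeTest: it matches on display name,
// restricts results to BigQuery-backed entries, and scopes the search to the
// parent dataset. %q quotes the values, like the escaped quotes in the test.
func buildSearchQuery(tableName, datasetName string) string {
return fmt.Sprintf("displayname=%q system=bigquery parent:%q", tableName, datasetName)
}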

View File

@@ -0,0 +1,755 @@
// Copyright 2025 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package mongodb
import (
"bytes"
"context"
"encoding/json"
"fmt"
"io"
"net/http"
"os"
"regexp"
"testing"
"time"
"github.com/googleapis/genai-toolbox/internal/testutils"
"github.com/googleapis/genai-toolbox/tests"
"go.mongodb.org/mongo-driver/mongo"
"go.mongodb.org/mongo-driver/mongo/options"
)
var (
MongoDbSourceKind = "mongodb"
MongoDbToolKind = "mongodb-find"
MongoDbUri = os.Getenv("MONGODB_URI")
MongoDbDatabase = os.Getenv("MONGODB_DATABASE")
ServiceAccountEmail = os.Getenv("SERVICE_ACCOUNT_EMAIL")
)
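// getMongoDBVars fails the test unless the required environment variables are
// set; for a local run they might look like (example values only):
//   MONGODB_URI="mongodb://localhost:27017"
//   MONGODB_DATABASE="toolbox_test"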
func getMongoDBVars(t *testing.T) map[string]any {
switch "" {
case MongoDbUri:
t.Fatal("'MONGODB_URI' not set")
case MongoDbDatabase:
t.Fatal("'MONGODB_DATABASE' not set")
}
return map[string]any{
"kind": MongoDbSourceKind,
"uri": MongoDbUri,
}
}
func initMongoDbDatabase(ctx context.Context, uri, database string) (*mongo.Database, error) {
// initMongoDbDatabase connects to MongoDB, verifies connectivity with a ping,
// and returns a handle to the named database.
client, err := mongo.Connect(ctx, options.Client().ApplyURI(uri))
if err != nil {
return nil, fmt.Errorf("unable to connect to mongodb: %s", err)
}
err = client.Ping(ctx, nil)
if err != nil {
return nil, fmt.Errorf("unable to connect to mongodb: %s", err)
}
return client.Database(database), nil
}
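// TestMongoDBToolEndpoints starts the toolbox server against a live MongoDB
// source, seeds fixture data, and exercises the GET, invoke, and MCP
// endpoints for the find, delete, insert, update, and aggregate tool kinds.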
func TestMongoDBToolEndpoints(t *testing.T) {
sourceConfig := getMongoDBVars(t)
ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
defer cancel()
var args []string
database, err := initMongoDbDatabase(ctx, MongoDbUri, MongoDbDatabase)
if err != nil {
t.Fatalf("unable to create MongoDB connection: %s", err)
}
// set up data for param tool
teardownDB := setupMongoDB(t, ctx, database)
defer teardownDB(t)
// Write config into a file and pass it to command
toolsFile := getMongoDBToolsConfig(sourceConfig, MongoDbToolKind)
cmd, cleanup, err := tests.StartCmd(ctx, toolsFile, args...)
if err != nil {
t.Fatalf("command initialization returned an error: %s", err)
}
defer cleanup()
waitCtx, cancel := context.WithTimeout(ctx, 10*time.Second)
defer cancel()
out, err := testutils.WaitForString(waitCtx, regexp.MustCompile(`Server ready to serve`), cmd.Out)
if err != nil {
t.Logf("toolbox command logs: \n%s", out)
t.Fatalf("toolbox didn't start successfully: %s", err)
}
tests.RunToolGetTest(t)
select1Want := `[{"_id":3,"id":3,"name":"Sid"}]`
failInvocationWant := `invalid JSON input: missing colon after key `
invokeParamWant := `[{"_id":5,"id":3,"name":"Alice"}]`
invokeIdNullWant := `[{"_id":4,"id":4,"name":null}]`
mcpInvokeParamWant := `{"jsonrpc":"2.0","id":"my-tool","result":{"content":[{"type":"text","text":"{\"_id\":5,\"id\":3,\"name\":\"Alice\"}"}]}}`
nullString := "null"
tests.RunToolInvokeTest(t, select1Want, invokeParamWant, invokeIdNullWant, nullString, true, true)
tests.RunMCPToolCallMethod(t, mcpInvokeParamWant, failInvocationWant)
delete1Want := "1"
deleteManyWant := "2"
RunToolDeleteInvokeTest(t, delete1Want, deleteManyWant)
insert1Want := `["68666e1035bb36bf1b4d47fb"]`
insertManyWant := `["68667a6436ec7d0363668db7","68667a6436ec7d0363668db8","68667a6436ec7d0363668db9"]`
RunToolInsertInvokeTest(t, insert1Want, insertManyWant)
update1Want := "1"
updateManyWant := "[2,0,2]"
RunToolUpdateInvokeTest(t, update1Want, updateManyWant)
aggregate1Want := `[{"id":2}]`
aggregateManyWant := `[{"id":500},{"id":501}]`
RunToolAggregateInvokeTest(t, aggregate1Want, aggregateManyWant)
}
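// RunToolDeleteInvokeTest invokes the delete-one and delete-many tools and
// checks the returned deletion counts.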
func RunToolDeleteInvokeTest(t *testing.T, delete1Want, deleteManyWant string) {
// Test tool invoke endpoint
invokeTcs := []struct {
name string
api string
requestHeader map[string]string
requestBody io.Reader
want string
isErr bool
}{
{
name: "invoke my-delete-one-tool",
api: "http://127.0.0.1:5000/api/tool/my-delete-one-tool/invoke",
requestHeader: map[string]string{},
requestBody: bytes.NewBuffer([]byte(`{ "id" : 100 }`)),
want: delete1Want,
isErr: false,
},
{
name: "invoke my-delete-many-tool",
api: "http://127.0.0.1:5000/api/tool/my-delete-many-tool/invoke",
requestHeader: map[string]string{},
requestBody: bytes.NewBuffer([]byte(`{ "id" : 101 }`)),
want: deleteManyWant,
isErr: false,
},
}
for _, tc := range invokeTcs {
t.Run(tc.name, func(t *testing.T) {
// Send Tool invocation request
req, err := http.NewRequest(http.MethodPost, tc.api, tc.requestBody)
if err != nil {
t.Fatalf("unable to create request: %s", err)
}
req.Header.Add("Content-type", "application/json")
for k, v := range tc.requestHeader {
req.Header.Add(k, v)
}
resp, err := http.DefaultClient.Do(req)
if err != nil {
t.Fatalf("unable to send request: %s", err)
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
if tc.isErr {
return
}
bodyBytes, _ := io.ReadAll(resp.Body)
t.Fatalf("response status code is not 200, got %d: %s", resp.StatusCode, string(bodyBytes))
}
// Check response body
var body map[string]interface{}
err = json.NewDecoder(resp.Body).Decode(&body)
if err != nil {
t.Fatalf("error parsing response body")
}
got, ok := body["result"].(string)
if !ok {
t.Fatalf("unable to find result in response body")
}
if got != tc.want {
t.Fatalf("unexpected value: got %q, want %q", got, tc.want)
}
})
}
}
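// RunToolInsertInvokeTest invokes the insert-one and insert-many tools and
// checks the returned lists of inserted document IDs.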
func RunToolInsertInvokeTest(t *testing.T, insert1Want, insertManyWant string) {
// Test tool invoke endpoint
invokeTcs := []struct {
name string
api string
requestHeader map[string]string
requestBody io.Reader
want string
isErr bool
}{
{
name: "invoke my-insert-one-tool",
api: "http://127.0.0.1:5000/api/tool/my-insert-one-tool/invoke",
requestHeader: map[string]string{},
requestBody: bytes.NewBuffer([]byte(`{ "data" : "{ \"_id\": { \"$oid\": \"68666e1035bb36bf1b4d47fb\" }, \"id\" : 200 }" }"`)),
want: insert1Want,
isErr: false,
},
{
name: "invoke my-insert-many-tool",
api: "http://127.0.0.1:5000/api/tool/my-insert-many-tool/invoke",
requestHeader: map[string]string{},
requestBody: bytes.NewBuffer([]byte(`{ "data" : "[{ \"_id\": { \"$oid\": \"68667a6436ec7d0363668db7\"} , \"id\" : 201 }, { \"_id\" : { \"$oid\": \"68667a6436ec7d0363668db8\"}, \"id\" : 202 }, { \"_id\": { \"$oid\": \"68667a6436ec7d0363668db9\"}, \"id\": 203 }]" }`)),
want: insertManyWant,
isErr: false,
},
}
for _, tc := range invokeTcs {
t.Run(tc.name, func(t *testing.T) {
// Send Tool invocation request
req, err := http.NewRequest(http.MethodPost, tc.api, tc.requestBody)
if err != nil {
t.Fatalf("unable to create request: %s", err)
}
req.Header.Add("Content-type", "application/json")
for k, v := range tc.requestHeader {
req.Header.Add(k, v)
}
resp, err := http.DefaultClient.Do(req)
if err != nil {
t.Fatalf("unable to send request: %s", err)
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
if tc.isErr {
return
}
bodyBytes, _ := io.ReadAll(resp.Body)
t.Fatalf("response status code is not 200, got %d: %s", resp.StatusCode, string(bodyBytes))
}
// Check response body
var body map[string]interface{}
err = json.NewDecoder(resp.Body).Decode(&body)
if err != nil {
t.Fatalf("error parsing response body")
}
got, ok := body["result"].(string)
if !ok {
t.Fatalf("unable to find result in response body")
}
if got != tc.want {
t.Fatalf("unexpected value: got %q, want %q", got, tc.want)
}
})
}
}
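// RunToolUpdateInvokeTest invokes the update-one and update-many tools and
// checks the returned update counts.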
func RunToolUpdateInvokeTest(t *testing.T, update1Want, updateManyWant string) {
// Test tool invoke endpoint
invokeTcs := []struct {
name string
api string
requestHeader map[string]string
requestBody io.Reader
want string
isErr bool
}{
{
name: "invoke my-update-one-tool",
api: "http://127.0.0.1:5000/api/tool/my-update-one-tool/invoke",
requestHeader: map[string]string{},
requestBody: bytes.NewBuffer([]byte(`{ "id": 300, "name": "Bob" }`)),
want: update1Want,
isErr: false,
},
{
name: "invoke my-update-many-tool",
api: "http://127.0.0.1:5000/api/tool/my-update-many-tool/invoke",
requestHeader: map[string]string{},
requestBody: bytes.NewBuffer([]byte(`{ "id": 400, "name" : "Alice" }`)),
want: updateManyWant,
isErr: false,
},
}
for _, tc := range invokeTcs {
t.Run(tc.name, func(t *testing.T) {
// Send Tool invocation request
req, err := http.NewRequest(http.MethodPost, tc.api, tc.requestBody)
if err != nil {
t.Fatalf("unable to create request: %s", err)
}
req.Header.Add("Content-type", "application/json")
for k, v := range tc.requestHeader {
req.Header.Add(k, v)
}
resp, err := http.DefaultClient.Do(req)
if err != nil {
t.Fatalf("unable to send request: %s", err)
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
if tc.isErr {
return
}
bodyBytes, _ := io.ReadAll(resp.Body)
t.Fatalf("response status code is not 200, got %d: %s", resp.StatusCode, string(bodyBytes))
}
// Check response body
var body map[string]interface{}
err = json.NewDecoder(resp.Body).Decode(&body)
if err != nil {
t.Fatalf("error parsing response body")
}
got, ok := body["result"].(string)
if !ok {
t.Fatalf("unable to find result in response body")
}
if got != tc.want {
t.Fatalf("unexpected value: got %q, want %q", got, tc.want)
}
})
}
}
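// RunToolAggregateInvokeTest invokes the aggregation tools, including the
// read-only and read-write variants of a pipeline that ends in a $out stage.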
func RunToolAggregateInvokeTest(t *testing.T, aggregate1Want string, aggregateManyWant string) {
// Test tool invoke endpoint
invokeTcs := []struct {
name string
api string
requestHeader map[string]string
requestBody io.Reader
want string
isErr bool
}{
{
name: "invoke my-aggregate-tool",
api: "http://127.0.0.1:5000/api/tool/my-aggregate-tool/invoke",
requestHeader: map[string]string{},
requestBody: bytes.NewBuffer([]byte(`{ "name": "Jane" }`)),
want: aggregate1Want,
isErr: false,
},
{
name: "invoke my-aggregate-tool",
api: "http://127.0.0.1:5000/api/tool/my-aggregate-tool/invoke",
requestHeader: map[string]string{},
requestBody: bytes.NewBuffer([]byte(`{ "name" : "ToBeAggregated" }`)),
want: aggregateManyWant,
isErr: false,
},
{
name: "invoke my-read-only-aggregate-tool",
api: "http://127.0.0.1:5000/api/tool/my-read-only-aggregate-tool/invoke",
requestHeader: map[string]string{},
requestBody: bytes.NewBuffer([]byte(`{ "name" : "ToBeAggregated" }`)),
want: "",
isErr: true,
},
{
name: "invoke my-read-write-aggregate-tool",
api: "http://127.0.0.1:5000/api/tool/my-read-write-aggregate-tool/invoke",
requestHeader: map[string]string{},
requestBody: bytes.NewBuffer([]byte(`{ "name" : "ToBeAggregated" }`)),
want: "[]",
isErr: false,
},
}
for _, tc := range invokeTcs {
t.Run(tc.name, func(t *testing.T) {
// Send Tool invocation request
req, err := http.NewRequest(http.MethodPost, tc.api, tc.requestBody)
if err != nil {
t.Fatalf("unable to create request: %s", err)
}
req.Header.Add("Content-type", "application/json")
for k, v := range tc.requestHeader {
req.Header.Add(k, v)
}
resp, err := http.DefaultClient.Do(req)
if err != nil {
t.Fatalf("unable to send request: %s", err)
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
if tc.isErr {
return
}
bodyBytes, _ := io.ReadAll(resp.Body)
t.Fatalf("response status code is not 200, got %d: %s", resp.StatusCode, string(bodyBytes))
}
// Check response body
var body map[string]interface{}
err = json.NewDecoder(resp.Body).Decode(&body)
if err != nil {
t.Fatalf("error parsing response body")
}
got, ok := body["result"].(string)
if !ok {
t.Fatalf("unable to find result in response body")
}
if got != tc.want {
t.Fatalf("unexpected value: got %q, want %q", got, tc.want)
}
})
}
}
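// setupMongoDB seeds test_collection with the fixture documents the invoke
// tests expect and returns a teardown function that drops the collection.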
func setupMongoDB(t *testing.T, ctx context.Context, database *mongo.Database) func(*testing.T) {
collectionName := "test_collection"
documents := []map[string]any{
{"_id": 1, "id": 1, "name": "Alice", "email": ServiceAccountEmail},
{"_id": 2, "id": 2, "name": "Jane"},
{"_id": 3, "id": 3, "name": "Sid"},
{"_id": 4, "id": 4, "name": nil},
{"_id": 5, "id": 3, "name": "Alice", "email": "alice@gmail.com"},
{"_id": 6, "id": 100, "name": "ToBeDeleted", "email": "bob@gmail.com"},
{"_id": 7, "id": 101, "name": "ToBeDeleted", "email": "bob1@gmail.com"},
{"_id": 8, "id": 101, "name": "ToBeDeleted", "email": "bob2@gmail.com"},
{"_id": 9, "id": 300, "name": "ToBeUpdatedToBob", "email": "bob@gmail.com"},
{"_id": 10, "id": 400, "name": "ToBeUpdatedToAlice", "email": "alice@gmail.com"},
{"_id": 11, "id": 400, "name": "ToBeUpdatedToAlice", "email": "alice@gmail.com"},
{"_id": 12, "id": 500, "name": "ToBeAggregated", "email": "agatha@gmail.com"},
{"_id": 13, "id": 501, "name": "ToBeAggregated", "email": "agatha@gmail.com"},
}
for _, doc := range documents {
_, err := database.Collection(collectionName).InsertOne(ctx, doc)
if err != nil {
t.Fatalf("unable to insert test data: %s", err)
}
}
return func(t *testing.T) {
// tear down test
err := database.Collection(collectionName).Drop(ctx)
if err != nil {
t.Errorf("Teardown failed: %s", err)
}
}
}
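// getMongoDBToolsConfig builds the in-memory tools-file configuration used by
// the MongoDB integration tests.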
func getMongoDBToolsConfig(sourceConfig map[string]any, toolKind string) map[string]any {
toolsFile := map[string]any{
"sources": map[string]any{
"my-instance": sourceConfig,
},
"authServices": map[string]any{
"my-google-auth": map[string]any{
"kind": "google",
"clientId": tests.ClientId,
},
},
"tools": map[string]any{
"my-simple-tool": map[string]any{
"kind": "mongodb-find-one",
"source": "my-instance",
"description": "Simple tool to test end to end functionality.",
"collection": "test_collection",
"filterPayload": `{ "_id" : 3 }`,
"filterParams": []any{},
"database": MongoDbDatabase,
},
"my-tool": map[string]any{
"kind": toolKind,
"source": "my-instance",
"description": "Tool to test invocation with params.",
"authRequired": []string{},
"collection": "test_collection",
"filterPayload": `{ "id" : {{ .id }}, "name" : {{json .name }} }`,
"filterParams": []map[string]any{
{
"name": "id",
"type": "integer",
"description": "user id",
},
{
"name": "name",
"type": "string",
"description": "user name",
},
},
"projectPayload": `{ "_id": 1, "id": 1, "name" : 1 }`,
"database": MongoDbDatabase,
},
"my-tool-by-id": map[string]any{
"kind": toolKind,
"source": "my-instance",
"description": "Tool to test invocation with params.",
"authRequired": []string{},
"collection": "test_collection",
"filterPayload": `{ "id" : {{ .id }} }`,
"filterParams": []map[string]any{
{
"name": "id",
"type": "integer",
"description": "user id",
},
},
"projectPayload": `{ "_id": 1, "id": 1, "name" : 1 }`,
"database": MongoDbDatabase,
},
"my-tool-by-name": map[string]any{
"kind": toolKind,
"source": "my-instance",
"description": "Tool to test invocation with params.",
"authRequired": []string{},
"collection": "test_collection",
"filterPayload": `{ "name" : {{ .name }} }`,
"filterParams": []map[string]any{
{
"name": "name",
"type": "string",
"description": "user name",
"required": false,
},
},
"projectPayload": `{ "_id": 1, "id": 1, "name" : 1 }`,
"database": MongoDbDatabase,
},
"my-array-tool": map[string]any{
"kind": toolKind,
"source": "my-instance",
"description": "Tool to test invocation with array.",
"authRequired": []string{},
"collection": "test_collection",
"filterPayload": `{ "name": { "$in": {{json .nameArray}} }, "_id": 5 })`,
"filterParams": []map[string]any{
{
"name": "nameArray",
"type": "array",
"description": "user names",
"items": map[string]any{
"name": "username",
"type": "string",
"description": "string item"},
},
},
"projectPayload": `{ "_id": 1, "id": 1, "name" : 1 }`,
"database": MongoDbDatabase,
},
"my-auth-tool": map[string]any{
"kind": toolKind,
"source": "my-instance",
"description": "Tool to test authenticated parameters.",
"authRequired": []string{},
"collection": "test_collection",
"filterPayload": `{ "email" : {{json .email }} }`,
"filterParams": []map[string]any{
{
"name": "email",
"type": "string",
"description": "user email",
"authServices": []map[string]string{
{
"name": "my-google-auth",
"field": "email",
},
},
},
},
"projectPayload": `{ "_id": 0, "name" : 1 }`,
"database": MongoDbDatabase,
},
"my-auth-required-tool": map[string]any{
"kind": toolKind,
"source": "my-instance",
"description": "Tool to test auth required invocation.",
"authRequired": []string{
"my-google-auth",
},
"collection": "test_collection",
"filterPayload": `{ "_id": 3, "id": 3 }`,
"filterParams": []any{},
"database": MongoDbDatabase,
},
"my-fail-tool": map[string]any{
"kind": toolKind,
"source": "my-instance",
"description": "Tool to test statement with incorrect syntax.",
"authRequired": []string{},
"collection": "test_collection",
"filterPayload": `{ "id" ; 1 }"}`,
"filterParams": []any{},
"database": MongoDbDatabase,
},
"my-delete-one-tool": map[string]any{
"kind": "mongodb-delete-one",
"source": "my-instance",
"description": "Tool to test deleting an entry.",
"authRequired": []string{},
"collection": "test_collection",
"filterPayload": `{ "id" : 100 }"}`,
"filterParams": []any{},
"database": MongoDbDatabase,
},
"my-delete-many-tool": map[string]any{
"kind": "mongodb-delete-many",
"source": "my-instance",
"description": "Tool to test deleting multiple entries.",
"authRequired": []string{},
"collection": "test_collection",
"filterPayload": `{ "id" : 101 }"}`,
"filterParams": []any{},
"database": MongoDbDatabase,
},
"my-insert-one-tool": map[string]any{
"kind": "mongodb-insert-one",
"source": "my-instance",
"description": "Tool to test inserting an entry.",
"authRequired": []string{},
"collection": "test_collection",
"canonical": true,
"database": MongoDbDatabase,
},
"my-insert-many-tool": map[string]any{
"kind": "mongodb-insert-many",
"source": "my-instance",
"description": "Tool to test inserting multiple entries.",
"authRequired": []string{},
"collection": "test_collection",
"canonical": true,
"database": MongoDbDatabase,
},
"my-update-one-tool": map[string]any{
"kind": "mongodb-update-one",
"source": "my-instance",
"description": "Tool to test updating an entry.",
"authRequired": []string{},
"collection": "test_collection",
"canonical": true,
"filterPayload": `{ "id" : 300 }`,
"filterParams": []any{},
"updatePayload": `{ "$set" : { "name": {{json .name}} } }`,
"updateParams": []map[string]any{
{
"name": "name",
"type": "string",
"description": "user name",
},
},
"database": MongoDbDatabase,
},
"my-update-many-tool": map[string]any{
"kind": "mongodb-update-many",
"source": "my-instance",
"description": "Tool to test updating multiple entries.",
"authRequired": []string{},
"collection": "test_collection",
"canonical": true,
"filterPayload": `{ "id" : {{ .id }} }`,
"filterParams": []map[string]any{
{
"name": "id",
"type": "integer",
"description": "id",
},
},
"updatePayload": `{ "$set" : { "name": {{json .name}} } }`,
"updateParams": []map[string]any{
{
"name": "name",
"type": "string",
"description": "user name",
},
},
"database": MongoDbDatabase,
},
"my-aggregate-tool": map[string]any{
"kind": "mongodb-aggregate",
"source": "my-instance",
"description": "Tool to test an aggregation.",
"authRequired": []string{},
"collection": "test_collection",
"canonical": true,
"pipelinePayload": `[{ "$match" : { "name": {{json .name}} } }, { "$project" : { "id" : 1, "_id" : 0 }}]`,
"pipelineParams": []map[string]any{
{
"name": "name",
"type": "string",
"description": "user name",
},
},
"database": MongoDbDatabase,
},
"my-read-only-aggregate-tool": map[string]any{
"kind": "mongodb-aggregate",
"source": "my-instance",
"description": "Tool to test an aggregation.",
"authRequired": []string{},
"collection": "test_collection",
"canonical": true,
"readOnly": true,
"pipelinePayload": `[{ "$match" : { "name": {{json .name}} } }, { "$out" : "target_collection" }]`,
"pipelineParams": []map[string]any{
{
"name": "name",
"type": "string",
"description": "user name",
},
},
"database": MongoDbDatabase,
},
"my-read-write-aggregate-tool": map[string]any{
"kind": "mongodb-aggregate",
"source": "my-instance",
"description": "Tool to test an aggregation.",
"authRequired": []string{},
"collection": "test_collection",
"canonical": true,
"readOnly": false,
"pipelinePayload": `[{ "$match" : { "name": {{json .name}} } }, { "$out" : "target_collection" }]`,
"pipelineParams": []map[string]any{
{
"name": "name",
"type": "string",
"description": "user name",
},
},
"database": MongoDbDatabase,
},
},
}
return toolsFile
}
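Each Run*InvokeTest helper above repeats the same HTTP round trip; a minimal sketch of that client pattern (the endpoint shape and port are taken from the tests, the helper name is hypothetical) is:

// invokeTool posts a JSON parameter object to a tool's invoke endpoint and
// returns the "result" string from the decoded response.
func invokeTool(tool string, params map[string]any) (string, error) {
body, err := json.Marshal(params)
if err != nil {
return "", err
}
resp, err := http.Post(fmt.Sprintf("http://127.0.0.1:5000/api/tool/%s/invoke", tool), "application/json", bytes.NewReader(body))
if err != nil {
return "", err
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
return "", fmt.Errorf("unexpected status %d", resp.StatusCode)
}
var out map[string]any
if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
return "", err
}
result, ok := out["result"].(string)
if !ok {
return "", fmt.Errorf("missing string result in response")
}
return result, nil
}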

View File

@@ -27,6 +27,8 @@ import (
"testing"
"time"
"github.com/neo4j/neo4j-go-driver/v5/neo4j"
"github.com/googleapis/genai-toolbox/internal/testutils"
"github.com/googleapis/genai-toolbox/tests"
)
@@ -39,6 +41,8 @@ var (
Neo4jPass = os.Getenv("NEO4J_PASS")
)
// getNeo4jVars retrieves necessary Neo4j connection details from environment variables.
// It fails the test if any required variable is not set.
func getNeo4jVars(t *testing.T) map[string]any {
switch "" {
case Neo4jDatabase:
@@ -60,6 +64,8 @@ func getNeo4jVars(t *testing.T) map[string]any {
}
}
// TestNeo4jToolEndpoints sets up an integration test server and tests the API endpoints
// for various Neo4j tools, including cypher execution and schema retrieval.
func TestNeo4jToolEndpoints(t *testing.T) {
sourceConfig := getNeo4jVars(t)
ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
@@ -67,7 +73,8 @@ func TestNeo4jToolEndpoints(t *testing.T) {
var args []string
// Write config into a file and pass it to command
// Write config into a file and pass it to the command.
// This configuration defines the data source and the tools to be tested.
toolsFile := map[string]any{
"sources": map[string]any{
"my-neo4j-instance": sourceConfig,
@@ -90,6 +97,22 @@ func TestNeo4jToolEndpoints(t *testing.T) {
"description": "A readonly cypher execution tool.",
"readOnly": true,
},
"my-schema-tool": map[string]any{
"kind": "neo4j-schema",
"source": "my-neo4j-instance",
"description": "A tool to get the Neo4j schema.",
},
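// cacheExpireMinutes below configures a custom cache expiration for the
// computed schema; presumably the default lifetime applies when omitted.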
"my-schema-tool-with-cache": map[string]any{
"kind": "neo4j-schema",
"source": "my-neo4j-instance",
"description": "A schema tool with a custom cache expiration.",
"cacheExpireMinutes": 10,
},
"my-populated-schema-tool": map[string]any{
"kind": "neo4j-schema",
"source": "my-neo4j-instance",
"description": "A tool to get the Neo4j schema from a populated DB.",
},
},
}
cmd, cleanup, err := tests.StartCmd(ctx, toolsFile, args...)
@@ -106,7 +129,7 @@ func TestNeo4jToolEndpoints(t *testing.T) {
t.Fatalf("toolbox didn't start successfully: %s", err)
}
// Test tool get endpoint
// Test tool `GET` endpoints to verify their manifests are correct.
tcs := []struct {
name string
api string
@@ -142,6 +165,28 @@ func TestNeo4jToolEndpoints(t *testing.T) {
},
},
},
{
name: "get my-schema-tool",
api: "http://127.0.0.1:5000/api/tool/my-schema-tool/",
want: map[string]any{
"my-schema-tool": map[string]any{
"description": "A tool to get the Neo4j schema.",
"parameters": []any{},
"authRequired": []any{},
},
},
},
{
name: "get my-schema-tool-with-cache",
api: "http://127.0.0.1:5000/api/tool/my-schema-tool-with-cache/",
want: map[string]any{
"my-schema-tool-with-cache": map[string]any{
"description": "A schema tool with a custom cache expiration.",
"parameters": []any{},
"authRequired": []any{},
},
},
},
}
for _, tc := range tcs {
t.Run(tc.name, func(t *testing.T) {
@@ -170,7 +215,7 @@ func TestNeo4jToolEndpoints(t *testing.T) {
})
}
// Test tool invoke endpoint
// Test tool `invoke` endpoints to verify their functionality.
invokeTcs := []struct {
name string
api string
@@ -178,6 +223,8 @@ func TestNeo4jToolEndpoints(t *testing.T) {
want string
wantStatus int
wantErrorSubstring string
prepareData func(t *testing.T)
validateFunc func(t *testing.T, body string)
}{
{
name: "invoke my-simple-cypher-tool",
@@ -200,9 +247,225 @@ func TestNeo4jToolEndpoints(t *testing.T) {
wantStatus: http.StatusBadRequest,
wantErrorSubstring: "this tool is read-only and cannot execute write queries",
},
{
name: "invoke my-schema-tool",
api: "http://127.0.0.1:5000/api/tool/my-schema-tool/invoke",
requestBody: bytes.NewBuffer([]byte(`{}`)),
wantStatus: http.StatusOK,
validateFunc: func(t *testing.T, body string) {
var result map[string]any
if err := json.Unmarshal([]byte(body), &result); err != nil {
t.Fatalf("failed to unmarshal schema result: %v", err)
}
// Check for the presence of top-level keys in the schema response.
expectedKeys := []string{"nodeLabels", "relationships", "constraints", "indexes", "databaseInfo", "statistics"}
for _, key := range expectedKeys {
if _, ok := result[key]; !ok {
t.Errorf("expected key %q not found in schema response", key)
}
}
},
},
{
name: "invoke my-schema-tool-with-cache",
api: "http://127.0.0.1:5000/api/tool/my-schema-tool-with-cache/invoke",
requestBody: bytes.NewBuffer([]byte(`{}`)),
wantStatus: http.StatusOK,
validateFunc: func(t *testing.T, body string) {
var result map[string]any
if err := json.Unmarshal([]byte(body), &result); err != nil {
t.Fatalf("failed to unmarshal schema result: %v", err)
}
// Also check the structure of the schema response for the cached tool.
expectedKeys := []string{"nodeLabels", "relationships", "constraints", "indexes", "databaseInfo", "statistics"}
for _, key := range expectedKeys {
if _, ok := result[key]; !ok {
t.Errorf("expected key %q not found in schema response", key)
}
}
},
},
{
name: "invoke my-schema-tool with populated data",
api: "http://127.0.0.1:5000/api/tool/my-populated-schema-tool/invoke",
requestBody: bytes.NewBuffer([]byte(`{}`)),
wantStatus: http.StatusOK,
prepareData: func(t *testing.T) {
ctx := context.Background()
driver, err := neo4j.NewDriverWithContext(Neo4jUri, neo4j.BasicAuth(Neo4jUser, Neo4jPass, ""))
if err != nil {
t.Fatalf("failed to create neo4j driver: %v", err)
}
// Helper to execute queries for setup and teardown.
execute := func(query string) {
session := driver.NewSession(ctx, neo4j.SessionConfig{DatabaseName: Neo4jDatabase})
defer session.Close(ctx)
// Use ExecuteWrite to ensure the query is committed before proceeding.
_, err := session.ExecuteWrite(ctx, func(tx neo4j.ManagedTransaction) (any, error) {
_, err := tx.Run(ctx, query, nil)
return nil, err
})
// Ignore errors from DROP statements so teardown doesn't fail when the
// constraint or index no longer exists; any other query failure is fatal.
if err != nil && !strings.Contains(query, "DROP") {
t.Fatalf("query failed: %s\nerror: %v", query, err)
}
}
// Teardown is registered with t.Cleanup so it runs even if the test fails;
// the driver is closed as the final cleanup step.
t.Cleanup(func() {
execute("DROP CONSTRAINT PersonNameUnique IF EXISTS")
execute("DROP INDEX MovieTitleIndex IF EXISTS")
execute("MATCH (n) DETACH DELETE n")
if err := driver.Close(ctx); err != nil {
t.Errorf("failed to close driver during cleanup: %v", err)
}
})
// Setup: Create constraints, indexes, and data.
execute("MERGE (p:Person {name: 'Alice'}) MERGE (m:Movie {title: 'The Matrix'}) MERGE (p)-[:ACTED_IN]->(m)")
execute("CREATE CONSTRAINT PersonNameUnique IF NOT EXISTS FOR (p:Person) REQUIRE p.name IS UNIQUE")
execute("CREATE INDEX MovieTitleIndex IF NOT EXISTS FOR (m:Movie) ON (m.title)")
},
validateFunc: func(t *testing.T, body string) {
// Define structs for unmarshaling the detailed schema.
type Property struct {
Name string `json:"name"`
Types []string `json:"types"`
}
type NodeLabel struct {
Name string `json:"name"`
Properties []Property `json:"properties"`
}
type Relationship struct {
Type string `json:"type"`
StartNode string `json:"startNode"`
EndNode string `json:"endNode"`
}
type Constraint struct {
Name string `json:"name"`
Label string `json:"label"`
Properties []string `json:"properties"`
}
type Index struct {
Name string `json:"name"`
Label string `json:"label"`
Properties []string `json:"properties"`
}
type Schema struct {
NodeLabels []NodeLabel `json:"nodeLabels"`
Relationships []Relationship `json:"relationships"`
Constraints []Constraint `json:"constraints"`
Indexes []Index `json:"indexes"`
}
var schema Schema
if err := json.Unmarshal([]byte(body), &schema); err != nil {
t.Fatalf("failed to unmarshal schema json: %v\nResponse body: %s", err, body)
}
// --- Validate Node Labels and Properties ---
var personLabelFound, movieLabelFound bool
for _, l := range schema.NodeLabels {
if l.Name == "Person" {
personLabelFound = true
propFound := false
for _, p := range l.Properties {
if p.Name == "name" {
propFound = true
break
}
}
if !propFound {
t.Errorf("expected Person label to have 'name' property, but it was not found")
}
}
if l.Name == "Movie" {
movieLabelFound = true
propFound := false
for _, p := range l.Properties {
if p.Name == "title" {
propFound = true
break
}
}
if !propFound {
t.Errorf("expected Movie label to have 'title' property, but it was not found")
}
}
}
if !personLabelFound {
t.Error("expected to find 'Person' in nodeLabels")
}
if !movieLabelFound {
t.Error("expected to find 'Movie' in nodeLabels")
}
// --- Validate Relationships ---
relFound := false
for _, r := range schema.Relationships {
if r.Type == "ACTED_IN" && r.StartNode == "Person" && r.EndNode == "Movie" {
relFound = true
break
}
}
if !relFound {
t.Errorf("expected to find relationship '(:Person)-[:ACTED_IN]->(:Movie)', but it was not found")
}
// --- Validate Constraints ---
constraintFound := false
for _, c := range schema.Constraints {
if c.Name == "PersonNameUnique" && c.Label == "Person" {
propFound := false
for _, p := range c.Properties {
if p == "name" {
propFound = true
break
}
}
if propFound {
constraintFound = true
break
}
}
}
if !constraintFound {
t.Errorf("expected to find constraint 'PersonNameUnique' on Person(name), but it was not found")
}
// --- Validate Indexes ---
indexFound := false
for _, i := range schema.Indexes {
if i.Name == "MovieTitleIndex" && i.Label == "Movie" {
propFound := false
for _, p := range i.Properties {
if p == "title" {
propFound = true
break
}
}
if propFound {
indexFound = true
break
}
}
}
if !indexFound {
t.Errorf("expected to find index 'MovieTitleIndex' on Movie(title), but it was not found")
}
},
},
}
for _, tc := range invokeTcs {
t.Run(tc.name, func(t *testing.T) {
// Prepare data if a preparation function is provided.
if tc.prepareData != nil {
tc.prepareData(t)
}
resp, err := http.Post(tc.api, "application/json", tc.requestBody)
if err != nil {
t.Fatalf("error when sending a request: %s", err)
@@ -224,7 +487,11 @@ func TestNeo4jToolEndpoints(t *testing.T) {
t.Fatalf("unable to find result in response body")
}
if got != tc.want {
if tc.validateFunc != nil {
// Use the custom validation function if provided.
tc.validateFunc(t, got)
} else if got != tc.want {
// Otherwise, perform a direct string comparison.
t.Fatalf("unexpected value: got %q, want %q", got, tc.want)
}
} else {