From a2097ba8ebab9d8c173190000c6644b21422737e Mon Sep 17 00:00:00 2001 From: Yuan Teoh <45984206+Yuan325@users.noreply.github.com> Date: Tue, 3 Feb 2026 12:22:03 -0800 Subject: [PATCH 01/13] docs: add index page for cloud logging admin tools (#2414) Add _index page for cloud logging admin tools for drop down. --------- Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com> --- docs/en/resources/tools/cloudloggingadmin/_index.md | 8 ++++++++ 1 file changed, 8 insertions(+) create mode 100644 docs/en/resources/tools/cloudloggingadmin/_index.md diff --git a/docs/en/resources/tools/cloudloggingadmin/_index.md b/docs/en/resources/tools/cloudloggingadmin/_index.md new file mode 100644 index 0000000000..a5b34e9a76 --- /dev/null +++ b/docs/en/resources/tools/cloudloggingadmin/_index.md @@ -0,0 +1,8 @@ +--- +title: "Cloud Logging Admin" +linkTitle: "Cloud Logging Admin" +type: docs +weight: 1 +description: > + Tools that work with Cloud Logging Admin Sources. 
+--- From 732eaed41d4c0e26c328b97576b214b3d4d2b276 Mon Sep 17 00:00:00 2001 From: Mend Renovate Date: Wed, 4 Feb 2026 02:40:22 +0000 Subject: [PATCH 02/13] chore(deps): update github actions (#2386) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit This PR contains the following updates: | Package | Type | Update | Change | |---|---|---|---| | [actions/cache](https://redirect.github.com/actions/cache) ([changelog](https://redirect.github.com/actions/cache/compare/8b402f58fbc84540c8b491a91e594a4576fec3d7..cdf6c1fa76f9f475f3d7449005a359c84ca0f306)) | action | digest | `8b402f5` → `cdf6c1f` | | [actions/checkout](https://redirect.github.com/actions/checkout) ([changelog](https://redirect.github.com/actions/checkout/compare/8e8c483db84b4bee98b60c0593521ed34d9990e8..de0fac2e4500dabe0009e67214ff5f5447ce83dd)) | action | digest | `8e8c483` → `de0fac2` | --- ### Configuration 📅 **Schedule**: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined). 🚦 **Automerge**: Disabled by config. Please merge this manually once you are satisfied. ♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox. 👻 **Immortal**: This PR will be recreated if closed unmerged. Get [config help](https://redirect.github.com/renovatebot/renovate/discussions) if that's undesired. --- - [ ] If you want to rebase/retry this PR, check this box --- This PR was generated by [Mend Renovate](https://mend.io/renovate/). View the [repository job log](https://developer.mend.io/github/googleapis/genai-toolbox). 
Co-authored-by: Averi Kitsch --- .github/workflows/deploy_dev_docs.yaml | 4 ++-- .github/workflows/deploy_previous_version_docs.yaml | 4 ++-- .github/workflows/deploy_versioned_docs.yaml | 2 +- .github/workflows/docs_preview_clean.yaml | 2 +- .github/workflows/docs_preview_deploy.yaml | 4 ++-- .github/workflows/link_checker_workflow.yaml | 4 ++-- .github/workflows/publish-mcp.yml | 2 +- 7 files changed, 11 insertions(+), 11 deletions(-) diff --git a/.github/workflows/deploy_dev_docs.yaml b/.github/workflows/deploy_dev_docs.yaml index d51207e1ad..d71f1db273 100644 --- a/.github/workflows/deploy_dev_docs.yaml +++ b/.github/workflows/deploy_dev_docs.yaml @@ -40,7 +40,7 @@ jobs: group: docs-deployment cancel-in-progress: false steps: - - uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6 + - uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6 with: fetch-depth: 0 # Fetch all history for .GitInfo and .Lastmod @@ -56,7 +56,7 @@ jobs: node-version: "22" - name: Cache dependencies - uses: actions/cache@8b402f58fbc84540c8b491a91e594a4576fec3d7 # v5 + uses: actions/cache@cdf6c1fa76f9f475f3d7449005a359c84ca0f306 # v5 with: path: ~/.npm key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }} diff --git a/.github/workflows/deploy_previous_version_docs.yaml b/.github/workflows/deploy_previous_version_docs.yaml index 5c238d18b4..1c642262e7 100644 --- a/.github/workflows/deploy_previous_version_docs.yaml +++ b/.github/workflows/deploy_previous_version_docs.yaml @@ -30,14 +30,14 @@ jobs: steps: - name: Checkout main branch (for latest templates and theme) - uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6 + uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6 with: ref: 'main' submodules: 'recursive' fetch-depth: 0 - name: Checkout old content from tag into a temporary directory - uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6 + uses: 
actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6 with: ref: ${{ github.event.inputs.version_tag }} path: 'old_version_source' # Checkout into a temp subdir diff --git a/.github/workflows/deploy_versioned_docs.yaml b/.github/workflows/deploy_versioned_docs.yaml index 42d0bd1a20..67e809935e 100644 --- a/.github/workflows/deploy_versioned_docs.yaml +++ b/.github/workflows/deploy_versioned_docs.yaml @@ -30,7 +30,7 @@ jobs: cancel-in-progress: false steps: - name: Checkout Code at Tag - uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6 + uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6 with: ref: ${{ github.event.release.tag_name }} diff --git a/.github/workflows/docs_preview_clean.yaml b/.github/workflows/docs_preview_clean.yaml index ba44bfcc8b..a3a1f07857 100644 --- a/.github/workflows/docs_preview_clean.yaml +++ b/.github/workflows/docs_preview_clean.yaml @@ -34,7 +34,7 @@ jobs: group: "preview-${{ github.event.number }}" cancel-in-progress: true steps: - - uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6 + - uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6 with: ref: versioned-gh-pages diff --git a/.github/workflows/docs_preview_deploy.yaml b/.github/workflows/docs_preview_deploy.yaml index 769b4c5dc5..fda0e4895f 100644 --- a/.github/workflows/docs_preview_deploy.yaml +++ b/.github/workflows/docs_preview_deploy.yaml @@ -49,7 +49,7 @@ jobs: group: "preview-${{ github.event.number }}" cancel-in-progress: true steps: - - uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6 + - uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6 with: # Checkout the PR's HEAD commit (supports forks). 
ref: ${{ github.event.pull_request.head.sha }} @@ -67,7 +67,7 @@ jobs: node-version: "22" - name: Cache dependencies - uses: actions/cache@8b402f58fbc84540c8b491a91e594a4576fec3d7 # v5 + uses: actions/cache@cdf6c1fa76f9f475f3d7449005a359c84ca0f306 # v5 with: path: ~/.npm key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }} diff --git a/.github/workflows/link_checker_workflow.yaml b/.github/workflows/link_checker_workflow.yaml index 189016dbc4..ae51a4b08b 100644 --- a/.github/workflows/link_checker_workflow.yaml +++ b/.github/workflows/link_checker_workflow.yaml @@ -22,10 +22,10 @@ jobs: runs-on: ubuntu-latest steps: - name: Checkout Repository - uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6 + uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6 - name: Restore lychee cache - uses: actions/cache@8b402f58fbc84540c8b491a91e594a4576fec3d7 # v5 + uses: actions/cache@cdf6c1fa76f9f475f3d7449005a359c84ca0f306 # v5 with: path: .lycheecache key: cache-lychee-${{ github.sha }} diff --git a/.github/workflows/publish-mcp.yml b/.github/workflows/publish-mcp.yml index dc84fbb759..32264b393c 100644 --- a/.github/workflows/publish-mcp.yml +++ b/.github/workflows/publish-mcp.yml @@ -29,7 +29,7 @@ jobs: steps: - name: Checkout code - uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6 + uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6 - name: Wait for image in Artifact Registry shell: bash From 80ef34621453b77bdf6a6016c354f102a17ada04 Mon Sep 17 00:00:00 2001 From: Haoyu Wang Date: Wed, 4 Feb 2026 15:51:14 -0500 Subject: [PATCH 03/13] feat(cli/skills): add support for generating agent skills from toolset (#2392) ## Description This PR introduces a new skills-generate command that enables users to generate standardized agent skills from their existing Toolbox tool configurations. 
This facilitates the integration of Toolbox tools into agentic workflows by automatically creating skill descriptions (SKILL.md) and executable wrappers. - New Subcommand: Implemented skills-generate, which automates the creation of agent skill packages including metadata and executable scripts. - Skill Generation: Added logic to generate SKILL.md files with parameter schemas and Node.js wrappers for cross-platform tool execution. - Toolset Integration: Supports selective generation of skills based on defined toolsets, including support for both local files and prebuilt configurations. - Testing: Added unit tests for the generation logic and integration tests for the CLI command. - Documentation: Created a new "how-to" guide for generating skills and updated the CLI reference documentation. --- cmd/root.go | 3 + cmd/skill_generate_test.go | 179 +++++++++++++ docs/en/how-to/generate_skill.md | 112 +++++++++ docs/en/how-to/invoke_tool.md | 5 +- docs/en/reference/cli.md | 35 ++- internal/cli/skills/command.go | 237 ++++++++++++++++++ internal/cli/skills/generator.go | 296 ++++++++++++++++++++++ internal/cli/skills/generator_test.go | 347 ++++++++++++++++++++++++++ internal/server/common_test.go | 135 ---------- internal/server/mocks.go | 159 ++++++++++++ 10 files changed, 1368 insertions(+), 140 deletions(-) create mode 100644 cmd/skill_generate_test.go create mode 100644 docs/en/how-to/generate_skill.md create mode 100644 internal/cli/skills/command.go create mode 100644 internal/cli/skills/generator.go create mode 100644 internal/cli/skills/generator_test.go create mode 100644 internal/server/mocks.go diff --git a/cmd/root.go b/cmd/root.go index d0d11e1a07..c4afedbbb2 100644 --- a/cmd/root.go +++ b/cmd/root.go @@ -35,6 +35,7 @@ import ( yaml "github.com/goccy/go-yaml" "github.com/googleapis/genai-toolbox/internal/auth" "github.com/googleapis/genai-toolbox/internal/cli/invoke" + "github.com/googleapis/genai-toolbox/internal/cli/skills" 
"github.com/googleapis/genai-toolbox/internal/embeddingmodels" "github.com/googleapis/genai-toolbox/internal/log" "github.com/googleapis/genai-toolbox/internal/prebuiltconfigs" @@ -401,6 +402,8 @@ func NewCommand(opts ...Option) *Command { // Register subcommands for tool invocation baseCmd.AddCommand(invoke.NewCommand(cmd)) + // Register subcommands for skill generation + baseCmd.AddCommand(skills.NewCommand(cmd)) return cmd } diff --git a/cmd/skill_generate_test.go b/cmd/skill_generate_test.go new file mode 100644 index 0000000000..3b91dc590b --- /dev/null +++ b/cmd/skill_generate_test.go @@ -0,0 +1,179 @@ +// Copyright 2026 Google LLC +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. 
+ +package cmd + +import ( + "context" + "os" + "path/filepath" + "strings" + "testing" + "time" +) + +func TestGenerateSkill(t *testing.T) { + // Create a temporary directory for tests + tmpDir := t.TempDir() + outputDir := filepath.Join(tmpDir, "skills") + + // Create a tools.yaml file with a sqlite tool + toolsFileContent := ` +sources: + my-sqlite: + kind: sqlite + database: test.db +tools: + hello-sqlite: + kind: sqlite-sql + source: my-sqlite + description: "hello tool" + statement: "SELECT 'hello' as greeting" +` + + toolsFilePath := filepath.Join(tmpDir, "tools.yaml") + if err := os.WriteFile(toolsFilePath, []byte(toolsFileContent), 0644); err != nil { + t.Fatalf("failed to write tools file: %v", err) + } + + args := []string{ + "skills-generate", + "--tools-file", toolsFilePath, + "--output-dir", outputDir, + "--name", "hello-sqlite", + "--description", "hello tool", + } + + ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second) + defer cancel() + + _, got, err := invokeCommandWithContext(ctx, args) + if err != nil { + t.Fatalf("command failed: %v\nOutput: %s", err, got) + } + + // Verify generated directory structure + skillPath := filepath.Join(outputDir, "hello-sqlite") + if _, err := os.Stat(skillPath); os.IsNotExist(err) { + t.Fatalf("skill directory not created: %s", skillPath) + } + + // Check SKILL.md + skillMarkdown := filepath.Join(skillPath, "SKILL.md") + content, err := os.ReadFile(skillMarkdown) + if err != nil { + t.Fatalf("failed to read SKILL.md: %v", err) + } + + expectedFrontmatter := `--- +name: hello-sqlite +description: hello tool +---` + if !strings.HasPrefix(string(content), expectedFrontmatter) { + t.Errorf("SKILL.md does not have expected frontmatter format.\nExpected prefix:\n%s\nGot:\n%s", expectedFrontmatter, string(content)) + } + + if !strings.Contains(string(content), "## Usage") { + t.Errorf("SKILL.md does not contain '## Usage' section") + } + + if !strings.Contains(string(content), "## Scripts") { + 
t.Errorf("SKILL.md does not contain '## Scripts' section") + } + + if !strings.Contains(string(content), "### hello-sqlite") { + t.Errorf("SKILL.md does not contain '### hello-sqlite' tool header") + } + + // Check script file + scriptFilename := "hello-sqlite.js" + scriptPath := filepath.Join(skillPath, "scripts", scriptFilename) + if _, err := os.Stat(scriptPath); os.IsNotExist(err) { + t.Fatalf("script file not created: %s", scriptPath) + } + + scriptContent, err := os.ReadFile(scriptPath) + if err != nil { + t.Fatalf("failed to read script file: %v", err) + } + if !strings.Contains(string(scriptContent), "hello-sqlite") { + t.Errorf("script file does not contain expected tool name") + } + + // Check assets + assetPath := filepath.Join(skillPath, "assets", "hello-sqlite.yaml") + if _, err := os.Stat(assetPath); os.IsNotExist(err) { + t.Fatalf("asset file not created: %s", assetPath) + } + assetContent, err := os.ReadFile(assetPath) + if err != nil { + t.Fatalf("failed to read asset file: %v", err) + } + if !strings.Contains(string(assetContent), "hello-sqlite") { + t.Errorf("asset file does not contain expected tool name") + } +} + +func TestGenerateSkill_NoConfig(t *testing.T) { + tmpDir := t.TempDir() + outputDir := filepath.Join(tmpDir, "skills") + + args := []string{ + "skills-generate", + "--output-dir", outputDir, + "--name", "test", + "--description", "test", + } + + _, _, err := invokeCommandWithContext(context.Background(), args) + if err == nil { + t.Fatal("expected command to fail when no configuration is provided and tools.yaml is missing") + } + + // Should not have created the directory if no config was processed + if _, err := os.Stat(outputDir); !os.IsNotExist(err) { + t.Errorf("output directory should not have been created") + } +} + +func TestGenerateSkill_MissingArguments(t *testing.T) { + tmpDir := t.TempDir() + toolsFilePath := filepath.Join(tmpDir, "tools.yaml") + if err := os.WriteFile(toolsFilePath, []byte("tools: {}"), 0644); err != nil 
{ + t.Fatalf("failed to write tools file: %v", err) + } + + tests := []struct { + name string + args []string + }{ + { + name: "missing name", + args: []string{"skills-generate", "--tools-file", toolsFilePath, "--description", "test"}, + }, + { + name: "missing description", + args: []string{"skills-generate", "--tools-file", toolsFilePath, "--name", "test"}, + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + _, got, err := invokeCommandWithContext(context.Background(), tt.args) + if err == nil { + t.Fatalf("expected command to fail due to missing arguments, but it succeeded\nOutput: %s", got) + } + }) + } +} diff --git a/docs/en/how-to/generate_skill.md b/docs/en/how-to/generate_skill.md new file mode 100644 index 0000000000..7fa731e85b --- /dev/null +++ b/docs/en/how-to/generate_skill.md @@ -0,0 +1,112 @@ +--- +title: "Generate Agent Skills" +type: docs +weight: 10 +description: > + How to generate agent skills from a toolset. +--- + +The `skills-generate` command allows you to convert a **toolset** into an **Agent Skill**. A toolset is a collection of tools, and the generated skill will contain metadata and execution scripts for all tools within that toolset, complying with the [Agent Skill specification](https://agentskills.io/specification). + +## Before you begin + +1. Make sure you have the `toolbox` executable in your PATH. +2. Make sure you have [Node.js](https://nodejs.org/) installed on your system. + +## Generating a Skill from a Toolset + +A skill package consists of a `SKILL.md` file (with required YAML frontmatter) and a set of Node.js scripts. Each tool defined in your toolset maps to a corresponding Node.js script (`.js`) that works across platforms (Linux, macOS, Windows).
+ + +### Command Usage + +The basic syntax for the command is: + +```bash +toolbox <source flags> skills-generate \ + --name <name> \ + --toolset <toolset-name> \ + --description <description> \ + --output-dir <output-dir> +``` + +- `<source flags>`: Can be `--tools-file`, `--tools-files`, `--tools-folder`, or `--prebuilt`. See the [CLI Reference](../reference/cli.md) for details. +- `--name`: Name of the generated skill. +- `--description`: Description of the generated skill. +- `--toolset`: (Optional) Name of the toolset to convert into a skill. If not provided, all tools will be included. +- `--output-dir`: (Optional) Directory to output generated skills (default: "skills"). + +{{< notice note >}} +**Note:** The `<name>` must follow the Agent Skill [naming convention](https://agentskills.io/specification): it must contain only lowercase alphanumeric characters and hyphens, cannot start or end with a hyphen, and cannot contain consecutive hyphens (e.g., `my-skill`, `data-processing`). +{{< /notice >}} + +### Example: Custom Tools File + +1. Create a `tools.yaml` file with a toolset and some tools: + + ```yaml + tools: + tool_a: + description: "First tool" + run: + command: "echo 'Tool A'" + tool_b: + description: "Second tool" + run: + command: "echo 'Tool B'" + toolsets: + my_toolset: + tools: + - tool_a + - tool_b + ``` + +2. Generate the skill: + + ```bash + toolbox --tools-file tools.yaml skills-generate \ + --name "my-skill" \ + --toolset "my_toolset" \ + --description "A skill containing multiple tools" \ + --output-dir "generated-skills" + ``` + +3. The generated skill directory structure: + + ```text + generated-skills/ + └── my-skill/ + ├── SKILL.md + ├── assets/ + │ ├── tool_a.yaml + │ └── tool_b.yaml + └── scripts/ + ├── tool_a.js + └── tool_b.js + ``` + + In this example, the skill contains two Node.js scripts (`tool_a.js` and `tool_b.js`), each mapping to a tool in the original toolset.
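The skill-name rule from the notice above is mechanical enough to check up front. Here is a minimal sketch in Go of such a check (a hypothetical helper for illustration only, not part of the Toolbox codebase; the CLI performs its own validation):

```go
package main

import (
	"fmt"
	"regexp"
)

// skillNameRe encodes the Agent Skill naming convention quoted above:
// one or more lowercase alphanumeric runs separated by single hyphens,
// which rules out leading, trailing, and consecutive hyphens.
var skillNameRe = regexp.MustCompile(`^[a-z0-9]+(-[a-z0-9]+)*$`)

// isValidSkillName reports whether name is acceptable for --name.
func isValidSkillName(name string) bool {
	return skillNameRe.MatchString(name)
}

func main() {
	for _, n := range []string{"my-skill", "data-processing", "-leading", "double--hyphen", "MixedCase"} {
		fmt.Printf("%-15s valid=%v\n", n, isValidSkillName(n))
	}
}
```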
+ +### Example: Prebuilt Configuration + +You can also generate skills from prebuilt toolsets: + +```bash +toolbox --prebuilt alloydb-postgres-admin skills-generate \ + --name "alloydb-postgres-admin" \ + --description "skill for performing administrative operations on alloydb" +``` + +## Installing the Generated Skill in Gemini CLI + +Once you have generated a skill, you can install it into the Gemini CLI using the `gemini skills install` command. + +### Installation Command + +Provide the path to the directory containing the generated skill: + +```bash +gemini skills install /path/to/generated-skills/my-skill +``` + +Alternatively, use `~/.gemini/skills` as the `--output-dir` to generate the skill directly into the Gemini CLI's skills directory. diff --git a/docs/en/how-to/invoke_tool.md b/docs/en/how-to/invoke_tool.md index 7448de13fa..4fc23d3a2c 100644 --- a/docs/en/how-to/invoke_tool.md +++ b/docs/en/how-to/invoke_tool.md @@ -20,14 +20,15 @@ The `invoke` command allows you to invoke tools defined in your configuration di 1. Make sure you have the `toolbox` binary installed or built. 2. Make sure you have a valid tool configuration file (e.g., `tools.yaml`). -## Basic Usage +### Command Usage The basic syntax for the command is: ```bash -toolbox [--tools-file | --prebuilt ] invoke [params] +toolbox <source flags> invoke <tool-name> [params] ``` +- `<source flags>`: Can be `--tools-file`, `--tools-files`, `--tools-folder`, or `--prebuilt`. See the [CLI Reference](../reference/cli.md) for details. - `<tool-name>`: The name of the tool you want to call. This must match the name defined in your `tools.yaml`. - `[params]`: (Optional) A JSON string representing the arguments for the tool. diff --git a/docs/en/reference/cli.md b/docs/en/reference/cli.md index 150171f3aa..11549c2830 100644 --- a/docs/en/reference/cli.md +++ b/docs/en/reference/cli.md @@ -32,7 +32,8 @@ description: > ## Sub Commands -### `invoke` +<details> +<summary>invoke</summary> Executes a tool directly with the provided parameters. This is useful for testing tool configurations and parameters without needing a full client setup. @@ -42,8 +43,36 @@ Executes a tool directly with the provided parameters. This is useful for testin toolbox invoke <tool-name> [params] ``` -- `<tool-name>`: The name of the tool to execute (as defined in your configuration). -- `[params]`: (Optional) A JSON string containing the parameters for the tool. +**Arguments:** + +- `tool-name`: The name of the tool to execute (as defined in your configuration). +- `params`: (Optional) A JSON string containing the parameters for the tool. + +For more detailed instructions, see [Invoke Tools via CLI](../how-to/invoke_tool.md). + +</details>
+<details> +<summary>skills-generate</summary> + +Generates a skill package from a specified toolset. Each tool in the toolset will have a corresponding Node.js execution script in the generated skill. + +**Syntax:** + +```bash +toolbox skills-generate --name <name> --description <description> --toolset <toolset-name> --output-dir <output-dir> +``` + +**Flags:** + +- `--name`: Name of the generated skill. +- `--description`: Description of the generated skill. +- `--toolset`: (Optional) Name of the toolset to convert into a skill. If not provided, all tools will be included. +- `--output-dir`: (Optional) Directory to output generated skills (default: "skills"). + +For more detailed instructions, see [Generate Agent Skills](../how-to/generate_skill.md). + +</details> ## Examples diff --git a/internal/cli/skills/command.go b/internal/cli/skills/command.go new file mode 100644 index 0000000000..d8b2d286a9 --- /dev/null +++ b/internal/cli/skills/command.go @@ -0,0 +1,237 @@ +// Copyright 2026 Google LLC +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package skills + +import ( + "context" + _ "embed" + "fmt" + "os" + "path/filepath" + "sort" + + "github.com/googleapis/genai-toolbox/internal/log" + "github.com/googleapis/genai-toolbox/internal/server" + "github.com/googleapis/genai-toolbox/internal/server/resources" + "github.com/googleapis/genai-toolbox/internal/tools" + + "github.com/spf13/cobra" +) + +// RootCommand defines the interface required by the skills-generate subcommand. +// This allows subcommands to access shared resources and functionality without +// direct coupling to the root command's implementation. +type RootCommand interface { + // Config returns a copy of the current server configuration. + Config() server.ServerConfig + + // LoadConfig loads and merges the configuration from files, folders, and prebuilts. + LoadConfig(ctx context.Context) error + + // Setup initializes the runtime environment, including logging and telemetry. + // It returns the updated context and a shutdown function to be called when finished. + Setup(ctx context.Context) (context.Context, func(context.Context) error, error) + + // Logger returns the logger instance.
+ Logger() log.Logger +} + +// Command is the command for generating skills. +type Command struct { + *cobra.Command + rootCmd RootCommand + name string + description string + toolset string + outputDir string +} + +// NewCommand creates a new Command. +func NewCommand(rootCmd RootCommand) *cobra.Command { + cmd := &Command{ + rootCmd: rootCmd, + } + cmd.Command = &cobra.Command{ + Use: "skills-generate", + Short: "Generate skills from tool configurations", + RunE: func(c *cobra.Command, args []string) error { + return cmd.run(c) + }, + } + + cmd.Flags().StringVar(&cmd.name, "name", "", "Name of the generated skill.") + cmd.Flags().StringVar(&cmd.description, "description", "", "Description of the generated skill") + cmd.Flags().StringVar(&cmd.toolset, "toolset", "", "Name of the toolset to convert into a skill. If not provided, all tools will be included.") + cmd.Flags().StringVar(&cmd.outputDir, "output-dir", "skills", "Directory to output generated skills") + + _ = cmd.MarkFlagRequired("name") + _ = cmd.MarkFlagRequired("description") + return cmd.Command +} + +func (c *Command) run(cmd *cobra.Command) error { + ctx, cancel := context.WithCancel(cmd.Context()) + defer cancel() + + ctx, shutdown, err := c.rootCmd.Setup(ctx) + if err != nil { + return err + } + defer func() { + _ = shutdown(ctx) + }() + + logger := c.rootCmd.Logger() + + // Load and merge tool configurations + if err := c.rootCmd.LoadConfig(ctx); err != nil { + return err + } + + if err := os.MkdirAll(c.outputDir, 0755); err != nil { + errMsg := fmt.Errorf("error creating output directory: %w", err) + logger.ErrorContext(ctx, errMsg.Error()) + return errMsg + } + + logger.InfoContext(ctx, fmt.Sprintf("Generating skill '%s'...", c.name)) + + // Initialize toolbox and collect tools + allTools, err := c.collectTools(ctx) + if err != nil { + errMsg := fmt.Errorf("error collecting tools: %w", err) + logger.ErrorContext(ctx, errMsg.Error()) + return errMsg + } + + if len(allTools) == 0 { + 
logger.InfoContext(ctx, "No tools found to generate.") + return nil + } + + // Generate the combined skill directory + skillPath := filepath.Join(c.outputDir, c.name) + if err := os.MkdirAll(skillPath, 0755); err != nil { + errMsg := fmt.Errorf("error creating skill directory: %w", err) + logger.ErrorContext(ctx, errMsg.Error()) + return errMsg + } + + // Generate assets directory + assetsPath := filepath.Join(skillPath, "assets") + if err := os.MkdirAll(assetsPath, 0755); err != nil { + errMsg := fmt.Errorf("error creating assets dir: %w", err) + logger.ErrorContext(ctx, errMsg.Error()) + return errMsg + } + + // Generate scripts directory + scriptsPath := filepath.Join(skillPath, "scripts") + if err := os.MkdirAll(scriptsPath, 0755); err != nil { + errMsg := fmt.Errorf("error creating scripts dir: %w", err) + logger.ErrorContext(ctx, errMsg.Error()) + return errMsg + } + + // Iterate over keys to ensure deterministic order + var toolNames []string + for name := range allTools { + toolNames = append(toolNames, name) + } + sort.Strings(toolNames) + + for _, toolName := range toolNames { + // Generate YAML config in asset directory + minimizedContent, err := generateToolConfigYAML(c.rootCmd.Config(), toolName) + if err != nil { + errMsg := fmt.Errorf("error generating filtered config for %s: %w", toolName, err) + logger.ErrorContext(ctx, errMsg.Error()) + return errMsg + } + + specificToolsFileName := fmt.Sprintf("%s.yaml", toolName) + if minimizedContent != nil { + destPath := filepath.Join(assetsPath, specificToolsFileName) + if err := os.WriteFile(destPath, minimizedContent, 0644); err != nil { + errMsg := fmt.Errorf("error writing filtered config for %s: %w", toolName, err) + logger.ErrorContext(ctx, errMsg.Error()) + return errMsg + } + } + + // Generate wrapper script in scripts directory + scriptContent, err := generateScriptContent(toolName, specificToolsFileName) + if err != nil { + errMsg := fmt.Errorf("error generating script content for %s: %w", 
toolName, err) + logger.ErrorContext(ctx, errMsg.Error()) + return errMsg + } + + scriptFilename := filepath.Join(scriptsPath, fmt.Sprintf("%s.js", toolName)) + if err := os.WriteFile(scriptFilename, []byte(scriptContent), 0755); err != nil { + errMsg := fmt.Errorf("error writing script %s: %w", scriptFilename, err) + logger.ErrorContext(ctx, errMsg.Error()) + return errMsg + } + } + + // Generate SKILL.md + skillContent, err := generateSkillMarkdown(c.name, c.description, allTools) + if err != nil { + errMsg := fmt.Errorf("error generating SKILL.md content: %w", err) + logger.ErrorContext(ctx, errMsg.Error()) + return errMsg + } + skillMdPath := filepath.Join(skillPath, "SKILL.md") + if err := os.WriteFile(skillMdPath, []byte(skillContent), 0644); err != nil { + errMsg := fmt.Errorf("error writing SKILL.md: %w", err) + logger.ErrorContext(ctx, errMsg.Error()) + return errMsg + } + + logger.InfoContext(ctx, fmt.Sprintf("Successfully generated skill '%s' with %d tools.", c.name, len(allTools))) + + return nil +} + +func (c *Command) collectTools(ctx context.Context) (map[string]tools.Tool, error) { + // Initialize Resources + sourcesMap, authServicesMap, embeddingModelsMap, toolsMap, toolsetsMap, promptsMap, promptsetsMap, err := server.InitializeConfigs(ctx, c.rootCmd.Config()) + if err != nil { + return nil, fmt.Errorf("failed to initialize resources: %w", err) + } + + resourceMgr := resources.NewResourceManager(sourcesMap, authServicesMap, embeddingModelsMap, toolsMap, toolsetsMap, promptsMap, promptsetsMap) + + result := make(map[string]tools.Tool) + + if c.toolset == "" { + return toolsMap, nil + } + + ts, ok := resourceMgr.GetToolset(c.toolset) + if !ok { + return nil, fmt.Errorf("toolset %q not found", c.toolset) + } + + for _, t := range ts.Tools { + if t != nil { + tool := *t + result[tool.McpManifest().Name] = tool + } + } + + return result, nil +} diff --git a/internal/cli/skills/generator.go b/internal/cli/skills/generator.go new file mode 100644 index 
0000000000..a9e20fc9e3 --- /dev/null +++ b/internal/cli/skills/generator.go @@ -0,0 +1,296 @@ +// Copyright 2026 Google LLC +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package skills + +import ( + "bytes" + "encoding/json" + "fmt" + "sort" + "strings" + "text/template" + + "github.com/goccy/go-yaml" + "github.com/googleapis/genai-toolbox/internal/server" + "github.com/googleapis/genai-toolbox/internal/sources" + "github.com/googleapis/genai-toolbox/internal/tools" + "github.com/googleapis/genai-toolbox/internal/util/parameters" +) + +const skillTemplate = `--- +name: {{.SkillName}} +description: {{.SkillDescription}} +--- + +## Usage + +All scripts can be executed using Node.js. Replace ` + "`" + `` + "`" + ` and ` + "`" + `` + "`" + ` with actual values. + +**Bash:** +` + "`" + `node scripts/.js '{"": ""}'` + "`" + ` + +**PowerShell:** +` + "`" + `node scripts/.js '{\"\": \"\"}'` + "`" + ` + +## Scripts + +{{range .Tools}} +### {{.Name}} + +{{.Description}} + +{{.ParametersSchema}} + +--- +{{end}} +` + +type toolTemplateData struct { + Name string + Description string + ParametersSchema string +} + +type skillTemplateData struct { + SkillName string + SkillDescription string + Tools []toolTemplateData +} + +// generateSkillMarkdown generates the content of the SKILL.md file. +// It includes usage instructions and a reference section for each tool in the skill, +// detailing its description and parameters. 
+func generateSkillMarkdown(skillName, skillDescription string, toolsMap map[string]tools.Tool) (string, error) { + var toolsData []toolTemplateData + + // Order tools based on name + var toolNames []string + for name := range toolsMap { + toolNames = append(toolNames, name) + } + sort.Strings(toolNames) + + for _, name := range toolNames { + tool := toolsMap[name] + manifest := tool.Manifest() + + parametersSchema, err := formatParameters(manifest.Parameters) + if err != nil { + return "", err + } + + toolsData = append(toolsData, toolTemplateData{ + Name: name, + Description: manifest.Description, + ParametersSchema: parametersSchema, + }) + } + + data := skillTemplateData{ + SkillName: skillName, + SkillDescription: skillDescription, + Tools: toolsData, + } + + tmpl, err := template.New("markdown").Parse(skillTemplate) + if err != nil { + return "", fmt.Errorf("error parsing markdown template: %w", err) + } + + var buf strings.Builder + if err := tmpl.Execute(&buf, data); err != nil { + return "", fmt.Errorf("error executing markdown template: %w", err) + } + + return buf.String(), nil +} + +const nodeScriptTemplate = `#!/usr/bin/env node + +const { spawn, execSync } = require('child_process'); +const path = require('path'); +const fs = require('fs'); + +const toolName = "{{.Name}}"; +const toolsFileName = "{{.ToolsFileName}}"; + +function getToolboxPath() { + try { + const checkCommand = process.platform === 'win32' ? 
'where toolbox' : 'which toolbox'; + const globalPath = execSync(checkCommand, { stdio: 'pipe', encoding: 'utf-8' }).trim(); + if (globalPath) { + return globalPath.split('\n')[0].trim(); + } + } catch (e) { + // Ignore error; + } + const localPath = path.resolve(__dirname, '../../../toolbox'); + if (fs.existsSync(localPath)) { + return localPath; + } + throw new Error("Toolbox binary not found"); +} + +let toolboxBinary; +try { + toolboxBinary = getToolboxPath(); +} catch (err) { + console.error("Error:", err.message); + process.exit(1); +} + +let configArgs = []; +if (toolsFileName) { + configArgs.push("--tools-file", path.join(__dirname, "..", "assets", toolsFileName)); +} + +const args = process.argv.slice(2); +const toolboxArgs = [...configArgs, "invoke", toolName, ...args]; + +const child = spawn(toolboxBinary, toolboxArgs, { stdio: 'inherit' }); + +child.on('close', (code) => { + process.exit(code); +}); + +child.on('error', (err) => { + console.error("Error executing toolbox:", err); + process.exit(1); +}); +` + +type scriptData struct { + Name string + ToolsFileName string +} + +// generateScriptContent creates the content for a Node.js wrapper script. +// This script invokes the toolbox CLI with the appropriate configuration +// (using a generated tools file) and arguments to execute the specific tool. +func generateScriptContent(name string, toolsFileName string) (string, error) { + data := scriptData{ + Name: name, + ToolsFileName: toolsFileName, + } + + tmpl, err := template.New("script").Parse(nodeScriptTemplate) + if err != nil { + return "", fmt.Errorf("error parsing script template: %w", err) + } + + var buf strings.Builder + if err := tmpl.Execute(&buf, data); err != nil { + return "", fmt.Errorf("error executing script template: %w", err) + } + + return buf.String(), nil +} + +// formatParameters converts a list of parameter manifests into a formatted JSON schema string. 
+// This schema is used in the skill documentation to describe the input parameters for a tool. +func formatParameters(params []parameters.ParameterManifest) (string, error) { + if len(params) == 0 { + return "", nil + } + + properties := make(map[string]interface{}) + var required []string + + for _, p := range params { + paramMap := map[string]interface{}{ + "type": p.Type, + "description": p.Description, + } + if p.Default != nil { + paramMap["default"] = p.Default + } + properties[p.Name] = paramMap + if p.Required { + required = append(required, p.Name) + } + } + + schema := map[string]interface{}{ + "type": "object", + "properties": properties, + } + if len(required) > 0 { + schema["required"] = required + } + + schemaJSON, err := json.MarshalIndent(schema, "", " ") + if err != nil { + return "", fmt.Errorf("error generating parameters schema: %w", err) + } + + return fmt.Sprintf("#### Parameters\n\n```json\n%s\n```", string(schemaJSON)), nil +} + +// generateToolConfigYAML generates the YAML configuration for a single tool and its dependency (source). +// It extracts the relevant tool and source configurations from the server config and formats them +// into a YAML document suitable for inclusion in the skill's assets. 
+func generateToolConfigYAML(cfg server.ServerConfig, toolName string) ([]byte, error) { + toolCfg, ok := cfg.ToolConfigs[toolName] + if !ok { + return nil, fmt.Errorf("error finding tool config: %s", toolName) + } + + var buf bytes.Buffer + encoder := yaml.NewEncoder(&buf) + + // Process Tool Config + toolWrapper := struct { + Kind string `yaml:"kind"` + Config tools.ToolConfig `yaml:",inline"` + }{ + Kind: "tools", + Config: toolCfg, + } + + if err := encoder.Encode(toolWrapper); err != nil { + return nil, fmt.Errorf("error encoding tool config: %w", err) + } + + // Process Source Config + var toolMap map[string]interface{} + b, err := yaml.Marshal(toolCfg) + if err != nil { + return nil, fmt.Errorf("error marshaling tool config: %w", err) + } + if err := yaml.Unmarshal(b, &toolMap); err != nil { + return nil, fmt.Errorf("error unmarshaling tool config map: %w", err) + } + + if sourceName, ok := toolMap["source"].(string); ok && sourceName != "" { + sourceCfg, ok := cfg.SourceConfigs[sourceName] + if !ok { + return nil, fmt.Errorf("error finding source config: %s", sourceName) + } + + sourceWrapper := struct { + Kind string `yaml:"kind"` + Config sources.SourceConfig `yaml:",inline"` + }{ + Kind: "sources", + Config: sourceCfg, + } + + if err := encoder.Encode(sourceWrapper); err != nil { + return nil, fmt.Errorf("error encoding source config: %w", err) + } + } + + return buf.Bytes(), nil +} diff --git a/internal/cli/skills/generator_test.go b/internal/cli/skills/generator_test.go new file mode 100644 index 0000000000..bd3a462180 --- /dev/null +++ b/internal/cli/skills/generator_test.go @@ -0,0 +1,347 @@ +// Copyright 2026 Google LLC +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. 
+// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package skills + +import ( + "context" + "strings" + "testing" + + "github.com/googleapis/genai-toolbox/internal/server" + "github.com/googleapis/genai-toolbox/internal/sources" + "github.com/googleapis/genai-toolbox/internal/tools" + "github.com/googleapis/genai-toolbox/internal/util/parameters" + "go.opentelemetry.io/otel/trace" +) + +type MockToolConfig struct { + Name string `yaml:"name"` + Type string `yaml:"type"` + Source string `yaml:"source"` + Other string `yaml:"other"` + Parameters parameters.Parameters `yaml:"parameters"` +} + +func (m MockToolConfig) ToolConfigType() string { + return m.Type +} + +func (m MockToolConfig) Initialize(map[string]sources.Source) (tools.Tool, error) { + return nil, nil +} + +type MockSourceConfig struct { + Name string `yaml:"name"` + Type string `yaml:"type"` + ConnectionString string `yaml:"connection_string"` +} + +func (m MockSourceConfig) SourceConfigType() string { + return m.Type +} + +func (m MockSourceConfig) Initialize(context.Context, trace.Tracer) (sources.Source, error) { + return nil, nil +} + +func TestFormatParameters(t *testing.T) { + tests := []struct { + name string + params []parameters.ParameterManifest + wantContains []string + wantErr bool + }{ + { + name: "empty parameters", + params: []parameters.ParameterManifest{}, + wantContains: []string{""}, + }, + { + name: "single required string parameter", + params: []parameters.ParameterManifest{ + { + Name: "param1", + Description: "A test parameter", + Type: "string", + Required: true, + }, + }, + wantContains: []string{ + "## 
Parameters", + "```json", + `"type": "object"`, + `"properties": {`, + `"param1": {`, + `"type": "string"`, + `"description": "A test parameter"`, + `"required": [`, + `"param1"`, + }, + }, + { + name: "mixed parameters with defaults", + params: []parameters.ParameterManifest{ + { + Name: "param1", + Description: "Param 1", + Type: "string", + Required: true, + }, + { + Name: "param2", + Description: "Param 2", + Type: "integer", + Default: 42, + Required: false, + }, + }, + wantContains: []string{ + `"param1": {`, + `"param2": {`, + `"default": 42`, + `"required": [`, + `"param1"`, + }, + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + got, err := formatParameters(tt.params) + if (err != nil) != tt.wantErr { + t.Errorf("formatParameters() error = %v, wantErr %v", err, tt.wantErr) + return + } + if tt.wantErr { + return + } + + if len(tt.params) == 0 { + if got != "" { + t.Errorf("formatParameters() = %v, want empty string", got) + } + return + } + + for _, want := range tt.wantContains { + if !strings.Contains(got, want) { + t.Errorf("formatParameters() result missing expected string: %s\nGot:\n%s", want, got) + } + } + }) + } +} + +func TestGenerateSkillMarkdown(t *testing.T) { + toolsMap := map[string]tools.Tool{ + "tool1": server.MockTool{ + Description: "First tool", + Params: []parameters.Parameter{ + parameters.NewStringParameter("p1", "d1"), + }, + }, + } + + got, err := generateSkillMarkdown("MySkill", "My Description", toolsMap) + if err != nil { + t.Fatalf("generateSkillMarkdown() error = %v", err) + } + + expectedSubstrings := []string{ + "name: MySkill", + "description: My Description", + "## Usage", + "All scripts can be executed using Node.js", + "**Bash:**", + "`node scripts/<tool_name>.js '{\"<param>\": \"<value>\"}'`", + "**PowerShell:**", + "`node scripts/<tool_name>.js '{\"<param>\": \"<value>\"}'`", + "## Scripts", + "### tool1", + "First tool", + "## Parameters", + } + + for _, s := range expectedSubstrings { + if !strings.Contains(got, s) { 
t.Errorf("generateSkillMarkdown() missing substring %q", s) + } + } +} + +func TestGenerateScriptContent(t *testing.T) { + tests := []struct { + name string + toolName string + toolsFileName string + wantContains []string + }{ + { + name: "basic script", + toolName: "test-tool", + toolsFileName: "", + wantContains: []string{ + `const toolName = "test-tool";`, + `const toolsFileName = "";`, + `const toolboxArgs = [...configArgs, "invoke", toolName, ...args];`, + }, + }, + { + name: "script with tools file", + toolName: "complex-tool", + toolsFileName: "tools.yaml", + wantContains: []string{ + `const toolName = "complex-tool";`, + `const toolsFileName = "tools.yaml";`, + `configArgs.push("--tools-file", path.join(__dirname, "..", "assets", toolsFileName));`, + }, + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + got, err := generateScriptContent(tt.toolName, tt.toolsFileName) + if err != nil { + t.Fatalf("generateScriptContent() error = %v", err) + } + + for _, s := range tt.wantContains { + if !strings.Contains(got, s) { + t.Errorf("generateScriptContent() missing substring %q\nGot:\n%s", s, got) + } + } + }) + } +} + +func TestGenerateToolConfigYAML(t *testing.T) { + cfg := server.ServerConfig{ + ToolConfigs: server.ToolConfigs{ + "tool1": MockToolConfig{ + Name: "tool1", + Type: "custom-tool", + Source: "src1", + Other: "foo", + }, + "toolNoSource": MockToolConfig{ + Name: "toolNoSource", + Type: "http", + }, + "toolWithParams": MockToolConfig{ + Name: "toolWithParams", + Type: "custom-tool", + Parameters: []parameters.Parameter{ + parameters.NewStringParameter("param1", "desc1"), + }, + }, + "toolWithMissingSource": MockToolConfig{ + Name: "toolWithMissingSource", + Type: "custom-tool", + Source: "missing-src", + }, + }, + SourceConfigs: server.SourceConfigs{ + "src1": MockSourceConfig{ + Name: "src1", + Type: "postgres", + ConnectionString: "conn1", + }, + }, + } + + tests := []struct { + name string + toolName string + wantContains 
[]string + wantErr bool + wantNil bool + }{ + { + name: "tool with source", + toolName: "tool1", + wantContains: []string{ + "kind: tools", + "name: tool1", + "type: custom-tool", + "source: src1", + "other: foo", + "---", + "kind: sources", + "name: src1", + "type: postgres", + "connection_string: conn1", + }, + }, + { + name: "tool without source", + toolName: "toolNoSource", + wantContains: []string{ + "kind: tools", + "name: toolNoSource", + "type: http", + }, + }, + { + name: "tool with parameters", + toolName: "toolWithParams", + wantContains: []string{ + "kind: tools", + "name: toolWithParams", + "type: custom-tool", + "parameters:", + "- name: param1", + "type: string", + "description: desc1", + }, + }, + { + name: "non-existent tool", + toolName: "missing-tool", + wantErr: true, + }, + { + name: "tool with missing source config", + toolName: "toolWithMissingSource", + wantErr: true, + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + gotBytes, err := generateToolConfigYAML(cfg, tt.toolName) + if (err != nil) != tt.wantErr { + t.Errorf("generateToolConfigYAML() error = %v, wantErr %v", err, tt.wantErr) + return + } + if tt.wantErr { + return + } + + if tt.wantNil { + if gotBytes != nil { + t.Errorf("generateToolConfigYAML() expected nil, got %s", string(gotBytes)) + } + return + } + + got := string(gotBytes) + for _, want := range tt.wantContains { + if !strings.Contains(got, want) { + t.Errorf("generateToolConfigYAML() result missing expected string: %q\nGot:\n%s", want, got) + } + } + }) + } +} diff --git a/internal/server/common_test.go b/internal/server/common_test.go index 54109ac467..8944cfba20 100644 --- a/internal/server/common_test.go +++ b/internal/server/common_test.go @@ -24,7 +24,6 @@ import ( "testing" "github.com/go-chi/chi/v5" - "github.com/googleapis/genai-toolbox/internal/embeddingmodels" "github.com/googleapis/genai-toolbox/internal/log" "github.com/googleapis/genai-toolbox/internal/prompts" 
"github.com/googleapis/genai-toolbox/internal/server/resources" @@ -41,140 +40,6 @@ var ( _ prompts.Prompt = MockPrompt{} ) -// MockTool is used to mock tools in tests -type MockTool struct { - Name string - Description string - Params []parameters.Parameter - manifest tools.Manifest - unauthorized bool - requiresClientAuthrorization bool -} - -func (t MockTool) Invoke(context.Context, tools.SourceProvider, parameters.ParamValues, tools.AccessToken) (any, error) { - mock := []any{t.Name} - return mock, nil -} - -func (t MockTool) ToConfig() tools.ToolConfig { - return nil -} - -// claims is a map of user info decoded from an auth token -func (t MockTool) ParseParams(data map[string]any, claimsMap map[string]map[string]any) (parameters.ParamValues, error) { - return parameters.ParseParams(t.Params, data, claimsMap) -} - -func (t MockTool) EmbedParams(ctx context.Context, paramValues parameters.ParamValues, embeddingModelsMap map[string]embeddingmodels.EmbeddingModel) (parameters.ParamValues, error) { - return parameters.EmbedParams(ctx, t.Params, paramValues, embeddingModelsMap, nil) -} - -func (t MockTool) Manifest() tools.Manifest { - pMs := make([]parameters.ParameterManifest, 0, len(t.Params)) - for _, p := range t.Params { - pMs = append(pMs, p.Manifest()) - } - return tools.Manifest{Description: t.Description, Parameters: pMs} -} - -func (t MockTool) Authorized(verifiedAuthServices []string) bool { - // defaulted to true - return !t.unauthorized -} - -func (t MockTool) RequiresClientAuthorization(tools.SourceProvider) (bool, error) { - // defaulted to false - return t.requiresClientAuthrorization, nil -} - -func (t MockTool) GetParameters() parameters.Parameters { - return t.Params -} - -func (t MockTool) McpManifest() tools.McpManifest { - properties := make(map[string]parameters.ParameterMcpManifest) - required := make([]string, 0) - authParams := make(map[string][]string) - - for _, p := range t.Params { - name := p.GetName() - paramManifest, authParamList 
:= p.McpManifest() - properties[name] = paramManifest - required = append(required, name) - - if len(authParamList) > 0 { - authParams[name] = authParamList - } - } - - toolsSchema := parameters.McpToolsSchema{ - Type: "object", - Properties: properties, - Required: required, - } - - mcpManifest := tools.McpManifest{ - Name: t.Name, - Description: t.Description, - InputSchema: toolsSchema, - } - - if len(authParams) > 0 { - mcpManifest.Metadata = map[string]any{ - "toolbox/authParams": authParams, - } - } - - return mcpManifest -} - -func (t MockTool) GetAuthTokenHeaderName(tools.SourceProvider) (string, error) { - return "Authorization", nil -} - -// MockPrompt is used to mock prompts in tests -type MockPrompt struct { - Name string - Description string - Args prompts.Arguments -} - -func (p MockPrompt) SubstituteParams(vals parameters.ParamValues) (any, error) { - return []prompts.Message{ - { - Role: "user", - Content: fmt.Sprintf("substituted %s", p.Name), - }, - }, nil -} - -func (p MockPrompt) ParseArgs(data map[string]any, claimsMap map[string]map[string]any) (parameters.ParamValues, error) { - var params parameters.Parameters - for _, arg := range p.Args { - params = append(params, arg.Parameter) - } - return parameters.ParseParams(params, data, claimsMap) -} - -func (p MockPrompt) Manifest() prompts.Manifest { - var argManifests []parameters.ParameterManifest - for _, arg := range p.Args { - argManifests = append(argManifests, arg.Manifest()) - } - return prompts.Manifest{ - Description: p.Description, - Arguments: argManifests, - } -} - -func (p MockPrompt) McpManifest() prompts.McpManifest { - return prompts.GetMcpManifest(p.Name, p.Description, p.Args) -} - -func (p MockPrompt) ToConfig() prompts.PromptConfig { - return nil -} - var tool1 = MockTool{ Name: "no_params", Params: []parameters.Parameter{}, diff --git a/internal/server/mocks.go b/internal/server/mocks.go new file mode 100644 index 0000000000..60aa4f6212 --- /dev/null +++ 
b/internal/server/mocks.go @@ -0,0 +1,159 @@ +// Copyright 2026 Google LLC +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package server + +import ( + "context" + "fmt" + + "github.com/googleapis/genai-toolbox/internal/embeddingmodels" + "github.com/googleapis/genai-toolbox/internal/prompts" + "github.com/googleapis/genai-toolbox/internal/tools" + "github.com/googleapis/genai-toolbox/internal/util/parameters" +) + +// MockTool is used to mock tools in tests +type MockTool struct { + Name string + Description string + Params []parameters.Parameter + manifest tools.Manifest + unauthorized bool + requiresClientAuthrorization bool +} + +func (t MockTool) Invoke(context.Context, tools.SourceProvider, parameters.ParamValues, tools.AccessToken) (any, error) { + mock := []any{t.Name} + return mock, nil +} + +func (t MockTool) ToConfig() tools.ToolConfig { + return nil +} + +// claims is a map of user info decoded from an auth token +func (t MockTool) ParseParams(data map[string]any, claimsMap map[string]map[string]any) (parameters.ParamValues, error) { + return parameters.ParseParams(t.Params, data, claimsMap) +} + +func (t MockTool) EmbedParams(ctx context.Context, paramValues parameters.ParamValues, embeddingModelsMap map[string]embeddingmodels.EmbeddingModel) (parameters.ParamValues, error) { + return parameters.EmbedParams(ctx, t.Params, paramValues, embeddingModelsMap, nil) +} + +func (t MockTool) Manifest() tools.Manifest { + pMs := 
make([]parameters.ParameterManifest, 0, len(t.Params)) + for _, p := range t.Params { + pMs = append(pMs, p.Manifest()) + } + return tools.Manifest{Description: t.Description, Parameters: pMs} +} + +func (t MockTool) Authorized(verifiedAuthServices []string) bool { + // defaulted to true + return !t.unauthorized +} + +func (t MockTool) RequiresClientAuthorization(tools.SourceProvider) (bool, error) { + // defaulted to false + return t.requiresClientAuthrorization, nil +} + +func (t MockTool) GetParameters() parameters.Parameters { + return t.Params +} + +func (t MockTool) McpManifest() tools.McpManifest { + properties := make(map[string]parameters.ParameterMcpManifest) + required := make([]string, 0) + authParams := make(map[string][]string) + + for _, p := range t.Params { + name := p.GetName() + paramManifest, authParamList := p.McpManifest() + properties[name] = paramManifest + required = append(required, name) + + if len(authParamList) > 0 { + authParams[name] = authParamList + } + } + + toolsSchema := parameters.McpToolsSchema{ + Type: "object", + Properties: properties, + Required: required, + } + + mcpManifest := tools.McpManifest{ + Name: t.Name, + Description: t.Description, + InputSchema: toolsSchema, + } + + if len(authParams) > 0 { + mcpManifest.Metadata = map[string]any{ + "toolbox/authParams": authParams, + } + } + + return mcpManifest +} + +func (t MockTool) GetAuthTokenHeaderName(tools.SourceProvider) (string, error) { + return "Authorization", nil +} + +// MockPrompt is used to mock prompts in tests +type MockPrompt struct { + Name string + Description string + Args prompts.Arguments +} + +func (p MockPrompt) SubstituteParams(vals parameters.ParamValues) (any, error) { + return []prompts.Message{ + { + Role: "user", + Content: fmt.Sprintf("substituted %s", p.Name), + }, + }, nil +} + +func (p MockPrompt) ParseArgs(data map[string]any, claimsMap map[string]map[string]any) (parameters.ParamValues, error) { + var params parameters.Parameters + for _, 
arg := range p.Args { + params = append(params, arg.Parameter) + } + return parameters.ParseParams(params, data, claimsMap) +} + +func (p MockPrompt) Manifest() prompts.Manifest { + var argManifests []parameters.ParameterManifest + for _, arg := range p.Args { + argManifests = append(argManifests, arg.Manifest()) + } + return prompts.Manifest{ + Description: p.Description, + Arguments: argManifests, + } +} + +func (p MockPrompt) McpManifest() prompts.McpManifest { + return prompts.GetMcpManifest(p.Name, p.Description, p.Args) +} + +func (p MockPrompt) ToConfig() prompts.PromptConfig { + return nil +} From 5e4f2a131f85613d4d5e82f75df1e1468948e1ec Mon Sep 17 00:00:00 2001 From: Averi Kitsch Date: Fri, 6 Feb 2026 10:01:58 -0800 Subject: [PATCH 04/13] docs: add managed connection pooling to docs (#2425) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit ## Description > Should include a concise description of the changes (bug or feature), its > impact, along with a summary of the solution ## PR Checklist > Thank you for opening a Pull Request! Before submitting your PR, there are a > few things you can do to make sure it goes smoothly: - [ ] Make sure you reviewed [CONTRIBUTING.md](https://github.com/googleapis/genai-toolbox/blob/main/CONTRIBUTING.md) - [ ] Make sure to open an issue as a [bug/issue](https://github.com/googleapis/genai-toolbox/issues/new/choose) before writing your code!
That way we can discuss the change, evaluate designs, and agree on the general idea - [ ] Ensure the tests and linter pass - [ ] Code coverage does not decrease (if any source code was changed) - [ ] Appropriate docs were updated (if necessary) - [ ] Make sure to add `!` if this involve a breaking change 🛠️ Fixes # --- docs/en/resources/sources/alloydb-pg.md | 9 +++++++++ docs/en/resources/sources/cloud-sql-pg.md | 9 +++++++++ 2 files changed, 18 insertions(+) diff --git a/docs/en/resources/sources/alloydb-pg.md b/docs/en/resources/sources/alloydb-pg.md index cf6d848ae6..b7fe99c759 100644 --- a/docs/en/resources/sources/alloydb-pg.md +++ b/docs/en/resources/sources/alloydb-pg.md @@ -194,6 +194,15 @@ Use environment variable replacement with the format ${ENV_NAME} instead of hardcoding your secrets into the configuration file. {{< /notice >}} +### Managed Connection Pooling + +Toolbox automatically supports [Managed Connection Pooling][alloydb-mcp]. If your AlloyDB instance has Managed Connection Pooling enabled, the connection will immediately benefit from increased throughput and reduced latency. + +The interface is identical, so there's no additional configuration required on the client. For more information on configuring your instance, see the [AlloyDB Managed Connection Pooling documentation][alloydb-mcp-docs]. + +[alloydb-mcp]: https://cloud.google.com/blog/products/databases/alloydb-managed-connection-pooling +[alloydb-mcp-docs]: https://cloud.google.com/alloydb/docs/configure-managed-connection-pooling + ## Reference | **field** | **type** | **required** | **description** | diff --git a/docs/en/resources/sources/cloud-sql-pg.md b/docs/en/resources/sources/cloud-sql-pg.md index 3b5f54781d..182b54e914 100644 --- a/docs/en/resources/sources/cloud-sql-pg.md +++ b/docs/en/resources/sources/cloud-sql-pg.md @@ -195,6 +195,15 @@ Use environment variable replacement with the format ${ENV_NAME} instead of hardcoding your secrets into the configuration file. 
{{< /notice >}} +### Managed Connection Pooling + +Toolbox automatically supports [Managed Connection Pooling][csql-mcp]. If your Cloud SQL for PostgreSQL instance has Managed Connection Pooling enabled, the connection will immediately benefit from increased throughput and reduced latency. + +The interface is identical, so there's no additional configuration required on the client. For more information on configuring your instance, see the [Cloud SQL Managed Connection Pooling documentation][csql-mcp-docs]. + +[csql-mcp]: https://docs.cloud.google.com/sql/docs/postgres/managed-connection-pooling +[csql-mcp-docs]: https://docs.cloud.google.com/sql/docs/postgres/configure-mcp + ## Reference | **field** | **type** | **required** | **description** | From a15a12873f936b0102aeb9500cc3bcd71bb38c34 Mon Sep 17 00:00:00 2001 From: "Dr. Strangelove" Date: Fri, 6 Feb 2026 17:17:45 -0500 Subject: [PATCH 05/13] feat(tools/looker): Validate project tool (#2430) ## Description Validate changes to a LookML project and report LookML errors. ## PR Checklist > Thank you for opening a Pull Request! Before submitting your PR, there are a > few things you can do to make sure it goes smoothly: - [x] Make sure you reviewed [CONTRIBUTING.md](https://github.com/googleapis/genai-toolbox/blob/main/CONTRIBUTING.md) - [x] Make sure to open an issue as a [bug/issue](https://github.com/googleapis/genai-toolbox/issues/new/choose) before writing your code! 
That way we can discuss the change, evaluate designs, and agree on the general idea - [x] Ensure the tests and linter pass - [x] Code coverage does not decrease (if any source code was changed) - [x] Appropriate docs were updated (if necessary) - [x] Make sure to add `!` if this involve a breaking change --- .lycheeignore | 3 +- cmd/root.go | 1 + cmd/root_test.go | 2 +- docs/en/reference/prebuilt-tools.md | 1 + .../tools/looker/looker-validate-project.md | 47 +++++ internal/prebuiltconfigs/tools/looker.yaml | 16 ++ .../lookervalidateproject.go | 177 ++++++++++++++++++ .../lookervalidateproject_test.go | 109 +++++++++++ tests/looker/looker_integration_test.go | 25 +++ 9 files changed, 379 insertions(+), 2 deletions(-) create mode 100644 docs/en/resources/tools/looker/looker-validate-project.md create mode 100644 internal/tools/looker/lookervalidateproject/lookervalidateproject.go create mode 100644 internal/tools/looker/lookervalidateproject/lookervalidateproject_test.go diff --git a/.lycheeignore b/.lycheeignore index baec3c4449..9ce08c5e6a 100644 --- a/.lycheeignore +++ b/.lycheeignore @@ -37,8 +37,9 @@ https://dev.mysql.com/doc/refman/8.4/en/sql-prepared-statements.html https://dev.mysql.com/doc/refman/8.4/en/user-names.html # npmjs links can occasionally trigger rate limiting during high-frequency CI builds -https://www.npmjs.com/package/@toolbox-sdk/core https://www.npmjs.com/package/@toolbox-sdk/adk +https://www.npmjs.com/package/@toolbox-sdk/core +https://www.npmjs.com/package/@toolbox-sdk/server https://www.oceanbase.com/ # Ignore social media and blog profiles to reduce external request overhead diff --git a/cmd/root.go b/cmd/root.go index c4afedbbb2..5e59997211 100644 --- a/cmd/root.go +++ b/cmd/root.go @@ -163,6 +163,7 @@ import ( _ "github.com/googleapis/genai-toolbox/internal/tools/looker/lookerrundashboard" _ "github.com/googleapis/genai-toolbox/internal/tools/looker/lookerrunlook" _ 
"github.com/googleapis/genai-toolbox/internal/tools/looker/lookerupdateprojectfile" + _ "github.com/googleapis/genai-toolbox/internal/tools/looker/lookervalidateproject" _ "github.com/googleapis/genai-toolbox/internal/tools/mindsdb/mindsdbexecutesql" _ "github.com/googleapis/genai-toolbox/internal/tools/mindsdb/mindsdbsql" _ "github.com/googleapis/genai-toolbox/internal/tools/mongodb/mongodbaggregate" diff --git a/cmd/root_test.go b/cmd/root_test.go index 3c55e83d93..f26bd1706a 100644 --- a/cmd/root_test.go +++ b/cmd/root_test.go @@ -2370,7 +2370,7 @@ func TestPrebuiltTools(t *testing.T) { wantToolset: server.ToolsetConfigs{ "looker_tools": tools.ToolsetConfig{ Name: "looker_tools", - ToolNames: []string{"get_models", "get_explores", "get_dimensions", "get_measures", "get_filters", "get_parameters", "query", "query_sql", "query_url", "get_looks", "run_look", "make_look", "get_dashboards", "run_dashboard", "make_dashboard", "add_dashboard_element", "add_dashboard_filter", "generate_embed_url", "health_pulse", "health_analyze", "health_vacuum", "dev_mode", "get_projects", "get_project_files", "get_project_file", "create_project_file", "update_project_file", "delete_project_file", "get_connections", "get_connection_schemas", "get_connection_databases", "get_connection_tables", "get_connection_table_columns"}, + ToolNames: []string{"get_models", "get_explores", "get_dimensions", "get_measures", "get_filters", "get_parameters", "query", "query_sql", "query_url", "get_looks", "run_look", "make_look", "get_dashboards", "run_dashboard", "make_dashboard", "add_dashboard_element", "add_dashboard_filter", "generate_embed_url", "health_pulse", "health_analyze", "health_vacuum", "dev_mode", "get_projects", "get_project_files", "get_project_file", "create_project_file", "update_project_file", "delete_project_file", "validate_project", "get_connections", "get_connection_schemas", "get_connection_databases", "get_connection_tables", "get_connection_table_columns"}, }, }, }, diff 
--git a/docs/en/reference/prebuilt-tools.md b/docs/en/reference/prebuilt-tools.md index d8a2e806b3..7a52236dfa 100644 --- a/docs/en/reference/prebuilt-tools.md +++ b/docs/en/reference/prebuilt-tools.md @@ -488,6 +488,7 @@ See [Usage Examples](../reference/cli.md#examples). * `create_project_file`: Create a new LookML file. * `update_project_file`: Update an existing LookML file. * `delete_project_file`: Delete a LookML file. + * `validate_project`: Check the syntax of a LookML project. * `get_connections`: Get the available connections in a Looker instance. * `get_connection_schemas`: Get the available schemas in a connection. * `get_connection_databases`: Get the available databases in a connection. diff --git a/docs/en/resources/tools/looker/looker-validate-project.md b/docs/en/resources/tools/looker/looker-validate-project.md new file mode 100644 index 0000000000..956588b11d --- /dev/null +++ b/docs/en/resources/tools/looker/looker-validate-project.md @@ -0,0 +1,47 @@ +--- +title: "looker-validate-project" +type: docs +weight: 1 +description: > + A "looker-validate-project" tool checks the syntax of a LookML project and reports any errors. +aliases: +- /resources/tools/looker-validate-project +--- + +## About + +A "looker-validate-project" tool checks the syntax of a LookML project and reports any errors. + +It's compatible with the following sources: + +- [looker](../../sources/looker.md) + +`looker-validate-project` accepts a `project_id` parameter. + +## Example + +```yaml +tools: + validate_project: + kind: looker-validate-project + source: looker-source + description: | + This tool checks a LookML project for syntax errors. + + Prerequisite: The Looker session must be in Development Mode. Use `dev_mode: true` first. + + Parameters: + - project_id (required): The unique ID of the LookML project. + + Output: + A list of error details including the file path and line number, and also a list of models + that are not currently valid due to LookML errors. 
+``` + +## Reference + +| **field** | **type** | **required** | **description** | +|-------------|:--------:|:------------:|----------------------------------------------------| +| kind | string | true | Must be "looker-validate-project". | +| source | string | true | Name of the source Looker instance. | +| description | string | true | Description of the tool that is passed to the LLM. | diff --git a/internal/prebuiltconfigs/tools/looker.yaml b/internal/prebuiltconfigs/tools/looker.yaml index 442cd11106..c6bbd51c56 100644 --- a/internal/prebuiltconfigs/tools/looker.yaml +++ b/internal/prebuiltconfigs/tools/looker.yaml @@ -959,6 +959,21 @@ tools: Output: A confirmation message upon successful file deletion. + validate_project: + kind: looker-validate-project + source: looker-source + description: | + This tool checks a LookML project for syntax errors. + + Prerequisite: The Looker session must be in Development Mode. Use `dev_mode: true` first. + + Parameters: + - project_id (required): The unique ID of the LookML project. + + Output: + A list of error details including the file path and line number, and also a list of models + that are not currently valid due to LookML errors. + get_connections: kind: looker-get-connections source: looker-source @@ -1072,6 +1087,7 @@ toolsets: - create_project_file - update_project_file - delete_project_file + - validate_project - get_connections - get_connection_schemas - get_connection_databases diff --git a/internal/tools/looker/lookervalidateproject/lookervalidateproject.go b/internal/tools/looker/lookervalidateproject/lookervalidateproject.go new file mode 100644 index 0000000000..e36c3a4dd2 --- /dev/null +++ b/internal/tools/looker/lookervalidateproject/lookervalidateproject.go @@ -0,0 +1,177 @@ +// Copyright 2026 Google LLC +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. 
+// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. +package lookervalidateproject + +import ( + "context" + "fmt" + + yaml "github.com/goccy/go-yaml" + "github.com/googleapis/genai-toolbox/internal/embeddingmodels" + "github.com/googleapis/genai-toolbox/internal/sources" + "github.com/googleapis/genai-toolbox/internal/tools" + "github.com/googleapis/genai-toolbox/internal/util" + "github.com/googleapis/genai-toolbox/internal/util/parameters" + + "github.com/looker-open-source/sdk-codegen/go/rtl" + v4 "github.com/looker-open-source/sdk-codegen/go/sdk/v4" +) + +const resourceType string = "looker-validate-project" + +func init() { + if !tools.Register(resourceType, newConfig) { + panic(fmt.Sprintf("tool type %q already registered", resourceType)) + } +} + +func newConfig(ctx context.Context, name string, decoder *yaml.Decoder) (tools.ToolConfig, error) { + actual := Config{Name: name} + if err := decoder.DecodeContext(ctx, &actual); err != nil { + return nil, err + } + return actual, nil +} + +type compatibleSource interface { + UseClientAuthorization() bool + GetAuthTokenHeaderName() string + LookerApiSettings() *rtl.ApiSettings + GetLookerSDK(string) (*v4.LookerSDK, error) +} + +type Config struct { + Name string `yaml:"name" validate:"required"` + Type string `yaml:"type" validate:"required"` + Source string `yaml:"source" validate:"required"` + Description string `yaml:"description" validate:"required"` + AuthRequired []string `yaml:"authRequired"` + Annotations *tools.ToolAnnotations `yaml:"annotations,omitempty"` +} + +// validate interface +var _ tools.ToolConfig = Config{} + +func (cfg 
Config) ToolConfigType() string { + return resourceType +} + +func (cfg Config) Initialize(srcs map[string]sources.Source) (tools.Tool, error) { + projectIdParameter := parameters.NewStringParameter("project_id", "The id of the project to validate") + params := parameters.Parameters{projectIdParameter} + + annotations := cfg.Annotations + if annotations == nil { + readOnlyHint := true + annotations = &tools.ToolAnnotations{ + ReadOnlyHint: &readOnlyHint, + } + } + + mcpManifest := tools.GetMcpManifest(cfg.Name, cfg.Description, cfg.AuthRequired, params, annotations) + + // finish tool setup + return Tool{ + Config: cfg, + Parameters: params, + manifest: tools.Manifest{ + Description: cfg.Description, + Parameters: params.Manifest(), + AuthRequired: cfg.AuthRequired, + }, + mcpManifest: mcpManifest, + }, nil +} + +// validate interface +var _ tools.Tool = Tool{} + +type Tool struct { + Config + Parameters parameters.Parameters `yaml:"parameters"` + manifest tools.Manifest + mcpManifest tools.McpManifest +} + +func (t Tool) ToConfig() tools.ToolConfig { + return t.Config +} + +func (t Tool) Invoke(ctx context.Context, resourceMgr tools.SourceProvider, params parameters.ParamValues, accessToken tools.AccessToken) (any, error) { + source, err := tools.GetCompatibleSource[compatibleSource](resourceMgr, t.Source, t.Name, t.Type) + if err != nil { + return nil, err + } + + logger, err := util.LoggerFromContext(ctx) + if err != nil { + return nil, fmt.Errorf("unable to get logger from ctx: %s", err) + } + + sdk, err := source.GetLookerSDK(string(accessToken)) + if err != nil { + return nil, fmt.Errorf("error getting sdk: %w", err) + } + + mapParams := params.AsMap() + projectId, ok := mapParams["project_id"].(string) + if !ok { + return nil, fmt.Errorf("'project_id' must be a string, got %T", mapParams["project_id"]) + } + + resp, err := sdk.ValidateProject(projectId, "", source.LookerApiSettings()) + if err != nil { + return nil, fmt.Errorf("error making validate_project 
request: %w", err) + } + + logger.DebugContext(ctx, fmt.Sprintf("Got response of %v", resp)) + + return resp, nil +} + +func (t Tool) EmbedParams(ctx context.Context, paramValues parameters.ParamValues, embeddingModelsMap map[string]embeddingmodels.EmbeddingModel) (parameters.ParamValues, error) { + return parameters.EmbedParams(ctx, t.Parameters, paramValues, embeddingModelsMap, nil) +} + +func (t Tool) Manifest() tools.Manifest { + return t.manifest +} + +func (t Tool) McpManifest() tools.McpManifest { + return t.mcpManifest +} + +func (t Tool) Authorized(verifiedAuthServices []string) bool { + return tools.IsAuthorized(t.AuthRequired, verifiedAuthServices) +} + +func (t Tool) RequiresClientAuthorization(resourceMgr tools.SourceProvider) (bool, error) { + source, err := tools.GetCompatibleSource[compatibleSource](resourceMgr, t.Source, t.Name, t.Type) + if err != nil { + return false, err + } + return source.UseClientAuthorization(), nil +} + +func (t Tool) GetAuthTokenHeaderName(resourceMgr tools.SourceProvider) (string, error) { + source, err := tools.GetCompatibleSource[compatibleSource](resourceMgr, t.Source, t.Name, t.Type) + if err != nil { + return "", err + } + return source.GetAuthTokenHeaderName(), nil +} + +func (t Tool) GetParameters() parameters.Parameters { + return t.Parameters +} diff --git a/internal/tools/looker/lookervalidateproject/lookervalidateproject_test.go b/internal/tools/looker/lookervalidateproject/lookervalidateproject_test.go new file mode 100644 index 0000000000..c71721ac9d --- /dev/null +++ b/internal/tools/looker/lookervalidateproject/lookervalidateproject_test.go @@ -0,0 +1,109 @@ +// Copyright 2026 Google LLC +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package lookervalidateproject_test + +import ( + "strings" + "testing" + + "github.com/google/go-cmp/cmp" + "github.com/googleapis/genai-toolbox/internal/server" + "github.com/googleapis/genai-toolbox/internal/testutils" + lkr "github.com/googleapis/genai-toolbox/internal/tools/looker/lookervalidateproject" +) + +func TestParseFromYamlLookerValidateProject(t *testing.T) { + ctx, err := testutils.ContextWithNewLogger() + if err != nil { + t.Fatalf("unexpected error: %s", err) + } + tcs := []struct { + desc string + in string + want server.ToolConfigs + }{ + { + desc: "basic example", + in: ` + kind: tools + name: example_tool + type: looker-validate-project + source: my-instance + description: some description + `, + want: server.ToolConfigs{ + "example_tool": lkr.Config{ + Name: "example_tool", + Type: "looker-validate-project", + Source: "my-instance", + Description: "some description", + AuthRequired: []string{}, + }, + }, + }, + } + for _, tc := range tcs { + t.Run(tc.desc, func(t *testing.T) { + // Parse contents + _, _, _, got, _, _, err := server.UnmarshalResourceConfig(ctx, testutils.FormatYaml(tc.in)) + if err != nil { + t.Fatalf("unable to unmarshal: %s", err) + } + if diff := cmp.Diff(tc.want, got); diff != "" { + t.Fatalf("incorrect parse: diff %v", diff) + } + }) + } + +} + +func TestFailParseFromYamlLookerValidateProject(t *testing.T) { + ctx, err := testutils.ContextWithNewLogger() + if err != nil { + t.Fatalf("unexpected error: %s", err) + } + tcs := []struct { + desc string + in string + err string + }{ + { + desc: "Invalid 
method", + in: ` + kind: tools + name: example_tool + type: looker-validate-project + source: my-instance + method: GOT + description: some description + `, + err: "error unmarshaling tools: unable to parse tool \"example_tool\" as type \"looker-validate-project\": [3:1] unknown field \"method\"\n 1 | authRequired: []\n 2 | description: some description\n> 3 | method: GOT\n ^\n 4 | name: example_tool\n 5 | source: my-instance\n 6 | type: looker-validate-project", + }, + } + for _, tc := range tcs { + t.Run(tc.desc, func(t *testing.T) { + // Parse contents + _, _, _, _, _, _, err := server.UnmarshalResourceConfig(ctx, testutils.FormatYaml(tc.in)) + if err == nil { + t.Fatalf("expect parsing to fail") + } + errStr := err.Error() + if !strings.Contains(errStr, tc.err) { + t.Fatalf("unexpected error string: got %q, want substring %q", errStr, tc.err) + } + }) + } + +} diff --git a/tests/looker/looker_integration_test.go b/tests/looker/looker_integration_test.go index 944cbcb926..3aa795683e 100644 --- a/tests/looker/looker_integration_test.go +++ b/tests/looker/looker_integration_test.go @@ -222,6 +222,11 @@ func TestLooker(t *testing.T) { "source": "my-instance", "description": "Simple tool to test end to end functionality.", }, + "validate_project": map[string]any{ + "type": "looker-validate-project", + "source": "my-instance", + "description": "Simple tool to test end to end functionality.", + }, "generate_embed_url": map[string]any{ "type": "looker-generate-embed-url", "source": "my-instance", @@ -1446,6 +1451,23 @@ func TestLooker(t *testing.T) { }, }, ) + tests.RunToolGetTestByName(t, "validate_project", + map[string]any{ + "validate_project": map[string]any{ + "description": "Simple tool to test end to end functionality.", + "authRequired": []any{}, + "parameters": []any{ + map[string]any{ + "authSources": []any{}, + "description": "The id of the project to validate", + "name": "project_id", + "required": true, + "type": "string", + }, + }, + }, + }, + ) 
tests.RunToolGetTestByName(t, "generate_embed_url", map[string]any{ "generate_embed_url": map[string]any{ @@ -1665,6 +1687,9 @@ func TestLooker(t *testing.T) { wantResult = "deleted" tests.RunToolInvokeParametersTest(t, "delete_project_file", []byte(`{"project_id": "the_look", "file_path": "foo.view.lkml"}`), wantResult) + wantResult = "\"errors\":[]" + tests.RunToolInvokeParametersTest(t, "validate_project", []byte(`{"project_id": "the_look"}`), wantResult) + wantResult = "production" tests.RunToolInvokeParametersTest(t, "dev_mode", []byte(`{"devMode": false}`), wantResult) From bb40cdf78cc168a6ba5f10197b39ba35cb850ce6 Mon Sep 17 00:00:00 2001 From: Twisha Bansal <58483338+twishabansal@users.noreply.github.com> Date: Mon, 9 Feb 2026 11:05:29 +0530 Subject: [PATCH 06/13] refactor: consolidate quickstart tests into universal runner (#2413) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit ## Description Tested locally. Last successful runs: - Python: https://pantheon.corp.google.com/cloud-build/builds;region=global/e86608d7-b9df-46b3-b8e6-60f0d6144b59;step=0?e=13802955&mods=-autopush_coliseum&project=toolbox-testing-438616 - JS: https://pantheon.corp.google.com/cloud-build/builds;region=global/7d8803fb-3e4e-4d0b-8b60-44f0c7962132?e=13802955&mods=-autopush_coliseum&project=toolbox-testing-438616 - Go: https://pantheon.corp.google.com/cloud-build/builds;region=global/a0d0d693-ec14-422f-aa62-b0ae664005ff?e=13802955&mods=-autopush_coliseum&project=toolbox-testing-438616 Note: After merging, update the Cloud Build YAML file paths for all of these triggers: quickstart-python, quickstart-python-test-on-merge, and the corresponding JS and Go triggers. ## PR Checklist > Thank you for opening a Pull Request!
Before submitting your PR, there are a > few things you can do to make sure it goes smoothly: - [ ] Make sure you reviewed [CONTRIBUTING.md](https://github.com/googleapis/genai-toolbox/blob/main/CONTRIBUTING.md) - [ ] Make sure to open an issue as a [bug/issue](https://github.com/googleapis/genai-toolbox/issues/new/choose) before writing your code! That way we can discuss the change, evaluate designs, and agree on the general idea - [ ] Ensure the tests and linter pass - [ ] Code coverage does not decrease (if any source code was changed) - [ ] Appropriate docs were updated (if necessary) - [ ] Make sure to add `!` if this involve a breaking change 🛠️ Fixes # --------- Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com> Co-authored-by: Yuan Teoh <45984206+Yuan325@users.noreply.github.com> --- .ci/quickstart_test/run_go_tests.sh | 125 ----------- .ci/quickstart_test/run_js_tests.sh | 125 ----------- .ci/quickstart_test/run_py_tests.sh | 115 ---------- .../go.integration.cloudbuild.yaml | 9 +- .../js.integration.cloudbuild.yaml | 9 +- .../py.integration.cloudbuild.yaml | 9 +- .ci/sample_tests/run_tests.sh | 202 ++++++++++++++++++ .../setup_hotels.sql} | 0 8 files changed, 223 insertions(+), 371 deletions(-) delete mode 100644 .ci/quickstart_test/run_go_tests.sh delete mode 100644 .ci/quickstart_test/run_js_tests.sh delete mode 100644 .ci/quickstart_test/run_py_tests.sh rename .ci/{quickstart_test => sample_tests/quickstart}/go.integration.cloudbuild.yaml (84%) rename .ci/{quickstart_test => sample_tests/quickstart}/js.integration.cloudbuild.yaml (84%) rename .ci/{quickstart_test => sample_tests/quickstart}/py.integration.cloudbuild.yaml (84%) create mode 100644 .ci/sample_tests/run_tests.sh rename .ci/{quickstart_test/setup_hotels_sample.sql => sample_tests/setup_hotels.sql} (100%) diff --git a/.ci/quickstart_test/run_go_tests.sh b/.ci/quickstart_test/run_go_tests.sh deleted file mode 100644 index 
43874931b2..0000000000 --- a/.ci/quickstart_test/run_go_tests.sh +++ /dev/null @@ -1,125 +0,0 @@ -# Copyright 2025 Google LLC -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -#!/bin/bash - -set -e - -TABLE_NAME="hotels_go" -QUICKSTART_GO_DIR="docs/en/getting-started/quickstart/go" -SQL_FILE=".ci/quickstart_test/setup_hotels_sample.sql" - -PROXY_PID="" -TOOLBOX_PID="" - -install_system_packages() { - apt-get update && apt-get install -y \ - postgresql-client \ - wget \ - gettext-base \ - netcat-openbsd -} - -start_cloud_sql_proxy() { - wget "https://storage.googleapis.com/cloud-sql-connectors/cloud-sql-proxy/v2.10.0/cloud-sql-proxy.linux.amd64" -O /usr/local/bin/cloud-sql-proxy - chmod +x /usr/local/bin/cloud-sql-proxy - cloud-sql-proxy "${CLOUD_SQL_INSTANCE}" & - PROXY_PID=$! - - for i in {1..30}; do - if nc -z 127.0.0.1 5432; then - echo "Cloud SQL Proxy is up and running." - return - fi - sleep 1 - done - - echo "Cloud SQL Proxy failed to start within the timeout period." - exit 1 -} - -setup_toolbox() { - TOOLBOX_YAML="/tools.yaml" - echo "${TOOLS_YAML_CONTENT}" > "$TOOLBOX_YAML" - if [ ! -f "$TOOLBOX_YAML" ]; then echo "Failed to create tools.yaml"; exit 1; fi - wget "https://storage.googleapis.com/genai-toolbox/v${VERSION}/linux/amd64/toolbox" -O "/toolbox" - chmod +x "/toolbox" - /toolbox --tools-file "$TOOLBOX_YAML" & - TOOLBOX_PID=$! 
- sleep 2 -} - -setup_orch_table() { - export TABLE_NAME - envsubst < "$SQL_FILE" | psql -h "$PGHOST" -p "$PGPORT" -U "$DB_USER" -d "$DATABASE_NAME" -} - -run_orch_test() { - local orch_dir="$1" - local orch_name - orch_name=$(basename "$orch_dir") - - if [ "$orch_name" == "openAI" ]; then - echo -e "\nSkipping framework '${orch_name}': Temporarily excluded." - return - fi - - ( - set -e - setup_orch_table - - echo "--- Preparing module for $orch_name ---" - cd "$orch_dir" - - if [ -f "go.mod" ]; then - go mod tidy - fi - - cd .. - - export ORCH_NAME="$orch_name" - - echo "--- Running tests for $orch_name ---" - go test -v ./... - ) -} - -cleanup_all() { - echo "--- Final cleanup: Shutting down processes and dropping table ---" - if [ -n "$TOOLBOX_PID" ]; then - kill $TOOLBOX_PID || true - fi - if [ -n "$PROXY_PID" ]; then - kill $PROXY_PID || true - fi -} -trap cleanup_all EXIT - -# Main script execution -install_system_packages -start_cloud_sql_proxy - -export PGHOST=127.0.0.1 -export PGPORT=5432 -export PGPASSWORD="$DB_PASSWORD" -export GOOGLE_API_KEY="$GOOGLE_API_KEY" - -setup_toolbox - -for ORCH_DIR in "$QUICKSTART_GO_DIR"/*/; do - if [ ! -d "$ORCH_DIR" ]; then - continue - fi - run_orch_test "$ORCH_DIR" -done diff --git a/.ci/quickstart_test/run_js_tests.sh b/.ci/quickstart_test/run_js_tests.sh deleted file mode 100644 index 71816542f5..0000000000 --- a/.ci/quickstart_test/run_js_tests.sh +++ /dev/null @@ -1,125 +0,0 @@ -# Copyright 2025 Google LLC -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
-# See the License for the specific language governing permissions and -# limitations under the License. - -#!/bin/bash - -set -e - -TABLE_NAME="hotels_js" -QUICKSTART_JS_DIR="docs/en/getting-started/quickstart/js" -SQL_FILE=".ci/quickstart_test/setup_hotels_sample.sql" - -# Initialize process IDs to empty at the top of the script -PROXY_PID="" -TOOLBOX_PID="" - -install_system_packages() { - apt-get update && apt-get install -y \ - postgresql-client \ - wget \ - gettext-base \ - netcat-openbsd -} - -start_cloud_sql_proxy() { - wget "https://storage.googleapis.com/cloud-sql-connectors/cloud-sql-proxy/v2.10.0/cloud-sql-proxy.linux.amd64" -O /usr/local/bin/cloud-sql-proxy - chmod +x /usr/local/bin/cloud-sql-proxy - cloud-sql-proxy "${CLOUD_SQL_INSTANCE}" & - PROXY_PID=$! - - for i in {1..30}; do - if nc -z 127.0.0.1 5432; then - echo "Cloud SQL Proxy is up and running." - return - fi - sleep 1 - done - - echo "Cloud SQL Proxy failed to start within the timeout period." - exit 1 -} - -setup_toolbox() { - TOOLBOX_YAML="/tools.yaml" - echo "${TOOLS_YAML_CONTENT}" > "$TOOLBOX_YAML" - if [ ! -f "$TOOLBOX_YAML" ]; then echo "Failed to create tools.yaml"; exit 1; fi - wget "https://storage.googleapis.com/genai-toolbox/v${VERSION}/linux/amd64/toolbox" -O "/toolbox" - chmod +x "/toolbox" - /toolbox --tools-file "$TOOLBOX_YAML" & - TOOLBOX_PID=$! - sleep 2 -} - -setup_orch_table() { - export TABLE_NAME - envsubst < "$SQL_FILE" | psql -h "$PGHOST" -p "$PGPORT" -U "$DB_USER" -d "$DATABASE_NAME" -} - -run_orch_test() { - local orch_dir="$1" - local orch_name - orch_name=$(basename "$orch_dir") - - ( - set -e - echo "--- Preparing environment for $orch_name ---" - setup_orch_table - - cd "$orch_dir" - echo "Installing dependencies for $orch_name..." - if [ -f "package-lock.json" ]; then - npm ci - else - npm install - fi - - cd .. 
- - echo "--- Running tests for $orch_name ---" - export ORCH_NAME="$orch_name" - node --test quickstart.test.js - - echo "--- Cleaning environment for $orch_name ---" - rm -rf "${orch_name}/node_modules" - ) -} - -cleanup_all() { - echo "--- Final cleanup: Shutting down processes and dropping table ---" - if [ -n "$TOOLBOX_PID" ]; then - kill $TOOLBOX_PID || true - fi - if [ -n "$PROXY_PID" ]; then - kill $PROXY_PID || true - fi -} -trap cleanup_all EXIT - -# Main script execution -install_system_packages -start_cloud_sql_proxy - -export PGHOST=127.0.0.1 -export PGPORT=5432 -export PGPASSWORD="$DB_PASSWORD" -export GOOGLE_API_KEY="$GOOGLE_API_KEY" - -setup_toolbox - -for ORCH_DIR in "$QUICKSTART_JS_DIR"/*/; do - if [ ! -d "$ORCH_DIR" ]; then - continue - fi - run_orch_test "$ORCH_DIR" -done diff --git a/.ci/quickstart_test/run_py_tests.sh b/.ci/quickstart_test/run_py_tests.sh deleted file mode 100644 index 3d559838d9..0000000000 --- a/.ci/quickstart_test/run_py_tests.sh +++ /dev/null @@ -1,115 +0,0 @@ -# Copyright 2025 Google LLC -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- -#!/bin/bash - -set -e - -TABLE_NAME="hotels_python" -QUICKSTART_PYTHON_DIR="docs/en/getting-started/quickstart/python" -SQL_FILE=".ci/quickstart_test/setup_hotels_sample.sql" - -PROXY_PID="" -TOOLBOX_PID="" - -install_system_packages() { - apt-get update && apt-get install -y \ - postgresql-client \ - python3-venv \ - wget \ - gettext-base \ - netcat-openbsd -} - -start_cloud_sql_proxy() { - wget "https://storage.googleapis.com/cloud-sql-connectors/cloud-sql-proxy/v2.10.0/cloud-sql-proxy.linux.amd64" -O /usr/local/bin/cloud-sql-proxy - chmod +x /usr/local/bin/cloud-sql-proxy - cloud-sql-proxy "${CLOUD_SQL_INSTANCE}" & - PROXY_PID=$! - - for i in {1..30}; do - if nc -z 127.0.0.1 5432; then - echo "Cloud SQL Proxy is up and running." - return - fi - sleep 1 - done - - echo "Cloud SQL Proxy failed to start within the timeout period." - exit 1 -} - -setup_toolbox() { - TOOLBOX_YAML="/tools.yaml" - echo "${TOOLS_YAML_CONTENT}" > "$TOOLBOX_YAML" - if [ ! -f "$TOOLBOX_YAML" ]; then echo "Failed to create tools.yaml"; exit 1; fi - wget "https://storage.googleapis.com/genai-toolbox/v${VERSION}/linux/amd64/toolbox" -O "/toolbox" - chmod +x "/toolbox" - /toolbox --tools-file "$TOOLBOX_YAML" & - TOOLBOX_PID=$! - sleep 2 -} - -setup_orch_table() { - export TABLE_NAME - envsubst < "$SQL_FILE" | psql -h "$PGHOST" -p "$PGPORT" -U "$DB_USER" -d "$DATABASE_NAME" -} - -run_orch_test() { - local orch_dir="$1" - local orch_name - orch_name=$(basename "$orch_dir") - ( - set -e - setup_orch_table - cd "$orch_dir" - local VENV_DIR=".venv" - python3 -m venv "$VENV_DIR" - source "$VENV_DIR/bin/activate" - pip install -r requirements.txt - echo "--- Running tests for $orch_name ---" - cd .. 
- ORCH_NAME="$orch_name" pytest - rm -rf "$VENV_DIR" - ) -} - -cleanup_all() { - echo "--- Final cleanup: Shutting down processes and dropping table ---" - if [ -n "$TOOLBOX_PID" ]; then - kill $TOOLBOX_PID || true - fi - if [ -n "$PROXY_PID" ]; then - kill $PROXY_PID || true - fi -} -trap cleanup_all EXIT - -# Main script execution -install_system_packages -start_cloud_sql_proxy - -export PGHOST=127.0.0.1 -export PGPORT=5432 -export PGPASSWORD="$DB_PASSWORD" -export GOOGLE_API_KEY="$GOOGLE_API_KEY" - -setup_toolbox - -for ORCH_DIR in "$QUICKSTART_PYTHON_DIR"/*/; do - if [ ! -d "$ORCH_DIR" ]; then - continue - fi - run_orch_test "$ORCH_DIR" -done diff --git a/.ci/quickstart_test/go.integration.cloudbuild.yaml b/.ci/sample_tests/quickstart/go.integration.cloudbuild.yaml similarity index 84% rename from .ci/quickstart_test/go.integration.cloudbuild.yaml rename to .ci/sample_tests/quickstart/go.integration.cloudbuild.yaml index cf9128ce60..24b14bfc26 100644 --- a/.ci/quickstart_test/go.integration.cloudbuild.yaml +++ b/.ci/sample_tests/quickstart/go.integration.cloudbuild.yaml @@ -23,13 +23,18 @@ steps: - | set -ex export VERSION=$(cat ./cmd/version.txt) - chmod +x .ci/quickstart_test/run_go_tests.sh - .ci/quickstart_test/run_go_tests.sh + chmod +x .ci/sample_tests/run_tests.sh + .ci/sample_tests/run_tests.sh env: - 'CLOUD_SQL_INSTANCE=${_CLOUD_SQL_INSTANCE}' - 'GCP_PROJECT=${_GCP_PROJECT}' - 'DATABASE_NAME=${_DATABASE_NAME}' - 'DB_USER=${_DB_USER}' + - 'TARGET_ROOT=docs/en/getting-started/quickstart/go' + - 'TARGET_LANG=go' + - 'TABLE_NAME=hotels_go' + - 'SQL_FILE=.ci/sample_tests/setup_hotels.sql' + - 'AGENT_FILE_PATTERN=quickstart.go' secretEnv: ['TOOLS_YAML_CONTENT', 'GOOGLE_API_KEY', 'DB_PASSWORD'] availableSecrets: diff --git a/.ci/quickstart_test/js.integration.cloudbuild.yaml b/.ci/sample_tests/quickstart/js.integration.cloudbuild.yaml similarity index 84% rename from .ci/quickstart_test/js.integration.cloudbuild.yaml rename to 
.ci/sample_tests/quickstart/js.integration.cloudbuild.yaml index cbf4e8547f..6e5aac6b07 100644 --- a/.ci/quickstart_test/js.integration.cloudbuild.yaml +++ b/.ci/sample_tests/quickstart/js.integration.cloudbuild.yaml @@ -23,13 +23,18 @@ steps: - | set -ex export VERSION=$(cat ./cmd/version.txt) - chmod +x .ci/quickstart_test/run_js_tests.sh - .ci/quickstart_test/run_js_tests.sh + chmod +x .ci/sample_tests/run_tests.sh + .ci/sample_tests/run_tests.sh env: - 'CLOUD_SQL_INSTANCE=${_CLOUD_SQL_INSTANCE}' - 'GCP_PROJECT=${_GCP_PROJECT}' - 'DATABASE_NAME=${_DATABASE_NAME}' - 'DB_USER=${_DB_USER}' + - 'TARGET_ROOT=docs/en/getting-started/quickstart/js' + - 'TARGET_LANG=js' + - 'TABLE_NAME=hotels_js' + - 'SQL_FILE=.ci/sample_tests/setup_hotels.sql' + - 'AGENT_FILE_PATTERN=quickstart.js' secretEnv: ['TOOLS_YAML_CONTENT', 'GOOGLE_API_KEY', 'DB_PASSWORD'] availableSecrets: diff --git a/.ci/quickstart_test/py.integration.cloudbuild.yaml b/.ci/sample_tests/quickstart/py.integration.cloudbuild.yaml similarity index 84% rename from .ci/quickstart_test/py.integration.cloudbuild.yaml rename to .ci/sample_tests/quickstart/py.integration.cloudbuild.yaml index 8fd3834d6c..da8daf678f 100644 --- a/.ci/quickstart_test/py.integration.cloudbuild.yaml +++ b/.ci/sample_tests/quickstart/py.integration.cloudbuild.yaml @@ -23,13 +23,18 @@ steps: - | set -ex export VERSION=$(cat ./cmd/version.txt) - chmod +x .ci/quickstart_test/run_py_tests.sh - .ci/quickstart_test/run_py_tests.sh + chmod +x .ci/sample_tests/run_tests.sh + .ci/sample_tests/run_tests.sh env: - 'CLOUD_SQL_INSTANCE=${_CLOUD_SQL_INSTANCE}' - 'GCP_PROJECT=${_GCP_PROJECT}' - 'DATABASE_NAME=${_DATABASE_NAME}' - 'DB_USER=${_DB_USER}' + - 'TARGET_ROOT=docs/en/getting-started/quickstart/python' + - 'TARGET_LANG=python' + - 'TABLE_NAME=hotels_python' + - 'SQL_FILE=.ci/sample_tests/setup_hotels.sql' + - 'AGENT_FILE_PATTERN=quickstart.py' secretEnv: ['TOOLS_YAML_CONTENT', 'GOOGLE_API_KEY', 'DB_PASSWORD'] availableSecrets: diff --git 
a/.ci/sample_tests/run_tests.sh b/.ci/sample_tests/run_tests.sh new file mode 100644 index 0000000000..5de9234eb0 --- /dev/null +++ b/.ci/sample_tests/run_tests.sh @@ -0,0 +1,202 @@ +# Copyright 2026 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +set -e + +# --- Configuration (from Environment Variables) --- +# TARGET_ROOT: The directory to search for tests (e.g., docs/en/getting-started/quickstart/js) +# TARGET_LANG: python, js, go +# TABLE_NAME: Database table name to use +# SQL_FILE: Path to the SQL setup file +# AGENT_FILE_PATTERN: Filename to look for (e.g., quickstart.js or agent.py) + +VERSION=$(cat ./cmd/version.txt) + +# Process IDs & Logs +PROXY_PID="" +TOOLBOX_PID="" +PROXY_LOG="cloud_sql_proxy.log" +TOOLBOX_LOG="toolbox_server.log" + +install_system_packages() { + echo "Installing system packages..." + apt-get update && apt-get install -y \ + postgresql-client \ + wget \ + gettext-base \ + netcat-openbsd + + if [[ "$TARGET_LANG" == "python" ]]; then + apt-get install -y python3-venv + fi +} + +start_cloud_sql_proxy() { + echo "Starting Cloud SQL Proxy..." + wget -q "https://storage.googleapis.com/cloud-sql-connectors/cloud-sql-proxy/v2.10.0/cloud-sql-proxy.linux.amd64" -O /usr/local/bin/cloud-sql-proxy + chmod +x /usr/local/bin/cloud-sql-proxy + cloud-sql-proxy "${CLOUD_SQL_INSTANCE}" > "$PROXY_LOG" 2>&1 & + PROXY_PID=$! + + # Health Check + for i in {1..30}; do + if nc -z 127.0.0.1 5432; then + echo "Cloud SQL Proxy is up and running." 
+ return + fi + sleep 1 + done + echo "ERROR: Cloud SQL Proxy failed to start. Logs:" + cat "$PROXY_LOG" + exit 1 +} + +setup_toolbox() { + echo "Setting up Toolbox server..." + TOOLBOX_YAML="/tools.yaml" + echo "${TOOLS_YAML_CONTENT}" > "$TOOLBOX_YAML" + wget -q "https://storage.googleapis.com/genai-toolbox/v${VERSION}/linux/amd64/toolbox" -O "/toolbox" + chmod +x "/toolbox" + /toolbox --tools-file "$TOOLBOX_YAML" > "$TOOLBOX_LOG" 2>&1 & + TOOLBOX_PID=$! + + # Health Check + for i in {1..15}; do + if nc -z 127.0.0.1 5000; then + echo "Toolbox server is up and running." + return + fi + sleep 1 + done + echo "ERROR: Toolbox server failed to start. Logs:" + cat "$TOOLBOX_LOG" + exit 1 +} + +setup_db_table() { + echo "Setting up database table $TABLE_NAME using $SQL_FILE..." + export TABLE_NAME + envsubst < "$SQL_FILE" | psql -h 127.0.0.1 -p 5432 -U "$DB_USER" -d "$DATABASE_NAME" +} + +run_python_test() { + local dir=$1 + local name=$(basename "$dir") + echo "--- Running Python Test: $name ---" + ( + cd "$dir" + python3 -m venv .venv + source .venv/bin/activate + pip install -q -r requirements.txt pytest + + cd .. + local test_file=$(find . -maxdepth 1 -name "*test.py" | head -n 1) + if [ -n "$test_file" ]; then + echo "Found native test: $test_file. Running pytest..." + export ORCH_NAME="$name" + export PYTHONPATH="../" + pytest "$test_file" + else + echo "No native test found. Running agent directly..." + export PYTHONPATH="../" + python3 "${name}/${AGENT_FILE_PATTERN}" + fi + rm -rf "${name}/.venv" + ) +} + +run_js_test() { + local dir=$1 + local name=$(basename "$dir") + echo "--- Running JS Test: $name ---" + ( + cd "$dir" + if [ -f "package-lock.json" ]; then npm ci -q; else npm install -q; fi + + cd .. + # Looking for a JS test file in the parent directory + local test_file=$(find . -maxdepth 1 -name "*test.js" | head -n 1) + if [ -n "$test_file" ]; then + echo "Found native test: $test_file. Running node --test..."
+ export ORCH_NAME="$name" + node --test "$test_file" + else + echo "No native test found. running agent directly..." + node "${name}/${AGENT_FILE_PATTERN}" + fi + rm -rf "${name}/node_modules" + ) +} + +run_go_test() { + local dir=$1 + local name=$(basename "$dir") + + if [ "$name" == "openAI" ]; then + echo -e "\nSkipping framework '${name}': Temporarily excluded." + return + fi + + echo "--- Running Go Test: $name ---" + ( + cd "$dir" + if [ -f "go.mod" ]; then + go mod tidy + fi + + cd .. + local test_file=$(find . -maxdepth 1 -name "*test.go" | head -n 1) + if [ -n "$test_file" ]; then + echo "Found native test: $test_file. Running go test..." + export ORCH_NAME="$name" + go test -v ./... + else + echo "No native test found. running agent directly..." + cd "$name" + go run "." + fi + ) +} + +cleanup() { + echo "Cleaning up background processes..." + [ -n "$TOOLBOX_PID" ] && kill "$TOOLBOX_PID" || true + [ -n "$PROXY_PID" ] && kill "$PROXY_PID" || true +} +trap cleanup EXIT + +# --- Execution --- +install_system_packages +start_cloud_sql_proxy + +export PGHOST=127.0.0.1 +export PGPORT=5432 +export PGPASSWORD="$DB_PASSWORD" +export GOOGLE_API_KEY="$GOOGLE_API_KEY" + +setup_toolbox +setup_db_table + +echo "Scanning $TARGET_ROOT for tests with pattern $AGENT_FILE_PATTERN..." 
+ +find "$TARGET_ROOT" -name "$AGENT_FILE_PATTERN" | while read -r agent_file; do + sample_dir=$(dirname "$agent_file") + if [[ "$TARGET_LANG" == "python" ]]; then + run_python_test "$sample_dir" + elif [[ "$TARGET_LANG" == "js" ]]; then + run_js_test "$sample_dir" + elif [[ "$TARGET_LANG" == "go" ]]; then + run_go_test "$sample_dir" + fi +done diff --git a/.ci/quickstart_test/setup_hotels_sample.sql b/.ci/sample_tests/setup_hotels.sql similarity index 100% rename from .ci/quickstart_test/setup_hotels_sample.sql rename to .ci/sample_tests/setup_hotels.sql From 7767e1e01931ca940eb1ff3667bd610a44603aef Mon Sep 17 00:00:00 2001 From: manuka rahul <96047526+rahulpinto19@users.noreply.github.com> Date: Mon, 9 Feb 2026 05:59:31 +0000 Subject: [PATCH 07/13] ci: optimize link checker workflow and failure reporting (#2411) The current system streamlines troubleshooting by omitting passing links from the output. Reporting now focuses solely on: - Broken or failing links - Redirecting links - File paths (now prioritized at the top of the log for rapid identification) The link resolution notice is now presented first, followed by specific errors to prevent developer distraction. The .lycheeignore file has been updated to use regular expressions for broader coverage of npm and SQL links, replacing the previous method of listing individual URLs. 
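The regex-based ignore entries described above can be sanity-checked locally. The sketch below (Python, not part of this change) mirrors the idea that each `.lycheeignore` entry is matched as a regular expression against the full URL, using the two patterns added in this PR; whether lychee anchors its patterns exactly this way is an implementation detail, so treat this as an illustration of the intended coverage rather than lychee's actual matcher.

```python
import re

# The two regex entries added to .lycheeignore in this change.
patterns = [
    r"^https?://(.*\.)?mysql\.com/.*",   # covers dev.mysql.com, www.mysql.com, ...
    r"^https?://(www\.)?npmjs\.com/.*",  # covers all npm package pages
]

def is_ignored(url: str) -> bool:
    """Return True if any ignore pattern matches the URL from its start."""
    return any(re.match(p, url) for p in patterns)

# Links previously listed individually are now covered by the patterns:
assert is_ignored("https://dev.mysql.com/downloads/installer/")
assert is_ignored("https://www.mysql.com/")
assert is_ignored("https://www.npmjs.com/package/@toolbox-sdk/core")
# Unrelated links are still checked:
assert not is_ignored("https://github.com/googleapis/genai-toolbox")
```

The assertions show why the two regexes can replace the previous per-URL entries: one pattern now covers every subdomain and path on each host.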
--------- Co-authored-by: Twisha Bansal <58483338+twishabansal@users.noreply.github.com> --- .github/workflows/link_checker_workflow.yaml | 26 +++++++++++++------- .lycheeignore | 10 +++----- 2 files changed, 21 insertions(+), 15 deletions(-) diff --git a/.github/workflows/link_checker_workflow.yaml b/.github/workflows/link_checker_workflow.yaml index ae51a4b08b..591221d16e 100644 --- a/.github/workflows/link_checker_workflow.yaml +++ b/.github/workflows/link_checker_workflow.yaml @@ -32,29 +32,37 @@ jobs: restore-keys: cache-lychee- - name: Link Checker + id: lychee-check uses: lycheeverse/lychee-action@a8c4c7cb88f0c7386610c35eb25108e448569cb0 # v2 + continue-on-error: true with: args: > - --verbose + --quiet --no-progress --cache --max-cache-age 1d --exclude '^neo4j\+.*' --exclude '^bolt://.*' README.md docs/ - output: /tmp/foo.txt - fail: true - jobSummary: true - debug: true + output: lychee-report.md + format: markdown + fail: true + jobSummary: false + debug: false env: GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} - # This step only runs if the 'lychee_check' step fails, ensuring the - # context note only appears when the developer needs to troubleshoot. - - name: Display Link Context Note on Failure - if: ${{ failure() }} + + - name: Display Failure Report + # Run this ONLY if the link checker failed + if: steps.lychee-check.outcome == 'failure' run: | echo "## Link Resolution Note" >> $GITHUB_STEP_SUMMARY echo "Local links and directory changes work differently on GitHub than on the docsite." >> $GITHUB_STEP_SUMMARY echo "You must ensure fixes pass the **GitHub check** and also work with **\`hugo server\`**." >> $GITHUB_STEP_SUMMARY + echo "See [Link Checking and Fixing with Lychee](https://github.com/googleapis/genai-toolbox/blob/main/DEVELOPER.md#link-checking-and-fixing-with-lychee) for more details." 
>> $GITHUB_STEP_SUMMARY echo "---" >> $GITHUB_STEP_SUMMARY + echo "### Broken Links Found" >> $GITHUB_STEP_SUMMARY + cat ./lychee-report.md >> $GITHUB_STEP_SUMMARY + + exit 1 diff --git a/.lycheeignore b/.lycheeignore index 9ce08c5e6a..236e2c8394 100644 --- a/.lycheeignore +++ b/.lycheeignore @@ -23,8 +23,7 @@ https://cloud.dgraph.io/login https://dgraph.io/docs # MySQL Community downloads and main site (often protected by bot mitigation) -https://dev.mysql.com/downloads/installer/ -https://www.mysql.com/ +^https?://(.*\.)?mysql\.com/.* # Claude desktop download link https://claude.ai/download @@ -37,10 +36,9 @@ https://dev.mysql.com/doc/refman/8.4/en/sql-prepared-statements.html https://dev.mysql.com/doc/refman/8.4/en/user-names.html # npmjs links can occasionally trigger rate limiting during high-frequency CI builds -https://www.npmjs.com/package/@toolbox-sdk/adk -https://www.npmjs.com/package/@toolbox-sdk/core -https://www.npmjs.com/package/@toolbox-sdk/server +^https?://(www\.)?npmjs\.com/.* + https://www.oceanbase.com/ # Ignore social media and blog profiles to reduce external request overhead -https://medium.com/@mcp_toolbox +https://medium.com/@mcp_toolbox \ No newline at end of file From 3c9365dfd946216dddbf8a370b35756289a5455a Mon Sep 17 00:00:00 2001 From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com> Date: Mon, 9 Feb 2026 06:23:39 +0000 Subject: [PATCH 08/13] chore(deps): bump fast-xml-parser and @google-cloud/storage in /docs/en/getting-started/quickstart/js/adk (#2434) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Bumps [fast-xml-parser](https://github.com/NaturalIntelligence/fast-xml-parser) and [@google-cloud/storage](https://github.com/googleapis/nodejs-storage). These dependencies needed to be updated together. Updates `fast-xml-parser` from 4.5.3 to 5.3.5
Release notes

Sourced from fast-xml-parser's releases.

v5.3.5

What's Changed

New Contributors

Full Changelog: https://github.com/NaturalIntelligence/fast-xml-parser/compare/v5.3.4...v5.3.5

fix: handle HTML numeric and hex entities when out of range

No release notes provided.

bug fix and performance improvements

  • fix #775: transformTagName with allowBooleanAttributes adds an unnecessary attribute
  • Performance improvement for stopNodes (By Maciek Lamberski)

Replace Buffer with Uint8Array

  • Launched Separate CLI module
  • Replace Buffer with Uint8Array

Support EMPTY and ANY with ELEMENT in DOCTYPE

Full Changelog: https://github.com/NaturalIntelligence/fast-xml-parser/compare/v5.2.4...v5.2.4

upgrade to ESM module and fixing value parsing issues

  • Support ESM modules
  • fix value parsing issues
  • a feature to access tag location is added (metadata)
  • fix to read DOCTYPE correctly

Full Changelog: https://github.com/NaturalIntelligence/fast-xml-parser/blob/master/CHANGELOG.md

Summary update on all the previous releases from v4.2.4

  • Multiple minor fixes provided in the validator and parser
  • v6 is added for experimental use.
  • ignoreAttributes support function, and array of string or regex
  • Add support for parsing HTML numeric entities
  • v5 of the application is an ESM module now. However, JS is also supported.

Note: The release section is not updated frequently. Please check the CHANGELOG or tags for the latest release information.

Changelog

Sourced from fast-xml-parser's changelog.

Note: If information about a particular minor version is missing, that version was released without any functional change to this library.

5.3.5 / 2026-02-08

  • fix: Escape regex char in entity name
  • update strnum to 2.1.2
  • add missing exports in CJS typings

5.3.4 / 2026-01-30

  • fix: handle HTML numeric and hex entities when out of range

5.3.3 / 2025-12-12

  • fix #775: transformTagName with allowBooleanAttributes adds an unnecessary attribute

5.3.2 / 2025-11-14

  • fix for import statement for v6

5.3.1 / 2025-11-03

5.3.0 / 2025-10-03

  • Use Uint8Array in place of Buffer in Parser

5.2.5 / 2025-06-08

  • Inform user to use fxp-cli instead of in-built CLI feature
  • Export typings for direct use

5.2.4 / 2025-06-06

  • fix (#747): fix EMPTY and ANY with ELEMENT in DOCTYPE

5.2.3 / 2025-05-11

  • fix (#747): support EMPTY and ANY with ELEMENT in DOCTYPE

5.2.2 / 2025-05-05

  • fix (#746): update strnum to fix parsing issues related to e-notations

5.2.1 / 2025-04-22

  • fix: read DOCTYPE entity value correctly
  • read DOCTYPE NOTATION, ELEMENT exp but not using read values

5.2.0 / 2025-04-03

5.1.0 / 2025-04-02

  • feat: declare package as side-effect free (#738) (By Thomas Bouffard)
  • fix cjs build mode
  • fix builder return type to string

... (truncated)

Commits

Updates `@google-cloud/storage` from 7.18.0 to 7.19.0
Release notes

Sourced from @google-cloud/storage's releases.

v7.19.0

7.19.0 (2026-02-05)

Features

  • Enable full object checksum validation on JSON path (#2687) (08a8962)

Bug Fixes

  • deps: Update dependency fast-xml-parser to v5 [security] (#2713) (420935a)
Changelog

Sourced from @google-cloud/storage's changelog.

7.19.0 (2026-02-05)

Features

  • Enable full object checksum validation on JSON path (#2687) (08a8962)

Bug Fixes

  • deps: Update dependency fast-xml-parser to v5 [security] (#2713) (420935a)
Commits

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.

---
Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR:

  • `@dependabot rebase` will rebase this PR
  • `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
  • `@dependabot show ignore conditions` will show all of the ignore conditions of the specified dependency
  • `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)

You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/googleapis/genai-toolbox/network/alerts).
Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> --- .../quickstart/js/adk/package-lock.json | 81 +++++++++++++------ 1 file changed, 58 insertions(+), 23 deletions(-) diff --git a/docs/en/getting-started/quickstart/js/adk/package-lock.json b/docs/en/getting-started/quickstart/js/adk/package-lock.json index 84bc88e40a..9bcbc4080d 100644 --- a/docs/en/getting-started/quickstart/js/adk/package-lock.json +++ b/docs/en/getting-started/quickstart/js/adk/package-lock.json @@ -18,6 +18,7 @@ "resolved": "https://registry.npmjs.org/@google-cloud/paginator/-/paginator-5.0.2.tgz", "integrity": "sha512-DJS3s0OVH4zFDB1PzjxAsHqJT6sKVbRwwML0ZBP9PbU7Yebtu/7SWMRzvO2J3nUi9pRNITCfu4LJeooM2w4pjg==", "license": "Apache-2.0", + "peer": true, "dependencies": { "arrify": "^2.0.0", "extend": "^3.0.2" @@ -31,6 +32,7 @@ "resolved": "https://registry.npmjs.org/@google-cloud/projectify/-/projectify-4.0.0.tgz", "integrity": "sha512-MmaX6HeSvyPbWGwFq7mXdo0uQZLGBYCwziiLIGq5JVX+/bdI3SAq6bP98trV5eTWfLuvsMcIC1YJOF2vfteLFA==", "license": "Apache-2.0", + "peer": true, "engines": { "node": ">=14.0.0" } @@ -40,15 +42,17 @@ "resolved": "https://registry.npmjs.org/@google-cloud/promisify/-/promisify-4.0.0.tgz", "integrity": "sha512-Orxzlfb9c67A15cq2JQEyVc7wEsmFBmHjZWZYQMUyJ1qivXyMwdyNOs9odi79hze+2zqdTtu1E19IM/FtqZ10g==", "license": "Apache-2.0", + "peer": true, "engines": { "node": ">=14" } }, "node_modules/@google-cloud/storage": { - "version": "7.18.0", - "resolved": "https://registry.npmjs.org/@google-cloud/storage/-/storage-7.18.0.tgz", - "integrity": "sha512-r3ZwDMiz4nwW6R922Z1pwpePxyRwE5GdevYX63hRmAQUkUQJcBH/79EnQPDv5cOv1mFBgevdNWQfi3tie3dHrQ==", + "version": "7.19.0", + "resolved": "https://registry.npmjs.org/@google-cloud/storage/-/storage-7.19.0.tgz", + "integrity": "sha512-n2FjE7NAOYyshogdc7KQOl/VZb4sneqPjWouSyia9CMDdMhRX5+RIbqalNmC7LOLzuLAN89VlF2HvG8na9G+zQ==", "license": "Apache-2.0", + "peer": true, "dependencies": { 
"@google-cloud/paginator": "^5.0.0", "@google-cloud/projectify": "^4.0.0", @@ -56,7 +60,7 @@ "abort-controller": "^3.0.0", "async-retry": "^1.3.3", "duplexify": "^4.1.3", - "fast-xml-parser": "^4.4.1", + "fast-xml-parser": "^5.3.4", "gaxios": "^6.0.2", "google-auth-library": "^9.6.3", "html-entities": "^2.5.2", @@ -75,6 +79,7 @@ "resolved": "https://registry.npmjs.org/uuid/-/uuid-8.3.2.tgz", "integrity": "sha512-+NYs2QeMWy+GWFOEm9xnn6HCDp0l7QBD7ml8zLUmJ+93Q5NF0NocErnwkTkXVFNiX3/fpC6afS8Dhb/gz7R7eg==", "license": "MIT", + "peer": true, "bin": { "uuid": "dist/bin/uuid" } @@ -97,7 +102,6 @@ "resolved": "https://registry.npmjs.org/@google/genai/-/genai-1.14.0.tgz", "integrity": "sha512-jirYprAAJU1svjwSDVCzyVq+FrJpJd5CSxR/g2Ga/gZ0ZYZpcWjMS75KJl9y71K1mDN+tcx6s21CzCbB2R840g==", "license": "Apache-2.0", - "peer": true, "dependencies": { "google-auth-library": "^9.14.2", "ws": "^8.18.0" @@ -136,7 +140,6 @@ "resolved": "https://registry.npmjs.org/@modelcontextprotocol/sdk/-/sdk-1.17.5.tgz", "integrity": "sha512-QakrKIGniGuRVfWBdMsDea/dx1PNE739QJ7gCM41s9q+qaCYTHCdsIBXQVVXry3mfWAiaM9kT22Hyz53Uw8mfg==", "license": "MIT", - "peer": true, "dependencies": { "ajv": "^6.12.6", "content-type": "^1.0.5", @@ -299,6 +302,7 @@ "resolved": "https://registry.npmjs.org/@tootallnate/once/-/once-2.0.0.tgz", "integrity": "sha512-XCuKFP5PS55gnMVu3dty8KPatLqUoy/ZYzDzAGCQ8JNFCkLXzmI7vNHCR+XpbZaMWQK/vQubr7PkYq8g470J/A==", "license": "MIT", + "peer": true, "engines": { "node": ">= 10" } @@ -307,13 +311,15 @@ "version": "0.12.5", "resolved": "https://registry.npmjs.org/@types/caseless/-/caseless-0.12.5.tgz", "integrity": "sha512-hWtVTC2q7hc7xZ/RLbxapMvDMgUnDvKvMOpKal4DrMyfGBUfB1oKaZlIRr6mJL+If3bAP6sV/QneGzF6tJjZDg==", - "license": "MIT" + "license": "MIT", + "peer": true }, "node_modules/@types/node": { "version": "24.10.1", "resolved": "https://registry.npmjs.org/@types/node/-/node-24.10.1.tgz", "integrity": 
"sha512-GNWcUTRBgIRJD5zj+Tq0fKOJ5XZajIiBroOF0yvj2bSU1WvNdYS/dn9UxwsujGW4JX06dnHyjV2y9rRaybH0iQ==", "license": "MIT", + "peer": true, "dependencies": { "undici-types": "~7.16.0" } @@ -323,6 +329,7 @@ "resolved": "https://registry.npmjs.org/@types/request/-/request-2.48.13.tgz", "integrity": "sha512-FGJ6udDNUCjd19pp0Q3iTiDkwhYup7J8hpMW9c4k53NrccQFFWKRho6hvtPPEhnXWKvukfwAlB6DbDz4yhH5Gg==", "license": "MIT", + "peer": true, "dependencies": { "@types/caseless": "*", "@types/node": "*", @@ -335,6 +342,7 @@ "resolved": "https://registry.npmjs.org/form-data/-/form-data-2.5.5.tgz", "integrity": "sha512-jqdObeR2rxZZbPSGL+3VckHMYtu+f9//KXBsVny6JSX/pa38Fy+bGjuG8eW/H6USNQWhLi8Num++cU2yOCNz4A==", "license": "MIT", + "peer": true, "dependencies": { "asynckit": "^0.4.0", "combined-stream": "^1.0.8", @@ -352,6 +360,7 @@ "resolved": "https://registry.npmjs.org/mime-db/-/mime-db-1.52.0.tgz", "integrity": "sha512-sPU4uV7dYlvtWJxwwxHD0PuihVNiE7TyAbQ5SWxDCB9mUYvOgroQOwYQQOKPJ8CIbE+1ETVlOoK1UC2nU3gYvg==", "license": "MIT", + "peer": true, "engines": { "node": ">= 0.6" } @@ -361,6 +370,7 @@ "resolved": "https://registry.npmjs.org/mime-types/-/mime-types-2.1.35.tgz", "integrity": "sha512-ZDY+bPm5zTTF+YpCrAU9nK0UgICYPT0QtT1NZWFv4s++TNkcgVaT0g6+4R2uI4MjQjzysHB1zxuWL50hzaeXiw==", "license": "MIT", + "peer": true, "dependencies": { "mime-db": "1.52.0" }, @@ -372,13 +382,15 @@ "version": "4.0.5", "resolved": "https://registry.npmjs.org/@types/tough-cookie/-/tough-cookie-4.0.5.tgz", "integrity": "sha512-/Ad8+nIOV7Rl++6f1BdKxFSMgmoqEoYbHRpPcx3JEfv8VRsQe9Z4mCXeJBzxs7mbHY/XOZZuXlRNfhpVPbs6ZA==", - "license": "MIT" + "license": "MIT", + "peer": true }, "node_modules/abort-controller": { "version": "3.0.0", "resolved": "https://registry.npmjs.org/abort-controller/-/abort-controller-3.0.0.tgz", "integrity": "sha512-h8lQ8tacZYnR3vNQTgibj+tODHI5/+l06Au2Pcriv/Gmet0eaj4TwWH41sO9wnHDiQsEj19q0drzdWdeAHtweg==", "license": "MIT", + "peer": true, "dependencies": { "event-target-shim": "^5.0.0" }, @@ -453,6 
+465,7 @@ "resolved": "https://registry.npmjs.org/arrify/-/arrify-2.0.1.tgz", "integrity": "sha512-3duEwti880xqi4eAMN8AyR4a0ByT90zoYdLlevfrvU43vb0YZwZVfxOgxWrLXXXpyugL0hNZc9G6BiB5B3nUug==", "license": "MIT", + "peer": true, "engines": { "node": ">=8" } @@ -462,6 +475,7 @@ "resolved": "https://registry.npmjs.org/async-retry/-/async-retry-1.3.3.tgz", "integrity": "sha512-wfr/jstw9xNi/0teMHrRW7dsz3Lt5ARhYNZ2ewpadnhaIp5mbALhOAP+EAdsC7t4Z6wqsDVv9+W6gm1Dk9mEyw==", "license": "MIT", + "peer": true, "dependencies": { "retry": "0.13.1" } @@ -754,6 +768,7 @@ "resolved": "https://registry.npmjs.org/duplexify/-/duplexify-4.1.3.tgz", "integrity": "sha512-M3BmBhwJRZsSx38lZyhE53Csddgzl5R7xGJNk7CVddZD6CcmwMCH8J+7AprIrQKH7TonKxaCjcv27Qmf+sQ+oA==", "license": "MIT", + "peer": true, "dependencies": { "end-of-stream": "^1.4.1", "inherits": "^2.0.3", @@ -802,6 +817,7 @@ "resolved": "https://registry.npmjs.org/end-of-stream/-/end-of-stream-1.4.5.tgz", "integrity": "sha512-ooEGc6HP26xXq/N+GCGOT0JKCLDGrq2bQUZrQ7gyrJiZANJ/8YDTxTpQBXGMn+WbIQXNVpyWymm7KYVICQnyOg==", "license": "MIT", + "peer": true, "dependencies": { "once": "^1.4.0" } @@ -871,6 +887,7 @@ "resolved": "https://registry.npmjs.org/event-target-shim/-/event-target-shim-5.0.1.tgz", "integrity": "sha512-i/2XbnSz/uxRCU6+NdVJgKWDTM427+MqYbkQzD321DuCQJUqOuJKIA0IM2+W2xtYHdKOmZ4dR6fExsd4SXL+WQ==", "license": "MIT", + "peer": true, "engines": { "node": ">=6" } @@ -901,7 +918,6 @@ "resolved": "https://registry.npmjs.org/express/-/express-5.1.0.tgz", "integrity": "sha512-DT9ck5YIRU+8GYzzU5kT3eHGA5iL+1Zd0EutOmTE9Dtk+Tvuzd23VBU+ec7HPNSTxXYO55gPV/hq4pSBJDjFpA==", "license": "MIT", - "peer": true, "dependencies": { "accepts": "^2.0.0", "body-parser": "^2.2.0", @@ -973,9 +989,9 @@ "license": "MIT" }, "node_modules/fast-xml-parser": { - "version": "4.5.3", - "resolved": "https://registry.npmjs.org/fast-xml-parser/-/fast-xml-parser-4.5.3.tgz", - "integrity": 
"sha512-RKihhV+SHsIUGXObeVy9AXiBbFwkVk7Syp8XgwN5U3JV416+Gwp/GO9i0JYKmikykgz/UHRrrV4ROuZEo/T0ig==", + "version": "5.3.5", + "resolved": "https://registry.npmjs.org/fast-xml-parser/-/fast-xml-parser-5.3.5.tgz", + "integrity": "sha512-JeaA2Vm9ffQKp9VjvfzObuMCjUYAp5WDYhRYL5LrBPY/jUDlUtOvDfot0vKSkB9tuX885BDHjtw4fZadD95wnA==", "funding": [ { "type": "github", @@ -983,8 +999,9 @@ } ], "license": "MIT", + "peer": true, "dependencies": { - "strnum": "^1.1.1" + "strnum": "^2.1.2" }, "bin": { "fxparser": "src/cli/cli.js" @@ -1333,7 +1350,8 @@ "url": "https://patreon.com/mdevils" } ], - "license": "MIT" + "license": "MIT", + "peer": true }, "node_modules/http-errors": { "version": "2.0.0", @@ -1365,6 +1383,7 @@ "resolved": "https://registry.npmjs.org/http-proxy-agent/-/http-proxy-agent-5.0.0.tgz", "integrity": "sha512-n2hY8YdoRE1i7r6M0w9DIw5GgZN0G25P8zLCRQ8rjXtTU3vsNFBI/vWK/UIeE6g5MUUz6avwAPXmL6Fy9D/90w==", "license": "MIT", + "peer": true, "dependencies": { "@tootallnate/once": "2", "agent-base": "6", @@ -1379,6 +1398,7 @@ "resolved": "https://registry.npmjs.org/agent-base/-/agent-base-6.0.2.tgz", "integrity": "sha512-RZNwNclF7+MS/8bDg70amg32dyeZGZxiDuQmZxKLAlQjr3jGyLx+4Kkk58UO7D2QdgFIQCovuSuZESne6RG6XQ==", "license": "MIT", + "peer": true, "dependencies": { "debug": "4" }, @@ -1555,6 +1575,7 @@ "resolved": "https://registry.npmjs.org/mime/-/mime-3.0.0.tgz", "integrity": "sha512-jSCU7/VB1loIWBZe14aEYHU/+1UMEHoaO7qxCOVJOw9GgH72VAWppxNcjU+x9a2k3GSIBXNKxXQFqRvvZ7vr3A==", "license": "MIT", + "peer": true, "bin": { "mime": "cli.js" }, @@ -1715,6 +1736,7 @@ "resolved": "https://registry.npmjs.org/p-limit/-/p-limit-3.1.0.tgz", "integrity": "sha512-TYOanM3wGwNGsZN2cVTYPArw454xnXj5qmWF1bEoAc4+cU/ol7GVh7odevjp1FNHduHc3KZMcFduxU5Xc6uJRQ==", "license": "MIT", + "peer": true, "dependencies": { "yocto-queue": "^0.1.0" }, @@ -1856,6 +1878,7 @@ "resolved": "https://registry.npmjs.org/readable-stream/-/readable-stream-3.6.2.tgz", "integrity": 
"sha512-9u/sniCrY3D5WdsERHzHE4G2YCXqoG5FTHUiCC4SIbr6XcLZBY05ya9EKjYek9O5xOAwjGq+1JdGBAS7Q9ScoA==", "license": "MIT", + "peer": true, "dependencies": { "inherits": "^2.0.3", "string_decoder": "^1.1.1", @@ -1870,6 +1893,7 @@ "resolved": "https://registry.npmjs.org/retry/-/retry-0.13.1.tgz", "integrity": "sha512-XQBQ3I8W1Cge0Seh+6gjj03LbmRFWuoszgK9ooCpwYIrhhoO80pfq4cUkU5DkknwfOfFteRwlZ56PYOGYyFWdg==", "license": "MIT", + "peer": true, "engines": { "node": ">= 4" } @@ -1879,6 +1903,7 @@ "resolved": "https://registry.npmjs.org/retry-request/-/retry-request-7.0.2.tgz", "integrity": "sha512-dUOvLMJ0/JJYEn8NrpOaGNE7X3vpI5XlZS/u0ANjqtcZVKnIxP7IgCFwrKTxENw29emmwug53awKtaMm4i9g5w==", "license": "MIT", + "peer": true, "dependencies": { "@types/request": "^2.48.8", "extend": "^3.0.2", @@ -2107,6 +2132,7 @@ "resolved": "https://registry.npmjs.org/stream-events/-/stream-events-1.0.5.tgz", "integrity": "sha512-E1GUzBSgvct8Jsb3v2X15pjzN1tYebtbLaMg+eBOUOAxgbLoSbT2NS91ckc5lJD1KfLjId+jXJRgo0qnV5Nerg==", "license": "MIT", + "peer": true, "dependencies": { "stubs": "^3.0.0" } @@ -2115,13 +2141,15 @@ "version": "1.0.3", "resolved": "https://registry.npmjs.org/stream-shift/-/stream-shift-1.0.3.tgz", "integrity": "sha512-76ORR0DO1o1hlKwTbi/DM3EXWGf3ZJYO8cXX5RJwnul2DEg2oyoZyjLNoQM8WsvZiFKCRfC1O0J7iCvie3RZmQ==", - "license": "MIT" + "license": "MIT", + "peer": true }, "node_modules/string_decoder": { "version": "1.3.0", "resolved": "https://registry.npmjs.org/string_decoder/-/string_decoder-1.3.0.tgz", "integrity": "sha512-hkRX8U1WjJFd8LsDJ2yQ/wWWxaopEsABU1XfkM8A+j0+85JAGppt16cr1Whg6KIbb4okU6Mql6BOj+uup/wKeA==", "license": "MIT", + "peer": true, "dependencies": { "safe-buffer": "~5.2.0" } @@ -2223,28 +2251,31 @@ } }, "node_modules/strnum": { - "version": "1.1.2", - "resolved": "https://registry.npmjs.org/strnum/-/strnum-1.1.2.tgz", - "integrity": "sha512-vrN+B7DBIoTTZjnPNewwhx6cBA/H+IS7rfW68n7XxC1y7uoiGQBxaKzqucGUgavX15dJgiGztLJ8vxuEzwqBdA==", + "version": "2.1.2", + "resolved": 
"https://registry.npmjs.org/strnum/-/strnum-2.1.2.tgz", + "integrity": "sha512-l63NF9y/cLROq/yqKXSLtcMeeyOfnSQlfMSlzFt/K73oIaD8DGaQWd7Z34X9GPiKqP5rbSh84Hl4bOlLcjiSrQ==", "funding": [ { "type": "github", "url": "https://github.com/sponsors/NaturalIntelligence" } ], - "license": "MIT" + "license": "MIT", + "peer": true }, "node_modules/stubs": { "version": "3.0.0", "resolved": "https://registry.npmjs.org/stubs/-/stubs-3.0.0.tgz", "integrity": "sha512-PdHt7hHUJKxvTCgbKX9C1V/ftOcjJQgz8BZwNfV5c4B6dcGqlpelTbJ999jBGZ2jYiPAwcX5dP6oBwVlBlUbxw==", - "license": "MIT" + "license": "MIT", + "peer": true }, "node_modules/teeny-request": { "version": "9.0.0", "resolved": "https://registry.npmjs.org/teeny-request/-/teeny-request-9.0.0.tgz", "integrity": "sha512-resvxdc6Mgb7YEThw6G6bExlXKkv6+YbuzGg9xuXxSgxJF7Ozs+o8Y9+2R3sArdWdW8nOokoQb1yrpFB0pQK2g==", "license": "Apache-2.0", + "peer": true, "dependencies": { "http-proxy-agent": "^5.0.0", "https-proxy-agent": "^5.0.0", @@ -2261,6 +2292,7 @@ "resolved": "https://registry.npmjs.org/agent-base/-/agent-base-6.0.2.tgz", "integrity": "sha512-RZNwNclF7+MS/8bDg70amg32dyeZGZxiDuQmZxKLAlQjr3jGyLx+4Kkk58UO7D2QdgFIQCovuSuZESne6RG6XQ==", "license": "MIT", + "peer": true, "dependencies": { "debug": "4" }, @@ -2273,6 +2305,7 @@ "resolved": "https://registry.npmjs.org/https-proxy-agent/-/https-proxy-agent-5.0.1.tgz", "integrity": "sha512-dFcAjpTQFgoLMzC2VwU+C/CbS7uRL0lWmxDITmqm7C+7F0Odmj6s9l6alZc6AELXhrnggM2CeWSXHGOdX2YtwA==", "license": "MIT", + "peer": true, "dependencies": { "agent-base": "6", "debug": "4" @@ -2314,7 +2347,8 @@ "version": "7.16.0", "resolved": "https://registry.npmjs.org/undici-types/-/undici-types-7.16.0.tgz", "integrity": "sha512-Zz+aZWSj8LE6zoxD+xrjh4VfkIG8Ya6LvYkZqtUQGJPZjYl53ypCaUwWqo7eI0x66KBGeRo+mlBEkMSeSZ38Nw==", - "license": "MIT" + "license": "MIT", + "peer": true }, "node_modules/unpipe": { "version": "1.0.0", @@ -2338,7 +2372,8 @@ "version": "1.0.2", "resolved": 
"https://registry.npmjs.org/util-deprecate/-/util-deprecate-1.0.2.tgz", "integrity": "sha512-EPD5q1uXyFxJpCrLnCc1nHnq3gOa6DZBocAIiI2TaSCA7VCJ1UJDMagCzIkXNsUYfD1daK//LTEQ8xiIbrHtcw==", - "license": "MIT" + "license": "MIT", + "peer": true }, "node_modules/uuid": { "version": "9.0.1", @@ -2525,6 +2560,7 @@ "resolved": "https://registry.npmjs.org/yocto-queue/-/yocto-queue-0.1.0.tgz", "integrity": "sha512-rVksvsnNCdJ/ohGc6xgPwyN8eheCxsiLM8mxuE/t/mOVqJewPuO1miLpTHQiRgTKCLexL4MeAFVagts7HmNZ2Q==", "license": "MIT", + "peer": true, "engines": { "node": ">=10" }, @@ -2537,7 +2573,6 @@ "resolved": "https://registry.npmjs.org/zod/-/zod-3.25.76.tgz", "integrity": "sha512-gzUt/qt81nXsFGKIFcC3YnfEAx5NkunCfnDlvuBSSFS02bcXu4Lmea0AFIUwbLWxWPx3d9p8S5QoaujKcNQxcQ==", "license": "MIT", - "peer": true, "funding": { "url": "https://github.com/sponsors/colinhacks" } From 7f88caa985b20ed3ec6486935466d8871e132349 Mon Sep 17 00:00:00 2001 From: manuka rahul <96047526+rahulpinto19@users.noreply.github.com> Date: Tue, 10 Feb 2026 13:34:22 +0000 Subject: [PATCH 09/13] docs: fix broken links (#2440) Fix broken links --------- Co-authored-by: Twisha Bansal <58483338+twishabansal@users.noreply.github.com> --- docs/en/how-to/connect-ide/postgres_mcp.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/docs/en/how-to/connect-ide/postgres_mcp.md b/docs/en/how-to/connect-ide/postgres_mcp.md index 44e3e09ade..7d778c6a4d 100644 --- a/docs/en/how-to/connect-ide/postgres_mcp.md +++ b/docs/en/how-to/connect-ide/postgres_mcp.md @@ -32,7 +32,7 @@ to expose your developer assistant tools to a Postgres instance: {{< notice tip >}} This guide can be used with [AlloyDB -Omni](https://cloud.google.com/alloydb/omni/current/docs/overview). +Omni](https://cloud.google.com/alloydb/omni/docs/overview). {{< /notice >}} ## Set up the database @@ -40,10 +40,10 @@ Omni](https://cloud.google.com/alloydb/omni/current/docs/overview). 1. Create or select a PostgreSQL instance. 
* [Install PostgreSQL locally](https://www.postgresql.org/download/) - * [Install AlloyDB Omni](https://cloud.google.com/alloydb/omni/current/docs/quickstart) + * [Install AlloyDB Omni](https://cloud.google.com/alloydb/omni/docs/quickstart) 1. Create or reuse [a database - user](https://cloud.google.com/alloydb/omni/current/docs/database-users/manage-users) + user](https://docs.cloud.google.com/alloydb/omni/containers/current/docs/database-users/manage-users) and have the username and password ready. ## Install MCP Toolbox From 1664a69dfd2789bc7fc63cd0aebc9ca5dbc81672 Mon Sep 17 00:00:00 2001 From: Twisha Bansal <58483338+twishabansal@users.noreply.github.com> Date: Tue, 10 Feb 2026 22:11:02 +0530 Subject: [PATCH 10/13] docs: add pre/post processing docs for langchain python (#2378) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit ## Description Trigger has been tested corresponding to local changes. Latest successful run: https://pantheon.corp.google.com/cloud-build/builds;region=global/1c37031f-95f1-4c6c-9ef8-0452277599d5?e=13802955&mods=-autopush_coliseum&project=toolbox-testing-438616 Note: After merging, update python pre and post processing sample testing trigger. ## PR Checklist > Thank you for opening a Pull Request! Before submitting your PR, there are a > few things you can do to make sure it goes smoothly: - [x] Make sure you reviewed [CONTRIBUTING.md](https://github.com/googleapis/genai-toolbox/blob/main/CONTRIBUTING.md) - [ ] Make sure to open an issue as a [bug/issue](https://github.com/googleapis/genai-toolbox/issues/new/choose) before writing your code! 
That way we can discuss the change, evaluate designs, and agree on the general idea - [ ] Ensure the tests and linter pass - [ ] Code coverage does not decrease (if any source code was changed) - [ ] Appropriate docs were updated (if necessary) - [ ] Make sure to add `!` if this involve a breaking change 🛠️ Fixes # --------- Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com> Co-authored-by: Yuan Teoh <45984206+Yuan325@users.noreply.github.com> Co-authored-by: Averi Kitsch --- .../py.integration.cloudbuild.yaml | 57 +++++++++ docs/en/samples/pre_post_processing/_index.md | 54 ++++++++ docs/en/samples/pre_post_processing/python.md | 40 ++++++ .../pre_post_processing/python/__init__.py | 19 +++ .../pre_post_processing/python/agent_test.py | 51 ++++++++ .../python/langchain/agent.py | 116 ++++++++++++++++++ .../python/langchain/requirements.txt | 3 + 7 files changed, 340 insertions(+) create mode 100644 .ci/sample_tests/pre_post_processing/py.integration.cloudbuild.yaml create mode 100644 docs/en/samples/pre_post_processing/_index.md create mode 100644 docs/en/samples/pre_post_processing/python.md create mode 100644 docs/en/samples/pre_post_processing/python/__init__.py create mode 100644 docs/en/samples/pre_post_processing/python/agent_test.py create mode 100644 docs/en/samples/pre_post_processing/python/langchain/agent.py create mode 100644 docs/en/samples/pre_post_processing/python/langchain/requirements.txt diff --git a/.ci/sample_tests/pre_post_processing/py.integration.cloudbuild.yaml b/.ci/sample_tests/pre_post_processing/py.integration.cloudbuild.yaml new file mode 100644 index 0000000000..5844226428 --- /dev/null +++ b/.ci/sample_tests/pre_post_processing/py.integration.cloudbuild.yaml @@ -0,0 +1,57 @@ +# Copyright 2026 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +steps: + - name: "${_IMAGE}" + id: "py-pre-post-processing-test" + entrypoint: "bash" + args: + - -c + - | + set -ex + chmod +x .ci/sample_tests/run_tests.sh + .ci/sample_tests/run_tests.sh + env: + - "CLOUD_SQL_INSTANCE=${_CLOUD_SQL_INSTANCE}" + - "GCP_PROJECT=${_GCP_PROJECT}" + - "DATABASE_NAME=${_DATABASE_NAME}" + - "DB_USER=${_DB_USER}" + - "TARGET_ROOT=${_TARGET_ROOT}" + - "TARGET_LANG=${_TARGET_LANG}" + - "TABLE_NAME=${_TABLE_NAME}" + - "SQL_FILE=${_SQL_FILE}" + - "AGENT_FILE_PATTERN=${_AGENT_FILE_PATTERN}" + secretEnv: ["TOOLS_YAML_CONTENT", "GOOGLE_API_KEY", "DB_PASSWORD"] + +availableSecrets: + secretManager: + - versionName: projects/${_GCP_PROJECT}/secrets/${_TOOLS_YAML_SECRET}/versions/5 + env: "TOOLS_YAML_CONTENT" + - versionName: projects/${_GCP_PROJECT_NUMBER}/secrets/${_API_KEY_SECRET}/versions/latest + env: "GOOGLE_API_KEY" + - versionName: projects/${_GCP_PROJECT}/secrets/${_DB_PASS_SECRET}/versions/latest + env: "DB_PASSWORD" + +timeout: 1200s + +substitutions: + _TARGET_LANG: "python" + _IMAGE: "gcr.io/google.com/cloudsdktool/cloud-sdk:537.0.0" + _TARGET_ROOT: "docs/en/samples/pre_post_processing/python" + _TABLE_NAME: "hotels_py_pre_post_processing" + _SQL_FILE: ".ci/sample_tests/setup_hotels.sql" + _AGENT_FILE_PATTERN: "agent.py" + +options: + logging: CLOUD_LOGGING_ONLY \ No newline at end of file diff --git a/docs/en/samples/pre_post_processing/_index.md b/docs/en/samples/pre_post_processing/_index.md new file mode 100644 index 0000000000..6fcf570027 --- /dev/null +++ b/docs/en/samples/pre_post_processing/_index.md @@ -0,0 +1,54 
@@ +--- +title: "Pre- and Post- Processing" +type: docs +weight: 1 +description: > + Intercept and modify interactions between the agent and its tools either before or after a tool is executed. +--- + +Pre- and post- processing allow developers to intercept and modify interactions between the agent and its tools or the user. + +{{< notice note >}} + +These capabilities are typically features of **orchestration frameworks** (like LangChain, LangGraph, or Agent Builder) rather than the Toolbox SDK itself. However, Toolbox tools are designed to fully leverage these framework capabilities to support robust, secure, and compliant agent architectures. + +{{< /notice >}} + +## Types of Processing + +### Pre-processing + +Pre-processing occurs before a tool is executed or an agent processes a message. Key types include: + +- **Input Sanitization & Redaction**: Detecting and masking sensitive information (like PII) in user queries or tool arguments to prevent it from being logged or sent to unauthorized systems. +- **Business Logic Validation**: Verifying that the proposed action complies with business rules (e.g., ensuring a requested hotel stay does not exceed 14 days, or checking if a user has sufficient permission). +- **Security Guardrails**: Analyzing inputs for potential prompt injection attacks or malicious payloads. + +### Post-processing + +Post-processing occurs after a tool has executed or the model has generated a response. Key types include: + +- **Response Enrichment**: Injecting additional data into the tool output that wasn't part of the raw API response (e.g., calculating loyalty points earned based on the booking value). +- **Output Formatting**: Transforming raw data (like JSON or XML) into a more human-readable or model-friendly format to improve the agent's understanding. +- **Compliance Auditing**: Logging the final outcome of transactions, including the original request and the result, to a secure audit trail. 
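Stripped of any orchestration framework, the pre-processing stage above reduces to wrapping a tool call in plain Python and short-circuiting before the tool runs. The sketch below is illustrative only: the tool name, argument names, and the 14-day rule are stand-ins, not Toolbox or framework APIs.

```python
from functools import wraps

MAX_STAY_DAYS = 14  # illustrative business rule


def with_business_rules(tool_fn):
    """Pre-processing: validate arguments before the tool ever runs."""

    @wraps(tool_fn)
    def wrapper(**args):
        checkin = args.get("checkin_day")
        checkout = args.get("checkout_day")
        if checkin is not None and checkout is not None:
            if checkout - checkin > MAX_STAY_DAYS:
                # Short-circuit: the wrapped tool is never invoked.
                return {"error": f"Maximum stay duration is {MAX_STAY_DAYS} days."}
        return tool_fn(**args)

    return wrapper


@with_business_rules
def update_hotel(hotel_id, checkin_day, checkout_day):
    # Stand-in for a real tool invocation.
    return {"status": f"Hotel {hotel_id} updated."}


print(update_hotel(hotel_id=3, checkin_day=1, checkout_day=20))
# {'error': 'Maximum stay duration is 14 days.'}
```

The same wrapper shape also covers post-processing: run `tool_fn` first, then inspect or rewrite its return value before handing it back to the agent.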
+ +## Processing Scopes + +While processing logic can be applied at various levels (Agent, Model, Tool), this guide primarily focuses on **Tool Level** processing, which is most relevant for granular control over tool execution. + +### Tool Level (Primary Focus) + +Wraps individual tool executions. This is best for logic specific to a single tool or a set of tools. + +- **Scope**: Intercepts the raw inputs (arguments) to a tool and its outputs. +- **Use Cases**: Argument validation, output formatting, specific privacy rules for sensitive tools. + +### Other Levels + +It is helpful to understand how tool-level processing differs from other scopes: + +- **Model Level**: Intercepts individual calls to the LLM (prompts and responses). Unlike tool-level, this applies globally to all text sent/received, making it better for global PII redaction or token tracking. + +- **Agent Level**: Wraps the high-level execution loop (e.g., a "turn" in the conversation). Unlike tool-level, this envelops the entire turn (user input to final response), making it suitable for session management or end-to-end auditing. + + +## Samples diff --git a/docs/en/samples/pre_post_processing/python.md b/docs/en/samples/pre_post_processing/python.md new file mode 100644 index 0000000000..1c4311f487 --- /dev/null +++ b/docs/en/samples/pre_post_processing/python.md @@ -0,0 +1,40 @@ +--- +title: "Python" +type: docs +weight: 1 +description: > + How to add pre- and post- processing to your Agents using Python. +--- + +## Prerequisites + +This tutorial assumes that you have set up Toolbox with a basic agent as described in the [local quickstart](../../getting-started/local_quickstart.md). + +This guide demonstrates how to implement these patterns in your Toolbox applications. + +## Implementation + +{{< tabpane persist=header >}} +{{% tab header="ADK" text=true %}} +Coming soon. 
+{{% /tab %}} +{{% tab header="Langchain" text=true %}} +The following example demonstrates how to use `ToolboxClient` with LangChain's middleware to implement pre- and post- processing for tool calls. + +```py +{{< include "python/langchain/agent.py" >}} +``` + +You can also add model-level (`wrap_model`) and agent-level (`before_agent`, `after_agent`) hooks to intercept messages at different stages of the execution loop. See the [LangChain Middleware documentation](https://docs.langchain.com/oss/python/langchain/middleware/custom#wrap-style-hooks) for details on these additional hook types. +{{% /tab %}} +{{< /tabpane >}} + +## Results + +The output should look similar to the following. Note that exact responses may vary due to the non-deterministic nature of LLMs and differences between orchestration frameworks. + +``` +AI: Booking Confirmed! You earned 500 Loyalty Points with this stay. + +AI: Error: Maximum stay duration is 14 days. +``` diff --git a/docs/en/samples/pre_post_processing/python/__init__.py b/docs/en/samples/pre_post_processing/python/__init__.py new file mode 100644 index 0000000000..f5b7c1bfd2 --- /dev/null +++ b/docs/en/samples/pre_post_processing/python/__init__.py @@ -0,0 +1,19 @@ +# Copyright 2026 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + + +# This file makes the 'pre_post_processing/python' directory a Python package. + +# You can include any package-level initialization logic here if needed. +# For now, this file is empty. 
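The enrichment stage behind the "Loyalty Points" result shown above can also be sketched without any framework: execute the tool first, then rewrite its successful output. The bonus value, tool name, and error convention here are illustrative stand-ins, not the sample's actual APIs.

```python
from functools import wraps

LOYALTY_BONUS = 500  # illustrative value


def with_loyalty_enrichment(tool_fn):
    """Post-processing: run the tool first, then rewrite successful output."""

    @wraps(tool_fn)
    def wrapper(**args):
        result = tool_fn(**args)  # EXEC: the tool runs unmodified
        if "Error" not in result:  # POST: enrich successful responses only
            return (
                f"Booking Confirmed! You earned {LOYALTY_BONUS} Loyalty Points "
                f"with this stay.\n\nSystem Details: {result}"
            )
        return result

    return wrapper


@with_loyalty_enrichment
def book_hotel(hotel_id):
    # Stand-in for a real booking tool.
    return f"Booked hotel {hotel_id}."


print(book_hotel(hotel_id=3).splitlines()[0])
# Booking Confirmed! You earned 500 Loyalty Points with this stay.
```

In the LangChain sample that follows, the same execute-then-rewrite step happens inside a `wrap_tool_call` middleware instead of a plain decorator.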
diff --git a/docs/en/samples/pre_post_processing/python/agent_test.py b/docs/en/samples/pre_post_processing/python/agent_test.py new file mode 100644 index 0000000000..36c5b8e27d --- /dev/null +++ b/docs/en/samples/pre_post_processing/python/agent_test.py @@ -0,0 +1,51 @@ +# Copyright 2026 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +import asyncio +import importlib +import os +from pathlib import Path + +import pytest + +ORCH_NAME = os.environ.get("ORCH_NAME") +module_path = f"python.{ORCH_NAME}.agent" +agent = importlib.import_module(module_path) + +GOLDEN_KEYWORDS = [ + "AI:", + "Loyalty Points", + "POLICY CHECK: Intercepting 'update-hotel'", +] + +# --- Execution Tests --- +class TestExecution: + """Test framework execution and output validation.""" + + @pytest.fixture(scope="function") + def script_output(self, capsys): + """Run the agent function and return its output.""" + asyncio.run(agent.main()) + return capsys.readouterr() + + def test_script_runs_without_errors(self, script_output): + """Test that the script runs and produces no stderr.""" + assert script_output.err == "", f"Script produced stderr: {script_output.err}" + + def test_keywords_in_output(self, script_output): + """Test that expected keywords are present in the script's output.""" + output = script_output.out + print(f"\nAgent Output:\n{output}\n") + missing_keywords = [kw for kw in GOLDEN_KEYWORDS if kw not in output] + assert not missing_keywords, f"Missing keywords in output: 
{missing_keywords}" diff --git a/docs/en/samples/pre_post_processing/python/langchain/agent.py b/docs/en/samples/pre_post_processing/python/langchain/agent.py new file mode 100644 index 0000000000..5e174943a7 --- /dev/null +++ b/docs/en/samples/pre_post_processing/python/langchain/agent.py @@ -0,0 +1,116 @@ +import asyncio +from datetime import datetime + +from langchain.agents import create_agent +from langchain.agents.middleware import wrap_tool_call +from langchain_core.messages import ToolMessage +from langchain_google_vertexai import ChatVertexAI +from toolbox_langchain import ToolboxClient + +system_prompt = """ + You're a helpful hotel assistant. You handle hotel searching, booking and + cancellations. When the user searches for a hotel, mention its name, id, + location and price tier. Always mention hotel ids while performing any + searches. This is very important for any operations. For any bookings or + cancellations, please provide the appropriate confirmation. Be sure to + update checkin or checkout dates if mentioned by the user. + Don't ask for confirmations from the user. +""" + + +# Pre processing +@wrap_tool_call +async def enforce_business_rules(request, handler): + """ + Business Logic Validation: + Enforces max stay duration (e.g., max 14 days). 
+ """ + tool_call = request.tool_call + name = tool_call["name"] + args = tool_call["args"] + + print(f"POLICY CHECK: Intercepting '{name}'") + + if name == "update-hotel": + if "checkin_date" in args and "checkout_date" in args: + try: + start = datetime.fromisoformat(args["checkin_date"]) + end = datetime.fromisoformat(args["checkout_date"]) + duration = (end - start).days + + if duration > 14: + print("BLOCKED: Stay too long") + return ToolMessage( + content="Error: Maximum stay duration is 14 days.", + tool_call_id=tool_call["id"], + ) + except ValueError: + pass # Ignore invalid date formats + + # PRE: Code here runs BEFORE the tool execution + + # EXEC: Execute the tool (or next middleware) + result = await handler(request) + + # POST: Code here runs AFTER the tool execution + return result + + +# Post processing +@wrap_tool_call +async def enrich_response(request, handler): + """ + Post-Processing & Enrichment: + Adds loyalty points information to successful bookings. + Standardizes output format. + """ + # PRE: Code here runs BEFORE the tool execution + + # EXEC: Execute the tool (or next middleware) + result = await handler(request) + + # POST: Code here runs AFTER the tool execution + if isinstance(result, ToolMessage): + content = str(result.content) + tool_name = request.tool_call["name"] + + if tool_name == "book-hotel" and "Error" not in content: + loyalty_bonus = 500 + result.content = f"Booking Confirmed!\n You earned {loyalty_bonus} Loyalty Points with this stay.\n\nSystem Details: {content}" + + return result + + +async def main(): + async with ToolboxClient("http://127.0.0.1:5000") as client: + tools = await client.aload_toolset("my-toolset") + model = ChatVertexAI(model="gemini-2.5-flash") + agent = create_agent( + system_prompt=system_prompt, + model=model, + tools=tools, + # add any pre and post processing methods + middleware=[enforce_business_rules, enrich_response], + ) + + user_input = "Book hotel with id 3." 
+ response = await agent.ainvoke( + {"messages": [{"role": "user", "content": user_input}]} + ) + + print("-" * 50) + last_ai_msg = response["messages"][-1].content + print(f"AI: {last_ai_msg}") + + # Test Pre-processing + print("-" * 50) + user_input = "Update my hotel with id 3 with checkin date 2025-01-18 and checkout date 2025-01-20" + response = await agent.ainvoke( + {"messages": [{"role": "user", "content": user_input}]} + ) + last_ai_msg = response["messages"][-1].content + print(f"AI: {last_ai_msg}") + + +if __name__ == "__main__": + asyncio.run(main()) diff --git a/docs/en/samples/pre_post_processing/python/langchain/requirements.txt b/docs/en/samples/pre_post_processing/python/langchain/requirements.txt new file mode 100644 index 0000000000..5638e0c108 --- /dev/null +++ b/docs/en/samples/pre_post_processing/python/langchain/requirements.txt @@ -0,0 +1,3 @@ +langchain==1.2.6 +langchain-google-vertexai==3.2.2 +toolbox-langchain==0.5.8 \ No newline at end of file From 4fb5b34a5a7d84a6b3e02cb8494672ea1da7eba6 Mon Sep 17 00:00:00 2001 From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com> Date: Tue, 10 Feb 2026 17:04:04 +0000 Subject: [PATCH 11/13] chore(deps): bump axios from 1.12.2 to 1.13.5 in /docs/en/getting-started/quickstart/js/genkit (#2442) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Bumps [axios](https://github.com/axios/axios) from 1.12.2 to 1.13.5.
Release notes

Sourced from axios's releases.

v1.13.5

Release 1.13.5

Highlights

  • Security: Fixed a potential Denial of Service issue involving the __proto__ key in mergeConfig. (PR #7369)
  • Bug fix: Resolved an issue where AxiosError could be missing the status field on and after v1.13.3. (PR #7368)

Changes

Security

  • Fix Denial of Service via __proto__ key in mergeConfig. (PR #7369)

Fixes

  • Fix/5657. (PR #7313)
  • Ensure status is present in AxiosError on and after v1.13.3. (PR #7368)

Features / Improvements

  • Add input validation to isAbsoluteURL. (PR #7326)
  • Refactor: bump minor package versions. (PR #7356)

Documentation

  • Clarify object-check comment. (PR #7323)
  • Fix deprecated Buffer constructor usage and README formatting. (PR #7371)

CI / Maintenance

  • Chore: fix issues with YAML. (PR #7355)
  • CI: update workflow YAMLs. (PR #7372)
  • CI: fix run condition. (PR #7373)
  • Dev deps: bump karma-sourcemap-loader from 0.3.8 to 0.4.0. (PR #7360)
  • Chore(release): prepare release 1.13.5. (PR #7379)

New Contributors

Full Changelog: https://github.com/axios/axios/compare/v1.13.4...v1.13.5

v1.13.4

Overview

The release addresses issues discovered in v1.13.3 and includes significant CI/CD improvements.

Full Changelog: v1.13.3...v1.13.4

What's New in v1.13.4

Bug Fixes

  • fix: issues with version 1.13.3 (#7352) (ee90dfc)
    • Fixed issues discovered in v1.13.3 release

... (truncated)

Changelog

Sourced from axios's changelog.

Changelog

1.13.3 (2026-01-20)

Bug Fixes

  • http2: Use port 443 for HTTPS connections by default. (#7256) (d7e6065)
  • interceptor: handle the error in the same interceptor (#6269) (5945e40)
  • main field in package.json should correspond to cjs artifacts (#5756) (7373fbf)
  • package.json: add 'bun' package.json 'exports' condition. Load the Node.js build in Bun instead of the browser build (#5754) (b89217e)
  • silentJSONParsing=false should throw on invalid JSON (#7253) (#7257) (7d19335)
  • turn AxiosError into a native error (#5394) (#5558) (1c6a86d)
  • types: add handlers to AxiosInterceptorManager interface (#5551) (8d1271b)
  • types: restore AxiosError.cause type from unknown to Error (#7327) (d8233d9)
  • unclear error message is thrown when specifying an empty proxy authorization (#6314) (6ef867e)

Features

Reverts

  • Revert "fix: silentJSONParsing=false should throw on invalid JSON (#7253) (#7…" (#7298) (a4230f5), closes #7253 #7 #7298
  • deps: bump peter-evans/create-pull-request from 7 to 8 in the github-actions group (#7334) (2d6ad5e)

Contributors to this release

... (truncated)

Commits
  • 29f7542 chore(release): prepare release 1.13.5 (#7379)
  • 431c3a3 ci: fix run condition (#7373)
  • 9ff3a78 ci: update ymls (#7372)
  • 265b712 docs: fix deprecated Buffer constructor and formatting issues in README (#7371)
  • 475e75a feat: add input validation to isAbsoluteURL (#7326)
  • 28c7215 fix: Denial of Service via proto Key in mergeConfig (#7369)
  • 04cf019 docs: clarify object check comment (#7323)
  • 696fa75 fix: status is missing in AxiosError on and after v1.13.3 (#7368)
  • 569f028 fix: added a option to choose between legacy and the new request/response int...
  • 44b7c9f chore(deps-dev): bump karma-sourcemap-loader (#7360)
  • Additional commits viewable in compare view
Maintainer changes

This version was pushed to npm by [GitHub Actions](https://www.npmjs.com/~GitHub Actions), a new releaser for axios since your current version.


[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=axios&package-manager=npm_and_yarn&previous-version=1.12.2&new-version=1.13.5)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. [//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) ---
Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot show ignore conditions` will show all of the ignore conditions of the specified dependency - `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself) You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/googleapis/genai-toolbox/network/alerts).
Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> --- .../quickstart/js/genkit/package-lock.json | 17 +++++++++-------- 1 file changed, 9 insertions(+), 8 deletions(-) diff --git a/docs/en/getting-started/quickstart/js/genkit/package-lock.json b/docs/en/getting-started/quickstart/js/genkit/package-lock.json index 1b00f903e3..cdb5744245 100644 --- a/docs/en/getting-started/quickstart/js/genkit/package-lock.json +++ b/docs/en/getting-started/quickstart/js/genkit/package-lock.json @@ -3351,13 +3351,13 @@ "integrity": "sha512-Oei9OH4tRh0YqU3GxhX79dM/mwVgvbZJaSNaRk+bshkj0S5cfHcgYakreBjrHwatXKbz+IoIdYLxrKim2MjW0Q==" }, "node_modules/axios": { - "version": "1.12.2", - "resolved": "https://registry.npmjs.org/axios/-/axios-1.12.2.tgz", - "integrity": "sha512-vMJzPewAlRyOgxV2dU0Cuz2O8zzzx9VYtbJOaBgXFeLc4IV/Eg50n4LowmehOOR61S8ZMpc2K5Sa7g6A4jfkUw==", + "version": "1.13.5", + "resolved": "https://registry.npmjs.org/axios/-/axios-1.13.5.tgz", + "integrity": "sha512-cz4ur7Vb0xS4/KUN0tPWe44eqxrIu31me+fbang3ijiNscE129POzipJJA6zniq2C/Z6sJCjMimjS8Lc/GAs8Q==", "license": "MIT", "dependencies": { - "follow-redirects": "^1.15.6", - "form-data": "^4.0.4", + "follow-redirects": "^1.15.11", + "form-data": "^4.0.5", "proxy-from-env": "^1.1.0" } }, @@ -4248,9 +4248,10 @@ } }, "node_modules/form-data": { - "version": "4.0.4", - "resolved": "https://registry.npmjs.org/form-data/-/form-data-4.0.4.tgz", - "integrity": "sha512-KrGhL9Q4zjj0kiUt5OO4Mr/A/jlI2jDYs5eHBpYHPcBEVSiipAvn2Ko2HnPe20rmcuuvMHNdZFp+4IlGTMF0Ow==", + "version": "4.0.5", + "resolved": "https://registry.npmjs.org/form-data/-/form-data-4.0.5.tgz", + "integrity": "sha512-8RipRLol37bNs2bhoV67fiTEvdTrbMUYcFTiy3+wuuOnUog2QBHCZWXDRijWQfAkhBj2Uf5UnVaiWwA5vdd82w==", + "license": "MIT", "dependencies": { "asynckit": "^0.4.0", "combined-stream": "^1.0.8", From 6e8255476a74e16ba6f3a05a9c00898abe88b701 Mon Sep 17 00:00:00 2001 From: "dependabot[bot]" 
<49699333+dependabot[bot]@users.noreply.github.com> Date: Tue, 10 Feb 2026 09:43:28 -0800 Subject: [PATCH 12/13] chore(deps): bump langsmith from 0.4.3 to 0.5.0 in /docs/en/getting-started/quickstart/js/langchain (#2438) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Bumps [langsmith](https://github.com/langchain-ai/langsmith-sdk) from 0.4.3 to 0.5.0.
Release notes

Sourced from langsmith's releases.

v0.5.0

What's Changed

Full Changelog: https://github.com/langchain-ai/langsmith-sdk/compare/v0.4.60...v0.5.0

v0.4.60

What's Changed

New Contributors

Full Changelog: https://github.com/langchain-ai/langsmith-sdk/compare/v0.4.59...v0.4.60

v0.4.59

What's Changed

Full Changelog: https://github.com/langchain-ai/langsmith-sdk/compare/v0.4.58...v0.4.59

v0.4.58

What's Changed

Full Changelog: https://github.com/langchain-ai/langsmith-sdk/compare/v0.4.57...v0.4.58

... (truncated)

Commits
Maintainer changes

This version was pushed to npm by [GitHub Actions](https://www.npmjs.com/~GitHub Actions), a new releaser for langsmith since your current version.


[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=langsmith&package-manager=npm_and_yarn&previous-version=0.4.3&new-version=0.5.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. [//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) ---
Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot show ignore conditions` will show all of the ignore conditions of the specified dependency - `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself) You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/googleapis/genai-toolbox/network/alerts).
Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> --- .../quickstart/js/langchain/package-lock.json | 15 ++++++++++----- 1 file changed, 10 insertions(+), 5 deletions(-) diff --git a/docs/en/getting-started/quickstart/js/langchain/package-lock.json b/docs/en/getting-started/quickstart/js/langchain/package-lock.json index a52001ef13..c71fb84620 100644 --- a/docs/en/getting-started/quickstart/js/langchain/package-lock.json +++ b/docs/en/getting-started/quickstart/js/langchain/package-lock.json @@ -18,7 +18,8 @@ "node_modules/@cfworker/json-schema": { "version": "4.1.1", "resolved": "https://registry.npmjs.org/@cfworker/json-schema/-/json-schema-4.1.1.tgz", - "integrity": "sha512-gAmrUZSGtKc3AiBL71iNWxDsyUC5uMaKKGdvzYsBoTW/xi42JQHl7eKV2OYzCUqvc+D2RCcf7EXY2iCyFIk6og==" + "integrity": "sha512-gAmrUZSGtKc3AiBL71iNWxDsyUC5uMaKKGdvzYsBoTW/xi42JQHl7eKV2OYzCUqvc+D2RCcf7EXY2iCyFIk6og==", + "peer": true }, "node_modules/@google/generative-ai": { "version": "0.24.1", @@ -225,6 +226,7 @@ "version": "5.2.0", "resolved": "https://registry.npmjs.org/ansi-styles/-/ansi-styles-5.2.0.tgz", "integrity": "sha512-Cxwpt2SfTzTtXcfOlzGEee8O+c+MmUgGrNiBcXnuWxuFJHe6a5Hz7qwhwe5OgaSYI0IJvkLqWX1ASG+cJOkEiA==", + "peer": true, "engines": { "node": ">=10" }, @@ -308,6 +310,7 @@ "version": "6.3.0", "resolved": "https://registry.npmjs.org/camelcase/-/camelcase-6.3.0.tgz", "integrity": "sha512-Gmy6FhYlCY7uOElZUSbxo2UCDH8owEk996gkbrpsgGtrJLM3J7jGxl9Ic7Qwwj4ivOE5AWZWRMecDdF7hqGjFA==", + "peer": true, "engines": { "node": ">=10" }, @@ -420,6 +423,7 @@ "version": "1.2.0", "resolved": "https://registry.npmjs.org/decamelize/-/decamelize-1.2.0.tgz", "integrity": "sha512-z2S+W9X73hAUUki+N+9Za2lBlun89zigOyGrsax+KUQ6wKW4ZoWpEYBkGhQjwAjjDCkWxhY0VKEhk8wzY7F5cA==", + "peer": true, "engines": { "node": ">=0.10.0" } @@ -821,6 +825,7 @@ "version": "1.0.21", "resolved": "https://registry.npmjs.org/js-tiktoken/-/js-tiktoken-1.0.21.tgz", "integrity": 
"sha512-biOj/6M5qdgx5TKjDnFT1ymSpM5tbd3ylwDtrQvFQSu0Z7bBYko2dF+W/aUkXUPuk6IVpRxk/3Q2sHOzGlS36g==", + "peer": true, "dependencies": { "base64-js": "^1.5.1" } @@ -873,9 +878,9 @@ } }, "node_modules/langsmith": { - "version": "0.4.3", - "resolved": "https://registry.npmjs.org/langsmith/-/langsmith-0.4.3.tgz", - "integrity": "sha512-vuBAagBZulXj0rpZhUTxmHhrYIBk53z8e2Q8ty4OHVkahN4ul7Im3OZxD9jsXZB0EuncK1xRYtY8J3BW4vj1zw==", + "version": "0.5.2", + "resolved": "https://registry.npmjs.org/langsmith/-/langsmith-0.5.2.tgz", + "integrity": "sha512-CfkcQsiajtTWknAcyItvJsKEQdY2VgDpm6U8pRI9wnM07mevnOv5EF+RcqWGwx37SEUxtyi2RXMwnKW8b06JtA==", "license": "MIT", "dependencies": { "@types/uuid": "^10.0.0", @@ -969,6 +974,7 @@ "version": "4.2.0", "resolved": "https://registry.npmjs.org/mustache/-/mustache-4.2.0.tgz", "integrity": "sha512-71ippSywq5Yb7/tVYyGbkBggbU8H3u5Rz56fH60jGFgr8uHwxs+aSKeqmluIVzM0m0kB7xQjKS6qPfd0b2ZoqQ==", + "peer": true, "bin": { "mustache": "bin/mustache" } @@ -1407,7 +1413,6 @@ "version": "3.25.76", "resolved": "https://registry.npmjs.org/zod/-/zod-3.25.76.tgz", "integrity": "sha512-gzUt/qt81nXsFGKIFcC3YnfEAx5NkunCfnDlvuBSSFS02bcXu4Lmea0AFIUwbLWxWPx3d9p8S5QoaujKcNQxcQ==", - "peer": true, "funding": { "url": "https://github.com/sponsors/colinhacks" } From 1f8019c50a06d65553abd93da833b6dba09c612b Mon Sep 17 00:00:00 2001 From: Anubhav Dhawan Date: Wed, 11 Feb 2026 00:15:00 +0530 Subject: [PATCH 13/13] docs(adk): align quickstart script with other orchestrations (#2423) This brings the ADK Python quickstart sample up to par with the other orchestrations in the guidance, removing the need for special test handling. 
--- docs/en/getting-started/local_quickstart.md | 2 +- .../quickstart/python/adk/quickstart.py | 44 ++++++++++++++++++- .../quickstart/python/quickstart_test.py | 26 +++++------ docs/en/how-to/deploy_adk_agent.md | 13 +++--- 4 files changed, 60 insertions(+), 25 deletions(-) diff --git a/docs/en/getting-started/local_quickstart.md b/docs/en/getting-started/local_quickstart.md index 9049082a01..414156f672 100644 --- a/docs/en/getting-started/local_quickstart.md +++ b/docs/en/getting-started/local_quickstart.md @@ -115,7 +115,7 @@ pip install google-genai 1. Update `my_agent/agent.py` with the following content to connect to Toolbox: ```py - {{< include "quickstart/python/adk/quickstart.py" >}} + {{< regionInclude "quickstart/python/adk/quickstart.py" "quickstart" >}} ```
diff --git a/docs/en/getting-started/quickstart/python/adk/quickstart.py b/docs/en/getting-started/quickstart/python/adk/quickstart.py index 477ced578d..42a72fa38e 100644 --- a/docs/en/getting-started/quickstart/python/adk/quickstart.py +++ b/docs/en/getting-started/quickstart/python/adk/quickstart.py @@ -1,6 +1,21 @@ +# [START quickstart] +import asyncio + from google.adk import Agent from google.adk.apps import App +from google.adk.runners import InMemoryRunner from google.adk.tools.toolbox_toolset import ToolboxToolset +from google.genai.types import Content, Part + +prompt = """ +You're a helpful hotel assistant. You handle hotel searching, booking and +cancellations. When the user searches for a hotel, mention its name, id, +location and price tier. Always mention hotel ids while performing any +searches. This is very important for any operations. For any bookings or +cancellations, please provide the appropriate confirmation. Be sure to +update checkin or checkout dates if mentioned by the user. +Don't ask for confirmations from the user. +""" # TODO(developer): update the TOOLBOX_URL to your toolbox endpoint toolset = ToolboxToolset( @@ -8,10 +23,35 @@ toolset = ToolboxToolset( ) root_agent = Agent( - name='root_agent', + name='hotel_assistant', model='gemini-2.5-flash', - instruction="You are a helpful AI assistant designed to provide accurate and useful information.", + instruction=prompt, tools=[toolset], ) app = App(root_agent=root_agent, name="my_agent") +# [END quickstart] + +queries = [ + "Find hotels in Basel with Basel in its name.", + "Can you book the Hilton Basel for me?", + "Oh wait, this is too expensive. 
Please cancel it and book the Hyatt Regency instead.", + "My check in dates would be from April 10, 2024 to April 19, 2024.", +] + +async def main(): + runner = InMemoryRunner(app=app) + session = await runner.session_service.create_session( + app_name=app.name, user_id="test_user" + ) + + for query in queries: + print(f"\nUser: {query}") + user_message = Content(parts=[Part.from_text(text=query)]) + + async for event in runner.run_async(user_id="test_user", session_id=session.id, new_message=user_message): + if event.is_final_response() and event.content and event.content.parts: + print(f"Agent: {event.content.parts[0].text}") + +if __name__ == "__main__": + asyncio.run(main()) diff --git a/docs/en/getting-started/quickstart/python/quickstart_test.py b/docs/en/getting-started/quickstart/python/quickstart_test.py index eb46bee1f8..b6c6e3a8a8 100755 --- a/docs/en/getting-started/quickstart/python/quickstart_test.py +++ b/docs/en/getting-started/quickstart/python/quickstart_test.py @@ -41,31 +41,29 @@ def golden_keywords(): class TestExecution: """Test framework execution and output validation.""" + _cached_output = None + @pytest.fixture(scope="function") def script_output(self, capsys): """Run the quickstart function and return its output.""" - - # TODO: Add better validation for ADK once we have a way to capture its - # output. 
- if ORCH_NAME == "adk": - return quickstart.app.root_agent.name - else: + if TestExecution._cached_output is None: asyncio.run(quickstart.main()) - - return capsys.readouterr() + out, err = capsys.readouterr() + TestExecution._cached_output = (out, err) + + class Output: + def __init__(self, out, err): + self.out = out + self.err = err + + return Output(*TestExecution._cached_output) def test_script_runs_without_errors(self, script_output): """Test that the script runs and produces no stderr.""" - if ORCH_NAME == "adk": - return assert script_output.err == "", f"Script produced stderr: {script_output.err}" def test_keywords_in_output(self, script_output, golden_keywords): """Test that expected keywords are present in the script's output.""" - - if ORCH_NAME == "adk": - assert script_output == "root_agent" - return output = script_output.out missing_keywords = [kw for kw in golden_keywords if kw not in output] assert not missing_keywords, f"Missing keywords in output: {missing_keywords}" diff --git a/docs/en/how-to/deploy_adk_agent.md b/docs/en/how-to/deploy_adk_agent.md index 973d84dfe6..cc247d4831 100644 --- a/docs/en/how-to/deploy_adk_agent.md +++ b/docs/en/how-to/deploy_adk_agent.md @@ -83,15 +83,12 @@ Toolbox instead of the local address. 2. Open your agent file (`my_agent/agent.py`). -3. Update the `ToolboxSyncClient` initialization to use your Cloud Run URL. +3. Update the `ToolboxToolset` initialization to point to your Cloud Run service URL. Replace the existing initialization code with the following: - {{% alert color="info" %}} -Since Cloud Run services are secured by default, you also need to provide an -authentication token. + {{% alert color="info" title="Note" %}} +Since Cloud Run services are secured by default, you also need to provide a workload identity. 
{{% /alert %}} - Replace your existing client initialization code with the following: - ```python from google.adk import Agent from google.adk.apps import App @@ -132,14 +129,14 @@ app = App(root_agent=root_agent, name="my_agent") Run the deployment command: ```bash -make backend +make deploy ``` This command will build your agent's container image and deploy it to Vertex AI. ## Step 6: Test your Deployment -Once the deployment command (`make backend`) completes, it will output the URL +Once the deployment command (`make deploy`) completes, it will output the URL for the Agent Engine Playground. You can click on this URL to open the Playground in your browser and start chatting with your agent to test the tools.