Compare commits

..

8 Commits

Author SHA1 Message Date
Yuan Teoh
6048756e17 Merge branch 'main' into httplog 2026-02-13 11:07:19 -08:00
Averi Kitsch
a678886ee3 docs: update prebuilt tools ref (#2459)
2026-02-13 18:57:55 +00:00
Averi Kitsch
6602abd059 docs: update diagram (#2461)
2026-02-13 10:25:34 -08:00
Binh Tran
62b830987d fix: Deflake alloydb omni (#2431)
## Description

Improve logic to check that the database is up.

**IMPORTANT** DO NOT MERGE until I have reverted
f7d7d9e708

## PR Checklist

- [x] Make sure you reviewed [CONTRIBUTING.md](https://github.com/googleapis/genai-toolbox/blob/main/CONTRIBUTING.md)
- [x] https://github.com/googleapis/genai-toolbox/issues/2422
- [ ] Ensure the tests and linter pass
- [ ] Code coverage does not decrease (if any source code was changed)
- [ ] Appropriate docs were updated (if necessary)
- [ ] Make sure to add `!` if this involves a breaking change

🛠️ Fixes #2422

Co-authored-by: Averi Kitsch <akitsch@google.com>
2026-02-13 09:55:53 -08:00
Yuan Teoh
d55666504d make a local copy of the schema instead of modifying it directly 2026-02-10 16:36:48 -08:00
Yuan Teoh
df0229c882 Update internal/log/log.go
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2026-02-10 16:29:01 -08:00
Yuan Teoh
d4959093c8 fix panic 2026-02-10 16:07:44 -08:00
Yuan Teoh
8ffafd7937 chore(deps): update httplog to v3 2026-02-10 10:56:47 -08:00
12 changed files with 200 additions and 158 deletions

.gitignore (vendored): 3 changed lines

@@ -18,9 +18,6 @@ node_modules
# coverage
.coverage
# python
__pycache__/
# executable
genai-toolbox
toolbox


@@ -1 +0,0 @@
GEMINI.md


@@ -1 +0,0 @@
GEMINI.md

GEMINI.md: 108 changed lines (file removed)

@@ -1,108 +0,0 @@
# MCP Toolbox Context
This file (symlinked as `CLAUDE.md` and `AGENTS.md`) provides context and guidelines for AI agents working on the MCP Toolbox for Databases project. It summarizes key information from `CONTRIBUTING.md` and `DEVELOPER.md`.
## Project Overview
**MCP Toolbox for Databases** is a Go-based project designed to provide Model Context Protocol (MCP) tools for various data sources and services. It allows Large Language Models (LLMs) to interact with databases and other tools safely and efficiently.
## Tech Stack
- **Language:** Go (1.23+)
- **Documentation:** Hugo (Extended Edition v0.146.0+)
- **Containerization:** Docker
- **CI/CD:** GitHub Actions, Google Cloud Build
- **Linting:** `golangci-lint`
## Key Directories
- `cmd/`: Application entry points.
- `internal/sources/`: Implementations of database sources (e.g., Postgres, BigQuery).
- `internal/tools/`: Implementations of specific tools for each source.
- `tests/`: Integration tests.
- `docs/`: Project documentation (Hugo site).
## Development Workflow
### Prerequisites
- Go 1.23 or later.
- Docker (for building container images and running some tests).
- Access to necessary Google Cloud resources for integration testing (if applicable).
### Building and Running
1. **Build Binary:** `go build -o toolbox`
2. **Run Server:** `go run .` (Listens on port 5000 by default)
3. **Run with Help:** `go run . --help`
4. **Test Endpoint:** `curl http://127.0.0.1:5000`
### Testing
- **Unit Tests:** `go test -race -v ./cmd/... ./internal/...`
- **Integration Tests:**
- Run specific source tests: `go test -race -v ./tests/<source_dir>`
- Example: `go test -race -v ./tests/alloydbpg`
- Add new sources to `.ci/integration.cloudbuild.yaml`
- **Linting:** `golangci-lint run --fix`
## Developing Documentation
### Prerequisites
- Hugo (Extended Edition v0.146.0+)
- Node.js (for `npm ci`)
### Running Local Server
1. Navigate to `.hugo` directory: `cd .hugo`
2. Install dependencies: `npm ci`
3. Start server: `hugo server`
### Versioning Workflows
1. **Deploy In-development docs**: Merges to main -> `/dev/`.
2. **Deploy Versioned Docs**: New Release -> `/<version>/` and root.
3. **Deploy Previous Version Docs**: Manual workflow for older versions.
## Coding Conventions
### Tool Naming
- **Tool Name:** `snake_case` (e.g., `list_collections`, `run_query`).
- Do *not* include the product name (e.g., avoid `firestore_list_collections`).
- **Tool Type:** `kebab-case` (e.g., `firestore-list-collections`).
- *Must* include the product name.
### Branching and Commits
- **Branch Naming:** `feat/`, `fix/`, `docs/`, `chore/` (e.g., `feat/add-gemini-md`).
- **Commit Messages:** [Conventional Commits](https://www.conventionalcommits.org/) format.
- Format: `<type>(<scope>): <description>`
- Example: `feat(source/postgres): add new connection option`
- Types: `feat`, `fix`, `docs`, `chore`, `test`, `ci`, `refactor`, `revert`, `style`.
## Adding New Features
### Adding a New Data Source
1. Create a new directory: `internal/sources/<newdb>`.
2. Define `Config` and `Source` structs in `internal/sources/<newdb>/<newdb>.go`.
3. Implement `SourceConfig` interface (`SourceConfigType`, `Initialize`).
4. Implement `Source` interface (`SourceType`).
5. Implement `init()` to register the source.
6. Add unit tests in `internal/sources/<newdb>/<newdb>_test.go`.
### Adding a New Tool
1. Create a new directory: `internal/tools/<newdb>/<toolname>`.
2. Define `Config` and `Tool` structs.
3. Implement `ToolConfig` interface (`ToolConfigType`, `Initialize`).
4. Implement `Tool` interface (`Invoke`, `ParseParams`, `Manifest`, `McpManifest`, `Authorized`).
5. Implement `init()` to register the tool.
6. Add unit tests.
### Adding Documentation
- Add source documentation to `docs/en/resources/sources/`.
- Add tool documentation to `docs/en/resources/tools/`.
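
The "Adding a New Data Source" steps in the deleted GEMINI.md above describe an `init()`-based registration pattern. Below is a schematic Go sketch of that pattern; the struct fields, method signatures, and the `registerSourceConfig` helper are illustrative stand-ins, not the actual genai-toolbox interfaces.

```go
// Package newdb sketches the registration pattern described above.
// All signatures here are hypothetical placeholders.
package newdb

import "context"

// Config is a hypothetical stand-in for the source's YAML config struct.
type Config struct {
	Name string `yaml:"name"`
	Kind string `yaml:"kind"`
	Host string `yaml:"host"`
}

// SourceConfigType reports the source kind (step 3 above).
func (c Config) SourceConfigType() string { return "newdb" }

// Initialize would open the connection pool and return the live source.
func (c Config) Initialize(ctx context.Context) (*Source, error) {
	return &Source{Name: c.Name}, nil
}

// Source is a hypothetical stand-in for the initialized source (step 4).
type Source struct {
	Name string
}

func (s *Source) SourceType() string { return "newdb" }

// init registers the config kind with a central registry (step 5); the
// helper below is a placeholder for the project's real registration call.
func init() {
	registerSourceConfig("newdb", func() any { return &Config{} })
}

func registerSourceConfig(kind string, factory func() any) {
	// Stand-in only; the real registry lives in internal/sources.
}
```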

Binary image file not shown: 241 KiB before, 271 KiB after.


@@ -44,6 +44,12 @@ See [Usage Examples](../reference/cli.md#examples).
* **Tools:**
* `execute_sql`: Executes a SQL query.
* `list_tables`: Lists tables in the database.
* `list_active_queries`: Lists ongoing queries.
* `list_available_extensions`: Discover all PostgreSQL extensions available for installation.
* `list_installed_extensions`: List all installed PostgreSQL extensions.
* `long_running_transactions`: Identifies and lists database transactions that exceed a specified time limit.
* `list_locks`: Identifies all locks held by active processes.
* `replication_stats`: Lists each replica's process ID and sync state.
* `list_autovacuum_configurations`: Lists autovacuum configurations in the
database.
* `list_memory_configurations`: Lists memory-related configurations in the
@@ -59,12 +65,16 @@ See [Usage Examples](../reference/cli.md#examples).
* `list_triggers`: Lists triggers in the database.
* `list_indexes`: List available user indexes in a PostgreSQL database.
* `list_sequences`: List sequences in a PostgreSQL database.
* `list_query_stats`: Lists query statistics.
* `get_column_cardinality`: Gets column cardinality.
* `list_table_stats`: Lists table statistics.
* `list_publication_tables`: List publication tables in a PostgreSQL database.
* `list_tablespaces`: Lists tablespaces in the database.
* `list_pg_settings`: List configuration parameters for the PostgreSQL server.
* `list_database_stats`: Lists the key performance and activity statistics for
each database in the AlloyDB instance.
* `list_roles`: Lists all the user-created roles in PostgreSQL database.
* `list_stored_procedure`: Lists stored procedures.
## AlloyDB Postgres Admin
@@ -113,6 +123,12 @@ See [Usage Examples](../reference/cli.md#examples).
* **Tools:**
* `execute_sql`: Executes a SQL query.
* `list_tables`: Lists tables in the database.
* `list_active_queries`: Lists ongoing queries.
* `list_available_extensions`: Discover all PostgreSQL extensions available for installation.
* `list_installed_extensions`: List all installed PostgreSQL extensions.
* `long_running_transactions`: Identifies and lists database transactions that exceed a specified time limit.
* `list_locks`: Identifies all locks held by active processes.
* `replication_stats`: Lists each replica's process ID and sync state.
* `list_autovacuum_configurations`: Lists autovacuum configurations in the
database.
* `list_columnar_configurations`: List AlloyDB Omni columnar-related configurations.
@@ -130,12 +146,16 @@ See [Usage Examples](../reference/cli.md#examples).
* `list_triggers`: Lists triggers in the database.
* `list_indexes`: List available user indexes in a PostgreSQL database.
* `list_sequences`: List sequences in a PostgreSQL database.
* `list_query_stats`: Lists query statistics.
* `get_column_cardinality`: Gets column cardinality.
* `list_table_stats`: Lists table statistics.
* `list_publication_tables`: List publication tables in a PostgreSQL database.
* `list_tablespaces`: Lists tablespaces in the database.
* `list_pg_settings`: List configuration parameters for the PostgreSQL server.
* `list_database_stats`: Lists the key performance and activity statistics for
each database in the AlloyDB instance.
* `list_roles`: Lists all the user-created roles in PostgreSQL database.
* `list_stored_procedure`: Lists stored procedures.
## BigQuery
@@ -173,6 +193,21 @@ See [Usage Examples](../reference/cli.md#examples).
* `list_table_ids`: Lists tables.
* `search_catalog`: Search for entries based on the provided query.
## ClickHouse
* `--prebuilt` value: `clickhouse`
* **Environment Variables:**
* `CLICKHOUSE_HOST`: The hostname or IP address of the ClickHouse server.
* `CLICKHOUSE_PORT`: The port number of the ClickHouse server.
* `CLICKHOUSE_USER`: The database username.
* `CLICKHOUSE_PASSWORD`: The password for the database user.
* `CLICKHOUSE_DATABASE`: The name of the database to connect to.
* `CLICKHOUSE_PROTOCOL`: The protocol to use (e.g., http).
* **Tools:**
* `execute_sql`: Use this tool to execute SQL.
* `list_databases`: Use this tool to list all databases in ClickHouse.
* `list_tables`: Use this tool to list all tables in a specific ClickHouse database.
## Cloud SQL for MySQL
* `--prebuilt` value: `cloud-sql-mysql`
@@ -270,6 +305,12 @@ See [Usage Examples](../reference/cli.md#examples).
* **Tools:**
* `execute_sql`: Executes a SQL query.
* `list_tables`: Lists tables in the database.
* `list_active_queries`: Lists ongoing queries.
* `list_available_extensions`: Discover all PostgreSQL extensions available for installation.
* `list_installed_extensions`: List all installed PostgreSQL extensions.
* `long_running_transactions`: Identifies and lists database transactions that exceed a specified time limit.
* `list_locks`: Identifies all locks held by active processes.
* `replication_stats`: Lists each replica's process ID and sync state.
* `list_autovacuum_configurations`: Lists autovacuum configurations in the
database.
* `list_memory_configurations`: Lists memory-related configurations in the
@@ -285,12 +326,16 @@ See [Usage Examples](../reference/cli.md#examples).
* `list_triggers`: Lists triggers in the database.
* `list_indexes`: List available user indexes in a PostgreSQL database.
* `list_sequences`: List sequences in a PostgreSQL database.
* `list_query_stats`: Lists query statistics.
* `get_column_cardinality`: Gets column cardinality.
* `list_table_stats`: Lists table statistics.
* `list_publication_tables`: List publication tables in a PostgreSQL database.
* `list_tablespaces`: Lists tablespaces in the database.
* `list_pg_settings`: List configuration parameters for the PostgreSQL server.
* `list_database_stats`: Lists the key performance and activity statistics for
each database in the postgreSQL instance.
* `list_roles`: Lists all the user-created roles in PostgreSQL database.
* `list_stored_procedure`: Lists stored procedures.
## Cloud SQL for PostgreSQL Observability
@@ -336,6 +381,7 @@ See [Usage Examples](../reference/cli.md#examples).
* `create_user`: Creates a new user in a Cloud SQL instance.
* `wait_for_operation`: Waits for a Cloud SQL operation to complete.
* `clone_instance`: Creates a clone for an existing Cloud SQL for PostgreSQL instance.
* `postgres_upgrade_precheck`: Performs a precheck for a major version upgrade of a Cloud SQL for PostgreSQL instance.
* `create_backup`: Creates a backup on a Cloud SQL instance.
* `restore_backup`: Restores a backup of a Cloud SQL instance.
@@ -420,6 +466,15 @@ See [Usage Examples](../reference/cli.md#examples).
* `search_aspect_types`: Finds aspect types relevant to the
query.
## Elasticsearch
* `--prebuilt` value: `elasticsearch`
* **Environment Variables:**
* `ELASTICSEARCH_HOST`: The hostname or IP address of the Elasticsearch server.
* `ELASTICSEARCH_APIKEY`: The API key for authentication.
* **Tools:**
* `execute_esql_query`: Use this tool to execute ES|QL queries.
## Firestore
* `--prebuilt` value: `firestore`
@@ -537,6 +592,19 @@ See [Usage Examples](../reference/cli.md#examples).
* `execute_sql`: Executes a SQL query.
* `list_tables`: Lists tables in the database.
## MindsDB
* `--prebuilt` value: `mindsdb`
* **Environment Variables:**
* `MINDSDB_HOST`: The hostname or IP address of the MindsDB server.
* `MINDSDB_PORT`: The port number of the MindsDB server.
* `MINDSDB_DATABASE`: The name of the database to connect to.
* `MINDSDB_USER`: The database username.
* `MINDSDB_PASS`: The password for the database user.
* **Tools:**
* `mindsdb-execute-sql`: Execute SQL queries directly on MindsDB database.
* `mindsdb-sql`: Execute parameterized SQL queries on MindsDB database.
## MySQL
* `--prebuilt` value: `mysql`
@@ -592,6 +660,12 @@ See [Usage Examples](../reference/cli.md#examples).
* **Tools:**
* `execute_sql`: Executes a SQL query.
* `list_tables`: Lists tables in the database.
* `list_active_queries`: Lists ongoing queries.
* `list_available_extensions`: Discover all PostgreSQL extensions available for installation.
* `list_installed_extensions`: List all installed PostgreSQL extensions.
* `long_running_transactions`: Identifies and lists database transactions that exceed a specified time limit.
* `list_locks`: Identifies all locks held by active processes.
* `replication_stats`: Lists each replica's process ID and sync state.
* `list_autovacuum_configurations`: Lists autovacuum configurations in the
database.
* `list_memory_configurations`: Lists memory-related configurations in the
@@ -607,12 +681,16 @@ See [Usage Examples](../reference/cli.md#examples).
* `list_triggers`: Lists triggers in the database.
* `list_indexes`: List available user indexes in a PostgreSQL database.
* `list_sequences`: List sequences in a PostgreSQL database.
* `list_query_stats`: Lists query statistics.
* `get_column_cardinality`: Gets column cardinality.
* `list_table_stats`: Lists table statistics.
* `list_publication_tables`: List publication tables in a PostgreSQL database.
* `list_tablespaces`: Lists tablespaces in the database.
* `list_pg_settings`: List configuration parameters for the PostgreSQL server.
* `list_database_stats`: Lists the key performance and activity statistics for
each database in the PostgreSQL server.
* `list_roles`: Lists all the user-created roles in PostgreSQL database.
* `list_stored_procedure`: Lists stored procedures.
## Google Cloud Serverless for Apache Spark
@@ -627,6 +705,38 @@ See [Usage Examples](../reference/cli.md#examples).
view serverless batches.
* **Tools:**
* `list_batches`: Lists Spark batches.
* `get_batch`: Gets information about a Spark batch.
* `cancel_batch`: Cancels a Spark batch.
* `create_pyspark_batch`: Creates a PySpark batch.
* `create_spark_batch`: Creates a Spark batch.
## SingleStore
* `--prebuilt` value: `singlestore`
* **Environment Variables:**
* `SINGLESTORE_HOST`: The hostname or IP address of the SingleStore server.
* `SINGLESTORE_PORT`: The port number of the SingleStore server.
* `SINGLESTORE_DATABASE`: The name of the database to connect to.
* `SINGLESTORE_USER`: The database username.
* `SINGLESTORE_PASSWORD`: The password for the database user.
* **Tools:**
* `execute_sql`: Use this tool to execute SQL.
* `list_tables`: Lists detailed schema information for user-created tables.
## Snowflake
* `--prebuilt` value: `snowflake`
* **Environment Variables:**
* `SNOWFLAKE_ACCOUNT`: The Snowflake account.
* `SNOWFLAKE_USER`: The database username.
* `SNOWFLAKE_PASSWORD`: The password for the database user.
* `SNOWFLAKE_DATABASE`: The name of the database to connect to.
* `SNOWFLAKE_SCHEMA`: The schema name.
* `SNOWFLAKE_WAREHOUSE`: The warehouse name.
* `SNOWFLAKE_ROLE`: The role name.
* **Tools:**
* `execute_sql`: Use this tool to execute SQL.
* `list_tables`: Lists detailed schema information for user-created tables.
## Spanner (GoogleSQL dialect)

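The prebuilt-tools documentation above adds a ClickHouse entry with its environment variables. Below is an illustrative Go sketch of launching the built `toolbox` binary with that prebuilt config; the host, port, and credential values are placeholders, not defaults taken from the docs.

```go
// Launch the locally built toolbox binary with the ClickHouse prebuilt
// config documented above. Environment values are placeholders.
package main

import (
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("./toolbox", "--prebuilt", "clickhouse")
	cmd.Env = append(os.Environ(),
		"CLICKHOUSE_HOST=localhost",
		"CLICKHOUSE_PORT=8123",
		"CLICKHOUSE_USER=default",
		"CLICKHOUSE_PASSWORD=",
		"CLICKHOUSE_DATABASE=default",
		"CLICKHOUSE_PROTOCOL=http",
	)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
```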
go.mod: 2 changed lines

@@ -29,7 +29,7 @@ require (
github.com/fsnotify/fsnotify v1.9.0
github.com/go-chi/chi/v5 v5.2.3
github.com/go-chi/cors v1.2.2
github.com/go-chi/httplog/v2 v2.1.1
github.com/go-chi/httplog/v3 v3.3.0
github.com/go-chi/render v1.0.3
github.com/go-goquery/goquery v1.0.1
github.com/go-playground/validator/v10 v10.28.0

go.sum: 4 changed lines

@@ -902,8 +902,8 @@ github.com/go-chi/chi/v5 v5.2.3 h1:WQIt9uxdsAbgIYgid+BpYc+liqQZGMHRaUwp0JUcvdE=
github.com/go-chi/chi/v5 v5.2.3/go.mod h1:L2yAIGWB3H+phAw1NxKwWM+7eUH/lU8pOMm5hHcoops=
github.com/go-chi/cors v1.2.2 h1:Jmey33TE+b+rB7fT8MUy1u0I4L+NARQlK6LhzKPSyQE=
github.com/go-chi/cors v1.2.2/go.mod h1:sSbTewc+6wYHBBCW7ytsFSn836hqM7JxpglAy2Vzc58=
github.com/go-chi/httplog/v2 v2.1.1 h1:ojojiu4PIaoeJ/qAO4GWUxJqvYUTobeo7zmuHQJAxRk=
github.com/go-chi/httplog/v2 v2.1.1/go.mod h1:/XXdxicJsp4BA5fapgIC3VuTD+z0Z/VzukoB3VDc1YE=
github.com/go-chi/httplog/v3 v3.3.0 h1:Gr6Y7nSzbpyCyRwKPOVKjDH3BH6TH5uvRNDsTZWDpvU=
github.com/go-chi/httplog/v3 v3.3.0/go.mod h1:N/J1l5l1fozUrqIVuT8Z/HzNeSy8TF2EFyokPLe6y2w=
github.com/go-chi/render v1.0.3 h1:AsXqd2a1/INaIfUSKq3G5uA8weYx20FOsM7uSoCyyt4=
github.com/go-chi/render v1.0.3/go.mod h1:/gr3hVkmYR0YlEy3LxCuVRFzEu9Ruok+gFqbIofjao0=
github.com/go-faster/city v1.0.1 h1:4WAxSZ3V2Ws4QRDrscLEDcibJY8uf41H6AhXDrNDcGw=


@@ -59,25 +59,35 @@ func NewStdLogger(outW, errW io.Writer, logLevel string) (Logger, error) {
}
// DebugContext logs debug messages
func (sl *StdLogger) DebugContext(ctx context.Context, msg string, keysAndValues ...interface{}) {
func (sl *StdLogger) DebugContext(ctx context.Context, msg string, keysAndValues ...any) {
sl.outLogger.DebugContext(ctx, msg, keysAndValues...)
}
// InfoContext logs debug messages
func (sl *StdLogger) InfoContext(ctx context.Context, msg string, keysAndValues ...interface{}) {
func (sl *StdLogger) InfoContext(ctx context.Context, msg string, keysAndValues ...any) {
sl.outLogger.InfoContext(ctx, msg, keysAndValues...)
}
// WarnContext logs warning messages
func (sl *StdLogger) WarnContext(ctx context.Context, msg string, keysAndValues ...interface{}) {
func (sl *StdLogger) WarnContext(ctx context.Context, msg string, keysAndValues ...any) {
sl.errLogger.WarnContext(ctx, msg, keysAndValues...)
}
// ErrorContext logs error messages
func (sl *StdLogger) ErrorContext(ctx context.Context, msg string, keysAndValues ...interface{}) {
func (sl *StdLogger) ErrorContext(ctx context.Context, msg string, keysAndValues ...any) {
sl.errLogger.ErrorContext(ctx, msg, keysAndValues...)
}
// SlogLogger returns a single standard *slog.Logger that routes
// records to the outLogger or errLogger based on the log level.
func (sl *StdLogger) SlogLogger() *slog.Logger {
splitHandler := &SplitHandler{
OutHandler: sl.outLogger.Handler(),
ErrHandler: sl.errLogger.Handler(),
}
return slog.New(splitHandler)
}
const (
Debug = "DEBUG"
Info = "INFO"
@@ -177,21 +187,64 @@ func NewStructuredLogger(outW, errW io.Writer, logLevel string) (Logger, error)
}
// DebugContext logs debug messages
func (sl *StructuredLogger) DebugContext(ctx context.Context, msg string, keysAndValues ...interface{}) {
func (sl *StructuredLogger) DebugContext(ctx context.Context, msg string, keysAndValues ...any) {
sl.outLogger.DebugContext(ctx, msg, keysAndValues...)
}
// InfoContext logs info messages
func (sl *StructuredLogger) InfoContext(ctx context.Context, msg string, keysAndValues ...interface{}) {
func (sl *StructuredLogger) InfoContext(ctx context.Context, msg string, keysAndValues ...any) {
sl.outLogger.InfoContext(ctx, msg, keysAndValues...)
}
// WarnContext logs warning messages
func (sl *StructuredLogger) WarnContext(ctx context.Context, msg string, keysAndValues ...interface{}) {
func (sl *StructuredLogger) WarnContext(ctx context.Context, msg string, keysAndValues ...any) {
sl.errLogger.WarnContext(ctx, msg, keysAndValues...)
}
// ErrorContext logs error messages
func (sl *StructuredLogger) ErrorContext(ctx context.Context, msg string, keysAndValues ...interface{}) {
func (sl *StructuredLogger) ErrorContext(ctx context.Context, msg string, keysAndValues ...any) {
sl.errLogger.ErrorContext(ctx, msg, keysAndValues...)
}
// SlogLogger returns a single standard *slog.Logger that routes
// records to the outLogger or errLogger based on the log level.
func (sl *StructuredLogger) SlogLogger() *slog.Logger {
splitHandler := &SplitHandler{
OutHandler: sl.outLogger.Handler(),
ErrHandler: sl.errLogger.Handler(),
}
return slog.New(splitHandler)
}
type SplitHandler struct {
OutHandler slog.Handler
ErrHandler slog.Handler
}
func (h *SplitHandler) Enabled(ctx context.Context, level slog.Level) bool {
if level >= slog.LevelWarn {
return h.ErrHandler.Enabled(ctx, level)
}
return h.OutHandler.Enabled(ctx, level)
}
func (h *SplitHandler) Handle(ctx context.Context, r slog.Record) error {
if r.Level >= slog.LevelWarn {
return h.ErrHandler.Handle(ctx, r)
}
return h.OutHandler.Handle(ctx, r)
}
func (h *SplitHandler) WithAttrs(attrs []slog.Attr) slog.Handler {
return &SplitHandler{
OutHandler: h.OutHandler.WithAttrs(attrs),
ErrHandler: h.ErrHandler.WithAttrs(attrs),
}
}
func (h *SplitHandler) WithGroup(name string) slog.Handler {
return &SplitHandler{
OutHandler: h.OutHandler.WithGroup(name),
ErrHandler: h.ErrHandler.WithGroup(name),
}
}
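
The `SplitHandler` added above routes records at warn level and higher to the error handler and everything else to the out handler. A minimal standalone sketch of the same pattern, runnable on its own with `log/slog` JSON handlers over stdout and stderr:

```go
// Level-splitting slog handler: warn and above go to stderr, the rest to stdout.
package main

import (
	"context"
	"log/slog"
	"os"
)

type SplitHandler struct {
	OutHandler slog.Handler
	ErrHandler slog.Handler
}

func (h *SplitHandler) Enabled(ctx context.Context, level slog.Level) bool {
	if level >= slog.LevelWarn {
		return h.ErrHandler.Enabled(ctx, level)
	}
	return h.OutHandler.Enabled(ctx, level)
}

func (h *SplitHandler) Handle(ctx context.Context, r slog.Record) error {
	if r.Level >= slog.LevelWarn {
		return h.ErrHandler.Handle(ctx, r)
	}
	return h.OutHandler.Handle(ctx, r)
}

func (h *SplitHandler) WithAttrs(attrs []slog.Attr) slog.Handler {
	return &SplitHandler{OutHandler: h.OutHandler.WithAttrs(attrs), ErrHandler: h.ErrHandler.WithAttrs(attrs)}
}

func (h *SplitHandler) WithGroup(name string) slog.Handler {
	return &SplitHandler{OutHandler: h.OutHandler.WithGroup(name), ErrHandler: h.ErrHandler.WithGroup(name)}
}

func main() {
	logger := slog.New(&SplitHandler{
		OutHandler: slog.NewJSONHandler(os.Stdout, nil),
		ErrHandler: slog.NewJSONHandler(os.Stderr, nil),
	})
	logger.Info("written to stdout")
	logger.Error("written to stderr")
}
```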


@@ -16,16 +16,20 @@ package log
import (
"context"
"log/slog"
)
// Logger is the interface used throughout the project for logging.
type Logger interface {
// DebugContext is for reporting additional information about internal operations.
DebugContext(ctx context.Context, format string, args ...interface{})
DebugContext(ctx context.Context, format string, args ...any)
// InfoContext is for reporting informational messages.
InfoContext(ctx context.Context, format string, args ...interface{})
InfoContext(ctx context.Context, format string, args ...any)
// WarnContext is for reporting warning messages.
WarnContext(ctx context.Context, format string, args ...interface{})
WarnContext(ctx context.Context, format string, args ...any)
// ErrorContext is for reporting errors.
ErrorContext(ctx context.Context, format string, args ...interface{})
ErrorContext(ctx context.Context, format string, args ...any)
// Single standard slog.Logger that routes records to the outLogger or
// errLogger based on log levels
SlogLogger() *slog.Logger
}


@@ -28,7 +28,7 @@ import (
"github.com/go-chi/chi/v5"
"github.com/go-chi/chi/v5/middleware"
"github.com/go-chi/cors"
"github.com/go-chi/httplog/v2"
"github.com/go-chi/httplog/v3"
"github.com/googleapis/genai-toolbox/internal/auth"
"github.com/googleapis/genai-toolbox/internal/embeddingmodels"
"github.com/googleapis/genai-toolbox/internal/log"
@@ -347,31 +347,16 @@ func NewServer(ctx context.Context, cfg ServerConfig) (*Server, error) {
if err != nil {
return nil, fmt.Errorf("unable to initialize http log: %w", err)
}
var httpOpts httplog.Options
switch cfg.LoggingFormat.String() {
case "json":
httpOpts = httplog.Options{
JSON: true,
LogLevel: logLevel,
Concise: true,
RequestHeaders: false,
MessageFieldName: "message",
SourceFieldName: "logging.googleapis.com/sourceLocation",
TimeFieldName: "timestamp",
LevelFieldName: "severity",
}
case "standard":
httpOpts = httplog.Options{
LogLevel: logLevel,
Concise: true,
RequestHeaders: false,
MessageFieldName: "message",
}
default:
return nil, fmt.Errorf("invalid Logging format: %q", cfg.LoggingFormat.String())
schema := *httplog.SchemaGCP
schema.Level = cfg.LogLevel.String()
schema.Concise(true)
httpOpts := &httplog.Options{
Level: logLevel,
Schema: &schema,
}
httpLogger := httplog.NewLogger("httplog", httpOpts)
r.Use(httplog.RequestLogger(httpLogger))
logger := l.SlogLogger()
r.Use(httplog.RequestLogger(logger, httpOpts))
sourcesMap, authServicesMap, embeddingModelsMap, toolsMap, toolsetsMap, promptsMap, promptsetsMap, err := InitializeConfigs(ctx, cfg)
if err != nil {

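The server.go hunk above replaces the httplog/v2 options with the v3 `RequestLogger(logger, opts)` API and a local copy of `SchemaGCP`. Below is a minimal sketch of that wiring, assuming a plain JSON `*slog.Logger` in place of the server's `SlogLogger()` and the default port 5000 noted in GEMINI.md; the httplog usage mirrors the diff rather than documenting the library.

```go
// Minimal chi + httplog/v3 request-logging setup mirroring the change above.
package main

import (
	"log/slog"
	"net/http"
	"os"

	"github.com/go-chi/chi/v5"
	"github.com/go-chi/httplog/v3"
)

func main() {
	// Stand-in for the SplitHandler-backed logger returned by SlogLogger().
	logger := slog.New(slog.NewJSONHandler(os.Stdout, nil))

	// Take a local copy of the shared schema before tweaking it, as the
	// "local copy of the schema" commit above does.
	schema := *httplog.SchemaGCP
	schema.Concise(true)

	opts := &httplog.Options{
		Level:  slog.LevelInfo,
		Schema: &schema,
	}

	r := chi.NewRouter()
	r.Use(httplog.RequestLogger(logger, opts))
	r.Get("/", func(w http.ResponseWriter, _ *http.Request) {
		w.Write([]byte("ok"))
	})

	if err := http.ListenAndServe(":5000", r); err != nil {
		panic(err)
	}
}
```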

@@ -36,15 +36,17 @@ var (
AlloyDBDatabase = "postgres"
)
// Copied over from postgres.go
func initPostgresConnectionPool(host, port, user, pass, dbname string) (*pgxpool.Pool, error) {
// urlExample := "postgres:dd//username:password@localhost:5432/database_name"
url := &url.URL{
func buildPostgresURL(host, port, user, pass, dbname string) *url.URL {
return &url.URL{
Scheme: "postgres",
User: url.UserPassword(user, pass),
Host: fmt.Sprintf("%s:%s", host, port),
Path: dbname,
}
}
func initPostgresConnectionPool(host, port, user, pass, dbname string) (*pgxpool.Pool, error) {
url := buildPostgresURL(host, port, user, pass, dbname)
pool, err := pgxpool.New(context.Background(), url.String())
if err != nil {
return nil, fmt.Errorf("Unable to create connection pool: %w", err)
@@ -63,7 +65,8 @@ func setupAlloyDBContainer(ctx context.Context, t *testing.T) (string, string, f
"POSTGRES_PASSWORD": AlloyDBPass,
},
WaitingFor: wait.ForAll(
wait.ForLog("Post Startup: Successfully reinstalled extensions"),
wait.ForLog("database system was shut down at"),
wait.ForLog("database system is ready to accept connections"),
wait.ForExposedPort(),
),
}
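
The test refactor above extracts `buildPostgresURL` from the pool setup. A quick standalone sketch showing the connection string it produces, using only the standard library (credential values are placeholders):

```go
// Inspect the connection string built by the extracted helper.
package main

import (
	"fmt"
	"net/url"
)

func buildPostgresURL(host, port, user, pass, dbname string) *url.URL {
	return &url.URL{
		Scheme: "postgres",
		User:   url.UserPassword(user, pass),
		Host:   fmt.Sprintf("%s:%s", host, port),
		Path:   dbname,
	}
}

func main() {
	u := buildPostgresURL("localhost", "5432", "postgres", "secret", "postgres")
	fmt.Println(u.String()) // postgres://postgres:secret@localhost:5432/postgres
}
```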