## CHANGES
- Add `Variables` field to `OllamaRequestBody` struct for direct variable passing
- Change `Options` field from empty struct to flexible `map[string]any` type
- Extract variables from top-level `Variables` field or nested `Options.variables`
- Support parsing variables supplied as either a JSON string or a map
- Pass extracted variables to `PromptRequest` for single message chats
- Pass extracted variables to `PromptRequest` for multi-message chats
- Add `omitempty` JSON tags to optional fields
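A minimal sketch of the extraction order described above, assuming illustrative field names; the project's real struct has more fields and may type the variable map differently:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Illustrative request shape: only Variables and Options matter here.
type OllamaRequestBody struct {
	Model     string         `json:"model,omitempty"`
	Variables map[string]any `json:"variables,omitempty"`
	Options   map[string]any `json:"options,omitempty"`
}

// extractVariables prefers the top-level field, then falls back to
// Options["variables"], accepting either a map or a JSON string.
func extractVariables(req OllamaRequestBody) map[string]any {
	if len(req.Variables) > 0 {
		return req.Variables
	}
	switch v := req.Options["variables"].(type) {
	case map[string]any:
		return v
	case string:
		var parsed map[string]any
		if err := json.Unmarshal([]byte(v), &parsed); err == nil {
			return parsed
		}
	}
	return nil
}

func main() {
	body := []byte(`{"options":{"variables":"{\"name\":\"Ada\"}"}}`)
	var req OllamaRequestBody
	_ = json.Unmarshal(body, &req)
	fmt.Println(extractVariables(req)) // map[name:Ada]
}
```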
## CHANGES
- Use int64 for prompt and eval count fields
- Skip sending secondary error message on stream write failure
- Allow non-HTTP schemes in the address and validate only the host
- Flush response only when writer implements http.Flusher
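A sketch of the last two points, using the standard `http.Flusher` type assertion; the helper name and payload are illustrative:

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
)

// writeChunk writes one NDJSON line, then flushes only when the writer
// actually supports it; middleware-wrapped writers may not implement
// http.Flusher, and an unchecked assertion would panic.
func writeChunk(w http.ResponseWriter, line []byte) error {
	if _, err := w.Write(line); err != nil {
		// The client is likely gone; report the failure to the caller
		// instead of trying to send a second error down the dead stream.
		return fmt.Errorf("stream write failed: %w", err)
	}
	if flusher, ok := w.(http.Flusher); ok {
		flusher.Flush()
	}
	return nil
}

func main() {
	rec := httptest.NewRecorder() // implements http.Flusher
	_ = writeChunk(rec, []byte(`{"done":false}`+"\n"))
	fmt.Print(rec.Body.String())
}
```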
## CHANGES
- Use Gin request context for outbound HTTP calls
- Send final stream chunk when response write fails
- Capture timing duration once for consistent metrics
- Build final Ollama response via shared helper function
- Validate Fabric base URL scheme is http/https only
- Add clarifying documentation comments for URL and writers
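A hedged sketch of wiring the Gin request context into the outbound call; the route, port, and upstream URL are placeholders, not fabric's actual values:

```go
package main

import (
	"io"
	"net/http"

	"github.com/gin-gonic/gin"
)

// forwardChat derives the outbound request from the Gin request's
// context, so the upstream call is cancelled if the client disconnects.
func forwardChat(c *gin.Context) {
	req, err := http.NewRequestWithContext(
		c.Request.Context(), http.MethodPost,
		"http://localhost:8080/chat", nil, // placeholder upstream URL
	)
	if err != nil {
		c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
		return
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		c.JSON(http.StatusBadGateway, gin.H{"error": err.Error()})
		return
	}
	defer resp.Body.Close()
	c.Status(resp.StatusCode)
	_, _ = io.Copy(c.Writer, resp.Body) // relay the upstream body as-is
}

func main() {
	r := gin.Default()
	r.POST("/api/chat", forwardChat)
	_ = r.Run(":11434") // Ollama's default port, for illustration
}
```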
## CHANGES
- Use int64 for `load_duration` JSON field values
- Use int64 for `prompt_eval_duration` JSON field values
- Remove lossy int casts when assigning nanosecond durations
- Keep duration payloads consistent with `total_duration` precision
- Prevent potential overflow on long-running request timing
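A small sketch of the int64 duration handling; the struct is illustrative, though the JSON field names follow the Ollama response format cited above:

```go
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

// Nanosecond durations stay int64 end to end; time.Duration is itself
// an int64, so no narrowing cast (and no overflow risk on long-running
// requests) is involved.
type timing struct {
	TotalDuration      int64 `json:"total_duration"`
	LoadDuration       int64 `json:"load_duration"`
	PromptEvalDuration int64 `json:"prompt_eval_duration"`
}

func main() {
	start := time.Now()
	time.Sleep(10 * time.Millisecond) // stand-in for the real work
	elapsed := time.Since(start)      // captured once, reused everywhere
	t := timing{
		TotalDuration:      elapsed.Nanoseconds(),
		LoadDuration:       0, // filled from real measurements in practice
		PromptEvalDuration: 0,
	}
	out, _ := json.Marshal(t)
	fmt.Println(string(out))
}
```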
## CHANGES
- Validate that the parsed URL host does not start with a colon
- Return explicit error for missing hostname in URL
- Update unit test to expect error on port-only host
- Prevent accidental acceptance of malformed `https://:port` addresses
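A sketch of the host checks, relying on the standard `net/url` behavior where `https://:8080` parses cleanly but yields an empty `Hostname()`; the error wording is illustrative:

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// validateAddress rejects addresses whose host starts with a colon or
// lacks a hostname entirely: "https://:8080" parses without error, but
// its Host is ":8080" and its Hostname() is empty.
func validateAddress(addr string) error {
	u, err := url.Parse(addr)
	if err != nil {
		return err
	}
	if strings.HasPrefix(u.Host, ":") || u.Hostname() == "" {
		return fmt.Errorf("address %q is missing a hostname", addr)
	}
	return nil
}

func main() {
	fmt.Println(validateAddress("https://:8080"))       // error: missing hostname
	fmt.Println(validateAddress("https://example.com")) // <nil>
}
```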
## CHANGES
- Set NDJSON content type before checking upstream errors
- Reject parsed URLs that omit a hostname
- Remove hardcoded eval count placeholders from responses
- Add unit tests for Fabric chat URL builder
- Cover colon-port, host:port, and IP address inputs
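A table-driven test in the spirit of the coverage listed above; the `buildFabricChatURL` stand-in and the chat path are assumptions, not the project's exact helper:

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
	"testing"
)

// buildFabricChatURL stands in for the real helper: hostname checks,
// then the chat path appended to the base address.
func buildFabricChatURL(addr string) (string, error) {
	u, err := url.Parse(addr)
	if err != nil {
		return "", err
	}
	if strings.HasPrefix(u.Host, ":") || u.Hostname() == "" {
		return "", fmt.Errorf("address %q is missing a hostname", addr)
	}
	return u.JoinPath("chat").String(), nil
}

func TestBuildFabricChatURL(t *testing.T) {
	cases := []struct {
		name    string
		addr    string
		wantErr bool
	}{
		{"colon-port only", "https://:8080", true},
		{"host and port", "https://example.com:8080", false},
		{"IP address", "http://127.0.0.1:11434", false},
	}
	for _, tc := range cases {
		t.Run(tc.name, func(t *testing.T) {
			_, err := buildFabricChatURL(tc.addr)
			if (err != nil) != tc.wantErr {
				t.Fatalf("buildFabricChatURL(%q) error = %v, wantErr %v",
					tc.addr, err, tc.wantErr)
			}
		})
	}
}
```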
## CHANGES
- Convert JSON error responses to use err.Error()
- Detect upstream Fabric non-2xx status before scanning
- Read and log upstream error body when possible
- Return upstream status error message for non-stream mode
- Stream error message via NDJSON when streaming enabled
- Set NDJSON Content-Type header before first streaming write
- Remove per-chunk header setting during streaming output
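A sketch of the upstream status check; the body-size cap and message wording are illustrative:

```go
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"strings"
)

// checkUpstream surfaces a non-2xx status before any NDJSON scanning
// begins: read what we can of the error body for the log, then return
// the status to the caller so it can be reported in the right format.
func checkUpstream(resp *http.Response) error {
	if resp.StatusCode >= 200 && resp.StatusCode < 300 {
		return nil
	}
	body, err := io.ReadAll(io.LimitReader(resp.Body, 4096)) // cap is illustrative
	if err == nil && len(body) > 0 {
		log.Printf("fabric upstream error body: %s", body)
	}
	return fmt.Errorf("fabric returned status %s", resp.Status)
}

func main() {
	resp := &http.Response{
		StatusCode: http.StatusBadGateway,
		Status:     "502 Bad Gateway",
		Body:       io.NopCloser(strings.NewReader("upstream detail")),
	}
	fmt.Println(checkUpstream(resp))
}
```

The returned error would then go out as one NDJSON line in stream mode, or as a JSON error response otherwise, matching the bullets above.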
Addresses all 8 Copilot review comments on PR #1940:
Critical fixes:
- Replace log.Fatal with proper HTTP error response to prevent
server crashes on request failures
- Add streaming-aware error handling to maintain consistent
response format (prevents mixing JSON with NDJSON)
Error messaging improvements:
- Replace "testing endpoint" placeholders with descriptive
error messages
- Add clear context for unmarshal and scanning failures
Protocol compliance:
- Set Content-Type: application/x-ndjson for streaming responses
- Ensure all error paths respect stream vs non-stream mode
Code cleanup:
- Remove commented-out dead code
Tested both streaming and non-streaming modes successfully.
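A minimal sketch of the streaming-aware error path that replaces `log.Fatal`; the payload shape is illustrative, not fabric's exact response:

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/http/httptest"
)

// writeError reports a failure to the client instead of calling
// log.Fatal (which would kill the whole server on one bad request),
// and keeps the wire format consistent: NDJSON lines while streaming,
// a plain JSON object otherwise.
func writeError(w http.ResponseWriter, streaming bool, status int, msg string) {
	if streaming {
		// Headers may already be sent mid-stream; setting them again
		// is a no-op, and the error travels as one more NDJSON line.
		w.Header().Set("Content-Type", "application/x-ndjson")
		line, _ := json.Marshal(map[string]any{"error": msg, "done": true})
		_, _ = w.Write(append(line, '\n'))
		return
	}
	w.Header().Set("Content-Type", "application/json")
	w.WriteHeader(status)
	_ = json.NewEncoder(w).Encode(map[string]string{"error": msg})
}

func main() {
	rec := httptest.NewRecorder()
	writeError(rec, true, http.StatusBadGateway, "upstream failure")
	fmt.Print(rec.Body.String()) // {"done":true,"error":"upstream failure"}
}
```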
- Add `json:"-"` tag to exclude UpdateChan from JSON serialization
- Extract URL building logic into dedicated `buildFabricChatURL` helper function
- Replace single-read body parsing with streaming `bufio.Scanner` approach
- Add proper SSE data prefix parsing for fabric response format
- Implement real-time streaming with `writeOllamaResponse` helper function
- Add `writeOllamaResponseStruct` for consistent JSON response writing
- Handle both streaming and non-streaming response modes separately
- Add proper error handling for fabric error response types
- Ensure response body is properly closed with defer statement
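A sketch of the scanner-based SSE parsing described above; the prefix handling matches the list, while the payloads in `main` are made up:

```go
package main

import (
	"bufio"
	"fmt"
	"io"
	"strings"
)

// scanSSE reads the body line by line instead of one io.ReadAll, strips
// the SSE "data: " prefix, and hands each payload to the caller as it
// arrives, which is what enables real-time streaming.
func scanSSE(body io.Reader, emit func(payload string)) error {
	scanner := bufio.NewScanner(body)
	for scanner.Scan() {
		payload, ok := strings.CutPrefix(scanner.Text(), "data: ")
		if !ok || payload == "" {
			continue // skip blank keep-alive lines and non-data fields
		}
		emit(payload)
	}
	return scanner.Err()
}

func main() {
	stream := strings.NewReader(
		"data: {\"type\":\"content\"}\n\ndata: {\"type\":\"complete\"}\n")
	_ = scanSSE(stream, func(p string) { fmt.Println(p) })
}
```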
Update the SendStream interface to match the current Vendor interface,
which now uses chan domain.StreamUpdate instead of chan string.
Changes:
- Update SendStream signature to use chan domain.StreamUpdate
- Update sendChatMessageStream signature accordingly
- Update parseSSEStream signature accordingly
- Wrap all channel sends with domain.StreamUpdate{Type: StreamTypeContent}
This fixes the build error introduced when the streaming interface was
updated to support metadata like token usage alongside content.
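A sketch of the new signature with stand-in types; the real `StreamUpdate` and `StreamTypeContent` live in the project's `domain` package and may carry more fields:

```go
package main

import "fmt"

// Stand-ins for the domain types; the real StreamUpdate may also carry
// metadata such as token usage alongside content.
type StreamUpdate struct {
	Type    string
	Content string
}

const StreamTypeContent = "content"

// Every plain-content send is wrapped in a StreamUpdate, matching the
// updated chan domain.StreamUpdate signature.
func sendChatMessageStream(text string, ch chan<- StreamUpdate) {
	ch <- StreamUpdate{Type: StreamTypeContent, Content: text}
}

func main() {
	ch := make(chan StreamUpdate, 1)
	sendChatMessageStream("hello", ch)
	fmt.Printf("%+v\n", <-ch)
}
```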
## CHANGES
- Add DigitalOcean as a new AI provider in plugin registry
- Implement DigitalOcean client with OpenAI-compatible inference endpoint
- Support model access key authentication for inference requests
- Add optional control plane token for model discovery
- Create DigitalOcean setup documentation with environment variables
- Update README to list DigitalOcean in supported providers
- Handle model listing via control plane API with fallback
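A hedged sketch of calling an OpenAI-compatible DigitalOcean inference endpoint; the environment variable name and base URL are assumptions, so check the setup documentation added in this change for the real values:

```go
package main

import (
	"fmt"
	"net/http"
	"os"
)

func main() {
	// Hypothetical variable name and base URL, for illustration only.
	key := os.Getenv("DIGITALOCEAN_MODEL_ACCESS_KEY")
	base := "https://inference.do-ai.run/v1"

	// Model listing via the OpenAI-compatible endpoint; the control
	// plane API mentioned above is a separate, token-authenticated call.
	req, _ := http.NewRequest(http.MethodGet, base+"/models", nil)
	req.Header.Set("Authorization", "Bearer "+key)
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```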
## CHANGES
- Add Mammouth provider configuration with API base URL
- Configure Mammouth to use standard OpenAI-compatible interface
- Disable Responses API implementation for Mammouth provider
- Add "Mammouth" to VSCode spell check dictionary
Add centralized factory function for AI vendor plugin initialization:
- Add NewVendorPluginBase(name, configure) to internal/plugins/plugin.go
- Update 8 vendor files (anthropic, bedrock, gemini, lmstudio, ollama,
openai, perplexity, vertexai) to use the factory function
- Add 3 test cases for the new factory function
This removes ~40 lines of duplicated boilerplate code and ensures
consistent plugin initialization across all vendors.
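A sketch of what such a factory can look like; `PluginBase` here is a stand-in with only the fields needed for illustration, and the env-prefix derivation is assumed:

```go
package main

import "fmt"

// PluginBase is a stand-in; the real type lives in internal/plugins.
type PluginBase struct {
	Name            string
	EnvNamePrefix   string
	ConfigureCustom func() error
}

// NewVendorPluginBase centralizes the boilerplate each vendor file used
// to repeat, so every vendor is initialized the same way.
func NewVendorPluginBase(name string, configure func() error) *PluginBase {
	return &PluginBase{
		Name:            name,
		EnvNamePrefix:   fmt.Sprintf("%s_", name), // derivation is illustrative
		ConfigureCustom: configure,
	}
}

func main() {
	base := NewVendorPluginBase("Ollama", func() error { return nil })
	fmt.Println(base.Name, base.EnvNamePrefix)
}
```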
MAESTRO: Loop 00001 refactoring implementation
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>