## CHANGES
- Add DigitalOcean as a new AI provider in plugin registry
- Implement DigitalOcean client with OpenAI-compatible inference endpoint
- Support model access key authentication for inference requests
- Add optional control plane token for model discovery
- Create DigitalOcean setup documentation with environment variables
- Update README to list DigitalOcean in supported providers
- Handle model listing via control plane API with fallback
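Below is a minimal sketch of how such a client could be wired up, assuming hypothetical environment variable names (`DIGITALOCEAN_MODEL_ACCESS_KEY`, `DIGITALOCEAN_API_TOKEN`) and an assumed inference URL; the real identifiers live in the DigitalOcean plugin and its setup documentation.

```go
// Sketch only: illustrates the split between the model access key used for
// inference and the optional control plane token used for model discovery.
// Variable names and URLs below are assumptions, not the plugin's actual values.
package digitalocean

import (
	"errors"
	"os"
)

type Client struct {
	InferenceBaseURL  string // OpenAI-compatible inference endpoint
	ModelAccessKey    string // required: authenticates inference requests
	ControlPlaneToken string // optional: enables model discovery via the control plane API
}

func NewClientFromEnv() (*Client, error) {
	key := os.Getenv("DIGITALOCEAN_MODEL_ACCESS_KEY") // assumed variable name
	if key == "" {
		return nil, errors.New("model access key is required for inference")
	}
	return &Client{
		InferenceBaseURL:  "https://inference.do-ai.run/v1",    // assumed endpoint
		ModelAccessKey:    key,
		ControlPlaneToken: os.Getenv("DIGITALOCEAN_API_TOKEN"), // assumed; may be empty
	}, nil
}

// ListModels queries the control plane when a token is present and otherwise
// falls back to a small static list, mirroring the fallback described above.
func (c *Client) ListModels() []string {
	if c.ControlPlaneToken == "" {
		return []string{"llama3.3-70b-instruct"} // placeholder fallback entry (assumed)
	}
	// ... call the control plane API here and return the discovered models ...
	return nil
}
```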
## CHANGES
- Add Mammouth provider configuration with API base URL
- Configure Mammouth to use standard OpenAI-compatible interface
- Disable Responses API implementation for Mammouth provider
- Add "Mammouth" to VSCode spell check dictionary
Add centralized factory function for AI vendor plugin initialization:
- Add NewVendorPluginBase(name, configure) to internal/plugins/plugin.go
- Update 8 vendor files (anthropic, bedrock, gemini, lmstudio, ollama, openai, perplexity, vertexai) to use the factory function
- Add 3 test cases for the new factory function
This removes ~40 lines of duplicated boilerplate code and ensures
consistent plugin initialization across all vendors.
MAESTRO: Loop 00001 refactoring implementation
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
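A minimal sketch of what a factory like `NewVendorPluginBase(name, configure)` might look like; the `PluginBase` fields and the `ConfigureCustom` hook are assumptions, not the actual `internal/plugins` API.

```go
// Sketch of a centralized factory for vendor plugin bases. The struct fields
// and env-prefix convention are assumed for illustration.
package plugins

type PluginBase struct {
	Name            string
	EnvNamePrefix   string
	ConfigureCustom func() error
}

// NewVendorPluginBase builds the shared boilerplate every vendor previously
// duplicated: name, environment prefix, and an optional configure hook.
func NewVendorPluginBase(name string, configure func() error) *PluginBase {
	return &PluginBase{
		Name:            name,
		EnvNamePrefix:   name + "_", // assumed convention
		ConfigureCustom: configure,
	}
}
```

A vendor file would then collapse its setup to a single call such as `plugins.NewVendorPluginBase("OpenAI", client.configure)` (call site shown for illustration only).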
## CHANGES
- Extract `getGeminiRegion` method for region determination
- Use `getGeminiRegion` in `sendGemini` for location setting
- Apply `getGeminiRegion` in `sendStreamGemini` for consistency
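A hedged sketch of the extraction pattern, assuming the helper lives on the Vertex AI client: one method decides the region, and both the blocking and streaming send paths call it. Field names and the default value are assumptions.

```go
// Sketch of the extracted helper: both send paths delegate region selection
// to one method so they cannot drift apart. Fields and default are assumed.
package vertexai

type Client struct {
	location string
}

// getGeminiRegion centralizes region determination for Gemini requests.
func (c *Client) getGeminiRegion() string {
	if c.location != "" {
		return c.location // explicit configuration wins
	}
	return "us-central1" // assumed default when nothing is configured
}

func (c *Client) sendGemini() {
	region := c.getGeminiRegion() // same helper used by sendStreamGemini
	_ = region
	// ... build the genai client for this region and issue the request ...
}

func (c *Client) sendStreamGemini() {
	region := c.getGeminiRegion() // keeps streaming consistent with sendGemini
	_ = region
	// ... open the streaming request against the same region ...
}
```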
- Extract model fetching logic into a dedicated helper function
- Improve response body cleanup during Vertex AI pagination loops
- Remove unused time import and timeout constant from models
- Streamline `listPublisherModels` function by delegating API requests to the helper
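A sketch of the cleanup concern in the pagination loop: each page's response body is drained and closed inside the loop rather than deferred, so bodies do not accumulate until the function returns. The function name, URL shape, and JSON fields are assumptions.

```go
// Illustrates closing each page's response body inside the pagination loop
// instead of via defer. Names and payload shape are assumed, not the real API.
package vertexai

import (
	"encoding/json"
	"io"
	"net/http"
)

// modelPage mirrors the assumed shape of one page of a paginated model listing.
type modelPage struct {
	Models        []struct{ Name string } `json:"models"`
	NextPageToken string                  `json:"nextPageToken"`
}

func listAllModels(client *http.Client, baseURL string) ([]string, error) {
	var names []string
	pageToken := ""
	for {
		url := baseURL
		if pageToken != "" {
			url += "?pageToken=" + pageToken
		}
		resp, err := client.Get(url)
		if err != nil {
			return nil, err
		}
		var page modelPage
		decodeErr := json.NewDecoder(resp.Body).Decode(&page)
		io.Copy(io.Discard, resp.Body) // drain so the connection can be reused
		resp.Body.Close()              // close now, not via defer at function exit
		if decodeErr != nil {
			return nil, decodeErr
		}
		for _, m := range page.Models {
			names = append(names, m.Name)
		}
		if page.NextPageToken == "" {
			return names, nil
		}
		pageToken = page.NextPageToken
	}
}
```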
- Dynamic model listing from Vertex AI Model Garden API
- Support for both Gemini (genai SDK) and Claude (Anthropic SDK) models
- Curated Gemini model list (no API available to list them)
- Web search support for Gemini models
- Thinking/extended thinking support for Gemini
- TopP parameter support for Claude models
- Model filtering (excludes imagen, embeddings, legacy models)
- Model sorting (Gemini > Claude > DeepSeek > Llama > Mistral > Others)
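A sketch of the filtering and sorting rules listed above (drop imagen, embedding, and legacy entries, then order Gemini > Claude > DeepSeek > Llama > Mistral > others); the matching substrings and rank values are assumptions about how the real code classifies model IDs.

```go
// Illustrative filter/sort for the ordering described in the change list.
// Substrings and ranks are assumed, not the actual implementation's rules.
package vertexai

import (
	"sort"
	"strings"
)

func familyRank(id string) int {
	switch {
	case strings.Contains(id, "gemini"):
		return 0
	case strings.Contains(id, "claude"):
		return 1
	case strings.Contains(id, "deepseek"):
		return 2
	case strings.Contains(id, "llama"):
		return 3
	case strings.Contains(id, "mistral"):
		return 4
	default:
		return 5
	}
}

// filterAndSortModels drops image/embedding/legacy entries and orders the
// rest by family rank, then alphabetically within a family.
func filterAndSortModels(ids []string) []string {
	kept := make([]string, 0, len(ids))
	for _, id := range ids {
		lower := strings.ToLower(id)
		if strings.Contains(lower, "imagen") ||
			strings.Contains(lower, "embedding") ||
			strings.Contains(lower, "legacy") { // assumed legacy marker
			continue
		}
		kept = append(kept, id)
	}
	sort.Slice(kept, func(i, j int) bool {
		ri, rj := familyRank(strings.ToLower(kept[i])), familyRank(strings.ToLower(kept[j]))
		if ri != rj {
			return ri < rj
		}
		return kept[i] < kept[j]
	})
	return kept
}
```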
## CHANGES
- List supported native and OpenAI-compatible AI provider integrations
- Document dry run mode for previewing prompt construction
- Explain Ollama compatibility mode for exposing API endpoints
- Detail available prompt strategies like chain-of-thought and reflexion
- Add documentation for the generate_changelog command-line tool used during CI/CD to update the ChangeLog
- Update table of contents to reflect new documentation sections
- Rename `code_helper` command to `code2context` throughout the codebase
- Update README.md table of contents and references
- Update installation instructions with new binary name
- Update all usage examples in main.go help text
- Update create_coding_feature pattern documentation
- Rename cmd directory from code_helper to code2context