Mirror of <https://github.com/danielmiessler/Fabric.git>, synced 2026-01-17 02:07:56 -05:00
Compare commits
68 Commits
Commits (SHA1):

e3c2723988, 198b5af12c, c66aad556b, 9f8a2531ca, a2370a0e3b, 1af6418486,
f50a7568d1, 83d9d0b336, 52db4f1961, 36a22aa432, 487199394b, 3a1d7757fb,
d98ad5290c, a6fc9a0ef0, fd5530d38b, 8ec09be550, 6bac79703e, 24afe127f1,
c26a56a368, 84470eac3f, a2058ae26e, 74250bbcbd, a593e83d9f, 62e6812c7f,
7e7ab9e5f2, df2938a7ee, 014985c407, a3d9bec537, febae215f3, cf55be784f,
6d2180e69a, 678db0c43e, 765977cd42, 8017f376b1, 6f103b2db2, 19aeebe6f5,
2d79d3b706, 4fe501da02, 2501cbf47e, d96a1721bb, c1838d3744, 643a60a2cf,
90712506f1, edc02120bb, 8f05883581, 996933e687, 8806f4c2f4, b381bae24a,
a6c753499b, 90b2975fba, 145499ee4c, f9359c99dc, 6b6d0adbfb, 55c94e65da,
2118013547, 82a9f02879, 602304e417, c0d00aeb1f, 1ec8ecba24, ad1465a2e5,
12b6cf4a0a, 0776e77872, cb2759a5a1, c32a650eaa, 8a28ca7b1e, 435d61ae0e,
6ea5551f06, 6a18913a23
.vscode/settings.json (vendored, 5 changes)

```diff
@@ -117,6 +117,7 @@
     "listvendors",
     "lmstudio",
     "Makefiles",
+    "Mammouth",
     "markmap",
     "matplotlib",
     "mattn",
@@ -157,6 +158,7 @@
     "pyperclip",
     "qwen",
     "readystream",
     "reflexion",
     "restapi",
     "rmextension",
     "Sadachbia",
@@ -247,5 +249,8 @@
       ]
     },
     "MD041": false
   },
+  "[json]": {
+    "editor.formatOnSave": false
+  }
 }
```
CHANGELOG.md (113 changes)

@@ -1,5 +1,118 @@

# Changelog

## v1.4.380 (2026-01-16)

### PR [#1936](https://github.com/danielmiessler/Fabric/pull/1936) by [ksylvan](https://github.com/ksylvan): New Vendor: Microsoft Copilot

- Add Microsoft 365 Copilot integration as a new AI vendor with OAuth2 authentication for delegated user permissions
- Enable querying of Microsoft 365 data including emails, documents, and chats with both synchronous and streaming response support
- Provide comprehensive setup instructions for Azure AD app registration and detail licensing, technical, and permission requirements
- Add troubleshooting steps for common authentication and API errors with current API limitations documentation
- Fix SendStream interface to use domain.StreamUpdate instead of chan string to match current Vendor interface requirements

## v1.4.379 (2026-01-15)

### PR [#1935](https://github.com/danielmiessler/Fabric/pull/1935) by [dependabot](https://github.com/apps/dependabot): chore(deps): bump the npm_and_yarn group across 1 directory with 2 updates

- Updated @sveltejs/kit from version 2.21.1 to 2.49.5
- Updated devalue dependency from version 5.3.2 to 5.6.2

## v1.4.378 (2026-01-14)

### PR [#1933](https://github.com/danielmiessler/Fabric/pull/1933) by [ksylvan](https://github.com/ksylvan): Add DigitalOcean Gradient AI support

- Feat: add DigitalOcean Gradient AI Agents as a new vendor
- Add DigitalOcean as a new AI provider in plugin registry
- Implement DigitalOcean client with OpenAI-compatible inference endpoint
- Support model access key authentication for inference requests
- Add optional control plane token for model discovery

### Direct commits

- Chore: Update README with links to other docs

## v1.4.377 (2026-01-12)

### PR [#1929](https://github.com/danielmiessler/Fabric/pull/1929) by [ksylvan](https://github.com/ksylvan): Add Mammouth as new OpenAI-compatible AI provider

- Feat: add Mammouth as new OpenAI-compatible AI provider
- Add Mammouth provider configuration with API base URL
- Configure Mammouth to use standard OpenAI-compatible interface
- Disable Responses API implementation for Mammouth provider
- Add "Mammouth" to VSCode spell check dictionary

## v1.4.376 (2026-01-12)

### PR [#1928](https://github.com/danielmiessler/Fabric/pull/1928) by [ksylvan](https://github.com/ksylvan): Eliminate repetitive boilerplate across eight vendor implementations

- Refactor: add NewVendorPluginBase factory function to reduce duplication
- Update 8 vendor files (anthropic, bedrock, gemini, lmstudio, ollama, openai, perplexity, vertexai) to use the factory function
- Add 3 test cases for the new factory function
- Add centralized factory function for AI vendor plugin initialization
- Chore: exempt json files from VSCode format-on-save

### Direct commits

- Docs: Add GitHub sponsor section to README

  I spend hundreds of hours a year on open source. If you'd like to help support this project, you can sponsor me here.

  Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

## v1.4.375 (2026-01-08)

### PR [#1925](https://github.com/danielmiessler/Fabric/pull/1925) by [ksylvan](https://github.com/ksylvan): docs: update README to document new AI providers and features

- Docs: update README to document new AI providers and features
- List supported native and OpenAI-compatible AI provider integrations
- Document dry run mode for previewing prompt construction
- Explain Ollama compatibility mode for exposing API endpoints
- Detail available prompt strategies like chain-of-thought and reflexion

### PR [#1926](https://github.com/danielmiessler/Fabric/pull/1926) by [henricook](https://github.com/henricook) and [ksylvan](https://github.com/ksylvan): feat(vertexai): add dynamic model listing and multi-model support

- Dynamic model listing from Vertex AI Model Garden API
- Support for both Gemini (genai SDK) and Claude (Anthropic SDK) models
- Curated Gemini model list with web search support for Gemini models
- Thinking/extended thinking support for Gemini
- TopP parameter support for Claude models

## v1.4.374 (2026-01-05)

### PR [#1924](https://github.com/danielmiessler/Fabric/pull/1924) by [ksylvan](https://github.com/ksylvan): Rename `code_helper` to `code2context` across documentation and CLI

- Rename `code_helper` command to `code2context` throughout codebase
- Update README.md table of contents and references
- Update installation instructions with new binary name
- Update all usage examples in main.go help text
- Update create_coding_feature pattern documentation

## v1.4.373 (2026-01-04)

### PR [#1914](https://github.com/danielmiessler/Fabric/pull/1914) by [majiayu000](https://github.com/majiayu000): feat(code_helper): add stdin support for piping file lists

- Added stdin support for piping file lists to code_helper, enabling commands like `find . -name '*.go' | code_helper "instructions"` and `git ls-files '*.py' | code_helper "Add type hints"`
- Implemented automatic detection of stdin pipe mode with single argument (instructions) support
- Enhanced tool to read file paths from stdin line by line while maintaining backward compatibility with existing directory scanning functionality

### PR [#1915](https://github.com/danielmiessler/Fabric/pull/1915) by [majiayu000](https://github.com/majiayu000): feat: parallelize audio chunk transcription for improved performance

- Parallelize audio chunk transcription using goroutines for improved performance

## v1.4.372 (2026-01-04)

### PR [#1913](https://github.com/danielmiessler/Fabric/pull/1913) by [majiayu000](https://github.com/majiayu000): fix: REST API /chat endpoint doesn't pass 'search' parameter to ChatOptions

- Fix: REST API /chat endpoint now properly passes Search and SearchLocation parameters to ChatOptions

## v1.4.371 (2026-01-04)

### PR [#1923](https://github.com/danielmiessler/Fabric/pull/1923) by [ksylvan](https://github.com/ksylvan): ChangeLog Generation stability

- Fix: improve date parsing and prevent early return when PR numbers exist
- Add SQLite datetime formats to version date parsing logic
- Loop through multiple date formats until one succeeds
- Include SQLite fractional seconds format support
- Prevent early return when version has PR numbers to output

## v1.4.370 (2026-01-04)

### PR [#1921](https://github.com/danielmiessler/Fabric/pull/1921) by [ksylvan](https://github.com/ksylvan): chore: remove redundant `--sync-db` step from changelog workflow
README.md (139 changes)

@@ -63,6 +63,9 @@ Fabric organizes prompts by real-world task, allowing people to create, collect,

## Updates

For a deep dive into Fabric and its internals, read the documentation in the [docs folder](https://github.com/danielmiessler/Fabric/tree/main/docs). There is also the extremely useful and regularly updated [DeepWiki](https://deepwiki.com/danielmiessler/Fabric) for Fabric.

<details>
<summary>Click to view recent updates</summary>

@@ -74,6 +77,8 @@ Below are the **new features and capabilities** we've added (newest first):

### Recent Major Features

- [v1.4.380](https://github.com/danielmiessler/fabric/releases/tag/v1.4.380) (Jan 15, 2026) — **Microsoft 365 Copilot Integration**: Added support for corporate Microsoft 365 Copilot, enabling enterprise users to leverage AI grounded in their organization's Microsoft 365 data (emails, documents, meetings).
- [v1.4.378](https://github.com/danielmiessler/fabric/releases/tag/v1.4.378) (Jan 14, 2026) — **Digital Ocean GenAI Support**: Added support for Digital Ocean GenAI, along with a [guide for how to use it](./docs/DigitalOcean-Agents-Setup.md).
- [v1.4.356](https://github.com/danielmiessler/fabric/releases/tag/v1.4.356) (Dec 22, 2025) — **Complete Internationalization**: Full i18n support for setup prompts across all 10 languages with intelligent environment variable handling—making Fabric truly accessible worldwide while maintaining configuration consistency.
- [v1.4.350](https://github.com/danielmiessler/fabric/releases/tag/v1.4.350) (Dec 18, 2025) — **Interactive API Documentation**: Adds Swagger/OpenAPI UI at `/swagger/index.html` with comprehensive REST API documentation, enhanced developer guides, and improved endpoint discoverability for easier integration.
- [v1.4.338](https://github.com/danielmiessler/fabric/releases/tag/v1.4.338) (Dec 4, 2025) — Add Abacus vendor support for Chat-LLM

@@ -160,6 +165,7 @@ Keep in mind that many of these were recorded when Fabric was Python-based, so r

- [Docker](#docker)
- [Environment Variables](#environment-variables)
- [Setup](#setup)
- [Supported AI Providers](#supported-ai-providers)
- [Per-Pattern Model Mapping](#per-pattern-model-mapping)
- [Add aliases for all patterns](#add-aliases-for-all-patterns)
- [Save your files in markdown using aliases](#save-your-files-in-markdown-using-aliases)

@@ -172,12 +178,15 @@ Keep in mind that many of these were recorded when Fabric was Python-based, so r

- [Fish Completion](#fish-completion)
- [Usage](#usage)
- [Debug Levels](#debug-levels)
- [Dry Run Mode](#dry-run-mode)
- [Extensions](#extensions)
- [REST API Server](#rest-api-server)
- [Ollama Compatibility Mode](#ollama-compatibility-mode)
- [Our approach to prompting](#our-approach-to-prompting)
- [Examples](#examples)
- [Just use the Patterns](#just-use-the-patterns)
- [Prompt Strategies](#prompt-strategies)
- [Available Strategies](#available-strategies)
- [Custom Patterns](#custom-patterns)
- [Setting Up Custom Patterns](#setting-up-custom-patterns)
- [Using Custom Patterns](#using-custom-patterns)

@@ -185,12 +194,14 @@ Keep in mind that many of these were recorded when Fabric was Python-based, so r

- [Helper Apps](#helper-apps)
- [`to_pdf`](#to_pdf)
- [`to_pdf` Installation](#to_pdf-installation)

```diff
-  - [`code_helper`](#code_helper)
+  - [`code2context`](#code2context)
```

- [`generate_changelog`](#generate_changelog)
- [pbpaste](#pbpaste)
- [Web Interface (Fabric Web App)](#web-interface-fabric-web-app)
- [Meta](#meta)
- [Primary contributors](#primary-contributors)
- [Contributors](#contributors)
- [💜 Support This Project](#-support-this-project)

<br />

@@ -349,6 +360,44 @@ fabric --setup

If everything works you are good to go.

### Supported AI Providers

Fabric supports a wide range of AI providers:

**Native Integrations:**

- OpenAI
- Anthropic (Claude)
- Google Gemini
- Ollama (local models)
- Azure OpenAI
- Amazon Bedrock
- Vertex AI
- LM Studio
- Perplexity

**OpenAI-Compatible Providers:**

- Abacus
- AIML
- Cerebras
- DeepSeek
- DigitalOcean
- GitHub Models
- GrokAI
- Groq
- Langdock
- LiteLLM
- MiniMax
- Mistral
- OpenRouter
- SiliconCloud
- Together
- Venice AI
- Z AI

Run `fabric --setup` to configure your preferred provider(s), or use `fabric --listvendors` to see all available vendors.

### Per-Pattern Model Mapping

You can configure specific models for individual patterns using environment variables

@@ -720,6 +769,16 @@ Use the `--debug` flag to control runtime logging:

- `2`: detailed debugging
- `3`: trace level

### Dry Run Mode

Use `--dry-run` to preview what would be sent to the AI model without making an API call:

```bash
echo "test input" | fabric --dry-run -p summarize
```

This is useful for debugging patterns, checking prompt construction, and verifying input formatting before using API credits.

### Extensions

Fabric supports extensions that can be called within patterns. See the [Extension Guide](internal/plugins/template/Examples/README.md) for complete documentation.

@@ -745,6 +804,22 @@ The server provides endpoints for:

For complete endpoint documentation, authentication setup, and usage examples, see [REST API Documentation](docs/rest-api.md).

### Ollama Compatibility Mode

Fabric can serve as a drop-in replacement for Ollama by exposing Ollama-compatible API endpoints. Start the server with:

```bash
fabric --serve --serveOllama
```

This enables the following Ollama-compatible endpoints:

- `GET /api/tags` - List available patterns as models
- `POST /api/chat` - Chat completions
- `GET /api/version` - Server version

Applications configured to use the Ollama API can point to your Fabric server instead, allowing you to use any of Fabric's supported AI providers through the Ollama interface. Patterns appear as models (e.g., `summarize:latest`).
## Our approach to prompting

Fabric _Patterns_ are different than most prompts you'll see.

@@ -825,6 +900,34 @@ LLM in the chat session.

Use `fabric -S` and select the option to install the strategies in your `~/.config/fabric` directory.

#### Available Strategies

Fabric includes several prompt strategies:

- `cot` - Chain-of-Thought: Step-by-step reasoning
- `cod` - Chain-of-Draft: Iterative drafting with minimal notes (5 words max per step)
- `tot` - Tree-of-Thought: Generate multiple reasoning paths and select the best one
- `aot` - Atom-of-Thought: Break problems into smallest independent atomic sub-problems
- `ltm` - Least-to-Most: Solve problems from easiest to hardest sub-problems
- `self-consistent` - Self-Consistency: Multiple reasoning paths with consensus
- `self-refine` - Self-Refinement: Answer, critique, and refine
- `reflexion` - Reflexion: Answer, critique briefly, and provide refined answer
- `standard` - Standard: Direct answer without explanation

Use the `--strategy` flag to apply a strategy:

```bash
echo "Analyze this code" | fabric --strategy cot -p analyze_code
```

List all available strategies with:

```bash
fabric --liststrategies
```

Strategies are stored as JSON files in `~/.config/fabric/strategies/`. See the default strategies for the format specification.

## Custom Patterns

You may want to use Fabric to create your own custom Patterns—but not share them with others. No problem!

@@ -904,9 +1007,9 @@ go install github.com/danielmiessler/fabric/cmd/to_pdf@latest

Make sure you have a LaTeX distribution (like TeX Live or MiKTeX) installed on your system, as `to_pdf` requires `pdflatex` to be available in your system's PATH.

```diff
-### `code_helper`
+### `code2context`

-`code_helper` is used in conjunction with the `create_coding_feature` pattern.
+`code2context` is used in conjunction with the `create_coding_feature` pattern.
```

It generates a `json` representation of a directory of code that can be fed into an AI model with instructions to create a new feature or edit the code in a specified way.

@@ -915,9 +1018,27 @@ See [the Create Coding Feature Pattern README](./data/patterns/create_coding_fea

Install it first using:

```diff
-go install github.com/danielmiessler/fabric/cmd/code_helper@latest
+go install github.com/danielmiessler/fabric/cmd/code2context@latest
```

### `generate_changelog`

`generate_changelog` generates changelogs from git commit history and GitHub pull requests. It walks through your repository's git history, extracts PR information, and produces well-formatted markdown changelogs.

```bash
generate_changelog --help
```

Features include SQLite caching for fast incremental updates, GitHub GraphQL API integration for efficient PR fetching, and optional AI-enhanced summaries using Fabric.

Install it using:

```bash
go install github.com/danielmiessler/fabric/cmd/generate_changelog@latest
```

See the [generate_changelog README](./cmd/generate_changelog/README.md) for detailed usage and options.

## pbpaste

The [examples](#examples) use the macOS program `pbpaste` to paste content from the clipboard to pipe into `fabric` as the input. `pbpaste` is not available on Windows or Linux, but there are alternatives.

@@ -977,3 +1098,13 @@ Made with [contrib.rocks](https://contrib.rocks).

`fabric` was created by <a href="https://danielmiessler.com/subscribe" target="_blank">Daniel Miessler</a> in January of 2024.
<br /><br />
<a href="https://twitter.com/intent/user?screen_name=danielmiessler"></a>

## 💜 Support This Project

<div align="center">

<img src="https://img.shields.io/badge/Sponsor-❤️-EA4AAA?style=for-the-badge&logo=github-sponsors&logoColor=white" alt="Sponsor">

**I spend hundreds of hours a year on open source. If you'd like to help support this project, you can [sponsor me here](https://github.com/sponsors/danielmiessler). 🙏🏼**

</div>
@@ -131,6 +131,75 @@ func ScanDirectory(rootDir string, maxDepth int, instructions string, ignoreList

```go
	return json.MarshalIndent(data, "", " ")
}

// ScanFiles scans specific files and returns a JSON representation
func ScanFiles(files []string, instructions string) ([]byte, error) {
	fileCount := 0
	dirSet := make(map[string]bool)

	// Create root directory item
	rootItem := FileItem{
		Type:     "directory",
		Name:     ".",
		Contents: []FileItem{},
	}

	for _, filePath := range files {
		// Skip directories
		info, err := os.Stat(filePath)
		if err != nil {
			return nil, fmt.Errorf("error accessing file %s: %v", filePath, err)
		}
		if info.IsDir() {
			continue
		}

		// Track unique directories
		dir := filepath.Dir(filePath)
		if dir != "." {
			dirSet[dir] = true
		}

		fileCount++

		// Read file content
		content, err := os.ReadFile(filePath)
		if err != nil {
			return nil, fmt.Errorf("error reading file %s: %v", filePath, err)
		}

		// Clean path for consistent handling
		cleanPath := filepath.Clean(filePath)
		if strings.HasPrefix(cleanPath, "./") {
			cleanPath = cleanPath[2:]
		}

		// Add file to the structure
		addFileToDirectory(&rootItem, cleanPath, string(content), ".")
	}

	// Create final data structure
	var data []any
	data = append(data, rootItem)

	// Add report
	reportItem := map[string]any{
		"type":        "report",
		"directories": len(dirSet) + 1,
		"files":       fileCount,
	}
	data = append(data, reportItem)

	// Add instructions
	instructionsItem := map[string]any{
		"type":    "instructions",
		"name":    "code_change_instructions",
		"details": instructions,
	}
	data = append(data, instructionsItem)

	return json.MarshalIndent(data, "", " ")
}

// addFileToDirectory adds a file to the correct directory in the structure
func addFileToDirectory(root *FileItem, path, content, rootDir string) {
	parts := strings.Split(path, string(filepath.Separator))
```
cmd/code2context/code_test.go (new file, 100 lines)

@@ -0,0 +1,100 @@

```go
package main

import (
	"encoding/json"
	"os"
	"path/filepath"
	"testing"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

func TestScanFiles(t *testing.T) {
	// Create temp directory with test files
	tmpDir := t.TempDir()

	// Create test files
	file1 := filepath.Join(tmpDir, "test1.go")
	file2 := filepath.Join(tmpDir, "test2.go")
	subDir := filepath.Join(tmpDir, "subdir")
	file3 := filepath.Join(subDir, "test3.go")

	require.NoError(t, os.WriteFile(file1, []byte("package main\n"), 0644))
	require.NoError(t, os.WriteFile(file2, []byte("package main\n\nfunc main() {}\n"), 0644))
	require.NoError(t, os.MkdirAll(subDir, 0755))
	require.NoError(t, os.WriteFile(file3, []byte("package subdir\n"), 0644))

	// Test scanning specific files
	files := []string{file1, file3}
	instructions := "Test instructions"

	jsonData, err := ScanFiles(files, instructions)
	require.NoError(t, err)

	// Parse the JSON output
	var result []any
	err = json.Unmarshal(jsonData, &result)
	require.NoError(t, err)
	assert.Len(t, result, 3) // directory, report, instructions

	// Check report
	report := result[1].(map[string]any)
	assert.Equal(t, "report", report["type"])
	assert.Equal(t, float64(2), report["files"])

	// Check instructions
	instr := result[2].(map[string]any)
	assert.Equal(t, "instructions", instr["type"])
	assert.Equal(t, "Test instructions", instr["details"])
}

func TestScanFilesSkipsDirectories(t *testing.T) {
	tmpDir := t.TempDir()

	file1 := filepath.Join(tmpDir, "test.go")
	subDir := filepath.Join(tmpDir, "subdir")

	require.NoError(t, os.WriteFile(file1, []byte("package main\n"), 0644))
	require.NoError(t, os.MkdirAll(subDir, 0755))

	// Include a directory in the file list - should be skipped
	files := []string{file1, subDir}

	jsonData, err := ScanFiles(files, "test")
	require.NoError(t, err)

	var result []any
	err = json.Unmarshal(jsonData, &result)
	require.NoError(t, err)

	// Check that only 1 file was counted (directory was skipped)
	report := result[1].(map[string]any)
	assert.Equal(t, float64(1), report["files"])
}

func TestScanFilesNonExistentFile(t *testing.T) {
	files := []string{"/nonexistent/file.go"}
	_, err := ScanFiles(files, "test")
	assert.Error(t, err)
	assert.Contains(t, err.Error(), "error accessing file")
}

func TestScanDirectory(t *testing.T) {
	tmpDir := t.TempDir()

	file1 := filepath.Join(tmpDir, "main.go")
	require.NoError(t, os.WriteFile(file1, []byte("package main\n"), 0644))

	jsonData, err := ScanDirectory(tmpDir, 3, "Test instructions", []string{})
	require.NoError(t, err)

	var result []any
	err = json.Unmarshal(jsonData, &result)
	require.NoError(t, err)
	assert.Len(t, result, 3)

	// Check instructions
	instr := result[2].(map[string]any)
	assert.Equal(t, "Test instructions", instr["details"])
}
```
cmd/code2context/main.go (new file, 109 lines)

@@ -0,0 +1,109 @@

```go
package main

import (
	"bufio"
	"flag"
	"fmt"
	"os"
	"strings"
)

func main() {
	// Command line flags
	maxDepth := flag.Int("depth", 3, "Maximum directory depth to scan")
	ignorePatterns := flag.String("ignore", ".git,node_modules,vendor", "Comma-separated patterns to ignore")
	outputFile := flag.String("out", "", "Output file (default: stdout)")
	flag.Usage = printUsage
	flag.Parse()

	// Check if stdin has data (is a pipe)
	stdinInfo, _ := os.Stdin.Stat()
	hasStdin := (stdinInfo.Mode() & os.ModeCharDevice) == 0

	var jsonData []byte
	var err error

	if hasStdin {
		// Stdin mode: read file list from stdin, instructions from argument
		if flag.NArg() != 1 {
			fmt.Fprintf(os.Stderr, "Error: When piping file list via stdin, provide exactly 1 argument: <instructions>\n")
			fmt.Fprintf(os.Stderr, "Usage: find . -name '*.go' | code2context \"instructions\"\n")
			os.Exit(1)
		}

		instructions := flag.Arg(0)

		// Read file paths from stdin
		var files []string
		scanner := bufio.NewScanner(os.Stdin)
		for scanner.Scan() {
			line := strings.TrimSpace(scanner.Text())
			if line != "" {
				files = append(files, line)
			}
		}
		if err := scanner.Err(); err != nil {
			fmt.Fprintf(os.Stderr, "Error reading stdin: %v\n", err)
			os.Exit(1)
		}

		if len(files) == 0 {
			fmt.Fprintf(os.Stderr, "Error: No files provided via stdin\n")
			os.Exit(1)
		}

		jsonData, err = ScanFiles(files, instructions)
	} else {
		// Directory mode: require directory and instructions arguments
		if flag.NArg() != 2 {
			printUsage()
			os.Exit(1)
		}

		directory := flag.Arg(0)
		instructions := flag.Arg(1)

		// Validate directory
		if info, err := os.Stat(directory); err != nil || !info.IsDir() {
			fmt.Fprintf(os.Stderr, "Error: Directory '%s' does not exist or is not a directory\n", directory)
			os.Exit(1)
		}

		// Parse ignore patterns and scan directory
		jsonData, err = ScanDirectory(directory, *maxDepth, instructions, strings.Split(*ignorePatterns, ","))
	}

	if err != nil {
		fmt.Fprintf(os.Stderr, "Error scanning: %v\n", err)
		os.Exit(1)
	}

	// Output result
	if *outputFile != "" {
		if err := os.WriteFile(*outputFile, jsonData, 0644); err != nil {
			fmt.Fprintf(os.Stderr, "Error writing file: %v\n", err)
			os.Exit(1)
		}
	} else {
		fmt.Print(string(jsonData))
	}
}

func printUsage() {
	fmt.Fprintf(os.Stderr, `code2context - Code project scanner for use with Fabric AI

Usage:
  code2context [options] <directory> <instructions>
  <file_list> | code2context [options] <instructions>

Examples:
  code2context . "Add input validation to all user inputs"
  code2context -depth 4 ./my-project "Implement error handling"
  code2context -out project.json ./src "Fix security issues"
  find . -name '*.go' | code2context "Refactor error handling"
  git ls-files '*.py' | code2context "Add type hints"

Options:
`)
	flag.PrintDefaults()
}
```
@@ -1,65 +0,0 @@ (previous `code_helper` main, removed)

```go
package main

import (
	"flag"
	"fmt"
	"os"
	"strings"
)

func main() {
	// Command line flags
	maxDepth := flag.Int("depth", 3, "Maximum directory depth to scan")
	ignorePatterns := flag.String("ignore", ".git,node_modules,vendor", "Comma-separated patterns to ignore")
	outputFile := flag.String("out", "", "Output file (default: stdout)")
	flag.Usage = printUsage
	flag.Parse()

	// Require exactly two positional arguments: directory and instructions
	if flag.NArg() != 2 {
		printUsage()
		os.Exit(1)
	}

	directory := flag.Arg(0)
	instructions := flag.Arg(1)

	// Validate directory
	if info, err := os.Stat(directory); err != nil || !info.IsDir() {
		fmt.Fprintf(os.Stderr, "Error: Directory '%s' does not exist or is not a directory\n", directory)
		os.Exit(1)
	}

	// Parse ignore patterns and scan directory
	jsonData, err := ScanDirectory(directory, *maxDepth, instructions, strings.Split(*ignorePatterns, ","))
	if err != nil {
		fmt.Fprintf(os.Stderr, "Error scanning directory: %v\n", err)
		os.Exit(1)
	}

	// Output result
	if *outputFile != "" {
		if err := os.WriteFile(*outputFile, jsonData, 0644); err != nil {
			fmt.Fprintf(os.Stderr, "Error writing file: %v\n", err)
			os.Exit(1)
		}
	} else {
		fmt.Print(string(jsonData))
	}
}

func printUsage() {
	fmt.Fprintf(os.Stderr, `code_helper - Code project scanner for use with Fabric AI

Usage:
  code_helper [options] <directory> <instructions>

Examples:
  code_helper . "Add input validation to all user inputs"
  code_helper -depth 4 ./my-project "Implement error handling"
  code_helper -out project.json ./src "Fix security issues"

Options:
`)
	flag.PrintDefaults()
}
```
@@ -1,3 +1,3 @@
 package main
 
-var version = "v1.4.370"
+var version = "v1.4.380"
Binary file not shown.
21
cmd/generate_changelog/internal/cache/cache.go
vendored
@@ -202,14 +202,23 @@ func (c *Cache) GetVersions() (map[string]*git.Version, error) {
 	}
 
 	if dateStr.Valid {
-		// Try RFC3339Nano first (for nanosecond precision), then fall back to RFC3339
-		v.Date, err = time.Parse(time.RFC3339Nano, dateStr.String)
-		if err != nil {
-			v.Date, err = time.Parse(time.RFC3339, dateStr.String)
-			if err != nil {
-				fmt.Fprintf(os.Stderr, "Error parsing date '%s' for version '%s': %v. Expected format: RFC3339 or RFC3339Nano.\n", dateStr.String, v.Name, err)
-			}
-		}
+		// Try multiple date formats: SQLite format, RFC3339Nano, and RFC3339
+		dateFormats := []string{
+			"2006-01-02 15:04:05-07:00",           // SQLite DATETIME format
+			"2006-01-02 15:04:05.999999999-07:00", // SQLite with fractional seconds
+			time.RFC3339Nano,
+			time.RFC3339,
+		}
+		var parseErr error
+		for _, format := range dateFormats {
+			v.Date, parseErr = time.Parse(format, dateStr.String)
+			if parseErr == nil {
+				break // Successfully parsed
+			}
+		}
+		if parseErr != nil {
+			fmt.Fprintf(os.Stderr, "Error parsing date '%s' for version '%s': %v\n", dateStr.String, v.Name, parseErr)
+		}
 	}
 
 	if prNumbersJSON != "" {
@@ -470,7 +470,8 @@ func (g *Generator) generateRawVersionContent(version *git.Version) string {
 	}
 
 	// There are occasionally no PRs or direct commits other than version bumps, so we handle that gracefully
-	if len(prCommits) == 0 && len(directCommits) == 0 {
+	// However, don't return early if we have PRs to output from version.PRNumbers
+	if len(prCommits) == 0 && len(directCommits) == 0 && len(version.PRNumbers) == 0 {
 		return ""
 	}
@@ -4,10 +4,10 @@ Generate code changes to an existing coding project using AI.
 
 ## Installation
 
-After installing the `code_helper` binary:
+After installing the `code2context` binary:
 
 ```bash
-go install github.com/danielmiessler/fabric/cmd/code_helper@latest
+go install github.com/danielmiessler/fabric/cmd/code2context@latest
 ```
 
 ## Usage
@@ -15,18 +15,18 @@ go install github.com/danielmiessler/fabric/cmd/code_helper@latest
 The create_coding_feature allows you to apply AI-suggested code changes directly to your project files. Use it like this:
 
 ```bash
-code_helper [project_directory] "[instructions for code changes]" | fabric --pattern create_coding_feature
+code2context [project_directory] "[instructions for code changes]" | fabric --pattern create_coding_feature
 ```
 
 For example:
 
 ```bash
-code_helper . "Create a simple Hello World C program in file main.c" | fabric --pattern create_coding_feature
+code2context . "Create a simple Hello World C program in file main.c" | fabric --pattern create_coding_feature
 ```
 
 ## How It Works
 
-1. `code_helper` scans your project directory and creates a JSON representation
+1. `code2context` scans your project directory and creates a JSON representation
 2. The AI model analyzes your project structure and instructions
 3. AI generates file changes in a standard format
 4. Fabric parses these changes and prompts you to confirm
@@ -36,7 +36,7 @@ code_helper . "Create a simple Hello World C program in file main.c" | fabric --
 
 ```bash
 # Request AI to create a Hello World program
-code_helper . "Create a simple Hello World C program in file main.c" | fabric --pattern create_coding_feature
+code2context . "Create a simple Hello World C program in file main.c" | fabric --pattern create_coding_feature
 
 # Review the changes made to your project
 git diff
@@ -52,7 +52,7 @@ git commit -s -m "Add Hello World program"
 ### Security Enhancement Example
 
 ```bash
-code_helper . "Ensure that all user input is validated and sanitized before being used in the program." | fabric --pattern create_coding_feature
+code2context . "Ensure that all user input is validated and sanitized before being used in the program." | fabric --pattern create_coding_feature
 git diff
 make check
 git add <changed files>
55
docs/DigitalOcean-Agents-Setup.md
Normal file
@@ -0,0 +1,55 @@
# DigitalOcean Gradient AI Agents

Fabric can talk to DigitalOcean Gradient™ AI Agents by using DigitalOcean's OpenAI-compatible
inference endpoint. You provide a **model access key** for inference plus an optional **DigitalOcean
API token** for model discovery.

## Prerequisites

1. Create or locate a Gradient AI Agent in the DigitalOcean control panel.
2. Create a **model access key** for inference (this is not the same as your DigitalOcean API token).
3. (Optional) Keep a DigitalOcean API token handy if you want `fabric --listmodels` to query the
   control plane for available models.

The official walkthrough for creating and using agents is here:
<https://docs.digitalocean.com/products/gradient-ai-platform/how-to/use-agents/>

## Environment variables

Set the following environment variables before running `fabric --setup`:

```bash
# Required: model access key for inference
export DIGITALOCEAN_INFERENCE_KEY="your-model-access-key"

# Optional: control-plane token for model listing
export DIGITALOCEAN_TOKEN="your-digitalocean-api-token"

# Optional: override the default inference base URL
export DIGITALOCEAN_INFERENCE_BASE_URL="https://inference.do-ai.run/v1"
```

If you need a region-specific inference URL, you can retrieve it from the GenAI regions API:

```bash
curl -H "Authorization: Bearer $DIGITALOCEAN_TOKEN" \
  "https://api.digitalocean.com/v2/gen-ai/regions"
```

## Fabric setup

Run setup and select the DigitalOcean vendor:

```bash
fabric --setup
```

Then list models (requires `DIGITALOCEAN_TOKEN`) and pick the inference name:

```bash
fabric --listmodels
fabric --vendor DigitalOcean --model <inference_name> --pattern summarize
```

If you skip `DIGITALOCEAN_TOKEN`, you can still use Fabric by supplying the model name directly
based on the agent or model you created in DigitalOcean.
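Since the endpoint is OpenAI-compatible, a request against it has the standard chat-completions shape. The following sketch only builds the HTTP request (no network call); the payload fields and helper names are illustrative, not Fabric's actual client code:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
	"os"
)

// chatRequest is the minimal OpenAI-compatible payload that an endpoint
// like https://inference.do-ai.run/v1/chat/completions expects.
type chatRequest struct {
	Model    string    `json:"model"`
	Messages []message `json:"messages"`
}

type message struct {
	Role    string `json:"role"`
	Content string `json:"content"`
}

// buildChatRequest assembles a POST to <base>/chat/completions with the
// model access key sent as a Bearer token.
func buildChatRequest(baseURL, key, model, prompt string) (*http.Request, error) {
	body, err := json.Marshal(chatRequest{
		Model:    model,
		Messages: []message{{Role: "user", Content: prompt}},
	})
	if err != nil {
		return nil, err
	}
	req, err := http.NewRequest("POST", baseURL+"/chat/completions", bytes.NewReader(body))
	if err != nil {
		return nil, err
	}
	req.Header.Set("Authorization", "Bearer "+key)
	req.Header.Set("Content-Type", "application/json")
	return req, nil
}

func main() {
	req, err := buildChatRequest("https://inference.do-ai.run/v1",
		os.Getenv("DIGITALOCEAN_INFERENCE_KEY"), "my-agent-model", "Hello")
	if err != nil {
		panic(err)
	}
	fmt.Println(req.Method, req.URL.Path)
}
```

The `model` value here is the inference name you would otherwise pass to `fabric --model`.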
449
docs/Microsoft-365-Copilot-Setup.md
Normal file
@@ -0,0 +1,449 @@
# Microsoft 365 Copilot Setup Guide for Fabric

This guide walks you through setting up and using Microsoft 365 Copilot with Fabric CLI. Microsoft 365 Copilot provides AI capabilities grounded in your organization's Microsoft 365 data, including emails, documents, meetings, and more.

> NOTE: As per the conversation in [discussion 1853](https://github.com/danielmiessler/Fabric/discussions/1853) - enterprise users with restrictive consent policies will probably need their IT admin to either create an app registration with the required permissions, or grant admin consent for an existing app like Graph Explorer.

## Table of Contents

- [What is Microsoft 365 Copilot?](#what-is-microsoft-365-copilot)
- [Requirements](#requirements)
- [Azure AD App Registration](#azure-ad-app-registration)
- [Obtaining Access Tokens](#obtaining-access-tokens)
- [Configuring Fabric for Copilot](#configuring-fabric-for-copilot)
- [Testing Your Setup](#testing-your-setup)
- [Usage Examples](#usage-examples)
- [Troubleshooting](#troubleshooting)
- [API Limitations](#api-limitations)

---
## What is Microsoft 365 Copilot?

**Microsoft 365 Copilot** is an AI-powered assistant that works across Microsoft 365 applications. When integrated with Fabric, it allows you to:

- **Query your organization's data**: Ask questions about emails, documents, calendars, and Teams chats
- **Grounded responses**: Get AI responses that are based on your actual Microsoft 365 content
- **Enterprise compliance**: All interactions respect your organization's security policies, permissions, and sensitivity labels

### Why Use Microsoft 365 Copilot with Fabric?

- **Enterprise-ready**: Built for organizations with compliance requirements
- **Data grounding**: Responses are based on your actual organizational data
- **Unified access**: Single integration for all Microsoft 365 content
- **Security**: Respects existing permissions and access controls

---
## Requirements

Before you begin, ensure you have:

### Licensing Requirements

1. **Microsoft 365 Copilot License**: Required for each user accessing the API
2. **Microsoft 365 E3 or E5 Subscription** (or equivalent): Foundation for Copilot services

### Technical Requirements

1. **Azure AD Tenant**: Your organization's Azure Active Directory
2. **Azure AD App Registration**: To authenticate with Microsoft Graph
3. **Delegated Permissions**: The Chat API only supports delegated (user) permissions, not application permissions

### Permissions Required

The following Microsoft Graph permissions are needed:

| Permission | Type | Description |
|------------|------|-------------|
| `Sites.Read.All` | Delegated | Read SharePoint sites |
| `Mail.Read` | Delegated | Read user's email |
| `People.Read.All` | Delegated | Read organization's people directory |
| `OnlineMeetingTranscript.Read.All` | Delegated | Read meeting transcripts |
| `Chat.Read` | Delegated | Read Teams chat messages |
| `ChannelMessage.Read.All` | Delegated | Read Teams channel messages |
| `ExternalItem.Read.All` | Delegated | Read external content connectors |

---
## Azure AD App Registration

### Step 1: Create the App Registration

1. Go to the [Azure Portal](https://portal.azure.com)
2. Navigate to **Azure Active Directory** > **App registrations**
3. Click **New registration**
4. Configure the application:
   - **Name**: `Fabric CLI - Copilot`
   - **Supported account types**: Select "Accounts in this organizational directory only"
   - **Redirect URI**: Select "Public client/native (mobile & desktop)" and enter `http://localhost:8400/callback`
5. Click **Register**

### Step 2: Note Your Application IDs

After registration, note these values from the **Overview** page:

- **Application (client) ID**: e.g., `12345678-1234-1234-1234-123456789abc`
- **Directory (tenant) ID**: e.g., `abcdef12-3456-7890-abcd-ef1234567890`

### Step 3: Configure API Permissions

1. Go to **API permissions** in your app registration
2. Click **Add a permission**
3. Select **Microsoft Graph**
4. Select **Delegated permissions**
5. Add the following permissions:
   - `Sites.Read.All`
   - `Mail.Read`
   - `People.Read.All`
   - `OnlineMeetingTranscript.Read.All`
   - `Chat.Read`
   - `ChannelMessage.Read.All`
   - `ExternalItem.Read.All`
   - `offline_access` (for refresh tokens)
6. Click **Add permissions**
7. **Important**: Click **Grant admin consent for [Your Organization]** (requires admin privileges)

### Step 4: Configure Authentication (Optional - For Confidential Clients)

If you want to use client credentials for token refresh:

1. Go to **Certificates & secrets**
2. Click **New client secret**
3. Add a description and select an expiration
4. Click **Add**
5. **Important**: Copy the secret value immediately (it won't be shown again)

---
## Obtaining Access Tokens

The Microsoft 365 Copilot Chat API requires **delegated permissions**, meaning you need to authenticate as a user. There are several ways to obtain tokens:

### Option 1: Using Azure CLI (Recommended for Development)

```bash
# Install Azure CLI if not already installed
# https://docs.microsoft.com/en-us/cli/azure/install-azure-cli

# Login with your work account
az login --tenant YOUR_TENANT_ID

# Get an access token for Microsoft Graph
az account get-access-token --resource https://graph.microsoft.com --query accessToken -o tsv
```

### Option 2: Using Device Code Flow

For headless environments or when browser authentication isn't possible:

```bash
# Request device code
curl -X POST "https://login.microsoftonline.com/YOUR_TENANT_ID/oauth2/v2.0/devicecode" \
  -H "Content-Type: application/x-www-form-urlencoded" \
  -d "client_id=YOUR_CLIENT_ID&scope=Sites.Read.All Mail.Read People.Read.All OnlineMeetingTranscript.Read.All Chat.Read ChannelMessage.Read.All ExternalItem.Read.All offline_access"

# Follow the instructions to authenticate in a browser
# Then poll for the token using the device_code from the response
```
### Option 3: Using Microsoft Graph Explorer (For Testing)

1. Go to [Microsoft Graph Explorer](https://developer.microsoft.com/en-us/graph/graph-explorer)
2. Sign in with your work account
3. Click the gear icon > "Select permissions"
4. Enable the required permissions
5. Use the access token from the "Access token" tab

### Option 4: Using MSAL Libraries

For production applications, use Microsoft Authentication Library (MSAL):

```go
// Example using Azure Identity SDK for Go
import "github.com/Azure/azure-sdk-for-go/sdk/azidentity"

cred, err := azidentity.NewInteractiveBrowserCredential(&azidentity.InteractiveBrowserCredentialOptions{
	TenantID: "YOUR_TENANT_ID",
	ClientID: "YOUR_CLIENT_ID",
})
```

---
## Configuring Fabric for Copilot

### Method 1: Using Fabric Setup (Recommended)

1. **Run Fabric Setup:**

   ```bash
   fabric --setup
   ```

2. **Select Copilot from the menu:**
   - Find `Copilot` in the numbered list
   - Enter the number and press Enter

3. **Enter Configuration Values:**

   ```
   [Copilot] Enter your Azure AD Tenant ID:
   > contoso.onmicrosoft.com

   [Copilot] Enter your Azure AD Application (Client) ID:
   > 12345678-1234-1234-1234-123456789abc

   [Copilot] Enter your Azure AD Client Secret (optional):
   > (press Enter to skip, or enter secret for token refresh)

   [Copilot] Enter a pre-obtained OAuth2 Access Token:
   > eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIs...

   [Copilot] Enter a pre-obtained OAuth2 Refresh Token (optional):
   > (press Enter to skip, or enter refresh token)

   [Copilot] Enter your timezone:
   > America/New_York
   ```

### Method 2: Manual Configuration

Edit `~/.config/fabric/.env`:

```bash
# Microsoft 365 Copilot Configuration
COPILOT_TENANT_ID=contoso.onmicrosoft.com
COPILOT_CLIENT_ID=12345678-1234-1234-1234-123456789abc
COPILOT_CLIENT_SECRET=your-client-secret-if-applicable
COPILOT_ACCESS_TOKEN=eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIs...
COPILOT_REFRESH_TOKEN=your-refresh-token-if-available
COPILOT_API_BASE_URL=https://graph.microsoft.com/beta/copilot
COPILOT_TIME_ZONE=America/New_York
```

### Verify Configuration

```bash
fabric --listmodels | grep -i copilot
```

Expected output:

```
[X] Copilot|microsoft-365-copilot
```

---
## Testing Your Setup

### Basic Test

```bash
# Simple query
echo "What meetings do I have tomorrow?" | fabric --vendor Copilot

# With explicit model (though there's only one)
echo "Summarize my recent emails" | fabric --vendor Copilot --model microsoft-365-copilot
```

### Test with Streaming

```bash
echo "What are the key points from my last team meeting?" | \
  fabric --vendor Copilot --stream
```

### Test with Patterns

```bash
# Use a pattern with Copilot
echo "Find action items from my recent emails" | \
  fabric --pattern extract_wisdom --vendor Copilot
```

---
## Usage Examples

### Query Calendar

```bash
echo "What meetings do I have scheduled for next week?" | fabric --vendor Copilot
```

### Summarize Emails

```bash
echo "Summarize the emails I received yesterday from my manager" | fabric --vendor Copilot
```

### Search Documents

```bash
echo "Find documents about the Q4 budget proposal" | fabric --vendor Copilot
```

### Team Collaboration

```bash
echo "What were the main discussion points in the engineering standup channel this week?" | fabric --vendor Copilot
```

### Meeting Insights

```bash
echo "What action items came out of the project review meeting on Monday?" | fabric --vendor Copilot
```

### Using with Fabric Patterns

```bash
# Extract wisdom from organizational content
echo "What are the key decisions from last month's leadership updates?" | \
  fabric --pattern extract_wisdom --vendor Copilot

# Summarize with a specific pattern
echo "Summarize the HR policy document about remote work" | \
  fabric --pattern summarize --vendor Copilot
```

---
## Troubleshooting

### Error: "Authentication failed" or "401 Unauthorized"

**Cause**: Invalid or expired access token

**Solutions**:

1. Obtain a fresh access token:

   ```bash
   az account get-access-token --resource https://graph.microsoft.com --query accessToken -o tsv
   ```

2. Update your configuration:

   ```bash
   fabric --setup
   # Select Copilot and enter the new token
   ```

3. Check token hasn't expired (tokens typically expire after 1 hour)
### Error: "403 Forbidden"

**Cause**: Missing permissions or admin consent not granted

**Solutions**:

1. Verify all required permissions are added to your app registration
2. Ensure admin consent has been granted
3. Check that your user has a Microsoft 365 Copilot license

### Error: "Failed to create conversation"

**Cause**: API access issues or service unavailable

**Solutions**:

1. Verify the API base URL is correct: `https://graph.microsoft.com/beta/copilot`
2. Check Microsoft 365 service status
3. Ensure your organization has Copilot enabled

### Error: "Rate limit exceeded"

**Cause**: Too many requests

**Solutions**:

1. Wait a few minutes before retrying
2. Reduce request frequency
3. Consider batching queries

### Token Refresh Not Working

**Cause**: Missing client secret or refresh token

**Solutions**:

1. Ensure you have both a refresh token and client secret configured
2. Re-authenticate to get new tokens
3. Check that your app registration supports refresh tokens (public client)

---
## API Limitations

### Current Limitations

1. **Preview API**: The Chat API is currently in preview (`/beta` endpoint) and subject to change
2. **Delegated Only**: Only delegated (user) permissions are supported, not application permissions
3. **Single Model**: Copilot exposes a single unified model, unlike other vendors with multiple model options
4. **Enterprise Only**: Requires Microsoft 365 work or school accounts
5. **Licensing**: Requires Microsoft 365 Copilot license per user

### Rate Limits

The Microsoft Graph API has rate limits that apply:

- Per-app limits
- Per-user limits
- Tenant-wide limits

Consult [Microsoft Graph throttling guidance](https://docs.microsoft.com/en-us/graph/throttling) for details.

### Data Freshness

Copilot indexes data from Microsoft 365 services. There may be a delay between when content is created and when it becomes available in Copilot responses.

---
## Additional Resources

### Microsoft Documentation

- [Microsoft 365 Copilot APIs Overview](https://learn.microsoft.com/en-us/microsoft-365-copilot/extensibility/copilot-apis-overview)
- [Chat API Documentation](https://learn.microsoft.com/en-us/microsoft-365-copilot/extensibility/api/ai-services/chat/overview)
- [Microsoft Graph Authentication](https://learn.microsoft.com/en-us/graph/auth/)
- [Azure AD App Registration](https://learn.microsoft.com/en-us/azure/active-directory/develop/quickstart-register-app)

### Fabric Documentation

- [Fabric README](../README.md)
- [Contexts and Sessions Tutorial](./contexts-and-sessions-tutorial.md)
- [Other Vendor Setup Guides](./GitHub-Models-Setup.md)

---
## Summary

Microsoft 365 Copilot integration with Fabric provides enterprise-ready AI capabilities grounded in your organization's data. Key points:

- **Enterprise compliance**: Works within your organization's security and compliance policies
- **Data grounding**: Responses are based on your actual Microsoft 365 content
- **Single model**: Exposes one unified AI model (`microsoft-365-copilot`)
- **Delegated auth**: Requires user authentication (OAuth2 with delegated permissions)
- **Preview API**: Currently in beta; expect changes

### Quick Start Commands

```bash
# 1. Set up Azure AD app registration (see guide above)

# 2. Get access token
az login --tenant YOUR_TENANT_ID
ACCESS_TOKEN=$(az account get-access-token --resource https://graph.microsoft.com --query accessToken -o tsv)

# 3. Configure Fabric
fabric --setup
# Select Copilot, enter tenant ID, client ID, and access token

# 4. Test it
echo "What meetings do I have this week?" | fabric --vendor Copilot
```

Happy prompting with Microsoft 365 Copilot!
@@ -15,6 +15,8 @@ import (
 	"github.com/danielmiessler/fabric/internal/plugins/ai/anthropic"
 	"github.com/danielmiessler/fabric/internal/plugins/ai/azure"
 	"github.com/danielmiessler/fabric/internal/plugins/ai/bedrock"
+	"github.com/danielmiessler/fabric/internal/plugins/ai/copilot"
+	"github.com/danielmiessler/fabric/internal/plugins/ai/digitalocean"
 	"github.com/danielmiessler/fabric/internal/plugins/ai/dryrun"
 	"github.com/danielmiessler/fabric/internal/plugins/ai/exolab"
 	"github.com/danielmiessler/fabric/internal/plugins/ai/gemini"
@@ -98,6 +100,7 @@ func NewPluginRegistry(db *fsdb.Db) (ret *PluginRegistry, err error) {
 	// Add non-OpenAI compatible clients
 	vendors = append(vendors,
 		openai.NewClient(),
+		digitalocean.NewClient(),
 		ollama.NewClient(),
 		azure.NewClient(),
 		gemini.NewClient(),
@@ -105,7 +108,8 @@ func NewPluginRegistry(db *fsdb.Db) (ret *PluginRegistry, err error) {
 		vertexai.NewClient(),
 		lmstudio.NewClient(),
 		exolab.NewClient(),
-		perplexity.NewClient(), // Added Perplexity client
+		perplexity.NewClient(),
+		copilot.NewClient(), // Microsoft 365 Copilot
 	)
 
 	if hasAWSCredentials() {
@@ -29,11 +29,7 @@ func NewClient() (ret *Client) {
 	vendorName := "Anthropic"
 	ret = &Client{}
 
-	ret.PluginBase = &plugins.PluginBase{
-		Name:            vendorName,
-		EnvNamePrefix:   plugins.BuildEnvVariablePrefix(vendorName),
-		ConfigureCustom: ret.configure,
-	}
+	ret.PluginBase = plugins.NewVendorPluginBase(vendorName, ret.configure)
 
 	ret.ApiBaseURL = ret.AddSetupQuestion("API Base URL", false)
 	ret.ApiBaseURL.Value = defaultBaseUrl
@@ -51,13 +51,9 @@ func NewClient() (ret *BedrockClient) {
 	cfg, err := config.LoadDefaultConfig(ctx)
 	if err != nil {
 		// Create a minimal client that will fail gracefully during configuration
-		ret.PluginBase = &plugins.PluginBase{
-			Name:          vendorName,
-			EnvNamePrefix: plugins.BuildEnvVariablePrefix(vendorName),
-			ConfigureCustom: func() error {
-				return fmt.Errorf("unable to load AWS Config: %w", err)
-			},
-		}
+		ret.PluginBase = plugins.NewVendorPluginBase(vendorName, func() error {
+			return fmt.Errorf("unable to load AWS Config: %w", err)
+		})
 		ret.bedrockRegion = ret.PluginBase.AddSetupQuestion("AWS Region", true)
 		return
 	}
@@ -67,11 +63,7 @@ func NewClient() (ret *BedrockClient) {
 	runtimeClient := bedrockruntime.NewFromConfig(cfg)
 	controlPlaneClient := bedrock.NewFromConfig(cfg)
 
-	ret.PluginBase = &plugins.PluginBase{
-		Name:            vendorName,
-		EnvNamePrefix:   plugins.BuildEnvVariablePrefix(vendorName),
-		ConfigureCustom: ret.configure,
-	}
+	ret.PluginBase = plugins.NewVendorPluginBase(vendorName, ret.configure)
 
 	ret.runtimeClient = runtimeClient
 	ret.controlPlaneClient = controlPlaneClient
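The refactor in these hunks replaces each vendor's hand-written `PluginBase` literal with a single call to `plugins.NewVendorPluginBase`. The sketch below shows the shape of that pattern with minimal stand-in types; the field names mirror the diff, but `PluginBase` and `BuildEnvVariablePrefix` here are simplified hypothetical versions, not the repository's actual definitions:

```go
package main

import (
	"fmt"
	"strings"
)

// PluginBase is a minimal stand-in for the repository's plugins.PluginBase.
type PluginBase struct {
	Name            string
	EnvNamePrefix   string
	ConfigureCustom func() error
}

// BuildEnvVariablePrefix is a hypothetical stand-in that derives an
// env-var prefix from the vendor name.
func BuildEnvVariablePrefix(name string) string {
	return strings.ToUpper(name) + "_"
}

// NewVendorPluginBase centralizes the literal each vendor used to repeat,
// so wiring up a vendor becomes one call instead of a four-line struct.
func NewVendorPluginBase(name string, configure func() error) *PluginBase {
	return &PluginBase{
		Name:            name,
		EnvNamePrefix:   BuildEnvVariablePrefix(name),
		ConfigureCustom: configure,
	}
}

func main() {
	pb := NewVendorPluginBase("Anthropic", func() error { return nil })
	fmt.Println(pb.Name, pb.EnvNamePrefix)
}
```

Beyond brevity, the helper keeps the `Name`/`EnvNamePrefix` pairing consistent across vendors, so a new vendor cannot accidentally mismatch them.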
485
internal/plugins/ai/copilot/copilot.go
Normal file
@@ -0,0 +1,485 @@
// Package copilot provides integration with Microsoft 365 Copilot Chat API.
// This vendor allows Fabric to interact with Microsoft 365 Copilot, which provides
// AI capabilities grounded in your organization's Microsoft 365 data.
//
// Requirements:
//   - Microsoft 365 Copilot license for each user
//   - Microsoft 365 E3 or E5 subscription (or equivalent)
//   - Azure AD app registration with appropriate permissions
//
// The Chat API is currently in preview and requires delegated (work or school account)
// permissions. Application permissions are not supported.
package copilot

import (
	"bufio"
	"bytes"
	"context"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
	"strings"
	"time"

	"github.com/danielmiessler/fabric/internal/chat"
	"github.com/danielmiessler/fabric/internal/domain"
	debuglog "github.com/danielmiessler/fabric/internal/log"
	"github.com/danielmiessler/fabric/internal/plugins"
	"golang.org/x/oauth2"
)

const (
	vendorName = "Copilot"

	// Microsoft Graph API endpoints
	defaultBaseURL    = "https://graph.microsoft.com/beta/copilot"
	conversationsPath = "/conversations"

	// OAuth2 endpoints for Microsoft identity platform
	microsoftAuthURL  = "https://login.microsoftonline.com/%s/oauth2/v2.0/authorize"
	microsoftTokenURL = "https://login.microsoftonline.com/%s/oauth2/v2.0/token"

	// Default scopes required for Copilot Chat API
	// These are the minimum required permissions
	defaultScopes = "Sites.Read.All Mail.Read People.Read.All OnlineMeetingTranscript.Read.All Chat.Read ChannelMessage.Read.All ExternalItem.Read.All offline_access"

	// Model name exposed by Copilot (single model)
	copilotModelName = "microsoft-365-copilot"
)
// NewClient creates a new Microsoft 365 Copilot client.
func NewClient() *Client {
	c := &Client{}

	c.PluginBase = &plugins.PluginBase{
		Name:            vendorName,
		EnvNamePrefix:   plugins.BuildEnvVariablePrefix(vendorName),
		ConfigureCustom: c.configure,
	}

	// Setup questions for configuration
	c.TenantID = c.AddSetupQuestion("Tenant ID", true)
	c.TenantID.Question = "Enter your Azure AD Tenant ID (e.g., contoso.onmicrosoft.com or GUID)"

	c.ClientID = c.AddSetupQuestion("Client ID", true)
	c.ClientID.Question = "Enter your Azure AD Application (Client) ID"

	c.ClientSecret = c.AddSetupQuestion("Client Secret", false)
	c.ClientSecret.Question = "Enter your Azure AD Client Secret (optional, for confidential clients)"

	c.AccessToken = c.AddSetupQuestion("Access Token", false)
	c.AccessToken.Question = "Enter a pre-obtained OAuth2 Access Token (optional, for testing)"

	c.RefreshToken = c.AddSetupQuestion("Refresh Token", false)
	c.RefreshToken.Question = "Enter a pre-obtained OAuth2 Refresh Token (optional)"

	c.ApiBaseURL = c.AddSetupQuestion("API Base URL", false)
	c.ApiBaseURL.Value = defaultBaseURL

	c.TimeZone = c.AddSetupQuestion("Time Zone", false)
	c.TimeZone.Value = "America/New_York"
	c.TimeZone.Question = "Enter your timezone (e.g., America/New_York, Europe/London)"

	return c
}
// Client represents a Microsoft 365 Copilot API client.
|
||||
type Client struct {
|
||||
*plugins.PluginBase
|
||||
|
||||
// Configuration
|
||||
TenantID *plugins.SetupQuestion
|
||||
ClientID *plugins.SetupQuestion
|
||||
ClientSecret *plugins.SetupQuestion
|
||||
AccessToken *plugins.SetupQuestion
|
||||
RefreshToken *plugins.SetupQuestion
|
||||
ApiBaseURL *plugins.SetupQuestion
|
||||
TimeZone *plugins.SetupQuestion
|
||||
|
||||
// Runtime state
|
||||
httpClient *http.Client
|
||||
oauth2Config *oauth2.Config
|
||||
token *oauth2.Token
|
||||
}
|
||||
|
||||
// configure initializes the client with OAuth2 configuration.
func (c *Client) configure() error {
	if c.TenantID.Value == "" || c.ClientID.Value == "" {
		return fmt.Errorf("tenant ID and client ID are required")
	}

	// Build OAuth2 configuration
	c.oauth2Config = &oauth2.Config{
		ClientID:     c.ClientID.Value,
		ClientSecret: c.ClientSecret.Value,
		Endpoint: oauth2.Endpoint{
			AuthURL:  fmt.Sprintf(microsoftAuthURL, c.TenantID.Value),
			TokenURL: fmt.Sprintf(microsoftTokenURL, c.TenantID.Value),
		},
		Scopes: strings.Split(defaultScopes, " "),
	}

	// If we have pre-configured tokens, use them
	if c.AccessToken.Value != "" {
		c.token = &oauth2.Token{
			AccessToken:  c.AccessToken.Value,
			RefreshToken: c.RefreshToken.Value,
			TokenType:    "Bearer",
		}
		// If we have a refresh token, set expiry in the past to trigger refresh
		if c.RefreshToken.Value != "" && c.ClientSecret.Value != "" {
			c.token.Expiry = time.Now().Add(-time.Hour)
		}
	}

	// Create HTTP client with OAuth2 token source
	if c.token != nil {
		tokenSource := c.oauth2Config.TokenSource(context.Background(), c.token)
		c.httpClient = oauth2.NewClient(context.Background(), tokenSource)
	} else {
		// No tokens available - will need device code flow or manual token
		c.httpClient = &http.Client{Timeout: 120 * time.Second}
	}

	return nil
}

// IsConfigured returns true if the client has valid configuration.
func (c *Client) IsConfigured() bool {
	// Minimum requirement: tenant ID and client ID
	if c.TenantID.Value == "" || c.ClientID.Value == "" {
		return false
	}
	// Must have either an access token or ability to get one
	return c.AccessToken.Value != "" || (c.RefreshToken.Value != "" && c.ClientSecret.Value != "")
}

// ListModels returns the available models.
// Microsoft 365 Copilot exposes a single model - the Copilot service itself.
func (c *Client) ListModels() ([]string, error) {
	// Copilot doesn't expose multiple models - it's a unified service
	// We expose it as a single "model" for consistency with Fabric's architecture
	return []string{copilotModelName}, nil
}

// Send sends a message to Copilot and returns the response.
func (c *Client) Send(ctx context.Context, msgs []*chat.ChatCompletionMessage, opts *domain.ChatOptions) (string, error) {
	// Create a conversation
	conversationID, err := c.createConversation(ctx)
	if err != nil {
		return "", fmt.Errorf("failed to create conversation: %w", err)
	}

	// Build the message content from chat messages
	messageText := c.buildMessageText(msgs)

	// Send the chat message
	response, err := c.sendChatMessage(ctx, conversationID, messageText)
	if err != nil {
		return "", fmt.Errorf("failed to send message: %w", err)
	}

	return response, nil
}

// SendStream sends a message to Copilot and streams the response.
func (c *Client) SendStream(msgs []*chat.ChatCompletionMessage, opts *domain.ChatOptions, channel chan domain.StreamUpdate) error {
	defer close(channel)

	ctx := context.Background()

	// Create a conversation
	conversationID, err := c.createConversation(ctx)
	if err != nil {
		return fmt.Errorf("failed to create conversation: %w", err)
	}

	// Build the message content from chat messages
	messageText := c.buildMessageText(msgs)

	// Send the streaming chat message
	if err := c.sendChatMessageStream(ctx, conversationID, messageText, channel); err != nil {
		return fmt.Errorf("failed to stream message: %w", err)
	}

	return nil
}

// NeedsRawMode returns whether the model needs raw mode.
func (c *Client) NeedsRawMode(modelName string) bool {
	return false
}

// buildMessageText combines chat messages into a single prompt for Copilot.
func (c *Client) buildMessageText(msgs []*chat.ChatCompletionMessage) string {
	var parts []string

	for _, msg := range msgs {
		content := strings.TrimSpace(msg.Content)
		if content == "" {
			continue
		}

		switch msg.Role {
		case chat.ChatMessageRoleSystem:
			// Prepend system messages as context
			parts = append([]string{content}, parts...)
		case chat.ChatMessageRoleUser, chat.ChatMessageRoleAssistant:
			parts = append(parts, content)
		}
	}

	return strings.Join(parts, "\n\n")
}

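The flattening above can be sketched as a standalone function. This is a minimal sketch, not Fabric's real types: the `[2]string{role, content}` shape and the literal role strings stand in for the `chat` package constants, but the ordering rule is the same — system messages are pulled to the front, user and assistant turns keep their order, and parts are joined with blank lines.

```go
package main

import (
	"fmt"
	"strings"
)

// flattenMessages mirrors buildMessageText: system messages are prepended
// as leading context, user/assistant messages are appended in order, other
// roles are dropped, and all parts are joined with blank lines.
// The [2]string{role, content} shape is illustrative, not Fabric's type.
func flattenMessages(msgs [][2]string) string {
	var parts []string
	for _, m := range msgs {
		role, content := m[0], strings.TrimSpace(m[1])
		if content == "" {
			continue
		}
		switch role {
		case "system":
			parts = append([]string{content}, parts...)
		case "user", "assistant":
			parts = append(parts, content)
		}
	}
	return strings.Join(parts, "\n\n")
}

func main() {
	fmt.Println(flattenMessages([][2]string{
		{"user", "Summarize this."},
		{"system", "You are terse."},
	}))
}
```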
// createConversation creates a new Copilot conversation.
func (c *Client) createConversation(ctx context.Context) (string, error) {
	url := c.ApiBaseURL.Value + conversationsPath

	req, err := http.NewRequestWithContext(ctx, "POST", url, bytes.NewBufferString("{}"))
	if err != nil {
		return "", err
	}

	req.Header.Set("Content-Type", "application/json")
	c.addAuthHeader(req)

	resp, err := c.httpClient.Do(req)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()

	if resp.StatusCode != http.StatusCreated {
		body, _ := io.ReadAll(resp.Body)
		return "", fmt.Errorf("failed to create conversation: %s - %s", resp.Status, string(body))
	}

	var result conversationResponse
	if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
		return "", err
	}

	debuglog.Debug(debuglog.Detailed, "Created Copilot conversation: %s\n", result.ID)
	return result.ID, nil
}

// sendChatMessage sends a message to an existing conversation (synchronous).
func (c *Client) sendChatMessage(ctx context.Context, conversationID, messageText string) (string, error) {
	url := fmt.Sprintf("%s%s/%s/chat", c.ApiBaseURL.Value, conversationsPath, conversationID)

	reqBody := chatRequest{
		Message: messageParam{
			Text: messageText,
		},
		LocationHint: locationHint{
			TimeZone: c.TimeZone.Value,
		},
	}

	jsonBody, err := json.Marshal(reqBody)
	if err != nil {
		return "", err
	}

	req, err := http.NewRequestWithContext(ctx, "POST", url, bytes.NewBuffer(jsonBody))
	if err != nil {
		return "", err
	}

	req.Header.Set("Content-Type", "application/json")
	c.addAuthHeader(req)

	resp, err := c.httpClient.Do(req)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()

	if resp.StatusCode != http.StatusOK {
		body, _ := io.ReadAll(resp.Body)
		return "", fmt.Errorf("chat request failed: %s - %s", resp.Status, string(body))
	}

	var result conversationResponse
	if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
		return "", err
	}

	// Extract the assistant's response from messages
	return c.extractResponseText(result.Messages), nil
}

// sendChatMessageStream sends a message and streams the response via SSE.
func (c *Client) sendChatMessageStream(ctx context.Context, conversationID, messageText string, channel chan domain.StreamUpdate) error {
	url := fmt.Sprintf("%s%s/%s/chatOverStream", c.ApiBaseURL.Value, conversationsPath, conversationID)

	reqBody := chatRequest{
		Message: messageParam{
			Text: messageText,
		},
		LocationHint: locationHint{
			TimeZone: c.TimeZone.Value,
		},
	}

	jsonBody, err := json.Marshal(reqBody)
	if err != nil {
		return err
	}

	req, err := http.NewRequestWithContext(ctx, "POST", url, bytes.NewBuffer(jsonBody))
	if err != nil {
		return err
	}

	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("Accept", "text/event-stream")
	c.addAuthHeader(req)

	resp, err := c.httpClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	if resp.StatusCode != http.StatusOK {
		body, _ := io.ReadAll(resp.Body)
		return fmt.Errorf("stream request failed: %s - %s", resp.Status, string(body))
	}

	// Parse SSE stream
	return c.parseSSEStream(resp.Body, channel)
}

// parseSSEStream parses the Server-Sent Events stream from Copilot.
func (c *Client) parseSSEStream(reader io.Reader, channel chan domain.StreamUpdate) error {
	scanner := bufio.NewScanner(reader)
	var lastMessageText string

	for scanner.Scan() {
		line := scanner.Text()

		// SSE format: "data: {...json...}"
		if !strings.HasPrefix(line, "data: ") {
			continue
		}

		jsonData := strings.TrimPrefix(line, "data: ")
		if jsonData == "" {
			continue
		}

		var event conversationResponse
		if err := json.Unmarshal([]byte(jsonData), &event); err != nil {
			debuglog.Debug(debuglog.Detailed, "Failed to parse SSE event: %v\n", err)
			continue
		}

		// Extract new text from the response
		newText := c.extractResponseText(event.Messages)
		if newText != "" && newText != lastMessageText {
			// Send only the delta (new content)
			if delta, ok := strings.CutPrefix(newText, lastMessageText); ok {
				if delta != "" {
					channel <- domain.StreamUpdate{Type: domain.StreamTypeContent, Content: delta}
				}
			} else {
				// Complete message replacement
				channel <- domain.StreamUpdate{Type: domain.StreamTypeContent, Content: newText}
			}
			lastMessageText = newText
		}
	}

	if err := scanner.Err(); err != nil {
		return fmt.Errorf("error reading stream: %w", err)
	}

	channel <- domain.StreamUpdate{Type: domain.StreamTypeContent, Content: "\n"}
	return nil
}

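The delta logic in parseSSEStream can be isolated into a small sketch: each SSE event carries the full message text so far, so only the new suffix should be forwarded, and a non-extension (e.g., the server rewrote the message) falls back to emitting the whole replacement. Function names here are illustrative.

```go
package main

import (
	"fmt"
	"strings"
)

// sseDelta mirrors the streaming delta rule above: if the current full text
// extends the previously seen text, return only the new suffix; otherwise
// return the full replacement text.
func sseDelta(last, current string) string {
	if delta, ok := strings.CutPrefix(current, last); ok {
		return delta
	}
	return current
}

func main() {
	fmt.Println(sseDelta("Hel", "Hello")) // only the new suffix
}
```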
// extractResponseText extracts the assistant's response from messages.
func (c *Client) extractResponseText(messages []responseMessage) string {
	// Find the last assistant message (Copilot's response)
	for i := len(messages) - 1; i >= 0; i-- {
		msg := messages[i]
		// Response messages from Copilot have the copilotConversationResponseMessage type
		if msg.ODataType == "#microsoft.graph.copilotConversationResponseMessage" {
			if msg.Text != "" {
				return msg.Text
			}
		}
	}
	return ""
}

// addAuthHeader adds the authorization header to a request.
func (c *Client) addAuthHeader(req *http.Request) {
	if c.token != nil && c.token.AccessToken != "" {
		req.Header.Set("Authorization", "Bearer "+c.token.AccessToken)
	} else if c.AccessToken.Value != "" {
		req.Header.Set("Authorization", "Bearer "+c.AccessToken.Value)
	}
}

// API request/response types

type chatRequest struct {
	Message             messageParam         `json:"message"`
	LocationHint        locationHint         `json:"locationHint"`
	AdditionalContext   []contextMessage     `json:"additionalContext,omitempty"`
	ContextualResources *contextualResources `json:"contextualResources,omitempty"`
}

type messageParam struct {
	Text string `json:"text"`
}

type locationHint struct {
	TimeZone string `json:"timeZone"`
}

type contextMessage struct {
	Text string `json:"text"`
}

type contextualResources struct {
	Files      []fileResource `json:"files,omitempty"`
	WebContext *webContext    `json:"webContext,omitempty"`
}

type fileResource struct {
	URI string `json:"uri"`
}

type webContext struct {
	IsWebEnabled bool `json:"isWebEnabled"`
}

type conversationResponse struct {
	ID              string            `json:"id"`
	CreatedDateTime string            `json:"createdDateTime"`
	DisplayName    string            `json:"displayName"`
	State           string            `json:"state"`
	TurnCount       int               `json:"turnCount"`
	Messages        []responseMessage `json:"messages,omitempty"`
}

type responseMessage struct {
	ODataType       string        `json:"@odata.type"`
	ID              string        `json:"id"`
	Text            string        `json:"text"`
	CreatedDateTime string        `json:"createdDateTime"`
	AdaptiveCards   []any         `json:"adaptiveCards,omitempty"`
	Attributions    []attribution `json:"attributions,omitempty"`
}

type attribution struct {
	AttributionType     string `json:"attributionType"`
	ProviderDisplayName string `json:"providerDisplayName"`
	AttributionSource   string `json:"attributionSource"`
	SeeMoreWebURL       string `json:"seeMoreWebUrl"`
}

151  internal/plugins/ai/digitalocean/digitalocean.go  Normal file
@@ -0,0 +1,151 @@
package digitalocean

import (
	"context"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
	"net/url"
	"time"

	"github.com/danielmiessler/fabric/internal/i18n"
	"github.com/danielmiessler/fabric/internal/plugins"
	"github.com/danielmiessler/fabric/internal/plugins/ai/openai"
)

const (
	defaultInferenceBaseURL = "https://inference.do-ai.run/v1"
	controlPlaneModelsURL   = "https://api.digitalocean.com/v2/gen-ai/models"
	errorResponseLimit      = 1024
	maxResponseSize         = 10 * 1024 * 1024
)

type Client struct {
	*openai.Client
	ControlPlaneToken *plugins.SetupQuestion
	httpClient        *http.Client
}

type modelsResponse struct {
	Models []modelDetails `json:"models"`
}

type modelDetails struct {
	InferenceName string `json:"inference_name"`
	Name          string `json:"name"`
	UUID          string `json:"uuid"`
}

func NewClient() *Client {
	base := openai.NewClientCompatibleNoSetupQuestions("DigitalOcean", nil)
	base.ApiKey = base.AddSetupQuestion("Inference Key", true)
	base.ApiBaseURL = base.AddSetupQuestion("Inference Base URL", false)
	base.ApiBaseURL.Value = defaultInferenceBaseURL
	base.ImplementsResponses = false

	client := &Client{
		Client: base,
	}
	client.ControlPlaneToken = client.AddSetupQuestion("Token", false)
	return client
}

func (c *Client) ListModels() ([]string, error) {
	if c.ControlPlaneToken.Value == "" {
		models, err := c.Client.ListModels()
		if err == nil && len(models) > 0 {
			return models, nil
		}
		if err != nil {
			return nil, fmt.Errorf(
				"DigitalOcean model list unavailable: %w. Set DIGITALOCEAN_TOKEN to fetch models from the control plane",
				err,
			)
		}
		return nil, fmt.Errorf("DigitalOcean model list unavailable. Set DIGITALOCEAN_TOKEN to fetch models from the control plane")
	}
	return c.fetchModelsFromControlPlane(context.Background())
}

func (c *Client) fetchModelsFromControlPlane(ctx context.Context) ([]string, error) {
	if ctx == nil {
		ctx = context.Background()
	}

	fullURL, err := url.Parse(controlPlaneModelsURL)
	if err != nil {
		return nil, fmt.Errorf("failed to parse DigitalOcean control plane URL: %w", err)
	}

	req, err := http.NewRequestWithContext(ctx, http.MethodGet, fullURL.String(), nil)
	if err != nil {
		return nil, err
	}

	req.Header.Set("Authorization", fmt.Sprintf("Bearer %s", c.ControlPlaneToken.Value))
	req.Header.Set("Accept", "application/json")

	client := c.httpClient
	if client == nil {
		client = &http.Client{Timeout: 10 * time.Second}
	}

	resp, err := client.Do(req)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	if resp.StatusCode != http.StatusOK {
		bodyBytes, readErr := io.ReadAll(io.LimitReader(resp.Body, errorResponseLimit))
		if readErr != nil {
			return nil, fmt.Errorf(
				"DigitalOcean models request failed with status %d: %w",
				resp.StatusCode,
				readErr,
			)
		}
		return nil, fmt.Errorf(
			"DigitalOcean models request failed with status %d: %s",
			resp.StatusCode,
			string(bodyBytes),
		)
	}

	bodyBytes, err := io.ReadAll(io.LimitReader(resp.Body, maxResponseSize+1))
	if err != nil {
		return nil, err
	}
	if len(bodyBytes) > maxResponseSize {
		return nil, fmt.Errorf(i18n.T("openai_models_response_too_large"), c.GetName(), maxResponseSize)
	}

	var payload modelsResponse
	if err := json.Unmarshal(bodyBytes, &payload); err != nil {
		return nil, err
	}

	models := make([]string, 0, len(payload.Models))
	seen := make(map[string]struct{}, len(payload.Models))
	for _, model := range payload.Models {
		var value string
		switch {
		case model.InferenceName != "":
			value = model.InferenceName
		case model.Name != "":
			value = model.Name
		case model.UUID != "":
			value = model.UUID
		}
		if value == "" {
			continue
		}
		if _, ok := seen[value]; ok {
			continue
		}
		seen[value] = struct{}{}
		models = append(models, value)
	}
	return models, nil
}

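The model-name selection and deduplication at the end of fetchModelsFromControlPlane can be sketched standalone. This is a minimal sketch mirroring the logic above, with hypothetical helper names: inference_name wins, then name, then uuid; empty results are skipped, and a seen-map keeps the first occurrence while preserving order.

```go
package main

import "fmt"

// pickModelName mirrors the fallback order used when flattening
// control-plane model entries: inference_name, then name, then uuid.
func pickModelName(inferenceName, name, uuid string) string {
	switch {
	case inferenceName != "":
		return inferenceName
	case name != "":
		return name
	case uuid != "":
		return uuid
	}
	return ""
}

// dedupe mirrors the seen-map filtering: empties dropped,
// first occurrence wins, input order preserved.
func dedupe(values []string) []string {
	seen := make(map[string]struct{}, len(values))
	out := make([]string, 0, len(values))
	for _, v := range values {
		if v == "" {
			continue
		}
		if _, ok := seen[v]; ok {
			continue
		}
		seen[v] = struct{}{}
		out = append(out, v)
	}
	return out
}

func main() {
	names := []string{
		pickModelName("", "llama3", "uuid-1"), // falls back to name
		"llama3",                              // duplicate, dropped
		"gpt-oss",
	}
	fmt.Println(dedupe(names))
}
```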
@@ -10,9 +10,9 @@ import (
	"strings"

	"github.com/danielmiessler/fabric/internal/chat"
	"github.com/danielmiessler/fabric/internal/plugins"

	"github.com/danielmiessler/fabric/internal/domain"
	"github.com/danielmiessler/fabric/internal/plugins"
	"github.com/danielmiessler/fabric/internal/plugins/ai/geminicommon"
	"google.golang.org/genai"
)

@@ -29,10 +29,6 @@
)

const (
	citationHeader    = "\n\n## Sources\n\n"
	citationSeparator = "\n"
	citationFormat    = "- [%s](%s)"

	errInvalidLocationFormat = "invalid search location format %q: must be timezone (e.g., 'America/Los_Angeles') or language code (e.g., 'en-US')"
	locationSeparator        = "/"
	langCodeSeparator        = "_"
@@ -50,10 +46,7 @@ func NewClient() (ret *Client) {
	vendorName := "Gemini"
	ret = &Client{}

	ret.PluginBase = &plugins.PluginBase{
		Name:          vendorName,
		EnvNamePrefix: plugins.BuildEnvVariablePrefix(vendorName),
	}
	ret.PluginBase = plugins.NewVendorPluginBase(vendorName, nil)

	ret.ApiKey = ret.PluginBase.AddSetupQuestion("API key", true)

@@ -111,7 +104,7 @@ func (o *Client) Send(ctx context.Context, msgs []*chat.ChatCompletionMessage, o
	}

	// Convert messages to new SDK format
	contents := o.convertMessages(msgs)
	contents := geminicommon.ConvertMessages(msgs)

	cfg, err := o.buildGenerateContentConfig(opts)
	if err != nil {
@@ -125,7 +118,7 @@ func (o *Client) Send(ctx context.Context, msgs []*chat.ChatCompletionMessage, o
	}

	// Extract text from response
	ret = o.extractTextFromResponse(response)
	ret = geminicommon.ExtractTextWithCitations(response)
	return
}

@@ -142,7 +135,7 @@ func (o *Client) SendStream(msgs []*chat.ChatCompletionMessage, opts *domain.Cha
	}

	// Convert messages to new SDK format
	contents := o.convertMessages(msgs)
	contents := geminicommon.ConvertMessages(msgs)

	cfg, err := o.buildGenerateContentConfig(opts)
	if err != nil {
@@ -161,7 +154,7 @@ func (o *Client) SendStream(msgs []*chat.ChatCompletionMessage, opts *domain.Cha
		return err
	}

	text := o.extractTextFromResponse(response)
	text := geminicommon.ExtractTextWithCitations(response)
	if text != "" {
		channel <- domain.StreamUpdate{
			Type: domain.StreamTypeContent,
@@ -218,10 +211,14 @@ func parseThinkingConfig(level domain.ThinkingLevel) (*genai.ThinkingConfig, boo
func (o *Client) buildGenerateContentConfig(opts *domain.ChatOptions) (*genai.GenerateContentConfig, error) {
	temperature := float32(opts.Temperature)
	topP := float32(opts.TopP)
	var maxTokens int32
	if opts.MaxTokens > 0 {
		maxTokens = int32(opts.MaxTokens)
	}
	cfg := &genai.GenerateContentConfig{
		Temperature:     &temperature,
		TopP:            &topP,
		MaxOutputTokens: int32(opts.ModelContextLength),
		MaxOutputTokens: maxTokens,
	}

	if opts.Search {
@@ -452,113 +449,3 @@ func (o *Client) generateWAVFile(pcmData []byte) ([]byte, error) {

	return result, nil
}

// convertMessages converts fabric chat messages to genai Content format
func (o *Client) convertMessages(msgs []*chat.ChatCompletionMessage) []*genai.Content {
	var contents []*genai.Content

	for _, msg := range msgs {
		content := &genai.Content{Parts: []*genai.Part{}}

		switch msg.Role {
		case chat.ChatMessageRoleAssistant:
			content.Role = "model"
		case chat.ChatMessageRoleUser:
			content.Role = "user"
		case chat.ChatMessageRoleSystem, chat.ChatMessageRoleDeveloper, chat.ChatMessageRoleFunction, chat.ChatMessageRoleTool:
			// Gemini's API only accepts "user" and "model" roles.
			// Map all other roles to "user" to preserve instruction context.
			content.Role = "user"
		default:
			content.Role = "user"
		}

		if strings.TrimSpace(msg.Content) != "" {
			content.Parts = append(content.Parts, &genai.Part{Text: msg.Content})
		}

		// Handle multi-content messages (images, etc.)
		for _, part := range msg.MultiContent {
			switch part.Type {
			case chat.ChatMessagePartTypeText:
				content.Parts = append(content.Parts, &genai.Part{Text: part.Text})
			case chat.ChatMessagePartTypeImageURL:
				// TODO: Handle image URLs if needed
				// This would require downloading and converting to inline data
			}
		}

		contents = append(contents, content)
	}

	return contents
}

// extractTextFromResponse extracts text content from the response and appends
// any web citations in a standardized format.
func (o *Client) extractTextFromResponse(response *genai.GenerateContentResponse) string {
	if response == nil {
		return ""
	}

	text := o.extractTextParts(response)
	citations := o.extractCitations(response)
	if len(citations) > 0 {
		return text + citationHeader + strings.Join(citations, citationSeparator)
	}
	return text
}

func (o *Client) extractTextParts(response *genai.GenerateContentResponse) string {
	var builder strings.Builder
	for _, candidate := range response.Candidates {
		if candidate == nil || candidate.Content == nil {
			continue
		}
		for _, part := range candidate.Content.Parts {
			if part != nil && part.Text != "" {
				builder.WriteString(part.Text)
			}
		}
	}
	return builder.String()
}

func (o *Client) extractCitations(response *genai.GenerateContentResponse) []string {
	if response == nil || len(response.Candidates) == 0 {
		return nil
	}

	citationMap := make(map[string]bool)
	var citations []string
	for _, candidate := range response.Candidates {
		if candidate == nil || candidate.GroundingMetadata == nil {
			continue
		}
		chunks := candidate.GroundingMetadata.GroundingChunks
		if len(chunks) == 0 {
			continue
		}
		for _, chunk := range chunks {
			if chunk == nil || chunk.Web == nil {
				continue
			}
			uri := chunk.Web.URI
			title := chunk.Web.Title
			if uri == "" || title == "" {
				continue
			}
			var keyBuilder strings.Builder
			keyBuilder.WriteString(uri)
			keyBuilder.WriteByte('|')
			keyBuilder.WriteString(title)
			key := keyBuilder.String()
			if !citationMap[key] {
				citationMap[key] = true
				citationText := fmt.Sprintf(citationFormat, title, uri)
				citations = append(citations, citationText)
			}
		}
	}
	return citations
}

@@ -4,10 +4,10 @@ import (
	"strings"
	"testing"

	"google.golang.org/genai"

	"github.com/danielmiessler/fabric/internal/chat"
	"github.com/danielmiessler/fabric/internal/domain"
	"github.com/danielmiessler/fabric/internal/plugins/ai/geminicommon"
	"google.golang.org/genai"
)

// Test buildModelNameFull method
@@ -31,9 +31,8 @@ func TestBuildModelNameFull(t *testing.T) {
	}
}

// Test extractTextFromResponse method
// Test ExtractTextWithCitations from geminicommon
func TestExtractTextFromResponse(t *testing.T) {
	client := &Client{}
	response := &genai.GenerateContentResponse{
		Candidates: []*genai.Candidate{
			{
@@ -48,7 +47,7 @@
	}
	expected := "Hello, world!"

	result := client.extractTextFromResponse(response)
	result := geminicommon.ExtractTextWithCitations(response)

	if result != expected {
		t.Errorf("Expected %v, got %v", expected, result)
@@ -56,14 +55,12 @@
}

func TestExtractTextFromResponse_Nil(t *testing.T) {
	client := &Client{}
	if got := client.extractTextFromResponse(nil); got != "" {
	if got := geminicommon.ExtractTextWithCitations(nil); got != "" {
		t.Fatalf("expected empty string, got %q", got)
	}
}

func TestExtractTextFromResponse_EmptyGroundingChunks(t *testing.T) {
	client := &Client{}
	response := &genai.GenerateContentResponse{
		Candidates: []*genai.Candidate{
			{
@@ -72,7 +69,7 @@
			},
		},
	}
	if got := client.extractTextFromResponse(response); got != "Hello" {
	if got := geminicommon.ExtractTextWithCitations(response); got != "Hello" {
		t.Fatalf("expected 'Hello', got %q", got)
	}
}
@@ -162,7 +159,6 @@ func TestBuildGenerateContentConfig_ThinkingTokens(t *testing.T) {
}

func TestCitationFormatting(t *testing.T) {
	client := &Client{}
	response := &genai.GenerateContentResponse{
		Candidates: []*genai.Candidate{
			{
@@ -178,7 +174,7 @@
		},
	}

	result := client.extractTextFromResponse(response)
	result := geminicommon.ExtractTextWithCitations(response)
	if !strings.Contains(result, "## Sources") {
		t.Fatalf("expected sources section in result: %s", result)
	}
@@ -189,14 +185,13 @@

// Test convertMessages handles role mapping correctly
func TestConvertMessagesRoles(t *testing.T) {
	client := &Client{}
	msgs := []*chat.ChatCompletionMessage{
		{Role: chat.ChatMessageRoleUser, Content: "user"},
		{Role: chat.ChatMessageRoleAssistant, Content: "assistant"},
		{Role: chat.ChatMessageRoleSystem, Content: "system"},
	}

	contents := client.convertMessages(msgs)
	contents := geminicommon.ConvertMessages(msgs)

	expected := []string{"user", "model", "user"}

130  internal/plugins/ai/geminicommon/geminicommon.go  Normal file
@@ -0,0 +1,130 @@
// Package geminicommon provides shared utilities for Gemini API integrations.
// Used by both the standalone Gemini provider (API key auth) and VertexAI provider (ADC auth).
package geminicommon

import (
	"fmt"
	"strings"

	"github.com/danielmiessler/fabric/internal/chat"
	"google.golang.org/genai"
)

// Citation formatting constants
const (
	CitationHeader    = "\n\n## Sources\n\n"
	CitationSeparator = "\n"
	CitationFormat    = "- [%s](%s)"
)

// ConvertMessages converts fabric chat messages to genai Content format.
// Gemini's API only accepts "user" and "model" roles, so other roles are mapped to "user".
func ConvertMessages(msgs []*chat.ChatCompletionMessage) []*genai.Content {
	var contents []*genai.Content

	for _, msg := range msgs {
		content := &genai.Content{Parts: []*genai.Part{}}

		switch msg.Role {
		case chat.ChatMessageRoleAssistant:
			content.Role = "model"
		case chat.ChatMessageRoleUser:
			content.Role = "user"
		case chat.ChatMessageRoleSystem, chat.ChatMessageRoleDeveloper, chat.ChatMessageRoleFunction, chat.ChatMessageRoleTool:
			// Gemini's API only accepts "user" and "model" roles.
			// Map all other roles to "user" to preserve instruction context.
			content.Role = "user"
		default:
			content.Role = "user"
		}

		if strings.TrimSpace(msg.Content) != "" {
			content.Parts = append(content.Parts, &genai.Part{Text: msg.Content})
		}

		// Handle multi-content messages (images, etc.)
		for _, part := range msg.MultiContent {
			switch part.Type {
			case chat.ChatMessagePartTypeText:
				content.Parts = append(content.Parts, &genai.Part{Text: part.Text})
			case chat.ChatMessagePartTypeImageURL:
				// TODO: Handle image URLs if needed
				// This would require downloading and converting to inline data
			}
		}

		contents = append(contents, content)
	}

	return contents
}

// ExtractText extracts just the text parts from a Gemini response.
func ExtractText(response *genai.GenerateContentResponse) string {
	if response == nil {
		return ""
	}

	var builder strings.Builder
	for _, candidate := range response.Candidates {
		if candidate == nil || candidate.Content == nil {
			continue
		}
		for _, part := range candidate.Content.Parts {
			if part != nil && part.Text != "" {
				builder.WriteString(part.Text)
			}
		}
	}
	return builder.String()
}

// ExtractTextWithCitations extracts text content from the response and appends
// any web citations in a standardized format.
func ExtractTextWithCitations(response *genai.GenerateContentResponse) string {
	if response == nil {
		return ""
	}

	text := ExtractText(response)
	citations := ExtractCitations(response)
	if len(citations) > 0 {
		return text + CitationHeader + strings.Join(citations, CitationSeparator)
	}
	return text
}

// ExtractCitations extracts web citations from grounding metadata.
func ExtractCitations(response *genai.GenerateContentResponse) []string {
	if response == nil || len(response.Candidates) == 0 {
		return nil
	}

	citationMap := make(map[string]bool)
	var citations []string
	for _, candidate := range response.Candidates {
		if candidate == nil || candidate.GroundingMetadata == nil {
			continue
		}
		chunks := candidate.GroundingMetadata.GroundingChunks
		if len(chunks) == 0 {
			continue
		}
		for _, chunk := range chunks {
			if chunk == nil || chunk.Web == nil {
				continue
			}
			uri := chunk.Web.URI
			title := chunk.Web.Title
			if uri == "" || title == "" {
				continue
			}
			key := uri + "|" + title
			if !citationMap[key] {
				citationMap[key] = true
				citations = append(citations, fmt.Sprintf(CitationFormat, title, uri))
			}
		}
	}
	return citations
}

@@ -27,11 +27,7 @@ func NewClientCompatible(vendorName string, defaultBaseUrl string, configureCust
	if configureCustom == nil {
		configureCustom = ret.configure
	}
	ret.PluginBase = &plugins.PluginBase{
		Name:            vendorName,
		EnvNamePrefix:   plugins.BuildEnvVariablePrefix(vendorName),
		ConfigureCustom: configureCustom,
	}
	ret.PluginBase = plugins.NewVendorPluginBase(vendorName, configureCustom)
	ret.ApiUrl = ret.AddSetupQuestionCustom("API URL", true,
		fmt.Sprintf("Enter your %v URL (as a reminder, it is usually %v)", vendorName, defaultBaseUrl))
	return
@@ -24,11 +24,7 @@ func NewClient() (ret *Client) {
	vendorName := "Ollama"
	ret = &Client{}

	ret.PluginBase = &plugins.PluginBase{
		Name:            vendorName,
		EnvNamePrefix:   plugins.BuildEnvVariablePrefix(vendorName),
		ConfigureCustom: ret.configure,
	}
	ret.PluginBase = plugins.NewVendorPluginBase(vendorName, ret.configure)

	ret.ApiUrl = ret.AddSetupQuestionCustom("API URL", true,
		"Enter your Ollama URL (as a reminder, it is usually http://localhost:11434)")
@@ -52,11 +52,7 @@ func NewClientCompatibleNoSetupQuestions(vendorName string, configureCustom func
		configureCustom = ret.configure
	}

	ret.PluginBase = &plugins.PluginBase{
		Name:            vendorName,
		EnvNamePrefix:   plugins.BuildEnvVariablePrefix(vendorName),
		ConfigureCustom: configureCustom,
	}
	ret.PluginBase = plugins.NewVendorPluginBase(vendorName, configureCustom)

	return
}
@@ -10,12 +10,20 @@ import (
	"slices"
	"sort"
	"strings"
	"sync"

	debuglog "github.com/danielmiessler/fabric/internal/log"

	openai "github.com/openai/openai-go"
)

// transcriptionResult holds the result of a single chunk transcription.
type transcriptionResult struct {
	index int
	text  string
	err   error
}

// MaxAudioFileSize defines the maximum allowed size for audio uploads (25MB).
const MaxAudioFileSize int64 = 25 * 1024 * 1024
@@ -73,27 +81,56 @@ func (o *Client) TranscribeFile(ctx context.Context, filePath, model string, spl
		files = []string{filePath}
	}

	var builder strings.Builder
	resultsChan := make(chan transcriptionResult, len(files))
	var wg sync.WaitGroup

	for i, f := range files {
		debuglog.Log("Using model %s to transcribe part %d (file name: %s)...\n", model, i+1, f)
		var chunk *os.File
		if chunk, err = os.Open(f); err != nil {
			return "", err
		}
		params := openai.AudioTranscriptionNewParams{
			File:  chunk,
			Model: openai.AudioModel(model),
		}
		var resp *openai.Transcription
		resp, err = o.ApiClient.Audio.Transcriptions.New(ctx, params)
		chunk.Close()
		if err != nil {
			return "", err
		wg.Add(1)
		go func(index int, filePath string) {
			defer wg.Done()
			debuglog.Log("Using model %s to transcribe part %d (file name: %s)...\n", model, index+1, filePath)

			chunk, openErr := os.Open(filePath)
			if openErr != nil {
				resultsChan <- transcriptionResult{index: index, err: openErr}
				return
			}
			defer chunk.Close()

			params := openai.AudioTranscriptionNewParams{
				File:  chunk,
				Model: openai.AudioModel(model),
			}
			resp, transcribeErr := o.ApiClient.Audio.Transcriptions.New(ctx, params)
			if transcribeErr != nil {
				resultsChan <- transcriptionResult{index: index, err: transcribeErr}
				return
			}
			resultsChan <- transcriptionResult{index: index, text: resp.Text}
		}(i, f)
	}

	wg.Wait()
	close(resultsChan)

	results := make([]transcriptionResult, 0, len(files))
	for result := range resultsChan {
		if result.err != nil {
			return "", result.err
		}
		results = append(results, result)
	}

	sort.Slice(results, func(i, j int) bool {
		return results[i].index < results[j].index
	})

	var builder strings.Builder
	for i, result := range results {
		if i > 0 {
			builder.WriteString(" ")
		}
		builder.WriteString(resp.Text)
		builder.WriteString(result.text)
	}

	return builder.String(), nil
@@ -206,6 +206,11 @@ var ProviderMap = map[string]ProviderConfig{
		ModelsURL:           "static:abacus", // Special marker for static model list
		ImplementsResponses: false,
	},
	"Mammouth": {
		Name:                "Mammouth",
		BaseURL:             "https://api.mammouth.ai/v1",
		ImplementsResponses: false,
	},
}

// GetProviderByName returns the provider configuration for a given name with O(1) lookup
@@ -31,11 +31,7 @@ type Client struct {

func NewClient() *Client {
	c := &Client{}
	c.PluginBase = &plugins.PluginBase{
		Name:            providerName,
		EnvNamePrefix:   plugins.BuildEnvVariablePrefix(providerName),
		ConfigureCustom: c.Configure, // Assign the Configure method
	}
	c.PluginBase = plugins.NewVendorPluginBase(providerName, c.Configure)
	c.APIKey = c.AddSetupQuestion("API_KEY", true)
	return c
}

237 internal/plugins/ai/vertexai/models.go (Normal file)
@@ -0,0 +1,237 @@
package vertexai

import (
	"context"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
	"sort"
	"strings"

	debuglog "github.com/danielmiessler/fabric/internal/log"
)

const (
	// API limits
	maxResponseSize    = 10 * 1024 * 1024 // 10MB
	errorResponseLimit = 1024             // 1KB for error messages

	// Default region for Model Garden API (global doesn't work for this endpoint)
	defaultModelGardenRegion = "us-central1"
)

// Supported Model Garden publishers (others can be added when SDK support is implemented)
var publishers = []string{"google", "anthropic"}

// publisherModelsResponse represents the API response from publishers.models.list
type publisherModelsResponse struct {
	PublisherModels []publisherModel `json:"publisherModels"`
	NextPageToken   string           `json:"nextPageToken"`
}

// publisherModel represents a single model in the API response
type publisherModel struct {
	Name string `json:"name"` // Format: publishers/{publisher}/models/{model}
}

// fetchModelsPage makes a single API request and returns the parsed response.
// Extracted to ensure proper cleanup of HTTP response bodies in pagination loops.
func fetchModelsPage(ctx context.Context, httpClient *http.Client, url, projectID, publisher string) (*publisherModelsResponse, error) {
	req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
	if err != nil {
		return nil, fmt.Errorf("failed to create request: %w", err)
	}

	req.Header.Set("Accept", "application/json")
	// Set quota project header required by Vertex AI API
	req.Header.Set("x-goog-user-project", projectID)

	resp, err := httpClient.Do(req)
	if err != nil {
		return nil, fmt.Errorf("request failed: %w", err)
	}
	defer resp.Body.Close()

	if resp.StatusCode != http.StatusOK {
		bodyBytes, _ := io.ReadAll(io.LimitReader(resp.Body, errorResponseLimit))
		debuglog.Debug(debuglog.Basic, "API error for %s: status %d, url: %s, body: %s\n", publisher, resp.StatusCode, url, string(bodyBytes))
		return nil, fmt.Errorf("API returned status %d: %s", resp.StatusCode, string(bodyBytes))
	}

	bodyBytes, err := io.ReadAll(io.LimitReader(resp.Body, maxResponseSize+1))
	if err != nil {
		return nil, fmt.Errorf("failed to read response: %w", err)
	}

	if len(bodyBytes) > maxResponseSize {
		return nil, fmt.Errorf("response too large (>%d bytes)", maxResponseSize)
	}

	var response publisherModelsResponse
	if err := json.Unmarshal(bodyBytes, &response); err != nil {
		return nil, fmt.Errorf("failed to parse response: %w", err)
	}

	return &response, nil
}

// listPublisherModels fetches models from a specific publisher via the Model Garden API
func listPublisherModels(ctx context.Context, httpClient *http.Client, region, projectID, publisher string) ([]string, error) {
	// Use default region if global or empty (Model Garden API requires a specific region)
	if region == "" || region == "global" {
		region = defaultModelGardenRegion
	}

	baseURL := fmt.Sprintf("https://%s-aiplatform.googleapis.com/v1beta1/publishers/%s/models", region, publisher)

	var allModels []string
	pageToken := ""

	for {
		url := baseURL
		if pageToken != "" {
			url = fmt.Sprintf("%s?pageToken=%s", baseURL, pageToken)
		}

		response, err := fetchModelsPage(ctx, httpClient, url, projectID, publisher)
		if err != nil {
			return nil, err
		}

		// Extract model names, stripping the publishers/{publisher}/models/ prefix
		for _, model := range response.PublisherModels {
			modelName := extractModelName(model.Name)
			if modelName != "" {
				allModels = append(allModels, modelName)
			}
		}

		// Check for more pages
		if response.NextPageToken == "" {
			break
		}
		pageToken = response.NextPageToken
	}

	debuglog.Debug(debuglog.Detailed, "Listed %d models from publisher %s\n", len(allModels), publisher)
	return allModels, nil
}
// extractModelName extracts the model name from the full resource path
// Input: "publishers/google/models/gemini-2.0-flash"
// Output: "gemini-2.0-flash"
func extractModelName(fullName string) string {
	parts := strings.Split(fullName, "/")
	if len(parts) >= 4 && parts[0] == "publishers" && parts[2] == "models" {
		return parts[3]
	}
	// Fallback: return the last segment
	if len(parts) > 0 {
		return parts[len(parts)-1]
	}
	return fullName
}

// sortModels sorts models by priority: Gemini > Claude > Others
// Within each group, models are sorted alphabetically
func sortModels(models []string) []string {
	sort.Slice(models, func(i, j int) bool {
		pi := modelPriority(models[i])
		pj := modelPriority(models[j])
		if pi != pj {
			return pi < pj
		}
		// Same priority: sort alphabetically (case-insensitive)
		return strings.ToLower(models[i]) < strings.ToLower(models[j])
	})
	return models
}

// modelPriority returns the sort priority for a model (lower = higher priority)
func modelPriority(model string) int {
	lower := strings.ToLower(model)
	switch {
	case strings.HasPrefix(lower, "gemini"):
		return 1
	case strings.HasPrefix(lower, "claude"):
		return 2
	default:
		return 3
	}
}
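Taken together, the path-parsing and priority helpers above turn raw resource paths into a stable, priority-ordered model menu. A standalone sketch with the same logic reimplemented for illustration:

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// extractName mirrors the resource-path parsing above:
// "publishers/google/models/gemini-2.0-flash" -> "gemini-2.0-flash".
func extractName(full string) string {
	parts := strings.Split(full, "/")
	if len(parts) >= 4 && parts[0] == "publishers" && parts[2] == "models" {
		return parts[3]
	}
	return parts[len(parts)-1] // strings.Split never returns an empty slice
}

// priority mirrors modelPriority: Gemini (1) before Claude (2) before others (3).
func priority(m string) int {
	l := strings.ToLower(m)
	switch {
	case strings.HasPrefix(l, "gemini"):
		return 1
	case strings.HasPrefix(l, "claude"):
		return 2
	default:
		return 3
	}
}

func main() {
	paths := []string{
		"publishers/anthropic/models/claude-opus-4",
		"publishers/google/models/gemini-2.0-flash",
		"publishers/meta/models/llama-3.1-405b",
	}
	names := make([]string, len(paths))
	for i, p := range paths {
		names[i] = extractName(p)
	}
	sort.Slice(names, func(i, j int) bool {
		pi, pj := priority(names[i]), priority(names[j])
		if pi != pj {
			return pi < pj
		}
		return strings.ToLower(names[i]) < strings.ToLower(names[j])
	})
	fmt.Println(names) // [gemini-2.0-flash claude-opus-4 llama-3.1-405b]
}
```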
// knownGeminiModels is a curated list of Gemini models available on Vertex AI.
// Vertex AI doesn't provide a list API for Gemini models - they must be known ahead of time.
// This list is based on Google Cloud documentation as of January 2025.
// See: https://docs.cloud.google.com/vertex-ai/generative-ai/docs/models
var knownGeminiModels = []string{
	// Gemini 3 (Preview)
	"gemini-3-pro-preview",
	"gemini-3-flash-preview",
	// Gemini 2.5 (GA)
	"gemini-2.5-pro",
	"gemini-2.5-flash",
	"gemini-2.5-flash-lite",
	// Gemini 2.0 (GA)
	"gemini-2.0-flash",
	"gemini-2.0-flash-lite",
}

// getKnownGeminiModels returns the curated list of Gemini models available on Vertex AI.
// Unlike third-party models which can be listed via the Model Garden API,
// Gemini models must be known ahead of time as there's no list endpoint for them.
func getKnownGeminiModels() []string {
	return knownGeminiModels
}

// isGeminiModel returns true if the model is a Gemini model
func isGeminiModel(modelName string) bool {
	return strings.HasPrefix(strings.ToLower(modelName), "gemini")
}

// isConversationalModel returns true if the model is suitable for text generation/chat
// Filters out image generation, embeddings, and other non-conversational models
func isConversationalModel(modelName string) bool {
	lower := strings.ToLower(modelName)

	// Exclude patterns for non-conversational models
	excludePatterns := []string{
		"imagen", // Image generation models
		"imagegeneration",
		"imagetext",
		"image-segmentation",
		"embedding", // Embedding models
		"textembedding",
		"multimodalembedding",
		"text-bison", // Legacy completion models (not chat)
		"text-unicorn",
		"code-bison", // Legacy code models
		"code-gecko",
		"codechat-bison", // Deprecated chat model
		"chat-bison",     // Deprecated chat model
		"veo",            // Video generation
		"chirp",          // Audio/speech models
		"medlm",          // Medical models (restricted)
		"medical",
	}

	for _, pattern := range excludePatterns {
		if strings.Contains(lower, pattern) {
			return false
		}
	}

	return true
}

// filterConversationalModels returns only models suitable for text generation/chat
func filterConversationalModels(models []string) []string {
	var filtered []string
	for _, model := range models {
		if isConversationalModel(model) {
			filtered = append(filtered, model)
		}
	}
	return filtered
}
@@ -9,13 +9,18 @@ import (
	"github.com/anthropics/anthropic-sdk-go/vertex"
	"github.com/danielmiessler/fabric/internal/chat"
	"github.com/danielmiessler/fabric/internal/domain"
	debuglog "github.com/danielmiessler/fabric/internal/log"
	"github.com/danielmiessler/fabric/internal/plugins"
	"github.com/danielmiessler/fabric/internal/plugins/ai/geminicommon"
	"golang.org/x/oauth2"
	"golang.org/x/oauth2/google"
	"google.golang.org/genai"
)

const (
	cloudPlatformScope = "https://www.googleapis.com/auth/cloud-platform"
	defaultRegion      = "global"
	maxTokens          = 4096
	defaultMaxTokens   = 4096
)

// NewClient creates a new Vertex AI client for accessing Claude models via Google Cloud
@@ -23,11 +28,7 @@ func NewClient() (ret *Client) {
	vendorName := "VertexAI"
	ret = &Client{}

	ret.PluginBase = &plugins.PluginBase{
		Name:            vendorName,
		EnvNamePrefix:   plugins.BuildEnvVariablePrefix(vendorName),
		ConfigureCustom: ret.configure,
	}
	ret.PluginBase = plugins.NewVendorPluginBase(vendorName, ret.configure)

	ret.ProjectID = ret.AddSetupQuestion("Project ID", true)
	ret.Region = ret.AddSetupQuestion("Region", false)
@@ -59,17 +60,78 @@ func (c *Client) configure() error {
}

func (c *Client) ListModels() ([]string, error) {
	// Return Claude models available on Vertex AI
	return []string{
		string(anthropic.ModelClaudeSonnet4_5),
		string(anthropic.ModelClaudeOpus4_5),
		string(anthropic.ModelClaudeHaiku4_5),
		string(anthropic.ModelClaude3_7SonnetLatest),
		string(anthropic.ModelClaude3_5HaikuLatest),
	}, nil
	ctx := context.Background()

	// Get ADC credentials for API authentication
	creds, err := google.FindDefaultCredentials(ctx, cloudPlatformScope)
	if err != nil {
		return nil, fmt.Errorf("failed to get Google credentials (ensure ADC is configured): %w", err)
	}
	httpClient := oauth2.NewClient(ctx, creds.TokenSource)

	// Query all publishers in parallel for better performance
	type result struct {
		models    []string
		err       error
		publisher string
	}
	// +1 for known Gemini models (no API to list them)
	results := make(chan result, len(publishers)+1)

	// Query Model Garden API for third-party models
	for _, pub := range publishers {
		go func(publisher string) {
			models, err := listPublisherModels(ctx, httpClient, c.Region.Value, c.ProjectID.Value, publisher)
			results <- result{models: models, err: err, publisher: publisher}
		}(pub)
	}

	// Add known Gemini models (Vertex AI doesn't have a list API for Gemini)
	go func() {
		results <- result{models: getKnownGeminiModels(), err: nil, publisher: "gemini"}
	}()

	// Collect results from all sources
	var allModels []string
	for range len(publishers) + 1 {
		r := <-results
		if r.err != nil {
			// Log warning but continue - some sources may not be available
			debuglog.Debug(debuglog.Basic, "Failed to list %s models: %v\n", r.publisher, r.err)
			continue
		}
		allModels = append(allModels, r.models...)
	}

	if len(allModels) == 0 {
		return nil, fmt.Errorf("no models found from any publisher")
	}

	// Filter to only conversational models and sort
	filtered := filterConversationalModels(allModels)
	if len(filtered) == 0 {
		return nil, fmt.Errorf("no conversational models found")
	}

	return sortModels(filtered), nil
}

func (c *Client) Send(ctx context.Context, msgs []*chat.ChatCompletionMessage, opts *domain.ChatOptions) (string, error) {
	if isGeminiModel(opts.Model) {
		return c.sendGemini(ctx, msgs, opts)
	}
	return c.sendClaude(ctx, msgs, opts)
}

// getMaxTokens returns the max output tokens to use for a request
func getMaxTokens(opts *domain.ChatOptions) int64 {
	if opts.MaxTokens > 0 {
		return int64(opts.MaxTokens)
	}
	return int64(defaultMaxTokens)
}

func (c *Client) sendClaude(ctx context.Context, msgs []*chat.ChatCompletionMessage, opts *domain.ChatOptions) (string, error) {
	if c.client == nil {
		return "", fmt.Errorf("VertexAI client not initialized")
	}
@@ -80,14 +142,22 @@ func (c *Client) Send(ctx context.Context, msgs []*chat.ChatCompletionMessage, o
		return "", fmt.Errorf("no valid messages to send")
	}

	// Create the request
	response, err := c.client.Messages.New(ctx, anthropic.MessageNewParams{
		Model:       anthropic.Model(opts.Model),
		MaxTokens:   int64(maxTokens),
		Messages:    anthropicMessages,
		Temperature: anthropic.Opt(opts.Temperature),
	})
	// Build request params
	params := anthropic.MessageNewParams{
		Model:     anthropic.Model(opts.Model),
		MaxTokens: getMaxTokens(opts),
		Messages:  anthropicMessages,
	}

	// Only set one of Temperature or TopP as some models don't allow both
	// (following anthropic.go pattern)
	if opts.TopP != domain.DefaultTopP {
		params.TopP = anthropic.Opt(opts.TopP)
	} else {
		params.Temperature = anthropic.Opt(opts.Temperature)
	}

	response, err := c.client.Messages.New(ctx, params)
	if err != nil {
		return "", err
	}
@@ -108,6 +178,13 @@ func (c *Client) Send(ctx context.Context, msgs []*chat.ChatCompletionMessage, o
}

func (c *Client) SendStream(msgs []*chat.ChatCompletionMessage, opts *domain.ChatOptions, channel chan domain.StreamUpdate) error {
	if isGeminiModel(opts.Model) {
		return c.sendStreamGemini(msgs, opts, channel)
	}
	return c.sendStreamClaude(msgs, opts, channel)
}

func (c *Client) sendStreamClaude(msgs []*chat.ChatCompletionMessage, opts *domain.ChatOptions, channel chan domain.StreamUpdate) error {
	if c.client == nil {
		close(channel)
		return fmt.Errorf("VertexAI client not initialized")
@@ -122,13 +199,22 @@ func (c *Client) SendStream(msgs []*chat.ChatCompletionMessage, opts *domain.Cha
		return fmt.Errorf("no valid messages to send")
	}

	// Build request params
	params := anthropic.MessageNewParams{
		Model:     anthropic.Model(opts.Model),
		MaxTokens: getMaxTokens(opts),
		Messages:  anthropicMessages,
	}

	// Only set one of Temperature or TopP as some models don't allow both
	if opts.TopP != domain.DefaultTopP {
		params.TopP = anthropic.Opt(opts.TopP)
	} else {
		params.Temperature = anthropic.Opt(opts.Temperature)
	}

	// Create streaming request
	stream := c.client.Messages.NewStreaming(ctx, anthropic.MessageNewParams{
		Model:       anthropic.Model(opts.Model),
		MaxTokens:   int64(maxTokens),
		Messages:    anthropicMessages,
		Temperature: anthropic.Opt(opts.Temperature),
	})
	stream := c.client.Messages.NewStreaming(ctx, params)

	// Process stream
	for stream.Next() {
@@ -167,6 +253,144 @@ func (c *Client) SendStream(msgs []*chat.ChatCompletionMessage, opts *domain.Cha
	return stream.Err()
}

// Gemini methods using genai SDK with Vertex AI backend

// getGeminiRegion returns the appropriate region for a Gemini model.
// Preview models are often only available on the global endpoint.
func (c *Client) getGeminiRegion(model string) string {
	if strings.Contains(strings.ToLower(model), "preview") {
		return "global"
	}
	return c.Region.Value
}

func (c *Client) sendGemini(ctx context.Context, msgs []*chat.ChatCompletionMessage, opts *domain.ChatOptions) (string, error) {
	client, err := genai.NewClient(ctx, &genai.ClientConfig{
		Project:  c.ProjectID.Value,
		Location: c.getGeminiRegion(opts.Model),
		Backend:  genai.BackendVertexAI,
	})
	if err != nil {
		return "", fmt.Errorf("failed to create Gemini client: %w", err)
	}

	contents := geminicommon.ConvertMessages(msgs)
	if len(contents) == 0 {
		return "", fmt.Errorf("no valid messages to send")
	}

	config := c.buildGeminiConfig(opts)

	response, err := client.Models.GenerateContent(ctx, opts.Model, contents, config)
	if err != nil {
		return "", err
	}

	return geminicommon.ExtractTextWithCitations(response), nil
}

// buildGeminiConfig creates the generation config for Gemini models
// following the gemini.go pattern for feature parity
func (c *Client) buildGeminiConfig(opts *domain.ChatOptions) *genai.GenerateContentConfig {
	temperature := float32(opts.Temperature)
	topP := float32(opts.TopP)
	config := &genai.GenerateContentConfig{
		Temperature:     &temperature,
		TopP:            &topP,
		MaxOutputTokens: int32(getMaxTokens(opts)),
	}

	// Add web search support
	if opts.Search {
		config.Tools = []*genai.Tool{{GoogleSearch: &genai.GoogleSearch{}}}
	}

	// Add thinking support
	if tc := parseGeminiThinking(opts.Thinking); tc != nil {
		config.ThinkingConfig = tc
	}

	return config
}

// parseGeminiThinking converts thinking level to Gemini thinking config
func parseGeminiThinking(level domain.ThinkingLevel) *genai.ThinkingConfig {
	lower := strings.ToLower(strings.TrimSpace(string(level)))
	switch domain.ThinkingLevel(lower) {
	case "", domain.ThinkingOff:
		return nil
	case domain.ThinkingLow, domain.ThinkingMedium, domain.ThinkingHigh:
		if budget, ok := domain.ThinkingBudgets[domain.ThinkingLevel(lower)]; ok {
			b := int32(budget)
			return &genai.ThinkingConfig{IncludeThoughts: true, ThinkingBudget: &b}
		}
	default:
		// Try parsing as integer token count
		var tokens int
		if _, err := fmt.Sscanf(lower, "%d", &tokens); err == nil && tokens > 0 {
			t := int32(tokens)
			return &genai.ThinkingConfig{IncludeThoughts: true, ThinkingBudget: &t}
		}
	}
	return nil
}
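`parseGeminiThinking` accepts both named levels and raw token counts, falling back to `fmt.Sscanf` for the integer path. A hedged standalone sketch of that dispatch (the budget values here are illustrative stand-ins, not Fabric's `domain.ThinkingBudgets`):

```go
package main

import (
	"fmt"
	"strings"
)

// Illustrative stand-in for domain.ThinkingBudgets; the real values live in
// Fabric's domain package.
var budgets = map[string]int{"low": 1024, "medium": 4096, "high": 16384}

// parseBudget returns the thinking-token budget for a level string:
// a named level, a bare integer, or 0 for off/unrecognized input.
func parseBudget(level string) int {
	lower := strings.ToLower(strings.TrimSpace(level))
	switch lower {
	case "", "off":
		return 0
	}
	if b, ok := budgets[lower]; ok {
		return b
	}
	// Fallback: accept a raw token count such as "2048".
	var tokens int
	if _, err := fmt.Sscanf(lower, "%d", &tokens); err == nil && tokens > 0 {
		return tokens
	}
	return 0
}

func main() {
	fmt.Println(parseBudget("medium")) // 4096
	fmt.Println(parseBudget("2048"))   // 2048
	fmt.Println(parseBudget("off"))    // 0
}
```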
func (c *Client) sendStreamGemini(msgs []*chat.ChatCompletionMessage, opts *domain.ChatOptions, channel chan domain.StreamUpdate) error {
	defer close(channel)
	ctx := context.Background()

	client, err := genai.NewClient(ctx, &genai.ClientConfig{
		Project:  c.ProjectID.Value,
		Location: c.getGeminiRegion(opts.Model),
		Backend:  genai.BackendVertexAI,
	})
	if err != nil {
		return fmt.Errorf("failed to create Gemini client: %w", err)
	}

	contents := geminicommon.ConvertMessages(msgs)
	if len(contents) == 0 {
		return fmt.Errorf("no valid messages to send")
	}

	config := c.buildGeminiConfig(opts)

	stream := client.Models.GenerateContentStream(ctx, opts.Model, contents, config)

	for response, err := range stream {
		if err != nil {
			channel <- domain.StreamUpdate{
				Type:    domain.StreamTypeError,
				Content: fmt.Sprintf("Error: %v", err),
			}
			return err
		}

		text := geminicommon.ExtractText(response)
		if text != "" {
			channel <- domain.StreamUpdate{
				Type:    domain.StreamTypeContent,
				Content: text,
			}
		}

		if response.UsageMetadata != nil {
			channel <- domain.StreamUpdate{
				Type: domain.StreamTypeUsage,
				Usage: &domain.UsageMetadata{
					InputTokens:  int(response.UsageMetadata.PromptTokenCount),
					OutputTokens: int(response.UsageMetadata.CandidatesTokenCount),
					TotalTokens:  int(response.UsageMetadata.TotalTokenCount),
				},
			}
		}
	}

	return nil
}

// Claude message conversion

func (c *Client) toMessages(msgs []*chat.ChatCompletionMessage) []anthropic.MessageParam {
	// Convert messages to Anthropic format with proper role handling
	// - System messages become part of the first user message
442
internal/plugins/ai/vertexai/vertexai_test.go
Normal file
442
internal/plugins/ai/vertexai/vertexai_test.go
Normal file
@@ -0,0 +1,442 @@
|
||||
package vertexai
|
||||
|
||||
import (
|
||||
"encoding/json"
|
||||
"net/http"
|
||||
"net/http/httptest"
|
||||
"testing"
|
||||
|
||||
"github.com/danielmiessler/fabric/internal/domain"
|
||||
"github.com/stretchr/testify/assert"
|
||||
"github.com/stretchr/testify/require"
|
||||
)
|
||||
|
||||
func TestExtractModelName(t *testing.T) {
|
||||
tests := []struct {
|
||||
name string
|
||||
input string
|
||||
expected string
|
||||
}{
|
||||
{
|
||||
name: "standard format",
|
||||
input: "publishers/google/models/gemini-2.0-flash",
|
||||
expected: "gemini-2.0-flash",
|
||||
},
|
||||
{
|
||||
name: "anthropic model",
|
||||
input: "publishers/anthropic/models/claude-sonnet-4-5",
|
||||
expected: "claude-sonnet-4-5",
|
||||
},
|
||||
{
|
||||
name: "model with version",
|
||||
input: "publishers/anthropic/models/claude-3-opus@20240229",
|
||||
expected: "claude-3-opus@20240229",
|
||||
},
|
||||
{
|
||||
name: "just model name",
|
||||
input: "gemini-pro",
|
||||
expected: "gemini-pro",
|
||||
},
|
||||
{
|
||||
name: "empty string",
|
||||
input: "",
|
||||
expected: "",
|
||||
},
|
||||
}
|
||||
|
||||
for _, tt := range tests {
|
||||
t.Run(tt.name, func(t *testing.T) {
|
||||
result := extractModelName(tt.input)
|
||||
assert.Equal(t, tt.expected, result)
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func TestSortModels(t *testing.T) {
|
||||
input := []string{
|
||||
"claude-sonnet-4-5",
|
||||
"gemini-2.0-flash",
|
||||
"gemini-pro",
|
||||
"claude-opus-4",
|
||||
"unknown-model",
|
||||
}
|
||||
|
||||
result := sortModels(input)
|
||||
|
||||
// Verify order: Gemini first, then Claude, then others (alphabetically within each group)
|
||||
expected := []string{
|
||||
"gemini-2.0-flash",
|
||||
"gemini-pro",
|
||||
"claude-opus-4",
|
||||
"claude-sonnet-4-5",
|
||||
"unknown-model",
|
||||
}
|
||||
|
||||
assert.Equal(t, expected, result)
|
||||
}
|
||||
|
||||
func TestModelPriority(t *testing.T) {
|
||||
tests := []struct {
|
||||
model string
|
||||
priority int
|
||||
}{
|
||||
{"gemini-2.0-flash", 1},
|
||||
{"Gemini-Pro", 1},
|
||||
{"claude-sonnet-4-5", 2},
|
||||
{"CLAUDE-OPUS", 2},
|
||||
{"some-other-model", 3},
|
||||
}
|
||||
|
||||
for _, tt := range tests {
|
||||
t.Run(tt.model, func(t *testing.T) {
|
||||
result := modelPriority(tt.model)
|
||||
assert.Equal(t, tt.priority, result, "priority for %s", tt.model)
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func TestListPublisherModels_Success(t *testing.T) {
|
||||
// Create mock server
|
||||
server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
|
||||
assert.Equal(t, http.MethodGet, r.Method)
|
||||
assert.Contains(t, r.URL.Path, "/v1/publishers/google/models")
|
||||
|
||||
response := publisherModelsResponse{
|
||||
PublisherModels: []publisherModel{
|
||||
{Name: "publishers/google/models/gemini-2.0-flash"},
|
||||
{Name: "publishers/google/models/gemini-pro"},
|
||||
},
|
||||
}
|
||||
w.Header().Set("Content-Type", "application/json")
|
||||
json.NewEncoder(w).Encode(response)
|
||||
}))
|
||||
defer server.Close()
|
||||
|
||||
// Note: This test would need to mock the actual API endpoint
|
||||
// For now, we just verify the mock server works
|
||||
resp, err := http.Get(server.URL + "/v1/publishers/google/models")
|
||||
require.NoError(t, err)
|
||||
defer resp.Body.Close()
|
||||
|
||||
var response publisherModelsResponse
|
||||
err = json.NewDecoder(resp.Body).Decode(&response)
|
||||
require.NoError(t, err)
|
||||
|
||||
assert.Len(t, response.PublisherModels, 2)
|
||||
assert.Equal(t, "publishers/google/models/gemini-2.0-flash", response.PublisherModels[0].Name)
|
||||
}

func TestListPublisherModels_Pagination(t *testing.T) {
	callCount := 0

	server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		callCount++

		var response publisherModelsResponse
		if callCount == 1 {
			response = publisherModelsResponse{
				PublisherModels: []publisherModel{
					{Name: "publishers/google/models/gemini-flash"},
				},
				NextPageToken: "page2",
			}
		} else {
			response = publisherModelsResponse{
				PublisherModels: []publisherModel{
					{Name: "publishers/google/models/gemini-pro"},
				},
				NextPageToken: "",
			}
		}

		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(response)
	}))
	defer server.Close()

	// Verify the server handles pagination correctly
	resp, err := http.Get(server.URL + "/page1")
	require.NoError(t, err)
	resp.Body.Close()

	resp, err = http.Get(server.URL + "/page2")
	require.NoError(t, err)
	resp.Body.Close()

	assert.Equal(t, 2, callCount)
}

func TestListPublisherModels_ErrorResponse(t *testing.T) {
	server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusForbidden)
		w.Write([]byte(`{"error": "access denied"}`))
	}))
	defer server.Close()

	resp, err := http.Get(server.URL + "/v1/publishers/google/models")
	require.NoError(t, err)
	defer resp.Body.Close()

	assert.Equal(t, http.StatusForbidden, resp.StatusCode)
}

func TestNewClient(t *testing.T) {
	client := NewClient()

	assert.NotNil(t, client)
	assert.Equal(t, "VertexAI", client.Name)
	assert.NotNil(t, client.ProjectID)
	assert.NotNil(t, client.Region)
	assert.Equal(t, "global", client.Region.Value)
}

func TestPublishersListComplete(t *testing.T) {
	// Verify supported publishers are in the list
	expectedPublishers := []string{"google", "anthropic"}

	assert.Equal(t, expectedPublishers, publishers)
}

func TestIsConversationalModel(t *testing.T) {
	tests := []struct {
		model    string
		expected bool
	}{
		// Conversational models (should return true)
		{"gemini-2.0-flash", true},
		{"gemini-2.5-pro", true},
		{"claude-sonnet-4-5", true},
		{"claude-opus-4", true},
		{"deepseek-v3", true},
		{"llama-3.1-405b", true},
		{"mistral-large", true},

		// Non-conversational models (should return false)
		{"imagen-3.0-capability-002", false},
		{"imagen-4.0-fast-generate-001", false},
		{"imagegeneration", false},
		{"imagetext", false},
		{"image-segmentation-001", false},
		{"textembedding-gecko", false},
		{"multimodalembedding", false},
		{"text-embedding-004", false},
		{"text-bison", false},
		{"text-unicorn", false},
		{"code-bison", false},
		{"code-gecko", false},
		{"codechat-bison", false},
		{"chat-bison", false},
		{"veo-001", false},
		{"chirp", false},
		{"medlm-medium", false},
	}

	for _, tt := range tests {
		t.Run(tt.model, func(t *testing.T) {
			result := isConversationalModel(tt.model)
			assert.Equal(t, tt.expected, result, "isConversationalModel(%s)", tt.model)
		})
	}
}

func TestFilterConversationalModels(t *testing.T) {
	input := []string{
		"gemini-2.0-flash",
		"imagen-3.0-capability-002",
		"claude-sonnet-4-5",
		"textembedding-gecko",
		"deepseek-v3",
		"chat-bison",
		"llama-3.1-405b",
		"code-bison",
	}

	result := filterConversationalModels(input)

	expected := []string{
		"gemini-2.0-flash",
		"claude-sonnet-4-5",
		"deepseek-v3",
		"llama-3.1-405b",
	}

	assert.Equal(t, expected, result)
}

func TestFilterConversationalModels_EmptyInput(t *testing.T) {
	result := filterConversationalModels([]string{})
	assert.Empty(t, result)
}

func TestFilterConversationalModels_AllFiltered(t *testing.T) {
	input := []string{
		"imagen-3.0",
		"textembedding-gecko",
		"chat-bison",
	}

	result := filterConversationalModels(input)
	assert.Empty(t, result)
}

func TestIsGeminiModel(t *testing.T) {
	tests := []struct {
		model    string
		expected bool
	}{
		{"gemini-2.5-pro", true},
		{"gemini-3-pro-preview", true},
		{"Gemini-2.0-flash", true},
		{"GEMINI-flash", true},
		{"claude-sonnet-4-5", false},
		{"claude-opus-4", false},
		{"deepseek-v3", false},
		{"llama-3.1-405b", false},
		{"", false},
	}

	for _, tt := range tests {
		t.Run(tt.model, func(t *testing.T) {
			result := isGeminiModel(tt.model)
			assert.Equal(t, tt.expected, result, "isGeminiModel(%s)", tt.model)
		})
	}
}

func TestGetMaxTokens(t *testing.T) {
	tests := []struct {
		name     string
		opts     *domain.ChatOptions
		expected int64
	}{
		{
			name:     "MaxTokens specified",
			opts:     &domain.ChatOptions{MaxTokens: 8192},
			expected: 8192,
		},
		{
			name:     "Default when MaxTokens is 0",
			opts:     &domain.ChatOptions{MaxTokens: 0},
			expected: int64(defaultMaxTokens),
		},
		{
			name:     "Default when MaxTokens is negative",
			opts:     &domain.ChatOptions{MaxTokens: -1},
			expected: int64(defaultMaxTokens),
		},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			result := getMaxTokens(tt.opts)
			assert.Equal(t, tt.expected, result)
		})
	}
}

func TestParseGeminiThinking(t *testing.T) {
	tests := []struct {
		name           string
		level          domain.ThinkingLevel
		expectNil      bool
		expectedBudget int32
	}{
		{
			name:      "empty string returns nil",
			level:     "",
			expectNil: true,
		},
		{
			name:      "off returns nil",
			level:     domain.ThinkingOff,
			expectNil: true,
		},
		{
			name:           "low thinking",
			level:          domain.ThinkingLow,
			expectNil:      false,
			expectedBudget: int32(domain.ThinkingBudgets[domain.ThinkingLow]),
		},
		{
			name:           "medium thinking",
			level:          domain.ThinkingMedium,
			expectNil:      false,
			expectedBudget: int32(domain.ThinkingBudgets[domain.ThinkingMedium]),
		},
		{
			name:           "high thinking",
			level:          domain.ThinkingHigh,
			expectNil:      false,
			expectedBudget: int32(domain.ThinkingBudgets[domain.ThinkingHigh]),
		},
		{
			name:           "numeric string",
			level:          "5000",
			expectNil:      false,
			expectedBudget: 5000,
		},
		{
			name:      "invalid string returns nil",
			level:     "invalid",
			expectNil: true,
		},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			result := parseGeminiThinking(tt.level)
			if tt.expectNil {
				assert.Nil(t, result)
			} else {
				require.NotNil(t, result)
				assert.True(t, result.IncludeThoughts)
				assert.Equal(t, tt.expectedBudget, *result.ThinkingBudget)
			}
		})
	}
}

func TestBuildGeminiConfig(t *testing.T) {
	client := &Client{}

	t.Run("basic config with temperature and TopP", func(t *testing.T) {
		opts := &domain.ChatOptions{
			Temperature: 0.7,
			TopP:        0.9,
			MaxTokens:   8192,
		}
		config := client.buildGeminiConfig(opts)

		assert.NotNil(t, config)
		assert.Equal(t, float32(0.7), *config.Temperature)
		assert.Equal(t, float32(0.9), *config.TopP)
		assert.Equal(t, int32(8192), config.MaxOutputTokens)
		assert.Nil(t, config.Tools)
		assert.Nil(t, config.ThinkingConfig)
	})

	t.Run("config with search enabled", func(t *testing.T) {
		opts := &domain.ChatOptions{
			Temperature: 0.5,
			TopP:        0.8,
			Search:      true,
		}
		config := client.buildGeminiConfig(opts)

		assert.NotNil(t, config.Tools)
		assert.Len(t, config.Tools, 1)
		assert.NotNil(t, config.Tools[0].GoogleSearch)
	})

	t.Run("config with thinking enabled", func(t *testing.T) {
		opts := &domain.ChatOptions{
			Temperature: 0.5,
			TopP:        0.8,
			Thinking:    domain.ThinkingHigh,
		}
		config := client.buildGeminiConfig(opts)

		assert.NotNil(t, config.ThinkingConfig)
		assert.True(t, config.ThinkingConfig.IncludeThoughts)
	})
}

@@ -36,6 +36,16 @@ func (o *PluginBase) GetName() string {
	return o.Name
}

// NewVendorPluginBase creates a standardized PluginBase for AI vendor plugins.
// This centralizes the common initialization pattern used by all vendors.
func NewVendorPluginBase(name string, configure func() error) *PluginBase {
	return &PluginBase{
		Name:            name,
		EnvNamePrefix:   BuildEnvVariablePrefix(name),
		ConfigureCustom: configure,
	}
}

func (o *PluginBase) GetSetupDescription() (ret string) {
	if ret = o.SetupDescription; ret == "" {
		ret = o.GetName()

@@ -8,6 +8,43 @@ import (
	"github.com/stretchr/testify/assert"
)

func TestNewVendorPluginBase(t *testing.T) {
	// Test with configure function
	configureCalled := false
	configureFunc := func() error {
		configureCalled = true
		return nil
	}

	plugin := NewVendorPluginBase("TestVendor", configureFunc)

	assert.Equal(t, "TestVendor", plugin.Name)
	assert.Equal(t, "TESTVENDOR_", plugin.EnvNamePrefix)
	assert.NotNil(t, plugin.ConfigureCustom)

	// Test that configure function is properly stored
	err := plugin.ConfigureCustom()
	assert.NoError(t, err)
	assert.True(t, configureCalled)
}

func TestNewVendorPluginBase_NilConfigure(t *testing.T) {
	// Test with nil configure function
	plugin := NewVendorPluginBase("TestVendor", nil)

	assert.Equal(t, "TestVendor", plugin.Name)
	assert.Equal(t, "TESTVENDOR_", plugin.EnvNamePrefix)
	assert.Nil(t, plugin.ConfigureCustom)
}

func TestNewVendorPluginBase_EnvPrefixWithSpaces(t *testing.T) {
	// Test that spaces are converted to underscores
	plugin := NewVendorPluginBase("LM Studio", nil)

	assert.Equal(t, "LM Studio", plugin.Name)
	assert.Equal(t, "LM_STUDIO_", plugin.EnvNamePrefix)
}

func TestConfigurable_AddSetting(t *testing.T) {
	conf := &PluginBase{
		Settings: Settings{},

@@ -145,6 +145,8 @@ func (h *ChatHandler) HandleChat(c *gin.Context) {
		FrequencyPenalty: request.FrequencyPenalty,
		PresencePenalty:  request.PresencePenalty,
		Thinking:         request.Thinking,
		Search:           request.Search,
		SearchLocation:   request.SearchLocation,
		UpdateChan:       streamChan,
		Quiet:            true,
	}

@@ -1 +1 @@
"1.4.370"
"1.4.380"

@@ -17,7 +17,7 @@
    "@skeletonlabs/skeleton": "^2.11.0",
    "@skeletonlabs/tw-plugin": "^0.3.1",
    "@sveltejs/adapter-auto": "^3.3.1",
    "@sveltejs/kit": "^2.21.1",
    "@sveltejs/kit": "^2.49.5",
    "@sveltejs/vite-plugin-svelte": "^3.1.2",
    "@tailwindcss/forms": "^0.5.10",
    "@tailwindcss/typography": "^0.5.16",

127 web/pnpm-lock.yaml (generated)

@@ -77,10 +77,10 @@ importers:
      version: 0.3.1(tailwindcss@3.4.17)
    '@sveltejs/adapter-auto':
      specifier: ^3.3.1
      version: 3.3.1(@sveltejs/kit@2.21.1(@sveltejs/vite-plugin-svelte@3.1.2(svelte@4.2.20)(vite@5.4.21(@types/node@20.17.50)))(svelte@4.2.20)(vite@5.4.21(@types/node@20.17.50)))
      version: 3.3.1(@sveltejs/kit@2.49.5(@sveltejs/vite-plugin-svelte@3.1.2(svelte@4.2.20)(vite@5.4.21(@types/node@20.17.50)))(svelte@4.2.20)(typescript@5.8.3)(vite@5.4.21(@types/node@20.17.50)))
    '@sveltejs/kit':
      specifier: ^2.21.1
      version: 2.21.1(@sveltejs/vite-plugin-svelte@3.1.2(svelte@4.2.20)(vite@5.4.21(@types/node@20.17.50)))(svelte@4.2.20)(vite@5.4.21(@types/node@20.17.50))
      specifier: ^2.49.5
      version: 2.49.5(@sveltejs/vite-plugin-svelte@3.1.2(svelte@4.2.20)(vite@5.4.21(@types/node@20.17.50)))(svelte@4.2.20)(typescript@5.8.3)(vite@5.4.21(@types/node@20.17.50))
    '@sveltejs/vite-plugin-svelte':
      specifier: ^3.1.2
      version: 3.1.2(svelte@4.2.20)(vite@5.4.21(@types/node@20.17.50))
@@ -317,8 +317,8 @@ packages:
    peerDependencies:
      eslint: ^6.0.0 || ^7.0.0 || >=8.0.0

  '@eslint-community/eslint-utils@4.9.0':
    resolution: {integrity: sha512-ayVFHdtZ+hsq1t2Dy24wCmGXGe4q9Gu3smhLYALJrr473ZH27MsnSL+LKUlimp4BWJqMDMLmPpx/Q9R3OAlL4g==}
  '@eslint-community/eslint-utils@4.9.1':
    resolution: {integrity: sha512-phrYmNiYppR7znFEdqgfWHXR6NCkZEK7hwWDHZUjit/2/U0r6XvkDl0SYnoM51Hq7FhCGdLDT6zxCCOY1hexsQ==}
    engines: {node: ^12.22.0 || ^14.17.0 || >=16.0.0}
    peerDependencies:
      eslint: ^6.0.0 || ^7.0.0 || >=8.0.0
@@ -403,6 +403,9 @@ packages:
  '@jridgewell/sourcemap-codec@1.5.0':
    resolution: {integrity: sha512-gv3ZRaISU3fjPAgNsriBRqGWQL6quFx04YMPW/zD8XMLsU32mhCCbfbO6KZFLjvYpCZ8zyDEgqsgf+PwPaM7GQ==}

  '@jridgewell/sourcemap-codec@1.5.5':
    resolution: {integrity: sha512-cYQ9310grqxueWbl+WuIUIaiUaDcj7WOq5fVhEljNVgRfOUhY9fy2zTvfoqWsnebh8Sl70VScFbICvJnLKB0Og==}

  '@jridgewell/trace-mapping@0.3.25':
    resolution: {integrity: sha512-vNk6aEwybGtawWmy/PzwnGDOjCkLWSD2wqvjGGAgOAwCGWySYXfYoxt00IJkTF+8Lb57DwOb3Aa0o9CApepiYQ==}

@@ -630,8 +633,11 @@ packages:
    peerDependencies:
      tailwindcss: '>=3.0.0'

  '@sveltejs/acorn-typescript@1.0.5':
    resolution: {integrity: sha512-IwQk4yfwLdibDlrXVE04jTZYlLnwsTT2PIOQQGNLWfjavGifnk1JD1LcZjZaBTRcxZu2FfPfNLOE04DSu9lqtQ==}
  '@standard-schema/spec@1.1.0':
    resolution: {integrity: sha512-l2aFy5jALhniG5HgqrD6jXLi/rUWrKvqN/qJx6yoJsgKhblVd+iqqU4RCXavm/jPityDo5TCvKMnpjKnOriy0w==}

  '@sveltejs/acorn-typescript@1.0.8':
    resolution: {integrity: sha512-esgN+54+q0NjB0Y/4BomT9samII7jGwNy/2a3wNZbT2A2RpmXsXwUt24LvLhx6jUq2gVk4cWEvcRO6MFQbOfNA==}
    peerDependencies:
      acorn: ^8.9.0

@@ -640,14 +646,21 @@ packages:
    peerDependencies:
      '@sveltejs/kit': ^2.0.0

  '@sveltejs/kit@2.21.1':
    resolution: {integrity: sha512-vLbtVwtDcK8LhJKnFkFYwM0uCdFmzioQnif0bjEYH1I24Arz22JPr/hLUiXGVYAwhu8INKx5qrdvr4tHgPwX6w==}
  '@sveltejs/kit@2.49.5':
    resolution: {integrity: sha512-dCYqelr2RVnWUuxc+Dk/dB/SjV/8JBndp1UovCyCZdIQezd8TRwFLNZctYkzgHxRJtaNvseCSRsuuHPeUgIN/A==}
    engines: {node: '>=18.13'}
    hasBin: true
    peerDependencies:
      '@sveltejs/vite-plugin-svelte': ^3.0.0 || ^4.0.0-next.1 || ^5.0.0
      '@opentelemetry/api': ^1.0.0
      '@sveltejs/vite-plugin-svelte': ^3.0.0 || ^4.0.0-next.1 || ^5.0.0 || ^6.0.0-next.0
      svelte: ^4.0.0 || ^5.0.0-next.0
      vite: ^5.0.3 || ^6.0.0
      typescript: ^5.3.3
      vite: ^5.0.3 || ^6.0.0 || ^7.0.0-beta.0
    peerDependenciesMeta:
      '@opentelemetry/api':
        optional: true
      typescript:
        optional: true

  '@sveltejs/vite-plugin-svelte-inspector@2.1.0':
    resolution: {integrity: sha512-9QX28IymvBlSCqsCll5t0kQVxipsfhFFL+L2t3nTWfXnddYwxBuAEtTtlaVQpRz9c37BhJjltSeY4AJSC03SSg==}
@@ -909,8 +922,8 @@ packages:
  concat-map@0.0.1:
    resolution: {integrity: sha512-/Srv4dswyQNBfohGpz9o6Yb3Gz3SrUDqBH5rTuhGR7ahtlbYKnVxw2bCFMRljaA7EXHaXZ8wsHdodFvbkhKmqg==}

  cookie@1.0.2:
    resolution: {integrity: sha512-9Kr/j4O16ISv8zBBhJoi4bXOYNTkFLOqSL3UDB0njXxCXNezjeyVrJyGOWtgfs/q2km1gwBcfH8q1yEGoMYunA==}
  cookie@1.1.1:
    resolution: {integrity: sha512-ei8Aos7ja0weRpFzJnEA9UHJ/7XQmqglbRwnf2ATjcB9Wq874VKH9kfjjirM6UhU2/E5fFYadylyhFldcqSidQ==}
    engines: {node: '>=18'}

  core-util-is@1.0.2:
@@ -977,8 +990,8 @@ packages:
    resolution: {integrity: sha512-reYkTUJAZb9gUuZ2RvVCNhVHdg62RHnJ7WJl8ftMi4diZ6NWlciOzQN88pUhSELEwflJht4oQDv0F0BMlwaYtA==}
    engines: {node: '>=8'}

  devalue@5.3.2:
    resolution: {integrity: sha512-UDsjUbpQn9kvm68slnrs+mfxwFkIflOhkanmyabZ8zOYk8SMEIbJ3TK+88g70hSIeytu4y18f0z/hYHMTrXIWw==}
  devalue@5.6.2:
    resolution: {integrity: sha512-nPRkjWzzDQlsejL1WVifk5rvcFi/y1onBRxjaFMjZeR9mFpqu2gmAZ9xUB9/IEanEP/vBtGeGganC/GO1fmufg==}

  devlop@1.1.0:
    resolution: {integrity: sha512-RWmIqhcFf1lRYBvNmr7qTNuyCt/7/ns2jbpp1+PalgE/rDQcBT0fioSMUpJ93irlUhC5hrg4cYqe6U+0ImW0rA==}
@@ -1099,8 +1112,8 @@ packages:
    resolution: {integrity: sha512-oruZaFkjorTpF32kDSI5/75ViwGeZginGGy2NoOSg3Q9bnwlnmDm4HLnkl0RE3n+njDXR037aY1+x58Z/zFdwQ==}
    engines: {node: ^12.22.0 || ^14.17.0 || >=16.0.0}

  esquery@1.6.0:
    resolution: {integrity: sha512-ca9pw9fomFcKPvFLXhBKUK90ZvGibiGOvRJNbjljY7s7uq/5YO4BOzcYtJqExdx99rF6aAcnRxHmcUHcz6sQsg==}
  esquery@1.7.0:
    resolution: {integrity: sha512-Ap6G0WQwcU/LHsvLwON1fAQX9Zp0A2Y6Y/cJBl9r/JbW90Zyg4/zbG6zzKa2OTALELarYHmKu0GhpM5EO+7T0g==}
    engines: {node: '>=0.10'}

  esrecurse@4.3.0:
@@ -1477,6 +1490,9 @@ packages:
  magic-string@0.30.17:
    resolution: {integrity: sha512-sNPKHvyjVf7gyjwS4xGTaW/mCnF8wnjtifKBEhxfZ7E/S8tQ0rssrwGNn6q8JH/ohItJfSQp9mBtQYuTlH5QnA==}

  magic-string@0.30.21:
    resolution: {integrity: sha512-vd2F4YUyEXKGcLHoq+TEyCjxueSeHnFxyyjNp80yg0XV4vUhnDer/lvvlqM/arB5bXQN5K2/3oinyCRyx8T2CQ==}

  marked@15.0.12:
    resolution: {integrity: sha512-8dD6FusOQSrpv9Z1rdNMdlSgQOIP880DHqnohobOmYLElGEqAL/JvxvuxZO16r4HtjTlfPRDC1hbvxC9dPN2nA==}
    engines: {node: '>= 18'}
@@ -1899,8 +1915,8 @@ packages:
    engines: {node: '>=10'}
    hasBin: true

  set-cookie-parser@2.7.1:
    resolution: {integrity: sha512-IOc8uWeOZgnb3ptbCURJWNjWUPcO3ZnTTdzsurqERrP6nPyv+paC55vJM0LpOlT2ne+Ix+9+CRG1MNLlyZ4GjQ==}
  set-cookie-parser@2.7.2:
    resolution: {integrity: sha512-oeM1lpU/UvhTxw+g3cIfxXHyJRc/uidd3yK1P242gzHds0udQBYzs3y8j4gCCW+ZJ7ad0yctld8RYO+bdurlvw==}

  set-function-length@1.2.2:
    resolution: {integrity: sha512-pgRc4hJ4/sNjWCSS9AmnS40x3bNMDTknHgL5UaMBTMyJnU90EgWh1Rz+MC9eFu4BuN/UwZjKQuY/1v3rM7HMfg==}
@@ -1924,8 +1940,8 @@ packages:
  simple-statistics@7.8.8:
    resolution: {integrity: sha512-CUtP0+uZbcbsFpqEyvNDYjJCl+612fNgjT8GaVuvMG7tBuJg8gXGpsP5M7X658zy0IcepWOZ6nPBu1Qb9ezA1w==}

  sirv@3.0.1:
    resolution: {integrity: sha512-FoqMu0NCGBLCcAkS1qA+XJIQTR6/JHfQXl+uGteNCQ76T91DMUjPa9xfmeqMY3z80nLSg9yQmNjK0Px6RWsH/A==}
  sirv@3.0.2:
    resolution: {integrity: sha512-2wcC/oGxHis/BoHkkPwldgiPSYcpZK3JU28WoMVv55yHJgcZ8rlXvuG9iZggz+sU1d4bRgIGASwyWqjxu3FM0g==}
    engines: {node: '>=18'}

  slash@2.0.0:
@@ -2377,7 +2393,7 @@ snapshots:
      eslint: 9.17.0(jiti@1.21.7)
      eslint-visitor-keys: 3.4.3

  '@eslint-community/eslint-utils@4.9.0(eslint@9.17.0(jiti@1.21.7))':
  '@eslint-community/eslint-utils@4.9.1(eslint@9.17.0(jiti@1.21.7))':
    dependencies:
      eslint: 9.17.0(jiti@1.21.7)
      eslint-visitor-keys: 3.4.3
@@ -2459,7 +2475,7 @@ snapshots:
  '@jridgewell/gen-mapping@0.3.8':
    dependencies:
      '@jridgewell/set-array': 1.2.1
      '@jridgewell/sourcemap-codec': 1.5.0
      '@jridgewell/sourcemap-codec': 1.5.5
      '@jridgewell/trace-mapping': 0.3.25

  '@jridgewell/resolve-uri@3.1.2': {}
@@ -2468,6 +2484,8 @@ snapshots:

  '@jridgewell/sourcemap-codec@1.5.0': {}

  '@jridgewell/sourcemap-codec@1.5.5': {}

  '@jridgewell/trace-mapping@0.3.25':
    dependencies:
      '@jridgewell/resolve-uri': 3.1.2
@@ -2644,32 +2662,37 @@ snapshots:
    dependencies:
      tailwindcss: 3.4.17

  '@sveltejs/acorn-typescript@1.0.5(acorn@8.14.1)':
    dependencies:
      acorn: 8.14.1
  '@standard-schema/spec@1.1.0': {}

  '@sveltejs/adapter-auto@3.3.1(@sveltejs/kit@2.21.1(@sveltejs/vite-plugin-svelte@3.1.2(svelte@4.2.20)(vite@5.4.21(@types/node@20.17.50)))(svelte@4.2.20)(vite@5.4.21(@types/node@20.17.50)))':
  '@sveltejs/acorn-typescript@1.0.8(acorn@8.15.0)':
    dependencies:
      '@sveltejs/kit': 2.21.1(@sveltejs/vite-plugin-svelte@3.1.2(svelte@4.2.20)(vite@5.4.21(@types/node@20.17.50)))(svelte@4.2.20)(vite@5.4.21(@types/node@20.17.50))
      acorn: 8.15.0

  '@sveltejs/adapter-auto@3.3.1(@sveltejs/kit@2.49.5(@sveltejs/vite-plugin-svelte@3.1.2(svelte@4.2.20)(vite@5.4.21(@types/node@20.17.50)))(svelte@4.2.20)(typescript@5.8.3)(vite@5.4.21(@types/node@20.17.50)))':
    dependencies:
      '@sveltejs/kit': 2.49.5(@sveltejs/vite-plugin-svelte@3.1.2(svelte@4.2.20)(vite@5.4.21(@types/node@20.17.50)))(svelte@4.2.20)(typescript@5.8.3)(vite@5.4.21(@types/node@20.17.50))
      import-meta-resolve: 4.1.0

  '@sveltejs/kit@2.21.1(@sveltejs/vite-plugin-svelte@3.1.2(svelte@4.2.20)(vite@5.4.21(@types/node@20.17.50)))(svelte@4.2.20)(vite@5.4.21(@types/node@20.17.50))':
  '@sveltejs/kit@2.49.5(@sveltejs/vite-plugin-svelte@3.1.2(svelte@4.2.20)(vite@5.4.21(@types/node@20.17.50)))(svelte@4.2.20)(typescript@5.8.3)(vite@5.4.21(@types/node@20.17.50))':
    dependencies:
      '@sveltejs/acorn-typescript': 1.0.5(acorn@8.14.1)
      '@standard-schema/spec': 1.1.0
      '@sveltejs/acorn-typescript': 1.0.8(acorn@8.15.0)
      '@sveltejs/vite-plugin-svelte': 3.1.2(svelte@4.2.20)(vite@5.4.21(@types/node@20.17.50))
      '@types/cookie': 0.6.0
      acorn: 8.14.1
      cookie: 1.0.2
      devalue: 5.3.2
      acorn: 8.15.0
      cookie: 1.1.1
      devalue: 5.6.2
      esm-env: 1.2.2
      kleur: 4.1.5
      magic-string: 0.30.17
      magic-string: 0.30.21
      mrmime: 2.0.1
      sade: 1.8.1
      set-cookie-parser: 2.7.1
      sirv: 3.0.1
      set-cookie-parser: 2.7.2
      sirv: 3.0.2
      svelte: 4.2.20
      vite: 5.4.21(@types/node@20.17.50)
    optionalDependencies:
      typescript: 5.8.3

  '@sveltejs/vite-plugin-svelte-inspector@2.1.0(@sveltejs/vite-plugin-svelte@3.1.2(svelte@4.2.20)(vite@5.4.21(@types/node@20.17.50)))(svelte@4.2.20)(vite@5.4.21(@types/node@20.17.50))':
    dependencies:
@@ -2741,10 +2764,6 @@ snapshots:

  '@yarnpkg/lockfile@1.1.0': {}

  acorn-jsx@5.3.2(acorn@8.14.1):
    dependencies:
      acorn: 8.14.1

  acorn-jsx@5.3.2(acorn@8.15.0):
    dependencies:
      acorn: 8.15.0
@@ -2900,7 +2919,7 @@ snapshots:
    dependencies:
      '@jridgewell/sourcemap-codec': 1.5.0
      '@types/estree': 1.0.7
      acorn: 8.14.1
      acorn: 8.15.0
      estree-walker: 3.0.3
      periscopic: 3.1.0

@@ -2922,7 +2941,7 @@ snapshots:

  concat-map@0.0.1: {}

  cookie@1.0.2: {}
  cookie@1.1.1: {}

  core-util-is@1.0.2: {}

@@ -2969,7 +2988,7 @@ snapshots:

  detect-indent@6.1.0: {}

  devalue@5.3.2: {}
  devalue@5.6.2: {}

  devlop@1.1.0:
    dependencies:
@@ -3082,7 +3101,7 @@ snapshots:

  eslint@9.17.0(jiti@1.21.7):
    dependencies:
      '@eslint-community/eslint-utils': 4.9.0(eslint@9.17.0(jiti@1.21.7))
      '@eslint-community/eslint-utils': 4.9.1(eslint@9.17.0(jiti@1.21.7))
      '@eslint-community/regexpp': 4.12.2
      '@eslint/config-array': 0.19.2
      '@eslint/core': 0.9.1
@@ -3102,7 +3121,7 @@ snapshots:
      eslint-scope: 8.4.0
      eslint-visitor-keys: 4.2.1
      espree: 10.4.0
      esquery: 1.6.0
      esquery: 1.7.0
      esutils: 2.0.3
      fast-deep-equal: 3.1.3
      file-entry-cache: 8.0.0
@@ -3133,11 +3152,11 @@ snapshots:

  espree@9.6.1:
    dependencies:
      acorn: 8.14.1
      acorn-jsx: 5.3.2(acorn@8.14.1)
      acorn: 8.15.0
      acorn-jsx: 5.3.2(acorn@8.15.0)
      eslint-visitor-keys: 3.4.3

  esquery@1.6.0:
  esquery@1.7.0:
    dependencies:
      estraverse: 5.3.0

@@ -3533,6 +3552,10 @@ snapshots:
    dependencies:
      '@jridgewell/sourcemap-codec': 1.5.0

  magic-string@0.30.21:
    dependencies:
      '@jridgewell/sourcemap-codec': 1.5.5

  marked@15.0.12: {}

  marked@5.1.2: {}
@@ -3985,7 +4008,7 @@ snapshots:

  semver@7.7.2: {}

  set-cookie-parser@2.7.1: {}
  set-cookie-parser@2.7.2: {}

  set-function-length@1.2.2:
    dependencies:
@@ -4017,7 +4040,7 @@ snapshots:

  simple-statistics@7.8.8: {}

  sirv@3.0.1:
  sirv@3.0.2:
    dependencies:
      '@polka/url': 1.0.0-next.29
      mrmime: 2.0.1
@@ -4027,7 +4050,7 @@ snapshots:

  sorcery@0.11.1:
    dependencies:
      '@jridgewell/sourcemap-codec': 1.5.0
      '@jridgewell/sourcemap-codec': 1.5.5
      buffer-crc32: 1.0.0
      minimist: 1.2.8
      sander: 0.5.1
@@ -4147,7 +4170,7 @@ snapshots:
    dependencies:
      '@types/pug': 2.0.10
      detect-indent: 6.1.0
      magic-string: 0.30.17
      magic-string: 0.30.21
      sorcery: 0.11.1
      strip-indent: 3.0.0
      svelte: 4.2.20