Compare commits


37 Commits

Author SHA1 Message Date
github-actions[bot]
e40d4e6623 chore(release): Update version to v1.4.299 2025-08-27 18:07:33 +00:00
Kayvan Sylvan
51bd1ebadf Merge pull request #1731 from ksylvan/0827-update-ollama-library-for-cve-fixes
chore: upgrade ollama dependency from v0.9.0 to v0.11.7
2025-08-27 11:05:04 -07:00
Kayvan Sylvan
d3de731967 chore: upgrade ollama dependency from v0.9.0 to v0.11.7
• Update ollama package to version 0.11.7
• Refresh go.sum with new dependency checksums

- **Link**: [https://nvd.nist.gov/vuln/detail/CVE-2025-0317](https://nvd.nist.gov/vuln/detail/CVE-2025-0317)
- **CVSS Score**: 7.5 (High)
- **Description**: A vulnerability in ollama/ollama versions <=0.3.14 allows a malicious user to upload and create a customized GGUF model file on the Ollama server. This can lead to a division by zero error in the ggufPadding function, causing the server to crash and resulting in a Denial of Service (DoS) attack.
- **Affected**: Ollama server versions ≤ 0.3.14
- **Impact**: Denial of Service through division by zero error

- **Link**: [https://nvd.nist.gov/vuln/detail/CVE-2025-0315](https://nvd.nist.gov/vuln/detail/CVE-2025-0315)
- **CVSS Score**: 7.5 (High)
- **Description**: Vulnerability allows Denial of Service via customized GGUF model file upload on Ollama server.
- **Affected**: Ollama/ollama versions ≤ 0.3.14
- **Impact**: Denial of Service through malicious GGUF model file uploads

- **Link**: [https://nvd.nist.gov/vuln/detail/CVE-2024-12886](https://nvd.nist.gov/vuln/detail/CVE-2024-12886)
- **CVSS Score**: 7.5 (High)
- **Description**: An Out-Of-Memory (OOM) vulnerability exists in the ollama server version 0.3.14. This vulnerability can be triggered when a malicious API server responds with a gzip bomb HTTP response, leading to the ollama server crashing.
- **Affected**: Ollama server version 0.3.14
- **Impact**: Denial of Service through memory exhaustion via gzip bomb attack

- **Link**: [https://nvd.nist.gov/vuln/detail/CVE-2024-8063](https://nvd.nist.gov/vuln/detail/CVE-2024-8063)
- **CVSS Score**: 7.5 (High)
- **Description**: High-severity vulnerability; full details are in the linked NVD entry
- **Impact**: Resolved by this dependency upgrade

- **Link**: [https://nvd.nist.gov/vuln/detail/CVE-2024-12055](https://nvd.nist.gov/vuln/detail/CVE-2024-12055)
- **CVSS Score**: 7.5 (High)
- **Description**: High-severity vulnerability; see the linked NVD entry for details
- **Impact**: Patched by the upgrade to v0.11.7

- **Link**: [https://nvd.nist.gov/vuln/detail/CVE-2025-51471](https://nvd.nist.gov/vuln/detail/CVE-2025-51471)
- **CVSS Score**: 6.9 (Medium)
- **Description**: Medium-severity vulnerability; see the linked NVD entry for details
- **Impact**: Addressed as part of this round of security updates

- **Link**: [https://nvd.nist.gov/vuln/detail/CVE-2025-46394](https://nvd.nist.gov/vuln/detail/CVE-2025-46394)
- **CVSS Score**: 3.2 (Low)
- **Description**: Low-severity issue; see the linked NVD entry for details
- **Impact**: Minor concern resolved alongside the higher-severity fixes

- **Link**: [https://nvd.nist.gov/vuln/detail/CVE-2024-58251](https://nvd.nist.gov/vuln/detail/CVE-2024-58251)
- **CVSS Score**: 2.5 (Low)
- **Description**: Low-severity vulnerability; see the linked NVD entry for details
- **Impact**: Minimal risk; fixed by the same upgrade

This dependency upgrade addresses **8 CVEs** in total:
- **5 High Severity** vulnerabilities (CVSS 7.5)
- **1 Medium Severity** vulnerability (CVSS 6.9)
- **2 Low Severity** vulnerabilities (CVSS 3.2 and 2.5)

The majority of the high-severity issues are **Ollama server vulnerabilities** that could lead to Denial of Service through vectors including division by zero errors, memory exhaustion, and malicious file uploads. Upgrading closes these attack vectors and helps maintain system availability.

**Priority**: The high-severity Ollama vulnerabilities should be considered critical for any systems running Ollama server components, as they can lead to service disruption and potential system crashes.
2025-08-27 10:53:31 -07:00
github-actions[bot]
458b0a5e1c chore(release): Update version to v1.4.298 2025-08-27 14:11:48 +00:00
Kayvan Sylvan
b8f64bd554 Merge pull request #1730 from ksylvan/0827-simplify-docker
Modernize Dockerfile with Best Practices Implementation
2025-08-27 07:09:12 -07:00
Kayvan Sylvan
1622a34331 chore: remove docker-test framework and simplify production docker setup
- Remove entire docker-test directory and testing infrastructure
- Delete complex test runner script and environment files
- Simplify production Dockerfile with multi-stage build optimization
- Remove docker-compose.yml and start-docker.sh helper scripts
- Update README with cleaner Docker usage instructions
- Streamline container build process and reduce image size
2025-08-27 07:00:52 -07:00
github-actions[bot]
6b9f4c1fb8 chore(release): Update version to v1.4.297 2025-08-26 15:11:22 +00:00
Kayvan Sylvan
4d2061a641 Merge pull request #1729 from ksylvan/0826-community-docs
Add GitHub Community Health Documents
2025-08-26 08:08:52 -07:00
Kayvan Sylvan
713f6e46fe docs: add contributing, security, support, and code-of-conduct docs; add docs index
CHANGES
- Add CODE_OF_CONDUCT defining respectful, collaborative community behavior
- Add CONTRIBUTING with setup, testing, PR, changelog requirements
- Add SECURITY policy with reporting process and response timelines
- Add SUPPORT guide for bugs, features, discussions, expectations
- Add docs README indexing guides, quick starts, contributor essentials
2025-08-26 07:10:08 -07:00
github-actions[bot]
efadc81974 chore(release): Update version to v1.4.296 2025-08-26 03:15:57 +00:00
Kayvan Sylvan
ea54f60dcc Merge pull request #1728 from ksylvan/0825-debug-logging-cleanup
Refactor Logging System to Use Centralized Debug Logger
2025-08-25 20:13:26 -07:00
Kayvan Sylvan
4008125e37 refactor: replace stderr prints with centralized debuglog.Log and improve auth messaging
- Replace fmt.Fprintf/os.Stderr with centralized debuglog.Log across CLI
- Add unconditional Log function to debuglog for important messages
- Improve OAuth flow messaging and token refresh diagnostics
- Update tests to capture debuglog output via SetOutput
- Convert Perplexity streaming errors to unified debug logging
- Emit file write notifications through debuglog instead of stderr
- Warn on ambiguous model selection using centralized logger
- Announce large audio processing steps via debuglog progress messages
- Standardize extension registry and patterns warnings through debuglog
2025-08-25 20:09:55 -07:00
github-actions[bot]
da94411bf3 chore(release): Update version to v1.4.295 2025-08-24 20:22:53 +00:00
Kayvan Sylvan
ab7b37be10 Merge pull request #1727 from ksylvan/0824-anthropic-beta-logs
Standardize Anthropic Beta Failure Logging
2025-08-24 13:20:19 -07:00
Kayvan Sylvan
772337bf0d refactor: route Anthropic beta failure logs through internal debug logger
CHANGES
- Replace fmt.Fprintf stderr with debuglog.Debug for beta failures
- Import internal log package and remove os dependency
- Standardize logging level to debuglog.Basic for beta errors
- Preserve fallback stream behavior when beta features fail
- Maintain message send fallback when beta options fail
2025-08-24 13:10:57 -07:00
github-actions[bot]
1e30c4e136 chore(release): Update version to v1.4.294 2025-08-20 16:37:50 +00:00
Kayvan Sylvan
e12a40ad4f Merge pull request #1723 from ksylvan/0820-venice-ai-provider
docs: update README with Venice AI provider and Windows install script
2025-08-20 09:35:18 -07:00
Kayvan Sylvan
97beaecbeb docs: update README with Venice AI provider and Windows install script
- Add Venice AI provider configuration with API endpoint
- Document Venice AI as privacy-first open-source provider
- Include PowerShell installation script for Windows users
- Add debug levels section to table of contents
- Update recent major features with v1.4.294 release notes
- Configure Venice AI base URL and response settings
2025-08-20 09:30:29 -07:00
github-actions[bot]
7af6817bac chore(release): Update version to v1.4.293 2025-08-19 11:29:38 +00:00
Kayvan Sylvan
50ecc32d85 Merge pull request #1718 from ksylvan/0819-debug-log-levels
Implement Configurable Debug Logging Levels
2025-08-19 04:27:08 -07:00
Kayvan Sylvan
ff1ef380a7 feat: add --debug flag with levels and centralized logging
CHANGES
- Add --debug flag controlling runtime logging verbosity levels
- Introduce internal/log package with Off, Basic, Detailed, Trace
- Replace ad-hoc Debugf and globals with centralized debug logger
- Wire debug level during early CLI argument parsing
- Add bash, zsh, fish completions for --debug levels
- Document debug levels in README with usage examples
- Add comprehensive STT guide covering models, flags, workflows
- Simplify splitAudioFile signature and log ffmpeg chunking operations
- Remove FABRIC_STT_DEBUG environment variable and related code
- Clean minor code paths in vendors and template modules
2025-08-19 04:23:40 -07:00
github-actions[bot]
6a3a7e82d1 chore(release): Update version to v1.4.292 2025-08-19 00:55:22 +00:00
Kayvan Sylvan
34bc0b5e31 Merge pull request #1717 from ksylvan/0818-feature-default-model-indicator
Highlight default vendor/model in model listing
2025-08-18 17:52:57 -07:00
Kayvan Sylvan
ce59999503 feat: highlight default vendor/model in listings, pass registry defaults
CHANGES
- Update PrintWithVendor signature to accept default vendor and model
- Mark default vendor/model with asterisk in non-shell output
- Compare vendor and model case-insensitively when marking
- Pass registry defaults to PrintWithVendor from CLI
- Add test ensuring default selection appears with asterisk
- Keep shell completion output unchanged without default markers
2025-08-18 16:58:25 -07:00
Kayvan Sylvan
9bb4ccf740 docs: update version number in README updates section from v1.4.290 to v1.4.291 2025-08-18 08:13:55 -07:00
github-actions[bot]
900b13f08c chore(release): Update version to v1.4.291 2025-08-18 15:05:02 +00:00
Kayvan Sylvan
6824f0c0a7 Merge pull request #1715 from ksylvan/0818-openai-transcribe-using-openai-models
Add speech-to-text via OpenAI with transcription flags and completions
2025-08-18 08:02:36 -07:00
Kayvan Sylvan
a2481406db feat: add speech-to-text via OpenAI with transcription flags and completions
CHANGES
- Add --transcribe-file flag to transcribe audio or video
- Add --transcribe-model flag with model listing and completion
- Add --split-media-file flag to chunk files over 25MB
- Implement OpenAI transcription using Whisper and GPT-4o Transcribe
- Integrate transcription pipeline into CLI before readability processing
- Provide zsh, bash, fish completions for new transcription flags
- Validate media extensions and enforce 25MB upload limits
- Update README with release and corrected pattern link path
2025-08-18 07:59:50 -07:00
github-actions[bot]
171f7eb3ab chore(release): Update version to v1.4.290 2025-08-17 23:52:24 +00:00
Kayvan Sylvan
dccc70c433 Merge pull request #1714 from ksylvan/0817-simple-pattern-to-model-mapping-via-env-vars
Add Per-Pattern Model Mapping via Environment Variables
2025-08-17 16:49:46 -07:00
Kayvan Sylvan
e5ec9acfac feat: add per-pattern model mapping support via environment variables
• Add per-pattern model mapping documentation section
• Implement environment variable lookup for pattern-specific models
• Support vendor|model format in environment variable specification
• Check pattern-specific model when no model explicitly set
• Transform pattern names to uppercase environment variable format
• Add table of contents entry for new feature
• Enable shell startup file configuration for patterns
2025-08-17 16:15:23 -07:00
github-actions[bot]
f0eb9f90a3 chore(release): Update version to v1.4.289 2025-08-16 21:22:43 +00:00
Kayvan Sylvan
758425f98a Merge pull request #1710 from ksylvan/0816-no-variable-replacement-flag
Add `--no-variable-replacement` Flag for Literal Pattern Handling
2025-08-16 14:20:18 -07:00
Kayvan Sylvan
b4b5b0a4d9 feat: add --no-variable-replacement flag to disable pattern variable substitution
- Introduce CLI flag to skip pattern variable replacement.
- Wire flag into domain request and session builder.
- Avoid applying input variables when replacement is disabled.
- Provide PatternsEntity.GetWithoutVariables for input-only pattern processing support.
- Refactor patterns code into reusable load and apply helpers.
- Update bash, zsh, fish completions with new flag.
- Document flag in README and CLI help output.
- Add unit tests covering GetWithoutVariables path and behavior.
- Ensure {{input}} placeholder appends when missing in patterns.
2025-08-16 14:12:06 -07:00
github-actions[bot]
81a47ecab7 chore(release): Update version to v1.4.288 2025-08-16 16:19:42 +00:00
Kayvan Sylvan
0bce5c7b6e Merge pull request #1709 from ksylvan/0816-fix-youtube-transcripts
Enhanced YouTube Subtitle Language Fallback Handling
2025-08-16 09:17:09 -07:00
Kayvan Sylvan
992936dbd8 fix: improve YouTube subtitle language fallback handling in yt-dlp integration
- Fix typo "Gemmini" to "Gemini" in README
- Add "kballard" and "shellquote" to VSCode dictionary
- Add "YTDLP" to VSCode spell checker
- Enhance subtitle language options with fallback variants
- Build language options string with comma-separated alternatives
2025-08-16 09:14:03 -07:00
50 changed files with 1476 additions and 557 deletions

.vscode/settings.json

@@ -25,6 +25,7 @@
"danielmiessler",
"davidanson",
"Debugf",
"debuglog",
"dedup",
"deepseek",
"Despina",
@@ -55,6 +56,7 @@
"godotenv",
"gofmt",
"goimports",
"golint",
"gomod",
"gonic",
"goopenai",
@@ -75,6 +77,7 @@
"jessevdk",
"Jina",
"joho",
"kballard",
"Keploy",
"Kore",
"ksylvan",
@@ -98,6 +101,7 @@
"mbed",
"metacharacters",
"Miessler",
"mpga",
"nometa",
"numpy",
"ollama",
@@ -129,6 +133,9 @@
"seaborn",
"semgrep",
"sess",
"sgaunet",
"shellquote",
"SSEHTTP",
"storer",
"Streamlit",
"stretchr",
@@ -156,7 +163,8 @@
"writeups",
"xclip",
"yourpatternname",
"youtu"
"youtu",
"YTDLP"
],
"cSpell.ignorePaths": ["go.mod", ".gitignore", "CHANGELOG.md"],
"markdownlint.config": {

CHANGELOG.md

@@ -1,5 +1,128 @@
# Changelog
## v1.4.299 (2025-08-27)
### PR [#1731](https://github.com/danielmiessler/Fabric/pull/1731) by [ksylvan](https://github.com/ksylvan): chore: upgrade ollama dependency from v0.9.0 to v0.11.7
- Updated ollama package from version 0.9.0 to 0.11.7
- Fixed 8 security vulnerabilities including 5 high-severity CVEs that could cause denial of service attacks
- Patched Ollama server vulnerabilities related to division by zero errors and memory exhaustion
- Resolved security flaws that allowed malicious GGUF model file uploads to crash the server
- Enhanced system stability and security posture through comprehensive dependency upgrade
## v1.4.298 (2025-08-27)
### PR [#1730](https://github.com/danielmiessler/Fabric/pull/1730) by [ksylvan](https://github.com/ksylvan): Modernize Dockerfile with Best Practices Implementation
- Remove docker-test framework and simplify production docker setup by eliminating complex testing infrastructure
- Delete entire docker-test directory including test runner scripts and environment configuration files
- Implement multi-stage build optimization in production Dockerfile to improve build efficiency
- Remove docker-compose.yml and start-docker.sh helper scripts to streamline container workflow
- Update README documentation with cleaner Docker usage instructions and reduced image size benefits
## v1.4.297 (2025-08-26)
### PR [#1729](https://github.com/danielmiessler/Fabric/pull/1729) by [ksylvan](https://github.com/ksylvan): Add GitHub Community Health Documents
- Add CODE_OF_CONDUCT defining respectful, collaborative community behavior
- Add CONTRIBUTING with setup, testing, PR, changelog requirements
- Add SECURITY policy with reporting process and response timelines
- Add SUPPORT guide for bugs, features, discussions, expectations
- Add docs README indexing guides, quick starts, contributor essentials
## v1.4.296 (2025-08-26)
### PR [#1728](https://github.com/danielmiessler/Fabric/pull/1728) by [ksylvan](https://github.com/ksylvan): Refactor Logging System to Use Centralized Debug Logger
- Replace fmt.Fprintf/os.Stderr with centralized debuglog.Log across CLI and add unconditional Log function for important messages
- Improve OAuth flow messaging and token refresh diagnostics with better error handling
- Update tests to capture debuglog output via SetOutput for better test coverage
- Convert Perplexity streaming errors to unified debug logging and emit file write notifications through debuglog
- Standardize extension registry warnings and announce large audio processing steps via centralized logger
## v1.4.295 (2025-08-24)
### PR [#1727](https://github.com/danielmiessler/Fabric/pull/1727) by [ksylvan](https://github.com/ksylvan): Standardize Anthropic Beta Failure Logging
- Refactor: route Anthropic beta failure logs through internal debug logger
- Replace fmt.Fprintf stderr with debuglog.Debug for beta failures
- Import internal log package and remove os dependency
- Standardize logging level to debuglog.Basic for beta errors
- Preserve fallback stream behavior when beta features fail
## v1.4.294 (2025-08-20)
### PR [#1723](https://github.com/danielmiessler/Fabric/pull/1723) by [ksylvan](https://github.com/ksylvan): docs: update README with Venice AI provider and Windows install script
- Add Venice AI provider configuration with API endpoint
- Document Venice AI as privacy-first open-source provider
- Include PowerShell installation script for Windows users
- Add debug levels section to table of contents
- Update recent major features with v1.4.294 release notes
## v1.4.293 (2025-08-19)
### PR [#1718](https://github.com/danielmiessler/Fabric/pull/1718) by [ksylvan](https://github.com/ksylvan): Implement Configurable Debug Logging Levels
- Add --debug flag controlling runtime logging verbosity levels
- Introduce internal/log package with Off, Basic, Detailed, Trace
- Replace ad-hoc Debugf and globals with centralized debug logger
- Wire debug level during early CLI argument parsing
- Add bash, zsh, fish completions for --debug levels
## v1.4.292 (2025-08-18)
### PR [#1717](https://github.com/danielmiessler/Fabric/pull/1717) by [ksylvan](https://github.com/ksylvan): Highlight default vendor/model in model listing
- Update PrintWithVendor signature to accept default vendor and model
- Mark default vendor/model with asterisk in non-shell output
- Compare vendor and model case-insensitively when marking
- Pass registry defaults to PrintWithVendor from CLI
- Add test ensuring default selection appears with asterisk
### Direct commits
- Docs: update version number in README updates section from v1.4.290 to v1.4.291
## v1.4.291 (2025-08-18)
### PR [#1715](https://github.com/danielmiessler/Fabric/pull/1715) by [ksylvan](https://github.com/ksylvan): feat: add speech-to-text via OpenAI with transcription flags and comp…
- Add --transcribe-file flag to transcribe audio or video
- Add --transcribe-model flag with model listing and completion
- Add --split-media-file flag to chunk files over 25MB
- Implement OpenAI transcription using Whisper and GPT-4o Transcribe
- Integrate transcription pipeline into CLI before readability processing
## v1.4.290 (2025-08-17)
### PR [#1714](https://github.com/danielmiessler/Fabric/pull/1714) by [ksylvan](https://github.com/ksylvan): feat: add per-pattern model mapping support via environment variables
- Add per-pattern model mapping support via environment variables
- Implement environment variable lookup for pattern-specific models
- Support vendor|model format in environment variable specification
- Enable shell startup file configuration for patterns
- Transform pattern names to uppercase environment variable format
## v1.4.289 (2025-08-16)
### PR [#1710](https://github.com/danielmiessler/Fabric/pull/1710) by [ksylvan](https://github.com/ksylvan): feat: add --no-variable-replacement flag to disable pattern variable …
- Add --no-variable-replacement flag to disable pattern variable substitution
- Introduce CLI flag to skip pattern variable replacement and wire it into domain request and session builder
- Provide PatternsEntity.GetWithoutVariables for input-only pattern processing support
- Refactor patterns code into reusable load and apply helpers
- Update bash, zsh, fish completions with new flag and document in README and CLI help output
## v1.4.288 (2025-08-16)
### PR [#1709](https://github.com/danielmiessler/Fabric/pull/1709) by [ksylvan](https://github.com/ksylvan): Enhanced YouTube Subtitle Language Fallback Handling
- Fix: improve YouTube subtitle language fallback handling in yt-dlp integration
- Fix typo "Gemmini" to "Gemini" in README
- Add "kballard" and "shellquote" to VSCode dictionary
- Add "YTDLP" to VSCode spell checker
- Enhance subtitle language options with fallback variants
## v1.4.287 (2025-08-14)
### PR [#1706](https://github.com/danielmiessler/Fabric/pull/1706) by [ksylvan](https://github.com/ksylvan): Gemini Thinking Support and README (New Features) automation

README.md

@@ -57,6 +57,8 @@ Below are the **new features and capabilities** we've added (newest first):
### Recent Major Features
- [v1.4.294](https://github.com/danielmiessler/fabric/releases/tag/v1.4.294) (Aug 20, 2025) — **Venice AI Support**: Added the Venice AI provider. Venice is a Privacy-First, Open-Source AI provider. See their ["About Venice"](https://docs.venice.ai/overview/about-venice) page for details.
- [v1.4.291](https://github.com/danielmiessler/fabric/releases/tag/v1.4.291) (Aug 18, 2025) — **Speech To Text**: Add OpenAI speech-to-text support with `--transcribe-file`, `--transcribe-model`, and `--split-media-file` flags.
- [v1.4.287](https://github.com/danielmiessler/fabric/releases/tag/v1.4.287) (Aug 16, 2025) — **AI Reasoning**: Add Thinking to Gemini models and introduce `readme_updates` python script
- [v1.4.286](https://github.com/danielmiessler/fabric/releases/tag/v1.4.286) (Aug 14, 2025) — **AI Reasoning**: Introduce Thinking Config Across Anthropic and OpenAI Providers
- [v1.4.285](https://github.com/danielmiessler/fabric/releases/tag/v1.4.285) (Aug 13, 2025) — **Extended Context**: Enable One Million Token Context Beta Feature for Sonnet-4
@@ -68,7 +70,7 @@ Below are the **new features and capabilities** we've added (newest first):
- [v1.4.277](https://github.com/danielmiessler/fabric/releases/tag/v1.4.277) (Aug 8, 2025) — **Desktop Notifications**: Add cross-platform desktop notifications to Fabric CLI
- [v1.4.274](https://github.com/danielmiessler/fabric/releases/tag/v1.4.274) (Aug 7, 2025) — **Claude 4.1 Added**: Add Support for Claude Opus 4.1 Model
- [v1.4.271](https://github.com/danielmiessler/fabric/releases/tag/v1.4.271) (Jul 28, 2025) — **AI Summarized Release Notes**: Enable AI summary updates for GitHub releases
-- [v1.4.268](https://github.com/danielmiessler/fabric/releases/tag/v1.4.268) (Jul 26, 2025) — **Gemmini TTS Voice Selection**: add Gemini TTS voice selection and listing functionality
+- [v1.4.268](https://github.com/danielmiessler/fabric/releases/tag/v1.4.268) (Jul 26, 2025) — **Gemini TTS Voice Selection**: add Gemini TTS voice selection and listing functionality
- [v1.4.267](https://github.com/danielmiessler/fabric/releases/tag/v1.4.267) (Jul 26, 2025) — **Text-to-Speech**: Update Gemini Plugin to New SDK with TTS Support
- [v1.4.258](https://github.com/danielmiessler/fabric/releases/tag/v1.4.258) (Jul 17, 2025) — **Onboarding Improved**: Add startup check to initialize config and .env file automatically
- [v1.4.257](https://github.com/danielmiessler/fabric/releases/tag/v1.4.257) (Jul 17, 2025) — **OpenAI Routing Control**: Introduce CLI Flag to Disable OpenAI Responses API
@@ -127,6 +129,7 @@ Keep in mind that many of these were recorded when Fabric was Python-based, so r
- [From Source](#from-source)
- [Environment Variables](#environment-variables)
- [Setup](#setup)
- [Per-Pattern Model Mapping](#per-pattern-model-mapping)
- [Add aliases for all patterns](#add-aliases-for-all-patterns)
- [Save your files in markdown using aliases](#save-your-files-in-markdown-using-aliases)
- [Migration](#migration)
@@ -137,6 +140,7 @@ Keep in mind that many of these were recorded when Fabric was Python-based, so r
- [Bash Completion](#bash-completion)
- [Fish Completion](#fish-completion)
- [Usage](#usage)
- [Debug Levels](#debug-levels)
- [Our approach to prompting](#our-approach-to-prompting)
- [Examples](#examples)
- [Just use the Patterns](#just-use-the-patterns)
@@ -207,6 +211,17 @@ To install Fabric, you can use the latest release binaries or install it from th
`https://github.com/danielmiessler/fabric/releases/latest/download/fabric-windows-amd64.exe`
Or, via PowerShell, copy, paste, and run the following snippet to install the binary into `${HOME}\.local\bin`. Please make sure that directory is included in your `PATH`.
```powershell
$ErrorActionPreference = "Stop"
$LATEST="https://github.com/danielmiessler/fabric/releases/latest/download/fabric-windows-amd64.exe"
$DIR="${HOME}\.local\bin"
New-Item -Path $DIR -ItemType Directory -Force
Invoke-WebRequest -URI "${LATEST}" -outfile "${DIR}\fabric.exe"
& "${DIR}\fabric.exe" /version
```
#### macOS (arm64)
`curl -L https://github.com/danielmiessler/fabric/releases/latest/download/fabric-darwin-arm64 > fabric && chmod +x fabric && ./fabric --version`
@@ -284,6 +299,13 @@ fabric --setup
If everything works you are good to go.
### Per-Pattern Model Mapping
You can configure specific models for individual patterns using environment variables of the form `FABRIC_MODEL_PATTERN_NAME=vendor|model`. This makes it easy to maintain these per-pattern model mappings in your shell startup files.
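A minimal illustration with hypothetical pattern and model names (the variable name is `FABRIC_MODEL_` plus the pattern name uppercased, with dashes converted to underscores):
```bash
# Route the summarize pattern to a specific vendor and model
export FABRIC_MODEL_SUMMARIZE="OpenAI|gpt-4o"

# A bare model name (no vendor) is also accepted
export FABRIC_MODEL_EXTRACT_WISDOM="gpt-4o-mini"

# The mapping applies whenever no --model is passed explicitly
fabric --pattern summarize < notes.txt
```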
### Add aliases for all patterns
In order to add aliases for all your patterns and use them directly as commands, e.g. `summarize` instead of `fabric --pattern summarize`
@@ -591,6 +613,7 @@ Application Options:
--printsession= Print session
--readability Convert HTML input into a clean, readable view
--input-has-vars Apply variables to user input
--no-variable-replacement Disable pattern variable replacement
--dry-run Show what would be sent to the model without actually sending it
--serve Serve the Fabric Rest API
--serveOllama Serve the Fabric Rest API with ollama endpoints
@@ -626,10 +649,20 @@ Application Options:
--yt-dlp-args= Additional arguments to pass to yt-dlp (e.g. '--cookies-from-browser brave')
--thinking= Set reasoning/thinking level (e.g., off, low, medium, high, or
numeric tokens for Anthropic or Google Gemini)
--debug= Set debug level (0: off, 1: basic, 2: detailed, 3: trace)
Help Options:
-h, --help Show this help message
```
### Debug Levels
Use the `--debug` flag to control runtime logging:
- `0`: off (default)
- `1`: basic debug info
- `2`: detailed debugging
- `3`: trace level
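For example, to run a pattern with detailed diagnostics:
```bash
# Level 2 prints detailed debug output alongside the normal result
echo "some input" | fabric --debug=2 --pattern summarize
```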
## Our approach to prompting
Fabric _Patterns_ are different than most prompts you'll see.
@@ -639,7 +672,7 @@ Fabric _Patterns_ are different than most prompts you'll see.
Here's an example of a Fabric Pattern.
```bash
-https://github.com/danielmiessler/fabric/blob/main/patterns/extract_wisdom/system.md
+https://github.com/danielmiessler/Fabric/blob/main/data/patterns/extract_wisdom/system.md
```
<img width="1461" alt="pattern-example" src="https://github.com/danielmiessler/fabric/assets/50654/b910c551-9263-405f-9735-71ca69bbab6d">


@@ -1,3 +1,3 @@
package main
-var version = "v1.4.287"
+var version = "v1.4.299"

Binary file not shown.


@@ -59,6 +59,13 @@ _fabric_gemini_voices() {
compadd -X "Gemini TTS Voices:" ${voices}
}
_fabric_transcription_models() {
local -a models
local cmd=${words[1]}
models=(${(f)"$($cmd --list-transcription-models --shell-complete-list 2>/dev/null)"})
compadd -X "Transcription Models:" ${models}
}
_fabric() {
local curcontext="$curcontext" state line
typeset -A opt_args
@@ -107,6 +114,7 @@ _fabric() {
'(--printsession)--printsession[Print session]:session:_fabric_sessions' \
'(--readability)--readability[Convert HTML input into a clean, readable view]' \
'(--input-has-vars)--input-has-vars[Apply variables to user input]' \
'(--no-variable-replacement)--no-variable-replacement[Disable pattern variable replacement]' \
'(--dry-run)--dry-run[Show what would be sent to the model without actually sending it]' \
'(--serve)--serve[Serve the Fabric Rest API]' \
'(--serveOllama)--serveOllama[Serve the Fabric Rest API with ollama endpoints]' \
@@ -134,6 +142,10 @@ _fabric() {
'(--think-start-tag)--think-start-tag[Start tag for thinking sections (default: <think>)]:start tag:' \
'(--think-end-tag)--think-end-tag[End tag for thinking sections (default: </think>)]:end tag:' \
'(--disable-responses-api)--disable-responses-api[Disable OpenAI Responses API (default: false)]' \
'(--transcribe-file)--transcribe-file[Audio or video file to transcribe]:audio file:_files -g "*.mp3 *.mp4 *.mpeg *.mpga *.m4a *.wav *.webm"' \
'(--transcribe-model)--transcribe-model[Model to use for transcription (separate from chat model)]:transcribe model:_fabric_transcription_models' \
'(--split-media-file)--split-media-file[Split audio/video files larger than 25MB using ffmpeg]' \
'(--debug)--debug[Set debug level (0=off, 1=basic, 2=detailed, 3=trace)]:debug level:(0 1 2 3)' \
'(--notification)--notification[Send desktop notification when command completes]' \
'(--notification-command)--notification-command[Custom command to run for notifications]:notification command:' \
'(-h --help)'{-h,--help}'[Show this help message]' \


@@ -13,7 +13,7 @@ _fabric() {
_get_comp_words_by_ref -n : cur prev words cword
# Define all possible options/flags
-local opts="--pattern -p --variable -v --context -C --session --attachment -a --setup -S --temperature -t --topp -T --stream -s --presencepenalty -P --raw -r --frequencypenalty -F --listpatterns -l --listmodels -L --listcontexts -x --listsessions -X --updatepatterns -U --copy -c --model -m --vendor -V --modelContextLength --output -o --output-session --latest -n --changeDefaultModel -d --youtube -y --playlist --transcript --transcript-with-timestamps --comments --metadata --yt-dlp-args --language -g --scrape_url -u --scrape_question -q --seed -e --thinking --wipecontext -w --wipesession -W --printcontext --printsession --readability --input-has-vars --dry-run --serve --serveOllama --address --api-key --config --search --search-location --image-file --image-size --image-quality --image-compression --image-background --suppress-think --think-start-tag --think-end-tag --disable-responses-api --voice --list-gemini-voices --notification --notification-command --version --listextensions --addextension --rmextension --strategy --liststrategies --listvendors --shell-complete-list --help -h"
+local opts="--pattern -p --variable -v --context -C --session --attachment -a --setup -S --temperature -t --topp -T --stream -s --presencepenalty -P --raw -r --frequencypenalty -F --listpatterns -l --listmodels -L --listcontexts -x --listsessions -X --updatepatterns -U --copy -c --model -m --vendor -V --modelContextLength --output -o --output-session --latest -n --changeDefaultModel -d --youtube -y --playlist --transcript --transcript-with-timestamps --comments --metadata --yt-dlp-args --language -g --scrape_url -u --scrape_question -q --seed -e --thinking --wipecontext -w --wipesession -W --printcontext --printsession --readability --input-has-vars --no-variable-replacement --dry-run --serve --serveOllama --address --api-key --config --search --search-location --image-file --image-size --image-quality --image-compression --image-background --suppress-think --think-start-tag --think-end-tag --disable-responses-api --transcribe-file --transcribe-model --split-media-file --voice --list-gemini-voices --notification --notification-command --debug --version --listextensions --addextension --rmextension --strategy --liststrategies --listvendors --shell-complete-list --help -h"
# Helper function for dynamic completions
_fabric_get_list() {
@@ -74,8 +74,16 @@ _fabric() {
COMPREPLY=($(compgen -W "$(_fabric_get_list --list-gemini-voices)" -- "${cur}"))
return 0
;;
--transcribe-model)
COMPREPLY=($(compgen -W "$(_fabric_get_list --list-transcription-models)" -- "${cur}"))
return 0
;;
--debug)
COMPREPLY=($(compgen -W "0 1 2 3" -- "${cur}"))
return 0
;;
# Options requiring file/directory paths
--a | --attachment | -o | --output | --config | --addextension | --image-file)
+-a | --attachment | -o | --output | --config | --addextension | --image-file | --transcribe-file)
_filedir
return 0
;;


@@ -47,6 +47,11 @@ function __fabric_get_gemini_voices
$cmd --list-gemini-voices --shell-complete-list 2>/dev/null
end
function __fabric_get_transcription_models
set cmd (commandline -opc)[1]
$cmd --list-transcription-models --shell-complete-list 2>/dev/null
end
# Main completion function
function __fabric_register_completions
set cmd $argv[1]
@@ -92,6 +97,9 @@ function __fabric_register_completions
complete -c $cmd -l think-start-tag -d "Start tag for thinking sections (default: <think>)"
complete -c $cmd -l think-end-tag -d "End tag for thinking sections (default: </think>)"
complete -c $cmd -l voice -d "TTS voice name for supported models (e.g., Kore, Charon, Puck)" -a "(__fabric_get_gemini_voices)"
complete -c $cmd -l transcribe-file -d "Audio or video file to transcribe" -r -a "*.mp3 *.mp4 *.mpeg *.mpga *.m4a *.wav *.webm"
complete -c $cmd -l transcribe-model -d "Model to use for transcription (separate from chat model)" -a "(__fabric_get_transcription_models)"
complete -c $cmd -l debug -d "Set debug level (0=off, 1=basic, 2=detailed, 3=trace)" -a "0 1 2 3"
complete -c $cmd -l notification-command -d "Custom command to run for notifications (overrides built-in notifications)"
# Boolean flags (no arguments)
@@ -113,8 +121,9 @@ function __fabric_register_completions
complete -c $cmd -l metadata -d "Output video metadata"
complete -c $cmd -l yt-dlp-args -d "Additional arguments to pass to yt-dlp (e.g. '--cookies-from-browser brave')"
complete -c $cmd -l readability -d "Convert HTML input into a clean, readable view"
-complete -c $cmd -l input-has-vars -d "Apply variables to user input"
-complete -c $cmd -l dry-run -d "Show what would be sent to the model without actually sending it"
+complete -c $cmd -l input-has-vars -d "Apply variables to user input"
+complete -c $cmd -l no-variable-replacement -d "Disable pattern variable replacement"
+complete -c $cmd -l dry-run -d "Show what would be sent to the model without actually sending it"
complete -c $cmd -l search -d "Enable web search tool for supported models (Anthropic, OpenAI, Gemini)"
complete -c $cmd -l serve -d "Serve the Fabric Rest API"
complete -c $cmd -l serveOllama -d "Serve the Fabric Rest API with ollama endpoints"
@@ -126,6 +135,7 @@ function __fabric_register_completions
complete -c $cmd -l shell-complete-list -d "Output raw list without headers/formatting (for shell completion)"
complete -c $cmd -l suppress-think -d "Suppress text enclosed in thinking tags"
complete -c $cmd -l disable-responses-api -d "Disable OpenAI Responses API (default: false)"
complete -c $cmd -l split-media-file -d "Split audio/video files larger than 25MB using ffmpeg"
complete -c $cmd -l notification -d "Send desktop notification when command completes"
complete -c $cmd -s h -l help -d "Show this help message"
end

docs/CODE_OF_CONDUCT.md (new file, 26 lines)

@@ -0,0 +1,26 @@
# Code of Conduct
## Our Expectation
We expect all contributors and community members to act with basic human decency and common sense.
This project exists to help people augment their capabilities with AI, and we welcome contributions from anyone who shares this mission. We assume good faith and trust that everyone involved is here to build something valuable together.
## Guidelines
- **Be respectful**: Treat others as you'd want to be treated in a professional setting
- **Be constructive**: Focus on the work and help make the project better
- **Be collaborative**: We're all working toward the same goal - making Fabric more useful
- **Use good judgment**: If you're not sure whether something is appropriate, it probably isn't
## Reporting Issues
If someone is being genuinely disruptive or harmful, please email the maintainers directly. We'll address legitimate concerns promptly and fairly.
## Enforcement
Maintainers reserve the right to remove content and restrict access for anyone who consistently acts in bad faith or disrupts the community.
---
*This project assumes contributors are adults who can work together professionally. If you can't do that, this isn't the right place for you.*

docs/CONTRIBUTING.md (new file, 155 lines)

@@ -0,0 +1,155 @@
# Contributing to Fabric
Thanks for contributing to Fabric! Here's what you need to know to get started quickly.
## Quick Setup
### Prerequisites
- Go 1.24+ installed
- Git configured with your details
### Getting Started
```bash
# Clone and setup
git clone https://github.com/danielmiessler/fabric.git
cd fabric
go build -o fabric ./cmd/fabric
./fabric --setup
# Run tests
go test ./...
```
## Development Guidelines
### Code Style
- Follow standard Go conventions (`gofmt`, `golint`)
- Use meaningful variable and function names
- Write tests for new functionality
- Keep functions focused and small
### Commit Messages
Use descriptive commit messages:
```text
feat: add new pattern for code analysis
fix: resolve OAuth token refresh issue
docs: update installation instructions
```
### Project Structure
- `cmd/` - Executable commands
- `internal/` - Private application code
- `data/patterns/` - AI patterns
- `docs/` - Documentation
## Pull Request Process
### Changelog Generation (REQUIRED)
Before submitting your PR, generate a changelog entry:
```bash
cd cmd/generate_changelog
go build -o generate_changelog .
./generate_changelog --incoming-pr YOUR_PR_NUMBER
```
**Requirements:**
- PR must be open and mergeable
- Working directory must be clean
- GitHub token available (GITHUB_TOKEN env var)
**Optional flags:**
- `--ai-summarize` - Enhanced AI-generated summaries
- `--push` - Auto-push the changelog commit
### PR Guidelines
1. Fork the repository
2. Create a feature branch
3. Make your changes
4. Write/update tests
5. Generate changelog entry (see above)
6. Submit PR with clear description
### Review Process
- PRs require maintainer review
- Address feedback promptly
- Keep PRs focused on single features/fixes
- Update changelog if you make significant changes
## Testing
### Run Tests
```bash
# All tests
go test ./...
# Specific package
go test ./internal/cli
# With coverage
go test -cover ./...
```
### Test Requirements
- Unit tests for core functionality
- Integration tests for external dependencies
- Examples in documentation
## Patterns
### Creating Patterns
Patterns go in `data/patterns/[pattern-name]/system.md`:
```markdown
# IDENTITY and PURPOSE
You are an expert at...
# STEPS
- Step 1
- Step 2
# OUTPUT
- Output format requirements
# EXAMPLE
Example output here
```
### Pattern Guidelines
- Use clear, actionable language
- Provide specific output formats
- Include examples when helpful
- Test with multiple AI providers
## Documentation
- Update README.md for new features
- Add docs to `docs/` for complex features
- Include usage examples
- Keep documentation current
## Getting Help
- Check existing issues first
- Ask questions in discussions
- Tag maintainers for urgent issues
- Be patient - maintainers are volunteers
## License
By contributing, you agree your contributions will be licensed under the MIT License.

docs/README.md (new file, 88 lines)

@@ -0,0 +1,88 @@
# Fabric Documentation
Welcome to the Fabric documentation! This directory contains detailed guides and technical documentation for various features and components of Fabric.
## 📚 Available Documentation
### Core Features
**[Automated-Changelog-Usage.md](./Automated-Changelog-Usage.md)**
Complete guide for developers on using the automated changelog system. Covers the workflow for generating PR changelog entries during development, including setup, validation, and CI/CD integration.
**[YouTube-Processing.md](./YouTube-Processing.md)**
Comprehensive guide for processing YouTube videos and playlists with Fabric. Covers transcript extraction, comment processing, metadata retrieval, and advanced yt-dlp configurations.
**[Using-Speech-To-Text.md](./Using-Speech-To-Text.md)**
Documentation for Fabric's speech-to-text capabilities using OpenAI's Whisper models. Learn how to transcribe audio and video files and process them through Fabric patterns.
### User Interface & Experience
**[Desktop-Notifications.md](./Desktop-Notifications.md)**
Guide to setting up desktop notifications for Fabric commands. Useful for long-running tasks and multitasking scenarios with cross-platform notification support.
**[Shell-Completions.md](./Shell-Completions.md)**
Instructions for setting up intelligent tab completion for Fabric in Zsh, Bash, and Fish shells. Includes automated installation and manual setup options.
**[Gemini-TTS.md](./Gemini-TTS.md)**
Complete guide for using Google Gemini's text-to-speech features with Fabric. Covers voice selection, audio generation, and integration with Fabric patterns.
### Development & Architecture
**[Automated-ChangeLog.md](./Automated-ChangeLog.md)**
Technical documentation outlining the automated CHANGELOG system architecture for CI/CD integration. Details the infrastructure and workflow for maintainers.
**[Project-Restructured.md](./Project-Restructured.md)**
Project restructuring plan and architectural decisions. Documents the transition to standard Go conventions and project organization improvements.
**[NOTES.md](./NOTES.md)**
Development notes on refactoring efforts, model management improvements, and architectural changes. Includes technical details on vendor and model abstraction.
### Audio Resources
**[voices/README.md](./voices/README.md)**
Index of Gemini TTS voice samples demonstrating different AI voice characteristics available in Fabric.
## 🗂️ Additional Resources
### Configuration Files
- `./notification-config.yaml` - Example notification configuration
### Images
- `images/` - Screenshots and visual documentation assets
- `fabric-logo-gif.gif` - Animated Fabric logo
- `fabric-summarize.png` - Screenshot of summarization feature
- `svelte-preview.png` - Web interface preview
## 🚀 Quick Start
New to Fabric? Start with these essential docs:
1. **[../README.md](../README.md)** - Main project README with installation and basic usage
2. **[Shell-Completions.md](./Shell-Completions.md)** - Set up tab completion for better CLI experience
3. **[YouTube-Processing.md](./YouTube-Processing.md)** - Learn one of Fabric's most popular features
4. **[Desktop-Notifications.md](./Desktop-Notifications.md)** - Get notified when long tasks complete
## 🔧 For Contributors
Contributing to Fabric? These docs are essential:
1. **[./CONTRIBUTING.md](./CONTRIBUTING.md)** - Contribution guidelines and setup
2. **[Automated-Changelog-Usage.md](./Automated-Changelog-Usage.md)** - Required workflow for PR submissions
3. **[Project-Restructured.md](./Project-Restructured.md)** - Understanding project architecture
4. **[NOTES.md](./NOTES.md)** - Current development priorities and patterns
## 📝 Documentation Standards
When adding new documentation:
- Use clear, descriptive filenames
- Include practical examples and use cases
- Update this README index with your new docs
- Follow the established markdown formatting conventions
- Test all code examples before publication
---
*For general help and support, see [./SUPPORT.md](./SUPPORT.md)*

docs/SECURITY.md (new file, 158 lines)

@@ -0,0 +1,158 @@
# Security Policy
## Supported Versions
We aim to provide security updates for the latest version of Fabric.
We recommend always using the latest version of Fabric for security fixes and improvements.
## Reporting Security Vulnerabilities
**Please DO NOT report security vulnerabilities through public GitHub issues.**
### Preferred Reporting Method
Send security reports directly to: **<kayvan@sylvan.com>** and CC to the project maintainer at **<daniel@danielmiessler.com>**
### What to Include
Please provide the following information:
1. **Vulnerability Type**: What kind of security issue (e.g., injection, authentication bypass, etc.)
2. **Affected Components**: Which parts of Fabric are affected
3. **Impact Assessment**: What could an attacker accomplish
4. **Reproduction Steps**: Clear steps to reproduce the vulnerability
5. **Proposed Fix**: If you have suggestions for remediation
6. **Disclosure Timeline**: Your preferred timeline for public disclosure
### Example Report Format
```text
Subject: [SECURITY] Brief description of vulnerability
Vulnerability Type: SQL Injection
Affected Component: Pattern database queries
Impact: Potential data exposure
Severity: High
Reproduction Steps:
1. Navigate to...
2. Submit payload: ...
3. Observe...
Evidence:
[Screenshots, logs, or proof of concept]
Suggested Fix:
Use parameterized queries instead of string concatenation...
```
## Security Considerations
### API Keys and Secrets
- Never commit API keys to the repository
- Store secrets in environment variables or secure configuration
- Use the built-in setup process for key management
- Regularly rotate API keys
### Input Validation
- All user inputs are validated before processing
- Special attention to pattern definitions and user content
- URL validation for web scraping features
### AI Provider Integration
- Secure communication with AI providers (HTTPS/TLS)
- Token handling follows provider best practices
- No sensitive data logged or cached unencrypted
### Network Security
- Web server endpoints properly authenticated when required
- CORS policies appropriately configured
- Rate limiting implemented where necessary
## Vulnerability Response Process
1. **Report Received**: We'll acknowledge receipt within 24 hours
2. **Initial Assessment**: We'll evaluate severity and impact within 72 hours
3. **Investigation**: We'll investigate and develop fixes
4. **Fix Development**: We'll create and test patches
5. **Coordinated Disclosure**: We'll work with reporter on disclosure timeline
6. **Release**: We'll release patched version with security advisory
### Timeline Expectations
- **Critical**: 1-7 days
- **High**: 7-30 days
- **Medium**: 30-90 days
- **Low**: Next scheduled release
## Bug Bounty
We don't currently offer a formal bug bounty program, but we deeply appreciate security research and will:
- Acknowledge contributors in release notes
- Provide credit in security advisories
- Consider swag or small rewards for significant findings
## Security Best Practices for Users
### Installation
- Download Fabric only from official sources
- Verify checksums when available
- Keep installations up to date
### Configuration
- Use strong, unique API keys
- Don't share configuration files containing secrets
- Set appropriate file permissions on config directories
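For example, on Unix-like systems (a sketch assuming the default `~/.config/fabric` location):
```bash
# Restrict the Fabric config directory and .env file to your user only
chmod 700 ~/.config/fabric
chmod 600 ~/.config/fabric/.env
```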
### Usage
- Be cautious with patterns that process sensitive data
- Review AI provider terms for data handling
- Consider using local models for sensitive content
## Known Security Limitations
### AI Provider Dependencies
Fabric relies on external AI providers. Security depends partly on:
- Provider security practices
- Data transmission security
- Provider data handling policies
### Pattern Execution
Custom patterns could potentially:
- Process sensitive inputs inappropriately
- Generate outputs containing sensitive information
- Be used for adversarial prompt injection
**Recommendation**: Review patterns carefully, especially those from untrusted sources.
## Security Updates
Security updates are distributed through:
- GitHub Releases with security tags
- Security advisories on GitHub
- Project documentation updates
Subscribe to the repository to receive notifications about security updates.
## Contact
For non-security issues, please use GitHub issues.
For security concerns, email: **<kayvan@sylvan.com>** and CC to **<daniel@danielmiessler.com>**
---
*We take security seriously and appreciate the security research community's help in keeping Fabric secure.*

docs/SUPPORT.md (new file, 148 lines)

@@ -0,0 +1,148 @@
# Support
## Getting Help with Fabric
Need help with Fabric? Here are the best ways to get assistance:
## 📖 Documentation First
Before reaching out, check these resources:
- **[README.md](../README.md)** - Installation, usage, and examples
- **[docs/](./README.md)** - Detailed documentation
- **[Patterns](../data/patterns/)** - Browse available AI patterns
## 🐛 Bug Reports
Found a bug? Please create an issue:
**[Report a Bug](https://github.com/danielmiessler/fabric/issues/new?template=bug.yml)**
Include:
- Fabric version (`fabric --version`)
- Operating system
- Steps to reproduce
- Expected vs actual behavior
- Error messages/logs
## 💡 Feature Requests
Have an idea for Fabric? We'd love to hear it:
**[Request a Feature](https://github.com/danielmiessler/fabric/issues/new)**
Describe:
- What you want to achieve
- Why it would be useful
- How you envision it working
- Any alternatives you've considered
## 🤔 Questions & Discussions
For general questions, usage help, or community discussion:
**[GitHub Discussions](https://github.com/danielmiessler/fabric/discussions)**
Great for:
- "How do I...?" questions
- Sharing patterns you've created
- Getting community advice
- Feature brainstorming
## 🏷️ Issue Labels
When creating issues, maintainers will add appropriate labels:
- `bug` - Something isn't working
- `enhancement` - New feature request
- `documentation` - Documentation improvements
- `help wanted` - Community contributions welcome
- `good first issue` - Great for new contributors
- `question` - General questions
- `pattern` - Related to AI patterns
## 📋 Issue Templates
We provide templates to help you create detailed reports:
- **Bug Report** - Structured bug reporting
- **Feature Request** - Detailed feature proposals
- **Pattern Submission** - New pattern contributions
## 🔒 Security Issues
**DO NOT create public issues for security vulnerabilities.**
See our [Security Policy](./SECURITY.md) for proper reporting procedures.
## ⚡ Response Times
We're a community-driven project with volunteer maintainers:
- **Bugs**: We aim to acknowledge within 48 hours
- **Features**: Response time varies based on complexity
- **Questions**: Community often responds quickly
- **Security**: See security policy for timelines
## 🛠️ Self-Help Tips
Before creating an issue, try:
1. **Update Fabric**: `go install github.com/danielmiessler/fabric/cmd/fabric@latest`
2. **Check existing issues**: Someone might have the same problem
3. **Run setup**: `fabric --setup` can fix configuration issues
4. **Test minimal example**: Isolate the problem
## 🤝 Community Guidelines
When asking for help:
- Be specific and provide context
- Include relevant details and error messages
- Be patient - maintainers are volunteers
- Help others when you can
- Say thanks when someone helps you
## 📞 Emergency Contact
For urgent security issues only:
- Email: <security@fabric.ai> (if available)
- Maintainer: <daniel@danielmiessler.com>
## 🎯 What We Can Help With
**We can help with:**
- Installation and setup issues
- Usage questions and examples
- Bug reports and fixes
- Feature discussions
- Pattern creation guidance
- Integration questions
**We cannot help with:**
- Custom development for your specific use case
- Troubleshooting your specific AI provider issues
- General AI or programming tutorials
- Commercial support agreements
## 💪 Contributing Back
The best way to get help is to help others:
- Answer questions in discussions
- Improve documentation
- Share useful patterns
- Report bugs clearly
- Review pull requests
See our [Contributing Guide](./CONTRIBUTING.md) for details.
---
*Remember: We're all here to make Fabric better. Be kind, be helpful, and let's build something amazing together!*

docs/Using-Speech-To-Text.md (new file)

@@ -0,0 +1,139 @@
# Using Speech-To-Text (STT) with Fabric
Fabric supports speech-to-text transcription of audio and video files using OpenAI's transcription models. This feature allows you to convert spoken content into text that can then be processed through Fabric's patterns.
## Overview
The STT feature integrates OpenAI's Whisper and GPT-4o transcription models to convert audio/video files into text. The transcribed text is automatically passed as input to your chosen pattern or chat session.
## Requirements
- OpenAI API key configured in Fabric
- For files larger than 25MB: `ffmpeg` installed on your system
- Supported audio/video formats: `.mp3`, `.mp4`, `.mpeg`, `.mpga`, `.m4a`, `.wav`, `.webm`
## Basic Usage
### Simple Transcription
To transcribe an audio file and send the result to a pattern:
```bash
fabric --transcribe-file /path/to/audio.mp3 --transcribe-model whisper-1 --pattern summarize
```
### Transcription Only
To just transcribe a file without applying a pattern:
```bash
fabric --transcribe-file /path/to/audio.mp3 --transcribe-model whisper-1
```
## Command Line Flags
### Required Flags
- `--transcribe-file`: Path to the audio or video file to transcribe
- `--transcribe-model`: Model to use for transcription (required when using transcription)
### Optional Flags
- `--split-media-file`: Automatically split files larger than 25MB into chunks using ffmpeg
## Available Models
You can list all available transcription models with:
```bash
fabric --list-transcription-models
```
Currently supported models:
- `whisper-1`: OpenAI's Whisper model
- `gpt-4o-mini-transcribe`: GPT-4o Mini transcription model
- `gpt-4o-transcribe`: GPT-4o transcription model
## File Size Handling
### Files Under 25MB
Files under the 25MB limit are processed directly without any special handling.
### Files Over 25MB
For files exceeding OpenAI's 25MB limit, you have two options:
1. **Manual handling**: The command will fail with an error message suggesting to use `--split-media-file`
2. **Automatic splitting**: Use the `--split-media-file` flag to automatically split the file into chunks
```bash
fabric --transcribe-file large_recording.mp4 --transcribe-model whisper-1 --split-media-file --pattern summarize
```
When splitting is enabled:
- Fabric uses `ffmpeg` to split the file into 10-minute segments initially (sketched below)
- If segments are still too large, it reduces the segment time by half repeatedly
- All segments are transcribed and the results are concatenated
- Temporary files are automatically cleaned up after processing
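The initial split is comparable to invoking ffmpeg's segment muxer by hand; a rough sketch with illustrative file names (Fabric chooses the segment length and cleans up the chunks itself):
```bash
# Cut a large recording into 600-second chunks without re-encoding
ffmpeg -i large_recording.mp4 -f segment -segment_time 600 -c copy chunk_%03d.mp4
```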
## Integration with Patterns
The transcribed text is seamlessly integrated into Fabric's workflow:
1. File is transcribed using the specified model
2. Transcribed text becomes the input message
3. Text is sent to the specified pattern or chat session
### Example Workflows
**Meeting transcription and summarization:**
```bash
fabric --transcribe-file meeting.mp4 --transcribe-model gpt-4o-transcribe --pattern summarize
```
**Interview analysis:**
```bash
fabric --transcribe-file interview.mp3 --transcribe-model whisper-1 --pattern extract_insights
```
**Large video file processing:**
```bash
fabric --transcribe-file presentation.mp4 --transcribe-model gpt-4o-transcribe --split-media-file --pattern create_summary
```
## Error Handling
Common error scenarios:
- **Unsupported format**: Only the listed audio/video formats are supported
- **File too large**: Use `--split-media-file` for files over 25MB
- **Missing ffmpeg**: Install ffmpeg for automatic file splitting
- **Invalid model**: Use `--list-transcription-models` to see available models
- **Missing model**: The `--transcribe-model` flag is required when using `--transcribe-file`
## Technical Details
### Implementation
- Transcription is handled in `internal/cli/transcribe.go:14`
- OpenAI-specific implementation in `internal/plugins/ai/openai/openai_audio.go:41`
- File splitting uses ffmpeg with configurable segment duration
- Supports any vendor that implements the `transcriber` interface
### Processing Pipeline
1. CLI validates file format and size
2. If file > 25MB and splitting enabled, file is split using ffmpeg
3. Each file/segment is sent to OpenAI's transcription API
4. Results are concatenated with spaces between segments
5. Transcribed text is passed as input to the main Fabric pipeline
### Vendor Support
Currently, only OpenAI is supported for transcription, but the interface allows for future expansion to other vendors that provide transcription capabilities.

go.mod

@@ -21,7 +21,7 @@ require (
github.com/joho/godotenv v1.5.1
github.com/kballard/go-shellquote v0.0.0-20180428030007-95032a82bc51
github.com/mattn/go-sqlite3 v1.14.28
-github.com/ollama/ollama v0.9.0
+github.com/ollama/ollama v0.11.7
github.com/openai/openai-go v1.8.2
github.com/otiai10/copy v1.14.1
github.com/pkg/errors v0.9.1

go.sum

@@ -180,8 +180,8 @@ github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd h1:TRLaZ9cD/w
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/reflect2 v1.0.2 h1:xBagoLtFs94CBntxluKeaWgTMpvLxC4ur3nMaC9Gz0M=
github.com/modern-go/reflect2 v1.0.2/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk=
github.com/ollama/ollama v0.9.0 h1:GvdGhi8G/QMnFrY0TMLDy1bXua+Ify8KTkFe4ZY/OZs=
github.com/ollama/ollama v0.9.0/go.mod h1:aio9yQ7nc4uwIbn6S0LkGEPgn8/9bNQLL1nHuH+OcD0=
github.com/ollama/ollama v0.11.7 h1:CuYjaJ/YEnvLDpJocJbbVdpdVFyGA/OP6lKFyzZD4dI=
github.com/ollama/ollama v0.11.7/go.mod h1:9+1//yWPsDE2u+l1a5mpaKrYw4VdnSsRU3ioq5BvMms=
github.com/onsi/gomega v1.34.1 h1:EUMJIKUjM8sKjYbtxQI9A4z2o+rruxnzNvpknOXie6k=
github.com/onsi/gomega v1.34.1/go.mod h1:kU1QgUvBDLXBJq618Xvm2LUX6rSAfRaFRTcdOeDLwwY=
github.com/openai/openai-go v1.8.2 h1:UqSkJ1vCOPUpz9Ka5tS0324EJFEuOvMc+lA/EarJWP8=


@@ -9,6 +9,7 @@ import (
"github.com/danielmiessler/fabric/internal/core"
"github.com/danielmiessler/fabric/internal/domain"
debuglog "github.com/danielmiessler/fabric/internal/log"
"github.com/danielmiessler/fabric/internal/plugins/db/fsdb"
"github.com/danielmiessler/fabric/internal/tools/notifications"
)
@@ -18,6 +19,19 @@ func handleChatProcessing(currentFlags *Flags, registry *core.PluginRegistry, me
if messageTools != "" {
currentFlags.AppendMessage(messageTools)
}
// Check for pattern-specific model via environment variable
if currentFlags.Pattern != "" && currentFlags.Model == "" {
envVar := "FABRIC_MODEL_" + strings.ToUpper(strings.ReplaceAll(currentFlags.Pattern, "-", "_"))
if modelSpec := os.Getenv(envVar); modelSpec != "" {
parts := strings.SplitN(modelSpec, "|", 2)
if len(parts) == 2 {
currentFlags.Vendor = parts[0]
currentFlags.Model = parts[1]
} else {
currentFlags.Model = modelSpec
}
}
}
var chatter *core.Chatter
if chatter, err = registry.GetChatter(currentFlags.Model, currentFlags.ModelContextLength,
@@ -122,7 +136,7 @@ func handleChatProcessing(currentFlags *Flags, registry *core.PluginRegistry, me
if chatOptions.Notification {
if err = sendNotification(chatOptions, chatReq.PatternName, result); err != nil {
// Log notification error but don't fail the main command
fmt.Fprintf(os.Stderr, "Failed to send notification: %v\n", err)
debuglog.Log("Failed to send notification: %v\n", err)
}
}
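The pattern-specific model hook added above can be exercised without any new flags; a hypothetical shell session (the pattern names and models are illustrative):

```bash
# Pattern "summarize" reads FABRIC_MODEL_SUMMARIZE when --model is not given.
# The optional "vendor|model" form also selects the vendor.
export FABRIC_MODEL_SUMMARIZE="OpenAI|gpt-4o-mini"
export FABRIC_MODEL_EXTRACT_WISDOM="gpt-4o"   # used by pattern "extract-wisdom"
fabric --pattern summarize < notes.txt
```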


@@ -3,10 +3,10 @@ package cli
import (
"encoding/json"
"fmt"
"os"
"strings"
"github.com/danielmiessler/fabric/internal/core"
debuglog "github.com/danielmiessler/fabric/internal/log"
"github.com/danielmiessler/fabric/internal/plugins/ai/openai"
"github.com/danielmiessler/fabric/internal/tools/converter"
"github.com/danielmiessler/fabric/internal/tools/youtube"
@@ -34,7 +34,7 @@ func Cli(version string) (err error) {
var registry, err2 = initializeFabric()
if err2 != nil {
if !currentFlags.Setup {
fmt.Fprintln(os.Stderr, err2.Error())
debuglog.Log("%s\n", err2.Error())
currentFlags.Setup = true
}
// Return early if registry is nil to prevent panics in subsequent handlers
@@ -74,6 +74,15 @@ func Cli(version string) (err error) {
return
}
// Handle transcription if specified
if currentFlags.TranscribeFile != "" {
var transcriptionMessage string
if transcriptionMessage, err = handleTranscription(currentFlags, registry); err != nil {
return
}
currentFlags.Message = AppendMessage(currentFlags.Message, transcriptionMessage)
}
// Process HTML readability if needed
if currentFlags.HtmlReadability {
if msg, cleanErr := converter.HtmlReadability(currentFlags.Message); cleanErr != nil {


@@ -13,6 +13,7 @@ import (
"github.com/danielmiessler/fabric/internal/chat"
"github.com/danielmiessler/fabric/internal/domain"
debuglog "github.com/danielmiessler/fabric/internal/log"
"github.com/danielmiessler/fabric/internal/util"
"github.com/jessevdk/go-flags"
"golang.org/x/text/language"
@@ -66,6 +67,7 @@ type Flags struct {
PrintSession string `long:"printsession" description:"Print session"`
HtmlReadability bool `long:"readability" description:"Convert HTML input into a clean, readable view"`
InputHasVars bool `long:"input-has-vars" description:"Apply variables to user input"`
NoVariableReplacement bool `long:"no-variable-replacement" description:"Disable pattern variable replacement"`
DryRun bool `long:"dry-run" description:"Show what would be sent to the model without actually sending it"`
Serve bool `long:"serve" description:"Serve the Fabric Rest API"`
ServeOllama bool `long:"serveOllama" description:"Serve the Fabric Rest API with ollama endpoints"`
@@ -91,23 +93,21 @@ type Flags struct {
ThinkStartTag string `long:"think-start-tag" yaml:"thinkStartTag" description:"Start tag for thinking sections" default:"<think>"`
ThinkEndTag string `long:"think-end-tag" yaml:"thinkEndTag" description:"End tag for thinking sections" default:"</think>"`
DisableResponsesAPI bool `long:"disable-responses-api" yaml:"disableResponsesAPI" description:"Disable OpenAI Responses API (default: false)"`
TranscribeFile string `long:"transcribe-file" yaml:"transcribeFile" description:"Audio or video file to transcribe"`
TranscribeModel string `long:"transcribe-model" yaml:"transcribeModel" description:"Model to use for transcription (separate from chat model)"`
SplitMediaFile bool `long:"split-media-file" yaml:"splitMediaFile" description:"Split audio/video files larger than 25MB using ffmpeg"`
Voice string `long:"voice" yaml:"voice" description:"TTS voice name for supported models (e.g., Kore, Charon, Puck)" default:"Kore"`
ListGeminiVoices bool `long:"list-gemini-voices" description:"List all available Gemini TTS voices"`
ListTranscriptionModels bool `long:"list-transcription-models" description:"List all available transcription models"`
Notification bool `long:"notification" yaml:"notification" description:"Send desktop notification when command completes"`
NotificationCommand string `long:"notification-command" yaml:"notificationCommand" description:"Custom command to run for notifications (overrides built-in notifications)"`
Thinking domain.ThinkingLevel `long:"thinking" yaml:"thinking" description:"Set reasoning/thinking level (e.g., off, low, medium, high, or numeric tokens for Anthropic or Google Gemini)"`
Debug int `long:"debug" description:"Set debug level (0=off, 1=basic, 2=detailed, 3=trace)" default:"0"`
}
var debug = false
func Debugf(format string, a ...interface{}) {
if debug {
fmt.Printf("DEBUG: "+format, a...)
}
}
// Init Initialize flags. returns a Flags struct and an error
func Init() (ret *Flags, err error) {
debuglog.SetLevel(debuglog.LevelFromInt(parseDebugLevel(os.Args[1:])))
// Track which yaml-configured flags were set on CLI
usedFlags := make(map[string]bool)
yamlArgsScan := os.Args[1:]
@@ -123,11 +123,11 @@ func Init() (ret *Flags, err error) {
shortTag := field.Tag.Get("short")
if longTag != "" {
flagToYamlTag[longTag] = yamlTag
Debugf("Mapped long flag %s to yaml tag %s\n", longTag, yamlTag)
debuglog.Debug(debuglog.Detailed, "Mapped long flag %s to yaml tag %s\n", longTag, yamlTag)
}
if shortTag != "" {
flagToYamlTag[shortTag] = yamlTag
Debugf("Mapped short flag %s to yaml tag %s\n", shortTag, yamlTag)
debuglog.Debug(debuglog.Detailed, "Mapped short flag %s to yaml tag %s\n", shortTag, yamlTag)
}
}
}
@@ -139,7 +139,7 @@ func Init() (ret *Flags, err error) {
if flag != "" {
if yamlTag, exists := flagToYamlTag[flag]; exists {
usedFlags[yamlTag] = true
Debugf("CLI flag used: %s (yaml: %s)\n", flag, yamlTag)
debuglog.Debug(debuglog.Detailed, "CLI flag used: %s (yaml: %s)\n", flag, yamlTag)
}
}
}
@@ -151,6 +151,7 @@ func Init() (ret *Flags, err error) {
if args, err = parser.Parse(); err != nil {
return
}
debuglog.SetLevel(debuglog.LevelFromInt(ret.Debug))
// Check to see if a ~/.config/fabric/config.yaml config file exists (only when user didn't specify a config)
if ret.Config == "" {
@@ -158,7 +159,7 @@ func Init() (ret *Flags, err error) {
if defaultConfigPath, err := util.GetDefaultConfigPath(); err == nil && defaultConfigPath != "" {
ret.Config = defaultConfigPath
} else if err != nil {
Debugf("Could not determine default config path: %v\n", err)
debuglog.Debug(debuglog.Detailed, "Could not determine default config path: %v\n", err)
}
}
@@ -183,13 +184,13 @@ func Init() (ret *Flags, err error) {
if flagField.CanSet() {
if yamlField.Type() != flagField.Type() {
if err := assignWithConversion(flagField, yamlField); err != nil {
Debugf("Type conversion failed for %s: %v\n", yamlTag, err)
debuglog.Debug(debuglog.Detailed, "Type conversion failed for %s: %v\n", yamlTag, err)
continue
}
} else {
flagField.Set(yamlField)
}
Debugf("Applied YAML value for %s: %v\n", yamlTag, yamlField.Interface())
debuglog.Debug(debuglog.Detailed, "Applied YAML value for %s: %v\n", yamlTag, yamlField.Interface())
}
}
}
@@ -215,6 +216,22 @@ func Init() (ret *Flags, err error) {
return
}
func parseDebugLevel(args []string) int {
for i := 0; i < len(args); i++ {
arg := args[i]
if arg == "--debug" && i+1 < len(args) {
if lvl, err := strconv.Atoi(args[i+1]); err == nil {
return lvl
}
} else if strings.HasPrefix(arg, "--debug=") {
if lvl, err := strconv.Atoi(strings.TrimPrefix(arg, "--debug=")); err == nil {
return lvl
}
}
}
return 0
}
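Because the debug level is pre-scanned from `os.Args` before full flag parsing, both spellings work; for example:

```bash
fabric --debug 2 --pattern summarize < notes.txt
fabric --debug=3 --listmodels
```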
func extractFlag(arg string) string {
var flag string
if strings.HasPrefix(arg, "--") {
@@ -284,7 +301,7 @@ func loadYAMLConfig(configPath string) (*Flags, error) {
return nil, fmt.Errorf("error parsing config file: %w", err)
}
Debugf("Config: %v\n", config)
debuglog.Debug(debuglog.Detailed, "Config: %v\n", config)
return config, nil
}
@@ -460,13 +477,14 @@ func (o *Flags) BuildChatOptions() (ret *domain.ChatOptions, err error) {
func (o *Flags) BuildChatRequest(Meta string) (ret *domain.ChatRequest, err error) {
ret = &domain.ChatRequest{
ContextName: o.Context,
SessionName: o.Session,
PatternName: o.Pattern,
StrategyName: o.Strategy,
PatternVariables: o.PatternVariables,
InputHasVars: o.InputHasVars,
Meta: Meta,
ContextName: o.Context,
SessionName: o.Session,
PatternName: o.Pattern,
StrategyName: o.Strategy,
PatternVariables: o.PatternVariables,
InputHasVars: o.InputHasVars,
NoVariableReplacement: o.NoVariableReplacement,
Meta: Meta,
}
var message *chat.ChatCompletionMessage


@@ -5,6 +5,8 @@ import (
"os"
"strconv"
openai "github.com/openai/openai-go"
"github.com/danielmiessler/fabric/internal/core"
"github.com/danielmiessler/fabric/internal/plugins/ai"
"github.com/danielmiessler/fabric/internal/plugins/ai/gemini"
@@ -39,7 +41,7 @@ func handleListingCommands(currentFlags *Flags, fabricDb *fsdb.Db, registry *cor
if currentFlags.ShellCompleteOutput {
models.Print(true)
} else {
models.PrintWithVendor(false)
models.PrintWithVendor(false, registry.Defaults.Vendor.Value, registry.Defaults.Model.Value)
}
return true, nil
}
@@ -70,5 +72,30 @@ func handleListingCommands(currentFlags *Flags, fabricDb *fsdb.Db, registry *cor
return true, nil
}
if currentFlags.ListTranscriptionModels {
listTranscriptionModels(currentFlags.ShellCompleteOutput)
return true, nil
}
return false, nil
}
// listTranscriptionModels lists all available transcription models
func listTranscriptionModels(shellComplete bool) {
models := []string{
string(openai.AudioModelWhisper1),
string(openai.AudioModelGPT4oMiniTranscribe),
string(openai.AudioModelGPT4oTranscribe),
}
if shellComplete {
for _, model := range models {
fmt.Println(model)
}
} else {
fmt.Println("Available transcription models:")
for _, model := range models {
fmt.Printf(" %s\n", model)
}
}
}


@@ -7,6 +7,7 @@ import (
"strings"
"github.com/atotto/clipboard"
debuglog "github.com/danielmiessler/fabric/internal/log"
)
func CopyToClipboard(message string) (err error) {
@@ -30,7 +31,7 @@ func CreateOutputFile(message string, fileName string) (err error) {
if _, err = file.WriteString(message); err != nil {
err = fmt.Errorf("error writing to file: %v", err)
} else {
fmt.Fprintf(os.Stderr, "\n\n[Output also written to %s]\n", fileName)
debuglog.Log("\n\n[Output also written to %s]\n", fileName)
}
return
}


@@ -0,0 +1,35 @@
package cli
import (
"context"
"fmt"
"github.com/danielmiessler/fabric/internal/core"
)
type transcriber interface {
TranscribeFile(ctx context.Context, filePath, model string, split bool) (string, error)
}
func handleTranscription(flags *Flags, registry *core.PluginRegistry) (message string, err error) {
vendorName := flags.Vendor
if vendorName == "" {
vendorName = "OpenAI"
}
vendor, ok := registry.VendorManager.VendorsByName[vendorName]
if !ok {
return "", fmt.Errorf("vendor %s not configured", vendorName)
}
tr, ok := vendor.(transcriber)
if !ok {
return "", fmt.Errorf("vendor %s does not support audio transcription", vendorName)
}
model := flags.TranscribeModel
if model == "" {
return "", fmt.Errorf("transcription model is required (use --transcribe-model)")
}
if message, err = tr.TranscribeFile(context.Background(), flags.TranscribeFile, model, flags.SplitMediaFile); err != nil {
return
}
return
}


@@ -180,7 +180,7 @@ func (o *Chatter) BuildSession(request *domain.ChatRequest, raw bool) (session *
}
// Now we know request.Message is not nil, process template variables
if request.InputHasVars {
if request.InputHasVars && !request.NoVariableReplacement {
request.Message.Content, err = template.ApplyTemplate(request.Message.Content, request.PatternVariables, "")
if err != nil {
return nil, err
@@ -190,7 +190,12 @@ func (o *Chatter) BuildSession(request *domain.ChatRequest, raw bool) (session *
var patternContent string
inputUsed := false
if request.PatternName != "" {
pattern, err := o.db.Patterns.GetApplyVariables(request.PatternName, request.PatternVariables, request.Message.Content)
var pattern *fsdb.Pattern
if request.NoVariableReplacement {
pattern, err = o.db.Patterns.GetWithoutVariables(request.PatternName, request.Message.Content)
} else {
pattern, err = o.db.Patterns.GetApplyVariables(request.PatternName, request.PatternVariables, request.Message.Content)
}
if err != nil {
return nil, fmt.Errorf("could not get pattern %s: %v", request.PatternName, err)


@@ -10,6 +10,7 @@ import (
"strconv"
"strings"
debuglog "github.com/danielmiessler/fabric/internal/log"
"github.com/danielmiessler/fabric/internal/plugins/ai/anthropic"
"github.com/danielmiessler/fabric/internal/plugins/ai/azure"
"github.com/danielmiessler/fabric/internal/plugins/ai/bedrock"
@@ -20,7 +21,7 @@ import (
"github.com/danielmiessler/fabric/internal/plugins/ai/ollama"
"github.com/danielmiessler/fabric/internal/plugins/ai/openai"
"github.com/danielmiessler/fabric/internal/plugins/ai/openai_compatible"
"github.com/danielmiessler/fabric/internal/plugins/ai/perplexity" // Added Perplexity plugin
"github.com/danielmiessler/fabric/internal/plugins/ai/perplexity"
"github.com/danielmiessler/fabric/internal/plugins/strategy"
"github.com/samber/lo"
@@ -339,7 +340,7 @@ func (o *PluginRegistry) GetChatter(model string, modelContextLength int, vendor
} else {
availableVendors := models.FindGroupsByItem(model)
if len(availableVendors) > 1 {
fmt.Fprintf(os.Stderr, "Warning: multiple vendors provide model %s: %s. Using %s. Specify --vendor to select a vendor.\n", model, strings.Join(availableVendors, ", "), availableVendors[0])
debuglog.Log("Warning: multiple vendors provide model %s: %s. Using %s. Specify --vendor to select a vendor.\n", model, strings.Join(availableVendors, ", "), availableVendors[0])
}
ret.vendor = vendorManager.FindByName(models.FindGroupsByItemFirst(model))
}


@@ -10,6 +10,7 @@ import (
"github.com/danielmiessler/fabric/internal/chat"
"github.com/danielmiessler/fabric/internal/domain"
debuglog "github.com/danielmiessler/fabric/internal/log"
"github.com/danielmiessler/fabric/internal/plugins"
"github.com/danielmiessler/fabric/internal/plugins/ai"
"github.com/danielmiessler/fabric/internal/plugins/db/fsdb"
@@ -72,7 +73,12 @@ func TestGetChatter_WarnsOnAmbiguousModel(t *testing.T) {
r, w, _ := os.Pipe()
oldStderr := os.Stderr
os.Stderr = w
defer func() { os.Stderr = oldStderr }()
// Redirect log output to our pipe to capture unconditional log messages
debuglog.SetOutput(w)
defer func() {
os.Stderr = oldStderr
debuglog.SetOutput(oldStderr)
}()
chatter, err := registry.GetChatter("shared-model", 0, "", "", false, false)
w.Close()
@@ -81,8 +87,10 @@ func TestGetChatter_WarnsOnAmbiguousModel(t *testing.T) {
if err != nil {
t.Fatalf("GetChatter() error = %v", err)
}
if chatter.vendor.GetName() != "VendorA" {
t.Fatalf("expected vendor VendorA, got %s", chatter.vendor.GetName())
// Verify that one of the valid vendors was selected (don't care which one due to map iteration randomness)
vendorName := chatter.vendor.GetName()
if vendorName != "VendorA" && vendorName != "VendorB" {
t.Fatalf("expected vendor VendorA or VendorB, got %s", vendorName)
}
if !strings.Contains(string(warning), "multiple vendors provide model shared-model") {
t.Fatalf("expected warning about multiple vendors, got %q", string(warning))


@@ -13,15 +13,16 @@ const (
)
type ChatRequest struct {
ContextName string
SessionName string
PatternName string
PatternVariables map[string]string
Message *chat.ChatCompletionMessage
Language string
Meta string
InputHasVars bool
StrategyName string
ContextName string
SessionName string
PatternName string
PatternVariables map[string]string
Message *chat.ChatCompletionMessage
Language string
Meta string
InputHasVars bool
NoVariableReplacement bool
StrategyName string
}
type ChatOptions struct {

internal/log/log.go

@@ -0,0 +1,78 @@
package log
import (
"fmt"
"io"
"os"
"sync"
)
// Level represents the debug verbosity.
type Level int
const (
// Off disables all debug output.
Off Level = iota
// Basic provides minimal debugging information.
Basic
// Detailed provides more verbose debugging.
Detailed
// Trace is the most verbose level.
Trace
)
var (
mu sync.RWMutex
level Level = Off
output io.Writer = os.Stderr
)
// SetLevel sets the global debug level.
func SetLevel(l Level) {
mu.Lock()
level = l
mu.Unlock()
}
// LevelFromInt converts an int to a Level.
func LevelFromInt(i int) Level {
switch {
case i <= 0:
return Off
case i == 1:
return Basic
case i == 2:
return Detailed
case i >= 3:
return Trace
default:
return Off
}
}
// Debug writes a debug message if the global level permits.
func Debug(l Level, format string, a ...interface{}) {
mu.RLock()
current := level
w := output
mu.RUnlock()
if current >= l {
fmt.Fprintf(w, "DEBUG: "+format, a...)
}
}
// Log writes a message unconditionally to stderr.
// This is for important messages that should always be shown regardless of debug level.
func Log(format string, a ...interface{}) {
mu.RLock()
w := output
mu.RUnlock()
fmt.Fprintf(w, format, a...)
}
// SetOutput allows overriding the output destination for debug logs.
func SetOutput(w io.Writer) {
mu.Lock()
output = w
mu.Unlock()
}
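A minimal usage sketch for the new package (the call site is hypothetical; names are taken from the file above):

```go
package main

import (
	debuglog "github.com/danielmiessler/fabric/internal/log"
)

func main() {
	// Map a --debug integer (0-3) onto a Level and install it globally.
	debuglog.SetLevel(debuglog.LevelFromInt(2)) // Detailed

	// Printed only when the global level is at least Detailed.
	debuglog.Debug(debuglog.Detailed, "loaded %d patterns\n", 42)

	// Printed unconditionally, regardless of level.
	debuglog.Log("Warning: something user-visible\n")
}
```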


@@ -4,7 +4,6 @@ import (
"context"
"fmt"
"net/http"
"os"
"strconv"
"strings"
@@ -12,6 +11,7 @@ import (
"github.com/anthropics/anthropic-sdk-go/option"
"github.com/danielmiessler/fabric/internal/chat"
"github.com/danielmiessler/fabric/internal/domain"
debuglog "github.com/danielmiessler/fabric/internal/log"
"github.com/danielmiessler/fabric/internal/plugins"
"github.com/danielmiessler/fabric/internal/util"
)
@@ -195,7 +195,7 @@ func (an *Client) SendStream(
}
stream := an.client.Messages.NewStreaming(ctx, params, reqOpts...)
if stream.Err() != nil && len(betas) > 0 {
fmt.Fprintf(os.Stderr, "Anthropic beta feature %s failed: %v\n", strings.Join(betas, ","), stream.Err())
debuglog.Debug(debuglog.Basic, "Anthropic beta feature %s failed: %v\n", strings.Join(betas, ","), stream.Err())
stream = an.client.Messages.NewStreaming(ctx, params)
}
@@ -289,7 +289,7 @@ func (an *Client) Send(ctx context.Context, msgs []*chat.ChatCompletionMessage,
}
if message, err = an.client.Messages.New(ctx, params, reqOpts...); err != nil {
if len(betas) > 0 {
fmt.Fprintf(os.Stderr, "Anthropic beta feature %s failed: %v\n", strings.Join(betas, ","), err)
debuglog.Debug(debuglog.Basic, "Anthropic beta feature %s failed: %v\n", strings.Join(betas, ","), err)
if message, err = an.client.Messages.New(ctx, params); err != nil {
return
}


@@ -9,11 +9,11 @@ import (
"fmt"
"io"
"net/http"
"os"
"os/exec"
"strings"
"time"
debuglog "github.com/danielmiessler/fabric/internal/log"
"github.com/danielmiessler/fabric/internal/util"
"golang.org/x/oauth2"
)
@@ -77,7 +77,7 @@ func (t *OAuthTransport) getValidToken(tokenIdentifier string) (string, error) {
}
// If no token exists, run OAuth flow
if token == nil {
fmt.Fprintln(os.Stderr, "No OAuth token found, initiating authentication...")
debuglog.Log("No OAuth token found, initiating authentication...\n")
newAccessToken, err := RunOAuthFlow(tokenIdentifier)
if err != nil {
return "", fmt.Errorf("failed to authenticate: %w", err)
@@ -87,11 +87,11 @@ func (t *OAuthTransport) getValidToken(tokenIdentifier string) (string, error) {
// Check if token needs refresh (5 minute buffer)
if token.IsExpired(5) {
fmt.Fprintln(os.Stderr, "OAuth token expired, refreshing...")
debuglog.Log("OAuth token expired, refreshing...\n")
newAccessToken, err := RefreshToken(tokenIdentifier)
if err != nil {
// If refresh fails, try re-authentication
fmt.Fprintln(os.Stderr, "Token refresh failed, re-authenticating...")
debuglog.Log("Token refresh failed, re-authenticating...\n")
newAccessToken, err = RunOAuthFlow(tokenIdentifier)
if err != nil {
return "", fmt.Errorf("failed to refresh or re-authenticate: %w", err)
@@ -143,13 +143,13 @@ func RunOAuthFlow(tokenIdentifier string) (token string, err error) {
if err == nil && existingToken != nil {
// If token exists but is expired, try refreshing first
if existingToken.IsExpired(5) {
fmt.Fprintln(os.Stderr, "Found expired OAuth token, attempting refresh...")
debuglog.Log("Found expired OAuth token, attempting refresh...\n")
refreshedToken, refreshErr := RefreshToken(tokenIdentifier)
if refreshErr == nil {
fmt.Fprintln(os.Stderr, "Token refresh successful")
debuglog.Log("Token refresh successful\n")
return refreshedToken, nil
}
fmt.Fprintf(os.Stderr, "Token refresh failed (%v), proceeding with full OAuth flow...\n", refreshErr)
debuglog.Log("Token refresh failed (%v), proceeding with full OAuth flow...\n", refreshErr)
} else {
// Token exists and is still valid
return existingToken.AccessToken, nil
@@ -176,10 +176,10 @@ func RunOAuthFlow(tokenIdentifier string) (token string, err error) {
oauth2.SetAuthURLParam("state", verifier),
)
fmt.Fprintln(os.Stderr, "Open the following URL in your browser. Fabric would like to authorize:")
fmt.Fprintln(os.Stderr, authURL)
debuglog.Log("Open the following URL in your browser. Fabric would like to authorize:\n")
debuglog.Log("%s\n", authURL)
openBrowser(authURL)
fmt.Fprint(os.Stderr, "Paste the authorization code here: ")
debuglog.Log("Paste the authorization code here: ")
var code string
fmt.Scanln(&code)
parts := strings.SplitN(code, "#", 2)


@@ -18,7 +18,8 @@ type VendorsModels struct {
// PrintWithVendor prints models including their vendor on each line.
// When shellCompleteList is true, output is suitable for shell completion.
func (o *VendorsModels) PrintWithVendor(shellCompleteList bool) {
// Default vendor and model are highlighted with an asterisk.
func (o *VendorsModels) PrintWithVendor(shellCompleteList bool, defaultVendor, defaultModel string) {
if !shellCompleteList {
fmt.Printf("\n%v:\n", o.SelectionLabel)
}
@@ -42,7 +43,11 @@ func (o *VendorsModels) PrintWithVendor(shellCompleteList bool) {
if shellCompleteList {
fmt.Printf("%s|%s\n", groupItems.Group, item)
} else {
fmt.Printf("\t[%d]\t%s|%s\n", currentItemIndex, groupItems.Group, item)
mark := " "
if strings.EqualFold(groupItems.Group, defaultVendor) && strings.EqualFold(item, defaultModel) {
mark = " *"
}
fmt.Printf("%s\t[%d]\t%s|%s\n", mark, currentItemIndex, groupItems.Group, item)
}
}
}


@@ -1,6 +1,9 @@
package ai
import (
"io"
"os"
"strings"
"testing"
)
@@ -31,3 +34,23 @@ func TestFindVendorsByModel(t *testing.T) {
t.Fatalf("FindVendorsByModel() = %v, want %v", foundVendors, []string{"vendor1"})
}
}
func TestPrintWithVendorMarksDefault(t *testing.T) {
vendors := NewVendorsModels()
vendors.AddGroupItems("vendor1", []string{"model1"}...)
vendors.AddGroupItems("vendor2", []string{"model2"}...)
r, w, _ := os.Pipe()
oldStdout := os.Stdout
os.Stdout = w
vendors.PrintWithVendor(false, "vendor2", "model2")
w.Close()
os.Stdout = oldStdout
out, _ := io.ReadAll(r)
if !strings.Contains(string(out), " *\t[2]\tvendor2|model2") {
t.Fatalf("default model not marked: %s", out)
}
}
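Based on the new format string, the human-readable listing looks roughly like this sketch (header line omitted; the asterisk marks the configured default vendor and model):

```text
 	[1]	vendor1|model1
 *	[2]	vendor2|model2
```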


@@ -0,0 +1,153 @@
package openai
import (
"bytes"
"context"
"fmt"
"os"
"os/exec"
"path/filepath"
"slices"
"sort"
"strings"
debuglog "github.com/danielmiessler/fabric/internal/log"
openai "github.com/openai/openai-go"
)
// MaxAudioFileSize defines the maximum allowed size for audio uploads (25MB).
const MaxAudioFileSize int64 = 25 * 1024 * 1024
// AllowedTranscriptionModels lists the models supported for transcription.
var AllowedTranscriptionModels = []string{
string(openai.AudioModelWhisper1),
string(openai.AudioModelGPT4oMiniTranscribe),
string(openai.AudioModelGPT4oTranscribe),
}
// allowedAudioExtensions defines the supported input file extensions.
var allowedAudioExtensions = map[string]struct{}{
".mp3": {},
".mp4": {},
".mpeg": {},
".mpga": {},
".m4a": {},
".wav": {},
".webm": {},
}
// TranscribeFile transcribes the given audio file using the specified model. If the file
// exceeds the size limit, it can optionally be split into chunks using ffmpeg.
func (o *Client) TranscribeFile(ctx context.Context, filePath, model string, split bool) (string, error) {
if ctx == nil {
ctx = context.Background()
}
if !slices.Contains(AllowedTranscriptionModels, model) {
return "", fmt.Errorf("model '%s' is not supported for transcription", model)
}
ext := strings.ToLower(filepath.Ext(filePath))
if _, ok := allowedAudioExtensions[ext]; !ok {
return "", fmt.Errorf("unsupported audio format '%s'", ext)
}
info, err := os.Stat(filePath)
if err != nil {
return "", err
}
var files []string
var cleanup func()
if info.Size() > MaxAudioFileSize {
if !split {
return "", fmt.Errorf("file %s exceeds 25MB limit; use --split-media-file to enable automatic splitting", filePath)
}
debuglog.Log("File %s is larger than the size limit... breaking it up into chunks...\n", filePath)
if files, cleanup, err = splitAudioFile(filePath, ext, MaxAudioFileSize); err != nil {
return "", err
}
defer cleanup()
} else {
files = []string{filePath}
}
var builder strings.Builder
for i, f := range files {
debuglog.Log("Using model %s to transcribe part %d (file name: %s)...\n", model, i+1, f)
var chunk *os.File
if chunk, err = os.Open(f); err != nil {
return "", err
}
params := openai.AudioTranscriptionNewParams{
File: chunk,
Model: openai.AudioModel(model),
}
var resp *openai.Transcription
resp, err = o.ApiClient.Audio.Transcriptions.New(ctx, params)
chunk.Close()
if err != nil {
return "", err
}
if i > 0 {
builder.WriteString(" ")
}
builder.WriteString(resp.Text)
}
return builder.String(), nil
}
// splitAudioFile splits the source file into chunks smaller than maxSize using ffmpeg.
// It returns the list of chunk file paths and a cleanup function.
func splitAudioFile(src, ext string, maxSize int64) (files []string, cleanup func(), err error) {
if _, err = exec.LookPath("ffmpeg"); err != nil {
return nil, nil, fmt.Errorf("ffmpeg not found: please install it")
}
var dir string
if dir, err = os.MkdirTemp("", "fabric-audio-*"); err != nil {
return nil, nil, err
}
cleanup = func() { os.RemoveAll(dir) }
segmentTime := 600 // start with 10 minutes
for {
pattern := filepath.Join(dir, "chunk-%03d"+ext)
debuglog.Log("Running ffmpeg to split audio into %d-second chunks...\n", segmentTime)
cmd := exec.Command("ffmpeg", "-y", "-i", src, "-f", "segment", "-segment_time", fmt.Sprintf("%d", segmentTime), "-c", "copy", pattern)
var stderr bytes.Buffer
cmd.Stderr = &stderr
if err = cmd.Run(); err != nil {
return nil, cleanup, fmt.Errorf("ffmpeg failed: %v: %s", err, stderr.String())
}
if files, err = filepath.Glob(filepath.Join(dir, "chunk-*"+ext)); err != nil {
return nil, cleanup, err
}
sort.Strings(files)
tooBig := false
for _, f := range files {
var info os.FileInfo
if info, err = os.Stat(f); err != nil {
return nil, cleanup, err
}
if info.Size() > maxSize {
tooBig = true
break
}
}
if !tooBig {
return files, cleanup, nil
}
for _, f := range files {
_ = os.Remove(f)
}
if segmentTime <= 1 {
return nil, cleanup, fmt.Errorf("unable to split file into acceptable size chunks")
}
segmentTime /= 2
}
}


@@ -102,6 +102,11 @@ var ProviderMap = map[string]ProviderConfig{
BaseURL: "https://api.together.xyz/v1",
ImplementsResponses: false,
},
"Venice AI": {
Name: "Venice AI",
BaseURL: "https://api.venice.ai/api/v1",
ImplementsResponses: false,
},
}
// GetProviderByName returns the provider configuration for a given name with O(1) lookup


@@ -4,9 +4,10 @@ import (
"context"
"fmt"
"os"
"sync" // Added sync package
"sync"
"github.com/danielmiessler/fabric/internal/domain"
debuglog "github.com/danielmiessler/fabric/internal/log"
"github.com/danielmiessler/fabric/internal/plugins"
perplexity "github.com/sgaunet/perplexity-go/v2"
@@ -171,7 +172,7 @@ func (c *Client) SendStream(msgs []*chat.ChatCompletionMessage, opts *domain.Cha
if err != nil {
// Log error, can't send to string channel directly.
// Consider a mechanism to propagate this error if needed.
fmt.Fprintf(os.Stderr, "perplexity streaming error: %v\\n", err) // Corrected capitalization
debuglog.Log("perplexity streaming error: %v\n", err)
// If the error occurs during stream setup, the channel might not have been closed by the receiver loop.
// However, closing it here might cause a panic if the receiver loop also tries to close it.
// close(channel) // Caution: Uncommenting this may cause panic, as channel is closed in the receiver goroutine.


@@ -148,7 +148,6 @@ func (o *VendorsManager) setupVendorTo(vendor Vendor, configuredVendors map[stri
delete(configuredVendors, vendor.GetName())
fmt.Printf("[%v] skipped\n", vendor.GetName())
}
return
}
type modelResult struct {


@@ -31,6 +31,27 @@ type Pattern struct {
func (o *PatternsEntity) GetApplyVariables(
source string, variables map[string]string, input string) (pattern *Pattern, err error) {
if pattern, err = o.loadPattern(source); err != nil {
return
}
err = o.applyVariables(pattern, variables, input)
return
}
// GetWithoutVariables returns a pattern with only the {{input}} placeholder processed
// and skips template variable replacement
func (o *PatternsEntity) GetWithoutVariables(source, input string) (pattern *Pattern, err error) {
if pattern, err = o.loadPattern(source); err != nil {
return
}
o.applyInput(pattern, input)
return
}
func (o *PatternsEntity) loadPattern(source string) (pattern *Pattern, err error) {
// Determine if this is a file path
isFilePath := strings.HasPrefix(source, "\\") ||
strings.HasPrefix(source, "/") ||
@@ -39,8 +60,8 @@ func (o *PatternsEntity) GetApplyVariables(
if isFilePath {
// Resolve the file path using GetAbsolutePath
absPath, err := util.GetAbsolutePath(source)
if err != nil {
var absPath string
if absPath, err = util.GetAbsolutePath(source); err != nil {
return nil, fmt.Errorf("could not resolve file path: %v", err)
}
@@ -51,26 +72,27 @@ func (o *PatternsEntity) GetApplyVariables(
pattern, err = o.getFromDB(source)
}
if err != nil {
return
}
// Apply variables to the pattern
err = o.applyVariables(pattern, variables, input)
return
}
func (o *PatternsEntity) applyVariables(
pattern *Pattern, variables map[string]string, input string) (err error) {
// Ensure pattern has an {{input}} placeholder
// If not present, append it on a new line
func (o *PatternsEntity) ensureInput(pattern *Pattern) {
if !strings.Contains(pattern.Pattern, "{{input}}") {
if !strings.HasSuffix(pattern.Pattern, "\n") {
pattern.Pattern += "\n"
}
pattern.Pattern += "{{input}}"
}
}
func (o *PatternsEntity) applyInput(pattern *Pattern, input string) {
o.ensureInput(pattern)
pattern.Pattern = strings.ReplaceAll(pattern.Pattern, "{{input}}", input)
}
func (o *PatternsEntity) applyVariables(
pattern *Pattern, variables map[string]string, input string) (err error) {
o.ensureInput(pattern)
// Temporarily replace {{input}} with a sentinel token to protect it
// from recursive variable resolution


@@ -145,6 +145,22 @@ func TestGetApplyVariables(t *testing.T) {
}
}
func TestGetWithoutVariables(t *testing.T) {
entity, cleanup := setupTestPatternsEntity(t)
defer cleanup()
createTestPattern(t, entity, "test-pattern", "Prefix {{input}} {{roam}}")
result, err := entity.GetWithoutVariables("test-pattern", "hello")
require.NoError(t, err)
assert.Equal(t, "Prefix hello {{roam}}", result.Pattern)
createTestPattern(t, entity, "no-input", "Static content")
result, err = entity.GetWithoutVariables("no-input", "hi")
require.NoError(t, err)
assert.Equal(t, "Static content\nhi", result.Pattern)
}
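Putting the flag, the chatter logic, and this test together: with `--no-variable-replacement`, `{{input}}` is still substituted but `{{variable}}` tokens pass through untouched. A hypothetical invocation:

```bash
# With a pattern body of "Prefix {{input}} {{roam}}", the model receives
# "Prefix hello {{roam}}" instead of a variable-expanded prompt.
echo "hello" | fabric --pattern test-pattern --no-variable-replacement
```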
func TestPatternsEntity_Save(t *testing.T) {
entity, cleanup := setupTestPatternsEntity(t)
defer cleanup()


@@ -10,8 +10,9 @@ import (
"strings"
"time"
debuglog "github.com/danielmiessler/fabric/internal/log"
"gopkg.in/yaml.v3"
// Add this import
)
// ExtensionDefinition represents a single extension configuration
@@ -87,9 +88,7 @@ func NewExtensionRegistry(configDir string) *ExtensionRegistry {
r.ensureConfigDir()
if err := r.loadRegistry(); err != nil {
if Debug {
fmt.Printf("Warning: could not load extension registry: %v\n", err)
}
debuglog.Log("Warning: could not load extension registry: %v\n", err)
}
return r


@@ -6,6 +6,8 @@ import (
"path/filepath"
"regexp"
"strings"
debuglog "github.com/danielmiessler/fabric/internal/log"
)
var (
@@ -14,7 +16,6 @@ var (
filePlugin = &FilePlugin{}
fetchPlugin = &FetchPlugin{}
sysPlugin = &SysPlugin{}
Debug = false // Debug flag
)
var extensionManager *ExtensionManager
@@ -33,9 +34,7 @@ var pluginPattern = regexp.MustCompile(`\{\{plugin:([^:]+):([^:]+)(?::([^}]+))?\
var extensionPattern = regexp.MustCompile(`\{\{ext:([^:]+):([^:]+)(?::([^}]+))?\}\}`)
func debugf(format string, a ...interface{}) {
if Debug {
fmt.Printf(format, a...)
}
debuglog.Debug(debuglog.Trace, format, a...)
}
func ApplyTemplate(content string, variables map[string]string, input string) (string, error) {


@@ -7,6 +7,7 @@ import (
"sort"
"strings"
debuglog "github.com/danielmiessler/fabric/internal/log"
"github.com/danielmiessler/fabric/internal/plugins"
"github.com/danielmiessler/fabric/internal/plugins/db/fsdb"
"github.com/danielmiessler/fabric/internal/tools/githelper"
@@ -335,9 +336,9 @@ func (o *PatternsLoader) createUniquePatternsFile() (err error) {
patternNamesMap[entry.Name()] = true
}
}
fmt.Fprintf(os.Stderr, "📂 Also included patterns from custom directory: %s\n", o.Patterns.CustomPatternsDir)
debuglog.Log("📂 Also included patterns from custom directory: %s\n", o.Patterns.CustomPatternsDir)
} else {
fmt.Fprintf(os.Stderr, "Warning: Could not read custom patterns directory %s: %v\n", o.Patterns.CustomPatternsDir, customErr)
debuglog.Log("Warning: Could not read custom patterns directory %s: %v\n", o.Patterns.CustomPatternsDir, customErr)
}
}


@@ -181,7 +181,8 @@ func (o *YouTube) tryMethodYtDlpInternal(videoId string, language string, additi
if len(langMatch) > 2 {
langMatch = langMatch[:2]
}
args = append(args, "--sub-langs", langMatch)
langOpts := language + "," + langMatch + ".*," + langMatch
args = append(args, "--sub-langs", langOpts)
}
// Add user-provided arguments last so they take precedence
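For example (an inferred illustration): with `language` set to `en-GB`, the truncated match is `en`, so the subtitle preference expands to `--sub-langs en-GB,en.*,en`, preferring an exact match while still accepting any English variant as a fallback.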


@@ -224,8 +224,8 @@ schema = 3
version = "v1.0.2"
hash = "sha256-+W9EIW7okXIXjWEgOaMh58eLvBZ7OshW2EhaIpNLSBU="
[mod."github.com/ollama/ollama"]
version = "v0.9.0"
hash = "sha256-r2eU+kMG3tuJy2B43RXsfmeltzM9t05NEmNiJAW5qr4="
version = "v0.11.7"
hash = "sha256-3Wn1JWmil0aQQ2I/r398HbnUsi8ADoroqNyPziuxn/c="
[mod."github.com/openai/openai-go"]
version = "v1.8.2"
hash = "sha256-O8aV3zEj6o8kIlzlkYaTW4RzvwR3qNUBYiN8SuTM1R0="


@@ -1 +1 @@
"1.4.287"
"1.4.299"


@@ -1,116 +0,0 @@
# Docker Test Environment for API Configuration Fix
This directory contains a Docker-based testing setup for fixing the issue where Fabric calls Ollama and Bedrock APIs even when not configured. This addresses the problem where unconfigured services show error messages during model listing.
## Quick Start
```bash
# Run all tests
./scripts/docker-test/test-runner.sh
# Interactive mode - pick which test to run
./scripts/docker-test/test-runner.sh -i
# Run specific test case
./scripts/docker-test/test-runner.sh gemini-only
# Shell into test environment
./scripts/docker-test/test-runner.sh -s gemini-only
# Build image only (for development)
./scripts/docker-test/test-runner.sh -b
# Show help
./scripts/docker-test/test-runner.sh -h
```
## Test Cases
1. **no-config**: No APIs configured
2. **gemini-only**: Only Gemini configured (reproduces original issue #1195)
3. **openai-only**: Only OpenAI configured
4. **ollama-only**: Only Ollama configured
5. **bedrock-only**: Only Bedrock configured
6. **mixed**: Multiple APIs configured (Gemini + OpenAI + Ollama)
## Environment Files
Each test case has a corresponding environment file in `scripts/docker-test/env/`:
- `env.no-config` - Empty configuration
- `env.gemini-only` - Only Gemini API key
- `env.openai-only` - Only OpenAI API key
- `env.ollama-only` - Only Ollama URL
- `env.bedrock-only` - Only Bedrock configuration
- `env.mixed` - Multiple API configurations
These files are volume-mounted into the Docker container and persist changes made with `fabric -S`.
## Interactive Mode & Shell Access
The interactive mode (`-i`) provides several options:
```text
Available test cases:
1) No APIs configured (no-config)
2) Only Gemini configured (gemini-only)
3) Only OpenAI configured (openai-only)
4) Only Ollama configured (ollama-only)
5) Only Bedrock configured (bedrock-only)
6) Mixed configuration (mixed)
7) Run all tests
0) Exit
Add '!' after number to shell into test environment (e.g., '1!' to shell into no-config)
```
### Shell Mode
- Use `1!`, `2!`, etc. to shell into any test environment
- Run `fabric -S` to configure APIs interactively
- Run `fabric --listmodels` or `fabric -L` to test model listing
- Changes persist in the environment files
- Type `exit` to return to test runner
## Expected Results
**Before Fix:**
- `no-config` and `gemini-only` tests show Ollama connection errors
- Tests show Bedrock authentication errors when BEDROCK_AWS_REGION not set
- Error: `Ollama Get "http://localhost:11434/api/tags": dial tcp...`
- Error: `Bedrock failed to list foundation models...`
**After Fix:**
- Clean output with no error messages for unconfigured services
- Only configured services appear in model listings
- Ollama only initialized when `OLLAMA_API_URL` is set
- Bedrock only initialized when `BEDROCK_AWS_REGION` is set
## Implementation Details
- **Volume-mounted configs**: Environment files are mounted to `/home/testuser/.config/fabric/.env`
- **Persistent state**: Configuration changes survive between test runs
- **Single Docker image**: Built once from `scripts/docker-test/base/Dockerfile`, reused for all tests
- **Isolated environments**: Each test uses its own environment file
- **Cross-platform**: Works on macOS, Linux, and Windows with Docker
## Development Workflow
1. Make code changes to fix API initialization logic
2. Run `./scripts/docker-test/test-runner.sh no-config` to test the main issue
3. Use `./scripts/docker-test/test-runner.sh -i` for interactive testing
4. Shell into environments (`1!`, `2!`, etc.) to debug specific configurations
5. Run all tests before submitting PR: `./scripts/docker-test/test-runner.sh`
## Architecture
The fix involves:
1. **Ollama**: Override `IsConfigured()` method to check for `OLLAMA_API_URL` env var
2. **Bedrock**: Modify `hasAWSCredentials()` to require `BEDROCK_AWS_REGION`
3. **Plugin Registry**: Only initialize providers when properly configured
This prevents unnecessary API calls and eliminates confusing error messages for users.


@@ -1,30 +0,0 @@
FROM golang:1.24-alpine AS builder
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY ./cmd/fabric ./cmd/fabric
COPY ./internal ./internal
RUN go build -o fabric ./cmd/fabric
FROM alpine:latest
RUN apk --no-cache add ca-certificates
# Create a test user
RUN adduser -D -s /bin/sh testuser
# Switch to test user
USER testuser
WORKDIR /home/testuser
# Set environment variables for the test user
ENV HOME=/home/testuser
ENV USER=testuser
COPY --from=builder /app/fabric .
# Create fabric config directory and empty .env file
RUN mkdir -p .config/fabric && touch .config/fabric/.env
ENTRYPOINT ["./fabric"]


@@ -1,235 +0,0 @@
#!/usr/bin/env bash
set -e
# Get the directory where this script is located
top_dir="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
base_name="$(basename "$top_dir")"
cd "$top_dir"/../.. || exit 1
# Check if bash version supports associative arrays
if [[ ${BASH_VERSION%%.*} -lt 4 ]]; then
echo "This script requires bash 4.0 or later for associative arrays."
echo "Current version: $BASH_VERSION"
exit 1
fi
IMAGE_NAME="fabric-test-setup"
ENV_DIR="scripts/${base_name}/env"
# Test case descriptions
declare -A test_descriptions=(
["no-config"]="No APIs configured"
["gemini-only"]="Only Gemini configured (reproduces original issue)"
["openai-only"]="Only OpenAI configured"
["ollama-only"]="Only Ollama configured"
["bedrock-only"]="Only Bedrock configured"
["mixed"]="Mixed configuration (Gemini + OpenAI + Ollama)"
)
# Test case order for consistent display
test_order=("no-config" "gemini-only" "openai-only" "ollama-only" "bedrock-only" "mixed")
build_image() {
echo "=== Building Docker image ==="
docker build -f "${top_dir}/base/Dockerfile" -t "$IMAGE_NAME" .
echo
}
check_env_file() {
local test_name="$1"
local env_file="$ENV_DIR/env.$test_name"
if [[ ! -f "$env_file" ]]; then
echo "Error: Environment file not found: $env_file"
exit 1
fi
}
run_test() {
local test_name="$1"
local description="${test_descriptions[$test_name]}"
local env_file="$ENV_DIR/env.$test_name"
check_env_file "$test_name"
echo "===================="
echo "Test: $description"
echo "Config: $test_name"
echo "Env file: $env_file"
echo "===================="
echo "Running test..."
if docker run --rm \
-e HOME=/home/testuser \
-e USER=testuser \
-v "$(pwd)/$env_file:/home/testuser/.config/fabric/.env:ro" \
"$IMAGE_NAME" --listmodels 2>&1; then
echo "✅ Test completed"
else
echo "❌ Test failed"
fi
echo
}
shell_into_env() {
local test_name="$1"
local description="${test_descriptions[$test_name]}"
local env_file="$ENV_DIR/env.$test_name"
check_env_file "$test_name"
echo "===================="
echo "Shelling into: $description"
echo "Config: $test_name"
echo "Env file: $env_file"
echo "===================="
echo "You can now run 'fabric -S' to configure, or 'fabric --listmodels' or 'fabric -L' to test."
echo "Changes to .env will persist in $env_file"
echo "Type 'exit' to return to the test runner."
echo
docker run -it --rm \
-e HOME=/home/testuser \
-e USER=testuser \
-v "$(pwd)/$env_file:/home/testuser/.config/fabric/.env" \
--entrypoint=/bin/sh \
"$IMAGE_NAME"
}
interactive_mode() {
echo "=== Interactive Mode ==="
echo "Available test cases:"
echo
local i=1
local cases=()
for test_name in "${test_order[@]}"; do
echo "$i) ${test_descriptions[$test_name]} ($test_name)"
cases[i]="$test_name"
((i++))
done
echo "$i) Run all tests"
echo "0) Exit"
echo
echo "Add '!' after number to shell into test environment (e.g., '1!' to shell into no-config)"
echo
while true; do
read -r -p "Select test case (0-$i) [or 1!, etc. to shell into test environment]: " choice
# Check for shell mode (! suffix)
local shell_mode=false
if [[ "$choice" == *"!" ]]; then
shell_mode=true
choice="${choice%!}" # Remove the ! suffix
fi
if [[ "$choice" == "0" ]]; then
if [[ "$shell_mode" == true ]]; then
echo "Cannot shell into exit option."
continue
fi
echo "Exiting..."
exit 0
elif [[ "$choice" == "$i" ]]; then
if [[ "$shell_mode" == true ]]; then
echo "Cannot shell into 'run all tests' option."
continue
fi
echo "Running all tests..."
run_all_tests
break
elif [[ "$choice" -ge 1 && "$choice" -lt "$i" ]]; then
local selected_test="${cases[$choice]}"
if [[ "$shell_mode" == true ]]; then
echo "Shelling into: ${test_descriptions[$selected_test]}"
shell_into_env "$selected_test"
else
echo "Running: ${test_descriptions[$selected_test]}"
run_test "$selected_test"
fi
read -r -p "Continue testing? (y/n): " again
if [[ "$again" != "y" && "$again" != "Y" ]]; then
break
fi
echo
else
echo "Invalid choice. Please select 0-$i (optionally with '!' for shell mode)."
fi
done
}
run_all_tests() {
echo "=== Testing PR #1645: Conditional API initialization ==="
echo
for test_name in "${test_order[@]}"; do
run_test "$test_name"
done
echo "=== Test run complete ==="
echo "Review the output above to check:"
echo "1. No Ollama connection errors when OLLAMA_URL not set"
echo "2. No Bedrock authentication errors when BEDROCK_AWS_REGION not set"
echo "3. Only configured services appear in model listings"
}
show_help() {
echo "Usage: $0 [OPTIONS] [TEST_CASE]"
echo
echo "Test PR #1645 conditional API initialization"
echo
echo "Options:"
echo " -h, --help Show this help message"
echo " -i, --interactive Run in interactive mode"
echo " -b, --build-only Build image only, don't run tests"
echo " -s, --shell TEST Shell into test environment"
echo
echo "Test cases:"
for test_name in "${test_order[@]}"; do
echo " $test_name: ${test_descriptions[$test_name]}"
done
echo
echo "Examples:"
echo " $0 # Run all tests"
echo " $0 -i # Interactive mode"
echo " $0 gemini-only # Run specific test"
echo " $0 -s gemini-only # Shell into gemini-only environment"
echo " $0 -b # Build image only"
echo
echo "Environment files are located in $ENV_DIR/ and can be edited directly."
}
# Parse command line arguments
if [[ $# -eq 0 ]]; then
build_image
run_all_tests
elif [[ "$1" == "-h" || "$1" == "--help" ]]; then
show_help
elif [[ "$1" == "-i" || "$1" == "--interactive" ]]; then
build_image
interactive_mode
elif [[ "$1" == "-b" || "$1" == "--build-only" ]]; then
build_image
elif [[ "$1" == "-s" || "$1" == "--shell" ]]; then
if [[ -z "$2" ]]; then
echo "Error: -s/--shell requires a test case name"
echo "Use -h for help."
exit 1
fi
if [[ -z "${test_descriptions[$2]}" ]]; then
echo "Error: Unknown test case: $2"
echo "Use -h for help."
exit 1
fi
build_image
shell_into_env "$2"
elif [[ -n "${test_descriptions[$1]}" ]]; then
build_image
run_test "$1"
else
echo "Unknown test case or option: $1"
echo "Use -h for help."
exit 1
fi


@@ -1,41 +1,26 @@
# Use official golang image as builder
FROM golang:1.24.2-alpine AS builder
# syntax=docker/dockerfile:1
# Set working directory
WORKDIR /app
FROM golang:1.24-alpine AS builder
WORKDIR /src
# Install build dependencies
RUN apk add --no-cache git
# Copy go mod and sum files
COPY go.mod go.sum ./
# Download dependencies
RUN go mod download
# Copy source code
COPY . .
# Build the application
RUN CGO_ENABLED=0 GOOS=linux go build -o fabric ./cmd/fabric
RUN CGO_ENABLED=0 GOOS=linux go build -ldflags="-s -w" -o /fabric ./cmd/fabric
# Use scratch as final base image
FROM alpine:latest
# Copy the binary from builder
COPY --from=builder /app/fabric /fabric
RUN apk add --no-cache ca-certificates \
&& mkdir -p /root/.config/fabric
# Copy patterns directory
COPY patterns /patterns
COPY --from=builder /fabric /usr/local/bin/fabric
# Ensure clean config directory and copy ENV file
RUN rm -rf /root/.config/fabric && \
mkdir -p /root/.config/fabric
COPY ENV /root/.config/fabric/.env
# Add debug commands
RUN ls -la /root/.config/fabric/
# Expose port 8080
EXPOSE 8080
# Run the binary with debug output
ENTRYPOINT ["/fabric"]
CMD ["--serve"]
ENTRYPOINT ["fabric"]


@@ -1,40 +1,48 @@
# Docker Deployment
# Fabric Docker Image
This directory contains Docker configuration files for running Fabric in containers.
This directory provides a simple Docker setup for running the [Fabric](https://github.com/danielmiessler/fabric) CLI.
## Files
## Build
- `Dockerfile` - Main Docker build configuration
- `docker-compose.yml` - Docker Compose stack configuration
- `start-docker.sh` - Helper script to start the stack
- `README.md` - This documentation
## Quick Start
Build the image from the repository root:
```bash
# Start the Docker stack
./start-docker.sh
# Or manually with docker-compose
docker-compose up -d
# View logs
docker-compose logs -f
# Stop the stack
docker-compose down
docker build -t fabric -f scripts/docker/Dockerfile .
```
## Building
## Persisting configuration
Fabric stores its configuration in `~/.config/fabric/.env`. Mount this path to keep your settings on the host.
### Using a host directory
```bash
# Build the Docker image
docker build -t fabric .
# Or use docker-compose
docker-compose build
mkdir -p $HOME/.fabric-config
# Run setup to create the .env and download patterns
docker run --rm -it -v $HOME/.fabric-config:/root/.config/fabric fabric --setup
```
## Configuration
Subsequent runs can reuse the same directory:
Make sure to configure your environment variables and API keys before running the Docker stack. See the main README.md for setup instructions.
```bash
docker run --rm -it -v $HOME/.fabric-config:/root/.config/fabric fabric -p your-pattern
```
### Mounting a single .env file
If you only want to persist the `.env` file:
```bash
# assuming .env exists in the current directory
docker run --rm -it -v $PWD/.env:/root/.config/fabric/.env fabric -p your-pattern
```
## Running the server
Expose port 8080 to use Fabric's REST API:
```bash
docker run --rm -it -p 8080:8080 -v $HOME/.fabric-config:/root/.config/fabric fabric --serve
```
The API will be available at `http://localhost:8080`.


@@ -1,11 +0,0 @@
version: '3.8'
services:
fabric-api:
build: .
ports:
- "8080:8080"
volumes:
- ./ENV:/root/.config/fabric/.env:ro
environment:
- GIN_MODE=release


@@ -1,11 +0,0 @@
#!/bin/bash
# Helper script to start the Fabric Docker stack
echo "Starting Fabric Docker stack..."
cd "$(dirname "$0")"
docker-compose up -d
echo "Fabric is now running!"
echo "Check logs with: docker-compose logs -f"
echo "Stop with: docker-compose down"