Compare commits

...

63 Commits

Author SHA1 Message Date
github-actions[bot]
fd40778472 Update version to v1.4.258 and commit 2025-07-17 22:28:08 +00:00
Kayvan Sylvan
bc1641a68c Merge pull request #1629 from ksylvan/0717-ensure-envFile
Create Default (empty) .env in ~/.config/fabric on Demand
2025-07-17 15:26:37 -07:00
Kayvan Sylvan
5cf15d22d3 chore: define constants for file and directory permissions 2025-07-17 15:14:15 -07:00
Kayvan Sylvan
2b2a25daaa chore: improve error handling in ensureEnvFile function 2025-07-17 15:00:26 -07:00
Kayvan Sylvan
75a7f25642 refactor: improve error handling and permissions in ensureEnvFile 2025-07-17 14:46:03 -07:00
Kayvan Sylvan
8bab58f225 feat: add startup check to initialize config and .env file
### CHANGES
- Introduce ensureEnvFile function to create ~/.config/fabric/.env if missing.
- Add directory creation for config path in ensureEnvFile.
- Integrate setup flag in CLI to call ensureEnvFile on demand.
- Handle errors for home directory detection and file operations.
2025-07-17 14:27:19 -07:00
Kayvan Sylvan
8ec006e02c docs: Update README and CHANGELOG after v1.4.257 2025-07-17 11:15:24 -07:00
github-actions[bot]
2508dc6397 Update version to v1.4.257 and commit 2025-07-17 18:03:45 +00:00
Kayvan Sylvan
7670df35ad Merge pull request #1628 from ksylvan/0717-give-users-who-use-llama-server-ability-to-say-dont-use-responses-api
Introduce CLI Flag to Disable OpenAI Responses API
2025-07-17 11:02:18 -07:00
Kayvan Sylvan
3b9782f942 feat: add disable-responses-api flag for OpenAI compatibility
## CHANGES

- Add disable-responses-api flag to CLI completions
- Update zsh completion with new API flag
- Update bash completion options list
- Add fish shell completion for API flag
- Add testpattern to VSCode spell checker dictionary
- Configure disableResponsesAPI in example YAML config
- Enable flag for llama-server compatibility
2025-07-17 10:47:35 -07:00
Kayvan Sylvan
3fca3489fb feat: add OpenAI Responses API configuration control via CLI flag
## CHANGES

- Add `--disable-responses-api` CLI flag for OpenAI control
- Implement `SetResponsesAPIEnabled` method in OpenAI client
- Configure OpenAI Responses API setting during CLI initialization
- Update default config path to `~/.config/fabric/config.yaml`
- Add OpenAI import to CLI package dependencies
2025-07-17 10:37:15 -07:00
Kayvan Sylvan
bb2d58eae0 docs: Update CHANGELOG after v1.4.256 2025-07-17 07:17:26 -07:00
github-actions[bot]
87df7dc383 Update version to v1.4.256 and commit 2025-07-17 14:14:50 +00:00
Kayvan Sylvan
1d69afa1c9 Merge pull request #1624 from ksylvan/0716-default-config-yaml
Feature: Add Automatic ~/.fabric.yaml Config Detection
2025-07-17 07:13:17 -07:00
Kayvan Sylvan
96c18b4c99 refactor: extract flag parsing logic into separate extractFlag function 2025-07-17 06:51:40 -07:00
Kayvan Sylvan
dd5173963b fix: improve error handling for default config path resolution
## CHANGES

- Update `GetDefaultConfigPath` to return error alongside path
- Add proper error handling in flags initialization
- Include debug logging for config path failures
- Move channel close to defer in dryrun SendStream
- Return wrapped errors with context messages
- Handle non-existent config as valid case
2025-07-17 06:18:55 -07:00
Kayvan Sylvan
da1c8ec979 fix: improve dry run output formatting and config path error handling
## CHANGES

- Remove leading newline from DryRunResponse constant
- Add newline separator in SendStream method output
- Add newline separator in Send method output
- Improve GetDefaultConfigPath error handling logic
- Add stderr error message for config access failures
- Return empty string when config file doesn't exist
2025-07-16 23:46:07 -07:00
Kayvan Sylvan
ac97f9984f chore: refactor constructRequest method for consistency
### CHANGES

- Rename `_ConstructRequest` to `constructRequest` for consistency
- Update `SendStream` to use `constructRequest`
- Update `Send` method to use `constructRequest`
2025-07-16 23:36:05 -07:00
Kayvan Sylvan
181b812eaf chore: remove unneeded parenthesis around function call 2025-07-16 23:31:17 -07:00
Kayvan Sylvan
fe94165d31 chore: update Send method to append request to DryRunResponse
### CHANGES

- Assign `_ConstructRequest` output to `request` variable
- Concatenate `request` with `DryRunResponse` in `Send` method
2025-07-16 23:28:48 -07:00
Kayvan Sylvan
16e92690aa feat: improve flag handling and add default config support
## CHANGES

- Map both short and long flags to yaml tags
- Add support for short flag parsing with dashes
- Implement default ~/.fabric.yaml config file detection
- Fix think block suppression in dry run mode
- Add think options to dry run output formatting
- Refactor dry run response construction into helper method
- Return actual response content from dry run client
- Create utility function for default config path resolution
2025-07-16 23:09:56 -07:00
Kayvan Sylvan
1c33799aa8 docs: Update CHANGELOG after v1.4.255 2025-07-16 14:44:32 -07:00
github-actions[bot]
9559e618c3 Update version to v1.4.255 and commit 2025-07-16 21:38:03 +00:00
Kayvan Sylvan
ac32e8e64a Merge branch 'danielmiessler:main' into main 2025-07-16 14:35:00 -07:00
Kayvan Sylvan
82340e6126 chore: add more paths to update-version-and-create-tag workflow to reduce unnecessary tagging 2025-07-16 14:34:29 -07:00
github-actions[bot]
5dec53726a Update version to v1.4.254 and commit 2025-07-16 21:21:12 +00:00
Kayvan Sylvan
b0eb136cbb Merge pull request #1621 from robertocarvajal/main
Adds generate code rules pattern
2025-07-16 14:19:46 -07:00
Roberto Carvajal
63f4370ff1 Adds generate code rules pattern
Signed-off-by: Roberto Carvajal <roberto.carvajal@gmail.com>
2025-07-16 11:15:55 -04:00
Kayvan Sylvan
b3cc2c737d docs: Update CHANGELOG after v1.4.253 2025-07-15 22:36:26 -07:00
github-actions[bot]
e43b4191e4 Update version to v1.4.253 and commit 2025-07-16 05:34:13 +00:00
Kayvan Sylvan
744c565120 Merge pull request #1620 from ksylvan/0715-thinking-flags-completions-scripts
Update Shell Completions for New Think-Block Suppression Options
2025-07-15 22:32:41 -07:00
Kayvan Sylvan
1473ac1465 feat: add 'think' tag options for text suppression and completion
### CHANGES

- Remove outdated update notes from README
- Add `--suppress-think` option to suppress 'think' tags
- Introduce `--think-start-tag` and `--think-end-tag` options
- Update bash completion with 'think' tag options
- Update fish completion with 'think' tag options
2025-07-15 22:26:12 -07:00
Kayvan Sylvan
c38c16f0db docs: Update CHANGELOG after v1.4.252 2025-07-15 22:08:43 -07:00
github-actions[bot]
a4b1db4193 Update version to v1.4.252 and commit 2025-07-16 05:05:47 +00:00
Kayvan Sylvan
d44bc19a84 Merge pull request #1619 from ksylvan/0715-suppress-think
Feature: Optional Hiding of Model Thinking Process with Configurable Tags
2025-07-15 22:04:12 -07:00
Kayvan Sylvan
a2e618e11c perf: add regex caching to StripThinkBlocks function for improved performance 2025-07-15 22:02:16 -07:00
Kayvan Sylvan
cb90379b30 feat: add suppress-think feature to filter AI reasoning output
## CHANGES

- Add suppress-think flag to hide thinking blocks
- Configure customizable start and end thinking tags
- Strip thinking content from final response output
- Update streaming logic to respect suppress-think setting
- Add YAML configuration support for thinking options
- Implement StripThinkBlocks utility function for content filtering
- Add comprehensive tests for thinking suppression functionality
2025-07-15 21:52:27 -07:00
Kayvan Sylvan
4868687746 chore: Update CHANGELOG after v1.4.251 2025-07-15 21:44:14 -07:00
github-actions[bot]
85780fee76 Update version to v1.4.251 and commit 2025-07-16 03:49:08 +00:00
Kayvan Sylvan
497b1ed682 Merge pull request #1618 from ksylvan/0715-refrain-from-version-bumping-when-only-changelog-cache-changes
Update GitHub Workflow to Ignore Additional File Paths
2025-07-15 20:47:35 -07:00
Kayvan Sylvan
135433b749 ci: update workflow to ignore additional paths during version updates
## CHANGES

- Add `data/strategies/**` to paths-ignore list
- Add `cmd/generate_changelog/*.db` to paths-ignore list
- Prevent workflow triggers from strategy data changes
- Prevent workflow triggers from changelog database files
2025-07-15 20:38:44 -07:00
github-actions[bot]
f185dedb37 Update version to v1.4.250 and commit 2025-07-16 02:31:31 +00:00
Kayvan Sylvan
c74a157dcf docs: Update changelog with v1.4.249 changes 2025-07-15 19:29:46 -07:00
github-actions[bot]
91a336e870 Update version to v1.4.249 and commit 2025-07-16 01:32:15 +00:00
Kayvan Sylvan
5212fbcc37 Merge pull request #1617 from ksylvan/0715-really-really-fix-changelog-pr-sync-issue
Improve PR Sync Logic for Changelog Generator
2025-07-15 18:30:42 -07:00
Kayvan Sylvan
6d8eb3d2b9 chore: add log message for missing PRs in cache 2025-07-15 18:27:06 -07:00
Kayvan Sylvan
d3bba5d026 feat: preserve PR numbers during version cache merges
### CHANGES

- Enhance changelog to associate PR numbers with version tags
- Improve PR number parsing with proper error handling
- Collect all PR numbers for commits between version tags
- Associate aggregated PR numbers with each version entry
- Update cached versions with newly found PR numbers
- Add check for missing PRs to trigger sync if needed
2025-07-15 18:12:07 -07:00
github-actions[bot]
699762b694 Update version to v1.4.248 and commit 2025-07-16 00:24:44 +00:00
Kayvan Sylvan
f2a6f1bd98 Merge pull request #1616 from ksylvan/0715-fix-changelog-cache-logic
Preserve PR Numbers During Version Cache Merges
2025-07-15 17:23:17 -07:00
Kayvan Sylvan
3176adf59b fix: improve PR number parsing with proper error handling 2025-07-15 17:18:45 -07:00
Kayvan Sylvan
7e29966622 feat: enhance changelog to correctly associate PR numbers with version tags
### CHANGES

- Collect all PR numbers for commits between version tags.
- Associate aggregated PR numbers with each version entry.
- Update cached versions with newly found PR numbers.
- Attribute all changes in a version to relevant PRs.
2025-07-15 17:06:40 -07:00
Kayvan Sylvan
0af0ab683d docs: reorganize v1.4.247 changelog to attribute changes to PR #1613 2025-07-15 07:03:04 -07:00
github-actions[bot]
e72e67de71 Update version to v1.4.247 and commit 2025-07-15 07:07:13 +00:00
Kayvan Sylvan
414b6174e7 Merge pull request #1613 from ksylvan/0714-fixes-for-custom-directory-unique-patterns-list-changelog-cache
Improve AI Summarization for Consistent Professional Changelog Entries
2025-07-15 00:05:44 -07:00
Kayvan Sylvan
f63e0dfc05 chore: update logging output to use os.Stderr 2025-07-15 00:00:09 -07:00
Kayvan Sylvan
4ef8578e47 fix: improve error handling in plugin registry configuration 2025-07-14 23:54:05 -07:00
Kayvan Sylvan
12ee690ae4 chore: remove debug logging and sync custom patterns directory configuration
## CHANGES

- Remove debug stderr logging from content summarization
- Add custom patterns directory to PatternsLoader configuration
- Ensure consistent patterns directory setup across components
- Clean up unnecessary console output during summarization
2025-07-14 23:19:05 -07:00
Kayvan Sylvan
cc378be485 feat: improve error handling in plugin_registry and patterns_loader
### CHANGES

- Adjust prompt formatting in `summarize.go`
- Add error handling for `CustomPatterns` configuration
- Enhance error messages in `patterns_loader`
- Check for patterns in multiple directories
2025-07-14 23:09:39 -07:00
Kayvan Sylvan
06fc8d8732 chore: reorder plugin configuration sequence in PluginRegistry.Configure method
## CHANGES

- Move CustomPatterns.Configure() before PatternsLoader.Configure()
- Adjust plugin initialization order in Configure method
- Ensure proper dependency sequence for pattern loading
2025-07-14 22:51:52 -07:00
Kayvan Sylvan
9e4ed8ecb3 fix: improve git walking termination and error handling in changelog generator
## CHANGES

- Add storer import for proper git iteration control
- Use storer.ErrStop instead of nil for commit iteration termination
- Handle storer.ErrStop as expected condition in git walker
- Update cache comment to clarify Unreleased version skipping
- Change custom patterns warning to stderr output
- Add storer to VSCode spell checker dictionary
2025-07-14 22:34:13 -07:00
Kayvan Sylvan
c369425708 chore: clean up changelog and add debug logging for content length validation 2025-07-14 20:53:59 -07:00
Kayvan Sylvan
cf074d3411 feat: enhance changelog generation with incremental caching and improved AI summarization
## CHANGES

- Add incremental processing for new Git tags since cache
- Implement `WalkHistorySinceTag` method for efficient history traversal
- Cache new versions and commits after AI processing
- Update AI summarization prompt for better release note formatting
- Remove conventional commit prefix stripping from commit messages
- Add custom patterns directory support to plugin registry
- Generate unique patterns file including custom directory patterns
- Improve session string formatting with switch statement
2025-07-14 15:12:57 -07:00
Kayvan Sylvan
47f75237ff docs: update README for GraphQL optimization and AI summary features
### CHANGES

- Detail GraphQL API usage for faster PR fetching
- Introduce AI-powered summaries via Fabric integration
- Explain content-based caching for AI summaries
- Document support for loading secrets from .env files
- Add usage examples for new AI summary feature
- Clarify project license is The MIT License
2025-07-13 21:57:11 -07:00
35 changed files with 1458 additions and 797 deletions

View File

@@ -7,6 +7,10 @@ on:
paths-ignore:
- "data/patterns/**"
- "**/*.md"
- "data/strategies/**"
- "cmd/generate_changelog/*.db"
- "scripts/pattern_descriptions/*.json"
- "web/static/data/pattern_descriptions.json"
permissions:
contents: write # Ensure the workflow has write permissions

View File

@@ -99,10 +99,12 @@
"seaborn",
"semgrep",
"sess",
"storer",
"Streamlit",
"stretchr",
"talkpanel",
"Telos",
"testpattern",
"Thacker",
"tidwall",
"topp",

File diff suppressed because it is too large.

View File

@@ -113,30 +113,9 @@ Keep in mind that many of these were recorded when Fabric was Python-based, so r
## Updates
> [!NOTE]
>
> July 4, 2025
>
> - **Web Search**: Fabric now supports web search for Anthropic and OpenAI models using the `--search` and `--search-location` flags. This replaces the previous plugin-based search, so you may want to remove the old `ANTHROPIC_WEB_SEARCH_TOOL_*` variables from your `~/.config/fabric/.env` file.
> - **Image Generation**: Fabric now has powerful image generation capabilities with OpenAI.
> - Generate images from text prompts and save them using `--image-file`.
> - Edit existing images by providing an input image with `--attachment`.
> - Control image `size`, `quality`, `compression`, and `background` with the new `--image-*` flags.
>
>June 17, 2025
>
>- Fabric now supports Perplexity AI. Configure it by using `fabric -S` to add your Perplexity AI API Key,
> and then try:
>
> ```bash
> fabric -m sonar-pro "What is the latest world news?"
> ```
>
>June 11, 2025
>
>- Fabric's YouTube transcription now needs `yt-dlp` to be installed. Make sure to install the latest
> version (2025.06.09 as of this note). The YouTube API key is only needed for comments (the `--comments` flag)
> and metadata extraction (the `--metadata` flag).
Fabric is evolving rapidly.
Stay current with the latest features by reviewing the [CHANGELOG](./CHANGELOG.md) for all recent changes.
## Philosophy
@@ -565,10 +544,13 @@ Application Options:
--image-compression= Compression level 0-100 for JPEG/WebP formats (default: not set)
--image-background= Background type: opaque, transparent (default: opaque, only for
PNG/WebP)
--suppress-think Suppress text enclosed in thinking tags
--think-start-tag= Start tag for thinking sections (default: <think>)
--think-end-tag= End tag for thinking sections (default: </think>)
--disable-responses-api Disable OpenAI Responses API (default: false)
Help Options:
-h, --help Show this help message
```
## Our approach to prompting

View File

@@ -1,3 +1,3 @@
package main
var version = "v1.4.246"
var version = "v1.4.258"

View File

@@ -11,8 +11,12 @@ A high-performance changelog generator for Git repositories that automatically c
- **Unreleased changes**: Tracks all commits since the last release
- **Concurrent processing**: Parallel GitHub API calls for improved performance
- **Flexible output**: Generate complete changelogs or target specific versions
- **Optimized PR fetching**: Batch fetches all merged PRs using GitHub Search API (drastically reduces API calls)
- **GraphQL optimization**: Ultra-fast PR fetching using GitHub GraphQL API (~5-10 calls vs 1000s)
- **Intelligent sync**: Automatically syncs new PRs every 24 hours or when missing PRs are detected
- **AI-powered summaries**: Optional Fabric integration for enhanced changelog summaries
- **Advanced caching**: Content-based change detection for AI summaries with hash comparison
- **Author type detection**: Distinguishes between users, bots, and organizations
- **Lightning-fast incremental updates**: SHA→PR mapping for instant git operations
## Installation
@@ -23,26 +27,31 @@ go install github.com/danielmiessler/fabric/cmd/generate_changelog@latest
## Usage
### Basic usage (generate complete changelog)
```bash
generate_changelog
```
### Save to file
```bash
generate_changelog -o CHANGELOG.md
```
### Generate for specific version
```bash
generate_changelog -v v1.4.244
```
### Limit to recent versions
```bash
generate_changelog -l 10
```
### Using GitHub token for private repos or higher rate limits
```bash
export GITHUB_TOKEN=your_token_here
generate_changelog
@@ -51,7 +60,18 @@ generate_changelog
generate_changelog --token your_token_here
```
### AI-enhanced summaries
```bash
# Enable AI summaries using Fabric
generate_changelog --ai-summarize
# Use custom model for AI summaries
FABRIC_CHANGELOG_SUMMARIZE_MODEL=claude-opus-4 generate_changelog --ai-summarize
```
### Cache management
```bash
# Rebuild cache from scratch
generate_changelog --rebuild-cache
@@ -80,6 +100,7 @@ generate_changelog --cache /path/to/cache.db
| `--rebuild-cache` | | Rebuild cache from scratch | false |
| `--force-pr-sync` | | Force a full PR sync from GitHub | false |
| `--token` | | GitHub API token | `$GITHUB_TOKEN` |
| `--ai-summarize` | | Generate AI-enhanced summaries using Fabric | false |
## Output Format
@@ -120,54 +141,118 @@ The generated changelog follows this structure:
- **Concurrent API calls**: Processes up to 10 GitHub API requests in parallel
- **Smart caching**: SQLite cache eliminates redundant API calls
- **Incremental updates**: Only processes new commits on subsequent runs
- **Batch PR fetching**: Uses GitHub Search API to fetch all merged PRs in minimal API calls
- **GraphQL optimization**: Uses GitHub GraphQL API to fetch all PR data in ~5-10 calls
- **AI-powered summaries**: Optional Fabric integration with intelligent caching
- **Content-based change detection**: AI summaries only regenerated when content changes
- **Lightning-fast git operations**: SHA→PR mapping stored in database for instant lookups
### Major Optimization: Batch PR Fetching
### Major Optimization: GraphQL + Advanced Caching
The tool has been optimized to drastically reduce GitHub API calls:
The tool has been optimized to drastically reduce GitHub API calls and improve performance:
**Previous approach**: Individual API calls for each PR (2 API calls per PR)
**Before**: Individual API calls for each PR (2 API calls per PR - one for PR details, one for commits)
- For a repo with 500 PRs: 1,000 API calls
**After**: Batch fetching using GitHub Search API
- For a repo with 500 PRs: ~10 API calls (search) + 500 API calls (details) = ~510 API calls
- **50% reduction in API calls!**
**Current approach**: GraphQL batch fetching with intelligent caching
- For a repo with 500 PRs: ~5-10 GraphQL calls (initial fetch) + 0 calls (subsequent runs with cache)
- **99%+ reduction in API calls after initial run!**
The optimization includes:
1. **Batch Search**: Uses GitHub's Search API to find all merged PRs in paginated batches
2. **Smart Caching**: Stores complete PR data and tracks last sync timestamp
3. **Incremental Sync**: Only fetches PRs merged after the last sync
1. **GraphQL Batch Fetch**: Uses GitHub's GraphQL API to fetch all merged PRs with commits in minimal calls
2. **Smart Caching**: Stores complete PR data, commits, and SHA mappings in SQLite
3. **Incremental Sync**: Only fetches PRs merged after the last sync timestamp
4. **Automatic Refresh**: PRs are synced every 24 hours or when missing PRs are detected
5. **Fallback Support**: If batch fetch fails, falls back to individual PR fetching
5. **AI Summary Caching**: Content-based change detection prevents unnecessary AI regeneration
6. **Fallback Support**: If GraphQL fails, falls back to REST API batch fetching
7. **Lightning Git Operations**: Pre-computed SHA→PR mappings for instant commit association
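The GraphQL batch fetch in step 1 can be pictured with a minimal sketch like the one below. The query shape targets GitHub's real GraphQL search API, but the `fetchMergedPRPage` helper and its wiring are illustrative assumptions, not the tool's actual implementation:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
	"os"
)

// Sketch of the batch-fetch idea: one search query returns up to 100 merged
// PRs, so a 500-PR repository needs only a handful of calls.
const prQuery = `
query($q: String!, $cursor: String) {
  search(query: $q, type: ISSUE, first: 100, after: $cursor) {
    pageInfo { hasNextPage endCursor }
    nodes { ... on PullRequest { number title mergedAt author { login } } }
  }
}`

// fetchMergedPRPage is a hypothetical helper, not the generator's real function.
func fetchMergedPRPage(token, repo string, cursor *string) ([]byte, error) {
	body, _ := json.Marshal(map[string]any{
		"query": prQuery,
		"variables": map[string]any{
			"q":      fmt.Sprintf("repo:%s is:pr is:merged", repo),
			"cursor": cursor, // nil marshals to null, i.e. "start at the first page"
		},
	})
	req, err := http.NewRequest("POST", "https://api.github.com/graphql", bytes.NewReader(body))
	if err != nil {
		return nil, err
	}
	req.Header.Set("Authorization", "bearer "+token)
	req.Header.Set("Content-Type", "application/json")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	var page bytes.Buffer
	_, err = page.ReadFrom(resp.Body)
	return page.Bytes(), err
}

func main() {
	raw, err := fetchMergedPRPage(os.Getenv("GITHUB_TOKEN"), "danielmiessler/fabric", nil)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println(len(raw), "bytes of PR data from a single call")
}
```

Paginating with `endCursor` until `hasNextPage` is false is what keeps the total at roughly 5-10 calls for the whole history.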
## Requirements
- Go 1.24+ (for installation from source)
- Git repository
- GitHub token (optional, for private repos or higher rate limits)
- Fabric CLI (optional, for AI-enhanced summaries)
## Authentication
The tool supports GitHub authentication via:
1. Environment variable: `export GITHUB_TOKEN=your_token`
2. Command line flag: `--token your_token`
3. `.env` file in the same directory as the binary
### Environment File Support
Create a `.env` file next to the `generate_changelog` binary:
```bash
GITHUB_TOKEN=your_github_token_here
FABRIC_CHANGELOG_SUMMARIZE_MODEL=claude-sonnet-4-20250514
```
The tool automatically loads `.env` files for convenient configuration management.
Without authentication, the tool is limited to 60 GitHub API requests per hour.
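A rough sketch of that resolution order — explicit `--token` flag, then `GITHUB_TOKEN`, then a `.env` file next to the binary — might look like this; the `resolveToken` helper and the use of `github.com/joho/godotenv` are assumptions for illustration, not the tool's actual code:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"

	"github.com/joho/godotenv" // assumed loader; any .env reader works for the sketch
)

// resolveToken shows one possible precedence: flag, then environment,
// then a .env file sitting next to the executable.
func resolveToken(flagToken string) string {
	if flagToken != "" {
		return flagToken
	}
	if tok := os.Getenv("GITHUB_TOKEN"); tok != "" {
		return tok
	}
	if exe, err := os.Executable(); err == nil {
		// Loading the .env file (if present) populates the process environment.
		_ = godotenv.Load(filepath.Join(filepath.Dir(exe), ".env"))
	}
	return os.Getenv("GITHUB_TOKEN")
}

func main() {
	if tok := resolveToken(""); tok == "" {
		fmt.Println("no token found: limited to 60 unauthenticated requests/hour")
	} else {
		fmt.Println("token resolved, using authenticated rate limits")
	}
}
```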
## Caching
The SQLite cache stores:
- Version information and commit associations
- Pull request details (title, body, commits, authors)
- Last processed commit SHA for incremental updates
- Last PR sync timestamp for intelligent refresh
- AI summaries with content-based change detection
- SHA→PR mappings for lightning-fast git operations
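In Go terms, the cached records above can be pictured roughly as follows; the struct and field names are illustrative, not the real SQLite schema:

```go
package cache

import "time"

// Illustrative shapes only — the actual tables and struct names may differ.
type CachedPR struct {
	Number  int
	Title   string
	Body    string
	Author  string
	Commits []string // commit SHAs belonging to the PR
}

type CachedVersion struct {
	Name        string // e.g. "v1.4.258" or "Unreleased"
	Date        time.Time
	CommitSHA   string
	PRNumbers   []int
	AISummary   string // regenerated only when the content hash changes
	ContentHash string
}

type CacheState struct {
	LastProcessedSHA string         // incremental git walking resumes here
	LastPRSync       time.Time      // drives the 24-hour refresh
	SHAToPR          map[string]int // SHA→PR mapping for instant commit association
}
```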
Cache benefits:
- Instant changelog regeneration
- Drastically reduced GitHub API usage (50%+ reduction)
- Drastically reduced GitHub API usage (99%+ reduction after initial run)
- Offline changelog generation (after initial cache build)
- Automatic PR data refresh every 24 hours
- Batch database transactions for better performance
- Content-aware AI summary regeneration
## AI-Enhanced Summaries
The tool can generate AI-powered summaries using Fabric for more polished, professional changelogs:
```bash
# Enable AI summarization
generate_changelog --ai-summarize
# Custom model (default: claude-sonnet-4-20250514)
FABRIC_CHANGELOG_SUMMARIZE_MODEL=claude-opus-4 generate_changelog --ai-summarize
```
### AI Summary Features
- **Content-based change detection**: AI summaries are only regenerated when version content changes
- **Intelligent caching**: Preserves existing summaries and only processes changed versions
- **Content hash comparison**: Uses SHA256 hashing to detect when "Unreleased" content changes
- **Automatic fallback**: Falls back to raw content if AI processing fails
- **Error detection**: Identifies and handles AI processing errors gracefully
- **Minimum content filtering**: Skips AI processing for very brief content (< 256 characters)
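The content-based change detection amounts to a hash comparison. A minimal sketch, with assumed function and parameter names (only `SummarizeVersionContent` appears in the diff above):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// needsAISummary sketches the idea: hash the version's raw content and only
// regenerate the AI summary when the hash differs from the one stored next to
// the cached summary. Names are illustrative.
func needsAISummary(content, cachedHash, cachedSummary string) (hash string, regenerate bool) {
	sum := sha256.Sum256([]byte(content))
	hash = hex.EncodeToString(sum[:])
	if cachedSummary != "" && hash == cachedHash {
		return hash, false // unchanged content: reuse the cached summary
	}
	return hash, true // new or changed content: call Fabric again
}

func main() {
	h, regen := needsAISummary("## Unreleased\n- fix: ...", "", "")
	fmt.Println(h[:8], regen) // no cached summary yet, so regenerate
}
```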
### AI Model Configuration
Set the model via environment variable:
```bash
export FABRIC_CHANGELOG_SUMMARIZE_MODEL=claude-opus-4
# or
export FABRIC_CHANGELOG_SUMMARIZE_MODEL=gpt-4
```
AI summaries are cached and only regenerated when:
- Version content changes (detected via hash comparison)
- No existing AI summary exists for the version
- Force rebuild is requested
## Contributing
@@ -175,4 +260,4 @@ This tool is part of the Fabric project. Contributions are welcome!
## License
Same as the Fabric project.
The MIT License. Same as the Fabric project.

Binary file not shown.

View File

@@ -82,8 +82,8 @@ func (g *Generator) collectData() error {
if cachedTag != "" {
// Get the current latest tag from git
currentTag, err := g.gitWalker.GetLatestTag()
if err == nil && currentTag == cachedTag {
// Same tag - load cached data and walk commits since tag for "Unreleased"
if err == nil {
// Load cached data - we can use it even if there are new tags
cachedVersions, err := g.cache.GetVersions()
if err == nil && len(cachedVersions) > 0 {
g.versions = cachedVersions
@@ -97,7 +97,30 @@ func (g *Generator) collectData() error {
}
}
// Walk commits since the latest tag to get new unreleased commits
// If we have new tags since cache, process the new versions only
if currentTag != cachedTag {
fmt.Fprintf(os.Stderr, "Processing new versions since %s...\n", cachedTag)
newVersions, err := g.gitWalker.WalkHistorySinceTag(cachedTag)
if err != nil {
fmt.Fprintf(os.Stderr, "Warning: Failed to walk history since tag %s: %v\n", cachedTag, err)
} else {
// Merge new versions into cached versions (only add if not already cached)
for name, version := range newVersions {
if name != "Unreleased" { // Handle Unreleased separately
if existingVersion, exists := g.versions[name]; !exists {
g.versions[name] = version
} else {
// Update existing version with new PR numbers if they're missing
if len(existingVersion.PRNumbers) == 0 && len(version.PRNumbers) > 0 {
existingVersion.PRNumbers = version.PRNumbers
}
}
}
}
}
}
// Always update Unreleased section with latest commits
unreleasedVersion, err := g.gitWalker.WalkCommitsSinceTag(currentTag)
if err != nil {
fmt.Fprintf(os.Stderr, "Warning: Failed to walk commits since tag %s: %v\n", currentTag, err)
@@ -110,6 +133,29 @@ func (g *Generator) collectData() error {
g.versions["Unreleased"] = unreleasedVersion
}
// Save any new versions to cache (after potential AI processing)
if currentTag != cachedTag {
for _, version := range g.versions {
// Skip versions that were already cached and Unreleased
if version.Name != "Unreleased" {
if err := g.cache.SaveVersion(version); err != nil {
fmt.Fprintf(os.Stderr, "Warning: Failed to save version to cache: %v\n", err)
}
for _, commit := range version.Commits {
if err := g.cache.SaveCommit(commit, version.Name); err != nil {
fmt.Fprintf(os.Stderr, "Warning: Failed to save commit to cache: %v\n", err)
}
}
}
}
// Update the last processed tag
if err := g.cache.SetLastProcessedTag(currentTag); err != nil {
fmt.Fprintf(os.Stderr, "Warning: Failed to update last processed tag: %v\n", err)
}
}
return nil
}
}
@@ -164,8 +210,26 @@ func (g *Generator) fetchPRs() error {
lastSync, _ = g.cache.GetLastPRSync()
}
// Check if we need to sync for missing PRs
missingPRs := false
for _, version := range g.versions {
for _, prNum := range version.PRNumbers {
if _, exists := g.prs[prNum]; !exists {
missingPRs = true
break
}
}
if missingPRs {
break
}
}
if missingPRs {
fmt.Fprintf(os.Stderr, "Full sync triggered due to missing PRs in cache.\n")
}
// If we have never synced or it's been more than 24 hours, do a full sync
needsSync := lastSync.IsZero() || time.Since(lastSync) > 24*time.Hour || g.cfg.ForcePRSync
// Also sync if we have versions with PR numbers that aren't cached
needsSync := lastSync.IsZero() || time.Since(lastSync) > 24*time.Hour || g.cfg.ForcePRSync || missingPRs
if !needsSync {
fmt.Fprintf(os.Stderr, "Using cached PR data (last sync: %s)\n", lastSync.Format("2006-01-02 15:04:05"))
@@ -298,6 +362,7 @@ func (g *Generator) formatVersion(version *git.Version) string {
}
}
// For released versions, if we have cached AI summary, use it!
if version.Name != "Unreleased" && version.AISummary != "" {
fmt.Fprintf(os.Stderr, "✅ %s already summarized (skipping)\n", version.Name)
sb.WriteString(version.AISummary)
@@ -529,8 +594,6 @@ func normalizeLineEndings(content string) string {
}
func (g *Generator) formatCommitMessage(message string) string {
prefixes := []string{"fix:", "feat:", "docs:", "style:", "refactor:",
"test:", "chore:", "perf:", "ci:", "build:", "revert:", "# docs:"}
strings_to_remove := []string{
"### CHANGES\n", "## CHANGES\n", "# CHANGES\n",
"...\n", "---\n", "## Changes\n", "## Change",
@@ -543,13 +606,6 @@ func (g *Generator) formatCommitMessage(message string) string {
// No hard tabs
message = strings.ReplaceAll(message, "\t", " ")
for _, prefix := range prefixes {
if strings.HasPrefix(strings.ToLower(message), prefix) {
message = strings.TrimSpace(message[len(prefix):])
break
}
}
if len(message) > 0 {
message = strings.ToUpper(message[:1]) + message[1:]
}

View File

@@ -10,6 +10,38 @@ import (
const DefaultSummarizeModel = "claude-sonnet-4-20250514"
const MinContentLength = 256 // Minimum content length to consider for summarization
const prompt = `# ROLE
You are an expert Technical Writer specializing in creating clear, concise,
and professional release notes from raw Git commit logs.
# TASK
Your goal is to transform a provided block of Git commit logs into a clean,
human-readable changelog summary. You will identify the most important changes,
format them as a bulleted list, and preserve the associated Pull Request (PR)
information.
# INSTRUCTIONS:
Follow these steps in order:
1. Deeply analyze the input. You will be given a block of text containing PR
information and commit log messages. Carefully read through the logs
to identify individual commits and their descriptions.
2. Identify Key Changes: Focus on commits that represent significant changes,
such as new features ("feat"), bug fixes ("fix"), performance improvements ("perf"),
or breaking changes ("BREAKING CHANGE").
3. Select the Top 5: From the identified key changes, select a maximum of the five
(5) most impactful ones to include in the summary.
If there are five or fewer total changes, include all of them.
4. Format the Output:
- Where you see a PR header, include the PR header verbatim. NO CHANGES.
**This is a critical rule: Do not modify the PR header, as it contains
important links.** The related changes follow the PR header.
- Do not add any additional text or preamble. Begin directly with the output.
- Use bullet points for each key change, starting each point with a hyphen ("-").
- Ensure that the summary is concise and focused on the main changes.
- The summary should be in American English (en-US), using proper grammar and punctuation.
5. If the content is too brief or you do not see any PR headers, return the content as is.
`
// getSummarizeModel returns the model to use for AI summarization
func getSummarizeModel() string {
if model := os.Getenv("FABRIC_CHANGELOG_SUMMARIZE_MODEL"); model != "" {
@@ -30,17 +62,6 @@ func SummarizeVersionContent(content string) (string, error) {
model := getSummarizeModel()
prompt := `Summarize the changes extracted from Git commit logs in a concise, professional way.
Pay particular attention to the following rules:
- Preserve the PR headers verbatim to your summary.
- I REPEAT: Do not change the PR headers in any way. They contain links to the PRs and Author Profiles.
- Use bullet points for lists and key changes (rendered using "-")
- Focus on the main changes and improvements.
- Avoid unnecessary details or preamble.
- Keep it under 800 characters.
- Be brief. List only the 5 most important changes along with the PR information which should be kept intact.
- If the content is too brief or you do not see any PR headers, return the content as is.`
cmd := exec.Command("fabric", "-m", model, prompt)
cmd.Stdin = strings.NewReader(content)

View File

@@ -5,10 +5,12 @@ import (
"regexp"
"strconv"
"strings"
"time"
"github.com/go-git/go-git/v5"
"github.com/go-git/go-git/v5/plumbing"
"github.com/go-git/go-git/v5/plumbing/object"
"github.com/go-git/go-git/v5/plumbing/storer"
)
var (
@@ -280,6 +282,111 @@ func parseGitHubURL(url string) (owner, repo string) {
return "", ""
}
// WalkHistorySinceTag walks git history from HEAD down to (but not including) the specified tag
// and returns any version commits found along the way
func (w *Walker) WalkHistorySinceTag(sinceTag string) (map[string]*Version, error) {
// Get the commit SHA for the sinceTag
tagRef, err := w.repo.Tag(sinceTag)
if err != nil {
return nil, fmt.Errorf("failed to get tag %s: %w", sinceTag, err)
}
tagCommit, err := w.repo.CommitObject(tagRef.Hash())
if err != nil {
return nil, fmt.Errorf("failed to get commit for tag %s: %w", sinceTag, err)
}
// Get HEAD reference
ref, err := w.repo.Head()
if err != nil {
return nil, fmt.Errorf("failed to get HEAD: %w", err)
}
// Walk from HEAD down to the tag commit (excluding it)
commitIter, err := w.repo.Log(&git.LogOptions{
From: ref.Hash(),
Order: git.LogOrderCommitterTime,
})
if err != nil {
return nil, fmt.Errorf("failed to create commit iterator: %w", err)
}
defer commitIter.Close()
versions := make(map[string]*Version)
currentVersion := "Unreleased"
prNumbers := make(map[string][]int)
err = commitIter.ForEach(func(c *object.Commit) error {
// Stop iteration when the hash of the current commit matches the hash of the specified sinceTag commit
if c.Hash == tagCommit.Hash {
return storer.ErrStop
}
commit := &Commit{
SHA: c.Hash.String(),
Message: strings.TrimSpace(c.Message),
Author: c.Author.Name,
Email: c.Author.Email,
Date: c.Author.When,
IsMerge: len(c.ParentHashes) > 1,
}
// Check for version pattern
if matches := versionPattern.FindStringSubmatch(commit.Message); len(matches) > 1 {
commit.IsVersion = true
commit.Version = matches[1]
currentVersion = commit.Version
if _, exists := versions[currentVersion]; !exists {
versions[currentVersion] = &Version{
Name: currentVersion,
Date: commit.Date,
CommitSHA: commit.SHA,
Commits: []*Commit{},
}
}
return nil
}
// Check for PR merge pattern
if matches := prPattern.FindStringSubmatch(commit.Message); len(matches) > 1 {
prNumber, err := strconv.Atoi(matches[1])
if err != nil {
// Handle parsing error (e.g., log it or skip processing)
return fmt.Errorf("failed to parse PR number: %v", err)
}
commit.PRNumber = prNumber
prNumbers[currentVersion] = append(prNumbers[currentVersion], prNumber)
}
// Add commit to current version
if _, exists := versions[currentVersion]; !exists {
versions[currentVersion] = &Version{
Name: currentVersion,
Date: time.Time{}, // Zero value, will be set by version commit
CommitSHA: "",
Commits: []*Commit{},
}
}
versions[currentVersion].Commits = append(versions[currentVersion].Commits, commit)
return nil
})
// Handle the stop condition - storer.ErrStop is expected
if err == storer.ErrStop {
err = nil
}
// Assign collected PR numbers to each version
for version, prs := range prNumbers {
versions[version].PRNumbers = dedupInts(prs)
}
return versions, err
}
func dedupInts(ints []int) []int {
seen := make(map[int]bool)
result := []int{}

View File

@@ -110,6 +110,10 @@ _fabric() {
'(--liststrategies)--liststrategies[List all strategies]' \
'(--listvendors)--listvendors[List all vendors]' \
'(--shell-complete-list)--shell-complete-list[Output raw list without headers/formatting (for shell completion)]' \
'(--suppress-think)--suppress-think[Suppress text enclosed in thinking tags]' \
'(--think-start-tag)--think-start-tag[Start tag for thinking sections (default: <think>)]:start tag:' \
'(--think-end-tag)--think-end-tag[End tag for thinking sections (default: </think>)]:end tag:' \
'(--disable-responses-api)--disable-responses-api[Disable OpenAI Responses API (default: false)]' \
'(-h --help)'{-h,--help}'[Show this help message]' \
'*:arguments:'
}

View File

@@ -13,7 +13,7 @@ _fabric() {
_get_comp_words_by_ref -n : cur prev words cword
# Define all possible options/flags
local opts="--pattern -p --variable -v --context -C --session --attachment -a --setup -S --temperature -t --topp -T --stream -s --presencepenalty -P --raw -r --frequencypenalty -F --listpatterns -l --listmodels -L --listcontexts -x --listsessions -X --updatepatterns -U --copy -c --model -m --modelContextLength --output -o --output-session --latest -n --changeDefaultModel -d --youtube -y --playlist --transcript --transcript-with-timestamps --comments --metadata --language -g --scrape_url -u --scrape_question -q --seed -e --wipecontext -w --wipesession -W --printcontext --printsession --readability --input-has-vars --dry-run --serve --serveOllama --address --api-key --config --search --search-location --image-file --image-size --image-quality --image-compression --image-background --version --listextensions --addextension --rmextension --strategy --liststrategies --listvendors --shell-complete-list --help -h"
local opts="--pattern -p --variable -v --context -C --session --attachment -a --setup -S --temperature -t --topp -T --stream -s --presencepenalty -P --raw -r --frequencypenalty -F --listpatterns -l --listmodels -L --listcontexts -x --listsessions -X --updatepatterns -U --copy -c --model -m --modelContextLength --output -o --output-session --latest -n --changeDefaultModel -d --youtube -y --playlist --transcript --transcript-with-timestamps --comments --metadata --language -g --scrape_url -u --scrape_question -q --seed -e --wipecontext -w --wipesession -W --printcontext --printsession --readability --input-has-vars --dry-run --serve --serveOllama --address --api-key --config --search --search-location --image-file --image-size --image-quality --image-compression --image-background --suppress-think --think-start-tag --think-end-tag --disable-responses-api --version --listextensions --addextension --rmextension --strategy --liststrategies --listvendors --shell-complete-list --help -h"
# Helper function for dynamic completions
_fabric_get_list() {
@@ -81,7 +81,7 @@ _fabric() {
return 0
;;
# Options requiring simple arguments (no specific completion logic here)
-v | --variable | -t | --temperature | -T | --topp | -P | --presencepenalty | -F | --frequencypenalty | --modelContextLength | -n | --latest | -y | --youtube | -g | --language | -u | --scrape_url | -q | --scrape_question | -e | --seed | --address | --api-key | --search-location | --image-compression)
-v | --variable | -t | --temperature | -T | --topp | -P | --presencepenalty | -F | --frequencypenalty | --modelContextLength | -n | --latest | -y | --youtube | -g | --language | -u | --scrape_url | -q | --scrape_question | -e | --seed | --address | --api-key | --search-location | --image-compression | --think-start-tag | --think-end-tag)
# No specific completion suggestions, user types the value
return 0
;;

View File

@@ -69,6 +69,8 @@ complete -c fabric -l image-background -d "Background type: opaque, transparent
complete -c fabric -l addextension -d "Register a new extension from config file path" -r -a "*.yaml *.yml"
complete -c fabric -l rmextension -d "Remove a registered extension by name" -a "(__fabric_get_extensions)"
complete -c fabric -l strategy -d "Choose a strategy from the available strategies" -a "(__fabric_get_strategies)"
complete -c fabric -l think-start-tag -d "Start tag for thinking sections (default: <think>)"
complete -c fabric -l think-end-tag -d "End tag for thinking sections (default: </think>)"
# Boolean flags (no arguments)
complete -c fabric -s S -l setup -d "Run setup for all reconfigurable parts of fabric"
@@ -98,4 +100,6 @@ complete -c fabric -l listextensions -d "List all registered extensions"
complete -c fabric -l liststrategies -d "List all strategies"
complete -c fabric -l listvendors -d "List all vendors"
complete -c fabric -l shell-complete-list -d "Output raw list without headers/formatting (for shell completion)"
complete -c fabric -l suppress-think -d "Suppress text enclosed in thinking tags"
complete -c fabric -l disable-responses-api -d "Disable OpenAI Responses API (default: false)"
complete -c fabric -s h -l help -d "Show this help message"

View File

@@ -0,0 +1,8 @@
# IDENTITY AND PURPOSE
You are a senior developer and expert prompt engineer. Think ultra hard to distill the following transcription or tutorial into as small a set of unique rules as possible, intended as best-practices guidance for AI-assisted coding tools. Each rule must be a single sentence phrased as a direct instruction; avoid explanations and cosmetic language. Output in Markdown; I prefer dash bullets (-).
---
# TRANSCRIPT

View File

@@ -41,8 +41,8 @@ func handleChatProcessing(currentFlags *Flags, registry *core.PluginRegistry, me
result := session.GetLastMessage().Content
if !currentFlags.Stream {
// print the result if it was not streamed already
if !currentFlags.Stream || currentFlags.SuppressThink {
// print the result if it was not streamed already, or if suppress-think disabled streaming output
fmt.Println(result)
}

View File

@@ -7,6 +7,7 @@ import (
"strings"
"github.com/danielmiessler/fabric/internal/core"
"github.com/danielmiessler/fabric/internal/plugins/ai/openai"
"github.com/danielmiessler/fabric/internal/tools/converter"
"github.com/danielmiessler/fabric/internal/tools/youtube"
)
@@ -18,6 +19,12 @@ func Cli(version string) (err error) {
return
}
if currentFlags.Setup {
if err = ensureEnvFile(); err != nil {
return
}
}
if currentFlags.Version {
fmt.Println(version)
return
@@ -36,6 +43,11 @@ func Cli(version string) (err error) {
}
}
// Configure OpenAI Responses API setting based on CLI flag
if registry != nil {
configureOpenAIResponsesAPI(registry, currentFlags.DisableResponsesAPI)
}
// Handle setup and server commands
var handled bool
if handled, err = handleSetupAndServerCommands(currentFlags, registry, version); err != nil || handled {
@@ -142,3 +154,21 @@ func WriteOutput(message string, outputFile string) (err error) {
}
return
}
// configureOpenAIResponsesAPI configures the OpenAI client's Responses API setting based on the CLI flag
func configureOpenAIResponsesAPI(registry *core.PluginRegistry, disableResponsesAPI bool) {
// Find the OpenAI vendor in the registry
if registry != nil && registry.VendorsAll != nil {
for _, vendor := range registry.VendorsAll.Vendors {
if vendor.GetName() == "OpenAI" {
// Type assertion to access the OpenAI-specific method
if openaiClient, ok := vendor.(*openai.Client); ok {
// Invert the disable flag to get the enable flag
enableResponsesAPI := !disableResponsesAPI
openaiClient.SetResponsesAPIEnabled(enableResponsesAPI)
}
break
}
}
}
}

View File

@@ -18,4 +18,13 @@ temperature: 0.88
seed: 42
stream: true
raw: false
raw: false
# suppress vendor thinking output
suppressThink: false
thinkStartTag: "<think>"
thinkEndTag: "</think>"
# OpenAI Responses API settings
# (use this for llama-server or other OpenAI-compatible local servers)
disableResponsesAPI: true

View File

@@ -83,6 +83,10 @@ type Flags struct {
ImageQuality string `long:"image-quality" description:"Image quality: low, medium, high, auto (default: auto)"`
ImageCompression int `long:"image-compression" description:"Compression level 0-100 for JPEG/WebP formats (default: not set)"`
ImageBackground string `long:"image-background" description:"Background type: opaque, transparent (default: opaque, only for PNG/WebP)"`
SuppressThink bool `long:"suppress-think" yaml:"suppressThink" description:"Suppress text enclosed in thinking tags"`
ThinkStartTag string `long:"think-start-tag" yaml:"thinkStartTag" description:"Start tag for thinking sections" default:"<think>"`
ThinkEndTag string `long:"think-end-tag" yaml:"thinkEndTag" description:"End tag for thinking sections" default:"</think>"`
DisableResponsesAPI bool `long:"disable-responses-api" yaml:"disableResponsesAPI" description:"Disable OpenAI Responses API (default: false)"`
}
var debug = false
@@ -99,26 +103,34 @@ func Init() (ret *Flags, err error) {
usedFlags := make(map[string]bool)
yamlArgsScan := os.Args[1:]
// Get list of fields that have yaml tags, could be in yaml config
yamlFields := make(map[string]bool)
// Create mapping from flag names (both short and long) to yaml tag names
flagToYamlTag := make(map[string]string)
t := reflect.TypeOf(Flags{})
for i := 0; i < t.NumField(); i++ {
if yamlTag := t.Field(i).Tag.Get("yaml"); yamlTag != "" {
yamlFields[yamlTag] = true
//Debugf("Found yaml-configured field: %s\n", yamlTag)
field := t.Field(i)
yamlTag := field.Tag.Get("yaml")
if yamlTag != "" {
longTag := field.Tag.Get("long")
shortTag := field.Tag.Get("short")
if longTag != "" {
flagToYamlTag[longTag] = yamlTag
Debugf("Mapped long flag %s to yaml tag %s\n", longTag, yamlTag)
}
if shortTag != "" {
flagToYamlTag[shortTag] = yamlTag
Debugf("Mapped short flag %s to yaml tag %s\n", shortTag, yamlTag)
}
}
}
// Scan args that are provided on the CLI and might also be set in the YAML config
for _, arg := range yamlArgsScan {
if strings.HasPrefix(arg, "--") {
flag := strings.TrimPrefix(arg, "--")
if i := strings.Index(flag, "="); i > 0 {
flag = flag[:i]
}
if yamlFields[flag] {
usedFlags[flag] = true
Debugf("CLI flag used: %s\n", flag)
flag := extractFlag(arg)
if flag != "" {
if yamlTag, exists := flagToYamlTag[flag]; exists {
usedFlags[yamlTag] = true
Debugf("CLI flag used: %s (yaml: %s)\n", flag, yamlTag)
}
}
}
@@ -131,6 +143,16 @@ func Init() (ret *Flags, err error) {
return
}
// Check to see if a ~/.config/fabric/config.yaml config file exists (only when user didn't specify a config)
if ret.Config == "" {
// Default to ~/.config/fabric/config.yaml if no config specified
if defaultConfigPath, err := util.GetDefaultConfigPath(); err == nil && defaultConfigPath != "" {
ret.Config = defaultConfigPath
} else if err != nil {
Debugf("Could not determine default config path: %v\n", err)
}
}
// If config specified, load and apply YAML for unused flags
if ret.Config != "" {
var yamlFlags *Flags
@@ -165,7 +187,6 @@ func Init() (ret *Flags, err error) {
}
}
// Handle stdin and messages
// Handle stdin and messages
info, _ := os.Stdin.Stat()
pipedToStdin := (info.Mode() & os.ModeCharDevice) == 0
@@ -185,6 +206,22 @@ func Init() (ret *Flags, err error) {
return
}
func extractFlag(arg string) string {
var flag string
if strings.HasPrefix(arg, "--") {
flag = strings.TrimPrefix(arg, "--")
if i := strings.Index(flag, "="); i > 0 {
flag = flag[:i]
}
} else if strings.HasPrefix(arg, "-") && len(arg) > 1 {
flag = strings.TrimPrefix(arg, "-")
if i := strings.Index(flag, "="); i > 0 {
flag = flag[:i]
}
}
return flag
}
func assignWithConversion(targetField, sourceField reflect.Value) error {
// Handle string source values
if sourceField.Kind() == reflect.String {
@@ -376,6 +413,15 @@ func (o *Flags) BuildChatOptions() (ret *domain.ChatOptions, err error) {
return nil, err
}
startTag := o.ThinkStartTag
if startTag == "" {
startTag = "<think>"
}
endTag := o.ThinkEndTag
if endTag == "" {
endTag = "</think>"
}
ret = &domain.ChatOptions{
Model: o.Model,
Temperature: o.Temperature,
@@ -392,6 +438,9 @@ func (o *Flags) BuildChatOptions() (ret *domain.ChatOptions, err error) {
ImageQuality: o.ImageQuality,
ImageCompression: o.ImageCompression,
ImageBackground: o.ImageBackground,
SuppressThink: o.SuppressThink,
ThinkStartTag: startTag,
ThinkEndTag: endTag,
}
return
}

View File

@@ -64,6 +64,9 @@ func TestBuildChatOptions(t *testing.T) {
FrequencyPenalty: 0.2,
Raw: false,
Seed: 1,
SuppressThink: false,
ThinkStartTag: "<think>",
ThinkEndTag: "</think>",
}
options, err := flags.BuildChatOptions()
assert.NoError(t, err)
@@ -85,12 +88,29 @@ func TestBuildChatOptionsDefaultSeed(t *testing.T) {
FrequencyPenalty: 0.2,
Raw: false,
Seed: 0,
SuppressThink: false,
ThinkStartTag: "<think>",
ThinkEndTag: "</think>",
}
options, err := flags.BuildChatOptions()
assert.NoError(t, err)
assert.Equal(t, expectedOptions, options)
}
func TestBuildChatOptionsSuppressThink(t *testing.T) {
flags := &Flags{
SuppressThink: true,
ThinkStartTag: "[[t]]",
ThinkEndTag: "[[/t]]",
}
options, err := flags.BuildChatOptions()
assert.NoError(t, err)
assert.True(t, options.SuppressThink)
assert.Equal(t, "[[t]]", options.ThinkStartTag)
assert.Equal(t, "[[/t]]", options.ThinkEndTag)
}
func TestInitWithYAMLConfig(t *testing.T) {
// Create a temporary YAML config file
configContent := `

View File

@@ -1,6 +1,7 @@
package cli
import (
"fmt"
"os"
"path/filepath"
@@ -8,6 +9,9 @@ import (
"github.com/danielmiessler/fabric/internal/plugins/db/fsdb"
)
const ConfigDirPerms os.FileMode = 0755
const EnvFilePerms os.FileMode = 0644
// initializeFabric initializes the fabric database and plugin registry
func initializeFabric() (registry *core.PluginRegistry, err error) {
var homedir string
@@ -26,3 +30,27 @@ func initializeFabric() (registry *core.PluginRegistry, err error) {
return
}
// ensureEnvFile checks for the default ~/.config/fabric/.env file and creates it
// along with the parent directory if it does not exist.
func ensureEnvFile() (err error) {
var homedir string
if homedir, err = os.UserHomeDir(); err != nil {
return fmt.Errorf("could not determine user home directory: %w", err)
}
configDir := filepath.Join(homedir, ".config", "fabric")
envPath := filepath.Join(configDir, ".env")
if _, statErr := os.Stat(envPath); statErr != nil {
if !os.IsNotExist(statErr) {
return fmt.Errorf("could not stat .env file: %w", statErr)
}
if err = os.MkdirAll(configDir, ConfigDirPerms); err != nil {
return fmt.Errorf("could not create config directory: %w", err)
}
if err = os.WriteFile(envPath, []byte{}, EnvFilePerms); err != nil {
return fmt.Errorf("could not create .env file: %w", err)
}
}
return
}

View File

@@ -79,7 +79,9 @@ func (o *Chatter) Send(request *domain.ChatRequest, opts *domain.ChatOptions) (s
for response := range responseChan {
message += response
fmt.Print(response)
if !opts.SuppressThink {
fmt.Print(response)
}
}
// Wait for goroutine to finish
@@ -101,6 +103,10 @@ func (o *Chatter) Send(request *domain.ChatRequest, opts *domain.ChatOptions) (s
}
}
if opts.SuppressThink && !o.DryRun {
message = domain.StripThinkBlocks(message, opts.ThinkStartTag, opts.ThinkEndTag)
}
if message == "" {
session = nil
err = fmt.Errorf("empty response")

View File

@@ -15,6 +15,7 @@ import (
type mockVendor struct {
sendStreamError error
streamChunks []string
sendFunc func(context.Context, []*chat.ChatCompletionMessage, *domain.ChatOptions) (string, error)
}
func (m *mockVendor) GetName() string {
@@ -57,6 +58,9 @@ func (m *mockVendor) SendStream(messages []*chat.ChatCompletionMessage, opts *do
}
func (m *mockVendor) Send(ctx context.Context, messages []*chat.ChatCompletionMessage, opts *domain.ChatOptions) (string, error) {
if m.sendFunc != nil {
return m.sendFunc(ctx, messages, opts)
}
return "test response", nil
}
@@ -64,6 +68,51 @@ func (m *mockVendor) NeedsRawMode(modelName string) bool {
return false
}
func TestChatter_Send_SuppressThink(t *testing.T) {
tempDir := t.TempDir()
db := fsdb.NewDb(tempDir)
mockVendor := &mockVendor{}
chatter := &Chatter{
db: db,
Stream: false,
vendor: mockVendor,
model: "test-model",
}
request := &domain.ChatRequest{
Message: &chat.ChatCompletionMessage{
Role: chat.ChatMessageRoleUser,
Content: "test",
},
}
opts := &domain.ChatOptions{
Model: "test-model",
SuppressThink: true,
ThinkStartTag: "<think>",
ThinkEndTag: "</think>",
}
// custom send function returning a message with think tags
mockVendor.sendFunc = func(ctx context.Context, msgs []*chat.ChatCompletionMessage, o *domain.ChatOptions) (string, error) {
return "<think>hidden</think> visible", nil
}
session, err := chatter.Send(request, opts)
if err != nil {
t.Fatalf("Send returned error: %v", err)
}
if session == nil {
t.Fatal("expected session")
}
last := session.GetLastMessage()
if last.Content != "visible" {
t.Errorf("expected filtered content 'visible', got %q", last.Content)
}
}
func TestChatter_Send_StreamingErrorPropagation(t *testing.T) {
// Create a temporary database for testing
tempDir := t.TempDir()

View File

@@ -259,8 +259,24 @@ func (o *PluginRegistry) GetModels() (ret *ai.VendorsModels, err error) {
func (o *PluginRegistry) Configure() (err error) {
o.ConfigureVendors()
_ = o.Defaults.Configure()
if err := o.CustomPatterns.Configure(); err != nil {
return fmt.Errorf("error configuring CustomPatterns: %w", err)
}
_ = o.PatternsLoader.Configure()
// Refresh the database custom patterns directory after custom patterns plugin is configured
customPatternsDir := os.Getenv("CUSTOM_PATTERNS_DIRECTORY")
if customPatternsDir != "" {
// Expand home directory if needed
if strings.HasPrefix(customPatternsDir, "~/") {
if homeDir, err := os.UserHomeDir(); err == nil {
customPatternsDir = filepath.Join(homeDir, customPatternsDir[2:])
}
}
o.Db.Patterns.CustomPatternsDir = customPatternsDir
o.PatternsLoader.Patterns.CustomPatternsDir = customPatternsDir
}
//YouTube and Jina are not mandatory, so ignore not configured error
_ = o.YouTube.Configure()
_ = o.Jina.Configure()

View File

@@ -33,6 +33,9 @@ type ChatOptions struct {
ImageQuality string
ImageCompression int
ImageBackground string
SuppressThink bool
ThinkStartTag string
ThinkEndTag string
}
// NormalizeMessages remove empty messages and ensure messages order user-assist-user

internal/domain/think.go (new file, 32 lines)
View File

@@ -0,0 +1,32 @@
package domain
import (
"regexp"
"sync"
)
// StripThinkBlocks removes any content between the provided start and end tags
// from the input string. Whitespace following the end tag is also removed so
// output resumes at the next non-empty line.
var (
regexCache = make(map[string]*regexp.Regexp)
cacheMutex sync.Mutex
)
func StripThinkBlocks(input, startTag, endTag string) string {
if startTag == "" || endTag == "" {
return input
}
cacheKey := startTag + "|" + endTag
cacheMutex.Lock()
re, exists := regexCache[cacheKey]
if !exists {
pattern := "(?s)" + regexp.QuoteMeta(startTag) + ".*?" + regexp.QuoteMeta(endTag) + "\\s*"
re = regexp.MustCompile(pattern)
regexCache[cacheKey] = re
}
cacheMutex.Unlock()
return re.ReplaceAllString(input, "")
}

View File

@@ -0,0 +1,19 @@
package domain
import "testing"
func TestStripThinkBlocks(t *testing.T) {
input := "<think>internal</think>\n\nresult"
got := StripThinkBlocks(input, "<think>", "</think>")
if got != "result" {
t.Errorf("expected %q, got %q", "result", got)
}
}
func TestStripThinkBlocksCustomTags(t *testing.T) {
input := "[[t]]hidden[[/t]] visible"
got := StripThinkBlocks(input, "[[t]]", "[[/t]]")
if got != "visible" {
t.Errorf("expected %q, got %q", "visible", got)
}
}

View File

@@ -12,6 +12,8 @@ import (
"github.com/danielmiessler/fabric/internal/plugins"
)
const DryRunResponse = "Dry run: Fake response sent by DryRun plugin\n"
type Client struct {
*plugins.PluginBase
}
@@ -85,27 +87,37 @@ func (c *Client) formatOptions(opts *domain.ChatOptions) string {
if opts.ImageFile != "" {
builder.WriteString(fmt.Sprintf("ImageFile: %s\n", opts.ImageFile))
}
if opts.SuppressThink {
builder.WriteString("SuppressThink: enabled\n")
builder.WriteString(fmt.Sprintf("Thinking Start Tag: %s\n", opts.ThinkStartTag))
builder.WriteString(fmt.Sprintf("Thinking End Tag: %s\n", opts.ThinkEndTag))
}
return builder.String()
}
func (c *Client) SendStream(msgs []*chat.ChatCompletionMessage, opts *domain.ChatOptions, channel chan string) error {
func (c *Client) constructRequest(msgs []*chat.ChatCompletionMessage, opts *domain.ChatOptions) string {
var builder strings.Builder
builder.WriteString("Dry run: Would send the following request:\n\n")
builder.WriteString(c.formatMessages(msgs))
builder.WriteString(c.formatOptions(opts))
channel <- builder.String()
close(channel)
return builder.String()
}
func (c *Client) SendStream(msgs []*chat.ChatCompletionMessage, opts *domain.ChatOptions, channel chan string) error {
defer close(channel)
request := c.constructRequest(msgs, opts)
channel <- request
channel <- "\n"
channel <- DryRunResponse
return nil
}
func (c *Client) Send(_ context.Context, msgs []*chat.ChatCompletionMessage, opts *domain.ChatOptions) (string, error) {
fmt.Println("Dry run: Would send the following request:")
fmt.Print(c.formatMessages(msgs))
fmt.Print(c.formatOptions(opts))
request := c.constructRequest(msgs, opts)
return "", nil
return request + "\n" + DryRunResponse, nil
}
func (c *Client) Setup() error {

View File

@@ -66,6 +66,11 @@ type Client struct {
ImplementsResponses bool // Whether this provider supports the Responses API
}
// SetResponsesAPIEnabled configures whether to use the Responses API
func (o *Client) SetResponsesAPIEnabled(enabled bool) {
o.ImplementsResponses = enabled
}
func (o *Client) configure() (ret error) {
opts := []option.RequestOption{option.WithAPIKey(o.ApiKey.Value)}
if o.ApiBaseURL.Value != "" {

View File

@@ -86,9 +86,10 @@ func (o *Session) String() (ret string) {
ret += fmt.Sprintf("\n--- \n[%v]\n%v", message.Role, message.Content)
if message.MultiContent != nil {
for _, part := range message.MultiContent {
if part.Type == chat.ChatMessagePartTypeImageURL {
switch part.Type {
case chat.ChatMessagePartTypeImageURL:
ret += fmt.Sprintf("\n%v: %v", part.Type, *part.ImageURL)
} else if part.Type == chat.ChatMessagePartTypeText {
case chat.ChatMessagePartTypeText:
ret += fmt.Sprintf("\n%v: %v", part.Type, part.Text)
}
}

View File

@@ -4,6 +4,8 @@ import (
"fmt"
"os"
"path/filepath"
"sort"
"strings"
"github.com/danielmiessler/fabric/internal/plugins"
"github.com/danielmiessler/fabric/internal/plugins/db/fsdb"
@@ -107,6 +109,12 @@ func (o *PatternsLoader) PopulateDB() (err error) {
}
fmt.Printf("✅ Successfully downloaded and installed patterns to %s\n", o.Patterns.Dir)
// Create the unique patterns file after patterns are successfully moved
if err = o.createUniquePatternsFile(); err != nil {
return fmt.Errorf("failed to create unique patterns file: %w", err)
}
return
}
@@ -301,3 +309,60 @@ func (o *PatternsLoader) countPatternsInDirectory(dir string) (int, error) {
return patternCount, nil
}
// createUniquePatternsFile creates the unique_patterns.txt file with all pattern names
func (o *PatternsLoader) createUniquePatternsFile() (err error) {
// Read patterns from the main patterns directory
entries, err := os.ReadDir(o.Patterns.Dir)
if err != nil {
return fmt.Errorf("failed to read patterns directory: %w", err)
}
patternNamesMap := make(map[string]bool) // Use map to avoid duplicates
// Add patterns from main directory
for _, entry := range entries {
if entry.IsDir() {
patternNamesMap[entry.Name()] = true
}
}
// Add patterns from custom patterns directory if it exists
if o.Patterns.CustomPatternsDir != "" {
if customEntries, customErr := os.ReadDir(o.Patterns.CustomPatternsDir); customErr == nil {
for _, entry := range customEntries {
if entry.IsDir() {
patternNamesMap[entry.Name()] = true
}
}
fmt.Fprintf(os.Stderr, "📂 Also included patterns from custom directory: %s\n", o.Patterns.CustomPatternsDir)
} else {
fmt.Fprintf(os.Stderr, "Warning: Could not read custom patterns directory %s: %v\n", o.Patterns.CustomPatternsDir, customErr)
}
}
if len(patternNamesMap) == 0 {
if o.Patterns.CustomPatternsDir != "" {
return fmt.Errorf("no patterns found in directories %s and %s", o.Patterns.Dir, o.Patterns.CustomPatternsDir)
}
return fmt.Errorf("no patterns found in directory %s", o.Patterns.Dir)
}
// Convert map to sorted slice
var patternNames []string
for name := range patternNamesMap {
patternNames = append(patternNames, name)
}
// Sort patterns alphabetically for consistent output
sort.Strings(patternNames)
// Join pattern names with newlines
content := strings.Join(patternNames, "\n") + "\n"
if err = os.WriteFile(o.Patterns.UniquePatternsFilePath, []byte(content), 0644); err != nil {
return fmt.Errorf("failed to write unique patterns file: %w", err)
}
fmt.Printf("📝 Created unique patterns file with %d patterns\n", len(patternNames))
return nil
}

View File

@@ -71,3 +71,21 @@ func IsSymlinkToDir(path string) bool {
return false // Regular directories should not be treated as symlinks
}
// GetDefaultConfigPath returns the default path for the configuration file
// if it exists, otherwise returns an empty string.
func GetDefaultConfigPath() (string, error) {
homeDir, err := os.UserHomeDir()
if err != nil {
return "", fmt.Errorf("could not determine user home directory: %w", err)
}
defaultConfigPath := filepath.Join(homeDir, ".config", "fabric", "config.yaml")
if _, err := os.Stat(defaultConfigPath); err != nil {
if os.IsNotExist(err) {
return "", nil // Return no error for non-existent config path
}
return "", fmt.Errorf("error accessing default config path: %w", err)
}
return defaultConfigPath, nil
}

View File

@@ -1 +1 @@
"1.4.246"
"1.4.258"

View File

@@ -1861,6 +1861,16 @@
"CR THINKING",
"SELF"
]
},
{
"patternName": "generate_code_rules",
"description": "Extracts a list of best practices rules for AI coding assisted tools.",
"tags": [
"ANALYSIS",
"EXTRACT",
"DEVELOPMENT",
"AI"
]
}
]
}

View File

@@ -903,6 +903,10 @@
{
"patternName": "t_check_dunning_kruger",
"pattern_extract": "# IDENTITY You are an expert at understanding deep context about a person or entity, and then creating wisdom from that context combined with the instruction or question given in the input. # STEPS 1. Read the incoming TELOS File thoroughly. Fully understand everything about this person or entity. 2. Deeply study the input instruction or question. 3. Spend significant time and effort thinking about how these two are related, and what would be the best possible output for the person who sent the input. 4. Evaluate the input against the Dunning-Kruger effect and input's prior beliefs. Explore cognitive bias, subjective ability and objective ability for: low-ability areas where the input owner overestimate their knowledge or skill; and the opposite, high-ability areas where the input owner underestimate their knowledge or skill. # EXAMPLE In education, students who overestimate their understanding of a topic may not seek help or put in the necessary effort, while high-achieving students might doubt their abilities. In healthcare, overconfident practitioners might make critical errors, and underconfident practitioners might delay crucial decisions. In politics, politicians with limited expertise might propose simplistic solutions and ignore expert advice. END OF EXAMPLE # OUTPUT - In a section called OVERESTIMATION OF COMPETENCE, output a set of 10, 16-word bullets, that capture the principal misinterpretation of lack of knowledge or skill which are leading the input owner to believe they are more knowledgeable or skilled than they actually are. - In a section called UNDERESTIMATION OF COMPETENCE, output a set of 10, 16-word bullets,that capture the principal misinterpreation of underestimation of their knowledge or skill which are preventing the input owner to see opportunities. - In a section called METACOGNITIVIVE SKILLS, output a set of 10-word bullets that expose areas where the input owner struggles to accuratelly assess their own performance and may not be aware of the gap between their actual ability and their perceived ability. - In a section called IMPACT ON DECISION MAKING, output a set of 10-word bullets exposing facts, biases, traces of behavior based on overinflated self-assessment, that can lead to poor decisions. - At the end summarize the findings and give the input owner a motivational and constructive perspective on how they can start to tackle principal 5 gaps in their perceived skills and knowledge competencies. Don't be over simplistic. # OUTPUT INSTRUCTIONS 1. Only output valid, basic Markdown. No special formatting or italics or bolding or anything. 2. Do not output any content other than the sections above. Nothing else."
},
{
"patternName": "generate_code_rules",
"pattern_extract": "# IDENTITY AND PURPOSE You are a senior developer and expert prompt engineer. Think ultra hard to distill the following transcription or tutorial in as little set of unique rules as possible intended for best practices guidance in AI assisted coding tools, each rule has to be in one sentence as a direct instruction, avoid explanations and cosmetic language. Output in Markdown, I prefer bullet dash (-). --- # TRANSCRIPT"
}
]
}

View File

@@ -1861,6 +1861,16 @@
"CR THINKING",
"SELF"
]
},
{
"patternName": "generate_code_rules",
"description": "Extracts a list of best practices rules for AI coding assisted tools.",
"tags": [
"ANALYSIS",
"EXTRACT",
"DEVELOPMENT",
"AI"
]
}
]
}