Compare commits

...

617 Commits

Author SHA1 Message Date
github-actions[bot]
5dec53726a Update version to v1.4.254 and commit 2025-07-16 21:21:12 +00:00
Kayvan Sylvan
b0eb136cbb Merge pull request #1621 from robertocarvajal/main
Adds generate code rules pattern
2025-07-16 14:19:46 -07:00
Roberto Carvajal
63f4370ff1 Adds generate code rules pattern
Signed-off-by: Roberto Carvajal <roberto.carvajal@gmail.com>
2025-07-16 11:15:55 -04:00
Kayvan Sylvan
b3cc2c737d docs: Update CHANGELOG after v1.4.253 2025-07-15 22:36:26 -07:00
github-actions[bot]
e43b4191e4 Update version to v1.4.253 and commit 2025-07-16 05:34:13 +00:00
Kayvan Sylvan
744c565120 Merge pull request #1620 from ksylvan/0715-thinking-flags-completions-scripts
Update Shell Completions for New Think-Block Suppression Options
2025-07-15 22:32:41 -07:00
Kayvan Sylvan
1473ac1465 feat: add 'think' tag options for text suppression and completion
### CHANGES

- Remove outdated update notes from README
- Add `--suppress-think` option to suppress 'think' tags
- Introduce `--think-start-tag` and `--think-end-tag` options
- Update bash completion with 'think' tag options
- Update fish completion with 'think' tag options
2025-07-15 22:26:12 -07:00
Kayvan Sylvan
c38c16f0db docs: Update CHANGELOG after v1.4.252 2025-07-15 22:08:43 -07:00
github-actions[bot]
a4b1db4193 Update version to v1.4.252 and commit 2025-07-16 05:05:47 +00:00
Kayvan Sylvan
d44bc19a84 Merge pull request #1619 from ksylvan/0715-suppress-think
Feature: Optional Hiding of Model Thinking Process with Configurable Tags
2025-07-15 22:04:12 -07:00
Kayvan Sylvan
a2e618e11c perf: add regex caching to StripThinkBlocks function for improved performance 2025-07-15 22:02:16 -07:00
Kayvan Sylvan
cb90379b30 feat: add suppress-think feature to filter AI reasoning output
## CHANGES

- Add suppress-think flag to hide thinking blocks
- Configure customizable start and end thinking tags
- Strip thinking content from final response output
- Update streaming logic to respect suppress-think setting
- Add YAML configuration support for thinking options
- Implement StripThinkBlocks utility function for content filtering
- Add comprehensive tests for thinking suppression functionality
2025-07-15 21:52:27 -07:00
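
A minimal sketch of what the `StripThinkBlocks` helper and the regex caching added in the perf commit above might look like; the package, signature, and tag handling here are assumptions for illustration, not the actual implementation.

```go
package util

import (
	"regexp"
	"sync"
)

// Cache compiled regexes keyed by tag pair so repeated calls with the
// same start/end tags (the common case) reuse one compiled pattern.
var (
	thinkRegexMu    sync.Mutex
	thinkRegexCache = map[string]*regexp.Regexp{}
)

// StripThinkBlocks removes everything between startTag and endTag,
// including the tags themselves and a trailing newline if present.
func StripThinkBlocks(content, startTag, endTag string) string {
	key := startTag + "\x00" + endTag

	thinkRegexMu.Lock()
	re, ok := thinkRegexCache[key]
	if !ok {
		// (?s) lets "." span newlines; ".*?" keeps matches non-greedy so
		// multiple think blocks are stripped independently.
		re = regexp.MustCompile(`(?s)` + regexp.QuoteMeta(startTag) + `.*?` + regexp.QuoteMeta(endTag) + `\n?`)
		thinkRegexCache[key] = re
	}
	thinkRegexMu.Unlock()

	return re.ReplaceAllString(content, "")
}
```
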
Kayvan Sylvan
4868687746 chore: Update CHANGELOG after v1.4.251 2025-07-15 21:44:14 -07:00
github-actions[bot]
85780fee76 Update version to v1.4.251 and commit 2025-07-16 03:49:08 +00:00
Kayvan Sylvan
497b1ed682 Merge pull request #1618 from ksylvan/0715-refrain-from-version-bumping-when-only-changelog-cache-changes
Update GitHub Workflow to Ignore Additional File Paths
2025-07-15 20:47:35 -07:00
Kayvan Sylvan
135433b749 ci: update workflow to ignore additional paths during version updates
## CHANGES

- Add `data/strategies/**` to paths-ignore list
- Add `cmd/generate_changelog/*.db` to paths-ignore list
- Prevent workflow triggers from strategy data changes
- Prevent workflow triggers from changelog database files
2025-07-15 20:38:44 -07:00
github-actions[bot]
f185dedb37 Update version to v1.4.250 and commit 2025-07-16 02:31:31 +00:00
Kayvan Sylvan
c74a157dcf docs: Update changelog with v1.4.249 changes 2025-07-15 19:29:46 -07:00
github-actions[bot]
91a336e870 Update version to v1.4.249 and commit 2025-07-16 01:32:15 +00:00
Kayvan Sylvan
5212fbcc37 Merge pull request #1617 from ksylvan/0715-really-really-fix-changelog-pr-sync-issue
Improve PR Sync Logic for Changelog Generator
2025-07-15 18:30:42 -07:00
Kayvan Sylvan
6d8eb3d2b9 chore: add log message for missing PRs in cache 2025-07-15 18:27:06 -07:00
Kayvan Sylvan
d3bba5d026 feat: preserve PR numbers during version cache merges
### CHANGES

- Enhance changelog to associate PR numbers with version tags
- Improve PR number parsing with proper error handling
- Collect all PR numbers for commits between version tags
- Associate aggregated PR numbers with each version entry
- Update cached versions with newly found PR numbers
- Add check for missing PRs to trigger sync if needed
2025-07-15 18:12:07 -07:00
github-actions[bot]
699762b694 Update version to v1.4.248 and commit 2025-07-16 00:24:44 +00:00
Kayvan Sylvan
f2a6f1bd98 Merge pull request #1616 from ksylvan/0715-fix-changelog-cache-logic
Preserve PR Numbers During Version Cache Merges
2025-07-15 17:23:17 -07:00
Kayvan Sylvan
3176adf59b fix: improve PR number parsing with proper error handling 2025-07-15 17:18:45 -07:00
Kayvan Sylvan
7e29966622 feat: enhance changelog to correctly associate PR numbers with version tags
### CHANGES

-   Collect all PR numbers for commits between version tags.
-   Associate aggregated PR numbers with each version entry.
-   Update cached versions with newly found PR numbers.
-   Attribute all changes in a version to relevant PRs.
2025-07-15 17:06:40 -07:00
Kayvan Sylvan
0af0ab683d docs: reorganize v1.4.247 changelog to attribute changes to PR #1613 2025-07-15 07:03:04 -07:00
github-actions[bot]
e72e67de71 Update version to v1.4.247 and commit 2025-07-15 07:07:13 +00:00
Kayvan Sylvan
414b6174e7 Merge pull request #1613 from ksylvan/0714-fixes-for-custom-directory-unique-patterns-list-changelog-cache
Improve AI Summarization for Consistent Professional Changelog Entries
2025-07-15 00:05:44 -07:00
Kayvan Sylvan
f63e0dfc05 chore: update logging output to use os.Stderr 2025-07-15 00:00:09 -07:00
Kayvan Sylvan
4ef8578e47 fix: improve error handling in plugin registry configuration 2025-07-14 23:54:05 -07:00
Kayvan Sylvan
12ee690ae4 chore: remove debug logging and sync custom patterns directory configuration
## CHANGES

- Remove debug stderr logging from content summarization
- Add custom patterns directory to PatternsLoader configuration
- Ensure consistent patterns directory setup across components
- Clean up unnecessary console output during summarization
2025-07-14 23:19:05 -07:00
Kayvan Sylvan
cc378be485 feat: improve error handling in plugin_registry and patterns_loader
### CHANGES

- Adjust prompt formatting in `summarize.go`
- Add error handling for `CustomPatterns` configuration
- Enhance error messages in `patterns_loader`
- Check for patterns in multiple directories
2025-07-14 23:09:39 -07:00
Kayvan Sylvan
06fc8d8732 chore: reorder plugin configuration sequence in PluginRegistry.Configure method
## CHANGES

- Move CustomPatterns.Configure() before PatternsLoader.Configure()
- Adjust plugin initialization order in Configure method
- Ensure proper dependency sequence for pattern loading
2025-07-14 22:51:52 -07:00
Kayvan Sylvan
9e4ed8ecb3 fix: improve git walking termination and error handling in changelog generator
## CHANGES

- Add storer import for proper git iteration control
- Use storer.ErrStop instead of nil for commit iteration termination
- Handle storer.ErrStop as expected condition in git walker
- Update cache comment to clarify Unreleased version skipping
- Change custom patterns warning to stderr output
- Add storer to VSCode spell checker dictionary
2025-07-14 22:34:13 -07:00
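
A sketch of the `storer.ErrStop` termination pattern the commit above describes, using go-git's commit iterator; the surrounding function shape is an assumption, only the ErrStop convention is the point.

```go
package changelog

import (
	"errors"

	git "github.com/go-git/go-git/v5"
	"github.com/go-git/go-git/v5/plumbing"
	"github.com/go-git/go-git/v5/plumbing/object"
	"github.com/go-git/go-git/v5/plumbing/storer"
)

// walkHistorySinceTag visits commits from HEAD back to stopAt, returning
// storer.ErrStop to end iteration cleanly instead of returning nil early.
func walkHistorySinceTag(repo *git.Repository, stopAt plumbing.Hash, visit func(*object.Commit) error) error {
	iter, err := repo.Log(&git.LogOptions{})
	if err != nil {
		return err
	}

	err = iter.ForEach(func(c *object.Commit) error {
		if c.Hash == stopAt {
			// Expected condition, not a failure: stop the walker here.
			return storer.ErrStop
		}
		return visit(c)
	})
	// Treat ErrStop as a normal, expected end of the walk.
	if err != nil && !errors.Is(err, storer.ErrStop) {
		return err
	}
	return nil
}
```
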
Kayvan Sylvan
c369425708 chore: clean up changelog and add debug logging for content length validation 2025-07-14 20:53:59 -07:00
Kayvan Sylvan
cf074d3411 feat: enhance changelog generation with incremental caching and improved AI summarization
## CHANGES

- Add incremental processing for new Git tags since cache
- Implement `WalkHistorySinceTag` method for efficient history traversal
- Cache new versions and commits after AI processing
- Update AI summarization prompt for better release note formatting
- Remove conventional commit prefix stripping from commit messages
- Add custom patterns directory support to plugin registry
- Generate unique patterns file including custom directory patterns
- Improve session string formatting with switch statement
2025-07-14 15:12:57 -07:00
Kayvan Sylvan
47f75237ff docs: update README for GraphQL optimization and AI summary features
### CHANGES

- Detail GraphQL API usage for faster PR fetching
- Introduce AI-powered summaries via Fabric integration
- Explain content-based caching for AI summaries
- Document support for loading secrets from .env files
- Add usage examples for new AI summary feature
- Clarify project license is The MIT License
2025-07-13 21:57:11 -07:00
github-actions[bot]
fad0a065d4 Update version to v1.4.246 and commit 2025-07-14 03:44:10 +00:00
Kayvan Sylvan
a59a3517d8 Merge pull request #1611 from ksylvan/0711-changelog
Changelog Generator: AI-Powered Automation for Fabric Project
2025-07-13 20:42:41 -07:00
Kayvan Sylvan
04c3c0c512 docs: Update CHANGELOG 2025-07-13 20:39:21 -07:00
Kayvan Sylvan
cb837bde2d feat: add AI-powered changelog generation with high-performance Go tool and comprehensive caching
## CHANGES

- Add high-performance Go changelog generator with GraphQL integration
- Implement SQLite-based persistent caching for incremental updates
- Create one-pass git history walking algorithm with concurrent processing
- Add comprehensive CLI with cobra framework and tag-based caching
- Integrate AI summarization using Fabric CLI for enhanced output
- Support batch PR fetching with GitHub Search API optimization
- Add VSCode configuration with spell checking and markdown linting
- Include extensive documentation with PRD and README files
- Implement commit-PR mapping for lightning-fast git operations
- Add content hashing for change detection and cache optimization
2025-07-13 20:26:23 -07:00
github-actions[bot]
2ad454b6dc Update version to v1.4.245 and commit 2025-07-11 20:56:33 +00:00
Kayvan Sylvan
c0ea25f816 Merge pull request #1603 from ksylvan/0711-together-ai-implementation
Together AI Support with OpenAI Fallback Mechanism Added
2025-07-11 13:55:02 -07:00
Kayvan Sylvan
87796d4fa9 chore: optimize model ID extraction and remove redundant comment
## CHANGES

- Remove duplicate comment about reading response body
- Preallocate slice capacity in extractModelIDs function
- Initialize modelIDs slice with known capacity
2025-07-11 13:48:30 -07:00
Kayvan Sylvan
e1945a0b62 fix: improve error message truncation in DirectlyGetModels method
## CHANGES

- Add proper bounds checking for response body truncation
- Prevent slice out of bounds errors in error messages
- Add ellipsis indicator when response body is truncated
- Improve error message clarity for debugging purposes
2025-07-11 13:33:16 -07:00
Kayvan Sylvan
ecac2b4c34 refactor: clean up HTTP request handling and improve error response formatting
## CHANGES

- Remove unnecessary else block in HTTP request creation
- Move header setting outside conditional block for clarity
- Add TODO comment about reusing HTTP client instance
- Truncate error response output to prevent excessive logging
2025-07-11 13:19:44 -07:00
Kayvan Sylvan
7ed4de269e chore: refactor DirectlyGetModels to read response body once
## CHANGES
* Read response body once for efficiency
* Use io.ReadAll for response body
* Unmarshal json from bodyBytes
* Return error with raw response bytes
* Improve error handling for json parsing
2025-07-11 13:11:30 -07:00
Kayvan Sylvan
6bd305906d fix: increase error response limit and simplify model extraction logic
## CHANGES

- Increase error response limit from 500 to 1024 bytes
- Add documentation comment for ErrorResponseLimit constant
- Remove unnecessary error return from extractModelIDs function
- Fix return statements in DirectlyGetModels parsing logic
- Add TODO comment for proper context handling
- Simplify model ID extraction without error propagation
2025-07-11 12:59:55 -07:00
Kayvan Sylvan
6aeca6e4da fix: improve error message in DirectlyGetModels to include provider name
## CHANGES

- Add provider name to API base URL error message
- Enhance error context for better debugging experience
- Include GetName() method call in error formatting
2025-07-11 12:50:57 -07:00
Kayvan Sylvan
b34f249e24 feat: add context support to DirectlyGetModels method
## CHANGES

- Add context parameter to DirectlyGetModels method signature
- Add nil context check with Background fallback
- Extract magic number 500 into errorResponseLimit constant
- Update DirectlyGetModels call to pass context.Background
- Import context package in providers_config.go file
2025-07-11 12:43:31 -07:00
Kayvan Sylvan
b187a80275 refactor: replace string manipulation with url.JoinPath for models endpoint construction 2025-07-11 12:29:12 -07:00
Kayvan Sylvan
a6fc54a991 refactor: improve OpenAI compatible models API client with timeout and cleaner parsing 2025-07-11 12:09:20 -07:00
Kayvan Sylvan
b9f4b9837a refactor: extract model ID parsing logic into reusable helper function 2025-07-11 11:57:04 -07:00
Kayvan Sylvan
2bedf35957 fix: enhance error messages in OpenAI compatible models endpoint with response body details 2025-07-11 11:17:22 -07:00
Kayvan Sylvan
b9df64a0d8 feat: add direct model fetching support for non-standard providers
- Add `DirectlyGetModels` function to handle non-standard API responses
- Add Together provider configuration to ProviderMap
- Implement fallback to direct model fetching when standard method fails
2025-07-11 09:12:53 -07:00
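
A sketch that combines the behaviors from the `DirectlyGetModels` commits above (single `io.ReadAll` of the body, the `errorResponseLimit` truncation, `url.JoinPath`, and context support), written as a standalone function with illustrative parameters rather than the plugin's actual method.

```go
package providers

import (
	"context"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
	"net/url"
)

// errorResponseLimit caps how much of a failed response body is echoed back
// in error messages, so logs are not flooded by large error pages.
const errorResponseLimit = 1024

type modelsResponse struct {
	Data []struct {
		ID string `json:"id"`
	} `json:"data"`
}

// directlyGetModels queries {baseURL}/models and parses the IDs itself, as a
// fallback for providers whose responses the standard client cannot handle.
func directlyGetModels(ctx context.Context, providerName, baseURL, apiKey string) ([]string, error) {
	if ctx == nil {
		ctx = context.Background()
	}

	endpoint, err := url.JoinPath(baseURL, "models")
	if err != nil {
		return nil, fmt.Errorf("%s: invalid API base URL: %w", providerName, err)
	}

	req, err := http.NewRequestWithContext(ctx, http.MethodGet, endpoint, nil)
	if err != nil {
		return nil, err
	}
	req.Header.Set("Authorization", "Bearer "+apiKey)

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	// Read the body once; reuse the bytes for both decoding and error output.
	bodyBytes, err := io.ReadAll(resp.Body)
	if err != nil {
		return nil, err
	}

	var parsed modelsResponse
	if err := json.Unmarshal(bodyBytes, &parsed); err != nil {
		if len(bodyBytes) > errorResponseLimit {
			bodyBytes = append(bodyBytes[:errorResponseLimit], "..."...)
		}
		return nil, fmt.Errorf("%s: unexpected models response: %s", providerName, bodyBytes)
	}

	modelIDs := make([]string, 0, len(parsed.Data))
	for _, m := range parsed.Data {
		modelIDs = append(modelIDs, m.ID)
	}
	return modelIDs, nil
}
```
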
Kayvan Sylvan
6b07b33ff2 fix: broken image link 2025-07-09 11:41:33 -07:00
Kayvan Sylvan
ff245edd51 Merge pull request #1599 from ksylvan/0708-fix-documentation-paths-for-restructured-repo
Update file paths to reflect new data directory structure
2025-07-09 08:47:21 -07:00
Kayvan Sylvan
2e0a4da876 docs: update file paths to reflect new data directory structure
## CHANGES

- Move fabric logo image path to docs directory
- Update patterns directory reference to data/patterns location
- Update strategies directory reference to data/strategies location
- Fix create_coding_feature README path reference
- Update code_helper install path to cmd directory
2025-07-09 08:44:12 -07:00
github-actions[bot]
1f3befbbbc Update version to v1.4.244 and commit 2025-07-09 14:58:49 +00:00
Kayvan Sylvan
8988206fbe Merge pull request #1598 from jaredmontoya/flake-enhance 2025-07-09 07:57:12 -07:00
jaredmontoya
1bd5f9d7e4 shell: fix typo 2025-07-09 14:00:59 +02:00
github-actions[bot]
832fd2f718 Update version to v1.4.243 and commit 2025-07-09 10:53:52 +00:00
Kayvan Sylvan
dd0935fb70 Merge pull request #1597 from ksylvan/0708-more-refactoring-fixes-for-patterns-loading
CLI Refactoring: Modular Command Processing and Pattern Loading Improvements
2025-07-09 03:52:25 -07:00
Kayvan Sylvan
e64bdd849c fix: improve error handling and temporary file management in patterns loader
## CHANGES

- Replace println with fmt.Fprintln to stderr for errors
- Use os.MkdirTemp for secure temporary directory creation
- Remove unused time import from patterns loader
- Add proper error wrapping for file operations
- Handle RemoveAll errors with warning messages
- Improve error messages with context information
- Add explicit error checking for cleanup operations
2025-07-09 03:40:03 -07:00
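
A minimal sketch of the temporary-directory and error-reporting pattern described in the commit above, assuming a simplified download step; the function name is illustrative.

```go
package patterns

import (
	"fmt"
	"os"
)

// downloadPatterns shows the pattern: a securely created temp directory,
// wrapped errors with context, and cleanup failures reported to stderr as
// warnings instead of being silently ignored.
func downloadPatterns() error {
	tmpDir, err := os.MkdirTemp("", "fabric-patterns-*")
	if err != nil {
		return fmt.Errorf("could not create temporary patterns directory: %w", err)
	}
	defer func() {
		if rmErr := os.RemoveAll(tmpDir); rmErr != nil {
			fmt.Fprintln(os.Stderr, "warning: failed to remove", tmpDir, ":", rmErr)
		}
	}()

	// ... fetch and unpack patterns into tmpDir, then move them into place ...
	return nil
}
```
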
Kayvan Sylvan
be82b4b013 chore: improve error handling for scraping configuration in tools.go 2025-07-09 03:23:29 -07:00
Kayvan Sylvan
6e2f00090c chore: enhance error handling and early returns in CLI
### CHANGES
- Add early return if registry is nil to prevent panics.
- Introduce early return for non-chat tool operations.
- Update error message to use original input on HTML readability failure.
- Enhance error wrapping for playlist video fetching.
- Modify temp patterns folder name with timestamp for uniqueness.
- Improve error handling for patterns directory access.
2025-07-09 03:15:57 -07:00
jaredmontoya
7d6505fe98 update-mod: fix generation path 2025-07-09 12:07:56 +02:00
jaredmontoya
23c1437794 shell: rename command 2025-07-09 12:07:07 +02:00
jaredmontoya
dd5e57477f nix:pkgs:fabric: use self reference 2025-07-09 12:07:07 +02:00
Kayvan Sylvan
2c2b374664 chore: remove fabric binary 2025-07-09 02:58:09 -07:00
Kayvan Sylvan
b884c529bd chore: update command handlers to return 'handled' boolean
### CHANGES

- Add `handled` boolean return to command handlers
- Modify `handleSetupAndServerCommands` to use `handled`
- Update `handleConfigurationCommands` with `handled` logic
- Implement `handled` return in `handleExtensionCommands`
- Revise `handleListingCommands` to support `handled` return
- Adjust `handleManagementCommands` to return `handled`
2025-07-09 02:57:29 -07:00
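
A trimmed-down sketch of the "handled" convention from the commit above: each handler reports whether it consumed the invocation so the caller can stop before falling through to chat processing. The flag fields and handler bodies are placeholders.

```go
package cli

import "fmt"

// Flags is a stand-in for the real CLI flag struct.
type Flags struct {
	Setup        bool
	ListPatterns bool
	Message      string
}

func handleSetupCommands(f *Flags) (handled bool, err error) {
	if !f.Setup {
		return false, nil
	}
	fmt.Println("running setup...")
	return true, nil
}

func handleListingCommands(f *Flags) (handled bool, err error) {
	if !f.ListPatterns {
		return false, nil
	}
	fmt.Println("listing patterns...")
	return true, nil
}

// Cli delegates to the specialized handlers in order and only reaches the
// chat path when nothing else handled the request.
func Cli(f *Flags) error {
	for _, handle := range []func(*Flags) (bool, error){
		handleSetupCommands,
		handleListingCommands,
	} {
		handled, err := handle(f)
		if err != nil || handled {
			return err
		}
	}
	fmt.Println("processing chat:", f.Message)
	return nil
}
```
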
Kayvan Sylvan
137aff2268 feat: refactor CLI to modularize command handling
### CHANGES

* Extract chat processing logic into separate function
* Create modular command handlers for setup, configuration, listing, management, and extensions
* Improve patterns loader with migration support and better error handling
* Simplify main CLI logic by delegating to specialized handlers
* Enhance code organization and maintainability
* Add tool processing for YouTube and web scraping functionality
2025-07-09 02:29:38 -07:00
github-actions[bot]
42d3f45c57 Update version to v1.4.242 and commit 2025-07-09 07:49:40 +00:00
Kayvan Sylvan
f8ada0b148 Merge pull request #1596 from ksylvan/0708-fix-patterns-workflow
Fix patterns zipping workflow
2025-07-09 00:48:08 -07:00
Kayvan Sylvan
cb5fa50f68 chore: update workflow paths to reflect directory structure change
### CHANGES

- Modify trigger path to `data/patterns/**`
- Update `git diff` command to new path
- Change zip command to include `data/patterns/` directory
2025-07-09 00:43:55 -07:00
github-actions[bot]
921d12a153 Update version to v1.4.241 and commit 2025-07-09 07:40:23 +00:00
Kayvan Sylvan
725c6f9327 Merge pull request #1595 from ksylvan/0708-project-restructure
Restructure project to align with standard Go layout
2025-07-09 00:38:49 -07:00
Kayvan Sylvan
9a64238f18 docs: Update README with new go install commands 2025-07-09 00:33:21 -07:00
Kayvan Sylvan
af318aca17 fix: minor edit 2025-07-09 00:24:35 -07:00
Kayvan Sylvan
4d7bc7deb8 docs: update restructure guide with Homebrew and go install details
### CHANGES

- Document required Homebrew formula update for new structure.
- Add new `go install` commands for all tools.
- Specify new build path is `./cmd/fabric`.
- Include link to the draft Homebrew PR.
2025-07-09 00:11:15 -07:00
Kayvan Sylvan
da1336e8cb feat: add new patterns for content tagging and cognitive bias analysis
## CHANGES

- Fix static directory path in extract_patterns.py script
- Add apply_ul_tags pattern for content categorization
- Add t_check_dunning_kruger pattern for bias analysis
- Update pattern descriptions with new entries
- Sync web static data with latest patterns
- Include pattern extracts for new functionality
- Support standardized content topic classification
- Enable cognitive bias identification capabilities
2025-07-08 23:48:30 -07:00
Kayvan Sylvan
81adb3b050 docs: update project restructuring status and reorganize pattern scripts
## CHANGES

- Mark all 10 migration steps as completed
- Add restructuring completion status section
- Move pattern generation scripts to pattern_descriptions
- Update completion checkmarks throughout migration plan
- Document remaining external packaging verification tasks
- Consolidate pattern description files under new directory
- Confirm all binaries compile and tests pass
- Note standard Go project layout achieved
2025-07-08 23:36:56 -07:00
Kayvan Sylvan
ebc59ee82a refactor: move common package to domain and util packages for better organization
## CHANGES

- Move domain types from common to domain package
- Move utility functions from common to util package
- Update all import statements across codebase
- Reorganize OAuth storage functionality into util package
- Move file management functions to domain package
- Update test files to use new package structure
- Maintain backward compatibility for existing functionality
2025-07-08 23:26:11 -07:00
Kayvan Sylvan
e242e0fc52 refactor: alias server package import as restapi for clarity
### CHANGES

*   Rename the `server` package import to `restapi`.
*   Improve code readability and prevent naming collisions.
2025-07-08 23:03:52 -07:00
Kayvan Sylvan
4004c51b9e refactor: restructure project to align with standard Go layout
### CHANGES

- Introduce `cmd` directory for all main application binaries.
- Move all Go packages into the `internal` directory.
- Rename the `restapi` package to `server` for clarity.
- Consolidate patterns and strategies into a new `data` directory.
- Group all auxiliary scripts into a new `scripts` directory.
- Move all documentation and images into a `docs` directory.
- Update all Go import paths to reflect the new structure.
- Adjust CI/CD workflows and build commands for new layout.
2025-07-08 22:47:17 -07:00
Kayvan Sylvan
6d67223a4b Merge pull request #1594 from amancioandre/main
Adds check Dunning-Kruger Telos self-evaluation pattern
2025-07-07 18:38:28 -07:00
github-actions[bot]
265f2b807e Update version to v1.4.240 and commit 2025-07-07 21:25:55 +00:00
Kayvan Sylvan
dc63e0d1cc Merge pull request #1593 from ksylvan/0707-claude-oauth-improvement
Refactor: Generalize OAuth flow for improved token handling.
2025-07-07 14:24:22 -07:00
Kayvan Sylvan
75842d8610 chore: refactor token path to use authTokenIdentifier 2025-07-07 13:59:13 -07:00
Kayvan Sylvan
bcd4c6caea test: add comprehensive OAuth testing suite for Anthropic plugin
## CHANGES

- Add OAuth test file with 434 lines coverage
- Create mock token server for safe testing
- Implement PKCE generation and validation tests
- Add token expiration logic verification tests
- Create OAuth transport round-trip testing
- Add benchmark tests for performance validation
- Implement helper functions for test token creation
- Add comprehensive error path testing scenarios
2025-07-07 13:50:57 -07:00
Kayvan Sylvan
a6a63698e1 fix: update RefreshToken to use tokenIdentifier parameter 2025-07-07 13:31:08 -07:00
Kayvan Sylvan
0528556b5c refactor: replace hardcoded "claude" with configurable authTokenIdentifier parameter
## CHANGES

- Replace hardcoded "claude" string with `authTokenIdentifier` constant
- Update `RunOAuthFlow` to accept token identifier parameter
- Modify `RefreshToken` to use configurable token identifier
- Update `exchangeToken` to accept token identifier parameter
- Enhance `getValidToken` to use parameterized token identifier
- Add token refresh attempt before full OAuth flow
- Improve OAuth flow with existing token validation
2025-07-07 13:19:00 -07:00
github-actions[bot]
47cf24e19d Update version to v1.4.239 and commit 2025-07-07 18:58:38 +00:00
Kayvan Sylvan
3f07afbef4 Merge pull request #1592 from ksylvan/0707-possible-go-routine-race-condition-fix
Fix Streaming Error Handling in Chatter
2025-07-07 11:57:11 -07:00
Kayvan Sylvan
38d714dccd chore: improve error comparison in TestChatter_Send_StreamingErrorPropagation 2025-07-07 11:20:01 -07:00
Kayvan Sylvan
d0b5c95d61 chore: remove redundant channel closure in Send method
### CHANGES

- Remove redundant `close(responseChan)` in `Send` method
- Update `SendStream` to close `responseChan` properly
- Modify test to reflect channel closure logic
2025-07-07 11:02:04 -07:00
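
A simplified sketch of the channel ownership that this series of streaming commits converges on: the streaming side owns closing the response channel, the caller drains it, and any error is checked only after the stream ends. Strings stand in for the real chat message types.

```go
package core

import (
	"fmt"
	"strings"
)

// sendStream stands in for the vendor SendStream call; it owns closing
// responseChan so the reader always terminates, even on error.
func sendStream(chunks []string, responseChan chan<- string) error {
	defer close(responseChan)
	for _, c := range chunks {
		responseChan <- c
	}
	return nil
}

// send aggregates streamed chunks into one message and surfaces any
// streaming error after the response channel is drained.
func send(chunks []string) (string, error) {
	responseChan := make(chan string)
	errChan := make(chan error, 1)

	go func() {
		if err := sendStream(chunks, responseChan); err != nil {
			errChan <- err
		}
		close(errChan)
	}()

	var sb strings.Builder
	for chunk := range responseChan {
		sb.WriteString(chunk)
	}
	// The stream has ended; report any error the goroutine recorded.
	if err := <-errChan; err != nil {
		return "", fmt.Errorf("streaming failed: %w", err)
	}
	return sb.String(), nil
}
```
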
Kayvan Sylvan
f8f80ca206 chore: rename doneChan to done and add streaming aggregation test
## CHANGES

- Rename `doneChan` variable to `done` for consistency
- Add `streamChunks` field to mock vendor struct
- Implement chunk sending logic in mock SendStream method
- Add comprehensive streaming success aggregation test case
- Verify message aggregation from multiple stream chunks
- Test assistant response role and content validation
- Ensure proper session handling in streaming scenarios
2025-07-07 10:49:29 -07:00
Kayvan Sylvan
0af458872f feat: add test for Chatter's Send method error propagation
### CHANGES

- Implement mockVendor for testing ai.Vendor interface
- Add TestChatter_Send_StreamingErrorPropagation test case
- Verify error propagation in Chatter's Send method
- Ensure session returns even on streaming error
- Create temporary database for testing Chatter functionality
2025-07-07 10:36:40 -07:00
Kayvan Sylvan
24e46a6f37 chore: rename channels for clarity in Send method
### CHANGES

- Rename `done` to `doneChan` for clarity
- Adjust channel closure for `doneChan`
- Update channel listening logic to use `doneChan`
2025-07-07 10:28:54 -07:00
Kayvan Sylvan
d6a31e68b0 refactor: rename channel variable to responseChan for better clarity in streaming logic
## CHANGES

- Rename `channel` variable to `responseChan` for clarity
- Update channel references in goroutine defer statements
- Pass renamed channel to `SendStream` method call
- Maintain consistent naming throughout streaming flow
2025-07-07 10:23:42 -07:00
Kayvan Sylvan
b1013ca61b chore: close channel after sending stream in Send
### CHANGES

- Add `channel` closure after sending stream
- Ensure resource cleanup in `Send` method
2025-07-07 10:09:24 -07:00
Kayvan Sylvan
6b4ce946a5 chore: refactor error handling and response aggregation in Send
### CHANGES

- Simplify response aggregation loop in `Send`
- Remove redundant select case for closed channel
- Streamline error checking from `errChan`
- Ensure goroutine completion before returning
2025-07-07 09:39:58 -07:00
Kayvan Sylvan
2d2830e9c8 chore: enhance Chatter.Send method with proper goroutine synchronization
### CHANGES
- Add `done` channel to track goroutine completion.
- Replace `errChan` closure with `done` channel closure.
- Ensure main loop waits for goroutine on channel close.
- Synchronize error handling with `done` channel wait.
2025-07-07 09:09:04 -07:00
Kayvan Sylvan
115327fdab refactor: use select to handle stream and error channels concurrently
### CHANGES

- Replace for-range loop with a non-blocking select statement.
- Process message and error channels concurrently for better handling.
- Improve the robustness of streaming error detection.
- Exit loop cleanly when the message channel closes.
2025-07-07 08:37:31 -07:00
Kayvan Sylvan
e672f9b73f chore: simplify error handling in streaming chat response by removing unnecessary select statement 2025-07-07 08:15:24 -07:00
Kayvan Sylvan
ef4364a1aa fix: improve error handling in streaming chat functionality
## CHANGES

- Add dedicated error channel for stream operations
- Separate error handling from message streaming logic
- Check for streaming errors after channel closure
- Close error channel properly in goroutine cleanup
- Remove error messages from message stream channel
- Add proper error propagation for stream failures
2025-07-07 03:31:58 -07:00
github-actions[bot]
cb3f8ed43d Update version to v1.4.238 and commit 2025-07-07 10:24:00 +00:00
Kayvan Sylvan
4c1803cb6d Merge pull request #1591 from ksylvan/0707-anthropic-can-now-use-only-oauth
Improved Anthropic Plugin Configuration Logic
2025-07-07 03:22:27 -07:00
Kayvan Sylvan
d1c614d44e refactor: extract vendor token identifier constant and remove redundant configure call
## CHANGES

- Extract vendor token identifier into named constant
- Remove redundant Configure() call from IsConfigured method
- Use constant for token validation consistency
- Improve code maintainability with centralized identifier
2025-07-07 03:16:45 -07:00
Kayvan Sylvan
dbaa0b9754 feat: add vendor configuration validation and OAuth auto-authentication
## CHANGES

- Add IsConfigured check to vendor configuration loop
- Implement IsConfigured method for Anthropic client validation
- Remove conditional API key requirement based on OAuth
- Add automatic OAuth flow when no valid token
- Validate both API key and OAuth token configurations
- Simplify API key setup question logic
- Add token expiration checking with 5-minute buffer
2025-07-07 02:49:27 -07:00
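
A small sketch of the 5-minute expiration buffer mentioned above; the token struct is illustrative, not the plugin's actual type.

```go
package anthropic

import "time"

// tokenExpiryBuffer mirrors the 5-minute buffer described above: a token
// that expires within the buffer is treated as already expired so it is
// refreshed before a request can fail mid-flight.
const tokenExpiryBuffer = 5 * time.Minute

type oauthToken struct {
	AccessToken string
	ExpiresAt   time.Time
}

// isValid reports whether the token exists and will remain valid past the
// buffer window.
func (t oauthToken) isValid() bool {
	if t.AccessToken == "" {
		return false
	}
	return time.Until(t.ExpiresAt) > tokenExpiryBuffer
}
```
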
github-actions[bot]
4cfe2375ab Update version to v1.4.237 and commit 2025-07-07 03:05:51 +00:00
Kayvan Sylvan
2b371b69c7 Merge pull request #1590 from ksylvan/0706-webui-topp-fix
Do not pass non-default TopP values
2025-07-06 20:04:16 -07:00
Kayvan Sylvan
6222a613e4 fix: add conditional check for TopP parameter in OpenAI client
## CHANGES

- Add zero-value check before setting TopP parameter
- Prevent sending TopP when value is zero
- Apply fix to both chat completions method
- Apply fix to response parameters method
- Ensure consistent parameter handling across OpenAI calls
2025-07-06 19:53:21 -07:00
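
A sketch of the zero-value guard from the commit above: TopP is only forwarded when the caller actually set it, so the provider's default applies instead of a literal 0. The params struct here is an illustrative stand-in for the SDK's request parameters.

```go
package openai

type chatOptions struct {
	Temperature float64
	TopP        float64
}

type requestParams struct {
	Temperature *float64
	TopP        *float64
}

// buildChatCompletionParams only sets TopP when it is non-zero; leaving the
// pointer nil omits the field from the outgoing request entirely.
func buildChatCompletionParams(opts chatOptions) requestParams {
	params := requestParams{Temperature: &opts.Temperature}
	if opts.TopP != 0 {
		params.TopP = &opts.TopP
	}
	return params
}
```
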
github-actions[bot]
0882c43532 Update version to v1.4.236 and commit 2025-07-06 19:39:24 +00:00
Kayvan Sylvan
f0e1a1b77f Merge pull request #1587 from ksylvan/0705-enhance-bug-report-template
Enhance bug report template
2025-07-06 12:37:56 -07:00
Kayvan Sylvan
a774f991ab chore: enhance bug report template with detailed system info and installation method fields
## CHANGES

- Add detailed instructions for bug reproduction steps
- Include operating system dropdown with specific architectures
- Add OS version textarea with command examples
- Create installation method dropdown with all options
- Replace version checkbox with proper version output field
- Improve formatting and organization of form sections
- Add helpful links to installation documentation
2025-07-06 11:01:53 -07:00
github-actions[bot]
a40bacaf34 Update version to v1.4.235 and commit 2025-07-06 10:36:33 +00:00
Kayvan Sylvan
969b85380c Merge pull request #1586 from ksylvan/0705-another-fix-for-cistom-directory
Fix to persist the CUSTOM_PATTERNS_DIRECTORY variable
2025-07-06 03:35:05 -07:00
Kayvan Sylvan
e8fe4434db fix: make custom patterns persist correctly 2025-07-06 03:29:10 -07:00
github-actions[bot]
7c7ceca264 Update version to v1.4.234 and commit 2025-07-06 09:00:46 +00:00
Kayvan Sylvan
c19d7ccd9d Merge pull request #1581 from ksylvan/0705-custom-directory-creation-bug
Fix Custom Patterns Directory Creation Logic
2025-07-06 01:59:19 -07:00
Kayvan Sylvan
bd0c5f730e chore: improve directory creation logic in configure method
### CHANGES

- Add `fmt` package for logging errors
- Check directory existence before creating
- Log error without clearing directory value
2025-07-06 01:55:18 -07:00
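
A minimal sketch of the directory-creation logic described above, assuming a helper called during configure; the key point is that a creation failure is logged without clearing the configured value.

```go
package custompatterns

import (
	"fmt"
	"os"
)

// ensureDir creates the custom patterns directory only when it does not
// already exist; on failure it logs the error and keeps the configured
// value so the user can create the directory manually later.
func ensureDir(dir string) {
	if _, err := os.Stat(dir); os.IsNotExist(err) {
		if mkErr := os.MkdirAll(dir, 0o755); mkErr != nil {
			fmt.Fprintf(os.Stderr, "could not create custom patterns directory %s: %v\n", dir, mkErr)
		}
	}
}
```
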
github-actions[bot]
5900dac58f Update version to v1.4.233 and commit 2025-07-06 08:36:45 +00:00
Kayvan Sylvan
237219c3cc Merge pull request #1580 from ksylvan/0705-fix-custom-pattern-loading
Alphabetical Pattern Sorting and Configuration Refactor
2025-07-06 01:35:19 -07:00
Kayvan Sylvan
26fd700098 refactor: move custom patterns directory initialization to Configure method
- Move custom patterns directory logic to Configure method
- Initialize CustomPatternsDir after loading .env file
- Add alphabetical sorting to pattern names retrieval
- Override ListNames method for PatternsEntity class
- Improve pattern listing with proper error handling
- Ensure custom patterns loaded after environment configuration
2025-07-06 01:30:12 -07:00
Kayvan Sylvan
6bd926dd0f Merge pull request #1578 from ksylvan/0705-custom-pattern-readme
Document Custom Patterns Directory Support
2025-07-06 00:38:01 -07:00
Kayvan Sylvan
16ac519415 docs: add comprehensive custom patterns setup and usage guide
## CHANGES

- Add custom patterns directory setup instructions
- Document priority system for custom vs built-in patterns
- Include step-by-step custom pattern creation workflow
- Explain update-safe custom pattern storage
- Add table of contents entries for new sections
- Document seamless integration with existing fabric commands
- Clarify privacy and precedence behavior for custom patterns
2025-07-06 00:32:54 -07:00
github-actions[bot]
a32cc5fa01 Update version to v1.4.232 and commit 2025-07-06 07:22:25 +00:00
Daniel Miessler 🛡️
26b5bb2e9e Merge pull request #1577 from ksylvan/0705-custom-patterns-dir
Add Custom Patterns Directory Support
2025-07-06 00:20:58 -07:00
Kayvan Sylvan
b751d323b1 feat: add custom patterns directory support with environment variable configuration
## CHANGES

- Add custom patterns directory support via environment variable
- Implement custom patterns plugin with registry integration
- Override main patterns with custom directory patterns
- Expand home directory paths in custom patterns config
- Add comprehensive test coverage for custom patterns functionality
- Integrate custom patterns into plugin setup workflow
- Support pattern precedence with custom over main patterns
2025-07-05 23:51:43 -07:00
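
A sketch of the environment-variable lookup with home-directory expansion described above, using the CUSTOM_PATTERNS_DIRECTORY variable named elsewhere in this log; the function name and empty-means-disabled convention are assumptions.

```go
package custompatterns

import (
	"os"
	"path/filepath"
	"strings"
)

// customPatternsDir reads CUSTOM_PATTERNS_DIRECTORY and expands a leading
// "~" to the user's home directory. An empty result disables the feature.
func customPatternsDir() string {
	dir := os.Getenv("CUSTOM_PATTERNS_DIRECTORY")
	if dir == "" {
		return ""
	}
	if strings.HasPrefix(dir, "~") {
		if home, err := os.UserHomeDir(); err == nil {
			dir = filepath.Join(home, strings.TrimPrefix(dir, "~"))
		}
	}
	return dir
}
```
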
github-actions[bot]
d081fd269c Update version to v1.4.231 and commit 2025-07-05 22:25:30 +00:00
Kayvan Sylvan
369a0a850d Merge pull request #1565 from ksylvan/0701-claude-oauth-support
OAuth Authentication Support for Anthropic
2025-07-05 15:23:56 -07:00
Kayvan Sylvan
8dc5343ee6 fix: remove duplicate API key setup question in Anthropic client 2025-07-05 15:05:21 -07:00
Kayvan Sylvan
eda552dac5 refactor: extract OAuth functionality from anthropic client to separate module
## CHANGES

- Remove OAuth transport implementation from main client
- Extract OAuth flow functions to separate module
- Remove unused imports and constants from client
- Replace inline OAuth transport with NewOAuthTransport call
- Update runOAuthFlow to exported RunOAuthFlow function
- Clean up token management and refresh logic
- Simplify client configuration by removing OAuth internals
2025-07-05 14:59:38 -07:00
Kayvan Sylvan
f13a56685b feat: add OAuth login support for Anthropic API configuration 2025-07-05 14:46:43 -07:00
Kayvan Sylvan
2f9afe0247 feat: remove OAuth flow functions for simplified token handling 2025-07-05 11:45:25 -07:00
Kayvan Sylvan
1ec525ad97 chore: simplify base URL configuration in configure method
### CHANGES

- Remove redundant base URL trimming logic
- Append base URL directly without modification
- Eliminate conditional check for API version suffix
2025-07-05 11:39:35 -07:00
Kayvan Sylvan
b7dc6748e0 feat: enhance OAuth authentication flow with automatic re-authentication and timeout handling
## CHANGES

- Add automatic OAuth flow initiation when no token exists
- Implement fallback re-authentication when token refresh fails
- Add timeout contexts for OAuth and refresh operations
- Create context-aware OAuth flow and token exchange functions
- Enhance error handling with graceful authentication recovery
- Add user input timeout protection for authorization codes
- Preserve refresh tokens during token exchange operations
2025-07-05 09:59:27 -07:00
Kayvan Sylvan
f1b612d828 refactor: remove OAuth endpoint logic and standardize on v2 API endpoint
## CHANGES

- Remove OAuth-specific v1 endpoint handling logic
- Standardize all API calls to use v2 endpoint
- Simplify baseURL configuration by removing conditional branching
- Update endpoint logic to always append v2 suffix
2025-07-05 09:37:57 -07:00
Kayvan Sylvan
eac5a104f2 feat: implement OAuth token refresh and persistent storage for Claude authentication
## CHANGES

- Add automatic OAuth token refresh when expired
- Implement persistent token storage using common OAuth storage
- Remove deprecated AuthToken setting from client configuration
- Add token validation with 5-minute expiration buffer
- Create refreshToken function for seamless token renewal
- Update OAuth flow to save complete token information
- Enhance error handling for OAuth authentication failures
- Simplify client configuration by removing manual token management
2025-07-05 09:17:50 -07:00
Kayvan Sylvan
4bff88fae3 feat: add OAuth authentication support for Anthropic Claude
- Move golang.org/x/oauth2 from indirect to direct dependency
- Add OAuth login option for Anthropic client
- Implement PKCE OAuth flow with browser integration
- Add custom HTTP transport for OAuth Bearer tokens
- Support both API key and OAuth authentication methods
- Add Claude Code system message for OAuth sessions
- Update REST API to handle OAuth tokens
- Improve environment variable name sanitization with regex
2025-07-05 08:32:16 -07:00
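
A minimal sketch of the PKCE piece of the OAuth flow introduced above, following RFC 7636's S256 method; the rest of the browser-based flow is omitted.

```go
package anthropic

import (
	"crypto/rand"
	"crypto/sha256"
	"encoding/base64"
)

// generatePKCE produces a random code verifier and its S256 challenge, the
// pair sent to the authorization server during the PKCE OAuth flow.
func generatePKCE() (verifier, challenge string, err error) {
	buf := make([]byte, 32)
	if _, err = rand.Read(buf); err != nil {
		return "", "", err
	}
	verifier = base64.RawURLEncoding.EncodeToString(buf)

	sum := sha256.Sum256([]byte(verifier))
	challenge = base64.RawURLEncoding.EncodeToString(sum[:])
	return verifier, challenge, nil
}
```
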
github-actions[bot]
acf1be71ce Update version to v1.4.230 and commit 2025-07-05 07:08:05 +00:00
Kayvan Sylvan
236a3c5f38 Merge pull request #1575 from ksylvan/0704-advanced-image-output-options
Advanced image generation parameters for OpenAI models
2025-07-05 00:06:38 -07:00
Kayvan Sylvan
b2418984f8 feat: add advanced image generation parameters for OpenAI models
## CHANGES

- Add four new image generation CLI flags
- Implement validation for image parameter combinations
- Support size, quality, compression, and background controls
- Add comprehensive test coverage for new parameters
- Update shell completions for new image options
- Enhance README with detailed image generation examples
- Fix PowerShell code block formatting issues
2025-07-04 23:04:50 -07:00
github-actions[bot]
152d74d160 Update version to v1.4.229 and commit 2025-07-05 01:59:22 +00:00
Kayvan Sylvan
4e16bbccd8 Merge pull request #1574 from ksylvan/0704-image-tool-model-validation
Add Model Validation for Image Generation and Fix CLI Flag Mapping
2025-07-04 18:57:51 -07:00
Kayvan Sylvan
60174f41a4 refactor: extract supported models list to shared constant for image generation validation
## CHANGES

• Extract hardcoded model lists into shared constant
• Create ImageGenerationSupportedModels variable for reusability
• Update supportsImageGeneration function to use shared constant
• Refactor error messages to reference centralized model list
• Add documentation comment for supported models variable
• Import strings package in test file
• Consolidate duplicate model validation logic across files
2025-07-04 18:52:49 -07:00
Kayvan Sylvan
ad4683952e Merge branch 'main' into 0704-image-tool-model-validation 2025-07-04 18:45:08 -07:00
Kayvan Sylvan
86a044735b feat: add model validation for image generation support
### CHANGES

- Add model field to `BuildChatOptions` method
- Implement `supportsImageGeneration` function for model checks
- Validate model supports image generation in `sendResponses`
- Remove `mars-colony.png` from repository
- Add tests for `supportsImageGeneration` function
- Validate image file support in `TestModelValidationLogic`
2025-07-04 18:40:20 -07:00
github-actions[bot]
58583114cb Update version to v1.4.228 and commit 2025-07-05 01:13:09 +00:00
Kayvan Sylvan
36524cd2e4 Merge pull request #1573 from ksylvan/0704-image-dynamic-formats
Add Image File Validation and Dynamic Format Support
2025-07-04 18:11:36 -07:00
Kayvan Sylvan
e59156ac2b feat: add image file validation and format detection for image generation
## CHANGES

• Add image file path validation with extension checking
• Implement dynamic output format detection from file extensions
• Update BuildChatOptions method to return error for validation
• Add comprehensive test coverage for image file validation
• Upgrade YAML library from v2 to v3
• Update shell completions to reflect supported image formats
• Add error handling for existing file conflicts
• Support PNG, JPEG, JPG, and WEBP image formats
2025-07-04 17:56:59 -07:00
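
A sketch of the image-file validation and extension-based format detection from the commit above, covering the PNG, JPEG, JPG, and WEBP formats it lists; the function name and error wording are illustrative.

```go
package cli

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// validateImageFile checks that the output path uses a supported extension,
// derives the output format from that extension, and rejects paths that
// already exist rather than overwriting them.
func validateImageFile(path string) (format string, err error) {
	ext := strings.ToLower(filepath.Ext(path))
	switch ext {
	case ".png":
		format = "png"
	case ".jpg", ".jpeg":
		format = "jpeg"
	case ".webp":
		format = "webp"
	default:
		return "", fmt.Errorf("unsupported image extension %q (supported: .png, .jpg, .jpeg, .webp)", ext)
	}

	if _, statErr := os.Stat(path); statErr == nil {
		return "", fmt.Errorf("file %s already exists", path)
	}
	return format, nil
}
```
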
Daniel Miessler
1eac026e92 Added tutorial as a tag. 2025-07-04 16:42:19 -07:00
github-actions[bot]
17d863fd57 Update version to v1.4.227 and commit 2025-07-04 22:40:01 +00:00
Daniel Miessler 🛡️
7c9dbfd343 Merge pull request #1572 from ksylvan/0704-OpenAI-Image-Generation-Tool
Add Image Generation Support to Fabric
2025-07-04 15:38:27 -07:00
Kayvan Sylvan
d9260bdf26 chore: refactor image generation constants for clarity and reuse
### CHANGES

- Define `ImageGenerationResponseType` constant for response handling
- Define `ImageGenerationToolType` constant for tool type usage
- Update `addImageGenerationTool` to use defined constants
- Refactor `extractAndSaveImages` to use response type constant
2025-07-04 15:14:46 -07:00
Kayvan Sylvan
63a0cfeb1e feat: add web search and image file support to fabric CLI
## CHANGES

- Add web search tool for Anthropic and OpenAI models
- Add search location parameter for web search results
- Add image file output option with format support
- Update zsh completion with new search and image flags
- Update bash completion with new option handling logic
- Update fish completion with search and image descriptions
- Support PNG, JPG, JPEG, GIF, BMP image formats
2025-07-04 14:49:40 -07:00
Kayvan Sylvan
12fc6e2000 feat: add image generation support with OpenAI image generation model
## CHANGES

- Add `--image-file` flag for saving generated images
- Implement image generation tool integration with OpenAI
- Support image editing with attachment input files
- Add comprehensive test coverage for image features
- Update documentation with image generation examples
- Fix HTML formatting issues in README
- Improve PowerShell code block indentation
- Clean up help text formatting and spacing
2025-07-04 14:36:55 -07:00
Daniel Miessler
fe5900a5dc Fixed ul tag applier. 2025-07-04 14:25:54 -07:00
Daniel Miessler
1b6b8e3d72 Updated ul tag prompt. 2025-07-04 14:21:25 -07:00
Daniel Miessler
c85301cb1f Added the UL tags pattern. 2025-07-04 14:17:43 -07:00
github-actions[bot]
7cc8226339 Update version to v1.4.226 and commit 2025-07-04 07:25:58 +00:00
Kayvan Sylvan
fc8c4babf8 Merge pull request #1569 from ksylvan/0703-openai-web-search
OpenAI Plugin Now Supports Web Search Functionality
2025-07-04 00:24:32 -07:00
Kayvan Sylvan
bd809a1f94 docs: update README with new web search feature details 2025-07-04 00:22:36 -07:00
Kayvan Sylvan
50aec6291b feat: add web search tool support for OpenAI models with citation formatting
## CHANGES

- Enable web search tool for OpenAI models
- Add location parameter support for search results
- Extract and format citations from search responses
- Implement citation deduplication to avoid duplicates
- Add comprehensive test coverage for search functionality
- Update CLI flag description to include OpenAI
- Format citations as markdown links with sources
2025-07-04 00:01:54 -07:00
github-actions[bot]
f927fdf40f Update version to v1.4.225 and commit 2025-07-04 06:58:02 +00:00
Kayvan Sylvan
918862ef57 Merge pull request #1568 from ksylvan/0703-enhanced-anthropic-search-tool
Runtime Web Search Control via Command-Line Flag
2025-07-03 23:56:34 -07:00
Kayvan Sylvan
d9b8bc3233 chore: refactor Send method to optimize string building
### CHANGES

- Add `sourcesHeader` constant for citation section title.
- Use `strings.Builder` to construct result efficiently.
- Append sources header and citations in result builder.
- Update `ret` to use constructed string from builder.
2025-07-03 23:52:12 -07:00
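
A sketch of the citation handling these web-search commits describe: deduplicate citations, then append a sources section of markdown links using a `strings.Builder`. The `sourcesHeader` value and citation struct are assumptions.

```go
package anthropic

import (
	"fmt"
	"strings"
)

const sourcesHeader = "## Sources"

type citation struct {
	Title string
	URL   string
}

// appendCitations deduplicates citations by URL and appends them to the
// model's answer as a markdown list under a sources header.
func appendCitations(answer string, citations []citation) string {
	if len(citations) == 0 {
		return answer
	}

	var sb strings.Builder
	sb.WriteString(answer)
	sb.WriteString("\n\n" + sourcesHeader + "\n\n")

	seen := map[string]bool{}
	for _, c := range citations {
		if seen[c.URL] {
			continue
		}
		seen[c.URL] = true
		fmt.Fprintf(&sb, "- [%s](%s)\n", c.Title, c.URL)
	}
	return sb.String()
}
```
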
Kayvan Sylvan
da29b8e388 chore: remove unused web-search tool parameters for simplification
### CHANGES

- Remove unused `AllowedDomains` and `MaxUses` parameters
- Simplify `webTool` definition in `buildMessageParams` method
2025-07-03 23:41:22 -07:00
Kayvan Sylvan
5e6d4110fa refactor: extract web search tool constants in anthropic plugin
## CHANGES

- Add webSearchToolName constant for tool identification
- Add webSearchToolType constant for tool versioning
- Replace hardcoded string literals with named constants
- Improve code maintainability through constant extraction
2025-07-03 23:20:19 -07:00
Kayvan Sylvan
4bb090694b chore: update formatOptions to include search options display
### CHANGES

- Add search option status to `formatOptions`
- Include `SearchLocation` in formatted output if specified
2025-07-03 22:55:55 -07:00
Kayvan Sylvan
d232222787 feat: add web search tool support for Anthropic models
## CHANGES

- Add --search flag to enable web search
- Add --search-location for timezone-based results
- Pass search options through ChatOptions struct
- Implement web search tool in Anthropic client
- Format search citations with sources section
- Add comprehensive tests for search functionality
- Remove plugin-level web search configuration
2025-07-03 22:40:39 -07:00
apollo
a43f267a69 Merge branch 'main' of https://github.com/amancioandre/Fabric 2025-07-01 16:12:23 -07:00
apollo
c78fe41ebc fix: sections as heading 1, typos 2025-07-01 16:12:13 -07:00
Andre Amnc
cab246bc74 Merge branch 'danielmiessler:main' into main 2025-07-01 16:07:25 -07:00
apollo
50c05e2d5c feat: adds pattern telos check dunning kruger 2025-07-01 16:06:59 -07:00
github-actions[bot]
095890a556 Update version to v1.4.224 and commit 2025-07-01 21:44:24 +00:00
Kayvan Sylvan
64c1fe18ef Merge pull request #1564 from ksylvan/0701-code-review-pattern
Add code_review pattern and updates in Pattern_Descriptions
2025-07-01 14:42:50 -07:00
Kayvan Sylvan
1cea32a677 feat: handle JSONDecodeError in load_existing_file gracefully
### CHANGES

- Add JSONDecodeError handling with warning message.
- Initialize with empty list on JSON decode failure.
- Reorder pattern processing to reduce redundant logs.
- Remove redundant directory check logging.
- Ensure new pattern processing is logged correctly.
2025-07-01 14:36:35 -07:00
Kayvan Sylvan
49658a3214 feat: add new patterns for code review, alpha extraction, and server analysis
### CHANGES
- Add `review_code`, `extract_alpha`, and `extract_mcp_servers` patterns.
- Refactor the pattern extraction script for improved clarity.
- Add docstrings and specific error handling to script.
- Improve formatting in the pattern management README.
- Fix typo in the `analyze_bill_short` pattern description.
2025-07-01 14:05:41 -07:00
Kayvan Sylvan
f236cab276 feat: add comprehensive code review pattern for systematic analysis
## CHANGES

- Add new code review system prompt
- Define principal engineer reviewer role
- Include systematic analysis framework
- Specify markdown output format
- Add prioritized recommendations section
- Include detailed feedback structure
- Provide example Python review
- Cover security, performance, readability
- Add error handling guidelines
2025-07-01 13:43:04 -07:00
github-actions[bot]
5e0aaa1f93 Update version to v1.4.223 and commit 2025-07-01 14:52:18 +00:00
Kayvan Sylvan
eb16806931 Merge pull request #1563 from ksylvan/0701-fix-windows-build
Fix Cross-Platform Compatibility in Release Workflow
2025-07-01 07:50:46 -07:00
Kayvan Sylvan
474dd786a4 chore: update GitHub Actions to use bash shell in release job
### CHANGES

- Adjust repository_dispatch type spacing for consistency
- Use bash shell for creating release if absent
2025-07-01 07:45:05 -07:00
github-actions[bot]
edad63df19 Update version to v1.4.222 and commit 2025-07-01 14:17:23 +00:00
Kayvan Sylvan
c7eb7439ef Merge pull request #1559 from ksylvan/0629-openai-responses-api
OpenAI Plugin Migrates to New Responses API
2025-07-01 07:15:43 -07:00
Daniel Miessler
23d678d62f Updated alpha post. 2025-06-30 06:50:47 -07:00
Kayvan Sylvan
de5260a661 feat(openai): add support for multi-content user messages in chat completions
### CHANGES

- Enhance user message conversion to support multi-content.
- Add capability to process image URLs in messages.
- Build multi-part messages with both text and images.
2025-06-30 00:21:42 -07:00
Kayvan Sylvan
baeadc2270 chore: update NewClient to use NewClientCompatibleWithResponses
### CHANGES

- Modify `NewClient` to call `NewClientCompatibleWithResponses`
- Add support for response handling in client initialization
2025-06-30 00:13:15 -07:00
Kayvan Sylvan
5b4cec81c3 feat: simplify supportsResponsesAPI 2025-06-29 23:57:57 -07:00
Kayvan Sylvan
eda5531087 refactor: extract common message conversion logic to reduce duplication
## CHANGES

- Extract shared message conversion to convertMessageCommon
- Reuse logic between chat and response APIs
- Maintain existing text-only behavior for chat
- Support multi-content messages in response API
- Reduce code duplication across converters
- Preserve backward compatibility for both APIs
2025-06-29 23:48:14 -07:00
Kayvan Sylvan
66925d188a fix: move channel close to defer statement in OpenAI streaming methods
## CHANGES

- Move close(channel) to defer statement
- Ensure channel closes even on errors
- Apply fix to sendStreamChatCompletions method
- Apply fix to sendStreamResponses method
- Improve error handling reliability
- Prevent potential channel leaks
2025-06-29 23:27:24 -07:00
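
A sketch of the defer-based channel closure from the commit above; the `next` callback stands in for the SDK's stream iterator, which is an assumption of this example.

```go
package openai

import "context"

// sendStreamResponses closes the output channel in a defer so the reader is
// released even when stream setup or a mid-stream read fails, preventing a
// blocked caller and a leaked channel.
func sendStreamResponses(ctx context.Context, channel chan<- string, next func() (chunk string, done bool, err error)) error {
	defer close(channel)

	for {
		chunk, done, err := next()
		if err != nil {
			return err // the defer still closes the channel
		}
		if done {
			return nil
		}
		select {
		case channel <- chunk:
		case <-ctx.Done():
			return ctx.Err()
		}
	}
}
```
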
Kayvan Sylvan
6179742e79 feat: add chat completions API support for OpenAI-compatible providers
## CHANGES

* Add chat completions API fallback for non-Responses API providers
* Implement `sendChatCompletions` and `sendStreamChatCompletions` methods
* Introduce `buildChatCompletionParams` to construct API request parameters
* Add `ImplementsResponses` flag to track provider API capabilities
* Update provider configurations with Responses API support status
* Enhance `Send` and `SendStream` methods to use appropriate API endpoints
2025-06-29 22:52:55 -07:00
Kayvan Sylvan
d8fc6940f0 feat: migrate OpenAI plugin to use new responses API instead of chat completions
- Replace chat completions with responses API
- Update message conversion to new format
- Refactor streaming to handle event types
- Remove frequency and presence penalty params
- Replace seed parameter with max tokens
- Update test cases for new API
- Add response text extraction method
2025-06-29 21:06:12 -07:00
Daniel Miessler
44f7e8dfef Updated extract alpha. 2025-06-28 15:18:53 -07:00
Daniel Miessler
c5ada714ff Updated extract alpha. 2025-06-28 15:17:10 -07:00
Daniel Miessler
80c4807f7e Added extract_alpha as kind of an experiment. 2025-06-28 15:14:14 -07:00
github-actions[bot]
b4126b6798 Update version to v1.4.221 and commit 2025-06-28 15:03:37 +00:00
Daniel Miessler 🛡️
f2ffa64af9 Merge pull request #1556 from ksylvan/0628-migrate-to-official-openai-go 2025-06-28 08:02:08 -07:00
Kayvan Sylvan
09e01eddf4 refactor: abstract chat message structs and migrate to official openai-go SDK
### CHANGES

- Introduce local `chat` package for message abstraction
- Replace sashabaranov/go-openai with official openai-go SDK
- Update OpenAI, Azure, and Exolab plugins for new client
- Refactor all AI providers to use internal chat types
- Decouple codebase from third-party AI provider structs
- Replace deprecated `ioutil` functions with `os` equivalents
2025-06-28 07:28:49 -07:00
github-actions[bot]
aa028a4a57 Update version to v1.4.220 and commit 2025-06-28 05:28:02 +00:00
Kayvan Sylvan
d8d157404c Merge pull request #1555 from ksylvan/0627-github-actions-release-fix
fix: Race condition in GitHub actions release flow
2025-06-27 22:26:27 -07:00
Kayvan Sylvan
d0602c9653 chore: improve release creation to gracefully handle pre-existing tags.
### CHANGES

*   Check if a release exists before attempting creation.
*   Suppress error output from `gh release view` command.
*   Add an informative log when release already exists.
2025-06-27 21:05:14 -07:00
github-actions[bot]
35155496a4 Update version to v1.4.219 and commit 2025-06-28 02:59:32 +00:00
Kayvan Sylvan
eef16b89f2 Merge pull request #1553 from ksylvan/0627-deepwiki-badge-added
docs: add DeepWiki badge and fix minor typos in README
2025-06-27 19:58:00 -07:00
Kayvan Sylvan
7f66097577 docs: add DeepWiki badge and fix minor typos in README
## CHANGES

- Add DeepWiki badge to README header
- Fix typo "chatbots" to "chat-bots"
- Correct "Perlexity" to "Perplexity"
- Fix "distro" to "Linux distribution"
- Add alt text to contributor images
- Update dependency versions in go.mod
- Remove unused soup dependency
2025-06-27 19:34:01 -07:00
Kayvan Sylvan
2012f22a9c Merge pull request #1552 from nawarajshahi/main
Fix typos in README.md
2025-06-27 12:18:47 -07:00
Nawaraj Shahi
08695c9e24 Fix typos on README.md 2025-06-27 10:14:48 -04:00
github-actions[bot]
d8cc9b5eef Update version to v1.4.218 and commit 2025-06-27 00:22:16 +00:00
Kayvan Sylvan
9dbe20cf7b Merge pull request #1550 from ksylvan/0626-more-openai-raw-mode
Add Support for OpenAI Search and Research Model Variants
2025-06-26 17:20:47 -07:00
Kayvan Sylvan
64763e1303 feat: add support for new OpenAI search and research model variants
## CHANGES

- Add slices import for array operations
- Define new search preview model names
- Add mini search preview variants
- Include deep research model support
- Add June 2025 dated model versions
- Replace hardcoded check with slices.Contains
- Support both prefix and exact model matching
2025-06-26 17:06:25 -07:00
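
A sketch of the matching described above, combining `slices.Contains` for exact names with prefix matching for model families; the model identifiers shown are illustrative, not the plugin's actual lists.

```go
package openai

import (
	"slices"
	"strings"
)

// Illustrative lists; the real plugin defines the actual model names.
var exactSearchModels = []string{"gpt-4o-search-preview"}
var searchModelPrefixes = []string{"gpt-4o-mini-search-preview", "o3-deep-research"}

// isSearchOrResearchModel matches either an exact model name or any dated
// variant that shares one of the known prefixes.
func isSearchOrResearchModel(model string) bool {
	if slices.Contains(exactSearchModels, model) {
		return true
	}
	for _, prefix := range searchModelPrefixes {
		if strings.HasPrefix(model, prefix) {
			return true
		}
	}
	return false
}
```
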
github-actions[bot]
126a9ff406 Update version to v1.4.217 and commit 2025-06-26 23:09:56 +00:00
Kayvan Sylvan
e906425138 Merge pull request #1546 from ksylvan/0626-fix-yt-in-web-interface
New YouTube Transcript Endpoint Added to REST API
2025-06-26 16:08:23 -07:00
Daniel Miessler
df4a560302 Add extract_mcp_servers pattern
New pattern to extract mentions of MCP (Model Context Protocol) servers from content. Identifies server names, features, capabilities, and usage examples.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-06-26 11:39:21 -07:00
Kayvan Sylvan
34cf669bd4 chore: fix endpoint calls from frontend 2025-06-26 01:37:53 -07:00
Kayvan Sylvan
0dbe1bbb4e feat: add dedicated YouTube transcript API endpoint
## CHANGES

- Add new YouTube handler for transcript requests
- Create `/youtube/transcript` POST endpoint route
- Add request/response types for YouTube API
- Support language and timestamp options
- Update frontend to use new endpoint
- Remove chat endpoint dependency for transcripts
- Validate video vs playlist URLs properly
2025-06-26 01:21:27 -07:00
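
A minimal sketch of the dedicated transcript endpoint described above, written against the standard library for brevity; the request and response field names, and the injected fetch function, are illustrative rather than the REST API's actual types.

```go
package restapi

import (
	"encoding/json"
	"net/http"
)

type youtubeTranscriptRequest struct {
	URL        string `json:"url"`
	Language   string `json:"language"`
	Timestamps bool   `json:"timestamps"`
}

type youtubeTranscriptResponse struct {
	Transcript string `json:"transcript"`
}

// handleYouTubeTranscript serves POST /youtube/transcript: it takes the
// video URL plus language and timestamp options and returns the transcript
// directly, instead of routing the request through the chat endpoint.
func handleYouTubeTranscript(fetch func(url, lang string, timestamps bool) (string, error)) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		var req youtubeTranscriptRequest
		if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
			http.Error(w, err.Error(), http.StatusBadRequest)
			return
		}
		transcript, err := fetch(req.URL, req.Language, req.Timestamps)
		if err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(youtubeTranscriptResponse{Transcript: transcript})
	}
}
```
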
github-actions[bot]
e29ed908e6 Update version to v1.4.216 and commit 2025-06-26 06:52:16 +00:00
Kayvan Sylvan
3d049a435a Merge pull request #1545 from ksylvan/0625-fix-attachments-used-with-patterns
Update Message Handling for Attachments and Multi-Modal content
2025-06-25 23:50:43 -07:00
Kayvan Sylvan
1a335b3fb9 refactor(ai): unify assistant and user message formatting in dryrun
### CHANGES

- Unify assistant and user message formatting logic.
- Use `formatMultiContentMessage` for assistant role messages.
- Improve dryrun support for multi-part message content.
2025-06-25 23:49:23 -07:00
Kayvan Sylvan
e2430b6c75 fix: correctly combine text and attachments in raw mode sessions
### CHANGES

- Combine user text and attachments into MultiContent.
- Preserve existing non-text parts like images.
- Use standard content field for text-only messages.
2025-06-25 23:28:12 -07:00
Kayvan Sylvan
2497f10eca feat: add MultiContent support to chat message construction in raw mode 2025-06-25 23:18:56 -07:00
Kayvan Sylvan
f62d2198f9 refactor: extract message and option formatting logic into reusable methods
## CHANGES

- Extract multi-content message formatting to dedicated method
- Create formatMessages method for all message types
- Add formatOptions method for chat options display
- Replace inline formatting with strings.Builder usage
- Reduce code duplication between Send and SendStream
- Improve code organization and maintainability
2025-06-25 22:08:26 -07:00
Kayvan Sylvan
816e4072f4 fix(chatter): prevent duplicate user message when applying patterns
### CHANGES

*   Prevent adding user message twice when using patterns.
*   Ensure multi-part content is always included in session.
2025-06-25 21:43:46 -07:00
Kayvan Sylvan
85ee6196bd chore: fix formatting. 2025-06-25 18:31:46 -07:00
Kayvan Sylvan
e15645c1bc chore: clean up comments in chatter.go for clarity 2025-06-25 17:15:13 -07:00
Kayvan Sylvan
fada6bb044 chore: simplify user message appending logic in BuildSession
### CHANGES
- Remove conditional check for pattern name in message appending.
- Always append user message if it exists in request.
2025-06-25 17:12:48 -07:00
Kayvan Sylvan
4ad14bb752 feat: enhance dryrun client to display multi-content user messages
### CHANGES

- Handle multi-content messages for the user role.
- Display image URLs from user messages in output.
- Update both `Send` and `SendStream` methods.
- Retain existing behavior for simple text messages.
2025-06-25 17:08:30 -07:00
Kayvan Sylvan
97fc9b0d58 feat: allow combining user messages and attachments with patterns
- Allow user messages and attachments with patterns.
- Append user message to session regardless of pattern.
- Refactor chat request builder for improved clarity.
2025-06-25 16:24:47 -07:00
github-actions[bot]
ad0df37d10 Update version to v1.4.215 and commit 2025-06-25 11:07:45 +00:00
Kayvan Sylvan
666302c3c1 Merge pull request #1543 from ksylvan/0625-fix-pattern-descriptions-json
fix: Revert multiline tags in generated json files
2025-06-25 04:06:12 -07:00
Kayvan Sylvan
71e20cf251 chore: reformat pattern_descriptions.json to improve readability
### CHANGES

- Reformat JSON `tags` array to display on new lines.
- Update `write_essay` pattern description for clarity.
- Apply consistent formatting to both data files.
2025-06-25 03:55:00 -07:00
github-actions[bot]
b591666366 Update version to v1.4.214 and commit 2025-06-25 09:51:32 +00:00
Daniel Miessler 🛡️
155d9f0a76 Merge pull request #1542 from ksylvan/0624-write-essay-by-author-and-updates 2025-06-25 02:49:54 -07:00
Kayvan Sylvan
6a7cca65b4 chore: Fixes caught by review 2025-06-24 23:09:14 -07:00
Kayvan Sylvan
94020dbde0 chore: rename essay patterns to clarify Paul Graham style and author variable usage
## CHANGES

- Rename `write_essay` to `write_essay_pg` for Paul Graham style
- Rename `write_essay_by_author` to `write_essay` with author variable
- Update pattern descriptions to reflect naming changes
- Fix duplicate `write_essay_pg` entry in pattern descriptions
2025-06-24 21:54:39 -07:00
Kayvan Sylvan
f949391098 feat: add new pattern and update pattern metadata files.
### CHANGES

- Add tags and descriptions for five new creative and analytical patterns.
- Introduce `analyze_terraform_plan` for infrastructure review.
- Add `write_essay_by_author` for stylistic writing.
- Include `summarize_board_meeting` for corporate notes.
- Introduce `create_mnemonic_phrases` for memory aids.
- Update and clean pattern description data files.
- Sort the pattern explanations list alphabetically.
2025-06-24 12:42:39 -07:00
Kayvan Sylvan
64c3c69a70 Merge branch 'danielmiessler:main' into main 2025-06-23 13:03:07 -07:00
github-actions[bot]
4a830394be Update version to v1.4.213 and commit 2025-06-23 20:01:04 +00:00
Kayvan Sylvan
9f8a2d3b59 Merge pull request #1538 from andrewsjg/bug/bedrock-region-handling
Bug/bedrock region handling
2025-06-23 12:59:30 -07:00
github-actions[bot]
4353bc9f7f Update version to v1.4.212 and commit 2025-06-23 19:57:58 +00:00
Kayvan Sylvan
7a8024ee79 Merge pull request #1540 from ksylvan/0623-langdock-ai
Add Langdock AI and enhance generic OpenAI compatible support
2025-06-23 12:56:25 -07:00
Kayvan Sylvan
b5bf75ad2e chore: refactor ProviderMap for dynamic URL template handling
# CHANGES

- Add `os` and `strings` packages to imports
- Implement dynamic URL handling with environment variables
- Refactor provider configuration to support URL templates
- Reorder providers for consistent key order in ProviderMap
- Extract and parse template variables from BaseURL
- Use environment variables or default values for templates
- Replace template with actual values in BaseURL
2025-06-23 12:38:52 -07:00
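The ProviderMap change above expands URL templates from environment variables with fallback defaults. A minimal sketch, assuming a `{{ENV_VAR:default}}` template syntax (the syntax, variable name, and example URL are assumptions):

```go
package main

import (
	"fmt"
	"os"
	"regexp"
)

// tmplRe matches {{ENV_VAR:default}} placeholders in a BaseURL template.
var tmplRe = regexp.MustCompile(`\{\{([A-Z0-9_]+):([^}]*)\}\}`)

// expandBaseURL replaces each placeholder with the environment variable's
// value, or with the default embedded in the template when it is unset.
func expandBaseURL(tmpl string) string {
	return tmplRe.ReplaceAllStringFunc(tmpl, func(m string) string {
		parts := tmplRe.FindStringSubmatch(m)
		if v, ok := os.LookupEnv(parts[1]); ok && v != "" {
			return v
		}
		return parts[2]
	})
}

func main() {
	fmt.Println(expandBaseURL("https://{{LANGDOCK_REGION:eu}}.example.com/openai/v1"))
}
```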
Kayvan Sylvan
1ae847f397 chore: refactor ProviderMap for dynamic URL template handling
# CHANGES

- Add `os` and `strings` packages to imports
- Implement dynamic URL handling with environment variables
- Refactor provider configuration to support URL templates
- Reorder providers for consistent key order in ProviderMap
- Extract and parse template variables from BaseURL
- Use environment variables or default values for templates
- Replace template with actual values in BaseURL
2025-06-23 12:35:59 -07:00
Kayvan Sylvan
3fd923f6b8 chore: refactor Bedrock client to improve error handling and add interface compliance
## CHANGES

- Add ai.Vendor interface implementation check
- Improve error handling with wrapped errors
- Add AWS region validation logic
- Fix resource cleanup in SendStream
- Add nil checks for response parsing
- Update context usage to Background()
- Add user agent constants
- Enhance code documentation
2025-06-23 09:13:11 -07:00
James Andrews
eb251139b8 bedrock region handling - updated to set region value correctly if it exists in the config 2025-06-23 00:12:58 +01:00
James Andrews
0b5d3cfc30 bedrock region handling - updated to fix bad pointer reference 2025-06-23 00:03:32 +01:00
James Andrews
14a3c11930 Fixed bedrock region handling 2025-06-22 23:22:45 +01:00
James Andrews
c8cf6da0cc Updated hasAWSCredentials to also check for AWS_DEFAULT_REGION when access keys are configured in the environment 2025-06-22 14:27:04 +01:00
Daniel Miessler
a2c954ba50 Updated paper analyzer. 2025-06-19 14:48:05 -07:00
github-actions[bot]
730d0adc86 Update version to v1.4.211 and commit 2025-06-19 21:47:20 +00:00
Kayvan Sylvan
dc9168ab6f Merge pull request #1533 from ksylvan/0619-enhance-restapi-and-webui-with-variables
REST API and Web UI Now Support Dynamic Pattern Variables
2025-06-19 14:45:48 -07:00
Daniel Miessler
e500a5916e Updated paper analyzer. Went back to my own format. 2025-06-19 14:45:31 -07:00
Kayvan Sylvan
6ddf46a379 chore: removed a directory of raycast scripts sitting in the patterns/ directory 2025-06-19 14:11:29 -07:00
Kayvan Sylvan
e8aa358b15 refactor(ChatService): clean up message stream and pattern output methods
- Refactor `cleanPatternOutput` to use a dedicated return variable.
- Hoist `processResponse` function for improved stream readability.
- Remove unnecessary whitespace and trailing newlines from file.
2025-06-19 13:55:25 -07:00
Daniel Miessler
62f373c2b4 Updated paper analyzer. 2025-06-19 13:55:03 -07:00
Daniel Miessler
fcf826f3de Updated paper analyzer. 2025-06-19 13:48:57 -07:00
Kayvan Sylvan
bd2db29cee feat: add ApplyPattern route for applying patterns with variables
## CHANGES
- Create `PatternApplyRequest` struct for request body parsing
- Implement `ApplyPattern` method for POST /patterns/:name/apply
- Register manual routes for pattern operations in `NewPatternsHandler`
- Refactor `Get` method to return raw pattern content
- Merge query parameters with request body variables in `ApplyPattern`
- Use `StorageHandler` for pattern-related storage operations
2025-06-19 13:30:56 -07:00
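A hedged sketch of what a POST /patterns/:name/apply handler could look like, assuming a Gin-style router and an illustrative `PatternApplyRequest` shape; query parameters are merged over body variables as the commit above describes:

```go
package main

import (
	"net/http"
	"strings"

	"github.com/gin-gonic/gin"
)

// PatternApplyRequest mirrors the struct named in the commit; fields assumed.
type PatternApplyRequest struct {
	Input     string            `json:"input"`
	Variables map[string]string `json:"variables"`
}

func main() {
	r := gin.Default()
	r.POST("/patterns/:name/apply", func(c *gin.Context) {
		var req PatternApplyRequest
		if err := c.ShouldBindJSON(&req); err != nil {
			c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
			return
		}
		if req.Variables == nil {
			req.Variables = map[string]string{}
		}
		// Merge query parameters over the body variables.
		for k, v := range c.Request.URL.Query() {
			if len(v) > 0 {
				req.Variables[k] = v[0]
			}
		}
		// Placeholder for pattern content loaded via the storage handler.
		pattern := "Summarize the following for {{audience}}:\n" + req.Input
		for k, v := range req.Variables {
			pattern = strings.ReplaceAll(pattern, "{{"+k+"}}", v)
		}
		c.JSON(http.StatusOK, gin.H{"pattern": c.Param("name"), "prompt": pattern})
	})
	r.Run(":8080")
}
```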
Kayvan Sylvan
c6d612ee9a feat: add pattern variables support to REST API chat endpoint
## CHANGES

- Add Variables field to PromptRequest struct
- Pass pattern variables through chat handler
- Create API variables documentation example
- Add pattern variables UI in web interface
- Create pattern variables store in Svelte
- Include variables in chat service requests
- Add JSON textarea for variable input
2025-06-19 13:10:05 -07:00
Daniel Miessler
d613c25974 Updated sanitization instructions. 2025-06-19 12:24:09 -07:00
Daniel Miessler
c0abea7c66 Updated markdown cleaner. 2025-06-19 12:02:21 -07:00
Daniel Miessler
496bd2812a Updated markdown cleaner. 2025-06-19 11:34:09 -07:00
github-actions[bot]
70fccaf2fb Update version to v1.4.210 and commit 2025-06-18 07:40:11 +00:00
Kayvan Sylvan
9a71f7c96d Merge pull request #1530 from ksylvan/0617-add-citations-to-perplexity
Add Citation Support to Perplexity Response
2025-06-18 00:38:37 -07:00
Kayvan Sylvan
5da3db383d feat: add citation support to perplexity AI responses
## CHANGES

- Add citation extraction from API responses
- Append citations section to response content
- Format citations as numbered markdown list
- Handle citations in streaming responses
- Store last response for citation access
- Add citations after stream completion
- Maintain backward compatibility with responses
2025-06-17 20:45:03 -07:00
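A minimal sketch of appending citations as a numbered Markdown list, following the approach described in the commit above (function and field names assumed):

```go
package main

import (
	"fmt"
	"strings"
)

// appendCitations adds a numbered citations section after the response body.
func appendCitations(content string, citations []string) string {
	if len(citations) == 0 {
		return content
	}
	var sb strings.Builder
	sb.WriteString(content)
	sb.WriteString("\n\n## Citations\n\n")
	for i, c := range citations {
		sb.WriteString(fmt.Sprintf("%d. %s\n", i+1, c))
	}
	return sb.String()
}

func main() {
	fmt.Println(appendCitations("Answer text.", []string{
		"https://example.com/a",
		"https://example.com/b",
	}))
}
```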
Daniel Miessler 🛡️
19438cbd20 Update README.md 2025-06-17 11:52:02 -07:00
Daniel Miessler 🛡️
a0b71ee365 Update README.md
Updated readme.
2025-06-17 11:48:44 -07:00
Daniel Miessler 🛡️
034513ece5 Update README.md
An update to the intro text, describing Fabric's utility to most people.
2025-06-17 11:45:46 -07:00
github-actions[bot]
0affb9bab1 Update version to v1.4.209 and commit 2025-06-17 10:21:02 +00:00
github-actions[bot]
3305df8fb2 Update version to v1.4.208 and commit 2025-06-17 10:19:28 +00:00
Kayvan Sylvan
892c229076 Merge pull request #1527 from ksylvan/0617-add-perplexity-vendor
Add Perplexity AI Provider with Token Limits Support
2025-06-17 03:17:57 -07:00
Kayvan Sylvan
599c5f2b9f Merge pull request #1526 from ConnorKirk/check-for-aws-credentials
Check for AWS_PROFILE or AWS_ROLE_SESSION_NAME environment variables
2025-06-17 03:17:48 -07:00
Kayvan Sylvan
19e5d8dbe0 chore: update README with Perplexity AI support instructions
### CHANGES
- Add instructions for configuring Perplexity AI with Fabric
- Include example command for querying Perplexity AI
- Retain existing instructions for YouTube transcription changes
2025-06-17 02:57:37 -07:00
Kayvan Sylvan
b772127738 feat: add Perplexity AI provider support with token limits and streaming
## CHANGES

- feat: Add `MaxTokens` field to `ChatOptions` struct for response control
- feat: Integrate Perplexity client into core plugin registry initialization
- build: Add perplexity-go/v2 dependency to enable API interactions
- feat: Implement stream handling in Perplexity client using sync.WaitGroup
- fix: Correct parameter types for penalty options in API requests
## LINKS

<https://github.com/sgaunet/perlexipty-go> - Client library used
2025-06-17 02:32:53 -07:00
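A rough sketch of stream handling coordinated with sync.WaitGroup, in the spirit of the Perplexity client change; the channel layout and names are assumptions, not the actual client code:

```go
package main

import (
	"fmt"
	"sync"
)

// sendStream forwards chunks to the caller and closes the channel once the
// producer goroutine has finished, using a WaitGroup for coordination.
func sendStream(chunks []string, out chan<- string) {
	var wg sync.WaitGroup
	wg.Add(1)
	go func() {
		defer wg.Done()
		for _, c := range chunks {
			out <- c
		}
	}()
	go func() {
		wg.Wait()
		close(out)
	}()
}

func main() {
	out := make(chan string)
	sendStream([]string{"Hello, ", "world", "!"}, out)
	for chunk := range out {
		fmt.Print(chunk)
	}
	fmt.Println()
}
```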
Connor Kirkpatrick
5dd61abe2a Check for AWS_PROFILE or AWS_ROLE_SESSION_NAME environment variables 2025-06-17 10:25:17 +01:00
github-actions[bot]
f45e140126 Update version to v1.4.207 and commit 2025-06-17 07:41:51 +00:00
Kayvan Sylvan
752a66cb48 Merge pull request #1525 from ksylvan/0617-fix-lang-code-vtt-youtube-transcript-bug
Refactor yt-dlp Transcript Logic and Fix Language Bug
2025-06-17 00:40:18 -07:00
Kayvan Sylvan
da28d91d65 refactor: extract common yt-dlp logic to reduce code duplication in YouTube plugin
## CHANGES

- Extract shared yt-dlp logic into tryMethodYtDlpInternal helper
- Add processVTTFileFunc parameter for flexible VTT processing
- Implement language matching for 2-char language codes
- Refactor tryMethodYtDlp to use new helper function
- Refactor tryMethodYtDlpWithTimestamps to use helper
- Reduce code duplication between transcript methods
- Maintain existing functionality with cleaner structure
2025-06-17 00:32:33 -07:00
Daniel Miessler
5a66ca1c5a Updated extract insights. 2025-06-16 16:43:21 -07:00
Daniel Miessler
98f3da610b Updated extract insights. 2025-06-16 16:41:14 -07:00
github-actions[bot]
73ce92ccd9 Update version to v1.4.206 and commit 2025-06-16 23:12:53 +00:00
Kayvan Sylvan
7f3f1d641f Merge pull request #1523 from ksylvan/0616-bedrock-plugin-config-fix
Conditional AWS Bedrock Plugin Initialization
2025-06-16 16:10:59 -07:00
Kayvan Sylvan
44b5c46beb feat: add AWS credential detection for Bedrock client initialization
## CHANGES

- Add hasAWSCredentials helper function
- Check for AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY
- Look for AWS shared credentials file
- Support custom AWS_SHARED_CREDENTIALS_FILE path
- Default to ~/.aws/credentials location
- Only initialize Bedrock client if credentials exist
- Prevent AWS SDK credential search failures
2025-06-16 15:11:58 -07:00
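A minimal sketch of the credential-detection logic described above, assuming the standard AWS environment variables and shared-credentials path; the helper name mirrors the commit but the body is illustrative:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// hasAWSCredentials reports whether credentials appear to be available,
// either via environment variables or the shared credentials file.
func hasAWSCredentials() bool {
	if os.Getenv("AWS_ACCESS_KEY_ID") != "" && os.Getenv("AWS_SECRET_ACCESS_KEY") != "" {
		return true
	}
	credFile := os.Getenv("AWS_SHARED_CREDENTIALS_FILE")
	if credFile == "" {
		home, err := os.UserHomeDir()
		if err != nil {
			return false
		}
		credFile = filepath.Join(home, ".aws", "credentials")
	}
	_, err := os.Stat(credFile)
	return err == nil
}

func main() {
	fmt.Println("AWS credentials present:", hasAWSCredentials())
}
```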
Daniel Miessler
8d37c9d6b9 Updated prompt. 2025-06-16 13:26:13 -07:00
github-actions[bot]
1138d0b60e Update version to v1.4.205 and commit 2025-06-16 13:26:26 +00:00
Kayvan Sylvan
b78217088d Merge pull request #1519 from ConnorKirk/bedrock-plugin-dynamically-fetch-models 2025-06-16 06:24:54 -07:00
Connor Kirkpatrick
76b889733d Dynamically fetch and list available foundation models and inference profiles 2025-06-16 11:05:34 +01:00
Kayvan Sylvan
3911fd9f5d Merge pull request #1518 from ksylvan/0615-remove-old-redundant-patterns
chore: remove duplicate/outdated patterns
2025-06-15 12:56:31 -07:00
Daniel Miessler
b06e29f8a8 Updated markdown sanitizer. 2025-06-15 12:52:39 -07:00
Kayvan Sylvan
11a7e542e1 chore: remove duplicate/outdated patterns 2025-06-15 12:47:08 -07:00
Daniel Miessler
6681078259 Updated markdown cleaner. 2025-06-15 12:45:34 -07:00
Daniel Miessler
be1edf7b1d Updated markdown cleaner. 2025-06-15 12:44:15 -07:00
github-actions[bot]
8ce748a1b1 Update version to v1.4.204 and commit 2025-06-15 05:53:11 +00:00
Kayvan Sylvan
96070f6f39 Merge pull request #1517 from ksylvan/0614-prevent-race-conditions-tag-and-release
Fix: Prevent race conditions in versioning workflow.
2025-06-14 22:51:39 -07:00
Kayvan Sylvan
ca3e89a889 ci: improve version update workflow to prevent race conditions
### CHANGES

- Add concurrency control to prevent simultaneous runs
- Pull latest main branch changes before tagging
- Fetch all remote tags before calculating version
2025-06-14 22:30:54 -07:00
github-actions[bot]
47d799d7ae Update version to v1.4.203 and commit 2025-06-14 06:01:13 +00:00
Eugen Eisler
4899ce56a5 Merge pull request #1512 from ConnorKirk/1500-add-support-for-amazon-bedrock
feat: Add support for Amazon Bedrock
2025-06-14 07:59:41 +02:00
Eugen Eisler
4a7b7becec Merge pull request #1513 from marcas756/feature/create_mnemonic_phrases
feat: create mnemonic phrase pattern
2025-06-14 07:53:05 +02:00
Eugen Eisler
80fdccbe89 Merge pull request #1516 from ksylvan/0612-fix-REST-api-put-pattern
Fix REST API pattern creation
2025-06-14 07:52:06 +02:00
Kayvan Sylvan
d9d8f7bf96 feat: add Save method to PatternsEntity for persisting patterns to filesystem
## CHANGES

- Add Save method to PatternsEntity struct
- Create pattern directory with proper permissions
- Write pattern content to system pattern file
- Add comprehensive test for Save functionality
- Verify directory creation and file contents
- Handle errors for directory and file operations
2025-06-13 15:52:01 -07:00
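A minimal sketch of persisting a pattern to the filesystem along the lines of the Save method above; directory layout, permissions, and names are assumptions:

```go
package main

import (
	"fmt"
	"log"
	"os"
	"path/filepath"
)

// savePattern creates the pattern directory and writes the system prompt file.
func savePattern(baseDir, name string, content []byte) error {
	dir := filepath.Join(baseDir, name)
	if err := os.MkdirAll(dir, 0o755); err != nil {
		return fmt.Errorf("creating pattern directory: %w", err)
	}
	if err := os.WriteFile(filepath.Join(dir, "system.md"), content, 0o644); err != nil {
		return fmt.Errorf("writing pattern file: %w", err)
	}
	return nil
}

func main() {
	if err := savePattern(os.TempDir(), "demo_pattern", []byte("# IDENTITY\n")); err != nil {
		log.Fatal(err)
	}
	fmt.Println("pattern saved")
}
```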
Marco Bacchi
a96ddbeef0 feat: create mnemonic phrase pattern
Add a new pattern for generating mnemonic phrases from diceware words. This includes two markdown files defining the user guide, and system implementation details.
2025-06-12 23:27:08 +02:00
Connor Kirkpatrick
d32a1d6a5a Add Bedrock plugin
This commit adds support for using Amazon Bedrock within Fabric.
2025-06-12 13:07:12 +01:00
github-actions[bot]
201474791d Update version to v1.4.202 and commit 2025-06-12 05:47:10 +00:00
Eugen Eisler
6d09137fee Merge pull request #1510 from ksylvan/0611-fix-youtube-transcript-for-windows
Cross-Platform fix for Youtube Transcript extraction
2025-06-12 07:45:38 +02:00
Kayvan Sylvan
680febbe66 fix: replace Unix-specific file operations with cross-platform alternatives
## CHANGES

- Replace hardcoded `/tmp` with `os.TempDir()` for paths
- Use `filepath.Join()` instead of string concatenation
- Remove Unix `find` command dependency completely
- Add new `findVTTFiles()` method using `filepath.Walk()`
- Make VTT file discovery work on Windows
- Improve error handling for file operations
- Maintain backward compatibility with existing functionality
2025-06-11 22:24:48 -07:00
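A sketch of the cross-platform VTT discovery described above, using os.TempDir and filepath.Walk instead of shelling out to `find`; the function name mirrors the commit, the rest is illustrative:

```go
package main

import (
	"fmt"
	"log"
	"os"
	"path/filepath"
	"strings"
)

// findVTTFiles walks the tree under root and collects *.vtt files.
func findVTTFiles(root string) ([]string, error) {
	var files []string
	err := filepath.Walk(root, func(path string, info os.FileInfo, err error) error {
		if err != nil {
			return err
		}
		if !info.IsDir() && strings.HasSuffix(path, ".vtt") {
			files = append(files, path)
		}
		return nil
	})
	return files, err
}

func main() {
	dir := filepath.Join(os.TempDir(), "fabric-yt")
	files, err := findVTTFiles(dir)
	if err != nil && !os.IsNotExist(err) {
		log.Fatal(err)
	}
	fmt.Println("found VTT files:", files)
}
```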
github-actions[bot]
f59e5081f3 Update version to v1.4.201 and commit 2025-06-12 02:35:09 +00:00
Eugen Eisler
6a504c7422 Merge pull request #1503 from danielmiessler/dependabot/npm_and_yarn/web/npm_and_yarn-6ea9762674
chore(deps): bump brace-expansion from 1.1.11 to 1.1.12 in /web in the npm_and_yarn group across 1 directory
2025-06-12 04:33:36 +02:00
Eugen Eisler
89a0abcbe4 Merge pull request #1508 from ksylvan/0611-youtube-followup-fixes
feat: cleanup after `yt-dlp` addition
2025-06-12 04:32:30 +02:00
Kayvan Sylvan
2dfd78ef0b feat: cleanup after yt-dlp addition
### CHANGES
- Update README with yt-dlp requirement for transcripts
- Ensure the errors are clear and actionable.
2025-06-11 17:27:11 -07:00
github-actions[bot]
2200b6ea08 Update version to v1.4.200 and commit 2025-06-11 21:45:09 +00:00
Eugen Eisler
82f9ebaf99 Merge pull request #1507 from ksylvan/0611-youtube-fix
Refactor: No more web scraping, just use yt-dlp
2025-06-11 23:43:33 +02:00
Kayvan Sylvan
704ad3067a refactor: replace web scraping with yt-dlp for YouTube transcript extraction
## CHANGES

- Remove unreliable YouTube API scraping methods
- Add yt-dlp integration for transcript extraction
- Implement VTT subtitle parsing functionality
- Add timestamp preservation for transcripts
- Remove soup HTML parsing dependency
- Add error handling for missing yt-dlp
- Create temporary directory management
- Support multiple subtitle format fallbacks
2025-06-11 14:24:40 -07:00
github-actions[bot]
6f7e3c04d7 Update version to v1.4.199 and commit 2025-06-11 20:27:06 +00:00
Eugen Eisler
79f763456e Merge pull request #1506 from danielmiessler/feat/antropic_tool
fix: fix web search tool location
2025-06-11 22:25:22 +02:00
Eugen Eisler
9d4f7f1571 fix: fix web search tool location 2025-06-11 22:19:21 +02:00
github-actions[bot]
8e7373b308 Update version to v1.4.198 and commit 2025-06-11 18:51:13 +00:00
Eugen Eisler
7a39742507 Merge pull request #1504 from marcas756/fix/ollama-hardcoded-timeout
fix: Add configurable HTTP timeout for Ollama client
2025-06-11 20:49:41 +02:00
github-actions[bot]
cea218e61e Update version to v1.4.197 and commit 2025-06-11 18:41:32 +00:00
dependabot[bot]
02ac68834d chore(deps): bump brace-expansion
Bumps the npm_and_yarn group with 1 update in the /web directory: [brace-expansion](https://github.com/juliangruber/brace-expansion).


Updates `brace-expansion` from 1.1.11 to 1.1.12
- [Release notes](https://github.com/juliangruber/brace-expansion/releases)
- [Commits](https://github.com/juliangruber/brace-expansion/compare/1.1.11...v1.1.12)

---
updated-dependencies:
- dependency-name: brace-expansion
  dependency-version: 1.1.12
  dependency-type: indirect
  dependency-group: npm_and_yarn
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-06-11 18:41:27 +00:00
Eugen Eisler
f673f424da Merge pull request #1502 from danielmiessler/feat/antropic_tool
Feat/antropic tool
2025-06-11 20:40:00 +02:00
Marco Bacchi
0ae41116aa fix: Add configurable HTTP timeout for Ollama client
Add a new setup question to configure the HTTP timeout duration for
Ollama requests. The default value is set to 20 minutes.
2025-06-11 20:36:57 +02:00
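A minimal sketch of a configurable HTTP timeout defaulting to 20 minutes as described above; the environment variable name is an assumption, not the actual setup question's key:

```go
package main

import (
	"fmt"
	"net/http"
	"os"
	"time"
)

// newHTTPClient builds an HTTP client whose timeout defaults to 20 minutes
// but can be overridden via a duration string such as "5m" or "1h".
func newHTTPClient() *http.Client {
	timeout := 20 * time.Minute
	if v := os.Getenv("OLLAMA_HTTP_TIMEOUT"); v != "" {
		if d, err := time.ParseDuration(v); err == nil {
			timeout = d
		}
	}
	return &http.Client{Timeout: timeout}
}

func main() {
	client := newHTTPClient()
	fmt.Println("Ollama HTTP timeout:", client.Timeout)
}
```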
Eugen Eisler
2b11f3e48e feat: search tool result collection 2025-06-11 20:21:34 +02:00
Eugen Eisler
ed77cc2320 feat: search tool working 2025-06-11 19:56:38 +02:00
Eugen Eisler
29f19fce51 Merge pull request #1499 from noamsiegel/improve-create-prd-pattern
feat: Enhance the PRD Generator's identity and purpose
2025-06-11 18:04:53 +02:00
Eugen Eisler
62ed5d2b9a Merge pull request #1497 from ksylvan/0608-analyze-terraform-plan
feat: add Terraform plan analyzer pattern for infrastructure changes
2025-06-11 18:00:55 +02:00
GitButler
836e4c4fab GitButler Workspace Commit
This is a merge commit of the virtual branches in your workspace.

Due to GitButler managing multiple virtual branches, you cannot switch back and
forth between git branches and virtual branches easily. 

If you switch to another branch, GitButler will need to be reinitialized.
If you commit on this branch, GitButler will throw it away.

Here are the branches that are currently applied:
 - improve-create-prd (refs/gitbutler/improve-create-prd)
For more information about what we're doing here, check out our docs:
https://docs.gitbutler.com/features/virtual-branches/integration-branch
2025-06-09 18:24:48 -07:00
Noam Siegel
946c1af42d feat: Enhance the PRD Generator's identity and purpose
The changes in this commit expand the identity and purpose of the PRD Generator
to provide more clarity on its role and the expected output. The key changes
include:

- Defining the Generator's purpose as transforming product ideas into a
  structured PRD that ensures clarity, alignment, and precision in product
  planning and execution.
- Outlining the key sections typically found in a PRD that the Generator should
  cover, such as Overview, Objectives, Target Audience, Features, User Stories,
  Functional and Non-functional Requirements, Success Metrics, and Timeline.
- Providing more detailed instructions on the expected output format, structure,
  and content, including the use of Markdown, labeled sections, bullet points,
  tables, and highlighting of priorities or MVP features.
2025-06-09 18:24:48 -07:00
Kayvan Sylvan
a74585cb14 feat: add Terraform plan analyzer pattern for infrastructure change assessment
- Create new pattern for analyzing Terraform plans
- Add identity defining expert plan analyzer role
- Include focus on security, cost, and compliance
- Define three output sections for summaries
- Specify 20-word sentence summary requirement
- List 10 critical changes with word limits
- Include 5 key takeaways section format
- Add markdown formatting output instructions
- Require numbered lists over bullet points
- Prohibit warnings and duplicate content
2025-06-08 22:49:02 -07:00
github-actions[bot]
5ffd458aa0 Update version to v1.4.196 and commit 2025-06-07 18:02:01 +00:00
Eugen Eisler
9786721037 Merge pull request #1495 from ksylvan/0606-aiml-provider
Add AIML provider configuration
2025-06-07 20:00:27 +02:00
Kayvan Sylvan
ffb31985e8 feat: add AIML provider to OpenAI compatible providers configuration
## CHANGES

- Add AIML provider configuration
- Set AIML base URL to api.aimlapi.com/v1
- Expand supported OpenAI compatible providers list
- Enable AIML API integration support
2025-06-06 07:13:10 -07:00
Daniel Miessler
eeee37a7cc Updated output. 2025-05-31 12:48:46 -07:00
Daniel Miessler
bd89a8d776 Updated output. 2025-05-31 07:14:36 -07:00
Daniel Miessler
2311e7e7a1 Updated output. 2025-05-31 07:14:10 -07:00
Daniel Miessler
09b79283e9 Updated output. 2025-05-31 07:12:29 -07:00
Daniel Miessler
7fbb5e0935 Updated output. 2025-05-31 07:07:56 -07:00
Daniel Miessler
984d9d03f5 Updated output. 2025-05-31 07:06:29 -07:00
Daniel Miessler
c47502fa8c Updated output. 2025-05-31 06:58:46 -07:00
Daniel Miessler
1fe02bdf22 Added simpler paper analyzer, updated the output. 2025-05-31 06:51:19 -07:00
Daniel Miessler
d550385a5e Added simpler paper analyzer. 2025-05-31 06:48:53 -07:00
github-actions[bot]
1e81da5f42 Update version to v1.4.195 and commit 2025-05-24 09:08:17 +00:00
Eugen Eisler
5b318dc402 Merge pull request #1487 from ksylvan/0524-update-pdfjs
Dependency Updates and PDF Worker Refactoring
2025-05-24 11:06:43 +02:00
Kayvan Sylvan
4027305345 feat: upgrade PDF.js to v4.2 and refactor worker initialization
### CHANGES
- Add `.browserslistrc` to define target browser versions.
- Upgrade `pdfjs-dist` dependency from v2.16 to v4.2.67.
- Upgrade `nanoid` dependency from v4.0.2 to v5.0.9.
- Introduce `pdf-config.ts` for centralized PDF.js worker setup.
- Refactor `PdfConversionService` to use new PDF worker configuration.
- Add static `pdf.worker.min.mjs` to serve PDF.js worker.
- Update Vite configuration for ESNext build target and PDF.js.
2025-05-24 00:29:20 -07:00
github-actions[bot]
63879d5cf7 Update version to v1.4.194 and commit 2025-05-24 06:04:31 +00:00
Eugen Eisler
9539441496 Merge pull request #1485 from ksylvan/0523-generalize-web-ui-connect-to-fabric-api
Web UI: Centralize Environment Configuration and Make Fabric Base URL Configurable
2025-05-24 08:02:57 +02:00
github-actions[bot]
352ade34c8 Update version to v1.4.193 and commit 2025-05-24 05:59:22 +00:00
Eugen Eisler
9abc69c1a9 Merge pull request #1484 from ksylvan/0523-web-ui-cleanup-and-updates
Web UI update all packages, reorganize docs, add install scripts
2025-05-24 07:57:42 +02:00
Kayvan Sylvan
93f6f2f0c4 feat: add centralized environment configuration for Fabric base URL
- Create environment config module for URL handling
- Add getFabricBaseUrl() function with server/client support
- Add getFabricApiUrl() helper for API endpoints
- Configure Vite to inject FABRIC_BASE_URL client-side
- Update proxy targets to use environment variable
- Add TypeScript definitions for window config
- Support FABRIC_BASE_URL env var with fallback
2025-05-23 20:45:57 -07:00
Kayvan Sylvan
1f5d3db3fb fix typo in script name 2025-05-23 17:51:41 -07:00
Kayvan Sylvan
4446b456ba docs: reorganize web documentation and add installation scripts
## CHANGES

- Move legacy documentation files to web/legacy/
- Update web README with installation instructions
- Add convenience scripts for npm and pnpm installation
- Update all package dependencies to latest versions
- Add PDF-to-Markdown installation steps to README
- Remove duplicate documentation files
2025-05-23 17:47:33 -07:00
Eugen Eisler
870941090a Merge pull request #1481 from skibum1869/feature/summarize_board_meeting
Add board meeting summary pattern template
2025-05-23 23:36:46 +02:00
Max Harris
5fc004805e Update meeting summary template with word count requirement
AI:

Add minimum word count for context section in board summary
2025-05-23 10:27:18 -05:00
Max Harris
ce47018fc3 Merge branch 'danielmiessler:main' into main 2025-05-23 09:38:40 -05:00
Max Harris
a09131ea72 Add board meeting summary pattern template 2025-05-23 09:38:24 -05:00
github-actions[bot]
36eb321059 Update version to v1.4.192 and commit 2025-05-23 05:44:31 +00:00
Eugen Eisler
47bf9600d6 Merge pull request #1480 from ksylvan/0522-auto-raw-mode-for-some-models
Automatic setting of "raw mode" for some models
2025-05-23 07:43:04 +02:00
Kayvan Sylvan
be674841e7 feat: add automatic raw mode detection for specific AI models
## CHANGES

- Add model-specific raw mode detection logic
- Check Ollama llama2/llama3 models for raw mode
- Check OpenAI o1/o3/o4 models for raw mode
- Use model from options or default chatter
- Auto-enable raw mode when vendor requires it
- Import strings package for prefix matching
2025-05-22 17:04:11 -07:00
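A rough sketch of model-prefix raw-mode detection as described above; the exact prefixes and how Fabric wires this into its Vendor interface may differ:

```go
package main

import (
	"fmt"
	"strings"
)

// needsRawMode reports whether a vendor/model combination should force raw
// mode, based on simple model-name prefix checks.
func needsRawMode(vendor, model string) bool {
	model = strings.ToLower(model)
	switch strings.ToLower(vendor) {
	case "ollama":
		return strings.HasPrefix(model, "llama2") || strings.HasPrefix(model, "llama3")
	case "openai":
		return strings.HasPrefix(model, "o1") || strings.HasPrefix(model, "o3") ||
			strings.HasPrefix(model, "o4")
	}
	return false
}

func main() {
	fmt.Println(needsRawMode("openai", "o3-mini"))    // true
	fmt.Println(needsRawMode("ollama", "mistral:7b")) // false
}
```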
Kayvan Sylvan
39a8b67438 feat: add NeedsRawMode method to AI vendor interface
## CHANGES

- Add NeedsRawMode to Vendor interface
- Implement NeedsRawMode in all AI clients
- Return false for all implementations
- Support model-specific raw mode detection
- Enable future raw mode requirements
2025-05-22 16:41:12 -07:00
github-actions[bot]
0a4950dd08 Update version to v1.4.191 and commit 2025-05-22 19:03:47 +00:00
Eugen Eisler
593c1558c0 Merge pull request #1478 from ksylvan/0522-upgrade-to-claude-4
Claude 4 Integration and README Updates
2025-05-22 21:02:11 +02:00
Kayvan Sylvan
c8f9a39a40 feat: add support for Anthropic Claude 4 models and update SDK to v1.2.0
CHANGES
- Upgrade `anthropic-sdk-go` dependency to version `v1.2.0`.
- Integrate new Anthropic Claude 4 Opus and Sonnet models.
- Remove deprecated Claude 2.0 and 2.1 models from list.
- Adjust model type casting for `anthropic-sdk-go v1.2.0` compatibility.
- Refresh README: announce Claude 4, update date, fix links.
2025-05-22 11:26:04 -07:00
github-actions[bot]
50ec02546f Update version to v1.4.190 and commit 2025-05-20 10:12:21 +00:00
Eugen Eisler
881085d0fe Merge pull request #1475 from ksylvan/0519-fix-dupe-input-attempt-2
refactor: improve raw mode handling in BuildSession
2025-05-20 12:10:47 +02:00
Kayvan Sylvan
2d75052e57 refactor: improve raw mode handling in BuildSession
## CHANGES

- Fix system message handling with patterns in raw mode
- Prevent duplicate inputs when using patterns
- Add conditional logic for pattern vs non-pattern scenarios
- Simplify message construction with clearer variable names
- Improve code comments for better readability
2025-05-19 22:18:12 -07:00
github-actions[bot]
fee604682b Update version to v1.4.189 and commit 2025-05-19 21:39:14 +00:00
Eugen Eisler
941ccabd92 Merge pull request #1473 from roumy/add_authent_ollama
add authentication for Ollama instance
2025-05-19 23:37:45 +02:00
github-actions[bot]
57cd563963 Update version to v1.4.188 and commit 2025-05-19 21:36:30 +00:00
Eugen Eisler
274b6eada6 Merge pull request #1474 from ksylvan/0519-fix-doubled-user-input
feat: update `BuildSession` to handle message appending logic
2025-05-19 23:35:03 +02:00
Kayvan Sylvan
bc27f9d685 refactor: improve message handling for raw mode and Anthropic client
## CHANGES

- Clarify raw mode message handling in BuildSession
- Fix pattern-based message handling in non-raw mode
- Refactor Anthropic client message normalization
- Add proper handling for empty message arrays
- Implement user/assistant message alternation for Anthropic
- Preserve system messages in Anthropic conversations
- Add safeguards for message sequence validation
2025-05-19 12:50:41 -07:00
pr
1291b35b63 add authentication for Ollama instance 2025-05-19 11:01:47 +02:00
Eugen Eisler
9862564c45 Merge pull request #1467 from joshuafuller/main
Typos, spelling, grammar and other minor updates
2025-05-19 07:52:12 +02:00
Eugen Eisler
bbc183f276 Merge pull request #1468 from NavNab/main
Refactor content structure in create_hormozi_offer system.md for clarity and readability
2025-05-18 20:26:28 +02:00
NavNab
9c4445d7bd Refactor content structure in system.md for clarity and readability
- Improved formatting of the introduction and content summary sections for better flow.
- Consolidated repetitive sentences and enhanced the overall coherence of the text.
- Adjusted bullet points and numbering for consistency and easier comprehension.
- Ensured that key concepts are clearly articulated and visually distinct to aid understanding.
2025-05-18 17:03:24 +02:00
Joshua Fuller
920620d771 Merge pull request #1 from joshuafuller/branch/fix-spelling-in-pattern-management-guide 2025-05-16 23:48:52 -05:00
Joshua Fuller
d734e25e0d Merge pull request #2 from joshuafuller/branch/fix-spelling-in-pr-1284-update-notes 2025-05-16 23:48:42 -05:00
Joshua Fuller
a31b2d5e41 Merge pull request #3 from joshuafuller/branch/fix-typos-in-web-readme 2025-05-16 23:48:31 -05:00
Joshua Fuller
8e7e4aa169 Merge pull request #4 from joshuafuller/branch/fix-spelling-of-anthropic-in-notes-md 2025-05-16 23:48:23 -05:00
Joshua Fuller
ea57a64afa Merge pull request #5 from joshuafuller/branch/fix-grammar-in-nuclei-template-instructions 2025-05-16 23:48:13 -05:00
Joshua Fuller
da1a9dab56 docs: fix grammar in nuclei template instructions 2025-05-16 23:45:33 -05:00
Joshua Fuller
068f111986 docs: correct Anthropic spelling in notes 2025-05-16 23:44:31 -05:00
Joshua Fuller
dd0be51726 docs: fix typos in web README 2025-05-16 23:44:19 -05:00
Joshua Fuller
43a1e66cc8 docs: fix spelling in PR 1284 update notes 2025-05-16 23:44:06 -05:00
Joshua Fuller
430a272e1d docs: fix spelling in pattern management guide 2025-05-16 23:43:38 -05:00
github-actions[bot]
0e892f38e4 Update version to v1.4.187 and commit 2025-05-10 07:42:11 +00:00
Eugen Eisler
aa0fe90258 Merge pull request #1463 from CodeCorrupt/nixpkgs_completion
Add completion to the build output for Nix
2025-05-10 09:40:45 +02:00
CodeCorrupt
c59c7553b3 Add completion files to the build output for Nix 2025-05-07 17:06:00 -04:00
github-actions[bot]
703756d0b0 Update version to v1.4.186 and commit 2025-05-06 22:06:46 +00:00
Eugen Eisler
50d22f8e77 Merge pull request #1459 from ksylvan/0505-cleanup-some-old-detritus
chore: Repository cleanup and .gitignore Update
2025-05-07 00:05:19 +02:00
Kayvan Sylvan
fde2efd4ce chore: update .gitignore and remove obsolete files
- Add `coverage.out` to `.gitignore` for ignoring coverage output.
- Remove `Alma.md` documentation file from the repository.
- Delete `rate_ai_result.txt` stitch script from `stitches` folder.
- Remove `readme.md` for `rate_ai_result` stitch documentation.
2025-05-05 17:16:38 -07:00
github-actions[bot]
0150c3a37d Update version to v1.4.185 and commit 2025-04-28 19:27:01 +00:00
Eugen Eisler
2a0216b9aa Merge pull request #1453 from ksylvan/0428-default-model-setting-fix
Fix for default model setting
2025-04-28 21:25:35 +02:00
Kayvan Sylvan
a6d14d86b8 refactor: introduce getSortedGroupsItems for consistent sorting logic
### CHANGES
- Add `getSortedGroupsItems` to centralize sorting logic.
- Sort groups and items alphabetically, case-insensitive.
- Replace inline sorting in `Print` with new method.
- Update `GetGroupAndItemByItemNumber` to use sorted data.
- Ensure original `GroupsItems` remains unmodified.
2025-04-28 11:41:32 -07:00
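A minimal sketch of case-insensitive sorting of groups and items without mutating the originals, in the spirit of getSortedGroupsItems (types are illustrative):

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// Group is an illustrative stand-in for a vendor group with its items.
type Group struct {
	Name  string
	Items []string
}

// sortedGroupsItems returns a sorted copy; the input slices stay unmodified.
func sortedGroupsItems(groups []Group) []Group {
	out := make([]Group, len(groups))
	copy(out, groups)
	sort.Slice(out, func(i, j int) bool {
		return strings.ToLower(out[i].Name) < strings.ToLower(out[j].Name)
	})
	for i := range out {
		items := append([]string(nil), out[i].Items...) // copy before sorting
		sort.Slice(items, func(a, b int) bool {
			return strings.ToLower(items[a]) < strings.ToLower(items[b])
		})
		out[i].Items = items
	}
	return out
}

func main() {
	fmt.Println(sortedGroupsItems([]Group{
		{Name: "openai", Items: []string{"o3", "GPT-4o"}},
		{Name: "Anthropic", Items: []string{"claude-sonnet"}},
	}))
}
```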
github-actions[bot]
a9374c128b Update version to v1.4.184 and commit 2025-04-25 08:27:55 +00:00
Eugen Eisler
f32b9f81da Merge pull request #1447 from ksylvan/0424-more-shell-completions
More shell completion scripts: Zsh, Bash, and Fish
2025-04-25 10:26:20 +02:00
Kayvan Sylvan
bf3af8e98e feat: add shell completion scripts for Zsh, Bash, and Fish
CHANGES:
- Add shell completion support for three major shells
- Create standardized completion scripts in completions/ directory
- Add --shell-complete-list flag for machine-readable output
- Update Print() methods to support plain output format
- Document installation steps for each shell in README
- Replace old fish completion script with improved version
2025-04-24 17:47:39 -07:00
github-actions[bot]
095c295ee5 Update version to v1.4.183 and commit 2025-04-23 20:03:10 +00:00
Eugen Eisler
93ecc9cfea Merge pull request #1431 from KenMacD/fish-completion
Add a completion script for fish
2025-04-23 22:01:41 +02:00
github-actions[bot]
e7aaa23fc2 Update version to v1.4.182 and commit 2025-04-23 20:00:59 +00:00
Eugen Eisler
197d3454f8 Merge pull request #1441 from ksylvan/0423-nix-go-build-toolchain-update
Update go toolchain and go module packages to latest versions
2025-04-23 21:59:25 +02:00
Kayvan Sylvan
50a4f8b491 chore: fix "nix flake check" errors 2025-04-23 11:07:01 -07:00
Kayvan Sylvan
894b4967dd refactor: centralize Go version definition in flake.nix
CHANGES
*   Define `getGoVersion` function in `flake.nix`.
*   Use `getGoVersion` to set Go version consistently.
*   Pass `goVersion` explicitly into `nix/shell.nix`.
*   Remove redundant Go version definition from `shell.nix`.
2025-04-23 09:34:39 -07:00
Kayvan Sylvan
9837bd6664 chore: update Go to 1.24.2 and refresh dependencies
Update Go version across Dockerfile, Nix configurations, and Go modules.
Refresh dependencies and Nix flake inputs.

CHANGES:
*   Update Go version to 1.24.2 in Dockerfile.
*   Set Go version to 1.24.0 and toolchain to 1.24.2.
*   Refresh Go module dependencies and sums (go.mod, go.sum).
*   Update Nix flake lock file inputs.
*   Configure Nix environment and packages for Go 1.24.
*   Update gomod2nix lock file with dependency hashes.
*   Use Go 1.24 in Nix development shell environment.
2025-04-23 09:18:01 -07:00
github-actions[bot]
6ca1b5dac4 Update version to v1.4.181 and commit 2025-04-22 16:03:07 +00:00
Eugen Eisler
c85135c04e Merge pull request #1433 from ksylvan/0421-anthropic-api-update
chore: update Anthropic SDK to v0.2.0-beta.3 and migrate to V2 API
2025-04-22 18:01:53 +02:00
github-actions[bot]
31e4e42a94 Update version to v1.4.180 and commit 2025-04-22 11:40:37 +00:00
Eugen Eisler
196db04fc2 Merge pull request #1435 from ksylvan/0421-fix-raw-input-with-stratetgies
chore: Fix user input handling when using raw mode and `--strategy` flag
2025-04-22 13:39:22 +02:00
Kayvan Sylvan
b3b1b5a471 chore: unify raw mode message handling and preserve env vars in extension executor
## CHANGES

- refactor BuildSession raw mode to prepend system to user content
- ensure raw mode messages always have User role
- keep existing user message when no systemMessage provided
- append systemMessage separately in non-raw mode sessions
- store original cmd.Env before context-based exec command creation
- recreate exec command with context then restore originalEnv
- add comments clarifying raw vs non-raw handling behavior
2025-04-21 17:04:11 -07:00
Kayvan Sylvan
892439a177 chore: update Anthropic SDK to v0.2.0-beta.3 and migrate to V2 API
## CHANGES

- Upgrade Anthropic SDK from alpha.11 to beta.3
- Update API endpoint from v1 to v2
- Replace anthropic.F() with direct assignment
- Replace anthropic.F() with anthropic.Opt() for optional params
- Simplify event delta handling in streaming
- Change client type from pointer to value type
- Update comment with SDK changelog reference
2025-04-21 13:17:03 -07:00
github-actions[bot]
ba2e178e03 Update version to v1.4.179 and commit 2025-04-21 18:08:47 +00:00
Eugen Eisler
ed298bcedd Merge pull request #1432 from ksylvan/0421-fix-tools-selection-in-setup
chore: fix fabric setup mess-up introduced by sorting lists (tools and models)
2025-04-21 20:07:33 +02:00
Kayvan Sylvan
6b04e6e674 chore: sort AI models alphabetically for consistent listing
CHANGES
*   Import `sort` and `strings` packages for sorting functionality.
*   Sort retrieved AI model names alphabetically, ignoring case.
*   Ensure consistent ordering of AI models in lists.
2025-04-21 10:41:41 -07:00
Kayvan Sylvan
04c0f6a0a5 chore: alphabetize the order of plugin tools 2025-04-21 10:26:04 -07:00
Kenny MacDermid
486ff42b59 Add a completion script for fish 2025-04-21 12:58:05 -03:00
github-actions[bot]
f7ab484510 Update version to v1.4.178 and commit 2025-04-21 13:21:52 +00:00
Eugen Eisler
f50a14305a Merge pull request #1427 from ksylvan/0420-refactor-openai-compatible-providers
Refactor OpenAI-compatible AI providers and add `--listvendors` flag
2025-04-21 15:20:33 +02:00
github-actions[bot]
d5f0cd7616 Update version to v1.4.177 and commit 2025-04-21 07:10:40 +00:00
Eugen Eisler
67c658f5b4 Merge pull request #1428 from ksylvan/0420-sorted-group-lists
feat: Alphabetical case-insensitive sorting for groups and items
2025-04-21 09:09:21 +02:00
github-actions[bot]
ef3bc03343 Update version to v1.4.176 and commit 2025-04-21 07:09:11 +00:00
Eugen Eisler
e31cb2b46a Merge pull request #1429 from ksylvan/0420-fix-strategies-api
feat: enhance StrategyMeta with Prompt field and dynamic naming
2025-04-21 09:07:57 +02:00
Kayvan Sylvan
6ca7142ea4 feat: enhance StrategyMeta with Prompt field and dynamic naming
### CHANGES

- Add `Prompt` field to `StrategyMeta` struct.
- Include `strings` package for filename processing.
- Derive strategy name from filename using `strings.TrimSuffix`.
- Store `Prompt` value from JSON data in `StrategyMeta`
2025-04-20 17:20:55 -07:00
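A minimal sketch of deriving a strategy name from its JSON filename with strings.TrimSuffix, as described above; struct fields beyond those named in the commit are assumptions:

```go
package main

import (
	"encoding/json"
	"fmt"
	"path/filepath"
	"strings"
)

// StrategyMeta holds a strategy's description and prompt; the name comes
// from the filename rather than the JSON body.
type StrategyMeta struct {
	Name        string `json:"-"`
	Description string `json:"description"`
	Prompt      string `json:"prompt"`
}

func loadStrategy(filename string, data []byte) (StrategyMeta, error) {
	var s StrategyMeta
	if err := json.Unmarshal(data, &s); err != nil {
		return s, err
	}
	s.Name = strings.TrimSuffix(filepath.Base(filename), ".json")
	return s, nil
}

func main() {
	s, _ := loadStrategy("strategies/aot.json",
		[]byte(`{"description":"Atom of Thought","prompt":"Decompose the problem..."}`))
	fmt.Printf("%s: %s\n", s.Name, s.Description)
}
```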
Kayvan Sylvan
8b2174897a feat: add alphabetical sorting to groups and items in Print method
### CHANGES
- Import `sort` and `strings` packages for sorting functionality.
- Create a copy of groups for stable sorting.
- Sort groups alphabetically in a case-insensitive manner.
- Create a copy of items within each group for sorting.
- Sort items alphabetically in a case-insensitive manner.
- Iterate over sorted groups and items for display.
2025-04-20 10:56:52 -07:00
Kayvan Sylvan
ac5eab0563 feat: add --listvendors command to list AI vendors
### CHANGES
- Introduce `--listvendors` flag to display all AI vendors.
- Refactor OpenAI-compatible providers into a unified configuration.
- Remove individual vendor packages for streamlined management.
- Add sorting for consistent vendor listing output.
- Update documentation to include new `--listvendors` option.
2025-04-20 08:53:20 -07:00
github-actions[bot]
65414dcc1c Update version to v1.4.175 and commit 2025-04-19 20:57:20 +00:00
Eugen Eisler
5db352f5be Merge pull request #1418 from danielmiessler/dependabot/go_modules/go_modules-bbb8b02913
chore(deps): bump golang.org/x/net from 0.36.0 to 0.38.0 in the go_modules group across 1 directory
2025-04-19 22:56:00 +02:00
github-actions[bot]
9e7830ff77 Update version to v1.4.174 and commit 2025-04-19 06:13:44 +00:00
Eugen Eisler
5945c0e16b Merge pull request #1425 from ksylvan/0418-cerebras-ai
feat: add Cerebras AI plugin to plugin registry
2025-04-19 08:12:32 +02:00
Kayvan Sylvan
29ee141822 feat: add Cerebras AI plugin to plugin registry
### CHANGES
- Introduce Cerebras AI plugin import in plugin registry.
- Register Cerebras client in the NewPluginRegistry function.
2025-04-18 15:14:08 -07:00
github-actions[bot]
8a69621e87 Update version to v1.4.173 and commit 2025-04-18 16:23:09 +00:00
Eugen Eisler
1645b0c4ea Merge pull request #1420 from sherif-fanous/main
Fix error in deleting patterns due to non-empty directory
2025-04-18 18:21:50 +02:00
Eugen Eisler
45205574d5 Merge pull request #1421 from ksylvan/0417-atom-of-thought
feat: add Atom-of-Thought (AoT) strategy and prompt definition
2025-04-18 18:20:42 +02:00
Kayvan Sylvan
71a5e0394a chore: add final newline to aot json file 2025-04-17 13:43:06 -07:00
Kayvan Sylvan
f286936c23 feat: add Atom-of-Thought (AoT) strategy and prompt definition
## CHANGES

- add new aot.json for Atom-of-Thought (AoT) prompting
- define AoT strategy description and detailed prompt instructions
- update strategies.json to include AoT in available strategies list
- ensure AoT strategy appears alongside CoD, CoT, and LTM options
2025-04-17 13:31:18 -07:00
Sherif Fanous
9000f92a55 Fix error in deleting patterns due to non-empty directory 2025-04-17 12:05:52 -04:00
dependabot[bot]
8c84d4b3c8 chore(deps): bump golang.org/x/net
Bumps the go_modules group with 1 update in the / directory: [golang.org/x/net](https://github.com/golang/net).


Updates `golang.org/x/net` from 0.36.0 to 0.38.0
- [Commits](https://github.com/golang/net/compare/v0.36.0...v0.38.0)

---
updated-dependencies:
- dependency-name: golang.org/x/net
  dependency-version: 0.38.0
  dependency-type: indirect
  dependency-group: go_modules
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-04-16 23:25:39 +00:00
github-actions[bot]
1d77afcc44 Update version to v1.4.172 and commit 2025-04-16 18:17:12 +00:00
Eugen Eisler
835bc6044b Merge pull request #1415 from ksylvan/0416-grok-ai
feat: add Grok AI provider support
2025-04-16 20:15:45 +02:00
Kayvan Sylvan
ef895a1ab9 chore: Update README with a note about Grok 2025-04-16 09:23:20 -07:00
Kayvan Sylvan
82039cedaf feat: add Grok AI provider support
Integrate the Grok AI provider into the Fabric system for AI model interactions.

### CHANGES

*   Add Grok AI client to the plugin registry.
*   Include Grok AI API key in REST API configuration endpoints.
2025-04-16 09:15:32 -07:00
Eugen Eisler
973df61dfd Merge pull request #1411 from ksylvan/0415-readme-add-contributors
docs: add contributors section to README with contrib.rocks image
2025-04-16 11:29:58 +02:00
Kayvan Sylvan
661c85d7a6 docs: add contributors section to README with contrib.rocks image
## CHANGES

- Add contributors section with visual representation
- Include link to project contributors page
- Add attribution to contrib.rocks tool
2025-04-15 08:29:45 -07:00
github-actions[bot]
4638f67fb7 Update version to v1.4.171 and commit 2025-04-15 08:56:37 +00:00
Eugen Eisler
ab71dbcd4f Merge pull request #1407 from sherif-fanous/main
Update Dockerfile so that Go image version matches go.mod version
2025-04-15 10:55:18 +02:00
Daniel Miessler
2abdabc100 Update README.md 2025-04-14 09:45:02 -07:00
Daniel Miessler
9f78a2c8e1 Update README.md 2025-04-14 09:44:16 -07:00
Daniel Miessler
76f78601f2 Update README.md 2025-04-14 09:43:36 -07:00
Daniel Miessler
4eaba2dc56 Update README.md 2025-04-14 09:43:06 -07:00
Daniel Miessler
2dcd9cb5f7 Update README.md 2025-04-14 09:42:19 -07:00
Daniel Miessler
2943872bde Update README.md 2025-04-14 09:41:05 -07:00
Daniel Miessler
b901542a48 Update README.md 2025-04-14 09:40:17 -07:00
Daniel Miessler
c122ff8960 Update README.md 2025-04-14 09:39:52 -07:00
Daniel Miessler
e128d818c4 Update README.md 2025-04-14 09:39:15 -07:00
Daniel Miessler
5e9d6d0a91 Update README.md 2025-04-14 09:38:51 -07:00
Daniel Miessler
70edf9cbe3 Update README.md 2025-04-14 09:36:53 -07:00
Daniel Miessler
e61a0a9391 Update README.md 2025-04-14 09:35:50 -07:00
github-actions[bot]
f8ddf98404 Update version to v1.4.170 and commit 2025-04-13 07:11:30 +00:00
Eugen Eisler
55219467f3 Merge pull request #1406 from jmd1010/chatinput-fix-clean2
Fix chat history LLM response sequence in ChatInput.svelte
2025-04-13 09:10:11 +02:00
Sherif Fanous
74d4be1ac6 Bump golang version to match go.mod 2025-04-12 21:03:53 -04:00
JM
9e57f8c6f1 Update pattern_descriptions.json 2025-04-12 19:26:32 -04:00
jmd1010
3d2903cb47 Finalize Web UI V2 loose ends fixes 2025-04-12 17:15:14 -04:00
jmd1010
13e9d22ec6 Fix chat history LLM response sequence in ChatInput.svelte 2025-04-11 21:40:33 -04:00
github-actions[bot]
01d12c47cf Update version to v1.4.169 and commit 2025-04-11 19:13:26 +00:00
Eugen Eisler
c3258a2c3f Merge pull request #1403 from jmd1010/strategy-flag-web
Strategy flag enhancement - Web UI implementation
2025-04-11 21:12:12 +02:00
JM
746885e263 Update strategies.json 2025-04-11 12:40:27 -04:00
jmd1010
b25895c1d2 Integrate in web ui the strategy flag enhancement first developed in fabric cli 2025-04-10 18:25:09 -04:00
Daniel Miessler
e40b1c1f66 updated ed 2025-04-06 15:23:25 -07:00
Daniel Miessler
ef2ec8bffe Added excalidraw pattern. 2025-04-06 15:18:17 -07:00
Daniel Miessler
589991e6a6 Shorter version of analyze bill. 2025-04-06 13:42:03 -07:00
Daniel Miessler
965392ebbd Merge branch 'main' of github.com:danielmiessler/fabric 2025-04-06 13:33:31 -07:00
Daniel Miessler
6f615baf53 Added bill analyzer. 2025-04-06 13:33:21 -07:00
github-actions[bot]
b60bad7799 Update version to v1.4.168 and commit 2025-04-02 13:33:53 +00:00
Eugen Eisler
234d1303ad Merge pull request #1399 from HaroldFinchIFT/add-optional-simple-apikey-for-server
feat: add simple optional API key management to protect routes in --serve mode
2025-04-02 15:32:31 +02:00
Harold
cd74a96be2 refactor: refactor API key middleware based on code review feedback 2025-04-01 22:47:39 +02:00
Harold
ceaa90a7c7 fix: bad format 2025-04-01 01:26:53 +02:00
Harold
15a2eeadc9 feat: add simple optional API key management to protect routes in --serve mode 2025-04-01 01:26:53 +02:00
github-actions[bot]
8d02f5b21d Update version to v1.4.167 and commit 2025-03-31 14:42:50 +00:00
Eugen Eisler
0f8f0b6b39 Merge pull request #1397 from HaroldFinchIFT/add-italian-language-gui
feat: add Italian (it) to the chat language drop-down menu in the web GUI
2025-03-31 16:41:34 +02:00
Harold
fd58b6d410 feat: add Italian (it) to the chat language drop-down menu in the web GUI 2025-03-30 12:05:22 +02:00
github-actions[bot]
2579d37c16 Update version to v1.4.166 and commit 2025-03-29 20:12:49 +00:00
Eugen Eisler
4f28d85e96 Merge pull request #1392 from ksylvan/0327-fix-code-helper-arg-handling
chore: enhance argument validation in `code_helper` tool
2025-03-29 21:11:34 +01:00
Kayvan Sylvan
f529b8bb80 refactor: streamline code_helper CLI interface and require explicit instructions
## CHANGES

- Require exactly two arguments: directory and instructions
- Remove dedicated help flag, use flag.Usage instead
- Improve directory validation to check if it's a directory
- Inline pattern parsing, removing separate function
- Simplify error messages for better clarity
- Update usage text to reflect required instructions parameter
- Print usage to stderr instead of stdout
2025-03-27 19:17:54 -07:00
Eugen Eisler
71437605e1 Merge pull request #1390 from PatrickCLee/03-26-README-fix
docs: improve README link
2025-03-26 08:31:15 +01:00
github-actions[bot]
cf5753a186 Update version to v1.4.165 and commit 2025-03-26 07:30:28 +00:00
Eugen Eisler
433c83fe2c Merge pull request #1389 from ksylvan/0323-nysan-conding-feature
Create Coding Feature
2025-03-26 08:29:08 +01:00
PatrickCLee
01770cc6e3 docs: improve README link
- Fix broken what-and-why link reference
2025-03-26 11:34:11 +08:00
Kayvan Sylvan
55fda5e025 fix: enhance JSON string handling with proper control character escaping
## CHANGES

- Convert control chars to proper JSON escape sequences
- Prevent invalid JSON due to literal control chars
2025-03-25 20:00:44 -07:00
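A simplified sketch of converting literal control characters inside JSON string content into proper escape sequences, the idea behind the fix above; the real implementation likely handles more cases:

```go
package main

import (
	"fmt"
	"strings"
)

// escapeControlChars rewrites literal control characters as JSON escapes so
// the surrounding document remains valid JSON.
func escapeControlChars(s string) string {
	var sb strings.Builder
	for _, r := range s {
		switch {
		case r == '\n':
			sb.WriteString(`\n`)
		case r == '\t':
			sb.WriteString(`\t`)
		case r == '\r':
			sb.WriteString(`\r`)
		case r < 0x20:
			sb.WriteString(fmt.Sprintf(`\u%04x`, r))
		default:
			sb.WriteRune(r)
		}
	}
	return sb.String()
}

func main() {
	fmt.Println(escapeControlChars("line one\nline two\tindented"))
}
```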
Kayvan Sylvan
daad5f986e refactor: rename fabric_code tool to code_helper for clarity
## CHANGES

- Rename tool from `fabric_code` to `code_helper`
- Update all documentation references to the tool
- Update installation instructions in README
- Modify usage examples in documentation
- Update tool's self-description and help text
2025-03-25 19:14:25 -07:00
Kayvan Sylvan
3785d0a5fa refactor: modify ParseFileChanges to return summary and changes separately
CHANGES:
*   Return summary text from `ParseFileChanges` separately.
*   Update `chatter` to use returned summary text.
*   Update tests to match new function signature.
2025-03-25 18:54:01 -07:00
Kayvan Sylvan
8a326e9cfb refactor: replace FILE_CHANGES marker with constant FileChangesMarker
## CHANGES

- Add FileChangesMarker constant for file changes section
- Update parser to use new constant marker
- Improve error messages with dynamic marker reference
- Update tests to use new marker format
- Update system documentation with new marker syntax
2025-03-25 17:51:57 -07:00
Kayvan Sylvan
5f5822f1c6 fix: improve JSON parsing in ParseFileChanges to handle invalid escape sequences
## CHANGES

- Add dedicated function to fix invalid JSON escapes
- Handle common \C escape sequence issue
- Implement fallback parsing with comprehensive escape fixes
- Track string context for accurate escape detection
- Preserve valid JSON escape sequences
2025-03-25 16:16:51 -07:00
Kayvan Sylvan
111482e46e feat: add file management system for AI-driven code changes
CHANGES:
- Replace deprecated io/ioutil with modern alternatives
- Add file change parsing and validation system
- Create secure file application mechanism
- Update chatter to process AI file changes
- Improve create_coding_feature pattern documentation
2025-03-25 16:09:28 -07:00
Kayvan Sylvan
9b56e0e996 feat: add fabric_code tool and create_coding_feature pattern
This commit introduces the `fabric_code` tool and the `create_coding_feature` pattern, allowing Fabric to modify existing codebases.

## CHANGES

-   add `fabric_code` tool to generate JSON representation of code projects
-   add `create_coding_feature` pattern to apply AI-generated code changes
-   update README with `fabric_code` installation and usage
-   walk file system with maximum depth and ignore list
-   scan directory and return file/dir JSON data for AI model
-   provide usage instructions and examples for `fabric_code`
-   add file management API to system prompt for code changes
2025-03-25 08:04:55 -07:00
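An illustrative sketch of the directory-scanning idea behind `fabric_code`/`code_helper`: walk a tree to a maximum depth, skip ignored directories, and emit relative file paths as JSON for the model. Depth handling, ignore list, and output shape are assumptions:

```go
package main

import (
	"encoding/json"
	"fmt"
	"io/fs"
	"log"
	"path/filepath"
	"strings"
)

// scanProject walks root up to maxDepth, skipping ignored directory names,
// and returns the relative paths of the files it finds.
func scanProject(root string, maxDepth int, ignore map[string]bool) ([]string, error) {
	var files []string
	err := filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
		if err != nil {
			return err
		}
		rel, _ := filepath.Rel(root, path)
		if d.IsDir() {
			if ignore[d.Name()] || strings.Count(rel, string(filepath.Separator)) >= maxDepth {
				return filepath.SkipDir
			}
			return nil
		}
		files = append(files, rel)
		return nil
	})
	return files, err
}

func main() {
	files, err := scanProject(".", 3, map[string]bool{".git": true, "node_modules": true})
	if err != nil {
		log.Fatal(err)
	}
	out, _ := json.MarshalIndent(files, "", "  ")
	fmt.Println(string(out))
}
```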
github-actions[bot]
9b830f9801 Update version to v1.4.164 and commit 2025-03-22 08:52:21 +00:00
Eugen Eisler
dda73d3333 Merge pull request #1380 from jmd1010/web-windows-resizing
Add flex windows sizing to web interface + raw text input fix
2025-03-22 09:51:07 +01:00
Eugen Eisler
40c26d9c9e Merge pull request #1379 from guilhermechapiewski/patch-1
Fix typo on fallacies instruction.
2025-03-22 09:50:14 +01:00
Eugen Eisler
cdd86b0ed9 Merge pull request #1382 from ksylvan/03-21-README-fixes
docs: improve README formatting and fix some broken links
2025-03-22 09:49:32 +01:00
Kayvan Sylvan
c2e84d6db9 docs: improve README formatting and add clipboard support section
## CHANGES

- Remove colons from heading anchors
- Fix broken installation link reference
- Replace code tags with backticks
- Improve code block formatting with indentation
- Clarify package manager alias requirements
- Fix environment variables link
- Simplify custom patterns directory instructions
2025-03-21 23:34:27 -07:00
jmd1010
4208a02191 fixed processing message not stopping after pattern output completion 2025-03-21 23:26:54 -04:00
jmd1010
943b26eeef Add flex windows sizing to web interface 2025-03-21 18:22:54 -04:00
Guilherme Chapiewski
d6ceae9efd Fix typo on fallacies instruction. 2025-03-21 11:30:52 -07:00
Eugen Eisler
f57dc6d681 Merge pull request #1376 from vaygr/install-update
Add installation instructions for OS package managers
2025-03-21 13:57:00 +01:00
Val V
4e4bfc9d5d Add installation instructions for OS package managers 2025-03-21 03:12:43 +00:00
Daniel Miessler
ea137c1525 Updated find prompt. 2025-03-20 14:36:00 -07:00
Daniel Miessler
f0be1d4735 Updated find prompt. 2025-03-20 14:28:15 -07:00
Daniel Miessler
e3975b9364 Updated find prompt. 2025-03-20 14:25:24 -07:00
Daniel Miessler
cd48802ea0 Updated find prompt. 2025-03-20 14:23:10 -07:00
Daniel Miessler
dceccd8e72 Merge branch 'main' of github.com:danielmiessler/fabric 2025-03-20 14:16:17 -07:00
Daniel Miessler
0813ad9c39 Added find_female_life_partner. 2025-03-20 14:16:06 -07:00
github-actions[bot]
e391132167 Update version to v1.4.163 and commit 2025-03-19 03:50:11 +00:00
Eugen Eisler
c7f86d3a0c Merge pull request #1362 from danielmiessler/dependabot/go_modules/go_modules-c153b83258
Bump golang.org/x/net from 0.35.0 to 0.36.0 in the go_modules group across 1 directory
2025-03-19 04:48:57 +01:00
Eugen Eisler
f0d92f9424 Merge pull request #1372 from rube-de/patch-1
fix: set percentEncoded to false
2025-03-19 04:48:26 +01:00
Eugen Eisler
4a9bdb1479 Merge pull request #1373 from ksylvan/main
Remove unnecessary `system.md` file at top level.
2025-03-19 04:47:02 +01:00
github-actions[bot]
7eed80710e Update version to v1.4.162 and commit 2025-03-19 03:45:25 +00:00
Eugen Eisler
fbd62be47d Merge pull request #1374 from ksylvan/fix/change-default-model-save
Fix Default Model Change Functionality
2025-03-19 04:44:11 +01:00
Kayvan Sylvan
85cc7b8a9d fix: improve error handling in ChangeDefaultModel flow and save environment file
- Add early return on setup error
- Save environment file after successful setup
- Maintain proper error propagation
2025-03-17 19:37:26 -07:00
Kayvan Sylvan
1fe8afd329 chore: Remove redundant file system.md at top level.
CHANGES:
- Removed `system.md` on the top level of the fabric repo.
- system.md was an RPG session summarization prompt.
- There are two other RPG summary patterns created after this file was added: `create_rpg_summary` and `summarize_rpg_session`
2025-03-17 15:09:36 -07:00
beruf
e89ccf5e97 fix: set percentEncoded to false
If you use a YouTube link like `https://youtu.be/sHIlFKKaq0A`, percentEncoding encodes the link to `https%3A%2F%2Fyoutu.be%2FsHIlFKKaq0A`, which throws an error in fabric.

With percentEncoding set to false, the script receives the link without encoding and works.
2025-03-17 22:43:02 +01:00
github-actions[bot]
0eee89140c Update version to v1.4.161 and commit 2025-03-17 14:21:02 +00:00
Eugen Eisler
5571e6fafd Merge pull request #1363 from garkpit/streamlit-clipboard-ops-for-all-platforms
clipboard operations now work on Mac and PC
2025-03-17 15:19:42 +01:00
github-actions[bot]
9a4e920618 Update version to v1.4.160 and commit 2025-03-17 14:17:36 +00:00
Eugen Eisler
6e479999b1 Merge pull request #1368 from vaygr/std-no-repeat
Standardize sections for no repeat guidelines
2025-03-17 15:16:21 +01:00
Daniel Miessler
f65f2501b4 Moved system file to proper directory. 2025-03-16 14:46:13 -07:00
Daniel Miessler
4b12bd2a61 Moved system file to proper directory. 2025-03-16 14:43:55 -07:00
Daniel Miessler
d83a3beeeb Merge branch 'main' of github.com:danielmiessler/fabric 2025-03-16 14:16:02 -07:00
Daniel Miessler
7428c8017f Added activity extractor. 2025-03-16 14:15:54 -07:00
Val V
008ed76d37 Standardize sections for no repeat guidelines 2025-03-16 19:48:15 +00:00
github-actions[bot]
ce9d4ad831 Update version to v1.4.159 and commit 2025-03-16 19:23:15 +00:00
Daniel Miessler
657bcab48c Added flashcard generator. 2025-03-16 12:21:50 -07:00
github-actions[bot]
cd11dcc7a9 Update version to v1.4.158 and commit 2025-03-16 17:26:52 +00:00
Eugen Eisler
22040a42f2 Merge pull request #1367 from ksylvan/fix/code_cleanup
Remove Generic Type Parameters from StorageHandler Initialization
2025-03-16 18:25:33 +01:00
Kayvan Sylvan
705ccd750b refactor: remove generic type parameters from NewStorageHandler calls
## CHANGES

- Remove explicit type parameters from StorageHandler initialization
- Update contexts handler constructor implementation
- Update patterns handler constructor implementation
- Update sessions handler constructor implementation
- Simplify API by relying on type inference
2025-03-16 09:21:17 -07:00
github-actions[bot]
db7c2b70cb Update version to v1.4.157 and commit 2025-03-16 07:36:39 +00:00
Eugen Eisler
9dc9bfa1d5 Merge pull request #1365 from ksylvan/feature/strategies
Implement Prompt Strategies in Fabric
2025-03-16 08:35:23 +01:00
Kayvan Sylvan
6b93658191 chore: remove redundant yt function definition 2025-03-15 17:35:45 -07:00
Kayvan Sylvan
ea7a425a26 add newline to end of cod.json 2025-03-15 13:18:53 -07:00
Kayvan Sylvan
9582978adb Fix help message when no strategies found. 2025-03-15 13:05:51 -07:00
Kayvan Sylvan
453d8e75e4 fix: fix handling of the installed strategies dir 2025-03-15 09:36:34 -07:00
Kayvan Sylvan
901a010efd chore: remove fallback to local strategies directory if missing 2025-03-15 08:29:58 -07:00
Jay
b5c2d069f2 clipboard operations now work on Mac and PC 2025-03-15 08:31:48 +00:00
Kayvan Sylvan
f744e25b39 change [optional] to [required] in strategies 2025-03-15 00:33:58 -07:00
Kayvan Sylvan
096f40df68 feat: add prompt strategies and improve installation documentation
## CHANGES

- Add prompt strategies like Chain of Thought (CoT)
- Implement strategy selection with `--strategy` flag
- Improve README with platform-specific installation instructions
- Fix web interface documentation link
- Refactor git operations with new githelper package
- Add `--liststrategies` command to view available strategies
- Support applying strategies to system prompts
- Fix YouTube configuration check
- Improve error handling in session management
2025-03-15 00:30:30 -07:00
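A minimal sketch of applying a prompt strategy to a system prompt, the core idea behind the `--strategy` flag; whether the strategy text is prepended or merged differently in Fabric is an assumption here:

```go
package main

import "fmt"

// Strategy is an illustrative strategy record (e.g. Chain of Thought).
type Strategy struct {
	Name   string
	Prompt string
}

// applyStrategy prepends the strategy prompt to the pattern's system prompt.
func applyStrategy(s Strategy, systemPrompt string) string {
	if s.Prompt == "" {
		return systemPrompt
	}
	return s.Prompt + "\n\n" + systemPrompt
}

func main() {
	cot := Strategy{Name: "cot", Prompt: "Think step by step before answering."}
	fmt.Println(applyStrategy(cot, "You summarize articles into five bullets."))
}
```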
dependabot[bot]
a227e61952 Bump golang.org/x/net in the go_modules group across 1 directory
Bumps the go_modules group with 1 update in the / directory: [golang.org/x/net](https://github.com/golang/net).


Updates `golang.org/x/net` from 0.35.0 to 0.36.0
- [Commits](https://github.com/golang/net/compare/v0.35.0...v0.36.0)

---
updated-dependencies:
- dependency-name: golang.org/x/net
  dependency-type: indirect
  dependency-group: go_modules
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-03-13 02:06:32 +00:00
github-actions[bot]
29f4534001 Update version to v1.4.156 and commit 2025-03-11 09:05:45 +00:00
Eugen Eisler
ed9324d611 Merge pull request #1356 from ksylvan/main
chore: add .vscode to `.gitignore` and fix typos and markdown linting in `Alma.md`
2025-03-11 10:04:32 +01:00
Eugen Eisler
438b3c5211 Merge pull request #1352 from matmilbury/patch-1
pattern_explanations.md: fix typo
2025-03-11 07:52:48 +01:00
Eugen Eisler
efeeb7a796 Merge pull request #1354 from jmd1010/chat-history-window-sizing
Fix Chat history window scrolling behavior
2025-03-11 07:51:51 +01:00
Kayvan Sylvan
6b1ff0ab21 chore: add .vscode to .gitignore and fix typos and markdown linting in Alma.md 2025-03-10 09:29:16 -07:00
jmd1010
acb925f5a9 Update Web V2 Install Guide with improved instructions 2025-03-09 15:27:02 -04:00
jmd1010
761293ede7 Fix Chat history window sizing 2025-03-09 14:58:45 -04:00
Mat Milbury
e004e50037 pattern_explanations.md: fix typo 2025-03-09 16:11:14 +01:00
github-actions[bot]
44a6c03bc8 Update version to v1.4.155 and commit 2025-03-09 09:01:41 +00:00
Eugen Eisler
d794afe405 Merge pull request #1350 from jmd1010/pattern-search-implementation
Implement Pattern Tile search functionality
2025-03-09 10:00:28 +01:00
github-actions[bot]
e4ac322227 Update version to v1.4.154 and commit 2025-03-09 08:56:50 +00:00
Eugen Eisler
1fc19da19f Merge pull request #1349 from ksylvan/03-08-extra-version-declaration-removed
Fix: v1.4.153 does not compile because of extra version declaration
2025-03-09 09:55:37 +01:00
jmd1010
b213068680 Implement column resize functionality 2025-03-08 17:34:49 -05:00
jmd1010
bf3af207b9 Implement Pattern Tile search functionality 2025-03-08 12:56:55 -05:00
Kayvan Sylvan
e28ba224b5 fix: update Azure client API version access path in tests 2025-03-08 09:52:20 -08:00
Kayvan Sylvan
5b7697c5ab chore: remove unnecessary version variable from main.go 2025-03-08 09:29:20 -08:00
github-actions[bot]
0701b7d263 Update version to v1.4.153 and commit 2025-03-08 08:56:19 +00:00
Eugen Eisler
aac29025fb Merge pull request #1348 from liyuankui/feature/add-litellm-vendor
feat: Add LiteLLM AI plugin support with local endpoint configuration
2025-03-08 09:55:08 +01:00
kyle
6928f9a312 feat: Add LiteLLM AI plugin support with local endpoint configuration 2025-03-08 11:37:54 +08:00
github-actions[bot]
ef2e985d3f Update version to v1.4.152 and commit 2025-03-07 08:23:45 +00:00
Eugen Eisler
1df945556d fix: Fix pipe handling 2025-03-07 09:22:26 +01:00
github-actions[bot]
b6f313db8f Update version to v1.4.151 and commit 2025-03-07 07:47:35 +00:00
Eugen Eisler
cb00f2026e Merge pull request #1339 from Eckii24/feature/add-azure-api-version
Feature/add azure api version
2025-03-07 08:46:20 +01:00
github-actions[bot]
bd39d98ffc Update version to v1.4.150 and commit 2025-03-07 07:44:52 +00:00
Eugen Eisler
36b0afa692 Merge pull request #1343 from jmd1010/input-component-update
Rename input.svelte to Input.svelte for proper component naming convention
2025-03-07 08:43:32 +01:00
jmd1010
53d09d8a5a Rename input.svelte to Input.svelte for proper component naming convention 2025-03-06 21:08:30 -05:00
github-actions[bot]
703144edad Update version to v1.4.149 and commit 2025-03-05 12:50:38 +00:00
Eugen Eisler
717e50e570 Merge pull request #1340 from ksylvan/03-04-youtube-live-fix
Fix for youtube live links plus new youtube_summary pattern
2025-03-05 13:49:22 +01:00
Eugen Eisler
4af6d5eeed Merge pull request #1338 from jmd1010/directory-structure-update
Update Web V2 Install Guide layout
2025-03-05 10:09:23 +01:00
Kayvan Sylvan
de356ddeb5 chore: update version 2025-03-04 22:00:26 -08:00
Kayvan Sylvan
59c50def2a feat: update yt commands and docs to support timestamped transcripts
CHANGES
- Add argument validation to yt for usage errors
- Enable -t flag for transcript with timestamps
- Refactor PowerShell yt function with parameter switch
- Update README to dynamically select transcript option
- Document youtube_summary feature in pattern explanations
- Introduce youtube_summary pattern.
2025-03-04 21:46:50 -08:00
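A quick sketch of how the updated `yt` helper is meant to be used (VIDEO_ID is a placeholder; the full alias definition appears in the README diff further below):

```bash
# Plain transcript, the default behavior
yt "https://www.youtube.com/watch?v=VIDEO_ID"

# Transcript with timestamps via the new -t / --timestamps switch
yt -t "https://www.youtube.com/watch?v=VIDEO_ID"

# Equivalent direct invocation without the helper
fabric -y "https://www.youtube.com/watch?v=VIDEO_ID" --transcript-with-timestamps
```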
Kayvan Sylvan
1dafb09e07 remove spurious newline 2025-03-04 21:20:24 -08:00
Kayvan Sylvan
e8caf9fc10 feat: update YouTube regex to support live URLs 2025-03-04 21:18:51 -08:00
jmd1010
59537c4bf5 Update Web V2 Install Guide layout 2 2025-03-04 12:29:31 -05:00
Eckii24
231516917d Update azure.go 2025-03-04 17:39:56 +01:00
Eckii24
58d17fd0ec Update openai.go 2025-03-04 17:37:19 +01:00
Eckii24
8bd4aa6d1a Update azure_test.go 2025-03-04 17:36:37 +01:00
Eckii24
629c1b3e11 Update azure.go 2025-03-04 17:36:11 +01:00
jmd1010
2f4569177d Update Web V2 Install Guide layout 2025-03-04 10:58:48 -05:00
Eugen Eisler
f2b85af0ea Merge pull request #1330 from jmd1010/directory-structure-update
Fixed ALL CAP DIR as requested and processed minor updates to documentation
2025-03-04 12:17:03 +01:00
Eugen Eisler
36be4c747c Merge pull request #1333 from asasidh/main
Update QUOTES section to include speaker names for clarity
2025-03-04 09:35:53 +01:00
github-actions[bot]
7ac9b862f5 Update version to v1.4.148 and commit 2025-03-03 11:08:37 +00:00
Eugen Eisler
1515139dd6 fix: Rework LM Studio plugin 2025-03-03 12:07:20 +01:00
asasidh
be6049e577 Update QUOTES section to include speaker names for clarity 2025-03-02 17:42:52 -06:00
jmd1010
a8ae09d4d6 Update Web V2 Install Guide with improved instructions V2 2025-02-28 19:57:13 -05:00
jmd1010
30059c46a0 Update Web V2 Install Guide with improved instructions 2025-02-28 13:23:43 -05:00
jmd1010
2d10c71e39 Reorganize documentation with consistent directory naming and updated guides 2025-02-28 11:34:17 -05:00
github-actions[bot]
02e12b028c Update version to v1.4.147 and commit 2025-02-28 07:59:43 +00:00
Eugen Eisler
6686b83fc8 Merge pull request #1326 from pavdmyt/pavdmyt/fix-vendors-manager-for-localhost-models
fix: continue fetching models even if some vendors fail
2025-02-28 08:58:25 +01:00
Eugen Eisler
d53b0b678f Merge pull request #1329 from jmd1010/web-install-guide-pr
Svelte Web V2 Installation Guide
2025-02-28 08:55:51 +01:00
jmd1010
0d5454372e Update install guide with Plain Text instructions 2025-02-28 02:08:23 -05:00
jmd1010
2f040f94c3 Add Web V2 Installation Guide 2025-02-28 00:54:44 -05:00
Pavel Dmytrenko
0b29ebd14b fix: continue fetching models even if some vendors fail
Remove the cancellation of remaining goroutines when a vendor collection fails.
This ensures that other vendor collections continue even if one fails.

Fixes listing models via `fabric -L` and using non-default models via `fabric -m custom_model`,
when localhost models (e.g. Ollama, LM Studio) are not listening on a given port (basically shut down).
2025-02-27 16:13:08 +02:00
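A minimal way to check the behavior this fix restores (as in the commit message, `custom_model` is only a placeholder for any model served by a reachable vendor):

```bash
# With a localhost vendor such as Ollama or LM Studio shut down,
# these commands should still succeed instead of aborting:
fabric -L                                      # list models from the vendors that are reachable
pbpaste | fabric -m custom_model -p summarize  # run against any still-available model
```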
github-actions[bot]
1dad903199 Update version to v1.4.146 and commit 2025-02-27 06:16:20 +00:00
Eugen Eisler
0bec53360e Merge pull request #1319 from jmd1010/pdf-integration-clean
Enhancement: PDF to Markdown Conversion Functionality to the Web Svelte Chat Interface
2025-02-27 07:15:03 +01:00
JM
cf637e4137 Merge branch 'main' into pdf-integration-clean 2025-02-27 01:02:04 -05:00
jmd1010
9507c2cca1 Reinstate file in original location to resolve PR conflict 2025-02-27 01:01:16 -05:00
jmd1010
fa575638d1 Remove pr-1284-update.md from tracking to resolve PR conflict 2025-02-27 00:55:58 -05:00
jmd1010
51220c40d9 Add required UI image assets for feature implementation 2025-02-27 00:11:04 -05:00
jmd1010
d1d62fcc4c Complete directory reorganization by moving pr-1284-update.md to new location 2025-02-26 23:44:56 -05:00
jmd1010
96e6a56e5f Restore file to original location to resolve path conflict 2025-02-26 23:38:14 -05:00
jmd1010
0d7514ea0e Remove pr-1284-update.md from PR scope 2025-02-26 23:21:06 -05:00
jmd1010
a74da4acff Rename pattern descriptions directory to follow consistent naming convention 2025-02-26 23:14:51 -05:00
jmd1010
6d8c3eb6e2 Update README files directory structure and naming convention 2025-02-26 22:23:53 -05:00
jmd1010
adbfa2f6ba Remove pdf-to-markdown folder from PR 2025-02-26 21:57:04 -05:00
github-actions[bot]
f5776637d9 Update version to v1.4.145 and commit 2025-02-26 16:54:02 +00:00
Eugen Eisler
34db384265 Merge pull request #1324 from jaredmontoya/nix-fix
flake: fix/update and enhance
2025-02-26 17:52:47 +01:00
jaredmontoya
1f765c5b53 flake: fix/update 2025-02-26 16:12:05 +01:00
github-actions[bot]
f9395fa108 Update version to v1.4.144 and commit 2025-02-26 08:55:35 +00:00
Eugen Eisler
22d2a3ee19 Upgrade upload artifacts to v4 2025-02-26 09:54:44 +01:00
github-actions[bot]
b64178c292 Update version to v1.4.143 and commit 2025-02-26 08:53:09 +00:00
Eugen Eisler
f7d38fb51f Merge pull request #1264 from danielmiessler/feat/exolab
feat: implement support for exolab
2025-02-26 09:52:13 +01:00
jmd1010
4725a94f00 Add complete PDF to Markdown documentation F 2025-02-24 22:33:27 -05:00
jmd1010
15ac5351cf Add Svelte implementation files for PDF integration 2025-02-24 21:46:03 -05:00
jmd1010
f69cda8fab Add PDF to Markdown integration documentation 2025-02-24 21:39:43 -05:00
jmd1010
a0e1f7204d Add PDF to Markdown conversion functionality to the web svelte chat interface 2025-02-24 17:24:02 -05:00
590 changed files with 30228 additions and 11341 deletions

2
.envrc

@@ -1,2 +1,2 @@
watch_file shell.nix
watch_file nix/shell.nix
use flake

View File

@@ -7,29 +7,74 @@ body:
attributes:
value: |
Thanks for taking the time to fill out this bug report!
Please provide as much detail as possible to help us understand and reproduce the issue.
- type: textarea
id: what-happened
attributes:
label: What happened?
description: Also tell us, what did you expect to happen?
placeholder: Tell us what you see!
value: "I was doing THIS, when THAT happened. I was expecting THAT_OTHER_THING to happen instead."
value: "Please provide all the steps to reproduce the bug. I was doing THIS, when THAT happened. I was expecting THAT_OTHER_THING to happen instead."
validations:
required: true
- type: checkboxes
- type: dropdown
id: os
attributes:
label: Operating System
options:
- macOS - Silicon (arm64)
- macOS - Intel (amd64)
- Linux - amd64
- Linux - arm64
- Windows
validations:
required: true
- type: textarea
id: os-version
attributes:
label: OS Version
description: Please provide details about your OS version by running one of the following commands.
placeholder: |
macOS: `sw_vers`
Linux: `uname -a` or `cat /etc/os-release`
Windows: `ver`
render: shell
- type: dropdown
id: installation
attributes:
label: How did you install Fabric?
description: "Please select the method you used to install Fabric. You can find this information in the [Installation section of the README](https://github.com/ksylvan/fabric/blob/main/README.md#installation)."
options:
- Release Binary - Windows
- Release Binary - macOS (arm64)
- Release Binary - macOS (amd64)
- Release Binary - Linux (amd64)
- Release Binary - Linux (arm64)
- Package Manager - Homebrew (macOS)
- Package Manager - AUR (Arch Linux)
- From Source
- Other
validations:
required: true
- type: textarea
id: version
attributes:
label: Version check
description: Please make sure you were using the latest version of this project available in the `main` branch.
options:
- label: Yes I was.
required: true
label: Version
description: Please copy and paste the output of `fabric --version` (or `fabric-ai --version` if you installed it via brew) here.
render: text
- type: textarea
id: logs
attributes:
label: Relevant log output
description: Please copy and paste any relevant log output. This will be automatically formatted into code, so no need for backticks.
render: shell
- type: textarea
id: screens
attributes:

View File

@@ -4,13 +4,13 @@ on:
push:
branches: ["main"]
paths-ignore:
- 'patterns/**'
- '**/*.md'
- "data/patterns/**"
- "**/*.md"
pull_request:
branches: ["main"]
paths-ignore:
- 'patterns/**'
- '**/*.md'
- "data/patterns/**"
- "**/*.md"
jobs:
test:
@@ -22,6 +22,9 @@ jobs:
- name: Checkout code
uses: actions/checkout@v4
- name: Install Nix
uses: DeterminateSystems/nix-installer-action@main
- name: Set up Go
uses: actions/setup-go@v4
with:
@@ -29,3 +32,6 @@ jobs:
- name: Run tests
run: go test -v ./...
- name: Check Formatting
run: nix flake check

View File

@@ -3,7 +3,7 @@ name: Patterns Artifact
on:
push:
paths:
- "patterns/**" # Trigger only on changes to files in the patterns folder
- "data/patterns/**" # Trigger only on changes to files in the patterns folder
jobs:
zip-and-upload:
@@ -18,16 +18,16 @@ jobs:
- name: Verify Changes in Patterns Folder
run: |
git fetch origin
if git diff --quiet HEAD~1 -- patterns; then
if git diff --quiet HEAD~1 -- data/patterns; then
echo "No changes detected in patterns folder."
exit 1
fi
- name: Zip the Patterns Folder
run: zip -r patterns.zip patterns/
run: zip -r patterns.zip data/patterns/
- name: Upload Patterns Artifact
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
name: patterns
path: patterns.zip

View File

@@ -2,7 +2,7 @@ name: Go Release
on:
repository_dispatch:
types: [ tag_created ]
types: [tag_created]
push:
tags:
- "v*"
@@ -69,7 +69,7 @@ jobs:
GOOS: ${{ env.OS }}
GOARCH: ${{ matrix.arch }}
run: |
go build -o fabric-${OS}-${{ matrix.arch }} .
go build -o fabric-${OS}-${{ matrix.arch }} ./cmd/fabric
- name: Build binary on Windows
if: matrix.os == 'windows-latest'
@@ -77,7 +77,7 @@ jobs:
GOOS: windows
GOARCH: ${{ matrix.arch }}
run: |
go build -o fabric-windows-${{ matrix.arch }}.exe .
go build -o fabric-windows-${{ matrix.arch }}.exe ./cmd/fabric
- name: Upload build artifact
if: matrix.os != 'windows-latest'
@@ -108,10 +108,15 @@ jobs:
Add-Content -Path $env:GITHUB_ENV -Value "latest_tag=$latest_tag"
- name: Create release if it doesn't exist
shell: bash
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
run: |
gh release view ${{ env.latest_tag }} || gh release create ${{ env.latest_tag }} --title "Release ${{ env.latest_tag }}" --notes "Automated release for ${{ env.latest_tag }}"
if ! gh release view ${{ env.latest_tag }} >/dev/null 2>&1; then
gh release create ${{ env.latest_tag }} --title "Release ${{ env.latest_tag }}" --notes "Automated release for ${{ env.latest_tag }}"
else
echo "Release ${{ env.latest_tag }} already exists."
fi
- name: Upload release artifact
if: matrix.os == 'windows-latest'

View File

@@ -3,13 +3,19 @@ name: Update Version File and Create Tag
on:
push:
branches:
- main # Monitor the main branch
- main # Monitor the main branch
paths-ignore:
- 'patterns/**'
- '**/*.md'
- "data/patterns/**"
- "**/*.md"
- "data/strategies/**"
- "cmd/generate_changelog/*.db"
permissions:
contents: write # Ensure the workflow has write permissions
contents: write # Ensure the workflow has write permissions
concurrency:
group: version-update
cancel-in-progress: false
jobs:
update-version:
@@ -25,14 +31,16 @@ jobs:
- name: Install Nix
uses: DeterminateSystems/nix-installer-action@main
- name: Setup Nix Cache
uses: DeterminateSystems/magic-nix-cache-action@main
- name: Set up Git
run: |
git config user.name "github-actions[bot]"
git config user.email "github-actions[bot]@users.noreply.github.com"
- name: Pull latest main and tags
run: |
git pull --rebase origin main
git fetch --tags
- name: Get the latest tag
id: get_latest_tag
run: |
@@ -57,27 +65,27 @@ jobs:
- name: Update version.go file
run: |
echo "package main" > version.go
echo "" >> version.go
echo "var version = \"${{ env.new_tag }}\"" >> version.go
echo "package main" > cmd/fabric/version.go
echo "" >> cmd/fabric/version.go
echo "var version = \"${{ env.new_tag }}\"" >> cmd/fabric/version.go
- name: Update version.nix file
run: |
echo "\"${{ env.new_version }}\"" > pkgs/fabric/version.nix
- name: Format source codes
echo "\"${{ env.new_version }}\"" > nix/pkgs/fabric/version.nix
- name: Format source code
run: |
go fmt ./...
nix fmt
- name: Update gomod2nix.toml file
run: |
nix run .#gomod2nix
nix run .#gomod2nix -- --outdir nix/pkgs/fabric
- name: Commit changes
run: |
git add version.go
git add pkgs/fabric/version.nix
git add gomod2nix.toml
git add cmd/fabric/version.go
git add nix/pkgs/fabric/version.nix
git add nix/pkgs/fabric/gomod2nix.toml
git add .
if ! git diff --staged --quiet; then
git commit -m "Update version to ${{ env.new_tag }} and commit $commit_hash"
@@ -87,7 +95,7 @@ jobs:
- name: Push changes
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} # Use GITHUB_TOKEN to authenticate the push
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} # Use GITHUB_TOKEN to authenticate the push
run: |
git push origin main # Push changes to the main branch
@@ -100,7 +108,7 @@ jobs:
- name: Dispatch event to trigger release workflow
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} # Use GITHUB_TOKEN to authenticate the dispatch
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} # Use GITHUB_TOKEN to authenticate the dispatch
run: |
curl -X POST \
-H "Authorization: token $GITHUB_TOKEN" \

7
.gitignore vendored

@@ -58,6 +58,7 @@ coverage.xml
.hypothesis/
.pytest_cache/
cover/
coverage.out
# Translations
*.mo
@@ -130,9 +131,7 @@ celerybeat.pid
# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/
@@ -347,3 +346,7 @@ ENV
web/package-lock.json
.gitignore_backup
web/static/*.png
# Local tmp directory
.tmp/
tmp/

3
.vscode/extensions.json vendored Normal file

@@ -0,0 +1,3 @@
{
"recommendations": ["davidanson.vscode-markdownlint"]
}

144
.vscode/settings.json vendored Normal file

@@ -0,0 +1,144 @@
{
"cSpell.words": [
"addextension",
"AIML",
"anthropics",
"badfile",
"Behrens",
"blindspots",
"Bombal",
"Cerebras",
"compinit",
"creatordate",
"custompatterns",
"danielmiessler",
"davidanson",
"Debugf",
"dedup",
"deepseek",
"direnv",
"dryrun",
"dsrp",
"editability",
"Eisler",
"elif",
"envrc",
"eugeis",
"Eugen",
"excalidraw",
"exolab",
"fabriclogo",
"fpath",
"frequencypenalty",
"fsdb",
"gantt",
"genai",
"githelper",
"gjson",
"GOARCH",
"godotenv",
"gofmt",
"goimports",
"gomod",
"gonic",
"goopenai",
"GOPATH",
"gopkg",
"GOROOT",
"Graphviz",
"grokai",
"Groq",
"hackerone",
"Haddix",
"hasura",
"hormozi",
"Hormozi's",
"HTMLURL",
"jaredmontoya",
"jessevdk",
"Jina",
"joho",
"ksylvan",
"Langdock",
"ldflags",
"libexec",
"listcontexts",
"listextensions",
"listmodels",
"listpatterns",
"listsessions",
"liststrategies",
"listvendors",
"lmstudio",
"Makefiles",
"markmap",
"matplotlib",
"mattn",
"Miessler",
"nometa",
"numpy",
"ollama",
"opencode",
"openrouter",
"otiai",
"pdflatex",
"pipx",
"PKCE",
"pkgs",
"presencepenalty",
"printcontext",
"printsession",
"pycache",
"pyperclip",
"readystream",
"restapi",
"rmextension",
"samber",
"sashabaranov",
"sdist",
"seaborn",
"semgrep",
"sess",
"storer",
"Streamlit",
"stretchr",
"talkpanel",
"Telos",
"Thacker",
"tidwall",
"topp",
"ttrc",
"unalias",
"unmarshalling",
"updatepatterns",
"videoid",
"webp",
"wipecontext",
"wipesession",
"writeups",
"xclip",
"yourpatternname"
],
"cSpell.ignorePaths": ["go.mod", ".gitignore", "CHANGELOG.md"],
"markdownlint.config": {
"MD004": false,
"MD011": false,
"MD024": false,
"MD025": false,
"M032": false,
"MD033": {
"allowed_elements": [
"a",
"br",
"code",
"div",
"em",
"h4",
"img",
"module",
"p"
]
},
"MD041": false
}
}

314
Alma.md

@@ -1,314 +0,0 @@
## Document Purpose
This document captures the SPQA policy and State for Alma Security, a security startup out of Redwood City, Ca.
This is part of the SPQA context that will be used to answer questions and create artifacts for the company, e.g., company strategy, security strategy, quarterly security reports (QSRs), project plans, recommendations on which projects to undertake, which investments to take and avoid, and other such decisions.
A major aspect of the SPQA system is the definition of the company's mission, goals, KPIs, and challenges. These shape everything within the company and thus should be used to shape the recommendations made when asked.
In addition to the clearly stated goals and other defining characteristics listed above, there will also be a streaming list of updates coming into this system using the Activity document.
Those will be changes, updates, or modifications to the direction of the company. For example, if Goal number 4 is to build a new datacenter in Boise, Idaho, but we see an update in the Activity section that says we've lost the ability to build in Boise, we should consider goal #4 out of the picture for prioritization and other decision purposes. In other words, the streaming activity log into this document should be considered updates to the core content.
## Company History
Alma Security was started by Chris Meyers, who was previously at Sigma Systems as CTO and HPE as a senior security engineer.
He started the company because, "I saw a gap in the authentication market, where companies were only looking at one or two aspects of one's identity to do authentication. They weren't looking at the whole picture and turning that into a continuous authentication story."
## Company Mission
The mission of Alma Security is to ensure businesses can continuously authenticate their users using their whole selves.
## Company Goals (G1 means goal 1, G2 is goal 2, etc. Treat each item (goal/kpi/etc) as half as important as the one before it.)
NOTE: Some goals are things like project rollouts which serve the higher goals. In that case they shouldn't always be considered so much lower priority because one is serving the other.
## Company Goals
- G1: Achieve 20% market share by January 2025
- G2: Hit 10000 active customers by January 2025
- G3: Hit a customer trust score of 90+% by January 2025
- G4: Get churn below 5% by August 2024
- G5: Launch in Europe by August 2024
- G6: Launch in India by November 2024
- G7: Launch Mood-monitor integration by February 2024
- G8: Launch partnership with Apple Passkeys by June 2024
## Company KPIs
- K1: Current marketshare percentage
- K2: Number of active customers
- K3: Current churn percentage
- K4: Launched_in_Europe (yes/no)
- K4: Launched_in_India (yes/no)
-----------------------------------------------------------------------------------------------------------------------
## Security Team Mission
- SM1: Protect Alma Security's customers and intellectual property from security and privacy incidents.
## Security Team Goals
- SG1: Secure all customer data -- especially biometric -- from security and privacy incidents.
- SG2: Protect Alma Security's intellectual property from being captured by unauthorized parties.
- SG3: Reach a time to detect malicious behavior of less than 4 minutes by January 2025
- SG4: Ensure the public trusts our product, because it's an authentication product we can't survive if people don't trust us.
- SG5: Reach a time to remediate critical vulnerabilties on crown jewel systems of less than 16 hours by August 2025
- SG6: Reach a time to remediate critical vulnerabilties on all systems of less than 3 days by August 2025
- SG5: Reach a time to remediate critical vulnerabilities on crown jewel systems of less than 16 hours by August 2025
- SG6: Reach a time to remediate critical vulnerabilities on all systems of less than 3 days by August 2025
- SG7: Complete audit of Apple Passkey integration by February 2025
- SG8: Complete remediation of Apple Passkey vulns by February 2025
## Security Team KPIs (How we measure the team)
- SK1: TTD: Time to detect malicious behavior (Minutes)
- SK1: TTI: Time to begin investigation of malicious behavior (Minutes)
- SK3: TTR-CJC: Time to remediate critical vulnerabilities on crown jewel systems (Hours)
- SK3: TTR-C: Time to remediate critical vulnerabilities on all systems (Hours)
- SK4: PT: Public trust score (Complete, Significant, Moderate, Minimal, Distrust, N/A)
## Risk Register (The things we're most worried about)
- R1: Our infrastructure security team is understaffed by 50% after 5 key people left
- R2: We are not currently monitoring our external perimeter for attack surface related vulnerabilities like open ports, listening applications, unknown hosts, unknown subdomains pointing to these things, etc. We only do scans once every couple of months and we don't really have anyone to look at the results
- R3: It takes us multiple days to investigate potential malicious behavior on our systems.
- R4: We lack a full list of our assets, including externally facing hosts, S3 buckets, etc., which make up our attack surface
- R5: We have a low public trust score due to the events of 2022.
## Security Team Narrative
### Background
Alma hired a new security team starting in January of 2023 and we have been building out the program since then. The philosophy and approach for the security team is to explicitly articulate what we believe the highest risks are to Alma, to deploy targeted strategies to address those risks, and to use clear, transparent KPIs to show progress towards our goals over time.
### Current Risks
So our risk register looks like this:
1. We are understaffed by 50% after 5 key people left in 2022
2. Our perimeter is not being monitored for attack surface related vulnerabilities
3. It takes us too long to detect and start investigating malicious behavior on our systems
4. We do not have a full list of our assets, which makes it difficult to know what we need to protect
5. We have a low public trust score due to the events of 2022
### Strategies
As such, our strategies are as follows:
1. Hire 5 more A-tier security professionals
2. Purchase and implement an attack surface management solution
3. Invest in our detection and response capabilities
4. Purchase an asset inventory system that integrates with our attack surface management tool
5. Leverage PR to share as much of our progress as possible with the public to rebuild trust
### How We're Doing
We believe being transparent about our progress is key to everything, and for that reason we maintain a limited number of KPIs that we update every quarter. These metrics will not change often. They will remain consistent so that it's easy to track how we're spending our resources and the progress we're making.
Those KPIs are:
1. Time to detect malicious behavior
2. Time to start investigating malicious behavior
3. Time to remediate critical vulnerabilities on crown jewel systems
4. Time to remediate critical vulnerabilities on all systems
5. Our public trust score
As of $DATE$, our KPIs for these are currently:
$GIVE CURRENT KPIs from the Activity section below$
$INSERT GRAPHS OF KPI PROGRESS OVER TIME HERE$
## Security Team Strategies
- STS1: Hire 5 more A-tier security professionals
- STS2: Purchase an attack surface management solution
- STS3: Invest in our detection and response capabilities
- STS4: Purchase an asset inventory system that integrates with our attack surface management tool
## Infrastructure Notes (a basic description of our tech stack and various context around it)
- We currently have no WAF protecting our main web app but we're considering adding one
- We have had some issues with S3 buckets becoming public, or being set up as public, which has led to some close calls with customer data almost being exposed.
- alma.amazon-domain.com is our primary S3 bucket that contains everything, but it's not public readable or listable
- We have a root account for our AWS account that doesn't yet have 2FA on it, but we're working on fixing that within a few weeks (but it's been open for a few months)
- We also use Postgres for all our databases.
- Developers have root access to all kubernetes nodes via SSH on port 45,001 using a shared developer key issued during laptop provisioning.
- We're a kubernetes shop and do everything through AWS
- We're logging most stuff to CloudTrail and we kind of use GuardDuty, but we don't have a 24/7 team to monitor alerts and logs. We should add that to our list of challenges next time we update our overall policy
- We also have a Windows infrastructure because some key personnel came from Microsoft. The DC is hosted in our head office which is in Redwood City, and anyone who works in that office (most of the 300 employees) uses that to log in when they start work. The domain is ALMA.
- There's a domain-joined fileserver running Windows 2012 that most people use to upload new ideas and plans for new products. It uses Windows authentication from the domain.
- We use a palo alto firewall with 2fa using windows authenticator tied to SSO.
- The name of the AI system doing all this context creation using SPQA is Alma, which is also the name of the company.
- We use Workday for HR stuff. Slack for realtime communications. Outlook 365 as a service. Sentinel One on the workstations and laptops. Servers in AWS are mostly Amazon Linux 2 with a few Ubuntu boxes that are a few years old.
- We also primarily use Postgres for all of our systems.
## Team
TEAM MEMBER | TEAM ASSIGNED | SKILLS | PAY LEVEL | LOCATION | PROJECTS
Nadia Khan | Detection and Response | D&R (Expert), AWS (Strong), Python (Expert), Kubernetes (Basic), Postgres (Basic) | $249K | Redwood City
Chris Magann | Vulnerability Management | VM (Expert), AWS (Strong), Python (Basic), Postgres (Basic) | $212K | Redwood City
Tigan Wang | Vulnerability Management | VM (Expert), AWS (Strong), Python (Basic), Postgres (Basic) | $217K | Redwood City
## Projects
PROJECT NAME | PROJECT DESCRIPTION | PROJECT PRIORITY | PROJECT MEMBERS | START DATE | END DATE | STATUS | PROJECT COST
WAF Install | Install a WAF in front of our main web app | Critical | Nadia Khan | 2024-01-01 - Ongoing | In Progress | $112K one-time, $9K/month
Multi-Factor Authentication (MFA) Rollout | Implement MFA across all internal and external systems | Critical | Chris Magaan | 2024-01-15 | 2024-05-01 | Planned | $80K one-time, $5K/month
Procure and Implement ASM | Implement continuous monitoring for attack surface vulnerabilities | High | Tigan Wang | 2024-02-15 | 2024-06-15 | Not Started | $75K one-time, $6K/month
Data Encryption Upgrade | Upgrade encryption protocols for all sensitive data | Medium | Nadia Khan | 2024-04-01 | 2024-08-01 | Planned | $95K one-time
Incident Response Enhancement | Develop and implement a 24/7 incident response team | High | Nadia Khan | 2024-03-01 | 2024-07-01 | In Progress | $150K one-time, $10K/month
Cloud Security Optimization | Optimize AWS cloud security configurations and practices | Medium | Tigan Wang | 2024-02-01 | 2024-06-01 | In Progress | $100K one-time, $8K/month
S3 Bucket Security | Review and secure all S3 buckets to prevent data breaches | High | Chris Magaan | 2024-01-10 | 2024-04-10 | In Progress | $70K one-time, $5K/month
SQL Injection Mitigation | Implement measures to eliminate SQL injection vulnerabilities | High | Tigan Wang | 2024-01-20 | 2024-05-20 | Not Started | $60K one-time
## SECURITY POSTURE (To be referenced for compliance questions and security questionnaires)
July 2019
Admin accounts still not required to use 2FA.
Company laptops distributed to employees, no MDM yet for device management.
AWS IAM roles created for engineers, but root access still frequently used.
Started basic vulnerability scanning using open-source tools.
December 2019
MFA enforced for all Google Workspace accounts after a phishing attempt.
Introduced ClamAV for basic endpoint protection on corporate laptops.
AWS GuardDuty enabled for threat detection, but no formal incident response team.
First incident response plan table-top exercise conducted, but findings not fully documented.
April 2020
Migrated from Google Workspace to Office 365, with MFA enabled for all users.
Rolled out SentinelOne for endpoint protection on 50% of company laptops.
Implemented least-privilege access control for AWS IAM roles.
First formal vendor risk management review completed for major SaaS providers.
August 2020
Completed full deployment of SentinelOne across all endpoints.
Implemented AWS CloudWatch for real-time alerts; however, logs still not monitored 24/7.
Began encrypting all AWS S3 buckets at rest using server-side encryption.
First internal review of data retention policies, started drafting data disposal policy.
January 2021
Rolled out Jamf MDM for centralized management of macOS devices, enforcing encryption (FileVault) on all laptops.
Strengthened Office 365 security by implementing phishing-resistant MFA using authenticator apps.
AWS KMS introduced for managing encryption keys; manual key rotation policy documented.
Introduced formal onboarding and offboarding processes for employee account management.
July 2021
Conditional access policies introduced for Office 365, restricting access based on geography (US-only).
Conducted company-wide security awareness training for the first time, focusing on phishing threats.
Completed first backup and disaster recovery (DR) drill with AWS, documenting recovery times.
AWS Config deployed to monitor and enforce encryption and access control policies across accounts.
December 2021
Full migration to AWS for all production systems completed.
Incident response playbook finalized and shared with the security team; still no 24/7 monitoring.
Documented data classification policies for handling sensitive customer data in preparation for SOC 2 audit.
First third-party penetration test conducted, critical vulnerabilities identified and remediated within 30 days.
March 2022
Rolled out company-wide 2FA for all critical systems, including Office 365, AWS, GitHub, and Slack.
Introduced AWS Secrets Manager for managing sensitive credentials, eliminating hardcoded API keys.
Updated all documentation for identity and access management in preparation for SOC 2 Type 1 audit.
First external vulnerability scan completed using Qualys, with remediation SLAs established.
April 2022
Updated and consolidated all security policies (incident response, access control, data retention) in preparation for SOC 2 audit.
Conducted tabletop exercise for ransomware response, documenting gaps in the incident response process.
Implemented Just-In-Time (JIT) access for administrative privileges in AWS, reducing unnecessary persistent access.
October 2022
Passed SOC 2 Type 1 audit, with recommendations to improve monitoring and asset management.
Launched quarterly phishing simulations to raise employee awareness and track training effectiveness.
Fully enforced encryption for all customer data in transit and at rest using AWS KMS.
Extended GuardDuty to cover all AWS regions; started monitoring alerts daily.
January 2023
Hired a dedicated CISO and expanded security team by 30%.
Integrated continuous vulnerability scanning across all externally facing assets using Qualys.
Conducted first third-party vendor risk assessment to ensure alignment with SOC 2 and internal security standards.
Implemented automated patch management for all AWS EC2 instances, reducing time to deploy critical patches.
July 2023
Rolled out continuous attack surface monitoring (ASM) to identify and remediate external vulnerabilities.
Performed annual data retention review, ensuring compliance with SOC 2 and GDPR requirements.
Conducted a disaster recovery drill for AWS workloads, achieving a recovery time objective (RTO) of under 4 hours.
Completed SOC 2 Type 2 readiness assessment, with focus on improving incident response times.
November 2023
Updated incident response documentation and assigned 24/7 monitoring to a third-party SOC provider.
Rolled out zero-trust network architecture across the organization, removing reliance on VPN for remote access.
Passed SOC 2 Type 2 audit with no major findings; recommendations included improved asset inventory tracking.
Conducted full audit of access control policies and JIT access implementation in preparation for ISO 27001 certification.
April 2024
Implemented AI-driven threat detection to reduce time to detect security incidents from 10 hours to under 2 hours.
Completed full encryption audit across all databases, ensuring compliance with GDPR, HIPAA, and other privacy regulations.
Updated employee training programs to include privacy regulations (GDPR, CCPA) and data handling best practices.
Completed internal review and audit of vendor access to critical systems as part of SOC 2 compliance effort.
Completed move of all AWS services to us-west-2 and us-east-1 regions for 100% us-based cloud services.
October 2024
Conducted organization-wide review of data retention and disposal policies, implementing automated data deletion for expired data.
Implemented continuous compliance monitoring for SOC 2, with automated alerts for deviations in access controls and encryption settings.
Finalized implementation of AI-based monitoring and response systems, significantly reducing time to remediate critical vulnerabilities.
Passed SOC 2 Type 2 and ISO 27001 audits with zero non-conformities, achieving full compliance across all control areas.
March 2018
Personal Gmail accounts used for internal and external communication.
No 2FA enabled on any accounts.
AWS accounts shared with engineers, no IAM roles or formal access control policies.
No centralized endpoint protection; employees use personal laptops with no security controls.
No documented security policies or incident response plan.
September 2018
Initiated migration from personal Gmail to Google Workspace (G Suite) for business email.
Password complexity requirements introduced (minimum 8 characters).
AWS root credentials still shared among team members, no MFA enabled.
No formal logging or monitoring in place for AWS activity.
February 2019
Completed migration to Google Workspace; no email encryption yet.
Introduced a basic password manager (LastPass) but no enforcement policy.
AWS CloudTrail enabled for logging, but no one is reviewing logs.
First draft of the incident response plan created, but not tested.
June 2019
Enforced MFA for Google Workspace admin accounts; standard user
## CURRENT STATE (KPIs, Metrics, Project Activity Updates, etc.)
- October 2022: Current time to detect malicious behavior is 81 hours
- October 2022: Current time to start investigating malicious behavior is 82 hours
- October 2022: Current time to remediate critical vulnerabilities on crown jewel systems is 21 days
- October 2022: Current time to remediate critical vulnerabilities on all systems is 51 days
- January 2023: Current time to detect malicious behavior is 62 hours
- January 2023: Current time to start investigating malicious behavior is 72 hours
- January 2023: Current time to remediate critical vulnerabilities on crown jewel systems is 17 days
- January 2023: Current time to remediate critical vulnerabilities on all systems is 43 days
- July 2023: Current time to detect malicious behavior is 29 hours
- July 2023: Current time to start investigating malicious behavior is 41 hours
- July 2023: Current time to remediate critical vulnerabilities on crown jewel systems is 12 days
- July 2023: Current time to remediate critical vulnerabilities on all systems is 29 days
- November 2023: Current time to detect malicious behavior is 12 hours
- November 2023: Current time to start investigating malicious behavior is 16 hours
- November 2023: Current time to remediate critical vulnerabilities on crown jewel systems is 9 days
- November 2023: Current time to remediate critical vulnerabilities on all systems is 17 days
- February 2024: Started attack surface management vendor selection process
- January 2024: Current time to detect malicious behavior is 9 hours
- January 2024: Current time to start investigating malicious behavior is 14 hours
- January 2024: Current time to remediate critical vulnerabilities on crown jewel systems is 8 days
- January 2024: Current time to remediate critical vulnerabilities on all systems is 12 days
- March 2024: We're now remediating crits on crown jewels in less than 6 days
- April 2024: We're now remediating all criticals within 11 days
- July 2024: Criticals are now being fixed in 9 days
- On August 5 we got remediation of critical vulnerabilities down to 7 days

2300
CHANGELOG.md Normal file

File diff suppressed because it is too large

File diff suppressed because it is too large

477
README.md

@@ -1,6 +1,9 @@
<div align="center">
Fabric is graciously supported by…
<img src="./images/fabric-logo-gif.gif" alt="fabriclogo" width="400" height="400"/>
[![Github Repo Tagline](https://github.com/user-attachments/assets/96ab3d81-9b13-4df4-ba09-75dee7a5c3d2)](https://warp.dev/fabric)
<img src="./docs/images/fabric-logo-gif.gif" alt="fabriclogo" width="400" height="400"/>
# `fabric`
@@ -9,38 +12,70 @@
![GitHub top language](https://img.shields.io/github/languages/top/danielmiessler/fabric)
![GitHub last commit](https://img.shields.io/github/last-commit/danielmiessler/fabric)
[![License: MIT](https://img.shields.io/badge/License-MIT-green.svg)](https://opensource.org/licenses/MIT)
[![Ask DeepWiki](https://deepwiki.com/badge.svg)](https://deepwiki.com/danielmiessler/fabric)
<p class="align center">
<div align="center">
<h4><code>fabric</code> is an open-source framework for augmenting humans using AI.</h4>
</p>
</div>
[Updates](#updates) •
[What and Why](#whatandwhy) •
[What and Why](#what-and-why) •
[Philosophy](#philosophy) •
[Installation](#Installation) •
[Usage](#Usage) •
[Installation](#installation) •
[Usage](#usage) •
[Examples](#examples) •
[Just Use the Patterns](#just-use-the-patterns) •
[Custom Patterns](#custom-patterns) •
[Helper Apps](#helper-apps) •
[Meta](#meta)
![Screenshot of fabric](images/fabric-summarize.png)
![Screenshot of fabric](./docs/images/fabric-summarize.png)
</div>
## What and why
Since the start of modern AI in late 2022 we've seen an **_extraordinary_** number of AI applications for accomplishing tasks. There are thousands of websites, chat-bots, mobile apps, and other interfaces for using all the different AI out there.
It's all really exciting and powerful, but _it's not easy to integrate this functionality into our lives._
<div class="align center">
<h4>In other words, AI doesn't have a capabilities problem—it has an <em>integration</em> problem.</h4>
</div>
**Fabric was created to address this by creating and organizing the fundamental units of AI—the prompts themselves!**
Fabric organizes prompts by real-world task, allowing people to create, collect, and organize their most important AI solutions in a single place for use in their favorite tools. And if you're command-line focused, you can use Fabric itself as the interface!
## Intro videos
Keep in mind that many of these were recorded when Fabric was Python-based, so remember to use the current [install instructions](#installation) below.
- [Network Chuck](https://www.youtube.com/watch?v=UbDyjIIGaxQ)
- [David Bombal](https://www.youtube.com/watch?v=vF-MQmVxnCs)
- [My Own Intro to the Tool](https://www.youtube.com/watch?v=wPEyyigh10g)
- [More Fabric YouTube Videos](https://www.youtube.com/results?search_query=fabric+ai)
## Navigation
- [`fabric`](#fabric)
- [What and why](#what-and-why)
- [Intro videos](#intro-videos)
- [Navigation](#navigation)
- [Updates](#updates)
- [Intro videos](#intro-videos)
- [What and why](#what-and-why)
- [Philosophy](#philosophy)
- [Breaking problems into components](#breaking-problems-into-components)
- [Too many prompts](#too-many-prompts)
- [Installation](#installation)
- [Get Latest Release Binaries](#get-latest-release-binaries)
- [Windows](#windows)
- [macOS (arm64)](#macos-arm64)
- [macOS (amd64)](#macos-amd64)
- [Linux (amd64)](#linux-amd64)
- [Linux (arm64)](#linux-arm64)
- [Using package managers](#using-package-managers)
- [macOS (Homebrew)](#macos-homebrew)
- [Arch Linux (AUR)](#arch-linux-aur)
- [From Source](#from-source)
- [Environment Variables](#environment-variables)
- [Setup](#setup)
@@ -48,46 +83,39 @@
- [Save your files in markdown using aliases](#save-your-files-in-markdown-using-aliases)
- [Migration](#migration)
- [Upgrading](#upgrading)
- [Shell Completions](#shell-completions)
- [Zsh Completion](#zsh-completion)
- [Bash Completion](#bash-completion)
- [Fish Completion](#fish-completion)
- [Usage](#usage)
- [Our approach to prompting](#our-approach-to-prompting)
- [Examples](#examples)
- [Just use the Patterns](#just-use-the-patterns)
- [Prompt Strategies](#prompt-strategies)
- [Custom Patterns](#custom-patterns)
- [Setting Up Custom Patterns](#setting-up-custom-patterns)
- [Using Custom Patterns](#using-custom-patterns)
- [How It Works](#how-it-works)
- [Helper Apps](#helper-apps)
- [`to_pdf`](#to_pdf)
- [`to_pdf` Installation](#to_pdf-installation)
- [`code_helper`](#code_helper)
- [pbpaste](#pbpaste)
- [Web Interface](#Web_Interface)
- [Web Interface](#web-interface)
- [Installing](#installing)
- [Streamlit UI](#streamlit-ui)
- [Clipboard Support](#clipboard-support)
- [Meta](#meta)
- [Primary contributors](#primary-contributors)
- [Contributors](#contributors)
<br />
## Updates
> [!NOTE]
> February 24, 2025
>
> - Fabric now supports Sonnet 3.7! Update and use `-S` to select it as your default if you want, or just use the shortcut `-m claude-3-7-sonnet-latest`. Enjoy!
Fabric is evolving rapidly.
## What and why
Since the start of 2023 and GenAI we've seen a massive number of AI applications for accomplishing tasks. It's powerful, but _it's not easy to integrate this functionality into our lives._
<div align="center">
<h4>In other words, AI doesn't have a capabilities problem—it has an <em>integration</em> problem.</h4>
</div>
Fabric was created to address this by enabling everyone to granularly apply AI to everyday challenges.
## Intro videos
Keep in mind that many of these were recorded when Fabric was Python-based, so remember to use the current [install instructions](#Installation) below.
- [Network Chuck](https://www.youtube.com/watch?v=UbDyjIIGaxQ)
- [David Bombal](https://www.youtube.com/watch?v=vF-MQmVxnCs)
- [My Own Intro to the Tool](https://www.youtube.com/watch?v=wPEyyigh10g)
- [More Fabric YouTube Videos](https://www.youtube.com/results?search_query=fabric+ai)
Stay current with the latest features by reviewing the [CHANGELOG](./CHANGELOG.md) for all recent changes.
## Philosophy
@@ -105,7 +133,7 @@ Our approach is to break problems into individual pieces (see below) and then ap
Prompts are good for this, but the biggest challenge I faced in 2023——which still exists today—is **the sheer number of AI prompts out there**. We all have prompts that are useful, but it's hard to discover new ones, know if they are good or not, _and manage different versions of the ones we like_.
One of <code>fabric</code>'s primary features is helping people collect and integrate prompts, which we call _Patterns_, into various parts of their lives.
One of `fabric`'s primary features is helping people collect and integrate prompts, which we call _Patterns_, into various parts of their lives.
Fabric has Patterns for all sorts of life and work activities, including:
@@ -126,28 +154,50 @@ To install Fabric, you can use the latest release binaries or install it from th
### Get Latest Release Binaries
#### Windows:
#### Windows
`https://github.com/danielmiessler/fabric/releases/latest/download/fabric-windows-amd64.exe`
#### MacOS (arm64):
#### macOS (arm64)
`curl -L https://github.com/danielmiessler/fabric/releases/latest/download/fabric-darwin-arm64 > fabric && chmod +x fabric && ./fabric --version`
#### MacOS (amd64):
#### macOS (amd64)
`curl -L https://github.com/danielmiessler/fabric/releases/latest/download/fabric-darwin-amd64 > fabric && chmod +x fabric && ./fabric --version`
#### Linux (amd64):
#### Linux (amd64)
`curl -L https://github.com/danielmiessler/fabric/releases/latest/download/fabric-linux-amd64 > fabric && chmod +x fabric && ./fabric --version`
#### Linux (arm64):
#### Linux (arm64)
`curl -L https://github.com/danielmiessler/fabric/releases/latest/download/fabric-linux-arm64 > fabric && chmod +x fabric && ./fabric --version`
### Using package managers
**NOTE:** using Homebrew or the Arch Linux package managers makes `fabric` available as `fabric-ai`, so add
the following alias to your shell startup files to account for this:
```bash
alias fabric='fabric-ai'
```
#### macOS (Homebrew)
`brew install fabric-ai`
#### Arch Linux (AUR)
`yay -S fabric-ai`
### From Source
To install Fabric, [make sure Go is installed](https://go.dev/doc/install), and then run the following command.
```bash
# Install Fabric directly from the repo
go install github.com/danielmiessler/fabric@latest
go install github.com/danielmiessler/fabric/cmd/fabric@latest
```
### Environment Variables
@@ -204,8 +254,19 @@ for pattern_file in $HOME/.config/fabric/patterns/*; do
done
yt() {
if [ "$#" -eq 0 ] || [ "$#" -gt 2 ]; then
echo "Usage: yt [-t | --timestamps] youtube-link"
echo "Use the '-t' flag to get the transcript with timestamps."
return 1
fi
transcript_flag="--transcript"
if [ "$1" = "-t" ] || [ "$1" = "--timestamps" ]; then
transcript_flag="--transcript-with-timestamps"
shift
fi
local video_link="$1"
fabric -y "$video_link" --transcript
fabric -y "$video_link" $transcript_flag
}
```
@@ -263,10 +324,34 @@ function $patternName {
function yt {
[CmdletBinding()]
param(
[Parameter(Mandatory = $true)]
[Parameter()]
[Alias("timestamps")]
[switch]$t,
[Parameter(Position = 0, ValueFromPipeline = $true)]
[string]$videoLink
)
fabric -y $videoLink --transcript
begin {
$transcriptFlag = "--transcript"
if ($t) {
$transcriptFlag = "--transcript-with-timestamps"
}
}
process {
if (-not $videoLink) {
Write-Error "Usage: yt [-t | --timestamps] youtube-link"
return
}
}
end {
if ($videoLink) {
# Execute and allow output to flow through the pipeline
fabric -y $videoLink $transcriptFlag
}
}
}
```
@@ -274,7 +359,7 @@ This also creates a `yt` alias that allows you to use `yt https://www.youtube.co
#### Save your files in markdown using aliases
If in addition to the above aliases you would like to have the option to save the output to your favourite markdown note vault like Obsidian then instead of the above add the following to your `.zshrc` or `.bashrc` file:
If in addition to the above aliases you would like to have the option to save the output to your favorite markdown note vault like Obsidian then instead of the above add the following to your `.zshrc` or `.bashrc` file:
```bash
# Define the base directory for Obsidian notes
@@ -285,7 +370,7 @@ for pattern_file in ~/.config/fabric/patterns/*; do
# Get the base name of the file (i.e., remove the directory path)
pattern_name=$(basename "$pattern_file")
# Unalias any existing alias with the same name
# Remove any existing alias with the same name
unalias "$pattern_name" 2>/dev/null
# Define a function dynamically for each pattern
@@ -306,11 +391,6 @@ for pattern_file in ~/.config/fabric/patterns/*; do
}
"
done
yt() {
local video_link="$1"
fabric -y "$video_link" --transcript
}
```
This will allow you to use the patterns as aliases like in the above for example `summarize` instead of `fabric --pattern summarize --stream`, however if you pass in an extra argument like this `summarize "my_article_title"` your output will be saved in the destination that you set in `obsidian_base="/path/to/obsidian"` in the following format `YYYY-MM-DD-my_article_title.md` where the date gets autogenerated for you.
@@ -327,19 +407,61 @@ pipx uninstall fabric
# Clear any old Fabric aliases
(check your .bashrc, .zshrc, etc.)
# Install the Go version
go install github.com/danielmiessler/fabric@latest
go install github.com/danielmiessler/fabric/cmd/fabric@latest
# Run setup for the new version. Important because things have changed
fabric --setup
```
Then [set your environmental variables](#environmental-variables) as shown above.
Then [set your environmental variables](#environment-variables) as shown above.
### Upgrading
The great thing about Go is that it's super easy to upgrade. Just run the same command you used to install it in the first place and you'll always get the latest version.
```bash
go install github.com/danielmiessler/fabric@latest
go install github.com/danielmiessler/fabric/cmd/fabric@latest
```
### Shell Completions
Fabric provides shell completion scripts for Zsh, Bash, and Fish
shells, making it easier to use the CLI by providing tab completion
for commands and options.
#### Zsh Completion
To enable Zsh completion:
```bash
# Copy the completion file to a directory in your $fpath
mkdir -p ~/.zsh/completions
cp completions/_fabric ~/.zsh/completions/
# Add the directory to fpath in your .zshrc before compinit
echo 'fpath=(~/.zsh/completions $fpath)' >> ~/.zshrc
echo 'autoload -Uz compinit && compinit' >> ~/.zshrc
```
#### Bash Completion
To enable Bash completion:
```bash
# Source the completion script in your .bashrc
echo 'source /path/to/fabric/completions/fabric.bash' >> ~/.bashrc
# Or copy to the system-wide bash completion directory
sudo cp completions/fabric.bash /etc/bash_completion.d/
```
#### Fish Completion
To enable Fish completion:
```bash
# Copy the completion file to the fish completions directory
mkdir -p ~/.config/fish/completions
cp completions/fabric.fish ~/.config/fish/completions/
```
## Usage
@@ -350,54 +472,81 @@ Once you have it all set up, here's how to use it.
fabric -h
```
```bash
```plaintext
Usage:
fabric [OPTIONS]
Application Options:
-p, --pattern= Choose a pattern from the available patterns
-v, --variable= Values for pattern variables, e.g. -v=#role:expert -v=#points:30"
-C, --context= Choose a context from the available contexts
--session= Choose a session from the available sessions
-a, --attachment= Attachment path or URL (e.g. for OpenAI image recognition messages)
-S, --setup Run setup for all reconfigurable parts of fabric
-t, --temperature= Set temperature (default: 0.7)
-T, --topp= Set top P (default: 0.9)
-s, --stream Stream
-P, --presencepenalty= Set presence penalty (default: 0.0)
-r, --raw Use the defaults of the model without sending chat options (like temperature etc.) and use the user role instead of the system role for patterns.
-F, --frequencypenalty= Set frequency penalty (default: 0.0)
-l, --listpatterns List all patterns
-L, --listmodels List all available models
-x, --listcontexts List all contexts
-X, --listsessions List all sessions
-U, --updatepatterns Update patterns
-c, --copy Copy to clipboard
-m, --model= Choose model
-o, --output= Output to file
--output-session Output the entire session (also a temporary one) to the output file
-n, --latest= Number of latest patterns to list (default: 0)
-d, --changeDefaultModel Change default model
-y, --youtube= YouTube video "URL" to grab transcript, comments from it and send to chat
--transcript Grab transcript from YouTube video and send to chat (it used per default).
--comments Grab comments from YouTube video and send to chat
--metadata Grab metadata from YouTube video and send to chat
-g, --language= Specify the Language Code for the chat, e.g. -g=en -g=zh
-u, --scrape_url= Scrape website URL to markdown using Jina AI
-q, --scrape_question= Search question using Jina AI
-e, --seed= Seed to be used for LMM generation
-w, --wipecontext= Wipe context
-W, --wipesession= Wipe session
--printcontext= Print context
--printsession= Print session
--readability Convert HTML input into a clean, readable view
--serve Initiate the API server
--dry-run Show what would be sent to the model without actually sending it
--version Print current version
-p, --pattern= Choose a pattern from the available patterns
-v, --variable= Values for pattern variables, e.g. -v=#role:expert -v=#points:30
-C, --context= Choose a context from the available contexts
--session= Choose a session from the available sessions
-a, --attachment= Attachment path or URL (e.g. for OpenAI image recognition messages)
-S, --setup Run setup for all reconfigurable parts of fabric
-t, --temperature= Set temperature (default: 0.7)
-T, --topp= Set top P (default: 0.9)
-s, --stream Stream
-P, --presencepenalty= Set presence penalty (default: 0.0)
-r, --raw Use the defaults of the model without sending chat options (like
temperature etc.) and use the user role instead of the system role for
patterns.
-F, --frequencypenalty= Set frequency penalty (default: 0.0)
-l, --listpatterns List all patterns
-L, --listmodels List all available models
-x, --listcontexts List all contexts
-X, --listsessions List all sessions
-U, --updatepatterns Update patterns
-c, --copy Copy to clipboard
-m, --model= Choose model
--modelContextLength= Model context length (only affects ollama)
-o, --output= Output to file
--output-session Output the entire session (also a temporary one) to the output file
-n, --latest= Number of latest patterns to list (default: 0)
-d, --changeDefaultModel Change default model
-y, --youtube= YouTube video or play list "URL" to grab transcript, comments from it
and send to chat or print it put to the console and store it in the
output file
--playlist Prefer playlist over video if both ids are present in the URL
--transcript Grab transcript from YouTube video and send to chat (it is used per
default).
--transcript-with-timestamps Grab transcript from YouTube video with timestamps and send to chat
--comments Grab comments from YouTube video and send to chat
--metadata Output video metadata
-g, --language= Specify the Language Code for the chat, e.g. -g=en -g=zh
-u, --scrape_url= Scrape website URL to markdown using Jina AI
-q, --scrape_question= Search question using Jina AI
-e, --seed= Seed to be used for LMM generation
-w, --wipecontext= Wipe context
-W, --wipesession= Wipe session
--printcontext= Print context
--printsession= Print session
--readability Convert HTML input into a clean, readable view
--input-has-vars Apply variables to user input
--dry-run Show what would be sent to the model without actually sending it
--serve Serve the Fabric Rest API
--serveOllama Serve the Fabric Rest API with ollama endpoints
--address= The address to bind the REST API (default: :8080)
--api-key= API key used to secure server routes
--config= Path to YAML config file
--version Print current version
--listextensions List all registered extensions
--addextension= Register a new extension from config file path
--rmextension= Remove a registered extension by name
--strategy= Choose a strategy from the available strategies
--liststrategies List all strategies
--listvendors List all vendors
--shell-complete-list Output raw list without headers/formatting (for shell completion)
--search Enable web search tool for supported models (Anthropic, OpenAI)
--search-location= Set location for web search results (e.g., 'America/Los_Angeles')
--image-file= Save generated image to specified file path (e.g., 'output.png')
--image-size= Image dimensions: 1024x1024, 1536x1024, 1024x1536, auto (default: auto)
--image-quality= Image quality: low, medium, high, auto (default: auto)
--image-compression= Compression level 0-100 for JPEG/WebP formats (default: not set)
--image-background= Background type: opaque, transparent (default: opaque, only for
PNG/WebP)
Help Options:
-h, --help Show this help message
```
@@ -427,31 +576,29 @@ Now let's look at some things you can do with Fabric.
1. Run the `summarize` Pattern based on input from `stdin`. In this case, the body of an article.
```bash
pbpaste | fabric --pattern summarize
```
2. Run the `analyze_claims` Pattern with the `--stream` option to get immediate and streaming results.
```bash
pbpaste | fabric --stream --pattern analyze_claims
```
3. Run the `extract_wisdom` Pattern with the `--stream` option to get immediate and streaming results from any YouTube video (much like in the original introduction video).
```bash
fabric -y "https://youtube.com/watch?v=uXs-zPc63kM" --stream --pattern extract_wisdom
```
4. Create patterns: create a `.md` file containing the pattern and save it to `~/.config/fabric/patterns/[yourpatternname]`.
5. Run the `analyze_claims` pattern on a website. Fabric uses Jina AI to scrape the URL into markdown format before sending it to the model.
```bash
fabric -u https://github.com/danielmiessler/fabric/ -p analyze_claims
```
## Just use the Patterns
@@ -460,7 +607,7 @@ fabric -u https://github.com/danielmiessler/fabric/ -p analyze_claims
<br />
<br />
If you're not looking to do anything fancy, and you just want a lot of great prompts, you can navigate to the [`/patterns`](https://github.com/danielmiessler/fabric/tree/main/patterns) directory and start exploring!
If you're not looking to do anything fancy, and you just want a lot of great prompts, you can navigate to the [`/patterns`](https://github.com/danielmiessler/fabric/tree/main/data/patterns) directory and start exploring!
We hope that if you used nothing else from Fabric, the Patterns by themselves will make the project useful.
@@ -468,20 +615,67 @@ You can use any of the Patterns you see there in any AI application that you hav
The wisdom of crowds for the win.
### Prompt Strategies
Fabric also implements prompt strategies like "Chain of Thought" or "Chain of Draft" which can
be used in addition to the basic patterns.
See the [Thinking Faster by Writing Less](https://arxiv.org/pdf/2502.18600) paper and
the [Thought Generation section of Learn Prompting](https://learnprompting.org/docs/advanced/thought_generation/introduction) for examples of prompt strategies.
Each strategy is available as a small `json` file in the [`/strategies`](https://github.com/danielmiessler/fabric/tree/main/data/strategies) directory.
The prompt modification of the strategy is applied to the system prompt and passed on to the
LLM in the chat session.
Use `fabric -S` and select the option to install the strategies in your `~/.config/fabric` directory.
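As a rough sketch of how this works, the snippet below loads a strategy file and prepends its prompt text to a pattern's system prompt before the chat session. The file name, field names, and install path are assumptions made for illustration, not the project's actual schema.

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"path/filepath"
)

// strategy mirrors a hypothetical strategy file layout; the real field names may differ.
type strategy struct {
	Name        string `json:"name"`
	Description string `json:"description"`
	Prompt      string `json:"prompt"`
}

func main() {
	home, _ := os.UserHomeDir()
	// Assumed location and file name; `fabric -S` installs the strategy files under ~/.config/fabric.
	raw, err := os.ReadFile(filepath.Join(home, ".config/fabric/strategies/cot.json"))
	if err != nil {
		panic(err)
	}
	var s strategy
	if err := json.Unmarshal(raw, &s); err != nil {
		panic(err)
	}
	systemPrompt := "You are an expert summarizer." // stands in for a pattern's system.md
	// The strategy's prompt modification is applied to the system prompt and passed to the LLM.
	fmt.Println(s.Prompt + "\n\n" + systemPrompt)
}
```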
## Custom Patterns
You may want to use Fabric to create your own custom Patterns—but not share them with others. No problem!
Just make a directory in `~/.config/custompatterns/` (or wherever) and put your `.md` files in there.
Fabric now supports a dedicated custom patterns directory that keeps your personal patterns separate from the built-in ones. This means your custom patterns won't be overwritten when you update Fabric's built-in patterns.
When you're ready to use them, copy them into:
### Setting Up Custom Patterns
```
~/.config/fabric/patterns/
```
1. Run the Fabric setup:
You can then use them like any other Patterns, but they won't be public unless you explicitly submit them as Pull Requests to the Fabric project. So don't worry—they're private to you.
```bash
fabric --setup
```
2. Select the "Custom Patterns" option from the Tools menu and enter your desired directory path (e.g., `~/my-custom-patterns`)
3. Fabric will automatically create the directory if it does not exist.
### Using Custom Patterns
1. Create your custom pattern directory structure:
```bash
mkdir -p ~/my-custom-patterns/my-analyzer
```
2. Create your pattern file
```bash
echo "You are an expert analyzer of ..." > ~/my-custom-patterns/my-analyzer/system.md
```
3. **Use your custom pattern:**
```bash
fabric --pattern my-analyzer "analyze this text"
```
### How It Works
- **Priority System**: Custom patterns take precedence over built-in patterns with the same name
- **Seamless Integration**: Custom patterns appear in `fabric --listpatterns` alongside built-in ones
- **Update Safe**: Your custom patterns are never affected by `fabric --updatepatterns`
- **Private by Default**: Custom patterns remain private unless you explicitly share them
Your custom patterns are completely private and won't be affected by Fabric updates!
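A minimal sketch of the priority rule above: check the custom directory first and fall back to the built-in one. The `resolvePattern` helper and the directory names are hypothetical and only illustrate the lookup order.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// resolvePattern returns the path to a pattern's system.md, preferring the
// custom patterns directory over the built-in one (hypothetical helper).
func resolvePattern(name, customDir, builtinDir string) (string, error) {
	for _, dir := range []string{customDir, builtinDir} {
		p := filepath.Join(dir, name, "system.md")
		if _, err := os.Stat(p); err == nil {
			return p, nil
		}
	}
	return "", fmt.Errorf("pattern %q not found", name)
}

func main() {
	home, _ := os.UserHomeDir()
	p, err := resolvePattern("my-analyzer",
		filepath.Join(home, "my-custom-patterns"),
		filepath.Join(home, ".config/fabric/patterns"))
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("using pattern file:", p)
}
```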
## Helper Apps
@@ -510,11 +704,25 @@ This will create a PDF file named `output.pdf` in the current directory.
To install `to_pdf`, install it the same way as you install Fabric, just with a different package path.
```bash
go install github.com/danielmiessler/fabric/plugins/tools/to_pdf@latest
go install github.com/danielmiessler/fabric/cmd/to_pdf@latest
```
Make sure you have a LaTeX distribution (like TeX Live or MiKTeX) installed on your system, as `to_pdf` requires `pdflatex` to be available in your system's PATH.
### `code_helper`
`code_helper` is used in conjunction with the `create_coding_feature` pattern.
It generates a `json` representation of a directory of code that can be fed into an AI model
with instructions to create a new feature or edit the code in a specified way.
See [the Create Coding Feature Pattern README](./data/patterns/create_coding_feature/README.md) for details.
Install it first using:
```bash
go install github.com/danielmiessler/fabric/cmd/code_helper@latest
```
## pbpaste
The [examples](#examples) use the macOS program `pbpaste` to paste content from the clipboard to pipe into `fabric` as the input. `pbpaste` is not available on Windows or Linux, but there are alternatives.
@@ -540,16 +748,16 @@ alias pbpaste='xclip -selection clipboard -o'
## Web Interface
Fabric now includes a built-in web interface that provides a GUI alternative to the command-line interface and an out-of-the-box website for those who want to get started with web development or blogging.
You can use this app as a GUI interface for Fabric, a ready to go blog-site, or a website template for your own projects.
The `web/src/lib/content` directory includes starter `.obsidian/` and `templates/` directories, allowing you to open up the `web/src/lib/content/` directory as an [Obsidian.md](https://obsidian.md) vault. You can place your posts in the posts directory when you're ready to publish.
### Installing
The GUI can be installed by navigating to the `web` directory and using `npm install`, `pnpm install`, or your favorite package manager. Then simply run the development server to start the app.
_You will need to run fabric in a separate terminal with the `fabric --serve` command._
**From the fabric project `web/` directory:**
@@ -569,7 +777,10 @@ To run the Streamlit user interface:
```bash
# Install required dependencies
pip install streamlit pandas matplotlib seaborn numpy python-dotenv
pip install -r requirements.txt
# Or manually install dependencies
pip install streamlit pandas matplotlib seaborn numpy python-dotenv pyperclip
# Run the Streamlit app
streamlit run streamlit.py
@@ -582,6 +793,14 @@ The Streamlit UI provides a user-friendly interface for:
- Creating and editing patterns
- Analyzing pattern results
#### Clipboard Support
The Streamlit UI supports clipboard operations across different platforms:
- **macOS**: Uses `pbcopy` and `pbpaste` (built-in)
- **Windows**: Uses `pyperclip` library (install with `pip install pyperclip`)
- **Linux**: Uses `xclip` (install with `sudo apt-get install xclip` or equivalent for your Linux distribution)
## Meta
> [!NOTE]
@@ -598,10 +817,18 @@ The Streamlit UI provides a user-friendly interface for:
### Primary contributors
<a href="https://github.com/danielmiessler"><img src="https://avatars.githubusercontent.com/u/50654?v=4" title="Daniel Miessler" width="50" height="50"></a>
<a href="https://github.com/xssdoctor"><img src="https://avatars.githubusercontent.com/u/9218431?v=4" title="Jonathan Dunn" width="50" height="50"></a>
<a href="https://github.com/sbehrens"><img src="https://avatars.githubusercontent.com/u/688589?v=4" title="Scott Behrens" width="50" height="50"></a>
<a href="https://github.com/agu3rra"><img src="https://avatars.githubusercontent.com/u/10410523?v=4" title="Andre Guerra" width="50" height="50"></a>
<a href="https://github.com/danielmiessler"><img src="https://avatars.githubusercontent.com/u/50654?v=4" title="Daniel Miessler" width="50" height="50" alt="Daniel Miessler"></a>
<a href="https://github.com/xssdoctor"><img src="https://avatars.githubusercontent.com/u/9218431?v=4" title="Jonathan Dunn" width="50" height="50" alt="Jonathan Dunn"></a>
<a href="https://github.com/sbehrens"><img src="https://avatars.githubusercontent.com/u/688589?v=4" title="Scott Behrens" width="50" height="50" alt="Scott Behrens"></a>
<a href="https://github.com/agu3rra"><img src="https://avatars.githubusercontent.com/u/10410523?v=4" title="Andre Guerra" width="50" height="50" alt="Andre Guerra"></a>
### Contributors
<a href="https://github.com/danielmiessler/fabric/graphs/contributors">
<img src="https://contrib.rocks/image?repo=danielmiessler/fabric" alt="contrib.rocks" />
</a>
Made with [contrib.rocks](https://contrib.rocks).
`fabric` was created by <a href="https://danielmiessler.com/subscribe" target="_blank">Daniel Miessler</a> in January of 2024.
<br /><br />
View File
@@ -1,341 +0,0 @@
package cli
import (
"encoding/json"
"fmt"
"os"
"path/filepath"
"strconv"
"strings"
"github.com/danielmiessler/fabric/plugins/tools/youtube"
"github.com/danielmiessler/fabric/common"
"github.com/danielmiessler/fabric/core"
"github.com/danielmiessler/fabric/plugins/ai"
"github.com/danielmiessler/fabric/plugins/db/fsdb"
"github.com/danielmiessler/fabric/plugins/tools/converter"
"github.com/danielmiessler/fabric/restapi"
)
// Cli Controls the cli. It takes in the flags and runs the appropriate functions
func Cli(version string) (err error) {
var currentFlags *Flags
if currentFlags, err = Init(); err != nil {
return
}
if currentFlags.Version {
fmt.Println(version)
return
}
var homedir string
if homedir, err = os.UserHomeDir(); err != nil {
return
}
fabricDb := fsdb.NewDb(filepath.Join(homedir, ".config/fabric"))
if err = fabricDb.Configure(); err != nil {
if !currentFlags.Setup {
println(err.Error())
currentFlags.Setup = true
}
}
var registry *core.PluginRegistry
if registry, err = core.NewPluginRegistry(fabricDb); err != nil {
return
}
// if the setup flag is set, run the setup function
if currentFlags.Setup {
err = registry.Setup()
return
}
if currentFlags.Serve {
registry.ConfigureVendors()
err = restapi.Serve(registry, currentFlags.ServeAddress)
return
}
if currentFlags.ServeOllama {
registry.ConfigureVendors()
err = restapi.ServeOllama(registry, currentFlags.ServeAddress, version)
return
}
if currentFlags.UpdatePatterns {
err = registry.PatternsLoader.PopulateDB()
return
}
if currentFlags.ChangeDefaultModel {
err = registry.Defaults.Setup()
return
}
if currentFlags.LatestPatterns != "0" {
var parsedToInt int
if parsedToInt, err = strconv.Atoi(currentFlags.LatestPatterns); err != nil {
return
}
if err = fabricDb.Patterns.PrintLatestPatterns(parsedToInt); err != nil {
return
}
return
}
if currentFlags.ListPatterns {
err = fabricDb.Patterns.ListNames()
return
}
if currentFlags.ListAllModels {
var models *ai.VendorsModels
if models, err = registry.VendorManager.GetModels(); err != nil {
return
}
models.Print()
return
}
if currentFlags.ListAllContexts {
err = fabricDb.Contexts.ListNames()
return
}
if currentFlags.ListAllSessions {
err = fabricDb.Sessions.ListNames()
return
}
if currentFlags.WipeContext != "" {
err = fabricDb.Contexts.Delete(currentFlags.WipeContext)
return
}
if currentFlags.WipeSession != "" {
err = fabricDb.Sessions.Delete(currentFlags.WipeSession)
return
}
if currentFlags.PrintSession != "" {
err = fabricDb.Sessions.PrintSession(currentFlags.PrintSession)
return
}
if currentFlags.PrintContext != "" {
err = fabricDb.Contexts.PrintContext(currentFlags.PrintContext)
return
}
if currentFlags.HtmlReadability {
if msg, cleanErr := converter.HtmlReadability(currentFlags.Message); cleanErr != nil {
fmt.Println("use original input, because can't apply html readability", err)
} else {
currentFlags.Message = msg
}
}
if currentFlags.ListExtensions {
err = registry.TemplateExtensions.ListExtensions()
return
}
if currentFlags.AddExtension != "" {
err = registry.TemplateExtensions.RegisterExtension(currentFlags.AddExtension)
return
}
if currentFlags.RemoveExtension != "" {
err = registry.TemplateExtensions.RemoveExtension(currentFlags.RemoveExtension)
return
}
// if the interactive flag is set, run the interactive function
// if currentFlags.Interactive {
// interactive.Interactive()
// }
// if none of the above currentFlags are set, run the initiate chat function
var messageTools string
if currentFlags.YouTube != "" {
if !registry.YouTube.IsConfigured() {
err = fmt.Errorf("YouTube is not configured, please run the setup procedure")
return
}
var videoId string
var playlistId string
if videoId, playlistId, err = registry.YouTube.GetVideoOrPlaylistId(currentFlags.YouTube); err != nil {
return
} else if (videoId == "" || currentFlags.YouTubePlaylist) && playlistId != "" {
if currentFlags.Output != "" {
err = registry.YouTube.FetchAndSavePlaylist(playlistId, currentFlags.Output)
} else {
var videos []*youtube.VideoMeta
if videos, err = registry.YouTube.FetchPlaylistVideos(playlistId); err != nil {
err = fmt.Errorf("error fetching playlist videos: %v", err)
return
}
for _, video := range videos {
var message string
if message, err = processYoutubeVideo(currentFlags, registry, video.Id); err != nil {
return
}
if !currentFlags.IsChatRequest() {
if err = WriteOutput(message, fmt.Sprintf("%v.md", video.TitleNormalized)); err != nil {
return
}
} else {
messageTools = AppendMessage(messageTools, message)
}
}
}
return
}
if messageTools, err = processYoutubeVideo(currentFlags, registry, videoId); err != nil {
return
}
if !currentFlags.IsChatRequest() {
err = currentFlags.WriteOutput(messageTools)
return
}
}
if (currentFlags.ScrapeURL != "" || currentFlags.ScrapeQuestion != "") && registry.Jina.IsConfigured() {
// Check if the scrape_url flag is set and call ScrapeURL
if currentFlags.ScrapeURL != "" {
var website string
if website, err = registry.Jina.ScrapeURL(currentFlags.ScrapeURL); err != nil {
return
}
messageTools = AppendMessage(messageTools, website)
}
// Check if the scrape_question flag is set and call ScrapeQuestion
if currentFlags.ScrapeQuestion != "" {
var website string
if website, err = registry.Jina.ScrapeQuestion(currentFlags.ScrapeQuestion); err != nil {
return
}
messageTools = AppendMessage(messageTools, website)
}
if !currentFlags.IsChatRequest() {
err = currentFlags.WriteOutput(messageTools)
return
}
}
if messageTools != "" {
currentFlags.AppendMessage(messageTools)
}
var chatter *core.Chatter
if chatter, err = registry.GetChatter(currentFlags.Model, currentFlags.ModelContextLength, currentFlags.Stream, currentFlags.DryRun); err != nil {
return
}
var session *fsdb.Session
var chatReq *common.ChatRequest
if chatReq, err = currentFlags.BuildChatRequest(strings.Join(os.Args[1:], " ")); err != nil {
return
}
if chatReq.Language == "" {
chatReq.Language = registry.Language.DefaultLanguage.Value
}
if session, err = chatter.Send(chatReq, currentFlags.BuildChatOptions()); err != nil {
return
}
result := session.GetLastMessage().Content
if !currentFlags.Stream {
// print the result if it was not streamed already
fmt.Println(result)
}
// if the copy flag is set, copy the message to the clipboard
if currentFlags.Copy {
if err = CopyToClipboard(result); err != nil {
return
}
}
// if the output flag is set, create an output file
if currentFlags.Output != "" {
if currentFlags.OutputSession {
sessionAsString := session.String()
err = CreateOutputFile(sessionAsString, currentFlags.Output)
} else {
err = CreateOutputFile(result, currentFlags.Output)
}
}
return
}
func processYoutubeVideo(
flags *Flags, registry *core.PluginRegistry, videoId string) (message string, err error) {
if (!flags.YouTubeComments && !flags.YouTubeMetadata) || flags.YouTubeTranscript || flags.YouTubeTranscriptWithTimestamps {
var transcript string
var language = "en"
if flags.Language != "" || registry.Language.DefaultLanguage.Value != "" {
if flags.Language != "" {
language = flags.Language
} else {
language = registry.Language.DefaultLanguage.Value
}
}
if flags.YouTubeTranscriptWithTimestamps {
if transcript, err = registry.YouTube.GrabTranscriptWithTimestamps(videoId, language); err != nil {
return
}
} else {
if transcript, err = registry.YouTube.GrabTranscript(videoId, language); err != nil {
return
}
}
message = AppendMessage(message, transcript)
}
if flags.YouTubeComments {
var comments []string
if comments, err = registry.YouTube.GrabComments(videoId); err != nil {
return
}
commentsString := strings.Join(comments, "\n")
message = AppendMessage(message, commentsString)
}
if flags.YouTubeMetadata {
var metadata *youtube.VideoMetadata
if metadata, err = registry.YouTube.GrabMetadata(videoId); err != nil {
return
}
metadataJson, _ := json.MarshalIndent(metadata, "", " ")
message = AppendMessage(message, string(metadataJson))
}
return
}
func WriteOutput(message string, outputFile string) (err error) {
fmt.Println(message)
if outputFile != "" {
err = CreateOutputFile(message, outputFile)
}
return
}
View File
@@ -1,166 +0,0 @@
package cli
import (
"bytes"
"io"
"os"
"strings"
"testing"
"github.com/danielmiessler/fabric/common"
"github.com/stretchr/testify/assert"
)
func TestInit(t *testing.T) {
args := []string{"--copy"}
expectedFlags := &Flags{Copy: true}
oldArgs := os.Args
defer func() { os.Args = oldArgs }()
os.Args = append([]string{"cmd"}, args...)
flags, err := Init()
assert.NoError(t, err)
assert.Equal(t, expectedFlags.Copy, flags.Copy)
}
func TestReadStdin(t *testing.T) {
input := "test input"
stdin := io.NopCloser(strings.NewReader(input))
// No need to cast stdin to *os.File, pass it as io.ReadCloser directly
content, err := ReadStdin(stdin)
if err != nil {
t.Fatalf("unexpected error: %v", err)
}
if content != input {
t.Fatalf("expected %q, got %q", input, content)
}
}
// ReadStdin function assuming it's part of `cli` package
func ReadStdin(reader io.ReadCloser) (string, error) {
defer reader.Close()
buf := new(bytes.Buffer)
_, err := buf.ReadFrom(reader)
if err != nil {
return "", err
}
return buf.String(), nil
}
func TestBuildChatOptions(t *testing.T) {
flags := &Flags{
Temperature: 0.8,
TopP: 0.9,
PresencePenalty: 0.1,
FrequencyPenalty: 0.2,
Seed: 1,
}
expectedOptions := &common.ChatOptions{
Temperature: 0.8,
TopP: 0.9,
PresencePenalty: 0.1,
FrequencyPenalty: 0.2,
Raw: false,
Seed: 1,
}
options := flags.BuildChatOptions()
assert.Equal(t, expectedOptions, options)
}
func TestBuildChatOptionsDefaultSeed(t *testing.T) {
flags := &Flags{
Temperature: 0.8,
TopP: 0.9,
PresencePenalty: 0.1,
FrequencyPenalty: 0.2,
}
expectedOptions := &common.ChatOptions{
Temperature: 0.8,
TopP: 0.9,
PresencePenalty: 0.1,
FrequencyPenalty: 0.2,
Raw: false,
Seed: 0,
}
options := flags.BuildChatOptions()
assert.Equal(t, expectedOptions, options)
}
func TestInitWithYAMLConfig(t *testing.T) {
// Create a temporary YAML config file
configContent := `
temperature: 0.9
model: gpt-4
pattern: analyze
stream: true
`
tmpfile, err := os.CreateTemp("", "config.*.yaml")
if err != nil {
t.Fatal(err)
}
defer os.Remove(tmpfile.Name())
if _, err := tmpfile.Write([]byte(configContent)); err != nil {
t.Fatal(err)
}
if err := tmpfile.Close(); err != nil {
t.Fatal(err)
}
// Test 1: Basic YAML loading
t.Run("Load YAML config", func(t *testing.T) {
oldArgs := os.Args
defer func() { os.Args = oldArgs }()
os.Args = []string{"cmd", "--config", tmpfile.Name()}
flags, err := Init()
assert.NoError(t, err)
assert.Equal(t, 0.9, flags.Temperature)
assert.Equal(t, "gpt-4", flags.Model)
assert.Equal(t, "analyze", flags.Pattern)
assert.True(t, flags.Stream)
})
// Test 2: CLI overrides YAML
t.Run("CLI overrides YAML", func(t *testing.T) {
oldArgs := os.Args
defer func() { os.Args = oldArgs }()
os.Args = []string{"cmd", "--config", tmpfile.Name(), "--temperature", "0.7", "--model", "gpt-3.5-turbo"}
flags, err := Init()
assert.NoError(t, err)
assert.Equal(t, 0.7, flags.Temperature)
assert.Equal(t, "gpt-3.5-turbo", flags.Model)
assert.Equal(t, "analyze", flags.Pattern) // unchanged from YAML
assert.True(t, flags.Stream) // unchanged from YAML
})
// Test 3: Invalid YAML config
t.Run("Invalid YAML config", func(t *testing.T) {
badConfig := `
temperature: "not a float"
model: 123 # should be string
`
badfile, err := os.CreateTemp("", "bad-config.*.yaml")
if err != nil {
t.Fatal(err)
}
defer os.Remove(badfile.Name())
if _, err := badfile.Write([]byte(badConfig)); err != nil {
t.Fatal(err)
}
if err := badfile.Close(); err != nil {
t.Fatal(err)
}
oldArgs := os.Args
defer func() { os.Args = oldArgs }()
os.Args = []string{"cmd", "--config", badfile.Name()}
_, err = Init()
assert.Error(t, err)
})
}
181
cmd/code_helper/code.go Normal file
View File
@@ -0,0 +1,181 @@
package main
import (
"encoding/json"
"fmt"
"os"
"path/filepath"
"strings"
)
// FileItem represents a file in the project
type FileItem struct {
Type string `json:"type"`
Name string `json:"name"`
Content string `json:"content,omitempty"`
Contents []FileItem `json:"contents,omitempty"`
}
// ProjectData represents the entire project structure with instructions
type ProjectData struct {
Files []FileItem `json:"files"`
Instructions struct {
Type string `json:"type"`
Name string `json:"name"`
Details string `json:"details"`
} `json:"instructions"`
Report struct {
Type string `json:"type"`
Directories int `json:"directories"`
Files int `json:"files"`
} `json:"report"`
}
// ScanDirectory scans a directory and returns a JSON representation of its structure
func ScanDirectory(rootDir string, maxDepth int, instructions string, ignoreList []string) ([]byte, error) {
// Count totals for report
dirCount := 1
fileCount := 0
// Create root directory item
rootItem := FileItem{
Type: "directory",
Name: rootDir,
Contents: []FileItem{},
}
// Walk through the directory
err := filepath.Walk(rootDir, func(path string, info os.FileInfo, err error) error {
if err != nil {
return err
}
// Skip .git directory
if strings.Contains(path, ".git") {
if info.IsDir() {
return filepath.SkipDir
}
return nil
}
// Check if path matches any ignore pattern
relPath, err := filepath.Rel(rootDir, path)
if err != nil {
return err
}
for _, pattern := range ignoreList {
if strings.Contains(relPath, pattern) {
if info.IsDir() {
return filepath.SkipDir
}
return nil
}
}
if relPath == "." {
return nil
}
depth := len(strings.Split(relPath, string(filepath.Separator)))
if depth > maxDepth {
if info.IsDir() {
return filepath.SkipDir
}
return nil
}
// Create directory structure
if info.IsDir() {
dirCount++
} else {
fileCount++
// Read file content
content, err := os.ReadFile(path)
if err != nil {
return fmt.Errorf("error reading file %s: %v", path, err)
}
// Add file to appropriate parent directory
addFileToDirectory(&rootItem, relPath, string(content), rootDir)
}
return nil
})
if err != nil {
return nil, err
}
// Create final data structure
var data []interface{}
data = append(data, rootItem)
// Add report
reportItem := map[string]interface{}{
"type": "report",
"directories": dirCount,
"files": fileCount,
}
data = append(data, reportItem)
// Add instructions
instructionsItem := map[string]interface{}{
"type": "instructions",
"name": "code_change_instructions",
"details": instructions,
}
data = append(data, instructionsItem)
return json.MarshalIndent(data, "", " ")
}
// addFileToDirectory adds a file to the correct directory in the structure
func addFileToDirectory(root *FileItem, path, content, rootDir string) {
parts := strings.Split(path, string(filepath.Separator))
// If this is a file at the root level
if len(parts) == 1 {
root.Contents = append(root.Contents, FileItem{
Type: "file",
Name: parts[0],
Content: content,
})
return
}
// Otherwise, find or create the directory path
current := root
for i := 0; i < len(parts)-1; i++ {
dirName := parts[i]
found := false
// Look for existing directory
for j, item := range current.Contents {
if item.Type == "directory" && item.Name == dirName {
current = &current.Contents[j]
found = true
break
}
}
// Create directory if not found
if !found {
newDir := FileItem{
Type: "directory",
Name: dirName,
Contents: []FileItem{},
}
current.Contents = append(current.Contents, newDir)
current = &current.Contents[len(current.Contents)-1]
}
}
// Add the file to the current directory
current.Contents = append(current.Contents, FileItem{
Type: "file",
Name: parts[len(parts)-1],
Content: content,
})
}
65
cmd/code_helper/main.go Normal file
View File
@@ -0,0 +1,65 @@
package main
import (
"flag"
"fmt"
"os"
"strings"
)
func main() {
// Command line flags
maxDepth := flag.Int("depth", 3, "Maximum directory depth to scan")
ignorePatterns := flag.String("ignore", ".git,node_modules,vendor", "Comma-separated patterns to ignore")
outputFile := flag.String("out", "", "Output file (default: stdout)")
flag.Usage = printUsage
flag.Parse()
// Require exactly two positional arguments: directory and instructions
if flag.NArg() != 2 {
printUsage()
os.Exit(1)
}
directory := flag.Arg(0)
instructions := flag.Arg(1)
// Validate directory
if info, err := os.Stat(directory); err != nil || !info.IsDir() {
fmt.Fprintf(os.Stderr, "Error: Directory '%s' does not exist or is not a directory\n", directory)
os.Exit(1)
}
// Parse ignore patterns and scan directory
jsonData, err := ScanDirectory(directory, *maxDepth, instructions, strings.Split(*ignorePatterns, ","))
if err != nil {
fmt.Fprintf(os.Stderr, "Error scanning directory: %v\n", err)
os.Exit(1)
}
// Output result
if *outputFile != "" {
if err := os.WriteFile(*outputFile, jsonData, 0644); err != nil {
fmt.Fprintf(os.Stderr, "Error writing file: %v\n", err)
os.Exit(1)
}
} else {
fmt.Print(string(jsonData))
}
}
func printUsage() {
fmt.Fprintf(os.Stderr, `code_helper - Code project scanner for use with Fabric AI
Usage:
code_helper [options] <directory> <instructions>
Examples:
code_helper . "Add input validation to all user inputs"
code_helper -depth 4 ./my-project "Implement error handling"
code_helper -out project.json ./src "Fix security issues"
Options:
`)
flag.PrintDefaults()
}
View File
@@ -2,10 +2,11 @@ package main
import (
"fmt"
"github.com/jessevdk/go-flags"
"os"
"github.com/danielmiessler/fabric/cli"
"github.com/jessevdk/go-flags"
"github.com/danielmiessler/fabric/internal/cli"
)
func main() {
3
cmd/fabric/version.go Normal file
View File
@@ -0,0 +1,3 @@
package main
var version = "v1.4.254"
View File
@@ -0,0 +1,151 @@
# Product Requirements Document: Changelog Generator
## Overview
The Changelog Generator is a high-performance Go tool that automatically generates comprehensive changelogs from git history and GitHub pull requests.
## Goals
1. **Performance**: Very fast; efficient enough to be used in CI/CD as part of the release process
2. **Completeness**: Capture ALL commits including unreleased changes
3. **Efficiency**: Minimize API calls through caching and batch operations
4. **Reliability**: Handle errors gracefully with proper Go error handling
5. **Simplicity**: Single binary with no runtime dependencies
## Key Features
### 1. One-Pass Git History Algorithm
- Walk git history once from newest to oldest
- Start with "Unreleased" bucket for all new commits
- Switch buckets when encountering version commits
- No need to calculate ranges between versions
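A condensed sketch of this bucketing idea, assuming commits are visited newest to oldest; the `commit` type, the regex, and the sample messages are illustrative rather than the tool's actual implementation.

```go
package main

import (
	"fmt"
	"regexp"
)

type commit struct {
	SHA     string
	Message string
}

// Version bump commits follow the "Update version to vX.Y.Z" convention.
var versionRe = regexp.MustCompile(`^Update version to (v\d+\.\d+\.\d+)`)

// bucketByVersion walks commits once, newest to oldest, starting in the
// "Unreleased" bucket and switching buckets at each version-bump commit.
func bucketByVersion(commits []commit) map[string][]commit {
	buckets := map[string][]commit{}
	current := "Unreleased"
	for _, c := range commits {
		if m := versionRe.FindStringSubmatch(c.Message); m != nil {
			current = m[1]
			continue
		}
		buckets[current] = append(buckets[current], c)
	}
	return buckets
}

func main() {
	history := []commit{ // example data, newest first
		{SHA: "aaa", Message: "feat: add a new pattern"},
		{SHA: "bbb", Message: "Update version to v1.4.244 and commit"},
		{SHA: "ccc", Message: "docs: fix typo in README"},
	}
	for version, cs := range bucketByVersion(history) {
		fmt.Println(version, len(cs), "commits")
	}
}
```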
### 2. Native Library Integration
- **go-git**: Pure Go git implementation (no git binary required)
- **go-github**: Official GitHub Go client library
- Benefits: Type safety, better error handling, no subprocess overhead
### 3. Smart Caching System
- SQLite-based persistent cache
- Stores: versions, commits, PR details, last processed commit
- Enables incremental updates on subsequent runs
- Instant changelog regeneration from cache
### 4. Concurrent Processing
- Parallel GitHub API calls (up to 10 concurrent)
- Batch PR fetching with deduplication
- Rate limiting awareness
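The bounded parallelism can be sketched with a simple semaphore channel; `fetchPR` below is only a placeholder for a real GitHub API call, and the limit of 10 mirrors the figure above.

```go
package main

import (
	"fmt"
	"sync"
)

// fetchPR stands in for a real GitHub API call.
func fetchPR(number int) string {
	return fmt.Sprintf("PR #%d details", number)
}

func main() {
	numbers := []int{1598, 1601}   // example PR numbers
	sem := make(chan struct{}, 10) // at most 10 concurrent fetches
	var wg sync.WaitGroup
	results := make([]string, len(numbers))

	for i, n := range numbers {
		wg.Add(1)
		go func(i, n int) {
			defer wg.Done()
			sem <- struct{}{}        // acquire a slot
			defer func() { <-sem }() // release it
			results[i] = fetchPR(n)
		}(i, n)
	}
	wg.Wait()
	fmt.Println(results)
}
```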
### 5. Enhanced Output
- "Unreleased" section for commits since last version
- Clean markdown formatting
- Configurable version limiting
- Direct commit tracking (non-PR commits)
## Technical Architecture
### Module Structure
```text
cmd/generate_changelog/
├── main.go # CLI entry point with cobra
├── internal/
│ ├── git/ # Git operations (go-git)
│ ├── github/ # GitHub API client (go-github)
│ ├── cache/ # SQLite caching layer
│ ├── changelog/ # Core generation logic
│ └── config/ # Configuration management
└── changelog.db # SQLite cache (generated)
```
### Data Flow
1. Git walker collects all commits in one pass
2. Commits bucketed by version (starting with "Unreleased")
3. PR numbers extracted from merge commits
4. GitHub API batch-fetches PR details
5. Cache stores everything for future runs
6. Formatter generates markdown output
### Cache Schema
- **metadata**: Last processed commit SHA
- **versions**: Version names, dates, commit SHAs
- **commits**: Full commit details with version associations
- **pull_requests**: PR details including commits
- Indexes on version and PR number for fast lookups
### Features
- **Unreleased section**: Shows all new commits
- **Better caching**: SQLite vs JSON, incremental updates
- **Smarter deduplication**: Removes consecutive duplicate commits
- **Direct commit tracking**: Shows non-PR commits
### Reliability
- **No subprocess errors**: Direct library usage
- **Type safety**: Compile-time checking
- **Better error handling**: Go's explicit error returns
### Deployment
- **Single binary**: No Python/pip/dependencies
- **Cross-platform**: Compile for any OS/architecture
- **No git CLI required**: Uses go-git library
## Configuration
### Environment Variables
- `GITHUB_TOKEN`: GitHub API authentication token
### Command Line Flags
- `--repo, -r`: Repository path (default: current directory)
- `--output, -o`: Output file (default: stdout)
- `--limit, -l`: Version limit (default: all)
- `--version, -v`: Target specific version
- `--save-data`: Export debug JSON
- `--cache`: Cache file location
- `--no-cache`: Disable caching
- `--rebuild-cache`: Force cache rebuild
- `--token`: GitHub token override
## Success Metrics
1. **Performance**: Generate full changelog in <5 seconds for fabric repo
2. **Completeness**: 100% commit coverage including unreleased
3. **Accuracy**: Correct PR associations and change extraction
4. **Reliability**: Handle network failures gracefully
5. **Usability**: Simple CLI with sensible defaults
## Future Enhancements
1. **Multiple output formats**: JSON, HTML, etc.
2. **Custom version patterns**: Configurable regex
3. **Change categorization**: feat/fix/docs auto-grouping
4. **Conventional commits**: Full support for semantic versioning
5. **GitLab/Bitbucket**: Support other platforms
6. **Web UI**: Interactive changelog browser
7. **Incremental updates**: Update existing CHANGELOG.md file
8. **Breaking change detection**: Highlight breaking changes
## Implementation Status
- ✅ Core architecture and modules
- ✅ One-pass git walking algorithm
- ✅ GitHub API integration with concurrency
- ✅ SQLite caching system
- ✅ Changelog formatting and generation
- ✅ CLI with all planned flags
- ✅ Documentation (README and PRD)
## Conclusion
This Go implementation provides a modern, efficient, and feature-rich changelog generator.
View File
@@ -0,0 +1,263 @@
# Changelog Generator
A high-performance changelog generator for Git repositories that automatically creates comprehensive, well-formatted changelogs from your git history and GitHub pull requests.
## Features
- **One-pass git history walking**: Efficiently processes entire repository history in a single pass
- **Automatic PR detection**: Extracts pull request information from merge commits
- **GitHub API integration**: Fetches detailed PR information including commits, authors, and descriptions
- **Smart caching**: SQLite-based caching for instant incremental updates
- **Unreleased changes**: Tracks all commits since the last release
- **Concurrent processing**: Parallel GitHub API calls for improved performance
- **Flexible output**: Generate complete changelogs or target specific versions
- **GraphQL optimization**: Ultra-fast PR fetching using GitHub GraphQL API (~5-10 calls vs 1000s)
- **Intelligent sync**: Automatically syncs new PRs every 24 hours or when missing PRs are detected
- **AI-powered summaries**: Optional Fabric integration for enhanced changelog summaries
- **Advanced caching**: Content-based change detection for AI summaries with hash comparison
- **Author type detection**: Distinguishes between users, bots, and organizations
- **Lightning-fast incremental updates**: SHA→PR mapping for instant git operations
## Installation
```bash
go install github.com/danielmiessler/fabric/cmd/generate_changelog@latest
```
## Usage
### Basic usage (generate complete changelog)
```bash
generate_changelog
```
### Save to file
```bash
generate_changelog -o CHANGELOG.md
```
### Generate for specific version
```bash
generate_changelog -v v1.4.244
```
### Limit to recent versions
```bash
generate_changelog -l 10
```
### Using GitHub token for private repos or higher rate limits
```bash
export GITHUB_TOKEN=your_token_here
generate_changelog
# Or pass directly
generate_changelog --token your_token_here
```
### AI-enhanced summaries
```bash
# Enable AI summaries using Fabric
generate_changelog --ai-summarize
# Use custom model for AI summaries
FABRIC_CHANGELOG_SUMMARIZE_MODEL=claude-opus-4 generate_changelog --ai-summarize
```
### Cache management
```bash
# Rebuild cache from scratch
generate_changelog --rebuild-cache
# Force a full PR sync from GitHub
generate_changelog --force-pr-sync
# Disable cache usage
generate_changelog --no-cache
# Use custom cache location
generate_changelog --cache /path/to/cache.db
```
## Command Line Options
| Flag | Short | Description | Default |
|------|-------|-------------|---------|
| `--repo` | `-r` | Repository path | `.` (current directory) |
| `--output` | `-o` | Output file | stdout |
| `--limit` | `-l` | Limit number of versions | 0 (all) |
| `--version` | `-v` | Generate for specific version | |
| `--save-data` | | Save version data to JSON | false |
| `--cache` | | Cache database file | `./cmd/generate_changelog/changelog.db` |
| `--no-cache` | | Disable cache usage | false |
| `--rebuild-cache` | | Rebuild cache from scratch | false |
| `--force-pr-sync` | | Force a full PR sync from GitHub | false |
| `--token` | | GitHub API token | `$GITHUB_TOKEN` |
| `--ai-summarize` | | Generate AI-enhanced summaries using Fabric | false |
## Output Format
The generated changelog follows this structure:
```markdown
# Changelog
## Unreleased
### PR [#1601](url) by [author](profile): PR Title
- Change description 1
- Change description 2
### Direct commits
- Direct commit message 1
- Direct commit message 2
## v1.4.244 (2025-07-09)
### PR [#1598](url) by [author](profile): PR Title
- Change description
...
```
## How It Works
1. **Git History Walking**: The tool walks through your git history from newest to oldest commits
2. **Version Detection**: Identifies version bump commits (pattern: "Update version to vX.Y.Z")
3. **PR Extraction**: Detects merge commits and extracts PR numbers
4. **GitHub API Calls**: Fetches detailed PR information in parallel batches
5. **Change Extraction**: Extracts changes from PR commit messages or PR body
6. **Formatting**: Generates clean, organized markdown output
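Steps 2 and 3 come down to matching commit messages; here is a sketch of what that detection might look like, with regular expressions assumed from the conventions described above rather than taken from the tool's source.

```go
package main

import (
	"fmt"
	"regexp"
	"strconv"
)

var (
	// Version bump commits follow "Update version to vX.Y.Z".
	versionRe = regexp.MustCompile(`^Update version to (v\d+\.\d+\.\d+)`)
	// GitHub merge commits typically begin with "Merge pull request #NNN".
	mergeRe = regexp.MustCompile(`^Merge pull request #(\d+)`)
)

func classify(message string) string {
	if m := versionRe.FindStringSubmatch(message); m != nil {
		return "version bump: " + m[1]
	}
	if m := mergeRe.FindStringSubmatch(message); m != nil {
		n, _ := strconv.Atoi(m[1])
		return fmt.Sprintf("merge commit for PR #%d", n)
	}
	return "direct commit"
}

func main() {
	fmt.Println(classify("Update version to v1.4.244 and commit"))
	fmt.Println(classify("Merge pull request #1598 from example/branch"))
	fmt.Println(classify("docs: fix typo in README"))
}
```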
## Performance
- **Native Go libraries**: Uses go-git and go-github for maximum performance
- **Concurrent API calls**: Processes up to 10 GitHub API requests in parallel
- **Smart caching**: SQLite cache eliminates redundant API calls
- **Incremental updates**: Only processes new commits on subsequent runs
- **GraphQL optimization**: Uses GitHub GraphQL API to fetch all PR data in ~5-10 calls
- **AI-powered summaries**: Optional Fabric integration with intelligent caching
- **Content-based change detection**: AI summaries only regenerated when content changes
- **Lightning-fast git operations**: SHA→PR mapping stored in database for instant lookups
### Major Optimization: GraphQL + Advanced Caching
The tool has been optimized to drastically reduce GitHub API calls and improve performance:
**Previous approach**: Individual API calls for each PR (2 API calls per PR)
- For a repo with 500 PRs: 1,000 API calls
**Current approach**: GraphQL batch fetching with intelligent caching
- For a repo with 500 PRs: ~5-10 GraphQL calls (initial fetch) + 0 calls (subsequent runs with cache)
- **99%+ reduction in API calls after initial run!**
The optimization includes:
1. **GraphQL Batch Fetch**: Uses GitHub's GraphQL API to fetch all merged PRs with commits in minimal calls
2. **Smart Caching**: Stores complete PR data, commits, and SHA mappings in SQLite
3. **Incremental Sync**: Only fetches PRs merged after the last sync timestamp
4. **Automatic Refresh**: PRs are synced every 24 hours or when missing PRs are detected
5. **AI Summary Caching**: Content-based change detection prevents unnecessary AI regeneration
6. **Fallback Support**: If GraphQL fails, falls back to REST API batch fetching
7. **Lightning Git Operations**: Pre-computed SHA→PR mappings for instant commit association
## Requirements
- Go 1.24+ (for installation from source)
- Git repository
- GitHub token (optional, for private repos or higher rate limits)
- Fabric CLI (optional, for AI-enhanced summaries)
## Authentication
The tool supports GitHub authentication via:
1. Environment variable: `export GITHUB_TOKEN=your_token`
2. Command line flag: `--token your_token`
3. `.env` file in the same directory as the binary
### Environment File Support
Create a `.env` file next to the `generate_changelog` binary:
```bash
GITHUB_TOKEN=your_github_token_here
FABRIC_CHANGELOG_SUMMARIZE_MODEL=claude-sonnet-4-20250514
```
The tool automatically loads `.env` files for convenient configuration management.
Without authentication, the tool is limited to 60 GitHub API requests per hour.
## Caching
The SQLite cache stores:
- Version information and commit associations
- Pull request details (title, body, commits, authors)
- Last processed commit SHA for incremental updates
- Last PR sync timestamp for intelligent refresh
- AI summaries with content-based change detection
- SHA→PR mappings for lightning-fast git operations
Cache benefits:
- Instant changelog regeneration
- Drastically reduced GitHub API usage (99%+ reduction after initial run)
- Offline changelog generation (after initial cache build)
- Automatic PR data refresh every 24 hours
- Batch database transactions for better performance
- Content-aware AI summary regeneration
## AI-Enhanced Summaries
The tool can generate AI-powered summaries using Fabric for more polished, professional changelogs:
```bash
# Enable AI summarization
generate_changelog --ai-summarize
# Custom model (default: claude-sonnet-4-20250514)
FABRIC_CHANGELOG_SUMMARIZE_MODEL=claude-opus-4 generate_changelog --ai-summarize
```
### AI Summary Features
- **Content-based change detection**: AI summaries are only regenerated when version content changes
- **Intelligent caching**: Preserves existing summaries and only processes changed versions
- **Content hash comparison**: Uses SHA256 hashing to detect when "Unreleased" content changes
- **Automatic fallback**: Falls back to raw content if AI processing fails
- **Error detection**: Identifies and handles AI processing errors gracefully
- **Minimum content filtering**: Skips AI processing for very brief content (< 256 characters)
### AI Model Configuration
Set the model via environment variable:
```bash
export FABRIC_CHANGELOG_SUMMARIZE_MODEL=claude-opus-4
# or
export FABRIC_CHANGELOG_SUMMARIZE_MODEL=gpt-4
```
AI summaries are cached and only regenerated when:
- Version content changes (detected via hash comparison)
- No existing AI summary exists for the version
- Force rebuild is requested
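The content-based change detection can be sketched as follows. The `hashStore` interface stands in for the cache helpers (`GetUnreleasedContentHash` / `SetUnreleasedContentHash`) that appear later in this diff, and `summarize` is a placeholder for the Fabric invocation.

```go
package main

import (
	"crypto/sha256"
	"errors"
	"fmt"
)

// hashStore captures the two cache helpers this sketch relies on.
type hashStore interface {
	GetUnreleasedContentHash() (string, error)
	SetUnreleasedContentHash(hash string) error
}

// maybeSummarize regenerates the AI summary only when the content hash changes;
// on AI failure it falls back to the raw content.
func maybeSummarize(store hashStore, content, cached string, summarize func(string) (string, error)) (string, error) {
	hash := fmt.Sprintf("%x", sha256.Sum256([]byte(content)))
	if prev, err := store.GetUnreleasedContentHash(); err == nil && prev == hash && cached != "" {
		return cached, nil // content unchanged: reuse the cached summary
	}
	summary, err := summarize(content)
	if err != nil {
		return content, nil // automatic fallback to raw content
	}
	if err := store.SetUnreleasedContentHash(hash); err != nil {
		return summary, err
	}
	return summary, nil
}

// memStore is a throwaway in-memory implementation for the example.
type memStore struct{ hash string }

func (m *memStore) GetUnreleasedContentHash() (string, error) {
	if m.hash == "" {
		return "", errors.New("no content hash found")
	}
	return m.hash, nil
}

func (m *memStore) SetUnreleasedContentHash(h string) error { m.hash = h; return nil }

func main() {
	store := &memStore{}
	summarize := func(s string) (string, error) { return "Summary: " + s, nil }
	out, _ := maybeSummarize(store, "### Direct commits\n- fix typo", "", summarize)
	fmt.Println(out)
}
```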
## Contributing
This tool is part of the Fabric project. Contributions are welcome!
## License
The MIT License. Same as the Fabric project.
Binary file not shown.
View File
@@ -0,0 +1,448 @@
package cache
import (
"database/sql"
"encoding/json"
"fmt"
"time"
"github.com/danielmiessler/fabric/cmd/generate_changelog/internal/git"
"github.com/danielmiessler/fabric/cmd/generate_changelog/internal/github"
_ "github.com/mattn/go-sqlite3"
)
type Cache struct {
db *sql.DB
}
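// New opens (or creates) the SQLite cache database at dbPath and ensures the cache tables exist.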
func New(dbPath string) (*Cache, error) {
db, err := sql.Open("sqlite3", dbPath)
if err != nil {
return nil, fmt.Errorf("failed to open database: %w", err)
}
cache := &Cache{db: db}
if err := cache.createTables(); err != nil {
return nil, fmt.Errorf("failed to create tables: %w", err)
}
return cache, nil
}
func (c *Cache) Close() error {
return c.db.Close()
}
func (c *Cache) createTables() error {
queries := []string{
`CREATE TABLE IF NOT EXISTS metadata (
key TEXT PRIMARY KEY,
value TEXT NOT NULL,
updated_at DATETIME DEFAULT CURRENT_TIMESTAMP
)`,
`CREATE TABLE IF NOT EXISTS versions (
name TEXT PRIMARY KEY,
date DATETIME,
commit_sha TEXT,
pr_numbers TEXT,
ai_summary TEXT,
created_at DATETIME DEFAULT CURRENT_TIMESTAMP
)`,
`CREATE TABLE IF NOT EXISTS commits (
sha TEXT PRIMARY KEY,
version TEXT NOT NULL,
message TEXT,
author TEXT,
email TEXT,
date DATETIME,
is_merge BOOLEAN,
pr_number INTEGER,
created_at DATETIME DEFAULT CURRENT_TIMESTAMP,
FOREIGN KEY (version) REFERENCES versions(name)
)`,
`CREATE TABLE IF NOT EXISTS pull_requests (
number INTEGER PRIMARY KEY,
title TEXT,
body TEXT,
author TEXT,
author_url TEXT,
author_type TEXT DEFAULT 'user',
url TEXT,
merged_at DATETIME,
merge_commit TEXT,
commits TEXT,
created_at DATETIME DEFAULT CURRENT_TIMESTAMP
)`,
`CREATE INDEX IF NOT EXISTS idx_commits_version ON commits(version)`,
`CREATE INDEX IF NOT EXISTS idx_commits_pr_number ON commits(pr_number)`,
`CREATE TABLE IF NOT EXISTS commit_pr_mapping (
commit_sha TEXT PRIMARY KEY,
pr_number INTEGER NOT NULL,
created_at DATETIME DEFAULT CURRENT_TIMESTAMP,
FOREIGN KEY (pr_number) REFERENCES pull_requests(number)
)`,
`CREATE INDEX IF NOT EXISTS idx_commit_pr_mapping_sha ON commit_pr_mapping(commit_sha)`,
}
for _, query := range queries {
if _, err := c.db.Exec(query); err != nil {
return fmt.Errorf("failed to execute query: %w", err)
}
}
return nil
}
func (c *Cache) GetLastProcessedTag() (string, error) {
var tag string
err := c.db.QueryRow("SELECT value FROM metadata WHERE key = 'last_processed_tag'").Scan(&tag)
if err == sql.ErrNoRows {
return "", nil
}
return tag, err
}
func (c *Cache) SetLastProcessedTag(tag string) error {
_, err := c.db.Exec(`
INSERT OR REPLACE INTO metadata (key, value, updated_at)
VALUES ('last_processed_tag', ?, CURRENT_TIMESTAMP)
`, tag)
return err
}
func (c *Cache) SaveVersion(v *git.Version) error {
prNumbers, _ := json.Marshal(v.PRNumbers)
_, err := c.db.Exec(`
INSERT OR REPLACE INTO versions (name, date, commit_sha, pr_numbers, ai_summary)
VALUES (?, ?, ?, ?, ?)
`, v.Name, v.Date, v.CommitSHA, string(prNumbers), v.AISummary)
return err
}
// UpdateVersionAISummary updates only the AI summary for a specific version
func (c *Cache) UpdateVersionAISummary(versionName, aiSummary string) error {
_, err := c.db.Exec(`
UPDATE versions SET ai_summary = ? WHERE name = ?
`, aiSummary, versionName)
return err
}
func (c *Cache) SaveCommit(commit *git.Commit, version string) error {
_, err := c.db.Exec(`
INSERT OR REPLACE INTO commits
(sha, version, message, author, email, date, is_merge, pr_number)
VALUES (?, ?, ?, ?, ?, ?, ?, ?)
`, commit.SHA, version, commit.Message, commit.Author, commit.Email,
commit.Date, commit.IsMerge, commit.PRNumber)
return err
}
func (c *Cache) SavePR(pr *github.PR) error {
commits, _ := json.Marshal(pr.Commits)
_, err := c.db.Exec(`
INSERT OR REPLACE INTO pull_requests
(number, title, body, author, author_url, author_type, url, merged_at, merge_commit, commits)
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
`, pr.Number, pr.Title, pr.Body, pr.Author, pr.AuthorURL, pr.AuthorType,
pr.URL, pr.MergedAt, pr.MergeCommit, string(commits))
return err
}
func (c *Cache) GetPR(number int) (*github.PR, error) {
var pr github.PR
var commitsJSON string
err := c.db.QueryRow(`
SELECT number, title, body, author, author_url, COALESCE(author_type, 'user'), url, merged_at, merge_commit, commits
FROM pull_requests WHERE number = ?
`, number).Scan(
&pr.Number, &pr.Title, &pr.Body, &pr.Author, &pr.AuthorURL, &pr.AuthorType,
&pr.URL, &pr.MergedAt, &pr.MergeCommit, &commitsJSON,
)
if err == sql.ErrNoRows {
return nil, nil
}
if err != nil {
return nil, err
}
if err := json.Unmarshal([]byte(commitsJSON), &pr.Commits); err != nil {
return nil, fmt.Errorf("failed to unmarshal commits: %w", err)
}
return &pr, nil
}
func (c *Cache) GetVersions() (map[string]*git.Version, error) {
rows, err := c.db.Query(`
SELECT name, date, commit_sha, pr_numbers, ai_summary FROM versions
`)
if err != nil {
return nil, err
}
defer rows.Close()
versions := make(map[string]*git.Version)
for rows.Next() {
var v git.Version
var dateStr sql.NullString
var prNumbersJSON string
var aiSummary sql.NullString
if err := rows.Scan(&v.Name, &dateStr, &v.CommitSHA, &prNumbersJSON, &aiSummary); err != nil {
return nil, err
}
if dateStr.Valid {
v.Date, _ = time.Parse(time.RFC3339, dateStr.String)
}
if prNumbersJSON != "" {
json.Unmarshal([]byte(prNumbersJSON), &v.PRNumbers)
}
if aiSummary.Valid {
v.AISummary = aiSummary.String
}
v.Commits, err = c.getCommitsForVersion(v.Name)
if err != nil {
return nil, err
}
versions[v.Name] = &v
}
return versions, rows.Err()
}
func (c *Cache) getCommitsForVersion(version string) ([]*git.Commit, error) {
rows, err := c.db.Query(`
SELECT sha, message, author, email, date, is_merge, pr_number
FROM commits WHERE version = ?
ORDER BY date DESC
`, version)
if err != nil {
return nil, err
}
defer rows.Close()
var commits []*git.Commit
for rows.Next() {
var commit git.Commit
if err := rows.Scan(
&commit.SHA, &commit.Message, &commit.Author, &commit.Email,
&commit.Date, &commit.IsMerge, &commit.PRNumber,
); err != nil {
return nil, err
}
commits = append(commits, &commit)
}
return commits, rows.Err()
}
func (c *Cache) Clear() error {
tables := []string{"metadata", "versions", "commits", "pull_requests"}
for _, table := range tables {
if _, err := c.db.Exec("DELETE FROM " + table); err != nil {
return err
}
}
return nil
}
// GetLastPRSync returns the timestamp of the last PR sync
func (c *Cache) GetLastPRSync() (time.Time, error) {
var timestamp string
err := c.db.QueryRow("SELECT value FROM metadata WHERE key = 'last_pr_sync'").Scan(&timestamp)
if err == sql.ErrNoRows {
return time.Time{}, nil
}
if err != nil {
return time.Time{}, err
}
return time.Parse(time.RFC3339, timestamp)
}
// SetLastPRSync updates the timestamp of the last PR sync
func (c *Cache) SetLastPRSync(timestamp time.Time) error {
_, err := c.db.Exec(`
INSERT OR REPLACE INTO metadata (key, value, updated_at)
VALUES ('last_pr_sync', ?, CURRENT_TIMESTAMP)
`, timestamp.Format(time.RFC3339))
return err
}
// SavePRBatch saves multiple PRs in a single transaction for better performance
func (c *Cache) SavePRBatch(prs []*github.PR) error {
tx, err := c.db.Begin()
if err != nil {
return fmt.Errorf("failed to begin transaction: %w", err)
}
defer tx.Rollback()
stmt, err := tx.Prepare(`
INSERT OR REPLACE INTO pull_requests
(number, title, body, author, author_url, author_type, url, merged_at, merge_commit, commits)
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
`)
if err != nil {
return fmt.Errorf("failed to prepare statement: %w", err)
}
defer stmt.Close()
for _, pr := range prs {
commits, _ := json.Marshal(pr.Commits)
_, err := stmt.Exec(
pr.Number, pr.Title, pr.Body, pr.Author, pr.AuthorURL, pr.AuthorType,
pr.URL, pr.MergedAt, pr.MergeCommit, string(commits),
)
if err != nil {
return fmt.Errorf("failed to save PR #%d: %w", pr.Number, err)
}
}
return tx.Commit()
}
// GetAllPRs returns all cached PRs
func (c *Cache) GetAllPRs() (map[int]*github.PR, error) {
rows, err := c.db.Query(`
SELECT number, title, body, author, author_url, COALESCE(author_type, 'user'), url, merged_at, merge_commit, commits
FROM pull_requests
`)
if err != nil {
return nil, err
}
defer rows.Close()
prs := make(map[int]*github.PR)
for rows.Next() {
var pr github.PR
var commitsJSON string
if err := rows.Scan(
&pr.Number, &pr.Title, &pr.Body, &pr.Author, &pr.AuthorURL, &pr.AuthorType,
&pr.URL, &pr.MergedAt, &pr.MergeCommit, &commitsJSON,
); err != nil {
return nil, err
}
if err := json.Unmarshal([]byte(commitsJSON), &pr.Commits); err != nil {
return nil, fmt.Errorf("failed to unmarshal commits for PR #%d: %w", pr.Number, err)
}
prs[pr.Number] = &pr
}
return prs, rows.Err()
}
// MarkPRAsNonExistent marks a PR number as non-existent to avoid future fetches
func (c *Cache) MarkPRAsNonExistent(prNumber int) error {
_, err := c.db.Exec(`
INSERT OR REPLACE INTO metadata (key, value, updated_at)
VALUES (?, 'non_existent', CURRENT_TIMESTAMP)
`, fmt.Sprintf("pr_non_existent_%d", prNumber))
return err
}
// IsPRMarkedAsNonExistent checks if a PR is marked as non-existent
func (c *Cache) IsPRMarkedAsNonExistent(prNumber int) bool {
var value string
err := c.db.QueryRow("SELECT value FROM metadata WHERE key = ?",
fmt.Sprintf("pr_non_existent_%d", prNumber)).Scan(&value)
return err == nil && value == "non_existent"
}
// SaveCommitPRMappings saves SHA→PR mappings for all commits in PRs
func (c *Cache) SaveCommitPRMappings(prs []*github.PR) error {
tx, err := c.db.Begin()
if err != nil {
return fmt.Errorf("failed to begin transaction: %w", err)
}
defer tx.Rollback()
stmt, err := tx.Prepare(`
INSERT OR REPLACE INTO commit_pr_mapping (commit_sha, pr_number)
VALUES (?, ?)
`)
if err != nil {
return fmt.Errorf("failed to prepare statement: %w", err)
}
defer stmt.Close()
for _, pr := range prs {
for _, commit := range pr.Commits {
_, err := stmt.Exec(commit.SHA, pr.Number)
if err != nil {
return fmt.Errorf("failed to save commit mapping %s→%d: %w", commit.SHA, pr.Number, err)
}
}
}
return tx.Commit()
}
// GetPRNumberBySHA returns the PR number for a given commit SHA
func (c *Cache) GetPRNumberBySHA(sha string) (int, bool) {
var prNumber int
err := c.db.QueryRow("SELECT pr_number FROM commit_pr_mapping WHERE commit_sha = ?", sha).Scan(&prNumber)
if err == sql.ErrNoRows {
return 0, false
}
if err != nil {
return 0, false
}
return prNumber, true
}
// GetCommitSHAsForPR returns all commit SHAs for a given PR number
func (c *Cache) GetCommitSHAsForPR(prNumber int) ([]string, error) {
rows, err := c.db.Query("SELECT commit_sha FROM commit_pr_mapping WHERE pr_number = ?", prNumber)
if err != nil {
return nil, err
}
defer rows.Close()
var shas []string
for rows.Next() {
var sha string
if err := rows.Scan(&sha); err != nil {
return nil, err
}
shas = append(shas, sha)
}
return shas, rows.Err()
}
// GetUnreleasedContentHash returns the cached content hash for Unreleased
func (c *Cache) GetUnreleasedContentHash() (string, error) {
var hash string
err := c.db.QueryRow("SELECT value FROM metadata WHERE key = 'unreleased_content_hash'").Scan(&hash)
if err == sql.ErrNoRows {
return "", fmt.Errorf("no content hash found")
}
return hash, err
}
// SetUnreleasedContentHash stores the content hash for Unreleased
func (c *Cache) SetUnreleasedContentHash(hash string) error {
_, err := c.db.Exec(`
INSERT OR REPLACE INTO metadata (key, value, updated_at)
VALUES ('unreleased_content_hash', ?, CURRENT_TIMESTAMP)
`, hash)
return err
}
View File
@@ -0,0 +1,699 @@
package changelog
import (
"crypto/sha256"
"fmt"
"os"
"regexp"
"sort"
"strings"
"time"
"github.com/danielmiessler/fabric/cmd/generate_changelog/internal/cache"
"github.com/danielmiessler/fabric/cmd/generate_changelog/internal/config"
"github.com/danielmiessler/fabric/cmd/generate_changelog/internal/git"
"github.com/danielmiessler/fabric/cmd/generate_changelog/internal/github"
)
type Generator struct {
cfg *config.Config
gitWalker *git.Walker
ghClient *github.Client
cache *cache.Cache
versions map[string]*git.Version
prs map[int]*github.PR
}
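// New wires up a Generator: it opens the git repository, resolves the GitHub owner and repo,
// creates the GitHub client, and (unless caching is disabled) opens the SQLite cache,
// clearing it first when a rebuild is requested.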
func New(cfg *config.Config) (*Generator, error) {
gitWalker, err := git.NewWalker(cfg.RepoPath)
if err != nil {
return nil, fmt.Errorf("failed to create git walker: %w", err)
}
owner, repo, err := gitWalker.GetRepoInfo()
if err != nil {
return nil, fmt.Errorf("failed to get repo info: %w", err)
}
ghClient := github.NewClient(cfg.GitHubToken, owner, repo)
var c *cache.Cache
if !cfg.NoCache {
c, err = cache.New(cfg.CacheFile)
if err != nil {
return nil, fmt.Errorf("failed to create cache: %w", err)
}
if cfg.RebuildCache {
if err := c.Clear(); err != nil {
return nil, fmt.Errorf("failed to clear cache: %w", err)
}
}
}
return &Generator{
cfg: cfg,
gitWalker: gitWalker,
ghClient: ghClient,
cache: c,
prs: make(map[int]*github.PR),
}, nil
}
func (g *Generator) Generate() (string, error) {
if err := g.collectData(); err != nil {
return "", fmt.Errorf("failed to collect data: %w", err)
}
if err := g.fetchPRs(); err != nil {
return "", fmt.Errorf("failed to fetch PRs: %w", err)
}
return g.formatChangelog(), nil
}
func (g *Generator) collectData() error {
if g.cache != nil && !g.cfg.RebuildCache {
cachedTag, err := g.cache.GetLastProcessedTag()
if err != nil {
return fmt.Errorf("failed to get last processed tag: %w", err)
}
if cachedTag != "" {
// Get the current latest tag from git
currentTag, err := g.gitWalker.GetLatestTag()
if err == nil {
// Load cached data - we can use it even if there are new tags
cachedVersions, err := g.cache.GetVersions()
if err == nil && len(cachedVersions) > 0 {
g.versions = cachedVersions
// Load cached PRs
for _, version := range g.versions {
for _, prNum := range version.PRNumbers {
if pr, err := g.cache.GetPR(prNum); err == nil && pr != nil {
g.prs[prNum] = pr
}
}
}
// If we have new tags since cache, process the new versions only
if currentTag != cachedTag {
fmt.Fprintf(os.Stderr, "Processing new versions since %s...\n", cachedTag)
newVersions, err := g.gitWalker.WalkHistorySinceTag(cachedTag)
if err != nil {
fmt.Fprintf(os.Stderr, "Warning: Failed to walk history since tag %s: %v\n", cachedTag, err)
} else {
// Merge new versions into cached versions (only add if not already cached)
for name, version := range newVersions {
if name != "Unreleased" { // Handle Unreleased separately
if existingVersion, exists := g.versions[name]; !exists {
g.versions[name] = version
} else {
// Update existing version with new PR numbers if they're missing
if len(existingVersion.PRNumbers) == 0 && len(version.PRNumbers) > 0 {
existingVersion.PRNumbers = version.PRNumbers
}
}
}
}
}
}
// Always update Unreleased section with latest commits
unreleasedVersion, err := g.gitWalker.WalkCommitsSinceTag(currentTag)
if err != nil {
fmt.Fprintf(os.Stderr, "Warning: Failed to walk commits since tag %s: %v\n", currentTag, err)
} else if unreleasedVersion != nil {
// Preserve existing AI summary if available
if existingUnreleased, exists := g.versions["Unreleased"]; exists {
unreleasedVersion.AISummary = existingUnreleased.AISummary
}
// Replace or add the unreleased version
g.versions["Unreleased"] = unreleasedVersion
}
// Save any new versions to cache (after potential AI processing)
if currentTag != cachedTag {
for _, version := range g.versions {
// Skip versions that were already cached and Unreleased
if version.Name != "Unreleased" {
if err := g.cache.SaveVersion(version); err != nil {
fmt.Fprintf(os.Stderr, "Warning: Failed to save version to cache: %v\n", err)
}
for _, commit := range version.Commits {
if err := g.cache.SaveCommit(commit, version.Name); err != nil {
fmt.Fprintf(os.Stderr, "Warning: Failed to save commit to cache: %v\n", err)
}
}
}
}
// Update the last processed tag
if err := g.cache.SetLastProcessedTag(currentTag); err != nil {
fmt.Fprintf(os.Stderr, "Warning: Failed to update last processed tag: %v\n", err)
}
}
return nil
}
}
}
}
versions, err := g.gitWalker.WalkHistory()
if err != nil {
return fmt.Errorf("failed to walk history: %w", err)
}
g.versions = versions
if g.cache != nil {
for _, version := range versions {
if err := g.cache.SaveVersion(version); err != nil {
return fmt.Errorf("failed to save version to cache: %w", err)
}
for _, commit := range version.Commits {
if err := g.cache.SaveCommit(commit, version.Name); err != nil {
return fmt.Errorf("failed to save commit to cache: %w", err)
}
}
}
// Save the latest tag as our cache anchor point
if latestTag, err := g.gitWalker.GetLatestTag(); err == nil && latestTag != "" {
if err := g.cache.SetLastProcessedTag(latestTag); err != nil {
return fmt.Errorf("failed to save last processed tag: %w", err)
}
}
}
return nil
}
func (g *Generator) fetchPRs() error {
// First, load all cached PRs
if g.cache != nil {
cachedPRs, err := g.cache.GetAllPRs()
if err != nil {
fmt.Fprintf(os.Stderr, "Warning: Failed to load cached PRs: %v\n", err)
} else {
g.prs = cachedPRs
}
}
// Check if we need to fetch new PRs
var lastSync time.Time
if g.cache != nil {
lastSync, _ = g.cache.GetLastPRSync()
}
// Check if we need to sync for missing PRs
missingPRs := false
for _, version := range g.versions {
for _, prNum := range version.PRNumbers {
if _, exists := g.prs[prNum]; !exists {
missingPRs = true
break
}
}
if missingPRs {
break
}
}
if missingPRs {
fmt.Fprintf(os.Stderr, "Full sync triggered due to missing PRs in cache.\n")
}
// If we have never synced or it's been more than 24 hours, do a full sync
// Also sync if we have versions with PR numbers that aren't cached
needsSync := lastSync.IsZero() || time.Since(lastSync) > 24*time.Hour || g.cfg.ForcePRSync || missingPRs
if !needsSync {
fmt.Fprintf(os.Stderr, "Using cached PR data (last sync: %s)\n", lastSync.Format("2006-01-02 15:04:05"))
return nil
}
fmt.Fprintf(os.Stderr, "Fetching merged PRs from GitHub using GraphQL...\n")
// Use GraphQL for ultimate performance - gets everything in ~5-10 calls
prs, err := g.ghClient.FetchAllMergedPRsGraphQL(lastSync)
if err != nil {
fmt.Fprintf(os.Stderr, "GraphQL fetch failed, falling back to REST API: %v\n", err)
// Fall back to REST API
prs, err = g.ghClient.FetchAllMergedPRs(lastSync)
if err != nil {
return fmt.Errorf("both GraphQL and REST API failed: %w", err)
}
}
// Update our PR map with new data
for _, pr := range prs {
g.prs[pr.Number] = pr
}
// Save all PRs to cache in a batch transaction
if g.cache != nil && len(prs) > 0 {
// Save PRs
if err := g.cache.SavePRBatch(prs); err != nil {
fmt.Fprintf(os.Stderr, "Warning: Failed to cache PRs: %v\n", err)
}
// Save SHA→PR mappings for lightning-fast git operations
if err := g.cache.SaveCommitPRMappings(prs); err != nil {
fmt.Fprintf(os.Stderr, "Warning: Failed to cache commit mappings: %v\n", err)
}
// Update last sync timestamp
if err := g.cache.SetLastPRSync(time.Now()); err != nil {
fmt.Fprintf(os.Stderr, "Warning: Failed to update last sync timestamp: %v\n", err)
}
}
if len(prs) > 0 {
fmt.Fprintf(os.Stderr, "Fetched %d PRs with commits (total cached: %d)\n", len(prs), len(g.prs))
}
return nil
}
func (g *Generator) formatChangelog() string {
var sb strings.Builder
sb.WriteString("# Changelog\n")
versionList := g.getSortedVersions()
for _, version := range versionList {
if g.cfg.Version != "" && version.Name != g.cfg.Version {
continue
}
versionText := g.formatVersion(version)
if versionText != "" {
sb.WriteString("\n")
sb.WriteString(versionText)
}
}
return sb.String()
}
func (g *Generator) getSortedVersions() []*git.Version {
var versions []*git.Version
var releasedVersions []*git.Version
// Collect all released versions (non-"Unreleased")
for name, version := range g.versions {
if name != "Unreleased" {
releasedVersions = append(releasedVersions, version)
}
}
// Sort released versions by date (newest first)
sort.Slice(releasedVersions, func(i, j int) bool {
return releasedVersions[i].Date.After(releasedVersions[j].Date)
})
// Add "Unreleased" first if it exists and has commits
if unreleased, exists := g.versions["Unreleased"]; exists && len(unreleased.Commits) > 0 {
versions = append(versions, unreleased)
}
// Add sorted released versions
versions = append(versions, releasedVersions...)
if g.cfg.Limit > 0 && len(versions) > g.cfg.Limit {
versions = versions[:g.cfg.Limit]
}
return versions
}
func (g *Generator) formatVersion(version *git.Version) string {
var sb strings.Builder
// Generate raw content
rawContent := g.generateRawVersionContent(version)
if rawContent == "" {
return ""
}
header := g.formatVersionHeader(version)
sb.WriteString("\n")
sb.WriteString(header)
// If AI summarization is enabled, enhance with AI
if g.cfg.EnableAISummary {
// For "Unreleased", check if content has changed since last AI summary
if version.Name == "Unreleased" && version.AISummary != "" && g.cache != nil {
// Get cached content hash
cachedHash, err := g.cache.GetUnreleasedContentHash()
if err == nil {
// Calculate current content hash
currentHash := hashContent(rawContent)
if cachedHash == currentHash {
// Content unchanged, use cached summary
fmt.Fprintf(os.Stderr, "✅ %s content unchanged (skipping AI)\n", version.Name)
sb.WriteString(version.AISummary)
return fixMarkdown(sb.String())
}
}
}
// For released versions, if we have cached AI summary, use it!
if version.Name != "Unreleased" && version.AISummary != "" {
fmt.Fprintf(os.Stderr, "✅ %s already summarized (skipping)\n", version.Name)
sb.WriteString(version.AISummary)
return fixMarkdown(sb.String())
}
fmt.Fprintf(os.Stderr, "🤖 AI summarizing %s...", version.Name)
aiSummary, err := SummarizeVersionContent(rawContent)
if err != nil {
fmt.Fprintf(os.Stderr, " Failed: %v\n", err)
sb.WriteString(rawContent)
return fixMarkdown(sb.String())
}
if checkForAIError(aiSummary) {
fmt.Fprintf(os.Stderr, " AI error detected, using raw content instead\n")
sb.WriteString(rawContent)
fmt.Fprintf(os.Stderr, "Raw Content was: (%d bytes) %s \n", len(rawContent), rawContent)
fmt.Fprintf(os.Stderr, "AI Summary was: (%d bytes) %s\n", len(aiSummary), aiSummary)
return fixMarkdown(sb.String())
}
fmt.Fprintf(os.Stderr, " Done!\n")
aiSummary = strings.TrimSpace(aiSummary)
// Cache the AI summary and content hash
version.AISummary = aiSummary
if g.cache != nil {
if err := g.cache.UpdateVersionAISummary(version.Name, aiSummary); err != nil {
fmt.Fprintf(os.Stderr, "Warning: Failed to cache AI summary: %v\n", err)
}
// Cache content hash for "Unreleased" to detect changes
if version.Name == "Unreleased" {
if err := g.cache.SetUnreleasedContentHash(hashContent(rawContent)); err != nil {
fmt.Fprintf(os.Stderr, "Warning: Failed to cache content hash: %v\n", err)
}
}
}
sb.WriteString(aiSummary)
return fixMarkdown(sb.String())
}
sb.WriteString(rawContent)
return fixMarkdown(sb.String())
}
func checkForAIError(summary string) bool {
// Check for common AI error patterns
errorPatterns := []string{
"I don't see any", "please provide",
"content you've provided appears to be incomplete",
}
for _, pattern := range errorPatterns {
if strings.Contains(summary, pattern) {
return true
}
}
return false
}
// formatVersionHeader formats just the version header (## ...)
func (g *Generator) formatVersionHeader(version *git.Version) string {
if version.Name == "Unreleased" {
return "## Unreleased\n\n"
}
return fmt.Sprintf("\n## %s (%s)\n\n", version.Name, version.Date.Format("2006-01-02"))
}
// generateRawVersionContent generates the raw content (PRs + commits) for a version
func (g *Generator) generateRawVersionContent(version *git.Version) string {
var sb strings.Builder
// Build a set of commit SHAs that are part of fetched PRs
prCommitSHAs := make(map[string]bool)
for _, prNum := range version.PRNumbers {
if pr, exists := g.prs[prNum]; exists {
for _, prCommit := range pr.Commits {
prCommitSHAs[prCommit.SHA] = true
}
}
}
prCommits := make(map[int][]*git.Commit)
directCommits := []*git.Commit{}
for _, commit := range version.Commits {
// Skip version bump commits from output
if commit.IsVersion {
continue
}
// If this commit is part of a fetched PR, don't include it in direct commits
if prCommitSHAs[commit.SHA] {
continue
}
if commit.PRNumber > 0 {
prCommits[commit.PRNumber] = append(prCommits[commit.PRNumber], commit)
} else {
directCommits = append(directCommits, commit)
}
}
// There are occasionally no PRs or direct commits other than version bumps, so we handle that gracefully
if len(prCommits) == 0 && len(directCommits) == 0 {
return ""
}
prependNewline := ""
for _, prNum := range version.PRNumbers {
if pr, exists := g.prs[prNum]; exists {
sb.WriteString(prependNewline)
sb.WriteString(g.formatPR(pr))
prependNewline = "\n"
}
}
if len(directCommits) > 0 {
// Sort direct commits by date (newest first) for consistent ordering
sort.Slice(directCommits, func(i, j int) bool {
return directCommits[i].Date.After(directCommits[j].Date)
})
sb.WriteString(prependNewline + "### Direct commits\n\n")
for _, commit := range directCommits {
message := g.formatCommitMessage(strings.TrimSpace(commit.Message))
if message != "" && !g.isDuplicateMessage(message, directCommits) {
sb.WriteString(fmt.Sprintf("- %s\n", message))
}
}
}
return fixMarkdown(
strings.ReplaceAll(sb.String(), "\n-\n", "\n"), // Remove empty list items
)
}
func fixMarkdown(content string) string {
// Fix MD032/blank-around-lists: Lists should be surrounded by blank lines
lines := strings.Split(content, "\n")
inList := false
preListNewline := false
for i := range lines {
line := strings.TrimSpace(lines[i])
if strings.HasPrefix(line, "- ") || strings.HasPrefix(line, "* ") {
if !inList {
inList = true
// Ensure there's a blank line before the list starts
if !preListNewline && i > 0 && lines[i-1] != "" {
line = "\n" + line
preListNewline = true
}
}
} else {
if inList {
inList = false
preListNewline = false
}
}
lines[i] = strings.TrimRight(line, " \t")
}
fixedContent := strings.TrimSpace(strings.Join(lines, "\n"))
return fixedContent + "\n"
}
func (g *Generator) formatPR(pr *github.PR) string {
var sb strings.Builder
pr.Title = strings.TrimRight(strings.TrimSpace(pr.Title), ".")
// Add type indicator for non-users
authorName := pr.Author
switch pr.AuthorType {
case "bot":
authorName += "[bot]"
case "organization":
authorName += "[org]"
}
sb.WriteString(fmt.Sprintf("### PR [#%d](%s) by [%s](%s): %s\n\n",
pr.Number, pr.URL, authorName, pr.AuthorURL, strings.TrimSpace(pr.Title)))
changes := g.extractChanges(pr)
for _, change := range changes {
if change != "" {
sb.WriteString(fmt.Sprintf("- %s\n", change))
}
}
return sb.String()
}
func (g *Generator) extractChanges(pr *github.PR) []string {
var changes []string
seen := make(map[string]bool)
for _, commit := range pr.Commits {
message := g.formatCommitMessage(commit.Message)
if message != "" && !seen[message] {
seen[message] = true
changes = append(changes, message)
}
}
if len(changes) == 0 && pr.Body != "" {
lines := strings.Split(pr.Body, "\n")
for _, line := range lines {
line = strings.TrimSpace(line)
if strings.HasPrefix(line, "- ") || strings.HasPrefix(line, "* ") {
change := strings.TrimPrefix(strings.TrimPrefix(line, "- "), "* ")
if change != "" {
changes = append(changes, change)
}
}
}
}
return changes
}
func normalizeLineEndings(content string) string {
return strings.ReplaceAll(content, "\r\n", "\n")
}
func (g *Generator) formatCommitMessage(message string) string {
strings_to_remove := []string{
"### CHANGES\n", "## CHANGES\n", "# CHANGES\n",
"...\n", "---\n", "## Changes\n", "## Change",
"Update version to v..1 and commit\n",
"# What this Pull Request (PR) does\n",
"# Conflicts:",
}
message = normalizeLineEndings(message)
// No hard tabs
message = strings.ReplaceAll(message, "\t", " ")
if len(message) > 0 {
message = strings.ToUpper(message[:1]) + message[1:]
}
for _, str := range strings_to_remove {
if strings.Contains(message, str) {
message = strings.ReplaceAll(message, str, "")
}
}
message = fixFormatting(message)
return message
}
func fixFormatting(message string) string {
// Turn "*" lists into "-" lists
message = strings.ReplaceAll(message, "* ", "- ")
// Remove extra spaces around dashes
message = strings.ReplaceAll(message, "-   ", "- ")
message = strings.ReplaceAll(message, "-  ", "- ")
// turn bare URL into <URL>
if strings.Contains(message, "http://") || strings.Contains(message, "https://") {
// Use regex to wrap bare URLs with angle brackets
urlRegex := regexp.MustCompile(`\b(https?://[^\s<>]+)`)
message = urlRegex.ReplaceAllString(message, "<$1>")
}
// Replace "## LINKS\n" with "- "
message = strings.ReplaceAll(message, "## LINKS\n", "- ")
// Dependabot messages: "- [Commits]" should become "\n- [Commits]"
message = strings.TrimSpace(message)
// Turn multiple newlines into a single newline
message = strings.TrimSpace(strings.ReplaceAll(message, "\n\n", "\n"))
// Fix inline trailing spaces
message = strings.ReplaceAll(message, " \n", "\n")
// Fix weird indent before list,
message = strings.ReplaceAll(message, "\n - ", "\n- ")
// blanks-around-lists MD032 fix
// Use regex to ensure blank line before list items that don't already have one
listRegex := regexp.MustCompile(`(?m)([^\n-].*[^:\n])\n([-*] .*)`)
message = listRegex.ReplaceAllString(message, "$1\n\n$2")
// Change random first-level "#" to 4th level "####"
// This is a hack to fix spurious first-level headings that are not actual headings
// but rather just comments or notes in the commit message.
message = strings.ReplaceAll(message, "# ", "\n#### ")
message = strings.ReplaceAll(message, "\n\n\n", "\n\n")
// Wrap any non-wrapped Emails with angle brackets
emailRegex := regexp.MustCompile(`([a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,})`)
message = emailRegex.ReplaceAllString(message, "<$1>")
// Wrap any non-wrapped URLs with angle brackets
urlRegex := regexp.MustCompile(`(https?://[^\s<]+)`)
message = urlRegex.ReplaceAllString(message, "<$1>")
message = strings.ReplaceAll(message, "<<", "<")
message = strings.ReplaceAll(message, ">>", ">")
// Fix some spurious Issue/PR links at the beginning of a commit message line
prOrIssueLinkRegex := regexp.MustCompile("\n" + `(#\d+)`)
message = prOrIssueLinkRegex.ReplaceAllString(message, " $1")
// Remove leading/trailing whitespace
message = strings.TrimSpace(message)
return message
}
func (g *Generator) isDuplicateMessage(message string, commits []*git.Commit) bool {
if message == "." || strings.ToLower(message) == "fix" {
count := 0
for _, commit := range commits {
formatted := g.formatCommitMessage(commit.Message)
if formatted == message {
count++
if count > 1 {
return true
}
}
}
}
return false
}
// hashContent generates a SHA256 hash of the content for change detection
func hashContent(content string) string {
hash := sha256.Sum256([]byte(content))
return fmt.Sprintf("%x", hash)
}
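For illustration only (not part of this change set): a minimal sketch, assumed to sit in the same changelog package, showing what the commit-message cleanup above does to an invented input. The sample message and helper name are hypothetical.

package changelog

import "fmt"

// demoFormatCommitMessage is a hypothetical helper: the first letter is
// capitalized, the "*" bullet becomes "-", and fixFormatting wraps the bare
// URL and email address in angle brackets.
func demoFormatCommitMessage() {
	g := &Generator{}
	raw := "add usage docs\n\n* details at https://example.com\nContact: dev@example.com"
	fmt.Println(g.formatCommitMessage(raw))
}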


@@ -0,0 +1,79 @@
package changelog
import (
"fmt"
"os"
"os/exec"
"strings"
)
const DefaultSummarizeModel = "claude-sonnet-4-20250514"
const MinContentLength = 256 // Minimum content length to consider for summarization
const prompt = `# ROLE
You are an expert Technical Writer specializing in creating clear, concise,
and professional release notes from raw Git commit logs.
# TASK
Your goal is to transform a provided block of Git commit logs into a clean,
human-readable changelog summary. You will identify the most important changes,
format them as a bulleted list, and preserve the associated Pull Request (PR)
information.
# INSTRUCTIONS:
Follow these steps in order:
1. Deeply analyze the input. You will be given a block of text containing PR
information and commit log messages. Carefully read through the logs
to identify individual commits and their descriptions.
2. Identify Key Changes: Focus on commits that represent significant changes,
such as new features ("feat"), bug fixes ("fix"), performance improvements ("perf"),
or breaking changes ("BREAKING CHANGE").
3. Select the Top 5: From the identified key changes, select at most the five
(5) most impactful ones to include in the summary.
If there are five or fewer total changes, include all of them.
4. Format the Output:
- Where you see a PR header, include the PR header verbatim. NO CHANGES.
**This is a critical rule: Do not modify the PR header, as it contains
important links.** The lines that follow the PR header are the related changes.
- Do not add any additional text or preamble. Begin directly with the output.
- Use bullet points for each key change, starting each point with a hyphen ("-").
- Ensure that the summary is concise and focused on the main changes.
- The summary should be in American English (en-US), using proper grammar and punctuation.
5. If the content is too brief or you do not see any PR headers, return the content as is.
`
// getSummarizeModel returns the model to use for AI summarization
func getSummarizeModel() string {
if model := os.Getenv("FABRIC_CHANGELOG_SUMMARIZE_MODEL"); model != "" {
return model
}
return DefaultSummarizeModel
}
// SummarizeVersionContent takes raw version content and returns AI-enhanced summary
func SummarizeVersionContent(content string) (string, error) {
if strings.TrimSpace(content) == "" {
return "", fmt.Errorf("no content to summarize")
}
if len(content) < MinContentLength {
// If content is too brief, return it as is
return content, nil
}
model := getSummarizeModel()
cmd := exec.Command("fabric", "-m", model, prompt)
cmd.Stdin = strings.NewReader(content)
output, err := cmd.Output()
if err != nil {
return "", fmt.Errorf("fabric command failed: %w", err)
}
summary := strings.TrimSpace(string(output))
if summary == "" {
return "", fmt.Errorf("fabric returned empty summary")
}
return summary, nil
}
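For illustration only: a hypothetical test sketch, assumed to live alongside this file, exercising the environment-variable override above. The override model name is an arbitrary example value.

package changelog

import "testing"

// TestGetSummarizeModelOverride (hypothetical): the model is re-read from
// FABRIC_CHANGELOG_SUMMARIZE_MODEL on every call and falls back to
// DefaultSummarizeModel when the variable is empty or unset.
func TestGetSummarizeModelOverride(t *testing.T) {
	t.Setenv("FABRIC_CHANGELOG_SUMMARIZE_MODEL", "gpt-4o") // arbitrary example model
	if got := getSummarizeModel(); got != "gpt-4o" {
		t.Fatalf("expected override to win, got %q", got)
	}
	t.Setenv("FABRIC_CHANGELOG_SUMMARIZE_MODEL", "")
	if got := getSummarizeModel(); got != DefaultSummarizeModel {
		t.Fatalf("expected default model, got %q", got)
	}
}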


@@ -0,0 +1,15 @@
package config
type Config struct {
RepoPath string
OutputFile string
Limit int
Version string
SaveData bool
CacheFile string
NoCache bool
RebuildCache bool
GitHubToken string
ForcePRSync bool
EnableAISummary bool
}


@@ -0,0 +1,26 @@
package git
import (
"time"
)
type Commit struct {
SHA string
Message string
Author string
Email string
Date time.Time
IsMerge bool
PRNumber int
IsVersion bool
Version string
}
type Version struct {
Name string
Date time.Time
CommitSHA string
Commits []*Commit
PRNumbers []int
AISummary string
}


@@ -0,0 +1,402 @@
package git
import (
"fmt"
"regexp"
"strconv"
"strings"
"time"
"github.com/go-git/go-git/v5"
"github.com/go-git/go-git/v5/plumbing"
"github.com/go-git/go-git/v5/plumbing/object"
"github.com/go-git/go-git/v5/plumbing/storer"
)
var (
versionPattern = regexp.MustCompile(`Update version to (v\d+\.\d+\.\d+)`)
prPattern = regexp.MustCompile(`Merge pull request #(\d+)`)
)
type Walker struct {
repo *git.Repository
}
func NewWalker(repoPath string) (*Walker, error) {
repo, err := git.PlainOpen(repoPath)
if err != nil {
return nil, fmt.Errorf("failed to open repository: %w", err)
}
return &Walker{repo: repo}, nil
}
// GetLatestTag returns the name of the most recent tag by committer date
func (w *Walker) GetLatestTag() (string, error) {
tagRefs, err := w.repo.Tags()
if err != nil {
return "", err
}
var latestTagCommit *object.Commit
var latestTagName string
err = tagRefs.ForEach(func(tagRef *plumbing.Reference) error {
revision := plumbing.Revision(tagRef.Name().String())
tagCommitHash, err := w.repo.ResolveRevision(revision)
if err != nil {
return err
}
commit, err := w.repo.CommitObject(*tagCommitHash)
if err != nil {
return err
}
if latestTagCommit == nil {
latestTagCommit = commit
latestTagName = tagRef.Name().Short() // Get short name like "v1.4.245"
}
if commit.Committer.When.After(latestTagCommit.Committer.When) {
latestTagCommit = commit
latestTagName = tagRef.Name().Short()
}
return nil
})
if err != nil {
return "", err
}
return latestTagName, nil
}
// WalkCommitsSinceTag walks commits from the specified tag to HEAD and returns only "Unreleased" version
func (w *Walker) WalkCommitsSinceTag(tagName string) (*Version, error) {
// Get the tag reference
tagRef, err := w.repo.Tag(tagName)
if err != nil {
return nil, fmt.Errorf("failed to find tag %s: %w", tagName, err)
}
// Get the commit that the tag points to
tagCommit, err := w.repo.CommitObject(tagRef.Hash())
if err != nil {
return nil, fmt.Errorf("failed to get tag commit: %w", err)
}
// Get HEAD
headRef, err := w.repo.Head()
if err != nil {
return nil, fmt.Errorf("failed to get HEAD: %w", err)
}
// Walk from HEAD back to the tag commit (exclusive)
commitIter, err := w.repo.Log(&git.LogOptions{
From: headRef.Hash(),
Order: git.LogOrderCommitterTime,
})
if err != nil {
return nil, fmt.Errorf("failed to get commit log: %w", err)
}
version := &Version{
Name: "Unreleased",
Commits: []*Commit{},
}
prNumbers := []int{}
err = commitIter.ForEach(func(c *object.Commit) error {
// Stop when we reach the tag commit (don't include it)
if c.Hash == tagCommit.Hash {
return storer.ErrStop // stop the iteration cleanly once the tag commit is reached
}
commit := &Commit{
SHA: c.Hash.String(),
Message: strings.TrimSpace(c.Message),
Date: c.Committer.When,
}
// Check for version patterns
if versionMatch := versionPattern.FindStringSubmatch(commit.Message); versionMatch != nil {
commit.IsVersion = true
}
// Check for PR merge patterns
if prMatch := prPattern.FindStringSubmatch(commit.Message); prMatch != nil {
if prNumber, err := strconv.Atoi(prMatch[1]); err == nil {
commit.PRNumber = prNumber
prNumbers = append(prNumbers, prNumber)
}
}
version.Commits = append(version.Commits, commit)
return nil
})
// storer.ErrStop is expected when we reach the tag commit
if err != nil && err != storer.ErrStop {
return nil, fmt.Errorf("failed to walk commits: %w", err)
}
// Remove duplicates from prNumbers and set them
prNumbersMap := make(map[int]bool)
for _, prNum := range prNumbers {
prNumbersMap[prNum] = true
}
version.PRNumbers = make([]int, 0, len(prNumbersMap))
for prNum := range prNumbersMap {
version.PRNumbers = append(version.PRNumbers, prNum)
}
return version, nil
}
func (w *Walker) WalkHistory() (map[string]*Version, error) {
ref, err := w.repo.Head()
if err != nil {
return nil, fmt.Errorf("failed to get HEAD: %w", err)
}
commitIter, err := w.repo.Log(&git.LogOptions{
From: ref.Hash(),
Order: git.LogOrderCommitterTime,
})
if err != nil {
return nil, fmt.Errorf("failed to get commit log: %w", err)
}
versions := make(map[string]*Version)
currentVersion := "Unreleased"
versions[currentVersion] = &Version{
Name: currentVersion,
Commits: []*Commit{},
}
prNumbers := make(map[string][]int)
err = commitIter.ForEach(func(c *object.Commit) error {
// c.Message = Summarize(c.Message)
commit := &Commit{
SHA: c.Hash.String(),
Message: strings.TrimSpace(c.Message),
Author: c.Author.Name,
Email: c.Author.Email,
Date: c.Author.When,
IsMerge: len(c.ParentHashes) > 1,
}
if matches := versionPattern.FindStringSubmatch(commit.Message); len(matches) > 1 {
commit.IsVersion = true
commit.Version = matches[1]
currentVersion = commit.Version
if _, exists := versions[currentVersion]; !exists {
versions[currentVersion] = &Version{
Name: currentVersion,
Date: commit.Date,
CommitSHA: commit.SHA,
Commits: []*Commit{},
}
}
return nil
}
if matches := prPattern.FindStringSubmatch(commit.Message); len(matches) > 1 {
prNumber := 0
fmt.Sscanf(matches[1], "%d", &prNumber)
commit.PRNumber = prNumber
prNumbers[currentVersion] = append(prNumbers[currentVersion], prNumber)
}
versions[currentVersion].Commits = append(versions[currentVersion].Commits, commit)
return nil
})
if err != nil {
return nil, fmt.Errorf("failed to walk commits: %w", err)
}
for version, prs := range prNumbers {
versions[version].PRNumbers = dedupInts(prs)
}
return versions, nil
}
func (w *Walker) GetRepoInfo() (owner string, name string, err error) {
remotes, err := w.repo.Remotes()
if err != nil {
return "", "", fmt.Errorf("failed to get remotes: %w", err)
}
// First try upstream (preferred for forks)
for _, remote := range remotes {
if remote.Config().Name == "upstream" {
urls := remote.Config().URLs
if len(urls) > 0 {
owner, name = parseGitHubURL(urls[0])
if owner != "" && name != "" {
return owner, name, nil
}
}
}
}
// Then try origin
for _, remote := range remotes {
if remote.Config().Name == "origin" {
urls := remote.Config().URLs
if len(urls) > 0 {
owner, name = parseGitHubURL(urls[0])
if owner != "" && name != "" {
return owner, name, nil
}
}
}
}
return "danielmiessler", "fabric", nil
}
func parseGitHubURL(url string) (owner, repo string) {
patterns := []string{
`github\.com[:/]([^/]+)/([^/.]+)`,
`github\.com[:/]([^/]+)/([^/]+)\.git$`,
}
for _, pattern := range patterns {
re := regexp.MustCompile(pattern)
matches := re.FindStringSubmatch(url)
if len(matches) > 2 {
return matches[1], matches[2]
}
}
return "", ""
}
// WalkHistorySinceTag walks git history from HEAD down to (but not including) the specified tag
// and returns any version commits found along the way
func (w *Walker) WalkHistorySinceTag(sinceTag string) (map[string]*Version, error) {
// Get the commit SHA for the sinceTag
tagRef, err := w.repo.Tag(sinceTag)
if err != nil {
return nil, fmt.Errorf("failed to get tag %s: %w", sinceTag, err)
}
tagCommit, err := w.repo.CommitObject(tagRef.Hash())
if err != nil {
return nil, fmt.Errorf("failed to get commit for tag %s: %w", sinceTag, err)
}
// Get HEAD reference
ref, err := w.repo.Head()
if err != nil {
return nil, fmt.Errorf("failed to get HEAD: %w", err)
}
// Walk from HEAD down to the tag commit (excluding it)
commitIter, err := w.repo.Log(&git.LogOptions{
From: ref.Hash(),
Order: git.LogOrderCommitterTime,
})
if err != nil {
return nil, fmt.Errorf("failed to create commit iterator: %w", err)
}
defer commitIter.Close()
versions := make(map[string]*Version)
currentVersion := "Unreleased"
prNumbers := make(map[string][]int)
err = commitIter.ForEach(func(c *object.Commit) error {
// Stop iteration when the hash of the current commit matches the hash of the specified sinceTag commit
if c.Hash == tagCommit.Hash {
return storer.ErrStop
}
commit := &Commit{
SHA: c.Hash.String(),
Message: strings.TrimSpace(c.Message),
Author: c.Author.Name,
Email: c.Author.Email,
Date: c.Author.When,
IsMerge: len(c.ParentHashes) > 1,
}
// Check for version pattern
if matches := versionPattern.FindStringSubmatch(commit.Message); len(matches) > 1 {
commit.IsVersion = true
commit.Version = matches[1]
currentVersion = commit.Version
if _, exists := versions[currentVersion]; !exists {
versions[currentVersion] = &Version{
Name: currentVersion,
Date: commit.Date,
CommitSHA: commit.SHA,
Commits: []*Commit{},
}
}
return nil
}
// Check for PR merge pattern
if matches := prPattern.FindStringSubmatch(commit.Message); len(matches) > 1 {
prNumber, err := strconv.Atoi(matches[1])
if err != nil {
// Handle parsing error (e.g., log it or skip processing)
return fmt.Errorf("failed to parse PR number: %v", err)
}
commit.PRNumber = prNumber
prNumbers[currentVersion] = append(prNumbers[currentVersion], prNumber)
}
// Add commit to current version
if _, exists := versions[currentVersion]; !exists {
versions[currentVersion] = &Version{
Name: currentVersion,
Date: time.Time{}, // Zero value, will be set by version commit
CommitSHA: "",
Commits: []*Commit{},
}
}
versions[currentVersion].Commits = append(versions[currentVersion].Commits, commit)
return nil
})
// Handle the stop condition - storer.ErrStop is expected
if err == storer.ErrStop {
err = nil
}
// Assign collected PR numbers to each version
for version, prs := range prNumbers {
versions[version].PRNumbers = dedupInts(prs)
}
return versions, err
}
func dedupInts(ints []int) []int {
seen := make(map[int]bool)
result := []int{}
for _, i := range ints {
if !seen[i] {
seen[i] = true
result = append(result, i)
}
}
return result
}
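For illustration only: a minimal sketch of driving the walker directly instead of through the generator. Because the package is internal, the snippet is assumed to live somewhere under cmd/generate_changelog; the "." repository path is an example.

package main

import (
	"fmt"
	"log"

	"github.com/danielmiessler/fabric/cmd/generate_changelog/internal/git"
)

// Hypothetical driver: walk the full history of the repository in the current
// directory and print how many commits and PRs were grouped under each version.
func main() {
	w, err := git.NewWalker(".")
	if err != nil {
		log.Fatal(err)
	}
	versions, err := w.WalkHistory()
	if err != nil {
		log.Fatal(err)
	}
	for name, v := range versions {
		fmt.Printf("%s: %d commits, %d PRs\n", name, len(v.Commits), len(v.PRNumbers))
	}
}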


@@ -0,0 +1,354 @@
package github
import (
"context"
"fmt"
"net/http"
"os"
"strings"
"sync"
"time"
"github.com/google/go-github/v66/github"
"github.com/hasura/go-graphql-client"
"golang.org/x/oauth2"
)
type Client struct {
client *github.Client
graphqlClient *graphql.Client
owner string
repo string
token string
}
func NewClient(token, owner, repo string) *Client {
var githubClient *github.Client
var httpClient *http.Client
var gqlClient *graphql.Client
if token != "" {
ts := oauth2.StaticTokenSource(
&oauth2.Token{AccessToken: token},
)
httpClient = oauth2.NewClient(context.Background(), ts)
githubClient = github.NewClient(httpClient)
gqlClient = graphql.NewClient("https://api.github.com/graphql", httpClient)
} else {
httpClient = http.DefaultClient
githubClient = github.NewClient(nil)
gqlClient = graphql.NewClient("https://api.github.com/graphql", httpClient)
}
return &Client{
client: githubClient,
graphqlClient: gqlClient,
owner: owner,
repo: repo,
token: token,
}
}
func (c *Client) FetchPRs(prNumbers []int) ([]*PR, error) {
if len(prNumbers) == 0 {
return []*PR{}, nil
}
ctx := context.Background()
prs := make([]*PR, 0, len(prNumbers))
prsChan := make(chan *PR, len(prNumbers))
errChan := make(chan error, len(prNumbers))
var wg sync.WaitGroup
semaphore := make(chan struct{}, 10)
for _, prNumber := range prNumbers {
wg.Add(1)
go func(num int) {
defer wg.Done()
semaphore <- struct{}{}
defer func() { <-semaphore }()
pr, err := c.fetchSinglePR(ctx, num)
if err != nil {
errChan <- fmt.Errorf("failed to fetch PR #%d: %w", num, err)
return
}
prsChan <- pr
}(prNumber)
}
go func() {
wg.Wait()
close(prsChan)
close(errChan)
}()
var errors []error
for pr := range prsChan {
prs = append(prs, pr)
}
for err := range errChan {
errors = append(errors, err)
}
if len(errors) > 0 {
return prs, fmt.Errorf("some PRs failed to fetch: %v", errors)
}
return prs, nil
}
func (c *Client) fetchSinglePR(ctx context.Context, prNumber int) (*PR, error) {
pr, _, err := c.client.PullRequests.Get(ctx, c.owner, c.repo, prNumber)
if err != nil {
return nil, err
}
commits, _, err := c.client.PullRequests.ListCommits(ctx, c.owner, c.repo, prNumber, nil)
if err != nil {
return nil, fmt.Errorf("failed to fetch commits: %w", err)
}
result := &PR{
Number: prNumber,
Title: getString(pr.Title),
Body: getString(pr.Body),
URL: getString(pr.HTMLURL),
Commits: make([]PRCommit, 0, len(commits)),
}
if pr.MergedAt != nil {
result.MergedAt = pr.MergedAt.Time
}
if pr.User != nil {
result.Author = getString(pr.User.Login)
result.AuthorURL = getString(pr.User.HTMLURL)
userType := getString(pr.User.Type) // GitHub API returns "User", "Organization", or "Bot"
// Convert GitHub API type to lowercase
switch userType {
case "User":
result.AuthorType = "user"
case "Organization":
result.AuthorType = "organization"
case "Bot":
result.AuthorType = "bot"
default:
result.AuthorType = "user" // Default fallback
}
}
if pr.MergeCommitSHA != nil {
result.MergeCommit = *pr.MergeCommitSHA
}
for _, commit := range commits {
if commit.Commit != nil {
prCommit := PRCommit{
SHA: getString(commit.SHA),
Message: strings.TrimSpace(getString(commit.Commit.Message)),
}
if commit.Commit.Author != nil {
prCommit.Author = getString(commit.Commit.Author.Name)
}
result.Commits = append(result.Commits, prCommit)
}
}
return result, nil
}
func getString(s *string) string {
if s == nil {
return ""
}
return *s
}
// FetchAllMergedPRs fetches all merged PRs using GitHub's search API
// This is much more efficient than fetching PRs individually
func (c *Client) FetchAllMergedPRs(since time.Time) ([]*PR, error) {
ctx := context.Background()
var allPRs []*PR
// Build search query for merged PRs
query := fmt.Sprintf("repo:%s/%s is:pr is:merged", c.owner, c.repo)
if !since.IsZero() {
query += fmt.Sprintf(" merged:>=%s", since.Format("2006-01-02"))
}
opts := &github.SearchOptions{
Sort: "created",
Order: "desc",
ListOptions: github.ListOptions{
PerPage: 100, // Maximum allowed
},
}
for {
result, resp, err := c.client.Search.Issues(ctx, query, opts)
if err != nil {
return allPRs, fmt.Errorf("failed to search PRs: %w", err)
}
// Process PRs in parallel
prsChan := make(chan *PR, len(result.Issues))
errChan := make(chan error, len(result.Issues))
var wg sync.WaitGroup
semaphore := make(chan struct{}, 10) // Limit concurrent requests
for _, issue := range result.Issues {
if issue.PullRequestLinks == nil {
continue // Not a PR
}
wg.Add(1)
go func(prNumber int) {
defer wg.Done()
semaphore <- struct{}{}
defer func() { <-semaphore }()
pr, err := c.fetchSinglePR(ctx, prNumber)
if err != nil {
errChan <- fmt.Errorf("failed to fetch PR #%d: %w", prNumber, err)
return
}
prsChan <- pr
}(*issue.Number)
}
go func() {
wg.Wait()
close(prsChan)
close(errChan)
}()
// Collect results
for pr := range prsChan {
allPRs = append(allPRs, pr)
}
// Check for errors
for err := range errChan {
// Log error but continue processing
fmt.Fprintf(os.Stderr, "Warning: %v\n", err)
}
if resp.NextPage == 0 {
break
}
opts.Page = resp.NextPage
}
return allPRs, nil
}
// FetchAllMergedPRsGraphQL fetches all merged PRs with their commits using GraphQL
// This is the ultimate optimization - gets everything in ~5-10 API calls
func (c *Client) FetchAllMergedPRsGraphQL(since time.Time) ([]*PR, error) {
ctx := context.Background()
var allPRs []*PR
var after *string
totalFetched := 0
for {
// Prepare variables
variables := map[string]interface{}{
"owner": graphql.String(c.owner),
"repo": graphql.String(c.repo),
"after": (*graphql.String)(after),
}
// Execute GraphQL query
var query PullRequestsQuery
err := c.graphqlClient.Query(ctx, &query, variables)
if err != nil {
return allPRs, fmt.Errorf("GraphQL query failed: %w", err)
}
prs := query.Repository.PullRequests.Nodes
fmt.Fprintf(os.Stderr, "Fetched %d PRs via GraphQL (page %d)\n", len(prs), (totalFetched/100)+1)
// Convert GraphQL PRs to our PR struct
for _, gqlPR := range prs {
// If we have a since filter, stop when we reach older PRs
if !since.IsZero() && gqlPR.MergedAt.Before(since) {
fmt.Fprintf(os.Stderr, "Reached PRs older than %s, stopping\n", since.Format("2006-01-02"))
return allPRs, nil
}
pr := &PR{
Number: gqlPR.Number,
Title: gqlPR.Title,
Body: gqlPR.Body,
URL: gqlPR.URL,
MergedAt: gqlPR.MergedAt,
Commits: make([]PRCommit, 0, len(gqlPR.Commits.Nodes)),
}
// Handle author - check if it's nil first
if gqlPR.Author != nil {
pr.Author = gqlPR.Author.Login
pr.AuthorURL = gqlPR.Author.URL
switch gqlPR.Author.Typename {
case "Bot":
pr.AuthorType = "bot"
case "Organization":
pr.AuthorType = "organization"
case "User":
pr.AuthorType = "user"
default:
pr.AuthorType = "user" // fallback
if gqlPR.Author.Typename != "" {
fmt.Fprintf(os.Stderr, "PR #%d: Unknown author typename '%s'\n", gqlPR.Number, gqlPR.Author.Typename)
}
}
} else {
// Author is nil - try to fetch from REST API as fallback
fmt.Fprintf(os.Stderr, "PR #%d: Author is nil in GraphQL response, fetching from REST API\n", gqlPR.Number)
// Fetch this specific PR from REST API
restPR, err := c.fetchSinglePR(ctx, gqlPR.Number)
if err == nil && restPR != nil && restPR.Author != "" {
pr.Author = restPR.Author
pr.AuthorURL = restPR.AuthorURL
pr.AuthorType = restPR.AuthorType
} else {
// Fallback if REST API also fails
pr.Author = "[unknown]"
pr.AuthorURL = ""
pr.AuthorType = "user"
}
}
// Convert commits
for _, commitNode := range gqlPR.Commits.Nodes {
commit := PRCommit{
SHA: commitNode.Commit.OID,
Message: strings.TrimSpace(commitNode.Commit.Message),
Author: commitNode.Commit.Author.Name,
}
pr.Commits = append(pr.Commits, commit)
}
allPRs = append(allPRs, pr)
}
totalFetched += len(prs)
// Check if we need to fetch more pages
if !query.Repository.PullRequests.PageInfo.HasNextPage {
break
}
after = &query.Repository.PullRequests.PageInfo.EndCursor
}
fmt.Fprintf(os.Stderr, "Total PRs fetched via GraphQL: %d\n", len(allPRs))
return allPRs, nil
}
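For illustration only: a hypothetical sketch calling the GraphQL fetch path directly with a 30-day cutoff. The token comes from GITHUB_TOKEN, the owner/repo values echo the fallback used by GetRepoInfo, and the snippet is assumed to live under cmd/generate_changelog since the package is internal.

package main

import (
	"fmt"
	"log"
	"os"
	"time"

	"github.com/danielmiessler/fabric/cmd/generate_changelog/internal/github"
)

// Hypothetical driver: fetch PRs merged in the last 30 days and print their titles.
func main() {
	client := github.NewClient(os.Getenv("GITHUB_TOKEN"), "danielmiessler", "fabric")
	since := time.Now().AddDate(0, 0, -30)
	prs, err := client.FetchAllMergedPRsGraphQL(since)
	if err != nil {
		log.Fatal(err)
	}
	for _, pr := range prs {
		fmt.Printf("#%d %s (merged %s)\n", pr.Number, pr.Title, pr.MergedAt.Format("2006-01-02"))
	}
}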


@@ -0,0 +1,57 @@
package github
import "time"
type PR struct {
Number int
Title string
Body string
Author string
AuthorURL string
AuthorType string // "user", "organization", or "bot"
URL string
MergedAt time.Time
Commits []PRCommit
MergeCommit string
}
type PRCommit struct {
SHA string
Message string
Author string
}
// GraphQL query structures for hasura client
type PullRequestsQuery struct {
Repository struct {
PullRequests struct {
PageInfo struct {
HasNextPage bool
EndCursor string
}
Nodes []struct {
Number int
Title string
Body string
URL string
MergedAt time.Time
Author *struct {
Typename string `graphql:"__typename"`
Login string `graphql:"login"`
URL string `graphql:"url"`
}
Commits struct {
Nodes []struct {
Commit struct {
OID string `graphql:"oid"`
Message string
Author struct {
Name string
}
}
}
} `graphql:"commits(first: 250)"`
}
} `graphql:"pullRequests(first: 100, after: $after, states: MERGED, orderBy: {field: UPDATED_AT, direction: DESC})"`
} `graphql:"repository(owner: $owner, name: $repo)"`
}


@@ -0,0 +1,84 @@
package main
import (
"fmt"
"os"
"path/filepath"
"github.com/danielmiessler/fabric/cmd/generate_changelog/internal/changelog"
"github.com/danielmiessler/fabric/cmd/generate_changelog/internal/config"
"github.com/joho/godotenv"
"github.com/spf13/cobra"
)
var (
cfg = &config.Config{}
)
var rootCmd = &cobra.Command{
Use: "generate_changelog",
Short: "Generate changelog from git history and GitHub PRs",
Long: `A high-performance changelog generator that walks git history,
collects version information and pull requests, and generates a
comprehensive changelog in markdown format.`,
RunE: run,
}
func init() {
rootCmd.Flags().StringVarP(&cfg.RepoPath, "repo", "r", ".", "Repository path")
rootCmd.Flags().StringVarP(&cfg.OutputFile, "output", "o", "", "Output file (default: stdout)")
rootCmd.Flags().IntVarP(&cfg.Limit, "limit", "l", 0, "Limit number of versions (0 = all)")
rootCmd.Flags().StringVarP(&cfg.Version, "version", "v", "", "Generate changelog for specific version")
rootCmd.Flags().BoolVar(&cfg.SaveData, "save-data", false, "Save version data to JSON for debugging")
rootCmd.Flags().StringVar(&cfg.CacheFile, "cache", "./cmd/generate_changelog/changelog.db", "Cache database file")
rootCmd.Flags().BoolVar(&cfg.NoCache, "no-cache", false, "Disable cache usage")
rootCmd.Flags().BoolVar(&cfg.RebuildCache, "rebuild-cache", false, "Rebuild cache from scratch")
rootCmd.Flags().StringVar(&cfg.GitHubToken, "token", "", "GitHub API token (or set GITHUB_TOKEN env var)")
rootCmd.Flags().BoolVar(&cfg.ForcePRSync, "force-pr-sync", false, "Force a full PR sync from GitHub (ignores cache age)")
rootCmd.Flags().BoolVar(&cfg.EnableAISummary, "ai-summarize", false, "Generate AI-enhanced summaries using Fabric")
}
func run(cmd *cobra.Command, args []string) error {
if cfg.GitHubToken == "" {
cfg.GitHubToken = os.Getenv("GITHUB_TOKEN")
}
generator, err := changelog.New(cfg)
if err != nil {
return fmt.Errorf("failed to create changelog generator: %w", err)
}
output, err := generator.Generate()
if err != nil {
return fmt.Errorf("failed to generate changelog: %w", err)
}
if cfg.OutputFile != "" {
if err := os.WriteFile(cfg.OutputFile, []byte(output), 0644); err != nil {
return fmt.Errorf("failed to write output file: %w", err)
}
fmt.Printf("Changelog written to %s\n", cfg.OutputFile)
} else {
fmt.Print(output)
}
return nil
}
func main() {
// Load .env file from the same directory as the binary
if exePath, err := os.Executable(); err == nil {
envPath := filepath.Join(filepath.Dir(exePath), ".env")
if _, err := os.Stat(envPath); err == nil {
// .env file exists, load it
if err := godotenv.Load(envPath); err != nil {
fmt.Fprintf(os.Stderr, "Warning: Failed to load .env file: %v\n", err)
}
}
}
if err := rootCmd.Execute(); err != nil {
fmt.Fprintf(os.Stderr, "Error: %v\n", err)
os.Exit(1)
}
}


@@ -1,26 +0,0 @@
package common
import (
goopenai "github.com/sashabaranov/go-openai"
"github.com/stretchr/testify/assert"
"testing"
)
func TestNormalizeMessages(t *testing.T) {
msgs := []*goopenai.ChatCompletionMessage{
{Role: goopenai.ChatMessageRoleUser, Content: "Hello"},
{Role: goopenai.ChatMessageRoleAssistant, Content: "Hi there!"},
{Role: goopenai.ChatMessageRoleUser, Content: ""},
{Role: goopenai.ChatMessageRoleUser, Content: ""},
{Role: goopenai.ChatMessageRoleUser, Content: "How are you?"},
}
expected := []*goopenai.ChatCompletionMessage{
{Role: goopenai.ChatMessageRoleUser, Content: "Hello"},
{Role: goopenai.ChatMessageRoleAssistant, Content: "Hi there!"},
{Role: goopenai.ChatMessageRoleUser, Content: "How are you?"},
}
actual := NormalizeMessages(msgs, "default")
assert.Equal(t, expected, actual)
}

completions/_fabric Normal file

@@ -0,0 +1,121 @@
#compdef fabric
# Zsh completion for fabric CLI
# Place this file in a directory in your $fpath (e.g. /usr/local/share/zsh/site-functions)
_fabric_patterns() {
local -a patterns
patterns=(${(f)"$(fabric --listpatterns --shell-complete-list 2>/dev/null)"})
compadd -X "Patterns:" ${patterns}
}
_fabric_models() {
local -a models
models=(${(f)"$(fabric --listmodels --shell-complete-list 2>/dev/null)"})
compadd -X "Models:" ${models}
}
_fabric_contexts() {
local -a contexts
contexts=(${(f)"$(fabric --listcontexts --shell-complete-list 2>/dev/null)"})
compadd -X "Contexts:" ${contexts}
}
_fabric_sessions() {
local -a sessions
sessions=(${(f)"$(fabric --listsessions --shell-complete-list 2>/dev/null)"})
compadd -X "Sessions:" ${sessions}
}
_fabric_strategies() {
local -a strategies
strategies=(${(f)"$(fabric --liststrategies --shell-complete-list 2>/dev/null)"})
compadd -X "Strategies:" ${strategies}
}
_fabric_extensions() {
local -a extensions
extensions=(${(f)"$(fabric --listextensions --shell-complete-list 2>/dev/null)"})
compadd -X "Extensions:" ${extensions}
}
_fabric_vendors() {
local -a vendors
vendors=(${(f)"$(fabric --listvendors 2>/dev/null)"})
compadd -X "Vendors:" ${vendors}
}
_fabric() {
local curcontext="$curcontext" state line
typeset -A opt_args
_arguments -C \
'(-p --pattern)'{-p,--pattern}'[Choose a pattern from the available patterns]:pattern:_fabric_patterns' \
'(-v --variable)'{-v,--variable}'[Values for pattern variables, e.g. -v=#role:expert -v=#points:30]:variable:' \
'(-C --context)'{-C,--context}'[Choose a context from the available contexts]:context:_fabric_contexts' \
'(--session)--session[Choose a session from the available sessions]:session:_fabric_sessions' \
'(-a --attachment)'{-a,--attachment}'[Attachment path or URL (e.g. for OpenAI image recognition messages)]:file:_files' \
'(-S --setup)'{-S,--setup}'[Run setup for all reconfigurable parts of fabric]' \
'(-t --temperature)'{-t,--temperature}'[Set temperature (default: 0.7)]:temperature:' \
'(-T --topp)'{-T,--topp}'[Set top P (default: 0.9)]:topp:' \
'(-s --stream)'{-s,--stream}'[Stream]' \
'(-P --presencepenalty)'{-P,--presencepenalty}'[Set presence penalty (default: 0.0)]:presence penalty:' \
'(-r --raw)'{-r,--raw}'[Use the defaults of the model without sending chat options]' \
'(-F --frequencypenalty)'{-F,--frequencypenalty}'[Set frequency penalty (default: 0.0)]:frequency penalty:' \
'(-l --listpatterns)'{-l,--listpatterns}'[List all patterns]' \
'(-L --listmodels)'{-L,--listmodels}'[List all available models]' \
'(-x --listcontexts)'{-x,--listcontexts}'[List all contexts]' \
'(-X --listsessions)'{-X,--listsessions}'[List all sessions]' \
'(-U --updatepatterns)'{-U,--updatepatterns}'[Update patterns]' \
'(-c --copy)'{-c,--copy}'[Copy to clipboard]' \
'(-m --model)'{-m,--model}'[Choose model]:model:_fabric_models' \
'(--modelContextLength)--modelContextLength[Model context length (only affects ollama)]:length:' \
'(-o --output)'{-o,--output}'[Output to file]:file:_files' \
'(--output-session)--output-session[Output the entire session to the output file]' \
'(-n --latest)'{-n,--latest}'[Number of latest patterns to list (default: 0)]:number:' \
'(-d --changeDefaultModel)'{-d,--changeDefaultModel}'[Change default model]' \
'(-y --youtube)'{-y,--youtube}'[YouTube video or play list URL]:youtube url:' \
'(--playlist)--playlist[Prefer playlist over video if both ids are present in the URL]' \
'(--transcript)--transcript[Grab transcript from YouTube video and send to chat]' \
'(--transcript-with-timestamps)--transcript-with-timestamps[Grab transcript from YouTube video with timestamps]' \
'(--comments)--comments[Grab comments from YouTube video and send to chat]' \
'(--metadata)--metadata[Output video metadata]' \
'(-g --language)'{-g,--language}'[Specify the Language Code for the chat, e.g. -g=en -g=zh]:language:' \
'(-u --scrape_url)'{-u,--scrape_url}'[Scrape website URL to markdown using Jina AI]:url:' \
'(-q --scrape_question)'{-q,--scrape_question}'[Search question using Jina AI]:question:' \
'(-e --seed)'{-e,--seed}'[Seed to be used for LMM generation]:seed:' \
'(-w --wipecontext)'{-w,--wipecontext}'[Wipe context]:context:_fabric_contexts' \
'(-W --wipesession)'{-W,--wipesession}'[Wipe session]:session:_fabric_sessions' \
'(--printcontext)--printcontext[Print context]:context:_fabric_contexts' \
'(--printsession)--printsession[Print session]:session:_fabric_sessions' \
'(--readability)--readability[Convert HTML input into a clean, readable view]' \
'(--input-has-vars)--input-has-vars[Apply variables to user input]' \
'(--dry-run)--dry-run[Show what would be sent to the model without actually sending it]' \
'(--serve)--serve[Serve the Fabric Rest API]' \
'(--serveOllama)--serveOllama[Serve the Fabric Rest API with ollama endpoints]' \
'(--address)--address[The address to bind the REST API (default: :8080)]:address:' \
'(--api-key)--api-key[API key used to secure server routes]:api-key:' \
'(--config)--config[Path to YAML config file]:config file:_files -g "*.yaml *.yml"' \
'(--version)--version[Print current version]' \
'(--search)--search[Enable web search tool for supported models (Anthropic, OpenAI)]' \
'(--search-location)--search-location[Set location for web search results]:location:' \
'(--image-file)--image-file[Save generated image to specified file path]:image file:_files -g "*.png *.webp *.jpeg *.jpg"' \
'(--image-size)--image-size[Image dimensions]:size:(1024x1024 1536x1024 1024x1536 auto)' \
'(--image-quality)--image-quality[Image quality]:quality:(low medium high auto)' \
'(--image-compression)--image-compression[Compression level 0-100 for JPEG/WebP formats]:compression:' \
'(--image-background)--image-background[Background type]:background:(opaque transparent)' \
'(--listextensions)--listextensions[List all registered extensions]' \
'(--addextension)--addextension[Register a new extension from config file path]:config file:_files -g "*.yaml *.yml"' \
'(--rmextension)--rmextension[Remove a registered extension by name]:extension:_fabric_extensions' \
'(--strategy)--strategy[Choose a strategy from the available strategies]:strategy:_fabric_strategies' \
'(--liststrategies)--liststrategies[List all strategies]' \
'(--listvendors)--listvendors[List all vendors]' \
'(--shell-complete-list)--shell-complete-list[Output raw list without headers/formatting (for shell completion)]' \
'(--suppress-think)--suppress-think[Suppress text enclosed in thinking tags]' \
'(--think-start-tag)--think-start-tag[Start tag for thinking sections (default: <think>)]:start tag:' \
'(--think-end-tag)--think-end-tag[End tag for thinking sections (default: </think>)]:end tag:' \
'(-h --help)'{-h,--help}'[Show this help message]' \
'*:arguments:'
}
_fabric "$@"

completions/fabric.bash Normal file

@@ -0,0 +1,103 @@
# Bash completion for fabric CLI
#
# Installation:
# 1. Place this file in a standard completion directory, e.g.,
# - /etc/bash_completion.d/
# - /usr/local/etc/bash_completion.d/
# - ~/.local/share/bash-completion/completions/
# 2. Or, source it directly in your ~/.bashrc or ~/.bash_profile:
# source /path/to/fabric.bash
_fabric() {
local cur prev words cword
_get_comp_words_by_ref -n : cur prev words cword
# Define all possible options/flags
local opts="--pattern -p --variable -v --context -C --session --attachment -a --setup -S --temperature -t --topp -T --stream -s --presencepenalty -P --raw -r --frequencypenalty -F --listpatterns -l --listmodels -L --listcontexts -x --listsessions -X --updatepatterns -U --copy -c --model -m --modelContextLength --output -o --output-session --latest -n --changeDefaultModel -d --youtube -y --playlist --transcript --transcript-with-timestamps --comments --metadata --language -g --scrape_url -u --scrape_question -q --seed -e --wipecontext -w --wipesession -W --printcontext --printsession --readability --input-has-vars --dry-run --serve --serveOllama --address --api-key --config --search --search-location --image-file --image-size --image-quality --image-compression --image-background --suppress-think --think-start-tag --think-end-tag --version --listextensions --addextension --rmextension --strategy --liststrategies --listvendors --shell-complete-list --help -h"
# Helper function for dynamic completions
_fabric_get_list() {
fabric "$1" --shell-complete-list 2>/dev/null
}
# Handle completions based on the previous word
case "${prev}" in
-p | --pattern)
COMPREPLY=($(compgen -W "$(_fabric_get_list --listpatterns)" -- "${cur}"))
return 0
;;
-C | --context)
COMPREPLY=($(compgen -W "$(_fabric_get_list --listcontexts)" -- "${cur}"))
return 0
;;
--session)
COMPREPLY=($(compgen -W "$(_fabric_get_list --listsessions)" -- "${cur}"))
return 0
;;
-m | --model)
COMPREPLY=($(compgen -W "$(_fabric_get_list --listmodels)" -- "${cur}"))
return 0
;;
-w | --wipecontext)
COMPREPLY=($(compgen -W "$(_fabric_get_list --listcontexts)" -- "${cur}"))
return 0
;;
-W | --wipesession)
COMPREPLY=($(compgen -W "$(_fabric_get_list --listsessions)" -- "${cur}"))
return 0
;;
--printcontext)
COMPREPLY=($(compgen -W "$(_fabric_get_list --listcontexts)" -- "${cur}"))
return 0
;;
--printsession)
COMPREPLY=($(compgen -W "$(_fabric_get_list --listsessions)" -- "${cur}"))
return 0
;;
--rmextension)
COMPREPLY=($(compgen -W "$(_fabric_get_list --listextensions)" -- "${cur}"))
return 0
;;
--strategy)
COMPREPLY=($(compgen -W "$(_fabric_get_list --liststrategies)" -- "${cur}"))
return 0
;;
# Options requiring file/directory paths
-a | --attachment | -o | --output | --config | --addextension | --image-file)
_filedir
return 0
;;
# Image generation options with specific values
--image-size)
COMPREPLY=($(compgen -W "1024x1024 1536x1024 1024x1536 auto" -- "$cur"))
return 0
;;
--image-quality)
COMPREPLY=($(compgen -W "low medium high auto" -- "$cur"))
return 0
;;
--image-background)
COMPREPLY=($(compgen -W "opaque transparent" -- "$cur"))
return 0
;;
# Options requiring simple arguments (no specific completion logic here)
-v | --variable | -t | --temperature | -T | --topp | -P | --presencepenalty | -F | --frequencypenalty | --modelContextLength | -n | --latest | -y | --youtube | -g | --language | -u | --scrape_url | -q | --scrape_question | -e | --seed | --address | --api-key | --search-location | --image-compression | --think-start-tag | --think-end-tag)
# No specific completion suggestions, user types the value
return 0
;;
esac
# If the current word starts with '-', suggest options
if [[ "${cur}" == -* ]]; then
COMPREPLY=($(compgen -W "${opts}" -- "${cur}"))
return 0
fi
# Default: complete files/directories if no other rule matches
# _filedir
# Or provide no completions if it's not an option or argument following a known flag
COMPREPLY=()
}
complete -F _fabric fabric

completions/fabric.fish Executable file

@@ -0,0 +1,104 @@
# Fish shell completion for fabric CLI
#
# Installation:
# Copy this file to ~/.config/fish/completions/fabric.fish
# or run:
# mkdir -p ~/.config/fish/completions
# cp completions/fabric.fish ~/.config/fish/completions/
# Helper functions for dynamic completions
function __fabric_get_patterns
fabric --listpatterns --shell-complete-list 2>/dev/null
end
function __fabric_get_models
fabric --listmodels --shell-complete-list 2>/dev/null
end
function __fabric_get_contexts
fabric --listcontexts --shell-complete-list 2>/dev/null
end
function __fabric_get_sessions
fabric --listsessions --shell-complete-list 2>/dev/null
end
function __fabric_get_strategies
fabric --liststrategies --shell-complete-list 2>/dev/null
end
function __fabric_get_extensions
fabric --listextensions --shell-complete-list 2>/dev/null
end
# Disable default file completion; the specific completions are defined below
complete -c fabric -f
# Flag completions with arguments
complete -c fabric -s p -l pattern -d "Choose a pattern from the available patterns" -a "(__fabric_get_patterns)"
complete -c fabric -s v -l variable -d "Values for pattern variables, e.g. -v=#role:expert -v=#points:30"
complete -c fabric -s C -l context -d "Choose a context from the available contexts" -a "(__fabric_get_contexts)"
complete -c fabric -l session -d "Choose a session from the available sessions" -a "(__fabric_get_sessions)"
complete -c fabric -s a -l attachment -d "Attachment path or URL (e.g. for OpenAI image recognition messages)" -r
complete -c fabric -s t -l temperature -d "Set temperature (default: 0.7)"
complete -c fabric -s T -l topp -d "Set top P (default: 0.9)"
complete -c fabric -s P -l presencepenalty -d "Set presence penalty (default: 0.0)"
complete -c fabric -s F -l frequencypenalty -d "Set frequency penalty (default: 0.0)"
complete -c fabric -s m -l model -d "Choose model" -a "(__fabric_get_models)"
complete -c fabric -l modelContextLength -d "Model context length (only affects ollama)"
complete -c fabric -s o -l output -d "Output to file" -r
complete -c fabric -s n -l latest -d "Number of latest patterns to list (default: 0)"
complete -c fabric -s y -l youtube -d "YouTube video or play list URL to grab transcript, comments from it"
complete -c fabric -s g -l language -d "Specify the Language Code for the chat, e.g. -g=en -g=zh"
complete -c fabric -s u -l scrape_url -d "Scrape website URL to markdown using Jina AI"
complete -c fabric -s q -l scrape_question -d "Search question using Jina AI"
complete -c fabric -s e -l seed -d "Seed to be used for LMM generation"
complete -c fabric -s w -l wipecontext -d "Wipe context" -a "(__fabric_get_contexts)"
complete -c fabric -s W -l wipesession -d "Wipe session" -a "(__fabric_get_sessions)"
complete -c fabric -l printcontext -d "Print context" -a "(__fabric_get_contexts)"
complete -c fabric -l printsession -d "Print session" -a "(__fabric_get_sessions)"
complete -c fabric -l address -d "The address to bind the REST API (default: :8080)"
complete -c fabric -l api-key -d "API key used to secure server routes"
complete -c fabric -l config -d "Path to YAML config file" -r -a "*.yaml *.yml"
complete -c fabric -l search-location -d "Set location for web search results (e.g., 'America/Los_Angeles')"
complete -c fabric -l image-file -d "Save generated image to specified file path (e.g., 'output.png')" -r -a "*.png *.webp *.jpeg *.jpg"
complete -c fabric -l image-size -d "Image dimensions: 1024x1024, 1536x1024, 1024x1536, auto (default: auto)" -a "1024x1024 1536x1024 1024x1536 auto"
complete -c fabric -l image-quality -d "Image quality: low, medium, high, auto (default: auto)" -a "low medium high auto"
complete -c fabric -l image-compression -d "Compression level 0-100 for JPEG/WebP formats (default: not set)" -r
complete -c fabric -l image-background -d "Background type: opaque, transparent (default: opaque, only for PNG/WebP)" -a "opaque transparent"
complete -c fabric -l addextension -d "Register a new extension from config file path" -r -a "*.yaml *.yml"
complete -c fabric -l rmextension -d "Remove a registered extension by name" -a "(__fabric_get_extensions)"
complete -c fabric -l strategy -d "Choose a strategy from the available strategies" -a "(__fabric_get_strategies)"
complete -c fabric -l think-start-tag -d "Start tag for thinking sections (default: <think>)"
complete -c fabric -l think-end-tag -d "End tag for thinking sections (default: </think>)"
# Boolean flags (no arguments)
complete -c fabric -s S -l setup -d "Run setup for all reconfigurable parts of fabric"
complete -c fabric -s s -l stream -d "Stream"
complete -c fabric -s r -l raw -d "Use the defaults of the model without sending chat options"
complete -c fabric -s l -l listpatterns -d "List all patterns"
complete -c fabric -s L -l listmodels -d "List all available models"
complete -c fabric -s x -l listcontexts -d "List all contexts"
complete -c fabric -s X -l listsessions -d "List all sessions"
complete -c fabric -s U -l updatepatterns -d "Update patterns"
complete -c fabric -s c -l copy -d "Copy to clipboard"
complete -c fabric -l output-session -d "Output the entire session to the output file"
complete -c fabric -s d -l changeDefaultModel -d "Change default model"
complete -c fabric -l playlist -d "Prefer playlist over video if both ids are present in the URL"
complete -c fabric -l transcript -d "Grab transcript from YouTube video and send to chat"
complete -c fabric -l transcript-with-timestamps -d "Grab transcript from YouTube video with timestamps"
complete -c fabric -l comments -d "Grab comments from YouTube video and send to chat"
complete -c fabric -l metadata -d "Output video metadata"
complete -c fabric -l readability -d "Convert HTML input into a clean, readable view"
complete -c fabric -l input-has-vars -d "Apply variables to user input"
complete -c fabric -l dry-run -d "Show what would be sent to the model without actually sending it"
complete -c fabric -l search -d "Enable web search tool for supported models (Anthropic, OpenAI)"
complete -c fabric -l serve -d "Serve the Fabric Rest API"
complete -c fabric -l serveOllama -d "Serve the Fabric Rest API with ollama endpoints"
complete -c fabric -l version -d "Print current version"
complete -c fabric -l listextensions -d "List all registered extensions"
complete -c fabric -l liststrategies -d "List all strategies"
complete -c fabric -l listvendors -d "List all vendors"
complete -c fabric -l shell-complete-list -d "Output raw list without headers/formatting (for shell completion)"
complete -c fabric -l suppress-think -d "Suppress text enclosed in thinking tags"
complete -c fabric -s h -l help -d "Show this help message"
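The `--suppress-think`, `--think-start-tag`, and `--think-end-tag` options above make the suppression of "thinking" text configurable. As a rough illustration only (the helper name, the regex approach, and the standalone package below are assumptions for this sketch, not Fabric's actual implementation), stripping everything between a configurable start and end tag could look like this in Go:

```go
package main

import (
	"fmt"
	"regexp"
)

// stripThinkBlocks removes any text enclosed between the given start and end
// tags, including the tags themselves. The tags are treated as literal strings.
func stripThinkBlocks(s, startTag, endTag string) string {
	re := regexp.MustCompile("(?s)" + regexp.QuoteMeta(startTag) + ".*?" + regexp.QuoteMeta(endTag))
	return re.ReplaceAllString(s, "")
}

func main() {
	out := "<think>working through the question...</think>The answer is 42."
	// Uses the default tags shown in the completions above.
	fmt.Println(stripThinkBlocks(out, "<think>", "</think>")) // prints: The answer is 42.
}
```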

View File

@@ -1,173 +0,0 @@
package core

import (
    "context"
    "fmt"
    "strings"

    goopenai "github.com/sashabaranov/go-openai"

    "github.com/danielmiessler/fabric/common"
    "github.com/danielmiessler/fabric/plugins/ai"
    "github.com/danielmiessler/fabric/plugins/db/fsdb"
    "github.com/danielmiessler/fabric/plugins/template"
)

const NoSessionPatternUserMessages = "no session, pattern or user messages provided"

type Chatter struct {
    db *fsdb.Db
    Stream bool
    DryRun bool
    model string
    modelContextLength int
    vendor ai.Vendor
}

func (o *Chatter) Send(request *common.ChatRequest, opts *common.ChatOptions) (session *fsdb.Session, err error) {
    if session, err = o.BuildSession(request, opts.Raw); err != nil {
        return
    }
    vendorMessages := session.GetVendorMessages()
    if len(vendorMessages) == 0 {
        if session.Name != "" {
            err = o.db.Sessions.SaveSession(session)
        }
        err = fmt.Errorf("no messages provided")
        return
    }
    if opts.Model == "" {
        opts.Model = o.model
    }
    if opts.ModelContextLength == 0 {
        opts.ModelContextLength = o.modelContextLength
    }
    message := ""
    if o.Stream {
        channel := make(chan string)
        go func() {
            if streamErr := o.vendor.SendStream(session.GetVendorMessages(), opts, channel); streamErr != nil {
                channel <- streamErr.Error()
            }
        }()
        for response := range channel {
            message += response
            fmt.Print(response)
        }
    } else {
        if message, err = o.vendor.Send(context.Background(), session.GetVendorMessages(), opts); err != nil {
            return
        }
    }
    if message == "" {
        session = nil
        err = fmt.Errorf("empty response")
        return
    }
    session.Append(&goopenai.ChatCompletionMessage{Role: goopenai.ChatMessageRoleAssistant, Content: message})
    if session.Name != "" {
        err = o.db.Sessions.SaveSession(session)
    }
    return
}

func (o *Chatter) BuildSession(request *common.ChatRequest, raw bool) (session *fsdb.Session, err error) {
    // If a session name is provided, retrieve it from the database
    if request.SessionName != "" {
        var sess *fsdb.Session
        if sess, err = o.db.Sessions.Get(request.SessionName); err != nil {
            err = fmt.Errorf("could not find session %s: %v", request.SessionName, err)
            return
        }
        session = sess
    } else {
        session = &fsdb.Session{}
    }
    if request.Meta != "" {
        session.Append(&goopenai.ChatCompletionMessage{Role: common.ChatMessageRoleMeta, Content: request.Meta})
    }
    // if a context name is provided, retrieve it from the database
    var contextContent string
    if request.ContextName != "" {
        var ctx *fsdb.Context
        if ctx, err = o.db.Contexts.Get(request.ContextName); err != nil {
            err = fmt.Errorf("could not find context %s: %v", request.ContextName, err)
            return
        }
        contextContent = ctx.Content
    }
    // Process any template variables in the message content (user input)
    // Double curly braces {{variable}} indicate template substitution
    // Ensure we have a message before processing, otherwise we'll get an error when we pass to pattern.go
    if request.Message == nil {
        request.Message = &goopenai.ChatCompletionMessage{
            Role:    goopenai.ChatMessageRoleUser,
            Content: " ",
        }
    }
    // Now we know request.Message is not nil, process template variables
    if request.InputHasVars {
        request.Message.Content, err = template.ApplyTemplate(request.Message.Content, request.PatternVariables, "")
        if err != nil {
            return nil, err
        }
    }
    var patternContent string
    if request.PatternName != "" {
        pattern, err := o.db.Patterns.GetApplyVariables(request.PatternName, request.PatternVariables, request.Message.Content)
        // pattern will now contain user input, and all variables will be resolved, or errored
        if err != nil {
            return nil, fmt.Errorf("could not get pattern %s: %v", request.PatternName, err)
        }
        patternContent = pattern.Pattern
    }
    systemMessage := strings.TrimSpace(contextContent) + strings.TrimSpace(patternContent)
    if request.Language != "" {
        systemMessage = fmt.Sprintf("%s. Please use the language '%s' for the output.", systemMessage, request.Language)
    }
    if raw {
        if request.Message != nil {
            if systemMessage != "" {
                request.Message.Content = systemMessage
                // system contains pattern which contains user input
            }
        } else {
            if systemMessage != "" {
                request.Message = &goopenai.ChatCompletionMessage{Role: goopenai.ChatMessageRoleSystem, Content: systemMessage}
            }
        }
    } else {
        if systemMessage != "" {
            session.Append(&goopenai.ChatCompletionMessage{Role: goopenai.ChatMessageRoleSystem, Content: systemMessage})
        }
    }
    if request.Message != nil {
        session.Append(request.Message)
    }
    if session.IsEmpty() {
        session = nil
        err = fmt.Errorf(NoSessionPatternUserMessages)
    }
    return
}
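For orientation, here is a minimal sketch of how a caller might drive the Send API defined in this (now removed) file. The pattern name, model string, and the assumption that a configured *Chatter is already available are illustrative placeholders; only the field and method names visible above are taken from the file itself.

```go
package core

import (
	"fmt"
	"log"

	goopenai "github.com/sashabaranov/go-openai"

	"github.com/danielmiessler/fabric/common"
)

// runPattern sketches one plausible call sequence against the Chatter type shown
// above. Constructing a Chatter (db, vendor, model) is not covered by this file,
// so it is assumed to happen elsewhere and the result is passed in here.
func runPattern(chatter *Chatter) {
	request := &common.ChatRequest{
		PatternName: "summarize", // hypothetical pattern name
		Message: &goopenai.ChatCompletionMessage{
			Role:    goopenai.ChatMessageRoleUser,
			Content: "Summarize the following text ...",
		},
	}
	opts := &common.ChatOptions{Model: "gpt-4o"} // placeholder model name

	session, err := chatter.Send(request, opts)
	if err != nil {
		log.Fatal(err)
	}

	// Send appends the assistant reply to the session before returning it.
	fmt.Printf("session now holds %d vendor messages\n", len(session.GetVendorMessages()))
}
```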

File diff suppressed because it is too large

View File

@@ -0,0 +1,20 @@
# IDENTITY
You are an AI with a 3,129 IQ that specializes in discerning the true nature and goals of a piece of legislation.
It captures all the overt things, but also the covert ones, and points out gotchas as part of its summary of the bill.
# STEPS
1. Read the entire bill 37 times using different perspectives.
2. Map out all the stuff it's trying to do on a 10 KM by 10 KM mental whiteboard.
3. Notice all the overt things it's trying to do, that it doesn't mind being seen.
4. Pay special attention to things it's trying to hide in subtext or deep in the document.
# OUTPUT
1. Give the metadata for the bill, such as who proposed it, when, etc.
2. Create a 24-word summary of the bill and what it's trying to accomplish.
3. Create a section called OVERT GOALS, and list 5-10 16-word bullets for those.
4. Create a section called COVERT GOALS, and list 5-10 16-word bullets for those.
5. Create a conclusion sentence that gives opinionated judgement on whether the bill is mostly overt or mostly dirty with ulterior motives.

View File

@@ -0,0 +1,20 @@
# IDENTITY
You are an AI with a 3,129 IQ that specializes in discerning the true nature and goals of a piece of legislation.
It captures all the overt things, but also the covert ones, and points out gotchas as part of its summary of the bill.
# STEPS
1. Read the entire bill 37 times using different perspectives.
2. Map out all the stuff it's trying to do on a 10 KM by 10 KM mental whiteboard.
3. Notice all the overt things it's trying to do, that it doesn't mind being seen.
4. Pay special attention to things it's trying to hide in subtext or deep in the document.
# OUTPUT
1. Give the metadata for the bill, such as who proposed it, when, etc.
2. Create a 16-word summary of the bill and what it's trying to accomplish.
3. Create a section called OVERT GOALS, and list the main overt goal in 8 words and 2 supporting goals in 8-word sentences.
4. Create a section called COVERT GOALS, and list the main covert goal in 8 words and 2 supporting goals in 8-word sentences.
5. Create a 16-word conclusion sentence that gives opinionated judgement on whether the bill is mostly overt or mostly dirty with ulterior motives.

View File

@@ -22,19 +22,20 @@ Take a deep breath and think step by step about how to best accomplish this goal
This must be under the heading "INSIGHTFULNESS SCORE (0 = not very interesting and insightful to 10 = very interesting and insightful)".
- A rating of how emotional the debate was from 0 (very calm) to 5 (very emotional). This must be under the heading "EMOTIONALITY SCORE (0 (very calm) to 5 (very emotional))".
- A list of the participants of the debate and a score of their emotionality from 0 (very calm) to 5 (very emotional). This must be under the heading "PARTICIPANTS".
- A list of arguments attributed to participants with names and quotes. If possible, this should include external references that disprove or back up their claims.
- A list of arguments attributed to participants with names and quotes. Each argument summary must be EXACTLY 16 words. If possible, this should include external references that disprove or back up their claims.
It is IMPORTANT that these references are from trusted and verifiable sources that can be easily accessed. These sources have to BE REAL and NOT MADE UP. This must be under the heading "ARGUMENTS".
If possible, provide an objective assessment of the truth of these arguments. If you assess the truth of the argument, provide some sources that back up your assessment. The material you provide should be from reliable, verifiable, and trustworthy sources. DO NOT MAKE UP SOURCES.
- A list of agreements the participants have reached, attributed with names and quotes. This must be under the heading "AGREEMENTS".
- A list of disagreements the participants were unable to resolve and the reasons why they remained unresolved, attributed with names and quotes. This must be under the heading "DISAGREEMENTS".
- A list of possible misunderstandings and why they may have occurred, attributed with names and quotes. This must be under the heading "POSSIBLE MISUNDERSTANDINGS".
- A list of learnings from the debate. This must be under the heading "LEARNINGS".
- A list of takeaways that highlight ideas to think about, sources to explore, and actionable items. This must be under the heading "TAKEAWAYS".
- A list of agreements the participants have reached. Each agreement summary must be EXACTLY 16 words, followed by names and quotes. This must be under the heading "AGREEMENTS".
- A list of disagreements the participants were unable to resolve. Each disagreement summary must be EXACTLY 16 words, followed by names and quotes explaining why they remained unresolved. This must be under the heading "DISAGREEMENTS".
- A list of possible misunderstandings. Each misunderstanding summary must be EXACTLY 16 words, followed by names and quotes explaining why they may have occurred. This must be under the heading "POSSIBLE MISUNDERSTANDINGS".
- A list of learnings from the debate. Each learning must be EXACTLY 16 words. This must be under the heading "LEARNINGS".
- A list of takeaways that highlight ideas to think about, sources to explore, and actionable items. Each takeaway must be EXACTLY 16 words. This must be under the heading "TAKEAWAYS".
# OUTPUT INSTRUCTIONS
- Output all sections above.
- Use Markdown to structure your output.
- Do not use any markdown formatting (no asterisks, no bullet points, no headers).
- Keep all agreements, arguments, recommendations, learnings, and takeaways to EXACTLY 16 words each.
- When providing quotes, these quotes should clearly express the points you are using them for. If necessary, use multiple quotes.
# INPUT:

View File

@@ -24,7 +24,7 @@ Extract at least basic information about the malware.
Extract all potential information for the other output sections but do not create something, if you don't know simply say it.
Do not give warnings or notes; only output the requested sections.
You use bulleted lists for output, not numbered lists.
Do not repeat ideas, facts, or resources.
Do not repeat references.
Do not start items with the same opening words.
Ensure you follow ALL these instructions when creating your output.

View File

@@ -8,19 +8,19 @@ Take a deep breath and think step by step about how to best accomplish this goal
- Consume the entire paper and think deeply about it.
- Map out all the claims and implications on a virtual whiteboard in your mind.
- Map out all the claims and implications on a giant virtual whiteboard in your mind.
# OUTPUT
- Extract a summary of the paper and its conclusions into a 25-word sentence called SUMMARY.
- Extract a summary of the paper and its conclusions into a 16-word sentence called SUMMARY.
- Extract the list of authors in a section called AUTHORS.
- Extract the list of organizations the authors are associated with, e.g., which university they're at, in a section called AUTHOR ORGANIZATIONS.
- Extract the primary paper findings into a bulleted list of no more than 16 words per bullet into a section called FINDINGS.
- Extract the most surprising and interesting paper findings into 10 bullets of no more than 16 words per bullet into a section called FINDINGS.
- Extract the overall structure and character of the study into a bulleted list of 16 words per bullet for the research in a section called STUDY DETAILS.
- Extract the overall structure and character of the study into a bulleted list of 16 words per bullet for the research in a section called STUDY OVERVIEW.
- Extract the study quality by evaluating the following items in a section called STUDY QUALITY that has the following bulleted sub-sections:
@@ -76,7 +76,9 @@ END EXAMPLE CHART
- SUMMARY STATEMENT:
A final 25-word summary of the paper, its findings, and what we should do about it if it's true.
A final 16-word summary of the paper, its findings, and what we should do about it if it's true.
Also add 5 8-word bullets of how you got to that rating and conclusion / summary.
# RATING NOTES
@@ -84,21 +86,23 @@ A final 25-word summary of the paper, its findings, and what we should do about
- An A would be a paper that is novel, rigorous, empirical, and has no conflicts of interest.
- A paper could get an A if it's theoretical but everything else would have to be perfect.
- A paper could get an A if it's theoretical but everything else would have to be VERY good.
- The stronger the claims the stronger the evidence needs to be, as well as the transparency into the methodology. If the paper makes strong claims, but the evidence or transparency is weak, then the RIGOR score should be lowered.
- Remove at least 1 grade (and up to 2) for papers where compelling data is provided but it's not clear what exact tests were run and/or how to reproduce those tests.
- Do not relax this transparency requirement for papers that claim security reasons.
- If a paper does not clearly articulate its methodology in a way that's replicable, lower the RIGOR and overall score significantly.
- Do not relax this transparency requirement for papers that claim security reasons. If they didn't show their work, we have to assume the worst given the reproducibility crisis.
- Remove up to 1-3 grades for potential conflicts of interest indicated in the report.
# ANALYSIS INSTRUCTIONS
- Tend towards being more critical. Not overly so, but don't just fanboy over papers that are not rigorous or transparent.
# OUTPUT INSTRUCTIONS
- Output all sections above.
- After deeply considering all the sections above and how they interact with each other, output all sections above.
- Ensure the scoring looks closely at the reproducibility and transparency of the methodology, and that it doesn't give a pass to papers that don't provide the data or methodology for safety or other reasons.
@@ -108,7 +112,7 @@ Known [-2--------] Novel
Weak [-------8--] Rigorous
Theoretical [--3-------] Empirical
- For the findings and other analysis sections, write at the 9th-grade reading level. This means using short sentences and simple words/concepts to explain everything.
- For the findings and other analysis sections, and in fact all writing, write in the clear, approachable style of Paul Graham.
- Ensure there's a blank line between each bullet of output.
@@ -120,4 +124,3 @@ Theoretical [--3-------] Empirical
# INPUT:
INPUT:

View File

@@ -0,0 +1,122 @@
# IDENTITY and PURPOSE
You are a research paper analysis service focused on determining the primary findings of the paper and analyzing its scientific rigor and quality.
Take a deep breath and think step by step about how to best accomplish this goal using the following steps.
# STEPS
- Consume the entire paper and think deeply about it.
- Map out all the claims and implications on a virtual whiteboard in your mind.
# FACTORS TO CONSIDER
- Extract a summary of the paper and its conclusions into a 25-word sentence called SUMMARY.
- Extract the list of authors in a section called AUTHORS.
- Extract the list of organizations the authors are associated with, e.g., which university they're at, in a section called AUTHOR ORGANIZATIONS.
- Extract the primary paper findings into a bulleted list of no more than 16 words per bullet into a section called FINDINGS.
- Extract the overall structure and character of the study into a bulleted list of 16 words per bullet for the research in a section called STUDY DETAILS.
- Extract the study quality by evaluating the following items in a section called STUDY QUALITY that has the following bulleted sub-sections:
- STUDY DESIGN: (give a 15 word description, including the pertinent data and statistics.)
- SAMPLE SIZE: (give a 15 word description, including the pertinent data and statistics.)
- CONFIDENCE INTERVALS (give a 15 word description, including the pertinent data and statistics.)
- P-VALUE (give a 15 word description, including the pertinent data and statistics.)
- EFFECT SIZE (give a 15 word description, including the pertinent data and statistics.)
- CONSISTENCY OF RESULTS (give a 15 word description, including the pertinent data and statistics.)
- METHODOLOGY TRANSPARENCY (give a 15 word description of the methodology quality and documentation.)
- STUDY REPRODUCIBILITY (give a 15 word description, including how to fully reproduce the study.)
- DATA ANALYSIS METHOD (give a 15 word description, including the pertinent data and statistics.)
- Discuss any Conflicts of Interest in a section called CONFLICTS OF INTEREST. Rate the conflicts of interest as NONE DETECTED, LOW, MEDIUM, HIGH, or CRITICAL.
- Extract the researcher's analysis and interpretation in a section called RESEARCHER'S INTERPRETATION, in a 15-word sentence.
- In a section called PAPER QUALITY output the following sections:
- Novelty: 1 - 10 Rating, followed by a 15 word explanation for the rating.
- Rigor: 1 - 10 Rating, followed by a 15 word explanation for the rating.
- Empiricism: 1 - 10 Rating, followed by a 15 word explanation for the rating.
- Rating Chart: Create a chart like the one below that shows how the paper rates on all these dimensions.
- Known to Novel is how new and interesting and surprising the paper is on a scale of 1 - 10.
- Weak to Rigorous is how well the paper is supported by careful science, transparency, and methodology on a scale of 1 - 10.
- Theoretical to Empirical is how much the paper is based on purely speculative or theoretical ideas or actual data on a scale of 1 - 10. Note: Theoretical papers can still be rigorous and novel and should not be penalized overall for being Theoretical alone.
EXAMPLE CHART for 7, 5, 9 SCORES (fill in the actual scores):
Known [------7---] Novel
Weak [----5-----] Rigorous
Theoretical [--------9-] Empirical
END EXAMPLE CHART
- FINAL SCORE:
- A - F based on the scores above, conflicts of interest, and the overall quality of the paper. On a separate line, give a 15-word explanation for the grade.
- SUMMARY STATEMENT:
A final 25-word summary of the paper, its findings, and what we should do about it if it's true.
# RATING NOTES
- If the paper makes claims and presents stats but doesn't show how it arrived at these stats, then the Methodology Transparency would be low, and the RIGOR score should be lowered as well.
- An A would be a paper that is novel, rigorous, empirical, and has no conflicts of interest.
- A paper could get an A if it's theoretical but everything else would have to be perfect.
- The stronger the claims the stronger the evidence needs to be, as well as the transparency into the methodology. If the paper makes strong claims, but the evidence or transparency is weak, then the RIGOR score should be lowered.
- Remove at least 1 grade (and up to 2) for papers where compelling data is provided but it's not clear what exact tests were run and/or how to reproduce those tests.
- Do not relax this transparency requirement for papers that claim security reasons.
- If a paper does not clearly articulate its methodology in a way that's replicable, lower the RIGOR and overall score significantly.
- Remove up to 1-3 grades for potential conflicts of interest indicated in the report.
- Ensure the scoring looks closely at the reproducibility and transparency of the methodology, and that it doesn't give a pass to papers that don't provide the data or methodology for safety or other reasons.
# OUTPUT INSTRUCTIONS
Output only the following—not all the sections above.
Use Markdown bullets with dashes for the output (no bold or italics (asterisks)).
- The Title of the Paper, starting with the word TITLE:
- A 16-word sentence summarizing the paper's main claim, in the style of Paul Graham, starting with the word SUMMARY: which is not part of the 16 words.
- A 32-word summary of the implications stated or implied by the paper, in the style of Paul Graham, starting with the word IMPLICATIONS: which is not part of the 32 words.
- A 32-word summary of the primary recommendation stated or implied by the paper, in the style of Paul Graham, starting with the word RECOMMENDATION: which is not part of the 32 words.
- A 32-word bullet covering the authors of the paper and where they're out of, in the style of Paul Graham, starting with the word AUTHORS: which is not part of the 32 words.
- A 32-word bullet covering the methodology, including the type of research, how many studies it looked at, how many experiments, the p-value, etc. In other words the various aspects of the research that tell us the amount and type of rigor that went into the paper, in the style of Paul Graham, starting with the word METHODOLOGY: which is not part of the 32 words.
- A 32-word bullet covering any potential conflicts or bias that can logically be inferred by the authors, their affiliations, the methodology, or any other related information in the paper, in the style of Paul Graham, starting with the word CONFLICT/BIAS: which is not part of the 32 words.
- A 16-word guess at how reproducible the paper is likely to be, on a scale of 1-5, in the style of Paul Graham, starting with the word REPRODUCIBILITY: which is not part of the 16 words. Output the score as n/5, not spelled out. Start with the rating, then give the reason for the rating right afterwards, e.g.: "2/5 — The paper ...".
- In the markdown, don't use formatting like bold or italics. Make the output maximally readable in plain text.
- Do not output warnings or notes—just output the requested sections.
# INPUT:
INPUT:

View File

@@ -0,0 +1,24 @@
# IDENTITY and PURPOSE
You are an expert Terraform plan analyser. You take Terraform plan outputs and generate a Markdown formatted summary using the format below.
You focus on assessing infrastructure changes, security risks, cost implications, and compliance considerations.
## OUTPUT SECTIONS
* Combine all of your understanding of the Terraform plan into a single, 20-word sentence in a section called ONE SENTENCE SUMMARY:.
* Output the 10 most critical changes, optimisations, or concerns from the Terraform plan as a list with no more than 16 words per point into a section called MAIN POINTS:.
* Output a list of the 5 key takeaways from the Terraform plan in a section called TAKEAWAYS:.
## OUTPUT INSTRUCTIONS
* Create the output using the formatting above.
* You only output human-readable Markdown.
* Output numbered lists, not bullets.
* Do not output warnings or notes—just the requested sections.
* Do not repeat items in the output sections.
* Do not start items with the same opening words.
## INPUT
INPUT:

View File

@@ -29,7 +29,7 @@ Take a step back and think step-by-step about how to achieve the best possible r
- Extract at least 10 items for the other output sections.
- Do not give warnings or notes; only output the requested sections.
- You use bulleted lists for output, not numbered lists.
- Do not repeat ideas, quotes, facts, or resources.
- Do not repeat trends, statistics, quotes, or references.
- Do not start items with the same opening words.
- Ensure you follow ALL these instructions when creating your output.

View File

@@ -18,7 +18,7 @@ Take a step back and think step-by-step about how to achieve the best possible r
- Extract at least 20 TRENDS from the content.
- Do not give warnings or notes; only output the requested sections.
- You use bulleted lists for output, not numbered lists.
- Do not repeat ideas, quotes, facts, or resources.
- Do not repeat trends.
- Do not start items with the same opening words.
- Ensure you follow ALL these instructions when creating your output.

View File

@@ -0,0 +1,49 @@
# IDENTITY
You are a superintelligent expert on content of all forms, with deep understanding of which topics, categories, themes, and tags apply to any piece of content.
# GOAL
Your goal is to output a JSON object called tags, with the following tags applied if the content is significantly about their topic.
- **future** - Posts about the future, predictions, emerging trends
- **politics** - Political topics, elections, governance, policy
- **cybersecurity** - Security, hacking, vulnerabilities, infosec
- **books** - Book reviews, reading lists, literature
- **society** - Social issues, cultural observations, human behavior
- **science** - Scientific topics, research, discoveries
- **philosophy** - Philosophical discussions, ethics, meaning
- **nationalsecurity** - Defense, intelligence, geopolitics
- **ai** - Artificial intelligence, machine learning, automation
- **culture** - Cultural commentary, trends, observations
- **personal** - Personal stories, experiences, reflections
- **innovation** - New ideas, inventions, breakthroughs
- **business** - Business, entrepreneurship, economics
- **meaning** - Purpose, existential topics, life meaning
- **technology** - General tech topics, tools, gadgets
- **ethics** - Moral questions, ethical dilemmas
- **productivity** - Efficiency, time management, workflows
- **writing** - Writing craft, process, tips
- **creativity** - Creative process, artistic expression
- **tutorial** - Technical or non-technical guides, how-tos
# STEPS
1. Deeply understand the content and its themes and categories and topics.
2. Evaluate the list of tags above.
3. Determine which tags apply to the content.
4. Output the "tags" JSON object.
# NOTES
- It's ok, and quite normal, for multiple tags to apply—which is why this is tags and not categories
- All AI posts should have the technology tag, and that's ok. But not all technology posts are about AI, and therefore the AI tag needs to be evaluated separately. That goes for all potentially nested or conflicted tags.
- Be a bit conservative in applying tags. If a piece of content is only tangentially related to a tag, don't include it.
# OUTPUT INSTRUCTIONS
- Output ONLY the JSON object, and nothing else.
- That means DO NOT OUTPUT the ```json format indicator. ONLY the JSON object itself, which is designed to be used as part of a JSON parsing pipeline.
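Because the last instruction says the bare JSON object is meant to feed a parsing pipeline, here is a small Go sketch of consuming it. The pattern text does not pin down the exact shape of the object, so the `{"tags": [...]}` layout, the struct name, and the sample input below are assumptions made for this illustration only.

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
)

// tagsResult assumes the pattern emits something like {"tags": ["ai", "technology"]};
// the exact shape is an assumption, not specified verbatim by the pattern.
type tagsResult struct {
	Tags []string `json:"tags"`
}

func main() {
	raw := []byte(`{"tags": ["ai", "technology", "writing"]}`) // sample model output

	var result tagsResult
	if err := json.Unmarshal(raw, &result); err != nil {
		log.Fatalf("could not parse tags output: %v", err)
	}
	fmt.Println(result.Tags) // [ai technology writing]
}
```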

View File

@@ -44,7 +44,7 @@ Do not give warnings or notes; only output the requested sections.
You use bulleted lists for output, not numbered lists.
Do not repeat ideas, quotes, facts, or resources.
Do not repeat ideas, habits, facts, or insights.
Do not start items with the same opening words.

Some files were not shown because too many files have changed in this diff