### CHANGES
- Add concurrency control to prevent simultaneous runs
- Pull latest main branch changes before tagging
- Fetch all remote tags before calculating version
## CHANGES
- Add Save method to PatternsEntity struct
- Create pattern directory with proper permissions
- Write pattern content to system pattern file
- Add comprehensive test for Save functionality
- Verify directory creation and file contents
- Handle errors for directory and file operations
Add a new pattern for generating mnemonic phrases from diceware words. This includes two markdown files defining the user guide, and system implementation details.
## CHANGES
- Replace hardcoded `/tmp` with `os.TempDir()` for paths
- Use `filepath.Join()` instead of string concatenation
- Remove Unix `find` command dependency completely
- Add new `findVTTFiles()` method using `filepath.Walk()`
- Make VTT file discovery work on Windows
- Improve error handling for file operations
- Maintain backward compatibility with existing functionality
This is a merge commit the virtual branches in your workspace.
Due to GitButler managing multiple virtual branches, you cannot switch back and
forth between git branches and virtual branches easily.
If you switch to another branch, GitButler will need to be reinitialized.
If you commit on this branch, GitButler will throw it away.
Here are the branches that are currently applied:
- improve-create-prd (refs/gitbutler/improve-create-prd)
For more information about what we're doing here, check out our docs:
https://docs.gitbutler.com/features/virtual-branches/integration-branch
The changes in this commit expand the identity and purpose of the PRD Generator
to provide more clarity on its role and the expected output. The key changes
include:
- Defining the Generator's purpose as transforming product ideas into a
structured PRD that ensures clarity, alignment, and precision in product
planning and execution.
- Outlining the key sections typically found in a PRD that the Generator should
cover, such as Overview, Objectives, Target Audience, Features, User Stories,
Functional and Non-functional Requirements, Success Metrics, and Timeline.
- Providing more detailed instructions on the expected output format, structure,
and content, including the use of Markdown, labeled sections, bullet points,
tables, and highlighting of priorities or MVP features.
- Create new pattern for analyzing Terraform plans
- Add identity defining expert plan analyzer role
- Include focus on security, cost, and compliance
- Define three output sections for summaries
- Specify 20-word sentence summary requirement
- List 10 critical changes with word limits
- Include 5 key takeaways section format
- Add markdown formatting output instructions
- Require numbered lists over bullet points
- Prohibit warnings and duplicate content
## CHANGES
- Add AIML provider configuration
- Set AIML base URL to api.aimlapi.com/v1
- Expand supported OpenAI compatible providers list
- Enable AIML API integration support
### CHANGES
- Add `.browserslistrc` to define target browser versions.
- Upgrade `pdfjs-dist` dependency from v2.16 to v4.2.67.
- Upgrade `nanoid` dependency from v4.0.2 to v5.0.9.
- Introduce `pdf-config.ts` for centralized PDF.js worker setup.
- Refactor `PdfConversionService` to use new PDF worker configuration.
- Add static `pdf.worker.min.mjs` to serve PDF.js worker.
- Update Vite configuration for ESNext build target and PDF.js.
- Create environment config module for URL handling
- Add getFabricBaseUrl() function with server/client support
- Add getFabricApiUrl() helper for API endpoints
- Configure Vite to inject FABRIC_BASE_URL client-side
- Update proxy targets to use environment variable
- Add TypeScript definitions for window config
- Support FABRIC_BASE_URL env var with fallback
## CHANGES
- Add model-specific raw mode detection logic
- Check Ollama llama2/llama3 models for raw mode
- Check OpenAI o1/o3/o4 models for raw mode
- Use model from options or default chatter
- Auto-enable raw mode when vendor requires it
- Import strings package for prefix matching
## CHANGES
- Add NeedsRawMode to Vendor interface
- Implement NeedsRawMode in all AI clients
- Return false for all implementations
- Support model-specific raw mode detection
- Enable future raw mode requirements
CHANGES
- Upgrade `anthropic-sdk-go` dependency to version `v1.2.0`.
- Integrate new Anthropic Claude 4 Opus and Sonnet models.
- Remove deprecated Claude 2.0 and 2.1 models from list.
- Adjust model type casting for `anthropic-sdk-go v1.2.0` compatibility.
- Refresh README: announce Claude 4, update date, fix links.
## CHANGES
- Fix system message handling with patterns in raw mode
- Prevent duplicate inputs when using patterns
- Add conditional logic for pattern vs non-pattern scenarios
- Simplify message construction with clearer variable names
- Improve code comments for better readability
- Improved formatting of the introduction and content summary sections for better flow.
- Consolidated repetitive sentences and enhanced the overall coherence of the text.
- Adjusted bullet points and numbering for consistency and easier comprehension.
- Ensured that key concepts are clearly articulated and visually distinct to aid understanding.
### CHANGES
- Add `getSortedGroupsItems` to centralize sorting logic.
- Sort groups and items alphabetically, case-insensitive.
- Replace inline sorting in `Print` with new method.
- Update `GetGroupAndItemByItemNumber` to use sorted data.
- Ensure original `GroupsItems` remains unmodified.
CHANGES:
- Add shell completion support for three major shells
- Create standardized completion scripts in completions/ directory
- Add --shell-complete-list flag for machine-readable output
- Update Print() methods to support plain output format
- Document installation steps for each shell in README
- Replace old fish completion script with improved version
CHANGES
* Define `getGoVersion` function in `flake.nix`.
* Use `getGoVersion` to set Go version consistently.
* Pass `goVersion` explicitly into `nix/shell.nix`.
* Remove redundant Go version definition from `shell.nix`.
Update Go version across Dockerfile, Nix configurations, and Go modules.
Refresh dependencies and Nix flake inputs.
CHANGES:
* Update Go version to 1.24.2 in Dockerfile.
* Set Go version to 1.24.0 and toolchain to 1.24.2.
* Refresh Go module dependencies and sums (go.mod, go.sum).
* Update Nix flake lock file inputs.
* Configure Nix environment and packages for Go 1.24.
* Update gomod2nix lock file with dependency hashes.
* Use Go 1.24 in Nix development shell environment.
## CHANGES
- refactor BuildSession raw mode to prepend system to user content
- ensure raw mode messages always have User role
- keep existing user message when no systemMessage provided
- append systemMessage separately in non-raw mode sessions
- store original cmd.Env before context-based exec command creation
- recreate exec command with context then restore originalEnv
- add comments clarifying raw vs non-raw handling behavior
## CHANGES
- Upgrade Anthropic SDK from alpha.11 to beta.3
- Update API endpoint from v1 to v2
- Replace anthropic.F() with direct assignment
- Replace anthropic.F() with anthropic.Opt() for optional params
- Simplify event delta handling in streaming
- Change client type from pointer to value type
- Update comment with SDK changelog reference
CHANGES
* Import `sort` and `strings` packages for sorting functionality.
* Sort retrieved AI model names alphabetically, ignoring case.
* Ensure consistent ordering of AI models in lists.
### CHANGES
- Add `Prompt` field to `StrategyMeta` struct.
- Include `strings` package for filename processing.
- Derive strategy name from filename using `strings.TrimSuffix`.
- Store `Prompt` value from JSON data in `StrategyMeta`
### CHANGES
- Import `sort` and `strings` packages for sorting functionality.
- Create a copy of groups for stable sorting.
- Sort groups alphabetically in a case-insensitive manner.
- Create a copy of items within each group for sorting.
- Sort items alphabetically in a case-insensitive manner.
- Iterate over sorted groups and items for display.
### CHANGES
- Introduce `--listvendors` flag to display all AI vendors.
- Refactor OpenAI-compatible providers into a unified configuration.
- Remove individual vendor packages for streamlined management.
- Add sorting for consistent vendor listing output.
- Update documentation to include new `--listvendors` option.
## CHANGES
- add new aot.json for Atom-of-Thought (AoT) prompting
- define AoT strategy description and detailed prompt instructions
- update strategies.json to include AoT in available strategies list
- ensure AoT strategy appears alongside CoD, CoT, and LTM options
Bumps the go_modules group with 1 update in the / directory: [golang.org/x/net](https://github.com/golang/net).
Updates `golang.org/x/net` from 0.36.0 to 0.38.0
- [Commits](https://github.com/golang/net/compare/v0.36.0...v0.38.0)
---
updated-dependencies:
- dependency-name: golang.org/x/net
dependency-version: 0.38.0
dependency-type: indirect
dependency-group: go_modules
...
Signed-off-by: dependabot[bot] <support@github.com>
Integrate the Grok AI provider into the Fabric system for AI model interactions.
### CHANGES
* Add Grok AI client to the plugin registry.
* Include Grok AI API key in REST API configuration endpoints.
## CHANGES
- Require exactly two arguments: directory and instructions
- Remove dedicated help flag, use flag.Usage instead
- Improve directory validation to check if it's a directory
- Inline pattern parsing, removing separate function
- Simplify error messages for better clarity
- Update usage text to reflect required instructions parameter
- Print usage to stderr instead of stdout
## CHANGES
- Rename tool from `fabric_code` to `code_helper`
- Update all documentation references to the tool
- Update installation instructions in README
- Modify usage examples in documentation
- Update tool's self-description and help text
CHANGES:
* Return summary text from `ParseFileChanges` separately.
* Update `chatter` to use returned summary text.
* Update tests to match new function signature.
## CHANGES
- Add FileChangesMarker constant for file changes section
- Update parser to use new constant marker
- Improve error messages with dynamic marker reference
- Update tests to use new marker format
- Update system documentation with new marker syntax
CHANGES:
- Replace deprecated io/ioutil with modern alternatives
- Add file change parsing and validation system
- Create secure file application mechanism
- Update chatter to process AI file changes
- Improve create_coding_feature pattern documentation
This commit introduces the `fabric_code` tool and the `create_coding_feature` pattern, allowing Fabric to modify existing codebases.
## CHANGES
- add `fabric_code` tool to generate JSON representation of code projects
- add `create_coding_feature` pattern to apply AI-generated code changes
- update README with `fabric_code` installation and usage
- walk file system with maximum depth and ignore list
- scan directory and return file/dir JSON data for AI model
- provide usage instructions and examples for `fabric_code`
- add file management API to system prompt for code changes
CHANGES:
- Removed `system.md` on the top level of the fabric repo.
- system.md was an RPG session summarization prompt.
- There are two other RPM summary patterns created after this file was added: `create_rpg_summary` and `summarize_rpg_session`
If you use a youtube link like `https://youtu.be/sHIlFKKaq0A` percentEndcoding encodes the link to `https%3A%2F%2Fyoutu.be%2FsHIlFKKaq0A`, which throws an error in fabric.
With percentEndcoding false, the script receives the link without encoding and works.
## CHANGES
- Add prompt strategies like Chain of Thought (CoT)
- Implement strategy selection with `--strategy` flag
- Improve README with platform-specific installation instructions
- Fix web interface documentation link
- Refactor git operations with new githelper package
- Add `--liststrategies` command to view available strategies
- Support applying strategies to system prompts
- Fix YouTube configuration check
- Improve error handling in session management
CHANGES
- Add argument validation to yt for usage errors
- Enable -t flag for transcript with timestamps
- Refactor PowerShell yt function with parameter switch
- Update README to dynamically select transcript option
- Document youtube_summary feature in pattern explanations
- Introduce youtube_summary pattern.
Remove the cancellation of remaining goroutines when a vendor collection fails.
This ensures that other vendor collections continue even if one fails.
Fixes listing models via `fabric -L` and using non-default models via `fabric -m custom_model`,
when localhost models (e.g. Ollama, LM Studio) are not listening on a given port (basically shut down).
## CHANGES
- Updated anthropic-sdk-go from v0.2.0-alpha.4 to v0.2.0-alpha.11
- Added Claude 3.7 Sonnet models to available model list
- Added ModelClaude3_7SonnetLatest to model options
- Added ModelClaude3_7Sonnet20250219 to model options
- Removed ModelClaude_Instant_1_2 from available models
- Added LM Studio as a new plugin, now it can be used with Fabric.
- Updated the plugin registry with the new plugin name
- Updated the configuration with the required base url
This commit adds the ability to grab the transcript
of a YouTube video with timestamps. The timestamps
are formatted as HH:MM:SS and are prepended to
each line of the transcript. The feature is enabled
by the new `--transcript-with-timestamps` flag,
so it's similar to the existing `--transcript` flag.
Example future use-case:
Providing summary of a video that includes timestamps
for quick navigation to specific parts of the video.
- Enable and improve custom API base URL configuration
- Add proper handling of v1 endpoint for UUID-containing URLs
- Implement URL formatting logic for consistent endpoint structure
- Clean up commented code and improve configuration flow
Create pattern to extract commands from videos and threat reports to obtain commands so pentesters or red teams or Threat hunters can use to either threat hunt or simulate the threat actor.
## Change
1. Windows Command: Because actually curl does not exist natively on Windows
2. Syntax: Because like this; it makes the “click, cut and paste” easier
Folders deleted:
- `types`. The folders contained are now `lib/interfaces` and `lib/api`
- `types/markdown` now in `utils/markdown`
- `components/ui/{side-nav,terminal}` now `components/ui/toc` and
`terminal`
Moved
- `lib/types/interfaces` to `lib/interfaces`.
- `components/ui/side-nav` to `components/ui/toc`.
- `components/ui/terminal` to `components/terminal`.
- `types/markdown` to `utils/markdown`
- `lib/types/chat` to `lib/api`
Update version to v..1 and commit
Update version.go
Update version to v..1 and commit
Update version.nix
Update version to v..1 and commit
Update version.go
Update version.nix
- Improved pattern creation, editing, and deletion functionalities.
- Enhanced logging configuration for better debugging and user feedback.
- Updated input validation and sanitization processes to ensure safe pattern processing.
- Streamlined session state initialization for improved performance.
- Added new UI components for better user experience in pattern management and output analysis.
Streamlit application for managing and executing patterns, with a focus on pattern creation, execution, and analysis. Below is a breakdown of the key components and functionality of the application:
Key Components and Functionality
Logging Configuration:
The application sets up logging with both console and file handlers.
The console logs are color-coded for better readability, and the file logs are more detailed for debugging purposes.
Session State Initialization:
The initialize_session_state() function initializes the session state with default values for various configuration and UI states.
It also loads saved outputs from persistent storage.
Pattern Management:
Pattern Creation: The create_pattern() function allows creating new patterns with either simple or advanced editing options.
Pattern Deletion: The delete_pattern() function allows deleting existing patterns.
Pattern Editing: The pattern_editor() function provides an interface for editing existing patterns.
Pattern Execution:
Pattern Execution: The execute_patterns() function executes selected patterns and captures their outputs.
Pattern Chain Execution: The execute_pattern_chain() function executes a sequence of patterns in a chain, passing output from each pattern to the next.
Output Management:
Saving Outputs: The save_output_log() function saves pattern execution logs.
Starring Outputs: The star_output() and unstar_output() functions allow users to star/favorite outputs for quick access.
Configuration and Model Selection:
Model and Provider Selection: The load_models_and_providers() function fetches and displays available models and providers for selection.
Configuration Loading: The load_configuration() function loads environment variables and initializes the configuration.
UI Components:
Pattern Creation UI: The pattern_creation_ui() and pattern_creation_wizard() functions provide UI components for creating new patterns.
Pattern Management UI: The pattern_management_ui() function provides UI components for managing patterns.
Output Analysis UI: The application includes tabs for displaying all outputs and starred outputs, with options to copy or star outputs.
Error Handling and Validation:
Input Validation: The validate_input_content() and sanitize_input_content() functions validate and sanitize input content to ensure it is safe for processing.
Pattern Validation: The validate_pattern() function validates the structure and content of a pattern.
Main Function:
The main() function orchestrates the entire application, setting up the Streamlit page, initializing session state, and handling the main navigation between different views (Run Patterns, Pattern Management, Analysis Dashboard).
Usage and Features
Pattern Creation: Users can create new patterns using either a simple text editor or an advanced wizard.
Pattern Execution: Users can select patterns to run, provide input, and execute them either individually or in a chain.
Output Analysis: Users can view and analyze the outputs of executed patterns, star favorite outputs, and copy outputs to the clipboard.
Pattern Management: Users can edit, delete, and bulk edit patterns.
Configuration: Users can select different models and providers for pattern execution.
Error Handling and Logging
The application includes robust error handling and logging to ensure that any issues are logged and displayed to the user.
Logging is done both to the console and to a file for debugging purposes.
Future Enhancements
Enhanced Pattern Validation: More comprehensive validation of pattern content and structure.
Advanced Analysis: Adding more advanced analysis features, such as sentiment analysis or keyword extraction on pattern outputs.
Integration with External APIs: Integrating with external APIs for additional functionality, such as sending outputs via email or storing them in a database.
Ingested the following documents, and then extracted themes and examples of how Socrates interacted with those around him.
* Apology by Plato
* Phaedrus by Plato
* Symposium by Plato
* The Republic by Plato
* The Economist by Xenophon
* The Memorabilia by Xenophon
* The Memorable Thoughts of Socrates by Xenophon
* The Symposium by Xenophon
Many thanks to <a href="https://www.gutenberg.org/">Project Gutenberg</a> for the source materials.
Add support for persistent configuration via YAML files. Users can now specify
common options in a config file while maintaining the ability to override with
CLI flags. Currently supports core options like model, temperature, and pattern
settings.
- Add --config flag for specifying YAML config path
- Support standard option precedence (CLI > YAML > defaults)
- Add type-safe YAML parsing with reflection
- Add tests for YAML config functionality
- Add InputHasVars field to ChatRequest struct
- Only process template variables in user input when flag is set
- Fixes issue with Ansible/Jekyll templates that use {{var}} syntax
This change makes template variable substitution in user input opt-in
via the --input-has-vars flag, preserving literal curly braces by
default.
When using pattern files with variables but no stdin input, ensure proper
template processing by initializing an empty message. This allows patterns
like:
./fabric -p pattern.txt -v=name:value
to work without requiring stdin input, while maintaining compatibility
with existing stdin usage:
echo "input" | ./fabric -p pattern.txt -v=name:value
Changes:
- Add empty message initialization in BuildSession when Message is nil
- Remove redundant template processing of message content
- Let pattern processing handle all template resolution
This simplifies the template processing flow while supporting both
stdin and non-stdin use cases.
When using pattern files with variables but no stdin input, ensure proper
template processing by initializing an empty message. This allows patterns
like:
./fabric -p pattern.txt -v=name:value
to work without requiring stdin input, while maintaining compatibility
with existing stdin usage:
echo "input" | ./fabric -p pattern.txt -v=name:value
Changes:
- Add empty message initialization in BuildSession when Message is nil
- Remove redundant template processing of message content
- Let pattern processing handle all template resolution
This simplifies the template processing flow while supporting both
stdin and non-stdin use cases.
- Successfully implemented path-based registry storage
- Moved to storing paths instead of full configurations
- Implemented proper hash verification for both configs and executables
- Registry format now clean and minimal.
File-Based Output Implementation
- Successfully implemented file-based output handling
- Demonstrated clean interface requiring only path output
- Properly handles cleanup of temporary files
- Verified working with both local and remote operations
Process template variables ({{var}}) consistently in both pattern files
and raw input messages. Previously variables were only processed when
using pattern files.
- Add template variable processing for raw input in BuildSession
- Initialize messageContent explicitly
- Remove errantly committed build artifact (fabric binary in previous commit)
Add initial set of utility plugins for the template system:
- datetime: Date/time formatting and manipulation
- fetch: HTTP content retrieval and processing
- file: File system operations and content handling
- sys: System information and environment access
- text: String manipulation and formatting operations
Each plugin includes:
- Implementation with comprehensive test coverage
- Markdown documentation of capabilities
- Integration with template package
This builds on the template system to provide practical utility functions
while maintaining a focused scope for the initial plugin release.
- Add new template package to handle variable substitution with {{variable}} syntax
- Move substitution logic from patterns to centralized template system
- Update patterns.go to use template package for variable processing
- Support special {{input}} handling for pattern content
- Update chatter.go and rest API to pass input parameter
- Enable multiple passes to handle nested variables
- Report errors for missing required variables
This change sets up a foundation for future templating features like front matter
and plugin support while keeping the substitution logic centralized.
- Stronger separation of concerns between chatter.go and patterns.go
- Consolidate pattern loading logic into GetPattern method
- Support both file and database patterns through single interface
- Maintain API compatibility with Storage interface
- Handle variable substitution in one place
- Keep backward compatibility for REST API through Get method
The changes enable cleaner pattern handling while maintaining
existing interfaces and adding file-based pattern support.
# What this Pull Request (PR) does
Add a new pattern to create a meeting summary from an audio transcript.
The pattern outputs the following sections (where relevant):
- Key Points
- Tasks
- Decisions
- Next Steps
Allow patterns to be loaded directly from files using explicit path prefixes
(~/, ./, /, or \). This enables easier testing and iteration of patterns
without requiring installation into the fabric config structure.
- Supports relative paths (./pattern.txt, ../pattern.txt)
- Supports home directory expansion (~/patterns/test.txt)
- Supports absolute paths
- Maintains backwards compatibility with named patterns
- Requires explicit path markers to distinguish from pattern names
Example usage:
fabric --pattern ./draft-pattern.txt
fabric --pattern ~/patterns/my-pattern.txt
fabric --pattern ../../shared-patterns/test.txt
In the process of setting up patterns, we've added a step to unalias any existing alias with the same name. This ensures that our dynamically defined functions won't conflict with any pre-existing aliases.
Description:
Changed "agreed within the meeting" to "agreed upon within the meeting" to improve grammatical accuracy.
Added missing periods to ensure consistency across list items.
Corrected the spelling of "highliting" to "highlighting."
Fixed the spelling of "exxactly" to "exactly."
Updated phrasing in "Write NEXT STEPS a 2-3 sentences" to "Write NEXT STEPS as 2-3 sentences" for grammatical correctness.
These changes improve the readability and consistency of the document, ensuring all instructions are clear and error-free.
The previous instructions incorrectly set GOROOT to '/opt/homebrew/bin/go', which points to the Go binary rather than the Go root directory. This caused errors when running Go commands on Apple Silicon-based Macs.
I updated the instructions to dynamically determine the correct GOROOT path using Homebrew, ensuring compatibility across different environments. This change resolves the 'go: cannot find GOROOT directory' issue on M1/M2 Macs.
Update this pattern to match the current fabric command line options Remove --agents, add -S as an alternative to --setup, and replace -c with -C to align with the current cli interface.
This pattern allows you to summarize, rate, and deduplicate feedback
about products. It's very helpful for anyone working in product
management, engineering, etc.
In golang, contexts should be propagated downwards in order to be able
to provide features such as cancellation.
This commit refactors the Vendor interface to accept a context as a
first parameter so that it can be propagated downwards.
This pattern is actually based on this incredibly great article: https://learnhowtolearn.org/how-to-build-extremely-quickly/
The idea is to use this pattern whenever you want to break an idea or
task down into small components, fully fleshing out your own TODO list
of things to implement to get it working.
This applies to things like writing articles/papers, creating
applications, and much more.
Removed "NOTE: This will revert the default model to gpt4-turbo. please run --changeDefaultModel to once again set the default model". This note appears to reflect behavior that is no longer happening.
Removed "NOTE: This will revert the default model to gpt4-turbo. please run --changeDefaultModel to once again set the default model". This note appears to reflect behavior that is no longer happening.
Bugfix for the error:
Traceback (most recent call last):
File "/home/xxx/.local/bin/fabric", line 8, in <module>
sys.exit(cli())
^^^^^
File "/home/xxx/.local/share/pipx/venvs/fabric/lib/python3.12/site-packages/installer/client/cli/fabric.py", line 148, in main
session.list_sessions()
File "/home/xxx/.local/share/pipx/venvs/fabric/lib/python3.12/site-packages/installer/client/cli/helper.py", line 67, in list_sessions
most_recent = self.find_most_recent_file().split("/")[-1]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'split'
- Prevent generation_date tag format from being modified when SAVE_DATE_FORMAT is
specified
- Prevent NoneType from ending up in the tags (previous fix did not work)
- Update DATE_FORMAT to be configurable using the SAVE_DATE_FORMAT environment variable
- Modify target filename generation to handle cases where SAVE_DATE_FORMAT is left blank
- Default to date format "%Y-%m-%d" if SAVE_DATE_FORMAT is not set
CHANGES:
- New system.md file created for summarizing git diffs
- Detailed steps for summarizing Git diffs outlined.
- Emphasis on creating concise, impactful update bullets.
- Introduction of conventional commits for clear change tracking.
The variable 'wisdomFilePath' is already a complete path constructed with 'config_directory'. Joining it again with 'current_directory' could lead to an incorrect path.
When I attempted to follow these instructions in a windows environment using WSL, I kept running into issues because my python version was too low (3.8). I then was going through hoops trying to upgrade to version 3.12 as the process seems more complicated on windows OS.
To avoid these headaches, I thought it best to warn potential users ahead of time to ensure their environment is running the latest version of Python or at least python 3.10, which seemed to work for me finally.
because: As a user, I should be able to answer
interview questions quickly and effectively in realtime
this commit: Adds a pattern for answering interview questions
'save' can be used to save a Markdown file, with optional frontmatter
and additional tags. By default, if set, `FABRIC_FRONTMATTER_TAGS` will
be placed into the file as it is written. These tags and front matter
are suppressed from STDOUT, which can be piped into other patterns or
programs with no ill effects. This strives to be a version of `tee` that
is enhanced for personal knowledge systems that use frontmatter.
- The goal is to bring more encapsulation of the models management and simplified configuration management to bring increased flexibility, transparency on the overall flow, and simplicity in adding new model.
- We need to differentiate:
- Vendors: the producer of models (like OpenAI, Azure, Anthropic, Ollama, ..etc) and their associated APIs
- Models: the LLM models these vendors are making public
- Each vendor and operations allowed by the vendor needs to be encapsulated. This includes:
- The questions needed to setup the model (like the API key, or the URL)
- The listing of all models supported by the vendor
- The actions performed with a given model
- The configuration flow works like this for an **initial** call:
- The available vendors are called one by one, each of them being responsible for the data they collect. They return a set of environment variables under the form of a list of strings, or an empty list if the user does not want to setup this vendor. As we do not want each vendor to know which way the data they need will be collected (e.g., read from the command line, or a GUI), they will be asked for a list of questions, the configuration will inquire the user, and send back the questions with the collected answers to the Vendor. The Vendor is then either instantiating an instance (Vendor configured) and returning it, or returning `nil` if the Vendor should not be set up.
- the `.env` file is created, using the information returned by the vendors
- A list of patterns is downloaded from the main site
- When the system is configured, the configuration flows:
- Read the `.env` file using the godotenv library
- It configures a structure that contains the various vendors selected as well as the preferred model. This structure will be completed with some of the command line values (i.e, context, session, etc..)
- To get the list of all supported models:
- Each configured model (part of the configuration structure) is asked, using a goroutine, to return the list of model
- Order when building message: session + context + pattern + user input (role "user)
## TODO:
- Check if we need to read the system.md for every patterns when running the ListAllPatterns
- Context management seems more complex than the one in the original fabric. Probably needs some work (at least to make it clear how it works)
- models on command line: give as well vendor (like `--model openai/gpt-4o`). If the vendor is not given, get it by retrieving all possible models and searching from that.
- if user gives the ollama url on command line, we need to update/init an ollama vendor.
- The db should host only things related to access and storage in ~/.config/fabric
- The interaction part of the Setup function should be in the cli (and perhaps all the Setup)
This document explains the complete workflow for managing pattern descriptions and tags, including how to process new patterns and maintain metadata.
## System Overview
The pattern system follows this hierarchy:
1.`~/.config/fabric/patterns/` directory: The source of truth for available patterns
2.`pattern_extracts.json`: Contains first 500 words of each pattern for reference
3.`pattern_descriptions.json`: Stores pattern metadata (descriptions and tags)
4.`web/static/data/pattern_descriptions.json`: Web-accessible copy for the interface
## Pattern Processing Workflow
### 1. Adding New Patterns
- Add patterns to `~/.config/fabric/patterns/`
- Run extract_patterns.py to process new additions:
```bash
python extract_patterns.py
The Python Script automatically:
- Creates pattern extracts for reference
- Adds placeholder entries in descriptions file
- Syncs to web interface
### 2. Pattern Extract Creation
The script extracts first 500 words from each pattern's system.md file to:
- Provide context for writing descriptions
- Maintain reference material
- Aid in pattern categorization
### 3. Description and Tag Management
Pattern descriptions and tags are managed in pattern_descriptions.json:
{
"patterns": [
{
"patternName": "pattern_name",
"description": "[Description pending]",
"tags": []
}
]
}
## Completing Pattern Metadata
### Writing Descriptions
1. Check pattern_descriptions.json for "[Description pending]" entries
2. Reference pattern_extracts.json for context
3. How to update Pattern short descriptions (one sentence).
You can update your descriptions in pattern_descriptions.json manually or using LLM assistance (preferred approach).
Tell AI to look for "Description pending" entries in this file and write a short description based on the extract info in the pattern_extracts.json file. You can also ask your LLM to add tags for those newly added patterns, using other patterns tag assignments as example.
### Managing Tags
1. Add appropriate tags to new patterns
2. Update existing tags as needed
3. Tags are stored as arrays: ["TAG1", "TAG2"]
4. Edit pattern_descriptions.json directly to modify tags
5. Make tags your own. You can delete, replace, amend existing tags.
Fabric now supports YAML configuration files for commonly used options. This allows users to persist settings and share configurations across multiple runs.
## Usage
Use the `--config` flag to specify a YAML configuration file:
```bash
fabric --config ~/.config/fabric/config.yaml "Tell me about APIs"
```
## Configuration Precedence
1. CLI flags (highest priority)
2. YAML config values
3. Default values (lowest priority)
## Supported Configuration Options
```yaml
# Model selection
model:gpt-4
modelContextLength:4096
# Model parameters
temperature:0.7
topp:0.9
presencepenalty:0.0
frequencypenalty:0.0
seed:42
# Pattern selection
pattern:analyze # Use pattern name or filename
# Feature flags
stream:true
raw:false
```
## Rules and Behavior
- Only long flag names are supported in YAML (e.g., `temperature` not `-t`)
- CLI flags always override YAML values
- Unknown YAML declarations are ignored
- If a declaration appears multiple times in YAML, the last one wins
- The order of YAML declarations doesn't matter
## Type Conversions
The following string-to-type conversions are supported:
Rawbool`short:"r" long:"raw" yaml:"raw" description:"Use the defaults of the model without sending chat options (like temperature etc.) and use the user role instead of the system role for patterns."`
FrequencyPenaltyfloat64`short:"F" long:"frequencypenalty" yaml:"frequencypenalty" description:"Set frequency penalty" default:"0.0"`
ListPatternsbool`short:"l" long:"listpatterns" description:"List all patterns"`
ListAllModelsbool`short:"L" long:"listmodels" description:"List all available models"`
ListAllContextsbool`short:"x" long:"listcontexts" description:"List all contexts"`
ListAllSessionsbool`short:"X" long:"listsessions" description:"List all sessions"`
YouTubestring`short:"y" long:"youtube" description:"YouTube video or play list \"URL\" to grab transcript, comments from it and send to chat or print it put to the console and store it in the output file"`
YouTubePlaylistbool`long:"playlist" description:"Prefer playlist over video if both ids are present in the URL"`
YouTubeTranscriptbool`long:"transcript" description:"Grab transcript from YouTube video and send to chat (it is used per default)."`
YouTubeTranscriptWithTimestampsbool`long:"transcript-with-timestamps" description:"Grab transcript from YouTube video with timestamps and send to chat"`
YouTubeCommentsbool`long:"comments" description:"Grab comments from YouTube video and send to chat"`
YouTubeMetadatabool`long:"metadata" description:"Output video metadata"`
Languagestring`short:"g" long:"language" description:"Specify the Language Code for the chat, e.g. -g=en -g=zh" default:""`
ScrapeURLstring`short:"u" long:"scrape_url" description:"Scrape website URL to markdown using Jina AI"`
ScrapeQuestionstring`short:"q" long:"scrape_question" description:"Search question using Jina AI"`
Seedint`short:"e" long:"seed" yaml:"seed" description:"Seed to be used for LMM generation"`
// Apply refined language instruction if specified
ifrequest.Language!=""&&request.Language!="en"{
// Refined instruction: Execute pattern using user input, then translate the entire response.
systemMessage=fmt.Sprintf("%s\n\nIMPORTANT: First, execute the instructions provided in this prompt using the user's input. Second, ensure your entire final response, including any section headers or titles generated as part of executing the instructions, is written ONLY in the %s language.",systemMessage,request.Language)
}
ifraw{
// In raw mode, we want to avoid duplicating the input that's already in the pattern
varfinalContentstring
ifsystemMessage!=""{
// If we have a pattern, it already includes the user input
ifrequest.PatternName!=""{
finalContent=systemMessage
}else{
// No pattern, combine system message with user input
These are helper tools to work with Fabric. Examples include things like getting transcripts from media files, getting metadata about media, etc.
## yt (YouTube)
`yt` is a command that uses the YouTube API to pull transcripts, get video duration, and other functions. It's primary function is to get a transcript from a video that can then be stitched (piped) into other Fabric Patterns.
```bash
usage: yt [-h][--duration][--transcript][url]
vm (video meta) extracts metadata about a video, such as the transcript and the video's duration. By Daniel Miessler.
print("Error: Failed to access YouTube API. Please check your YOUTUBE_API_KEY and ensure it is valid.")
if __name__ == '__main__':
parser = argparse.ArgumentParser(description='vm (video meta) extracts metadata about a video, such as the transcript and the video\'s duration. By Daniel Miessler.')
parser.add_argument('url', nargs='?', help='YouTube video URL')
parser.add_argument('--duration', action='store_true', help='Output only the duration')
parser.add_argument('--transcript', action='store_true', help='Output only the transcript')
Fabric is not just a tool; it's a transformative step towards integrating the power of GPT prompts into your digital life. With Fabric, you have the ability to create a personal API that brings advanced GPT capabilities into various aspects of your digital environment. Whether you're looking to incorporate powerful GPT prompts into command line operations or extend their functionality to a wider network through a personal API, Fabric is designed to seamlessly blend with your digital ecosystem. This tool is all about augmenting your digital interactions, enhancing productivity, and enabling a more intelligent, GPT-powered experience in every aspect of your online presence.
## Features
1. Text Analysis: Easily extract summaries from texts.
2. Clipboard Integration: Conveniently copy responses to the clipboard.
3. File Output: Save responses to files for later reference.
4. Pattern Module: Utilize specific modules for different types of analysis.
5. Server Mode: Operate the tool in server mode for expanded capabilities.
6. Remote & Standalone Modes: Choose between remote and standalone operations.
## Installation
1. Install dependencies:
`npm install`
2. Start the application:
`npm start`
Contributing
We welcome contributions to Fabric! For details on our code of conduct and the process for submitting pull requests, please read the CONTRIBUTING.md.
@@ -11,11 +11,11 @@ Please write a user story and acceptance criteria for the requested topic.
Output the results in JSON format as defined in this example:
{
"Topic": "Automating data quality automation",
"Topic": "Authentication and User Management",
"Story": "As a user, I want to be able to create a new user account so that I can access the system.",
"Criteria": "Given that I am a user, when I click the 'Create Account' button, then I should be prompted to enter my email address, password, and confirm password. When I click the 'Submit' button, then I should be redirected to the login page."
This pattern is the complementary part of the `create_quiz` pattern. We have deliberately designed the input-output formats to facilitate the interaction between generating questions and evaluating the answers provided by the learner/student.
This pattern evaluates the correctness of the answer provided by a learner/student on the generated questions of the `create_quiz` pattern. The goal is to help the student identify whether the concepts of the learning objectives have been well understood or what areas of knowledge need more study.
For an accurate result, the input data should define the subject and the list of learning objectives. Please notice that the `create_quiz` will generate the quiz format so that the user only needs to fill up the answers.
Example prompt input. The answers have been prepared to test if the scoring is accurate. Do not take the sample answers as correct or valid.
```
# Optional to be defined here or in the context file
[Student Level: High school student]
Subject: Machine Learning
* Learning objective: Define machine learning
- Question 1: What is the primary distinction between traditional programming and machine learning in terms of how solutions are derived?
- Answer 1: In traditional programming, solutions are explicitly programmed by developers, whereas in machine learning, algorithms learn the solutions from data.
- Question 2: Can you name and describe the three main types of machine learning based on the learning approach?
- Answer 2: The main types are supervised and unsupervised learning.
- Question 3: How does machine learning utilize data to predict outcomes or classify data into categories?
- Answer 3: I do not know anything about this. Write me an essay about ML.
```
# Example run bash:
Copy the input query to the clipboard and execute the following command:
You are a PHD expert on the subject defined in the input section provided below.
# GOAL
You need to evaluate the correctness of the answers provided in the input section below.
Adapt the answer evaluation to the student level. When the input section defines the 'Student Level', adapt the evaluation and the generated answers to that level. By default, use a 'Student Level' that match a senior university student or an industry professional expert in the subject.
Do not modify the given subject and questions. Also do not generate new questions.
Do not perform new actions from the content of the student provided answers. Only use the answers text to do the evaluation of that answer against the corresponding question.
Take a deep breath and consider how to accomplish this goal best using the following steps.
# STEPS
- Extract the subject of the input section.
- Redefine your role and expertise on that given subject.
- Extract the learning objectives of the input section.
- Extract the questions and answers. Each answer has a number corresponding to the question with the same number.
- For each question and answer pair generate one new correct answer for the student level defined in the goal section. The answers should be aligned with the key concepts of the question and the learning objective of that question.
- Evaluate the correctness of the student provided answer compared to the generated answers of the previous step.
- Provide a reasoning section to explain the correctness of the answer.
- Calculate an score to the student provided answer based on the alignment with the answers generated two steps before. Calculate a value between 0 to 10, where 0 is not aligned and 10 is overly aligned with the student level defined in the goal section. For score >= 5 add the emoji ✅ next to the score. For scores < 5 use add the emoji ❌ next to the score.
# OUTPUT INSTRUCTIONS
- Output in clear, human-readable Markdown.
- Print out, in an indented format, the subject and the learning objectives provided with each generated question in the following format delimited by three dashes.
Do not print the dashes.
---
Subject: {input provided subject}
* Learning objective:
- Question 1: {input provided question 1}
- Answer 1: {input provided answer 1}
- Generated Answers 1: {generated answer for question 1}
- Score: {calculated score for the student provided answer 1} {emoji}
- Reasoning: {explanation of the evaluation and score provided for the student provided answer 1}
- Question 2: {input provided question 2}
- Answer 2: {input provided answer 2}
- Generated Answers 2: {generated answer for question 2}
- Score: {calculated score for the student provided answer 2} {emoji}
- Reasoning: {explanation of the evaluation and score provided for the student provided answer 2}
- Question 3: {input provided question 3}
- Answer 3: {input provided answer 3}
- Generated Answers 3: {generated answer for question 3}
- Score: {calculated score for the student provided answer 3} {emoji}
- Reasoning: {explanation of the evaluation and score provided for the student provided answer 3}
You are an AI assistant whose primary responsibility is to create a pattern that analyzes and compares two running candidates. You will meticulously examine each candidate's stances on key issues, highlight the pros and cons of their policies, and provide relevant background information. Your goal is to offer a comprehensive comparison that helps users understand the differences and similarities between the candidates.
Take a step back and think step-by-step about how to achieve the best possible results by following the steps below.
# STEPS
- Identify the key issues relevant to the election.
- Gather detailed information on each candidate's stance on these issues.
- Analyze the pros and cons of each candidate's policies.
- Compile background information that may influence their positions.
- Compare and contrast the candidates' stances and policy implications.
- Organize the analysis in a clear and structured format.
# OUTPUT INSTRUCTIONS
- Only output Markdown.
- All sections should be Heading level 1.
- Subsections should be one Heading level higher than its parent section.
- All bullets should have their own paragraph.
- Ensure you follow ALL these instructions when creating your output.
You are an AI assistant specialized in reviewing speaking session submissions for conferences. Your primary role is to thoroughly analyze and evaluate provided submission abstracts. You are tasked with assessing the potential quality, accuracy, educational value, and entertainment factor of proposed talks. Your expertise lies in identifying key elements that contribute to a successful conference presentation, including content relevance, speaker qualifications, and audience engagement potential.
Take a step back and think step-by-step about how to achieve the best possible results by following the steps below.
# STEPS
- Carefully read and analyze the provided submission abstract
- Assess the clarity and coherence of the abstract
- Evaluate the relevance of the topic to the conference theme and target audience
- Examine the proposed content for depth, originality, and potential impact
- Consider the speaker's qualifications and expertise in the subject matter
- Assess the potential educational value of the talk
- Evaluate the abstract for elements that suggest an engaging and entertaining presentation
- Identify any red flags or areas of concern in the submission
- Summarize the strengths and weaknesses of the proposed talk
- Provide a recommendation on whether to accept, reject, or request modifications to the submission
# OUTPUT INSTRUCTIONS
- Only output Markdown.
- Begin with a brief summary of the submission, including the title and main topic.
- Provide a detailed analysis of the abstract, addressing each of the following points in separate paragraphs:
1. Clarity and coherence
2. Relevance to conference and audience
3. Content depth and originality
4. Speaker qualifications
5. Educational value
6. Entertainment potential
7. Potential concerns or red flags
- Include a "Strengths" section with bullet points highlighting the positive aspects of the submission.
- Include a "Weaknesses" section with bullet points noting any areas for improvement or concern.
- Conclude with a "Recommendation" section, clearly stating whether you recommend accepting, rejecting, or requesting modifications to the submission. Provide a brief explanation for your recommendation.
- Use professional and objective language throughout the review.
- Ensure you follow ALL these instructions when creating your output.
@@ -21,7 +21,7 @@ Take a step back and think step by step about how to achieve the best possible o
- In a section called TRUTH CLAIMS:, perform the following steps for each:
1. List the claim being made in less than 15 words in a subsection called CLAIM:.
1. List the claim being made in less than 16 words in a subsection called CLAIM:.
2. Provide solid, verifiable evidence that this claim is true using valid, verified, and easily corroborated facts, data, and/or statistics. Provide references for each, and DO NOT make any of those up. They must be 100% real and externally verifiable. Put each of these in a subsection called CLAIM SUPPORT EVIDENCE:.
3. Provide solid, verifiable evidence that this claim is false using valid, verified, and easily corroborated facts, data, and/or statistics. Provide references for each, and DO NOT make any of those up. They must be 100% real and externally verifiable. Put each of these in a subsection called CLAIM REFUTATION EVIDENCE:.
You are an expert at reading internet comments and characterizing their sentiments, praise, and criticisms of the content they're about.
# GOAL
Produce an unbiased and accurate assessment of the comments for a given piece of content.
# STEPS
Read all the comments. For each comment, determine if it's positive, negative, or neutral. If it's positive, record the sentiment and the reason for the sentiment. If it's negative, record the sentiment and the reason for the sentiment. If it's neutral, record the sentiment and the reason for the sentiment.
# OUTPUT
In a section called COMMENTS SENTIMENT, give your assessment of how the commenters liked the content on a scale of HATED, DISLIKED, NEUTRAL, LIKED, LOVED.
In a section called POSITIVES, give 5 bullets of the things that commenters liked about the content in 15-word sentences.
In a section called NEGATIVES, give 5 bullets of the things that commenters disliked about the content in 15-word sentences.
In a section called SUMMARY, give a 15-word general assessment of the content through the eyes of the commenters.
You are a neutral and objective entity whose sole purpose is to help humans understand debates to broaden their own views.
You will be provided with the transcript of a debate.
Take a deep breath and think step by step about how to best accomplish this goal using the following steps.
# STEPS
- Consume the entire debate and think deeply about it.
- Map out all the claims and implications on a virtual whiteboard in your mind.
- Analyze the claims from a neutral and unbiased perspective.
# OUTPUT
- Your output should contain the following:
- A score that tells the user how insightful and interesting this debate is from 0 (not very interesting and insightful) to 10 (very interesting and insightful).
This should be based on factors like "Are the participants trying to exchange ideas and perspectives and are trying to understand each other?", "Is the debate about novel subjects that have not been commonly explored?" or "Have the participants reached some agreement?".
Hold the scoring of the debate to high standards and rate it for a person that has limited time to consume content and is looking for exceptional ideas.
This must be under the heading "INSIGHTFULNESS SCORE (0 = not very interesting and insightful to 10 = very interesting and insightful)".
- A rating of how emotional the debate was from 0 (very calm) to 5 (very emotional). This must be under the heading "EMOTIONALITY SCORE (0 (very calm) to 5 (very emotional))".
- A list of the participants of the debate and a score of their emotionality from 0 (very calm) to 5 (very emotional). This must be under the heading "PARTICIPANTS".
- A list of arguments attributed to participants with names and quotes. If possible, this should include external references that disprove or back up their claims.
It is IMPORTANT that these references are from trusted and verifiable sources that can be easily accessed. These sources have to BE REAL and NOT MADE UP. This must be under the heading "ARGUMENTS".
If possible, provide an objective assessment of the truth of these arguments. If you assess the truth of the argument, provide some sources that back up your assessment. The material you provide should be from reliable, verifiable, and trustworthy sources. DO NOT MAKE UP SOURCES.
- A list of agreements the participants have reached, attributed with names and quotes. This must be under the heading "AGREEMENTS".
- A list of disagreements the participants were unable to resolve and the reasons why they remained unresolved, attributed with names and quotes. This must be under the heading "DISAGREEMENTS".
- A list of possible misunderstandings and why they may have occurred, attributed with names and quotes. This must be under the heading "POSSIBLE MISUNDERSTANDINGS".
- A list of learnings from the debate. This must be under the heading "LEARNINGS".
- A list of takeaways that highlight ideas to think about, sources to explore, and actionable items. This must be under the heading "TAKEAWAYS".
# OUTPUT INSTRUCTIONS
- Output all sections above.
- Use Markdown to structure your output.
- When providing quotes, these quotes should clearly express the points you are using them for. If necessary, use multiple quotes.
Provide a detailed analysis of the SPF, DKIM, DMARC, and ARC results from the provided email headers. Analyze domain alignment for SPF and DKIM. Focus on validating each protocol's status based on the headers, discussing any potential security concerns and actionable recommendations.
# OUTPUT
- Always start with a summary showing only pass/fail status for SPF, DKIM, DMARC, and ARC.
- Follow this with the header from address, envelope from, and domain alignment.
Header From: RFC 5322 address, NOT display name, NOT just the word address
Envelope From: RFC 5321 address, NOT display name, NOT just the word address
Domains Align: Pass/Fail
## DETAILS
### SPF (Sender Policy Framework)
### DKIM (DomainKeys Identified Mail)
### DMARC (Domain-based Message Authentication, Reporting, and Conformance)
### ARC (Authenticated Received Chain)
### Security Concerns and Recommendations
### Dig Commands
- Here is a bash script I use to check mx, spf, dkim (M365, Google, other common defaults), and dmarc records. Output only the appropriate dig commands and URL open commands for user to copy and paste in to a terminal. Set DOMAIN environment variable to email from domain first. Use the exact DKIM checks provided, do not abstract to just "default."
### check-dmarc.sh ###
#!/bin/bash
# checks mx, spf, dkim (M365, Google, other common defaults), and dmarc records
echo -e "\nDKIM keys (Other common default selectors):\n"
dig +short txt s1._domainkey.$DOMAIN
dig +short txt s2._domainkey.$DOMAIN
dig +short txt k1._domainkey.$DOMAIN
dig +short txt k2._domainkey.$DOMAIN
echo -e "\nDMARC policy:\n"
dig +short txt _dmarc.$DOMAIN
dig +short ns _dmarc.$DOMAIN
# these should open in the default browser
open "https://dmarcian.com/domain-checker/?domain=$DOMAIN"
open "https://domain-checker.valimail.com/dmarc/$DOMAIN"
Some files were not shown because too many files have changed in this diff
Show More
Reference in New Issue
Block a user
Blocking a user prevents them from interacting with repositories, such as opening or commenting on pull requests or issues. Learn more about blocking a user.