Compare commits

...

24 Commits

Author SHA1 Message Date
github-actions[bot]
278d488dbf chore(release): Update version to v1.4.326 2025-11-16 19:36:17 +00:00
Kayvan Sylvan
d590c0dd15 Merge pull request #1830 from ksylvan/kayvan/newline-in-output-fix
Ensure final newline in model generated outputs
2025-11-16 11:33:47 -08:00
Kayvan Sylvan
c936f8e77b feat: ensure newline in CreateOutputFile and improve tests
- Add newline to `CreateOutputFile` if missing
- Use `t.Cleanup` for file removal in tests
- Add test for message with trailing newline
- Introduce `printedStream` flag in `Chatter.Send`
- Print newline if stream printed without trailing newline
2025-11-16 11:15:47 -08:00
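A minimal Go sketch of the newline guarantee this commit describes; the function name follows the commit message, but Fabric's actual signature and error handling may differ.

```go
package output

import (
	"os"
	"strings"
)

// CreateOutputFile writes the model's message to fileName, appending a
// trailing newline when the message does not already end with one.
// Sketch only; Fabric's real implementation may differ.
func CreateOutputFile(message, fileName string) error {
	if !strings.HasSuffix(message, "\n") {
		message += "\n"
	}
	return os.WriteFile(fileName, []byte(message), 0o644)
}
```

Per the commit bullets, the `printedStream` flag in `Chatter.Send` covers the streaming path the same way: if the stream was printed without a trailing newline, one is printed afterward.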
Kayvan Sylvan
7dacc07f03 chore: update README with recent features and extensions
### CHANGES

- Add v1.4.322 release with concept maps
- Introduce WELLNESS category with psychological analysis
- Upgrade to Claude Sonnet 4.5
- Add Portuguese language variants with BCP 47 support
- Migrate to `openai-go/azure` SDK for Azure
- Add Extensions section to README navigation
2025-11-15 09:34:27 -08:00
github-actions[bot]
4e6a2736ad chore(release): Update version to v1.4.325 2025-11-15 05:25:51 +00:00
Kayvan Sylvan
14c95d7bc1 Merge pull request #1828 from ksylvan/kayvan/fix-empty-input-bug
Fix empty string detection in chatter and AI clients
2025-11-14 21:22:53 -08:00
Changelog Bot
2e7b664e1e chore: incoming 1828 changelog entry 2025-11-14 21:20:52 -08:00
Kayvan Sylvan
729d092754 chore: improve message handling by trimming whitespace in content checks
### CHANGES

- Remove default space in `BuildSession` message content
- Trim whitespace in `anthropic` message content check
- Trim whitespace in `gemini` message content check
2025-11-14 21:13:08 -08:00
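A minimal sketch of the trimmed-content check this commit adds to the anthropic and gemini clients; the helper name is illustrative, not Fabric's actual code.

```go
package chat

import "strings"

// hasContent reports whether a message still carries content once
// surrounding whitespace is removed, so whitespace-only messages are
// treated as empty rather than sent to the provider.
func hasContent(text string) bool {
	return strings.TrimSpace(text) != ""
}
```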
github-actions[bot]
5b7017d67b chore(release): Update version to v1.4.324 2025-11-14 07:49:26 +00:00
Kayvan Sylvan
6f5b89a0df Merge pull request #1827 from ksylvan/kayvan/fix-youtube-key-not-optional
Make YouTube API key optional in setup
2025-11-13 23:46:45 -08:00
Kayvan Sylvan
d02a55ee01 feat: make YouTube API key optional in setup
- Change API key setup question to optional
- Add test for optional API key behavior
- Ensure plugin configuration without API key
- chore: incoming 1827 changelog entry
2025-11-13 23:44:41 -08:00
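A rough sketch of the optional-key behavior described in this commit; the type and method names are assumptions, not Fabric's actual plugin code.

```go
package youtube

import "errors"

// Plugin holds the YouTube configuration; the API key may be left blank,
// and setup still completes.
type Plugin struct {
	APIKey string
}

// RequireAPIKey fails only when a feature that needs the YouTube Data API
// is invoked without a configured key.
func (p *Plugin) RequireAPIKey() error {
	if p.APIKey == "" {
		return errors.New("youtube: API key not configured; set one to use API-backed features")
	}
	return nil
}
```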
github-actions[bot]
c498085feb chore(release): Update version to v1.4.323 2025-11-12 01:24:07 +00:00
Kayvan Sylvan
4996832e64 Merge pull request #1802 from nickarino/input-extension-bug-fix
fix: improve template extension handling for {{input}} and add examples
2025-11-11 17:21:13 -08:00
Kayvan Sylvan
79d04b2ada add byid to spell list 2025-11-11 17:18:21 -08:00
Kayvan Sylvan
c7206c0a01 docs: minor formatting fixes 2025-11-11 17:16:55 -08:00
Kayvan Sylvan
4aceb64284 chore: incoming 1823 changelog entry 2025-11-11 11:46:42 -08:00
Kayvan Sylvan
4864a63d35 Merge pull request #1823 from ksylvan/kayvan/add-missing-pattern-explanations
Add missing patterns and renumber pattern explanations list
2025-11-10 14:10:07 -08:00
Kayvan Sylvan
8e18753c0f docs: add new patterns and renumber pattern explanations list
# CHANGES

- Add `apply_ul_tags` pattern for content categorization
- Add `extract_mcp_servers` pattern for MCP server identification
- Add `generate_code_rules` pattern for AI coding guardrails
- Add `t_check_dunning_kruger` pattern for competence assessment
- Renumber all patterns from 37-226 to 37-230
- Insert new patterns at positions 37, 129, 153, 203
2025-11-10 14:01:29 -08:00
Nick Skriloff
b8027582f4 docs: clarify extensions only work within patterns, not stdin
- Add prominent warning at top of Extensions guide with visual indicators
- Update main README with brief Extensions section and link to full guide
- Remove misleading examples showing direct piping to fabric
- Add clear examples: what DOES NOT WORK vs what WORKS
- Consolidate all extension documentation in Examples/README.md
- Explain technical reason: extensions only processed via ApplyTemplate()
- Prevents user confusion about extension syntax processing
2025-10-31 19:53:47 -04:00
Nick Skriloff
4b82534708 refactor: address PR review feedback
- Extract InputSentinel constant to shared constants.go file
- Remove duplicate inputSentinel definitions from template.go and patterns.go
- Create withTestExtension helper function to reduce test code duplication
- Refactor 3 test functions to use the helper (reduces ~40 lines per test)
- Fix shell script to use $@ instead of $* for proper argument quoting

Addresses review comments from @ksylvan and @Copilot AI
2025-10-31 13:27:38 -04:00
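This compare view does not show how `InputSentinel` is used; a plausible reading of the commits above is that `{{input}}` is swapped for a unique placeholder while extensions and variables are expanded, and only afterwards for the raw user input. A hypothetical sketch, with the constant value and function name assumed:

```go
package template

import "strings"

// InputSentinel stands in for {{input}} during template expansion so the
// user's input is never itself scanned for template or extension syntax.
// The constant value here is illustrative only.
const InputSentinel = "__FABRIC_INPUT_SENTINEL__"

// applyWithInput is a hypothetical helper: expand runs extension and
// variable processing over the template, then the sentinel is replaced
// with the raw input at the end.
func applyWithInput(template, userInput string, expand func(string) string) string {
	withSentinel := strings.ReplaceAll(template, "{{input}}", InputSentinel)
	expanded := expand(withSentinel)
	return strings.ReplaceAll(expanded, InputSentinel, userInput)
}
```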
Nick Skriloff
eb1cfe8340 Complete merge from upstream/main 2025-10-30 21:13:46 -04:00
Nick Skriloff
f8f9f6ba65 Update internal/plugins/template/Examples/openai.yaml
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-10-20 20:42:52 -04:00
Changelog Bot
bc273db19d chore: incoming 1802 changelog entry 2025-10-20 19:57:13 -04:00
Nick Skriloff
29c24c8387 fix: improve template extension handling for {{input}} and add examples 2025-10-20 19:49:33 -04:00
24 changed files with 932 additions and 375 deletions

View File

@@ -15,6 +15,7 @@
"blindspots",
"Bombal",
"Buildx",
"byid",
"Callirhoe",
"Callirrhoe",
"Cerebras",

View File

@@ -1,5 +1,65 @@
# Changelog
## v1.4.326 (2025-11-16)
### PR [#1830](https://github.com/danielmiessler/Fabric/pull/1830) by [ksylvan](https://github.com/ksylvan): Ensure final newline in model generated outputs
- Feat: ensure newline in `CreateOutputFile` and improve tests
- Add newline to `CreateOutputFile` if missing
- Use `t.Cleanup` for file removal in tests
- Add test for message with trailing newline
- Introduce `printedStream` flag in `Chatter.Send`
### Direct commits
- Chore: update README with recent features and extensions
- Add v1.4.322 release with concept maps
- Introduce WELLNESS category with psychological analysis
- Upgrade to Claude Sonnet 4.5
- Add Portuguese language variants with BCP 47 support
- Migrate to `openai-go/azure` SDK for Azure
- Add Extensions section to README navigation
## v1.4.325 (2025-11-15)
### PR [#1828](https://github.com/danielmiessler/Fabric/pull/1828) by [ksylvan](https://github.com/ksylvan): Fix empty string detection in chatter and AI clients
- Chore: improve message handling by trimming whitespace in content checks
- Remove default space in `BuildSession` message content
- Trim whitespace in `anthropic` message content check
- Trim whitespace in `gemini` message content check
## v1.4.324 (2025-11-14)
### PR [#1827](https://github.com/danielmiessler/Fabric/pull/1827) by [ksylvan](https://github.com/ksylvan): Make YouTube API key optional in setup
- Make YouTube API key optional in setup process
- Change API key setup question to optional configuration
- Add test for optional API key behavior
- Ensure plugin configuration works without API key
## v1.4.323 (2025-11-12)
### PR [#1802](https://github.com/danielmiessler/Fabric/pull/1802) by [nickarino](https://github.com/nickarino): fix: improve template extension handling for {{input}} and add examples
- Fix: improve template extension handling for {{input}} and add examples
### PR [#1823](https://github.com/danielmiessler/Fabric/pull/1823) by [ksylvan](https://github.com/ksylvan): Add missing patterns and renumber pattern explanations list
- Add `apply_ul_tags` pattern for content categorization
- Add `extract_mcp_servers` pattern for MCP server identification
- Add `generate_code_rules` pattern for AI coding guardrails
- Add `t_check_dunning_kruger` pattern for competence assessment
- Renumber all patterns from 37-226 to 37-230
### Direct commits
- Chore: incoming 1823 changelog entry
## v1.4.322 (2025-11-05)
### PR [#1814](https://github.com/danielmiessler/Fabric/pull/1814) by [ksylvan](https://github.com/ksylvan): Add Concept Map in html

View File

@@ -73,6 +73,9 @@ Below are the **new features and capabilities** we've added (newest first):
### Recent Major Features
- [v1.4.322](https://github.com/danielmiessler/fabric/releases/tag/v1.4.322) (Nov 5, 2025) — **Interactive HTML Concept Maps and Claude Sonnet 4.5**: Adds `create_conceptmap` pattern for visual knowledge representation using Vis.js, introduces WELLNESS category with psychological analysis patterns, and upgrades to Claude Sonnet 4.5
- [v1.4.317](https://github.com/danielmiessler/fabric/releases/tag/v1.4.317) (Sep 21, 2025) — **Portuguese Language Variants**: Adds BCP 47 locale normalization with support for Brazilian Portuguese (pt-BR) and European Portuguese (pt-PT) with intelligent fallback chains
- [v1.4.314](https://github.com/danielmiessler/fabric/releases/tag/v1.4.314) (Sep 17, 2025) — **Azure OpenAI Migration**: Migrates to official `openai-go/azure` SDK with improved authentication and default API version support
- [v1.4.311](https://github.com/danielmiessler/fabric/releases/tag/v1.4.311) (Sep 13, 2025) — **More internationalization support**: Adds de (German), fa (Persian / Farsi), fr (French), it (Italian), ja (Japanese), pt (Portuguese), zh (Chinese)
- [v1.4.309](https://github.com/danielmiessler/fabric/releases/tag/v1.4.309) (Sep 9, 2025) — **Comprehensive internationalization support**: Includes English and Spanish locale files.
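Fabric's locale handling is not part of this diff, but the pt-BR / pt-PT fallback described above can be illustrated with the standard golang.org/x/text/language matcher (an illustration under that assumption, not Fabric's implementation):

```go
package i18n

import "golang.org/x/text/language"

// supported lists the shipped locales; the matcher resolves any BCP 47
// request to the closest one, e.g. plain "pt" falls back to a Portuguese
// variant instead of failing.
var supported = []language.Tag{
	language.MustParse("en"),
	language.MustParse("pt-BR"),
	language.MustParse("pt-PT"),
}

var matcher = language.NewMatcher(supported)

// bestLocale normalizes a requested BCP 47 tag to the closest supported tag.
func bestLocale(requested string) language.Tag {
	_, index, _ := matcher.Match(language.Make(requested))
	return supported[index]
}
```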
@@ -161,6 +164,7 @@ Keep in mind that many of these were recorded when Fabric was Python-based, so r
- [Fish Completion](#fish-completion)
- [Usage](#usage)
- [Debug Levels](#debug-levels)
- [Extensions](#extensions)
- [Our approach to prompting](#our-approach-to-prompting)
- [Examples](#examples)
- [Just use the Patterns](#just-use-the-patterns)
@@ -705,6 +709,12 @@ Use the `--debug` flag to control runtime logging:
- `2`: detailed debugging
- `3`: trace level
### Extensions
Fabric supports extensions that can be called within patterns. See the [Extension Guide](internal/plugins/template/Examples/README.md) for complete documentation.
**Important:** Extensions only work within pattern files, not via direct stdin. See the guide for details and examples.
## Our approach to prompting
Fabric _Patterns_ are different than most prompts you'll see.

View File

@@ -1,3 +1,3 @@
package main
var version = "v1.4.322"
var version = "v1.4.326"

Binary file not shown.

View File

@@ -38,193 +38,197 @@
34. **analyze_threat_report_cmds**: Extract and synthesize actionable cybersecurity commands from provided materials, incorporating command-line arguments and expert insights for pentesters and non-experts.
35. **analyze_threat_report_trends**: Extract up to 50 surprising, insightful, and interesting trends from a cybersecurity threat report in markdown format.
36. **answer_interview_question**: Generates concise, tailored responses to technical interview questions, incorporating alternative approaches and evidence to demonstrate the candidate's expertise and experience.
37. **ask_secure_by_design_questions**: Generates a set of security-focused questions to ensure a project is built securely by design, covering key components and considerations.
38. **ask_uncle_duke**: Coordinates a team of AI agents to research and produce multiple software development solutions based on provided specifications, and conducts detailed code reviews to ensure adherence to best practices.
39. **capture_thinkers_work**: Analyze philosophers or philosophies and provide detailed summaries about their teachings, background, works, advice, and related concepts in a structured template.
40. **check_agreement**: Analyze contracts and agreements to identify important stipulations, issues, and potential gotchas, then summarize them in Markdown.
41. **clean_text**: Fix broken or malformatted text by correcting line breaks, punctuation, capitalization, and paragraphs without altering content or spelling.
42. **coding_master**: Explain a coding concept to a beginner, providing examples, and formatting code in markdown with specific output sections like ideas, recommendations, facts, and insights.
43. **compare_and_contrast**: Compare and contrast a list of items in a markdown table, with items on the left and topics on top.
44. **convert_to_markdown**: Convert content to clean, complete Markdown format, preserving all original structure, formatting, links, and code blocks without alterations.
45. **create_5_sentence_summary**: Create concise summaries or answers to input at 5 different levels of depth, from 5 words to 1 word.
46. **create_academic_paper**: Generate a high-quality academic paper in LaTeX format with clear concepts, structured content, and a professional layout.
47. **create_ai_jobs_analysis**: Analyze job categories' susceptibility to automation, identify resilient roles, and provide strategies for personal adaptation to AI-driven changes in the workforce.
48. **create_aphorisms**: Find and generate a list of brief, witty statements.
49. **create_art_prompt**: Generates a detailed, compelling visual description of a concept, including stylistic references and direct AI instructions for creating art.
50. **create_better_frame**: Identifies and analyzes different frames of interpreting reality, emphasizing the power of positive, productive lenses in shaping outcomes.
51. **create_coding_feature**: Generates secure and composable code features using modern technology and best practices from project specifications.
52. **create_coding_project**: Generate wireframes and starter code for any coding ideas that you have.
53. **create_command**: Helps determine the correct parameters and switches for penetration testing tools based on a brief description of the objective.
54. **create_conceptmap**: Transforms unstructured text or markdown content into an interactive HTML concept map using Vis.js by extracting key concepts and their logical relationships.
55. **create_cyber_summary**: Summarizes cybersecurity threats, vulnerabilities, incidents, and malware with a 25-word summary and categorized bullet points, after thoroughly analyzing and mapping the provided input.
56. **create_design_document**: Creates a detailed design document for a system using the C4 model, addressing business and security postures, and including a system context diagram.
57. **create_diy**: Creates structured "Do It Yourself" tutorial patterns by analyzing prompts, organizing requirements, and providing step-by-step instructions in Markdown format.
58. **create_excalidraw_visualization**: Creates complex Excalidraw diagrams to visualize relationships between concepts and ideas in structured format.
59. **create_flash_cards**: Creates flashcards for key concepts, definitions, and terms with question-answer format for educational purposes.
60. **create_formal_email**: Crafts professional, clear, and respectful emails by analyzing context, tone, and purpose, ensuring proper structure and formatting.
61. **create_git_diff_commit**: Generates Git commands and commit messages for reflecting changes in a repository, using conventional commits and providing concise shell commands for updates.
62. **create_graph_from_input**: Generates a CSV file with progress-over-time data for a security program, focusing on relevant metrics and KPIs.
63. **create_hormozi_offer**: Creates a customized business offer based on principles from Alex Hormozi's book, "$100M Offers."
64. **create_idea_compass**: Organizes and structures ideas by exploring their definition, evidence, sources, and related themes or consequences.
65. **create_investigation_visualization**: Creates detailed Graphviz visualizations of complex input, highlighting key aspects and providing clear, well-annotated diagrams for investigative analysis and conclusions.
66. **create_keynote**: Creates TED-style keynote presentations with a clear narrative, structured slides, and speaker notes, emphasizing impactful takeaways and cohesive flow.
67. **create_loe_document**: Creates detailed Level of Effort documents for estimating work effort, resources, and costs for tasks or projects.
68. **create_logo**: Creates simple, minimalist company logos without text, generating AI prompts for vector graphic logos based on input.
69. **create_markmap_visualization**: Transforms complex ideas into clear visualizations using MarkMap syntax, simplifying concepts into diagrams with relationships, boxes, arrows, and labels.
70. **create_mermaid_visualization**: Creates detailed, standalone visualizations of concepts using Mermaid (Markdown) syntax, ensuring clarity and coherence in diagrams.
71. **create_mermaid_visualization_for_github**: Creates standalone, detailed visualizations using Mermaid (Markdown) syntax to effectively explain complex concepts, ensuring clarity and precision.
72. **create_micro_summary**: Summarizes content into a concise, 20-word summary with main points and takeaways, formatted in Markdown.
73. **create_mnemonic_phrases**: Creates memorable mnemonic sentences from given words to aid in memory retention and learning.
74. **create_network_threat_landscape**: Analyzes open ports and services from a network scan and generates a comprehensive, insightful, and detailed security threat report in Markdown.
75. **create_newsletter_entry**: Condenses provided article text into a concise, objective, newsletter-style summary with a title in the style of Frontend Weekly.
76. **create_npc**: Generates a detailed D&D 5E NPC, including background, flaws, stats, appearance, personality, goals, and more in Markdown format.
77. **create_pattern**: Extracts, organizes, and formats LLM/AI prompts into structured sections, detailing the AI's role, instructions, output format, and any provided examples for clarity and accuracy.
78. **create_prd**: Creates a precise Product Requirements Document (PRD) in Markdown based on input.
79. **create_prediction_block**: Extracts and formats predictions from input into a structured Markdown block for a blog post.
80. **create_quiz**: Creates a three-phase reading plan based on an author or topic to help the user become significantly knowledgeable, including core, extended, and supplementary readings.
81. **create_reading_plan**: Generates review questions based on learning objectives from the input, adapted to the specified student level, and outputs them in a clear markdown format.
82. **create_recursive_outline**: Breaks down complex tasks or projects into manageable, hierarchical components with recursive outlining for clarity and simplicity.
83. **create_report_finding**: Creates a detailed, structured security finding report in markdown, including sections on Description, Risk, Recommendations, References, One-Sentence-Summary, and Quotes.
84. **create_rpg_summary**: Summarizes an in-person RPG session with key events, combat details, player stats, and role-playing highlights in a structured format.
85. **create_security_update**: Creates concise security updates for newsletters, covering stories, threats, advisories, vulnerabilities, and a summary of key issues.
86. **create_show_intro**: Creates compelling short intros for podcasts, summarizing key topics and themes discussed in the episode.
87. **create_sigma_rules**: Extracts Tactics, Techniques, and Procedures (TTPs) from security news and converts them into Sigma detection rules for host-based detections.
88. **create_story_about_person**: Creates compelling, realistic short stories based on psychological profiles, showing how characters navigate everyday problems using strategies consistent with their personality traits.
37. **apply_ul_tags**: Apply standardized content tags to categorize topics like AI, cybersecurity, politics, and culture.
38. **ask_secure_by_design_questions**: Generates a set of security-focused questions to ensure a project is built securely by design, covering key components and considerations.
39. **ask_uncle_duke**: Coordinates a team of AI agents to research and produce multiple software development solutions based on provided specifications, and conducts detailed code reviews to ensure adherence to best practices.
40. **capture_thinkers_work**: Analyze philosophers or philosophies and provide detailed summaries about their teachings, background, works, advice, and related concepts in a structured template.
41. **check_agreement**: Analyze contracts and agreements to identify important stipulations, issues, and potential gotchas, then summarize them in Markdown.
42. **clean_text**: Fix broken or malformatted text by correcting line breaks, punctuation, capitalization, and paragraphs without altering content or spelling.
43. **coding_master**: Explain a coding concept to a beginner, providing examples, and formatting code in markdown with specific output sections like ideas, recommendations, facts, and insights.
44. **compare_and_contrast**: Compare and contrast a list of items in a markdown table, with items on the left and topics on top.
45. **convert_to_markdown**: Convert content to clean, complete Markdown format, preserving all original structure, formatting, links, and code blocks without alterations.
46. **create_5_sentence_summary**: Create concise summaries or answers to input at 5 different levels of depth, from 5 words to 1 word.
47. **create_academic_paper**: Generate a high-quality academic paper in LaTeX format with clear concepts, structured content, and a professional layout.
48. **create_ai_jobs_analysis**: Analyze job categories' susceptibility to automation, identify resilient roles, and provide strategies for personal adaptation to AI-driven changes in the workforce.
49. **create_aphorisms**: Find and generate a list of brief, witty statements.
50. **create_art_prompt**: Generates a detailed, compelling visual description of a concept, including stylistic references and direct AI instructions for creating art.
51. **create_better_frame**: Identifies and analyzes different frames of interpreting reality, emphasizing the power of positive, productive lenses in shaping outcomes.
52. **create_coding_feature**: Generates secure and composable code features using modern technology and best practices from project specifications.
53. **create_coding_project**: Generate wireframes and starter code for any coding ideas that you have.
54. **create_command**: Helps determine the correct parameters and switches for penetration testing tools based on a brief description of the objective.
55. **create_conceptmap**: Transforms unstructured text or markdown content into an interactive HTML concept map using Vis.js by extracting key concepts and their logical relationships.
56. **create_cyber_summary**: Summarizes cybersecurity threats, vulnerabilities, incidents, and malware with a 25-word summary and categorized bullet points, after thoroughly analyzing and mapping the provided input.
57. **create_design_document**: Creates a detailed design document for a system using the C4 model, addressing business and security postures, and including a system context diagram.
58. **create_diy**: Creates structured "Do It Yourself" tutorial patterns by analyzing prompts, organizing requirements, and providing step-by-step instructions in Markdown format.
59. **create_excalidraw_visualization**: Creates complex Excalidraw diagrams to visualize relationships between concepts and ideas in structured format.
60. **create_flash_cards**: Creates flashcards for key concepts, definitions, and terms with question-answer format for educational purposes.
61. **create_formal_email**: Crafts professional, clear, and respectful emails by analyzing context, tone, and purpose, ensuring proper structure and formatting.
62. **create_git_diff_commit**: Generates Git commands and commit messages for reflecting changes in a repository, using conventional commits and providing concise shell commands for updates.
63. **create_graph_from_input**: Generates a CSV file with progress-over-time data for a security program, focusing on relevant metrics and KPIs.
64. **create_hormozi_offer**: Creates a customized business offer based on principles from Alex Hormozi's book, "$100M Offers."
65. **create_idea_compass**: Organizes and structures ideas by exploring their definition, evidence, sources, and related themes or consequences.
66. **create_investigation_visualization**: Creates detailed Graphviz visualizations of complex input, highlighting key aspects and providing clear, well-annotated diagrams for investigative analysis and conclusions.
67. **create_keynote**: Creates TED-style keynote presentations with a clear narrative, structured slides, and speaker notes, emphasizing impactful takeaways and cohesive flow.
68. **create_loe_document**: Creates detailed Level of Effort documents for estimating work effort, resources, and costs for tasks or projects.
69. **create_logo**: Creates simple, minimalist company logos without text, generating AI prompts for vector graphic logos based on input.
70. **create_markmap_visualization**: Transforms complex ideas into clear visualizations using MarkMap syntax, simplifying concepts into diagrams with relationships, boxes, arrows, and labels.
71. **create_mermaid_visualization**: Creates detailed, standalone visualizations of concepts using Mermaid (Markdown) syntax, ensuring clarity and coherence in diagrams.
72. **create_mermaid_visualization_for_github**: Creates standalone, detailed visualizations using Mermaid (Markdown) syntax to effectively explain complex concepts, ensuring clarity and precision.
73. **create_micro_summary**: Summarizes content into a concise, 20-word summary with main points and takeaways, formatted in Markdown.
74. **create_mnemonic_phrases**: Creates memorable mnemonic sentences from given words to aid in memory retention and learning.
75. **create_network_threat_landscape**: Analyzes open ports and services from a network scan and generates a comprehensive, insightful, and detailed security threat report in Markdown.
76. **create_newsletter_entry**: Condenses provided article text into a concise, objective, newsletter-style summary with a title in the style of Frontend Weekly.
77. **create_npc**: Generates a detailed D&D 5E NPC, including background, flaws, stats, appearance, personality, goals, and more in Markdown format.
78. **create_pattern**: Extracts, organizes, and formats LLM/AI prompts into structured sections, detailing the AI's role, instructions, output format, and any provided examples for clarity and accuracy.
79. **create_prd**: Creates a precise Product Requirements Document (PRD) in Markdown based on input.
80. **create_prediction_block**: Extracts and formats predictions from input into a structured Markdown block for a blog post.
81. **create_quiz**: Creates a three-phase reading plan based on an author or topic to help the user become significantly knowledgeable, including core, extended, and supplementary readings.
82. **create_reading_plan**: Generates review questions based on learning objectives from the input, adapted to the specified student level, and outputs them in a clear markdown format.
83. **create_recursive_outline**: Breaks down complex tasks or projects into manageable, hierarchical components with recursive outlining for clarity and simplicity.
84. **create_report_finding**: Creates a detailed, structured security finding report in markdown, including sections on Description, Risk, Recommendations, References, One-Sentence-Summary, and Quotes.
85. **create_rpg_summary**: Summarizes an in-person RPG session with key events, combat details, player stats, and role-playing highlights in a structured format.
86. **create_security_update**: Creates concise security updates for newsletters, covering stories, threats, advisories, vulnerabilities, and a summary of key issues.
87. **create_show_intro**: Creates compelling short intros for podcasts, summarizing key topics and themes discussed in the episode.
88. **create_sigma_rules**: Extracts Tactics, Techniques, and Procedures (TTPs) from security news and converts them into Sigma detection rules for host-based detections.
89. **create_story_about_people_interaction**: Analyze two personas, compare their dynamics, and craft a realistic, character-driven story from those insights.
90. **create_story_explanation**: Summarizes complex content in a clear, approachable story format that makes the concepts easy to understand.
91. **create_stride_threat_model**: Create a STRIDE-based threat model for a system design, identifying assets, trust boundaries, data flows, and prioritizing threats with mitigations.
92. **create_summary**: Summarizes content into a 20-word sentence, 10 main points (16 words max), and 5 key takeaways in Markdown format.
93. **create_tags**: Identifies at least 5 tags from text content for mind mapping tools, including authors and existing tags if present.
94. **create_threat_scenarios**: Identifies likely attack methods for any system by providing a narrative-based threat model, balancing risk and opportunity.
95. **create_ttrc_graph**: Creates a CSV file showing the progress of Time to Remediate Critical Vulnerabilities over time using given data.
96. **create_ttrc_narrative**: Creates a persuasive narrative highlighting progress in reducing the Time to Remediate Critical Vulnerabilities metric over time.
97. **create_upgrade_pack**: Extracts world model and task algorithm updates from content, providing beliefs about how the world works and task performance.
98. **create_user_story**: Writes concise and clear technical user stories for new features in complex software programs, formatted for all stakeholders.
99. **create_video_chapters**: Extracts interesting topics and timestamps from a transcript, providing concise summaries of key moments.
100. **create_visualization**: Transforms complex ideas into visualizations using intricate ASCII art, simplifying concepts where necessary.
101. **dialog_with_socrates**: Engages in deep, meaningful dialogues to explore and challenge beliefs using the Socratic method.
102. **enrich_blog_post**: Enhances Markdown blog files by applying instructions to improve structure, visuals, and readability for HTML rendering.
103. **explain_code**: Explains code, security tool output, configuration text, and answers questions based on the provided input.
104. **explain_docs**: Improves and restructures tool documentation into clear, concise instructions, including overviews, usage, use cases, and key features.
105. **explain_math**: Helps you understand mathematical concepts in a clear and engaging way.
106. **explain_project**: Summarizes project documentation into clear, concise sections covering the project, problem, solution, installation, usage, and examples.
107. **explain_terms**: Produces a glossary of advanced terms from content, providing a definition, analogy, and explanation of why each term matters.
108. **export_data_as_csv**: Extracts and outputs all data structures from the input in properly formatted CSV data.
109. **extract_algorithm_update_recommendations**: Extracts concise, practical algorithm update recommendations from the input and outputs them in a bulleted list.
110. **extract_article_wisdom**: Extracts surprising, insightful, and interesting information from content, categorizing it into sections like summary, ideas, quotes, facts, references, and recommendations.
111. **extract_book_ideas**: Extracts and outputs 50 to 100 of the most surprising, insightful, and interesting ideas from a book's content.
112. **extract_book_recommendations**: Extracts and outputs 50 to 100 practical, actionable recommendations from a book's content.
113. **extract_business_ideas**: Extracts top business ideas from content and elaborates on the best 10 with unique differentiators.
114. **extract_characters**: Identify all characters (human and non-human), resolve their aliases and pronouns into canonical names, and produce detailed descriptions of each character's role, motivations, and interactions ranked by narrative importance.
115. **extract_controversial_ideas**: Extracts and outputs controversial statements and supporting quotes from the input in a structured Markdown list.
116. **extract_core_message**: Extracts and outputs a clear, concise sentence that articulates the core message of a given text or body of work.
117. **extract_ctf_writeup**: Extracts a short writeup from a warstory-like text about a cyber security engagement.
118. **extract_domains**: Extracts domains and URLs from content to identify sources used for articles, newsletters, and other publications.
119. **extract_extraordinary_claims**: Extracts and outputs a list of extraordinary claims from conversations, focusing on scientifically disputed or false statements.
120. **extract_ideas**: Extracts and outputs all the key ideas from input, presented as 15-word bullet points in Markdown.
121. **extract_insights**: Extracts and outputs the most powerful and insightful ideas from text, formatted as 16-word bullet points in the INSIGHTS section, also IDEAS section.
122. **extract_insights_dm**: Extracts and outputs all valuable insights and a concise summary of the content, including key points and topics discussed.
123. **extract_instructions**: Extracts clear, actionable step-by-step instructions and main objectives from instructional video transcripts, organizing them into a concise list.
124. **extract_jokes**: Extracts jokes from text content, presenting each joke with its punchline in separate bullet points.
125. **extract_latest_video**: Extracts the latest video URL from a YouTube RSS feed and outputs the URL only.
126. **extract_main_activities**: Extracts key events and activities from transcripts or logs, providing a summary of what happened.
127. **extract_main_idea**: Extracts the main idea and key recommendation from the input, summarizing them in 15-word sentences.
128. **extract_most_redeeming_thing**: Extracts the most redeeming aspect from an input, summarizing it in a single 15-word sentence.
129. **extract_patterns**: Extracts and analyzes recurring, surprising, and insightful patterns from input, providing detailed analysis and advice for builders.
130. **extract_poc**: Extracts proof of concept URLs and validation methods from security reports, providing the URL and command to run.
131. **extract_predictions**: Extracts predictions from input, including specific details such as date, confidence level, and verification method.
132. **extract_primary_problem**: Extracts the primary problem with the world as presented in a given text or body of work.
133. **extract_primary_solution**: Extracts the primary solution for the world as presented in a given text or body of work.
134. **extract_product_features**: Extracts and outputs a list of product features from the provided input in a bulleted format.
135. **extract_questions**: Extracts and outputs all questions asked by the interviewer in a conversation or interview.
136. **extract_recipe**: Extracts and outputs a recipe with a short meal description, ingredients with measurements, and preparation steps.
137. **extract_recommendations**: Extracts and outputs concise, practical recommendations from a given piece of content in a bulleted list.
138. **extract_references**: Extracts and outputs a bulleted list of references to art, stories, books, literature, and other sources from content.
139. **extract_skills**: Extracts and classifies skills from a job description into a table, separating each skill and classifying it as either hard or soft.
140. **extract_song_meaning**: Analyzes a song to provide a summary of its meaning, supported by detailed evidence from lyrics, artist commentary, and fan analysis.
141. **extract_sponsors**: Extracts and lists official sponsors and potential sponsors from a provided transcript.
142. **extract_videoid**: Extracts and outputs the video ID from any given URL.
143. **extract_wisdom**: Extracts surprising, insightful, and interesting information from text on topics like human flourishing, AI, learning, and more.
144. **extract_wisdom_agents**: Extracts valuable insights, ideas, quotes, and references from content, emphasizing topics like human flourishing, AI, learning, and technology.
145. **extract_wisdom_dm**: Extracts all valuable, insightful, and thought-provoking information from content, focusing on topics like human flourishing, AI, learning, and technology.
146. **extract_wisdom_nometa**: Extracts insights, ideas, quotes, habits, facts, references, and recommendations from content, focusing on human flourishing, AI, technology, and related topics.
147. **find_female_life_partner**: Analyzes criteria for finding a female life partner and provides clear, direct, and poetic descriptions.
148. **find_hidden_message**: Extracts overt and hidden political messages, justifications, audience actions, and a cynical analysis from content.
149. **find_logical_fallacies**: Identifies and analyzes fallacies in arguments, classifying them as formal or informal with detailed reasoning.
150. **fix_typos**: Proofreads and corrects typos, spelling, grammar, and punctuation errors in text.
151. **get_wow_per_minute**: Determines the wow-factor of content per minute based on surprise, novelty, insight, value, and wisdom, measuring how rewarding the content is for the viewer.
152. **get_youtube_rss**: Returns the RSS URL for a given YouTube channel based on the channel ID or URL.
153. **heal_person**: Develops a comprehensive plan for spiritual and mental healing based on psychological profiles, providing personalized recommendations for mental health improvement and overall life enhancement.
154. **humanize**: Rewrites AI-generated text to sound natural, conversational, and easy to understand, maintaining clarity and simplicity.
155. **identify_dsrp_distinctions**: Encourages creative, systems-based thinking by exploring distinctions, boundaries, and their implications, drawing on insights from prominent systems thinkers.
156. **identify_dsrp_perspectives**: Explores the concept of distinctions in systems thinking, focusing on how boundaries define ideas, influence understanding, and reveal or obscure insights.
157. **identify_dsrp_relationships**: Encourages exploration of connections, distinctions, and boundaries between ideas, inspired by systems thinkers to reveal new insights and patterns in complex systems.
158. **identify_dsrp_systems**: Encourages organizing ideas into systems of parts and wholes, inspired by systems thinkers to explore relationships and how changes in organization impact meaning and understanding.
159. **identify_job_stories**: Identifies key job stories or requirements for roles.
160. **improve_academic_writing**: Refines text into clear, concise academic language while improving grammar, coherence, and clarity, with a list of changes.
161. **improve_prompt**: Improves an LLM/AI prompt by applying expert prompt writing strategies for better results and clarity.
162. **improve_report_finding**: Improves a penetration test security finding by providing detailed descriptions, risks, recommendations, references, quotes, and a concise summary in markdown format.
163. **improve_writing**: Refines text by correcting grammar, enhancing style, improving clarity, and maintaining the original meaning.
164. **judge_output**: Evaluates Honeycomb queries by judging their effectiveness, providing critiques and outcomes based on language nuances and analytics relevance.
165. **label_and_rate**: Labels content with up to 20 single-word tags and rates it based on idea count and relevance to human meaning, AI, and other related themes, assigning a tier (S, A, B, C, D) and a quality score.
166. **md_callout**: Classifies content and generates a markdown callout based on the provided text, selecting the most appropriate type.
167. **model_as_sherlock_freud**: Builds psychological models using detective reasoning and psychoanalytic insight to understand human behavior.
168. **official_pattern_template**: Template to use if you want to create new fabric patterns.
169. **prepare_7s_strategy**: Prepares a comprehensive briefing document from 7S's strategy capturing organizational profile, strategic elements, and market dynamics with clear, concise, and organized content.
170. **predict_person_actions**: Predicts behavioral responses based on psychological profiles and challenges.
171. **provide_guidance**: Provides psychological and life coaching advice, including analysis, recommendations, and potential diagnoses, with a compassionate and honest tone.
172. **rate_ai_response**: Rates the quality of AI responses by comparing them to top human expert performance, assigning a letter grade, reasoning, and providing a 1-100 score based on the evaluation.
173. **rate_ai_result**: Assesses the quality of AI/ML/LLM work by deeply analyzing content, instructions, and output, then rates performance based on multiple dimensions, including coverage, creativity, and interdisciplinary thinking.
174. **rate_content**: Labels content with up to 20 single-word tags and rates it based on idea count and relevance to human meaning, AI, and other related themes, assigning a tier (S, A, B, C, D) and a quality score.
175. **rate_value**: Produces the best possible output by deeply analyzing and understanding the input and its intended purpose.
176. **raw_query**: Fully digests and contemplates the input to produce the best possible result based on understanding the sender's intent.
177. **recommend_artists**: Recommends a personalized festival schedule with artists aligned to your favorite styles and interests, including rationale.
178. **recommend_pipeline_upgrades**: Optimizes vulnerability-checking pipelines by incorporating new information and improving their efficiency, with detailed explanations of changes.
179. **recommend_talkpanel_topics**: Produces a clean set of proposed talks or panel talking points for a person based on their interests and goals, formatted for submission to a conference organizer.
180. **recommend_yoga_practice**: Provides personalized yoga sequences, meditation guidance, and holistic lifestyle advice based on individual profiles.
181. **refine_design_document**: Refines a design document based on a design review by analyzing, mapping concepts, and implementing changes using valid Markdown.
182. **review_design**: Reviews and analyzes architecture design, focusing on clarity, component design, system integrations, security, performance, scalability, and data management.
183. **sanitize_broken_html_to_markdown**: Converts messy HTML into clean, properly formatted Markdown, applying custom styling and ensuring compatibility with Vite.
184. **suggest_pattern**: Suggests appropriate fabric patterns or commands based on user input, providing clear explanations and options for users.
185. **summarize**: Summarizes content into a 20-word sentence, main points, and takeaways, formatted with numbered lists in Markdown.
186. **summarize_board_meeting**: Creates formal meeting notes from board meeting transcripts for corporate governance documentation.
187. **summarize_debate**: Summarizes debates, identifies primary disagreement, extracts arguments, and provides analysis of evidence and argument strength to predict outcomes.
188. **summarize_git_changes**: Summarizes recent project updates from the last 7 days, focusing on key changes with enthusiasm.
189. **summarize_git_diff**: Summarizes and organizes Git diff changes with clear, succinct commit messages and bullet points.
190. **summarize_lecture**: Extracts relevant topics, definitions, and tools from lecture transcripts, providing structured summaries with timestamps and key takeaways.
191. **summarize_legislation**: Summarizes complex political proposals and legislation by analyzing key points, proposed changes, and providing balanced, positive, and cynical characterizations.
192. **summarize_meeting**: Analyzes meeting transcripts to extract a structured summary, including an overview, key points, tasks, decisions, challenges, timeline, references, and next steps.
193. **summarize_micro**: Summarizes content into a 20-word sentence, 3 main points, and 3 takeaways, formatted in clear, concise Markdown.
194. **summarize_newsletter**: Extracts the most meaningful, interesting, and useful content from a newsletter, summarizing key sections such as content, opinions, tools, companies, and follow-up items in clear, structured Markdown.
195. **summarize_paper**: Summarizes an academic paper by detailing its title, authors, technical approach, distinctive features, experimental setup, results, advantages, limitations, and conclusion in a clear, structured format using human-readable Markdown.
196. **summarize_prompt**: Summarizes AI chat prompts by describing the primary function, unique approach, and expected output in a concise paragraph. The summary is focused on the prompt's purpose without unnecessary details or formatting.
197. **summarize_pull-requests**: Summarizes pull requests for a coding project by providing a summary and listing the top PRs with human-readable descriptions.
198. **summarize_rpg_session**: Summarizes a role-playing game session by extracting key events, combat stats, character changes, quotes, and more.
199. **t_analyze_challenge_handling**: Provides 8-16 word bullet points evaluating how well challenges are being addressed, calling out any lack of effort.
200. **t_check_metrics**: Analyzes deep context from the TELOS file and input instruction, then provides a wisdom-based output while considering metrics and KPIs to assess recent improvements.
201. **t_create_h3_career**: Summarizes context and produces wisdom-based output by deeply analyzing both the TELOS File and the input instruction, considering the relationship between the two.
202. **t_create_opening_sentences**: Describes from TELOS file the person's identity, goals, and actions in 4 concise, 32-word bullet points, humbly.
203. **t_describe_life_outlook**: Describes from TELOS file a person's life outlook in 5 concise, 16-word bullet points.
204. **t_extract_intro_sentences**: Summarizes from TELOS file a person's identity, work, and current projects in 5 concise and grounded bullet points.
205. **t_extract_panel_topics**: Creates 5 panel ideas with titles and descriptions based on deep context from a TELOS file and input.
206. **t_find_blindspots**: Identify potential blindspots in thinking, frames, or models that may expose the individual to error or risk.
207. **t_find_negative_thinking**: Analyze a TELOS file and input to identify negative thinking in documents or journals, followed by tough love encouragement.
208. **t_find_neglected_goals**: Analyze a TELOS file and input instructions to identify goals or projects that have not been worked on recently.
209. **t_give_encouragement**: Analyze a TELOS file and input instructions to evaluate progress, provide encouragement, and offer recommendations for continued effort.
210. **t_red_team_thinking**: Analyze a TELOS file and input instructions to red-team thinking, models, and frames, then provide recommendations for improvement.
211. **t_threat_model_plans**: Analyze a TELOS file and input instructions to create threat models for a life plan and recommend improvements.
212. **t_visualize_mission_goals_projects**: Analyze a TELOS file and input instructions to create an ASCII art diagram illustrating the relationship of missions, goals, and projects.
213. **t_year_in_review**: Analyze a TELOS file to create insights about a person or entity, then summarize accomplishments and visualizations in bullet points.
214. **to_flashcards**: Create Anki flashcards from a given text, focusing on concise, optimized questions and answers without external context.
215. **transcribe_minutes**: Extracts (from meeting transcription) meeting minutes, identifying actionables, insightful ideas, decisions, challenges, and next steps in a structured format.
216. **translate**: Translates sentences or documentation into the specified language code while maintaining the original formatting and tone.
217. **tweet**: Provides a step-by-step guide on crafting engaging tweets with emojis, covering Twitter basics, account creation, features, and audience targeting.
218. **write_essay**: Writes essays in the style of a specified author, embodying their unique voice, vocabulary, and approach. Uses `author_name` variable.
219. **write_essay_pg**: Writes concise, clear essays in the style of Paul Graham, focusing on simplicity, clarity, and illumination of the provided topic.
220. **write_hackerone_report**: Generates concise, clear, and reproducible bug bounty reports, detailing vulnerability impact, steps to reproduce, and exploit details for triagers.
221. **write_latex**: Generates syntactically correct LaTeX code for a new.tex document, ensuring proper formatting and compatibility with pdflatex.
222. **write_micro_essay**: Writes concise, clear, and illuminating essays on the given topic in the style of Paul Graham.
223. **write_nuclei_template_rule**: Generates Nuclei YAML templates for detecting vulnerabilities using HTTP requests, matchers, extractors, and dynamic data extraction.
224. **write_pull-request**: Drafts detailed pull request descriptions, explaining changes, providing reasoning, and identifying potential bugs from the git diff command output.
225. **write_semgrep_rule**: Creates accurate and working Semgrep rules based on input, following syntax guidelines and specific language considerations.
226. **youtube_summary**: Create concise, timestamped Youtube video summaries that highlight key points.
90. **create_story_about_person**: Creates compelling, realistic short stories based on psychological profiles, showing how characters navigate everyday problems using strategies consistent with their personality traits.
91. **create_story_explanation**: Summarizes complex content in a clear, approachable story format that makes the concepts easy to understand.
92. **create_stride_threat_model**: Create a STRIDE-based threat model for a system design, identifying assets, trust boundaries, data flows, and prioritizing threats with mitigations.
93. **create_summary**: Summarizes content into a 20-word sentence, 10 main points (16 words max), and 5 key takeaways in Markdown format.
94. **create_tags**: Identifies at least 5 tags from text content for mind mapping tools, including authors and existing tags if present.
95. **create_threat_scenarios**: Identifies likely attack methods for any system by providing a narrative-based threat model, balancing risk and opportunity.
96. **create_ttrc_graph**: Creates a CSV file showing the progress of Time to Remediate Critical Vulnerabilities over time using given data.
97. **create_ttrc_narrative**: Creates a persuasive narrative highlighting progress in reducing the Time to Remediate Critical Vulnerabilities metric over time.
98. **create_upgrade_pack**: Extracts world model and task algorithm updates from content, providing beliefs about how the world works and task performance.
99. **create_user_story**: Writes concise and clear technical user stories for new features in complex software programs, formatted for all stakeholders.
100. **create_video_chapters**: Extracts interesting topics and timestamps from a transcript, providing concise summaries of key moments.
101. **create_visualization**: Transforms complex ideas into visualizations using intricate ASCII art, simplifying concepts where necessary.
102. **dialog_with_socrates**: Engages in deep, meaningful dialogues to explore and challenge beliefs using the Socratic method.
103. **enrich_blog_post**: Enhances Markdown blog files by applying instructions to improve structure, visuals, and readability for HTML rendering.
104. **explain_code**: Explains code, security tool output, configuration text, and answers questions based on the provided input.
105. **explain_docs**: Improves and restructures tool documentation into clear, concise instructions, including overviews, usage, use cases, and key features.
106. **explain_math**: Helps you understand mathematical concepts in a clear and engaging way.
107. **explain_project**: Summarizes project documentation into clear, concise sections covering the project, problem, solution, installation, usage, and examples.
108. **explain_terms**: Produces a glossary of advanced terms from content, providing a definition, analogy, and explanation of why each term matters.
109. **export_data_as_csv**: Extracts and outputs all data structures from the input in properly formatted CSV data.
110. **extract_algorithm_update_recommendations**: Extracts concise, practical algorithm update recommendations from the input and outputs them in a bulleted list.
111. **extract_article_wisdom**: Extracts surprising, insightful, and interesting information from content, categorizing it into sections like summary, ideas, quotes, facts, references, and recommendations.
112. **extract_book_ideas**: Extracts and outputs 50 to 100 of the most surprising, insightful, and interesting ideas from a book's content.
113. **extract_book_recommendations**: Extracts and outputs 50 to 100 practical, actionable recommendations from a book's content.
114. **extract_business_ideas**: Extracts top business ideas from content and elaborates on the best 10 with unique differentiators.
115. **extract_characters**: Identify all characters (human and non-human), resolve their aliases and pronouns into canonical names, and produce detailed descriptions of each character's role, motivations, and interactions ranked by narrative importance.
116. **extract_controversial_ideas**: Extracts and outputs controversial statements and supporting quotes from the input in a structured Markdown list.
117. **extract_core_message**: Extracts and outputs a clear, concise sentence that articulates the core message of a given text or body of work.
118. **extract_ctf_writeup**: Extracts a short writeup from a warstory-like text about a cyber security engagement.
119. **extract_domains**: Extracts domains and URLs from content to identify sources used for articles, newsletters, and other publications.
120. **extract_extraordinary_claims**: Extracts and outputs a list of extraordinary claims from conversations, focusing on scientifically disputed or false statements.
121. **extract_ideas**: Extracts and outputs all the key ideas from input, presented as 15-word bullet points in Markdown.
122. **extract_insights**: Extracts and outputs the most powerful and insightful ideas from text, formatted as 16-word bullet points in the INSIGHTS section, also IDEAS section.
123. **extract_insights_dm**: Extracts and outputs all valuable insights and a concise summary of the content, including key points and topics discussed.
124. **extract_instructions**: Extracts clear, actionable step-by-step instructions and main objectives from instructional video transcripts, organizing them into a concise list.
125. **extract_jokes**: Extracts jokes from text content, presenting each joke with its punchline in separate bullet points.
126. **extract_latest_video**: Extracts the latest video URL from a YouTube RSS feed and outputs the URL only.
127. **extract_main_activities**: Extracts key events and activities from transcripts or logs, providing a summary of what happened.
128. **extract_main_idea**: Extracts the main idea and key recommendation from the input, summarizing them in 15-word sentences.
129. **extract_mcp_servers**: Identify and summarize Model Context Protocol (MCP) servers referenced in the input along with their key details.
130. **extract_most_redeeming_thing**: Extracts the most redeeming aspect from an input, summarizing it in a single 15-word sentence.
131. **extract_patterns**: Extracts and analyzes recurring, surprising, and insightful patterns from input, providing detailed analysis and advice for builders.
132. **extract_poc**: Extracts proof of concept URLs and validation methods from security reports, providing the URL and command to run.
133. **extract_predictions**: Extracts predictions from input, including specific details such as date, confidence level, and verification method.
134. **extract_primary_problem**: Extracts the primary problem with the world as presented in a given text or body of work.
135. **extract_primary_solution**: Extracts the primary solution for the world as presented in a given text or body of work.
136. **extract_product_features**: Extracts and outputs a list of product features from the provided input in a bulleted format.
137. **extract_questions**: Extracts and outputs all questions asked by the interviewer in a conversation or interview.
138. **extract_recipe**: Extracts and outputs a recipe with a short meal description, ingredients with measurements, and preparation steps.
139. **extract_recommendations**: Extracts and outputs concise, practical recommendations from a given piece of content in a bulleted list.
140. **extract_references**: Extracts and outputs a bulleted list of references to art, stories, books, literature, and other sources from content.
141. **extract_skills**: Extracts and classifies skills from a job description into a table, separating each skill and classifying it as either hard or soft.
142. **extract_song_meaning**: Analyzes a song to provide a summary of its meaning, supported by detailed evidence from lyrics, artist commentary, and fan analysis.
143. **extract_sponsors**: Extracts and lists official sponsors and potential sponsors from a provided transcript.
144. **extract_videoid**: Extracts and outputs the video ID from any given URL.
145. **extract_wisdom**: Extracts surprising, insightful, and interesting information from text on topics like human flourishing, AI, learning, and more.
146. **extract_wisdom_agents**: Extracts valuable insights, ideas, quotes, and references from content, emphasizing topics like human flourishing, AI, learning, and technology.
147. **extract_wisdom_dm**: Extracts all valuable, insightful, and thought-provoking information from content, focusing on topics like human flourishing, AI, learning, and technology.
148. **extract_wisdom_nometa**: Extracts insights, ideas, quotes, habits, facts, references, and recommendations from content, focusing on human flourishing, AI, technology, and related topics.
149. **find_female_life_partner**: Analyzes criteria for finding a female life partner and provides clear, direct, and poetic descriptions.
150. **find_hidden_message**: Extracts overt and hidden political messages, justifications, audience actions, and a cynical analysis from content.
151. **find_logical_fallacies**: Identifies and analyzes fallacies in arguments, classifying them as formal or informal with detailed reasoning.
152. **fix_typos**: Proofreads and corrects typos, spelling, grammar, and punctuation errors in text.
153. **generate_code_rules**: Compile best-practice coding rules and guardrails for AI-assisted development workflows from the provided content.
154. **get_wow_per_minute**: Determines the wow-factor of content per minute based on surprise, novelty, insight, value, and wisdom, measuring how rewarding the content is for the viewer.
155. **get_youtube_rss**: Returns the RSS URL for a given YouTube channel based on the channel ID or URL.
156. **heal_person**: Develops a comprehensive plan for spiritual and mental healing based on psychological profiles, providing personalized recommendations for mental health improvement and overall life enhancement.
157. **humanize**: Rewrites AI-generated text to sound natural, conversational, and easy to understand, maintaining clarity and simplicity.
158. **identify_dsrp_distinctions**: Encourages creative, systems-based thinking by exploring distinctions, boundaries, and their implications, drawing on insights from prominent systems thinkers.
159. **identify_dsrp_perspectives**: Explores the concept of perspectives in systems thinking, focusing on how different points of view shape understanding and reveal or obscure insights.
160. **identify_dsrp_relationships**: Encourages exploration of connections, distinctions, and boundaries between ideas, inspired by systems thinkers to reveal new insights and patterns in complex systems.
161. **identify_dsrp_systems**: Encourages organizing ideas into systems of parts and wholes, inspired by systems thinkers to explore relationships and how changes in organization impact meaning and understanding.
162. **identify_job_stories**: Identifies key job stories or requirements for roles.
163. **improve_academic_writing**: Refines text into clear, concise academic language while improving grammar, coherence, and clarity, with a list of changes.
164. **improve_prompt**: Improves an LLM/AI prompt by applying expert prompt writing strategies for better results and clarity.
165. **improve_report_finding**: Improves a penetration test security finding by providing detailed descriptions, risks, recommendations, references, quotes, and a concise summary in markdown format.
166. **improve_writing**: Refines text by correcting grammar, enhancing style, improving clarity, and maintaining the original meaning.
167. **judge_output**: Evaluates Honeycomb queries by judging their effectiveness, providing critiques and outcomes based on language nuances and analytics relevance.
168. **label_and_rate**: Labels content with up to 20 single-word tags and rates it based on idea count and relevance to human meaning, AI, and other related themes, assigning a tier (S, A, B, C, D) and a quality score.
169. **md_callout**: Classifies content and generates a markdown callout based on the provided text, selecting the most appropriate type.
170. **model_as_sherlock_freud**: Builds psychological models using detective reasoning and psychoanalytic insight to understand human behavior.
171. **official_pattern_template**: Template to use if you want to create new fabric patterns.
172. **predict_person_actions**: Predicts behavioral responses based on psychological profiles and challenges.
173. **prepare_7s_strategy**: Prepares a comprehensive briefing document from a 7S strategy analysis, capturing organizational profile, strategic elements, and market dynamics in clear, concise, organized content.
174. **provide_guidance**: Provides psychological and life coaching advice, including analysis, recommendations, and potential diagnoses, with a compassionate and honest tone.
175. **rate_ai_response**: Rates the quality of AI responses by comparing them to top human expert performance, assigning a letter grade, reasoning, and providing a 1-100 score based on the evaluation.
176. **rate_ai_result**: Assesses the quality of AI/ML/LLM work by deeply analyzing content, instructions, and output, then rates performance based on multiple dimensions, including coverage, creativity, and interdisciplinary thinking.
177. **rate_content**: Labels content with up to 20 single-word tags and rates it based on idea count and relevance to human meaning, AI, and other related themes, assigning a tier (S, A, B, C, D) and a quality score.
178. **rate_value**: Produces the best possible output by deeply analyzing and understanding the input and its intended purpose.
179. **raw_query**: Fully digests and contemplates the input to produce the best possible result based on understanding the sender's intent.
180. **recommend_artists**: Recommends a personalized festival schedule with artists aligned to your favorite styles and interests, including rationale.
181. **recommend_pipeline_upgrades**: Optimizes vulnerability-checking pipelines by incorporating new information and improving their efficiency, with detailed explanations of changes.
182. **recommend_talkpanel_topics**: Produces a clean set of proposed talks or panel talking points for a person based on their interests and goals, formatted for submission to a conference organizer.
183. **recommend_yoga_practice**: Provides personalized yoga sequences, meditation guidance, and holistic lifestyle advice based on individual profiles.
184. **refine_design_document**: Refines a design document based on a design review by analyzing, mapping concepts, and implementing changes using valid Markdown.
185. **review_design**: Reviews and analyzes architecture design, focusing on clarity, component design, system integrations, security, performance, scalability, and data management.
186. **sanitize_broken_html_to_markdown**: Converts messy HTML into clean, properly formatted Markdown, applying custom styling and ensuring compatibility with Vite.
187. **suggest_pattern**: Suggests appropriate fabric patterns or commands based on user input, providing clear explanations and options for users.
188. **summarize**: Summarizes content into a 20-word sentence, main points, and takeaways, formatted with numbered lists in Markdown.
189. **summarize_board_meeting**: Creates formal meeting notes from board meeting transcripts for corporate governance documentation.
190. **summarize_debate**: Summarizes debates, identifies primary disagreement, extracts arguments, and provides analysis of evidence and argument strength to predict outcomes.
191. **summarize_git_changes**: Summarizes recent project updates from the last 7 days, focusing on key changes with enthusiasm.
192. **summarize_git_diff**: Summarizes and organizes Git diff changes with clear, succinct commit messages and bullet points.
193. **summarize_lecture**: Extracts relevant topics, definitions, and tools from lecture transcripts, providing structured summaries with timestamps and key takeaways.
194. **summarize_legislation**: Summarizes complex political proposals and legislation by analyzing key points, proposed changes, and providing balanced, positive, and cynical characterizations.
195. **summarize_meeting**: Analyzes meeting transcripts to extract a structured summary, including an overview, key points, tasks, decisions, challenges, timeline, references, and next steps.
196. **summarize_micro**: Summarizes content into a 20-word sentence, 3 main points, and 3 takeaways, formatted in clear, concise Markdown.
197. **summarize_newsletter**: Extracts the most meaningful, interesting, and useful content from a newsletter, summarizing key sections such as content, opinions, tools, companies, and follow-up items in clear, structured Markdown.
198. **summarize_paper**: Summarizes an academic paper by detailing its title, authors, technical approach, distinctive features, experimental setup, results, advantages, limitations, and conclusion in a clear, structured format using human-readable Markdown.
199. **summarize_prompt**: Summarizes AI chat prompts by describing the primary function, unique approach, and expected output in a concise paragraph. The summary is focused on the prompt's purpose without unnecessary details or formatting.
200. **summarize_pull-requests**: Summarizes pull requests for a coding project by providing a summary and listing the top PRs with human-readable descriptions.
201. **summarize_rpg_session**: Summarizes a role-playing game session by extracting key events, combat stats, character changes, quotes, and more.
202. **t_analyze_challenge_handling**: Provides 8-16 word bullet points evaluating how well challenges are being addressed, calling out any lack of effort.
203. **t_check_dunning_kruger**: Assess narratives for Dunning-Kruger patterns by contrasting self-perception with demonstrated competence and confidence cues.
204. **t_check_metrics**: Analyzes deep context from the TELOS file and input instruction, then provides a wisdom-based output while considering metrics and KPIs to assess recent improvements.
205. **t_create_h3_career**: Summarizes context and produces wisdom-based output by deeply analyzing both the TELOS File and the input instruction, considering the relationship between the two.
206. **t_create_opening_sentences**: Describes from TELOS file the person's identity, goals, and actions in 4 concise, 32-word bullet points, humbly.
207. **t_describe_life_outlook**: Describes from TELOS file a person's life outlook in 5 concise, 16-word bullet points.
208. **t_extract_intro_sentences**: Summarizes from TELOS file a person's identity, work, and current projects in 5 concise and grounded bullet points.
209. **t_extract_panel_topics**: Creates 5 panel ideas with titles and descriptions based on deep context from a TELOS file and input.
210. **t_find_blindspots**: Identify potential blindspots in thinking, frames, or models that may expose the individual to error or risk.
211. **t_find_negative_thinking**: Analyze a TELOS file and input to identify negative thinking in documents or journals, followed by tough love encouragement.
212. **t_find_neglected_goals**: Analyze a TELOS file and input instructions to identify goals or projects that have not been worked on recently.
213. **t_give_encouragement**: Analyze a TELOS file and input instructions to evaluate progress, provide encouragement, and offer recommendations for continued effort.
214. **t_red_team_thinking**: Analyze a TELOS file and input instructions to red-team thinking, models, and frames, then provide recommendations for improvement.
215. **t_threat_model_plans**: Analyze a TELOS file and input instructions to create threat models for a life plan and recommend improvements.
216. **t_visualize_mission_goals_projects**: Analyze a TELOS file and input instructions to create an ASCII art diagram illustrating the relationship of missions, goals, and projects.
217. **t_year_in_review**: Analyze a TELOS file to create insights about a person or entity, then summarize accomplishments and visualizations in bullet points.
218. **to_flashcards**: Create Anki flashcards from a given text, focusing on concise, optimized questions and answers without external context.
219. **transcribe_minutes**: Extracts meeting minutes from a meeting transcription, identifying actionable items, insightful ideas, decisions, challenges, and next steps in a structured format.
220. **translate**: Translates sentences or documentation into the specified language code while maintaining the original formatting and tone.
221. **tweet**: Provides a step-by-step guide on crafting engaging tweets with emojis, covering Twitter basics, account creation, features, and audience targeting.
222. **write_essay**: Writes essays in the style of a specified author, embodying their unique voice, vocabulary, and approach. Uses `author_name` variable.
223. **write_essay_pg**: Writes concise, clear essays in the style of Paul Graham, focusing on simplicity, clarity, and illumination of the provided topic.
224. **write_hackerone_report**: Generates concise, clear, and reproducible bug bounty reports, detailing vulnerability impact, steps to reproduce, and exploit details for triagers.
225. **write_latex**: Generates syntactically correct LaTeX code for a new .tex document, ensuring proper formatting and compatibility with pdflatex.
226. **write_micro_essay**: Writes concise, clear, and illuminating essays on the given topic in the style of Paul Graham.
227. **write_nuclei_template_rule**: Generates Nuclei YAML templates for detecting vulnerabilities using HTTP requests, matchers, extractors, and dynamic data extraction.
228. **write_pull-request**: Drafts detailed pull request descriptions, explaining changes, providing reasoning, and identifying potential bugs from the git diff command output.
229. **write_semgrep_rule**: Creates accurate and working Semgrep rules based on input, following syntax guidelines and specific language considerations.
230. **youtube_summary**: Create concise, timestamped YouTube video summaries that highlight key points.
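
Every pattern in this list can be applied to piped input by name. A minimal usage sketch (assuming fabric is installed and configured; `extract_wisdom` and `summarize` are simply illustrative picks from the list above):

```bash
# Apply a pattern to piped text by name
echo "Some article text" | fabric -p extract_wisdom

# Apply a pattern to the contents of a file
fabric -p summarize < transcript.txt
```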

View File

@@ -29,6 +29,9 @@ func CreateOutputFile(message string, fileName string) (err error) {
return
}
defer file.Close()
if !strings.HasSuffix(message, "\n") {
message += "\n"
}
if _, err = file.WriteString(message); err != nil {
err = fmt.Errorf("%s", fmt.Sprintf(i18n.T("error_writing_to_file"), err))
} else {

View File

@@ -24,5 +24,34 @@ func TestCreateOutputFile(t *testing.T) {
t.Fatalf("CreateOutputFile() error = %v", err)
}
defer os.Remove(fileName)
t.Cleanup(func() { os.Remove(fileName) })
data, err := os.ReadFile(fileName)
if err != nil {
t.Fatalf("failed to read output file: %v", err)
}
expected := message + "\n"
if string(data) != expected {
t.Fatalf("expected file contents %q, got %q", expected, data)
}
}
func TestCreateOutputFileMessageWithTrailingNewline(t *testing.T) {
fileName := "test_output_with_newline.txt"
message := "test message with newline\n"
if err := CreateOutputFile(message, fileName); err != nil {
t.Fatalf("CreateOutputFile() error = %v", err)
}
t.Cleanup(func() { os.Remove(fileName) })
data, err := os.ReadFile(fileName)
if err != nil {
t.Fatalf("failed to read output file: %v", err)
}
if string(data) != message {
t.Fatalf("expected file contents %q, got %q", message, data)
}
}

View File

@@ -69,6 +69,7 @@ func (o *Chatter) Send(request *domain.ChatRequest, opts *domain.ChatOptions) (s
responseChan := make(chan string)
errChan := make(chan error, 1)
done := make(chan struct{})
printedStream := false
go func() {
defer close(done)
@@ -81,9 +82,14 @@ func (o *Chatter) Send(request *domain.ChatRequest, opts *domain.ChatOptions) (s
message += response
if !opts.SuppressThink {
fmt.Print(response)
printedStream = true
}
}
if printedStream && !opts.SuppressThink && !strings.HasSuffix(message, "\n") {
fmt.Println()
}
// Wait for goroutine to finish
<-done
@@ -175,7 +181,7 @@ func (o *Chatter) BuildSession(request *domain.ChatRequest, raw bool) (session *
if request.Message == nil {
request.Message = &chat.ChatCompletionMessage{
Role: chat.ChatMessageRoleUser,
Content: " ",
Content: "",
}
}

View File

@@ -356,7 +356,7 @@ func (an *Client) toMessages(msgs []*chat.ChatCompletionMessage) (ret []anthropi
lastRoleWasUser := false
for _, msg := range msgs {
if msg.Content == "" {
if strings.TrimSpace(msg.Content) == "" {
continue // Skip empty messages
}

View File

@@ -456,7 +456,7 @@ func (o *Client) convertMessages(msgs []*chat.ChatCompletionMessage) []*genai.Co
content.Role = "user"
}
if msg.Content != "" {
if strings.TrimSpace(msg.Content) != "" {
content.Parts = append(content.Parts, &genai.Part{Text: msg.Content})
}

View File

@@ -11,8 +11,6 @@ import (
"github.com/danielmiessler/fabric/internal/util"
)
const inputSentinel = "__FABRIC_INPUT_SENTINEL_TOKEN__"
type PatternsEntity struct {
*StorageEntity
SystemPatternFile string
@@ -96,18 +94,18 @@ func (o *PatternsEntity) applyVariables(
// Temporarily replace {{input}} with a sentinel token to protect it
// from recursive variable resolution
withSentinel := strings.ReplaceAll(pattern.Pattern, "{{input}}", inputSentinel)
withSentinel := strings.ReplaceAll(pattern.Pattern, "{{input}}", template.InputSentinel)
// Process all other template variables in the pattern
// At this point, our sentinel ensures {{input}} won't be affected
// Pass the actual input so extension calls can use {{input}} within their value parameter
var processed string
if processed, err = template.ApplyTemplate(withSentinel, variables, ""); err != nil {
if processed, err = template.ApplyTemplate(withSentinel, variables, input); err != nil {
return
}
// Finally, replace our sentinel with the actual user input
// The input has already been processed for variables if InputHasVars was true
pattern.Pattern = strings.ReplaceAll(processed, inputSentinel, input)
pattern.Pattern = strings.ReplaceAll(processed, template.InputSentinel, input)
return
}

View File

@@ -1,9 +1,24 @@
# Fabric Extensions: Complete Guide
## Important: Extensions Only Work in Patterns
**Extensions are ONLY processed when used within pattern files, not via direct piping to fabric.**
```bash
# ❌ This DOES NOT WORK - extensions are not processed in stdin
echo "{{ext:word-generator:generate:3}}" | fabric
# ✅ This WORKS - extensions are processed within patterns
fabric -p my-pattern-with-extensions.md
```
When you pipe directly to fabric without a pattern, the input goes straight to the LLM without template processing. Extensions are only evaluated during pattern template processing via `ApplyTemplate()`.
## Understanding Extension Architecture
### Registry Structure
The extension registry is stored at `~/.config/fabric/extensions/extensions.yaml` and tracks registered extensions:
```yaml
@@ -17,6 +32,7 @@ extensions:
The registry maintains security through hash verification of both configs and executables.
### Extension Configuration
Each extension requires a YAML configuration file with the following structure:
```yaml
@@ -42,8 +58,10 @@ config: # Output configuration
```
### Directory Structure
Recommended organization:
```
```text
~/.config/fabric/extensions/
├── bin/ # Extension executables
├── configs/ # Extension YAML configs
@@ -51,9 +69,11 @@ Recommended organization:
```
## Example 1: Python Wrapper (Word Generator)
A simple example wrapping a Python script.
### 1. Position Files
```bash
# Create directories
mkdir -p ~/.config/fabric/extensions/{bin,configs}
@@ -64,7 +84,9 @@ chmod +x ~/.config/fabric/extensions/bin/word-generator.py
```
### 2. Configure
Create `~/.config/fabric/extensions/configs/word-generator.yaml`:
```yaml
name: word-generator
executable: "~/.config/fabric/extensions/bin/word-generator.py"
@@ -83,22 +105,26 @@ config:
```
### 3. Register & Run
```bash
# Register
fabric --addextension ~/.config/fabric/extensions/configs/word-generator.yaml
# Run (generate 3 random words)
echo "{{ext:word-generator:generate:3}}" | fabric
# Extensions must be used within patterns (see "Extensions in patterns" section below)
# Direct piping to fabric will NOT process extension syntax
```
## Example 2: Direct Executable (SQLite3)
Using a system executable directly.
Copy the memories database to your home directory as `~/memories.db`.
### 1. Configure
Create `~/.config/fabric/extensions/configs/memory-query.yaml`:
```yaml
name: memory-query
executable: "/usr/bin/sqlite3"
@@ -123,19 +149,19 @@ config:
```
### 2. Register & Run
```bash
# Register
fabric --addextension ~/.config/fabric/extensions/configs/memory-query.yaml
# Run queries
echo "{{ext:memory-query:all}}" | fabric
echo "{{ext:memory-query:byid:3}}" | fabric
# Extensions must be used within patterns (see "Extensions in patterns" section below)
# Direct piping to fabric will NOT process extension syntax
```
## Extension Management Commands
### Add Extension
```bash
fabric --addextension ~/.config/fabric/extensions/configs/memory-query.yaml
```
@@ -143,25 +169,29 @@ fabric --addextension ~/.config/fabric/extensions/configs/memory-query.yaml
Note: if the executable or config file changes, you must re-add the extension.
This recomputes the hash for the extension.
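As a rough conceptual illustration (fabric's registry fields and hash algorithm are implementation details, so treat this as a sketch rather than the tool's actual mechanism), hashing the files yourself shows why any edit forces re-registration:

```bash
# Conceptual check only: if either file's hash differs from the value recorded
# at registration time, the extension must be re-added with --addextension.
sha256sum ~/.config/fabric/extensions/configs/word-generator.yaml
sha256sum ~/.config/fabric/extensions/bin/word-generator.py
```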
### List Extensions
```bash
fabric --listextensions
```
Shows all registered extensions with their status and configuration details.
### Remove Extension
```bash
fabric --rmextension <extension-name>
```
Removes an extension from the registry.
## Extensions in patterns
Create a pattern that uses multiple extensions.
**IMPORTANT**: Extensions are ONLY processed when used within pattern files, not via direct piping to fabric.
Create a pattern file (e.g., `test_pattern.md`):
```markdown
These are my favorite
{{ext:word-generator:generate:3}}
@@ -171,8 +201,30 @@ These are my least favorite
what does this say about me?
```
Run the pattern:
```bash
./fabric -p ./plugins/template/Examples/test_pattern.md
fabric -p ./internal/plugins/template/Examples/test_pattern.md
```
## Passing {{input}} to extensions inside patterns
```text
Create a pattern called ai_summarize that uses extensions (see openai.yaml and copy it for Claude)
Summarize the responses from both AI models:
OpenAI Response:
{{ext:openai:chat:{{input}}}}
Claude Response:
{{ext:claude:chat:{{input}}}}
```
```bash
echo "What is Artificial Intelligence" | ../fabric-fix -p ai_summarize
```
## Security Considerations
@@ -197,6 +249,7 @@ what does this say about me?
## Troubleshooting
### Common Issues
1. **Registration Failures**
- Verify file permissions
- Check executable paths
@@ -214,10 +267,10 @@ what does this say about me?
- Monitor disk space for file operations
### Debug Tips
1. Enable verbose logging when available
2. Check system logs for execution errors
3. Verify extension dependencies
4. Test extensions with minimal configurations first

View File

@@ -0,0 +1,20 @@
#!/usr/bin/env bash
set -euo pipefail
INPUT=$(jq -R -s '.' <<< "$*")
RESPONSE=$(curl "$OPENAI_API_BASE_URL/chat/completions" \
-s -w "\n%{http_code}" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-d "{\"model\":\"gpt-4o-mini\",\"messages\":[{\"role\":\"user\",\"content\":$INPUT}]}")
HTTP_CODE=$(echo "$RESPONSE" | tail -n1)
BODY=$(echo "$RESPONSE" | sed '$d')
if [[ "$HTTP_CODE" -ne 200 ]]; then
echo "Error: HTTP $HTTP_CODE" >&2
echo "$BODY" | jq -r '.error.message // "Unknown error"' >&2
exit 1
fi
echo "$BODY" | jq -r '.choices[0].message.content'

View File

@@ -0,0 +1,14 @@
name: openai
executable: "/path/to/your/openai-chat.sh"
type: executable
timeout: "30s"
description: "Call OpenAI Chat Completions API"
version: "1.0.0"
operations:
chat:
cmd_template: "{{executable}} {{value}}"
config:
output:
method: stdout

View File

@@ -0,0 +1,5 @@
package template
// InputSentinel is used to temporarily replace {{input}} during template processing
// to prevent recursive variable resolution
const InputSentinel = "__FABRIC_INPUT_SENTINEL_TOKEN__"

View File

@@ -140,6 +140,11 @@ func (r *ExtensionRegistry) Register(configPath string) error {
return fmt.Errorf("failed to hash executable: %w", err)
}
// Validate full extension definition (ensures operations and cmd_template present)
if err := r.validateExtensionDefinition(&ext); err != nil {
return fmt.Errorf("invalid extension definition: %w", err)
}
// Store entry
r.registry.Extensions[ext.Name] = &RegistryEntry{
ConfigPath: absPath,

View File

@@ -37,152 +37,65 @@ func debugf(format string, a ...interface{}) {
debuglog.Debug(debuglog.Trace, format, a...)
}
func ApplyTemplate(content string, variables map[string]string, input string) (string, error) {
var missingVars []string
r := regexp.MustCompile(`\{\{([^{}]+)\}\}`)
debugf("Starting template processing\n")
for strings.Contains(content, "{{") {
matches := r.FindAllStringSubmatch(content, -1)
if len(matches) == 0 {
break
}
replaced := false
for _, match := range matches {
fullMatch := match[0]
varName := match[1]
// Check if this is a plugin call
if strings.HasPrefix(varName, "plugin:") {
pluginMatches := pluginPattern.FindStringSubmatch(fullMatch)
if len(pluginMatches) >= 3 {
namespace := pluginMatches[1]
operation := pluginMatches[2]
value := ""
if len(pluginMatches) == 4 {
value = pluginMatches[3]
}
debugf("\nPlugin call:\n")
debugf(" Namespace: %s\n", namespace)
debugf(" Operation: %s\n", operation)
debugf(" Value: %s\n", value)
var result string
var err error
switch namespace {
case "text":
debugf("Executing text plugin\n")
result, err = textPlugin.Apply(operation, value)
case "datetime":
debugf("Executing datetime plugin\n")
result, err = datetimePlugin.Apply(operation, value)
case "file":
debugf("Executing file plugin\n")
result, err = filePlugin.Apply(operation, value)
debugf("File plugin result: %#v\n", result)
case "fetch":
debugf("Executing fetch plugin\n")
result, err = fetchPlugin.Apply(operation, value)
case "sys":
debugf("Executing sys plugin\n")
result, err = sysPlugin.Apply(operation, value)
default:
return "", fmt.Errorf("unknown plugin namespace: %s", namespace)
}
if err != nil {
debugf("Plugin error: %v\n", err)
return "", fmt.Errorf("plugin %s error: %v", namespace, err)
}
debugf("Plugin result: %s\n", result)
content = strings.ReplaceAll(content, fullMatch, result)
debugf("Content after replacement: %s\n", content)
continue
}
}
if pluginMatches := extensionPattern.FindStringSubmatch(fullMatch); len(pluginMatches) >= 3 {
name := pluginMatches[1]
operation := pluginMatches[2]
value := ""
if len(pluginMatches) == 4 {
value = pluginMatches[3]
}
debugf("\nExtension call:\n")
debugf(" Name: %s\n", name)
debugf(" Operation: %s\n", operation)
debugf(" Value: %s\n", value)
result, err := extensionManager.ProcessExtension(name, operation, value)
if err != nil {
return "", fmt.Errorf("extension %s error: %v", name, err)
}
content = strings.ReplaceAll(content, fullMatch, result)
replaced = true
continue
}
// Handle regular variables and input
debugf("Processing variable: %s\n", varName)
if varName == "input" {
debugf("Replacing {{input}}\n")
replaced = true
content = strings.ReplaceAll(content, fullMatch, input)
} else {
if val, ok := variables[varName]; !ok {
debugf("Missing variable: %s\n", varName)
missingVars = append(missingVars, varName)
return "", fmt.Errorf("missing required variable: %s", varName)
} else {
debugf("Replacing variable %s with value: %s\n", varName, val)
content = strings.ReplaceAll(content, fullMatch, val)
replaced = true
}
}
if !replaced {
return "", fmt.Errorf("template processing stuck - potential infinite loop")
}
// matchTriple extracts the first two required and optional third value from a token
// pattern of the form {{type:part1:part2(:part3)?}} returning part1, part2, part3 (possibly empty)
func matchTriple(r *regexp.Regexp, full string) (string, string, string, bool) {
parts := r.FindStringSubmatch(full)
if len(parts) >= 3 {
v := ""
if len(parts) == 4 {
v = parts[3]
}
return parts[1], parts[2], v, true
}
return "", "", "", false
}
debugf("Starting template processing\n")
for strings.Contains(content, "{{") {
matches := r.FindAllStringSubmatch(content, -1)
func ApplyTemplate(content string, variables map[string]string, input string) (string, error) {
tokenPattern := regexp.MustCompile(`\{\{([^{}]+)\}\}`)
debugf("Starting template processing with input='%s'\n", input)
for {
if !strings.Contains(content, "{{") {
break
}
matches := tokenPattern.FindAllStringSubmatch(content, -1)
if len(matches) == 0 {
break
}
replaced := false
for _, match := range matches {
fullMatch := match[0]
varName := match[1]
progress := false
for _, m := range matches {
full := m[0]
raw := m[1]
// Check if this is a plugin call
if strings.HasPrefix(varName, "plugin:") {
pluginMatches := pluginPattern.FindStringSubmatch(fullMatch)
if len(pluginMatches) >= 3 {
namespace := pluginMatches[1]
operation := pluginMatches[2]
value := ""
if len(pluginMatches) == 4 {
value = pluginMatches[3]
// Extension call
if strings.HasPrefix(raw, "ext:") {
if name, operation, value, ok := matchTriple(extensionPattern, full); ok {
if strings.Contains(value, InputSentinel) {
value = strings.ReplaceAll(value, InputSentinel, input)
debugf("Replaced sentinel in extension value with input\n")
}
debugf("Extension call: name=%s operation=%s value=%s\n", name, operation, value)
result, err := extensionManager.ProcessExtension(name, operation, value)
if err != nil {
return "", fmt.Errorf("extension %s error: %v", name, err)
}
content = strings.ReplaceAll(content, full, result)
progress = true
continue
}
}
debugf("\nPlugin call:\n")
debugf(" Namespace: %s\n", namespace)
debugf(" Operation: %s\n", operation)
debugf(" Value: %s\n", value)
var result string
var err error
// Plugin call
if strings.HasPrefix(raw, "plugin:") {
if namespace, operation, value, ok := matchTriple(pluginPattern, full); ok {
debugf("Plugin call: namespace=%s operation=%s value=%s\n", namespace, operation, value)
var (
result string
err error
)
switch namespace {
case "text":
debugf("Executing text plugin\n")
@@ -203,39 +116,33 @@ func ApplyTemplate(content string, variables map[string]string, input string) (s
default:
return "", fmt.Errorf("unknown plugin namespace: %s", namespace)
}
if err != nil {
debugf("Plugin error: %v\n", err)
return "", fmt.Errorf("plugin %s error: %v", namespace, err)
}
debugf("Plugin result: %s\n", result)
content = strings.ReplaceAll(content, fullMatch, result)
debugf("Content after replacement: %s\n", content)
content = strings.ReplaceAll(content, full, result)
progress = true
continue
}
}
// Handle regular variables and input
debugf("Processing variable: %s\n", varName)
if varName == "input" {
debugf("Replacing {{input}}\n")
replaced = true
content = strings.ReplaceAll(content, fullMatch, input)
} else {
if val, ok := variables[varName]; !ok {
debugf("Missing variable: %s\n", varName)
missingVars = append(missingVars, varName)
return "", fmt.Errorf("missing required variable: %s", varName)
} else {
debugf("Replacing variable %s with value: %s\n", varName, val)
content = strings.ReplaceAll(content, fullMatch, val)
replaced = true
// Variables / input / sentinel
switch raw {
case "input", InputSentinel:
content = strings.ReplaceAll(content, full, input)
progress = true
default:
val, ok := variables[raw]
if !ok {
return "", fmt.Errorf("missing required variable: %s", raw)
}
content = strings.ReplaceAll(content, full, val)
progress = true
}
if !replaced {
return "", fmt.Errorf("template processing stuck - potential infinite loop")
}
}
if !progress {
return "", fmt.Errorf("template processing stuck - potential infinite loop")
}
}

View File

@@ -0,0 +1,77 @@
package template
import (
"os"
"path/filepath"
"strings"
"testing"
)
// TestExtensionValueMixedInputAndVariable ensures an extension value mixing {{input}} and another template variable is processed.
func TestExtensionValueMixedInputAndVariable(t *testing.T) {
input := "PRIMARY"
variables := map[string]string{
"suffix": "SUF",
}
// Build temp extension environment
tmp := t.TempDir()
configDir := filepath.Join(tmp, ".config", "fabric")
extsDir := filepath.Join(configDir, "extensions")
binDir := filepath.Join(extsDir, "bin")
configsDir := filepath.Join(extsDir, "configs")
if err := os.MkdirAll(binDir, 0o755); err != nil {
t.Fatalf("mkdir bin: %v", err)
}
if err := os.MkdirAll(configsDir, 0o755); err != nil {
t.Fatalf("mkdir configs: %v", err)
}
scriptPath := filepath.Join(binDir, "mix-echo.sh")
// Simple echo script; avoid percent formatting complexities
script := "#!/bin/sh\necho VAL=$1\n"
if err := os.WriteFile(scriptPath, []byte(script), 0o755); err != nil {
t.Fatalf("write script: %v", err)
}
configYAML := "" +
"name: mix-echo\n" +
"type: executable\n" +
"executable: " + scriptPath + "\n" +
"description: mixed input/variable test\n" +
"version: 1.0.0\n" +
"timeout: 5s\n" +
"operations:\n" +
" echo:\n" +
" cmd_template: '{{executable}} {{value}}'\n"
if err := os.WriteFile(filepath.Join(configsDir, "mix-echo.yaml"), []byte(configYAML), 0o644); err != nil {
t.Fatalf("write config: %v", err)
}
// Use a fresh extension manager isolated from global one
mgr := NewExtensionManager(configDir)
if err := mgr.RegisterExtension(filepath.Join(configsDir, "mix-echo.yaml")); err != nil {
// Some environments may not support execution; skip instead of fail hard
if strings.Contains(err.Error(), "operation not permitted") {
t.Skipf("skipping due to exec restriction: %v", err)
}
t.Fatalf("register: %v", err)
}
// Temporarily swap global extensionManager for this test
prevMgr := extensionManager
extensionManager = mgr
defer func() { extensionManager = prevMgr }()
// Template uses input plus a variable inside extension value
tmpl := "{{ext:mix-echo:echo:pre-{{input}}-mid-{{suffix}}-post}}"
out, err := ApplyTemplate(tmpl, variables, input)
if err != nil {
t.Fatalf("ApplyTemplate error: %v", err)
}
if !strings.Contains(out, "VAL=pre-PRIMARY-mid-SUF-post") {
t.Fatalf("unexpected output: %q", out)
}
}

View File

@@ -0,0 +1,71 @@
package template
import (
"os"
"path/filepath"
"strings"
"testing"
)
// TestMultipleExtensionsWithInput ensures multiple extension calls each using {{input}} get proper substitution.
func TestMultipleExtensionsWithInput(t *testing.T) {
input := "DATA"
variables := map[string]string{}
tmp := t.TempDir()
configDir := filepath.Join(tmp, ".config", "fabric")
extsDir := filepath.Join(configDir, "extensions")
binDir := filepath.Join(extsDir, "bin")
configsDir := filepath.Join(extsDir, "configs")
if err := os.MkdirAll(binDir, 0o755); err != nil {
t.Fatalf("mkdir bin: %v", err)
}
if err := os.MkdirAll(configsDir, 0o755); err != nil {
t.Fatalf("mkdir configs: %v", err)
}
scriptPath := filepath.Join(binDir, "multi-echo.sh")
script := "#!/bin/sh\necho ECHO=$1\n"
if err := os.WriteFile(scriptPath, []byte(script), 0o755); err != nil {
t.Fatalf("write script: %v", err)
}
configYAML := "" +
"name: multi-echo\n" +
"type: executable\n" +
"executable: " + scriptPath + "\n" +
"description: multi echo extension\n" +
"version: 1.0.0\n" +
"timeout: 5s\n" +
"operations:\n" +
" echo:\n" +
" cmd_template: '{{executable}} {{value}}'\n"
if err := os.WriteFile(filepath.Join(configsDir, "multi-echo.yaml"), []byte(configYAML), 0o644); err != nil {
t.Fatalf("write config: %v", err)
}
mgr := NewExtensionManager(configDir)
if err := mgr.RegisterExtension(filepath.Join(configsDir, "multi-echo.yaml")); err != nil {
t.Fatalf("register: %v", err)
}
prev := extensionManager
extensionManager = mgr
defer func() { extensionManager = prev }()
tmpl := strings.Join([]string{
"First: {{ext:multi-echo:echo:{{input}}}}",
"Second: {{ext:multi-echo:echo:{{input}}}}",
"Third: {{ext:multi-echo:echo:{{input}}}}",
}, " | ")
out, err := ApplyTemplate(tmpl, variables, input)
if err != nil {
t.Fatalf("ApplyTemplate error: %v", err)
}
wantCount := 3
occ := strings.Count(out, "ECHO=DATA")
if occ != wantCount {
t.Fatalf("expected %d occurrences of ECHO=DATA, got %d; output=%q", wantCount, occ, out)
}
}

View File

@@ -0,0 +1,275 @@
package template
import (
"fmt"
"os"
"path/filepath"
"strings"
"testing"
)
// withTestExtension creates a temporary test extension and runs the test function
func withTestExtension(t *testing.T, name string, scriptContent string, testFunc func(*ExtensionManager, string)) {
t.Helper()
// Create a temporary directory for test extension
tmpDir := t.TempDir()
configDir := filepath.Join(tmpDir, ".config", "fabric")
extensionsDir := filepath.Join(configDir, "extensions")
binDir := filepath.Join(extensionsDir, "bin")
configsDir := filepath.Join(extensionsDir, "configs")
err := os.MkdirAll(binDir, 0755)
if err != nil {
t.Fatalf("Failed to create bin directory: %v", err)
}
err = os.MkdirAll(configsDir, 0755)
if err != nil {
t.Fatalf("Failed to create configs directory: %v", err)
}
// Create a test script
scriptPath := filepath.Join(binDir, name+".sh")
err = os.WriteFile(scriptPath, []byte(scriptContent), 0755)
if err != nil {
t.Fatalf("Failed to create test script: %v", err)
}
// Create extension config
configPath := filepath.Join(configsDir, name+".yaml")
configContent := fmt.Sprintf(`name: %s
executable: %s
type: executable
timeout: "5s"
description: "Test extension"
version: "1.0.0"
operations:
echo:
cmd_template: "{{executable}} {{value}}"
config:
output:
method: stdout
`, name, scriptPath)
err = os.WriteFile(configPath, []byte(configContent), 0644)
if err != nil {
t.Fatalf("Failed to create extension config: %v", err)
}
// Initialize extension manager with test config directory
mgr := NewExtensionManager(configDir)
// Register the test extension
err = mgr.RegisterExtension(configPath)
if err != nil {
t.Fatalf("Failed to register extension: %v", err)
}
// Run the test
testFunc(mgr, name)
}
// TestSentinelTokenReplacement tests the fix for the {{input}} sentinel token bug
// This test verifies that when {{input}} is used inside an extension call,
// the actual input is passed to the extension, not the sentinel token.
func TestSentinelTokenReplacement(t *testing.T) {
scriptContent := `#!/bin/bash
echo "RECEIVED: $@"
`
withTestExtension(t, "echo-test", scriptContent, func(mgr *ExtensionManager, name string) {
// Save and restore global extension manager
oldManager := extensionManager
defer func() { extensionManager = oldManager }()
extensionManager = mgr
tests := []struct {
name string
template string
input string
wantContain string
wantNotContain string
}{
{
name: "sentinel token with {{input}} in extension value",
template: "{{ext:echo-test:echo:__FABRIC_INPUT_SENTINEL_TOKEN__}}",
input: "test input data",
wantContain: "RECEIVED: test input data",
wantNotContain: "__FABRIC_INPUT_SENTINEL_TOKEN__",
},
{
name: "direct input variable replacement",
template: "{{ext:echo-test:echo:{{input}}}}",
input: "Hello World",
wantContain: "RECEIVED: Hello World",
wantNotContain: "{{input}}",
},
{
name: "sentinel with complex input",
template: "Result: {{ext:echo-test:echo:__FABRIC_INPUT_SENTINEL_TOKEN__}}",
input: "What is AI?",
wantContain: "RECEIVED: What is AI?",
wantNotContain: "__FABRIC_INPUT_SENTINEL_TOKEN__",
},
{
name: "multiple words in input",
template: "{{ext:echo-test:echo:{{input}}}}",
input: "Multiple word input string",
wantContain: "RECEIVED: Multiple word input string",
wantNotContain: "{{input}}",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got, err := ApplyTemplate(tt.template, map[string]string{}, tt.input)
if err != nil {
t.Errorf("ApplyTemplate() error = %v", err)
return
}
// Check that result contains expected string
if !strings.Contains(got, tt.wantContain) {
t.Errorf("ApplyTemplate() = %q, should contain %q", got, tt.wantContain)
}
// Check that result does NOT contain unwanted string
if strings.Contains(got, tt.wantNotContain) {
t.Errorf("ApplyTemplate() = %q, should NOT contain %q", got, tt.wantNotContain)
}
})
}
})
}
// TestSentinelInVariableProcessing tests that the sentinel token is handled
// correctly in regular variable processing (not just extensions)
// Note: The sentinel is only replaced when it appears in extension values,
// not when used as a standalone variable (which would be a user error)
func TestSentinelInVariableProcessing(t *testing.T) {
tests := []struct {
name string
template string
vars map[string]string
input string
want string
}{
{
name: "input variable works normally",
template: "Value: {{input}}",
input: "actual input",
want: "Value: actual input",
},
{
name: "multiple input references",
template: "First: {{input}}, Second: {{input}}",
input: "test",
want: "First: test, Second: test",
},
{
name: "input with variables",
template: "Var: {{name}}, Input: {{input}}",
vars: map[string]string{"name": "TestVar"},
input: "input value",
want: "Var: TestVar, Input: input value",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got, err := ApplyTemplate(tt.template, tt.vars, tt.input)
if err != nil {
t.Errorf("ApplyTemplate() error = %v", err)
return
}
if got != tt.want {
t.Errorf("ApplyTemplate() = %q, want %q", got, tt.want)
}
})
}
}
// TestExtensionValueWithSentinel specifically tests the extension value
// sentinel replacement logic
func TestExtensionValueWithSentinel(t *testing.T) {
scriptContent := `#!/bin/bash
# Output each argument on a separate line
for arg in "$@"; do
echo "ARG: $arg"
done
`
withTestExtension(t, "arg-test", scriptContent, func(mgr *ExtensionManager, name string) {
// Save and restore global extension manager
oldManager := extensionManager
defer func() { extensionManager = oldManager }()
extensionManager = mgr
// Test that sentinel token in extension value gets replaced
template := "{{ext:arg-test:echo:prefix-__FABRIC_INPUT_SENTINEL_TOKEN__-suffix}}"
input := "MYINPUT"
got, err := ApplyTemplate(template, map[string]string{}, input)
if err != nil {
t.Fatalf("ApplyTemplate() error = %v", err)
}
// The sentinel should be replaced with actual input
expectedContain := "ARG: prefix-MYINPUT-suffix"
if !strings.Contains(got, expectedContain) {
t.Errorf("ApplyTemplate() = %q, should contain %q", got, expectedContain)
}
// The sentinel token should NOT appear in output
if strings.Contains(got, "__FABRIC_INPUT_SENTINEL_TOKEN__") {
t.Errorf("ApplyTemplate() = %q, should NOT contain sentinel token", got)
}
})
}
// TestNestedInputInExtension tests the original bug case:
// {{ext:name:op:{{input}}}} should pass the actual input, not the sentinel
func TestNestedInputInExtension(t *testing.T) {
scriptContent := `#!/bin/bash
echo "NESTED_TEST: $*"
`
withTestExtension(t, "nested-test", scriptContent, func(mgr *ExtensionManager, name string) {
// Save and restore global extension manager
oldManager := extensionManager
defer func() { extensionManager = oldManager }()
extensionManager = mgr
// This is the bug case: {{input}} nested inside extension call
// The template processing should:
// 1. Replace {{input}} with sentinel during variable protection
// 2. Process the extension, replacing sentinel with actual input
// 3. Execute extension with actual input, not sentinel
template := "{{ext:nested-test:echo:{{input}}}}"
input := "What is Artificial Intelligence"
got, err := ApplyTemplate(template, map[string]string{}, input)
if err != nil {
t.Fatalf("ApplyTemplate() error = %v", err)
}
// Verify the actual input was passed, not the sentinel
expectedContain := "NESTED_TEST: What is Artificial Intelligence"
if !strings.Contains(got, expectedContain) {
t.Errorf("ApplyTemplate() = %q, should contain %q", got, expectedContain)
}
// Verify sentinel token does NOT appear
if strings.Contains(got, "__FABRIC_INPUT_SENTINEL_TOKEN__") {
t.Errorf("ApplyTemplate() output contains sentinel token (BUG NOT FIXED): %q", got)
}
// Verify {{input}} template tag does NOT appear
if strings.Contains(got, "{{input}}") {
t.Errorf("ApplyTemplate() output contains unresolved {{input}}: %q", got)
}
})
}

View File

@@ -69,7 +69,7 @@ func NewYouTube() (ret *YouTube) {
EnvNamePrefix: plugins.BuildEnvVariablePrefix(label),
}
ret.ApiKey = ret.AddSetupQuestion("API key", true)
ret.ApiKey = ret.AddSetupQuestion("API key", false)
return
}

View File

@@ -0,0 +1,19 @@
package youtube
import "testing"
func TestNewYouTubeApiKeyOptional(t *testing.T) {
yt := NewYouTube()
if yt.ApiKey == nil {
t.Fatal("expected API key setup question to be initialized")
}
if yt.ApiKey.Required {
t.Fatalf("expected YouTube API key to be optional, but it is marked as required")
}
if !yt.IsConfigured() {
t.Fatalf("expected YouTube plugin to be considered configured without an API key")
}
}

View File

@@ -1 +1 @@
"1.4.322"
"1.4.326"