Compare commits


19 Commits

Author SHA1 Message Date
github-actions[bot]
0e4c4619f9 chore(release): Update version to v1.4.310 2025-09-11 18:07:20 +00:00
Kayvan Sylvan
1280e8136c Merge pull request #1759 from ksylvan/kayvan/fix/0909-windows-flag-fix
Add Windows-style Flag Support for Language Detection
2025-09-11 11:04:27 -07:00
Kayvan Sylvan
59695428e3 feat: update Vite and Rollup dependencies to latest versions
### CHANGES

- Update Vite to version 5.4.20
- Update Rollup to version 4.50.1
- Add `@eslint-community/eslint-utils` version 4.9.0
- Update `@humanfs/node` to version 0.16.7
- Update `@humanwhocodes/retry` to version 0.4.3
- Update Rollup platform-specific packages to 4.50.1
- Add `@rollup/rollup-openharmony-arm64` version 4.50.1
- Closes Dependabot PR https://github.com/danielmiessler/Fabric/pull/1763
2025-09-11 10:54:55 -07:00
Kayvan Sylvan
8daba467b1 Merge branch 'main' into kayvan/fix/0909-windows-flag-fix 2025-09-11 10:50:11 -07:00
Kayvan Sylvan
b4b062bd11 chore: update alias creation to use consistent naming
### CHANGES

- Remove redundant prefix from `pattern_name` variable
- Add `alias_name` variable for consistent alias creation
- Update alias command to use `alias_name`
- Modify PowerShell function to use `aliasName`
2025-09-11 10:21:14 -07:00
Kayvan Sylvan
a851e6e9ca docs: add optional prefix support for fabric pattern aliases via FABRIC_ALIAS_PREFIX env var
## CHANGES

- Add FABRIC_ALIAS_PREFIX environment variable support
- Update bash/zsh alias generation with prefix
- Update PowerShell alias generation with prefix
- Improve readability of alias setup instructions
- Enable custom prefixing for pattern commands
- Maintain backward compatibility without prefix
2025-09-11 07:13:28 -07:00
Kayvan Sylvan
a8f071b1c4 Merge branch 'main' into kayvan/fix/0909-windows-flag-fix 2025-09-10 20:02:44 -07:00
Kayvan Sylvan
bce7384771 Merge pull request #1762 from danielmiessler/OmriH-Elister/main
New pattern for writing interaction between two characters
2025-09-10 19:56:32 -07:00
Kayvan Sylvan
65268e5f62 fix: Change attribution of PR to https://github.com/OmriH-Elister 2025-09-10 19:51:53 -07:00
Changelog Bot
617c31d15a chore: incoming 1762 changelog entry 2025-09-10 17:09:53 -07:00
Kayvan Sylvan
3017b1a5b2 chore: add create_story_about_people_interaction pattern for persona analysis
### CHANGES

- Add `create_story_about_people_interaction` pattern description
- Include pattern in `ANALYSIS` and `WRITING` categories
- Update `suggest_pattern` system and user documentation
- Modify JSON files to incorporate new pattern details
2025-09-10 16:59:44 -07:00
Omri Herman
97e2a76566 Merge pull request #1 from OmriH-Elister/stick
Stick
2025-09-10 17:54:18 +03:00
Omri Herman
8416500f81 Merge branch 'danielmiessler:main' into stick 2025-09-10 17:51:44 +03:00
OmriH-Elister
5073aac99b feat: add new pattern that creates story simulating interaction between two people 2025-09-10 14:37:15 +00:00
Changelog Bot
d89d932be1 chore: incoming 1759 changelog entry 2025-09-10 06:56:57 -07:00
Kayvan Sylvan
78280810f4 feat: add Windows-style forward slash flag support to CLI argument parser
- Add runtime OS detection for Windows platform
- Support `/flag` syntax for Windows command line
- Handle Windows colon delimiter `/flag:value` format
- Maintain backward compatibility with Unix-style flags
- Add comprehensive test coverage for flag extraction
- Support both `:` and `=` delimiters on Windows
- Preserve existing dash-based flag parsing logic
2025-09-10 06:30:20 -07:00
github-actions[bot]
65dae9bb85 chore(release): Update version to v1.4.309 2025-09-09 20:57:29 +00:00
Kayvan Sylvan
cbd88f6314 Merge pull request #1756 from ksylvan/kayvan/feature/0908-i18n-help-text
Add Internationalization Support with Custom Help System
2025-09-09 13:54:51 -07:00
Kayvan Sylvan
651c5743f1 feat: add comprehensive internationalization support with English and Spanish locales
- Replace hardcoded strings with i18n.T translations
- Add en and es JSON locale files
- Implement custom translated help system
- Enable language detection from CLI args
- Add locale download capability
- Localize error messages throughout codebase
- Support TTS and notification translations
2025-09-09 09:34:54 -07:00
29 changed files with 1092 additions and 319 deletions

View File

@@ -1,5 +1,53 @@
# Changelog
## v1.4.310 (2025-09-11)
### PR [#1759](https://github.com/danielmiessler/Fabric/pull/1759) by [ksylvan](https://github.com/ksylvan): Add Windows-style Flag Support for Language Detection
- Feat: add Windows-style forward slash flag support to CLI argument parser
- Add runtime OS detection for Windows platform
- Support `/flag` syntax for Windows command line
- Handle Windows colon delimiter `/flag:value` format
- Maintain backward compatibility with Unix-style flags
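As a rough illustration of the normalization described in the bullets above, a minimal Go sketch might pre-process arguments like this before normal flag parsing; the function name and details are assumptions for illustration, not the actual Fabric parser:

```go
package main

import (
	"fmt"
	"runtime"
	"strings"
)

// normalizeWindowsFlags rewrites Windows-style /flag arguments into the
// Unix-style --flag form. It is a no-op on non-Windows platforms, which
// preserves backward compatibility with existing dash-based flags.
func normalizeWindowsFlags(args []string) []string {
	if runtime.GOOS != "windows" {
		return args
	}
	out := make([]string, 0, len(args))
	for _, arg := range args {
		// Only rewrite arguments shaped like /flag, /flag:value, or /flag=value.
		if strings.HasPrefix(arg, "/") && len(arg) > 1 && !strings.Contains(arg, "\\") {
			body := arg[1:]
			// Accept both ':' and '=' as the name/value delimiter.
			if i := strings.IndexAny(body, ":="); i >= 0 {
				out = append(out, "--"+body[:i]+"="+body[i+1:])
			} else {
				out = append(out, "--"+body)
			}
			continue
		}
		out = append(out, arg)
	}
	return out
}

func main() {
	// On Windows this would print: [--pattern=summarize --stream]
	fmt.Println(normalizeWindowsFlags([]string{"/pattern:summarize", "--stream"}))
}
```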
### PR [#1762](https://github.com/danielmiessler/Fabric/pull/1762) by [OmriH-Elister](https://github.com/OmriH-Elister): New pattern for writing interaction between two characters
- Feat: add new pattern that creates story simulating interaction between two people
- Chore: add `create_story_about_people_interaction` pattern for persona analysis
- Add `create_story_about_people_interaction` pattern description
- Include pattern in `ANALYSIS` and `WRITING` categories
- Update `suggest_pattern` system and user documentation
### Direct commits
- Chore: update alias creation to use consistent naming
- Remove redundant prefix from `pattern_name` variable
- Add `alias_name` variable for consistent alias creation
- Update alias command to use `alias_name`
- Modify PowerShell function to use `aliasName`
- Docs: add optional prefix support for fabric pattern aliases via FABRIC_ALIAS_PREFIX env var
- Add FABRIC_ALIAS_PREFIX environment variable support
- Update bash/zsh alias generation with prefix
- Update PowerShell alias generation with prefix
- Improve readability of alias setup instructions
- Enable custom prefixing for pattern commands
- Maintain backward compatibility without prefix
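For example (the `fab_` prefix below is arbitrary, chosen only for illustration), setting the variable before the alias loop runs produces prefixed commands:

```bash
# Pick any prefix; "fab_" is just an example value.
export FABRIC_ALIAS_PREFIX="fab_"

# With the prefix set, the alias loop shown in the README generates entries like:
#   alias fab_summarize='fabric --pattern summarize'
# instead of the unprefixed:
#   alias summarize='fabric --pattern summarize'
```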
## v1.4.309 (2025-09-09)
### PR [#1756](https://github.com/danielmiessler/Fabric/pull/1756) by [ksylvan](https://github.com/ksylvan): Add Internationalization Support with Custom Help System
- Add comprehensive internationalization support with English and Spanish locales
- Replace hardcoded strings with i18n.T translations and add en and es JSON locale files
- Implement custom translated help system with language detection from CLI args
- Add locale download capability and localize error messages throughout codebase
- Support TTS and notification translations
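The bullets above describe the general shape of the feature. As an illustrative sketch only (not Fabric's actual `i18n.T` implementation), a JSON-locale lookup that falls back to the untranslated key could look like this in Go; the file path and function names are assumptions:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// messages holds the key -> translated string table loaded from a JSON
// locale file such as en.json or es.json.
var messages map[string]string

// loadLocale reads a locale file like locales/es.json into the lookup table.
// On failure it leaves an empty table so T() simply returns the key.
func loadLocale(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		messages = map[string]string{}
		return err
	}
	return json.Unmarshal(data, &messages)
}

// T returns the translation for key, falling back to the key itself when no
// translation is available, so untranslated strings remain readable.
func T(key string) string {
	if msg, ok := messages[key]; ok {
		return msg
	}
	return key
}

func main() {
	// A hypothetical Spanish locale file: {"setup_complete": "Configuración completada"}
	_ = loadLocale("locales/es.json")
	fmt.Println(T("setup_complete"))
}
```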
## v1.4.308 (2025-09-05)
### PR [#1755](https://github.com/danielmiessler/Fabric/pull/1755) by [ksylvan](https://github.com/ksylvan): Add i18n Support for Multi-Language Fabric Experience

View File

@@ -72,6 +72,7 @@ Below are the **new features and capabilities** we've added (newest first):
### Recent Major Features
- [v1.4.309](https://github.com/danielmiessler/fabric/releases/tag/v1.4.309) (Sep 9, 2025) — **Comprehensive internationalization support**: Includes English and Spanish locale files.
- [v1.4.303](https://github.com/danielmiessler/fabric/releases/tag/v1.4.303) (Aug 29, 2025) — **New Binary Releases**: Linux ARM and Windows ARM targets. You can run Fabric on the Raspberry PI and on your Windows Surface!
- [v1.4.294](https://github.com/danielmiessler/fabric/releases/tag/v1.4.294) (Aug 20, 2025) — **Venice AI Support**: Added the Venice AI provider. Venice is a Privacy-First, Open-Source AI provider. See their ["About Venice"](https://docs.venice.ai/overview/about-venice) page for details.
- [v1.4.291](https://github.com/danielmiessler/fabric/releases/tag/v1.4.291) (Aug 18, 2025) — **Speech To Text**: Add OpenAI speech-to-text support with `--transcribe-file`, `--transcribe-model`, and `--split-media-file` flags.
@@ -342,17 +343,20 @@ If everything works you are good to go.
### Add aliases for all patterns
In order to add aliases for all your patterns and use them directly as commands ie. `summarize` instead of `fabric --pattern summarize`
You can add the following to your `.zshrc` or `.bashrc` file.
In order to add aliases for all your patterns and use them directly as commands, for example, `summarize` instead of `fabric --pattern summarize`
You can add the following to your `.zshrc` or `.bashrc` file. You
can also optionally set the `FABRIC_ALIAS_PREFIX` environment variable
before, if you'd prefer all the fabric aliases to start with the same prefix.
```bash
# Loop through all files in the ~/.config/fabric/patterns directory
for pattern_file in $HOME/.config/fabric/patterns/*; do
# Get the base name of the file (i.e., remove the directory path)
pattern_name=$(basename "$pattern_file")
pattern_name="$(basename "$pattern_file")"
alias_name="${FABRIC_ALIAS_PREFIX:-}${pattern_name}"
# Create an alias in the form: alias pattern_name="fabric --pattern pattern_name"
alias_command="alias $pattern_name='fabric --pattern $pattern_name'"
alias_command="alias $alias_name='fabric --pattern $pattern_name'"
# Evaluate the alias command to add it to the current shell
eval "$alias_command"
@@ -381,11 +385,13 @@ You can add the below code for the equivalent aliases inside PowerShell by runni
# Path to the patterns directory
$patternsPath = Join-Path $HOME ".config/fabric/patterns"
foreach ($patternDir in Get-ChildItem -Path $patternsPath -Directory) {
$patternName = $patternDir.Name
# Prepend FABRIC_ALIAS_PREFIX if set; otherwise use empty string
$prefix = $env:FABRIC_ALIAS_PREFIX ?? ''
$patternName = "$($patternDir.Name)"
$aliasName = "$prefix$patternName"
# Dynamically define a function for each pattern
$functionDefinition = @"
function $patternName {
function $aliasName {
[CmdletBinding()]
param(
[Parameter(ValueFromPipeline = `$true)]

View File

@@ -1,3 +1,3 @@
package main
var version = "v1.4.308"
var version = "v1.4.310"

Binary file not shown.

View File

@@ -0,0 +1,37 @@
### Prompt
You will be provided with information about **two individuals** (real or fictional). The input will be **delimited by triple backticks**. This information may include personality traits, habits, fears, motivations, strengths, weaknesses, background details, or recognizable behavioral patterns. Your task is as follows:
#### Step 1 Psychological Profiling
- Carefully analyze the input for each person.
- Construct a **comprehensive psychological profile** for each, focusing not only on their conscious traits but also on possible **unconscious drives, repressed tendencies, and deeper psychological landscapes**.
- Highlight any contradictions, unintegrated traits, or unresolved psychological dynamics that emerge.
#### Step 2 Comparative Analysis
- Compare and contrast the two profiles.
- Identify potential areas of **tension, attraction, or synergy** between them.
- Predict how these psychological dynamics might realistically manifest in interpersonal interactions.
#### Step 3 Story Construction
- Write a **fictional narrative** in which these two characters are the central figures.
- The story should:
- Be driven primarily by their interaction.
- Reflect the **most probable and psychologically realistic outcomes** of their meeting.
- Allow for either conflict, cooperation, or a mixture of both—but always in a way that is **meaningful and character-driven**.
- Ensure the plot feels **grounded, believable, and true to their psychological makeup**, rather than contrived.
#### Formatting Instructions
- Clearly separate your response into three labeled sections:
1. **Profile A**
2. **Profile B**
3. **Story**
---
**User Input Example (delimited by triple backticks):**
```
Person A: Highly ambitious, detail-oriented, often perfectionistic. Has a fear of failure and tends to overwork. Childhood marked by pressure to achieve. Secretly desires freedom from expectations.
Person B: Warm, empathetic, values relationships over achievement. Struggles with self-assertion, avoids conflict. Childhood marked by neglect. Desires to be seen and valued. Often represses anger.
```
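As a usage sketch (assuming the standard `--pattern` invocation documented in the README, and a hypothetical input file holding the two persona descriptions delimited by triple backticks):

```bash
# two_people.txt is a hypothetical file containing the Person A / Person B descriptions.
cat two_people.txt | fabric --pattern create_story_about_people_interaction
```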

View File

@@ -1,6 +1,6 @@
# Brief one-line summary from AI analysis of what each pattern does
- Key pattern to use: **suggest_pattern**, suggests appropriate fabric patterns or commands based on user input.**
- Key pattern to use: **suggest_pattern**, suggests appropriate fabric patterns or commands based on user input.
1. **agility_story**: Generate a user story and acceptance criteria in JSON format based on the given topic.
2. **ai**: Interpret questions deeply and provide concise, insightful answers in Markdown bullet points.
@@ -89,135 +89,136 @@
85. **create_show_intro**: Creates compelling short intros for podcasts, summarizing key topics and themes discussed in the episode.
86. **create_sigma_rules**: Extracts Tactics, Techniques, and Procedures (TTPs) from security news and converts them into Sigma detection rules for host-based detections.
87. **create_story_about_person**: Creates compelling, realistic short stories based on psychological profiles, showing how characters navigate everyday problems using strategies consistent with their personality traits.
88. **create_story_explanation**: Summarizes complex content in a clear, approachable story format that makes the concepts easy to understand.
89. **create_stride_threat_model**: Create a STRIDE-based threat model for a system design, identifying assets, trust boundaries, data flows, and prioritizing threats with mitigations.
90. **create_summary**: Summarizes content into a 20-word sentence, 10 main points (16 words max), and 5 key takeaways in Markdown format.
91. **create_tags**: Identifies at least 5 tags from text content for mind mapping tools, including authors and existing tags if present.
92. **create_threat_scenarios**: Identifies likely attack methods for any system by providing a narrative-based threat model, balancing risk and opportunity.
93. **create_ttrc_graph**: Creates a CSV file showing the progress of Time to Remediate Critical Vulnerabilities over time using given data.
94. **create_ttrc_narrative**: Creates a persuasive narrative highlighting progress in reducing the Time to Remediate Critical Vulnerabilities metric over time.
95. **create_upgrade_pack**: Extracts world model and task algorithm updates from content, providing beliefs about how the world works and task performance.
96. **create_user_story**: Writes concise and clear technical user stories for new features in complex software programs, formatted for all stakeholders.
97. **create_video_chapters**: Extracts interesting topics and timestamps from a transcript, providing concise summaries of key moments.
98. **create_visualization**: Transforms complex ideas into visualizations using intricate ASCII art, simplifying concepts where necessary.
99. **dialog_with_socrates**: Engages in deep, meaningful dialogues to explore and challenge beliefs using the Socratic method.
100. **enrich_blog_post**: Enhances Markdown blog files by applying instructions to improve structure, visuals, and readability for HTML rendering.
101. **explain_code**: Explains code, security tool output, configuration text, and answers questions based on the provided input.
102. **explain_docs**: Improves and restructures tool documentation into clear, concise instructions, including overviews, usage, use cases, and key features.
103. **explain_math**: Helps you understand mathematical concepts in a clear and engaging way.
104. **explain_project**: Summarizes project documentation into clear, concise sections covering the project, problem, solution, installation, usage, and examples.
105. **explain_terms**: Produces a glossary of advanced terms from content, providing a definition, analogy, and explanation of why each term matters.
106. **export_data_as_csv**: Extracts and outputs all data structures from the input in properly formatted CSV data.
107. **extract_algorithm_update_recommendations**: Extracts concise, practical algorithm update recommendations from the input and outputs them in a bulleted list.
108. **extract_article_wisdom**: Extracts surprising, insightful, and interesting information from content, categorizing it into sections like summary, ideas, quotes, facts, references, and recommendations.
109. **extract_book_ideas**: Extracts and outputs 50 to 100 of the most surprising, insightful, and interesting ideas from a book's content.
110. **extract_book_recommendations**: Extracts and outputs 50 to 100 practical, actionable recommendations from a book's content.
111. **extract_business_ideas**: Extracts top business ideas from content and elaborates on the best 10 with unique differentiators.
112. **extract_controversial_ideas**: Extracts and outputs controversial statements and supporting quotes from the input in a structured Markdown list.
113. **extract_core_message**: Extracts and outputs a clear, concise sentence that articulates the core message of a given text or body of work.
114. **extract_ctf_writeup**: Extracts a short writeup from a warstory-like text about a cyber security engagement.
115. **extract_domains**: Extracts domains and URLs from content to identify sources used for articles, newsletters, and other publications.
116. **extract_extraordinary_claims**: Extracts and outputs a list of extraordinary claims from conversations, focusing on scientifically disputed or false statements.
117. **extract_ideas**: Extracts and outputs all the key ideas from input, presented as 15-word bullet points in Markdown.
118. **extract_insights**: Extracts and outputs the most powerful and insightful ideas from text, formatted as 16-word bullet points in the INSIGHTS section, also IDEAS section.
119. **extract_insights_dm**: Extracts and outputs all valuable insights and a concise summary of the content, including key points and topics discussed.
120. **extract_instructions**: Extracts clear, actionable step-by-step instructions and main objectives from instructional video transcripts, organizing them into a concise list.
121. **extract_jokes**: Extracts jokes from text content, presenting each joke with its punchline in separate bullet points.
122. **extract_latest_video**: Extracts the latest video URL from a YouTube RSS feed and outputs the URL only.
123. **extract_main_activities**: Extracts key events and activities from transcripts or logs, providing a summary of what happened.
124. **extract_main_idea**: Extracts the main idea and key recommendation from the input, summarizing them in 15-word sentences.
125. **extract_most_redeeming_thing**: Extracts the most redeeming aspect from an input, summarizing it in a single 15-word sentence.
126. **extract_patterns**: Extracts and analyzes recurring, surprising, and insightful patterns from input, providing detailed analysis and advice for builders.
127. **extract_poc**: Extracts proof of concept URLs and validation methods from security reports, providing the URL and command to run.
128. **extract_predictions**: Extracts predictions from input, including specific details such as date, confidence level, and verification method.
129. **extract_primary_problem**: Extracts the primary problem with the world as presented in a given text or body of work.
130. **extract_primary_solution**: Extracts the primary solution for the world as presented in a given text or body of work.
131. **extract_product_features**: Extracts and outputs a list of product features from the provided input in a bulleted format.
132. **extract_questions**: Extracts and outputs all questions asked by the interviewer in a conversation or interview.
133. **extract_recipe**: Extracts and outputs a recipe with a short meal description, ingredients with measurements, and preparation steps.
134. **extract_recommendations**: Extracts and outputs concise, practical recommendations from a given piece of content in a bulleted list.
135. **extract_references**: Extracts and outputs a bulleted list of references to art, stories, books, literature, and other sources from content.
136. **extract_skills**: Extracts and classifies skills from a job description into a table, separating each skill and classifying it as either hard or soft.
137. **extract_song_meaning**: Analyzes a song to provide a summary of its meaning, supported by detailed evidence from lyrics, artist commentary, and fan analysis.
138. **extract_sponsors**: Extracts and lists official sponsors and potential sponsors from a provided transcript.
139. **extract_videoid**: Extracts and outputs the video ID from any given URL.
140. **extract_wisdom**: Extracts surprising, insightful, and interesting information from text on topics like human flourishing, AI, learning, and more.
141. **extract_wisdom_agents**: Extracts valuable insights, ideas, quotes, and references from content, emphasizing topics like human flourishing, AI, learning, and technology.
142. **extract_wisdom_dm**: Extracts all valuable, insightful, and thought-provoking information from content, focusing on topics like human flourishing, AI, learning, and technology.
143. **extract_wisdom_nometa**: Extracts insights, ideas, quotes, habits, facts, references, and recommendations from content, focusing on human flourishing, AI, technology, and related topics.
144. **find_female_life_partner**: Analyzes criteria for finding a female life partner and provides clear, direct, and poetic descriptions.
145. **find_hidden_message**: Extracts overt and hidden political messages, justifications, audience actions, and a cynical analysis from content.
146. **find_logical_fallacies**: Identifies and analyzes fallacies in arguments, classifying them as formal or informal with detailed reasoning.
147. **get_wow_per_minute**: Determines the wow-factor of content per minute based on surprise, novelty, insight, value, and wisdom, measuring how rewarding the content is for the viewer.
148. **get_youtube_rss**: Returns the RSS URL for a given YouTube channel based on the channel ID or URL.
149. **heal_person**: Develops a comprehensive plan for spiritual and mental healing based on psychological profiles, providing personalized recommendations for mental health improvement and overall life enhancement.
150. **humanize**: Rewrites AI-generated text to sound natural, conversational, and easy to understand, maintaining clarity and simplicity.
151. **identify_dsrp_distinctions**: Encourages creative, systems-based thinking by exploring distinctions, boundaries, and their implications, drawing on insights from prominent systems thinkers.
152. **identify_dsrp_perspectives**: Explores the concept of distinctions in systems thinking, focusing on how boundaries define ideas, influence understanding, and reveal or obscure insights.
153. **identify_dsrp_relationships**: Encourages exploration of connections, distinctions, and boundaries between ideas, inspired by systems thinkers to reveal new insights and patterns in complex systems.
154. **identify_dsrp_systems**: Encourages organizing ideas into systems of parts and wholes, inspired by systems thinkers to explore relationships and how changes in organization impact meaning and understanding.
155. **identify_job_stories**: Identifies key job stories or requirements for roles.
156. **improve_academic_writing**: Refines text into clear, concise academic language while improving grammar, coherence, and clarity, with a list of changes.
157. **improve_prompt**: Improves an LLM/AI prompt by applying expert prompt writing strategies for better results and clarity.
158. **improve_report_finding**: Improves a penetration test security finding by providing detailed descriptions, risks, recommendations, references, quotes, and a concise summary in markdown format.
159. **improve_writing**: Refines text by correcting grammar, enhancing style, improving clarity, and maintaining the original meaning.
160. **judge_output**: Evaluates Honeycomb queries by judging their effectiveness, providing critiques and outcomes based on language nuances and analytics relevance.
161. **label_and_rate**: Labels content with up to 20 single-word tags and rates it based on idea count and relevance to human meaning, AI, and other related themes, assigning a tier (S, A, B, C, D) and a quality score.
162. **md_callout**: Classifies content and generates a markdown callout based on the provided text, selecting the most appropriate type.
163. **official_pattern_template**: Template to use if you want to create new fabric patterns.
164. **prepare_7s_strategy**: Prepares a comprehensive briefing document from 7S's strategy capturing organizational profile, strategic elements, and market dynamics with clear, concise, and organized content.
165. **provide_guidance**: Provides psychological and life coaching advice, including analysis, recommendations, and potential diagnoses, with a compassionate and honest tone.
166. **rate_ai_response**: Rates the quality of AI responses by comparing them to top human expert performance, assigning a letter grade, reasoning, and providing a 1-100 score based on the evaluation.
167. **rate_ai_result**: Assesses the quality of AI/ML/LLM work by deeply analyzing content, instructions, and output, then rates performance based on multiple dimensions, including coverage, creativity, and interdisciplinary thinking.
168. **rate_content**: Labels content with up to 20 single-word tags and rates it based on idea count and relevance to human meaning, AI, and other related themes, assigning a tier (S, A, B, C, D) and a quality score.
169. **rate_value**: Produces the best possible output by deeply analyzing and understanding the input and its intended purpose.
170. **raw_query**: Fully digests and contemplates the input to produce the best possible result based on understanding the sender's intent.
171. **recommend_artists**: Recommends a personalized festival schedule with artists aligned to your favorite styles and interests, including rationale.
172. **recommend_pipeline_upgrades**: Optimizes vulnerability-checking pipelines by incorporating new information and improving their efficiency, with detailed explanations of changes.
173. **recommend_talkpanel_topics**: Produces a clean set of proposed talks or panel talking points for a person based on their interests and goals, formatted for submission to a conference organizer.
174. **refine_design_document**: Refines a design document based on a design review by analyzing, mapping concepts, and implementing changes using valid Markdown.
175. **review_design**: Reviews and analyzes architecture design, focusing on clarity, component design, system integrations, security, performance, scalability, and data management.
176. **sanitize_broken_html_to_markdown**: Converts messy HTML into clean, properly formatted Markdown, applying custom styling and ensuring compatibility with Vite.
177. **suggest_pattern**: Suggests appropriate fabric patterns or commands based on user input, providing clear explanations and options for users.
178. **summarize**: Summarizes content into a 20-word sentence, main points, and takeaways, formatted with numbered lists in Markdown.
179. **summarize_board_meeting**: Creates formal meeting notes from board meeting transcripts for corporate governance documentation.
180. **summarize_debate**: Summarizes debates, identifies primary disagreement, extracts arguments, and provides analysis of evidence and argument strength to predict outcomes.
181. **summarize_git_changes**: Summarizes recent project updates from the last 7 days, focusing on key changes with enthusiasm.
182. **summarize_git_diff**: Summarizes and organizes Git diff changes with clear, succinct commit messages and bullet points.
183. **summarize_lecture**: Extracts relevant topics, definitions, and tools from lecture transcripts, providing structured summaries with timestamps and key takeaways.
184. **summarize_legislation**: Summarizes complex political proposals and legislation by analyzing key points, proposed changes, and providing balanced, positive, and cynical characterizations.
185. **summarize_meeting**: Analyzes meeting transcripts to extract a structured summary, including an overview, key points, tasks, decisions, challenges, timeline, references, and next steps.
186. **summarize_micro**: Summarizes content into a 20-word sentence, 3 main points, and 3 takeaways, formatted in clear, concise Markdown.
187. **summarize_newsletter**: Extracts the most meaningful, interesting, and useful content from a newsletter, summarizing key sections such as content, opinions, tools, companies, and follow-up items in clear, structured Markdown.
188. **summarize_paper**: Summarizes an academic paper by detailing its title, authors, technical approach, distinctive features, experimental setup, results, advantages, limitations, and conclusion in a clear, structured format using human-readable Markdown.
189. **summarize_prompt**: Summarizes AI chat prompts by describing the primary function, unique approach, and expected output in a concise paragraph. The summary is focused on the prompt's purpose without unnecessary details or formatting.
190. **summarize_pull-requests**: Summarizes pull requests for a coding project by providing a summary and listing the top PRs with human-readable descriptions.
191. **summarize_rpg_session**: Summarizes a role-playing game session by extracting key events, combat stats, character changes, quotes, and more.
192. **t_analyze_challenge_handling**: Provides 8-16 word bullet points evaluating how well challenges are being addressed, calling out any lack of effort.
193. **t_check_metrics**: Analyzes deep context from the TELOS file and input instruction, then provides a wisdom-based output while considering metrics and KPIs to assess recent improvements.
194. **t_create_h3_career**: Summarizes context and produces wisdom-based output by deeply analyzing both the TELOS File and the input instruction, considering the relationship between the two.
195. **t_create_opening_sentences**: Describes from TELOS file the person's identity, goals, and actions in 4 concise, 32-word bullet points, humbly.
196. **t_describe_life_outlook**: Describes from TELOS file a person's life outlook in 5 concise, 16-word bullet points.
197. **t_extract_intro_sentences**: Summarizes from TELOS file a person's identity, work, and current projects in 5 concise and grounded bullet points.
198. **t_extract_panel_topics**: Creates 5 panel ideas with titles and descriptions based on deep context from a TELOS file and input.
199. **t_find_blindspots**: Identify potential blindspots in thinking, frames, or models that may expose the individual to error or risk.
200. **t_find_negative_thinking**: Analyze a TELOS file and input to identify negative thinking in documents or journals, followed by tough love encouragement.
201. **t_find_neglected_goals**: Analyze a TELOS file and input instructions to identify goals or projects that have not been worked on recently.
202. **t_give_encouragement**: Analyze a TELOS file and input instructions to evaluate progress, provide encouragement, and offer recommendations for continued effort.
203. **t_red_team_thinking**: Analyze a TELOS file and input instructions to red-team thinking, models, and frames, then provide recommendations for improvement.
204. **t_threat_model_plans**: Analyze a TELOS file and input instructions to create threat models for a life plan and recommend improvements.
205. **t_visualize_mission_goals_projects**: Analyze a TELOS file and input instructions to create an ASCII art diagram illustrating the relationship of missions, goals, and projects.
206. **t_year_in_review**: Analyze a TELOS file to create insights about a person or entity, then summarize accomplishments and visualizations in bullet points.
207. **to_flashcards**: Create Anki flashcards from a given text, focusing on concise, optimized questions and answers without external context.
208. **transcribe_minutes**: Extracts (from meeting transcription) meeting minutes, identifying actionables, insightful ideas, decisions, challenges, and next steps in a structured format.
209. **translate**: Translates sentences or documentation into the specified language code while maintaining the original formatting and tone.
210. **tweet**: Provides a step-by-step guide on crafting engaging tweets with emojis, covering Twitter basics, account creation, features, and audience targeting.
211. **write_essay**: Writes essays in the style of a specified author, embodying their unique voice, vocabulary, and approach. Uses `author_name` variable.
212. **write_essay_pg**: Writes concise, clear essays in the style of Paul Graham, focusing on simplicity, clarity, and illumination of the provided topic.
213. **write_hackerone_report**: Generates concise, clear, and reproducible bug bounty reports, detailing vulnerability impact, steps to reproduce, and exploit details for triagers.
214. **write_latex**: Generates syntactically correct LaTeX code for a new .tex document, ensuring proper formatting and compatibility with pdflatex.
215. **write_micro_essay**: Writes concise, clear, and illuminating essays on the given topic in the style of Paul Graham.
216. **write_nuclei_template_rule**: Generates Nuclei YAML templates for detecting vulnerabilities using HTTP requests, matchers, extractors, and dynamic data extraction.
217. **write_pull-request**: Drafts detailed pull request descriptions, explaining changes, providing reasoning, and identifying potential bugs from the git diff command output.
218. **write_semgrep_rule**: Creates accurate and working Semgrep rules based on input, following syntax guidelines and specific language considerations.
219. **youtube_summary**: Create concise, timestamped Youtube video summaries that highlight key points.
88. **create_story_about_people_interaction**: Analyze two personas, compare their dynamics, and craft a realistic, character-driven story from those insights.
89. **create_story_explanation**: Summarizes complex content in a clear, approachable story format that makes the concepts easy to understand.
90. **create_stride_threat_model**: Create a STRIDE-based threat model for a system design, identifying assets, trust boundaries, data flows, and prioritizing threats with mitigations.
91. **create_summary**: Summarizes content into a 20-word sentence, 10 main points (16 words max), and 5 key takeaways in Markdown format.
92. **create_tags**: Identifies at least 5 tags from text content for mind mapping tools, including authors and existing tags if present.
93. **create_threat_scenarios**: Identifies likely attack methods for any system by providing a narrative-based threat model, balancing risk and opportunity.
94. **create_ttrc_graph**: Creates a CSV file showing the progress of Time to Remediate Critical Vulnerabilities over time using given data.
95. **create_ttrc_narrative**: Creates a persuasive narrative highlighting progress in reducing the Time to Remediate Critical Vulnerabilities metric over time.
96. **create_upgrade_pack**: Extracts world model and task algorithm updates from content, providing beliefs about how the world works and task performance.
97. **create_user_story**: Writes concise and clear technical user stories for new features in complex software programs, formatted for all stakeholders.
98. **create_video_chapters**: Extracts interesting topics and timestamps from a transcript, providing concise summaries of key moments.
99. **create_visualization**: Transforms complex ideas into visualizations using intricate ASCII art, simplifying concepts where necessary.
100. **dialog_with_socrates**: Engages in deep, meaningful dialogues to explore and challenge beliefs using the Socratic method.
101. **enrich_blog_post**: Enhances Markdown blog files by applying instructions to improve structure, visuals, and readability for HTML rendering.
102. **explain_code**: Explains code, security tool output, configuration text, and answers questions based on the provided input.
103. **explain_docs**: Improves and restructures tool documentation into clear, concise instructions, including overviews, usage, use cases, and key features.
104. **explain_math**: Helps you understand mathematical concepts in a clear and engaging way.
105. **explain_project**: Summarizes project documentation into clear, concise sections covering the project, problem, solution, installation, usage, and examples.
106. **explain_terms**: Produces a glossary of advanced terms from content, providing a definition, analogy, and explanation of why each term matters.
107. **export_data_as_csv**: Extracts and outputs all data structures from the input in properly formatted CSV data.
108. **extract_algorithm_update_recommendations**: Extracts concise, practical algorithm update recommendations from the input and outputs them in a bulleted list.
109. **extract_article_wisdom**: Extracts surprising, insightful, and interesting information from content, categorizing it into sections like summary, ideas, quotes, facts, references, and recommendations.
110. **extract_book_ideas**: Extracts and outputs 50 to 100 of the most surprising, insightful, and interesting ideas from a book's content.
111. **extract_book_recommendations**: Extracts and outputs 50 to 100 practical, actionable recommendations from a book's content.
112. **extract_business_ideas**: Extracts top business ideas from content and elaborates on the best 10 with unique differentiators.
113. **extract_controversial_ideas**: Extracts and outputs controversial statements and supporting quotes from the input in a structured Markdown list.
114. **extract_core_message**: Extracts and outputs a clear, concise sentence that articulates the core message of a given text or body of work.
115. **extract_ctf_writeup**: Extracts a short writeup from a warstory-like text about a cyber security engagement.
116. **extract_domains**: Extracts domains and URLs from content to identify sources used for articles, newsletters, and other publications.
117. **extract_extraordinary_claims**: Extracts and outputs a list of extraordinary claims from conversations, focusing on scientifically disputed or false statements.
118. **extract_ideas**: Extracts and outputs all the key ideas from input, presented as 15-word bullet points in Markdown.
119. **extract_insights**: Extracts and outputs the most powerful and insightful ideas from text, formatted as 16-word bullet points in the INSIGHTS section, also IDEAS section.
120. **extract_insights_dm**: Extracts and outputs all valuable insights and a concise summary of the content, including key points and topics discussed.
121. **extract_instructions**: Extracts clear, actionable step-by-step instructions and main objectives from instructional video transcripts, organizing them into a concise list.
122. **extract_jokes**: Extracts jokes from text content, presenting each joke with its punchline in separate bullet points.
123. **extract_latest_video**: Extracts the latest video URL from a YouTube RSS feed and outputs the URL only.
124. **extract_main_activities**: Extracts key events and activities from transcripts or logs, providing a summary of what happened.
125. **extract_main_idea**: Extracts the main idea and key recommendation from the input, summarizing them in 15-word sentences.
126. **extract_most_redeeming_thing**: Extracts the most redeeming aspect from an input, summarizing it in a single 15-word sentence.
127. **extract_patterns**: Extracts and analyzes recurring, surprising, and insightful patterns from input, providing detailed analysis and advice for builders.
128. **extract_poc**: Extracts proof of concept URLs and validation methods from security reports, providing the URL and command to run.
129. **extract_predictions**: Extracts predictions from input, including specific details such as date, confidence level, and verification method.
130. **extract_primary_problem**: Extracts the primary problem with the world as presented in a given text or body of work.
131. **extract_primary_solution**: Extracts the primary solution for the world as presented in a given text or body of work.
132. **extract_product_features**: Extracts and outputs a list of product features from the provided input in a bulleted format.
133. **extract_questions**: Extracts and outputs all questions asked by the interviewer in a conversation or interview.
134. **extract_recipe**: Extracts and outputs a recipe with a short meal description, ingredients with measurements, and preparation steps.
135. **extract_recommendations**: Extracts and outputs concise, practical recommendations from a given piece of content in a bulleted list.
136. **extract_references**: Extracts and outputs a bulleted list of references to art, stories, books, literature, and other sources from content.
137. **extract_skills**: Extracts and classifies skills from a job description into a table, separating each skill and classifying it as either hard or soft.
138. **extract_song_meaning**: Analyzes a song to provide a summary of its meaning, supported by detailed evidence from lyrics, artist commentary, and fan analysis.
139. **extract_sponsors**: Extracts and lists official sponsors and potential sponsors from a provided transcript.
140. **extract_videoid**: Extracts and outputs the video ID from any given URL.
141. **extract_wisdom**: Extracts surprising, insightful, and interesting information from text on topics like human flourishing, AI, learning, and more.
142. **extract_wisdom_agents**: Extracts valuable insights, ideas, quotes, and references from content, emphasizing topics like human flourishing, AI, learning, and technology.
143. **extract_wisdom_dm**: Extracts all valuable, insightful, and thought-provoking information from content, focusing on topics like human flourishing, AI, learning, and technology.
144. **extract_wisdom_nometa**: Extracts insights, ideas, quotes, habits, facts, references, and recommendations from content, focusing on human flourishing, AI, technology, and related topics.
145. **find_female_life_partner**: Analyzes criteria for finding a female life partner and provides clear, direct, and poetic descriptions.
146. **find_hidden_message**: Extracts overt and hidden political messages, justifications, audience actions, and a cynical analysis from content.
147. **find_logical_fallacies**: Identifies and analyzes fallacies in arguments, classifying them as formal or informal with detailed reasoning.
148. **get_wow_per_minute**: Determines the wow-factor of content per minute based on surprise, novelty, insight, value, and wisdom, measuring how rewarding the content is for the viewer.
149. **get_youtube_rss**: Returns the RSS URL for a given YouTube channel based on the channel ID or URL.
150. **heal_person**: Develops a comprehensive plan for spiritual and mental healing based on psychological profiles, providing personalized recommendations for mental health improvement and overall life enhancement.
151. **humanize**: Rewrites AI-generated text to sound natural, conversational, and easy to understand, maintaining clarity and simplicity.
152. **identify_dsrp_distinctions**: Encourages creative, systems-based thinking by exploring distinctions, boundaries, and their implications, drawing on insights from prominent systems thinkers.
153. **identify_dsrp_perspectives**: Explores the concept of distinctions in systems thinking, focusing on how boundaries define ideas, influence understanding, and reveal or obscure insights.
154. **identify_dsrp_relationships**: Encourages exploration of connections, distinctions, and boundaries between ideas, inspired by systems thinkers to reveal new insights and patterns in complex systems.
155. **identify_dsrp_systems**: Encourages organizing ideas into systems of parts and wholes, inspired by systems thinkers to explore relationships and how changes in organization impact meaning and understanding.
156. **identify_job_stories**: Identifies key job stories or requirements for roles.
157. **improve_academic_writing**: Refines text into clear, concise academic language while improving grammar, coherence, and clarity, with a list of changes.
158. **improve_prompt**: Improves an LLM/AI prompt by applying expert prompt writing strategies for better results and clarity.
159. **improve_report_finding**: Improves a penetration test security finding by providing detailed descriptions, risks, recommendations, references, quotes, and a concise summary in markdown format.
160. **improve_writing**: Refines text by correcting grammar, enhancing style, improving clarity, and maintaining the original meaning.
161. **judge_output**: Evaluates Honeycomb queries by judging their effectiveness, providing critiques and outcomes based on language nuances and analytics relevance.
162. **label_and_rate**: Labels content with up to 20 single-word tags and rates it based on idea count and relevance to human meaning, AI, and other related themes, assigning a tier (S, A, B, C, D) and a quality score.
163. **md_callout**: Classifies content and generates a markdown callout based on the provided text, selecting the most appropriate type.
164. **official_pattern_template**: Template to use if you want to create new fabric patterns.
165. **prepare_7s_strategy**: Prepares a comprehensive briefing document from 7S's strategy capturing organizational profile, strategic elements, and market dynamics with clear, concise, and organized content.
166. **provide_guidance**: Provides psychological and life coaching advice, including analysis, recommendations, and potential diagnoses, with a compassionate and honest tone.
167. **rate_ai_response**: Rates the quality of AI responses by comparing them to top human expert performance, assigning a letter grade, reasoning, and providing a 1-100 score based on the evaluation.
168. **rate_ai_result**: Assesses the quality of AI/ML/LLM work by deeply analyzing content, instructions, and output, then rates performance based on multiple dimensions, including coverage, creativity, and interdisciplinary thinking.
169. **rate_content**: Labels content with up to 20 single-word tags and rates it based on idea count and relevance to human meaning, AI, and other related themes, assigning a tier (S, A, B, C, D) and a quality score.
170. **rate_value**: Produces the best possible output by deeply analyzing and understanding the input and its intended purpose.
171. **raw_query**: Fully digests and contemplates the input to produce the best possible result based on understanding the sender's intent.
172. **recommend_artists**: Recommends a personalized festival schedule with artists aligned to your favorite styles and interests, including rationale.
173. **recommend_pipeline_upgrades**: Optimizes vulnerability-checking pipelines by incorporating new information and improving their efficiency, with detailed explanations of changes.
174. **recommend_talkpanel_topics**: Produces a clean set of proposed talks or panel talking points for a person based on their interests and goals, formatted for submission to a conference organizer.
175. **refine_design_document**: Refines a design document based on a design review by analyzing, mapping concepts, and implementing changes using valid Markdown.
176. **review_design**: Reviews and analyzes architecture design, focusing on clarity, component design, system integrations, security, performance, scalability, and data management.
177. **sanitize_broken_html_to_markdown**: Converts messy HTML into clean, properly formatted Markdown, applying custom styling and ensuring compatibility with Vite.
178. **suggest_pattern**: Suggests appropriate fabric patterns or commands based on user input, providing clear explanations and options for users.
179. **summarize**: Summarizes content into a 20-word sentence, main points, and takeaways, formatted with numbered lists in Markdown.
180. **summarize_board_meeting**: Creates formal meeting notes from board meeting transcripts for corporate governance documentation.
181. **summarize_debate**: Summarizes debates, identifies primary disagreement, extracts arguments, and provides analysis of evidence and argument strength to predict outcomes.
182. **summarize_git_changes**: Summarizes recent project updates from the last 7 days, focusing on key changes with enthusiasm.
183. **summarize_git_diff**: Summarizes and organizes Git diff changes with clear, succinct commit messages and bullet points.
184. **summarize_lecture**: Extracts relevant topics, definitions, and tools from lecture transcripts, providing structured summaries with timestamps and key takeaways.
185. **summarize_legislation**: Summarizes complex political proposals and legislation by analyzing key points, proposed changes, and providing balanced, positive, and cynical characterizations.
186. **summarize_meeting**: Analyzes meeting transcripts to extract a structured summary, including an overview, key points, tasks, decisions, challenges, timeline, references, and next steps.
187. **summarize_micro**: Summarizes content into a 20-word sentence, 3 main points, and 3 takeaways, formatted in clear, concise Markdown.
188. **summarize_newsletter**: Extracts the most meaningful, interesting, and useful content from a newsletter, summarizing key sections such as content, opinions, tools, companies, and follow-up items in clear, structured Markdown.
189. **summarize_paper**: Summarizes an academic paper by detailing its title, authors, technical approach, distinctive features, experimental setup, results, advantages, limitations, and conclusion in a clear, structured format using human-readable Markdown.
190. **summarize_prompt**: Summarizes AI chat prompts by describing the primary function, unique approach, and expected output in a concise paragraph. The summary is focused on the prompt's purpose without unnecessary details or formatting.
191. **summarize_pull-requests**: Summarizes pull requests for a coding project by providing a summary and listing the top PRs with human-readable descriptions.
192. **summarize_rpg_session**: Summarizes a role-playing game session by extracting key events, combat stats, character changes, quotes, and more.
193. **t_analyze_challenge_handling**: Provides 8-16 word bullet points evaluating how well challenges are being addressed, calling out any lack of effort.
194. **t_check_metrics**: Analyzes deep context from the TELOS file and input instruction, then provides a wisdom-based output while considering metrics and KPIs to assess recent improvements.
195. **t_create_h3_career**: Summarizes context and produces wisdom-based output by deeply analyzing both the TELOS File and the input instruction, considering the relationship between the two.
196. **t_create_opening_sentences**: Describes from TELOS file the person's identity, goals, and actions in 4 concise, 32-word bullet points, humbly.
197. **t_describe_life_outlook**: Describes from TELOS file a person's life outlook in 5 concise, 16-word bullet points.
198. **t_extract_intro_sentences**: Summarizes from TELOS file a person's identity, work, and current projects in 5 concise and grounded bullet points.
199. **t_extract_panel_topics**: Creates 5 panel ideas with titles and descriptions based on deep context from a TELOS file and input.
200. **t_find_blindspots**: Identify potential blindspots in thinking, frames, or models that may expose the individual to error or risk.
201. **t_find_negative_thinking**: Analyze a TELOS file and input to identify negative thinking in documents or journals, followed by tough love encouragement.
202. **t_find_neglected_goals**: Analyze a TELOS file and input instructions to identify goals or projects that have not been worked on recently.
203. **t_give_encouragement**: Analyze a TELOS file and input instructions to evaluate progress, provide encouragement, and offer recommendations for continued effort.
204. **t_red_team_thinking**: Analyze a TELOS file and input instructions to red-team thinking, models, and frames, then provide recommendations for improvement.
205. **t_threat_model_plans**: Analyze a TELOS file and input instructions to create threat models for a life plan and recommend improvements.
206. **t_visualize_mission_goals_projects**: Analyze a TELOS file and input instructions to create an ASCII art diagram illustrating the relationship of missions, goals, and projects.
207. **t_year_in_review**: Analyze a TELOS file to create insights about a person or entity, then summarize accomplishments and visualizations in bullet points.
208. **to_flashcards**: Create Anki flashcards from a given text, focusing on concise, optimized questions and answers without external context.
209. **transcribe_minutes**: Extracts (from meeting transcription) meeting minutes, identifying actionables, insightful ideas, decisions, challenges, and next steps in a structured format.
210. **translate**: Translates sentences or documentation into the specified language code while maintaining the original formatting and tone.
211. **tweet**: Provides a step-by-step guide on crafting engaging tweets with emojis, covering Twitter basics, account creation, features, and audience targeting.
212. **write_essay**: Writes essays in the style of a specified author, embodying their unique voice, vocabulary, and approach. Uses `author_name` variable.
213. **write_essay_pg**: Writes concise, clear essays in the style of Paul Graham, focusing on simplicity, clarity, and illumination of the provided topic.
214. **write_hackerone_report**: Generates concise, clear, and reproducible bug bounty reports, detailing vulnerability impact, steps to reproduce, and exploit details for triagers.
215. **write_latex**: Generates syntactically correct LaTeX code for a new .tex document, ensuring proper formatting and compatibility with pdflatex.
216. **write_micro_essay**: Writes concise, clear, and illuminating essays on the given topic in the style of Paul Graham.
217. **write_nuclei_template_rule**: Generates Nuclei YAML templates for detecting vulnerabilities using HTTP requests, matchers, extractors, and dynamic data extraction.
218. **write_pull-request**: Drafts detailed pull request descriptions, explaining changes, providing reasoning, and identifying potential bugs from the git diff command output.
219. **write_semgrep_rule**: Creates accurate and working Semgrep rules based on input, following syntax guidelines and specific language considerations.
220. **youtube_summary**: Create concise, timestamped Youtube video summaries that highlight key points.

View File

@@ -73,7 +73,7 @@ Match the request to one or more of these primary categories:
**AI**: ai, create_ai_jobs_analysis, create_art_prompt, create_pattern, create_prediction_block, extract_mcp_servers, extract_wisdom_agents, generate_code_rules, improve_prompt, judge_output, rate_ai_response, rate_ai_result, raw_query, suggest_pattern, summarize_prompt
**ANALYSIS**: ai, analyze_answers, analyze_bill, analyze_bill_short, analyze_candidates, analyze_cfp_submission, analyze_claims, analyze_comments, analyze_debate, analyze_email_headers, analyze_incident, analyze_interviewer_techniques, analyze_logs, analyze_malware, analyze_military_strategy, analyze_mistakes, analyze_paper, analyze_paper_simple, analyze_patent, analyze_personality, analyze_presentation, analyze_product_feedback, analyze_proposition, analyze_prose, analyze_prose_json, analyze_prose_pinker, analyze_risk, analyze_sales_call, analyze_spiritual_text, analyze_tech_impact, analyze_terraform_plan, analyze_threat_report, analyze_threat_report_cmds, analyze_threat_report_trends, apply_ul_tags, check_agreement, compare_and_contrast, create_ai_jobs_analysis, create_idea_compass, create_investigation_visualization, create_prediction_block, create_recursive_outline, create_tags, dialog_with_socrates, extract_main_idea, extract_predictions, find_hidden_message, find_logical_fallacies, get_wow_per_minute, identify_dsrp_distinctions, identify_dsrp_perspectives, identify_dsrp_relationships, identify_dsrp_systems, identify_job_stories, label_and_rate, prepare_7s_strategy, provide_guidance, rate_content, rate_value, recommend_artists, recommend_talkpanel_topics, review_design, summarize_board_meeting, t_analyze_challenge_handling, t_check_dunning_kruger, t_check_metrics, t_describe_life_outlook, t_extract_intro_sentences, t_extract_panel_topics, t_find_blindspots, t_find_negative_thinking, t_red_team_thinking, t_threat_model_plans, t_year_in_review, write_hackerone_report
**ANALYSIS**: ai, analyze_answers, analyze_bill, analyze_bill_short, analyze_candidates, analyze_cfp_submission, analyze_claims, analyze_comments, analyze_debate, analyze_email_headers, analyze_incident, analyze_interviewer_techniques, analyze_logs, analyze_malware, analyze_military_strategy, analyze_mistakes, analyze_paper, analyze_paper_simple, analyze_patent, analyze_personality, analyze_presentation, analyze_product_feedback, analyze_proposition, analyze_prose, analyze_prose_json, analyze_prose_pinker, analyze_risk, analyze_sales_call, analyze_spiritual_text, analyze_tech_impact, analyze_terraform_plan, analyze_threat_report, analyze_threat_report_cmds, analyze_threat_report_trends, apply_ul_tags, check_agreement, compare_and_contrast, create_ai_jobs_analysis, create_idea_compass, create_investigation_visualization, create_prediction_block, create_recursive_outline, create_story_about_people_interaction, create_tags, dialog_with_socrates, extract_main_idea, extract_predictions, find_hidden_message, find_logical_fallacies, get_wow_per_minute, identify_dsrp_distinctions, identify_dsrp_perspectives, identify_dsrp_relationships, identify_dsrp_systems, identify_job_stories, label_and_rate, prepare_7s_strategy, provide_guidance, rate_content, rate_value, recommend_artists, recommend_talkpanel_topics, review_design, summarize_board_meeting, t_analyze_challenge_handling, t_check_dunning_kruger, t_check_metrics, t_describe_life_outlook, t_extract_intro_sentences, t_extract_panel_topics, t_find_blindspots, t_find_negative_thinking, t_red_team_thinking, t_threat_model_plans, t_year_in_review, write_hackerone_report
**BILL**: analyze_bill, analyze_bill_short
@@ -115,7 +115,7 @@ Match the request to one or more of these primary categories:
**WISDOM**: extract_alpha, extract_article_wisdom, extract_book_ideas, extract_insights, extract_most_redeeming_thing, extract_recommendations, extract_wisdom, extract_wisdom_dm, extract_wisdom_nometa, extract_wisdom_short
**WRITING**: analyze_prose_json, analyze_prose_pinker, apply_ul_tags, clean_text, compare_and_contrast, convert_to_markdown, create_5_sentence_summary, create_academic_paper, create_aphorisms, create_better_frame, create_design_document, create_diy, create_formal_email, create_hormozi_offer, create_keynote, create_micro_summary, create_newsletter_entry, create_prediction_block, create_prd, create_show_intro, create_story_explanation, create_summary, create_tags, create_user_story, enrich_blog_post, explain_docs, explain_terms, humanize, improve_academic_writing, improve_writing, label_and_rate, md_callout, official_pattern_template, recommend_talkpanel_topics, refine_design_document, summarize, summarize_debate, summarize_lecture, summarize_legislation, summarize_meeting, summarize_micro, summarize_newsletter, summarize_paper, summarize_rpg_session, t_create_opening_sentences, t_describe_life_outlook, t_extract_intro_sentences, t_extract_panel_topics, t_give_encouragement, t_year_in_review, transcribe_minutes, tweet, write_essay, write_essay_pg, write_hackerone_report, write_latex, write_micro_essay, write_pull-request
**WRITING**: analyze_prose_json, analyze_prose_pinker, apply_ul_tags, clean_text, compare_and_contrast, convert_to_markdown, create_5_sentence_summary, create_academic_paper, create_aphorisms, create_better_frame, create_design_document, create_diy, create_formal_email, create_hormozi_offer, create_keynote, create_micro_summary, create_newsletter_entry, create_prediction_block, create_prd, create_show_intro, create_story_about_people_interaction, create_story_explanation, create_summary, create_tags, create_user_story, enrich_blog_post, explain_docs, explain_terms, humanize, improve_academic_writing, improve_writing, label_and_rate, md_callout, official_pattern_template, recommend_talkpanel_topics, refine_design_document, summarize, summarize_debate, summarize_lecture, summarize_legislation, summarize_meeting, summarize_micro, summarize_newsletter, summarize_paper, summarize_rpg_session, t_create_opening_sentences, t_describe_life_outlook, t_extract_intro_sentences, t_extract_panel_topics, t_give_encouragement, t_year_in_review, transcribe_minutes, tweet, write_essay, write_essay_pg, write_hackerone_report, write_latex, write_micro_essay, write_pull-request
## Workflow Suggestions

View File

@@ -204,6 +204,10 @@ Identify automation risks and career resilience strategies.
Develop positive mental frameworks for challenging situations.
### create_story_about_people_interaction
Analyze two personas, compare their dynamics, and craft a realistic, character-driven story from those insights.
### create_idea_compass
Organize thoughts analyzing definitions, evidence, relationships, implications.
@@ -570,6 +574,10 @@ Write concise newsletter content focusing on key insights.
Craft compelling podcast/show intros to engage audience.
### create_story_about_people_interaction
Analyze two personas, compare their dynamics, and craft a realistic, character-driven story from those insights.
### create_story_explanation
Transform complex concepts into clear, engaging narratives.

View File

@@ -9,6 +9,7 @@ import (
"github.com/danielmiessler/fabric/internal/core"
"github.com/danielmiessler/fabric/internal/domain"
"github.com/danielmiessler/fabric/internal/i18n"
debuglog "github.com/danielmiessler/fabric/internal/log"
"github.com/danielmiessler/fabric/internal/plugins/db/fsdb"
"github.com/danielmiessler/fabric/internal/tools/notifications"
@@ -58,12 +59,12 @@ func handleChatProcessing(currentFlags *Flags, registry *core.PluginRegistry, me
isTTSModel := isTTSModel(currentFlags.Model)
if isTTSModel && !isAudioOutput {
err = fmt.Errorf("TTS model '%s' requires audio output. Please specify an audio output file with -o flag (e.g., -o output.wav)", currentFlags.Model)
err = fmt.Errorf("%s", fmt.Sprintf(i18n.T("tts_model_requires_audio_output"), currentFlags.Model))
return
}
if isAudioOutput && !isTTSModel {
err = fmt.Errorf("audio output file '%s' specified but model '%s' is not a TTS model. Please use a TTS model like gemini-2.5-flash-preview-tts", currentFlags.Output, currentFlags.Model)
err = fmt.Errorf("%s", fmt.Sprintf(i18n.T("audio_output_file_specified_but_not_tts_model"), currentFlags.Output, currentFlags.Model))
return
}
@@ -75,7 +76,7 @@ func handleChatProcessing(currentFlags *Flags, registry *core.PluginRegistry, me
outputFile += ".wav"
}
if _, err = os.Stat(outputFile); err == nil {
err = fmt.Errorf("file %s already exists. Please choose a different filename or remove the existing file", outputFile)
err = fmt.Errorf("%s", fmt.Sprintf(i18n.T("file_already_exists_choose_different"), outputFile))
return
}
}
@@ -95,7 +96,7 @@ func handleChatProcessing(currentFlags *Flags, registry *core.PluginRegistry, me
if !currentFlags.Stream || currentFlags.SuppressThink {
// For TTS models with audio output, show a user-friendly message instead of raw data
if isTTSModel && isAudioOutput && strings.HasPrefix(result, "FABRIC_AUDIO_DATA:") {
fmt.Printf("TTS audio generated successfully and saved to: %s\n", currentFlags.Output)
fmt.Printf(i18n.T("tts_audio_generated_successfully"), currentFlags.Output)
} else {
// print the result if it was not streamed already or suppress-think disabled streaming output
fmt.Println(result)
@@ -149,20 +150,20 @@ func handleChatProcessing(currentFlags *Flags, registry *core.PluginRegistry, me
// not grapheme clusters. As a result, complex emoji or accented characters with multiple combining
// characters may be truncated improperly. This is a limitation of the current implementation.
func sendNotification(options *domain.ChatOptions, patternName, result string) error {
title := "Fabric Command Complete"
title := i18n.T("fabric_command_complete")
if patternName != "" {
title = fmt.Sprintf("Fabric: %s Complete", patternName)
title = fmt.Sprintf(i18n.T("fabric_command_complete_with_pattern"), patternName)
}
// Limit message length for notification display (counts Unicode code points)
message := "Command completed successfully"
message := i18n.T("command_completed_successfully")
if result != "" {
maxLength := 100
runes := []rune(result)
if len(runes) > maxLength {
message = fmt.Sprintf("Output: %s...", string(runes[:maxLength]))
message = fmt.Sprintf(i18n.T("output_truncated"), string(runes[:maxLength]))
} else {
message = fmt.Sprintf("Output: %s", result)
message = fmt.Sprintf(i18n.T("output_full"), result)
}
// Clean up newlines for notification display
message = strings.ReplaceAll(message, "\n", " ")
@@ -184,7 +185,7 @@ func sendNotification(options *domain.ChatOptions, patternName, result string) e
// Use built-in notification system
notificationManager := notifications.NewNotificationManager()
if !notificationManager.IsAvailable() {
return fmt.Errorf("no notification system available")
return fmt.Errorf("%s", i18n.T("no_notification_system_available"))
}
return notificationManager.Send(title, message)
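
A small, self-contained sketch of the truncation caveat noted in the comment above sendNotification: slicing a `[]rune` counts Unicode code points, not grapheme clusters, so a character built from several code points (for example an emoji plus a skin-tone modifier) can be split in the middle. The helper name and strings below are illustrative only, not part of the changeset.

```go
package main

import "fmt"

// truncateRunes mirrors the rune-based limit described above: it counts code
// points, not grapheme clusters, so multi-code-point characters may be split.
func truncateRunes(s string, max int) string {
	runes := []rune(s)
	if len(runes) > max {
		return string(runes[:max]) + "..."
	}
	return s
}

func main() {
	// "👍🏽" is two code points (U+1F44D followed by the U+1F3FD modifier).
	fmt.Println(truncateRunes("ok 👍🏽 done", 4)) // cuts between the base emoji and its modifier
}
```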

View File

@@ -13,6 +13,7 @@ import (
"github.com/danielmiessler/fabric/internal/chat"
"github.com/danielmiessler/fabric/internal/domain"
"github.com/danielmiessler/fabric/internal/i18n"
debuglog "github.com/danielmiessler/fabric/internal/log"
"github.com/danielmiessler/fabric/internal/util"
"github.com/jessevdk/go-flags"
@@ -146,9 +147,15 @@ func Init() (ret *Flags, err error) {
// Parse CLI flags first
ret = &Flags{}
parser := flags.NewParser(ret, flags.Default)
parser := flags.NewParser(ret, flags.HelpFlag|flags.PassDoubleDash)
var args []string
if args, err = parser.Parse(); err != nil {
// Check if this is a help request and handle it with our custom help
if flagsErr, ok := err.(*flags.Error); ok && flagsErr.Type == flags.ErrHelp {
CustomHelpHandler(parser, os.Stdout)
os.Exit(0)
}
return
}
debuglog.SetLevel(debuglog.LevelFromInt(ret.Debug))
@@ -275,30 +282,30 @@ func assignWithConversion(targetField, sourceField reflect.Value) error {
return nil
}
}
return fmt.Errorf("cannot convert string %q to %v", str, targetField.Kind())
return fmt.Errorf("%s", fmt.Sprintf(i18n.T("cannot_convert_string"), str, targetField.Kind()))
}
return fmt.Errorf("unsupported conversion from %v to %v", sourceField.Kind(), targetField.Kind())
return fmt.Errorf("%s", fmt.Sprintf(i18n.T("unsupported_conversion"), sourceField.Kind(), targetField.Kind()))
}
func loadYAMLConfig(configPath string) (*Flags, error) {
absPath, err := util.GetAbsolutePath(configPath)
if err != nil {
return nil, fmt.Errorf("invalid config path: %w", err)
return nil, fmt.Errorf("%s", fmt.Sprintf(i18n.T("invalid_config_path"), err))
}
data, err := os.ReadFile(absPath)
if err != nil {
if os.IsNotExist(err) {
return nil, fmt.Errorf("config file not found: %s", absPath)
return nil, fmt.Errorf("%s", fmt.Sprintf(i18n.T("config_file_not_found"), absPath))
}
return nil, fmt.Errorf("error reading config file: %w", err)
return nil, fmt.Errorf("%s", fmt.Sprintf(i18n.T("error_reading_config_file"), err))
}
// Use the existing Flags struct for YAML unmarshal
config := &Flags{}
if err := yaml.Unmarshal(data, config); err != nil {
return nil, fmt.Errorf("error parsing config file: %w", err)
return nil, fmt.Errorf("%s", fmt.Sprintf(i18n.T("error_parsing_config_file"), err))
}
debuglog.Debug(debuglog.Detailed, "Config: %v\n", config)
@@ -316,7 +323,7 @@ func readStdin() (ret string, err error) {
sb.WriteString(line)
break
}
err = fmt.Errorf("error reading piped message from stdin: %w", readErr)
err = fmt.Errorf("%s", fmt.Sprintf(i18n.T("error_reading_piped_message"), readErr))
return
} else {
sb.WriteString(line)
@@ -334,7 +341,7 @@ func validateImageFile(imagePath string) error {
// Check if file already exists
if _, err := os.Stat(imagePath); err == nil {
return fmt.Errorf("image file already exists: %s", imagePath)
return fmt.Errorf("%s", fmt.Sprintf(i18n.T("image_file_already_exists"), imagePath))
}
// Check file extension
@@ -347,7 +354,7 @@ func validateImageFile(imagePath string) error {
}
}
return fmt.Errorf("invalid image file extension '%s'. Supported formats: .png, .jpeg, .jpg, .webp", ext)
return fmt.Errorf("%s", fmt.Sprintf(i18n.T("invalid_image_file_extension"), ext))
}
// validateImageParameters validates image generation parameters
@@ -355,7 +362,7 @@ func validateImageParameters(imagePath, size, quality, background string, compre
if imagePath == "" {
// Check if any image parameters are specified without --image-file
if size != "" || quality != "" || background != "" || compression != 0 {
return fmt.Errorf("image parameters (--image-size, --image-quality, --image-background, --image-compression) can only be used with --image-file")
return fmt.Errorf("%s", i18n.T("image_parameters_require_image_file"))
}
return nil
}
@@ -371,7 +378,7 @@ func validateImageParameters(imagePath, size, quality, background string, compre
}
}
if !valid {
return fmt.Errorf("invalid image size '%s'. Supported sizes: 1024x1024, 1536x1024, 1024x1536, auto", size)
return fmt.Errorf("%s", fmt.Sprintf(i18n.T("invalid_image_size"), size))
}
}
@@ -386,7 +393,7 @@ func validateImageParameters(imagePath, size, quality, background string, compre
}
}
if !valid {
return fmt.Errorf("invalid image quality '%s'. Supported qualities: low, medium, high, auto", quality)
return fmt.Errorf("%s", fmt.Sprintf(i18n.T("invalid_image_quality"), quality))
}
}
@@ -401,7 +408,7 @@ func validateImageParameters(imagePath, size, quality, background string, compre
}
}
if !valid {
return fmt.Errorf("invalid image background '%s'. Supported backgrounds: opaque, transparent", background)
return fmt.Errorf("%s", fmt.Sprintf(i18n.T("invalid_image_background"), background))
}
}
@@ -411,17 +418,17 @@ func validateImageParameters(imagePath, size, quality, background string, compre
// Validate compression (only for jpeg/webp)
if compression != 0 { // 0 means not set
if ext != ".jpg" && ext != ".jpeg" && ext != ".webp" {
return fmt.Errorf("image compression can only be used with JPEG and WebP formats, not %s", ext)
return fmt.Errorf("%s", fmt.Sprintf(i18n.T("image_compression_jpeg_webp_only"), ext))
}
if compression < 0 || compression > 100 {
return fmt.Errorf("image compression must be between 0 and 100, got %d", compression)
return fmt.Errorf("%s", fmt.Sprintf(i18n.T("image_compression_range_error"), compression))
}
}
// Validate background transparency (only for png/webp)
if background == "transparent" {
if ext != ".png" && ext != ".webp" {
return fmt.Errorf("transparent background can only be used with PNG and WebP formats, not %s", ext)
return fmt.Errorf("%s", fmt.Sprintf(i18n.T("transparent_background_png_webp_only"), ext))
}
}
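
The error-wrapping idiom repeated throughout this hunk, shown as a standalone sketch: the translated template is formatted with fmt.Sprintf first, and the finished string is then passed to fmt.Errorf through a constant "%s" format. The i18nT stub below stands in for i18n.T only so the example runs on its own; the key and English template are the ones added to the locale file later in this changeset.

```go
package main

import "fmt"

// i18nT is a stand-in for i18n.T; the real function resolves the key against
// the active locale bundle.
func i18nT(key string) string {
	table := map[string]string{
		"invalid_image_size": "invalid image size '%s'. Supported sizes: 1024x1024, 1536x1024, 1024x1536, auto",
	}
	return table[key]
}

// invalidSizeErr shows the idiom: format the translated template, then hand
// the already-formatted message to Errorf via a constant "%s" format string.
func invalidSizeErr(size string) error {
	return fmt.Errorf("%s", fmt.Sprintf(i18nT("invalid_image_size"), size))
}

func main() {
	fmt.Println(invalidSizeErr("999x999"))
}
```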

View File

@@ -455,3 +455,30 @@ func TestBuildChatOptionsWithImageParameters(t *testing.T) {
assert.Contains(t, err.Error(), "can only be used with --image-file")
})
}
func TestExtractFlag(t *testing.T) {
tests := []struct {
name string
arg string
expected string
}{
// Unix-style flags
{"long flag", "--help", "help"},
{"long flag with value", "--pattern=analyze", "pattern"},
{"short flag", "-h", "h"},
{"short flag with value", "-p=test", "p"},
{"single dash", "-", ""},
{"double dash only", "--", ""},
// Non-flags
{"regular arg", "analyze", ""},
{"path arg", "./file.txt", ""},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
result := extractFlag(tt.arg)
assert.Equal(t, tt.expected, result)
})
}
}
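
The table above pins down the expected behaviour of extractFlag for Unix-style arguments; below is a minimal sketch of one way to satisfy it. This is not the repository's implementation: the function name, the Windows forward-slash branch, and the delimiter handling are assumptions based on the test cases and on the `/g:` handling visible in help.go further down.

```go
package main

import (
	"fmt"
	"runtime"
	"strings"
)

// extractFlagSketch returns the bare flag name for "-x", "--name", "--name=value"
// (and, on Windows, "/name" or "/name:value"), or "" for non-flag arguments.
func extractFlagSketch(arg string) string {
	var name string
	switch {
	case strings.HasPrefix(arg, "--"):
		name = arg[2:]
	case strings.HasPrefix(arg, "-"):
		name = arg[1:]
	case runtime.GOOS == "windows" && strings.HasPrefix(arg, "/"):
		name = arg[1:]
	default:
		return "" // plain arguments such as "analyze" or "./file.txt"
	}
	// Keep only the flag name, dropping "=value" (and ":value" on Windows).
	if i := strings.IndexAny(name, "=:"); i >= 0 {
		name = name[:i]
	}
	return name
}

func main() {
	fmt.Println(extractFlagSketch("--pattern=analyze")) // pattern
	fmt.Println(extractFlagSketch("-h"))                // h
	fmt.Println(extractFlagSketch("./file.txt"))        // (empty)
}
```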

internal/cli/help.go Normal file
View File

@@ -0,0 +1,291 @@
package cli
import (
"fmt"
"io"
"os"
"reflect"
"runtime"
"strings"
"github.com/danielmiessler/fabric/internal/i18n"
"github.com/jessevdk/go-flags"
)
// flagDescriptionMap maps flag names to their i18n keys
var flagDescriptionMap = map[string]string{
"pattern": "choose_pattern_from_available",
"variable": "pattern_variables_help",
"context": "choose_context_from_available",
"session": "choose_session_from_available",
"attachment": "attachment_path_or_url_help",
"setup": "run_setup_for_reconfigurable_parts",
"temperature": "set_temperature",
"topp": "set_top_p",
"stream": "stream_help",
"presencepenalty": "set_presence_penalty",
"raw": "use_model_defaults_raw_help",
"frequencypenalty": "set_frequency_penalty",
"listpatterns": "list_all_patterns",
"listmodels": "list_all_available_models",
"listcontexts": "list_all_contexts",
"listsessions": "list_all_sessions",
"updatepatterns": "update_patterns",
"copy": "copy_to_clipboard",
"model": "choose_model",
"vendor": "specify_vendor_for_model",
"modelContextLength": "model_context_length_ollama",
"output": "output_to_file",
"output-session": "output_entire_session",
"latest": "number_of_latest_patterns",
"changeDefaultModel": "change_default_model",
"youtube": "youtube_url_help",
"playlist": "prefer_playlist_over_video",
"transcript": "grab_transcript_from_youtube",
"transcript-with-timestamps": "grab_transcript_with_timestamps",
"comments": "grab_comments_from_youtube",
"metadata": "output_video_metadata",
"yt-dlp-args": "additional_yt_dlp_args",
"language": "specify_language_code",
"scrape_url": "scrape_website_url",
"scrape_question": "search_question_jina",
"seed": "seed_for_lmm_generation",
"wipecontext": "wipe_context",
"wipesession": "wipe_session",
"printcontext": "print_context",
"printsession": "print_session",
"readability": "convert_html_readability",
"input-has-vars": "apply_variables_to_input",
"no-variable-replacement": "disable_pattern_variable_replacement",
"dry-run": "show_dry_run",
"serve": "serve_fabric_rest_api",
"serveOllama": "serve_fabric_api_ollama_endpoints",
"address": "address_to_bind_rest_api",
"api-key": "api_key_secure_server_routes",
"config": "path_to_yaml_config",
"version": "print_current_version",
"listextensions": "list_all_registered_extensions",
"addextension": "register_new_extension",
"rmextension": "remove_registered_extension",
"strategy": "choose_strategy_from_available",
"liststrategies": "list_all_strategies",
"listvendors": "list_all_vendors",
"shell-complete-list": "output_raw_list_shell_completion",
"search": "enable_web_search_tool",
"search-location": "set_location_web_search",
"image-file": "save_generated_image_to_file",
"image-size": "image_dimensions_help",
"image-quality": "image_quality_help",
"image-compression": "compression_level_jpeg_webp",
"image-background": "background_type_help",
"suppress-think": "suppress_thinking_tags",
"think-start-tag": "start_tag_thinking_sections",
"think-end-tag": "end_tag_thinking_sections",
"disable-responses-api": "disable_openai_responses_api",
"transcribe-file": "audio_video_file_transcribe",
"transcribe-model": "model_for_transcription",
"split-media-file": "split_media_files_ffmpeg",
"voice": "tts_voice_name",
"list-gemini-voices": "list_gemini_tts_voices",
"list-transcription-models": "list_transcription_models",
"notification": "send_desktop_notification",
"notification-command": "custom_notification_command",
"thinking": "set_reasoning_thinking_level",
"debug": "set_debug_level",
}
// TranslatedHelpWriter provides custom help output with translated descriptions
type TranslatedHelpWriter struct {
parser *flags.Parser
writer io.Writer
}
// NewTranslatedHelpWriter creates a new help writer with translations
func NewTranslatedHelpWriter(parser *flags.Parser, writer io.Writer) *TranslatedHelpWriter {
return &TranslatedHelpWriter{
parser: parser,
writer: writer,
}
}
// WriteHelp writes the help output with translated flag descriptions
func (h *TranslatedHelpWriter) WriteHelp() {
fmt.Fprintf(h.writer, "%s\n", i18n.T("usage_header"))
fmt.Fprintf(h.writer, " %s %s\n\n", h.parser.Name, i18n.T("options_placeholder"))
fmt.Fprintf(h.writer, "%s\n", i18n.T("application_options_header"))
h.writeAllFlags()
fmt.Fprintf(h.writer, "\n%s\n", i18n.T("help_options_header"))
fmt.Fprintf(h.writer, " -h, --help %s\n", i18n.T("help_message"))
}
// getTranslatedDescription gets the translated description for a flag
func (h *TranslatedHelpWriter) getTranslatedDescription(flagName string) string {
if i18nKey, exists := flagDescriptionMap[flagName]; exists {
return i18n.T(i18nKey)
}
// Fallback 1: Try to get original description from struct tag
if desc := h.getOriginalDescription(flagName); desc != "" {
return desc
}
// Fallback 2: Provide a user-friendly default message
return i18n.T("no_description_available")
}
// getOriginalDescription retrieves the original description from struct tags
func (h *TranslatedHelpWriter) getOriginalDescription(flagName string) string {
flags := &Flags{}
flagsType := reflect.TypeOf(flags).Elem()
for i := 0; i < flagsType.NumField(); i++ {
field := flagsType.Field(i)
longTag := field.Tag.Get("long")
if longTag == flagName {
if description := field.Tag.Get("description"); description != "" {
return description
}
break
}
}
return ""
}
// CustomHelpHandler handles help output with translations
func CustomHelpHandler(parser *flags.Parser, writer io.Writer) {
// Initialize i18n system with detected language if not already initialized
ensureI18nInitialized()
helpWriter := NewTranslatedHelpWriter(parser, writer)
helpWriter.WriteHelp()
}
// ensureI18nInitialized initializes the i18n system if not already done
func ensureI18nInitialized() {
// Try to detect language from command line args or environment
lang := detectLanguageFromArgs()
if lang == "" {
// Try to detect from environment variables
lang = detectLanguageFromEnv()
}
// Initialize i18n with detected language (or empty for system default)
i18n.Init(lang)
}
// detectLanguageFromArgs looks for --language/-g flag in os.Args
func detectLanguageFromArgs() string {
args := os.Args[1:]
for i, arg := range args {
if arg == "--language" || arg == "-g" || (runtime.GOOS == "windows" && arg == "/g") {
if i+1 < len(args) {
return args[i+1]
}
} else if strings.HasPrefix(arg, "--language=") {
return strings.TrimPrefix(arg, "--language=")
} else if strings.HasPrefix(arg, "-g=") {
return strings.TrimPrefix(arg, "-g=")
} else if runtime.GOOS == "windows" && strings.HasPrefix(arg, "/g:") {
return strings.TrimPrefix(arg, "/g:")
} else if runtime.GOOS == "windows" && strings.HasPrefix(arg, "/g=") {
return strings.TrimPrefix(arg, "/g=")
}
}
return ""
}
// detectLanguageFromEnv detects language from environment variables
func detectLanguageFromEnv() string {
// Check standard locale environment variables
envVars := []string{"LC_ALL", "LC_MESSAGES", "LANG"}
for _, envVar := range envVars {
if value := os.Getenv(envVar); value != "" {
// Extract language code from locale (e.g., "es_ES.UTF-8" -> "es")
if strings.Contains(value, "_") {
return strings.Split(value, "_")[0]
}
if value != "C" && value != "POSIX" {
return value
}
}
}
return ""
}
// writeAllFlags writes all flags with translated descriptions
func (h *TranslatedHelpWriter) writeAllFlags() {
// Use direct reflection on the Flags struct to get all flag definitions
flags := &Flags{}
flagsType := reflect.TypeOf(flags).Elem()
for i := 0; i < flagsType.NumField(); i++ {
field := flagsType.Field(i)
shortTag := field.Tag.Get("short")
longTag := field.Tag.Get("long")
defaultTag := field.Tag.Get("default")
if longTag == "" {
continue // Skip fields without long tags
}
// Get translated description
description := h.getTranslatedDescription(longTag)
// Format the flag line
var flagLine strings.Builder
flagLine.WriteString(" ")
if shortTag != "" {
flagLine.WriteString(fmt.Sprintf("-%s, ", shortTag))
}
flagLine.WriteString(fmt.Sprintf("--%s", longTag))
// Add parameter indicator for non-boolean flags
isBoolFlag := field.Type.Kind() == reflect.Bool ||
strings.HasSuffix(longTag, "patterns") ||
strings.HasSuffix(longTag, "models") ||
strings.HasSuffix(longTag, "contexts") ||
strings.HasSuffix(longTag, "sessions") ||
strings.HasSuffix(longTag, "extensions") ||
strings.HasSuffix(longTag, "strategies") ||
strings.HasSuffix(longTag, "vendors") ||
strings.HasSuffix(longTag, "voices") ||
longTag == "setup" || longTag == "stream" || longTag == "raw" ||
longTag == "copy" || longTag == "updatepatterns" ||
longTag == "output-session" || longTag == "changeDefaultModel" ||
longTag == "playlist" || longTag == "transcript" ||
longTag == "transcript-with-timestamps" || longTag == "comments" ||
longTag == "metadata" || longTag == "readability" ||
longTag == "input-has-vars" || longTag == "no-variable-replacement" ||
longTag == "dry-run" || longTag == "serve" || longTag == "serveOllama" ||
longTag == "version" || longTag == "shell-complete-list" ||
longTag == "search" || longTag == "suppress-think" ||
longTag == "disable-responses-api" || longTag == "split-media-file" ||
longTag == "notification"
if !isBoolFlag {
flagLine.WriteString("=")
}
// Pad to align descriptions
flagStr := flagLine.String()
padding := 34 - len(flagStr)
if padding < 2 {
padding = 2
}
fmt.Fprintf(h.writer, "%s%s%s", flagStr, strings.Repeat(" ", padding), description)
// Add default value if present
if defaultTag != "" && defaultTag != "0" && defaultTag != "false" {
fmt.Fprintf(h.writer, " (default: %s)", defaultTag)
}
fmt.Fprintf(h.writer, "\n")
}
}
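
writeAllFlags drives its output purely from struct tags on Flags, so the only contract is the set of tag keys it reads: `short`, `long`, `description`, and `default`. A cut-down, hypothetical stand-in makes the mechanism easy to see; the field names and descriptions below are not the real ones.

```go
package main

import (
	"fmt"
	"reflect"
)

// exampleFlags is a hypothetical stand-in for the real Flags struct; only the
// tag keys matter, because the help writer above reads them via reflection.
type exampleFlags struct {
	Pattern string `short:"p" long:"pattern" description:"Choose a pattern"`
	Debug   int    `short:"d" long:"debug" description:"Set debug level" default:"0"`
}

func main() {
	t := reflect.TypeOf(exampleFlags{})
	for i := 0; i < t.NumField(); i++ {
		field := t.Field(i)
		long := field.Tag.Get("long")
		if long == "" {
			continue // skip fields without a long flag name
		}
		fmt.Printf("  -%s, --%s\t%s\n", field.Tag.Get("short"), long, field.Tag.Get("description"))
	}
}
```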

View File

@@ -6,6 +6,7 @@ import (
"path/filepath"
"github.com/danielmiessler/fabric/internal/core"
"github.com/danielmiessler/fabric/internal/i18n"
"github.com/danielmiessler/fabric/internal/plugins/db/fsdb"
)
@@ -36,20 +37,20 @@ func initializeFabric() (registry *core.PluginRegistry, err error) {
func ensureEnvFile() (err error) {
var homedir string
if homedir, err = os.UserHomeDir(); err != nil {
return fmt.Errorf("could not determine user home directory: %w", err)
return fmt.Errorf("%s", fmt.Sprintf(i18n.T("could_not_determine_home_dir"), err))
}
configDir := filepath.Join(homedir, ".config", "fabric")
envPath := filepath.Join(configDir, ".env")
if _, statErr := os.Stat(envPath); statErr != nil {
if !os.IsNotExist(statErr) {
return fmt.Errorf("could not stat .env file: %w", statErr)
return fmt.Errorf("%s", fmt.Sprintf(i18n.T("could_not_stat_env_file"), statErr))
}
if err = os.MkdirAll(configDir, ConfigDirPerms); err != nil {
return fmt.Errorf("could not create config directory: %w", err)
return fmt.Errorf("%s", fmt.Sprintf(i18n.T("could_not_create_config_dir"), err))
}
if err = os.WriteFile(envPath, []byte{}, EnvFilePerms); err != nil {
return fmt.Errorf("could not create .env file: %w", err)
return fmt.Errorf("%s", fmt.Sprintf(i18n.T("could_not_create_env_file"), err))
}
}
return

View File

@@ -8,6 +8,7 @@ import (
openai "github.com/openai/openai-go"
"github.com/danielmiessler/fabric/internal/core"
"github.com/danielmiessler/fabric/internal/i18n"
"github.com/danielmiessler/fabric/internal/plugins/ai"
"github.com/danielmiessler/fabric/internal/plugins/ai/gemini"
"github.com/danielmiessler/fabric/internal/plugins/db/fsdb"
@@ -93,7 +94,7 @@ func listTranscriptionModels(shellComplete bool) {
fmt.Println(model)
}
} else {
fmt.Println("Available transcription models:")
fmt.Println(i18n.T("available_transcription_models"))
for _, model := range models {
fmt.Printf(" %s\n", model)
}

View File

@@ -7,29 +7,30 @@ import (
"strings"
"github.com/atotto/clipboard"
"github.com/danielmiessler/fabric/internal/i18n"
debuglog "github.com/danielmiessler/fabric/internal/log"
)
func CopyToClipboard(message string) (err error) {
if err = clipboard.WriteAll(message); err != nil {
err = fmt.Errorf("could not copy to clipboard: %v", err)
err = fmt.Errorf("%s", fmt.Sprintf(i18n.T("could_not_copy_to_clipboard"), err))
}
return
}
func CreateOutputFile(message string, fileName string) (err error) {
if _, err = os.Stat(fileName); err == nil {
err = fmt.Errorf("file %s already exists, not overwriting. Rename the existing file or choose a different name", fileName)
err = fmt.Errorf("%s", fmt.Sprintf(i18n.T("file_already_exists_not_overwriting"), fileName))
return
}
var file *os.File
if file, err = os.Create(fileName); err != nil {
err = fmt.Errorf("error creating file: %v", err)
err = fmt.Errorf("%s", fmt.Sprintf(i18n.T("error_creating_file"), err))
return
}
defer file.Close()
if _, err = file.WriteString(message); err != nil {
err = fmt.Errorf("error writing to file: %v", err)
err = fmt.Errorf("%s", fmt.Sprintf(i18n.T("error_writing_to_file"), err))
} else {
debuglog.Log("\n\n[Output also written to %s]\n", fileName)
}
@@ -46,13 +47,13 @@ func CreateAudioOutputFile(audioData []byte, fileName string) (err error) {
// File existence check is now done in the CLI layer before TTS generation
var file *os.File
if file, err = os.Create(fileName); err != nil {
err = fmt.Errorf("error creating audio file: %v", err)
err = fmt.Errorf("%s", fmt.Sprintf(i18n.T("error_creating_audio_file"), err))
return
}
defer file.Close()
if _, err = file.Write(audioData); err != nil {
err = fmt.Errorf("error writing audio data to file: %v", err)
err = fmt.Errorf("%s", fmt.Sprintf(i18n.T("error_writing_audio_data"), err))
}
// No redundant output message here - the CLI layer handles success messaging
return

View File

@@ -4,6 +4,7 @@ import (
"fmt"
"github.com/danielmiessler/fabric/internal/core"
"github.com/danielmiessler/fabric/internal/i18n"
"github.com/danielmiessler/fabric/internal/tools/youtube"
)
@@ -11,7 +12,7 @@ import (
func handleToolProcessing(currentFlags *Flags, registry *core.PluginRegistry) (messageTools string, err error) {
if currentFlags.YouTube != "" {
if !registry.YouTube.IsConfigured() {
err = fmt.Errorf("YouTube is not configured, please run the setup procedure")
err = fmt.Errorf("%s", i18n.T("youtube_not_configured"))
return
}
@@ -25,7 +26,7 @@ func handleToolProcessing(currentFlags *Flags, registry *core.PluginRegistry) (m
} else {
var videos []*youtube.VideoMeta
if videos, err = registry.YouTube.FetchPlaylistVideos(playlistId); err != nil {
err = fmt.Errorf("error fetching playlist videos: %w", err)
err = fmt.Errorf("%s", fmt.Sprintf(i18n.T("error_fetching_playlist_videos"), err))
return
}
@@ -58,7 +59,7 @@ func handleToolProcessing(currentFlags *Flags, registry *core.PluginRegistry) (m
if currentFlags.ScrapeURL != "" || currentFlags.ScrapeQuestion != "" {
if !registry.Jina.IsConfigured() {
err = fmt.Errorf("scraping functionality is not configured. Please set up Jina to enable scraping")
err = fmt.Errorf("%s", i18n.T("scraping_not_configured"))
return
}
// Check if the scrape_url flag is set and call ScrapeURL

View File

@@ -5,6 +5,7 @@ import (
"fmt"
"github.com/danielmiessler/fabric/internal/core"
"github.com/danielmiessler/fabric/internal/i18n"
)
type transcriber interface {
@@ -18,15 +19,15 @@ func handleTranscription(flags *Flags, registry *core.PluginRegistry) (message s
}
vendor, ok := registry.VendorManager.VendorsByName[vendorName]
if !ok {
return "", fmt.Errorf("vendor %s not configured", vendorName)
return "", fmt.Errorf("%s", fmt.Sprintf(i18n.T("vendor_not_configured"), vendorName))
}
tr, ok := vendor.(transcriber)
if !ok {
return "", fmt.Errorf("vendor %s does not support audio transcription", vendorName)
return "", fmt.Errorf("%s", fmt.Sprintf(i18n.T("vendor_no_transcription_support"), vendorName))
}
model := flags.TranscribeModel
if model == "" {
return "", fmt.Errorf("transcription model is required (use --transcribe-model)")
return "", fmt.Errorf("%s", i18n.T("transcription_model_required"))
}
if message, err = tr.TranscribeFile(context.Background(), flags.TranscribeFile, model, flags.SplitMediaFile); err != nil {
return

View File

@@ -10,6 +10,7 @@ import (
"strconv"
"strings"
"github.com/danielmiessler/fabric/internal/i18n"
debuglog "github.com/danielmiessler/fabric/internal/log"
"github.com/danielmiessler/fabric/internal/plugins/ai/anthropic"
"github.com/danielmiessler/fabric/internal/plugins/ai/azure"
@@ -131,7 +132,7 @@ func (o *PluginRegistry) ListVendors(out io.Writer) error {
vendors := lo.Map(o.VendorsAll.Vendors, func(vendor ai.Vendor, _ int) string {
return vendor.GetName()
})
fmt.Fprint(out, "Available Vendors:\n\n")
fmt.Fprintf(out, "%s\n\n", i18n.T("available_vendors_header"))
for _, vendor := range vendors {
fmt.Fprintf(out, "%s\n", vendor)
}

View File

@@ -66,12 +66,12 @@ func Init(locale string) (*i18n.Localizer, error) {
if _, err := os.Stat(path); os.IsNotExist(err) && !embedded {
if err := downloadLocale(path, locale); err != nil {
// if download fails, still continue with embedded translations
fmt.Fprintln(os.Stderr, "i18n download failed:", err)
fmt.Fprintf(os.Stderr, "%s\n", fmt.Sprintf(getErrorMessage("i18n_download_failed", "Failed to download translation for language '%s': %v"), locale, err))
}
}
if _, err := os.Stat(path); err == nil {
if _, err := bundle.LoadMessageFile(path); err != nil {
fmt.Fprintln(os.Stderr, "i18n load failed:", err)
fmt.Fprintf(os.Stderr, "%s\n", fmt.Sprintf(getErrorMessage("i18n_load_failed", "Failed to load translation file: %v"), err))
}
}
@@ -119,3 +119,42 @@ func downloadLocale(path, locale string) error {
_, err = io.Copy(f, resp.Body)
return err
}
// getErrorMessage tries to get a translated error message, falling back to system locale
// and then to the provided fallback message. This is used during initialization when
// the translator may not be fully ready.
func getErrorMessage(messageID, fallback string) string {
// Try to get system locale for error messages
systemLocale := getPreferredLocale("")
if systemLocale == "" {
systemLocale = "en"
}
// First try the system locale
if msg := tryGetMessage(systemLocale, messageID); msg != "" {
return msg
}
// Fall back to English
if systemLocale != "en" {
if msg := tryGetMessage("en", messageID); msg != "" {
return msg
}
}
// Final fallback to hardcoded message
return fallback
}
// tryGetMessage attempts to get a message from embedded locale files
func tryGetMessage(locale, messageID string) string {
if data, err := localeFS.ReadFile("locales/" + locale + ".json"); err == nil {
var messages map[string]string
if json.Unmarshal(data, &messages) == nil {
if msg, exists := messages[messageID]; exists {
return msg
}
}
}
return ""
}

View File

@@ -1,3 +1,136 @@
{
"html_readability_error": "use original input, because can't apply html readability"
"html_readability_error": "use original input, because can't apply html readability",
"vendor_not_configured": "vendor %s not configured",
"vendor_no_transcription_support": "vendor %s does not support audio transcription",
"transcription_model_required": "transcription model is required (use --transcribe-model)",
"youtube_not_configured": "YouTube is not configured, please run the setup procedure",
"error_fetching_playlist_videos": "error fetching playlist videos: %w",
"scraping_not_configured": "scraping functionality is not configured. Please set up Jina to enable scraping",
"could_not_determine_home_dir": "could not determine user home directory: %w",
"could_not_stat_env_file": "could not stat .env file: %w",
"could_not_create_config_dir": "could not create config directory: %w",
"could_not_create_env_file": "could not create .env file: %w",
"could_not_copy_to_clipboard": "could not copy to clipboard: %v",
"file_already_exists_not_overwriting": "file %s already exists, not overwriting. Rename the existing file or choose a different name",
"error_creating_file": "error creating file: %v",
"error_writing_to_file": "error writing to file: %v",
"error_creating_audio_file": "error creating audio file: %v",
"error_writing_audio_data": "error writing audio data to file: %v",
"tts_model_requires_audio_output": "TTS model '%s' requires audio output. Please specify an audio output file with -o flag (e.g., -o output.wav)",
"audio_output_file_specified_but_not_tts_model": "audio output file '%s' specified but model '%s' is not a TTS model. Please use a TTS model like gemini-2.5-flash-preview-tts",
"file_already_exists_choose_different": "file %s already exists. Please choose a different filename or remove the existing file",
"no_notification_system_available": "no notification system available",
"cannot_convert_string": "cannot convert string %q to %v",
"unsupported_conversion": "unsupported conversion from %v to %v",
"invalid_config_path": "invalid config path: %w",
"config_file_not_found": "config file not found: %s",
"error_reading_config_file": "error reading config file: %w",
"error_parsing_config_file": "error parsing config file: %w",
"error_reading_piped_message": "error reading piped message from stdin: %w",
"image_file_already_exists": "image file already exists: %s",
"invalid_image_file_extension": "invalid image file extension '%s'. Supported formats: .png, .jpeg, .jpg, .webp",
"image_parameters_require_image_file": "image parameters (--image-size, --image-quality, --image-background, --image-compression) can only be used with --image-file",
"invalid_image_size": "invalid image size '%s'. Supported sizes: 1024x1024, 1536x1024, 1024x1536, auto",
"invalid_image_quality": "invalid image quality '%s'. Supported qualities: low, medium, high, auto",
"invalid_image_background": "invalid image background '%s'. Supported backgrounds: opaque, transparent",
"image_compression_jpeg_webp_only": "image compression can only be used with JPEG and WebP formats, not %s",
"image_compression_range_error": "image compression must be between 0 and 100, got %d",
"transparent_background_png_webp_only": "transparent background can only be used with PNG and WebP formats, not %s",
"available_transcription_models": "Available transcription models:",
"tts_audio_generated_successfully": "TTS audio generated successfully and saved to: %s\n",
"fabric_command_complete": "Fabric Command Complete",
"fabric_command_complete_with_pattern": "Fabric: %s Complete",
"command_completed_successfully": "Command completed successfully",
"output_truncated": "Output: %s...",
"output_full": "Output: %s",
"choose_pattern_from_available": "Choose a pattern from the available patterns",
"pattern_variables_help": "Values for pattern variables, e.g. -v=#role:expert -v=#points:30",
"choose_context_from_available": "Choose a context from the available contexts",
"choose_session_from_available": "Choose a session from the available sessions",
"attachment_path_or_url_help": "Attachment path or URL (e.g. for OpenAI image recognition messages)",
"run_setup_for_reconfigurable_parts": "Run setup for all reconfigurable parts of fabric",
"set_temperature": "Set temperature",
"set_top_p": "Set top P",
"stream_help": "Stream",
"set_presence_penalty": "Set presence penalty",
"use_model_defaults_raw_help": "Use the defaults of the model without sending chat options (like temperature etc.) and use the user role instead of the system role for patterns.",
"set_frequency_penalty": "Set frequency penalty",
"list_all_patterns": "List all patterns",
"list_all_available_models": "List all available models",
"list_all_contexts": "List all contexts",
"list_all_sessions": "List all sessions",
"update_patterns": "Update patterns",
"messages_to_send_to_chat": "Messages to send to chat",
"copy_to_clipboard": "Copy to clipboard",
"choose_model": "Choose model",
"specify_vendor_for_model": "Specify vendor for the selected model (e.g., -V \"LM Studio\" -m openai/gpt-oss-20b)",
"model_context_length_ollama": "Model context length (only affects ollama)",
"output_to_file": "Output to file",
"output_entire_session": "Output the entire session (also a temporary one) to the output file",
"number_of_latest_patterns": "Number of latest patterns to list",
"change_default_model": "Change default model",
"youtube_url_help": "YouTube video or play list \"URL\" to grab transcript, comments from it and send to chat or print it put to the console and store it in the output file",
"prefer_playlist_over_video": "Prefer playlist over video if both ids are present in the URL",
"grab_transcript_from_youtube": "Grab transcript from YouTube video and send to chat (it is used per default).",
"grab_transcript_with_timestamps": "Grab transcript from YouTube video with timestamps and send to chat",
"grab_comments_from_youtube": "Grab comments from YouTube video and send to chat",
"output_video_metadata": "Output video metadata",
"additional_yt_dlp_args": "Additional arguments to pass to yt-dlp (e.g. '--cookies-from-browser brave')",
"specify_language_code": "Specify the Language Code for the chat, e.g. -g=en -g=zh",
"scrape_website_url": "Scrape website URL to markdown using Jina AI",
"search_question_jina": "Search question using Jina AI",
"seed_for_lmm_generation": "Seed to be used for LMM generation",
"wipe_context": "Wipe context",
"wipe_session": "Wipe session",
"print_context": "Print context",
"print_session": "Print session",
"convert_html_readability": "Convert HTML input into a clean, readable view",
"apply_variables_to_input": "Apply variables to user input",
"disable_pattern_variable_replacement": "Disable pattern variable replacement",
"show_dry_run": "Show what would be sent to the model without actually sending it",
"serve_fabric_rest_api": "Serve the Fabric Rest API",
"serve_fabric_api_ollama_endpoints": "Serve the Fabric Rest API with ollama endpoints",
"address_to_bind_rest_api": "The address to bind the REST API",
"api_key_secure_server_routes": "API key used to secure server routes",
"path_to_yaml_config": "Path to YAML config file",
"print_current_version": "Print current version",
"list_all_registered_extensions": "List all registered extensions",
"register_new_extension": "Register a new extension from config file path",
"remove_registered_extension": "Remove a registered extension by name",
"choose_strategy_from_available": "Choose a strategy from the available strategies",
"list_all_strategies": "List all strategies",
"list_all_vendors": "List all vendors",
"output_raw_list_shell_completion": "Output raw list without headers/formatting (for shell completion)",
"enable_web_search_tool": "Enable web search tool for supported models (Anthropic, OpenAI, Gemini)",
"set_location_web_search": "Set location for web search results (e.g., 'America/Los_Angeles')",
"save_generated_image_to_file": "Save generated image to specified file path (e.g., 'output.png')",
"image_dimensions_help": "Image dimensions: 1024x1024, 1536x1024, 1024x1536, auto (default: auto)",
"image_quality_help": "Image quality: low, medium, high, auto (default: auto)",
"compression_level_jpeg_webp": "Compression level 0-100 for JPEG/WebP formats (default: not set)",
"background_type_help": "Background type: opaque, transparent (default: opaque, only for PNG/WebP)",
"suppress_thinking_tags": "Suppress text enclosed in thinking tags",
"start_tag_thinking_sections": "Start tag for thinking sections",
"end_tag_thinking_sections": "End tag for thinking sections",
"disable_openai_responses_api": "Disable OpenAI Responses API (default: false)",
"audio_video_file_transcribe": "Audio or video file to transcribe",
"model_for_transcription": "Model to use for transcription (separate from chat model)",
"split_media_files_ffmpeg": "Split audio/video files larger than 25MB using ffmpeg",
"tts_voice_name": "TTS voice name for supported models (e.g., Kore, Charon, Puck)",
"list_gemini_tts_voices": "List all available Gemini TTS voices",
"list_transcription_models": "List all available transcription models",
"send_desktop_notification": "Send desktop notification when command completes",
"custom_notification_command": "Custom command to run for notifications (overrides built-in notifications)",
"set_reasoning_thinking_level": "Set reasoning/thinking level (e.g., off, low, medium, high, or numeric tokens for Anthropic or Google Gemini)",
"set_debug_level": "Set debug level (0=off, 1=basic, 2=detailed, 3=trace)",
"usage_header": "Usage:",
"application_options_header": "Application Options:",
"help_options_header": "Help Options:",
"help_message": "Show this help message",
"options_placeholder": "[OPTIONS]",
"available_vendors_header": "Available Vendors:",
"available_models_header": "Available models",
"no_items_found": "No %s",
"no_description_available": "No description available",
"i18n_download_failed": "Failed to download translation for language '%s': %v",
"i18n_load_failed": "Failed to load translation file: %v"
}

View File

@@ -1,3 +1,136 @@
{
"html_readability_error": "usa la entrada original, porque no se puede aplicar la legibilidad de html"
"html_readability_error": "usa la entrada original, porque no se puede aplicar la legibilidad de html",
"vendor_not_configured": "el proveedor %s no está configurado",
"vendor_no_transcription_support": "el proveedor %s no admite transcripción de audio",
"transcription_model_required": "se requiere un modelo de transcripción (usa --transcribe-model)",
"youtube_not_configured": "YouTube no está configurado, por favor ejecuta el procedimiento de configuración",
"error_fetching_playlist_videos": "error al obtener videos de la lista de reproducción: %w",
"scraping_not_configured": "la funcionalidad de extracción no está configurada. Por favor configura Jina para habilitar la extracción",
"could_not_determine_home_dir": "no se pudo determinar el directorio home del usuario: %w",
"could_not_stat_env_file": "no se pudo verificar el archivo .env: %w",
"could_not_create_config_dir": "no se pudo crear el directorio de configuración: %w",
"could_not_create_env_file": "no se pudo crear el archivo .env: %w",
"could_not_copy_to_clipboard": "no se pudo copiar al portapapeles: %v",
"file_already_exists_not_overwriting": "el archivo %s ya existe, no se sobrescribirá. Renombra el archivo existente o elige un nombre diferente",
"error_creating_file": "error al crear el archivo: %v",
"error_writing_to_file": "error al escribir al archivo: %v",
"error_creating_audio_file": "error al crear el archivo de audio: %v",
"error_writing_audio_data": "error al escribir datos de audio al archivo: %v",
"tts_model_requires_audio_output": "el modelo TTS '%s' requiere salida de audio. Por favor especifica un archivo de salida de audio con la bandera -o (ej., -o output.wav)",
"audio_output_file_specified_but_not_tts_model": "se especificó el archivo de salida de audio '%s' pero el modelo '%s' no es un modelo TTS. Por favor usa un modelo TTS como gemini-2.5-flash-preview-tts",
"file_already_exists_choose_different": "el archivo %s ya existe. Por favor elige un nombre diferente o elimina el archivo existente",
"no_notification_system_available": "no hay sistema de notificaciones disponible",
"cannot_convert_string": "no se puede convertir la cadena %q a %v",
"unsupported_conversion": "conversión no soportada de %v a %v",
"invalid_config_path": "ruta de configuración inválida: %w",
"config_file_not_found": "archivo de configuración no encontrado: %s",
"error_reading_config_file": "error al leer el archivo de configuración: %w",
"error_parsing_config_file": "error al analizar el archivo de configuración: %w",
"error_reading_piped_message": "error al leer mensaje desde stdin: %w",
"image_file_already_exists": "el archivo de imagen ya existe: %s",
"invalid_image_file_extension": "extensión de archivo de imagen inválida '%s'. Formatos soportados: .png, .jpeg, .jpg, .webp",
"image_parameters_require_image_file": "los parámetros de imagen (--image-size, --image-quality, --image-background, --image-compression) solo pueden usarse con --image-file",
"invalid_image_size": "tamaño de imagen inválido '%s'. Tamaños soportados: 1024x1024, 1536x1024, 1024x1536, auto",
"invalid_image_quality": "calidad de imagen inválida '%s'. Calidades soportadas: low, medium, high, auto",
"invalid_image_background": "fondo de imagen inválido '%s'. Fondos soportados: opaque, transparent",
"image_compression_jpeg_webp_only": "la compresión de imagen solo puede usarse con formatos JPEG y WebP, no %s",
"image_compression_range_error": "la compresión de imagen debe estar entre 0 y 100, se obtuvo %d",
"transparent_background_png_webp_only": "el fondo transparente solo puede usarse con formatos PNG y WebP, no %s",
"available_transcription_models": "Modelos de transcripción disponibles:",
"tts_audio_generated_successfully": "Audio TTS generado exitosamente y guardado en: %s\n",
"fabric_command_complete": "Comando Fabric Completado",
"fabric_command_complete_with_pattern": "Fabric: %s Completado",
"command_completed_successfully": "Comando completado exitosamente",
"output_truncated": "Salida: %s...",
"output_full": "Salida: %s",
"choose_pattern_from_available": "Elige un patrón de los patrones disponibles",
"pattern_variables_help": "Valores para variables de patrón, ej. -v=#role:expert -v=#points:30",
"choose_context_from_available": "Elige un contexto de los contextos disponibles",
"choose_session_from_available": "Elige una sesión de las sesiones disponibles",
"attachment_path_or_url_help": "Ruta de adjunto o URL (ej. para mensajes de reconocimiento de imagen de OpenAI)",
"run_setup_for_reconfigurable_parts": "Ejecutar configuración para todas las partes reconfigurables de fabric",
"set_temperature": "Establecer temperatura",
"set_top_p": "Establecer top P",
"stream_help": "Transmitir",
"set_presence_penalty": "Establecer penalización de presencia",
"use_model_defaults_raw_help": "Usar los valores predeterminados del modelo sin enviar opciones de chat (como temperatura, etc.) y usar el rol de usuario en lugar del rol del sistema para patrones.",
"set_frequency_penalty": "Establecer penalización de frecuencia",
"list_all_patterns": "Listar todos los patrones",
"list_all_available_models": "Listar todos los modelos disponibles",
"list_all_contexts": "Listar todos los contextos",
"list_all_sessions": "Listar todas las sesiones",
"update_patterns": "Actualizar patrones",
"messages_to_send_to_chat": "Mensajes para enviar al chat",
"copy_to_clipboard": "Copiar al portapapeles",
"choose_model": "Elegir modelo",
"specify_vendor_for_model": "Especificar proveedor para el modelo seleccionado (ej., -V \"LM Studio\" -m openai/gpt-oss-20b)",
"model_context_length_ollama": "Longitud de contexto del modelo (solo afecta a ollama)",
"output_to_file": "Salida a archivo",
"output_entire_session": "Salida de toda la sesión (también una temporal) al archivo de salida",
"number_of_latest_patterns": "Número de patrones más recientes a listar",
"change_default_model": "Cambiar modelo predeterminado",
"youtube_url_help": "Video de YouTube o \"URL\" de lista de reproducción para obtener transcripción, comentarios y enviar al chat o imprimir en la consola y almacenar en el archivo de salida",
"prefer_playlist_over_video": "Preferir lista de reproducción sobre video si ambos ids están presentes en la URL",
"grab_transcript_from_youtube": "Obtener transcripción del video de YouTube y enviar al chat (se usa por defecto).",
"grab_transcript_with_timestamps": "Obtener transcripción del video de YouTube con marcas de tiempo y enviar al chat",
"grab_comments_from_youtube": "Obtener comentarios del video de YouTube y enviar al chat",
"output_video_metadata": "Salida de metadatos del video",
"additional_yt_dlp_args": "Argumentos adicionales para pasar a yt-dlp (ej. '--cookies-from-browser brave')",
"specify_language_code": "Especificar el Código de Idioma para el chat, ej. -g=en -g=zh",
"scrape_website_url": "Extraer URL del sitio web a markdown usando Jina AI",
"search_question_jina": "Pregunta de búsqueda usando Jina AI",
"seed_for_lmm_generation": "Semilla para ser usada en la generación LMM",
"wipe_context": "Limpiar contexto",
"wipe_session": "Limpiar sesión",
"print_context": "Imprimir contexto",
"print_session": "Imprimir sesión",
"convert_html_readability": "Convertir entrada HTML en una vista limpia y legible",
"apply_variables_to_input": "Aplicar variables a la entrada del usuario",
"disable_pattern_variable_replacement": "Deshabilitar reemplazo de variables de patrón",
"show_dry_run": "Mostrar lo que se enviaría al modelo sin enviarlo realmente",
"serve_fabric_rest_api": "Servir la API REST de Fabric",
"serve_fabric_api_ollama_endpoints": "Servir la API REST de Fabric con endpoints de ollama",
"address_to_bind_rest_api": "La dirección para vincular la API REST",
"api_key_secure_server_routes": "Clave API usada para asegurar rutas del servidor",
"path_to_yaml_config": "Ruta al archivo de configuración YAML",
"print_current_version": "Imprimir versión actual",
"list_all_registered_extensions": "Listar todas las extensiones registradas",
"register_new_extension": "Registrar una nueva extensión desde la ruta del archivo de configuración",
"remove_registered_extension": "Eliminar una extensión registrada por nombre",
"choose_strategy_from_available": "Elegir una estrategia de las estrategias disponibles",
"list_all_strategies": "Listar todas las estrategias",
"list_all_vendors": "Listar todos los proveedores",
"output_raw_list_shell_completion": "Salida de lista sin procesar sin encabezados/formato (para completado de shell)",
"enable_web_search_tool": "Habilitar herramienta de búsqueda web para modelos soportados (Anthropic, OpenAI, Gemini)",
"set_location_web_search": "Establecer ubicación para resultados de búsqueda web (ej., 'America/Los_Angeles')",
"save_generated_image_to_file": "Guardar imagen generada en la ruta de archivo especificada (ej., 'output.png')",
"image_dimensions_help": "Dimensiones de imagen: 1024x1024, 1536x1024, 1024x1536, auto (predeterminado: auto)",
"image_quality_help": "Calidad de imagen: low, medium, high, auto (predeterminado: auto)",
"compression_level_jpeg_webp": "Nivel de compresión 0-100 para formatos JPEG/WebP (predeterminado: no establecido)",
"background_type_help": "Tipo de fondo: opaque, transparent (predeterminado: opaque, solo para PNG/WebP)",
"suppress_thinking_tags": "Suprimir texto encerrado en etiquetas de pensamiento",
"start_tag_thinking_sections": "Etiqueta de inicio para secciones de pensamiento",
"end_tag_thinking_sections": "Etiqueta de fin para secciones de pensamiento",
"disable_openai_responses_api": "Deshabilitar API de Respuestas de OpenAI (predeterminado: false)",
"audio_video_file_transcribe": "Archivo de audio o video para transcribir",
"model_for_transcription": "Modelo para usar en transcripción (separado del modelo de chat)",
"split_media_files_ffmpeg": "Dividir archivos de audio/video mayores a 25MB usando ffmpeg",
"tts_voice_name": "Nombre de voz TTS para modelos soportados (ej., Kore, Charon, Puck)",
"list_gemini_tts_voices": "Listar todas las voces TTS de Gemini disponibles",
"list_transcription_models": "Listar todos los modelos de transcripción disponibles",
"send_desktop_notification": "Enviar notificación de escritorio cuando se complete el comando",
"custom_notification_command": "Comando personalizado para ejecutar notificaciones (anula las notificaciones integradas)",
"set_reasoning_thinking_level": "Establecer nivel de razonamiento/pensamiento (ej., off, low, medium, high, o tokens numéricos para Anthropic o Google Gemini)",
"set_debug_level": "Establecer nivel de depuración (0=apagado, 1=básico, 2=detallado, 3=rastreo)",
"usage_header": "Uso:",
"application_options_header": "Opciones de la Aplicación:",
"help_options_header": "Opciones de Ayuda:",
"help_message": "Mostrar este mensaje de ayuda",
"options_placeholder": "[OPCIONES]",
"available_vendors_header": "Proveedores Disponibles:",
"available_models_header": "Modelos disponibles",
"no_items_found": "No hay %s",
"no_description_available": "No hay descripción disponible",
"i18n_download_failed": "Error al descargar traducción para el idioma '%s': %v",
"i18n_load_failed": "Error al cargar archivo de traducción: %v"
}

View File

@@ -5,11 +5,12 @@ import (
"sort"
"strings"
"github.com/danielmiessler/fabric/internal/i18n"
"github.com/danielmiessler/fabric/internal/util"
)
func NewVendorsModels() *VendorsModels {
return &VendorsModels{GroupsItemsSelectorString: util.NewGroupsItemsSelectorString("Available models")}
return &VendorsModels{GroupsItemsSelectorString: util.NewGroupsItemsSelectorString(i18n.T("available_models_header"))}
}
type VendorsModels struct {
@@ -21,7 +22,7 @@ type VendorsModels struct {
// Default vendor and model are highlighted with an asterisk.
func (o *VendorsModels) PrintWithVendor(shellCompleteList bool, defaultVendor, defaultModel string) {
if !shellCompleteList {
fmt.Printf("\n%v:\n", o.SelectionLabel)
fmt.Printf("%s:\n\n", o.SelectionLabel)
}
var currentItemIndex int

View File

@@ -7,6 +7,7 @@ import (
"path/filepath"
"strings"
"github.com/danielmiessler/fabric/internal/i18n"
"github.com/danielmiessler/fabric/internal/util"
)
@@ -108,7 +109,7 @@ func (o *StorageEntity) ListNames(shellCompleteList bool) (err error) {
if len(names) == 0 {
if !shellCompleteList {
fmt.Printf("\nNo %v\n", o.Label)
fmt.Printf("%s\n", fmt.Sprintf(i18n.T("no_items_found"), o.Label))
}
return
}

View File

@@ -1 +1 @@
"1.4.308"
"1.4.310"

View File

@@ -1870,6 +1870,14 @@
"ANALYSIS",
"SELF"
]
},
{
"patternName": "create_story_about_people_interaction",
"description": "Analyze two personas, compare their dynamics, and craft a realistic, character-driven story from those insights.",
"tags": [
"ANALYSIS",
"WRITING"
]
}
]
}

View File

@@ -907,6 +907,10 @@
{
"patternName": "heal_person",
"pattern_extract": "# IDENTITY and PURPOSE You are an AI assistant whose primary responsibility is to interpret and analyze psychological profiles and/or psychology data files provided as input. Your role is to carefully process this data and use your expertise to develop a tailored plan aimed at spiritual and mental healing, as well as overall life improvement for the subject. You must approach each case with sensitivity, applying psychological knowledge and holistic strategies to create actionable, personalized recommendations that address both mental and spiritual well-being. Your focus is on structured, compassionate, and practical guidance that can help the individual make meaningful improvements in their life. Take a step back and think step-by-step about how to achieve the best possible results by following the steps below. # STEPS - Carefully review the psychological-profile and/or psychology data file provided as input. - Analyze the data to identify key issues, strengths, and areas needing improvement related to the subject's mental and spiritual well-being. - Develop a comprehensive plan that includes specific strategies for spiritual healing, mental health improvement, and overall life enhancement. - Structure your output to clearly outline recommendations, resources, and actionable steps tailored to the individual's unique profile. # OUTPUT INSTRUCTIONS - Only output Markdown. - Ensure your output is organized, clear, and easy to follow, using headings, subheadings, and bullet points where appropriate. - Ensure you follow ALL these instructions when creating your output. # INPUT INPUT:# IDENTITY and PURPOSE You are an AI assistant whose primary responsibility is to interpret and analyze psychological profiles and/or psychology data files provided as input. Your role is to carefully process this data and use your expertise to develop a tailored plan aimed at spiritual and mental healing, as well as overall life improvement for the subject. You must approach each case with sensitivity, applying psychological knowledge and holistic strategies to create actionable, personalized recommendations that address both mental and spiritual well-being. Your focus is on structured, compassionate, and practical guidance that can help the individual make meaningful improvements in their life. Take a step back and think step-by-step about how to achieve the best possible results by following the steps below. # STEPS - Carefully review the psychological-profile and/or psychology data file provided as input. - Analyze the data to identify key issues, strengths, and areas needing improvement related to the subject's mental and spiritual well-being. - Develop a comprehensive plan that includes specific strategies for spiritual healing, mental health improvement, and overall life enhancement. - Structure your output to clearly outline recommendations, resources, and actionable steps tailored to the individual's unique profile. # OUTPUT INSTRUCTIONS - Only output Markdown. - Ensure your output is organized, clear, and easy to follow, using headings, subheadings, and bullet points where appropriate. - Ensure you follow ALL these instructions when creating your output. # INPUT INPUT:"
},
{
"patternName": "create_story_about_people_interaction",
"pattern_extract": "### Prompt You will be provided with information about **two individuals** (real or fictional). The input will be **delimited by triple backticks**. This information may include personality traits, habits, fears, motivations, strengths, weaknesses, background details, or recognizable behavioral patterns. Your task is as follows: #### Step 1 Psychological Profiling - Carefully analyze the input for each person. - Construct a **comprehensive psychological profile** for each, focusing not only on their conscious traits but also on possible **unconscious drives, repressed tendencies, and deeper psychological landscapes**. - Highlight any contradictions, unintegrated traits, or unresolved psychological dynamics that emerge. #### Step 2 Comparative Analysis - Compare and contrast the two profiles. - Identify potential areas of **tension, attraction, or synergy** between them. - Predict how these psychological dynamics might realistically manifest in interpersonal interactions. #### Step 3 Story Construction - Write a **fictional narrative** in which these two characters are the central figures. - The story should: - Be driven primarily by their interaction. - Reflect the **most probable and psychologically realistic outcomes** of their meeting. - Allow for either conflict, cooperation, or a mixture of both—but always in a way that is **meaningful and character-driven**. - Ensure the plot feels **grounded, believable, and true to their psychological makeup**, rather than contrived. #### Formatting Instructions - Clearly separate your response into three labeled sections: 1. **Profile A** 2. **Profile B** 3. **Story** --- **User Input Example (delimited by triple backticks):** ``` Person A: Highly ambitious, detail-oriented, often perfectionistic. Has a fear of failure and tends to overwork. Childhood marked by pressure to achieve. Secretly desires freedom from expectations. Person B: Warm, empathetic, values relationships over achievement. Struggles with self-assertion, avoids conflict. Childhood marked by neglect. Desires to be seen and valued. Often represses anger. ```"
}
]
}
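
The extract above describes the expected input: two persona descriptions delimited by triple backticks, which the pattern turns into profiles and a story. As a minimal sketch of how the new pattern might be exercised once this change lands, the Python snippet below pipes that input into the `fabric` CLI. It assumes the `fabric` binary is on PATH and uses the standard `--pattern` flag; the helper name, persona strings, and overall wrapper are illustrative only and not part of this change.

```python
# Minimal sketch (not part of this PR): drive the new pattern through the fabric CLI.
# Assumes `fabric` is installed and on PATH and that `--pattern` selects a pattern,
# as in the existing CLI. Persona text and the helper function are illustrative.
import subprocess


def run_story_pattern(persona_a: str, persona_b: str) -> str:
    # Build the triple-backtick-delimited input the pattern's prompt asks for.
    prompt_input = f"```\nPerson A: {persona_a}\nPerson B: {persona_b}\n```"
    result = subprocess.run(
        ["fabric", "--pattern", "create_story_about_people_interaction"],
        input=prompt_input,
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout


if __name__ == "__main__":
    story = run_story_pattern(
        "Highly ambitious, perfectionistic, fears failure, overworks.",
        "Warm, empathetic, avoids conflict, represses anger.",
    )
    print(story)
```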


@@ -43,7 +43,7 @@
"svelte-youtube-lite": "^0.6.2",
"tailwindcss": "^3.4.17",
"typescript": "^5.8.3",
"vite": "^5.4.19",
"vite": "^5.4.20",
"vite-plugin-tailwind-purgecss": "^0.2.1"
},
"type": "module",

web/pnpm-lock.yaml (generated, 258 lines changed)

@@ -77,13 +77,13 @@ importers:
version: 0.3.1(tailwindcss@3.4.17)
'@sveltejs/adapter-auto':
specifier: ^3.3.1
version: 3.3.1(@sveltejs/kit@2.21.1(@sveltejs/vite-plugin-svelte@3.1.2(svelte@4.2.20)(vite@5.4.19(@types/node@20.17.50)))(svelte@4.2.20)(vite@5.4.19(@types/node@20.17.50)))
version: 3.3.1(@sveltejs/kit@2.21.1(@sveltejs/vite-plugin-svelte@3.1.2(svelte@4.2.20)(vite@5.4.20(@types/node@20.17.50)))(svelte@4.2.20)(vite@5.4.20(@types/node@20.17.50)))
'@sveltejs/kit':
specifier: ^2.21.1
version: 2.21.1(@sveltejs/vite-plugin-svelte@3.1.2(svelte@4.2.20)(vite@5.4.19(@types/node@20.17.50)))(svelte@4.2.20)(vite@5.4.19(@types/node@20.17.50))
version: 2.21.1(@sveltejs/vite-plugin-svelte@3.1.2(svelte@4.2.20)(vite@5.4.20(@types/node@20.17.50)))(svelte@4.2.20)(vite@5.4.20(@types/node@20.17.50))
'@sveltejs/vite-plugin-svelte':
specifier: ^3.1.2
version: 3.1.2(svelte@4.2.20)(vite@5.4.19(@types/node@20.17.50))
version: 3.1.2(svelte@4.2.20)(vite@5.4.20(@types/node@20.17.50))
'@tailwindcss/forms':
specifier: ^0.5.10
version: 0.5.10(tailwindcss@3.4.17)
@@ -157,11 +157,11 @@ importers:
specifier: ^5.8.3
version: 5.8.3
vite:
specifier: ^5.4.19
version: 5.4.19(@types/node@20.17.50)
specifier: ^5.4.20
version: 5.4.20(@types/node@20.17.50)
vite-plugin-tailwind-purgecss:
specifier: ^0.2.1
version: 0.2.1(vite@5.4.19(@types/node@20.17.50))
version: 0.2.1(vite@5.4.20(@types/node@20.17.50))
packages:
@@ -317,6 +317,12 @@ packages:
peerDependencies:
eslint: ^6.0.0 || ^7.0.0 || >=8.0.0
'@eslint-community/eslint-utils@4.9.0':
resolution: {integrity: sha512-ayVFHdtZ+hsq1t2Dy24wCmGXGe4q9Gu3smhLYALJrr473ZH27MsnSL+LKUlimp4BWJqMDMLmPpx/Q9R3OAlL4g==}
engines: {node: ^12.22.0 || ^14.17.0 || >=16.0.0}
peerDependencies:
eslint: ^6.0.0 || ^7.0.0 || >=8.0.0
'@eslint-community/regexpp@4.12.1':
resolution: {integrity: sha512-CCZCDJuduB9OUkFkY2IgppNZMi2lBQgD2qzwXkEia16cge2pijY/aXi96CJMquDMn3nJdlPV1A5KrJEXwfLNzQ==}
engines: {node: ^12.0.0 || ^14.0.0 || >=16.0.0}
@@ -366,18 +372,14 @@ packages:
resolution: {integrity: sha512-5DyQ4+1JEUzejeK1JGICcideyfUbGixgS9jNgex5nqkW+cY7WZhxBigmieN5Qnw9ZosSNVC9KQKyb+GUaGyKUA==}
engines: {node: '>=18.18.0'}
'@humanfs/node@0.16.6':
resolution: {integrity: sha512-YuI2ZHQL78Q5HbhDiBA1X4LmYdXCKCMQIfw0pw7piHJwyREFebJUvrQN4cMssyES6x+vfUbx1CIpaQUKYdQZOw==}
'@humanfs/node@0.16.7':
resolution: {integrity: sha512-/zUx+yOsIrG4Y43Eh2peDeKCxlRt/gET6aHfaKpuq267qXdYDFViVHfMaLyygZOnl0kGWxFIgsBy8QFuTLUXEQ==}
engines: {node: '>=18.18.0'}
'@humanwhocodes/module-importer@1.0.1':
resolution: {integrity: sha512-bxveV4V8v5Yb4ncFTT3rPSgZBOpCkjfK0y4oVVVJwIuDVBRMDXrPyXRL988i5ap9m9bnyEEjWfm5WkBmtffLfA==}
engines: {node: '>=12.22'}
'@humanwhocodes/retry@0.3.1':
resolution: {integrity: sha512-JBxkERygn7Bv/GbN5Rv8Ul6LVknS+5Bp6RgDC/O8gEBU/yeH5Ui5C/OlWrTb6qct7LjjfT6Re2NxB0ln0yYybA==}
engines: {node: '>=18.18'}
'@humanwhocodes/retry@0.4.3':
resolution: {integrity: sha512-bV0Tgo9K4hfPCek+aMAn81RppFKv2ySDQeMoSZuvTASywNTnVJCArCZE2FWqpvIatKu7VMRLWlR1EazvVhDyhQ==}
engines: {node: '>=18.18'}
@@ -427,103 +429,108 @@ packages:
'@polka/url@1.0.0-next.29':
resolution: {integrity: sha512-wwQAWhWSuHaag8c4q/KN/vCoeOJYshAIvMQwD4GpSb3OiZklFfvAgmj0VCBBImRpuF/aFgIRzllXlVX93Jevww==}
'@rollup/rollup-android-arm-eabi@4.41.0':
resolution: {integrity: sha512-KxN+zCjOYHGwCl4UCtSfZ6jrq/qi88JDUtiEFk8LELEHq2Egfc/FgW+jItZiOLRuQfb/3xJSgFuNPC9jzggX+A==}
'@rollup/rollup-android-arm-eabi@4.50.1':
resolution: {integrity: sha512-HJXwzoZN4eYTdD8bVV22DN8gsPCAj3V20NHKOs8ezfXanGpmVPR7kalUHd+Y31IJp9stdB87VKPFbsGY3H/2ag==}
cpu: [arm]
os: [android]
'@rollup/rollup-android-arm64@4.41.0':
resolution: {integrity: sha512-yDvqx3lWlcugozax3DItKJI5j05B0d4Kvnjx+5mwiUpWramVvmAByYigMplaoAQ3pvdprGCTCE03eduqE/8mPQ==}
'@rollup/rollup-android-arm64@4.50.1':
resolution: {integrity: sha512-PZlsJVcjHfcH53mOImyt3bc97Ep3FJDXRpk9sMdGX0qgLmY0EIWxCag6EigerGhLVuL8lDVYNnSo8qnTElO4xw==}
cpu: [arm64]
os: [android]
'@rollup/rollup-darwin-arm64@4.41.0':
resolution: {integrity: sha512-2KOU574vD3gzcPSjxO0eyR5iWlnxxtmW1F5CkNOHmMlueKNCQkxR6+ekgWyVnz6zaZihpUNkGxjsYrkTJKhkaw==}
'@rollup/rollup-darwin-arm64@4.50.1':
resolution: {integrity: sha512-xc6i2AuWh++oGi4ylOFPmzJOEeAa2lJeGUGb4MudOtgfyyjr4UPNK+eEWTPLvmPJIY/pgw6ssFIox23SyrkkJw==}
cpu: [arm64]
os: [darwin]
'@rollup/rollup-darwin-x64@4.41.0':
resolution: {integrity: sha512-gE5ACNSxHcEZyP2BA9TuTakfZvULEW4YAOtxl/A/YDbIir/wPKukde0BNPlnBiP88ecaN4BJI2TtAd+HKuZPQQ==}
'@rollup/rollup-darwin-x64@4.50.1':
resolution: {integrity: sha512-2ofU89lEpDYhdLAbRdeyz/kX3Y2lpYc6ShRnDjY35bZhd2ipuDMDi6ZTQ9NIag94K28nFMofdnKeHR7BT0CATw==}
cpu: [x64]
os: [darwin]
'@rollup/rollup-freebsd-arm64@4.41.0':
resolution: {integrity: sha512-GSxU6r5HnWij7FoSo7cZg3l5GPg4HFLkzsFFh0N/b16q5buW1NAWuCJ+HMtIdUEi6XF0qH+hN0TEd78laRp7Dg==}
'@rollup/rollup-freebsd-arm64@4.50.1':
resolution: {integrity: sha512-wOsE6H2u6PxsHY/BeFHA4VGQN3KUJFZp7QJBmDYI983fgxq5Th8FDkVuERb2l9vDMs1D5XhOrhBrnqcEY6l8ZA==}
cpu: [arm64]
os: [freebsd]
'@rollup/rollup-freebsd-x64@4.41.0':
resolution: {integrity: sha512-KGiGKGDg8qLRyOWmk6IeiHJzsN/OYxO6nSbT0Vj4MwjS2XQy/5emsmtoqLAabqrohbgLWJ5GV3s/ljdrIr8Qjg==}
'@rollup/rollup-freebsd-x64@4.50.1':
resolution: {integrity: sha512-A/xeqaHTlKbQggxCqispFAcNjycpUEHP52mwMQZUNqDUJFFYtPHCXS1VAG29uMlDzIVr+i00tSFWFLivMcoIBQ==}
cpu: [x64]
os: [freebsd]
'@rollup/rollup-linux-arm-gnueabihf@4.41.0':
resolution: {integrity: sha512-46OzWeqEVQyX3N2/QdiU/CMXYDH/lSHpgfBkuhl3igpZiaB3ZIfSjKuOnybFVBQzjsLwkus2mjaESy8H41SzvA==}
'@rollup/rollup-linux-arm-gnueabihf@4.50.1':
resolution: {integrity: sha512-54v4okehwl5TaSIkpp97rAHGp7t3ghinRd/vyC1iXqXMfjYUTm7TfYmCzXDoHUPTTf36L8pr0E7YsD3CfB3ZDg==}
cpu: [arm]
os: [linux]
'@rollup/rollup-linux-arm-musleabihf@4.41.0':
resolution: {integrity: sha512-lfgW3KtQP4YauqdPpcUZHPcqQXmTmH4nYU0cplNeW583CMkAGjtImw4PKli09NFi2iQgChk4e9erkwlfYem6Lg==}
'@rollup/rollup-linux-arm-musleabihf@4.50.1':
resolution: {integrity: sha512-p/LaFyajPN/0PUHjv8TNyxLiA7RwmDoVY3flXHPSzqrGcIp/c2FjwPPP5++u87DGHtw+5kSH5bCJz0mvXngYxw==}
cpu: [arm]
os: [linux]
'@rollup/rollup-linux-arm64-gnu@4.41.0':
resolution: {integrity: sha512-nn8mEyzMbdEJzT7cwxgObuwviMx6kPRxzYiOl6o/o+ChQq23gfdlZcUNnt89lPhhz3BYsZ72rp0rxNqBSfqlqw==}
'@rollup/rollup-linux-arm64-gnu@4.50.1':
resolution: {integrity: sha512-2AbMhFFkTo6Ptna1zO7kAXXDLi7H9fGTbVaIq2AAYO7yzcAsuTNWPHhb2aTA6GPiP+JXh85Y8CiS54iZoj4opw==}
cpu: [arm64]
os: [linux]
'@rollup/rollup-linux-arm64-musl@4.41.0':
resolution: {integrity: sha512-l+QK99je2zUKGd31Gh+45c4pGDAqZSuWQiuRFCdHYC2CSiO47qUWsCcenrI6p22hvHZrDje9QjwSMAFL3iwXwQ==}
'@rollup/rollup-linux-arm64-musl@4.50.1':
resolution: {integrity: sha512-Cgef+5aZwuvesQNw9eX7g19FfKX5/pQRIyhoXLCiBOrWopjo7ycfB292TX9MDcDijiuIJlx1IzJz3IoCPfqs9w==}
cpu: [arm64]
os: [linux]
'@rollup/rollup-linux-loongarch64-gnu@4.41.0':
resolution: {integrity: sha512-WbnJaxPv1gPIm6S8O/Wg+wfE/OzGSXlBMbOe4ie+zMyykMOeqmgD1BhPxZQuDqwUN+0T/xOFtL2RUWBspnZj3w==}
'@rollup/rollup-linux-loongarch64-gnu@4.50.1':
resolution: {integrity: sha512-RPhTwWMzpYYrHrJAS7CmpdtHNKtt2Ueo+BlLBjfZEhYBhK00OsEqM08/7f+eohiF6poe0YRDDd8nAvwtE/Y62Q==}
cpu: [loong64]
os: [linux]
'@rollup/rollup-linux-powerpc64le-gnu@4.41.0':
resolution: {integrity: sha512-eRDWR5t67/b2g8Q/S8XPi0YdbKcCs4WQ8vklNnUYLaSWF+Cbv2axZsp4jni6/j7eKvMLYCYdcsv8dcU+a6QNFg==}
'@rollup/rollup-linux-ppc64-gnu@4.50.1':
resolution: {integrity: sha512-eSGMVQw9iekut62O7eBdbiccRguuDgiPMsw++BVUg+1K7WjZXHOg/YOT9SWMzPZA+w98G+Fa1VqJgHZOHHnY0Q==}
cpu: [ppc64]
os: [linux]
'@rollup/rollup-linux-riscv64-gnu@4.41.0':
resolution: {integrity: sha512-TWrZb6GF5jsEKG7T1IHwlLMDRy2f3DPqYldmIhnA2DVqvvhY2Ai184vZGgahRrg8k9UBWoSlHv+suRfTN7Ua4A==}
'@rollup/rollup-linux-riscv64-gnu@4.50.1':
resolution: {integrity: sha512-S208ojx8a4ciIPrLgazF6AgdcNJzQE4+S9rsmOmDJkusvctii+ZvEuIC4v/xFqzbuP8yDjn73oBlNDgF6YGSXQ==}
cpu: [riscv64]
os: [linux]
'@rollup/rollup-linux-riscv64-musl@4.41.0':
resolution: {integrity: sha512-ieQljaZKuJpmWvd8gW87ZmSFwid6AxMDk5bhONJ57U8zT77zpZ/TPKkU9HpnnFrM4zsgr4kiGuzbIbZTGi7u9A==}
'@rollup/rollup-linux-riscv64-musl@4.50.1':
resolution: {integrity: sha512-3Ag8Ls1ggqkGUvSZWYcdgFwriy2lWo+0QlYgEFra/5JGtAd6C5Hw59oojx1DeqcA2Wds2ayRgvJ4qxVTzCHgzg==}
cpu: [riscv64]
os: [linux]
'@rollup/rollup-linux-s390x-gnu@4.41.0':
resolution: {integrity: sha512-/L3pW48SxrWAlVsKCN0dGLB2bi8Nv8pr5S5ocSM+S0XCn5RCVCXqi8GVtHFsOBBCSeR+u9brV2zno5+mg3S4Aw==}
'@rollup/rollup-linux-s390x-gnu@4.50.1':
resolution: {integrity: sha512-t9YrKfaxCYe7l7ldFERE1BRg/4TATxIg+YieHQ966jwvo7ddHJxPj9cNFWLAzhkVsbBvNA4qTbPVNsZKBO4NSg==}
cpu: [s390x]
os: [linux]
'@rollup/rollup-linux-x64-gnu@4.41.0':
resolution: {integrity: sha512-XMLeKjyH8NsEDCRptf6LO8lJk23o9wvB+dJwcXMaH6ZQbbkHu2dbGIUindbMtRN6ux1xKi16iXWu6q9mu7gDhQ==}
'@rollup/rollup-linux-x64-gnu@4.50.1':
resolution: {integrity: sha512-MCgtFB2+SVNuQmmjHf+wfI4CMxy3Tk8XjA5Z//A0AKD7QXUYFMQcns91K6dEHBvZPCnhJSyDWLApk40Iq/H3tA==}
cpu: [x64]
os: [linux]
'@rollup/rollup-linux-x64-musl@4.41.0':
resolution: {integrity: sha512-m/P7LycHZTvSQeXhFmgmdqEiTqSV80zn6xHaQ1JSqwCtD1YGtwEK515Qmy9DcB2HK4dOUVypQxvhVSy06cJPEg==}
'@rollup/rollup-linux-x64-musl@4.50.1':
resolution: {integrity: sha512-nEvqG+0jeRmqaUMuwzlfMKwcIVffy/9KGbAGyoa26iu6eSngAYQ512bMXuqqPrlTyfqdlB9FVINs93j534UJrg==}
cpu: [x64]
os: [linux]
'@rollup/rollup-win32-arm64-msvc@4.41.0':
resolution: {integrity: sha512-4yodtcOrFHpbomJGVEqZ8fzD4kfBeCbpsUy5Pqk4RluXOdsWdjLnjhiKy2w3qzcASWd04fp52Xz7JKarVJ5BTg==}
'@rollup/rollup-openharmony-arm64@4.50.1':
resolution: {integrity: sha512-RDsLm+phmT3MJd9SNxA9MNuEAO/J2fhW8GXk62G/B4G7sLVumNFbRwDL6v5NrESb48k+QMqdGbHgEtfU0LCpbA==}
cpu: [arm64]
os: [openharmony]
'@rollup/rollup-win32-arm64-msvc@4.50.1':
resolution: {integrity: sha512-hpZB/TImk2FlAFAIsoElM3tLzq57uxnGYwplg6WDyAxbYczSi8O2eQ+H2Lx74504rwKtZ3N2g4bCUkiamzS6TQ==}
cpu: [arm64]
os: [win32]
'@rollup/rollup-win32-ia32-msvc@4.41.0':
resolution: {integrity: sha512-tmazCrAsKzdkXssEc65zIE1oC6xPHwfy9d5Ta25SRCDOZS+I6RypVVShWALNuU9bxIfGA0aqrmzlzoM5wO5SPQ==}
'@rollup/rollup-win32-ia32-msvc@4.50.1':
resolution: {integrity: sha512-SXjv8JlbzKM0fTJidX4eVsH+Wmnp0/WcD8gJxIZyR6Gay5Qcsmdbi9zVtnbkGPG8v2vMR1AD06lGWy5FLMcG7A==}
cpu: [ia32]
os: [win32]
'@rollup/rollup-win32-x64-msvc@4.41.0':
resolution: {integrity: sha512-h1J+Yzjo/X+0EAvR2kIXJDuTuyT7drc+t2ALY0nIcGPbTatNOf0VWdhEA2Z4AAjv6X1NJV7SYo5oCTYRJhSlVA==}
'@rollup/rollup-win32-x64-msvc@4.50.1':
resolution: {integrity: sha512-StxAO/8ts62KZVRAm4JZYq9+NqNsV7RvimNK+YM7ry//zebEH6meuugqW/P5OFUCjyQgui+9fUxT6d5NShvMvA==}
cpu: [x64]
os: [win32]
@@ -1916,8 +1923,8 @@ packages:
deprecated: Rimraf versions prior to v4 are no longer supported
hasBin: true
rollup@4.41.0:
resolution: {integrity: sha512-HqMFpUbWlf/tvcxBFNKnJyzc7Lk+XO3FGc3pbNBLqEbOz0gPLRgcrlS3UF4MfUrVlstOaP/q0kM6GVvi+LrLRg==}
rollup@4.50.1:
resolution: {integrity: sha512-78E9voJHwnXQMiQdiqswVLZwJIzdBKJ1GdI5Zx6XwoFKUIk09/sSrr+05QFzvYb8q6Y9pPV45zzDuYa3907TZA==}
engines: {node: '>=18.0.0', npm: '>=8.0.0'}
hasBin: true
@@ -2281,8 +2288,8 @@ packages:
peerDependencies:
vite: ^4.1.1 || ^5.0.0
vite@5.4.19:
resolution: {integrity: sha512-qO3aKv3HoQC8QKiNSTuUM1l9o/XX3+c+VTgLHbJWHZGeTPVAg2XwazI9UWzoxjIJCGCV2zU60uqMzjeLZuULqA==}
vite@5.4.20:
resolution: {integrity: sha512-j3lYzGC3P+B5Yfy/pfKNgVEg4+UtcIJcVRt2cDjIOmhLourAqPqf8P7acgxeiSgUB7E3p2P8/3gNIgDLpwzs4g==}
engines: {node: ^18.0.0 || >=20.0.0}
hasBin: true
peerDependencies:
@@ -2458,6 +2465,11 @@ snapshots:
eslint: 9.17.0(jiti@1.21.7)
eslint-visitor-keys: 3.4.3
'@eslint-community/eslint-utils@4.9.0(eslint@9.17.0(jiti@1.21.7))':
dependencies:
eslint: 9.17.0(jiti@1.21.7)
eslint-visitor-keys: 3.4.3
'@eslint-community/regexpp@4.12.1': {}
'@eslint/config-array@0.19.2':
@@ -2514,15 +2526,13 @@ snapshots:
'@humanfs/core@0.19.1': {}
'@humanfs/node@0.16.6':
'@humanfs/node@0.16.7':
dependencies:
'@humanfs/core': 0.19.1
'@humanwhocodes/retry': 0.3.1
'@humanwhocodes/retry': 0.4.3
'@humanwhocodes/module-importer@1.0.1': {}
'@humanwhocodes/retry@0.3.1': {}
'@humanwhocodes/retry@0.4.3': {}
'@isaacs/cliui@8.0.2':
@@ -2584,64 +2594,67 @@ snapshots:
'@polka/url@1.0.0-next.29': {}
'@rollup/rollup-android-arm-eabi@4.41.0':
'@rollup/rollup-android-arm-eabi@4.50.1':
optional: true
'@rollup/rollup-android-arm64@4.41.0':
'@rollup/rollup-android-arm64@4.50.1':
optional: true
'@rollup/rollup-darwin-arm64@4.41.0':
'@rollup/rollup-darwin-arm64@4.50.1':
optional: true
'@rollup/rollup-darwin-x64@4.41.0':
'@rollup/rollup-darwin-x64@4.50.1':
optional: true
'@rollup/rollup-freebsd-arm64@4.41.0':
'@rollup/rollup-freebsd-arm64@4.50.1':
optional: true
'@rollup/rollup-freebsd-x64@4.41.0':
'@rollup/rollup-freebsd-x64@4.50.1':
optional: true
'@rollup/rollup-linux-arm-gnueabihf@4.41.0':
'@rollup/rollup-linux-arm-gnueabihf@4.50.1':
optional: true
'@rollup/rollup-linux-arm-musleabihf@4.41.0':
'@rollup/rollup-linux-arm-musleabihf@4.50.1':
optional: true
'@rollup/rollup-linux-arm64-gnu@4.41.0':
'@rollup/rollup-linux-arm64-gnu@4.50.1':
optional: true
'@rollup/rollup-linux-arm64-musl@4.41.0':
'@rollup/rollup-linux-arm64-musl@4.50.1':
optional: true
'@rollup/rollup-linux-loongarch64-gnu@4.41.0':
'@rollup/rollup-linux-loongarch64-gnu@4.50.1':
optional: true
'@rollup/rollup-linux-powerpc64le-gnu@4.41.0':
'@rollup/rollup-linux-ppc64-gnu@4.50.1':
optional: true
'@rollup/rollup-linux-riscv64-gnu@4.41.0':
'@rollup/rollup-linux-riscv64-gnu@4.50.1':
optional: true
'@rollup/rollup-linux-riscv64-musl@4.41.0':
'@rollup/rollup-linux-riscv64-musl@4.50.1':
optional: true
'@rollup/rollup-linux-s390x-gnu@4.41.0':
'@rollup/rollup-linux-s390x-gnu@4.50.1':
optional: true
'@rollup/rollup-linux-x64-gnu@4.41.0':
'@rollup/rollup-linux-x64-gnu@4.50.1':
optional: true
'@rollup/rollup-linux-x64-musl@4.41.0':
'@rollup/rollup-linux-x64-musl@4.50.1':
optional: true
'@rollup/rollup-win32-arm64-msvc@4.41.0':
'@rollup/rollup-openharmony-arm64@4.50.1':
optional: true
'@rollup/rollup-win32-ia32-msvc@4.41.0':
'@rollup/rollup-win32-arm64-msvc@4.50.1':
optional: true
'@rollup/rollup-win32-x64-msvc@4.41.0':
'@rollup/rollup-win32-ia32-msvc@4.50.1':
optional: true
'@rollup/rollup-win32-x64-msvc@4.50.1':
optional: true
'@shikijs/core@1.29.2':
@@ -2692,15 +2705,15 @@ snapshots:
dependencies:
acorn: 8.14.1
'@sveltejs/adapter-auto@3.3.1(@sveltejs/kit@2.21.1(@sveltejs/vite-plugin-svelte@3.1.2(svelte@4.2.20)(vite@5.4.19(@types/node@20.17.50)))(svelte@4.2.20)(vite@5.4.19(@types/node@20.17.50)))':
'@sveltejs/adapter-auto@3.3.1(@sveltejs/kit@2.21.1(@sveltejs/vite-plugin-svelte@3.1.2(svelte@4.2.20)(vite@5.4.20(@types/node@20.17.50)))(svelte@4.2.20)(vite@5.4.20(@types/node@20.17.50)))':
dependencies:
'@sveltejs/kit': 2.21.1(@sveltejs/vite-plugin-svelte@3.1.2(svelte@4.2.20)(vite@5.4.19(@types/node@20.17.50)))(svelte@4.2.20)(vite@5.4.19(@types/node@20.17.50))
'@sveltejs/kit': 2.21.1(@sveltejs/vite-plugin-svelte@3.1.2(svelte@4.2.20)(vite@5.4.20(@types/node@20.17.50)))(svelte@4.2.20)(vite@5.4.20(@types/node@20.17.50))
import-meta-resolve: 4.1.0
'@sveltejs/kit@2.21.1(@sveltejs/vite-plugin-svelte@3.1.2(svelte@4.2.20)(vite@5.4.19(@types/node@20.17.50)))(svelte@4.2.20)(vite@5.4.19(@types/node@20.17.50))':
'@sveltejs/kit@2.21.1(@sveltejs/vite-plugin-svelte@3.1.2(svelte@4.2.20)(vite@5.4.20(@types/node@20.17.50)))(svelte@4.2.20)(vite@5.4.20(@types/node@20.17.50))':
dependencies:
'@sveltejs/acorn-typescript': 1.0.5(acorn@8.14.1)
'@sveltejs/vite-plugin-svelte': 3.1.2(svelte@4.2.20)(vite@5.4.19(@types/node@20.17.50))
'@sveltejs/vite-plugin-svelte': 3.1.2(svelte@4.2.20)(vite@5.4.20(@types/node@20.17.50))
'@types/cookie': 0.6.0
acorn: 8.14.1
cookie: 1.0.2
@@ -2713,28 +2726,28 @@ snapshots:
set-cookie-parser: 2.7.1
sirv: 3.0.1
svelte: 4.2.20
vite: 5.4.19(@types/node@20.17.50)
vite: 5.4.20(@types/node@20.17.50)
'@sveltejs/vite-plugin-svelte-inspector@2.1.0(@sveltejs/vite-plugin-svelte@3.1.2(svelte@4.2.20)(vite@5.4.19(@types/node@20.17.50)))(svelte@4.2.20)(vite@5.4.19(@types/node@20.17.50))':
'@sveltejs/vite-plugin-svelte-inspector@2.1.0(@sveltejs/vite-plugin-svelte@3.1.2(svelte@4.2.20)(vite@5.4.20(@types/node@20.17.50)))(svelte@4.2.20)(vite@5.4.20(@types/node@20.17.50))':
dependencies:
'@sveltejs/vite-plugin-svelte': 3.1.2(svelte@4.2.20)(vite@5.4.19(@types/node@20.17.50))
'@sveltejs/vite-plugin-svelte': 3.1.2(svelte@4.2.20)(vite@5.4.20(@types/node@20.17.50))
debug: 4.4.1
svelte: 4.2.20
vite: 5.4.19(@types/node@20.17.50)
vite: 5.4.20(@types/node@20.17.50)
transitivePeerDependencies:
- supports-color
'@sveltejs/vite-plugin-svelte@3.1.2(svelte@4.2.20)(vite@5.4.19(@types/node@20.17.50))':
'@sveltejs/vite-plugin-svelte@3.1.2(svelte@4.2.20)(vite@5.4.20(@types/node@20.17.50))':
dependencies:
'@sveltejs/vite-plugin-svelte-inspector': 2.1.0(@sveltejs/vite-plugin-svelte@3.1.2(svelte@4.2.20)(vite@5.4.19(@types/node@20.17.50)))(svelte@4.2.20)(vite@5.4.19(@types/node@20.17.50))
'@sveltejs/vite-plugin-svelte-inspector': 2.1.0(@sveltejs/vite-plugin-svelte@3.1.2(svelte@4.2.20)(vite@5.4.20(@types/node@20.17.50)))(svelte@4.2.20)(vite@5.4.20(@types/node@20.17.50))
debug: 4.4.1
deepmerge: 4.3.1
kleur: 4.1.5
magic-string: 0.30.17
svelte: 4.2.20
svelte-hmr: 0.16.0(svelte@4.2.20)
vite: 5.4.19(@types/node@20.17.50)
vitefu: 0.2.5(vite@5.4.19(@types/node@20.17.50))
vite: 5.4.20(@types/node@20.17.50)
vitefu: 0.2.5(vite@5.4.20(@types/node@20.17.50))
transitivePeerDependencies:
- supports-color
@@ -3173,14 +3186,14 @@ snapshots:
eslint@9.17.0(jiti@1.21.7):
dependencies:
'@eslint-community/eslint-utils': 4.7.0(eslint@9.17.0(jiti@1.21.7))
'@eslint-community/eslint-utils': 4.9.0(eslint@9.17.0(jiti@1.21.7))
'@eslint-community/regexpp': 4.12.1
'@eslint/config-array': 0.19.2
'@eslint/core': 0.9.1
'@eslint/eslintrc': 3.3.1
'@eslint/js': 9.17.0
'@eslint/plugin-kit': 0.2.8
'@humanfs/node': 0.16.6
'@humanfs/node': 0.16.7
'@humanwhocodes/module-importer': 1.0.1
'@humanwhocodes/retry': 0.4.3
'@types/estree': 1.0.8
@@ -4125,30 +4138,31 @@ snapshots:
glob: 7.2.3
optional: true
rollup@4.41.0:
rollup@4.50.1:
dependencies:
'@types/estree': 1.0.7
'@types/estree': 1.0.8
optionalDependencies:
'@rollup/rollup-android-arm-eabi': 4.41.0
'@rollup/rollup-android-arm64': 4.41.0
'@rollup/rollup-darwin-arm64': 4.41.0
'@rollup/rollup-darwin-x64': 4.41.0
'@rollup/rollup-freebsd-arm64': 4.41.0
'@rollup/rollup-freebsd-x64': 4.41.0
'@rollup/rollup-linux-arm-gnueabihf': 4.41.0
'@rollup/rollup-linux-arm-musleabihf': 4.41.0
'@rollup/rollup-linux-arm64-gnu': 4.41.0
'@rollup/rollup-linux-arm64-musl': 4.41.0
'@rollup/rollup-linux-loongarch64-gnu': 4.41.0
'@rollup/rollup-linux-powerpc64le-gnu': 4.41.0
'@rollup/rollup-linux-riscv64-gnu': 4.41.0
'@rollup/rollup-linux-riscv64-musl': 4.41.0
'@rollup/rollup-linux-s390x-gnu': 4.41.0
'@rollup/rollup-linux-x64-gnu': 4.41.0
'@rollup/rollup-linux-x64-musl': 4.41.0
'@rollup/rollup-win32-arm64-msvc': 4.41.0
'@rollup/rollup-win32-ia32-msvc': 4.41.0
'@rollup/rollup-win32-x64-msvc': 4.41.0
'@rollup/rollup-android-arm-eabi': 4.50.1
'@rollup/rollup-android-arm64': 4.50.1
'@rollup/rollup-darwin-arm64': 4.50.1
'@rollup/rollup-darwin-x64': 4.50.1
'@rollup/rollup-freebsd-arm64': 4.50.1
'@rollup/rollup-freebsd-x64': 4.50.1
'@rollup/rollup-linux-arm-gnueabihf': 4.50.1
'@rollup/rollup-linux-arm-musleabihf': 4.50.1
'@rollup/rollup-linux-arm64-gnu': 4.50.1
'@rollup/rollup-linux-arm64-musl': 4.50.1
'@rollup/rollup-linux-loongarch64-gnu': 4.50.1
'@rollup/rollup-linux-ppc64-gnu': 4.50.1
'@rollup/rollup-linux-riscv64-gnu': 4.50.1
'@rollup/rollup-linux-riscv64-musl': 4.50.1
'@rollup/rollup-linux-s390x-gnu': 4.50.1
'@rollup/rollup-linux-x64-gnu': 4.50.1
'@rollup/rollup-linux-x64-musl': 4.50.1
'@rollup/rollup-openharmony-arm64': 4.50.1
'@rollup/rollup-win32-arm64-msvc': 4.50.1
'@rollup/rollup-win32-ia32-msvc': 4.50.1
'@rollup/rollup-win32-x64-msvc': 4.50.1
fsevents: 2.3.3
run-parallel@1.2.0:
@@ -4565,24 +4579,24 @@ snapshots:
'@types/unist': 3.0.3
vfile-message: 4.0.2
vite-plugin-tailwind-purgecss@0.2.1(vite@5.4.19(@types/node@20.17.50)):
vite-plugin-tailwind-purgecss@0.2.1(vite@5.4.20(@types/node@20.17.50)):
dependencies:
estree-walker: 3.0.3
purgecss: 6.0.0
vite: 5.4.19(@types/node@20.17.50)
vite: 5.4.20(@types/node@20.17.50)
vite@5.4.19(@types/node@20.17.50):
vite@5.4.20(@types/node@20.17.50):
dependencies:
esbuild: 0.21.5
postcss: 8.5.3
rollup: 4.41.0
rollup: 4.50.1
optionalDependencies:
'@types/node': 20.17.50
fsevents: 2.3.3
vitefu@0.2.5(vite@5.4.19(@types/node@20.17.50)):
vitefu@0.2.5(vite@5.4.20(@types/node@20.17.50)):
optionalDependencies:
vite: 5.4.19(@types/node@20.17.50)
vite: 5.4.20(@types/node@20.17.50)
web-namespaces@2.0.1: {}


@@ -1870,6 +1870,14 @@
"ANALYSIS",
"SELF"
]
},
{
"patternName": "create_story_about_people_interaction",
"description": "Analyze two personas, compare their dynamics, and craft a realistic, character-driven story from those insights.",
"tags": [
"ANALYSIS",
"WRITING"
]
}
]
}