mirror of
https://github.com/danielmiessler/Fabric.git
synced 2026-01-07 21:44:02 -05:00
feat: add i18n support with Spanish localization and documentation improvements
- Add internationalization system with Spanish support - Create contexts and sessions tutorial documentation - Fix broken Warp sponsorship image URL - Add locale detection from environment variables - Update VSCode settings with new dictionary words - Exclude VSCode settings from version workflows - Update pattern descriptions and explanations - Add comprehensive i18n test coverage
This commit is contained in:
@@ -12,6 +12,7 @@ on:
|
||||
- "scripts/pattern_descriptions/*.json"
|
||||
- "web/static/data/pattern_descriptions.json"
|
||||
- "**/*.md"
|
||||
- .vscode/**
|
||||
|
||||
permissions:
|
||||
contents: write # Ensure the workflow has write permissions
|
||||
|
||||
16
.vscode/settings.json
vendored
16
.vscode/settings.json
vendored
@@ -7,6 +7,7 @@
|
||||
"Anki",
|
||||
"anthropics",
|
||||
"Aoede",
|
||||
"aplicar",
|
||||
"atotto",
|
||||
"Autonoe",
|
||||
"badfile",
|
||||
@@ -39,6 +40,7 @@
|
||||
"Eisler",
|
||||
"elif",
|
||||
"Elister",
|
||||
"entrada",
|
||||
"envrc",
|
||||
"Erinome",
|
||||
"Errorf",
|
||||
@@ -89,6 +91,7 @@
|
||||
"Langdock",
|
||||
"Laomedeia",
|
||||
"ldflags",
|
||||
"legibilidad",
|
||||
"libexec",
|
||||
"libnotify",
|
||||
"listcontexts",
|
||||
@@ -109,6 +112,7 @@
|
||||
"modeline",
|
||||
"modelines",
|
||||
"mpga",
|
||||
"nicksnyder",
|
||||
"nometa",
|
||||
"numpy",
|
||||
"ollama",
|
||||
@@ -125,9 +129,11 @@
|
||||
"pipx",
|
||||
"PKCE",
|
||||
"pkgs",
|
||||
"porque",
|
||||
"presencepenalty",
|
||||
"printcontext",
|
||||
"printsession",
|
||||
"puede",
|
||||
"Pulcherrima",
|
||||
"pycache",
|
||||
"pyperclip",
|
||||
@@ -182,7 +188,11 @@
|
||||
"go.mod",
|
||||
".gitignore",
|
||||
"CHANGELOG.md",
|
||||
"./scripts/installer/install.*"
|
||||
"scripts/installer/install.*",
|
||||
"web/static/data/pattern_descriptions.json",
|
||||
"scripts/pattern_descriptions/*.json",
|
||||
"data/patterns/pattern_explanations.md",
|
||||
"internal/i18n/locales/es.json"
|
||||
],
|
||||
"markdownlint.config": {
|
||||
"MD004": false,
|
||||
@@ -197,10 +207,12 @@
|
||||
"code",
|
||||
"div",
|
||||
"em",
|
||||
"h",
|
||||
"h4",
|
||||
"img",
|
||||
"module",
|
||||
"p"
|
||||
"p",
|
||||
"sup"
|
||||
]
|
||||
},
|
||||
"MD041": false
|
||||
|
||||
@@ -2,7 +2,7 @@
|
||||
<a href="https://go.warp.dev/fabric" target="_blank">
|
||||
<sup>Special thanks to:</sup>
|
||||
<br>
|
||||
<img alt="Warp sponsorship" width="400" src="https://github.com/warpdotdev/brand-assets/blob/main/Github/Sponsor/Warp-Github-LG-02.png">
|
||||
<img alt="Warp sponsorship" width="400" src="https://raw.githubusercontent.com/warpdotdev/brand-assets/refs/heads/main/Github/Sponsor/Warp-Github-LG-02.png">
|
||||
<br>
|
||||
<h>Warp, built for coding with multiple AI agents</b>
|
||||
<br>
|
||||
|
||||
7
cmd/generate_changelog/incoming/1755.txt
Normal file
7
cmd/generate_changelog/incoming/1755.txt
Normal file
@@ -0,0 +1,7 @@
|
||||
### PR [#1755](https://github.com/danielmiessler/Fabric/pull/1755) by [ksylvan](https://github.com/ksylvan): Add i18n Support for Multi-Language Fabric Experience
|
||||
|
||||
- Add Spanish localization support with i18n
|
||||
- Create contexts and sessions tutorial documentation
|
||||
- Fix broken Warp sponsorship image URL
|
||||
- Remove solve_with_cot pattern from codebase
|
||||
- Update pattern descriptions and explanations
|
||||
@@ -178,47 +178,46 @@
|
||||
174. **refine_design_document**: Refines a design document based on a design review by analyzing, mapping concepts, and implementing changes using valid Markdown.
|
||||
175. **review_design**: Reviews and analyzes architecture design, focusing on clarity, component design, system integrations, security, performance, scalability, and data management.
|
||||
176. **sanitize_broken_html_to_markdown**: Converts messy HTML into clean, properly formatted Markdown, applying custom styling and ensuring compatibility with Vite.
|
||||
177. **solve_with_cot**: Provides detailed, step-by-step responses with chain of thought reasoning, using structured thinking, reflection, and output sections.
|
||||
178. **suggest_pattern**: Suggests appropriate fabric patterns or commands based on user input, providing clear explanations and options for users.
|
||||
179. **summarize**: Summarizes content into a 20-word sentence, main points, and takeaways, formatted with numbered lists in Markdown.
|
||||
180. **summarize_board_meeting**: Creates formal meeting notes from board meeting transcripts for corporate governance documentation.
|
||||
181. **summarize_debate**: Summarizes debates, identifies primary disagreement, extracts arguments, and provides analysis of evidence and argument strength to predict outcomes.
|
||||
182. **summarize_git_changes**: Summarizes recent project updates from the last 7 days, focusing on key changes with enthusiasm.
|
||||
183. **summarize_git_diff**: Summarizes and organizes Git diff changes with clear, succinct commit messages and bullet points.
|
||||
184. **summarize_lecture**: Extracts relevant topics, definitions, and tools from lecture transcripts, providing structured summaries with timestamps and key takeaways.
|
||||
185. **summarize_legislation**: Summarizes complex political proposals and legislation by analyzing key points, proposed changes, and providing balanced, positive, and cynical characterizations.
|
||||
186. **summarize_meeting**: Analyzes meeting transcripts to extract a structured summary, including an overview, key points, tasks, decisions, challenges, timeline, references, and next steps.
|
||||
187. **summarize_micro**: Summarizes content into a 20-word sentence, 3 main points, and 3 takeaways, formatted in clear, concise Markdown.
|
||||
188. **summarize_newsletter**: Extracts the most meaningful, interesting, and useful content from a newsletter, summarizing key sections such as content, opinions, tools, companies, and follow-up items in clear, structured Markdown.
|
||||
189. **summarize_paper**: Summarizes an academic paper by detailing its title, authors, technical approach, distinctive features, experimental setup, results, advantages, limitations, and conclusion in a clear, structured format using human-readable Markdown.
|
||||
190. **summarize_prompt**: Summarizes AI chat prompts by describing the primary function, unique approach, and expected output in a concise paragraph. The summary is focused on the prompt's purpose without unnecessary details or formatting.
|
||||
191. **summarize_pull-requests**: Summarizes pull requests for a coding project by providing a summary and listing the top PRs with human-readable descriptions.
|
||||
192. **summarize_rpg_session**: Summarizes a role-playing game session by extracting key events, combat stats, character changes, quotes, and more.
|
||||
193. **t_analyze_challenge_handling**: Provides 8-16 word bullet points evaluating how well challenges are being addressed, calling out any lack of effort.
|
||||
194. **t_check_metrics**: Analyzes deep context from the TELOS file and input instruction, then provides a wisdom-based output while considering metrics and KPIs to assess recent improvements.
|
||||
195. **t_create_h3_career**: Summarizes context and produces wisdom-based output by deeply analyzing both the TELOS File and the input instruction, considering the relationship between the two.
|
||||
196. **t_create_opening_sentences**: Describes from TELOS file the person's identity, goals, and actions in 4 concise, 32-word bullet points, humbly.
|
||||
197. **t_describe_life_outlook**: Describes from TELOS file a person's life outlook in 5 concise, 16-word bullet points.
|
||||
198. **t_extract_intro_sentences**: Summarizes from TELOS file a person's identity, work, and current projects in 5 concise and grounded bullet points.
|
||||
199. **t_extract_panel_topics**: Creates 5 panel ideas with titles and descriptions based on deep context from a TELOS file and input.
|
||||
200. **t_find_blindspots**: Identify potential blindspots in thinking, frames, or models that may expose the individual to error or risk.
|
||||
201. **t_find_negative_thinking**: Analyze a TELOS file and input to identify negative thinking in documents or journals, followed by tough love encouragement.
|
||||
202. **t_find_neglected_goals**: Analyze a TELOS file and input instructions to identify goals or projects that have not been worked on recently.
|
||||
203. **t_give_encouragement**: Analyze a TELOS file and input instructions to evaluate progress, provide encouragement, and offer recommendations for continued effort.
|
||||
204. **t_red_team_thinking**: Analyze a TELOS file and input instructions to red-team thinking, models, and frames, then provide recommendations for improvement.
|
||||
205. **t_threat_model_plans**: Analyze a TELOS file and input instructions to create threat models for a life plan and recommend improvements.
|
||||
206. **t_visualize_mission_goals_projects**: Analyze a TELOS file and input instructions to create an ASCII art diagram illustrating the relationship of missions, goals, and projects.
|
||||
207. **t_year_in_review**: Analyze a TELOS file to create insights about a person or entity, then summarize accomplishments and visualizations in bullet points.
|
||||
208. **to_flashcards**: Create Anki flashcards from a given text, focusing on concise, optimized questions and answers without external context.
|
||||
209. **transcribe_minutes**: Extracts (from meeting transcription) meeting minutes, identifying actionables, insightful ideas, decisions, challenges, and next steps in a structured format.
|
||||
210. **translate**: Translates sentences or documentation into the specified language code while maintaining the original formatting and tone.
|
||||
211. **tweet**: Provides a step-by-step guide on crafting engaging tweets with emojis, covering Twitter basics, account creation, features, and audience targeting.
|
||||
212. **write_essay**: Writes essays in the style of a specified author, embodying their unique voice, vocabulary, and approach. Uses `author_name` variable.
|
||||
213. **write_essay_pg**: Writes concise, clear essays in the style of Paul Graham, focusing on simplicity, clarity, and illumination of the provided topic.
|
||||
214. **write_hackerone_report**: Generates concise, clear, and reproducible bug bounty reports, detailing vulnerability impact, steps to reproduce, and exploit details for triagers.
|
||||
215. **write_latex**: Generates syntactically correct LaTeX code for a new.tex document, ensuring proper formatting and compatibility with pdflatex.
|
||||
216. **write_micro_essay**: Writes concise, clear, and illuminating essays on the given topic in the style of Paul Graham.
|
||||
217. **write_nuclei_template_rule**: Generates Nuclei YAML templates for detecting vulnerabilities using HTTP requests, matchers, extractors, and dynamic data extraction.
|
||||
218. **write_pull-request**: Drafts detailed pull request descriptions, explaining changes, providing reasoning, and identifying potential bugs from the git diff command output.
|
||||
219. **write_semgrep_rule**: Creates accurate and working Semgrep rules based on input, following syntax guidelines and specific language considerations.
|
||||
220. **youtube_summary**: Create concise, timestamped Youtube video summaries that highlight key points.
|
||||
177. **suggest_pattern**: Suggests appropriate fabric patterns or commands based on user input, providing clear explanations and options for users.
|
||||
178. **summarize**: Summarizes content into a 20-word sentence, main points, and takeaways, formatted with numbered lists in Markdown.
|
||||
179. **summarize_board_meeting**: Creates formal meeting notes from board meeting transcripts for corporate governance documentation.
|
||||
180. **summarize_debate**: Summarizes debates, identifies primary disagreement, extracts arguments, and provides analysis of evidence and argument strength to predict outcomes.
|
||||
181. **summarize_git_changes**: Summarizes recent project updates from the last 7 days, focusing on key changes with enthusiasm.
|
||||
182. **summarize_git_diff**: Summarizes and organizes Git diff changes with clear, succinct commit messages and bullet points.
|
||||
183. **summarize_lecture**: Extracts relevant topics, definitions, and tools from lecture transcripts, providing structured summaries with timestamps and key takeaways.
|
||||
184. **summarize_legislation**: Summarizes complex political proposals and legislation by analyzing key points, proposed changes, and providing balanced, positive, and cynical characterizations.
|
||||
185. **summarize_meeting**: Analyzes meeting transcripts to extract a structured summary, including an overview, key points, tasks, decisions, challenges, timeline, references, and next steps.
|
||||
186. **summarize_micro**: Summarizes content into a 20-word sentence, 3 main points, and 3 takeaways, formatted in clear, concise Markdown.
|
||||
187. **summarize_newsletter**: Extracts the most meaningful, interesting, and useful content from a newsletter, summarizing key sections such as content, opinions, tools, companies, and follow-up items in clear, structured Markdown.
|
||||
188. **summarize_paper**: Summarizes an academic paper by detailing its title, authors, technical approach, distinctive features, experimental setup, results, advantages, limitations, and conclusion in a clear, structured format using human-readable Markdown.
|
||||
189. **summarize_prompt**: Summarizes AI chat prompts by describing the primary function, unique approach, and expected output in a concise paragraph. The summary is focused on the prompt's purpose without unnecessary details or formatting.
|
||||
190. **summarize_pull-requests**: Summarizes pull requests for a coding project by providing a summary and listing the top PRs with human-readable descriptions.
|
||||
191. **summarize_rpg_session**: Summarizes a role-playing game session by extracting key events, combat stats, character changes, quotes, and more.
|
||||
192. **t_analyze_challenge_handling**: Provides 8-16 word bullet points evaluating how well challenges are being addressed, calling out any lack of effort.
|
||||
193. **t_check_metrics**: Analyzes deep context from the TELOS file and input instruction, then provides a wisdom-based output while considering metrics and KPIs to assess recent improvements.
|
||||
194. **t_create_h3_career**: Summarizes context and produces wisdom-based output by deeply analyzing both the TELOS File and the input instruction, considering the relationship between the two.
|
||||
195. **t_create_opening_sentences**: Describes from TELOS file the person's identity, goals, and actions in 4 concise, 32-word bullet points, humbly.
|
||||
196. **t_describe_life_outlook**: Describes from TELOS file a person's life outlook in 5 concise, 16-word bullet points.
|
||||
197. **t_extract_intro_sentences**: Summarizes from TELOS file a person's identity, work, and current projects in 5 concise and grounded bullet points.
|
||||
198. **t_extract_panel_topics**: Creates 5 panel ideas with titles and descriptions based on deep context from a TELOS file and input.
|
||||
199. **t_find_blindspots**: Identify potential blindspots in thinking, frames, or models that may expose the individual to error or risk.
|
||||
200. **t_find_negative_thinking**: Analyze a TELOS file and input to identify negative thinking in documents or journals, followed by tough love encouragement.
|
||||
201. **t_find_neglected_goals**: Analyze a TELOS file and input instructions to identify goals or projects that have not been worked on recently.
|
||||
202. **t_give_encouragement**: Analyze a TELOS file and input instructions to evaluate progress, provide encouragement, and offer recommendations for continued effort.
|
||||
203. **t_red_team_thinking**: Analyze a TELOS file and input instructions to red-team thinking, models, and frames, then provide recommendations for improvement.
|
||||
204. **t_threat_model_plans**: Analyze a TELOS file and input instructions to create threat models for a life plan and recommend improvements.
|
||||
205. **t_visualize_mission_goals_projects**: Analyze a TELOS file and input instructions to create an ASCII art diagram illustrating the relationship of missions, goals, and projects.
|
||||
206. **t_year_in_review**: Analyze a TELOS file to create insights about a person or entity, then summarize accomplishments and visualizations in bullet points.
|
||||
207. **to_flashcards**: Create Anki flashcards from a given text, focusing on concise, optimized questions and answers without external context.
|
||||
208. **transcribe_minutes**: Extracts (from meeting transcription) meeting minutes, identifying actionables, insightful ideas, decisions, challenges, and next steps in a structured format.
|
||||
209. **translate**: Translates sentences or documentation into the specified language code while maintaining the original formatting and tone.
|
||||
210. **tweet**: Provides a step-by-step guide on crafting engaging tweets with emojis, covering Twitter basics, account creation, features, and audience targeting.
|
||||
211. **write_essay**: Writes essays in the style of a specified author, embodying their unique voice, vocabulary, and approach. Uses `author_name` variable.
|
||||
212. **write_essay_pg**: Writes concise, clear essays in the style of Paul Graham, focusing on simplicity, clarity, and illumination of the provided topic.
|
||||
213. **write_hackerone_report**: Generates concise, clear, and reproducible bug bounty reports, detailing vulnerability impact, steps to reproduce, and exploit details for triagers.
|
||||
214. **write_latex**: Generates syntactically correct LaTeX code for a new.tex document, ensuring proper formatting and compatibility with pdflatex.
|
||||
215. **write_micro_essay**: Writes concise, clear, and illuminating essays on the given topic in the style of Paul Graham.
|
||||
216. **write_nuclei_template_rule**: Generates Nuclei YAML templates for detecting vulnerabilities using HTTP requests, matchers, extractors, and dynamic data extraction.
|
||||
217. **write_pull-request**: Drafts detailed pull request descriptions, explaining changes, providing reasoning, and identifying potential bugs from the git diff command output.
|
||||
218. **write_semgrep_rule**: Creates accurate and working Semgrep rules based on input, following syntax guidelines and specific language considerations.
|
||||
219. **youtube_summary**: Create concise, timestamped Youtube video summaries that highlight key points.
|
||||
|
||||
@@ -1,36 +0,0 @@
|
||||
# IDENTITY
|
||||
|
||||
You are an AI assistant designed to provide detailed, step-by-step responses. Your outputs should follow this structure:
|
||||
|
||||
# STEPS
|
||||
|
||||
1. Begin with a <thinking> section.
|
||||
|
||||
2. Inside the thinking section:
|
||||
|
||||
- a. Briefly analyze the question and outline your approach.
|
||||
|
||||
- b. Present a clear plan of steps to solve the problem.
|
||||
|
||||
- c. Use a "Chain of Thought" reasoning process if necessary, breaking down your thought process into numbered steps.
|
||||
|
||||
3. Include a <reflection> section for each idea where you:
|
||||
|
||||
- a. Review your reasoning.
|
||||
|
||||
- b. Check for potential errors or oversights.
|
||||
|
||||
- c. Confirm or adjust your conclusion if necessary.
|
||||
- Be sure to close all reflection sections.
|
||||
- Close the thinking section with </thinking>.
|
||||
- Provide your final answer in an <output> section.
|
||||
|
||||
Always use these tags in your responses. Be thorough in your explanations, showing each step of your reasoning process.
|
||||
Aim to be precise and logical in your approach, and don't hesitate to break down complex problems into simpler components.
|
||||
Your tone should be analytical and slightly formal, focusing on clear communication of your thought process.
|
||||
Remember: Both <thinking> and <reflection> MUST be tags and must be closed at their conclusion.
|
||||
Make sure all <tags> are on separate lines with no other text.
|
||||
|
||||
# INPUT
|
||||
|
||||
INPUT:
|
||||
@@ -71,7 +71,7 @@ Match the request to one or more of these primary categories:
|
||||
|
||||
## Common Request Types and Best Patterns
|
||||
|
||||
**AI**: ai, create_ai_jobs_analysis, create_art_prompt, create_pattern, create_prediction_block, extract_mcp_servers, extract_wisdom_agents, generate_code_rules, improve_prompt, judge_output, rate_ai_response, rate_ai_result, raw_query, solve_with_cot, suggest_pattern, summarize_prompt
|
||||
**AI**: ai, create_ai_jobs_analysis, create_art_prompt, create_pattern, create_prediction_block, extract_mcp_servers, extract_wisdom_agents, generate_code_rules, improve_prompt, judge_output, rate_ai_response, rate_ai_result, raw_query, suggest_pattern, summarize_prompt
|
||||
|
||||
**ANALYSIS**: ai, analyze_answers, analyze_bill, analyze_bill_short, analyze_candidates, analyze_cfp_submission, analyze_claims, analyze_comments, analyze_debate, analyze_email_headers, analyze_incident, analyze_interviewer_techniques, analyze_logs, analyze_malware, analyze_military_strategy, analyze_mistakes, analyze_paper, analyze_paper_simple, analyze_patent, analyze_personality, analyze_presentation, analyze_product_feedback, analyze_proposition, analyze_prose, analyze_prose_json, analyze_prose_pinker, analyze_risk, analyze_sales_call, analyze_spiritual_text, analyze_tech_impact, analyze_terraform_plan, analyze_threat_report, analyze_threat_report_cmds, analyze_threat_report_trends, apply_ul_tags, check_agreement, compare_and_contrast, create_ai_jobs_analysis, create_idea_compass, create_investigation_visualization, create_prediction_block, create_recursive_outline, create_tags, dialog_with_socrates, extract_main_idea, extract_predictions, find_hidden_message, find_logical_fallacies, get_wow_per_minute, identify_dsrp_distinctions, identify_dsrp_perspectives, identify_dsrp_relationships, identify_dsrp_systems, identify_job_stories, label_and_rate, prepare_7s_strategy, provide_guidance, rate_content, rate_value, recommend_artists, recommend_talkpanel_topics, review_design, summarize_board_meeting, t_analyze_challenge_handling, t_check_dunning_kruger, t_check_metrics, t_describe_life_outlook, t_extract_intro_sentences, t_extract_panel_topics, t_find_blindspots, t_find_negative_thinking, t_red_team_thinking, t_threat_model_plans, t_year_in_review, write_hackerone_report
|
||||
|
||||
@@ -83,7 +83,7 @@ Match the request to one or more of these primary categories:
|
||||
|
||||
**CONVERSION**: clean_text, convert_to_markdown, create_graph_from_input, export_data_as_csv, extract_videoid, get_youtube_rss, humanize, md_callout, sanitize_broken_html_to_markdown, to_flashcards, transcribe_minutes, translate, tweet, write_latex
|
||||
|
||||
**CR THINKING**: capture_thinkers_work, create_idea_compass, create_markmap_visualization, dialog_with_socrates, extract_alpha, extract_controversial_ideas, extract_extraordinary_claims, extract_predictions, extract_primary_problem, extract_wisdom_nometa, find_hidden_message, find_logical_fallacies, solve_with_cot, summarize_debate, t_analyze_challenge_handling, t_check_dunning_kruger, t_find_blindspots, t_find_negative_thinking, t_find_neglected_goals, t_red_team_thinking
|
||||
**CR THINKING**: capture_thinkers_work, create_idea_compass, create_markmap_visualization, dialog_with_socrates, extract_alpha, extract_controversial_ideas, extract_extraordinary_claims, extract_predictions, extract_primary_problem, extract_wisdom_nometa, find_hidden_message, find_logical_fallacies, summarize_debate, t_analyze_challenge_handling, t_check_dunning_kruger, t_find_blindspots, t_find_negative_thinking, t_find_neglected_goals, t_red_team_thinking
|
||||
|
||||
**CREATIVITY**: create_mnemonic_phrases, write_essay
|
||||
|
||||
@@ -95,7 +95,7 @@ Match the request to one or more of these primary categories:
|
||||
|
||||
**GAMING**: create_npc, create_rpg_summary, summarize_rpg_session
|
||||
|
||||
**LEARNING**: analyze_answers, ask_uncle_duke, coding_master, create_diy, create_flash_cards, create_quiz, create_reading_plan, create_story_explanation, dialog_with_socrates, explain_code, explain_docs, explain_math, explain_project, explain_terms, extract_references, improve_academic_writing, provide_guidance, solve_with_cot, summarize_lecture, summarize_paper, to_flashcards, write_essay_pg
|
||||
**LEARNING**: analyze_answers, ask_uncle_duke, coding_master, create_diy, create_flash_cards, create_quiz, create_reading_plan, create_story_explanation, dialog_with_socrates, explain_code, explain_docs, explain_math, explain_project, explain_terms, extract_references, improve_academic_writing, provide_guidance, summarize_lecture, summarize_paper, to_flashcards, write_essay_pg
|
||||
|
||||
**OTHER**: extract_jokes
|
||||
|
||||
|
||||
@@ -78,10 +78,6 @@ Assess AI outputs against criteria, providing scores and feedback.
|
||||
|
||||
Process direct queries by interpreting intent.
|
||||
|
||||
### solve_with_cot
|
||||
|
||||
Solve problems using chain-of-thought reasoning.
|
||||
|
||||
### suggest_pattern
|
||||
|
||||
Recommend Fabric patterns based on user requirements.
|
||||
|
||||
@@ -8,12 +8,13 @@ Thanks for contributing to Fabric! Here's what you need to know to get started q
|
||||
|
||||
- Go 1.24+ installed
|
||||
- Git configured with your details
|
||||
- GitHub CLI (`gh`)
|
||||
|
||||
### Getting Started
|
||||
|
||||
```bash
|
||||
# Clone and setup
|
||||
git clone https://github.com/danielmiessler/fabric.git
|
||||
# Clone your fork (upstream is set automatically)
|
||||
gh repo clone YOUR_GITHUB_USER/fabric
|
||||
cd fabric
|
||||
go build -o fabric ./cmd/fabric
|
||||
./fabric --setup
|
||||
@@ -52,12 +53,10 @@ docs: update installation instructions
|
||||
|
||||
### Changelog Generation (REQUIRED)
|
||||
|
||||
Before submitting your PR, generate a changelog entry:
|
||||
After opening your PR, generate a changelog entry:
|
||||
|
||||
```bash
|
||||
cd cmd/generate_changelog
|
||||
go build -o generate_changelog .
|
||||
./generate_changelog --incoming-pr YOUR_PR_NUMBER
|
||||
go run ./cmd/generate_changelog --ai-summarize --incoming-pr YOUR_PR_NUMBER
|
||||
```
|
||||
|
||||
**Requirements:**
|
||||
|
||||
107
docs/contexts-and-sessions-tutorial.md
Normal file
107
docs/contexts-and-sessions-tutorial.md
Normal file
@@ -0,0 +1,107 @@
|
||||
# Contexts and Sessions in Fabric
|
||||
|
||||
Fabric uses **contexts** and **sessions** to manage conversation state and reusable prompt data. This guide focuses on how to use them from the CLI and REST API.
|
||||
|
||||
## What is a Context?
|
||||
|
||||
A context is named text that Fabric injects at the beginning of a conversation. Contexts live on disk under `~/.config/fabric/contexts`; each file name is the context name, and its contents are included as a system message.
|
||||
|
||||
Command-line helpers:
|
||||
|
||||
- `--context <name>` select a context
|
||||
- `--listcontexts` list available contexts
|
||||
- `--printcontext <name>` show the contents
|
||||
- `--wipecontext <name>` delete it
|
||||
|
||||
## What is a Session?
|
||||
|
||||
A session tracks the message history of a conversation. When you specify a session name, Fabric loads any existing messages, appends new ones, and saves back to disk. Sessions are stored as JSON under `~/.config/fabric/sessions`.
|
||||
|
||||
Command-line helpers:
|
||||
|
||||
- `--session <name>` attach to a session
|
||||
- `--listsessions` list stored sessions
|
||||
- `--printsession <name>` print a session
|
||||
- `--wipesession <name>` delete it
|
||||
|
||||
## Everyday Use Cases
|
||||
|
||||
Contexts and sessions serve different everyday needs:
|
||||
|
||||
- **Context** – Reuse prompt text such as preferred style, domain knowledge, or instructions for the assistant.
|
||||
- **Session** – Maintain ongoing conversation history so Fabric remembers earlier exchanges.
|
||||
|
||||
Example workflow:
|
||||
|
||||
1. Create a context file manually in `~/.config/fabric/contexts/writer` with your writing guidelines.
|
||||
2. Start a session while chatting to build on previous answers (`fabric --session mychat`). Sessions are automatically created if they don't exist.
|
||||
|
||||
## How Contexts and Sessions Interact
|
||||
|
||||
When Fabric handles a chat request, it loads any named context, combines it with pattern text, and adds the result as a system message before sending the conversation history to the model. The assistant's reply is appended to the session so future calls continue from the same state.
|
||||
|
||||
## REST API Endpoints
|
||||
|
||||
The REST server exposes CRUD endpoints for managing contexts and sessions:
|
||||
|
||||
- `/contexts/:name` – get or save a context
|
||||
- `/contexts/names` – list available contexts
|
||||
- `/sessions/:name` – get or save a session
|
||||
- `/sessions/names` – list available sessions
|
||||
|
||||
## Summary
|
||||
|
||||
Contexts provide reusable system-level instructions, while sessions maintain conversation history. Together they allow Fabric to build rich, stateful interactions with language models.
|
||||
|
||||
## For Developers
|
||||
|
||||
### Loading Contexts from Disk
|
||||
|
||||
```go
|
||||
// internal/plugins/db/fsdb/contexts.go
|
||||
func (o *ContextsEntity) Get(name string) (*Context, error) {
|
||||
content, err := o.Load(name)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return &Context{Name: name, Content: string(content)}, nil
|
||||
}
|
||||
```
|
||||
|
||||
### Handling Sessions
|
||||
|
||||
```go
|
||||
// internal/plugins/db/fsdb/sessions.go
|
||||
type Session struct {
|
||||
Name string
|
||||
Messages []*chat.ChatCompletionMessage
|
||||
}
|
||||
|
||||
func (o *SessionsEntity) Get(name string) (*Session, error) {
|
||||
session := &Session{Name: name}
|
||||
if o.Exists(name) {
|
||||
err = o.LoadAsJson(name, &session.Messages)
|
||||
} else {
|
||||
fmt.Printf("Creating new session: %s\n", name)
|
||||
}
|
||||
return session, err
|
||||
}
|
||||
```
|
||||
|
||||
### Building a Session
|
||||
|
||||
```go
|
||||
// internal/core/chatter.go
|
||||
if request.ContextName != "" {
|
||||
ctx, err := o.db.Contexts.Get(request.ContextName)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("could not find context %s: %v", request.ContextName, err)
|
||||
}
|
||||
contextContent = ctx.Content
|
||||
}
|
||||
|
||||
systemMessage := strings.TrimSpace(contextContent) + strings.TrimSpace(patternContent)
|
||||
if systemMessage != "" {
|
||||
session.Append(&chat.ChatCompletionMessage{Role: chat.ChatMessageRoleSystem, Content: systemMessage})
|
||||
}
|
||||
```
|
||||
182
docs/i18n.md
Normal file
182
docs/i18n.md
Normal file
@@ -0,0 +1,182 @@
# Internationalization (i18n) in Fabric

Fabric supports multiple languages through its internationalization system. The system automatically detects your preferred language from environment variables and provides localized messages.

## How Locale Detection Works

Fabric follows POSIX standards for locale detection, with the following priority order:

1. **Explicit language flag**: `--language` or `-g` (highest priority)
2. **LC_ALL**: Complete locale override environment variable
3. **LC_MESSAGES**: Messages-specific locale environment variable
4. **LANG**: General locale environment variable
5. **Default fallback**: English (`en`) if none are set or valid

### Examples

```bash
# Use the explicit language flag
fabric --language es --pattern summarize

# Use the LC_ALL environment variable
LC_ALL=fr_FR.UTF-8 fabric --pattern summarize

# Use the LANG environment variable
LANG=de_DE.UTF-8 fabric --pattern summarize

# Multiple environment variables (LC_ALL takes priority)
LC_ALL=es_ES.UTF-8 LANG=fr_FR.UTF-8 fabric --pattern summarize
# Uses Spanish (es_ES) because LC_ALL has higher priority
```

## Supported Locale Formats

The system automatically normalizes various locale formats:

- `en_US.UTF-8` → `en-US`
- `fr_FR@euro` → `fr-FR`
- `zh_CN.GB2312` → `zh-CN`
- `de_DE.UTF-8@traditional` → `de-DE`

Special cases:

- `C` or `POSIX` → treated as invalid; falls back to English

## Translation File Locations

Translations are loaded from multiple sources in this order:

1. **Embedded files** (highest priority): compiled into the binary
   - Location: `internal/i18n/locales/*.json`
   - Always available, no download required

2. **User config directory**: downloaded on demand
   - Location: `~/.config/fabric/locales/`
   - Downloaded from GitHub when needed

3. **GitHub repository**: source for downloads
   - URL: `https://raw.githubusercontent.com/danielmiessler/Fabric/main/internal/i18n/locales/`

## Currently Supported Languages

- **English** (`en`): default language, always available
- **Spanish** (`es`): available in embedded files

## Adding New Languages

To add support for a new language:

1. Create a new JSON file: `internal/i18n/locales/{lang}.json`
2. Add translations in the format:

```json
{
  "message_id": "localized message text"
}
```

3. Rebuild Fabric to embed the new translations

### Translation File Format

Translation files use JSON with message IDs as keys:

```json
{
  "html_readability_error": "use original input, because can't apply html readability"
}
```

Spanish example:

```json
{
  "html_readability_error": "usa la entrada original, porque no se puede aplicar la legibilidad de html"
}
```

## Error Handling

The i18n system is designed to be robust:

- **Download failures**: non-fatal; falls back to embedded translations
- **Invalid locales**: skipped; the next locale in the priority order is used
- **Missing translations**: falls back to English
- **Missing files**: uses embedded defaults

Error messages are logged to stderr but don't prevent operation.

## Environment Variable Examples

### Common Unix Locale Settings

```bash
# Set the system-wide locale
export LANG=en_US.UTF-8

# Override all locale categories
export LC_ALL=fr_FR.UTF-8

# Set only the message locale (for this invocation)
LC_MESSAGES=es_ES.UTF-8 fabric --pattern summarize

# Check current locale settings
locale
```

### Testing Locale Detection

You can test locale detection without changing your system settings:

```bash
# Test with French
LC_ALL=fr_FR.UTF-8 fabric --version

# Test with Spanish (if available)
LC_ALL=es_ES.UTF-8 fabric --version

# Test with German (will download if available)
LC_ALL=de_DE.UTF-8 fabric --version
```

## Troubleshooting

### "i18n download failed" messages

This is normal when requesting a language that isn't available yet. The system falls back to English.

### Locale not detected

Check your environment variables:

```bash
echo $LC_ALL
echo $LC_MESSAGES
echo $LANG
```

Ensure they're in a valid format such as `en_US.UTF-8` or `fr_FR`.

### Wrong language used

Remember the priority order:

1. `--language` flag overrides everything
2. `LC_ALL` overrides `LC_MESSAGES` and `LANG`
3. `LC_MESSAGES` overrides `LANG`

## Implementation Details

The locale detection system:

- Uses `golang.org/x/text/language` for parsing and validation
- Follows BCP 47 language tag standards
- Implements POSIX locale environment variable precedence
- Provides comprehensive test coverage
- Handles edge cases gracefully

For developers working on the codebase, see the implementation in:

- `internal/i18n/locale.go`: locale detection logic
- `internal/i18n/i18n.go`: main i18n initialization
- `internal/i18n/locale_test.go`: test suite
go.mod (1 line changed)
@@ -21,6 +21,7 @@ require (
	github.com/joho/godotenv v1.5.1
	github.com/kballard/go-shellquote v0.0.0-20180428030007-95032a82bc51
	github.com/mattn/go-sqlite3 v1.14.28
	github.com/nicksnyder/go-i18n/v2 v2.6.0
	github.com/ollama/ollama v0.11.7
	github.com/openai/openai-go v1.8.2
	github.com/otiai10/copy v1.14.1
go.sum (4 lines changed)
@@ -8,6 +8,8 @@ cloud.google.com/go/compute/metadata v0.7.0 h1:PBWF+iiAerVNe8UCHxdOt6eHLVc3ydFeO
cloud.google.com/go/compute/metadata v0.7.0/go.mod h1:j5MvL9PprKL39t166CoB1uVHfQMs4tFQZZcKwksXUjo=
dario.cat/mergo v1.0.2 h1:85+piFYR1tMbRrLcDwR18y4UKJ3aH1Tbzi24VRW1TK8=
dario.cat/mergo v1.0.2/go.mod h1:E/hbnu0NxMFBjpMIE34DRGLWqDy0g5FuKDhCb31ngxA=
github.com/BurntSushi/toml v1.5.0 h1:W5quZX/G/csjUnuI8SUYlsHs9M38FC7znL0lIO+DvMg=
github.com/BurntSushi/toml v1.5.0/go.mod h1:ukJfTF/6rtPPRCnwkur4qwRxa8vTRFBF0uk2lLoLwho=
github.com/Microsoft/go-winio v0.5.2/go.mod h1:WpS1mjBmmwHBEWmogvA2mj8546UReBk4v8QkMxJ6pZY=
github.com/Microsoft/go-winio v0.6.2 h1:F2VQgta7ecxGYO8k3ZZz3RS8fVIXVxONVUPlNERoyfY=
github.com/Microsoft/go-winio v0.6.2/go.mod h1:yd8OoFMLzJbo9gZq8j5qaps8bJ9aShtEA8Ipt1oGCvU=
@@ -180,6 +182,8 @@ github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd h1:TRLaZ9cD/w
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/reflect2 v1.0.2 h1:xBagoLtFs94CBntxluKeaWgTMpvLxC4ur3nMaC9Gz0M=
github.com/modern-go/reflect2 v1.0.2/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk=
github.com/nicksnyder/go-i18n/v2 v2.6.0 h1:C/m2NNWNiTB6SK4Ao8df5EWm3JETSTIGNXBpMJTxzxQ=
github.com/nicksnyder/go-i18n/v2 v2.6.0/go.mod h1:88sRqr0C6OPyJn0/KRNaEz1uWorjxIKP7rUUcvycecE=
github.com/ollama/ollama v0.11.7 h1:CuYjaJ/YEnvLDpJocJbbVdpdVFyGA/OP6lKFyzZD4dI=
github.com/ollama/ollama v0.11.7/go.mod h1:9+1//yWPsDE2u+l1a5mpaKrYw4VdnSsRU3ioq5BvMms=
github.com/onsi/gomega v1.34.1 h1:EUMJIKUjM8sKjYbtxQI9A4z2o+rruxnzNvpknOXie6k=
@@ -6,6 +6,7 @@ import (
	"strings"

	"github.com/danielmiessler/fabric/internal/core"
	"github.com/danielmiessler/fabric/internal/i18n"
	debuglog "github.com/danielmiessler/fabric/internal/log"
	"github.com/danielmiessler/fabric/internal/plugins/ai/openai"
	"github.com/danielmiessler/fabric/internal/tools/converter"
@@ -19,6 +20,11 @@ func Cli(version string) (err error) {
		return
	}

	// initialize internationalization using requested language
	if _, err = i18n.Init(currentFlags.Language); err != nil {
		return
	}

	if currentFlags.Setup {
		if err = ensureEnvFile(); err != nil {
			return
@@ -86,7 +92,7 @@ func Cli(version string) (err error) {
	// Process HTML readability if needed
	if currentFlags.HtmlReadability {
		if msg, cleanErr := converter.HtmlReadability(currentFlags.Message); cleanErr != nil {
			fmt.Println("use original input, because can't apply html readability", cleanErr)
			fmt.Println(i18n.T("html_readability_error"), cleanErr)
		} else {
			currentFlags.Message = msg
		}
internal/i18n/i18n.go (new file, 121 lines)
@@ -0,0 +1,121 @@
package i18n

import (
	"embed"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
	"os"
	"path/filepath"
	"strings"
	"sync"

	"github.com/nicksnyder/go-i18n/v2/i18n"
	"golang.org/x/text/language"
)

// embedded default locales
//
//go:embed locales/*.json
var localeFS embed.FS

var (
	translator *i18n.Localizer
	initOnce   sync.Once
)

// Init initializes the i18n bundle and localizer. It loads the specified locale
// and falls back to English if loading fails.
// Translation files are searched in the user config directory and downloaded
// from GitHub if missing.
//
// If locale is empty, it will attempt to detect the system locale from
// environment variables (LC_ALL, LC_MESSAGES, LANG) following POSIX standards.
func Init(locale string) (*i18n.Localizer, error) {
	// Use preferred locale detection if no explicit locale provided
	locale = getPreferredLocale(locale)
	if locale == "" {
		locale = "en"
	}

	bundle := i18n.NewBundle(language.English)
	bundle.RegisterUnmarshalFunc("json", json.Unmarshal)

	// load embedded translations for the requested locale if available
	embedded := false
	if data, err := localeFS.ReadFile("locales/" + locale + ".json"); err == nil {
		_, _ = bundle.ParseMessageFileBytes(data, locale+".json")
		embedded = true
	} else if strings.Contains(locale, "-") {
		// Try the base language if the regional variant is not found (e.g., es-ES -> es)
		baseLang := strings.Split(locale, "-")[0]
		if data, err := localeFS.ReadFile("locales/" + baseLang + ".json"); err == nil {
			_, _ = bundle.ParseMessageFileBytes(data, baseLang+".json")
			embedded = true
		}
	}
	if !embedded {
		if data, err := localeFS.ReadFile("locales/en.json"); err == nil {
			_, _ = bundle.ParseMessageFileBytes(data, "en.json")
		}
	}

	// load locale from disk or download when not embedded
	path := filepath.Join(userLocaleDir(), locale+".json")
	if _, err := os.Stat(path); os.IsNotExist(err) && !embedded {
		if err := downloadLocale(path, locale); err != nil {
			// if download fails, still continue with embedded translations
			fmt.Fprintln(os.Stderr, "i18n download failed:", err)
		}
	}
	if _, err := os.Stat(path); err == nil {
		if _, err := bundle.LoadMessageFile(path); err != nil {
			fmt.Fprintln(os.Stderr, "i18n load failed:", err)
		}
	}

	translator = i18n.NewLocalizer(bundle, locale)
	return translator, nil
}

// T returns the localized string for the given message id.
// If the translator is not initialized, it will automatically initialize
// with system locale detection.
func T(messageID string) string {
	initOnce.Do(func() {
		if translator == nil {
			Init("") // Empty string triggers system locale detection
		}
	})
	return translator.MustLocalize(&i18n.LocalizeConfig{MessageID: messageID})
}

func userLocaleDir() string {
	dir, err := os.UserConfigDir()
	if err != nil {
		dir = "."
	}
	path := filepath.Join(dir, "fabric", "locales")
	os.MkdirAll(path, 0o755)
	return path
}

func downloadLocale(path, locale string) error {
	url := fmt.Sprintf("https://raw.githubusercontent.com/danielmiessler/Fabric/main/internal/i18n/locales/%s.json", locale)
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("unexpected status: %s", resp.Status)
	}
	f, err := os.Create(path)
	if err != nil {
		return err
	}
	defer f.Close()
	_, err = io.Copy(f, resp.Body)
	return err
}
internal/i18n/i18n_test.go (new file, 22 lines)
@@ -0,0 +1,22 @@
package i18n

import (
	"testing"

	gi18n "github.com/nicksnyder/go-i18n/v2/i18n"
)

func TestTranslation(t *testing.T) {
	loc, err := Init("es")
	if err != nil {
		t.Fatalf("init failed: %v", err)
	}
	msg, err := loc.Localize(&gi18n.LocalizeConfig{MessageID: "html_readability_error"})
	if err != nil {
		t.Fatalf("localize failed: %v", err)
	}
	expected := "usa la entrada original, porque no se puede aplicar la legibilidad de html"
	if msg != expected {
		t.Fatalf("unexpected translation: %s", msg)
	}
}
internal/i18n/locale.go (new file, 82 lines)
@@ -0,0 +1,82 @@
package i18n

import (
	"os"
	"strings"

	"golang.org/x/text/language"
)

// detectSystemLocale detects the system locale using standard Unix environment variables.
// It follows the POSIX priority order for locale environment variables:
//  1. LC_ALL (highest priority - overrides all others)
//  2. LC_MESSAGES (for messages specifically)
//  3. LANG (general locale setting)
//  4. Returns an empty string if none are set or valid
func detectSystemLocale() string {
	// Check environment variables in priority order
	envVars := []string{"LC_ALL", "LC_MESSAGES", "LANG"}

	for _, envVar := range envVars {
		if value := os.Getenv(envVar); value != "" {
			locale := normalizeLocale(value)
			if locale != "" && isValidLocale(locale) {
				return locale
			}
		}
	}

	return ""
}

// normalizeLocale converts various locale formats to BCP 47 language tags.
// Examples:
//   - "en_US.UTF-8" -> "en-US"
//   - "fr_FR@euro" -> "fr-FR"
//   - "zh_CN.GB2312" -> "zh-CN"
//   - "C" or "POSIX" -> "" (invalid, falls back to default)
func normalizeLocale(locale string) string {
	// Handle special cases
	if locale == "C" || locale == "POSIX" || locale == "" {
		return ""
	}

	// Remove encoding and modifiers, e.g. en_US.UTF-8@euro -> en_US
	locale = strings.Split(locale, ".")[0] // Remove encoding (.UTF-8)
	locale = strings.Split(locale, "@")[0] // Remove modifiers (@euro)

	// Convert underscores to hyphens for BCP 47 compliance: en_US -> en-US
	locale = strings.ReplaceAll(locale, "_", "-")

	return locale
}

// isValidLocale checks if a locale string can be parsed as a valid language tag.
func isValidLocale(locale string) bool {
	if locale == "" {
		return false
	}

	// Use golang.org/x/text/language to validate
	_, err := language.Parse(locale)
	return err == nil
}

// getPreferredLocale returns the best locale to use based on user preferences.
// Priority order:
//  1. Explicit language flag (if provided)
//  2. System environment variables (LC_ALL, LC_MESSAGES, LANG)
//  3. Default fallback (empty string, which triggers "en" in Init)
func getPreferredLocale(explicitLang string) string {
	// If explicitly set via flag, use that
	if explicitLang != "" {
		return explicitLang
	}

	// Otherwise try to detect from the system environment
	return detectSystemLocale()
}
internal/i18n/locale_test.go (new file, 288 lines)
@@ -0,0 +1,288 @@
package i18n

import (
	"os"
	"testing"
)

func TestDetectSystemLocale(t *testing.T) {
	// Save original environment
	originalLC_ALL := os.Getenv("LC_ALL")
	originalLC_MESSAGES := os.Getenv("LC_MESSAGES")
	originalLANG := os.Getenv("LANG")

	// Clean up after test
	defer func() {
		os.Setenv("LC_ALL", originalLC_ALL)
		os.Setenv("LC_MESSAGES", originalLC_MESSAGES)
		os.Setenv("LANG", originalLANG)
	}()

	tests := []struct {
		name        string
		LC_ALL      string
		LC_MESSAGES string
		LANG        string
		expected    string
		description string
	}{
		{
			name:        "LC_ALL takes highest priority",
			LC_ALL:      "fr_FR.UTF-8",
			LC_MESSAGES: "de_DE.UTF-8",
			LANG:        "es_ES.UTF-8",
			expected:    "fr-FR",
			description: "LC_ALL should override all other variables",
		},
		{
			name:        "LC_MESSAGES used when LC_ALL empty",
			LC_ALL:      "",
			LC_MESSAGES: "ja_JP.UTF-8",
			LANG:        "ko_KR.UTF-8",
			expected:    "ja-JP",
			description: "LC_MESSAGES should be used when LC_ALL is not set",
		},
		{
			name:        "LANG used when LC_ALL and LC_MESSAGES empty",
			LC_ALL:      "",
			LC_MESSAGES: "",
			LANG:        "zh_CN.GB2312",
			expected:    "zh-CN",
			description: "LANG should be fallback when others are not set",
		},
		{
			name:        "Empty when no valid locale set",
			LC_ALL:      "",
			LC_MESSAGES: "",
			LANG:        "",
			expected:    "",
			description: "Should return empty when no environment variables set",
		},
		{
			name:        "Handle C locale",
			LC_ALL:      "C",
			LC_MESSAGES: "",
			LANG:        "",
			expected:    "",
			description: "C locale should be treated as invalid (fallback to default)",
		},
		{
			name:        "Handle POSIX locale",
			LC_ALL:      "",
			LC_MESSAGES: "POSIX",
			LANG:        "",
			expected:    "",
			description: "POSIX locale should be treated as invalid (fallback to default)",
		},
		{
			name:        "Handle locale with modifiers",
			LC_ALL:      "",
			LC_MESSAGES: "",
			LANG:        "de_DE.UTF-8@euro",
			expected:    "de-DE",
			description: "Should strip encoding and modifiers",
		},
		{
			name:        "Skip invalid locale and use next priority",
			LC_ALL:      "invalid_locale",
			LC_MESSAGES: "fr_CA.UTF-8",
			LANG:        "en_US.UTF-8",
			expected:    "fr-CA",
			description: "Should skip invalid high-priority locale and use next valid one",
		},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			// Set test environment
			os.Setenv("LC_ALL", tt.LC_ALL)
			os.Setenv("LC_MESSAGES", tt.LC_MESSAGES)
			os.Setenv("LANG", tt.LANG)

			result := detectSystemLocale()
			if result != tt.expected {
				t.Errorf("%s: expected %q, got %q", tt.description, tt.expected, result)
			}
		})
	}
}

func TestNormalizeLocale(t *testing.T) {
	tests := []struct {
		input    string
		expected string
	}{
		// Standard Unix locale formats
		{"en_US.UTF-8", "en-US"},
		{"fr_FR.ISO8859-1", "fr-FR"},
		{"de_DE@euro", "de-DE"},
		{"zh_CN.GB2312", "zh-CN"},
		{"ja_JP.eucJP@traditional", "ja-JP"},

		// Already normalized
		{"en-US", "en-US"},
		{"fr-CA", "fr-CA"},

		// Language only
		{"en", "en"},
		{"fr", "fr"},
		{"zh", "zh"},

		// Special cases
		{"C", ""},
		{"POSIX", ""},
		{"", ""},

		// Complex cases
		{"pt_BR.UTF-8@currency=BRL", "pt-BR"},
		{"sr_RS.UTF-8@latin", "sr-RS"},
		{"uz_UZ.UTF-8@cyrillic", "uz-UZ"},
	}

	for _, tt := range tests {
		t.Run(tt.input, func(t *testing.T) {
			result := normalizeLocale(tt.input)
			if result != tt.expected {
				t.Errorf("normalizeLocale(%q): expected %q, got %q", tt.input, tt.expected, result)
			}
		})
	}
}

func TestIsValidLocale(t *testing.T) {
	tests := []struct {
		input    string
		expected bool
	}{
		// Valid locales
		{"en", true},
		{"en-US", true},
		{"fr-FR", true},
		{"zh-CN", true},
		{"ja-JP", true},
		{"pt-BR", true},
		{"es-MX", true},

		// Invalid locales
		{"", false},
		{"invalid", false},
		{"123", false}, // Numbers

		// Note: golang.org/x/text/language is quite lenient and accepts:
		// - "en-ZZ" (unknown country codes are allowed)
		// - "en_US" (underscores are normalized to hyphens)
		// These are actually valid according to the language package
	}

	for _, tt := range tests {
		t.Run(tt.input, func(t *testing.T) {
			result := isValidLocale(tt.input)
			if result != tt.expected {
				t.Errorf("isValidLocale(%q): expected %v, got %v", tt.input, tt.expected, result)
			}
		})
	}
}

func TestGetPreferredLocale(t *testing.T) {
	// Save original environment
	originalLC_ALL := os.Getenv("LC_ALL")
	originalLC_MESSAGES := os.Getenv("LC_MESSAGES")
	originalLANG := os.Getenv("LANG")

	// Clean up after test
	defer func() {
		os.Setenv("LC_ALL", originalLC_ALL)
		os.Setenv("LC_MESSAGES", originalLC_MESSAGES)
		os.Setenv("LANG", originalLANG)
	}()

	tests := []struct {
		name         string
		explicitLang string
		LC_ALL       string
		LC_MESSAGES  string
		LANG         string
		expected     string
		description  string
	}{
		{
			name:         "Explicit language takes precedence",
			explicitLang: "es-ES",
			LC_ALL:       "fr_FR.UTF-8",
			LC_MESSAGES:  "de_DE.UTF-8",
			LANG:         "ja_JP.UTF-8",
			expected:     "es-ES",
			description:  "Explicit language should override environment variables",
		},
		{
			name:         "Use environment when no explicit language",
			explicitLang: "",
			LC_ALL:       "it_IT.UTF-8",
			LC_MESSAGES:  "ru_RU.UTF-8",
			LANG:         "pl_PL.UTF-8",
			expected:     "it-IT",
			description:  "Should detect from environment when no explicit language",
		},
		{
			name:         "Empty when no explicit and no environment",
			explicitLang: "",
			LC_ALL:       "",
			LC_MESSAGES:  "",
			LANG:         "",
			expected:     "",
			description:  "Should return empty when nothing is set",
		},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			// Set test environment
			os.Setenv("LC_ALL", tt.LC_ALL)
			os.Setenv("LC_MESSAGES", tt.LC_MESSAGES)
			os.Setenv("LANG", tt.LANG)

			result := getPreferredLocale(tt.explicitLang)
			if result != tt.expected {
				t.Errorf("%s: expected %q, got %q", tt.description, tt.expected, result)
			}
		})
	}
}

func TestIntegrationWithInit(t *testing.T) {
	// Save original environment
	originalLC_ALL := os.Getenv("LC_ALL")
	originalLANG := os.Getenv("LANG")

	// Clean up after test
	defer func() {
		os.Setenv("LC_ALL", originalLC_ALL)
		os.Setenv("LANG", originalLANG)
		translator = nil // Reset global state
	}()

	// Test that Init uses environment variables when no explicit locale provided
	os.Setenv("LC_ALL", "es_ES.UTF-8")
	os.Setenv("LANG", "fr_FR.UTF-8")

	localizer, err := Init("")
	if err != nil {
		t.Fatalf("Init failed: %v", err)
	}

	if localizer == nil {
		t.Error("Expected non-nil localizer")
	}

	// Reset translator to test T() function auto-initialization
	translator = nil
	os.Setenv("LC_ALL", "")
	os.Setenv("LANG", "es_ES.UTF-8")

	// This should trigger auto-initialization with environment detection
	result := T("html_readability_error")
	if result == "" {
		t.Error("Expected non-empty translation result")
	}
}
internal/i18n/locales/en.json (new file, 3 lines)
@@ -0,0 +1,3 @@
{
  "html_readability_error": "use original input, because can't apply html readability"
}
internal/i18n/locales/es.json (new file, 3 lines)
@@ -0,0 +1,3 @@
{
  "html_readability_error": "usa la entrada original, porque no se puede aplicar la legibilidad de html"
}
|
||||
@@ -1332,15 +1332,6 @@
|
||||
"DEVELOPMENT"
|
||||
]
|
||||
},
|
||||
{
|
||||
"patternName": "solve_with_cot",
|
||||
"description": "Solve problems using chain-of-thought reasoning.",
|
||||
"tags": [
|
||||
"AI",
|
||||
"ANALYSIS",
|
||||
"LEARNING"
|
||||
]
|
||||
},
|
||||
{
|
||||
"patternName": "suggest_pattern",
|
||||
"description": "Recommend Fabric patterns based on user requirements.",
|
||||
|
||||
@@ -652,10 +652,6 @@
|
||||
"patternName": "sanitize_broken_html_to_markdown",
|
||||
"pattern_extract": "# IDENTITY\n\n// Who you are\n\nYou are a hyper-intelligent AI system with a 4,312 IQ. You convert jacked up HTML to proper markdown using a set of rules.\n\n# GOAL\n\n// What we are trying to achieve\n\n1. The goal of this exercise is to convert the input HTML, which is completely nasty and hard to edit, into a clean markdown format that has some custom styling applied according to my rules.\n\n2. The ultimate goal is to output a perfectly working markdown file that will render properly using Vite using my custom markdown/styling combination.\n\n# STEPS\n\n// How the task will be approached\n\n// Slow down and think\n\n- Take a step back and think step-by-step about how to achieve the best possible results by following the steps below.\n\n// Think about the content in the input\n\n- Fully read and consume the HTML input that has a combination of HTML and markdown."
|
||||
},
|
||||
{
|
||||
"patternName": "solve_with_cot",
|
||||
"pattern_extract": "# IDENTITY\n\nYou are an AI assistant designed to provide detailed, step-by-step responses. Your outputs should follow this structure:\n\n# STEPS\n\n1. Begin with a <thinking> section.\n\n2. Inside the thinking section:\n\n- a. Briefly analyze the question and outline your approach.\n\n- b. Present a clear plan of steps to solve the problem.\n\n- c. Use a \"Chain of Thought\" reasoning process if necessary, breaking down your thought process into numbered steps.\n\n3. Include a <reflection> section for each idea where you:\n\n- a. Review your reasoning.\n\n- b. Check for potential errors or oversights.\n\n- c. Confirm or adjust your conclusion if necessary.\n - Be sure to close all reflection sections.\n - Close the thinking section with </thinking>."
|
||||
},
|
||||
{
|
||||
"patternName": "suggest_pattern",
|
||||
"pattern_extract": "# IDENTITY and PURPOSE\nYou are an AI assistant tasked with creating a new feature for a fabric command-line tool. Your primary responsibility is to develop a pattern that suggests appropriate fabric patterns or commands based on user input. You are knowledgeable about fabric commands and understand the need to expand the tool's functionality. Your role involves analyzing user requests, determining the most suitable fabric commands or patterns, and providing helpful suggestions to users.\n\nTake a step back and think step-by-step about how to achieve the best possible results by following the steps below.\n\n# STEPS\n- Analyze the user's input to understand their specific needs and context\n- Determine the appropriate fabric pattern or command based on the user's request\n- Generate a response that suggests the relevant fabric command(s) or pattern(s)\n- Provide explanations or multiple options when applicable\n- If no specific command is found, suggest using `create_pattern`\n\n# OUTPUT INSTRUCTIONS\n- Only output Markdown\n- Provide suggestions for fabric commands or patterns based on the user's input\n- Include explanations or multiple options when appropriate\n- If suggesting `create_pattern`, include instructions for saving and using the new pattern\n- Format the output to be clear and easy to understand for users new to fabric\n- Ensure the response aligns with the goal of making fabric more accessible and user-friendly\n- Ensure you follow ALL these instructions when creating your output\n\n# INPUT\nINPUT:"
|
||||
|
||||
@@ -1332,15 +1332,6 @@
|
||||
"DEVELOPMENT"
|
||||
]
|
||||
},
|
||||
{
|
||||
"patternName": "solve_with_cot",
|
||||
"description": "Solve problems using chain-of-thought reasoning.",
|
||||
"tags": [
|
||||
"AI",
|
||||
"ANALYSIS",
|
||||
"LEARNING"
|
||||
]
|
||||
},
|
||||
{
|
||||
"patternName": "suggest_pattern",
|
||||
"description": "Recommend Fabric patterns based on user requirements.",
|
||||
|
||||