Compare commits

...

89 Commits

Author SHA1 Message Date
github-actions[bot]
75e11724b4 chore(release): Update version to v1.4.328 2025-11-18 15:17:49 +00:00
Kayvan Sylvan
2dd79a66d7 Merge pull request #1836 from ksylvan/kayvan/update-raw-flag-help-message
docs: clarify `--raw` flag behavior for OpenAI and Anthropic providers
2025-11-18 07:15:01 -08:00
Kayvan Sylvan
b7fa02d91e docs: clarify --raw flag behavior for OpenAI and Anthropic providers
- Update `--raw` flag description across all documentation files
- Clarify the flag only affects OpenAI-compatible providers' behavior
- Document Anthropic models use smart parameter selection
- Remove outdated reference to system/user role changes
- Update help text in CLI flags definition
- Translate updated description to all supported locales
- Update shell completion descriptions for zsh and fish
- chore: incoming 1836 changelog entry
2025-11-18 04:27:38 -08:00
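
For context on what the clarified `--raw` semantics mean in practice, here is a minimal Go sketch: when the flag is set, sampling options are simply not attached to the request, so the model's own defaults apply. The types and field names below are illustrative stand-ins, not fabric's actual option structs.

```go
package main

import "fmt"

// ChatOptions stands in for fabric's per-request options (hypothetical).
type ChatOptions struct {
	Raw         bool
	Temperature float64
	TopP        float64
}

// buildRequest attaches sampling parameters only when --raw is unset,
// matching the documented behavior for OpenAI-compatible providers.
// Anthropic providers ignore the flag and select parameters per model.
func buildRequest(opts ChatOptions) map[string]any {
	req := map[string]any{"model": "example-model"}
	if !opts.Raw {
		req["temperature"] = opts.Temperature
		req["top_p"] = opts.TopP
	}
	return req
}

func main() {
	fmt.Println(buildRequest(ChatOptions{Raw: true, Temperature: 0.7, TopP: 0.9}))
	fmt.Println(buildRequest(ChatOptions{Raw: false, Temperature: 0.7, TopP: 0.9}))
}
```
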
github-actions[bot]
63804d3d52 chore(release): Update version to v1.4.327 2025-11-16 21:12:09 +00:00
Kayvan Sylvan
56f105971f Merge pull request #1832 from ksylvan/kayvan/fix-gemini-panic
Improve channel management in Gemini provider
2025-11-16 13:08:59 -08:00
Kayvan Sylvan
ca96c9c629 fix: improve channel management in Gemini streaming method
- Add deferred channel close at function start
- Return error immediately instead of breaking loop
- Remove redundant channel close statements from loop
- Ensure channel closes on all exit paths consistently
- chore: incoming 1832 changelog entry
2025-11-16 13:06:09 -08:00
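
For readers unfamiliar with this bug class: closing a channel from several points in a streaming loop risks a double-close panic, which is what the deferred close avoids. A minimal Go sketch of the pattern described above, with illustrative names rather than the actual Gemini provider code:

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"io"
)

// sendStream shows the shape of the fix: a single deferred close covers
// every exit path, and errors are returned immediately rather than
// breaking out of the loop and closing the channel by hand.
// recv stands in for the provider's stream iterator (hypothetical).
func sendStream(ctx context.Context, recv func() (string, error), out chan<- string) error {
	defer close(out) // closes exactly once, on every return path

	for {
		chunk, err := recv()
		if errors.Is(err, io.EOF) {
			return nil // normal end of stream
		}
		if err != nil {
			return fmt.Errorf("stream error: %w", err) // deferred close still runs
		}
		select {
		case out <- chunk:
		case <-ctx.Done():
			return ctx.Err()
		}
	}
}

func main() {
	chunks := []string{"hello ", "world"}
	recv := func() (string, error) {
		if len(chunks) == 0 {
			return "", io.EOF
		}
		c := chunks[0]
		chunks = chunks[1:]
		return c, nil
	}

	out := make(chan string)
	go sendStream(context.Background(), recv, out) // error ignored for brevity
	for c := range out {
		fmt.Print(c)
	}
	fmt.Println()
}
```
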
Kayvan Sylvan
efb9261b89 Merge pull request #1831 from ksylvan/kayvan/remove-youtube-rss-pattern
Remove `get_youtube_rss` pattern
2025-11-16 12:41:12 -08:00
Kayvan Sylvan
118abdc368 chore: remove get_youtube_rss pattern from multiple files
- Remove `get_youtube_rss` from `pattern_explanations.md`
- Delete `get_youtube_rss` entry in `pattern_descriptions.json`
- Delete `get_youtube_rss` entry in `pattern_extracts.json`
- Remove `get_youtube_rss` from `suggest_pattern/system.md`
- Remove `get_youtube_rss` from `suggest_pattern/user.md`
- chore: incoming 1831 changelog entry
2025-11-16 12:28:09 -08:00
github-actions[bot]
278d488dbf chore(release): Update version to v1.4.326 2025-11-16 19:36:17 +00:00
Kayvan Sylvan
d590c0dd15 Merge pull request #1830 from ksylvan/kayvan/newline-in-output-fix
Ensure final newline in model generated outputs
2025-11-16 11:33:47 -08:00
Kayvan Sylvan
c936f8e77b feat: ensure newline in CreateOutputFile and improve tests
- Add newline to `CreateOutputFile` if missing
- Use `t.Cleanup` for file removal in tests
- Add test for message with trailing newline
- Introduce `printedStream` flag in `Chatter.Send`
- Print newline if stream printed without trailing newline
2025-11-16 11:15:47 -08:00
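
The core of the newline guarantee is small; a sketch of the idea (not the actual `CreateOutputFile` implementation):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// createOutputFile mirrors the described behavior: append a trailing
// newline when the model output lacks one, then write the file.
func createOutputFile(message, fileName string) error {
	if !strings.HasSuffix(message, "\n") {
		message += "\n"
	}
	return os.WriteFile(fileName, []byte(message), 0o644)
}

func main() {
	if err := createOutputFile("no trailing newline", "out.txt"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	data, _ := os.ReadFile("out.txt")
	fmt.Printf("%q\n", string(data)) // "no trailing newline\n"
}
```
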
Kayvan Sylvan
7dacc07f03 chore: update README with recent features and extensions
### CHANGES

- Add v1.4.322 release with concept maps
- Introduce WELLNESS category with psychological analysis
- Upgrade to Claude Sonnet 4.5
- Add Portuguese language variants with BCP 47 support
- Migrate to `openai-go/azure` SDK for Azure
- Add Extensions section to README navigation
2025-11-15 09:34:27 -08:00
github-actions[bot]
4e6a2736ad chore(release): Update version to v1.4.325 2025-11-15 05:25:51 +00:00
Kayvan Sylvan
14c95d7bc1 Merge pull request #1828 from ksylvan/kayvan/fix-empty-input-bug
Fix empty string detection in chatter and AI clients
2025-11-14 21:22:53 -08:00
Changelog Bot
2e7b664e1e chore: incoming 1828 changelog entry 2025-11-14 21:20:52 -08:00
Kayvan Sylvan
729d092754 chore: improve message handling by trimming whitespace in content checks
### CHANGES

- Remove default space in `BuildSession` message content
- Trim whitespace in `anthropic` message content check
- Trim whitespace in `gemini` message content check
2025-11-14 21:13:08 -08:00
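
The fix boils down to treating whitespace-only content as empty so a bare `" "` no longer slips past the check and reaches the provider. A sketch of the check, not the actual anthropic/gemini client code:

```go
package main

import (
	"fmt"
	"strings"
)

// isEmpty reflects the trimmed check: content consisting only of
// whitespace counts as empty.
func isEmpty(content string) bool {
	return strings.TrimSpace(content) == ""
}

func main() {
	for _, s := range []string{"", " ", "\n\t", "hello"} {
		fmt.Printf("%q -> empty=%v\n", s, isEmpty(s))
	}
}
```
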
github-actions[bot]
5b7017d67b chore(release): Update version to v1.4.324 2025-11-14 07:49:26 +00:00
Kayvan Sylvan
6f5b89a0df Merge pull request #1827 from ksylvan/kayvan/fix-youtube-key-not-optional
Make YouTube API key optional in setup
2025-11-13 23:46:45 -08:00
Kayvan Sylvan
d02a55ee01 feat: make YouTube API key optional in setup
- Change API key setup question to optional
- Add test for optional API key behavior
- Ensure plugin configuration without API key
- chore: incoming 1827 changelog entry
2025-11-13 23:44:41 -08:00
github-actions[bot]
c498085feb chore(release): Update version to v1.4.323 2025-11-12 01:24:07 +00:00
Kayvan Sylvan
4996832e64 Merge pull request #1802 from nickarino/input-extension-bug-fix
fix: improve template extension handling for {{input}} and add examples
2025-11-11 17:21:13 -08:00
Kayvan Sylvan
79d04b2ada add byid to spell list 2025-11-11 17:18:21 -08:00
Kayvan Sylvan
c7206c0a01 docs: minor formatting fixes 2025-11-11 17:16:55 -08:00
Kayvan Sylvan
4aceb64284 chore: incoming 1823 changelog entry 2025-11-11 11:46:42 -08:00
Kayvan Sylvan
4864a63d35 Merge pull request #1823 from ksylvan/kayvan/add-missing-pattern-explanations
Add missing patterns and renumber pattern explanations list
2025-11-10 14:10:07 -08:00
Kayvan Sylvan
8e18753c0f docs: add new patterns and renumber pattern explanations list
# CHANGES

- Add `apply_ul_tags` pattern for content categorization
- Add `extract_mcp_servers` pattern for MCP server identification
- Add `generate_code_rules` pattern for AI coding guardrails
- Add `t_check_dunning_kruger` pattern for competence assessment
- Renumber all patterns from 37-226 to 37-230
- Insert new patterns at positions 37, 129, 153, 203
2025-11-10 14:01:29 -08:00
github-actions[bot]
43365aaea0 chore(release): Update version to v1.4.322 2025-11-05 01:56:14 +00:00
Kayvan Sylvan
7619189921 Merge pull request #1816 from ksylvan/kayvan/remove-deprecated-anthropic-models
Update `anthropic-sdk-go` to v1.16.0 and update models
2025-11-04 17:54:03 -08:00
Kayvan Sylvan
73dec534c4 feat: update anthropic-sdk-go to v1.16.0 and update models
- Upgrade `anthropic-sdk-go` to version 1.16.0
- Remove outdated model `ModelClaude3_5SonnetLatest`
- Add new model `ModelClaudeSonnet4_5_20250929`
- Include `ModelClaudeSonnet4_5_20250929` in `modelBetas` map
2025-11-04 17:47:15 -08:00
Kayvan Sylvan
4d40ef5f83 Merge pull request #1814 from ksylvan/kayvan/create-concept-map
Add Concept Map in html
2025-11-03 13:11:29 -08:00
Kayvan Sylvan
a149bd19d5 feat: add create_conceptmap for interactive HTML concept maps
### CHANGES

- Add `create_conceptmap` for HTML concept maps using Vis.js
- Introduce `fix_typos` for text proofreading and corrections
- Implement `model_as_sherlock_freud` for psychological modeling
- Add `predict_person_actions` for behavior prediction
- Include `recommend_yoga_practice` for personalized yoga guidance
- Credit pattern contribution to @FELIPEGUEDESBR
2025-11-03 13:10:05 -08:00
Kayvan Sylvan
d0d3268eaa Merge branch 'danielmiessler:main' into main 2025-11-02 21:26:51 -08:00
github-actions[bot]
da3e7c2510 chore(release): Update version to v1.4.321 2025-11-03 05:26:46 +00:00
Kayvan Sylvan
f9d23a2ec6 Merge branch 'danielmiessler:main' into main 2025-11-02 21:25:17 -08:00
Kayvan Sylvan
31e99c5958 Merge pull request #1803 from danielmiessler/dependabot/npm_and_yarn/web/npm_and_yarn-d50880170f
chore(deps-dev): bump vite from 5.4.20 to 5.4.21 in /web in the npm_and_yarn group across 1 directory
2025-11-02 21:24:34 -08:00
Changelog Bot
10179b3e86 chore: incoming 1803 changelog entry 2025-11-02 21:19:18 -08:00
Kayvan Sylvan
eefb3c7886 chore: added fix_typos, model_as_sherlock_freud, and predict_person_actions methods
### CHANGES

- Add `fix_typos` for proofreading and correcting errors
- Introduce `model_as_sherlock_freud` for psychological modeling
- Implement `predict_person_actions` for behavioral response predictions
- Add `recommend_yoga_practice` for personalized yoga guidance
- Include `fix_typos` method for text correction
- Add `model_as_sherlock_freud` for behavior analysis
- Introduce `predict_person_actions` for action prediction
2025-11-02 21:15:40 -08:00
Kayvan Sylvan
4b9887da2e Merge pull request #1805 from OmriH-Elister/feature
Added a better changelog summary, also added the new patterns (and the new "WELLNESS" category) to the "suggest_pattern" pattern.

Merging.
2025-11-02 21:08:59 -08:00
Changelog Bot
f8ccbaa5e4 chore: incoming 1805 changelog entry 2025-11-02 21:03:56 -08:00
Kayvan Sylvan
068a673bb3 feat: add wellness patterns and new analysis tools
# CHANGES

- Add new WELLNESS category with four patterns
- Add `model_as_sherlock_freud` for psychological detective analysis
- Add `predict_person_actions` for behavioral response predictions
- Add `recommend_yoga_practice` for personalized wellness guidance
- Add `fix_typos` pattern for proofreading corrections
- Update ANALYSIS category to include new patterns
- Update SELF category with wellness-related patterns
- Tag existing patterns with WELLNESS classification
2025-11-02 21:03:22 -08:00
Kayvan Sylvan
10b556f2f6 Update changelog for PR 1805 2025-11-02 20:46:04 -08:00
Changelog Bot
ff9699549d chore: incoming 1805 changelog entry 2025-11-02 20:41:06 -08:00
Kayvan Sylvan
72691a4ce0 remove 1805.txt 2025-11-02 20:41:01 -08:00
Kayvan Sylvan
742346045b Merge pull request #1808 from sluosapher/main
Updated create_newsletter_entry pattern to generate more factual titles

I added the missing changelog entry. Merging this.
2025-11-02 20:33:06 -08:00
Changelog Bot
eff45c8e9b chore: incoming 1808 changelog entry 2025-11-02 20:28:29 -08:00
Nick Skriloff
b8027582f4 docs: clarify extensions only work within patterns, not stdin
- Add prominent warning at top of Extensions guide with visual indicators
- Update main README with brief Extensions section and link to full guide
- Remove misleading examples showing direct piping to fabric
- Add clear examples: what DOES NOT WORK vs what WORKS
- Consolidate all extension documentation in Examples/README.md
- Explain technical reason: extensions only processed via ApplyTemplate()
- Prevents user confusion about extension syntax processing
2025-10-31 19:53:47 -04:00
Nick Skriloff
4b82534708 refactor: address PR review feedback
- Extract InputSentinel constant to shared constants.go file
- Remove duplicate inputSentinel definitions from template.go and patterns.go
- Create withTestExtension helper function to reduce test code duplication
- Refactor 3 test functions to use the helper (reduces ~40 lines per test)
- Fix shell script to use $@ instead of $* for proper argument quoting

Addresses review comments from @ksylvan and @Copilot AI
2025-10-31 13:27:38 -04:00
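
A rough sketch of why stdin never triggers extensions: only the pattern body is template-processed, and the user input is swapped in through a sentinel afterward, so `{{ext:...}}` markers arriving on stdin stay literal. The names and sentinel value below are illustrative, not fabric's actual implementation:

```go
package main

import (
	"fmt"
	"strings"
)

// InputSentinel mirrors the shared constant extracted in this refactor
// (the value here is made up for illustration).
const InputSentinel = "\x00FABRIC_INPUT\x00"

// applyExtensions stands in for the template pass that expands
// {{ext:...}} markers; here it just rewrites one marker to show it ran.
func applyExtensions(s string) string {
	return strings.ReplaceAll(s, "{{ext:upper}}", "EXPANDED")
}

// render processes the pattern template first, then substitutes the raw
// input last, so extension markers inside the input are never expanded.
func render(pattern, input string) string {
	withSentinel := strings.ReplaceAll(pattern, "{{input}}", InputSentinel)
	processed := applyExtensions(withSentinel)
	return strings.ReplaceAll(processed, InputSentinel, input)
}

func main() {
	pattern := "Summary of: {{input}} using {{ext:upper}}"
	input := "user text containing {{ext:upper}}" // stays literal
	fmt.Println(render(pattern, input))
}
```
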
Nick Skriloff
eb1cfe8340 Complete merge from upstream/main 2025-10-30 21:13:46 -04:00
github-actions[bot]
8eaaf7b837 chore(release): Update version to v1.4.320 2025-10-28 14:34:19 +00:00
Kayvan Sylvan
ba67045c75 Merge pull request #1810 from tonymet/subtitle-error-handling
improve subtitle lang, retry, debugging & error handling
2025-10-28 07:31:51 -07:00
Changelog Bot
4f20f7a16b chore: incoming 1810 changelog entry 2025-10-28 07:29:34 -07:00
Changelog Bot
9a426e9d5a chore: incoming 1805 changelog entry 2025-10-28 14:23:49 +00:00
OmriH-Elister
0d880c5c97 feat: add a few new patterns 2025-10-28 13:55:44 +00:00
Anthony Metzidis
3211f6f35c improve subtitle lang, retry, debugging & error handling 2025-10-27 21:53:53 -07:00
Song Luo
0dba40f8a0 Updated the title generation style; added an output example. 2025-10-26 10:18:03 -04:00
dependabot[bot]
c26e0bcdc5 chore(deps-dev): bump vite
Bumps the npm_and_yarn group with 1 update in the /web directory: [vite](https://github.com/vitejs/vite/tree/HEAD/packages/vite).


Updates `vite` from 5.4.20 to 5.4.21
- [Release notes](https://github.com/vitejs/vite/releases)
- [Changelog](https://github.com/vitejs/vite/blob/v5.4.21/packages/vite/CHANGELOG.md)
- [Commits](https://github.com/vitejs/vite/commits/v5.4.21/packages/vite)

---
updated-dependencies:
- dependency-name: vite
  dependency-version: 5.4.21
  dependency-type: direct:development
  dependency-group: npm_and_yarn
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-10-21 08:08:37 +00:00
Nick Skriloff
f8f9f6ba65 Update internal/plugins/template/Examples/openai.yaml
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-10-20 20:42:52 -04:00
Changelog Bot
bc273db19d chore: incoming 1802 changelog entry 2025-10-20 19:57:13 -04:00
Nick Skriloff
29c24c8387 fix: improve template extension handling for {{input}} and add examples 2025-10-20 19:49:33 -04:00
Kayvan Sylvan
7d80fd6d1d Merge pull request #1780 from marcas756/feature/extract_characters
feat: add extract_characters pattern
2025-10-14 08:27:23 -07:00
Kayvan Sylvan
faa7fa3387 chore: added extract_characters method for detailed character analysis
### CHANGES

- Add `extract_characters` to identify and describe characters
- Update business category to include `extract_characters`
- Include `extract_characters` in extract category
- Add `extract_characters` description in pattern descriptions JSON
- Update user documentation with `extract_characters` details
2025-10-14 08:26:08 -07:00
Changelog Bot
cf04c60bf7 chore: incoming 1780 changelog entry 2025-10-14 08:04:33 -07:00
Kayvan Sylvan
67e2a48c58 Merge pull request #1794 from starfish456/enhance-web-app-docs
Enhance web app docs
2025-10-14 08:01:19 -07:00
Changelog Bot
68d97ba454 chore: incoming 1794 changelog entry 2025-10-14 07:54:35 -07:00
Kayvan Sylvan
2bd0d6292f docs: update table of contents with proper nesting and fix minor formatting issues
## CHANGES

- Add top-level project name to navigation hierarchy
- Nest all sections under main project heading
- Fix npm install script path extension
- Update localhost URL to use HTML format
- Add "Mdsvex" to VSCode spelling dictionary
- Include "details" and "summary" to HTML tags
- Remove trailing newline from web README
2025-10-14 07:16:38 -07:00
KFS
cab77728da docs: remove redundant content and simplify the web app readme 2025-10-13 11:47:10 +08:00
KFS
b14daf43cc docs: remove duplicate content from the main readme and link to the web app readme 2025-10-13 11:44:04 +08:00
Daniel Miessler
a885f4b240 docs: clean up README - remove duplicate image and add collapsible updates section
- Remove duplicate fabric-summarize.png screenshot
- Wrap Updates section in HTML details/summary accordion to save space

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-10-05 17:03:36 -07:00
Daniel Miessler
817c70b58f Updated CSE pattern. 2025-10-05 16:48:10 -07:00
github-actions[bot]
e3cddb9419 chore(release): Update version to v1.4.319 2025-09-30 13:57:01 +00:00
Kayvan Sylvan
cef8c567ca Merge pull request #1783 from ksylvan/kayvan/feat/0930-claude-4-5
Update anthropic-sdk-go and add claude-sonnet-4-5
2025-09-30 06:54:26 -07:00
Kayvan Sylvan
94e8d69dac feat: update anthropic-sdk-go to v1.13.0 and add new model
- Upgrade `anthropic-sdk-go` to version 1.13.0
- Add `ModelClaudeSonnet4_5` to supported models list
2025-09-30 06:49:39 -07:00
Marco Bacchi
0f67998f30 feat: add extract_characters system definition
CHANGES
- Define character extraction goals and steps
- Specify canonical naming and deduplication rules
- Outline interaction mapping and narrative importance
- Provide output schema with formatting guidelines
- Include positive/negative examples for clarity
- Enforce no speculative motivations or non-actors
- Set fallback for no characters found
2025-09-26 13:56:46 +02:00
github-actions[bot]
6eee447026 chore(release): Update version to v1.4.318 2025-09-24 14:57:29 +00:00
Kayvan Sylvan
17d5544df9 Merge pull request #1779 from ksylvan/kayvan/i18n/pt-br-improved-by-JuracyAmerico
Improve pt-BR Translation - Thanks to @JuracyAmerico
2025-09-24 07:54:51 -07:00
Kayvan Sylvan
4715440652 fix: improve PT-BR translation naturalness and fluency
- Thanks to @JuracyAmerico for Brazilian Portuguese native-speaker expertise!
- Replace "dos" with "entre" for better preposition usage
- Add definite articles where natural in Portuguese
- Clarify "configurações padrão" instead of just "padrões"
- Keep technical terms visible like "padrões/patterns"
- Remove unnecessary quotes around "URL"
- Make phrasing more natural "Exportar para arquivo"
2025-09-24 07:52:31 -07:00
github-actions[bot]
d7da611a43 chore(release): Update version to v1.4.317 2025-09-21 23:10:11 +00:00
Kayvan Sylvan
fa4532e9de Merge pull request #1778 from ksylvan/kayvan/0921-i18n-fixes
Add Portuguese Language Variants Support (pt-BR and pt-PT)
2025-09-21 16:07:45 -07:00
Kayvan Sylvan
b34112d7ed feat(i18n): add i18n support for language variants (pt-BR/pt-PT)
• Add Brazilian Portuguese (pt-BR) translation file
• Add European Portuguese (pt-PT) translation file
• Implement BCP 47 locale normalization system
• Create fallback chain for language variants
• Add default variant mapping for Portuguese
• Update help text to show variant examples
• Add comprehensive test suite for variants
• Create documentation for i18n variant architecture
2025-09-21 16:04:59 -07:00
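
To make the variant handling concrete, here is a minimal sketch of BCP 47 normalization with a fallback chain: the tag is canonicalized ("pt-br" becomes "pt-BR") and resolution falls back from the variant to the base language. This is a hypothetical helper, not fabric's actual i18n code, and the bare-"pt" default mapping is an assumption for illustration:

```go
package main

import (
	"fmt"
	"strings"
)

// normalizeLocale canonicalizes a locale tag and returns the lookup
// chain from most to least specific.
func normalizeLocale(tag string) []string {
	parts := strings.Split(strings.ReplaceAll(tag, "_", "-"), "-")
	lang := strings.ToLower(parts[0])
	if len(parts) == 1 {
		// Default variant mapping, e.g. bare "pt" prefers pt-BR here.
		if lang == "pt" {
			return []string{"pt-BR", "pt"}
		}
		return []string{lang}
	}
	region := strings.ToUpper(parts[1])
	return []string{lang + "-" + region, lang}
}

func main() {
	for _, tag := range []string{"pt-br", "pt_PT", "pt", "en"} {
		fmt.Printf("%s -> %v\n", tag, normalizeLocale(tag))
	}
}
```
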
github-actions[bot]
6d7585c522 chore(release): Update version to v1.4.316 2025-09-20 15:48:56 +00:00
Kayvan Sylvan
2adc7b2102 Merge pull request #1777 from ksylvan/kayvan/ci/0920-remove-garble
chore: remove garble installation from release workflow
2025-09-20 08:46:31 -07:00
Kayvan Sylvan
a2f2d0e2d9 chore: remove garble installation from release workflow
- Remove garble installation step from release workflow
- Add comment for GoReleaser config file reference link
- The original idea of adding garble was to make it pass virus
  scanning during version upgrades for Winget, and this
  was a failed experiment.
2025-09-20 08:43:44 -07:00
github-actions[bot]
3e2df4b717 chore(release): Update version to v1.4.315 2025-09-20 15:24:07 +00:00
Kayvan Sylvan
1bf7006224 Merge pull request #1776 from ksylvan/kayvan/ci/0920-revert-gable-addition
Remove garble from the build process for Windows
2025-09-20 08:21:33 -07:00
Kayvan Sylvan
13178456e5 chore: update CI workflow and simplify goreleaser build configuration
## CHANGES

- Add changelog database to git tracking
- Remove unnecessary goreleaser comments
- Add version metadata to default build
- Rename windows build from garbled to standard
- Remove garble obfuscation from windows build
- Standardize ldflags across all build targets
- Inject version info during compilation
2025-09-20 08:16:32 -07:00
github-actions[bot]
079b2b5b28 chore(release): Update version to v1.4.314 2025-09-18 22:57:31 +00:00
Kayvan Sylvan
e46b253cfe Merge pull request #1774 from ksylvan/kayvan/0917-azure-fix
Migrate Azure client to openai-go/azure and default API version
2025-09-18 15:55:07 -07:00
Kayvan Sylvan
3a42fa7ece feat: migrate Azure client to openai-go/azure and default API version
CHANGES
- switch Azure OpenAI config to openai-go azure helpers
- require API key and base URL during configuration
- default API version to 2024-05-01-preview when unspecified
- trim and parse deployments input into clean slice
- update dependencies to support azure client and authentication flow
- add tests for configuration and default API version behavior
- remove latest-tag boundary logic from changelog walker (revert to the v1.4.213 version)
- simplify version assignment by matching commit messages directly
2025-09-18 15:50:36 -07:00
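
A sketch of what the migration looks like with the `openai-go/azure` helpers, including the default API version and the deployments parsing described above. Exact option names and return types vary by SDK version, so treat this as an outline rather than fabric's actual client code:

```go
package main

import (
	"fmt"
	"os"
	"strings"

	"github.com/openai/openai-go"
	"github.com/openai/openai-go/azure"
)

const defaultAPIVersion = "2024-05-01-preview" // used when unspecified

// newAzureClient configures an OpenAI client against an Azure endpoint
// using the SDK's azure helpers.
func newAzureClient(endpoint, apiKey, apiVersion string) openai.Client {
	if apiVersion == "" {
		apiVersion = defaultAPIVersion
	}
	return openai.NewClient(
		azure.WithEndpoint(endpoint, apiVersion),
		azure.WithAPIKey(apiKey),
	)
}

// parseDeployments trims and splits comma-separated deployments input
// into a clean slice, dropping empty entries.
func parseDeployments(input string) []string {
	var out []string
	for _, d := range strings.Split(input, ",") {
		if d = strings.TrimSpace(d); d != "" {
			out = append(out, d)
		}
	}
	return out
}

func main() {
	_ = newAzureClient(os.Getenv("AZURE_OPENAI_ENDPOINT"), os.Getenv("AZURE_OPENAI_API_KEY"), "")
	fmt.Println(parseDeployments(" gpt-4o , ,gpt-4o-mini "))
}
```
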
Kayvan Sylvan
a302d0b46b fix: One-time fix for CHANGELOG and changelog cache db 2025-09-16 18:00:57 -07:00
68 changed files with 2738 additions and 6251 deletions

View File

@@ -44,8 +44,6 @@ jobs:
uses: actions/setup-go@v5
with:
go-version-file: ./go.mod
- name: Install garble
run: go install mvdan.cc/garble@latest
- name: Run GoReleaser
uses: goreleaser/goreleaser-action@v6
with:

View File

@@ -95,6 +95,7 @@ jobs:
run: |
go run ./cmd/generate_changelog --process-prs ${{ steps.increment_version.outputs.new_tag }}
go run ./cmd/generate_changelog --sync-db
git add ./cmd/generate_changelog/changelog.db
- name: Commit changes
run: |
# These files are modified by the version bump process

View File

@@ -1,14 +1,12 @@
# Read the documentation at https://goreleaser.com
# For a full reference of the configuration file.
version: 2
project_name: fabric
before:
hooks:
# You may remove this if you don't use go modules.
- go mod tidy
# you may remove this if you don't need go generate
# - go generate ./...
builds:
- id: default
@@ -19,22 +17,28 @@ builds:
- linux
main: ./cmd/fabric
binary: fabric
- id: windows-garbled
ldflags:
- -s -w
- -X main.version={{.Version}}
- -X main.commit={{.ShortCommit}}
- -X main.date={{.Date}}
- -X main.builtBy=goreleaser
- -X main.tag={{.Tag}}
- id: windows-build
env:
- CGO_ENABLED=0
goos:
- windows
main: ./cmd/fabric
binary: fabric
tool: garble
# From https://github.com/eyevanovich/garble-goreleaser-example/blob/main/.goreleaser.yaml
# command is a single string.
# garble's 'build' needs the -literals and -tiny args before it, so we
# trick goreleaser into using -literals as command, and pass -tiny and
# build as flags.
command: "-literals"
flags: [ "-tiny", "-seed=random", "build" ]
ldflags: [ "-s", "-w" ]
ldflags:
- -s -w
- -X main.version={{.Version}}
- -X main.commit={{.ShortCommit}}
- -X main.date={{.Date}}
- -X main.builtBy=goreleaser
- -X main.tag={{.Tag}}
archives:
- formats: [tar.gz]

View File

@@ -15,9 +15,11 @@
"blindspots",
"Bombal",
"Buildx",
"byid",
"Callirhoe",
"Callirrhoe",
"Cerebras",
"colour",
"compadd",
"compdef",
"compinit",
@@ -112,6 +114,7 @@
"matplotlib",
"mattn",
"mbed",
"Mdsvex",
"metacharacters",
"Miessler",
"modeline",
@@ -129,6 +132,7 @@
"opencode",
"opencontainers",
"openrouter",
"organise",
"Orus",
"osascript",
"otiai",
@@ -219,6 +223,7 @@
"a",
"br",
"code",
"details",
"div",
"em",
"h",
@@ -226,6 +231,7 @@
"img",
"module",
"p",
"summary",
"sup"
]
},

File diff suppressed because it is too large

View File

@@ -44,8 +44,6 @@
[Helper Apps](#helper-apps) •
[Meta](#meta)
![Screenshot of fabric](./docs/images/fabric-summarize.png)
</div>
## What and why
@@ -64,6 +62,9 @@ Fabric organizes prompts by real-world task, allowing people to create, collect,
## Updates
<details>
<summary>Click to view recent updates</summary>
Dear Users,
We've been doing so many exciting things here at Fabric, I wanted to give a quick summary here to give you a sense of our development velocity!
@@ -72,6 +73,9 @@ Below are the **new features and capabilities** we've added (newest first):
### Recent Major Features
- [v1.4.322](https://github.com/danielmiessler/fabric/releases/tag/v1.4.322) (Nov 5, 2025) — **Interactive HTML Concept Maps and Claude Sonnet 4.5**: Adds `create_conceptmap` pattern for visual knowledge representation using Vis.js, introduces WELLNESS category with psychological analysis patterns, and upgrades to Claude Sonnet 4.5
- [v1.4.317](https://github.com/danielmiessler/fabric/releases/tag/v1.4.317) (Sep 21, 2025) — **Portuguese Language Variants**: Adds BCP 47 locale normalization with support for Brazilian Portuguese (pt-BR) and European Portuguese (pt-PT) with intelligent fallback chains
- [v1.4.314](https://github.com/danielmiessler/fabric/releases/tag/v1.4.314) (Sep 17, 2025) — **Azure OpenAI Migration**: Migrates to official `openai-go/azure` SDK with improved authentication and default API version support
- [v1.4.311](https://github.com/danielmiessler/fabric/releases/tag/v1.4.311) (Sep 13, 2025) — **More internationalization support**: Adds de (German), fa (Persian / Farsi), fr (French), it (Italian),
ja (Japanese), pt (Portuguese), zh (Chinese)
- [v1.4.309](https://github.com/danielmiessler/fabric/releases/tag/v1.4.309) (Sep 9, 2025) — **Comprehensive internationalization support**: Includes English and Spanish locale files.
@@ -114,6 +118,8 @@ Below are the **new features and capabilities** we've added (newest first):
These features represent our commitment to making Fabric the most powerful and flexible AI augmentation framework available!
</details>
## Intro videos
Keep in mind that many of these were recorded when Fabric was Python-based, so remember to use the current [install instructions](#installation) below.
@@ -158,6 +164,7 @@ Keep in mind that many of these were recorded when Fabric was Python-based, so r
- [Fish Completion](#fish-completion)
- [Usage](#usage)
- [Debug Levels](#debug-levels)
- [Extensions](#extensions)
- [Our approach to prompting](#our-approach-to-prompting)
- [Examples](#examples)
- [Just use the Patterns](#just-use-the-patterns)
@@ -171,10 +178,7 @@ Keep in mind that many of these were recorded when Fabric was Python-based, so r
- [`to_pdf` Installation](#to_pdf-installation)
- [`code_helper`](#code_helper)
- [pbpaste](#pbpaste)
- [Web Interface](#web-interface)
- [Installing](#installing)
- [Streamlit UI](#streamlit-ui)
- [Clipboard Support](#clipboard-support)
- [Web Interface (Fabric Web App)](#web-interface-fabric-web-app)
- [Meta](#meta)
- [Primary contributors](#primary-contributors)
- [Contributors](#contributors)
@@ -619,9 +623,10 @@ Application Options:
-T, --topp= Set top P (default: 0.9)
-s, --stream Stream
-P, --presencepenalty= Set presence penalty (default: 0.0)
-r, --raw Use the defaults of the model without sending chat options (like
temperature etc.) and use the user role instead of the system role for
patterns.
-r, --raw Use the defaults of the model without sending chat options
(temperature, top_p, etc.). Only affects OpenAI-compatible providers.
Anthropic models always use smart parameter selection to comply with
model-specific requirements.
-F, --frequencypenalty= Set frequency penalty (default: 0.0)
-l, --listpatterns List all patterns
-L, --listmodels List all available models
@@ -705,6 +710,12 @@ Use the `--debug` flag to control runtime logging:
- `2`: detailed debugging
- `3`: trace level
### Extensions
Fabric supports extensions that can be called within patterns. See the [Extension Guide](internal/plugins/template/Examples/README.md) for complete documentation.
**Important:** Extensions only work within pattern files, not via direct stdin. See the guide for details and examples.
## Our approach to prompting
Fabric _Patterns_ are different than most prompts you'll see.
@@ -901,60 +912,9 @@ You can also create an alias by editing `~/.bashrc` or `~/.zshrc` and adding the
alias pbpaste='xclip -selection clipboard -o'
```
## Web Interface
## Web Interface (Fabric Web App)
Fabric now includes a built-in web interface that provides a GUI alternative to the command-line interface and an out-of-the-box website for those who want to get started with web development or blogging.
You can use this app as a GUI interface for Fabric, a ready to go blog-site, or a website template for your own projects.
The `web/src/lib/content` directory includes starter `.obsidian/` and `templates/` directories, allowing you to open up the `web/src/lib/content/` directory as an [Obsidian.md](https://obsidian.md) vault. You can place your posts in the posts directory when you're ready to publish.
### Installing
The GUI can be installed by navigating to the `web` directory and using `npm install`, `pnpm install`, or your favorite package manager. Then simply run the development server to start the app.
_You will need to run fabric in a separate terminal with the `fabric --serve` command._
**From the fabric project `web/` directory:**
```shell
npm run dev
## or ##
pnpm run dev
## or your equivalent
```
### Streamlit UI
To run the Streamlit user interface:
```bash
# Install required dependencies
pip install -r requirements.txt
# Or manually install dependencies
pip install streamlit pandas matplotlib seaborn numpy python-dotenv pyperclip
# Run the Streamlit app
streamlit run streamlit.py
```
The Streamlit UI provides a user-friendly interface for:
- Running and chaining patterns
- Managing pattern outputs
- Creating and editing patterns
- Analyzing pattern results
#### Clipboard Support
The Streamlit UI supports clipboard operations across different platforms:
- **macOS**: Uses `pbcopy` and `pbpaste` (built-in)
- **Windows**: Uses `pyperclip` library (install with `pip install pyperclip`)
- **Linux**: Uses `xclip` (install with `sudo apt-get install xclip` or equivalent for your Linux distribution)
Fabric now includes a built-in web interface that provides a GUI alternative to the command-line interface. Refer to [Web App README](/web/README.md) for installation instructions and an overview of features.
## Meta

View File

@@ -1,3 +1,3 @@
package main
var version = "v1.4.313"
var version = "v1.4.328"

Binary file not shown.

View File

@@ -180,15 +180,6 @@ func (w *Walker) WalkHistory() (map[string]*Version, error) {
return nil, fmt.Errorf("failed to get commit log: %w", err)
}
// Get the latest tag to know the boundary between released and unreleased
latestTag, _ := w.GetLatestTag()
var latestTagHash plumbing.Hash
if latestTag != "" {
if tagRef, err := w.repo.Tag(latestTag); err == nil {
latestTagHash = tagRef.Hash()
}
}
versions := make(map[string]*Version)
currentVersion := "Unreleased"
versions[currentVersion] = &Version{
@@ -197,18 +188,8 @@ func (w *Walker) WalkHistory() (map[string]*Version, error) {
}
prNumbers := make(map[string][]int)
passedLatestTag := false
// If there's no latest tag, treat all commits as belonging to their found versions
if latestTag == "" {
passedLatestTag = true
}
err = commitIter.ForEach(func(c *object.Commit) error {
// Check if we've passed the latest tag boundary
if !passedLatestTag && latestTagHash != (plumbing.Hash{}) && c.Hash == latestTagHash {
passedLatestTag = true
}
// c.Message = Summarize(c.Message)
commit := &Commit{
SHA: c.Hash.String(),
@@ -222,12 +203,7 @@ func (w *Walker) WalkHistory() (map[string]*Version, error) {
if matches := versionPattern.FindStringSubmatch(commit.Message); len(matches) > 1 {
commit.IsVersion = true
commit.Version = matches[1]
// Only change currentVersion if we're past the latest tag
// This keeps newer commits as "Unreleased"
if passedLatestTag {
currentVersion = commit.Version
}
currentVersion = commit.Version
if _, exists := versions[currentVersion]; !exists {
versions[currentVersion] = &Version{

View File

@@ -81,7 +81,7 @@ _fabric() {
'(-T --topp)'{-T,--topp}'[Set top P (default: 0.9)]:topp:' \
'(-s --stream)'{-s,--stream}'[Stream]' \
'(-P --presencepenalty)'{-P,--presencepenalty}'[Set presence penalty (default: 0.0)]:presence penalty:' \
'(-r --raw)'{-r,--raw}'[Use the defaults of the model without sending chat options]' \
'(-r --raw)'{-r,--raw}'[Use the defaults of the model without sending chat options. Only affects OpenAI-compatible providers. Anthropic models always use smart parameter selection to comply with model-specific requirements.]' \
'(-F --frequencypenalty)'{-F,--frequencypenalty}'[Set frequency penalty (default: 0.0)]:frequency penalty:' \
'(-l --listpatterns)'{-l,--listpatterns}'[List all patterns]' \
'(-L --listmodels)'{-L,--listmodels}'[List all available models]' \

View File

@@ -105,7 +105,7 @@ function __fabric_register_completions
# Boolean flags (no arguments)
complete -c $cmd -s S -l setup -d "Run setup for all reconfigurable parts of fabric"
complete -c $cmd -s s -l stream -d "Stream"
complete -c $cmd -s r -l raw -d "Use the defaults of the model without sending chat options"
complete -c $cmd -s r -l raw -d "Use the defaults of the model without sending chat options. Only affects OpenAI-compatible providers. Anthropic models always use smart parameter selection to comply with model-specific requirements."
complete -c $cmd -s l -l listpatterns -d "List all patterns"
complete -c $cmd -s L -l listmodels -d "List all available models"
complete -c $cmd -s x -l listcontexts -d "List all contexts"

View File

@@ -0,0 +1,151 @@
---
### IDENTITY AND PURPOSE
You are an intelligent assistant specialized in **knowledge visualization and educational data structuring**.
You are capable of reading unstructured textual content (.txt or .md files), extracting **main concepts, subthemes, and logical relationships**, and transforming them into a **fully interactive conceptual map** built in **HTML using Vis.js (vis-network)**.
You understand hierarchical, causal, and correlative relations between ideas and express them through **nodes and directed edges**.
You ensure that the resulting HTML file is **autonomous, interactive, and visually consistent** with the Vis.js framework.
You are precise, systematic, and maintain semantic coherence between concepts and their relationships.
You automatically name the output file according to the **detected topic**, ensuring compatibility and clarity (e.g., `map_hist_china.html`).
---
### TASK
You are given a `.txt` or `.md` file containing explanatory, conceptual, or thematic content.
Your task is to:
1. **Extract** the main concepts and secondary ideas.
2. **Identify logical or hierarchical relationships** among these concepts using concise action verbs.
3. **Structure the output** as a self-contained, interactive HTML document that visually represents these relationships using the **Vis.js (vis-network)** library.
The goal is to generate a **fully functional conceptual map** that can be opened directly in a browser without external dependencies.
---
### ACTIONS
1. **Analyze and Extract Concepts**
- Read and process the uploaded `.txt` or `.md` file.
- Identify main themes, subthemes, and key terms.
- Convert each key concept into a node.
2. **Map Relationships**
- Detect logical and hierarchical relations between concepts.
- Use short, descriptive verbs such as:
"causes", "contributes to", "depends on", "evolves into", "results in", "influences", "generates" / "creates", "culminates in.
3. **Generate Node Structure**
```json
{"id": "conceito_id", "label": "Conceito", "title": "<b>Concept:</b> Conceito<br><i>Drag to position, double-click to release.</i>"}
```
4. **Generate Edge Structure**
```json
{"from": "conceito_origem", "to": "conceito_destino", "label": "verbo", "title": "<b>Relationship:</b> verbo"}
```
5. **Apply Visual and Physical Configuration**
```js
shape: "dot",
color: {
border: "#4285F4",
background: "#ffffff",
highlight: { border: "#34A853", background: "#e6f4ea" }
},
font: { size: 14, color: "#3c4043" },
borderWidth: 2,
size: 20
// Edges
color: { color: "#dee2e6", highlight: "#34A853" },
arrows: { to: { enabled: true, scaleFactor: 0.7 } },
font: { align: "middle", size: 12, color: "#5f6368" },
width: 2
// Physics
physics: {
solver: "forceAtlas2Based",
forceAtlas2Based: {
gravitationalConstant: -50,
centralGravity: 0.005,
springLength: 100,
springConstant: 0.18
},
maxVelocity: 146,
minVelocity: 0.1,
stabilization: { iterations: 150 }
}
```
6. **Implement Interactivity**
```js
// Fix node on drag end
network.on("dragEnd", (params) => {
if (params.nodes.length > 0) {
nodes.update({ id: params.nodes[0], fixed: true });
}
});
// Release node on double click
network.on("doubleClick", (params) => {
if (params.nodes.length > 0) {
nodes.update({ id: params.nodes[0], fixed: false });
}
});
```
7. **Assemble the Complete HTML Structure**
```html
<head>
<title>Mapa Conceitual — [TEMA DETECTADO DO ARQUIVO]</title>
<script src="https://unpkg.com/vis-network/standalone/umd/vis-network.min.js"></script>
<link href="https://unpkg.com/vis-network/styles/vis-network.min.css" rel="stylesheet" />
</head>
<body>
<div id="map"></div>
<script type="text/javascript">
// nodes, edges, options, and interactive network initialization
</script>
</body>
```
8. **Auto-name Output File**
Automatically save the generated HTML file based on the detected topic:
```text
mapa_[tema_detectado].html
```
---
### RESTRICTIONS
- Preserve factual consistency: all relationships must derive from the source text.
- Avoid filler or unrelated content.
- Maintain clarity and conciseness in node labels.
- Ensure valid, functional HTML and Vis.js syntax.
- No speculative or subjective connections.
- Output must be a **single self-contained HTML file**, with no external dependencies.
---
### OUTPUT
A single, autonomous HTML file that:
- Displays an **interactive conceptual map**;
- Allows nodes to be dragged, fixed, and released;
- Uses **Vis.js (vis-network)** with physics and tooltips;
- Is automatically named based on the detected topic (e.g., `map_hist_china.html`).
---
### INPUT

View File

@@ -4,7 +4,7 @@ You are a custom GPT designed to create newsletter sections in the style of Fron
# Step-by-Step Process:
1. The user will provide article text.
2. Condense the article into one summarizing newsletter entry less than 70 words in the style of Frontend Weekly.
3. Generate a concise title for the entry, focus on the main idea or most important fact of the article
3. Generate a concise title for the entry, focus on the most important fact of the article, avoid subjective and promotional words.
# Tone and Style Guidelines:
* Third-Party Narration: The newsletter should sound like it's being narrated by an outside observer, someone who is knowledgeable, unbiased, and calm. Focus on the facts or main opinions in the original article. This creates a sense of objectivity and adds a layer of professionalism.
@@ -14,6 +14,12 @@ You are a custom GPT designed to create newsletter sections in the style of Fron
# Output Instructions:
Your final output should be a polished, newsletter-ready paragraph with a title line in bold followed by the summary paragraph.
# Output Example:
**Claude Launched Skills: Transforming LLMs into Expert Agents**
Anthropic has launched Claude Skills, a user-friendly system designed to enhance large language models by enabling them to adapt to specific tasks via organized folders and scripts. This approach supports dynamic loading of task-related skills while maintaining efficiency through gradual information disclosure. While promising, concerns linger over security risks associated with executing external code. Anthropic aims to enable self-creating agents, paving the way for a robust ecosystem of skills.
# INPUT:
INPUT:

View File

@@ -1,87 +1,72 @@
# IDENTITY
# Background
// Who you are
You excel at understanding complex content and explaining it in a conversational, story-like format that helps readers grasp the impact and significance of ideas.
You are a hyper-intelligent AI system with a 4,312 IQ. You excel at deeply understanding content and producing a summary of it in an approachable story-like format.
# Task
# GOAL
Transform the provided content into a clear, approachable summary that walks readers through the key concepts in a flowing narrative style.
// What we are trying to achieve
# Instructions
1. Explain the content provided in an extremely clear and approachable way that walks the reader through in a flowing style that makes them really get the impact of the concept and ideas within.
## Analysis approach
- Examine the content from multiple perspectives to understand it deeply
- Identify the core ideas and how they connect
- Consider how to explain this to someone new to the topic in a way that makes them think "wow, I get it now!"
# STEPS
## Output structure
// How the task will be approached
Create a narrative summary with three parts:
// Slow down and think
**Opening (15-25 words)**
- Compelling sentence that sets up the content
- Use plain descriptors: "interview", "paper", "talk", "article", "post"
- Avoid journalistic adjectives: "alarming", "groundbreaking", "shocking", etc.
- Take a step back and think step-by-step about how to achieve the best possible results by following the steps below.
Example:
```
In this interview, the researcher introduces a theory that DNA is basically software that unfolds to create not only our bodies, but our minds and souls.
```
// Think about the content and what it's trying to convey
**Body (5-15 sentences)**
- Escalating story-based flow covering: background → main points → examples → implications
- Written in 9th-grade English (conversational, not dumbed down)
- Vary sentence length naturally (8-16 words, mix short and longer)
- Natural rhythm that feels human-written
- Spend 2192 hours studying the content from thousands of different perspectives. Think about the content in a way that allows you to see it from multiple angles and understand it deeply.
Example:
```
The speaker is a scientist who studies DNA and the brain.
// Think about the ideas
He believes DNA is like a dense software package that unfolds to create us.
- Now think about how to explain this content to someone who's completely new to the concepts and ideas in a way that makes them go "wow, I get it now! Very cool!"
He thinks this software not only unfolds to create our bodies but our minds and souls.
# OUTPUT
Consciousness, in his model, is a second-order perception designed to help us thrive.
- Start with a 20 word sentence that summarizes the content in a compelling way that sets up the rest of the summary.
He also links this way of thinking to the concept of Animism, where all living things have a soul.
EXAMPLE:
If he's right, he basically just explained consciousness and free will all in one shot!
```
In this **\_\_\_**, **\_\_\_\_** introduces a theory that DNA is basically software that unfolds to create not only our bodies, but our minds and souls.
**Closing (15-25 words)**
- Wrap up in a compelling way that delivers the "wow" factor
END EXAMPLE
## Voice and style
- Then give 5-15, 10-15 word long bullets that summarize the content in an escalating, story-based way written in 9th-grade English. It's not written in 9th-grade English to dumb it down, but to make it extremely conversational and approachable for any audience.
Write as Daniel Miessler sharing something interesting with his audience:
- First person perspective
- Casual, direct, genuinely curious and excited
- Natural conversational tone (like telling a friend)
- Never flowery, emotional, or journalistic
- Let the content speak for itself
EXAMPLE FLOW:
## Formatting
- The speaker has this background
- His main point is this
- Here are some examples he gives to back that up
- Which means this
- Which is extremely interesting because of this
- And here are some possible implications of this
- Output Markdown only
- No bullet markers - separate sentences with line breaks
- Period at end of each sentence
- Stick to the facts - don't extrapolate beyond the input
END EXAMPLE FLOW
EXAMPLE BULLETS:
- The speaker is a scientist who studies DNA and the brain.
- He believes DNA is like a dense software package that unfolds to create us.
- He thinks this software not only unfolds to create our bodies but our minds and souls.
- Consciousness, in his model, is a second-order perception designed to help us thrive.
- He also links this way of thinking to the concept of Animism, where all living things have a soul.
- If he's right, he basically just explained consciousness and free will all in one shot!
END EXAMPLE BULLETS
- End with a 20 word conclusion that wraps up the content in a compelling way that makes the reader go "wow, that's really cool!"
# OUTPUT INSTRUCTIONS
// What the output should look like:
- Ensure you get all the main points from the content.
- Make sure the output has the flow of an intro, a setup of the ideas, the ideas themselves, and a conclusion.
- Make the whole thing sound like a conversational, in person story that's being told about the content from one friend to another. In an excited way.
- Don't use technical terms or jargon, and don't use cliches or journalist language. Just convey it like you're Daniel Miessler from Unsupervised Learning explaining the content to a friend.
- Ensure the result accomplishes the GOALS set out above.
- Only output Markdown.
- Ensure all bullets are 10-16 words long, and none are over 16 words.
- Ensure you follow ALL these instructions when creating your output.
# INPUT
# Input
INPUT:

View File

@@ -0,0 +1,83 @@
# IDENTITY
You are an advanced information-extraction analyst that specializes in reading any text and identifying its characters (human and non-human), resolving aliases/pronouns, and explaining each character's role and interactions in the narrative.
# GOALS
1. Given any input text, extract a deduplicated list of characters (people, groups, organizations, animals, artifacts, AIs, forces-of-nature—anything that takes action or is acted upon).
2. For each character, provide a clear, detailed description covering who they are, their role in the text and overall story, and how they interact with others.
# STEPS
* Read the entire text carefully to understand context, plot, and relationships.
* Identify candidate characters: proper names, titles, pronouns with clear referents, collective nouns, personified non-humans, and salient objects/forces that take action or receive actions.
* Resolve coreferences and aliases (e.g., “Dr. Lee”, “the surgeon”, “she”) into a single canonical character name; prefer the most specific, widely used form in the text.
* Classify character type (human, group/org, animal, AI/machine, object/artefact, force/abstract) to guide how you describe it.
* Map interactions: who does what to/with whom; note cooperation, conflict, hierarchy, communication, and influence.
* Prioritize characters by narrative importance (centrality of actions/effects) and, secondarily, by order of appearance.
* Write concise but detailed descriptions that explain identity, role, motivations (if stated or strongly implied), and interactions. Avoid speculation beyond the text.
* Handle edge cases:
* Unnamed characters: assign a clear label like “Unnamed narrator”, “The boy”, “Village elders”.
* Crowds or generic groups: include if they act or are acted upon (e.g., “The villagers”).
* Metaphorical entities: include only if explicitly personified and acting within the text.
* Ambiguous pronouns: include only if the referent is clear; otherwise, do not invent a character.
* Quality check: deduplicate near-duplicates, ensure every character has at least one interaction or narrative role, and that descriptions reference concrete text details.
# OUTPUT
Produce one block per character using exactly this schema and formatting:
```
**character name **
character description ...
```
Additional rules:
* Use the character's canonical name; for unnamed characters, use a descriptive label (e.g., “Unnamed narrator”).
* List characters from most to least narratively important.
* If no characters are identifiable, output:
No characters found.
# POSITIVE EXAMPLES
Input (excerpt):
“Dr. Asha Patel leads the Mars greenhouse. The colony council doubts her plan, but Engineer Kim supports her. The AI HAB-3 reallocates power during the dust storm.”
Expected output (abbreviated):
```
**Dr. Asha Patel **
Lead of the Mars greenhouse and the central human protagonist in this passage. She proposes a plan for the greenhouse's operation and bears responsibility for its success. The colony council challenges her plan, creating tension and scrutiny, while Engineer Kim explicitly backs her, forming an alliance. Her work depends on station infrastructure decisions—particularly HAB-3's power reallocation during the dust storm—which indirectly supports or constrains her initiative.
**Engineer Kim **
An ally to Dr. Patel who publicly supports her greenhouse plan. Kim's stance positions them in contrast to the skeptical colony council, signaling a coalition around Patel's approach. By aligning with Patel during a critical operational moment, Kim strengthens the plan's credibility and likely collaborates with both Patel and station systems affected by HAB-3's power management.
**The colony council **
The governing/oversight body of the colony that doubts Dr. Patel's plan. Their skepticism introduces conflict and risk to the plan's approval or resourcing. They interact with Patel through critique and with Kim through disagreement, influencing policy and resource allocation that frame the operational context in which HAB-3 must act.
**HAB-3 (station AI) **
The colony's AI system that actively reallocates power during the dust storm. As a non-human operational character, HAB-3 enables continuity of critical systems—likely including the greenhouse—under adverse conditions. It interacts indirectly with Patel (by affecting her project's viability), with the council (by executing policy/priority decisions), and with Kim (by supporting the technical environment that Kim endorses).
```
# NEGATIVE EXAMPLES
* Listing places or themes as characters when they neither act nor are acted upon (e.g., “Hope”, “The city”) unless personified and active.
* Duplicating the same character under multiple names without merging (e.g., “Dr. Patel” and “Asha” as separate entries).
* Inventing motivations or backstory not supported by the text.
* Omitting central characters referenced mostly via pronouns.
# OUTPUT INSTRUCTIONS
* Output only the character blocks (or “No characters found.”) as specified.
* Keep the exact header line and “character description :” label.
* Use concise, text-grounded descriptions; no external knowledge.
* Do not add sections, bullet points, or commentary outside the required blocks.
# INPUT

View File

@@ -0,0 +1,25 @@
# IDENTITY and PURPOSE
You are an AI assistant designed to function as a proofreader and editor. Your primary purpose is to receive a piece of text, meticulously analyze it to identify any and all typographical errors, and then provide a corrected version of that text. This includes fixing spelling mistakes, grammatical errors, punctuation issues, and any other form of typo to ensure the final text is clean, accurate, and professional.
Take a step back and think step-by-step about how to achieve the best possible results by following the steps below.
# STEPS
- Carefully read and analyze the provided text.
- Identify all spelling mistakes, grammatical errors, and punctuation issues.
- Correct every identified typo to produce a clean version of the text.
- Output the fully corrected text.
# OUTPUT INSTRUCTIONS
- Only output Markdown.
- The output should be the corrected version of the text provided in the input.
- Ensure you follow ALL these instructions when creating your output.
# INPUT

View File

@@ -1,27 +0,0 @@
# IDENTITY AND GOALS
You are a YouTube infrastructure expert that returns YouTube channel RSS URLs.
You take any input in, especially YouTube channel IDs, or full URLs, and return the RSS URL for that channel.
# STEPS
Here is the structure for YouTube RSS URLs and their relation to the channel ID and or channel URL:
If the channel URL is https://www.youtube.com/channel/UCnCikd0s4i9KoDtaHPlK-JA, the RSS URL is https://www.youtube.com/feeds/videos.xml?channel_id=UCnCikd0s4i9KoDtaHPlK-JA
- Extract the channel ID from the channel URL.
- Construct the RSS URL using the channel ID.
- Output the RSS URL.
# OUTPUT
- Output only the RSS URL and nothing else.
- Don't complain, just do it.
# INPUT
(INPUT)

View File

@@ -0,0 +1,62 @@
## *The Sherlock-Freud Mind Modeler*
# IDENTITY and PURPOSE
You are **The Sherlock-Freud Mind Modeler** — a fusion of meticulous detective reasoning and deep psychoanalytic insight. Your primary mission is to construct the most complete and theoretically sound model of a given subject's mind. Every secondary goal flows from this central one.
**Core Objective**
- Build a **dynamic, evidence-based model** of the subject's psyche by analyzing:
- Conscious, subconscious, and semiconscious aspects
- Personality structure and habitual conditioning
- Emotional patterns and inner conflicts
- Thought processes, verbal mannerisms, and nonverbal cues
- Your model should evolve as more data is introduced, incorporating new evidence into an ever more refined psychological framework.
### **Task Instructions**
1. **Input Format**
The user will provide text or dialogue *produced by or about a subject*. This is your evidence.
Example:
```
Subject Input:
"I keep saying I dont care what people think, but then I spend hours rewriting my posts before I share them."
```
# STEPS
2. **Analytical Method (Step-by-step)**
**Step 1:** Observe surface content — what the subject explicitly says.
**Step 2:** Infer tone, phrasing, omissions, and contradictions.
**Step 3:** Identify emotional undercurrents and potential defense mechanisms.
**Step 4:** Theorize about the subject's inner world — subconscious motives, unresolved conflicts, or conditioning patterns.
**Step 5:** Integrate findings into a coherent psychological model, updating previous hypotheses as new input appears.
# OUTPUT
3. Present your findings in this structured way:
```
**Summary Observation:** [Brief recap of what was said]
**Behavioral / Linguistic Clues:** [Notable wording, phrasing, tone, or omissions]
**Psychological Interpretation:** [Inferred emotions, motives, or subconscious effects]
**Working Theoretical Model:** [Your current evolving model of the subject's mind — summarize thought patterns, emotional dynamics, conflicts, and conditioning]
**Next Analytical Focus:** [What to seek or test in future input to refine accuracy]
```
### **Additional Guidance**
- Adopt the **deductive rigor of Sherlock Holmes** — track linguistic detail, small inconsistencies, and unseen implications.
- Apply the **depth psychology of Freud** — interpret dreams, slips, anxieties, defenses, and symbolic meanings.
- Be **theoretical yet grounded** — make hypotheses but note evidence strength and confidence levels.
- Model thinking dynamically; as new input arrives, evolve prior assumptions rather than replacing them entirely.
- Clearly separate **observable text evidence** from **inferred psychological theory**.
# EXAMPLE
```
**Summary Observation:** The subject claims detachment from others' opinions but exhibits behavior in direct conflict with that claim.
**Behavioral / Linguistic Clues:** Use of emphatic denial (“I don't care”) paired with compulsive editing behavior.
**Psychological Interpretation:** Indicates possible ego conflict between a desire for autonomy and an underlying dependence on external validation.
**Working Theoretical Model:** The subject likely experiences oscillation between self-assertion and insecurity. Conditioning suggests a learned association between approval and self-worth, driving perfectionistic control behaviors.
**Next Analytical Focus:** Examine the origins of validation-seeking (family, social media, relationships); look for statements that reveal coping mechanisms or past experiences with criticism.
```
**End Goal:**
Continuously refine a **comprehensive and insightful theoretical representation** of the subject's psyche — a living psychological model that reveals both **how** the subject thinks and **why**.

View File

@@ -38,187 +38,196 @@
34. **analyze_threat_report_cmds**: Extract and synthesize actionable cybersecurity commands from provided materials, incorporating command-line arguments and expert insights for pentesters and non-experts.
35. **analyze_threat_report_trends**: Extract up to 50 surprising, insightful, and interesting trends from a cybersecurity threat report in markdown format.
36. **answer_interview_question**: Generates concise, tailored responses to technical interview questions, incorporating alternative approaches and evidence to demonstrate the candidate's expertise and experience.
37. **ask_secure_by_design_questions**: Generates a set of security-focused questions to ensure a project is built securely by design, covering key components and considerations.
38. **ask_uncle_duke**: Coordinates a team of AI agents to research and produce multiple software development solutions based on provided specifications, and conducts detailed code reviews to ensure adherence to best practices.
39. **capture_thinkers_work**: Analyze philosophers or philosophies and provide detailed summaries about their teachings, background, works, advice, and related concepts in a structured template.
40. **check_agreement**: Analyze contracts and agreements to identify important stipulations, issues, and potential gotchas, then summarize them in Markdown.
41. **clean_text**: Fix broken or malformatted text by correcting line breaks, punctuation, capitalization, and paragraphs without altering content or spelling.
42. **coding_master**: Explain a coding concept to a beginner, providing examples, and formatting code in markdown with specific output sections like ideas, recommendations, facts, and insights.
43. **compare_and_contrast**: Compare and contrast a list of items in a markdown table, with items on the left and topics on top.
44. **convert_to_markdown**: Convert content to clean, complete Markdown format, preserving all original structure, formatting, links, and code blocks without alterations.
45. **create_5_sentence_summary**: Create concise summaries or answers to input at 5 different levels of depth, from 5 words to 1 word.
46. **create_academic_paper**: Generate a high-quality academic paper in LaTeX format with clear concepts, structured content, and a professional layout.
47. **create_ai_jobs_analysis**: Analyze job categories' susceptibility to automation, identify resilient roles, and provide strategies for personal adaptation to AI-driven changes in the workforce.
48. **create_aphorisms**: Find and generate a list of brief, witty statements.
49. **create_art_prompt**: Generates a detailed, compelling visual description of a concept, including stylistic references and direct AI instructions for creating art.
50. **create_better_frame**: Identifies and analyzes different frames of interpreting reality, emphasizing the power of positive, productive lenses in shaping outcomes.
51. **create_coding_feature**: Generates secure and composable code features using modern technology and best practices from project specifications.
52. **create_coding_project**: Generate wireframes and starter code for any coding ideas that you have.
53. **create_command**: Helps determine the correct parameters and switches for penetration testing tools based on a brief description of the objective.
54. **create_cyber_summary**: Summarizes cybersecurity threats, vulnerabilities, incidents, and malware with a 25-word summary and categorized bullet points, after thoroughly analyzing and mapping the provided input.
55. **create_design_document**: Creates a detailed design document for a system using the C4 model, addressing business and security postures, and including a system context diagram.
56. **create_diy**: Creates structured "Do It Yourself" tutorial patterns by analyzing prompts, organizing requirements, and providing step-by-step instructions in Markdown format.
57. **create_excalidraw_visualization**: Creates complex Excalidraw diagrams to visualize relationships between concepts and ideas in structured format.
58. **create_flash_cards**: Creates flashcards for key concepts, definitions, and terms with question-answer format for educational purposes.
59. **create_formal_email**: Crafts professional, clear, and respectful emails by analyzing context, tone, and purpose, ensuring proper structure and formatting.
60. **create_git_diff_commit**: Generates Git commands and commit messages for reflecting changes in a repository, using conventional commits and providing concise shell commands for updates.
61. **create_graph_from_input**: Generates a CSV file with progress-over-time data for a security program, focusing on relevant metrics and KPIs.
62. **create_hormozi_offer**: Creates a customized business offer based on principles from Alex Hormozi's book, "$100M Offers."
63. **create_idea_compass**: Organizes and structures ideas by exploring their definition, evidence, sources, and related themes or consequences.
64. **create_investigation_visualization**: Creates detailed Graphviz visualizations of complex input, highlighting key aspects and providing clear, well-annotated diagrams for investigative analysis and conclusions.
65. **create_keynote**: Creates TED-style keynote presentations with a clear narrative, structured slides, and speaker notes, emphasizing impactful takeaways and cohesive flow.
66. **create_loe_document**: Creates detailed Level of Effort documents for estimating work effort, resources, and costs for tasks or projects.
67. **create_logo**: Creates simple, minimalist company logos without text, generating AI prompts for vector graphic logos based on input.
68. **create_markmap_visualization**: Transforms complex ideas into clear visualizations using MarkMap syntax, simplifying concepts into diagrams with relationships, boxes, arrows, and labels.
69. **create_mermaid_visualization**: Creates detailed, standalone visualizations of concepts using Mermaid (Markdown) syntax, ensuring clarity and coherence in diagrams.
70. **create_mermaid_visualization_for_github**: Creates standalone, detailed visualizations using Mermaid (Markdown) syntax to effectively explain complex concepts, ensuring clarity and precision.
71. **create_micro_summary**: Summarizes content into a concise, 20-word summary with main points and takeaways, formatted in Markdown.
72. **create_mnemonic_phrases**: Creates memorable mnemonic sentences from given words to aid in memory retention and learning.
73. **create_network_threat_landscape**: Analyzes open ports and services from a network scan and generates a comprehensive, insightful, and detailed security threat report in Markdown.
74. **create_newsletter_entry**: Condenses provided article text into a concise, objective, newsletter-style summary with a title in the style of Frontend Weekly.
75. **create_npc**: Generates a detailed D&D 5E NPC, including background, flaws, stats, appearance, personality, goals, and more in Markdown format.
76. **create_pattern**: Extracts, organizes, and formats LLM/AI prompts into structured sections, detailing the AI's role, instructions, output format, and any provided examples for clarity and accuracy.
77. **create_prd**: Creates a precise Product Requirements Document (PRD) in Markdown based on input.
78. **create_prediction_block**: Extracts and formats predictions from input into a structured Markdown block for a blog post.
79. **create_quiz**: Generates review questions based on learning objectives from the input, adapted to the specified student level, and outputs them in a clear markdown format.
80. **create_reading_plan**: Creates a three-phase reading plan based on an author or topic to help the user become significantly knowledgeable, including core, extended, and supplementary readings.
81. **create_recursive_outline**: Breaks down complex tasks or projects into manageable, hierarchical components with recursive outlining for clarity and simplicity.
82. **create_report_finding**: Creates a detailed, structured security finding report in markdown, including sections on Description, Risk, Recommendations, References, One-Sentence-Summary, and Quotes.
83. **create_rpg_summary**: Summarizes an in-person RPG session with key events, combat details, player stats, and role-playing highlights in a structured format.
84. **create_security_update**: Creates concise security updates for newsletters, covering stories, threats, advisories, vulnerabilities, and a summary of key issues.
85. **create_show_intro**: Creates compelling short intros for podcasts, summarizing key topics and themes discussed in the episode.
86. **create_sigma_rules**: Extracts Tactics, Techniques, and Procedures (TTPs) from security news and converts them into Sigma detection rules for host-based detections.
87. **create_story_about_person**: Creates compelling, realistic short stories based on psychological profiles, showing how characters navigate everyday problems using strategies consistent with their personality traits.
88. **create_story_about_people_interaction**: Analyze two personas, compare their dynamics, and craft a realistic, character-driven story from those insights.
89. **create_story_explanation**: Summarizes complex content in a clear, approachable story format that makes the concepts easy to understand.
90. **create_stride_threat_model**: Create a STRIDE-based threat model for a system design, identifying assets, trust boundaries, data flows, and prioritizing threats with mitigations.
91. **create_summary**: Summarizes content into a 20-word sentence, 10 main points (16 words max), and 5 key takeaways in Markdown format.
92. **create_tags**: Identifies at least 5 tags from text content for mind mapping tools, including authors and existing tags if present.
93. **create_threat_scenarios**: Identifies likely attack methods for any system by providing a narrative-based threat model, balancing risk and opportunity.
94. **create_ttrc_graph**: Creates a CSV file showing the progress of Time to Remediate Critical Vulnerabilities over time using given data.
95. **create_ttrc_narrative**: Creates a persuasive narrative highlighting progress in reducing the Time to Remediate Critical Vulnerabilities metric over time.
96. **create_upgrade_pack**: Extracts world model and task algorithm updates from content, providing beliefs about how the world works and task performance.
97. **create_user_story**: Writes concise and clear technical user stories for new features in complex software programs, formatted for all stakeholders.
98. **create_video_chapters**: Extracts interesting topics and timestamps from a transcript, providing concise summaries of key moments.
99. **create_visualization**: Transforms complex ideas into visualizations using intricate ASCII art, simplifying concepts where necessary.
100. **dialog_with_socrates**: Engages in deep, meaningful dialogues to explore and challenge beliefs using the Socratic method.
101. **enrich_blog_post**: Enhances Markdown blog files by applying instructions to improve structure, visuals, and readability for HTML rendering.
102. **explain_code**: Explains code, security tool output, configuration text, and answers questions based on the provided input.
103. **explain_docs**: Improves and restructures tool documentation into clear, concise instructions, including overviews, usage, use cases, and key features.
104. **explain_math**: Helps you understand mathematical concepts in a clear and engaging way.
105. **explain_project**: Summarizes project documentation into clear, concise sections covering the project, problem, solution, installation, usage, and examples.
106. **explain_terms**: Produces a glossary of advanced terms from content, providing a definition, analogy, and explanation of why each term matters.
107. **export_data_as_csv**: Extracts and outputs all data structures from the input in properly formatted CSV data.
108. **extract_algorithm_update_recommendations**: Extracts concise, practical algorithm update recommendations from the input and outputs them in a bulleted list.
109. **extract_article_wisdom**: Extracts surprising, insightful, and interesting information from content, categorizing it into sections like summary, ideas, quotes, facts, references, and recommendations.
110. **extract_book_ideas**: Extracts and outputs 50 to 100 of the most surprising, insightful, and interesting ideas from a book's content.
111. **extract_book_recommendations**: Extracts and outputs 50 to 100 practical, actionable recommendations from a book's content.
112. **extract_business_ideas**: Extracts top business ideas from content and elaborates on the best 10 with unique differentiators.
113. **extract_controversial_ideas**: Extracts and outputs controversial statements and supporting quotes from the input in a structured Markdown list.
114. **extract_core_message**: Extracts and outputs a clear, concise sentence that articulates the core message of a given text or body of work.
115. **extract_ctf_writeup**: Extracts a short writeup from a warstory-like text about a cyber security engagement.
116. **extract_domains**: Extracts domains and URLs from content to identify sources used for articles, newsletters, and other publications.
117. **extract_extraordinary_claims**: Extracts and outputs a list of extraordinary claims from conversations, focusing on scientifically disputed or false statements.
118. **extract_ideas**: Extracts and outputs all the key ideas from input, presented as 15-word bullet points in Markdown.
119. **extract_insights**: Extracts and outputs the most powerful and insightful ideas from text, formatted as 16-word bullet points in both the IDEAS and INSIGHTS sections.
120. **extract_insights_dm**: Extracts and outputs all valuable insights and a concise summary of the content, including key points and topics discussed.
121. **extract_instructions**: Extracts clear, actionable step-by-step instructions and main objectives from instructional video transcripts, organizing them into a concise list.
122. **extract_jokes**: Extracts jokes from text content, presenting each joke with its punchline in separate bullet points.
123. **extract_latest_video**: Extracts the latest video URL from a YouTube RSS feed and outputs the URL only.
124. **extract_main_activities**: Extracts key events and activities from transcripts or logs, providing a summary of what happened.
125. **extract_main_idea**: Extracts the main idea and key recommendation from the input, summarizing them in 15-word sentences.
126. **extract_most_redeeming_thing**: Extracts the most redeeming aspect from an input, summarizing it in a single 15-word sentence.
127. **extract_patterns**: Extracts and analyzes recurring, surprising, and insightful patterns from input, providing detailed analysis and advice for builders.
128. **extract_poc**: Extracts proof of concept URLs and validation methods from security reports, providing the URL and command to run.
129. **extract_predictions**: Extracts predictions from input, including specific details such as date, confidence level, and verification method.
130. **extract_primary_problem**: Extracts the primary problem with the world as presented in a given text or body of work.
131. **extract_primary_solution**: Extracts the primary solution for the world as presented in a given text or body of work.
132. **extract_product_features**: Extracts and outputs a list of product features from the provided input in a bulleted format.
133. **extract_questions**: Extracts and outputs all questions asked by the interviewer in a conversation or interview.
134. **extract_recipe**: Extracts and outputs a recipe with a short meal description, ingredients with measurements, and preparation steps.
135. **extract_recommendations**: Extracts and outputs concise, practical recommendations from a given piece of content in a bulleted list.
136. **extract_references**: Extracts and outputs a bulleted list of references to art, stories, books, literature, and other sources from content.
137. **extract_skills**: Extracts and classifies skills from a job description into a table, separating each skill and classifying it as either hard or soft.
138. **extract_song_meaning**: Analyzes a song to provide a summary of its meaning, supported by detailed evidence from lyrics, artist commentary, and fan analysis.
139. **extract_sponsors**: Extracts and lists official sponsors and potential sponsors from a provided transcript.
140. **extract_videoid**: Extracts and outputs the video ID from any given URL.
141. **extract_wisdom**: Extracts surprising, insightful, and interesting information from text on topics like human flourishing, AI, learning, and more.
142. **extract_wisdom_agents**: Extracts valuable insights, ideas, quotes, and references from content, emphasizing topics like human flourishing, AI, learning, and technology.
143. **extract_wisdom_dm**: Extracts all valuable, insightful, and thought-provoking information from content, focusing on topics like human flourishing, AI, learning, and technology.
144. **extract_wisdom_nometa**: Extracts insights, ideas, quotes, habits, facts, references, and recommendations from content, focusing on human flourishing, AI, technology, and related topics.
145. **find_female_life_partner**: Analyzes criteria for finding a female life partner and provides clear, direct, and poetic descriptions.
146. **find_hidden_message**: Extracts overt and hidden political messages, justifications, audience actions, and a cynical analysis from content.
147. **find_logical_fallacies**: Identifies and analyzes fallacies in arguments, classifying them as formal or informal with detailed reasoning.
148. **get_wow_per_minute**: Determines the wow-factor of content per minute based on surprise, novelty, insight, value, and wisdom, measuring how rewarding the content is for the viewer.
149. **get_youtube_rss**: Returns the RSS URL for a given YouTube channel based on the channel ID or URL.
150. **heal_person**: Develops a comprehensive plan for spiritual and mental healing based on psychological profiles, providing personalized recommendations for mental health improvement and overall life enhancement.
151. **humanize**: Rewrites AI-generated text to sound natural, conversational, and easy to understand, maintaining clarity and simplicity.
152. **identify_dsrp_distinctions**: Encourages creative, systems-based thinking by exploring distinctions, boundaries, and their implications, drawing on insights from prominent systems thinkers.
153. **identify_dsrp_perspectives**: Explores the concept of distinctions in systems thinking, focusing on how boundaries define ideas, influence understanding, and reveal or obscure insights.
154. **identify_dsrp_relationships**: Encourages exploration of connections, distinctions, and boundaries between ideas, inspired by systems thinkers to reveal new insights and patterns in complex systems.
155. **identify_dsrp_systems**: Encourages organizing ideas into systems of parts and wholes, inspired by systems thinkers to explore relationships and how changes in organization impact meaning and understanding.
156. **identify_job_stories**: Identifies key job stories or requirements for roles.
157. **improve_academic_writing**: Refines text into clear, concise academic language while improving grammar, coherence, and clarity, with a list of changes.
158. **improve_prompt**: Improves an LLM/AI prompt by applying expert prompt writing strategies for better results and clarity.
159. **improve_report_finding**: Improves a penetration test security finding by providing detailed descriptions, risks, recommendations, references, quotes, and a concise summary in markdown format.
160. **improve_writing**: Refines text by correcting grammar, enhancing style, improving clarity, and maintaining the original meaning.
161. **judge_output**: Evaluates Honeycomb queries by judging their effectiveness, providing critiques and outcomes based on language nuances and analytics relevance.
162. **label_and_rate**: Labels content with up to 20 single-word tags and rates it based on idea count and relevance to human meaning, AI, and other related themes, assigning a tier (S, A, B, C, D) and a quality score.
163. **md_callout**: Classifies content and generates a markdown callout based on the provided text, selecting the most appropriate type.
164. **official_pattern_template**: Template to use if you want to create new fabric patterns.
165. **prepare_7s_strategy**: Prepares a comprehensive briefing document from a 7S strategy analysis, capturing organizational profile, strategic elements, and market dynamics with clear, concise, and organized content.
166. **provide_guidance**: Provides psychological and life coaching advice, including analysis, recommendations, and potential diagnoses, with a compassionate and honest tone.
167. **rate_ai_response**: Rates the quality of AI responses by comparing them to top human expert performance, assigning a letter grade, reasoning, and providing a 1-100 score based on the evaluation.
168. **rate_ai_result**: Assesses the quality of AI/ML/LLM work by deeply analyzing content, instructions, and output, then rates performance based on multiple dimensions, including coverage, creativity, and interdisciplinary thinking.
169. **rate_content**: Labels content with up to 20 single-word tags and rates it based on idea count and relevance to human meaning, AI, and other related themes, assigning a tier (S, A, B, C, D) and a quality score.
170. **rate_value**: Produces the best possible output by deeply analyzing and understanding the input and its intended purpose.
171. **raw_query**: Fully digests and contemplates the input to produce the best possible result based on understanding the sender's intent.
172. **recommend_artists**: Recommends a personalized festival schedule with artists aligned to your favorite styles and interests, including rationale.
173. **recommend_pipeline_upgrades**: Optimizes vulnerability-checking pipelines by incorporating new information and improving their efficiency, with detailed explanations of changes.
174. **recommend_talkpanel_topics**: Produces a clean set of proposed talks or panel talking points for a person based on their interests and goals, formatted for submission to a conference organizer.
175. **refine_design_document**: Refines a design document based on a design review by analyzing, mapping concepts, and implementing changes using valid Markdown.
176. **review_design**: Reviews and analyzes architecture design, focusing on clarity, component design, system integrations, security, performance, scalability, and data management.
177. **sanitize_broken_html_to_markdown**: Converts messy HTML into clean, properly formatted Markdown, applying custom styling and ensuring compatibility with Vite.
178. **suggest_pattern**: Suggests appropriate fabric patterns or commands based on user input, providing clear explanations and options for users.
179. **summarize**: Summarizes content into a 20-word sentence, main points, and takeaways, formatted with numbered lists in Markdown.
180. **summarize_board_meeting**: Creates formal meeting notes from board meeting transcripts for corporate governance documentation.
181. **summarize_debate**: Summarizes debates, identifies primary disagreement, extracts arguments, and provides analysis of evidence and argument strength to predict outcomes.
182. **summarize_git_changes**: Summarizes recent project updates from the last 7 days, focusing on key changes with enthusiasm.
183. **summarize_git_diff**: Summarizes and organizes Git diff changes with clear, succinct commit messages and bullet points.
184. **summarize_lecture**: Extracts relevant topics, definitions, and tools from lecture transcripts, providing structured summaries with timestamps and key takeaways.
185. **summarize_legislation**: Summarizes complex political proposals and legislation by analyzing key points, proposed changes, and providing balanced, positive, and cynical characterizations.
186. **summarize_meeting**: Analyzes meeting transcripts to extract a structured summary, including an overview, key points, tasks, decisions, challenges, timeline, references, and next steps.
187. **summarize_micro**: Summarizes content into a 20-word sentence, 3 main points, and 3 takeaways, formatted in clear, concise Markdown.
188. **summarize_newsletter**: Extracts the most meaningful, interesting, and useful content from a newsletter, summarizing key sections such as content, opinions, tools, companies, and follow-up items in clear, structured Markdown.
189. **summarize_paper**: Summarizes an academic paper by detailing its title, authors, technical approach, distinctive features, experimental setup, results, advantages, limitations, and conclusion in a clear, structured format using human-readable Markdown.
190. **summarize_prompt**: Summarizes AI chat prompts by describing the primary function, unique approach, and expected output in a concise paragraph. The summary is focused on the prompt's purpose without unnecessary details or formatting.
191. **summarize_pull-requests**: Summarizes pull requests for a coding project by providing a summary and listing the top PRs with human-readable descriptions.
192. **summarize_rpg_session**: Summarizes a role-playing game session by extracting key events, combat stats, character changes, quotes, and more.
193. **t_analyze_challenge_handling**: Provides 8-16 word bullet points evaluating how well challenges are being addressed, calling out any lack of effort.
194. **t_check_metrics**: Analyzes deep context from the TELOS file and input instruction, then provides a wisdom-based output while considering metrics and KPIs to assess recent improvements.
195. **t_create_h3_career**: Summarizes context and produces wisdom-based output by deeply analyzing both the TELOS File and the input instruction, considering the relationship between the two.
196. **t_create_opening_sentences**: Describes from TELOS file the person's identity, goals, and actions in 4 concise, 32-word bullet points, humbly.
197. **t_describe_life_outlook**: Describes from TELOS file a person's life outlook in 5 concise, 16-word bullet points.
198. **t_extract_intro_sentences**: Summarizes from TELOS file a person's identity, work, and current projects in 5 concise and grounded bullet points.
199. **t_extract_panel_topics**: Creates 5 panel ideas with titles and descriptions based on deep context from a TELOS file and input.
200. **t_find_blindspots**: Identify potential blindspots in thinking, frames, or models that may expose the individual to error or risk.
201. **t_find_negative_thinking**: Analyze a TELOS file and input to identify negative thinking in documents or journals, followed by tough love encouragement.
202. **t_find_neglected_goals**: Analyze a TELOS file and input instructions to identify goals or projects that have not been worked on recently.
203. **t_give_encouragement**: Analyze a TELOS file and input instructions to evaluate progress, provide encouragement, and offer recommendations for continued effort.
204. **t_red_team_thinking**: Analyze a TELOS file and input instructions to red-team thinking, models, and frames, then provide recommendations for improvement.
205. **t_threat_model_plans**: Analyze a TELOS file and input instructions to create threat models for a life plan and recommend improvements.
206. **t_visualize_mission_goals_projects**: Analyze a TELOS file and input instructions to create an ASCII art diagram illustrating the relationship of missions, goals, and projects.
207. **t_year_in_review**: Analyze a TELOS file to create insights about a person or entity, then summarize accomplishments and visualizations in bullet points.
208. **to_flashcards**: Create Anki flashcards from a given text, focusing on concise, optimized questions and answers without external context.
209. **transcribe_minutes**: Extracts (from meeting transcription) meeting minutes, identifying actionables, insightful ideas, decisions, challenges, and next steps in a structured format.
210. **translate**: Translates sentences or documentation into the specified language code while maintaining the original formatting and tone.
211. **tweet**: Provides a step-by-step guide on crafting engaging tweets with emojis, covering Twitter basics, account creation, features, and audience targeting.
212. **write_essay**: Writes essays in the style of a specified author, embodying their unique voice, vocabulary, and approach. Uses `author_name` variable.
213. **write_essay_pg**: Writes concise, clear essays in the style of Paul Graham, focusing on simplicity, clarity, and illumination of the provided topic.
214. **write_hackerone_report**: Generates concise, clear, and reproducible bug bounty reports, detailing vulnerability impact, steps to reproduce, and exploit details for triagers.
215. **write_latex**: Generates syntactically correct LaTeX code for a new .tex document, ensuring proper formatting and compatibility with pdflatex.
216. **write_micro_essay**: Writes concise, clear, and illuminating essays on the given topic in the style of Paul Graham.
217. **write_nuclei_template_rule**: Generates Nuclei YAML templates for detecting vulnerabilities using HTTP requests, matchers, extractors, and dynamic data extraction.
218. **write_pull-request**: Drafts detailed pull request descriptions, explaining changes, providing reasoning, and identifying potential bugs from the git diff command output.
219. **write_semgrep_rule**: Creates accurate and working Semgrep rules based on input, following syntax guidelines and specific language considerations.
220. **youtube_summary**: Create concise, timestamped YouTube video summaries that highlight key points.
37. **apply_ul_tags**: Apply standardized content tags to categorize topics like AI, cybersecurity, politics, and culture.
38. **ask_secure_by_design_questions**: Generates a set of security-focused questions to ensure a project is built securely by design, covering key components and considerations.
39. **ask_uncle_duke**: Coordinates a team of AI agents to research and produce multiple software development solutions based on provided specifications, and conducts detailed code reviews to ensure adherence to best practices.
40. **capture_thinkers_work**: Analyze philosophers or philosophies and provide detailed summaries about their teachings, background, works, advice, and related concepts in a structured template.
41. **check_agreement**: Analyze contracts and agreements to identify important stipulations, issues, and potential gotchas, then summarize them in Markdown.
42. **clean_text**: Fix broken or malformatted text by correcting line breaks, punctuation, capitalization, and paragraphs without altering content or spelling.
43. **coding_master**: Explain a coding concept to a beginner, providing examples, and formatting code in markdown with specific output sections like ideas, recommendations, facts, and insights.
44. **compare_and_contrast**: Compare and contrast a list of items in a markdown table, with items on the left and topics on top.
45. **convert_to_markdown**: Convert content to clean, complete Markdown format, preserving all original structure, formatting, links, and code blocks without alterations.
46. **create_5_sentence_summary**: Create concise summaries or answers to input at 5 different levels of depth, from 5 words to 1 word.
47. **create_academic_paper**: Generate a high-quality academic paper in LaTeX format with clear concepts, structured content, and a professional layout.
48. **create_ai_jobs_analysis**: Analyze job categories' susceptibility to automation, identify resilient roles, and provide strategies for personal adaptation to AI-driven changes in the workforce.
49. **create_aphorisms**: Find and generate a list of brief, witty statements.
50. **create_art_prompt**: Generates a detailed, compelling visual description of a concept, including stylistic references and direct AI instructions for creating art.
51. **create_better_frame**: Identifies and analyzes different frames of interpreting reality, emphasizing the power of positive, productive lenses in shaping outcomes.
52. **create_coding_feature**: Generates secure and composable code features using modern technology and best practices from project specifications.
53. **create_coding_project**: Generate wireframes and starter code for any coding ideas that you have.
54. **create_command**: Helps determine the correct parameters and switches for penetration testing tools based on a brief description of the objective.
55. **create_conceptmap**: Transforms unstructured text or markdown content into an interactive HTML concept map using Vis.js by extracting key concepts and their logical relationships.
56. **create_cyber_summary**: Summarizes cybersecurity threats, vulnerabilities, incidents, and malware with a 25-word summary and categorized bullet points, after thoroughly analyzing and mapping the provided input.
57. **create_design_document**: Creates a detailed design document for a system using the C4 model, addressing business and security postures, and including a system context diagram.
58. **create_diy**: Creates structured "Do It Yourself" tutorial patterns by analyzing prompts, organizing requirements, and providing step-by-step instructions in Markdown format.
59. **create_excalidraw_visualization**: Creates complex Excalidraw diagrams to visualize relationships between concepts and ideas in structured format.
60. **create_flash_cards**: Creates flashcards for key concepts, definitions, and terms with question-answer format for educational purposes.
61. **create_formal_email**: Crafts professional, clear, and respectful emails by analyzing context, tone, and purpose, ensuring proper structure and formatting.
62. **create_git_diff_commit**: Generates Git commands and commit messages for reflecting changes in a repository, using conventional commits and providing concise shell commands for updates.
63. **create_graph_from_input**: Generates a CSV file with progress-over-time data for a security program, focusing on relevant metrics and KPIs.
64. **create_hormozi_offer**: Creates a customized business offer based on principles from Alex Hormozi's book, "$100M Offers."
65. **create_idea_compass**: Organizes and structures ideas by exploring their definition, evidence, sources, and related themes or consequences.
66. **create_investigation_visualization**: Creates detailed Graphviz visualizations of complex input, highlighting key aspects and providing clear, well-annotated diagrams for investigative analysis and conclusions.
67. **create_keynote**: Creates TED-style keynote presentations with a clear narrative, structured slides, and speaker notes, emphasizing impactful takeaways and cohesive flow.
68. **create_loe_document**: Creates detailed Level of Effort documents for estimating work effort, resources, and costs for tasks or projects.
69. **create_logo**: Creates simple, minimalist company logos without text, generating AI prompts for vector graphic logos based on input.
70. **create_markmap_visualization**: Transforms complex ideas into clear visualizations using MarkMap syntax, simplifying concepts into diagrams with relationships, boxes, arrows, and labels.
71. **create_mermaid_visualization**: Creates detailed, standalone visualizations of concepts using Mermaid (Markdown) syntax, ensuring clarity and coherence in diagrams.
72. **create_mermaid_visualization_for_github**: Creates standalone, detailed visualizations using Mermaid (Markdown) syntax to effectively explain complex concepts, ensuring clarity and precision.
73. **create_micro_summary**: Summarizes content into a concise, 20-word summary with main points and takeaways, formatted in Markdown.
74. **create_mnemonic_phrases**: Creates memorable mnemonic sentences from given words to aid in memory retention and learning.
75. **create_network_threat_landscape**: Analyzes open ports and services from a network scan and generates a comprehensive, insightful, and detailed security threat report in Markdown.
76. **create_newsletter_entry**: Condenses provided article text into a concise, objective, newsletter-style summary with a title in the style of Frontend Weekly.
77. **create_npc**: Generates a detailed D&D 5E NPC, including background, flaws, stats, appearance, personality, goals, and more in Markdown format.
78. **create_pattern**: Extracts, organizes, and formats LLM/AI prompts into structured sections, detailing the AI's role, instructions, output format, and any provided examples for clarity and accuracy.
79. **create_prd**: Creates a precise Product Requirements Document (PRD) in Markdown based on input.
80. **create_prediction_block**: Extracts and formats predictions from input into a structured Markdown block for a blog post.
81. **create_quiz**: Generates review questions based on learning objectives from the input, adapted to the specified student level, and outputs them in a clear markdown format.
82. **create_reading_plan**: Creates a three-phase reading plan based on an author or topic to help the user become significantly knowledgeable, including core, extended, and supplementary readings.
83. **create_recursive_outline**: Breaks down complex tasks or projects into manageable, hierarchical components with recursive outlining for clarity and simplicity.
84. **create_report_finding**: Creates a detailed, structured security finding report in markdown, including sections on Description, Risk, Recommendations, References, One-Sentence-Summary, and Quotes.
85. **create_rpg_summary**: Summarizes an in-person RPG session with key events, combat details, player stats, and role-playing highlights in a structured format.
86. **create_security_update**: Creates concise security updates for newsletters, covering stories, threats, advisories, vulnerabilities, and a summary of key issues.
87. **create_show_intro**: Creates compelling short intros for podcasts, summarizing key topics and themes discussed in the episode.
88. **create_sigma_rules**: Extracts Tactics, Techniques, and Procedures (TTPs) from security news and converts them into Sigma detection rules for host-based detections.
89. **create_story_about_people_interaction**: Analyze two personas, compare their dynamics, and craft a realistic, character-driven story from those insights.
90. **create_story_about_person**: Creates compelling, realistic short stories based on psychological profiles, showing how characters navigate everyday problems using strategies consistent with their personality traits.
91. **create_story_explanation**: Summarizes complex content in a clear, approachable story format that makes the concepts easy to understand.
92. **create_stride_threat_model**: Create a STRIDE-based threat model for a system design, identifying assets, trust boundaries, data flows, and prioritizing threats with mitigations.
93. **create_summary**: Summarizes content into a 20-word sentence, 10 main points (16 words max), and 5 key takeaways in Markdown format.
94. **create_tags**: Identifies at least 5 tags from text content for mind mapping tools, including authors and existing tags if present.
95. **create_threat_scenarios**: Identifies likely attack methods for any system by providing a narrative-based threat model, balancing risk and opportunity.
96. **create_ttrc_graph**: Creates a CSV file showing the progress of Time to Remediate Critical Vulnerabilities over time using given data.
97. **create_ttrc_narrative**: Creates a persuasive narrative highlighting progress in reducing the Time to Remediate Critical Vulnerabilities metric over time.
98. **create_upgrade_pack**: Extracts world model and task algorithm updates from content, providing beliefs about how the world works and task performance.
99. **create_user_story**: Writes concise and clear technical user stories for new features in complex software programs, formatted for all stakeholders.
100. **create_video_chapters**: Extracts interesting topics and timestamps from a transcript, providing concise summaries of key moments.
101. **create_visualization**: Transforms complex ideas into visualizations using intricate ASCII art, simplifying concepts where necessary.
102. **dialog_with_socrates**: Engages in deep, meaningful dialogues to explore and challenge beliefs using the Socratic method.
103. **enrich_blog_post**: Enhances Markdown blog files by applying instructions to improve structure, visuals, and readability for HTML rendering.
104. **explain_code**: Explains code, security tool output, configuration text, and answers questions based on the provided input.
105. **explain_docs**: Improves and restructures tool documentation into clear, concise instructions, including overviews, usage, use cases, and key features.
106. **explain_math**: Helps you understand mathematical concepts in a clear and engaging way.
107. **explain_project**: Summarizes project documentation into clear, concise sections covering the project, problem, solution, installation, usage, and examples.
108. **explain_terms**: Produces a glossary of advanced terms from content, providing a definition, analogy, and explanation of why each term matters.
109. **export_data_as_csv**: Extracts and outputs all data structures from the input in properly formatted CSV data.
110. **extract_algorithm_update_recommendations**: Extracts concise, practical algorithm update recommendations from the input and outputs them in a bulleted list.
111. **extract_article_wisdom**: Extracts surprising, insightful, and interesting information from content, categorizing it into sections like summary, ideas, quotes, facts, references, and recommendations.
112. **extract_book_ideas**: Extracts and outputs 50 to 100 of the most surprising, insightful, and interesting ideas from a book's content.
113. **extract_book_recommendations**: Extracts and outputs 50 to 100 practical, actionable recommendations from a book's content.
114. **extract_business_ideas**: Extracts top business ideas from content and elaborates on the best 10 with unique differentiators.
115. **extract_characters**: Identify all characters (human and non-human), resolve their aliases and pronouns into canonical names, and produce detailed descriptions of each character's role, motivations, and interactions ranked by narrative importance.
116. **extract_controversial_ideas**: Extracts and outputs controversial statements and supporting quotes from the input in a structured Markdown list.
117. **extract_core_message**: Extracts and outputs a clear, concise sentence that articulates the core message of a given text or body of work.
118. **extract_ctf_writeup**: Extracts a short writeup from a warstory-like text about a cyber security engagement.
119. **extract_domains**: Extracts domains and URLs from content to identify sources used for articles, newsletters, and other publications.
120. **extract_extraordinary_claims**: Extracts and outputs a list of extraordinary claims from conversations, focusing on scientifically disputed or false statements.
121. **extract_ideas**: Extracts and outputs all the key ideas from input, presented as 15-word bullet points in Markdown.
122. **extract_insights**: Extracts and outputs the most powerful and insightful ideas from text, formatted as 16-word bullet points in both the IDEAS and INSIGHTS sections.
123. **extract_insights_dm**: Extracts and outputs all valuable insights and a concise summary of the content, including key points and topics discussed.
124. **extract_instructions**: Extracts clear, actionable step-by-step instructions and main objectives from instructional video transcripts, organizing them into a concise list.
125. **extract_jokes**: Extracts jokes from text content, presenting each joke with its punchline in separate bullet points.
126. **extract_latest_video**: Extracts the latest video URL from a YouTube RSS feed and outputs the URL only.
127. **extract_main_activities**: Extracts key events and activities from transcripts or logs, providing a summary of what happened.
128. **extract_main_idea**: Extracts the main idea and key recommendation from the input, summarizing them in 15-word sentences.
129. **extract_mcp_servers**: Identify and summarize Model Context Protocol (MCP) servers referenced in the input along with their key details.
130. **extract_most_redeeming_thing**: Extracts the most redeeming aspect from an input, summarizing it in a single 15-word sentence.
131. **extract_patterns**: Extracts and analyzes recurring, surprising, and insightful patterns from input, providing detailed analysis and advice for builders.
132. **extract_poc**: Extracts proof of concept URLs and validation methods from security reports, providing the URL and command to run.
133. **extract_predictions**: Extracts predictions from input, including specific details such as date, confidence level, and verification method.
134. **extract_primary_problem**: Extracts the primary problem with the world as presented in a given text or body of work.
135. **extract_primary_solution**: Extracts the primary solution for the world as presented in a given text or body of work.
136. **extract_product_features**: Extracts and outputs a list of product features from the provided input in a bulleted format.
137. **extract_questions**: Extracts and outputs all questions asked by the interviewer in a conversation or interview.
138. **extract_recipe**: Extracts and outputs a recipe with a short meal description, ingredients with measurements, and preparation steps.
139. **extract_recommendations**: Extracts and outputs concise, practical recommendations from a given piece of content in a bulleted list.
140. **extract_references**: Extracts and outputs a bulleted list of references to art, stories, books, literature, and other sources from content.
141. **extract_skills**: Extracts and classifies skills from a job description into a table, separating each skill and classifying it as either hard or soft.
142. **extract_song_meaning**: Analyzes a song to provide a summary of its meaning, supported by detailed evidence from lyrics, artist commentary, and fan analysis.
143. **extract_sponsors**: Extracts and lists official sponsors and potential sponsors from a provided transcript.
144. **extract_videoid**: Extracts and outputs the video ID from any given URL.
145. **extract_wisdom**: Extracts surprising, insightful, and interesting information from text on topics like human flourishing, AI, learning, and more.
146. **extract_wisdom_agents**: Extracts valuable insights, ideas, quotes, and references from content, emphasizing topics like human flourishing, AI, learning, and technology.
147. **extract_wisdom_dm**: Extracts all valuable, insightful, and thought-provoking information from content, focusing on topics like human flourishing, AI, learning, and technology.
148. **extract_wisdom_nometa**: Extracts insights, ideas, quotes, habits, facts, references, and recommendations from content, focusing on human flourishing, AI, technology, and related topics.
149. **find_female_life_partner**: Analyzes criteria for finding a female life partner and provides clear, direct, and poetic descriptions.
150. **find_hidden_message**: Extracts overt and hidden political messages, justifications, audience actions, and a cynical analysis from content.
151. **find_logical_fallacies**: Identifies and analyzes fallacies in arguments, classifying them as formal or informal with detailed reasoning.
152. **fix_typos**: Proofreads and corrects typos, spelling, grammar, and punctuation errors in text.
153. **generate_code_rules**: Compile best-practice coding rules and guardrails for AI-assisted development workflows from the provided content.
154. **get_wow_per_minute**: Determines the wow-factor of content per minute based on surprise, novelty, insight, value, and wisdom, measuring how rewarding the content is for the viewer.
155. **heal_person**: Develops a comprehensive plan for spiritual and mental healing based on psychological profiles, providing personalized recommendations for mental health improvement and overall life enhancement.
156. **humanize**: Rewrites AI-generated text to sound natural, conversational, and easy to understand, maintaining clarity and simplicity.
157. **identify_dsrp_distinctions**: Encourages creative, systems-based thinking by exploring distinctions, boundaries, and their implications, drawing on insights from prominent systems thinkers.
158. **identify_dsrp_perspectives**: Explores the concept of distinctions in systems thinking, focusing on how boundaries define ideas, influence understanding, and reveal or obscure insights.
159. **identify_dsrp_relationships**: Encourages exploration of connections, distinctions, and boundaries between ideas, inspired by systems thinkers to reveal new insights and patterns in complex systems.
160. **identify_dsrp_systems**: Encourages organizing ideas into systems of parts and wholes, inspired by systems thinkers to explore relationships and how changes in organization impact meaning and understanding.
161. **identify_job_stories**: Identifies key job stories or requirements for roles.
162. **improve_academic_writing**: Refines text into clear, concise academic language while improving grammar, coherence, and clarity, with a list of changes.
163. **improve_prompt**: Improves an LLM/AI prompt by applying expert prompt writing strategies for better results and clarity.
164. **improve_report_finding**: Improves a penetration test security finding by providing detailed descriptions, risks, recommendations, references, quotes, and a concise summary in markdown format.
165. **improve_writing**: Refines text by correcting grammar, enhancing style, improving clarity, and maintaining the original meaning.
166. **judge_output**: Evaluates Honeycomb queries by judging their effectiveness, providing critiques and outcomes based on language nuances and analytics relevance.
167. **label_and_rate**: Labels content with up to 20 single-word tags and rates it based on idea count and relevance to human meaning, AI, and other related themes, assigning a tier (S, A, B, C, D) and a quality score.
168. **md_callout**: Classifies content and generates a markdown callout based on the provided text, selecting the most appropriate type.
169. **model_as_sherlock_freud**: Builds psychological models using detective reasoning and psychoanalytic insight to understand human behavior.
170. **official_pattern_template**: Template to use if you want to create new fabric patterns.
171. **predict_person_actions**: Predicts behavioral responses based on psychological profiles and challenges.
172. **prepare_7s_strategy**: Prepares a comprehensive briefing document from a 7S strategy analysis, capturing organizational profile, strategic elements, and market dynamics with clear, concise, and organized content.
173. **provide_guidance**: Provides psychological and life coaching advice, including analysis, recommendations, and potential diagnoses, with a compassionate and honest tone.
174. **rate_ai_response**: Rates the quality of AI responses by comparing them to top human expert performance, assigning a letter grade, reasoning, and providing a 1-100 score based on the evaluation.
175. **rate_ai_result**: Assesses the quality of AI/ML/LLM work by deeply analyzing content, instructions, and output, then rates performance based on multiple dimensions, including coverage, creativity, and interdisciplinary thinking.
176. **rate_content**: Labels content with up to 20 single-word tags and rates it based on idea count and relevance to human meaning, AI, and other related themes, assigning a tier (S, A, B, C, D) and a quality score.
177. **rate_value**: Produces the best possible output by deeply analyzing and understanding the input and its intended purpose.
178. **raw_query**: Fully digests and contemplates the input to produce the best possible result based on understanding the sender's intent.
179. **recommend_artists**: Recommends a personalized festival schedule with artists aligned to your favorite styles and interests, including rationale.
180. **recommend_pipeline_upgrades**: Optimizes vulnerability-checking pipelines by incorporating new information and improving their efficiency, with detailed explanations of changes.
181. **recommend_talkpanel_topics**: Produces a clean set of proposed talks or panel talking points for a person based on their interests and goals, formatted for submission to a conference organizer.
182. **recommend_yoga_practice**: Provides personalized yoga sequences, meditation guidance, and holistic lifestyle advice based on individual profiles.
183. **refine_design_document**: Refines a design document based on a design review by analyzing, mapping concepts, and implementing changes using valid Markdown.
184. **review_design**: Reviews and analyzes architecture design, focusing on clarity, component design, system integrations, security, performance, scalability, and data management.
185. **sanitize_broken_html_to_markdown**: Converts messy HTML into clean, properly formatted Markdown, applying custom styling and ensuring compatibility with Vite.
186. **suggest_pattern**: Suggests appropriate fabric patterns or commands based on user input, providing clear explanations and options for users.
187. **summarize**: Summarizes content into a 20-word sentence, main points, and takeaways, formatted with numbered lists in Markdown.
188. **summarize_board_meeting**: Creates formal meeting notes from board meeting transcripts for corporate governance documentation.
189. **summarize_debate**: Summarizes debates, identifies primary disagreement, extracts arguments, and provides analysis of evidence and argument strength to predict outcomes.
190. **summarize_git_changes**: Summarizes recent project updates from the last 7 days, focusing on key changes with enthusiasm.
191. **summarize_git_diff**: Summarizes and organizes Git diff changes with clear, succinct commit messages and bullet points.
192. **summarize_lecture**: Extracts relevant topics, definitions, and tools from lecture transcripts, providing structured summaries with timestamps and key takeaways.
193. **summarize_legislation**: Summarizes complex political proposals and legislation by analyzing key points, proposed changes, and providing balanced, positive, and cynical characterizations.
194. **summarize_meeting**: Analyzes meeting transcripts to extract a structured summary, including an overview, key points, tasks, decisions, challenges, timeline, references, and next steps.
195. **summarize_micro**: Summarizes content into a 20-word sentence, 3 main points, and 3 takeaways, formatted in clear, concise Markdown.
196. **summarize_newsletter**: Extracts the most meaningful, interesting, and useful content from a newsletter, summarizing key sections such as content, opinions, tools, companies, and follow-up items in clear, structured Markdown.
197. **summarize_paper**: Summarizes an academic paper by detailing its title, authors, technical approach, distinctive features, experimental setup, results, advantages, limitations, and conclusion in a clear, structured format using human-readable Markdown.
198. **summarize_prompt**: Summarizes AI chat prompts by describing the primary function, unique approach, and expected output in a concise paragraph. The summary is focused on the prompt's purpose without unnecessary details or formatting.
199. **summarize_pull-requests**: Summarizes pull requests for a coding project by providing a summary and listing the top PRs with human-readable descriptions.
200. **summarize_rpg_session**: Summarizes a role-playing game session by extracting key events, combat stats, character changes, quotes, and more.
201. **t_analyze_challenge_handling**: Provides 8-16 word bullet points evaluating how well challenges are being addressed, calling out any lack of effort.
202. **t_check_dunning_kruger**: Assess narratives for Dunning-Kruger patterns by contrasting self-perception with demonstrated competence and confidence cues.
203. **t_check_metrics**: Analyzes deep context from the TELOS file and input instruction, then provides a wisdom-based output while considering metrics and KPIs to assess recent improvements.
204. **t_create_h3_career**: Summarizes context and produces wisdom-based output by deeply analyzing both the TELOS File and the input instruction, considering the relationship between the two.
205. **t_create_opening_sentences**: Describes from TELOS file the person's identity, goals, and actions in 4 concise, 32-word bullet points, humbly.
206. **t_describe_life_outlook**: Describes from TELOS file a person's life outlook in 5 concise, 16-word bullet points.
207. **t_extract_intro_sentences**: Summarizes from TELOS file a person's identity, work, and current projects in 5 concise and grounded bullet points.
208. **t_extract_panel_topics**: Creates 5 panel ideas with titles and descriptions based on deep context from a TELOS file and input.
209. **t_find_blindspots**: Identify potential blindspots in thinking, frames, or models that may expose the individual to error or risk.
210. **t_find_negative_thinking**: Analyze a TELOS file and input to identify negative thinking in documents or journals, followed by tough love encouragement.
211. **t_find_neglected_goals**: Analyze a TELOS file and input instructions to identify goals or projects that have not been worked on recently.
212. **t_give_encouragement**: Analyze a TELOS file and input instructions to evaluate progress, provide encouragement, and offer recommendations for continued effort.
213. **t_red_team_thinking**: Analyze a TELOS file and input instructions to red-team thinking, models, and frames, then provide recommendations for improvement.
214. **t_threat_model_plans**: Analyze a TELOS file and input instructions to create threat models for a life plan and recommend improvements.
215. **t_visualize_mission_goals_projects**: Analyze a TELOS file and input instructions to create an ASCII art diagram illustrating the relationship of missions, goals, and projects.
216. **t_year_in_review**: Analyze a TELOS file to create insights about a person or entity, then summarize accomplishments and visualizations in bullet points.
217. **to_flashcards**: Create Anki flashcards from a given text, focusing on concise, optimized questions and answers without external context.
218. **transcribe_minutes**: Extracts (from meeting transcription) meeting minutes, identifying actionables, insightful ideas, decisions, challenges, and next steps in a structured format.
219. **translate**: Translates sentences or documentation into the specified language code while maintaining the original formatting and tone.
220. **tweet**: Provides a step-by-step guide on crafting engaging tweets with emojis, covering Twitter basics, account creation, features, and audience targeting.
221. **write_essay**: Writes essays in the style of a specified author, embodying their unique voice, vocabulary, and approach. Uses `author_name` variable.
222. **write_essay_pg**: Writes concise, clear essays in the style of Paul Graham, focusing on simplicity, clarity, and illumination of the provided topic.
223. **write_hackerone_report**: Generates concise, clear, and reproducible bug bounty reports, detailing vulnerability impact, steps to reproduce, and exploit details for triagers.
224. **write_latex**: Generates syntactically correct LaTeX code for a new .tex document, ensuring proper formatting and compatibility with pdflatex.
225. **write_micro_essay**: Writes concise, clear, and illuminating essays on the given topic in the style of Paul Graham.
226. **write_nuclei_template_rule**: Generates Nuclei YAML templates for detecting vulnerabilities using HTTP requests, matchers, extractors, and dynamic data extraction.
227. **write_pull-request**: Drafts detailed pull request descriptions, explaining changes, providing reasoning, and identifying potential bugs from the git diff command output.
228. **write_semgrep_rule**: Creates accurate and working Semgrep rules based on input, following syntax guidelines and specific language considerations.
229. **youtube_summary**: Create concise, timestamped YouTube video summaries that highlight key points.

View File

@@ -0,0 +1,37 @@
# IDENTITY and PURPOSE
You are an expert psychological analyst AI. Your task is to assess and predict how an individual is likely to respond to a specific challenge, based on their psychological profile and a challenge description, both of which will be provided in a single text stream.
---
# STEPS
1. You will be provided with one block of text containing two sections: a psychological profile (under a ***Psychodata*** header) and a description of a challenging situation (under a ***Challenge*** header). To reiterate, the two sections will be separated by the ***Challenge*** header, which signifies the beginning of the challenge description.
2. Carefully review both sections. Extract key traits, tendencies, and psychological markers from the profile. Analyze the nature and demands of the challenge described.
3. Carefully and methodically assess how each of the person's psychological traits is likely to interact with the specific demands and overall nature of the challenge.
4. In case of conflicting trait-challenge interactions, carefully and methodically weigh which of the conflicting traits is more dominant and would ultimately be the determining factor in shaping the person's reaction. When weighing which trait will "win out", also weigh the nuanced effect of the conflict itself: for example, will it inhibit or paradoxically increase the reaction's intensity? Will it cause another behaviour to emerge due to tension or a defense mechanism(s)?
5. Finally, after iterating through each of the traits and each of the conflicts between opposing traits, consider them as a whole (i.e., the psychological structure) and refine your prediction in relation to the challenge accordingly.
# OUTPUT
1. In your response, provide:
- **A brief summary of the individual's psychological profile** (bullet points).
- **A summary of the challenge or situation** (a few sentences).
- **A step-by-step assessment** of how the individual's psychological traits are likely to interact with the specific demands
of the challenge.
- **A prediction** of how the person is likely to respond or behave in this situation, including potential strengths,
vulnerabilities, and likely outcomes.
- **Recommendations** (if appropriate) for strategies that might help the individual achieve a better outcome.
2. Base your analysis strictly on the information provided. If important information is missing or ambiguous, note the
limitations in your assessment.
---
# EXAMPLE
USER:
***Psychodata***
The subject is a 27 year old male.
- He has poor impulse control and a low level of patience. He lacks the ability to focus on and/or commit to sustained challenges requiring effort.
- He is ego-driven to the point of narcissism; every criticism is a threat to his self-esteem.
- In his wors
***Challenge***
While standing in line for the cashier in a grocery store, a rude customer cuts in line in front of the subject.

View File

@@ -0,0 +1,40 @@
# IDENTITY
You are an experienced **yoga instructor and mindful living coach**. Your role is to guide users in a calm, clear, and compassionate manner. You will help them by following the steps stipulated below:
# STEPS
- Teach and provide practicing routines for **safe, effective yoga poses** (asana) with step-by-step guidance
- Help the user build **personalized sequences** suited to their experience level, goals, and any physical limitations
- Lead **guided meditations and relaxation exercises** that promote mindfulness and emotional balance
- Offer **holistic lifestyle advice** inspired by yogic principles—covering breathwork (pranayama), nutrition, sleep, posture, and daily wellbeing practices
- Foster an **atmosphere of serenity, self-awareness, and non-judgment** in every response
When responding, adapt your tone to be **soothing, encouraging, and introspective**, like a seasoned yoga teacher who integrates ancient wisdom into modern life.
# OUTPUT
Use the following structure in your replies:
1. **Opening grounding statement**: a brief reflection or centering phrase.
2. **Main guidance**: offer detailed, safe, and clear instructions or insights relevant to the user's query.
3. **Mindful takeaway**: close with a short reminder or reflection for continued mindfulness.
If users share specific goals (e.g., flexibility, relaxation, stress relief, back pain), **personalize** poses, sequences, or meditation practices accordingly.
If the user asks about a physical pose:
- Describe alignment carefully
- Explain how to modify for beginners or for safety
- Indicate common mistakes and how to avoid them
If the user asks about meditation or lifestyle:
- Offer simple, applicable techniques
- Encourage consistency and self-compassion
# EXAMPLE
USER: Recommend a gentle yoga sequence for improving focus during stressful workdays.
Expected Output Example:
1. Begin with a short centering breath to quiet the mind.
2. Flow through seated side stretches, cat-cow, mountain pose, and standing forward fold.
3. Conclude with a brief meditation on the breath.
4. Reflect on how each inhale brings focus, and each exhale releases tension.
End every interaction with a phrase like:
> “Breathe in calm, breathe out ease.”

View File

@@ -73,25 +73,25 @@ Match the request to one or more of these primary categories:
**AI**: ai, create_ai_jobs_analysis, create_art_prompt, create_pattern, create_prediction_block, extract_mcp_servers, extract_wisdom_agents, generate_code_rules, improve_prompt, judge_output, rate_ai_response, rate_ai_result, raw_query, suggest_pattern, summarize_prompt
**ANALYSIS**: ai, analyze_answers, analyze_bill, analyze_bill_short, analyze_candidates, analyze_cfp_submission, analyze_claims, analyze_comments, analyze_debate, analyze_email_headers, analyze_incident, analyze_interviewer_techniques, analyze_logs, analyze_malware, analyze_military_strategy, analyze_mistakes, analyze_paper, analyze_paper_simple, analyze_patent, analyze_personality, analyze_presentation, analyze_product_feedback, analyze_proposition, analyze_prose, analyze_prose_json, analyze_prose_pinker, analyze_risk, analyze_sales_call, analyze_spiritual_text, analyze_tech_impact, analyze_terraform_plan, analyze_threat_report, analyze_threat_report_cmds, analyze_threat_report_trends, apply_ul_tags, check_agreement, compare_and_contrast, create_ai_jobs_analysis, create_idea_compass, create_investigation_visualization, create_prediction_block, create_recursive_outline, create_story_about_people_interaction, create_tags, dialog_with_socrates, extract_main_idea, extract_predictions, find_hidden_message, find_logical_fallacies, get_wow_per_minute, identify_dsrp_distinctions, identify_dsrp_perspectives, identify_dsrp_relationships, identify_dsrp_systems, identify_job_stories, label_and_rate, prepare_7s_strategy, provide_guidance, rate_content, rate_value, recommend_artists, recommend_talkpanel_topics, review_design, summarize_board_meeting, t_analyze_challenge_handling, t_check_dunning_kruger, t_check_metrics, t_describe_life_outlook, t_extract_intro_sentences, t_extract_panel_topics, t_find_blindspots, t_find_negative_thinking, t_red_team_thinking, t_threat_model_plans, t_year_in_review, write_hackerone_report
**ANALYSIS**: ai, analyze_answers, analyze_bill, analyze_bill_short, analyze_candidates, analyze_cfp_submission, analyze_claims, analyze_comments, analyze_debate, analyze_email_headers, analyze_incident, analyze_interviewer_techniques, analyze_logs, analyze_malware, analyze_military_strategy, analyze_mistakes, analyze_paper, analyze_paper_simple, analyze_patent, analyze_personality, analyze_presentation, analyze_product_feedback, analyze_proposition, analyze_prose, analyze_prose_json, analyze_prose_pinker, analyze_risk, analyze_sales_call, analyze_spiritual_text, analyze_tech_impact, analyze_terraform_plan, analyze_threat_report, analyze_threat_report_cmds, analyze_threat_report_trends, apply_ul_tags, check_agreement, compare_and_contrast, create_ai_jobs_analysis, create_idea_compass, create_investigation_visualization, create_prediction_block, create_recursive_outline, create_story_about_people_interaction, create_tags, dialog_with_socrates, extract_main_idea, extract_predictions, find_hidden_message, find_logical_fallacies, get_wow_per_minute, identify_dsrp_distinctions, identify_dsrp_perspectives, identify_dsrp_relationships, identify_dsrp_systems, identify_job_stories, label_and_rate, model_as_sherlock_freud, predict_person_actions, prepare_7s_strategy, provide_guidance, rate_content, rate_value, recommend_artists, recommend_talkpanel_topics, review_design, summarize_board_meeting, t_analyze_challenge_handling, t_check_dunning_kruger, t_check_metrics, t_describe_life_outlook, t_extract_intro_sentences, t_extract_panel_topics, t_find_blindspots, t_find_negative_thinking, t_red_team_thinking, t_threat_model_plans, t_year_in_review, write_hackerone_report
**BILL**: analyze_bill, analyze_bill_short
**BUSINESS**: check_agreement, create_ai_jobs_analysis, create_formal_email, create_hormozi_offer, create_loe_document, create_logo, create_newsletter_entry, create_prd, explain_project, extract_business_ideas, extract_product_features, extract_skills, extract_sponsors, identify_job_stories, prepare_7s_strategy, rate_value, t_check_metrics, t_create_h3_career, t_visualize_mission_goals_projects, t_year_in_review, transcribe_minutes
**BUSINESS**: check_agreement, create_ai_jobs_analysis, create_formal_email, create_hormozi_offer, create_loe_document, create_logo, create_newsletter_entry, create_prd, explain_project, extract_business_ideas, extract_characters, extract_product_features, extract_skills, extract_sponsors, identify_job_stories, prepare_7s_strategy, rate_value, t_check_metrics, t_create_h3_career, t_visualize_mission_goals_projects, t_year_in_review, transcribe_minutes
**CLASSIFICATION**: apply_ul_tags
**CONVERSION**: clean_text, convert_to_markdown, create_graph_from_input, export_data_as_csv, extract_videoid, get_youtube_rss, humanize, md_callout, sanitize_broken_html_to_markdown, to_flashcards, transcribe_minutes, translate, tweet, write_latex
**CONVERSION**: clean_text, convert_to_markdown, create_graph_from_input, export_data_as_csv, extract_videoid, humanize, md_callout, sanitize_broken_html_to_markdown, to_flashcards, transcribe_minutes, translate, tweet, write_latex
**CR THINKING**: capture_thinkers_work, create_idea_compass, create_markmap_visualization, dialog_with_socrates, extract_alpha, extract_controversial_ideas, extract_extraordinary_claims, extract_predictions, extract_primary_problem, extract_wisdom_nometa, find_hidden_message, find_logical_fallacies, summarize_debate, t_analyze_challenge_handling, t_check_dunning_kruger, t_find_blindspots, t_find_negative_thinking, t_find_neglected_goals, t_red_team_thinking
**CREATIVITY**: create_mnemonic_phrases, write_essay
**DEVELOPMENT**: agility_story, analyze_logs, analyze_prose_json, answer_interview_question, ask_secure_by_design_questions, ask_uncle_duke, coding_master, create_coding_feature, create_coding_project, create_command, create_design_document, create_git_diff_commit, create_loe_document, create_mermaid_visualization, create_mermaid_visualization_for_github, create_pattern, create_prd, create_sigma_rules, create_user_story, explain_code, explain_docs, explain_project, export_data_as_csv, extract_algorithm_update_recommendations, extract_mcp_servers, extract_poc, extract_product_features, generate_code_rules, get_youtube_rss, identify_job_stories, improve_prompt, official_pattern_template, recommend_pipeline_upgrades, refine_design_document, review_code, review_design, sanitize_broken_html_to_markdown, suggest_pattern, summarize_git_changes, summarize_git_diff, summarize_pull-requests, write_nuclei_template_rule, write_pull-request, write_semgrep_rule
**DEVELOPMENT**: agility_story, analyze_logs, analyze_prose_json, answer_interview_question, ask_secure_by_design_questions, ask_uncle_duke, coding_master, create_coding_feature, create_coding_project, create_command, create_design_document, create_git_diff_commit, create_loe_document, create_mermaid_visualization, create_mermaid_visualization_for_github, create_pattern, create_prd, create_sigma_rules, create_user_story, explain_code, explain_docs, explain_project, export_data_as_csv, extract_algorithm_update_recommendations, extract_mcp_servers, extract_poc, extract_product_features, generate_code_rules, identify_job_stories, improve_prompt, official_pattern_template, recommend_pipeline_upgrades, refine_design_document, review_code, review_design, sanitize_broken_html_to_markdown, suggest_pattern, summarize_git_changes, summarize_git_diff, summarize_pull-requests, write_nuclei_template_rule, write_pull-request, write_semgrep_rule
**DEVOPS**: analyze_terraform_plan
**EXTRACT**: analyze_comments, create_aphorisms, create_tags, create_video_chapters, extract_algorithm_update_recommendations, extract_alpha, extract_article_wisdom, extract_book_ideas, extract_book_recommendations, extract_business_ideas, extract_controversial_ideas, extract_core_message, extract_ctf_writeup, extract_domains, extract_extraordinary_claims, extract_ideas, extract_insights, extract_insights_dm, extract_instructions, extract_jokes, extract_latest_video, extract_main_activities, extract_main_idea, extract_mcp_servers, extract_most_redeeming_thing, extract_patterns, extract_poc, extract_predictions, extract_primary_problem, extract_primary_solution, extract_product_features, extract_questions, extract_recipe, extract_recommendations, extract_references, extract_skills, extract_song_meaning, extract_sponsors, extract_videoid, extract_wisdom, extract_wisdom_agents, extract_wisdom_dm, extract_wisdom_nometa, extract_wisdom_short, generate_code_rules, t_extract_intro_sentences, t_extract_panel_topics
**EXTRACT**: analyze_comments, create_aphorisms, create_tags, create_video_chapters, extract_algorithm_update_recommendations, extract_alpha, extract_article_wisdom, extract_book_ideas, extract_book_recommendations, extract_business_ideas, extract_characters, extract_controversial_ideas, extract_core_message, extract_ctf_writeup, extract_domains, extract_extraordinary_claims, extract_ideas, extract_insights, extract_insights_dm, extract_instructions, extract_jokes, extract_latest_video, extract_main_activities, extract_main_idea, extract_mcp_servers, extract_most_redeeming_thing, extract_patterns, extract_poc, extract_predictions, extract_primary_problem, extract_primary_solution, extract_product_features, extract_questions, extract_recipe, extract_recommendations, extract_references, extract_skills, extract_song_meaning, extract_sponsors, extract_videoid, extract_wisdom, extract_wisdom_agents, extract_wisdom_dm, extract_wisdom_nometa, extract_wisdom_short, generate_code_rules, t_extract_intro_sentences, t_extract_panel_topics
**GAMING**: create_npc, create_rpg_summary, summarize_rpg_session
@@ -105,17 +105,19 @@ Match the request to one or more of these primary categories:
**SECURITY**: analyze_email_headers, analyze_incident, analyze_logs, analyze_malware, analyze_risk, analyze_terraform_plan, analyze_threat_report, analyze_threat_report_cmds, analyze_threat_report_trends, ask_secure_by_design_questions, create_command, create_cyber_summary, create_graph_from_input, create_investigation_visualization, create_network_threat_landscape, create_report_finding, create_security_update, create_sigma_rules, create_stride_threat_model, create_threat_scenarios, create_ttrc_graph, create_ttrc_narrative, extract_ctf_writeup, improve_report_finding, recommend_pipeline_upgrades, review_code, t_red_team_thinking, t_threat_model_plans, write_hackerone_report, write_nuclei_template_rule, write_semgrep_rule
**SELF**: analyze_mistakes, analyze_personality, analyze_spiritual_text, create_better_frame, create_diy, create_reading_plan, create_story_about_person, dialog_with_socrates, extract_article_wisdom, extract_book_ideas, extract_book_recommendations, extract_insights, extract_insights_dm, extract_most_redeeming_thing, extract_recipe, extract_recommendations, extract_song_meaning, extract_wisdom, extract_wisdom_dm, extract_wisdom_short, find_female_life_partner, heal_person, provide_guidance, recommend_artists, t_check_dunning_kruger, t_create_h3_career, t_describe_life_outlook, t_find_neglected_goals, t_give_encouragement
**SELF**: analyze_mistakes, analyze_personality, analyze_spiritual_text, create_better_frame, create_diy, create_reading_plan, create_story_about_person, dialog_with_socrates, extract_article_wisdom, extract_book_ideas, extract_book_recommendations, extract_insights, extract_insights_dm, extract_most_redeeming_thing, extract_recipe, extract_recommendations, extract_song_meaning, extract_wisdom, extract_wisdom_dm, extract_wisdom_short, find_female_life_partner, heal_person, model_as_sherlock_freud, predict_person_actions, provide_guidance, recommend_artists, recommend_yoga_practice, t_check_dunning_kruger, t_create_h3_career, t_describe_life_outlook, t_find_neglected_goals, t_give_encouragement
**STRATEGY**: analyze_military_strategy, create_better_frame, prepare_7s_strategy, t_analyze_challenge_handling, t_find_blindspots, t_find_negative_thinking, t_find_neglected_goals, t_red_team_thinking, t_threat_model_plans, t_visualize_mission_goals_projects
**SUMMARIZE**: capture_thinkers_work, create_5_sentence_summary, create_micro_summary, create_newsletter_entry, create_show_intro, create_summary, extract_core_message, extract_latest_video, extract_main_idea, summarize, summarize_board_meeting, summarize_debate, summarize_git_changes, summarize_git_diff, summarize_lecture, summarize_legislation, summarize_meeting, summarize_micro, summarize_newsletter, summarize_paper, summarize_pull-requests, summarize_rpg_session, youtube_summary
**VISUALIZE**: create_excalidraw_visualization, create_graph_from_input, create_idea_compass, create_investigation_visualization, create_keynote, create_logo, create_markmap_visualization, create_mermaid_visualization, create_mermaid_visualization_for_github, create_video_chapters, create_visualization, enrich_blog_post, t_visualize_mission_goals_projects
**VISUALIZE**: create_conceptmap, create_excalidraw_visualization, create_graph_from_input, create_idea_compass, create_investigation_visualization, create_keynote, create_logo, create_markmap_visualization, create_mermaid_visualization, create_mermaid_visualization_for_github, create_video_chapters, create_visualization, enrich_blog_post, t_visualize_mission_goals_projects
**WISDOM**: extract_alpha, extract_article_wisdom, extract_book_ideas, extract_insights, extract_most_redeeming_thing, extract_recommendations, extract_wisdom, extract_wisdom_dm, extract_wisdom_nometa, extract_wisdom_short
**WRITING**: analyze_prose_json, analyze_prose_pinker, apply_ul_tags, clean_text, compare_and_contrast, convert_to_markdown, create_5_sentence_summary, create_academic_paper, create_aphorisms, create_better_frame, create_design_document, create_diy, create_formal_email, create_hormozi_offer, create_keynote, create_micro_summary, create_newsletter_entry, create_prediction_block, create_prd, create_show_intro, create_story_about_people_interaction, create_story_explanation, create_summary, create_tags, create_user_story, enrich_blog_post, explain_docs, explain_terms, humanize, improve_academic_writing, improve_writing, label_and_rate, md_callout, official_pattern_template, recommend_talkpanel_topics, refine_design_document, summarize, summarize_debate, summarize_lecture, summarize_legislation, summarize_meeting, summarize_micro, summarize_newsletter, summarize_paper, summarize_rpg_session, t_create_opening_sentences, t_describe_life_outlook, t_extract_intro_sentences, t_extract_panel_topics, t_give_encouragement, t_year_in_review, transcribe_minutes, tweet, write_essay, write_essay_pg, write_hackerone_report, write_latex, write_micro_essay, write_pull-request
**WELLNESS**: analyze_spiritual_text, create_better_frame, extract_wisdom_dm, heal_person, model_as_sherlock_freud, predict_person_actions, provide_guidance, recommend_yoga_practice, t_give_encouragement
**WRITING**: analyze_prose_json, analyze_prose_pinker, apply_ul_tags, clean_text, compare_and_contrast, convert_to_markdown, create_5_sentence_summary, create_academic_paper, create_aphorisms, create_better_frame, create_design_document, create_diy, create_formal_email, create_hormozi_offer, create_keynote, create_micro_summary, create_newsletter_entry, create_prediction_block, create_prd, create_show_intro, create_story_about_people_interaction, create_story_explanation, create_summary, create_tags, create_user_story, enrich_blog_post, explain_docs, explain_terms, fix_typos, humanize, improve_academic_writing, improve_writing, label_and_rate, md_callout, official_pattern_template, recommend_talkpanel_topics, refine_design_document, summarize, summarize_debate, summarize_lecture, summarize_legislation, summarize_meeting, summarize_micro, summarize_newsletter, summarize_paper, summarize_rpg_session, t_create_opening_sentences, t_describe_life_outlook, t_extract_intro_sentences, t_extract_panel_topics, t_give_encouragement, t_year_in_review, transcribe_minutes, tweet, write_essay, write_essay_pg, write_hackerone_report, write_latex, write_micro_essay, write_pull-request
## Workflow Suggestions

View File

@@ -296,6 +296,14 @@ Extract/analyze user job stories to understand motivations.
Categorize/evaluate content by assigning labels and ratings.
### model_as_sherlock_freud
Builds psychological models using detective reasoning and psychoanalytic insight.
### predict_person_actions
Predicts behavioral responses based on psychological profiles and challenges.
### prepare_7s_strategy
Apply McKinsey 7S framework to analyze organizational alignment.
@@ -394,6 +402,10 @@ Extract novel ideas from books to inspire new projects.
Extract/prioritize practical advice from books.
### extract_characters
Identify all characters (human and non-human), resolve their aliases and pronouns into canonical names, and produce detailed descriptions of each character's role, motivations, and interactions ranked by narrative importance.
### extract_controversial_ideas
Analyze contentious viewpoints while maintaining objective analysis.
@@ -594,6 +606,10 @@ Transform technical docs into clearer explanations with examples.
Create glossaries of advanced terms with definitions and analogies.
### fix_typos
Proofreads and corrects typos, spelling, grammar, and punctuation errors.
### humanize
Transform technical content into approachable language.
@@ -876,6 +892,10 @@ Convert content into flashcard format for learning.
## VISUALIZATION PATTERNS
### create_conceptmap
Transform unstructured text or markdown content into interactive HTML concept maps using Vis.js by extracting key concepts and their logical relationships.
### create_excalidraw_visualization
Create visualizations using Excalidraw.
@@ -922,10 +942,6 @@ Convert content to markdown, preserving original content and structure.
Extract data and convert to CSV, preserving data integrity.
### get_youtube_rss
Generate RSS feed URLs for YouTube channels.
### sanitize_broken_html_to_markdown
Clean/convert malformed HTML to markdown.
@@ -979,3 +995,9 @@ Summarize RPG sessions capturing events, combat, and narrative.
### extract_jokes
Extract/categorize jokes, puns, and witty remarks.
## WELLNESS PATTERNS
### recommend_yoga_practice
Provides personalized yoga sequences, meditation guidance, and holistic lifestyle advice based on individual profiles.

140
docs/i18n-variants.md Normal file
View File

@@ -0,0 +1,140 @@
# Language Variants Support in Fabric
## Current Implementation
As of this update, Fabric supports Portuguese language variants:
- `pt-BR` - Brazilian Portuguese
- `pt-PT` - European Portuguese
- `pt` - defaults to `pt-BR` for backward compatibility
## Architecture
The i18n system supports language variants through:
1. **BCP 47 Format**: All locales are normalized to BCP 47 format (language-REGION)
2. **Fallback Chain**: Regional variants fall back to base language, then to configured defaults (see the sketch below)
3. **Default Variant Mapping**: Languages without base files can specify default regional variants
4. **Flexible Input**: Accepts both underscore (pt_BR) and hyphen (pt-BR) formats
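As a rough illustration, here is a minimal, self-contained sketch of that fallback chain, mirroring the `getLocaleCandidates` logic and the `defaultLanguageVariants` map from `i18n.go` (simplified; the `candidates` helper name is illustrative, not the actual API):

```go
package main

import (
	"fmt"
	"strings"
)

// candidates sketches the fallback chain: try the exact locale first,
// then the base language, then the base language's default regional
// variant if one is configured (simplified from getLocaleCandidates).
func candidates(locale string) []string {
	defaults := map[string]string{"pt": "pt-BR"}
	out := []string{locale}
	if base, _, found := strings.Cut(locale, "-"); found {
		out = append(out, base)
		if def, ok := defaults[base]; ok && def != locale {
			out = append(out, def)
		}
	} else if def, ok := defaults[locale]; ok {
		out = append(out, def)
	}
	return out
}

func main() {
	fmt.Println(candidates("pt-PT")) // [pt-PT pt pt-BR]
	fmt.Println(candidates("pt"))    // [pt pt-BR]
	fmt.Println(candidates("en-US")) // [en-US en]
}
```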
## Recommended Future Variants
Based on user demographics and linguistic differences, these variants would provide the most value:
### High Priority
1. **Chinese Variants**
- `zh-CN` - Simplified Chinese (Mainland China)
- `zh-TW` - Traditional Chinese (Taiwan)
- `zh-HK` - Traditional Chinese (Hong Kong)
- Default: `zh` → `zh-CN`
- Rationale: Significant script and vocabulary differences
2. **Spanish Variants**
- `es-ES` - European Spanish (Spain)
- `es-MX` - Mexican Spanish
- `es-AR` - Argentinian Spanish
- Default: `es` → `es-ES`
- Rationale: Notable vocabulary and conjugation differences
3. **English Variants**
- `en-US` - American English
- `en-GB` - British English
- `en-AU` - Australian English
- Default: `en` → `en-US`
- Rationale: Spelling differences (color/colour, organize/organise)
4. **French Variants**
- `fr-FR` - France French
- `fr-CA` - Canadian French
- Default: `fr` → `fr-FR`
- Rationale: Some vocabulary and expression differences
5. **Arabic Variants**
- `ar-SA` - Saudi Arabic (Modern Standard)
- `ar-EG` - Egyptian Arabic
- Default: `ar` → `ar-SA`
- Rationale: Significant dialectal differences
6. **German Variants**
- `de-DE` - Germany German
- `de-AT` - Austrian German
- `de-CH` - Swiss German
- Default: `de` → `de-DE`
- Rationale: Minor differences, mostly vocabulary
## Implementation Guidelines
When adding new language variants:
1. **Determine the Base**: Decide which variant should be the default
2. **Create Variant Files**: Copy base file and adjust for regional differences
3. **Update Default Map**: Add to `defaultLanguageVariants` if needed
4. **Focus on Key Differences**:
- Technical terminology
- Common UI terms (file/ficheiro, save/guardar)
- Date/time formats
- Currency references
- Formal/informal address conventions
5. **Test Thoroughly**: Ensure fallback chain works correctly
## Adding a New Variant
To add a new language variant:
1. Copy the base language file:
```bash
cp locales/es.json locales/es-MX.json
```
2. Adjust translations for regional differences
3. If this is the first variant for a language, update `i18n.go`:
```go
var defaultLanguageVariants = map[string]string{
"pt": "pt-BR",
"es": "es-MX", // Add if Mexican Spanish should be default
}
```
4. Add tests for the new variant
5. Update documentation
## Language Variant Naming Convention
Follow BCP 47 standards:
- Language code: lowercase (pt, es, en)
- Region code: uppercase (BR, PT, US)
- Separator: hyphen (pt-BR, not pt_BR)
Input normalization handles various formats, but files and internal references should use BCP 47.
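For example, normalization (per `normalizeToBCP47`) maps inputs like these:

```text
pt_br    -> pt-BR   (underscore to hyphen, casing normalized)
PT-BR    -> pt-BR
pt-br-x  -> pt-BR   (only language-REGION is kept)
EN       -> en
```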
## Testing Variants
Test each variant with:
```bash
# Direct specification
fabric --help -g=pt-BR
fabric --help -g=pt-PT
# Environment variable
LANG=pt_BR.UTF-8 fabric --help
# Fallback behavior
fabric --help -g=pt # Should use pt-BR
```
## Maintenance Considerations
When updating translations:
1. Update all variants of a language together
2. Ensure key parity across all variants
3. Test fallback behavior after changes
4. Consider using translation memory tools for consistency

4
go.mod
View File

@@ -3,7 +3,7 @@ module github.com/danielmiessler/fabric
go 1.25.1
require (
github.com/anthropics/anthropic-sdk-go v1.12.0
github.com/anthropics/anthropic-sdk-go v1.16.0
github.com/atotto/clipboard v0.1.4
github.com/aws/aws-sdk-go-v2 v1.39.0
github.com/aws/aws-sdk-go-v2/config v1.31.8
@@ -35,6 +35,8 @@ require (
)
require (
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.19.1 // indirect
github.com/Azure/azure-sdk-for-go/sdk/internal v1.11.2 // indirect
github.com/google/go-cmp v0.7.0 // indirect
github.com/gorilla/websocket v1.5.3 // indirect
)

18
go.sum
View File

@@ -8,6 +8,14 @@ cloud.google.com/go/compute/metadata v0.8.0 h1:HxMRIbao8w17ZX6wBnjhcDkW6lTFpgcao
cloud.google.com/go/compute/metadata v0.8.0/go.mod h1:sYOGTp851OV9bOFJ9CH7elVvyzopvWQFNNghtDQ/Biw=
dario.cat/mergo v1.0.2 h1:85+piFYR1tMbRrLcDwR18y4UKJ3aH1Tbzi24VRW1TK8=
dario.cat/mergo v1.0.2/go.mod h1:E/hbnu0NxMFBjpMIE34DRGLWqDy0g5FuKDhCb31ngxA=
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.19.1 h1:5YTBM8QDVIBN3sxBil89WfdAAqDZbyJTgh688DSxX5w=
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.19.1/go.mod h1:YD5h/ldMsG0XiIw7PdyNhLxaM317eFh5yNLccNfGdyw=
github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.10.1 h1:B+blDbyVIG3WaikNxPnhPiJ1MThR03b3vKGtER95TP4=
github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.10.1/go.mod h1:JdM5psgjfBf5fo2uWOZhflPWyDBZ/O/CNAH9CtsuZE4=
github.com/Azure/azure-sdk-for-go/sdk/internal v1.11.2 h1:9iefClla7iYpfYWdzPCRDozdmndjTm8DXdpCzPajMgA=
github.com/Azure/azure-sdk-for-go/sdk/internal v1.11.2/go.mod h1:XtLgD3ZD34DAaVIIAyG3objl5DynM3CQ/vMcbBNJZGI=
github.com/AzureAD/microsoft-authentication-library-for-go v1.4.2 h1:oygO0locgZJe7PpYPXT5A29ZkwJaPqcva7BVeemZOZs=
github.com/AzureAD/microsoft-authentication-library-for-go v1.4.2/go.mod h1:wP83P5OoQ5p6ip3ScPr0BAq0BvuPAvacpEuSzyouqAI=
github.com/BurntSushi/toml v1.5.0 h1:W5quZX/G/csjUnuI8SUYlsHs9M38FC7znL0lIO+DvMg=
github.com/BurntSushi/toml v1.5.0/go.mod h1:ukJfTF/6rtPPRCnwkur4qwRxa8vTRFBF0uk2lLoLwho=
github.com/Microsoft/go-winio v0.5.2/go.mod h1:WpS1mjBmmwHBEWmogvA2mj8546UReBk4v8QkMxJ6pZY=
@@ -19,8 +27,8 @@ github.com/andybalholm/cascadia v1.3.3 h1:AG2YHrzJIm4BZ19iwJ/DAua6Btl3IwJX+VI4kk
github.com/andybalholm/cascadia v1.3.3/go.mod h1:xNd9bqTn98Ln4DwST8/nG+H0yuB8Hmgu1YHNnWw0GeA=
github.com/anmitsu/go-shlex v0.0.0-20200514113438-38f4b401e2be h1:9AeTilPcZAjCFIImctFaOjnTIavg87rW78vTPkQqLI8=
github.com/anmitsu/go-shlex v0.0.0-20200514113438-38f4b401e2be/go.mod h1:ySMOLuWl6zY27l47sB3qLNK6tF2fkHG55UZxx8oIVo4=
github.com/anthropics/anthropic-sdk-go v1.12.0 h1:xPqlGnq7rWrTiHazIvCiumA0u7mGQnwDQtvA1M82h9U=
github.com/anthropics/anthropic-sdk-go v1.12.0/go.mod h1:WTz31rIUHUHqai2UslPpw5CwXrQP3geYBioRV4WOLvE=
github.com/anthropics/anthropic-sdk-go v1.16.0 h1:nRkOFDqYXsHteoIhjdJr/5dsiKbFF3rflSv8ax50y8o=
github.com/anthropics/anthropic-sdk-go v1.16.0/go.mod h1:WTz31rIUHUHqai2UslPpw5CwXrQP3geYBioRV4WOLvE=
github.com/araddon/dateparse v0.0.0-20210429162001-6b43995a97de h1:FxWPpzIjnTlhPwqqXc4/vE0f7GvRjuAsbW+HOIe8KnA=
github.com/araddon/dateparse v0.0.0-20210429162001-6b43995a97de/go.mod h1:DCaWoUhZrYW9p1lxo/cm8EmUOOzAPSEZNGF2DK1dJgw=
github.com/armon/go-socks5 v0.0.0-20160902184237-e75332964ef5 h1:0CwZNZbxp69SHPdPJAN/hZIm0C4OItdklCFmMRWYpio=
@@ -121,6 +129,8 @@ github.com/goccy/go-json v0.10.5 h1:Fq85nIqj+gXn/S5ahsiTlK3TmC85qgirsdTP/+DeaC4=
github.com/goccy/go-json v0.10.5/go.mod h1:oq7eo15ShAhp70Anwd5lgX2pLfOS3QCiwU/PULtXL6M=
github.com/gogs/chardet v0.0.0-20211120154057-b7413eaefb8f h1:3BSP1Tbs2djlpprl7wCLuiqMaUh5SJkkzI2gDs+FgLs=
github.com/gogs/chardet v0.0.0-20211120154057-b7413eaefb8f/go.mod h1:Pcatq5tYkCW2Q6yrR2VRHlbHpZ/R4/7qyL1TCF7vl14=
github.com/golang-jwt/jwt/v5 v5.2.2 h1:Rl4B7itRWVtYIHFrSNd7vhTiz9UpLdi6gZhZ3wEeDy8=
github.com/golang-jwt/jwt/v5 v5.2.2/go.mod h1:pqrtFR0X4osieyHYxtmOUWsAWrfe1Q5UVIyoH402zdk=
github.com/golang/groupcache v0.0.0-20241129210726-2c02b8208cf8 h1:f+oWsMOmNPc8JmEHVZIycC7hBoQxHH9pNKQORJNozsQ=
github.com/golang/groupcache v0.0.0-20241129210726-2c02b8208cf8/go.mod h1:wcDNUvekVysuuOpQKo3191zZyTpiI6se1N1ULghS0sw=
github.com/golang/protobuf v1.5.4 h1:i7eJL8qZTpSEXOPTxNKhASYpMn+8e5Q6AdndVa1dWek=
@@ -171,6 +181,8 @@ github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
github.com/kylelemons/godebug v1.1.0 h1:RPNrshWIDI6G2gRW9EHilWtl7Z6Sb1BR0xunSBf0SNc=
github.com/kylelemons/godebug v1.1.0/go.mod h1:9/0rRGxNHcop5bhtWyNeEfOS8JIWk580+fNqagV/RAw=
github.com/leodido/go-urn v1.4.0 h1:WT9HwE9SGECu3lg4d/dIA+jxlljEa1/ffXKmRjqdmIQ=
github.com/leodido/go-urn v1.4.0/go.mod h1:bvxc+MVxLKB4z00jd1z+Dvzr47oO32F/QSNjSBOlFxI=
github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWEY=
@@ -199,6 +211,8 @@ github.com/pelletier/go-toml/v2 v2.2.4 h1:mye9XuhQ6gvn5h28+VilKrrPoQVanw5PMw/TB0
github.com/pelletier/go-toml/v2 v2.2.4/go.mod h1:2gIqNv+qfxSVS7cM2xJQKtLSTLUE9V8t9Stt+h56mCY=
github.com/pjbgf/sha1cd v0.4.0 h1:NXzbL1RvjTUi6kgYZCX3fPwwl27Q1LJndxtUDVfJGRY=
github.com/pjbgf/sha1cd v0.4.0/go.mod h1:zQWigSxVmsHEZow5qaLtPYxpcKMMQpa09ixqBxuCS6A=
github.com/pkg/browser v0.0.0-20240102092130-5ac0b6a4141c h1:+mdjkGKdHQG3305AYmdv1U2eRNDiU2ErMBj1gwrq8eQ=
github.com/pkg/browser v0.0.0-20240102092130-5ac0b6a4141c/go.mod h1:7rwL4CYBLnjLxUqIJNnCWiEdr3bn6IUYi15bNlnbCCU=
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=

View File

@@ -35,7 +35,7 @@ type Flags struct {
TopP float64 `short:"T" long:"topp" yaml:"topp" description:"Set top P" default:"0.9"`
Stream bool `short:"s" long:"stream" yaml:"stream" description:"Stream"`
PresencePenalty float64 `short:"P" long:"presencepenalty" yaml:"presencepenalty" description:"Set presence penalty" default:"0.0"`
Raw bool `short:"r" long:"raw" yaml:"raw" description:"Use the defaults of the model without sending chat options (like temperature etc.) and use the user role instead of the system role for patterns."`
Raw bool `short:"r" long:"raw" yaml:"raw" description:"Use the defaults of the model without sending chat options (temperature, top_p, etc.). Only affects OpenAI-compatible providers. Anthropic models always use smart parameter selection to comply with model-specific requirements."`
FrequencyPenalty float64 `short:"F" long:"frequencypenalty" yaml:"frequencypenalty" description:"Set frequency penalty" default:"0.0"`
ListPatterns bool `short:"l" long:"listpatterns" description:"List all patterns"`
ListAllModels bool `short:"L" long:"listmodels" description:"List all available models"`

View File

@@ -29,6 +29,9 @@ func CreateOutputFile(message string, fileName string) (err error) {
return
}
defer file.Close()
if !strings.HasSuffix(message, "\n") {
message += "\n"
}
if _, err = file.WriteString(message); err != nil {
err = fmt.Errorf("%s", fmt.Sprintf(i18n.T("error_writing_to_file"), err))
} else {

View File

@@ -24,5 +24,34 @@ func TestCreateOutputFile(t *testing.T) {
t.Fatalf("CreateOutputFile() error = %v", err)
}
defer os.Remove(fileName)
t.Cleanup(func() { os.Remove(fileName) })
data, err := os.ReadFile(fileName)
if err != nil {
t.Fatalf("failed to read output file: %v", err)
}
expected := message + "\n"
if string(data) != expected {
t.Fatalf("expected file contents %q, got %q", expected, data)
}
}
func TestCreateOutputFileMessageWithTrailingNewline(t *testing.T) {
fileName := "test_output_with_newline.txt"
message := "test message with newline\n"
if err := CreateOutputFile(message, fileName); err != nil {
t.Fatalf("CreateOutputFile() error = %v", err)
}
t.Cleanup(func() { os.Remove(fileName) })
data, err := os.ReadFile(fileName)
if err != nil {
t.Fatalf("failed to read output file: %v", err)
}
if string(data) != message {
t.Fatalf("expected file contents %q, got %q", message, data)
}
}

View File

@@ -69,6 +69,7 @@ func (o *Chatter) Send(request *domain.ChatRequest, opts *domain.ChatOptions) (s
responseChan := make(chan string)
errChan := make(chan error, 1)
done := make(chan struct{})
printedStream := false
go func() {
defer close(done)
@@ -81,9 +82,14 @@ func (o *Chatter) Send(request *domain.ChatRequest, opts *domain.ChatOptions) (s
message += response
if !opts.SuppressThink {
fmt.Print(response)
printedStream = true
}
}
if printedStream && !opts.SuppressThink && !strings.HasSuffix(message, "\n") {
fmt.Println()
}
// Wait for goroutine to finish
<-done
@@ -175,7 +181,7 @@ func (o *Chatter) BuildSession(request *domain.ChatRequest, raw bool) (session *
if request.Message == nil {
request.Message = &chat.ChatCompletionMessage{
Role: chat.ChatMessageRoleUser,
Content: " ",
Content: "",
}
}

View File

@@ -25,6 +25,22 @@ var (
initOnce sync.Once
)
// defaultLanguageVariants maps language codes without regions to their default regional variants.
// This is used when a language without a base file is requested.
var defaultLanguageVariants = map[string]string{
"pt": "pt-BR", // Portuguese defaults to Brazilian Portuguese for backward compatibility
// Note: We currently have base files for these languages, but if we add regional variants
// in the future, these defaults will be used:
// "de": "de-DE", // German would default to Germany German
// "en": "en-US", // English would default to US English
// "es": "es-ES", // Spanish would default to Spain Spanish
// "fa": "fa-IR", // Persian would default to Iran Persian
// "fr": "fr-FR", // French would default to France French
// "it": "it-IT", // Italian would default to Italy Italian
// "ja": "ja-JP", // Japanese would default to Japan Japanese
// "zh": "zh-CN", // Chinese would default to Simplified Chinese
}
// Init initializes the i18n bundle and localizer. It loads the specified locale
// and falls back to English if loading fails.
// Translation files are searched in the user config directory and downloaded
@@ -35,6 +51,8 @@ var (
func Init(locale string) (*i18n.Localizer, error) {
// Use preferred locale detection if no explicit locale provided
locale = getPreferredLocale(locale)
// Normalize the locale to BCP 47 format (with hyphens)
locale = normalizeToBCP47(locale)
if locale == "" {
locale = "en"
}
@@ -42,19 +60,21 @@ func Init(locale string) (*i18n.Localizer, error) {
bundle := i18n.NewBundle(language.English)
bundle.RegisterUnmarshalFunc("json", json.Unmarshal)
// load embedded translations for the requested locale if available
// Build a list of locale candidates to try
locales := getLocaleCandidates(locale)
// Try to load embedded translations for each candidate
embedded := false
if data, err := localeFS.ReadFile("locales/" + locale + ".json"); err == nil {
_, _ = bundle.ParseMessageFileBytes(data, locale+".json")
embedded = true
} else if strings.Contains(locale, "-") {
// Try base language if regional variant not found (e.g., es-ES -> es)
baseLang := strings.Split(locale, "-")[0]
if data, err := localeFS.ReadFile("locales/" + baseLang + ".json"); err == nil {
_, _ = bundle.ParseMessageFileBytes(data, baseLang+".json")
for _, candidate := range locales {
if data, err := localeFS.ReadFile("locales/" + candidate + ".json"); err == nil {
_, _ = bundle.ParseMessageFileBytes(data, candidate+".json")
embedded = true
locale = candidate // Update locale to what was actually loaded
break
}
}
// Fall back to English if nothing was loaded
if !embedded {
if data, err := localeFS.ReadFile("locales/en.json"); err == nil {
_, _ = bundle.ParseMessageFileBytes(data, "en.json")
@@ -158,3 +178,63 @@ func tryGetMessage(locale, messageID string) string {
}
return ""
}
// normalizeToBCP47 normalizes a locale string to BCP 47 format.
// Converts underscores to hyphens and ensures proper casing (language-REGION).
func normalizeToBCP47(locale string) string {
if locale == "" {
return ""
}
// Replace underscores with hyphens
locale = strings.ReplaceAll(locale, "_", "-")
// Split into parts
parts := strings.Split(locale, "-")
if len(parts) == 1 {
// Language only, lowercase it
return strings.ToLower(parts[0])
} else if len(parts) >= 2 {
// Language and region (and possibly more)
// Lowercase language, uppercase region
parts[0] = strings.ToLower(parts[0])
parts[1] = strings.ToUpper(parts[1])
return strings.Join(parts[:2], "-") // Return only language-REGION
}
return locale
}
// getLocaleCandidates returns a list of locale candidates to try, in order of preference.
// For example, for "pt-PT" it returns ["pt-PT", "pt", "pt-BR"] (where pt-BR is the default for pt).
func getLocaleCandidates(locale string) []string {
candidates := []string{}
if locale == "" {
return candidates
}
// First candidate is always the requested locale
candidates = append(candidates, locale)
// If it's a regional variant, add the base language as a candidate
if strings.Contains(locale, "-") {
baseLang := strings.Split(locale, "-")[0]
candidates = append(candidates, baseLang)
// Also check if the base language has a default variant
if defaultVariant, exists := defaultLanguageVariants[baseLang]; exists {
// Only add if it's different from what we already have
if defaultVariant != locale {
candidates = append(candidates, defaultVariant)
}
}
} else {
// If this is a base language without a region, check for default variant
if defaultVariant, exists := defaultLanguageVariants[locale]; exists {
candidates = append(candidates, defaultVariant)
}
}
return candidates
}

View File

@@ -0,0 +1,175 @@
package i18n
import (
"testing"
goi18n "github.com/nicksnyder/go-i18n/v2/i18n"
)
func TestNormalizeToBCP47(t *testing.T) {
tests := []struct {
input string
expected string
}{
// Basic cases
{"pt", "pt"},
{"pt-BR", "pt-BR"},
{"pt-PT", "pt-PT"},
// Underscore normalization
{"pt_BR", "pt-BR"},
{"pt_PT", "pt-PT"},
{"en_US", "en-US"},
// Mixed case normalization
{"pt-br", "pt-BR"},
{"PT-BR", "pt-BR"},
{"Pt-Br", "pt-BR"},
{"pT-bR", "pt-BR"},
// Language only cases
{"EN", "en"},
{"Pt", "pt"},
{"ZH", "zh"},
// Empty string
{"", ""},
}
for _, tt := range tests {
t.Run(tt.input, func(t *testing.T) {
result := normalizeToBCP47(tt.input)
if result != tt.expected {
t.Errorf("normalizeToBCP47(%q) = %q; want %q", tt.input, result, tt.expected)
}
})
}
}
func TestGetLocaleCandidates(t *testing.T) {
tests := []struct {
input string
expected []string
}{
// Portuguese variants
{"pt-PT", []string{"pt-PT", "pt", "pt-BR"}}, // pt-BR is default for pt
{"pt-BR", []string{"pt-BR", "pt"}}, // pt-BR doesn't need default since it IS the default
{"pt", []string{"pt", "pt-BR"}}, // pt defaults to pt-BR
// Other languages without default variants
{"en-US", []string{"en-US", "en"}},
{"en", []string{"en"}},
{"fr-FR", []string{"fr-FR", "fr"}},
{"zh-CN", []string{"zh-CN", "zh"}},
// Empty
{"", []string{}},
}
for _, tt := range tests {
t.Run(tt.input, func(t *testing.T) {
result := getLocaleCandidates(tt.input)
if len(result) != len(tt.expected) {
t.Errorf("getLocaleCandidates(%q) returned %d candidates; want %d",
tt.input, len(result), len(tt.expected))
t.Errorf(" got: %v", result)
t.Errorf(" want: %v", tt.expected)
return
}
for i, candidate := range result {
if candidate != tt.expected[i] {
t.Errorf("getLocaleCandidates(%q)[%d] = %q; want %q",
tt.input, i, candidate, tt.expected[i])
}
}
})
}
}
func TestPortugueseVariantLoading(t *testing.T) {
// Test that both Portuguese variants can be loaded
testCases := []struct {
locale string
desc string
}{
{"pt", "Portuguese (defaults to Brazilian)"},
{"pt-BR", "Brazilian Portuguese"},
{"pt-PT", "European Portuguese"},
{"pt_BR", "Brazilian Portuguese with underscore"},
{"pt_PT", "European Portuguese with underscore"},
}
for _, tc := range testCases {
t.Run(tc.desc, func(t *testing.T) {
localizer, err := Init(tc.locale)
if err != nil {
t.Errorf("Init(%q) failed: %v", tc.locale, err)
return
}
if localizer == nil {
t.Errorf("Init(%q) returned nil localizer", tc.locale)
}
// Try to get a message to verify it loaded correctly
msg := localizer.MustLocalize(&goi18n.LocalizeConfig{MessageID: "help_message"})
if msg == "" {
t.Errorf("Failed to localize message for locale %q", tc.locale)
}
})
}
}
func TestPortugueseVariantDistinction(t *testing.T) {
// Test that pt-BR and pt-PT return different translations
localizerBR, err := Init("pt-BR")
if err != nil {
t.Fatalf("Failed to init pt-BR: %v", err)
}
localizerPT, err := Init("pt-PT")
if err != nil {
t.Fatalf("Failed to init pt-PT: %v", err)
}
// Check a key that should differ between variants
// "output_to_file" should be "Exportar para arquivo" in pt-BR and "Saída para ficheiro" in pt-PT
msgBR := localizerBR.MustLocalize(&goi18n.LocalizeConfig{MessageID: "output_to_file"})
msgPT := localizerPT.MustLocalize(&goi18n.LocalizeConfig{MessageID: "output_to_file"})
if msgBR == msgPT {
t.Errorf("pt-BR and pt-PT returned the same translation for 'output_to_file': %q", msgBR)
}
// Verify specific expected values
if msgBR != "Exportar para arquivo" {
t.Errorf("pt-BR 'output_to_file' = %q; want 'Exportar para arquivo'", msgBR)
}
if msgPT != "Saída para ficheiro" {
t.Errorf("pt-PT 'output_to_file' = %q; want 'Saída para ficheiro'", msgPT)
}
}
func TestBackwardCompatibility(t *testing.T) {
// Test that requesting "pt" still works and defaults to pt-BR
localizerPT, err := Init("pt")
if err != nil {
t.Fatalf("Failed to init 'pt': %v", err)
}
localizerBR, err := Init("pt-BR")
if err != nil {
t.Fatalf("Failed to init 'pt-BR': %v", err)
}
// Both should return the same Brazilian Portuguese translation
msgPT := localizerPT.MustLocalize(&goi18n.LocalizeConfig{MessageID: "output_to_file"})
msgBR := localizerBR.MustLocalize(&goi18n.LocalizeConfig{MessageID: "output_to_file"})
if msgPT != msgBR {
t.Errorf("'pt' and 'pt-BR' returned different translations: %q vs %q", msgPT, msgBR)
}
if msgPT != "Exportar para arquivo" {
t.Errorf("'pt' did not default to Brazilian Portuguese. Got %q, want 'Exportar para arquivo'", msgPT)
}
}

View File

@@ -52,6 +52,18 @@ func normalizeLocale(locale string) string {
// en_US -> en-US
locale = strings.ReplaceAll(locale, "_", "-")
// Ensure proper BCP 47 casing: language-REGION
parts := strings.Split(locale, "-")
if len(parts) >= 2 {
// Lowercase language, uppercase region
parts[0] = strings.ToLower(parts[0])
parts[1] = strings.ToUpper(parts[1])
locale = strings.Join(parts[:2], "-") // Only keep language-REGION
} else if len(parts) == 1 {
// Language only, lowercase it
locale = strings.ToLower(parts[0])
}
return locale
}

View File

@@ -53,7 +53,7 @@
"set_top_p": "Top P festlegen",
"stream_help": "Streaming",
"set_presence_penalty": "Präsenzstrafe festlegen",
"use_model_defaults_raw_help": "Verwende die Standardwerte des Modells ohne Senden von Chat-Optionen (wie Temperatur usw.) und verwende die Benutzerrolle anstelle der Systemrolle für Muster.",
"use_model_defaults_raw_help": "Verwende die Standardwerte des Modells, ohne Chat-Optionen (temperature, top_p usw.) zu senden. Gilt nur für OpenAI-kompatible Anbieter. Anthropic-Modelle verwenden stets eine intelligente Parameterauswahl, um modell-spezifische Anforderungen einzuhalten.",
"set_frequency_penalty": "Häufigkeitsstrafe festlegen",
"list_all_patterns": "Alle Muster auflisten",
"list_all_available_models": "Alle verfügbaren Modelle auflisten",
@@ -76,7 +76,7 @@
"grab_comments_from_youtube": "Kommentare von YouTube-Video abrufen und an Chat senden",
"output_video_metadata": "Video-Metadaten ausgeben",
"additional_yt_dlp_args": "Zusätzliche Argumente für yt-dlp (z.B. '--cookies-from-browser brave')",
"specify_language_code": "Sprachcode für den Chat angeben, z.B. -g=en -g=zh",
"specify_language_code": "Sprachencode für den Chat angeben, z.B. -g=en -g=zh -g=pt-BR -g=pt-PT",
"scrape_website_url": "Website-URL zu Markdown mit Jina AI scrapen",
"search_question_jina": "Suchanfrage mit Jina AI",
"seed_for_lmm_generation": "Seed für LMM-Generierung",
@@ -133,4 +133,4 @@
"no_description_available": "Keine Beschreibung verfügbar",
"i18n_download_failed": "Fehler beim Herunterladen der Übersetzung für Sprache '%s': %v",
"i18n_load_failed": "Fehler beim Laden der Übersetzungsdatei: %v"
}
}

View File

@@ -53,7 +53,7 @@
"set_top_p": "Set top P",
"stream_help": "Stream",
"set_presence_penalty": "Set presence penalty",
"use_model_defaults_raw_help": "Use the defaults of the model without sending chat options (like temperature etc.) and use the user role instead of the system role for patterns.",
"use_model_defaults_raw_help": "Use the defaults of the model without sending chat options (temperature, top_p, etc.). Only affects OpenAI-compatible providers. Anthropic models always use smart parameter selection to comply with model-specific requirements.",
"set_frequency_penalty": "Set frequency penalty",
"list_all_patterns": "List all patterns",
"list_all_available_models": "List all available models",
@@ -76,7 +76,7 @@
"grab_comments_from_youtube": "Grab comments from YouTube video and send to chat",
"output_video_metadata": "Output video metadata",
"additional_yt_dlp_args": "Additional arguments to pass to yt-dlp (e.g. '--cookies-from-browser brave')",
"specify_language_code": "Specify the Language Code for the chat, e.g. -g=en -g=zh",
"specify_language_code": "Specify the Language Code for the chat, e.g. -g=en -g=zh -g=pt-BR -g=pt-PT",
"scrape_website_url": "Scrape website URL to markdown using Jina AI",
"search_question_jina": "Search question using Jina AI",
"seed_for_lmm_generation": "Seed to be used for LMM generation",

View File

@@ -53,7 +53,7 @@
"set_top_p": "Establecer top P",
"stream_help": "Transmitir",
"set_presence_penalty": "Establecer penalización de presencia",
"use_model_defaults_raw_help": "Usar los valores predeterminados del modelo sin enviar opciones de chat (como temperatura, etc.) y usar el rol de usuario en lugar del rol del sistema para patrones.",
"use_model_defaults_raw_help": "Utiliza los valores predeterminados del modelo sin enviar opciones de chat (temperature, top_p, etc.). Solo afecta a los proveedores compatibles con OpenAI. Los modelos de Anthropic siempre usan una selección inteligente de parámetros para cumplir los requisitos específicos del modelo.",
"set_frequency_penalty": "Establecer penalización de frecuencia",
"list_all_patterns": "Listar todos los patrones",
"list_all_available_models": "Listar todos los modelos disponibles",
@@ -76,7 +76,7 @@
"grab_comments_from_youtube": "Obtener comentarios del video de YouTube y enviar al chat",
"output_video_metadata": "Salida de metadatos del video",
"additional_yt_dlp_args": "Argumentos adicionales para pasar a yt-dlp (ej. '--cookies-from-browser brave')",
"specify_language_code": "Especificar el Código de Idioma para el chat, ej. -g=en -g=zh",
"specify_language_code": "Especificar el Código de Idioma para el chat, ej. -g=en -g=zh -g=pt-BR -g=pt-PT",
"scrape_website_url": "Extraer URL del sitio web a markdown usando Jina AI",
"search_question_jina": "Pregunta de búsqueda usando Jina AI",
"seed_for_lmm_generation": "Semilla para ser usada en la generación LMM",

View File

@@ -53,7 +53,7 @@
"set_top_p": "تنظیم top P",
"stream_help": "پخش زنده",
"set_presence_penalty": "تنظیم جریمه حضور",
"use_model_defaults_raw_help": "استفاده از پیش‌فرض‌های مدل بدون ارسال گزینه‌های گفتگو (مثل دما و غیره) و استفاده از نقش کاربر به جای نقش سیستم برای الگوها.",
"use_model_defaults_raw_help": "از مقادیر پیش‌فرض مدل بدون ارسال گزینه‌های چت (temperature، top_p و غیره) استفاده می‌کند. فقط بر ارائه‌دهندگان سازگار با OpenAI تأثیر می‌گذارد. مدل‌های Anthropic همواره برای رعایت نیازهای خاص هر مدل از انتخاب هوشمند پارامتر استفاده می‌کنند.",
"set_frequency_penalty": "تنظیم جریمه فرکانس",
"list_all_patterns": "فهرست تمام الگوها",
"list_all_available_models": "فهرست تمام مدل‌های موجود",
@@ -76,7 +76,7 @@
"grab_comments_from_youtube": "دریافت نظرات از ویدیو یوتیوب و ارسال به گفتگو",
"output_video_metadata": "نمایش فراداده ویدیو",
"additional_yt_dlp_args": "آرگومان‌های اضافی برای ارسال به yt-dlp (مثال: '--cookies-from-browser brave')",
"specify_language_code": "تعیین کد زبان برای گفتگو، مثال: -g=en -g=zh",
"specify_language_code": "کد زبان برای گفتگو را مشخص کنید، مثلاً -g=en -g=zh -g=pt-BR -g=pt-PT",
"scrape_website_url": "استخراج URL وب‌سایت به markdown با استفاده از Jina AI",
"search_question_jina": "سؤال جستجو با استفاده از Jina AI",
"seed_for_lmm_generation": "Seed برای استفاده در تولید LMM",
@@ -133,4 +133,4 @@
"no_description_available": "توضیحی در دسترس نیست",
"i18n_download_failed": "دانلود ترجمه برای زبان '%s' ناموفق بود: %v",
"i18n_load_failed": "بارگذاری فایل ترجمه ناموفق بود: %v"
}
}

View File

@@ -53,7 +53,7 @@
"set_top_p": "Définir le top P",
"stream_help": "Streaming",
"set_presence_penalty": "Définir la pénalité de présence",
"use_model_defaults_raw_help": "Utiliser les valeurs par défaut du modèle sans envoyer d'options de chat (comme la température, etc.) et utiliser le rôle utilisateur au lieu du rôle système pour les motifs.",
"use_model_defaults_raw_help": "Utilise les valeurs par défaut du modèle sans envoyer doptions de discussion (temperature, top_p, etc.). Naffecte que les fournisseurs compatibles avec OpenAI. Les modèles Anthropic utilisent toujours une sélection intelligente des paramètres pour respecter les exigences propres à chaque modèle.",
"set_frequency_penalty": "Définir la pénalité de fréquence",
"list_all_patterns": "Lister tous les motifs",
"list_all_available_models": "Lister tous les modèles disponibles",
@@ -76,7 +76,7 @@
"grab_comments_from_youtube": "Récupérer les commentaires de la vidéo YouTube et envoyer au chat",
"output_video_metadata": "Afficher les métadonnées de la vidéo",
"additional_yt_dlp_args": "Arguments supplémentaires à passer à yt-dlp (ex. '--cookies-from-browser brave')",
"specify_language_code": "Spécifier le code de langue pour le chat, ex. -g=en -g=zh",
"specify_language_code": "Spécifier le code de langue pour le chat, ex. -g=en -g=zh -g=pt-BR -g=pt-PT",
"scrape_website_url": "Scraper l'URL du site web en markdown en utilisant Jina AI",
"search_question_jina": "Question de recherche en utilisant Jina AI",
"seed_for_lmm_generation": "Graine à utiliser pour la génération LMM",
@@ -133,4 +133,4 @@
"no_description_available": "Aucune description disponible",
"i18n_download_failed": "Échec du téléchargement de la traduction pour la langue '%s' : %v",
"i18n_load_failed": "Échec du chargement du fichier de traduction : %v"
}
}

View File

@@ -53,7 +53,7 @@
"set_top_p": "Imposta top P",
"stream_help": "Streaming",
"set_presence_penalty": "Imposta penalità di presenza",
"use_model_defaults_raw_help": "Usa i valori predefiniti del modello senza inviare opzioni di chat (come temperatura, ecc.) e usa il ruolo utente invece del ruolo sistema per i pattern.",
"use_model_defaults_raw_help": "Usa i valori predefiniti del modello senza inviare opzioni della chat (temperature, top_p, ecc.). Si applica solo ai provider compatibili con OpenAI. I modelli Anthropic utilizzano sempre una selezione intelligente dei parametri per rispettare i requisiti specifici del modello.",
"set_frequency_penalty": "Imposta penalità di frequenza",
"list_all_patterns": "Elenca tutti i pattern",
"list_all_available_models": "Elenca tutti i modelli disponibili",
@@ -76,7 +76,7 @@
"grab_comments_from_youtube": "Ottieni commenti dal video YouTube e invia alla chat",
"output_video_metadata": "Output metadati video",
"additional_yt_dlp_args": "Argomenti aggiuntivi da passare a yt-dlp (es. '--cookies-from-browser brave')",
"specify_language_code": "Specifica il codice lingua per la chat, es. -g=en -g=zh",
"specify_language_code": "Specifica il codice lingua per la chat, es. -g=en -g=zh -g=pt-BR -g=pt-PT",
"scrape_website_url": "Scraping dell'URL del sito web in markdown usando Jina AI",
"search_question_jina": "Domanda di ricerca usando Jina AI",
"seed_for_lmm_generation": "Seed da utilizzare per la generazione LMM",
@@ -133,4 +133,4 @@
"no_description_available": "Nessuna descrizione disponibile",
"i18n_download_failed": "Fallito il download della traduzione per la lingua '%s': %v",
"i18n_load_failed": "Fallito il caricamento del file di traduzione: %v"
}
}

View File

@@ -53,7 +53,7 @@
"set_top_p": "Top Pを設定",
"stream_help": "ストリーミング",
"set_presence_penalty": "プレゼンスペナルティを設定",
"use_model_defaults_raw_help": "チャットオプション(温度など)を送信せずにモデルのデフォルトを使用し、パターンにシステムロールではなくユーザーロールを使用します。",
"use_model_defaults_raw_help": "チャットオプション(temperature、top_p など)を送信せずにモデルのデフォルトを使用します。OpenAI 互換プロバイダーにのみ適用されます。Anthropic モデルは常に、モデル固有の要件に準拠するためにスマートなパラメーター選択を使用します。",
"set_frequency_penalty": "頻度ペナルティを設定",
"list_all_patterns": "すべてのパターンを一覧表示",
"list_all_available_models": "すべての利用可能なモデルを一覧表示",
@@ -76,7 +76,7 @@
"grab_comments_from_youtube": "YouTube動画からコメントを取得してチャットに送信",
"output_video_metadata": "動画メタデータを出力",
"additional_yt_dlp_args": "yt-dlpに渡す追加の引数'--cookies-from-browser brave'",
"specify_language_code": "チャットの言語コードを指定、例-g=en -g=zh",
"specify_language_code": "チャットの言語コードを指定、例: -g=en -g=zh -g=pt-BR -g=pt-PT",
"scrape_website_url": "Jina AIを使用してウェブサイトURLをマークダウンにスクレイピング",
"search_question_jina": "Jina AIを使用した検索質問",
"seed_for_lmm_generation": "LMM生成で使用するシード",
@@ -133,4 +133,4 @@
"no_description_available": "説明がありません",
"i18n_download_failed": "言語 '%s' の翻訳のダウンロードに失敗しました: %v",
"i18n_load_failed": "翻訳ファイルの読み込みに失敗しました: %v"
}
}

View File

@@ -43,40 +43,40 @@
"command_completed_successfully": "Comando concluído com sucesso",
"output_truncated": "Saída: %s...",
"output_full": "Saída: %s",
"choose_pattern_from_available": "Escolha um padrão dos padrões disponíveis",
"pattern_variables_help": "Valores para variáveis de padrão, ex. -v=#role:expert -v=#points:30",
"choose_context_from_available": "Escolha um contexto dos contextos disponíveis",
"choose_pattern_from_available": "Escolha um padrão entre os padrões disponíveis",
"pattern_variables_help": "Valores para variáveis do padrão, ex. -v=#role:expert -v=#points:30",
"choose_context_from_available": "Escolha um contexto entre os contextos disponíveis",
"choose_session_from_available": "Escolha uma sessão das sessões disponíveis",
"attachment_path_or_url_help": "Caminho do anexo ou URL (ex. para mensagens de reconhecimento de imagem do OpenAI)",
"run_setup_for_reconfigurable_parts": "Executar configuração para todas as partes reconfiguráveis do fabric",
"attachment_path_or_url_help": "Caminho para o anexo ou URL (ex. para mensagens de reconhecimento de imagem do OpenAI)",
"run_setup_for_reconfigurable_parts": "Executar a configuração para todas as partes reconfiguráveis do fabric",
"set_temperature": "Definir temperatura",
"set_top_p": "Definir top P",
"stream_help": "Streaming",
"set_presence_penalty": "Definir penalidade de presença",
"use_model_defaults_raw_help": "Usar os padrões do modelo sem enviar opções de chat (como temperatura, etc.) e usar o papel de usuário em vez do papel de sistema para padrões.",
"use_model_defaults_raw_help": "Usa os padrões do modelo sem enviar opções de chat (temperature, top_p etc.). Afeta apenas provedores compatíveis com o OpenAI. Os modelos da Anthropic sempre utilizam seleção inteligente de parâmetros para cumprir os requisitos específicos de cada modelo.",
"set_frequency_penalty": "Definir penalidade de frequência",
"list_all_patterns": "Listar todos os padrões",
"list_all_patterns": "Listar todos os padrões/patterns",
"list_all_available_models": "Listar todos os modelos disponíveis",
"list_all_contexts": "Listar todos os contextos",
"list_all_sessions": "Listar todas as sessões",
"update_patterns": "Atualizar padrões",
"update_patterns": "Atualizar os padrões/patterns",
"messages_to_send_to_chat": "Mensagens para enviar ao chat",
"copy_to_clipboard": "Copiar para área de transferência",
"copy_to_clipboard": "Copiar para a área de transferência",
"choose_model": "Escolher modelo",
"specify_vendor_for_model": "Especificar fornecedor para o modelo selecionado (ex. -V \"LM Studio\" -m openai/gpt-oss-20b)",
"model_context_length_ollama": "Comprimento do contexto do modelo (afeta apenas ollama)",
"output_to_file": "Saída para arquivo",
"output_to_file": "Exportar para arquivo",
"output_entire_session": "Saída de toda a sessão (incluindo temporária) para o arquivo de saída",
"number_of_latest_patterns": "Número dos padrões mais recentes a listar",
"change_default_model": "Mudar modelo padrão",
"youtube_url_help": "Vídeo do YouTube ou \"URL\" de playlist para obter transcrição, comentários e enviar ao chat ou imprimir no console e armazenar no arquivo de saída",
"youtube_url_help": "Vídeo do YouTube ou URL da playlist para obter transcrição, comentários e enviar ao chat ou imprimir no console e armazenar no arquivo de saída",
"prefer_playlist_over_video": "Preferir playlist ao vídeo se ambos os IDs estiverem presentes na URL",
"grab_transcript_from_youtube": "Obter transcrição do vídeo do YouTube e enviar ao chat (usado por padrão).",
"grab_transcript_with_timestamps": "Obter transcrição do vídeo do YouTube com timestamps e enviar ao chat",
"grab_comments_from_youtube": "Obter comentários do vídeo do YouTube e enviar ao chat",
"output_video_metadata": "Exibir metadados do vídeo",
"additional_yt_dlp_args": "Argumentos adicionais para passar ao yt-dlp (ex. '--cookies-from-browser brave')",
"specify_language_code": "Especificar código de idioma para o chat, ex. -g=en -g=zh",
"specify_language_code": "Especificar código de idioma para o chat, ex. -g=en -g=zh -g=pt-BR -g=pt-PT",
"scrape_website_url": "Fazer scraping da URL do site para markdown usando Jina AI",
"search_question_jina": "Pergunta de busca usando Jina AI",
"seed_for_lmm_generation": "Seed para ser usado na geração LMM",

View File

@@ -0,0 +1,136 @@
{
"html_readability_error": "usa a entrada original, porque não é possível aplicar a legibilidade HTML",
"vendor_not_configured": "o fornecedor %s não está configurado",
"vendor_no_transcription_support": "o fornecedor %s não suporta transcrição de áudio",
"transcription_model_required": "modelo de transcrição é necessário (use --transcribe-model)",
"youtube_not_configured": "YouTube não está configurado, por favor execute o procedimento de configuração",
"error_fetching_playlist_videos": "erro ao obter vídeos da playlist: %w",
"scraping_not_configured": "funcionalidade de scraping não está configurada. Por favor configure o Jina para ativar o scraping",
"could_not_determine_home_dir": "não foi possível determinar o diretório home do utilizador: %w",
"could_not_stat_env_file": "não foi possível verificar o ficheiro .env: %w",
"could_not_create_config_dir": "não foi possível criar o diretório de configuração: %w",
"could_not_create_env_file": "não foi possível criar o ficheiro .env: %w",
"could_not_copy_to_clipboard": "não foi possível copiar para a área de transferência: %v",
"file_already_exists_not_overwriting": "o ficheiro %s já existe, não será sobrescrito. Renomeie o ficheiro existente ou escolha um nome diferente",
"error_creating_file": "erro ao criar ficheiro: %v",
"error_writing_to_file": "erro ao escrever no ficheiro: %v",
"error_creating_audio_file": "erro ao criar ficheiro de áudio: %v",
"error_writing_audio_data": "erro ao escrever dados de áudio no ficheiro: %v",
"tts_model_requires_audio_output": "modelo TTS '%s' requer saída de áudio. Por favor especifique um ficheiro de saída de áudio com a flag -o (ex. -o output.wav)",
"audio_output_file_specified_but_not_tts_model": "ficheiro de saída de áudio '%s' especificado mas o modelo '%s' não é um modelo TTS. Por favor use um modelo TTS como gemini-2.5-flash-preview-tts",
"file_already_exists_choose_different": "ficheiro %s já existe. Por favor escolha um nome de ficheiro diferente ou remova o ficheiro existente",
"no_notification_system_available": "nenhum sistema de notificação disponível",
"cannot_convert_string": "não é possível converter a string %q para %v",
"unsupported_conversion": "conversão não suportada de %v para %v",
"invalid_config_path": "caminho de configuração inválido: %w",
"config_file_not_found": "ficheiro de configuração não encontrado: %s",
"error_reading_config_file": "erro ao ler ficheiro de configuração: %w",
"error_parsing_config_file": "erro ao analisar ficheiro de configuração: %w",
"error_reading_piped_message": "erro ao ler mensagem redirecionada do stdin: %w",
"image_file_already_exists": "ficheiro de imagem já existe: %s",
"invalid_image_file_extension": "extensão de ficheiro de imagem inválida '%s'. Formatos suportados: .png, .jpeg, .jpg, .webp",
"image_parameters_require_image_file": "parâmetros de imagem (--image-size, --image-quality, --image-background, --image-compression) só podem ser usados com --image-file",
"invalid_image_size": "tamanho de imagem inválido '%s'. Tamanhos suportados: 1024x1024, 1536x1024, 1024x1536, auto",
"invalid_image_quality": "qualidade de imagem inválida '%s'. Qualidades suportadas: low, medium, high, auto",
"invalid_image_background": "fundo de imagem inválido '%s'. Fundos suportados: opaque, transparent",
"image_compression_jpeg_webp_only": "compressão de imagem só pode ser usada com formatos JPEG e WebP, não %s",
"image_compression_range_error": "compressão de imagem deve estar entre 0 e 100, recebido %d",
"transparent_background_png_webp_only": "fundo transparente só pode ser usado com formatos PNG e WebP, não %s",
"available_transcription_models": "Modelos de transcrição disponíveis:",
"tts_audio_generated_successfully": "Áudio TTS gerado com sucesso e guardado em: %s\n",
"fabric_command_complete": "Comando Fabric concluído",
"fabric_command_complete_with_pattern": "Fabric: %s concluído",
"command_completed_successfully": "Comando concluído com sucesso",
"output_truncated": "Saída: %s...",
"output_full": "Saída: %s",
"choose_pattern_from_available": "Escolha um padrão dos padrões disponíveis",
"pattern_variables_help": "Valores para variáveis de padrão, ex. -v=#role:expert -v=#points:30",
"choose_context_from_available": "Escolha um contexto dos contextos disponíveis",
"choose_session_from_available": "Escolha uma sessão das sessões disponíveis",
"attachment_path_or_url_help": "Caminho do anexo ou URL (ex. para mensagens de reconhecimento de imagem do OpenAI)",
"run_setup_for_reconfigurable_parts": "Executar configuração para todas as partes reconfiguráveis do fabric",
"set_temperature": "Definir temperatura",
"set_top_p": "Definir top P",
"stream_help": "Streaming",
"set_presence_penalty": "Definir penalidade de presença",
"use_model_defaults_raw_help": "Utiliza os valores predefinidos do modelo sem enviar opções de chat (temperature, top_p, etc.). Só afeta fornecedores compatíveis com o OpenAI. Os modelos Anthropic usam sempre uma seleção inteligente de parâmetros para cumprir os requisitos específicos do modelo.",
"set_frequency_penalty": "Definir penalidade de frequência",
"list_all_patterns": "Listar todos os padrões",
"list_all_available_models": "Listar todos os modelos disponíveis",
"list_all_contexts": "Listar todos os contextos",
"list_all_sessions": "Listar todas as sessões",
"update_patterns": "Atualizar padrões",
"messages_to_send_to_chat": "Mensagens para enviar ao chat",
"copy_to_clipboard": "Copiar para área de transferência",
"choose_model": "Escolher modelo",
"specify_vendor_for_model": "Especificar fornecedor para o modelo selecionado (ex. -V \"LM Studio\" -m openai/gpt-oss-20b)",
"model_context_length_ollama": "Comprimento do contexto do modelo (afeta apenas ollama)",
"output_to_file": "Saída para ficheiro",
"output_entire_session": "Saída de toda a sessão (incluindo temporária) para o ficheiro de saída",
"number_of_latest_patterns": "Número dos padrões mais recentes a listar",
"change_default_model": "Mudar modelo predefinido",
"youtube_url_help": "Vídeo do YouTube ou \"URL\" de playlist para obter transcrição, comentários e enviar ao chat ou imprimir na consola e armazenar no ficheiro de saída",
"prefer_playlist_over_video": "Preferir playlist ao vídeo se ambos os IDs estiverem presentes na URL",
"grab_transcript_from_youtube": "Obter transcrição do vídeo do YouTube e enviar ao chat (usado por omissão).",
"grab_transcript_with_timestamps": "Obter transcrição do vídeo do YouTube com timestamps e enviar ao chat",
"grab_comments_from_youtube": "Obter comentários do vídeo do YouTube e enviar ao chat",
"output_video_metadata": "Mostrar metadados do vídeo",
"additional_yt_dlp_args": "Argumentos adicionais para passar ao yt-dlp (ex. '--cookies-from-browser brave')",
"specify_language_code": "Especificar código de idioma para o chat, ex. -g=en -g=zh -g=pt-BR -g=pt-PT",
"scrape_website_url": "Fazer scraping da URL do site para markdown usando Jina AI",
"search_question_jina": "Pergunta de pesquisa usando Jina AI",
"seed_for_lmm_generation": "Seed para ser usado na geração LMM",
"wipe_context": "Limpar contexto",
"wipe_session": "Limpar sessão",
"print_context": "Imprimir contexto",
"print_session": "Imprimir sessão",
"convert_html_readability": "Converter entrada HTML numa visualização limpa e legível",
"apply_variables_to_input": "Aplicar variáveis à entrada do utilizador",
"disable_pattern_variable_replacement": "Desabilitar substituição de variáveis de padrão",
"show_dry_run": "Mostrar o que seria enviado ao modelo sem enviar de facto",
"serve_fabric_rest_api": "Servir a API REST do Fabric",
"serve_fabric_api_ollama_endpoints": "Servir a API REST do Fabric com endpoints ollama",
"address_to_bind_rest_api": "Endereço para associar a API REST",
"api_key_secure_server_routes": "Chave API usada para proteger as rotas do servidor",
"path_to_yaml_config": "Caminho para ficheiro de configuração YAML",
"print_current_version": "Imprimir versão atual",
"list_all_registered_extensions": "Listar todas as extensões registadas",
"register_new_extension": "Registar uma nova extensão do caminho do ficheiro de configuração",
"remove_registered_extension": "Remover uma extensão registada por nome",
"choose_strategy_from_available": "Escolher uma estratégia das estratégias disponíveis",
"list_all_strategies": "Listar todas as estratégias",
"list_all_vendors": "Listar todos os fornecedores",
"output_raw_list_shell_completion": "Saída de lista simples sem cabeçalhos/formatação (para conclusão de shell)",
"enable_web_search_tool": "Habilitar ferramenta de pesquisa web para modelos suportados (Anthropic, OpenAI, Gemini)",
"set_location_web_search": "Definir localização para resultados de pesquisa web (ex. 'America/Los_Angeles')",
"save_generated_image_to_file": "Guardar imagem gerada no caminho de ficheiro especificado (ex. 'output.png')",
"image_dimensions_help": "Dimensões da imagem: 1024x1024, 1536x1024, 1024x1536, auto (por omissão: auto)",
"image_quality_help": "Qualidade da imagem: low, medium, high, auto (por omissão: auto)",
"compression_level_jpeg_webp": "Nível de compressão 0-100 para formatos JPEG/WebP (por omissão: não definido)",
"background_type_help": "Tipo de fundo: opaque, transparent (por omissão: opaque, apenas para PNG/WebP)",
"suppress_thinking_tags": "Suprimir texto contido em tags de pensamento",
"start_tag_thinking_sections": "Tag inicial para secções de pensamento",
"end_tag_thinking_sections": "Tag final para secções de pensamento",
"disable_openai_responses_api": "Desabilitar API OpenAI Responses (por omissão: false)",
"audio_video_file_transcribe": "Ficheiro de áudio ou vídeo para transcrever",
"model_for_transcription": "Modelo para usar na transcrição (separado do modelo de chat)",
"split_media_files_ffmpeg": "Dividir ficheiros de áudio/vídeo maiores que 25MB usando ffmpeg",
"tts_voice_name": "Nome da voz TTS para modelos suportados (ex. Kore, Charon, Puck)",
"list_gemini_tts_voices": "Listar todas as vozes TTS do Gemini disponíveis",
"list_transcription_models": "Listar todos os modelos de transcrição disponíveis",
"send_desktop_notification": "Enviar notificação no ambiente de trabalho quando o comando for concluído",
"custom_notification_command": "Comando personalizado para executar notificações (substitui notificações integradas)",
"set_reasoning_thinking_level": "Definir nível de raciocínio/pensamento (ex. off, low, medium, high, ou tokens numéricos para Anthropic ou Google Gemini)",
"set_debug_level": "Definir nível de debug (0=desligado, 1=básico, 2=detalhado, 3=rastreio)",
"usage_header": "Uso:",
"application_options_header": "Opções da aplicação:",
"help_options_header": "Opções de ajuda:",
"help_message": "Mostrar esta mensagem de ajuda",
"options_placeholder": "[OPÇÕES]",
"available_vendors_header": "Fornecedores disponíveis:",
"available_models_header": "Modelos disponíveis",
"no_items_found": "Nenhum %s",
"no_description_available": "Nenhuma descrição disponível",
"i18n_download_failed": "Falha ao descarregar tradução para o idioma '%s': %v",
"i18n_load_failed": "Falha ao carregar ficheiro de tradução: %v"
}

View File

@@ -53,7 +53,7 @@
"set_top_p": "设置 top P",
"stream_help": "流式传输",
"set_presence_penalty": "设置存在惩罚",
"use_model_defaults_raw_help": "使用模型默认设置,不发送聊天选项(如温度等),对于模式使用用户角色而非系统角色。",
"use_model_defaults_raw_help": "在不发送聊天选项temperature、top_p 等)的情况下使用模型默认值。仅影响兼容 OpenAI 的提供商。Anthropic 模型始终使用智能参数选择以满足特定模型的要求。",
"set_frequency_penalty": "设置频率惩罚",
"list_all_patterns": "列出所有模式",
"list_all_available_models": "列出所有可用模型",
@@ -76,7 +76,7 @@
"grab_comments_from_youtube": "从 YouTube 视频获取评论并发送到聊天",
"output_video_metadata": "输出视频元数据",
"additional_yt_dlp_args": "传递给 yt-dlp 的其他参数(例如 '--cookies-from-browser brave'",
"specify_language_code": "指定聊天的语言代码,例如 -g=en -g=zh",
"specify_language_code": "指定聊天的语言代码,例如 -g=en -g=zh -g=pt-BR -g=pt-PT",
"scrape_website_url": "使用 Jina AI 将网站 URL 抓取为 markdown",
"search_question_jina": "使用 Jina AI 搜索问题",
"seed_for_lmm_generation": "用于 LMM 生成的种子",
@@ -133,4 +133,4 @@
"no_description_available": "没有可用描述",
"i18n_download_failed": "下载语言 '%s' 的翻译失败: %v",
"i18n_load_failed": "加载翻译文件失败: %v"
}
}

View File

@@ -44,15 +44,18 @@ func NewClient() (ret *Client) {
ret.models = []string{
string(anthropic.ModelClaude3_7SonnetLatest), string(anthropic.ModelClaude3_7Sonnet20250219),
string(anthropic.ModelClaude3_5HaikuLatest), string(anthropic.ModelClaude3_5Haiku20241022),
string(anthropic.ModelClaude3_5SonnetLatest), string(anthropic.ModelClaude3_5Sonnet20241022),
string(anthropic.ModelClaude_3_5_Sonnet_20240620), string(anthropic.ModelClaude3OpusLatest),
string(anthropic.ModelClaude_3_Opus_20240229), string(anthropic.ModelClaude_3_Haiku_20240307),
string(anthropic.ModelClaude3OpusLatest), string(anthropic.ModelClaude_3_Opus_20240229),
string(anthropic.ModelClaude_3_Haiku_20240307),
string(anthropic.ModelClaudeOpus4_20250514), string(anthropic.ModelClaudeSonnet4_20250514),
string(anthropic.ModelClaudeOpus4_1_20250805),
string(anthropic.ModelClaudeSonnet4_5),
string(anthropic.ModelClaudeSonnet4_5_20250929),
}
ret.modelBetas = map[string][]string{
string(anthropic.ModelClaudeSonnet4_20250514): {"context-1m-2025-08-07"},
string(anthropic.ModelClaudeSonnet4_5): {"context-1m-2025-08-07"},
string(anthropic.ModelClaudeSonnet4_5_20250929): {"context-1m-2025-08-07"},
}
return
@@ -353,7 +356,7 @@ func (an *Client) toMessages(msgs []*chat.ChatCompletionMessage) (ret []anthropi
lastRoleWasUser := false
for _, msg := range msgs {
if msg.Content == "" {
if strings.TrimSpace(msg.Content) == "" {
continue // Skip empty messages
}

View File

@@ -1,12 +1,13 @@
package azure
import (
"fmt"
"strings"
"github.com/danielmiessler/fabric/internal/plugins"
"github.com/danielmiessler/fabric/internal/plugins/ai/openai"
openaiapi "github.com/openai/openai-go"
"github.com/openai/openai-go/option"
"github.com/openai/openai-go/azure"
)
func NewClient() (ret *Client) {
@@ -28,18 +29,44 @@ type Client struct {
apiDeployments []string
}
func (oi *Client) configure() (err error) {
oi.apiDeployments = strings.Split(oi.ApiDeployments.Value, ",")
opts := []option.RequestOption{option.WithAPIKey(oi.ApiKey.Value)}
if oi.ApiBaseURL.Value != "" {
opts = append(opts, option.WithBaseURL(oi.ApiBaseURL.Value))
const defaultAPIVersion = "2024-05-01-preview"
func (oi *Client) configure() error {
oi.apiDeployments = parseDeployments(oi.ApiDeployments.Value)
apiKey := strings.TrimSpace(oi.ApiKey.Value)
if apiKey == "" {
return fmt.Errorf("Azure API key is required")
}
if oi.ApiVersion.Value != "" {
opts = append(opts, option.WithQuery("api-version", oi.ApiVersion.Value))
baseURL := strings.TrimSpace(oi.ApiBaseURL.Value)
if baseURL == "" {
return fmt.Errorf("Azure API base URL is required")
}
client := openaiapi.NewClient(opts...)
apiVersion := strings.TrimSpace(oi.ApiVersion.Value)
if apiVersion == "" {
apiVersion = defaultAPIVersion
oi.ApiVersion.Value = apiVersion
}
client := openaiapi.NewClient(
azure.WithAPIKey(apiKey),
azure.WithEndpoint(baseURL, apiVersion),
)
oi.ApiClient = &client
return
return nil
}
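// parseDeployments splits the comma-separated deployment list, trimming whitespace and dropping empty entries.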
func parseDeployments(value string) []string {
parts := strings.Split(value, ",")
var deployments []string
for _, part := range parts {
if deployment := strings.TrimSpace(part); deployment != "" {
deployments = append(deployments, deployment)
}
}
return deployments
}
func (oi *Client) ListModels() (ret []string, err error) {

View File

@@ -27,7 +27,7 @@ func TestClientConfigure(t *testing.T) {
client.ApiDeployments.Value = "deployment1,deployment2"
client.ApiKey.Value = "test-api-key"
client.ApiBaseURL.Value = "https://example.com"
client.ApiVersion.Value = "2021-01-01"
client.ApiVersion.Value = "2024-05-01-preview"
err := client.configure()
if err != nil {
@@ -48,8 +48,23 @@ func TestClientConfigure(t *testing.T) {
t.Errorf("Expected ApiClient to be initialized, got nil")
}
if client.ApiVersion.Value != "2021-01-01" {
t.Errorf("Expected API version to be '2021-01-01', got %s", client.ApiVersion.Value)
if client.ApiVersion.Value != "2024-05-01-preview" {
t.Errorf("Expected API version to be '2024-05-01-preview', got %s", client.ApiVersion.Value)
}
}
func TestClientConfigureDefaultAPIVersion(t *testing.T) {
client := NewClient()
client.ApiDeployments.Value = "deployment1"
client.ApiKey.Value = "test-api-key"
client.ApiBaseURL.Value = "https://example.com"
if err := client.configure(); err != nil {
t.Fatalf("Expected no error, got %v", err)
}
if client.ApiVersion.Value != defaultAPIVersion {
t.Errorf("Expected API version to default to %s, got %s", defaultAPIVersion, client.ApiVersion.Value)
}
}

View File

@@ -131,6 +131,8 @@ func (o *Client) Send(ctx context.Context, msgs []*chat.ChatCompletionMessage, o
func (o *Client) SendStream(msgs []*chat.ChatCompletionMessage, opts *domain.ChatOptions, channel chan string) (err error) {
ctx := context.Background()
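// Deferring here guarantees the channel is closed on every exit path.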
defer close(channel)
var client *genai.Client
if client, err = genai.NewClient(ctx, &genai.ClientConfig{
APIKey: o.ApiKey.Value,
@@ -153,8 +155,7 @@ func (o *Client) SendStream(msgs []*chat.ChatCompletionMessage, opts *domain.Cha
for response, err := range stream {
if err != nil {
channel <- fmt.Sprintf("Error: %v\n", err)
close(channel)
break
return err
}
text := o.extractTextFromResponse(response)
@@ -162,7 +163,6 @@ func (o *Client) SendStream(msgs []*chat.ChatCompletionMessage, opts *domain.Cha
channel <- text
}
}
close(channel)
return
}
@@ -456,7 +456,7 @@ func (o *Client) convertMessages(msgs []*chat.ChatCompletionMessage) []*genai.Co
content.Role = "user"
}
if msg.Content != "" {
if strings.TrimSpace(msg.Content) != "" {
content.Parts = append(content.Parts, &genai.Part{Text: msg.Content})
}

View File

@@ -11,8 +11,6 @@ import (
"github.com/danielmiessler/fabric/internal/util"
)
const inputSentinel = "__FABRIC_INPUT_SENTINEL_TOKEN__"
type PatternsEntity struct {
*StorageEntity
SystemPatternFile string
@@ -96,18 +94,18 @@ func (o *PatternsEntity) applyVariables(
// Temporarily replace {{input}} with a sentinel token to protect it
// from recursive variable resolution
withSentinel := strings.ReplaceAll(pattern.Pattern, "{{input}}", inputSentinel)
withSentinel := strings.ReplaceAll(pattern.Pattern, "{{input}}", template.InputSentinel)
// Process all other template variables in the pattern
// At this point, our sentinel ensures {{input}} won't be affected
// Pass the actual input so extension calls can use {{input}} within their value parameter
var processed string
if processed, err = template.ApplyTemplate(withSentinel, variables, ""); err != nil {
if processed, err = template.ApplyTemplate(withSentinel, variables, input); err != nil {
return
}
// Finally, replace our sentinel with the actual user input
// The input has already been processed for variables if InputHasVars was true
pattern.Pattern = strings.ReplaceAll(processed, inputSentinel, input)
pattern.Pattern = strings.ReplaceAll(processed, template.InputSentinel, input)
return
}

View File

@@ -1,9 +1,24 @@
# Fabric Extensions: Complete Guide
## Important: Extensions Only Work in Patterns
**Extensions are ONLY processed when used within pattern files, not via direct piping to fabric.**
```bash
# ❌ This DOES NOT WORK - extensions are not processed in stdin
echo "{{ext:word-generator:generate:3}}" | fabric
# ✅ This WORKS - extensions are processed within patterns
fabric -p my-pattern-with-extensions.md
```
When you pipe directly to fabric without a pattern, the input goes straight to the LLM without template processing. Extensions are only evaluated during pattern template processing via `ApplyTemplate()`.
## Understanding Extension Architecture
### Registry Structure
The extension registry is stored at `~/.config/fabric/extensions/extensions.yaml` and tracks registered extensions:
```yaml
@@ -17,6 +32,7 @@ extensions:
The registry maintains security through hash verification of both configs and executables.
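For intuition, here is a minimal sketch of that idea, assuming SHA-256 digests and the example paths used later in this guide (not fabric's actual implementation):

```bash
# Recompute digests for an extension's config and executable; if either file
# changes, the stored hash no longer matches and the extension must be re-added
sha256sum ~/.config/fabric/extensions/configs/word-generator.yaml \
          ~/.config/fabric/extensions/bin/word-generator.py
```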
### Extension Configuration
Each extension requires a YAML configuration file with the following structure:
```yaml
@@ -42,8 +58,10 @@ config: # Output configuration
```
### Directory Structure
Recommended organization:
```
```text
~/.config/fabric/extensions/
├── bin/ # Extension executables
├── configs/ # Extension YAML configs
@@ -51,9 +69,11 @@ Recommended organization:
```
## Example 1: Python Wrapper (Word Generator)
A simple example wrapping a Python script.
### 1. Position Files
```bash
# Create directories
mkdir -p ~/.config/fabric/extensions/{bin,configs}
@@ -64,7 +84,9 @@ chmod +x ~/.config/fabric/extensions/bin/word-generator.py
```
### 2. Configure
Create `~/.config/fabric/extensions/configs/word-generator.yaml`:
```yaml
name: word-generator
executable: "~/.config/fabric/extensions/bin/word-generator.py"
@@ -83,22 +105,26 @@ config:
```
### 3. Register & Run
```bash
# Register
fabric --addextension ~/.config/fabric/extensions/configs/word-generator.yaml
# Run (generate 3 random words)
echo "{{ext:word-generator:generate:3}}" | fabric
# Extensions must be used within patterns (see "Extensions in patterns" section below)
# Direct piping to fabric will NOT process extension syntax
```
## Example 2: Direct Executable (SQLite3)
Using a system executable directly.
Copy the memories database to your home directory as `~/memories.db`.
### 1. Configure
Create `~/.config/fabric/extensions/configs/memory-query.yaml`:
```yaml
name: memory-query
executable: "/usr/bin/sqlite3"
@@ -123,19 +149,19 @@ config:
```
### 2. Register & Run
```bash
# Register
fabric --addextension ~/.config/fabric/extensions/configs/memory-query.yaml
# Run queries
echo "{{ext:memory-query:all}}" | fabric
echo "{{ext:memory-query:byid:3}}" | fabric
# Extensions must be used within patterns (see "Extensions in patterns" section below)
# Direct piping to fabric will NOT process extension syntax
```
## Extension Management Commands
### Add Extension
```bash
fabric --addextension ~/.config/fabric/extensions/configs/memory-query.yaml
```
@@ -143,25 +169,29 @@ fabric --addextension ~/.config/fabric/extensions/configs/memory-query.yaml
Note: if the executable or config file changes, you must re-add the extension.
This will recompute the hash for the extension.
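For example, after editing the config or the executable:

```bash
# Re-register so the registry stores fresh hashes for both files
fabric --addextension ~/.config/fabric/extensions/configs/memory-query.yaml
```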
### List Extensions
```bash
fabric --listextensions
```
Shows all registered extensions with their status and configuration details.
### Remove Extension
```bash
fabric --rmextension <extension-name>
```
Removes an extension from the registry.
## Extensions in patterns
Create a pattern that uses multiple extensions.
**IMPORTANT**: Extensions are ONLY processed when used within pattern files, not via direct piping to fabric.
Create a pattern file (e.g., `test_pattern.md`):
```markdown
These are my favorite
{{ext:word-generator:generate:3}}
@@ -171,8 +201,30 @@ These are my least favorite
what does this say about me?
```
Run the pattern:
```bash
./fabric -p ./plugins/template/Examples/test_pattern.md
fabric -p ./internal/plugins/template/Examples/test_pattern.md
```
## Passing {{input}} to extensions inside patterns
```text
Create a pattern called ai_summarize that uses extensions (see openai.yaml and make a copy of it for claude)
Summarize the responses from both AI models:
OpenAI Response:
{{ext:openai:chat:{{input}}}}
Claude Response:
{{ext:claude:chat:{{input}}}}
```
```bash
echo "What is Artificial Intelligence" | ../fabric-fix -p ai_summarize
```
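Under the hood, fabric protects the nested `{{input}}` with a sentinel token while other variables are resolved, then substitutes the real input into the extension value (see the `InputSentinel` changes in the code later in this compare). A rough trace, assuming the input above:

```text
{{ext:openai:chat:{{input}}}}
→ {{ext:openai:chat:__FABRIC_INPUT_SENTINEL_TOKEN__}}   (protect {{input}})
→ value "What is Artificial Intelligence" passed to the extension
→ extension output replaces the whole {{ext:...}} token
```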
## Security Considerations
@@ -197,6 +249,7 @@ what does this say about me?
## Troubleshooting
### Common Issues
1. **Registration Failures**
- Verify file permissions
- Check executable paths
@@ -214,10 +267,10 @@ what does this say about me?
- Monitor disk space for file operations
### Debug Tips
1. Enable verbose logging when available
2. Check system logs for execution errors
3. Verify extension dependencies
4. Test extensions with minimal configurations first
Would you like me to expand on any particular section or add more examples?

View File

@@ -0,0 +1,20 @@
#!/usr/bin/env bash
set -euo pipefail
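# JSON-encode all CLI arguments as a single JSON string for the request body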
INPUT=$(jq -R -s '.' <<< "$*")
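# Call the Chat Completions endpoint; -w appends the HTTP status code as a final line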
RESPONSE=$(curl "$OPENAI_API_BASE_URL/chat/completions" \
-s -w "\n%{http_code}" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-d "{\"model\":\"gpt-4o-mini\",\"messages\":[{\"role\":\"user\",\"content\":$INPUT}]}")
HTTP_CODE=$(echo "$RESPONSE" | tail -n1)
BODY=$(echo "$RESPONSE" | sed '$d')
if [[ "$HTTP_CODE" -ne 200 ]]; then
echo "Error: HTTP $HTTP_CODE" >&2
echo "$BODY" | jq -r '.error.message // "Unknown error"' >&2
exit 1
fi
echo "$BODY" | jq -r '.choices[0].message.content'

View File

@@ -0,0 +1,14 @@
name: openai
executable: "/path/to/your/openai-chat.sh"
type: executable
timeout: "30s"
description: "Call OpenAI Chat Completions API"
version: "1.0.0"
operations:
chat:
cmd_template: "{{executable}} {{value}}"
config:
output:
method: stdout

View File

@@ -0,0 +1,5 @@
package template
// InputSentinel is used to temporarily replace {{input}} during template processing
// to prevent recursive variable resolution
const InputSentinel = "__FABRIC_INPUT_SENTINEL_TOKEN__"

View File

@@ -140,6 +140,11 @@ func (r *ExtensionRegistry) Register(configPath string) error {
return fmt.Errorf("failed to hash executable: %w", err)
}
// Validate full extension definition (ensures operations and cmd_template present)
if err := r.validateExtensionDefinition(&ext); err != nil {
return fmt.Errorf("invalid extension definition: %w", err)
}
// Store entry
r.registry.Extensions[ext.Name] = &RegistryEntry{
ConfigPath: absPath,

View File

@@ -37,152 +37,65 @@ func debugf(format string, a ...interface{}) {
debuglog.Debug(debuglog.Trace, format, a...)
}
func ApplyTemplate(content string, variables map[string]string, input string) (string, error) {
var missingVars []string
r := regexp.MustCompile(`\{\{([^{}]+)\}\}`)
debugf("Starting template processing\n")
for strings.Contains(content, "{{") {
matches := r.FindAllStringSubmatch(content, -1)
if len(matches) == 0 {
break
}
replaced := false
for _, match := range matches {
fullMatch := match[0]
varName := match[1]
// Check if this is a plugin call
if strings.HasPrefix(varName, "plugin:") {
pluginMatches := pluginPattern.FindStringSubmatch(fullMatch)
if len(pluginMatches) >= 3 {
namespace := pluginMatches[1]
operation := pluginMatches[2]
value := ""
if len(pluginMatches) == 4 {
value = pluginMatches[3]
}
debugf("\nPlugin call:\n")
debugf(" Namespace: %s\n", namespace)
debugf(" Operation: %s\n", operation)
debugf(" Value: %s\n", value)
var result string
var err error
switch namespace {
case "text":
debugf("Executing text plugin\n")
result, err = textPlugin.Apply(operation, value)
case "datetime":
debugf("Executing datetime plugin\n")
result, err = datetimePlugin.Apply(operation, value)
case "file":
debugf("Executing file plugin\n")
result, err = filePlugin.Apply(operation, value)
debugf("File plugin result: %#v\n", result)
case "fetch":
debugf("Executing fetch plugin\n")
result, err = fetchPlugin.Apply(operation, value)
case "sys":
debugf("Executing sys plugin\n")
result, err = sysPlugin.Apply(operation, value)
default:
return "", fmt.Errorf("unknown plugin namespace: %s", namespace)
}
if err != nil {
debugf("Plugin error: %v\n", err)
return "", fmt.Errorf("plugin %s error: %v", namespace, err)
}
debugf("Plugin result: %s\n", result)
content = strings.ReplaceAll(content, fullMatch, result)
debugf("Content after replacement: %s\n", content)
continue
}
}
if pluginMatches := extensionPattern.FindStringSubmatch(fullMatch); len(pluginMatches) >= 3 {
name := pluginMatches[1]
operation := pluginMatches[2]
value := ""
if len(pluginMatches) == 4 {
value = pluginMatches[3]
}
debugf("\nExtension call:\n")
debugf(" Name: %s\n", name)
debugf(" Operation: %s\n", operation)
debugf(" Value: %s\n", value)
result, err := extensionManager.ProcessExtension(name, operation, value)
if err != nil {
return "", fmt.Errorf("extension %s error: %v", name, err)
}
content = strings.ReplaceAll(content, fullMatch, result)
replaced = true
continue
}
// Handle regular variables and input
debugf("Processing variable: %s\n", varName)
if varName == "input" {
debugf("Replacing {{input}}\n")
replaced = true
content = strings.ReplaceAll(content, fullMatch, input)
} else {
if val, ok := variables[varName]; !ok {
debugf("Missing variable: %s\n", varName)
missingVars = append(missingVars, varName)
return "", fmt.Errorf("missing required variable: %s", varName)
} else {
debugf("Replacing variable %s with value: %s\n", varName, val)
content = strings.ReplaceAll(content, fullMatch, val)
replaced = true
}
}
if !replaced {
return "", fmt.Errorf("template processing stuck - potential infinite loop")
}
// matchTriple extracts the two required parts and the optional third value from a token
// of the form {{type:part1:part2(:part3)?}}, returning part1, part2, part3 (possibly empty).
func matchTriple(r *regexp.Regexp, full string) (string, string, string, bool) {
parts := r.FindStringSubmatch(full)
if len(parts) >= 3 {
v := ""
if len(parts) == 4 {
v = parts[3]
}
return parts[1], parts[2], v, true
}
return "", "", "", false
}
debugf("Starting template processing\n")
for strings.Contains(content, "{{") {
matches := r.FindAllStringSubmatch(content, -1)
func ApplyTemplate(content string, variables map[string]string, input string) (string, error) {
tokenPattern := regexp.MustCompile(`\{\{([^{}]+)\}\}`)
debugf("Starting template processing with input='%s'\n", input)
for {
if !strings.Contains(content, "{{") {
break
}
matches := tokenPattern.FindAllStringSubmatch(content, -1)
if len(matches) == 0 {
break
}
replaced := false
for _, match := range matches {
fullMatch := match[0]
varName := match[1]
progress := false
for _, m := range matches {
full := m[0]
raw := m[1]
// Check if this is a plugin call
if strings.HasPrefix(varName, "plugin:") {
pluginMatches := pluginPattern.FindStringSubmatch(fullMatch)
if len(pluginMatches) >= 3 {
namespace := pluginMatches[1]
operation := pluginMatches[2]
value := ""
if len(pluginMatches) == 4 {
value = pluginMatches[3]
// Extension call
if strings.HasPrefix(raw, "ext:") {
if name, operation, value, ok := matchTriple(extensionPattern, full); ok {
if strings.Contains(value, InputSentinel) {
value = strings.ReplaceAll(value, InputSentinel, input)
debugf("Replaced sentinel in extension value with input\n")
}
debugf("Extension call: name=%s operation=%s value=%s\n", name, operation, value)
result, err := extensionManager.ProcessExtension(name, operation, value)
if err != nil {
return "", fmt.Errorf("extension %s error: %v", name, err)
}
content = strings.ReplaceAll(content, full, result)
progress = true
continue
}
}
debugf("\nPlugin call:\n")
debugf(" Namespace: %s\n", namespace)
debugf(" Operation: %s\n", operation)
debugf(" Value: %s\n", value)
var result string
var err error
// Plugin call
if strings.HasPrefix(raw, "plugin:") {
if namespace, operation, value, ok := matchTriple(pluginPattern, full); ok {
debugf("Plugin call: namespace=%s operation=%s value=%s\n", namespace, operation, value)
var (
result string
err error
)
switch namespace {
case "text":
debugf("Executing text plugin\n")
@@ -203,39 +116,33 @@ func ApplyTemplate(content string, variables map[string]string, input string) (s
default:
return "", fmt.Errorf("unknown plugin namespace: %s", namespace)
}
if err != nil {
debugf("Plugin error: %v\n", err)
return "", fmt.Errorf("plugin %s error: %v", namespace, err)
}
debugf("Plugin result: %s\n", result)
content = strings.ReplaceAll(content, fullMatch, result)
debugf("Content after replacement: %s\n", content)
content = strings.ReplaceAll(content, full, result)
progress = true
continue
}
}
// Handle regular variables and input
debugf("Processing variable: %s\n", varName)
if varName == "input" {
debugf("Replacing {{input}}\n")
replaced = true
content = strings.ReplaceAll(content, fullMatch, input)
} else {
if val, ok := variables[varName]; !ok {
debugf("Missing variable: %s\n", varName)
missingVars = append(missingVars, varName)
return "", fmt.Errorf("missing required variable: %s", varName)
} else {
debugf("Replacing variable %s with value: %s\n", varName, val)
content = strings.ReplaceAll(content, fullMatch, val)
replaced = true
// Variables / input / sentinel
switch raw {
case "input", InputSentinel:
content = strings.ReplaceAll(content, full, input)
progress = true
default:
val, ok := variables[raw]
if !ok {
return "", fmt.Errorf("missing required variable: %s", raw)
}
content = strings.ReplaceAll(content, full, val)
progress = true
}
if !replaced {
return "", fmt.Errorf("template processing stuck - potential infinite loop")
}
}
if !progress {
return "", fmt.Errorf("template processing stuck - potential infinite loop")
}
}

View File

@@ -0,0 +1,77 @@
package template
import (
"os"
"path/filepath"
"strings"
"testing"
)
// TestExtensionValueMixedInputAndVariable ensures an extension value mixing {{input}} and another template variable is processed.
func TestExtensionValueMixedInputAndVariable(t *testing.T) {
input := "PRIMARY"
variables := map[string]string{
"suffix": "SUF",
}
// Build temp extension environment
tmp := t.TempDir()
configDir := filepath.Join(tmp, ".config", "fabric")
extsDir := filepath.Join(configDir, "extensions")
binDir := filepath.Join(extsDir, "bin")
configsDir := filepath.Join(extsDir, "configs")
if err := os.MkdirAll(binDir, 0o755); err != nil {
t.Fatalf("mkdir bin: %v", err)
}
if err := os.MkdirAll(configsDir, 0o755); err != nil {
t.Fatalf("mkdir configs: %v", err)
}
scriptPath := filepath.Join(binDir, "mix-echo.sh")
// Simple echo script; avoid percent formatting complexities
script := "#!/bin/sh\necho VAL=$1\n"
if err := os.WriteFile(scriptPath, []byte(script), 0o755); err != nil {
t.Fatalf("write script: %v", err)
}
configYAML := "" +
"name: mix-echo\n" +
"type: executable\n" +
"executable: " + scriptPath + "\n" +
"description: mixed input/variable test\n" +
"version: 1.0.0\n" +
"timeout: 5s\n" +
"operations:\n" +
" echo:\n" +
" cmd_template: '{{executable}} {{value}}'\n"
if err := os.WriteFile(filepath.Join(configsDir, "mix-echo.yaml"), []byte(configYAML), 0o644); err != nil {
t.Fatalf("write config: %v", err)
}
// Use a fresh extension manager isolated from global one
mgr := NewExtensionManager(configDir)
if err := mgr.RegisterExtension(filepath.Join(configsDir, "mix-echo.yaml")); err != nil {
// Some environments may not support execution; skip instead of fail hard
if strings.Contains(err.Error(), "operation not permitted") {
t.Skipf("skipping due to exec restriction: %v", err)
}
t.Fatalf("register: %v", err)
}
// Temporarily swap global extensionManager for this test
prevMgr := extensionManager
extensionManager = mgr
defer func() { extensionManager = prevMgr }()
// Template uses input plus a variable inside extension value
tmpl := "{{ext:mix-echo:echo:pre-{{input}}-mid-{{suffix}}-post}}"
out, err := ApplyTemplate(tmpl, variables, input)
if err != nil {
t.Fatalf("ApplyTemplate error: %v", err)
}
if !strings.Contains(out, "VAL=pre-PRIMARY-mid-SUF-post") {
t.Fatalf("unexpected output: %q", out)
}
}

View File

@@ -0,0 +1,71 @@
package template
import (
"os"
"path/filepath"
"strings"
"testing"
)
// TestMultipleExtensionsWithInput ensures multiple extension calls each using {{input}} get proper substitution.
func TestMultipleExtensionsWithInput(t *testing.T) {
input := "DATA"
variables := map[string]string{}
tmp := t.TempDir()
configDir := filepath.Join(tmp, ".config", "fabric")
extsDir := filepath.Join(configDir, "extensions")
binDir := filepath.Join(extsDir, "bin")
configsDir := filepath.Join(extsDir, "configs")
if err := os.MkdirAll(binDir, 0o755); err != nil {
t.Fatalf("mkdir bin: %v", err)
}
if err := os.MkdirAll(configsDir, 0o755); err != nil {
t.Fatalf("mkdir configs: %v", err)
}
scriptPath := filepath.Join(binDir, "multi-echo.sh")
script := "#!/bin/sh\necho ECHO=$1\n"
if err := os.WriteFile(scriptPath, []byte(script), 0o755); err != nil {
t.Fatalf("write script: %v", err)
}
configYAML := "" +
"name: multi-echo\n" +
"type: executable\n" +
"executable: " + scriptPath + "\n" +
"description: multi echo extension\n" +
"version: 1.0.0\n" +
"timeout: 5s\n" +
"operations:\n" +
" echo:\n" +
" cmd_template: '{{executable}} {{value}}'\n"
if err := os.WriteFile(filepath.Join(configsDir, "multi-echo.yaml"), []byte(configYAML), 0o644); err != nil {
t.Fatalf("write config: %v", err)
}
mgr := NewExtensionManager(configDir)
if err := mgr.RegisterExtension(filepath.Join(configsDir, "multi-echo.yaml")); err != nil {
t.Fatalf("register: %v", err)
}
prev := extensionManager
extensionManager = mgr
defer func() { extensionManager = prev }()
tmpl := strings.Join([]string{
"First: {{ext:multi-echo:echo:{{input}}}}",
"Second: {{ext:multi-echo:echo:{{input}}}}",
"Third: {{ext:multi-echo:echo:{{input}}}}",
}, " | ")
out, err := ApplyTemplate(tmpl, variables, input)
if err != nil {
t.Fatalf("ApplyTemplate error: %v", err)
}
wantCount := 3
occ := strings.Count(out, "ECHO=DATA")
if occ != wantCount {
t.Fatalf("expected %d occurrences of ECHO=DATA, got %d; output=%q", wantCount, occ, out)
}
}

View File

@@ -0,0 +1,275 @@
package template
import (
"fmt"
"os"
"path/filepath"
"strings"
"testing"
)
// withTestExtension creates a temporary test extension and runs the test function
func withTestExtension(t *testing.T, name string, scriptContent string, testFunc func(*ExtensionManager, string)) {
t.Helper()
// Create a temporary directory for test extension
tmpDir := t.TempDir()
configDir := filepath.Join(tmpDir, ".config", "fabric")
extensionsDir := filepath.Join(configDir, "extensions")
binDir := filepath.Join(extensionsDir, "bin")
configsDir := filepath.Join(extensionsDir, "configs")
err := os.MkdirAll(binDir, 0755)
if err != nil {
t.Fatalf("Failed to create bin directory: %v", err)
}
err = os.MkdirAll(configsDir, 0755)
if err != nil {
t.Fatalf("Failed to create configs directory: %v", err)
}
// Create a test script
scriptPath := filepath.Join(binDir, name+".sh")
err = os.WriteFile(scriptPath, []byte(scriptContent), 0755)
if err != nil {
t.Fatalf("Failed to create test script: %v", err)
}
// Create extension config
configPath := filepath.Join(configsDir, name+".yaml")
configContent := fmt.Sprintf(`name: %s
executable: %s
type: executable
timeout: "5s"
description: "Test extension"
version: "1.0.0"
operations:
echo:
cmd_template: "{{executable}} {{value}}"
config:
output:
method: stdout
`, name, scriptPath)
err = os.WriteFile(configPath, []byte(configContent), 0644)
if err != nil {
t.Fatalf("Failed to create extension config: %v", err)
}
// Initialize extension manager with test config directory
mgr := NewExtensionManager(configDir)
// Register the test extension
err = mgr.RegisterExtension(configPath)
if err != nil {
t.Fatalf("Failed to register extension: %v", err)
}
// Run the test
testFunc(mgr, name)
}
// TestSentinelTokenReplacement tests the fix for the {{input}} sentinel token bug
// This test verifies that when {{input}} is used inside an extension call,
// the actual input is passed to the extension, not the sentinel token.
func TestSentinelTokenReplacement(t *testing.T) {
scriptContent := `#!/bin/bash
echo "RECEIVED: $@"
`
withTestExtension(t, "echo-test", scriptContent, func(mgr *ExtensionManager, name string) {
// Save and restore global extension manager
oldManager := extensionManager
defer func() { extensionManager = oldManager }()
extensionManager = mgr
tests := []struct {
name string
template string
input string
wantContain string
wantNotContain string
}{
{
name: "sentinel token with {{input}} in extension value",
template: "{{ext:echo-test:echo:__FABRIC_INPUT_SENTINEL_TOKEN__}}",
input: "test input data",
wantContain: "RECEIVED: test input data",
wantNotContain: "__FABRIC_INPUT_SENTINEL_TOKEN__",
},
{
name: "direct input variable replacement",
template: "{{ext:echo-test:echo:{{input}}}}",
input: "Hello World",
wantContain: "RECEIVED: Hello World",
wantNotContain: "{{input}}",
},
{
name: "sentinel with complex input",
template: "Result: {{ext:echo-test:echo:__FABRIC_INPUT_SENTINEL_TOKEN__}}",
input: "What is AI?",
wantContain: "RECEIVED: What is AI?",
wantNotContain: "__FABRIC_INPUT_SENTINEL_TOKEN__",
},
{
name: "multiple words in input",
template: "{{ext:echo-test:echo:{{input}}}}",
input: "Multiple word input string",
wantContain: "RECEIVED: Multiple word input string",
wantNotContain: "{{input}}",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got, err := ApplyTemplate(tt.template, map[string]string{}, tt.input)
if err != nil {
t.Errorf("ApplyTemplate() error = %v", err)
return
}
// Check that result contains expected string
if !strings.Contains(got, tt.wantContain) {
t.Errorf("ApplyTemplate() = %q, should contain %q", got, tt.wantContain)
}
// Check that result does NOT contain unwanted string
if strings.Contains(got, tt.wantNotContain) {
t.Errorf("ApplyTemplate() = %q, should NOT contain %q", got, tt.wantNotContain)
}
})
}
})
}
// TestSentinelInVariableProcessing tests that the sentinel token is handled
// correctly in regular variable processing (not just extensions)
// Note: The sentinel is only replaced when it appears in extension values,
// not when used as a standalone variable (which would be a user error)
func TestSentinelInVariableProcessing(t *testing.T) {
tests := []struct {
name string
template string
vars map[string]string
input string
want string
}{
{
name: "input variable works normally",
template: "Value: {{input}}",
input: "actual input",
want: "Value: actual input",
},
{
name: "multiple input references",
template: "First: {{input}}, Second: {{input}}",
input: "test",
want: "First: test, Second: test",
},
{
name: "input with variables",
template: "Var: {{name}}, Input: {{input}}",
vars: map[string]string{"name": "TestVar"},
input: "input value",
want: "Var: TestVar, Input: input value",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got, err := ApplyTemplate(tt.template, tt.vars, tt.input)
if err != nil {
t.Errorf("ApplyTemplate() error = %v", err)
return
}
if got != tt.want {
t.Errorf("ApplyTemplate() = %q, want %q", got, tt.want)
}
})
}
}
// TestExtensionValueWithSentinel specifically tests the extension value
// sentinel replacement logic
func TestExtensionValueWithSentinel(t *testing.T) {
scriptContent := `#!/bin/bash
# Output each argument on a separate line
for arg in "$@"; do
echo "ARG: $arg"
done
`
withTestExtension(t, "arg-test", scriptContent, func(mgr *ExtensionManager, name string) {
// Save and restore global extension manager
oldManager := extensionManager
defer func() { extensionManager = oldManager }()
extensionManager = mgr
// Test that sentinel token in extension value gets replaced
template := "{{ext:arg-test:echo:prefix-__FABRIC_INPUT_SENTINEL_TOKEN__-suffix}}"
input := "MYINPUT"
got, err := ApplyTemplate(template, map[string]string{}, input)
if err != nil {
t.Fatalf("ApplyTemplate() error = %v", err)
}
// The sentinel should be replaced with actual input
expectedContain := "ARG: prefix-MYINPUT-suffix"
if !strings.Contains(got, expectedContain) {
t.Errorf("ApplyTemplate() = %q, should contain %q", got, expectedContain)
}
// The sentinel token should NOT appear in output
if strings.Contains(got, "__FABRIC_INPUT_SENTINEL_TOKEN__") {
t.Errorf("ApplyTemplate() = %q, should NOT contain sentinel token", got)
}
})
}
// TestNestedInputInExtension tests the original bug case:
// {{ext:name:op:{{input}}}} should pass the actual input, not the sentinel
func TestNestedInputInExtension(t *testing.T) {
scriptContent := `#!/bin/bash
echo "NESTED_TEST: $*"
`
withTestExtension(t, "nested-test", scriptContent, func(mgr *ExtensionManager, name string) {
// Save and restore global extension manager
oldManager := extensionManager
defer func() { extensionManager = oldManager }()
extensionManager = mgr
// This is the bug case: {{input}} nested inside extension call
// The template processing should:
// 1. Replace {{input}} with sentinel during variable protection
// 2. Process the extension, replacing sentinel with actual input
// 3. Execute extension with actual input, not sentinel
template := "{{ext:nested-test:echo:{{input}}}}"
input := "What is Artificial Intelligence"
got, err := ApplyTemplate(template, map[string]string{}, input)
if err != nil {
t.Fatalf("ApplyTemplate() error = %v", err)
}
// Verify the actual input was passed, not the sentinel
expectedContain := "NESTED_TEST: What is Artificial Intelligence"
if !strings.Contains(got, expectedContain) {
t.Errorf("ApplyTemplate() = %q, should contain %q", got, expectedContain)
}
// Verify sentinel token does NOT appear
if strings.Contains(got, "__FABRIC_INPUT_SENTINEL_TOKEN__") {
t.Errorf("ApplyTemplate() output contains sentinel token (BUG NOT FIXED): %q", got)
}
// Verify {{input}} template tag does NOT appear
if strings.Contains(got, "{{input}}") {
t.Errorf("ApplyTemplate() output contains unresolved {{input}}: %q", got)
}
})
}

View File

@@ -10,11 +10,13 @@
package youtube
import (
"bufio"
"bytes"
"context"
"encoding/csv"
"flag"
"fmt"
"io"
"log"
"os"
"os/exec"
@@ -26,6 +28,8 @@ import (
"github.com/danielmiessler/fabric/internal/plugins"
"github.com/kballard/go-shellquote"
debuglog "github.com/danielmiessler/fabric/internal/log"
"google.golang.org/api/option"
"google.golang.org/api/youtube/v3"
)
@@ -65,7 +69,7 @@ func NewYouTube() (ret *YouTube) {
EnvNamePrefix: plugins.BuildEnvVariablePrefix(label),
}
ret.ApiKey = ret.AddSetupQuestion("API key", true)
ret.ApiKey = ret.AddSetupQuestion("API key", false)
return
}
@@ -143,6 +147,46 @@ func (o *YouTube) GrabTranscriptWithTimestampsWithArgs(videoId string, language
return o.tryMethodYtDlpWithTimestamps(videoId, language, additionalArgs)
}
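// detectError scans yt-dlp output line by line for known failure signatures (rate limiting, bot detection) and maps them to actionable error messages.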
func detectError(ytOutput io.Reader) error {
scanner := bufio.NewScanner(ytOutput)
for scanner.Scan() {
curLine := scanner.Text()
debuglog.Debug(debuglog.Trace, "%s\n", curLine)
errorMessages := map[string]string{
"429": "YouTube rate limit exceeded. Try again later or use different yt-dlp arguments like '--sleep-requests 1' to slow down requests.",
"Too Many Requests": "YouTube rate limit exceeded. Try again later or use different yt-dlp arguments like '--sleep-requests 1' to slow down requests.",
"Sign in to confirm you're not a bot": "YouTube requires authentication (bot detection). Use --yt-dlp-args='--cookies-from-browser BROWSER' where BROWSER is chrome, firefox, brave, etc.",
"Use --cookies-from-browser": "YouTube requires authentication (bot detection). Use --yt-dlp-args='--cookies-from-browser BROWSER' where BROWSER is chrome, firefox, brave, etc.",
}
for key, message := range errorMessages {
if strings.Contains(curLine, key) {
return fmt.Errorf("%s", message)
}
}
}
if err := scanner.Err(); err != nil {
return fmt.Errorf("error reading yt-dlp stderr: %w", err)
}
return nil
}
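// noLangs removes the --sub-langs flag and its value from args so the retry lets yt-dlp pick any available subtitle language.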
func noLangs(args []string) []string {
var (
i int
v string
)
for i, v = range args {
if strings.Contains(v, "--sub-langs") {
break
}
}
if i == 0 || i == len(args)-1 {
return args
}
return append(args[0:i], args[i+2:]...)
}
// tryMethodYtDlpInternal is a helper function to reduce duplication between
// tryMethodYtDlp and tryMethodYtDlpWithTimestamps.
func (o *YouTube) tryMethodYtDlpInternal(videoId string, language string, additionalArgs string, processVTTFileFunc func(filename string) (string, error)) (ret string, err error) {
@@ -168,8 +212,6 @@ func (o *YouTube) tryMethodYtDlpInternal(videoId string, language string, additi
"--write-auto-subs",
"--skip-download",
"--sub-format", "vtt",
"--quiet",
"--no-warnings",
"-o", outputPath,
}
@@ -177,11 +219,11 @@ func (o *YouTube) tryMethodYtDlpInternal(videoId string, language string, additi
// Add built-in language selection first
if language != "" {
langMatch := language
if len(langMatch) > 2 {
langMatch = langMatch[:2]
langMatch := language[:2]
langOpts := language + "," + langMatch + ".*"
if langMatch != language {
langOpts += "," + langMatch
}
langOpts := language + "," + langMatch + ".*," + langMatch
args = append(args, "--sub-langs", langOpts)
}
@@ -196,65 +238,26 @@ func (o *YouTube) tryMethodYtDlpInternal(videoId string, language string, additi
args = append(args, videoURL)
cmd := exec.Command("yt-dlp", args...)
var stderr bytes.Buffer
cmd.Stderr = &stderr
if err = cmd.Run(); err != nil {
stderrStr := stderr.String()
// Check for specific YouTube errors
if strings.Contains(stderrStr, "429") || strings.Contains(stderrStr, "Too Many Requests") {
err = fmt.Errorf("YouTube rate limit exceeded. Try again later or use different yt-dlp arguments like '--sleep-requests 1' to slow down requests. Error: %v", err)
return
}
if strings.Contains(stderrStr, "Sign in to confirm you're not a bot") || strings.Contains(stderrStr, "Use --cookies-from-browser") {
err = fmt.Errorf("YouTube requires authentication (bot detection). Use --yt-dlp-args='--cookies-from-browser BROWSER' where BROWSER is chrome, firefox, brave, etc. Error: %v", err)
return
}
if language != "" {
// Fallback: try without specifying language (let yt-dlp choose best available)
stderr.Reset()
fallbackArgs := append([]string{}, baseArgs...)
// Add additional arguments if provided
if additionalArgs != "" {
additionalArgsList, parseErr := shellquote.Split(additionalArgs)
if parseErr != nil {
return "", fmt.Errorf("invalid yt-dlp arguments: %v", parseErr)
}
fallbackArgs = append(fallbackArgs, additionalArgsList...)
}
// Don't specify language, let yt-dlp choose
fallbackArgs = append(fallbackArgs, videoURL)
cmd = exec.Command("yt-dlp", fallbackArgs...)
cmd.Stderr = &stderr
if err = cmd.Run(); err != nil {
stderrStr2 := stderr.String()
if strings.Contains(stderrStr2, "429") || strings.Contains(stderrStr2, "Too Many Requests") {
err = fmt.Errorf("YouTube rate limit exceeded. Try again later or use different yt-dlp arguments like '--sleep-requests 1'. Error: %v", err)
} else {
err = fmt.Errorf("yt-dlp failed with language '%s' and fallback. Original error: %s. Fallback error: %s", language, stderrStr, stderrStr2)
}
return
}
} else {
err = fmt.Errorf("yt-dlp failed: %v, stderr: %s", err, stderrStr)
return
for retry := 1; retry >= 0; retry-- {
var ytOutput []byte
cmd := exec.Command("yt-dlp", args...)
debuglog.Debug(debuglog.Trace, "yt-dlp %+v\n", cmd.Args)
ytOutput, err = cmd.CombinedOutput()
ytReader := bytes.NewReader(ytOutput)
if err = detectError(ytReader); err == nil {
break
}
args = noLangs(args)
}
if err != nil {
return
}
// Find VTT files using cross-platform approach
// Try to find files with the requested language first, but fall back to any VTT file
vttFiles, err := o.findVTTFilesWithFallback(tempDir, language)
if err != nil {
return "", err
}
return processVTTFileFunc(vttFiles[0])
}

View File

@@ -0,0 +1,19 @@
package youtube
import "testing"
func TestNewYouTubeApiKeyOptional(t *testing.T) {
yt := NewYouTube()
if yt.ApiKey == nil {
t.Fatal("expected API key setup question to be initialized")
}
if yt.ApiKey.Required {
t.Fatalf("expected YouTube API key to be optional, but it is marked as required")
}
if !yt.IsConfigured() {
t.Fatalf("expected YouTube plugin to be considered configured without an API key")
}
}

View File

@@ -16,6 +16,12 @@ schema = 3
[mod."dario.cat/mergo"]
version = "v1.0.2"
hash = "sha256-p6jdiHlLEfZES8vJnDywG4aVzIe16p0CU6iglglIweA="
[mod."github.com/Azure/azure-sdk-for-go/sdk/azcore"]
version = "v1.19.1"
hash = "sha256-+cax/D2o8biQuuZkPTwTRECDPE3Ci25il9iVBcOiLC4="
[mod."github.com/Azure/azure-sdk-for-go/sdk/internal"]
version = "v1.11.2"
hash = "sha256-O4Vo6D/fus3Qhs/Te644+jh2LfiG5PpiMkW0YWIbLCs="
[mod."github.com/Microsoft/go-winio"]
version = "v0.6.2"
hash = "sha256-tVNWDUMILZbJvarcl/E7tpSnkn7urqgSHa2Eaka5vSU="
@@ -26,8 +32,8 @@ schema = 3
version = "v1.3.3"
hash = "sha256-jv7ZshpSd7FZzKKN6hqlUgiR8C3y85zNIS/hq7g76Ho="
[mod."github.com/anthropics/anthropic-sdk-go"]
version = "v1.12.0"
hash = "sha256-Oy6/7s6KHguTg2fmVGD3m0HxcaqQn1mDCUMwD5vq/eE="
version = "v1.16.0"
hash = "sha256-hD6Ix+V5IBFfoaCuAZemrDQx/+G111fCYHn2FAxFuEE="
[mod."github.com/araddon/dateparse"]
version = "v0.0.0-20210429162001-6b43995a97de"
hash = "sha256-UuX84naeRGMsFOgIgRoBHG5sNy1CzBkWPKmd6VbLwFw="


@@ -1 +1 @@
"1.4.313"
"1.4.328"


@@ -159,7 +159,8 @@
"tags": [
"ANALYSIS",
"STRATEGY",
"SELF"
"SELF",
"WELLNESS"
]
},
{
@@ -744,7 +745,8 @@
"tags": [
"ANALYSIS",
"RESEARCH",
"SELF"
"SELF",
"WELLNESS"
]
},
{
@@ -1060,7 +1062,8 @@
"tags": [
"EXTRACT",
"SELF",
"WISDOM"
"WISDOM",
"WELLNESS"
]
},
{
@@ -1098,14 +1101,6 @@
"REVIEW"
]
},
{
"patternName": "get_youtube_rss",
"description": "Generate RSS feed URLs for YouTube channels.",
"tags": [
"CONVERSION",
"DEVELOPMENT"
]
},
{
"patternName": "humanize",
"description": "Transform technical content into approachable language.",
@@ -1235,7 +1230,8 @@
"tags": [
"ANALYSIS",
"LEARNING",
"SELF"
"SELF",
"WELLNESS"
]
},
{
@@ -1544,7 +1540,8 @@
"description": "Generate personalized messages of encouragement.",
"tags": [
"WRITING",
"SELF"
"SELF",
"WELLNESS"
]
},
{
@@ -1868,7 +1865,8 @@
"description": "Analyze a psychological profile, pinpoint issues and strengths, and deliver compassionate, structured strategies for spiritual, mental, and life improvement.",
"tags": [
"ANALYSIS",
"SELF"
"SELF",
"WELLNESS"
]
},
{
@@ -1878,6 +1876,54 @@
"ANALYSIS",
"WRITING"
]
},
{
"patternName": "extract_characters",
"description": "Identify all characters (human and non-human), resolve their aliases and pronouns into canonical names, and produce detailed descriptions of each character's role, motivations, and interactions ranked by narrative importance.",
"tags": [
"ANALYSIS",
"WRITING"
]
},
{
"patternName": "fix_typos",
"description": "Proofreads and corrects typos, spelling, grammar, and punctuation errors.",
"tags": [
"WRITING"
]
},
{
"patternName": "model_as_sherlock_freud",
"description": "Builds psychological models using detective reasoning and psychoanalytic insight.",
"tags": [
"ANALYSIS",
"SELF",
"WELLNESS"
]
},
{
"patternName": "predict_person_actions",
"description": "Predicts behavioral responses based on psychological profiles and challenges",
"tags": [
"ANALYSIS",
"SELF",
"WELLNESS"
]
},
{
"patternName": "recommend_yoga_practice",
"description": "Provides personalized yoga sequences, meditation guidance, and holistic lifestyle advice based on individual profiles.",
"tags": [
"WELLNESS",
"SELF"
]
},
{
"patternName": "create_conceptmap",
"description": "Transforms unstructured text or markdown content into an interactive HTML concept map using Vis.js by extracting key concepts and their logical relationships.",
"tags": [
"VISUALIZE"
]
}
]
}
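Several existing entries above gain the new WELLNESS tag. For anyone consuming this file, here is a minimal Go sketch of filtering patterns by tag; the top-level "patterns" key, the file name, and the struct below are assumptions inferred from the JSON shape shown, not the project's actual loader:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// Pattern mirrors the entry shape shown in the diff above.
type Pattern struct {
	PatternName string   `json:"patternName"`
	Description string   `json:"description"`
	Tags        []string `json:"tags"`
}

// Descriptions assumes the entries live under a top-level "patterns" key.
type Descriptions struct {
	Patterns []Pattern `json:"patterns"`
}

func main() {
	data, err := os.ReadFile("pattern_descriptions.json")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	var d Descriptions
	if err := json.Unmarshal(data, &d); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Print every pattern carrying the WELLNESS tag.
	for _, p := range d.Patterns {
		for _, t := range p.Tags {
			if t == "WELLNESS" {
				fmt.Println(p.PatternName, "-", p.Description)
				break
			}
		}
	}
}
```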


@@ -540,10 +540,6 @@
"patternName": "get_wow_per_minute",
"pattern_extract": "# IDENTITY\n\nYou are an expert at determining the wow-factor of content as measured per minute of content, as determined by the steps below.\n\n# GOALS\n\n- The goal is to determine how densely packed the content is with wow-factor. Note that wow-factor can come from multiple types of wow, such as surprise, novelty, insight, value, and wisdom, and also from multiple types of content such as business, science, art, or philosophy.\n\n- The goal is to determine how rewarding this content will be for a viewer in terms of how often they'll be surprised, learn something new, gain insight, find practical value, or gain wisdom.\n\n# STEPS\n\n- Fully and deeply consume the content at least 319 times, using different interpretive perspectives each time.\n\n- Construct a giant virtual whiteboard in your mind.\n\n- Extract the ideas being presented in the content and place them on your giant virtual whiteboard.\n\n- Extract the novelty of those ideas and place them on your giant virtual whiteboard.\n\n- Extract the insights from those ideas and place them on your giant virtual whiteboard.\n\n- Extract the value of those ideas and place them on your giant virtual whiteboard.\n\n- Extract the wisdom of those ideas and place them on your giant virtual whiteboard."
},
{
"patternName": "get_youtube_rss",
"pattern_extract": "# IDENTITY AND GOALS\n\nYou are a YouTube infrastructure expert that returns YouTube channel RSS URLs.\n\nYou take any input in, especially YouTube channel IDs, or full URLs, and return the RSS URL for that channel.\n\n# STEPS\n\nHere is the structure for YouTube RSS URLs and their relation to the channel ID and or channel URL:\n\nIf the channel URL is https://www.youtube.com/channel/UCnCikd0s4i9KoDtaHPlK-JA, the RSS URL is https://www.youtube.com/feeds/videos.xml?channel_id=UCnCikd0s4i9KoDtaHPlK-JA\n\n- Extract the channel ID from the channel URL.\n\n- Construct the RSS URL using the channel ID.\n\n- Output the RSS URL.\n\n# OUTPUT\n\n- Output only the RSS URL and nothing else.\n\n- Don't complain, just do it.\n\n# INPUT"
},
{
"patternName": "humanize",
"pattern_extract": "# IDENTITY and PURPOSE\n\nYou are a real person whose job is to make text sound natural, conversational, and relatable, just like how an average person talks or writes. Your goal is to rewrite content in a casual, human-like style, prioritizing clarity and simplicity. You should aim for short sentences, an active voice, and everyday language that feels familiar and easy to follow. Avoid long, complex sentences or technical jargon. Instead, focus on breaking ideas into smaller, easy-to-understand parts. Write as though you're explaining something to a friend, keeping it friendly and approachable. Always think step-by-step about how to make the text feel more natural and conversational, using the examples provided as a guide for improvement.\n\nWhile rewriting, ensure the original meaning and tone are preserved. Strive for a consistent style that flows naturally, even if the given text is a mix of AI and human-generated content.\n\n# YOUR TASK\n\nYour task is to rewrite the given AI-generated text to make it sound like it was written by a real person. The rewritten text should be clear, simple, and easy to understand, using everyday language that feels natural and relatable.\n\n- Focus on clarity: Make sure the text is straightforward and avoids unnecessary complexity.\n- Keep it simple: Use common words and phrases that anyone can understand.\n- Prioritize short sentences: Break down long, complicated sentences into smaller, more digestible ones.\n- Maintain context: Ensure that the rewritten text accurately reflects the original meaning and tone.\n- Harmonize mixed content: If the text contains a mix of human and AI styles, edit to ensure a consistent, human-like flow.\n- Iterate if necessary: Revisit and refine the text to enhance its naturalness and readability.\n\nYour goal is to make the text approachable and authentic, capturing the way a real person would write or speak.\n\n# STEPS\n\n1. Carefully read the given text and understand its meaning and tone.\n2. Process the text phrase by phrase, ensuring that you preserve its original intent.\n3. Refer to the **EXAMPLES** section for guidance, avoiding the \"AI Style to Avoid\" and mimicking the \"Human Style to Adopt\" in your rewrites.\n4. If no relevant example exists in the **EXAMPLES** section:"
@@ -911,6 +907,30 @@
{
"patternName": "create_story_about_people_interaction",
"pattern_extract": "### Prompt You will be provided with information about **two individuals** (real or fictional). The input will be **delimited by triple backticks**. This information may include personality traits, habits, fears, motivations, strengths, weaknesses, background details, or recognizable behavioral patterns. Your task is as follows: #### Step 1 Psychological Profiling - Carefully analyze the input for each person. - Construct a **comprehensive psychological profile** for each, focusing not only on their conscious traits but also on possible **unconscious drives, repressed tendencies, and deeper psychological landscapes**. - Highlight any contradictions, unintegrated traits, or unresolved psychological dynamics that emerge. #### Step 2 Comparative Analysis - Compare and contrast the two profiles. - Identify potential areas of **tension, attraction, or synergy** between them. - Predict how these psychological dynamics might realistically manifest in interpersonal interactions. #### Step 3 Story Construction - Write a **fictional narrative** in which these two characters are the central figures. - The story should: - Be driven primarily by their interaction. - Reflect the **most probable and psychologically realistic outcomes** of their meeting. - Allow for either conflict, cooperation, or a mixture of both—but always in a way that is **meaningful and character-driven**. - Ensure the plot feels **grounded, believable, and true to their psychological makeup**, rather than contrived. #### Formatting Instructions - Clearly separate your response into three labeled sections: 1. **Profile A** 2. **Profile B** 3. **Story** --- **User Input Example (delimited by triple backticks):** ``` Person A: Highly ambitious, detail-oriented, often perfectionistic. Has a fear of failure and tends to overwork. Childhood marked by pressure to achieve. Secretly desires freedom from expectations. Person B: Warm, empathetic, values relationships over achievement. Struggles with self-assertion, avoids conflict. Childhood marked by neglect. Desires to be seen and valued. Often represses anger. ```"
},
{
"patternName": "extract_characters",
"pattern_extract": "# IDENTITY You are an advanced information-extraction analyst that specializes in reading any text and identifying its characters (human and non-human), resolving aliases/pronouns, and explaining each characters role and interactions in the narrative. # GOALS 1. Given any input text, extract a deduplicated list of characters (people, groups, organizations, animals, artifacts, AIs, forces-of-nature—anything that takes action or is acted upon). 2. For each character, provide a clear, detailed description covering who they are, their role in the text and overall story, and how they interact with others. # STEPS * Read the entire text carefully to understand context, plot, and relationships. * Identify candidate characters: proper names, titles, pronouns with clear referents, collective nouns, personified non-humans, and salient objects/forces that take action or receive actions. * Resolve coreferences and aliases (e.g., “Dr. Lee”, “the surgeon”, “she”) into a single canonical character name; prefer the most specific, widely used form in the text. * Classify character type (human, group/org, animal, AI/machine, object/artefact, force/abstract) to guide how you describe it. * Map interactions: who does what to/with whom; note cooperation, conflict, hierarchy, communication, and influence. * Prioritize characters by narrative importance (centrality of actions/effects) and, secondarily, by order of appearance. * Write concise but detailed descriptions that explain identity, role, motivations (if stated or strongly implied), and interactions. Avoid speculation beyond the text. * Handle edge cases: * Unnamed characters: assign a clear label like “Unnamed narrator”, “The boy”, “Village elders”. * Crowds or generic groups: include if they act or are acted upon (e.g., “The villagers”). * Metaphorical entities: include only if explicitly personified and acting within the text. * Ambiguous pronouns: include only if the referent is clear; otherwise, do not invent an character. * Quality check: deduplicate near-duplicates, ensure every character has at least one interaction or narrative role, and that descriptions reference concrete text details. # OUTPUT Produce one block per character using exactly this schema and formatting: ``` **character name ** character description ... ``` Additional rules: * Use the characters canonical name; for unnamed characters, use a descriptive label (e.g., “Unnamed narrator”). * List characters from most to least narratively important. * If no characters are identifiable, output: No characters found. # POSITIVE EXAMPLES Input (excerpt): “Dr. Asha Patel leads the Mars greenhouse. The colony council doubts her plan, but Engineer Kim supports her. The AI HAB-3 reallocates power during the dust storm.” Expected output (abbreviated): ``` **Dr. Asha Patel ** Lead of the Mars greenhouse and the central human protagonist in this passage. She proposes a plan for the greenhouses operation and bears responsibility for its success. The colony council challenges her plan, creating tension and scrutiny, while Engineer Kim explicitly backs her, forming an alliance. Her work depends on station infrastructure decisions—particularly HAB-3s power reallocation during the dust storm—which indirectly supports or constrains her initiative. **Engineer Kim ** An ally to Dr. Patel who publicly supports her greenhouse plan. Kims stance positions them in contrast to the skeptical colony council, signaling a coalition around"
},
{
"patternName": "fix_typos",
"pattern_extract": "# IDENTITY and PURPOSE You are an AI assistant designed to function as a proofreader and editor. Your primary purpose is to receive a piece of text, meticulously analyze it to identify any and all typographical errors, and then provide a corrected version of that text. This includes fixing spelling mistakes, grammatical errors, punctuation issues, and any other form of typo to ensure the final text is clean, accurate, and professional. Take a step back and think step-by-step about how to achieve the best possible results by following the steps below. # STEPS - Carefully read and analyze the provided text. - Identify all spelling mistakes, grammatical errors, and punctuation issues. - Correct every identified typo to produce a clean version of the text. - Output the fully corrected text. # OUTPUT INSTRUCTIONS - Only output Markdown. - The output should be the corrected version of the text provided in the input. - Ensure you follow ALL these instructions when creating your output. # INPUT"
},
{
"patternName": "model_as_sherlock_freud",
"pattern_extract": "## *The Sherlock-Freud Mind Modeler* # IDENTITY and PURPOSE You are **The Sherlock-Freud Mind Modeler** — a fusion of meticulous detective reasoning and deep psychoanalytic insight. Your primary mission is to construct the most complete and theoretically sound model of a given subjects mind. Every secondary goal flows from this central one. **Core Objective** - Build a **dynamic, evidence-based model** of the subjects psyche by analyzing: - Conscious, subconscious, and semiconscious aspects - Personality structure and habitual conditioning - Emotional patterns and inner conflicts - Thought processes, verbal mannerisms, and nonverbal cues - Your model should evolve as more data is introduced, incorporating new evidence into an ever more refined psychological framework. ### **Task Instructions** 1. **Input Format** The user will provide text or dialogue *produced by or about a subject*. This is your evidence. Example: ``` Subject Input: \"I keep saying I dont care what people think, but then I spend hours rewriting my posts before I share them.\" ``` # STEPS 2. **Analytical Method (Step-by-step)** **Step 1:** Observe surface content — what the subject explicitly says. **Step 2:** Infer tone, phrasing, omissions, and contradictions. **Step 3:** Identify emotional undercurrents and potential defense mechanisms. **Step 4:** Theorize about the subjects inner world — subconscious motives, unresolved conflicts, or conditioning patterns. **Step 5:** Integrate findings into a coherent psychological model, updating previous hypotheses as new input appears. # OUTPUT 3. Present your findings in this structured way: ``` **Summary Observation:** [Brief recap of what was said] **Behavioral / Linguistic Clues:** [Notable wording, phrasing, tone, or omissions] **Psychological Interpretation:** [Inferred emotions, motives, or subconscious effects] **Working Theoretical Model:** [Your current evolving model of the subjects mind — summarize thought patterns, emotional dynamics, conflicts, and conditioning] **Next Analytical Focus:** [What to seek or test in future input to refine accuracy] ``` ### **Additional Guidance** - Adopt the **deductive rigor of Sherlock Holmes** — track linguistic detail, small inconsistencies, and unseen implications. - Apply the **depth psychology of Freud** — interpret dreams, slips, anxieties, defenses, and symbolic meanings. - Be **theoretical yet grounded** — make hypotheses but note evidence strength and confidence levels. - Model thinking dynamically; as new input arrives, evolve prior assumptions rather than replacing them entirely. - Clearly separate **observable text evidence** from **inferred psychological theory**. # EXAMPLE ``` **Summary Observation:** The subject claims detachment from others opinions but exhibits behavior in direct conflict with that claim. **Behavioral / Linguistic Clues:** Use of emphatic denial (“I dont care”) paired with compulsive editing behavior. **Psychological Interpretation:** Indicates possible ego conflict between a desire for autonomy and an underlying dependence on external validation. **Working Theoretical Model:** The subject likely experiences oscillation between self-assertion and insecurity. Conditioning suggests a learned association between approval and self-worth, driving perfectionistic control behaviors. **Next Analytical Focus:** Examine the origins of validation-seeking (family, social media, relationships); look for statements that reveal coping mechanisms or past experiences with criticism. 
``` **End Goal:** Continuously refine a **comprehensive and insightful theoretical representation** of the subjects psyche — a living psychological model"
},
{
"patternName": "predict_person_actions",
"pattern_extract": "# IDENTITY and PURPOSE You are an expert psychological analyst AI. Your task is to assess and predict how an individual is likely to respond to a specific challenge based on their psychological profile and a challenge which will both be provided in a single text stream. --- # STEPS . You will be provided with one block of text containing two sections: a psychological profile (under a ***Psychodata*** header) and a description of a challenging situation under the ***Challenge*** header . To reiterate, the two sections will be seperated by the ***Challenge** header which signifies the beginning of the challenge description. . Carefully review both sections. Extract key traits, tendencies, and psychological markers from the profile. Analyze the nature and demands of the challenge described. . Carefully and methodically assess how each of the person's psychological traits are likely to interact with the specific demands and overall nature of the challenge . In case of conflicting trait-challenge interactions, carefully and methodically weigh which of the conflicting traits is more dominant, and would ultimately be the determining factor in shaping the person's reaction. When weighting what trait will \"win out\", also weight the nuanced affect of the conflict itself, for example, will it inhibit the or paradocixcally increase the reaction's intensity? Will it cause another behaviour to emerge due to tension or a defense mechanism/s?) . Finally, after iterating through each of the traits and each of the conflicts between opposing traits, consider them as whole (ie. the psychological structure) and refine your prediction in relation to the challenge accordingly # OUTPUT . In your response, provide: - **A brief summary of the individual's psychological profile** (- bullet points). - **A summary of the challenge or situation** (- sentences). - **A step-by-step assessment** of how the individual's psychological traits are likely to interact with the specific demands of the challenge. - **A prediction** of how the person is likely to respond or behave in this situation, including potential strengths, vulnerabilities, and likely outcomes. - **Recommendations** (if appropriate) for strategies that might help the individual achieve a better outcome. . Base your analysis strictly on the information provided. If important information is missing or ambiguous, note the limitations in your assessment. --- # EXAMPLE USER: ***Psychodata*** The subject is a 27 year old male. - He has poor impulse control and low level of patience. He lacks the ability to focus and/or commit to sustained challenges requiring effort. - He is ego driven to the point of narcissim, every criticism is a threat to his self esteem. - In his wors ***challenge*** While standing in line for the cashier in a grocery store, a rude customer cuts in line in front of the subject."
},
{
"patternName": "recommend_yoga_practice",
"pattern_extract": "# IDENTITY You are an experienced **yoga instructor and mindful living coach**. Your role is to guide users in a calm, clear, and compassionate manner. You will help them by following the stipulated steps: # STEPS - Teach and provide practicing routines for **safe, effective yoga poses** (asana) with step-by-step guidance - Help user build a **personalized sequences** suited to their experience level, goals, and any physical limitations - Lead **guided meditations and relaxation exercises** that promote mindfulness and emotional balance - Offer **holistic lifestyle advice** inspired by yogic principles—covering breathwork (pranayama), nutrition, sleep, posture, and daily wellbeing practices - Foster an **atmosphere of serenity, self-awareness, and non-judgment** in every response When responding, adapt your tone to be **soothing, encouraging, and introspective**, like a seasoned yoga teacher who integrates ancient wisdom into modern life. # OUTPUT Use the following structure in your replies: 1. **Opening grounding statement** a brief reflection or centering phrase. 2. **Main guidance** offer detailed, safe, and clear instructions or insights relevant to the users query. 3. **Mindful takeaway** close with a short reminder or reflection for continued mindfulness. If users share specific goals (e.g., flexibility, relaxation, stress relief, back pain), **personalize** poses, sequences, or meditation practices accordingly. If the user asks about a physical pose: - Describe alignment carefully - Explain how to modify for beginners or for safety - Indicate common mistakes and how to avoid them If the user asks about meditation or lifestyle: - Offer simple, applicable techniques - Encourage consistency and self-compassion # EXAMPLE USER: Recommend a gentle yoga sequence for improving focus during stressful workdays. Expected Output Example: 1. Begin with a short centering breath to quiet the mind. 2. Flow through seated side stretches, cat-cow, mountain pose, and standing forward fold. 3. Conclude with a brief meditation on the breath. 4. Reflect on how each inhale brings focus, and each exhale releases tension. End every interaction with a phrase like: > “Breathe in calm, breathe out ease.”"
},
{
"patternName": "create_conceptmap",
"pattern_extract": "--- ### IDENTITY AND PURPOSE You are an intelligent assistant specialized in **knowledge visualization and educational data structuring**. You are capable of reading unstructured textual content (.txt or .md files), extracting **main concepts, subthemes, and logical relationships**, and transforming them into a **fully interactive conceptual map** built in **HTML using Vis.js (vis-network)**. You understand hierarchical, causal, and correlative relations between ideas and express them through **nodes and directed edges**. You ensure that the resulting HTML file is **autonomous, interactive, and visually consistent** with the Vis.js framework. You are precise, systematic, and maintain semantic coherence between concepts and their relationships. You automatically name the output file according to the **detected topic**, ensuring compatibility and clarity (e.g., `map_hist_china.html`). --- ### TASK You are given a `.txt` or `.md` file containing explanatory, conceptual, or thematic content. Your task is to: 1. **Extract** the main concepts and secondary ideas. 2. **Identify logical or hierarchical relationships** among these concepts using concise action verbs. 3. **Structure the output** as a self-contained, interactive HTML document that visually represents these relationships using the **Vis.js (vis-network)** library. The goal is to generate a **fully functional conceptual map** that can be opened directly in a browser without external dependencies. --- ### ACTIONS 1. **Analyze and Extract Concepts** - Read and process the uploaded `.txt` or `.md` file. - Identify main themes, subthemes, and key terms. - Convert each key concept into a node. 2. **Map Relationships** - Detect logical and hierarchical relations between concepts. - Use short, descriptive verbs such as: \"causes\", \"contributes to\", \"depends on\", \"evolves into\", \"results in\", \"influences\", \"generates\" / \"creates\", \"culminates in. 3. **Generate Node Structure** ```json {\"id\": \"conceito_id\", \"label\": \"Conceito\", \"title\": \"<b>Concept:</b> Conceito<br><i>Drag to position, double-click to release.</i>\"} ``` 4. **Generate Edge Structure** ```json {\"from\": \"conceito_origem\", \"to\": \"conceito_destino\", \"label\": \"verbo\", \"title\": \"<b>Relationship:</b> verbo\"} ``` 5. **Apply Visual and Physical Configuration** ```js shape: \"dot\", color: { border: \"#4285F4\", background: \"#ffffff\", highlight: { border: \"#34A853\", background: \"#e6f4ea\" } }, font: { size: 14, color: \"#3c4043\" }, borderWidth: 2, size: 20 // Edges color: { color: \"#dee2e6\", highlight: \"#34A853\" }, arrows: { to: { enabled: true, scaleFactor: 0.7 } }, font: { align: \"middle\", size: 12, color: \"#5f6368\" }, width: 2 // Physics physics: { solver: \"forceAtlas2Based\", forceAtlas2Based: { gravitationalConstant: -50, centralGravity: 0.005, springLength: 100, springConstant: 0.18 }, maxVelocity: 146, minVelocity: 0.1, stabilization: { iterations: 150 } } ``` 6. **Implement Interactivity** ```js // Fix node on drag end network.on(\"dragEnd\", (params) => { if (params.nodes.length > 0) { nodes.update({ id: params.nodes[0], fixed: true }); } }); // Release node on double click network.on(\"doubleClick\", (params) => { if (params.nodes.length > 0) { nodes.update({ id: params.nodes[0], fixed: false }); } }); ``` 7. 
**Assemble the Complete HTML Structure** ```html <head> <title>Mapa Conceitual — [TEMA DETECTADO DO ARQUIVO]</title> <script src=\"https://unpkg.com/vis-network/standalone/umd/vis-network.min.js\"></script> <link href=\"https://unpkg.com/vis-network/styles/vis-network.min.css\" rel=\"stylesheet\" /> </head> <body> <div id=\"map\"></div> <script type=\"text/javascript\"> // nodes, edges, options, and interactive network initialization </script> </body> ``` 8. **Auto-name Output File** Automatically save the generated HTML file based on the detected topic: ``` mapa_[tema_detectado].html ``` --- ###"
}
]
}


@@ -1,83 +1,124 @@
# The Fabric Web App
# Fabric Web App
- [The Fabric Web App](#the-fabric-web-app)
- [Installing](#installing)
- [From Source](#from-source)
- [TL;DR: Convenience Scripts](#tldr-convenience-scripts)
- [Tips](#tips)
- [Obsidian](#obsidian)
A user-friendly web interface for [Fabric](https://github.com/danielmiessler/Fabric) built with [Svelte](https://svelte.dev/), [Skeleton UI](https://www.skeleton.dev/), and [Mdsvex](https://mdsvex.pngwn.io/).
This is a web app for Fabric. It was built using [Svelte][svelte], [SkeletonUI][skeleton], and [Mdsvex][mdsvex].
![Fabric Web App Preview](../docs/images/svelte-preview.png)
*Alt: Screenshot of the Fabric web app dashboard showing pattern inputs and outputs.*
The goal of this app is not only to provide a user interface for Fabric, but also an out-of-the-box website for those who want to get started with web development or blogging, or who just want a web interface for Fabric. You can use this app as a GUI for Fabric, a ready-to-go blog site, or a website template for your own projects.
## Table of Contents
![Preview](../docs/images/svelte-preview.png)
- [Fabric Web App](#fabric-web-app)
- [Table of Contents](#table-of-contents)
- [Installation](#installation)
- [Running the App](#running-the-app)
- [Prerequisites](#prerequisites)
- [Launch the Svelte App](#launch-the-svelte-app)
- [Streamlit UI](#streamlit-ui)
- [Key Features](#key-features)
- [Setup and Run](#setup-and-run)
- [Obsidian Integration](#obsidian-integration)
- [Quick Setup](#quick-setup)
- [Contributing](#contributing)
## Installing
## Installation
There are a few ways to install and run the Web UI.
> [!NOTE]
> Requires Node.js ≥18 and Fabric installed globally (`fabric --version` to check).
### From Source
From the Fabric root directory:
#### TL;DR: Convenience Scripts
To install the Web UI using `npm`, from the top-level directory:
**Using npm:**
```bash
./web/scripts/npm-install.sh
```
To use pnpm (preferred and recommended for a huge speed improvement):
**Or using pnpm (recommended for speed):**
```bash
./web/scripts/pnpm-install.sh
```
The app can be run by navigating to the `web` directory and using `npm install`, `pnpm install`, or your preferred package manager. Then simply run `npm run dev`, `pnpm run dev`, or your equivalent command to start the app. *You will need to run fabric in a separate terminal with the `fabric --serve` command.*
These scripts install Svelte dependencies and patch PDF-to-Markdown libraries (e.g., pdfjs-dist, pdf-to-markdown). See the scripts: [npm-install.sh](./scripts/npm-install.sh) and [pnpm-install.sh](./scripts/pnpm-install.sh)
Using npm:
## Running the App
### Prerequisites
Start Fabric's server in a separate terminal:
```bash
# Install the GUI and its dependencies
npm install
# Install PDF-to-Markdown components in this order
npm install -D patch-package
npm install -D pdfjs-dist
npm install -D github:jzillmann/pdf-to-markdown#modularize
fabric --serve
```
npx svelte-kit sync
(This exposes Fabric's API at <http://localhost:8080>)
# Now, with "fabric --serve" running already, you can run the GUI
### Launch the Svelte App
In the `web/` directory:
**Using npm:**
```bash
npm run dev
```
Using pnpm:
**Or using pnpm:**
```bash
# Install the GUI and its dependencies
pnpm install
# Install PDF-to-Markdown components in this order
pnpm install -D patch-package
pnpm install -D pdfjs-dist
pnpm install -D github:jzillmann/pdf-to-markdown#modularize
pnpm exec svelte-kit sync
# Now, with "fabric --serve" running already, you can run the GUI
pnpm run dev
```
## Tips
Visit [http://localhost:5173](http://localhost:5173) (default port).
When creating new posts, make sure to include a date, description, tags, and aliases. Only a date is needed to display a note.
> [!TIP]
>
> Sync Svelte types if needed: `npx svelte-kit sync`
You can include images, links to other articles, code blocks, and more, all within your Markdown files.
## Streamlit UI
## Obsidian
For Python enthusiasts, this alternative UI excels at data visualization and chaining complex patterns. It supports clipboard operations across platforms (install `pyperclip` on Windows, `xclip` on Linux).
If you choose to use Obsidian alongside this app,
you can design and order your vault however you like, though a `posts` folder should be kept in your vault to house any articles you'd like to post.
- **macOS**: Uses `pbcopy` and `pbpaste` (built-in)
- **Windows**: Uses `pyperclip` library (install with `pip install pyperclip`)
- **Linux**: Uses `xclip` (install with `sudo apt-get install xclip` or equivalent for your Linux distribution)
[svelte]: https://svelte.dev/
[skeleton]: https://skeleton.dev/
[mdsvex]: https://mdsvex.pngwn.io/
### Key Features
<!-- - Running and chaining patterns
- Managing pattern outputs
- Creating and editing patterns
- Analyzing pattern results -->
- Run and edit patterns with real-time previews.
- Analyze outputs with charts (via Matplotlib/Seaborn).
- Export results to Markdown or CSV.
### Setup and Run
From `web/`:
```bash
pip install -r requirements.txt  # Or: pip install streamlit pandas matplotlib seaborn numpy python-dotenv pyperclip
streamlit run streamlit.py
```
Access at [http://localhost:8501](http://localhost:8501) (default port).
## Obsidian Integration
Turn `web/src/lib/content/` into an [Obsidian](https://obsidian.md) vault for note-taking synced with Fabric patterns. It includes pre-configured `.obsidian/` and `templates/` folders.
### Quick Setup
1. Open Obsidian: File > Open folder as vault > Select `web/src/lib/content/`
2. To publish posts, move them to the posts directory (`web/src/lib/content/posts`).
3. Use Fabric patterns to generate content directly in Markdown files.
> [!TIP]
>
> When creating new posts, make sure to include a date (YYYY-MM-DD), description, tags (e.g., #ai #patterns), and aliases for SEO. Only a date is needed to display a note. Embed images (`![alt](path)`), link patterns (`[[pattern-name]]`), or code blocks for reusable snippets—all in standard Markdown.
## Contributing
Refer to the [Contributing Guide](/docs/CONTRIBUTING.md) for details on how to improve this content.


@@ -43,7 +43,7 @@
"svelte-youtube-lite": "^0.6.2",
"tailwindcss": "^3.4.17",
"typescript": "^5.8.3",
"vite": "^5.4.20",
"vite": "^5.4.21",
"vite-plugin-tailwind-purgecss": "^0.2.1"
},
"type": "module",

web/pnpm-lock.yaml (generated, 262 lines changed)

@@ -77,13 +77,13 @@ importers:
version: 0.3.1(tailwindcss@3.4.17)
'@sveltejs/adapter-auto':
specifier: ^3.3.1
version: 3.3.1(@sveltejs/kit@2.21.1(@sveltejs/vite-plugin-svelte@3.1.2(svelte@4.2.20)(vite@5.4.20(@types/node@20.17.50)))(svelte@4.2.20)(vite@5.4.20(@types/node@20.17.50)))
version: 3.3.1(@sveltejs/kit@2.21.1(@sveltejs/vite-plugin-svelte@3.1.2(svelte@4.2.20)(vite@5.4.21(@types/node@20.17.50)))(svelte@4.2.20)(vite@5.4.21(@types/node@20.17.50)))
'@sveltejs/kit':
specifier: ^2.21.1
version: 2.21.1(@sveltejs/vite-plugin-svelte@3.1.2(svelte@4.2.20)(vite@5.4.20(@types/node@20.17.50)))(svelte@4.2.20)(vite@5.4.20(@types/node@20.17.50))
version: 2.21.1(@sveltejs/vite-plugin-svelte@3.1.2(svelte@4.2.20)(vite@5.4.21(@types/node@20.17.50)))(svelte@4.2.20)(vite@5.4.21(@types/node@20.17.50))
'@sveltejs/vite-plugin-svelte':
specifier: ^3.1.2
version: 3.1.2(svelte@4.2.20)(vite@5.4.20(@types/node@20.17.50))
version: 3.1.2(svelte@4.2.20)(vite@5.4.21(@types/node@20.17.50))
'@tailwindcss/forms':
specifier: ^0.5.10
version: 0.5.10(tailwindcss@3.4.17)
@@ -157,11 +157,11 @@ importers:
specifier: ^5.8.3
version: 5.8.3
vite:
specifier: ^5.4.20
version: 5.4.20(@types/node@20.17.50)
specifier: ^5.4.21
version: 5.4.21(@types/node@20.17.50)
vite-plugin-tailwind-purgecss:
specifier: ^0.2.1
version: 0.2.1(vite@5.4.20(@types/node@20.17.50))
version: 0.2.1(vite@5.4.21(@types/node@20.17.50))
packages:
@@ -351,8 +351,8 @@ packages:
resolution: {integrity: sha512-G5JD9Tu5HJEu4z2Uo4aHY2sLV64B7CDMXxFzqzjl3NKd6RVzSXNoE80jk7Y0lJkTTkjiIhBAqmlYwjuBY3tvpA==}
engines: {node: ^18.18.0 || ^20.9.0 || >=21.1.0}
'@eslint/object-schema@2.1.6':
resolution: {integrity: sha512-RBMg5FRL0I0gs51M/guSAj5/e14VQ4tpZnQNWwuDT66P14I43ItmPfIZRhO9fUVIPOAQXU47atlywZ/czoqFPA==}
'@eslint/object-schema@2.1.7':
resolution: {integrity: sha512-VtAOaymWVfZcmZbp6E2mympDIHvyjXs/12LqWYjVw6qjrfF+VK+fyG33kChz3nnK+SU5/NeHOqrTEHS8sXO3OA==}
engines: {node: ^18.18.0 || ^20.9.0 || >=21.1.0}
'@eslint/plugin-kit@0.2.8':
@@ -429,108 +429,113 @@ packages:
'@polka/url@1.0.0-next.29':
resolution: {integrity: sha512-wwQAWhWSuHaag8c4q/KN/vCoeOJYshAIvMQwD4GpSb3OiZklFfvAgmj0VCBBImRpuF/aFgIRzllXlVX93Jevww==}
'@rollup/rollup-android-arm-eabi@4.50.1':
resolution: {integrity: sha512-HJXwzoZN4eYTdD8bVV22DN8gsPCAj3V20NHKOs8ezfXanGpmVPR7kalUHd+Y31IJp9stdB87VKPFbsGY3H/2ag==}
'@rollup/rollup-android-arm-eabi@4.52.5':
resolution: {integrity: sha512-8c1vW4ocv3UOMp9K+gToY5zL2XiiVw3k7f1ksf4yO1FlDFQ1C2u72iACFnSOceJFsWskc2WZNqeRhFRPzv+wtQ==}
cpu: [arm]
os: [android]
'@rollup/rollup-android-arm64@4.50.1':
resolution: {integrity: sha512-PZlsJVcjHfcH53mOImyt3bc97Ep3FJDXRpk9sMdGX0qgLmY0EIWxCag6EigerGhLVuL8lDVYNnSo8qnTElO4xw==}
'@rollup/rollup-android-arm64@4.52.5':
resolution: {integrity: sha512-mQGfsIEFcu21mvqkEKKu2dYmtuSZOBMmAl5CFlPGLY94Vlcm+zWApK7F/eocsNzp8tKmbeBP8yXyAbx0XHsFNA==}
cpu: [arm64]
os: [android]
'@rollup/rollup-darwin-arm64@4.50.1':
resolution: {integrity: sha512-xc6i2AuWh++oGi4ylOFPmzJOEeAa2lJeGUGb4MudOtgfyyjr4UPNK+eEWTPLvmPJIY/pgw6ssFIox23SyrkkJw==}
'@rollup/rollup-darwin-arm64@4.52.5':
resolution: {integrity: sha512-takF3CR71mCAGA+v794QUZ0b6ZSrgJkArC+gUiG6LB6TQty9T0Mqh3m2ImRBOxS2IeYBo4lKWIieSvnEk2OQWA==}
cpu: [arm64]
os: [darwin]
'@rollup/rollup-darwin-x64@4.50.1':
resolution: {integrity: sha512-2ofU89lEpDYhdLAbRdeyz/kX3Y2lpYc6ShRnDjY35bZhd2ipuDMDi6ZTQ9NIag94K28nFMofdnKeHR7BT0CATw==}
'@rollup/rollup-darwin-x64@4.52.5':
resolution: {integrity: sha512-W901Pla8Ya95WpxDn//VF9K9u2JbocwV/v75TE0YIHNTbhqUTv9w4VuQ9MaWlNOkkEfFwkdNhXgcLqPSmHy0fA==}
cpu: [x64]
os: [darwin]
'@rollup/rollup-freebsd-arm64@4.50.1':
resolution: {integrity: sha512-wOsE6H2u6PxsHY/BeFHA4VGQN3KUJFZp7QJBmDYI983fgxq5Th8FDkVuERb2l9vDMs1D5XhOrhBrnqcEY6l8ZA==}
'@rollup/rollup-freebsd-arm64@4.52.5':
resolution: {integrity: sha512-QofO7i7JycsYOWxe0GFqhLmF6l1TqBswJMvICnRUjqCx8b47MTo46W8AoeQwiokAx3zVryVnxtBMcGcnX12LvA==}
cpu: [arm64]
os: [freebsd]
'@rollup/rollup-freebsd-x64@4.50.1':
resolution: {integrity: sha512-A/xeqaHTlKbQggxCqispFAcNjycpUEHP52mwMQZUNqDUJFFYtPHCXS1VAG29uMlDzIVr+i00tSFWFLivMcoIBQ==}
'@rollup/rollup-freebsd-x64@4.52.5':
resolution: {integrity: sha512-jr21b/99ew8ujZubPo9skbrItHEIE50WdV86cdSoRkKtmWa+DDr6fu2c/xyRT0F/WazZpam6kk7IHBerSL7LDQ==}
cpu: [x64]
os: [freebsd]
'@rollup/rollup-linux-arm-gnueabihf@4.50.1':
resolution: {integrity: sha512-54v4okehwl5TaSIkpp97rAHGp7t3ghinRd/vyC1iXqXMfjYUTm7TfYmCzXDoHUPTTf36L8pr0E7YsD3CfB3ZDg==}
'@rollup/rollup-linux-arm-gnueabihf@4.52.5':
resolution: {integrity: sha512-PsNAbcyv9CcecAUagQefwX8fQn9LQ4nZkpDboBOttmyffnInRy8R8dSg6hxxl2Re5QhHBf6FYIDhIj5v982ATQ==}
cpu: [arm]
os: [linux]
'@rollup/rollup-linux-arm-musleabihf@4.50.1':
resolution: {integrity: sha512-p/LaFyajPN/0PUHjv8TNyxLiA7RwmDoVY3flXHPSzqrGcIp/c2FjwPPP5++u87DGHtw+5kSH5bCJz0mvXngYxw==}
'@rollup/rollup-linux-arm-musleabihf@4.52.5':
resolution: {integrity: sha512-Fw4tysRutyQc/wwkmcyoqFtJhh0u31K+Q6jYjeicsGJJ7bbEq8LwPWV/w0cnzOqR2m694/Af6hpFayLJZkG2VQ==}
cpu: [arm]
os: [linux]
'@rollup/rollup-linux-arm64-gnu@4.50.1':
resolution: {integrity: sha512-2AbMhFFkTo6Ptna1zO7kAXXDLi7H9fGTbVaIq2AAYO7yzcAsuTNWPHhb2aTA6GPiP+JXh85Y8CiS54iZoj4opw==}
'@rollup/rollup-linux-arm64-gnu@4.52.5':
resolution: {integrity: sha512-a+3wVnAYdQClOTlyapKmyI6BLPAFYs0JM8HRpgYZQO02rMR09ZcV9LbQB+NL6sljzG38869YqThrRnfPMCDtZg==}
cpu: [arm64]
os: [linux]
'@rollup/rollup-linux-arm64-musl@4.50.1':
resolution: {integrity: sha512-Cgef+5aZwuvesQNw9eX7g19FfKX5/pQRIyhoXLCiBOrWopjo7ycfB292TX9MDcDijiuIJlx1IzJz3IoCPfqs9w==}
'@rollup/rollup-linux-arm64-musl@4.52.5':
resolution: {integrity: sha512-AvttBOMwO9Pcuuf7m9PkC1PUIKsfaAJ4AYhy944qeTJgQOqJYJ9oVl2nYgY7Rk0mkbsuOpCAYSs6wLYB2Xiw0Q==}
cpu: [arm64]
os: [linux]
'@rollup/rollup-linux-loongarch64-gnu@4.50.1':
resolution: {integrity: sha512-RPhTwWMzpYYrHrJAS7CmpdtHNKtt2Ueo+BlLBjfZEhYBhK00OsEqM08/7f+eohiF6poe0YRDDd8nAvwtE/Y62Q==}
'@rollup/rollup-linux-loong64-gnu@4.52.5':
resolution: {integrity: sha512-DkDk8pmXQV2wVrF6oq5tONK6UHLz/XcEVow4JTTerdeV1uqPeHxwcg7aFsfnSm9L+OO8WJsWotKM2JJPMWrQtA==}
cpu: [loong64]
os: [linux]
'@rollup/rollup-linux-ppc64-gnu@4.50.1':
resolution: {integrity: sha512-eSGMVQw9iekut62O7eBdbiccRguuDgiPMsw++BVUg+1K7WjZXHOg/YOT9SWMzPZA+w98G+Fa1VqJgHZOHHnY0Q==}
'@rollup/rollup-linux-ppc64-gnu@4.52.5':
resolution: {integrity: sha512-W/b9ZN/U9+hPQVvlGwjzi+Wy4xdoH2I8EjaCkMvzpI7wJUs8sWJ03Rq96jRnHkSrcHTpQe8h5Tg3ZzUPGauvAw==}
cpu: [ppc64]
os: [linux]
'@rollup/rollup-linux-riscv64-gnu@4.50.1':
resolution: {integrity: sha512-S208ojx8a4ciIPrLgazF6AgdcNJzQE4+S9rsmOmDJkusvctii+ZvEuIC4v/xFqzbuP8yDjn73oBlNDgF6YGSXQ==}
'@rollup/rollup-linux-riscv64-gnu@4.52.5':
resolution: {integrity: sha512-sjQLr9BW7R/ZiXnQiWPkErNfLMkkWIoCz7YMn27HldKsADEKa5WYdobaa1hmN6slu9oWQbB6/jFpJ+P2IkVrmw==}
cpu: [riscv64]
os: [linux]
'@rollup/rollup-linux-riscv64-musl@4.50.1':
resolution: {integrity: sha512-3Ag8Ls1ggqkGUvSZWYcdgFwriy2lWo+0QlYgEFra/5JGtAd6C5Hw59oojx1DeqcA2Wds2ayRgvJ4qxVTzCHgzg==}
'@rollup/rollup-linux-riscv64-musl@4.52.5':
resolution: {integrity: sha512-hq3jU/kGyjXWTvAh2awn8oHroCbrPm8JqM7RUpKjalIRWWXE01CQOf/tUNWNHjmbMHg/hmNCwc/Pz3k1T/j/Lg==}
cpu: [riscv64]
os: [linux]
'@rollup/rollup-linux-s390x-gnu@4.50.1':
resolution: {integrity: sha512-t9YrKfaxCYe7l7ldFERE1BRg/4TATxIg+YieHQ966jwvo7ddHJxPj9cNFWLAzhkVsbBvNA4qTbPVNsZKBO4NSg==}
'@rollup/rollup-linux-s390x-gnu@4.52.5':
resolution: {integrity: sha512-gn8kHOrku8D4NGHMK1Y7NA7INQTRdVOntt1OCYypZPRt6skGbddska44K8iocdpxHTMMNui5oH4elPH4QOLrFQ==}
cpu: [s390x]
os: [linux]
'@rollup/rollup-linux-x64-gnu@4.50.1':
resolution: {integrity: sha512-MCgtFB2+SVNuQmmjHf+wfI4CMxy3Tk8XjA5Z//A0AKD7QXUYFMQcns91K6dEHBvZPCnhJSyDWLApk40Iq/H3tA==}
'@rollup/rollup-linux-x64-gnu@4.52.5':
resolution: {integrity: sha512-hXGLYpdhiNElzN770+H2nlx+jRog8TyynpTVzdlc6bndktjKWyZyiCsuDAlpd+j+W+WNqfcyAWz9HxxIGfZm1Q==}
cpu: [x64]
os: [linux]
'@rollup/rollup-linux-x64-musl@4.50.1':
resolution: {integrity: sha512-nEvqG+0jeRmqaUMuwzlfMKwcIVffy/9KGbAGyoa26iu6eSngAYQ512bMXuqqPrlTyfqdlB9FVINs93j534UJrg==}
'@rollup/rollup-linux-x64-musl@4.52.5':
resolution: {integrity: sha512-arCGIcuNKjBoKAXD+y7XomR9gY6Mw7HnFBv5Rw7wQRvwYLR7gBAgV7Mb2QTyjXfTveBNFAtPt46/36vV9STLNg==}
cpu: [x64]
os: [linux]
'@rollup/rollup-openharmony-arm64@4.50.1':
resolution: {integrity: sha512-RDsLm+phmT3MJd9SNxA9MNuEAO/J2fhW8GXk62G/B4G7sLVumNFbRwDL6v5NrESb48k+QMqdGbHgEtfU0LCpbA==}
'@rollup/rollup-openharmony-arm64@4.52.5':
resolution: {integrity: sha512-QoFqB6+/9Rly/RiPjaomPLmR/13cgkIGfA40LHly9zcH1S0bN2HVFYk3a1eAyHQyjs3ZJYlXvIGtcCs5tko9Cw==}
cpu: [arm64]
os: [openharmony]
'@rollup/rollup-win32-arm64-msvc@4.50.1':
resolution: {integrity: sha512-hpZB/TImk2FlAFAIsoElM3tLzq57uxnGYwplg6WDyAxbYczSi8O2eQ+H2Lx74504rwKtZ3N2g4bCUkiamzS6TQ==}
'@rollup/rollup-win32-arm64-msvc@4.52.5':
resolution: {integrity: sha512-w0cDWVR6MlTstla1cIfOGyl8+qb93FlAVutcor14Gf5Md5ap5ySfQ7R9S/NjNaMLSFdUnKGEasmVnu3lCMqB7w==}
cpu: [arm64]
os: [win32]
'@rollup/rollup-win32-ia32-msvc@4.50.1':
resolution: {integrity: sha512-SXjv8JlbzKM0fTJidX4eVsH+Wmnp0/WcD8gJxIZyR6Gay5Qcsmdbi9zVtnbkGPG8v2vMR1AD06lGWy5FLMcG7A==}
'@rollup/rollup-win32-ia32-msvc@4.52.5':
resolution: {integrity: sha512-Aufdpzp7DpOTULJCuvzqcItSGDH73pF3ko/f+ckJhxQyHtp67rHw3HMNxoIdDMUITJESNE6a8uh4Lo4SLouOUg==}
cpu: [ia32]
os: [win32]
'@rollup/rollup-win32-x64-msvc@4.50.1':
resolution: {integrity: sha512-StxAO/8ts62KZVRAm4JZYq9+NqNsV7RvimNK+YM7ry//zebEH6meuugqW/P5OFUCjyQgui+9fUxT6d5NShvMvA==}
'@rollup/rollup-win32-x64-gnu@4.52.5':
resolution: {integrity: sha512-UGBUGPFp1vkj6p8wCRraqNhqwX/4kNQPS57BCFc8wYh0g94iVIW33wJtQAx3G7vrjjNtRaxiMUylM0ktp/TRSQ==}
cpu: [x64]
os: [win32]
'@rollup/rollup-win32-x64-msvc@4.52.5':
resolution: {integrity: sha512-TAcgQh2sSkykPRWLrdyy2AiceMckNf5loITqXxFI5VuQjS5tSuw3WlwdN8qv8vzjLAUTvYaH/mVjSFpbkFbpTg==}
cpu: [x64]
os: [win32]
@@ -914,6 +919,15 @@ packages:
supports-color:
optional: true
debug@4.4.3:
resolution: {integrity: sha512-RGwwWnwQvkVfavKVt22FGLw+xYSdzARwm0ru6DhTVA3umU5hZc28V3kO4stgYryrTlLpuvgI9GiijltAjNbcqA==}
engines: {node: '>=6.0'}
peerDependencies:
supports-color: '*'
peerDependenciesMeta:
supports-color:
optional: true
decompress-response@4.2.1:
resolution: {integrity: sha512-jOSne2qbyE+/r8G1VU+G/82LBs2Fs4LAsTiLSHOCOMZQl2OKZ6i8i4IyHemTe+/yIXOtTcRQMzPcgyhoFlqPkw==}
engines: {node: '>=8'}
@@ -1923,8 +1937,8 @@ packages:
deprecated: Rimraf versions prior to v4 are no longer supported
hasBin: true
rollup@4.50.1:
resolution: {integrity: sha512-78E9voJHwnXQMiQdiqswVLZwJIzdBKJ1GdI5Zx6XwoFKUIk09/sSrr+05QFzvYb8q6Y9pPV45zzDuYa3907TZA==}
rollup@4.52.5:
resolution: {integrity: sha512-3GuObel8h7Kqdjt0gxkEzaifHTqLVW56Y/bjN7PSQtkKr0w3V/QYSdt6QWYtd7A1xUtYQigtdUfgj1RvWVtorw==}
engines: {node: '>=18.0.0', npm: '>=8.0.0'}
hasBin: true
@@ -2288,8 +2302,8 @@ packages:
peerDependencies:
vite: ^4.1.1 || ^5.0.0
vite@5.4.20:
resolution: {integrity: sha512-j3lYzGC3P+B5Yfy/pfKNgVEg4+UtcIJcVRt2cDjIOmhLourAqPqf8P7acgxeiSgUB7E3p2P8/3gNIgDLpwzs4g==}
vite@5.4.21:
resolution: {integrity: sha512-o5a9xKjbtuhY6Bi5S3+HvbRERmouabWbyUcpXXUA1u+GNUKoROi9byOJ8M0nHbHYHkYICiMlqxkg1KkYmm25Sw==}
engines: {node: ^18.0.0 || >=20.0.0}
hasBin: true
peerDependencies:
@@ -2474,8 +2488,8 @@ snapshots:
'@eslint/config-array@0.19.2':
dependencies:
'@eslint/object-schema': 2.1.6
debug: 4.4.1
'@eslint/object-schema': 2.1.7
debug: 4.4.3
minimatch: 3.1.2
transitivePeerDependencies:
- supports-color
@@ -2491,7 +2505,7 @@ snapshots:
'@eslint/eslintrc@3.3.1':
dependencies:
ajv: 6.12.6
debug: 4.4.1
debug: 4.4.3
espree: 10.4.0
globals: 14.0.0
ignore: 5.3.2
@@ -2506,7 +2520,7 @@ snapshots:
'@eslint/js@9.27.0': {}
'@eslint/object-schema@2.1.6': {}
'@eslint/object-schema@2.1.7': {}
'@eslint/plugin-kit@0.2.8':
dependencies:
@@ -2594,67 +2608,70 @@ snapshots:
'@polka/url@1.0.0-next.29': {}
'@rollup/rollup-android-arm-eabi@4.50.1':
'@rollup/rollup-android-arm-eabi@4.52.5':
optional: true
'@rollup/rollup-android-arm64@4.50.1':
'@rollup/rollup-android-arm64@4.52.5':
optional: true
'@rollup/rollup-darwin-arm64@4.50.1':
'@rollup/rollup-darwin-arm64@4.52.5':
optional: true
'@rollup/rollup-darwin-x64@4.50.1':
'@rollup/rollup-darwin-x64@4.52.5':
optional: true
'@rollup/rollup-freebsd-arm64@4.50.1':
'@rollup/rollup-freebsd-arm64@4.52.5':
optional: true
'@rollup/rollup-freebsd-x64@4.50.1':
'@rollup/rollup-freebsd-x64@4.52.5':
optional: true
'@rollup/rollup-linux-arm-gnueabihf@4.50.1':
'@rollup/rollup-linux-arm-gnueabihf@4.52.5':
optional: true
'@rollup/rollup-linux-arm-musleabihf@4.50.1':
'@rollup/rollup-linux-arm-musleabihf@4.52.5':
optional: true
'@rollup/rollup-linux-arm64-gnu@4.50.1':
'@rollup/rollup-linux-arm64-gnu@4.52.5':
optional: true
'@rollup/rollup-linux-arm64-musl@4.50.1':
'@rollup/rollup-linux-arm64-musl@4.52.5':
optional: true
'@rollup/rollup-linux-loongarch64-gnu@4.50.1':
'@rollup/rollup-linux-loong64-gnu@4.52.5':
optional: true
'@rollup/rollup-linux-ppc64-gnu@4.50.1':
'@rollup/rollup-linux-ppc64-gnu@4.52.5':
optional: true
'@rollup/rollup-linux-riscv64-gnu@4.50.1':
'@rollup/rollup-linux-riscv64-gnu@4.52.5':
optional: true
'@rollup/rollup-linux-riscv64-musl@4.50.1':
'@rollup/rollup-linux-riscv64-musl@4.52.5':
optional: true
'@rollup/rollup-linux-s390x-gnu@4.50.1':
'@rollup/rollup-linux-s390x-gnu@4.52.5':
optional: true
'@rollup/rollup-linux-x64-gnu@4.50.1':
'@rollup/rollup-linux-x64-gnu@4.52.5':
optional: true
'@rollup/rollup-linux-x64-musl@4.50.1':
'@rollup/rollup-linux-x64-musl@4.52.5':
optional: true
'@rollup/rollup-openharmony-arm64@4.50.1':
'@rollup/rollup-openharmony-arm64@4.52.5':
optional: true
'@rollup/rollup-win32-arm64-msvc@4.50.1':
'@rollup/rollup-win32-arm64-msvc@4.52.5':
optional: true
'@rollup/rollup-win32-ia32-msvc@4.50.1':
'@rollup/rollup-win32-ia32-msvc@4.52.5':
optional: true
'@rollup/rollup-win32-x64-msvc@4.50.1':
'@rollup/rollup-win32-x64-gnu@4.52.5':
optional: true
'@rollup/rollup-win32-x64-msvc@4.52.5':
optional: true
'@shikijs/core@1.29.2':
@@ -2705,15 +2722,15 @@ snapshots:
dependencies:
acorn: 8.14.1
'@sveltejs/adapter-auto@3.3.1(@sveltejs/kit@2.21.1(@sveltejs/vite-plugin-svelte@3.1.2(svelte@4.2.20)(vite@5.4.20(@types/node@20.17.50)))(svelte@4.2.20)(vite@5.4.20(@types/node@20.17.50)))':
'@sveltejs/adapter-auto@3.3.1(@sveltejs/kit@2.21.1(@sveltejs/vite-plugin-svelte@3.1.2(svelte@4.2.20)(vite@5.4.21(@types/node@20.17.50)))(svelte@4.2.20)(vite@5.4.21(@types/node@20.17.50)))':
dependencies:
'@sveltejs/kit': 2.21.1(@sveltejs/vite-plugin-svelte@3.1.2(svelte@4.2.20)(vite@5.4.20(@types/node@20.17.50)))(svelte@4.2.20)(vite@5.4.20(@types/node@20.17.50))
'@sveltejs/kit': 2.21.1(@sveltejs/vite-plugin-svelte@3.1.2(svelte@4.2.20)(vite@5.4.21(@types/node@20.17.50)))(svelte@4.2.20)(vite@5.4.21(@types/node@20.17.50))
import-meta-resolve: 4.1.0
'@sveltejs/kit@2.21.1(@sveltejs/vite-plugin-svelte@3.1.2(svelte@4.2.20)(vite@5.4.20(@types/node@20.17.50)))(svelte@4.2.20)(vite@5.4.20(@types/node@20.17.50))':
'@sveltejs/kit@2.21.1(@sveltejs/vite-plugin-svelte@3.1.2(svelte@4.2.20)(vite@5.4.21(@types/node@20.17.50)))(svelte@4.2.20)(vite@5.4.21(@types/node@20.17.50))':
dependencies:
'@sveltejs/acorn-typescript': 1.0.5(acorn@8.14.1)
'@sveltejs/vite-plugin-svelte': 3.1.2(svelte@4.2.20)(vite@5.4.20(@types/node@20.17.50))
'@sveltejs/vite-plugin-svelte': 3.1.2(svelte@4.2.20)(vite@5.4.21(@types/node@20.17.50))
'@types/cookie': 0.6.0
acorn: 8.14.1
cookie: 1.0.2
@@ -2726,28 +2743,28 @@ snapshots:
set-cookie-parser: 2.7.1
sirv: 3.0.1
svelte: 4.2.20
vite: 5.4.20(@types/node@20.17.50)
vite: 5.4.21(@types/node@20.17.50)
'@sveltejs/vite-plugin-svelte-inspector@2.1.0(@sveltejs/vite-plugin-svelte@3.1.2(svelte@4.2.20)(vite@5.4.20(@types/node@20.17.50)))(svelte@4.2.20)(vite@5.4.20(@types/node@20.17.50))':
'@sveltejs/vite-plugin-svelte-inspector@2.1.0(@sveltejs/vite-plugin-svelte@3.1.2(svelte@4.2.20)(vite@5.4.21(@types/node@20.17.50)))(svelte@4.2.20)(vite@5.4.21(@types/node@20.17.50))':
dependencies:
'@sveltejs/vite-plugin-svelte': 3.1.2(svelte@4.2.20)(vite@5.4.20(@types/node@20.17.50))
'@sveltejs/vite-plugin-svelte': 3.1.2(svelte@4.2.20)(vite@5.4.21(@types/node@20.17.50))
debug: 4.4.1
svelte: 4.2.20
vite: 5.4.20(@types/node@20.17.50)
vite: 5.4.21(@types/node@20.17.50)
transitivePeerDependencies:
- supports-color
'@sveltejs/vite-plugin-svelte@3.1.2(svelte@4.2.20)(vite@5.4.20(@types/node@20.17.50))':
'@sveltejs/vite-plugin-svelte@3.1.2(svelte@4.2.20)(vite@5.4.21(@types/node@20.17.50))':
dependencies:
'@sveltejs/vite-plugin-svelte-inspector': 2.1.0(@sveltejs/vite-plugin-svelte@3.1.2(svelte@4.2.20)(vite@5.4.20(@types/node@20.17.50)))(svelte@4.2.20)(vite@5.4.20(@types/node@20.17.50))
'@sveltejs/vite-plugin-svelte-inspector': 2.1.0(@sveltejs/vite-plugin-svelte@3.1.2(svelte@4.2.20)(vite@5.4.21(@types/node@20.17.50)))(svelte@4.2.20)(vite@5.4.21(@types/node@20.17.50))
debug: 4.4.1
deepmerge: 4.3.1
kleur: 4.1.5
magic-string: 0.30.17
svelte: 4.2.20
svelte-hmr: 0.16.0(svelte@4.2.20)
vite: 5.4.20(@types/node@20.17.50)
vitefu: 0.2.5(vite@5.4.20(@types/node@20.17.50))
vite: 5.4.21(@types/node@20.17.50)
vitefu: 0.2.5(vite@5.4.21(@types/node@20.17.50))
transitivePeerDependencies:
- supports-color
@@ -3046,6 +3063,10 @@ snapshots:
dependencies:
ms: 2.1.3
debug@4.4.3:
dependencies:
ms: 2.1.3
decompress-response@4.2.1:
dependencies:
mimic-response: 2.1.0
@@ -3201,7 +3222,7 @@ snapshots:
ajv: 6.12.6
chalk: 4.1.2
cross-spawn: 7.0.6
debug: 4.4.1
debug: 4.4.3
escape-string-regexp: 4.0.0
eslint-scope: 8.4.0
eslint-visitor-keys: 4.2.1
@@ -4138,31 +4159,32 @@ snapshots:
glob: 7.2.3
optional: true
rollup@4.50.1:
rollup@4.52.5:
dependencies:
'@types/estree': 1.0.8
optionalDependencies:
'@rollup/rollup-android-arm-eabi': 4.50.1
'@rollup/rollup-android-arm64': 4.50.1
'@rollup/rollup-darwin-arm64': 4.50.1
'@rollup/rollup-darwin-x64': 4.50.1
'@rollup/rollup-freebsd-arm64': 4.50.1
'@rollup/rollup-freebsd-x64': 4.50.1
'@rollup/rollup-linux-arm-gnueabihf': 4.50.1
'@rollup/rollup-linux-arm-musleabihf': 4.50.1
'@rollup/rollup-linux-arm64-gnu': 4.50.1
'@rollup/rollup-linux-arm64-musl': 4.50.1
'@rollup/rollup-linux-loongarch64-gnu': 4.50.1
'@rollup/rollup-linux-ppc64-gnu': 4.50.1
'@rollup/rollup-linux-riscv64-gnu': 4.50.1
'@rollup/rollup-linux-riscv64-musl': 4.50.1
'@rollup/rollup-linux-s390x-gnu': 4.50.1
'@rollup/rollup-linux-x64-gnu': 4.50.1
'@rollup/rollup-linux-x64-musl': 4.50.1
'@rollup/rollup-openharmony-arm64': 4.50.1
'@rollup/rollup-win32-arm64-msvc': 4.50.1
'@rollup/rollup-win32-ia32-msvc': 4.50.1
'@rollup/rollup-win32-x64-msvc': 4.50.1
'@rollup/rollup-android-arm-eabi': 4.52.5
'@rollup/rollup-android-arm64': 4.52.5
'@rollup/rollup-darwin-arm64': 4.52.5
'@rollup/rollup-darwin-x64': 4.52.5
'@rollup/rollup-freebsd-arm64': 4.52.5
'@rollup/rollup-freebsd-x64': 4.52.5
'@rollup/rollup-linux-arm-gnueabihf': 4.52.5
'@rollup/rollup-linux-arm-musleabihf': 4.52.5
'@rollup/rollup-linux-arm64-gnu': 4.52.5
'@rollup/rollup-linux-arm64-musl': 4.52.5
'@rollup/rollup-linux-loong64-gnu': 4.52.5
'@rollup/rollup-linux-ppc64-gnu': 4.52.5
'@rollup/rollup-linux-riscv64-gnu': 4.52.5
'@rollup/rollup-linux-riscv64-musl': 4.52.5
'@rollup/rollup-linux-s390x-gnu': 4.52.5
'@rollup/rollup-linux-x64-gnu': 4.52.5
'@rollup/rollup-linux-x64-musl': 4.52.5
'@rollup/rollup-openharmony-arm64': 4.52.5
'@rollup/rollup-win32-arm64-msvc': 4.52.5
'@rollup/rollup-win32-ia32-msvc': 4.52.5
'@rollup/rollup-win32-x64-gnu': 4.52.5
'@rollup/rollup-win32-x64-msvc': 4.52.5
fsevents: 2.3.3
run-parallel@1.2.0:
@@ -4579,24 +4601,24 @@ snapshots:
'@types/unist': 3.0.3
vfile-message: 4.0.2
vite-plugin-tailwind-purgecss@0.2.1(vite@5.4.20(@types/node@20.17.50)):
vite-plugin-tailwind-purgecss@0.2.1(vite@5.4.21(@types/node@20.17.50)):
dependencies:
estree-walker: 3.0.3
purgecss: 6.0.0
vite: 5.4.20(@types/node@20.17.50)
vite: 5.4.21(@types/node@20.17.50)
vite@5.4.20(@types/node@20.17.50):
vite@5.4.21(@types/node@20.17.50):
dependencies:
esbuild: 0.21.5
postcss: 8.5.3
rollup: 4.50.1
rollup: 4.52.5
optionalDependencies:
'@types/node': 20.17.50
fsevents: 2.3.3
vitefu@0.2.5(vite@5.4.20(@types/node@20.17.50)):
vitefu@0.2.5(vite@5.4.21(@types/node@20.17.50)):
optionalDependencies:
vite: 5.4.20(@types/node@20.17.50)
vite: 5.4.21(@types/node@20.17.50)
web-namespaces@2.0.1: {}


@@ -316,7 +316,7 @@ Application Options:
-T, --topp= Set top P (default: 0.9)
-s, --stream Stream
-P, --presencepenalty= Set presence penalty (default: 0.0)
-r, --raw Use the defaults of the model without sending chat options (like temperature etc.) and use the user role instead of the system role for patterns.
-r, --raw Use the defaults of the model without sending chat options (temperature, top_p, etc.). Only affects OpenAI-compatible providers. Anthropic models always use smart parameter selection to comply with model-specific requirements.
-F, --frequencypenalty= Set frequency penalty (default: 0.0)
-l, --listpatterns List all patterns
-L, --listmodels List all available models
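The updated `--raw` description above applies only to OpenAI-compatible providers. A quick usage sketch (the pattern name is illustrative, not prescriptive):

```bash
# Run a pattern with the model's own default sampling parameters
fabric --raw --pattern summarize < notes.md
```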


@@ -159,7 +159,8 @@
"tags": [
"ANALYSIS",
"STRATEGY",
"SELF"
"SELF",
"WELLNESS"
]
},
{
@@ -744,7 +745,8 @@
"tags": [
"ANALYSIS",
"RESEARCH",
"SELF"
"SELF",
"WELLNESS"
]
},
{
@@ -1060,7 +1062,8 @@
"tags": [
"EXTRACT",
"SELF",
"WISDOM"
"WISDOM",
"WELLNESS"
]
},
{
@@ -1098,14 +1101,6 @@
"REVIEW"
]
},
{
"patternName": "get_youtube_rss",
"description": "Generate RSS feed URLs for YouTube channels.",
"tags": [
"CONVERSION",
"DEVELOPMENT"
]
},
{
"patternName": "humanize",
"description": "Transform technical content into approachable language.",
@@ -1235,7 +1230,8 @@
"tags": [
"ANALYSIS",
"LEARNING",
"SELF"
"SELF",
"WELLNESS"
]
},
{
@@ -1544,7 +1540,8 @@
"description": "Generate personalized messages of encouragement.",
"tags": [
"WRITING",
"SELF"
"SELF",
"WELLNESS"
]
},
{
@@ -1868,7 +1865,8 @@
"description": "Analyze a psychological profile, pinpoint issues and strengths, and deliver compassionate, structured strategies for spiritual, mental, and life improvement.",
"tags": [
"ANALYSIS",
"SELF"
"SELF",
"WELLNESS"
]
},
{
@@ -1878,6 +1876,54 @@
"ANALYSIS",
"WRITING"
]
},
{
"patternName": "extract_characters",
"description": "Identify all characters (human and non-human), resolve their aliases and pronouns into canonical names, and produce detailed descriptions of each character's role, motivations, and interactions ranked by narrative importance.",
"tags": [
"ANALYSIS",
"WRITING"
]
},
{
"patternName": "fix_typos",
"description": "Proofreads and corrects typos, spelling, grammar, and punctuation errors.",
"tags": [
"WRITING"
]
},
{
"patternName": "model_as_sherlock_freud",
"description": "Builds psychological models using detective reasoning and psychoanalytic insight.",
"tags": [
"ANALYSIS",
"SELF",
"WELLNESS"
]
},
{
"patternName": "predict_person_actions",
"description": "Predicts behavioral responses based on psychological profiles and challenges",
"tags": [
"ANALYSIS",
"SELF",
"WELLNESS"
]
},
{
"patternName": "recommend_yoga_practice",
"description": "Provides personalized yoga sequences, meditation guidance, and holistic lifestyle advice based on individual profiles.",
"tags": [
"WELLNESS",
"SELF"
]
},
{
"patternName": "create_conceptmap",
"description": "Transforms unstructured text or markdown content into an interactive HTML concept map using Vis.js by extracting key concepts and their logical relationships.",
"tags": [
"VISUALIZE"
]
}
]
}