Compare commits

...

34 Commits

Author SHA1 Message Date
github-actions[bot]
2508dc6397 Update version to v1.4.257 and commit 2025-07-17 18:03:45 +00:00
Kayvan Sylvan
7670df35ad Merge pull request #1628 from ksylvan/0717-give-users-who-use-llama-server-ability-to-say-dont-use-responses-api
Introduce CLI Flag to Disable OpenAI Responses API
2025-07-17 11:02:18 -07:00
Kayvan Sylvan
3b9782f942 feat: add disable-responses-api flag for OpenAI compatibility
## CHANGES

- Add disable-responses-api flag to CLI completions
- Update zsh completion with new API flag
- Update bash completion options list
- Add fish shell completion for API flag
- Add testpattern to VSCode spell checker dictionary
- Configure disableResponsesAPI in example YAML config
- Enable flag for llama-server compatibility
2025-07-17 10:47:35 -07:00
Kayvan Sylvan
3fca3489fb feat: add OpenAI Responses API configuration control via CLI flag
## CHANGES

- Add `--disable-responses-api` CLI flag for OpenAI control
- Implement `SetResponsesAPIEnabled` method in OpenAI client
- Configure OpenAI Responses API setting during CLI initialization
- Update default config path to `~/.config/fabric/config.yaml`
- Add OpenAI import to CLI package dependencies
2025-07-17 10:37:15 -07:00
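
A minimal sketch of how the new flag reaches the client (the struct below is a stand-in; the real wiring is the `configureOpenAIResponsesAPI` function in the cli.go diff further down): the CLI simply inverts `--disable-responses-api` before calling `SetResponsesAPIEnabled`.

```go
package main

import "fmt"

// Hypothetical stand-in for the OpenAI client in internal/plugins/ai/openai;
// only SetResponsesAPIEnabled mirrors the method added in this commit.
type openAIClient struct {
	implementsResponses bool
}

func (c *openAIClient) SetResponsesAPIEnabled(enabled bool) {
	c.implementsResponses = enabled
}

func main() {
	disableResponsesAPI := true // as if parsed from --disable-responses-api
	client := &openAIClient{implementsResponses: true}

	// The CLI inverts the "disable" flag before configuring the client.
	client.SetResponsesAPIEnabled(!disableResponsesAPI)

	fmt.Println(client.implementsResponses) // false: fall back to chat completions
}
```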
Kayvan Sylvan
bb2d58eae0 docs: Update CHANGELOG after v1.4.256 2025-07-17 07:17:26 -07:00
github-actions[bot]
87df7dc383 Update version to v1.4.256 and commit 2025-07-17 14:14:50 +00:00
Kayvan Sylvan
1d69afa1c9 Merge pull request #1624 from ksylvan/0716-default-config-yaml
Feature: Add Automatic ~/.fabric.yaml Config Detection
2025-07-17 07:13:17 -07:00
Kayvan Sylvan
96c18b4c99 refactor: extract flag parsing logic into separate extractFlag function 2025-07-17 06:51:40 -07:00
Kayvan Sylvan
dd5173963b fix: improve error handling for default config path resolution
## CHANGES

- Update `GetDefaultConfigPath` to return error alongside path
- Add proper error handling in flags initialization
- Include debug logging for config path failures
- Move channel close to defer in dryrun SendStream
- Return wrapped errors with context messages
- Handle non-existent config as valid case
2025-07-17 06:18:55 -07:00
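
The error-handling convention described here, as a minimal standalone sketch (the real function is `util.GetDefaultConfigPath`, shown in the diffs further down): a missing file yields an empty path with no error, while any other stat failure is wrapped and surfaced.

```go
package main

import (
	"errors"
	"fmt"
	"os"
	"path/filepath"
)

// defaultConfigPath mirrors the idea: "" with a nil error means "no default config".
func defaultConfigPath() (string, error) {
	home, err := os.UserHomeDir()
	if err != nil {
		return "", fmt.Errorf("could not determine user home directory: %w", err)
	}
	p := filepath.Join(home, ".config", "fabric", "config.yaml")
	if _, err := os.Stat(p); err != nil {
		if errors.Is(err, os.ErrNotExist) {
			return "", nil // non-existent config is a valid case, not an error
		}
		return "", fmt.Errorf("error accessing default config path: %w", err)
	}
	return p, nil
}

func main() {
	path, err := defaultConfigPath()
	fmt.Printf("path=%q err=%v\n", path, err)
}
```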
Kayvan Sylvan
da1c8ec979 fix: improve dry run output formatting and config path error handling
## CHANGES

- Remove leading newline from DryRunResponse constant
- Add newline separator in SendStream method output
- Add newline separator in Send method output
- Improve GetDefaultConfigPath error handling logic
- Add stderr error message for config access failures
- Return empty string when config file doesn't exist
2025-07-16 23:46:07 -07:00
Kayvan Sylvan
ac97f9984f chore: refactor constructRequest method for consistency
### CHANGES

- Rename `_ConstructRequest` to `constructRequest` for consistency
- Update `SendStream` to use `constructRequest`
- Update `Send` method to use `constructRequest`
2025-07-16 23:36:05 -07:00
Kayvan Sylvan
181b812eaf chore: remove unneeded parenthesis around function call 2025-07-16 23:31:17 -07:00
Kayvan Sylvan
fe94165d31 chore: update Send method to append request to DryRunResponse
### CHANGES

- Assign `_ConstructRequest` output to `request` variable
- Concatenate `request` with `DryRunResponse` in `Send` method
2025-07-16 23:28:48 -07:00
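
Taken together, this and the dry-run refactor commits above converge on a single request-construction helper shared by `Send` and `SendStream`; a rough sketch of that shape (names simplified, not the actual plugin code):

```go
package main

import "fmt"

const dryRunResponse = "Dry run: Fake response sent by DryRun plugin\n"

// constructRequest builds the preview text once so Send and SendStream stay in sync.
func constructRequest(prompt string) string {
	return "Dry run: Would send the following request:\n\n" + prompt
}

// sendStream streams the preview plus the canned response; the deferred close
// guarantees the channel is closed on every return path.
func sendStream(prompt string, ch chan string) error {
	defer close(ch)
	ch <- constructRequest(prompt)
	ch <- "\n"
	ch <- dryRunResponse
	return nil
}

// send returns the same content as a single string.
func send(prompt string) (string, error) {
	return constructRequest(prompt) + "\n" + dryRunResponse, nil
}

func main() {
	ch := make(chan string)
	go sendStream("hello", ch)
	for chunk := range ch {
		fmt.Print(chunk)
	}
	out, _ := send("hello")
	fmt.Print(out)
}
```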
Kayvan Sylvan
16e92690aa feat: improve flag handling and add default config support
## CHANGES

- Map both short and long flags to yaml tags
- Add support for short flag parsing with dashes
- Implement default ~/.fabric.yaml config file detection
- Fix think block suppression in dry run mode
- Add think options to dry run output formatting
- Refactor dry run response construction into helper method
- Return actual response content from dry run client
- Create utility function for default config path resolution
2025-07-16 23:09:56 -07:00
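
A compact sketch of the flag-to-YAML-tag mapping described in this commit (the struct and tags below are illustrative, not the full `Flags` type):

```go
package main

import (
	"fmt"
	"reflect"
)

// Illustrative flags struct; only the tag shapes match the real Flags type.
type flags struct {
	Model         string `long:"model" short:"m" yaml:"model"`
	SuppressThink bool   `long:"suppress-think" yaml:"suppressThink"`
}

func main() {
	// Map both the long and the short flag name to the yaml tag so a CLI
	// argument in either form marks the corresponding YAML key as "used".
	flagToYamlTag := map[string]string{}
	t := reflect.TypeOf(flags{})
	for i := 0; i < t.NumField(); i++ {
		f := t.Field(i)
		yamlTag := f.Tag.Get("yaml")
		if yamlTag == "" {
			continue
		}
		if long := f.Tag.Get("long"); long != "" {
			flagToYamlTag[long] = yamlTag
		}
		if short := f.Tag.Get("short"); short != "" {
			flagToYamlTag[short] = yamlTag
		}
	}
	fmt.Println(flagToYamlTag) // map[m:model model:model suppress-think:suppressThink]
}
```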
Kayvan Sylvan
1c33799aa8 docs: Update CHANGELOG after v1.4.255 2025-07-16 14:44:32 -07:00
github-actions[bot]
9559e618c3 Update version to v1.4.255 and commit 2025-07-16 21:38:03 +00:00
Kayvan Sylvan
ac32e8e64a Merge branch 'danielmiessler:main' into main 2025-07-16 14:35:00 -07:00
Kayvan Sylvan
82340e6126 chore: add more paths to update-version-and-create-tag workflow to reduce unnecessary tagging 2025-07-16 14:34:29 -07:00
github-actions[bot]
5dec53726a Update version to v1.4.254 and commit 2025-07-16 21:21:12 +00:00
Kayvan Sylvan
b0eb136cbb Merge pull request #1621 from robertocarvajal/main
Adds generate code rules pattern
2025-07-16 14:19:46 -07:00
Roberto Carvajal
63f4370ff1 Adds generate code rules pattern
Signed-off-by: Roberto Carvajal <roberto.carvajal@gmail.com>
2025-07-16 11:15:55 -04:00
Kayvan Sylvan
b3cc2c737d docs: Update CHANGELOG after v1.4.253 2025-07-15 22:36:26 -07:00
github-actions[bot]
e43b4191e4 Update version to v1.4.253 and commit 2025-07-16 05:34:13 +00:00
Kayvan Sylvan
744c565120 Merge pull request #1620 from ksylvan/0715-thinking-flags-completions-scripts
Update Shell Completions for New Think-Block Suppression Options
2025-07-15 22:32:41 -07:00
Kayvan Sylvan
1473ac1465 feat: add 'think' tag options for text suppression and completion
### CHANGES

- Remove outdated update notes from README
- Add `--suppress-think` option to suppress 'think' tags
- Introduce `--think-start-tag` and `--think-end-tag` options
- Update bash completion with 'think' tag options
- Update fish completion with 'think' tag options
2025-07-15 22:26:12 -07:00
Kayvan Sylvan
c38c16f0db docs: Update CHANGELOG after v1.4.252 2025-07-15 22:08:43 -07:00
github-actions[bot]
a4b1db4193 Update version to v1.4.252 and commit 2025-07-16 05:05:47 +00:00
Kayvan Sylvan
d44bc19a84 Merge pull request #1619 from ksylvan/0715-suppress-think
Feature: Optional Hiding of Model Thinking Process with Configurable Tags
2025-07-15 22:04:12 -07:00
Kayvan Sylvan
a2e618e11c perf: add regex caching to StripThinkBlocks function for improved performance 2025-07-15 22:02:16 -07:00
Kayvan Sylvan
cb90379b30 feat: add suppress-think feature to filter AI reasoning output
## CHANGES

- Add suppress-think flag to hide thinking blocks
- Configure customizable start and end thinking tags
- Strip thinking content from final response output
- Update streaming logic to respect suppress-think setting
- Add YAML configuration support for thinking options
- Implement StripThinkBlocks utility function for content filtering
- Add comprehensive tests for thinking suppression functionality
2025-07-15 21:52:27 -07:00
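
The filtering itself is a small regex transform; a minimal sketch of the idea behind `StripThinkBlocks` (the committed version in `internal/domain/think.go`, shown in the diffs below, also caches compiled patterns behind a mutex):

```go
package main

import (
	"fmt"
	"regexp"
)

// stripThink removes everything between startTag and endTag (non-greedy),
// plus any whitespace that follows the end tag.
func stripThink(input, startTag, endTag string) string {
	pattern := "(?s)" + regexp.QuoteMeta(startTag) + ".*?" + regexp.QuoteMeta(endTag) + `\s*`
	return regexp.MustCompile(pattern).ReplaceAllString(input, "")
}

func main() {
	out := stripThink("<think>internal reasoning</think>\n\nvisible answer", "<think>", "</think>")
	fmt.Println(out) // "visible answer"
}
```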
Kayvan Sylvan
4868687746 chore: Update CHANGELOG after v1.4.251 2025-07-15 21:44:14 -07:00
github-actions[bot]
85780fee76 Update version to v1.4.251 and commit 2025-07-16 03:49:08 +00:00
Kayvan Sylvan
497b1ed682 Merge pull request #1618 from ksylvan/0715-refrain-from-version-bumping-when-only-changelog-cache-changes
Update GitHub Workflow to Ignore Additional File Paths
2025-07-15 20:47:35 -07:00
Kayvan Sylvan
135433b749 ci: update workflow to ignore additional paths during version updates
## CHANGES

- Add `data/strategies/**` to paths-ignore list
- Add `cmd/generate_changelog/*.db` to paths-ignore list
- Prevent workflow triggers from strategy data changes
- Prevent workflow triggers from changelog database files
2025-07-15 20:38:44 -07:00
27 changed files with 985 additions and 738 deletions

View File

@@ -7,6 +7,10 @@ on:
paths-ignore:
- "data/patterns/**"
- "**/*.md"
- "data/strategies/**"
- "cmd/generate_changelog/*.db"
- "scripts/pattern_descriptions/*.json"
- "web/static/data/pattern_descriptions.json"
permissions:
contents: write # Ensure the workflow has write permissions

View File

@@ -104,6 +104,7 @@
"stretchr",
"talkpanel",
"Telos",
"testpattern",
"Thacker",
"tidwall",
"topp",

File diff suppressed because it is too large.

View File

@@ -113,30 +113,9 @@ Keep in mind that many of these were recorded when Fabric was Python-based, so r
## Updates
> [!NOTE]
>
> July 4, 2025
>
> - **Web Search**: Fabric now supports web search for Anthropic and OpenAI models using the `--search` and `--search-location` flags. This replaces the previous plugin-based search, so you may want to remove the old `ANTHROPIC_WEB_SEARCH_TOOL_*` variables from your `~/.config/fabric/.env` file.
> - **Image Generation**: Fabric now has powerful image generation capabilities with OpenAI.
> - Generate images from text prompts and save them using `--image-file`.
> - Edit existing images by providing an input image with `--attachment`.
> - Control image `size`, `quality`, `compression`, and `background` with the new `--image-*` flags.
>
>June 17, 2025
>
>- Fabric now supports Perplexity AI. Configure it by using `fabric -S` to add your Perplexity AI API Key,
> and then try:
>
> ```bash
> fabric -m sonar-pro "What is the latest world news?"
> ```
>
>June 11, 2025
>
>- Fabric's YouTube transcription now needs `yt-dlp` to be installed. Make sure to install the latest
> version (2025.06.09 as of this note). The YouTube API key is only needed for comments (the `--comments` flag)
> and metadata extraction (the `--metadata` flag).
Fabric is evolving rapidly.
Stay current with the latest features by reviewing the [CHANGELOG](./CHANGELOG.md) for all recent changes.
## Philosophy

View File

@@ -1,3 +1,3 @@
package main
var version = "v1.4.250"
var version = "v1.4.257"

Binary file not shown.

View File

@@ -110,6 +110,10 @@ _fabric() {
'(--liststrategies)--liststrategies[List all strategies]' \
'(--listvendors)--listvendors[List all vendors]' \
'(--shell-complete-list)--shell-complete-list[Output raw list without headers/formatting (for shell completion)]' \
'(--suppress-think)--suppress-think[Suppress text enclosed in thinking tags]' \
'(--think-start-tag)--think-start-tag[Start tag for thinking sections (default: <think>)]:start tag:' \
'(--think-end-tag)--think-end-tag[End tag for thinking sections (default: </think>)]:end tag:' \
'(--disable-responses-api)--disable-responses-api[Disable OpenAI Responses API (default: false)]' \
'(-h --help)'{-h,--help}'[Show this help message]' \
'*:arguments:'
}

View File

@@ -13,7 +13,7 @@ _fabric() {
_get_comp_words_by_ref -n : cur prev words cword
# Define all possible options/flags
local opts="--pattern -p --variable -v --context -C --session --attachment -a --setup -S --temperature -t --topp -T --stream -s --presencepenalty -P --raw -r --frequencypenalty -F --listpatterns -l --listmodels -L --listcontexts -x --listsessions -X --updatepatterns -U --copy -c --model -m --modelContextLength --output -o --output-session --latest -n --changeDefaultModel -d --youtube -y --playlist --transcript --transcript-with-timestamps --comments --metadata --language -g --scrape_url -u --scrape_question -q --seed -e --wipecontext -w --wipesession -W --printcontext --printsession --readability --input-has-vars --dry-run --serve --serveOllama --address --api-key --config --search --search-location --image-file --image-size --image-quality --image-compression --image-background --version --listextensions --addextension --rmextension --strategy --liststrategies --listvendors --shell-complete-list --help -h"
local opts="--pattern -p --variable -v --context -C --session --attachment -a --setup -S --temperature -t --topp -T --stream -s --presencepenalty -P --raw -r --frequencypenalty -F --listpatterns -l --listmodels -L --listcontexts -x --listsessions -X --updatepatterns -U --copy -c --model -m --modelContextLength --output -o --output-session --latest -n --changeDefaultModel -d --youtube -y --playlist --transcript --transcript-with-timestamps --comments --metadata --language -g --scrape_url -u --scrape_question -q --seed -e --wipecontext -w --wipesession -W --printcontext --printsession --readability --input-has-vars --dry-run --serve --serveOllama --address --api-key --config --search --search-location --image-file --image-size --image-quality --image-compression --image-background --suppress-think --think-start-tag --think-end-tag --disable-responses-api --version --listextensions --addextension --rmextension --strategy --liststrategies --listvendors --shell-complete-list --help -h"
# Helper function for dynamic completions
_fabric_get_list() {
@@ -81,7 +81,7 @@ _fabric() {
return 0
;;
# Options requiring simple arguments (no specific completion logic here)
-v | --variable | -t | --temperature | -T | --topp | -P | --presencepenalty | -F | --frequencypenalty | --modelContextLength | -n | --latest | -y | --youtube | -g | --language | -u | --scrape_url | -q | --scrape_question | -e | --seed | --address | --api-key | --search-location | --image-compression)
-v | --variable | -t | --temperature | -T | --topp | -P | --presencepenalty | -F | --frequencypenalty | --modelContextLength | -n | --latest | -y | --youtube | -g | --language | -u | --scrape_url | -q | --scrape_question | -e | --seed | --address | --api-key | --search-location | --image-compression | --think-start-tag | --think-end-tag)
# No specific completion suggestions, user types the value
return 0
;;

View File

@@ -69,6 +69,8 @@ complete -c fabric -l image-background -d "Background type: opaque, transparent
complete -c fabric -l addextension -d "Register a new extension from config file path" -r -a "*.yaml *.yml"
complete -c fabric -l rmextension -d "Remove a registered extension by name" -a "(__fabric_get_extensions)"
complete -c fabric -l strategy -d "Choose a strategy from the available strategies" -a "(__fabric_get_strategies)"
complete -c fabric -l think-start-tag -d "Start tag for thinking sections (default: <think>)"
complete -c fabric -l think-end-tag -d "End tag for thinking sections (default: </think>)"
# Boolean flags (no arguments)
complete -c fabric -s S -l setup -d "Run setup for all reconfigurable parts of fabric"
@@ -98,4 +100,6 @@ complete -c fabric -l listextensions -d "List all registered extensions"
complete -c fabric -l liststrategies -d "List all strategies"
complete -c fabric -l listvendors -d "List all vendors"
complete -c fabric -l shell-complete-list -d "Output raw list without headers/formatting (for shell completion)"
complete -c fabric -l suppress-think -d "Suppress text enclosed in thinking tags"
complete -c fabric -l disable-responses-api -d "Disable OpenAI Responses API (default: false)"
complete -c fabric -s h -l help -d "Show this help message"

View File

@@ -0,0 +1,8 @@
# IDENTITY AND PURPOSE
You are a senior developer and expert prompt engineer. Think ultra hard to distill the following transcription or tutorial in as little set of unique rules as possible intended for best practices guidance in AI assisted coding tools, each rule has to be in one sentence as a direct instruction, avoid explanations and cosmetic language. Output in Markdown, I prefer bullet dash (-).
---
# TRANSCRIPT

View File

@@ -41,8 +41,8 @@ func handleChatProcessing(currentFlags *Flags, registry *core.PluginRegistry, me
result := session.GetLastMessage().Content
if !currentFlags.Stream {
// print the result if it was not streamed already
if !currentFlags.Stream || currentFlags.SuppressThink {
// print the result if it was not streamed already or suppress-think disabled streaming output
fmt.Println(result)
}

View File

@@ -7,6 +7,7 @@ import (
"strings"
"github.com/danielmiessler/fabric/internal/core"
"github.com/danielmiessler/fabric/internal/plugins/ai/openai"
"github.com/danielmiessler/fabric/internal/tools/converter"
"github.com/danielmiessler/fabric/internal/tools/youtube"
)
@@ -36,6 +37,11 @@ func Cli(version string) (err error) {
}
}
// Configure OpenAI Responses API setting based on CLI flag
if registry != nil {
configureOpenAIResponsesAPI(registry, currentFlags.DisableResponsesAPI)
}
// Handle setup and server commands
var handled bool
if handled, err = handleSetupAndServerCommands(currentFlags, registry, version); err != nil || handled {
@@ -142,3 +148,21 @@ func WriteOutput(message string, outputFile string) (err error) {
}
return
}
// configureOpenAIResponsesAPI configures the OpenAI client's Responses API setting based on the CLI flag
func configureOpenAIResponsesAPI(registry *core.PluginRegistry, disableResponsesAPI bool) {
// Find the OpenAI vendor in the registry
if registry != nil && registry.VendorsAll != nil {
for _, vendor := range registry.VendorsAll.Vendors {
if vendor.GetName() == "OpenAI" {
// Type assertion to access the OpenAI-specific method
if openaiClient, ok := vendor.(*openai.Client); ok {
// Invert the disable flag to get the enable flag
enableResponsesAPI := !disableResponsesAPI
openaiClient.SetResponsesAPIEnabled(enableResponsesAPI)
}
break
}
}
}
}

View File

@@ -18,4 +18,13 @@ temperature: 0.88
seed: 42
stream: true
raw: false
raw: false
# suppress vendor thinking output
suppressThink: false
thinkStartTag: "<think>"
thinkEndTag: "</think>"
# OpenAI Responses API settings
# (use this for llama-server or other OpenAI-compatible local servers)
disableResponsesAPI: true

View File

@@ -83,6 +83,10 @@ type Flags struct {
ImageQuality string `long:"image-quality" description:"Image quality: low, medium, high, auto (default: auto)"`
ImageCompression int `long:"image-compression" description:"Compression level 0-100 for JPEG/WebP formats (default: not set)"`
ImageBackground string `long:"image-background" description:"Background type: opaque, transparent (default: opaque, only for PNG/WebP)"`
SuppressThink bool `long:"suppress-think" yaml:"suppressThink" description:"Suppress text enclosed in thinking tags"`
ThinkStartTag string `long:"think-start-tag" yaml:"thinkStartTag" description:"Start tag for thinking sections" default:"<think>"`
ThinkEndTag string `long:"think-end-tag" yaml:"thinkEndTag" description:"End tag for thinking sections" default:"</think>"`
DisableResponsesAPI bool `long:"disable-responses-api" yaml:"disableResponsesAPI" description:"Disable OpenAI Responses API (default: false)"`
}
var debug = false
@@ -99,26 +103,34 @@ func Init() (ret *Flags, err error) {
usedFlags := make(map[string]bool)
yamlArgsScan := os.Args[1:]
// Get list of fields that have yaml tags, could be in yaml config
yamlFields := make(map[string]bool)
// Create mapping from flag names (both short and long) to yaml tag names
flagToYamlTag := make(map[string]string)
t := reflect.TypeOf(Flags{})
for i := 0; i < t.NumField(); i++ {
if yamlTag := t.Field(i).Tag.Get("yaml"); yamlTag != "" {
yamlFields[yamlTag] = true
//Debugf("Found yaml-configured field: %s\n", yamlTag)
field := t.Field(i)
yamlTag := field.Tag.Get("yaml")
if yamlTag != "" {
longTag := field.Tag.Get("long")
shortTag := field.Tag.Get("short")
if longTag != "" {
flagToYamlTag[longTag] = yamlTag
Debugf("Mapped long flag %s to yaml tag %s\n", longTag, yamlTag)
}
if shortTag != "" {
flagToYamlTag[shortTag] = yamlTag
Debugf("Mapped short flag %s to yaml tag %s\n", shortTag, yamlTag)
}
}
}
// Scan args that are provided by the CLI and might be in yaml
for _, arg := range yamlArgsScan {
if strings.HasPrefix(arg, "--") {
flag := strings.TrimPrefix(arg, "--")
if i := strings.Index(flag, "="); i > 0 {
flag = flag[:i]
}
if yamlFields[flag] {
usedFlags[flag] = true
Debugf("CLI flag used: %s\n", flag)
flag := extractFlag(arg)
if flag != "" {
if yamlTag, exists := flagToYamlTag[flag]; exists {
usedFlags[yamlTag] = true
Debugf("CLI flag used: %s (yaml: %s)\n", flag, yamlTag)
}
}
}
@@ -131,6 +143,16 @@ func Init() (ret *Flags, err error) {
return
}
// Check to see if a ~/.config/fabric/config.yaml config file exists (only when user didn't specify a config)
if ret.Config == "" {
// Default to ~/.config/fabric/config.yaml if no config specified
if defaultConfigPath, err := util.GetDefaultConfigPath(); err == nil && defaultConfigPath != "" {
ret.Config = defaultConfigPath
} else if err != nil {
Debugf("Could not determine default config path: %v\n", err)
}
}
// If config specified, load and apply YAML for unused flags
if ret.Config != "" {
var yamlFlags *Flags
@@ -165,7 +187,6 @@ func Init() (ret *Flags, err error) {
}
}
// Handle stdin and messages
// Handle stdin and messages
info, _ := os.Stdin.Stat()
pipedToStdin := (info.Mode() & os.ModeCharDevice) == 0
@@ -185,6 +206,22 @@ func Init() (ret *Flags, err error) {
return
}
func extractFlag(arg string) string {
var flag string
if strings.HasPrefix(arg, "--") {
flag = strings.TrimPrefix(arg, "--")
if i := strings.Index(flag, "="); i > 0 {
flag = flag[:i]
}
} else if strings.HasPrefix(arg, "-") && len(arg) > 1 {
flag = strings.TrimPrefix(arg, "-")
if i := strings.Index(flag, "="); i > 0 {
flag = flag[:i]
}
}
return flag
}
func assignWithConversion(targetField, sourceField reflect.Value) error {
// Handle string source values
if sourceField.Kind() == reflect.String {
@@ -376,6 +413,15 @@ func (o *Flags) BuildChatOptions() (ret *domain.ChatOptions, err error) {
return nil, err
}
startTag := o.ThinkStartTag
if startTag == "" {
startTag = "<think>"
}
endTag := o.ThinkEndTag
if endTag == "" {
endTag = "</think>"
}
ret = &domain.ChatOptions{
Model: o.Model,
Temperature: o.Temperature,
@@ -392,6 +438,9 @@ func (o *Flags) BuildChatOptions() (ret *domain.ChatOptions, err error) {
ImageQuality: o.ImageQuality,
ImageCompression: o.ImageCompression,
ImageBackground: o.ImageBackground,
SuppressThink: o.SuppressThink,
ThinkStartTag: startTag,
ThinkEndTag: endTag,
}
return
}
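
For reference, the `extractFlag` helper added in this file reduces a raw argument to its bare flag name; a standalone sketch with a few worked inputs (behavior matches the diff above, structure slightly simplified):

```go
package main

import (
	"fmt"
	"strings"
)

// extractFlag strips leading dashes and any "=value" suffix, returning the bare
// flag name, or "" for arguments that are not flags.
func extractFlag(arg string) string {
	var flag string
	if strings.HasPrefix(arg, "--") {
		flag = strings.TrimPrefix(arg, "--")
	} else if strings.HasPrefix(arg, "-") && len(arg) > 1 {
		flag = strings.TrimPrefix(arg, "-")
	}
	if i := strings.Index(flag, "="); i > 0 {
		flag = flag[:i]
	}
	return flag
}

func main() {
	for _, arg := range []string{"--model=gpt-4o", "-m", "--suppress-think", "input.txt"} {
		fmt.Printf("%q -> %q\n", arg, extractFlag(arg))
	}
	// "--model=gpt-4o"   -> "model"
	// "-m"               -> "m"
	// "--suppress-think" -> "suppress-think"
	// "input.txt"        -> ""
}
```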

View File

@@ -64,6 +64,9 @@ func TestBuildChatOptions(t *testing.T) {
FrequencyPenalty: 0.2,
Raw: false,
Seed: 1,
SuppressThink: false,
ThinkStartTag: "<think>",
ThinkEndTag: "</think>",
}
options, err := flags.BuildChatOptions()
assert.NoError(t, err)
@@ -85,12 +88,29 @@ func TestBuildChatOptionsDefaultSeed(t *testing.T) {
FrequencyPenalty: 0.2,
Raw: false,
Seed: 0,
SuppressThink: false,
ThinkStartTag: "<think>",
ThinkEndTag: "</think>",
}
options, err := flags.BuildChatOptions()
assert.NoError(t, err)
assert.Equal(t, expectedOptions, options)
}
func TestBuildChatOptionsSuppressThink(t *testing.T) {
flags := &Flags{
SuppressThink: true,
ThinkStartTag: "[[t]]",
ThinkEndTag: "[[/t]]",
}
options, err := flags.BuildChatOptions()
assert.NoError(t, err)
assert.True(t, options.SuppressThink)
assert.Equal(t, "[[t]]", options.ThinkStartTag)
assert.Equal(t, "[[/t]]", options.ThinkEndTag)
}
func TestInitWithYAMLConfig(t *testing.T) {
// Create a temporary YAML config file
configContent := `

View File

@@ -79,7 +79,9 @@ func (o *Chatter) Send(request *domain.ChatRequest, opts *domain.ChatOptions) (s
for response := range responseChan {
message += response
fmt.Print(response)
if !opts.SuppressThink {
fmt.Print(response)
}
}
// Wait for goroutine to finish
@@ -101,6 +103,10 @@ func (o *Chatter) Send(request *domain.ChatRequest, opts *domain.ChatOptions) (s
}
}
if opts.SuppressThink && !o.DryRun {
message = domain.StripThinkBlocks(message, opts.ThinkStartTag, opts.ThinkEndTag)
}
if message == "" {
session = nil
err = fmt.Errorf("empty response")

View File

@@ -15,6 +15,7 @@ import (
type mockVendor struct {
sendStreamError error
streamChunks []string
sendFunc func(context.Context, []*chat.ChatCompletionMessage, *domain.ChatOptions) (string, error)
}
func (m *mockVendor) GetName() string {
@@ -57,6 +58,9 @@ func (m *mockVendor) SendStream(messages []*chat.ChatCompletionMessage, opts *do
}
func (m *mockVendor) Send(ctx context.Context, messages []*chat.ChatCompletionMessage, opts *domain.ChatOptions) (string, error) {
if m.sendFunc != nil {
return m.sendFunc(ctx, messages, opts)
}
return "test response", nil
}
@@ -64,6 +68,51 @@ func (m *mockVendor) NeedsRawMode(modelName string) bool {
return false
}
func TestChatter_Send_SuppressThink(t *testing.T) {
tempDir := t.TempDir()
db := fsdb.NewDb(tempDir)
mockVendor := &mockVendor{}
chatter := &Chatter{
db: db,
Stream: false,
vendor: mockVendor,
model: "test-model",
}
request := &domain.ChatRequest{
Message: &chat.ChatCompletionMessage{
Role: chat.ChatMessageRoleUser,
Content: "test",
},
}
opts := &domain.ChatOptions{
Model: "test-model",
SuppressThink: true,
ThinkStartTag: "<think>",
ThinkEndTag: "</think>",
}
// custom send function returning a message with think tags
mockVendor.sendFunc = func(ctx context.Context, msgs []*chat.ChatCompletionMessage, o *domain.ChatOptions) (string, error) {
return "<think>hidden</think> visible", nil
}
session, err := chatter.Send(request, opts)
if err != nil {
t.Fatalf("Send returned error: %v", err)
}
if session == nil {
t.Fatal("expected session")
}
last := session.GetLastMessage()
if last.Content != "visible" {
t.Errorf("expected filtered content 'visible', got %q", last.Content)
}
}
func TestChatter_Send_StreamingErrorPropagation(t *testing.T) {
// Create a temporary database for testing
tempDir := t.TempDir()

View File

@@ -33,6 +33,9 @@ type ChatOptions struct {
ImageQuality string
ImageCompression int
ImageBackground string
SuppressThink bool
ThinkStartTag string
ThinkEndTag string
}
// NormalizeMessages remove empty messages and ensure messages order user-assist-user

internal/domain/think.go (new file, 32 lines)
View File

@@ -0,0 +1,32 @@
package domain
import (
"regexp"
"sync"
)
// StripThinkBlocks removes any content between the provided start and end tags
// from the input string. Whitespace following the end tag is also removed so
// output resumes at the next non-empty line.
var (
regexCache = make(map[string]*regexp.Regexp)
cacheMutex sync.Mutex
)
func StripThinkBlocks(input, startTag, endTag string) string {
if startTag == "" || endTag == "" {
return input
}
cacheKey := startTag + "|" + endTag
cacheMutex.Lock()
re, exists := regexCache[cacheKey]
if !exists {
pattern := "(?s)" + regexp.QuoteMeta(startTag) + ".*?" + regexp.QuoteMeta(endTag) + "\\s*"
re = regexp.MustCompile(pattern)
regexCache[cacheKey] = re
}
cacheMutex.Unlock()
return re.ReplaceAllString(input, "")
}

View File

@@ -0,0 +1,19 @@
package domain
import "testing"
func TestStripThinkBlocks(t *testing.T) {
input := "<think>internal</think>\n\nresult"
got := StripThinkBlocks(input, "<think>", "</think>")
if got != "result" {
t.Errorf("expected %q, got %q", "result", got)
}
}
func TestStripThinkBlocksCustomTags(t *testing.T) {
input := "[[t]]hidden[[/t]] visible"
got := StripThinkBlocks(input, "[[t]]", "[[/t]]")
if got != "visible" {
t.Errorf("expected %q, got %q", "visible", got)
}
}

View File

@@ -12,6 +12,8 @@ import (
"github.com/danielmiessler/fabric/internal/plugins"
)
const DryRunResponse = "Dry run: Fake response sent by DryRun plugin\n"
type Client struct {
*plugins.PluginBase
}
@@ -85,27 +87,37 @@ func (c *Client) formatOptions(opts *domain.ChatOptions) string {
if opts.ImageFile != "" {
builder.WriteString(fmt.Sprintf("ImageFile: %s\n", opts.ImageFile))
}
if opts.SuppressThink {
builder.WriteString("SuppressThink: enabled\n")
builder.WriteString(fmt.Sprintf("Thinking Start Tag: %s\n", opts.ThinkStartTag))
builder.WriteString(fmt.Sprintf("Thinking End Tag: %s\n", opts.ThinkEndTag))
}
return builder.String()
}
func (c *Client) SendStream(msgs []*chat.ChatCompletionMessage, opts *domain.ChatOptions, channel chan string) error {
func (c *Client) constructRequest(msgs []*chat.ChatCompletionMessage, opts *domain.ChatOptions) string {
var builder strings.Builder
builder.WriteString("Dry run: Would send the following request:\n\n")
builder.WriteString(c.formatMessages(msgs))
builder.WriteString(c.formatOptions(opts))
channel <- builder.String()
close(channel)
return builder.String()
}
func (c *Client) SendStream(msgs []*chat.ChatCompletionMessage, opts *domain.ChatOptions, channel chan string) error {
defer close(channel)
request := c.constructRequest(msgs, opts)
channel <- request
channel <- "\n"
channel <- DryRunResponse
return nil
}
func (c *Client) Send(_ context.Context, msgs []*chat.ChatCompletionMessage, opts *domain.ChatOptions) (string, error) {
fmt.Println("Dry run: Would send the following request:")
fmt.Print(c.formatMessages(msgs))
fmt.Print(c.formatOptions(opts))
request := c.constructRequest(msgs, opts)
return "", nil
return request + "\n" + DryRunResponse, nil
}
func (c *Client) Setup() error {

View File

@@ -66,6 +66,11 @@ type Client struct {
ImplementsResponses bool // Whether this provider supports the Responses API
}
// SetResponsesAPIEnabled configures whether to use the Responses API
func (o *Client) SetResponsesAPIEnabled(enabled bool) {
o.ImplementsResponses = enabled
}
func (o *Client) configure() (ret error) {
opts := []option.RequestOption{option.WithAPIKey(o.ApiKey.Value)}
if o.ApiBaseURL.Value != "" {

View File

@@ -71,3 +71,21 @@ func IsSymlinkToDir(path string) bool {
return false // Regular directories should not be treated as symlinks
}
// GetDefaultConfigPath returns the default path for the configuration file
// if it exists, otherwise returns an empty string.
func GetDefaultConfigPath() (string, error) {
homeDir, err := os.UserHomeDir()
if err != nil {
return "", fmt.Errorf("could not determine user home directory: %w", err)
}
defaultConfigPath := filepath.Join(homeDir, ".config", "fabric", "config.yaml")
if _, err := os.Stat(defaultConfigPath); err != nil {
if os.IsNotExist(err) {
return "", nil // Return no error for non-existent config path
}
return "", fmt.Errorf("error accessing default config path: %w", err)
}
return defaultConfigPath, nil
}

View File

@@ -1 +1 @@
"1.4.250"
"1.4.257"

View File

@@ -1861,6 +1861,16 @@
"CR THINKING",
"SELF"
]
},
{
"patternName": "generate_code_rules",
"description": "Extracts a list of best practices rules for AI coding assisted tools.",
"tags": [
"ANALYSIS",
"EXTRACT",
"DEVELOPMENT",
"AI"
]
}
]
}

View File

@@ -903,6 +903,10 @@
{
"patternName": "t_check_dunning_kruger",
"pattern_extract": "# IDENTITY You are an expert at understanding deep context about a person or entity, and then creating wisdom from that context combined with the instruction or question given in the input. # STEPS 1. Read the incoming TELOS File thoroughly. Fully understand everything about this person or entity. 2. Deeply study the input instruction or question. 3. Spend significant time and effort thinking about how these two are related, and what would be the best possible output for the person who sent the input. 4. Evaluate the input against the Dunning-Kruger effect and input's prior beliefs. Explore cognitive bias, subjective ability and objective ability for: low-ability areas where the input owner overestimate their knowledge or skill; and the opposite, high-ability areas where the input owner underestimate their knowledge or skill. # EXAMPLE In education, students who overestimate their understanding of a topic may not seek help or put in the necessary effort, while high-achieving students might doubt their abilities. In healthcare, overconfident practitioners might make critical errors, and underconfident practitioners might delay crucial decisions. In politics, politicians with limited expertise might propose simplistic solutions and ignore expert advice. END OF EXAMPLE # OUTPUT - In a section called OVERESTIMATION OF COMPETENCE, output a set of 10, 16-word bullets, that capture the principal misinterpretation of lack of knowledge or skill which are leading the input owner to believe they are more knowledgeable or skilled than they actually are. - In a section called UNDERESTIMATION OF COMPETENCE, output a set of 10, 16-word bullets,that capture the principal misinterpreation of underestimation of their knowledge or skill which are preventing the input owner to see opportunities. - In a section called METACOGNITIVIVE SKILLS, output a set of 10-word bullets that expose areas where the input owner struggles to accuratelly assess their own performance and may not be aware of the gap between their actual ability and their perceived ability. - In a section called IMPACT ON DECISION MAKING, output a set of 10-word bullets exposing facts, biases, traces of behavior based on overinflated self-assessment, that can lead to poor decisions. - At the end summarize the findings and give the input owner a motivational and constructive perspective on how they can start to tackle principal 5 gaps in their perceived skills and knowledge competencies. Don't be over simplistic. # OUTPUT INSTRUCTIONS 1. Only output valid, basic Markdown. No special formatting or italics or bolding or anything. 2. Do not output any content other than the sections above. Nothing else."
},
{
"patternName": "generate_code_rules",
"pattern_extract": "# IDENTITY AND PURPOSE You are a senior developer and expert prompt engineer. Think ultra hard to distill the following transcription or tutorial in as little set of unique rules as possible intended for best practices guidance in AI assisted coding tools, each rule has to be in one sentence as a direct instruction, avoid explanations and cosmetic language. Output in Markdown, I prefer bullet dash (-). --- # TRANSCRIPT"
}
]
}

View File

@@ -1861,6 +1861,16 @@
"CR THINKING",
"SELF"
]
},
{
"patternName": "generate_code_rules",
"description": "Extracts a list of best practices rules for AI coding assisted tools.",
"tags": [
"ANALYSIS",
"EXTRACT",
"DEVELOPMENT",
"AI"
]
}
]
}