Compare commits

...

42 Commits

Author SHA1 Message Date
github-actions[bot]
833b09081e chore(release): Update version to v1.4.349 2025-12-16 08:12:11 +00:00
Kayvan Sylvan
201d1fb791 Merge pull request #1877 from ksylvan/kayvan/modernize-part4-string-and-slice-syntax
modernize: update GitHub Actions and modernize Go code
2025-12-16 00:09:43 -08:00
Changelog Bot
6ecbd044e6 chore: incoming 1877 changelog entry 2025-12-16 00:06:39 -08:00
Kayvan Sylvan
fdadeae1e7 modernize: update GitHub Actions and modernize Go code with latest stdlib features
## CHANGES

- Upgrade GitHub Actions to latest versions (v6, v21)
- Add modernization check step in CI workflow
- Replace strings manipulation with `strings.CutPrefix` and `strings.CutSuffix`
- Replace manual loops with `slices.Contains` for validation
- Use `strings.SplitSeq` for iterator-based string splitting
- Replace `bytes.TrimPrefix` with `bytes.CutPrefix` for clarity
- Use `strings.Builder` instead of string concatenation
- Replace `fmt.Sprintf` with `fmt.Appendf` for efficiency
- Simplify padding calculation with `max` builtin
2025-12-15 23:55:37 -08:00
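The diffs further down show these rewrites in context; as a quick reference, here is a minimal, self-contained sketch of the stdlib calls the commit message names. Values are illustrative only (not code from this PR), and `strings.SplitSeq` assumes Go 1.24 or newer.

```go
package main

import (
	"fmt"
	"slices"
	"strings"
)

func main() {
	// strings.CutPrefix replaces a HasPrefix + TrimPrefix pair.
	if after, ok := strings.CutPrefix("--debug=2", "--debug="); ok {
		fmt.Println("level:", after) // level: 2
	}

	// slices.Contains replaces a manual membership loop.
	validSizes := []string{"1024x1024", "1536x1024", "1024x1536", "auto"}
	fmt.Println(slices.Contains(validSizes, "auto")) // true

	// strings.SplitSeq (Go 1.24) yields fields lazily instead of allocating a slice.
	for word := range strings.SplitSeq("a b c", " ") {
		fmt.Print(word)
	}
	fmt.Println()

	// fmt.Appendf writes formatted output straight into a byte slice.
	body := fmt.Appendf(nil, "{\"%s\"}", "v1.4.349")
	fmt.Println(string(body))

	// The max builtin (Go 1.21) replaces a small if-clamp for padding.
	padding := max(34-len("--help"), 2)
	fmt.Println(padding)
}
```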
github-actions[bot]
57c3e36574 chore(release): Update version to v1.4.348 2025-12-16 07:34:45 +00:00
Kayvan Sylvan
1b98a8899f Merge pull request #1876 from ksylvan/kayvan/modernize-part3-typefor-and-range-loops
modernize Go code with TypeFor and range loops
2025-12-15 23:31:44 -08:00
Kayvan Sylvan
a4484d4e01 refactor: modernize Go code with TypeFor and range loops
- Replace reflect.TypeOf with TypeFor generic syntax
- Convert traditional for loops to range-based iterations
- Simplify reflection usage in CLI flag handling
- Update test loops to use range over integers
- Refactor string processing loops in template plugin
2025-12-15 23:29:41 -08:00
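For reference, a small sketch of the two Go 1.22 features this commit leans on, using a simplified stand-in `Flags` struct rather than the project's real one:

```go
package main

import (
	"fmt"
	"reflect"
)

// Stand-in for the CLI flags struct; only the yaml tags matter here.
type Flags struct {
	Pattern string `yaml:"pattern"`
	Model   string `yaml:"model"`
}

func main() {
	// reflect.TypeFor[T]() (Go 1.22) replaces reflect.TypeOf(Flags{})
	// without constructing a value of the type.
	t := reflect.TypeFor[Flags]()

	// Range over an integer (Go 1.22) replaces the counted for loop.
	for i := range t.NumField() {
		fmt.Println(t.Field(i).Tag.Get("yaml"))
	}
}
```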
github-actions[bot]
005d43674f chore(release): Update version to v1.4.347 2025-12-16 06:51:40 +00:00
Kayvan Sylvan
3a69437790 Merge pull request #1875 from ksylvan/kayvan/modernize-part2-loops
modernize: update benchmarks to use b.Loop and refactor map copying
2025-12-15 22:48:59 -08:00
Changelog Bot
b057f52ca6 chore: incoming 1875 changelog entry 2025-12-15 22:46:45 -08:00
Kayvan Sylvan
dccdfbac8c test: update benchmarks to use b.Loop and refactor map copying
# CHANGES

- update benchmark loops to use cleaner `b.Loop()` syntax
- remove unnecessary `b.ResetTimer()` call in token benchmark
- use `maps.Copy` for merging variables in patterns handler
2025-12-15 22:40:55 -08:00
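A minimal benchmark sketch of the `b.Loop()` (Go 1.24) and `maps.Copy` forms referenced above; the package and names are illustrative, not taken from the patterns handler.

```go
package patterns

import (
	"maps"
	"testing"
)

// BenchmarkMerge uses the b.Loop() form: it replaces
// `for i := 0; i < b.N; i++` and makes b.ResetTimer() unnecessary,
// since setup before the loop is excluded from the measured time.
func BenchmarkMerge(b *testing.B) {
	base := map[string]string{"lang": "en"}
	for b.Loop() {
		variables := map[string]string{"name": "fabric"}
		// maps.Copy replaces a manual key-by-key copy loop.
		maps.Copy(variables, base)
	}
}
```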
github-actions[bot]
98038707f1 chore(release): Update version to v1.4.346 2025-12-16 06:30:55 +00:00
Kayvan Sylvan
03b22a70f0 Merge pull request #1874 from ksylvan/kayvan/modernize-part1
refactor: replace interface{} with any across codebase
2025-12-15 22:28:15 -08:00
Kayvan Sylvan
66025d516c refactor: replace interface{} with any across codebase
- Part 1 of incorporating `modernize` tool into Fabric.
- Replace `interface{}` with `any` in slice type declarations
- Update map types from `map[string]interface{}` to `map[string]any`
- Change variadic function parameters to use `...any` instead of `...interface{}`
- Modernize JSON unmarshaling variables to `any` for consistency
- Update struct fields and method signatures to prefer `any` alias
- Ensure all type assertions and conversions use `any` throughout codebase
- Add PR guidelines in docs to encourage focused, reviewable changes
2025-12-15 22:25:18 -08:00
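Since `any` is the Go 1.18 alias for `interface{}`, the two spellings denote the identical type and the rewrite is purely mechanical. A tiny illustrative sketch (not the project's actual helpers):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Variadic parameters take ...any instead of ...interface{}.
func debugf(format string, a ...any) {
	fmt.Printf(format, a...)
}

func main() {
	var payload map[string]any // was: map[string]interface{}
	_ = json.Unmarshal([]byte(`{"type":"report","files":3}`), &payload)
	debugf("decoded: %v\n", payload)
}
```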
github-actions[bot]
32ef2b73c4 chore(release): Update version to v1.4.345 2025-12-15 06:03:18 +00:00
Kayvan Sylvan
656ca7ee28 Merge pull request #1870 from ksylvan/kayvan/update-web-ui-pdfjs-library
Web UI: upgrade pdfjs and add SSR-safe dynamic PDF worker init
2025-12-14 22:00:41 -08:00
Changelog Bot
0025466e4e chore: incoming 1870 changelog entry 2025-12-14 21:57:06 -08:00
Kayvan Sylvan
4c2b38ca53 feat: upgrade pdfjs and add SSR-safe dynamic PDF worker init
- Upgrade `pdfjs-dist` to v5 with new engine requirement
- Dynamically import PDF.js to avoid SSR import-time crashes
- Configure PDF worker via CDN using runtime PDF.js version
- Update PDF conversion pipeline to use lazy initialization
- Guard chat message localStorage persistence behind browser checks
- Reformat ChatService with consistent imports and typings
- Bump `patch-package` and refresh pnpm lock dependency graph
- Add `skeletonlabs` to VSCode spellcheck dictionary
2025-12-14 16:12:23 -08:00
github-actions[bot]
9c7ce4a974 chore(release): Update version to v1.4.344 2025-12-14 08:14:21 +00:00
Kayvan Sylvan
626c492c63 Merge pull request #1867 from jaredmontoya/update-flake
chore: update flake
2025-12-14 00:11:45 -08:00
Changelog Bot
71fb3fea7e chore: incoming 1867 changelog entry 2025-12-14 00:08:33 -08:00
Kayvan Sylvan
3bc1150da4 Merge branch 'main' into update-flake 2025-12-14 00:07:51 -08:00
github-actions[bot]
827e0aeca7 chore(release): Update version to v1.4.343 2025-12-14 08:05:48 +00:00
Kayvan Sylvan
0a1e01c4ab Merge pull request #1829 from danielmiessler/dependabot/npm_and_yarn/web/npm_and_yarn-3c67cbb9cd
chore(deps): bump js-yaml from 4.1.0 to 4.1.1 in /web in the npm_and_yarn group across 1 directory
2025-12-14 00:03:09 -08:00
Changelog Bot
6003bb2c86 chore: incoming 1829 changelog entry 2025-12-13 23:52:18 -08:00
dependabot[bot]
bb896b1064 chore(deps): bump js-yaml
Bumps the npm_and_yarn group with 1 update in the /web directory: [js-yaml](https://github.com/nodeca/js-yaml).


Updates `js-yaml` from 4.1.0 to 4.1.1
- [Changelog](https://github.com/nodeca/js-yaml/blob/master/CHANGELOG.md)
- [Commits](https://github.com/nodeca/js-yaml/compare/4.1.0...4.1.1)

---
updated-dependencies:
- dependency-name: js-yaml
  dependency-version: 4.1.1
  dependency-type: indirect
  dependency-group: npm_and_yarn
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-12-13 23:52:18 -08:00
jaredmontoya
d149c62a37 chore: update flake 2025-12-13 20:30:31 +01:00
github-actions[bot]
3d25fbc04c chore(release): Update version to v1.4.342 2025-12-13 08:11:50 +00:00
Kayvan Sylvan
4c822d2c59 Merge pull request #1866 from ksylvan/kayvan/errors-never-to-stdout
fix: write CLI and streaming errors to stderr
2025-12-13 00:09:09 -08:00
Changelog Bot
f1ffd6ee29 chore: incoming 1866 changelog entry 2025-12-13 00:07:08 -08:00
Kayvan Sylvan
deb59bdd21 fix: write CLI and streaming errors to stderr
## CHANGES
- Route CLI execution errors to standard error output
- Print Anthropic stream errors to stderr consistently
- Add os import to support stderr error writes
- Preserve help-output suppression and exit behavior
2025-12-13 00:02:44 -08:00
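A stripped-down sketch of the stderr routing described above (a similar change to the real entry point appears in a diff further down); `run` is a hypothetical stand-in for the CLI call.

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	if err := run(); err != nil {
		// Errors go to stderr so piped stdout stays clean for pattern output.
		fmt.Fprintf(os.Stderr, "%s\n", err)
		os.Exit(1)
	}
}

func run() error { return fmt.Errorf("example failure") }
```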
github-actions[bot]
2a1e8dcf12 chore(release): Update version to v1.4.341 2025-12-11 10:49:47 +00:00
Kayvan Sylvan
b6fd81dd16 Merge pull request #1860 from ksylvan/kayvan/fix-for-setup-reset-required-value-now-does-not-show-validation-error
fix: allow resetting required settings without validation errors
2025-12-11 18:47:16 +08:00
Kayvan Sylvan
5b723c9e92 fix: allow resetting required settings without validation errors
CHANGES
- update `Ask` to detect reset command and bypass validation
- refactor `OnAnswer` to support new `isReset` parameter logic
- invoke `ConfigureCustom` in `Setup` to avoid redundant re-validation
- add unit tests ensuring required fields can be reset
- add incoming 1860 changelog entry
2025-12-11 02:39:35 -08:00
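A condensed sketch of the reset flow described above; the names mirror the plugins package but the struct is simplified, and the full change appears in a diff further down.

```go
package plugins

import (
	"fmt"
	"strings"
)

const AnswerReset = "reset"

type SetupQuestion struct {
	Value    string
	Required bool
}

// Answer normalizes raw input: typing "reset" clears the stored value and
// bypasses the required-field check via OnAnswerWithReset.
func (q *SetupQuestion) Answer(raw string) error {
	isReset := strings.ToLower(raw) == AnswerReset
	if isReset {
		raw = ""
	}
	return q.OnAnswerWithReset(raw, isReset)
}

// OnAnswerWithReset skips validation only when the user explicitly reset
// the value; an ordinary empty answer on a required field still errors.
func (q *SetupQuestion) OnAnswerWithReset(answer string, isReset bool) error {
	q.Value = answer
	if isReset {
		return nil
	}
	if q.Required && q.Value == "" {
		return fmt.Errorf("value is required")
	}
	return nil
}
```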
github-actions[bot]
93f8978085 chore(release): Update version to v1.4.340 2025-12-08 00:36:16 +00:00
Kayvan Sylvan
4d91bf837f Merge pull request #1856 from ksylvan/kayvan/claude-haiku-4-5
Add support for new ClaudeHaiku 4.5 models
2025-12-08 08:33:51 +08:00
Changelog Bot
cb29a0d606 chore: incoming 1856 changelog entry 2025-12-08 08:30:17 +08:00
Kayvan Sylvan
b1eb7a82d9 feat: add support for new ClaudeHaiku models in client
### CHANGES

- Add `ModelClaudeHaiku4_5` to supported models
- Add `ModelClaudeHaiku4_5_20251001` to supported models
2025-12-08 08:21:18 +08:00
github-actions[bot]
bc8f5add00 chore(release): Update version to v1.4.339 2025-12-08 00:10:02 +00:00
Kayvan Sylvan
c3f874f985 Merge pull request #1855 from ksylvan/kayvan/ollama_image_handling
feat: add image attachment support for Ollama vision models
2025-12-08 08:07:33 +08:00
Changelog Bot
922df52d0c chore: incoming 1855 changelog entry 2025-12-08 08:00:59 +08:00
Kayvan Sylvan
4badfecadb feat: add multi-modal image support to Ollama client
## CHANGES

- Add base64 and io imports for image handling
- Store httpClient separately in Client struct for reuse
- Convert createChatRequest to return error for validation
- Implement convertMessage to handle multi-content chat messages
- Add loadImageBytes to fetch images from URLs
- Support base64 data URLs for inline images
- Handle HTTP image URLs with context propagation
- Replace debug print with proper debuglog usage
2025-12-08 07:48:36 +08:00
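A condensed sketch of the two image-loading paths described above, simplified from the client diff shown further down: inline base64 data URLs are decoded locally, anything else is fetched over HTTP with the caller's context.

```go
package ollama

import (
	"context"
	"encoding/base64"
	"fmt"
	"io"
	"net/http"
	"strings"
)

func loadImage(ctx context.Context, client *http.Client, imageURL string) ([]byte, error) {
	if strings.HasPrefix(imageURL, "data:") {
		parts := strings.SplitN(imageURL, ",", 2)
		if len(parts) != 2 {
			return nil, fmt.Errorf("invalid data URL format")
		}
		return base64.StdEncoding.DecodeString(parts[1])
	}
	req, err := http.NewRequestWithContext(ctx, http.MethodGet, imageURL, nil)
	if err != nil {
		return nil, err
	}
	resp, err := client.Do(req)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode >= http.StatusBadRequest {
		return nil, fmt.Errorf("failed to fetch image %s: %s", imageURL, resp.Status)
	}
	return io.ReadAll(resp.Body)
}
```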
47 changed files with 1071 additions and 1038 deletions

View File

@@ -20,18 +20,22 @@ jobs:
contents: read
steps:
- name: Checkout code
uses: actions/checkout@v5
- name: Install Nix
uses: DeterminateSystems/nix-installer-action@main
uses: actions/checkout@v6
- name: Set up Go
uses: actions/setup-go@v5
uses: actions/setup-go@v6
with:
go-version-file: ./go.mod
- name: Run tests
run: go test -v ./...
- name: Check for modernization opportunities
run: |
go run golang.org/x/tools/go/analysis/passes/modernize/cmd/modernize@latest ./...
- name: Install Nix
uses: DeterminateSystems/nix-installer-action@v21
- name: Check Formatting
run: nix flake check

View File

@@ -11,7 +11,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v5
uses: actions/checkout@v6
with:
fetch-depth: 0
@@ -32,7 +32,7 @@ jobs:
- name: Upload Patterns Artifact
if: steps.check-changes.outputs.changes == 'true'
uses: actions/upload-artifact@v4
uses: actions/upload-artifact@v6
with:
name: patterns
path: patterns.zip

View File

@@ -15,12 +15,12 @@ jobs:
contents: read
steps:
- name: Checkout code
uses: actions/checkout@v5
uses: actions/checkout@v6
with:
fetch-depth: 0
- name: Set up Go
uses: actions/setup-go@v5
uses: actions/setup-go@v6
with:
go-version-file: ./go.mod
@@ -37,11 +37,11 @@ jobs:
contents: write
steps:
- name: Checkout code
uses: actions/checkout@v5
uses: actions/checkout@v6
with:
fetch-depth: 0
- name: Set up Go
uses: actions/setup-go@v5
uses: actions/setup-go@v6
with:
go-version-file: ./go.mod
- name: Run GoReleaser

View File

@@ -24,17 +24,17 @@ concurrency:
jobs:
update-version:
if: >
${{ github.repository_owner == 'danielmiessler' }} &&
github.repository_owner == 'danielmiessler' &&
github.event_name == 'push' && github.ref == 'refs/heads/main'
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@v5
uses: actions/checkout@v6
with:
fetch-depth: 0
- name: Install Nix
uses: DeterminateSystems/nix-installer-action@main
uses: DeterminateSystems/nix-installer-action@v21
- name: Set up Git
run: |

View File

@@ -166,6 +166,7 @@
"sess",
"sgaunet",
"shellquote",
"skeletonlabs",
"SSEHTTP",
"storer",
"Streamlit",

View File

@@ -1,5 +1,105 @@
# Changelog
## v1.4.349 (2025-12-16)
### PR [#1877](https://github.com/danielmiessler/Fabric/pull/1877) by [ksylvan](https://github.com/ksylvan): modernize: update GitHub Actions and modernize Go code
- Modernize GitHub Actions and Go code with latest stdlib features
- Upgrade GitHub Actions to latest versions (v6, v21) and add modernization check step
- Replace strings manipulation with `strings.CutPrefix` and `strings.CutSuffix`
- Replace manual loops with `slices.Contains` for validation and use `strings.SplitSeq` for iterator-based splitting
- Replace `fmt.Sprintf` with `fmt.Appendf` for efficiency and simplify padding calculation with `max` builtin
## v1.4.348 (2025-12-16)
### PR [#1876](https://github.com/danielmiessler/Fabric/pull/1876) by [ksylvan](https://github.com/ksylvan): modernize Go code with TypeFor and range loops
- Replace reflect.TypeOf with TypeFor generic syntax for improved type handling
- Convert traditional for loops to range-based iterations for better code readability
- Simplify reflection usage in CLI flag handling to reduce complexity
- Update test loops to use range over integers for cleaner test code
- Refactor string processing loops in template plugin to use modern Go patterns
## v1.4.347 (2025-12-16)
### PR [#1875](https://github.com/danielmiessler/Fabric/pull/1875) by [ksylvan](https://github.com/ksylvan): modernize: update benchmarks to use b.Loop and refactor map copying
- Updated benchmark loops to use cleaner `b.Loop()` syntax
- Removed unnecessary `b.ResetTimer()` call in token benchmark
- Used `maps.Copy` for merging variables in patterns handler
## v1.4.346 (2025-12-16)
### PR [#1874](https://github.com/danielmiessler/Fabric/pull/1874) by [ksylvan](https://github.com/ksylvan): refactor: replace interface{} with any across codebase
- Part 1 of dealing with #1873 as pointed out by @philoserf
- Replace `interface{}` with `any` in slice type declarations throughout the codebase
- Update map types from `map[string]interface{}` to `map[string]any` for modern Go standards
- Change variadic function parameters to use `...any` instead of `...interface{}`
- Modernize JSON unmarshaling variables to use `any` for consistency
- Update struct fields and method signatures to prefer the `any` alias over legacy interface syntax
## v1.4.345 (2025-12-15)
### PR [#1870](https://github.com/danielmiessler/Fabric/pull/1870) by [ksylvan](https://github.com/ksylvan): Web UI: upgrade pdfjs and add SSR-safe dynamic PDF worker init
- Upgrade `pdfjs-dist` to v5 with new engine requirement
- Dynamically import PDF.js to avoid SSR import-time crashes
- Configure PDF worker via CDN using runtime PDF.js version
- Update PDF conversion pipeline to use lazy initialization
- Guard chat message localStorage persistence behind browser checks
## v1.4.344 (2025-12-14)
### PR [#1867](https://github.com/danielmiessler/Fabric/pull/1867) by [jaredmontoya](https://github.com/jaredmontoya): chore: update flake
- Chore: update flake
- Merge branch 'main' into update-flake
## v1.4.343 (2025-12-14)
### PR [#1829](https://github.com/danielmiessler/Fabric/pull/1829) by [dependabot](https://github.com/apps/dependabot): chore(deps): bump js-yaml from 4.1.0 to 4.1.1 in /web in the npm_and_yarn group across 1 directory
- Updated js-yaml dependency from version 4.1.0 to 4.1.1 in the /web directory
## v1.4.342 (2025-12-13)
### PR [#1866](https://github.com/danielmiessler/Fabric/pull/1866) by [ksylvan](https://github.com/ksylvan): fix: write CLI and streaming errors to stderr
- Fix: write CLI and streaming errors to stderr
- Route CLI execution errors to standard error output
- Print Anthropic stream errors to stderr consistently
- Add os import to support stderr error writes
- Preserve help-output suppression and exit behavior
## v1.4.341 (2025-12-10)
### PR [#1860](https://github.com/danielmiessler/Fabric/pull/1860) by [ksylvan](https://github.com/ksylvan): fix: allow resetting required settings without validation errors
- Fix: allow resetting required settings without validation errors
- Update `Ask` to detect reset command and bypass validation
- Refactor `OnAnswer` to support new `isReset` parameter logic
- Invoke `ConfigureCustom` in `Setup` to avoid redundant re-validation
- Add unit tests ensuring required fields can be reset
## v1.4.340 (2025-12-08)
### PR [#1856](https://github.com/danielmiessler/Fabric/pull/1856) by [ksylvan](https://github.com/ksylvan): Add support for new ClaudeHaiku 4.5 models
- Add support for new ClaudeHaiku models in client
- Add `ModelClaudeHaiku4_5` to supported models
- Add `ModelClaudeHaiku4_5_20251001` to supported models
## v1.4.339 (2025-12-08)
### PR [#1855](https://github.com/danielmiessler/Fabric/pull/1855) by [ksylvan](https://github.com/ksylvan): feat: add image attachment support for Ollama vision models
- Add multi-modal image support to Ollama client
- Implement convertMessage to handle multi-content chat messages
- Add loadImageBytes to fetch images from URLs
- Support base64 data URLs for inline images
- Handle HTTP image URLs with context propagation
## v1.4.338 (2025-12-04)
### PR [#1852](https://github.com/danielmiessler/Fabric/pull/1852) by [ksylvan](https://github.com/ksylvan): Add Abacus vendor for ChatLLM models with static model list

View File

@@ -109,11 +109,11 @@ func ScanDirectory(rootDir string, maxDepth int, instructions string, ignoreList
}
// Create final data structure
var data []interface{}
var data []any
data = append(data, rootItem)
// Add report
reportItem := map[string]interface{}{
reportItem := map[string]any{
"type": "report",
"directories": dirCount,
"files": fileCount,
@@ -121,7 +121,7 @@ func ScanDirectory(rootDir string, maxDepth int, instructions string, ignoreList
data = append(data, reportItem)
// Add instructions
instructionsItem := map[string]interface{}{
instructionsItem := map[string]any{
"type": "instructions",
"name": "code_change_instructions",
"details": instructions,

View File

@@ -12,7 +12,7 @@ import (
func main() {
err := cli.Cli(version)
if err != nil && !flags.WroteHelp(err) {
fmt.Printf("%s\n", err)
fmt.Fprintf(os.Stderr, "%s\n", err)
os.Exit(1)
}
}

View File

@@ -1,3 +1,3 @@
package main
var version = "v1.4.338"
var version = "v1.4.349"

Binary file not shown.

View File

@@ -574,8 +574,8 @@ func (g *Generator) extractChanges(pr *github.PR) []string {
}
if len(changes) == 0 && pr.Body != "" {
lines := strings.Split(pr.Body, "\n")
for _, line := range lines {
lines := strings.SplitSeq(pr.Body, "\n")
for line := range lines {
line = strings.TrimSpace(line)
if strings.HasPrefix(line, "- ") || strings.HasPrefix(line, "* ") {
change := strings.TrimPrefix(strings.TrimPrefix(line, "- "), "* ")

View File

@@ -159,7 +159,7 @@ func (g *Generator) CreateNewChangelogEntry(version string) error {
for _, file := range files {
// Extract PR number from filename (e.g., "1640.txt" -> 1640)
filename := filepath.Base(file)
if prNumStr := strings.TrimSuffix(filename, ".txt"); prNumStr != filename {
if prNumStr, ok := strings.CutSuffix(filename, ".txt"); ok {
if prNum, err := strconv.Atoi(prNumStr); err == nil {
processedPRs[prNum] = true
prNumbers = append(prNumbers, prNum)

View File

@@ -333,7 +333,7 @@ func (c *Client) FetchAllMergedPRsGraphQL(since time.Time) ([]*PR, error) {
for {
// Prepare variables
variables := map[string]interface{}{
variables := map[string]any{
"owner": graphql.String(c.owner),
"repo": graphql.String(c.repo),
"after": (*graphql.String)(after),

View File

@@ -51,6 +51,29 @@ docs: update installation instructions
## Pull Request Process
### Pull Request Guidelines
**Keep pull requests focused and minimal.**
PRs that touch a large number of files (50+) without clear functional justification will likely be rejected without detailed review.
#### Why we enforce this
- **Reviewability**: Large PRs are effectively un-reviewable. Studies show reviewer effectiveness drops significantly after ~200-400 lines of code. A 93-file "cleanup" PR cannot receive meaningful review.
- **Git history**: Sweeping changes pollute `git blame`, making it harder to trace when and why functional changes were made.
- **Merge conflicts**: Large PRs increase the likelihood of conflicts with other contributors' work.
- **Risk**: More changed lines means more opportunities for subtle bugs, even in "safe" refactors.
#### What to do instead
If you have a large change in mind, break it into logical, independently-mergeable slices. For example:
- ✅ "Replace `interface{}` with `any` across codebase" (single mechanical change, easy to verify)
- ✅ "Migrate to `strings.CutPrefix` in `internal/cli`" (scoped to one package)
- ❌ "Modernize codebase with multiple idiom updates" (too broad, impossible to review)
For sweeping refactors or style changes, **open an issue first** to discuss the approach with maintainers before investing time in the work.
### Changelog Generation (REQUIRED)
After opening your PR, generate a changelog entry:

flake.lock (generated, 24 lines changed)
View File

@@ -5,11 +5,11 @@
"systems": "systems"
},
"locked": {
"lastModified": 1694529238,
"narHash": "sha256-zsNZZGTGnMOf9YpHKJqMSsa0dXbfmxeoJ7xHlrt+xmY=",
"lastModified": 1731533236,
"narHash": "sha256-l0KFg5HjrsfsO/JpG+r7fRrqm12kzFHyUHqHCVpMMbI=",
"owner": "numtide",
"repo": "flake-utils",
"rev": "ff7b65b44d01cf9ba6a71320833626af21126384",
"rev": "11707dc2f618dd54ca8739b309ec4fc024de578b",
"type": "github"
},
"original": {
@@ -26,11 +26,11 @@
]
},
"locked": {
"lastModified": 1742209644,
"narHash": "sha256-jMy1XqXqD0/tJprEbUmKilTkvbDY/C0ZGSsJJH4TNCE=",
"lastModified": 1763982521,
"narHash": "sha256-ur4QIAHwgFc0vXiaxn5No/FuZicxBr2p0gmT54xZkUQ=",
"owner": "nix-community",
"repo": "gomod2nix",
"rev": "8f3534eb8f6c5c3fce799376dc3b91bae6b11884",
"rev": "02e63a239d6eabd595db56852535992c898eba72",
"type": "github"
},
"original": {
@@ -41,11 +41,11 @@
},
"nixpkgs": {
"locked": {
"lastModified": 1745234285,
"narHash": "sha256-GfpyMzxwkfgRVN0cTGQSkTC0OHhEkv3Jf6Tcjm//qZ0=",
"lastModified": 1765472234,
"narHash": "sha256-9VvC20PJPsleGMewwcWYKGzDIyjckEz8uWmT0vCDYK0=",
"owner": "nixos",
"repo": "nixpkgs",
"rev": "c11863f1e964833214b767f4a369c6e6a7aba141",
"rev": "2fbfb1d73d239d2402a8fe03963e37aab15abe8b",
"type": "github"
},
"original": {
@@ -100,11 +100,11 @@
]
},
"locked": {
"lastModified": 1744961264,
"narHash": "sha256-aRmUh0AMwcbdjJHnytg1e5h5ECcaWtIFQa6d9gI85AI=",
"lastModified": 1762938485,
"narHash": "sha256-AlEObg0syDl+Spi4LsZIBrjw+snSVU4T8MOeuZJUJjM=",
"owner": "numtide",
"repo": "treefmt-nix",
"rev": "8d404a69efe76146368885110f29a2ca3700bee6",
"rev": "5b4ee75aeefd1e2d5a1cc43cf6ba65eba75e83e4",
"type": "github"
},
"original": {

View File

@@ -8,6 +8,7 @@ import (
"os"
"path/filepath"
"reflect"
"slices"
"strconv"
"strings"
@@ -115,7 +116,7 @@ func Init() (ret *Flags, err error) {
// Create mapping from flag names (both short and long) to yaml tag names
flagToYamlTag := make(map[string]string)
t := reflect.TypeOf(Flags{})
t := reflect.TypeFor[Flags]()
for i := 0; i < t.NumField(); i++ {
field := t.Field(i)
yamlTag := field.Tag.Get("yaml")
@@ -224,14 +225,14 @@ func Init() (ret *Flags, err error) {
}
func parseDebugLevel(args []string) int {
for i := 0; i < len(args); i++ {
for i := range args {
arg := args[i]
if arg == "--debug" && i+1 < len(args) {
if lvl, err := strconv.Atoi(args[i+1]); err == nil {
return lvl
}
} else if strings.HasPrefix(arg, "--debug=") {
if lvl, err := strconv.Atoi(strings.TrimPrefix(arg, "--debug=")); err == nil {
} else if after, ok := strings.CutPrefix(arg, "--debug="); ok {
if lvl, err := strconv.Atoi(after); err == nil {
return lvl
}
}
@@ -241,8 +242,8 @@ func parseDebugLevel(args []string) int {
func extractFlag(arg string) string {
var flag string
if strings.HasPrefix(arg, "--") {
flag = strings.TrimPrefix(arg, "--")
if after, ok := strings.CutPrefix(arg, "--"); ok {
flag = after
if i := strings.Index(flag, "="); i > 0 {
flag = flag[:i]
}
@@ -348,10 +349,8 @@ func validateImageFile(imagePath string) error {
ext := strings.ToLower(filepath.Ext(imagePath))
validExtensions := []string{".png", ".jpeg", ".jpg", ".webp"}
for _, validExt := range validExtensions {
if ext == validExt {
return nil // Valid extension found
}
if slices.Contains(validExtensions, ext) {
return nil // Valid extension found
}
return fmt.Errorf("%s", fmt.Sprintf(i18n.T("invalid_image_file_extension"), ext))
@@ -370,13 +369,7 @@ func validateImageParameters(imagePath, size, quality, background string, compre
// Validate size
if size != "" {
validSizes := []string{"1024x1024", "1536x1024", "1024x1536", "auto"}
valid := false
for _, validSize := range validSizes {
if size == validSize {
valid = true
break
}
}
valid := slices.Contains(validSizes, size)
if !valid {
return fmt.Errorf("%s", fmt.Sprintf(i18n.T("invalid_image_size"), size))
}
@@ -385,13 +378,7 @@ func validateImageParameters(imagePath, size, quality, background string, compre
// Validate quality
if quality != "" {
validQualities := []string{"low", "medium", "high", "auto"}
valid := false
for _, validQuality := range validQualities {
if quality == validQuality {
valid = true
break
}
}
valid := slices.Contains(validQualities, quality)
if !valid {
return fmt.Errorf("%s", fmt.Sprintf(i18n.T("invalid_image_quality"), quality))
}
@@ -400,13 +387,7 @@ func validateImageParameters(imagePath, size, quality, background string, compre
// Validate background
if background != "" {
validBackgrounds := []string{"opaque", "transparent"}
valid := false
for _, validBackground := range validBackgrounds {
if background == validBackground {
valid = true
break
}
}
valid := slices.Contains(validBackgrounds, background)
if !valid {
return fmt.Errorf("%s", fmt.Sprintf(i18n.T("invalid_image_background"), background))
}

View File

@@ -137,8 +137,7 @@ func (h *TranslatedHelpWriter) getTranslatedDescription(flagName string) string
// getOriginalDescription retrieves the original description from struct tags
func (h *TranslatedHelpWriter) getOriginalDescription(flagName string) string {
flags := &Flags{}
flagsType := reflect.TypeOf(flags).Elem()
flagsType := reflect.TypeFor[Flags]()
for i := 0; i < flagsType.NumField(); i++ {
field := flagsType.Field(i)
@@ -184,10 +183,10 @@ func detectLanguageFromArgs() string {
if i+1 < len(args) {
return args[i+1]
}
} else if strings.HasPrefix(arg, "--language=") {
return strings.TrimPrefix(arg, "--language=")
} else if strings.HasPrefix(arg, "-g=") {
return strings.TrimPrefix(arg, "-g=")
} else if after, ok := strings.CutPrefix(arg, "--language="); ok {
return after
} else if after, ok := strings.CutPrefix(arg, "-g="); ok {
return after
} else if runtime.GOOS == "windows" && strings.HasPrefix(arg, "/g:") {
return strings.TrimPrefix(arg, "/g:")
} else if runtime.GOOS == "windows" && strings.HasPrefix(arg, "/g=") {
@@ -218,8 +217,7 @@ func detectLanguageFromEnv() string {
// writeAllFlags writes all flags with translated descriptions
func (h *TranslatedHelpWriter) writeAllFlags() {
// Use direct reflection on the Flags struct to get all flag definitions
flags := &Flags{}
flagsType := reflect.TypeOf(flags).Elem()
flagsType := reflect.TypeFor[Flags]()
for i := 0; i < flagsType.NumField(); i++ {
field := flagsType.Field(i)
@@ -274,10 +272,7 @@ func (h *TranslatedHelpWriter) writeAllFlags() {
// Pad to align descriptions
flagStr := flagLine.String()
padding := 34 - len(flagStr)
if padding < 2 {
padding = 2
}
padding := max(34-len(flagStr), 2)
fmt.Fprintf(h.writer, "%s%s%s", flagStr, strings.Repeat(" ", padding), description)

View File

@@ -4,6 +4,7 @@ import (
"fmt"
"os"
"path/filepath"
"slices"
"strings"
"github.com/atotto/clipboard"
@@ -66,10 +67,5 @@ func CreateAudioOutputFile(audioData []byte, fileName string) (err error) {
func IsAudioFormat(fileName string) bool {
ext := strings.ToLower(filepath.Ext(fileName))
audioExts := []string{".wav", ".mp3", ".m4a", ".aac", ".ogg", ".flac"}
for _, audioExt := range audioExts {
if ext == audioExt {
return true
}
}
return false
return slices.Contains(audioExts, ext)
}

View File

@@ -5,6 +5,7 @@ import (
"fmt"
"os"
"path/filepath"
"slices"
"strings"
)
@@ -146,14 +147,7 @@ func fixInvalidEscapes(jsonStr string) string {
// Check for escape sequences only inside strings
if inQuotes && ch == '\\' && i+1 < len(jsonStr) {
nextChar := jsonStr[i+1]
isValid := false
for _, validEscape := range validEscapes {
if nextChar == validEscape {
isValid = true
break
}
}
isValid := slices.Contains(validEscapes, nextChar)
if !isValid {
// Invalid escape sequence - add an extra backslash

View File

@@ -51,7 +51,7 @@ func LevelFromInt(i int) Level {
}
// Debug writes a debug message if the global level permits.
func Debug(l Level, format string, a ...interface{}) {
func Debug(l Level, format string, a ...any) {
mu.RLock()
current := level
w := output
@@ -63,7 +63,7 @@ func Debug(l Level, format string, a ...interface{}) {
// Log writes a message unconditionally to stderr.
// This is for important messages that should always be shown regardless of debug level.
func Log(format string, a ...interface{}) {
func Log(format string, a ...any) {
mu.RLock()
w := output
mu.RUnlock()

View File

@@ -4,6 +4,7 @@ import (
"context"
"fmt"
"net/http"
"os"
"strconv"
"strings"
@@ -52,6 +53,8 @@ func NewClient() (ret *Client) {
string(anthropic.ModelClaudeSonnet4_5_20250929),
string(anthropic.ModelClaudeOpus4_5_20251101),
string(anthropic.ModelClaudeOpus4_5),
string(anthropic.ModelClaudeHaiku4_5),
string(anthropic.ModelClaudeHaiku4_5_20251001),
}
ret.modelBetas = map[string][]string{
@@ -214,7 +217,7 @@ func (an *Client) SendStream(
}
if stream.Err() != nil {
fmt.Printf("Messages stream error: %v\n", stream.Err())
fmt.Fprintf(os.Stderr, "Messages stream error: %v\n", stream.Err())
}
close(channel)
return

View File

@@ -52,7 +52,7 @@ func createExpiredToken(accessToken, refreshToken string) *util.OAuthToken {
}
// mockTokenServer creates a mock OAuth token server for testing
func mockTokenServer(_ *testing.T, responses map[string]interface{}) *httptest.Server {
func mockTokenServer(_ *testing.T, responses map[string]any) *httptest.Server {
return httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
if r.URL.Path != "/v1/oauth/token" {
http.NotFound(w, r)
@@ -80,7 +80,7 @@ func mockTokenServer(_ *testing.T, responses map[string]interface{}) *httptest.S
w.Header().Set("Content-Type", "application/json")
if errorResp, ok := response.(map[string]interface{}); ok && errorResp["error"] != nil {
if errorResp, ok := response.(map[string]any); ok && errorResp["error"] != nil {
w.WriteHeader(http.StatusBadRequest)
}
@@ -114,8 +114,8 @@ func TestGeneratePKCE(t *testing.T) {
func TestExchangeToken_Success(t *testing.T) {
// Create mock server
server := mockTokenServer(t, map[string]interface{}{
"authorization_code": map[string]interface{}{
server := mockTokenServer(t, map[string]any{
"authorization_code": map[string]any{
"access_token": "test_access_token",
"refresh_token": "test_refresh_token",
"expires_in": 3600,
@@ -161,8 +161,8 @@ func TestRefreshToken_Success(t *testing.T) {
os.WriteFile(tokenPath, data, 0600)
// Create mock server for refresh
server := mockTokenServer(t, map[string]interface{}{
"refresh_token": map[string]interface{}{
server := mockTokenServer(t, map[string]any{
"refresh_token": map[string]any{
"access_token": "new_access_token",
"refresh_token": "new_refresh_token",
"expires_in": 3600,
@@ -416,7 +416,7 @@ func TestGetValidTokenWithValidToken(t *testing.T) {
// Benchmark tests
func BenchmarkGeneratePKCE(b *testing.B) {
for i := 0; i < b.N; i++ {
for b.Loop() {
_, _, err := generatePKCE()
if err != nil {
b.Fatal(err)
@@ -427,8 +427,7 @@ func BenchmarkGeneratePKCE(b *testing.B) {
func BenchmarkTokenIsExpired(b *testing.B) {
token := createTestToken("access", "refresh", 3600)
b.ResetTimer()
for i := 0; i < b.N; i++ {
for b.Loop() {
token.IsExpired(5)
}
}

View File

@@ -3,6 +3,7 @@ package gemini
import (
"fmt"
"sort"
"strings"
)
// GeminiVoice represents a Gemini TTS voice with its characteristics
@@ -126,16 +127,17 @@ func ListGeminiVoices(shellCompleteMode bool) string {
if shellCompleteMode {
// For shell completion, just return voice names
names := GetGeminiVoiceNames()
result := ""
var result strings.Builder
for _, name := range names {
result += name + "\n"
result.WriteString(name + "\n")
}
return result
return result.String()
}
// For human-readable output
voices := GetGeminiVoices()
result := "Available Gemini Text-to-Speech voices:\n\n"
var result strings.Builder
result.WriteString("Available Gemini Text-to-Speech voices:\n\n")
// Group by characteristics for better readability
groups := map[string][]GeminiVoice{
@@ -186,22 +188,22 @@ func ListGeminiVoices(shellCompleteMode bool) string {
// Output grouped voices
for groupName, groupVoices := range groups {
if len(groupVoices) > 0 {
result += fmt.Sprintf("%s:\n", groupName)
result.WriteString(fmt.Sprintf("%s:\n", groupName))
for _, voice := range groupVoices {
defaultStr := ""
if voice.Name == "Kore" {
defaultStr = " (default)"
}
result += fmt.Sprintf(" %-15s - %s%s\n", voice.Name, voice.Description, defaultStr)
result.WriteString(fmt.Sprintf(" %-15s - %s%s\n", voice.Name, voice.Description, defaultStr))
}
result += "\n"
result.WriteString("\n")
}
}
result += "Use --voice <voice_name> to select a specific voice.\n"
result += "Example: fabric --voice Charon -m gemini-2.5-flash-preview-tts -o output.wav \"Hello world\"\n"
result.WriteString("Use --voice <voice_name> to select a specific voice.\n")
result.WriteString("Example: fabric --voice Charon -m gemini-2.5-flash-preview-tts -o output.wav \"Hello world\"\n")
return result
return result.String()
}
// NOTE: This implementation maintains a curated list based on official Google documentation.

View File

@@ -90,7 +90,7 @@ func (c *Client) ListModels() ([]string, error) {
func (c *Client) SendStream(msgs []*chat.ChatCompletionMessage, opts *domain.ChatOptions, channel chan string) (err error) {
url := fmt.Sprintf("%s/chat/completions", c.ApiUrl.Value)
payload := map[string]interface{}{
payload := map[string]any{
"messages": msgs,
"model": opts.Model,
"stream": true, // Enable streaming
@@ -140,27 +140,27 @@ func (c *Client) SendStream(msgs []*chat.ChatCompletionMessage, opts *domain.Cha
continue
}
if bytes.HasPrefix(line, []byte("data: ")) {
line = bytes.TrimPrefix(line, []byte("data: "))
if after, ok := bytes.CutPrefix(line, []byte("data: ")); ok {
line = after
}
if string(line) == "[DONE]" {
break
}
var result map[string]interface{}
var result map[string]any
if err = json.Unmarshal(line, &result); err != nil {
continue
}
var choices []interface{}
var choices []any
var ok bool
if choices, ok = result["choices"].([]interface{}); !ok || len(choices) == 0 {
if choices, ok = result["choices"].([]any); !ok || len(choices) == 0 {
continue
}
var delta map[string]interface{}
if delta, ok = choices[0].(map[string]interface{})["delta"].(map[string]interface{}); !ok {
var delta map[string]any
if delta, ok = choices[0].(map[string]any)["delta"].(map[string]any); !ok {
continue
}
@@ -176,7 +176,7 @@ func (c *Client) SendStream(msgs []*chat.ChatCompletionMessage, opts *domain.Cha
func (c *Client) Send(ctx context.Context, msgs []*chat.ChatCompletionMessage, opts *domain.ChatOptions) (content string, err error) {
url := fmt.Sprintf("%s/chat/completions", c.ApiUrl.Value)
payload := map[string]interface{}{
payload := map[string]any{
"messages": msgs,
"model": opts.Model,
// Add other options from opts if supported by LM Studio
@@ -208,21 +208,21 @@ func (c *Client) Send(ctx context.Context, msgs []*chat.ChatCompletionMessage, o
return
}
var result map[string]interface{}
var result map[string]any
if err = json.NewDecoder(resp.Body).Decode(&result); err != nil {
err = fmt.Errorf("failed to decode response: %w", err)
return
}
var choices []interface{}
var choices []any
var ok bool
if choices, ok = result["choices"].([]interface{}); !ok || len(choices) == 0 {
if choices, ok = result["choices"].([]any); !ok || len(choices) == 0 {
err = fmt.Errorf("invalid response format: missing or empty choices")
return
}
var message map[string]interface{}
if message, ok = choices[0].(map[string]interface{})["message"].(map[string]interface{}); !ok {
var message map[string]any
if message, ok = choices[0].(map[string]any)["message"].(map[string]any); !ok {
err = fmt.Errorf("invalid response format: missing message in first choice")
return
}
@@ -238,7 +238,7 @@ func (c *Client) Send(ctx context.Context, msgs []*chat.ChatCompletionMessage, o
func (c *Client) Complete(ctx context.Context, prompt string, opts *domain.ChatOptions) (text string, err error) {
url := fmt.Sprintf("%s/completions", c.ApiUrl.Value)
payload := map[string]interface{}{
payload := map[string]any{
"prompt": prompt,
"model": opts.Model,
// Add other options from opts if supported by LM Studio
@@ -270,20 +270,20 @@ func (c *Client) Complete(ctx context.Context, prompt string, opts *domain.ChatO
return
}
var result map[string]interface{}
var result map[string]any
if err = json.NewDecoder(resp.Body).Decode(&result); err != nil {
err = fmt.Errorf("failed to decode response: %w", err)
return
}
var choices []interface{}
var choices []any
var ok bool
if choices, ok = result["choices"].([]interface{}); !ok || len(choices) == 0 {
if choices, ok = result["choices"].([]any); !ok || len(choices) == 0 {
err = fmt.Errorf("invalid response format: missing or empty choices")
return
}
if text, ok = choices[0].(map[string]interface{})["text"].(string); !ok {
if text, ok = choices[0].(map[string]any)["text"].(string); !ok {
err = fmt.Errorf("invalid response format: missing or non-string text in first choice")
return
}
@@ -294,7 +294,7 @@ func (c *Client) Complete(ctx context.Context, prompt string, opts *domain.ChatO
func (c *Client) GetEmbeddings(ctx context.Context, input string, opts *domain.ChatOptions) (embeddings []float64, err error) {
url := fmt.Sprintf("%s/embeddings", c.ApiUrl.Value)
payload := map[string]interface{}{
payload := map[string]any{
"input": input,
"model": opts.Model,
// Add other options from opts if supported by LM Studio

View File

@@ -2,7 +2,9 @@ package ollama
import (
"context"
"encoding/base64"
"fmt"
"io"
"net/http"
"net/url"
"os"
@@ -10,11 +12,10 @@ import (
"time"
"github.com/danielmiessler/fabric/internal/chat"
ollamaapi "github.com/ollama/ollama/api"
"github.com/samber/lo"
"github.com/danielmiessler/fabric/internal/domain"
debuglog "github.com/danielmiessler/fabric/internal/log"
"github.com/danielmiessler/fabric/internal/plugins"
ollamaapi "github.com/ollama/ollama/api"
)
const defaultBaseUrl = "http://localhost:11434"
@@ -48,6 +49,7 @@ type Client struct {
apiUrl *url.URL
client *ollamaapi.Client
ApiHttpTimeout *plugins.SetupQuestion
httpClient *http.Client
}
type transport_sec struct {
@@ -84,7 +86,8 @@ func (o *Client) configure() (err error) {
}
}
o.client = ollamaapi.NewClient(o.apiUrl, &http.Client{Timeout: timeout, Transport: &transport_sec{underlyingTransport: http.DefaultTransport, ApiKey: o.ApiKey}})
o.httpClient = &http.Client{Timeout: timeout, Transport: &transport_sec{underlyingTransport: http.DefaultTransport, ApiKey: o.ApiKey}}
o.client = ollamaapi.NewClient(o.apiUrl, o.httpClient)
return
}
@@ -104,15 +107,18 @@ func (o *Client) ListModels() (ret []string, err error) {
}
func (o *Client) SendStream(msgs []*chat.ChatCompletionMessage, opts *domain.ChatOptions, channel chan string) (err error) {
req := o.createChatRequest(msgs, opts)
ctx := context.Background()
var req ollamaapi.ChatRequest
if req, err = o.createChatRequest(ctx, msgs, opts); err != nil {
return
}
respFunc := func(resp ollamaapi.ChatResponse) (streamErr error) {
channel <- resp.Message.Content
return
}
ctx := context.Background()
if err = o.client.Chat(ctx, &req, respFunc); err != nil {
return
}
@@ -124,7 +130,10 @@ func (o *Client) SendStream(msgs []*chat.ChatCompletionMessage, opts *domain.Cha
func (o *Client) Send(ctx context.Context, msgs []*chat.ChatCompletionMessage, opts *domain.ChatOptions) (ret string, err error) {
bf := false
req := o.createChatRequest(msgs, opts)
var req ollamaapi.ChatRequest
if req, err = o.createChatRequest(ctx, msgs, opts); err != nil {
return
}
req.Stream = &bf
respFunc := func(resp ollamaapi.ChatResponse) (streamErr error) {
@@ -133,17 +142,20 @@ func (o *Client) Send(ctx context.Context, msgs []*chat.ChatCompletionMessage, o
}
if err = o.client.Chat(ctx, &req, respFunc); err != nil {
fmt.Printf("FRED --> %s\n", err)
debuglog.Debug(debuglog.Basic, "Ollama chat request failed: %v\n", err)
}
return
}
func (o *Client) createChatRequest(msgs []*chat.ChatCompletionMessage, opts *domain.ChatOptions) (ret ollamaapi.ChatRequest) {
messages := lo.Map(msgs, func(message *chat.ChatCompletionMessage, _ int) (ret ollamaapi.Message) {
return ollamaapi.Message{Role: message.Role, Content: message.Content}
})
func (o *Client) createChatRequest(ctx context.Context, msgs []*chat.ChatCompletionMessage, opts *domain.ChatOptions) (ret ollamaapi.ChatRequest, err error) {
messages := make([]ollamaapi.Message, len(msgs))
for i, message := range msgs {
if messages[i], err = o.convertMessage(ctx, message); err != nil {
return
}
}
options := map[string]interface{}{
options := map[string]any{
"temperature": opts.Temperature,
"presence_penalty": opts.PresencePenalty,
"frequency_penalty": opts.FrequencyPenalty,
@@ -162,6 +174,77 @@ func (o *Client) createChatRequest(msgs []*chat.ChatCompletionMessage, opts *dom
return
}
func (o *Client) convertMessage(ctx context.Context, message *chat.ChatCompletionMessage) (ret ollamaapi.Message, err error) {
ret = ollamaapi.Message{Role: message.Role, Content: message.Content}
if len(message.MultiContent) == 0 {
return
}
// Pre-allocate with capacity hint
textParts := make([]string, 0, len(message.MultiContent))
if strings.TrimSpace(ret.Content) != "" {
textParts = append(textParts, strings.TrimSpace(ret.Content))
}
for _, part := range message.MultiContent {
switch part.Type {
case chat.ChatMessagePartTypeText:
if trimmed := strings.TrimSpace(part.Text); trimmed != "" {
textParts = append(textParts, trimmed)
}
case chat.ChatMessagePartTypeImageURL:
// Nil guard
if part.ImageURL == nil || part.ImageURL.URL == "" {
continue
}
var img []byte
if img, err = o.loadImageBytes(ctx, part.ImageURL.URL); err != nil {
return
}
ret.Images = append(ret.Images, ollamaapi.ImageData(img))
}
}
ret.Content = strings.Join(textParts, "\n")
return
}
func (o *Client) loadImageBytes(ctx context.Context, imageURL string) (ret []byte, err error) {
// Handle data URLs (base64 encoded)
if strings.HasPrefix(imageURL, "data:") {
parts := strings.SplitN(imageURL, ",", 2)
if len(parts) != 2 {
err = fmt.Errorf("invalid data URL format")
return
}
if ret, err = base64.StdEncoding.DecodeString(parts[1]); err != nil {
err = fmt.Errorf("failed to decode data URL: %w", err)
}
return
}
// Handle HTTP URLs with context
var req *http.Request
if req, err = http.NewRequestWithContext(ctx, http.MethodGet, imageURL, nil); err != nil {
return
}
var resp *http.Response
if resp, err = o.httpClient.Do(req); err != nil {
return
}
defer resp.Body.Close()
if resp.StatusCode >= http.StatusBadRequest {
err = fmt.Errorf("failed to fetch image %s: %s", imageURL, resp.Status)
return
}
ret, err = io.ReadAll(resp.Body)
return
}
func (o *Client) NeedsRawMode(modelName string) bool {
ollamaSearchStrings := []string{
"llama3",

View File

@@ -8,6 +8,7 @@ import (
"fmt"
"os"
"path/filepath"
"slices"
"strings"
"github.com/danielmiessler/fabric/internal/domain"
@@ -31,12 +32,7 @@ var ImageGenerationSupportedModels = []string{
// supportsImageGeneration checks if the given model supports the image_generation tool
func supportsImageGeneration(model string) bool {
for _, supportedModel := range ImageGenerationSupportedModels {
if model == supportedModel {
return true
}
}
return false
return slices.Contains(ImageGenerationSupportedModels, model)
}
// getOutputFormatFromExtension determines the API output format based on file extension

View File

@@ -345,7 +345,7 @@ func TestAddImageGenerationToolWithUserParameters(t *testing.T) {
tests := []struct {
name string
opts *domain.ChatOptions
expected map[string]interface{}
expected map[string]any
}{
{
name: "All parameters specified",
@@ -356,7 +356,7 @@ func TestAddImageGenerationToolWithUserParameters(t *testing.T) {
ImageBackground: "transparent",
ImageCompression: 0, // Not applicable for PNG
},
expected: map[string]interface{}{
expected: map[string]any{
"size": "1536x1024",
"quality": "high",
"background": "transparent",
@@ -372,7 +372,7 @@ func TestAddImageGenerationToolWithUserParameters(t *testing.T) {
ImageBackground: "opaque",
ImageCompression: 75,
},
expected: map[string]interface{}{
expected: map[string]any{
"size": "1024x1024",
"quality": "medium",
"background": "opaque",
@@ -386,7 +386,7 @@ func TestAddImageGenerationToolWithUserParameters(t *testing.T) {
ImageFile: "/tmp/test.webp",
ImageQuality: "low",
},
expected: map[string]interface{}{
expected: map[string]any{
"quality": "low",
"output_format": "webp",
},
@@ -396,7 +396,7 @@ func TestAddImageGenerationToolWithUserParameters(t *testing.T) {
opts: &domain.ChatOptions{
ImageFile: "/tmp/test.png",
},
expected: map[string]interface{}{
expected: map[string]any{
"output_format": "png",
},
},

View File

@@ -16,7 +16,7 @@ func TestBuildResponseRequestWithMaxTokens(t *testing.T) {
var msgs []*chat.ChatCompletionMessage
for i := 0; i < 2; i++ {
for range 2 {
msgs = append(msgs, &chat.ChatCompletionMessage{
Role: "User",
Content: "My msg",
@@ -42,7 +42,7 @@ func TestBuildResponseRequestNoMaxTokens(t *testing.T) {
var msgs []*chat.ChatCompletionMessage
for i := 0; i < 2; i++ {
for range 2 {
msgs = append(msgs, &chat.ChatCompletionMessage{
Role: "User",
Content: "My msg",

View File

@@ -4,6 +4,7 @@ import (
"context"
"fmt"
"os"
"strings"
"sync"
"github.com/danielmiessler/fabric/internal/domain"
@@ -107,18 +108,19 @@ func (c *Client) Send(ctx context.Context, msgs []*chat.ChatCompletionMessage, o
return "", fmt.Errorf("perplexity API request failed: %w", err) // Corrected capitalization
}
content := resp.GetLastContent()
var content strings.Builder
content.WriteString(resp.GetLastContent())
// Append citations if available
citations := resp.GetCitations()
if len(citations) > 0 {
content += "\n\n# CITATIONS\n\n"
content.WriteString("\n\n# CITATIONS\n\n")
for i, citation := range citations {
content += fmt.Sprintf("- [%d] %s\n", i+1, citation)
content.WriteString(fmt.Sprintf("- [%d] %s\n", i+1, citation))
}
}
return content, nil
return content.String(), nil
}
func (c *Client) SendStream(msgs []*chat.ChatCompletionMessage, opts *domain.ChatOptions, channel chan string) error {

View File

@@ -134,7 +134,7 @@ func (o *StorageEntity) buildFileName(name string) string {
return fmt.Sprintf("%s%v", name, o.FileExtension)
}
func (o *StorageEntity) SaveAsJson(name string, item interface{}) (err error) {
func (o *StorageEntity) SaveAsJson(name string, item any) (err error) {
var jsonString []byte
if jsonString, err = json.Marshal(item); err == nil {
err = o.Save(name, jsonString)
@@ -145,7 +145,7 @@ func (o *StorageEntity) SaveAsJson(name string, item interface{}) (err error) {
return err
}
func (o *StorageEntity) LoadAsJson(name string, item interface{}) (err error) {
func (o *StorageEntity) LoadAsJson(name string, item any) (err error) {
var content []byte
if content, err = o.Load(name); err != nil {
return

View File

@@ -92,7 +92,11 @@ func (o *PluginBase) Setup() (err error) {
return
}
err = o.Configure()
// After Setup, run ConfigureCustom if present, but skip re-validation
// since Ask() already validated user input (or allowed explicit reset)
if o.ConfigureCustom != nil {
err = o.ConfigureCustom()
}
return
}
@@ -198,16 +202,21 @@ func (o *SetupQuestion) Ask(label string) (err error) {
var answer string
fmt.Scanln(&answer)
answer = strings.TrimRight(answer, "\n")
isReset := strings.ToLower(answer) == AnswerReset
if answer == "" {
answer = o.Value
} else if strings.ToLower(answer) == AnswerReset {
} else if isReset {
answer = ""
}
err = o.OnAnswer(answer)
err = o.OnAnswerWithReset(answer, isReset)
return
}
func (o *SetupQuestion) OnAnswer(answer string) (err error) {
return o.OnAnswerWithReset(answer, false)
}
func (o *SetupQuestion) OnAnswerWithReset(answer string, isReset bool) (err error) {
if o.Type == SettingTypeBool {
if answer == "" {
o.Value = ""
@@ -226,6 +235,11 @@ func (o *SetupQuestion) OnAnswer(answer string) (err error) {
return
}
}
// Skip validation when explicitly resetting a value - the user intentionally
// wants to clear the value even if it's required
if isReset {
return nil
}
err = o.IsValidErr()
return
}

View File

@@ -116,6 +116,91 @@ func TestSetupQuestion_Ask(t *testing.T) {
assert.Equal(t, "user_value", setting.Value)
}
func TestSetupQuestion_Ask_Reset(t *testing.T) {
// Test that resetting a required field doesn't produce an error
setting := &Setting{
EnvVariable: "TEST_RESET_SETTING",
Value: "existing_value",
Required: true,
}
question := &SetupQuestion{
Setting: setting,
Question: "Enter test setting:",
}
input := "reset\n"
fmtInput := captureInput(input)
defer fmtInput()
err := question.Ask("TestConfigurable")
// Should NOT return an error even though the field is required
assert.NoError(t, err)
// Value should be cleared
assert.Equal(t, "", setting.Value)
}
func TestSetupQuestion_OnAnswerWithReset(t *testing.T) {
tests := []struct {
name string
setting *Setting
answer string
isReset bool
expectError bool
expectValue string
}{
{
name: "reset required field should not error",
setting: &Setting{
EnvVariable: "TEST_SETTING",
Value: "old_value",
Required: true,
},
answer: "",
isReset: true,
expectError: false,
expectValue: "",
},
{
name: "empty answer on required field should error",
setting: &Setting{
EnvVariable: "TEST_SETTING",
Value: "",
Required: true,
},
answer: "",
isReset: false,
expectError: true,
expectValue: "",
},
{
name: "valid answer on required field should not error",
setting: &Setting{
EnvVariable: "TEST_SETTING",
Value: "",
Required: true,
},
answer: "new_value",
isReset: false,
expectError: false,
expectValue: "new_value",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
question := &SetupQuestion{
Setting: tt.setting,
Question: "Test question",
}
err := question.OnAnswerWithReset(tt.answer, tt.isReset)
if tt.expectError {
assert.Error(t, err)
} else {
assert.NoError(t, err)
}
assert.Equal(t, tt.expectValue, tt.setting.Value)
})
}
}
func TestSettings_IsConfigured(t *testing.T) {
settings := Settings{
{EnvVariable: "TEST_SETTING1", Value: "value1", Required: true},

View File

@@ -187,9 +187,10 @@ esac`
executor := NewExtensionExecutor(registry)
// Helper function to create and register extension
createExtension := func(name, opName, cmdTemplate string, config map[string]interface{}) error {
createExtension := func(name, opName, cmdTemplate string, config map[string]any) error {
configPath := filepath.Join(tmpDir, name+".yaml")
configContent := `name: ` + name + `
var configContent strings.Builder
configContent.WriteString(`name: ` + name + `
executable: ` + testScript + `
type: executable
timeout: 30s
@@ -199,14 +200,14 @@ operations:
config:
output:
method: file
file_config:`
file_config:`)
// Add config options
for k, v := range config {
configContent += "\n " + k + ": " + strings.TrimSpace(v.(string))
configContent.WriteString("\n " + k + ": " + strings.TrimSpace(v.(string)))
}
if err := os.WriteFile(configPath, []byte(configContent), 0644); err != nil {
if err := os.WriteFile(configPath, []byte(configContent.String()), 0644); err != nil {
return err
}
@@ -216,7 +217,7 @@ config:
// Test basic fixed file output
t.Run("BasicFixedFile", func(t *testing.T) {
outputFile := filepath.Join(tmpDir, "output.txt")
config := map[string]interface{}{
config := map[string]any{
"output_file": `"output.txt"`,
"work_dir": `"` + tmpDir + `"`,
"cleanup": "true",
@@ -241,7 +242,7 @@ config:
// Test no work_dir specified
t.Run("NoWorkDir", func(t *testing.T) {
config := map[string]interface{}{
config := map[string]any{
"output_file": `"direct-output.txt"`,
"cleanup": "true",
}
@@ -263,7 +264,7 @@ config:
outputFile := filepath.Join(tmpDir, "cleanup-test.txt")
// Test with cleanup enabled
config := map[string]interface{}{
config := map[string]any{
"output_file": `"cleanup-test.txt"`,
"work_dir": `"` + tmpDir + `"`,
"cleanup": "true",
@@ -307,7 +308,7 @@ config:
// Test error cases
t.Run("ErrorCases", func(t *testing.T) {
outputFile := filepath.Join(tmpDir, "error-test.txt")
config := map[string]interface{}{
config := map[string]any{
"output_file": `"error-test.txt"`,
"work_dir": `"` + tmpDir + `"`,
"cleanup": "true",
@@ -341,7 +342,7 @@ config:
// Test with missing output_file
t.Run("MissingOutputFile", func(t *testing.T) {
config := map[string]interface{}{
config := map[string]any{
"work_dir": `"` + tmpDir + `"`,
"cleanup": "true",
}

View File

@@ -30,7 +30,7 @@ type ExtensionDefinition struct {
Operations map[string]OperationConfig `yaml:"operations"`
// Additional config
Config map[string]interface{} `yaml:"config"`
Config map[string]any `yaml:"config"`
}
type OperationConfig struct {
@@ -53,7 +53,7 @@ type ExtensionRegistry struct {
// Helper methods for Config access
func (e *ExtensionDefinition) GetOutputMethod() string {
if output, ok := e.Config["output"].(map[string]interface{}); ok {
if output, ok := e.Config["output"].(map[string]any); ok {
if method, ok := output["method"].(string); ok {
return method
}
@@ -61,9 +61,9 @@ func (e *ExtensionDefinition) GetOutputMethod() string {
return "stdout" // default to stdout if not specified
}
func (e *ExtensionDefinition) GetFileConfig() map[string]interface{} {
if output, ok := e.Config["output"].(map[string]interface{}); ok {
if fileConfig, ok := output["file_config"].(map[string]interface{}); ok {
func (e *ExtensionDefinition) GetFileConfig() map[string]any {
if output, ok := e.Config["output"].(map[string]any); ok {
if fileConfig, ok := output["file_config"].(map[string]any); ok {
return fileConfig
}
}

View File

@@ -33,7 +33,7 @@ func init() {
var pluginPattern = regexp.MustCompile(`\{\{plugin:([^:]+):([^:]+)(?::([^}]+))?\}\}`)
var extensionPattern = regexp.MustCompile(`\{\{ext:([^:]+):([^:]+)(?::([^}]+))?\}\}`)
func debugf(format string, a ...interface{}) {
func debugf(format string, a ...any) {
debuglog.Debug(debuglog.Trace, format, a...)
}

View File

@@ -16,7 +16,7 @@ func toTitle(s string) string {
lower := strings.ToLower(s)
runes := []rune(lower)
for i := 0; i < len(runes); i++ {
for i := range runes {
// Capitalize if previous char is non-letter AND
// (we're at the end OR next char is not space)
if i == 0 || !unicode.IsLetter(runes[i-1]) {

View File

@@ -24,7 +24,7 @@ func (h *ModelsHandler) GetModelNames(c *gin.Context) {
return
}
response := make(map[string]interface{})
response := make(map[string]any)
vendors := make(map[string][]string)
for _, groupItems := range vendorsModels.GroupsItems {

View File

@@ -102,7 +102,7 @@ func ServeOllama(registry *core.PluginRegistry, address string, version string)
// Ollama Endpoints
r.GET("/api/tags", typeConversion.ollamaTags)
r.GET("/api/version", func(c *gin.Context) {
c.Data(200, "application/json", []byte(fmt.Sprintf("{\"%s\"}", version)))
c.Data(200, "application/json", fmt.Appendf(nil, "{\"%s\"}", version))
})
r.POST("/api/chat", typeConversion.ollamaChat)
@@ -224,7 +224,7 @@ func (f APIConvert) ollamaChat(c *gin.Context) {
c.JSON(http.StatusInternalServerError, gin.H{"error": "testing endpoint"})
return
}
for _, word := range strings.Split(fabricResponse.Content, " ") {
for word := range strings.SplitSeq(fabricResponse.Content, " ") {
forwardedResponse = OllamaResponse{
Model: "",
CreatedAt: "",

View File

@@ -1,6 +1,7 @@
package restapi
import (
"maps"
"net/http"
"github.com/danielmiessler/fabric/internal/plugins/db/fsdb"
@@ -74,9 +75,7 @@ func (h *PatternsHandler) ApplyPattern(c *gin.Context) {
variables[key] = values[0]
}
}
for key, value := range request.Variables {
variables[key] = value
}
maps.Copy(variables, request.Variables)
pattern, err := h.patterns.GetApplyVariables(name, variables, request.Input)
if err != nil {

View File

@@ -1 +1 @@
"1.4.338"
"1.4.349"

View File

@@ -26,9 +26,9 @@
"eslint-plugin-svelte": "^2.46.1",
"lucide-svelte": "^0.309.0",
"mdsvex": "^0.11.2",
"patch-package": "^8.0.0",
"patch-package": "^8.0.1",
"pdf-to-markdown-core": "github:jzillmann/pdf-to-markdown#modularize",
"pdfjs-dist": "^4.2.67",
"pdfjs-dist": "^5.4.449",
"postcss": "^8.5.3",
"postcss-load-config": "^6.0.1",
"rehype-autolink-headings": "^7.1.0",

web/pnpm-lock.yaml (generated, 565 lines changed)

File diff suppressed because it is too large.

View File

@@ -1,239 +1,284 @@
import { get } from "svelte/store";
import type {
ChatRequest,
StreamResponse,
ChatError as IChatError,
ChatPrompt
} from '$lib/interfaces/chat-interface';
import { get } from 'svelte/store';
import { modelConfig } from '$lib/store/model-store';
import { systemPrompt, selectedPatternName, patternVariables } from '$lib/store/pattern-store';
import { chatConfig } from '$lib/store/chat-config';
import { messageStore } from '$lib/store/chat-store';
import { languageStore } from '$lib/store/language-store';
import { selectedStrategy } from '$lib/store/strategy-store';
ChatPrompt,
ChatRequest,
ChatError as IChatError,
StreamResponse,
} from "$lib/interfaces/chat-interface";
import { chatConfig } from "$lib/store/chat-config";
import { languageStore } from "$lib/store/language-store";
import { modelConfig } from "$lib/store/model-store";
import {
patternVariables,
selectedPatternName,
systemPrompt,
} from "$lib/store/pattern-store";
import { selectedStrategy } from "$lib/store/strategy-store";
class LanguageValidator {
constructor(private targetLanguage: string) {}
constructor(private targetLanguage: string) {}
enforceLanguage(content: string): string {
if (this.targetLanguage === 'en') return content;
return `[Language: ${this.targetLanguage}]\n${content}`;
}
enforceLanguage(content: string): string {
if (this.targetLanguage === "en") return content;
return `[Language: ${this.targetLanguage}]\n${content}`;
}
}
export class ChatError extends Error implements IChatError {
constructor(
message: string,
public readonly code: string = 'CHAT_ERROR',
public readonly details?: unknown
) {
super(message);
this.name = 'ChatError';
}
constructor(
message: string,
public readonly code: string = "CHAT_ERROR",
public readonly details?: unknown,
) {
super(message);
this.name = "ChatError";
}
}
export class ChatService {
private validator: LanguageValidator;
private validator: LanguageValidator;
constructor() {
this.validator = new LanguageValidator(get(languageStore));
}
constructor() {
this.validator = new LanguageValidator(get(languageStore));
}
private async fetchStream(request: ChatRequest): Promise<ReadableStream<StreamResponse>> {
try {
console.log('\n=== ChatService Request Start ===');
console.log('1. Request details:', {
language: get(languageStore),
pattern: get(selectedPatternName),
promptCount: request.prompts?.length,
messageCount: request.messages?.length
});
// NEW: Log the full payload before sending to backend
console.log('Final ChatRequest payload:', JSON.stringify(request, null, 2));
private async fetchStream(
request: ChatRequest,
): Promise<ReadableStream<StreamResponse>> {
try {
console.log("\n=== ChatService Request Start ===");
console.log("1. Request details:", {
language: get(languageStore),
pattern: get(selectedPatternName),
promptCount: request.prompts?.length,
messageCount: request.messages?.length,
});
// NEW: Log the full payload before sending to backend
console.log(
"Final ChatRequest payload:",
JSON.stringify(request, null, 2),
);
const response = await fetch('/api/chat', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify(request),
});
const response = await fetch("/api/chat", {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify(request),
});
if (!response.ok) {
throw new ChatError(`HTTP error! status: ${response.status}`, 'HTTP_ERROR', { status: response.status });
}
if (!response.ok) {
throw new ChatError(
`HTTP error! status: ${response.status}`,
"HTTP_ERROR",
{ status: response.status },
);
}
const reader = response.body?.getReader();
if (!reader) {
throw new ChatError('Response body is null', 'NULL_RESPONSE');
}
const reader = response.body?.getReader();
if (!reader) {
throw new ChatError("Response body is null", "NULL_RESPONSE");
}
return this.createMessageStream(reader);
} catch (error) {
if (error instanceof ChatError) throw error;
throw new ChatError('Failed to fetch chat stream', 'FETCH_ERROR', error);
}
}
return this.createMessageStream(reader);
} catch (error) {
if (error instanceof ChatError) throw error;
throw new ChatError("Failed to fetch chat stream", "FETCH_ERROR", error);
}
}
private cleanPatternOutput(content: string): string {
// Remove markdown fence if present
let cleaned = content.replace(/^```markdown\n/, '');
cleaned = cleaned.replace(/\n```$/, '');
private cleanPatternOutput(content: string): string {
// Remove markdown fence if present
let cleaned = content.replace(/^```markdown\n/, "");
cleaned = cleaned.replace(/\n```$/, "");
// Existing cleaning
cleaned = cleaned.replace(/^# OUTPUT\s*\n/, '');
cleaned = cleaned.replace(/^\s*\n/, '');
cleaned = cleaned.replace(/\n\s*$/, '');
cleaned = cleaned.replace(/^#\s+([A-Z]+):/gm, '$1:');
cleaned = cleaned.replace(/^#\s+([A-Z]+)\s*$/gm, '$1');
cleaned = cleaned.trim();
cleaned = cleaned.replace(/\n{3,}/g, '\n\n');
return cleaned;
}
// Existing cleaning
cleaned = cleaned.replace(/^# OUTPUT\s*\n/, "");
cleaned = cleaned.replace(/^\s*\n/, "");
cleaned = cleaned.replace(/\n\s*$/, "");
cleaned = cleaned.replace(/^#\s+([A-Z]+):/gm, "$1:");
cleaned = cleaned.replace(/^#\s+([A-Z]+)\s*$/gm, "$1");
cleaned = cleaned.trim();
cleaned = cleaned.replace(/\n{3,}/g, "\n\n");
return cleaned;
}
private createMessageStream(reader: ReadableStreamDefaultReader<Uint8Array>): ReadableStream<StreamResponse> {
let buffer = '';
const cleanPatternOutput = this.cleanPatternOutput.bind(this);
const language = get(languageStore);
const validator = new LanguageValidator(language);
private createMessageStream(
reader: ReadableStreamDefaultReader<Uint8Array>,
): ReadableStream<StreamResponse> {
let buffer = "";
const cleanPatternOutput = this.cleanPatternOutput.bind(this);
const language = get(languageStore);
const validator = new LanguageValidator(language);
const processResponse = (response: StreamResponse) => {
const pattern = get(selectedPatternName);
const processResponse = (response: StreamResponse) => {
const pattern = get(selectedPatternName);
if (pattern) {
response.content = cleanPatternOutput(response.content);
// Simplified format determination - always markdown unless mermaid
const isMermaid = [
'graph TD', 'gantt', 'flowchart',
'sequenceDiagram', 'classDiagram', 'stateDiagram'
].some(starter => response.content.trim().startsWith(starter));
if (pattern) {
response.content = cleanPatternOutput(response.content);
// Simplified format determination - always markdown unless mermaid
const isMermaid = [
"graph TD",
"gantt",
"flowchart",
"sequenceDiagram",
"classDiagram",
"stateDiagram",
].some((starter) => response.content.trim().startsWith(starter));
response.format = isMermaid ? 'mermaid' : 'markdown';
}
response.format = isMermaid ? "mermaid" : "markdown";
}
if (response.type === 'content') {
response.content = validator.enforceLanguage(response.content);
}
if (response.type === "content") {
response.content = validator.enforceLanguage(response.content);
}
return response;
};
return new ReadableStream({
async start(controller) {
try {
while (true) {
const { done, value } = await reader.read();
if (done) break;
return response;
};
return new ReadableStream({
async start(controller) {
try {
while (true) {
const { done, value } = await reader.read();
if (done) break;
buffer += new TextDecoder().decode(value);
const messages = buffer.split('\n\n').filter(msg => msg.startsWith('data: '));
buffer += new TextDecoder().decode(value);
const messages = buffer
.split("\n\n")
.filter((msg) => msg.startsWith("data: "));
if (messages.length > 1) {
buffer = messages.pop() || '';
for (const msg of messages) {
try {
let response = JSON.parse(msg.slice(6)) as StreamResponse;
response = processResponse(response);
controller.enqueue(response);
} catch (parseError) {
console.error('Error parsing stream message:', parseError);
}
}
}
}
if (messages.length > 1) {
buffer = messages.pop() || "";
for (const msg of messages) {
try {
let response = JSON.parse(msg.slice(6)) as StreamResponse;
response = processResponse(response);
controller.enqueue(response);
} catch (parseError) {
console.error("Error parsing stream message:", parseError);
}
}
}
}
if (buffer.startsWith('data: ')) {
try {
let response = JSON.parse(buffer.slice(6)) as StreamResponse;
response = processResponse(response);
controller.enqueue(response);
} catch (parseError) {
console.error('Error parsing final message:', parseError);
}
}
} catch (error) {
controller.error(new ChatError('Stream processing error', 'STREAM_ERROR', error));
} finally {
reader.releaseLock();
controller.close();
}
},
cancel() {
reader.cancel();
}
});
}
if (buffer.startsWith("data: ")) {
try {
let response = JSON.parse(buffer.slice(6)) as StreamResponse;
response = processResponse(response);
controller.enqueue(response);
} catch (parseError) {
console.error("Error parsing final message:", parseError);
}
}
} catch (error) {
controller.error(
new ChatError("Stream processing error", "STREAM_ERROR", error),
);
} finally {
reader.releaseLock();
controller.close();
}
},
cancel() {
reader.cancel();
},
});
}
private createChatPrompt(userInput: string, systemPromptText?: string): ChatPrompt {
const config = get(modelConfig);
const language = get(languageStore);
private createChatPrompt(
userInput: string,
systemPromptText?: string,
): ChatPrompt {
const config = get(modelConfig);
const language = get(languageStore);
const languageInstruction = language !== 'en'
? `You MUST respond in ${language} language. All output must be in ${language}. `
// ? `You MUST respond in ${language} language. ALL output, including section headers, titles, and formatting, MUST be translated into ${language}. It is CRITICAL that you translate ALL headers, such as SUMMARY, IDEAS, QUOTES, TAKEAWAYS, MAIN POINTS, etc., into ${language}. Maintain markdown formatting in the response. Do not output any English headers.`
: '';
const languageInstruction =
language !== "en"
? `You MUST respond in ${language} language. All output must be in ${language}. `
: // ? `You MUST respond in ${language} language. ALL output, including section headers, titles, and formatting, MUST be translated into ${language}. It is CRITICAL that you translate ALL headers, such as SUMMARY, IDEAS, QUOTES, TAKEAWAYS, MAIN POINTS, etc., into ${language}. Maintain markdown formatting in the response. Do not output any English headers.`
"";
const finalSystemPrompt = languageInstruction + (systemPromptText ?? get(systemPrompt));
const finalSystemPrompt =
languageInstruction + (systemPromptText ?? get(systemPrompt));
const finalUserInput = language !== 'en'
? `${userInput}\n\nIMPORTANT: Respond in ${language} language only.`
: userInput;
const finalUserInput =
language !== "en"
? `${userInput}\n\nIMPORTANT: Respond in ${language} language only.`
: userInput;
return {
userInput: finalUserInput,
systemPrompt: finalSystemPrompt,
model: config.model,
patternName: get(selectedPatternName),
strategyName: get(selectedStrategy), // Add selected strategy to prompt
variables: get(patternVariables) // Add pattern variables
};
}
public async createChatRequest(userInput: string, systemPromptText?: string, isPattern: boolean = false): Promise<ChatRequest> {
const prompt = this.createChatPrompt(userInput, systemPromptText);
const config = get(chatConfig);
const language = get(languageStore);
return {
prompts: [prompt],
messages: [],
language: language, // Add language at the top level for backend compatibility
...config
};
}
public async streamPattern(userInput: string, systemPromptText?: string): Promise<ReadableStream<StreamResponse>> {
const request = await this.createChatRequest(userInput, systemPromptText, true);
return this.fetchStream(request);
}
public async streamChat(userInput: string, systemPromptText?: string): Promise<ReadableStream<StreamResponse>> {
const request = await this.createChatRequest(userInput, systemPromptText);
return this.fetchStream(request);
}
public async processStream(
stream: ReadableStream<StreamResponse>,
onContent: (content: string, response?: StreamResponse) => void,
onError: (error: Error) => void
): Promise<void> {
const reader = stream.getReader();
try {
while (true) {
const { done, value } = await reader.read();
if (done) break;
if (value.type === 'error') {
throw new ChatError(value.content, 'STREAM_CONTENT_ERROR');
}
if (value.type === 'content') {
onContent(value.content, value);
}
}
} catch (error) {
onError(error instanceof ChatError ? error : new ChatError('Stream processing error', 'STREAM_ERROR', error));
} finally {
reader.releaseLock();
}
}
return {
userInput: finalUserInput,
systemPrompt: finalSystemPrompt,
model: config.model,
patternName: get(selectedPatternName),
strategyName: get(selectedStrategy), // Add selected strategy to prompt
variables: get(patternVariables), // Add pattern variables
};
}
public async createChatRequest(
userInput: string,
systemPromptText?: string,
isPattern: boolean = false,
): Promise<ChatRequest> {
const prompt = this.createChatPrompt(userInput, systemPromptText);
const config = get(chatConfig);
const language = get(languageStore);
return {
prompts: [prompt],
messages: [],
language: language, // Add language at the top level for backend compatibility
...config,
};
}
public async streamPattern(
userInput: string,
systemPromptText?: string,
): Promise<ReadableStream<StreamResponse>> {
const request = await this.createChatRequest(
userInput,
systemPromptText,
true,
);
return this.fetchStream(request);
}
public async streamChat(
userInput: string,
systemPromptText?: string,
): Promise<ReadableStream<StreamResponse>> {
const request = await this.createChatRequest(userInput, systemPromptText);
return this.fetchStream(request);
}
public async processStream(
stream: ReadableStream<StreamResponse>,
onContent: (content: string, response?: StreamResponse) => void,
onError: (error: Error) => void,
): Promise<void> {
const reader = stream.getReader();
try {
while (true) {
const { done, value } = await reader.read();
if (done) break;
if (value.type === "error") {
throw new ChatError(value.content, "STREAM_CONTENT_ERROR");
}
if (value.type === "content") {
onContent(value.content, value);
}
}
} catch (error) {
onError(
error instanceof ChatError
? error
: new ChatError("Stream processing error", "STREAM_ERROR", error),
);
} finally {
reader.releaseLock();
}
}
}

View File

@@ -1,78 +1,74 @@
import { createPipeline, transformers } from 'pdf-to-markdown-core/lib/src';
import { PARSE_SCHEMA } from 'pdf-to-markdown-core/lib/src/PdfParser';
import * as pdfjs from 'pdfjs-dist';
import pdfConfig from './pdf-config';
import { createPipeline, transformers } from "pdf-to-markdown-core/lib/src";
import { PARSE_SCHEMA } from "pdf-to-markdown-core/lib/src/PdfParser";
// pdfjs-dist v5+ requires browser APIs at import time, so we use dynamic imports
let pdfjs: typeof import("pdfjs-dist") | null = null;
export class PdfConversionService {
constructor() {
if (typeof window !== 'undefined') {
console.log('PDF.js version:', pdfjs.version);
// Initialize PDF.js configuration from the shared config
pdfConfig.initialize();
console.log('Worker configuration complete');
}
}
private async ensureInitialized(): Promise<typeof import("pdfjs-dist")> {
if (!pdfjs) {
// Dynamic import to avoid SSR issues with pdfjs-dist v5+
pdfjs = await import("pdfjs-dist");
const pdfConfig = (await import("./pdf-config")).default;
console.log("PDF.js version:", pdfjs.version);
await pdfConfig.initialize();
console.log("Worker configuration complete");
}
return pdfjs;
}
async convertToMarkdown(file: File): Promise<string> {
console.log('Starting PDF conversion:', {
fileName: file.name,
fileSize: file.size
});
async convertToMarkdown(file: File): Promise<string> {
console.log("Starting PDF conversion:", {
fileName: file.name,
fileSize: file.size,
});
const buffer = await file.arrayBuffer();
console.log('Buffer created:', buffer.byteLength);
const pdfjsLib = await this.ensureInitialized();
const pipeline = createPipeline(pdfjs, {
transformConfig: {
transformers
}
});
console.log('Pipeline created');
const buffer = await file.arrayBuffer();
console.log("Buffer created:", buffer.byteLength);
const result = await pipeline.parse(
buffer,
(progress) => console.log('Processing:', {
stage: progress.stages,
details: progress.stageDetails,
progress: progress.stageProgress
})
);
console.log('Parse complete, validating result');
const pipeline = createPipeline(pdfjsLib, {
transformConfig: {
transformers,
},
});
console.log("Pipeline created");
const transformed = result.transform();
console.log('Transform applied:', transformed);
const result = await pipeline.parse(buffer, (progress) =>
console.log("Processing:", {
stage: progress.stages,
details: progress.stageDetails,
progress: progress.stageProgress,
}),
);
console.log("Parse complete, validating result");
const markdown = transformed.convert({
convert: (items) => {
console.log('PDF Structure:', {
itemCount: items.length,
firstItem: items[0],
schema: PARSE_SCHEMA // ['transform', 'width', 'height', 'str', 'fontName', 'dir']
});
const text = items
.map(item => item.value('str')) // Using 'str' instead of 'text' based on PARSE_SCHEMA
.filter(Boolean)
.join('\n');
console.log('Converted text:', {
length: text.length,
preview: text.substring(0, 100)
});
return text;
}
});
const transformed = result.transform();
console.log("Transform applied:", transformed);
return markdown;
}
const markdown = transformed.convert({
convert: (items) => {
console.log("PDF Structure:", {
itemCount: items.length,
firstItem: items[0],
schema: PARSE_SCHEMA, // ['transform', 'width', 'height', 'str', 'fontName', 'dir']
});
const text = items
.map((item) => item.value("str")) // Using 'str' instead of 'text' based on PARSE_SCHEMA
.filter(Boolean)
.join("\n");
console.log("Converted text:", {
length: text.length,
preview: text.substring(0, 100),
});
return text;
},
});
return markdown;
}
}

View File

@@ -1,20 +1,19 @@
import { browser } from '$app/environment';
import { GlobalWorkerOptions } from 'pdfjs-dist';
import { browser } from "$app/environment";
// Set up the worker source location - point to static file in public directory
const workerSrc = '/pdf.worker.min.mjs';
// Configure the worker options only on the client side
if (browser) {
GlobalWorkerOptions.workerSrc = workerSrc;
}
// Export the configuration
// Export the configuration - accepts pdfjs module to avoid top-level import
// This is necessary because pdfjs-dist v5+ uses browser APIs at import time
export default {
initialize: () => {
if (browser) {
console.log('PDF.js worker initialized at', workerSrc);
}
}
};
initialize: async () => {
if (browser) {
// Dynamic import to avoid SSR issues
const pdfjs = await import("pdfjs-dist");
const { GlobalWorkerOptions, version } = pdfjs;
// Use CDN-hosted worker to avoid bundling third-party minified code in the repo
const workerSrc = `https://unpkg.com/pdfjs-dist@${version}/build/pdf.worker.min.mjs`;
GlobalWorkerOptions.workerSrc = workerSrc;
console.log(`PDF.js worker v${version} initialized from CDN`);
}
},
};

View File

@@ -1,19 +1,24 @@
import { writable, derived, get } from 'svelte/store';
import type { ChatState, Message, StreamResponse } from '$lib/interfaces/chat-interface';
import { ChatService, ChatError } from '$lib/services/ChatService';
import { languageStore } from '$lib/store/language-store';
import { selectedPatternName } from '$lib/store/pattern-store';
import { derived, get, writable } from "svelte/store";
import { browser } from "$app/environment";
import type {
ChatState,
Message,
StreamResponse,
} from "$lib/interfaces/chat-interface";
import { ChatError, ChatService } from "$lib/services/ChatService";
import { languageStore } from "$lib/store/language-store";
import { selectedPatternName } from "$lib/store/pattern-store";
// Initialize chat service
const chatService = new ChatService();
// Local storage key for persisting messages
const MESSAGES_STORAGE_KEY = 'chat_messages';
const MESSAGES_STORAGE_KEY = "chat_messages";
// Load initial messages from local storage
const initialMessages = typeof localStorage !== 'undefined'
? JSON.parse(localStorage.getItem(MESSAGES_STORAGE_KEY) || '[]')
: [];
// Load initial messages from local storage (only in browser)
const initialMessages = browser
? JSON.parse(localStorage.getItem(MESSAGES_STORAGE_KEY) || "[]")
: [];
// Separate stores for different concerns
export const messageStore = writable<Message[]>(initialMessages);
@@ -21,134 +26,144 @@ export const streamingStore = writable<boolean>(false);
export const errorStore = writable<string | null>(null);
export const currentSession = writable<string | null>(null);
// Subscribe to messageStore changes to persist messages
if (typeof localStorage !== 'undefined') {
messageStore.subscribe($messages => {
localStorage.setItem(MESSAGES_STORAGE_KEY, JSON.stringify($messages));
});
// Subscribe to messageStore changes to persist messages (only in browser)
if (browser) {
messageStore.subscribe(($messages) => {
localStorage.setItem(MESSAGES_STORAGE_KEY, JSON.stringify($messages));
});
}
// Derived store for chat state
export const chatState = derived(
[messageStore, streamingStore],
([$messages, $streaming]) => ({
messages: $messages,
isStreaming: $streaming
})
[messageStore, streamingStore],
([$messages, $streaming]) => ({
messages: $messages,
isStreaming: $streaming,
}),
);
// Error handling utility
function handleError(error: Error | string) {
const errorMessage = error instanceof ChatError
? `${error.code}: ${error.message}`
: error instanceof Error
? error.message
: error;
const errorMessage =
error instanceof ChatError
? `${error.code}: ${error.message}`
: error instanceof Error
? error.message
: error;
errorStore.set(errorMessage);
streamingStore.set(false);
return errorMessage;
errorStore.set(errorMessage);
streamingStore.set(false);
return errorMessage;
}
export const setSession = (sessionName: string | null) => {
currentSession.set(sessionName);
if (!sessionName) {
clearMessages();
}
currentSession.set(sessionName);
if (!sessionName) {
clearMessages();
}
};
export const clearMessages = () => {
messageStore.set([]);
errorStore.set(null);
if (typeof localStorage !== 'undefined') {
localStorage.removeItem(MESSAGES_STORAGE_KEY);
}
messageStore.set([]);
errorStore.set(null);
if (typeof localStorage !== "undefined") {
localStorage.removeItem(MESSAGES_STORAGE_KEY);
}
};
export const revertLastMessage = () => {
messageStore.update(messages => messages.slice(0, -1));
messageStore.update((messages) => messages.slice(0, -1));
};
export async function sendMessage(
content: string,
systemPromptText?: string,
isSystem: boolean = false,
) {
try {
console.log("\n=== Message Processing Start ===");
console.log("1. Initial state:", {
isSystem,
hasSystemPrompt: !!systemPromptText,
currentLanguage: get(languageStore),
pattern: get(selectedPatternName),
});
export async function sendMessage(content: string, systemPromptText?: string, isSystem: boolean = false) {
try {
console.log('\n=== Message Processing Start ===');
console.log('1. Initial state:', {
isSystem,
hasSystemPrompt: !!systemPromptText,
currentLanguage: get(languageStore),
pattern: get(selectedPatternName)
});
const $streaming = get(streamingStore);
if ($streaming) {
throw new ChatError(
"Message submission blocked - already streaming",
"STREAMING_BLOCKED",
);
}
const $streaming = get(streamingStore);
if ($streaming) {
throw new ChatError('Message submission blocked - already streaming', 'STREAMING_BLOCKED');
}
streamingStore.set(true);
errorStore.set(null);
streamingStore.set(true);
errorStore.set(null);
// Add message
messageStore.update((messages) => [
...messages,
{
role: isSystem ? "system" : "user",
content,
},
]);
// Add message
messageStore.update(messages => [...messages, {
role: isSystem ? 'system' : 'user',
content
}]);
console.log("2. Message added:", {
role: isSystem ? "system" : "user",
language: get(languageStore),
});
console.log('2. Message added:', {
role: isSystem ? 'system' : 'user',
language: get(languageStore)
});
if (!isSystem) {
console.log("3. Preparing chat stream:", {
language: get(languageStore),
pattern: get(selectedPatternName),
hasSystemPrompt: !!systemPromptText,
});
if (!isSystem) {
console.log('3. Preparing chat stream:', {
language: get(languageStore),
pattern: get(selectedPatternName),
hasSystemPrompt: !!systemPromptText
});
const stream = await chatService.streamChat(content, systemPromptText);
console.log("4. Stream created");
const stream = await chatService.streamChat(content, systemPromptText);
console.log('4. Stream created');
await chatService.processStream(
stream,
(content: string, response?: StreamResponse) => {
messageStore.update((messages) => {
const newMessages = [...messages];
const lastMessage = newMessages[newMessages.length - 1];
await chatService.processStream(
stream,
(content: string, response?: StreamResponse) => {
messageStore.update(messages => {
const newMessages = [...messages];
const lastMessage = newMessages[newMessages.length - 1];
if (lastMessage?.role === "assistant") {
lastMessage.content = content;
lastMessage.format = response?.format;
console.log("Message updated:", {
role: "assistant",
format: lastMessage.format,
});
} else {
newMessages.push({
role: "assistant",
content,
format: response?.format,
});
}
if (lastMessage?.role === 'assistant') {
lastMessage.content = content;
lastMessage.format = response?.format;
console.log('Message updated:', {
role: 'assistant',
format: lastMessage.format
});
} else {
newMessages.push({
role: 'assistant',
content,
format: response?.format
});
}
return newMessages;
});
},
(error) => {
handleError(error);
},
);
}
return newMessages;
});
},
(error) => {
handleError(error);
}
);
}
streamingStore.set(false);
} catch (error) {
if (error instanceof Error) {
handleError(error);
} else {
handleError(String(error));
}
throw error;
}
streamingStore.set(false);
} catch (error) {
if (error instanceof Error) {
handleError(error);
} else {
handleError(String(error));
}
throw error;
}
}
// Re-export types for convenience

File diff suppressed because one or more lines are too long