Compare commits

...

26 Commits

Author SHA1 Message Date
github-actions[bot]
6d00405eb6 Update version to v1.4.124 and commit 2024-12-21 14:01:47 +00:00
Eugen Eisler
65285fdef0 Merge pull request #1215 from infosecwatchman/main
Add Endpoints to facilitate Ollama based chats
2024-12-21 15:00:52 +01:00
Eugen Eisler
89edd7152a Merge pull request #1214 from iliajie/fix/patterns-translate
Fix the typo in the sentence
2024-12-21 14:59:48 +01:00
Eugen Eisler
5527dc8db5 Merge pull request #1213 from AnirudhG07/main
Spelling Fixes
2024-12-21 14:59:21 +01:00
github-actions[bot]
ae18e9d1c7 Update version to v1.4.123 and commit 2024-12-20 11:17:36 +00:00
Eugen Eisler
76d18e2f04 Merge pull request #1208 from mattjoyce/fix/yaml-config
Fix: Issue with the custom message and added example config file.
2024-12-20 12:16:33 +01:00
InfosecWatchman
103388ecec Add Endpoints to facilitate Ollama based chats
Add Endpoints to facilitate Ollama based chats.

Built to use with Open WebUI
2024-12-19 16:14:51 -05:00
Ilia Ross
53ea7ab126 Fix the typo in the sentence 2024-12-19 12:26:44 +02:00
AnirudhG07
b008d17b6e Spelling fixes in create_quiz pattern 2024-12-19 13:52:25 +05:30
AnirudhG07
2ba294f4d6 Spelling fix in READEME 2024-12-19 13:50:06 +05:30
AnirudhG07
a7ed257fe3 Spelling fixes in patterns 2024-12-19 13:38:37 +05:30
Daniel Miessler
1ce5bd4447 Trying an XML-based Markdown converter pattern. 2024-12-17 14:13:45 -08:00
Daniel Miessler
634cd3f484 Trying an XML-based Markdown converter pattern. 2024-12-17 14:08:54 -08:00
Matt Joyce
8f4aab4f61 Fix: Issue with the custom message and added example config file. 2024-12-16 20:44:05 +11:00
github-actions[bot]
12284ad3db Update version to v1.4.122 and commit 2024-12-14 15:02:50 +00:00
Eugen Eisler
f180e8fc6b Merge pull request #1201 from mattjoyce/feature/config-yaml
feat: Add YAML configuration support
2024-12-14 20:31:42 +05:30
Matt Joyce
89153dd235 feat: Add YAML configuration support
Add support for persistent configuration via YAML files. Users can now specify
common options in a config file while maintaining the ability to override with
CLI flags. Currently supports core options like model, temperature, and pattern
settings.

- Add --config flag for specifying YAML config path
- Support standard option precedence (CLI > YAML > defaults)
- Add type-safe YAML parsing with reflection
- Add tests for YAML config functionality
2024-12-14 14:37:12 +11:00
github-actions[bot]
aa2881f3c2 Update version to v1.4.121 and commit 2024-12-13 21:17:35 +00:00
Eugen Eisler
82379ee6ec Merge pull request #1200 from mattjoyce/bugfix/1157-inputvars
Fix: Mask input token to prevent var substitution in patterns
2024-12-14 02:46:38 +05:30
Matt Joyce
e795055d13 Fix: Mask input token to prevent var substitution in patterns 2024-12-14 06:57:53 +11:00
Daniel Miessler
5b6d7e27b6 Added new instruction trick. 2024-12-11 13:54:33 -08:00
github-actions[bot]
c6dc13ef7f Update version to v1.4.120 and commit 2024-12-10 12:23:12 +00:00
Eugen Eisler
7e6a760623 Merge pull request #1189 from mattjoyce/bugfix/1157-inputvars
Add --input-has-vars flag to control variable substitution in input
2024-12-10 17:52:16 +05:30
Matt Joyce
01519d7486 Add --input-has-vars flag to control variable substitution in input
- Add InputHasVars field to ChatRequest struct
- Only process template variables in user input when flag is set
- Fixes issue with Ansible/Jekyll templates that use {{var}} syntax

This change makes template variable substitution in user input opt-in
via the --input-has-vars flag, preserving literal curly braces by
default.
2024-12-10 18:49:18 +11:00
Eugen Eisler
f5f50cc4c9 Merge pull request #1182 from jessefmoore/main
analyze_risk pattern
2024-12-07 23:57:01 +01:00
Jesse Moore
9226e95d18 analyze_risk pattern
Created a pattern to analyze 3rd party vendor risk.
2024-12-07 11:48:00 -08:00
23 changed files with 766 additions and 42 deletions

View File

@@ -68,7 +68,7 @@
> [!NOTE]
> November 8, 2024
>
> - **Multimodal Support**: You can now us `-a` (attachment) for Multimodal submissions to OpenAI models that support it. Example: `fabric -a https://path/to/image "Give me a description of this image."`
> - **Multimodal Support**: You can now use `-a` (attachment) for Multimodal submissions to OpenAI models that support it. Example: `fabric -a https://path/to/image "Give me a description of this image."`
## What and why

cli/README.md Normal file
View File

@@ -0,0 +1,68 @@
# YAML Configuration Support
## Overview
Fabric now supports YAML configuration files for commonly used options. This allows users to persist settings and share configurations across multiple runs.
## Usage
Use the `--config` flag to specify a YAML configuration file:
```bash
fabric --config ~/.config/fabric/config.yaml "Tell me about APIs"
```
## Configuration Precedence
1. CLI flags (highest priority)
2. YAML config values
3. Default values (lowest priority)
## Supported Configuration Options
```yaml
# Model selection
model: gpt-4
modelContextLength: 4096
# Model parameters
temperature: 0.7
topp: 0.9
presencepenalty: 0.0
frequencypenalty: 0.0
seed: 42
# Pattern selection
pattern: analyze # Use pattern name or filename
# Feature flags
stream: true
raw: false
```
## Rules and Behavior
- Only long flag names are supported in YAML (e.g., `temperature` not `-t`)
- CLI flags always override YAML values
- Unknown YAML declarations are ignored
- If a declaration appears multiple times in YAML, the last one wins
- The order of YAML declarations doesn't matter
## Type Conversions
The following string-to-type conversions are supported:
- String to number: `"42"` → `42`
- String to float: `"42.5"` → `42.5`
- String to boolean: `"true"` → `true`
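For instance, a config that quotes its values still ends up with each flag's native Go type (a sketch based on the rules above):
```yaml
# All three values are YAML strings but convert to the flag's native type
seed: "42"          # parsed as int 42
temperature: "0.8"  # parsed as float64 0.8
stream: "true"      # parsed as bool true
```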
## Example Config
```yaml
# ~/.config/fabric/config.yaml
model: gpt-4
temperature: 0.8
pattern: analyze
stream: true
topp: 0.95
presencepenalty: 0.1
frequencypenalty: 0.2
```
## CLI Override Example
```bash
# Override temperature from config
fabric --config ~/.config/fabric/config.yaml --temperature 0.9 "Query"
```

View File

@@ -56,6 +56,12 @@ func Cli(version string) (err error) {
return
}
if currentFlags.ServeOllama {
registry.ConfigureVendors()
err = restapi.ServeOllama(registry, currentFlags.ServeAddress, version)
return
}
if currentFlags.UpdatePatterns {
err = registry.PatternsLoader.PopulateDB()
return
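This new branch wires the `--serveOllama` flag into the CLI. A minimal invocation sketch, using the flag names defined in the Flags struct later in this diff:
```bash
# Start the Fabric REST API with the Ollama-compatible endpoints
# on the default bind address (":8080", per the --address default)
fabric --serveOllama --address :8080
```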

cli/example.yaml Normal file
View File

@@ -0,0 +1,21 @@
#this is an example yaml config file for fabric
# use fabric pattern names
pattern: ai
# or use a filename
# pattern: ~/testpattern.md
model: phi3:latest
# for models that support context length
modelContextLength: 2048
frequencypenalty: 0.5
presencepenalty: 0.5
topp: 0.67
temperature: 0.88
seed: 42
stream: true
raw: false

View File

@@ -6,29 +6,32 @@ import (
"fmt"
"io"
"os"
"reflect"
"strconv"
"strings"
"github.com/jessevdk/go-flags"
goopenai "github.com/sashabaranov/go-openai"
"golang.org/x/text/language"
"gopkg.in/yaml.v2"
"github.com/danielmiessler/fabric/common"
)
// Flags create flags struct. the users flags go into this, this will be passed to the chat struct in cli
type Flags struct {
Pattern string `short:"p" long:"pattern" description:"Choose a pattern from the available patterns" default:""`
Pattern string `short:"p" long:"pattern" yaml:"pattern" description:"Choose a pattern from the available patterns" default:""`
PatternVariables map[string]string `short:"v" long:"variable" description:"Values for pattern variables, e.g. -v=#role:expert -v=#points:30"`
Context string `short:"C" long:"context" description:"Choose a context from the available contexts" default:""`
Session string `long:"session" description:"Choose a session from the available sessions"`
Attachments []string `short:"a" long:"attachment" description:"Attachment path or URL (e.g. for OpenAI image recognition messages)"`
Setup bool `short:"S" long:"setup" description:"Run setup for all reconfigurable parts of fabric"`
Temperature float64 `short:"t" long:"temperature" description:"Set temperature" default:"0.7"`
TopP float64 `short:"T" long:"topp" description:"Set top P" default:"0.9"`
Stream bool `short:"s" long:"stream" description:"Stream"`
PresencePenalty float64 `short:"P" long:"presencepenalty" description:"Set presence penalty" default:"0.0"`
Raw bool `short:"r" long:"raw" description:"Use the defaults of the model without sending chat options (like temperature etc.) and use the user role instead of the system role for patterns."`
FrequencyPenalty float64 `short:"F" long:"frequencypenalty" description:"Set frequency penalty" default:"0.0"`
Temperature float64 `short:"t" long:"temperature" yaml:"temperature" description:"Set temperature" default:"0.7"`
TopP float64 `short:"T" long:"topp" yaml:"topp" description:"Set top P" default:"0.9"`
Stream bool `short:"s" long:"stream" yaml:"stream" description:"Stream"`
PresencePenalty float64 `short:"P" long:"presencepenalty" yaml:"presencepenalty" description:"Set presence penalty" default:"0.0"`
Raw bool `short:"r" long:"raw" yaml:"raw" description:"Use the defaults of the model without sending chat options (like temperature etc.) and use the user role instead of the system role for patterns."`
FrequencyPenalty float64 `short:"F" long:"frequencypenalty" yaml:"frequencypenalty" description:"Set frequency penalty" default:"0.0"`
ListPatterns bool `short:"l" long:"listpatterns" description:"List all patterns"`
ListAllModels bool `short:"L" long:"listmodels" description:"List all available models"`
ListAllContexts bool `short:"x" long:"listcontexts" description:"List all contexts"`
@@ -36,8 +39,8 @@ type Flags struct {
UpdatePatterns bool `short:"U" long:"updatepatterns" description:"Update patterns"`
Message string `hidden:"true" description:"Messages to send to chat"`
Copy bool `short:"c" long:"copy" description:"Copy to clipboard"`
Model string `short:"m" long:"model" description:"Choose model"`
ModelContextLength int `long:"modelContextLength" description:"Model context length (only affects ollama)"`
Model string `short:"m" long:"model" yaml:"model" description:"Choose model"`
ModelContextLength int `long:"modelContextLength" yaml:"modelContextLength" description:"Model context length (only affects ollama)"`
Output string `short:"o" long:"output" description:"Output to file" default:""`
OutputSession bool `long:"output-session" description:"Output the entire session (also a temporary one) to the output file"`
LatestPatterns string `short:"n" long:"latest" description:"Number of latest patterns to list" default:"0"`
@@ -49,36 +52,110 @@ type Flags struct {
Language string `short:"g" long:"language" description:"Specify the Language Code for the chat, e.g. -g=en -g=zh" default:""`
ScrapeURL string `short:"u" long:"scrape_url" description:"Scrape website URL to markdown using Jina AI"`
ScrapeQuestion string `short:"q" long:"scrape_question" description:"Search question using Jina AI"`
Seed int `short:"e" long:"seed" description:"Seed to be used for LMM generation"`
Seed int `short:"e" long:"seed" yaml:"seed" description:"Seed to be used for LMM generation"`
WipeContext string `short:"w" long:"wipecontext" description:"Wipe context"`
WipeSession string `short:"W" long:"wipesession" description:"Wipe session"`
PrintContext string `long:"printcontext" description:"Print context"`
PrintSession string `long:"printsession" description:"Print session"`
HtmlReadability bool `long:"readability" description:"Convert HTML input into a clean, readable view"`
InputHasVars bool `long:"input-has-vars" description:"Apply variables to user input"`
DryRun bool `long:"dry-run" description:"Show what would be sent to the model without actually sending it"`
Serve bool `long:"serve" description:"Serve the Fabric Rest API"`
ServeOllama bool `long:"serveOllama" description:"Serve the Fabric Rest API with ollama endpoints"`
ServeAddress string `long:"address" description:"The address to bind the REST API" default:":8080"`
Config string `long:"config" description:"Path to YAML config file"`
Version bool `long:"version" description:"Print current version"`
}
var debug = false
func Debugf(format string, a ...interface{}) {
if debug {
fmt.Printf("DEBUG: "+format, a...)
}
}
// Init Initialize flags. returns a Flags struct and an error
func Init() (ret *Flags, err error) {
// Track which yaml-configured flags were set on CLI
usedFlags := make(map[string]bool)
yamlArgsScan := os.Args[1:]
// Get list of fields that have yaml tags, could be in yaml config
yamlFields := make(map[string]bool)
t := reflect.TypeOf(Flags{})
for i := 0; i < t.NumField(); i++ {
if yamlTag := t.Field(i).Tag.Get("yaml"); yamlTag != "" {
yamlFields[yamlTag] = true
//Debugf("Found yaml-configured field: %s\n", yamlTag)
}
}
// Scan CLI args for flags that might also be set in the YAML config
for _, arg := range yamlArgsScan {
if strings.HasPrefix(arg, "--") {
flag := strings.TrimPrefix(arg, "--")
if i := strings.Index(flag, "="); i > 0 {
flag = flag[:i]
}
if yamlFields[flag] {
usedFlags[flag] = true
Debugf("CLI flag used: %s\n", flag)
}
}
}
// Parse CLI flags first
ret = &Flags{}
parser := flags.NewParser(ret, flags.Default)
var args []string
if args, err = parser.Parse(); err != nil {
return
return nil, err
}
// If config specified, load and apply YAML for unused flags
if ret.Config != "" {
yamlFlags, err := loadYAMLConfig(ret.Config)
if err != nil {
return nil, err
}
// Apply YAML values where CLI flags weren't used
flagsVal := reflect.ValueOf(ret).Elem()
yamlVal := reflect.ValueOf(yamlFlags).Elem()
flagsType := flagsVal.Type()
for i := 0; i < flagsType.NumField(); i++ {
field := flagsType.Field(i)
if yamlTag := field.Tag.Get("yaml"); yamlTag != "" {
if !usedFlags[yamlTag] {
flagField := flagsVal.Field(i)
yamlField := yamlVal.Field(i)
if flagField.CanSet() {
if yamlField.Type() != flagField.Type() {
if err := assignWithConversion(flagField, yamlField); err != nil {
Debugf("Type conversion failed for %s: %v\n", yamlTag, err)
continue
}
} else {
flagField.Set(yamlField)
}
Debugf("Applied YAML value for %s: %v\n", yamlTag, yamlField.Interface())
}
}
}
}
}
// Handle stdin and messages
info, _ := os.Stdin.Stat()
pipedToStdin := (info.Mode() & os.ModeCharDevice) == 0
//custom message
// Append positional arguments to the message (custom message)
if len(args) > 0 {
ret.Message = AppendMessage(ret.Message, args[len(args)-1])
}
// takes input from stdin if it exists, otherwise takes input from args (the last argument)
if pipedToStdin {
var pipedMessage string
if pipedMessage, err = readStdin(); err != nil {
@@ -86,7 +163,66 @@ func Init() (ret *Flags, err error) {
}
ret.Message = AppendMessage(ret.Message, pipedMessage)
}
return
return ret, nil
}
func assignWithConversion(targetField, sourceField reflect.Value) error {
// Handle string source values
if sourceField.Kind() == reflect.String {
str := sourceField.String()
switch targetField.Kind() {
case reflect.Int:
// Try parsing as float first to handle "42.9" -> 42
if val, err := strconv.ParseFloat(str, 64); err == nil {
targetField.SetInt(int64(val))
return nil
}
// Try direct int parse
if val, err := strconv.ParseInt(str, 10, 64); err == nil {
targetField.SetInt(val)
return nil
}
case reflect.Float64:
if val, err := strconv.ParseFloat(str, 64); err == nil {
targetField.SetFloat(val)
return nil
}
case reflect.Bool:
if val, err := strconv.ParseBool(str); err == nil {
targetField.SetBool(val)
return nil
}
}
return fmt.Errorf("cannot convert string %q to %v", str, targetField.Kind())
}
return fmt.Errorf("unsupported conversion from %v to %v", sourceField.Kind(), targetField.Kind())
}
func loadYAMLConfig(configPath string) (*Flags, error) {
absPath, err := common.GetAbsolutePath(configPath)
if err != nil {
return nil, fmt.Errorf("invalid config path: %w", err)
}
data, err := os.ReadFile(absPath)
if err != nil {
if os.IsNotExist(err) {
return nil, fmt.Errorf("config file not found: %s", absPath)
}
return nil, fmt.Errorf("error reading config file: %w", err)
}
// Use the existing Flags struct for YAML unmarshal
config := &Flags{}
if err := yaml.Unmarshal(data, config); err != nil {
return nil, fmt.Errorf("error parsing config file: %w", err)
}
Debugf("Config: %v\n", config)
return config, nil
}
// readStdin reads from stdin and returns the input as a string or an error
@@ -126,6 +262,7 @@ func (o *Flags) BuildChatRequest(Meta string) (ret *common.ChatRequest, err erro
SessionName: o.Session,
PatternName: o.Pattern,
PatternVariables: o.PatternVariables,
InputHasVars: o.InputHasVars,
Meta: Meta,
}

View File

@@ -87,3 +87,80 @@ func TestBuildChatOptionsDefaultSeed(t *testing.T) {
options := flags.BuildChatOptions()
assert.Equal(t, expectedOptions, options)
}
func TestInitWithYAMLConfig(t *testing.T) {
// Create a temporary YAML config file
configContent := `
temperature: 0.9
model: gpt-4
pattern: analyze
stream: true
`
tmpfile, err := os.CreateTemp("", "config.*.yaml")
if err != nil {
t.Fatal(err)
}
defer os.Remove(tmpfile.Name())
if _, err := tmpfile.Write([]byte(configContent)); err != nil {
t.Fatal(err)
}
if err := tmpfile.Close(); err != nil {
t.Fatal(err)
}
// Test 1: Basic YAML loading
t.Run("Load YAML config", func(t *testing.T) {
oldArgs := os.Args
defer func() { os.Args = oldArgs }()
os.Args = []string{"cmd", "--config", tmpfile.Name()}
flags, err := Init()
assert.NoError(t, err)
assert.Equal(t, 0.9, flags.Temperature)
assert.Equal(t, "gpt-4", flags.Model)
assert.Equal(t, "analyze", flags.Pattern)
assert.True(t, flags.Stream)
})
// Test 2: CLI overrides YAML
t.Run("CLI overrides YAML", func(t *testing.T) {
oldArgs := os.Args
defer func() { os.Args = oldArgs }()
os.Args = []string{"cmd", "--config", tmpfile.Name(), "--temperature", "0.7", "--model", "gpt-3.5-turbo"}
flags, err := Init()
assert.NoError(t, err)
assert.Equal(t, 0.7, flags.Temperature)
assert.Equal(t, "gpt-3.5-turbo", flags.Model)
assert.Equal(t, "analyze", flags.Pattern) // unchanged from YAML
assert.True(t, flags.Stream) // unchanged from YAML
})
// Test 3: Invalid YAML config
t.Run("Invalid YAML config", func(t *testing.T) {
badConfig := `
temperature: "not a float"
model: 123 # should be string
`
badfile, err := os.CreateTemp("", "bad-config.*.yaml")
if err != nil {
t.Fatal(err)
}
defer os.Remove(badfile.Name())
if _, err := badfile.Write([]byte(badConfig)); err != nil {
t.Fatal(err)
}
if err := badfile.Close(); err != nil {
t.Fatal(err)
}
oldArgs := os.Args
defer func() { os.Args = oldArgs }()
os.Args = []string{"cmd", "--config", badfile.Name()}
_, err = Init()
assert.Error(t, err)
})
}

View File

@@ -12,6 +12,7 @@ type ChatRequest struct {
Message *goopenai.ChatCompletionMessage
Language string
Meta string
InputHasVars bool
}
type ChatOptions struct {

View File

@@ -121,9 +121,11 @@ func (o *Chatter) BuildSession(request *common.ChatRequest, raw bool) (session *
}
// Now we know request.Message is not nil, process template variables
request.Message.Content, err = template.ApplyTemplate(request.Message.Content, request.PatternVariables, "")
if err != nil {
return nil, err
if request.InputHasVars {
request.Message.Content, err = template.ApplyTemplate(request.Message.Content, request.PatternVariables, "")
if err != nil {
return nil, err
}
}
var patternContent string
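With this gate, variable substitution on user input is opt-in. A hedged usage sketch (file and pattern names are illustrative):
```bash
# By default, literal braces in piped input survive untouched,
# so Ansible/Jekyll-style {{var}} syntax is preserved
cat playbook.yml.j2 | fabric --pattern summarize

# Substitution in the input happens only when explicitly requested
cat notes.md | fabric --pattern summarize --input-has-vars -v=#project:fabric
```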

go.mod
View File

@@ -6,14 +6,15 @@ toolchain go1.23.1
require (
github.com/anaskhan96/soup v1.2.5
github.com/anthropics/anthropic-sdk-go v0.2.0-alpha.4
github.com/atotto/clipboard v0.1.4
github.com/gabriel-vasile/mimetype v1.4.6
github.com/gin-gonic/gin v1.10.0
github.com/go-git/go-git/v5 v5.12.0
github.com/go-shiori/go-readability v0.0.0-20241012063810-92284fa8a71f
github.com/google/generative-ai-go v0.18.0
github.com/jessevdk/go-flags v1.6.1
github.com/joho/godotenv v1.5.1
github.com/liushuangls/go-anthropic/v2 v2.11.0
github.com/ollama/ollama v0.4.1
github.com/otiai10/copy v1.14.0
github.com/pkg/errors v0.9.1
@@ -22,6 +23,7 @@ require (
github.com/stretchr/testify v1.9.0
golang.org/x/text v0.20.0
google.golang.org/api v0.205.0
gopkg.in/yaml.v2 v2.4.0
)
require (
@@ -35,7 +37,6 @@ require (
github.com/Microsoft/go-winio v0.6.2 // indirect
github.com/ProtonMail/go-crypto v1.1.2 // indirect
github.com/andybalholm/cascadia v1.3.2 // indirect
github.com/anthropics/anthropic-sdk-go v0.2.0-alpha.4 // indirect
github.com/araddon/dateparse v0.0.0-20210429162001-6b43995a97de // indirect
github.com/bytedance/sonic v1.12.4 // indirect
github.com/bytedance/sonic/loader v0.2.1 // indirect
@@ -58,7 +59,6 @@ require (
github.com/goccy/go-json v0.10.3 // indirect
github.com/gogs/chardet v0.0.0-20211120154057-b7413eaefb8f // indirect
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da // indirect
github.com/google/generative-ai-go v0.18.0 // indirect
github.com/google/s2a-go v0.1.8 // indirect
github.com/google/uuid v1.6.0 // indirect
github.com/googleapis/enterprise-certificate-proxy v0.3.4 // indirect

go.sum
View File

@@ -158,8 +158,6 @@ github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
github.com/leodido/go-urn v1.4.0 h1:WT9HwE9SGECu3lg4d/dIA+jxlljEa1/ffXKmRjqdmIQ=
github.com/leodido/go-urn v1.4.0/go.mod h1:bvxc+MVxLKB4z00jd1z+Dvzr47oO32F/QSNjSBOlFxI=
github.com/liushuangls/go-anthropic/v2 v2.11.0 h1:YKyxDWQNaKPPgtLCgBH+JqzuznNWw8ZqQVeSdQNDMds=
github.com/liushuangls/go-anthropic/v2 v2.11.0/go.mod h1:8BKv/fkeTaL5R9R9bGkaknYBueyw2WxY20o7bImbOek=
github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWEY=
github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
github.com/mattn/go-runewidth v0.0.10/go.mod h1:RAqKPSqVFrSLVXbA8x7dzmKdmGzieGRCM46jaSJTDAk=
@@ -360,6 +358,7 @@ gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EV
gopkg.in/warnings.v0 v0.1.2 h1:wFXVbFY8DY5/xOe1ECiWdKCzZlxgshcYVNkBHstARME=
gopkg.in/warnings.v0 v0.1.2/go.mod h1:jksf8JmL6Qr/oQM2OXTHunEvvTAsrWBLb6OOjuVWRNI=
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY=
gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=

View File

@@ -154,9 +154,6 @@ schema = 3
[mod."github.com/leodido/go-urn"]
version = "v1.4.0"
hash = "sha256-Q6kplWkY37Tzy6GOme3Wut40jFK4Izun+ij/BJvcEu0="
[mod."github.com/liushuangls/go-anthropic/v2"]
version = "v2.11.0"
hash = "sha256-VvQ6RT8qcP19mRzBtFKh19czlRk5obHzh1NVs3z/Gkc="
[mod."github.com/mattn/go-isatty"]
version = "v0.0.20"
hash = "sha256-qhw9hWtU5wnyFyuMbKx+7RB8ckQaFQ8D+8GKPkN3HHQ="
@@ -280,6 +277,9 @@ schema = 3
[mod."gopkg.in/warnings.v0"]
version = "v0.1.2"
hash = "sha256-ATVL9yEmgYbkJ1DkltDGRn/auGAjqGOfjQyBYyUo8s8="
[mod."gopkg.in/yaml.v2"]
version = "v2.4.0"
hash = "sha256-uVEGglIedjOIGZzHW4YwN1VoRSTK8o0eGZqzd+TNdd0="
[mod."gopkg.in/yaml.v3"]
version = "v3.0.1"
hash = "sha256-FqL9TKYJ0XkNwJFnq9j0VvJ5ZUU1RvH/52h/f5bkYAU="

View File

@@ -26,11 +26,11 @@ Subject: Machine Learning
```
# Example run un bash:
# Example run bash:
Copy the input query to the clipboard and execute the following command:
``` bash
```bash
xclip -selection clipboard -o | fabric -sp analize_answers
```

View File

@@ -18,7 +18,7 @@ You are an advanced AI with a 2,128 IQ and you are an expert in understanding an
- In a section called POSSIBLE CURRENT ERRORS, create a list of 15-word bullets indicating where similar thinking mistakes could be causing or affecting current beliefs or predictions.
- In a section called RECOMMENDATIONS, create a list of 15-word bullets recommending how to adjust current beliefs and/or predictions.
- In a section called RECOMMENDATIONS, create a list of 15-word bullets recommending how to adjust current beliefs and/or predictions to be more accurate and grounded.
# OUTPUT INSTRUCTIONS

View File

@@ -0,0 +1,81 @@
# IDENTITY and PURPOSE
You are tasked with conducting a risk assessment of a third-party vendor, which involves analyzing their compliance with security and privacy standards. Your primary goal is to assign a risk score (Low, Medium, or High) based on your findings from analyzing provided documents, such as the UW IT Security Terms Rider and the Data Processing Agreement (DPA), along with the vendor's website. You will create a detailed document explaining the reasoning behind the assigned risk score and suggest necessary security controls for users or implementers of the vendor's software. Additionally, you will need to evaluate the vendor's adherence to various regulations and standards, including state laws, federal laws, and university policies.
Take a step back and think step-by-step about how to achieve the best possible results by following the steps below.
# STEPS
- Conduct a risk assessment of the third-party vendor.
- Assign a risk score of Low, Medium, or High.
- Create a document explaining the reasoning behind the risk score.
- Provide the document to the implementer or user of the vendor's software.
- Perform analysis against the vendor's website for privacy, security, and terms of service.
- Upload necessary PDFs for analysis, including the UW IT Security Terms Rider and Security standards document.
# OUTPUT INSTRUCTIONS
- The only output format is Markdown.
- Ensure you follow ALL these instructions when creating your output.
# EXAMPLE
- Risk Analysis
The following assumptions apply:
* This is a procurement request, REQ00001
* The School staff member is requesting audio software for the buildings' Tesira hardware.
* The vendor will not engage with the UW Security Terms.
* The data used is for audio layouts stored locally on a specialized computer.
* The data is considered public data (Category 1), although it is highly specialized audio data.
Given this, IT Security has recommended the mitigations below for users or implementers of the tool.
See the Appendix for links with further details on the list below:
1) Password Management: Users should create unique passwords and manage them securely. Users are encouraged to complete UW OIS password training and to consider using a password manager to enhance security. It is crucial not to reuse their NETID password for the vendor account.
2) Incident Response Contact: The owner/user will be the primary point of contact in case of a data breach and must know how to reach UW OIS via email to comply with UW APS. Incidents involving privacy information also require filling out the incident report form on privacy.uw.edu.
3) Data Backup: Back up data regularly. Ensure data is backed up (a mitigation against ransomware, compromises, etc.) so that if an issue arises you can roll back to a known good state.
Back up data that is local to your laptop or PC, preferably to cloud storage such as UW OneDrive, to mitigate risks such as data loss, ransomware, or issues with the vendor software. Details on storage options are available on itconnect.uw.edu, with a specific link in the Appendix below.
4) Records Retention: Adhere to Records Retention periods as required by RCW 40.14.050. Further guidance can be found on finance.uw.edu/recmgt/retentionschedules.
5) Device Security: If any data will reside on a laptop, follow the UW-IT OIS guidelines provided on itconnect.uw.edu for securing laptops.
6) Software Patching: Routinely patch the vendor application. If it is on-premises software, the expectation is to maintain security and compliance using the UW Office of Information Security Minimum standards.
7) Review the vendor's Terms of Use and Privacy Policy, along with all the security/privacy implications they pose. Additionally, use the resources within them to request deletion of the data and account at the conclusion of service.
- IN CONCLUSION
This is not a comprehensive list of risks.
The risk is Low because the data is Category 1 (public data) and consists of specialized audio layout data.
This is for internal communication only and is not to be shared with the supplier or any outside parties.
# INPUT
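A hedged usage sketch for the new pattern (invocation mirrors the other pattern READMEs in this change set; the clipboard tooling is illustrative):
```bash
# Pipe vendor documents or notes to the new pattern
pbpaste | fabric --pattern analyze_risk
```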

View File

@@ -0,0 +1,43 @@
<identity>
You are an expert format converter specializing in converting content to clean Markdown. Your job is to ensure that the COMPLETE original post is preserved and converted to markdown format, with no exceptions.
</identity>
<steps>
1. Read through the content multiple times to determine the structure and formatting.
2. Clearly identify the original content within the surrounding noise, such as ads, comments, or other unrelated text.
3. Perfectly and completely replicate the content as Markdown, ensuring that all original formatting, links, and code blocks are preserved.
4. Output the COMPLETE original content in Markdown format.
</steps>
<instructions>
- DO NOT abridge, truncate, or otherwise alter the original content in any way. Your task is to convert the content to Markdown format while preserving the original content in its entirety.
- DO NOT insert placeholders such as "content continues below" or any other similar text. ALWAYS output the COMPLETE original content.
- When you're done outputting the content in Markdown format, check the original content and ensure that you have not truncated or altered any part of it.
</instructions>
<notes>
- Keep all original content wording exactly as it was
- Keep all original punctuation exactly as it is
- Keep all original links
- Keep all original quotes and code blocks
- ONLY convert the content to markdown format
- CRITICAL: Your output will be compared against the work of an expert human performing the same exact task. Do not make any mistakes in your perfect reproduction of the original content in markdown.
</notes>
<content>
INPUT
</content>
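A hedged usage sketch; the pattern's file name is not visible in this view, so the name below is a placeholder:
```bash
# Pipe raw page content through the converter pattern
# ("convert_to_markdown" stands in for whatever name the pattern was committed under)
cat scraped_page.html | fabric --pattern convert_to_markdown
```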

View File

@@ -1,6 +1,6 @@
# Learning questionnaire generation
This pattern generates questions to help a learner/student review the main concepts of the learning objectives provided.
This pattern generates questions to help a learner/student review the main concepts of the learning objectives provided.
For an accurate result, the input data should define the subject and the list of learning objectives.
@@ -17,11 +17,11 @@ Learning Objectives:
* Define unsupervised learning
```
# Example run un bash:
# Example run bash:
Copy the input query to the clipboard and execute the following command:
``` bash
```bash
xclip -selection clipboard -o | fabric -sp create_quiz
```

View File

@@ -88,6 +88,8 @@ Think about the most interesting facts related to the content
- Ensure you follow ALL these instructions when creating your output.
- Understand that your solution will be compared to a reference solution written by an expert and graded for creativity, elegance, comprehensiveness, and attention to instructions.
# INPUT
INPUT:

View File

@@ -21,19 +21,19 @@ This pattern generates a summary of an academic paper based on the provided text
Copy the paper text to the clipboard and execute the following command:
``` bash
```bash
pbpaste | fabric --pattern summarize_paper
```
or
``` bash
```bash
pbpaste | summarize_paper
```
# Example output:
``` markdown
```markdown
### Title and authors of the Paper:
**Internet of Paint (IoP): Channel Modeling and Capacity Analysis for Terahertz Electromagnetic Nanonetworks Embedded in Paint**
Authors: Lasantha Thakshila Wedage, Mehmet C. Vuran, Bernard Butler, Yevgeni Koucheryavy, Sasitharan Balasubramaniam

View File

@@ -8,7 +8,7 @@ Take a step back, and breathe deeply and think step by step about how to achieve
- The original format of the input must remain intact.
- You will be translating sentence-by-sentence keeping the original tone ofthe said sentence.
- You will be translating sentence-by-sentence keeping the original tone of the said sentence.
- You will not be manipulate the wording to change the meaning.

View File

@@ -1 +1 @@
"1.4.119"
"1.4.124"

View File

@@ -10,6 +10,8 @@ import (
"github.com/danielmiessler/fabric/plugins/template"
)
const inputSentinel = "__FABRIC_INPUT_SENTINEL_TOKEN__"
type PatternsEntity struct {
*StorageEntity
SystemPatternFile string
@@ -59,7 +61,8 @@ func (o *PatternsEntity) GetApplyVariables(
func (o *PatternsEntity) applyVariables(
pattern *Pattern, variables map[string]string, input string) (err error) {
// If {{input}} isn't in pattern, append it on new line
// Ensure pattern has an {{input}} placeholder
// If not present, append it on a new line
if !strings.Contains(pattern.Pattern, "{{input}}") {
if !strings.HasSuffix(pattern.Pattern, "\n") {
pattern.Pattern += "\n"
@@ -67,11 +70,20 @@ func (o *PatternsEntity) applyVariables(
pattern.Pattern += "{{input}}"
}
var result string
if result, err = template.ApplyTemplate(pattern.Pattern, variables, input); err != nil {
// Temporarily replace {{input}} with a sentinel token to protect it
// from recursive variable resolution
withSentinel := strings.ReplaceAll(pattern.Pattern, "{{input}}", inputSentinel)
// Process all other template variables in the pattern
// At this point, our sentinel ensures {{input}} won't be affected
var processed string
if processed, err = template.ApplyTemplate(withSentinel, variables, ""); err != nil {
return
}
pattern.Pattern = result
// Finally, replace our sentinel with the actual user input
// The input has already been processed for variables if InputHasVars was true
pattern.Pattern = strings.ReplaceAll(processed, inputSentinel, input)
return
}
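The sentinel technique can be shown in isolation. A minimal sketch (here `resolveVars` stands in for `template.ApplyTemplate`; names and values are illustrative):
```go
package main

import (
	"fmt"
	"strings"
)

const inputSentinel = "__FABRIC_INPUT_SENTINEL_TOKEN__"

// resolveVars stands in for template.ApplyTemplate: it expands {{name}}
// placeholders from the supplied map and leaves unknown tokens untouched.
func resolveVars(s string, vars map[string]string) string {
	for k, v := range vars {
		s = strings.ReplaceAll(s, "{{"+k+"}}", v)
	}
	return s
}

func main() {
	pattern := "Act as {{role}}.\n{{input}}"
	vars := map[string]string{"role": "an editor"}
	// User input that happens to contain template-looking syntax (e.g. Ansible/Jekyll)
	input := "Keep {{ansible_host}} exactly as written."

	// 1. Protect {{input}} before resolving the pattern's own variables
	withSentinel := strings.ReplaceAll(pattern, "{{input}}", inputSentinel)
	// 2. Resolve pattern variables; the sentinel is not a {{...}} token, so it survives
	processed := resolveVars(withSentinel, vars)
	// 3. Splice the untouched user input back in
	fmt.Println(strings.ReplaceAll(processed, inputSentinel, input))
}
```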

restapi/ollama.go Normal file
View File

@@ -0,0 +1,275 @@
package restapi
import (
"bytes"
"context"
"encoding/json"
"fmt"
"github.com/danielmiessler/fabric/core"
"github.com/gin-gonic/gin"
"io"
"log"
"net/http"
"strings"
"time"
)
type OllamaModel struct {
Models []Model `json:"models"`
}
type Model struct {
Details ModelDetails `json:"details"`
Digest string `json:"digest"`
Model string `json:"model"`
ModifiedAt string `json:"modified_at"`
Name string `json:"name"`
Size int64 `json:"size"`
}
type ModelDetails struct {
Families []string `json:"families"`
Family string `json:"family"`
Format string `json:"format"`
ParameterSize string `json:"parameter_size"`
ParentModel string `json:"parent_model"`
QuantizationLevel string `json:"quantization_level"`
}
type APIConvert struct {
registry *core.PluginRegistry
r *gin.Engine
addr *string
}
type OllamaRequestBody struct {
Messages []OllamaMessage `json:"messages"`
Model string `json:"model"`
Options struct {
} `json:"options"`
Stream bool `json:"stream"`
}
type OllamaMessage struct {
Content string `json:"content"`
Role string `json:"role"`
}
type OllamaResponse struct {
Model string `json:"model"`
CreatedAt string `json:"created_at"`
Message struct {
Role string `json:"role"`
Content string `json:"content"`
} `json:"message"`
DoneReason string `json:"done_reason,omitempty"`
Done bool `json:"done"`
TotalDuration int64 `json:"total_duration,omitempty"`
LoadDuration int `json:"load_duration,omitempty"`
PromptEvalCount int `json:"prompt_eval_count,omitempty"`
PromptEvalDuration int `json:"prompt_eval_duration,omitempty"`
EvalCount int `json:"eval_count,omitempty"`
EvalDuration int64 `json:"eval_duration,omitempty"`
}
type FabricResponseFormat struct {
Type string `json:"type"`
Format string `json:"format"`
Content string `json:"content"`
}
func ServeOllama(registry *core.PluginRegistry, address string, version string) (err error) {
r := gin.New()
// Middleware
r.Use(gin.Logger())
r.Use(gin.Recovery())
// Register routes
fabricDb := registry.Db
NewPatternsHandler(r, fabricDb.Patterns)
NewContextsHandler(r, fabricDb.Contexts)
NewSessionsHandler(r, fabricDb.Sessions)
NewChatHandler(r, registry, fabricDb)
NewConfigHandler(r, fabricDb)
NewModelsHandler(r, registry.VendorManager)
typeConversion := APIConvert{
registry: registry,
r: r,
addr: &address,
}
// Ollama Endpoints
r.GET("/api/tags", typeConversion.ollamaTags)
r.GET("/api/version", func(c *gin.Context) {
c.Data(200, "application/json", []byte(fmt.Sprintf("{\"%s\"}", version)))
return
})
r.POST("/api/chat", typeConversion.ollamaChat)
// Start server
err = r.Run(address)
if err != nil {
return err
}
return
}
func (f APIConvert) ollamaTags(c *gin.Context) {
patterns, err := f.registry.Db.Patterns.GetNames()
if err != nil {
c.JSON(http.StatusInternalServerError, gin.H{"error": err})
return
}
var response OllamaModel
for _, pattern := range patterns {
today := time.Now().Format("2024-11-25T12:07:58.915991813-05:00")
details := ModelDetails{
Families: []string{"fabric"},
Family: "fabric",
Format: "custom",
ParameterSize: "42.0B",
ParentModel: "",
QuantizationLevel: "",
}
response.Models = append(response.Models, Model{
Details: details,
Digest: "365c0bd3c000a25d28ddbf732fe1c6add414de7275464c4e4d1c3b5fcb5d8ad1",
Model: fmt.Sprintf("%s:latest", pattern),
ModifiedAt: today,
Name: fmt.Sprintf("%s:latest", pattern),
Size: 0,
})
}
c.JSON(200, response)
}
func (f APIConvert) ollamaChat(c *gin.Context) {
body, err := io.ReadAll(c.Request.Body)
if err != nil {
log.Printf("Error reading body: %v", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": "testing endpoint"})
return
}
var prompt OllamaRequestBody
err = json.Unmarshal(body, &prompt)
if err != nil {
log.Printf("Error unmarshalling body: %v", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": "testing endpoint"})
return
}
now := time.Now()
var chat ChatRequest
if len(prompt.Messages) == 1 {
chat.Prompts = []PromptRequest{{
UserInput: prompt.Messages[0].Content,
Vendor: "",
Model: "",
ContextName: "",
PatternName: strings.Split(prompt.Model, ":")[0],
}}
} else if len(prompt.Messages) > 1 {
var content string
for _, msg := range prompt.Messages {
content = fmt.Sprintf("%s%s:%s\n", content, msg.Role, msg.Content)
}
chat.Prompts = []PromptRequest{{
UserInput: content,
Vendor: "",
Model: "",
ContextName: "",
PatternName: strings.Split(prompt.Model, ":")[0],
}}
}
fabricChatReq, err := json.Marshal(chat)
if err != nil {
log.Printf("Error marshalling body: %v", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": err})
return
}
ctx := context.Background()
var req *http.Request
if strings.Contains(*f.addr, "http") {
req, err = http.NewRequest("POST", fmt.Sprintf("%s/chat", *f.addr), bytes.NewBuffer(fabricChatReq))
} else {
req, err = http.NewRequest("POST", fmt.Sprintf("http://127.0.0.1%s/chat", *f.addr), bytes.NewBuffer(fabricChatReq))
}
if err != nil {
log.Fatal(err)
}
req = req.WithContext(ctx)
fabricRes, err := http.DefaultClient.Do(req)
if err != nil {
log.Printf("Error getting /chat body: %v", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": err})
return
}
body, err = io.ReadAll(fabricRes.Body)
if err != nil {
log.Printf("Error reading body: %v", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": "testing endpoint"})
return
}
var forwardedResponse OllamaResponse
var forwardedResponses []OllamaResponse
var fabricResponse FabricResponseFormat
err = json.Unmarshal([]byte(strings.Split(strings.Split(string(body), "\n")[0], "data: ")[1]), &fabricResponse)
if err != nil {
log.Printf("Error unmarshalling body: %v", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": "testing endpoint"})
return
}
for _, word := range strings.Split(fabricResponse.Content, " ") {
forwardedResponse = OllamaResponse{
Model: "",
CreatedAt: "",
Message: struct {
Role string `json:"role"`
Content string `json:"content"`
}(struct {
Role string
Content string
}{Content: fmt.Sprintf("%s ", word), Role: "assistant"}),
Done: false,
}
forwardedResponses = append(forwardedResponses, forwardedResponse)
}
forwardedResponse.Model = prompt.Model
forwardedResponse.CreatedAt = time.Now().UTC().Format("2006-01-02T15:04:05.999999999Z")
forwardedResponse.Message.Role = "assistant"
forwardedResponse.Message.Content = ""
forwardedResponse.DoneReason = "stop"
forwardedResponse.Done = true
forwardedResponse.TotalDuration = time.Since(now).Nanoseconds()
forwardedResponse.LoadDuration = int(time.Since(now).Nanoseconds())
forwardedResponse.PromptEvalCount = 42
forwardedResponse.PromptEvalDuration = int(time.Since(now).Nanoseconds())
forwardedResponse.EvalCount = 420
forwardedResponse.EvalDuration = time.Since(now).Nanoseconds()
forwardedResponses = append(forwardedResponses, forwardedResponse)
var res []byte
for _, response := range forwardedResponses {
marshalled, err := json.Marshal(response)
if err != nil {
log.Printf("Error marshalling body: %v", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": err})
return
}
for _, bytein := range marshalled {
res = append(res, bytein)
}
for _, bytebreak := range []byte("\n") {
res = append(res, bytebreak)
}
}
c.Data(200, "application/json", res)
//c.JSON(200, forwardedResponse)
return
}
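As a rough smoke test of the new endpoints (a sketch only: it assumes the server was started with `--serveOllama` on the default `:8080` address, and the pattern name is illustrative):
```bash
# List fabric patterns as Ollama "models"
curl http://127.0.0.1:8080/api/tags

# Report the fabric version
curl http://127.0.0.1:8080/api/version

# Chat through a pattern; the request body mirrors OllamaRequestBody above
curl http://127.0.0.1:8080/api/chat -d '{
  "model": "summarize:latest",
  "messages": [{"role": "user", "content": "Explain what fabric patterns are."}],
  "stream": false
}'
```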

View File

@@ -1,3 +1,3 @@
package main
var version = "v1.4.119"
var version = "v1.4.124"