Mirror of https://github.com/danielmiessler/Fabric.git, synced 2026-01-10 23:08:06 -05:00
Compare commits
16 Commits

| Author | SHA1 | Date |
|---|---|---|
| | 5e0aaa1f93 | |
| | eb16806931 | |
| | 474dd786a4 | |
| | edad63df19 | |
| | c7eb7439ef | |
| | 23d678d62f | |
| | de5260a661 | |
| | baeadc2270 | |
| | 5b4cec81c3 | |
| | eda5531087 | |
| | 66925d188a | |
| | 6179742e79 | |
| | d8fc6940f0 | |
| | 44f7e8dfef | |
| | c5ada714ff | |
| | 80c4807f7e | |
3 .github/workflows/release.yml vendored
```diff
@@ -2,7 +2,7 @@ name: Go Release
 
 on:
   repository_dispatch:
-    types: [ tag_created ]
+    types: [tag_created]
   push:
     tags:
       - "v*"
@@ -108,6 +108,7 @@ jobs:
        Add-Content -Path $env:GITHUB_ENV -Value "latest_tag=$latest_tag"
 
    - name: Create release if it doesn't exist
+     shell: bash
      env:
        GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
      run: |
```
```diff
@@ -1 +1 @@
-"1.4.221"
+"1.4.223"
```
```diff
@@ -22,19 +22,20 @@ Take a deep breath and think step by step about how to best accomplish this goal
 This must be under the heading "INSIGHTFULNESS SCORE (0 = not very interesting and insightful to 10 = very interesting and insightful)".
 - A rating of how emotional the debate was from 0 (very calm) to 5 (very emotional). This must be under the heading "EMOTIONALITY SCORE (0 (very calm) to 5 (very emotional))".
 - A list of the participants of the debate and a score of their emotionality from 0 (very calm) to 5 (very emotional). This must be under the heading "PARTICIPANTS".
-- A list of arguments attributed to participants with names and quotes. If possible, this should include external references that disprove or back up their claims.
+- A list of arguments attributed to participants with names and quotes. Each argument summary must be EXACTLY 16 words. If possible, this should include external references that disprove or back up their claims.
 It is IMPORTANT that these references are from trusted and verifiable sources that can be easily accessed. These sources have to BE REAL and NOT MADE UP. This must be under the heading "ARGUMENTS".
 If possible, provide an objective assessment of the truth of these arguments. If you assess the truth of the argument, provide some sources that back up your assessment. The material you provide should be from reliable, verifiable, and trustworthy sources. DO NOT MAKE UP SOURCES.
-- A list of agreements the participants have reached, attributed with names and quotes. This must be under the heading "AGREEMENTS".
-- A list of disagreements the participants were unable to resolve and the reasons why they remained unresolved, attributed with names and quotes. This must be under the heading "DISAGREEMENTS".
-- A list of possible misunderstandings and why they may have occurred, attributed with names and quotes. This must be under the heading "POSSIBLE MISUNDERSTANDINGS".
-- A list of learnings from the debate. This must be under the heading "LEARNINGS".
-- A list of takeaways that highlight ideas to think about, sources to explore, and actionable items. This must be under the heading "TAKEAWAYS".
+- A list of agreements the participants have reached. Each agreement summary must be EXACTLY 16 words, followed by names and quotes. This must be under the heading "AGREEMENTS".
+- A list of disagreements the participants were unable to resolve. Each disagreement summary must be EXACTLY 16 words, followed by names and quotes explaining why they remained unresolved. This must be under the heading "DISAGREEMENTS".
+- A list of possible misunderstandings. Each misunderstanding summary must be EXACTLY 16 words, followed by names and quotes explaining why they may have occurred. This must be under the heading "POSSIBLE MISUNDERSTANDINGS".
+- A list of learnings from the debate. Each learning must be EXACTLY 16 words. This must be under the heading "LEARNINGS".
+- A list of takeaways that highlight ideas to think about, sources to explore, and actionable items. Each takeaway must be EXACTLY 16 words. This must be under the heading "TAKEAWAYS".
 
 # OUTPUT INSTRUCTIONS
 
 - Output all sections above.
-- Use Markdown to structure your output.
+- Do not use any markdown formatting (no asterisks, no bullet points, no headers).
+- Keep all agreements, arguments, recommendations, learnings, and takeaways to EXACTLY 16 words each.
 - When providing quotes, these quotes should clearly express the points you are using them for. If necessary, use multiple quotes.
 
 # INPUT:
```
16 patterns/extract_alpha/system.md Normal file
```diff
@@ -0,0 +1,16 @@
+# IDENTITY
+
+You're an expert at finding Alpha in content.
+
+# PHILOSOPHY
+
+I love the idea of Claude Shannon's information theory where basically the only real information is the stuff that's different and anything that's the same as kind of background noise.
+
+I love that idea for novelty and surprise inside of content when I think about a presentation or a talk or a podcast or an essay or anything I'm looking for the net new ideas or the new presentation of ideas for the new frameworks of how to use ideas or combine ideas so I'm looking for a way to capture that inside of content.
+
+# INSTRUCTIONS
+
+I want you to extract the 24 highest alpha ideas and thoughts and insights and recommendations in this piece of content, and I want you to output them in unformatted marked down in 8-word bullets written in the approachable style of Paul Graham.
+
+# INPUT
+
```
116 plugins/ai/openai/chat_completions.go Normal file
```diff
@@ -0,0 +1,116 @@
+package openai
+
+// This file contains helper methods for the Chat Completions API.
+// These methods are used as fallbacks for OpenAI-compatible providers
+// that don't support the newer Responses API (e.g., Groq, Mistral, etc.).
+
+import (
+    "context"
+    "strings"
+
+    "github.com/danielmiessler/fabric/chat"
+    "github.com/danielmiessler/fabric/common"
+    openai "github.com/openai/openai-go"
+    "github.com/openai/openai-go/shared"
+)
+
+// sendChatCompletions sends a request using the Chat Completions API
+func (o *Client) sendChatCompletions(ctx context.Context, msgs []*chat.ChatCompletionMessage, opts *common.ChatOptions) (ret string, err error) {
+    req := o.buildChatCompletionParams(msgs, opts)
+
+    var resp *openai.ChatCompletion
+    if resp, err = o.ApiClient.Chat.Completions.New(ctx, req); err != nil {
+        return
+    }
+    if len(resp.Choices) > 0 {
+        ret = resp.Choices[0].Message.Content
+    }
+    return
+}
+
+// sendStreamChatCompletions sends a streaming request using the Chat Completions API
+func (o *Client) sendStreamChatCompletions(
+    msgs []*chat.ChatCompletionMessage, opts *common.ChatOptions, channel chan string,
+) (err error) {
+    defer close(channel)
+
+    req := o.buildChatCompletionParams(msgs, opts)
+    stream := o.ApiClient.Chat.Completions.NewStreaming(context.Background(), req)
+    for stream.Next() {
+        chunk := stream.Current()
+        if len(chunk.Choices) > 0 && chunk.Choices[0].Delta.Content != "" {
+            channel <- chunk.Choices[0].Delta.Content
+        }
+    }
+    if stream.Err() == nil {
+        channel <- "\n"
+    }
+    return stream.Err()
+}
+
+// buildChatCompletionParams builds parameters for the Chat Completions API
+func (o *Client) buildChatCompletionParams(
+    inputMsgs []*chat.ChatCompletionMessage, opts *common.ChatOptions,
+) (ret openai.ChatCompletionNewParams) {
+
+    messages := make([]openai.ChatCompletionMessageParamUnion, len(inputMsgs))
+    for i, msgPtr := range inputMsgs {
+        msg := *msgPtr
+        if strings.Contains(opts.Model, "deepseek") && len(inputMsgs) == 1 && msg.Role == chat.ChatMessageRoleSystem {
+            msg.Role = chat.ChatMessageRoleUser
+        }
+        messages[i] = o.convertChatMessage(msg)
+    }
+
+    ret = openai.ChatCompletionNewParams{
+        Model:    shared.ChatModel(opts.Model),
+        Messages: messages,
+    }
+
+    if !opts.Raw {
+        ret.Temperature = openai.Float(opts.Temperature)
+        ret.TopP = openai.Float(opts.TopP)
+        if opts.MaxTokens != 0 {
+            ret.MaxTokens = openai.Int(int64(opts.MaxTokens))
+        }
+        if opts.PresencePenalty != 0 {
+            ret.PresencePenalty = openai.Float(opts.PresencePenalty)
+        }
+        if opts.FrequencyPenalty != 0 {
+            ret.FrequencyPenalty = openai.Float(opts.FrequencyPenalty)
+        }
+        if opts.Seed != 0 {
+            ret.Seed = openai.Int(int64(opts.Seed))
+        }
+    }
+    return
+}
+
+// convertChatMessage converts fabric chat message to OpenAI chat completion message
+func (o *Client) convertChatMessage(msg chat.ChatCompletionMessage) openai.ChatCompletionMessageParamUnion {
+    result := convertMessageCommon(msg)
+
+    switch result.Role {
+    case chat.ChatMessageRoleSystem:
+        return openai.SystemMessage(result.Content)
+    case chat.ChatMessageRoleUser:
+        // Handle multi-content messages (text + images)
+        if result.HasMultiContent {
+            var parts []openai.ChatCompletionContentPartUnionParam
+            for _, p := range result.MultiContent {
+                switch p.Type {
+                case chat.ChatMessagePartTypeText:
+                    parts = append(parts, openai.TextContentPart(p.Text))
+                case chat.ChatMessagePartTypeImageURL:
+                    parts = append(parts, openai.ImageContentPart(openai.ChatCompletionContentPartImageImageURLParam{URL: p.ImageURL.URL}))
+                }
+            }
+            return openai.UserMessage(parts)
+        }
+        return openai.UserMessage(result.Content)
+    case chat.ChatMessageRoleAssistant:
+        return openai.AssistantMessage(result.Content)
+    default:
+        return openai.UserMessage(result.Content)
+    }
+}
```
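Because `sendStreamChatCompletions` takes ownership of the channel and closes it via `defer`, a caller only needs to range over the channel until it drains. A minimal sketch of that pattern (the `exampleStream` wrapper and the `fmt` usage are illustrative assumptions, not part of the diff):

```go
package openai

import (
    "fmt"

    "github.com/danielmiessler/fabric/chat"
    "github.com/danielmiessler/fabric/common"
)

// exampleStream is a hypothetical in-package caller of the streaming fallback.
func exampleStream(o *Client, msgs []*chat.ChatCompletionMessage, opts *common.ChatOptions) {
    channel := make(chan string)
    go func() {
        // sendStreamChatCompletions closes channel when the stream ends.
        if err := o.sendStreamChatCompletions(msgs, opts, channel); err != nil {
            fmt.Println("stream error:", err)
        }
    }()
    for chunk := range channel { // drains until the deferred close fires
        fmt.Print(chunk)
    }
}
```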
21 plugins/ai/openai/message_conversion.go Normal file
```diff
@@ -0,0 +1,21 @@
+package openai
+
+import "github.com/danielmiessler/fabric/chat"
+
+// MessageConversionResult holds the common conversion result
+type MessageConversionResult struct {
+    Role            string
+    Content         string
+    MultiContent    []chat.ChatMessagePart
+    HasMultiContent bool
+}
+
+// convertMessageCommon extracts common conversion logic
+func convertMessageCommon(msg chat.ChatCompletionMessage) MessageConversionResult {
+    return MessageConversionResult{
+        Role:            msg.Role,
+        Content:         msg.Content,
+        MultiContent:    msg.MultiContent,
+        HasMultiContent: len(msg.MultiContent) > 0,
+    }
+}
```
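As a quick illustration of the helper's behavior (the example values below are assumed, not from the diff): a plain-text message yields `HasMultiContent == false`, while a message carrying `MultiContent` parts flips the flag so callers take the multi-part path.

```go
// Hypothetical in-package snippet, for illustration only.
func exampleConversion() {
    text := chat.ChatCompletionMessage{Role: chat.ChatMessageRoleUser, Content: "hello"}
    r1 := convertMessageCommon(text)
    _ = r1 // r1.HasMultiContent == false -> callers use r1.Content directly

    multi := chat.ChatCompletionMessage{
        Role: chat.ChatMessageRoleUser,
        MultiContent: []chat.ChatMessagePart{
            {Type: chat.ChatMessagePartTypeText, Text: "what is in this image?"},
        },
    }
    r2 := convertMessageCommon(multi)
    _ = r2 // r2.HasMultiContent == true -> callers walk r2.MultiContent part by part
}
```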
```diff
@@ -2,7 +2,6 @@ package openai
 
 import (
     "context"
-    "log/slog"
     "slices"
     "strings"
 
@@ -12,10 +11,13 @@ import (
     openai "github.com/openai/openai-go"
     "github.com/openai/openai-go/option"
     "github.com/openai/openai-go/packages/pagination"
+    "github.com/openai/openai-go/responses"
+    "github.com/openai/openai-go/shared"
+    "github.com/openai/openai-go/shared/constant"
 )
 
 func NewClient() (ret *Client) {
-    return NewClientCompatible("OpenAI", "https://api.openai.com/v1", nil)
+    return NewClientCompatibleWithResponses("OpenAI", "https://api.openai.com/v1", true, nil)
 }
 
 func NewClientCompatible(vendorName string, defaultBaseUrl string, configureCustom func() error) (ret *Client) {
@@ -28,6 +30,17 @@ func NewClientCompatible(vendorName string, defaultBaseUrl string, configureCust
     return
 }
 
+func NewClientCompatibleWithResponses(vendorName string, defaultBaseUrl string, implementsResponses bool, configureCustom func() error) (ret *Client) {
+    ret = NewClientCompatibleNoSetupQuestions(vendorName, configureCustom)
+
+    ret.ApiKey = ret.AddSetupQuestion("API Key", true)
+    ret.ApiBaseURL = ret.AddSetupQuestion("API Base URL", false)
+    ret.ApiBaseURL.Value = defaultBaseUrl
+    ret.ImplementsResponses = implementsResponses
+
+    return
+}
+
 func NewClientCompatibleNoSetupQuestions(vendorName string, configureCustom func() error) (ret *Client) {
     ret = &Client{}
 
@@ -46,9 +59,10 @@ func NewClientCompatibleNoSetupQuestions(vendorName string, configureCust
 
 type Client struct {
     *plugins.PluginBase
-    ApiKey     *plugins.SetupQuestion
-    ApiBaseURL *plugins.SetupQuestion
-    ApiClient  *openai.Client
+    ApiKey              *plugins.SetupQuestion
+    ApiBaseURL          *plugins.SetupQuestion
+    ApiClient           *openai.Client
+    ImplementsResponses bool // Whether this provider supports the Responses API
 }
 
 func (o *Client) configure() (ret error) {
@@ -75,35 +89,59 @@ func (o *Client) ListModels() (ret []string, err error) {
 func (o *Client) SendStream(
     msgs []*chat.ChatCompletionMessage, opts *common.ChatOptions, channel chan string,
 ) (err error) {
-    req := o.buildChatCompletionParams(msgs, opts)
-    stream := o.ApiClient.Chat.Completions.NewStreaming(context.Background(), req)
+    // Use Responses API for OpenAI, Chat Completions API for other providers
+    if o.supportsResponsesAPI() {
+        return o.sendStreamResponses(msgs, opts, channel)
+    }
+    return o.sendStreamChatCompletions(msgs, opts, channel)
+}
+
+func (o *Client) sendStreamResponses(
+    msgs []*chat.ChatCompletionMessage, opts *common.ChatOptions, channel chan string,
+) (err error) {
+    defer close(channel)
+
+    req := o.buildResponseParams(msgs, opts)
+    stream := o.ApiClient.Responses.NewStreaming(context.Background(), req)
     for stream.Next() {
-        chunk := stream.Current()
-        if len(chunk.Choices) > 0 {
-            channel <- chunk.Choices[0].Delta.Content
+        event := stream.Current()
+        switch event.Type {
+        case string(constant.ResponseOutputTextDelta("").Default()):
+            channel <- event.AsResponseOutputTextDelta().Delta
+        case string(constant.ResponseOutputTextDone("").Default()):
+            channel <- event.AsResponseOutputTextDone().Text
         }
     }
     if stream.Err() == nil {
        channel <- "\n"
     }
-    close(channel)
     return stream.Err()
 }
 
 func (o *Client) Send(ctx context.Context, msgs []*chat.ChatCompletionMessage, opts *common.ChatOptions) (ret string, err error) {
-    req := o.buildChatCompletionParams(msgs, opts)
+    // Use Responses API for OpenAI, Chat Completions API for other providers
+    if o.supportsResponsesAPI() {
+        return o.sendResponses(ctx, msgs, opts)
+    }
+    return o.sendChatCompletions(ctx, msgs, opts)
+}
 
-    var resp *openai.ChatCompletion
-    if resp, err = o.ApiClient.Chat.Completions.New(ctx, req); err != nil {
+func (o *Client) sendResponses(ctx context.Context, msgs []*chat.ChatCompletionMessage, opts *common.ChatOptions) (ret string, err error) {
+    req := o.buildResponseParams(msgs, opts)
+
+    var resp *responses.Response
+    if resp, err = o.ApiClient.Responses.New(ctx, req); err != nil {
        return
     }
-    if len(resp.Choices) > 0 {
-        ret = resp.Choices[0].Message.Content
-        slog.Debug("SystemFingerprint: " + resp.SystemFingerprint)
-    }
+    ret = o.extractText(resp)
     return
 }
+
+// supportsResponsesAPI determines if the provider supports the new Responses API
+func (o *Client) supportsResponsesAPI() bool {
+    return o.ImplementsResponses
+}
 
 func (o *Client) NeedsRawMode(modelName string) bool {
     openaiModelsPrefixes := []string{
         "o1",
@@ -115,8 +153,6 @@ func (o *Client) NeedsRawMode(modelName string) bool {
         "gpt-4o-mini-search-preview-2025-03-11",
         "gpt-4o-search-preview",
         "gpt-4o-search-preview-2025-03-11",
-        "o4-mini-deep-research",
-        "o4-mini-deep-research-2025-06-26",
     }
     for _, prefix := range openaiModelsPrefixes {
         if strings.HasPrefix(modelName, prefix) {
@@ -126,56 +162,85 @@ func (o *Client) NeedsRawMode(modelName string) bool {
     return slices.Contains(openAIModelsNeedingRaw, modelName)
 }
 
-func (o *Client) buildChatCompletionParams(
+func (o *Client) buildResponseParams(
     inputMsgs []*chat.ChatCompletionMessage, opts *common.ChatOptions,
-) (ret openai.ChatCompletionNewParams) {
+) (ret responses.ResponseNewParams) {
 
-    // Create a new slice for messages to be sent, converting from []*Msg to []Msg.
-    // This also serves as a mutable copy for provider-specific modifications.
-    messagesForRequest := make([]openai.ChatCompletionMessageParamUnion, len(inputMsgs))
+    items := make([]responses.ResponseInputItemUnionParam, len(inputMsgs))
     for i, msgPtr := range inputMsgs {
-        msg := *msgPtr // copy
-        // Provider-specific modification for DeepSeek:
+        msg := *msgPtr
         if strings.Contains(opts.Model, "deepseek") && len(inputMsgs) == 1 && msg.Role == chat.ChatMessageRoleSystem {
             msg.Role = chat.ChatMessageRoleUser
         }
-        messagesForRequest[i] = convertMessage(msg)
+        items[i] = convertMessage(msg)
     }
-    ret = openai.ChatCompletionNewParams{
-        Model:    openai.ChatModel(opts.Model),
-        Messages: messagesForRequest,
+
+    ret = responses.ResponseNewParams{
+        Model: shared.ResponsesModel(opts.Model),
+        Input: responses.ResponseNewParamsInputUnion{
+            OfInputItemList: items,
+        },
     }
 
     if !opts.Raw {
         ret.Temperature = openai.Float(opts.Temperature)
         ret.TopP = openai.Float(opts.TopP)
-        ret.PresencePenalty = openai.Float(opts.PresencePenalty)
-        ret.FrequencyPenalty = openai.Float(opts.FrequencyPenalty)
         if opts.MaxTokens != 0 {
-            ret.MaxTokens = openai.Int(int64(opts.MaxTokens))
+            ret.MaxOutputTokens = openai.Int(int64(opts.MaxTokens))
         }
+
+        // Add parameters not officially supported by Responses API as extra fields
+        extraFields := make(map[string]any)
+        if opts.PresencePenalty != 0 {
+            extraFields["presence_penalty"] = opts.PresencePenalty
+        }
+        if opts.FrequencyPenalty != 0 {
+            extraFields["frequency_penalty"] = opts.FrequencyPenalty
+        }
         if opts.Seed != 0 {
-            ret.Seed = openai.Int(int64(opts.Seed))
+            extraFields["seed"] = opts.Seed
+        }
+        if len(extraFields) > 0 {
+            ret.SetExtraFields(extraFields)
         }
     }
     return
 }
 
-func convertMessage(msg chat.ChatCompletionMessage) openai.ChatCompletionMessageParamUnion {
-    switch msg.Role {
-    case chat.ChatMessageRoleSystem:
-        return openai.SystemMessage(msg.Content)
-    case chat.ChatMessageRoleUser:
-        if len(msg.MultiContent) > 0 {
-            var parts []openai.ChatCompletionContentPartUnionParam
-            for _, p := range msg.MultiContent {
-                switch p.Type {
-                case chat.ChatMessagePartTypeText:
-                    parts = append(parts, openai.TextContentPart(p.Text))
-                case chat.ChatMessagePartTypeImageURL:
-                    parts = append(parts, openai.ImageContentPart(openai.ChatCompletionContentPartImageImageURLParam{URL: p.ImageURL.URL}))
-                }
-            }
-            return openai.UserMessage(parts)
-        }
-        return openai.UserMessage(msg.Content)
-    default:
-        return openai.AssistantMessage(msg.Content)
-    }
-}
+func convertMessage(msg chat.ChatCompletionMessage) responses.ResponseInputItemUnionParam {
+    result := convertMessageCommon(msg)
+    role := responses.EasyInputMessageRole(result.Role)
+
+    if result.HasMultiContent {
+        var parts []responses.ResponseInputContentUnionParam
+        for _, p := range result.MultiContent {
+            switch p.Type {
+            case chat.ChatMessagePartTypeText:
+                parts = append(parts, responses.ResponseInputContentParamOfInputText(p.Text))
+            case chat.ChatMessagePartTypeImageURL:
+                part := responses.ResponseInputContentParamOfInputImage(responses.ResponseInputImageDetailAuto)
+                if part.OfInputImage != nil {
+                    part.OfInputImage.ImageURL = openai.String(p.ImageURL.URL)
+                }
+                parts = append(parts, part)
+            }
+        }
+        contentList := responses.ResponseInputMessageContentListParam(parts)
+        return responses.ResponseInputItemParamOfMessage(contentList, role)
+    }
+    return responses.ResponseInputItemParamOfMessage(result.Content, role)
+}
+
+func (o *Client) extractText(resp *responses.Response) (ret string) {
+    for _, item := range resp.Output {
+        if item.Type == "message" {
+            for _, c := range item.Content {
+                if c.Type == "output_text" {
+                    ret += c.AsOutputText().Text
+                }
+            }
+            break
+        }
+    }
+    return
+}
```
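The net effect of the dispatch above: `NewClient()` now builds OpenAI with `ImplementsResponses = true`, so `Send` and `SendStream` route through the Responses API, while compatible providers constructed with the flag set to `false` fall back to the Chat Completions helpers. A minimal sketch of both paths (the import path and model name are assumptions, and real use would need the client's API-key configuration, which this skips):

```go
package main

import (
    "context"
    "fmt"

    "github.com/danielmiessler/fabric/chat"
    "github.com/danielmiessler/fabric/common"
    "github.com/danielmiessler/fabric/plugins/ai/openai"
)

func main() {
    // OpenAI proper: ImplementsResponses is true, so Send uses sendResponses.
    oai := openai.NewClient()

    // A compatible provider: false routes Send to sendChatCompletions instead.
    groq := openai.NewClientCompatibleWithResponses("Groq", "https://api.groq.com/openai/v1", false, nil)

    msgs := []*chat.ChatCompletionMessage{{Role: chat.ChatMessageRoleUser, Content: "Say hello."}}
    opts := &common.ChatOptions{Model: "gpt-4o", Temperature: 0.7, TopP: 0.9}

    for _, client := range []*openai.Client{oai, groq} {
        ret, err := client.Send(context.Background(), msgs, opts)
        fmt.Println(ret, err)
    }
}
```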
```diff
@@ -6,10 +6,11 @@ import (
     "github.com/danielmiessler/fabric/chat"
     "github.com/danielmiessler/fabric/common"
     openai "github.com/openai/openai-go"
+    "github.com/openai/openai-go/shared"
     "github.com/stretchr/testify/assert"
 )
 
-func TestBuildChatCompletionRequestPinSeed(t *testing.T) {
+func TestBuildResponseRequestWithMaxTokens(t *testing.T) {
 
     var msgs []*chat.ChatCompletionMessage
 
@@ -21,25 +22,21 @@ func TestBuildChatCompletionRequestPinSeed(t *testing.T) {
     }
 
     opts := &common.ChatOptions{
-        Temperature:      0.8,
-        TopP:             0.9,
-        PresencePenalty:  0.1,
-        FrequencyPenalty: 0.2,
-        Raw:              false,
-        Seed:             1,
+        Temperature: 0.8,
+        TopP:        0.9,
+        Raw:         false,
+        MaxTokens:   50,
     }
 
     var client = NewClient()
-    request := client.buildChatCompletionParams(msgs, opts)
-    assert.Equal(t, openai.ChatModel(opts.Model), request.Model)
+    request := client.buildResponseParams(msgs, opts)
+    assert.Equal(t, shared.ResponsesModel(opts.Model), request.Model)
     assert.Equal(t, openai.Float(opts.Temperature), request.Temperature)
     assert.Equal(t, openai.Float(opts.TopP), request.TopP)
-    assert.Equal(t, openai.Float(opts.PresencePenalty), request.PresencePenalty)
-    assert.Equal(t, openai.Float(opts.FrequencyPenalty), request.FrequencyPenalty)
-    assert.Equal(t, openai.Int(int64(opts.Seed)), request.Seed)
+    assert.Equal(t, openai.Int(int64(opts.MaxTokens)), request.MaxOutputTokens)
 }
 
-func TestBuildChatCompletionRequestNilSeed(t *testing.T) {
+func TestBuildResponseRequestNoMaxTokens(t *testing.T) {
 
     var msgs []*chat.ChatCompletionMessage
 
@@ -51,20 +48,15 @@ func TestBuildChatCompletionRequestNilSeed(t *testing.T) {
     }
 
     opts := &common.ChatOptions{
-        Temperature:      0.8,
-        TopP:             0.9,
-        PresencePenalty:  0.1,
-        FrequencyPenalty: 0.2,
-        Raw:              false,
-        Seed:             0,
+        Temperature: 0.8,
+        TopP:        0.9,
+        Raw:         false,
     }
 
     var client = NewClient()
-    request := client.buildChatCompletionParams(msgs, opts)
-    assert.Equal(t, openai.ChatModel(opts.Model), request.Model)
+    request := client.buildResponseParams(msgs, opts)
+    assert.Equal(t, shared.ResponsesModel(opts.Model), request.Model)
     assert.Equal(t, openai.Float(opts.Temperature), request.Temperature)
     assert.Equal(t, openai.Float(opts.TopP), request.TopP)
-    assert.Equal(t, openai.Float(opts.PresencePenalty), request.PresencePenalty)
-    assert.Equal(t, openai.Float(opts.FrequencyPenalty), request.FrequencyPenalty)
-    assert.False(t, request.Seed.Valid())
+    assert.False(t, request.MaxOutputTokens.Valid())
 }
```
```diff
@@ -9,8 +9,9 @@ import (
 
 // ProviderConfig defines the configuration for an OpenAI-compatible API provider
 type ProviderConfig struct {
-    Name    string
-    BaseURL string
+    Name                string
+    BaseURL             string
+    ImplementsResponses bool // Whether the provider supports OpenAI's new Responses API
 }
 
 // Client is the common structure for all OpenAI-compatible providers
@@ -21,51 +22,66 @@ type Client struct {
 // NewClient creates a new OpenAI-compatible client for the specified provider
 func NewClient(providerConfig ProviderConfig) *Client {
     client := &Client{}
-    client.Client = openai.NewClientCompatible(providerConfig.Name, providerConfig.BaseURL, nil)
+    client.Client = openai.NewClientCompatibleWithResponses(
+        providerConfig.Name,
+        providerConfig.BaseURL,
+        providerConfig.ImplementsResponses,
+        nil,
+    )
     return client
 }
 
 // ProviderMap is a map of provider name to ProviderConfig for O(1) lookup
 var ProviderMap = map[string]ProviderConfig{
     "AIML": {
-        Name:    "AIML",
-        BaseURL: "https://api.aimlapi.com/v1",
+        Name:                "AIML",
+        BaseURL:             "https://api.aimlapi.com/v1",
+        ImplementsResponses: false,
     },
     "Cerebras": {
-        Name:    "Cerebras",
-        BaseURL: "https://api.cerebras.ai/v1",
+        Name:                "Cerebras",
+        BaseURL:             "https://api.cerebras.ai/v1",
+        ImplementsResponses: false,
     },
     "DeepSeek": {
-        Name:    "DeepSeek",
-        BaseURL: "https://api.deepseek.com",
+        Name:                "DeepSeek",
+        BaseURL:             "https://api.deepseek.com",
+        ImplementsResponses: false,
     },
     "GrokAI": {
-        Name:    "GrokAI",
-        BaseURL: "https://api.x.ai/v1",
+        Name:                "GrokAI",
+        BaseURL:             "https://api.x.ai/v1",
+        ImplementsResponses: false,
     },
     "Groq": {
-        Name:    "Groq",
-        BaseURL: "https://api.groq.com/openai/v1",
+        Name:                "Groq",
+        BaseURL:             "https://api.groq.com/openai/v1",
+        ImplementsResponses: false,
     },
     "Langdock": {
-        Name:    "Langdock",
-        BaseURL: "https://api.langdock.com/openai/{{REGION=us}}/v1",
+        Name:                "Langdock",
+        BaseURL:             "https://api.langdock.com/openai/{{REGION=us}}/v1",
+        ImplementsResponses: false,
    },
     "LiteLLM": {
-        Name:    "LiteLLM",
-        BaseURL: "http://localhost:4000",
+        Name:                "LiteLLM",
+        BaseURL:             "http://localhost:4000",
+        ImplementsResponses: false,
     },
     "Mistral": {
-        Name:    "Mistral",
-        BaseURL: "https://api.mistral.ai/v1",
+        Name:                "Mistral",
+        BaseURL:             "https://api.mistral.ai/v1",
+        ImplementsResponses: false,
     },
     "OpenRouter": {
-        Name:    "OpenRouter",
-        BaseURL: "https://openrouter.ai/api/v1",
+        Name:                "OpenRouter",
+        BaseURL:             "https://openrouter.ai/api/v1",
+        ImplementsResponses: false,
     },
     "SiliconCloud": {
-        Name:    "SiliconCloud",
-        BaseURL: "https://api.siliconflow.cn/v1",
+        Name:                "SiliconCloud",
+        BaseURL:             "https://api.siliconflow.cn/v1",
+        ImplementsResponses: false,
     },
 }
```
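Every entry currently sets `ImplementsResponses: false`; a provider that gains Responses API support would simply flip the flag, and `NewClient` would route it through the Responses path. A hypothetical entry (name and URL invented purely for illustration):

```go
"ExampleProvider": {
    Name:                "ExampleProvider",
    BaseURL:             "https://api.example.com/v1", // assumed URL, not a real provider
    ImplementsResponses: true,                         // routes through the Responses API path
},
```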
```diff
@@ -1,3 +1,3 @@
 package main
 
-var version = "v1.4.221"
+var version = "v1.4.223"
```