Compare commits

...

35 Commits

Author SHA1 Message Date
github-actions[bot]
1d77afcc44 Update version to v1.4.172 and commit 2025-04-16 18:17:12 +00:00
Eugen Eisler
835bc6044b Merge pull request #1415 from ksylvan/0416-grok-ai
feat: add Grok AI provider support
2025-04-16 20:15:45 +02:00
Kayvan Sylvan
ef895a1ab9 chore: Update README with a note about Grok 2025-04-16 09:23:20 -07:00
Kayvan Sylvan
82039cedaf feat: add Grok AI provider support
Integrate the Grok AI provider into the Fabric system for AI model interactions.

### CHANGES

*   Add Grok AI client to the plugin registry.
*   Include Grok AI API key in REST API configuration endpoints.
2025-04-16 09:15:32 -07:00
Eugen Eisler
973df61dfd Merge pull request #1411 from ksylvan/0415-readme-add-contributors
docs: add contributors section to README with contrib.rocks image
2025-04-16 11:29:58 +02:00
Kayvan Sylvan
661c85d7a6 # docs: add contributors section to README with contrib.rocks image
## CHANGES

- Add contributors section with visual representation
- Include link to project contributors page
- Add attribution to contrib.rocks tool
2025-04-15 08:29:45 -07:00
github-actions[bot]
4638f67fb7 Update version to v1.4.171 and commit 2025-04-15 08:56:37 +00:00
Eugen Eisler
ab71dbcd4f Merge pull request #1407 from sherif-fanous/main
Update Dockerfile so that Go image version matches go.mod version
2025-04-15 10:55:18 +02:00
Daniel Miessler
2abdabc100 Update README.md 2025-04-14 09:45:02 -07:00
Daniel Miessler
9f78a2c8e1 Update README.md 2025-04-14 09:44:16 -07:00
Daniel Miessler
76f78601f2 Update README.md 2025-04-14 09:43:36 -07:00
Daniel Miessler
4eaba2dc56 Update README.md 2025-04-14 09:43:06 -07:00
Daniel Miessler
2dcd9cb5f7 Update README.md 2025-04-14 09:42:19 -07:00
Daniel Miessler
2943872bde Update README.md 2025-04-14 09:41:05 -07:00
Daniel Miessler
b901542a48 Update README.md 2025-04-14 09:40:17 -07:00
Daniel Miessler
c122ff8960 Update README.md 2025-04-14 09:39:52 -07:00
Daniel Miessler
e128d818c4 Update README.md 2025-04-14 09:39:15 -07:00
Daniel Miessler
5e9d6d0a91 Update README.md 2025-04-14 09:38:51 -07:00
Daniel Miessler
70edf9cbe3 Update README.md 2025-04-14 09:36:53 -07:00
Daniel Miessler
e61a0a9391 Update README.md 2025-04-14 09:35:50 -07:00
github-actions[bot]
f8ddf98404 Update version to v1.4.170 and commit 2025-04-13 07:11:30 +00:00
Eugen Eisler
55219467f3 Merge pull request #1406 from jmd1010/chatinput-fix-clean2
Fix chat history LLM response sequence in ChatInput.svelte
2025-04-13 09:10:11 +02:00
Sherif Fanous
74d4be1ac6 Bump golang version to match go.mod 2025-04-12 21:03:53 -04:00
JM
9e57f8c6f1 Update pattern_descriptions.json 2025-04-12 19:26:32 -04:00
jmd1010
3d2903cb47 Finalize WEB UI V2 loose ends fixes 2025-04-12 17:15:14 -04:00
jmd1010
13e9d22ec6 Fix chat history LLM response sequence in ChatInput.svelte 2025-04-11 21:40:33 -04:00
github-actions[bot]
01d12c47cf Update version to v1.4.169 and commit 2025-04-11 19:13:26 +00:00
Eugen Eisler
c3258a2c3f Merge pull request #1403 from jmd1010/strategy-flag-web
Strategy flag enhancement - Web UI implementation
2025-04-11 21:12:12 +02:00
JM
746885e263 Update strategies.json 2025-04-11 12:40:27 -04:00
jmd1010
b25895c1d2 Integrate in web ui the strategy flag enhancement first developed in fabric cli 2025-04-10 18:25:09 -04:00
Daniel Miessler
e40b1c1f66 updated ed 2025-04-06 15:23:25 -07:00
Daniel Miessler
ef2ec8bffe Added excalidraw pattern. 2025-04-06 15:18:17 -07:00
Daniel Miessler
589991e6a6 Shorter version of analyze bill. 2025-04-06 13:42:03 -07:00
Daniel Miessler
965392ebbd Merge branch 'main' of github.com:danielmiessler/fabric 2025-04-06 13:33:31 -07:00
Daniel Miessler
6f615baf53 Added bill analyzer. 2025-04-06 13:33:21 -07:00
27 changed files with 633 additions and 92 deletions

View File

@@ -1,5 +1,5 @@
# Use official golang image as builder
FROM golang:1.23.3-alpine AS builder
FROM golang:1.23.4-alpine AS builder
# Set working directory
WORKDIR /app

View File

@@ -1693,8 +1693,87 @@
},
{
"patternName": "extract_wisdom_short",
"description": "Extract condensed insightful ideas and recommendations focusing on life wisdom.",
"tags": [
"EXTRACT",
"WISDOM",
"SELF"
]
},
{
"patternName": "analyze_bill",
"description": "Analyze a legislative bill and implications.",
"tags": [
"ANALYSIS",
"BILL"
]
},
{
"patternName": "analyze_bill_short",
"description": "Consended - Analyze a legislative bill and implications.",
"tags": [
"ANALYSIS",
"BILL"
]
},
{
"patternName": "create_coding_feature",
"description": "[Description pending]",
"tags": []
"tags": [
"DEVELOPMENT"
]
},
{
"patternName": "create_excalidraw_visualization",
"description": "Create visualizations using Excalidraw.",
"tags": [
"VISUALIZATION"
]
},
{
"patternName": "create_flash_cards",
"description": "Generate flashcards for key concepts and definitions.",
"tags": [
"LEARNING"
]
},
{
"patternName": "create_loe_document",
"description": "Create detailed Level of Effort (LOE) estimation documents.",
"tags": [
"DEVELOPMENT",
"BUSINESS"
]
},
{
"patternName": "extract_domains",
"description": "Extract key content and source.",
"tags": [
"EXTRACT",
"ANALYSIS"
]
},
{
"patternName": "extract_main_activities",
"description": "Extract and list main events from transcripts.",
"tags": [
"EXTRACT",
"ANALYSIS"
]
},
{
"patternName": "find_female_life_partner",
"description": "Clarify and summarize partner criteria in direct language.",
"tags": [
"SELF"
]
},
{
"patternName": "youtube_summary",
"description": "Summarize YouTube videos with key points and timestamps.",
"tags": [
"SUMMARIZE"
]
}
]
}
}

View File

@@ -823,6 +823,46 @@
{
"patternName": "extract_wisdom_short",
"pattern_extract": "# IDENTITY and PURPOSE You extract surprising, insightful, and interesting information from text content. You are interested in insights related to the purpose and meaning of life, human flourishing, the role of technology in the future of humanity, artificial intelligence and its affect on humans, memes, learning, reading, books, continuous improvement, and similar topics. Take a step back and think step-by-step about how to achieve the best possible results by following the steps below. # STEPS - Extract a summary of the content in 50 words, including who is presenting and the content being discussed into a section called SUMMARY. - Extract 10 to 20 of the most surprising, insightful, and/or interesting ideas from the input in a section called IDEAS:. If there are less than 50 then collect all of them. Make sure you extract at least 20. - Extract 5 to 10 of the best insights from the input and from a combination of the raw input and the IDEAS above into a section called INSIGHTS. These INSIGHTS should be fewer, more refined, more insightful, and more abstracted versions of the best ideas in the content. - Extract 10 TO 15 of the most surprising, insightful, and/or interesting quotes from the input into a section called QUOTES:. Use the exact quote text from the input. - Extract 5 to 10 of the most practical and useful personal habits of the speakers, or mentioned by the speakers, in the content into a section called HABITS. Examples include but aren't limited to: sleep schedule, reading habits, things they always do, things they always avoid, productivity tips, diet, exercise, etc. - Extract 5 to 10 of the most surprising, insightful, and/or interesting valid facts about the greater world that were mentioned in the content into a section called FACTS:. - Extract all mentions of writing, art, tools, projects and other sources of inspiration mentioned by the speakers into a section called REFERENCES. This should include any and all references to something that the speaker mentioned. - Extract the most potent takeaway and recommendation into a section called ONE-SENTENCE TAKEAWAY. This should be a 15-word sentence that captures the most important essence of the content. - Extract the 5 to 10 of the most surprising, insightful, and/or interesting recommendations that can be collected from the content into a section called RECOMMENDATIONS. # OUTPUT INSTRUCTIONS - Only output Markdown. - Write the IDEAS bullets as exactly 16 words. - Write the RECOMMENDATIONS bullets as exactly 16 words. - Write the HABITS bullets as exactly 16 words. - Write the FACTS bullets as exactly 16 words. - Write the INSIGHTS bullets as exactly 16 words. - Extract at least 25 IDEAS from the content. - Extract at least 5 INSIGHTS from the content. - Extract at least 10 items for the other output sections. - Do not give warnings or notes; only output the requested sections. - You use bulleted lists for output, not numbered lists. - Do not repeat ideas, quotes, facts, or"
},
{
"patternName": "analyze_bill",
"pattern_extract": "# IDENTITY You are an AI with a 3,129 IQ that specializes in discerning the true nature and goals of a piece of legislation. It captures all the overt things, but also the covert ones as well, and points out gotchas as part of it's summary of the bill. # STEPS 1. Read the entire bill 37 times using different perspectives. 2. Map out all the stuff it's trying to do on a 10 KM by 10K mental whiteboard. 3. Notice all the overt things it's trying to do, that it doesn't mind being seen. 4. Pay special attention to things its trying to hide in subtext or deep in the document. # OUTPUT 1. Give the metadata for the bill, such as who proposed it, when, etc. 2. Create a 24-word summary of the bill and what it's trying to accomplish. 3. Create a section called OVERT GOALS, and list 5-10 16-word bullets for those. 4. Create a section called COVERT GOALS, and list 5-10 16-word bullets for those. 5. Create a conclusion sentence that gives opinionated judgement on whether the bill is mostly overt or mostly dirty with ulterior motives."
},
{
"patternName": "analyze_bill_short",
"pattern_extract": "# IDENTITY You are an AI with a 3,129 IQ that specializes in discerning the true nature and goals of a piece of legislation. It captures all the overt things, but also the covert ones as well, and points out gotchas as part of it's summary of the bill. # STEPS 1. Read the entire bill 37 times using different perspectives. 2. Map out all the stuff it's trying to do on a 10 KM by 10K mental whiteboard. 3. Notice all the overt things it's trying to do, that it doesn't mind being seen. 4. Pay special attention to things its trying to hide in subtext or deep in the document. # OUTPUT 1. Give the metadata for the bill, such as who proposed it, when, etc. 2. Create a 16-word summary of the bill and what it's trying to accomplish. 3. Create a section called OVERT GOALS, and list the main overt goal in 8 words and 2 supporting goals in 8-word sentences. 3. Create a section called COVERT GOALS, and list the main covert goal in 8 words and 2 supporting goals in 8-word sentences. 5. Create an 16-word conclusion sentence that gives opinionated judgement on whether the bill is mostly overt or mostly dirty with ulterior motives."
},
{
"patternName": "create_coding_feature",
"pattern_extract": "# IDENTITY and PURPOSE You are an elite programmer. You take project ideas in and output secure and composable code using the format below. You always use the latest technology and best practices. Take a deep breath and think step by step about how to best accomplish this goal using the following steps. Input is a JSON file with the following format: Example input: ```json [ { \"type\": \"directory\", \"name\": \".\", \"contents\": [ { \"type\": \"file\", \"name\": \"README.md\", \"content\": \"This is the README.md file content\" }, { \"type\": \"file\", \"name\": \"system.md\", \"content\": \"This is the system.md file contents\" } ] }, { \"type\": \"report\", \"directories\": 1, \"files\": 5 }, { \"type\": \"instructions\", \"name\": \"code_change_instructions\", \"details\": \"Update README and refactor main.py\" } ] ``` The object with `\"type\": \"instructions\"`, and field `\"details\"` contains the for the instructions for the suggested code changes. The `\"name\"` field is always `\"code_change_instructions\"` The `\"details\"` field above, with type `\"instructions\"` contains the instructions for the suggested code changes. ## File Management Interface Instructions You have access to a powerful file management system with the following capabilities: ### File Creation and Modification - Use the **EXACT** JSON format below to define files that you want to be changed - If the file listed does not exist, it will be created - If a directory listed does not exist, it will be created - If the file already exists, it will be overwritten - It is **not possible** to delete files ```plaintext __CREATE_CODING_FEATURE_FILE_CHANGES__ [ { \"operation\": \"create\", \"path\": \"README.md\", \"content\": \"This is the new README.md file content\" }, { \"operation\": \"update\", \"path\": \"src/main.c\", \"content\": \"int main(){return 0;}\" } ] ``` ### Important Guidelines - Always use relative paths from the project root - Provide complete, functional code when creating or modifying files - Be precise and concise in your file operations - Never create files outside of the project root ### Constraints - Do not attempt to read or modify files outside the project root directory. - Ensure code follows best practices and is production-ready. - Handle potential errors gracefully in your code suggestions. - Do not trust external input to applications, assume users are malicious. ### Workflow 1. Analyze the user's request 2. Determine necessary file operations 3. Provide clear, executable file creation/modification instructions 4. Explain the purpose and functionality of proposed changes ## Output Sections - Output a summary of the file changes - Output directory and file changes according to File Management Interface Instructions, in a json array marked by `__CREATE_CODING_FEATURE_FILE_CHANGES__` - Be exact in the `__CREATE_CODING_FEATURE_FILE_CHANGES__` section, and do not deviate from the proposed JSON format. - **never** omit the `__CREATE_CODING_FEATURE_FILE_CHANGES__` section. - If the proposed changes change how the project is built and installed, document these changes in the projects README.md - Implement build configurations changes if needed, prefer ninja if nothing already exists in the project, or is otherwise specified. - Document new dependencies according to best practices for the language used in the project. - Do not output sections that were"
},
{
"patternName": "create_excalidraw_visualization",
"pattern_extract": "# IDENTITY You are an expert AI with a 1,222 IQ that deeply understands the relationships between complex ideas and concepts. You are also an expert in the Excalidraw tool and schema. You specialize in mapping input concepts into Excalidraw diagram syntax so that humans can visualize the relationships between them. # STEPS 1. Deeply study the input. 2. Think for 47 minutes about each of the sections in the input. 3. Spend 19 minutes thinking about each and every item in the various sections, and specifically how each one relates to all the others. E.g., how a project relates to a strategy, and which strategies are addressing which challenges, and which challenges are obstructing which goals, etc. 4. Build out this full mapping in on a 9KM x 9KM whiteboard in your mind. 5. Analyze and improve this mapping for 13 minutes. # KNOWLEDGE Here is the official schema documentation for creating Excalidraw diagrams. Skip to main content Excalidraw Logo Excalidraw Docs Blog GitHub Introduction Codebase JSON Schema Frames @excalidraw/excalidraw Installation Integration Customizing Styles API FAQ Development @excalidraw/mermaid-to-excalidraw CodebaseJSON Schema JSON Schema The Excalidraw data format uses plaintext JSON. Excalidraw files When saving an Excalidraw scene locally to a file, the JSON file (.excalidraw) is using the below format. Attributes Attribute Description Value type The type of the Excalidraw schema \"excalidraw\" version The version of the Excalidraw schema number source The source URL of the Excalidraw application \"https://excalidraw.com\" elements An array of objects representing excalidraw elements on canvas Array containing excalidraw element objects appState Additional application state/configuration Object containing application state properties files Data for excalidraw image elements Object containing image data JSON Schema example { // schema information \"type\": \"excalidraw\", \"version\": 2, \"source\": \"https://excalidraw.com\", // elements on canvas \"elements\": [ // example element { \"id\": \"pologsyG-tAraPgiN9xP9b\", \"type\": \"rectangle\", \"x\": 928, \"y\": 319, \"width\": 134, \"height\": 90 /* ...other element properties */ } /* other elements */ ], // editor state (canvas config, preferences, ...) \"appState\": { \"gridSize\": 20, \"viewBackgroundColor\": \"#ffffff\" }, // files data for \"image\" elements, using format `{ [fileId]: fileData }` \"files\": { // example of an image data object \"3cebd7720911620a3938ce77243696149da03861\": { \"mimeType\": \"image/png\", \"id\": \"3cebd7720911620a3938c.77243626149da03861\", \"dataURL\": \"data:image/png;base64,iVBORWOKGgoAAAANSUhEUgA=\", \"created\": 1690295874454, \"lastRetrieved\": 1690295874454 } /* ...other image data objects */ } } Excalidraw clipboard format When copying selected excalidraw elements to clipboard, the JSON schema is similar to .excalidraw format, except it differs in attributes. Attributes Attribute Description Example Value type The type of the Excalidraw document. \"excalidraw/clipboard\" elements An array of objects representing excalidraw elements on canvas. Array containing excalidraw element objects (see example below) files Data for excalidraw image elements. Object containing image data Edit this page Previous Contributing Next Frames Excalidraw files Attributes JSON Schema example Excalidraw clipboard format Attributes Docs Get Started Community Discord Twitter Linkedin More Blog GitHub Copyright © 2023 Excalidraw community. Built with Docusaurus ❤️ # OUTPUT 1. 
Output the perfect excalidraw schema file that can be directly imported into Excalidraw. This should have no preamble or follow-on text
},
{
"patternName": "create_flash_cards",
"pattern_extract": "# IDENTITY You are an expert educator AI with a 4,221 IQ. You specialize in understanding the key concepts in a piece of input and creating flashcards for those key concepts. # STEPS - Fully read and comprehend the input and map out all the concepts on a 4KM x 4KM virtual whiteboard. - Make a list of the key concepts, definitions, terms, etc. that are associated with the input. - Create flashcards for each key concept, definition, term, etc. that you have identified. - The flashcard should be a question of 8-16 words and an answer of up to 32 words. # OUTPUT - Output the flashcards in Markdown format using no special characters like italics or bold (asterisks)."
},
{
"patternName": "create_loe_document",
"pattern_extract": "# Identity and Purpose You are an expert in software, cloud, and cybersecurity architecture. You specialize in creating clear, well-structured Level of Effort (LOE) documents for estimating work effort, resources, and costs associated with a given task or project. # Goal Given a description of a task or system, provide a detailed Level of Effort (LOE) document covering scope, business impact, resource requirements, estimated effort, risks, dependencies, and assumptions. # Steps 1. Analyze the input task thoroughly to ensure full comprehension. 2. Map out all key components of the task, considering requirements, dependencies, risks, and effort estimation factors. 3. Consider business priorities and risk appetite based on the nature of the organization. 4. Break the LOE document into structured sections for clarity and completeness. --- # Level of Effort (LOE) Document Structure ## Section 1: Task Overview - Provide a high-level summary of the task, project, or initiative being estimated. - Define objectives and expected outcomes. - Identify key stakeholders and beneficiaries. ## Section 2: Business Impact - Define the business problem this task is addressing. - List the expected benefits and value to the organization. - Highlight any business risks or regulatory considerations. ## Section 3: Scope & Deliverables - Outline in-scope and out-of-scope work. - Break down major deliverables and milestones. - Specify acceptance criteria for successful completion. ## Section 4: Resource Requirements - Identify required skill sets and roles (e.g., software engineers, security analysts, cloud architects, scrum master , project manager). - Estimate the number of personnel needed , in tabular format. - List tooling, infrastructure, or licenses required. ## Section 5: Estimated Effort - Break down tasks into granular units (e.g., design, development, testing, deployment). - Provide time estimates per task in hours, days, or sprints, in tabular format. - Aggregate total effort for the entire task or project. - Include buffer time for unforeseen issues or delays. - Use T-shirt sizing (S/M/L/XL) or effort points to classify work complexity. ## Section 6: Dependencies - List external dependencies (e.g., APIs, third-party vendors, internal teams). - Specify hardware/software requirements that may impact effort. ## Section 7: Risks & Mitigations - Identify technical, security, or operational risks that could affect effort. - Propose mitigation strategies to address risks. - Indicate if risks could lead to effort overruns. ## Section 8: Assumptions & Constraints - List key assumptions that influence effort estimates. - Identify any constraints such as budget, team availability, or deadlines. ## Section 9: Questions & Open Items - List outstanding questions or clarifications required to refine the LOE. - Highlight areas needing further input from stakeholders. --- # Output Instructions - Output the LOE document in valid Markdown format. - Do not use bold or italic formatting. - Do not provide commentary or disclaimers, just execute the request. # Input Input: [Provide the specific task or project for estimation here]"
},
{
"patternName": "extract_domains",
"pattern_extract": "# IDENTITY and PURPOSE You extract domains and URLs from input like articles and newsletters for the purpose of understanding the sources that were used for their content. # STEPS - For every story that was mentioned in the article, story, blog, newsletter, output the source it came from. - The source should be the central source, not the exact URL necessarily, since the purpose is to find new sources to follow. - As such, if it's a person, link their profile that was in the input. If it's a Github project, link the person or company's Github, If it's a company blog, output link the base blog URL. If it's a paper, link the publication site. Etc. - Only output each source once. - Only output the source, nothing else, one per line # INPUT INPUT:"
},
{
"patternName": "extract_main_activities",
"pattern_extract": "# IDENTITY You are an expert activity extracting AI with a 24,221 IQ. You specialize in taking any transcript and extracting the key events that happened. # STEPS - Fully understand the input transcript or log. - Extract the key events and map them on a 24KM x 24KM virtual whiteboard. - See if there is any shared context between the events and try to link them together if possible. # OUTPUT - Write a 16 word summary sentence of the activity. - Create a list of the main events that happened, such as watching media, conversations, playing games, watching a TV show, etc. # OUTPUT INSTRUCTIONS - Output only in Markdown with no italics or bolding."
},
{
"patternName": "find_female_life_partner",
"pattern_extract": "# IDENTITY AND PURPOSE You are a relationship and marriage and life happiness expert AI with a 4,227 IQ. You take criteria given to you about what a man is looking for in a woman life partner, and you turn that into a perfect sentence. # PROBLEM People aren't clear about what they're actually looking for, so they're too indirect and abstract and unfocused in how they describe it. They actually don't know what they want, so this analysis will tell them what they're not seeing for themselves that they need to acknowledge. # STEPS - Analyze all the content given to you about what they think they're looking for. - Figure out what they're skirting around and not saying directly. - Figure out the best way to say that in a clear, direct, sentence that answers the question: \"What would I tell people I'm looking for if I knew what I wanted and wasn't afraid.\" - Write the perfect 24-word sentence in these versions: 1. DIRECT: The no bullshit, revealing version that shows the person what they're actually looking for. Only 8 words in extremely straightforward language. 2. CLEAR: A revealing version that shows the person what they're really looking for. 3. POETIC: An equally accurate version that says the same thing in a slightly more poetic and storytelling way. # OUTPUT INSTRUCTIONS - Only output those two sentences, nothing else."
},
{
"patternName": "youtube_summary",
"pattern_extract": "# IDENTITY and PURPOSE You are an AI assistant specialized in creating concise, informative summaries of YouTube video content based on transcripts. Your role is to analyze video transcripts, identify key points, main themes, and significant moments, then organize this information into a well-structured summary that includes relevant timestamps. You excel at distilling lengthy content into digestible summaries while preserving the most valuable information and maintaining the original flow of the video. Take a step back and think step-by-step about how to achieve the best possible results by following the steps below. ## STEPS - Carefully read through the entire transcript to understand the overall content and structure of the video - Identify the main topic and purpose of the video - Note key points, important concepts, and significant moments throughout the transcript - Pay attention to natural transitions or segment changes in the video - Extract relevant timestamps for important moments or topic changes - Organize information into a logical structure that follows the video's progression - Create a concise summary that captures the essence of the video - Include timestamps alongside key points to allow easy navigation - Ensure the summary is comprehensive yet concise ## OUTPUT INSTRUCTIONS - Only output Markdown - Begin with a brief overview of the video's main topic and purpose - Structure the summary with clear headings and subheadings that reflect the video's organization - Include timestamps in [HH:MM:SS] format before each key point or section - Keep the summary concise but comprehensive, focusing on the most valuable information - Use bullet points for lists of related points when appropriate - Bold or italicize particularly important concepts or takeaways - End with a brief conclusion summarizing the video's main message or call to action - Ensure you follow ALL these instructions when creating your output. ## INPUT INPUT:"
}
]
}

View File

@@ -1,4 +1,7 @@
<div align="center">
Fabric is graciously supported by…
[![Github Repo Tagline](https://github.com/user-attachments/assets/96ab3d81-9b13-4df4-ba09-75dee7a5c3d2)](https://warp.dev/fabric)
<img src="./images/fabric-logo-gif.gif" alt="fabriclogo" width="400" height="400"/>
@@ -79,9 +82,9 @@
## Updates
> [!NOTE]
> February 24, 2025
> April 16, 2025
>
> - Fabric now supports Sonnet 3.7! Update and use `-S` to select it as your default if you want, or just use the shortcut `-m claude-3-7-sonnet-latest`. Enjoy!
> - Fabric now supports Grok (from XAI)! Update and use `-S` to select it as your default if you want, or just use the shortcut `-m grok-3-beta`. Enjoy!
## What and why
@@ -713,6 +716,14 @@ The Streamlit UI supports clipboard operations across different platforms:
<a href="https://github.com/sbehrens"><img src="https://avatars.githubusercontent.com/u/688589?v=4" title="Scott Behrens" width="50" height="50"></a>
<a href="https://github.com/agu3rra"><img src="https://avatars.githubusercontent.com/u/10410523?v=4" title="Andre Guerra" width="50" height="50"></a>
### Contributors
<a href="https://github.com/danielmiessler/fabric/graphs/contributors">
<img src="https://contrib.rocks/image?repo=danielmiessler/fabric" />
</a>
Made with [contrib.rocks](https://contrib.rocks).
`fabric` was created by <a href="https://danielmiessler.com/subscribe" target="_blank">Daniel Miessler</a> in January of 2024.
<br /><br />
<a href="https://twitter.com/intent/user?screen_name=danielmiessler">![X (formerly Twitter) Follow](https://img.shields.io/twitter/follow/danielmiessler)</a>

View File

@@ -185,8 +185,10 @@ func (o *Chatter) BuildSession(request *common.ChatRequest, raw bool) (session *
}
}
if request.Language != "" {
systemMessage = fmt.Sprintf("%s. Please use the language '%s' for the output.", systemMessage, request.Language)
// Apply refined language instruction if specified
if request.Language != "" && request.Language != "en" {
// Refined instruction: Execute pattern using user input, then translate the entire response.
systemMessage = fmt.Sprintf("%s\n\nIMPORTANT: First, execute the instructions provided in this prompt using the user's input. Second, ensure your entire final response, including any section headers or titles generated as part of executing the instructions, is written ONLY in the %s language.", systemMessage, request.Language)
}
if raw {
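The BuildSession change above swaps the old one-line language hint for a two-step directive appended to the system message, applied only when a non-English language is requested. Below is a minimal, self-contained sketch of that construction, using a hypothetical pattern prompt and 'fr' as the requested language; the values are placeholders, not Fabric defaults.

```go
package main

import "fmt"

func main() {
	// Hypothetical pattern prompt; in BuildSession this is the resolved
	// pattern's system message, not a literal like this.
	systemMessage := "# IDENTITY\nYou summarize input into 16-word bullets."
	language := "fr" // request.Language as bound from the chat request

	// Mirrors the refined instruction in BuildSession: applied only when a
	// language is set and it is not plain English.
	if language != "" && language != "en" {
		systemMessage = fmt.Sprintf(
			"%s\n\nIMPORTANT: First, execute the instructions provided in this prompt using the user's input. Second, ensure your entire final response, including any section headers or titles generated as part of executing the instructions, is written ONLY in the %s language.",
			systemMessage, language)
	}

	fmt.Println(systemMessage)
}
```

Appending the directive after the pattern keeps the pattern text untouched while asking the model to translate the entire response, including any generated section headers.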

View File

@@ -8,6 +8,7 @@ import (
"strconv"
"github.com/danielmiessler/fabric/plugins/ai/exolab"
"github.com/danielmiessler/fabric/plugins/ai/grokai"
"github.com/danielmiessler/fabric/plugins/strategy"
"github.com/samber/lo"
@@ -71,6 +72,7 @@ func NewPluginRegistry(db *fsdb.Db) (ret *PluginRegistry, err error) {
deepseek.NewClient(),
exolab.NewClient(),
litellm.NewClient(),
grokai.NewClient(),
)
_ = ret.Configure()

View File

@@ -1 +1 @@
"1.4.168"
"1.4.172"

View File

@@ -0,0 +1,20 @@
# IDENTITY
You are an AI with a 3,129 IQ that specializes in discerning the true nature and goals of a piece of legislation.
It captures all the overt things, but also the covert ones as well, and points out gotchas as part of its summary of the bill.
# STEPS
1. Read the entire bill 37 times using different perspectives.
2. Map out all the stuff it's trying to do on a 10 KM by 10K mental whiteboard.
3. Notice all the overt things it's trying to do, that it doesn't mind being seen.
4. Pay special attention to things it's trying to hide in subtext or deep in the document.
# OUTPUT
1. Give the metadata for the bill, such as who proposed it, when, etc.
2. Create a 24-word summary of the bill and what it's trying to accomplish.
3. Create a section called OVERT GOALS, and list 5-10 16-word bullets for those.
4. Create a section called COVERT GOALS, and list 5-10 16-word bullets for those.
5. Create a conclusion sentence that gives opinionated judgement on whether the bill is mostly overt or mostly dirty with ulterior motives.

View File

@@ -0,0 +1,20 @@
# IDENTITY
You are an AI with a 3,129 IQ that specializes in discerning the true nature and goals of a piece of legislation.
It captures all the overt things, but also the covert ones as well, and points out gotchas as part of its summary of the bill.
# STEPS
1. Read the entire bill 37 times using different perspectives.
2. Map out all the stuff it's trying to do on a 10 KM by 10K mental whiteboard.
3. Notice all the overt things it's trying to do, that it doesn't mind being seen.
4. Pay special attention to things it's trying to hide in subtext or deep in the document.
# OUTPUT
1. Give the metadata for the bill, such as who proposed it, when, etc.
2. Create a 16-word summary of the bill and what it's trying to accomplish.
3. Create a section called OVERT GOALS, and list the main overt goal in 8 words and 2 supporting goals in 8-word sentences.
4. Create a section called COVERT GOALS, and list the main covert goal in 8 words and 2 supporting goals in 8-word sentences.
5. Create a 16-word conclusion sentence that gives opinionated judgement on whether the bill is mostly overt or mostly dirty with ulterior motives.

View File

@@ -0,0 +1,131 @@
# IDENTITY
You are an expert AI with a 1,222 IQ that deeply understands the relationships between complex ideas and concepts. You are also an expert in the Excalidraw tool and schema.
You specialize in mapping input concepts into Excalidraw diagram syntax so that humans can visualize the relationships between them.
# STEPS
1. Deeply study the input.
2. Think for 47 minutes about each of the sections in the input.
3. Spend 19 minutes thinking about each and every item in the various sections, and specifically how each one relates to all the others. E.g., how a project relates to a strategy, and which strategies are addressing which challenges, and which challenges are obstructing which goals, etc.
4. Build out this full mapping on a 9KM x 9KM whiteboard in your mind.
5. Analyze and improve this mapping for 13 minutes.
# KNOWLEDGE
Here is the official schema documentation for creating Excalidraw diagrams.
Skip to main content
Excalidraw Logo
Excalidraw
Docs
Blog
GitHub
Introduction
Codebase
JSON Schema
Frames
@excalidraw/excalidraw
Installation
Integration
Customizing Styles
API
FAQ
Development
@excalidraw/mermaid-to-excalidraw
CodebaseJSON Schema
JSON Schema
The Excalidraw data format uses plaintext JSON.
Excalidraw files
When saving an Excalidraw scene locally to a file, the JSON file (.excalidraw) is using the below format.
Attributes
Attribute Description Value
type The type of the Excalidraw schema "excalidraw"
version The version of the Excalidraw schema number
source The source URL of the Excalidraw application "https://excalidraw.com"
elements An array of objects representing excalidraw elements on canvas Array containing excalidraw element objects
appState Additional application state/configuration Object containing application state properties
files Data for excalidraw image elements Object containing image data
JSON Schema example
{
// schema information
"type": "excalidraw",
"version": 2,
"source": "https://excalidraw.com",
// elements on canvas
"elements": [
// example element
{
"id": "pologsyG-tAraPgiN9xP9b",
"type": "rectangle",
"x": 928,
"y": 319,
"width": 134,
"height": 90
/* ...other element properties */
}
/* other elements */
],
// editor state (canvas config, preferences, ...)
"appState": {
"gridSize": 20,
"viewBackgroundColor": "#ffffff"
},
// files data for "image" elements, using format `{ [fileId]: fileData }`
"files": {
// example of an image data object
"3cebd7720911620a3938ce77243696149da03861": {
"mimeType": "image/png",
"id": "3cebd7720911620a3938c.77243626149da03861",
"dataURL": "data:image/png;base64,iVBORWOKGgoAAAANSUhEUgA=",
"created": 1690295874454,
"lastRetrieved": 1690295874454
}
/* ...other image data objects */
}
}
Excalidraw clipboard format
When copying selected excalidraw elements to clipboard, the JSON schema is similar to .excalidraw format, except it differs in attributes.
Attributes
Attribute Description Example Value
type The type of the Excalidraw document. "excalidraw/clipboard"
elements An array of objects representing excalidraw elements on canvas. Array containing excalidraw element objects (see example below)
files Data for excalidraw image elements. Object containing image data
Edit this page
Previous
Contributing
Next
Frames
Excalidraw files
Attributes
JSON Schema example
Excalidraw clipboard format
Attributes
Docs
Get Started
Community
Discord
Twitter
Linkedin
More
Blog
GitHub
Copyright © 2023 Excalidraw community. Built with Docusaurus ❤️
# OUTPUT
1. Output the perfect excalidraw schema file that can be directly imported into Excalidraw. This should have no preamble or follow-on text that breaks the format. It should be pure Excalidraw schema JSON.
2. Ensure all components are high contrast on a white background, and that you include all the arrows and appropriate relationship components that preserve the meaning of the original input.
3. Do not output the first and last lines of the schema, e.g., the opening json code fence and the ending backticks, as these are automatically added by Excalidraw when importing.

View File

@@ -0,0 +1,15 @@
package grokai
import (
"github.com/danielmiessler/fabric/plugins/ai/openai"
)
func NewClient() (ret *Client) {
ret = &Client{}
ret.Client = openai.NewClientCompatible("GrokAI", "https://api.x.ai/v1", nil)
return
}
type Client struct {
*openai.Client
}
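Because grokai is only a thin wrapper around the shared OpenAI-compatible client, adding another OpenAI-compatible vendor follows the same shape: define a small package and register its NewClient() in the plugin registry, as the registry.go diff above does for grokai.NewClient(). The sketch below is hypothetical; the package name, vendor label, and base URL are placeholders, not a real Fabric provider.

```go
// Hypothetical package following the grokai pattern; "exampleai", the vendor
// label, and the base URL are placeholders, not a real Fabric provider.
package exampleai

import (
	"github.com/danielmiessler/fabric/plugins/ai/openai"
)

// Client embeds the shared OpenAI-compatible client, exactly like grokai.Client.
type Client struct {
	*openai.Client
}

// NewClient wires a vendor name and API base URL into the compatible client.
// The nil third argument mirrors the grokai.NewClient call above.
func NewClient() (ret *Client) {
	ret = &Client{}
	ret.Client = openai.NewClientCompatible("ExampleAI", "https://api.example-ai.test/v1", nil)
	return
}
```

Such a client would then be appended in NewPluginRegistry next to grokai.NewClient(), as shown in the registry diff earlier.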

View File

@@ -0,0 +1,13 @@
package grokai
// Test generated using Keploy
import (
"testing"
)
func TestNewClient_EmbeddedClientNotNil(t *testing.T) {
client := NewClient()
if client.Client == nil {
t.Fatalf("Expected embedded openai.Client to be non-nil, got nil")
}
}

View File

@@ -3,8 +3,11 @@ package restapi
import (
"encoding/json"
"fmt"
"io/ioutil"
"log"
"net/http"
"os"
"path/filepath"
"strings"
goopenai "github.com/sashabaranov/go-openai"
@@ -21,15 +24,17 @@ type ChatHandler struct {
}
type PromptRequest struct {
UserInput string `json:"userInput"`
Vendor string `json:"vendor"`
Model string `json:"model"`
ContextName string `json:"contextName"`
PatternName string `json:"patternName"`
UserInput string `json:"userInput"`
Vendor string `json:"vendor"`
Model string `json:"model"`
ContextName string `json:"contextName"`
PatternName string `json:"patternName"`
StrategyName string `json:"strategyName"` // Optional strategy name
}
type ChatRequest struct {
Prompts []PromptRequest `json:"prompts"`
Language string `json:"language"` // Add Language field to bind from request
common.ChatOptions // Embed the ChatOptions from common package
}
@@ -60,7 +65,8 @@ func (h *ChatHandler) HandleChat(c *gin.Context) {
return
}
log.Printf("Received chat request with %d prompts", len(request.Prompts))
// Add log to check received language field
log.Printf("Received chat request - Language: '%s', Prompts: %d", request.Language, len(request.Prompts))
// Set headers for SSE
c.Writer.Header().Set("Content-Type", "text/readystream")
@@ -80,13 +86,25 @@ func (h *ChatHandler) HandleChat(c *gin.Context) {
log.Printf("Processing prompt %d: Model=%s Pattern=%s Context=%s",
i+1, prompt.Model, prompt.PatternName, prompt.ContextName)
// Create chat channel for streaming
streamChan := make(chan string)
// Start chat processing in goroutine
go func(p PromptRequest) {
defer close(streamChan)
// Load and prepend strategy prompt if strategyName is set
if p.StrategyName != "" {
strategyFile := filepath.Join(os.Getenv("HOME"), ".config", "fabric", "strategies", p.StrategyName+".json")
data, err := ioutil.ReadFile(strategyFile)
if err == nil {
var s struct {
Prompt string `json:"prompt"`
}
if err := json.Unmarshal(data, &s); err == nil && s.Prompt != "" {
p.UserInput = s.Prompt + "\n" + p.UserInput
}
}
}
chatter, err := h.registry.GetChatter(p.Model, 2048, "", false, false)
if err != nil {
log.Printf("Error creating chatter: %v", err)
@@ -94,6 +112,7 @@ func (h *ChatHandler) HandleChat(c *gin.Context) {
return
}
// Pass the language received in the initial request to the common.ChatRequest
chatReq := &common.ChatRequest{
Message: &goopenai.ChatCompletionMessage{
Role: "user",
@@ -101,6 +120,7 @@ func (h *ChatHandler) HandleChat(c *gin.Context) {
},
PatternName: p.PatternName,
ContextName: p.ContextName,
Language: request.Language, // Pass the language field
}
opts := &common.ChatOptions{
@@ -124,7 +144,6 @@ func (h *ChatHandler) HandleChat(c *gin.Context) {
return
}
// Get the last message from the session
lastMsg := session.GetLastMessage()
if lastMsg != nil {
streamChan <- lastMsg.Content
@@ -134,37 +153,32 @@ func (h *ChatHandler) HandleChat(c *gin.Context) {
}
}(prompt)
// Read from streamChan and write to client
for content := range streamChan {
select {
case <-clientGone:
return
default:
var response StreamResponse
if strings.HasPrefix(content, "Error:") {
response := StreamResponse{
response = StreamResponse{
Type: "error",
Format: "plain",
Content: content,
}
if err := writeSSEResponse(c.Writer, response); err != nil {
log.Printf("Error writing error response: %v", err)
return
}
} else {
response := StreamResponse{
response = StreamResponse{
Type: "content",
Format: detectFormat(content),
Content: content,
}
if err := writeSSEResponse(c.Writer, response); err != nil {
log.Printf("Error writing content response: %v", err)
return
}
}
if err := writeSSEResponse(c.Writer, response); err != nil {
log.Printf("Error writing response: %v", err)
return
}
}
}
// Signal completion of this prompt
completeResponse := StreamResponse{
Type: "complete",
Format: "plain",
@@ -192,26 +206,6 @@ func writeSSEResponse(w gin.ResponseWriter, response StreamResponse) error {
return nil
}
/*
func detectFormat(content string) string {
if strings.HasPrefix(content, "graph TD") ||
strings.HasPrefix(content, "gantt") ||
strings.HasPrefix(content, "flowchart") ||
strings.HasPrefix(content, "sequenceDiagram") ||
strings.HasPrefix(content, "classDiagram") ||
strings.HasPrefix(content, "stateDiagram") {
return "mermaid"
}
if strings.Contains(content, "```") ||
strings.Contains(content, "#") ||
strings.Contains(content, "*") ||
strings.Contains(content, "_") ||
strings.Contains(content, "-") {
return "markdown"
}
return "plain"
}
*/
func detectFormat(content string) string {
if strings.HasPrefix(content, "graph TD") ||
strings.HasPrefix(content, "gantt") ||
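The strategy handling added to HandleChat earlier in this file reads ~/.config/fabric/strategies/<name>.json and prepends its prompt field to the user input. Below is a minimal sketch of that parsing step, assuming a strategy file with only the fields the REST handlers read (name and description for the /strategies listing, prompt for prepending); the cot prompt text is illustrative, not the repository's actual file content.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Illustrative content for ~/.config/fabric/strategies/cot.json; the name and
// description match the static strategies.json later in this diff, while the
// prompt text is invented for the example.
const sampleStrategy = `{
  "name": "cot",
  "description": "Chain-of-Thought (CoT) Prompting",
  "prompt": "Think through the problem step by step before giving your final answer."
}`

func main() {
	// Combines the fields read by the two handlers: name/description for the
	// strategies listing, prompt for the chat-handler prepend.
	var s struct {
		Name        string `json:"name"`
		Description string `json:"description"`
		Prompt      string `json:"prompt"`
	}
	if err := json.Unmarshal([]byte(sampleStrategy), &s); err != nil {
		panic(err)
	}

	userInput := "Summarize this transcript." // hypothetical web UI input
	fmt.Println(s.Prompt + "\n" + userInput)  // what HandleChat passes on to the chatter
}
```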

View File

@@ -45,6 +45,7 @@ func (h *ConfigHandler) GetConfig(c *gin.Context) {
"openrouter": "",
"silicon": "",
"deepseek": "",
"grokai": "",
})
return
}
@@ -65,6 +66,7 @@ func (h *ConfigHandler) GetConfig(c *gin.Context) {
"openrouter": os.Getenv("OPENROUTER_API_KEY"),
"silicon": os.Getenv("SILICON_API_KEY"),
"deepseek": os.Getenv("DEEPSEEK_API_KEY"),
"grokai": os.Getenv("GROKAI_API_KEY"),
"lmstudio": os.Getenv("LM_STUDIO_API_BASE_URL"),
}
@@ -87,6 +89,7 @@ func (h *ConfigHandler) UpdateConfig(c *gin.Context) {
OpenRouterApiKey string `json:"openrouter_api_key"`
SiliconApiKey string `json:"silicon_api_key"`
DeepSeekApiKey string `json:"deepseek_api_key"`
GrokaiApiKey string `json:"grokai_api_key"`
LMStudioURL string `json:"lm_studio_base_url"`
}
@@ -105,6 +108,7 @@ func (h *ConfigHandler) UpdateConfig(c *gin.Context) {
"OPENROUTER_API_KEY": config.OpenRouterApiKey,
"SILICON_API_KEY": config.SiliconApiKey,
"DEEPSEEK_API_KEY": config.DeepSeekApiKey,
"GROKAI_API_KEY": config.GrokaiApiKey,
"LM_STUDIO_API_BASE_URL": config.LMStudioURL,
}

View File

@@ -28,6 +28,7 @@ func Serve(registry *core.PluginRegistry, address string, apiKey string) (err er
NewChatHandler(r, registry, fabricDb)
NewConfigHandler(r, fabricDb)
NewModelsHandler(r, registry.VendorManager)
NewStrategiesHandler(r)
// Start server
err = r.Run(address)

restapi/strategies.go (new file, 59 lines)
View File

@@ -0,0 +1,59 @@
package restapi
import (
"encoding/json"
"io/ioutil"
"net/http"
"os"
"path/filepath"
"github.com/gin-gonic/gin"
)
// StrategyMeta represents the minimal info about a strategy
type StrategyMeta struct {
Name string `json:"name"`
Description string `json:"description"`
}
// NewStrategiesHandler registers the /strategies GET endpoint
func NewStrategiesHandler(r *gin.Engine) {
r.GET("/strategies", func(c *gin.Context) {
strategiesDir := filepath.Join(os.Getenv("HOME"), ".config", "fabric", "strategies")
files, err := ioutil.ReadDir(strategiesDir)
if err != nil {
c.JSON(http.StatusInternalServerError, gin.H{"error": "Failed to read strategies directory"})
return
}
var strategies []StrategyMeta
for _, file := range files {
if file.IsDir() || filepath.Ext(file.Name()) != ".json" {
continue
}
fullPath := filepath.Join(strategiesDir, file.Name())
data, err := ioutil.ReadFile(fullPath)
if err != nil {
continue
}
var s struct {
Name string `json:"name"`
Description string `json:"description"`
}
if err := json.Unmarshal(data, &s); err != nil {
continue
}
strategies = append(strategies, StrategyMeta{
Name: s.Name,
Description: s.Description,
})
}
c.JSON(http.StatusOK, strategies)
})
}
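For reference, a hedged sketch of calling the new GET /strategies endpoint from a Go client and decoding the StrategyMeta list it returns; the localhost address and port are assumptions, so adjust them to wherever the Fabric REST server is actually listening.

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// Mirrors the StrategyMeta shape returned by the /strategies handler above.
type StrategyMeta struct {
	Name        string `json:"name"`
	Description string `json:"description"`
}

func main() {
	// Assumed local address; point this at the running Fabric REST server.
	resp, err := http.Get("http://localhost:8080/strategies")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var strategies []StrategyMeta
	if err := json.NewDecoder(resp.Body).Decode(&strategies); err != nil {
		panic(err)
	}
	for _, s := range strategies {
		fmt.Printf("%s - %s\n", s.Name, s.Description)
	}
}
```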

View File

@@ -1,3 +1,3 @@
package main
var version = "v1.4.168"
var version = "v1.4.172"

View File

@@ -35,6 +35,11 @@ https://youtu.be/fcVitd4Kb98
The tag filtering system has been deeply integrated into the Pattern Selection interface through several UI enhancements:
### 5. Strategy flags
- Strategies are fetched from .config/fabric/strategies for server processing
- For the GUI, they are fetched from static/strategies
1. **Dual-Position Tag Panel**
- Sliding panel positioned to the right of pattern modal
- Dynamic toggle button that adapts position and text based on panel state

View File

@@ -305,10 +305,11 @@ async function readFileContent(file: File): Promise<string> {
}
}
// Centralized language instruction logic in ChatService.ts; YouTube flow now passes plain transcript and system prompt
async function processYouTubeURL(input: string) {
console.log('\n=== YouTube Flow Start ===');
const originalLanguage = get(languageStore);
try {
// Add processing message first
messageStore.update(messages => [...messages, {
@@ -316,16 +317,11 @@ async function readFileContent(file: File): Promise<string> {
content: 'Processing YouTube video...',
format: 'loading'
}]);
// Get transcript but don't display it
const { transcript } = await getTranscript(input);
// Log system prompt BEFORE createChatRequest
console.log('System prompt BEFORE createChatRequest in YouTube flow:', $systemPrompt);
// Log system prompt BEFORE streamChat
console.log(`System prompt BEFORE streamChat in YouTube flow: ${$systemPrompt}`);
// Pass plain transcript and system prompt; ChatService will handle language instruction
const stream = await chatService.streamChat(transcript, $systemPrompt);
await chatService.processStream(
stream,
@@ -447,21 +443,16 @@ async function readFileContent(file: File): Promise<string> {
newMessages.splice(loadingIndex, 1);
}
// Add or update the assistant message
const assistantIndex = newMessages.findIndex(m => m.role === 'assistant');
if (assistantIndex !== -1) {
newMessages[assistantIndex].content = content;
newMessages[assistantIndex].format = response?.format;
} else {
newMessages.push({
role: 'assistant',
content,
format: response?.format
});
}
// Always append a new assistant message
newMessages.push({
role: 'assistant',
content,
format: response?.format
});
return newMessages;
});
},
(error) => {
// Make sure to remove loading message on error
messageStore.update(messages =>

View File

@@ -4,6 +4,8 @@
import ModelConfig from "./ModelConfig.svelte";
import { Select } from "$lib/components/ui/select";
import { languageStore } from '$lib/store/language-store';
import { strategies, selectedStrategy, fetchStrategies } from '$lib/store/strategy-store';
import { onMount } from 'svelte';
const languages = [
{ code: '', name: 'Default Language' },
@@ -15,6 +17,10 @@
{ code: 'ja', name: 'Japanese' },
{ code: 'it', name: 'Italian' }
];
onMount(() => {
fetchStrategies();
});
</script>
<div class="flex gap-4">
@@ -36,6 +42,17 @@
{/each}
</Select>
</div>
<div>
<Select
bind:value={$selectedStrategy}
class="bg-primary-800/30 border-none hover:bg-primary-800/40 transition-colors"
>
<option value="">None</option>
{#each $strategies as strategy}
<option value={strategy.name}>{strategy.name} - {strategy.description}</option>
{/each}
</Select>
</div>
</div>
<!-- Right side - Model Config -->

View File

@@ -15,23 +15,31 @@
});
// Watch selectedPreset changes
$: if (selectedPreset) {
console.log('Pattern selected from dropdown:', selectedPreset);
// Always call selectPattern when the dropdown value changes.
// The patternAPI.selectPattern function handles empty strings correctly.
$: {
// Log the change regardless of the value
console.log('Dropdown selection changed to:', selectedPreset);
try {
// Call the function to select the pattern (or reset if selectedPreset is empty)
patternAPI.selectPattern(selectedPreset);
// Verify the selection
// Optional: Keep verification logs if helpful for debugging
const currentSystemPrompt = get(systemPrompt);
const currentPattern = get(selectedPatternName);
console.log('After dropdown selection - Pattern:', currentPattern);
console.log('After dropdown selection - System Prompt length:', currentSystemPrompt?.length);
if (!currentPattern || !currentSystemPrompt) {
console.error('Pattern selection verification failed:');
console.error('- Selected Pattern:', currentPattern);
console.error('- System Prompt:', currentSystemPrompt);
}
// Optional: Refine verification logic if needed
// For example, only log error if a pattern was expected but not set
// if (selectedPreset && (!currentPattern || !currentSystemPrompt)) {
// console.error('Pattern selection verification failed:');
// console.error('- Selected Pattern:', currentPattern);
// console.error('- System Prompt:', currentSystemPrompt);
// }
} catch (error) {
console.error('Error in pattern selection:', error);
// Log any errors during the pattern selection process
console.error('Error processing pattern selection:', error);
}
}

View File

@@ -6,7 +6,8 @@ export interface ChatPrompt {
userInput: string;
systemPrompt: string;
model: string;
patternName: string;
patternName?: string;
strategyName?: string; // Optional strategy name to prepend strategy prompt
}
export interface ChatConfig {
@@ -23,6 +24,7 @@ export interface ChatRequest {
top_p: number;
frequency_penalty: number;
presence_penalty: number;
language?: string;
}
export interface Message {

View File

@@ -10,6 +10,7 @@ import { systemPrompt, selectedPatternName } from '$lib/store/pattern-store';
import { chatConfig } from '$lib/store/chat-config';
import { messageStore } from '$lib/store/chat-store';
import { languageStore } from '$lib/store/language-store';
import { selectedStrategy } from '$lib/store/strategy-store';
class LanguageValidator {
constructor(private targetLanguage: string) {}
@@ -47,6 +48,8 @@ export class ChatService {
promptCount: request.prompts?.length,
messageCount: request.messages?.length
});
// NEW: Log the full payload before sending to backend
console.log('Final ChatRequest payload:', JSON.stringify(request, null, 2));
const response = await fetch('/api/chat', {
method: 'POST',
@@ -179,7 +182,8 @@ export class ChatService {
userInput: finalUserInput,
systemPrompt: finalSystemPrompt,
model: config.model,
patternName: get(selectedPatternName)
patternName: get(selectedPatternName),
strategyName: get(selectedStrategy) // Add selected strategy to prompt
};
}
@@ -191,10 +195,12 @@ export class ChatService {
public async createChatRequest(userInput: string, systemPromptText?: string, isPattern: boolean = false): Promise<ChatRequest> {
const prompt = this.createChatPrompt(userInput, systemPromptText);
const config = get(chatConfig);
const language = get(languageStore);
return {
prompts: [prompt],
messages: [],
language: language, // Add language at the top level for backend compatibility
...config
};
}

View File

@@ -0,0 +1,32 @@
import { writable } from 'svelte/store';
/**
* List of available strategies fetched from backend.
* Each strategy has a name and description.
*/
export const strategies = writable<Array<{ name: string; description: string }>>([]);
/**
* Currently selected strategy name.
* Default is empty string meaning "None".
*/
export const selectedStrategy = writable<string>("");
/**
* Fetches available strategies from the backend `/strategies` endpoint.
* Populates the `strategies` store.
*/
export async function fetchStrategies() {
try {
const response = await fetch('/strategies/strategies.json');
if (!response.ok) {
console.error('Failed to fetch strategies:', response.statusText);
return;
}
const data = await response.json();
// Expecting an array of { name, description }
strategies.set(data);
} catch (error) {
console.error('Error fetching strategies:', error);
}
}

View File

@@ -61,16 +61,16 @@ export const POST: RequestHandler = async ({ request }) => {
language: body.language
});
// Ensure language instruction is present
if (body.prompts?.[0] && body.language && body.language !== 'en') {
const languageInstruction = `. Please use the language '${body.language}' for the output.`;
if (!body.prompts[0].userInput?.includes(languageInstruction)) {
body.prompts[0].userInput = (body.prompts[0].userInput || '') + languageInstruction;
}
}
// Removed redundant language instruction logic; Go backend handles this
// if (body.prompts?.[0] && body.language && body.language !== 'en') {
// const languageInstruction = `. Please use the language '${body.language}' for the output.`;
// if (!body.prompts[0].userInput?.includes(languageInstruction)) {
// body.prompts[0].userInput = (body.prompts[0].userInput || '') + languageInstruction;
// }
// }
console.log('2. Language analysis:', {
input: body.prompts?.[0]?.userInput?.substring(0, 100),
input: body.prompts?.[0]?.userInput?.substring(0, 100), // Note: This input no longer has the instruction appended here
hasLanguageInstruction: body.prompts?.[0]?.userInput?.includes('language'),
containsFr: body.prompts?.[0]?.userInput?.includes('fr'),
containsEn: body.prompts?.[0]?.userInput?.includes('en'),

View File

@@ -1693,8 +1693,87 @@
},
{
"patternName": "extract_wisdom_short",
"description": "Extract condensed insightful ideas and recommendations focusing on life wisdom.",
"tags": [
"EXTRACT",
"WISDOM",
"SELF"
]
},
{
"patternName": "analyze_bill",
"description": "Analyze a legislative bill and implications.",
"tags": [
"ANALYSIS",
"BILL"
]
},
{
"patternName": "analyze_bill_short",
"description": "Consended - Analyze a legislative bill and implications.",
"tags": [
"ANALYSIS",
"BILL"
]
},
{
"patternName": "create_coding_feature",
"description": "[Description pending]",
"tags": []
"tags": [
"DEVELOPMENT"
]
},
{
"patternName": "create_excalidraw_visualization",
"description": "Create visualizations using Excalidraw.",
"tags": [
"VISUALIZATION"
]
},
{
"patternName": "create_flash_cards",
"description": "Generate flashcards for key concepts and definitions.",
"tags": [
"LEARNING"
]
},
{
"patternName": "create_loe_document",
"description": "Create detailed Level of Effort (LOE) estimation documents.",
"tags": [
"DEVELOPMENT",
"BUSINESS"
]
},
{
"patternName": "extract_domains",
"description": "Extract key content and source.",
"tags": [
"EXTRACT",
"ANALYSIS"
]
},
{
"patternName": "extract_main_activities",
"description": "Extract and list main events from transcripts.",
"tags": [
"EXTRACT",
"ANALYSIS"
]
},
{
"patternName": "find_female_life_partner",
"description": "Clarify and summarize partner criteria in direct language.",
"tags": [
"SELF"
]
},
{
"patternName": "youtube_summary",
"description": "Summarize YouTube videos with key points and timestamps.",
"tags": [
"SUMMARIZE"
]
}
]
}

View File

@@ -0,0 +1,10 @@
[
{ "name": "cod", "description": "Chain-of-Draft (CoD)" },
{ "name": "cot", "description": "Chain-of-Thought (CoT) Prompting" },
{ "name": "ltm", "description": "Least-to-Most Prompting" },
{ "name": "reflexion", "description": "Reflexion Prompting" },
{ "name": "self-consistent", "description": "Self-Consistency Prompting" },
{ "name": "self-refine", "description": "Self-Refinement" },
{ "name": "standard", "description": "Standard Prompting" },
{ "name": "tot", "description": "Tree-of-Thoughts (ToT)" }
]
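Tying the web and server changes together: ChatService.ts now sends strategyName per prompt plus a top-level language, which the Go PromptRequest and ChatRequest structs above bind. Below is a minimal sketch of the resulting JSON payload, with placeholder values drawn from elsewhere in this diff (grok-3-beta, youtube_summary, cot, fr); the embedded common.ChatOptions fields of the real ChatRequest are omitted.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Mirrors the fields bound by the restapi PromptRequest and ChatRequest structs
// in the diff above; the embedded common.ChatOptions fields are left out here.
type PromptRequest struct {
	UserInput    string `json:"userInput"`
	Vendor       string `json:"vendor"`
	Model        string `json:"model"`
	ContextName  string `json:"contextName"`
	PatternName  string `json:"patternName"`
	StrategyName string `json:"strategyName"`
}

type ChatRequest struct {
	Prompts  []PromptRequest `json:"prompts"`
	Language string          `json:"language"`
}

func main() {
	// Placeholder values; the web UI fills these from its pattern, strategy,
	// language, and model stores.
	req := ChatRequest{
		Prompts: []PromptRequest{{
			UserInput:    "Summarize this transcript.",
			Model:        "grok-3-beta",
			PatternName:  "youtube_summary",
			StrategyName: "cot",
		}},
		Language: "fr",
	}
	out, _ := json.MarshalIndent(req, "", "  ")
	fmt.Println(string(out))
}
```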