Compare commits

...

48 Commits

Author SHA1 Message Date
jad2121
1201184257 trying something out as a new branch 2024-03-05 21:01:07 -05:00
jad2121
b84451114c fixed something 2024-03-05 20:27:05 -05:00
jad2121
a5d3d71b9d changed more documentation 2024-03-05 20:14:09 -05:00
jad2121
a655e30226 added some stuff 2024-03-05 20:12:55 -05:00
jad2121
d37dc4565c added support for claude. choose --claude. make sure to run --setup again to enter your claude api key 2024-03-05 20:10:35 -05:00
jad2121
6c7143dd51 added yet another error message 2024-03-05 17:51:01 -05:00
Daniel Miessler
2b6cb21e35 Updated readme to add refresh note. 2024-03-05 12:58:00 -08:00
Jonathan Dunn
39c4636148 updated readme 2024-03-05 15:29:46 -05:00
Jonathan Dunn
38c09afc85 changed an error message 2024-03-05 15:26:59 -05:00
Jonathan Dunn
a12d140635 fixed the stuff that was broken 2024-03-05 14:48:07 -05:00
Jonathan Dunn
cde7952f80 fixed readme 2024-03-05 14:44:25 -05:00
Jonathan Dunn
0ce5ed24c2 Added support for local models 2024-03-05 14:43:34 -05:00
jad2121
37efb69283 just a little faster now 2024-03-05 05:42:02 -05:00
jad2121
b838b3dea2 made it faster 2024-03-05 05:37:16 -05:00
jad2121
330df982b1 updated readme 2024-03-04 17:39:47 -05:00
jad2121
295d8d53f6 updated agents 2024-03-04 17:09:25 -05:00
Daniel Miessler
54406181b4 Updated summarize_git_changes. 2024-03-03 18:24:32 -08:00
Daniel Miessler
3a2a1a3fc3 Updated summarize_git_changes. 2024-03-03 18:13:16 -08:00
Daniel Miessler
a2b6988a3d Updated extract_ideas. 2024-03-03 18:09:36 -08:00
Daniel Miessler
4d6cf4e26a Updated extract_ideas. 2024-03-03 13:27:36 -08:00
Daniel Miessler
0abc44f8ce Added extract_ideas. 2024-03-03 13:24:18 -08:00
Daniel Miessler
64042d0d58 Updated summarize_git_changes. 2024-03-03 12:56:34 -08:00
Daniel Miessler
47391db129 Updated summarize_git_changes. 2024-03-03 12:54:51 -08:00
Daniel Miessler
5ebbfca16b Added summarize_git_changes. 2024-03-03 12:47:39 -08:00
jad2121
15cdea3bee Merge remote-tracking branch 'origin/main'
fixed agents
2024-03-03 15:21:03 -05:00
jad2121
38a3539a6e fixed agents 2024-03-03 15:19:10 -05:00
Daniel Miessler
4107d514dd Added new pattern called create_command
Add New "create_command" Pattern
2024-03-03 12:13:55 -08:00
jad2121
0f3ae3b5ce Merge remote-tracking branch 'origin/main'
fixed things
2024-03-03 15:11:32 -05:00
jad2121
8c0bfc9e95 fixed yt 2024-03-03 14:09:02 -05:00
Daniel Miessler
72189c9bf6 Merge pull request #151 from tomi-font/main
Fix the cat.
2024-03-03 11:04:02 -08:00
jad2121
914f6b46c3 added yt and ts to poetry and to config in setup.sh 2024-03-03 10:57:49 -05:00
jad2121
aa33795f6a updated readme 2024-03-03 09:19:01 -05:00
jad2121
5efc720e29 updated readme 2024-03-03 09:17:15 -05:00
jad2121
0ab8052c69 added transcription 2024-03-03 08:42:40 -05:00
jad2121
70356b34c6 added vm dependencies to poetry 2024-03-03 08:11:21 -05:00
jad2121
3264c7a389 Merge branch 'agents'
added agents functionality
2024-03-03 08:06:56 -05:00
Tomi
30d77499ec Fix the cat. 2024-03-03 08:57:00 +02:00
Daniel Miessler
c799114c5e Updated client documentation. 2024-03-02 17:24:53 -08:00
Daniel Miessler
c58a6c8c08 Removed default context file. 2024-03-02 17:23:15 -08:00
Daniel Miessler
e40c689d79 Added MarkMap visualization. 2024-03-02 17:12:19 -08:00
Daniel Miessler
c16d9e6b47 Added MarkMap visualization. 2024-03-02 17:09:32 -08:00
Daniel Miessler
8bbed7f488 Added MarkMap visualization. 2024-03-02 17:08:35 -08:00
Daniel Miessler
be841f0a1f Updated visualizations. 2024-03-02 17:02:00 -08:00
Daniel Miessler
731924031d Updated visualizations. 2024-03-02 16:58:52 -08:00
Daniel Miessler
d772caf8c8 Updated visualizations. 2024-03-02 16:54:27 -08:00
Jonathan Dunn
a6aeb8ffed added agents 2024-02-28 10:17:57 -05:00
Luke Wegryn
0eb828e7db Updated typo in README
on-behalf-of: pensivesecurity luke@pensivesecurity.io
2024-02-27 21:08:33 -05:00
Luke Wegryn
4b1b76d7ca Added create_command pattern
on-behalf-of: pensivesecurity luke@pensivesecurity.io
2024-02-27 21:02:03 -05:00
27 changed files with 3554 additions and 378 deletions

.python-version Normal file

@@ -0,0 +1 @@
3.10


@@ -47,6 +47,9 @@
<br />
> [!NOTE]
> We are improving the project so quickly that you should update often. That means `git pull; ./setup.sh` in the main directory, and then sourcing your shell files and/or restarting your terminal.
## Introduction video
<div align="center">
@@ -194,25 +197,39 @@ Once you have it all set up, here's how to use it.
`fabric -h`
```bash
fabric [-h] [--text TEXT] [--copy] [--output [OUTPUT]] [--stream] [--list]
[--update] [--pattern PATTERN] [--setup]
fabric [-h] [--text TEXT] [--copy] [--agents {trip_planner,ApiKeys}]
[--output [OUTPUT]] [--stream] [--list] [--update]
[--pattern PATTERN] [--setup] [--local] [--claude]
[--model MODEL] [--listmodels] [--context]
An open-source framework for augmenting humans using AI.
An open source framework for augmenting humans using AI.
options:
-h, --help show this help message and exit
--text TEXT, -t TEXT Text to extract summary from
--copy, -c Copy the response to the clipboard
--copy, -C Copy the response to the clipboard
--agents {trip_planner,ApiKeys}, -a {trip_planner,ApiKeys}
Use an AI agent to help you with a task. Acceptable
values are 'trip_planner' or 'ApiKeys'. This option
cannot be used with any other flag.
--output [OUTPUT], -o [OUTPUT]
Save the response to a file
--stream, -s Use this option if you want to see the results in realtime.
NOTE: You will not be able to pipe the output into another
command.
--stream, -s Use this option if you want to see the results in
realtime. NOTE: You will not be able to pipe the
output into another command.
--list, -l List available patterns
--update, -u Update patterns
--pattern PATTERN, -p PATTERN
The pattern (prompt) to use
--setup Set up your fabric instance
--local, -L Use local LLM. Default is llama2
--claude Use Claude AI
--model MODEL, -m MODEL
Select the model to use (GPT-4 by default for chatGPT
and llama2 for Ollama)
--listmodels List all available models
--context, -c Use Context file (context.md) to add context to your
pattern
```
#### Example commands
@@ -287,7 +304,7 @@ Once you're set up, you can do things like:
```bash
# Take any idea from `stdin` and send it to the `/write_essay` API!
cat "An idea that coding is like speaking with rules." | write_essay
echo "An idea that coding is like speaking with rules." | write_essay
```
### Directly calling Patterns

helpers/.python-version Normal file

@@ -0,0 +1 @@
3.10


@@ -6,6 +6,24 @@ These are helper tools to work with Fabric. Examples include things like getting
`yt` is a command that uses the YouTube API to pull transcripts, get video duration, and more. Its primary function is to get a transcript from a video that can then be stitched (piped) into other Fabric Patterns.
## ts (Audio transcriptions)
`ts` is a command that uses the OpenAI Whisper API to transcribe audio files. Because of the model's context window, the tool uses pydub to split files into 10-minute segments. For more information on pydub, see https://github.com/jiaaro/pydub
### Installation
```bash
# macOS
brew install ffmpeg
# Linux
apt install ffmpeg
# Windows: see https://www.ffmpeg.org/download.html
```
```bash
usage: yt [-h] [--duration] [--transcript] [url]
@@ -19,3 +37,16 @@ options:
--duration Output only the duration
--transcript Output only the transcript
```
```bash
ts -h
usage: ts [-h] audio_file
Transcribe an audio file.
positional arguments:
audio_file The path to the audio file to be transcribed.
options:
-h, --help show this help message and exit
```

helpers/ts.py Normal file

@@ -0,0 +1,110 @@
from dotenv import load_dotenv
from pydub import AudioSegment
from openai import OpenAI
import os
import argparse
class Whisper:
def __init__(self):
env_file = os.path.expanduser("~/.config/fabric/.env")
load_dotenv(env_file)
try:
apikey = os.environ["OPENAI_API_KEY"]
self.client = OpenAI()
self.client.api_key = apikey
except KeyError:
print("OPENAI_API_KEY not found in environment variables.")
except FileNotFoundError:
print("No API key found. Use the --apikey option to set the key")
self.whole_response = []
def split_audio(self, file_path):
"""
Splits the audio file into segments of the given length.
Args:
- file_path: The path to the audio file.
Returns:
- A list of audio segments.
"""
audio = AudioSegment.from_file(file_path)
segments = []
segment_length_ms = 10 * 60 * 1000 # 10 minutes in milliseconds
for start_ms in range(0, len(audio), segment_length_ms):
end_ms = start_ms + segment_length_ms
segment = audio[start_ms:end_ms]
segments.append(segment)
return segments
def process_segment(self, segment):
""" Transcribe an audio file and print the transcript.
Args:
audio_file (str): The path to the audio file to be transcribed.
Returns:
None
"""
try:
# if audio_file.startswith("http"):
# response = requests.get(audio_file)
# response.raise_for_status()
# with tempfile.NamedTemporaryFile(delete=False) as f:
# f.write(response.content)
# audio_file = f.name
audio_file = open(segment, "rb")
response = self.client.audio.transcriptions.create(
model="whisper-1",
file=audio_file
)
self.whole_response.append(response.text)
except Exception as e:
print(f"Error: {e}")
def process_file(self, audio_file):
""" Transcribe an audio file and print the transcript.
Args:
audio_file (str): The path to the audio file to be transcribed.
Returns:
None
"""
try:
# if audio_file.startswith("http"):
# response = requests.get(audio_file)
# response.raise_for_status()
# with tempfile.NamedTemporaryFile(delete=False) as f:
# f.write(response.content)
# audio_file = f.name
segments = self.split_audio(audio_file)
for i, segment in enumerate(segments):
segment_file_path = f"segment_{i}.mp3"
segment.export(segment_file_path, format="mp3")
self.process_segment(segment_file_path)
print(' '.join(self.whole_response))
except Exception as e:
print(f"Error: {e}")
def main():
parser = argparse.ArgumentParser(description="Transcribe an audio file.")
parser.add_argument(
"audio_file", help="The path to the audio file to be transcribed.")
args = parser.parse_args()
whisper = Whisper()
whisper.process_file(args.audio_file)
if __name__ == "__main__":
main()
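The 10-minute segmentation in `split_audio` above is plain millisecond arithmetic: pydub indexes audio in milliseconds, so fixed-size slicing yields the segment boundaries. A minimal sketch of that arithmetic, for illustration only (not part of `ts.py`; unlike `split_audio`, it clamps the final boundary explicitly, which pydub's slicing does implicitly):

```python
SEGMENT_LENGTH_MS = 10 * 60 * 1000  # 10 minutes, as in split_audio

def segment_bounds(total_ms):
    """Return the (start, end) millisecond range of each segment."""
    return [(start, min(start + SEGMENT_LENGTH_MS, total_ms))
            for start in range(0, total_ms, SEGMENT_LENGTH_MS)]

# A 25-minute file splits into two full segments and one 5-minute tail.
print(segment_bounds(25 * 60 * 1000))
# [(0, 600000), (600000, 1200000), (1200000, 1500000)]
```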


@@ -1,6 +1,3 @@
#!/usr/bin/env python3
import sys
import re
from googleapiclient.discovery import build
from googleapiclient.errors import HttpError
@@ -11,13 +8,15 @@ import json
import isodate
import argparse
def get_video_id(url):
# Extract video ID from URL
pattern = r'(?:https?:\/\/)?(?:www\.)?(?:youtube\.com\/(?:[^\/\n\s]+\/\S+\/|(?:v|e(?:mbed)?)\/|\S*?[?&]v=)|youtu\.be\/)([a-zA-Z0-9_-]{11})'
match = re.search(pattern, url)
return match.group(1) if match else None
def main(url, options):
def main_function(url, options):
# Load environment variables from .env file
load_dotenv(os.path.expanduser('~/.config/fabric/.env'))
@@ -51,7 +50,8 @@ def main(url, options):
# Get video transcript
try:
transcript_list = YouTubeTranscriptApi.get_transcript(video_id)
transcript_text = ' '.join([item['text'] for item in transcript_list])
transcript_text = ' '.join([item['text']
for item in transcript_list])
transcript_text = transcript_text.replace('\n', ' ')
except Exception as e:
transcript_text = "Transcript not available."
@@ -72,14 +72,22 @@ def main(url, options):
except HttpError as e:
print("Error: Failed to access YouTube API. Please check your YOUTUBE_API_KEY and ensure it is valid.")
if __name__ == '__main__':
parser = argparse.ArgumentParser(description='vm (video meta) extracts metadata about a video, such as the transcript and the video\'s duration. By Daniel Miessler.')
def main():
parser = argparse.ArgumentParser(
description='vm (video meta) extracts metadata about a video, such as the transcript and the video\'s duration. By Daniel Miessler.')
parser.add_argument('url', nargs='?', help='YouTube video URL')
parser.add_argument('--duration', action='store_true', help='Output only the duration')
parser.add_argument('--transcript', action='store_true', help='Output only the transcript')
parser.add_argument('--duration', action='store_true',
help='Output only the duration')
parser.add_argument('--transcript', action='store_true',
help='Output only the transcript')
args = parser.parse_args()
if args.url:
main(args.url, args)
main_function(args.url, args)
else:
parser.print_help()
if __name__ == "__main__":
main()
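The URL pattern in `get_video_id` above handles `watch?v=`, `/v/`, `/embed/`, and short `youtu.be` links by capturing the 11-character video ID. A self-contained illustration of that extraction, using the same regex (not part of `yt` itself):

```python
import re

# The pattern from get_video_id, split across lines for readability.
PATTERN = (r'(?:https?:\/\/)?(?:www\.)?(?:youtube\.com\/'
           r'(?:[^\/\n\s]+\/\S+\/|(?:v|e(?:mbed)?)\/|\S*?[?&]v=)|'
           r'youtu\.be\/)([a-zA-Z0-9_-]{11})')

def video_id(url):
    match = re.search(PATTERN, url)
    return match.group(1) if match else None

print(video_id('https://www.youtube.com/watch?v=dQw4w9WgXcQ'))  # dQw4w9WgXcQ
print(video_id('https://youtu.be/dQw4w9WgXcQ'))                 # dQw4w9WgXcQ
```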


@@ -1,69 +1,3 @@
# The `fabric` client
This is the primary `fabric` client, which has multiple modes of operation.
## Client modes
You can use the client in three different modes:
1. **Local Only:** You can use the client without a server, and it will use patterns it's downloaded from this repository, or ones that you specify.
2. **Local Server:** You can run your own version of a Fabric Mill locally (on a private IP), which you can then connect to and use.
3. **Remote Server:** You can specify a remote server that your client commands will then be calling.
## Client features
1. Standalone Mode: Run without needing a server.
2. Clipboard Integration: Copy responses to the clipboard.
3. File Output: Save responses to files for later reference.
4. Pattern Module: Utilize specific patterns for different types of analysis.
5. Server Mode: Operate the tool in server mode to control your own patterns and let your other apps access it.
## Installation
Please check our main [setting up the fabric commands](./../../../README.md#setting-up-the-fabric-commands) section.
## Usage
To use `fabric`, call it with your desired options (remember to activate the virtual environment with `poetry shell` - step 5 above):
fabric [options]
Options include:
--pattern, -p: Select the module for analysis.
--stream, -s: Stream output to another application.
--output, -o: Save the response to a file.
--copy, -C: Copy the response to the clipboard.
--context, -c: Use Context file (context.md) to add context to your pattern
Example:
```bash
# Pasting in an article about LLMs
pbpaste | fabric --pattern extract_wisdom --output wisdom.txt | fabric --pattern summarize --stream
```
```markdown
ONE SENTENCE SUMMARY:
- The content covered the basics of LLMs and how they are used in everyday practice.
MAIN POINTS:
1. LLMs are large language models, and typically use the transformer architecture.
2. LLMs used to be used for story generation, but they're now used for many AI applications.
3. They are vulnerable to hallucination if not configured correctly, so be careful.
TAKEAWAYS:
1. It's possible to use LLMs for multiple AI use cases.
2. It's important to validate that the results you're receiving are correct.
3. The field of AI is moving faster than ever as a result of GenAI breakthroughs.
```
## Contributing
We welcome contributions to Fabric, including improvements and feature additions to this client.
## Credits
The `fabric` client was created by Jonathan Dunn and Daniel Miessler.
Please see the main project's README.md for the latest documentation.


@@ -0,0 +1,89 @@
from crewai import Crew
from textwrap import dedent
from .trip_agents import TripAgents
from .trip_tasks import TripTasks
import os
from dotenv import load_dotenv
current_directory = os.path.dirname(os.path.realpath(__file__))
config_directory = os.path.expanduser("~/.config/fabric")
env_file = os.path.join(config_directory, ".env")
load_dotenv(env_file)
os.environ['OPENAI_MODEL_NAME'] = 'gpt-4-0125-preview'
class TripCrew:
def __init__(self, origin, cities, date_range, interests):
self.cities = cities
self.origin = origin
self.interests = interests
self.date_range = date_range
def run(self):
agents = TripAgents()
tasks = TripTasks()
city_selector_agent = agents.city_selection_agent()
local_expert_agent = agents.local_expert()
travel_concierge_agent = agents.travel_concierge()
identify_task = tasks.identify_task(
city_selector_agent,
self.origin,
self.cities,
self.interests,
self.date_range
)
gather_task = tasks.gather_task(
local_expert_agent,
self.origin,
self.interests,
self.date_range
)
plan_task = tasks.plan_task(
travel_concierge_agent,
self.origin,
self.interests,
self.date_range
)
crew = Crew(
agents=[
city_selector_agent, local_expert_agent, travel_concierge_agent
],
tasks=[identify_task, gather_task, plan_task],
verbose=True
)
result = crew.kickoff()
return result
class planner_cli:
def ask(self):
print("## Welcome to Trip Planner Crew")
print('-------------------------------')
location = input(
dedent("""
From where will you be traveling from?
"""))
cities = input(
dedent("""
Which cities are you interested in visiting?
"""))
date_range = input(
dedent("""
What is the date range you are interested in traveling?
"""))
interests = input(
dedent("""
What are some of your high level interests and hobbies?
"""))
trip_crew = TripCrew(location, cities, date_range, interests)
result = trip_crew.run()
print("\n\n########################")
print("## Here is your Trip Plan")
print("########################\n")
print(result)


@@ -0,0 +1,38 @@
import json
import os
import requests
from crewai import Agent, Task
from langchain.tools import tool
from unstructured.partition.html import partition_html
class BrowserTools():
@tool("Scrape website content")
def scrape_and_summarize_website(website):
"""Useful to scrape and summarize a website content"""
url = f"https://chrome.browserless.io/content?token={os.environ['BROWSERLESS_API_KEY']}"
payload = json.dumps({"url": website})
headers = {'cache-control': 'no-cache', 'content-type': 'application/json'}
response = requests.request("POST", url, headers=headers, data=payload)
elements = partition_html(text=response.text)
content = "\n\n".join([str(el) for el in elements])
content = [content[i:i + 8000] for i in range(0, len(content), 8000)]
summaries = []
for chunk in content:
agent = Agent(
role='Principal Researcher',
goal=
'Do amazing research and summaries based on the content you are working with',
backstory=
"You're a Principal Researcher at a big company and you need to do a research about a given topic.",
allow_delegation=False)
task = Task(
agent=agent,
description=
f'Analyze and summarize the content below, make sure to include the most relevant information in the summary, return only the summary, nothing else.\n\nCONTENT\n----------\n{chunk}'
)
summary = task.execute()
summaries.append(summary)
return "\n\n".join(summaries)
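The 8,000-character windowing in `scrape_and_summarize_website` above is what keeps each chunk inside the summarizing agent's context. A sketch of that slicing on its own (illustration only, not part of the tool):

```python
def chunk_text(content, size=8000):
    """Slice a long string into fixed-size windows, as the scraper does."""
    return [content[i:i + size] for i in range(0, len(content), size)]

chunks = chunk_text('x' * 20000)
print([len(c) for c in chunks])  # [8000, 8000, 4000]
```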


@@ -0,0 +1,15 @@
from langchain.tools import tool
class CalculatorTools():
@tool("Make a calculation")
def calculate(operation):
"""Useful to perform any mathematical calculations,
like sum, minus, multiplication, division, etc.
The input to this tool should be a mathematical
expression, a couple examples are `200*7` or `5000/2*10`
"""
try:
return eval(operation)
except SyntaxError:
return "Error: Invalid syntax in mathematical expression"
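The calculator tool above evaluates the expression with `eval()`, so it handles any Python arithmetic but only catches `SyntaxError`; other exceptions (and any non-arithmetic code in the input) propagate. A standalone sketch of that behavior:

```python
def calculate(operation):
    """Evaluate a mathematical expression string, as the tool does.

    Note: eval() executes arbitrary Python, and only syntax errors are
    caught here; this mirrors the tool rather than hardening it.
    """
    try:
        return eval(operation)
    except SyntaxError:
        return "Error: Invalid syntax in mathematical expression"

print(calculate('200*7'))      # 1400
print(calculate('5000/2*10'))  # 25000.0
```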


@@ -0,0 +1,37 @@
import json
import os
import requests
from langchain.tools import tool
class SearchTools():
@tool("Search the internet")
def search_internet(query):
"""Useful to search the internet
about a given topic and return relevant results"""
top_result_to_return = 4
url = "https://google.serper.dev/search"
payload = json.dumps({"q": query})
headers = {
'X-API-KEY': os.environ['SERPER_API_KEY'],
'content-type': 'application/json'
}
response = requests.request("POST", url, headers=headers, data=payload)
# check if there is an organic key
if 'organic' not in response.json():
return "Sorry, I couldn't find anything about that; there could be an error with your Serper API key."
else:
results = response.json()['organic']
string = []
for result in results[:top_result_to_return]:
try:
string.append('\n'.join([
f"Title: {result['title']}", f"Link: {result['link']}",
f"Snippet: {result['snippet']}", "\n-----------------"
]))
except KeyError:
continue
return '\n'.join(string)


@@ -0,0 +1,45 @@
from crewai import Agent
from .tools.browser_tools import BrowserTools
from .tools.calculator_tools import CalculatorTools
from .tools.search_tools import SearchTools
class TripAgents():
def city_selection_agent(self):
return Agent(
role='City Selection Expert',
goal='Select the best city based on weather, season, and prices',
backstory='An expert in analyzing travel data to pick ideal destinations',
tools=[
SearchTools.search_internet,
BrowserTools.scrape_and_summarize_website,
],
verbose=True)
def local_expert(self):
return Agent(
role='Local Expert at this city',
goal='Provide the BEST insights about the selected city',
backstory="""A knowledgeable local guide with extensive information
about the city, its attractions and customs"""
tools=[
SearchTools.search_internet,
BrowserTools.scrape_and_summarize_website,
],
verbose=True)
def travel_concierge(self):
return Agent(
role='Amazing Travel Concierge',
goal="""Create the most amazing travel itineraries with budget and
packing suggestions for the city""",
backstory="""Specialist in travel planning and logistics with
decades of experience""",
tools=[
SearchTools.search_internet,
BrowserTools.scrape_and_summarize_website,
CalculatorTools.calculate,
],
verbose=True)


@@ -0,0 +1,83 @@
from crewai import Task
from textwrap import dedent
from datetime import date
class TripTasks():
def identify_task(self, agent, origin, cities, interests, range):
return Task(description=dedent(f"""
Analyze and select the best city for the trip based
on specific criteria such as weather patterns, seasonal
events, and travel costs. This task involves comparing
multiple cities, considering factors like current weather
conditions, upcoming cultural or seasonal events, and
overall travel expenses.
Your final answer must be a detailed
report on the chosen city, and everything you found out
about it, including the actual flight costs, weather
forecast and attractions.
{self.__tip_section()}
Traveling from: {origin}
City Options: {cities}
Trip Date: {range}
Traveler Interests: {interests}
"""),
agent=agent)
def gather_task(self, agent, origin, interests, range):
return Task(description=dedent(f"""
As a local expert on this city you must compile an
in-depth guide for someone traveling there and wanting
to have THE BEST trip ever!
Gather information about key attractions, local customs,
special events, and daily activity recommendations.
Find the best spots to go to, the kind of place only a
local would know.
This guide should provide a thorough overview of what
the city has to offer, including hidden gems, cultural
hotspots, must-visit landmarks, weather forecasts, and
high level costs.
The final answer must be a comprehensive city guide,
rich in cultural insights and practical tips,
tailored to enhance the travel experience.
{self.__tip_section()}
Trip Date: {range}
Traveling from: {origin}
Traveler Interests: {interests}
"""),
agent=agent)
def plan_task(self, agent, origin, interests, range):
return Task(description=dedent(f"""
Expand this guide into a full 7-day travel
itinerary with detailed per-day plans, including
weather forecasts, places to eat, packing suggestions,
and a budget breakdown.
You MUST suggest actual places to visit, actual hotels
to stay and actual restaurants to go to.
This itinerary should cover all aspects of the trip,
from arrival to departure, integrating the city guide
information with practical travel logistics.
Your final answer MUST be a complete expanded travel plan,
formatted as markdown, encompassing a daily schedule,
anticipated weather conditions, recommended clothing and
items to pack, and a detailed budget, ensuring THE BEST
TRIP EVER. Be specific and give a reason why you picked
each place and what makes it special! {self.__tip_section()}
Trip Date: {range}
Traveling from: {origin}
Traveler Interests: {interests}
"""),
agent=agent)
def __tip_section(self):
return "If you do your BEST WORK, I'll tip you $100!"


@@ -1,3 +0,0 @@
# Context
please give all responses in spanish


@@ -1,7 +1,6 @@
from .utils import Standalone, Update, Setup, Alias
import argparse
import sys
import time
import os
@@ -16,6 +15,11 @@ def main():
parser.add_argument(
"--copy", "-C", help="Copy the response to the clipboard", action="store_true"
)
parser.add_argument(
'--agents', '-a', choices=['trip_planner', 'ApiKeys'],
help="Use an AI agent to help you with a task. Acceptable values are 'trip_planner' or 'ApiKeys'. This option cannot be used with any other flag."
)
parser.add_argument(
"--output",
"-o",
@@ -40,7 +44,13 @@ def main():
"--setup", help="Set up your fabric instance", action="store_true"
)
parser.add_argument(
"--model", "-m", help="Select the model to use (GPT-4 by default)", default="gpt-4-turbo-preview"
'--local', '-L', help="Use local LLM. Default is llama2", action="store_true")
parser.add_argument(
"--claude", help="Use Claude AI", action="store_true")
parser.add_argument(
"--model", "-m", help="Select the model to use (GPT-4 by default for chatGPT and llama2 for Ollama)", default="gpt-4-turbo-preview"
)
parser.add_argument(
"--listmodels", help="List all available models", action="store_true"
@@ -67,6 +77,17 @@ def main():
Update()
Alias()
sys.exit()
if args.agents:
# Handle the agents logic
if args.agents == 'trip_planner':
from .agents.trip_planner.main import planner_cli
tripcrew = planner_cli()
tripcrew.ask()
sys.exit()
elif args.agents == 'ApiKeys':
from .utils import AgentSetup
AgentSetup().run()
sys.exit()
if args.update:
Update()
Alias()
@@ -75,7 +96,13 @@ def main():
if not os.path.exists(os.path.join(config, "context.md")):
print("Please create a context.md file in ~/.config/fabric")
sys.exit()
standalone = Standalone(args, args.pattern)
standalone = None
if args.local:
standalone = Standalone(args, args.pattern, local=True)
elif args.claude:
standalone = Standalone(args, args.pattern, claude=True)
else:
standalone = Standalone(args, args.pattern)
if args.list:
try:
direct = sorted(os.listdir(config_patterns_directory))


@@ -1,12 +1,11 @@
import requests
import os
from openai import OpenAI
import asyncio
import pyperclip
import sys
import platform
from dotenv import load_dotenv
from requests.exceptions import HTTPError
from tqdm import tqdm
import zipfile
import tempfile
import shutil
@@ -17,7 +16,7 @@ env_file = os.path.join(config_directory, ".env")
class Standalone:
def __init__(self, args, pattern="", env_file="~/.config/fabric/.env"):
def __init__(self, args, pattern="", env_file="~/.config/fabric/.env", local=False, claude=False):
""" Initialize the class with the provided arguments and environment file.
Args:
@@ -46,10 +45,60 @@ class Standalone:
except FileNotFoundError:
print("No API key found. Use the --apikey option to set the key")
sys.exit()
self.local = local
self.config_pattern_directory = config_directory
self.pattern = pattern
self.args = args
self.model = args.model
self.claude = claude
try:
self.model = os.environ["CUSTOM_MODEL"]
except:
self.model = args.model
if self.local:
if self.args.model == 'gpt-4-turbo-preview':
self.model = 'llama2'
if self.claude:
if self.args.model == 'gpt-4-turbo-preview':
self.model = 'claude-3-opus-20240229'
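The default-model selection in `Standalone.__init__` above swaps the model only when the user left the `--model` flag at its `gpt-4-turbo-preview` default. A sketch of that logic extracted as a pure function (an illustration, not code from the repository):

```python
def pick_model(requested, local=False, claude=False):
    """Mirror the __init__ logic: honor an explicit --model choice,
    otherwise substitute the backend's own default."""
    default = 'gpt-4-turbo-preview'
    if local and requested == default:
        return 'llama2'
    if claude and requested == default:
        return 'claude-3-opus-20240229'
    return requested

print(pick_model('gpt-4-turbo-preview', claude=True))  # claude-3-opus-20240229
print(pick_model('mistral', local=True))               # mistral
```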
async def localChat(self, messages):
from ollama import AsyncClient
response = await AsyncClient().chat(model=self.model, messages=messages)
print(response['message']['content'])
async def localStream(self, messages):
from ollama import AsyncClient
async for part in await AsyncClient().chat(model=self.args.model, messages=messages, stream=True):
print(part['message']['content'], end='', flush=True)
async def claudeStream(self, system, user):
from anthropic import AsyncAnthropic
self.claudeApiKey = os.environ["CLAUDE_API_KEY"]
Streamingclient = AsyncAnthropic(api_key=self.claudeApiKey)
async with Streamingclient.messages.stream(
max_tokens=4096,
system=system,
messages=[user],
model=self.model, temperature=0.0, top_p=1.0
) as stream:
async for text in stream.text_stream:
print(text, end="", flush=True)
print()
message = await stream.get_final_message()
async def claudeChat(self, system, user):
from anthropic import Anthropic
self.claudeApiKey = os.environ["CLAUDE_API_KEY"]
client = Anthropic(api_key=self.claudeApiKey)
message = client.messages.create(
max_tokens=4096,
system=system,
messages=[user],
model=self.model,
temperature=0.0, top_p=1.0
)
print(message.content[0].text)
def streamMessage(self, input_data: str, context=""):
""" Stream a message and handle exceptions.
@@ -69,6 +118,7 @@ class Standalone:
)
user_message = {"role": "user", "content": f"{input_data}"}
wisdom_File = os.path.join(current_directory, wisdomFilePath)
system = ""
buffer = ""
if self.pattern:
try:
@@ -89,29 +139,45 @@ class Standalone:
else:
messages = [user_message]
try:
stream = self.client.chat.completions.create(
model=self.model,
messages=messages,
temperature=0.0,
top_p=1,
frequency_penalty=0.1,
presence_penalty=0.1,
stream=True,
)
for chunk in stream:
if chunk.choices[0].delta.content is not None:
char = chunk.choices[0].delta.content
buffer += char
if char not in ["\n", " "]:
print(char, end="")
elif char == " ":
print(" ", end="") # Explicitly handle spaces
elif char == "\n":
print() # Handle newlines
sys.stdout.flush()
if self.local:
asyncio.run(self.localStream(messages))
elif self.claude:
from anthropic import AsyncAnthropic
asyncio.run(self.claudeStream(system, user_message))
else:
stream = self.client.chat.completions.create(
model=self.model,
messages=messages,
temperature=0.0,
top_p=1,
frequency_penalty=0.1,
presence_penalty=0.1,
stream=True,
)
for chunk in stream:
if chunk.choices[0].delta.content is not None:
char = chunk.choices[0].delta.content
buffer += char
if char not in ["\n", " "]:
print(char, end="")
elif char == " ":
print(" ", end="") # Explicitly handle spaces
elif char == "\n":
print() # Handle newlines
sys.stdout.flush()
except Exception as e:
print(f"Error: {e}")
print(e)
if "All connection attempts failed" in str(e):
print(
"Error: cannot connect to llama2. If you have not already, please visit https://ollama.com for installation instructions")
if "CLAUDE_API_KEY" in str(e):
print(
"Error: CLAUDE_API_KEY not found in environment variables. Please run --setup and add the key")
if "overloaded_error" in str(e):
print(
"Error: Fabric is working fine, but claude is overloaded. Please try again later.")
else:
print(f"Error: {e}")
print(e)
if self.args.copy:
pyperclip.copy(buffer)
if self.args.output:
@@ -136,6 +202,7 @@ class Standalone:
)
user_message = {"role": "user", "content": f"{input_data}"}
wisdom_File = os.path.join(current_directory, wisdomFilePath)
system = ""
if self.pattern:
try:
with open(wisdom_File, "r") as f:
@@ -155,18 +222,33 @@ class Standalone:
else:
messages = [user_message]
try:
response = self.client.chat.completions.create(
model=self.model,
messages=messages,
temperature=0.0,
top_p=1,
frequency_penalty=0.1,
presence_penalty=0.1,
)
print(response.choices[0].message.content)
if self.local:
asyncio.run(self.localChat(messages))
elif self.claude:
asyncio.run(self.claudeChat(system, user_message))
else:
response = self.client.chat.completions.create(
model=self.model,
messages=messages,
temperature=0.0,
top_p=1,
frequency_penalty=0.1,
presence_penalty=0.1,
)
print(response.choices[0].message.content)
except Exception as e:
print(f"Error: {e}")
print(e)
if "All connection attempts failed" in str(e):
print(
"Error: cannot connect to llama2. If you have not already, please visit https://ollama.com for installation instructions")
if "CLAUDE_API_KEY" in str(e):
print(
"Error: CLAUDE_API_KEY not found in environment variables. Please run --setup and add the key")
if "overloaded_error" in str(e):
print(
"Error: Fabric is working fine, but claude is overloaded. Please try again later.")
else:
print(f"Error: {e}")
print(e)
if self.args.copy:
pyperclip.copy(response.choices[0].message.content)
if self.args.output:
@@ -342,11 +424,77 @@ class Setup:
Raises:
OSError: If the environment file does not exist or cannot be accessed.
"""
if not os.path.exists(self.env_file):
api_key = api_key.strip()
if not os.path.exists(self.env_file) and api_key:
with open(self.env_file, "w") as f:
f.write(f"OPENAI_API_KEY={api_key}")
print(f"OpenAI API key set to {api_key}")
elif api_key:
# erase the line OPENAI_API_KEY=key and write the new key
with open(self.env_file, "r") as f:
lines = f.readlines()
with open(self.env_file, "w") as f:
for line in lines:
if "OPENAI_API_KEY" not in line:
f.write(line)
f.write(f"OPENAI_API_KEY={api_key}")
def claude_key(self, claude_key):
""" Set the Claude API key in the environment file.
Args:
claude_key (str): The API key to be set.
Returns:
None
Raises:
OSError: If the environment file does not exist or cannot be accessed.
"""
claude_key = claude_key.strip()
if os.path.exists(self.env_file) and claude_key:
with open(self.env_file, "r") as f:
lines = f.readlines()
with open(self.env_file, "w") as f:
for line in lines:
if "CLAUDE_API_KEY" not in line:
f.write(line)
f.write(f"CLAUDE_API_KEY={claude_key}")
elif claude_key:
# the environment file does not exist yet, so create it with the key
with open(self.env_file, "w") as f:
f.write(f"CLAUDE_API_KEY={claude_key}")
def custom_model(self, model):
"""
Set the custom model in the environment file
Args:
model (str): The model to be set.
Returns:
None
"""
model = model.strip()
if os.path.exists(self.env_file) and model:
with open(self.env_file, "r") as f:
lines = f.readlines()
with open(self.env_file, "w") as f:
for line in lines:
if "CUSTOM_MODEL" not in line:
f.write(line)
f.write(f"CUSTOM_MODEL={model}")
elif model:
# the environment file does not exist yet, so create it with the model
with open(self.env_file, "w") as f:
f.write(f"CUSTOM_MODEL={model}")
def patterns(self):
""" Method to update patterns and exit the system.
@@ -367,8 +515,15 @@ class Setup:
"""
print("Welcome to Fabric. Let's get started.")
apikey = input("Please enter your OpenAI API key\n")
self.api_key(apikey.strip())
apikey = input(
"Please enter your OpenAI API key. If you do not have one or if you have already entered it, press enter.\n")
self.api_key(apikey)
claudekey = input(
"Please enter your claude API key. If you do not have one, or if you have already entered it, press enter.\n")
self.claude_key(claudekey)
custom_model = input(
"Please enter your custom model. If you do not have one, or if you have already entered it, press enter. If none is entered, it will default to gpt-4-turbo-preview\n")
self.custom_model(custom_model)
self.patterns()
@@ -400,3 +555,32 @@ class Transcribe:
except Exception as e:
print("Error:", e)
return None
class AgentSetup:
def apiKeys(self):
"""Method to set the API keys in the environment file.
Returns:
None
"""
print("Welcome to Fabric. Let's get started.")
browserless = input("Please enter your Browserless API key\n")
serper = input("Please enter your Serper API key\n")
# Entries to be added
browserless_entry = f"BROWSERLESS_API_KEY={browserless}"
serper_entry = f"SERPER_API_KEY={serper}"
# Check and write to the file
with open(env_file, "r+") as f:
content = f.read()
# Determine if the file ends with a newline
if content.endswith('\n'):
# If it ends with a newline, we directly write the new entries
f.write(f"{browserless_entry}\n{serper_entry}\n")
else:
# If it does not end with a newline, add one before the new entries
f.write(f"\n{browserless_entry}\n{serper_entry}\n")
@@ -0,0 +1,75 @@
# Create Command
During penetration tests, many different tools are used, often with different parameters and switches depending on the target and circumstances. With so many tools, it's easy to forget how to run certain ones and what their parameters and switches are. Most tools include a "-h" help switch to give you these details, but it's much nicer to have AI figure out all the right switches while you just provide a brief description of your objective with the tool.
# Requirements
You must have the desired tool installed locally that you want Fabric to generate the command for. For the examples below, the tool must also have help documentation at "tool -h", which is the case for most tools.
# Examples
Here is how it can be used to generate commands for several different tools.
## sqlmap
**prompt**
```
tool=sqlmap;echo -e "use $tool target https://example.com?test=id url, specifically the test parameter. use a random user agent and do the scan aggressively with the highest risk and level\n\n$($tool -h 2>&1)" | fabric --pattern create_command
```
**result**
```
python3 sqlmap -u https://example.com?test=id --random-agent --level=5 --risk=3 -p test
```
## nmap
**prompt**
```
tool=nmap;echo -e "use $tool to target all hosts in the host.lst file even if they don't respond to pings. scan the top 10000 ports and save the output to a text file and an xml file\n\n$($tool -h 2>&1)" | fabric --pattern create_command
```
**result**
```
nmap -iL host.lst -Pn --top-ports 10000 -oN output.txt -oX output.xml
```
## gobuster
**prompt**
```
tool=gobuster;echo -e "use $tool to target example.com for subdomain enumeration and use a wordlist called big.txt\n\n$($tool -h 2>&1)" | fabric --pattern create_command
```
**result**
```
gobuster dns -u example.com -w big.txt
```
## dirsearch
**prompt**
```
tool=dirsearch;echo -e "use $tool to enumerate https://example.com. ignore 401 and 404 status codes. perform the enumeration recursively and crawl the website. use 50 threads\n\n$($tool -h 2>&1)" | fabric --pattern create_command
```
**result**
```
dirsearch -u https://example.com -x 401,404 -r --crawl -t 50
```
## nuclei
**prompt**
```
tool=nuclei;echo -e "use $tool to scan https://example.com. use a max of 10 threads. output result to a json file. rate limit to 50 requests per second\n\n$($tool -h 2>&1)" | fabric --pattern create_command
```
**result**
```
nuclei -u https://example.com -c 10 -o output.json -rl 50 -j
```
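All of the examples above follow the same shape: prepend a natural-language objective to the tool's own `-h` output, then pipe the result into Fabric. A hedged generic wrapper (the `make_prompt` function name is illustrative, not part of Fabric):

```shell
# Illustrative wrapper around the pattern shared by the examples above.
# Assumes the target tool is installed and prints help via "-h".
make_prompt() {
  tool="$1"
  objective="$2"
  # objective on the first line, then a blank line, then the help text
  printf 'use %s %s\n\n%s\n' "$tool" "$objective" "$("$tool" -h 2>&1)"
}

# Usage (assumes fabric is on PATH):
#   make_prompt nmap "to scan the top 100 ports on example.com" | fabric --pattern create_command
```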


@@ -0,0 +1,22 @@
# IDENTITY and PURPOSE
You are a penetration tester that is extremely good at reading and understanding command line help instructions. You are responsible for generating CLI commands for various tools that can be run to perform certain tasks based on documentation given to you.
Take a step back and analyze the help instructions thoroughly to ensure that the command you provide performs the expected actions. It is crucial that you only use switches and options that are explicitly listed in the documentation passed to you. Do not attempt to guess. Instead, use the documentation passed to you as your primary source of truth. It is very important that the commands you generate run properly and do not use fake or invalid options and switches.
# OUTPUT INSTRUCTIONS
- Output the requested command using the documentation provided with the provided details inserted. The input will include the prompt on the first line and then the tool documentation for the command will be provided on subsequent lines.
- Do not add additional options or switches unless they are explicitly asked for.
- Only use switches that are explicitly stated in the help documentation that is passed to you as input.
# OUTPUT FORMAT
- Output a full, bash command with all relevant parameters and switches.
- Refer to the provided help documentation.
- Only output the command. Do not output any warning or notes.
- Do not output any Markdown or other formatting. Only output the command itself.
# INPUT:
INPUT:


@@ -0,0 +1,88 @@
# IDENTITY and PURPOSE
You are an expert at data and concept visualization and in turning complex ideas into a form that can be visualized using MarkMap.
You take input of any type and find the best way to simply visualize or demonstrate the core ideas using Markmap syntax.
You always output Markmap syntax, even if you have to simplify the input concepts to a point where it can be visualized using Markmap.
# MARKMAP SYNTAX
Here is an example of MarkMap syntax:
````plaintext
markmap:
colorFreezeLevel: 2
---
# markmap
## Links
- [Website](https://markmap.js.org/)
- [GitHub](https://github.com/gera2ld/markmap)
## Related Projects
- [coc-markmap](https://github.com/gera2ld/coc-markmap) for Neovim
- [markmap-vscode](https://marketplace.visualstudio.com/items?itemName=gera2ld.markmap-vscode) for VSCode
- [eaf-markmap](https://github.com/emacs-eaf/eaf-markmap) for Emacs
## Features
Note that if blocks and lists appear at the same level, the lists will be ignored.
### Lists
- **strong** ~~del~~ *italic* ==highlight==
- `inline code`
- [x] checkbox
- Katex: $x = {-b \pm \sqrt{b^2-4ac} \over 2a}$ <!-- markmap: fold -->
- [More Katex Examples](#?d=gist:af76a4c245b302206b16aec503dbe07b:katex.md)
- Now we can wrap very very very very long text based on `maxWidth` option
### Blocks
```js
console('hello, JavaScript')
```
| Products | Price |
| -------- | ----- |
| Apple    | 4     |
| Banana   | 2     |
![](/favicon.png)
````
# STEPS
- Take the input given and create a visualization that best explains it using proper MarkMap syntax.
- Ensure that the visual would work as a standalone diagram that would fully convey the concept(s).
- Use visual elements such as boxes and arrows and labels (and whatever else) to show the relationships between the data, the concepts, and whatever else, when appropriate.
- Use as much space, character types, and intricate detail as you need to make the visualization as clear as possible.
- Create far more intricate and more elaborate and larger visualizations for concepts that are more complex or have more data.
- Under the Markmap syntax, output a section called VISUAL EXPLANATION that explains in a set of 10-word bullets how the input was turned into the visualization. Ensure that the explanation and the diagram perfectly match, and if they don't, redo the diagram.
- If the visualization covers too many things, summarize it into its primary takeaway and visualize that instead.
- DO NOT COMPLAIN AND GIVE UP. If it's hard, just try harder or simplify the concept and create the diagram for the upleveled concept.
# OUTPUT INSTRUCTIONS
- DO NOT COMPLAIN. Just make the Markmap.
- Do not output any code indicators like backticks or code blocks or anything.
- Create a diagram no matter what, using the STEPS above to determine which type.
# INPUT:
INPUT:


@@ -0,0 +1,39 @@
# IDENTITY and PURPOSE
You are an expert at data and concept visualization and in turning complex ideas into a form that can be visualized using Mermaid (markdown) syntax.
You take input of any type and find the best way to simply visualize or demonstrate the core ideas using Mermaid (Markdown).
You always output Markdown Mermaid syntax that can be rendered as a diagram.
# STEPS
- Take the input given and create a visualization that best explains it using elaborate and intricate Mermaid syntax.
- Ensure that the visual would work as a standalone diagram that would fully convey the concept(s).
- Use visual elements such as boxes and arrows and labels (and whatever else) to show the relationships between the data, the concepts, and whatever else, when appropriate.
- Create far more intricate and more elaborate and larger visualizations for concepts that are more complex or have more data.
- Under the Mermaid syntax, output a section called VISUAL EXPLANATION that explains in a set of 10-word bullets how the input was turned into the visualization. Ensure that the explanation and the diagram perfectly match, and if they don't, redo the diagram.
- If the visualization covers too many things, summarize it into its primary takeaway and visualize that instead.
- DO NOT COMPLAIN AND GIVE UP. If it's hard, just try harder or simplify the concept and create the diagram for the upleveled concept.
# OUTPUT INSTRUCTIONS
- DO NOT COMPLAIN. Just output the Mermaid syntax.
- Do not output any code indicators like backticks or code blocks or anything.
- Ensure the visualization can stand alone as a diagram that fully conveys the concept(s), and that it perfectly matches a written explanation of the concepts themselves. Start over if it can't.
- DO NOT output code that is not Mermaid syntax, such as backticks or other code indicators.
- Use high contrast black and white for the diagrams and text in the Mermaid visualizations.
# INPUT:
INPUT:
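As a purely illustrative sketch (not part of the pattern itself), a trivial input such as "rain falls, collects in rivers, and evaporates" might be rendered as:

```mermaid
flowchart TD
    Rain[Rain falls] --> Rivers[Collects in rivers]
    Rivers --> Evap[Evaporates]
    Evap --> Rain
```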


@@ -0,0 +1,24 @@
# IDENTITY and PURPOSE
You extract surprising, insightful, and interesting information from text content. You are interested in insights related to the purpose and meaning of life, human flourishing, the role of technology in the future of humanity, artificial intelligence and its effect on humans, memes, learning, reading, books, continuous improvement, and similar topics.
Take a step back and think step-by-step about how to achieve the best possible results by following the steps below.
# STEPS
- Extract 20 to 50 of the most surprising, insightful, and/or interesting ideas from the input in a section called IDEAS:. If there are fewer than 50, then collect all of them. Make sure you extract at least 20.
# OUTPUT INSTRUCTIONS
- Only output Markdown.
- Extract at least 20 IDEAS from the content.
- Limit each idea bullet to a maximum of 15 words.
- Do not give warnings or notes; only output the requested sections.
- You use bulleted lists for output, not numbered lists.
- Do not repeat ideas, quotes, facts, or resources.
- Do not start items with the same opening words.
- Ensure you follow ALL these instructions when creating your output.
# INPUT
INPUT:


@@ -0,0 +1,21 @@
# IDENTITY and PURPOSE
You are an expert project manager and developer, and you specialize in creating super clean updates describing what changed in a GitHub project in the last 7 days.
# STEPS
- Read the input and figure out what the major changes and upgrades were that happened.
- Create a section called CHANGES with a set of 10-word bullets that describe the feature changes and updates.
# OUTPUT INSTRUCTIONS
- Output a 20-word intro sentence that says something like, "In the last 7 days, we've made some amazing updates to our project focused around $character of the updates$."
- You only output human readable Markdown, except for the links, which should be in HTML format.
- Write the update bullets like you're excited about the upgrades.
# INPUT:
INPUT:

poetry.lock (generated, 2761 lines) — file diff suppressed because it is too large.


@@ -13,6 +13,17 @@ packages = [
[tool.poetry.dependencies]
python = "^3.10"
crewai = "^0.11.0"
unstructured = "0.10.25"
pyowm = "3.3.0"
tools = "^0.1.9"
langchain-community = "^0.0.24"
google-api-python-client = "^2.120.0"
isodate = "^0.6.1"
youtube-transcript-api = "^0.6.2"
pydub = "^0.25.1"
ollama = "^0.1.7"
anthropic = "^0.18.1"
[tool.poetry.group.cli.dependencies]
pyyaml = "^6.0.1"
@@ -30,10 +41,9 @@ flask-socketio = "^5.3.6"
flask-sock = "^0.7.0"
gunicorn = "^21.2.0"
gevent = "^23.9.1"
httpx = "^0.26.0"
httpx = ">=0.25.2,<0.26.0"
tqdm = "^4.66.1"
[tool.poetry.group.server.dependencies]
requests = "^2.31.0"
openai = "^1.12.0"
@@ -52,3 +62,5 @@ build-backend = "poetry.core.masonry.api"
fabric = 'installer:cli'
fabric-api = 'installer:run_api_server'
fabric-webui = 'installer:run_webui_server'
ts = 'helpers.ts:main'
yt = 'helpers.yt:main'


@@ -12,8 +12,8 @@ echo "Installing python dependencies"
poetry install
# List of commands to check and add or update alias for
commands=("fabric" "fabric-api" "fabric-webui")
# Add 'yt' and 'ts' to the list of commands
commands=("fabric" "fabric-api" "fabric-webui" "ts", "yt")
# List of shell configuration files to update
config_files=("$HOME/.bashrc" "$HOME/.zshrc" "$HOME/.bash_profile")
@@ -69,4 +69,3 @@ if [ ${#source_commands[@]} -ne 0 ]; then
else
echo "No configuration files were updated. No need to source."
fi