Mirror of https://github.com/Significant-Gravitas/AutoGPT.git (synced 2026-04-08 03:00:28 -04:00)

Commit: Update block docs for: llm.md
File added: docs/content/platform/blocks/update/llm.md (136 lines)
# LLM Blocks Documentation

## AI Structured Response Generator

### What it is

A block that generates structured responses using Large Language Models (LLMs).

### What it does

Generates formatted object responses based on given prompts, ensuring the output follows a specific structure or format.

### How it works

Takes a prompt and an expected format, sends them to an LLM, and validates that the response matches the required structure. If the response doesn't match the format, the block retries automatically.

### Inputs

- Prompt: The text prompt to send to the language model
- Expected Format: A dictionary defining the structure the response should follow
- Model: The LLM to use (e.g., GPT-4 or Claude)
- Credentials: API key for the chosen LLM provider
- System Prompt: Additional context for the model
- Conversation History: Previous messages for context
- Retry Count: Number of attempts to get a valid response
- Prompt Values: Variables to fill in prompt templates
- Max Tokens: Maximum length of the generated response

### Outputs

- Response: The structured object generated by the model
- Error: Any error message if the process fails

### Possible use case

Extracting specific information from customer reviews into a structured format, such as converting free-text feedback into categorized ratings and comments.

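The validate-and-retry loop described above can be sketched as follows. This is a minimal illustration, not the block's actual implementation: `call_llm` is a hypothetical stand-in for the provider API, and the expected format is checked only by key presence.

```python
import json

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM API call; returns a JSON string.
    return json.dumps({"rating": 4, "comment": "Fast shipping"})

def generate_structured(prompt: str, expected_format: dict, retry_count: int = 3) -> dict:
    """Ask the model for JSON and retry until it contains the expected keys."""
    format_hint = f"{prompt}\nRespond with JSON containing keys: {list(expected_format)}"
    last_error = None
    for _ in range(retry_count):
        raw = call_llm(format_hint)
        try:
            parsed = json.loads(raw)
        except json.JSONDecodeError as exc:
            last_error = exc  # response was not valid JSON; try again
            continue
        if set(expected_format) <= set(parsed):
            return parsed  # every required key is present
        last_error = KeyError(f"missing keys: {set(expected_format) - set(parsed)}")
    raise ValueError(f"no valid response after {retry_count} attempts: {last_error}")

result = generate_structured("Summarize this review", {"rating": "int", "comment": "str"})
```

A real implementation would also validate value types against the expected format, not just key presence.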
## AI Text Generator

### What it is

A block that generates free-form text responses using LLMs.

### What it does

Produces natural language responses based on given prompts without enforcing a specific format.

### How it works

Sends the prompt to an LLM and returns the raw text response, allowing for more creative and flexible outputs.

### Inputs

- Prompt: The text prompt for the model
- Model: Choice of LLM to use
- Credentials: API key for the chosen provider
- System Prompt: Additional context for the model
- Retry Count: Number of retry attempts
- Prompt Values: Variables for template filling

### Outputs

- Response: The generated text
- Error: Any error message if the process fails

### Possible use case

Creating blog post drafts, generating creative stories, or writing marketing copy based on given topics.

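The "Prompt Values" input fills variables into a prompt template. A minimal sketch, assuming `{placeholder}` syntax (the block's actual placeholder syntax may differ):

```python
def fill_prompt(template: str, prompt_values: dict[str, str]) -> str:
    """Substitute each {name} placeholder with its value from prompt_values."""
    return template.format(**prompt_values)

prompt = fill_prompt(
    "Write a {tone} blog post about {topic}.",
    {"tone": "casual", "topic": "urban gardening"},
)
# prompt == "Write a casual blog post about urban gardening."
```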
## AI Text Summarizer

### What it is

A block that creates concise summaries of longer texts using LLMs.

### What it does

Breaks long texts into manageable chunks, summarizes each chunk, and combines the chunk summaries into a final summary.

### How it works

Processes the text in chunks so it can handle long documents, maintains context through overlap between chunks, and recursively summarizes the combined result if it is still too long.

### Inputs

- Text: The long text to summarize
- Model: Choice of LLM to use
- Focus: Specific topic to focus on in the summary
- Style: Summary format (concise, detailed, bullet points, numbered list)
- Max Tokens: Maximum chunk length for processing
- Chunk Overlap: How much context to carry over between chunks

### Outputs

- Summary: The final summarized text
- Error: Any error message if the process fails

### Possible use case

Summarizing long research papers, creating executive summaries of reports, or condensing meeting transcripts.

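The chunk-overlap-recurse strategy can be sketched as below. This is an illustrative simplification, not the block's real code: it measures length in characters rather than tokens, and `llm_summarize` is a hypothetical stand-in that just truncates.

```python
def chunk_text(text: str, chunk_size: int, overlap: int) -> list[str]:
    """Split text into chunks where adjacent chunks share `overlap` characters."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break
    return chunks

def summarize(text: str, max_tokens: int = 1000, chunk_overlap: int = 100) -> str:
    def llm_summarize(t: str) -> str:
        # Hypothetical stand-in for the real model call; truncation mimics
        # the model producing a shorter summary.
        return t[: max_tokens // 2]

    if len(text) <= max_tokens:
        return llm_summarize(text)
    # Summarize each overlapping chunk, then combine.
    partials = [llm_summarize(c) for c in chunk_text(text, max_tokens, chunk_overlap)]
    combined = " ".join(partials)
    # Recurse if the combined partial summaries are still too long.
    return summarize(combined, max_tokens, chunk_overlap) if len(combined) > max_tokens else combined
```

The overlap keeps sentences that straddle a chunk boundary visible to both chunks, so context is not lost at the seams.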
## AI Conversation Block

### What it is

A block that manages multi-turn conversations with LLMs.

### What it does

Handles back-and-forth dialogue between users and AI models, maintaining conversation context across turns.

### How it works

Processes a list of messages representing the conversation so far and generates an appropriate response while maintaining context.

### Inputs

- Messages: List of previous conversation messages
- Model: Choice of LLM to use
- Credentials: API key for the chosen provider
- Max Tokens: Maximum response length

### Outputs

- Response: The model's reply to the conversation
- Error: Any error message if the process fails

### Possible use case

Creating interactive chatbots, virtual assistants, or customer service automation.

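Chat-style APIs commonly represent the Messages input as a list of role/content pairs; the sketch below assumes that convention, and the `append_turn` helper is illustrative rather than part of the block's API.

```python
def append_turn(messages: list[dict], user_text: str, model_reply: str) -> list[dict]:
    """Extend the conversation with one user turn and the model's reply."""
    return messages + [
        {"role": "user", "content": user_text},
        {"role": "assistant", "content": model_reply},
    ]

# The system message sets persistent behavior; each turn appends to the list,
# so the full history is resent with every request to preserve context.
history = [{"role": "system", "content": "You are a helpful support agent."}]
history = append_turn(history, "Where is my order?", "Could you share your order number?")
```

Because the whole history is resent on every turn, long conversations eventually need truncation or summarization to stay under the model's context limit.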
## AI List Generator

### What it is

A block that generates lists of items based on given criteria or source data.

### What it does

Creates structured lists of items, either from provided source data or based on specific focus criteria.

### How it works

Analyzes the source data or focus requirements and generates a properly formatted list of relevant items, retrying until the output parses as a valid list.

### Inputs

- Focus: The specific topic or criteria for the list
- Source Data: Optional text to extract list items from
- Model: Choice of LLM to use
- Credentials: API key for the chosen provider
- Max Retries: Number of attempts to generate a valid list
- Max Tokens: Maximum response length

### Outputs

- Generated List: The complete list of items
- List Item: Individual items from the list
- Error: Any error message if the process fails

### Possible use case

Extracting key points from articles, generating to-do lists, or creating categorical listings from unstructured text.
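The generate-and-validate loop can be sketched as follows, assuming the model is asked to reply with a JSON array. `call_llm` is a hypothetical stand-in for the provider API, not the block's real interface.

```python
import json

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for the model call; a real model is instructed
    # to reply with a JSON array of strings.
    return '["fast shipping", "friendly support", "easy returns"]'

def generate_list(focus: str, max_retries: int = 3) -> list[str]:
    """Ask the model for a JSON array and retry until the reply parses as a list."""
    prompt = f"List items about: {focus}. Reply with a JSON array of strings."
    for _ in range(max_retries):
        try:
            items = json.loads(call_llm(prompt))
        except json.JSONDecodeError:
            continue  # not valid JSON; retry
        if isinstance(items, list):
            # The block exposes this once as "Generated List" and
            # once per element as "List Item".
            return items
    raise ValueError(f"no valid list after {max_retries} attempts")
```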