---
title: "Human-in-the-Loop (HITL) Workflows"
description: "Learn how to implement Human-in-the-Loop workflows in CrewAI for enhanced decision-making"
icon: "user-check"
mode: "wide"
---
Human-in-the-Loop (HITL) is a powerful approach that combines artificial intelligence with human expertise to enhance decision-making and improve task outcomes. CrewAI provides multiple ways to implement HITL depending on your needs.

## Choosing Your HITL Approach

CrewAI offers two main approaches for implementing human-in-the-loop workflows:

| Approach | Best For | Integration |
|----------|----------|-------------|
| **Flow-based** (`@human_feedback` decorator) | Local development, console-based review, synchronous workflows | [Human Feedback in Flows](/en/learn/human-feedback-in-flows) |
| **Webhook-based** (Enterprise) | Production deployments, async workflows, external integrations (Slack, Teams, etc.) | This guide |
<Tip>
If you're building flows and want to add human review steps with routing based on feedback, check out the [Human Feedback in Flows](/en/learn/human-feedback-in-flows) guide for the `@human_feedback` decorator.
</Tip>

## Setting Up Webhook-Based HITL Workflows

<Steps>
  <Step title="Configure Your Task">
    Set up your task with human input enabled:
    <Frame>
      <img src="/images/enterprise/crew-human-input.png" alt="Crew Human Input" />
    </Frame>
  </Step>
  <Step title="Provide Webhook URL">
    When kicking off your crew, include a webhook URL for human input:
    <Frame>
      <img src="/images/enterprise/crew-webhook-url.png" alt="Crew Webhook URL" />
    </Frame>

    Example with Bearer authentication:

    ```bash
    curl -X POST {BASE_URL}/kickoff \
      -H "Authorization: Bearer YOUR_API_TOKEN" \
      -H "Content-Type: application/json" \
      -d '{
        "inputs": {
          "topic": "AI Research"
        },
        "humanInputWebhook": {
          "url": "https://your-webhook.com/hitl",
          "authentication": {
            "strategy": "bearer",
            "token": "your-webhook-secret-token"
          }
        }
      }'
    ```
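    On the receiving side, your webhook endpoint should verify the credentials CrewAI sends before trusting the payload. A minimal sketch in Python, assuming the bearer strategy configured above (the `is_authorized` helper and the secret value are illustrative, not part of CrewAI):

    ```python
    import hmac

    # The token you configured in the kickoff payload's "authentication" block.
    WEBHOOK_SECRET = "your-webhook-secret-token"

    def is_authorized(headers: dict) -> bool:
        """Return True if the request carries the expected bearer token."""
        auth = headers.get("Authorization", "")
        if not auth.startswith("Bearer "):
            return False
        supplied = auth[len("Bearer "):]
        # Constant-time comparison avoids leaking the token via timing.
        return hmac.compare_digest(supplied, WEBHOOK_SECRET)
    ```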
    Or with Basic authentication:

    ```bash
    curl -X POST {BASE_URL}/kickoff \
      -H "Authorization: Bearer YOUR_API_TOKEN" \
      -H "Content-Type: application/json" \
      -d '{
        "inputs": {
          "topic": "AI Research"
        },
        "humanInputWebhook": {
          "url": "https://your-webhook.com/hitl",
          "authentication": {
            "strategy": "basic",
            "username": "your-username",
            "password": "your-password"
          }
        }
      }'
    ```
  </Step>
  <Step title="Receive Webhook Notification">
    Once the crew completes the task requiring human input, you'll receive a webhook notification containing:
    - Execution ID
    - Task ID
    - Task output
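    As a sketch, a handler might pull those fields out of the JSON body so you can resume the execution later. The key names below (`execution_id`, `task_id`, `task_output`) are assumptions for illustration; inspect a payload your endpoint actually receives to confirm them:

    ```python
    import json

    def parse_hitl_notification(body: str) -> dict:
        """Extract the fields needed to resume the execution later.

        Key names are illustrative; check them against a real payload.
        """
        payload = json.loads(body)
        return {
            "execution_id": payload["execution_id"],
            "task_id": payload["task_id"],
            "task_output": payload["task_output"],
        }
    ```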
</Step>

  <Step title="Review Task Output">
    The system will pause in the `Pending Human Input` state. Review the task output carefully.
  </Step>
  <Step title="Submit Human Feedback">
    Call the resume endpoint of your crew with the following information:
    <Frame>
      <img src="/images/enterprise/crew-resume-endpoint.png" alt="Crew Resume Endpoint" />
    </Frame>

    <Warning>
    **Critical: Webhook URLs Must Be Provided Again**

    You **must** provide the same webhook URLs (`taskWebhookUrl`, `stepWebhookUrl`, `crewWebhookUrl`) in the resume call that you used in the kickoff call. Webhook configurations are **not** automatically carried over from kickoff; they must be explicitly included in the resume request to continue receiving notifications for task completion, agent steps, and crew completion.
    </Warning>
    Example resume call with webhooks:

    ```bash
    curl -X POST {BASE_URL}/resume \
      -H "Authorization: Bearer YOUR_API_TOKEN" \
      -H "Content-Type: application/json" \
      -d '{
        "execution_id": "abcd1234-5678-90ef-ghij-klmnopqrstuv",
        "task_id": "research_task",
        "human_feedback": "Great work! Please add more details.",
        "is_approve": true,
        "taskWebhookUrl": "https://your-server.com/webhooks/task",
        "stepWebhookUrl": "https://your-server.com/webhooks/step",
        "crewWebhookUrl": "https://your-server.com/webhooks/crew"
      }'
    ```
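    Because forgotten webhook URLs silently stop notifications, it can help to build the resume body through a small helper that fails fast when one is missing. This is a sketch (the `build_resume_payload` helper is not part of CrewAI); it mirrors the JSON body of the curl call above:

    ```python
    def build_resume_payload(
        execution_id: str,
        task_id: str,
        feedback: str,
        approve: bool,
        webhooks: dict,
    ) -> dict:
        """Build the resume request body, re-attaching the kickoff webhook URLs."""
        required = {"taskWebhookUrl", "stepWebhookUrl", "crewWebhookUrl"}
        missing = required - webhooks.keys()
        if missing:
            # Fail fast: omitting a webhook URL means no more notifications for it.
            raise ValueError(f"missing webhook URLs: {sorted(missing)}")
        return {
            "execution_id": execution_id,
            "task_id": task_id,
            "human_feedback": feedback,
            "is_approve": approve,
            **webhooks,
        }
    ```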
    <Warning>
    **Feedback Impact on Task Execution**

    Exercise care when providing feedback: the entire feedback content is incorporated as additional context for further task executions.
    </Warning>

    This means:
    - All information in your feedback becomes part of the task's context.
    - Irrelevant details may negatively influence subsequent executions.
    - Concise, relevant feedback helps maintain task focus and efficiency.
    - Always review your feedback carefully before submission to ensure it contains only pertinent information that will positively guide the task's execution.
  </Step>

  <Step title="Handle Negative Feedback">
    If you provide negative feedback:
    - The crew will retry the task with added context from your feedback.
    - You'll receive another webhook notification for further review.
    - Repeat steps 3-5 until satisfied.
  </Step>
  <Step title="Execution Continuation">
    When you submit positive feedback, the execution will proceed to the next steps.
  </Step>
</Steps>
## Best Practices

- **Be Specific**: Provide clear, actionable feedback that directly addresses the task at hand
- **Stay Relevant**: Only include information that will help improve the task execution
- **Be Timely**: Respond to HITL prompts promptly to avoid workflow delays
- **Review Carefully**: Double-check your feedback before submitting to ensure accuracy
## Common Use Cases

HITL workflows are particularly valuable for:

- Quality assurance and validation
- Complex decision-making scenarios
- Sensitive or high-stakes operations
- Creative tasks requiring human judgment
- Compliance and regulatory reviews