From 508759610f99360017ac788a2289a913bc70adde Mon Sep 17 00:00:00 2001 From: Abhimanyu Yadav <122007096+Abhi1992002@users.noreply.github.com> Date: Wed, 11 Feb 2026 18:39:21 +0530 Subject: [PATCH 01/18] fix(frontend): add min-width-0 to ContentCard to prevent overflow (#12060) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit ### Changes 🏗️ Added `min-w-0` class to the ContentCard component in the ToolAccordion to prevent content overflow issues. This CSS fix ensures that the card properly respects its container width constraints and allows text truncation to work correctly when content is too wide. ### Checklist 📋 #### For code changes: - [x] I have clearly listed my changes in the PR description - [x] I have made a test plan - [x] I have tested my changes according to the test plan: - [x] Verified that tool content displays correctly in the accordion - [x] Confirmed that long content properly truncates instead of overflowing - [x] Tested with various screen sizes to ensure responsive behavior #### For configuration changes: - [x] `.env.default` is updated or already compatible with my changes - [x] `docker-compose.yml` is updated or already compatible with my changes

Greptile Overview

Greptile Summary

Added `min-w-0` class to `ContentCard` component to fix text truncation overflow in grid layouts. This is a standard CSS fix that allows grid items to shrink below their content size, enabling `truncate` classes on child elements (`ContentCardTitle`, `ContentCardSubtitle`) to work correctly. The fix follows the same pattern already used in `ContentCardHeader` (line 54) and `ToolAccordion` (line 54).

Confidence Score: 5/5

- Safe to merge with no risk - Single-line CSS fix that addresses a well-known flexbox/grid layout issue. The change follows existing patterns in the codebase and is thoroughly tested. No logic changes, no breaking changes, no side effects. - No files require special attention
--- .../copilot/components/ToolAccordion/AccordionContent.tsx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/autogpt_platform/frontend/src/app/(platform)/copilot/components/ToolAccordion/AccordionContent.tsx b/autogpt_platform/frontend/src/app/(platform)/copilot/components/ToolAccordion/AccordionContent.tsx index 987941eee1..dab8f49257 100644 --- a/autogpt_platform/frontend/src/app/(platform)/copilot/components/ToolAccordion/AccordionContent.tsx +++ b/autogpt_platform/frontend/src/app/(platform)/copilot/components/ToolAccordion/AccordionContent.tsx @@ -30,7 +30,7 @@ export function ContentCard({ return (
From 2a189c44c4f3b03c11137ccefb7e28785b91d6ef Mon Sep 17 00:00:00 2001 From: Ubbe Date: Wed, 11 Feb 2026 22:46:37 +0800 Subject: [PATCH 02/18] fix(frontend): API stream issues leaking into prompt (#12063) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit ## Changes 🏗️ Screenshot 2026-02-11 at 19 32 39 When the BE API has an error, prevent it from leaking into the stream and instead handle it gracefully via toast. ## Checklist 📋 ### For code changes: - [x] I have clearly listed my changes in the PR description - [x] I have made a test plan - [x] I have tested my changes according to the test plan: - [x] Run the app locally and trust the changes

Greptile Overview

Greptile Summary

This PR fixes an issue where backend API stream errors were leaking into the chat prompt instead of being handled gracefully. The fix involves both backend and frontend changes to ensure error events conform to the AI SDK's strict schema. **Key Changes:** - **Backend (`response_model.py`)**: Added custom `to_sse()` method for `StreamError` that only emits `type` and `errorText` fields, stripping extra fields like `code` and `details` that cause AI SDK validation failures - **Backend (`prompt.py`)**: Added validation step after context compression to remove orphaned tool responses without matching tool calls, preventing "unexpected tool_use_id" API errors - **Frontend (`route.ts`)**: Implemented SSE stream normalization with `normalizeSSEStream()` and `normalizeSSEEvent()` functions to strip non-conforming fields from error events before they reach the AI SDK - **Frontend (`ChatMessagesContainer.tsx`)**: Added toast notifications for errors and improved error display UI with deduplication logic The changes ensure a clean separation between internal error metadata (useful for logging/debugging) and the strict schema required by the AI SDK on the frontend.

Confidence Score: 4/5

- This PR is safe to merge with low risk - The changes are well-structured and address a specific bug with proper error handling. The dual-layer approach (backend filtering in `to_sse()` + frontend normalization) provides defense-in-depth. However, the lack of automated tests for the new error normalization logic and the potential for edge cases in SSE parsing prevent a perfect score. - Pay close attention to `autogpt_platform/frontend/src/app/api/chat/sessions/[sessionId]/stream/route.ts` - the SSE normalization logic should be tested with various error scenarios

Sequence Diagram

```mermaid sequenceDiagram participant User participant Frontend as ChatMessagesContainer participant Proxy as /api/chat/.../stream participant Backend as Backend API participant AISDK as AI SDK User->>Frontend: Send message Frontend->>Proxy: POST with message Proxy->>Backend: Forward request with auth Backend->>Backend: Process message alt Success Path Backend->>Proxy: SSE stream (text-delta, etc.) Proxy->>Proxy: normalizeSSEStream (pass through) Proxy->>AISDK: Forward SSE events AISDK->>Frontend: Update messages Frontend->>User: Display response else Error Path Backend->>Backend: StreamError.to_sse() Note over Backend: Only emit {type, errorText} Backend->>Proxy: SSE error event Proxy->>Proxy: normalizeSSEEvent() Note over Proxy: Strip extra fields (code, details) Proxy->>AISDK: {type: "error", errorText: "..."} AISDK->>Frontend: error state updated Frontend->>Frontend: Toast notification (deduplicated) Frontend->>User: Show error UI + toast end ```
--------- Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com> Co-authored-by: Otto-AGPT --- .../api/features/chat/response_model.py | 14 ++++ .../backend/backend/util/prompt.py | 45 ++++++++++++ .../ChatMessagesContainer.tsx | 26 ++++++- .../copilot/tools/CreateAgent/CreateAgent.tsx | 3 +- .../copilot/tools/RunAgent/helpers.tsx | 2 +- .../copilot/tools/RunBlock/helpers.tsx | 2 +- .../chat/sessions/[sessionId]/stream/route.ts | 44 +++++------- .../frontend/src/app/api/chat/sse-helpers.ts | 72 +++++++++++++++++++ .../api/chat/tasks/[taskId]/stream/route.ts | 31 ++------ 9 files changed, 180 insertions(+), 59 deletions(-) create mode 100644 autogpt_platform/frontend/src/app/api/chat/sse-helpers.ts diff --git a/autogpt_platform/backend/backend/api/features/chat/response_model.py b/autogpt_platform/backend/backend/api/features/chat/response_model.py index 1ae836f7d1..8ea0c1f97a 100644 --- a/autogpt_platform/backend/backend/api/features/chat/response_model.py +++ b/autogpt_platform/backend/backend/api/features/chat/response_model.py @@ -10,6 +10,8 @@ from typing import Any from pydantic import BaseModel, Field +from backend.util.json import dumps as json_dumps + class ResponseType(str, Enum): """Types of streaming responses following AI SDK protocol.""" @@ -193,6 +195,18 @@ class StreamError(StreamBaseResponse): default=None, description="Additional error details" ) + def to_sse(self) -> str: + """Convert to SSE format, only emitting fields required by AI SDK protocol. + + The AI SDK uses z.strictObject({type, errorText}) which rejects + any extra fields like `code` or `details`. + """ + data = { + "type": self.type.value, + "errorText": self.errorText, + } + return f"data: {json_dumps(data)}\n\n" + class StreamHeartbeat(StreamBaseResponse): """Heartbeat to keep SSE connection alive during long-running operations. diff --git a/autogpt_platform/backend/backend/util/prompt.py b/autogpt_platform/backend/backend/util/prompt.py index 5f904bbc8a..3ec25dd61b 100644 --- a/autogpt_platform/backend/backend/util/prompt.py +++ b/autogpt_platform/backend/backend/util/prompt.py @@ -364,6 +364,44 @@ def _remove_orphan_tool_responses( return result +def validate_and_remove_orphan_tool_responses( + messages: list[dict], + log_warning: bool = True, +) -> list[dict]: + """ + Validate tool_call/tool_response pairs and remove orphaned responses. + + Scans messages in order, tracking all tool_call IDs. Any tool response + referencing an ID not seen in a preceding message is considered orphaned + and removed. This prevents API errors like Anthropic's "unexpected tool_use_id". + + Args: + messages: List of messages to validate (OpenAI or Anthropic format) + log_warning: Whether to log a warning when orphans are found + + Returns: + A new list with orphaned tool responses removed + """ + available_ids: set[str] = set() + orphan_ids: set[str] = set() + + for msg in messages: + available_ids |= _extract_tool_call_ids_from_message(msg) + for resp_id in _extract_tool_response_ids_from_message(msg): + if resp_id not in available_ids: + orphan_ids.add(resp_id) + + if not orphan_ids: + return messages + + if log_warning: + logger.warning( + f"Removing {len(orphan_ids)} orphan tool response(s): {orphan_ids}" + ) + + return _remove_orphan_tool_responses(messages, orphan_ids) + + def _ensure_tool_pairs_intact( recent_messages: list[dict], all_messages: list[dict], @@ -723,6 +761,13 @@ async def compress_context( # Filter out any None values that may have been introduced final_msgs: list[dict] = [m for m in msgs if m is not None] + + # ---- STEP 6: Final tool-pair validation --------------------------------- + # After all compression steps, verify that every tool response has a + # matching tool_call in a preceding assistant message. Remove orphans + # to prevent API errors (e.g., Anthropic's "unexpected tool_use_id"). + final_msgs = validate_and_remove_orphan_tool_responses(final_msgs) + final_count = sum(_msg_tokens(m, enc) for m in final_msgs) error = None if final_count + reserve > target_tokens: diff --git a/autogpt_platform/frontend/src/app/(platform)/copilot/components/ChatMessagesContainer/ChatMessagesContainer.tsx b/autogpt_platform/frontend/src/app/(platform)/copilot/components/ChatMessagesContainer/ChatMessagesContainer.tsx index 4578b268e3..fbe1c03d1d 100644 --- a/autogpt_platform/frontend/src/app/(platform)/copilot/components/ChatMessagesContainer/ChatMessagesContainer.tsx +++ b/autogpt_platform/frontend/src/app/(platform)/copilot/components/ChatMessagesContainer/ChatMessagesContainer.tsx @@ -10,8 +10,9 @@ import { MessageResponse, } from "@/components/ai-elements/message"; import { LoadingSpinner } from "@/components/atoms/LoadingSpinner/LoadingSpinner"; +import { toast } from "@/components/molecules/Toast/use-toast"; import { ToolUIPart, UIDataTypes, UIMessage, UITools } from "ai"; -import { useEffect, useState } from "react"; +import { useEffect, useRef, useState } from "react"; import { CreateAgentTool } from "../../tools/CreateAgent/CreateAgent"; import { EditAgentTool } from "../../tools/EditAgent/EditAgent"; import { FindAgentsTool } from "../../tools/FindAgents/FindAgents"; @@ -121,6 +122,7 @@ export const ChatMessagesContainer = ({ isLoading, }: ChatMessagesContainerProps) => { const [thinkingPhrase, setThinkingPhrase] = useState(getRandomPhrase); + const lastToastTimeRef = useRef(0); useEffect(() => { if (status === "submitted") { @@ -128,6 +130,20 @@ export const ChatMessagesContainer = ({ } }, [status]); + // Show a toast when a new error occurs, debounced to avoid spam + useEffect(() => { + if (!error) return; + const now = Date.now(); + if (now - lastToastTimeRef.current < 3_000) return; + lastToastTimeRef.current = now; + toast({ + variant: "destructive", + title: "Something went wrong", + description: + "The assistant encountered an error. Please try sending your message again.", + }); + }, [error]); + const lastMessage = messages[messages.length - 1]; const lastAssistantHasVisibleContent = lastMessage?.role === "assistant" && @@ -263,8 +279,12 @@ export const ChatMessagesContainer = ({ )} {error && ( -
- Error: {error.message} +
+

Something went wrong

+

+ The assistant encountered an error. Please try sending your + message again. +

)} diff --git a/autogpt_platform/frontend/src/app/(platform)/copilot/tools/CreateAgent/CreateAgent.tsx b/autogpt_platform/frontend/src/app/(platform)/copilot/tools/CreateAgent/CreateAgent.tsx index 0d023d0529..88b1c491d7 100644 --- a/autogpt_platform/frontend/src/app/(platform)/copilot/tools/CreateAgent/CreateAgent.tsx +++ b/autogpt_platform/frontend/src/app/(platform)/copilot/tools/CreateAgent/CreateAgent.tsx @@ -4,7 +4,6 @@ import { WarningDiamondIcon } from "@phosphor-icons/react"; import type { ToolUIPart } from "ai"; import { useCopilotChatActions } from "../../components/CopilotChatActionsProvider/useCopilotChatActions"; import { MorphingTextAnimation } from "../../components/MorphingTextAnimation/MorphingTextAnimation"; -import { OrbitLoader } from "../../components/OrbitLoader/OrbitLoader"; import { ProgressBar } from "../../components/ProgressBar/ProgressBar"; import { ContentCardDescription, @@ -77,7 +76,7 @@ function getAccordionMeta(output: CreateAgentToolOutput) { isOperationInProgressOutput(output) ) { return { - icon: , + icon, title: "Creating agent, this may take a few minutes. Sit back and relax.", }; } diff --git a/autogpt_platform/frontend/src/app/(platform)/copilot/tools/RunAgent/helpers.tsx b/autogpt_platform/frontend/src/app/(platform)/copilot/tools/RunAgent/helpers.tsx index 816c661230..2b75ed9c97 100644 --- a/autogpt_platform/frontend/src/app/(platform)/copilot/tools/RunAgent/helpers.tsx +++ b/autogpt_platform/frontend/src/app/(platform)/copilot/tools/RunAgent/helpers.tsx @@ -203,7 +203,7 @@ export function getAccordionMeta(output: RunAgentToolOutput): { ? output.status.trim() : "started"; return { - icon: , + icon, title: output.graph_name, description: `Status: ${statusText}`, }; diff --git a/autogpt_platform/frontend/src/app/(platform)/copilot/tools/RunBlock/helpers.tsx b/autogpt_platform/frontend/src/app/(platform)/copilot/tools/RunBlock/helpers.tsx index c9b903876a..b8625988cd 100644 --- a/autogpt_platform/frontend/src/app/(platform)/copilot/tools/RunBlock/helpers.tsx +++ b/autogpt_platform/frontend/src/app/(platform)/copilot/tools/RunBlock/helpers.tsx @@ -149,7 +149,7 @@ export function getAccordionMeta(output: RunBlockToolOutput): { if (isRunBlockBlockOutput(output)) { const keys = Object.keys(output.outputs ?? {}); return { - icon: , + icon, title: output.block_name, description: keys.length > 0 diff --git a/autogpt_platform/frontend/src/app/api/chat/sessions/[sessionId]/stream/route.ts b/autogpt_platform/frontend/src/app/api/chat/sessions/[sessionId]/stream/route.ts index 6facf80c58..bd27c77963 100644 --- a/autogpt_platform/frontend/src/app/api/chat/sessions/[sessionId]/stream/route.ts +++ b/autogpt_platform/frontend/src/app/api/chat/sessions/[sessionId]/stream/route.ts @@ -1,11 +1,8 @@ import { environment } from "@/services/environment"; import { getServerAuthToken } from "@/lib/autogpt-server-api/helpers"; import { NextRequest } from "next/server"; +import { normalizeSSEStream, SSE_HEADERS } from "../../../sse-helpers"; -/** - * SSE Proxy for chat streaming. - * Supports POST with context (page content + URL) in the request body. - */ export async function POST( request: NextRequest, { params }: { params: Promise<{ sessionId: string }> }, @@ -23,17 +20,14 @@ export async function POST( ); } - // Get auth token from server-side session const token = await getServerAuthToken(); - // Build backend URL const backendUrl = environment.getAGPTServerBaseUrl(); const streamUrl = new URL( `/api/chat/sessions/${sessionId}/stream`, backendUrl, ); - // Forward request to backend with auth header const headers: Record = { "Content-Type": "application/json", Accept: "text/event-stream", @@ -63,14 +57,15 @@ export async function POST( }); } - // Return the SSE stream directly - return new Response(response.body, { - headers: { - "Content-Type": "text/event-stream", - "Cache-Control": "no-cache, no-transform", - Connection: "keep-alive", - "X-Accel-Buffering": "no", - }, + if (!response.body) { + return new Response( + JSON.stringify({ error: "Empty response from chat service" }), + { status: 502, headers: { "Content-Type": "application/json" } }, + ); + } + + return new Response(normalizeSSEStream(response.body), { + headers: SSE_HEADERS, }); } catch (error) { console.error("SSE proxy error:", error); @@ -87,13 +82,6 @@ export async function POST( } } -/** - * Resume an active stream for a session. - * - * Called by the AI SDK's `useChat(resume: true)` on page load. - * Proxies to the backend which checks for an active stream and either - * replays it (200 + SSE) or returns 204 No Content. - */ export async function GET( _request: NextRequest, { params }: { params: Promise<{ sessionId: string }> }, @@ -124,7 +112,6 @@ export async function GET( headers, }); - // 204 = no active stream to resume if (response.status === 204) { return new Response(null, { status: 204 }); } @@ -137,12 +124,13 @@ export async function GET( }); } - return new Response(response.body, { + if (!response.body) { + return new Response(null, { status: 204 }); + } + + return new Response(normalizeSSEStream(response.body), { headers: { - "Content-Type": "text/event-stream", - "Cache-Control": "no-cache, no-transform", - Connection: "keep-alive", - "X-Accel-Buffering": "no", + ...SSE_HEADERS, "x-vercel-ai-ui-message-stream": "v1", }, }); diff --git a/autogpt_platform/frontend/src/app/api/chat/sse-helpers.ts b/autogpt_platform/frontend/src/app/api/chat/sse-helpers.ts new file mode 100644 index 0000000000..a5c76cf872 --- /dev/null +++ b/autogpt_platform/frontend/src/app/api/chat/sse-helpers.ts @@ -0,0 +1,72 @@ +export const SSE_HEADERS = { + "Content-Type": "text/event-stream", + "Cache-Control": "no-cache, no-transform", + Connection: "keep-alive", + "X-Accel-Buffering": "no", +} as const; + +export function normalizeSSEStream( + input: ReadableStream, +): ReadableStream { + const decoder = new TextDecoder(); + const encoder = new TextEncoder(); + let buffer = ""; + + return input.pipeThrough( + new TransformStream({ + transform(chunk, controller) { + buffer += decoder.decode(chunk, { stream: true }); + + const parts = buffer.split("\n\n"); + buffer = parts.pop() ?? ""; + + for (const part of parts) { + const normalized = normalizeSSEEvent(part); + controller.enqueue(encoder.encode(normalized + "\n\n")); + } + }, + flush(controller) { + if (buffer.trim()) { + const normalized = normalizeSSEEvent(buffer); + controller.enqueue(encoder.encode(normalized + "\n\n")); + } + }, + }), + ); +} + +function normalizeSSEEvent(event: string): string { + const lines = event.split("\n"); + const dataLines: string[] = []; + const otherLines: string[] = []; + + for (const line of lines) { + if (line.startsWith("data: ")) { + dataLines.push(line.slice(6)); + } else { + otherLines.push(line); + } + } + + if (dataLines.length === 0) return event; + + const dataStr = dataLines.join("\n"); + try { + const parsed = JSON.parse(dataStr) as Record; + if (parsed.type === "error") { + const normalized = { + type: "error", + errorText: + typeof parsed.errorText === "string" + ? parsed.errorText + : "An unexpected error occurred", + }; + const newData = `data: ${JSON.stringify(normalized)}`; + return [...otherLines.filter((l) => l.length > 0), newData].join("\n"); + } + } catch { + // Not valid JSON — pass through as-is + } + + return event; +} diff --git a/autogpt_platform/frontend/src/app/api/chat/tasks/[taskId]/stream/route.ts b/autogpt_platform/frontend/src/app/api/chat/tasks/[taskId]/stream/route.ts index 336786bfdb..238fdebb06 100644 --- a/autogpt_platform/frontend/src/app/api/chat/tasks/[taskId]/stream/route.ts +++ b/autogpt_platform/frontend/src/app/api/chat/tasks/[taskId]/stream/route.ts @@ -1,20 +1,8 @@ import { environment } from "@/services/environment"; import { getServerAuthToken } from "@/lib/autogpt-server-api/helpers"; import { NextRequest } from "next/server"; +import { normalizeSSEStream, SSE_HEADERS } from "../../../sse-helpers"; -/** - * SSE Proxy for task stream reconnection. - * - * This endpoint allows clients to reconnect to an ongoing or recently completed - * background task's stream. It replays missed messages from Redis Streams and - * subscribes to live updates if the task is still running. - * - * Client contract: - * 1. When receiving an operation_started event, store the task_id - * 2. To reconnect: GET /api/chat/tasks/{taskId}/stream?last_message_id={idx} - * 3. Messages are replayed from the last_message_id position - * 4. Stream ends when "finish" event is received - */ export async function GET( request: NextRequest, { params }: { params: Promise<{ taskId: string }> }, @@ -24,15 +12,12 @@ export async function GET( const lastMessageId = searchParams.get("last_message_id") || "0-0"; try { - // Get auth token from server-side session const token = await getServerAuthToken(); - // Build backend URL const backendUrl = environment.getAGPTServerBaseUrl(); const streamUrl = new URL(`/api/chat/tasks/${taskId}/stream`, backendUrl); streamUrl.searchParams.set("last_message_id", lastMessageId); - // Forward request to backend with auth header const headers: Record = { Accept: "text/event-stream", "Cache-Control": "no-cache", @@ -56,14 +41,12 @@ export async function GET( }); } - // Return the SSE stream directly - return new Response(response.body, { - headers: { - "Content-Type": "text/event-stream", - "Cache-Control": "no-cache, no-transform", - Connection: "keep-alive", - "X-Accel-Buffering": "no", - }, + if (!response.body) { + return new Response(null, { status: 204 }); + } + + return new Response(normalizeSSEStream(response.body), { + headers: SSE_HEADERS, }); } catch (error) { console.error("Task stream proxy error:", error); From 36aeb0b2b3d0ee0e8f23236771108040425a6cd5 Mon Sep 17 00:00:00 2001 From: Otto Date: Wed, 11 Feb 2026 15:43:58 +0000 Subject: [PATCH 03/18] docs(blocks): clarify HumanInTheLoop output descriptions for agent builder (#12069) ## Problem The agent builder (LLM) misinterprets the HumanInTheLoop block outputs. It thinks `approved_data` and `rejected_data` will yield status strings like "APPROVED" or "REJECTED" instead of understanding that the actual input data passes through. This leads to unnecessary complexity - the agent builder adds comparison blocks to check for status strings that don't exist. ## Solution Enriched the block docstring and all input/output field descriptions to make it explicit that: 1. The output is the actual data itself, not a status string 2. The routing is determined by which output pin fires 3. How to use the block correctly (connect downstream blocks to appropriate output pins) ## Changes - Updated block docstring with clear "How it works" and "Example usage" sections - Enhanced `data` input description to explain data flow - Enhanced `name` input description for reviewer context - Enhanced `approved_data` output to explicitly state it's NOT a status string - Enhanced `rejected_data` output to explicitly state it's NOT a status string - Enhanced `review_message` output for clarity ## Testing Documentation-only change to schema descriptions. No functional changes. Fixes SECRT-1930

Greptile Overview

Greptile Summary

Enhanced documentation for the `HumanInTheLoopBlock` to clarify how output pins work. The key improvement explicitly states that output pins (`approved_data` and `rejected_data`) yield the actual input data, not status strings like "APPROVED" or "REJECTED". This prevents the agent builder (LLM) from misinterpreting the block's behavior and adding unnecessary comparison blocks. **Key changes:** - Added "How it works" and "Example usage" sections to the block docstring - Clarified that routing is determined by which output pin fires, not by comparing output values - Enhanced all input/output field descriptions with explicit data flow explanations - Emphasized that downstream blocks should be connected to the appropriate output pin based on desired workflow path This is a documentation-only change with no functional modifications to the code logic.

Confidence Score: 5/5

- This PR is safe to merge with no risk - Documentation-only change that accurately reflects the existing code behavior. No functional changes, no runtime impact, and the enhanced descriptions correctly explain how the block outputs work based on verification of the implementation code. - No files require special attention
Co-authored-by: Zamil Majdy --- .../backend/blocks/human_in_the_loop.py | 56 ++++++++++++++----- docs/integrations/README.md | 2 +- docs/integrations/block-integrations/basic.md | 14 ++--- 3 files changed, 50 insertions(+), 22 deletions(-) diff --git a/autogpt_platform/backend/backend/blocks/human_in_the_loop.py b/autogpt_platform/backend/backend/blocks/human_in_the_loop.py index 568ac4b33f..d31f90ec81 100644 --- a/autogpt_platform/backend/backend/blocks/human_in_the_loop.py +++ b/autogpt_platform/backend/backend/blocks/human_in_the_loop.py @@ -21,43 +21,71 @@ logger = logging.getLogger(__name__) class HumanInTheLoopBlock(Block): """ - This block pauses execution and waits for human approval or modification of the data. + Pauses execution and waits for human approval or rejection of the data. - When executed, it creates a pending review entry and sets the node execution status - to REVIEW. The execution will remain paused until a human user either: - - Approves the data (with or without modifications) - - Rejects the data + When executed, this block creates a pending review entry and sets the node execution + status to REVIEW. The execution remains paused until a human user either approves + or rejects the data. - This is useful for workflows that require human validation or intervention before - proceeding to the next steps. + **How it works:** + - The input data is presented to a human reviewer + - The reviewer can approve or reject (and optionally modify the data if editable) + - On approval: the data flows out through the `approved_data` output pin + - On rejection: the data flows out through the `rejected_data` output pin + + **Important:** The output pins yield the actual data itself, NOT status strings. + The approval/rejection decision determines WHICH output pin fires, not the value. + You do NOT need to compare the output to "APPROVED" or "REJECTED" - simply connect + downstream blocks to the appropriate output pin for each case. + + **Example usage:** + - Connect `approved_data` → next step in your workflow (data was approved) + - Connect `rejected_data` → error handling or notification (data was rejected) """ class Input(BlockSchemaInput): - data: Any = SchemaField(description="The data to be reviewed by a human user") + data: Any = SchemaField( + description="The data to be reviewed by a human user. " + "This exact data will be passed through to either approved_data or " + "rejected_data output based on the reviewer's decision." + ) name: str = SchemaField( - description="A descriptive name for what this data represents", + description="A descriptive name for what this data represents. " + "This helps the reviewer understand what they are reviewing.", ) editable: bool = SchemaField( - description="Whether the human reviewer can edit the data", + description="Whether the human reviewer can edit the data before " + "approving or rejecting it", default=True, advanced=True, ) class Output(BlockSchemaOutput): approved_data: Any = SchemaField( - description="The data when approved (may be modified by reviewer)" + description="Outputs the input data when the reviewer APPROVES it. " + "The value is the actual data itself (not a status string like 'APPROVED'). " + "If the reviewer edited the data, this contains the modified version. " + "Connect downstream blocks here for the 'approved' workflow path." ) rejected_data: Any = SchemaField( - description="The data when rejected (may be modified by reviewer)" + description="Outputs the input data when the reviewer REJECTS it. " + "The value is the actual data itself (not a status string like 'REJECTED'). " + "If the reviewer edited the data, this contains the modified version. " + "Connect downstream blocks here for the 'rejected' workflow path." ) review_message: str = SchemaField( - description="Any message provided by the reviewer", default="" + description="Optional message provided by the reviewer explaining their " + "decision. Only outputs when the reviewer provides a message; " + "this pin does not fire if no message was given.", + default="", ) def __init__(self): super().__init__( id="8b2a7b3c-6e9d-4a5f-8c1b-2e3f4a5b6c7d", - description="Pause execution and wait for human approval or modification of data", + description="Pause execution for human review. Data flows through " + "approved_data or rejected_data output based on the reviewer's decision. " + "Outputs contain the actual data, not status strings.", categories={BlockCategory.BASIC}, input_schema=HumanInTheLoopBlock.Input, output_schema=HumanInTheLoopBlock.Output, diff --git a/docs/integrations/README.md b/docs/integrations/README.md index 97a4d98709..a471ef3533 100644 --- a/docs/integrations/README.md +++ b/docs/integrations/README.md @@ -61,7 +61,7 @@ Below is a comprehensive list of all available blocks, categorized by their prim | [Get List Item](block-integrations/basic.md#get-list-item) | Returns the element at the given index | | [Get Store Agent Details](block-integrations/system/store_operations.md#get-store-agent-details) | Get detailed information about an agent from the store | | [Get Weather Information](block-integrations/basic.md#get-weather-information) | Retrieves weather information for a specified location using OpenWeatherMap API | -| [Human In The Loop](block-integrations/basic.md#human-in-the-loop) | Pause execution and wait for human approval or modification of data | +| [Human In The Loop](block-integrations/basic.md#human-in-the-loop) | Pause execution for human review | | [List Is Empty](block-integrations/basic.md#list-is-empty) | Checks if a list is empty | | [List Library Agents](block-integrations/system/library_operations.md#list-library-agents) | List all agents in your personal library | | [Note](block-integrations/basic.md#note) | A visual annotation block that displays a sticky note in the workflow editor for documentation and organization purposes | diff --git a/docs/integrations/block-integrations/basic.md b/docs/integrations/block-integrations/basic.md index 5a73fd5a03..08def38ede 100644 --- a/docs/integrations/block-integrations/basic.md +++ b/docs/integrations/block-integrations/basic.md @@ -975,7 +975,7 @@ A travel planning application could use this block to provide users with current ## Human In The Loop ### What it is -Pause execution and wait for human approval or modification of data +Pause execution for human review. Data flows through approved_data or rejected_data output based on the reviewer's decision. Outputs contain the actual data, not status strings. ### How it works @@ -988,18 +988,18 @@ This enables human oversight at critical points in automated workflows, ensuring | Input | Description | Type | Required | |-------|-------------|------|----------| -| data | The data to be reviewed by a human user | Data | Yes | -| name | A descriptive name for what this data represents | str | Yes | -| editable | Whether the human reviewer can edit the data | bool | No | +| data | The data to be reviewed by a human user. This exact data will be passed through to either approved_data or rejected_data output based on the reviewer's decision. | Data | Yes | +| name | A descriptive name for what this data represents. This helps the reviewer understand what they are reviewing. | str | Yes | +| editable | Whether the human reviewer can edit the data before approving or rejecting it | bool | No | ### Outputs | Output | Description | Type | |--------|-------------|------| | error | Error message if the operation failed | str | -| approved_data | The data when approved (may be modified by reviewer) | Approved Data | -| rejected_data | The data when rejected (may be modified by reviewer) | Rejected Data | -| review_message | Any message provided by the reviewer | str | +| approved_data | Outputs the input data when the reviewer APPROVES it. The value is the actual data itself (not a status string like 'APPROVED'). If the reviewer edited the data, this contains the modified version. Connect downstream blocks here for the 'approved' workflow path. | Approved Data | +| rejected_data | Outputs the input data when the reviewer REJECTS it. The value is the actual data itself (not a status string like 'REJECTED'). If the reviewer edited the data, this contains the modified version. Connect downstream blocks here for the 'rejected' workflow path. | Rejected Data | +| review_message | Optional message provided by the reviewer explaining their decision. Only outputs when the reviewer provides a message; this pin does not fire if no message was given. | str | ### Possible use case From a78145505b50f158a34ce4f101b5399f1115727e Mon Sep 17 00:00:00 2001 From: Zamil Majdy Date: Thu, 12 Feb 2026 05:52:17 +0400 Subject: [PATCH 04/18] fix(copilot): merge split assistant messages to prevent Anthropic API errors (#12062) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit ## Summary - When the copilot model responds with both text content AND a long-running tool call (e.g., `create_agent`), the streaming code created two separate consecutive assistant messages — one with text, one with `tool_calls`. This caused Anthropic's API to reject with `"unexpected tool_use_id found in tool_result blocks"` because the `tool_result` couldn't find a matching `tool_use` in the immediately preceding assistant message. - Added a defensive merge of consecutive assistant messages in `to_openai_messages()` (fixes existing corrupt sessions too) - Fixed `_yield_tool_call` to add tool_calls to the existing current-turn assistant message instead of creating a new one - Changed `accumulated_tool_calls` assignment to use `extend` to prevent overwriting tool_calls added by long-running tool flow ## Test plan - [x] All 23 chat feature tests pass (`backend/api/features/chat/`) - [x] All 44 prompt utility tests pass (`backend/util/prompt_test.py`) - [x] All pre-commit hooks pass (ruff, isort, black, pyright) - [ ] Manual test: create an agent via copilot, then ask a follow-up question — should no longer get 400 error

Greptile Overview

Greptile Summary

Fixes a critical bug where long-running tool calls (like `create_agent`) caused Anthropic API 400 errors due to split assistant messages. The fix ensures tool calls are added to the existing assistant message instead of creating new ones, and adds a defensive merge function to repair any existing corrupt sessions. **Key changes:** - Added `_merge_consecutive_assistant_messages()` to defensively merge split assistant messages in `to_openai_messages()` - Modified `_yield_tool_call()` to append tool calls to the current-turn assistant message instead of creating a new one - Changed `accumulated_tool_calls` from assignment to `extend` to preserve tool calls already added by long-running tool flow **Impact:** Resolves the issue where users received 400 errors after creating agents via copilot and asking follow-up questions.

Confidence Score: 4/5

- Safe to merge with minor verification recommended - The changes are well-targeted and solve a real API compatibility issue. The logic is sound: searching backwards for the current assistant message is correct, and using `extend` instead of assignment prevents overwriting. The defensive merge in `to_openai_messages()` also fixes existing corrupt sessions. All existing tests pass according to the PR description. - No files require special attention - changes are localized and defensive

Sequence Diagram

```mermaid sequenceDiagram participant User participant StreamAPI as stream_chat_completion participant Chunks as _stream_chat_chunks participant ToolCall as _yield_tool_call participant Session as ChatSession User->>StreamAPI: Send message StreamAPI->>Chunks: Stream chat chunks alt Text + Long-running tool call Chunks->>StreamAPI: Text delta (content) StreamAPI->>Session: Append assistant message with content Chunks->>ToolCall: Tool call detected Note over ToolCall: OLD: Created new assistant message
NEW: Appends to existing assistant ToolCall->>Session: Search backwards for current assistant ToolCall->>Session: Append tool_call to existing message ToolCall->>Session: Add pending tool result end StreamAPI->>StreamAPI: Merge accumulated_tool_calls Note over StreamAPI: Use extend (not assign)
to preserve existing tool_calls StreamAPI->>Session: to_openai_messages() Session->>Session: _merge_consecutive_assistant_messages() Note over Session: Defensive: Merges any split
assistant messages Session-->>StreamAPI: Merged messages StreamAPI->>User: Return response ```
--- .../backend/api/features/chat/model.py | 65 +++++- .../backend/api/features/chat/model_test.py | 214 ++++++++++++++++++ .../backend/api/features/chat/service.py | 18 +- 3 files changed, 286 insertions(+), 11 deletions(-) diff --git a/autogpt_platform/backend/backend/api/features/chat/model.py b/autogpt_platform/backend/backend/api/features/chat/model.py index 7318ef88d7..35418f174f 100644 --- a/autogpt_platform/backend/backend/api/features/chat/model.py +++ b/autogpt_platform/backend/backend/api/features/chat/model.py @@ -2,7 +2,7 @@ import asyncio import logging import uuid from datetime import UTC, datetime -from typing import Any +from typing import Any, cast from weakref import WeakValueDictionary from openai.types.chat import ( @@ -104,6 +104,26 @@ class ChatSession(BaseModel): successful_agent_runs: dict[str, int] = {} successful_agent_schedules: dict[str, int] = {} + def add_tool_call_to_current_turn(self, tool_call: dict) -> None: + """Attach a tool_call to the current turn's assistant message. + + Searches backwards for the most recent assistant message (stopping at + any user message boundary). If found, appends the tool_call to it. + Otherwise creates a new assistant message with the tool_call. + """ + for msg in reversed(self.messages): + if msg.role == "user": + break + if msg.role == "assistant": + if not msg.tool_calls: + msg.tool_calls = [] + msg.tool_calls.append(tool_call) + return + + self.messages.append( + ChatMessage(role="assistant", content="", tool_calls=[tool_call]) + ) + @staticmethod def new(user_id: str) -> "ChatSession": return ChatSession( @@ -172,6 +192,47 @@ class ChatSession(BaseModel): successful_agent_schedules=successful_agent_schedules, ) + @staticmethod + def _merge_consecutive_assistant_messages( + messages: list[ChatCompletionMessageParam], + ) -> list[ChatCompletionMessageParam]: + """Merge consecutive assistant messages into single messages. + + Long-running tool flows can create split assistant messages: one with + text content and another with tool_calls. Anthropic's API requires + tool_result blocks to reference a tool_use in the immediately preceding + assistant message, so these splits cause 400 errors via OpenRouter. + """ + if len(messages) < 2: + return messages + + result: list[ChatCompletionMessageParam] = [messages[0]] + for msg in messages[1:]: + prev = result[-1] + if prev.get("role") != "assistant" or msg.get("role") != "assistant": + result.append(msg) + continue + + prev = cast(ChatCompletionAssistantMessageParam, prev) + curr = cast(ChatCompletionAssistantMessageParam, msg) + + curr_content = curr.get("content") or "" + if curr_content: + prev_content = prev.get("content") or "" + prev["content"] = ( + f"{prev_content}\n{curr_content}" if prev_content else curr_content + ) + + curr_tool_calls = curr.get("tool_calls") + if curr_tool_calls: + prev_tool_calls = prev.get("tool_calls") + prev["tool_calls"] = ( + list(prev_tool_calls) + list(curr_tool_calls) + if prev_tool_calls + else list(curr_tool_calls) + ) + return result + def to_openai_messages(self) -> list[ChatCompletionMessageParam]: messages = [] for message in self.messages: @@ -258,7 +319,7 @@ class ChatSession(BaseModel): name=message.name or "", ) ) - return messages + return self._merge_consecutive_assistant_messages(messages) async def _get_session_from_cache(session_id: str) -> ChatSession | None: diff --git a/autogpt_platform/backend/backend/api/features/chat/model_test.py b/autogpt_platform/backend/backend/api/features/chat/model_test.py index c230b00f9c..239137844d 100644 --- a/autogpt_platform/backend/backend/api/features/chat/model_test.py +++ b/autogpt_platform/backend/backend/api/features/chat/model_test.py @@ -1,4 +1,16 @@ +from typing import cast + import pytest +from openai.types.chat import ( + ChatCompletionAssistantMessageParam, + ChatCompletionMessageParam, + ChatCompletionToolMessageParam, + ChatCompletionUserMessageParam, +) +from openai.types.chat.chat_completion_message_tool_call_param import ( + ChatCompletionMessageToolCallParam, + Function, +) from .model import ( ChatMessage, @@ -117,3 +129,205 @@ async def test_chatsession_db_storage(setup_test_user, test_user_id): loaded.tool_calls is not None ), f"Tool calls missing for {orig.role} message" assert len(orig.tool_calls) == len(loaded.tool_calls) + + +# --------------------------------------------------------------------------- # +# _merge_consecutive_assistant_messages # +# --------------------------------------------------------------------------- # + +_tc = ChatCompletionMessageToolCallParam( + id="tc1", type="function", function=Function(name="do_stuff", arguments="{}") +) +_tc2 = ChatCompletionMessageToolCallParam( + id="tc2", type="function", function=Function(name="other", arguments="{}") +) + + +def test_merge_noop_when_no_consecutive_assistants(): + """Messages without consecutive assistants are returned unchanged.""" + msgs = [ + ChatCompletionUserMessageParam(role="user", content="hi"), + ChatCompletionAssistantMessageParam(role="assistant", content="hello"), + ChatCompletionUserMessageParam(role="user", content="bye"), + ] + merged = ChatSession._merge_consecutive_assistant_messages(msgs) + assert len(merged) == 3 + assert [m["role"] for m in merged] == ["user", "assistant", "user"] + + +def test_merge_splits_text_and_tool_calls(): + """The exact bug scenario: text-only assistant followed by tool_calls-only assistant.""" + msgs = [ + ChatCompletionUserMessageParam(role="user", content="build agent"), + ChatCompletionAssistantMessageParam( + role="assistant", content="Let me build that" + ), + ChatCompletionAssistantMessageParam( + role="assistant", content="", tool_calls=[_tc] + ), + ChatCompletionToolMessageParam(role="tool", content="ok", tool_call_id="tc1"), + ] + merged = ChatSession._merge_consecutive_assistant_messages(msgs) + + assert len(merged) == 3 + assert merged[0]["role"] == "user" + assert merged[2]["role"] == "tool" + a = cast(ChatCompletionAssistantMessageParam, merged[1]) + assert a["role"] == "assistant" + assert a.get("content") == "Let me build that" + assert a.get("tool_calls") == [_tc] + + +def test_merge_combines_tool_calls_from_both(): + """Both consecutive assistants have tool_calls — they get merged.""" + msgs: list[ChatCompletionAssistantMessageParam] = [ + ChatCompletionAssistantMessageParam( + role="assistant", content="text", tool_calls=[_tc] + ), + ChatCompletionAssistantMessageParam( + role="assistant", content="", tool_calls=[_tc2] + ), + ] + merged = ChatSession._merge_consecutive_assistant_messages(msgs) # type: ignore[arg-type] + + assert len(merged) == 1 + a = cast(ChatCompletionAssistantMessageParam, merged[0]) + assert a.get("tool_calls") == [_tc, _tc2] + assert a.get("content") == "text" + + +def test_merge_three_consecutive_assistants(): + """Three consecutive assistants collapse into one.""" + msgs: list[ChatCompletionAssistantMessageParam] = [ + ChatCompletionAssistantMessageParam(role="assistant", content="a"), + ChatCompletionAssistantMessageParam(role="assistant", content="b"), + ChatCompletionAssistantMessageParam( + role="assistant", content="", tool_calls=[_tc] + ), + ] + merged = ChatSession._merge_consecutive_assistant_messages(msgs) # type: ignore[arg-type] + + assert len(merged) == 1 + a = cast(ChatCompletionAssistantMessageParam, merged[0]) + assert a.get("content") == "a\nb" + assert a.get("tool_calls") == [_tc] + + +def test_merge_empty_and_single_message(): + """Edge cases: empty list and single message.""" + assert ChatSession._merge_consecutive_assistant_messages([]) == [] + + single: list[ChatCompletionMessageParam] = [ + ChatCompletionUserMessageParam(role="user", content="hi") + ] + assert ChatSession._merge_consecutive_assistant_messages(single) == single + + +# --------------------------------------------------------------------------- # +# add_tool_call_to_current_turn # +# --------------------------------------------------------------------------- # + +_raw_tc = { + "id": "tc1", + "type": "function", + "function": {"name": "f", "arguments": "{}"}, +} +_raw_tc2 = { + "id": "tc2", + "type": "function", + "function": {"name": "g", "arguments": "{}"}, +} + + +def test_add_tool_call_appends_to_existing_assistant(): + """When the last assistant is from the current turn, tool_call is added to it.""" + session = ChatSession.new(user_id="u") + session.messages = [ + ChatMessage(role="user", content="hi"), + ChatMessage(role="assistant", content="working on it"), + ] + session.add_tool_call_to_current_turn(_raw_tc) + + assert len(session.messages) == 2 # no new message created + assert session.messages[1].tool_calls == [_raw_tc] + + +def test_add_tool_call_creates_assistant_when_none_exists(): + """When there's no current-turn assistant, a new one is created.""" + session = ChatSession.new(user_id="u") + session.messages = [ + ChatMessage(role="user", content="hi"), + ] + session.add_tool_call_to_current_turn(_raw_tc) + + assert len(session.messages) == 2 + assert session.messages[1].role == "assistant" + assert session.messages[1].tool_calls == [_raw_tc] + + +def test_add_tool_call_does_not_cross_user_boundary(): + """A user message acts as a boundary — previous assistant is not modified.""" + session = ChatSession.new(user_id="u") + session.messages = [ + ChatMessage(role="assistant", content="old turn"), + ChatMessage(role="user", content="new message"), + ] + session.add_tool_call_to_current_turn(_raw_tc) + + assert len(session.messages) == 3 # new assistant was created + assert session.messages[0].tool_calls is None # old assistant untouched + assert session.messages[2].role == "assistant" + assert session.messages[2].tool_calls == [_raw_tc] + + +def test_add_tool_call_multiple_times(): + """Multiple long-running tool calls accumulate on the same assistant.""" + session = ChatSession.new(user_id="u") + session.messages = [ + ChatMessage(role="user", content="hi"), + ChatMessage(role="assistant", content="doing stuff"), + ] + session.add_tool_call_to_current_turn(_raw_tc) + # Simulate a pending tool result in between (like _yield_tool_call does) + session.messages.append( + ChatMessage(role="tool", content="pending", tool_call_id="tc1") + ) + session.add_tool_call_to_current_turn(_raw_tc2) + + assert len(session.messages) == 3 # user, assistant, tool — no extra assistant + assert session.messages[1].tool_calls == [_raw_tc, _raw_tc2] + + +def test_to_openai_messages_merges_split_assistants(): + """End-to-end: session with split assistants produces valid OpenAI messages.""" + session = ChatSession.new(user_id="u") + session.messages = [ + ChatMessage(role="user", content="build agent"), + ChatMessage(role="assistant", content="Let me build that"), + ChatMessage( + role="assistant", + content="", + tool_calls=[ + { + "id": "tc1", + "type": "function", + "function": {"name": "create_agent", "arguments": "{}"}, + } + ], + ), + ChatMessage(role="tool", content="done", tool_call_id="tc1"), + ChatMessage(role="assistant", content="Saved!"), + ChatMessage(role="user", content="show me an example run"), + ] + openai_msgs = session.to_openai_messages() + + # The two consecutive assistants at index 1,2 should be merged + roles = [m["role"] for m in openai_msgs] + assert roles == ["user", "assistant", "tool", "assistant", "user"] + + # The merged assistant should have both content and tool_calls + merged = cast(ChatCompletionAssistantMessageParam, openai_msgs[1]) + assert merged.get("content") == "Let me build that" + tc_list = merged.get("tool_calls") + assert tc_list is not None and len(list(tc_list)) == 1 + assert list(tc_list)[0]["id"] == "tc1" diff --git a/autogpt_platform/backend/backend/api/features/chat/service.py b/autogpt_platform/backend/backend/api/features/chat/service.py index 072ea88fd5..193566ea01 100644 --- a/autogpt_platform/backend/backend/api/features/chat/service.py +++ b/autogpt_platform/backend/backend/api/features/chat/service.py @@ -800,9 +800,13 @@ async def stream_chat_completion( # Build the messages list in the correct order messages_to_save: list[ChatMessage] = [] - # Add assistant message with tool_calls if any + # Add assistant message with tool_calls if any. + # Use extend (not assign) to preserve tool_calls already added by + # _yield_tool_call for long-running tools. if accumulated_tool_calls: - assistant_response.tool_calls = accumulated_tool_calls + if not assistant_response.tool_calls: + assistant_response.tool_calls = [] + assistant_response.tool_calls.extend(accumulated_tool_calls) logger.info( f"Added {len(accumulated_tool_calls)} tool calls to assistant message" ) @@ -1404,13 +1408,9 @@ async def _yield_tool_call( operation_id=operation_id, ) - # Save assistant message with tool_call FIRST (required by LLM) - assistant_message = ChatMessage( - role="assistant", - content="", - tool_calls=[tool_calls[yield_idx]], - ) - session.messages.append(assistant_message) + # Attach the tool_call to the current turn's assistant message + # (or create one if this is a tool-only response with no text). + session.add_tool_call_to_current_turn(tool_calls[yield_idx]) # Then save pending tool result pending_message = ChatMessage( From d09f1532a43b110919924836b4dcb39958bac977 Mon Sep 17 00:00:00 2001 From: Abhimanyu Yadav <122007096+Abhi1992002@users.noreply.github.com> Date: Thu, 12 Feb 2026 16:46:01 +0530 Subject: [PATCH 05/18] feat(frontend): replace legacy builder with new flow editor (#12081) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit ### Changes 🏗️ This PR completes the migration from the legacy builder to the new Flow editor by removing all legacy code and feature flags. **Removed:** - Old builder view toggle functionality (`BuilderViewTabs.tsx`) - Legacy debug panel (`RightSidebar.tsx`) - Feature flags: `NEW_FLOW_EDITOR` and `BUILDER_VIEW_SWITCH` - `useBuilderView` hook and related view-switching logic **Updated:** - Simplified `build/page.tsx` to always render the new Flow editor - Added CSS styling (`flow.css`) to properly render Phosphor icons in React Flow handles **Tests:** - Skipped e2e test suite in `build.spec.ts` (legacy builder tests) - Follow-up PR (#12082) will add new e2e tests for the Flow editor ### Checklist 📋 #### For code changes: - [x] I have clearly listed my changes in the PR description - [x] I have made a test plan - [x] I have tested my changes according to the test plan: - [x] Create a new flow and verify it loads correctly - [x] Add nodes and connections to verify basic functionality works - [x] Verify that node handles render correctly with the new CSS - [x] Check that the UI is clean without the old debug panel or view toggles #### For configuration changes: - [x] `.env.default` is updated or already compatible with my changes - [x] `docker-compose.yml` is updated or already compatible with my changes --- .../BuilderViewTabs/BuilderViewTabs.tsx | 31 ------- .../build/components/FlowEditor/Flow/Flow.tsx | 3 + .../build/components/FlowEditor/Flow/flow.css | 9 ++ .../build/components/RIghtSidebar.tsx | 83 ------------------- .../src/app/(platform)/build/page.tsx | 63 ++------------ .../app/(platform)/build/useBuilderView.ts | 44 ---------- .../services/feature-flags/use-get-flag.ts | 4 - .../frontend/src/tests/agent-activity.spec.ts | 8 +- .../frontend/src/tests/build.spec.ts | 5 +- .../frontend/src/tests/pages/build.page.ts | 73 ++++++++-------- 10 files changed, 61 insertions(+), 262 deletions(-) delete mode 100644 autogpt_platform/frontend/src/app/(platform)/build/components/BuilderViewTabs/BuilderViewTabs.tsx create mode 100644 autogpt_platform/frontend/src/app/(platform)/build/components/FlowEditor/Flow/flow.css delete mode 100644 autogpt_platform/frontend/src/app/(platform)/build/components/RIghtSidebar.tsx delete mode 100644 autogpt_platform/frontend/src/app/(platform)/build/useBuilderView.ts diff --git a/autogpt_platform/frontend/src/app/(platform)/build/components/BuilderViewTabs/BuilderViewTabs.tsx b/autogpt_platform/frontend/src/app/(platform)/build/components/BuilderViewTabs/BuilderViewTabs.tsx deleted file mode 100644 index 4f4237445b..0000000000 --- a/autogpt_platform/frontend/src/app/(platform)/build/components/BuilderViewTabs/BuilderViewTabs.tsx +++ /dev/null @@ -1,31 +0,0 @@ -"use client"; - -import { Tabs, TabsList, TabsTrigger } from "@/components/__legacy__/ui/tabs"; - -export type BuilderView = "old" | "new"; - -export function BuilderViewTabs({ - value, - onChange, -}: { - value: BuilderView; - onChange: (value: BuilderView) => void; -}) { - return ( -
- onChange(v as BuilderView)} - > - - - Old - - - New - - - -
- ); -} diff --git a/autogpt_platform/frontend/src/app/(platform)/build/components/FlowEditor/Flow/Flow.tsx b/autogpt_platform/frontend/src/app/(platform)/build/components/FlowEditor/Flow/Flow.tsx index 87ae4300b8..28bba580b4 100644 --- a/autogpt_platform/frontend/src/app/(platform)/build/components/FlowEditor/Flow/Flow.tsx +++ b/autogpt_platform/frontend/src/app/(platform)/build/components/FlowEditor/Flow/Flow.tsx @@ -23,6 +23,9 @@ import { useCopyPaste } from "./useCopyPaste"; import { useFlow } from "./useFlow"; import { useFlowRealtime } from "./useFlowRealtime"; +import "@xyflow/react/dist/style.css"; +import "./flow.css"; + export const Flow = () => { const [{ flowID, flowExecutionID }] = useQueryStates({ flowID: parseAsString, diff --git a/autogpt_platform/frontend/src/app/(platform)/build/components/FlowEditor/Flow/flow.css b/autogpt_platform/frontend/src/app/(platform)/build/components/FlowEditor/Flow/flow.css new file mode 100644 index 0000000000..0f73d047a9 --- /dev/null +++ b/autogpt_platform/frontend/src/app/(platform)/build/components/FlowEditor/Flow/flow.css @@ -0,0 +1,9 @@ +/* Reset default xyflow handle styles so custom Phosphor icon handles render correctly */ +.react-flow__handle { + background: transparent; + width: auto; + height: auto; + border: 0; + position: relative; + transform: none; +} diff --git a/autogpt_platform/frontend/src/app/(platform)/build/components/RIghtSidebar.tsx b/autogpt_platform/frontend/src/app/(platform)/build/components/RIghtSidebar.tsx deleted file mode 100644 index cc0c7ff765..0000000000 --- a/autogpt_platform/frontend/src/app/(platform)/build/components/RIghtSidebar.tsx +++ /dev/null @@ -1,83 +0,0 @@ -import { useMemo } from "react"; - -import { Link } from "@/app/api/__generated__/models/link"; -import { useEdgeStore } from "../stores/edgeStore"; -import { useNodeStore } from "../stores/nodeStore"; -import { scrollbarStyles } from "@/components/styles/scrollbars"; -import { cn } from "@/lib/utils"; -import { customEdgeToLink } from "./helper"; - -export const RightSidebar = () => { - const edges = useEdgeStore((s) => s.edges); - const nodes = useNodeStore((s) => s.nodes); - - const backendLinks: Link[] = useMemo( - () => edges.map(customEdgeToLink), - [edges], - ); - - return ( -
-
-

- Graph Debug Panel -

-
- -
-

- Nodes ({nodes.length}) -

-
- {nodes.map((n) => ( -
-
- #{n.id} {n.data?.title ? `– ${n.data.title}` : ""} -
-
- hardcodedValues -
-
-                {JSON.stringify(n.data?.hardcodedValues ?? {}, null, 2)}
-              
-
- ))} -
- -

- Links ({backendLinks.length}) -

-
- {backendLinks.map((l) => ( -
-
- {l.source_id}[{l.source_name}] → {l.sink_id}[{l.sink_name}] -
-
- edge.id: {l.id} -
-
- ))} -
- -

- Backend Links JSON -

-
-          {JSON.stringify(backendLinks, null, 2)}
-        
-
-
- ); -}; diff --git a/autogpt_platform/frontend/src/app/(platform)/build/page.tsx b/autogpt_platform/frontend/src/app/(platform)/build/page.tsx index f1d62ee5fb..a8ed8a5e8e 100644 --- a/autogpt_platform/frontend/src/app/(platform)/build/page.tsx +++ b/autogpt_platform/frontend/src/app/(platform)/build/page.tsx @@ -1,64 +1,13 @@ "use client"; - -import FlowEditor from "@/app/(platform)/build/components/legacy-builder/Flow/Flow"; -import { useOnboarding } from "@/providers/onboarding/onboarding-provider"; -// import LoadingBox from "@/components/__legacy__/ui/loading"; -import { GraphID } from "@/lib/autogpt-server-api/types"; import { ReactFlowProvider } from "@xyflow/react"; -import { useSearchParams } from "next/navigation"; -import { useEffect } from "react"; -import { BuilderViewTabs } from "./components/BuilderViewTabs/BuilderViewTabs"; import { Flow } from "./components/FlowEditor/Flow/Flow"; -import { useBuilderView } from "./useBuilderView"; - -function BuilderContent() { - const query = useSearchParams(); - const { completeStep } = useOnboarding(); - - useEffect(() => { - completeStep("BUILDER_OPEN"); - }, [completeStep]); - - const _graphVersion = query.get("flowVersion"); - const graphVersion = _graphVersion ? parseInt(_graphVersion) : undefined; - return ( - - ); -} export default function BuilderPage() { - const { - isSwitchEnabled, - selectedView, - setSelectedView, - isNewFlowEditorEnabled, - } = useBuilderView(); - - // Switch is temporary, we will remove it once our new flow editor is ready - if (isSwitchEnabled) { - return ( -
- - {selectedView === "new" ? ( - - - - ) : ( - - )} -
- ); - } - - return isNewFlowEditorEnabled ? ( - - - - ) : ( - + return ( +
+ + + +
); } diff --git a/autogpt_platform/frontend/src/app/(platform)/build/useBuilderView.ts b/autogpt_platform/frontend/src/app/(platform)/build/useBuilderView.ts deleted file mode 100644 index e0e524ddf8..0000000000 --- a/autogpt_platform/frontend/src/app/(platform)/build/useBuilderView.ts +++ /dev/null @@ -1,44 +0,0 @@ -import { Flag, useGetFlag } from "@/services/feature-flags/use-get-flag"; -import { usePathname, useRouter, useSearchParams } from "next/navigation"; -import { useEffect, useMemo } from "react"; -import { BuilderView } from "./components/BuilderViewTabs/BuilderViewTabs"; - -export function useBuilderView() { - const isNewFlowEditorEnabled = useGetFlag(Flag.NEW_FLOW_EDITOR); - const isBuilderViewSwitchEnabled = useGetFlag(Flag.BUILDER_VIEW_SWITCH); - - const router = useRouter(); - const pathname = usePathname(); - const searchParams = useSearchParams(); - - const currentView = searchParams.get("view"); - const defaultView = "old"; - const selectedView = useMemo(() => { - if (currentView === "new" || currentView === "old") return currentView; - return defaultView; - }, [currentView, defaultView]); - - useEffect(() => { - if (isBuilderViewSwitchEnabled === true) { - if (currentView !== "new" && currentView !== "old") { - const params = new URLSearchParams(searchParams); - params.set("view", defaultView); - router.replace(`${pathname}?${params.toString()}`); - } - } - // eslint-disable-next-line react-hooks/exhaustive-deps - }, [isBuilderViewSwitchEnabled, defaultView, pathname, router, searchParams]); - - const setSelectedView = (value: BuilderView) => { - const params = new URLSearchParams(searchParams); - params.set("view", value); - router.push(`${pathname}?${params.toString()}`); - }; - - return { - isSwitchEnabled: isBuilderViewSwitchEnabled === true, - selectedView, - setSelectedView, - isNewFlowEditorEnabled: Boolean(isNewFlowEditorEnabled), - } as const; -} diff --git a/autogpt_platform/frontend/src/services/feature-flags/use-get-flag.ts b/autogpt_platform/frontend/src/services/feature-flags/use-get-flag.ts index c61fc9749d..3a27aa6e9b 100644 --- a/autogpt_platform/frontend/src/services/feature-flags/use-get-flag.ts +++ b/autogpt_platform/frontend/src/services/feature-flags/use-get-flag.ts @@ -10,8 +10,6 @@ export enum Flag { NEW_AGENT_RUNS = "new-agent-runs", GRAPH_SEARCH = "graph-search", ENABLE_ENHANCED_OUTPUT_HANDLING = "enable-enhanced-output-handling", - NEW_FLOW_EDITOR = "new-flow-editor", - BUILDER_VIEW_SWITCH = "builder-view-switch", SHARE_EXECUTION_RESULTS = "share-execution-results", AGENT_FAVORITING = "agent-favoriting", MARKETPLACE_SEARCH_TERMS = "marketplace-search-terms", @@ -27,8 +25,6 @@ const defaultFlags = { [Flag.NEW_AGENT_RUNS]: false, [Flag.GRAPH_SEARCH]: false, [Flag.ENABLE_ENHANCED_OUTPUT_HANDLING]: false, - [Flag.NEW_FLOW_EDITOR]: false, - [Flag.BUILDER_VIEW_SWITCH]: false, [Flag.SHARE_EXECUTION_RESULTS]: false, [Flag.AGENT_FAVORITING]: false, [Flag.MARKETPLACE_SEARCH_TERMS]: DEFAULT_SEARCH_TERMS, diff --git a/autogpt_platform/frontend/src/tests/agent-activity.spec.ts b/autogpt_platform/frontend/src/tests/agent-activity.spec.ts index 96c19a8020..9cc2ca4ee9 100644 --- a/autogpt_platform/frontend/src/tests/agent-activity.spec.ts +++ b/autogpt_platform/frontend/src/tests/agent-activity.spec.ts @@ -11,24 +11,18 @@ test.beforeEach(async ({ page }) => { const buildPage = new BuildPage(page); const testUser = await getTestUser(); - const { getId } = getSelectors(page); - await page.goto("/login"); await loginPage.login(testUser.email, testUser.password); await hasUrl(page, "/marketplace"); await page.goto("/build"); await buildPage.closeTutorial(); - await buildPage.openBlocksPanel(); const [dictionaryBlock] = await buildPage.getFilteredBlocksFromAPI( (block) => block.name === "AddToDictionaryBlock", ); - const blockCard = getId(`block-name-${dictionaryBlock.id}`); - await blockCard.click(); - const blockInEditor = getId(dictionaryBlock.id).first(); - expect(blockInEditor).toBeAttached(); + await buildPage.addBlock(dictionaryBlock); await buildPage.saveAgent("Test Agent", "Test Description"); await test diff --git a/autogpt_platform/frontend/src/tests/build.spec.ts b/autogpt_platform/frontend/src/tests/build.spec.ts index abdd3ea63b..24d95b8174 100644 --- a/autogpt_platform/frontend/src/tests/build.spec.ts +++ b/autogpt_platform/frontend/src/tests/build.spec.ts @@ -1,3 +1,6 @@ +// TODO: These tests were written for the old (legacy) builder. +// They need to be updated to work with the new flow editor. + // Note: all the comments with //(number)! are for the docs //ignore them when reading the code, but if you change something, //make sure to update the docs! Your autoformmater will break this page, @@ -12,7 +15,7 @@ import { getTestUser } from "./utils/auth"; // Reason Ignore: admonishment is in the wrong place visually with correct prettier rules // prettier-ignore -test.describe("Build", () => { //(1)! +test.describe.skip("Build", () => { //(1)! let buildPage: BuildPage; //(2)! // Reason Ignore: admonishment is in the wrong place visually with correct prettier rules diff --git a/autogpt_platform/frontend/src/tests/pages/build.page.ts b/autogpt_platform/frontend/src/tests/pages/build.page.ts index 8acc9a8f40..9370288f8e 100644 --- a/autogpt_platform/frontend/src/tests/pages/build.page.ts +++ b/autogpt_platform/frontend/src/tests/pages/build.page.ts @@ -1,7 +1,6 @@ -import { expect, Locator, Page } from "@playwright/test"; +import { Locator, Page } from "@playwright/test"; import { Block as APIBlock } from "../../lib/autogpt-server-api/types"; import { beautifyString } from "../../lib/utils"; -import { isVisible } from "../utils/assertion"; import { BasePage } from "./base.page"; export interface Block { @@ -27,32 +26,39 @@ export class BuildPage extends BasePage { try { await this.page .getByRole("button", { name: "Skip Tutorial", exact: true }) - .click(); - } catch (error) { - console.info("Error closing tutorial:", error); + .click({ timeout: 3000 }); + } catch (_error) { + console.info("Tutorial not shown or already dismissed"); } } async openBlocksPanel(): Promise { - const isPanelOpen = await this.page - .getByTestId("blocks-control-blocks-label") - .isVisible(); + const popoverContent = this.page.locator( + '[data-id="blocks-control-popover-content"]', + ); + const isPanelOpen = await popoverContent.isVisible(); if (!isPanelOpen) { await this.page.getByTestId("blocks-control-blocks-button").click(); + await popoverContent.waitFor({ state: "visible", timeout: 5000 }); } } async closeBlocksPanel(): Promise { - await this.page.getByTestId("profile-popout-menu-trigger").click(); + const popoverContent = this.page.locator( + '[data-id="blocks-control-popover-content"]', + ); + if (await popoverContent.isVisible()) { + await this.page.getByTestId("blocks-control-blocks-button").click(); + } } async saveAgent( name: string = "Test Agent", description: string = "", ): Promise { - console.log(`💾 Saving agent '${name}' with description '${description}'`); - await this.page.getByTestId("blocks-control-save-button").click(); + console.log(`Saving agent '${name}' with description '${description}'`); + await this.page.getByTestId("save-control-save-button").click(); await this.page.getByTestId("save-control-name-input").fill(name); await this.page .getByTestId("save-control-description-input") @@ -107,32 +113,34 @@ export class BuildPage extends BasePage { await this.openBlocksPanel(); const searchInput = this.page.locator( - '[data-id="blocks-control-search-input"]', + '[data-id="blocks-control-search-bar"] input[type="text"]', ); const displayName = this.getDisplayName(block.name); await searchInput.clear(); await searchInput.fill(displayName); - const blockCard = this.page.getByTestId(`block-name-${block.id}`); + const blockCardId = block.id.replace(/[^a-zA-Z0-9]/g, ""); + const blockCard = this.page.locator( + `[data-id="block-card-${blockCardId}"]`, + ); try { // Wait for the block card to be visible with a reasonable timeout await blockCard.waitFor({ state: "visible", timeout: 10000 }); await blockCard.click(); - const blockInEditor = this.page.getByTestId(block.id).first(); - expect(blockInEditor).toBeAttached(); } catch (error) { console.log( - `❌ ❌ Block ${block.name} (display: ${displayName}) returned from the API but not found in block list`, + `Block ${block.name} (display: ${displayName}) returned from the API but not found in block list`, ); console.log(`Error: ${error}`); } } - async hasBlock(block: Block) { - const blockInEditor = this.page.getByTestId(block.id).first(); - await blockInEditor.isVisible(); + async hasBlock(_block: Block) { + // In the new flow editor, verify a node exists on the canvas + const node = this.page.locator('[data-id^="custom-node-"]').first(); + await node.isVisible(); } async getBlockInputs(blockId: string): Promise { @@ -159,7 +167,7 @@ export class BuildPage extends BasePage { // Clear any existing search to ensure we see all blocks in the category const searchInput = this.page.locator( - '[data-id="blocks-control-search-input"]', + '[data-id="blocks-control-search-bar"] input[type="text"]', ); await searchInput.clear(); @@ -391,13 +399,13 @@ export class BuildPage extends BasePage { async isRunButtonEnabled(): Promise { console.log(`checking if run button is enabled`); - const runButton = this.page.getByTestId("primary-action-run-agent"); + const runButton = this.page.locator('[data-id="run-graph-button"]'); return await runButton.isEnabled(); } async runAgent(): Promise { console.log(`clicking run button`); - const runButton = this.page.getByTestId("primary-action-run-agent"); + const runButton = this.page.locator('[data-id="run-graph-button"]'); await runButton.click(); await this.page.waitForTimeout(1000); await runButton.click(); @@ -424,7 +432,7 @@ export class BuildPage extends BasePage { async waitForSaveButton(): Promise { console.log(`waiting for save button`); await this.page.waitForSelector( - '[data-testid="blocks-control-save-button"]:not([disabled])', + '[data-testid="save-control-save-button"]:not([disabled])', ); } @@ -526,27 +534,22 @@ export class BuildPage extends BasePage { async createDummyAgent() { await this.closeTutorial(); await this.openBlocksPanel(); - const dictionaryBlock = await this.getDictionaryBlockDetails(); const searchInput = this.page.locator( - '[data-id="blocks-control-search-input"]', + '[data-id="blocks-control-search-bar"] input[type="text"]', ); - const displayName = this.getDisplayName(dictionaryBlock.name); await searchInput.clear(); + await searchInput.fill("Add to Dictionary"); - await isVisible(this.page.getByText("Output")); - - await searchInput.fill(displayName); - - const blockCard = this.page.getByTestId(`block-name-${dictionaryBlock.id}`); - if (await blockCard.isVisible()) { + const blockCard = this.page.locator('[data-id^="block-card-"]').first(); + try { + await blockCard.waitFor({ state: "visible", timeout: 10000 }); await blockCard.click(); - const blockInEditor = this.page.getByTestId(dictionaryBlock.id).first(); - expect(blockInEditor).toBeAttached(); + } catch (error) { + console.log("Could not find Add to Dictionary block:", error); } await this.saveAgent("Test Agent", "Test Description"); - await expect(this.isRunButtonEnabled()).resolves.toBeTruthy(); } } From 113e87a23c523bc39ba76dcd35416012e10fdd99 Mon Sep 17 00:00:00 2001 From: Reinier van der Leer Date: Thu, 12 Feb 2026 13:07:49 +0100 Subject: [PATCH 06/18] refactor(backend): Reduce circular imports (#12068) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit I'm getting circular import issues because there is a lot of cross-importing between `backend.data`, `backend.blocks`, and other modules. This change reduces block-related cross-imports and thus risk of breaking circular imports. ### Changes 🏗️ - Strip down `backend.data.block` - Move `Block` base class and related class/enum defs to `backend.blocks._base` - Move `is_block_auth_configured` to `backend.blocks._utils` - Move `get_blocks()`, `get_io_block_ids()` etc. to `backend.blocks` (`__init__.py`) - Update imports everywhere - Remove unused and poorly typed `Block.create()` - Change usages from `block_cls.create()` to `block_cls()` - Improve typing of `load_all_blocks` and `get_blocks` - Move cross-import of `backend.api.features.library.model` from `backend/data/__init__.py` to `backend/data/integrations.py` - Remove deprecated attribute `NodeModel.webhook` - Re-generate OpenAPI spec and fix frontend usage - Eliminate module-level `backend.blocks` import from `blocks/agent.py` - Eliminate module-level `backend.data.execution` and `backend.executor.manager` imports from `blocks/helpers/review.py` - Replace `BlockInput` with `GraphInput` for graph inputs ### Checklist 📋 #### For code changes: - [x] I have clearly listed my changes in the PR description - [x] I have made a test plan - [x] I have tested my changes according to the test plan: - CI static type-checking + tests should be sufficient for this --- .../backend/backend/api/external/v1/routes.py | 6 +- .../backend/api/features/builder/db.py | 19 +- .../backend/api/features/builder/model.py | 4 +- .../backend/api/features/builder/routes.py | 2 +- .../api/features/chat/tools/find_block.py | 3 +- .../features/chat/tools/find_block_test.py | 2 +- .../api/features/chat/tools/run_block.py | 3 +- .../api/features/chat/tools/run_block_test.py | 2 +- .../backend/api/features/library/db.py | 5 +- .../backend/api/features/library/model.py | 13 +- .../backend/api/features/otto/service.py | 2 +- .../api/features/store/content_handlers.py | 4 +- .../features/store/content_handlers_test.py | 8 +- .../backend/api/features/store/embeddings.py | 2 +- .../backend/api/features/store/image_gen.py | 19 +- .../backend/backend/api/features/v1.py | 3 +- .../backend/backend/blocks/__init__.py | 66 +- .../backend/backend/blocks/_base.py | 739 ++++++++++++++ .../backend/backend/blocks/_utils.py | 122 +++ .../backend/backend/blocks/agent.py | 13 +- .../backend/backend/blocks/ai_condition.py | 12 +- .../backend/blocks/ai_image_customizer.py | 2 +- .../blocks/ai_image_generator_block.py | 7 +- .../backend/blocks/ai_music_generator.py | 2 +- .../blocks/ai_shortform_video_block.py | 2 +- .../backend/blocks/apollo/organization.py | 14 +- .../backend/backend/blocks/apollo/people.py | 14 +- .../backend/backend/blocks/apollo/person.py | 14 +- .../backend/backend/blocks/ayrshare/_util.py | 2 +- .../backend/backend/blocks/basic.py | 2 +- .../backend/backend/blocks/block.py | 2 +- .../backend/backend/blocks/branching.py | 2 +- .../backend/backend/blocks/claude_code.py | 2 +- .../backend/backend/blocks/code_executor.py | 2 +- .../backend/blocks/code_extraction_block.py | 2 +- .../backend/backend/blocks/codex.py | 2 +- .../backend/blocks/compass/triggers.py | 2 +- .../blocks/count_words_and_char_block.py | 2 +- .../backend/blocks/data_manipulation.py | 2 +- .../backend/backend/blocks/decoder_block.py | 2 +- .../backend/blocks/discord/bot_blocks.py | 2 +- .../backend/blocks/discord/oauth_blocks.py | 2 +- .../backend/backend/blocks/email_block.py | 2 +- .../backend/backend/blocks/encoder_block.py | 2 +- .../backend/blocks/enrichlayer/linkedin.py | 2 +- .../backend/blocks/fal/ai_video_generator.py | 14 +- .../backend/backend/blocks/flux_kontext.py | 2 +- .../backend/backend/blocks/github/checks.py | 2 +- .../backend/backend/blocks/github/ci.py | 2 +- .../backend/backend/blocks/github/issues.py | 2 +- .../backend/blocks/github/pull_requests.py | 2 +- .../backend/backend/blocks/github/repo.py | 2 +- .../backend/backend/blocks/github/reviews.py | 2 +- .../backend/backend/blocks/github/statuses.py | 2 +- .../backend/backend/blocks/github/triggers.py | 2 +- .../backend/backend/blocks/google/calendar.py | 2 +- .../backend/backend/blocks/google/docs.py | 4 +- .../backend/backend/blocks/google/gmail.py | 2 +- .../backend/backend/blocks/google/sheets.py | 4 +- .../backend/backend/blocks/google_maps.py | 2 +- .../backend/backend/blocks/helpers/review.py | 7 +- .../backend/backend/blocks/http.py | 2 +- .../backend/backend/blocks/hubspot/company.py | 12 +- .../backend/backend/blocks/hubspot/contact.py | 12 +- .../backend/blocks/hubspot/engagement.py | 12 +- .../backend/blocks/human_in_the_loop.py | 4 +- .../backend/backend/blocks/ideogram.py | 2 +- autogpt_platform/backend/backend/blocks/io.py | 7 +- .../backend/backend/blocks/iteration.py | 2 +- .../backend/backend/blocks/jina/chunking.py | 12 +- .../backend/backend/blocks/jina/embeddings.py | 12 +- .../backend/blocks/jina/fact_checker.py | 12 +- .../backend/backend/blocks/jina/search.py | 14 +- .../backend/backend/blocks/llm.py | 2 +- .../backend/backend/blocks/maths.py | 2 +- .../backend/backend/blocks/medium.py | 2 +- .../backend/backend/blocks/mem0.py | 2 +- .../backend/blocks/notion/create_page.py | 2 +- .../backend/blocks/notion/read_database.py | 2 +- .../backend/blocks/notion/read_page.py | 2 +- .../blocks/notion/read_page_markdown.py | 2 +- .../backend/backend/blocks/notion/search.py | 2 +- .../backend/backend/blocks/nvidia/deepfake.py | 12 +- .../backend/backend/blocks/perplexity.py | 2 +- .../backend/backend/blocks/persistence.py | 2 +- .../backend/backend/blocks/pinecone.py | 2 +- .../backend/backend/blocks/reddit.py | 2 +- .../backend/blocks/replicate/flux_advanced.py | 14 +- .../blocks/replicate/replicate_block.py | 14 +- .../backend/backend/blocks/rss.py | 2 +- .../backend/backend/blocks/sampling.py | 2 +- .../backend/backend/blocks/screenshotone.py | 2 +- .../backend/backend/blocks/search.py | 4 +- .../backend/backend/blocks/slant3d/base.py | 2 +- .../backend/blocks/slant3d/filament.py | 2 +- .../backend/backend/blocks/slant3d/order.py | 2 +- .../backend/backend/blocks/slant3d/slicing.py | 2 +- .../backend/backend/blocks/slant3d/webhook.py | 2 +- .../backend/blocks/smart_decision_maker.py | 4 +- .../backend/blocks/smartlead/campaign.py | 14 +- .../backend/backend/blocks/spreadsheet.py | 2 +- .../blocks/system/library_operations.py | 2 +- .../backend/blocks/system/store_operations.py | 2 +- .../backend/backend/blocks/talking_head.py | 2 +- .../backend/backend/blocks/test/test_block.py | 3 +- .../backend/backend/blocks/text.py | 2 +- .../backend/blocks/text_to_speech_block.py | 2 +- .../backend/backend/blocks/time_blocks.py | 2 +- .../backend/blocks/todoist/comments.py | 14 +- .../backend/backend/blocks/todoist/labels.py | 14 +- .../backend/blocks/todoist/projects.py | 14 +- .../backend/blocks/todoist/sections.py | 14 +- .../backend/backend/blocks/todoist/tasks.py | 14 +- .../backend/backend/blocks/twitter/_types.py | 2 +- .../direct_message/direct_message_lookup.py | 2 +- .../direct_message/manage_direct_message.py | 2 +- .../blocks/twitter/lists/list_follows.py | 14 +- .../blocks/twitter/lists/list_lookup.py | 2 +- .../blocks/twitter/lists/list_members.py | 14 +- .../twitter/lists/list_tweets_lookup.py | 2 +- .../blocks/twitter/lists/manage_lists.py | 14 +- .../blocks/twitter/lists/pinned_lists.py | 14 +- .../blocks/twitter/spaces/search_spaces.py | 2 +- .../blocks/twitter/spaces/spaces_lookup.py | 2 +- .../backend/blocks/twitter/tweets/bookmark.py | 14 +- .../backend/blocks/twitter/tweets/hide.py | 14 +- .../backend/blocks/twitter/tweets/like.py | 14 +- .../backend/blocks/twitter/tweets/manage.py | 14 +- .../backend/blocks/twitter/tweets/quote.py | 2 +- .../backend/blocks/twitter/tweets/retweet.py | 14 +- .../backend/blocks/twitter/tweets/timeline.py | 2 +- .../blocks/twitter/tweets/tweet_lookup.py | 2 +- .../backend/blocks/twitter/users/blocks.py | 2 +- .../backend/blocks/twitter/users/follows.py | 14 +- .../backend/blocks/twitter/users/mutes.py | 14 +- .../blocks/twitter/users/user_lookup.py | 2 +- .../backend/backend/blocks/video/add_audio.py | 4 +- .../backend/backend/blocks/video/clip.py | 12 +- .../backend/backend/blocks/video/concat.py | 12 +- .../backend/backend/blocks/video/download.py | 2 +- .../backend/backend/blocks/video/duration.py | 4 +- .../backend/backend/blocks/video/loop.py | 4 +- .../backend/backend/blocks/video/narration.py | 14 +- .../backend/blocks/video/text_overlay.py | 12 +- .../backend/backend/blocks/xml_parser.py | 2 +- .../backend/backend/blocks/youtube.py | 2 +- .../blocks/zerobounce/validate_emails.py | 14 +- .../backend/backend/data/__init__.py | 7 - .../backend/backend/data/block.py | 902 +----------------- .../backend/backend/data/block_cost_config.py | 2 +- .../backend/backend/data/credit.py | 2 +- .../backend/backend/data/credit_test.py | 2 +- .../backend/backend/data/execution.py | 34 +- .../backend/backend/data/graph.py | 45 +- .../backend/backend/data/graph_test.py | 3 +- .../backend/backend/data/integrations.py | 21 +- .../backend/backend/data/model.py | 3 + .../executor/activity_status_generator.py | 2 +- .../backend/backend/executor/manager.py | 10 +- .../backend/backend/executor/scheduler.py | 7 +- .../backend/backend/executor/utils.py | 25 +- .../webhooks/graph_lifecycle_hooks.py | 22 +- .../backend/integrations/webhooks/utils.py | 4 +- .../backend/monitoring/block_error_monitor.py | 2 +- .../backend/backend/sdk/__init__.py | 4 +- .../backend/backend/sdk/builder.py | 2 +- .../backend/backend/sdk/cost_integration.py | 2 +- .../backend/backend/sdk/provider.py | 2 +- autogpt_platform/backend/backend/util/test.py | 3 +- .../backend/scripts/generate_block_docs.py | 11 +- .../backend/test/load_store_agents.py | 2 +- .../backend/test/sdk/test_sdk_registry.py | 4 +- .../components/legacy-builder/Flow/Flow.tsx | 2 +- .../frontend/src/app/api/openapi.json | 6 - .../src/lib/autogpt-server-api/types.ts | 4 +- docs/platform/new_blocks.md | 10 +- 176 files changed, 1444 insertions(+), 1446 deletions(-) create mode 100644 autogpt_platform/backend/backend/blocks/_base.py create mode 100644 autogpt_platform/backend/backend/blocks/_utils.py diff --git a/autogpt_platform/backend/backend/api/external/v1/routes.py b/autogpt_platform/backend/backend/api/external/v1/routes.py index 00933c1899..69a0c36637 100644 --- a/autogpt_platform/backend/backend/api/external/v1/routes.py +++ b/autogpt_platform/backend/backend/api/external/v1/routes.py @@ -10,7 +10,7 @@ from typing_extensions import TypedDict import backend.api.features.store.cache as store_cache import backend.api.features.store.model as store_model -import backend.data.block +import backend.blocks from backend.api.external.middleware import require_permission from backend.data import execution as execution_db from backend.data import graph as graph_db @@ -67,7 +67,7 @@ async def get_user_info( dependencies=[Security(require_permission(APIKeyPermission.READ_BLOCK))], ) async def get_graph_blocks() -> Sequence[dict[Any, Any]]: - blocks = [block() for block in backend.data.block.get_blocks().values()] + blocks = [block() for block in backend.blocks.get_blocks().values()] return [b.to_dict() for b in blocks if not b.disabled] @@ -83,7 +83,7 @@ async def execute_graph_block( require_permission(APIKeyPermission.EXECUTE_BLOCK) ), ) -> CompletedBlockOutput: - obj = backend.data.block.get_block(block_id) + obj = backend.blocks.get_block(block_id) if not obj: raise HTTPException(status_code=404, detail=f"Block #{block_id} not found.") if obj.disabled: diff --git a/autogpt_platform/backend/backend/api/features/builder/db.py b/autogpt_platform/backend/backend/api/features/builder/db.py index 7177fa4dc6..e8d35b0bb5 100644 --- a/autogpt_platform/backend/backend/api/features/builder/db.py +++ b/autogpt_platform/backend/backend/api/features/builder/db.py @@ -10,10 +10,15 @@ import backend.api.features.library.db as library_db import backend.api.features.library.model as library_model import backend.api.features.store.db as store_db import backend.api.features.store.model as store_model -import backend.data.block from backend.blocks import load_all_blocks +from backend.blocks._base import ( + AnyBlockSchema, + BlockCategory, + BlockInfo, + BlockSchema, + BlockType, +) from backend.blocks.llm import LlmModel -from backend.data.block import AnyBlockSchema, BlockCategory, BlockInfo, BlockSchema from backend.data.db import query_raw_with_schema from backend.integrations.providers import ProviderName from backend.util.cache import cached @@ -22,7 +27,7 @@ from backend.util.models import Pagination from .model import ( BlockCategoryResponse, BlockResponse, - BlockType, + BlockTypeFilter, CountResponse, FilterType, Provider, @@ -88,7 +93,7 @@ def get_block_categories(category_blocks: int = 3) -> list[BlockCategoryResponse def get_blocks( *, category: str | None = None, - type: BlockType | None = None, + type: BlockTypeFilter | None = None, provider: ProviderName | None = None, page: int = 1, page_size: int = 50, @@ -669,9 +674,9 @@ async def get_suggested_blocks(count: int = 5) -> list[BlockInfo]: for block_type in load_all_blocks().values(): block: AnyBlockSchema = block_type() if block.disabled or block.block_type in ( - backend.data.block.BlockType.INPUT, - backend.data.block.BlockType.OUTPUT, - backend.data.block.BlockType.AGENT, + BlockType.INPUT, + BlockType.OUTPUT, + BlockType.AGENT, ): continue # Find the execution count for this block diff --git a/autogpt_platform/backend/backend/api/features/builder/model.py b/autogpt_platform/backend/backend/api/features/builder/model.py index fcd19dba94..8aa8ed06ed 100644 --- a/autogpt_platform/backend/backend/api/features/builder/model.py +++ b/autogpt_platform/backend/backend/api/features/builder/model.py @@ -4,7 +4,7 @@ from pydantic import BaseModel import backend.api.features.library.model as library_model import backend.api.features.store.model as store_model -from backend.data.block import BlockInfo +from backend.blocks._base import BlockInfo from backend.integrations.providers import ProviderName from backend.util.models import Pagination @@ -15,7 +15,7 @@ FilterType = Literal[ "my_agents", ] -BlockType = Literal["all", "input", "action", "output"] +BlockTypeFilter = Literal["all", "input", "action", "output"] class SearchEntry(BaseModel): diff --git a/autogpt_platform/backend/backend/api/features/builder/routes.py b/autogpt_platform/backend/backend/api/features/builder/routes.py index 15b922178d..091f477178 100644 --- a/autogpt_platform/backend/backend/api/features/builder/routes.py +++ b/autogpt_platform/backend/backend/api/features/builder/routes.py @@ -88,7 +88,7 @@ async def get_block_categories( ) async def get_blocks( category: Annotated[str | None, fastapi.Query()] = None, - type: Annotated[builder_model.BlockType | None, fastapi.Query()] = None, + type: Annotated[builder_model.BlockTypeFilter | None, fastapi.Query()] = None, provider: Annotated[ProviderName | None, fastapi.Query()] = None, page: Annotated[int, fastapi.Query()] = 1, page_size: Annotated[int, fastapi.Query()] = 50, diff --git a/autogpt_platform/backend/backend/api/features/chat/tools/find_block.py b/autogpt_platform/backend/backend/api/features/chat/tools/find_block.py index f55cd567e8..6a8cfa9bbc 100644 --- a/autogpt_platform/backend/backend/api/features/chat/tools/find_block.py +++ b/autogpt_platform/backend/backend/api/features/chat/tools/find_block.py @@ -13,7 +13,8 @@ from backend.api.features.chat.tools.models import ( NoResultsResponse, ) from backend.api.features.store.hybrid_search import unified_hybrid_search -from backend.data.block import BlockType, get_block +from backend.blocks import get_block +from backend.blocks._base import BlockType logger = logging.getLogger(__name__) diff --git a/autogpt_platform/backend/backend/api/features/chat/tools/find_block_test.py b/autogpt_platform/backend/backend/api/features/chat/tools/find_block_test.py index 0f3d4cbfa5..d567a89bbe 100644 --- a/autogpt_platform/backend/backend/api/features/chat/tools/find_block_test.py +++ b/autogpt_platform/backend/backend/api/features/chat/tools/find_block_test.py @@ -10,7 +10,7 @@ from backend.api.features.chat.tools.find_block import ( FindBlockTool, ) from backend.api.features.chat.tools.models import BlockListResponse -from backend.data.block import BlockType +from backend.blocks._base import BlockType from ._test_data import make_session diff --git a/autogpt_platform/backend/backend/api/features/chat/tools/run_block.py b/autogpt_platform/backend/backend/api/features/chat/tools/run_block.py index fc4a470fdd..8c29820f8e 100644 --- a/autogpt_platform/backend/backend/api/features/chat/tools/run_block.py +++ b/autogpt_platform/backend/backend/api/features/chat/tools/run_block.py @@ -12,7 +12,8 @@ from backend.api.features.chat.tools.find_block import ( COPILOT_EXCLUDED_BLOCK_IDS, COPILOT_EXCLUDED_BLOCK_TYPES, ) -from backend.data.block import AnyBlockSchema, get_block +from backend.blocks import get_block +from backend.blocks._base import AnyBlockSchema from backend.data.execution import ExecutionContext from backend.data.model import CredentialsFieldInfo, CredentialsMetaInput from backend.data.workspace import get_or_create_workspace diff --git a/autogpt_platform/backend/backend/api/features/chat/tools/run_block_test.py b/autogpt_platform/backend/backend/api/features/chat/tools/run_block_test.py index 2aae45e875..aadc161155 100644 --- a/autogpt_platform/backend/backend/api/features/chat/tools/run_block_test.py +++ b/autogpt_platform/backend/backend/api/features/chat/tools/run_block_test.py @@ -6,7 +6,7 @@ import pytest from backend.api.features.chat.tools.models import ErrorResponse from backend.api.features.chat.tools.run_block import RunBlockTool -from backend.data.block import BlockType +from backend.blocks._base import BlockType from ._test_data import make_session diff --git a/autogpt_platform/backend/backend/api/features/library/db.py b/autogpt_platform/backend/backend/api/features/library/db.py index 32479c18a3..e07ed9f7ad 100644 --- a/autogpt_platform/backend/backend/api/features/library/db.py +++ b/autogpt_platform/backend/backend/api/features/library/db.py @@ -12,12 +12,11 @@ import backend.api.features.store.image_gen as store_image_gen import backend.api.features.store.media as store_media import backend.data.graph as graph_db import backend.data.integrations as integrations_db -from backend.data.block import BlockInput from backend.data.db import transaction from backend.data.execution import get_graph_execution from backend.data.graph import GraphSettings from backend.data.includes import AGENT_PRESET_INCLUDE, library_agent_include -from backend.data.model import CredentialsMetaInput +from backend.data.model import CredentialsMetaInput, GraphInput from backend.integrations.creds_manager import IntegrationCredentialsManager from backend.integrations.webhooks.graph_lifecycle_hooks import ( on_graph_activate, @@ -1130,7 +1129,7 @@ async def create_preset_from_graph_execution( async def update_preset( user_id: str, preset_id: str, - inputs: Optional[BlockInput] = None, + inputs: Optional[GraphInput] = None, credentials: Optional[dict[str, CredentialsMetaInput]] = None, name: Optional[str] = None, description: Optional[str] = None, diff --git a/autogpt_platform/backend/backend/api/features/library/model.py b/autogpt_platform/backend/backend/api/features/library/model.py index c6bc0e0427..9ecbaecccb 100644 --- a/autogpt_platform/backend/backend/api/features/library/model.py +++ b/autogpt_platform/backend/backend/api/features/library/model.py @@ -6,9 +6,12 @@ import prisma.enums import prisma.models import pydantic -from backend.data.block import BlockInput from backend.data.graph import GraphModel, GraphSettings, GraphTriggerInfo -from backend.data.model import CredentialsMetaInput, is_credentials_field_name +from backend.data.model import ( + CredentialsMetaInput, + GraphInput, + is_credentials_field_name, +) from backend.util.json import loads as json_loads from backend.util.models import Pagination @@ -323,7 +326,7 @@ class LibraryAgentPresetCreatable(pydantic.BaseModel): graph_id: str graph_version: int - inputs: BlockInput + inputs: GraphInput credentials: dict[str, CredentialsMetaInput] name: str @@ -352,7 +355,7 @@ class LibraryAgentPresetUpdatable(pydantic.BaseModel): Request model used when updating a preset for a library agent. """ - inputs: Optional[BlockInput] = None + inputs: Optional[GraphInput] = None credentials: Optional[dict[str, CredentialsMetaInput]] = None name: Optional[str] = None @@ -395,7 +398,7 @@ class LibraryAgentPreset(LibraryAgentPresetCreatable): "Webhook must be included in AgentPreset query when webhookId is set" ) - input_data: BlockInput = {} + input_data: GraphInput = {} input_credentials: dict[str, CredentialsMetaInput] = {} for preset_input in preset.InputPresets: diff --git a/autogpt_platform/backend/backend/api/features/otto/service.py b/autogpt_platform/backend/backend/api/features/otto/service.py index 5f00022ff2..992021c0ca 100644 --- a/autogpt_platform/backend/backend/api/features/otto/service.py +++ b/autogpt_platform/backend/backend/api/features/otto/service.py @@ -5,8 +5,8 @@ from typing import Optional import aiohttp from fastapi import HTTPException +from backend.blocks import get_block from backend.data import graph as graph_db -from backend.data.block import get_block from backend.util.settings import Settings from .models import ApiResponse, ChatRequest, GraphData diff --git a/autogpt_platform/backend/backend/api/features/store/content_handlers.py b/autogpt_platform/backend/backend/api/features/store/content_handlers.py index cbbdcfbebf..38fc1e27d0 100644 --- a/autogpt_platform/backend/backend/api/features/store/content_handlers.py +++ b/autogpt_platform/backend/backend/api/features/store/content_handlers.py @@ -152,7 +152,7 @@ class BlockHandler(ContentHandler): async def get_missing_items(self, batch_size: int) -> list[ContentItem]: """Fetch blocks without embeddings.""" - from backend.data.block import get_blocks + from backend.blocks import get_blocks # Get all available blocks all_blocks = get_blocks() @@ -249,7 +249,7 @@ class BlockHandler(ContentHandler): async def get_stats(self) -> dict[str, int]: """Get statistics about block embedding coverage.""" - from backend.data.block import get_blocks + from backend.blocks import get_blocks all_blocks = get_blocks() diff --git a/autogpt_platform/backend/backend/api/features/store/content_handlers_test.py b/autogpt_platform/backend/backend/api/features/store/content_handlers_test.py index fee879fae0..c552e44a9d 100644 --- a/autogpt_platform/backend/backend/api/features/store/content_handlers_test.py +++ b/autogpt_platform/backend/backend/api/features/store/content_handlers_test.py @@ -93,7 +93,7 @@ async def test_block_handler_get_missing_items(mocker): mock_existing = [] with patch( - "backend.data.block.get_blocks", + "backend.blocks.get_blocks", return_value=mock_blocks, ): with patch( @@ -135,7 +135,7 @@ async def test_block_handler_get_stats(mocker): mock_embedded = [{"count": 2}] with patch( - "backend.data.block.get_blocks", + "backend.blocks.get_blocks", return_value=mock_blocks, ): with patch( @@ -327,7 +327,7 @@ async def test_block_handler_handles_missing_attributes(): mock_blocks = {"block-minimal": mock_block_class} with patch( - "backend.data.block.get_blocks", + "backend.blocks.get_blocks", return_value=mock_blocks, ): with patch( @@ -360,7 +360,7 @@ async def test_block_handler_skips_failed_blocks(): mock_blocks = {"good-block": good_block, "bad-block": bad_block} with patch( - "backend.data.block.get_blocks", + "backend.blocks.get_blocks", return_value=mock_blocks, ): with patch( diff --git a/autogpt_platform/backend/backend/api/features/store/embeddings.py b/autogpt_platform/backend/backend/api/features/store/embeddings.py index 434f2fe2ce..921e103618 100644 --- a/autogpt_platform/backend/backend/api/features/store/embeddings.py +++ b/autogpt_platform/backend/backend/api/features/store/embeddings.py @@ -662,7 +662,7 @@ async def cleanup_orphaned_embeddings() -> dict[str, Any]: ) current_ids = {row["id"] for row in valid_agents} elif content_type == ContentType.BLOCK: - from backend.data.block import get_blocks + from backend.blocks import get_blocks current_ids = set(get_blocks().keys()) elif content_type == ContentType.DOCUMENTATION: diff --git a/autogpt_platform/backend/backend/api/features/store/image_gen.py b/autogpt_platform/backend/backend/api/features/store/image_gen.py index 087a7895ba..64ac203182 100644 --- a/autogpt_platform/backend/backend/api/features/store/image_gen.py +++ b/autogpt_platform/backend/backend/api/features/store/image_gen.py @@ -7,15 +7,6 @@ from replicate.client import Client as ReplicateClient from replicate.exceptions import ReplicateError from replicate.helpers import FileOutput -from backend.blocks.ideogram import ( - AspectRatio, - ColorPalettePreset, - IdeogramModelBlock, - IdeogramModelName, - MagicPromptOption, - StyleType, - UpscaleOption, -) from backend.data.graph import GraphBaseMeta from backend.data.model import CredentialsMetaInput, ProviderName from backend.integrations.credentials_store import ideogram_credentials @@ -50,6 +41,16 @@ async def generate_agent_image_v2(graph: GraphBaseMeta | AgentGraph) -> io.Bytes if not ideogram_credentials.api_key: raise ValueError("Missing Ideogram API key") + from backend.blocks.ideogram import ( + AspectRatio, + ColorPalettePreset, + IdeogramModelBlock, + IdeogramModelName, + MagicPromptOption, + StyleType, + UpscaleOption, + ) + name = graph.name description = f"{name} ({graph.description})" if graph.description else name diff --git a/autogpt_platform/backend/backend/api/features/v1.py b/autogpt_platform/backend/backend/api/features/v1.py index a8610702cc..dd8ef3611f 100644 --- a/autogpt_platform/backend/backend/api/features/v1.py +++ b/autogpt_platform/backend/backend/api/features/v1.py @@ -40,10 +40,11 @@ from backend.api.model import ( UpdateTimezoneRequest, UploadFileResponse, ) +from backend.blocks import get_block, get_blocks from backend.data import execution as execution_db from backend.data import graph as graph_db from backend.data.auth import api_key as api_key_db -from backend.data.block import BlockInput, CompletedBlockOutput, get_block, get_blocks +from backend.data.block import BlockInput, CompletedBlockOutput from backend.data.credit import ( AutoTopUpConfig, RefundRequest, diff --git a/autogpt_platform/backend/backend/blocks/__init__.py b/autogpt_platform/backend/backend/blocks/__init__.py index a6c16393c7..524e47c31d 100644 --- a/autogpt_platform/backend/backend/blocks/__init__.py +++ b/autogpt_platform/backend/backend/blocks/__init__.py @@ -3,22 +3,19 @@ import logging import os import re from pathlib import Path -from typing import TYPE_CHECKING, TypeVar +from typing import Sequence, Type, TypeVar +from backend.blocks._base import AnyBlockSchema, BlockType from backend.util.cache import cached logger = logging.getLogger(__name__) - -if TYPE_CHECKING: - from backend.data.block import Block - T = TypeVar("T") @cached(ttl_seconds=3600) -def load_all_blocks() -> dict[str, type["Block"]]: - from backend.data.block import Block +def load_all_blocks() -> dict[str, type["AnyBlockSchema"]]: + from backend.blocks._base import Block from backend.util.settings import Config # Check if example blocks should be loaded from settings @@ -50,8 +47,8 @@ def load_all_blocks() -> dict[str, type["Block"]]: importlib.import_module(f".{module}", package=__name__) # Load all Block instances from the available modules - available_blocks: dict[str, type["Block"]] = {} - for block_cls in all_subclasses(Block): + available_blocks: dict[str, type["AnyBlockSchema"]] = {} + for block_cls in _all_subclasses(Block): class_name = block_cls.__name__ if class_name.endswith("Base"): @@ -64,7 +61,7 @@ def load_all_blocks() -> dict[str, type["Block"]]: "please name the class with 'Base' at the end" ) - block = block_cls.create() + block = block_cls() # pyright: ignore[reportAbstractUsage] if not isinstance(block.id, str) or len(block.id) != 36: raise ValueError( @@ -105,7 +102,7 @@ def load_all_blocks() -> dict[str, type["Block"]]: available_blocks[block.id] = block_cls # Filter out blocks with incomplete auth configs, e.g. missing OAuth server secrets - from backend.data.block import is_block_auth_configured + from ._utils import is_block_auth_configured filtered_blocks = {} for block_id, block_cls in available_blocks.items(): @@ -115,11 +112,48 @@ def load_all_blocks() -> dict[str, type["Block"]]: return filtered_blocks -__all__ = ["load_all_blocks"] - - -def all_subclasses(cls: type[T]) -> list[type[T]]: +def _all_subclasses(cls: type[T]) -> list[type[T]]: subclasses = cls.__subclasses__() for subclass in subclasses: - subclasses += all_subclasses(subclass) + subclasses += _all_subclasses(subclass) return subclasses + + +# ============== Block access helper functions ============== # + + +def get_blocks() -> dict[str, Type["AnyBlockSchema"]]: + return load_all_blocks() + + +# Note on the return type annotation: https://github.com/microsoft/pyright/issues/10281 +def get_block(block_id: str) -> "AnyBlockSchema | None": + cls = get_blocks().get(block_id) + return cls() if cls else None + + +@cached(ttl_seconds=3600) +def get_webhook_block_ids() -> Sequence[str]: + return [ + id + for id, B in get_blocks().items() + if B().block_type in (BlockType.WEBHOOK, BlockType.WEBHOOK_MANUAL) + ] + + +@cached(ttl_seconds=3600) +def get_io_block_ids() -> Sequence[str]: + return [ + id + for id, B in get_blocks().items() + if B().block_type in (BlockType.INPUT, BlockType.OUTPUT) + ] + + +@cached(ttl_seconds=3600) +def get_human_in_the_loop_block_ids() -> Sequence[str]: + return [ + id + for id, B in get_blocks().items() + if B().block_type == BlockType.HUMAN_IN_THE_LOOP + ] diff --git a/autogpt_platform/backend/backend/blocks/_base.py b/autogpt_platform/backend/backend/blocks/_base.py new file mode 100644 index 0000000000..0ba4daec40 --- /dev/null +++ b/autogpt_platform/backend/backend/blocks/_base.py @@ -0,0 +1,739 @@ +import inspect +import logging +from abc import ABC, abstractmethod +from enum import Enum +from typing import ( + TYPE_CHECKING, + Any, + Callable, + ClassVar, + Generic, + Optional, + Type, + TypeAlias, + TypeVar, + cast, + get_origin, +) + +import jsonref +import jsonschema +from pydantic import BaseModel + +from backend.data.block import BlockInput, BlockOutput, BlockOutputEntry +from backend.data.model import ( + Credentials, + CredentialsFieldInfo, + CredentialsMetaInput, + SchemaField, + is_credentials_field_name, +) +from backend.integrations.providers import ProviderName +from backend.util import json +from backend.util.exceptions import ( + BlockError, + BlockExecutionError, + BlockInputError, + BlockOutputError, + BlockUnknownError, +) +from backend.util.settings import Config + +logger = logging.getLogger(__name__) + +if TYPE_CHECKING: + from backend.data.execution import ExecutionContext + from backend.data.model import ContributorDetails, NodeExecutionStats + + from ..data.graph import Link + +app_config = Config() + + +BlockTestOutput = BlockOutputEntry | tuple[str, Callable[[Any], bool]] + + +class BlockType(Enum): + STANDARD = "Standard" + INPUT = "Input" + OUTPUT = "Output" + NOTE = "Note" + WEBHOOK = "Webhook" + WEBHOOK_MANUAL = "Webhook (manual)" + AGENT = "Agent" + AI = "AI" + AYRSHARE = "Ayrshare" + HUMAN_IN_THE_LOOP = "Human In The Loop" + + +class BlockCategory(Enum): + AI = "Block that leverages AI to perform a task." + SOCIAL = "Block that interacts with social media platforms." + TEXT = "Block that processes text data." + SEARCH = "Block that searches or extracts information from the internet." + BASIC = "Block that performs basic operations." + INPUT = "Block that interacts with input of the graph." + OUTPUT = "Block that interacts with output of the graph." + LOGIC = "Programming logic to control the flow of your agent" + COMMUNICATION = "Block that interacts with communication platforms." + DEVELOPER_TOOLS = "Developer tools such as GitHub blocks." + DATA = "Block that interacts with structured data." + HARDWARE = "Block that interacts with hardware." + AGENT = "Block that interacts with other agents." + CRM = "Block that interacts with CRM services." + SAFETY = ( + "Block that provides AI safety mechanisms such as detecting harmful content" + ) + PRODUCTIVITY = "Block that helps with productivity" + ISSUE_TRACKING = "Block that helps with issue tracking" + MULTIMEDIA = "Block that interacts with multimedia content" + MARKETING = "Block that helps with marketing" + + def dict(self) -> dict[str, str]: + return {"category": self.name, "description": self.value} + + +class BlockCostType(str, Enum): + RUN = "run" # cost X credits per run + BYTE = "byte" # cost X credits per byte + SECOND = "second" # cost X credits per second + + +class BlockCost(BaseModel): + cost_amount: int + cost_filter: BlockInput + cost_type: BlockCostType + + def __init__( + self, + cost_amount: int, + cost_type: BlockCostType = BlockCostType.RUN, + cost_filter: Optional[BlockInput] = None, + **data: Any, + ) -> None: + super().__init__( + cost_amount=cost_amount, + cost_filter=cost_filter or {}, + cost_type=cost_type, + **data, + ) + + +class BlockInfo(BaseModel): + id: str + name: str + inputSchema: dict[str, Any] + outputSchema: dict[str, Any] + costs: list[BlockCost] + description: str + categories: list[dict[str, str]] + contributors: list[dict[str, Any]] + staticOutput: bool + uiType: str + + +class BlockSchema(BaseModel): + cached_jsonschema: ClassVar[dict[str, Any]] + + @classmethod + def jsonschema(cls) -> dict[str, Any]: + if cls.cached_jsonschema: + return cls.cached_jsonschema + + model = jsonref.replace_refs(cls.model_json_schema(), merge_props=True) + + def ref_to_dict(obj): + if isinstance(obj, dict): + # OpenAPI <3.1 does not support sibling fields that has a $ref key + # So sometimes, the schema has an "allOf"/"anyOf"/"oneOf" with 1 item. + keys = {"allOf", "anyOf", "oneOf"} + one_key = next((k for k in keys if k in obj and len(obj[k]) == 1), None) + if one_key: + obj.update(obj[one_key][0]) + + return { + key: ref_to_dict(value) + for key, value in obj.items() + if not key.startswith("$") and key != one_key + } + elif isinstance(obj, list): + return [ref_to_dict(item) for item in obj] + + return obj + + cls.cached_jsonschema = cast(dict[str, Any], ref_to_dict(model)) + + return cls.cached_jsonschema + + @classmethod + def validate_data(cls, data: BlockInput) -> str | None: + return json.validate_with_jsonschema( + schema=cls.jsonschema(), + data={k: v for k, v in data.items() if v is not None}, + ) + + @classmethod + def get_mismatch_error(cls, data: BlockInput) -> str | None: + return cls.validate_data(data) + + @classmethod + def get_field_schema(cls, field_name: str) -> dict[str, Any]: + model_schema = cls.jsonschema().get("properties", {}) + if not model_schema: + raise ValueError(f"Invalid model schema {cls}") + + property_schema = model_schema.get(field_name) + if not property_schema: + raise ValueError(f"Invalid property name {field_name}") + + return property_schema + + @classmethod + def validate_field(cls, field_name: str, data: BlockInput) -> str | None: + """ + Validate the data against a specific property (one of the input/output name). + Returns the validation error message if the data does not match the schema. + """ + try: + property_schema = cls.get_field_schema(field_name) + jsonschema.validate(json.to_dict(data), property_schema) + return None + except jsonschema.ValidationError as e: + return str(e) + + @classmethod + def get_fields(cls) -> set[str]: + return set(cls.model_fields.keys()) + + @classmethod + def get_required_fields(cls) -> set[str]: + return { + field + for field, field_info in cls.model_fields.items() + if field_info.is_required() + } + + @classmethod + def __pydantic_init_subclass__(cls, **kwargs): + """Validates the schema definition. Rules: + - Fields with annotation `CredentialsMetaInput` MUST be + named `credentials` or `*_credentials` + - Fields named `credentials` or `*_credentials` MUST be + of type `CredentialsMetaInput` + """ + super().__pydantic_init_subclass__(**kwargs) + + # Reset cached JSON schema to prevent inheriting it from parent class + cls.cached_jsonschema = {} + + credentials_fields = cls.get_credentials_fields() + + for field_name in cls.get_fields(): + if is_credentials_field_name(field_name): + if field_name not in credentials_fields: + raise TypeError( + f"Credentials field '{field_name}' on {cls.__qualname__} " + f"is not of type {CredentialsMetaInput.__name__}" + ) + + CredentialsMetaInput.validate_credentials_field_schema( + cls.get_field_schema(field_name), field_name + ) + + elif field_name in credentials_fields: + raise KeyError( + f"Credentials field '{field_name}' on {cls.__qualname__} " + "has invalid name: must be 'credentials' or *_credentials" + ) + + @classmethod + def get_credentials_fields(cls) -> dict[str, type[CredentialsMetaInput]]: + return { + field_name: info.annotation + for field_name, info in cls.model_fields.items() + if ( + inspect.isclass(info.annotation) + and issubclass( + get_origin(info.annotation) or info.annotation, + CredentialsMetaInput, + ) + ) + } + + @classmethod + def get_auto_credentials_fields(cls) -> dict[str, dict[str, Any]]: + """ + Get fields that have auto_credentials metadata (e.g., GoogleDriveFileInput). + + Returns a dict mapping kwarg_name -> {field_name, auto_credentials_config} + + Raises: + ValueError: If multiple fields have the same kwarg_name, as this would + cause silent overwriting and only the last field would be processed. + """ + result: dict[str, dict[str, Any]] = {} + schema = cls.jsonschema() + properties = schema.get("properties", {}) + + for field_name, field_schema in properties.items(): + auto_creds = field_schema.get("auto_credentials") + if auto_creds: + kwarg_name = auto_creds.get("kwarg_name", "credentials") + if kwarg_name in result: + raise ValueError( + f"Duplicate auto_credentials kwarg_name '{kwarg_name}' " + f"in fields '{result[kwarg_name]['field_name']}' and " + f"'{field_name}' on {cls.__qualname__}" + ) + result[kwarg_name] = { + "field_name": field_name, + "config": auto_creds, + } + return result + + @classmethod + def get_credentials_fields_info(cls) -> dict[str, CredentialsFieldInfo]: + result = {} + + # Regular credentials fields + for field_name in cls.get_credentials_fields().keys(): + result[field_name] = CredentialsFieldInfo.model_validate( + cls.get_field_schema(field_name), by_alias=True + ) + + # Auto-generated credentials fields (from GoogleDriveFileInput etc.) + for kwarg_name, info in cls.get_auto_credentials_fields().items(): + config = info["config"] + # Build a schema-like dict that CredentialsFieldInfo can parse + auto_schema = { + "credentials_provider": [config.get("provider", "google")], + "credentials_types": [config.get("type", "oauth2")], + "credentials_scopes": config.get("scopes"), + } + result[kwarg_name] = CredentialsFieldInfo.model_validate( + auto_schema, by_alias=True + ) + + return result + + @classmethod + def get_input_defaults(cls, data: BlockInput) -> BlockInput: + return data # Return as is, by default. + + @classmethod + def get_missing_links(cls, data: BlockInput, links: list["Link"]) -> set[str]: + input_fields_from_nodes = {link.sink_name for link in links} + return input_fields_from_nodes - set(data) + + @classmethod + def get_missing_input(cls, data: BlockInput) -> set[str]: + return cls.get_required_fields() - set(data) + + +class BlockSchemaInput(BlockSchema): + """ + Base schema class for block inputs. + All block input schemas should extend this class for consistency. + """ + + pass + + +class BlockSchemaOutput(BlockSchema): + """ + Base schema class for block outputs that includes a standard error field. + All block output schemas should extend this class to ensure consistent error handling. + """ + + error: str = SchemaField( + description="Error message if the operation failed", default="" + ) + + +BlockSchemaInputType = TypeVar("BlockSchemaInputType", bound=BlockSchemaInput) +BlockSchemaOutputType = TypeVar("BlockSchemaOutputType", bound=BlockSchemaOutput) + + +class EmptyInputSchema(BlockSchemaInput): + pass + + +class EmptyOutputSchema(BlockSchemaOutput): + pass + + +# For backward compatibility - will be deprecated +EmptySchema = EmptyOutputSchema + + +# --8<-- [start:BlockWebhookConfig] +class BlockManualWebhookConfig(BaseModel): + """ + Configuration model for webhook-triggered blocks on which + the user has to manually set up the webhook at the provider. + """ + + provider: ProviderName + """The service provider that the webhook connects to""" + + webhook_type: str + """ + Identifier for the webhook type. E.g. GitHub has repo and organization level hooks. + + Only for use in the corresponding `WebhooksManager`. + """ + + event_filter_input: str = "" + """ + Name of the block's event filter input. + Leave empty if the corresponding webhook doesn't have distinct event/payload types. + """ + + event_format: str = "{event}" + """ + Template string for the event(s) that a block instance subscribes to. + Applied individually to each event selected in the event filter input. + + Example: `"pull_request.{event}"` -> `"pull_request.opened"` + """ + + +class BlockWebhookConfig(BlockManualWebhookConfig): + """ + Configuration model for webhook-triggered blocks for which + the webhook can be automatically set up through the provider's API. + """ + + resource_format: str + """ + Template string for the resource that a block instance subscribes to. + Fields will be filled from the block's inputs (except `payload`). + + Example: `f"{repo}/pull_requests"` (note: not how it's actually implemented) + + Only for use in the corresponding `WebhooksManager`. + """ + # --8<-- [end:BlockWebhookConfig] + + +class Block(ABC, Generic[BlockSchemaInputType, BlockSchemaOutputType]): + def __init__( + self, + id: str = "", + description: str = "", + contributors: list["ContributorDetails"] = [], + categories: set[BlockCategory] | None = None, + input_schema: Type[BlockSchemaInputType] = EmptyInputSchema, + output_schema: Type[BlockSchemaOutputType] = EmptyOutputSchema, + test_input: BlockInput | list[BlockInput] | None = None, + test_output: BlockTestOutput | list[BlockTestOutput] | None = None, + test_mock: dict[str, Any] | None = None, + test_credentials: Optional[Credentials | dict[str, Credentials]] = None, + disabled: bool = False, + static_output: bool = False, + block_type: BlockType = BlockType.STANDARD, + webhook_config: Optional[BlockWebhookConfig | BlockManualWebhookConfig] = None, + is_sensitive_action: bool = False, + ): + """ + Initialize the block with the given schema. + + Args: + id: The unique identifier for the block, this value will be persisted in the + DB. So it should be a unique and constant across the application run. + Use the UUID format for the ID. + description: The description of the block, explaining what the block does. + contributors: The list of contributors who contributed to the block. + input_schema: The schema, defined as a Pydantic model, for the input data. + output_schema: The schema, defined as a Pydantic model, for the output data. + test_input: The list or single sample input data for the block, for testing. + test_output: The list or single expected output if the test_input is run. + test_mock: function names on the block implementation to mock on test run. + disabled: If the block is disabled, it will not be available for execution. + static_output: Whether the output links of the block are static by default. + """ + from backend.data.model import NodeExecutionStats + + self.id = id + self.input_schema = input_schema + self.output_schema = output_schema + self.test_input = test_input + self.test_output = test_output + self.test_mock = test_mock + self.test_credentials = test_credentials + self.description = description + self.categories = categories or set() + self.contributors = contributors or set() + self.disabled = disabled + self.static_output = static_output + self.block_type = block_type + self.webhook_config = webhook_config + self.is_sensitive_action = is_sensitive_action + self.execution_stats: "NodeExecutionStats" = NodeExecutionStats() + + if self.webhook_config: + if isinstance(self.webhook_config, BlockWebhookConfig): + # Enforce presence of credentials field on auto-setup webhook blocks + if not (cred_fields := self.input_schema.get_credentials_fields()): + raise TypeError( + "credentials field is required on auto-setup webhook blocks" + ) + # Disallow multiple credentials inputs on webhook blocks + elif len(cred_fields) > 1: + raise ValueError( + "Multiple credentials inputs not supported on webhook blocks" + ) + + self.block_type = BlockType.WEBHOOK + else: + self.block_type = BlockType.WEBHOOK_MANUAL + + # Enforce shape of webhook event filter, if present + if self.webhook_config.event_filter_input: + event_filter_field = self.input_schema.model_fields[ + self.webhook_config.event_filter_input + ] + if not ( + isinstance(event_filter_field.annotation, type) + and issubclass(event_filter_field.annotation, BaseModel) + and all( + field.annotation is bool + for field in event_filter_field.annotation.model_fields.values() + ) + ): + raise NotImplementedError( + f"{self.name} has an invalid webhook event selector: " + "field must be a BaseModel and all its fields must be boolean" + ) + + # Enforce presence of 'payload' input + if "payload" not in self.input_schema.model_fields: + raise TypeError( + f"{self.name} is webhook-triggered but has no 'payload' input" + ) + + # Disable webhook-triggered block if webhook functionality not available + if not app_config.platform_base_url: + self.disabled = True + + @abstractmethod + async def run(self, input_data: BlockSchemaInputType, **kwargs) -> BlockOutput: + """ + Run the block with the given input data. + Args: + input_data: The input data with the structure of input_schema. + + Kwargs: Currently 14/02/2025 these include + graph_id: The ID of the graph. + node_id: The ID of the node. + graph_exec_id: The ID of the graph execution. + node_exec_id: The ID of the node execution. + user_id: The ID of the user. + + Returns: + A Generator that yields (output_name, output_data). + output_name: One of the output name defined in Block's output_schema. + output_data: The data for the output_name, matching the defined schema. + """ + # --- satisfy the type checker, never executed ------------- + if False: # noqa: SIM115 + yield "name", "value" # pyright: ignore[reportMissingYield] + raise NotImplementedError(f"{self.name} does not implement the run method.") + + async def run_once( + self, input_data: BlockSchemaInputType, output: str, **kwargs + ) -> Any: + async for item in self.run(input_data, **kwargs): + name, data = item + if name == output: + return data + raise ValueError(f"{self.name} did not produce any output for {output}") + + def merge_stats(self, stats: "NodeExecutionStats") -> "NodeExecutionStats": + self.execution_stats += stats + return self.execution_stats + + @property + def name(self): + return self.__class__.__name__ + + def to_dict(self): + return { + "id": self.id, + "name": self.name, + "inputSchema": self.input_schema.jsonschema(), + "outputSchema": self.output_schema.jsonschema(), + "description": self.description, + "categories": [category.dict() for category in self.categories], + "contributors": [ + contributor.model_dump() for contributor in self.contributors + ], + "staticOutput": self.static_output, + "uiType": self.block_type.value, + } + + def get_info(self) -> BlockInfo: + from backend.data.credit import get_block_cost + + return BlockInfo( + id=self.id, + name=self.name, + inputSchema=self.input_schema.jsonschema(), + outputSchema=self.output_schema.jsonschema(), + costs=get_block_cost(self), + description=self.description, + categories=[category.dict() for category in self.categories], + contributors=[ + contributor.model_dump() for contributor in self.contributors + ], + staticOutput=self.static_output, + uiType=self.block_type.value, + ) + + async def execute(self, input_data: BlockInput, **kwargs) -> BlockOutput: + try: + async for output_name, output_data in self._execute(input_data, **kwargs): + yield output_name, output_data + except Exception as ex: + if isinstance(ex, BlockError): + raise ex + else: + raise ( + BlockExecutionError + if isinstance(ex, ValueError) + else BlockUnknownError + )( + message=str(ex), + block_name=self.name, + block_id=self.id, + ) from ex + + async def is_block_exec_need_review( + self, + input_data: BlockInput, + *, + user_id: str, + node_id: str, + node_exec_id: str, + graph_exec_id: str, + graph_id: str, + graph_version: int, + execution_context: "ExecutionContext", + **kwargs, + ) -> tuple[bool, BlockInput]: + """ + Check if this block execution needs human review and handle the review process. + + Returns: + Tuple of (should_pause, input_data_to_use) + - should_pause: True if execution should be paused for review + - input_data_to_use: The input data to use (may be modified by reviewer) + """ + if not ( + self.is_sensitive_action and execution_context.sensitive_action_safe_mode + ): + return False, input_data + + from backend.blocks.helpers.review import HITLReviewHelper + + # Handle the review request and get decision + decision = await HITLReviewHelper.handle_review_decision( + input_data=input_data, + user_id=user_id, + node_id=node_id, + node_exec_id=node_exec_id, + graph_exec_id=graph_exec_id, + graph_id=graph_id, + graph_version=graph_version, + block_name=self.name, + editable=True, + ) + + if decision is None: + # We're awaiting review - pause execution + return True, input_data + + if not decision.should_proceed: + # Review was rejected, raise an error to stop execution + raise BlockExecutionError( + message=f"Block execution rejected by reviewer: {decision.message}", + block_name=self.name, + block_id=self.id, + ) + + # Review was approved - use the potentially modified data + # ReviewResult.data must be a dict for block inputs + reviewed_data = decision.review_result.data + if not isinstance(reviewed_data, dict): + raise BlockExecutionError( + message=f"Review data must be a dict for block input, got {type(reviewed_data).__name__}", + block_name=self.name, + block_id=self.id, + ) + return False, reviewed_data + + async def _execute(self, input_data: BlockInput, **kwargs) -> BlockOutput: + # Check for review requirement only if running within a graph execution context + # Direct block execution (e.g., from chat) skips the review process + has_graph_context = all( + key in kwargs + for key in ( + "node_exec_id", + "graph_exec_id", + "graph_id", + "execution_context", + ) + ) + if has_graph_context: + should_pause, input_data = await self.is_block_exec_need_review( + input_data, **kwargs + ) + if should_pause: + return + + # Validate the input data (original or reviewer-modified) once + if error := self.input_schema.validate_data(input_data): + raise BlockInputError( + message=f"Unable to execute block with invalid input data: {error}", + block_name=self.name, + block_id=self.id, + ) + + # Use the validated input data + async for output_name, output_data in self.run( + self.input_schema(**{k: v for k, v in input_data.items() if v is not None}), + **kwargs, + ): + if output_name == "error": + raise BlockExecutionError( + message=output_data, block_name=self.name, block_id=self.id + ) + if self.block_type == BlockType.STANDARD and ( + error := self.output_schema.validate_field(output_name, output_data) + ): + raise BlockOutputError( + message=f"Block produced an invalid output data: {error}", + block_name=self.name, + block_id=self.id, + ) + yield output_name, output_data + + def is_triggered_by_event_type( + self, trigger_config: dict[str, Any], event_type: str + ) -> bool: + if not self.webhook_config: + raise TypeError("This method can't be used on non-trigger blocks") + if not self.webhook_config.event_filter_input: + return True + event_filter = trigger_config.get(self.webhook_config.event_filter_input) + if not event_filter: + raise ValueError("Event filter is not configured on trigger") + return event_type in [ + self.webhook_config.event_format.format(event=k) + for k in event_filter + if event_filter[k] is True + ] + + +# Type alias for any block with standard input/output schemas +AnyBlockSchema: TypeAlias = Block[BlockSchemaInput, BlockSchemaOutput] diff --git a/autogpt_platform/backend/backend/blocks/_utils.py b/autogpt_platform/backend/backend/blocks/_utils.py new file mode 100644 index 0000000000..bec033bd2c --- /dev/null +++ b/autogpt_platform/backend/backend/blocks/_utils.py @@ -0,0 +1,122 @@ +import logging +import os + +from backend.integrations.providers import ProviderName + +from ._base import AnyBlockSchema + +logger = logging.getLogger(__name__) + + +def is_block_auth_configured( + block_cls: type[AnyBlockSchema], +) -> bool: + """ + Check if a block has a valid authentication method configured at runtime. + + For example if a block is an OAuth-only block and there env vars are not set, + do not show it in the UI. + + """ + from backend.sdk.registry import AutoRegistry + + # Create an instance to access input_schema + try: + block = block_cls() + except Exception as e: + # If we can't create a block instance, assume it's not OAuth-only + logger.error(f"Error creating block instance for {block_cls.__name__}: {e}") + return True + logger.debug( + f"Checking if block {block_cls.__name__} has a valid provider configured" + ) + + # Get all credential inputs from input schema + credential_inputs = block.input_schema.get_credentials_fields_info() + required_inputs = block.input_schema.get_required_fields() + if not credential_inputs: + logger.debug( + f"Block {block_cls.__name__} has no credential inputs - Treating as valid" + ) + return True + + # Check credential inputs + if len(required_inputs.intersection(credential_inputs.keys())) == 0: + logger.debug( + f"Block {block_cls.__name__} has only optional credential inputs" + " - will work without credentials configured" + ) + + # Check if the credential inputs for this block are correctly configured + for field_name, field_info in credential_inputs.items(): + provider_names = field_info.provider + if not provider_names: + logger.warning( + f"Block {block_cls.__name__} " + f"has credential input '{field_name}' with no provider options" + " - Disabling" + ) + return False + + # If a field has multiple possible providers, each one needs to be usable to + # prevent breaking the UX + for _provider_name in provider_names: + provider_name = _provider_name.value + if provider_name in ProviderName.__members__.values(): + logger.debug( + f"Block {block_cls.__name__} credential input '{field_name}' " + f"provider '{provider_name}' is part of the legacy provider system" + " - Treating as valid" + ) + break + + provider = AutoRegistry.get_provider(provider_name) + if not provider: + logger.warning( + f"Block {block_cls.__name__} credential input '{field_name}' " + f"refers to unknown provider '{provider_name}' - Disabling" + ) + return False + + # Check the provider's supported auth types + if field_info.supported_types != provider.supported_auth_types: + logger.warning( + f"Block {block_cls.__name__} credential input '{field_name}' " + f"has mismatched supported auth types (field <> Provider): " + f"{field_info.supported_types} != {provider.supported_auth_types}" + ) + + if not (supported_auth_types := provider.supported_auth_types): + # No auth methods are been configured for this provider + logger.warning( + f"Block {block_cls.__name__} credential input '{field_name}' " + f"provider '{provider_name}' " + "has no authentication methods configured - Disabling" + ) + return False + + # Check if provider supports OAuth + if "oauth2" in supported_auth_types: + # Check if OAuth environment variables are set + if (oauth_config := provider.oauth_config) and bool( + os.getenv(oauth_config.client_id_env_var) + and os.getenv(oauth_config.client_secret_env_var) + ): + logger.debug( + f"Block {block_cls.__name__} credential input '{field_name}' " + f"provider '{provider_name}' is configured for OAuth" + ) + else: + logger.error( + f"Block {block_cls.__name__} credential input '{field_name}' " + f"provider '{provider_name}' " + "is missing OAuth client ID or secret - Disabling" + ) + return False + + logger.debug( + f"Block {block_cls.__name__} credential input '{field_name}' is valid; " + f"supported credential types: {', '.join(field_info.supported_types)}" + ) + + return True diff --git a/autogpt_platform/backend/backend/blocks/agent.py b/autogpt_platform/backend/backend/blocks/agent.py index 0efc0a3369..574dbc2530 100644 --- a/autogpt_platform/backend/backend/blocks/agent.py +++ b/autogpt_platform/backend/backend/blocks/agent.py @@ -1,7 +1,7 @@ import logging -from typing import Any, Optional +from typing import TYPE_CHECKING, Any, Optional -from backend.data.block import ( +from backend.blocks._base import ( Block, BlockCategory, BlockInput, @@ -9,13 +9,15 @@ from backend.data.block import ( BlockSchema, BlockSchemaInput, BlockType, - get_block, ) from backend.data.execution import ExecutionContext, ExecutionStatus, NodesInputMasks from backend.data.model import NodeExecutionStats, SchemaField from backend.util.json import validate_with_jsonschema from backend.util.retry import func_retry +if TYPE_CHECKING: + from backend.executor.utils import LogMetadata + _logger = logging.getLogger(__name__) @@ -124,9 +126,10 @@ class AgentExecutorBlock(Block): graph_version: int, graph_exec_id: str, user_id: str, - logger, + logger: "LogMetadata", ) -> BlockOutput: + from backend.blocks import get_block from backend.data.execution import ExecutionEventType from backend.executor import utils as execution_utils @@ -198,7 +201,7 @@ class AgentExecutorBlock(Block): self, graph_exec_id: str, user_id: str, - logger, + logger: "LogMetadata", ) -> None: from backend.executor import utils as execution_utils diff --git a/autogpt_platform/backend/backend/blocks/ai_condition.py b/autogpt_platform/backend/backend/blocks/ai_condition.py index 2a5cdcdeec..c28c1e9f7d 100644 --- a/autogpt_platform/backend/backend/blocks/ai_condition.py +++ b/autogpt_platform/backend/backend/blocks/ai_condition.py @@ -1,5 +1,11 @@ from typing import Any +from backend.blocks._base import ( + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, +) from backend.blocks.llm import ( DEFAULT_LLM_MODEL, TEST_CREDENTIALS, @@ -11,12 +17,6 @@ from backend.blocks.llm import ( LLMResponse, llm_call, ) -from backend.data.block import ( - BlockCategory, - BlockOutput, - BlockSchemaInput, - BlockSchemaOutput, -) from backend.data.model import APIKeyCredentials, NodeExecutionStats, SchemaField diff --git a/autogpt_platform/backend/backend/blocks/ai_image_customizer.py b/autogpt_platform/backend/backend/blocks/ai_image_customizer.py index 91be33a60e..402e520ea0 100644 --- a/autogpt_platform/backend/backend/blocks/ai_image_customizer.py +++ b/autogpt_platform/backend/backend/blocks/ai_image_customizer.py @@ -6,7 +6,7 @@ from pydantic import SecretStr from replicate.client import Client as ReplicateClient from replicate.helpers import FileOutput -from backend.data.block import ( +from backend.blocks._base import ( Block, BlockCategory, BlockOutput, diff --git a/autogpt_platform/backend/backend/blocks/ai_image_generator_block.py b/autogpt_platform/backend/backend/blocks/ai_image_generator_block.py index e40731cd97..fcea24fb01 100644 --- a/autogpt_platform/backend/backend/blocks/ai_image_generator_block.py +++ b/autogpt_platform/backend/backend/blocks/ai_image_generator_block.py @@ -5,7 +5,12 @@ from pydantic import SecretStr from replicate.client import Client as ReplicateClient from replicate.helpers import FileOutput -from backend.data.block import Block, BlockCategory, BlockSchemaInput, BlockSchemaOutput +from backend.blocks._base import ( + Block, + BlockCategory, + BlockSchemaInput, + BlockSchemaOutput, +) from backend.data.execution import ExecutionContext from backend.data.model import ( APIKeyCredentials, diff --git a/autogpt_platform/backend/backend/blocks/ai_music_generator.py b/autogpt_platform/backend/backend/blocks/ai_music_generator.py index 1ecb78f95e..9a0639a9c0 100644 --- a/autogpt_platform/backend/backend/blocks/ai_music_generator.py +++ b/autogpt_platform/backend/backend/blocks/ai_music_generator.py @@ -6,7 +6,7 @@ from typing import Literal from pydantic import SecretStr from replicate.client import Client as ReplicateClient -from backend.data.block import ( +from backend.blocks._base import ( Block, BlockCategory, BlockOutput, diff --git a/autogpt_platform/backend/backend/blocks/ai_shortform_video_block.py b/autogpt_platform/backend/backend/blocks/ai_shortform_video_block.py index eb60843185..2c53748fde 100644 --- a/autogpt_platform/backend/backend/blocks/ai_shortform_video_block.py +++ b/autogpt_platform/backend/backend/blocks/ai_shortform_video_block.py @@ -6,7 +6,7 @@ from typing import Literal from pydantic import SecretStr -from backend.data.block import ( +from backend.blocks._base import ( Block, BlockCategory, BlockOutput, diff --git a/autogpt_platform/backend/backend/blocks/apollo/organization.py b/autogpt_platform/backend/backend/blocks/apollo/organization.py index 93acbff0b8..6722de4a79 100644 --- a/autogpt_platform/backend/backend/blocks/apollo/organization.py +++ b/autogpt_platform/backend/backend/blocks/apollo/organization.py @@ -1,3 +1,10 @@ +from backend.blocks._base import ( + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, +) from backend.blocks.apollo._api import ApolloClient from backend.blocks.apollo._auth import ( TEST_CREDENTIALS, @@ -10,13 +17,6 @@ from backend.blocks.apollo.models import ( PrimaryPhone, SearchOrganizationsRequest, ) -from backend.data.block import ( - Block, - BlockCategory, - BlockOutput, - BlockSchemaInput, - BlockSchemaOutput, -) from backend.data.model import CredentialsField, SchemaField diff --git a/autogpt_platform/backend/backend/blocks/apollo/people.py b/autogpt_platform/backend/backend/blocks/apollo/people.py index a58321ecfc..b5059a2a26 100644 --- a/autogpt_platform/backend/backend/blocks/apollo/people.py +++ b/autogpt_platform/backend/backend/blocks/apollo/people.py @@ -1,5 +1,12 @@ import asyncio +from backend.blocks._base import ( + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, +) from backend.blocks.apollo._api import ApolloClient from backend.blocks.apollo._auth import ( TEST_CREDENTIALS, @@ -14,13 +21,6 @@ from backend.blocks.apollo.models import ( SearchPeopleRequest, SenorityLevels, ) -from backend.data.block import ( - Block, - BlockCategory, - BlockOutput, - BlockSchemaInput, - BlockSchemaOutput, -) from backend.data.model import CredentialsField, SchemaField diff --git a/autogpt_platform/backend/backend/blocks/apollo/person.py b/autogpt_platform/backend/backend/blocks/apollo/person.py index 84b86d2bfd..4d586175e0 100644 --- a/autogpt_platform/backend/backend/blocks/apollo/person.py +++ b/autogpt_platform/backend/backend/blocks/apollo/person.py @@ -1,3 +1,10 @@ +from backend.blocks._base import ( + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, +) from backend.blocks.apollo._api import ApolloClient from backend.blocks.apollo._auth import ( TEST_CREDENTIALS, @@ -6,13 +13,6 @@ from backend.blocks.apollo._auth import ( ApolloCredentialsInput, ) from backend.blocks.apollo.models import Contact, EnrichPersonRequest -from backend.data.block import ( - Block, - BlockCategory, - BlockOutput, - BlockSchemaInput, - BlockSchemaOutput, -) from backend.data.model import CredentialsField, SchemaField diff --git a/autogpt_platform/backend/backend/blocks/ayrshare/_util.py b/autogpt_platform/backend/backend/blocks/ayrshare/_util.py index 8d0b9914f9..231239310f 100644 --- a/autogpt_platform/backend/backend/blocks/ayrshare/_util.py +++ b/autogpt_platform/backend/backend/blocks/ayrshare/_util.py @@ -3,7 +3,7 @@ from typing import Optional from pydantic import BaseModel, Field -from backend.data.block import BlockSchemaInput +from backend.blocks._base import BlockSchemaInput from backend.data.model import SchemaField, UserIntegrations from backend.integrations.ayrshare import AyrshareClient from backend.util.clients import get_database_manager_async_client diff --git a/autogpt_platform/backend/backend/blocks/basic.py b/autogpt_platform/backend/backend/blocks/basic.py index 95193b3feb..f129d2707b 100644 --- a/autogpt_platform/backend/backend/blocks/basic.py +++ b/autogpt_platform/backend/backend/blocks/basic.py @@ -1,7 +1,7 @@ import enum from typing import Any -from backend.data.block import ( +from backend.blocks._base import ( Block, BlockCategory, BlockOutput, diff --git a/autogpt_platform/backend/backend/blocks/block.py b/autogpt_platform/backend/backend/blocks/block.py index 95c92a41ab..d3f482fc65 100644 --- a/autogpt_platform/backend/backend/blocks/block.py +++ b/autogpt_platform/backend/backend/blocks/block.py @@ -2,7 +2,7 @@ import os import re from typing import Type -from backend.data.block import ( +from backend.blocks._base import ( Block, BlockCategory, BlockOutput, diff --git a/autogpt_platform/backend/backend/blocks/branching.py b/autogpt_platform/backend/backend/blocks/branching.py index e9177a8b65..fa4d8089ff 100644 --- a/autogpt_platform/backend/backend/blocks/branching.py +++ b/autogpt_platform/backend/backend/blocks/branching.py @@ -1,7 +1,7 @@ from enum import Enum from typing import Any -from backend.data.block import ( +from backend.blocks._base import ( Block, BlockCategory, BlockOutput, diff --git a/autogpt_platform/backend/backend/blocks/claude_code.py b/autogpt_platform/backend/backend/blocks/claude_code.py index 4ef44603b2..1919406c6f 100644 --- a/autogpt_platform/backend/backend/blocks/claude_code.py +++ b/autogpt_platform/backend/backend/blocks/claude_code.py @@ -6,7 +6,7 @@ from typing import Literal, Optional from e2b import AsyncSandbox as BaseAsyncSandbox from pydantic import BaseModel, SecretStr -from backend.data.block import ( +from backend.blocks._base import ( Block, BlockCategory, BlockOutput, diff --git a/autogpt_platform/backend/backend/blocks/code_executor.py b/autogpt_platform/backend/backend/blocks/code_executor.py index be6f2bba55..766f44b7bb 100644 --- a/autogpt_platform/backend/backend/blocks/code_executor.py +++ b/autogpt_platform/backend/backend/blocks/code_executor.py @@ -6,7 +6,7 @@ from e2b_code_interpreter import Result as E2BExecutionResult from e2b_code_interpreter.charts import Chart as E2BExecutionResultChart from pydantic import BaseModel, Field, JsonValue, SecretStr -from backend.data.block import ( +from backend.blocks._base import ( Block, BlockCategory, BlockOutput, diff --git a/autogpt_platform/backend/backend/blocks/code_extraction_block.py b/autogpt_platform/backend/backend/blocks/code_extraction_block.py index 98f40c7a8b..bde4bc9fc6 100644 --- a/autogpt_platform/backend/backend/blocks/code_extraction_block.py +++ b/autogpt_platform/backend/backend/blocks/code_extraction_block.py @@ -1,6 +1,6 @@ import re -from backend.data.block import ( +from backend.blocks._base import ( Block, BlockCategory, BlockOutput, diff --git a/autogpt_platform/backend/backend/blocks/codex.py b/autogpt_platform/backend/backend/blocks/codex.py index 1b907cafce..07dffec39f 100644 --- a/autogpt_platform/backend/backend/blocks/codex.py +++ b/autogpt_platform/backend/backend/blocks/codex.py @@ -6,7 +6,7 @@ from openai import AsyncOpenAI from openai.types.responses import Response as OpenAIResponse from pydantic import SecretStr -from backend.data.block import ( +from backend.blocks._base import ( Block, BlockCategory, BlockOutput, diff --git a/autogpt_platform/backend/backend/blocks/compass/triggers.py b/autogpt_platform/backend/backend/blocks/compass/triggers.py index f6ac8dfd81..2afd03852e 100644 --- a/autogpt_platform/backend/backend/blocks/compass/triggers.py +++ b/autogpt_platform/backend/backend/blocks/compass/triggers.py @@ -1,6 +1,6 @@ from pydantic import BaseModel -from backend.data.block import ( +from backend.blocks._base import ( Block, BlockCategory, BlockManualWebhookConfig, diff --git a/autogpt_platform/backend/backend/blocks/count_words_and_char_block.py b/autogpt_platform/backend/backend/blocks/count_words_and_char_block.py index 20a5077a2d..041f1bfaa1 100644 --- a/autogpt_platform/backend/backend/blocks/count_words_and_char_block.py +++ b/autogpt_platform/backend/backend/blocks/count_words_and_char_block.py @@ -1,4 +1,4 @@ -from backend.data.block import ( +from backend.blocks._base import ( Block, BlockCategory, BlockOutput, diff --git a/autogpt_platform/backend/backend/blocks/data_manipulation.py b/autogpt_platform/backend/backend/blocks/data_manipulation.py index 1014236b8c..a8f25ecb18 100644 --- a/autogpt_platform/backend/backend/blocks/data_manipulation.py +++ b/autogpt_platform/backend/backend/blocks/data_manipulation.py @@ -1,6 +1,6 @@ from typing import Any, List -from backend.data.block import ( +from backend.blocks._base import ( Block, BlockCategory, BlockOutput, diff --git a/autogpt_platform/backend/backend/blocks/decoder_block.py b/autogpt_platform/backend/backend/blocks/decoder_block.py index 7a7406bd1a..b9eb56e48f 100644 --- a/autogpt_platform/backend/backend/blocks/decoder_block.py +++ b/autogpt_platform/backend/backend/blocks/decoder_block.py @@ -1,6 +1,6 @@ import codecs -from backend.data.block import ( +from backend.blocks._base import ( Block, BlockCategory, BlockOutput, diff --git a/autogpt_platform/backend/backend/blocks/discord/bot_blocks.py b/autogpt_platform/backend/backend/blocks/discord/bot_blocks.py index 4438af1955..4ec3d0eec2 100644 --- a/autogpt_platform/backend/backend/blocks/discord/bot_blocks.py +++ b/autogpt_platform/backend/backend/blocks/discord/bot_blocks.py @@ -8,7 +8,7 @@ from typing import Any, Literal, cast import discord from pydantic import SecretStr -from backend.data.block import ( +from backend.blocks._base import ( Block, BlockCategory, BlockOutput, diff --git a/autogpt_platform/backend/backend/blocks/discord/oauth_blocks.py b/autogpt_platform/backend/backend/blocks/discord/oauth_blocks.py index ca20eb6337..74e9229776 100644 --- a/autogpt_platform/backend/backend/blocks/discord/oauth_blocks.py +++ b/autogpt_platform/backend/backend/blocks/discord/oauth_blocks.py @@ -2,7 +2,7 @@ Discord OAuth-based blocks. """ -from backend.data.block import ( +from backend.blocks._base import ( Block, BlockCategory, BlockOutput, diff --git a/autogpt_platform/backend/backend/blocks/email_block.py b/autogpt_platform/backend/backend/blocks/email_block.py index fad2f411cb..626bb6cdac 100644 --- a/autogpt_platform/backend/backend/blocks/email_block.py +++ b/autogpt_platform/backend/backend/blocks/email_block.py @@ -7,7 +7,7 @@ from typing import Literal from pydantic import BaseModel, ConfigDict, SecretStr -from backend.data.block import ( +from backend.blocks._base import ( Block, BlockCategory, BlockOutput, diff --git a/autogpt_platform/backend/backend/blocks/encoder_block.py b/autogpt_platform/backend/backend/blocks/encoder_block.py index b60a4ae828..bfab8f4555 100644 --- a/autogpt_platform/backend/backend/blocks/encoder_block.py +++ b/autogpt_platform/backend/backend/blocks/encoder_block.py @@ -2,7 +2,7 @@ import codecs -from backend.data.block import ( +from backend.blocks._base import ( Block, BlockCategory, BlockOutput, diff --git a/autogpt_platform/backend/backend/blocks/enrichlayer/linkedin.py b/autogpt_platform/backend/backend/blocks/enrichlayer/linkedin.py index 974ad28eed..de06230c00 100644 --- a/autogpt_platform/backend/backend/blocks/enrichlayer/linkedin.py +++ b/autogpt_platform/backend/backend/blocks/enrichlayer/linkedin.py @@ -8,7 +8,7 @@ which provides access to LinkedIn profile data and related information. import logging from typing import Optional -from backend.data.block import ( +from backend.blocks._base import ( Block, BlockCategory, BlockOutput, diff --git a/autogpt_platform/backend/backend/blocks/fal/ai_video_generator.py b/autogpt_platform/backend/backend/blocks/fal/ai_video_generator.py index c2079ef159..945e53578c 100644 --- a/autogpt_platform/backend/backend/blocks/fal/ai_video_generator.py +++ b/autogpt_platform/backend/backend/blocks/fal/ai_video_generator.py @@ -3,6 +3,13 @@ import logging from enum import Enum from typing import Any +from backend.blocks._base import ( + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, +) from backend.blocks.fal._auth import ( TEST_CREDENTIALS, TEST_CREDENTIALS_INPUT, @@ -10,13 +17,6 @@ from backend.blocks.fal._auth import ( FalCredentialsField, FalCredentialsInput, ) -from backend.data.block import ( - Block, - BlockCategory, - BlockOutput, - BlockSchemaInput, - BlockSchemaOutput, -) from backend.data.execution import ExecutionContext from backend.data.model import SchemaField from backend.util.file import store_media_file diff --git a/autogpt_platform/backend/backend/blocks/flux_kontext.py b/autogpt_platform/backend/backend/blocks/flux_kontext.py index d56baa6d92..f2b35aee40 100644 --- a/autogpt_platform/backend/backend/blocks/flux_kontext.py +++ b/autogpt_platform/backend/backend/blocks/flux_kontext.py @@ -5,7 +5,7 @@ from pydantic import SecretStr from replicate.client import Client as ReplicateClient from replicate.helpers import FileOutput -from backend.data.block import ( +from backend.blocks._base import ( Block, BlockCategory, BlockOutput, diff --git a/autogpt_platform/backend/backend/blocks/github/checks.py b/autogpt_platform/backend/backend/blocks/github/checks.py index 02bc8d2400..99feefec88 100644 --- a/autogpt_platform/backend/backend/blocks/github/checks.py +++ b/autogpt_platform/backend/backend/blocks/github/checks.py @@ -3,7 +3,7 @@ from typing import Optional from pydantic import BaseModel -from backend.data.block import ( +from backend.blocks._base import ( Block, BlockCategory, BlockOutput, diff --git a/autogpt_platform/backend/backend/blocks/github/ci.py b/autogpt_platform/backend/backend/blocks/github/ci.py index 8ba58e389e..c717be96e7 100644 --- a/autogpt_platform/backend/backend/blocks/github/ci.py +++ b/autogpt_platform/backend/backend/blocks/github/ci.py @@ -5,7 +5,7 @@ from typing import Optional from typing_extensions import TypedDict -from backend.data.block import ( +from backend.blocks._base import ( Block, BlockCategory, BlockOutput, diff --git a/autogpt_platform/backend/backend/blocks/github/issues.py b/autogpt_platform/backend/backend/blocks/github/issues.py index 22b4149663..7269c44f73 100644 --- a/autogpt_platform/backend/backend/blocks/github/issues.py +++ b/autogpt_platform/backend/backend/blocks/github/issues.py @@ -3,7 +3,7 @@ from urllib.parse import urlparse from typing_extensions import TypedDict -from backend.data.block import ( +from backend.blocks._base import ( Block, BlockCategory, BlockOutput, diff --git a/autogpt_platform/backend/backend/blocks/github/pull_requests.py b/autogpt_platform/backend/backend/blocks/github/pull_requests.py index 9049037716..b336c7bfa3 100644 --- a/autogpt_platform/backend/backend/blocks/github/pull_requests.py +++ b/autogpt_platform/backend/backend/blocks/github/pull_requests.py @@ -2,7 +2,7 @@ import re from typing_extensions import TypedDict -from backend.data.block import ( +from backend.blocks._base import ( Block, BlockCategory, BlockOutput, diff --git a/autogpt_platform/backend/backend/blocks/github/repo.py b/autogpt_platform/backend/backend/blocks/github/repo.py index 78ce26bfad..9b1e60b00c 100644 --- a/autogpt_platform/backend/backend/blocks/github/repo.py +++ b/autogpt_platform/backend/backend/blocks/github/repo.py @@ -2,7 +2,7 @@ import base64 from typing_extensions import TypedDict -from backend.data.block import ( +from backend.blocks._base import ( Block, BlockCategory, BlockOutput, diff --git a/autogpt_platform/backend/backend/blocks/github/reviews.py b/autogpt_platform/backend/backend/blocks/github/reviews.py index 11718d1402..932362c09a 100644 --- a/autogpt_platform/backend/backend/blocks/github/reviews.py +++ b/autogpt_platform/backend/backend/blocks/github/reviews.py @@ -4,7 +4,7 @@ from typing import Any, List, Optional from typing_extensions import TypedDict -from backend.data.block import ( +from backend.blocks._base import ( Block, BlockCategory, BlockOutput, diff --git a/autogpt_platform/backend/backend/blocks/github/statuses.py b/autogpt_platform/backend/backend/blocks/github/statuses.py index 42826a8a51..caa1282a9b 100644 --- a/autogpt_platform/backend/backend/blocks/github/statuses.py +++ b/autogpt_platform/backend/backend/blocks/github/statuses.py @@ -3,7 +3,7 @@ from typing import Optional from pydantic import BaseModel -from backend.data.block import ( +from backend.blocks._base import ( Block, BlockCategory, BlockOutput, diff --git a/autogpt_platform/backend/backend/blocks/github/triggers.py b/autogpt_platform/backend/backend/blocks/github/triggers.py index 2fc568a468..e35dbb4123 100644 --- a/autogpt_platform/backend/backend/blocks/github/triggers.py +++ b/autogpt_platform/backend/backend/blocks/github/triggers.py @@ -4,7 +4,7 @@ from pathlib import Path from pydantic import BaseModel -from backend.data.block import ( +from backend.blocks._base import ( Block, BlockCategory, BlockOutput, diff --git a/autogpt_platform/backend/backend/blocks/google/calendar.py b/autogpt_platform/backend/backend/blocks/google/calendar.py index 55c41f047c..b9fda2cf31 100644 --- a/autogpt_platform/backend/backend/blocks/google/calendar.py +++ b/autogpt_platform/backend/backend/blocks/google/calendar.py @@ -8,7 +8,7 @@ from google.oauth2.credentials import Credentials from googleapiclient.discovery import build from pydantic import BaseModel -from backend.data.block import ( +from backend.blocks._base import ( Block, BlockCategory, BlockOutput, diff --git a/autogpt_platform/backend/backend/blocks/google/docs.py b/autogpt_platform/backend/backend/blocks/google/docs.py index 7840cbae73..33aab4638d 100644 --- a/autogpt_platform/backend/backend/blocks/google/docs.py +++ b/autogpt_platform/backend/backend/blocks/google/docs.py @@ -7,14 +7,14 @@ from google.oauth2.credentials import Credentials from googleapiclient.discovery import build from gravitas_md2gdocs import to_requests -from backend.blocks.google._drive import GoogleDriveFile, GoogleDriveFileField -from backend.data.block import ( +from backend.blocks._base import ( Block, BlockCategory, BlockOutput, BlockSchemaInput, BlockSchemaOutput, ) +from backend.blocks.google._drive import GoogleDriveFile, GoogleDriveFileField from backend.data.model import SchemaField from backend.util.settings import Settings diff --git a/autogpt_platform/backend/backend/blocks/google/gmail.py b/autogpt_platform/backend/backend/blocks/google/gmail.py index 2040cabe3f..2051f86b9e 100644 --- a/autogpt_platform/backend/backend/blocks/google/gmail.py +++ b/autogpt_platform/backend/backend/blocks/google/gmail.py @@ -14,7 +14,7 @@ from google.oauth2.credentials import Credentials from googleapiclient.discovery import build from pydantic import BaseModel, Field -from backend.data.block import ( +from backend.blocks._base import ( Block, BlockCategory, BlockOutput, diff --git a/autogpt_platform/backend/backend/blocks/google/sheets.py b/autogpt_platform/backend/backend/blocks/google/sheets.py index da541d3bf5..6e21008a23 100644 --- a/autogpt_platform/backend/backend/blocks/google/sheets.py +++ b/autogpt_platform/backend/backend/blocks/google/sheets.py @@ -7,14 +7,14 @@ from enum import Enum from google.oauth2.credentials import Credentials from googleapiclient.discovery import build -from backend.blocks.google._drive import GoogleDriveFile, GoogleDriveFileField -from backend.data.block import ( +from backend.blocks._base import ( Block, BlockCategory, BlockOutput, BlockSchemaInput, BlockSchemaOutput, ) +from backend.blocks.google._drive import GoogleDriveFile, GoogleDriveFileField from backend.data.model import SchemaField from backend.util.settings import Settings diff --git a/autogpt_platform/backend/backend/blocks/google_maps.py b/autogpt_platform/backend/backend/blocks/google_maps.py index 2ee2959326..bab0841c5d 100644 --- a/autogpt_platform/backend/backend/blocks/google_maps.py +++ b/autogpt_platform/backend/backend/blocks/google_maps.py @@ -3,7 +3,7 @@ from typing import Literal import googlemaps from pydantic import BaseModel, SecretStr -from backend.data.block import ( +from backend.blocks._base import ( Block, BlockCategory, BlockOutput, diff --git a/autogpt_platform/backend/backend/blocks/helpers/review.py b/autogpt_platform/backend/backend/blocks/helpers/review.py index 4bd85e424b..23d1af6db3 100644 --- a/autogpt_platform/backend/backend/blocks/helpers/review.py +++ b/autogpt_platform/backend/backend/blocks/helpers/review.py @@ -9,9 +9,7 @@ from typing import Any, Optional from prisma.enums import ReviewStatus from pydantic import BaseModel -from backend.data.execution import ExecutionStatus from backend.data.human_review import ReviewResult -from backend.executor.manager import async_update_node_execution_status from backend.util.clients import get_database_manager_async_client logger = logging.getLogger(__name__) @@ -43,6 +41,8 @@ class HITLReviewHelper: @staticmethod async def update_node_execution_status(**kwargs) -> None: """Update the execution status of a node.""" + from backend.executor.manager import async_update_node_execution_status + await async_update_node_execution_status( db_client=get_database_manager_async_client(), **kwargs ) @@ -88,12 +88,13 @@ class HITLReviewHelper: Raises: Exception: If review creation or status update fails """ + from backend.data.execution import ExecutionStatus + # Note: Safe mode checks (human_in_the_loop_safe_mode, sensitive_action_safe_mode) # are handled by the caller: # - HITL blocks check human_in_the_loop_safe_mode in their run() method # - Sensitive action blocks check sensitive_action_safe_mode in is_block_exec_need_review() # This function only handles checking for existing approvals. - # Check if this node has already been approved (normal or auto-approval) if approval_result := await HITLReviewHelper.check_approval( node_exec_id=node_exec_id, diff --git a/autogpt_platform/backend/backend/blocks/http.py b/autogpt_platform/backend/backend/blocks/http.py index 77e7fe243f..21c2964412 100644 --- a/autogpt_platform/backend/backend/blocks/http.py +++ b/autogpt_platform/backend/backend/blocks/http.py @@ -8,7 +8,7 @@ from typing import Literal import aiofiles from pydantic import SecretStr -from backend.data.block import ( +from backend.blocks._base import ( Block, BlockCategory, BlockOutput, diff --git a/autogpt_platform/backend/backend/blocks/hubspot/company.py b/autogpt_platform/backend/backend/blocks/hubspot/company.py index dee9169e59..543d16db0c 100644 --- a/autogpt_platform/backend/backend/blocks/hubspot/company.py +++ b/autogpt_platform/backend/backend/blocks/hubspot/company.py @@ -1,15 +1,15 @@ -from backend.blocks.hubspot._auth import ( - HubSpotCredentials, - HubSpotCredentialsField, - HubSpotCredentialsInput, -) -from backend.data.block import ( +from backend.blocks._base import ( Block, BlockCategory, BlockOutput, BlockSchemaInput, BlockSchemaOutput, ) +from backend.blocks.hubspot._auth import ( + HubSpotCredentials, + HubSpotCredentialsField, + HubSpotCredentialsInput, +) from backend.data.model import SchemaField from backend.util.request import Requests diff --git a/autogpt_platform/backend/backend/blocks/hubspot/contact.py b/autogpt_platform/backend/backend/blocks/hubspot/contact.py index b4451c3b8b..1cdbf99b39 100644 --- a/autogpt_platform/backend/backend/blocks/hubspot/contact.py +++ b/autogpt_platform/backend/backend/blocks/hubspot/contact.py @@ -1,15 +1,15 @@ -from backend.blocks.hubspot._auth import ( - HubSpotCredentials, - HubSpotCredentialsField, - HubSpotCredentialsInput, -) -from backend.data.block import ( +from backend.blocks._base import ( Block, BlockCategory, BlockOutput, BlockSchemaInput, BlockSchemaOutput, ) +from backend.blocks.hubspot._auth import ( + HubSpotCredentials, + HubSpotCredentialsField, + HubSpotCredentialsInput, +) from backend.data.model import SchemaField from backend.util.request import Requests diff --git a/autogpt_platform/backend/backend/blocks/hubspot/engagement.py b/autogpt_platform/backend/backend/blocks/hubspot/engagement.py index 683607c5b3..9408a543b6 100644 --- a/autogpt_platform/backend/backend/blocks/hubspot/engagement.py +++ b/autogpt_platform/backend/backend/blocks/hubspot/engagement.py @@ -1,17 +1,17 @@ from datetime import datetime, timedelta -from backend.blocks.hubspot._auth import ( - HubSpotCredentials, - HubSpotCredentialsField, - HubSpotCredentialsInput, -) -from backend.data.block import ( +from backend.blocks._base import ( Block, BlockCategory, BlockOutput, BlockSchemaInput, BlockSchemaOutput, ) +from backend.blocks.hubspot._auth import ( + HubSpotCredentials, + HubSpotCredentialsField, + HubSpotCredentialsInput, +) from backend.data.model import SchemaField from backend.util.request import Requests diff --git a/autogpt_platform/backend/backend/blocks/human_in_the_loop.py b/autogpt_platform/backend/backend/blocks/human_in_the_loop.py index d31f90ec81..69c52081d8 100644 --- a/autogpt_platform/backend/backend/blocks/human_in_the_loop.py +++ b/autogpt_platform/backend/backend/blocks/human_in_the_loop.py @@ -3,8 +3,7 @@ from typing import Any from prisma.enums import ReviewStatus -from backend.blocks.helpers.review import HITLReviewHelper -from backend.data.block import ( +from backend.blocks._base import ( Block, BlockCategory, BlockOutput, @@ -12,6 +11,7 @@ from backend.data.block import ( BlockSchemaOutput, BlockType, ) +from backend.blocks.helpers.review import HITLReviewHelper from backend.data.execution import ExecutionContext from backend.data.human_review import ReviewResult from backend.data.model import SchemaField diff --git a/autogpt_platform/backend/backend/blocks/ideogram.py b/autogpt_platform/backend/backend/blocks/ideogram.py index 09a384c74a..5aed4aa5a9 100644 --- a/autogpt_platform/backend/backend/blocks/ideogram.py +++ b/autogpt_platform/backend/backend/blocks/ideogram.py @@ -3,7 +3,7 @@ from typing import Any, Dict, Literal, Optional from pydantic import SecretStr -from backend.data.block import ( +from backend.blocks._base import ( Block, BlockCategory, BlockOutput, diff --git a/autogpt_platform/backend/backend/blocks/io.py b/autogpt_platform/backend/backend/blocks/io.py index a9c3859490..94542790ef 100644 --- a/autogpt_platform/backend/backend/blocks/io.py +++ b/autogpt_platform/backend/backend/blocks/io.py @@ -2,9 +2,7 @@ import copy from datetime import date, time from typing import Any, Optional -# Import for Google Drive file input block -from backend.blocks.google._drive import AttachmentView, GoogleDriveFile -from backend.data.block import ( +from backend.blocks._base import ( Block, BlockCategory, BlockOutput, @@ -12,6 +10,9 @@ from backend.data.block import ( BlockSchemaInput, BlockType, ) + +# Import for Google Drive file input block +from backend.blocks.google._drive import AttachmentView, GoogleDriveFile from backend.data.execution import ExecutionContext from backend.data.model import SchemaField from backend.util.file import store_media_file diff --git a/autogpt_platform/backend/backend/blocks/iteration.py b/autogpt_platform/backend/backend/blocks/iteration.py index 441f73fc4a..a35bcac9c1 100644 --- a/autogpt_platform/backend/backend/blocks/iteration.py +++ b/autogpt_platform/backend/backend/blocks/iteration.py @@ -1,6 +1,6 @@ from typing import Any -from backend.data.block import ( +from backend.blocks._base import ( Block, BlockCategory, BlockOutput, diff --git a/autogpt_platform/backend/backend/blocks/jina/chunking.py b/autogpt_platform/backend/backend/blocks/jina/chunking.py index 9a9b242aae..c248e3dd24 100644 --- a/autogpt_platform/backend/backend/blocks/jina/chunking.py +++ b/autogpt_platform/backend/backend/blocks/jina/chunking.py @@ -1,15 +1,15 @@ -from backend.blocks.jina._auth import ( - JinaCredentials, - JinaCredentialsField, - JinaCredentialsInput, -) -from backend.data.block import ( +from backend.blocks._base import ( Block, BlockCategory, BlockOutput, BlockSchemaInput, BlockSchemaOutput, ) +from backend.blocks.jina._auth import ( + JinaCredentials, + JinaCredentialsField, + JinaCredentialsInput, +) from backend.data.model import SchemaField from backend.util.request import Requests diff --git a/autogpt_platform/backend/backend/blocks/jina/embeddings.py b/autogpt_platform/backend/backend/blocks/jina/embeddings.py index 0f6cf68c6c..f787de03b3 100644 --- a/autogpt_platform/backend/backend/blocks/jina/embeddings.py +++ b/autogpt_platform/backend/backend/blocks/jina/embeddings.py @@ -1,15 +1,15 @@ -from backend.blocks.jina._auth import ( - JinaCredentials, - JinaCredentialsField, - JinaCredentialsInput, -) -from backend.data.block import ( +from backend.blocks._base import ( Block, BlockCategory, BlockOutput, BlockSchemaInput, BlockSchemaOutput, ) +from backend.blocks.jina._auth import ( + JinaCredentials, + JinaCredentialsField, + JinaCredentialsInput, +) from backend.data.model import SchemaField from backend.util.request import Requests diff --git a/autogpt_platform/backend/backend/blocks/jina/fact_checker.py b/autogpt_platform/backend/backend/blocks/jina/fact_checker.py index 3367ab99e6..df73ef94b1 100644 --- a/autogpt_platform/backend/backend/blocks/jina/fact_checker.py +++ b/autogpt_platform/backend/backend/blocks/jina/fact_checker.py @@ -3,18 +3,18 @@ from urllib.parse import quote from typing_extensions import TypedDict -from backend.blocks.jina._auth import ( - JinaCredentials, - JinaCredentialsField, - JinaCredentialsInput, -) -from backend.data.block import ( +from backend.blocks._base import ( Block, BlockCategory, BlockOutput, BlockSchemaInput, BlockSchemaOutput, ) +from backend.blocks.jina._auth import ( + JinaCredentials, + JinaCredentialsField, + JinaCredentialsInput, +) from backend.data.model import SchemaField from backend.util.request import Requests diff --git a/autogpt_platform/backend/backend/blocks/jina/search.py b/autogpt_platform/backend/backend/blocks/jina/search.py index 05cddcc1df..22a883fa03 100644 --- a/autogpt_platform/backend/backend/blocks/jina/search.py +++ b/autogpt_platform/backend/backend/blocks/jina/search.py @@ -1,5 +1,12 @@ from urllib.parse import quote +from backend.blocks._base import ( + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, +) from backend.blocks.jina._auth import ( TEST_CREDENTIALS, TEST_CREDENTIALS_INPUT, @@ -8,13 +15,6 @@ from backend.blocks.jina._auth import ( JinaCredentialsInput, ) from backend.blocks.search import GetRequest -from backend.data.block import ( - Block, - BlockCategory, - BlockOutput, - BlockSchemaInput, - BlockSchemaOutput, -) from backend.data.model import SchemaField from backend.util.exceptions import BlockExecutionError diff --git a/autogpt_platform/backend/backend/blocks/llm.py b/autogpt_platform/backend/backend/blocks/llm.py index 7a020593d7..1272a9ec1b 100644 --- a/autogpt_platform/backend/backend/blocks/llm.py +++ b/autogpt_platform/backend/backend/blocks/llm.py @@ -15,7 +15,7 @@ from anthropic.types import ToolParam from groq import AsyncGroq from pydantic import BaseModel, SecretStr -from backend.data.block import ( +from backend.blocks._base import ( Block, BlockCategory, BlockOutput, diff --git a/autogpt_platform/backend/backend/blocks/maths.py b/autogpt_platform/backend/backend/blocks/maths.py index ad6dc67bbe..0f94075277 100644 --- a/autogpt_platform/backend/backend/blocks/maths.py +++ b/autogpt_platform/backend/backend/blocks/maths.py @@ -2,7 +2,7 @@ import operator from enum import Enum from typing import Any -from backend.data.block import ( +from backend.blocks._base import ( Block, BlockCategory, BlockOutput, diff --git a/autogpt_platform/backend/backend/blocks/medium.py b/autogpt_platform/backend/backend/blocks/medium.py index d54062d3ab..f511f19329 100644 --- a/autogpt_platform/backend/backend/blocks/medium.py +++ b/autogpt_platform/backend/backend/blocks/medium.py @@ -3,7 +3,7 @@ from typing import List, Literal from pydantic import SecretStr -from backend.data.block import ( +from backend.blocks._base import ( Block, BlockCategory, BlockOutput, diff --git a/autogpt_platform/backend/backend/blocks/mem0.py b/autogpt_platform/backend/backend/blocks/mem0.py index b8dc11064a..ba0bd24290 100644 --- a/autogpt_platform/backend/backend/blocks/mem0.py +++ b/autogpt_platform/backend/backend/blocks/mem0.py @@ -3,7 +3,7 @@ from typing import Any, Literal, Optional, Union from mem0 import MemoryClient from pydantic import BaseModel, SecretStr -from backend.data.block import Block, BlockOutput, BlockSchemaInput, BlockSchemaOutput +from backend.blocks._base import Block, BlockOutput, BlockSchemaInput, BlockSchemaOutput from backend.data.model import ( APIKeyCredentials, CredentialsField, diff --git a/autogpt_platform/backend/backend/blocks/notion/create_page.py b/autogpt_platform/backend/backend/blocks/notion/create_page.py index 5edef144e3..315730d37c 100644 --- a/autogpt_platform/backend/backend/blocks/notion/create_page.py +++ b/autogpt_platform/backend/backend/blocks/notion/create_page.py @@ -4,7 +4,7 @@ from typing import Any, Dict, List, Optional from pydantic import model_validator -from backend.data.block import ( +from backend.blocks._base import ( Block, BlockCategory, BlockOutput, diff --git a/autogpt_platform/backend/backend/blocks/notion/read_database.py b/autogpt_platform/backend/backend/blocks/notion/read_database.py index 5720bea2f8..7b1dcf7be4 100644 --- a/autogpt_platform/backend/backend/blocks/notion/read_database.py +++ b/autogpt_platform/backend/backend/blocks/notion/read_database.py @@ -2,7 +2,7 @@ from __future__ import annotations from typing import Any, Dict, List, Optional -from backend.data.block import ( +from backend.blocks._base import ( Block, BlockCategory, BlockOutput, diff --git a/autogpt_platform/backend/backend/blocks/notion/read_page.py b/autogpt_platform/backend/backend/blocks/notion/read_page.py index 400fd2a929..a2b5273ad9 100644 --- a/autogpt_platform/backend/backend/blocks/notion/read_page.py +++ b/autogpt_platform/backend/backend/blocks/notion/read_page.py @@ -1,6 +1,6 @@ from __future__ import annotations -from backend.data.block import ( +from backend.blocks._base import ( Block, BlockCategory, BlockOutput, diff --git a/autogpt_platform/backend/backend/blocks/notion/read_page_markdown.py b/autogpt_platform/backend/backend/blocks/notion/read_page_markdown.py index 7ed87eaef9..cad3e85e79 100644 --- a/autogpt_platform/backend/backend/blocks/notion/read_page_markdown.py +++ b/autogpt_platform/backend/backend/blocks/notion/read_page_markdown.py @@ -1,6 +1,6 @@ from __future__ import annotations -from backend.data.block import ( +from backend.blocks._base import ( Block, BlockCategory, BlockOutput, diff --git a/autogpt_platform/backend/backend/blocks/notion/search.py b/autogpt_platform/backend/backend/blocks/notion/search.py index 1983763537..71af844b64 100644 --- a/autogpt_platform/backend/backend/blocks/notion/search.py +++ b/autogpt_platform/backend/backend/blocks/notion/search.py @@ -4,7 +4,7 @@ from typing import List, Optional from pydantic import BaseModel -from backend.data.block import ( +from backend.blocks._base import ( Block, BlockCategory, BlockOutput, diff --git a/autogpt_platform/backend/backend/blocks/nvidia/deepfake.py b/autogpt_platform/backend/backend/blocks/nvidia/deepfake.py index f60b649839..06b05ebc50 100644 --- a/autogpt_platform/backend/backend/blocks/nvidia/deepfake.py +++ b/autogpt_platform/backend/backend/blocks/nvidia/deepfake.py @@ -1,15 +1,15 @@ -from backend.blocks.nvidia._auth import ( - NvidiaCredentials, - NvidiaCredentialsField, - NvidiaCredentialsInput, -) -from backend.data.block import ( +from backend.blocks._base import ( Block, BlockCategory, BlockOutput, BlockSchemaInput, BlockSchemaOutput, ) +from backend.blocks.nvidia._auth import ( + NvidiaCredentials, + NvidiaCredentialsField, + NvidiaCredentialsInput, +) from backend.data.model import SchemaField from backend.util.request import Requests from backend.util.type import MediaFileType diff --git a/autogpt_platform/backend/backend/blocks/perplexity.py b/autogpt_platform/backend/backend/blocks/perplexity.py index e2796718a9..270081a3a8 100644 --- a/autogpt_platform/backend/backend/blocks/perplexity.py +++ b/autogpt_platform/backend/backend/blocks/perplexity.py @@ -6,7 +6,7 @@ from typing import Any, Literal import openai from pydantic import SecretStr -from backend.data.block import ( +from backend.blocks._base import ( Block, BlockCategory, BlockOutput, diff --git a/autogpt_platform/backend/backend/blocks/persistence.py b/autogpt_platform/backend/backend/blocks/persistence.py index a327fd22c7..7584993beb 100644 --- a/autogpt_platform/backend/backend/blocks/persistence.py +++ b/autogpt_platform/backend/backend/blocks/persistence.py @@ -1,7 +1,7 @@ import logging from typing import Any, Literal -from backend.data.block import ( +from backend.blocks._base import ( Block, BlockCategory, BlockOutput, diff --git a/autogpt_platform/backend/backend/blocks/pinecone.py b/autogpt_platform/backend/backend/blocks/pinecone.py index 878f6f72fb..f882212ab2 100644 --- a/autogpt_platform/backend/backend/blocks/pinecone.py +++ b/autogpt_platform/backend/backend/blocks/pinecone.py @@ -3,7 +3,7 @@ from typing import Any, Literal from pinecone import Pinecone, ServerlessSpec -from backend.data.block import ( +from backend.blocks._base import ( Block, BlockCategory, BlockOutput, diff --git a/autogpt_platform/backend/backend/blocks/reddit.py b/autogpt_platform/backend/backend/blocks/reddit.py index 1109d568db..6544c698a3 100644 --- a/autogpt_platform/backend/backend/blocks/reddit.py +++ b/autogpt_platform/backend/backend/blocks/reddit.py @@ -6,7 +6,7 @@ import praw from praw.models import Comment, MoreComments, Submission from pydantic import BaseModel, SecretStr -from backend.data.block import ( +from backend.blocks._base import ( Block, BlockCategory, BlockOutput, diff --git a/autogpt_platform/backend/backend/blocks/replicate/flux_advanced.py b/autogpt_platform/backend/backend/blocks/replicate/flux_advanced.py index c112ce75c4..e7a0a82cce 100644 --- a/autogpt_platform/backend/backend/blocks/replicate/flux_advanced.py +++ b/autogpt_platform/backend/backend/blocks/replicate/flux_advanced.py @@ -4,19 +4,19 @@ from enum import Enum from pydantic import SecretStr from replicate.client import Client as ReplicateClient -from backend.blocks.replicate._auth import ( - TEST_CREDENTIALS, - TEST_CREDENTIALS_INPUT, - ReplicateCredentialsInput, -) -from backend.blocks.replicate._helper import ReplicateOutputs, extract_result -from backend.data.block import ( +from backend.blocks._base import ( Block, BlockCategory, BlockOutput, BlockSchemaInput, BlockSchemaOutput, ) +from backend.blocks.replicate._auth import ( + TEST_CREDENTIALS, + TEST_CREDENTIALS_INPUT, + ReplicateCredentialsInput, +) +from backend.blocks.replicate._helper import ReplicateOutputs, extract_result from backend.data.model import APIKeyCredentials, CredentialsField, SchemaField diff --git a/autogpt_platform/backend/backend/blocks/replicate/replicate_block.py b/autogpt_platform/backend/backend/blocks/replicate/replicate_block.py index 7ee054d02e..2758c7cd06 100644 --- a/autogpt_platform/backend/backend/blocks/replicate/replicate_block.py +++ b/autogpt_platform/backend/backend/blocks/replicate/replicate_block.py @@ -4,19 +4,19 @@ from typing import Optional from pydantic import SecretStr from replicate.client import Client as ReplicateClient -from backend.blocks.replicate._auth import ( - TEST_CREDENTIALS, - TEST_CREDENTIALS_INPUT, - ReplicateCredentialsInput, -) -from backend.blocks.replicate._helper import ReplicateOutputs, extract_result -from backend.data.block import ( +from backend.blocks._base import ( Block, BlockCategory, BlockOutput, BlockSchemaInput, BlockSchemaOutput, ) +from backend.blocks.replicate._auth import ( + TEST_CREDENTIALS, + TEST_CREDENTIALS_INPUT, + ReplicateCredentialsInput, +) +from backend.blocks.replicate._helper import ReplicateOutputs, extract_result from backend.data.model import APIKeyCredentials, CredentialsField, SchemaField from backend.util.exceptions import BlockExecutionError, BlockInputError diff --git a/autogpt_platform/backend/backend/blocks/rss.py b/autogpt_platform/backend/backend/blocks/rss.py index a23b3ee25c..5d26bc592c 100644 --- a/autogpt_platform/backend/backend/blocks/rss.py +++ b/autogpt_platform/backend/backend/blocks/rss.py @@ -6,7 +6,7 @@ from typing import Any import feedparser import pydantic -from backend.data.block import ( +from backend.blocks._base import ( Block, BlockCategory, BlockOutput, diff --git a/autogpt_platform/backend/backend/blocks/sampling.py b/autogpt_platform/backend/backend/blocks/sampling.py index b4463947a7..eb5f47e80e 100644 --- a/autogpt_platform/backend/backend/blocks/sampling.py +++ b/autogpt_platform/backend/backend/blocks/sampling.py @@ -3,7 +3,7 @@ from collections import defaultdict from enum import Enum from typing import Any, Dict, List, Optional, Union -from backend.data.block import ( +from backend.blocks._base import ( Block, BlockCategory, BlockOutput, diff --git a/autogpt_platform/backend/backend/blocks/screenshotone.py b/autogpt_platform/backend/backend/blocks/screenshotone.py index ee998f8da2..1ce133af83 100644 --- a/autogpt_platform/backend/backend/blocks/screenshotone.py +++ b/autogpt_platform/backend/backend/blocks/screenshotone.py @@ -4,7 +4,7 @@ from typing import Literal from pydantic import SecretStr -from backend.data.block import ( +from backend.blocks._base import ( Block, BlockCategory, BlockOutput, diff --git a/autogpt_platform/backend/backend/blocks/search.py b/autogpt_platform/backend/backend/blocks/search.py index 09e16034a3..61acb2108e 100644 --- a/autogpt_platform/backend/backend/blocks/search.py +++ b/autogpt_platform/backend/backend/blocks/search.py @@ -3,14 +3,14 @@ from urllib.parse import quote from pydantic import SecretStr -from backend.blocks.helpers.http import GetRequest -from backend.data.block import ( +from backend.blocks._base import ( Block, BlockCategory, BlockOutput, BlockSchemaInput, BlockSchemaOutput, ) +from backend.blocks.helpers.http import GetRequest from backend.data.model import ( APIKeyCredentials, CredentialsField, diff --git a/autogpt_platform/backend/backend/blocks/slant3d/base.py b/autogpt_platform/backend/backend/blocks/slant3d/base.py index e368a1b451..3ce24f8ddc 100644 --- a/autogpt_platform/backend/backend/blocks/slant3d/base.py +++ b/autogpt_platform/backend/backend/blocks/slant3d/base.py @@ -1,6 +1,6 @@ from typing import Any, Dict -from backend.data.block import Block +from backend.blocks._base import Block from backend.util.request import Requests from ._api import Color, CustomerDetails, OrderItem, Profile diff --git a/autogpt_platform/backend/backend/blocks/slant3d/filament.py b/autogpt_platform/backend/backend/blocks/slant3d/filament.py index f2b9eae38d..723ebff59e 100644 --- a/autogpt_platform/backend/backend/blocks/slant3d/filament.py +++ b/autogpt_platform/backend/backend/blocks/slant3d/filament.py @@ -1,6 +1,6 @@ from typing import List -from backend.data.block import BlockOutput, BlockSchemaInput, BlockSchemaOutput +from backend.blocks._base import BlockOutput, BlockSchemaInput, BlockSchemaOutput from backend.data.model import APIKeyCredentials, SchemaField from ._api import ( diff --git a/autogpt_platform/backend/backend/blocks/slant3d/order.py b/autogpt_platform/backend/backend/blocks/slant3d/order.py index 4ece3fc51e..36d2705ea5 100644 --- a/autogpt_platform/backend/backend/blocks/slant3d/order.py +++ b/autogpt_platform/backend/backend/blocks/slant3d/order.py @@ -1,7 +1,7 @@ import uuid from typing import List -from backend.data.block import BlockOutput, BlockSchemaInput, BlockSchemaOutput +from backend.blocks._base import BlockOutput, BlockSchemaInput, BlockSchemaOutput from backend.data.model import APIKeyCredentials, SchemaField from backend.util.settings import BehaveAs, Settings diff --git a/autogpt_platform/backend/backend/blocks/slant3d/slicing.py b/autogpt_platform/backend/backend/blocks/slant3d/slicing.py index 1952b162d2..8740f9504f 100644 --- a/autogpt_platform/backend/backend/blocks/slant3d/slicing.py +++ b/autogpt_platform/backend/backend/blocks/slant3d/slicing.py @@ -1,4 +1,4 @@ -from backend.data.block import BlockOutput, BlockSchemaInput, BlockSchemaOutput +from backend.blocks._base import BlockOutput, BlockSchemaInput, BlockSchemaOutput from backend.data.model import APIKeyCredentials, SchemaField from ._api import ( diff --git a/autogpt_platform/backend/backend/blocks/slant3d/webhook.py b/autogpt_platform/backend/backend/blocks/slant3d/webhook.py index e5a2d72568..f2cb86ec09 100644 --- a/autogpt_platform/backend/backend/blocks/slant3d/webhook.py +++ b/autogpt_platform/backend/backend/blocks/slant3d/webhook.py @@ -1,6 +1,6 @@ from pydantic import BaseModel -from backend.data.block import ( +from backend.blocks._base import ( Block, BlockCategory, BlockOutput, diff --git a/autogpt_platform/backend/backend/blocks/smart_decision_maker.py b/autogpt_platform/backend/backend/blocks/smart_decision_maker.py index ff6042eaab..5e6b11eebd 100644 --- a/autogpt_platform/backend/backend/blocks/smart_decision_maker.py +++ b/autogpt_platform/backend/backend/blocks/smart_decision_maker.py @@ -7,8 +7,7 @@ from typing import TYPE_CHECKING, Any from pydantic import BaseModel import backend.blocks.llm as llm -from backend.blocks.agent import AgentExecutorBlock -from backend.data.block import ( +from backend.blocks._base import ( Block, BlockCategory, BlockInput, @@ -17,6 +16,7 @@ from backend.data.block import ( BlockSchemaOutput, BlockType, ) +from backend.blocks.agent import AgentExecutorBlock from backend.data.dynamic_fields import ( extract_base_field_name, get_dynamic_field_description, diff --git a/autogpt_platform/backend/backend/blocks/smartlead/campaign.py b/autogpt_platform/backend/backend/blocks/smartlead/campaign.py index c3bf930068..302a38f4db 100644 --- a/autogpt_platform/backend/backend/blocks/smartlead/campaign.py +++ b/autogpt_platform/backend/backend/blocks/smartlead/campaign.py @@ -1,3 +1,10 @@ +from backend.blocks._base import ( + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, +) from backend.blocks.smartlead._api import SmartLeadClient from backend.blocks.smartlead._auth import ( TEST_CREDENTIALS, @@ -16,13 +23,6 @@ from backend.blocks.smartlead.models import ( SaveSequencesResponse, Sequence, ) -from backend.data.block import ( - Block, - BlockCategory, - BlockOutput, - BlockSchemaInput, - BlockSchemaOutput, -) from backend.data.model import CredentialsField, SchemaField diff --git a/autogpt_platform/backend/backend/blocks/spreadsheet.py b/autogpt_platform/backend/backend/blocks/spreadsheet.py index a13f9e2f6d..2bbfd6776f 100644 --- a/autogpt_platform/backend/backend/blocks/spreadsheet.py +++ b/autogpt_platform/backend/backend/blocks/spreadsheet.py @@ -1,6 +1,6 @@ from pathlib import Path -from backend.data.block import ( +from backend.blocks._base import ( Block, BlockCategory, BlockOutput, diff --git a/autogpt_platform/backend/backend/blocks/system/library_operations.py b/autogpt_platform/backend/backend/blocks/system/library_operations.py index 116da64599..b2433ce220 100644 --- a/autogpt_platform/backend/backend/blocks/system/library_operations.py +++ b/autogpt_platform/backend/backend/blocks/system/library_operations.py @@ -3,7 +3,7 @@ from typing import Any from pydantic import BaseModel -from backend.data.block import ( +from backend.blocks._base import ( Block, BlockCategory, BlockOutput, diff --git a/autogpt_platform/backend/backend/blocks/system/store_operations.py b/autogpt_platform/backend/backend/blocks/system/store_operations.py index e9b7a01ebe..88958a5707 100644 --- a/autogpt_platform/backend/backend/blocks/system/store_operations.py +++ b/autogpt_platform/backend/backend/blocks/system/store_operations.py @@ -3,7 +3,7 @@ from typing import Literal from pydantic import BaseModel -from backend.data.block import ( +from backend.blocks._base import ( Block, BlockCategory, BlockOutput, diff --git a/autogpt_platform/backend/backend/blocks/talking_head.py b/autogpt_platform/backend/backend/blocks/talking_head.py index e01e3d4023..f199d030ff 100644 --- a/autogpt_platform/backend/backend/blocks/talking_head.py +++ b/autogpt_platform/backend/backend/blocks/talking_head.py @@ -3,7 +3,7 @@ from typing import Literal from pydantic import SecretStr -from backend.data.block import ( +from backend.blocks._base import ( Block, BlockCategory, BlockOutput, diff --git a/autogpt_platform/backend/backend/blocks/test/test_block.py b/autogpt_platform/backend/backend/blocks/test/test_block.py index 7a1fdbcc73..c7f3ca62f2 100644 --- a/autogpt_platform/backend/backend/blocks/test/test_block.py +++ b/autogpt_platform/backend/backend/blocks/test/test_block.py @@ -2,7 +2,8 @@ from typing import Any, Type import pytest -from backend.data.block import Block, BlockSchemaInput, get_blocks +from backend.blocks import get_blocks +from backend.blocks._base import Block, BlockSchemaInput from backend.data.model import SchemaField from backend.util.test import execute_block_test diff --git a/autogpt_platform/backend/backend/blocks/text.py b/autogpt_platform/backend/backend/blocks/text.py index 359e22a84f..4276ff3a45 100644 --- a/autogpt_platform/backend/backend/blocks/text.py +++ b/autogpt_platform/backend/backend/blocks/text.py @@ -4,7 +4,7 @@ from typing import Any import regex # Has built-in timeout support -from backend.data.block import ( +from backend.blocks._base import ( Block, BlockCategory, BlockOutput, diff --git a/autogpt_platform/backend/backend/blocks/text_to_speech_block.py b/autogpt_platform/backend/backend/blocks/text_to_speech_block.py index 8fe9e1cda7..a408c8772f 100644 --- a/autogpt_platform/backend/backend/blocks/text_to_speech_block.py +++ b/autogpt_platform/backend/backend/blocks/text_to_speech_block.py @@ -2,7 +2,7 @@ from typing import Any, Literal from pydantic import SecretStr -from backend.data.block import ( +from backend.blocks._base import ( Block, BlockCategory, BlockOutput, diff --git a/autogpt_platform/backend/backend/blocks/time_blocks.py b/autogpt_platform/backend/backend/blocks/time_blocks.py index 3a1f4c678e..5ee13db30b 100644 --- a/autogpt_platform/backend/backend/blocks/time_blocks.py +++ b/autogpt_platform/backend/backend/blocks/time_blocks.py @@ -7,7 +7,7 @@ from zoneinfo import ZoneInfo from pydantic import BaseModel -from backend.data.block import ( +from backend.blocks._base import ( Block, BlockCategory, BlockOutput, diff --git a/autogpt_platform/backend/backend/blocks/todoist/comments.py b/autogpt_platform/backend/backend/blocks/todoist/comments.py index f11534cbe3..dc8eef3919 100644 --- a/autogpt_platform/backend/backend/blocks/todoist/comments.py +++ b/autogpt_platform/backend/backend/blocks/todoist/comments.py @@ -4,6 +4,13 @@ from pydantic import BaseModel from todoist_api_python.api import TodoistAPI from typing_extensions import Optional +from backend.blocks._base import ( + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, +) from backend.blocks.todoist._auth import ( TEST_CREDENTIALS, TEST_CREDENTIALS_INPUT, @@ -12,13 +19,6 @@ from backend.blocks.todoist._auth import ( TodoistCredentialsField, TodoistCredentialsInput, ) -from backend.data.block import ( - Block, - BlockCategory, - BlockOutput, - BlockSchemaInput, - BlockSchemaOutput, -) from backend.data.model import SchemaField diff --git a/autogpt_platform/backend/backend/blocks/todoist/labels.py b/autogpt_platform/backend/backend/blocks/todoist/labels.py index 8107459567..0b0f26cc77 100644 --- a/autogpt_platform/backend/backend/blocks/todoist/labels.py +++ b/autogpt_platform/backend/backend/blocks/todoist/labels.py @@ -1,6 +1,13 @@ from todoist_api_python.api import TodoistAPI from typing_extensions import Optional +from backend.blocks._base import ( + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, +) from backend.blocks.todoist._auth import ( TEST_CREDENTIALS, TEST_CREDENTIALS_INPUT, @@ -10,13 +17,6 @@ from backend.blocks.todoist._auth import ( TodoistCredentialsInput, ) from backend.blocks.todoist._types import Colors -from backend.data.block import ( - Block, - BlockCategory, - BlockOutput, - BlockSchemaInput, - BlockSchemaOutput, -) from backend.data.model import SchemaField diff --git a/autogpt_platform/backend/backend/blocks/todoist/projects.py b/autogpt_platform/backend/backend/blocks/todoist/projects.py index c6d345c116..a35bd3d41e 100644 --- a/autogpt_platform/backend/backend/blocks/todoist/projects.py +++ b/autogpt_platform/backend/backend/blocks/todoist/projects.py @@ -1,6 +1,13 @@ from todoist_api_python.api import TodoistAPI from typing_extensions import Optional +from backend.blocks._base import ( + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, +) from backend.blocks.todoist._auth import ( TEST_CREDENTIALS, TEST_CREDENTIALS_INPUT, @@ -10,13 +17,6 @@ from backend.blocks.todoist._auth import ( TodoistCredentialsInput, ) from backend.blocks.todoist._types import Colors -from backend.data.block import ( - Block, - BlockCategory, - BlockOutput, - BlockSchemaInput, - BlockSchemaOutput, -) from backend.data.model import SchemaField diff --git a/autogpt_platform/backend/backend/blocks/todoist/sections.py b/autogpt_platform/backend/backend/blocks/todoist/sections.py index 52dceb70b9..23cabdb661 100644 --- a/autogpt_platform/backend/backend/blocks/todoist/sections.py +++ b/autogpt_platform/backend/backend/blocks/todoist/sections.py @@ -1,6 +1,13 @@ from todoist_api_python.api import TodoistAPI from typing_extensions import Optional +from backend.blocks._base import ( + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, +) from backend.blocks.todoist._auth import ( TEST_CREDENTIALS, TEST_CREDENTIALS_INPUT, @@ -9,13 +16,6 @@ from backend.blocks.todoist._auth import ( TodoistCredentialsField, TodoistCredentialsInput, ) -from backend.data.block import ( - Block, - BlockCategory, - BlockOutput, - BlockSchemaInput, - BlockSchemaOutput, -) from backend.data.model import SchemaField diff --git a/autogpt_platform/backend/backend/blocks/todoist/tasks.py b/autogpt_platform/backend/backend/blocks/todoist/tasks.py index 183a3340b3..6aaf766114 100644 --- a/autogpt_platform/backend/backend/blocks/todoist/tasks.py +++ b/autogpt_platform/backend/backend/blocks/todoist/tasks.py @@ -4,6 +4,13 @@ from todoist_api_python.api import TodoistAPI from todoist_api_python.models import Task from typing_extensions import Optional +from backend.blocks._base import ( + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, +) from backend.blocks.todoist._auth import ( TEST_CREDENTIALS, TEST_CREDENTIALS_INPUT, @@ -12,13 +19,6 @@ from backend.blocks.todoist._auth import ( TodoistCredentialsField, TodoistCredentialsInput, ) -from backend.data.block import ( - Block, - BlockCategory, - BlockOutput, - BlockSchemaInput, - BlockSchemaOutput, -) from backend.data.model import SchemaField diff --git a/autogpt_platform/backend/backend/blocks/twitter/_types.py b/autogpt_platform/backend/backend/blocks/twitter/_types.py index 88050ed545..ead54677be 100644 --- a/autogpt_platform/backend/backend/blocks/twitter/_types.py +++ b/autogpt_platform/backend/backend/blocks/twitter/_types.py @@ -3,7 +3,7 @@ from enum import Enum from pydantic import BaseModel -from backend.data.block import BlockSchemaInput +from backend.blocks._base import BlockSchemaInput from backend.data.model import SchemaField # -------------- Tweets ----------------- diff --git a/autogpt_platform/backend/backend/blocks/twitter/direct_message/direct_message_lookup.py b/autogpt_platform/backend/backend/blocks/twitter/direct_message/direct_message_lookup.py index 0ce8e08535..f4b07ca53e 100644 --- a/autogpt_platform/backend/backend/blocks/twitter/direct_message/direct_message_lookup.py +++ b/autogpt_platform/backend/backend/blocks/twitter/direct_message/direct_message_lookup.py @@ -4,8 +4,8 @@ # import tweepy # from tweepy.client import Response +# from backend.blocks._base import Block, BlockCategory, BlockOutput, BlockSchema, BlockSchemaInput, BlockSchemaOutput # from backend.blocks.twitter._serializer import IncludesSerializer, ResponseDataSerializer -# from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema, BlockSchemaInput, BlockSchemaOutput # from backend.data.model import SchemaField # from backend.blocks.twitter._builders import DMExpansionsBuilder # from backend.blocks.twitter._types import DMEventExpansion, DMEventExpansionInputs, DMEventType, DMMediaField, DMTweetField, TweetUserFields diff --git a/autogpt_platform/backend/backend/blocks/twitter/direct_message/manage_direct_message.py b/autogpt_platform/backend/backend/blocks/twitter/direct_message/manage_direct_message.py index cbbe019f37..0104e3e9c5 100644 --- a/autogpt_platform/backend/backend/blocks/twitter/direct_message/manage_direct_message.py +++ b/autogpt_platform/backend/backend/blocks/twitter/direct_message/manage_direct_message.py @@ -5,7 +5,7 @@ # import tweepy # from tweepy.client import Response -# from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema, BlockSchemaInput, BlockSchemaOutput +# from backend.blocks._base import Block, BlockCategory, BlockOutput, BlockSchema, BlockSchemaInput, BlockSchemaOutput # from backend.data.model import SchemaField # from backend.blocks.twitter.tweepy_exceptions import handle_tweepy_exception # from backend.blocks.twitter._auth import ( diff --git a/autogpt_platform/backend/backend/blocks/twitter/lists/list_follows.py b/autogpt_platform/backend/backend/blocks/twitter/lists/list_follows.py index 5616e0ce14..93dfaef919 100644 --- a/autogpt_platform/backend/backend/blocks/twitter/lists/list_follows.py +++ b/autogpt_platform/backend/backend/blocks/twitter/lists/list_follows.py @@ -1,6 +1,13 @@ # from typing import cast import tweepy +from backend.blocks._base import ( + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, +) from backend.blocks.twitter._auth import ( TEST_CREDENTIALS, TEST_CREDENTIALS_INPUT, @@ -13,13 +20,6 @@ from backend.blocks.twitter._auth import ( # from backend.blocks.twitter._builders import UserExpansionsBuilder # from backend.blocks.twitter._types import TweetFields, TweetUserFields, UserExpansionInputs, UserExpansions from backend.blocks.twitter.tweepy_exceptions import handle_tweepy_exception -from backend.data.block import ( - Block, - BlockCategory, - BlockOutput, - BlockSchemaInput, - BlockSchemaOutput, -) from backend.data.model import SchemaField # from tweepy.client import Response diff --git a/autogpt_platform/backend/backend/blocks/twitter/lists/list_lookup.py b/autogpt_platform/backend/backend/blocks/twitter/lists/list_lookup.py index 6b46f00a37..a6a5607196 100644 --- a/autogpt_platform/backend/backend/blocks/twitter/lists/list_lookup.py +++ b/autogpt_platform/backend/backend/blocks/twitter/lists/list_lookup.py @@ -3,6 +3,7 @@ from typing import cast import tweepy from tweepy.client import Response +from backend.blocks._base import Block, BlockCategory, BlockOutput, BlockSchemaOutput from backend.blocks.twitter._auth import ( TEST_CREDENTIALS, TEST_CREDENTIALS_INPUT, @@ -23,7 +24,6 @@ from backend.blocks.twitter._types import ( TweetUserFieldsFilter, ) from backend.blocks.twitter.tweepy_exceptions import handle_tweepy_exception -from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchemaOutput from backend.data.model import SchemaField diff --git a/autogpt_platform/backend/backend/blocks/twitter/lists/list_members.py b/autogpt_platform/backend/backend/blocks/twitter/lists/list_members.py index 32ffb9e5b6..5505f1457a 100644 --- a/autogpt_platform/backend/backend/blocks/twitter/lists/list_members.py +++ b/autogpt_platform/backend/backend/blocks/twitter/lists/list_members.py @@ -3,6 +3,13 @@ from typing import cast import tweepy from tweepy.client import Response +from backend.blocks._base import ( + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, +) from backend.blocks.twitter._auth import ( TEST_CREDENTIALS, TEST_CREDENTIALS_INPUT, @@ -29,13 +36,6 @@ from backend.blocks.twitter._types import ( UserExpansionsFilter, ) from backend.blocks.twitter.tweepy_exceptions import handle_tweepy_exception -from backend.data.block import ( - Block, - BlockCategory, - BlockOutput, - BlockSchemaInput, - BlockSchemaOutput, -) from backend.data.model import SchemaField diff --git a/autogpt_platform/backend/backend/blocks/twitter/lists/list_tweets_lookup.py b/autogpt_platform/backend/backend/blocks/twitter/lists/list_tweets_lookup.py index e43980683e..57dc6579c9 100644 --- a/autogpt_platform/backend/backend/blocks/twitter/lists/list_tweets_lookup.py +++ b/autogpt_platform/backend/backend/blocks/twitter/lists/list_tweets_lookup.py @@ -3,6 +3,7 @@ from typing import cast import tweepy from tweepy.client import Response +from backend.blocks._base import Block, BlockCategory, BlockOutput, BlockSchemaOutput from backend.blocks.twitter._auth import ( TEST_CREDENTIALS, TEST_CREDENTIALS_INPUT, @@ -26,7 +27,6 @@ from backend.blocks.twitter._types import ( TweetUserFieldsFilter, ) from backend.blocks.twitter.tweepy_exceptions import handle_tweepy_exception -from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchemaOutput from backend.data.model import SchemaField diff --git a/autogpt_platform/backend/backend/blocks/twitter/lists/manage_lists.py b/autogpt_platform/backend/backend/blocks/twitter/lists/manage_lists.py index 4092fbaa93..9bab05e98b 100644 --- a/autogpt_platform/backend/backend/blocks/twitter/lists/manage_lists.py +++ b/autogpt_platform/backend/backend/blocks/twitter/lists/manage_lists.py @@ -3,6 +3,13 @@ from typing import cast import tweepy from tweepy.client import Response +from backend.blocks._base import ( + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, +) from backend.blocks.twitter._auth import ( TEST_CREDENTIALS, TEST_CREDENTIALS_INPUT, @@ -12,13 +19,6 @@ from backend.blocks.twitter._auth import ( TwitterCredentialsInput, ) from backend.blocks.twitter.tweepy_exceptions import handle_tweepy_exception -from backend.data.block import ( - Block, - BlockCategory, - BlockOutput, - BlockSchemaInput, - BlockSchemaOutput, -) from backend.data.model import SchemaField diff --git a/autogpt_platform/backend/backend/blocks/twitter/lists/pinned_lists.py b/autogpt_platform/backend/backend/blocks/twitter/lists/pinned_lists.py index 7bc5bb543f..0ebe9503b0 100644 --- a/autogpt_platform/backend/backend/blocks/twitter/lists/pinned_lists.py +++ b/autogpt_platform/backend/backend/blocks/twitter/lists/pinned_lists.py @@ -3,6 +3,13 @@ from typing import cast import tweepy from tweepy.client import Response +from backend.blocks._base import ( + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, +) from backend.blocks.twitter._auth import ( TEST_CREDENTIALS, TEST_CREDENTIALS_INPUT, @@ -23,13 +30,6 @@ from backend.blocks.twitter._types import ( TweetUserFieldsFilter, ) from backend.blocks.twitter.tweepy_exceptions import handle_tweepy_exception -from backend.data.block import ( - Block, - BlockCategory, - BlockOutput, - BlockSchemaInput, - BlockSchemaOutput, -) from backend.data.model import SchemaField diff --git a/autogpt_platform/backend/backend/blocks/twitter/spaces/search_spaces.py b/autogpt_platform/backend/backend/blocks/twitter/spaces/search_spaces.py index bd013cecc1..a38dc5452e 100644 --- a/autogpt_platform/backend/backend/blocks/twitter/spaces/search_spaces.py +++ b/autogpt_platform/backend/backend/blocks/twitter/spaces/search_spaces.py @@ -3,6 +3,7 @@ from typing import cast import tweepy from tweepy.client import Response +from backend.blocks._base import Block, BlockCategory, BlockOutput, BlockSchemaOutput from backend.blocks.twitter._auth import ( TEST_CREDENTIALS, TEST_CREDENTIALS_INPUT, @@ -24,7 +25,6 @@ from backend.blocks.twitter._types import ( TweetUserFieldsFilter, ) from backend.blocks.twitter.tweepy_exceptions import handle_tweepy_exception -from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchemaOutput from backend.data.model import SchemaField diff --git a/autogpt_platform/backend/backend/blocks/twitter/spaces/spaces_lookup.py b/autogpt_platform/backend/backend/blocks/twitter/spaces/spaces_lookup.py index 2c99d3ba3a..c31f0efd38 100644 --- a/autogpt_platform/backend/backend/blocks/twitter/spaces/spaces_lookup.py +++ b/autogpt_platform/backend/backend/blocks/twitter/spaces/spaces_lookup.py @@ -4,6 +4,7 @@ import tweepy from pydantic import BaseModel from tweepy.client import Response +from backend.blocks._base import Block, BlockCategory, BlockOutput, BlockSchemaOutput from backend.blocks.twitter._auth import ( TEST_CREDENTIALS, TEST_CREDENTIALS_INPUT, @@ -36,7 +37,6 @@ from backend.blocks.twitter._types import ( UserExpansionsFilter, ) from backend.blocks.twitter.tweepy_exceptions import handle_tweepy_exception -from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchemaOutput from backend.data.model import SchemaField diff --git a/autogpt_platform/backend/backend/blocks/twitter/tweets/bookmark.py b/autogpt_platform/backend/backend/blocks/twitter/tweets/bookmark.py index b69002837e..9d8bfccad9 100644 --- a/autogpt_platform/backend/backend/blocks/twitter/tweets/bookmark.py +++ b/autogpt_platform/backend/backend/blocks/twitter/tweets/bookmark.py @@ -3,6 +3,13 @@ from typing import cast import tweepy from tweepy.client import Response +from backend.blocks._base import ( + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, +) from backend.blocks.twitter._auth import ( TEST_CREDENTIALS, TEST_CREDENTIALS_INPUT, @@ -26,13 +33,6 @@ from backend.blocks.twitter._types import ( TweetUserFieldsFilter, ) from backend.blocks.twitter.tweepy_exceptions import handle_tweepy_exception -from backend.data.block import ( - Block, - BlockCategory, - BlockOutput, - BlockSchemaInput, - BlockSchemaOutput, -) from backend.data.model import SchemaField diff --git a/autogpt_platform/backend/backend/blocks/twitter/tweets/hide.py b/autogpt_platform/backend/backend/blocks/twitter/tweets/hide.py index f9992ea7c0..72ed2096a7 100644 --- a/autogpt_platform/backend/backend/blocks/twitter/tweets/hide.py +++ b/autogpt_platform/backend/backend/blocks/twitter/tweets/hide.py @@ -1,5 +1,12 @@ import tweepy +from backend.blocks._base import ( + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, +) from backend.blocks.twitter._auth import ( TEST_CREDENTIALS, TEST_CREDENTIALS_INPUT, @@ -9,13 +16,6 @@ from backend.blocks.twitter._auth import ( TwitterCredentialsInput, ) from backend.blocks.twitter.tweepy_exceptions import handle_tweepy_exception -from backend.data.block import ( - Block, - BlockCategory, - BlockOutput, - BlockSchemaInput, - BlockSchemaOutput, -) from backend.data.model import SchemaField diff --git a/autogpt_platform/backend/backend/blocks/twitter/tweets/like.py b/autogpt_platform/backend/backend/blocks/twitter/tweets/like.py index 2d499257a9..c2a920276c 100644 --- a/autogpt_platform/backend/backend/blocks/twitter/tweets/like.py +++ b/autogpt_platform/backend/backend/blocks/twitter/tweets/like.py @@ -3,6 +3,13 @@ from typing import cast import tweepy from tweepy.client import Response +from backend.blocks._base import ( + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, +) from backend.blocks.twitter._auth import ( TEST_CREDENTIALS, TEST_CREDENTIALS_INPUT, @@ -31,13 +38,6 @@ from backend.blocks.twitter._types import ( UserExpansionsFilter, ) from backend.blocks.twitter.tweepy_exceptions import handle_tweepy_exception -from backend.data.block import ( - Block, - BlockCategory, - BlockOutput, - BlockSchemaInput, - BlockSchemaOutput, -) from backend.data.model import SchemaField diff --git a/autogpt_platform/backend/backend/blocks/twitter/tweets/manage.py b/autogpt_platform/backend/backend/blocks/twitter/tweets/manage.py index 875e22738b..68e379b895 100644 --- a/autogpt_platform/backend/backend/blocks/twitter/tweets/manage.py +++ b/autogpt_platform/backend/backend/blocks/twitter/tweets/manage.py @@ -5,6 +5,13 @@ import tweepy from pydantic import BaseModel from tweepy.client import Response +from backend.blocks._base import ( + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, +) from backend.blocks.twitter._auth import ( TEST_CREDENTIALS, TEST_CREDENTIALS_INPUT, @@ -35,13 +42,6 @@ from backend.blocks.twitter._types import ( TweetUserFieldsFilter, ) from backend.blocks.twitter.tweepy_exceptions import handle_tweepy_exception -from backend.data.block import ( - Block, - BlockCategory, - BlockOutput, - BlockSchemaInput, - BlockSchemaOutput, -) from backend.data.model import SchemaField diff --git a/autogpt_platform/backend/backend/blocks/twitter/tweets/quote.py b/autogpt_platform/backend/backend/blocks/twitter/tweets/quote.py index fc6c336e20..be8d5b3125 100644 --- a/autogpt_platform/backend/backend/blocks/twitter/tweets/quote.py +++ b/autogpt_platform/backend/backend/blocks/twitter/tweets/quote.py @@ -3,6 +3,7 @@ from typing import cast import tweepy from tweepy.client import Response +from backend.blocks._base import Block, BlockCategory, BlockOutput, BlockSchemaOutput from backend.blocks.twitter._auth import ( TEST_CREDENTIALS, TEST_CREDENTIALS_INPUT, @@ -27,7 +28,6 @@ from backend.blocks.twitter._types import ( TweetUserFieldsFilter, ) from backend.blocks.twitter.tweepy_exceptions import handle_tweepy_exception -from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchemaOutput from backend.data.model import SchemaField diff --git a/autogpt_platform/backend/backend/blocks/twitter/tweets/retweet.py b/autogpt_platform/backend/backend/blocks/twitter/tweets/retweet.py index 1f65f90ea3..606e3b8a74 100644 --- a/autogpt_platform/backend/backend/blocks/twitter/tweets/retweet.py +++ b/autogpt_platform/backend/backend/blocks/twitter/tweets/retweet.py @@ -3,6 +3,13 @@ from typing import cast import tweepy from tweepy.client import Response +from backend.blocks._base import ( + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, +) from backend.blocks.twitter._auth import ( TEST_CREDENTIALS, TEST_CREDENTIALS_INPUT, @@ -23,13 +30,6 @@ from backend.blocks.twitter._types import ( UserExpansionsFilter, ) from backend.blocks.twitter.tweepy_exceptions import handle_tweepy_exception -from backend.data.block import ( - Block, - BlockCategory, - BlockOutput, - BlockSchemaInput, - BlockSchemaOutput, -) from backend.data.model import SchemaField diff --git a/autogpt_platform/backend/backend/blocks/twitter/tweets/timeline.py b/autogpt_platform/backend/backend/blocks/twitter/tweets/timeline.py index 9f07beba66..347ff5aee1 100644 --- a/autogpt_platform/backend/backend/blocks/twitter/tweets/timeline.py +++ b/autogpt_platform/backend/backend/blocks/twitter/tweets/timeline.py @@ -4,6 +4,7 @@ from typing import cast import tweepy from tweepy.client import Response +from backend.blocks._base import Block, BlockCategory, BlockOutput, BlockSchemaOutput from backend.blocks.twitter._auth import ( TEST_CREDENTIALS, TEST_CREDENTIALS_INPUT, @@ -31,7 +32,6 @@ from backend.blocks.twitter._types import ( TweetUserFieldsFilter, ) from backend.blocks.twitter.tweepy_exceptions import handle_tweepy_exception -from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchemaOutput from backend.data.model import SchemaField diff --git a/autogpt_platform/backend/backend/blocks/twitter/tweets/tweet_lookup.py b/autogpt_platform/backend/backend/blocks/twitter/tweets/tweet_lookup.py index 540aa1395f..f452848288 100644 --- a/autogpt_platform/backend/backend/blocks/twitter/tweets/tweet_lookup.py +++ b/autogpt_platform/backend/backend/blocks/twitter/tweets/tweet_lookup.py @@ -3,6 +3,7 @@ from typing import cast import tweepy from tweepy.client import Response +from backend.blocks._base import Block, BlockCategory, BlockOutput, BlockSchemaOutput from backend.blocks.twitter._auth import ( TEST_CREDENTIALS, TEST_CREDENTIALS_INPUT, @@ -26,7 +27,6 @@ from backend.blocks.twitter._types import ( TweetUserFieldsFilter, ) from backend.blocks.twitter.tweepy_exceptions import handle_tweepy_exception -from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchemaOutput from backend.data.model import SchemaField diff --git a/autogpt_platform/backend/backend/blocks/twitter/users/blocks.py b/autogpt_platform/backend/backend/blocks/twitter/users/blocks.py index 1c192aa6b5..12df24cfe2 100644 --- a/autogpt_platform/backend/backend/blocks/twitter/users/blocks.py +++ b/autogpt_platform/backend/backend/blocks/twitter/users/blocks.py @@ -3,6 +3,7 @@ from typing import cast import tweepy from tweepy.client import Response +from backend.blocks._base import Block, BlockCategory, BlockOutput, BlockSchemaOutput from backend.blocks.twitter._auth import ( TEST_CREDENTIALS, TEST_CREDENTIALS_INPUT, @@ -20,7 +21,6 @@ from backend.blocks.twitter._types import ( UserExpansionsFilter, ) from backend.blocks.twitter.tweepy_exceptions import handle_tweepy_exception -from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchemaOutput from backend.data.model import SchemaField diff --git a/autogpt_platform/backend/backend/blocks/twitter/users/follows.py b/autogpt_platform/backend/backend/blocks/twitter/users/follows.py index 537aea6031..20276b19b4 100644 --- a/autogpt_platform/backend/backend/blocks/twitter/users/follows.py +++ b/autogpt_platform/backend/backend/blocks/twitter/users/follows.py @@ -3,6 +3,13 @@ from typing import cast import tweepy from tweepy.client import Response +from backend.blocks._base import ( + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, +) from backend.blocks.twitter._auth import ( TEST_CREDENTIALS, TEST_CREDENTIALS_INPUT, @@ -23,13 +30,6 @@ from backend.blocks.twitter._types import ( UserExpansionsFilter, ) from backend.blocks.twitter.tweepy_exceptions import handle_tweepy_exception -from backend.data.block import ( - Block, - BlockCategory, - BlockOutput, - BlockSchemaInput, - BlockSchemaOutput, -) from backend.data.model import SchemaField diff --git a/autogpt_platform/backend/backend/blocks/twitter/users/mutes.py b/autogpt_platform/backend/backend/blocks/twitter/users/mutes.py index e22aec94dc..31927e2b71 100644 --- a/autogpt_platform/backend/backend/blocks/twitter/users/mutes.py +++ b/autogpt_platform/backend/backend/blocks/twitter/users/mutes.py @@ -3,6 +3,13 @@ from typing import cast import tweepy from tweepy.client import Response +from backend.blocks._base import ( + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, +) from backend.blocks.twitter._auth import ( TEST_CREDENTIALS, TEST_CREDENTIALS_INPUT, @@ -23,13 +30,6 @@ from backend.blocks.twitter._types import ( UserExpansionsFilter, ) from backend.blocks.twitter.tweepy_exceptions import handle_tweepy_exception -from backend.data.block import ( - Block, - BlockCategory, - BlockOutput, - BlockSchemaInput, - BlockSchemaOutput, -) from backend.data.model import SchemaField diff --git a/autogpt_platform/backend/backend/blocks/twitter/users/user_lookup.py b/autogpt_platform/backend/backend/blocks/twitter/users/user_lookup.py index 67c7d14c9b..8d01876955 100644 --- a/autogpt_platform/backend/backend/blocks/twitter/users/user_lookup.py +++ b/autogpt_platform/backend/backend/blocks/twitter/users/user_lookup.py @@ -4,6 +4,7 @@ import tweepy from pydantic import BaseModel from tweepy.client import Response +from backend.blocks._base import Block, BlockCategory, BlockOutput, BlockSchemaOutput from backend.blocks.twitter._auth import ( TEST_CREDENTIALS, TEST_CREDENTIALS_INPUT, @@ -24,7 +25,6 @@ from backend.blocks.twitter._types import ( UserExpansionsFilter, ) from backend.blocks.twitter.tweepy_exceptions import handle_tweepy_exception -from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchemaOutput from backend.data.model import SchemaField diff --git a/autogpt_platform/backend/backend/blocks/video/add_audio.py b/autogpt_platform/backend/backend/blocks/video/add_audio.py index ebd4ab94f2..f91a82a758 100644 --- a/autogpt_platform/backend/backend/blocks/video/add_audio.py +++ b/autogpt_platform/backend/backend/blocks/video/add_audio.py @@ -3,14 +3,14 @@ from moviepy.audio.io.AudioFileClip import AudioFileClip from moviepy.video.io.VideoFileClip import VideoFileClip -from backend.blocks.video._utils import extract_source_name, strip_chapters_inplace -from backend.data.block import ( +from backend.blocks._base import ( Block, BlockCategory, BlockOutput, BlockSchemaInput, BlockSchemaOutput, ) +from backend.blocks.video._utils import extract_source_name, strip_chapters_inplace from backend.data.execution import ExecutionContext from backend.data.model import SchemaField from backend.util.file import MediaFileType, get_exec_file_path, store_media_file diff --git a/autogpt_platform/backend/backend/blocks/video/clip.py b/autogpt_platform/backend/backend/blocks/video/clip.py index 05deea6530..990a8b2f31 100644 --- a/autogpt_platform/backend/backend/blocks/video/clip.py +++ b/autogpt_platform/backend/backend/blocks/video/clip.py @@ -4,18 +4,18 @@ from typing import Literal from moviepy.video.io.VideoFileClip import VideoFileClip -from backend.blocks.video._utils import ( - extract_source_name, - get_video_codecs, - strip_chapters_inplace, -) -from backend.data.block import ( +from backend.blocks._base import ( Block, BlockCategory, BlockOutput, BlockSchemaInput, BlockSchemaOutput, ) +from backend.blocks.video._utils import ( + extract_source_name, + get_video_codecs, + strip_chapters_inplace, +) from backend.data.execution import ExecutionContext from backend.data.model import SchemaField from backend.util.exceptions import BlockExecutionError diff --git a/autogpt_platform/backend/backend/blocks/video/concat.py b/autogpt_platform/backend/backend/blocks/video/concat.py index b49854fb40..3bf2b5142b 100644 --- a/autogpt_platform/backend/backend/blocks/video/concat.py +++ b/autogpt_platform/backend/backend/blocks/video/concat.py @@ -6,18 +6,18 @@ from moviepy import concatenate_videoclips from moviepy.video.fx import CrossFadeIn, CrossFadeOut, FadeIn, FadeOut from moviepy.video.io.VideoFileClip import VideoFileClip -from backend.blocks.video._utils import ( - extract_source_name, - get_video_codecs, - strip_chapters_inplace, -) -from backend.data.block import ( +from backend.blocks._base import ( Block, BlockCategory, BlockOutput, BlockSchemaInput, BlockSchemaOutput, ) +from backend.blocks.video._utils import ( + extract_source_name, + get_video_codecs, + strip_chapters_inplace, +) from backend.data.execution import ExecutionContext from backend.data.model import SchemaField from backend.util.exceptions import BlockExecutionError diff --git a/autogpt_platform/backend/backend/blocks/video/download.py b/autogpt_platform/backend/backend/blocks/video/download.py index 4046d5df42..c6d2617f73 100644 --- a/autogpt_platform/backend/backend/blocks/video/download.py +++ b/autogpt_platform/backend/backend/blocks/video/download.py @@ -9,7 +9,7 @@ import yt_dlp if typing.TYPE_CHECKING: from yt_dlp import _Params -from backend.data.block import ( +from backend.blocks._base import ( Block, BlockCategory, BlockOutput, diff --git a/autogpt_platform/backend/backend/blocks/video/duration.py b/autogpt_platform/backend/backend/blocks/video/duration.py index 9e05d35b00..ff904ad650 100644 --- a/autogpt_platform/backend/backend/blocks/video/duration.py +++ b/autogpt_platform/backend/backend/blocks/video/duration.py @@ -3,14 +3,14 @@ from moviepy.audio.io.AudioFileClip import AudioFileClip from moviepy.video.io.VideoFileClip import VideoFileClip -from backend.blocks.video._utils import strip_chapters_inplace -from backend.data.block import ( +from backend.blocks._base import ( Block, BlockCategory, BlockOutput, BlockSchemaInput, BlockSchemaOutput, ) +from backend.blocks.video._utils import strip_chapters_inplace from backend.data.execution import ExecutionContext from backend.data.model import SchemaField from backend.util.file import MediaFileType, get_exec_file_path, store_media_file diff --git a/autogpt_platform/backend/backend/blocks/video/loop.py b/autogpt_platform/backend/backend/blocks/video/loop.py index 461610f713..0cb360a5b2 100644 --- a/autogpt_platform/backend/backend/blocks/video/loop.py +++ b/autogpt_platform/backend/backend/blocks/video/loop.py @@ -5,14 +5,14 @@ from typing import Optional from moviepy.video.fx.Loop import Loop from moviepy.video.io.VideoFileClip import VideoFileClip -from backend.blocks.video._utils import extract_source_name, strip_chapters_inplace -from backend.data.block import ( +from backend.blocks._base import ( Block, BlockCategory, BlockOutput, BlockSchemaInput, BlockSchemaOutput, ) +from backend.blocks.video._utils import extract_source_name, strip_chapters_inplace from backend.data.execution import ExecutionContext from backend.data.model import SchemaField from backend.util.file import MediaFileType, get_exec_file_path, store_media_file diff --git a/autogpt_platform/backend/backend/blocks/video/narration.py b/autogpt_platform/backend/backend/blocks/video/narration.py index adf41753c8..39b9c481b0 100644 --- a/autogpt_platform/backend/backend/blocks/video/narration.py +++ b/autogpt_platform/backend/backend/blocks/video/narration.py @@ -8,6 +8,13 @@ from moviepy import CompositeAudioClip from moviepy.audio.io.AudioFileClip import AudioFileClip from moviepy.video.io.VideoFileClip import VideoFileClip +from backend.blocks._base import ( + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, +) from backend.blocks.elevenlabs._auth import ( TEST_CREDENTIALS, TEST_CREDENTIALS_INPUT, @@ -19,13 +26,6 @@ from backend.blocks.video._utils import ( get_video_codecs, strip_chapters_inplace, ) -from backend.data.block import ( - Block, - BlockCategory, - BlockOutput, - BlockSchemaInput, - BlockSchemaOutput, -) from backend.data.execution import ExecutionContext from backend.data.model import CredentialsField, SchemaField from backend.util.exceptions import BlockExecutionError diff --git a/autogpt_platform/backend/backend/blocks/video/text_overlay.py b/autogpt_platform/backend/backend/blocks/video/text_overlay.py index cb7cfe0420..86dd30318c 100644 --- a/autogpt_platform/backend/backend/blocks/video/text_overlay.py +++ b/autogpt_platform/backend/backend/blocks/video/text_overlay.py @@ -5,18 +5,18 @@ from typing import Literal from moviepy import CompositeVideoClip, TextClip from moviepy.video.io.VideoFileClip import VideoFileClip -from backend.blocks.video._utils import ( - extract_source_name, - get_video_codecs, - strip_chapters_inplace, -) -from backend.data.block import ( +from backend.blocks._base import ( Block, BlockCategory, BlockOutput, BlockSchemaInput, BlockSchemaOutput, ) +from backend.blocks.video._utils import ( + extract_source_name, + get_video_codecs, + strip_chapters_inplace, +) from backend.data.execution import ExecutionContext from backend.data.model import SchemaField from backend.util.exceptions import BlockExecutionError diff --git a/autogpt_platform/backend/backend/blocks/xml_parser.py b/autogpt_platform/backend/backend/blocks/xml_parser.py index 223f8ea367..a1274fa562 100644 --- a/autogpt_platform/backend/backend/blocks/xml_parser.py +++ b/autogpt_platform/backend/backend/blocks/xml_parser.py @@ -1,7 +1,7 @@ from gravitasml.parser import Parser from gravitasml.token import Token, tokenize -from backend.data.block import Block, BlockOutput, BlockSchemaInput, BlockSchemaOutput +from backend.blocks._base import Block, BlockOutput, BlockSchemaInput, BlockSchemaOutput from backend.data.model import SchemaField diff --git a/autogpt_platform/backend/backend/blocks/youtube.py b/autogpt_platform/backend/backend/blocks/youtube.py index 6d81a86b4c..6ce705e4f5 100644 --- a/autogpt_platform/backend/backend/blocks/youtube.py +++ b/autogpt_platform/backend/backend/blocks/youtube.py @@ -9,7 +9,7 @@ from youtube_transcript_api._transcripts import FetchedTranscript from youtube_transcript_api.formatters import TextFormatter from youtube_transcript_api.proxies import WebshareProxyConfig -from backend.data.block import ( +from backend.blocks._base import ( Block, BlockCategory, BlockOutput, diff --git a/autogpt_platform/backend/backend/blocks/zerobounce/validate_emails.py b/autogpt_platform/backend/backend/blocks/zerobounce/validate_emails.py index fa5283f324..6a461b4aa8 100644 --- a/autogpt_platform/backend/backend/blocks/zerobounce/validate_emails.py +++ b/autogpt_platform/backend/backend/blocks/zerobounce/validate_emails.py @@ -7,6 +7,13 @@ from zerobouncesdk.zb_validate_response import ( ZBValidateSubStatus, ) +from backend.blocks._base import ( + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, +) from backend.blocks.zerobounce._api import ZeroBounceClient from backend.blocks.zerobounce._auth import ( TEST_CREDENTIALS, @@ -14,13 +21,6 @@ from backend.blocks.zerobounce._auth import ( ZeroBounceCredentials, ZeroBounceCredentialsInput, ) -from backend.data.block import ( - Block, - BlockCategory, - BlockOutput, - BlockSchemaInput, - BlockSchemaOutput, -) from backend.data.model import CredentialsField, SchemaField diff --git a/autogpt_platform/backend/backend/data/__init__.py b/autogpt_platform/backend/backend/data/__init__.py index c98667e362..8b13789179 100644 --- a/autogpt_platform/backend/backend/data/__init__.py +++ b/autogpt_platform/backend/backend/data/__init__.py @@ -1,8 +1 @@ -from backend.api.features.library.model import LibraryAgentPreset -from .graph import NodeModel -from .integrations import Webhook # noqa: F401 - -# Resolve Webhook forward references -NodeModel.model_rebuild() -LibraryAgentPreset.model_rebuild() diff --git a/autogpt_platform/backend/backend/data/block.py b/autogpt_platform/backend/backend/data/block.py index f67134ceb3..a958011bc0 100644 --- a/autogpt_platform/backend/backend/data/block.py +++ b/autogpt_platform/backend/backend/data/block.py @@ -1,887 +1,32 @@ -import inspect import logging -import os -from abc import ABC, abstractmethod -from collections.abc import AsyncGenerator as AsyncGen -from enum import Enum -from typing import ( - TYPE_CHECKING, - Any, - Callable, - ClassVar, - Generic, - Optional, - Sequence, - Type, - TypeAlias, - TypeVar, - cast, - get_origin, -) +from typing import TYPE_CHECKING, Any, AsyncGenerator -import jsonref -import jsonschema from prisma.models import AgentBlock from prisma.types import AgentBlockCreateInput -from pydantic import BaseModel -from backend.data.model import NodeExecutionStats -from backend.integrations.providers import ProviderName from backend.util import json -from backend.util.cache import cached -from backend.util.exceptions import ( - BlockError, - BlockExecutionError, - BlockInputError, - BlockOutputError, - BlockUnknownError, -) -from backend.util.settings import Config -from .model import ( - ContributorDetails, - Credentials, - CredentialsFieldInfo, - CredentialsMetaInput, - SchemaField, - is_credentials_field_name, -) +if TYPE_CHECKING: + from backend.blocks._base import AnyBlockSchema logger = logging.getLogger(__name__) -if TYPE_CHECKING: - from backend.data.execution import ExecutionContext - from .graph import Link - -app_config = Config() - -BlockInput = dict[str, Any] # Input: 1 input pin consumes 1 data. +BlockInput = dict[str, Any] # Input: 1 input pin <- 1 data. BlockOutputEntry = tuple[str, Any] # Output data should be a tuple of (name, value). -BlockOutput = AsyncGen[BlockOutputEntry, None] # Output: 1 output pin produces n data. -BlockTestOutput = BlockOutputEntry | tuple[str, Callable[[Any], bool]] +BlockOutput = AsyncGenerator[BlockOutputEntry, None] # Output: 1 output pin -> N data. CompletedBlockOutput = dict[str, list[Any]] # Completed stream, collected as a dict. -class BlockType(Enum): - STANDARD = "Standard" - INPUT = "Input" - OUTPUT = "Output" - NOTE = "Note" - WEBHOOK = "Webhook" - WEBHOOK_MANUAL = "Webhook (manual)" - AGENT = "Agent" - AI = "AI" - AYRSHARE = "Ayrshare" - HUMAN_IN_THE_LOOP = "Human In The Loop" - - -class BlockCategory(Enum): - AI = "Block that leverages AI to perform a task." - SOCIAL = "Block that interacts with social media platforms." - TEXT = "Block that processes text data." - SEARCH = "Block that searches or extracts information from the internet." - BASIC = "Block that performs basic operations." - INPUT = "Block that interacts with input of the graph." - OUTPUT = "Block that interacts with output of the graph." - LOGIC = "Programming logic to control the flow of your agent" - COMMUNICATION = "Block that interacts with communication platforms." - DEVELOPER_TOOLS = "Developer tools such as GitHub blocks." - DATA = "Block that interacts with structured data." - HARDWARE = "Block that interacts with hardware." - AGENT = "Block that interacts with other agents." - CRM = "Block that interacts with CRM services." - SAFETY = ( - "Block that provides AI safety mechanisms such as detecting harmful content" - ) - PRODUCTIVITY = "Block that helps with productivity" - ISSUE_TRACKING = "Block that helps with issue tracking" - MULTIMEDIA = "Block that interacts with multimedia content" - MARKETING = "Block that helps with marketing" - - def dict(self) -> dict[str, str]: - return {"category": self.name, "description": self.value} - - -class BlockCostType(str, Enum): - RUN = "run" # cost X credits per run - BYTE = "byte" # cost X credits per byte - SECOND = "second" # cost X credits per second - - -class BlockCost(BaseModel): - cost_amount: int - cost_filter: BlockInput - cost_type: BlockCostType - - def __init__( - self, - cost_amount: int, - cost_type: BlockCostType = BlockCostType.RUN, - cost_filter: Optional[BlockInput] = None, - **data: Any, - ) -> None: - super().__init__( - cost_amount=cost_amount, - cost_filter=cost_filter or {}, - cost_type=cost_type, - **data, - ) - - -class BlockInfo(BaseModel): - id: str - name: str - inputSchema: dict[str, Any] - outputSchema: dict[str, Any] - costs: list[BlockCost] - description: str - categories: list[dict[str, str]] - contributors: list[dict[str, Any]] - staticOutput: bool - uiType: str - - -class BlockSchema(BaseModel): - cached_jsonschema: ClassVar[dict[str, Any]] - - @classmethod - def jsonschema(cls) -> dict[str, Any]: - if cls.cached_jsonschema: - return cls.cached_jsonschema - - model = jsonref.replace_refs(cls.model_json_schema(), merge_props=True) - - def ref_to_dict(obj): - if isinstance(obj, dict): - # OpenAPI <3.1 does not support sibling fields that has a $ref key - # So sometimes, the schema has an "allOf"/"anyOf"/"oneOf" with 1 item. - keys = {"allOf", "anyOf", "oneOf"} - one_key = next((k for k in keys if k in obj and len(obj[k]) == 1), None) - if one_key: - obj.update(obj[one_key][0]) - - return { - key: ref_to_dict(value) - for key, value in obj.items() - if not key.startswith("$") and key != one_key - } - elif isinstance(obj, list): - return [ref_to_dict(item) for item in obj] - - return obj - - cls.cached_jsonschema = cast(dict[str, Any], ref_to_dict(model)) - - return cls.cached_jsonschema - - @classmethod - def validate_data(cls, data: BlockInput) -> str | None: - return json.validate_with_jsonschema( - schema=cls.jsonschema(), - data={k: v for k, v in data.items() if v is not None}, - ) - - @classmethod - def get_mismatch_error(cls, data: BlockInput) -> str | None: - return cls.validate_data(data) - - @classmethod - def get_field_schema(cls, field_name: str) -> dict[str, Any]: - model_schema = cls.jsonschema().get("properties", {}) - if not model_schema: - raise ValueError(f"Invalid model schema {cls}") - - property_schema = model_schema.get(field_name) - if not property_schema: - raise ValueError(f"Invalid property name {field_name}") - - return property_schema - - @classmethod - def validate_field(cls, field_name: str, data: BlockInput) -> str | None: - """ - Validate the data against a specific property (one of the input/output name). - Returns the validation error message if the data does not match the schema. - """ - try: - property_schema = cls.get_field_schema(field_name) - jsonschema.validate(json.to_dict(data), property_schema) - return None - except jsonschema.ValidationError as e: - return str(e) - - @classmethod - def get_fields(cls) -> set[str]: - return set(cls.model_fields.keys()) - - @classmethod - def get_required_fields(cls) -> set[str]: - return { - field - for field, field_info in cls.model_fields.items() - if field_info.is_required() - } - - @classmethod - def __pydantic_init_subclass__(cls, **kwargs): - """Validates the schema definition. Rules: - - Fields with annotation `CredentialsMetaInput` MUST be - named `credentials` or `*_credentials` - - Fields named `credentials` or `*_credentials` MUST be - of type `CredentialsMetaInput` - """ - super().__pydantic_init_subclass__(**kwargs) - - # Reset cached JSON schema to prevent inheriting it from parent class - cls.cached_jsonschema = {} - - credentials_fields = cls.get_credentials_fields() - - for field_name in cls.get_fields(): - if is_credentials_field_name(field_name): - if field_name not in credentials_fields: - raise TypeError( - f"Credentials field '{field_name}' on {cls.__qualname__} " - f"is not of type {CredentialsMetaInput.__name__}" - ) - - CredentialsMetaInput.validate_credentials_field_schema( - cls.get_field_schema(field_name), field_name - ) - - elif field_name in credentials_fields: - raise KeyError( - f"Credentials field '{field_name}' on {cls.__qualname__} " - "has invalid name: must be 'credentials' or *_credentials" - ) - - @classmethod - def get_credentials_fields(cls) -> dict[str, type[CredentialsMetaInput]]: - return { - field_name: info.annotation - for field_name, info in cls.model_fields.items() - if ( - inspect.isclass(info.annotation) - and issubclass( - get_origin(info.annotation) or info.annotation, - CredentialsMetaInput, - ) - ) - } - - @classmethod - def get_auto_credentials_fields(cls) -> dict[str, dict[str, Any]]: - """ - Get fields that have auto_credentials metadata (e.g., GoogleDriveFileInput). - - Returns a dict mapping kwarg_name -> {field_name, auto_credentials_config} - - Raises: - ValueError: If multiple fields have the same kwarg_name, as this would - cause silent overwriting and only the last field would be processed. - """ - result: dict[str, dict[str, Any]] = {} - schema = cls.jsonschema() - properties = schema.get("properties", {}) - - for field_name, field_schema in properties.items(): - auto_creds = field_schema.get("auto_credentials") - if auto_creds: - kwarg_name = auto_creds.get("kwarg_name", "credentials") - if kwarg_name in result: - raise ValueError( - f"Duplicate auto_credentials kwarg_name '{kwarg_name}' " - f"in fields '{result[kwarg_name]['field_name']}' and " - f"'{field_name}' on {cls.__qualname__}" - ) - result[kwarg_name] = { - "field_name": field_name, - "config": auto_creds, - } - return result - - @classmethod - def get_credentials_fields_info(cls) -> dict[str, CredentialsFieldInfo]: - result = {} - - # Regular credentials fields - for field_name in cls.get_credentials_fields().keys(): - result[field_name] = CredentialsFieldInfo.model_validate( - cls.get_field_schema(field_name), by_alias=True - ) - - # Auto-generated credentials fields (from GoogleDriveFileInput etc.) - for kwarg_name, info in cls.get_auto_credentials_fields().items(): - config = info["config"] - # Build a schema-like dict that CredentialsFieldInfo can parse - auto_schema = { - "credentials_provider": [config.get("provider", "google")], - "credentials_types": [config.get("type", "oauth2")], - "credentials_scopes": config.get("scopes"), - } - result[kwarg_name] = CredentialsFieldInfo.model_validate( - auto_schema, by_alias=True - ) - - return result - - @classmethod - def get_input_defaults(cls, data: BlockInput) -> BlockInput: - return data # Return as is, by default. - - @classmethod - def get_missing_links(cls, data: BlockInput, links: list["Link"]) -> set[str]: - input_fields_from_nodes = {link.sink_name for link in links} - return input_fields_from_nodes - set(data) - - @classmethod - def get_missing_input(cls, data: BlockInput) -> set[str]: - return cls.get_required_fields() - set(data) - - -class BlockSchemaInput(BlockSchema): - """ - Base schema class for block inputs. - All block input schemas should extend this class for consistency. - """ - - pass - - -class BlockSchemaOutput(BlockSchema): - """ - Base schema class for block outputs that includes a standard error field. - All block output schemas should extend this class to ensure consistent error handling. - """ - - error: str = SchemaField( - description="Error message if the operation failed", default="" - ) - - -BlockSchemaInputType = TypeVar("BlockSchemaInputType", bound=BlockSchemaInput) -BlockSchemaOutputType = TypeVar("BlockSchemaOutputType", bound=BlockSchemaOutput) - - -class EmptyInputSchema(BlockSchemaInput): - pass - - -class EmptyOutputSchema(BlockSchemaOutput): - pass - - -# For backward compatibility - will be deprecated -EmptySchema = EmptyOutputSchema - - -# --8<-- [start:BlockWebhookConfig] -class BlockManualWebhookConfig(BaseModel): - """ - Configuration model for webhook-triggered blocks on which - the user has to manually set up the webhook at the provider. - """ - - provider: ProviderName - """The service provider that the webhook connects to""" - - webhook_type: str - """ - Identifier for the webhook type. E.g. GitHub has repo and organization level hooks. - - Only for use in the corresponding `WebhooksManager`. - """ - - event_filter_input: str = "" - """ - Name of the block's event filter input. - Leave empty if the corresponding webhook doesn't have distinct event/payload types. - """ - - event_format: str = "{event}" - """ - Template string for the event(s) that a block instance subscribes to. - Applied individually to each event selected in the event filter input. - - Example: `"pull_request.{event}"` -> `"pull_request.opened"` - """ - - -class BlockWebhookConfig(BlockManualWebhookConfig): - """ - Configuration model for webhook-triggered blocks for which - the webhook can be automatically set up through the provider's API. - """ - - resource_format: str - """ - Template string for the resource that a block instance subscribes to. - Fields will be filled from the block's inputs (except `payload`). - - Example: `f"{repo}/pull_requests"` (note: not how it's actually implemented) - - Only for use in the corresponding `WebhooksManager`. - """ - # --8<-- [end:BlockWebhookConfig] - - -class Block(ABC, Generic[BlockSchemaInputType, BlockSchemaOutputType]): - def __init__( - self, - id: str = "", - description: str = "", - contributors: list[ContributorDetails] = [], - categories: set[BlockCategory] | None = None, - input_schema: Type[BlockSchemaInputType] = EmptyInputSchema, - output_schema: Type[BlockSchemaOutputType] = EmptyOutputSchema, - test_input: BlockInput | list[BlockInput] | None = None, - test_output: BlockTestOutput | list[BlockTestOutput] | None = None, - test_mock: dict[str, Any] | None = None, - test_credentials: Optional[Credentials | dict[str, Credentials]] = None, - disabled: bool = False, - static_output: bool = False, - block_type: BlockType = BlockType.STANDARD, - webhook_config: Optional[BlockWebhookConfig | BlockManualWebhookConfig] = None, - is_sensitive_action: bool = False, - ): - """ - Initialize the block with the given schema. - - Args: - id: The unique identifier for the block, this value will be persisted in the - DB. So it should be a unique and constant across the application run. - Use the UUID format for the ID. - description: The description of the block, explaining what the block does. - contributors: The list of contributors who contributed to the block. - input_schema: The schema, defined as a Pydantic model, for the input data. - output_schema: The schema, defined as a Pydantic model, for the output data. - test_input: The list or single sample input data for the block, for testing. - test_output: The list or single expected output if the test_input is run. - test_mock: function names on the block implementation to mock on test run. - disabled: If the block is disabled, it will not be available for execution. - static_output: Whether the output links of the block are static by default. - """ - self.id = id - self.input_schema = input_schema - self.output_schema = output_schema - self.test_input = test_input - self.test_output = test_output - self.test_mock = test_mock - self.test_credentials = test_credentials - self.description = description - self.categories = categories or set() - self.contributors = contributors or set() - self.disabled = disabled - self.static_output = static_output - self.block_type = block_type - self.webhook_config = webhook_config - self.is_sensitive_action = is_sensitive_action - self.execution_stats: NodeExecutionStats = NodeExecutionStats() - - if self.webhook_config: - if isinstance(self.webhook_config, BlockWebhookConfig): - # Enforce presence of credentials field on auto-setup webhook blocks - if not (cred_fields := self.input_schema.get_credentials_fields()): - raise TypeError( - "credentials field is required on auto-setup webhook blocks" - ) - # Disallow multiple credentials inputs on webhook blocks - elif len(cred_fields) > 1: - raise ValueError( - "Multiple credentials inputs not supported on webhook blocks" - ) - - self.block_type = BlockType.WEBHOOK - else: - self.block_type = BlockType.WEBHOOK_MANUAL - - # Enforce shape of webhook event filter, if present - if self.webhook_config.event_filter_input: - event_filter_field = self.input_schema.model_fields[ - self.webhook_config.event_filter_input - ] - if not ( - isinstance(event_filter_field.annotation, type) - and issubclass(event_filter_field.annotation, BaseModel) - and all( - field.annotation is bool - for field in event_filter_field.annotation.model_fields.values() - ) - ): - raise NotImplementedError( - f"{self.name} has an invalid webhook event selector: " - "field must be a BaseModel and all its fields must be boolean" - ) - - # Enforce presence of 'payload' input - if "payload" not in self.input_schema.model_fields: - raise TypeError( - f"{self.name} is webhook-triggered but has no 'payload' input" - ) - - # Disable webhook-triggered block if webhook functionality not available - if not app_config.platform_base_url: - self.disabled = True - - @classmethod - def create(cls: Type["Block"]) -> "Block": - return cls() - - @abstractmethod - async def run(self, input_data: BlockSchemaInputType, **kwargs) -> BlockOutput: - """ - Run the block with the given input data. - Args: - input_data: The input data with the structure of input_schema. - - Kwargs: Currently 14/02/2025 these include - graph_id: The ID of the graph. - node_id: The ID of the node. - graph_exec_id: The ID of the graph execution. - node_exec_id: The ID of the node execution. - user_id: The ID of the user. - - Returns: - A Generator that yields (output_name, output_data). - output_name: One of the output name defined in Block's output_schema. - output_data: The data for the output_name, matching the defined schema. - """ - # --- satisfy the type checker, never executed ------------- - if False: # noqa: SIM115 - yield "name", "value" # pyright: ignore[reportMissingYield] - raise NotImplementedError(f"{self.name} does not implement the run method.") - - async def run_once( - self, input_data: BlockSchemaInputType, output: str, **kwargs - ) -> Any: - async for item in self.run(input_data, **kwargs): - name, data = item - if name == output: - return data - raise ValueError(f"{self.name} did not produce any output for {output}") - - def merge_stats(self, stats: NodeExecutionStats) -> NodeExecutionStats: - self.execution_stats += stats - return self.execution_stats - - @property - def name(self): - return self.__class__.__name__ - - def to_dict(self): - return { - "id": self.id, - "name": self.name, - "inputSchema": self.input_schema.jsonschema(), - "outputSchema": self.output_schema.jsonschema(), - "description": self.description, - "categories": [category.dict() for category in self.categories], - "contributors": [ - contributor.model_dump() for contributor in self.contributors - ], - "staticOutput": self.static_output, - "uiType": self.block_type.value, - } - - def get_info(self) -> BlockInfo: - from backend.data.credit import get_block_cost - - return BlockInfo( - id=self.id, - name=self.name, - inputSchema=self.input_schema.jsonschema(), - outputSchema=self.output_schema.jsonschema(), - costs=get_block_cost(self), - description=self.description, - categories=[category.dict() for category in self.categories], - contributors=[ - contributor.model_dump() for contributor in self.contributors - ], - staticOutput=self.static_output, - uiType=self.block_type.value, - ) - - async def execute(self, input_data: BlockInput, **kwargs) -> BlockOutput: - try: - async for output_name, output_data in self._execute(input_data, **kwargs): - yield output_name, output_data - except Exception as ex: - if isinstance(ex, BlockError): - raise ex - else: - raise ( - BlockExecutionError - if isinstance(ex, ValueError) - else BlockUnknownError - )( - message=str(ex), - block_name=self.name, - block_id=self.id, - ) from ex - - async def is_block_exec_need_review( - self, - input_data: BlockInput, - *, - user_id: str, - node_id: str, - node_exec_id: str, - graph_exec_id: str, - graph_id: str, - graph_version: int, - execution_context: "ExecutionContext", - **kwargs, - ) -> tuple[bool, BlockInput]: - """ - Check if this block execution needs human review and handle the review process. - - Returns: - Tuple of (should_pause, input_data_to_use) - - should_pause: True if execution should be paused for review - - input_data_to_use: The input data to use (may be modified by reviewer) - """ - if not ( - self.is_sensitive_action and execution_context.sensitive_action_safe_mode - ): - return False, input_data - - from backend.blocks.helpers.review import HITLReviewHelper - - # Handle the review request and get decision - decision = await HITLReviewHelper.handle_review_decision( - input_data=input_data, - user_id=user_id, - node_id=node_id, - node_exec_id=node_exec_id, - graph_exec_id=graph_exec_id, - graph_id=graph_id, - graph_version=graph_version, - block_name=self.name, - editable=True, - ) - - if decision is None: - # We're awaiting review - pause execution - return True, input_data - - if not decision.should_proceed: - # Review was rejected, raise an error to stop execution - raise BlockExecutionError( - message=f"Block execution rejected by reviewer: {decision.message}", - block_name=self.name, - block_id=self.id, - ) - - # Review was approved - use the potentially modified data - # ReviewResult.data must be a dict for block inputs - reviewed_data = decision.review_result.data - if not isinstance(reviewed_data, dict): - raise BlockExecutionError( - message=f"Review data must be a dict for block input, got {type(reviewed_data).__name__}", - block_name=self.name, - block_id=self.id, - ) - return False, reviewed_data - - async def _execute(self, input_data: BlockInput, **kwargs) -> BlockOutput: - # Check for review requirement only if running within a graph execution context - # Direct block execution (e.g., from chat) skips the review process - has_graph_context = all( - key in kwargs - for key in ( - "node_exec_id", - "graph_exec_id", - "graph_id", - "execution_context", - ) - ) - if has_graph_context: - should_pause, input_data = await self.is_block_exec_need_review( - input_data, **kwargs - ) - if should_pause: - return - - # Validate the input data (original or reviewer-modified) once - if error := self.input_schema.validate_data(input_data): - raise BlockInputError( - message=f"Unable to execute block with invalid input data: {error}", - block_name=self.name, - block_id=self.id, - ) - - # Use the validated input data - async for output_name, output_data in self.run( - self.input_schema(**{k: v for k, v in input_data.items() if v is not None}), - **kwargs, - ): - if output_name == "error": - raise BlockExecutionError( - message=output_data, block_name=self.name, block_id=self.id - ) - if self.block_type == BlockType.STANDARD and ( - error := self.output_schema.validate_field(output_name, output_data) - ): - raise BlockOutputError( - message=f"Block produced an invalid output data: {error}", - block_name=self.name, - block_id=self.id, - ) - yield output_name, output_data - - def is_triggered_by_event_type( - self, trigger_config: dict[str, Any], event_type: str - ) -> bool: - if not self.webhook_config: - raise TypeError("This method can't be used on non-trigger blocks") - if not self.webhook_config.event_filter_input: - return True - event_filter = trigger_config.get(self.webhook_config.event_filter_input) - if not event_filter: - raise ValueError("Event filter is not configured on trigger") - return event_type in [ - self.webhook_config.event_format.format(event=k) - for k in event_filter - if event_filter[k] is True - ] - - -# Type alias for any block with standard input/output schemas -AnyBlockSchema: TypeAlias = Block[BlockSchemaInput, BlockSchemaOutput] - - -# ======================= Block Helper Functions ======================= # - - -def get_blocks() -> dict[str, Type[Block]]: - from backend.blocks import load_all_blocks - - return load_all_blocks() - - -def is_block_auth_configured( - block_cls: type[AnyBlockSchema], -) -> bool: - """ - Check if a block has a valid authentication method configured at runtime. - - For example if a block is an OAuth-only block and there env vars are not set, - do not show it in the UI. - - """ - from backend.sdk.registry import AutoRegistry - - # Create an instance to access input_schema - try: - block = block_cls() - except Exception as e: - # If we can't create a block instance, assume it's not OAuth-only - logger.error(f"Error creating block instance for {block_cls.__name__}: {e}") - return True - logger.debug( - f"Checking if block {block_cls.__name__} has a valid provider configured" - ) - - # Get all credential inputs from input schema - credential_inputs = block.input_schema.get_credentials_fields_info() - required_inputs = block.input_schema.get_required_fields() - if not credential_inputs: - logger.debug( - f"Block {block_cls.__name__} has no credential inputs - Treating as valid" - ) - return True - - # Check credential inputs - if len(required_inputs.intersection(credential_inputs.keys())) == 0: - logger.debug( - f"Block {block_cls.__name__} has only optional credential inputs" - " - will work without credentials configured" - ) - - # Check if the credential inputs for this block are correctly configured - for field_name, field_info in credential_inputs.items(): - provider_names = field_info.provider - if not provider_names: - logger.warning( - f"Block {block_cls.__name__} " - f"has credential input '{field_name}' with no provider options" - " - Disabling" - ) - return False - - # If a field has multiple possible providers, each one needs to be usable to - # prevent breaking the UX - for _provider_name in provider_names: - provider_name = _provider_name.value - if provider_name in ProviderName.__members__.values(): - logger.debug( - f"Block {block_cls.__name__} credential input '{field_name}' " - f"provider '{provider_name}' is part of the legacy provider system" - " - Treating as valid" - ) - break - - provider = AutoRegistry.get_provider(provider_name) - if not provider: - logger.warning( - f"Block {block_cls.__name__} credential input '{field_name}' " - f"refers to unknown provider '{provider_name}' - Disabling" - ) - return False - - # Check the provider's supported auth types - if field_info.supported_types != provider.supported_auth_types: - logger.warning( - f"Block {block_cls.__name__} credential input '{field_name}' " - f"has mismatched supported auth types (field <> Provider): " - f"{field_info.supported_types} != {provider.supported_auth_types}" - ) - - if not (supported_auth_types := provider.supported_auth_types): - # No auth methods are been configured for this provider - logger.warning( - f"Block {block_cls.__name__} credential input '{field_name}' " - f"provider '{provider_name}' " - "has no authentication methods configured - Disabling" - ) - return False - - # Check if provider supports OAuth - if "oauth2" in supported_auth_types: - # Check if OAuth environment variables are set - if (oauth_config := provider.oauth_config) and bool( - os.getenv(oauth_config.client_id_env_var) - and os.getenv(oauth_config.client_secret_env_var) - ): - logger.debug( - f"Block {block_cls.__name__} credential input '{field_name}' " - f"provider '{provider_name}' is configured for OAuth" - ) - else: - logger.error( - f"Block {block_cls.__name__} credential input '{field_name}' " - f"provider '{provider_name}' " - "is missing OAuth client ID or secret - Disabling" - ) - return False - - logger.debug( - f"Block {block_cls.__name__} credential input '{field_name}' is valid; " - f"supported credential types: {', '.join(field_info.supported_types)}" - ) - - return True - - async def initialize_blocks() -> None: + from backend.blocks import get_blocks from backend.sdk.cost_integration import sync_all_provider_costs from backend.util.retry import func_retry sync_all_provider_costs() @func_retry - async def sync_block_to_db(block: Block) -> None: + async def sync_block_to_db(block: "AnyBlockSchema") -> None: existing_block = await AgentBlock.prisma().find_first( where={"OR": [{"id": block.id}, {"name": block.name}]} ) @@ -932,36 +77,3 @@ async def initialize_blocks() -> None: f"Failed to sync {len(failed_blocks)} block(s) to database: " f"{', '.join(failed_blocks)}. These blocks are still available in memory." ) - - -# Note on the return type annotation: https://github.com/microsoft/pyright/issues/10281 -def get_block(block_id: str) -> AnyBlockSchema | None: - cls = get_blocks().get(block_id) - return cls() if cls else None - - -@cached(ttl_seconds=3600) -def get_webhook_block_ids() -> Sequence[str]: - return [ - id - for id, B in get_blocks().items() - if B().block_type in (BlockType.WEBHOOK, BlockType.WEBHOOK_MANUAL) - ] - - -@cached(ttl_seconds=3600) -def get_io_block_ids() -> Sequence[str]: - return [ - id - for id, B in get_blocks().items() - if B().block_type in (BlockType.INPUT, BlockType.OUTPUT) - ] - - -@cached(ttl_seconds=3600) -def get_human_in_the_loop_block_ids() -> Sequence[str]: - return [ - id - for id, B in get_blocks().items() - if B().block_type == BlockType.HUMAN_IN_THE_LOOP - ] diff --git a/autogpt_platform/backend/backend/data/block_cost_config.py b/autogpt_platform/backend/backend/data/block_cost_config.py index ec35afa401..c7fb12deb6 100644 --- a/autogpt_platform/backend/backend/data/block_cost_config.py +++ b/autogpt_platform/backend/backend/data/block_cost_config.py @@ -1,5 +1,6 @@ from typing import Type +from backend.blocks._base import Block, BlockCost, BlockCostType from backend.blocks.ai_image_customizer import AIImageCustomizerBlock, GeminiImageModel from backend.blocks.ai_image_generator_block import AIImageGeneratorBlock, ImageGenModel from backend.blocks.ai_music_generator import AIMusicGeneratorBlock @@ -37,7 +38,6 @@ from backend.blocks.smart_decision_maker import SmartDecisionMakerBlock from backend.blocks.talking_head import CreateTalkingAvatarVideoBlock from backend.blocks.text_to_speech_block import UnrealTextToSpeechBlock from backend.blocks.video.narration import VideoNarrationBlock -from backend.data.block import Block, BlockCost, BlockCostType from backend.integrations.credentials_store import ( aiml_api_credentials, anthropic_credentials, diff --git a/autogpt_platform/backend/backend/data/credit.py b/autogpt_platform/backend/backend/data/credit.py index f3c5365446..04f91d8d61 100644 --- a/autogpt_platform/backend/backend/data/credit.py +++ b/autogpt_platform/backend/backend/data/credit.py @@ -38,7 +38,7 @@ from backend.util.retry import func_retry from backend.util.settings import Settings if TYPE_CHECKING: - from backend.data.block import Block, BlockCost + from backend.blocks._base import Block, BlockCost settings = Settings() stripe.api_key = settings.secrets.stripe_api_key diff --git a/autogpt_platform/backend/backend/data/credit_test.py b/autogpt_platform/backend/backend/data/credit_test.py index 2b10c62882..cb5973c74f 100644 --- a/autogpt_platform/backend/backend/data/credit_test.py +++ b/autogpt_platform/backend/backend/data/credit_test.py @@ -4,8 +4,8 @@ import pytest from prisma.enums import CreditTransactionType from prisma.models import CreditTransaction, UserBalance +from backend.blocks import get_block from backend.blocks.llm import AITextGeneratorBlock -from backend.data.block import get_block from backend.data.credit import BetaUserCredit, UsageTransactionMetadata from backend.data.execution import ExecutionContext, NodeExecutionEntry from backend.data.user import DEFAULT_USER_ID diff --git a/autogpt_platform/backend/backend/data/execution.py b/autogpt_platform/backend/backend/data/execution.py index def3d14fda..2f9258dc55 100644 --- a/autogpt_platform/backend/backend/data/execution.py +++ b/autogpt_platform/backend/backend/data/execution.py @@ -4,7 +4,6 @@ from collections import defaultdict from datetime import datetime, timedelta, timezone from enum import Enum from typing import ( - TYPE_CHECKING, Annotated, Any, AsyncGenerator, @@ -39,6 +38,8 @@ from prisma.types import ( from pydantic import BaseModel, ConfigDict, JsonValue, ValidationError from pydantic.fields import Field +from backend.blocks import get_block, get_io_block_ids, get_webhook_block_ids +from backend.blocks._base import BlockType from backend.util import type as type_utils from backend.util.exceptions import DatabaseError from backend.util.json import SafeJson @@ -47,14 +48,7 @@ from backend.util.retry import func_retry from backend.util.settings import Config from backend.util.truncate import truncate -from .block import ( - BlockInput, - BlockType, - CompletedBlockOutput, - get_block, - get_io_block_ids, - get_webhook_block_ids, -) +from .block import BlockInput, CompletedBlockOutput from .db import BaseDbModel, query_raw_with_schema from .event_bus import AsyncRedisEventBus, RedisEventBus from .includes import ( @@ -63,10 +57,12 @@ from .includes import ( GRAPH_EXECUTION_INCLUDE_WITH_NODES, graph_execution_include, ) -from .model import CredentialsMetaInput, GraphExecutionStats, NodeExecutionStats - -if TYPE_CHECKING: - pass +from .model import ( + CredentialsMetaInput, + GraphExecutionStats, + GraphInput, + NodeExecutionStats, +) T = TypeVar("T") @@ -167,7 +163,7 @@ class GraphExecutionMeta(BaseDbModel): user_id: str graph_id: str graph_version: int - inputs: Optional[BlockInput] # no default -> required in the OpenAPI spec + inputs: Optional[GraphInput] # no default -> required in the OpenAPI spec credential_inputs: Optional[dict[str, CredentialsMetaInput]] nodes_input_masks: Optional[dict[str, BlockInput]] preset_id: Optional[str] @@ -272,7 +268,7 @@ class GraphExecutionMeta(BaseDbModel): user_id=_graph_exec.userId, graph_id=_graph_exec.agentGraphId, graph_version=_graph_exec.agentGraphVersion, - inputs=cast(BlockInput | None, _graph_exec.inputs), + inputs=cast(GraphInput | None, _graph_exec.inputs), credential_inputs=( { name: CredentialsMetaInput.model_validate(cmi) @@ -314,7 +310,7 @@ class GraphExecutionMeta(BaseDbModel): class GraphExecution(GraphExecutionMeta): - inputs: BlockInput # type: ignore - incompatible override is intentional + inputs: GraphInput # type: ignore - incompatible override is intentional outputs: CompletedBlockOutput @staticmethod @@ -447,7 +443,7 @@ class NodeExecutionResult(BaseModel): for name, messages in stats.cleared_inputs.items(): input_data[name] = messages[-1] if messages else "" elif _node_exec.executionData: - input_data = type_utils.convert(_node_exec.executionData, dict[str, Any]) + input_data = type_utils.convert(_node_exec.executionData, BlockInput) else: input_data: BlockInput = defaultdict() for data in _node_exec.Input or []: @@ -867,7 +863,7 @@ async def upsert_execution_output( async def get_execution_outputs_by_node_exec_id( node_exec_id: str, -) -> dict[str, Any]: +) -> CompletedBlockOutput: """ Get all execution outputs for a specific node execution ID. @@ -1498,7 +1494,7 @@ async def get_graph_execution_by_share_token( # The executionData contains the structured input with 'name' and 'value' fields if hasattr(node_exec, "executionData") and node_exec.executionData: exec_data = type_utils.convert( - node_exec.executionData, dict[str, Any] + node_exec.executionData, BlockInput ) if "name" in exec_data: name = exec_data["name"] diff --git a/autogpt_platform/backend/backend/data/graph.py b/autogpt_platform/backend/backend/data/graph.py index 2433a5d270..f39a0144e7 100644 --- a/autogpt_platform/backend/backend/data/graph.py +++ b/autogpt_platform/backend/backend/data/graph.py @@ -23,38 +23,29 @@ from prisma.types import ( from pydantic import BaseModel, BeforeValidator, Field from pydantic.fields import computed_field +from backend.blocks import get_block, get_blocks +from backend.blocks._base import Block, BlockType, EmptySchema from backend.blocks.agent import AgentExecutorBlock from backend.blocks.io import AgentInputBlock, AgentOutputBlock from backend.blocks.llm import LlmModel -from backend.data.db import prisma as db -from backend.data.dynamic_fields import is_tool_pin, sanitize_pin_name -from backend.data.includes import MAX_GRAPH_VERSIONS_FETCH -from backend.data.model import ( - CredentialsFieldInfo, - CredentialsMetaInput, - is_credentials_field_name, -) from backend.integrations.providers import ProviderName from backend.util import type as type_utils from backend.util.exceptions import GraphNotAccessibleError, GraphNotInLibraryError from backend.util.json import SafeJson from backend.util.models import Pagination -from .block import ( - AnyBlockSchema, - Block, - BlockInput, - BlockType, - EmptySchema, - get_block, - get_blocks, -) -from .db import BaseDbModel, query_raw_with_schema, transaction -from .includes import AGENT_GRAPH_INCLUDE, AGENT_NODE_INCLUDE +from .block import BlockInput +from .db import BaseDbModel +from .db import prisma as db +from .db import query_raw_with_schema, transaction +from .dynamic_fields import is_tool_pin, sanitize_pin_name +from .includes import AGENT_GRAPH_INCLUDE, AGENT_NODE_INCLUDE, MAX_GRAPH_VERSIONS_FETCH +from .model import CredentialsFieldInfo, CredentialsMetaInput, is_credentials_field_name if TYPE_CHECKING: + from backend.blocks._base import AnyBlockSchema + from .execution import NodesInputMasks - from .integrations import Webhook logger = logging.getLogger(__name__) @@ -128,7 +119,7 @@ class Node(BaseDbModel): return self.metadata.get("credentials_optional", False) @property - def block(self) -> AnyBlockSchema | "_UnknownBlockBase": + def block(self) -> "AnyBlockSchema | _UnknownBlockBase": """Get the block for this node. Returns UnknownBlock if block is deleted/missing.""" block = get_block(self.block_id) if not block: @@ -145,21 +136,18 @@ class NodeModel(Node): graph_version: int webhook_id: Optional[str] = None - webhook: Optional["Webhook"] = None + # webhook: Optional["Webhook"] = None # deprecated @staticmethod def from_db(node: AgentNode, for_export: bool = False) -> "NodeModel": - from .integrations import Webhook - obj = NodeModel( id=node.id, block_id=node.agentBlockId, - input_default=type_utils.convert(node.constantInput, dict[str, Any]), + input_default=type_utils.convert(node.constantInput, BlockInput), metadata=type_utils.convert(node.metadata, dict[str, Any]), graph_id=node.agentGraphId, graph_version=node.agentGraphVersion, webhook_id=node.webhookId, - webhook=Webhook.from_db(node.Webhook) if node.Webhook else None, ) obj.input_links = [Link.from_db(link) for link in node.Input or []] obj.output_links = [Link.from_db(link) for link in node.Output or []] @@ -192,14 +180,13 @@ class NodeModel(Node): # Remove webhook info stripped_node.webhook_id = None - stripped_node.webhook = None return stripped_node @staticmethod def _filter_secrets_from_node_input( - input_data: dict[str, Any], schema: dict[str, Any] | None - ) -> dict[str, Any]: + input_data: BlockInput, schema: dict[str, Any] | None + ) -> BlockInput: sensitive_keys = ["credentials", "api_key", "password", "token", "secret"] field_schemas = schema.get("properties", {}) if schema else {} result = {} diff --git a/autogpt_platform/backend/backend/data/graph_test.py b/autogpt_platform/backend/backend/data/graph_test.py index 8b7eadb887..442c8ed4be 100644 --- a/autogpt_platform/backend/backend/data/graph_test.py +++ b/autogpt_platform/backend/backend/data/graph_test.py @@ -9,9 +9,9 @@ from pytest_snapshot.plugin import Snapshot import backend.api.features.store.model as store from backend.api.model import CreateGraph +from backend.blocks._base import BlockSchema, BlockSchemaInput from backend.blocks.basic import StoreValueBlock from backend.blocks.io import AgentInputBlock, AgentOutputBlock -from backend.data.block import BlockSchema, BlockSchemaInput from backend.data.graph import Graph, Link, Node from backend.data.model import SchemaField from backend.data.user import DEFAULT_USER_ID @@ -323,7 +323,6 @@ async def test_clean_graph(server: SpinTestServer): # Verify webhook info is removed (if any nodes had it) for node in cleaned_graph.nodes: assert node.webhook_id is None - assert node.webhook is None @pytest.mark.asyncio(loop_scope="session") diff --git a/autogpt_platform/backend/backend/data/integrations.py b/autogpt_platform/backend/backend/data/integrations.py index 5f44f928bd..a6f007ce99 100644 --- a/autogpt_platform/backend/backend/data/integrations.py +++ b/autogpt_platform/backend/backend/data/integrations.py @@ -1,5 +1,5 @@ import logging -from typing import TYPE_CHECKING, AsyncGenerator, Literal, Optional, overload +from typing import AsyncGenerator, Literal, Optional, overload from prisma.models import AgentNode, AgentPreset, IntegrationWebhook from prisma.types import ( @@ -22,9 +22,6 @@ from backend.integrations.webhooks.utils import webhook_ingress_url from backend.util.exceptions import NotFoundError from backend.util.json import SafeJson -if TYPE_CHECKING: - from backend.api.features.library.model import LibraryAgentPreset - from .db import BaseDbModel from .graph import NodeModel @@ -64,9 +61,18 @@ class Webhook(BaseDbModel): ) +# LibraryAgentPreset import must be after Webhook definition to avoid +# broken circular import: +# integrations.py → library/model.py → integrations.py (for Webhook) +from backend.api.features.library.model import LibraryAgentPreset # noqa: E402 + +# Resolve forward refs +LibraryAgentPreset.model_rebuild() + + class WebhookWithRelations(Webhook): triggered_nodes: list[NodeModel] - triggered_presets: list["LibraryAgentPreset"] + triggered_presets: list[LibraryAgentPreset] @staticmethod def from_db(webhook: IntegrationWebhook): @@ -75,11 +81,6 @@ class WebhookWithRelations(Webhook): "AgentNodes and AgentPresets must be included in " "IntegrationWebhook query with relations" ) - # LibraryAgentPreset import is moved to TYPE_CHECKING to avoid circular import: - # integrations.py → library/model.py → integrations.py (for Webhook) - # Runtime import is used in WebhookWithRelations.from_db() method instead - # Import at runtime to avoid circular dependency - from backend.api.features.library.model import LibraryAgentPreset return WebhookWithRelations( **Webhook.from_db(webhook).model_dump(), diff --git a/autogpt_platform/backend/backend/data/model.py b/autogpt_platform/backend/backend/data/model.py index 7bdfef059b..e61f7efbd0 100644 --- a/autogpt_platform/backend/backend/data/model.py +++ b/autogpt_platform/backend/backend/data/model.py @@ -168,6 +168,9 @@ T = TypeVar("T") logger = logging.getLogger(__name__) +GraphInput = dict[str, Any] + + class BlockSecret: def __init__(self, key: Optional[str] = None, value: Optional[str] = None): if value is not None: diff --git a/autogpt_platform/backend/backend/executor/activity_status_generator.py b/autogpt_platform/backend/backend/executor/activity_status_generator.py index 3bc6bcb876..8cc1da8957 100644 --- a/autogpt_platform/backend/backend/executor/activity_status_generator.py +++ b/autogpt_platform/backend/backend/executor/activity_status_generator.py @@ -13,8 +13,8 @@ except ImportError: from pydantic import SecretStr +from backend.blocks import get_block from backend.blocks.llm import AIStructuredResponseGeneratorBlock, LlmModel -from backend.data.block import get_block from backend.data.execution import ExecutionStatus, NodeExecutionResult from backend.data.model import APIKeyCredentials, GraphExecutionStats from backend.util.feature_flag import Flag, is_feature_enabled diff --git a/autogpt_platform/backend/backend/executor/manager.py b/autogpt_platform/backend/backend/executor/manager.py index 7304653811..1f76458947 100644 --- a/autogpt_platform/backend/backend/executor/manager.py +++ b/autogpt_platform/backend/backend/executor/manager.py @@ -16,16 +16,12 @@ from pika.spec import Basic, BasicProperties from prometheus_client import Gauge, start_http_server from redis.asyncio.lock import Lock as AsyncRedisLock +from backend.blocks import get_block +from backend.blocks._base import BlockSchema from backend.blocks.agent import AgentExecutorBlock from backend.blocks.io import AgentOutputBlock from backend.data import redis_client as redis -from backend.data.block import ( - BlockInput, - BlockOutput, - BlockOutputEntry, - BlockSchema, - get_block, -) +from backend.data.block import BlockInput, BlockOutput, BlockOutputEntry from backend.data.credit import UsageTransactionMetadata from backend.data.dynamic_fields import parse_execution_output from backend.data.execution import ( diff --git a/autogpt_platform/backend/backend/executor/scheduler.py b/autogpt_platform/backend/backend/executor/scheduler.py index cbdc441718..94829f9837 100644 --- a/autogpt_platform/backend/backend/executor/scheduler.py +++ b/autogpt_platform/backend/backend/executor/scheduler.py @@ -24,9 +24,8 @@ from dotenv import load_dotenv from pydantic import BaseModel, Field, ValidationError from sqlalchemy import MetaData, create_engine -from backend.data.block import BlockInput from backend.data.execution import GraphExecutionWithNodes -from backend.data.model import CredentialsMetaInput +from backend.data.model import CredentialsMetaInput, GraphInput from backend.executor import utils as execution_utils from backend.monitoring import ( NotificationJobArgs, @@ -387,7 +386,7 @@ class GraphExecutionJobArgs(BaseModel): graph_version: int agent_name: str | None = None cron: str - input_data: BlockInput + input_data: GraphInput input_credentials: dict[str, CredentialsMetaInput] = Field(default_factory=dict) @@ -649,7 +648,7 @@ class Scheduler(AppService): graph_id: str, graph_version: int, cron: str, - input_data: BlockInput, + input_data: GraphInput, input_credentials: dict[str, CredentialsMetaInput], name: Optional[str] = None, user_timezone: str | None = None, diff --git a/autogpt_platform/backend/backend/executor/utils.py b/autogpt_platform/backend/backend/executor/utils.py index d26424aefc..bb5da1e527 100644 --- a/autogpt_platform/backend/backend/executor/utils.py +++ b/autogpt_platform/backend/backend/executor/utils.py @@ -8,23 +8,18 @@ from typing import Mapping, Optional, cast from pydantic import BaseModel, JsonValue, ValidationError +from backend.blocks import get_block +from backend.blocks._base import Block, BlockCostType, BlockType from backend.data import execution as execution_db from backend.data import graph as graph_db from backend.data import human_review as human_review_db from backend.data import onboarding as onboarding_db from backend.data import user as user_db -from backend.data.block import ( - Block, - BlockCostType, - BlockInput, - BlockOutputEntry, - BlockType, - get_block, -) -from backend.data.block_cost_config import BLOCK_COSTS -from backend.data.db import prisma # Import dynamic field utilities from centralized location +from backend.data.block import BlockInput, BlockOutputEntry +from backend.data.block_cost_config import BLOCK_COSTS +from backend.data.db import prisma from backend.data.dynamic_fields import merge_execution_input from backend.data.execution import ( ExecutionContext, @@ -35,7 +30,7 @@ from backend.data.execution import ( NodesInputMasks, ) from backend.data.graph import GraphModel, Node -from backend.data.model import USER_TIMEZONE_NOT_SET, CredentialsMetaInput +from backend.data.model import USER_TIMEZONE_NOT_SET, CredentialsMetaInput, GraphInput from backend.data.rabbitmq import Exchange, ExchangeType, Queue, RabbitMQConfig from backend.util.clients import ( get_async_execution_event_bus, @@ -426,7 +421,7 @@ async def validate_graph_with_credentials( async def _construct_starting_node_execution_input( graph: GraphModel, user_id: str, - graph_inputs: BlockInput, + graph_inputs: GraphInput, nodes_input_masks: Optional[NodesInputMasks] = None, ) -> tuple[list[tuple[str, BlockInput]], set[str]]: """ @@ -438,7 +433,7 @@ async def _construct_starting_node_execution_input( Args: graph (GraphModel): The graph model to execute. user_id (str): The ID of the user executing the graph. - data (BlockInput): The input data for the graph execution. + data (GraphInput): The input data for the graph execution. node_credentials_map: `dict[node_id, dict[input_name, CredentialsMetaInput]]` Returns: @@ -496,7 +491,7 @@ async def _construct_starting_node_execution_input( async def validate_and_construct_node_execution_input( graph_id: str, user_id: str, - graph_inputs: BlockInput, + graph_inputs: GraphInput, graph_version: Optional[int] = None, graph_credentials_inputs: Optional[Mapping[str, CredentialsMetaInput]] = None, nodes_input_masks: Optional[NodesInputMasks] = None, @@ -796,7 +791,7 @@ async def stop_graph_execution( async def add_graph_execution( graph_id: str, user_id: str, - inputs: Optional[BlockInput] = None, + inputs: Optional[GraphInput] = None, preset_id: Optional[str] = None, graph_version: Optional[int] = None, graph_credentials_inputs: Optional[Mapping[str, CredentialsMetaInput]] = None, diff --git a/autogpt_platform/backend/backend/integrations/webhooks/graph_lifecycle_hooks.py b/autogpt_platform/backend/backend/integrations/webhooks/graph_lifecycle_hooks.py index 5fb9198c4d..99eee404b9 100644 --- a/autogpt_platform/backend/backend/integrations/webhooks/graph_lifecycle_hooks.py +++ b/autogpt_platform/backend/backend/integrations/webhooks/graph_lifecycle_hooks.py @@ -2,8 +2,9 @@ import asyncio import logging from typing import TYPE_CHECKING, Optional, cast, overload -from backend.data.block import BlockSchema +from backend.blocks._base import BlockSchema from backend.data.graph import set_node_webhook +from backend.data.integrations import get_webhook from backend.integrations.creds_manager import IntegrationCredentialsManager from . import get_webhook_manager, supports_webhooks @@ -113,31 +114,32 @@ async def on_node_deactivate( webhooks_manager = get_webhook_manager(provider) - if node.webhook_id: - logger.debug(f"Node #{node.id} has webhook_id {node.webhook_id}") - if not node.webhook: - logger.error(f"Node #{node.id} has webhook_id but no webhook object") - raise ValueError("node.webhook not included") + if webhook_id := node.webhook_id: + logger.warning( + f"Node #{node.id} still attached to webhook #{webhook_id} - " + "did migration by `migrate_legacy_triggered_graphs` fail? " + "Triggered nodes are deprecated since Significant-Gravitas/AutoGPT#10418." + ) + webhook = await get_webhook(webhook_id) # Detach webhook from node logger.debug(f"Detaching webhook from node #{node.id}") updated_node = await set_node_webhook(node.id, None) # Prune and deregister the webhook if it is no longer used anywhere - webhook = node.webhook logger.debug( f"Pruning{' and deregistering' if credentials else ''} " - f"webhook #{webhook.id}" + f"webhook #{webhook_id}" ) await webhooks_manager.prune_webhook_if_dangling( - user_id, webhook.id, credentials + user_id, webhook_id, credentials ) if ( cast(BlockSchema, block.input_schema).get_credentials_fields() and not credentials ): logger.warning( - f"Cannot deregister webhook #{webhook.id}: credentials " + f"Cannot deregister webhook #{webhook_id}: credentials " f"#{webhook.credentials_id} not available " f"({webhook.provider.value} webhook ID: {webhook.provider_webhook_id})" ) diff --git a/autogpt_platform/backend/backend/integrations/webhooks/utils.py b/autogpt_platform/backend/backend/integrations/webhooks/utils.py index 79316c4c0e..ffe910a2eb 100644 --- a/autogpt_platform/backend/backend/integrations/webhooks/utils.py +++ b/autogpt_platform/backend/backend/integrations/webhooks/utils.py @@ -9,7 +9,7 @@ from backend.util.settings import Config from . import get_webhook_manager, supports_webhooks if TYPE_CHECKING: - from backend.data.block import AnyBlockSchema + from backend.blocks._base import AnyBlockSchema from backend.data.integrations import Webhook from backend.data.model import Credentials from backend.integrations.providers import ProviderName @@ -42,7 +42,7 @@ async def setup_webhook_for_block( Webhook: The created or found webhook object, if successful. str: A feedback message, if any required inputs are missing. """ - from backend.data.block import BlockWebhookConfig + from backend.blocks._base import BlockWebhookConfig if not (trigger_base_config := trigger_block.webhook_config): raise ValueError(f"Block #{trigger_block.id} does not have a webhook_config") diff --git a/autogpt_platform/backend/backend/monitoring/block_error_monitor.py b/autogpt_platform/backend/backend/monitoring/block_error_monitor.py index ffd2ffc888..07565a37e8 100644 --- a/autogpt_platform/backend/backend/monitoring/block_error_monitor.py +++ b/autogpt_platform/backend/backend/monitoring/block_error_monitor.py @@ -6,7 +6,7 @@ from datetime import datetime, timedelta, timezone from pydantic import BaseModel -from backend.data.block import get_block +from backend.blocks import get_block from backend.data.execution import ExecutionStatus, NodeExecutionResult from backend.util.clients import ( get_database_manager_client, diff --git a/autogpt_platform/backend/backend/sdk/__init__.py b/autogpt_platform/backend/backend/sdk/__init__.py index b3a23dc735..dc7260d08f 100644 --- a/autogpt_platform/backend/backend/sdk/__init__.py +++ b/autogpt_platform/backend/backend/sdk/__init__.py @@ -17,7 +17,7 @@ This module provides: from pydantic import BaseModel, Field, SecretStr # === CORE BLOCK SYSTEM === -from backend.data.block import ( +from backend.blocks._base import ( Block, BlockCategory, BlockManualWebhookConfig, @@ -65,7 +65,7 @@ except ImportError: # Cost System try: - from backend.data.block import BlockCost, BlockCostType + from backend.blocks._base import BlockCost, BlockCostType except ImportError: from backend.data.block_cost_config import BlockCost, BlockCostType diff --git a/autogpt_platform/backend/backend/sdk/builder.py b/autogpt_platform/backend/backend/sdk/builder.py index 09949b256f..28dd4023f0 100644 --- a/autogpt_platform/backend/backend/sdk/builder.py +++ b/autogpt_platform/backend/backend/sdk/builder.py @@ -8,7 +8,7 @@ from typing import Callable, List, Optional, Type from pydantic import SecretStr -from backend.data.block import BlockCost, BlockCostType +from backend.blocks._base import BlockCost, BlockCostType from backend.data.model import ( APIKeyCredentials, Credentials, diff --git a/autogpt_platform/backend/backend/sdk/cost_integration.py b/autogpt_platform/backend/backend/sdk/cost_integration.py index 04c027ffa3..2eec1aece0 100644 --- a/autogpt_platform/backend/backend/sdk/cost_integration.py +++ b/autogpt_platform/backend/backend/sdk/cost_integration.py @@ -8,7 +8,7 @@ BLOCK_COSTS configuration used by the execution system. import logging from typing import List, Type -from backend.data.block import Block, BlockCost +from backend.blocks._base import Block, BlockCost from backend.data.block_cost_config import BLOCK_COSTS from backend.sdk.registry import AutoRegistry diff --git a/autogpt_platform/backend/backend/sdk/provider.py b/autogpt_platform/backend/backend/sdk/provider.py index 98afbf05d5..2933121703 100644 --- a/autogpt_platform/backend/backend/sdk/provider.py +++ b/autogpt_platform/backend/backend/sdk/provider.py @@ -7,7 +7,7 @@ from typing import Any, Callable, List, Optional, Set, Type from pydantic import BaseModel, SecretStr -from backend.data.block import BlockCost +from backend.blocks._base import BlockCost from backend.data.model import ( APIKeyCredentials, Credentials, diff --git a/autogpt_platform/backend/backend/util/test.py b/autogpt_platform/backend/backend/util/test.py index 23d7c24147..279b3142a4 100644 --- a/autogpt_platform/backend/backend/util/test.py +++ b/autogpt_platform/backend/backend/util/test.py @@ -8,8 +8,9 @@ from typing import Sequence, cast from autogpt_libs.auth import get_user_id from backend.api.rest_api import AgentServer +from backend.blocks._base import Block, BlockSchema from backend.data import db -from backend.data.block import Block, BlockSchema, initialize_blocks +from backend.data.block import initialize_blocks from backend.data.execution import ( ExecutionContext, ExecutionStatus, diff --git a/autogpt_platform/backend/scripts/generate_block_docs.py b/autogpt_platform/backend/scripts/generate_block_docs.py index bb60eddb5d..25ad0a3be7 100644 --- a/autogpt_platform/backend/scripts/generate_block_docs.py +++ b/autogpt_platform/backend/scripts/generate_block_docs.py @@ -24,7 +24,10 @@ import sys from collections import defaultdict from dataclasses import dataclass, field from pathlib import Path -from typing import Any +from typing import TYPE_CHECKING, Any, Type + +if TYPE_CHECKING: + from backend.blocks._base import AnyBlockSchema # Add backend to path for imports backend_dir = Path(__file__).parent.parent @@ -242,9 +245,9 @@ def file_path_to_title(file_path: str) -> str: return apply_fixes(name.replace("_", " ").title()) -def extract_block_doc(block_cls: type) -> BlockDoc: +def extract_block_doc(block_cls: Type["AnyBlockSchema"]) -> BlockDoc: """Extract documentation data from a block class.""" - block = block_cls.create() + block = block_cls() # Get source file try: @@ -520,7 +523,7 @@ def generate_overview_table(blocks: list[BlockDoc], block_dir_prefix: str = "") lines.append("") # Group blocks by category - by_category = defaultdict(list) + by_category = defaultdict[str, list[BlockDoc]](list) for block in blocks: primary_cat = block.categories[0] if block.categories else "BASIC" by_category[primary_cat].append(block) diff --git a/autogpt_platform/backend/test/load_store_agents.py b/autogpt_platform/backend/test/load_store_agents.py index b9d8e0478e..dfc5beb453 100644 --- a/autogpt_platform/backend/test/load_store_agents.py +++ b/autogpt_platform/backend/test/load_store_agents.py @@ -49,7 +49,7 @@ async def initialize_blocks(db: Prisma) -> set[str]: Returns a set of block IDs that exist in the database. """ - from backend.data.block import get_blocks + from backend.blocks import get_blocks print(" Initializing agent blocks...") blocks = get_blocks() diff --git a/autogpt_platform/backend/test/sdk/test_sdk_registry.py b/autogpt_platform/backend/test/sdk/test_sdk_registry.py index f82abd57cb..ab384ca955 100644 --- a/autogpt_platform/backend/test/sdk/test_sdk_registry.py +++ b/autogpt_platform/backend/test/sdk/test_sdk_registry.py @@ -377,7 +377,7 @@ class TestProviderBuilder: def test_provider_builder_with_base_cost(self): """Test building a provider with base costs.""" - from backend.data.block import BlockCostType + from backend.blocks._base import BlockCostType provider = ( ProviderBuilder("cost_test") @@ -418,7 +418,7 @@ class TestProviderBuilder: def test_provider_builder_complete_example(self): """Test building a complete provider with all features.""" - from backend.data.block import BlockCostType + from backend.blocks._base import BlockCostType class TestOAuth(BaseOAuthHandler): PROVIDER_NAME = ProviderName.GITHUB diff --git a/autogpt_platform/frontend/src/app/(platform)/build/components/legacy-builder/Flow/Flow.tsx b/autogpt_platform/frontend/src/app/(platform)/build/components/legacy-builder/Flow/Flow.tsx index 67b3cad9af..babe10b912 100644 --- a/autogpt_platform/frontend/src/app/(platform)/build/components/legacy-builder/Flow/Flow.tsx +++ b/autogpt_platform/frontend/src/app/(platform)/build/components/legacy-builder/Flow/Flow.tsx @@ -1137,7 +1137,7 @@ const FlowEditor: React.FC<{ You are building a Trigger Agent Your agent{" "} - {savedAgent?.nodes.some((node) => node.webhook) + {savedAgent?.nodes.some((node) => node.webhook_id) ? "is listening" : "will listen"}{" "} for its trigger and will run when the time is right. diff --git a/autogpt_platform/frontend/src/app/api/openapi.json b/autogpt_platform/frontend/src/app/api/openapi.json index 172419d27e..5d2cb83f7c 100644 --- a/autogpt_platform/frontend/src/app/api/openapi.json +++ b/autogpt_platform/frontend/src/app/api/openapi.json @@ -9498,12 +9498,6 @@ "webhook_id": { "anyOf": [{ "type": "string" }, { "type": "null" }], "title": "Webhook Id" - }, - "webhook": { - "anyOf": [ - { "$ref": "#/components/schemas/Webhook" }, - { "type": "null" } - ] } }, "type": "object", diff --git a/autogpt_platform/frontend/src/lib/autogpt-server-api/types.ts b/autogpt_platform/frontend/src/lib/autogpt-server-api/types.ts index 44fb25dbfc..65625f1cfb 100644 --- a/autogpt_platform/frontend/src/lib/autogpt-server-api/types.ts +++ b/autogpt_platform/frontend/src/lib/autogpt-server-api/types.ts @@ -27,7 +27,7 @@ export type BlockCost = { cost_filter: Record; }; -/* Mirror of backend/data/block.py:Block */ +/* Mirror of backend/blocks/_base.py:Block */ export type Block = { id: string; name: string; @@ -292,7 +292,7 @@ export type NodeCreatable = { export type Node = NodeCreatable & { input_links: Link[]; output_links: Link[]; - webhook?: Webhook; + webhook_id?: string | null; }; /* Mirror of backend/data/graph.py:Link */ diff --git a/docs/platform/new_blocks.md b/docs/platform/new_blocks.md index 114ff8d9a4..c84f864684 100644 --- a/docs/platform/new_blocks.md +++ b/docs/platform/new_blocks.md @@ -20,13 +20,13 @@ Follow these steps to create and test a new block: Every block should contain the following: ```python - from backend.data.block import Block, BlockSchemaInput, BlockSchemaOutput, BlockOutput + from backend.blocks._base import Block, BlockSchemaInput, BlockSchemaOutput, BlockOutput ``` Example for the Wikipedia summary block: ```python - from backend.data.block import Block, BlockSchemaInput, BlockSchemaOutput, BlockOutput + from backend.blocks._base import Block, BlockSchemaInput, BlockSchemaOutput, BlockOutput from backend.utils.get_request import GetRequest import requests @@ -237,7 +237,7 @@ from backend.data.model import ( Credentials, ) -from backend.data.block import Block, BlockOutput, BlockSchemaInput, BlockSchemaOutput +from backend.blocks._base import Block, BlockOutput, BlockSchemaInput, BlockSchemaOutput from backend.data.model import CredentialsField from backend.integrations.providers import ProviderName @@ -496,8 +496,8 @@ To create a webhook-triggered block, follow these additional steps on top of the
BlockWebhookConfig definition - ```python title="backend/data/block.py" - --8<-- "autogpt_platform/backend/backend/data/block.py:BlockWebhookConfig" + ```python title="backend/blocks/_base.py" + --8<-- "autogpt_platform/backend/backend/blocks/_base.py:BlockWebhookConfig" ```
From 695a185fa1118f2948f7388bd70d0114cdaebc30 Mon Sep 17 00:00:00 2001 From: Otto Date: Thu, 12 Feb 2026 12:46:29 +0000 Subject: [PATCH 07/18] fix(frontend): remove fixed min-height from CoPilot message container (#12091) ## Summary Removes the `min-h-screen` class from `ConversationContent` in ChatMessagesContainer, which was causing fixed height layout issues in the CoPilot chat interface. ## Changes - Removed `min-h-screen` from ConversationContent className ## Linear Fixes [SECRT-1944](https://linear.app/autogpt/issue/SECRT-1944)

Greptile Overview

Greptile Summary

Removes the `min-h-screen` (100vh) class from `ConversationContent` that was causing the chat message container to enforce a minimum viewport height. The parent container already handles height constraints with `h-full min-h-0` and flexbox layout, so the fixed minimum height was creating layout conflicts. The component now properly grows within its flex container using `flex-1`.

Confidence Score: 5/5

- This PR is safe to merge with minimal risk - The change removes a single problematic CSS class that was causing fixed height layout issues. The parent container already handles height constraints properly with flexbox, and removing min-h-screen allows the component to size correctly within its flex parent. This is a targeted, low-risk bug fix with no logic changes. - No files require special attention
--- .../components/ChatMessagesContainer/ChatMessagesContainer.tsx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/autogpt_platform/frontend/src/app/(platform)/copilot/components/ChatMessagesContainer/ChatMessagesContainer.tsx b/autogpt_platform/frontend/src/app/(platform)/copilot/components/ChatMessagesContainer/ChatMessagesContainer.tsx index fbe1c03d1d..71ade81a9f 100644 --- a/autogpt_platform/frontend/src/app/(platform)/copilot/components/ChatMessagesContainer/ChatMessagesContainer.tsx +++ b/autogpt_platform/frontend/src/app/(platform)/copilot/components/ChatMessagesContainer/ChatMessagesContainer.tsx @@ -159,7 +159,7 @@ export const ChatMessagesContainer = ({ return ( - + {isLoading && messages.length === 0 && (
From 4f6055f4942f30414206f527e6bc40de43239087 Mon Sep 17 00:00:00 2001 From: Abhimanyu Yadav <122007096+Abhi1992002@users.noreply.github.com> Date: Thu, 12 Feb 2026 18:27:06 +0530 Subject: [PATCH 08/18] refactor(frontend): remove default expiration date from API key credentials form (#12092) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit ### Changes 🏗️ Removed the default expiration date for API keys in the credentials modal. Previously, API keys were set to expire the next day by default, but now the expiration date field starts empty, allowing users to explicitly choose whether they want to set an expiration date. ### Checklist 📋 #### For code changes: - [x] I have clearly listed my changes in the PR description - [x] I have made a test plan - [x] I have tested my changes according to the test plan: - [x] Open the API key credentials modal and verify the expiration date field is empty by default - [x] Test creating an API key with and without an expiration date - [x] Verify both scenarios work correctly

Greptile Overview

Greptile Summary

Removed the default expiration date for API key credentials in the credentials modal. Previously, API keys were automatically set to expire the next day at midnight. Now the expiration date field starts empty, allowing users to explicitly choose whether to set an expiration. - Removed `getDefaultExpirationDate()` helper function that calculated tomorrow's date - Changed default `expiresAt` value from calculated date to empty string - Backend already supports optional expiration (`expires_at?: number`), so no backend changes needed - Form submission correctly handles empty expiration by passing `undefined` to the API

Confidence Score: 5/5

- This PR is safe to merge with minimal risk - The changes are straightforward and well-contained. The refactor removes a helper function and changes a default value. The backend API already supports optional expiration dates, and the form submission logic correctly handles empty values by passing undefined. The change improves UX by not forcing a default expiration date on users. - No files require special attention
--- .../APIKeyCredentialsModal.tsx | 8 ++- .../useAPIKeyCredentialsModal.ts | 51 +++++++++---------- 2 files changed, 31 insertions(+), 28 deletions(-) diff --git a/autogpt_platform/frontend/src/components/contextual/CredentialsInput/components/APIKeyCredentialsModal/APIKeyCredentialsModal.tsx b/autogpt_platform/frontend/src/components/contextual/CredentialsInput/components/APIKeyCredentialsModal/APIKeyCredentialsModal.tsx index 90f6c0ff70..1c455863dd 100644 --- a/autogpt_platform/frontend/src/components/contextual/CredentialsInput/components/APIKeyCredentialsModal/APIKeyCredentialsModal.tsx +++ b/autogpt_platform/frontend/src/components/contextual/CredentialsInput/components/APIKeyCredentialsModal/APIKeyCredentialsModal.tsx @@ -30,6 +30,7 @@ export function APIKeyCredentialsModal({ const { form, isLoading, + isSubmitting, supportsApiKey, providerName, schemaDescription, @@ -138,7 +139,12 @@ export function APIKeyCredentialsModal({ /> )} /> - diff --git a/autogpt_platform/frontend/src/components/contextual/CredentialsInput/components/APIKeyCredentialsModal/useAPIKeyCredentialsModal.ts b/autogpt_platform/frontend/src/components/contextual/CredentialsInput/components/APIKeyCredentialsModal/useAPIKeyCredentialsModal.ts index 72599a2e79..1f3d4c9085 100644 --- a/autogpt_platform/frontend/src/components/contextual/CredentialsInput/components/APIKeyCredentialsModal/useAPIKeyCredentialsModal.ts +++ b/autogpt_platform/frontend/src/components/contextual/CredentialsInput/components/APIKeyCredentialsModal/useAPIKeyCredentialsModal.ts @@ -4,6 +4,7 @@ import { CredentialsMetaInput, } from "@/lib/autogpt-server-api/types"; import { zodResolver } from "@hookform/resolvers/zod"; +import { useState } from "react"; import { useForm, type UseFormReturn } from "react-hook-form"; import { z } from "zod"; @@ -26,6 +27,7 @@ export function useAPIKeyCredentialsModal({ }: Args): { form: UseFormReturn; isLoading: boolean; + isSubmitting: boolean; supportsApiKey: boolean; provider?: string; providerName?: string; @@ -33,6 +35,7 @@ export function useAPIKeyCredentialsModal({ onSubmit: (values: APIKeyFormValues) => Promise; } { const credentials = useCredentials(schema, siblingInputs); + const [isSubmitting, setIsSubmitting] = useState(false); const formSchema = z.object({ apiKey: z.string().min(1, "API Key is required"), @@ -40,48 +43,42 @@ export function useAPIKeyCredentialsModal({ expiresAt: z.string().optional(), }); - function getDefaultExpirationDate(): string { - const tomorrow = new Date(); - tomorrow.setDate(tomorrow.getDate() + 1); - tomorrow.setHours(0, 0, 0, 0); - const year = tomorrow.getFullYear(); - const month = String(tomorrow.getMonth() + 1).padStart(2, "0"); - const day = String(tomorrow.getDate()).padStart(2, "0"); - const hours = String(tomorrow.getHours()).padStart(2, "0"); - const minutes = String(tomorrow.getMinutes()).padStart(2, "0"); - return `${year}-${month}-${day}T${hours}:${minutes}`; - } - const form = useForm({ resolver: zodResolver(formSchema), defaultValues: { apiKey: "", title: "", - expiresAt: getDefaultExpirationDate(), + expiresAt: "", }, }); async function onSubmit(values: APIKeyFormValues) { if (!credentials || credentials.isLoading) return; - const expiresAt = values.expiresAt - ? new Date(values.expiresAt).getTime() / 1000 - : undefined; - const newCredentials = await credentials.createAPIKeyCredentials({ - api_key: values.apiKey, - title: values.title, - expires_at: expiresAt, - }); - onCredentialsCreate({ - provider: credentials.provider, - id: newCredentials.id, - type: "api_key", - title: newCredentials.title, - }); + setIsSubmitting(true); + try { + const expiresAt = values.expiresAt + ? new Date(values.expiresAt).getTime() / 1000 + : undefined; + const newCredentials = await credentials.createAPIKeyCredentials({ + api_key: values.apiKey, + title: values.title, + expires_at: expiresAt, + }); + onCredentialsCreate({ + provider: credentials.provider, + id: newCredentials.id, + type: "api_key", + title: newCredentials.title, + }); + } finally { + setIsSubmitting(false); + } } return { form, isLoading: !credentials || credentials.isLoading, + isSubmitting, supportsApiKey: !!credentials?.supportsApiKey, provider: credentials?.provider, providerName: From b8b6c9de2322cf083e61670ee8625d4abb2d8e19 Mon Sep 17 00:00:00 2001 From: Swifty Date: Thu, 12 Feb 2026 16:38:17 +0100 Subject: [PATCH 09/18] added feature request tooling --- .../api/features/chat/tools/__init__.py | 4 + .../features/chat/tools/feature_requests.py | 369 ++++++++++++++ .../backend/api/features/chat/tools/models.py | 34 ++ .../backend/backend/util/settings.py | 3 + .../backend/test_linear_customers.py | 468 ++++++++++++++++++ .../ChatMessagesContainer.tsx | 18 + .../(platform)/copilot/styleguide/page.tsx | 235 +++++++++ .../tools/FeatureRequests/FeatureRequests.tsx | 240 +++++++++ .../copilot/tools/FeatureRequests/helpers.tsx | 271 ++++++++++ .../frontend/src/app/api/openapi.json | 4 +- 10 files changed, 1645 insertions(+), 1 deletion(-) create mode 100644 autogpt_platform/backend/backend/api/features/chat/tools/feature_requests.py create mode 100644 autogpt_platform/backend/test_linear_customers.py create mode 100644 autogpt_platform/frontend/src/app/(platform)/copilot/tools/FeatureRequests/FeatureRequests.tsx create mode 100644 autogpt_platform/frontend/src/app/(platform)/copilot/tools/FeatureRequests/helpers.tsx diff --git a/autogpt_platform/backend/backend/api/features/chat/tools/__init__.py b/autogpt_platform/backend/backend/api/features/chat/tools/__init__.py index dcbc35ef37..350776081a 100644 --- a/autogpt_platform/backend/backend/api/features/chat/tools/__init__.py +++ b/autogpt_platform/backend/backend/api/features/chat/tools/__init__.py @@ -12,6 +12,7 @@ from .base import BaseTool from .create_agent import CreateAgentTool from .customize_agent import CustomizeAgentTool from .edit_agent import EditAgentTool +from .feature_requests import CreateFeatureRequestTool, SearchFeatureRequestsTool from .find_agent import FindAgentTool from .find_block import FindBlockTool from .find_library_agent import FindLibraryAgentTool @@ -45,6 +46,9 @@ TOOL_REGISTRY: dict[str, BaseTool] = { "view_agent_output": AgentOutputTool(), "search_docs": SearchDocsTool(), "get_doc_page": GetDocPageTool(), + # Feature request tools + "search_feature_requests": SearchFeatureRequestsTool(), + "create_feature_request": CreateFeatureRequestTool(), # Workspace tools for CoPilot file operations "list_workspace_files": ListWorkspaceFilesTool(), "read_workspace_file": ReadWorkspaceFileTool(), diff --git a/autogpt_platform/backend/backend/api/features/chat/tools/feature_requests.py b/autogpt_platform/backend/backend/api/features/chat/tools/feature_requests.py new file mode 100644 index 0000000000..5e06d8b4b2 --- /dev/null +++ b/autogpt_platform/backend/backend/api/features/chat/tools/feature_requests.py @@ -0,0 +1,369 @@ +"""Feature request tools - search and create feature requests via Linear.""" + +import logging +from typing import Any + +from pydantic import SecretStr + +from backend.api.features.chat.model import ChatSession +from backend.api.features.chat.tools.base import BaseTool +from backend.api.features.chat.tools.models import ( + ErrorResponse, + FeatureRequestCreatedResponse, + FeatureRequestInfo, + FeatureRequestSearchResponse, + NoResultsResponse, + ToolResponseBase, +) +from backend.blocks.linear._api import LinearClient +from backend.data.model import APIKeyCredentials +from backend.util.settings import Settings + +logger = logging.getLogger(__name__) + +# Target project and team IDs in our Linear workspace +FEATURE_REQUEST_PROJECT_ID = "13f066f3-f639-4a67-aaa3-31483ebdf8cd" +TEAM_ID = "557fd3d5-087e-43a9-83e3-476c8313ce49" + +MAX_SEARCH_RESULTS = 10 + +# GraphQL queries/mutations +SEARCH_ISSUES_QUERY = """ +query SearchFeatureRequests($term: String!, $filter: IssueFilter, $first: Int) { + searchIssues(term: $term, filter: $filter, first: $first) { + nodes { + id + identifier + title + description + } + } +} +""" + +CUSTOMER_UPSERT_MUTATION = """ +mutation CustomerUpsert($input: CustomerUpsertInput!) { + customerUpsert(input: $input) { + success + customer { + id + name + externalIds + } + } +} +""" + +ISSUE_CREATE_MUTATION = """ +mutation IssueCreate($input: IssueCreateInput!) { + issueCreate(input: $input) { + success + issue { + id + identifier + title + url + } + } +} +""" + +CUSTOMER_NEED_CREATE_MUTATION = """ +mutation CustomerNeedCreate($input: CustomerNeedCreateInput!) { + customerNeedCreate(input: $input) { + success + need { + id + body + customer { + id + name + } + issue { + id + identifier + title + url + } + } + } +} +""" + + +_settings: Settings | None = None + + +def _get_settings() -> Settings: + global _settings + if _settings is None: + _settings = Settings() + return _settings + + +def _get_linear_client() -> LinearClient: + """Create a Linear client using the system API key from settings.""" + api_key = _get_settings().secrets.linear_api_key + if not api_key: + raise RuntimeError("LINEAR_API_KEY secret is not configured") + credentials = APIKeyCredentials( + id="system-linear", + provider="linear", + api_key=SecretStr(api_key), + title="System Linear API Key", + ) + return LinearClient(credentials=credentials) + + +class SearchFeatureRequestsTool(BaseTool): + """Tool for searching existing feature requests in Linear.""" + + @property + def name(self) -> str: + return "search_feature_requests" + + @property + def description(self) -> str: + return ( + "Search existing feature requests to check if a similar request " + "already exists before creating a new one. Returns matching feature " + "requests with their ID, title, and description." + ) + + @property + def parameters(self) -> dict[str, Any]: + return { + "type": "object", + "properties": { + "query": { + "type": "string", + "description": "Search term to find matching feature requests.", + }, + }, + "required": ["query"], + } + + @property + def requires_auth(self) -> bool: + return True + + async def _execute( + self, + user_id: str | None, + session: ChatSession, + **kwargs, + ) -> ToolResponseBase: + query = kwargs.get("query", "").strip() + session_id = session.session_id if session else None + + if not query: + return ErrorResponse( + message="Please provide a search query.", + error="Missing query parameter", + session_id=session_id, + ) + + client = _get_linear_client() + data = await client.query( + SEARCH_ISSUES_QUERY, + { + "term": query, + "filter": { + "project": {"id": {"eq": FEATURE_REQUEST_PROJECT_ID}}, + }, + "first": MAX_SEARCH_RESULTS, + }, + ) + + nodes = data.get("searchIssues", {}).get("nodes", []) + + if not nodes: + return NoResultsResponse( + message=f"No feature requests found matching '{query}'.", + suggestions=[ + "Try different keywords", + "Use broader search terms", + "You can create a new feature request if none exists", + ], + session_id=session_id, + ) + + results = [ + FeatureRequestInfo( + id=node["id"], + identifier=node["identifier"], + title=node["title"], + description=node.get("description"), + ) + for node in nodes + ] + + return FeatureRequestSearchResponse( + message=f"Found {len(results)} feature request(s) matching '{query}'.", + results=results, + count=len(results), + query=query, + session_id=session_id, + ) + + +class CreateFeatureRequestTool(BaseTool): + """Tool for creating feature requests (or adding needs to existing ones).""" + + @property + def name(self) -> str: + return "create_feature_request" + + @property + def description(self) -> str: + return ( + "Create a new feature request or add a customer need to an existing one. " + "Always search first with search_feature_requests to avoid duplicates. " + "If a matching request exists, pass its ID as existing_issue_id to add " + "the user's need to it instead of creating a duplicate." + ) + + @property + def parameters(self) -> dict[str, Any]: + return { + "type": "object", + "properties": { + "title": { + "type": "string", + "description": "Title for the feature request.", + }, + "description": { + "type": "string", + "description": "Detailed description of what the user wants and why.", + }, + "existing_issue_id": { + "type": "string", + "description": ( + "If adding a need to an existing feature request, " + "provide its Linear issue ID (from search results). " + "Omit to create a new feature request." + ), + }, + }, + "required": ["title", "description"], + } + + @property + def requires_auth(self) -> bool: + return True + + async def _find_or_create_customer( + self, client: LinearClient, user_id: str + ) -> dict: + """Find existing customer by user_id or create a new one via upsert.""" + data = await client.mutate( + CUSTOMER_UPSERT_MUTATION, + { + "input": { + "name": user_id, + "externalId": user_id, + }, + }, + ) + result = data.get("customerUpsert", {}) + if not result.get("success"): + raise RuntimeError(f"Failed to upsert customer: {data}") + return result["customer"] + + async def _execute( + self, + user_id: str | None, + session: ChatSession, + **kwargs, + ) -> ToolResponseBase: + title = kwargs.get("title", "").strip() + description = kwargs.get("description", "").strip() + existing_issue_id = kwargs.get("existing_issue_id") + session_id = session.session_id if session else None + + if not title or not description: + return ErrorResponse( + message="Both title and description are required.", + error="Missing required parameters", + session_id=session_id, + ) + + if not user_id: + return ErrorResponse( + message="Authentication required to create feature requests.", + error="Missing user_id", + session_id=session_id, + ) + + client = _get_linear_client() + + # Step 1: Find or create customer for this user + customer = await self._find_or_create_customer(client, user_id) + customer_id = customer["id"] + customer_name = customer["name"] + + # Step 2: Create or reuse issue + if existing_issue_id: + # Add need to existing issue - we still need the issue details for response + is_new_issue = False + issue_id = existing_issue_id + else: + # Create new issue in the feature requests project + data = await client.mutate( + ISSUE_CREATE_MUTATION, + { + "input": { + "title": title, + "description": description, + "teamId": TEAM_ID, + "projectId": FEATURE_REQUEST_PROJECT_ID, + }, + }, + ) + result = data.get("issueCreate", {}) + if not result.get("success"): + return ErrorResponse( + message="Failed to create feature request issue.", + error=str(data), + session_id=session_id, + ) + issue = result["issue"] + issue_id = issue["id"] + is_new_issue = True + + # Step 3: Create customer need on the issue + data = await client.mutate( + CUSTOMER_NEED_CREATE_MUTATION, + { + "input": { + "customerId": customer_id, + "issueId": issue_id, + "body": description, + "priority": 0, + }, + }, + ) + need_result = data.get("customerNeedCreate", {}) + if not need_result.get("success"): + return ErrorResponse( + message="Failed to attach customer need to the feature request.", + error=str(data), + session_id=session_id, + ) + + need = need_result["need"] + issue_info = need["issue"] + + return FeatureRequestCreatedResponse( + message=( + f"{'Created new feature request' if is_new_issue else 'Added your request to existing feature request'} " + f"[{issue_info['identifier']}] {issue_info['title']}." + ), + issue_id=issue_info["id"], + issue_identifier=issue_info["identifier"], + issue_title=issue_info["title"], + issue_url=issue_info.get("url", ""), + is_new_issue=is_new_issue, + customer_name=customer_name, + session_id=session_id, + ) diff --git a/autogpt_platform/backend/backend/api/features/chat/tools/models.py b/autogpt_platform/backend/backend/api/features/chat/tools/models.py index 69c8c6c684..d420b289dc 100644 --- a/autogpt_platform/backend/backend/api/features/chat/tools/models.py +++ b/autogpt_platform/backend/backend/api/features/chat/tools/models.py @@ -40,6 +40,9 @@ class ResponseType(str, Enum): OPERATION_IN_PROGRESS = "operation_in_progress" # Input validation INPUT_VALIDATION_ERROR = "input_validation_error" + # Feature request types + FEATURE_REQUEST_SEARCH = "feature_request_search" + FEATURE_REQUEST_CREATED = "feature_request_created" # Base response model @@ -421,3 +424,34 @@ class AsyncProcessingResponse(ToolResponseBase): status: str = "accepted" # Must be "accepted" for detection operation_id: str | None = None task_id: str | None = None + + +# Feature request models +class FeatureRequestInfo(BaseModel): + """Information about a feature request issue.""" + + id: str + identifier: str + title: str + description: str | None = None + + +class FeatureRequestSearchResponse(ToolResponseBase): + """Response for search_feature_requests tool.""" + + type: ResponseType = ResponseType.FEATURE_REQUEST_SEARCH + results: list[FeatureRequestInfo] + count: int + query: str + + +class FeatureRequestCreatedResponse(ToolResponseBase): + """Response for create_feature_request tool.""" + + type: ResponseType = ResponseType.FEATURE_REQUEST_CREATED + issue_id: str + issue_identifier: str + issue_title: str + issue_url: str + is_new_issue: bool # False if added to existing + customer_name: str diff --git a/autogpt_platform/backend/backend/util/settings.py b/autogpt_platform/backend/backend/util/settings.py index 50b7428160..d539832fb0 100644 --- a/autogpt_platform/backend/backend/util/settings.py +++ b/autogpt_platform/backend/backend/util/settings.py @@ -658,6 +658,9 @@ class Secrets(UpdateTrackingModel["Secrets"], BaseSettings): mem0_api_key: str = Field(default="", description="Mem0 API key") elevenlabs_api_key: str = Field(default="", description="ElevenLabs API key") + linear_api_key: str = Field( + default="", description="Linear API key for system-level operations" + ) linear_client_id: str = Field(default="", description="Linear client ID") linear_client_secret: str = Field(default="", description="Linear client secret") diff --git a/autogpt_platform/backend/test_linear_customers.py b/autogpt_platform/backend/test_linear_customers.py new file mode 100644 index 0000000000..6e6f3e48fc --- /dev/null +++ b/autogpt_platform/backend/test_linear_customers.py @@ -0,0 +1,468 @@ +""" +Test script for Linear GraphQL API - Customer Requests operations. + +Tests the exact GraphQL calls needed for: +1. search_feature_requests - search issues in the Customer Feature Requests project +2. add_feature_request - upsert customer + create customer need on issue + +Requires LINEAR_API_KEY in backend/.env +Generate one at: https://linear.app/settings/api +""" + +import json +import os +import sys + +import httpx +from dotenv import load_dotenv + +load_dotenv() + +LINEAR_API_URL = "https://api.linear.app/graphql" +API_KEY = os.getenv("LINEAR_API_KEY") + +# Target project for feature requests +FEATURE_REQUEST_PROJECT_ID = "13f066f3-f639-4a67-aaa3-31483ebdf8cd" +# Team: Internal +TEAM_ID = "557fd3d5-087e-43a9-83e3-476c8313ce49" + +if not API_KEY: + print("ERROR: LINEAR_API_KEY not found in .env") + print("Generate a personal API key at: https://linear.app/settings/api") + print("Then add LINEAR_API_KEY=lin_api_... to backend/.env") + sys.exit(1) + +HEADERS = { + "Authorization": API_KEY, + "Content-Type": "application/json", +} + + +def graphql(query: str, variables: dict | None = None) -> dict: + """Execute a GraphQL query against Linear API.""" + payload = {"query": query} + if variables: + payload["variables"] = variables + + resp = httpx.post(LINEAR_API_URL, json=payload, headers=HEADERS, timeout=30) + if resp.status_code != 200: + print(f"HTTP {resp.status_code}: {resp.text[:500]}") + resp.raise_for_status() + data = resp.json() + + if "errors" in data: + print(f"GraphQL Errors: {json.dumps(data['errors'], indent=2)}") + + return data + + +# --------------------------------------------------------------------------- +# QUERIES +# --------------------------------------------------------------------------- + +# Search issues within the feature requests project by title/description +SEARCH_ISSUES_IN_PROJECT = """ +query SearchFeatureRequests($filter: IssueFilter!, $first: Int) { + issues(filter: $filter, first: $first) { + nodes { + id + identifier + title + description + url + state { + name + type + } + project { + id + name + } + labels { + nodes { + name + } + } + } + } +} +""" + +# Get issue with its customer needs +GET_ISSUE_WITH_NEEDS = """ +query GetIssueWithNeeds($id: String!) { + issue(id: $id) { + id + identifier + title + url + needs { + nodes { + id + body + priority + customer { + id + name + domains + externalIds + } + } + } + } +} +""" + +# Search customers +SEARCH_CUSTOMERS = """ +query SearchCustomers($filter: CustomerFilter, $first: Int) { + customers(filter: $filter, first: $first) { + nodes { + id + name + domains + externalIds + revenue + size + status { + name + } + tier { + name + } + } + } +} +""" + +# --------------------------------------------------------------------------- +# MUTATIONS +# --------------------------------------------------------------------------- + +CUSTOMER_UPSERT = """ +mutation CustomerUpsert($input: CustomerUpsertInput!) { + customerUpsert(input: $input) { + success + customer { + id + name + domains + externalIds + } + } +} +""" + +CUSTOMER_NEED_CREATE = """ +mutation CustomerNeedCreate($input: CustomerNeedCreateInput!) { + customerNeedCreate(input: $input) { + success + need { + id + body + priority + customer { + id + name + } + issue { + id + identifier + title + } + } + } +} +""" + +ISSUE_CREATE = """ +mutation IssueCreate($input: IssueCreateInput!) { + issueCreate(input: $input) { + success + issue { + id + identifier + title + url + } + } +} +""" + + +# --------------------------------------------------------------------------- +# TESTS +# --------------------------------------------------------------------------- + + +def test_1_search_feature_requests(): + """Search for feature requests in the target project by keyword.""" + print("\n" + "=" * 60) + print("TEST 1: Search feature requests in project by keyword") + print("=" * 60) + + search_term = "agent" + result = graphql( + SEARCH_ISSUES_IN_PROJECT, + { + "filter": { + "project": {"id": {"eq": FEATURE_REQUEST_PROJECT_ID}}, + "or": [ + {"title": {"containsIgnoreCase": search_term}}, + {"description": {"containsIgnoreCase": search_term}}, + ], + }, + "first": 5, + }, + ) + + issues = result.get("data", {}).get("issues", {}).get("nodes", []) + for issue in issues: + proj = issue.get("project") or {} + print(f"\n [{issue['identifier']}] {issue['title']}") + print(f" Project: {proj.get('name', 'N/A')}") + print(f" State: {issue['state']['name']}") + print(f" URL: {issue['url']}") + + print(f"\n Found {len(issues)} issues matching '{search_term}'") + return issues + + +def test_2_list_all_in_project(): + """List all issues in the feature requests project.""" + print("\n" + "=" * 60) + print("TEST 2: List all issues in Customer Feature Requests project") + print("=" * 60) + + result = graphql( + SEARCH_ISSUES_IN_PROJECT, + { + "filter": { + "project": {"id": {"eq": FEATURE_REQUEST_PROJECT_ID}}, + }, + "first": 10, + }, + ) + + issues = result.get("data", {}).get("issues", {}).get("nodes", []) + if not issues: + print(" No issues in project yet (empty project)") + for issue in issues: + print(f"\n [{issue['identifier']}] {issue['title']}") + print(f" State: {issue['state']['name']}") + + print(f"\n Total: {len(issues)} issues") + return issues + + +def test_3_search_customers(): + """List existing customers.""" + print("\n" + "=" * 60) + print("TEST 3: List customers") + print("=" * 60) + + result = graphql(SEARCH_CUSTOMERS, {"first": 10}) + customers = result.get("data", {}).get("customers", {}).get("nodes", []) + + if not customers: + print(" No customers exist yet") + for c in customers: + status = c.get("status") or {} + tier = c.get("tier") or {} + print(f"\n [{c['id'][:8]}...] {c['name']}") + print(f" Domains: {c.get('domains', [])}") + print(f" External IDs: {c.get('externalIds', [])}") + print( + f" Status: {status.get('name', 'N/A')}, Tier: {tier.get('name', 'N/A')}" + ) + + print(f"\n Total: {len(customers)} customers") + return customers + + +def test_4_customer_upsert(): + """Upsert a test customer.""" + print("\n" + "=" * 60) + print("TEST 4: Customer upsert (find-or-create)") + print("=" * 60) + + result = graphql( + CUSTOMER_UPSERT, + { + "input": { + "name": "Test Customer (API Test)", + "domains": ["test-api-customer.example.com"], + "externalId": "test-customer-001", + } + }, + ) + + upsert = result.get("data", {}).get("customerUpsert", {}) + if upsert.get("success"): + customer = upsert["customer"] + print(f" Success! Customer: {customer['name']}") + print(f" ID: {customer['id']}") + print(f" Domains: {customer['domains']}") + print(f" External IDs: {customer['externalIds']}") + return customer + else: + print(f" Failed: {json.dumps(result, indent=2)}") + return None + + +def test_5_create_issue_and_need(customer_id: str): + """Create a new feature request issue and attach a customer need.""" + print("\n" + "=" * 60) + print("TEST 5: Create issue + customer need") + print("=" * 60) + + # Step 1: Create issue in the project + result = graphql( + ISSUE_CREATE, + { + "input": { + "title": "Test Feature Request (API Test - safe to delete)", + "description": "This is a test feature request created via the GraphQL API.", + "teamId": TEAM_ID, + "projectId": FEATURE_REQUEST_PROJECT_ID, + } + }, + ) + + data = result.get("data") + if not data: + print(f" Issue creation failed: {json.dumps(result, indent=2)}") + return None + issue_data = data.get("issueCreate", {}) + if not issue_data.get("success"): + print(f" Issue creation failed: {json.dumps(result, indent=2)}") + return None + + issue = issue_data["issue"] + print(f" Created issue: [{issue['identifier']}] {issue['title']}") + print(f" URL: {issue['url']}") + + # Step 2: Attach customer need + result = graphql( + CUSTOMER_NEED_CREATE, + { + "input": { + "customerId": customer_id, + "issueId": issue["id"], + "body": "Our team really needs this feature for our workflow. High priority for us!", + "priority": 0, + } + }, + ) + + need_data = result.get("data", {}).get("customerNeedCreate", {}) + if need_data.get("success"): + need = need_data["need"] + print(f" Attached customer need: {need['id']}") + print(f" Customer: {need['customer']['name']}") + print(f" Body: {need['body'][:80]}") + else: + print(f" Customer need creation failed: {json.dumps(result, indent=2)}") + + # Step 3: Verify by fetching the issue with needs + print("\n Verifying...") + verify = graphql(GET_ISSUE_WITH_NEEDS, {"id": issue["id"]}) + issue_verify = verify.get("data", {}).get("issue", {}) + needs = issue_verify.get("needs", {}).get("nodes", []) + print(f" Issue now has {len(needs)} customer need(s)") + for n in needs: + cust = n.get("customer") or {} + print(f" - {cust.get('name', 'N/A')}: {n.get('body', '')[:60]}") + + return issue + + +def test_6_add_need_to_existing(customer_id: str, issue_id: str): + """Add a customer need to an existing issue (the common case).""" + print("\n" + "=" * 60) + print("TEST 6: Add customer need to existing issue") + print("=" * 60) + + result = graphql( + CUSTOMER_NEED_CREATE, + { + "input": { + "customerId": customer_id, + "issueId": issue_id, + "body": "We also want this! +1 from our organization.", + "priority": 0, + } + }, + ) + + need_data = result.get("data", {}).get("customerNeedCreate", {}) + if need_data.get("success"): + need = need_data["need"] + print(f" Success! Need: {need['id']}") + print(f" Customer: {need['customer']['name']}") + print(f" Issue: [{need['issue']['identifier']}] {need['issue']['title']}") + return need + else: + print(f" Failed: {json.dumps(result, indent=2)}") + return None + + +def main(): + print("Linear GraphQL API - Customer Requests Test Suite") + print("=" * 60) + print(f"API URL: {LINEAR_API_URL}") + print(f"API Key: {API_KEY[:10]}...") + print(f"Project: Customer Feature Requests ({FEATURE_REQUEST_PROJECT_ID[:8]}...)") + + # --- Read-only tests --- + test_1_search_feature_requests() + test_2_list_all_in_project() + test_3_search_customers() + + # --- Write tests --- + print("\n" + "=" * 60) + answer = ( + input("Run WRITE tests? (creates test customer + issue + need) [y/N]: ") + .strip() + .lower() + ) + if answer != "y": + print("Skipped write tests.") + print("\nDone!") + return + + customer = test_4_customer_upsert() + if not customer: + print("Customer upsert failed, stopping.") + return + + issue = test_5_create_issue_and_need(customer["id"]) + if not issue: + print("Issue creation failed, stopping.") + return + + # Test adding a second need to the same issue (simulates another customer requesting same feature) + # First upsert a second customer + result = graphql( + CUSTOMER_UPSERT, + { + "input": { + "name": "Second Test Customer", + "domains": ["second-test.example.com"], + "externalId": "test-customer-002", + } + }, + ) + customer2 = result.get("data", {}).get("customerUpsert", {}).get("customer") + if customer2: + test_6_add_need_to_existing(customer2["id"], issue["id"]) + + print("\n" + "=" * 60) + print("All tests complete!") + print( + "Check the project: https://linear.app/autogpt/project/customer-feature-requests-710dcbf8bf4e/issues" + ) + + +if __name__ == "__main__": + main() diff --git a/autogpt_platform/frontend/src/app/(platform)/copilot/components/ChatMessagesContainer/ChatMessagesContainer.tsx b/autogpt_platform/frontend/src/app/(platform)/copilot/components/ChatMessagesContainer/ChatMessagesContainer.tsx index 71ade81a9f..b62e96f58a 100644 --- a/autogpt_platform/frontend/src/app/(platform)/copilot/components/ChatMessagesContainer/ChatMessagesContainer.tsx +++ b/autogpt_platform/frontend/src/app/(platform)/copilot/components/ChatMessagesContainer/ChatMessagesContainer.tsx @@ -15,6 +15,10 @@ import { ToolUIPart, UIDataTypes, UIMessage, UITools } from "ai"; import { useEffect, useRef, useState } from "react"; import { CreateAgentTool } from "../../tools/CreateAgent/CreateAgent"; import { EditAgentTool } from "../../tools/EditAgent/EditAgent"; +import { + CreateFeatureRequestTool, + SearchFeatureRequestsTool, +} from "../../tools/FeatureRequests/FeatureRequests"; import { FindAgentsTool } from "../../tools/FindAgents/FindAgents"; import { FindBlocksTool } from "../../tools/FindBlocks/FindBlocks"; import { RunAgentTool } from "../../tools/RunAgent/RunAgent"; @@ -254,6 +258,20 @@ export const ChatMessagesContainer = ({ part={part as ToolUIPart} /> ); + case "tool-search_feature_requests": + return ( + + ); + case "tool-create_feature_request": + return ( + + ); default: return null; } diff --git a/autogpt_platform/frontend/src/app/(platform)/copilot/styleguide/page.tsx b/autogpt_platform/frontend/src/app/(platform)/copilot/styleguide/page.tsx index 6030665f1c..8a35f939ca 100644 --- a/autogpt_platform/frontend/src/app/(platform)/copilot/styleguide/page.tsx +++ b/autogpt_platform/frontend/src/app/(platform)/copilot/styleguide/page.tsx @@ -14,6 +14,10 @@ import { Text } from "@/components/atoms/Text/Text"; import { CopilotChatActionsProvider } from "../components/CopilotChatActionsProvider/CopilotChatActionsProvider"; import { CreateAgentTool } from "../tools/CreateAgent/CreateAgent"; import { EditAgentTool } from "../tools/EditAgent/EditAgent"; +import { + CreateFeatureRequestTool, + SearchFeatureRequestsTool, +} from "../tools/FeatureRequests/FeatureRequests"; import { FindAgentsTool } from "../tools/FindAgents/FindAgents"; import { FindBlocksTool } from "../tools/FindBlocks/FindBlocks"; import { RunAgentTool } from "../tools/RunAgent/RunAgent"; @@ -45,6 +49,8 @@ const SECTIONS = [ "Tool: Create Agent", "Tool: Edit Agent", "Tool: View Agent Output", + "Tool: Search Feature Requests", + "Tool: Create Feature Request", "Full Conversation Example", ] as const; @@ -1421,6 +1427,235 @@ export default function StyleguidePage() { + {/* ============================================================= */} + {/* SEARCH FEATURE REQUESTS */} + {/* ============================================================= */} + +
+ + + + + + + + + + + + + + + + + + + + + + + +
+ + {/* ============================================================= */} + {/* CREATE FEATURE REQUEST */} + {/* ============================================================= */} + +
+ + + + + + + + + + + + + + + + + + + + + + + +
+ {/* ============================================================= */} {/* FULL CONVERSATION EXAMPLE */} {/* ============================================================= */} diff --git a/autogpt_platform/frontend/src/app/(platform)/copilot/tools/FeatureRequests/FeatureRequests.tsx b/autogpt_platform/frontend/src/app/(platform)/copilot/tools/FeatureRequests/FeatureRequests.tsx new file mode 100644 index 0000000000..e14ec69397 --- /dev/null +++ b/autogpt_platform/frontend/src/app/(platform)/copilot/tools/FeatureRequests/FeatureRequests.tsx @@ -0,0 +1,240 @@ +"use client"; + +import type { ToolUIPart } from "ai"; +import { useMemo } from "react"; + +import { MorphingTextAnimation } from "../../components/MorphingTextAnimation/MorphingTextAnimation"; +import { + ContentBadge, + ContentCard, + ContentCardDescription, + ContentCardHeader, + ContentCardTitle, + ContentGrid, + ContentLink, + ContentMessage, + ContentSuggestionsList, +} from "../../components/ToolAccordion/AccordionContent"; +import { ToolAccordion } from "../../components/ToolAccordion/ToolAccordion"; +import { + AccordionIcon, + getAccordionTitle, + getAnimationText, + getFeatureRequestOutput, + isCreatedOutput, + isErrorOutput, + isNoResultsOutput, + isSearchResultsOutput, + ToolIcon, + type FeatureRequestToolType, +} from "./helpers"; + +export interface FeatureRequestToolPart { + type: FeatureRequestToolType; + toolCallId: string; + state: ToolUIPart["state"]; + input?: unknown; + output?: unknown; +} + +interface Props { + part: FeatureRequestToolPart; +} + +function truncate(text: string, maxChars: number): string { + const trimmed = text.trim(); + if (trimmed.length <= maxChars) return trimmed; + return `${trimmed.slice(0, maxChars).trimEnd()}…`; +} + +export function SearchFeatureRequestsTool({ part }: Props) { + const output = getFeatureRequestOutput(part); + const text = getAnimationText(part); + const isStreaming = + part.state === "input-streaming" || part.state === "input-available"; + const isError = + part.state === "output-error" || (!!output && isErrorOutput(output)); + + const normalized = useMemo(() => { + if (!output) return null; + return { title: getAccordionTitle(part.type, output) }; + }, [output, part.type]); + + const isOutputAvailable = part.state === "output-available" && !!output; + + const searchOutput = + isOutputAvailable && output && isSearchResultsOutput(output) + ? output + : null; + const noResultsOutput = + isOutputAvailable && output && isNoResultsOutput(output) ? output : null; + const errorOutput = + isOutputAvailable && output && isErrorOutput(output) ? output : null; + + const hasExpandableContent = + isOutputAvailable && + ((!!searchOutput && searchOutput.count > 0) || + !!noResultsOutput || + !!errorOutput); + + const accordionDescription = + hasExpandableContent && searchOutput + ? `Found ${searchOutput.count} result${searchOutput.count === 1 ? "" : "s"} for "${searchOutput.query}"` + : hasExpandableContent && (noResultsOutput || errorOutput) + ? ((noResultsOutput ?? errorOutput)?.message ?? null) + : null; + + return ( +
+
+ + +
+ + {hasExpandableContent && normalized && ( + } + title={normalized.title} + description={accordionDescription} + > + {searchOutput && ( + + {searchOutput.results.map((r) => ( + + + + {r.identifier} — {r.title} + + + {r.description && ( + + {truncate(r.description, 200)} + + )} + + ))} + + )} + + {noResultsOutput && ( +
+ {noResultsOutput.message} + {noResultsOutput.suggestions && + noResultsOutput.suggestions.length > 0 && ( + + )} +
+ )} + + {errorOutput && ( +
+ {errorOutput.message} + {errorOutput.error && ( + + {errorOutput.error} + + )} +
+ )} +
+ )} +
+ ); +} + +export function CreateFeatureRequestTool({ part }: Props) { + const output = getFeatureRequestOutput(part); + const text = getAnimationText(part); + const isStreaming = + part.state === "input-streaming" || part.state === "input-available"; + const isError = + part.state === "output-error" || (!!output && isErrorOutput(output)); + + const normalized = useMemo(() => { + if (!output) return null; + return { title: getAccordionTitle(part.type, output) }; + }, [output, part.type]); + + const isOutputAvailable = part.state === "output-available" && !!output; + + const createdOutput = + isOutputAvailable && output && isCreatedOutput(output) ? output : null; + const errorOutput = + isOutputAvailable && output && isErrorOutput(output) ? output : null; + + const hasExpandableContent = + isOutputAvailable && (!!createdOutput || !!errorOutput); + + const accordionDescription = + hasExpandableContent && createdOutput + ? `${createdOutput.issue_identifier} — ${createdOutput.issue_title}` + : hasExpandableContent && errorOutput + ? errorOutput.message + : null; + + return ( +
+
+ + +
+ + {hasExpandableContent && normalized && ( + } + title={normalized.title} + description={accordionDescription} + > + {createdOutput && ( + + + View + + ) : undefined + } + > + + {createdOutput.issue_identifier} — {createdOutput.issue_title} + + +
+ + {createdOutput.is_new_issue ? "New" : "Existing"} + +
+ {createdOutput.message} +
+ )} + + {errorOutput && ( +
+ {errorOutput.message} + {errorOutput.error && ( + + {errorOutput.error} + + )} +
+ )} +
+ )} +
+ ); +} diff --git a/autogpt_platform/frontend/src/app/(platform)/copilot/tools/FeatureRequests/helpers.tsx b/autogpt_platform/frontend/src/app/(platform)/copilot/tools/FeatureRequests/helpers.tsx new file mode 100644 index 0000000000..ed292faf2b --- /dev/null +++ b/autogpt_platform/frontend/src/app/(platform)/copilot/tools/FeatureRequests/helpers.tsx @@ -0,0 +1,271 @@ +import { + CheckCircleIcon, + LightbulbIcon, + MagnifyingGlassIcon, + PlusCircleIcon, +} from "@phosphor-icons/react"; +import type { ToolUIPart } from "ai"; + +/* ------------------------------------------------------------------ */ +/* Types (local until API client is regenerated) */ +/* ------------------------------------------------------------------ */ + +interface FeatureRequestInfo { + id: string; + identifier: string; + title: string; + description?: string | null; +} + +export interface FeatureRequestSearchResponse { + type: "feature_request_search"; + message: string; + results: FeatureRequestInfo[]; + count: number; + query: string; +} + +export interface FeatureRequestCreatedResponse { + type: "feature_request_created"; + message: string; + issue_id: string; + issue_identifier: string; + issue_title: string; + issue_url: string; + is_new_issue: boolean; + customer_name: string; +} + +interface NoResultsResponse { + type: "no_results"; + message: string; + suggestions?: string[]; +} + +interface ErrorResponse { + type: "error"; + message: string; + error?: string; +} + +export type FeatureRequestOutput = + | FeatureRequestSearchResponse + | FeatureRequestCreatedResponse + | NoResultsResponse + | ErrorResponse; + +export type FeatureRequestToolType = + | "tool-search_feature_requests" + | "tool-create_feature_request" + | string; + +/* ------------------------------------------------------------------ */ +/* Output parsing */ +/* ------------------------------------------------------------------ */ + +function parseOutput(output: unknown): FeatureRequestOutput | null { + if (!output) return null; + if (typeof output === "string") { + const trimmed = output.trim(); + if (!trimmed) return null; + try { + return parseOutput(JSON.parse(trimmed) as unknown); + } catch { + return null; + } + } + if (typeof output === "object") { + const type = (output as { type?: unknown }).type; + if ( + type === "feature_request_search" || + type === "feature_request_created" || + type === "no_results" || + type === "error" + ) { + return output as FeatureRequestOutput; + } + // Fallback structural checks + if ("results" in output && "query" in output) + return output as FeatureRequestSearchResponse; + if ("issue_identifier" in output) + return output as FeatureRequestCreatedResponse; + if ("suggestions" in output && !("error" in output)) + return output as NoResultsResponse; + if ("error" in output || "details" in output) + return output as ErrorResponse; + } + return null; +} + +export function getFeatureRequestOutput( + part: unknown, +): FeatureRequestOutput | null { + if (!part || typeof part !== "object") return null; + return parseOutput((part as { output?: unknown }).output); +} + +/* ------------------------------------------------------------------ */ +/* Type guards */ +/* ------------------------------------------------------------------ */ + +export function isSearchResultsOutput( + output: FeatureRequestOutput, +): output is FeatureRequestSearchResponse { + return ( + output.type === "feature_request_search" || + ("results" in output && "query" in output) + ); +} + +export function isCreatedOutput( + output: FeatureRequestOutput, +): output is FeatureRequestCreatedResponse { + return ( + output.type === "feature_request_created" || "issue_identifier" in output + ); +} + +export function isNoResultsOutput( + output: FeatureRequestOutput, +): output is NoResultsResponse { + return ( + output.type === "no_results" || + ("suggestions" in output && !("error" in output)) + ); +} + +export function isErrorOutput( + output: FeatureRequestOutput, +): output is ErrorResponse { + return output.type === "error" || "error" in output; +} + +/* ------------------------------------------------------------------ */ +/* Accordion metadata */ +/* ------------------------------------------------------------------ */ + +export function getAccordionTitle( + toolType: FeatureRequestToolType, + output: FeatureRequestOutput, +): string { + if (toolType === "tool-search_feature_requests") { + if (isSearchResultsOutput(output)) return "Feature requests"; + if (isNoResultsOutput(output)) return "No feature requests found"; + return "Feature request search error"; + } + if (isCreatedOutput(output)) { + return output.is_new_issue + ? "Feature request created" + : "Added to feature request"; + } + if (isErrorOutput(output)) return "Feature request error"; + return "Feature request"; +} + +/* ------------------------------------------------------------------ */ +/* Animation text */ +/* ------------------------------------------------------------------ */ + +interface AnimationPart { + type: FeatureRequestToolType; + state: ToolUIPart["state"]; + input?: unknown; + output?: unknown; +} + +export function getAnimationText(part: AnimationPart): string { + if (part.type === "tool-search_feature_requests") { + const query = (part.input as { query?: string } | undefined)?.query?.trim(); + const queryText = query ? ` for "${query}"` : ""; + + switch (part.state) { + case "input-streaming": + case "input-available": + return `Searching feature requests${queryText}`; + case "output-available": { + const output = parseOutput(part.output); + if (!output) return `Searching feature requests${queryText}`; + if (isSearchResultsOutput(output)) { + return `Found ${output.count} feature request${output.count === 1 ? "" : "s"}${queryText}`; + } + if (isNoResultsOutput(output)) + return `No feature requests found${queryText}`; + return `Error searching feature requests${queryText}`; + } + case "output-error": + return `Error searching feature requests${queryText}`; + default: + return "Searching feature requests"; + } + } + + // create_feature_request + const title = (part.input as { title?: string } | undefined)?.title?.trim(); + const titleText = title ? ` "${title}"` : ""; + + switch (part.state) { + case "input-streaming": + case "input-available": + return `Creating feature request${titleText}`; + case "output-available": { + const output = parseOutput(part.output); + if (!output) return `Creating feature request${titleText}`; + if (isCreatedOutput(output)) { + return output.is_new_issue + ? `Created ${output.issue_identifier}` + : `Added to ${output.issue_identifier}`; + } + if (isErrorOutput(output)) return "Error creating feature request"; + return `Created feature request${titleText}`; + } + case "output-error": + return "Error creating feature request"; + default: + return "Creating feature request"; + } +} + +/* ------------------------------------------------------------------ */ +/* Icons */ +/* ------------------------------------------------------------------ */ + +export function ToolIcon({ + toolType, + isStreaming, + isError, +}: { + toolType: FeatureRequestToolType; + isStreaming?: boolean; + isError?: boolean; +}) { + const IconComponent = + toolType === "tool-create_feature_request" + ? PlusCircleIcon + : MagnifyingGlassIcon; + + return ( + + ); +} + +export function AccordionIcon({ + toolType, +}: { + toolType: FeatureRequestToolType; +}) { + const IconComponent = + toolType === "tool-create_feature_request" + ? CheckCircleIcon + : LightbulbIcon; + return ; +} diff --git a/autogpt_platform/frontend/src/app/api/openapi.json b/autogpt_platform/frontend/src/app/api/openapi.json index 5d2cb83f7c..a0eb141aa9 100644 --- a/autogpt_platform/frontend/src/app/api/openapi.json +++ b/autogpt_platform/frontend/src/app/api/openapi.json @@ -10495,7 +10495,9 @@ "operation_started", "operation_pending", "operation_in_progress", - "input_validation_error" + "input_validation_error", + "feature_request_search", + "feature_request_created" ], "title": "ResponseType", "description": "Types of tool responses." From 3d31f62bf1376b7b3574977af86958e6aa000825 Mon Sep 17 00:00:00 2001 From: Swifty Date: Thu, 12 Feb 2026 16:39:24 +0100 Subject: [PATCH 10/18] Revert "added feature request tooling" This reverts commit b8b6c9de2322cf083e61670ee8625d4abb2d8e19. --- .../api/features/chat/tools/__init__.py | 4 - .../features/chat/tools/feature_requests.py | 369 -------------- .../backend/api/features/chat/tools/models.py | 34 -- .../backend/backend/util/settings.py | 3 - .../backend/test_linear_customers.py | 468 ------------------ .../ChatMessagesContainer.tsx | 18 - .../(platform)/copilot/styleguide/page.tsx | 235 --------- .../tools/FeatureRequests/FeatureRequests.tsx | 240 --------- .../copilot/tools/FeatureRequests/helpers.tsx | 271 ---------- .../frontend/src/app/api/openapi.json | 4 +- 10 files changed, 1 insertion(+), 1645 deletions(-) delete mode 100644 autogpt_platform/backend/backend/api/features/chat/tools/feature_requests.py delete mode 100644 autogpt_platform/backend/test_linear_customers.py delete mode 100644 autogpt_platform/frontend/src/app/(platform)/copilot/tools/FeatureRequests/FeatureRequests.tsx delete mode 100644 autogpt_platform/frontend/src/app/(platform)/copilot/tools/FeatureRequests/helpers.tsx diff --git a/autogpt_platform/backend/backend/api/features/chat/tools/__init__.py b/autogpt_platform/backend/backend/api/features/chat/tools/__init__.py index 350776081a..dcbc35ef37 100644 --- a/autogpt_platform/backend/backend/api/features/chat/tools/__init__.py +++ b/autogpt_platform/backend/backend/api/features/chat/tools/__init__.py @@ -12,7 +12,6 @@ from .base import BaseTool from .create_agent import CreateAgentTool from .customize_agent import CustomizeAgentTool from .edit_agent import EditAgentTool -from .feature_requests import CreateFeatureRequestTool, SearchFeatureRequestsTool from .find_agent import FindAgentTool from .find_block import FindBlockTool from .find_library_agent import FindLibraryAgentTool @@ -46,9 +45,6 @@ TOOL_REGISTRY: dict[str, BaseTool] = { "view_agent_output": AgentOutputTool(), "search_docs": SearchDocsTool(), "get_doc_page": GetDocPageTool(), - # Feature request tools - "search_feature_requests": SearchFeatureRequestsTool(), - "create_feature_request": CreateFeatureRequestTool(), # Workspace tools for CoPilot file operations "list_workspace_files": ListWorkspaceFilesTool(), "read_workspace_file": ReadWorkspaceFileTool(), diff --git a/autogpt_platform/backend/backend/api/features/chat/tools/feature_requests.py b/autogpt_platform/backend/backend/api/features/chat/tools/feature_requests.py deleted file mode 100644 index 5e06d8b4b2..0000000000 --- a/autogpt_platform/backend/backend/api/features/chat/tools/feature_requests.py +++ /dev/null @@ -1,369 +0,0 @@ -"""Feature request tools - search and create feature requests via Linear.""" - -import logging -from typing import Any - -from pydantic import SecretStr - -from backend.api.features.chat.model import ChatSession -from backend.api.features.chat.tools.base import BaseTool -from backend.api.features.chat.tools.models import ( - ErrorResponse, - FeatureRequestCreatedResponse, - FeatureRequestInfo, - FeatureRequestSearchResponse, - NoResultsResponse, - ToolResponseBase, -) -from backend.blocks.linear._api import LinearClient -from backend.data.model import APIKeyCredentials -from backend.util.settings import Settings - -logger = logging.getLogger(__name__) - -# Target project and team IDs in our Linear workspace -FEATURE_REQUEST_PROJECT_ID = "13f066f3-f639-4a67-aaa3-31483ebdf8cd" -TEAM_ID = "557fd3d5-087e-43a9-83e3-476c8313ce49" - -MAX_SEARCH_RESULTS = 10 - -# GraphQL queries/mutations -SEARCH_ISSUES_QUERY = """ -query SearchFeatureRequests($term: String!, $filter: IssueFilter, $first: Int) { - searchIssues(term: $term, filter: $filter, first: $first) { - nodes { - id - identifier - title - description - } - } -} -""" - -CUSTOMER_UPSERT_MUTATION = """ -mutation CustomerUpsert($input: CustomerUpsertInput!) { - customerUpsert(input: $input) { - success - customer { - id - name - externalIds - } - } -} -""" - -ISSUE_CREATE_MUTATION = """ -mutation IssueCreate($input: IssueCreateInput!) { - issueCreate(input: $input) { - success - issue { - id - identifier - title - url - } - } -} -""" - -CUSTOMER_NEED_CREATE_MUTATION = """ -mutation CustomerNeedCreate($input: CustomerNeedCreateInput!) { - customerNeedCreate(input: $input) { - success - need { - id - body - customer { - id - name - } - issue { - id - identifier - title - url - } - } - } -} -""" - - -_settings: Settings | None = None - - -def _get_settings() -> Settings: - global _settings - if _settings is None: - _settings = Settings() - return _settings - - -def _get_linear_client() -> LinearClient: - """Create a Linear client using the system API key from settings.""" - api_key = _get_settings().secrets.linear_api_key - if not api_key: - raise RuntimeError("LINEAR_API_KEY secret is not configured") - credentials = APIKeyCredentials( - id="system-linear", - provider="linear", - api_key=SecretStr(api_key), - title="System Linear API Key", - ) - return LinearClient(credentials=credentials) - - -class SearchFeatureRequestsTool(BaseTool): - """Tool for searching existing feature requests in Linear.""" - - @property - def name(self) -> str: - return "search_feature_requests" - - @property - def description(self) -> str: - return ( - "Search existing feature requests to check if a similar request " - "already exists before creating a new one. Returns matching feature " - "requests with their ID, title, and description." - ) - - @property - def parameters(self) -> dict[str, Any]: - return { - "type": "object", - "properties": { - "query": { - "type": "string", - "description": "Search term to find matching feature requests.", - }, - }, - "required": ["query"], - } - - @property - def requires_auth(self) -> bool: - return True - - async def _execute( - self, - user_id: str | None, - session: ChatSession, - **kwargs, - ) -> ToolResponseBase: - query = kwargs.get("query", "").strip() - session_id = session.session_id if session else None - - if not query: - return ErrorResponse( - message="Please provide a search query.", - error="Missing query parameter", - session_id=session_id, - ) - - client = _get_linear_client() - data = await client.query( - SEARCH_ISSUES_QUERY, - { - "term": query, - "filter": { - "project": {"id": {"eq": FEATURE_REQUEST_PROJECT_ID}}, - }, - "first": MAX_SEARCH_RESULTS, - }, - ) - - nodes = data.get("searchIssues", {}).get("nodes", []) - - if not nodes: - return NoResultsResponse( - message=f"No feature requests found matching '{query}'.", - suggestions=[ - "Try different keywords", - "Use broader search terms", - "You can create a new feature request if none exists", - ], - session_id=session_id, - ) - - results = [ - FeatureRequestInfo( - id=node["id"], - identifier=node["identifier"], - title=node["title"], - description=node.get("description"), - ) - for node in nodes - ] - - return FeatureRequestSearchResponse( - message=f"Found {len(results)} feature request(s) matching '{query}'.", - results=results, - count=len(results), - query=query, - session_id=session_id, - ) - - -class CreateFeatureRequestTool(BaseTool): - """Tool for creating feature requests (or adding needs to existing ones).""" - - @property - def name(self) -> str: - return "create_feature_request" - - @property - def description(self) -> str: - return ( - "Create a new feature request or add a customer need to an existing one. " - "Always search first with search_feature_requests to avoid duplicates. " - "If a matching request exists, pass its ID as existing_issue_id to add " - "the user's need to it instead of creating a duplicate." - ) - - @property - def parameters(self) -> dict[str, Any]: - return { - "type": "object", - "properties": { - "title": { - "type": "string", - "description": "Title for the feature request.", - }, - "description": { - "type": "string", - "description": "Detailed description of what the user wants and why.", - }, - "existing_issue_id": { - "type": "string", - "description": ( - "If adding a need to an existing feature request, " - "provide its Linear issue ID (from search results). " - "Omit to create a new feature request." - ), - }, - }, - "required": ["title", "description"], - } - - @property - def requires_auth(self) -> bool: - return True - - async def _find_or_create_customer( - self, client: LinearClient, user_id: str - ) -> dict: - """Find existing customer by user_id or create a new one via upsert.""" - data = await client.mutate( - CUSTOMER_UPSERT_MUTATION, - { - "input": { - "name": user_id, - "externalId": user_id, - }, - }, - ) - result = data.get("customerUpsert", {}) - if not result.get("success"): - raise RuntimeError(f"Failed to upsert customer: {data}") - return result["customer"] - - async def _execute( - self, - user_id: str | None, - session: ChatSession, - **kwargs, - ) -> ToolResponseBase: - title = kwargs.get("title", "").strip() - description = kwargs.get("description", "").strip() - existing_issue_id = kwargs.get("existing_issue_id") - session_id = session.session_id if session else None - - if not title or not description: - return ErrorResponse( - message="Both title and description are required.", - error="Missing required parameters", - session_id=session_id, - ) - - if not user_id: - return ErrorResponse( - message="Authentication required to create feature requests.", - error="Missing user_id", - session_id=session_id, - ) - - client = _get_linear_client() - - # Step 1: Find or create customer for this user - customer = await self._find_or_create_customer(client, user_id) - customer_id = customer["id"] - customer_name = customer["name"] - - # Step 2: Create or reuse issue - if existing_issue_id: - # Add need to existing issue - we still need the issue details for response - is_new_issue = False - issue_id = existing_issue_id - else: - # Create new issue in the feature requests project - data = await client.mutate( - ISSUE_CREATE_MUTATION, - { - "input": { - "title": title, - "description": description, - "teamId": TEAM_ID, - "projectId": FEATURE_REQUEST_PROJECT_ID, - }, - }, - ) - result = data.get("issueCreate", {}) - if not result.get("success"): - return ErrorResponse( - message="Failed to create feature request issue.", - error=str(data), - session_id=session_id, - ) - issue = result["issue"] - issue_id = issue["id"] - is_new_issue = True - - # Step 3: Create customer need on the issue - data = await client.mutate( - CUSTOMER_NEED_CREATE_MUTATION, - { - "input": { - "customerId": customer_id, - "issueId": issue_id, - "body": description, - "priority": 0, - }, - }, - ) - need_result = data.get("customerNeedCreate", {}) - if not need_result.get("success"): - return ErrorResponse( - message="Failed to attach customer need to the feature request.", - error=str(data), - session_id=session_id, - ) - - need = need_result["need"] - issue_info = need["issue"] - - return FeatureRequestCreatedResponse( - message=( - f"{'Created new feature request' if is_new_issue else 'Added your request to existing feature request'} " - f"[{issue_info['identifier']}] {issue_info['title']}." - ), - issue_id=issue_info["id"], - issue_identifier=issue_info["identifier"], - issue_title=issue_info["title"], - issue_url=issue_info.get("url", ""), - is_new_issue=is_new_issue, - customer_name=customer_name, - session_id=session_id, - ) diff --git a/autogpt_platform/backend/backend/api/features/chat/tools/models.py b/autogpt_platform/backend/backend/api/features/chat/tools/models.py index d420b289dc..69c8c6c684 100644 --- a/autogpt_platform/backend/backend/api/features/chat/tools/models.py +++ b/autogpt_platform/backend/backend/api/features/chat/tools/models.py @@ -40,9 +40,6 @@ class ResponseType(str, Enum): OPERATION_IN_PROGRESS = "operation_in_progress" # Input validation INPUT_VALIDATION_ERROR = "input_validation_error" - # Feature request types - FEATURE_REQUEST_SEARCH = "feature_request_search" - FEATURE_REQUEST_CREATED = "feature_request_created" # Base response model @@ -424,34 +421,3 @@ class AsyncProcessingResponse(ToolResponseBase): status: str = "accepted" # Must be "accepted" for detection operation_id: str | None = None task_id: str | None = None - - -# Feature request models -class FeatureRequestInfo(BaseModel): - """Information about a feature request issue.""" - - id: str - identifier: str - title: str - description: str | None = None - - -class FeatureRequestSearchResponse(ToolResponseBase): - """Response for search_feature_requests tool.""" - - type: ResponseType = ResponseType.FEATURE_REQUEST_SEARCH - results: list[FeatureRequestInfo] - count: int - query: str - - -class FeatureRequestCreatedResponse(ToolResponseBase): - """Response for create_feature_request tool.""" - - type: ResponseType = ResponseType.FEATURE_REQUEST_CREATED - issue_id: str - issue_identifier: str - issue_title: str - issue_url: str - is_new_issue: bool # False if added to existing - customer_name: str diff --git a/autogpt_platform/backend/backend/util/settings.py b/autogpt_platform/backend/backend/util/settings.py index d539832fb0..50b7428160 100644 --- a/autogpt_platform/backend/backend/util/settings.py +++ b/autogpt_platform/backend/backend/util/settings.py @@ -658,9 +658,6 @@ class Secrets(UpdateTrackingModel["Secrets"], BaseSettings): mem0_api_key: str = Field(default="", description="Mem0 API key") elevenlabs_api_key: str = Field(default="", description="ElevenLabs API key") - linear_api_key: str = Field( - default="", description="Linear API key for system-level operations" - ) linear_client_id: str = Field(default="", description="Linear client ID") linear_client_secret: str = Field(default="", description="Linear client secret") diff --git a/autogpt_platform/backend/test_linear_customers.py b/autogpt_platform/backend/test_linear_customers.py deleted file mode 100644 index 6e6f3e48fc..0000000000 --- a/autogpt_platform/backend/test_linear_customers.py +++ /dev/null @@ -1,468 +0,0 @@ -""" -Test script for Linear GraphQL API - Customer Requests operations. - -Tests the exact GraphQL calls needed for: -1. search_feature_requests - search issues in the Customer Feature Requests project -2. add_feature_request - upsert customer + create customer need on issue - -Requires LINEAR_API_KEY in backend/.env -Generate one at: https://linear.app/settings/api -""" - -import json -import os -import sys - -import httpx -from dotenv import load_dotenv - -load_dotenv() - -LINEAR_API_URL = "https://api.linear.app/graphql" -API_KEY = os.getenv("LINEAR_API_KEY") - -# Target project for feature requests -FEATURE_REQUEST_PROJECT_ID = "13f066f3-f639-4a67-aaa3-31483ebdf8cd" -# Team: Internal -TEAM_ID = "557fd3d5-087e-43a9-83e3-476c8313ce49" - -if not API_KEY: - print("ERROR: LINEAR_API_KEY not found in .env") - print("Generate a personal API key at: https://linear.app/settings/api") - print("Then add LINEAR_API_KEY=lin_api_... to backend/.env") - sys.exit(1) - -HEADERS = { - "Authorization": API_KEY, - "Content-Type": "application/json", -} - - -def graphql(query: str, variables: dict | None = None) -> dict: - """Execute a GraphQL query against Linear API.""" - payload = {"query": query} - if variables: - payload["variables"] = variables - - resp = httpx.post(LINEAR_API_URL, json=payload, headers=HEADERS, timeout=30) - if resp.status_code != 200: - print(f"HTTP {resp.status_code}: {resp.text[:500]}") - resp.raise_for_status() - data = resp.json() - - if "errors" in data: - print(f"GraphQL Errors: {json.dumps(data['errors'], indent=2)}") - - return data - - -# --------------------------------------------------------------------------- -# QUERIES -# --------------------------------------------------------------------------- - -# Search issues within the feature requests project by title/description -SEARCH_ISSUES_IN_PROJECT = """ -query SearchFeatureRequests($filter: IssueFilter!, $first: Int) { - issues(filter: $filter, first: $first) { - nodes { - id - identifier - title - description - url - state { - name - type - } - project { - id - name - } - labels { - nodes { - name - } - } - } - } -} -""" - -# Get issue with its customer needs -GET_ISSUE_WITH_NEEDS = """ -query GetIssueWithNeeds($id: String!) { - issue(id: $id) { - id - identifier - title - url - needs { - nodes { - id - body - priority - customer { - id - name - domains - externalIds - } - } - } - } -} -""" - -# Search customers -SEARCH_CUSTOMERS = """ -query SearchCustomers($filter: CustomerFilter, $first: Int) { - customers(filter: $filter, first: $first) { - nodes { - id - name - domains - externalIds - revenue - size - status { - name - } - tier { - name - } - } - } -} -""" - -# --------------------------------------------------------------------------- -# MUTATIONS -# --------------------------------------------------------------------------- - -CUSTOMER_UPSERT = """ -mutation CustomerUpsert($input: CustomerUpsertInput!) { - customerUpsert(input: $input) { - success - customer { - id - name - domains - externalIds - } - } -} -""" - -CUSTOMER_NEED_CREATE = """ -mutation CustomerNeedCreate($input: CustomerNeedCreateInput!) { - customerNeedCreate(input: $input) { - success - need { - id - body - priority - customer { - id - name - } - issue { - id - identifier - title - } - } - } -} -""" - -ISSUE_CREATE = """ -mutation IssueCreate($input: IssueCreateInput!) { - issueCreate(input: $input) { - success - issue { - id - identifier - title - url - } - } -} -""" - - -# --------------------------------------------------------------------------- -# TESTS -# --------------------------------------------------------------------------- - - -def test_1_search_feature_requests(): - """Search for feature requests in the target project by keyword.""" - print("\n" + "=" * 60) - print("TEST 1: Search feature requests in project by keyword") - print("=" * 60) - - search_term = "agent" - result = graphql( - SEARCH_ISSUES_IN_PROJECT, - { - "filter": { - "project": {"id": {"eq": FEATURE_REQUEST_PROJECT_ID}}, - "or": [ - {"title": {"containsIgnoreCase": search_term}}, - {"description": {"containsIgnoreCase": search_term}}, - ], - }, - "first": 5, - }, - ) - - issues = result.get("data", {}).get("issues", {}).get("nodes", []) - for issue in issues: - proj = issue.get("project") or {} - print(f"\n [{issue['identifier']}] {issue['title']}") - print(f" Project: {proj.get('name', 'N/A')}") - print(f" State: {issue['state']['name']}") - print(f" URL: {issue['url']}") - - print(f"\n Found {len(issues)} issues matching '{search_term}'") - return issues - - -def test_2_list_all_in_project(): - """List all issues in the feature requests project.""" - print("\n" + "=" * 60) - print("TEST 2: List all issues in Customer Feature Requests project") - print("=" * 60) - - result = graphql( - SEARCH_ISSUES_IN_PROJECT, - { - "filter": { - "project": {"id": {"eq": FEATURE_REQUEST_PROJECT_ID}}, - }, - "first": 10, - }, - ) - - issues = result.get("data", {}).get("issues", {}).get("nodes", []) - if not issues: - print(" No issues in project yet (empty project)") - for issue in issues: - print(f"\n [{issue['identifier']}] {issue['title']}") - print(f" State: {issue['state']['name']}") - - print(f"\n Total: {len(issues)} issues") - return issues - - -def test_3_search_customers(): - """List existing customers.""" - print("\n" + "=" * 60) - print("TEST 3: List customers") - print("=" * 60) - - result = graphql(SEARCH_CUSTOMERS, {"first": 10}) - customers = result.get("data", {}).get("customers", {}).get("nodes", []) - - if not customers: - print(" No customers exist yet") - for c in customers: - status = c.get("status") or {} - tier = c.get("tier") or {} - print(f"\n [{c['id'][:8]}...] {c['name']}") - print(f" Domains: {c.get('domains', [])}") - print(f" External IDs: {c.get('externalIds', [])}") - print( - f" Status: {status.get('name', 'N/A')}, Tier: {tier.get('name', 'N/A')}" - ) - - print(f"\n Total: {len(customers)} customers") - return customers - - -def test_4_customer_upsert(): - """Upsert a test customer.""" - print("\n" + "=" * 60) - print("TEST 4: Customer upsert (find-or-create)") - print("=" * 60) - - result = graphql( - CUSTOMER_UPSERT, - { - "input": { - "name": "Test Customer (API Test)", - "domains": ["test-api-customer.example.com"], - "externalId": "test-customer-001", - } - }, - ) - - upsert = result.get("data", {}).get("customerUpsert", {}) - if upsert.get("success"): - customer = upsert["customer"] - print(f" Success! Customer: {customer['name']}") - print(f" ID: {customer['id']}") - print(f" Domains: {customer['domains']}") - print(f" External IDs: {customer['externalIds']}") - return customer - else: - print(f" Failed: {json.dumps(result, indent=2)}") - return None - - -def test_5_create_issue_and_need(customer_id: str): - """Create a new feature request issue and attach a customer need.""" - print("\n" + "=" * 60) - print("TEST 5: Create issue + customer need") - print("=" * 60) - - # Step 1: Create issue in the project - result = graphql( - ISSUE_CREATE, - { - "input": { - "title": "Test Feature Request (API Test - safe to delete)", - "description": "This is a test feature request created via the GraphQL API.", - "teamId": TEAM_ID, - "projectId": FEATURE_REQUEST_PROJECT_ID, - } - }, - ) - - data = result.get("data") - if not data: - print(f" Issue creation failed: {json.dumps(result, indent=2)}") - return None - issue_data = data.get("issueCreate", {}) - if not issue_data.get("success"): - print(f" Issue creation failed: {json.dumps(result, indent=2)}") - return None - - issue = issue_data["issue"] - print(f" Created issue: [{issue['identifier']}] {issue['title']}") - print(f" URL: {issue['url']}") - - # Step 2: Attach customer need - result = graphql( - CUSTOMER_NEED_CREATE, - { - "input": { - "customerId": customer_id, - "issueId": issue["id"], - "body": "Our team really needs this feature for our workflow. High priority for us!", - "priority": 0, - } - }, - ) - - need_data = result.get("data", {}).get("customerNeedCreate", {}) - if need_data.get("success"): - need = need_data["need"] - print(f" Attached customer need: {need['id']}") - print(f" Customer: {need['customer']['name']}") - print(f" Body: {need['body'][:80]}") - else: - print(f" Customer need creation failed: {json.dumps(result, indent=2)}") - - # Step 3: Verify by fetching the issue with needs - print("\n Verifying...") - verify = graphql(GET_ISSUE_WITH_NEEDS, {"id": issue["id"]}) - issue_verify = verify.get("data", {}).get("issue", {}) - needs = issue_verify.get("needs", {}).get("nodes", []) - print(f" Issue now has {len(needs)} customer need(s)") - for n in needs: - cust = n.get("customer") or {} - print(f" - {cust.get('name', 'N/A')}: {n.get('body', '')[:60]}") - - return issue - - -def test_6_add_need_to_existing(customer_id: str, issue_id: str): - """Add a customer need to an existing issue (the common case).""" - print("\n" + "=" * 60) - print("TEST 6: Add customer need to existing issue") - print("=" * 60) - - result = graphql( - CUSTOMER_NEED_CREATE, - { - "input": { - "customerId": customer_id, - "issueId": issue_id, - "body": "We also want this! +1 from our organization.", - "priority": 0, - } - }, - ) - - need_data = result.get("data", {}).get("customerNeedCreate", {}) - if need_data.get("success"): - need = need_data["need"] - print(f" Success! Need: {need['id']}") - print(f" Customer: {need['customer']['name']}") - print(f" Issue: [{need['issue']['identifier']}] {need['issue']['title']}") - return need - else: - print(f" Failed: {json.dumps(result, indent=2)}") - return None - - -def main(): - print("Linear GraphQL API - Customer Requests Test Suite") - print("=" * 60) - print(f"API URL: {LINEAR_API_URL}") - print(f"API Key: {API_KEY[:10]}...") - print(f"Project: Customer Feature Requests ({FEATURE_REQUEST_PROJECT_ID[:8]}...)") - - # --- Read-only tests --- - test_1_search_feature_requests() - test_2_list_all_in_project() - test_3_search_customers() - - # --- Write tests --- - print("\n" + "=" * 60) - answer = ( - input("Run WRITE tests? (creates test customer + issue + need) [y/N]: ") - .strip() - .lower() - ) - if answer != "y": - print("Skipped write tests.") - print("\nDone!") - return - - customer = test_4_customer_upsert() - if not customer: - print("Customer upsert failed, stopping.") - return - - issue = test_5_create_issue_and_need(customer["id"]) - if not issue: - print("Issue creation failed, stopping.") - return - - # Test adding a second need to the same issue (simulates another customer requesting same feature) - # First upsert a second customer - result = graphql( - CUSTOMER_UPSERT, - { - "input": { - "name": "Second Test Customer", - "domains": ["second-test.example.com"], - "externalId": "test-customer-002", - } - }, - ) - customer2 = result.get("data", {}).get("customerUpsert", {}).get("customer") - if customer2: - test_6_add_need_to_existing(customer2["id"], issue["id"]) - - print("\n" + "=" * 60) - print("All tests complete!") - print( - "Check the project: https://linear.app/autogpt/project/customer-feature-requests-710dcbf8bf4e/issues" - ) - - -if __name__ == "__main__": - main() diff --git a/autogpt_platform/frontend/src/app/(platform)/copilot/components/ChatMessagesContainer/ChatMessagesContainer.tsx b/autogpt_platform/frontend/src/app/(platform)/copilot/components/ChatMessagesContainer/ChatMessagesContainer.tsx index b62e96f58a..71ade81a9f 100644 --- a/autogpt_platform/frontend/src/app/(platform)/copilot/components/ChatMessagesContainer/ChatMessagesContainer.tsx +++ b/autogpt_platform/frontend/src/app/(platform)/copilot/components/ChatMessagesContainer/ChatMessagesContainer.tsx @@ -15,10 +15,6 @@ import { ToolUIPart, UIDataTypes, UIMessage, UITools } from "ai"; import { useEffect, useRef, useState } from "react"; import { CreateAgentTool } from "../../tools/CreateAgent/CreateAgent"; import { EditAgentTool } from "../../tools/EditAgent/EditAgent"; -import { - CreateFeatureRequestTool, - SearchFeatureRequestsTool, -} from "../../tools/FeatureRequests/FeatureRequests"; import { FindAgentsTool } from "../../tools/FindAgents/FindAgents"; import { FindBlocksTool } from "../../tools/FindBlocks/FindBlocks"; import { RunAgentTool } from "../../tools/RunAgent/RunAgent"; @@ -258,20 +254,6 @@ export const ChatMessagesContainer = ({ part={part as ToolUIPart} /> ); - case "tool-search_feature_requests": - return ( - - ); - case "tool-create_feature_request": - return ( - - ); default: return null; } diff --git a/autogpt_platform/frontend/src/app/(platform)/copilot/styleguide/page.tsx b/autogpt_platform/frontend/src/app/(platform)/copilot/styleguide/page.tsx index 8a35f939ca..6030665f1c 100644 --- a/autogpt_platform/frontend/src/app/(platform)/copilot/styleguide/page.tsx +++ b/autogpt_platform/frontend/src/app/(platform)/copilot/styleguide/page.tsx @@ -14,10 +14,6 @@ import { Text } from "@/components/atoms/Text/Text"; import { CopilotChatActionsProvider } from "../components/CopilotChatActionsProvider/CopilotChatActionsProvider"; import { CreateAgentTool } from "../tools/CreateAgent/CreateAgent"; import { EditAgentTool } from "../tools/EditAgent/EditAgent"; -import { - CreateFeatureRequestTool, - SearchFeatureRequestsTool, -} from "../tools/FeatureRequests/FeatureRequests"; import { FindAgentsTool } from "../tools/FindAgents/FindAgents"; import { FindBlocksTool } from "../tools/FindBlocks/FindBlocks"; import { RunAgentTool } from "../tools/RunAgent/RunAgent"; @@ -49,8 +45,6 @@ const SECTIONS = [ "Tool: Create Agent", "Tool: Edit Agent", "Tool: View Agent Output", - "Tool: Search Feature Requests", - "Tool: Create Feature Request", "Full Conversation Example", ] as const; @@ -1427,235 +1421,6 @@ export default function StyleguidePage() { - {/* ============================================================= */} - {/* SEARCH FEATURE REQUESTS */} - {/* ============================================================= */} - -
- - - - - - - - - - - - - - - - - - - - - - - -
- - {/* ============================================================= */} - {/* CREATE FEATURE REQUEST */} - {/* ============================================================= */} - -
- - - - - - - - - - - - - - - - - - - - - - - -
- {/* ============================================================= */} {/* FULL CONVERSATION EXAMPLE */} {/* ============================================================= */} diff --git a/autogpt_platform/frontend/src/app/(platform)/copilot/tools/FeatureRequests/FeatureRequests.tsx b/autogpt_platform/frontend/src/app/(platform)/copilot/tools/FeatureRequests/FeatureRequests.tsx deleted file mode 100644 index e14ec69397..0000000000 --- a/autogpt_platform/frontend/src/app/(platform)/copilot/tools/FeatureRequests/FeatureRequests.tsx +++ /dev/null @@ -1,240 +0,0 @@ -"use client"; - -import type { ToolUIPart } from "ai"; -import { useMemo } from "react"; - -import { MorphingTextAnimation } from "../../components/MorphingTextAnimation/MorphingTextAnimation"; -import { - ContentBadge, - ContentCard, - ContentCardDescription, - ContentCardHeader, - ContentCardTitle, - ContentGrid, - ContentLink, - ContentMessage, - ContentSuggestionsList, -} from "../../components/ToolAccordion/AccordionContent"; -import { ToolAccordion } from "../../components/ToolAccordion/ToolAccordion"; -import { - AccordionIcon, - getAccordionTitle, - getAnimationText, - getFeatureRequestOutput, - isCreatedOutput, - isErrorOutput, - isNoResultsOutput, - isSearchResultsOutput, - ToolIcon, - type FeatureRequestToolType, -} from "./helpers"; - -export interface FeatureRequestToolPart { - type: FeatureRequestToolType; - toolCallId: string; - state: ToolUIPart["state"]; - input?: unknown; - output?: unknown; -} - -interface Props { - part: FeatureRequestToolPart; -} - -function truncate(text: string, maxChars: number): string { - const trimmed = text.trim(); - if (trimmed.length <= maxChars) return trimmed; - return `${trimmed.slice(0, maxChars).trimEnd()}…`; -} - -export function SearchFeatureRequestsTool({ part }: Props) { - const output = getFeatureRequestOutput(part); - const text = getAnimationText(part); - const isStreaming = - part.state === "input-streaming" || part.state === "input-available"; - const isError = - part.state === "output-error" || (!!output && isErrorOutput(output)); - - const normalized = useMemo(() => { - if (!output) return null; - return { title: getAccordionTitle(part.type, output) }; - }, [output, part.type]); - - const isOutputAvailable = part.state === "output-available" && !!output; - - const searchOutput = - isOutputAvailable && output && isSearchResultsOutput(output) - ? output - : null; - const noResultsOutput = - isOutputAvailable && output && isNoResultsOutput(output) ? output : null; - const errorOutput = - isOutputAvailable && output && isErrorOutput(output) ? output : null; - - const hasExpandableContent = - isOutputAvailable && - ((!!searchOutput && searchOutput.count > 0) || - !!noResultsOutput || - !!errorOutput); - - const accordionDescription = - hasExpandableContent && searchOutput - ? `Found ${searchOutput.count} result${searchOutput.count === 1 ? "" : "s"} for "${searchOutput.query}"` - : hasExpandableContent && (noResultsOutput || errorOutput) - ? ((noResultsOutput ?? errorOutput)?.message ?? null) - : null; - - return ( -
-
- - -
- - {hasExpandableContent && normalized && ( - } - title={normalized.title} - description={accordionDescription} - > - {searchOutput && ( - - {searchOutput.results.map((r) => ( - - - - {r.identifier} — {r.title} - - - {r.description && ( - - {truncate(r.description, 200)} - - )} - - ))} - - )} - - {noResultsOutput && ( -
- {noResultsOutput.message} - {noResultsOutput.suggestions && - noResultsOutput.suggestions.length > 0 && ( - - )} -
- )} - - {errorOutput && ( -
- {errorOutput.message} - {errorOutput.error && ( - - {errorOutput.error} - - )} -
- )} -
- )} -
- ); -} - -export function CreateFeatureRequestTool({ part }: Props) { - const output = getFeatureRequestOutput(part); - const text = getAnimationText(part); - const isStreaming = - part.state === "input-streaming" || part.state === "input-available"; - const isError = - part.state === "output-error" || (!!output && isErrorOutput(output)); - - const normalized = useMemo(() => { - if (!output) return null; - return { title: getAccordionTitle(part.type, output) }; - }, [output, part.type]); - - const isOutputAvailable = part.state === "output-available" && !!output; - - const createdOutput = - isOutputAvailable && output && isCreatedOutput(output) ? output : null; - const errorOutput = - isOutputAvailable && output && isErrorOutput(output) ? output : null; - - const hasExpandableContent = - isOutputAvailable && (!!createdOutput || !!errorOutput); - - const accordionDescription = - hasExpandableContent && createdOutput - ? `${createdOutput.issue_identifier} — ${createdOutput.issue_title}` - : hasExpandableContent && errorOutput - ? errorOutput.message - : null; - - return ( -
-
- - -
- - {hasExpandableContent && normalized && ( - } - title={normalized.title} - description={accordionDescription} - > - {createdOutput && ( - - - View - - ) : undefined - } - > - - {createdOutput.issue_identifier} — {createdOutput.issue_title} - - -
- - {createdOutput.is_new_issue ? "New" : "Existing"} - -
- {createdOutput.message} -
- )} - - {errorOutput && ( -
- {errorOutput.message} - {errorOutput.error && ( - - {errorOutput.error} - - )} -
- )} -
- )} -
- ); -} diff --git a/autogpt_platform/frontend/src/app/(platform)/copilot/tools/FeatureRequests/helpers.tsx b/autogpt_platform/frontend/src/app/(platform)/copilot/tools/FeatureRequests/helpers.tsx deleted file mode 100644 index ed292faf2b..0000000000 --- a/autogpt_platform/frontend/src/app/(platform)/copilot/tools/FeatureRequests/helpers.tsx +++ /dev/null @@ -1,271 +0,0 @@ -import { - CheckCircleIcon, - LightbulbIcon, - MagnifyingGlassIcon, - PlusCircleIcon, -} from "@phosphor-icons/react"; -import type { ToolUIPart } from "ai"; - -/* ------------------------------------------------------------------ */ -/* Types (local until API client is regenerated) */ -/* ------------------------------------------------------------------ */ - -interface FeatureRequestInfo { - id: string; - identifier: string; - title: string; - description?: string | null; -} - -export interface FeatureRequestSearchResponse { - type: "feature_request_search"; - message: string; - results: FeatureRequestInfo[]; - count: number; - query: string; -} - -export interface FeatureRequestCreatedResponse { - type: "feature_request_created"; - message: string; - issue_id: string; - issue_identifier: string; - issue_title: string; - issue_url: string; - is_new_issue: boolean; - customer_name: string; -} - -interface NoResultsResponse { - type: "no_results"; - message: string; - suggestions?: string[]; -} - -interface ErrorResponse { - type: "error"; - message: string; - error?: string; -} - -export type FeatureRequestOutput = - | FeatureRequestSearchResponse - | FeatureRequestCreatedResponse - | NoResultsResponse - | ErrorResponse; - -export type FeatureRequestToolType = - | "tool-search_feature_requests" - | "tool-create_feature_request" - | string; - -/* ------------------------------------------------------------------ */ -/* Output parsing */ -/* ------------------------------------------------------------------ */ - -function parseOutput(output: unknown): FeatureRequestOutput | null { - if (!output) return null; - if (typeof output === "string") { - const trimmed = output.trim(); - if (!trimmed) return null; - try { - return parseOutput(JSON.parse(trimmed) as unknown); - } catch { - return null; - } - } - if (typeof output === "object") { - const type = (output as { type?: unknown }).type; - if ( - type === "feature_request_search" || - type === "feature_request_created" || - type === "no_results" || - type === "error" - ) { - return output as FeatureRequestOutput; - } - // Fallback structural checks - if ("results" in output && "query" in output) - return output as FeatureRequestSearchResponse; - if ("issue_identifier" in output) - return output as FeatureRequestCreatedResponse; - if ("suggestions" in output && !("error" in output)) - return output as NoResultsResponse; - if ("error" in output || "details" in output) - return output as ErrorResponse; - } - return null; -} - -export function getFeatureRequestOutput( - part: unknown, -): FeatureRequestOutput | null { - if (!part || typeof part !== "object") return null; - return parseOutput((part as { output?: unknown }).output); -} - -/* ------------------------------------------------------------------ */ -/* Type guards */ -/* ------------------------------------------------------------------ */ - -export function isSearchResultsOutput( - output: FeatureRequestOutput, -): output is FeatureRequestSearchResponse { - return ( - output.type === "feature_request_search" || - ("results" in output && "query" in output) - ); -} - -export function isCreatedOutput( - output: FeatureRequestOutput, -): output is FeatureRequestCreatedResponse { - return ( - output.type === "feature_request_created" || "issue_identifier" in output - ); -} - -export function isNoResultsOutput( - output: FeatureRequestOutput, -): output is NoResultsResponse { - return ( - output.type === "no_results" || - ("suggestions" in output && !("error" in output)) - ); -} - -export function isErrorOutput( - output: FeatureRequestOutput, -): output is ErrorResponse { - return output.type === "error" || "error" in output; -} - -/* ------------------------------------------------------------------ */ -/* Accordion metadata */ -/* ------------------------------------------------------------------ */ - -export function getAccordionTitle( - toolType: FeatureRequestToolType, - output: FeatureRequestOutput, -): string { - if (toolType === "tool-search_feature_requests") { - if (isSearchResultsOutput(output)) return "Feature requests"; - if (isNoResultsOutput(output)) return "No feature requests found"; - return "Feature request search error"; - } - if (isCreatedOutput(output)) { - return output.is_new_issue - ? "Feature request created" - : "Added to feature request"; - } - if (isErrorOutput(output)) return "Feature request error"; - return "Feature request"; -} - -/* ------------------------------------------------------------------ */ -/* Animation text */ -/* ------------------------------------------------------------------ */ - -interface AnimationPart { - type: FeatureRequestToolType; - state: ToolUIPart["state"]; - input?: unknown; - output?: unknown; -} - -export function getAnimationText(part: AnimationPart): string { - if (part.type === "tool-search_feature_requests") { - const query = (part.input as { query?: string } | undefined)?.query?.trim(); - const queryText = query ? ` for "${query}"` : ""; - - switch (part.state) { - case "input-streaming": - case "input-available": - return `Searching feature requests${queryText}`; - case "output-available": { - const output = parseOutput(part.output); - if (!output) return `Searching feature requests${queryText}`; - if (isSearchResultsOutput(output)) { - return `Found ${output.count} feature request${output.count === 1 ? "" : "s"}${queryText}`; - } - if (isNoResultsOutput(output)) - return `No feature requests found${queryText}`; - return `Error searching feature requests${queryText}`; - } - case "output-error": - return `Error searching feature requests${queryText}`; - default: - return "Searching feature requests"; - } - } - - // create_feature_request - const title = (part.input as { title?: string } | undefined)?.title?.trim(); - const titleText = title ? ` "${title}"` : ""; - - switch (part.state) { - case "input-streaming": - case "input-available": - return `Creating feature request${titleText}`; - case "output-available": { - const output = parseOutput(part.output); - if (!output) return `Creating feature request${titleText}`; - if (isCreatedOutput(output)) { - return output.is_new_issue - ? `Created ${output.issue_identifier}` - : `Added to ${output.issue_identifier}`; - } - if (isErrorOutput(output)) return "Error creating feature request"; - return `Created feature request${titleText}`; - } - case "output-error": - return "Error creating feature request"; - default: - return "Creating feature request"; - } -} - -/* ------------------------------------------------------------------ */ -/* Icons */ -/* ------------------------------------------------------------------ */ - -export function ToolIcon({ - toolType, - isStreaming, - isError, -}: { - toolType: FeatureRequestToolType; - isStreaming?: boolean; - isError?: boolean; -}) { - const IconComponent = - toolType === "tool-create_feature_request" - ? PlusCircleIcon - : MagnifyingGlassIcon; - - return ( - - ); -} - -export function AccordionIcon({ - toolType, -}: { - toolType: FeatureRequestToolType; -}) { - const IconComponent = - toolType === "tool-create_feature_request" - ? CheckCircleIcon - : LightbulbIcon; - return ; -} diff --git a/autogpt_platform/frontend/src/app/api/openapi.json b/autogpt_platform/frontend/src/app/api/openapi.json index a0eb141aa9..5d2cb83f7c 100644 --- a/autogpt_platform/frontend/src/app/api/openapi.json +++ b/autogpt_platform/frontend/src/app/api/openapi.json @@ -10495,9 +10495,7 @@ "operation_started", "operation_pending", "operation_in_progress", - "input_validation_error", - "feature_request_search", - "feature_request_created" + "input_validation_error" ], "title": "ResponseType", "description": "Types of tool responses." From cb166dd6fb80b42da54da6051b26fd25bebe4517 Mon Sep 17 00:00:00 2001 From: Nicholas Tindle Date: Thu, 12 Feb 2026 09:56:59 -0600 Subject: [PATCH 11/18] feat(blocks): Store sandbox files to workspace (#12073) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Store files created by sandbox blocks (Claude Code, Code Executor) to the user's workspace for persistence across runs. ### Changes 🏗️ - **New `sandbox_files.py` utility** (`backend/util/sandbox_files.py`) - Shared module for extracting files from E2B sandboxes - Stores files to workspace via `store_media_file()` (includes virus scanning, size limits) - Returns `SandboxFileOutput` with path, content, and `workspace_ref` - **Claude Code block** (`backend/blocks/claude_code.py`) - Added `workspace_ref` field to `FileOutput` schema - Replaced inline `_extract_files()` with shared utility - Files from working directory now stored to workspace automatically - **Code Executor block** (`backend/blocks/code_executor.py`) - Added `files` output field to `ExecuteCodeBlock.Output` - Creates `/output` directory in sandbox before execution - Extracts all files (text + binary) from `/output` after execution - Updated `execute_code()` to support file extraction with `extract_files` param ### Checklist 📋 #### For code changes: - [x] I have clearly listed my changes in the PR description - [x] I have made a test plan - [x] I have tested my changes according to the test plan: - [x] Create agent with Claude Code block, have it create a file, verify `workspace_ref` in output - [x] Create agent with Code Executor block, write file to `/output`, verify `workspace_ref` in output - [x] Verify files persist in workspace after sandbox disposal - [x] Verify binary files (images, etc.) work correctly in Code Executor - [x] Verify existing graphs using `content` field still work (backward compat) #### For configuration changes: - [x] `.env.default` is updated or already compatible with my changes - [x] `docker-compose.yml` is updated or already compatible with my changes - [x] I have included a list of my configuration changes in the PR description (under **Changes**) No configuration changes required - this is purely additive backend code. --- **Related:** Closes SECRT-1931 --- > [!NOTE] > **Medium Risk** > Adds automatic extraction and workspace storage of sandbox-written files (including binaries for code execution), which can affect output payload size, performance, and file-handling edge cases. > > **Overview** > **Sandbox blocks now persist generated files to workspace.** A new shared utility (`backend/util/sandbox_files.py`) extracts files from an E2B sandbox (scoped by a start timestamp) and stores them via `store_media_file`, returning `SandboxFileOutput` with `workspace_ref`. > > `ClaudeCodeBlock` replaces its inline file-scraping logic with this utility and updates the `files` output schema to include `workspace_ref`. > > `ExecuteCodeBlock` adds a `files` output and extends the executor mixin to optionally extract/store files (text + binary) when an `execution_context` is provided; related mocks/tests and docs are updated accordingly. > > Written by [Cursor Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit 343854c0cf971cffc975c466e79bbbc2f9fd7271. This will update automatically on new commits. Configure [here](https://cursor.com/dashboard?tab=bugbot). --------- Co-authored-by: Claude Opus 4.6 --- .../backend/backend/blocks/claude_code.py | 179 ++--------- .../backend/backend/blocks/code_executor.py | 73 ++++- .../backend/backend/util/sandbox_files.py | 288 ++++++++++++++++++ docs/integrations/block-integrations/llm.md | 2 +- docs/integrations/block-integrations/misc.md | 1 + 5 files changed, 383 insertions(+), 160 deletions(-) create mode 100644 autogpt_platform/backend/backend/util/sandbox_files.py diff --git a/autogpt_platform/backend/backend/blocks/claude_code.py b/autogpt_platform/backend/backend/blocks/claude_code.py index 1919406c6f..2e870f02b6 100644 --- a/autogpt_platform/backend/backend/blocks/claude_code.py +++ b/autogpt_platform/backend/backend/blocks/claude_code.py @@ -1,10 +1,10 @@ import json import shlex import uuid -from typing import Literal, Optional +from typing import TYPE_CHECKING, Literal, Optional from e2b import AsyncSandbox as BaseAsyncSandbox -from pydantic import BaseModel, SecretStr +from pydantic import SecretStr from backend.blocks._base import ( Block, @@ -20,6 +20,13 @@ from backend.data.model import ( SchemaField, ) from backend.integrations.providers import ProviderName +from backend.util.sandbox_files import ( + SandboxFileOutput, + extract_and_store_sandbox_files, +) + +if TYPE_CHECKING: + from backend.executor.utils import ExecutionContext class ClaudeCodeExecutionError(Exception): @@ -174,22 +181,15 @@ class ClaudeCodeBlock(Block): advanced=True, ) - class FileOutput(BaseModel): - """A file extracted from the sandbox.""" - - path: str - relative_path: str # Path relative to working directory (for GitHub, etc.) - name: str - content: str - class Output(BlockSchemaOutput): response: str = SchemaField( description="The output/response from Claude Code execution" ) - files: list["ClaudeCodeBlock.FileOutput"] = SchemaField( + files: list[SandboxFileOutput] = SchemaField( description=( "List of text files created/modified by Claude Code during this execution. " - "Each file has 'path', 'relative_path', 'name', and 'content' fields." + "Each file has 'path', 'relative_path', 'name', 'content', and 'workspace_ref' fields. " + "workspace_ref contains a workspace:// URI if the file was stored to workspace." ) ) conversation_history: str = SchemaField( @@ -252,6 +252,7 @@ class ClaudeCodeBlock(Block): "relative_path": "index.html", "name": "index.html", "content": "Hello World", + "workspace_ref": None, } ], ), @@ -267,11 +268,12 @@ class ClaudeCodeBlock(Block): "execute_claude_code": lambda *args, **kwargs: ( "Created index.html with hello world content", # response [ - ClaudeCodeBlock.FileOutput( + SandboxFileOutput( path="/home/user/index.html", relative_path="index.html", name="index.html", content="Hello World", + workspace_ref=None, ) ], # files "User: Create a hello world HTML file\n" @@ -294,7 +296,8 @@ class ClaudeCodeBlock(Block): existing_sandbox_id: str, conversation_history: str, dispose_sandbox: bool, - ) -> tuple[str, list["ClaudeCodeBlock.FileOutput"], str, str, str]: + execution_context: "ExecutionContext", + ) -> tuple[str, list[SandboxFileOutput], str, str, str]: """ Execute Claude Code in an E2B sandbox. @@ -449,14 +452,18 @@ class ClaudeCodeBlock(Block): else: new_conversation_history = turn_entry - # Extract files created/modified during this run - files = await self._extract_files( - sandbox, working_directory, start_timestamp + # Extract files created/modified during this run and store to workspace + sandbox_files = await extract_and_store_sandbox_files( + sandbox=sandbox, + working_directory=working_directory, + execution_context=execution_context, + since_timestamp=start_timestamp, + text_only=True, ) return ( response, - files, + sandbox_files, # Already SandboxFileOutput objects new_conversation_history, current_session_id, sandbox_id, @@ -471,140 +478,6 @@ class ClaudeCodeBlock(Block): if dispose_sandbox and sandbox: await sandbox.kill() - async def _extract_files( - self, - sandbox: BaseAsyncSandbox, - working_directory: str, - since_timestamp: str | None = None, - ) -> list["ClaudeCodeBlock.FileOutput"]: - """ - Extract text files created/modified during this Claude Code execution. - - Args: - sandbox: The E2B sandbox instance - working_directory: Directory to search for files - since_timestamp: ISO timestamp - only return files modified after this time - - Returns: - List of FileOutput objects with path, relative_path, name, and content - """ - files: list[ClaudeCodeBlock.FileOutput] = [] - - # Text file extensions we can safely read as text - text_extensions = { - ".txt", - ".md", - ".html", - ".htm", - ".css", - ".js", - ".ts", - ".jsx", - ".tsx", - ".json", - ".xml", - ".yaml", - ".yml", - ".toml", - ".ini", - ".cfg", - ".conf", - ".py", - ".rb", - ".php", - ".java", - ".c", - ".cpp", - ".h", - ".hpp", - ".cs", - ".go", - ".rs", - ".swift", - ".kt", - ".scala", - ".sh", - ".bash", - ".zsh", - ".sql", - ".graphql", - ".env", - ".gitignore", - ".dockerfile", - "Dockerfile", - ".vue", - ".svelte", - ".astro", - ".mdx", - ".rst", - ".tex", - ".csv", - ".log", - } - - try: - # List files recursively using find command - # Exclude node_modules and .git directories, but allow hidden files - # like .env and .gitignore (they're filtered by text_extensions later) - # Filter by timestamp to only get files created/modified during this run - safe_working_dir = shlex.quote(working_directory) - timestamp_filter = "" - if since_timestamp: - timestamp_filter = f"-newermt {shlex.quote(since_timestamp)} " - find_result = await sandbox.commands.run( - f"find {safe_working_dir} -type f " - f"{timestamp_filter}" - f"-not -path '*/node_modules/*' " - f"-not -path '*/.git/*' " - f"2>/dev/null" - ) - - if find_result.stdout: - for file_path in find_result.stdout.strip().split("\n"): - if not file_path: - continue - - # Check if it's a text file we can read - is_text = any( - file_path.endswith(ext) for ext in text_extensions - ) or file_path.endswith("Dockerfile") - - if is_text: - try: - content = await sandbox.files.read(file_path) - # Handle bytes or string - if isinstance(content, bytes): - content = content.decode("utf-8", errors="replace") - - # Extract filename from path - file_name = file_path.split("/")[-1] - - # Calculate relative path by stripping working directory - relative_path = file_path - if file_path.startswith(working_directory): - relative_path = file_path[len(working_directory) :] - # Remove leading slash if present - if relative_path.startswith("/"): - relative_path = relative_path[1:] - - files.append( - ClaudeCodeBlock.FileOutput( - path=file_path, - relative_path=relative_path, - name=file_name, - content=content, - ) - ) - except Exception: - # Skip files that can't be read - pass - - except Exception: - # If file extraction fails, return empty results - pass - - return files - def _escape_prompt(self, prompt: str) -> str: """Escape the prompt for safe shell execution.""" # Use single quotes and escape any single quotes in the prompt @@ -617,6 +490,7 @@ class ClaudeCodeBlock(Block): *, e2b_credentials: APIKeyCredentials, anthropic_credentials: APIKeyCredentials, + execution_context: "ExecutionContext", **kwargs, ) -> BlockOutput: try: @@ -637,6 +511,7 @@ class ClaudeCodeBlock(Block): existing_sandbox_id=input_data.sandbox_id, conversation_history=input_data.conversation_history, dispose_sandbox=input_data.dispose_sandbox, + execution_context=execution_context, ) yield "response", response diff --git a/autogpt_platform/backend/backend/blocks/code_executor.py b/autogpt_platform/backend/backend/blocks/code_executor.py index 766f44b7bb..26bf9acd4f 100644 --- a/autogpt_platform/backend/backend/blocks/code_executor.py +++ b/autogpt_platform/backend/backend/blocks/code_executor.py @@ -1,5 +1,5 @@ from enum import Enum -from typing import Any, Literal, Optional +from typing import TYPE_CHECKING, Any, Literal, Optional from e2b_code_interpreter import AsyncSandbox from e2b_code_interpreter import Result as E2BExecutionResult @@ -20,6 +20,13 @@ from backend.data.model import ( SchemaField, ) from backend.integrations.providers import ProviderName +from backend.util.sandbox_files import ( + SandboxFileOutput, + extract_and_store_sandbox_files, +) + +if TYPE_CHECKING: + from backend.executor.utils import ExecutionContext TEST_CREDENTIALS = APIKeyCredentials( id="01234567-89ab-cdef-0123-456789abcdef", @@ -85,6 +92,9 @@ class CodeExecutionResult(MainCodeExecutionResult): class BaseE2BExecutorMixin: """Shared implementation methods for E2B executor blocks.""" + # Default working directory in E2B sandboxes + WORKING_DIR = "/home/user" + async def execute_code( self, api_key: str, @@ -95,14 +105,21 @@ class BaseE2BExecutorMixin: timeout: Optional[int] = None, sandbox_id: Optional[str] = None, dispose_sandbox: bool = False, + execution_context: Optional["ExecutionContext"] = None, + extract_files: bool = False, ): """ Unified code execution method that handles all three use cases: 1. Create new sandbox and execute (ExecuteCodeBlock) 2. Create new sandbox, execute, and return sandbox_id (InstantiateCodeSandboxBlock) 3. Connect to existing sandbox and execute (ExecuteCodeStepBlock) + + Args: + extract_files: If True and execution_context provided, extract files + created/modified during execution and store to workspace. """ # noqa sandbox = None + files: list[SandboxFileOutput] = [] try: if sandbox_id: # Connect to existing sandbox (ExecuteCodeStepBlock case) @@ -118,6 +135,12 @@ class BaseE2BExecutorMixin: for cmd in setup_commands: await sandbox.commands.run(cmd) + # Capture timestamp before execution to scope file extraction + start_timestamp = None + if extract_files: + ts_result = await sandbox.commands.run("date -u +%Y-%m-%dT%H:%M:%S") + start_timestamp = ts_result.stdout.strip() if ts_result.stdout else None + # Execute the code execution = await sandbox.run_code( code, @@ -133,7 +156,24 @@ class BaseE2BExecutorMixin: stdout_logs = "".join(execution.logs.stdout) stderr_logs = "".join(execution.logs.stderr) - return results, text_output, stdout_logs, stderr_logs, sandbox.sandbox_id + # Extract files created/modified during this execution + if extract_files and execution_context: + files = await extract_and_store_sandbox_files( + sandbox=sandbox, + working_directory=self.WORKING_DIR, + execution_context=execution_context, + since_timestamp=start_timestamp, + text_only=False, # Include binary files too + ) + + return ( + results, + text_output, + stdout_logs, + stderr_logs, + sandbox.sandbox_id, + files, + ) finally: # Dispose of sandbox if requested to reduce usage costs if dispose_sandbox and sandbox: @@ -238,6 +278,12 @@ class ExecuteCodeBlock(Block, BaseE2BExecutorMixin): description="Standard output logs from execution" ) stderr_logs: str = SchemaField(description="Standard error logs from execution") + files: list[SandboxFileOutput] = SchemaField( + description=( + "Files created or modified during execution. " + "Each file has path, name, content, and workspace_ref (if stored)." + ), + ) def __init__(self): super().__init__( @@ -259,23 +305,30 @@ class ExecuteCodeBlock(Block, BaseE2BExecutorMixin): ("results", []), ("response", "Hello World"), ("stdout_logs", "Hello World\n"), + ("files", []), ], test_mock={ - "execute_code": lambda api_key, code, language, template_id, setup_commands, timeout, dispose_sandbox: ( # noqa + "execute_code": lambda api_key, code, language, template_id, setup_commands, timeout, dispose_sandbox, execution_context, extract_files: ( # noqa [], # results "Hello World", # text_output "Hello World\n", # stdout_logs "", # stderr_logs "sandbox_id", # sandbox_id + [], # files ), }, ) async def run( - self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs + self, + input_data: Input, + *, + credentials: APIKeyCredentials, + execution_context: "ExecutionContext", + **kwargs, ) -> BlockOutput: try: - results, text_output, stdout, stderr, _ = await self.execute_code( + results, text_output, stdout, stderr, _, files = await self.execute_code( api_key=credentials.api_key.get_secret_value(), code=input_data.code, language=input_data.language, @@ -283,6 +336,8 @@ class ExecuteCodeBlock(Block, BaseE2BExecutorMixin): setup_commands=input_data.setup_commands, timeout=input_data.timeout, dispose_sandbox=input_data.dispose_sandbox, + execution_context=execution_context, + extract_files=True, ) # Determine result object shape & filter out empty formats @@ -296,6 +351,8 @@ class ExecuteCodeBlock(Block, BaseE2BExecutorMixin): yield "stdout_logs", stdout if stderr: yield "stderr_logs", stderr + # Always yield files (empty list if none) + yield "files", [f.model_dump() for f in files] except Exception as e: yield "error", str(e) @@ -393,6 +450,7 @@ class InstantiateCodeSandboxBlock(Block, BaseE2BExecutorMixin): "Hello World\n", # stdout_logs "", # stderr_logs "sandbox_id", # sandbox_id + [], # files ), }, ) @@ -401,7 +459,7 @@ class InstantiateCodeSandboxBlock(Block, BaseE2BExecutorMixin): self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs ) -> BlockOutput: try: - _, text_output, stdout, stderr, sandbox_id = await self.execute_code( + _, text_output, stdout, stderr, sandbox_id, _ = await self.execute_code( api_key=credentials.api_key.get_secret_value(), code=input_data.setup_code, language=input_data.language, @@ -500,6 +558,7 @@ class ExecuteCodeStepBlock(Block, BaseE2BExecutorMixin): "Hello World\n", # stdout_logs "", # stderr_logs sandbox_id, # sandbox_id + [], # files ), }, ) @@ -508,7 +567,7 @@ class ExecuteCodeStepBlock(Block, BaseE2BExecutorMixin): self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs ) -> BlockOutput: try: - results, text_output, stdout, stderr, _ = await self.execute_code( + results, text_output, stdout, stderr, _, _ = await self.execute_code( api_key=credentials.api_key.get_secret_value(), code=input_data.step_code, language=input_data.language, diff --git a/autogpt_platform/backend/backend/util/sandbox_files.py b/autogpt_platform/backend/backend/util/sandbox_files.py new file mode 100644 index 0000000000..9db53ded14 --- /dev/null +++ b/autogpt_platform/backend/backend/util/sandbox_files.py @@ -0,0 +1,288 @@ +""" +Shared utilities for extracting and storing files from E2B sandboxes. + +This module provides common file extraction and workspace storage functionality +for blocks that run code in E2B sandboxes (Claude Code, Code Executor, etc.). +""" + +import base64 +import logging +import mimetypes +import shlex +from dataclasses import dataclass +from typing import TYPE_CHECKING + +from pydantic import BaseModel + +from backend.util.file import store_media_file +from backend.util.type import MediaFileType + +if TYPE_CHECKING: + from e2b import AsyncSandbox as BaseAsyncSandbox + + from backend.executor.utils import ExecutionContext + +logger = logging.getLogger(__name__) + +# Text file extensions that can be safely read and stored as text +TEXT_EXTENSIONS = { + ".txt", + ".md", + ".html", + ".htm", + ".css", + ".js", + ".ts", + ".jsx", + ".tsx", + ".json", + ".xml", + ".yaml", + ".yml", + ".toml", + ".ini", + ".cfg", + ".conf", + ".py", + ".rb", + ".php", + ".java", + ".c", + ".cpp", + ".h", + ".hpp", + ".cs", + ".go", + ".rs", + ".swift", + ".kt", + ".scala", + ".sh", + ".bash", + ".zsh", + ".sql", + ".graphql", + ".env", + ".gitignore", + ".dockerfile", + "Dockerfile", + ".vue", + ".svelte", + ".astro", + ".mdx", + ".rst", + ".tex", + ".csv", + ".log", +} + + +class SandboxFileOutput(BaseModel): + """A file extracted from a sandbox and optionally stored in workspace.""" + + path: str + """Full path in the sandbox.""" + + relative_path: str + """Path relative to the working directory.""" + + name: str + """Filename only.""" + + content: str + """File content as text (for backward compatibility).""" + + workspace_ref: str | None = None + """Workspace reference (workspace://{id}#mime) if stored, None otherwise.""" + + +@dataclass +class ExtractedFile: + """Internal representation of an extracted file before storage.""" + + path: str + relative_path: str + name: str + content: bytes + is_text: bool + + +async def extract_sandbox_files( + sandbox: "BaseAsyncSandbox", + working_directory: str, + since_timestamp: str | None = None, + text_only: bool = True, +) -> list[ExtractedFile]: + """ + Extract files from an E2B sandbox. + + Args: + sandbox: The E2B sandbox instance + working_directory: Directory to search for files + since_timestamp: ISO timestamp - only return files modified after this time + text_only: If True, only extract text files (default). If False, extract all files. + + Returns: + List of ExtractedFile objects with path, content, and metadata + """ + files: list[ExtractedFile] = [] + + try: + # Build find command + safe_working_dir = shlex.quote(working_directory) + timestamp_filter = "" + if since_timestamp: + timestamp_filter = f"-newermt {shlex.quote(since_timestamp)} " + + find_result = await sandbox.commands.run( + f"find {safe_working_dir} -type f " + f"{timestamp_filter}" + f"-not -path '*/node_modules/*' " + f"-not -path '*/.git/*' " + f"2>/dev/null" + ) + + if not find_result.stdout: + return files + + for file_path in find_result.stdout.strip().split("\n"): + if not file_path: + continue + + # Check if it's a text file + is_text = any(file_path.endswith(ext) for ext in TEXT_EXTENSIONS) + + # Skip non-text files if text_only mode + if text_only and not is_text: + continue + + try: + # Read file content as bytes + content = await sandbox.files.read(file_path, format="bytes") + if isinstance(content, str): + content = content.encode("utf-8") + elif isinstance(content, bytearray): + content = bytes(content) + + # Extract filename from path + file_name = file_path.split("/")[-1] + + # Calculate relative path + relative_path = file_path + if file_path.startswith(working_directory): + relative_path = file_path[len(working_directory) :] + if relative_path.startswith("/"): + relative_path = relative_path[1:] + + files.append( + ExtractedFile( + path=file_path, + relative_path=relative_path, + name=file_name, + content=content, + is_text=is_text, + ) + ) + except Exception as e: + logger.debug(f"Failed to read file {file_path}: {e}") + continue + + except Exception as e: + logger.warning(f"File extraction failed: {e}") + + return files + + +async def store_sandbox_files( + extracted_files: list[ExtractedFile], + execution_context: "ExecutionContext", +) -> list[SandboxFileOutput]: + """ + Store extracted sandbox files to workspace and return output objects. + + Args: + extracted_files: List of files extracted from sandbox + execution_context: Execution context for workspace storage + + Returns: + List of SandboxFileOutput objects with workspace refs + """ + outputs: list[SandboxFileOutput] = [] + + for file in extracted_files: + # Decode content for text files (for backward compat content field) + if file.is_text: + try: + content_str = file.content.decode("utf-8", errors="replace") + except Exception: + content_str = "" + else: + content_str = f"[Binary file: {len(file.content)} bytes]" + + # Build data URI (needed for storage and as binary fallback) + mime_type = mimetypes.guess_type(file.name)[0] or "application/octet-stream" + data_uri = f"data:{mime_type};base64,{base64.b64encode(file.content).decode()}" + + # Try to store in workspace + workspace_ref: str | None = None + try: + result = await store_media_file( + file=MediaFileType(data_uri), + execution_context=execution_context, + return_format="for_block_output", + ) + if result.startswith("workspace://"): + workspace_ref = result + elif not file.is_text: + # Non-workspace context (graph execution): store_media_file + # returned a data URI — use it as content so binary data isn't lost. + content_str = result + except Exception as e: + logger.warning(f"Failed to store file {file.name} to workspace: {e}") + # For binary files, fall back to data URI to prevent data loss + if not file.is_text: + content_str = data_uri + + outputs.append( + SandboxFileOutput( + path=file.path, + relative_path=file.relative_path, + name=file.name, + content=content_str, + workspace_ref=workspace_ref, + ) + ) + + return outputs + + +async def extract_and_store_sandbox_files( + sandbox: "BaseAsyncSandbox", + working_directory: str, + execution_context: "ExecutionContext", + since_timestamp: str | None = None, + text_only: bool = True, +) -> list[SandboxFileOutput]: + """ + Extract files from sandbox and store them in workspace. + + This is the main entry point combining extraction and storage. + + Args: + sandbox: The E2B sandbox instance + working_directory: Directory to search for files + execution_context: Execution context for workspace storage + since_timestamp: ISO timestamp - only return files modified after this time + text_only: If True, only extract text files + + Returns: + List of SandboxFileOutput objects with content and workspace refs + """ + extracted = await extract_sandbox_files( + sandbox=sandbox, + working_directory=working_directory, + since_timestamp=since_timestamp, + text_only=text_only, + ) + + return await store_sandbox_files(extracted, execution_context) diff --git a/docs/integrations/block-integrations/llm.md b/docs/integrations/block-integrations/llm.md index 20a5147fcd..9c96ef56c0 100644 --- a/docs/integrations/block-integrations/llm.md +++ b/docs/integrations/block-integrations/llm.md @@ -563,7 +563,7 @@ The block supports conversation continuation through three mechanisms: |--------|-------------|------| | error | Error message if execution failed | str | | response | The output/response from Claude Code execution | str | -| files | List of text files created/modified by Claude Code during this execution. Each file has 'path', 'relative_path', 'name', and 'content' fields. | List[FileOutput] | +| files | List of text files created/modified by Claude Code during this execution. Each file has 'path', 'relative_path', 'name', 'content', and 'workspace_ref' fields. workspace_ref contains a workspace:// URI if the file was stored to workspace. | List[SandboxFileOutput] | | conversation_history | Full conversation history including this turn. Pass this to conversation_history input to continue on a fresh sandbox if the previous sandbox timed out. | str | | session_id | Session ID for this conversation. Pass this back along with sandbox_id to continue the conversation. | str | | sandbox_id | ID of the sandbox instance. Pass this back along with session_id to continue the conversation. This is None if dispose_sandbox was True (sandbox was disposed). | str | diff --git a/docs/integrations/block-integrations/misc.md b/docs/integrations/block-integrations/misc.md index 4c199bebb4..ad6300ae88 100644 --- a/docs/integrations/block-integrations/misc.md +++ b/docs/integrations/block-integrations/misc.md @@ -215,6 +215,7 @@ The sandbox includes pip and npm pre-installed. Set timeout to limit execution t | response | Text output (if any) of the main execution result | str | | stdout_logs | Standard output logs from execution | str | | stderr_logs | Standard error logs from execution | str | +| files | Files created or modified during execution. Each file has path, name, content, and workspace_ref (if stored). | List[SandboxFileOutput] | ### Possible use case From d95aef76653733b8403e9a6dbec82a6bca9ca016 Mon Sep 17 00:00:00 2001 From: Ubbe Date: Fri, 13 Feb 2026 04:06:40 +0800 Subject: [PATCH 12/18] fix(copilot): stream timeout, long-running tool polling, and CreateAgent UI refresh (#12070) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Agent generation completes on the backend but the UI does not update/refresh to show the result. ### Changes 🏗️ ![Uploading Screenshot 2026-02-13 at 00.44.54.png…]() - **Stream start timeout (12s):** If the backend doesn't begin streaming within 12 seconds of submitting a message, the stream is aborted and a destructive toast is shown to the user. - **Long-running tool polling:** Added `useLongRunningToolPolling` hook that polls the session endpoint every 1.5s while a tool output is in an operating state (`operation_started` / `operation_pending` / `operation_in_progress`). When the backend completes, messages are refreshed so the UI reflects the final result. - **CreateAgent UI improvements:** Replaced the orbit loader / progress bar with a mini-game, added expanded accordion for saved agents, and improved the saved-agent card with image, icons, and links that open in new tabs. - **Backend tweaks:** Added `image_url` to `CreateAgentToolOutput`, minor model/service updates for the dummy agent generator. ### Checklist 📋 #### For code changes: - [x] I have clearly listed my changes in the PR description - [x] I have made a test plan - [x] I have tested my changes according to the test plan: - [x] Send a message and verify the stream starts within 12s or a toast appears - [x] Trigger agent creation and verify the UI updates when the backend completes - [x] Verify the saved-agent card renders correctly with image, links, and icons --------- Co-authored-by: Otto Co-authored-by: Nicholas Tindle Co-authored-by: Claude Opus 4.6 --- .../chat/tools/agent_generator/dummy.py | 154 +++++ .../chat/tools/agent_generator/service.py | 57 +- .../backend/backend/util/settings.py | 4 + .../test/agent_generator/test_service.py | 1 + .../src/app/(platform)/copilot/hooks/Untitled | 10 - .../hooks/useLongRunningToolPolling.ts | 126 ++++ .../copilot/tools/CreateAgent/CreateAgent.tsx | 77 ++- .../components/MiniGame/MiniGame.tsx | 21 + .../components/MiniGame/useMiniGame.ts | 579 ++++++++++++++++++ .../app/(platform)/copilot/useCopilotPage.ts | 29 +- 10 files changed, 1019 insertions(+), 39 deletions(-) create mode 100644 autogpt_platform/backend/backend/api/features/chat/tools/agent_generator/dummy.py delete mode 100644 autogpt_platform/frontend/src/app/(platform)/copilot/hooks/Untitled create mode 100644 autogpt_platform/frontend/src/app/(platform)/copilot/hooks/useLongRunningToolPolling.ts create mode 100644 autogpt_platform/frontend/src/app/(platform)/copilot/tools/CreateAgent/components/MiniGame/MiniGame.tsx create mode 100644 autogpt_platform/frontend/src/app/(platform)/copilot/tools/CreateAgent/components/MiniGame/useMiniGame.ts diff --git a/autogpt_platform/backend/backend/api/features/chat/tools/agent_generator/dummy.py b/autogpt_platform/backend/backend/api/features/chat/tools/agent_generator/dummy.py new file mode 100644 index 0000000000..cf0e76d3b3 --- /dev/null +++ b/autogpt_platform/backend/backend/api/features/chat/tools/agent_generator/dummy.py @@ -0,0 +1,154 @@ +"""Dummy Agent Generator for testing. + +Returns mock responses matching the format expected from the external service. +Enable via AGENTGENERATOR_USE_DUMMY=true in settings. + +WARNING: This is for testing only. Do not use in production. +""" + +import asyncio +import logging +import uuid +from typing import Any + +logger = logging.getLogger(__name__) + +# Dummy decomposition result (instructions type) +DUMMY_DECOMPOSITION_RESULT: dict[str, Any] = { + "type": "instructions", + "steps": [ + { + "description": "Get input from user", + "action": "input", + "block_name": "AgentInputBlock", + }, + { + "description": "Process the input", + "action": "process", + "block_name": "TextFormatterBlock", + }, + { + "description": "Return output to user", + "action": "output", + "block_name": "AgentOutputBlock", + }, + ], +} + +# Block IDs from backend/blocks/io.py +AGENT_INPUT_BLOCK_ID = "c0a8e994-ebf1-4a9c-a4d8-89d09c86741b" +AGENT_OUTPUT_BLOCK_ID = "363ae599-353e-4804-937e-b2ee3cef3da4" + + +def _generate_dummy_agent_json() -> dict[str, Any]: + """Generate a minimal valid agent JSON for testing.""" + input_node_id = str(uuid.uuid4()) + output_node_id = str(uuid.uuid4()) + + return { + "id": str(uuid.uuid4()), + "version": 1, + "is_active": True, + "name": "Dummy Test Agent", + "description": "A dummy agent generated for testing purposes", + "nodes": [ + { + "id": input_node_id, + "block_id": AGENT_INPUT_BLOCK_ID, + "input_default": { + "name": "input", + "title": "Input", + "description": "Enter your input", + "placeholder_values": [], + }, + "metadata": {"position": {"x": 0, "y": 0}}, + }, + { + "id": output_node_id, + "block_id": AGENT_OUTPUT_BLOCK_ID, + "input_default": { + "name": "output", + "title": "Output", + "description": "Agent output", + "format": "{output}", + }, + "metadata": {"position": {"x": 400, "y": 0}}, + }, + ], + "links": [ + { + "id": str(uuid.uuid4()), + "source_id": input_node_id, + "sink_id": output_node_id, + "source_name": "result", + "sink_name": "value", + "is_static": False, + }, + ], + } + + +async def decompose_goal_dummy( + description: str, + context: str = "", + library_agents: list[dict[str, Any]] | None = None, +) -> dict[str, Any]: + """Return dummy decomposition result.""" + logger.info("Using dummy agent generator for decompose_goal") + return DUMMY_DECOMPOSITION_RESULT.copy() + + +async def generate_agent_dummy( + instructions: dict[str, Any], + library_agents: list[dict[str, Any]] | None = None, + operation_id: str | None = None, + task_id: str | None = None, +) -> dict[str, Any]: + """Return dummy agent JSON after a simulated delay.""" + logger.info("Using dummy agent generator for generate_agent (30s delay)") + await asyncio.sleep(30) + return _generate_dummy_agent_json() + + +async def generate_agent_patch_dummy( + update_request: str, + current_agent: dict[str, Any], + library_agents: list[dict[str, Any]] | None = None, + operation_id: str | None = None, + task_id: str | None = None, +) -> dict[str, Any]: + """Return dummy patched agent (returns the current agent with updated description).""" + logger.info("Using dummy agent generator for generate_agent_patch") + patched = current_agent.copy() + patched["description"] = ( + f"{current_agent.get('description', '')} (updated: {update_request})" + ) + return patched + + +async def customize_template_dummy( + template_agent: dict[str, Any], + modification_request: str, + context: str = "", +) -> dict[str, Any]: + """Return dummy customized template (returns template with updated description).""" + logger.info("Using dummy agent generator for customize_template") + customized = template_agent.copy() + customized["description"] = ( + f"{template_agent.get('description', '')} (customized: {modification_request})" + ) + return customized + + +async def get_blocks_dummy() -> list[dict[str, Any]]: + """Return dummy blocks list.""" + logger.info("Using dummy agent generator for get_blocks") + return [ + {"id": AGENT_INPUT_BLOCK_ID, "name": "AgentInputBlock"}, + {"id": AGENT_OUTPUT_BLOCK_ID, "name": "AgentOutputBlock"}, + ] + + +async def health_check_dummy() -> bool: + """Always returns healthy for dummy service.""" + return True diff --git a/autogpt_platform/backend/backend/api/features/chat/tools/agent_generator/service.py b/autogpt_platform/backend/backend/api/features/chat/tools/agent_generator/service.py index 62411b4e1b..2b40c6d6f3 100644 --- a/autogpt_platform/backend/backend/api/features/chat/tools/agent_generator/service.py +++ b/autogpt_platform/backend/backend/api/features/chat/tools/agent_generator/service.py @@ -12,8 +12,19 @@ import httpx from backend.util.settings import Settings +from .dummy import ( + customize_template_dummy, + decompose_goal_dummy, + generate_agent_dummy, + generate_agent_patch_dummy, + get_blocks_dummy, + health_check_dummy, +) + logger = logging.getLogger(__name__) +_dummy_mode_warned = False + def _create_error_response( error_message: str, @@ -90,10 +101,26 @@ def _get_settings() -> Settings: return _settings -def is_external_service_configured() -> bool: - """Check if external Agent Generator service is configured.""" +def _is_dummy_mode() -> bool: + """Check if dummy mode is enabled for testing.""" + global _dummy_mode_warned settings = _get_settings() - return bool(settings.config.agentgenerator_host) + is_dummy = bool(settings.config.agentgenerator_use_dummy) + if is_dummy and not _dummy_mode_warned: + logger.warning( + "Agent Generator running in DUMMY MODE - returning mock responses. " + "Do not use in production!" + ) + _dummy_mode_warned = True + return is_dummy + + +def is_external_service_configured() -> bool: + """Check if external Agent Generator service is configured (or dummy mode).""" + settings = _get_settings() + return bool(settings.config.agentgenerator_host) or bool( + settings.config.agentgenerator_use_dummy + ) def _get_base_url() -> str: @@ -137,6 +164,9 @@ async def decompose_goal_external( - {"type": "error", "error": "...", "error_type": "..."} on error Or None on unexpected error """ + if _is_dummy_mode(): + return await decompose_goal_dummy(description, context, library_agents) + client = _get_client() if context: @@ -226,6 +256,11 @@ async def generate_agent_external( Returns: Agent JSON dict, {"status": "accepted"} for async, or error dict {"type": "error", ...} on error """ + if _is_dummy_mode(): + return await generate_agent_dummy( + instructions, library_agents, operation_id, task_id + ) + client = _get_client() # Build request payload @@ -297,6 +332,11 @@ async def generate_agent_patch_external( Returns: Updated agent JSON, clarifying questions dict, {"status": "accepted"} for async, or error dict on error """ + if _is_dummy_mode(): + return await generate_agent_patch_dummy( + update_request, current_agent, library_agents, operation_id, task_id + ) + client = _get_client() # Build request payload @@ -383,6 +423,11 @@ async def customize_template_external( Returns: Customized agent JSON, clarifying questions dict, or error dict on error """ + if _is_dummy_mode(): + return await customize_template_dummy( + template_agent, modification_request, context + ) + client = _get_client() request = modification_request @@ -445,6 +490,9 @@ async def get_blocks_external() -> list[dict[str, Any]] | None: Returns: List of block info dicts or None on error """ + if _is_dummy_mode(): + return await get_blocks_dummy() + client = _get_client() try: @@ -478,6 +526,9 @@ async def health_check() -> bool: if not is_external_service_configured(): return False + if _is_dummy_mode(): + return await health_check_dummy() + client = _get_client() try: diff --git a/autogpt_platform/backend/backend/util/settings.py b/autogpt_platform/backend/backend/util/settings.py index 50b7428160..48dadb88f1 100644 --- a/autogpt_platform/backend/backend/util/settings.py +++ b/autogpt_platform/backend/backend/util/settings.py @@ -368,6 +368,10 @@ class Config(UpdateTrackingModel["Config"], BaseSettings): default=600, description="The timeout in seconds for Agent Generator service requests (includes retries for rate limits)", ) + agentgenerator_use_dummy: bool = Field( + default=False, + description="Use dummy agent generator responses for testing (bypasses external service)", + ) enable_example_blocks: bool = Field( default=False, diff --git a/autogpt_platform/backend/test/agent_generator/test_service.py b/autogpt_platform/backend/test/agent_generator/test_service.py index cc37c428c0..93c9b9dcc0 100644 --- a/autogpt_platform/backend/test/agent_generator/test_service.py +++ b/autogpt_platform/backend/test/agent_generator/test_service.py @@ -25,6 +25,7 @@ class TestServiceConfiguration: """Test that external service is not configured when host is empty.""" mock_settings = MagicMock() mock_settings.config.agentgenerator_host = "" + mock_settings.config.agentgenerator_use_dummy = False with patch.object(service, "_get_settings", return_value=mock_settings): assert service.is_external_service_configured() is False diff --git a/autogpt_platform/frontend/src/app/(platform)/copilot/hooks/Untitled b/autogpt_platform/frontend/src/app/(platform)/copilot/hooks/Untitled deleted file mode 100644 index 13769eb726..0000000000 --- a/autogpt_platform/frontend/src/app/(platform)/copilot/hooks/Untitled +++ /dev/null @@ -1,10 +0,0 @@ -import { parseAsString, useQueryState } from "nuqs"; - -export function useCopilotSessionId() { - const [urlSessionId, setUrlSessionId] = useQueryState( - "sessionId", - parseAsString, - ); - - return { urlSessionId, setUrlSessionId }; -} \ No newline at end of file diff --git a/autogpt_platform/frontend/src/app/(platform)/copilot/hooks/useLongRunningToolPolling.ts b/autogpt_platform/frontend/src/app/(platform)/copilot/hooks/useLongRunningToolPolling.ts new file mode 100644 index 0000000000..85ef6b2962 --- /dev/null +++ b/autogpt_platform/frontend/src/app/(platform)/copilot/hooks/useLongRunningToolPolling.ts @@ -0,0 +1,126 @@ +import { getGetV2GetSessionQueryKey } from "@/app/api/__generated__/endpoints/chat/chat"; +import { useQueryClient } from "@tanstack/react-query"; +import type { UIDataTypes, UIMessage, UITools } from "ai"; +import { useCallback, useEffect, useRef } from "react"; +import { convertChatSessionMessagesToUiMessages } from "../helpers/convertChatSessionToUiMessages"; + +const OPERATING_TYPES = new Set([ + "operation_started", + "operation_pending", + "operation_in_progress", +]); + +const POLL_INTERVAL_MS = 1_500; + +/** + * Detects whether any message contains a tool part whose output indicates + * a long-running operation is still in progress. + */ +function hasOperatingTool( + messages: UIMessage[], +) { + for (const msg of messages) { + for (const part of msg.parts) { + if (!part.type.startsWith("tool-")) continue; + const toolPart = part as { output?: unknown }; + if (!toolPart.output) continue; + const output = + typeof toolPart.output === "string" + ? safeParse(toolPart.output) + : toolPart.output; + if ( + output && + typeof output === "object" && + "type" in output && + OPERATING_TYPES.has((output as { type: string }).type) + ) { + return true; + } + } + } + return false; +} + +function safeParse(value: string): unknown { + try { + return JSON.parse(value); + } catch { + return null; + } +} + +/** + * Polls the session endpoint while any tool is in an "operating" state + * (operation_started / operation_pending / operation_in_progress). + * + * When the session data shows the tool output has changed (e.g. to + * agent_saved), it calls `setMessages` with the updated messages. + */ +export function useLongRunningToolPolling( + sessionId: string | null, + messages: UIMessage[], + setMessages: ( + updater: ( + prev: UIMessage[], + ) => UIMessage[], + ) => void, +) { + const queryClient = useQueryClient(); + const intervalRef = useRef | null>(null); + + const stopPolling = useCallback(() => { + if (intervalRef.current) { + clearInterval(intervalRef.current); + intervalRef.current = null; + } + }, []); + + const poll = useCallback(async () => { + if (!sessionId) return; + + // Invalidate the query cache so the next fetch gets fresh data + await queryClient.invalidateQueries({ + queryKey: getGetV2GetSessionQueryKey(sessionId), + }); + + // Fetch fresh session data + const data = queryClient.getQueryData<{ + status: number; + data: { messages?: unknown[] }; + }>(getGetV2GetSessionQueryKey(sessionId)); + + if (data?.status !== 200 || !data.data.messages) return; + + const freshMessages = convertChatSessionMessagesToUiMessages( + sessionId, + data.data.messages, + ); + + if (!freshMessages || freshMessages.length === 0) return; + + // Update when the long-running tool completed + if (!hasOperatingTool(freshMessages)) { + setMessages(() => freshMessages); + stopPolling(); + } + }, [sessionId, queryClient, setMessages, stopPolling]); + + useEffect(() => { + const shouldPoll = hasOperatingTool(messages); + + // Always clear any previous interval first so we never leak timers + // when the effect re-runs due to dependency changes (e.g. messages + // updating as the LLM streams text after the tool call). + stopPolling(); + + if (shouldPoll && sessionId) { + intervalRef.current = setInterval(() => { + poll(); + }, POLL_INTERVAL_MS); + } + + return () => { + stopPolling(); + }; + }, [messages, sessionId, poll, stopPolling]); +} diff --git a/autogpt_platform/frontend/src/app/(platform)/copilot/tools/CreateAgent/CreateAgent.tsx b/autogpt_platform/frontend/src/app/(platform)/copilot/tools/CreateAgent/CreateAgent.tsx index 88b1c491d7..26977a207a 100644 --- a/autogpt_platform/frontend/src/app/(platform)/copilot/tools/CreateAgent/CreateAgent.tsx +++ b/autogpt_platform/frontend/src/app/(platform)/copilot/tools/CreateAgent/CreateAgent.tsx @@ -1,24 +1,30 @@ "use client"; -import { WarningDiamondIcon } from "@phosphor-icons/react"; +import { Button } from "@/components/atoms/Button/Button"; +import { Text } from "@/components/atoms/Text/Text"; +import { + BookOpenIcon, + CheckFatIcon, + PencilSimpleIcon, + WarningDiamondIcon, +} from "@phosphor-icons/react"; import type { ToolUIPart } from "ai"; +import NextLink from "next/link"; import { useCopilotChatActions } from "../../components/CopilotChatActionsProvider/useCopilotChatActions"; import { MorphingTextAnimation } from "../../components/MorphingTextAnimation/MorphingTextAnimation"; -import { ProgressBar } from "../../components/ProgressBar/ProgressBar"; import { ContentCardDescription, ContentCodeBlock, ContentGrid, ContentHint, - ContentLink, ContentMessage, } from "../../components/ToolAccordion/AccordionContent"; import { ToolAccordion } from "../../components/ToolAccordion/ToolAccordion"; -import { useAsymptoticProgress } from "../../hooks/useAsymptoticProgress"; import { ClarificationQuestionsCard, ClarifyingQuestion, } from "./components/ClarificationQuestionsCard"; +import { MiniGame } from "./components/MiniGame/MiniGame"; import { AccordionIcon, formatMaybeJson, @@ -52,7 +58,7 @@ function getAccordionMeta(output: CreateAgentToolOutput) { const icon = ; if (isAgentSavedOutput(output)) { - return { icon, title: output.agent_name }; + return { icon, title: output.agent_name, expanded: true }; } if (isAgentPreviewOutput(output)) { return { @@ -78,6 +84,7 @@ function getAccordionMeta(output: CreateAgentToolOutput) { return { icon, title: "Creating agent, this may take a few minutes. Sit back and relax.", + expanded: true, }; } return { @@ -107,8 +114,6 @@ export function CreateAgentTool({ part }: Props) { isOperationPendingOutput(output) || isOperationInProgressOutput(output)); - const progress = useAsymptoticProgress(isOperating); - const hasExpandableContent = part.state === "output-available" && !!output && @@ -152,31 +157,53 @@ export function CreateAgentTool({ part }: Props) { {isOperating && ( - + - This could take a few minutes, grab a coffee ☕ + This could take a few minutes — play while you wait! )} {isAgentSavedOutput(output) && ( - - {output.message} -
- - Open in library - - - Open in builder - +
+
+ + + {output.message} +
- - {truncateText( - formatMaybeJson({ agent_id: output.agent_id }), - 800, - )} - - +
+ + +
+
)} {isAgentPreviewOutput(output) && ( diff --git a/autogpt_platform/frontend/src/app/(platform)/copilot/tools/CreateAgent/components/MiniGame/MiniGame.tsx b/autogpt_platform/frontend/src/app/(platform)/copilot/tools/CreateAgent/components/MiniGame/MiniGame.tsx new file mode 100644 index 0000000000..53cfcf2731 --- /dev/null +++ b/autogpt_platform/frontend/src/app/(platform)/copilot/tools/CreateAgent/components/MiniGame/MiniGame.tsx @@ -0,0 +1,21 @@ +"use client"; + +import { useMiniGame } from "./useMiniGame"; + +export function MiniGame() { + const { canvasRef } = useMiniGame(); + + return ( +
+ +
+ ); +} diff --git a/autogpt_platform/frontend/src/app/(platform)/copilot/tools/CreateAgent/components/MiniGame/useMiniGame.ts b/autogpt_platform/frontend/src/app/(platform)/copilot/tools/CreateAgent/components/MiniGame/useMiniGame.ts new file mode 100644 index 0000000000..e91f1766ca --- /dev/null +++ b/autogpt_platform/frontend/src/app/(platform)/copilot/tools/CreateAgent/components/MiniGame/useMiniGame.ts @@ -0,0 +1,579 @@ +import { useEffect, useRef } from "react"; + +/* ------------------------------------------------------------------ */ +/* Constants */ +/* ------------------------------------------------------------------ */ + +const CANVAS_HEIGHT = 150; +const GRAVITY = 0.55; +const JUMP_FORCE = -9.5; +const BASE_SPEED = 3; +const SPEED_INCREMENT = 0.0008; +const SPAWN_MIN = 70; +const SPAWN_MAX = 130; +const CHAR_SIZE = 18; +const CHAR_X = 50; +const GROUND_PAD = 20; +const STORAGE_KEY = "copilot-minigame-highscore"; + +// Colors +const COLOR_BG = "#E8EAF6"; +const COLOR_CHAR = "#263238"; +const COLOR_BOSS = "#F50057"; + +// Boss +const BOSS_SIZE = 36; +const BOSS_ENTER_SPEED = 2; +const BOSS_LEAVE_SPEED = 3; +const BOSS_SHOOT_COOLDOWN = 90; +const BOSS_SHOTS_TO_EVADE = 5; +const BOSS_INTERVAL = 20; // every N score +const PROJ_SPEED = 4.5; +const PROJ_SIZE = 12; + +/* ------------------------------------------------------------------ */ +/* Types */ +/* ------------------------------------------------------------------ */ + +interface Obstacle { + x: number; + width: number; + height: number; + scored: boolean; +} + +interface Projectile { + x: number; + y: number; + speed: number; + evaded: boolean; + type: "low" | "high"; +} + +interface BossState { + phase: "inactive" | "entering" | "fighting" | "leaving"; + x: number; + targetX: number; + shotsEvaded: number; + cooldown: number; + projectiles: Projectile[]; + bob: number; +} + +interface GameState { + charY: number; + vy: number; + obstacles: Obstacle[]; + score: number; + highScore: number; + speed: number; + frame: number; + nextSpawn: number; + running: boolean; + over: boolean; + groundY: number; + boss: BossState; + bossThreshold: number; +} + +/* ------------------------------------------------------------------ */ +/* Helpers */ +/* ------------------------------------------------------------------ */ + +function randInt(min: number, max: number) { + return Math.floor(Math.random() * (max - min + 1)) + min; +} + +function readHighScore(): number { + try { + return parseInt(localStorage.getItem(STORAGE_KEY) || "0", 10) || 0; + } catch { + return 0; + } +} + +function writeHighScore(score: number) { + try { + localStorage.setItem(STORAGE_KEY, String(score)); + } catch { + /* noop */ + } +} + +function makeBoss(): BossState { + return { + phase: "inactive", + x: 0, + targetX: 0, + shotsEvaded: 0, + cooldown: 0, + projectiles: [], + bob: 0, + }; +} + +function makeState(groundY: number): GameState { + return { + charY: groundY - CHAR_SIZE, + vy: 0, + obstacles: [], + score: 0, + highScore: readHighScore(), + speed: BASE_SPEED, + frame: 0, + nextSpawn: randInt(SPAWN_MIN, SPAWN_MAX), + running: false, + over: false, + groundY, + boss: makeBoss(), + bossThreshold: BOSS_INTERVAL, + }; +} + +function gameOver(s: GameState) { + s.running = false; + s.over = true; + if (s.score > s.highScore) { + s.highScore = s.score; + writeHighScore(s.score); + } +} + +/* ------------------------------------------------------------------ */ +/* Projectile collision — shared between fighting & leaving phases */ +/* ------------------------------------------------------------------ */ + +/** Returns true if the player died. */ +function tickProjectiles(s: GameState): boolean { + const boss = s.boss; + + for (const p of boss.projectiles) { + p.x -= p.speed; + + if (!p.evaded && p.x + PROJ_SIZE < CHAR_X) { + p.evaded = true; + boss.shotsEvaded++; + } + + // Collision + if ( + !p.evaded && + CHAR_X + CHAR_SIZE > p.x && + CHAR_X < p.x + PROJ_SIZE && + s.charY + CHAR_SIZE > p.y && + s.charY < p.y + PROJ_SIZE + ) { + gameOver(s); + return true; + } + } + + boss.projectiles = boss.projectiles.filter((p) => p.x + PROJ_SIZE > -20); + return false; +} + +/* ------------------------------------------------------------------ */ +/* Update */ +/* ------------------------------------------------------------------ */ + +function update(s: GameState, canvasWidth: number) { + if (!s.running) return; + + s.frame++; + + // Speed only ramps during regular play + if (s.boss.phase === "inactive") { + s.speed = BASE_SPEED + s.frame * SPEED_INCREMENT; + } + + // ---- Character physics (always active) ---- // + s.vy += GRAVITY; + s.charY += s.vy; + if (s.charY + CHAR_SIZE >= s.groundY) { + s.charY = s.groundY - CHAR_SIZE; + s.vy = 0; + } + + // ---- Trigger boss ---- // + if (s.boss.phase === "inactive" && s.score >= s.bossThreshold) { + s.boss.phase = "entering"; + s.boss.x = canvasWidth + 10; + s.boss.targetX = canvasWidth - BOSS_SIZE - 40; + s.boss.shotsEvaded = 0; + s.boss.cooldown = BOSS_SHOOT_COOLDOWN; + s.boss.projectiles = []; + s.obstacles = []; + } + + // ---- Boss: entering ---- // + if (s.boss.phase === "entering") { + s.boss.bob = Math.sin(s.frame * 0.05) * 3; + s.boss.x -= BOSS_ENTER_SPEED; + if (s.boss.x <= s.boss.targetX) { + s.boss.x = s.boss.targetX; + s.boss.phase = "fighting"; + } + return; // no obstacles while entering + } + + // ---- Boss: fighting ---- // + if (s.boss.phase === "fighting") { + s.boss.bob = Math.sin(s.frame * 0.05) * 3; + + // Shoot + s.boss.cooldown--; + if (s.boss.cooldown <= 0) { + const isLow = Math.random() < 0.5; + s.boss.projectiles.push({ + x: s.boss.x - PROJ_SIZE, + y: isLow ? s.groundY - 14 : s.groundY - 70, + speed: PROJ_SPEED, + evaded: false, + type: isLow ? "low" : "high", + }); + s.boss.cooldown = BOSS_SHOOT_COOLDOWN; + } + + if (tickProjectiles(s)) return; + + // Boss defeated? + if (s.boss.shotsEvaded >= BOSS_SHOTS_TO_EVADE) { + s.boss.phase = "leaving"; + s.score += 5; // bonus + s.bossThreshold = s.score + BOSS_INTERVAL; + } + return; + } + + // ---- Boss: leaving ---- // + if (s.boss.phase === "leaving") { + s.boss.bob = Math.sin(s.frame * 0.05) * 3; + s.boss.x += BOSS_LEAVE_SPEED; + + // Still check in-flight projectiles + if (tickProjectiles(s)) return; + + if (s.boss.x > canvasWidth + 50) { + s.boss = makeBoss(); + s.nextSpawn = s.frame + randInt(SPAWN_MIN / 2, SPAWN_MAX / 2); + } + return; + } + + // ---- Regular obstacle play ---- // + if (s.frame >= s.nextSpawn) { + s.obstacles.push({ + x: canvasWidth + 10, + width: randInt(10, 16), + height: randInt(20, 48), + scored: false, + }); + s.nextSpawn = s.frame + randInt(SPAWN_MIN, SPAWN_MAX); + } + + for (const o of s.obstacles) { + o.x -= s.speed; + if (!o.scored && o.x + o.width < CHAR_X) { + o.scored = true; + s.score++; + } + } + + s.obstacles = s.obstacles.filter((o) => o.x + o.width > -20); + + for (const o of s.obstacles) { + const oY = s.groundY - o.height; + if ( + CHAR_X + CHAR_SIZE > o.x && + CHAR_X < o.x + o.width && + s.charY + CHAR_SIZE > oY + ) { + gameOver(s); + return; + } + } +} + +/* ------------------------------------------------------------------ */ +/* Drawing */ +/* ------------------------------------------------------------------ */ + +function drawBoss(ctx: CanvasRenderingContext2D, s: GameState, bg: string) { + const bx = s.boss.x; + const by = s.groundY - BOSS_SIZE + s.boss.bob; + + // Body + ctx.save(); + ctx.fillStyle = COLOR_BOSS; + ctx.globalAlpha = 0.9; + ctx.beginPath(); + ctx.roundRect(bx, by, BOSS_SIZE, BOSS_SIZE, 4); + ctx.fill(); + ctx.restore(); + + // Eyes + ctx.save(); + ctx.fillStyle = bg; + const eyeY = by + 13; + ctx.beginPath(); + ctx.arc(bx + 10, eyeY, 4, 0, Math.PI * 2); + ctx.fill(); + ctx.beginPath(); + ctx.arc(bx + 26, eyeY, 4, 0, Math.PI * 2); + ctx.fill(); + ctx.restore(); + + // Angry eyebrows + ctx.save(); + ctx.strokeStyle = bg; + ctx.lineWidth = 2; + ctx.beginPath(); + ctx.moveTo(bx + 5, eyeY - 7); + ctx.lineTo(bx + 14, eyeY - 4); + ctx.stroke(); + ctx.beginPath(); + ctx.moveTo(bx + 31, eyeY - 7); + ctx.lineTo(bx + 22, eyeY - 4); + ctx.stroke(); + ctx.restore(); + + // Zigzag mouth + ctx.save(); + ctx.strokeStyle = bg; + ctx.lineWidth = 1.5; + ctx.beginPath(); + ctx.moveTo(bx + 10, by + 27); + ctx.lineTo(bx + 14, by + 24); + ctx.lineTo(bx + 18, by + 27); + ctx.lineTo(bx + 22, by + 24); + ctx.lineTo(bx + 26, by + 27); + ctx.stroke(); + ctx.restore(); +} + +function drawProjectiles(ctx: CanvasRenderingContext2D, boss: BossState) { + ctx.save(); + ctx.fillStyle = COLOR_BOSS; + ctx.globalAlpha = 0.8; + for (const p of boss.projectiles) { + if (p.evaded) continue; + ctx.beginPath(); + ctx.arc( + p.x + PROJ_SIZE / 2, + p.y + PROJ_SIZE / 2, + PROJ_SIZE / 2, + 0, + Math.PI * 2, + ); + ctx.fill(); + } + ctx.restore(); +} + +function draw( + ctx: CanvasRenderingContext2D, + s: GameState, + w: number, + h: number, + fg: string, + started: boolean, +) { + ctx.fillStyle = COLOR_BG; + ctx.fillRect(0, 0, w, h); + + // Ground + ctx.save(); + ctx.strokeStyle = fg; + ctx.globalAlpha = 0.15; + ctx.setLineDash([4, 4]); + ctx.beginPath(); + ctx.moveTo(0, s.groundY); + ctx.lineTo(w, s.groundY); + ctx.stroke(); + ctx.restore(); + + // Character + ctx.save(); + ctx.fillStyle = COLOR_CHAR; + ctx.globalAlpha = 0.85; + ctx.beginPath(); + ctx.roundRect(CHAR_X, s.charY, CHAR_SIZE, CHAR_SIZE, 3); + ctx.fill(); + ctx.restore(); + + // Eyes + ctx.save(); + ctx.fillStyle = COLOR_BG; + ctx.beginPath(); + ctx.arc(CHAR_X + 6, s.charY + 7, 2.5, 0, Math.PI * 2); + ctx.fill(); + ctx.beginPath(); + ctx.arc(CHAR_X + 12, s.charY + 7, 2.5, 0, Math.PI * 2); + ctx.fill(); + ctx.restore(); + + // Obstacles + ctx.save(); + ctx.fillStyle = fg; + ctx.globalAlpha = 0.55; + for (const o of s.obstacles) { + ctx.fillRect(o.x, s.groundY - o.height, o.width, o.height); + } + ctx.restore(); + + // Boss + projectiles + if (s.boss.phase !== "inactive") { + drawBoss(ctx, s, COLOR_BG); + drawProjectiles(ctx, s.boss); + } + + // Score HUD + ctx.save(); + ctx.fillStyle = fg; + ctx.globalAlpha = 0.5; + ctx.font = "bold 11px monospace"; + ctx.textAlign = "right"; + ctx.fillText(`Score: ${s.score}`, w - 12, 20); + ctx.fillText(`Best: ${s.highScore}`, w - 12, 34); + if (s.boss.phase === "fighting") { + ctx.fillText( + `Evade: ${s.boss.shotsEvaded}/${BOSS_SHOTS_TO_EVADE}`, + w - 12, + 48, + ); + } + ctx.restore(); + + // Prompts + if (!started && !s.running && !s.over) { + ctx.save(); + ctx.fillStyle = fg; + ctx.globalAlpha = 0.5; + ctx.font = "12px sans-serif"; + ctx.textAlign = "center"; + ctx.fillText("Click or press Space to play while you wait", w / 2, h / 2); + ctx.restore(); + } + + if (s.over) { + ctx.save(); + ctx.fillStyle = fg; + ctx.globalAlpha = 0.7; + ctx.font = "bold 13px sans-serif"; + ctx.textAlign = "center"; + ctx.fillText("Game Over", w / 2, h / 2 - 8); + ctx.font = "11px sans-serif"; + ctx.fillText("Click or Space to restart", w / 2, h / 2 + 10); + ctx.restore(); + } +} + +/* ------------------------------------------------------------------ */ +/* Hook */ +/* ------------------------------------------------------------------ */ + +export function useMiniGame() { + const canvasRef = useRef(null); + const stateRef = useRef(null); + const rafRef = useRef(0); + const startedRef = useRef(false); + + useEffect(() => { + const canvas = canvasRef.current; + if (!canvas) return; + + const container = canvas.parentElement; + if (container) { + canvas.width = container.clientWidth; + canvas.height = CANVAS_HEIGHT; + } + + const groundY = canvas.height - GROUND_PAD; + stateRef.current = makeState(groundY); + + const style = getComputedStyle(canvas); + let fg = style.color || "#71717a"; + + // -------------------------------------------------------------- // + // Jump // + // -------------------------------------------------------------- // + function jump() { + const s = stateRef.current; + if (!s) return; + + if (s.over) { + const hs = s.highScore; + const gy = s.groundY; + stateRef.current = makeState(gy); + stateRef.current.highScore = hs; + stateRef.current.running = true; + startedRef.current = true; + return; + } + + if (!s.running) { + s.running = true; + startedRef.current = true; + return; + } + + // Only jump when on the ground + if (s.charY + CHAR_SIZE >= s.groundY) { + s.vy = JUMP_FORCE; + } + } + + function onKey(e: KeyboardEvent) { + if (e.code === "Space" || e.key === " ") { + e.preventDefault(); + jump(); + } + } + + function onClick() { + canvas?.focus(); + jump(); + } + + // -------------------------------------------------------------- // + // Loop // + // -------------------------------------------------------------- // + function loop() { + const s = stateRef.current; + if (!canvas || !s) return; + const ctx = canvas.getContext("2d"); + if (!ctx) return; + + update(s, canvas.width); + draw(ctx, s, canvas.width, canvas.height, fg, startedRef.current); + rafRef.current = requestAnimationFrame(loop); + } + + rafRef.current = requestAnimationFrame(loop); + + canvas.addEventListener("click", onClick); + canvas.addEventListener("keydown", onKey); + + const observer = new ResizeObserver((entries) => { + for (const entry of entries) { + canvas.width = entry.contentRect.width; + canvas.height = CANVAS_HEIGHT; + if (stateRef.current) { + stateRef.current.groundY = canvas.height - GROUND_PAD; + } + const cs = getComputedStyle(canvas); + fg = cs.color || fg; + } + }); + if (container) observer.observe(container); + + return () => { + cancelAnimationFrame(rafRef.current); + canvas.removeEventListener("click", onClick); + canvas.removeEventListener("keydown", onKey); + observer.disconnect(); + }; + }, []); + + return { canvasRef }; +} diff --git a/autogpt_platform/frontend/src/app/(platform)/copilot/useCopilotPage.ts b/autogpt_platform/frontend/src/app/(platform)/copilot/useCopilotPage.ts index 3dbba6e790..28e9ba7cfb 100644 --- a/autogpt_platform/frontend/src/app/(platform)/copilot/useCopilotPage.ts +++ b/autogpt_platform/frontend/src/app/(platform)/copilot/useCopilotPage.ts @@ -1,10 +1,14 @@ import { useGetV2ListSessions } from "@/app/api/__generated__/endpoints/chat/chat"; +import { toast } from "@/components/molecules/Toast/use-toast"; import { useBreakpoint } from "@/lib/hooks/useBreakpoint"; import { useSupabase } from "@/lib/supabase/hooks/useSupabase"; import { useChat } from "@ai-sdk/react"; import { DefaultChatTransport } from "ai"; -import { useEffect, useMemo, useState } from "react"; +import { useEffect, useMemo, useRef, useState } from "react"; import { useChatSession } from "./useChatSession"; +import { useLongRunningToolPolling } from "./hooks/useLongRunningToolPolling"; + +const STREAM_START_TIMEOUT_MS = 12_000; export function useCopilotPage() { const { isUserLoading, isLoggedIn } = useSupabase(); @@ -52,6 +56,24 @@ export function useCopilotPage() { transport: transport ?? undefined, }); + // Abort the stream if the backend doesn't start sending data within 12s. + const stopRef = useRef(stop); + stopRef.current = stop; + useEffect(() => { + if (status !== "submitted") return; + + const timer = setTimeout(() => { + stopRef.current(); + toast({ + title: "Stream timed out", + description: "The server took too long to respond. Please try again.", + variant: "destructive", + }); + }, STREAM_START_TIMEOUT_MS); + + return () => clearTimeout(timer); + }, [status]); + useEffect(() => { if (!hydratedMessages || hydratedMessages.length === 0) return; setMessages((prev) => { @@ -60,6 +82,11 @@ export function useCopilotPage() { }); }, [hydratedMessages, setMessages]); + // Poll session endpoint when a long-running tool (create_agent, edit_agent) + // is in progress. When the backend completes, the session data will contain + // the final tool output — this hook detects the change and updates messages. + useLongRunningToolPolling(sessionId, messages, setMessages); + // Clear messages when session is null useEffect(() => { if (!sessionId) setMessages([]); From 301d7cbadaf6d623cf3d7921e69dbdcc43d2c2e2 Mon Sep 17 00:00:00 2001 From: Ubbe Date: Fri, 13 Feb 2026 09:37:54 +0800 Subject: [PATCH 13/18] fix(frontend): suppress cross-origin stylesheet security error (#12086) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit ## Summary - Adds `ignoreErrors` to the Sentry client configuration (`instrumentation-client.ts`) to filter out `SecurityError: CSSStyleSheet.cssRules getter: Not allowed to access cross-origin stylesheet` errors - These errors are caused by Sentry Replay (rrweb) attempting to serialize DOM snapshots that include cross-origin stylesheets (from browser extensions or CDN-loaded CSS) - This was reported via Sentry on production, occurring on any page when logged in ## Changes - **`frontend/instrumentation-client.ts`**: Added `ignoreErrors: [/Not allowed to access cross-origin stylesheet/]` to `Sentry.init()` config ## Test plan - [ ] Verify the error no longer appears in Sentry after deployment - [ ] Verify Sentry Replay still works correctly for other errors - [ ] Verify no regressions in error tracking (other errors should still be captured) 🤖 Generated with [Claude Code](https://claude.com/claude-code)

Greptile Overview

Greptile Summary

Adds error filtering to Sentry client configuration to suppress cross-origin stylesheet security errors that occur when Sentry Replay (rrweb) attempts to serialize DOM snapshots containing stylesheets from browser extensions or CDN-loaded CSS. This prevents noise in Sentry error logs without affecting the capture of legitimate errors.

Confidence Score: 5/5

- This PR is safe to merge with minimal risk - The change adds a simple error filter to suppress benign cross-origin stylesheet errors that are caused by Sentry Replay itself. The regex pattern is specific and only affects client-side error reporting, with no impact on application functionality or legitimate error capture - No files require special attention
Co-authored-by: Claude Opus 4.6 --- autogpt_platform/frontend/instrumentation-client.ts | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/autogpt_platform/frontend/instrumentation-client.ts b/autogpt_platform/frontend/instrumentation-client.ts index 86fe015e62..f4af2e8956 100644 --- a/autogpt_platform/frontend/instrumentation-client.ts +++ b/autogpt_platform/frontend/instrumentation-client.ts @@ -22,6 +22,11 @@ Sentry.init({ enabled: shouldEnable, + // Suppress cross-origin stylesheet errors from Sentry Replay (rrweb) + // serializing DOM snapshots with cross-origin stylesheets + // (e.g., from browser extensions or CDN-loaded CSS) + ignoreErrors: [/Not allowed to access cross-origin stylesheet/], + // Add optional integrations for additional features integrations: [ Sentry.captureConsoleIntegration(), From 30e854569ae79b58870c1d3777c23c5396ed2f80 Mon Sep 17 00:00:00 2001 From: Ubbe Date: Fri, 13 Feb 2026 09:38:16 +0800 Subject: [PATCH 14/18] feat(frontend): add exact timestamp tooltip on run timestamps (#12087) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Resolves OPEN-2693: Make exact timestamp of runs accessible through UI. The NewAgentLibraryView shows relative timestamps ("2 days ago") for runs and schedules, but unlike the OldAgentLibraryView it didn't show the exact timestamp on hover. This PR adds a native `title` tooltip so users can see the full date/time by hovering. ### Changes 🏗️ - Added `descriptionTitle` prop to `SidebarItemCard` that renders as a `title` attribute on the description text - `TaskListItem` now passes the exact `run.started_at` timestamp via `descriptionTitle` - `ScheduleListItem` now passes the exact `schedule.next_run_time` timestamp via `descriptionTitle` ### Checklist 📋 #### For code changes: - [x] I have clearly listed my changes in the PR description - [x] I have made a test plan - [x] I have tested my changes according to the test plan: - [ ] Open an agent in the library view - [ ] Hover over a run's relative timestamp (e.g. "2 days ago") and confirm the full date/time tooltip appears - [ ] Hover over a schedule's relative timestamp and confirm the full date/time tooltip appears 🤖 Generated with [Claude Code](https://claude.com/claude-code)

Greptile Overview

Greptile Summary

Added native tooltip functionality to show exact timestamps in the library view. The implementation adds a `descriptionTitle` prop to `SidebarItemCard` that renders as a `title` attribute on the description text. This allows users to hover over relative timestamps (e.g., "2 days ago") to see the full date/time. **Changes:** - Added optional `descriptionTitle` prop to `SidebarItemCard` component (SidebarItemCard.tsx:10) - `TaskListItem` passes `run.started_at` as the tooltip value (TaskListItem.tsx:84-86) - `ScheduleListItem` passes `schedule.next_run_time` as the tooltip value (ScheduleListItem.tsx:32) - Unrelated fix included: Sentry configuration updated to suppress cross-origin stylesheet errors (instrumentation-client.ts:25-28) **Note:** The PR includes two separate commits - the main timestamp tooltip feature and a Sentry error suppression fix. The PR description only documents the timestamp feature.

Confidence Score: 5/5

- This PR is safe to merge with minimal risk - The changes are straightforward and limited in scope - adding an optional prop that forwards a native HTML attribute for tooltip functionality. The Text component already supports forwarding arbitrary HTML attributes through its spread operator (...rest), ensuring the `title` attribute works correctly. Both the timestamp tooltip feature and the Sentry configuration fix are low-risk improvements with no breaking changes. - No files require special attention

Sequence Diagram

```mermaid sequenceDiagram participant User participant TaskListItem participant ScheduleListItem participant SidebarItemCard participant Text participant Browser User->>TaskListItem: Hover over run timestamp TaskListItem->>SidebarItemCard: Pass descriptionTitle (run.started_at) SidebarItemCard->>Text: Render with title attribute Text->>Browser: Forward title attribute to DOM Browser->>User: Display native tooltip with exact timestamp User->>ScheduleListItem: Hover over schedule timestamp ScheduleListItem->>SidebarItemCard: Pass descriptionTitle (schedule.next_run_time) SidebarItemCard->>Text: Render with title attribute Text->>Browser: Forward title attribute to DOM Browser->>User: Display native tooltip with exact timestamp ```
Co-authored-by: Claude Opus 4.6 --- .../SidebarRunsList/components/ScheduleListItem.tsx | 1 + .../SidebarRunsList/components/SidebarItemCard.tsx | 8 +++++++- .../sidebar/SidebarRunsList/components/TaskListItem.tsx | 3 +++ 3 files changed, 11 insertions(+), 1 deletion(-) diff --git a/autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/sidebar/SidebarRunsList/components/ScheduleListItem.tsx b/autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/sidebar/SidebarRunsList/components/ScheduleListItem.tsx index 1ad40fcef4..8815069011 100644 --- a/autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/sidebar/SidebarRunsList/components/ScheduleListItem.tsx +++ b/autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/sidebar/SidebarRunsList/components/ScheduleListItem.tsx @@ -29,6 +29,7 @@ export function ScheduleListItem({ description={formatDistanceToNow(schedule.next_run_time, { addSuffix: true, })} + descriptionTitle={new Date(schedule.next_run_time).toString()} onClick={onClick} selected={selected} icon={ diff --git a/autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/sidebar/SidebarRunsList/components/SidebarItemCard.tsx b/autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/sidebar/SidebarRunsList/components/SidebarItemCard.tsx index 4f4e9962ce..a438568b74 100644 --- a/autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/sidebar/SidebarRunsList/components/SidebarItemCard.tsx +++ b/autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/sidebar/SidebarRunsList/components/SidebarItemCard.tsx @@ -7,6 +7,7 @@ import React from "react"; interface Props { title: string; description?: string; + descriptionTitle?: string; icon?: React.ReactNode; selected?: boolean; onClick?: () => void; @@ -16,6 +17,7 @@ interface Props { export function SidebarItemCard({ title, description, + descriptionTitle, icon, selected, onClick, @@ -38,7 +40,11 @@ export function SidebarItemCard({ > {title} - + {description}
diff --git a/autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/sidebar/SidebarRunsList/components/TaskListItem.tsx b/autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/sidebar/SidebarRunsList/components/TaskListItem.tsx index 8970e82b64..b9320822fc 100644 --- a/autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/sidebar/SidebarRunsList/components/TaskListItem.tsx +++ b/autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/sidebar/SidebarRunsList/components/TaskListItem.tsx @@ -81,6 +81,9 @@ export function TaskListItem({ ? formatDistanceToNow(run.started_at, { addSuffix: true }) : "—" } + descriptionTitle={ + run.started_at ? new Date(run.started_at).toString() : undefined + } onClick={onClick} selected={selected} actions={ From e8c50b96d1b4f32d0f320241dc6f2268bcd94c06 Mon Sep 17 00:00:00 2001 From: Ubbe Date: Fri, 13 Feb 2026 09:38:59 +0800 Subject: [PATCH 15/18] fix(frontend): improve CoPilot chat table styling (#12094) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit ## Summary - Remove left and right borders from tables rendered in CoPilot chat - Increase cell padding (py-3 → py-3.5) for better spacing between text and lines - Applies to both Streamdown (main chat) and MarkdownRenderer (tool outputs) Design feedback from Olivia to make tables "breathe" more. ## Test plan - [ ] Open CoPilot chat and trigger a response containing a table - [ ] Verify tables no longer have left/right borders - [ ] Verify increased spacing between rows - [ ] Check both light and dark modes 🤖 Generated with [Claude Code](https://claude.com/claude-code)

Greptile Overview

Greptile Summary

Improved CoPilot chat table styling by removing left and right borders and increasing vertical padding from `py-3` to `py-3.5`. Changes apply to both: - Streamdown-rendered tables (via CSS selector in `globals.css`) - MarkdownRenderer tables (via Tailwind classes) The changes make tables "breathe" more per design feedback from Olivia. **Issue Found:** - The CSS padding value in `globals.css:192` is `0.625rem` (`py-2.5`) but should be `0.875rem` (`py-3.5`) to match the PR description and the MarkdownRenderer implementation.

Confidence Score: 2/5

- This PR has a logical error that will cause inconsistent table styling between Streamdown and MarkdownRenderer tables - The implementation has an inconsistency where the CSS file uses `py-2.5` padding while the PR description and MarkdownRenderer use `py-3.5`. This will result in different table padding between the two rendering systems, contradicting the goal of consistent styling improvements. - Pay close attention to `autogpt_platform/frontend/src/app/globals.css` - the padding value needs to be corrected to match the intended design
--------- Co-authored-by: Claude Opus 4.6 Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com> --- autogpt_platform/frontend/src/app/globals.css | 11 +++++++++++ .../OutputRenderers/renderers/MarkdownRenderer.tsx | 6 +++--- 2 files changed, 14 insertions(+), 3 deletions(-) diff --git a/autogpt_platform/frontend/src/app/globals.css b/autogpt_platform/frontend/src/app/globals.css index dd1d17cde7..4a1691eec3 100644 --- a/autogpt_platform/frontend/src/app/globals.css +++ b/autogpt_platform/frontend/src/app/globals.css @@ -180,3 +180,14 @@ body[data-google-picker-open="true"] [data-dialog-content] { z-index: 1 !important; pointer-events: none !important; } + +/* CoPilot chat table styling — remove left/right borders, increase padding */ +[data-streamdown="table-wrapper"] table { + border-left: none; + border-right: none; +} + +[data-streamdown="table-wrapper"] th, +[data-streamdown="table-wrapper"] td { + padding: 0.875rem 1rem; /* py-3.5 px-4 */ +} diff --git a/autogpt_platform/frontend/src/components/contextual/OutputRenderers/renderers/MarkdownRenderer.tsx b/autogpt_platform/frontend/src/components/contextual/OutputRenderers/renderers/MarkdownRenderer.tsx index d94966c6c8..9815cea6ff 100644 --- a/autogpt_platform/frontend/src/components/contextual/OutputRenderers/renderers/MarkdownRenderer.tsx +++ b/autogpt_platform/frontend/src/components/contextual/OutputRenderers/renderers/MarkdownRenderer.tsx @@ -226,7 +226,7 @@ function renderMarkdown( table: ({ children, ...props }) => (
{children} @@ -235,7 +235,7 @@ function renderMarkdown( ), th: ({ children, ...props }) => (
{children} @@ -243,7 +243,7 @@ function renderMarkdown( ), td: ({ children, ...props }) => ( {children} From 9a8c6ad609f8f29087066d318cd5e1673f838e89 Mon Sep 17 00:00:00 2001 From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com> Date: Fri, 13 Feb 2026 10:10:11 +0100 Subject: [PATCH 16/18] chore(libs/deps): bump the production-dependencies group across 1 directory with 4 updates (#12056) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Bumps the production-dependencies group with 4 updates in the /autogpt_platform/autogpt_libs directory: [cryptography](https://github.com/pyca/cryptography), [fastapi](https://github.com/fastapi/fastapi), [launchdarkly-server-sdk](https://github.com/launchdarkly/python-server-sdk) and [supabase](https://github.com/supabase/supabase-py). Updates `cryptography` from 46.0.4 to 46.0.5
Changelog

Sourced from cryptography's changelog.

46.0.5 - 2026-02-10


* An attacker could create a malicious public key that reveals portions
of your
private key when using certain uncommon elliptic curves (binary curves).
This version now includes additional security checks to prevent this
attack.
This issue only affects binary elliptic curves, which are rarely used in
real-world applications. Credit to **XlabAI Team of Tencent Xuanwu Lab
and
Atuin Automated Vulnerability Discovery Engine** for reporting the
issue.
  **CVE-2026-26007**
* Support for ``SECT*`` binary elliptic curves is deprecated and will be
  removed in the next release.

.. v46-0-4:

Commits

Updates `fastapi` from 0.128.0 to 0.128.7
Release notes

Sourced from fastapi's releases.

0.128.7

Features

Refactors

  • ♻️ Simplify reading files in memory, do it sequentially instead of (fake) parallel. PR #14884 by @​tiangolo.

Docs

Internal

0.128.6

Fixes

Translations

Internal

0.128.5

Refactors

  • ♻️ Refactor and simplify Pydantic v2 (and v1) compatibility internal utils. PR #14862 by @​tiangolo.

Internal

  • ✅ Add inline snapshot tests for OpenAPI before changes from Pydantic v2. PR #14864 by @​tiangolo.

0.128.4

Refactors

  • ♻️ Refactor internals, simplify Pydantic v2/v1 utils, create_model_field, better types for lenient_issubclass. PR #14860 by @​tiangolo.
  • ♻️ Simplify internals, remove Pydantic v1 only logic, no longer needed. PR #14857 by @​tiangolo.
  • ♻️ Refactor internals, cleanup unneeded Pydantic v1 specific logic. PR #14856 by @​tiangolo.

... (truncated)

Commits

Updates `launchdarkly-server-sdk` from 9.14.1 to 9.15.0
Release notes

Sourced from launchdarkly-server-sdk's releases.

v9.15.0

9.15.0 (2026-02-10)

Features

Bug Fixes

  • Add context manager for clearer, safer locks (#396) (beca0fa)
  • Address potential race condition in FeatureStore update_availability (#391) (31cf487)
  • Allow modifying fdv2 data source options independent of main config (#403) (d78079e)
  • Mark copy_with_new_sdk_key method as deprecated (#353) (e471ccc)
  • Prevent immediate polling on recoverable error (#399) (da565a2)
  • Redis store is considered initialized when $inited key is written (e99a27d)
  • Stop FeatureStoreClientWrapper poller on close (#397) (468afdf)
  • Update DataSystemConfig to accept list of synchronizers (#404) (c73ad14)
  • Update reason documentation with inExperiment value (#401) (cbfc3dd)
  • Update Redis to write missing $inited key (e99a27d)

This PR was generated with Release Please. See documentation.

Changelog

Sourced from launchdarkly-server-sdk's changelog.

9.15.0 (2026-02-10)

⚠ BREAKING CHANGES

Note: The following breaking changes apply only to FDv2 (Flag Delivery v2) early access features, which are not subject to semantic versioning and may change without a major version bump.

  • Update ChangeSet to always require a Selector (#405) (5dc4f81)
    • The ChangeSetBuilder.finish() method now requires a Selector parameter.
  • Update DataSystemConfig to accept list of synchronizers (#404) (c73ad14)
    • The DataSystemConfig.synchronizers field now accepts a list of synchronizers, and the ConfigBuilder.synchronizers() method accepts variadic arguments.

Features

Bug Fixes

  • Add context manager for clearer, safer locks (#396) (beca0fa)
  • Address potential race condition in FeatureStore update_availability (#391) (31cf487)
  • Allow modifying fdv2 data source options independent of main config (#403) (d78079e)
  • Mark copy_with_new_sdk_key method as deprecated (#353) (e471ccc)
  • Prevent immediate polling on recoverable error (#399) (da565a2)
  • Redis store is considered initialized when $inited key is written (e99a27d)
  • Stop FeatureStoreClientWrapper poller on close (#397) (468afdf)
  • Update reason documentation with inExperiment value (#401) (cbfc3dd)
  • Update Redis to write missing $inited key (e99a27d)
Commits
  • e542f73 chore(main): release 9.15.0 (#394)
  • e471ccc fix: Mark copy_with_new_sdk_key method as deprecated (#353)
  • 5dc4f81 feat: Update ChangeSet to always require a Selector (#405)
  • f20fffe chore: Remove dead code, clarify names, other cleanup (#398)
  • c73ad14 fix: Update DataSystemConfig to accept list of synchronizers (#404)
  • d78079e fix: Allow modifying fdv2 data source options independent of main config (#403)
  • e99a27d chore: Support persistent data store verification in contract tests (#402)
  • cbfc3dd fix: Update reason documentation with inExperiment value (#401)
  • 5a1adbb chore: Update sdk_metadata features (#400)
  • da565a2 fix: Prevent immediate polling on recoverable error (#399)
  • Additional commits viewable in compare view

Updates `supabase` from 2.27.2 to 2.28.0
Release notes

Sourced from supabase's releases.

v2.28.0

2.28.0 (2026-02-10)

Features

  • storage: add list_v2 method to file_api client (#1377) (259f4ad)

Bug Fixes

  • auth: add missing is_sso_user, deleted_at, banned_until to User model (#1375) (7f84a62)
  • realtime: ensure remove_channel removes channel from channels dict (#1373) (0923314)
  • realtime: use pop with default in _handle_message to prevent KeyError (#1388) (baea26f)
  • storage3: replace print() with warnings.warn() for trailing slash notice (#1380) (50b099f)

v2.27.3

2.27.3 (2026-02-03)

Bug Fixes

  • deprecate python 3.9 in all packages (#1365) (cc72ed7)
  • ensure storage_url has trailing slash to prevent warning (#1367) (4267ff1)
Changelog

Sourced from supabase's changelog.

2.28.0 (2026-02-10)

Features

  • storage: add list_v2 method to file_api client (#1377) (259f4ad)

Bug Fixes

  • auth: add missing is_sso_user, deleted_at, banned_until to User model (#1375) (7f84a62)
  • realtime: ensure remove_channel removes channel from channels dict (#1373) (0923314)
  • realtime: use pop with default in _handle_message to prevent KeyError (#1388) (baea26f)
  • storage3: replace print() with warnings.warn() for trailing slash notice (#1380) (50b099f)

2.27.3 (2026-02-03)

Bug Fixes

  • deprecate python 3.9 in all packages (#1365) (cc72ed7)
  • ensure storage_url has trailing slash to prevent warning (#1367) (4267ff1)
Commits
  • 59e3384 chore(main): release 2.28.0 (#1378)
  • baea26f fix(realtime): use pop with default in _handle_message to prevent KeyError (#...
  • 259f4ad feat(storage): add list_v2 method to file_api client (#1377)
  • 50b099f fix(storage3): replace print() with warnings.warn() for trailing slash notice...
  • 0923314 fix(realtime): ensure remove_channel removes channel from channels dict (#1373)
  • 7f84a62 fix(auth): add missing is_sso_user, deleted_at, banned_until to User model (#...
  • 57dd6e2 chore(deps): bump the uv group across 1 directory with 3 updates (#1369)
  • c357def chore(main): release 2.27.3 (#1368)
  • 4267ff1 fix: ensure storage_url has trailing slash to prevent warning (#1367)
  • cc72ed7 fix: deprecate python 3.9 in all packages (#1365)
  • Additional commits viewable in compare view

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. [//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) ---
Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot show ignore conditions` will show all of the ignore conditions of the specified dependency - `@dependabot ignore major version` will close this group update PR and stop Dependabot creating any more for the specific dependency's major version (unless you unignore this specific dependency's major version or upgrade to it yourself) - `@dependabot ignore minor version` will close this group update PR and stop Dependabot creating any more for the specific dependency's minor version (unless you unignore this specific dependency's minor version or upgrade to it yourself) - `@dependabot ignore ` will close this group update PR and stop Dependabot creating any more for the specific dependency (unless you unignore this specific dependency or upgrade to it yourself) - `@dependabot unignore ` will remove all of the ignore conditions of the specified dependency - `@dependabot unignore ` will remove the ignore condition of the specified dependency and ignore conditions

Greptile Overview

Greptile Summary

Dependency update bumps 4 packages in the production-dependencies group, including a **critical security patch for `cryptography`** (CVE-2026-26007) that prevents malicious public key attacks on binary elliptic curves. The update also includes bug fixes for `fastapi`, `launchdarkly-server-sdk`, and `supabase`. - **cryptography** 46.0.4 → 46.0.5: patches CVE-2026-26007, deprecates SECT* binary curves - **fastapi** 0.128.0 → 0.128.7: bug fixes, improved error handling, relaxed Starlette constraint - **launchdarkly-server-sdk** 9.14.1 → 9.15.0: drops Python 3.9 support (requires >=3.10), fixes race conditions - **supabase** 2.27.2/2.27.3 → 2.28.0: realtime fixes, new User model fields The lock files correctly resolve all dependencies. Python 3.10+ requirement is already enforced in both packages. However, backend's `pyproject.toml` still specifies `launchdarkly-server-sdk = "^9.14.1"` while the lock file uses 9.15.0 (pulled from autogpt_libs dependency), creating a minor version constraint inconsistency.

Confidence Score: 4/5

- This PR is safe to merge with one minor style suggestion - Automated dependency update with critical security patch for cryptography. All updates are backwards-compatible within semver constraints. Lock files correctly resolve all dependencies. Python 3.10+ is already enforced. Only minor issue is version constraint inconsistency in backend's pyproject.toml for launchdarkly-server-sdk, which doesn't affect functionality but should be aligned for clarity. - autogpt_platform/backend/pyproject.toml needs launchdarkly-server-sdk version constraint updated to ^9.15.0
--------- Signed-off-by: dependabot[bot] Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> Co-authored-by: Otto --- autogpt_platform/autogpt_libs/poetry.lock | 169 ++++++++++--------- autogpt_platform/autogpt_libs/pyproject.toml | 6 +- autogpt_platform/backend/poetry.lock | 68 ++++---- autogpt_platform/backend/pyproject.toml | 2 +- 4 files changed, 123 insertions(+), 122 deletions(-) diff --git a/autogpt_platform/autogpt_libs/poetry.lock b/autogpt_platform/autogpt_libs/poetry.lock index 0a421dda31..e1d599360e 100644 --- a/autogpt_platform/autogpt_libs/poetry.lock +++ b/autogpt_platform/autogpt_libs/poetry.lock @@ -448,61 +448,61 @@ toml = ["tomli ; python_full_version <= \"3.11.0a6\""] [[package]] name = "cryptography" -version = "46.0.4" +version = "46.0.5" description = "cryptography is a package which provides cryptographic recipes and primitives to Python developers." optional = false python-versions = "!=3.9.0,!=3.9.1,>=3.8" groups = ["main"] files = [ - {file = "cryptography-46.0.4-cp311-abi3-macosx_10_9_universal2.whl", hash = "sha256:281526e865ed4166009e235afadf3a4c4cba6056f99336a99efba65336fd5485"}, - {file = "cryptography-46.0.4-cp311-abi3-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:5f14fba5bf6f4390d7ff8f086c566454bff0411f6d8aa7af79c88b6f9267aecc"}, - {file = "cryptography-46.0.4-cp311-abi3-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:47bcd19517e6389132f76e2d5303ded6cf3f78903da2158a671be8de024f4cd0"}, - {file = "cryptography-46.0.4-cp311-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:01df4f50f314fbe7009f54046e908d1754f19d0c6d3070df1e6268c5a4af09fa"}, - {file = "cryptography-46.0.4-cp311-abi3-manylinux_2_28_ppc64le.whl", hash = "sha256:5aa3e463596b0087b3da0dbe2b2487e9fc261d25da85754e30e3b40637d61f81"}, - {file = "cryptography-46.0.4-cp311-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:0a9ad24359fee86f131836a9ac3bffc9329e956624a2d379b613f8f8abaf5255"}, - {file = "cryptography-46.0.4-cp311-abi3-manylinux_2_31_armv7l.whl", hash = "sha256:dc1272e25ef673efe72f2096e92ae39dea1a1a450dd44918b15351f72c5a168e"}, - {file = "cryptography-46.0.4-cp311-abi3-manylinux_2_34_aarch64.whl", hash = "sha256:de0f5f4ec8711ebc555f54735d4c673fc34b65c44283895f1a08c2b49d2fd99c"}, - {file = "cryptography-46.0.4-cp311-abi3-manylinux_2_34_ppc64le.whl", hash = "sha256:eeeb2e33d8dbcccc34d64651f00a98cb41b2dc69cef866771a5717e6734dfa32"}, - {file = "cryptography-46.0.4-cp311-abi3-manylinux_2_34_x86_64.whl", hash = "sha256:3d425eacbc9aceafd2cb429e42f4e5d5633c6f873f5e567077043ef1b9bbf616"}, - {file = "cryptography-46.0.4-cp311-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:91627ebf691d1ea3976a031b61fb7bac1ccd745afa03602275dda443e11c8de0"}, - {file = "cryptography-46.0.4-cp311-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:2d08bc22efd73e8854b0b7caff402d735b354862f1145d7be3b9c0f740fef6a0"}, - {file = "cryptography-46.0.4-cp311-abi3-win32.whl", hash = "sha256:82a62483daf20b8134f6e92898da70d04d0ef9a75829d732ea1018678185f4f5"}, - {file = "cryptography-46.0.4-cp311-abi3-win_amd64.whl", hash = "sha256:6225d3ebe26a55dbc8ead5ad1265c0403552a63336499564675b29eb3184c09b"}, - {file = "cryptography-46.0.4-cp314-cp314t-macosx_10_9_universal2.whl", hash = "sha256:485e2b65d25ec0d901bca7bcae0f53b00133bf3173916d8e421f6fddde103908"}, - {file = "cryptography-46.0.4-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:078e5f06bd2fa5aea5a324f2a09f914b1484f1d0c2a4d6a8a28c74e72f65f2da"}, - {file = "cryptography-46.0.4-cp314-cp314t-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:dce1e4f068f03008da7fa51cc7abc6ddc5e5de3e3d1550334eaf8393982a5829"}, - {file = "cryptography-46.0.4-cp314-cp314t-manylinux_2_28_aarch64.whl", hash = "sha256:2067461c80271f422ee7bdbe79b9b4be54a5162e90345f86a23445a0cf3fd8a2"}, - {file = "cryptography-46.0.4-cp314-cp314t-manylinux_2_28_ppc64le.whl", hash = "sha256:c92010b58a51196a5f41c3795190203ac52edfd5dc3ff99149b4659eba9d2085"}, - {file = "cryptography-46.0.4-cp314-cp314t-manylinux_2_28_x86_64.whl", hash = "sha256:829c2b12bbc5428ab02d6b7f7e9bbfd53e33efd6672d21341f2177470171ad8b"}, - {file = "cryptography-46.0.4-cp314-cp314t-manylinux_2_31_armv7l.whl", hash = "sha256:62217ba44bf81b30abaeda1488686a04a702a261e26f87db51ff61d9d3510abd"}, - {file = "cryptography-46.0.4-cp314-cp314t-manylinux_2_34_aarch64.whl", hash = "sha256:9c2da296c8d3415b93e6053f5a728649a87a48ce084a9aaf51d6e46c87c7f2d2"}, - {file = "cryptography-46.0.4-cp314-cp314t-manylinux_2_34_ppc64le.whl", hash = "sha256:9b34d8ba84454641a6bf4d6762d15847ecbd85c1316c0a7984e6e4e9f748ec2e"}, - {file = "cryptography-46.0.4-cp314-cp314t-manylinux_2_34_x86_64.whl", hash = "sha256:df4a817fa7138dd0c96c8c8c20f04b8aaa1fac3bbf610913dcad8ea82e1bfd3f"}, - {file = "cryptography-46.0.4-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:b1de0ebf7587f28f9190b9cb526e901bf448c9e6a99655d2b07fff60e8212a82"}, - {file = "cryptography-46.0.4-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:9b4d17bc7bd7cdd98e3af40b441feaea4c68225e2eb2341026c84511ad246c0c"}, - {file = "cryptography-46.0.4-cp314-cp314t-win32.whl", hash = "sha256:c411f16275b0dea722d76544a61d6421e2cc829ad76eec79280dbdc9ddf50061"}, - {file = "cryptography-46.0.4-cp314-cp314t-win_amd64.whl", hash = "sha256:728fedc529efc1439eb6107b677f7f7558adab4553ef8669f0d02d42d7b959a7"}, - {file = "cryptography-46.0.4-cp38-abi3-macosx_10_9_universal2.whl", hash = "sha256:a9556ba711f7c23f77b151d5798f3ac44a13455cc68db7697a1096e6d0563cab"}, - {file = "cryptography-46.0.4-cp38-abi3-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:8bf75b0259e87fa70bddc0b8b4078b76e7fd512fd9afae6c1193bcf440a4dbef"}, - {file = "cryptography-46.0.4-cp38-abi3-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:3c268a3490df22270955966ba236d6bc4a8f9b6e4ffddb78aac535f1a5ea471d"}, - {file = "cryptography-46.0.4-cp38-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:812815182f6a0c1d49a37893a303b44eaac827d7f0d582cecfc81b6427f22973"}, - {file = "cryptography-46.0.4-cp38-abi3-manylinux_2_28_ppc64le.whl", hash = "sha256:a90e43e3ef65e6dcf969dfe3bb40cbf5aef0d523dff95bfa24256be172a845f4"}, - {file = "cryptography-46.0.4-cp38-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:a05177ff6296644ef2876fce50518dffb5bcdf903c85250974fc8bc85d54c0af"}, - {file = "cryptography-46.0.4-cp38-abi3-manylinux_2_31_armv7l.whl", hash = "sha256:daa392191f626d50f1b136c9b4cf08af69ca8279d110ea24f5c2700054d2e263"}, - {file = "cryptography-46.0.4-cp38-abi3-manylinux_2_34_aarch64.whl", hash = "sha256:e07ea39c5b048e085f15923511d8121e4a9dc45cee4e3b970ca4f0d338f23095"}, - {file = "cryptography-46.0.4-cp38-abi3-manylinux_2_34_ppc64le.whl", hash = "sha256:d5a45ddc256f492ce42a4e35879c5e5528c09cd9ad12420828c972951d8e016b"}, - {file = "cryptography-46.0.4-cp38-abi3-manylinux_2_34_x86_64.whl", hash = "sha256:6bb5157bf6a350e5b28aee23beb2d84ae6f5be390b2f8ee7ea179cda077e1019"}, - {file = "cryptography-46.0.4-cp38-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:dd5aba870a2c40f87a3af043e0dee7d9eb02d4aff88a797b48f2b43eff8c3ab4"}, - {file = "cryptography-46.0.4-cp38-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:93d8291da8d71024379ab2cb0b5c57915300155ad42e07f76bea6ad838d7e59b"}, - {file = "cryptography-46.0.4-cp38-abi3-win32.whl", hash = "sha256:0563655cb3c6d05fb2afe693340bc050c30f9f34e15763361cf08e94749401fc"}, - {file = "cryptography-46.0.4-cp38-abi3-win_amd64.whl", hash = "sha256:fa0900b9ef9c49728887d1576fd8d9e7e3ea872fa9b25ef9b64888adc434e976"}, - {file = "cryptography-46.0.4-pp311-pypy311_pp73-macosx_11_0_arm64.whl", hash = "sha256:766330cce7416c92b5e90c3bb71b1b79521760cdcfc3a6a1a182d4c9fab23d2b"}, - {file = "cryptography-46.0.4-pp311-pypy311_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:c236a44acfb610e70f6b3e1c3ca20ff24459659231ef2f8c48e879e2d32b73da"}, - {file = "cryptography-46.0.4-pp311-pypy311_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:8a15fb869670efa8f83cbffbc8753c1abf236883225aed74cd179b720ac9ec80"}, - {file = "cryptography-46.0.4-pp311-pypy311_pp73-manylinux_2_34_aarch64.whl", hash = "sha256:fdc3daab53b212472f1524d070735b2f0c214239df131903bae1d598016fa822"}, - {file = "cryptography-46.0.4-pp311-pypy311_pp73-manylinux_2_34_x86_64.whl", hash = "sha256:44cc0675b27cadb71bdbb96099cca1fa051cd11d2ade09e5cd3a2edb929ed947"}, - {file = "cryptography-46.0.4-pp311-pypy311_pp73-win_amd64.whl", hash = "sha256:be8c01a7d5a55f9a47d1888162b76c8f49d62b234d88f0ff91a9fbebe32ffbc3"}, - {file = "cryptography-46.0.4.tar.gz", hash = "sha256:bfd019f60f8abc2ed1b9be4ddc21cfef059c841d86d710bb69909a688cbb8f59"}, + {file = "cryptography-46.0.5-cp311-abi3-macosx_10_9_universal2.whl", hash = "sha256:351695ada9ea9618b3500b490ad54c739860883df6c1f555e088eaf25b1bbaad"}, + {file = "cryptography-46.0.5-cp311-abi3-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:c18ff11e86df2e28854939acde2d003f7984f721eba450b56a200ad90eeb0e6b"}, + {file = "cryptography-46.0.5-cp311-abi3-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:4d7e3d356b8cd4ea5aff04f129d5f66ebdc7b6f8eae802b93739ed520c47c79b"}, + {file = "cryptography-46.0.5-cp311-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:50bfb6925eff619c9c023b967d5b77a54e04256c4281b0e21336a130cd7fc263"}, + {file = "cryptography-46.0.5-cp311-abi3-manylinux_2_28_ppc64le.whl", hash = "sha256:803812e111e75d1aa73690d2facc295eaefd4439be1023fefc4995eaea2af90d"}, + {file = "cryptography-46.0.5-cp311-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:3ee190460e2fbe447175cda91b88b84ae8322a104fc27766ad09428754a618ed"}, + {file = "cryptography-46.0.5-cp311-abi3-manylinux_2_31_armv7l.whl", hash = "sha256:f145bba11b878005c496e93e257c1e88f154d278d2638e6450d17e0f31e558d2"}, + {file = "cryptography-46.0.5-cp311-abi3-manylinux_2_34_aarch64.whl", hash = "sha256:e9251e3be159d1020c4030bd2e5f84d6a43fe54b6c19c12f51cde9542a2817b2"}, + {file = "cryptography-46.0.5-cp311-abi3-manylinux_2_34_ppc64le.whl", hash = "sha256:47fb8a66058b80e509c47118ef8a75d14c455e81ac369050f20ba0d23e77fee0"}, + {file = "cryptography-46.0.5-cp311-abi3-manylinux_2_34_x86_64.whl", hash = "sha256:4c3341037c136030cb46e4b1e17b7418ea4cbd9dd207e4a6f3b2b24e0d4ac731"}, + {file = "cryptography-46.0.5-cp311-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:890bcb4abd5a2d3f852196437129eb3667d62630333aacc13dfd470fad3aaa82"}, + {file = "cryptography-46.0.5-cp311-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:80a8d7bfdf38f87ca30a5391c0c9ce4ed2926918e017c29ddf643d0ed2778ea1"}, + {file = "cryptography-46.0.5-cp311-abi3-win32.whl", hash = "sha256:60ee7e19e95104d4c03871d7d7dfb3d22ef8a9b9c6778c94e1c8fcc8365afd48"}, + {file = "cryptography-46.0.5-cp311-abi3-win_amd64.whl", hash = "sha256:38946c54b16c885c72c4f59846be9743d699eee2b69b6988e0a00a01f46a61a4"}, + {file = "cryptography-46.0.5-cp314-cp314t-macosx_10_9_universal2.whl", hash = "sha256:94a76daa32eb78d61339aff7952ea819b1734b46f73646a07decb40e5b3448e2"}, + {file = "cryptography-46.0.5-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:5be7bf2fb40769e05739dd0046e7b26f9d4670badc7b032d6ce4db64dddc0678"}, + {file = "cryptography-46.0.5-cp314-cp314t-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:fe346b143ff9685e40192a4960938545c699054ba11d4f9029f94751e3f71d87"}, + {file = "cryptography-46.0.5-cp314-cp314t-manylinux_2_28_aarch64.whl", hash = "sha256:c69fd885df7d089548a42d5ec05be26050ebcd2283d89b3d30676eb32ff87dee"}, + {file = "cryptography-46.0.5-cp314-cp314t-manylinux_2_28_ppc64le.whl", hash = "sha256:8293f3dea7fc929ef7240796ba231413afa7b68ce38fd21da2995549f5961981"}, + {file = "cryptography-46.0.5-cp314-cp314t-manylinux_2_28_x86_64.whl", hash = "sha256:1abfdb89b41c3be0365328a410baa9df3ff8a9110fb75e7b52e66803ddabc9a9"}, + {file = "cryptography-46.0.5-cp314-cp314t-manylinux_2_31_armv7l.whl", hash = "sha256:d66e421495fdb797610a08f43b05269e0a5ea7f5e652a89bfd5a7d3c1dee3648"}, + {file = "cryptography-46.0.5-cp314-cp314t-manylinux_2_34_aarch64.whl", hash = "sha256:4e817a8920bfbcff8940ecfd60f23d01836408242b30f1a708d93198393a80b4"}, + {file = "cryptography-46.0.5-cp314-cp314t-manylinux_2_34_ppc64le.whl", hash = "sha256:68f68d13f2e1cb95163fa3b4db4bf9a159a418f5f6e7242564fc75fcae667fd0"}, + {file = "cryptography-46.0.5-cp314-cp314t-manylinux_2_34_x86_64.whl", hash = "sha256:a3d1fae9863299076f05cb8a778c467578262fae09f9dc0ee9b12eb4268ce663"}, + {file = "cryptography-46.0.5-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:c4143987a42a2397f2fc3b4d7e3a7d313fbe684f67ff443999e803dd75a76826"}, + {file = "cryptography-46.0.5-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:7d731d4b107030987fd61a7f8ab512b25b53cef8f233a97379ede116f30eb67d"}, + {file = "cryptography-46.0.5-cp314-cp314t-win32.whl", hash = "sha256:c3bcce8521d785d510b2aad26ae2c966092b7daa8f45dd8f44734a104dc0bc1a"}, + {file = "cryptography-46.0.5-cp314-cp314t-win_amd64.whl", hash = "sha256:4d8ae8659ab18c65ced284993c2265910f6c9e650189d4e3f68445ef82a810e4"}, + {file = "cryptography-46.0.5-cp38-abi3-macosx_10_9_universal2.whl", hash = "sha256:4108d4c09fbbf2789d0c926eb4152ae1760d5a2d97612b92d508d96c861e4d31"}, + {file = "cryptography-46.0.5-cp38-abi3-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:7d1f30a86d2757199cb2d56e48cce14deddf1f9c95f1ef1b64ee91ea43fe2e18"}, + {file = "cryptography-46.0.5-cp38-abi3-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:039917b0dc418bb9f6edce8a906572d69e74bd330b0b3fea4f79dab7f8ddd235"}, + {file = "cryptography-46.0.5-cp38-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:ba2a27ff02f48193fc4daeadf8ad2590516fa3d0adeeb34336b96f7fa64c1e3a"}, + {file = "cryptography-46.0.5-cp38-abi3-manylinux_2_28_ppc64le.whl", hash = "sha256:61aa400dce22cb001a98014f647dc21cda08f7915ceb95df0c9eaf84b4b6af76"}, + {file = "cryptography-46.0.5-cp38-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:3ce58ba46e1bc2aac4f7d9290223cead56743fa6ab94a5d53292ffaac6a91614"}, + {file = "cryptography-46.0.5-cp38-abi3-manylinux_2_31_armv7l.whl", hash = "sha256:420d0e909050490d04359e7fdb5ed7e667ca5c3c402b809ae2563d7e66a92229"}, + {file = "cryptography-46.0.5-cp38-abi3-manylinux_2_34_aarch64.whl", hash = "sha256:582f5fcd2afa31622f317f80426a027f30dc792e9c80ffee87b993200ea115f1"}, + {file = "cryptography-46.0.5-cp38-abi3-manylinux_2_34_ppc64le.whl", hash = "sha256:bfd56bb4b37ed4f330b82402f6f435845a5f5648edf1ad497da51a8452d5d62d"}, + {file = "cryptography-46.0.5-cp38-abi3-manylinux_2_34_x86_64.whl", hash = "sha256:a3d507bb6a513ca96ba84443226af944b0f7f47dcc9a399d110cd6146481d24c"}, + {file = "cryptography-46.0.5-cp38-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:9f16fbdf4da055efb21c22d81b89f155f02ba420558db21288b3d0035bafd5f4"}, + {file = "cryptography-46.0.5-cp38-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:ced80795227d70549a411a4ab66e8ce307899fad2220ce5ab2f296e687eacde9"}, + {file = "cryptography-46.0.5-cp38-abi3-win32.whl", hash = "sha256:02f547fce831f5096c9a567fd41bc12ca8f11df260959ecc7c3202555cc47a72"}, + {file = "cryptography-46.0.5-cp38-abi3-win_amd64.whl", hash = "sha256:556e106ee01aa13484ce9b0239bca667be5004efb0aabbed28d353df86445595"}, + {file = "cryptography-46.0.5-pp311-pypy311_pp73-macosx_11_0_arm64.whl", hash = "sha256:3b4995dc971c9fb83c25aa44cf45f02ba86f71ee600d81091c2f0cbae116b06c"}, + {file = "cryptography-46.0.5-pp311-pypy311_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:bc84e875994c3b445871ea7181d424588171efec3e185dced958dad9e001950a"}, + {file = "cryptography-46.0.5-pp311-pypy311_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:2ae6971afd6246710480e3f15824ed3029a60fc16991db250034efd0b9fb4356"}, + {file = "cryptography-46.0.5-pp311-pypy311_pp73-manylinux_2_34_aarch64.whl", hash = "sha256:d861ee9e76ace6cf36a6a89b959ec08e7bc2493ee39d07ffe5acb23ef46d27da"}, + {file = "cryptography-46.0.5-pp311-pypy311_pp73-manylinux_2_34_x86_64.whl", hash = "sha256:2b7a67c9cd56372f3249b39699f2ad479f6991e62ea15800973b956f4b73e257"}, + {file = "cryptography-46.0.5-pp311-pypy311_pp73-win_amd64.whl", hash = "sha256:8456928655f856c6e1533ff59d5be76578a7157224dbd9ce6872f25055ab9ab7"}, + {file = "cryptography-46.0.5.tar.gz", hash = "sha256:abace499247268e3757271b2f1e244b36b06f8515cf27c4d49468fc9eb16e93d"}, ] [package.dependencies] @@ -516,7 +516,7 @@ nox = ["nox[uv] (>=2024.4.15)"] pep8test = ["check-sdist", "click (>=8.0.1)", "mypy (>=1.14)", "ruff (>=0.11.11)"] sdist = ["build (>=1.0.0)"] ssh = ["bcrypt (>=3.1.5)"] -test = ["certifi (>=2024)", "cryptography-vectors (==46.0.4)", "pretend (>=0.7)", "pytest (>=7.4.0)", "pytest-benchmark (>=4.0)", "pytest-cov (>=2.10.1)", "pytest-xdist (>=3.5.0)"] +test = ["certifi (>=2024)", "cryptography-vectors (==46.0.5)", "pretend (>=0.7)", "pytest (>=7.4.0)", "pytest-benchmark (>=4.0)", "pytest-cov (>=2.10.1)", "pytest-xdist (>=3.5.0)"] test-randomorder = ["pytest-randomly"] [[package]] @@ -570,24 +570,25 @@ tests = ["coverage", "coveralls", "dill", "mock", "nose"] [[package]] name = "fastapi" -version = "0.128.0" +version = "0.128.7" description = "FastAPI framework, high performance, easy to learn, fast to code, ready for production" optional = false python-versions = ">=3.9" groups = ["main"] files = [ - {file = "fastapi-0.128.0-py3-none-any.whl", hash = "sha256:aebd93f9716ee3b4f4fcfe13ffb7cf308d99c9f3ab5622d8877441072561582d"}, - {file = "fastapi-0.128.0.tar.gz", hash = "sha256:1cc179e1cef10a6be60ffe429f79b829dce99d8de32d7acb7e6c8dfdf7f2645a"}, + {file = "fastapi-0.128.7-py3-none-any.whl", hash = "sha256:6bd9bd31cb7047465f2d3fa3ba3f33b0870b17d4eaf7cdb36d1576ab060ad662"}, + {file = "fastapi-0.128.7.tar.gz", hash = "sha256:783c273416995486c155ad2c0e2b45905dedfaf20b9ef8d9f6a9124670639a24"}, ] [package.dependencies] annotated-doc = ">=0.0.2" pydantic = ">=2.7.0" -starlette = ">=0.40.0,<0.51.0" +starlette = ">=0.40.0,<1.0.0" typing-extensions = ">=4.8.0" +typing-inspection = ">=0.4.2" [package.extras] -all = ["email-validator (>=2.0.0)", "fastapi-cli[standard] (>=0.0.8)", "httpx (>=0.23.0,<1.0.0)", "itsdangerous (>=1.1.0)", "jinja2 (>=3.1.5)", "orjson (>=3.2.1)", "pydantic-extra-types (>=2.0.0)", "pydantic-settings (>=2.0.0)", "python-multipart (>=0.0.18)", "pyyaml (>=5.3.1)", "ujson (>=4.0.1,!=4.0.2,!=4.1.0,!=4.2.0,!=4.3.0,!=5.0.0,!=5.1.0)", "uvicorn[standard] (>=0.12.0)"] +all = ["email-validator (>=2.0.0)", "fastapi-cli[standard] (>=0.0.8)", "httpx (>=0.23.0,<1.0.0)", "itsdangerous (>=1.1.0)", "jinja2 (>=3.1.5)", "orjson (>=3.9.3)", "pydantic-extra-types (>=2.0.0)", "pydantic-settings (>=2.0.0)", "python-multipart (>=0.0.18)", "pyyaml (>=5.3.1)", "ujson (>=5.8.0)", "uvicorn[standard] (>=0.12.0)"] standard = ["email-validator (>=2.0.0)", "fastapi-cli[standard] (>=0.0.8)", "httpx (>=0.23.0,<1.0.0)", "jinja2 (>=3.1.5)", "pydantic-extra-types (>=2.0.0)", "pydantic-settings (>=2.0.0)", "python-multipart (>=0.0.18)", "uvicorn[standard] (>=0.12.0)"] standard-no-fastapi-cloud-cli = ["email-validator (>=2.0.0)", "fastapi-cli[standard-no-fastapi-cloud-cli] (>=0.0.8)", "httpx (>=0.23.0,<1.0.0)", "jinja2 (>=3.1.5)", "pydantic-extra-types (>=2.0.0)", "pydantic-settings (>=2.0.0)", "python-multipart (>=0.0.18)", "uvicorn[standard] (>=0.12.0)"] @@ -1062,14 +1063,14 @@ urllib3 = ">=1.26.0,<3" [[package]] name = "launchdarkly-server-sdk" -version = "9.14.1" +version = "9.15.0" description = "LaunchDarkly SDK for Python" optional = false -python-versions = ">=3.9" +python-versions = ">=3.10" groups = ["main"] files = [ - {file = "launchdarkly_server_sdk-9.14.1-py3-none-any.whl", hash = "sha256:a9e2bd9ecdef845cd631ae0d4334a1115e5b44257c42eb2349492be4bac7815c"}, - {file = "launchdarkly_server_sdk-9.14.1.tar.gz", hash = "sha256:1df44baf0a0efa74d8c1dad7a00592b98bce7d19edded7f770da8dbc49922213"}, + {file = "launchdarkly_server_sdk-9.15.0-py3-none-any.whl", hash = "sha256:c267e29bfa3fb5e2a06a208448ada6ed5557a2924979b8d79c970b45d227c668"}, + {file = "launchdarkly_server_sdk-9.15.0.tar.gz", hash = "sha256:f31441b74bc1a69c381db57c33116509e407a2612628ad6dff0a7dbb39d5020b"}, ] [package.dependencies] @@ -1478,14 +1479,14 @@ testing = ["coverage", "pytest", "pytest-benchmark"] [[package]] name = "postgrest" -version = "2.27.2" +version = "2.28.0" description = "PostgREST client for Python. This library provides an ORM interface to PostgREST." optional = false python-versions = ">=3.9" groups = ["main"] files = [ - {file = "postgrest-2.27.2-py3-none-any.whl", hash = "sha256:1666fef3de05ca097a314433dd5ae2f2d71c613cb7b233d0f468c4ffe37277da"}, - {file = "postgrest-2.27.2.tar.gz", hash = "sha256:55407d530b5af3d64e883a71fec1f345d369958f723ce4a8ab0b7d169e313242"}, + {file = "postgrest-2.28.0-py3-none-any.whl", hash = "sha256:7bca2f24dd1a1bf8a3d586c7482aba6cd41662da6733045fad585b63b7f7df75"}, + {file = "postgrest-2.28.0.tar.gz", hash = "sha256:c36b38646d25ea4255321d3d924ce70f8d20ec7799cb42c1221d6a818d4f6515"}, ] [package.dependencies] @@ -2248,14 +2249,14 @@ cli = ["click (>=5.0)"] [[package]] name = "realtime" -version = "2.27.2" +version = "2.28.0" description = "" optional = false python-versions = ">=3.9" groups = ["main"] files = [ - {file = "realtime-2.27.2-py3-none-any.whl", hash = "sha256:34a9cbb26a274e707e8fc9e3ee0a66de944beac0fe604dc336d1e985db2c830f"}, - {file = "realtime-2.27.2.tar.gz", hash = "sha256:b960a90294d2cea1b3f1275ecb89204304728e08fff1c393cc1b3150739556b3"}, + {file = "realtime-2.28.0-py3-none-any.whl", hash = "sha256:db1bd59bab9b1fcc9f9d3b1a073bed35bf4994d720e6751f10031a58d57a3836"}, + {file = "realtime-2.28.0.tar.gz", hash = "sha256:d18cedcebd6a8f22fcd509bc767f639761eb218b7b2b6f14fc4205b6259b50fc"}, ] [package.dependencies] @@ -2436,14 +2437,14 @@ full = ["httpx (>=0.27.0,<0.29.0)", "itsdangerous", "jinja2", "python-multipart [[package]] name = "storage3" -version = "2.27.2" +version = "2.28.0" description = "Supabase Storage client for Python." optional = false python-versions = ">=3.9" groups = ["main"] files = [ - {file = "storage3-2.27.2-py3-none-any.whl", hash = "sha256:e6f16e7a260729e7b1f46e9bf61746805a02e30f5e419ee1291007c432e3ec63"}, - {file = "storage3-2.27.2.tar.gz", hash = "sha256:cb4807b7f86b4bb1272ac6fdd2f3cfd8ba577297046fa5f88557425200275af5"}, + {file = "storage3-2.28.0-py3-none-any.whl", hash = "sha256:ecb50efd2ac71dabbdf97e99ad346eafa630c4c627a8e5a138ceb5fbbadae716"}, + {file = "storage3-2.28.0.tar.gz", hash = "sha256:bc1d008aff67de7a0f2bd867baee7aadbcdb6f78f5a310b4f7a38e8c13c19865"}, ] [package.dependencies] @@ -2487,35 +2488,35 @@ python-dateutil = ">=2.6.0" [[package]] name = "supabase" -version = "2.27.2" +version = "2.28.0" description = "Supabase client for Python." optional = false python-versions = ">=3.9" groups = ["main"] files = [ - {file = "supabase-2.27.2-py3-none-any.whl", hash = "sha256:d4dce00b3a418ee578017ec577c0e5be47a9a636355009c76f20ed2faa15bc54"}, - {file = "supabase-2.27.2.tar.gz", hash = "sha256:2aed40e4f3454438822442a1e94a47be6694c2c70392e7ae99b51a226d4293f7"}, + {file = "supabase-2.28.0-py3-none-any.whl", hash = "sha256:42776971c7d0ccca16034df1ab96a31c50228eb1eb19da4249ad2f756fc20272"}, + {file = "supabase-2.28.0.tar.gz", hash = "sha256:aea299aaab2a2eed3c57e0be7fc035c6807214194cce795a3575add20268ece1"}, ] [package.dependencies] httpx = ">=0.26,<0.29" -postgrest = "2.27.2" -realtime = "2.27.2" -storage3 = "2.27.2" -supabase-auth = "2.27.2" -supabase-functions = "2.27.2" +postgrest = "2.28.0" +realtime = "2.28.0" +storage3 = "2.28.0" +supabase-auth = "2.28.0" +supabase-functions = "2.28.0" yarl = ">=1.22.0" [[package]] name = "supabase-auth" -version = "2.27.2" +version = "2.28.0" description = "Python Client Library for Supabase Auth" optional = false python-versions = ">=3.9" groups = ["main"] files = [ - {file = "supabase_auth-2.27.2-py3-none-any.whl", hash = "sha256:78ec25b11314d0a9527a7205f3b1c72560dccdc11b38392f80297ef98664ee91"}, - {file = "supabase_auth-2.27.2.tar.gz", hash = "sha256:0f5bcc79b3677cb42e9d321f3c559070cfa40d6a29a67672cc8382fb7dc2fe97"}, + {file = "supabase_auth-2.28.0-py3-none-any.whl", hash = "sha256:2ac85026cc285054c7fa6d41924f3a333e9ec298c013e5b5e1754039ba7caec9"}, + {file = "supabase_auth-2.28.0.tar.gz", hash = "sha256:2bb8f18ff39934e44b28f10918db965659f3735cd6fbfcc022fe0b82dbf8233e"}, ] [package.dependencies] @@ -2525,14 +2526,14 @@ pyjwt = {version = ">=2.10.1", extras = ["crypto"]} [[package]] name = "supabase-functions" -version = "2.27.2" +version = "2.28.0" description = "Library for Supabase Functions" optional = false python-versions = ">=3.9" groups = ["main"] files = [ - {file = "supabase_functions-2.27.2-py3-none-any.whl", hash = "sha256:db480efc669d0bca07605b9b6f167312af43121adcc842a111f79bea416ef754"}, - {file = "supabase_functions-2.27.2.tar.gz", hash = "sha256:d0c8266207a94371cb3fd35ad3c7f025b78a97cf026861e04ccd35ac1775f80b"}, + {file = "supabase_functions-2.28.0-py3-none-any.whl", hash = "sha256:30bf2d586f8df285faf0621bb5d5bb3ec3157234fc820553ca156f009475e4ae"}, + {file = "supabase_functions-2.28.0.tar.gz", hash = "sha256:db3dddfc37aca5858819eb461130968473bd8c75bd284581013958526dac718b"}, ] [package.dependencies] @@ -2911,4 +2912,4 @@ type = ["pytest-mypy"] [metadata] lock-version = "2.1" python-versions = ">=3.10,<4.0" -content-hash = "40eae94995dc0a388fa832ed4af9b6137f28d5b5ced3aaea70d5f91d4d9a179d" +content-hash = "9619cae908ad38fa2c48016a58bcf4241f6f5793aa0e6cc140276e91c433cbbb" diff --git a/autogpt_platform/autogpt_libs/pyproject.toml b/autogpt_platform/autogpt_libs/pyproject.toml index 8deb4d2169..2cfa742922 100644 --- a/autogpt_platform/autogpt_libs/pyproject.toml +++ b/autogpt_platform/autogpt_libs/pyproject.toml @@ -11,14 +11,14 @@ python = ">=3.10,<4.0" colorama = "^0.4.6" cryptography = "^46.0" expiringdict = "^1.2.2" -fastapi = "^0.128.0" +fastapi = "^0.128.7" google-cloud-logging = "^3.13.0" -launchdarkly-server-sdk = "^9.14.1" +launchdarkly-server-sdk = "^9.15.0" pydantic = "^2.12.5" pydantic-settings = "^2.12.0" pyjwt = { version = "^2.11.0", extras = ["crypto"] } redis = "^6.2.0" -supabase = "^2.27.2" +supabase = "^2.28.0" uvicorn = "^0.40.0" [tool.poetry.group.dev.dependencies] diff --git a/autogpt_platform/backend/poetry.lock b/autogpt_platform/backend/poetry.lock index 53b5030da6..d71cca7865 100644 --- a/autogpt_platform/backend/poetry.lock +++ b/autogpt_platform/backend/poetry.lock @@ -441,14 +441,14 @@ develop = true colorama = "^0.4.6" cryptography = "^46.0" expiringdict = "^1.2.2" -fastapi = "^0.128.0" +fastapi = "^0.128.7" google-cloud-logging = "^3.13.0" -launchdarkly-server-sdk = "^9.14.1" +launchdarkly-server-sdk = "^9.15.0" pydantic = "^2.12.5" pydantic-settings = "^2.12.0" pyjwt = {version = "^2.11.0", extras = ["crypto"]} redis = "^6.2.0" -supabase = "^2.27.2" +supabase = "^2.28.0" uvicorn = "^0.40.0" [package.source] @@ -1382,14 +1382,14 @@ tzdata = "*" [[package]] name = "fastapi" -version = "0.128.6" +version = "0.128.7" description = "FastAPI framework, high performance, easy to learn, fast to code, ready for production" optional = false python-versions = ">=3.9" groups = ["main"] files = [ - {file = "fastapi-0.128.6-py3-none-any.whl", hash = "sha256:bb1c1ef87d6086a7132d0ab60869d6f1ee67283b20fbf84ec0003bd335099509"}, - {file = "fastapi-0.128.6.tar.gz", hash = "sha256:0cb3946557e792d731b26a42b04912f16367e3c3135ea8290f620e234f2b604f"}, + {file = "fastapi-0.128.7-py3-none-any.whl", hash = "sha256:6bd9bd31cb7047465f2d3fa3ba3f33b0870b17d4eaf7cdb36d1576ab060ad662"}, + {file = "fastapi-0.128.7.tar.gz", hash = "sha256:783c273416995486c155ad2c0e2b45905dedfaf20b9ef8d9f6a9124670639a24"}, ] [package.dependencies] @@ -3117,14 +3117,14 @@ urllib3 = ">=1.26.0,<3" [[package]] name = "launchdarkly-server-sdk" -version = "9.14.1" +version = "9.15.0" description = "LaunchDarkly SDK for Python" optional = false -python-versions = ">=3.9" +python-versions = ">=3.10" groups = ["main"] files = [ - {file = "launchdarkly_server_sdk-9.14.1-py3-none-any.whl", hash = "sha256:a9e2bd9ecdef845cd631ae0d4334a1115e5b44257c42eb2349492be4bac7815c"}, - {file = "launchdarkly_server_sdk-9.14.1.tar.gz", hash = "sha256:1df44baf0a0efa74d8c1dad7a00592b98bce7d19edded7f770da8dbc49922213"}, + {file = "launchdarkly_server_sdk-9.15.0-py3-none-any.whl", hash = "sha256:c267e29bfa3fb5e2a06a208448ada6ed5557a2924979b8d79c970b45d227c668"}, + {file = "launchdarkly_server_sdk-9.15.0.tar.gz", hash = "sha256:f31441b74bc1a69c381db57c33116509e407a2612628ad6dff0a7dbb39d5020b"}, ] [package.dependencies] @@ -4728,14 +4728,14 @@ tests = ["coverage-conditional-plugin (>=0.9.0)", "portalocker[redis]", "pytest [[package]] name = "postgrest" -version = "2.27.3" +version = "2.28.0" description = "PostgREST client for Python. This library provides an ORM interface to PostgREST." optional = false python-versions = ">=3.9" groups = ["main"] files = [ - {file = "postgrest-2.27.3-py3-none-any.whl", hash = "sha256:ed79123af7127edd78d538bfe8351d277e45b1a36994a4dbf57ae27dde87a7b7"}, - {file = "postgrest-2.27.3.tar.gz", hash = "sha256:c2e2679addfc8eaab23197bad7ddaee6cbb4cbe8c483ebd2d2e5219543037cc3"}, + {file = "postgrest-2.28.0-py3-none-any.whl", hash = "sha256:7bca2f24dd1a1bf8a3d586c7482aba6cd41662da6733045fad585b63b7f7df75"}, + {file = "postgrest-2.28.0.tar.gz", hash = "sha256:c36b38646d25ea4255321d3d924ce70f8d20ec7799cb42c1221d6a818d4f6515"}, ] [package.dependencies] @@ -6260,14 +6260,14 @@ all = ["numpy"] [[package]] name = "realtime" -version = "2.27.3" +version = "2.28.0" description = "" optional = false python-versions = ">=3.9" groups = ["main"] files = [ - {file = "realtime-2.27.3-py3-none-any.whl", hash = "sha256:f571115f86988e33c41c895cb3fba2eaa1b693aeaede3617288f44274ca90f43"}, - {file = "realtime-2.27.3.tar.gz", hash = "sha256:02b082243107656a5ef3fb63e8e2ab4c40bc199abb45adb8a42ed63f089a1041"}, + {file = "realtime-2.28.0-py3-none-any.whl", hash = "sha256:db1bd59bab9b1fcc9f9d3b1a073bed35bf4994d720e6751f10031a58d57a3836"}, + {file = "realtime-2.28.0.tar.gz", hash = "sha256:d18cedcebd6a8f22fcd509bc767f639761eb218b7b2b6f14fc4205b6259b50fc"}, ] [package.dependencies] @@ -7024,14 +7024,14 @@ full = ["httpx (>=0.27.0,<0.29.0)", "itsdangerous", "jinja2", "python-multipart [[package]] name = "storage3" -version = "2.27.3" +version = "2.28.0" description = "Supabase Storage client for Python." optional = false python-versions = ">=3.9" groups = ["main"] files = [ - {file = "storage3-2.27.3-py3-none-any.whl", hash = "sha256:11a05b7da84bccabeeea12d940bca3760cf63fe6ca441868677335cfe4fdfbe0"}, - {file = "storage3-2.27.3.tar.gz", hash = "sha256:dc1a4a010cf36d5482c5cb6c1c28fc5f00e23284342b89e4ae43b5eae8501ddb"}, + {file = "storage3-2.28.0-py3-none-any.whl", hash = "sha256:ecb50efd2ac71dabbdf97e99ad346eafa630c4c627a8e5a138ceb5fbbadae716"}, + {file = "storage3-2.28.0.tar.gz", hash = "sha256:bc1d008aff67de7a0f2bd867baee7aadbcdb6f78f5a310b4f7a38e8c13c19865"}, ] [package.dependencies] @@ -7091,35 +7091,35 @@ typing-extensions = {version = ">=4.5.0", markers = "python_version >= \"3.7\""} [[package]] name = "supabase" -version = "2.27.3" +version = "2.28.0" description = "Supabase client for Python." optional = false python-versions = ">=3.9" groups = ["main"] files = [ - {file = "supabase-2.27.3-py3-none-any.whl", hash = "sha256:082a74642fcf9954693f1ce8c251baf23e4bda26ffdbc8dcd4c99c82e60d69ff"}, - {file = "supabase-2.27.3.tar.gz", hash = "sha256:5e5a348232ac4315c1032ddd687278f0b982465471f0cbb52bca7e6a66495ff3"}, + {file = "supabase-2.28.0-py3-none-any.whl", hash = "sha256:42776971c7d0ccca16034df1ab96a31c50228eb1eb19da4249ad2f756fc20272"}, + {file = "supabase-2.28.0.tar.gz", hash = "sha256:aea299aaab2a2eed3c57e0be7fc035c6807214194cce795a3575add20268ece1"}, ] [package.dependencies] httpx = ">=0.26,<0.29" -postgrest = "2.27.3" -realtime = "2.27.3" -storage3 = "2.27.3" -supabase-auth = "2.27.3" -supabase-functions = "2.27.3" +postgrest = "2.28.0" +realtime = "2.28.0" +storage3 = "2.28.0" +supabase-auth = "2.28.0" +supabase-functions = "2.28.0" yarl = ">=1.22.0" [[package]] name = "supabase-auth" -version = "2.27.3" +version = "2.28.0" description = "Python Client Library for Supabase Auth" optional = false python-versions = ">=3.9" groups = ["main"] files = [ - {file = "supabase_auth-2.27.3-py3-none-any.whl", hash = "sha256:82a4262eaad85383319d394dab0eea11fcf3ebd774062aef8ea3874ae2f02579"}, - {file = "supabase_auth-2.27.3.tar.gz", hash = "sha256:39894d4bc60b6f23b5cff4d0d7d4c1659e5d69563cadf014d4896f780ca8ca78"}, + {file = "supabase_auth-2.28.0-py3-none-any.whl", hash = "sha256:2ac85026cc285054c7fa6d41924f3a333e9ec298c013e5b5e1754039ba7caec9"}, + {file = "supabase_auth-2.28.0.tar.gz", hash = "sha256:2bb8f18ff39934e44b28f10918db965659f3735cd6fbfcc022fe0b82dbf8233e"}, ] [package.dependencies] @@ -7129,14 +7129,14 @@ pyjwt = {version = ">=2.10.1", extras = ["crypto"]} [[package]] name = "supabase-functions" -version = "2.27.3" +version = "2.28.0" description = "Library for Supabase Functions" optional = false python-versions = ">=3.9" groups = ["main"] files = [ - {file = "supabase_functions-2.27.3-py3-none-any.whl", hash = "sha256:9d14a931d49ede1c6cf5fbfceb11c44061535ba1c3f310f15384964d86a83d9e"}, - {file = "supabase_functions-2.27.3.tar.gz", hash = "sha256:e954f1646da8ca6e7e16accef58d0884a5f97b25956ee98e7d4927a210ed92f9"}, + {file = "supabase_functions-2.28.0-py3-none-any.whl", hash = "sha256:30bf2d586f8df285faf0621bb5d5bb3ec3157234fc820553ca156f009475e4ae"}, + {file = "supabase_functions-2.28.0.tar.gz", hash = "sha256:db3dddfc37aca5858819eb461130968473bd8c75bd284581013958526dac718b"}, ] [package.dependencies] @@ -8440,4 +8440,4 @@ cffi = ["cffi (>=1.17,<2.0) ; platform_python_implementation != \"PyPy\" and pyt [metadata] lock-version = "2.1" python-versions = ">=3.10,<3.14" -content-hash = "c06e96ad49388ba7a46786e9ea55ea2c1a57408e15613237b4bee40a592a12af" +content-hash = "fa9c5deadf593e815dd2190f58e22152373900603f5f244b9616cd721de84d2f" diff --git a/autogpt_platform/backend/pyproject.toml b/autogpt_platform/backend/pyproject.toml index 317663ee98..32dfc547bc 100644 --- a/autogpt_platform/backend/pyproject.toml +++ b/autogpt_platform/backend/pyproject.toml @@ -65,7 +65,7 @@ sentry-sdk = {extras = ["anthropic", "fastapi", "launchdarkly", "openai", "sqlal sqlalchemy = "^2.0.40" strenum = "^0.4.9" stripe = "^11.5.0" -supabase = "2.27.3" +supabase = "2.28.0" tenacity = "^9.1.4" todoist-api-python = "^2.1.7" tweepy = "^4.16.0" From ab0b537cc7d1484dd2777b0d56f397601aba3e76 Mon Sep 17 00:00:00 2001 From: Swifty Date: Fri, 13 Feb 2026 11:08:51 +0100 Subject: [PATCH 17/18] refactor(backend): optimize find_block response size by removing raw JSON schemas (#12020) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit ### Changes 🏗️ The `find_block` AutoPilot tool was returning ~90K characters per response (10 blocks). The bloat came from including full JSON Schema objects (`input_schema`, `output_schema`) with all nested `$defs`, `anyOf`, and type definitions for every block. **What changed:** - **`BlockInfoSummary` model**: Removed `input_schema` (raw JSON Schema), `output_schema` (raw JSON Schema), and `categories`. Added `output_fields` (compact field-level summaries matching the existing `required_inputs` format). - **`BlockListResponse` model**: Removed `usage_hint` (info now in `message`). - **`FindBlockTool._execute()`**: Now extracts compact `output_fields` from output schema properties instead of including the entire raw schema. Credentials handling is unchanged. - **Test**: Added `test_response_size_average_chars_per_block` with realistic block schemas (HTTP, Email, Claude Code) to measure and assert response size stays under 2K chars/block. - **`CLAUDE.md`**: Clarified `dev` vs `master` branching strategy. **Result:** Average response size reduced from ~9,000 to ~1,300 chars per block (~85% reduction). This directly reduces LLM token consumption, latency, and API costs for AutoPilot interactions. ### Checklist 📋 #### For code changes: - [x] I have clearly listed my changes in the PR description - [x] I have made a test plan - [x] I have tested my changes according to the test plan: - [x] Verified models import and serialize correctly - [x] Verified response size: 3,970 chars for 3 realistic blocks (avg 1,323/block) - [x] Lint (`ruff check`) and type check (`pyright`) pass on changed files - [x] Frontend compatibility preserved: `blocks[].name` and `count` fields retained for `block_list` handler --------- Co-authored-by: Claude Opus 4.6 Co-authored-by: Toran Bruce Richards --- autogpt_platform/CLAUDE.md | 5 + .../backend/api/features/chat/routes.py | 2 + .../api/features/chat/tools/find_block.py | 63 +---- .../features/chat/tools/find_block_test.py | 255 ++++++++++++++++- .../backend/api/features/chat/tools/models.py | 31 ++- .../api/features/chat/tools/run_block.py | 94 ++++++- .../api/features/chat/tools/run_block_test.py | 262 +++++++++++++++++- .../chat/tools/test_run_block_details.py | 153 ++++++++++ .../copilot/tools/RunBlock/RunBlock.tsx | 7 + .../BlockDetailsCard.stories.tsx | 188 +++++++++++++ .../BlockDetailsCard/BlockDetailsCard.tsx | 103 +++++++ .../copilot/tools/RunBlock/helpers.tsx | 58 +++- .../frontend/src/app/api/openapi.json | 114 ++++---- 13 files changed, 1194 insertions(+), 141 deletions(-) create mode 100644 autogpt_platform/backend/backend/api/features/chat/tools/test_run_block_details.py create mode 100644 autogpt_platform/frontend/src/app/(platform)/copilot/tools/RunBlock/components/BlockDetailsCard/BlockDetailsCard.stories.tsx create mode 100644 autogpt_platform/frontend/src/app/(platform)/copilot/tools/RunBlock/components/BlockDetailsCard/BlockDetailsCard.tsx diff --git a/autogpt_platform/CLAUDE.md b/autogpt_platform/CLAUDE.md index 62adbdaefa..021b7c27e4 100644 --- a/autogpt_platform/CLAUDE.md +++ b/autogpt_platform/CLAUDE.md @@ -45,6 +45,11 @@ AutoGPT Platform is a monorepo containing: - Backend/Frontend services use YAML anchors for consistent configuration - Supabase services (`db/docker/docker-compose.yml`) follow the same pattern +### Branching Strategy + +- **`dev`** is the main development branch. All PRs should target `dev`. +- **`master`** is the production branch. Only used for production releases. + ### Creating Pull Requests - Create the PR against the `dev` branch of the repository. diff --git a/autogpt_platform/backend/backend/api/features/chat/routes.py b/autogpt_platform/backend/backend/api/features/chat/routes.py index c6f37569b7..0d8b12b0b7 100644 --- a/autogpt_platform/backend/backend/api/features/chat/routes.py +++ b/autogpt_platform/backend/backend/api/features/chat/routes.py @@ -24,6 +24,7 @@ from .tools.models import ( AgentPreviewResponse, AgentSavedResponse, AgentsFoundResponse, + BlockDetailsResponse, BlockListResponse, BlockOutputResponse, ClarificationNeededResponse, @@ -971,6 +972,7 @@ ToolResponseUnion = ( | AgentSavedResponse | ClarificationNeededResponse | BlockListResponse + | BlockDetailsResponse | BlockOutputResponse | DocSearchResultsResponse | DocPageResponse diff --git a/autogpt_platform/backend/backend/api/features/chat/tools/find_block.py b/autogpt_platform/backend/backend/api/features/chat/tools/find_block.py index 6a8cfa9bbc..55b1c0d510 100644 --- a/autogpt_platform/backend/backend/api/features/chat/tools/find_block.py +++ b/autogpt_platform/backend/backend/api/features/chat/tools/find_block.py @@ -7,7 +7,6 @@ from backend.api.features.chat.model import ChatSession from backend.api.features.chat.tools.base import BaseTool, ToolResponseBase from backend.api.features.chat.tools.models import ( BlockInfoSummary, - BlockInputFieldInfo, BlockListResponse, ErrorResponse, NoResultsResponse, @@ -55,7 +54,8 @@ class FindBlockTool(BaseTool): "Blocks are reusable components that perform specific tasks like " "sending emails, making API calls, processing text, etc. " "IMPORTANT: Use this tool FIRST to get the block's 'id' before calling run_block. " - "The response includes each block's id, required_inputs, and input_schema." + "The response includes each block's id, name, and description. " + "Call run_block with the block's id **with no inputs** to see detailed inputs/outputs and execute it." ) @property @@ -124,7 +124,7 @@ class FindBlockTool(BaseTool): session_id=session_id, ) - # Enrich results with full block information + # Enrich results with block information blocks: list[BlockInfoSummary] = [] for result in results: block_id = result["content_id"] @@ -141,65 +141,11 @@ class FindBlockTool(BaseTool): ): continue - # Get input/output schemas - input_schema = {} - output_schema = {} - try: - input_schema = block.input_schema.jsonschema() - except Exception as e: - logger.debug( - "Failed to generate input schema for block %s: %s", - block_id, - e, - ) - try: - output_schema = block.output_schema.jsonschema() - except Exception as e: - logger.debug( - "Failed to generate output schema for block %s: %s", - block_id, - e, - ) - - # Get categories from block instance - categories = [] - if hasattr(block, "categories") and block.categories: - categories = [cat.value for cat in block.categories] - - # Extract required inputs for easier use - required_inputs: list[BlockInputFieldInfo] = [] - if input_schema: - properties = input_schema.get("properties", {}) - required_fields = set(input_schema.get("required", [])) - # Get credential field names to exclude from required inputs - credentials_fields = set( - block.input_schema.get_credentials_fields().keys() - ) - - for field_name, field_schema in properties.items(): - # Skip credential fields - they're handled separately - if field_name in credentials_fields: - continue - - required_inputs.append( - BlockInputFieldInfo( - name=field_name, - type=field_schema.get("type", "string"), - description=field_schema.get("description", ""), - required=field_name in required_fields, - default=field_schema.get("default"), - ) - ) - blocks.append( BlockInfoSummary( id=block_id, name=block.name, description=block.description or "", - categories=categories, - input_schema=input_schema, - output_schema=output_schema, - required_inputs=required_inputs, ) ) @@ -228,8 +174,7 @@ class FindBlockTool(BaseTool): return BlockListResponse( message=( f"Found {len(blocks)} block(s) matching '{query}'. " - "To execute a block, use run_block with the block's 'id' field " - "and provide 'input_data' matching the block's input_schema." + "To see a block's inputs/outputs and execute it, use run_block with the block's 'id' - providing no inputs." ), blocks=blocks, count=len(blocks), diff --git a/autogpt_platform/backend/backend/api/features/chat/tools/find_block_test.py b/autogpt_platform/backend/backend/api/features/chat/tools/find_block_test.py index d567a89bbe..44606f81c3 100644 --- a/autogpt_platform/backend/backend/api/features/chat/tools/find_block_test.py +++ b/autogpt_platform/backend/backend/api/features/chat/tools/find_block_test.py @@ -18,7 +18,13 @@ _TEST_USER_ID = "test-user-find-block" def make_mock_block( - block_id: str, name: str, block_type: BlockType, disabled: bool = False + block_id: str, + name: str, + block_type: BlockType, + disabled: bool = False, + input_schema: dict | None = None, + output_schema: dict | None = None, + credentials_fields: dict | None = None, ): """Create a mock block for testing.""" mock = MagicMock() @@ -28,10 +34,13 @@ def make_mock_block( mock.block_type = block_type mock.disabled = disabled mock.input_schema = MagicMock() - mock.input_schema.jsonschema.return_value = {"properties": {}, "required": []} - mock.input_schema.get_credentials_fields.return_value = {} + mock.input_schema.jsonschema.return_value = input_schema or { + "properties": {}, + "required": [], + } + mock.input_schema.get_credentials_fields.return_value = credentials_fields or {} mock.output_schema = MagicMock() - mock.output_schema.jsonschema.return_value = {} + mock.output_schema.jsonschema.return_value = output_schema or {} mock.categories = [] return mock @@ -137,3 +146,241 @@ class TestFindBlockFiltering: assert isinstance(response, BlockListResponse) assert len(response.blocks) == 1 assert response.blocks[0].id == "normal-block-id" + + @pytest.mark.asyncio(loop_scope="session") + async def test_response_size_average_chars_per_block(self): + """Measure average chars per block in the serialized response.""" + session = make_session(user_id=_TEST_USER_ID) + + # Realistic block definitions modeled after real blocks + block_defs = [ + { + "id": "http-block-id", + "name": "Send Web Request", + "input_schema": { + "properties": { + "url": { + "type": "string", + "description": "The URL to send the request to", + }, + "method": { + "type": "string", + "description": "The HTTP method to use", + }, + "headers": { + "type": "object", + "description": "Headers to include in the request", + }, + "json_format": { + "type": "boolean", + "description": "If true, send the body as JSON", + }, + "body": { + "type": "object", + "description": "Form/JSON body payload", + }, + "credentials": { + "type": "object", + "description": "HTTP credentials", + }, + }, + "required": ["url", "method"], + }, + "output_schema": { + "properties": { + "response": { + "type": "object", + "description": "The response from the server", + }, + "client_error": { + "type": "object", + "description": "Errors on 4xx status codes", + }, + "server_error": { + "type": "object", + "description": "Errors on 5xx status codes", + }, + "error": { + "type": "string", + "description": "Errors for all other exceptions", + }, + }, + }, + "credentials_fields": {"credentials": True}, + }, + { + "id": "email-block-id", + "name": "Send Email", + "input_schema": { + "properties": { + "to_email": { + "type": "string", + "description": "Recipient email address", + }, + "subject": { + "type": "string", + "description": "Subject of the email", + }, + "body": { + "type": "string", + "description": "Body of the email", + }, + "config": { + "type": "object", + "description": "SMTP Config", + }, + "credentials": { + "type": "object", + "description": "SMTP credentials", + }, + }, + "required": ["to_email", "subject", "body", "credentials"], + }, + "output_schema": { + "properties": { + "status": { + "type": "string", + "description": "Status of the email sending operation", + }, + "error": { + "type": "string", + "description": "Error message if sending failed", + }, + }, + }, + "credentials_fields": {"credentials": True}, + }, + { + "id": "claude-code-block-id", + "name": "Claude Code", + "input_schema": { + "properties": { + "e2b_credentials": { + "type": "object", + "description": "API key for E2B platform", + }, + "anthropic_credentials": { + "type": "object", + "description": "API key for Anthropic", + }, + "prompt": { + "type": "string", + "description": "Task or instruction for Claude Code", + }, + "timeout": { + "type": "integer", + "description": "Sandbox timeout in seconds", + }, + "setup_commands": { + "type": "array", + "description": "Shell commands to run before execution", + }, + "working_directory": { + "type": "string", + "description": "Working directory for Claude Code", + }, + "session_id": { + "type": "string", + "description": "Session ID to resume a conversation", + }, + "sandbox_id": { + "type": "string", + "description": "Sandbox ID to reconnect to", + }, + "conversation_history": { + "type": "string", + "description": "Previous conversation history", + }, + "dispose_sandbox": { + "type": "boolean", + "description": "Whether to dispose sandbox after execution", + }, + }, + "required": [ + "e2b_credentials", + "anthropic_credentials", + "prompt", + ], + }, + "output_schema": { + "properties": { + "response": { + "type": "string", + "description": "Output from Claude Code execution", + }, + "files": { + "type": "array", + "description": "Files created/modified by Claude Code", + }, + "conversation_history": { + "type": "string", + "description": "Full conversation history", + }, + "session_id": { + "type": "string", + "description": "Session ID for this conversation", + }, + "sandbox_id": { + "type": "string", + "description": "ID of the sandbox instance", + }, + "error": { + "type": "string", + "description": "Error message if execution failed", + }, + }, + }, + "credentials_fields": { + "e2b_credentials": True, + "anthropic_credentials": True, + }, + }, + ] + + search_results = [ + {"content_id": d["id"], "score": 0.9 - i * 0.1} + for i, d in enumerate(block_defs) + ] + mock_blocks = { + d["id"]: make_mock_block( + block_id=d["id"], + name=d["name"], + block_type=BlockType.STANDARD, + input_schema=d["input_schema"], + output_schema=d["output_schema"], + credentials_fields=d["credentials_fields"], + ) + for d in block_defs + } + + with patch( + "backend.api.features.chat.tools.find_block.unified_hybrid_search", + new_callable=AsyncMock, + return_value=(search_results, len(search_results)), + ), patch( + "backend.api.features.chat.tools.find_block.get_block", + side_effect=lambda bid: mock_blocks.get(bid), + ): + tool = FindBlockTool() + response = await tool._execute( + user_id=_TEST_USER_ID, session=session, query="test" + ) + + assert isinstance(response, BlockListResponse) + assert response.count == len(block_defs) + + total_chars = len(response.model_dump_json()) + avg_chars = total_chars // response.count + + # Print for visibility in test output + print(f"\nTotal response size: {total_chars} chars") + print(f"Number of blocks: {response.count}") + print(f"Average chars per block: {avg_chars}") + + # The old response was ~90K for 10 blocks (~9K per block). + # Previous optimization reduced it to ~1.5K per block (no raw JSON schemas). + # Now with only id/name/description, we expect ~300 chars per block. + assert avg_chars < 500, ( + f"Average chars per block ({avg_chars}) exceeds 500. " + f"Total response: {total_chars} chars for {response.count} blocks." + ) diff --git a/autogpt_platform/backend/backend/api/features/chat/tools/models.py b/autogpt_platform/backend/backend/api/features/chat/tools/models.py index 69c8c6c684..bd19d590a6 100644 --- a/autogpt_platform/backend/backend/api/features/chat/tools/models.py +++ b/autogpt_platform/backend/backend/api/features/chat/tools/models.py @@ -25,6 +25,7 @@ class ResponseType(str, Enum): AGENT_SAVED = "agent_saved" CLARIFICATION_NEEDED = "clarification_needed" BLOCK_LIST = "block_list" + BLOCK_DETAILS = "block_details" BLOCK_OUTPUT = "block_output" DOC_SEARCH_RESULTS = "doc_search_results" DOC_PAGE = "doc_page" @@ -334,13 +335,6 @@ class BlockInfoSummary(BaseModel): id: str name: str description: str - categories: list[str] - input_schema: dict[str, Any] - output_schema: dict[str, Any] - required_inputs: list[BlockInputFieldInfo] = Field( - default_factory=list, - description="List of required input fields for this block", - ) class BlockListResponse(ToolResponseBase): @@ -350,10 +344,25 @@ class BlockListResponse(ToolResponseBase): blocks: list[BlockInfoSummary] count: int query: str - usage_hint: str = Field( - default="To execute a block, call run_block with block_id set to the block's " - "'id' field and input_data containing the required fields from input_schema." - ) + + +class BlockDetails(BaseModel): + """Detailed block information.""" + + id: str + name: str + description: str + inputs: dict[str, Any] = {} + outputs: dict[str, Any] = {} + credentials: list[CredentialsMetaInput] = [] + + +class BlockDetailsResponse(ToolResponseBase): + """Response for block details (first run_block attempt).""" + + type: ResponseType = ResponseType.BLOCK_DETAILS + block: BlockDetails + user_authenticated: bool = False class BlockOutputResponse(ToolResponseBase): diff --git a/autogpt_platform/backend/backend/api/features/chat/tools/run_block.py b/autogpt_platform/backend/backend/api/features/chat/tools/run_block.py index 8c29820f8e..a55478326a 100644 --- a/autogpt_platform/backend/backend/api/features/chat/tools/run_block.py +++ b/autogpt_platform/backend/backend/api/features/chat/tools/run_block.py @@ -23,8 +23,11 @@ from backend.util.exceptions import BlockError from .base import BaseTool from .helpers import get_inputs_from_schema from .models import ( + BlockDetails, + BlockDetailsResponse, BlockOutputResponse, ErrorResponse, + InputValidationErrorResponse, SetupInfo, SetupRequirementsResponse, ToolResponseBase, @@ -51,8 +54,8 @@ class RunBlockTool(BaseTool): "Execute a specific block with the provided input data. " "IMPORTANT: You MUST call find_block first to get the block's 'id' - " "do NOT guess or make up block IDs. " - "Use the 'id' from find_block results and provide input_data " - "matching the block's required_inputs." + "On first attempt (without input_data), returns detailed schema showing " + "required inputs and outputs. Then call again with proper input_data to execute." ) @property @@ -67,11 +70,19 @@ class RunBlockTool(BaseTool): "NEVER guess this - always get it from find_block first." ), }, + "block_name": { + "type": "string", + "description": ( + "The block's human-readable name from find_block results. " + "Used for display purposes in the UI." + ), + }, "input_data": { "type": "object", "description": ( - "Input values for the block. Use the 'required_inputs' field " - "from find_block to see what fields are needed." + "Input values for the block. " + "First call with empty {} to see the block's schema, " + "then call again with proper values to execute." ), }, }, @@ -156,6 +167,34 @@ class RunBlockTool(BaseTool): await self._resolve_block_credentials(user_id, block, input_data) ) + # Get block schemas for details/validation + try: + input_schema: dict[str, Any] = block.input_schema.jsonschema() + except Exception as e: + logger.warning( + "Failed to generate input schema for block %s: %s", + block_id, + e, + ) + return ErrorResponse( + message=f"Block '{block.name}' has an invalid input schema", + error=str(e), + session_id=session_id, + ) + try: + output_schema: dict[str, Any] = block.output_schema.jsonschema() + except Exception as e: + logger.warning( + "Failed to generate output schema for block %s: %s", + block_id, + e, + ) + return ErrorResponse( + message=f"Block '{block.name}' has an invalid output schema", + error=str(e), + session_id=session_id, + ) + if missing_credentials: # Return setup requirements response with missing credentials credentials_fields_info = block.input_schema.get_credentials_fields_info() @@ -188,6 +227,53 @@ class RunBlockTool(BaseTool): graph_version=None, ) + # Check if this is a first attempt (required inputs missing) + # Return block details so user can see what inputs are needed + credentials_fields = set(block.input_schema.get_credentials_fields().keys()) + required_keys = set(input_schema.get("required", [])) + required_non_credential_keys = required_keys - credentials_fields + provided_input_keys = set(input_data.keys()) - credentials_fields + + # Check for unknown input fields + valid_fields = ( + set(input_schema.get("properties", {}).keys()) - credentials_fields + ) + unrecognized_fields = provided_input_keys - valid_fields + if unrecognized_fields: + return InputValidationErrorResponse( + message=( + f"Unknown input field(s) provided: {', '.join(sorted(unrecognized_fields))}. " + f"Block was not executed. Please use the correct field names from the schema." + ), + session_id=session_id, + unrecognized_fields=sorted(unrecognized_fields), + inputs=input_schema, + ) + + # Show details when not all required non-credential inputs are provided + if not (required_non_credential_keys <= provided_input_keys): + # Get credentials info for the response + credentials_meta = [] + for field_name, cred_meta in matched_credentials.items(): + credentials_meta.append(cred_meta) + + return BlockDetailsResponse( + message=( + f"Block '{block.name}' details. " + "Provide input_data matching the inputs schema to execute the block." + ), + session_id=session_id, + block=BlockDetails( + id=block_id, + name=block.name, + description=block.description or "", + inputs=input_schema, + outputs=output_schema, + credentials=credentials_meta, + ), + user_authenticated=True, + ) + try: # Get or create user's workspace for CoPilot file operations workspace = await get_or_create_workspace(user_id) diff --git a/autogpt_platform/backend/backend/api/features/chat/tools/run_block_test.py b/autogpt_platform/backend/backend/api/features/chat/tools/run_block_test.py index aadc161155..55efc38479 100644 --- a/autogpt_platform/backend/backend/api/features/chat/tools/run_block_test.py +++ b/autogpt_platform/backend/backend/api/features/chat/tools/run_block_test.py @@ -1,10 +1,15 @@ -"""Tests for block execution guards in RunBlockTool.""" +"""Tests for block execution guards and input validation in RunBlockTool.""" -from unittest.mock import MagicMock, patch +from unittest.mock import AsyncMock, MagicMock, patch import pytest -from backend.api.features.chat.tools.models import ErrorResponse +from backend.api.features.chat.tools.models import ( + BlockDetailsResponse, + BlockOutputResponse, + ErrorResponse, + InputValidationErrorResponse, +) from backend.api.features.chat.tools.run_block import RunBlockTool from backend.blocks._base import BlockType @@ -28,6 +33,39 @@ def make_mock_block( return mock +def make_mock_block_with_schema( + block_id: str, + name: str, + input_properties: dict, + required_fields: list[str], + output_properties: dict | None = None, +): + """Create a mock block with a defined input/output schema for validation tests.""" + mock = MagicMock() + mock.id = block_id + mock.name = name + mock.block_type = BlockType.STANDARD + mock.disabled = False + mock.description = f"Test block: {name}" + + input_schema = { + "properties": input_properties, + "required": required_fields, + } + mock.input_schema = MagicMock() + mock.input_schema.jsonschema.return_value = input_schema + mock.input_schema.get_credentials_fields_info.return_value = {} + mock.input_schema.get_credentials_fields.return_value = {} + + output_schema = { + "properties": output_properties or {"result": {"type": "string"}}, + } + mock.output_schema = MagicMock() + mock.output_schema.jsonschema.return_value = output_schema + + return mock + + class TestRunBlockFiltering: """Tests for block execution guards in RunBlockTool.""" @@ -104,3 +142,221 @@ class TestRunBlockFiltering: # (may be other errors like missing credentials, but not the exclusion guard) if isinstance(response, ErrorResponse): assert "cannot be run directly in CoPilot" not in response.message + + +class TestRunBlockInputValidation: + """Tests for input field validation in RunBlockTool. + + run_block rejects unknown input field names with InputValidationErrorResponse, + preventing silent failures where incorrect keys would be ignored and the block + would execute with default values instead of the caller's intended values. + """ + + @pytest.mark.asyncio(loop_scope="session") + async def test_unknown_input_fields_are_rejected(self): + """run_block rejects unknown input fields instead of silently ignoring them. + + Scenario: The AI Text Generator block has a field called 'model' (for LLM model + selection), but the LLM calling the tool guesses wrong and sends 'LLM_Model' + instead. The block should reject the request and return the valid schema. + """ + session = make_session(user_id=_TEST_USER_ID) + + mock_block = make_mock_block_with_schema( + block_id="ai-text-gen-id", + name="AI Text Generator", + input_properties={ + "prompt": {"type": "string", "description": "The prompt to send"}, + "model": { + "type": "string", + "description": "The LLM model to use", + "default": "gpt-4o-mini", + }, + "sys_prompt": { + "type": "string", + "description": "System prompt", + "default": "", + }, + }, + required_fields=["prompt"], + output_properties={"response": {"type": "string"}}, + ) + + with patch( + "backend.api.features.chat.tools.run_block.get_block", + return_value=mock_block, + ): + tool = RunBlockTool() + + # Provide 'prompt' (correct) but 'LLM_Model' instead of 'model' (wrong key) + response = await tool._execute( + user_id=_TEST_USER_ID, + session=session, + block_id="ai-text-gen-id", + input_data={ + "prompt": "Write a haiku about coding", + "LLM_Model": "claude-opus-4-6", # WRONG KEY - should be 'model' + }, + ) + + assert isinstance(response, InputValidationErrorResponse) + assert "LLM_Model" in response.unrecognized_fields + assert "Block was not executed" in response.message + assert "inputs" in response.model_dump() # valid schema included + + @pytest.mark.asyncio(loop_scope="session") + async def test_multiple_wrong_keys_are_all_reported(self): + """All unrecognized field names are reported in a single error response.""" + session = make_session(user_id=_TEST_USER_ID) + + mock_block = make_mock_block_with_schema( + block_id="ai-text-gen-id", + name="AI Text Generator", + input_properties={ + "prompt": {"type": "string"}, + "model": {"type": "string", "default": "gpt-4o-mini"}, + "sys_prompt": {"type": "string", "default": ""}, + "retry": {"type": "integer", "default": 3}, + }, + required_fields=["prompt"], + ) + + with patch( + "backend.api.features.chat.tools.run_block.get_block", + return_value=mock_block, + ): + tool = RunBlockTool() + + response = await tool._execute( + user_id=_TEST_USER_ID, + session=session, + block_id="ai-text-gen-id", + input_data={ + "prompt": "Hello", # correct + "llm_model": "claude-opus-4-6", # WRONG - should be 'model' + "system_prompt": "Be helpful", # WRONG - should be 'sys_prompt' + "retries": 5, # WRONG - should be 'retry' + }, + ) + + assert isinstance(response, InputValidationErrorResponse) + assert set(response.unrecognized_fields) == { + "llm_model", + "system_prompt", + "retries", + } + assert "Block was not executed" in response.message + + @pytest.mark.asyncio(loop_scope="session") + async def test_unknown_fields_rejected_even_with_missing_required(self): + """Unknown fields are caught before the missing-required-fields check.""" + session = make_session(user_id=_TEST_USER_ID) + + mock_block = make_mock_block_with_schema( + block_id="ai-text-gen-id", + name="AI Text Generator", + input_properties={ + "prompt": {"type": "string"}, + "model": {"type": "string", "default": "gpt-4o-mini"}, + }, + required_fields=["prompt"], + ) + + with patch( + "backend.api.features.chat.tools.run_block.get_block", + return_value=mock_block, + ): + tool = RunBlockTool() + + # 'prompt' is missing AND 'LLM_Model' is an unknown field + response = await tool._execute( + user_id=_TEST_USER_ID, + session=session, + block_id="ai-text-gen-id", + input_data={ + "LLM_Model": "claude-opus-4-6", # wrong key, and 'prompt' is missing + }, + ) + + # Unknown fields are caught first + assert isinstance(response, InputValidationErrorResponse) + assert "LLM_Model" in response.unrecognized_fields + + @pytest.mark.asyncio(loop_scope="session") + async def test_correct_inputs_still_execute(self): + """Correct input field names pass validation and the block executes.""" + session = make_session(user_id=_TEST_USER_ID) + + mock_block = make_mock_block_with_schema( + block_id="ai-text-gen-id", + name="AI Text Generator", + input_properties={ + "prompt": {"type": "string"}, + "model": {"type": "string", "default": "gpt-4o-mini"}, + }, + required_fields=["prompt"], + ) + + async def mock_execute(input_data, **kwargs): + yield "response", "Generated text" + + mock_block.execute = mock_execute + + with ( + patch( + "backend.api.features.chat.tools.run_block.get_block", + return_value=mock_block, + ), + patch( + "backend.api.features.chat.tools.run_block.get_or_create_workspace", + new_callable=AsyncMock, + return_value=MagicMock(id="test-workspace-id"), + ), + ): + tool = RunBlockTool() + + response = await tool._execute( + user_id=_TEST_USER_ID, + session=session, + block_id="ai-text-gen-id", + input_data={ + "prompt": "Write a haiku", + "model": "gpt-4o-mini", # correct field name + }, + ) + + assert isinstance(response, BlockOutputResponse) + assert response.success is True + + @pytest.mark.asyncio(loop_scope="session") + async def test_missing_required_fields_returns_details(self): + """Missing required fields returns BlockDetailsResponse with schema.""" + session = make_session(user_id=_TEST_USER_ID) + + mock_block = make_mock_block_with_schema( + block_id="ai-text-gen-id", + name="AI Text Generator", + input_properties={ + "prompt": {"type": "string"}, + "model": {"type": "string", "default": "gpt-4o-mini"}, + }, + required_fields=["prompt"], + ) + + with patch( + "backend.api.features.chat.tools.run_block.get_block", + return_value=mock_block, + ): + tool = RunBlockTool() + + # Only provide valid optional field, missing required 'prompt' + response = await tool._execute( + user_id=_TEST_USER_ID, + session=session, + block_id="ai-text-gen-id", + input_data={ + "model": "gpt-4o-mini", # valid but optional + }, + ) + + assert isinstance(response, BlockDetailsResponse) diff --git a/autogpt_platform/backend/backend/api/features/chat/tools/test_run_block_details.py b/autogpt_platform/backend/backend/api/features/chat/tools/test_run_block_details.py new file mode 100644 index 0000000000..fbab0b723d --- /dev/null +++ b/autogpt_platform/backend/backend/api/features/chat/tools/test_run_block_details.py @@ -0,0 +1,153 @@ +"""Tests for BlockDetailsResponse in RunBlockTool.""" + +from unittest.mock import AsyncMock, MagicMock, patch + +import pytest + +from backend.api.features.chat.tools.models import BlockDetailsResponse +from backend.api.features.chat.tools.run_block import RunBlockTool +from backend.blocks._base import BlockType +from backend.data.model import CredentialsMetaInput +from backend.integrations.providers import ProviderName + +from ._test_data import make_session + +_TEST_USER_ID = "test-user-run-block-details" + + +def make_mock_block_with_inputs( + block_id: str, name: str, description: str = "Test description" +): + """Create a mock block with input/output schemas for testing.""" + mock = MagicMock() + mock.id = block_id + mock.name = name + mock.description = description + mock.block_type = BlockType.STANDARD + mock.disabled = False + + # Input schema with non-credential fields + mock.input_schema = MagicMock() + mock.input_schema.jsonschema.return_value = { + "properties": { + "url": {"type": "string", "description": "URL to fetch"}, + "method": {"type": "string", "description": "HTTP method"}, + }, + "required": ["url"], + } + mock.input_schema.get_credentials_fields.return_value = {} + mock.input_schema.get_credentials_fields_info.return_value = {} + + # Output schema + mock.output_schema = MagicMock() + mock.output_schema.jsonschema.return_value = { + "properties": { + "response": {"type": "object", "description": "HTTP response"}, + "error": {"type": "string", "description": "Error message"}, + } + } + + return mock + + +@pytest.mark.asyncio(loop_scope="session") +async def test_run_block_returns_details_when_no_input_provided(): + """When run_block is called without input_data, it should return BlockDetailsResponse.""" + session = make_session(user_id=_TEST_USER_ID) + + # Create a block with inputs + http_block = make_mock_block_with_inputs( + "http-block-id", "HTTP Request", "Send HTTP requests" + ) + + with patch( + "backend.api.features.chat.tools.run_block.get_block", + return_value=http_block, + ): + # Mock credentials check to return no missing credentials + with patch.object( + RunBlockTool, + "_resolve_block_credentials", + new_callable=AsyncMock, + return_value=({}, []), # (matched_credentials, missing_credentials) + ): + tool = RunBlockTool() + response = await tool._execute( + user_id=_TEST_USER_ID, + session=session, + block_id="http-block-id", + input_data={}, # Empty input data + ) + + # Should return BlockDetailsResponse showing the schema + assert isinstance(response, BlockDetailsResponse) + assert response.block.id == "http-block-id" + assert response.block.name == "HTTP Request" + assert response.block.description == "Send HTTP requests" + assert "url" in response.block.inputs["properties"] + assert "method" in response.block.inputs["properties"] + assert "response" in response.block.outputs["properties"] + assert response.user_authenticated is True + + +@pytest.mark.asyncio(loop_scope="session") +async def test_run_block_returns_details_when_only_credentials_provided(): + """When only credentials are provided (no actual input), should return details.""" + session = make_session(user_id=_TEST_USER_ID) + + # Create a block with both credential and non-credential inputs + mock = MagicMock() + mock.id = "api-block-id" + mock.name = "API Call" + mock.description = "Make API calls" + mock.block_type = BlockType.STANDARD + mock.disabled = False + + mock.input_schema = MagicMock() + mock.input_schema.jsonschema.return_value = { + "properties": { + "credentials": {"type": "object", "description": "API credentials"}, + "endpoint": {"type": "string", "description": "API endpoint"}, + }, + "required": ["credentials", "endpoint"], + } + mock.input_schema.get_credentials_fields.return_value = {"credentials": True} + mock.input_schema.get_credentials_fields_info.return_value = {} + + mock.output_schema = MagicMock() + mock.output_schema.jsonschema.return_value = { + "properties": {"result": {"type": "object"}} + } + + with patch( + "backend.api.features.chat.tools.run_block.get_block", + return_value=mock, + ): + with patch.object( + RunBlockTool, + "_resolve_block_credentials", + new_callable=AsyncMock, + return_value=( + { + "credentials": CredentialsMetaInput( + id="cred-id", + provider=ProviderName("test_provider"), + type="api_key", + title="Test Credential", + ) + }, + [], + ), + ): + tool = RunBlockTool() + response = await tool._execute( + user_id=_TEST_USER_ID, + session=session, + block_id="api-block-id", + input_data={"credentials": {"some": "cred"}}, # Only credential + ) + + # Should return details because no non-credential inputs provided + assert isinstance(response, BlockDetailsResponse) + assert response.block.id == "api-block-id" + assert response.block.name == "API Call" diff --git a/autogpt_platform/frontend/src/app/(platform)/copilot/tools/RunBlock/RunBlock.tsx b/autogpt_platform/frontend/src/app/(platform)/copilot/tools/RunBlock/RunBlock.tsx index e1cb030449..6e2cbe90d7 100644 --- a/autogpt_platform/frontend/src/app/(platform)/copilot/tools/RunBlock/RunBlock.tsx +++ b/autogpt_platform/frontend/src/app/(platform)/copilot/tools/RunBlock/RunBlock.tsx @@ -3,6 +3,7 @@ import type { ToolUIPart } from "ai"; import { MorphingTextAnimation } from "../../components/MorphingTextAnimation/MorphingTextAnimation"; import { ToolAccordion } from "../../components/ToolAccordion/ToolAccordion"; +import { BlockDetailsCard } from "./components/BlockDetailsCard/BlockDetailsCard"; import { BlockOutputCard } from "./components/BlockOutputCard/BlockOutputCard"; import { ErrorCard } from "./components/ErrorCard/ErrorCard"; import { SetupRequirementsCard } from "./components/SetupRequirementsCard/SetupRequirementsCard"; @@ -11,6 +12,7 @@ import { getAnimationText, getRunBlockToolOutput, isRunBlockBlockOutput, + isRunBlockDetailsOutput, isRunBlockErrorOutput, isRunBlockSetupRequirementsOutput, ToolIcon, @@ -41,6 +43,7 @@ export function RunBlockTool({ part }: Props) { part.state === "output-available" && !!output && (isRunBlockBlockOutput(output) || + isRunBlockDetailsOutput(output) || isRunBlockSetupRequirementsOutput(output) || isRunBlockErrorOutput(output)); @@ -58,6 +61,10 @@ export function RunBlockTool({ part }: Props) { {isRunBlockBlockOutput(output) && } + {isRunBlockDetailsOutput(output) && ( + + )} + {isRunBlockSetupRequirementsOutput(output) && ( )} diff --git a/autogpt_platform/frontend/src/app/(platform)/copilot/tools/RunBlock/components/BlockDetailsCard/BlockDetailsCard.stories.tsx b/autogpt_platform/frontend/src/app/(platform)/copilot/tools/RunBlock/components/BlockDetailsCard/BlockDetailsCard.stories.tsx new file mode 100644 index 0000000000..6e133ca93b --- /dev/null +++ b/autogpt_platform/frontend/src/app/(platform)/copilot/tools/RunBlock/components/BlockDetailsCard/BlockDetailsCard.stories.tsx @@ -0,0 +1,188 @@ +import type { Meta, StoryObj } from "@storybook/nextjs"; +import { ResponseType } from "@/app/api/__generated__/models/responseType"; +import type { BlockDetailsResponse } from "../../helpers"; +import { BlockDetailsCard } from "./BlockDetailsCard"; + +const meta: Meta = { + title: "Copilot/RunBlock/BlockDetailsCard", + component: BlockDetailsCard, + parameters: { + layout: "centered", + }, + tags: ["autodocs"], + decorators: [ + (Story) => ( +
+ +
+ ), + ], +}; + +export default meta; +type Story = StoryObj; + +const baseBlock: BlockDetailsResponse = { + type: ResponseType.block_details, + message: + "Here are the details for the GetWeather block. Provide the required inputs to run it.", + session_id: "session-123", + user_authenticated: true, + block: { + id: "block-abc-123", + name: "GetWeather", + description: "Fetches current weather data for a given location.", + inputs: { + type: "object", + properties: { + location: { + title: "Location", + type: "string", + description: + "City name or coordinates (e.g. 'London' or '51.5,-0.1')", + }, + units: { + title: "Units", + type: "string", + description: "Temperature units: 'metric' or 'imperial'", + }, + }, + required: ["location"], + }, + outputs: { + type: "object", + properties: { + temperature: { + title: "Temperature", + type: "number", + description: "Current temperature in the requested units", + }, + condition: { + title: "Condition", + type: "string", + description: "Weather condition description (e.g. 'Sunny', 'Rain')", + }, + }, + }, + credentials: [], + }, +}; + +export const Default: Story = { + args: { + output: baseBlock, + }, +}; + +export const InputsOnly: Story = { + args: { + output: { + ...baseBlock, + message: "This block requires inputs. No outputs are defined.", + block: { + ...baseBlock.block, + outputs: {}, + }, + }, + }, +}; + +export const OutputsOnly: Story = { + args: { + output: { + ...baseBlock, + message: "This block has no required inputs.", + block: { + ...baseBlock.block, + inputs: {}, + }, + }, + }, +}; + +export const ManyFields: Story = { + args: { + output: { + ...baseBlock, + message: "Block with many input and output fields.", + block: { + ...baseBlock.block, + name: "SendEmail", + description: "Sends an email via SMTP.", + inputs: { + type: "object", + properties: { + to: { + title: "To", + type: "string", + description: "Recipient email address", + }, + subject: { + title: "Subject", + type: "string", + description: "Email subject line", + }, + body: { + title: "Body", + type: "string", + description: "Email body content", + }, + cc: { + title: "CC", + type: "string", + description: "CC recipients (comma-separated)", + }, + bcc: { + title: "BCC", + type: "string", + description: "BCC recipients (comma-separated)", + }, + }, + required: ["to", "subject", "body"], + }, + outputs: { + type: "object", + properties: { + message_id: { + title: "Message ID", + type: "string", + description: "Unique ID of the sent email", + }, + status: { + title: "Status", + type: "string", + description: "Delivery status", + }, + }, + }, + }, + }, + }, +}; + +export const NoFieldDescriptions: Story = { + args: { + output: { + ...baseBlock, + message: "Fields without descriptions.", + block: { + ...baseBlock.block, + name: "SimpleBlock", + inputs: { + type: "object", + properties: { + input_a: { title: "Input A", type: "string" }, + input_b: { title: "Input B", type: "number" }, + }, + required: ["input_a"], + }, + outputs: { + type: "object", + properties: { + result: { title: "Result", type: "string" }, + }, + }, + }, + }, + }, +}; diff --git a/autogpt_platform/frontend/src/app/(platform)/copilot/tools/RunBlock/components/BlockDetailsCard/BlockDetailsCard.tsx b/autogpt_platform/frontend/src/app/(platform)/copilot/tools/RunBlock/components/BlockDetailsCard/BlockDetailsCard.tsx new file mode 100644 index 0000000000..fdbf115222 --- /dev/null +++ b/autogpt_platform/frontend/src/app/(platform)/copilot/tools/RunBlock/components/BlockDetailsCard/BlockDetailsCard.tsx @@ -0,0 +1,103 @@ +"use client"; + +import type { BlockDetailsResponse } from "../../helpers"; +import { + ContentBadge, + ContentCard, + ContentCardDescription, + ContentCardTitle, + ContentGrid, + ContentMessage, +} from "../../../../components/ToolAccordion/AccordionContent"; + +interface Props { + output: BlockDetailsResponse; +} + +function SchemaFieldList({ + title, + properties, + required, +}: { + title: string; + properties: Record; + required?: string[]; +}) { + const entries = Object.entries(properties); + if (entries.length === 0) return null; + + const requiredSet = new Set(required ?? []); + + return ( + + {title} +
+ {entries.map(([name, schema]) => { + const field = schema as Record | undefined; + const fieldTitle = + typeof field?.title === "string" ? field.title : name; + const fieldType = + typeof field?.type === "string" ? field.type : "unknown"; + const description = + typeof field?.description === "string" + ? field.description + : undefined; + + return ( +
+
+ + {fieldTitle} + +
+ {fieldType} + {requiredSet.has(name) && ( + Required + )} +
+
+ {description && ( + + {description} + + )} +
+ ); + })} +
+
+ ); +} + +export function BlockDetailsCard({ output }: Props) { + const inputs = output.block.inputs as { + properties?: Record; + required?: string[]; + } | null; + const outputs = output.block.outputs as { + properties?: Record; + required?: string[]; + } | null; + + return ( + + {output.message} + + {inputs?.properties && Object.keys(inputs.properties).length > 0 && ( + + )} + + {outputs?.properties && Object.keys(outputs.properties).length > 0 && ( + + )} + + ); +} diff --git a/autogpt_platform/frontend/src/app/(platform)/copilot/tools/RunBlock/helpers.tsx b/autogpt_platform/frontend/src/app/(platform)/copilot/tools/RunBlock/helpers.tsx index b8625988cd..6e56154a5e 100644 --- a/autogpt_platform/frontend/src/app/(platform)/copilot/tools/RunBlock/helpers.tsx +++ b/autogpt_platform/frontend/src/app/(platform)/copilot/tools/RunBlock/helpers.tsx @@ -10,18 +10,37 @@ import { import type { ToolUIPart } from "ai"; import { OrbitLoader } from "../../components/OrbitLoader/OrbitLoader"; +/** Block details returned on first run_block attempt (before input_data provided). */ +export interface BlockDetailsResponse { + type: typeof ResponseType.block_details; + message: string; + session_id?: string | null; + block: { + id: string; + name: string; + description: string; + inputs: Record; + outputs: Record; + credentials: unknown[]; + }; + user_authenticated: boolean; +} + export interface RunBlockInput { block_id?: string; + block_name?: string; input_data?: Record; } export type RunBlockToolOutput = | SetupRequirementsResponse + | BlockDetailsResponse | BlockOutputResponse | ErrorResponse; const RUN_BLOCK_OUTPUT_TYPES = new Set([ ResponseType.setup_requirements, + ResponseType.block_details, ResponseType.block_output, ResponseType.error, ]); @@ -35,6 +54,15 @@ export function isRunBlockSetupRequirementsOutput( ); } +export function isRunBlockDetailsOutput( + output: RunBlockToolOutput, +): output is BlockDetailsResponse { + return ( + output.type === ResponseType.block_details || + ("block" in output && typeof output.block === "object") + ); +} + export function isRunBlockBlockOutput( output: RunBlockToolOutput, ): output is BlockOutputResponse { @@ -64,6 +92,7 @@ function parseOutput(output: unknown): RunBlockToolOutput | null { return output as RunBlockToolOutput; } if ("block_id" in output) return output as BlockOutputResponse; + if ("block" in output) return output as BlockDetailsResponse; if ("setup_info" in output) return output as SetupRequirementsResponse; if ("error" in output || "details" in output) return output as ErrorResponse; @@ -84,17 +113,25 @@ export function getAnimationText(part: { output?: unknown; }): string { const input = part.input as RunBlockInput | undefined; + const blockName = input?.block_name?.trim(); const blockId = input?.block_id?.trim(); - const blockText = blockId ? ` "${blockId}"` : ""; + // Prefer block_name if available, otherwise fall back to block_id + const blockText = blockName + ? ` "${blockName}"` + : blockId + ? ` "${blockId}"` + : ""; switch (part.state) { case "input-streaming": case "input-available": - return `Running the block${blockText}`; + return `Running${blockText}`; case "output-available": { const output = parseOutput(part.output); - if (!output) return `Running the block${blockText}`; + if (!output) return `Running${blockText}`; if (isRunBlockBlockOutput(output)) return `Ran "${output.block_name}"`; + if (isRunBlockDetailsOutput(output)) + return `Details for "${output.block.name}"`; if (isRunBlockSetupRequirementsOutput(output)) { return `Setup needed for "${output.setup_info.agent_name}"`; } @@ -158,6 +195,21 @@ export function getAccordionMeta(output: RunBlockToolOutput): { }; } + if (isRunBlockDetailsOutput(output)) { + const inputKeys = Object.keys( + (output.block.inputs as { properties?: Record }) + ?.properties ?? {}, + ); + return { + icon, + title: output.block.name, + description: + inputKeys.length > 0 + ? `${inputKeys.length} input field${inputKeys.length === 1 ? "" : "s"} available` + : output.message, + }; + } + if (isRunBlockSetupRequirementsOutput(output)) { const missingCredsCount = Object.keys( (output.setup_info.user_readiness?.missing_credentials ?? {}) as Record< diff --git a/autogpt_platform/frontend/src/app/api/openapi.json b/autogpt_platform/frontend/src/app/api/openapi.json index 5d2cb83f7c..496a714ba5 100644 --- a/autogpt_platform/frontend/src/app/api/openapi.json +++ b/autogpt_platform/frontend/src/app/api/openapi.json @@ -1053,6 +1053,7 @@ "$ref": "#/components/schemas/ClarificationNeededResponse" }, { "$ref": "#/components/schemas/BlockListResponse" }, + { "$ref": "#/components/schemas/BlockDetailsResponse" }, { "$ref": "#/components/schemas/BlockOutputResponse" }, { "$ref": "#/components/schemas/DocSearchResultsResponse" }, { "$ref": "#/components/schemas/DocPageResponse" }, @@ -6958,6 +6959,58 @@ "enum": ["run", "byte", "second"], "title": "BlockCostType" }, + "BlockDetails": { + "properties": { + "id": { "type": "string", "title": "Id" }, + "name": { "type": "string", "title": "Name" }, + "description": { "type": "string", "title": "Description" }, + "inputs": { + "additionalProperties": true, + "type": "object", + "title": "Inputs", + "default": {} + }, + "outputs": { + "additionalProperties": true, + "type": "object", + "title": "Outputs", + "default": {} + }, + "credentials": { + "items": { "$ref": "#/components/schemas/CredentialsMetaInput" }, + "type": "array", + "title": "Credentials", + "default": [] + } + }, + "type": "object", + "required": ["id", "name", "description"], + "title": "BlockDetails", + "description": "Detailed block information." + }, + "BlockDetailsResponse": { + "properties": { + "type": { + "$ref": "#/components/schemas/ResponseType", + "default": "block_details" + }, + "message": { "type": "string", "title": "Message" }, + "session_id": { + "anyOf": [{ "type": "string" }, { "type": "null" }], + "title": "Session Id" + }, + "block": { "$ref": "#/components/schemas/BlockDetails" }, + "user_authenticated": { + "type": "boolean", + "title": "User Authenticated", + "default": false + } + }, + "type": "object", + "required": ["message", "block"], + "title": "BlockDetailsResponse", + "description": "Response for block details (first run_block attempt)." + }, "BlockInfo": { "properties": { "id": { "type": "string", "title": "Id" }, @@ -7013,62 +7066,13 @@ "properties": { "id": { "type": "string", "title": "Id" }, "name": { "type": "string", "title": "Name" }, - "description": { "type": "string", "title": "Description" }, - "categories": { - "items": { "type": "string" }, - "type": "array", - "title": "Categories" - }, - "input_schema": { - "additionalProperties": true, - "type": "object", - "title": "Input Schema" - }, - "output_schema": { - "additionalProperties": true, - "type": "object", - "title": "Output Schema" - }, - "required_inputs": { - "items": { "$ref": "#/components/schemas/BlockInputFieldInfo" }, - "type": "array", - "title": "Required Inputs", - "description": "List of required input fields for this block" - } + "description": { "type": "string", "title": "Description" } }, "type": "object", - "required": [ - "id", - "name", - "description", - "categories", - "input_schema", - "output_schema" - ], + "required": ["id", "name", "description"], "title": "BlockInfoSummary", "description": "Summary of a block for search results." }, - "BlockInputFieldInfo": { - "properties": { - "name": { "type": "string", "title": "Name" }, - "type": { "type": "string", "title": "Type" }, - "description": { - "type": "string", - "title": "Description", - "default": "" - }, - "required": { - "type": "boolean", - "title": "Required", - "default": false - }, - "default": { "anyOf": [{}, { "type": "null" }], "title": "Default" } - }, - "type": "object", - "required": ["name", "type"], - "title": "BlockInputFieldInfo", - "description": "Information about a block input field." - }, "BlockListResponse": { "properties": { "type": { @@ -7086,12 +7090,7 @@ "title": "Blocks" }, "count": { "type": "integer", "title": "Count" }, - "query": { "type": "string", "title": "Query" }, - "usage_hint": { - "type": "string", - "title": "Usage Hint", - "default": "To execute a block, call run_block with block_id set to the block's 'id' field and input_data containing the required fields from input_schema." - } + "query": { "type": "string", "title": "Query" } }, "type": "object", "required": ["message", "blocks", "count", "query"], @@ -10484,6 +10483,7 @@ "agent_saved", "clarification_needed", "block_list", + "block_details", "block_output", "doc_search_results", "doc_page", From 43b25b5e2fdec3fa0579f952d835355cddbd00f8 Mon Sep 17 00:00:00 2001 From: Reinier van der Leer Date: Fri, 13 Feb 2026 11:09:41 +0100 Subject: [PATCH 18/18] ci(frontend): Speed up E2E test job (#12090) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit The frontend `e2e_test` doesn't have a working build cache setup, causing really slow builds = slow test jobs. These changes reduce total test runtime from ~12 minutes to ~5 minutes. ### Changes 🏗️ - Inject build cache config into docker compose config; let `buildx bake` use GHA cache directly - Add `docker-ci-fix-compose-build-cache.py` script - Optimize `backend/Dockerfile` + root `.dockerignore` - Replace broken DIY pnpm store caching with `actions/setup-node` built-in cache management - Add caching for test seed data created in DB ### Checklist 📋 #### For code changes: - [x] I have clearly listed my changes in the PR description - [x] I have made a test plan - [x] I have tested my changes according to the test plan: - CI --- .dockerignore | 73 +++--- .github/workflows/platform-frontend-ci.yml | 241 +++++++++--------- .../docker-ci-fix-compose-build-cache.py | 195 ++++++++++++++ autogpt_platform/backend/Dockerfile | 69 +++-- autogpt_platform/docker-compose.platform.yml | 4 +- 5 files changed, 406 insertions(+), 176 deletions(-) create mode 100644 .github/workflows/scripts/docker-ci-fix-compose-build-cache.py diff --git a/.dockerignore b/.dockerignore index 9b744e7f9b..427cab29f4 100644 --- a/.dockerignore +++ b/.dockerignore @@ -5,42 +5,13 @@ !docs/ # Platform - Libs -!autogpt_platform/autogpt_libs/autogpt_libs/ -!autogpt_platform/autogpt_libs/pyproject.toml -!autogpt_platform/autogpt_libs/poetry.lock -!autogpt_platform/autogpt_libs/README.md +!autogpt_platform/autogpt_libs/ # Platform - Backend -!autogpt_platform/backend/backend/ -!autogpt_platform/backend/test/e2e_test_data.py -!autogpt_platform/backend/migrations/ -!autogpt_platform/backend/schema.prisma -!autogpt_platform/backend/pyproject.toml -!autogpt_platform/backend/poetry.lock -!autogpt_platform/backend/README.md -!autogpt_platform/backend/.env -!autogpt_platform/backend/gen_prisma_types_stub.py - -# Platform - Market -!autogpt_platform/market/market/ -!autogpt_platform/market/scripts.py -!autogpt_platform/market/schema.prisma -!autogpt_platform/market/pyproject.toml -!autogpt_platform/market/poetry.lock -!autogpt_platform/market/README.md +!autogpt_platform/backend/ # Platform - Frontend -!autogpt_platform/frontend/src/ -!autogpt_platform/frontend/public/ -!autogpt_platform/frontend/scripts/ -!autogpt_platform/frontend/package.json -!autogpt_platform/frontend/pnpm-lock.yaml -!autogpt_platform/frontend/tsconfig.json -!autogpt_platform/frontend/README.md -## config -!autogpt_platform/frontend/*.config.* -!autogpt_platform/frontend/.env.* -!autogpt_platform/frontend/.env +!autogpt_platform/frontend/ # Classic - AutoGPT !classic/original_autogpt/autogpt/ @@ -64,6 +35,38 @@ # Classic - Frontend !classic/frontend/build/web/ -# Explicitly re-ignore some folders -.* -**/__pycache__ +# Explicitly re-ignore unwanted files from whitelisted directories +# Note: These patterns MUST come after the whitelist rules to take effect + +# Hidden files and directories (but keep frontend .env files needed for build) +**/.* +!autogpt_platform/frontend/.env +!autogpt_platform/frontend/.env.default +!autogpt_platform/frontend/.env.production + +# Python artifacts +**/__pycache__/ +**/*.pyc +**/*.pyo +**/.venv/ +**/.ruff_cache/ +**/.pytest_cache/ +**/.coverage +**/htmlcov/ + +# Node artifacts +**/node_modules/ +**/.next/ +**/storybook-static/ +**/playwright-report/ +**/test-results/ + +# Build artifacts +**/dist/ +**/build/ +!autogpt_platform/frontend/src/**/build/ +**/target/ + +# Logs and temp files +**/*.log +**/*.tmp diff --git a/.github/workflows/platform-frontend-ci.yml b/.github/workflows/platform-frontend-ci.yml index 6410daae9f..4bf8a2b80c 100644 --- a/.github/workflows/platform-frontend-ci.yml +++ b/.github/workflows/platform-frontend-ci.yml @@ -26,7 +26,6 @@ jobs: setup: runs-on: ubuntu-latest outputs: - cache-key: ${{ steps.cache-key.outputs.key }} components-changed: ${{ steps.filter.outputs.components }} steps: @@ -41,28 +40,17 @@ jobs: components: - 'autogpt_platform/frontend/src/components/**' - - name: Set up Node.js - uses: actions/setup-node@v6 - with: - node-version: "22.18.0" - - name: Enable corepack run: corepack enable - - name: Generate cache key - id: cache-key - run: echo "key=${{ runner.os }}-pnpm-${{ hashFiles('autogpt_platform/frontend/pnpm-lock.yaml', 'autogpt_platform/frontend/package.json') }}" >> $GITHUB_OUTPUT - - - name: Cache dependencies - uses: actions/cache@v5 + - name: Set up Node + uses: actions/setup-node@v6 with: - path: ~/.pnpm-store - key: ${{ steps.cache-key.outputs.key }} - restore-keys: | - ${{ runner.os }}-pnpm-${{ hashFiles('autogpt_platform/frontend/pnpm-lock.yaml') }} - ${{ runner.os }}-pnpm- + node-version: "22.18.0" + cache: "pnpm" + cache-dependency-path: autogpt_platform/frontend/pnpm-lock.yaml - - name: Install dependencies + - name: Install dependencies to populate cache run: pnpm install --frozen-lockfile lint: @@ -73,22 +61,15 @@ jobs: - name: Checkout repository uses: actions/checkout@v6 - - name: Set up Node.js - uses: actions/setup-node@v6 - with: - node-version: "22.18.0" - - name: Enable corepack run: corepack enable - - name: Restore dependencies cache - uses: actions/cache@v5 + - name: Set up Node + uses: actions/setup-node@v6 with: - path: ~/.pnpm-store - key: ${{ needs.setup.outputs.cache-key }} - restore-keys: | - ${{ runner.os }}-pnpm-${{ hashFiles('autogpt_platform/frontend/pnpm-lock.yaml') }} - ${{ runner.os }}-pnpm- + node-version: "22.18.0" + cache: "pnpm" + cache-dependency-path: autogpt_platform/frontend/pnpm-lock.yaml - name: Install dependencies run: pnpm install --frozen-lockfile @@ -111,22 +92,15 @@ jobs: with: fetch-depth: 0 - - name: Set up Node.js - uses: actions/setup-node@v6 - with: - node-version: "22.18.0" - - name: Enable corepack run: corepack enable - - name: Restore dependencies cache - uses: actions/cache@v5 + - name: Set up Node + uses: actions/setup-node@v6 with: - path: ~/.pnpm-store - key: ${{ needs.setup.outputs.cache-key }} - restore-keys: | - ${{ runner.os }}-pnpm-${{ hashFiles('autogpt_platform/frontend/pnpm-lock.yaml') }} - ${{ runner.os }}-pnpm- + node-version: "22.18.0" + cache: "pnpm" + cache-dependency-path: autogpt_platform/frontend/pnpm-lock.yaml - name: Install dependencies run: pnpm install --frozen-lockfile @@ -141,10 +115,8 @@ jobs: exitOnceUploaded: true e2e_test: + name: end-to-end tests runs-on: big-boi - needs: setup - strategy: - fail-fast: false steps: - name: Checkout repository @@ -152,19 +124,11 @@ jobs: with: submodules: recursive - - name: Set up Node.js - uses: actions/setup-node@v6 - with: - node-version: "22.18.0" - - - name: Enable corepack - run: corepack enable - - - name: Copy default supabase .env + - name: Set up Platform - Copy default supabase .env run: | cp ../.env.default ../.env - - name: Copy backend .env and set OpenAI API key + - name: Set up Platform - Copy backend .env and set OpenAI API key run: | cp ../backend/.env.default ../backend/.env echo "OPENAI_INTERNAL_API_KEY=${{ secrets.OPENAI_API_KEY }}" >> ../backend/.env @@ -172,77 +136,125 @@ jobs: # Used by E2E test data script to generate embeddings for approved store agents OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }} - - name: Set up Docker Buildx + - name: Set up Platform - Set up Docker Buildx uses: docker/setup-buildx-action@v3 + with: + driver: docker-container + driver-opts: network=host - - name: Cache Docker layers + - name: Set up Platform - Expose GHA cache to docker buildx CLI + uses: crazy-max/ghaction-github-runtime@v3 + + - name: Set up Platform - Build Docker images (with cache) + working-directory: autogpt_platform + run: | + pip install pyyaml + + # Resolve extends and generate a flat compose file that bake can understand + docker compose -f docker-compose.yml config > docker-compose.resolved.yml + + # Add cache configuration to the resolved compose file + python ../.github/workflows/scripts/docker-ci-fix-compose-build-cache.py \ + --source docker-compose.resolved.yml \ + --cache-from "type=gha" \ + --cache-to "type=gha,mode=max" \ + --backend-hash "${{ hashFiles('autogpt_platform/backend/Dockerfile', 'autogpt_platform/backend/poetry.lock', 'autogpt_platform/backend/backend') }}" \ + --frontend-hash "${{ hashFiles('autogpt_platform/frontend/Dockerfile', 'autogpt_platform/frontend/pnpm-lock.yaml', 'autogpt_platform/frontend/src') }}" \ + --git-ref "${{ github.ref }}" + + # Build with bake using the resolved compose file (now includes cache config) + docker buildx bake --allow=fs.read=.. -f docker-compose.resolved.yml --load + env: + NEXT_PUBLIC_PW_TEST: true + + - name: Set up tests - Cache E2E test data + id: e2e-data-cache uses: actions/cache@v5 with: - path: /tmp/.buildx-cache - key: ${{ runner.os }}-buildx-frontend-test-${{ hashFiles('autogpt_platform/docker-compose.yml', 'autogpt_platform/backend/Dockerfile', 'autogpt_platform/backend/pyproject.toml', 'autogpt_platform/backend/poetry.lock') }} - restore-keys: | - ${{ runner.os }}-buildx-frontend-test- + path: /tmp/e2e_test_data.sql + key: e2e-test-data-${{ hashFiles('autogpt_platform/backend/test/e2e_test_data.py', 'autogpt_platform/backend/migrations/**', '.github/workflows/platform-frontend-ci.yml') }} - - name: Run docker compose + - name: Set up Platform - Start Supabase DB + Auth run: | - NEXT_PUBLIC_PW_TEST=true docker compose -f ../docker-compose.yml up -d + docker compose -f ../docker-compose.resolved.yml up -d db auth --no-build + echo "Waiting for database to be ready..." + timeout 60 sh -c 'until docker compose -f ../docker-compose.resolved.yml exec -T db pg_isready -U postgres 2>/dev/null; do sleep 2; done' + echo "Waiting for auth service to be ready..." + timeout 60 sh -c 'until docker compose -f ../docker-compose.resolved.yml exec -T db psql -U postgres -d postgres -c "SELECT 1 FROM auth.users LIMIT 1" 2>/dev/null; do sleep 2; done' || echo "Auth schema check timeout, continuing..." + + - name: Set up Platform - Run migrations + run: | + echo "Running migrations..." + docker compose -f ../docker-compose.resolved.yml run --rm migrate + echo "✅ Migrations completed" env: - DOCKER_BUILDKIT: 1 - BUILDX_CACHE_FROM: type=local,src=/tmp/.buildx-cache - BUILDX_CACHE_TO: type=local,dest=/tmp/.buildx-cache-new,mode=max + NEXT_PUBLIC_PW_TEST: true - - name: Move cache + - name: Set up tests - Load cached E2E test data + if: steps.e2e-data-cache.outputs.cache-hit == 'true' run: | - rm -rf /tmp/.buildx-cache - if [ -d "/tmp/.buildx-cache-new" ]; then - mv /tmp/.buildx-cache-new /tmp/.buildx-cache - fi + echo "✅ Found cached E2E test data, restoring..." + { + echo "SET session_replication_role = 'replica';" + cat /tmp/e2e_test_data.sql + echo "SET session_replication_role = 'origin';" + } | docker compose -f ../docker-compose.resolved.yml exec -T db psql -U postgres -d postgres -b + # Refresh materialized views after restore + docker compose -f ../docker-compose.resolved.yml exec -T db \ + psql -U postgres -d postgres -b -c "SET search_path TO platform; SELECT refresh_store_materialized_views();" || true - - name: Wait for services to be ready + echo "✅ E2E test data restored from cache" + + - name: Set up Platform - Start (all other services) run: | + docker compose -f ../docker-compose.resolved.yml up -d --no-build echo "Waiting for rest_server to be ready..." timeout 60 sh -c 'until curl -f http://localhost:8006/health 2>/dev/null; do sleep 2; done' || echo "Rest server health check timeout, continuing..." - echo "Waiting for database to be ready..." - timeout 60 sh -c 'until docker compose -f ../docker-compose.yml exec -T db pg_isready -U postgres 2>/dev/null; do sleep 2; done' || echo "Database ready check timeout, continuing..." + env: + NEXT_PUBLIC_PW_TEST: true - - name: Create E2E test data + - name: Set up tests - Create E2E test data + if: steps.e2e-data-cache.outputs.cache-hit != 'true' run: | echo "Creating E2E test data..." - # First try to run the script from inside the container - if docker compose -f ../docker-compose.yml exec -T rest_server test -f /app/autogpt_platform/backend/test/e2e_test_data.py; then - echo "✅ Found e2e_test_data.py in container, running it..." - docker compose -f ../docker-compose.yml exec -T rest_server sh -c "cd /app/autogpt_platform && python backend/test/e2e_test_data.py" || { - echo "❌ E2E test data creation failed!" - docker compose -f ../docker-compose.yml logs --tail=50 rest_server - exit 1 - } - else - echo "⚠️ e2e_test_data.py not found in container, copying and running..." - # Copy the script into the container and run it - docker cp ../backend/test/e2e_test_data.py $(docker compose -f ../docker-compose.yml ps -q rest_server):/tmp/e2e_test_data.py || { - echo "❌ Failed to copy script to container" - exit 1 - } - docker compose -f ../docker-compose.yml exec -T rest_server sh -c "cd /app/autogpt_platform && python /tmp/e2e_test_data.py" || { - echo "❌ E2E test data creation failed!" - docker compose -f ../docker-compose.yml logs --tail=50 rest_server - exit 1 - } - fi + docker cp ../backend/test/e2e_test_data.py $(docker compose -f ../docker-compose.resolved.yml ps -q rest_server):/tmp/e2e_test_data.py + docker compose -f ../docker-compose.resolved.yml exec -T rest_server sh -c "cd /app/autogpt_platform && python /tmp/e2e_test_data.py" || { + echo "❌ E2E test data creation failed!" + docker compose -f ../docker-compose.resolved.yml logs --tail=50 rest_server + exit 1 + } - - name: Restore dependencies cache - uses: actions/cache@v5 + # Dump auth.users + platform schema for cache (two separate dumps) + echo "Dumping database for cache..." + { + docker compose -f ../docker-compose.resolved.yml exec -T db \ + pg_dump -U postgres --data-only --column-inserts \ + --table='auth.users' postgres + docker compose -f ../docker-compose.resolved.yml exec -T db \ + pg_dump -U postgres --data-only --column-inserts \ + --schema=platform \ + --exclude-table='platform._prisma_migrations' \ + --exclude-table='platform.apscheduler_jobs' \ + --exclude-table='platform.apscheduler_jobs_batched_notifications' \ + postgres + } > /tmp/e2e_test_data.sql + + echo "✅ Database dump created for caching ($(wc -l < /tmp/e2e_test_data.sql) lines)" + + - name: Set up tests - Enable corepack + run: corepack enable + + - name: Set up tests - Set up Node + uses: actions/setup-node@v6 with: - path: ~/.pnpm-store - key: ${{ needs.setup.outputs.cache-key }} - restore-keys: | - ${{ runner.os }}-pnpm-${{ hashFiles('autogpt_platform/frontend/pnpm-lock.yaml') }} - ${{ runner.os }}-pnpm- + node-version: "22.18.0" + cache: "pnpm" + cache-dependency-path: autogpt_platform/frontend/pnpm-lock.yaml - - name: Install dependencies + - name: Set up tests - Install dependencies run: pnpm install --frozen-lockfile - - name: Install Browser 'chromium' + - name: Set up tests - Install browser 'chromium' run: pnpm playwright install --with-deps chromium - name: Run Playwright tests @@ -269,7 +281,7 @@ jobs: - name: Print Final Docker Compose logs if: always() - run: docker compose -f ../docker-compose.yml logs + run: docker compose -f ../docker-compose.resolved.yml logs integration_test: runs-on: ubuntu-latest @@ -281,22 +293,15 @@ jobs: with: submodules: recursive - - name: Set up Node.js - uses: actions/setup-node@v6 - with: - node-version: "22.18.0" - - name: Enable corepack run: corepack enable - - name: Restore dependencies cache - uses: actions/cache@v5 + - name: Set up Node + uses: actions/setup-node@v6 with: - path: ~/.pnpm-store - key: ${{ needs.setup.outputs.cache-key }} - restore-keys: | - ${{ runner.os }}-pnpm-${{ hashFiles('autogpt_platform/frontend/pnpm-lock.yaml') }} - ${{ runner.os }}-pnpm- + node-version: "22.18.0" + cache: "pnpm" + cache-dependency-path: autogpt_platform/frontend/pnpm-lock.yaml - name: Install dependencies run: pnpm install --frozen-lockfile diff --git a/.github/workflows/scripts/docker-ci-fix-compose-build-cache.py b/.github/workflows/scripts/docker-ci-fix-compose-build-cache.py new file mode 100644 index 0000000000..33693fc739 --- /dev/null +++ b/.github/workflows/scripts/docker-ci-fix-compose-build-cache.py @@ -0,0 +1,195 @@ +#!/usr/bin/env python3 +""" +Add cache configuration to a resolved docker-compose file for all services +that have a build key, and ensure image names match what docker compose expects. +""" + +import argparse + +import yaml + + +DEFAULT_BRANCH = "dev" +CACHE_BUILDS_FOR_COMPONENTS = ["backend", "frontend"] + + +def main(): + parser = argparse.ArgumentParser( + description="Add cache config to a resolved compose file" + ) + parser.add_argument( + "--source", + required=True, + help="Source compose file to read (should be output of `docker compose config`)", + ) + parser.add_argument( + "--cache-from", + default="type=gha", + help="Cache source configuration", + ) + parser.add_argument( + "--cache-to", + default="type=gha,mode=max", + help="Cache destination configuration", + ) + for component in CACHE_BUILDS_FOR_COMPONENTS: + parser.add_argument( + f"--{component}-hash", + default="", + help=f"Hash for {component} cache scope (e.g., from hashFiles())", + ) + parser.add_argument( + "--git-ref", + default="", + help="Git ref for branch-based cache scope (e.g., refs/heads/master)", + ) + args = parser.parse_args() + + # Normalize git ref to a safe scope name (e.g., refs/heads/master -> master) + git_ref_scope = "" + if args.git_ref: + git_ref_scope = args.git_ref.replace("refs/heads/", "").replace("/", "-") + + with open(args.source, "r") as f: + compose = yaml.safe_load(f) + + # Get project name from compose file or default + project_name = compose.get("name", "autogpt_platform") + + def get_image_name(dockerfile: str, target: str) -> str: + """Generate image name based on Dockerfile folder and build target.""" + dockerfile_parts = dockerfile.replace("\\", "/").split("/") + if len(dockerfile_parts) >= 2: + folder_name = dockerfile_parts[-2] # e.g., "backend" or "frontend" + else: + folder_name = "app" + return f"{project_name}-{folder_name}:{target}" + + def get_build_key(dockerfile: str, target: str) -> str: + """Generate a unique key for a Dockerfile+target combination.""" + return f"{dockerfile}:{target}" + + def get_component(dockerfile: str) -> str | None: + """Get component name (frontend/backend) from dockerfile path.""" + for component in CACHE_BUILDS_FOR_COMPONENTS: + if component in dockerfile: + return component + return None + + # First pass: collect all services with build configs and identify duplicates + # Track which (dockerfile, target) combinations we've seen + build_key_to_first_service: dict[str, str] = {} + services_to_build: list[str] = [] + services_to_dedupe: list[str] = [] + + for service_name, service_config in compose.get("services", {}).items(): + if "build" not in service_config: + continue + + build_config = service_config["build"] + dockerfile = build_config.get("dockerfile", "Dockerfile") + target = build_config.get("target", "default") + build_key = get_build_key(dockerfile, target) + + if build_key not in build_key_to_first_service: + # First service with this build config - it will do the actual build + build_key_to_first_service[build_key] = service_name + services_to_build.append(service_name) + else: + # Duplicate - will just use the image from the first service + services_to_dedupe.append(service_name) + + # Second pass: configure builds and deduplicate + modified_services = [] + for service_name, service_config in compose.get("services", {}).items(): + if "build" not in service_config: + continue + + build_config = service_config["build"] + dockerfile = build_config.get("dockerfile", "Dockerfile") + target = build_config.get("target", "latest") + image_name = get_image_name(dockerfile, target) + + # Set image name for all services (needed for both builders and deduped) + service_config["image"] = image_name + + if service_name in services_to_dedupe: + # Remove build config - this service will use the pre-built image + del service_config["build"] + continue + + # This service will do the actual build - add cache config + cache_from_list = [] + cache_to_list = [] + + component = get_component(dockerfile) + if not component: + # Skip services that don't clearly match frontend/backend + continue + + # Get the hash for this component + component_hash = getattr(args, f"{component}_hash") + + # Scope format: platform-{component}-{target}-{hash|ref} + # Example: platform-backend-server-abc123 + + if "type=gha" in args.cache_from: + # 1. Primary: exact hash match (most specific) + if component_hash: + hash_scope = f"platform-{component}-{target}-{component_hash}" + cache_from_list.append(f"{args.cache_from},scope={hash_scope}") + + # 2. Fallback: branch-based cache + if git_ref_scope: + ref_scope = f"platform-{component}-{target}-{git_ref_scope}" + cache_from_list.append(f"{args.cache_from},scope={ref_scope}") + + # 3. Fallback: dev branch cache (for PRs/feature branches) + if git_ref_scope and git_ref_scope != DEFAULT_BRANCH: + master_scope = f"platform-{component}-{target}-{DEFAULT_BRANCH}" + cache_from_list.append(f"{args.cache_from},scope={master_scope}") + + if "type=gha" in args.cache_to: + # Write to both hash-based and branch-based scopes + if component_hash: + hash_scope = f"platform-{component}-{target}-{component_hash}" + cache_to_list.append(f"{args.cache_to},scope={hash_scope}") + + if git_ref_scope: + ref_scope = f"platform-{component}-{target}-{git_ref_scope}" + cache_to_list.append(f"{args.cache_to},scope={ref_scope}") + + # Ensure we have at least one cache source/target + if not cache_from_list: + cache_from_list.append(args.cache_from) + if not cache_to_list: + cache_to_list.append(args.cache_to) + + build_config["cache_from"] = cache_from_list + build_config["cache_to"] = cache_to_list + modified_services.append(service_name) + + # Write back to the same file + with open(args.source, "w") as f: + yaml.dump(compose, f, default_flow_style=False, sort_keys=False) + + print(f"Added cache config to {len(modified_services)} services in {args.source}:") + for svc in modified_services: + svc_config = compose["services"][svc] + build_cfg = svc_config.get("build", {}) + cache_from_list = build_cfg.get("cache_from", ["none"]) + cache_to_list = build_cfg.get("cache_to", ["none"]) + print(f" - {svc}") + print(f" image: {svc_config.get('image', 'N/A')}") + print(f" cache_from: {cache_from_list}") + print(f" cache_to: {cache_to_list}") + if services_to_dedupe: + print( + f"Deduplicated {len(services_to_dedupe)} services (will use pre-built images):" + ) + for svc in services_to_dedupe: + print(f" - {svc} -> {compose['services'][svc].get('image', 'N/A')}") + + +if __name__ == "__main__": + main() diff --git a/autogpt_platform/backend/Dockerfile b/autogpt_platform/backend/Dockerfile index 9bd455e490..ace534b730 100644 --- a/autogpt_platform/backend/Dockerfile +++ b/autogpt_platform/backend/Dockerfile @@ -1,3 +1,5 @@ +# ============================ DEPENDENCY BUILDER ============================ # + FROM debian:13-slim AS builder # Set environment variables @@ -51,7 +53,9 @@ COPY autogpt_platform/backend/backend/data/partial_types.py ./backend/data/parti COPY autogpt_platform/backend/gen_prisma_types_stub.py ./ RUN poetry run prisma generate && poetry run gen-prisma-stub -FROM debian:13-slim AS server_dependencies +# ============================== BACKEND SERVER ============================== # + +FROM debian:13-slim AS server WORKDIR /app @@ -63,15 +67,14 @@ ENV POETRY_HOME=/opt/poetry \ ENV PATH=/opt/poetry/bin:$PATH # Install Python, FFmpeg, and ImageMagick (required for video processing blocks) -RUN apt-get update && apt-get install -y \ +# Using --no-install-recommends saves ~650MB by skipping unnecessary deps like llvm, mesa, etc. +RUN apt-get update && apt-get install -y --no-install-recommends \ python3.13 \ python3-pip \ ffmpeg \ imagemagick \ && rm -rf /var/lib/apt/lists/* -# Copy only necessary files from builder -COPY --from=builder /app /app COPY --from=builder /usr/local/lib/python3* /usr/local/lib/python3* COPY --from=builder /usr/local/bin/poetry /usr/local/bin/poetry # Copy Node.js installation for Prisma @@ -81,30 +84,54 @@ COPY --from=builder /usr/bin/npm /usr/bin/npm COPY --from=builder /usr/bin/npx /usr/bin/npx COPY --from=builder /root/.cache/prisma-python/binaries /root/.cache/prisma-python/binaries -ENV PATH="/app/autogpt_platform/backend/.venv/bin:$PATH" - -RUN mkdir -p /app/autogpt_platform/autogpt_libs -RUN mkdir -p /app/autogpt_platform/backend - -COPY autogpt_platform/autogpt_libs /app/autogpt_platform/autogpt_libs - -COPY autogpt_platform/backend/poetry.lock autogpt_platform/backend/pyproject.toml /app/autogpt_platform/backend/ - WORKDIR /app/autogpt_platform/backend -FROM server_dependencies AS migrate +# Copy only the .venv from builder (not the entire /app directory) +# The .venv includes the generated Prisma client +COPY --from=builder /app/autogpt_platform/backend/.venv ./.venv +ENV PATH="/app/autogpt_platform/backend/.venv/bin:$PATH" -# Migration stage only needs schema and migrations - much lighter than full backend -COPY autogpt_platform/backend/schema.prisma /app/autogpt_platform/backend/ -COPY autogpt_platform/backend/backend/data/partial_types.py /app/autogpt_platform/backend/backend/data/partial_types.py -COPY autogpt_platform/backend/migrations /app/autogpt_platform/backend/migrations +# Copy dependency files + autogpt_libs (path dependency) +COPY autogpt_platform/autogpt_libs /app/autogpt_platform/autogpt_libs +COPY autogpt_platform/backend/poetry.lock autogpt_platform/backend/pyproject.toml ./ -FROM server_dependencies AS server - -COPY autogpt_platform/backend /app/autogpt_platform/backend +# Copy backend code + docs (for Copilot docs search) +COPY autogpt_platform/backend ./ COPY docs /app/docs RUN poetry install --no-ansi --only-root ENV PORT=8000 CMD ["poetry", "run", "rest"] + +# =============================== DB MIGRATOR =============================== # + +# Lightweight migrate stage - only needs Prisma CLI, not full Python environment +FROM debian:13-slim AS migrate + +WORKDIR /app/autogpt_platform/backend + +ENV DEBIAN_FRONTEND=noninteractive + +# Install only what's needed for prisma migrate: Node.js and minimal Python for prisma-python +RUN apt-get update && apt-get install -y --no-install-recommends \ + python3.13 \ + python3-pip \ + ca-certificates \ + && rm -rf /var/lib/apt/lists/* + +# Copy Node.js from builder (needed for Prisma CLI) +COPY --from=builder /usr/bin/node /usr/bin/node +COPY --from=builder /usr/lib/node_modules /usr/lib/node_modules +COPY --from=builder /usr/bin/npm /usr/bin/npm + +# Copy Prisma binaries +COPY --from=builder /root/.cache/prisma-python/binaries /root/.cache/prisma-python/binaries + +# Install prisma-client-py directly (much smaller than copying full venv) +RUN pip3 install prisma>=0.15.0 --break-system-packages + +COPY autogpt_platform/backend/schema.prisma ./ +COPY autogpt_platform/backend/backend/data/partial_types.py ./backend/data/partial_types.py +COPY autogpt_platform/backend/gen_prisma_types_stub.py ./ +COPY autogpt_platform/backend/migrations ./migrations diff --git a/autogpt_platform/docker-compose.platform.yml b/autogpt_platform/docker-compose.platform.yml index de6ecfd612..bab92d4693 100644 --- a/autogpt_platform/docker-compose.platform.yml +++ b/autogpt_platform/docker-compose.platform.yml @@ -37,7 +37,7 @@ services: context: ../ dockerfile: autogpt_platform/backend/Dockerfile target: migrate - command: ["sh", "-c", "poetry run prisma generate && poetry run gen-prisma-stub && poetry run prisma migrate deploy"] + command: ["sh", "-c", "prisma generate && python3 gen_prisma_types_stub.py && prisma migrate deploy"] develop: watch: - path: ./ @@ -56,7 +56,7 @@ services: test: [ "CMD-SHELL", - "poetry run prisma migrate status | grep -q 'No pending migrations' || exit 1", + "prisma migrate status | grep -q 'No pending migrations' || exit 1", ] interval: 30s timeout: 10s