diff --git a/.github/copilot-instructions.md b/.github/copilot-instructions.md index a0834a6913..870e6b4b0a 100644 --- a/.github/copilot-instructions.md +++ b/.github/copilot-instructions.md @@ -12,6 +12,7 @@ This file provides comprehensive onboarding information for GitHub Copilot codin - **Infrastructure** - Docker configurations, CI/CD, and development tools **Primary Languages & Frameworks:** + - **Backend**: Python 3.10-3.13, FastAPI, Prisma ORM, PostgreSQL, RabbitMQ - **Frontend**: TypeScript, Next.js 15, React, Tailwind CSS, Radix UI - **Development**: Docker, Poetry, pnpm, Playwright, Storybook @@ -23,15 +24,17 @@ This file provides comprehensive onboarding information for GitHub Copilot codin **Always run these commands in the correct directory and in this order:** 1. **Initial Setup** (required once): + ```bash # Clone and enter repository git clone && cd AutoGPT - + # Start all services (database, redis, rabbitmq, clamav) cd autogpt_platform && docker compose --profile local up deps --build --detach ``` 2. **Backend Setup** (always run before backend development): + ```bash cd autogpt_platform/backend poetry install # Install dependencies @@ -48,6 +51,7 @@ This file provides comprehensive onboarding information for GitHub Copilot codin ### Runtime Requirements **Critical:** Always ensure Docker services are running before starting development: + ```bash cd autogpt_platform && docker compose --profile local up deps --build --detach ``` @@ -58,6 +62,7 @@ cd autogpt_platform && docker compose --profile local up deps --build --detach ### Development Commands **Backend Development:** + ```bash cd autogpt_platform/backend poetry run serve # Start development server (port 8000) @@ -68,6 +73,7 @@ poetry run lint # Lint code (ruff) - run after format ``` **Frontend Development:** + ```bash cd autogpt_platform/frontend pnpm dev # Start development server (port 3000) - use for active development @@ -81,23 +87,27 @@ pnpm storybook # Start component development server ### Testing Strategy **Backend Tests:** + - **Block Tests**: `poetry run pytest backend/blocks/test/test_block.py -xvs` (validates all blocks) - **Specific Block**: `poetry run pytest 'backend/blocks/test/test_block.py::test_available_blocks[BlockName]' -xvs` - **Snapshot Tests**: Use `--snapshot-update` when output changes, always review with `git diff` **Frontend Tests:** + - **E2E Tests**: Always run `pnpm dev` before `pnpm test` (Playwright requires running instance) - **Component Tests**: Use Storybook for isolated component development ### Critical Validation Steps **Before committing changes:** + 1. Run `poetry run format` (backend) and `pnpm format` (frontend) 2. Ensure all tests pass in modified areas 3. Verify Docker services are still running 4. Check that database migrations apply cleanly **Common Issues & Workarounds:** + - **Prisma issues**: Run `poetry run prisma generate` after schema changes - **Permission errors**: Ensure Docker has proper permissions - **Port conflicts**: Check the `docker-compose.yml` file for the current list of exposed ports. 
You can list all mapped ports with: @@ -108,6 +118,7 @@ pnpm storybook # Start component development server ### Core Architecture **AutoGPT Platform** (`autogpt_platform/`): + - `backend/` - FastAPI server with async support - `backend/backend/` - Core API logic - `backend/blocks/` - Agent execution blocks @@ -121,6 +132,7 @@ pnpm storybook # Start component development server - `docker-compose.yml` - Development stack orchestration **Key Configuration Files:** + - `pyproject.toml` - Python dependencies and tooling - `package.json` - Node.js dependencies and scripts - `schema.prisma` - Database schema and migrations @@ -136,6 +148,7 @@ pnpm storybook # Start component development server ### Development Workflow **GitHub Actions**: Multiple CI/CD workflows in `.github/workflows/` + - `platform-backend-ci.yml` - Backend testing and validation - `platform-frontend-ci.yml` - Frontend testing and validation - `platform-fullstack-ci.yml` - End-to-end integration tests @@ -146,11 +159,13 @@ pnpm storybook # Start component development server ### Key Source Files **Backend Entry Points:** + - `backend/backend/server/server.py` - FastAPI application setup - `backend/backend/data/` - Database models and user management - `backend/blocks/` - Agent execution blocks and logic **Frontend Entry Points:** + - `frontend/src/app/layout.tsx` - Root application layout - `frontend/src/app/page.tsx` - Home page - `frontend/src/lib/supabase/` - Authentication and database client @@ -160,6 +175,7 @@ pnpm storybook # Start component development server ### Agent Block System Agents are built using a visual block-based system where each block performs a single action. Blocks are defined in `backend/blocks/` and must include: + - Block definition with input/output schemas - Execution logic with proper error handling - Tests validating functionality @@ -167,6 +183,7 @@ Agents are built using a visual block-based system where each block performs a s ### Database & ORM **Prisma ORM** with PostgreSQL backend including pgvector for embeddings: + - Schema in `schema.prisma` - Migrations in `backend/migrations/` - Always run `prisma migrate dev` and `prisma generate` after schema changes @@ -174,13 +191,15 @@ Agents are built using a visual block-based system where each block performs a s ## Environment Configuration ### Configuration Files Priority Order + 1. **Backend**: `/backend/.env.default` → `/backend/.env` (user overrides) -2. **Frontend**: `/frontend/.env.default` → `/frontend/.env` (user overrides) +2. **Frontend**: `/frontend/.env.default` → `/frontend/.env` (user overrides) 3. **Platform**: `/.env.default` (Supabase/shared) → `/.env` (user overrides) 4. Docker Compose `environment:` sections override file-based config 5. Shell environment variables have highest precedence ### Docker Environment Setup + - All services use hardcoded defaults (no `${VARIABLE}` substitutions) - The `env_file` directive loads variables INTO containers at runtime - Backend/Frontend services use YAML anchors for consistent configuration @@ -189,6 +208,7 @@ Agents are built using a visual block-based system where each block performs a s ## Advanced Development Patterns ### Adding New Blocks + 1. Create file in `/backend/backend/blocks/` 2. Inherit from `Block` base class with input/output schemas 3. Implement `run` method with proper error handling @@ -198,6 +218,7 @@ Agents are built using a visual block-based system where each block performs a s 7. 
Consider how inputs/outputs connect with other blocks in graph editor

### API Development
+
1. Update routes in `/backend/backend/server/routers/`
2. Add/update Pydantic models in same directory
3. Write tests alongside route files
@@ -205,21 +226,76 @@ pnpm storybook # Start component development server
5. Run `poetry run test` to verify changes

### Frontend Development
-1. Components in `/frontend/src/components/`
-2. Use existing UI components from `/frontend/src/components/ui/`
-3. Add Storybook stories for component development
-4. Test user-facing features with Playwright E2E tests
-5. Update protected routes in middleware when needed
+
+**📖 Complete Frontend Guide**: See `autogpt_platform/frontend/CONTRIBUTING.md` and `autogpt_platform/frontend/.cursorrules` for comprehensive patterns and conventions.
+
+**Quick Reference:**
+
+**Component Structure:**
+
+- Separate render logic from data/behavior
+- Structure: `ComponentName/ComponentName.tsx` + `useComponentName.ts` + `helpers.ts`
+- Exception: Small components (3-4 lines of logic) can be inline
+- Render-only components can be direct files without folders
+
+**Data Fetching:**
+
+- Use generated API hooks from `@/app/api/__generated__/endpoints/`
+- Generated via Orval from backend OpenAPI spec
+- Pattern: `use{Method}{Version}{OperationName}`
+- Example: `useGetV2ListLibraryAgents`
+- Regenerate with: `pnpm generate:api`
+- **Never** use deprecated `BackendAPI` or `src/lib/autogpt-server-api/*`
+
+**Code Conventions:**
+
+- Use function declarations for components and handlers (not arrow functions)
+- Only arrow functions for small inline lambdas (map, filter, etc.)
+- Components: `PascalCase`, Hooks: `camelCase` with `use` prefix
+- No barrel files or `index.ts` re-exports
+- Minimal comments (code should be self-documenting)
+
+**Styling:**
+
+- Use Tailwind CSS utilities only
+- Use design system components from `src/components/` (atoms, molecules, organisms)
+- Never use `src/components/__legacy__/*`
+- Only use Phosphor Icons (`@phosphor-icons/react`)
+- Prefer design tokens over hardcoded values
+
+**Error Handling:**
+
+- Render errors: Use `ErrorCard` component
+- Mutation errors: Display with toast notifications
+- Manual exceptions: Use `Sentry.captureException()`
+- Global error boundaries already configured
+
+**Testing:**
+
+- Add/update Storybook stories for UI components (`pnpm storybook`)
+- Run Playwright E2E tests with `pnpm test`
+- Verify in Chromatic after PR
+
+**Architecture:**
+
+- Default to client components ("use client")
+- Server components only for SEO or extreme TTFB needs
+- Use React Query for server state (via generated hooks)
+- Co-locate UI state in components/hooks

### Security Guidelines
+
**Cache Protection Middleware** (`/backend/backend/server/middleware/security.py`):
+
- Default: Disables caching for ALL endpoints with `Cache-Control: no-store, no-cache, must-revalidate, private`
- Uses allow list approach for cacheable paths (static assets, health checks, public pages)
- Prevents sensitive data caching in browsers/proxies
- Add new cacheable endpoints to `CACHEABLE_PATHS`

### CI/CD Alignment
+
The repository has comprehensive CI workflows that test:
+
- **Backend**: Python 3.11-3.13, services (Redis/RabbitMQ/ClamAV), Prisma migrations, Poetry lock validation
- **Frontend**: Node.js 21, pnpm, Playwright with Docker Compose stack, API schema validation
- **Integration**: Full-stack type checking and E2E testing
@@ -229,6 +305,7 @@ Match these patterns when developing 
locally - the copilot setup environment mir ## Collaboration with Other AI Assistants This repository is actively developed with assistance from Claude (via CLAUDE.md files). When working on this codebase: + - Check for existing CLAUDE.md files that provide additional context - Follow established patterns and conventions already in the codebase - Maintain consistency with existing code style and architecture @@ -237,8 +314,9 @@ This repository is actively developed with assistance from Claude (via CLAUDE.md ## Trust These Instructions These instructions are comprehensive and tested. Only perform additional searches if: + 1. Information here is incomplete for your specific task 2. You encounter errors not covered by the workarounds 3. You need to understand implementation details not covered above -For detailed platform development patterns, refer to `autogpt_platform/CLAUDE.md` and `AGENTS.md` in the repository root. \ No newline at end of file +For detailed platform development patterns, refer to `autogpt_platform/CLAUDE.md` and `AGENTS.md` in the repository root. diff --git a/.github/workflows/claude-dependabot.yml b/.github/workflows/claude-dependabot.yml index 902fc461b2..20b6f1d28e 100644 --- a/.github/workflows/claude-dependabot.yml +++ b/.github/workflows/claude-dependabot.yml @@ -80,7 +80,7 @@ jobs: - name: Set up Node.js uses: actions/setup-node@v4 with: - node-version: "21" + node-version: "22" - name: Enable corepack run: corepack enable diff --git a/.github/workflows/claude.yml b/.github/workflows/claude.yml index 31f2769ea4..3f5e8c22ec 100644 --- a/.github/workflows/claude.yml +++ b/.github/workflows/claude.yml @@ -44,6 +44,12 @@ jobs: with: fetch-depth: 1 + - name: Free Disk Space (Ubuntu) + uses: jlumbroso/free-disk-space@v1.3.1 + with: + large-packages: false # slow + docker-images: false # limited benefit + # Backend Python/Poetry setup (mirrors platform-backend-ci.yml) - name: Set up Python uses: actions/setup-python@v5 @@ -90,7 +96,7 @@ jobs: - name: Set up Node.js uses: actions/setup-node@v4 with: - node-version: "21" + node-version: "22" - name: Enable corepack run: corepack enable diff --git a/.github/workflows/copilot-setup-steps.yml b/.github/workflows/copilot-setup-steps.yml index 7af1ec4365..13ef01cc44 100644 --- a/.github/workflows/copilot-setup-steps.yml +++ b/.github/workflows/copilot-setup-steps.yml @@ -78,7 +78,7 @@ jobs: - name: Set up Node.js uses: actions/setup-node@v4 with: - node-version: "21" + node-version: "22" - name: Enable corepack run: corepack enable @@ -299,4 +299,4 @@ jobs: echo "✅ AutoGPT Platform development environment setup complete!" 
echo "🚀 Ready for development with Docker services running" echo "📝 Backend server: poetry run serve (port 8000)" - echo "🌐 Frontend server: pnpm dev (port 3000)" \ No newline at end of file + echo "🌐 Frontend server: pnpm dev (port 3000)" diff --git a/.github/workflows/platform-frontend-ci.yml b/.github/workflows/platform-frontend-ci.yml index dc33d3bb5e..2154fe1385 100644 --- a/.github/workflows/platform-frontend-ci.yml +++ b/.github/workflows/platform-frontend-ci.yml @@ -12,6 +12,10 @@ on: - "autogpt_platform/frontend/**" merge_group: +concurrency: + group: ${{ github.workflow }}-${{ github.event_name == 'merge_group' && format('merge-queue-{0}', github.ref) || format('{0}-{1}', github.ref, github.event.pull_request.number || github.sha) }} + cancel-in-progress: ${{ github.event_name == 'pull_request' }} + defaults: run: shell: bash @@ -30,7 +34,7 @@ jobs: - name: Set up Node.js uses: actions/setup-node@v4 with: - node-version: "21" + node-version: "22.18.0" - name: Enable corepack run: corepack enable @@ -62,7 +66,7 @@ jobs: - name: Set up Node.js uses: actions/setup-node@v4 with: - node-version: "21" + node-version: "22.18.0" - name: Enable corepack run: corepack enable @@ -97,7 +101,7 @@ jobs: - name: Set up Node.js uses: actions/setup-node@v4 with: - node-version: "21" + node-version: "22.18.0" - name: Enable corepack run: corepack enable @@ -138,7 +142,7 @@ jobs: - name: Set up Node.js uses: actions/setup-node@v4 with: - node-version: "21" + node-version: "22.18.0" - name: Enable corepack run: corepack enable diff --git a/.github/workflows/platform-fullstack-ci.yml b/.github/workflows/platform-fullstack-ci.yml index d98a6598e0..c888ace6c5 100644 --- a/.github/workflows/platform-fullstack-ci.yml +++ b/.github/workflows/platform-fullstack-ci.yml @@ -12,6 +12,10 @@ on: - "autogpt_platform/**" merge_group: +concurrency: + group: ${{ github.workflow }}-${{ github.event_name == 'merge_group' && format('merge-queue-{0}', github.ref) || github.head_ref && format('pr-{0}', github.event.pull_request.number) || github.sha }} + cancel-in-progress: ${{ github.event_name == 'pull_request' }} + defaults: run: shell: bash @@ -30,7 +34,7 @@ jobs: - name: Set up Node.js uses: actions/setup-node@v4 with: - node-version: "21" + node-version: "22.18.0" - name: Enable corepack run: corepack enable @@ -66,7 +70,7 @@ jobs: - name: Set up Node.js uses: actions/setup-node@v4 with: - node-version: "21" + node-version: "22.18.0" - name: Enable corepack run: corepack enable diff --git a/.github/workflows/repo-close-stale-issues.yml b/.github/workflows/repo-close-stale-issues.yml index a9f183d775..d58459daa1 100644 --- a/.github/workflows/repo-close-stale-issues.yml +++ b/.github/workflows/repo-close-stale-issues.yml @@ -11,7 +11,7 @@ jobs: stale: runs-on: ubuntu-latest steps: - - uses: actions/stale@v9 + - uses: actions/stale@v10 with: # operations-per-run: 5000 stale-issue-message: > diff --git a/.github/workflows/repo-pr-label.yml b/.github/workflows/repo-pr-label.yml index eef928ef16..97579c2784 100644 --- a/.github/workflows/repo-pr-label.yml +++ b/.github/workflows/repo-pr-label.yml @@ -61,6 +61,6 @@ jobs: pull-requests: write runs-on: ubuntu-latest steps: - - uses: actions/labeler@v5 + - uses: actions/labeler@v6 with: sync-labels: true diff --git a/.gitignore b/.gitignore index 15160be56e..dfce8ba810 100644 --- a/.gitignore +++ b/.gitignore @@ -178,3 +178,4 @@ autogpt_platform/backend/settings.py *.ign.* .test-contents .claude/settings.local.json +/autogpt_platform/backend/logs diff --git 
a/autogpt_platform/CLAUDE.md b/autogpt_platform/CLAUDE.md index 3b8eaba0a1..df1f3314aa 100644 --- a/autogpt_platform/CLAUDE.md +++ b/autogpt_platform/CLAUDE.md @@ -63,6 +63,9 @@ poetry run pytest path/to/test.py --snapshot-update # Install dependencies cd frontend && pnpm i +# Generate API client from OpenAPI spec +pnpm generate:api + # Start development server pnpm dev @@ -75,12 +78,23 @@ pnpm storybook # Build production pnpm build +# Format and lint +pnpm format + # Type checking pnpm types ``` -We have a components library in autogpt_platform/frontend/src/components/atoms that should be used when adding new pages and components. +**📖 Complete Guide**: See `/frontend/CONTRIBUTING.md` and `/frontend/.cursorrules` for comprehensive frontend patterns. +**Key Frontend Conventions:** + +- Separate render logic from data/behavior in components +- Use generated API hooks from `@/app/api/__generated__/endpoints/` +- Use function declarations (not arrow functions) for components/handlers +- Use design system components from `src/components/` (atoms, molecules, organisms) +- Only use Phosphor Icons +- Never use `src/components/__legacy__/*` or deprecated `BackendAPI` ## Architecture Overview @@ -95,11 +109,16 @@ We have a components library in autogpt_platform/frontend/src/components/atoms t ### Frontend Architecture -- **Framework**: Next.js App Router with React Server Components -- **State Management**: React hooks + Supabase client for real-time updates +- **Framework**: Next.js 15 App Router (client-first approach) +- **Data Fetching**: Type-safe generated API hooks via Orval + React Query +- **State Management**: React Query for server state, co-located UI state in components/hooks +- **Component Structure**: Separate render logic (`.tsx`) from business logic (`use*.ts` hooks) - **Workflow Builder**: Visual graph editor using @xyflow/react -- **UI Components**: Radix UI primitives with Tailwind CSS styling +- **UI Components**: shadcn/ui (Radix UI primitives) with Tailwind CSS styling +- **Icons**: Phosphor Icons only - **Feature Flags**: LaunchDarkly integration +- **Error Handling**: ErrorCard for render errors, toast for mutations, Sentry for exceptions +- **Testing**: Playwright for E2E, Storybook for component development ### Key Concepts @@ -153,6 +172,7 @@ Key models (defined in `/backend/schema.prisma`): **Adding a new block:** Follow the comprehensive [Block SDK Guide](../../../docs/content/platform/block-sdk-guide.md) which covers: + - Provider configuration with `ProviderBuilder` - Block schema definition - Authentication (API keys, OAuth, webhooks) @@ -160,6 +180,7 @@ Follow the comprehensive [Block SDK Guide](../../../docs/content/platform/block- - File organization Quick steps: + 1. Create new file in `/backend/backend/blocks/` 2. Configure provider using `ProviderBuilder` in `_config.py` 3. Inherit from `Block` base class @@ -171,6 +192,8 @@ Quick steps: Note: when making many new blocks analyze the interfaces for each of these blocks and picture if they would go well together in a graph based editor or would they struggle to connect productively? ex: do the inputs and outputs tie well together? +If you get any pushback or hit complex block conditions check the new_blocks guide in the docs. + **Modifying the API:** 1. Update route in `/backend/backend/server/routers/` @@ -180,10 +203,20 @@ ex: do the inputs and outputs tie well together? **Frontend feature development:** -1. Components go in `/frontend/src/components/` -2. 
Use existing UI components from `/frontend/src/components/ui/` -3. Add Storybook stories for new components -4. Test with Playwright if user-facing +See `/frontend/CONTRIBUTING.md` for complete patterns. Quick reference: + +1. **Pages**: Create in `src/app/(platform)/feature-name/page.tsx` + - Add `usePageName.ts` hook for logic + - Put sub-components in local `components/` folder +2. **Components**: Structure as `ComponentName/ComponentName.tsx` + `useComponentName.ts` + `helpers.ts` + - Use design system components from `src/components/` (atoms, molecules, organisms) + - Never use `src/components/__legacy__/*` +3. **Data fetching**: Use generated API hooks from `@/app/api/__generated__/endpoints/` + - Regenerate with `pnpm generate:api` + - Pattern: `use{Method}{Version}{OperationName}` +4. **Styling**: Tailwind CSS only, use design tokens, Phosphor Icons only +5. **Testing**: Add Storybook stories for new components, Playwright for E2E +6. **Code conventions**: Function declarations (not arrow functions) for components/handlers ### Security Implementation diff --git a/autogpt_platform/Makefile b/autogpt_platform/Makefile index b8a3261fcf..d99fee49d7 100644 --- a/autogpt_platform/Makefile +++ b/autogpt_platform/Makefile @@ -1,4 +1,4 @@ -.PHONY: start-core stop-core logs-core format lint migrate run-backend run-frontend +.PHONY: start-core stop-core logs-core format lint migrate run-backend run-frontend load-store-agents # Run just Supabase + Redis + RabbitMQ start-core: @@ -8,6 +8,11 @@ start-core: stop-core: docker compose stop deps +reset-db: + rm -rf db/docker/volumes/db/data + cd backend && poetry run prisma migrate deploy + cd backend && poetry run prisma generate + # View logs for core services logs-core: docker compose logs -f deps @@ -35,13 +40,22 @@ run-backend: run-frontend: cd frontend && pnpm dev +test-data: + cd backend && poetry run python test/test_data_creator.py + +load-store-agents: + cd backend && poetry run load-store-agents + help: @echo "Usage: make " @echo "Targets:" @echo " start-core - Start just the core services (Supabase, Redis, RabbitMQ) in background" @echo " stop-core - Stop the core services" + @echo " reset-db - Reset the database by deleting the volume" @echo " logs-core - Tail the logs for core services" @echo " format - Format & lint backend (Python) and frontend (TypeScript) code" @echo " migrate - Run backend database migrations" @echo " run-backend - Run the backend FastAPI server" - @echo " run-frontend - Run the frontend Next.js development server" \ No newline at end of file + @echo " run-frontend - Run the frontend Next.js development server" + @echo " test-data - Run the test data creator" + @echo " load-store-agents - Load store agents from agents/ folder into test database" \ No newline at end of file diff --git a/autogpt_platform/autogpt_libs/autogpt_libs/api_key/keysmith.py b/autogpt_platform/autogpt_libs/autogpt_libs/api_key/keysmith.py index 394044a69d..aee7040288 100644 --- a/autogpt_platform/autogpt_libs/autogpt_libs/api_key/keysmith.py +++ b/autogpt_platform/autogpt_libs/autogpt_libs/api_key/keysmith.py @@ -57,6 +57,9 @@ class APIKeySmith: def hash_key(self, raw_key: str) -> tuple[str, str]: """Migrate a legacy hash to secure hash format.""" + if not raw_key.startswith(self.PREFIX): + raise ValueError("Key without 'agpt_' prefix would fail validation") + salt = self._generate_salt() hash = self._hash_key_with_salt(raw_key, salt) return hash, salt.hex() diff --git a/autogpt_platform/autogpt_libs/autogpt_libs/auth/__init__.py 
b/autogpt_platform/autogpt_libs/autogpt_libs/auth/__init__.py index 5202ebc769..edf0f4c29d 100644 --- a/autogpt_platform/autogpt_libs/autogpt_libs/auth/__init__.py +++ b/autogpt_platform/autogpt_libs/autogpt_libs/auth/__init__.py @@ -1,5 +1,10 @@ from .config import verify_settings -from .dependencies import get_user_id, requires_admin_user, requires_user +from .dependencies import ( + get_optional_user_id, + get_user_id, + requires_admin_user, + requires_user, +) from .helpers import add_auth_responses_to_openapi from .models import User @@ -8,6 +13,7 @@ __all__ = [ "get_user_id", "requires_admin_user", "requires_user", + "get_optional_user_id", "add_auth_responses_to_openapi", "User", ] diff --git a/autogpt_platform/autogpt_libs/autogpt_libs/auth/dependencies.py b/autogpt_platform/autogpt_libs/autogpt_libs/auth/dependencies.py index 2fbc3da0e7..3fcecb3544 100644 --- a/autogpt_platform/autogpt_libs/autogpt_libs/auth/dependencies.py +++ b/autogpt_platform/autogpt_libs/autogpt_libs/auth/dependencies.py @@ -4,11 +4,53 @@ FastAPI dependency functions for JWT-based authentication and authorization. These are the high-level dependency functions used in route definitions. """ +import logging + import fastapi +from fastapi.security import HTTPAuthorizationCredentials, HTTPBearer from .jwt_utils import get_jwt_payload, verify_user from .models import User +optional_bearer = HTTPBearer(auto_error=False) + +# Header name for admin impersonation +IMPERSONATION_HEADER_NAME = "X-Act-As-User-Id" + +logger = logging.getLogger(__name__) + + +def get_optional_user_id( + credentials: HTTPAuthorizationCredentials | None = fastapi.Security( + optional_bearer + ), +) -> str | None: + """ + Attempts to extract the user ID ("sub" claim) from a Bearer JWT if provided. + + This dependency allows for both authenticated and anonymous access. If a valid bearer token is + supplied, it parses the JWT and extracts the user ID. If the token is missing or invalid, it returns None, + treating the request as anonymous. + + Args: + credentials: Optional HTTPAuthorizationCredentials object from FastAPI Security dependency. + + Returns: + The user ID (str) extracted from the JWT "sub" claim, or None if no valid token is present. + """ + if not credentials: + return None + + try: + # Parse JWT token to get user ID + from autogpt_libs.auth.jwt_utils import parse_jwt_token + + payload = parse_jwt_token(credentials.credentials) + return payload.get("sub") + except Exception as e: + logger.debug(f"Auth token validation failed (anonymous access): {e}") + return None + async def requires_user(jwt_payload: dict = fastapi.Security(get_jwt_payload)) -> User: """ @@ -32,16 +74,44 @@ async def requires_admin_user( return verify_user(jwt_payload, admin_only=True) -async def get_user_id(jwt_payload: dict = fastapi.Security(get_jwt_payload)) -> str: +async def get_user_id( + request: fastapi.Request, jwt_payload: dict = fastapi.Security(get_jwt_payload) +) -> str: """ FastAPI dependency that returns the ID of the authenticated user. 
+ Supports admin impersonation via X-Act-As-User-Id header: + - If the header is present and user is admin, returns the impersonated user ID + - Otherwise returns the authenticated user's own ID + - Logs all impersonation actions for audit trail + Raises: HTTPException: 401 for authentication failures or missing user ID + HTTPException: 403 if non-admin tries to use impersonation """ + # Get the authenticated user's ID from JWT user_id = jwt_payload.get("sub") if not user_id: raise fastapi.HTTPException( status_code=401, detail="User ID not found in token" ) + + # Check for admin impersonation header + impersonate_header = request.headers.get(IMPERSONATION_HEADER_NAME, "").strip() + if impersonate_header: + # Verify the authenticated user is an admin + authenticated_user = verify_user(jwt_payload, admin_only=False) + if authenticated_user.role != "admin": + raise fastapi.HTTPException( + status_code=403, detail="Only admin users can impersonate other users" + ) + + # Log the impersonation for audit trail + logger.info( + f"Admin impersonation: {authenticated_user.user_id} ({authenticated_user.email}) " + f"acting as user {impersonate_header} for requesting {request.method} {request.url}" + ) + + return impersonate_header + return user_id diff --git a/autogpt_platform/autogpt_libs/autogpt_libs/auth/dependencies_test.py b/autogpt_platform/autogpt_libs/autogpt_libs/auth/dependencies_test.py index 0b9cd6f866..95795c2cfc 100644 --- a/autogpt_platform/autogpt_libs/autogpt_libs/auth/dependencies_test.py +++ b/autogpt_platform/autogpt_libs/autogpt_libs/auth/dependencies_test.py @@ -4,9 +4,10 @@ Tests the full authentication flow from HTTP requests to user validation. """ import os +from unittest.mock import Mock import pytest -from fastapi import FastAPI, HTTPException, Security +from fastapi import FastAPI, HTTPException, Request, Security from fastapi.testclient import TestClient from pytest_mock import MockerFixture @@ -45,6 +46,7 @@ class TestAuthDependencies: """Create a test client.""" return TestClient(app) + @pytest.mark.asyncio async def test_requires_user_with_valid_jwt_payload(self, mocker: MockerFixture): """Test requires_user with valid JWT payload.""" jwt_payload = {"sub": "user-123", "role": "user", "email": "user@example.com"} @@ -58,6 +60,7 @@ class TestAuthDependencies: assert user.user_id == "user-123" assert user.role == "user" + @pytest.mark.asyncio async def test_requires_user_with_admin_jwt_payload(self, mocker: MockerFixture): """Test requires_user accepts admin users.""" jwt_payload = { @@ -73,6 +76,7 @@ class TestAuthDependencies: assert user.user_id == "admin-456" assert user.role == "admin" + @pytest.mark.asyncio async def test_requires_user_missing_sub(self): """Test requires_user with missing user ID.""" jwt_payload = {"role": "user", "email": "user@example.com"} @@ -82,6 +86,7 @@ class TestAuthDependencies: assert exc_info.value.status_code == 401 assert "User ID not found" in exc_info.value.detail + @pytest.mark.asyncio async def test_requires_user_empty_sub(self): """Test requires_user with empty user ID.""" jwt_payload = {"sub": "", "role": "user"} @@ -90,6 +95,7 @@ class TestAuthDependencies: await requires_user(jwt_payload) assert exc_info.value.status_code == 401 + @pytest.mark.asyncio async def test_requires_admin_user_with_admin(self, mocker: MockerFixture): """Test requires_admin_user with admin role.""" jwt_payload = { @@ -105,6 +111,7 @@ class TestAuthDependencies: assert user.user_id == "admin-789" assert user.role == "admin" + @pytest.mark.asyncio async 
def test_requires_admin_user_with_regular_user(self): """Test requires_admin_user rejects regular users.""" jwt_payload = {"sub": "user-123", "role": "user", "email": "user@example.com"} @@ -114,6 +121,7 @@ class TestAuthDependencies: assert exc_info.value.status_code == 403 assert "Admin access required" in exc_info.value.detail + @pytest.mark.asyncio async def test_requires_admin_user_missing_role(self): """Test requires_admin_user with missing role.""" jwt_payload = {"sub": "user-123", "email": "user@example.com"} @@ -121,31 +129,40 @@ class TestAuthDependencies: with pytest.raises(KeyError): await requires_admin_user(jwt_payload) + @pytest.mark.asyncio async def test_get_user_id_with_valid_payload(self, mocker: MockerFixture): """Test get_user_id extracts user ID correctly.""" + request = Mock(spec=Request) + request.headers = {} jwt_payload = {"sub": "user-id-xyz", "role": "user"} mocker.patch( "autogpt_libs.auth.dependencies.get_jwt_payload", return_value=jwt_payload ) - user_id = await get_user_id(jwt_payload) + user_id = await get_user_id(request, jwt_payload) assert user_id == "user-id-xyz" + @pytest.mark.asyncio async def test_get_user_id_missing_sub(self): """Test get_user_id with missing user ID.""" + request = Mock(spec=Request) + request.headers = {} jwt_payload = {"role": "user"} with pytest.raises(HTTPException) as exc_info: - await get_user_id(jwt_payload) + await get_user_id(request, jwt_payload) assert exc_info.value.status_code == 401 assert "User ID not found" in exc_info.value.detail + @pytest.mark.asyncio async def test_get_user_id_none_sub(self): """Test get_user_id with None user ID.""" + request = Mock(spec=Request) + request.headers = {} jwt_payload = {"sub": None, "role": "user"} with pytest.raises(HTTPException) as exc_info: - await get_user_id(jwt_payload) + await get_user_id(request, jwt_payload) assert exc_info.value.status_code == 401 @@ -170,6 +187,7 @@ class TestAuthDependenciesIntegration: return _create_token + @pytest.mark.asyncio async def test_endpoint_auth_enabled_no_token(self): """Test endpoints require token when auth is enabled.""" app = FastAPI() @@ -184,6 +202,7 @@ class TestAuthDependenciesIntegration: response = client.get("/test") assert response.status_code == 401 + @pytest.mark.asyncio async def test_endpoint_with_valid_token(self, create_token): """Test endpoint with valid JWT token.""" app = FastAPI() @@ -203,6 +222,7 @@ class TestAuthDependenciesIntegration: assert response.status_code == 200 assert response.json()["user_id"] == "test-user" + @pytest.mark.asyncio async def test_admin_endpoint_requires_admin_role(self, create_token): """Test admin endpoint rejects non-admin users.""" app = FastAPI() @@ -240,6 +260,7 @@ class TestAuthDependenciesIntegration: class TestAuthDependenciesEdgeCases: """Edge case tests for authentication dependencies.""" + @pytest.mark.asyncio async def test_dependency_with_complex_payload(self): """Test dependencies handle complex JWT payloads.""" complex_payload = { @@ -263,6 +284,7 @@ class TestAuthDependenciesEdgeCases: admin = await requires_admin_user(complex_payload) assert admin.role == "admin" + @pytest.mark.asyncio async def test_dependency_with_unicode_in_payload(self): """Test dependencies handle unicode in JWT payloads.""" unicode_payload = { @@ -276,6 +298,7 @@ class TestAuthDependenciesEdgeCases: assert "😀" in user.user_id assert user.email == "测试@example.com" + @pytest.mark.asyncio async def test_dependency_with_null_values(self): """Test dependencies handle null values in payload.""" 
null_payload = { @@ -290,6 +313,7 @@ class TestAuthDependenciesEdgeCases: assert user.user_id == "user-123" assert user.email is None + @pytest.mark.asyncio async def test_concurrent_requests_isolation(self): """Test that concurrent requests don't interfere with each other.""" payload1 = {"sub": "user-1", "role": "user"} @@ -314,6 +338,7 @@ class TestAuthDependenciesEdgeCases: ({"sub": "user", "role": "user"}, "Admin access required", True), ], ) + @pytest.mark.asyncio async def test_dependency_error_cases( self, payload, expected_error: str, admin_only: bool ): @@ -325,6 +350,7 @@ class TestAuthDependenciesEdgeCases: verify_user(payload, admin_only=admin_only) assert expected_error in exc_info.value.detail + @pytest.mark.asyncio async def test_dependency_valid_user(self): """Test valid user case for dependency.""" # Import verify_user to test it directly since dependencies use FastAPI Security @@ -333,3 +359,196 @@ class TestAuthDependenciesEdgeCases: # Valid case user = verify_user({"sub": "user", "role": "user"}, admin_only=False) assert user.user_id == "user" + + +class TestAdminImpersonation: + """Test suite for admin user impersonation functionality.""" + + @pytest.mark.asyncio + async def test_admin_impersonation_success(self, mocker: MockerFixture): + """Test admin successfully impersonating another user.""" + request = Mock(spec=Request) + request.headers = {"X-Act-As-User-Id": "target-user-123"} + jwt_payload = { + "sub": "admin-456", + "role": "admin", + "email": "admin@example.com", + } + + # Mock verify_user to return admin user data + mock_verify_user = mocker.patch("autogpt_libs.auth.dependencies.verify_user") + mock_verify_user.return_value = Mock( + user_id="admin-456", email="admin@example.com", role="admin" + ) + + # Mock logger to verify audit logging + mock_logger = mocker.patch("autogpt_libs.auth.dependencies.logger") + + mocker.patch( + "autogpt_libs.auth.dependencies.get_jwt_payload", return_value=jwt_payload + ) + + user_id = await get_user_id(request, jwt_payload) + + # Should return the impersonated user ID + assert user_id == "target-user-123" + + # Should log the impersonation attempt + mock_logger.info.assert_called_once() + log_call = mock_logger.info.call_args[0][0] + assert "Admin impersonation:" in log_call + assert "admin@example.com" in log_call + assert "target-user-123" in log_call + + @pytest.mark.asyncio + async def test_non_admin_impersonation_attempt(self, mocker: MockerFixture): + """Test non-admin user attempting impersonation returns 403.""" + request = Mock(spec=Request) + request.headers = {"X-Act-As-User-Id": "target-user-123"} + jwt_payload = { + "sub": "regular-user", + "role": "user", + "email": "user@example.com", + } + + # Mock verify_user to return regular user data + mock_verify_user = mocker.patch("autogpt_libs.auth.dependencies.verify_user") + mock_verify_user.return_value = Mock( + user_id="regular-user", email="user@example.com", role="user" + ) + + mocker.patch( + "autogpt_libs.auth.dependencies.get_jwt_payload", return_value=jwt_payload + ) + + with pytest.raises(HTTPException) as exc_info: + await get_user_id(request, jwt_payload) + + assert exc_info.value.status_code == 403 + assert "Only admin users can impersonate other users" in exc_info.value.detail + + @pytest.mark.asyncio + async def test_impersonation_empty_header(self, mocker: MockerFixture): + """Test impersonation with empty header falls back to regular user ID.""" + request = Mock(spec=Request) + request.headers = {"X-Act-As-User-Id": ""} + jwt_payload = { + "sub": 
"admin-456", + "role": "admin", + "email": "admin@example.com", + } + + mocker.patch( + "autogpt_libs.auth.dependencies.get_jwt_payload", return_value=jwt_payload + ) + + user_id = await get_user_id(request, jwt_payload) + + # Should fall back to the admin's own user ID + assert user_id == "admin-456" + + @pytest.mark.asyncio + async def test_impersonation_missing_header(self, mocker: MockerFixture): + """Test normal behavior when impersonation header is missing.""" + request = Mock(spec=Request) + request.headers = {} # No impersonation header + jwt_payload = { + "sub": "admin-456", + "role": "admin", + "email": "admin@example.com", + } + + mocker.patch( + "autogpt_libs.auth.dependencies.get_jwt_payload", return_value=jwt_payload + ) + + user_id = await get_user_id(request, jwt_payload) + + # Should return the admin's own user ID + assert user_id == "admin-456" + + @pytest.mark.asyncio + async def test_impersonation_audit_logging_details(self, mocker: MockerFixture): + """Test that impersonation audit logging includes all required details.""" + request = Mock(spec=Request) + request.headers = {"X-Act-As-User-Id": "victim-user-789"} + jwt_payload = { + "sub": "admin-999", + "role": "admin", + "email": "superadmin@company.com", + } + + # Mock verify_user to return admin user data + mock_verify_user = mocker.patch("autogpt_libs.auth.dependencies.verify_user") + mock_verify_user.return_value = Mock( + user_id="admin-999", email="superadmin@company.com", role="admin" + ) + + # Mock logger to capture audit trail + mock_logger = mocker.patch("autogpt_libs.auth.dependencies.logger") + + mocker.patch( + "autogpt_libs.auth.dependencies.get_jwt_payload", return_value=jwt_payload + ) + + user_id = await get_user_id(request, jwt_payload) + + # Verify all audit details are logged + assert user_id == "victim-user-789" + mock_logger.info.assert_called_once() + + log_message = mock_logger.info.call_args[0][0] + assert "Admin impersonation:" in log_message + assert "superadmin@company.com" in log_message + assert "victim-user-789" in log_message + + @pytest.mark.asyncio + async def test_impersonation_header_case_sensitivity(self, mocker: MockerFixture): + """Test that impersonation header is case-sensitive.""" + request = Mock(spec=Request) + # Use wrong case - should not trigger impersonation + request.headers = {"x-act-as-user-id": "target-user-123"} + jwt_payload = { + "sub": "admin-456", + "role": "admin", + "email": "admin@example.com", + } + + mocker.patch( + "autogpt_libs.auth.dependencies.get_jwt_payload", return_value=jwt_payload + ) + + user_id = await get_user_id(request, jwt_payload) + + # Should fall back to admin's own ID (header case mismatch) + assert user_id == "admin-456" + + @pytest.mark.asyncio + async def test_impersonation_with_whitespace_header(self, mocker: MockerFixture): + """Test impersonation with whitespace in header value.""" + request = Mock(spec=Request) + request.headers = {"X-Act-As-User-Id": " target-user-123 "} + jwt_payload = { + "sub": "admin-456", + "role": "admin", + "email": "admin@example.com", + } + + # Mock verify_user to return admin user data + mock_verify_user = mocker.patch("autogpt_libs.auth.dependencies.verify_user") + mock_verify_user.return_value = Mock( + user_id="admin-456", email="admin@example.com", role="admin" + ) + + # Mock logger + mock_logger = mocker.patch("autogpt_libs.auth.dependencies.logger") + + mocker.patch( + "autogpt_libs.auth.dependencies.get_jwt_payload", return_value=jwt_payload + ) + + user_id = await get_user_id(request, 
jwt_payload) + + # Should strip whitespace and impersonate successfully + assert user_id == "target-user-123" + mock_logger.info.assert_called_once() diff --git a/autogpt_platform/autogpt_libs/autogpt_libs/auth/helpers.py b/autogpt_platform/autogpt_libs/autogpt_libs/auth/helpers.py index d3d571d73c..10101778e7 100644 --- a/autogpt_platform/autogpt_libs/autogpt_libs/auth/helpers.py +++ b/autogpt_platform/autogpt_libs/autogpt_libs/auth/helpers.py @@ -1,29 +1,25 @@ from fastapi import FastAPI -from fastapi.openapi.utils import get_openapi from .jwt_utils import bearer_jwt_auth def add_auth_responses_to_openapi(app: FastAPI) -> None: """ - Set up custom OpenAPI schema generation that adds 401 responses + Patch a FastAPI instance's `openapi()` method to add 401 responses to all authenticated endpoints. This is needed when using HTTPBearer with auto_error=False to get proper 401 responses instead of 403, but FastAPI only automatically adds security responses when auto_error=True. """ + # Wrap current method to allow stacking OpenAPI schema modifiers like this + wrapped_openapi = app.openapi def custom_openapi(): if app.openapi_schema: return app.openapi_schema - openapi_schema = get_openapi( - title=app.title, - version=app.version, - description=app.description, - routes=app.routes, - ) + openapi_schema = wrapped_openapi() # Add 401 response to all endpoints that have security requirements for path, methods in openapi_schema["paths"].items(): diff --git a/autogpt_platform/autogpt_libs/autogpt_libs/logging/config.py b/autogpt_platform/autogpt_libs/autogpt_libs/logging/config.py index 93a4030bcc..af958d65c3 100644 --- a/autogpt_platform/autogpt_libs/autogpt_libs/logging/config.py +++ b/autogpt_platform/autogpt_libs/autogpt_libs/logging/config.py @@ -94,42 +94,36 @@ def configure_logging(force_cloud_logging: bool = False) -> None: config = LoggingConfig() log_handlers: list[logging.Handler] = [] + structured_logging = config.enable_cloud_logging or force_cloud_logging + # Console output handlers - stdout = logging.StreamHandler(stream=sys.stdout) - stdout.setLevel(config.level) - stdout.addFilter(BelowLevelFilter(logging.WARNING)) - if config.level == logging.DEBUG: - stdout.setFormatter(AGPTFormatter(DEBUG_LOG_FORMAT)) - else: - stdout.setFormatter(AGPTFormatter(SIMPLE_LOG_FORMAT)) + if not structured_logging: + stdout = logging.StreamHandler(stream=sys.stdout) + stdout.setLevel(config.level) + stdout.addFilter(BelowLevelFilter(logging.WARNING)) + if config.level == logging.DEBUG: + stdout.setFormatter(AGPTFormatter(DEBUG_LOG_FORMAT)) + else: + stdout.setFormatter(AGPTFormatter(SIMPLE_LOG_FORMAT)) - stderr = logging.StreamHandler() - stderr.setLevel(logging.WARNING) - if config.level == logging.DEBUG: - stderr.setFormatter(AGPTFormatter(DEBUG_LOG_FORMAT)) - else: - stderr.setFormatter(AGPTFormatter(SIMPLE_LOG_FORMAT)) + stderr = logging.StreamHandler() + stderr.setLevel(logging.WARNING) + if config.level == logging.DEBUG: + stderr.setFormatter(AGPTFormatter(DEBUG_LOG_FORMAT)) + else: + stderr.setFormatter(AGPTFormatter(SIMPLE_LOG_FORMAT)) - log_handlers += [stdout, stderr] + log_handlers += [stdout, stderr] # Cloud logging setup - if config.enable_cloud_logging or force_cloud_logging: - import google.cloud.logging - from google.cloud.logging.handlers import CloudLoggingHandler - from google.cloud.logging_v2.handlers.transports import ( - BackgroundThreadTransport, - ) + else: + # Use Google Cloud Structured Log Handler. 
Log entries are printed to stdout + # in a JSON format which is automatically picked up by Google Cloud Logging. + from google.cloud.logging.handlers import StructuredLogHandler - client = google.cloud.logging.Client() - # Use BackgroundThreadTransport to prevent blocking the main thread - # and deadlocks when gRPC calls to Google Cloud Logging hang - cloud_handler = CloudLoggingHandler( - client, - name="autogpt_logs", - transport=BackgroundThreadTransport, - ) - cloud_handler.setLevel(config.level) - log_handlers.append(cloud_handler) + structured_log_handler = StructuredLogHandler(stream=sys.stdout) + structured_log_handler.setLevel(config.level) + log_handlers.append(structured_log_handler) # File logging setup if config.enable_file_logging: @@ -185,7 +179,13 @@ def configure_logging(force_cloud_logging: bool = False) -> None: # Configure the root logger logging.basicConfig( - format=DEBUG_LOG_FORMAT if config.level == logging.DEBUG else SIMPLE_LOG_FORMAT, + format=( + "%(levelname)s %(message)s" + if structured_logging + else ( + DEBUG_LOG_FORMAT if config.level == logging.DEBUG else SIMPLE_LOG_FORMAT + ) + ), level=config.level, handlers=log_handlers, ) diff --git a/autogpt_platform/backend/.env.default b/autogpt_platform/backend/.env.default index a00af85724..a0004633ca 100644 --- a/autogpt_platform/backend/.env.default +++ b/autogpt_platform/backend/.env.default @@ -134,13 +134,6 @@ POSTMARK_WEBHOOK_TOKEN= # Error Tracking SENTRY_DSN= -# Cloudflare Turnstile (CAPTCHA) Configuration -# Get these from the Cloudflare Turnstile dashboard: https://dash.cloudflare.com/?to=/:account/turnstile -# This is the backend secret key -TURNSTILE_SECRET_KEY= -# This is the verify URL -TURNSTILE_VERIFY_URL=https://challenges.cloudflare.com/turnstile/v0/siteverify - # Feature Flags LAUNCH_DARKLY_SDK_KEY= diff --git a/autogpt_platform/backend/Dockerfile b/autogpt_platform/backend/Dockerfile index 70b31e554d..7f51bad3a1 100644 --- a/autogpt_platform/backend/Dockerfile +++ b/autogpt_platform/backend/Dockerfile @@ -47,6 +47,7 @@ RUN poetry install --no-ansi --no-root # Generate Prisma client COPY autogpt_platform/backend/schema.prisma ./ +COPY autogpt_platform/backend/backend/data/partial_types.py ./backend/data/partial_types.py RUN poetry run prisma generate FROM debian:13-slim AS server_dependencies @@ -92,6 +93,7 @@ FROM server_dependencies AS migrate # Migration stage only needs schema and migrations - much lighter than full backend COPY autogpt_platform/backend/schema.prisma /app/autogpt_platform/backend/ +COPY autogpt_platform/backend/backend/data/partial_types.py /app/autogpt_platform/backend/backend/data/partial_types.py COPY autogpt_platform/backend/migrations /app/autogpt_platform/backend/migrations FROM server_dependencies AS server diff --git a/autogpt_platform/backend/TESTING.md b/autogpt_platform/backend/TESTING.md index 39fe4611b4..a3a5db68ef 100644 --- a/autogpt_platform/backend/TESTING.md +++ b/autogpt_platform/backend/TESTING.md @@ -108,7 +108,7 @@ import fastapi.testclient import pytest from pytest_snapshot.plugin import Snapshot -from backend.server.v2.myroute import router +from backend.api.features.myroute import router app = fastapi.FastAPI() app.include_router(router) @@ -149,7 +149,7 @@ These provide the easiest way to set up authentication mocking in test modules: import fastapi import fastapi.testclient import pytest -from backend.server.v2.myroute import router +from backend.api.features.myroute import router app = fastapi.FastAPI() app.include_router(router) diff --git 
a/autogpt_platform/backend/agents/StoreAgent_rows.csv b/autogpt_platform/backend/agents/StoreAgent_rows.csv new file mode 100644 index 0000000000..44a5e052fc --- /dev/null +++ b/autogpt_platform/backend/agents/StoreAgent_rows.csv @@ -0,0 +1,242 @@ +listing_id,storeListingVersionId,slug,agent_name,agent_video,agent_image,featured,sub_heading,description,categories,useForOnboarding,is_available +6e60a900-9d7d-490e-9af2-a194827ed632,d85882b8-633f-44ce-a315-c20a8c123d19,flux-ai-image-generator,Flux AI Image Generator,,"[""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/ca154dd1-140e-454c-91bd-2d8a00de3f08.jpg"",""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/577d995d-bc38-40a9-a23f-1f30f5774bdb.jpg"",""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/415db1b7-115c-43ab-bd6c-4e9f7ef95be1.jpg""]",false,Transform ideas into breathtaking images,"Transform ideas into breathtaking images with this AI-powered Image Generator. Using cutting-edge Flux AI technology, the tool crafts highly detailed, photorealistic visuals from simple text prompts. Perfect for artists, marketers, and content creators, this generator produces unique images tailored to user specifications. From fantastical scenes to lifelike portraits, users can unleash creativity with professional-quality results in seconds. Easy to use and endlessly versatile, bring imagination to life with the AI Image Generator today!","[""creative""]",false,true +f11fc6e9-6166-4676-ac5d-f07127b270c1,c775f60d-b99f-418b-8fe0-53172258c3ce,youtube-transcription-scraper,YouTube Transcription Scraper,https://youtu.be/H8S3pU68lGE,"[""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/65bce54b-0124-4b0d-9e3e-f9b89d0dc99e.jpg""]",false,Fetch the transcriptions from the most popular YouTube videos in your chosen topic,"Effortlessly gather transcriptions from multiple YouTube videos with this agent. It scrapes and compiles video transcripts into a clean, organized list, making it easy to extract insights, quotes, or content from various sources in one go. Ideal for researchers, content creators, and marketers looking to quickly analyze or repurpose video content.","[""writing""]",false,true +17908889-b599-4010-8e4f-bed19b8f3446,6e16e65a-ad34-4108-b4fd-4a23fced5ea2,business-ownerceo-finder,Decision Maker Lead Finder,,"[""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/1020d94e-b6a2-4fa7-bbdf-2c218b0de563.jpg""]",false,Contact CEOs today,"Find the key decision-makers you need, fast. + +This agent identifies business owners or CEOs of local companies in any area you choose. 
Simply enter what kind of businesses you’re looking for and where, and it will: + +* Search the area and gather public information +* Return names, roles, and contact details when available +* Provide smart Google search suggestions if details aren’t found + +Perfect for: + +* B2B sales teams seeking verified leads +* Recruiters sourcing local talent +* Researchers looking to connect with business leaders + +Save hours of manual searching and get straight to the people who matter most.","[""business""]",true,true +72beca1d-45ea-4403-a7ce-e2af168ee428,415b7352-0dc6-4214-9d87-0ad3751b711d,smart-meeting-brief,Smart Meeting Prep,https://youtu.be/9ydZR2hkxaY,"[""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/2f116ce1-63ae-4d39-a5cd-f514defc2b97.png"",""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/0a71a60a-2263-4f12-9836-9c76ab49f155.png"",""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/95327695-9184-403c-907a-a9d3bdafa6a5.png"",""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/2bc77788-790b-47d4-8a61-ce97b695e9f5.png""]",true,Business meeting briefings delivered daily,"Never walk into a meeting unprepared again. Every day at 4 pm, the Smart Meeting Prep Agent scans your calendar for tomorrow's external meetings. It reviews your past email exchanges, researches each participant's background and role, and compiles the insights into a concise briefing, so you can close your workday ready for tomorrow's calls. + +How It Works +1. At 4 pm, the agent scans your calendar and identifies external meetings scheduled for the next day. +2. It reviews recent email threads with each participant to surface key relationship history and communication context. +3. It conducts online research to gather publicly available information on roles, company backgrounds, and relevant professional data. +4. It produces a unified briefing for each participant, including past exchange highlights, profile notes, and strategic conversation points.","[""personal""]",true,true +9fa5697a-617b-4fae-aea0-7dbbed279976,b8ceb480-a7a2-4c90-8513-181a49f7071f,automated-support-ai,Automated Support Agent,https://youtu.be/nBMfu_5sgDA,"[""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/ed56febc-2205-4179-9e7e-505d8500b66c.png""]",true,Automate up to 80 percent of inbound support emails,"Overview: +Support teams spend countless hours on basic tickets. This agent automates repetitive customer support tasks. It reads incoming requests, researches your knowledge base, and responds automatically when confident. When unsure, it escalates to a human for final resolution. + +How it Works: +New support emails are routed to the agent. +The agent checks internal documentation for answers. +It measures confidence in the answer found and either replies directly or escalates to a human. 
+ +Business Value: +Automating the easy 80 percent of support tickets allows your team to focus on high-value, complex customer issues, improving efficiency and response times.","[""business""]",false,true +2bdac92b-a12c-4131-bb46-0e3b89f61413,31daf49d-31d3-476b-aa4c-099abc59b458,unspirational-poster-maker,Unspirational Poster Maker,,"[""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/6a490dac-27e5-405f-a4c4-8d1c55b85060.jpg"",""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/d343fbb5-478c-4e38-94df-4337293b61f1.jpg""]",false,Because adulting is hard,"This witty AI agent generates hilariously relatable ""motivational"" posters that tackle the everyday struggles of procrastination, overthinking, and workplace chaos with a blend of absurdity and sarcasm. From goldfish facing impossible tasks to cats in existential crises, The Unspirational Poster Maker designs tongue-in-cheek graphics and captions that mock productivity clichés and embrace our collective struggles to ""get it together."" Perfect for adding a touch of humour to the workday, these posters remind us that sometimes, all we can do is laugh at the chaos.","[""creative""]",false,true +9adf005e-2854-4cc7-98cf-f7103b92a7b7,a03b0d8c-4751-43d6-a54e-c3b7856ba4e3,ai-shortform-video-generator-create-viral-ready-content,AI Video Generator,,"[""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/8d2670b9-fea5-4966-a597-0a4511bffdc3.png"",""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/aabe8aec-0110-4ce7-a259-4f86fe8fe07d.png""]",false,Create Viral-Ready Shorts Content in Seconds,"OVERVIEW +Transform any trending headline or broad topic into a polished, vertical short-form video in a single run. +The agent automates research, scriptwriting, metadata creation, and Revid.ai rendering, returning one ready-to-publish MP4 plus its title, script and hashtags. + +HOW IT WORKS +1. Input a topic or an exact news headline. +2. The agent fetches live search results and selects the most engaging related story. +3. Key facts are summarised into concise research notes. +4. Claude writes a 30–35 second script with visual cues, a three-second hook, tension loops, and a call-to-action. +5. GPT-4o generates an eye-catching title and one or two discoverability hashtags. +6. The script is sent to a state-of-the-art AI video generator to render a single 9:16 MP4 (default: 720 p, 30 fps, voice “Brian”, style “movingImage”, music “Bladerunner 2049”). + – All voice, style and resolution settings can be adjusted in the Builder before you press ""Run"". +7. Output delivered: Title, Script, Hashtags, Video URL. + +KEY USE CASES +- Broad-topic explainers (e.g. “Artificial Intelligence” or “Climate Tech”). +- Real-time newsjacking with a specific breaking headline. +- Product-launch spotlights and quick event recaps while interest is high. + +BUSINESS VALUE +- One-click speed: from idea to finished video in minutes. +- Consistent brand look: Revid presets keep voice, style and aspect ratio on spec. +- No-code workflow: marketers create social video without design or development queues. +- Cloud convenience: Auto-GPT Cloud users are pre-configured with all required keys. + Self-hosted users simply add OpenAI, Anthropic, Perplexity (OpenRouter/Jina) and Revid keys once. 
+ +IMPORTANT NOTES +- The agent outputs exactly one video per execution. Run it again for additional shorts. +- Video rendering time varies; AI-generated footage may take several minutes.","[""writing""]",false,true +864e48ef-fee5-42c1-b6a4-2ae139db9fc1,55d40473-0f31-4ada-9e40-d3a7139fcbd4,automated-blog-writer,Automated SEO Blog Writer,https://youtu.be/nKcDCbDVobs,"[""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/2dd5f95b-5b30-4bf8-a11b-bac776c5141a.jpg""]",true,"Automate research, writing, and publishing for high-ranking blog posts","Scale your blog with a fully automated content engine. The Automated SEO Blog Writer learns your brand voice, finds high-demand keywords, and creates SEO-optimized articles that attract organic traffic and boost visibility. + +How it works: + +1. Share your pitch, website, and values. +2. The agent studies your site and uncovers proven SEO opportunities. +3. It spends two hours researching and drafting each post. +4. You set the cadence—publishing runs on autopilot. + +Business value: Consistently publish research-backed, optimized posts that build domain authority, rankings, and thought leadership while you focus on what matters most. + +Use cases: +• Founders: Keep your blog active with no time drain. +• Agencies: Deliver scalable SEO content for clients. +• Strategists: Automate execution, focus on strategy. +• Marketers: Drive steady organic growth. +• Local businesses: Capture nearby search traffic.","[""writing""]",false,true +6046f42e-eb84-406f-bae0-8e052064a4fa,a548e507-09a7-4b30-909c-f63fcda10fff,lead-finder-local-businesses,Lead Finder,,"[""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/abd6605f-d5f8-426b-af36-052e8ba5044f.webp""]",false,Auto-Prospect Like a Pro,"Turbo-charge your local lead generation with the AutoGPT Marketplace’s top Google Maps prospecting agent. “Lead Finder: Local Businesses” delivers verified, ready-to-contact prospects in any niche and city—so you can focus on closing, not searching. + +**WHAT IT DOES** +• Searches Google Maps via the official API (no scraping) +• Prompts like “dentists in Chicago” or “coffee shops near me” +• Returns: Name, Website, Rating, Reviews, **Phone & Address** +• Exports instantly to your CRM, sheet, or outreach workflow + +**WHY YOU’LL LOVE IT** +✓ Hyper-targeted leads in minutes +✓ Unlimited searches & locations +✓ Zero CAPTCHAs or IP blocks +✓ Works on AutoGPT Cloud or self-hosted (with your API key) +✓ Cut prospecting time by 90% + +**PERFECT FOR** +— Marketers & PPC agencies +— SEO consultants & designers +— SaaS founders & sales teams + +Stop scrolling directories—start filling your pipeline. Start now and let AI prospect while you profit. + +→ Click *Add to Library* and own your market today.","[""business""]",true,true +f623c862-24e9-44fc-8ce8-d8282bb51ad2,eafa21d3-bf14-4f63-a97f-a5ee41df83b3,linkedin-post-generator,LinkedIn Post Generator,,"[""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/297f6a8e-81a8-43e2-b106-c7ad4a5662df.png"",""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/fceebdc1-aef6-4000-97fc-4ef587f56bda.png""]",false,Auto‑craft LinkedIn gold,"Create research‑driven, high‑impact LinkedIn posts in minutes. 
This agent searches YouTube for the best videos on your chosen topic, pulls their transcripts, and distils the most valuable insights into a polished post ready for your company page or personal feed. + +FEATURES +• Automated YouTube research – discovers and analyses top‑ranked videos so you don’t have to +• AI‑curated synthesis – combines multiple transcripts into one authoritative narrative +• Full creative control – adjust style, tone, objective, opinion, clarity, target word count and number of videos +• LinkedIn‑optimised output – hook, 2‑3 key points, CTA, strategic line breaks, 3‑5 hashtags, no markdown +• One‑click publish – returns a ready‑to‑post text block (≤1 300 characters) + +HOW IT WORKS +1. Enter a topic and your preferred writing parameters. +2. The agent builds a YouTube search, fetches the page, and extracts the top N video URLs. +3. It pulls each transcript, then feeds them—plus your settings—into Claude 3.5 Sonnet. +4. The model writes a concise, engaging post designed for maximum LinkedIn engagement. + +USE CASES +• Thought‑leadership updates backed by fresh video research +• Rapid industry summaries after major events, webinars, or conferences +• Consistent LinkedIn content for busy founders, marketers, and creators + +WHY YOU’LL LOVE IT +Save hours of manual research, avoid surface‑level hot‑takes, and publish posts that showcase real expertise—without the heavy lift.","[""writing""]",true,true +7d4120ad-b6b3-4419-8bdb-7dd7d350ef32,e7bb29a1-23c7-4fee-aa3b-5426174b8c52,youtube-to-linkedin-post-converter,YouTube to LinkedIn Post Converter,,"[""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/f084b326-a708-4396-be51-7ba59ad2ef32.png""]",false,Transform Your YouTube Videos into Engaging LinkedIn Posts with AI,"WHAT IT DOES: +This agent converts YouTube video content into a LinkedIn post by analyzing the video's transcript. It provides you with a tailored post that reflects the core ideas, key takeaways, and tone of the original video, optimizing it for engagement on LinkedIn. + +HOW IT WORKS: +- You provide the URL to the YouTube video (required) +- You can choose the structure for the LinkedIn post (e.g., Personal Achievement Story, Lesson Learned, Thought Leadership, etc.) +- You can also select the tone (e.g., Inspirational, Analytical, Conversational, etc.) +- The transcript of the video is analyzed by the GPT-4 model and the Claude 3.5 Sonnet model +- The models extract key insights, memorable quotes, and the main points from the video +- You’ll receive a LinkedIn post, formatted according to your chosen structure and tone, optimized for professional engagement + +INPUTS: +- Source YouTube Video – Provide the URL to the YouTube video +- Structure – Choose the post format (e.g., Personal Achievement Story, Thought Leadership, etc.) +- Content – Specify the main message or idea of the post (e.g., Hot Take, Key Takeaways, etc.) +- Tone – Select the tone for the post (e.g., Conversational, Inspirational, etc.) 
+ +OUTPUT: +- LinkedIn Post – A well-crafted, AI-generated LinkedIn post with a professional tone, based on the video content and your specified preferences + +Perfect for content creators, marketers, and professionals who want to repurpose YouTube videos for LinkedIn and boost their professional branding.","[""writing""]",false,true +c61d6a83-ea48-4df8-b447-3da2d9fe5814,00fdd42c-a14c-4d19-a567-65374ea0e87f,personalized-morning-coffee-newsletter,Personal Newsletter,,"[""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/f4b38e4c-8166-4caf-9411-96c9c4c82d4c.png""]",false,Start your day with personalized AI newsletters that deliver credibility and context for every interest or mood.,"This Personal Newsletter Agent provides a bespoke daily digest on your favorite topics and tone. Whether you prefer industry insights, lighthearted reads, or breaking news, this agent crafts your own unique newsletter to keep you informed and entertained. + + +How It Works +1. Enter your favorite topics, industries, or areas of interest. +2. Choose your tone—professional, casual, or humorous. +3. Set your preferred delivery cadence: daily or weekly. +4. The agent scans top sources and compiles 3–5 engaging stories, insights, and fun facts into a conversational newsletter. + +Skip the morning scroll and enjoy a thoughtfully curated newsletter designed just for you. Stay ahead of trends, spark creative ideas, and enjoy an effortless, informed start to your day. + + +Use Cases +• Executives: Get a daily digest of market updates and leadership insights. +• Marketers: Receive curated creative trends and campaign inspiration. +• Entrepreneurs: Stay updated on your industry without information overload.","[""research""]",true,true +e2e49cfc-4a39-4d62-a6b3-c095f6d025ff,fc2c9976-0962-4625-a27b-d316573a9e7f,email-address-finder,Email Scout - Contact Finder Assistant,,"[""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/da8a690a-7a8b-4c1d-b6f8-e2f840c0205d.jpg"",""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/6a2ac25c-1609-4881-8140-e6da2421afb3.jpg"",""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/26179263-fe06-45bd-b6a0-0754660a0a46.jpg""]",false,Find contact details from name and location using AI search,"Finding someone's professional email address can be time-consuming and frustrating. Manual searching across multiple websites, social profiles, and business directories often leads to dead ends or outdated information. + +Email Scout automates this process by intelligently searching across publicly available sources when you provide a person's name and location. Simply input basic information like ""Tim Cook, USA"" or ""Sarah Smith, London"" and let the AI assistant do the work of finding potential contact details. + +Key Features: +- Quick search from just name and location +- Scans multiple public sources +- Automated AI-powered search process +- Easy to use with simple inputs + +Perfect for recruiters, business development professionals, researchers, and anyone needing to establish professional contact. + +Note: This tool searches only publicly available information. Search results depend on what contact information people have made public. 
Some searches may not yield results if the information isn't publicly accessible.","[""""]",false,true +81bcc372-0922-4a36-bc35-f7b1e51d6939,e437cc95-e671-489d-b915-76561fba8c7f,ai-youtube-to-blog-converter,YouTube Video to SEO Blog Writer,,"[""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/239e5a41-2515-4e1c-96ef-31d0d37ecbeb.webp"",""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/c7d96966-786f-4be6-ad7d-3a51c84efc0e.png"",""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/0275a74c-e2c2-4e29-a6e4-3a616c3c35dd.png""]",false,One link. One click. One powerful blog post.,"Effortlessly transform your YouTube videos into high-quality, SEO-optimized blog posts. + +Your videos deserve a second life—in writing. +Make your content work twice as hard by repurposing it into engaging, searchable articles. + +Perfect for content creators, marketers, and bloggers, this tool analyzes video content and generates well-structured blog posts tailored to your tone, audience, and word count. Just paste a YouTube URL and let the AI handle the rest. + +FEATURES + +• CONTENT ANALYSIS + Extracts key points from the video while preserving your message and intent. + +• CUSTOMIZABLE OUTPUT + Select a tone that fits your audience: casual, professional, educational, or formal. + +• SEO OPTIMIZATION + Automatically creates engaging titles and structured subheadings for better search visibility. + +• USER-FRIENDLY + Repurpose your videos into written content to expand your reach and improve accessibility. + +Whether you're looking to grow your blog, boost SEO, or simply get more out of your content, the AI YouTube-to-Blog Converter makes it effortless. +","[""writing""]",true,true +5c3510d2-fc8b-4053-8e19-67f53c86eb1a,f2cc74bb-f43f-4395-9c35-ecb30b5b4fc9,ai-webpage-copy-improver,AI Webpage Copy Improver,,"[""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/d562d26f-5891-4b09-8859-fbb205972313.jpg""]",false,Boost Your Website's Search Engine Performance,"Elevate your web content with this powerful AI Webpage Copy Improver. Designed for marketers, SEO specialists, and web developers, this tool analyses and enhances website copy for maximum impact. Using advanced language models, it optimizes text for better clarity, SEO performance, and increased conversion rates. The AI examines your existing content, identifies areas for improvement, and generates refined copy that maintains your brand voice while boosting engagement. From homepage headlines to product descriptions, transform your web presence with AI-driven insights. Improve readability, incorporate targeted keywords, and craft compelling calls-to-action - all with the click of a button. Take your digital marketing to the next level with the AI Webpage Copy Improver.","[""marketing""]",true,true +94d03bd3-7d44-4d47-b60c-edb2f89508d6,b6f6f0d3-49f4-4e3b-8155-ffe9141b32c0,domain-name-finder,Domain Name Finder,,"[""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/28545e09-b2b8-4916-b4c6-67f982510a78.jpeg""]",false,Instantly generate brand-ready domain names that are actually available,"Overview: +Finding a domain name that fits your brand shouldn’t take hours of searching and failed checks. 
The Domain Name Finder Agent turns your pitch into hundreds of creative, brand-ready domain ideas—filtered by live availability so every result is actionable. + +How It Works +1. Input your product pitch, company name, or core keywords. +2. The agent analyzes brand tone, audience, and industry context. +3. It generates a list of unique, memorable domains that match your criteria. +4. All names are pre-filtered for real-time availability, so you can register immediately. + + +Business Value +Save hours of guesswork and eliminate dead ends. Accelerate brand launches, startup naming, and campaign creation with ready-to-claim domains. + + +Key Use Cases +• Startup Founders: Quickly find brand-ready domains for MVP launches or rebrands. +• Marketers: Test name options across campaigns with instant availability data. +• Entrepreneurs: Validate ideas faster with instant domain options.","[""business""]",false,true +7a831906-daab-426f-9d66-bcf98d869426,516d813b-d1bc-470f-add7-c63a4b2c2bad,ai-function,AI Function,,"[""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/620e8117-2ee1-4384-89e6-c2ef4ec3d9c9.webp"",""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/476259e2-5a79-4a7b-8e70-deeebfca70d7.png""]",false,Never Code Again,"AI FUNCTION MAGIC +Your AI‑powered assistant for turning plain‑English descriptions into working Python functions. + +HOW IT WORKS +1. Describe what the function should do. +2. Specify the inputs it needs. +3. Receive the generated Python code. + +FEATURES +- Effortless Function Generation: convert natural‑language specs into complete functions. +- Customizable Inputs: define the parameters that matter to you. +- Versatile Use Cases: simulate data, automate tasks, prototype ideas. +- Seamless Integration: add the generated function directly to your codebase. + +EXAMPLE +Request: “Create a function that generates 20 examples of fake people, each with a name, date of birth, job title, and age.” +Input parameter: number_of_people (default 20) +Result: a list of dictionaries such as +[ + { ""name"": ""Emma Martinez"", ""date_of_birth"": ""1992‑11‑03"", ""job_title"": ""Data Analyst"", ""age"": 32 }, + { ""name"": ""Liam O’Connor"", ""date_of_birth"": ""1985‑07‑19"", ""job_title"": ""Marketing Manager"", ""age"": 39 }, + …18 more entries… +]","[""development""]",false,true diff --git a/autogpt_platform/backend/agents/agent_00fdd42c-a14c-4d19-a567-65374ea0e87f.json b/autogpt_platform/backend/agents/agent_00fdd42c-a14c-4d19-a567-65374ea0e87f.json new file mode 100644 index 0000000000..75d4886813 --- /dev/null +++ b/autogpt_platform/backend/agents/agent_00fdd42c-a14c-4d19-a567-65374ea0e87f.json @@ -0,0 +1,3559 @@ +{ + "id": "6cf51c3a-25dd-4189-bd50-5ee05a5f7794", + "version": 58, + "is_active": true, + "name": "Personal Newsletter", + "description": "This Personal Newsletter Agent provides a bespoke daily digest on your favorite topics and tone. Whether you prefer industry insights, lighthearted reads, or breaking news, this agent crafts your own unique newsletter to keep you informed and entertained.\n\n\nHow It Works\n1. Enter your favorite topics, industries, or areas of interest.\n2. Choose your tone\u2014professional, casual, or humorous.\n3. Set your preferred delivery cadence: daily or weekly.\n4. 
The agent scans top sources and compiles 3\u20135 engaging stories, insights, and fun facts into a conversational newsletter.\n\nSkip the morning scroll and enjoy a thoughtfully curated newsletter designed just for you. Stay ahead of trends, spark creative ideas, and enjoy an effortless, informed start to your day.\n\n\nUse Cases\n\u2022 Executives: Get a daily digest of market updates and leadership insights.\n\u2022 Marketers: Receive curated creative trends and campaign inspiration.\n\u2022 Entrepreneurs: Stay updated on your industry without information overload.", + "instructions": "1. Enter your topic and email address.\n2. Connect the Gmail inbox you want to send from. (I just have mine send to myself.)\n3. Choose the time period you want the email to cover \u2014 for example, \u201c3 days.\u201d\n4. Set the agent\u2019s schedule to match that time period. If you want a newsletter every 3 days, set it to run every 3 days.\n\nOnce you\u2019ve got your first schedule set up, you can add more!\nFor example, I have three newsletter schedules running \u2014 each on a different topic \u2014 that land in my inbox on different days.", + "recommended_schedule_cron": null, + "nodes": [ + { + "id": "35f3a0cc-cc5e-44d0-ad1e-3e24f53975e7", + "block_id": "90a56ffb-7024-4b2b-ab50-e26c5e5ab8ba", + "input_default": { + "name": "Writing Style", + "value": " Engaging, witty, and informative", + "secret": false, + "advanced": false, + "description": "How would you like the newsletter to be written? ", + "placeholder_values": [] + }, + "metadata": { + "position": { + "x": -3529.511189031829, + "y": 2727.9041957996333 + } + }, + "input_links": [], + "output_links": [ + { + "id": "ef01e0bd-f301-4ac5-93d1-cd5f8ddf30cb", + "source_id": "35f3a0cc-cc5e-44d0-ad1e-3e24f53975e7", + "sink_id": "e4eecf3d-fbd0-4095-b844-38794e6b4e7b", + "source_name": "result", + "sink_name": "prompt_values_#_style", + "is_static": true + } + ], + "graph_id": "6cf51c3a-25dd-4189-bd50-5ee05a5f7794", + "graph_version": 58, + "webhook_id": null, + "webhook": null + }, + { + "id": "58df8a79-a41a-4c83-8adc-6bb517b4eb43", + "block_id": "0e50422c-6dee-4145-83d6-3a5a392f65de", + "input_default": { + "key": "stories" + }, + "metadata": { + "position": { + "x": 1926.8656810234534, + "y": -32.6452834422966 + } + }, + "input_links": [ + { + "id": "bdc99d05-b6f8-4b2f-8741-c4ab5d2c188e", + "source_id": "e94bcdb4-67a8-40ea-9bc1-e6672a4bd87a", + "sink_id": "58df8a79-a41a-4c83-8adc-6bb517b4eb43", + "source_name": "parsed_xml", + "sink_name": "input", + "is_static": false + } + ], + "output_links": [ + { + "id": "81c73012-3747-410c-b17b-fca5bd83bb1c", + "source_id": "58df8a79-a41a-4c83-8adc-6bb517b4eb43", + "sink_id": "7abc71cc-09e1-4113-be01-7b37921d6bed", + "source_name": "output", + "sink_name": "items", + "is_static": false + }, + { + "id": "ffc2ac93-0af5-4b20-a4ab-8bd864294669", + "source_id": "58df8a79-a41a-4c83-8adc-6bb517b4eb43", + "sink_id": "37a4ebc1-d6e1-49f4-a0fb-7769fbdfe9e5", + "source_name": "output", + "sink_name": "collection", + "is_static": false + }, + { + "id": "ce32a0b2-807d-4f6a-8f2d-0f3faec134b7", + "source_id": "58df8a79-a41a-4c83-8adc-6bb517b4eb43", + "sink_id": "a50c18bc-508e-4b89-aa59-e908a7b7db4e", + "source_name": "missing", + "sink_name": "value", + "is_static": false + } + ], + "graph_id": "6cf51c3a-25dd-4189-bd50-5ee05a5f7794", + "graph_version": 58, + "webhook_id": null, + "webhook": null + }, + { + "id": "37a4ebc1-d6e1-49f4-a0fb-7769fbdfe9e5", + "block_id": "3c9c2f42-b0c3-435f-ba35-05f7a25c772a", + "input_default": 
{}, + "metadata": { + "position": { + "x": 2487.025692297438, + "y": 1882.671360992259 + } + }, + "input_links": [ + { + "id": "ffc2ac93-0af5-4b20-a4ab-8bd864294669", + "source_id": "58df8a79-a41a-4c83-8adc-6bb517b4eb43", + "sink_id": "37a4ebc1-d6e1-49f4-a0fb-7769fbdfe9e5", + "source_name": "output", + "sink_name": "collection", + "is_static": false + } + ], + "output_links": [ + { + "id": "6a255401-9741-4ef5-8a65-0a63b087051b", + "source_id": "37a4ebc1-d6e1-49f4-a0fb-7769fbdfe9e5", + "sink_id": "56e0a227-fa84-4429-bec6-1b41dfd7fe1d", + "source_name": "count", + "sink_name": "input", + "is_static": false + } + ], + "graph_id": "6cf51c3a-25dd-4189-bd50-5ee05a5f7794", + "graph_version": 58, + "webhook_id": null, + "webhook": null + }, + { + "id": "fb8d7f23-43eb-486e-8fbd-15de021c06d4", + "block_id": "aeb08fc1-2fc1-4141-bc8e-f758f183a822", + "input_default": { + "list": [], + "entries": [] + }, + "metadata": { + "position": { + "x": 4919.826544870652, + "y": -1.8124187368125888 + } + }, + "input_links": [ + { + "id": "f5763443-3bd7-4a6c-8688-9bc288dc1fed", + "source_id": "a95ad0cb-9644-4fa5-b2d3-4ae80a9f75c0", + "sink_id": "fb8d7f23-43eb-486e-8fbd-15de021c06d4", + "source_name": "updated_dictionary", + "sink_name": "entry", + "is_static": false + }, + { + "id": "f58e953d-9369-4c2a-9e3d-1158542d8867", + "source_id": "3f1f1b7a-73b4-4824-882b-7a9bd367866f", + "sink_id": "fb8d7f23-43eb-486e-8fbd-15de021c06d4", + "source_name": "list", + "sink_name": "list", + "is_static": false + }, + { + "id": "64fbb095-9b1b-478c-a581-f10f7ee721ec", + "source_id": "fb8d7f23-43eb-486e-8fbd-15de021c06d4", + "sink_id": "fb8d7f23-43eb-486e-8fbd-15de021c06d4", + "source_name": "updated_list", + "sink_name": "list", + "is_static": false + } + ], + "output_links": [ + { + "id": "ccfc0124-e44e-413a-8094-9ee0d1e53789", + "source_id": "fb8d7f23-43eb-486e-8fbd-15de021c06d4", + "sink_id": "a50c18bc-508e-4b89-aa59-e908a7b7db4e", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "d74579ef-5171-4178-972e-b5cf0cc90030", + "source_id": "fb8d7f23-43eb-486e-8fbd-15de021c06d4", + "sink_id": "d21a61b1-5a1e-4724-9075-9c0a9a49bab7", + "source_name": "updated_list", + "sink_name": "collection", + "is_static": false + }, + { + "id": "e704b15f-835e-48ca-8469-7f06b3217190", + "source_id": "fb8d7f23-43eb-486e-8fbd-15de021c06d4", + "sink_id": "0b91fb17-0c3d-48cc-931d-4f115b2cffc4", + "source_name": "updated_list", + "sink_name": "yes_value", + "is_static": false + }, + { + "id": "64fbb095-9b1b-478c-a581-f10f7ee721ec", + "source_id": "fb8d7f23-43eb-486e-8fbd-15de021c06d4", + "sink_id": "fb8d7f23-43eb-486e-8fbd-15de021c06d4", + "source_name": "updated_list", + "sink_name": "list", + "is_static": false + } + ], + "graph_id": "6cf51c3a-25dd-4189-bd50-5ee05a5f7794", + "graph_version": 58, + "webhook_id": null, + "webhook": null + }, + { + "id": "3f1f1b7a-73b4-4824-882b-7a9bd367866f", + "block_id": "a912d5c7-6e00-4542-b2a9-8034136930e4", + "input_default": { + "values": [ + "TEMP" + ] + }, + "metadata": { + "position": { + "x": 4296.522865137839, + "y": -1445.0813939163068 + } + }, + "input_links": [], + "output_links": [ + { + "id": "f58e953d-9369-4c2a-9e3d-1158542d8867", + "source_id": "3f1f1b7a-73b4-4824-882b-7a9bd367866f", + "sink_id": "fb8d7f23-43eb-486e-8fbd-15de021c06d4", + "source_name": "list", + "sink_name": "list", + "is_static": false + } + ], + "graph_id": "6cf51c3a-25dd-4189-bd50-5ee05a5f7794", + "graph_version": 58, + "webhook_id": null, + "webhook": null + }, + { + "id": 
"0b91fb17-0c3d-48cc-931d-4f115b2cffc4", + "block_id": "715696a0-e1da-45c8-b209-c2fa9c3b0be6", + "input_default": { + "operator": ">" + }, + "metadata": { + "position": { + "x": 6543.9519568197675, + "y": 14.039317215238228 + }, + "customized_name": "Accumulate Images" + }, + "input_links": [ + { + "id": "02215011-eaab-49bf-b461-ec8d3014a3b9", + "source_id": "56e0a227-fa84-4429-bec6-1b41dfd7fe1d", + "sink_id": "0b91fb17-0c3d-48cc-931d-4f115b2cffc4", + "source_name": "output", + "sink_name": "value2", + "is_static": true + }, + { + "id": "b02c106c-25b4-45ac-a0de-7ed1dfc5f13a", + "source_id": "d21a61b1-5a1e-4724-9075-9c0a9a49bab7", + "sink_id": "0b91fb17-0c3d-48cc-931d-4f115b2cffc4", + "source_name": "count", + "sink_name": "value1", + "is_static": false + }, + { + "id": "e704b15f-835e-48ca-8469-7f06b3217190", + "source_id": "fb8d7f23-43eb-486e-8fbd-15de021c06d4", + "sink_id": "0b91fb17-0c3d-48cc-931d-4f115b2cffc4", + "source_name": "updated_list", + "sink_name": "yes_value", + "is_static": false + } + ], + "output_links": [ + { + "id": "f9e65c32-1669-477b-a7f9-122150f14e4a", + "source_id": "0b91fb17-0c3d-48cc-931d-4f115b2cffc4", + "sink_id": "662b0bb1-106b-45c2-ab92-4ddc2640bef5", + "source_name": "yes_output", + "sink_name": "list", + "is_static": false + } + ], + "graph_id": "6cf51c3a-25dd-4189-bd50-5ee05a5f7794", + "graph_version": 58, + "webhook_id": null, + "webhook": null + }, + { + "id": "662b0bb1-106b-45c2-ab92-4ddc2640bef5", + "block_id": "d93c5a93-ac7e-41c1-ae5c-ef67e6e9b826", + "input_default": { + "list": [], + "value": "TEMP", + "return_item": false + }, + "metadata": { + "position": { + "x": 7164.6756579342355, + "y": 22.080807214334925 + }, + "customized_name": "Remove TEMP from list" + }, + "input_links": [ + { + "id": "f9e65c32-1669-477b-a7f9-122150f14e4a", + "source_id": "0b91fb17-0c3d-48cc-931d-4f115b2cffc4", + "sink_id": "662b0bb1-106b-45c2-ab92-4ddc2640bef5", + "source_name": "yes_output", + "sink_name": "list", + "is_static": false + } + ], + "output_links": [ + { + "id": "9a3132bf-98a4-48fc-b70b-636f7df11ded", + "source_id": "662b0bb1-106b-45c2-ab92-4ddc2640bef5", + "sink_id": "a50c18bc-508e-4b89-aa59-e908a7b7db4e", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "7c7811e1-d220-4e85-82fa-e2a10fa4291a", + "source_id": "662b0bb1-106b-45c2-ab92-4ddc2640bef5", + "sink_id": "131f496c-544c-48e5-ac63-0fa2b6554d1e", + "source_name": "updated_list", + "sink_name": "value", + "is_static": false + } + ], + "graph_id": "6cf51c3a-25dd-4189-bd50-5ee05a5f7794", + "graph_version": 58, + "webhook_id": null, + "webhook": null + }, + { + "id": "c46b730a-16bc-4c55-a619-d74974797d91", + "block_id": "b29c1b50-5d0e-4d9f-8f9d-1b0e6fcbf0b1", + "input_default": { + "offset": 0, + "trigger": "go", + "format_type": { + "discriminator": "iso8601", + "use_user_timezone": true + } + }, + "metadata": { + "position": { + "x": -3523.1264876803534, + "y": 2059.162687743904 + } + }, + "input_links": [], + "output_links": [ + { + "id": "50a2f911-b4b0-4818-a895-1c36b2f54de5", + "source_id": "c46b730a-16bc-4c55-a619-d74974797d91", + "sink_id": "e4eecf3d-fbd0-4095-b844-38794e6b4e7b", + "source_name": "date", + "sink_name": "prompt_values_#_today", + "is_static": false + }, + { + "id": "b6e872da-2675-4067-a50d-269ac89d7cf0", + "source_id": "c46b730a-16bc-4c55-a619-d74974797d91", + "sink_id": "68883c29-9a9e-4c97-99ee-945920bfeab7", + "source_name": "date", + "sink_name": "prompt_values_#_today", + "is_static": false + } + ], + "graph_id": 
"6cf51c3a-25dd-4189-bd50-5ee05a5f7794", + "graph_version": 58, + "webhook_id": null, + "webhook": null + }, + { + "id": "7abc71cc-09e1-4113-be01-7b37921d6bed", + "block_id": "f66a3543-28d3-4ab5-8945-9b336371e2ce", + "input_default": { + "items": [], + "items_object": {} + }, + "metadata": { + "position": { + "x": 2508.865949283632, + "y": -22.36800962879755 + } + }, + "input_links": [ + { + "id": "81c73012-3747-410c-b17b-fca5bd83bb1c", + "source_id": "58df8a79-a41a-4c83-8adc-6bb517b4eb43", + "sink_id": "7abc71cc-09e1-4113-be01-7b37921d6bed", + "source_name": "output", + "sink_name": "items", + "is_static": false + } + ], + "output_links": [ + { + "id": "e58d50fe-ee19-4442-988f-632640c7e850", + "source_id": "7abc71cc-09e1-4113-be01-7b37921d6bed", + "sink_id": "5c52e32d-f1c8-4400-a107-85bb4ebb218d", + "source_name": "item", + "sink_name": "input", + "is_static": false + } + ], + "graph_id": "6cf51c3a-25dd-4189-bd50-5ee05a5f7794", + "graph_version": 58, + "webhook_id": null, + "webhook": null + }, + { + "id": "e4eecf3d-fbd0-4095-b844-38794e6b4e7b", + "block_id": "1f292d4a-41a4-4977-9684-7c8d560b9f91", + "input_default": { + "model": "gpt-5-2025-08-07", + "retry": 3, + "prompt": "You are a skilled newsletter writer for a popular newsletter, which covers the topic: \"{{topic | safe}}\" \nToday's date is \"{{today | safe}}\"\nYour newsletter gets sent out on the following repeat rate: \"{{recency | safe}}\"\n\nCreate a newsletter using the provided research in the following writing style: {{style | safe}}\n\nFollow the Morning Coffee style: conversational tone, clever headlines, and a mix of serious insights with light-hearted comments. \n\nInclude 3-5 main stories and 2-3 shorter snippets or fun facts. Use emojis sparingly for emphasis. End with a thought-provoking question if appropriate.\nInclude dates for every story (only based on the research below, never assume dates, if not present just omit the date). You don't need to write a formal date, you can say things like \"On Monday\" or \"Yesterday\" when accurate and to do so.\n\nHere is the research for today's letter:\n\n{{research | safe}}\n\n\nDo not cover stories not present in the research or make any assumptions. Base your newsletter on this research.\n\nOutput this edition of the newsletter in the following format: \n\n\nThe title of this edition of the newsletter\n\n\n\nThe content of this edition of the newsletter to be sent out to it's readers (formatted in markdown, no additional commentary, will be auto-sent as-is\n", + "sys_prompt": "You are a skilled newsletter writer for Morning Coffee. \n\nCreate an engaging, witty, and informative newsletter using the provided summarized news articles. \n\nFollow the Morning Coffee style: conversational tone, clever headlines, and a mix of serious insights with light-hearted comments. Include 3-5 main stories and 2-3 shorter snippets or fun facts. Use emojis sparingly for emphasis. 
End with a thought-provoking question or call-to-action.", + "ollama_host": "localhost:11434", + "prompt_values": { + "recency": "1 day" + } + }, + "metadata": { + "position": { + "x": -1579.1113129259709, + "y": 13.308927171573714 + }, + "customized_name": "Write the Newsletter" + }, + "input_links": [ + { + "id": "628cb3f7-ec28-4d7e-a506-6dc14194452b", + "source_id": "6c6cff92-b759-41be-a5a1-cd3976e05b24", + "sink_id": "e4eecf3d-fbd0-4095-b844-38794e6b4e7b", + "source_name": "result", + "sink_name": "prompt_values_#_recency", + "is_static": true + }, + { + "id": "bf7f2e77-eabd-4019-bedc-39eb2928f996", + "source_id": "7b9333d9-68db-48e4-a066-3bfca9802a30", + "sink_id": "e4eecf3d-fbd0-4095-b844-38794e6b4e7b", + "source_name": "result", + "sink_name": "prompt_values_#_topic", + "is_static": true + }, + { + "id": "50a2f911-b4b0-4818-a895-1c36b2f54de5", + "source_id": "c46b730a-16bc-4c55-a619-d74974797d91", + "sink_id": "e4eecf3d-fbd0-4095-b844-38794e6b4e7b", + "source_name": "date", + "sink_name": "prompt_values_#_today", + "is_static": false + }, + { + "id": "ef01e0bd-f301-4ac5-93d1-cd5f8ddf30cb", + "source_id": "35f3a0cc-cc5e-44d0-ad1e-3e24f53975e7", + "sink_id": "e4eecf3d-fbd0-4095-b844-38794e6b4e7b", + "source_name": "result", + "sink_name": "prompt_values_#_style", + "is_static": true + }, + { + "id": "68f19410-34fb-4774-8c14-17c1ffb15a79", + "source_id": "c991c79e-a690-45ab-b9c2-8cb7837c8f8c", + "sink_id": "e4eecf3d-fbd0-4095-b844-38794e6b4e7b", + "source_name": "response", + "sink_name": "prompt_values_#_research", + "is_static": false + } + ], + "output_links": [ + { + "id": "ca49f19b-1a82-411e-b3ed-aaf0d8c8eec8", + "source_id": "e4eecf3d-fbd0-4095-b844-38794e6b4e7b", + "sink_id": "65e276b6-18c0-4176-84a0-011d1b20859c", + "source_name": "response", + "sink_name": "input_xml", + "is_static": false + }, + { + "id": "ff488394-ee6c-41d2-a525-fca11b32549a", + "source_id": "e4eecf3d-fbd0-4095-b844-38794e6b4e7b", + "sink_id": "a50c18bc-508e-4b89-aa59-e908a7b7db4e", + "source_name": "error", + "sink_name": "value", + "is_static": false + } + ], + "graph_id": "6cf51c3a-25dd-4189-bd50-5ee05a5f7794", + "graph_version": 58, + "webhook_id": null, + "webhook": null + }, + { + "id": "7b9333d9-68db-48e4-a066-3bfca9802a30", + "block_id": "c0a8e994-ebf1-4a9c-a4d8-89d09c86741b", + "input_default": { + "name": "Topics of Interest", + "value": "Space", + "secret": false, + "advanced": false, + "description": "Enter your topics of interest, separated by commas", + "placeholder_values": [] + }, + "metadata": { + "position": { + "x": -3484.2707482603346, + "y": 18.788160761561386 + } + }, + "input_links": [], + "output_links": [ + { + "id": "bf7f2e77-eabd-4019-bedc-39eb2928f996", + "source_id": "7b9333d9-68db-48e4-a066-3bfca9802a30", + "sink_id": "e4eecf3d-fbd0-4095-b844-38794e6b4e7b", + "source_name": "result", + "sink_name": "prompt_values_#_topic", + "is_static": true + }, + { + "id": "a817955f-ec4b-4ad4-9b77-e4e353dbdba4", + "source_id": "7b9333d9-68db-48e4-a066-3bfca9802a30", + "sink_id": "68883c29-9a9e-4c97-99ee-945920bfeab7", + "source_name": "result", + "sink_name": "prompt_values_#_topic", + "is_static": true + } + ], + "graph_id": "6cf51c3a-25dd-4189-bd50-5ee05a5f7794", + "graph_version": 58, + "webhook_id": null, + "webhook": null + }, + { + "id": "bb142109-ea01-4917-b32d-2848716bd154", + "block_id": "363ae599-353e-4804-937e-b2ee3cef3da4", + "input_default": { + "name": "Newsletter", + "secret": false, + "advanced": false, + "escape_html": false + }, + "metadata": { + "position": { + "x": 
9002.541850178177, + "y": -6.253190173390621 + } + }, + "input_links": [ + { + "id": "16a4267d-aaf5-4636-95f9-bdb2c5e83385", + "source_id": "fe3b8cd6-2c09-4d8b-98d3-70e2df92de41", + "sink_id": "bb142109-ea01-4917-b32d-2848716bd154", + "source_name": "response", + "sink_name": "value", + "is_static": false + } + ], + "output_links": [], + "graph_id": "6cf51c3a-25dd-4189-bd50-5ee05a5f7794", + "graph_version": 58, + "webhook_id": null, + "webhook": null + }, + { + "id": "68883c29-9a9e-4c97-99ee-945920bfeab7", + "block_id": "1f292d4a-41a4-4977-9684-7c8d560b9f91", + "input_default": { + "model": "gpt-5-2025-08-07", + "retry": 3, + "prompt": "Your task is to return a question that will best find me what I'm looking for. \n\nI am interested in the latest news on the following topic:\n\"{{topic | safe}}\"\n\nWithin the following recency range:\n{{recency | safe}}\n\nThe question you generated should be formatted such as: \"Which AI models launched in the past 24 hours?\" for example. This would be for an input like \"New AI Models\" for the topic and \"1 day\" for the recency.\nBe sure to clearly constrain the search range with the recency.\n\ntodays date is {{today | safe}}\n\nReturn only the question. No other text or commentary. ", + "ollama_host": "localhost:11434", + "prompt_values": { + "recency": "1 day" + } + }, + "metadata": { + "position": { + "x": -2729.6035222682317, + "y": 10.832130787151357 + }, + "customized_name": "Search Query Generator" + }, + "input_links": [ + { + "id": "a8915dbf-6caf-42f7-9cef-8401fcf291ff", + "source_id": "6c6cff92-b759-41be-a5a1-cd3976e05b24", + "sink_id": "68883c29-9a9e-4c97-99ee-945920bfeab7", + "source_name": "result", + "sink_name": "prompt_values_#_recency", + "is_static": true + }, + { + "id": "a817955f-ec4b-4ad4-9b77-e4e353dbdba4", + "source_id": "7b9333d9-68db-48e4-a066-3bfca9802a30", + "sink_id": "68883c29-9a9e-4c97-99ee-945920bfeab7", + "source_name": "result", + "sink_name": "prompt_values_#_topic", + "is_static": true + }, + { + "id": "b6e872da-2675-4067-a50d-269ac89d7cf0", + "source_id": "c46b730a-16bc-4c55-a619-d74974797d91", + "sink_id": "68883c29-9a9e-4c97-99ee-945920bfeab7", + "source_name": "date", + "sink_name": "prompt_values_#_today", + "is_static": false + } + ], + "output_links": [ + { + "id": "17c3b57f-ad85-40de-9a6c-388f9060c6b9", + "source_id": "68883c29-9a9e-4c97-99ee-945920bfeab7", + "sink_id": "a50c18bc-508e-4b89-aa59-e908a7b7db4e", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "65169c65-3ca7-4599-94b9-b3bac6248a49", + "source_id": "68883c29-9a9e-4c97-99ee-945920bfeab7", + "sink_id": "c991c79e-a690-45ab-b9c2-8cb7837c8f8c", + "source_name": "response", + "sink_name": "prompt", + "is_static": false + } + ], + "graph_id": "6cf51c3a-25dd-4189-bd50-5ee05a5f7794", + "graph_version": 58, + "webhook_id": null, + "webhook": null + }, + { + "id": "c991c79e-a690-45ab-b9c2-8cb7837c8f8c", + "block_id": "1f292d4a-41a4-4977-9684-7c8d560b9f91", + "input_default": { + "model": "perplexity/sonar-deep-research", + "retry": 3, + "sys_prompt": "Only ever return results which fall within the specified date range. For reference, \"today\" is {{today | safe}}.\n\nAlways include an explicit date for every story mentioned. 
\n\nIf a story is outside the timeframe then do not report on it.", + "ollama_host": "localhost:11434", + "prompt_values": {} + }, + "metadata": { + "position": { + "x": -2143.8092001237387, + "y": 10.500985789126844 + }, + "customized_name": "Research the Topic" + }, + "input_links": [ + { + "id": "65169c65-3ca7-4599-94b9-b3bac6248a49", + "source_id": "68883c29-9a9e-4c97-99ee-945920bfeab7", + "sink_id": "c991c79e-a690-45ab-b9c2-8cb7837c8f8c", + "source_name": "response", + "sink_name": "prompt", + "is_static": false + } + ], + "output_links": [ + { + "id": "68f19410-34fb-4774-8c14-17c1ffb15a79", + "source_id": "c991c79e-a690-45ab-b9c2-8cb7837c8f8c", + "sink_id": "e4eecf3d-fbd0-4095-b844-38794e6b4e7b", + "source_name": "response", + "sink_name": "prompt_values_#_research", + "is_static": false + }, + { + "id": "134406c2-0a8c-4df0-a413-f2e815d21057", + "source_id": "c991c79e-a690-45ab-b9c2-8cb7837c8f8c", + "sink_id": "a50c18bc-508e-4b89-aa59-e908a7b7db4e", + "source_name": "error", + "sink_name": "value", + "is_static": false + } + ], + "graph_id": "6cf51c3a-25dd-4189-bd50-5ee05a5f7794", + "graph_version": 58, + "webhook_id": null, + "webhook": null + }, + { + "id": "1da0b365-a483-4677-ac3c-ea99270ad92e", + "block_id": "e189baac-8c20-45a1-94a7-55177ea42565", + "input_default": { + "inputs": { + "Report Design Brief": "\n\n## \u2733\ufe0f Design Brief \u2014 \u201cHuman-Crafted Simplicity\u201d\n\n### \ud83c\udfaf **Goal**\n\nCreate an HTML email that feels **designed by hand**, not generated by a template.\nThe visual tone should be **calm, confident, and editorial**, with natural rhythm and whitespace.\nThe layout should read easily on desktop and mobile, maintaining a **friendly, premium newsletter** vibe.\n\n---\n\n### \ud83e\uddf1 **Layout Structure**\n\n* **Width:** 600px fixed, centered (`align=\"center\"`)\n* **Padding:** 24\u201332px inner margin on all sides\n* **Grid:** Single-column layout (no sidebars or multiple columns)\n* **Flow:**\n\n 1. Header area (title or logo text)\n 2. Intro paragraph (optional)\n 3. Content sections stacked vertically\n 4. Optional footer / signature\n\nEach section should feel like a \u201cblock\u201d of thought \u2014 separated by whitespace, not borders.\n\n---\n\n### \ud83c\udfa8 **Color Palette**\n\n* **Background:** Off-white / warm neutral (`#fffaf2`, `#fffefc`, or `#fdfbf7`)\n* **Text:** Soft black (`#1a1a1a`)\n* **Secondary Text:** Muted gray (`#666`)\n* **Accent Color:** One strong, warm tone (e.g. 
amber `#f5b200` or coral `#ff6b4a`)\n* **Highlight Backgrounds:** Light tint of accent (`#fff4e0` or `#fff2e2`)\n* **Links:** Accent color or slightly darker version\n\nOverall, aim for **soft contrast** \u2014 no harsh blacks or bright whites.\n\n---\n\n### \ud83d\uddda **Typography**\n\n* **Font Family:** `Helvetica, Arial, sans-serif` (system fonts preferred for email)\n* **Base Size:** 16px\n* **Line Height:** 1.6\u20131.8\n* **Hierarchy:**\n\n * **Main Title:** 28\u201332px, bold, tight spacing, strong top margin\n * **Section Titles:** 20\u201322px, semibold, margin-bottom 12px\n * **Body Text:** 16px, normal weight\n * **Meta Text / Footnotes:** 14px, gray (#777)\n* **Letter Spacing:** Slight positive spacing (0.1\u20130.2px) for a polished feel\n\nAvoid underlines; instead, use color and weight for emphasis.\n\n---\n\n### \u26aa **Spacing and Rhythm**\n\n* **Vertical rhythm** is everything:\n\n * 48px between major sections\n * 24px between heading and paragraph\n * 8\u201312px between lines or inline elements\n* Avoid even repetition \u2014 vary spacing subtly between elements to feel more human.\n* Never use visible borders to divide; use padding and whitespace.\n\n---\n\n### \ud83e\udde9 **Visual Elements**\n\n* Use **colored spans** or **highlight backgrounds** to create visual variety:\n\n ```html\n highlight text\n ```\n* Soft rounded corners on any background block (6\u20138px radius).\n* Occasional use of horizontal rules (`
`) allowed only if light gray (`#eee`) and with generous vertical spacing.\n* Features images clearly grouped with their relevant text, small caption text below the image where available.\n\n---\n\n### \ud83e\udded **Sections**\n\nEach section should feel distinct through typography and whitespace, not hard dividers.\n\nExample section pattern:\n\n* Title \u2192 body \u2192 small callout or quote\n* Use **a single accent element** per section (highlighted text, tinted background block, or colored link).\n* No boxes around \u201cWhy it matters\u201d\u2013style callouts; instead, lightly tinted background or italicized text block.\n\n---\n\n### \ud83d\udcf1 **Responsiveness**\n\n* Ensure readability on mobile:\n\n * Maintain 16px font minimum\n * Auto-scale images and block widths\n * Increase line spacing slightly on mobile (`line-height: 1.8`)\n\n---\n\n### \ud83e\uddfe **Footer**\n\n* Simple and unintrusive.\n* Muted gray text, small font size (14px).\n* Optional subtle top border (`#eee`) or generous top margin (40\u201348px).\n\n---\n\n### \ud83d\udca1 **Design Keywords**\n\n> Warm \u2022 Minimal \u2022 Human \u2022 Editorial \u2022 Spacious \u2022 Typographically-driven \u2022 Confident \u2022 Soft" + }, + "user_id": "b3e41ea4-2f4c-4964-927c-fe682d857bad", + "graph_id": "b29bb8a9-d0ab-4858-adfa-af6baa7a81d9", + "input_schema": { + "type": "object", + "required": [ + "Report Text", + "Recipient Email", + "Email Subject" + ], + "properties": { + "Report Text": { + "anyOf": [ + { + "type": "string", + "format": "long-text" + }, + { + "type": "null" + } + ], + "title": "Report Text", + "secret": false, + "advanced": false + }, + "Email Subject": { + "anyOf": [ + { + "type": "string", + "format": "short-text" + }, + { + "type": "null" + } + ], + "title": "Email Subject", + "secret": false, + "advanced": false, + "description": "The subject line for the email. i.e. \"Your Report\"" + }, + "Recipient Email": { + "anyOf": [ + { + "type": "string", + "format": "short-text" + }, + { + "type": "null" + } + ], + "title": "Recipient Email", + "secret": false, + "advanced": false, + "description": "The Email Address to send the report to. i.e. 
your@email.com" + }, + "Report Design Brief": { + "anyOf": [ + { + "type": "string", + "format": "long-text" + }, + { + "type": "null" + } + ], + "title": "Report Design Brief", + "secret": false, + "default": "default", + "advanced": true, + "description": "(optional) Briefly describe how you would like your report to look.\n\nFor example \"Style this like a Stripe documentation page\" or \"Make it look like a high-end medical journal\"" + } + } + }, + "graph_version": 6, + "output_schema": { + "type": "object", + "required": [ + "Error", + "Raw HTML", + "Email Status" + ], + "properties": { + "Error": { + "title": "Error", + "secret": false, + "advanced": false + }, + "Raw HTML": { + "title": "Raw HTML", + "secret": false, + "advanced": false + }, + "Email Status": { + "title": "Email Status", + "secret": false, + "advanced": false + } + } + } + }, + "metadata": { + "position": { + "x": 9278.160782058398, + "y": 2084.7289729862423 + }, + "customized_name": "Send Pretty Email" + }, + "input_links": [ + { + "id": "8beb2cbe-4f77-431f-b376-33cfde908629", + "source_id": "fe3b8cd6-2c09-4d8b-98d3-70e2df92de41", + "sink_id": "1da0b365-a483-4677-ac3c-ea99270ad92e", + "source_name": "response", + "sink_name": "Report Text", + "is_static": false + }, + { + "id": "ab679265-5b24-4a02-af4d-cf12619ec793", + "source_id": "2b478887-fbf1-4128-8de8-26028cb7c8e3", + "sink_id": "1da0b365-a483-4677-ac3c-ea99270ad92e", + "source_name": "result", + "sink_name": "Recipient Email", + "is_static": true + }, + { + "id": "7b5b9a0d-df5f-4af1-9b8f-38441f299037", + "source_id": "8380fb24-95dc-439e-bd42-3454cfffbba7", + "sink_id": "1da0b365-a483-4677-ac3c-ea99270ad92e", + "source_name": "output", + "sink_name": "Email Subject", + "is_static": false + } + ], + "output_links": [], + "graph_id": "6cf51c3a-25dd-4189-bd50-5ee05a5f7794", + "graph_version": 58, + "webhook_id": null, + "webhook": null + }, + { + "id": "65e276b6-18c0-4176-84a0-011d1b20859c", + "block_id": "286380af-9529-4b55-8be0-1d7c854abdb5", + "input_default": {}, + "metadata": { + "position": { + "x": -737.8058814655014, + "y": 32.186899799402624 + } + }, + "input_links": [ + { + "id": "ca49f19b-1a82-411e-b3ed-aaf0d8c8eec8", + "source_id": "e4eecf3d-fbd0-4095-b844-38794e6b4e7b", + "sink_id": "65e276b6-18c0-4176-84a0-011d1b20859c", + "source_name": "response", + "sink_name": "input_xml", + "is_static": false + } + ], + "output_links": [ + { + "id": "97d2344e-91e5-4358-8b43-42d278a06996", + "source_id": "65e276b6-18c0-4176-84a0-011d1b20859c", + "sink_id": "a50c18bc-508e-4b89-aa59-e908a7b7db4e", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "f56853c2-3ccc-467d-9eda-767ee4d7352f", + "source_id": "65e276b6-18c0-4176-84a0-011d1b20859c", + "sink_id": "b3cd6910-8252-4ff2-9983-335279774213", + "source_name": "parsed_xml", + "sink_name": "input", + "is_static": false + }, + { + "id": "e221a5a2-1b51-4c23-8019-e9bee5dc4ea0", + "source_id": "65e276b6-18c0-4176-84a0-011d1b20859c", + "sink_id": "8380fb24-95dc-439e-bd42-3454cfffbba7", + "source_name": "parsed_xml", + "sink_name": "input", + "is_static": false + } + ], + "graph_id": "6cf51c3a-25dd-4189-bd50-5ee05a5f7794", + "graph_version": 58, + "webhook_id": null, + "webhook": null + }, + { + "id": "b3cd6910-8252-4ff2-9983-335279774213", + "block_id": "0e50422c-6dee-4145-83d6-3a5a392f65de", + "input_default": { + "key": "body" + }, + "metadata": { + "position": { + "x": -146.05704448321512, + "y": 39.89591730848741 + } + }, + "input_links": [ + { + "id": 
"f56853c2-3ccc-467d-9eda-767ee4d7352f", + "source_id": "65e276b6-18c0-4176-84a0-011d1b20859c", + "sink_id": "b3cd6910-8252-4ff2-9983-335279774213", + "source_name": "parsed_xml", + "sink_name": "input", + "is_static": false + } + ], + "output_links": [ + { + "id": "635f3112-295b-436e-8b07-6f67ef7cf0e2", + "source_id": "b3cd6910-8252-4ff2-9983-335279774213", + "sink_id": "0a54faab-7d56-4826-8408-f1b00042fccd", + "source_name": "output", + "sink_name": "prompt_values_#_newsletter", + "is_static": false + }, + { + "id": "b82e12d4-aef0-4675-a1f5-6ae375e41710", + "source_id": "b3cd6910-8252-4ff2-9983-335279774213", + "sink_id": "fe3b8cd6-2c09-4d8b-98d3-70e2df92de41", + "source_name": "output", + "sink_name": "prompt_values_#_newsletter", + "is_static": false + }, + { + "id": "efb10adb-9a77-4430-bc51-9a8adaf80ef2", + "source_id": "b3cd6910-8252-4ff2-9983-335279774213", + "sink_id": "a50c18bc-508e-4b89-aa59-e908a7b7db4e", + "source_name": "missing", + "sink_name": "value", + "is_static": false + } + ], + "graph_id": "6cf51c3a-25dd-4189-bd50-5ee05a5f7794", + "graph_version": 58, + "webhook_id": null, + "webhook": null + }, + { + "id": "8380fb24-95dc-439e-bd42-3454cfffbba7", + "block_id": "0e50422c-6dee-4145-83d6-3a5a392f65de", + "input_default": { + "key": "title" + }, + "metadata": { + "position": { + "x": -139.42904518013256, + "y": 2235.26145151658 + } + }, + "input_links": [ + { + "id": "e221a5a2-1b51-4c23-8019-e9bee5dc4ea0", + "source_id": "65e276b6-18c0-4176-84a0-011d1b20859c", + "sink_id": "8380fb24-95dc-439e-bd42-3454cfffbba7", + "source_name": "parsed_xml", + "sink_name": "input", + "is_static": false + } + ], + "output_links": [ + { + "id": "7b5b9a0d-df5f-4af1-9b8f-38441f299037", + "source_id": "8380fb24-95dc-439e-bd42-3454cfffbba7", + "sink_id": "1da0b365-a483-4677-ac3c-ea99270ad92e", + "source_name": "output", + "sink_name": "Email Subject", + "is_static": false + } + ], + "graph_id": "6cf51c3a-25dd-4189-bd50-5ee05a5f7794", + "graph_version": 58, + "webhook_id": null, + "webhook": null + }, + { + "id": "2b478887-fbf1-4128-8de8-26028cb7c8e3", + "block_id": "7fcd3bcb-8e1b-4e69-903d-32d3d4a92158", + "input_default": { + "name": "Your Email Address", + "secret": false, + "advanced": false, + "description": "Enter the email address at which you would like to receive the newsletter.", + "placeholder_values": [] + }, + "metadata": { + "position": { + "x": -3498.8900990289535, + "y": 563.204100068285 + } + }, + "input_links": [], + "output_links": [ + { + "id": "ab679265-5b24-4a02-af4d-cf12619ec793", + "source_id": "2b478887-fbf1-4128-8de8-26028cb7c8e3", + "sink_id": "1da0b365-a483-4677-ac3c-ea99270ad92e", + "source_name": "result", + "sink_name": "Recipient Email", + "is_static": true + } + ], + "graph_id": "6cf51c3a-25dd-4189-bd50-5ee05a5f7794", + "graph_version": 58, + "webhook_id": null, + "webhook": null + }, + { + "id": "6c6cff92-b759-41be-a5a1-cd3976e05b24", + "block_id": "7fcd3bcb-8e1b-4e69-903d-32d3d4a92158", + "input_default": { + "name": "What time range would you like each newsletter edition to cover?", + "secret": false, + "advanced": false, + "description": "For example, you could say one day to only receive news stories that happened in the past 24 hours before at the time of sending the newsletter, or you could say one week to receive news from that week", + "placeholder_values": [] + }, + "metadata": { + "position": { + "x": -3504.8879543920716, + "y": 1362.1657779029272 + } + }, + "input_links": [], + "output_links": [ + { + "id": "628cb3f7-ec28-4d7e-a506-6dc14194452b", + 
"source_id": "6c6cff92-b759-41be-a5a1-cd3976e05b24", + "sink_id": "e4eecf3d-fbd0-4095-b844-38794e6b4e7b", + "source_name": "result", + "sink_name": "prompt_values_#_recency", + "is_static": true + }, + { + "id": "a8915dbf-6caf-42f7-9cef-8401fcf291ff", + "source_id": "6c6cff92-b759-41be-a5a1-cd3976e05b24", + "sink_id": "68883c29-9a9e-4c97-99ee-945920bfeab7", + "source_name": "result", + "sink_name": "prompt_values_#_recency", + "is_static": true + } + ], + "graph_id": "6cf51c3a-25dd-4189-bd50-5ee05a5f7794", + "graph_version": 58, + "webhook_id": null, + "webhook": null + }, + { + "id": "0a54faab-7d56-4826-8408-f1b00042fccd", + "block_id": "1f292d4a-41a4-4977-9684-7c8d560b9f91", + "input_default": { + "model": "gpt-5-2025-08-07", + "retry": 3, + "prompt": "Extract and list the individual stories mentioned in the following newsletter which you believe would benefit from an accompanying picture.\nWhen making this decision, consider which stories are likely to have the best pictures available online. Do not just select every story. Consider how many images would be appropriate for a newsletter.\nFor each story you choose, output a clear, descriptive title that would make sense as a search query \u2014 not necessarily the original headline. \n\nOutput the results in XML format as follows:\n\n\n Man lands on the moon\n NASA announces next Mars mission\n\n\nHere is the newsletter content:\n\n{{newsletter | safe}}\n", + "ollama_host": "localhost:11434", + "prompt_values": {} + }, + "metadata": { + "position": { + "x": 705.45366413491, + "y": -31.198002095433587 + }, + "customized_name": "Pick Stories that need Images" + }, + "input_links": [ + { + "id": "635f3112-295b-436e-8b07-6f67ef7cf0e2", + "source_id": "b3cd6910-8252-4ff2-9983-335279774213", + "sink_id": "0a54faab-7d56-4826-8408-f1b00042fccd", + "source_name": "output", + "sink_name": "prompt_values_#_newsletter", + "is_static": false + } + ], + "output_links": [ + { + "id": "115037c7-c122-4895-8598-f5b238d2eb11", + "source_id": "0a54faab-7d56-4826-8408-f1b00042fccd", + "sink_id": "a50c18bc-508e-4b89-aa59-e908a7b7db4e", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "730b0a8f-91e7-444c-a9e3-2b93df7ab6b1", + "source_id": "0a54faab-7d56-4826-8408-f1b00042fccd", + "sink_id": "e94bcdb4-67a8-40ea-9bc1-e6672a4bd87a", + "source_name": "response", + "sink_name": "input_xml", + "is_static": false + } + ], + "graph_id": "6cf51c3a-25dd-4189-bd50-5ee05a5f7794", + "graph_version": 58, + "webhook_id": null, + "webhook": null + }, + { + "id": "e94bcdb4-67a8-40ea-9bc1-e6672a4bd87a", + "block_id": "286380af-9529-4b55-8be0-1d7c854abdb5", + "input_default": {}, + "metadata": { + "position": { + "x": 1325.4881632458146, + "y": -30.03409103591575 + } + }, + "input_links": [ + { + "id": "730b0a8f-91e7-444c-a9e3-2b93df7ab6b1", + "source_id": "0a54faab-7d56-4826-8408-f1b00042fccd", + "sink_id": "e94bcdb4-67a8-40ea-9bc1-e6672a4bd87a", + "source_name": "response", + "sink_name": "input_xml", + "is_static": false + } + ], + "output_links": [ + { + "id": "bdc99d05-b6f8-4b2f-8741-c4ab5d2c188e", + "source_id": "e94bcdb4-67a8-40ea-9bc1-e6672a4bd87a", + "sink_id": "58df8a79-a41a-4c83-8adc-6bb517b4eb43", + "source_name": "parsed_xml", + "sink_name": "input", + "is_static": false + }, + { + "id": "7d4464f3-87bf-4d58-b16a-232a9123f0c3", + "source_id": "e94bcdb4-67a8-40ea-9bc1-e6672a4bd87a", + "sink_id": "a50c18bc-508e-4b89-aa59-e908a7b7db4e", + "source_name": "error", + "sink_name": "value", + "is_static": false + } + ], + "graph_id": 
"6cf51c3a-25dd-4189-bd50-5ee05a5f7794", + "graph_version": 58, + "webhook_id": null, + "webhook": null + }, + { + "id": "fe3b8cd6-2c09-4d8b-98d3-70e2df92de41", + "block_id": "1f292d4a-41a4-4977-9684-7c8d560b9f91", + "input_default": { + "model": "gpt-5-2025-08-07", + "retry": 3, + "prompt": "You are given a newsletter. Each story includes an associated image URL. While not every image needs to be used (as too many may overcrowd the layout), it might also make sense to include them all\u2014use your judgment.\n\nYour task is to reproduce the newsletter **verbatim** in markdown format, inserting the image URLs where they fit best. \n\nDo not alter or reword the newsletter text in any way\u2014this is absolutely critical. Simply return the unedited newsletter text with the image URLs appropriately placed. Include no extra commentary, explanation, or decoration\u2014only the final markdown output.\n\n\nHere is the newsletter and the URLs:\n\n\n{{newsletter | safe}}\n\n\n\n{{image_urls | safe}}\n\n\nNow, respond with the final newsletter text with no additional commentary, explanation, or decoration. ", + "ollama_host": "localhost:11434", + "prompt_values": {} + }, + "metadata": { + "position": { + "x": 8400.349484389619, + "y": -8.39926280284628 + }, + "customized_name": "Insert Images into Newsletter" + }, + "input_links": [ + { + "id": "fc9ae566-2a36-4ae9-ac2c-c5fdb7ae7a03", + "source_id": "131f496c-544c-48e5-ac63-0fa2b6554d1e", + "sink_id": "fe3b8cd6-2c09-4d8b-98d3-70e2df92de41", + "source_name": "value", + "sink_name": "prompt_values_#_image_urls", + "is_static": false + }, + { + "id": "b82e12d4-aef0-4675-a1f5-6ae375e41710", + "source_id": "b3cd6910-8252-4ff2-9983-335279774213", + "sink_id": "fe3b8cd6-2c09-4d8b-98d3-70e2df92de41", + "source_name": "output", + "sink_name": "prompt_values_#_newsletter", + "is_static": false + } + ], + "output_links": [ + { + "id": "8d8ffa53-6012-4d5a-934b-96a732f384f3", + "source_id": "fe3b8cd6-2c09-4d8b-98d3-70e2df92de41", + "sink_id": "a50c18bc-508e-4b89-aa59-e908a7b7db4e", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "16a4267d-aaf5-4636-95f9-bdb2c5e83385", + "source_id": "fe3b8cd6-2c09-4d8b-98d3-70e2df92de41", + "sink_id": "bb142109-ea01-4917-b32d-2848716bd154", + "source_name": "response", + "sink_name": "value", + "is_static": false + }, + { + "id": "8beb2cbe-4f77-431f-b376-33cfde908629", + "source_id": "fe3b8cd6-2c09-4d8b-98d3-70e2df92de41", + "sink_id": "1da0b365-a483-4677-ac3c-ea99270ad92e", + "source_name": "response", + "sink_name": "Report Text", + "is_static": false + } + ], + "graph_id": "6cf51c3a-25dd-4189-bd50-5ee05a5f7794", + "graph_version": 58, + "webhook_id": null, + "webhook": null + }, + { + "id": "a95ad0cb-9644-4fa5-b2d3-4ae80a9f75c0", + "block_id": "31d1064e-7446-4693-a7d4-65e5ca1180d1", + "input_default": { + "entries": {}, + "dictionary": {} + }, + "metadata": { + "position": { + "x": 4312.272039894743, + "y": 13.125548418915074 + } + }, + "input_links": [ + { + "id": "6ba524c6-3227-43e2-89a8-a2795b21f275", + "source_id": "5c52e32d-f1c8-4400-a107-85bb4ebb218d", + "sink_id": "a95ad0cb-9644-4fa5-b2d3-4ae80a9f75c0", + "source_name": "output", + "sink_name": "key", + "is_static": false + }, + { + "id": "9daa4f4e-2cf7-4ec6-b57d-cde111b1a596", + "source_id": "2d0abd00-ca63-4bf7-9a65-730e356eb52a", + "sink_id": "a95ad0cb-9644-4fa5-b2d3-4ae80a9f75c0", + "source_name": "Error", + "sink_name": "value", + "is_static": false + }, + { + "id": "0a0265e8-47be-4a1c-8da0-9e346f2f7787", + "source_id": 
"2d0abd00-ca63-4bf7-9a65-730e356eb52a", + "sink_id": "a95ad0cb-9644-4fa5-b2d3-4ae80a9f75c0", + "source_name": "URLs", + "sink_name": "value", + "is_static": false + } + ], + "output_links": [ + { + "id": "f5763443-3bd7-4a6c-8688-9bc288dc1fed", + "source_id": "a95ad0cb-9644-4fa5-b2d3-4ae80a9f75c0", + "sink_id": "fb8d7f23-43eb-486e-8fbd-15de021c06d4", + "source_name": "updated_dictionary", + "sink_name": "entry", + "is_static": false + }, + { + "id": "25e097a1-df65-43c9-bb20-cd73b4fb7cc3", + "source_id": "a95ad0cb-9644-4fa5-b2d3-4ae80a9f75c0", + "sink_id": "a50c18bc-508e-4b89-aa59-e908a7b7db4e", + "source_name": "error", + "sink_name": "value", + "is_static": false + } + ], + "graph_id": "6cf51c3a-25dd-4189-bd50-5ee05a5f7794", + "graph_version": 58, + "webhook_id": null, + "webhook": null + }, + { + "id": "131f496c-544c-48e5-ac63-0fa2b6554d1e", + "block_id": "95d1b990-ce13-4d88-9737-ba5c2070c97b", + "input_default": { + "type": "string" + }, + "metadata": { + "position": { + "x": 7756.490203488402, + "y": 27.235451847524203 + }, + "customized_name": "Convert to String" + }, + "input_links": [ + { + "id": "7c7811e1-d220-4e85-82fa-e2a10fa4291a", + "source_id": "662b0bb1-106b-45c2-ab92-4ddc2640bef5", + "sink_id": "131f496c-544c-48e5-ac63-0fa2b6554d1e", + "source_name": "updated_list", + "sink_name": "value", + "is_static": false + } + ], + "output_links": [ + { + "id": "fc9ae566-2a36-4ae9-ac2c-c5fdb7ae7a03", + "source_id": "131f496c-544c-48e5-ac63-0fa2b6554d1e", + "sink_id": "fe3b8cd6-2c09-4d8b-98d3-70e2df92de41", + "source_name": "value", + "sink_name": "prompt_values_#_image_urls", + "is_static": false + }, + { + "id": "0a9ebd00-3aa5-41b4-85f2-e19510f7c1bd", + "source_id": "131f496c-544c-48e5-ac63-0fa2b6554d1e", + "sink_id": "a50c18bc-508e-4b89-aa59-e908a7b7db4e", + "source_name": "error", + "sink_name": "value", + "is_static": false + } + ], + "graph_id": "6cf51c3a-25dd-4189-bd50-5ee05a5f7794", + "graph_version": 58, + "webhook_id": null, + "webhook": null + }, + { + "id": "a50c18bc-508e-4b89-aa59-e908a7b7db4e", + "block_id": "363ae599-353e-4804-937e-b2ee3cef3da4", + "input_default": { + "name": "Error", + "secret": false, + "advanced": false, + "escape_html": false + }, + "metadata": { + "position": { + "x": 3201.5656769329316, + "y": -4946.674997979083 + } + }, + "input_links": [ + { + "id": "8d8ffa53-6012-4d5a-934b-96a732f384f3", + "source_id": "fe3b8cd6-2c09-4d8b-98d3-70e2df92de41", + "sink_id": "a50c18bc-508e-4b89-aa59-e908a7b7db4e", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "115037c7-c122-4895-8598-f5b238d2eb11", + "source_id": "0a54faab-7d56-4826-8408-f1b00042fccd", + "sink_id": "a50c18bc-508e-4b89-aa59-e908a7b7db4e", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "17c3b57f-ad85-40de-9a6c-388f9060c6b9", + "source_id": "68883c29-9a9e-4c97-99ee-945920bfeab7", + "sink_id": "a50c18bc-508e-4b89-aa59-e908a7b7db4e", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "97d2344e-91e5-4358-8b43-42d278a06996", + "source_id": "65e276b6-18c0-4176-84a0-011d1b20859c", + "sink_id": "a50c18bc-508e-4b89-aa59-e908a7b7db4e", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "ccfc0124-e44e-413a-8094-9ee0d1e53789", + "source_id": "fb8d7f23-43eb-486e-8fbd-15de021c06d4", + "sink_id": "a50c18bc-508e-4b89-aa59-e908a7b7db4e", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "9a3132bf-98a4-48fc-b70b-636f7df11ded", + 
"source_id": "662b0bb1-106b-45c2-ab92-4ddc2640bef5", + "sink_id": "a50c18bc-508e-4b89-aa59-e908a7b7db4e", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "ce32a0b2-807d-4f6a-8f2d-0f3faec134b7", + "source_id": "58df8a79-a41a-4c83-8adc-6bb517b4eb43", + "sink_id": "a50c18bc-508e-4b89-aa59-e908a7b7db4e", + "source_name": "missing", + "sink_name": "value", + "is_static": false + }, + { + "id": "efb10adb-9a77-4430-bc51-9a8adaf80ef2", + "source_id": "b3cd6910-8252-4ff2-9983-335279774213", + "sink_id": "a50c18bc-508e-4b89-aa59-e908a7b7db4e", + "source_name": "missing", + "sink_name": "value", + "is_static": false + }, + { + "id": "ff488394-ee6c-41d2-a525-fca11b32549a", + "source_id": "e4eecf3d-fbd0-4095-b844-38794e6b4e7b", + "sink_id": "a50c18bc-508e-4b89-aa59-e908a7b7db4e", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "7d4464f3-87bf-4d58-b16a-232a9123f0c3", + "source_id": "e94bcdb4-67a8-40ea-9bc1-e6672a4bd87a", + "sink_id": "a50c18bc-508e-4b89-aa59-e908a7b7db4e", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "0a9ebd00-3aa5-41b4-85f2-e19510f7c1bd", + "source_id": "131f496c-544c-48e5-ac63-0fa2b6554d1e", + "sink_id": "a50c18bc-508e-4b89-aa59-e908a7b7db4e", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "134406c2-0a8c-4df0-a413-f2e815d21057", + "source_id": "c991c79e-a690-45ab-b9c2-8cb7837c8f8c", + "sink_id": "a50c18bc-508e-4b89-aa59-e908a7b7db4e", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "25e097a1-df65-43c9-bb20-cd73b4fb7cc3", + "source_id": "a95ad0cb-9644-4fa5-b2d3-4ae80a9f75c0", + "sink_id": "a50c18bc-508e-4b89-aa59-e908a7b7db4e", + "source_name": "error", + "sink_name": "value", + "is_static": false + } + ], + "output_links": [], + "graph_id": "6cf51c3a-25dd-4189-bd50-5ee05a5f7794", + "graph_version": 58, + "webhook_id": null, + "webhook": null + }, + { + "id": "5c52e32d-f1c8-4400-a107-85bb4ebb218d", + "block_id": "0e50422c-6dee-4145-83d6-3a5a392f65de", + "input_default": { + "key": "story" + }, + "metadata": { + "position": { + "x": 3170.598532776419, + "y": 9.675529042865193 + } + }, + "input_links": [ + { + "id": "e58d50fe-ee19-4442-988f-632640c7e850", + "source_id": "7abc71cc-09e1-4113-be01-7b37921d6bed", + "sink_id": "5c52e32d-f1c8-4400-a107-85bb4ebb218d", + "source_name": "item", + "sink_name": "input", + "is_static": false + } + ], + "output_links": [ + { + "id": "6ba524c6-3227-43e2-89a8-a2795b21f275", + "source_id": "5c52e32d-f1c8-4400-a107-85bb4ebb218d", + "sink_id": "a95ad0cb-9644-4fa5-b2d3-4ae80a9f75c0", + "source_name": "output", + "sink_name": "key", + "is_static": false + }, + { + "id": "cbe51c52-0a47-4578-a2af-1824de07e7bb", + "source_id": "5c52e32d-f1c8-4400-a107-85bb4ebb218d", + "sink_id": "2d0abd00-ca63-4bf7-9a65-730e356eb52a", + "source_name": "output", + "sink_name": "Search Query", + "is_static": false + } + ], + "graph_id": "6cf51c3a-25dd-4189-bd50-5ee05a5f7794", + "graph_version": 58, + "webhook_id": null, + "webhook": null + }, + { + "id": "d21a61b1-5a1e-4724-9075-9c0a9a49bab7", + "block_id": "3c9c2f42-b0c3-435f-ba35-05f7a25c772a", + "input_default": {}, + "metadata": { + "position": { + "x": 5692.954324071057, + "y": 16.815914128328018 + }, + "customized_name": "Count Images" + }, + "input_links": [ + { + "id": "d74579ef-5171-4178-972e-b5cf0cc90030", + "source_id": "fb8d7f23-43eb-486e-8fbd-15de021c06d4", + "sink_id": "d21a61b1-5a1e-4724-9075-9c0a9a49bab7", + 
"source_name": "updated_list", + "sink_name": "collection", + "is_static": false + } + ], + "output_links": [ + { + "id": "b02c106c-25b4-45ac-a0de-7ed1dfc5f13a", + "source_id": "d21a61b1-5a1e-4724-9075-9c0a9a49bab7", + "sink_id": "0b91fb17-0c3d-48cc-931d-4f115b2cffc4", + "source_name": "count", + "sink_name": "value1", + "is_static": false + } + ], + "graph_id": "6cf51c3a-25dd-4189-bd50-5ee05a5f7794", + "graph_version": 58, + "webhook_id": null, + "webhook": null + }, + { + "id": "56e0a227-fa84-4429-bec6-1b41dfd7fe1d", + "block_id": "1ff065e9-88e8-4358-9d82-8dc91f622ba9", + "input_default": {}, + "metadata": { + "position": { + "x": 5901.56593370303, + "y": 1245.5667181466683 + } + }, + "input_links": [ + { + "id": "6a255401-9741-4ef5-8a65-0a63b087051b", + "source_id": "37a4ebc1-d6e1-49f4-a0fb-7769fbdfe9e5", + "sink_id": "56e0a227-fa84-4429-bec6-1b41dfd7fe1d", + "source_name": "count", + "sink_name": "input", + "is_static": false + } + ], + "output_links": [ + { + "id": "02215011-eaab-49bf-b461-ec8d3014a3b9", + "source_id": "56e0a227-fa84-4429-bec6-1b41dfd7fe1d", + "sink_id": "0b91fb17-0c3d-48cc-931d-4f115b2cffc4", + "source_name": "output", + "sink_name": "value2", + "is_static": true + } + ], + "graph_id": "6cf51c3a-25dd-4189-bd50-5ee05a5f7794", + "graph_version": 58, + "webhook_id": null, + "webhook": null + }, + { + "id": "2d0abd00-ca63-4bf7-9a65-730e356eb52a", + "block_id": "e189baac-8c20-45a1-94a7-55177ea42565", + "input_default": { + "inputs": { + "Max Results": 1 + }, + "user_id": "b3e41ea4-2f4c-4964-927c-fe682d857bad", + "graph_id": "e429bc2a-e4e3-4b5f-b097-6081e94aeba9", + "input_schema": { + "type": "object", + "required": [ + "Search Query", + "Max Results" + ], + "properties": { + "Max Results": { + "anyOf": [ + { + "type": "integer" + }, + { + "type": "null" + } + ], + "title": "Max Results", + "secret": false, + "advanced": false + }, + "Search Query": { + "anyOf": [ + { + "type": "string", + "format": "short-text" + }, + { + "type": "null" + } + ], + "title": "Search Query", + "secret": false, + "advanced": false + } + } + }, + "graph_version": 21, + "output_schema": { + "type": "object", + "required": [ + "URLs", + "Error" + ], + "properties": { + "URLs": { + "title": "URLs", + "secret": false, + "advanced": false + }, + "Error": { + "title": "Error", + "secret": false, + "advanced": false + } + } + } + }, + "metadata": { + "position": { + "x": 3748.327471525148, + "y": 12.656659260656184 + }, + "customized_name": "Image Search" + }, + "input_links": [ + { + "id": "cbe51c52-0a47-4578-a2af-1824de07e7bb", + "source_id": "5c52e32d-f1c8-4400-a107-85bb4ebb218d", + "sink_id": "2d0abd00-ca63-4bf7-9a65-730e356eb52a", + "source_name": "output", + "sink_name": "Search Query", + "is_static": false + } + ], + "output_links": [ + { + "id": "9daa4f4e-2cf7-4ec6-b57d-cde111b1a596", + "source_id": "2d0abd00-ca63-4bf7-9a65-730e356eb52a", + "sink_id": "a95ad0cb-9644-4fa5-b2d3-4ae80a9f75c0", + "source_name": "Error", + "sink_name": "value", + "is_static": false + }, + { + "id": "0a0265e8-47be-4a1c-8da0-9e346f2f7787", + "source_id": "2d0abd00-ca63-4bf7-9a65-730e356eb52a", + "sink_id": "a95ad0cb-9644-4fa5-b2d3-4ae80a9f75c0", + "source_name": "URLs", + "sink_name": "value", + "is_static": false + } + ], + "graph_id": "6cf51c3a-25dd-4189-bd50-5ee05a5f7794", + "graph_version": 58, + "webhook_id": null, + "webhook": null + } + ], + "links": [ + { + "id": "0a0265e8-47be-4a1c-8da0-9e346f2f7787", + "source_id": "2d0abd00-ca63-4bf7-9a65-730e356eb52a", + "sink_id": 
"a95ad0cb-9644-4fa5-b2d3-4ae80a9f75c0", + "source_name": "URLs", + "sink_name": "value", + "is_static": false + }, + { + "id": "f9e65c32-1669-477b-a7f9-122150f14e4a", + "source_id": "0b91fb17-0c3d-48cc-931d-4f115b2cffc4", + "sink_id": "662b0bb1-106b-45c2-ab92-4ddc2640bef5", + "source_name": "yes_output", + "sink_name": "list", + "is_static": false + }, + { + "id": "b82e12d4-aef0-4675-a1f5-6ae375e41710", + "source_id": "b3cd6910-8252-4ff2-9983-335279774213", + "sink_id": "fe3b8cd6-2c09-4d8b-98d3-70e2df92de41", + "source_name": "output", + "sink_name": "prompt_values_#_newsletter", + "is_static": false + }, + { + "id": "25e097a1-df65-43c9-bb20-cd73b4fb7cc3", + "source_id": "a95ad0cb-9644-4fa5-b2d3-4ae80a9f75c0", + "sink_id": "a50c18bc-508e-4b89-aa59-e908a7b7db4e", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "7b5b9a0d-df5f-4af1-9b8f-38441f299037", + "source_id": "8380fb24-95dc-439e-bd42-3454cfffbba7", + "sink_id": "1da0b365-a483-4677-ac3c-ea99270ad92e", + "source_name": "output", + "sink_name": "Email Subject", + "is_static": false + }, + { + "id": "e704b15f-835e-48ca-8469-7f06b3217190", + "source_id": "fb8d7f23-43eb-486e-8fbd-15de021c06d4", + "sink_id": "0b91fb17-0c3d-48cc-931d-4f115b2cffc4", + "source_name": "updated_list", + "sink_name": "yes_value", + "is_static": false + }, + { + "id": "8d8ffa53-6012-4d5a-934b-96a732f384f3", + "source_id": "fe3b8cd6-2c09-4d8b-98d3-70e2df92de41", + "sink_id": "a50c18bc-508e-4b89-aa59-e908a7b7db4e", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "628cb3f7-ec28-4d7e-a506-6dc14194452b", + "source_id": "6c6cff92-b759-41be-a5a1-cd3976e05b24", + "sink_id": "e4eecf3d-fbd0-4095-b844-38794e6b4e7b", + "source_name": "result", + "sink_name": "prompt_values_#_recency", + "is_static": true + }, + { + "id": "f56853c2-3ccc-467d-9eda-767ee4d7352f", + "source_id": "65e276b6-18c0-4176-84a0-011d1b20859c", + "sink_id": "b3cd6910-8252-4ff2-9983-335279774213", + "source_name": "parsed_xml", + "sink_name": "input", + "is_static": false + }, + { + "id": "ffc2ac93-0af5-4b20-a4ab-8bd864294669", + "source_id": "58df8a79-a41a-4c83-8adc-6bb517b4eb43", + "sink_id": "37a4ebc1-d6e1-49f4-a0fb-7769fbdfe9e5", + "source_name": "output", + "sink_name": "collection", + "is_static": false + }, + { + "id": "ff488394-ee6c-41d2-a525-fca11b32549a", + "source_id": "e4eecf3d-fbd0-4095-b844-38794e6b4e7b", + "sink_id": "a50c18bc-508e-4b89-aa59-e908a7b7db4e", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "50a2f911-b4b0-4818-a895-1c36b2f54de5", + "source_id": "c46b730a-16bc-4c55-a619-d74974797d91", + "sink_id": "e4eecf3d-fbd0-4095-b844-38794e6b4e7b", + "source_name": "date", + "sink_name": "prompt_values_#_today", + "is_static": false + }, + { + "id": "0a9ebd00-3aa5-41b4-85f2-e19510f7c1bd", + "source_id": "131f496c-544c-48e5-ac63-0fa2b6554d1e", + "sink_id": "a50c18bc-508e-4b89-aa59-e908a7b7db4e", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "68f19410-34fb-4774-8c14-17c1ffb15a79", + "source_id": "c991c79e-a690-45ab-b9c2-8cb7837c8f8c", + "sink_id": "e4eecf3d-fbd0-4095-b844-38794e6b4e7b", + "source_name": "response", + "sink_name": "prompt_values_#_research", + "is_static": false + }, + { + "id": "e221a5a2-1b51-4c23-8019-e9bee5dc4ea0", + "source_id": "65e276b6-18c0-4176-84a0-011d1b20859c", + "sink_id": "8380fb24-95dc-439e-bd42-3454cfffbba7", + "source_name": "parsed_xml", + "sink_name": "input", + "is_static": false + }, + { + "id": 
"6ba524c6-3227-43e2-89a8-a2795b21f275", + "source_id": "5c52e32d-f1c8-4400-a107-85bb4ebb218d", + "sink_id": "a95ad0cb-9644-4fa5-b2d3-4ae80a9f75c0", + "source_name": "output", + "sink_name": "key", + "is_static": false + }, + { + "id": "97d2344e-91e5-4358-8b43-42d278a06996", + "source_id": "65e276b6-18c0-4176-84a0-011d1b20859c", + "sink_id": "a50c18bc-508e-4b89-aa59-e908a7b7db4e", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "efb10adb-9a77-4430-bc51-9a8adaf80ef2", + "source_id": "b3cd6910-8252-4ff2-9983-335279774213", + "sink_id": "a50c18bc-508e-4b89-aa59-e908a7b7db4e", + "source_name": "missing", + "sink_name": "value", + "is_static": false + }, + { + "id": "6a255401-9741-4ef5-8a65-0a63b087051b", + "source_id": "37a4ebc1-d6e1-49f4-a0fb-7769fbdfe9e5", + "sink_id": "56e0a227-fa84-4429-bec6-1b41dfd7fe1d", + "source_name": "count", + "sink_name": "input", + "is_static": false + }, + { + "id": "a8915dbf-6caf-42f7-9cef-8401fcf291ff", + "source_id": "6c6cff92-b759-41be-a5a1-cd3976e05b24", + "sink_id": "68883c29-9a9e-4c97-99ee-945920bfeab7", + "source_name": "result", + "sink_name": "prompt_values_#_recency", + "is_static": true + }, + { + "id": "65169c65-3ca7-4599-94b9-b3bac6248a49", + "source_id": "68883c29-9a9e-4c97-99ee-945920bfeab7", + "sink_id": "c991c79e-a690-45ab-b9c2-8cb7837c8f8c", + "source_name": "response", + "sink_name": "prompt", + "is_static": false + }, + { + "id": "134406c2-0a8c-4df0-a413-f2e815d21057", + "source_id": "c991c79e-a690-45ab-b9c2-8cb7837c8f8c", + "sink_id": "a50c18bc-508e-4b89-aa59-e908a7b7db4e", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "ef01e0bd-f301-4ac5-93d1-cd5f8ddf30cb", + "source_id": "35f3a0cc-cc5e-44d0-ad1e-3e24f53975e7", + "sink_id": "e4eecf3d-fbd0-4095-b844-38794e6b4e7b", + "source_name": "result", + "sink_name": "prompt_values_#_style", + "is_static": true + }, + { + "id": "8beb2cbe-4f77-431f-b376-33cfde908629", + "source_id": "fe3b8cd6-2c09-4d8b-98d3-70e2df92de41", + "sink_id": "1da0b365-a483-4677-ac3c-ea99270ad92e", + "source_name": "response", + "sink_name": "Report Text", + "is_static": false + }, + { + "id": "d74579ef-5171-4178-972e-b5cf0cc90030", + "source_id": "fb8d7f23-43eb-486e-8fbd-15de021c06d4", + "sink_id": "d21a61b1-5a1e-4724-9075-9c0a9a49bab7", + "source_name": "updated_list", + "sink_name": "collection", + "is_static": false + }, + { + "id": "81c73012-3747-410c-b17b-fca5bd83bb1c", + "source_id": "58df8a79-a41a-4c83-8adc-6bb517b4eb43", + "sink_id": "7abc71cc-09e1-4113-be01-7b37921d6bed", + "source_name": "output", + "sink_name": "items", + "is_static": false + }, + { + "id": "f5763443-3bd7-4a6c-8688-9bc288dc1fed", + "source_id": "a95ad0cb-9644-4fa5-b2d3-4ae80a9f75c0", + "sink_id": "fb8d7f23-43eb-486e-8fbd-15de021c06d4", + "source_name": "updated_dictionary", + "sink_name": "entry", + "is_static": false + }, + { + "id": "ca49f19b-1a82-411e-b3ed-aaf0d8c8eec8", + "source_id": "e4eecf3d-fbd0-4095-b844-38794e6b4e7b", + "sink_id": "65e276b6-18c0-4176-84a0-011d1b20859c", + "source_name": "response", + "sink_name": "input_xml", + "is_static": false + }, + { + "id": "bdc99d05-b6f8-4b2f-8741-c4ab5d2c188e", + "source_id": "e94bcdb4-67a8-40ea-9bc1-e6672a4bd87a", + "sink_id": "58df8a79-a41a-4c83-8adc-6bb517b4eb43", + "source_name": "parsed_xml", + "sink_name": "input", + "is_static": false + }, + { + "id": "7d4464f3-87bf-4d58-b16a-232a9123f0c3", + "source_id": "e94bcdb4-67a8-40ea-9bc1-e6672a4bd87a", + "sink_id": "a50c18bc-508e-4b89-aa59-e908a7b7db4e", + "source_name": 
"error", + "sink_name": "value", + "is_static": false + }, + { + "id": "cbe51c52-0a47-4578-a2af-1824de07e7bb", + "source_id": "5c52e32d-f1c8-4400-a107-85bb4ebb218d", + "sink_id": "2d0abd00-ca63-4bf7-9a65-730e356eb52a", + "source_name": "output", + "sink_name": "Search Query", + "is_static": false + }, + { + "id": "b6e872da-2675-4067-a50d-269ac89d7cf0", + "source_id": "c46b730a-16bc-4c55-a619-d74974797d91", + "sink_id": "68883c29-9a9e-4c97-99ee-945920bfeab7", + "source_name": "date", + "sink_name": "prompt_values_#_today", + "is_static": false + }, + { + "id": "ce32a0b2-807d-4f6a-8f2d-0f3faec134b7", + "source_id": "58df8a79-a41a-4c83-8adc-6bb517b4eb43", + "sink_id": "a50c18bc-508e-4b89-aa59-e908a7b7db4e", + "source_name": "missing", + "sink_name": "value", + "is_static": false + }, + { + "id": "9a3132bf-98a4-48fc-b70b-636f7df11ded", + "source_id": "662b0bb1-106b-45c2-ab92-4ddc2640bef5", + "sink_id": "a50c18bc-508e-4b89-aa59-e908a7b7db4e", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "a817955f-ec4b-4ad4-9b77-e4e353dbdba4", + "source_id": "7b9333d9-68db-48e4-a066-3bfca9802a30", + "sink_id": "68883c29-9a9e-4c97-99ee-945920bfeab7", + "source_name": "result", + "sink_name": "prompt_values_#_topic", + "is_static": true + }, + { + "id": "bf7f2e77-eabd-4019-bedc-39eb2928f996", + "source_id": "7b9333d9-68db-48e4-a066-3bfca9802a30", + "sink_id": "e4eecf3d-fbd0-4095-b844-38794e6b4e7b", + "source_name": "result", + "sink_name": "prompt_values_#_topic", + "is_static": true + }, + { + "id": "16a4267d-aaf5-4636-95f9-bdb2c5e83385", + "source_id": "fe3b8cd6-2c09-4d8b-98d3-70e2df92de41", + "sink_id": "bb142109-ea01-4917-b32d-2848716bd154", + "source_name": "response", + "sink_name": "value", + "is_static": false + }, + { + "id": "635f3112-295b-436e-8b07-6f67ef7cf0e2", + "source_id": "b3cd6910-8252-4ff2-9983-335279774213", + "sink_id": "0a54faab-7d56-4826-8408-f1b00042fccd", + "source_name": "output", + "sink_name": "prompt_values_#_newsletter", + "is_static": false + }, + { + "id": "730b0a8f-91e7-444c-a9e3-2b93df7ab6b1", + "source_id": "0a54faab-7d56-4826-8408-f1b00042fccd", + "sink_id": "e94bcdb4-67a8-40ea-9bc1-e6672a4bd87a", + "source_name": "response", + "sink_name": "input_xml", + "is_static": false + }, + { + "id": "02215011-eaab-49bf-b461-ec8d3014a3b9", + "source_id": "56e0a227-fa84-4429-bec6-1b41dfd7fe1d", + "sink_id": "0b91fb17-0c3d-48cc-931d-4f115b2cffc4", + "source_name": "output", + "sink_name": "value2", + "is_static": true + }, + { + "id": "7c7811e1-d220-4e85-82fa-e2a10fa4291a", + "source_id": "662b0bb1-106b-45c2-ab92-4ddc2640bef5", + "sink_id": "131f496c-544c-48e5-ac63-0fa2b6554d1e", + "source_name": "updated_list", + "sink_name": "value", + "is_static": false + }, + { + "id": "ccfc0124-e44e-413a-8094-9ee0d1e53789", + "source_id": "fb8d7f23-43eb-486e-8fbd-15de021c06d4", + "sink_id": "a50c18bc-508e-4b89-aa59-e908a7b7db4e", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "115037c7-c122-4895-8598-f5b238d2eb11", + "source_id": "0a54faab-7d56-4826-8408-f1b00042fccd", + "sink_id": "a50c18bc-508e-4b89-aa59-e908a7b7db4e", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "64fbb095-9b1b-478c-a581-f10f7ee721ec", + "source_id": "fb8d7f23-43eb-486e-8fbd-15de021c06d4", + "sink_id": "fb8d7f23-43eb-486e-8fbd-15de021c06d4", + "source_name": "updated_list", + "sink_name": "list", + "is_static": false + }, + { + "id": "e58d50fe-ee19-4442-988f-632640c7e850", + "source_id": 
"7abc71cc-09e1-4113-be01-7b37921d6bed", + "sink_id": "5c52e32d-f1c8-4400-a107-85bb4ebb218d", + "source_name": "item", + "sink_name": "input", + "is_static": false + }, + { + "id": "ab679265-5b24-4a02-af4d-cf12619ec793", + "source_id": "2b478887-fbf1-4128-8de8-26028cb7c8e3", + "sink_id": "1da0b365-a483-4677-ac3c-ea99270ad92e", + "source_name": "result", + "sink_name": "Recipient Email", + "is_static": true + }, + { + "id": "f58e953d-9369-4c2a-9e3d-1158542d8867", + "source_id": "3f1f1b7a-73b4-4824-882b-7a9bd367866f", + "sink_id": "fb8d7f23-43eb-486e-8fbd-15de021c06d4", + "source_name": "list", + "sink_name": "list", + "is_static": false + }, + { + "id": "9daa4f4e-2cf7-4ec6-b57d-cde111b1a596", + "source_id": "2d0abd00-ca63-4bf7-9a65-730e356eb52a", + "sink_id": "a95ad0cb-9644-4fa5-b2d3-4ae80a9f75c0", + "source_name": "Error", + "sink_name": "value", + "is_static": false + }, + { + "id": "b02c106c-25b4-45ac-a0de-7ed1dfc5f13a", + "source_id": "d21a61b1-5a1e-4724-9075-9c0a9a49bab7", + "sink_id": "0b91fb17-0c3d-48cc-931d-4f115b2cffc4", + "source_name": "count", + "sink_name": "value1", + "is_static": false + }, + { + "id": "fc9ae566-2a36-4ae9-ac2c-c5fdb7ae7a03", + "source_id": "131f496c-544c-48e5-ac63-0fa2b6554d1e", + "sink_id": "fe3b8cd6-2c09-4d8b-98d3-70e2df92de41", + "source_name": "value", + "sink_name": "prompt_values_#_image_urls", + "is_static": false + }, + { + "id": "17c3b57f-ad85-40de-9a6c-388f9060c6b9", + "source_id": "68883c29-9a9e-4c97-99ee-945920bfeab7", + "sink_id": "a50c18bc-508e-4b89-aa59-e908a7b7db4e", + "source_name": "error", + "sink_name": "value", + "is_static": false + } + ], + "forked_from_id": null, + "forked_from_version": null, + "sub_graphs": [ + { + "id": "8b712b87-c350-40de-a1f8-4869b4f40103", + "version": 6, + "is_active": true, + "name": "Text to Email Report", + "description": "Input your text, and get a beautifully designed email sent straight to your inbox", + "instructions": null, + "recommended_schedule_cron": null, + "nodes": [ + { + "id": "b2074e3b-6f46-47a1-a105-7f80b00af064", + "block_id": "363ae599-353e-4804-937e-b2ee3cef3da4", + "input_default": { + "name": "Error", + "title": null, + "value": null, + "format": "", + "secret": false, + "advanced": false, + "description": null + }, + "metadata": { + "position": { + "x": 955.6287333350833, + "y": 1415.5500101046268 + } + }, + "input_links": [ + { + "id": "17cdcfa7-0c24-4991-9798-dd0cad48444c", + "source_id": "87e23432-e584-4418-ad9a-a42dafd72f28", + "sink_id": "b2074e3b-6f46-47a1-a105-7f80b00af064", + "source_name": "error", + "sink_name": "value", + "is_static": false + } + ], + "output_links": [] + }, + { + "id": "87e23432-e584-4418-ad9a-a42dafd72f28", + "block_id": "1f292d4a-41a4-4977-9684-7c8d560b9f91", + "input_default": { + "model": "claude-sonnet-4-20250514", + "retry": 3, + "prompt": "Make this into a high-quality html email report. \n\nThe user made the following design style request:\n```\n{{design | safe}}\n```\nDo not mention or reference this style design request in the rendered html report.\n\n\nHere is the report. do not change any of it's written content, your job is just to present it exactly as written:\n```\n{{report | safe}}\n```\n\nDo not include any functional buttons, animations, or any elements that would be non functional or out of place in a static offline report.\n\nRespond with just the html, no additional commentary or decoration. 
No code blocks, just the html.\n", + "sys_prompt": "", + "ollama_host": "localhost:11434", + "prompt_values": {} + }, + "metadata": { + "position": { + "x": 347.6321405130907, + "y": -554.8904332378107 + } + }, + "input_links": [ + { + "id": "d95bf708-63ac-4105-ad72-25c49294d3d9", + "source_id": "22f239e3-69bc-47b7-9e7e-0a4457439fee", + "sink_id": "87e23432-e584-4418-ad9a-a42dafd72f28", + "source_name": "result", + "sink_name": "prompt_values_#_report", + "is_static": true + }, + { + "id": "0bd1be01-c6b7-4a4e-b294-9d583868e8fa", + "source_id": "b845d0d3-db13-48ff-8257-4bb63be9e5a4", + "sink_id": "87e23432-e584-4418-ad9a-a42dafd72f28", + "source_name": "result", + "sink_name": "prompt_values_#_design", + "is_static": true + } + ], + "output_links": [ + { + "id": "17cdcfa7-0c24-4991-9798-dd0cad48444c", + "source_id": "87e23432-e584-4418-ad9a-a42dafd72f28", + "sink_id": "b2074e3b-6f46-47a1-a105-7f80b00af064", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "d4ffb3bc-e397-4a38-a269-7fd4bd6a8bb2", + "source_id": "87e23432-e584-4418-ad9a-a42dafd72f28", + "sink_id": "0b3ef64f-7c87-4773-bfef-71710ea7c1d9", + "source_name": "response", + "sink_name": "value", + "is_static": false + }, + { + "id": "d7844d48-631a-4747-a3cc-df18fde200e3", + "source_id": "87e23432-e584-4418-ad9a-a42dafd72f28", + "sink_id": "c05f2dec-50d1-4bea-82bd-faeb08cdf18b", + "source_name": "response", + "sink_name": "body", + "is_static": false + } + ] + }, + { + "id": "22f239e3-69bc-47b7-9e7e-0a4457439fee", + "block_id": "90a56ffb-7024-4b2b-ab50-e26c5e5ab8ba", + "input_default": { + "name": "Report Text", + "title": null, + "value": null, + "secret": false, + "advanced": false, + "description": null, + "placeholder_values": [] + }, + "metadata": { + "position": { + "x": -811.4705680247006, + "y": -548.7174009963832 + } + }, + "input_links": [], + "output_links": [ + { + "id": "d95bf708-63ac-4105-ad72-25c49294d3d9", + "source_id": "22f239e3-69bc-47b7-9e7e-0a4457439fee", + "sink_id": "87e23432-e584-4418-ad9a-a42dafd72f28", + "source_name": "result", + "sink_name": "prompt_values_#_report", + "is_static": true + } + ] + }, + { + "id": "0b3ef64f-7c87-4773-bfef-71710ea7c1d9", + "block_id": "363ae599-353e-4804-937e-b2ee3cef3da4", + "input_default": { + "name": "Raw HTML", + "title": null, + "value": null, + "format": "", + "secret": false, + "advanced": false, + "description": null + }, + "metadata": { + "position": { + "x": 908.7018239625986, + "y": -541.959658221456 + } + }, + "input_links": [ + { + "id": "d4ffb3bc-e397-4a38-a269-7fd4bd6a8bb2", + "source_id": "87e23432-e584-4418-ad9a-a42dafd72f28", + "sink_id": "0b3ef64f-7c87-4773-bfef-71710ea7c1d9", + "source_name": "response", + "sink_name": "value", + "is_static": false + } + ], + "output_links": [] + }, + { + "id": "b845d0d3-db13-48ff-8257-4bb63be9e5a4", + "block_id": "90a56ffb-7024-4b2b-ab50-e26c5e5ab8ba", + "input_default": { + "name": "Report Design Brief", + "title": null, + "value": "default", + "secret": false, + "advanced": true, + "description": "(optional) Briefly describe how you would like your report to look.\n\nFor example \"Style this like a Stripe documentation page\" or \"Make it look like a high-end medical journal\"", + "placeholder_values": [] + }, + "metadata": { + "position": { + "x": -243.38695621119234, + "y": -549.2030784531711 + } + }, + "input_links": [], + "output_links": [ + { + "id": "0bd1be01-c6b7-4a4e-b294-9d583868e8fa", + "source_id": "b845d0d3-db13-48ff-8257-4bb63be9e5a4", + "sink_id": 
"87e23432-e584-4418-ad9a-a42dafd72f28", + "source_name": "result", + "sink_name": "prompt_values_#_design", + "is_static": true + } + ] + }, + { + "id": "c05f2dec-50d1-4bea-82bd-faeb08cdf18b", + "block_id": "6c27abc2-e51d-499e-a85f-5a0041ba94f0", + "input_default": { + "cc": [], + "to": [ + "" + ], + "bcc": [], + "attachments": [], + "content_type": null + }, + "metadata": { + "position": { + "x": 2529.745828367755, + "y": -535.7360458512633 + } + }, + "input_links": [ + { + "id": "93a95de7-4234-4ba8-84c6-24c1ebcb9298", + "source_id": "93e1b359-9b03-41be-b1b6-97bc49cef269", + "sink_id": "c05f2dec-50d1-4bea-82bd-faeb08cdf18b", + "source_name": "result", + "sink_name": "to_$_0", + "is_static": true + }, + { + "id": "67f30c70-e93e-418f-904a-5efc38e3149d", + "source_id": "8f27e2de-eb93-4f52-b214-aac616bbf9fd", + "sink_id": "c05f2dec-50d1-4bea-82bd-faeb08cdf18b", + "source_name": "result", + "sink_name": "subject", + "is_static": true + }, + { + "id": "d7844d48-631a-4747-a3cc-df18fde200e3", + "source_id": "87e23432-e584-4418-ad9a-a42dafd72f28", + "sink_id": "c05f2dec-50d1-4bea-82bd-faeb08cdf18b", + "source_name": "response", + "sink_name": "body", + "is_static": false + } + ], + "output_links": [ + { + "id": "0bf1bb7f-ab0c-4e58-9d9b-3e51c7c0bedd", + "source_id": "c05f2dec-50d1-4bea-82bd-faeb08cdf18b", + "sink_id": "03f800f3-c59d-484b-bafd-d2307e644f92", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "a3f80ad6-5f2d-45e6-a58d-8312cbb62b66", + "source_id": "c05f2dec-50d1-4bea-82bd-faeb08cdf18b", + "sink_id": "03f800f3-c59d-484b-bafd-d2307e644f92", + "source_name": "result_#_status", + "sink_name": "value", + "is_static": false + } + ] + }, + { + "id": "93e1b359-9b03-41be-b1b6-97bc49cef269", + "block_id": "7fcd3bcb-8e1b-4e69-903d-32d3d4a92158", + "input_default": { + "name": "Recipient Email", + "title": null, + "value": null, + "secret": false, + "advanced": false, + "description": "The Email Address to send the report to. i.e. 
your@email.com", + "placeholder_values": [] + }, + "metadata": { + "position": { + "x": 1439.6291079216235, + "y": -533.8659891630703 + } + }, + "input_links": [], + "output_links": [ + { + "id": "93a95de7-4234-4ba8-84c6-24c1ebcb9298", + "source_id": "93e1b359-9b03-41be-b1b6-97bc49cef269", + "sink_id": "c05f2dec-50d1-4bea-82bd-faeb08cdf18b", + "source_name": "result", + "sink_name": "to_$_0", + "is_static": true + } + ] + }, + { + "id": "03f800f3-c59d-484b-bafd-d2307e644f92", + "block_id": "363ae599-353e-4804-937e-b2ee3cef3da4", + "input_default": { + "name": "Email Status", + "title": null, + "value": null, + "format": "", + "secret": false, + "advanced": false, + "description": null + }, + "metadata": { + "position": { + "x": 3123.908492421893, + "y": 119.90445391424203 + } + }, + "input_links": [ + { + "id": "0bf1bb7f-ab0c-4e58-9d9b-3e51c7c0bedd", + "source_id": "c05f2dec-50d1-4bea-82bd-faeb08cdf18b", + "sink_id": "03f800f3-c59d-484b-bafd-d2307e644f92", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "a3f80ad6-5f2d-45e6-a58d-8312cbb62b66", + "source_id": "c05f2dec-50d1-4bea-82bd-faeb08cdf18b", + "sink_id": "03f800f3-c59d-484b-bafd-d2307e644f92", + "source_name": "result_#_status", + "sink_name": "value", + "is_static": false + } + ], + "output_links": [] + }, + { + "id": "8f27e2de-eb93-4f52-b214-aac616bbf9fd", + "block_id": "7fcd3bcb-8e1b-4e69-903d-32d3d4a92158", + "input_default": { + "name": "Email Subject", + "title": null, + "value": null, + "secret": false, + "advanced": false, + "description": "The subject line for the email. i.e. \"Your Report\"", + "placeholder_values": [] + }, + "metadata": { + "position": { + "x": 1974.0298117846444, + "y": -540.8747751530805 + } + }, + "input_links": [], + "output_links": [ + { + "id": "67f30c70-e93e-418f-904a-5efc38e3149d", + "source_id": "8f27e2de-eb93-4f52-b214-aac616bbf9fd", + "sink_id": "c05f2dec-50d1-4bea-82bd-faeb08cdf18b", + "source_name": "result", + "sink_name": "subject", + "is_static": true + } + ] + } + ], + "links": [ + { + "id": "a3f80ad6-5f2d-45e6-a58d-8312cbb62b66", + "source_id": "c05f2dec-50d1-4bea-82bd-faeb08cdf18b", + "sink_id": "03f800f3-c59d-484b-bafd-d2307e644f92", + "source_name": "result_#_status", + "sink_name": "value", + "is_static": false + }, + { + "id": "93a95de7-4234-4ba8-84c6-24c1ebcb9298", + "source_id": "93e1b359-9b03-41be-b1b6-97bc49cef269", + "sink_id": "c05f2dec-50d1-4bea-82bd-faeb08cdf18b", + "source_name": "result", + "sink_name": "to_$_0", + "is_static": true + }, + { + "id": "d95bf708-63ac-4105-ad72-25c49294d3d9", + "source_id": "22f239e3-69bc-47b7-9e7e-0a4457439fee", + "sink_id": "87e23432-e584-4418-ad9a-a42dafd72f28", + "source_name": "result", + "sink_name": "prompt_values_#_report", + "is_static": true + }, + { + "id": "67f30c70-e93e-418f-904a-5efc38e3149d", + "source_id": "8f27e2de-eb93-4f52-b214-aac616bbf9fd", + "sink_id": "c05f2dec-50d1-4bea-82bd-faeb08cdf18b", + "source_name": "result", + "sink_name": "subject", + "is_static": true + }, + { + "id": "d4ffb3bc-e397-4a38-a269-7fd4bd6a8bb2", + "source_id": "87e23432-e584-4418-ad9a-a42dafd72f28", + "sink_id": "0b3ef64f-7c87-4773-bfef-71710ea7c1d9", + "source_name": "response", + "sink_name": "value", + "is_static": false + }, + { + "id": "0bd1be01-c6b7-4a4e-b294-9d583868e8fa", + "source_id": "b845d0d3-db13-48ff-8257-4bb63be9e5a4", + "sink_id": "87e23432-e584-4418-ad9a-a42dafd72f28", + "source_name": "result", + "sink_name": "prompt_values_#_design", + "is_static": true + }, + { + "id": 
"17cdcfa7-0c24-4991-9798-dd0cad48444c", + "source_id": "87e23432-e584-4418-ad9a-a42dafd72f28", + "sink_id": "b2074e3b-6f46-47a1-a105-7f80b00af064", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "0bf1bb7f-ab0c-4e58-9d9b-3e51c7c0bedd", + "source_id": "c05f2dec-50d1-4bea-82bd-faeb08cdf18b", + "sink_id": "03f800f3-c59d-484b-bafd-d2307e644f92", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "d7844d48-631a-4747-a3cc-df18fde200e3", + "source_id": "87e23432-e584-4418-ad9a-a42dafd72f28", + "sink_id": "c05f2dec-50d1-4bea-82bd-faeb08cdf18b", + "source_name": "response", + "sink_name": "body", + "is_static": false + } + ], + "forked_from_id": null, + "forked_from_version": null, + "input_schema": { + "type": "object", + "properties": { + "Report Text": { + "advanced": false, + "anyOf": [ + { + "format": "long-text", + "type": "string" + }, + { + "type": "null" + } + ], + "secret": false, + "title": "Report Text" + }, + "Report Design Brief": { + "advanced": true, + "anyOf": [ + { + "format": "long-text", + "type": "string" + }, + { + "type": "null" + } + ], + "secret": false, + "title": "Report Design Brief", + "description": "(optional) Briefly describe how you would like your report to look.\n\nFor example \"Style this like a Stripe documentation page\" or \"Make it look like a high-end medical journal\"", + "default": "default" + }, + "Recipient Email": { + "advanced": false, + "anyOf": [ + { + "format": "short-text", + "type": "string" + }, + { + "type": "null" + } + ], + "secret": false, + "title": "Recipient Email", + "description": "The Email Address to send the report to. i.e. your@email.com" + }, + "Email Subject": { + "advanced": false, + "anyOf": [ + { + "format": "short-text", + "type": "string" + }, + { + "type": "null" + } + ], + "secret": false, + "title": "Email Subject", + "description": "The subject line for the email. i.e. 
\"Your Report\"" + } + }, + "required": [ + "Report Text", + "Recipient Email", + "Email Subject" + ] + }, + "output_schema": { + "type": "object", + "properties": { + "Error": { + "advanced": false, + "secret": false, + "title": "Error" + }, + "Raw HTML": { + "advanced": false, + "secret": false, + "title": "Raw HTML" + }, + "Email Status": { + "advanced": false, + "secret": false, + "title": "Email Status" + } + }, + "required": [ + "Error", + "Raw HTML", + "Email Status" + ] + }, + "has_external_trigger": false, + "has_human_in_the_loop": false, + "trigger_setup_info": null + }, + { + "id": "2fc911f6-09ea-4503-b36a-fda31beffe63", + "version": 21, + "is_active": true, + "name": "Image Search", + "description": "Search the web for images, get back image urls.", + "instructions": null, + "recommended_schedule_cron": null, + "nodes": [ + { + "id": "d6de58a7-daeb-4f03-b65a-197f330caf99", + "block_id": "87840993-2053-44b7-8da4-187ad4ee518c", + "input_default": {}, + "metadata": { + "position": { + "x": 1413.5001262690541, + "y": 546.0000441621792 + } + }, + "input_links": [ + { + "id": "99c9a905-3bf4-418c-86e4-442c71ac28e9", + "source_id": "e4727d6e-c50f-4dcf-9a5a-2bd3d432d6ee", + "sink_id": "d6de58a7-daeb-4f03-b65a-197f330caf99", + "source_name": "result", + "sink_name": "query", + "is_static": true + } + ], + "output_links": [ + { + "id": "17af5ed9-d1a6-4461-81a2-cf66bb6e759c", + "source_id": "d6de58a7-daeb-4f03-b65a-197f330caf99", + "sink_id": "cc3dd52e-9f27-407a-8577-7db36b5394eb", + "source_name": "results", + "sink_name": "prompt_values_#_results", + "is_static": false + } + ] + }, + { + "id": "cc3dd52e-9f27-407a-8577-7db36b5394eb", + "block_id": "1f292d4a-41a4-4977-9684-7c8d560b9f91", + "input_default": { + "model": "gpt-5-mini-2025-08-07", + "retry": 3, + "prompt": "Extract the most relevant image URL(s) that represent the following query from the text below. \nDo not return more than {{max_results}} images under any circumstances.\n\n\n{{query | safe}}\n\n\n\n{{results | safe}}\n\n\nRespond using the following XML structure:\n- Wrap all images in a single tag (even if there is only one image).\n- For each image, include:\n - The full image URL inside ... \n - A short image description inside ... \n- Wrap each image\u2019s data in an ... 
block.\n\nExample:\n\n \n https://example.com/image1.png\n The image depicts a sunrise over the mountains.\n \n \n https://example.com/image2.jpg\n A close-up of a person hiking on a rocky trail.\n \n\n\nTry to return up to {{max_results}} relevant images, but exclude any that are not relevant.\n", + "sys_prompt": "", + "ollama_host": "localhost:11434", + "prompt_values": { + "query": "", + "results": "", + "max_results": "" + } + }, + "metadata": { + "position": { + "x": 1982.0954315766367, + "y": 547.6003508667025 + } + }, + "input_links": [ + { + "id": "ebb9ade8-cf2b-4baf-88cc-9ff77d88ffe8", + "source_id": "e4727d6e-c50f-4dcf-9a5a-2bd3d432d6ee", + "sink_id": "cc3dd52e-9f27-407a-8577-7db36b5394eb", + "source_name": "result", + "sink_name": "prompt_values_#_query", + "is_static": true + }, + { + "id": "15182d9b-235f-43a8-ada5-9a4961f61362", + "source_id": "42288c2f-2737-4bfd-b7b7-a2d7b95d19da", + "sink_id": "cc3dd52e-9f27-407a-8577-7db36b5394eb", + "source_name": "result", + "sink_name": "prompt_values_#_max_results", + "is_static": true + }, + { + "id": "17af5ed9-d1a6-4461-81a2-cf66bb6e759c", + "source_id": "d6de58a7-daeb-4f03-b65a-197f330caf99", + "sink_id": "cc3dd52e-9f27-407a-8577-7db36b5394eb", + "source_name": "results", + "sink_name": "prompt_values_#_results", + "is_static": false + } + ], + "output_links": [ + { + "id": "4f5b682d-13a1-4bc4-99d4-fd8c979a8d4b", + "source_id": "cc3dd52e-9f27-407a-8577-7db36b5394eb", + "sink_id": "9f55068b-9638-4c80-b82b-7847c75c0cf6", + "source_name": "response", + "sink_name": "input_xml", + "is_static": false + } + ] + }, + { + "id": "e4727d6e-c50f-4dcf-9a5a-2bd3d432d6ee", + "block_id": "7fcd3bcb-8e1b-4e69-903d-32d3d4a92158", + "input_default": { + "name": "Search Query", + "title": null, + "value": null, + "secret": false, + "advanced": false, + "description": null, + "placeholder_values": [] + }, + "metadata": { + "position": { + "x": 799.2065101737418, + "y": 535.4934391928202 + } + }, + "input_links": [], + "output_links": [ + { + "id": "ebb9ade8-cf2b-4baf-88cc-9ff77d88ffe8", + "source_id": "e4727d6e-c50f-4dcf-9a5a-2bd3d432d6ee", + "sink_id": "cc3dd52e-9f27-407a-8577-7db36b5394eb", + "source_name": "result", + "sink_name": "prompt_values_#_query", + "is_static": true + }, + { + "id": "99c9a905-3bf4-418c-86e4-442c71ac28e9", + "source_id": "e4727d6e-c50f-4dcf-9a5a-2bd3d432d6ee", + "sink_id": "d6de58a7-daeb-4f03-b65a-197f330caf99", + "source_name": "result", + "sink_name": "query", + "is_static": true + } + ] + }, + { + "id": "42288c2f-2737-4bfd-b7b7-a2d7b95d19da", + "block_id": "96dae2bb-97a2-41c2-bd2f-13a3b5a8ea98", + "input_default": { + "name": "Max Results", + "title": null, + "value": null, + "secret": false, + "advanced": false, + "description": null, + "placeholder_values": [] + }, + "metadata": { + "position": { + "x": 772.4815894022927, + "y": 1517.0242518700538 + } + }, + "input_links": [], + "output_links": [ + { + "id": "15182d9b-235f-43a8-ada5-9a4961f61362", + "source_id": "42288c2f-2737-4bfd-b7b7-a2d7b95d19da", + "sink_id": "cc3dd52e-9f27-407a-8577-7db36b5394eb", + "source_name": "result", + "sink_name": "prompt_values_#_max_results", + "is_static": true + } + ] + }, + { + "id": "9988df52-8b32-40ac-a1dd-824f68a753b4", + "block_id": "363ae599-353e-4804-937e-b2ee3cef3da4", + "input_default": { + "name": "URLs", + "title": null, + "value": null, + "format": "", + "secret": false, + "advanced": false, + "description": null, + "escape_html": false + }, + "metadata": { + "position": { + "x": 3251.0726844642595, + "y": 568.0389731566472 + 
} + }, + "input_links": [ + { + "id": "dd5b8bb3-3f3f-4eb0-8183-95c55ab0fec2", + "source_id": "9f55068b-9638-4c80-b82b-7847c75c0cf6", + "sink_id": "9988df52-8b32-40ac-a1dd-824f68a753b4", + "source_name": "parsed_xml", + "sink_name": "value", + "is_static": false + } + ], + "output_links": [] + }, + { + "id": "9f55068b-9638-4c80-b82b-7847c75c0cf6", + "block_id": "286380af-9529-4b55-8be0-1d7c854abdb5", + "input_default": {}, + "metadata": { + "position": { + "x": 2635.8455314360563, + "y": 550.2528550156364 + } + }, + "input_links": [ + { + "id": "4f5b682d-13a1-4bc4-99d4-fd8c979a8d4b", + "source_id": "cc3dd52e-9f27-407a-8577-7db36b5394eb", + "sink_id": "9f55068b-9638-4c80-b82b-7847c75c0cf6", + "source_name": "response", + "sink_name": "input_xml", + "is_static": false + } + ], + "output_links": [ + { + "id": "2c493706-4911-48bc-9322-7828ffc8fbdc", + "source_id": "9f55068b-9638-4c80-b82b-7847c75c0cf6", + "sink_id": "c487e2f1-ac64-4006-8448-2012d15f839b", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "dd5b8bb3-3f3f-4eb0-8183-95c55ab0fec2", + "source_id": "9f55068b-9638-4c80-b82b-7847c75c0cf6", + "sink_id": "9988df52-8b32-40ac-a1dd-824f68a753b4", + "source_name": "parsed_xml", + "sink_name": "value", + "is_static": false + } + ] + }, + { + "id": "c487e2f1-ac64-4006-8448-2012d15f839b", + "block_id": "363ae599-353e-4804-937e-b2ee3cef3da4", + "input_default": { + "name": "Error", + "title": null, + "value": null, + "format": "", + "secret": false, + "advanced": false, + "description": null, + "escape_html": false + }, + "metadata": { + "position": { + "x": 3248.5693093186587, + "y": 1538.1199069708057 + } + }, + "input_links": [ + { + "id": "2c493706-4911-48bc-9322-7828ffc8fbdc", + "source_id": "9f55068b-9638-4c80-b82b-7847c75c0cf6", + "sink_id": "c487e2f1-ac64-4006-8448-2012d15f839b", + "source_name": "error", + "sink_name": "value", + "is_static": false + } + ], + "output_links": [] + } + ], + "links": [ + { + "id": "ebb9ade8-cf2b-4baf-88cc-9ff77d88ffe8", + "source_id": "e4727d6e-c50f-4dcf-9a5a-2bd3d432d6ee", + "sink_id": "cc3dd52e-9f27-407a-8577-7db36b5394eb", + "source_name": "result", + "sink_name": "prompt_values_#_query", + "is_static": true + }, + { + "id": "15182d9b-235f-43a8-ada5-9a4961f61362", + "source_id": "42288c2f-2737-4bfd-b7b7-a2d7b95d19da", + "sink_id": "cc3dd52e-9f27-407a-8577-7db36b5394eb", + "source_name": "result", + "sink_name": "prompt_values_#_max_results", + "is_static": true + }, + { + "id": "4f5b682d-13a1-4bc4-99d4-fd8c979a8d4b", + "source_id": "cc3dd52e-9f27-407a-8577-7db36b5394eb", + "sink_id": "9f55068b-9638-4c80-b82b-7847c75c0cf6", + "source_name": "response", + "sink_name": "input_xml", + "is_static": false + }, + { + "id": "99c9a905-3bf4-418c-86e4-442c71ac28e9", + "source_id": "e4727d6e-c50f-4dcf-9a5a-2bd3d432d6ee", + "sink_id": "d6de58a7-daeb-4f03-b65a-197f330caf99", + "source_name": "result", + "sink_name": "query", + "is_static": true + }, + { + "id": "17af5ed9-d1a6-4461-81a2-cf66bb6e759c", + "source_id": "d6de58a7-daeb-4f03-b65a-197f330caf99", + "sink_id": "cc3dd52e-9f27-407a-8577-7db36b5394eb", + "source_name": "results", + "sink_name": "prompt_values_#_results", + "is_static": false + }, + { + "id": "dd5b8bb3-3f3f-4eb0-8183-95c55ab0fec2", + "source_id": "9f55068b-9638-4c80-b82b-7847c75c0cf6", + "sink_id": "9988df52-8b32-40ac-a1dd-824f68a753b4", + "source_name": "parsed_xml", + "sink_name": "value", + "is_static": false + }, + { + "id": "2c493706-4911-48bc-9322-7828ffc8fbdc", + "source_id": 
"9f55068b-9638-4c80-b82b-7847c75c0cf6", + "sink_id": "c487e2f1-ac64-4006-8448-2012d15f839b", + "source_name": "error", + "sink_name": "value", + "is_static": false + } + ], + "forked_from_id": null, + "forked_from_version": null, + "input_schema": { + "type": "object", + "properties": { + "Search Query": { + "advanced": false, + "anyOf": [ + { + "format": "short-text", + "type": "string" + }, + { + "type": "null" + } + ], + "secret": false, + "title": "Search Query" + }, + "Max Results": { + "advanced": false, + "anyOf": [ + { + "type": "integer" + }, + { + "type": "null" + } + ], + "secret": false, + "title": "Max Results" + } + }, + "required": [ + "Search Query", + "Max Results" + ] + }, + "output_schema": { + "type": "object", + "properties": { + "URLs": { + "advanced": false, + "secret": false, + "title": "URLs" + }, + "Error": { + "advanced": false, + "secret": false, + "title": "Error" + } + }, + "required": [ + "URLs", + "Error" + ] + }, + "has_external_trigger": false, + "has_human_in_the_loop": false, + "trigger_setup_info": null + } + ], + "user_id": "", + "created_at": "2025-10-18T11:19:17.072Z", + "input_schema": { + "type": "object", + "properties": { + "Writing Style": { + "advanced": false, + "anyOf": [ + { + "format": "long-text", + "type": "string" + }, + { + "type": "null" + } + ], + "secret": false, + "title": "Writing Style", + "description": "How would you like the newsletter to be written? ", + "default": " Engaging, witty, and informative" + }, + "Topics of Interest": { + "advanced": false, + "secret": false, + "title": "Topics of Interest", + "description": "Enter your topics of interest, separated by commas", + "default": "Space" + }, + "Your Email Address": { + "advanced": false, + "anyOf": [ + { + "format": "short-text", + "type": "string" + }, + { + "type": "null" + } + ], + "secret": false, + "title": "Your Email Address", + "description": "Enter the email address at which you would like to receive the newsletter." + }, + "What time range would you like each newsletter edition to cover?": { + "advanced": false, + "anyOf": [ + { + "format": "short-text", + "type": "string" + }, + { + "type": "null" + } + ], + "secret": false, + "title": "What time range would you like each newsletter edition to cover?", + "description": "For example, you could say one day to only receive news stories that happened in the past 24 hours before at the time of sending the newsletter, or you could say one week to receive news from that week" + } + }, + "required": [ + "Your Email Address", + "What time range would you like each newsletter edition to cover?" 
+ ] + }, + "output_schema": { + "type": "object", + "properties": { + "Newsletter": { + "advanced": false, + "secret": false, + "title": "Newsletter" + }, + "Error": { + "advanced": false, + "secret": false, + "title": "Error" + } + }, + "required": [ + "Newsletter", + "Error" + ] + }, + "has_external_trigger": false, + "has_human_in_the_loop": false, + "trigger_setup_info": null, + "credentials_input_schema": { + "properties": { + "openai_api_key_credentials": { + "credentials_provider": [ + "openai" + ], + "credentials_types": [ + "api_key" + ], + "properties": { + "id": { + "title": "Id", + "type": "string" + }, + "title": { + "anyOf": [ + { + "type": "string" + }, + { + "type": "null" + } + ], + "default": null, + "title": "Title" + }, + "provider": { + "const": "openai", + "title": "Provider", + "type": "string" + }, + "type": { + "const": "api_key", + "title": "Type", + "type": "string" + } + }, + "required": [ + "id", + "provider", + "type" + ], + "title": "CredentialsMetaInput[Literal[], Literal['api_key']]", + "type": "object", + "discriminator": "model", + "discriminator_mapping": { + "Llama-3.3-70B-Instruct": "llama_api", + "Llama-3.3-8B-Instruct": "llama_api", + "Llama-4-Maverick-17B-128E-Instruct-FP8": "llama_api", + "Llama-4-Scout-17B-16E-Instruct-FP8": "llama_api", + "Qwen/Qwen2.5-72B-Instruct-Turbo": "aiml_api", + "amazon/nova-lite-v1": "open_router", + "amazon/nova-micro-v1": "open_router", + "amazon/nova-pro-v1": "open_router", + "claude-3-7-sonnet-20250219": "anthropic", + "claude-3-haiku-20240307": "anthropic", + "claude-haiku-4-5-20251001": "anthropic", + "claude-opus-4-1-20250805": "anthropic", + "claude-opus-4-20250514": "anthropic", + "claude-opus-4-5-20251101": "anthropic", + "claude-sonnet-4-20250514": "anthropic", + "claude-sonnet-4-5-20250929": "anthropic", + "cohere/command-r-08-2024": "open_router", + "cohere/command-r-plus-08-2024": "open_router", + "deepseek/deepseek-chat": "open_router", + "deepseek/deepseek-r1-0528": "open_router", + "dolphin-mistral:latest": "ollama", + "google/gemini-2.0-flash-001": "open_router", + "google/gemini-2.0-flash-lite-001": "open_router", + "google/gemini-2.5-flash": "open_router", + "google/gemini-2.5-flash-lite-preview-06-17": "open_router", + "google/gemini-2.5-pro-preview-03-25": "open_router", + "google/gemini-3-pro-preview": "open_router", + "gpt-3.5-turbo": "openai", + "gpt-4-turbo": "openai", + "gpt-4.1-2025-04-14": "openai", + "gpt-4.1-mini-2025-04-14": "openai", + "gpt-4o": "openai", + "gpt-4o-mini": "openai", + "gpt-5-2025-08-07": "openai", + "gpt-5-chat-latest": "openai", + "gpt-5-mini-2025-08-07": "openai", + "gpt-5-nano-2025-08-07": "openai", + "gpt-5.1-2025-11-13": "openai", + "gryphe/mythomax-l2-13b": "open_router", + "llama-3.1-8b-instant": "groq", + "llama-3.3-70b-versatile": "groq", + "llama3": "ollama", + "llama3.1:405b": "ollama", + "llama3.2": "ollama", + "llama3.3": "ollama", + "meta-llama/Llama-3.2-3B-Instruct-Turbo": "aiml_api", + "meta-llama/Llama-3.3-70B-Instruct-Turbo": "aiml_api", + "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo": "aiml_api", + "meta-llama/llama-4-maverick": "open_router", + "meta-llama/llama-4-scout": "open_router", + "microsoft/wizardlm-2-8x22b": "open_router", + "mistralai/mistral-nemo": "open_router", + "moonshotai/kimi-k2": "open_router", + "nousresearch/hermes-3-llama-3.1-405b": "open_router", + "nousresearch/hermes-3-llama-3.1-70b": "open_router", + "nvidia/llama-3.1-nemotron-70b-instruct": "aiml_api", + "o1": "openai", + "o1-mini": "openai", + "o3-2025-04-16": "openai", + 
"o3-mini": "openai", + "openai/gpt-oss-120b": "open_router", + "openai/gpt-oss-20b": "open_router", + "perplexity/sonar": "open_router", + "perplexity/sonar-deep-research": "open_router", + "perplexity/sonar-pro": "open_router", + "qwen/qwen3-235b-a22b-thinking-2507": "open_router", + "qwen/qwen3-coder": "open_router", + "v0-1.0-md": "v0", + "v0-1.5-lg": "v0", + "v0-1.5-md": "v0", + "x-ai/grok-4": "open_router", + "x-ai/grok-4-fast": "open_router", + "x-ai/grok-4.1-fast": "open_router", + "x-ai/grok-code-fast-1": "open_router" + }, + "discriminator_values": [ + "gpt-5-mini-2025-08-07", + "gpt-5-2025-08-07" + ] + }, + "open_router_api_key_credentials": { + "credentials_provider": [ + "open_router" + ], + "credentials_types": [ + "api_key" + ], + "properties": { + "id": { + "title": "Id", + "type": "string" + }, + "title": { + "anyOf": [ + { + "type": "string" + }, + { + "type": "null" + } + ], + "default": null, + "title": "Title" + }, + "provider": { + "const": "open_router", + "title": "Provider", + "type": "string" + }, + "type": { + "const": "api_key", + "title": "Type", + "type": "string" + } + }, + "required": [ + "id", + "provider", + "type" + ], + "title": "CredentialsMetaInput[Literal[], Literal['api_key']]", + "type": "object", + "discriminator": "model", + "discriminator_mapping": { + "Llama-3.3-70B-Instruct": "llama_api", + "Llama-3.3-8B-Instruct": "llama_api", + "Llama-4-Maverick-17B-128E-Instruct-FP8": "llama_api", + "Llama-4-Scout-17B-16E-Instruct-FP8": "llama_api", + "Qwen/Qwen2.5-72B-Instruct-Turbo": "aiml_api", + "amazon/nova-lite-v1": "open_router", + "amazon/nova-micro-v1": "open_router", + "amazon/nova-pro-v1": "open_router", + "claude-3-7-sonnet-20250219": "anthropic", + "claude-3-haiku-20240307": "anthropic", + "claude-haiku-4-5-20251001": "anthropic", + "claude-opus-4-1-20250805": "anthropic", + "claude-opus-4-20250514": "anthropic", + "claude-opus-4-5-20251101": "anthropic", + "claude-sonnet-4-20250514": "anthropic", + "claude-sonnet-4-5-20250929": "anthropic", + "cohere/command-r-08-2024": "open_router", + "cohere/command-r-plus-08-2024": "open_router", + "deepseek/deepseek-chat": "open_router", + "deepseek/deepseek-r1-0528": "open_router", + "dolphin-mistral:latest": "ollama", + "google/gemini-2.0-flash-001": "open_router", + "google/gemini-2.0-flash-lite-001": "open_router", + "google/gemini-2.5-flash": "open_router", + "google/gemini-2.5-flash-lite-preview-06-17": "open_router", + "google/gemini-2.5-pro-preview-03-25": "open_router", + "google/gemini-3-pro-preview": "open_router", + "gpt-3.5-turbo": "openai", + "gpt-4-turbo": "openai", + "gpt-4.1-2025-04-14": "openai", + "gpt-4.1-mini-2025-04-14": "openai", + "gpt-4o": "openai", + "gpt-4o-mini": "openai", + "gpt-5-2025-08-07": "openai", + "gpt-5-chat-latest": "openai", + "gpt-5-mini-2025-08-07": "openai", + "gpt-5-nano-2025-08-07": "openai", + "gpt-5.1-2025-11-13": "openai", + "gryphe/mythomax-l2-13b": "open_router", + "llama-3.1-8b-instant": "groq", + "llama-3.3-70b-versatile": "groq", + "llama3": "ollama", + "llama3.1:405b": "ollama", + "llama3.2": "ollama", + "llama3.3": "ollama", + "meta-llama/Llama-3.2-3B-Instruct-Turbo": "aiml_api", + "meta-llama/Llama-3.3-70B-Instruct-Turbo": "aiml_api", + "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo": "aiml_api", + "meta-llama/llama-4-maverick": "open_router", + "meta-llama/llama-4-scout": "open_router", + "microsoft/wizardlm-2-8x22b": "open_router", + "mistralai/mistral-nemo": "open_router", + "moonshotai/kimi-k2": "open_router", + 
"nousresearch/hermes-3-llama-3.1-405b": "open_router", + "nousresearch/hermes-3-llama-3.1-70b": "open_router", + "nvidia/llama-3.1-nemotron-70b-instruct": "aiml_api", + "o1": "openai", + "o1-mini": "openai", + "o3-2025-04-16": "openai", + "o3-mini": "openai", + "openai/gpt-oss-120b": "open_router", + "openai/gpt-oss-20b": "open_router", + "perplexity/sonar": "open_router", + "perplexity/sonar-deep-research": "open_router", + "perplexity/sonar-pro": "open_router", + "qwen/qwen3-235b-a22b-thinking-2507": "open_router", + "qwen/qwen3-coder": "open_router", + "v0-1.0-md": "v0", + "v0-1.5-lg": "v0", + "v0-1.5-md": "v0", + "x-ai/grok-4": "open_router", + "x-ai/grok-4-fast": "open_router", + "x-ai/grok-4.1-fast": "open_router", + "x-ai/grok-code-fast-1": "open_router" + }, + "discriminator_values": [ + "perplexity/sonar-deep-research" + ] + }, + "anthropic_api_key_credentials": { + "credentials_provider": [ + "anthropic" + ], + "credentials_types": [ + "api_key" + ], + "properties": { + "id": { + "title": "Id", + "type": "string" + }, + "title": { + "anyOf": [ + { + "type": "string" + }, + { + "type": "null" + } + ], + "default": null, + "title": "Title" + }, + "provider": { + "const": "anthropic", + "title": "Provider", + "type": "string" + }, + "type": { + "const": "api_key", + "title": "Type", + "type": "string" + } + }, + "required": [ + "id", + "provider", + "type" + ], + "title": "CredentialsMetaInput[Literal[], Literal['api_key']]", + "type": "object", + "discriminator": "model", + "discriminator_mapping": { + "Llama-3.3-70B-Instruct": "llama_api", + "Llama-3.3-8B-Instruct": "llama_api", + "Llama-4-Maverick-17B-128E-Instruct-FP8": "llama_api", + "Llama-4-Scout-17B-16E-Instruct-FP8": "llama_api", + "Qwen/Qwen2.5-72B-Instruct-Turbo": "aiml_api", + "amazon/nova-lite-v1": "open_router", + "amazon/nova-micro-v1": "open_router", + "amazon/nova-pro-v1": "open_router", + "claude-3-7-sonnet-20250219": "anthropic", + "claude-3-haiku-20240307": "anthropic", + "claude-haiku-4-5-20251001": "anthropic", + "claude-opus-4-1-20250805": "anthropic", + "claude-opus-4-20250514": "anthropic", + "claude-opus-4-5-20251101": "anthropic", + "claude-sonnet-4-20250514": "anthropic", + "claude-sonnet-4-5-20250929": "anthropic", + "cohere/command-r-08-2024": "open_router", + "cohere/command-r-plus-08-2024": "open_router", + "deepseek/deepseek-chat": "open_router", + "deepseek/deepseek-r1-0528": "open_router", + "dolphin-mistral:latest": "ollama", + "google/gemini-2.0-flash-001": "open_router", + "google/gemini-2.0-flash-lite-001": "open_router", + "google/gemini-2.5-flash": "open_router", + "google/gemini-2.5-flash-lite-preview-06-17": "open_router", + "google/gemini-2.5-pro-preview-03-25": "open_router", + "google/gemini-3-pro-preview": "open_router", + "gpt-3.5-turbo": "openai", + "gpt-4-turbo": "openai", + "gpt-4.1-2025-04-14": "openai", + "gpt-4.1-mini-2025-04-14": "openai", + "gpt-4o": "openai", + "gpt-4o-mini": "openai", + "gpt-5-2025-08-07": "openai", + "gpt-5-chat-latest": "openai", + "gpt-5-mini-2025-08-07": "openai", + "gpt-5-nano-2025-08-07": "openai", + "gpt-5.1-2025-11-13": "openai", + "gryphe/mythomax-l2-13b": "open_router", + "llama-3.1-8b-instant": "groq", + "llama-3.3-70b-versatile": "groq", + "llama3": "ollama", + "llama3.1:405b": "ollama", + "llama3.2": "ollama", + "llama3.3": "ollama", + "meta-llama/Llama-3.2-3B-Instruct-Turbo": "aiml_api", + "meta-llama/Llama-3.3-70B-Instruct-Turbo": "aiml_api", + "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo": "aiml_api", + "meta-llama/llama-4-maverick": 
"open_router", + "meta-llama/llama-4-scout": "open_router", + "microsoft/wizardlm-2-8x22b": "open_router", + "mistralai/mistral-nemo": "open_router", + "moonshotai/kimi-k2": "open_router", + "nousresearch/hermes-3-llama-3.1-405b": "open_router", + "nousresearch/hermes-3-llama-3.1-70b": "open_router", + "nvidia/llama-3.1-nemotron-70b-instruct": "aiml_api", + "o1": "openai", + "o1-mini": "openai", + "o3-2025-04-16": "openai", + "o3-mini": "openai", + "openai/gpt-oss-120b": "open_router", + "openai/gpt-oss-20b": "open_router", + "perplexity/sonar": "open_router", + "perplexity/sonar-deep-research": "open_router", + "perplexity/sonar-pro": "open_router", + "qwen/qwen3-235b-a22b-thinking-2507": "open_router", + "qwen/qwen3-coder": "open_router", + "v0-1.0-md": "v0", + "v0-1.5-lg": "v0", + "v0-1.5-md": "v0", + "x-ai/grok-4": "open_router", + "x-ai/grok-4-fast": "open_router", + "x-ai/grok-4.1-fast": "open_router", + "x-ai/grok-code-fast-1": "open_router" + }, + "discriminator_values": [ + "claude-sonnet-4-20250514" + ] + }, + "google_oauth2_credentials": { + "credentials_provider": [ + "google" + ], + "credentials_types": [ + "oauth2" + ], + "properties": { + "id": { + "title": "Id", + "type": "string" + }, + "title": { + "anyOf": [ + { + "type": "string" + }, + { + "type": "null" + } + ], + "default": null, + "title": "Title" + }, + "provider": { + "const": "google", + "title": "Provider", + "type": "string" + }, + "type": { + "const": "oauth2", + "title": "Type", + "type": "string" + } + }, + "required": [ + "id", + "provider", + "type" + ], + "title": "CredentialsMetaInput[Literal[], Literal['oauth2']]", + "type": "object", + "credentials_scopes": [ + "https://www.googleapis.com/auth/gmail.send" + ], + "discriminator_values": [] + }, + "jina_api_key_credentials": { + "credentials_provider": [ + "jina" + ], + "credentials_types": [ + "api_key" + ], + "properties": { + "id": { + "title": "Id", + "type": "string" + }, + "title": { + "anyOf": [ + { + "type": "string" + }, + { + "type": "null" + } + ], + "default": null, + "title": "Title" + }, + "provider": { + "const": "jina", + "title": "Provider", + "type": "string" + }, + "type": { + "const": "api_key", + "title": "Type", + "type": "string" + } + }, + "required": [ + "id", + "provider", + "type" + ], + "title": "CredentialsMetaInput[Literal[], Literal['api_key']]", + "type": "object", + "discriminator_values": [] + } + }, + "required": [ + "openai_api_key_credentials", + "open_router_api_key_credentials", + "anthropic_api_key_credentials", + "google_oauth2_credentials", + "jina_api_key_credentials" + ], + "title": "PersonalNewsletterCredentialsInputSchema", + "type": "object" + } +} \ No newline at end of file diff --git a/autogpt_platform/backend/agents/agent_31daf49d-31d3-476b-aa4c-099abc59b458.json b/autogpt_platform/backend/agents/agent_31daf49d-31d3-476b-aa4c-099abc59b458.json new file mode 100644 index 0000000000..c796083b28 --- /dev/null +++ b/autogpt_platform/backend/agents/agent_31daf49d-31d3-476b-aa4c-099abc59b458.json @@ -0,0 +1,590 @@ +{ + "id": "7b2e2095-782a-4f8d-adda-e62b661bccf5", + "version": 29, + "is_active": false, + "name": "Unspirational Poster Maker", + "description": "This witty AI agent generates hilariously relatable \"motivational\" posters that tackle the everyday struggles of procrastination, overthinking, and workplace chaos with a blend of absurdity and sarcasm. 
From goldfish facing impossible tasks to cats in existential crises, The Unspirational Poster Maker designs tongue-in-cheek graphics and captions that mock productivity clich\u00e9s and embrace our collective struggles to \"get it together.\" Perfect for adding a touch of humour to the workday, these posters remind us that sometimes, all we can do is laugh at the chaos.", + "instructions": null, + "recommended_schedule_cron": null, + "nodes": [ + { + "id": "5ac3727a-1ea7-436b-a902-ef1bfd883a30", + "block_id": "363ae599-353e-4804-937e-b2ee3cef3da4", + "input_default": { + "name": "Generated Image", + "description": "The resulting generated image ready for you to review and post." + }, + "metadata": { + "position": { + "x": 2329.937006807125, + "y": 80.49068076698347 + } + }, + "input_links": [ + { + "id": "c6c511e8-e6a4-4969-9bc8-f67d60c1e229", + "source_id": "86665e90-ffbf-48fb-ad3f-e5d31fd50c51", + "sink_id": "5ac3727a-1ea7-436b-a902-ef1bfd883a30", + "source_name": "result", + "sink_name": "value", + "is_static": false + }, + { + "id": "20845dda-91de-4508-8077-0504b1a5ae03", + "source_id": "28bda769-b88b-44c9-be5c-52c2667f137e", + "sink_id": "5ac3727a-1ea7-436b-a902-ef1bfd883a30", + "source_name": "result", + "sink_name": "value", + "is_static": false + }, + { + "id": "6524c611-774b-45e9-899d-9a6aa80c549c", + "source_id": "e7cdc1a2-4427-4a8a-a31b-63c8e74842f8", + "sink_id": "5ac3727a-1ea7-436b-a902-ef1bfd883a30", + "source_name": "result", + "sink_name": "value", + "is_static": false + }, + { + "id": "714a0821-e5ba-4af7-9432-50491adda7b1", + "source_id": "576c5677-9050-4d1c-aad4-36b820c04fef", + "sink_id": "5ac3727a-1ea7-436b-a902-ef1bfd883a30", + "source_name": "result", + "sink_name": "value", + "is_static": false + } + ], + "output_links": [], + "graph_id": "7b2e2095-782a-4f8d-adda-e62b661bccf5", + "graph_version": 29, + "webhook_id": null, + "webhook": null + }, + { + "id": "7e026d19-f9a6-412f-8082-610f9ba0c410", + "block_id": "c0a8e994-ebf1-4a9c-a4d8-89d09c86741b", + "input_default": { + "name": "Theme", + "value": "Cooking" + }, + "metadata": { + "position": { + "x": -1219.5966324967521, + "y": 80.50339731789956 + } + }, + "input_links": [], + "output_links": [ + { + "id": "8c2bd1f7-b17b-4835-81b6-bb336097aa7a", + "source_id": "7e026d19-f9a6-412f-8082-610f9ba0c410", + "sink_id": "7543b9b0-0409-4cf8-bc4e-e0336273e2c4", + "source_name": "result", + "sink_name": "prompt_values_#_THEME", + "is_static": true + } + ], + "graph_id": "7b2e2095-782a-4f8d-adda-e62b661bccf5", + "graph_version": 29, + "webhook_id": null, + "webhook": null + }, + { + "id": "28bda769-b88b-44c9-be5c-52c2667f137e", + "block_id": "6ab085e2-20b3-4055-bc3e-08036e01eca6", + "input_default": { + "upscale": "No Upscale" + }, + "metadata": { + "position": { + "x": 1132.373897280427, + "y": 88.44610377514573 + } + }, + "input_links": [ + { + "id": "54588c74-e090-4e49-89e4-844b9952a585", + "source_id": "7543b9b0-0409-4cf8-bc4e-e0336273e2c4", + "sink_id": "28bda769-b88b-44c9-be5c-52c2667f137e", + "source_name": "response", + "sink_name": "prompt", + "is_static": false + } + ], + "output_links": [ + { + "id": "20845dda-91de-4508-8077-0504b1a5ae03", + "source_id": "28bda769-b88b-44c9-be5c-52c2667f137e", + "sink_id": "5ac3727a-1ea7-436b-a902-ef1bfd883a30", + "source_name": "result", + "sink_name": "value", + "is_static": false + } + ], + "graph_id": "7b2e2095-782a-4f8d-adda-e62b661bccf5", + "graph_version": 29, + "webhook_id": null, + "webhook": null + }, + { + "id": "e7cdc1a2-4427-4a8a-a31b-63c8e74842f8", + "block_id": 
"6ab085e2-20b3-4055-bc3e-08036e01eca6", + "input_default": { + "upscale": "No Upscale" + }, + "metadata": { + "position": { + "x": 590.7543882245375, + "y": 85.69546832466654 + } + }, + "input_links": [ + { + "id": "66646786-3006-4417-a6b7-0158f2603d1d", + "source_id": "7543b9b0-0409-4cf8-bc4e-e0336273e2c4", + "sink_id": "e7cdc1a2-4427-4a8a-a31b-63c8e74842f8", + "source_name": "response", + "sink_name": "prompt", + "is_static": false + } + ], + "output_links": [ + { + "id": "6524c611-774b-45e9-899d-9a6aa80c549c", + "source_id": "e7cdc1a2-4427-4a8a-a31b-63c8e74842f8", + "sink_id": "5ac3727a-1ea7-436b-a902-ef1bfd883a30", + "source_name": "result", + "sink_name": "value", + "is_static": false + } + ], + "graph_id": "7b2e2095-782a-4f8d-adda-e62b661bccf5", + "graph_version": 29, + "webhook_id": null, + "webhook": null + }, + { + "id": "576c5677-9050-4d1c-aad4-36b820c04fef", + "block_id": "6ab085e2-20b3-4055-bc3e-08036e01eca6", + "input_default": { + "upscale": "No Upscale" + }, + "metadata": { + "position": { + "x": 60.48904654237981, + "y": 86.06183359510214 + } + }, + "input_links": [ + { + "id": "201d3e03-bc06-4cee-846d-4c3c804d8857", + "source_id": "7543b9b0-0409-4cf8-bc4e-e0336273e2c4", + "sink_id": "576c5677-9050-4d1c-aad4-36b820c04fef", + "source_name": "response", + "sink_name": "prompt", + "is_static": false + } + ], + "output_links": [ + { + "id": "714a0821-e5ba-4af7-9432-50491adda7b1", + "source_id": "576c5677-9050-4d1c-aad4-36b820c04fef", + "sink_id": "5ac3727a-1ea7-436b-a902-ef1bfd883a30", + "source_name": "result", + "sink_name": "value", + "is_static": false + } + ], + "graph_id": "7b2e2095-782a-4f8d-adda-e62b661bccf5", + "graph_version": 29, + "webhook_id": null, + "webhook": null + }, + { + "id": "86665e90-ffbf-48fb-ad3f-e5d31fd50c51", + "block_id": "6ab085e2-20b3-4055-bc3e-08036e01eca6", + "input_default": { + "prompt": "A cat sprawled dramatically across an important-looking document during a work-from-home meeting, making direct eye contact with the camera while knocking over a coffee mug in slow motion. Text Overlay: \"Chaos is a career path. Be the obstacle everyone has to work around.\"", + "upscale": "No Upscale" + }, + "metadata": { + "position": { + "x": 1668.3572666956795, + "y": 89.69665262457966 + } + }, + "input_links": [ + { + "id": "509b7587-1940-4a06-808d-edde9a74f400", + "source_id": "7543b9b0-0409-4cf8-bc4e-e0336273e2c4", + "sink_id": "86665e90-ffbf-48fb-ad3f-e5d31fd50c51", + "source_name": "response", + "sink_name": "prompt", + "is_static": false + } + ], + "output_links": [ + { + "id": "c6c511e8-e6a4-4969-9bc8-f67d60c1e229", + "source_id": "86665e90-ffbf-48fb-ad3f-e5d31fd50c51", + "sink_id": "5ac3727a-1ea7-436b-a902-ef1bfd883a30", + "source_name": "result", + "sink_name": "value", + "is_static": false + } + ], + "graph_id": "7b2e2095-782a-4f8d-adda-e62b661bccf5", + "graph_version": 29, + "webhook_id": null, + "webhook": null + }, + { + "id": "7543b9b0-0409-4cf8-bc4e-e0336273e2c4", + "block_id": "1f292d4a-41a4-4977-9684-7c8d560b9f91", + "input_default": { + "model": "gpt-4o", + "prompt": "\nA photo of a sloth lounging on a desk, with its head resting on a keyboard. The keyboard is on top of a laptop with a blank spreadsheet open. A to-do list is placed beside the laptop, with the top item written as \"Do literally anything\". 
There is a text overlay that says \"If you can't outwork them, outnap them.\".\n\n\nCreate a relatable satirical, snarky, user-deprecating motivational style image based on the theme: \"{{THEME}}\".\n\nOutput only the image description and caption, without any additional commentary or formatting.", + "prompt_values": {} + }, + "metadata": { + "position": { + "x": -561.1139207164056, + "y": 78.60434452403524 + } + }, + "input_links": [ + { + "id": "8c2bd1f7-b17b-4835-81b6-bb336097aa7a", + "source_id": "7e026d19-f9a6-412f-8082-610f9ba0c410", + "sink_id": "7543b9b0-0409-4cf8-bc4e-e0336273e2c4", + "source_name": "result", + "sink_name": "prompt_values_#_THEME", + "is_static": true + } + ], + "output_links": [ + { + "id": "54588c74-e090-4e49-89e4-844b9952a585", + "source_id": "7543b9b0-0409-4cf8-bc4e-e0336273e2c4", + "sink_id": "28bda769-b88b-44c9-be5c-52c2667f137e", + "source_name": "response", + "sink_name": "prompt", + "is_static": false + }, + { + "id": "201d3e03-bc06-4cee-846d-4c3c804d8857", + "source_id": "7543b9b0-0409-4cf8-bc4e-e0336273e2c4", + "sink_id": "576c5677-9050-4d1c-aad4-36b820c04fef", + "source_name": "response", + "sink_name": "prompt", + "is_static": false + }, + { + "id": "509b7587-1940-4a06-808d-edde9a74f400", + "source_id": "7543b9b0-0409-4cf8-bc4e-e0336273e2c4", + "sink_id": "86665e90-ffbf-48fb-ad3f-e5d31fd50c51", + "source_name": "response", + "sink_name": "prompt", + "is_static": false + }, + { + "id": "66646786-3006-4417-a6b7-0158f2603d1d", + "source_id": "7543b9b0-0409-4cf8-bc4e-e0336273e2c4", + "sink_id": "e7cdc1a2-4427-4a8a-a31b-63c8e74842f8", + "source_name": "response", + "sink_name": "prompt", + "is_static": false + } + ], + "graph_id": "7b2e2095-782a-4f8d-adda-e62b661bccf5", + "graph_version": 29, + "webhook_id": null, + "webhook": null + } + ], + "links": [ + { + "id": "66646786-3006-4417-a6b7-0158f2603d1d", + "source_id": "7543b9b0-0409-4cf8-bc4e-e0336273e2c4", + "sink_id": "e7cdc1a2-4427-4a8a-a31b-63c8e74842f8", + "source_name": "response", + "sink_name": "prompt", + "is_static": false + }, + { + "id": "c6c511e8-e6a4-4969-9bc8-f67d60c1e229", + "source_id": "86665e90-ffbf-48fb-ad3f-e5d31fd50c51", + "sink_id": "5ac3727a-1ea7-436b-a902-ef1bfd883a30", + "source_name": "result", + "sink_name": "value", + "is_static": false + }, + { + "id": "6524c611-774b-45e9-899d-9a6aa80c549c", + "source_id": "e7cdc1a2-4427-4a8a-a31b-63c8e74842f8", + "sink_id": "5ac3727a-1ea7-436b-a902-ef1bfd883a30", + "source_name": "result", + "sink_name": "value", + "is_static": false + }, + { + "id": "20845dda-91de-4508-8077-0504b1a5ae03", + "source_id": "28bda769-b88b-44c9-be5c-52c2667f137e", + "sink_id": "5ac3727a-1ea7-436b-a902-ef1bfd883a30", + "source_name": "result", + "sink_name": "value", + "is_static": false + }, + { + "id": "8c2bd1f7-b17b-4835-81b6-bb336097aa7a", + "source_id": "7e026d19-f9a6-412f-8082-610f9ba0c410", + "sink_id": "7543b9b0-0409-4cf8-bc4e-e0336273e2c4", + "source_name": "result", + "sink_name": "prompt_values_#_THEME", + "is_static": true + }, + { + "id": "201d3e03-bc06-4cee-846d-4c3c804d8857", + "source_id": "7543b9b0-0409-4cf8-bc4e-e0336273e2c4", + "sink_id": "576c5677-9050-4d1c-aad4-36b820c04fef", + "source_name": "response", + "sink_name": "prompt", + "is_static": false + }, + { + "id": "714a0821-e5ba-4af7-9432-50491adda7b1", + "source_id": "576c5677-9050-4d1c-aad4-36b820c04fef", + "sink_id": "5ac3727a-1ea7-436b-a902-ef1bfd883a30", + "source_name": "result", + "sink_name": "value", + "is_static": false + }, + { + "id": "54588c74-e090-4e49-89e4-844b9952a585", + 
"source_id": "7543b9b0-0409-4cf8-bc4e-e0336273e2c4", + "sink_id": "28bda769-b88b-44c9-be5c-52c2667f137e", + "source_name": "response", + "sink_name": "prompt", + "is_static": false + }, + { + "id": "509b7587-1940-4a06-808d-edde9a74f400", + "source_id": "7543b9b0-0409-4cf8-bc4e-e0336273e2c4", + "sink_id": "86665e90-ffbf-48fb-ad3f-e5d31fd50c51", + "source_name": "response", + "sink_name": "prompt", + "is_static": false + } + ], + "forked_from_id": null, + "forked_from_version": null, + "sub_graphs": [], + "user_id": "", + "created_at": "2024-12-20T19:58:34.390Z", + "input_schema": { + "type": "object", + "properties": { + "Theme": { + "advanced": false, + "secret": false, + "title": "Theme", + "default": "Cooking" + } + }, + "required": [] + }, + "output_schema": { + "type": "object", + "properties": { + "Generated Image": { + "advanced": false, + "secret": false, + "title": "Generated Image", + "description": "The resulting generated image ready for you to review and post." + } + }, + "required": [ + "Generated Image" + ] + }, + "has_external_trigger": false, + "has_human_in_the_loop": false, + "trigger_setup_info": null, + "credentials_input_schema": { + "properties": { + "ideogram_api_key_credentials": { + "credentials_provider": [ + "ideogram" + ], + "credentials_types": [ + "api_key" + ], + "properties": { + "id": { + "title": "Id", + "type": "string" + }, + "title": { + "anyOf": [ + { + "type": "string" + }, + { + "type": "null" + } + ], + "default": null, + "title": "Title" + }, + "provider": { + "const": "ideogram", + "title": "Provider", + "type": "string" + }, + "type": { + "const": "api_key", + "title": "Type", + "type": "string" + } + }, + "required": [ + "id", + "provider", + "type" + ], + "title": "CredentialsMetaInput[Literal[], Literal['api_key']]", + "type": "object", + "discriminator_values": [] + }, + "openai_api_key_credentials": { + "credentials_provider": [ + "openai" + ], + "credentials_types": [ + "api_key" + ], + "properties": { + "id": { + "title": "Id", + "type": "string" + }, + "title": { + "anyOf": [ + { + "type": "string" + }, + { + "type": "null" + } + ], + "default": null, + "title": "Title" + }, + "provider": { + "const": "openai", + "title": "Provider", + "type": "string" + }, + "type": { + "const": "api_key", + "title": "Type", + "type": "string" + } + }, + "required": [ + "id", + "provider", + "type" + ], + "title": "CredentialsMetaInput[Literal[], Literal['api_key']]", + "type": "object", + "discriminator": "model", + "discriminator_mapping": { + "Llama-3.3-70B-Instruct": "llama_api", + "Llama-3.3-8B-Instruct": "llama_api", + "Llama-4-Maverick-17B-128E-Instruct-FP8": "llama_api", + "Llama-4-Scout-17B-16E-Instruct-FP8": "llama_api", + "Qwen/Qwen2.5-72B-Instruct-Turbo": "aiml_api", + "amazon/nova-lite-v1": "open_router", + "amazon/nova-micro-v1": "open_router", + "amazon/nova-pro-v1": "open_router", + "claude-3-7-sonnet-20250219": "anthropic", + "claude-3-haiku-20240307": "anthropic", + "claude-haiku-4-5-20251001": "anthropic", + "claude-opus-4-1-20250805": "anthropic", + "claude-opus-4-20250514": "anthropic", + "claude-opus-4-5-20251101": "anthropic", + "claude-sonnet-4-20250514": "anthropic", + "claude-sonnet-4-5-20250929": "anthropic", + "cohere/command-r-08-2024": "open_router", + "cohere/command-r-plus-08-2024": "open_router", + "deepseek/deepseek-chat": "open_router", + "deepseek/deepseek-r1-0528": "open_router", + "dolphin-mistral:latest": "ollama", + "google/gemini-2.0-flash-001": "open_router", + "google/gemini-2.0-flash-lite-001": "open_router", + 
"google/gemini-2.5-flash": "open_router", + "google/gemini-2.5-flash-lite-preview-06-17": "open_router", + "google/gemini-2.5-pro-preview-03-25": "open_router", + "google/gemini-3-pro-preview": "open_router", + "gpt-3.5-turbo": "openai", + "gpt-4-turbo": "openai", + "gpt-4.1-2025-04-14": "openai", + "gpt-4.1-mini-2025-04-14": "openai", + "gpt-4o": "openai", + "gpt-4o-mini": "openai", + "gpt-5-2025-08-07": "openai", + "gpt-5-chat-latest": "openai", + "gpt-5-mini-2025-08-07": "openai", + "gpt-5-nano-2025-08-07": "openai", + "gpt-5.1-2025-11-13": "openai", + "gryphe/mythomax-l2-13b": "open_router", + "llama-3.1-8b-instant": "groq", + "llama-3.3-70b-versatile": "groq", + "llama3": "ollama", + "llama3.1:405b": "ollama", + "llama3.2": "ollama", + "llama3.3": "ollama", + "meta-llama/Llama-3.2-3B-Instruct-Turbo": "aiml_api", + "meta-llama/Llama-3.3-70B-Instruct-Turbo": "aiml_api", + "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo": "aiml_api", + "meta-llama/llama-4-maverick": "open_router", + "meta-llama/llama-4-scout": "open_router", + "microsoft/wizardlm-2-8x22b": "open_router", + "mistralai/mistral-nemo": "open_router", + "moonshotai/kimi-k2": "open_router", + "nousresearch/hermes-3-llama-3.1-405b": "open_router", + "nousresearch/hermes-3-llama-3.1-70b": "open_router", + "nvidia/llama-3.1-nemotron-70b-instruct": "aiml_api", + "o1": "openai", + "o1-mini": "openai", + "o3-2025-04-16": "openai", + "o3-mini": "openai", + "openai/gpt-oss-120b": "open_router", + "openai/gpt-oss-20b": "open_router", + "perplexity/sonar": "open_router", + "perplexity/sonar-deep-research": "open_router", + "perplexity/sonar-pro": "open_router", + "qwen/qwen3-235b-a22b-thinking-2507": "open_router", + "qwen/qwen3-coder": "open_router", + "v0-1.0-md": "v0", + "v0-1.5-lg": "v0", + "v0-1.5-md": "v0", + "x-ai/grok-4": "open_router", + "x-ai/grok-4-fast": "open_router", + "x-ai/grok-4.1-fast": "open_router", + "x-ai/grok-code-fast-1": "open_router" + }, + "discriminator_values": [ + "gpt-4o" + ] + } + }, + "required": [ + "ideogram_api_key_credentials", + "openai_api_key_credentials" + ], + "title": "UnspirationalPosterMakerCredentialsInputSchema", + "type": "object" + } +} \ No newline at end of file diff --git a/autogpt_platform/backend/agents/agent_415b7352-0dc6-4214-9d87-0ad3751b711d.json b/autogpt_platform/backend/agents/agent_415b7352-0dc6-4214-9d87-0ad3751b711d.json new file mode 100644 index 0000000000..3b52477795 --- /dev/null +++ b/autogpt_platform/backend/agents/agent_415b7352-0dc6-4214-9d87-0ad3751b711d.json @@ -0,0 +1,4953 @@ +{ + "id": "5231292e-9f27-4ac0-bda6-b67daf1fa765", + "version": 145, + "is_active": true, + "name": "Smart Meeting Prep", + "description": "Never walk into a meeting unprepared again. Every day at 4 pm, the Smart Meeting Prep Agent scans your calendar for tomorrow's external meetings. It reviews your past email exchanges, researches each participant's background and role, and compiles the insights into a concise briefing, so you can close your workday ready for tomorrow's calls.\n\nHow It Works\n1. At 4 pm, the agent scans your calendar and identifies external meetings scheduled for the next day.\n2. It reviews recent email threads with each participant to surface key relationship history and communication context.\n3. It conducts online research to gather publicly available information on roles, company backgrounds, and relevant professional data.\n4. 
It produces a unified briefing for each participant, including past exchange highlights, profile notes, and strategic conversation points.", + "instructions": null, + "recommended_schedule_cron": null, + "nodes": [ + { + "id": "292f5bea-5d68-4ba7-9ce7-dc357caa662e", + "block_id": "80bc3ed1-e9a4-449e-8163-a8fc86f74f6a", + "input_default": { + "max_events": 10, + "start_time": "2025-06-03T23:00:00.000Z", + "calendar_id": "primary", + "time_range_days": 1, + "include_declined_events": false + }, + "metadata": { + "position": { + "x": -4695.248227566036, + "y": 1025.6785924803073 + } + }, + "input_links": [ + { + "id": "13f0408a-ff6f-46e3-8bc2-d0494affd0be", + "source_id": "2f36814e-5346-4495-a690-114392c69e5a", + "sink_id": "292f5bea-5d68-4ba7-9ce7-dc357caa662e", + "source_name": "output", + "sink_name": "start_time", + "is_static": false + } + ], + "output_links": [ + { + "id": "df28bad4-d719-4c70-abb0-3c9f205c36a1", + "source_id": "292f5bea-5d68-4ba7-9ce7-dc357caa662e", + "sink_id": "140fbcac-7b0a-4e31-97b8-f82d40aba8ad", + "source_name": "events", + "sink_name": "collection", + "is_static": false + }, + { + "id": "db5ab8be-2ba6-48e9-bb60-7e1c1357b2ca", + "source_id": "292f5bea-5d68-4ba7-9ce7-dc357caa662e", + "sink_id": "e0b6c552-d7bb-47ca-811c-586b1d64dfa8", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "0d38c2f5-7ff2-4245-937d-ed196f340e62", + "source_id": "292f5bea-5d68-4ba7-9ce7-dc357caa662e", + "sink_id": "375fbdcf-1401-472d-a3ee-44626f9e324b", + "source_name": "event_#_attendees", + "sink_name": "value", + "is_static": false + } + ], + "graph_id": "5231292e-9f27-4ac0-bda6-b67daf1fa765", + "graph_version": 145, + "webhook_id": null, + "webhook": null + }, + { + "id": "375fbdcf-1401-472d-a3ee-44626f9e324b", + "block_id": "95d1b990-ce13-4d88-9737-ba5c2070c97b", + "input_default": { + "type": "string" + }, + "metadata": { + "position": { + "x": -1919.2965406426972, + "y": 517.5094962063698 + } + }, + "input_links": [ + { + "id": "0d38c2f5-7ff2-4245-937d-ed196f340e62", + "source_id": "292f5bea-5d68-4ba7-9ce7-dc357caa662e", + "sink_id": "375fbdcf-1401-472d-a3ee-44626f9e324b", + "source_name": "event_#_attendees", + "sink_name": "value", + "is_static": false + } + ], + "output_links": [ + { + "id": "5778d55e-6c46-4e08-a4fc-47aefe562a9a", + "source_id": "375fbdcf-1401-472d-a3ee-44626f9e324b", + "sink_id": "39a1e6f2-8a30-46bd-a8f4-9faeb9013033", + "source_name": "value", + "sink_name": "Attendees List", + "is_static": false + } + ], + "graph_id": "5231292e-9f27-4ac0-bda6-b67daf1fa765", + "graph_version": 145, + "webhook_id": null, + "webhook": null + }, + { + "id": "36a914b3-f48c-4fd5-bc85-abcef3ab8ece", + "block_id": "363ae599-353e-4804-937e-b2ee3cef3da4", + "input_default": { + "name": "Meeting Prep Report Text", + "secret": false, + "advanced": false, + "description": "Plain Text Report, the full report was emailed to you." 
+ }, + "metadata": { + "position": { + "x": 3275.241152538072, + "y": 867.6720519460005 + } + }, + "input_links": [ + { + "id": "da7a8692-42d4-426b-a5e8-fc08b49416d1", + "source_id": "39a1e6f2-8a30-46bd-a8f4-9faeb9013033", + "sink_id": "36a914b3-f48c-4fd5-bc85-abcef3ab8ece", + "source_name": "Attendee Research", + "sink_name": "value", + "is_static": false + } + ], + "output_links": [], + "graph_id": "5231292e-9f27-4ac0-bda6-b67daf1fa765", + "graph_version": 145, + "webhook_id": null, + "webhook": null + }, + { + "id": "e0b6c552-d7bb-47ca-811c-586b1d64dfa8", + "block_id": "363ae599-353e-4804-937e-b2ee3cef3da4", + "input_default": { + "name": "Error", + "secret": false, + "advanced": false + }, + "metadata": { + "position": { + "x": -2720.993908836589, + "y": 8787.545599772538 + } + }, + "input_links": [ + { + "id": "db5ab8be-2ba6-48e9-bb60-7e1c1357b2ca", + "source_id": "292f5bea-5d68-4ba7-9ce7-dc357caa662e", + "sink_id": "e0b6c552-d7bb-47ca-811c-586b1d64dfa8", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "ff44a42d-6510-409c-bb7c-5d2b435f98b9", + "source_id": "b38b33ea-1751-4f8e-932c-bde5ba898a6a", + "sink_id": "e0b6c552-d7bb-47ca-811c-586b1d64dfa8", + "source_name": "output", + "sink_name": "value", + "is_static": true + }, + { + "id": "bb4aeae7-66d3-45ce-9219-e0922b0388dc", + "source_id": "39a1e6f2-8a30-46bd-a8f4-9faeb9013033", + "sink_id": "e0b6c552-d7bb-47ca-811c-586b1d64dfa8", + "source_name": "Error", + "sink_name": "value", + "is_static": false + } + ], + "output_links": [], + "graph_id": "5231292e-9f27-4ac0-bda6-b67daf1fa765", + "graph_version": 145, + "webhook_id": null, + "webhook": null + }, + { + "id": "b38b33ea-1751-4f8e-932c-bde5ba898a6a", + "block_id": "1ff065e9-88e8-4358-9d82-8dc91f622ba9", + "input_default": { + "data": "Invalid email. Please enter your email address in the format: your@email.com." + }, + "metadata": { + "position": { + "x": -4672.316992893083, + "y": -1035.715960747806 + } + }, + "input_links": [ + { + "id": "2fd790d9-537c-4638-9317-c98fcae93100", + "source_id": "bc5b090b-4250-4a82-ac1a-9ad43a4682c4", + "sink_id": "b38b33ea-1751-4f8e-932c-bde5ba898a6a", + "source_name": "negative", + "sink_name": "input", + "is_static": false + } + ], + "output_links": [ + { + "id": "ff44a42d-6510-409c-bb7c-5d2b435f98b9", + "source_id": "b38b33ea-1751-4f8e-932c-bde5ba898a6a", + "sink_id": "e0b6c552-d7bb-47ca-811c-586b1d64dfa8", + "source_name": "output", + "sink_name": "value", + "is_static": true + } + ], + "graph_id": "5231292e-9f27-4ac0-bda6-b67daf1fa765", + "graph_version": 145, + "webhook_id": null, + "webhook": null + }, + { + "id": "72c7f3e2-f1d2-4e1f-a9e0-8c44e0b7a20f", + "block_id": "363ae599-353e-4804-937e-b2ee3cef3da4", + "input_default": { + "name": "Internal Only Meeting", + "secret": false, + "advanced": false, + "description": "At least one internal meeting was detected, I won't brief you for it as I'm assuming you already have background info on your colleagues." 
+ }, + "metadata": { + "position": { + "x": 3227.3962414727475, + "y": 2548.626128665094 + } + }, + "input_links": [ + { + "id": "358195c9-34ed-4842-8fe8-711520b1f43f", + "source_id": "39a1e6f2-8a30-46bd-a8f4-9faeb9013033", + "sink_id": "72c7f3e2-f1d2-4e1f-a9e0-8c44e0b7a20f", + "source_name": "Internal Only Meeting", + "sink_name": "value", + "is_static": false + } + ], + "output_links": [], + "graph_id": "5231292e-9f27-4ac0-bda6-b67daf1fa765", + "graph_version": 145, + "webhook_id": null, + "webhook": null + }, + { + "id": "c2b870d0-c997-4246-b4e5-db5032c9a964", + "block_id": "363ae599-353e-4804-937e-b2ee3cef3da4", + "input_default": { + "name": "Email Status", + "secret": false, + "advanced": false, + "description": "Whether or not the briefing was successfully sent" + }, + "metadata": { + "position": { + "x": 3893.5172040435923, + "y": -973.2636913682546 + } + }, + "input_links": [ + { + "id": "882efe40-7e50-4303-9996-8c8761466dc2", + "source_id": "1b300455-fc85-42fd-b007-c1383d08ada1", + "sink_id": "c2b870d0-c997-4246-b4e5-db5032c9a964", + "source_name": "Email Status", + "sink_name": "value", + "is_static": false + } + ], + "output_links": [], + "graph_id": "5231292e-9f27-4ac0-bda6-b67daf1fa765", + "graph_version": 145, + "webhook_id": null, + "webhook": null + }, + { + "id": "86aefe01-bc1b-4143-b37c-d970fc161616", + "block_id": "1ff065e9-88e8-4358-9d82-8dc91f622ba9", + "input_default": { + "data": "You have no meetings on your calendar tomorrow, so I haven\u2019t sent an email \u2014 enjoy the peace and quiet!\n" + }, + "metadata": { + "position": { + "x": -2683.159978461873, + "y": 3832.233478029798 + } + }, + "input_links": [ + { + "id": "4861e4c3-1772-477b-9f39-1e71f5b3b14b", + "source_id": "81999510-7240-4f04-9e2e-b8b838b6ae04", + "sink_id": "86aefe01-bc1b-4143-b37c-d970fc161616", + "source_name": "yes_output", + "sink_name": "input", + "is_static": false + } + ], + "output_links": [ + { + "id": "a81255d1-6707-442d-9fac-5814ec801a81", + "source_id": "86aefe01-bc1b-4143-b37c-d970fc161616", + "sink_id": "c8184979-92dc-4a66-8e4c-6eac1eaaeb1f", + "source_name": "output", + "sink_name": "value", + "is_static": true + } + ], + "graph_id": "5231292e-9f27-4ac0-bda6-b67daf1fa765", + "graph_version": 145, + "webhook_id": null, + "webhook": null + }, + { + "id": "c8184979-92dc-4a66-8e4c-6eac1eaaeb1f", + "block_id": "363ae599-353e-4804-937e-b2ee3cef3da4", + "input_default": { + "name": "No Meetings Found", + "secret": false, + "advanced": false + }, + "metadata": { + "position": { + "x": -2119.3676563329755, + "y": 3831.7669244753833 + } + }, + "input_links": [ + { + "id": "a81255d1-6707-442d-9fac-5814ec801a81", + "source_id": "86aefe01-bc1b-4143-b37c-d970fc161616", + "sink_id": "c8184979-92dc-4a66-8e4c-6eac1eaaeb1f", + "source_name": "output", + "sink_name": "value", + "is_static": true + } + ], + "output_links": [], + "graph_id": "5231292e-9f27-4ac0-bda6-b67daf1fa765", + "graph_version": 145, + "webhook_id": null, + "webhook": null + }, + { + "id": "140fbcac-7b0a-4e31-97b8-f82d40aba8ad", + "block_id": "3c9c2f42-b0c3-435f-ba35-05f7a25c772a", + "input_default": {}, + "metadata": { + "position": { + "x": -3797.5359658821435, + "y": 3827.3338921781456 + } + }, + "input_links": [ + { + "id": "df28bad4-d719-4c70-abb0-3c9f205c36a1", + "source_id": "292f5bea-5d68-4ba7-9ce7-dc357caa662e", + "sink_id": "140fbcac-7b0a-4e31-97b8-f82d40aba8ad", + "source_name": "events", + "sink_name": "collection", + "is_static": false + } + ], + "output_links": [ + { + "id": "7f63a53a-1b74-4cc4-99a5-c698f3eebcd1", + 
"source_id": "140fbcac-7b0a-4e31-97b8-f82d40aba8ad", + "sink_id": "81999510-7240-4f04-9e2e-b8b838b6ae04", + "source_name": "count", + "sink_name": "value1", + "is_static": false + } + ], + "graph_id": "5231292e-9f27-4ac0-bda6-b67daf1fa765", + "graph_version": 145, + "webhook_id": null, + "webhook": null + }, + { + "id": "81999510-7240-4f04-9e2e-b8b838b6ae04", + "block_id": "715696a0-e1da-45c8-b209-c2fa9c3b0be6", + "input_default": { + "value2": "0", + "operator": "==" + }, + "metadata": { + "position": { + "x": -3250.822170585202, + "y": 3831.6129255495543 + } + }, + "input_links": [ + { + "id": "7f63a53a-1b74-4cc4-99a5-c698f3eebcd1", + "source_id": "140fbcac-7b0a-4e31-97b8-f82d40aba8ad", + "sink_id": "81999510-7240-4f04-9e2e-b8b838b6ae04", + "source_name": "count", + "sink_name": "value1", + "is_static": false + } + ], + "output_links": [ + { + "id": "4861e4c3-1772-477b-9f39-1e71f5b3b14b", + "source_id": "81999510-7240-4f04-9e2e-b8b838b6ae04", + "sink_id": "86aefe01-bc1b-4143-b37c-d970fc161616", + "source_name": "yes_output", + "sink_name": "input", + "is_static": false + } + ], + "graph_id": "5231292e-9f27-4ac0-bda6-b67daf1fa765", + "graph_version": 145, + "webhook_id": null, + "webhook": null + }, + { + "id": "591bee73-2d6e-4420-a142-6343d5c08628", + "block_id": "cc10ff7b-7753-4ff2-9af6-9399b1a7eddc", + "input_default": { + "text": "Check if there are just no meetings tomorrow." + }, + "metadata": { + "position": { + "x": -3796.3852666270996, + "y": 3431.8497318339414 + } + }, + "input_links": [], + "output_links": [], + "graph_id": "5231292e-9f27-4ac0-bda6-b67daf1fa765", + "graph_version": 145, + "webhook_id": null, + "webhook": null + }, + { + "id": "c2ec43d3-fbd8-444c-8e63-d8f449f092e3", + "block_id": "363ae599-353e-4804-937e-b2ee3cef3da4", + "input_default": { + "name": "Meeting Prep Report Error", + "secret": false, + "advanced": false, + "description": "Error generating or emailing the final report - please email contact@agpt.co" + }, + "metadata": { + "position": { + "x": 3216.287190359684, + "y": -988.0598836224085 + } + }, + "input_links": [ + { + "id": "a88ffe8a-b611-49c8-9fb9-46bce891e023", + "source_id": "1b300455-fc85-42fd-b007-c1383d08ada1", + "sink_id": "c2ec43d3-fbd8-444c-8e63-d8f449f092e3", + "source_name": "Error", + "sink_name": "value", + "is_static": false + } + ], + "output_links": [], + "graph_id": "5231292e-9f27-4ac0-bda6-b67daf1fa765", + "graph_version": 145, + "webhook_id": null, + "webhook": null + }, + { + "id": "23598a96-f6b4-497e-b6f2-2094a0686e47", + "block_id": "1ff065e9-88e8-4358-9d82-8dc91f622ba9", + "input_default": {}, + "metadata": { + "position": { + "x": -5232.793780572323, + "y": -1033.1249675080244 + } + }, + "input_links": [ + { + "id": "fe88b71c-6d4e-4c79-8fb6-5dfbde952653", + "source_id": "bc5b090b-4250-4a82-ac1a-9ad43a4682c4", + "sink_id": "23598a96-f6b4-497e-b6f2-2094a0686e47", + "source_name": "positive", + "sink_name": "input", + "is_static": false + } + ], + "output_links": [ + { + "id": "d6451364-3742-4506-a111-da24f71da2ce", + "source_id": "23598a96-f6b4-497e-b6f2-2094a0686e47", + "sink_id": "39a1e6f2-8a30-46bd-a8f4-9faeb9013033", + "source_name": "output", + "sink_name": "User's Email", + "is_static": true + }, + { + "id": "23393c58-1bd1-4fa9-91d8-8c3d7a9bd887", + "source_id": "23598a96-f6b4-497e-b6f2-2094a0686e47", + "sink_id": "1b300455-fc85-42fd-b007-c1383d08ada1", + "source_name": "output", + "sink_name": "Recipient Email", + "is_static": true + } + ], + "graph_id": "5231292e-9f27-4ac0-bda6-b67daf1fa765", + "graph_version": 145, + 
"webhook_id": null, + "webhook": null + }, + { + "id": "39c9af7e-2380-49b3-87e2-72ed10e00c4c", + "block_id": "7fcd3bcb-8e1b-4e69-903d-32d3d4a92158", + "input_default": { + "name": "Your email address", + "secret": false, + "advanced": false, + "description": "The work email address you use for your meetings. \nYour daily briefings will be sent to you here.", + "placeholder_values": [] + }, + "metadata": { + "position": { + "x": -6422.331283936919, + "y": -1031.7516197244809 + } + }, + "input_links": [], + "output_links": [ + { + "id": "a7fc2437-d157-42e8-9cd5-6e6f2365dc8a", + "source_id": "39c9af7e-2380-49b3-87e2-72ed10e00c4c", + "sink_id": "bc5b090b-4250-4a82-ac1a-9ad43a4682c4", + "source_name": "result", + "sink_name": "data", + "is_static": true + }, + { + "id": "09284cc3-7322-4f26-b181-783910ba3d40", + "source_id": "39c9af7e-2380-49b3-87e2-72ed10e00c4c", + "sink_id": "bc5b090b-4250-4a82-ac1a-9ad43a4682c4", + "source_name": "result", + "sink_name": "text", + "is_static": true + } + ], + "graph_id": "5231292e-9f27-4ac0-bda6-b67daf1fa765", + "graph_version": 145, + "webhook_id": null, + "webhook": null + }, + { + "id": "22f95a78-f6b1-470a-8d07-36661accce5b", + "block_id": "b29c1b50-5d0e-4d9f-8f9d-1b0e6fcbf0b1", + "input_default": { + "offset": "-1", + "trigger": "go", + "format_type": { + "discriminator": "iso8601" + } + }, + "metadata": { + "position": { + "x": -5818.802520787647, + "y": 1028.6722992737089 + } + }, + "input_links": [], + "output_links": [ + { + "id": "8762d87d-972e-4413-860c-c8f5fa5ef7a4", + "source_id": "22f95a78-f6b1-470a-8d07-36661accce5b", + "sink_id": "2f36814e-5346-4495-a690-114392c69e5a", + "source_name": "date", + "sink_name": "values_#_date", + "is_static": false + } + ], + "graph_id": "5231292e-9f27-4ac0-bda6-b67daf1fa765", + "graph_version": 145, + "webhook_id": null, + "webhook": null + }, + { + "id": "2f36814e-5346-4495-a690-114392c69e5a", + "block_id": "db7d8f02-2f44-4c55-ab7a-eae0941f0c30", + "input_default": { + "format": "{{date | safe}}T00:00:00Z", + "values": {} + }, + "metadata": { + "position": { + "x": -5268.141757324583, + "y": 1027.1597337120993 + } + }, + "input_links": [ + { + "id": "8762d87d-972e-4413-860c-c8f5fa5ef7a4", + "source_id": "22f95a78-f6b1-470a-8d07-36661accce5b", + "sink_id": "2f36814e-5346-4495-a690-114392c69e5a", + "source_name": "date", + "sink_name": "values_#_date", + "is_static": false + } + ], + "output_links": [ + { + "id": "13f0408a-ff6f-46e3-8bc2-d0494affd0be", + "source_id": "2f36814e-5346-4495-a690-114392c69e5a", + "sink_id": "292f5bea-5d68-4ba7-9ce7-dc357caa662e", + "source_name": "output", + "sink_name": "start_time", + "is_static": false + } + ], + "graph_id": "5231292e-9f27-4ac0-bda6-b67daf1fa765", + "graph_version": 145, + "webhook_id": null, + "webhook": null + }, + { + "id": "bc5b090b-4250-4a82-ac1a-9ad43a4682c4", + "block_id": "3060088f-6ed9-4928-9ba7-9c92823a7ccd", + "input_default": { + "match": "^[^\\s@]+@[^\\s@]+\\.[^\\s@]+$", + "dot_all": false, + "case_sensitive": false + }, + "metadata": { + "position": { + "x": -5777.840158296072, + "y": -1031.5404577040745 + } + }, + "input_links": [ + { + "id": "a7fc2437-d157-42e8-9cd5-6e6f2365dc8a", + "source_id": "39c9af7e-2380-49b3-87e2-72ed10e00c4c", + "sink_id": "bc5b090b-4250-4a82-ac1a-9ad43a4682c4", + "source_name": "result", + "sink_name": "data", + "is_static": true + }, + { + "id": "09284cc3-7322-4f26-b181-783910ba3d40", + "source_id": "39c9af7e-2380-49b3-87e2-72ed10e00c4c", + "sink_id": "bc5b090b-4250-4a82-ac1a-9ad43a4682c4", + "source_name": "result", + 
"sink_name": "text", + "is_static": true + } + ], + "output_links": [ + { + "id": "fe88b71c-6d4e-4c79-8fb6-5dfbde952653", + "source_id": "bc5b090b-4250-4a82-ac1a-9ad43a4682c4", + "sink_id": "23598a96-f6b4-497e-b6f2-2094a0686e47", + "source_name": "positive", + "sink_name": "input", + "is_static": false + }, + { + "id": "2fd790d9-537c-4638-9317-c98fcae93100", + "source_id": "bc5b090b-4250-4a82-ac1a-9ad43a4682c4", + "sink_id": "b38b33ea-1751-4f8e-932c-bde5ba898a6a", + "source_name": "negative", + "sink_name": "input", + "is_static": false + } + ], + "graph_id": "5231292e-9f27-4ac0-bda6-b67daf1fa765", + "graph_version": 145, + "webhook_id": null, + "webhook": null + }, + { + "id": "39a1e6f2-8a30-46bd-a8f4-9faeb9013033", + "block_id": "e189baac-8c20-45a1-94a7-55177ea42565", + "input_default": { + "inputs": {}, + "user_id": "b3e41ea4-2f4c-4964-927c-fe682d857bad", + "graph_id": "4ab85485-082c-4131-a594-9d822b23d9d4", + "input_schema": { + "type": "object", + "required": [ + "Attendees List", + "User's Email" + ], + "properties": { + "User's Email": { + "anyOf": [ + { + "type": "string", + "format": "short-text" + }, + { + "type": "null" + } + ], + "title": "User's Email", + "secret": false, + "advanced": false + }, + "Attendees List": { + "anyOf": [ + { + "type": "string", + "format": "long-text" + }, + { + "type": "null" + } + ], + "title": "Attendees List", + "secret": false, + "advanced": false + } + } + }, + "graph_version": 32, + "output_schema": { + "type": "object", + "required": [ + "Attendee Research", + "Error", + "Internal Only Meeting" + ], + "properties": { + "Error": { + "title": "Error", + "secret": false, + "advanced": false + }, + "Attendee Research": { + "title": "Attendee Research", + "secret": false, + "advanced": false + }, + "Internal Only Meeting": { + "title": "Internal Only Meeting", + "secret": false, + "advanced": false + } + } + } + }, + "metadata": { + "position": { + "x": -611.1639892104845, + "y": 997.9458693057281 + } + }, + "input_links": [ + { + "id": "d6451364-3742-4506-a111-da24f71da2ce", + "source_id": "23598a96-f6b4-497e-b6f2-2094a0686e47", + "sink_id": "39a1e6f2-8a30-46bd-a8f4-9faeb9013033", + "source_name": "output", + "sink_name": "User's Email", + "is_static": true + }, + { + "id": "5778d55e-6c46-4e08-a4fc-47aefe562a9a", + "source_id": "375fbdcf-1401-472d-a3ee-44626f9e324b", + "sink_id": "39a1e6f2-8a30-46bd-a8f4-9faeb9013033", + "source_name": "value", + "sink_name": "Attendees List", + "is_static": false + } + ], + "output_links": [ + { + "id": "90c1597e-2cd9-4866-8a1c-7caec0d408db", + "source_id": "39a1e6f2-8a30-46bd-a8f4-9faeb9013033", + "sink_id": "1b300455-fc85-42fd-b007-c1383d08ada1", + "source_name": "Attendee Research", + "sink_name": "Report Text", + "is_static": false + }, + { + "id": "bb4aeae7-66d3-45ce-9219-e0922b0388dc", + "source_id": "39a1e6f2-8a30-46bd-a8f4-9faeb9013033", + "sink_id": "e0b6c552-d7bb-47ca-811c-586b1d64dfa8", + "source_name": "Error", + "sink_name": "value", + "is_static": false + }, + { + "id": "da7a8692-42d4-426b-a5e8-fc08b49416d1", + "source_id": "39a1e6f2-8a30-46bd-a8f4-9faeb9013033", + "sink_id": "36a914b3-f48c-4fd5-bc85-abcef3ab8ece", + "source_name": "Attendee Research", + "sink_name": "value", + "is_static": false + }, + { + "id": "358195c9-34ed-4842-8fe8-711520b1f43f", + "source_id": "39a1e6f2-8a30-46bd-a8f4-9faeb9013033", + "sink_id": "72c7f3e2-f1d2-4e1f-a9e0-8c44e0b7a20f", + "source_name": "Internal Only Meeting", + "sink_name": "value", + "is_static": false + } + ], + "graph_id": 
"5231292e-9f27-4ac0-bda6-b67daf1fa765", + "graph_version": 145, + "webhook_id": null, + "webhook": null + }, + { + "id": "1b300455-fc85-42fd-b007-c1383d08ada1", + "block_id": "e189baac-8c20-45a1-94a7-55177ea42565", + "input_default": { + "inputs": { + "Email Subject": "Attendee Backgrounds for your Meeting Tomorrow", + "Report Design Brief": "# Email Report Design Prompt (for a Meeting Prep Agent)\n\n**Mood & Tone**\nProfessional, calm, high-trust, low-noise. Minimal but powerful decoration; whitespace does the heavy lifting.\n\n**Design Tokens (Hex)**\n\n* **Primary**: `#2F6FED` (actions, links, highlights)\n* **Accent**: `#10B981` (positive highlights, subtle KPIs)\n* **Text/Neutrals**:\n\n * Text-Strong: `#0B1220`\n * Text-Muted: `#5B6472`\n * Border: `#E6EAF2`\n * Surface: `#FFFFFF`\n * Canvas: `#F6F8FB`\n* **Semantic**:\n\n * Success: `#14804A`\n * Warning: `#B45309`\n * Danger: `#B91C1C`\n * Info: `#2563EB`\n\n**Typography (System Fonts only)**\n\n* Font stack: `-apple-system, BlinkMacSystemFont, \"Segoe UI\", Roboto, Arial, sans-serif`\n* Heading scale (tight, bold): H1 20px/26px, H2 16px/22px, H3 14px/20px\n* Body: 14px/20px; Muted: 13px/18px\n* Links: Primary color, always underlined, no hover effects (email-safe)\n\n**Layout & Spacing**\n\n* Outer wrapper: 100% width with Canvas background; center a 600px card with Surface background.\n* Padding: 24px outer, 16px inner blocks.\n* Spacing system: 4/8/16/24px; default gap between blocks: 16px.\n* Dividers: 1px Border color, full width, 16px vertical margin.\n\n**Core Components**\n\n* **Header Bar**:\n\n * Background: Surface\n * Title (H1) left-aligned; optional tiny muted subtitle beneath.\n * Optional small Primary top-border (3px) spanning the card for a branded touch.\n* **KPI Tiles (3\u20134 across on desktop, stacked in mobile)**:\n\n * Container: tinted Surface (Primary at 6% tint \u2192 `#E8F0FF`) with 8px radius, 12px padding, 1px Border.\n * Value: 20px bold; Label: 12\u201313px muted.\n * Use Accent tint (`#EAF7F2`) for \u201cgood\u201d metrics; Warning/Danger tints for issues.\n* **Tables (for lists & comparisons)**:\n\n * Header row: Text-Strong, 12px uppercase, letter-spacing 0.5px, bottom border.\n * Rows: 14px body; zebra striping with `#FAFBFE` every other row.\n * Cell padding: 12px; Grid lines: Border color.\n * Numeric columns right-aligned; status columns use chips.\n* **Status Chips**:\n\n * Pill (14px text, 6px vertical padding, 10px horizontal, 999px radius).\n * Success/Warning/Danger/Info use semantic colors with very light background tint and solid text/border in the same hue.\n* **Callouts**:\n\n * Left border 3px in semantic color; background a very light corresponding tint; 12px padding; 8px radius.\n* **Buttons (Bulletproof)**:\n\n * Primary: Primary background, white text, 6\u20138px radius, 14px medium weight, 14\u201316px vertical padding.\n * Secondary: White background, 1px Border, Text-Strong, same padding.\n * Use `` styled as a button (no images).\n* **Tags/Badges**:\n\n * Small pill, uppercase 11px, muted text, light tint background; used sparingly.\n* **Footer**:\n\n * Muted 12px text, high spacing above (24px), no heavy borders.\n\n**Accessibility**\n\n* Body text contrast \u2265 4.5:1.\n* Do not rely on color alone\u2014pair status color with label text.\n* Links always underlined. 
Minimum touch target height 36\u201340px for buttons.\n\n**Email-Safe Constraints (enforce)**\n\n* Inline CSS only; table-based layout; avoid flex/grid.\n* No web fonts, background images, or box shadows; use borders and tints instead.\n* Avoid absolute positioning; keep max width 600px; make buttons and chips pure HTML/CSS.\n* Provide meaningful alt text for any images.\n\n**Micro-Patterns**\n\n* **Section Lead-In**: tiny uppercase kicker (11px, Primary), then H2.\n* **Key Number**: 28px bold number with small muted caption below.\n* **Inline Label\u2013Value**: Label muted 12px, value 14px strong, separated by 8px." + }, + "user_id": "b3e41ea4-2f4c-4964-927c-fe682d857bad", + "graph_id": "b29bb8a9-d0ab-4858-adfa-af6baa7a81d9", + "input_schema": { + "type": "object", + "required": [ + "Report Text", + "Recipient Email", + "Email Subject" + ], + "properties": { + "Report Text": { + "anyOf": [ + { + "type": "string", + "format": "long-text" + }, + { + "type": "null" + } + ], + "title": "Report Text", + "secret": false, + "advanced": false + }, + "Email Subject": { + "anyOf": [ + { + "type": "string", + "format": "short-text" + }, + { + "type": "null" + } + ], + "title": "Email Subject", + "secret": false, + "advanced": false, + "description": "The subject line for the email. i.e. \"Your Report\"" + }, + "Recipient Email": { + "anyOf": [ + { + "type": "string", + "format": "short-text" + }, + { + "type": "null" + } + ], + "title": "Recipient Email", + "secret": false, + "advanced": false, + "description": "The Email Address to send the report to. i.e. your@email.com" + }, + "Report Design Brief": { + "anyOf": [ + { + "type": "string", + "format": "long-text" + }, + { + "type": "null" + } + ], + "title": "Report Design Brief", + "secret": false, + "default": "default", + "advanced": true, + "description": "(optional) Briefly describe how you would like your report to look.\n\nFor example \"Style this like a Stripe documentation page\" or \"Make it look like a high-end medical journal\"" + } + } + }, + "graph_version": 6, + "output_schema": { + "type": "object", + "required": [ + "Error", + "Raw HTML", + "Email Status" + ], + "properties": { + "Error": { + "title": "Error", + "secret": false, + "advanced": false + }, + "Raw HTML": { + "title": "Raw HTML", + "secret": false, + "advanced": false + }, + "Email Status": { + "title": "Email Status", + "secret": false, + "advanced": false + } + } + } + }, + "metadata": { + "position": { + "x": 2317.4492213928424, + "y": -942.2912786229219 + } + }, + "input_links": [ + { + "id": "90c1597e-2cd9-4866-8a1c-7caec0d408db", + "source_id": "39a1e6f2-8a30-46bd-a8f4-9faeb9013033", + "sink_id": "1b300455-fc85-42fd-b007-c1383d08ada1", + "source_name": "Attendee Research", + "sink_name": "Report Text", + "is_static": false + }, + { + "id": "23393c58-1bd1-4fa9-91d8-8c3d7a9bd887", + "source_id": "23598a96-f6b4-497e-b6f2-2094a0686e47", + "sink_id": "1b300455-fc85-42fd-b007-c1383d08ada1", + "source_name": "output", + "sink_name": "Recipient Email", + "is_static": true + } + ], + "output_links": [ + { + "id": "882efe40-7e50-4303-9996-8c8761466dc2", + "source_id": "1b300455-fc85-42fd-b007-c1383d08ada1", + "sink_id": "c2b870d0-c997-4246-b4e5-db5032c9a964", + "source_name": "Email Status", + "sink_name": "value", + "is_static": false + }, + { + "id": "a88ffe8a-b611-49c8-9fb9-46bce891e023", + "source_id": "1b300455-fc85-42fd-b007-c1383d08ada1", + "sink_id": "c2ec43d3-fbd8-444c-8e63-d8f449f092e3", + "source_name": "Error", + "sink_name": "value", + "is_static": false + } + 
], + "graph_id": "5231292e-9f27-4ac0-bda6-b67daf1fa765", + "graph_version": 145, + "webhook_id": null, + "webhook": null + } + ], + "links": [ + { + "id": "358195c9-34ed-4842-8fe8-711520b1f43f", + "source_id": "39a1e6f2-8a30-46bd-a8f4-9faeb9013033", + "sink_id": "72c7f3e2-f1d2-4e1f-a9e0-8c44e0b7a20f", + "source_name": "Internal Only Meeting", + "sink_name": "value", + "is_static": false + }, + { + "id": "da7a8692-42d4-426b-a5e8-fc08b49416d1", + "source_id": "39a1e6f2-8a30-46bd-a8f4-9faeb9013033", + "sink_id": "36a914b3-f48c-4fd5-bc85-abcef3ab8ece", + "source_name": "Attendee Research", + "sink_name": "value", + "is_static": false + }, + { + "id": "7f63a53a-1b74-4cc4-99a5-c698f3eebcd1", + "source_id": "140fbcac-7b0a-4e31-97b8-f82d40aba8ad", + "sink_id": "81999510-7240-4f04-9e2e-b8b838b6ae04", + "source_name": "count", + "sink_name": "value1", + "is_static": false + }, + { + "id": "a88ffe8a-b611-49c8-9fb9-46bce891e023", + "source_id": "1b300455-fc85-42fd-b007-c1383d08ada1", + "sink_id": "c2ec43d3-fbd8-444c-8e63-d8f449f092e3", + "source_name": "Error", + "sink_name": "value", + "is_static": false + }, + { + "id": "0d38c2f5-7ff2-4245-937d-ed196f340e62", + "source_id": "292f5bea-5d68-4ba7-9ce7-dc357caa662e", + "sink_id": "375fbdcf-1401-472d-a3ee-44626f9e324b", + "source_name": "event_#_attendees", + "sink_name": "value", + "is_static": false + }, + { + "id": "db5ab8be-2ba6-48e9-bb60-7e1c1357b2ca", + "source_id": "292f5bea-5d68-4ba7-9ce7-dc357caa662e", + "sink_id": "e0b6c552-d7bb-47ca-811c-586b1d64dfa8", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "13f0408a-ff6f-46e3-8bc2-d0494affd0be", + "source_id": "2f36814e-5346-4495-a690-114392c69e5a", + "sink_id": "292f5bea-5d68-4ba7-9ce7-dc357caa662e", + "source_name": "output", + "sink_name": "start_time", + "is_static": false + }, + { + "id": "23393c58-1bd1-4fa9-91d8-8c3d7a9bd887", + "source_id": "23598a96-f6b4-497e-b6f2-2094a0686e47", + "sink_id": "1b300455-fc85-42fd-b007-c1383d08ada1", + "source_name": "output", + "sink_name": "Recipient Email", + "is_static": true + }, + { + "id": "5778d55e-6c46-4e08-a4fc-47aefe562a9a", + "source_id": "375fbdcf-1401-472d-a3ee-44626f9e324b", + "sink_id": "39a1e6f2-8a30-46bd-a8f4-9faeb9013033", + "source_name": "value", + "sink_name": "Attendees List", + "is_static": false + }, + { + "id": "fe88b71c-6d4e-4c79-8fb6-5dfbde952653", + "source_id": "bc5b090b-4250-4a82-ac1a-9ad43a4682c4", + "sink_id": "23598a96-f6b4-497e-b6f2-2094a0686e47", + "source_name": "positive", + "sink_name": "input", + "is_static": false + }, + { + "id": "8762d87d-972e-4413-860c-c8f5fa5ef7a4", + "source_id": "22f95a78-f6b1-470a-8d07-36661accce5b", + "sink_id": "2f36814e-5346-4495-a690-114392c69e5a", + "source_name": "date", + "sink_name": "values_#_date", + "is_static": false + }, + { + "id": "ff44a42d-6510-409c-bb7c-5d2b435f98b9", + "source_id": "b38b33ea-1751-4f8e-932c-bde5ba898a6a", + "sink_id": "e0b6c552-d7bb-47ca-811c-586b1d64dfa8", + "source_name": "output", + "sink_name": "value", + "is_static": true + }, + { + "id": "4861e4c3-1772-477b-9f39-1e71f5b3b14b", + "source_id": "81999510-7240-4f04-9e2e-b8b838b6ae04", + "sink_id": "86aefe01-bc1b-4143-b37c-d970fc161616", + "source_name": "yes_output", + "sink_name": "input", + "is_static": false + }, + { + "id": "09284cc3-7322-4f26-b181-783910ba3d40", + "source_id": "39c9af7e-2380-49b3-87e2-72ed10e00c4c", + "sink_id": "bc5b090b-4250-4a82-ac1a-9ad43a4682c4", + "source_name": "result", + "sink_name": "text", + "is_static": true + }, + { + "id": 
"a7fc2437-d157-42e8-9cd5-6e6f2365dc8a", + "source_id": "39c9af7e-2380-49b3-87e2-72ed10e00c4c", + "sink_id": "bc5b090b-4250-4a82-ac1a-9ad43a4682c4", + "source_name": "result", + "sink_name": "data", + "is_static": true + }, + { + "id": "d6451364-3742-4506-a111-da24f71da2ce", + "source_id": "23598a96-f6b4-497e-b6f2-2094a0686e47", + "sink_id": "39a1e6f2-8a30-46bd-a8f4-9faeb9013033", + "source_name": "output", + "sink_name": "User's Email", + "is_static": true + }, + { + "id": "df28bad4-d719-4c70-abb0-3c9f205c36a1", + "source_id": "292f5bea-5d68-4ba7-9ce7-dc357caa662e", + "sink_id": "140fbcac-7b0a-4e31-97b8-f82d40aba8ad", + "source_name": "events", + "sink_name": "collection", + "is_static": false + }, + { + "id": "bb4aeae7-66d3-45ce-9219-e0922b0388dc", + "source_id": "39a1e6f2-8a30-46bd-a8f4-9faeb9013033", + "sink_id": "e0b6c552-d7bb-47ca-811c-586b1d64dfa8", + "source_name": "Error", + "sink_name": "value", + "is_static": false + }, + { + "id": "2fd790d9-537c-4638-9317-c98fcae93100", + "source_id": "bc5b090b-4250-4a82-ac1a-9ad43a4682c4", + "sink_id": "b38b33ea-1751-4f8e-932c-bde5ba898a6a", + "source_name": "negative", + "sink_name": "input", + "is_static": false + }, + { + "id": "882efe40-7e50-4303-9996-8c8761466dc2", + "source_id": "1b300455-fc85-42fd-b007-c1383d08ada1", + "sink_id": "c2b870d0-c997-4246-b4e5-db5032c9a964", + "source_name": "Email Status", + "sink_name": "value", + "is_static": false + }, + { + "id": "90c1597e-2cd9-4866-8a1c-7caec0d408db", + "source_id": "39a1e6f2-8a30-46bd-a8f4-9faeb9013033", + "sink_id": "1b300455-fc85-42fd-b007-c1383d08ada1", + "source_name": "Attendee Research", + "sink_name": "Report Text", + "is_static": false + }, + { + "id": "a81255d1-6707-442d-9fac-5814ec801a81", + "source_id": "86aefe01-bc1b-4143-b37c-d970fc161616", + "sink_id": "c8184979-92dc-4a66-8e4c-6eac1eaaeb1f", + "source_name": "output", + "sink_name": "value", + "is_static": true + } + ], + "forked_from_id": null, + "forked_from_version": null, + "sub_graphs": [ + { + "id": "28a7169e-e0e9-4c0f-b1b5-388373ea57f9", + "version": 32, + "is_active": true, + "name": "Research External Meeting Attendees", + "description": "", + "instructions": null, + "recommended_schedule_cron": null, + "nodes": [ + { + "id": "8d6e4f8f-0103-4ffb-bf19-7b604a21c8c1", + "block_id": "a912d5c7-6e00-4542-b2a9-8034136930e4", + "input_default": { + "values": [ + "Items" + ], + "max_size": null + }, + "metadata": { + "position": { + "x": 990.9274568525307, + "y": -151.77627995804914 + } + }, + "input_links": [], + "output_links": [ + { + "id": "66ae58a0-f8dd-4632-9603-ad6577608910", + "source_id": "8d6e4f8f-0103-4ffb-bf19-7b604a21c8c1", + "sink_id": "bb652b0a-02b8-44b2-9829-88fc516dba35", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "d7d0eee3-d82a-4d90-b7f5-b0c197a1f8e8", + "source_id": "8d6e4f8f-0103-4ffb-bf19-7b604a21c8c1", + "sink_id": "aac1cef5-f505-43b2-a529-411710bcea7d", + "source_name": "list", + "sink_name": "list", + "is_static": false + } + ] + }, + { + "id": "5bc2a8c7-45f9-4067-9480-04d98a876252", + "block_id": "95d1b990-ce13-4d88-9737-ba5c2070c97b", + "input_default": { + "type": "list" + }, + "metadata": { + "position": { + "x": -4057.6792066765515, + "y": 733.3237429158124 + } + }, + "input_links": [ + { + "id": "12637b65-a018-423f-b1df-e650b72f4f6a", + "source_id": "41b55485-236d-4e58-b788-f085fccd2688", + "sink_id": "5bc2a8c7-45f9-4067-9480-04d98a876252", + "source_name": "result", + "sink_name": "value", + "is_static": true + } + ], + "output_links": [ + { + 
"id": "ba6f64a6-b123-47e2-9f12-8b9fd366c4c0", + "source_id": "5bc2a8c7-45f9-4067-9480-04d98a876252", + "sink_id": "e68c03d3-eb89-49ec-b197-4620e69bed80", + "source_name": "value", + "sink_name": "collection", + "is_static": false + }, + { + "id": "f3fb5449-3d3c-4458-9f69-a85bc8c56337", + "source_id": "5bc2a8c7-45f9-4067-9480-04d98a876252", + "sink_id": "bb652b0a-02b8-44b2-9829-88fc516dba35", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "dd5a7337-95bb-463b-8bc0-1e502f669374", + "source_id": "5bc2a8c7-45f9-4067-9480-04d98a876252", + "sink_id": "bf994e32-3222-4272-ae05-022d07acf2a8", + "source_name": "value", + "sink_name": "items", + "is_static": false + } + ] + }, + { + "id": "41b55485-236d-4e58-b788-f085fccd2688", + "block_id": "90a56ffb-7024-4b2b-ab50-e26c5e5ab8ba", + "input_default": { + "name": "Attendees List", + "title": null, + "value": null, + "secret": false, + "advanced": false, + "description": null, + "placeholder_values": [] + }, + "metadata": { + "position": { + "x": -4939.766796585129, + "y": 729.3642963116179 + } + }, + "input_links": [], + "output_links": [ + { + "id": "12637b65-a018-423f-b1df-e650b72f4f6a", + "source_id": "41b55485-236d-4e58-b788-f085fccd2688", + "sink_id": "5bc2a8c7-45f9-4067-9480-04d98a876252", + "source_name": "result", + "sink_name": "value", + "is_static": true + } + ] + }, + { + "id": "e68c03d3-eb89-49ec-b197-4620e69bed80", + "block_id": "3c9c2f42-b0c3-435f-ba35-05f7a25c772a", + "input_default": {}, + "metadata": { + "position": { + "x": -2820.142700065319, + "y": 1413.8900415776243 + } + }, + "input_links": [ + { + "id": "ba6f64a6-b123-47e2-9f12-8b9fd366c4c0", + "source_id": "5bc2a8c7-45f9-4067-9480-04d98a876252", + "sink_id": "e68c03d3-eb89-49ec-b197-4620e69bed80", + "source_name": "value", + "sink_name": "collection", + "is_static": false + } + ], + "output_links": [ + { + "id": "db1812f4-f46f-4c7d-9ad3-37453d04f170", + "source_id": "e68c03d3-eb89-49ec-b197-4620e69bed80", + "sink_id": "13fb3e82-8098-42bb-b37e-38e63ffdf93e", + "source_name": "count", + "sink_name": "input", + "is_static": false + } + ] + }, + { + "id": "13fb3e82-8098-42bb-b37e-38e63ffdf93e", + "block_id": "1ff065e9-88e8-4358-9d82-8dc91f622ba9", + "input_default": { + "data": null + }, + "metadata": { + "position": { + "x": 2389.064482139841, + "y": 1446.115279044648 + } + }, + "input_links": [ + { + "id": "db1812f4-f46f-4c7d-9ad3-37453d04f170", + "source_id": "e68c03d3-eb89-49ec-b197-4620e69bed80", + "sink_id": "13fb3e82-8098-42bb-b37e-38e63ffdf93e", + "source_name": "count", + "sink_name": "input", + "is_static": false + } + ], + "output_links": [ + { + "id": "8113ecf0-38b8-4540-b3f8-0780531737a0", + "source_id": "13fb3e82-8098-42bb-b37e-38e63ffdf93e", + "sink_id": "51a2b1a4-9624-4efc-9a2a-955962e9261a", + "source_name": "output", + "sink_name": "value2", + "is_static": true + } + ] + }, + { + "id": "9d9d702e-137c-416f-aac5-5f5b5473d875", + "block_id": "6dbbc4b3-ca6c-42b6-b508-da52d23e13f2", + "input_default": { + "no_value": null, + "yes_value": null + }, + "metadata": { + "position": { + "x": -1053.732606369223, + "y": 46.018177536569056 + } + }, + "input_links": [ + { + "id": "7cea7f65-9768-4a19-8d5a-0244b36dc224", + "source_id": "7b343208-476f-4c79-8745-43fbbc5f12a6", + "sink_id": "9d9d702e-137c-416f-aac5-5f5b5473d875", + "source_name": "positive", + "sink_name": "input", + "is_static": false + }, + { + "id": "0fa405e9-0f78-4fc6-92d3-e58bc0aa3d9a", + "source_id": "bf994e32-3222-4272-ae05-022d07acf2a8", + "sink_id": 
"9d9d702e-137c-416f-aac5-5f5b5473d875", + "source_name": "item", + "sink_name": "no_value", + "is_static": false + }, + { + "id": "3d29605c-d46d-405f-b552-dbc4ef7fd7d2", + "source_id": "5d98d672-00fe-4843-a744-b3f2be26d705", + "sink_id": "9d9d702e-137c-416f-aac5-5f5b5473d875", + "source_name": "output", + "sink_name": "value", + "is_static": true + }, + { + "id": "08b6ee07-24d8-4d8c-928a-c7d3d840251e", + "source_id": "bf994e32-3222-4272-ae05-022d07acf2a8", + "sink_id": "9d9d702e-137c-416f-aac5-5f5b5473d875", + "source_name": "item", + "sink_name": "yes_value", + "is_static": false + } + ], + "output_links": [ + { + "id": "ae6561b3-6fc1-47f5-88c0-10eade2aa223", + "source_id": "9d9d702e-137c-416f-aac5-5f5b5473d875", + "sink_id": "3fa806a2-e5b1-44c8-9128-8a59b8ce4cd2", + "source_name": "no_output", + "sink_name": "Email to Research", + "is_static": false + }, + { + "id": "f89e59ba-64e1-485b-be15-282c4f6055fa", + "source_id": "9d9d702e-137c-416f-aac5-5f5b5473d875", + "sink_id": "f11e61f1-4851-49f6-9660-44c9d85b51d4", + "source_name": "yes_output", + "sink_name": "values_#_email", + "is_static": false + } + ] + }, + { + "id": "bf994e32-3222-4272-ae05-022d07acf2a8", + "block_id": "f66a3543-28d3-4ab5-8945-9b336371e2ce", + "input_default": { + "items": [], + "items_str": "", + "items_object": {} + }, + "metadata": { + "position": { + "x": -2818.4348689810117, + "y": 150.89764218147366 + } + }, + "input_links": [ + { + "id": "dd5a7337-95bb-463b-8bc0-1e502f669374", + "source_id": "5bc2a8c7-45f9-4067-9480-04d98a876252", + "sink_id": "bf994e32-3222-4272-ae05-022d07acf2a8", + "source_name": "value", + "sink_name": "items", + "is_static": false + } + ], + "output_links": [ + { + "id": "0fa405e9-0f78-4fc6-92d3-e58bc0aa3d9a", + "source_id": "bf994e32-3222-4272-ae05-022d07acf2a8", + "sink_id": "9d9d702e-137c-416f-aac5-5f5b5473d875", + "source_name": "item", + "sink_name": "no_value", + "is_static": false + }, + { + "id": "6c88dbfb-0af2-4fe0-9948-e06cf7f0d33e", + "source_id": "bf994e32-3222-4272-ae05-022d07acf2a8", + "sink_id": "7b343208-476f-4c79-8745-43fbbc5f12a6", + "source_name": "item", + "sink_name": "text", + "is_static": false + }, + { + "id": "08b6ee07-24d8-4d8c-928a-c7d3d840251e", + "source_id": "bf994e32-3222-4272-ae05-022d07acf2a8", + "sink_id": "9d9d702e-137c-416f-aac5-5f5b5473d875", + "source_name": "item", + "sink_name": "yes_value", + "is_static": false + } + ] + }, + { + "id": "defcbcd0-2df6-43d7-becd-7e3bcc4d7740", + "block_id": "7fcd3bcb-8e1b-4e69-903d-32d3d4a92158", + "input_default": { + "name": "User's Email", + "title": null, + "value": null, + "secret": false, + "advanced": false, + "description": null, + "placeholder_values": [] + }, + "metadata": { + "position": { + "x": -4937.255061338332, + "y": -415.4362088144616 + } + }, + "input_links": [], + "output_links": [ + { + "id": "ad7b4785-df90-4f38-96fe-af38676873e7", + "source_id": "defcbcd0-2df6-43d7-becd-7e3bcc4d7740", + "sink_id": "dadb1ee3-1672-4b1a-a6fd-587194f2643b", + "source_name": "result", + "sink_name": "text", + "is_static": true + }, + { + "id": "453bb451-58d8-472c-a5a2-c47ff6bff038", + "source_id": "defcbcd0-2df6-43d7-becd-7e3bcc4d7740", + "sink_id": "aa4b36f5-34f0-4853-b079-ea5a5a84bd63", + "source_name": "result", + "sink_name": "input", + "is_static": true + } + ] + }, + { + "id": "aac1cef5-f505-43b2-a529-411710bcea7d", + "block_id": "aeb08fc1-2fc1-4141-bc8e-f758f183a822", + "input_default": { + "list": [], + "entry": null, + "entries": [], + "position": null + }, + "metadata": { + "position": { + "x": 
1682.4780488544197, + "y": 448.7751892519923 + } + }, + "input_links": [ + { + "id": "d7d0eee3-d82a-4d90-b7f5-b0c197a1f8e8", + "source_id": "8d6e4f8f-0103-4ffb-bf19-7b604a21c8c1", + "sink_id": "aac1cef5-f505-43b2-a529-411710bcea7d", + "source_name": "list", + "sink_name": "list", + "is_static": false + }, + { + "id": "1cf0ad99-d112-4e23-9e5f-769a62c7cc4f", + "source_id": "e70f8f56-1f99-4d90-b3f4-1dbbb6e36c84", + "sink_id": "aac1cef5-f505-43b2-a529-411710bcea7d", + "source_name": "output", + "sink_name": "entry", + "is_static": false + }, + { + "id": "e94bc214-33c2-4a6a-bb2e-dd3dab125dc1", + "source_id": "f11e61f1-4851-49f6-9660-44c9d85b51d4", + "sink_id": "aac1cef5-f505-43b2-a529-411710bcea7d", + "source_name": "output", + "sink_name": "entry", + "is_static": false + }, + { + "id": "60ea725d-0dde-4ad0-92d6-8af386acd094", + "source_id": "aac1cef5-f505-43b2-a529-411710bcea7d", + "sink_id": "aac1cef5-f505-43b2-a529-411710bcea7d", + "source_name": "updated_list", + "sink_name": "list", + "is_static": false + } + ], + "output_links": [ + { + "id": "7afabefa-38e0-4920-8bd8-ca9885dd7be6", + "source_id": "aac1cef5-f505-43b2-a529-411710bcea7d", + "sink_id": "0cdc884c-e5b0-4e34-8816-8aa420e93227", + "source_name": "updated_list", + "sink_name": "collection", + "is_static": false + }, + { + "id": "16b1ce43-4173-4309-8bd1-dca65b72567f", + "source_id": "aac1cef5-f505-43b2-a529-411710bcea7d", + "sink_id": "bb652b0a-02b8-44b2-9829-88fc516dba35", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "36aa1d66-cb53-43a4-8dab-f85040473227", + "source_id": "aac1cef5-f505-43b2-a529-411710bcea7d", + "sink_id": "51a2b1a4-9624-4efc-9a2a-955962e9261a", + "source_name": "updated_list", + "sink_name": "yes_value", + "is_static": false + }, + { + "id": "60ea725d-0dde-4ad0-92d6-8af386acd094", + "source_id": "aac1cef5-f505-43b2-a529-411710bcea7d", + "sink_id": "aac1cef5-f505-43b2-a529-411710bcea7d", + "source_name": "updated_list", + "sink_name": "list", + "is_static": false + } + ] + }, + { + "id": "0cdc884c-e5b0-4e34-8816-8aa420e93227", + "block_id": "3c9c2f42-b0c3-435f-ba35-05f7a25c772a", + "input_default": {}, + "metadata": { + "position": { + "x": 2301.3488365407675, + "y": 509.9912999793819 + } + }, + "input_links": [ + { + "id": "7afabefa-38e0-4920-8bd8-ca9885dd7be6", + "source_id": "aac1cef5-f505-43b2-a529-411710bcea7d", + "sink_id": "0cdc884c-e5b0-4e34-8816-8aa420e93227", + "source_name": "updated_list", + "sink_name": "collection", + "is_static": false + } + ], + "output_links": [ + { + "id": "c5660895-3e68-466d-b4fb-6a182796bedb", + "source_id": "0cdc884c-e5b0-4e34-8816-8aa420e93227", + "sink_id": "51a2b1a4-9624-4efc-9a2a-955962e9261a", + "source_name": "count", + "sink_name": "value1", + "is_static": false + } + ] + }, + { + "id": "f9569ab7-cc2a-4c65-ba5f-83d35ddf5c3f", + "block_id": "cc10ff7b-7753-4ff2-9af6-9399b1a7eddc", + "input_default": { + "text": "Check if attendees are internal" + }, + "metadata": { + "position": { + "x": -954.8210420231826, + "y": -327.66424109765137 + } + }, + "input_links": [], + "output_links": [] + }, + { + "id": "7b343208-476f-4c79-8745-43fbbc5f12a6", + "block_id": "3146e4fe-2cdd-4f29-bd12-0c9d5bb4deb0", + "input_default": { + "group": 0, + "dot_all": true, + "pattern": "(?<=@)[^@\\s]+", + "find_all": false, + "case_sensitive": true + }, + "metadata": { + "position": { + "x": -2178.1796684448254, + "y": 552.8838690923096 + } + }, + "input_links": [ + { + "id": "6c88dbfb-0af2-4fe0-9948-e06cf7f0d33e", + "source_id": 
"bf994e32-3222-4272-ae05-022d07acf2a8", + "sink_id": "7b343208-476f-4c79-8745-43fbbc5f12a6", + "source_name": "item", + "sink_name": "text", + "is_static": false + } + ], + "output_links": [ + { + "id": "7cea7f65-9768-4a19-8d5a-0244b36dc224", + "source_id": "7b343208-476f-4c79-8745-43fbbc5f12a6", + "sink_id": "9d9d702e-137c-416f-aac5-5f5b5473d875", + "source_name": "positive", + "sink_name": "input", + "is_static": false + } + ] + }, + { + "id": "51a2b1a4-9624-4efc-9a2a-955962e9261a", + "block_id": "715696a0-e1da-45c8-b209-c2fa9c3b0be6", + "input_default": { + "no_value": null, + "operator": ">", + "yes_value": null + }, + "metadata": { + "position": { + "x": 2946.0378579705966, + "y": 685.2750616626113 + } + }, + "input_links": [ + { + "id": "8113ecf0-38b8-4540-b3f8-0780531737a0", + "source_id": "13fb3e82-8098-42bb-b37e-38e63ffdf93e", + "sink_id": "51a2b1a4-9624-4efc-9a2a-955962e9261a", + "source_name": "output", + "sink_name": "value2", + "is_static": true + }, + { + "id": "36aa1d66-cb53-43a4-8dab-f85040473227", + "source_id": "aac1cef5-f505-43b2-a529-411710bcea7d", + "sink_id": "51a2b1a4-9624-4efc-9a2a-955962e9261a", + "source_name": "updated_list", + "sink_name": "yes_value", + "is_static": false + }, + { + "id": "c5660895-3e68-466d-b4fb-6a182796bedb", + "source_id": "0cdc884c-e5b0-4e34-8816-8aa420e93227", + "sink_id": "51a2b1a4-9624-4efc-9a2a-955962e9261a", + "source_name": "count", + "sink_name": "value1", + "is_static": false + } + ], + "output_links": [ + { + "id": "ed218477-ceb8-4d08-a45f-e26365b26ebd", + "source_id": "51a2b1a4-9624-4efc-9a2a-955962e9261a", + "sink_id": "4fa775c1-4084-4ece-81eb-def0c652dc13", + "source_name": "yes_output", + "sink_name": "value", + "is_static": false + } + ] + }, + { + "id": "a064476a-2126-4beb-9730-46765d9a13fa", + "block_id": "cc10ff7b-7753-4ff2-9af6-9399b1a7eddc", + "input_default": { + "text": "= Accumulator =\n\nHere we combine all the outputs back into one list." + }, + "metadata": { + "position": { + "x": 2127.858354521276, + "y": 66.26566251729537 + } + }, + "input_links": [], + "output_links": [] + }, + { + "id": "2aa667fa-e64f-4c0c-b23e-cb34faa8d50d", + "block_id": "363ae599-353e-4804-937e-b2ee3cef3da4", + "input_default": { + "name": "Attendee Research", + "title": null, + "value": null, + "format": "", + "secret": false, + "advanced": false, + "description": null + }, + "metadata": { + "position": { + "x": 6853.688660071529, + "y": 247.66871541131957 + } + }, + "input_links": [ + { + "id": "2e72d3f5-43dc-409f-8475-26cec5358c69", + "source_id": "23404c1a-b71b-4ebb-8009-7b1049e85e22", + "sink_id": "2aa667fa-e64f-4c0c-b23e-cb34faa8d50d", + "source_name": "output", + "sink_name": "value", + "is_static": false + } + ], + "output_links": [] + }, + { + "id": "36bb85e6-d85f-4135-ae30-15680b9e2da0", + "block_id": "cc10ff7b-7753-4ff2-9af6-9399b1a7eddc", + "input_default": { + "text": "Limitation: currently you need to append an extra item onto the list to kick things off" + }, + "metadata": { + "position": { + "x": 1076.7993596989909, + "y": -514.1152790446481 + } + }, + "input_links": [], + "output_links": [] + }, + { + "id": "9b870197-d821-4397-8ac5-58a5b140a8b4", + "block_id": "1f292d4a-41a4-4977-9684-7c8d560b9f91", + "input_default": { + "model": "claude-sonnet-4-20250514", + "retry": 3, + "prompt": "You are tasked with creating an executive preparation brief for The User (\"{{user_email | safe}}\") regarding an upcoming meeting. 
This email should provide an overview of the key individuals attending the meeting, highlighting their relevant experience, current roles, and any significant achievements or areas of expertise, along with reminding The User of any prior relationship they have with the attendees and briefing them on key information that you think they should know.\n\nHere is the attendee information:\n\n\n{{ATTENDEE_INFO}}\n\n\nAnalyze the provided information carefully. Focus on extracting the most relevant and current details about each attendee. Pay particular attention to information that you think is important for The User to know before this meeting.\n\nStructure your executive briefing as follows:\n0. Do not include a subject or title, you are providing the email body only.\n1. A brief introduction stating the purpose of the briefing\n2. For each attendee:\n a. Name and current position\n b. Brief professional background\n c. Key achievements or contributions in their field\n d. Any recent notable activities or publications\n e. Areas of expertise relevant to the meeting\n f. Relationship notes and important relevant reminders/info.\n3. A concise conclusion highlighting the collective expertise of the attendees\n\nInclude the following content in your briefing:\n- Current roles and responsibilities\n- Significant past positions or experiences\n- Major contributions to their field\n- Recent publications or projects\n- Awards or recognition\n- Areas of specialization or expert knowledge\n\nFormat your briefing in a clear, professional manner, using plaintext only (no markdown). Use bullet points or short paragraphs for easy, quick readability. Ensure that the information is presented in a logical order, typically starting with the most important or relevant details.\n\nRemember to focus on the most pertinent information for the meeting context. Maintain a professional tone throughout the briefing, avoiding personal opinions or unnecessary details. Keep it short and dense in genuine value for the receiver.\n\nProvide your final executive briefing within <executive_briefing> tags. 
Your output should consist of only the executive briefing; do not include any additional commentary or notes.\n\nFormat only in plaintext as the result will be sent as an email body which doesn't support any formatting languages.\n\nIMPORTANT NOTE: If ALL attendees are internal, skip everything and instead respond with: All attendees are internal, no prep required.", + "sys_prompt": "", + "ollama_host": "localhost:11434", + "prompt_values": {} + }, + "metadata": { + "position": { + "x": 4519.657069242934, + "y": 678.7856719692923 + } + }, + "input_links": [ + { + "id": "8de21195-5685-487c-9cbd-3bed51542ebf", + "source_id": "4fa775c1-4084-4ece-81eb-def0c652dc13", + "sink_id": "9b870197-d821-4397-8ac5-58a5b140a8b4", + "source_name": "value", + "sink_name": "prompt_values_#_ATTENDEE_INFO", + "is_static": false + } + ], + "output_links": [ + { + "id": "b6bb609d-d07a-4265-917a-612bb80d97e1", + "source_id": "9b870197-d821-4397-8ac5-58a5b140a8b4", + "sink_id": "e90c4687-0af5-4ff7-8998-5a6357ae351e", + "source_name": "response", + "sink_name": "input_xml", + "is_static": false + }, + { + "id": "67c22769-3b27-4e73-8ddf-7121c831bb55", + "source_id": "9b870197-d821-4397-8ac5-58a5b140a8b4", + "sink_id": "bb652b0a-02b8-44b2-9829-88fc516dba35", + "source_name": "error", + "sink_name": "value", + "is_static": false + } + ] + }, + { + "id": "bb652b0a-02b8-44b2-9829-88fc516dba35", + "block_id": "363ae599-353e-4804-937e-b2ee3cef3da4", + "input_default": { + "name": "Error", + "title": null, + "value": null, + "format": "", + "secret": false, + "advanced": false, + "description": null + }, + "metadata": { + "position": { + "x": 1842.7434457237568, + "y": 4458.016728696583 + } + }, + "input_links": [ + { + "id": "66ae58a0-f8dd-4632-9603-ad6577608910", + "source_id": "8d6e4f8f-0103-4ffb-bf19-7b604a21c8c1", + "sink_id": "bb652b0a-02b8-44b2-9829-88fc516dba35", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "1638dc72-7f9e-48f5-8984-42434eb65a08", + "source_id": "e90c4687-0af5-4ff7-8998-5a6357ae351e", + "sink_id": "bb652b0a-02b8-44b2-9829-88fc516dba35", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "16b1ce43-4173-4309-8bd1-dca65b72567f", + "source_id": "aac1cef5-f505-43b2-a529-411710bcea7d", + "sink_id": "bb652b0a-02b8-44b2-9829-88fc516dba35", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "67c22769-3b27-4e73-8ddf-7121c831bb55", + "source_id": "9b870197-d821-4397-8ac5-58a5b140a8b4", + "sink_id": "bb652b0a-02b8-44b2-9829-88fc516dba35", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "f3fb5449-3d3c-4458-9f69-a85bc8c56337", + "source_id": "5bc2a8c7-45f9-4067-9480-04d98a876252", + "sink_id": "bb652b0a-02b8-44b2-9829-88fc516dba35", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "da9160bc-c2ba-426a-a889-4eb98b594669", + "source_id": "3fa806a2-e5b1-44c8-9128-8a59b8ce4cd2", + "sink_id": "bb652b0a-02b8-44b2-9829-88fc516dba35", + "source_name": "Error", + "sink_name": "value", + "is_static": false + } + ], + "output_links": [] + }, + { + "id": "f11e61f1-4851-49f6-9660-44c9d85b51d4", + "block_id": "db7d8f02-2f44-4c55-ab7a-eae0941f0c30", + "input_default": { + "format": "Known internal attendee, do not research or brief on: {{email}}", + "values": {} + }, + "metadata": { + "position": { + "x": 168.07272266110272, + "y": 935.4561425646702 + } + }, + "input_links": [ + { + "id": "f89e59ba-64e1-485b-be15-282c4f6055fa", + 
"source_id": "9d9d702e-137c-416f-aac5-5f5b5473d875", + "sink_id": "f11e61f1-4851-49f6-9660-44c9d85b51d4", + "source_name": "yes_output", + "sink_name": "values_#_email", + "is_static": false + } + ], + "output_links": [ + { + "id": "e94bc214-33c2-4a6a-bb2e-dd3dab125dc1", + "source_id": "f11e61f1-4851-49f6-9660-44c9d85b51d4", + "sink_id": "aac1cef5-f505-43b2-a529-411710bcea7d", + "source_name": "output", + "sink_name": "entry", + "is_static": false + } + ] + }, + { + "id": "4fa775c1-4084-4ece-81eb-def0c652dc13", + "block_id": "95d1b990-ce13-4d88-9737-ba5c2070c97b", + "input_default": { + "type": "string" + }, + "metadata": { + "position": { + "x": 3824.1548682955167, + "y": 728.9822152581054 + } + }, + "input_links": [ + { + "id": "ed218477-ceb8-4d08-a45f-e26365b26ebd", + "source_id": "51a2b1a4-9624-4efc-9a2a-955962e9261a", + "sink_id": "4fa775c1-4084-4ece-81eb-def0c652dc13", + "source_name": "yes_output", + "sink_name": "value", + "is_static": false + } + ], + "output_links": [ + { + "id": "8de21195-5685-487c-9cbd-3bed51542ebf", + "source_id": "4fa775c1-4084-4ece-81eb-def0c652dc13", + "sink_id": "9b870197-d821-4397-8ac5-58a5b140a8b4", + "source_name": "value", + "sink_name": "prompt_values_#_ATTENDEE_INFO", + "is_static": false + } + ] + }, + { + "id": "e90c4687-0af5-4ff7-8998-5a6357ae351e", + "block_id": "286380af-9529-4b55-8be0-1d7c854abdb5", + "input_default": {}, + "metadata": { + "position": { + "x": 5238.836948897109, + "y": 955.402047669719 + } + }, + "input_links": [ + { + "id": "b6bb609d-d07a-4265-917a-612bb80d97e1", + "source_id": "9b870197-d821-4397-8ac5-58a5b140a8b4", + "sink_id": "e90c4687-0af5-4ff7-8998-5a6357ae351e", + "source_name": "response", + "sink_name": "input_xml", + "is_static": false + } + ], + "output_links": [ + { + "id": "0380f22e-d7cc-4572-9b32-9e159a137603", + "source_id": "e90c4687-0af5-4ff7-8998-5a6357ae351e", + "sink_id": "edf2a105-c95a-4cca-8d87-c5aa987ef890", + "source_name": "parsed_xml", + "sink_name": "input", + "is_static": false + }, + { + "id": "1638dc72-7f9e-48f5-8984-42434eb65a08", + "source_id": "e90c4687-0af5-4ff7-8998-5a6357ae351e", + "sink_id": "bb652b0a-02b8-44b2-9829-88fc516dba35", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "6362d278-da61-488f-8b24-3a1e9cc6ca31", + "source_id": "e90c4687-0af5-4ff7-8998-5a6357ae351e", + "sink_id": "23404c1a-b71b-4ebb-8009-7b1049e85e22", + "source_name": "parsed_xml", + "sink_name": "input", + "is_static": false + } + ] + }, + { + "id": "23404c1a-b71b-4ebb-8009-7b1049e85e22", + "block_id": "0e50422c-6dee-4145-83d6-3a5a392f65de", + "input_default": { + "key": "executive_briefing" + }, + "metadata": { + "position": { + "x": 6038.208459885578, + "y": 248.22785259740084 + } + }, + "input_links": [ + { + "id": "6362d278-da61-488f-8b24-3a1e9cc6ca31", + "source_id": "e90c4687-0af5-4ff7-8998-5a6357ae351e", + "sink_id": "23404c1a-b71b-4ebb-8009-7b1049e85e22", + "source_name": "parsed_xml", + "sink_name": "input", + "is_static": false + } + ], + "output_links": [ + { + "id": "2e72d3f5-43dc-409f-8475-26cec5358c69", + "source_id": "23404c1a-b71b-4ebb-8009-7b1049e85e22", + "sink_id": "2aa667fa-e64f-4c0c-b23e-cb34faa8d50d", + "source_name": "output", + "sink_name": "value", + "is_static": false + } + ] + }, + { + "id": "edf2a105-c95a-4cca-8d87-c5aa987ef890", + "block_id": "0e50422c-6dee-4145-83d6-3a5a392f65de", + "input_default": { + "key": "internal_meeting" + }, + "metadata": { + "position": { + "x": 6027.189757186089, + "y": 1205.559857532105 + } + }, + "input_links": [ + { + 
"id": "0380f22e-d7cc-4572-9b32-9e159a137603", + "source_id": "e90c4687-0af5-4ff7-8998-5a6357ae351e", + "sink_id": "edf2a105-c95a-4cca-8d87-c5aa987ef890", + "source_name": "parsed_xml", + "sink_name": "input", + "is_static": false + } + ], + "output_links": [ + { + "id": "6a582b2e-1cb3-417d-b755-76e40b80b464", + "source_id": "edf2a105-c95a-4cca-8d87-c5aa987ef890", + "sink_id": "5035ad59-ddaa-4579-9c14-8a89efbab4ff", + "source_name": "output", + "sink_name": "value", + "is_static": false + } + ] + }, + { + "id": "5035ad59-ddaa-4579-9c14-8a89efbab4ff", + "block_id": "363ae599-353e-4804-937e-b2ee3cef3da4", + "input_default": { + "name": "Internal Only Meeting", + "title": null, + "value": null, + "format": "", + "secret": false, + "advanced": false, + "description": null + }, + "metadata": { + "position": { + "x": 6854.284061920448, + "y": 1214.1889470912665 + } + }, + "input_links": [ + { + "id": "6a582b2e-1cb3-417d-b755-76e40b80b464", + "source_id": "edf2a105-c95a-4cca-8d87-c5aa987ef890", + "sink_id": "5035ad59-ddaa-4579-9c14-8a89efbab4ff", + "source_name": "output", + "sink_name": "value", + "is_static": false + } + ], + "output_links": [] + }, + { + "id": "dadb1ee3-1672-4b1a-a6fd-587194f2643b", + "block_id": "3146e4fe-2cdd-4f29-bd12-0c9d5bb4deb0", + "input_default": { + "group": 0, + "dot_all": true, + "pattern": "(?<=@)[^@\\s]+", + "find_all": false, + "case_sensitive": true + }, + "metadata": { + "position": { + "x": -2894.8860470671607, + "y": -643.0069162291081 + } + }, + "input_links": [ + { + "id": "ad7b4785-df90-4f38-96fe-af38676873e7", + "source_id": "defcbcd0-2df6-43d7-becd-7e3bcc4d7740", + "sink_id": "dadb1ee3-1672-4b1a-a6fd-587194f2643b", + "source_name": "result", + "sink_name": "text", + "is_static": true + } + ], + "output_links": [ + { + "id": "015357e3-7a02-4c1e-95f8-65c4bff0c598", + "source_id": "dadb1ee3-1672-4b1a-a6fd-587194f2643b", + "sink_id": "5d98d672-00fe-4843-a744-b3f2be26d705", + "source_name": "positive", + "sink_name": "input", + "is_static": false + } + ] + }, + { + "id": "5d98d672-00fe-4843-a744-b3f2be26d705", + "block_id": "1ff065e9-88e8-4358-9d82-8dc91f622ba9", + "input_default": { + "data": null + }, + "metadata": { + "position": { + "x": -2334.8859194235783, + "y": -643.0068956909276 + } + }, + "input_links": [ + { + "id": "015357e3-7a02-4c1e-95f8-65c4bff0c598", + "source_id": "dadb1ee3-1672-4b1a-a6fd-587194f2643b", + "sink_id": "5d98d672-00fe-4843-a744-b3f2be26d705", + "source_name": "positive", + "sink_name": "input", + "is_static": false + } + ], + "output_links": [ + { + "id": "3d29605c-d46d-405f-b552-dbc4ef7fd7d2", + "source_id": "5d98d672-00fe-4843-a744-b3f2be26d705", + "sink_id": "9d9d702e-137c-416f-aac5-5f5b5473d875", + "source_name": "output", + "sink_name": "value", + "is_static": true + } + ] + }, + { + "id": "aa4b36f5-34f0-4853-b079-ea5a5a84bd63", + "block_id": "1ff065e9-88e8-4358-9d82-8dc91f622ba9", + "input_default": { + "data": null + }, + "metadata": { + "position": { + "x": -514.19902675328, + "y": -505.60565239229214 + } + }, + "input_links": [ + { + "id": "453bb451-58d8-472c-a5a2-c47ff6bff038", + "source_id": "defcbcd0-2df6-43d7-becd-7e3bcc4d7740", + "sink_id": "aa4b36f5-34f0-4853-b079-ea5a5a84bd63", + "source_name": "result", + "sink_name": "input", + "is_static": true + } + ], + "output_links": [ + { + "id": "27fcfa16-47d1-4cde-bc75-c898af48d9f6", + "source_id": "aa4b36f5-34f0-4853-b079-ea5a5a84bd63", + "sink_id": "3fa806a2-e5b1-44c8-9128-8a59b8ce4cd2", + "source_name": "output", + "sink_name": "Your Email", + "is_static": true + } + ] 
+ }, + { + "id": "3fa806a2-e5b1-44c8-9128-8a59b8ce4cd2", + "block_id": "e189baac-8c20-45a1-94a7-55177ea42565", + "input_default": { + "inputs": {}, + "user_id": "b3e41ea4-2f4c-4964-927c-fe682d857bad", + "graph_id": "71e4d0d9-9c6c-45ea-a3ad-e1441faf7c55", + "input_schema": { + "type": "object", + "required": [ + "Email to Research", + "Your Email" + ], + "properties": { + "Your Email": { + "anyOf": [ + { + "type": "string", + "format": "short-text" + }, + { + "type": "null" + } + ], + "title": "Your Email", + "secret": false, + "advanced": false + }, + "Email to Research": { + "anyOf": [ + { + "type": "string", + "format": "short-text" + }, + { + "type": "null" + } + ], + "title": "Email to Research", + "secret": false, + "advanced": false + } + } + }, + "graph_version": 36, + "output_schema": { + "type": "object", + "required": [ + "Research Result", + "Error", + "Relationship Info" + ], + "properties": { + "Error": { + "title": "Error", + "secret": false, + "advanced": false + }, + "Research Result": { + "title": "Research Result", + "secret": false, + "advanced": false + }, + "Relationship Info": { + "title": "Relationship Info", + "secret": false, + "advanced": false + } + } + } + }, + "metadata": { + "position": { + "x": 95.9200514228832, + "y": 139.6163830028459 + } + }, + "input_links": [ + { + "id": "ae6561b3-6fc1-47f5-88c0-10eade2aa223", + "source_id": "9d9d702e-137c-416f-aac5-5f5b5473d875", + "sink_id": "3fa806a2-e5b1-44c8-9128-8a59b8ce4cd2", + "source_name": "no_output", + "sink_name": "Email to Research", + "is_static": false + }, + { + "id": "27fcfa16-47d1-4cde-bc75-c898af48d9f6", + "source_id": "aa4b36f5-34f0-4853-b079-ea5a5a84bd63", + "sink_id": "3fa806a2-e5b1-44c8-9128-8a59b8ce4cd2", + "source_name": "output", + "sink_name": "Your Email", + "is_static": true + } + ], + "output_links": [ + { + "id": "16a96716-9a6a-4249-92b7-6d1c8920e618", + "source_id": "3fa806a2-e5b1-44c8-9128-8a59b8ce4cd2", + "sink_id": "e70f8f56-1f99-4d90-b3f4-1dbbb6e36c84", + "source_name": "Relationship Info", + "sink_name": "values_#_relationship_info", + "is_static": false + }, + { + "id": "da9160bc-c2ba-426a-a889-4eb98b594669", + "source_id": "3fa806a2-e5b1-44c8-9128-8a59b8ce4cd2", + "sink_id": "bb652b0a-02b8-44b2-9829-88fc516dba35", + "source_name": "Error", + "sink_name": "value", + "is_static": false + }, + { + "id": "71d7c814-d27d-4a10-a6c2-37eeaaea8d97", + "source_id": "3fa806a2-e5b1-44c8-9128-8a59b8ce4cd2", + "sink_id": "e70f8f56-1f99-4d90-b3f4-1dbbb6e36c84", + "source_name": "Research Result", + "sink_name": "values_#_research", + "is_static": false + } + ] + }, + { + "id": "e70f8f56-1f99-4d90-b3f4-1dbbb6e36c84", + "block_id": "db7d8f02-2f44-4c55-ab7a-eae0941f0c30", + "input_default": { + "format": "# Next Attendee\n\n## Research\n{{research | safe}}\n\n## Relationship Info\n{{relationship_info | safe}}\n\n---", + "values": {} + }, + "metadata": { + "position": { + "x": 997.9674189914142, + "y": 888.7526528403145 + } + }, + "input_links": [ + { + "id": "16a96716-9a6a-4249-92b7-6d1c8920e618", + "source_id": "3fa806a2-e5b1-44c8-9128-8a59b8ce4cd2", + "sink_id": "e70f8f56-1f99-4d90-b3f4-1dbbb6e36c84", + "source_name": "Relationship Info", + "sink_name": "values_#_relationship_info", + "is_static": false + }, + { + "id": "71d7c814-d27d-4a10-a6c2-37eeaaea8d97", + "source_id": "3fa806a2-e5b1-44c8-9128-8a59b8ce4cd2", + "sink_id": "e70f8f56-1f99-4d90-b3f4-1dbbb6e36c84", + "source_name": "Research Result", + "sink_name": "values_#_research", + "is_static": false + } + ], + "output_links": [ + { + "id": 
"1cf0ad99-d112-4e23-9e5f-769a62c7cc4f", + "source_id": "e70f8f56-1f99-4d90-b3f4-1dbbb6e36c84", + "sink_id": "aac1cef5-f505-43b2-a529-411710bcea7d", + "source_name": "output", + "sink_name": "entry", + "is_static": false + } + ] + } + ], + "links": [ + { + "id": "ae6561b3-6fc1-47f5-88c0-10eade2aa223", + "source_id": "9d9d702e-137c-416f-aac5-5f5b5473d875", + "sink_id": "3fa806a2-e5b1-44c8-9128-8a59b8ce4cd2", + "source_name": "no_output", + "sink_name": "Email to Research", + "is_static": false + }, + { + "id": "71d7c814-d27d-4a10-a6c2-37eeaaea8d97", + "source_id": "3fa806a2-e5b1-44c8-9128-8a59b8ce4cd2", + "sink_id": "e70f8f56-1f99-4d90-b3f4-1dbbb6e36c84", + "source_name": "Research Result", + "sink_name": "values_#_research", + "is_static": false + }, + { + "id": "ba6f64a6-b123-47e2-9f12-8b9fd366c4c0", + "source_id": "5bc2a8c7-45f9-4067-9480-04d98a876252", + "sink_id": "e68c03d3-eb89-49ec-b197-4620e69bed80", + "source_name": "value", + "sink_name": "collection", + "is_static": false + }, + { + "id": "6c88dbfb-0af2-4fe0-9948-e06cf7f0d33e", + "source_id": "bf994e32-3222-4272-ae05-022d07acf2a8", + "sink_id": "7b343208-476f-4c79-8745-43fbbc5f12a6", + "source_name": "item", + "sink_name": "text", + "is_static": false + }, + { + "id": "e94bc214-33c2-4a6a-bb2e-dd3dab125dc1", + "source_id": "f11e61f1-4851-49f6-9660-44c9d85b51d4", + "sink_id": "aac1cef5-f505-43b2-a529-411710bcea7d", + "source_name": "output", + "sink_name": "entry", + "is_static": false + }, + { + "id": "f89e59ba-64e1-485b-be15-282c4f6055fa", + "source_id": "9d9d702e-137c-416f-aac5-5f5b5473d875", + "sink_id": "f11e61f1-4851-49f6-9660-44c9d85b51d4", + "source_name": "yes_output", + "sink_name": "values_#_email", + "is_static": false + }, + { + "id": "16a96716-9a6a-4249-92b7-6d1c8920e618", + "source_id": "3fa806a2-e5b1-44c8-9128-8a59b8ce4cd2", + "sink_id": "e70f8f56-1f99-4d90-b3f4-1dbbb6e36c84", + "source_name": "Relationship Info", + "sink_name": "values_#_relationship_info", + "is_static": false + }, + { + "id": "c5660895-3e68-466d-b4fb-6a182796bedb", + "source_id": "0cdc884c-e5b0-4e34-8816-8aa420e93227", + "sink_id": "51a2b1a4-9624-4efc-9a2a-955962e9261a", + "source_name": "count", + "sink_name": "value1", + "is_static": false + }, + { + "id": "da9160bc-c2ba-426a-a889-4eb98b594669", + "source_id": "3fa806a2-e5b1-44c8-9128-8a59b8ce4cd2", + "sink_id": "bb652b0a-02b8-44b2-9829-88fc516dba35", + "source_name": "Error", + "sink_name": "value", + "is_static": false + }, + { + "id": "0fa405e9-0f78-4fc6-92d3-e58bc0aa3d9a", + "source_id": "bf994e32-3222-4272-ae05-022d07acf2a8", + "sink_id": "9d9d702e-137c-416f-aac5-5f5b5473d875", + "source_name": "item", + "sink_name": "no_value", + "is_static": false + }, + { + "id": "36aa1d66-cb53-43a4-8dab-f85040473227", + "source_id": "aac1cef5-f505-43b2-a529-411710bcea7d", + "sink_id": "51a2b1a4-9624-4efc-9a2a-955962e9261a", + "source_name": "updated_list", + "sink_name": "yes_value", + "is_static": false + }, + { + "id": "8113ecf0-38b8-4540-b3f8-0780531737a0", + "source_id": "13fb3e82-8098-42bb-b37e-38e63ffdf93e", + "sink_id": "51a2b1a4-9624-4efc-9a2a-955962e9261a", + "source_name": "output", + "sink_name": "value2", + "is_static": true + }, + { + "id": "0380f22e-d7cc-4572-9b32-9e159a137603", + "source_id": "e90c4687-0af5-4ff7-8998-5a6357ae351e", + "sink_id": "edf2a105-c95a-4cca-8d87-c5aa987ef890", + "source_name": "parsed_xml", + "sink_name": "input", + "is_static": false + }, + { + "id": "dd5a7337-95bb-463b-8bc0-1e502f669374", + "source_id": "5bc2a8c7-45f9-4067-9480-04d98a876252", + "sink_id": 
"bf994e32-3222-4272-ae05-022d07acf2a8", + "source_name": "value", + "sink_name": "items", + "is_static": false + }, + { + "id": "6362d278-da61-488f-8b24-3a1e9cc6ca31", + "source_id": "e90c4687-0af5-4ff7-8998-5a6357ae351e", + "sink_id": "23404c1a-b71b-4ebb-8009-7b1049e85e22", + "source_name": "parsed_xml", + "sink_name": "input", + "is_static": false + }, + { + "id": "60ea725d-0dde-4ad0-92d6-8af386acd094", + "source_id": "aac1cef5-f505-43b2-a529-411710bcea7d", + "sink_id": "aac1cef5-f505-43b2-a529-411710bcea7d", + "source_name": "updated_list", + "sink_name": "list", + "is_static": false + }, + { + "id": "16b1ce43-4173-4309-8bd1-dca65b72567f", + "source_id": "aac1cef5-f505-43b2-a529-411710bcea7d", + "sink_id": "bb652b0a-02b8-44b2-9829-88fc516dba35", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "2e72d3f5-43dc-409f-8475-26cec5358c69", + "source_id": "23404c1a-b71b-4ebb-8009-7b1049e85e22", + "sink_id": "2aa667fa-e64f-4c0c-b23e-cb34faa8d50d", + "source_name": "output", + "sink_name": "value", + "is_static": false + }, + { + "id": "12637b65-a018-423f-b1df-e650b72f4f6a", + "source_id": "41b55485-236d-4e58-b788-f085fccd2688", + "sink_id": "5bc2a8c7-45f9-4067-9480-04d98a876252", + "source_name": "result", + "sink_name": "value", + "is_static": true + }, + { + "id": "1638dc72-7f9e-48f5-8984-42434eb65a08", + "source_id": "e90c4687-0af5-4ff7-8998-5a6357ae351e", + "sink_id": "bb652b0a-02b8-44b2-9829-88fc516dba35", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "d7d0eee3-d82a-4d90-b7f5-b0c197a1f8e8", + "source_id": "8d6e4f8f-0103-4ffb-bf19-7b604a21c8c1", + "sink_id": "aac1cef5-f505-43b2-a529-411710bcea7d", + "source_name": "list", + "sink_name": "list", + "is_static": false + }, + { + "id": "3d29605c-d46d-405f-b552-dbc4ef7fd7d2", + "source_id": "5d98d672-00fe-4843-a744-b3f2be26d705", + "sink_id": "9d9d702e-137c-416f-aac5-5f5b5473d875", + "source_name": "output", + "sink_name": "value", + "is_static": true + }, + { + "id": "6a582b2e-1cb3-417d-b755-76e40b80b464", + "source_id": "edf2a105-c95a-4cca-8d87-c5aa987ef890", + "sink_id": "5035ad59-ddaa-4579-9c14-8a89efbab4ff", + "source_name": "output", + "sink_name": "value", + "is_static": false + }, + { + "id": "015357e3-7a02-4c1e-95f8-65c4bff0c598", + "source_id": "dadb1ee3-1672-4b1a-a6fd-587194f2643b", + "sink_id": "5d98d672-00fe-4843-a744-b3f2be26d705", + "source_name": "positive", + "sink_name": "input", + "is_static": false + }, + { + "id": "db1812f4-f46f-4c7d-9ad3-37453d04f170", + "source_id": "e68c03d3-eb89-49ec-b197-4620e69bed80", + "sink_id": "13fb3e82-8098-42bb-b37e-38e63ffdf93e", + "source_name": "count", + "sink_name": "input", + "is_static": false + }, + { + "id": "f3fb5449-3d3c-4458-9f69-a85bc8c56337", + "source_id": "5bc2a8c7-45f9-4067-9480-04d98a876252", + "sink_id": "bb652b0a-02b8-44b2-9829-88fc516dba35", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "7cea7f65-9768-4a19-8d5a-0244b36dc224", + "source_id": "7b343208-476f-4c79-8745-43fbbc5f12a6", + "sink_id": "9d9d702e-137c-416f-aac5-5f5b5473d875", + "source_name": "positive", + "sink_name": "input", + "is_static": false + }, + { + "id": "08b6ee07-24d8-4d8c-928a-c7d3d840251e", + "source_id": "bf994e32-3222-4272-ae05-022d07acf2a8", + "sink_id": "9d9d702e-137c-416f-aac5-5f5b5473d875", + "source_name": "item", + "sink_name": "yes_value", + "is_static": false + }, + { + "id": "67c22769-3b27-4e73-8ddf-7121c831bb55", + "source_id": "9b870197-d821-4397-8ac5-58a5b140a8b4", + 
"sink_id": "bb652b0a-02b8-44b2-9829-88fc516dba35", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "8de21195-5685-487c-9cbd-3bed51542ebf", + "source_id": "4fa775c1-4084-4ece-81eb-def0c652dc13", + "sink_id": "9b870197-d821-4397-8ac5-58a5b140a8b4", + "source_name": "value", + "sink_name": "prompt_values_#_ATTENDEE_INFO", + "is_static": false + }, + { + "id": "66ae58a0-f8dd-4632-9603-ad6577608910", + "source_id": "8d6e4f8f-0103-4ffb-bf19-7b604a21c8c1", + "sink_id": "bb652b0a-02b8-44b2-9829-88fc516dba35", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "b6bb609d-d07a-4265-917a-612bb80d97e1", + "source_id": "9b870197-d821-4397-8ac5-58a5b140a8b4", + "sink_id": "e90c4687-0af5-4ff7-8998-5a6357ae351e", + "source_name": "response", + "sink_name": "input_xml", + "is_static": false + }, + { + "id": "7afabefa-38e0-4920-8bd8-ca9885dd7be6", + "source_id": "aac1cef5-f505-43b2-a529-411710bcea7d", + "sink_id": "0cdc884c-e5b0-4e34-8816-8aa420e93227", + "source_name": "updated_list", + "sink_name": "collection", + "is_static": false + }, + { + "id": "ed218477-ceb8-4d08-a45f-e26365b26ebd", + "source_id": "51a2b1a4-9624-4efc-9a2a-955962e9261a", + "sink_id": "4fa775c1-4084-4ece-81eb-def0c652dc13", + "source_name": "yes_output", + "sink_name": "value", + "is_static": false + }, + { + "id": "27fcfa16-47d1-4cde-bc75-c898af48d9f6", + "source_id": "aa4b36f5-34f0-4853-b079-ea5a5a84bd63", + "sink_id": "3fa806a2-e5b1-44c8-9128-8a59b8ce4cd2", + "source_name": "output", + "sink_name": "Your Email", + "is_static": true + }, + { + "id": "ad7b4785-df90-4f38-96fe-af38676873e7", + "source_id": "defcbcd0-2df6-43d7-becd-7e3bcc4d7740", + "sink_id": "dadb1ee3-1672-4b1a-a6fd-587194f2643b", + "source_name": "result", + "sink_name": "text", + "is_static": true + }, + { + "id": "453bb451-58d8-472c-a5a2-c47ff6bff038", + "source_id": "defcbcd0-2df6-43d7-becd-7e3bcc4d7740", + "sink_id": "aa4b36f5-34f0-4853-b079-ea5a5a84bd63", + "source_name": "result", + "sink_name": "input", + "is_static": true + }, + { + "id": "1cf0ad99-d112-4e23-9e5f-769a62c7cc4f", + "source_id": "e70f8f56-1f99-4d90-b3f4-1dbbb6e36c84", + "sink_id": "aac1cef5-f505-43b2-a529-411710bcea7d", + "source_name": "output", + "sink_name": "entry", + "is_static": false + } + ], + "forked_from_id": null, + "forked_from_version": null, + "input_schema": { + "type": "object", + "properties": { + "Attendees List": { + "advanced": false, + "anyOf": [ + { + "format": "long-text", + "type": "string" + }, + { + "type": "null" + } + ], + "secret": false, + "title": "Attendees List" + }, + "User's Email": { + "advanced": false, + "anyOf": [ + { + "format": "short-text", + "type": "string" + }, + { + "type": "null" + } + ], + "secret": false, + "title": "User's Email" + } + }, + "required": [ + "Attendees List", + "User's Email" + ] + }, + "output_schema": { + "type": "object", + "properties": { + "Attendee Research": { + "advanced": false, + "secret": false, + "title": "Attendee Research" + }, + "Error": { + "advanced": false, + "secret": false, + "title": "Error" + }, + "Internal Only Meeting": { + "advanced": false, + "secret": false, + "title": "Internal Only Meeting" + } + }, + "required": [ + "Attendee Research", + "Error", + "Internal Only Meeting" + ] + }, + "has_external_trigger": false, + "has_human_in_the_loop": false, + "trigger_setup_info": null + }, + { + "id": "8b712b87-c350-40de-a1f8-4869b4f40103", + "version": 6, + "is_active": true, + "name": "Text to Email Report", + "description": "Input 
your text, and get a beautifully designed email sent straight to your inbox", + "instructions": null, + "recommended_schedule_cron": null, + "nodes": [ + { + "id": "b2074e3b-6f46-47a1-a105-7f80b00af064", + "block_id": "363ae599-353e-4804-937e-b2ee3cef3da4", + "input_default": { + "name": "Error", + "title": null, + "value": null, + "format": "", + "secret": false, + "advanced": false, + "description": null + }, + "metadata": { + "position": { + "x": 955.6287333350833, + "y": 1415.5500101046268 + } + }, + "input_links": [ + { + "id": "17cdcfa7-0c24-4991-9798-dd0cad48444c", + "source_id": "87e23432-e584-4418-ad9a-a42dafd72f28", + "sink_id": "b2074e3b-6f46-47a1-a105-7f80b00af064", + "source_name": "error", + "sink_name": "value", + "is_static": false + } + ], + "output_links": [] + }, + { + "id": "87e23432-e584-4418-ad9a-a42dafd72f28", + "block_id": "1f292d4a-41a4-4977-9684-7c8d560b9f91", + "input_default": { + "model": "claude-sonnet-4-20250514", + "retry": 3, + "prompt": "Make this into a high-quality html email report. \n\nThe user made the following design style request:\n```\n{{design | safe}}\n```\nDo not mention or reference this style design request in the rendered html report.\n\n\nHere is the report. do not change any of it's written content, your job is just to present it exactly as written:\n```\n{{report | safe}}\n```\n\nDo not include any functional buttons, animations, or any elements that would be non functional or out of place in a static offline report.\n\nRespond with just the html, no additional commentary or decoration. No code blocks, just the html.\n", + "sys_prompt": "", + "ollama_host": "localhost:11434", + "prompt_values": {} + }, + "metadata": { + "position": { + "x": 347.6321405130907, + "y": -554.8904332378107 + } + }, + "input_links": [ + { + "id": "d95bf708-63ac-4105-ad72-25c49294d3d9", + "source_id": "22f239e3-69bc-47b7-9e7e-0a4457439fee", + "sink_id": "87e23432-e584-4418-ad9a-a42dafd72f28", + "source_name": "result", + "sink_name": "prompt_values_#_report", + "is_static": true + }, + { + "id": "0bd1be01-c6b7-4a4e-b294-9d583868e8fa", + "source_id": "b845d0d3-db13-48ff-8257-4bb63be9e5a4", + "sink_id": "87e23432-e584-4418-ad9a-a42dafd72f28", + "source_name": "result", + "sink_name": "prompt_values_#_design", + "is_static": true + } + ], + "output_links": [ + { + "id": "17cdcfa7-0c24-4991-9798-dd0cad48444c", + "source_id": "87e23432-e584-4418-ad9a-a42dafd72f28", + "sink_id": "b2074e3b-6f46-47a1-a105-7f80b00af064", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "d4ffb3bc-e397-4a38-a269-7fd4bd6a8bb2", + "source_id": "87e23432-e584-4418-ad9a-a42dafd72f28", + "sink_id": "0b3ef64f-7c87-4773-bfef-71710ea7c1d9", + "source_name": "response", + "sink_name": "value", + "is_static": false + }, + { + "id": "d7844d48-631a-4747-a3cc-df18fde200e3", + "source_id": "87e23432-e584-4418-ad9a-a42dafd72f28", + "sink_id": "c05f2dec-50d1-4bea-82bd-faeb08cdf18b", + "source_name": "response", + "sink_name": "body", + "is_static": false + } + ] + }, + { + "id": "22f239e3-69bc-47b7-9e7e-0a4457439fee", + "block_id": "90a56ffb-7024-4b2b-ab50-e26c5e5ab8ba", + "input_default": { + "name": "Report Text", + "title": null, + "value": null, + "secret": false, + "advanced": false, + "description": null, + "placeholder_values": [] + }, + "metadata": { + "position": { + "x": -811.4705680247006, + "y": -548.7174009963832 + } + }, + "input_links": [], + "output_links": [ + { + "id": "d95bf708-63ac-4105-ad72-25c49294d3d9", + "source_id": 
"22f239e3-69bc-47b7-9e7e-0a4457439fee", + "sink_id": "87e23432-e584-4418-ad9a-a42dafd72f28", + "source_name": "result", + "sink_name": "prompt_values_#_report", + "is_static": true + } + ] + }, + { + "id": "0b3ef64f-7c87-4773-bfef-71710ea7c1d9", + "block_id": "363ae599-353e-4804-937e-b2ee3cef3da4", + "input_default": { + "name": "Raw HTML", + "title": null, + "value": null, + "format": "", + "secret": false, + "advanced": false, + "description": null + }, + "metadata": { + "position": { + "x": 908.7018239625986, + "y": -541.959658221456 + } + }, + "input_links": [ + { + "id": "d4ffb3bc-e397-4a38-a269-7fd4bd6a8bb2", + "source_id": "87e23432-e584-4418-ad9a-a42dafd72f28", + "sink_id": "0b3ef64f-7c87-4773-bfef-71710ea7c1d9", + "source_name": "response", + "sink_name": "value", + "is_static": false + } + ], + "output_links": [] + }, + { + "id": "b845d0d3-db13-48ff-8257-4bb63be9e5a4", + "block_id": "90a56ffb-7024-4b2b-ab50-e26c5e5ab8ba", + "input_default": { + "name": "Report Design Brief", + "title": null, + "value": "default", + "secret": false, + "advanced": true, + "description": "(optional) Briefly describe how you would like your report to look.\n\nFor example \"Style this like a Stripe documentation page\" or \"Make it look like a high-end medical journal\"", + "placeholder_values": [] + }, + "metadata": { + "position": { + "x": -243.38695621119234, + "y": -549.2030784531711 + } + }, + "input_links": [], + "output_links": [ + { + "id": "0bd1be01-c6b7-4a4e-b294-9d583868e8fa", + "source_id": "b845d0d3-db13-48ff-8257-4bb63be9e5a4", + "sink_id": "87e23432-e584-4418-ad9a-a42dafd72f28", + "source_name": "result", + "sink_name": "prompt_values_#_design", + "is_static": true + } + ] + }, + { + "id": "c05f2dec-50d1-4bea-82bd-faeb08cdf18b", + "block_id": "6c27abc2-e51d-499e-a85f-5a0041ba94f0", + "input_default": { + "cc": [], + "to": [ + "" + ], + "bcc": [], + "attachments": [], + "content_type": null + }, + "metadata": { + "position": { + "x": 2529.745828367755, + "y": -535.7360458512633 + } + }, + "input_links": [ + { + "id": "93a95de7-4234-4ba8-84c6-24c1ebcb9298", + "source_id": "93e1b359-9b03-41be-b1b6-97bc49cef269", + "sink_id": "c05f2dec-50d1-4bea-82bd-faeb08cdf18b", + "source_name": "result", + "sink_name": "to_$_0", + "is_static": true + }, + { + "id": "67f30c70-e93e-418f-904a-5efc38e3149d", + "source_id": "8f27e2de-eb93-4f52-b214-aac616bbf9fd", + "sink_id": "c05f2dec-50d1-4bea-82bd-faeb08cdf18b", + "source_name": "result", + "sink_name": "subject", + "is_static": true + }, + { + "id": "d7844d48-631a-4747-a3cc-df18fde200e3", + "source_id": "87e23432-e584-4418-ad9a-a42dafd72f28", + "sink_id": "c05f2dec-50d1-4bea-82bd-faeb08cdf18b", + "source_name": "response", + "sink_name": "body", + "is_static": false + } + ], + "output_links": [ + { + "id": "0bf1bb7f-ab0c-4e58-9d9b-3e51c7c0bedd", + "source_id": "c05f2dec-50d1-4bea-82bd-faeb08cdf18b", + "sink_id": "03f800f3-c59d-484b-bafd-d2307e644f92", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "a3f80ad6-5f2d-45e6-a58d-8312cbb62b66", + "source_id": "c05f2dec-50d1-4bea-82bd-faeb08cdf18b", + "sink_id": "03f800f3-c59d-484b-bafd-d2307e644f92", + "source_name": "result_#_status", + "sink_name": "value", + "is_static": false + } + ] + }, + { + "id": "93e1b359-9b03-41be-b1b6-97bc49cef269", + "block_id": "7fcd3bcb-8e1b-4e69-903d-32d3d4a92158", + "input_default": { + "name": "Recipient Email", + "title": null, + "value": null, + "secret": false, + "advanced": false, + "description": "The Email Address to send the report 
to. i.e. your@email.com", + "placeholder_values": [] + }, + "metadata": { + "position": { + "x": 1439.6291079216235, + "y": -533.8659891630703 + } + }, + "input_links": [], + "output_links": [ + { + "id": "93a95de7-4234-4ba8-84c6-24c1ebcb9298", + "source_id": "93e1b359-9b03-41be-b1b6-97bc49cef269", + "sink_id": "c05f2dec-50d1-4bea-82bd-faeb08cdf18b", + "source_name": "result", + "sink_name": "to_$_0", + "is_static": true + } + ] + }, + { + "id": "03f800f3-c59d-484b-bafd-d2307e644f92", + "block_id": "363ae599-353e-4804-937e-b2ee3cef3da4", + "input_default": { + "name": "Email Status", + "title": null, + "value": null, + "format": "", + "secret": false, + "advanced": false, + "description": null + }, + "metadata": { + "position": { + "x": 3123.908492421893, + "y": 119.90445391424203 + } + }, + "input_links": [ + { + "id": "0bf1bb7f-ab0c-4e58-9d9b-3e51c7c0bedd", + "source_id": "c05f2dec-50d1-4bea-82bd-faeb08cdf18b", + "sink_id": "03f800f3-c59d-484b-bafd-d2307e644f92", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "a3f80ad6-5f2d-45e6-a58d-8312cbb62b66", + "source_id": "c05f2dec-50d1-4bea-82bd-faeb08cdf18b", + "sink_id": "03f800f3-c59d-484b-bafd-d2307e644f92", + "source_name": "result_#_status", + "sink_name": "value", + "is_static": false + } + ], + "output_links": [] + }, + { + "id": "8f27e2de-eb93-4f52-b214-aac616bbf9fd", + "block_id": "7fcd3bcb-8e1b-4e69-903d-32d3d4a92158", + "input_default": { + "name": "Email Subject", + "title": null, + "value": null, + "secret": false, + "advanced": false, + "description": "The subject line for the email. i.e. \"Your Report\"", + "placeholder_values": [] + }, + "metadata": { + "position": { + "x": 1974.0298117846444, + "y": -540.8747751530805 + } + }, + "input_links": [], + "output_links": [ + { + "id": "67f30c70-e93e-418f-904a-5efc38e3149d", + "source_id": "8f27e2de-eb93-4f52-b214-aac616bbf9fd", + "sink_id": "c05f2dec-50d1-4bea-82bd-faeb08cdf18b", + "source_name": "result", + "sink_name": "subject", + "is_static": true + } + ] + } + ], + "links": [ + { + "id": "0bd1be01-c6b7-4a4e-b294-9d583868e8fa", + "source_id": "b845d0d3-db13-48ff-8257-4bb63be9e5a4", + "sink_id": "87e23432-e584-4418-ad9a-a42dafd72f28", + "source_name": "result", + "sink_name": "prompt_values_#_design", + "is_static": true + }, + { + "id": "d95bf708-63ac-4105-ad72-25c49294d3d9", + "source_id": "22f239e3-69bc-47b7-9e7e-0a4457439fee", + "sink_id": "87e23432-e584-4418-ad9a-a42dafd72f28", + "source_name": "result", + "sink_name": "prompt_values_#_report", + "is_static": true + }, + { + "id": "a3f80ad6-5f2d-45e6-a58d-8312cbb62b66", + "source_id": "c05f2dec-50d1-4bea-82bd-faeb08cdf18b", + "sink_id": "03f800f3-c59d-484b-bafd-d2307e644f92", + "source_name": "result_#_status", + "sink_name": "value", + "is_static": false + }, + { + "id": "d4ffb3bc-e397-4a38-a269-7fd4bd6a8bb2", + "source_id": "87e23432-e584-4418-ad9a-a42dafd72f28", + "sink_id": "0b3ef64f-7c87-4773-bfef-71710ea7c1d9", + "source_name": "response", + "sink_name": "value", + "is_static": false + }, + { + "id": "0bf1bb7f-ab0c-4e58-9d9b-3e51c7c0bedd", + "source_id": "c05f2dec-50d1-4bea-82bd-faeb08cdf18b", + "sink_id": "03f800f3-c59d-484b-bafd-d2307e644f92", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "d7844d48-631a-4747-a3cc-df18fde200e3", + "source_id": "87e23432-e584-4418-ad9a-a42dafd72f28", + "sink_id": "c05f2dec-50d1-4bea-82bd-faeb08cdf18b", + "source_name": "response", + "sink_name": "body", + "is_static": false + }, + { + "id": 
"67f30c70-e93e-418f-904a-5efc38e3149d", + "source_id": "8f27e2de-eb93-4f52-b214-aac616bbf9fd", + "sink_id": "c05f2dec-50d1-4bea-82bd-faeb08cdf18b", + "source_name": "result", + "sink_name": "subject", + "is_static": true + }, + { + "id": "17cdcfa7-0c24-4991-9798-dd0cad48444c", + "source_id": "87e23432-e584-4418-ad9a-a42dafd72f28", + "sink_id": "b2074e3b-6f46-47a1-a105-7f80b00af064", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "93a95de7-4234-4ba8-84c6-24c1ebcb9298", + "source_id": "93e1b359-9b03-41be-b1b6-97bc49cef269", + "sink_id": "c05f2dec-50d1-4bea-82bd-faeb08cdf18b", + "source_name": "result", + "sink_name": "to_$_0", + "is_static": true + } + ], + "forked_from_id": null, + "forked_from_version": null, + "input_schema": { + "type": "object", + "properties": { + "Report Text": { + "advanced": false, + "anyOf": [ + { + "format": "long-text", + "type": "string" + }, + { + "type": "null" + } + ], + "secret": false, + "title": "Report Text" + }, + "Report Design Brief": { + "advanced": true, + "anyOf": [ + { + "format": "long-text", + "type": "string" + }, + { + "type": "null" + } + ], + "secret": false, + "title": "Report Design Brief", + "description": "(optional) Briefly describe how you would like your report to look.\n\nFor example \"Style this like a Stripe documentation page\" or \"Make it look like a high-end medical journal\"", + "default": "default" + }, + "Recipient Email": { + "advanced": false, + "anyOf": [ + { + "format": "short-text", + "type": "string" + }, + { + "type": "null" + } + ], + "secret": false, + "title": "Recipient Email", + "description": "The Email Address to send the report to. i.e. your@email.com" + }, + "Email Subject": { + "advanced": false, + "anyOf": [ + { + "format": "short-text", + "type": "string" + }, + { + "type": "null" + } + ], + "secret": false, + "title": "Email Subject", + "description": "The subject line for the email. i.e. 
\"Your Report\"" + } + }, + "required": [ + "Report Text", + "Recipient Email", + "Email Subject" + ] + }, + "output_schema": { + "type": "object", + "properties": { + "Error": { + "advanced": false, + "secret": false, + "title": "Error" + }, + "Raw HTML": { + "advanced": false, + "secret": false, + "title": "Raw HTML" + }, + "Email Status": { + "advanced": false, + "secret": false, + "title": "Email Status" + } + }, + "required": [ + "Error", + "Raw HTML", + "Email Status" + ] + }, + "has_external_trigger": false, + "has_human_in_the_loop": false, + "trigger_setup_info": null + }, + { + "id": "78e74323-a3d9-4b8c-ad3a-a7c416b5b00d", + "version": 36, + "is_active": true, + "name": "Research Person by Email", + "description": "", + "instructions": null, + "recommended_schedule_cron": null, + "nodes": [ + { + "id": "0c8e7071-4424-409b-af01-4892ddc5b4fa", + "block_id": "363ae599-353e-4804-937e-b2ee3cef3da4", + "input_default": { + "name": "Research Result", + "title": null, + "value": null, + "format": "", + "secret": false, + "advanced": false, + "description": null + }, + "metadata": { + "position": { + "x": -296.1798996216345, + "y": 2233.711614935227 + } + }, + "input_links": [ + { + "id": "cb61f771-fc15-4aba-852d-6e0984f26c41", + "source_id": "cdac63f1-eee2-4e99-818d-c7c9f2560d18", + "sink_id": "0c8e7071-4424-409b-af01-4892ddc5b4fa", + "source_name": "Answer", + "sink_name": "value", + "is_static": false + } + ], + "output_links": [] + }, + { + "id": "480fb04f-089a-42f4-ba0a-04903d90c953", + "block_id": "7fcd3bcb-8e1b-4e69-903d-32d3d4a92158", + "input_default": { + "name": "Email to Research", + "secret": false, + "advanced": false, + "placeholder_values": [] + }, + "metadata": { + "position": { + "x": -6108.709260576726, + "y": 1228.4257716255026 + } + }, + "input_links": [], + "output_links": [ + { + "id": "7ffdbe78-88fd-4a53-83ce-7c04d146afef", + "source_id": "480fb04f-089a-42f4-ba0a-04903d90c953", + "sink_id": "9b83fe30-6887-438a-816f-a03eb0910bc3", + "source_name": "result", + "sink_name": "values_#_email", + "is_static": true + }, + { + "id": "d03d0f61-8d43-4285-b460-ede1ef85706b", + "source_id": "480fb04f-089a-42f4-ba0a-04903d90c953", + "sink_id": "f26280fe-3296-4ce6-83ab-3c5213a43716", + "source_name": "result", + "sink_name": "prompt_values_#_target_email", + "is_static": true + }, + { + "id": "90f08627-a930-4405-8053-f2d6175d5386", + "source_id": "480fb04f-089a-42f4-ba0a-04903d90c953", + "sink_id": "e007dd2d-6b22-42d6-b1d7-f955017e0830", + "source_name": "result", + "sink_name": "prompt_values_#_target_email", + "is_static": true + }, + { + "id": "32a2b355-8c6a-4872-8902-e7bbd8e11a8b", + "source_id": "480fb04f-089a-42f4-ba0a-04903d90c953", + "sink_id": "8101e732-9400-42b6-84c1-235dfcc7274c", + "source_name": "result", + "sink_name": "prompt_values_#_target_email", + "is_static": true + } + ] + }, + { + "id": "8ae2698f-1bd8-42fc-976a-3167717ce8d2", + "block_id": "25310c70-b89b-43ba-b25c-4dfa7e2a481c", + "input_default": { + "query": "is:unread", + "max_results": 30 + }, + "metadata": { + "position": { + "x": -4643.9226948203395, + "y": 2172.989467947373 + } + }, + "input_links": [ + { + "id": "20e29442-d609-48b5-918e-919ff87280c5", + "source_id": "9b83fe30-6887-438a-816f-a03eb0910bc3", + "sink_id": "8ae2698f-1bd8-42fc-976a-3167717ce8d2", + "source_name": "output", + "sink_name": "query", + "is_static": false + } + ], + "output_links": [ + { + "id": "596f14d0-7270-429b-b779-479c8fc978be", + "source_id": "8ae2698f-1bd8-42fc-976a-3167717ce8d2", + "sink_id": 
"5356c145-be3d-4b38-a553-78ec99fb7261", + "source_name": "emails", + "sink_name": "value", + "is_static": false + }, + { + "id": "c614173a-9b4e-4e52-987b-dc60118bb2e4", + "source_id": "8ae2698f-1bd8-42fc-976a-3167717ce8d2", + "sink_id": "347fad9f-d49f-46f3-8c49-cbcf67387052", + "source_name": "error", + "sink_name": "value", + "is_static": false + } + ] + }, + { + "id": "e007dd2d-6b22-42d6-b1d7-f955017e0830", + "block_id": "1f292d4a-41a4-4977-9684-7c8d560b9f91", + "input_default": { + "model": "gpt-5-2025-08-07", + "retry": 3, + "prompt": "Identify the owner of \"{{target_email | safe}}\" from the following data. \n\nProvide their full name and any additional information that can be gleaned about their identity from the data (i.e role, company, skills/characteristics, accounts/online profile links etc). \n\nDo not make any assumptions when doing this, provide only what you can verify. \n\nIf there are any meeting notes or written emails present, use them to provide notes on the relationship between me (\"{{user_email | safe}}\") and \"{{target_email | safe}}\", and to remind me of things that it would be beneficial for me to remember regarding them and things they've told me.\n\n\n{{RAW_DATA | safe}}\n", + "ollama_host": "localhost:11434", + "prompt_values": {} + }, + "metadata": { + "position": { + "x": -3445.1320373310455, + "y": 2153.301716589669 + } + }, + "input_links": [ + { + "id": "fb24b852-da8d-4cb7-bd2d-e92ff4c3ef9b", + "source_id": "a906c063-c42f-4535-b607-c21ec0ff3ea2", + "sink_id": "e007dd2d-6b22-42d6-b1d7-f955017e0830", + "source_name": "result", + "sink_name": "prompt_values_#_user_email", + "is_static": true + }, + { + "id": "6706cc77-d18c-4a5d-a27c-583648482d9d", + "source_id": "5356c145-be3d-4b38-a553-78ec99fb7261", + "sink_id": "e007dd2d-6b22-42d6-b1d7-f955017e0830", + "source_name": "value", + "sink_name": "prompt_values_#_RAW_DATA", + "is_static": false + }, + { + "id": "90f08627-a930-4405-8053-f2d6175d5386", + "source_id": "480fb04f-089a-42f4-ba0a-04903d90c953", + "sink_id": "e007dd2d-6b22-42d6-b1d7-f955017e0830", + "source_name": "result", + "sink_name": "prompt_values_#_target_email", + "is_static": true + } + ], + "output_links": [ + { + "id": "5b703b2d-36ec-46dd-8a52-e5ee7bb415c1", + "source_id": "e007dd2d-6b22-42d6-b1d7-f955017e0830", + "sink_id": "347fad9f-d49f-46f3-8c49-cbcf67387052", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "e46395fa-cc9e-4e41-9f01-2a38bee74bb3", + "source_id": "e007dd2d-6b22-42d6-b1d7-f955017e0830", + "sink_id": "f26280fe-3296-4ce6-83ab-3c5213a43716", + "source_name": "response", + "sink_name": "prompt_values_#_research", + "is_static": false + } + ] + }, + { + "id": "5356c145-be3d-4b38-a553-78ec99fb7261", + "block_id": "95d1b990-ce13-4d88-9737-ba5c2070c97b", + "input_default": { + "type": "string" + }, + "metadata": { + "position": { + "x": -4044.709675038533, + "y": 2154.011502560513 + } + }, + "input_links": [ + { + "id": "596f14d0-7270-429b-b779-479c8fc978be", + "source_id": "8ae2698f-1bd8-42fc-976a-3167717ce8d2", + "sink_id": "5356c145-be3d-4b38-a553-78ec99fb7261", + "source_name": "emails", + "sink_name": "value", + "is_static": false + } + ], + "output_links": [ + { + "id": "6706cc77-d18c-4a5d-a27c-583648482d9d", + "source_id": "5356c145-be3d-4b38-a553-78ec99fb7261", + "sink_id": "e007dd2d-6b22-42d6-b1d7-f955017e0830", + "source_name": "value", + "sink_name": "prompt_values_#_RAW_DATA", + "is_static": false + }, + { + "id": "0ed18996-cb91-4cf6-b55c-af2a94daad78", + "source_id": 
"5356c145-be3d-4b38-a553-78ec99fb7261", + "sink_id": "347fad9f-d49f-46f3-8c49-cbcf67387052", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "531d9ab3-cdd2-4f8c-ac54-9065db1647ed", + "source_id": "5356c145-be3d-4b38-a553-78ec99fb7261", + "sink_id": "8101e732-9400-42b6-84c1-235dfcc7274c", + "source_name": "value", + "sink_name": "prompt_values_#_RAW_DATA", + "is_static": false + } + ] + }, + { + "id": "9b83fe30-6887-438a-816f-a03eb0910bc3", + "block_id": "db7d8f02-2f44-4c55-ab7a-eae0941f0c30", + "input_default": { + "format": "from:{{email | safe}} OR to:{{email | safe}} OR \"{{email | safe}}\"", + "values": {} + }, + "metadata": { + "position": { + "x": -5261.837850853275, + "y": 2145.29281130474 + } + }, + "input_links": [ + { + "id": "7ffdbe78-88fd-4a53-83ce-7c04d146afef", + "source_id": "480fb04f-089a-42f4-ba0a-04903d90c953", + "sink_id": "9b83fe30-6887-438a-816f-a03eb0910bc3", + "source_name": "result", + "sink_name": "values_#_email", + "is_static": true + } + ], + "output_links": [ + { + "id": "20e29442-d609-48b5-918e-919ff87280c5", + "source_id": "9b83fe30-6887-438a-816f-a03eb0910bc3", + "sink_id": "8ae2698f-1bd8-42fc-976a-3167717ce8d2", + "source_name": "output", + "sink_name": "query", + "is_static": false + } + ] + }, + { + "id": "a906c063-c42f-4535-b607-c21ec0ff3ea2", + "block_id": "7fcd3bcb-8e1b-4e69-903d-32d3d4a92158", + "input_default": { + "name": "Your Email", + "secret": false, + "advanced": false, + "placeholder_values": [] + }, + "metadata": { + "position": { + "x": -6011.4052138380575, + "y": 2147.1415239684898 + } + }, + "input_links": [], + "output_links": [ + { + "id": "fb24b852-da8d-4cb7-bd2d-e92ff4c3ef9b", + "source_id": "a906c063-c42f-4535-b607-c21ec0ff3ea2", + "sink_id": "e007dd2d-6b22-42d6-b1d7-f955017e0830", + "source_name": "result", + "sink_name": "prompt_values_#_user_email", + "is_static": true + }, + { + "id": "3dd822a9-10c7-4b42-b925-b983185adcc2", + "source_id": "a906c063-c42f-4535-b607-c21ec0ff3ea2", + "sink_id": "8101e732-9400-42b6-84c1-235dfcc7274c", + "source_name": "result", + "sink_name": "prompt_values_#_user_email", + "is_static": true + } + ] + }, + { + "id": "f26280fe-3296-4ce6-83ab-3c5213a43716", + "block_id": "1f292d4a-41a4-4977-9684-7c8d560b9f91", + "input_default": { + "model": "gpt-5-2025-08-07", + "retry": 3, + "prompt": "You are a brilliant prompt-writer. \nYour job is to WRITE a Deep Research prompt for Perplexity\u2019s Deep Research Agent in order to identify the owner of \"email\" that will only produce a dossier if identity is verified to a high bar.\n\nWHAT WE KNOW SO FAR (FROM THE USER'S INBOX)\n\n{{research | safe}}\n\n\nGOAL\nFrom the research above, extract {{target_email | safe}}'s identity, then GENERATE a ready-to-send Perplexity request that:\n1) Forces identity confirmation first (no guessing; return AMBIGUOUS if unsure).\n2) Avoids revealing or using private emails/meeting links in the public search.\n3) Pins the search to the correct human (disambiguates homonyms).\n4) If verified, finds out as much VERIFIABLY accurate information about the person as possible.\n\nDO NOT do any web research yourself. Only WRITE the Perplexity request.\n\nDISAMBIGUATION HEURISTICS (for the request you write)\nHave Perplexity accept a candidate ONLY if \u22652 of:\n1) The public profile is clearly software/AI/engineering related (exclude unrelated homonyms like musicians unless they ALSO meet #2). 
\n2) Signals of agents/LLMs/dev tooling/startups or affiliation with a named org you infer from public sources. \n3) Geography consistent with hints (time zone/region) \u2014 weak alone; must pair with #1 or #2. \n4) Cross-source agreement across \u22652 reputable, independent sources (e.g., LinkedIn + GitHub, company site + conf bio).\n\nWHAT TO OUTPUT (Prompt ONLY, no prose, no markdown)\n- A single set of ... xml tags containing your given prompt text for the deep research request.\n- DO NOT include any xml tags inside of these ... tags. Any additional use of whatsoever will break the xml parsing setup.", + "ollama_host": "localhost:11434", + "prompt_values": {} + }, + "metadata": { + "position": { + "x": -2797.215927245378, + "y": 2141.7155411686153 + } + }, + "input_links": [ + { + "id": "d03d0f61-8d43-4285-b460-ede1ef85706b", + "source_id": "480fb04f-089a-42f4-ba0a-04903d90c953", + "sink_id": "f26280fe-3296-4ce6-83ab-3c5213a43716", + "source_name": "result", + "sink_name": "prompt_values_#_target_email", + "is_static": true + }, + { + "id": "e46395fa-cc9e-4e41-9f01-2a38bee74bb3", + "source_id": "e007dd2d-6b22-42d6-b1d7-f955017e0830", + "sink_id": "f26280fe-3296-4ce6-83ab-3c5213a43716", + "source_name": "response", + "sink_name": "prompt_values_#_research", + "is_static": false + } + ], + "output_links": [ + { + "id": "56fa8329-98d3-4ac4-aafc-80b26767f45f", + "source_id": "f26280fe-3296-4ce6-83ab-3c5213a43716", + "sink_id": "347fad9f-d49f-46f3-8c49-cbcf67387052", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "eec6a3b5-7f34-49ae-9d49-231c7fa31b9d", + "source_id": "f26280fe-3296-4ce6-83ab-3c5213a43716", + "sink_id": "0835f399-b960-4adb-8ea0-316a61632106", + "source_name": "response", + "sink_name": "input_xml", + "is_static": false + } + ] + }, + { + "id": "cdac63f1-eee2-4e99-818d-c7c9f2560d18", + "block_id": "e189baac-8c20-45a1-94a7-55177ea42565", + "input_default": { + "inputs": {}, + "user_id": "b3e41ea4-2f4c-4964-927c-fe682d857bad", + "graph_id": "f89ac5a2-fbd0-476e-8666-23e2574236ca", + "input_schema": { + "type": "object", + "required": [ + "Question" + ], + "properties": { + "Question": { + "anyOf": [ + { + "type": "string", + "format": "short-text" + }, + { + "type": "null" + } + ], + "title": "Question", + "secret": false, + "advanced": false + } + } + }, + "graph_version": 4, + "output_schema": { + "type": "object", + "required": [ + "Answer", + "Error" + ], + "properties": { + "Error": { + "title": "Error", + "secret": false, + "advanced": false + }, + "Answer": { + "title": "Answer", + "secret": false, + "advanced": false + } + } + } + }, + "metadata": { + "position": { + "x": -1028.193817200345, + "y": 2166.664113149024 + } + }, + "input_links": [ + { + "id": "dec10a13-fcd8-40dd-abdd-3cc8f61df5f5", + "source_id": "c3fc4d6d-8ae7-44b4-a4cb-771f20a2f448", + "sink_id": "cdac63f1-eee2-4e99-818d-c7c9f2560d18", + "source_name": "output", + "sink_name": "Question", + "is_static": false + } + ], + "output_links": [ + { + "id": "cb61f771-fc15-4aba-852d-6e0984f26c41", + "source_id": "cdac63f1-eee2-4e99-818d-c7c9f2560d18", + "sink_id": "0c8e7071-4424-409b-af01-4892ddc5b4fa", + "source_name": "Answer", + "sink_name": "value", + "is_static": false + }, + { + "id": "c09a29f7-e91a-416d-bfb4-6d4da3047133", + "source_id": "cdac63f1-eee2-4e99-818d-c7c9f2560d18", + "sink_id": "347fad9f-d49f-46f3-8c49-cbcf67387052", + "source_name": "Error", + "sink_name": "value", + "is_static": false + } + ] + }, + { + "id": "0835f399-b960-4adb-8ea0-316a61632106", + 
"block_id": "286380af-9529-4b55-8be0-1d7c854abdb5", + "input_default": {}, + "metadata": { + "position": { + "x": -2198.193790964403, + "y": 2141.6640881280887 + } + }, + "input_links": [ + { + "id": "eec6a3b5-7f34-49ae-9d49-231c7fa31b9d", + "source_id": "f26280fe-3296-4ce6-83ab-3c5213a43716", + "sink_id": "0835f399-b960-4adb-8ea0-316a61632106", + "source_name": "response", + "sink_name": "input_xml", + "is_static": false + } + ], + "output_links": [ + { + "id": "f79a3f6a-f6ad-4d5a-adc9-c2abed207d30", + "source_id": "0835f399-b960-4adb-8ea0-316a61632106", + "sink_id": "347fad9f-d49f-46f3-8c49-cbcf67387052", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "2ff7d45e-269d-4321-8c43-00532892dd0d", + "source_id": "0835f399-b960-4adb-8ea0-316a61632106", + "sink_id": "c3fc4d6d-8ae7-44b4-a4cb-771f20a2f448", + "source_name": "parsed_xml", + "sink_name": "input", + "is_static": false + } + ] + }, + { + "id": "c3fc4d6d-8ae7-44b4-a4cb-771f20a2f448", + "block_id": "0e50422c-6dee-4145-83d6-3a5a392f65de", + "input_default": { + "key": "prompt" + }, + "metadata": { + "position": { + "x": -1610.6939610788666, + "y": 2149.4975376460748 + } + }, + "input_links": [ + { + "id": "2ff7d45e-269d-4321-8c43-00532892dd0d", + "source_id": "0835f399-b960-4adb-8ea0-316a61632106", + "sink_id": "c3fc4d6d-8ae7-44b4-a4cb-771f20a2f448", + "source_name": "parsed_xml", + "sink_name": "input", + "is_static": false + } + ], + "output_links": [ + { + "id": "cb24ba08-20f6-478c-8827-e6d26e83aa4a", + "source_id": "c3fc4d6d-8ae7-44b4-a4cb-771f20a2f448", + "sink_id": "347fad9f-d49f-46f3-8c49-cbcf67387052", + "source_name": "missing", + "sink_name": "value", + "is_static": false + }, + { + "id": "dec10a13-fcd8-40dd-abdd-3cc8f61df5f5", + "source_id": "c3fc4d6d-8ae7-44b4-a4cb-771f20a2f448", + "sink_id": "cdac63f1-eee2-4e99-818d-c7c9f2560d18", + "source_name": "output", + "sink_name": "Question", + "is_static": false + } + ] + }, + { + "id": "347fad9f-d49f-46f3-8c49-cbcf67387052", + "block_id": "363ae599-353e-4804-937e-b2ee3cef3da4", + "input_default": { + "name": "Error", + "secret": false, + "advanced": false + }, + "metadata": { + "position": { + "x": -2630.3106778315464, + "y": 6817.635342158574 + } + }, + "input_links": [ + { + "id": "cb24ba08-20f6-478c-8827-e6d26e83aa4a", + "source_id": "c3fc4d6d-8ae7-44b4-a4cb-771f20a2f448", + "sink_id": "347fad9f-d49f-46f3-8c49-cbcf67387052", + "source_name": "missing", + "sink_name": "value", + "is_static": false + }, + { + "id": "56fa8329-98d3-4ac4-aafc-80b26767f45f", + "source_id": "f26280fe-3296-4ce6-83ab-3c5213a43716", + "sink_id": "347fad9f-d49f-46f3-8c49-cbcf67387052", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "c614173a-9b4e-4e52-987b-dc60118bb2e4", + "source_id": "8ae2698f-1bd8-42fc-976a-3167717ce8d2", + "sink_id": "347fad9f-d49f-46f3-8c49-cbcf67387052", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "f79a3f6a-f6ad-4d5a-adc9-c2abed207d30", + "source_id": "0835f399-b960-4adb-8ea0-316a61632106", + "sink_id": "347fad9f-d49f-46f3-8c49-cbcf67387052", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "0ed18996-cb91-4cf6-b55c-af2a94daad78", + "source_id": "5356c145-be3d-4b38-a553-78ec99fb7261", + "sink_id": "347fad9f-d49f-46f3-8c49-cbcf67387052", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "5b703b2d-36ec-46dd-8a52-e5ee7bb415c1", + "source_id": "e007dd2d-6b22-42d6-b1d7-f955017e0830", + 
"sink_id": "347fad9f-d49f-46f3-8c49-cbcf67387052", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "c09a29f7-e91a-416d-bfb4-6d4da3047133", + "source_id": "cdac63f1-eee2-4e99-818d-c7c9f2560d18", + "sink_id": "347fad9f-d49f-46f3-8c49-cbcf67387052", + "source_name": "Error", + "sink_name": "value", + "is_static": false + }, + { + "id": "4b7cb407-2ed6-41c5-8926-9c923368920b", + "source_id": "8101e732-9400-42b6-84c1-235dfcc7274c", + "sink_id": "347fad9f-d49f-46f3-8c49-cbcf67387052", + "source_name": "error", + "sink_name": "value", + "is_static": false + } + ], + "output_links": [] + }, + { + "id": "8101e732-9400-42b6-84c1-235dfcc7274c", + "block_id": "1f292d4a-41a4-4977-9684-7c8d560b9f91", + "input_default": { + "model": "gpt-5-2025-08-07", + "retry": 3, + "prompt": "Analyse the relationship between The User (\"{{user_email | safe}}\") and the owner of \"{{target_email | safe}}\" from the following data. \n\nDo not make any assumptions when doing this, provide only what you can verify. \n\nIf there are any meeting notes or written emails present, use them to provide notes on the relationship between The User (\"{{user_email | safe}}\") and \"{{target_email | safe}}\", and to remind The User of things that it would be beneficial for them to remember regarding \"{{target_email | safe}}\" and things they've previously discussed, asked of them or informed them of.\n\nEven things like \"Last time you spoke (date), Joe said he was going on holiday to Italy on holiday_date, you could ask him how his trip was.\" are Beneficial.\n\n\n{{RAW_DATA | safe}}\n", + "ollama_host": "localhost:11434", + "prompt_values": {} + }, + "metadata": { + "position": { + "x": -3390.9822535560816, + "y": 4168.765045893363 + } + }, + "input_links": [ + { + "id": "531d9ab3-cdd2-4f8c-ac54-9065db1647ed", + "source_id": "5356c145-be3d-4b38-a553-78ec99fb7261", + "sink_id": "8101e732-9400-42b6-84c1-235dfcc7274c", + "source_name": "value", + "sink_name": "prompt_values_#_RAW_DATA", + "is_static": false + }, + { + "id": "32a2b355-8c6a-4872-8902-e7bbd8e11a8b", + "source_id": "480fb04f-089a-42f4-ba0a-04903d90c953", + "sink_id": "8101e732-9400-42b6-84c1-235dfcc7274c", + "source_name": "result", + "sink_name": "prompt_values_#_target_email", + "is_static": true + }, + { + "id": "3dd822a9-10c7-4b42-b925-b983185adcc2", + "source_id": "a906c063-c42f-4535-b607-c21ec0ff3ea2", + "sink_id": "8101e732-9400-42b6-84c1-235dfcc7274c", + "source_name": "result", + "sink_name": "prompt_values_#_user_email", + "is_static": true + } + ], + "output_links": [ + { + "id": "87069362-a720-4455-b8a1-d3741dd8eac0", + "source_id": "8101e732-9400-42b6-84c1-235dfcc7274c", + "sink_id": "476a3170-08be-4a26-a099-2019946e7aab", + "source_name": "response", + "sink_name": "value", + "is_static": false + }, + { + "id": "4b7cb407-2ed6-41c5-8926-9c923368920b", + "source_id": "8101e732-9400-42b6-84c1-235dfcc7274c", + "sink_id": "347fad9f-d49f-46f3-8c49-cbcf67387052", + "source_name": "error", + "sink_name": "value", + "is_static": false + } + ] + }, + { + "id": "476a3170-08be-4a26-a099-2019946e7aab", + "block_id": "363ae599-353e-4804-937e-b2ee3cef3da4", + "input_default": { + "name": "Relationship Info", + "title": null, + "value": null, + "format": "", + "secret": false, + "advanced": false, + "description": null + }, + "metadata": { + "position": { + "x": -2608.660922077564, + "y": 4180.504053779745 + } + }, + "input_links": [ + { + "id": "87069362-a720-4455-b8a1-d3741dd8eac0", + "source_id": 
"8101e732-9400-42b6-84c1-235dfcc7274c", + "sink_id": "476a3170-08be-4a26-a099-2019946e7aab", + "source_name": "response", + "sink_name": "value", + "is_static": false + } + ], + "output_links": [] + } + ], + "links": [ + { + "id": "5b703b2d-36ec-46dd-8a52-e5ee7bb415c1", + "source_id": "e007dd2d-6b22-42d6-b1d7-f955017e0830", + "sink_id": "347fad9f-d49f-46f3-8c49-cbcf67387052", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "4b7cb407-2ed6-41c5-8926-9c923368920b", + "source_id": "8101e732-9400-42b6-84c1-235dfcc7274c", + "sink_id": "347fad9f-d49f-46f3-8c49-cbcf67387052", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "531d9ab3-cdd2-4f8c-ac54-9065db1647ed", + "source_id": "5356c145-be3d-4b38-a553-78ec99fb7261", + "sink_id": "8101e732-9400-42b6-84c1-235dfcc7274c", + "source_name": "value", + "sink_name": "prompt_values_#_RAW_DATA", + "is_static": false + }, + { + "id": "596f14d0-7270-429b-b779-479c8fc978be", + "source_id": "8ae2698f-1bd8-42fc-976a-3167717ce8d2", + "sink_id": "5356c145-be3d-4b38-a553-78ec99fb7261", + "source_name": "emails", + "sink_name": "value", + "is_static": false + }, + { + "id": "dec10a13-fcd8-40dd-abdd-3cc8f61df5f5", + "source_id": "c3fc4d6d-8ae7-44b4-a4cb-771f20a2f448", + "sink_id": "cdac63f1-eee2-4e99-818d-c7c9f2560d18", + "source_name": "output", + "sink_name": "Question", + "is_static": false + }, + { + "id": "d03d0f61-8d43-4285-b460-ede1ef85706b", + "source_id": "480fb04f-089a-42f4-ba0a-04903d90c953", + "sink_id": "f26280fe-3296-4ce6-83ab-3c5213a43716", + "source_name": "result", + "sink_name": "prompt_values_#_target_email", + "is_static": true + }, + { + "id": "cb61f771-fc15-4aba-852d-6e0984f26c41", + "source_id": "cdac63f1-eee2-4e99-818d-c7c9f2560d18", + "sink_id": "0c8e7071-4424-409b-af01-4892ddc5b4fa", + "source_name": "Answer", + "sink_name": "value", + "is_static": false + }, + { + "id": "cb24ba08-20f6-478c-8827-e6d26e83aa4a", + "source_id": "c3fc4d6d-8ae7-44b4-a4cb-771f20a2f448", + "sink_id": "347fad9f-d49f-46f3-8c49-cbcf67387052", + "source_name": "missing", + "sink_name": "value", + "is_static": false + }, + { + "id": "32a2b355-8c6a-4872-8902-e7bbd8e11a8b", + "source_id": "480fb04f-089a-42f4-ba0a-04903d90c953", + "sink_id": "8101e732-9400-42b6-84c1-235dfcc7274c", + "source_name": "result", + "sink_name": "prompt_values_#_target_email", + "is_static": true + }, + { + "id": "87069362-a720-4455-b8a1-d3741dd8eac0", + "source_id": "8101e732-9400-42b6-84c1-235dfcc7274c", + "sink_id": "476a3170-08be-4a26-a099-2019946e7aab", + "source_name": "response", + "sink_name": "value", + "is_static": false + }, + { + "id": "7ffdbe78-88fd-4a53-83ce-7c04d146afef", + "source_id": "480fb04f-089a-42f4-ba0a-04903d90c953", + "sink_id": "9b83fe30-6887-438a-816f-a03eb0910bc3", + "source_name": "result", + "sink_name": "values_#_email", + "is_static": true + }, + { + "id": "fb24b852-da8d-4cb7-bd2d-e92ff4c3ef9b", + "source_id": "a906c063-c42f-4535-b607-c21ec0ff3ea2", + "sink_id": "e007dd2d-6b22-42d6-b1d7-f955017e0830", + "source_name": "result", + "sink_name": "prompt_values_#_user_email", + "is_static": true + }, + { + "id": "20e29442-d609-48b5-918e-919ff87280c5", + "source_id": "9b83fe30-6887-438a-816f-a03eb0910bc3", + "sink_id": "8ae2698f-1bd8-42fc-976a-3167717ce8d2", + "source_name": "output", + "sink_name": "query", + "is_static": false + }, + { + "id": "c614173a-9b4e-4e52-987b-dc60118bb2e4", + "source_id": "8ae2698f-1bd8-42fc-976a-3167717ce8d2", + "sink_id": "347fad9f-d49f-46f3-8c49-cbcf67387052", 
+ "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "6706cc77-d18c-4a5d-a27c-583648482d9d", + "source_id": "5356c145-be3d-4b38-a553-78ec99fb7261", + "sink_id": "e007dd2d-6b22-42d6-b1d7-f955017e0830", + "source_name": "value", + "sink_name": "prompt_values_#_RAW_DATA", + "is_static": false + }, + { + "id": "3dd822a9-10c7-4b42-b925-b983185adcc2", + "source_id": "a906c063-c42f-4535-b607-c21ec0ff3ea2", + "sink_id": "8101e732-9400-42b6-84c1-235dfcc7274c", + "source_name": "result", + "sink_name": "prompt_values_#_user_email", + "is_static": true + }, + { + "id": "0ed18996-cb91-4cf6-b55c-af2a94daad78", + "source_id": "5356c145-be3d-4b38-a553-78ec99fb7261", + "sink_id": "347fad9f-d49f-46f3-8c49-cbcf67387052", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "56fa8329-98d3-4ac4-aafc-80b26767f45f", + "source_id": "f26280fe-3296-4ce6-83ab-3c5213a43716", + "sink_id": "347fad9f-d49f-46f3-8c49-cbcf67387052", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "2ff7d45e-269d-4321-8c43-00532892dd0d", + "source_id": "0835f399-b960-4adb-8ea0-316a61632106", + "sink_id": "c3fc4d6d-8ae7-44b4-a4cb-771f20a2f448", + "source_name": "parsed_xml", + "sink_name": "input", + "is_static": false + }, + { + "id": "c09a29f7-e91a-416d-bfb4-6d4da3047133", + "source_id": "cdac63f1-eee2-4e99-818d-c7c9f2560d18", + "sink_id": "347fad9f-d49f-46f3-8c49-cbcf67387052", + "source_name": "Error", + "sink_name": "value", + "is_static": false + }, + { + "id": "f79a3f6a-f6ad-4d5a-adc9-c2abed207d30", + "source_id": "0835f399-b960-4adb-8ea0-316a61632106", + "sink_id": "347fad9f-d49f-46f3-8c49-cbcf67387052", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "e46395fa-cc9e-4e41-9f01-2a38bee74bb3", + "source_id": "e007dd2d-6b22-42d6-b1d7-f955017e0830", + "sink_id": "f26280fe-3296-4ce6-83ab-3c5213a43716", + "source_name": "response", + "sink_name": "prompt_values_#_research", + "is_static": false + }, + { + "id": "eec6a3b5-7f34-49ae-9d49-231c7fa31b9d", + "source_id": "f26280fe-3296-4ce6-83ab-3c5213a43716", + "sink_id": "0835f399-b960-4adb-8ea0-316a61632106", + "source_name": "response", + "sink_name": "input_xml", + "is_static": false + }, + { + "id": "90f08627-a930-4405-8053-f2d6175d5386", + "source_id": "480fb04f-089a-42f4-ba0a-04903d90c953", + "sink_id": "e007dd2d-6b22-42d6-b1d7-f955017e0830", + "source_name": "result", + "sink_name": "prompt_values_#_target_email", + "is_static": true + } + ], + "forked_from_id": null, + "forked_from_version": null, + "input_schema": { + "type": "object", + "properties": { + "Email to Research": { + "advanced": false, + "anyOf": [ + { + "format": "short-text", + "type": "string" + }, + { + "type": "null" + } + ], + "secret": false, + "title": "Email to Research" + }, + "Your Email": { + "advanced": false, + "anyOf": [ + { + "format": "short-text", + "type": "string" + }, + { + "type": "null" + } + ], + "secret": false, + "title": "Your Email" + } + }, + "required": [ + "Email to Research", + "Your Email" + ] + }, + "output_schema": { + "type": "object", + "properties": { + "Research Result": { + "advanced": false, + "secret": false, + "title": "Research Result" + }, + "Error": { + "advanced": false, + "secret": false, + "title": "Error" + }, + "Relationship Info": { + "advanced": false, + "secret": false, + "title": "Relationship Info" + } + }, + "required": [ + "Research Result", + "Error", + "Relationship Info" + ] + }, + "has_external_trigger": false, + 
"has_human_in_the_loop": false, + "trigger_setup_info": null + }, + { + "id": "2a2c337b-67cf-46b3-8b7d-6bce5e605ddf", + "version": 4, + "is_active": true, + "name": "Deep Research Question", + "description": "Deeply researches to find the answer to the given question on the web and returns the answer/research based on results.", + "instructions": null, + "recommended_schedule_cron": null, + "nodes": [ + { + "id": "e4d9c513-ab2d-4435-ba54-62545932710a", + "block_id": "1f292d4a-41a4-4977-9684-7c8d560b9f91", + "input_default": { + "model": "perplexity/sonar-deep-research", + "retry": 3, + "sys_prompt": "", + "ollama_host": "localhost:11434", + "prompt_values": {} + }, + "metadata": { + "position": { + "x": 846.9587455711553, + "y": 442.607698478585 + } + }, + "input_links": [ + { + "id": "8cf832f2-d113-4e48-a443-c1842bde9335", + "source_id": "231ed59c-eb30-436f-bdb4-52c1512518cf", + "sink_id": "e4d9c513-ab2d-4435-ba54-62545932710a", + "source_name": "result", + "sink_name": "prompt", + "is_static": true + } + ], + "output_links": [ + { + "id": "fb0ae75f-c995-485a-97e2-2eef5ca6ff13", + "source_id": "e4d9c513-ab2d-4435-ba54-62545932710a", + "sink_id": "55232481-5336-4046-a7e7-f1ba843698c9", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "094fa24b-5634-4bc8-9a5e-2222ab433c11", + "source_id": "e4d9c513-ab2d-4435-ba54-62545932710a", + "sink_id": "7e7a642f-f1e9-4a28-82ae-fa133bafa28b", + "source_name": "response", + "sink_name": "value", + "is_static": false + } + ] + }, + { + "id": "231ed59c-eb30-436f-bdb4-52c1512518cf", + "block_id": "7fcd3bcb-8e1b-4e69-903d-32d3d4a92158", + "input_default": { + "name": "Question", + "title": null, + "value": null, + "secret": false, + "advanced": false, + "description": null, + "placeholder_values": [] + }, + "metadata": { + "position": { + "x": 279.82167298214586, + "y": 447.4426854847086 + } + }, + "input_links": [], + "output_links": [ + { + "id": "8cf832f2-d113-4e48-a443-c1842bde9335", + "source_id": "231ed59c-eb30-436f-bdb4-52c1512518cf", + "sink_id": "e4d9c513-ab2d-4435-ba54-62545932710a", + "source_name": "result", + "sink_name": "prompt", + "is_static": true + } + ] + }, + { + "id": "7e7a642f-f1e9-4a28-82ae-fa133bafa28b", + "block_id": "363ae599-353e-4804-937e-b2ee3cef3da4", + "input_default": { + "name": "Answer", + "title": null, + "value": null, + "format": "", + "secret": false, + "advanced": false, + "description": null + }, + "metadata": { + "position": { + "x": 1402.1783270178541, + "y": 905.0573145152914 + } + }, + "input_links": [ + { + "id": "094fa24b-5634-4bc8-9a5e-2222ab433c11", + "source_id": "e4d9c513-ab2d-4435-ba54-62545932710a", + "sink_id": "7e7a642f-f1e9-4a28-82ae-fa133bafa28b", + "source_name": "response", + "sink_name": "value", + "is_static": false + } + ], + "output_links": [] + }, + { + "id": "55232481-5336-4046-a7e7-f1ba843698c9", + "block_id": "363ae599-353e-4804-937e-b2ee3cef3da4", + "input_default": { + "name": "Error", + "title": null, + "value": null, + "format": "", + "secret": false, + "advanced": false, + "description": null + }, + "metadata": { + "position": { + "x": 1397.2500000000002, + "y": 427.5 + } + }, + "input_links": [ + { + "id": "fb0ae75f-c995-485a-97e2-2eef5ca6ff13", + "source_id": "e4d9c513-ab2d-4435-ba54-62545932710a", + "sink_id": "55232481-5336-4046-a7e7-f1ba843698c9", + "source_name": "error", + "sink_name": "value", + "is_static": false + } + ], + "output_links": [] + } + ], + "links": [ + { + "id": "fb0ae75f-c995-485a-97e2-2eef5ca6ff13", + "source_id": 
"e4d9c513-ab2d-4435-ba54-62545932710a", + "sink_id": "55232481-5336-4046-a7e7-f1ba843698c9", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "8cf832f2-d113-4e48-a443-c1842bde9335", + "source_id": "231ed59c-eb30-436f-bdb4-52c1512518cf", + "sink_id": "e4d9c513-ab2d-4435-ba54-62545932710a", + "source_name": "result", + "sink_name": "prompt", + "is_static": true + }, + { + "id": "094fa24b-5634-4bc8-9a5e-2222ab433c11", + "source_id": "e4d9c513-ab2d-4435-ba54-62545932710a", + "sink_id": "7e7a642f-f1e9-4a28-82ae-fa133bafa28b", + "source_name": "response", + "sink_name": "value", + "is_static": false + } + ], + "forked_from_id": null, + "forked_from_version": null, + "input_schema": { + "type": "object", + "properties": { + "Question": { + "advanced": false, + "anyOf": [ + { + "format": "short-text", + "type": "string" + }, + { + "type": "null" + } + ], + "secret": false, + "title": "Question" + } + }, + "required": [ + "Question" + ] + }, + "output_schema": { + "type": "object", + "properties": { + "Answer": { + "advanced": false, + "secret": false, + "title": "Answer" + }, + "Error": { + "advanced": false, + "secret": false, + "title": "Error" + } + }, + "required": [ + "Answer", + "Error" + ] + }, + "has_external_trigger": false, + "has_human_in_the_loop": false, + "trigger_setup_info": null + } + ], + "user_id": "", + "created_at": "2025-08-30T10:24:03.147Z", + "input_schema": { + "type": "object", + "properties": { + "Your email address": { + "advanced": false, + "anyOf": [ + { + "format": "short-text", + "type": "string" + }, + { + "type": "null" + } + ], + "secret": false, + "title": "Your email address", + "description": "The work email address you use for your meetings. \nYour daily briefings will be sent to you here." + } + }, + "required": [ + "Your email address" + ] + }, + "output_schema": { + "type": "object", + "properties": { + "Meeting Prep Report Text": { + "advanced": false, + "secret": false, + "title": "Meeting Prep Report Text", + "description": "Plain Text Report, the full report was emailed to you." + }, + "Error": { + "advanced": false, + "secret": false, + "title": "Error" + }, + "Internal Only Meeting": { + "advanced": false, + "secret": false, + "title": "Internal Only Meeting", + "description": "At least one internal meeting was detected, I won't brief you for it as I'm assuming you already have background info on your colleagues." 
+ }, + "Email Status": { + "advanced": false, + "secret": false, + "title": "Email Status", + "description": "Whether or not the briefing was successfully sent" + }, + "No Meetings Found": { + "advanced": false, + "secret": false, + "title": "No Meetings Found" + }, + "Meeting Prep Report Error": { + "advanced": false, + "secret": false, + "title": "Meeting Prep Report Error", + "description": "Error generating or emailing the final report - please email contact@agpt.co" + } + }, + "required": [ + "Meeting Prep Report Text", + "Error", + "Internal Only Meeting", + "Email Status", + "No Meetings Found", + "Meeting Prep Report Error" + ] + }, + "has_external_trigger": false, + "has_human_in_the_loop": false, + "trigger_setup_info": null, + "credentials_input_schema": { + "properties": { + "google_oauth2_credentials": { + "credentials_provider": [ + "google" + ], + "credentials_types": [ + "oauth2" + ], + "properties": { + "id": { + "title": "Id", + "type": "string" + }, + "title": { + "anyOf": [ + { + "type": "string" + }, + { + "type": "null" + } + ], + "default": null, + "title": "Title" + }, + "provider": { + "const": "google", + "title": "Provider", + "type": "string" + }, + "type": { + "const": "oauth2", + "title": "Type", + "type": "string" + } + }, + "required": [ + "id", + "provider", + "type" + ], + "title": "CredentialsMetaInput[Literal[], Literal['oauth2']]", + "type": "object", + "credentials_scopes": [ + "https://www.googleapis.com/auth/gmail.send", + "https://www.googleapis.com/auth/calendar.readonly", + "https://www.googleapis.com/auth/gmail.readonly" + ], + "discriminator_values": [] + }, + "anthropic_api_key_credentials": { + "credentials_provider": [ + "anthropic" + ], + "credentials_types": [ + "api_key" + ], + "properties": { + "id": { + "title": "Id", + "type": "string" + }, + "title": { + "anyOf": [ + { + "type": "string" + }, + { + "type": "null" + } + ], + "default": null, + "title": "Title" + }, + "provider": { + "const": "anthropic", + "title": "Provider", + "type": "string" + }, + "type": { + "const": "api_key", + "title": "Type", + "type": "string" + } + }, + "required": [ + "id", + "provider", + "type" + ], + "title": "CredentialsMetaInput[Literal[], Literal['api_key']]", + "type": "object", + "discriminator": "model", + "discriminator_mapping": { + "Llama-3.3-70B-Instruct": "llama_api", + "Llama-3.3-8B-Instruct": "llama_api", + "Llama-4-Maverick-17B-128E-Instruct-FP8": "llama_api", + "Llama-4-Scout-17B-16E-Instruct-FP8": "llama_api", + "Qwen/Qwen2.5-72B-Instruct-Turbo": "aiml_api", + "amazon/nova-lite-v1": "open_router", + "amazon/nova-micro-v1": "open_router", + "amazon/nova-pro-v1": "open_router", + "claude-3-7-sonnet-20250219": "anthropic", + "claude-3-haiku-20240307": "anthropic", + "claude-haiku-4-5-20251001": "anthropic", + "claude-opus-4-1-20250805": "anthropic", + "claude-opus-4-20250514": "anthropic", + "claude-opus-4-5-20251101": "anthropic", + "claude-sonnet-4-20250514": "anthropic", + "claude-sonnet-4-5-20250929": "anthropic", + "cohere/command-r-08-2024": "open_router", + "cohere/command-r-plus-08-2024": "open_router", + "deepseek/deepseek-chat": "open_router", + "deepseek/deepseek-r1-0528": "open_router", + "dolphin-mistral:latest": "ollama", + "google/gemini-2.0-flash-001": "open_router", + "google/gemini-2.0-flash-lite-001": "open_router", + "google/gemini-2.5-flash": "open_router", + "google/gemini-2.5-flash-lite-preview-06-17": "open_router", + "google/gemini-2.5-pro-preview-03-25": "open_router", + "google/gemini-3-pro-preview": "open_router", 
+ "gpt-3.5-turbo": "openai", + "gpt-4-turbo": "openai", + "gpt-4.1-2025-04-14": "openai", + "gpt-4.1-mini-2025-04-14": "openai", + "gpt-4o": "openai", + "gpt-4o-mini": "openai", + "gpt-5-2025-08-07": "openai", + "gpt-5-chat-latest": "openai", + "gpt-5-mini-2025-08-07": "openai", + "gpt-5-nano-2025-08-07": "openai", + "gpt-5.1-2025-11-13": "openai", + "gryphe/mythomax-l2-13b": "open_router", + "llama-3.1-8b-instant": "groq", + "llama-3.3-70b-versatile": "groq", + "llama3": "ollama", + "llama3.1:405b": "ollama", + "llama3.2": "ollama", + "llama3.3": "ollama", + "meta-llama/Llama-3.2-3B-Instruct-Turbo": "aiml_api", + "meta-llama/Llama-3.3-70B-Instruct-Turbo": "aiml_api", + "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo": "aiml_api", + "meta-llama/llama-4-maverick": "open_router", + "meta-llama/llama-4-scout": "open_router", + "microsoft/wizardlm-2-8x22b": "open_router", + "mistralai/mistral-nemo": "open_router", + "moonshotai/kimi-k2": "open_router", + "nousresearch/hermes-3-llama-3.1-405b": "open_router", + "nousresearch/hermes-3-llama-3.1-70b": "open_router", + "nvidia/llama-3.1-nemotron-70b-instruct": "aiml_api", + "o1": "openai", + "o1-mini": "openai", + "o3-2025-04-16": "openai", + "o3-mini": "openai", + "openai/gpt-oss-120b": "open_router", + "openai/gpt-oss-20b": "open_router", + "perplexity/sonar": "open_router", + "perplexity/sonar-deep-research": "open_router", + "perplexity/sonar-pro": "open_router", + "qwen/qwen3-235b-a22b-thinking-2507": "open_router", + "qwen/qwen3-coder": "open_router", + "v0-1.0-md": "v0", + "v0-1.5-lg": "v0", + "v0-1.5-md": "v0", + "x-ai/grok-4": "open_router", + "x-ai/grok-4-fast": "open_router", + "x-ai/grok-4.1-fast": "open_router", + "x-ai/grok-code-fast-1": "open_router" + }, + "discriminator_values": [ + "claude-sonnet-4-20250514" + ] + }, + "openai_api_key_credentials": { + "credentials_provider": [ + "openai" + ], + "credentials_types": [ + "api_key" + ], + "properties": { + "id": { + "title": "Id", + "type": "string" + }, + "title": { + "anyOf": [ + { + "type": "string" + }, + { + "type": "null" + } + ], + "default": null, + "title": "Title" + }, + "provider": { + "const": "openai", + "title": "Provider", + "type": "string" + }, + "type": { + "const": "api_key", + "title": "Type", + "type": "string" + } + }, + "required": [ + "id", + "provider", + "type" + ], + "title": "CredentialsMetaInput[Literal[], Literal['api_key']]", + "type": "object", + "discriminator": "model", + "discriminator_mapping": { + "Llama-3.3-70B-Instruct": "llama_api", + "Llama-3.3-8B-Instruct": "llama_api", + "Llama-4-Maverick-17B-128E-Instruct-FP8": "llama_api", + "Llama-4-Scout-17B-16E-Instruct-FP8": "llama_api", + "Qwen/Qwen2.5-72B-Instruct-Turbo": "aiml_api", + "amazon/nova-lite-v1": "open_router", + "amazon/nova-micro-v1": "open_router", + "amazon/nova-pro-v1": "open_router", + "claude-3-7-sonnet-20250219": "anthropic", + "claude-3-haiku-20240307": "anthropic", + "claude-haiku-4-5-20251001": "anthropic", + "claude-opus-4-1-20250805": "anthropic", + "claude-opus-4-20250514": "anthropic", + "claude-opus-4-5-20251101": "anthropic", + "claude-sonnet-4-20250514": "anthropic", + "claude-sonnet-4-5-20250929": "anthropic", + "cohere/command-r-08-2024": "open_router", + "cohere/command-r-plus-08-2024": "open_router", + "deepseek/deepseek-chat": "open_router", + "deepseek/deepseek-r1-0528": "open_router", + "dolphin-mistral:latest": "ollama", + "google/gemini-2.0-flash-001": "open_router", + "google/gemini-2.0-flash-lite-001": "open_router", + "google/gemini-2.5-flash": "open_router", 
+ "google/gemini-2.5-flash-lite-preview-06-17": "open_router", + "google/gemini-2.5-pro-preview-03-25": "open_router", + "google/gemini-3-pro-preview": "open_router", + "gpt-3.5-turbo": "openai", + "gpt-4-turbo": "openai", + "gpt-4.1-2025-04-14": "openai", + "gpt-4.1-mini-2025-04-14": "openai", + "gpt-4o": "openai", + "gpt-4o-mini": "openai", + "gpt-5-2025-08-07": "openai", + "gpt-5-chat-latest": "openai", + "gpt-5-mini-2025-08-07": "openai", + "gpt-5-nano-2025-08-07": "openai", + "gpt-5.1-2025-11-13": "openai", + "gryphe/mythomax-l2-13b": "open_router", + "llama-3.1-8b-instant": "groq", + "llama-3.3-70b-versatile": "groq", + "llama3": "ollama", + "llama3.1:405b": "ollama", + "llama3.2": "ollama", + "llama3.3": "ollama", + "meta-llama/Llama-3.2-3B-Instruct-Turbo": "aiml_api", + "meta-llama/Llama-3.3-70B-Instruct-Turbo": "aiml_api", + "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo": "aiml_api", + "meta-llama/llama-4-maverick": "open_router", + "meta-llama/llama-4-scout": "open_router", + "microsoft/wizardlm-2-8x22b": "open_router", + "mistralai/mistral-nemo": "open_router", + "moonshotai/kimi-k2": "open_router", + "nousresearch/hermes-3-llama-3.1-405b": "open_router", + "nousresearch/hermes-3-llama-3.1-70b": "open_router", + "nvidia/llama-3.1-nemotron-70b-instruct": "aiml_api", + "o1": "openai", + "o1-mini": "openai", + "o3-2025-04-16": "openai", + "o3-mini": "openai", + "openai/gpt-oss-120b": "open_router", + "openai/gpt-oss-20b": "open_router", + "perplexity/sonar": "open_router", + "perplexity/sonar-deep-research": "open_router", + "perplexity/sonar-pro": "open_router", + "qwen/qwen3-235b-a22b-thinking-2507": "open_router", + "qwen/qwen3-coder": "open_router", + "v0-1.0-md": "v0", + "v0-1.5-lg": "v0", + "v0-1.5-md": "v0", + "x-ai/grok-4": "open_router", + "x-ai/grok-4-fast": "open_router", + "x-ai/grok-4.1-fast": "open_router", + "x-ai/grok-code-fast-1": "open_router" + }, + "discriminator_values": [ + "gpt-5-2025-08-07" + ] + }, + "open_router_api_key_credentials": { + "credentials_provider": [ + "open_router" + ], + "credentials_types": [ + "api_key" + ], + "properties": { + "id": { + "title": "Id", + "type": "string" + }, + "title": { + "anyOf": [ + { + "type": "string" + }, + { + "type": "null" + } + ], + "default": null, + "title": "Title" + }, + "provider": { + "const": "open_router", + "title": "Provider", + "type": "string" + }, + "type": { + "const": "api_key", + "title": "Type", + "type": "string" + } + }, + "required": [ + "id", + "provider", + "type" + ], + "title": "CredentialsMetaInput[Literal[], Literal['api_key']]", + "type": "object", + "discriminator": "model", + "discriminator_mapping": { + "Llama-3.3-70B-Instruct": "llama_api", + "Llama-3.3-8B-Instruct": "llama_api", + "Llama-4-Maverick-17B-128E-Instruct-FP8": "llama_api", + "Llama-4-Scout-17B-16E-Instruct-FP8": "llama_api", + "Qwen/Qwen2.5-72B-Instruct-Turbo": "aiml_api", + "amazon/nova-lite-v1": "open_router", + "amazon/nova-micro-v1": "open_router", + "amazon/nova-pro-v1": "open_router", + "claude-3-7-sonnet-20250219": "anthropic", + "claude-3-haiku-20240307": "anthropic", + "claude-haiku-4-5-20251001": "anthropic", + "claude-opus-4-1-20250805": "anthropic", + "claude-opus-4-20250514": "anthropic", + "claude-opus-4-5-20251101": "anthropic", + "claude-sonnet-4-20250514": "anthropic", + "claude-sonnet-4-5-20250929": "anthropic", + "cohere/command-r-08-2024": "open_router", + "cohere/command-r-plus-08-2024": "open_router", + "deepseek/deepseek-chat": "open_router", + "deepseek/deepseek-r1-0528": "open_router", + 
"dolphin-mistral:latest": "ollama", + "google/gemini-2.0-flash-001": "open_router", + "google/gemini-2.0-flash-lite-001": "open_router", + "google/gemini-2.5-flash": "open_router", + "google/gemini-2.5-flash-lite-preview-06-17": "open_router", + "google/gemini-2.5-pro-preview-03-25": "open_router", + "google/gemini-3-pro-preview": "open_router", + "gpt-3.5-turbo": "openai", + "gpt-4-turbo": "openai", + "gpt-4.1-2025-04-14": "openai", + "gpt-4.1-mini-2025-04-14": "openai", + "gpt-4o": "openai", + "gpt-4o-mini": "openai", + "gpt-5-2025-08-07": "openai", + "gpt-5-chat-latest": "openai", + "gpt-5-mini-2025-08-07": "openai", + "gpt-5-nano-2025-08-07": "openai", + "gpt-5.1-2025-11-13": "openai", + "gryphe/mythomax-l2-13b": "open_router", + "llama-3.1-8b-instant": "groq", + "llama-3.3-70b-versatile": "groq", + "llama3": "ollama", + "llama3.1:405b": "ollama", + "llama3.2": "ollama", + "llama3.3": "ollama", + "meta-llama/Llama-3.2-3B-Instruct-Turbo": "aiml_api", + "meta-llama/Llama-3.3-70B-Instruct-Turbo": "aiml_api", + "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo": "aiml_api", + "meta-llama/llama-4-maverick": "open_router", + "meta-llama/llama-4-scout": "open_router", + "microsoft/wizardlm-2-8x22b": "open_router", + "mistralai/mistral-nemo": "open_router", + "moonshotai/kimi-k2": "open_router", + "nousresearch/hermes-3-llama-3.1-405b": "open_router", + "nousresearch/hermes-3-llama-3.1-70b": "open_router", + "nvidia/llama-3.1-nemotron-70b-instruct": "aiml_api", + "o1": "openai", + "o1-mini": "openai", + "o3-2025-04-16": "openai", + "o3-mini": "openai", + "openai/gpt-oss-120b": "open_router", + "openai/gpt-oss-20b": "open_router", + "perplexity/sonar": "open_router", + "perplexity/sonar-deep-research": "open_router", + "perplexity/sonar-pro": "open_router", + "qwen/qwen3-235b-a22b-thinking-2507": "open_router", + "qwen/qwen3-coder": "open_router", + "v0-1.0-md": "v0", + "v0-1.5-lg": "v0", + "v0-1.5-md": "v0", + "x-ai/grok-4": "open_router", + "x-ai/grok-4-fast": "open_router", + "x-ai/grok-4.1-fast": "open_router", + "x-ai/grok-code-fast-1": "open_router" + }, + "discriminator_values": [ + "perplexity/sonar-deep-research" + ] + } + }, + "required": [ + "google_oauth2_credentials", + "anthropic_api_key_credentials", + "openai_api_key_credentials", + "open_router_api_key_credentials" + ], + "title": "SmartMeetingPrepCredentialsInputSchema", + "type": "object" + } +} \ No newline at end of file diff --git a/autogpt_platform/backend/agents/agent_516d813b-d1bc-470f-add7-c63a4b2c2bad.json b/autogpt_platform/backend/agents/agent_516d813b-d1bc-470f-add7-c63a4b2c2bad.json new file mode 100644 index 0000000000..028e4249e7 --- /dev/null +++ b/autogpt_platform/backend/agents/agent_516d813b-d1bc-470f-add7-c63a4b2c2bad.json @@ -0,0 +1,447 @@ +{ + "id": "622849a7-5848-4838-894d-01f8f07e3fad", + "version": 18, + "is_active": true, + "name": "AI Function", + "description": "## AI-Powered Function Magic: Never code again!\nProvide a description of a python function and your inputs and AI will provide the results.", + "instructions": null, + "recommended_schedule_cron": null, + "nodes": [ + { + "id": "26ff2973-3f9a-451d-b902-d45e5da0a7fe", + "block_id": "363ae599-353e-4804-937e-b2ee3cef3da4", + "input_default": { + "name": "return", + "title": null, + "value": null, + "format": "", + "secret": false, + "advanced": false, + "description": "The value returned by the function" + }, + "metadata": { + "position": { + "x": 1598.8622921127233, + "y": 291.59140862204725 + } + }, + "input_links": [ + { + "id": 
"caecc1de-fdbc-4fd9-9570-074057bb15f9", + "source_id": "c5d16ee4-de9e-4d93-bf32-ac2d15760d5b", + "sink_id": "26ff2973-3f9a-451d-b902-d45e5da0a7fe", + "source_name": "response", + "sink_name": "value", + "is_static": false + } + ], + "output_links": [], + "graph_id": "622849a7-5848-4838-894d-01f8f07e3fad", + "graph_version": 18, + "webhook_id": null, + "webhook": null + }, + { + "id": "c5d16ee4-de9e-4d93-bf32-ac2d15760d5b", + "block_id": "1f292d4a-41a4-4977-9684-7c8d560b9f91", + "input_default": { + "model": "o3-mini", + "retry": 3, + "prompt": "{{ARGS}}", + "sys_prompt": "You are now the following python function:\n\n```\n# {{DESCRIPTION}}\n{{FUNCTION}}\n```\n\nThe user will provide your input arguments.\nOnly respond with your `return` value.\nDo not include any commentary or additional text in your response. \nDo not include ``` backticks or any other decorators.", + "ollama_host": "localhost:11434", + "prompt_values": {} + }, + "metadata": { + "position": { + "x": 995, + "y": 290.50000000000006 + } + }, + "input_links": [ + { + "id": "dc7cb15f-76cc-4533-b96c-dd9e3f7f75ed", + "source_id": "4eab3a55-20f2-4c1d-804c-7377ba8202d2", + "sink_id": "c5d16ee4-de9e-4d93-bf32-ac2d15760d5b", + "source_name": "result", + "sink_name": "prompt_values_#_FUNCTION", + "is_static": true + }, + { + "id": "093bdca5-9f44-42f9-8e1c-276dd2971675", + "source_id": "844530de-2354-46d8-b748-67306b7bbca1", + "sink_id": "c5d16ee4-de9e-4d93-bf32-ac2d15760d5b", + "source_name": "result", + "sink_name": "prompt_values_#_ARGS", + "is_static": true + }, + { + "id": "6c63d8ee-b63d-4ff6-bae0-7db8f99bb7af", + "source_id": "0fd6ef54-c1cd-478d-b764-17e40f882b99", + "sink_id": "c5d16ee4-de9e-4d93-bf32-ac2d15760d5b", + "source_name": "result", + "sink_name": "prompt_values_#_DESCRIPTION", + "is_static": true + } + ], + "output_links": [ + { + "id": "caecc1de-fdbc-4fd9-9570-074057bb15f9", + "source_id": "c5d16ee4-de9e-4d93-bf32-ac2d15760d5b", + "sink_id": "26ff2973-3f9a-451d-b902-d45e5da0a7fe", + "source_name": "response", + "sink_name": "value", + "is_static": false + } + ], + "graph_id": "622849a7-5848-4838-894d-01f8f07e3fad", + "graph_version": 18, + "webhook_id": null, + "webhook": null + }, + { + "id": "4eab3a55-20f2-4c1d-804c-7377ba8202d2", + "block_id": "7fcd3bcb-8e1b-4e69-903d-32d3d4a92158", + "input_default": { + "name": "Function Definition", + "title": null, + "value": "def fake_people(n: int) -> list[dict]:", + "secret": false, + "advanced": false, + "description": "The function definition (text). 
This is what you would type on the first line of the function when programming.\n\ne.g \"def fake_people(n: int) -> list[dict]:\"", + "placeholder_values": [] + }, + "metadata": { + "position": { + "x": -672.6908629664215, + "y": 302.42044359789116 + } + }, + "input_links": [], + "output_links": [ + { + "id": "dc7cb15f-76cc-4533-b96c-dd9e3f7f75ed", + "source_id": "4eab3a55-20f2-4c1d-804c-7377ba8202d2", + "sink_id": "c5d16ee4-de9e-4d93-bf32-ac2d15760d5b", + "source_name": "result", + "sink_name": "prompt_values_#_FUNCTION", + "is_static": true + } + ], + "graph_id": "622849a7-5848-4838-894d-01f8f07e3fad", + "graph_version": 18, + "webhook_id": null, + "webhook": null + }, + { + "id": "844530de-2354-46d8-b748-67306b7bbca1", + "block_id": "7fcd3bcb-8e1b-4e69-903d-32d3d4a92158", + "input_default": { + "name": "Arguments", + "title": null, + "value": "20", + "secret": false, + "advanced": false, + "description": "The function's inputs\n\ne.g \"20\"", + "placeholder_values": [] + }, + "metadata": { + "position": { + "x": -158.1623599617334, + "y": 295.410856928333 + } + }, + "input_links": [], + "output_links": [ + { + "id": "093bdca5-9f44-42f9-8e1c-276dd2971675", + "source_id": "844530de-2354-46d8-b748-67306b7bbca1", + "sink_id": "c5d16ee4-de9e-4d93-bf32-ac2d15760d5b", + "source_name": "result", + "sink_name": "prompt_values_#_ARGS", + "is_static": true + } + ], + "graph_id": "622849a7-5848-4838-894d-01f8f07e3fad", + "graph_version": 18, + "webhook_id": null, + "webhook": null + }, + { + "id": "0fd6ef54-c1cd-478d-b764-17e40f882b99", + "block_id": "90a56ffb-7024-4b2b-ab50-e26c5e5ab8ba", + "input_default": { + "name": "Description", + "title": null, + "value": "Generates n examples of fake data representing people, each with a name, DoB, Job title, and an age.", + "secret": false, + "advanced": false, + "description": "Describe what the function does.\n\ne.g \"Generates n examples of fake data representing people, each with a name, DoB, Job title, and an age.\"", + "placeholder_values": [] + }, + "metadata": { + "position": { + "x": 374.4548658057796, + "y": 290.3779121974126 + } + }, + "input_links": [], + "output_links": [ + { + "id": "6c63d8ee-b63d-4ff6-bae0-7db8f99bb7af", + "source_id": "0fd6ef54-c1cd-478d-b764-17e40f882b99", + "sink_id": "c5d16ee4-de9e-4d93-bf32-ac2d15760d5b", + "source_name": "result", + "sink_name": "prompt_values_#_DESCRIPTION", + "is_static": true + } + ], + "graph_id": "622849a7-5848-4838-894d-01f8f07e3fad", + "graph_version": 18, + "webhook_id": null, + "webhook": null + } + ], + "links": [ + { + "id": "caecc1de-fdbc-4fd9-9570-074057bb15f9", + "source_id": "c5d16ee4-de9e-4d93-bf32-ac2d15760d5b", + "sink_id": "26ff2973-3f9a-451d-b902-d45e5da0a7fe", + "source_name": "response", + "sink_name": "value", + "is_static": false + }, + { + "id": "6c63d8ee-b63d-4ff6-bae0-7db8f99bb7af", + "source_id": "0fd6ef54-c1cd-478d-b764-17e40f882b99", + "sink_id": "c5d16ee4-de9e-4d93-bf32-ac2d15760d5b", + "source_name": "result", + "sink_name": "prompt_values_#_DESCRIPTION", + "is_static": true + }, + { + "id": "093bdca5-9f44-42f9-8e1c-276dd2971675", + "source_id": "844530de-2354-46d8-b748-67306b7bbca1", + "sink_id": "c5d16ee4-de9e-4d93-bf32-ac2d15760d5b", + "source_name": "result", + "sink_name": "prompt_values_#_ARGS", + "is_static": true + }, + { + "id": "dc7cb15f-76cc-4533-b96c-dd9e3f7f75ed", + "source_id": "4eab3a55-20f2-4c1d-804c-7377ba8202d2", + "sink_id": "c5d16ee4-de9e-4d93-bf32-ac2d15760d5b", + "source_name": "result", + "sink_name": "prompt_values_#_FUNCTION", + "is_static": true 
+ } + ], + "forked_from_id": null, + "forked_from_version": null, + "sub_graphs": [], + "user_id": "", + "created_at": "2025-04-19T17:10:48.857Z", + "input_schema": { + "type": "object", + "properties": { + "Function Definition": { + "advanced": false, + "anyOf": [ + { + "format": "short-text", + "type": "string" + }, + { + "type": "null" + } + ], + "secret": false, + "title": "Function Definition", + "description": "The function definition (text). This is what you would type on the first line of the function when programming.\n\ne.g \"def fake_people(n: int) -> list[dict]:\"", + "default": "def fake_people(n: int) -> list[dict]:" + }, + "Arguments": { + "advanced": false, + "anyOf": [ + { + "format": "short-text", + "type": "string" + }, + { + "type": "null" + } + ], + "secret": false, + "title": "Arguments", + "description": "The function's inputs\n\ne.g \"20\"", + "default": "20" + }, + "Description": { + "advanced": false, + "anyOf": [ + { + "format": "long-text", + "type": "string" + }, + { + "type": "null" + } + ], + "secret": false, + "title": "Description", + "description": "Describe what the function does.\n\ne.g \"Generates n examples of fake data representing people, each with a name, DoB, Job title, and an age.\"", + "default": "Generates n examples of fake data representing people, each with a name, DoB, Job title, and an age." + } + }, + "required": [] + }, + "output_schema": { + "type": "object", + "properties": { + "return": { + "advanced": false, + "secret": false, + "title": "return", + "description": "The value returned by the function" + } + }, + "required": [ + "return" + ] + }, + "has_external_trigger": false, + "has_human_in_the_loop": false, + "trigger_setup_info": null, + "credentials_input_schema": { + "properties": { + "openai_api_key_credentials": { + "credentials_provider": [ + "openai" + ], + "credentials_types": [ + "api_key" + ], + "properties": { + "id": { + "title": "Id", + "type": "string" + }, + "title": { + "anyOf": [ + { + "type": "string" + }, + { + "type": "null" + } + ], + "default": null, + "title": "Title" + }, + "provider": { + "const": "openai", + "title": "Provider", + "type": "string" + }, + "type": { + "const": "api_key", + "title": "Type", + "type": "string" + } + }, + "required": [ + "id", + "provider", + "type" + ], + "title": "CredentialsMetaInput[Literal[], Literal['api_key']]", + "type": "object", + "discriminator": "model", + "discriminator_mapping": { + "Llama-3.3-70B-Instruct": "llama_api", + "Llama-3.3-8B-Instruct": "llama_api", + "Llama-4-Maverick-17B-128E-Instruct-FP8": "llama_api", + "Llama-4-Scout-17B-16E-Instruct-FP8": "llama_api", + "Qwen/Qwen2.5-72B-Instruct-Turbo": "aiml_api", + "amazon/nova-lite-v1": "open_router", + "amazon/nova-micro-v1": "open_router", + "amazon/nova-pro-v1": "open_router", + "claude-3-7-sonnet-20250219": "anthropic", + "claude-3-haiku-20240307": "anthropic", + "claude-haiku-4-5-20251001": "anthropic", + "claude-opus-4-1-20250805": "anthropic", + "claude-opus-4-20250514": "anthropic", + "claude-opus-4-5-20251101": "anthropic", + "claude-sonnet-4-20250514": "anthropic", + "claude-sonnet-4-5-20250929": "anthropic", + "cohere/command-r-08-2024": "open_router", + "cohere/command-r-plus-08-2024": "open_router", + "deepseek/deepseek-chat": "open_router", + "deepseek/deepseek-r1-0528": "open_router", + "dolphin-mistral:latest": "ollama", + "google/gemini-2.0-flash-001": "open_router", + "google/gemini-2.0-flash-lite-001": "open_router", + "google/gemini-2.5-flash": "open_router", + 
"google/gemini-2.5-flash-lite-preview-06-17": "open_router", + "google/gemini-2.5-pro-preview-03-25": "open_router", + "google/gemini-3-pro-preview": "open_router", + "gpt-3.5-turbo": "openai", + "gpt-4-turbo": "openai", + "gpt-4.1-2025-04-14": "openai", + "gpt-4.1-mini-2025-04-14": "openai", + "gpt-4o": "openai", + "gpt-4o-mini": "openai", + "gpt-5-2025-08-07": "openai", + "gpt-5-chat-latest": "openai", + "gpt-5-mini-2025-08-07": "openai", + "gpt-5-nano-2025-08-07": "openai", + "gpt-5.1-2025-11-13": "openai", + "gryphe/mythomax-l2-13b": "open_router", + "llama-3.1-8b-instant": "groq", + "llama-3.3-70b-versatile": "groq", + "llama3": "ollama", + "llama3.1:405b": "ollama", + "llama3.2": "ollama", + "llama3.3": "ollama", + "meta-llama/Llama-3.2-3B-Instruct-Turbo": "aiml_api", + "meta-llama/Llama-3.3-70B-Instruct-Turbo": "aiml_api", + "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo": "aiml_api", + "meta-llama/llama-4-maverick": "open_router", + "meta-llama/llama-4-scout": "open_router", + "microsoft/wizardlm-2-8x22b": "open_router", + "mistralai/mistral-nemo": "open_router", + "moonshotai/kimi-k2": "open_router", + "nousresearch/hermes-3-llama-3.1-405b": "open_router", + "nousresearch/hermes-3-llama-3.1-70b": "open_router", + "nvidia/llama-3.1-nemotron-70b-instruct": "aiml_api", + "o1": "openai", + "o1-mini": "openai", + "o3-2025-04-16": "openai", + "o3-mini": "openai", + "openai/gpt-oss-120b": "open_router", + "openai/gpt-oss-20b": "open_router", + "perplexity/sonar": "open_router", + "perplexity/sonar-deep-research": "open_router", + "perplexity/sonar-pro": "open_router", + "qwen/qwen3-235b-a22b-thinking-2507": "open_router", + "qwen/qwen3-coder": "open_router", + "v0-1.0-md": "v0", + "v0-1.5-lg": "v0", + "v0-1.5-md": "v0", + "x-ai/grok-4": "open_router", + "x-ai/grok-4-fast": "open_router", + "x-ai/grok-4.1-fast": "open_router", + "x-ai/grok-code-fast-1": "open_router" + }, + "discriminator_values": [ + "o3-mini" + ] + } + }, + "required": [ + "openai_api_key_credentials" + ], + "title": "AIFunctionCredentialsInputSchema", + "type": "object" + } +} \ No newline at end of file diff --git a/autogpt_platform/backend/agents/agent_55d40473-0f31-4ada-9e40-d3a7139fcbd4.json b/autogpt_platform/backend/agents/agent_55d40473-0f31-4ada-9e40-d3a7139fcbd4.json new file mode 100644 index 0000000000..8dcc1a4478 --- /dev/null +++ b/autogpt_platform/backend/agents/agent_55d40473-0f31-4ada-9e40-d3a7139fcbd4.json @@ -0,0 +1,7222 @@ +{ + "id": "9dfa5527-15f3-4769-b173-1dd9ecfe19da", + "version": 246, + "is_active": false, + "name": "Automated SEO Blog Writer", + "description": "Scale your blog with a fully automated content engine. The Automated SEO Blog Writer learns your brand voice, finds high-demand keywords, and creates SEO-optimized articles that attract organic traffic and boost visibility.\n\nHow it works:\n\n1. Share your pitch, website, and values.\n2. The agent studies your site and uncovers proven SEO opportunities.\n3. It spends two hours researching and drafting each post.\n4. 
You set the cadence\u2014publishing runs on autopilot.\n\nBusiness value: Consistently publish research-backed, optimized posts that build domain authority, rankings, and thought leadership while you focus on what matters most.\n\nUse cases:\n\u2022 Founders: Keep your blog active with no time drain.\n\u2022 Agencies: Deliver scalable SEO content for clients.\n\u2022 Strategists: Automate execution, focus on strategy.\n\u2022 Marketers: Drive steady organic growth.\n\u2022 Local businesses: Capture nearby search traffic.", + "instructions": "This agent takes up to two hours to craft a highly researched, SEO-driven blog post that has a strong likelihood of performing well and driving traffic to your site, thereby increasing the chances of your website appearing on Google's front page. It accomplishes this by conducting keyword research with a subagent, which is an SEO expert. You can schedule this agent according to your publishing frequency, whether that's once a day, every four days, or once a week\u2014whatever suits your blogging needs. \n\nAlthough it is necessary to use WordPress for managing the blog posts, you can design and build a website with WordPress as the back-end by clicking this link, where I have provided a pre-filled Lovable prompt to guide you: https://tinyurl.com/loveable-wordpress-frontend\n\nNote: This agent will automatically publish the posts it creates directly to your WordPress site.", + "recommended_schedule_cron": null, + "nodes": [ + { + "id": "6429fe92-aeff-4a14-be9f-9d2d5021bad4", + "block_id": "655d6fdf-a334-421c-b733-520549c07cd1", + "input_default": { + "name": "brand_tone", + "title": "Brand Tone", + "value": null, + "secret": false, + "advanced": false, + "description": "The voice and style for your content - choose the tone that best matches your brand and audience", + "placeholder_values": [ + "Friendly", + "Professional", + "Technical", + "Casual" + ] + }, + "metadata": { + "position": { + "x": -4959.765499130381, + "y": -2179.9447502570065 + }, + "customized_name": "Brand Tone" + }, + "input_links": [], + "output_links": [ + { + "id": "ac0da1b7-1680-4748-8334-f832d2063899", + "source_id": "6429fe92-aeff-4a14-be9f-9d2d5021bad4", + "sink_id": "bee9a01e-2ea8-4a0c-9a44-6150e7d6512b", + "source_name": "result", + "sink_name": "prompt_values_#_brand_tone", + "is_static": true + } + ], + "graph_id": "9dfa5527-15f3-4769-b173-1dd9ecfe19da", + "graph_version": 246, + "webhook_id": null, + "webhook": null + }, + { + "id": "bee9a01e-2ea8-4a0c-9a44-6150e7d6512b", + "block_id": "ed55ac19-356e-4243-a6cb-bc599e9b716f", + "input_default": { + "model": "gpt-5-2025-08-07", + "retry": 3, + "prompt": "Write a comprehensive, SEO-optimized blog post about \"{{keyword | safe}}\".\n\nBLOG POST TARGET LENGTH: \"{{target_word_count | safe}} words\"\nWRITING STYLE & TONE: \"{{brand_tone | safe}}\"\n\nKEYWORD DATA - This tells you what people are searching for; what they want.\nPRIMARY KEYWORD: {{keyword}}\n\n{{keyword_data | safe}}\n\n\nSEO EXPERT RESEARCH:\nWe hired an SEO expert to choose this keyword after lots of research.\nKeyword: \"{{keyword}}\"\nThey left a note for you as to why they chose it:\n\n{{seo_expert_reason | safe}}\n\nThey also had a suggestion for the title, but it's up to you what title you choose:\n\"{{seo_expert_proposed_title | safe}}\"\n\nTOPIC RESEARCH:\n\n{{research_notes | safe}}\n\n\nREQUIREMENTS:\n- Title: Create an engaging, SEO-friendly title (under 60 characters), making sure to include the main keyword \"{{keyword | safe}}\"\n- Length: 
{{target_word_count | safe}} words\n- Tone: Write in a {{brand_tone | safe}} tone throughout the content\n- Format: Clean HTML only (no markdown, no code blocks)\n- Structure: Use proper HTML tags (<h2>, <h3>, <p>, <ul>, <li>)\n- Content: Scale sections appropriately for the target length:\n * 500 words: Introduction + 2 main sections + conclusion\n * 1,000 words: Introduction + 3-4 main sections + conclusion \n * 1,500 words: Introduction + 4-5 main sections + conclusion\n * 2,000 words: Introduction + 5-6 main sections + conclusion\n- SEO: Naturally incorporate the primary keyword and relevant secondary keywords throughout the content. Choose keywords intelligently and avoid forced or unnatural usage.\n- Value: Provide actionable, helpful information for readers. Focus on creating content that genuinely answers user questions and solves problems.\n- Flow: Ensure the content flows naturally. Do not directly reference parts of your instructions or prompt in the output.\n- Pay special attention to avoiding robotic keyword insertion and unnatural phrasing. Do not directly reference or comment on the keyword data in the blog post.\n\nHTML STRUCTURE:\n- Do NOT include <h1> tag - provide title separately\n- Start content with <p> for introduction paragraph\n- Use <h2> for main sections\n- Use <h3> for subsections if needed\n- Use <p> for paragraphs\n- Use <ul> and
<li> for lists\n- No <html>, <head>, or <body> tags needed\n\nOUTPUT FORMAT:\nProvide your response as a JSON object with these fields:\n{\n \"h1\": \"Your SEO-friendly blog post title\",\n \"body\": \"Your complete HTML blog content\",\n \"image_prompt\": \"Detailed prompt for generating a professional blog header image that complements this article\"\n}\n",
+ "sys_prompt": "You are an expert SEO blog writer that creates high-quality, engaging content optimized for search engines. You always return clean HTML formatted content that is ready to publish. You follow SEO best practices including proper heading structure, keyword optimization, and user-focused content. You write in a conversational yet informative tone that provides real value to readers.",
+ "list_result": false, + "ollama_host": "localhost:11434", + "prompt_values": {},
+ "expected_format": { + "title": "SEO-optimized blog post title (under 60 characters) that will be used as the WordPress post title", + "content": "The generated blog post content in clean HTML format, ready for WordPress publishing", + "image_prompt": "Detailed string for an ultra high quality photographic image generation prompt. Your best work ever." + },
+ "conversation_history": [], + "compress_prompt_to_fit": true + },
+ "metadata": { + "position": { + "x": 7056.547945728113, + "y": -488.792306881219 + }, + "customized_name": "Write the Blogpost 1st Draft" + },
+ "input_links": [
+ { + "id": "325265ca-819c-4fdd-a69f-1b75d2da03f7", + "source_id": "3800e8a5-e648-4686-988d-61eae960f126", + "sink_id": "bee9a01e-2ea8-4a0c-9a44-6150e7d6512b", + "source_name": "response", + "sink_name": "prompt_values_#_research_notes", + "is_static": false + },
+ { + "id": "ac0da1b7-1680-4748-8334-f832d2063899", + "source_id": "6429fe92-aeff-4a14-be9f-9d2d5021bad4", + "sink_id": "bee9a01e-2ea8-4a0c-9a44-6150e7d6512b", + "source_name": "result", + "sink_name": "prompt_values_#_brand_tone", + "is_static": true + },
+ { + "id": "d1ba66d5-158f-44ec-adb3-158775365f3e", + "source_id": "951f011a-0f21-43a2-91b4-3b8734a4fc53", + "sink_id": "bee9a01e-2ea8-4a0c-9a44-6150e7d6512b", + "source_name": "Proposed Title", + "sink_name": "prompt_values_#_seo_expert_proposed_title", + "is_static": false + },
+ { + "id": "dd201eb6-26bf-4133-acd0-cb8cc94ee739", + "source_id": "9e4e8cc7-8f72-40c8-93a4-5e73504fbff1", + "sink_id": "bee9a01e-2ea8-4a0c-9a44-6150e7d6512b", + "source_name": "no_output", + "sink_name": "prompt_values_#_keyword_data", + "is_static": false + },
+ { + "id": "d240ea3d-3104-44a5-9714-c255ab5dc99d", + "source_id": "7cbb535f-bb8c-49d3-9392-883c4eaa9372", + "sink_id": "bee9a01e-2ea8-4a0c-9a44-6150e7d6512b", + "source_name": "result", + "sink_name": "prompt_values_#_target_word_count", + "is_static": true + },
+ { + "id": "b42dc4c7-e7ba-4c20-9c10-3275867cc22d", + "source_id": "951f011a-0f21-43a2-91b4-3b8734a4fc53", + "sink_id": "bee9a01e-2ea8-4a0c-9a44-6150e7d6512b", + "source_name": "Justification", + "sink_name": "prompt_values_#_seo_expert_reason", + "is_static": false + },
+ { + "id": "b2b664e7-cf87-4651-bb5c-1256be2db489", + "source_id": "951f011a-0f21-43a2-91b4-3b8734a4fc53", + "sink_id": "bee9a01e-2ea8-4a0c-9a44-6150e7d6512b", + "source_name": "Primary Keyword", + "sink_name": "prompt_values_#_keyword", + "is_static": false + }
+ ],
+ "output_links": [
+ { + "id": "6af59cf6-3a24-4f4e-a940-7ae8b455c4a1", + "source_id": "bee9a01e-2ea8-4a0c-9a44-6150e7d6512b", + "sink_id": "6fde0a6a-ef7e-439a-ba65-9b78ca0fd5ae", + "source_name": "response", + "sink_name": "input", + "is_static": false + },
+ {
+ "id": "959f0fb4-02bd-4e93-b0b4-d187b10773ba", + "source_id": "bee9a01e-2ea8-4a0c-9a44-6150e7d6512b", + "sink_id": "79d2d459-08ff-4ad3-a084-174243f4fb5e", + "source_name": "response", + "sink_name": "input", + "is_static": false + }, + { + "id": "2149a96a-c3ac-4d45-98f4-1c926ceae5b2", + "source_id": "bee9a01e-2ea8-4a0c-9a44-6150e7d6512b", + "sink_id": "ddb9e490-56a6-4203-8f1b-d944e30c253b", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "c6280938-afc1-40a6-8493-80c234ccb6a0", + "source_id": "bee9a01e-2ea8-4a0c-9a44-6150e7d6512b", + "sink_id": "233293ef-ce18-45b0-ab68-37d17d8bc6f9", + "source_name": "response", + "sink_name": "input", + "is_static": false + } + ], + "graph_id": "9dfa5527-15f3-4769-b173-1dd9ecfe19da", + "graph_version": 246, + "webhook_id": null, + "webhook": null + }, + { + "id": "7cbb535f-bb8c-49d3-9392-883c4eaa9372", + "block_id": "655d6fdf-a334-421c-b733-520549c07cd1", + "input_default": { + "name": "target_word_count", + "title": "Blog Post Length", + "value": null, + "secret": false, + "advanced": false, + "description": "Blog post length (500=quick reads, 1000=standard posts, 1500=detailed guides, 2000=comprehensive content)", + "placeholder_values": [ + "500", + "1000", + "1500", + "2000" + ] + }, + "metadata": { + "position": { + "x": -5506.114503217936, + "y": -2181.569356300228 + }, + "customized_name": "Word Count" + }, + "input_links": [], + "output_links": [ + { + "id": "d240ea3d-3104-44a5-9714-c255ab5dc99d", + "source_id": "7cbb535f-bb8c-49d3-9392-883c4eaa9372", + "sink_id": "bee9a01e-2ea8-4a0c-9a44-6150e7d6512b", + "source_name": "result", + "sink_name": "prompt_values_#_target_word_count", + "is_static": true + } + ], + "graph_id": "9dfa5527-15f3-4769-b173-1dd9ecfe19da", + "graph_version": 246, + "webhook_id": null, + "webhook": null + }, + { + "id": "f6fe5b4c-0003-4c25-861c-e61d6d139ad6", + "block_id": "c0a8e994-ebf1-4a9c-a4d8-89d09c86741b", + "input_default": { + "name": "blog_url", + "title": "WordPress Blog URL", + "value": null, + "secret": false, + "advanced": false, + "description": "Your WordPress.com blog URL (e.g., https://yourblog.wordpress.com) - used for posting content via API", + "placeholder_values": [] + }, + "metadata": { + "position": { + "x": -4399.983960132575, + "y": -2180.865627104322 + }, + "customized_name": "Blog URL" + }, + "input_links": [], + "output_links": [ + { + "id": "16445831-8df5-437b-9bb9-66c961ca6bc4", + "source_id": "f6fe5b4c-0003-4c25-861c-e61d6d139ad6", + "sink_id": "26446b03-3117-4f7e-af10-855fdd1089c3", + "source_name": "result", + "sink_name": "text", + "is_static": true + }, + { + "id": "d7862045-b221-4b07-95a8-4a6a774201be", + "source_id": "f6fe5b4c-0003-4c25-861c-e61d6d139ad6", + "sink_id": "951f011a-0f21-43a2-91b4-3b8734a4fc53", + "source_name": "result", + "sink_name": "Domain", + "is_static": true + } + ], + "graph_id": "9dfa5527-15f3-4769-b173-1dd9ecfe19da", + "graph_version": 246, + "webhook_id": null, + "webhook": null + }, + { + "id": "de461596-8a75-4b0e-ae2e-09206e48fdc3", + "block_id": "b924ddf4-de4f-4b56-9a85-358930dcbc91", + "input_default": { + "values": {} + }, + "metadata": { + "position": { + "x": 16112.806128212102, + "y": -1351.0548006410775 + }, + "customized_name": "Collect Results" + }, + "input_links": [ + { + "id": "d5bfc1fa-3a43-427f-8073-36f1083e836d", + "source_id": "30576680-b666-45e3-86b4-4ed4520c8ac0", + "sink_id": "de461596-8a75-4b0e-ae2e-09206e48fdc3", + "source_name": "value", + "sink_name": "values_#_Search_Volume", + "is_static": false + 
}, + { + "id": "77e0d0e4-da9e-47b3-812c-72a52c12513e", + "source_id": "43ecf959-efbb-48be-bd72-17f9fc580b88", + "sink_id": "de461596-8a75-4b0e-ae2e-09206e48fdc3", + "source_name": "post_url", + "sink_name": "values_#_Published URL", + "is_static": false + }, + { + "id": "74e11c3f-2d41-4049-993b-142b362f5cd4", + "source_id": "d9a537f7-930d-4526-8eca-8279d9ea747a", + "sink_id": "de461596-8a75-4b0e-ae2e-09206e48fdc3", + "source_name": "date", + "sink_name": "values_#_Date", + "is_static": false + }, + { + "id": "c00b16c3-421c-4078-a335-03d510bf7213", + "source_id": "951f011a-0f21-43a2-91b4-3b8734a4fc53", + "sink_id": "de461596-8a75-4b0e-ae2e-09206e48fdc3", + "source_name": "Justification", + "sink_name": "values_#_SEO_Experts_Reason_for_keyword_choice", + "is_static": false + }, + { + "id": "cb4eedea-e089-4356-b153-f94520957ab1", + "source_id": "79d2d459-08ff-4ad3-a084-174243f4fb5e", + "sink_id": "de461596-8a75-4b0e-ae2e-09206e48fdc3", + "source_name": "output", + "sink_name": "values_#_Article Title", + "is_static": false + }, + { + "id": "519d0be6-338c-4b26-84d5-4e8fff1a1ea7", + "source_id": "e588a9ea-4148-45cd-a709-d57125a50c01", + "sink_id": "de461596-8a75-4b0e-ae2e-09206e48fdc3", + "source_name": "value", + "sink_name": "values_#_Keyword_Difficulty", + "is_static": false + }, + { + "id": "d76f1e26-3c01-4f62-a70f-a49cbf4b135a", + "source_id": "951f011a-0f21-43a2-91b4-3b8734a4fc53", + "sink_id": "de461596-8a75-4b0e-ae2e-09206e48fdc3", + "source_name": "Primary Keyword", + "sink_name": "values_#_Primary_Target_Keyword", + "is_static": false + } + ], + "output_links": [ + { + "id": "6330180a-9c46-4b53-a246-27e8bb0d8f4d", + "source_id": "de461596-8a75-4b0e-ae2e-09206e48fdc3", + "sink_id": "c701345d-37e8-40b7-82b7-72c7b10337ed", + "source_name": "dictionary", + "sink_name": "value", + "is_static": false + }, + { + "id": "83f01bb1-581d-4a5e-a7cd-29f6a6686585", + "source_id": "de461596-8a75-4b0e-ae2e-09206e48fdc3", + "sink_id": "1d0294f2-456a-493e-91fe-317ec29de58b", + "source_name": "error", + "sink_name": "value", + "is_static": false + } + ], + "graph_id": "9dfa5527-15f3-4769-b173-1dd9ecfe19da", + "graph_version": 246, + "webhook_id": null, + "webhook": null + }, + { + "id": "79d2d459-08ff-4ad3-a084-174243f4fb5e", + "block_id": "0e50422c-6dee-4145-83d6-3a5a392f65de", + "input_default": { + "key": "title" + }, + "metadata": { + "position": { + "x": 10428.49844425863, + "y": -3038.4947545918985 + }, + "customized_name": "Get Blogpost Title" + }, + "input_links": [ + { + "id": "959f0fb4-02bd-4e93-b0b4-d187b10773ba", + "source_id": "bee9a01e-2ea8-4a0c-9a44-6150e7d6512b", + "sink_id": "79d2d459-08ff-4ad3-a084-174243f4fb5e", + "source_name": "response", + "sink_name": "input", + "is_static": false + } + ], + "output_links": [ + { + "id": "cb4eedea-e089-4356-b153-f94520957ab1", + "source_id": "79d2d459-08ff-4ad3-a084-174243f4fb5e", + "sink_id": "de461596-8a75-4b0e-ae2e-09206e48fdc3", + "source_name": "output", + "sink_name": "values_#_Article Title", + "is_static": false + }, + { + "id": "d006c5ea-ad1b-4572-a451-8d60a87b65dc", + "source_id": "79d2d459-08ff-4ad3-a084-174243f4fb5e", + "sink_id": "43ecf959-efbb-48be-bd72-17f9fc580b88", + "source_name": "output", + "sink_name": "title", + "is_static": false + } + ], + "graph_id": "9dfa5527-15f3-4769-b173-1dd9ecfe19da", + "graph_version": 246, + "webhook_id": null, + "webhook": null + }, + { + "id": "ddb9e490-56a6-4203-8f1b-d944e30c253b", + "block_id": "363ae599-353e-4804-937e-b2ee3cef3da4", + "input_default": { + "name": "CONTENT_GENERATION_FAILED", + 
"title": "Content Generation Failed", + "value": "CONTENT_GENERATION_FAILED", + "format": "", + "secret": false, + "advanced": false, + "description": "The AI was unable to generate the blog article content. This may occur due to LLM API issues, invalid keyword input, or content policy restrictions. The agent cannot proceed with publishing without generated content." + }, + "metadata": { + "position": { + "x": 8216.687484594391, + "y": 4663.903323732193 + } + }, + "input_links": [ + { + "id": "2149a96a-c3ac-4d45-98f4-1c926ceae5b2", + "source_id": "bee9a01e-2ea8-4a0c-9a44-6150e7d6512b", + "sink_id": "ddb9e490-56a6-4203-8f1b-d944e30c253b", + "source_name": "error", + "sink_name": "value", + "is_static": false + } + ], + "output_links": [], + "graph_id": "9dfa5527-15f3-4769-b173-1dd9ecfe19da", + "graph_version": 246, + "webhook_id": null, + "webhook": null + }, + { + "id": "d9a537f7-930d-4526-8eca-8279d9ea747a", + "block_id": "b29c1b50-5d0e-4d9f-8f9d-1b0e6fcbf0b1", + "input_default": { + "offset": 0, + "trigger": "go", + "format_type": { + "format": "%Y-%m-%d", + "timezone": "UTC", + "discriminator": "strftime" + } + }, + "metadata": { + "position": { + "x": 13975.209945423716, + "y": -566.406791021141 + } + }, + "input_links": [], + "output_links": [ + { + "id": "74e11c3f-2d41-4049-993b-142b362f5cd4", + "source_id": "d9a537f7-930d-4526-8eca-8279d9ea747a", + "sink_id": "de461596-8a75-4b0e-ae2e-09206e48fdc3", + "source_name": "date", + "sink_name": "values_#_Date", + "is_static": false + } + ], + "graph_id": "9dfa5527-15f3-4769-b173-1dd9ecfe19da", + "graph_version": 246, + "webhook_id": null, + "webhook": null + }, + { + "id": "1d0294f2-456a-493e-91fe-317ec29de58b", + "block_id": "363ae599-353e-4804-937e-b2ee3cef3da4", + "input_default": { + "name": "DICTIONARY_CREATE_FAILED", + "title": null, + "value": null, + "format": "", + "secret": false, + "advanced": false, + "description": null + }, + "metadata": { + "position": { + "x": 16569.880611942004, + "y": 4617.380527385684 + } + }, + "input_links": [ + { + "id": "83f01bb1-581d-4a5e-a7cd-29f6a6686585", + "source_id": "de461596-8a75-4b0e-ae2e-09206e48fdc3", + "sink_id": "1d0294f2-456a-493e-91fe-317ec29de58b", + "source_name": "error", + "sink_name": "value", + "is_static": false + } + ], + "output_links": [], + "graph_id": "9dfa5527-15f3-4769-b173-1dd9ecfe19da", + "graph_version": 246, + "webhook_id": null, + "webhook": null + }, + { + "id": "da04c7e3-0a1e-4360-8feb-3cd6a76eec55", + "block_id": "363ae599-353e-4804-937e-b2ee3cef3da4", + "input_default": { + "name": "BLOG_PUBLISHED_SUCCESS", + "title": "Blog Post Published Successfully", + "value": null, + "format": "", + "secret": false, + "advanced": false, + "description": "The blog post has been successfully generated, published to WordPress, and logged to Airtable. The SEO-optimized article is now live and tracking data has been recorded." 
+ }, + "metadata": { + "position": { + "x": 17905.45367408944, + "y": -1349.5333694234762 + }, + "customized_name": "Report Success to User" + }, + "input_links": [ + { + "id": "0d13989b-9ad2-4da1-9f50-75639ed21775", + "source_id": "f5a81332-edeb-49c2-9460-cbfcc139d21d", + "sink_id": "da04c7e3-0a1e-4360-8feb-3cd6a76eec55", + "source_name": "response", + "sink_name": "value", + "is_static": false + } + ], + "output_links": [], + "graph_id": "9dfa5527-15f3-4769-b173-1dd9ecfe19da", + "graph_version": 246, + "webhook_id": null, + "webhook": null + }, + { + "id": "e588a9ea-4148-45cd-a709-d57125a50c01", + "block_id": "95d1b990-ce13-4d88-9737-ba5c2070c97b", + "input_default": { + "type": "number" + }, + "metadata": { + "position": { + "x": 13964.025249663777, + "y": -3038.6487438436952 + }, + "customized_name": "Convert KD to Number" + }, + "input_links": [ + { + "id": "2d911061-a380-49cd-bfe1-bb5f1ca3bc9d", + "source_id": "951f011a-0f21-43a2-91b4-3b8734a4fc53", + "sink_id": "e588a9ea-4148-45cd-a709-d57125a50c01", + "source_name": "Keyword Difficulty", + "sink_name": "value", + "is_static": false + } + ], + "output_links": [ + { + "id": "519d0be6-338c-4b26-84d5-4e8fff1a1ea7", + "source_id": "e588a9ea-4148-45cd-a709-d57125a50c01", + "sink_id": "de461596-8a75-4b0e-ae2e-09206e48fdc3", + "source_name": "value", + "sink_name": "values_#_Keyword_Difficulty", + "is_static": false + }, + { + "id": "df7f655b-4daf-48bb-b5db-1268c27d7103", + "source_id": "e588a9ea-4148-45cd-a709-d57125a50c01", + "sink_id": "24690e20-a754-4cf7-89e1-a67d2c2f9549", + "source_name": "error", + "sink_name": "value", + "is_static": false + } + ], + "graph_id": "9dfa5527-15f3-4769-b173-1dd9ecfe19da", + "graph_version": 246, + "webhook_id": null, + "webhook": null + }, + { + "id": "30576680-b666-45e3-86b4-4ed4520c8ac0", + "block_id": "95d1b990-ce13-4d88-9737-ba5c2070c97b", + "input_default": { + "type": "number" + }, + "metadata": { + "position": { + "x": 13971.094121579132, + "y": -2101.944509517056 + }, + "customized_name": "Convert SV to Number" + }, + "input_links": [ + { + "id": "3f10a5ab-d964-4a24-8dcd-14d4f736a862", + "source_id": "951f011a-0f21-43a2-91b4-3b8734a4fc53", + "sink_id": "30576680-b666-45e3-86b4-4ed4520c8ac0", + "source_name": "Search Volume", + "sink_name": "value", + "is_static": false + } + ], + "output_links": [ + { + "id": "d5bfc1fa-3a43-427f-8073-36f1083e836d", + "source_id": "30576680-b666-45e3-86b4-4ed4520c8ac0", + "sink_id": "de461596-8a75-4b0e-ae2e-09206e48fdc3", + "source_name": "value", + "sink_name": "values_#_Search_Volume", + "is_static": false + }, + { + "id": "1bd68af3-2ba6-46a4-b140-eb472f3bcfd1", + "source_id": "30576680-b666-45e3-86b4-4ed4520c8ac0", + "sink_id": "fcb3b8f6-f4ae-4868-a4d6-4ebca0d9c304", + "source_name": "error", + "sink_name": "value", + "is_static": false + } + ], + "graph_id": "9dfa5527-15f3-4769-b173-1dd9ecfe19da", + "graph_version": 246, + "webhook_id": null, + "webhook": null + }, + { + "id": "fcb3b8f6-f4ae-4868-a4d6-4ebca0d9c304", + "block_id": "363ae599-353e-4804-937e-b2ee3cef3da4", + "input_default": { + "name": "ERROR_CONVERTING_TYPE", + "title": null, + "value": null, + "format": "", + "secret": false, + "advanced": false, + "description": null + }, + "metadata": { + "position": { + "x": 14875.346915635211, + "y": 4561.969587130949 + } + }, + "input_links": [ + { + "id": "1bd68af3-2ba6-46a4-b140-eb472f3bcfd1", + "source_id": "30576680-b666-45e3-86b4-4ed4520c8ac0", + "sink_id": "fcb3b8f6-f4ae-4868-a4d6-4ebca0d9c304", + "source_name": "error", + "sink_name": "value", + 
"is_static": false + } + ], + "output_links": [], + "graph_id": "9dfa5527-15f3-4769-b173-1dd9ecfe19da", + "graph_version": 246, + "webhook_id": null, + "webhook": null + }, + { + "id": "24690e20-a754-4cf7-89e1-a67d2c2f9549", + "block_id": "363ae599-353e-4804-937e-b2ee3cef3da4", + "input_default": { + "name": "ERROR_CONVERTING_TYPE", + "title": null, + "value": null, + "format": "", + "secret": false, + "advanced": false, + "description": null + }, + "metadata": { + "position": { + "x": 15649.703538991489, + "y": 4565.460381735377 + } + }, + "input_links": [ + { + "id": "df7f655b-4daf-48bb-b5db-1268c27d7103", + "source_id": "e588a9ea-4148-45cd-a709-d57125a50c01", + "sink_id": "24690e20-a754-4cf7-89e1-a67d2c2f9549", + "source_name": "error", + "sink_name": "value", + "is_static": false + } + ], + "output_links": [], + "graph_id": "9dfa5527-15f3-4769-b173-1dd9ecfe19da", + "graph_version": 246, + "webhook_id": null, + "webhook": null + }, + { + "id": "3800e8a5-e648-4686-988d-61eae960f126", + "block_id": "1f292d4a-41a4-4977-9684-7c8d560b9f91", + "input_default": { + "model": "perplexity/sonar-deep-research", + "retry": 3, + "prompt": "Research the topic \"{{selected_keyword}}\" and provide comprehensive background information for creating a high-quality blog post.\n\nRESEARCH OBJECTIVES:\n- Current trends and developments related to {{selected_keyword}}\n- Key facts, statistics, and data points\n- Expert insights and authoritative sources\n- Common questions and concerns people have\n- Best practices and actionable advice\n- Recent news or updates in this area\n- Competitive landscape or alternatives (if applicable)\n\nRESEARCH DEPTH:\n- Gather information from authoritative sources\n- Include recent data and statistics where available\n- Identify key subtopics and angles to cover\n- Note any common misconceptions or myths\n- Find real-world examples and case studies\n\nOUTPUT FORMAT:\nProvide detailed research notes that can be used to write an informed, factual blog post. 
Include:\n- Key facts and statistics\n- Important subtopics to cover\n- Authoritative quotes or insights\n- Recent developments or trends\n- Actionable information for readers\n\nKEYWORD: {{selected_keyword}}", + "sys_prompt": "", + "ollama_host": "localhost:11434", + "prompt_values": {} + }, + "metadata": { + "position": { + "x": 6344.985761522615, + "y": -495.2912465560192 + }, + "customized_name": "Deep Research the Keyword Topic" + }, + "input_links": [], + "output_links": [ + { + "id": "325265ca-819c-4fdd-a69f-1b75d2da03f7", + "source_id": "3800e8a5-e648-4686-988d-61eae960f126", + "sink_id": "bee9a01e-2ea8-4a0c-9a44-6150e7d6512b", + "source_name": "response", + "sink_name": "prompt_values_#_research_notes", + "is_static": false + }, + { + "id": "b3cba97a-0157-4a76-bc83-b84e89a1bf20", + "source_id": "3800e8a5-e648-4686-988d-61eae960f126", + "sink_id": "aa29e890-8db6-4ec9-acf5-b7c798d4e604", + "source_name": "error", + "sink_name": "value", + "is_static": false + } + ], + "graph_id": "9dfa5527-15f3-4769-b173-1dd9ecfe19da", + "graph_version": 246, + "webhook_id": null, + "webhook": null + }, + { + "id": "aa29e890-8db6-4ec9-acf5-b7c798d4e604", + "block_id": "363ae599-353e-4804-937e-b2ee3cef3da4", + "input_default": { + "name": "ERROR_DEEP_RESEARCH", + "title": null, + "value": null, + "format": "", + "secret": false, + "advanced": false, + "description": null + }, + "metadata": { + "position": { + "x": 7284.977563710625, + "y": 4644.416894958872 + } + }, + "input_links": [ + { + "id": "b3cba97a-0157-4a76-bc83-b84e89a1bf20", + "source_id": "3800e8a5-e648-4686-988d-61eae960f126", + "sink_id": "aa29e890-8db6-4ec9-acf5-b7c798d4e604", + "source_name": "error", + "sink_name": "value", + "is_static": false + } + ], + "output_links": [], + "graph_id": "9dfa5527-15f3-4769-b173-1dd9ecfe19da", + "graph_version": 246, + "webhook_id": null, + "webhook": null + }, + { + "id": "9a4d8599-a360-424e-a084-f19f02312c52", + "block_id": "ed1ae7a0-b770-4089-b520-1f0005fad19a", + "input_default": { + "size": "landscape", + "model": "Flux 1.1 Pro Ultra", + "style": "any" + }, + "metadata": { + "position": { + "x": 11005.360832763865, + "y": 443.9029054017601 + }, + "customized_name": "Generate Cover Image" + }, + "input_links": [ + { + "id": "1680109f-50c1-4783-a39d-db8dc6ea0609", + "source_id": "6fde0a6a-ef7e-439a-ba65-9b78ca0fd5ae", + "sink_id": "9a4d8599-a360-424e-a084-f19f02312c52", + "source_name": "output", + "sink_name": "prompt", + "is_static": false + } + ], + "output_links": [ + { + "id": "6afb10cc-5dc0-4ecb-a228-31b88640a21f", + "source_id": "9a4d8599-a360-424e-a084-f19f02312c52", + "sink_id": "43ecf959-efbb-48be-bd72-17f9fc580b88", + "source_name": "image_url", + "sink_name": "featured_image", + "is_static": false + }, + { + "id": "b8c352f9-3d01-4e8d-a862-5a6334204e03", + "source_id": "9a4d8599-a360-424e-a084-f19f02312c52", + "sink_id": "65e15cdd-d855-432e-899a-74c9709b1790", + "source_name": "error", + "sink_name": "value", + "is_static": false + } + ], + "graph_id": "9dfa5527-15f3-4769-b173-1dd9ecfe19da", + "graph_version": 246, + "webhook_id": null, + "webhook": null + }, + { + "id": "6fde0a6a-ef7e-439a-ba65-9b78ca0fd5ae", + "block_id": "0e50422c-6dee-4145-83d6-3a5a392f65de", + "input_default": { + "key": "image_prompt" + }, + "metadata": { + "position": { + "x": 10446.404588741585, + "y": 440.70569332210243 + }, + "customized_name": "Get Image Prompt" + }, + "input_links": [ + { + "id": "6af59cf6-3a24-4f4e-a940-7ae8b455c4a1", + "source_id": "bee9a01e-2ea8-4a0c-9a44-6150e7d6512b", + "sink_id": 
"6fde0a6a-ef7e-439a-ba65-9b78ca0fd5ae", + "source_name": "response", + "sink_name": "input", + "is_static": false + } + ], + "output_links": [ + { + "id": "1680109f-50c1-4783-a39d-db8dc6ea0609", + "source_id": "6fde0a6a-ef7e-439a-ba65-9b78ca0fd5ae", + "sink_id": "9a4d8599-a360-424e-a084-f19f02312c52", + "source_name": "output", + "sink_name": "prompt", + "is_static": false + } + ], + "graph_id": "9dfa5527-15f3-4769-b173-1dd9ecfe19da", + "graph_version": 246, + "webhook_id": null, + "webhook": null + }, + { + "id": "65e15cdd-d855-432e-899a-74c9709b1790", + "block_id": "363ae599-353e-4804-937e-b2ee3cef3da4", + "input_default": { + "name": "ERROR_GENERATING_IMAGE", + "title": null, + "value": null, + "format": "", + "secret": false, + "advanced": false, + "description": null + }, + "metadata": { + "position": { + "x": 11383.484804408008, + "y": 4583.174420896016 + } + }, + "input_links": [ + { + "id": "b8c352f9-3d01-4e8d-a862-5a6334204e03", + "source_id": "9a4d8599-a360-424e-a084-f19f02312c52", + "sink_id": "65e15cdd-d855-432e-899a-74c9709b1790", + "source_name": "error", + "sink_name": "value", + "is_static": false + } + ], + "output_links": [], + "graph_id": "9dfa5527-15f3-4769-b173-1dd9ecfe19da", + "graph_version": 246, + "webhook_id": null, + "webhook": null + }, + { + "id": "43ecf959-efbb-48be-bd72-17f9fc580b88", + "block_id": "ee4fe08c-18f9-442f-a985-235379b932e1", + "input_default": { + "slug": null, + "tags": [], + "author": null, + "excerpt": null, + "categories": [], + "media_urls": [], + "featured_image": null + }, + "metadata": { + "position": { + "x": 12029.32958727942, + "y": -1582.0626805947688 + } + }, + "input_links": [ + { + "id": "6afb10cc-5dc0-4ecb-a228-31b88640a21f", + "source_id": "9a4d8599-a360-424e-a084-f19f02312c52", + "sink_id": "43ecf959-efbb-48be-bd72-17f9fc580b88", + "source_name": "image_url", + "sink_name": "featured_image", + "is_static": false + }, + { + "id": "cc1b7d8b-391a-4906-a022-9fe2d73cffc9", + "source_id": "26446b03-3117-4f7e-af10-855fdd1089c3", + "sink_id": "43ecf959-efbb-48be-bd72-17f9fc580b88", + "source_name": "positive", + "sink_name": "site", + "is_static": false + }, + { + "id": "49b1300d-a7c0-4231-a3b9-2388c7daac45", + "source_id": "f8a8a5fa-fa17-43f3-ab83-f7234e4148fe", + "sink_id": "43ecf959-efbb-48be-bd72-17f9fc580b88", + "source_name": "Humanized Text", + "sink_name": "content", + "is_static": false + }, + { + "id": "d006c5ea-ad1b-4572-a451-8d60a87b65dc", + "source_id": "79d2d459-08ff-4ad3-a084-174243f4fb5e", + "sink_id": "43ecf959-efbb-48be-bd72-17f9fc580b88", + "source_name": "output", + "sink_name": "title", + "is_static": false + } + ], + "output_links": [ + { + "id": "77e0d0e4-da9e-47b3-812c-72a52c12513e", + "source_id": "43ecf959-efbb-48be-bd72-17f9fc580b88", + "sink_id": "de461596-8a75-4b0e-ae2e-09206e48fdc3", + "source_name": "post_url", + "sink_name": "values_#_Published URL", + "is_static": false + } + ], + "graph_id": "9dfa5527-15f3-4769-b173-1dd9ecfe19da", + "graph_version": 246, + "webhook_id": null, + "webhook": null + }, + { + "id": "233293ef-ce18-45b0-ab68-37d17d8bc6f9", + "block_id": "0e50422c-6dee-4145-83d6-3a5a392f65de", + "input_default": { + "key": "content" + }, + "metadata": { + "position": { + "x": 10445.27781719332, + "y": -1491.8466055640126 + }, + "customized_name": "Get Blogpost Content" + }, + "input_links": [ + { + "id": "c6280938-afc1-40a6-8493-80c234ccb6a0", + "source_id": "bee9a01e-2ea8-4a0c-9a44-6150e7d6512b", + "sink_id": "233293ef-ce18-45b0-ab68-37d17d8bc6f9", + "source_name": "response", + "sink_name": 
"input", + "is_static": false + } + ], + "output_links": [ + { + "id": "4bc17613-0f77-42a4-8950-e104cf654ae1", + "source_id": "233293ef-ce18-45b0-ab68-37d17d8bc6f9", + "sink_id": "f8a8a5fa-fa17-43f3-ab83-f7234e4148fe", + "source_name": "output", + "sink_name": "AI Generated Text", + "is_static": false + } + ], + "graph_id": "9dfa5527-15f3-4769-b173-1dd9ecfe19da", + "graph_version": 246, + "webhook_id": null, + "webhook": null + }, + { + "id": "502780bc-4065-4c4b-b066-ee1d0ef0c543", + "block_id": "e189baac-8c20-45a1-94a7-55177ea42565", + "input_default": { + "inputs": {}, + "user_id": "b3e41ea4-2f4c-4964-927c-fe682d857bad", + "graph_id": "4b853f16-916d-4df3-ba14-2e7e8592f4de", + "input_schema": { + "type": "object", + "required": [ + "Wordpress Blog URL" + ], + "properties": { + "Wordpress Blog URL": { + "anyOf": [ + { + "type": "string", + "format": "short-text" + }, + { + "type": "null" + } + ], + "title": "Wordpress Blog URL", + "secret": false, + "advanced": false, + "description": "e.g aiespresso.wordpress.com - no \"http://\" or \"/\"s" + } + } + }, + "graph_version": 11, + "output_schema": { + "type": "object", + "required": [ + "Blog Post Titles", + "Error" + ], + "properties": { + "Error": { + "title": "Error", + "secret": false, + "advanced": false + }, + "Blog Post Titles": { + "title": "Blog Post Titles", + "secret": false, + "advanced": false + } + } + } + }, + "metadata": { + "position": { + "x": -1496.8661397394753, + "y": -1675.3060584242785 + } + }, + "input_links": [ + { + "id": "b140f563-da18-4942-8f03-f4e7bc650a24", + "source_id": "26446b03-3117-4f7e-af10-855fdd1089c3", + "sink_id": "502780bc-4065-4c4b-b066-ee1d0ef0c543", + "source_name": "positive", + "sink_name": "Wordpress Blog URL", + "is_static": false + } + ], + "output_links": [ + { + "id": "5e2dc3d9-6e0b-432b-bd98-f8b072706f76", + "source_id": "502780bc-4065-4c4b-b066-ee1d0ef0c543", + "sink_id": "d9f8fff7-930a-4e84-bf1a-104702527283", + "source_name": "Blog Post Titles", + "sink_name": "value", + "is_static": false + } + ], + "graph_id": "9dfa5527-15f3-4769-b173-1dd9ecfe19da", + "graph_version": 246, + "webhook_id": null, + "webhook": null + }, + { + "id": "a4ac5b96-607c-4924-a4fe-8ab8a59ecf2d", + "block_id": "363ae599-353e-4804-937e-b2ee3cef3da4", + "input_default": { + "name": "Error Humanizing Text", + "title": null, + "value": null, + "format": "", + "secret": false, + "advanced": false, + "description": null + }, + "metadata": { + "position": { + "x": 12111.055420431672, + "y": 4624.703912317778 + } + }, + "input_links": [ + { + "id": "2ed564ec-112c-424f-bc28-f9d328888b03", + "source_id": "f8a8a5fa-fa17-43f3-ab83-f7234e4148fe", + "sink_id": "a4ac5b96-607c-4924-a4fe-8ab8a59ecf2d", + "source_name": "Error", + "sink_name": "value", + "is_static": false + } + ], + "output_links": [], + "graph_id": "9dfa5527-15f3-4769-b173-1dd9ecfe19da", + "graph_version": 246, + "webhook_id": null, + "webhook": null + }, + { + "id": "4ec86588-fbed-4a09-94ab-66001bec189a", + "block_id": "cc10ff7b-7753-4ff2-9af6-9399b1a7eddc", + "input_default": { + "text": "= Secondary Keyword Data =" + }, + "metadata": { + "position": { + "x": 3369.199845843561, + "y": -2826.507493945017 + } + }, + "input_links": [], + "output_links": [], + "graph_id": "9dfa5527-15f3-4769-b173-1dd9ecfe19da", + "graph_version": 246, + "webhook_id": null, + "webhook": null + }, + { + "id": "9e4e8cc7-8f72-40c8-93a4-5e73504fbff1", + "block_id": "715696a0-e1da-45c8-b209-c2fa9c3b0be6", + "input_default": { + "value2": "1", + "no_value": null, + "operator": "<", + 
"yes_value": "I couldn't find any secondary keywords at all, sorry!\n\nPlease run me again and let's see if we can fix that. \nIf this keeps happening then report this to my creator at contact@agpt.co" + }, + "metadata": { + "position": { + "x": 3891.9644264275603, + "y": -2154.449817171518 + }, + "customized_name": "Confirm at Least 1 Keyword" + }, + "input_links": [ + { + "id": "6a2d8156-ae27-463b-bce6-7e8f4211c8f0", + "source_id": "b7eccd17-1fb1-40da-b41a-30b888cc2fdf", + "sink_id": "9e4e8cc7-8f72-40c8-93a4-5e73504fbff1", + "source_name": "value", + "sink_name": "no_value", + "is_static": false + }, + { + "id": "5723772a-f223-4726-9e17-189699534395", + "source_id": "0939990c-e972-45b0-9a1c-03d61f65eee0", + "sink_id": "9e4e8cc7-8f72-40c8-93a4-5e73504fbff1", + "source_name": "total_count", + "sink_name": "value1", + "is_static": false + } + ], + "output_links": [ + { + "id": "dd201eb6-26bf-4133-acd0-cb8cc94ee739", + "source_id": "9e4e8cc7-8f72-40c8-93a4-5e73504fbff1", + "sink_id": "bee9a01e-2ea8-4a0c-9a44-6150e7d6512b", + "source_name": "no_output", + "sink_name": "prompt_values_#_keyword_data", + "is_static": false + }, + { + "id": "772ef883-8c97-4637-81b9-028fda6d6a80", + "source_id": "9e4e8cc7-8f72-40c8-93a4-5e73504fbff1", + "sink_id": "8d0906fe-9c39-46f5-ad68-608a81adb41e", + "source_name": "yes_output", + "sink_name": "value", + "is_static": false + } + ], + "graph_id": "9dfa5527-15f3-4769-b173-1dd9ecfe19da", + "graph_version": 246, + "webhook_id": null, + "webhook": null + }, + { + "id": "e7bfbe6f-56fa-4f61-8ac9-d0482e668de2", + "block_id": "cc10ff7b-7753-4ff2-9af6-9399b1a7eddc", + "input_default": { + "text": "Check if no secondary keywords were found" + }, + "metadata": { + "position": { + "x": 3980.309914573834, + "y": -2540.4611027109822 + } + }, + "input_links": [], + "output_links": [], + "graph_id": "9dfa5527-15f3-4769-b173-1dd9ecfe19da", + "graph_version": 246, + "webhook_id": null, + "webhook": null + }, + { + "id": "c1493119-c005-491f-aa7b-73ee93ee3174", + "block_id": "cc10ff7b-7753-4ff2-9af6-9399b1a7eddc", + "input_default": { + "text": "Note: This is currently hardcoded to the United States as DataForSEO requires a location to be given. 
\n\nList of all locations: https://cdn.dataforseo.com/v3/locations/locations_and_languages_dataforseo_labs_2025_08_05.csv" + }, + "metadata": { + "position": { + "x": 2774.8281405524294, + "y": -2535.6192391146647 + } + }, + "input_links": [], + "output_links": [], + "graph_id": "9dfa5527-15f3-4769-b173-1dd9ecfe19da", + "graph_version": 246, + "webhook_id": null, + "webhook": null + }, + { + "id": "26446b03-3117-4f7e-af10-855fdd1089c3", + "block_id": "3146e4fe-2cdd-4f29-bd12-0c9d5bb4deb0", + "input_default": { + "group": 1, + "dot_all": false, + "pattern": "^\\s*(?:https?:\\/\\/)?(?:www\\.)?(([a-z0-9](?:[a-z0-9-]{0,61}[a-z0-9])?)\\.wordpress\\.com)(?:[\\/?#].*)?\\s*$", + "find_all": false, + "case_sensitive": false + }, + "metadata": { + "position": { + "x": -2699.292888944363, + "y": -1682.9669074436983 + }, + "customized_name": "Confirm Wordpress URL" + }, + "input_links": [ + { + "id": "16445831-8df5-437b-9bb9-66c961ca6bc4", + "source_id": "f6fe5b4c-0003-4c25-861c-e61d6d139ad6", + "sink_id": "26446b03-3117-4f7e-af10-855fdd1089c3", + "source_name": "result", + "sink_name": "text", + "is_static": true + } + ], + "output_links": [ + { + "id": "cc1b7d8b-391a-4906-a022-9fe2d73cffc9", + "source_id": "26446b03-3117-4f7e-af10-855fdd1089c3", + "sink_id": "43ecf959-efbb-48be-bd72-17f9fc580b88", + "source_name": "positive", + "sink_name": "site", + "is_static": false + }, + { + "id": "b140f563-da18-4942-8f03-f4e7bc650a24", + "source_id": "26446b03-3117-4f7e-af10-855fdd1089c3", + "sink_id": "502780bc-4065-4c4b-b066-ee1d0ef0c543", + "source_name": "positive", + "sink_name": "Wordpress Blog URL", + "is_static": false + }, + { + "id": "ad31ceeb-c0e3-4608-bc55-87166a8097d9", + "source_id": "26446b03-3117-4f7e-af10-855fdd1089c3", + "sink_id": "b84a8845-1643-44fb-827d-147e2626b9a8", + "source_name": "negative", + "sink_name": "value", + "is_static": false + } + ], + "graph_id": "9dfa5527-15f3-4769-b173-1dd9ecfe19da", + "graph_version": 246, + "webhook_id": null, + "webhook": null + }, + { + "id": "b84a8845-1643-44fb-827d-147e2626b9a8", + "block_id": "363ae599-353e-4804-937e-b2ee3cef3da4", + "input_default": { + "name": "Hmm\u2026 that doesn\u2019t look like a valid WordPress blog address. 
\nIt should look like this: yourname.wordpress.com", + "title": null, + "value": null, + "format": "", + "secret": false, + "advanced": false, + "description": null + }, + "metadata": { + "position": { + "x": -2109.7525384562996, + "y": -1682.621039550986 + }, + "customized_name": "Error Output" + }, + "input_links": [ + { + "id": "ad31ceeb-c0e3-4608-bc55-87166a8097d9", + "source_id": "26446b03-3117-4f7e-af10-855fdd1089c3", + "sink_id": "b84a8845-1643-44fb-827d-147e2626b9a8", + "source_name": "negative", + "sink_name": "value", + "is_static": false + } + ], + "output_links": [], + "graph_id": "9dfa5527-15f3-4769-b173-1dd9ecfe19da", + "graph_version": 246, + "webhook_id": null, + "webhook": null + }, + { + "id": "8d0906fe-9c39-46f5-ad68-608a81adb41e", + "block_id": "363ae599-353e-4804-937e-b2ee3cef3da4", + "input_default": { + "name": "Secondary Keyword Error", + "title": null, + "value": null, + "format": "", + "secret": false, + "advanced": false, + "description": null + }, + "metadata": { + "position": { + "x": 5051.227386059662, + "y": 4582.181714024435 + }, + "customized_name": "Keyword Error" + }, + "input_links": [ + { + "id": "772ef883-8c97-4637-81b9-028fda6d6a80", + "source_id": "9e4e8cc7-8f72-40c8-93a4-5e73504fbff1", + "sink_id": "8d0906fe-9c39-46f5-ad68-608a81adb41e", + "source_name": "yes_output", + "sink_name": "value", + "is_static": false + } + ], + "output_links": [], + "graph_id": "9dfa5527-15f3-4769-b173-1dd9ecfe19da", + "graph_version": 246, + "webhook_id": null, + "webhook": null + }, + { + "id": "07958cbd-89da-4377-8935-31a904abfc7c", + "block_id": "cc10ff7b-7753-4ff2-9af6-9399b1a7eddc", + "input_default": { + "text": "== INPUTS ==" + }, + "metadata": { + "position": { + "x": -5188.243233618317, + "y": -2649.885729295427 + } + }, + "input_links": [], + "output_links": [], + "graph_id": "9dfa5527-15f3-4769-b173-1dd9ecfe19da", + "graph_version": 246, + "webhook_id": null, + "webhook": null + }, + { + "id": "25085d63-cbca-4a2a-8c4e-2d22e12de0be", + "block_id": "c0a8e994-ebf1-4a9c-a4d8-89d09c86741b", + "input_default": { + "name": "Website Primary Topic", + "title": null, + "value": null, + "secret": false, + "advanced": false, + "description": "The main subject or theme your blog is about (e.g., 'AI Automation', 'Viral Marketing')", + "placeholder_values": [] + }, + "metadata": { + "position": { + "x": -6703.390474290607, + "y": -2191.6416280072335 + }, + "customized_name": "Website Topic Input" + }, + "input_links": [], + "output_links": [ + { + "id": "d9eeca3d-f84b-4e43-85cc-0fd6fffd0a8b", + "source_id": "25085d63-cbca-4a2a-8c4e-2d22e12de0be", + "sink_id": "951f011a-0f21-43a2-91b4-3b8734a4fc53", + "source_name": "result", + "sink_name": "Website Primary Topic", + "is_static": true + }, + { + "id": "1c9fe832-5877-4636-aa6d-5eaeacd8a065", + "source_id": "25085d63-cbca-4a2a-8c4e-2d22e12de0be", + "sink_id": "f5a81332-edeb-49c2-9460-cbfcc139d21d", + "source_name": "result", + "sink_name": "prompt_values_#_website_topic", + "is_static": true + } + ], + "graph_id": "9dfa5527-15f3-4769-b173-1dd9ecfe19da", + "graph_version": 246, + "webhook_id": null, + "webhook": null + }, + { + "id": "9aac9ea1-3413-4ce0-a9fc-26d5aec9ef93", + "block_id": "c0a8e994-ebf1-4a9c-a4d8-89d09c86741b", + "input_default": { + "name": "Website Description", + "title": null, + "value": null, + "secret": false, + "advanced": false, + "description": "Describe your website, including its purpose, tone, and approach. \n\nFor example: \"A blog that makes AI approachable for the average person. 
We share clear, easy-to-follow guides and curated recommendations on which models and tools to use.\"", + "placeholder_values": [] + }, + "metadata": { + "position": { + "x": -6140.322999480015, + "y": -2192.306642371697 + }, + "customized_name": "Website Pitch" + }, + "input_links": [], + "output_links": [ + { + "id": "9c5af979-7ca0-44a0-8350-e447a9e01765", + "source_id": "9aac9ea1-3413-4ce0-a9fc-26d5aec9ef93", + "sink_id": "f5a81332-edeb-49c2-9460-cbfcc139d21d", + "source_name": "result", + "sink_name": "prompt_values_#_website_description", + "is_static": true + }, + { + "id": "634a7d56-d5f4-4133-a475-0be6e49c381a", + "source_id": "9aac9ea1-3413-4ce0-a9fc-26d5aec9ef93", + "sink_id": "951f011a-0f21-43a2-91b4-3b8734a4fc53", + "source_name": "result", + "sink_name": "Website Description", + "is_static": true + } + ], + "graph_id": "9dfa5527-15f3-4769-b173-1dd9ecfe19da", + "graph_version": 246, + "webhook_id": null, + "webhook": null + }, + { + "id": "d9f8fff7-930a-4e84-bf1a-104702527283", + "block_id": "95d1b990-ce13-4d88-9737-ba5c2070c97b", + "input_default": { + "type": "string" + }, + "metadata": { + "position": { + "x": 639.7987235732376, + "y": -2165.8505997999355 + }, + "customized_name": "Convert to string" + }, + "input_links": [ + { + "id": "5e2dc3d9-6e0b-432b-bd98-f8b072706f76", + "source_id": "502780bc-4065-4c4b-b066-ee1d0ef0c543", + "sink_id": "d9f8fff7-930a-4e84-bf1a-104702527283", + "source_name": "Blog Post Titles", + "sink_name": "value", + "is_static": false + } + ], + "output_links": [ + { + "id": "5241ccc9-8df8-41ee-9305-0d345fbfbf3d", + "source_id": "d9f8fff7-930a-4e84-bf1a-104702527283", + "sink_id": "951f011a-0f21-43a2-91b4-3b8734a4fc53", + "source_name": "value", + "sink_name": "Previous Posts", + "is_static": false + }, + { + "id": "59a3eebb-7b26-4dfe-900b-35e3a06866bd", + "source_id": "d9f8fff7-930a-4e84-bf1a-104702527283", + "sink_id": "932e0dfe-aec4-4d1f-be74-432e6c5b77f7", + "source_name": "error", + "sink_name": "value", + "is_static": false + } + ], + "graph_id": "9dfa5527-15f3-4769-b173-1dd9ecfe19da", + "graph_version": 246, + "webhook_id": null, + "webhook": null + }, + { + "id": "c701345d-37e8-40b7-82b7-72c7b10337ed", + "block_id": "95d1b990-ce13-4d88-9737-ba5c2070c97b", + "input_default": { + "type": "string" + }, + "metadata": { + "position": { + "x": 16719.029182231257, + "y": -1350.4168181153414 + }, + "customized_name": "Convert to String" + }, + "input_links": [ + { + "id": "6330180a-9c46-4b53-a246-27e8bb0d8f4d", + "source_id": "de461596-8a75-4b0e-ae2e-09206e48fdc3", + "sink_id": "c701345d-37e8-40b7-82b7-72c7b10337ed", + "source_name": "dictionary", + "sink_name": "value", + "is_static": false + } + ], + "output_links": [ + { + "id": "b9d5fbed-bb4f-42ef-b415-61e1ea4f1971", + "source_id": "c701345d-37e8-40b7-82b7-72c7b10337ed", + "sink_id": "f5a81332-edeb-49c2-9460-cbfcc139d21d", + "source_name": "value", + "sink_name": "prompt_values_#_data", + "is_static": false + } + ], + "graph_id": "9dfa5527-15f3-4769-b173-1dd9ecfe19da", + "graph_version": 246, + "webhook_id": null, + "webhook": null + }, + { + "id": "f5a81332-edeb-49c2-9460-cbfcc139d21d", + "block_id": "1f292d4a-41a4-4977-9684-7c8d560b9f91", + "input_default": { + "model": "gpt-5-2025-08-07", + "retry": 3, + "prompt": "Turn this data informing the user of a just-published blog post into a non-technical human-readable report. 
\n\n{{data}}\n\n\nAs part of your process you also performed deep research of the topic on the web.\nYou selected the topic based on:\n- The user's website pitch they provided to you {{ website_description | safe }}\n- The user's website's primary topic they provided to you. {{ website_topic | safe }}\n- The previous topics already covered on the users blog (post titles).\n- Recommendations from an SEO Expert Agent, which found underserved keywords in the niche defined from the points above. Underserved meaning low KD and high enough volume, so that the user's post has the best chance of success.\n\nRules:\n- Do not offer to make any follow up actions or ask a question. This is not a conversation, it's a one-way report to which there will be no reply.\n- Format using Markdown.\n- Return only the text with no additional commentary or decoration.\n- Do not make any assumptions. Only write about what can be 100% confirmed through the data you have available to you. Do not speculate.\n\nWrite your response from the first person, like this: \"I successfully published a post to your blog titled XYZ.... etc.\"\nYou can reference the SEO expert Sub-Agent that you delegated SEO research too.", + "sys_prompt": "", + "ollama_host": "localhost:11434", + "prompt_values": { + "data": "", + "website_topic": "", + "website_description": "" + } + }, + "metadata": { + "position": { + "x": 17329.25442073381, + "y": -1349.5874683419552 + }, + "customized_name": "Generate User Report" + }, + "input_links": [ + { + "id": "1c9fe832-5877-4636-aa6d-5eaeacd8a065", + "source_id": "25085d63-cbca-4a2a-8c4e-2d22e12de0be", + "sink_id": "f5a81332-edeb-49c2-9460-cbfcc139d21d", + "source_name": "result", + "sink_name": "prompt_values_#_website_topic", + "is_static": true + }, + { + "id": "b9d5fbed-bb4f-42ef-b415-61e1ea4f1971", + "source_id": "c701345d-37e8-40b7-82b7-72c7b10337ed", + "sink_id": "f5a81332-edeb-49c2-9460-cbfcc139d21d", + "source_name": "value", + "sink_name": "prompt_values_#_data", + "is_static": false + }, + { + "id": "9c5af979-7ca0-44a0-8350-e447a9e01765", + "source_id": "9aac9ea1-3413-4ce0-a9fc-26d5aec9ef93", + "sink_id": "f5a81332-edeb-49c2-9460-cbfcc139d21d", + "source_name": "result", + "sink_name": "prompt_values_#_website_description", + "is_static": true + } + ], + "output_links": [ + { + "id": "0d13989b-9ad2-4da1-9f50-75639ed21775", + "source_id": "f5a81332-edeb-49c2-9460-cbfcc139d21d", + "sink_id": "da04c7e3-0a1e-4360-8feb-3cd6a76eec55", + "source_name": "response", + "sink_name": "value", + "is_static": false + } + ], + "graph_id": "9dfa5527-15f3-4769-b173-1dd9ecfe19da", + "graph_version": 246, + "webhook_id": null, + "webhook": null + }, + { + "id": "0939990c-e972-45b0-9a1c-03d61f65eee0", + "block_id": "8f2e4d6a-1b3c-4a5e-9d7f-2c8e6a4b3f1d", + "input_default": { + "depth": 1, + "limit": 100, + "language_code": "en", + "location_code": 2840, + "include_serp_info": false, + "include_seed_keyword": true, + "include_clickstream_data": false + }, + "metadata": { + "position": { + "x": 2687.4900253744463, + "y": -2162.6734634319273 + } + }, + "input_links": [ + { + "id": "1b9ae93d-e8f3-46aa-b983-5e4bd95f2f56", + "source_id": "951f011a-0f21-43a2-91b4-3b8734a4fc53", + "sink_id": "0939990c-e972-45b0-9a1c-03d61f65eee0", + "source_name": "Primary Keyword", + "sink_name": "keyword", + "is_static": false + } + ], + "output_links": [ + { + "id": "ac131160-cec8-4919-b5ad-ae3ab1ebd461", + "source_id": "0939990c-e972-45b0-9a1c-03d61f65eee0", + "sink_id": "b7eccd17-1fb1-40da-b41a-30b888cc2fdf", + "source_name": 
"related_keywords", + "sink_name": "value", + "is_static": false + }, + { + "id": "5723772a-f223-4726-9e17-189699534395", + "source_id": "0939990c-e972-45b0-9a1c-03d61f65eee0", + "sink_id": "9e4e8cc7-8f72-40c8-93a4-5e73504fbff1", + "source_name": "total_count", + "sink_name": "value1", + "is_static": false + } + ], + "graph_id": "9dfa5527-15f3-4769-b173-1dd9ecfe19da", + "graph_version": 246, + "webhook_id": null, + "webhook": null + }, + { + "id": "b7eccd17-1fb1-40da-b41a-30b888cc2fdf", + "block_id": "95d1b990-ce13-4d88-9737-ba5c2070c97b", + "input_default": { + "type": "string" + }, + "metadata": { + "position": { + "x": 3304.650423375051, + "y": -2158.038818543938 + }, + "customized_name": "Convert to String" + }, + "input_links": [ + { + "id": "ac131160-cec8-4919-b5ad-ae3ab1ebd461", + "source_id": "0939990c-e972-45b0-9a1c-03d61f65eee0", + "sink_id": "b7eccd17-1fb1-40da-b41a-30b888cc2fdf", + "source_name": "related_keywords", + "sink_name": "value", + "is_static": false + } + ], + "output_links": [ + { + "id": "6a2d8156-ae27-463b-bce6-7e8f4211c8f0", + "source_id": "b7eccd17-1fb1-40da-b41a-30b888cc2fdf", + "sink_id": "9e4e8cc7-8f72-40c8-93a4-5e73504fbff1", + "source_name": "value", + "sink_name": "no_value", + "is_static": false + } + ], + "graph_id": "9dfa5527-15f3-4769-b173-1dd9ecfe19da", + "graph_version": 246, + "webhook_id": null, + "webhook": null + }, + { + "id": "932e0dfe-aec4-4d1f-be74-432e6c5b77f7", + "block_id": "363ae599-353e-4804-937e-b2ee3cef3da4", + "input_default": { + "name": "Error", + "title": null, + "value": null, + "format": "", + "secret": false, + "advanced": false, + "description": null + }, + "metadata": { + "position": { + "x": 2515.1673028985, + "y": 4578.667647973382 + } + }, + "input_links": [ + { + "id": "59a3eebb-7b26-4dfe-900b-35e3a06866bd", + "source_id": "d9f8fff7-930a-4e84-bf1a-104702527283", + "sink_id": "932e0dfe-aec4-4d1f-be74-432e6c5b77f7", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "f2c12cd4-fca4-4012-921c-264c1b65d999", + "source_id": "951f011a-0f21-43a2-91b4-3b8734a4fc53", + "sink_id": "932e0dfe-aec4-4d1f-be74-432e6c5b77f7", + "source_name": "Error", + "sink_name": "value", + "is_static": false + } + ], + "output_links": [], + "graph_id": "9dfa5527-15f3-4769-b173-1dd9ecfe19da", + "graph_version": 246, + "webhook_id": null, + "webhook": null + }, + { + "id": "f8a8a5fa-fa17-43f3-ab83-f7234e4148fe", + "block_id": "e189baac-8c20-45a1-94a7-55177ea42565", + "input_default": { + "inputs": {}, + "user_id": "b3e41ea4-2f4c-4964-927c-fe682d857bad", + "graph_id": "b5ed0a0f-2c94-4357-b4e3-19b0357e0431", + "input_schema": { + "type": "object", + "required": [ + "AI Generated Text" + ], + "properties": { + "AI Generated Text": { + "anyOf": [ + { + "type": "string", + "format": "long-text" + }, + { + "type": "null" + } + ], + "title": "AI Generated Text", + "secret": false, + "advanced": false + } + } + }, + "graph_version": 15, + "output_schema": { + "type": "object", + "required": [ + "Humanized Text", + "Error" + ], + "properties": { + "Error": { + "title": "Error", + "secret": false, + "advanced": false + }, + "Humanized Text": { + "title": "Humanized Text", + "secret": false, + "advanced": false + } + } + } + }, + "metadata": { + "position": { + "x": 11263.993876585177, + "y": -1500.0808137159602 + } + }, + "input_links": [ + { + "id": "4bc17613-0f77-42a4-8950-e104cf654ae1", + "source_id": "233293ef-ce18-45b0-ab68-37d17d8bc6f9", + "sink_id": "f8a8a5fa-fa17-43f3-ab83-f7234e4148fe", + "source_name": "output", + 
"sink_name": "AI Generated Text", + "is_static": false + } + ], + "output_links": [ + { + "id": "2ed564ec-112c-424f-bc28-f9d328888b03", + "source_id": "f8a8a5fa-fa17-43f3-ab83-f7234e4148fe", + "sink_id": "a4ac5b96-607c-4924-a4fe-8ab8a59ecf2d", + "source_name": "Error", + "sink_name": "value", + "is_static": false + }, + { + "id": "49b1300d-a7c0-4231-a3b9-2388c7daac45", + "source_id": "f8a8a5fa-fa17-43f3-ab83-f7234e4148fe", + "sink_id": "43ecf959-efbb-48be-bd72-17f9fc580b88", + "source_name": "Humanized Text", + "sink_name": "content", + "is_static": false + } + ], + "graph_id": "9dfa5527-15f3-4769-b173-1dd9ecfe19da", + "graph_version": 246, + "webhook_id": null, + "webhook": null + }, + { + "id": "951f011a-0f21-43a2-91b4-3b8734a4fc53", + "block_id": "e189baac-8c20-45a1-94a7-55177ea42565", + "input_default": { + "inputs": {}, + "user_id": "b3e41ea4-2f4c-4964-927c-fe682d857bad", + "graph_id": "2a65b19f-4c82-4756-8884-8159a614486b", + "input_schema": { + "type": "object", + "required": [ + "Website Primary Topic", + "Domain", + "Website Description", + "Previous Posts" + ], + "properties": { + "Domain": { + "anyOf": [ + { + "type": "string", + "format": "short-text" + }, + { + "type": "null" + } + ], + "title": "Domain", + "secret": false, + "advanced": false + }, + "Previous Posts": { + "anyOf": [ + { + "type": "string", + "format": "long-text" + }, + { + "type": "null" + } + ], + "title": "Previous Posts", + "secret": false, + "advanced": false, + "description": "A list of your website's previous blog post titles" + }, + "Website Description": { + "anyOf": [ + { + "type": "string", + "format": "long-text" + }, + { + "type": "null" + } + ], + "title": "Website Description", + "secret": false, + "advanced": false, + "description": "A brief explanation of your site\u2019s purpose, audience, and value. Example: \u201cA blog that makes AI approachable and easy to benefit from for the average person.\u201d" + }, + "Website Primary Topic": { + "anyOf": [ + { + "type": "string", + "format": "short-text" + }, + { + "type": "null" + } + ], + "title": "Website Primary Topic", + "secret": false, + "advanced": false, + "description": "A short phrase summarizing your main subject or niche. 
Example: \u201cAI for Everyday People.\u201d" + } + } + }, + "graph_version": 38, + "output_schema": { + "type": "object", + "required": [ + "Primary Keyword", + "Error", + "Proposed Title", + "Justification", + "Keyword Difficulty", + "Search Volume" + ], + "properties": { + "Error": { + "title": "Error", + "secret": false, + "advanced": false + }, + "Justification": { + "title": "Justification", + "secret": false, + "advanced": false + }, + "Search Volume": { + "title": "Search Volume", + "secret": false, + "advanced": false + }, + "Proposed Title": { + "title": "Proposed Title", + "secret": false, + "advanced": false + }, + "Primary Keyword": { + "title": "Primary Keyword", + "secret": false, + "advanced": false + }, + "Keyword Difficulty": { + "title": "Keyword Difficulty", + "secret": false, + "advanced": false + } + } + } + }, + "metadata": { + "position": { + "x": 1578.6246712108784, + "y": -2257.448893909135 + } + }, + "input_links": [ + { + "id": "5241ccc9-8df8-41ee-9305-0d345fbfbf3d", + "source_id": "d9f8fff7-930a-4e84-bf1a-104702527283", + "sink_id": "951f011a-0f21-43a2-91b4-3b8734a4fc53", + "source_name": "value", + "sink_name": "Previous Posts", + "is_static": false + }, + { + "id": "d9eeca3d-f84b-4e43-85cc-0fd6fffd0a8b", + "source_id": "25085d63-cbca-4a2a-8c4e-2d22e12de0be", + "sink_id": "951f011a-0f21-43a2-91b4-3b8734a4fc53", + "source_name": "result", + "sink_name": "Website Primary Topic", + "is_static": true + }, + { + "id": "d7862045-b221-4b07-95a8-4a6a774201be", + "source_id": "f6fe5b4c-0003-4c25-861c-e61d6d139ad6", + "sink_id": "951f011a-0f21-43a2-91b4-3b8734a4fc53", + "source_name": "result", + "sink_name": "Domain", + "is_static": true + }, + { + "id": "634a7d56-d5f4-4133-a475-0be6e49c381a", + "source_id": "9aac9ea1-3413-4ce0-a9fc-26d5aec9ef93", + "sink_id": "951f011a-0f21-43a2-91b4-3b8734a4fc53", + "source_name": "result", + "sink_name": "Website Description", + "is_static": true + } + ], + "output_links": [ + { + "id": "3f10a5ab-d964-4a24-8dcd-14d4f736a862", + "source_id": "951f011a-0f21-43a2-91b4-3b8734a4fc53", + "sink_id": "30576680-b666-45e3-86b4-4ed4520c8ac0", + "source_name": "Search Volume", + "sink_name": "value", + "is_static": false + }, + { + "id": "1b9ae93d-e8f3-46aa-b983-5e4bd95f2f56", + "source_id": "951f011a-0f21-43a2-91b4-3b8734a4fc53", + "sink_id": "0939990c-e972-45b0-9a1c-03d61f65eee0", + "source_name": "Primary Keyword", + "sink_name": "keyword", + "is_static": false + }, + { + "id": "d1ba66d5-158f-44ec-adb3-158775365f3e", + "source_id": "951f011a-0f21-43a2-91b4-3b8734a4fc53", + "sink_id": "bee9a01e-2ea8-4a0c-9a44-6150e7d6512b", + "source_name": "Proposed Title", + "sink_name": "prompt_values_#_seo_expert_proposed_title", + "is_static": false + }, + { + "id": "c00b16c3-421c-4078-a335-03d510bf7213", + "source_id": "951f011a-0f21-43a2-91b4-3b8734a4fc53", + "sink_id": "de461596-8a75-4b0e-ae2e-09206e48fdc3", + "source_name": "Justification", + "sink_name": "values_#_SEO_Experts_Reason_for_keyword_choice", + "is_static": false + }, + { + "id": "b42dc4c7-e7ba-4c20-9c10-3275867cc22d", + "source_id": "951f011a-0f21-43a2-91b4-3b8734a4fc53", + "sink_id": "bee9a01e-2ea8-4a0c-9a44-6150e7d6512b", + "source_name": "Justification", + "sink_name": "prompt_values_#_seo_expert_reason", + "is_static": false + }, + { + "id": "b2b664e7-cf87-4651-bb5c-1256be2db489", + "source_id": "951f011a-0f21-43a2-91b4-3b8734a4fc53", + "sink_id": "bee9a01e-2ea8-4a0c-9a44-6150e7d6512b", + "source_name": "Primary Keyword", + "sink_name": "prompt_values_#_keyword", + "is_static": false 
+ }, + { + "id": "f2c12cd4-fca4-4012-921c-264c1b65d999", + "source_id": "951f011a-0f21-43a2-91b4-3b8734a4fc53", + "sink_id": "932e0dfe-aec4-4d1f-be74-432e6c5b77f7", + "source_name": "Error", + "sink_name": "value", + "is_static": false + }, + { + "id": "d76f1e26-3c01-4f62-a70f-a49cbf4b135a", + "source_id": "951f011a-0f21-43a2-91b4-3b8734a4fc53", + "sink_id": "de461596-8a75-4b0e-ae2e-09206e48fdc3", + "source_name": "Primary Keyword", + "sink_name": "values_#_Primary_Target_Keyword", + "is_static": false + }, + { + "id": "2d911061-a380-49cd-bfe1-bb5f1ca3bc9d", + "source_id": "951f011a-0f21-43a2-91b4-3b8734a4fc53", + "sink_id": "e588a9ea-4148-45cd-a709-d57125a50c01", + "source_name": "Keyword Difficulty", + "sink_name": "value", + "is_static": false + } + ], + "graph_id": "9dfa5527-15f3-4769-b173-1dd9ecfe19da", + "graph_version": 246, + "webhook_id": null, + "webhook": null + } + ], + "links": [ + { + "id": "6330180a-9c46-4b53-a246-27e8bb0d8f4d", + "source_id": "de461596-8a75-4b0e-ae2e-09206e48fdc3", + "sink_id": "c701345d-37e8-40b7-82b7-72c7b10337ed", + "source_name": "dictionary", + "sink_name": "value", + "is_static": false + }, + { + "id": "49b1300d-a7c0-4231-a3b9-2388c7daac45", + "source_id": "f8a8a5fa-fa17-43f3-ab83-f7234e4148fe", + "sink_id": "43ecf959-efbb-48be-bd72-17f9fc580b88", + "source_name": "Humanized Text", + "sink_name": "content", + "is_static": false + }, + { + "id": "2d911061-a380-49cd-bfe1-bb5f1ca3bc9d", + "source_id": "951f011a-0f21-43a2-91b4-3b8734a4fc53", + "sink_id": "e588a9ea-4148-45cd-a709-d57125a50c01", + "source_name": "Keyword Difficulty", + "sink_name": "value", + "is_static": false + }, + { + "id": "ad31ceeb-c0e3-4608-bc55-87166a8097d9", + "source_id": "26446b03-3117-4f7e-af10-855fdd1089c3", + "sink_id": "b84a8845-1643-44fb-827d-147e2626b9a8", + "source_name": "negative", + "sink_name": "value", + "is_static": false + }, + { + "id": "4bc17613-0f77-42a4-8950-e104cf654ae1", + "source_id": "233293ef-ce18-45b0-ab68-37d17d8bc6f9", + "sink_id": "f8a8a5fa-fa17-43f3-ab83-f7234e4148fe", + "source_name": "output", + "sink_name": "AI Generated Text", + "is_static": false + }, + { + "id": "df7f655b-4daf-48bb-b5db-1268c27d7103", + "source_id": "e588a9ea-4148-45cd-a709-d57125a50c01", + "sink_id": "24690e20-a754-4cf7-89e1-a67d2c2f9549", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "ac131160-cec8-4919-b5ad-ae3ab1ebd461", + "source_id": "0939990c-e972-45b0-9a1c-03d61f65eee0", + "sink_id": "b7eccd17-1fb1-40da-b41a-30b888cc2fdf", + "source_name": "related_keywords", + "sink_name": "value", + "is_static": false + }, + { + "id": "b9d5fbed-bb4f-42ef-b415-61e1ea4f1971", + "source_id": "c701345d-37e8-40b7-82b7-72c7b10337ed", + "sink_id": "f5a81332-edeb-49c2-9460-cbfcc139d21d", + "source_name": "value", + "sink_name": "prompt_values_#_data", + "is_static": false + }, + { + "id": "b8c352f9-3d01-4e8d-a862-5a6334204e03", + "source_id": "9a4d8599-a360-424e-a084-f19f02312c52", + "sink_id": "65e15cdd-d855-432e-899a-74c9709b1790", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "1c9fe832-5877-4636-aa6d-5eaeacd8a065", + "source_id": "25085d63-cbca-4a2a-8c4e-2d22e12de0be", + "sink_id": "f5a81332-edeb-49c2-9460-cbfcc139d21d", + "source_name": "result", + "sink_name": "prompt_values_#_website_topic", + "is_static": true + }, + { + "id": "6af59cf6-3a24-4f4e-a940-7ae8b455c4a1", + "source_id": "bee9a01e-2ea8-4a0c-9a44-6150e7d6512b", + "sink_id": "6fde0a6a-ef7e-439a-ba65-9b78ca0fd5ae", + "source_name": "response", + 
"sink_name": "input", + "is_static": false + }, + { + "id": "519d0be6-338c-4b26-84d5-4e8fff1a1ea7", + "source_id": "e588a9ea-4148-45cd-a709-d57125a50c01", + "sink_id": "de461596-8a75-4b0e-ae2e-09206e48fdc3", + "source_name": "value", + "sink_name": "values_#_Keyword_Difficulty", + "is_static": false + }, + { + "id": "5723772a-f223-4726-9e17-189699534395", + "source_id": "0939990c-e972-45b0-9a1c-03d61f65eee0", + "sink_id": "9e4e8cc7-8f72-40c8-93a4-5e73504fbff1", + "source_name": "total_count", + "sink_name": "value1", + "is_static": false + }, + { + "id": "d7862045-b221-4b07-95a8-4a6a774201be", + "source_id": "f6fe5b4c-0003-4c25-861c-e61d6d139ad6", + "sink_id": "951f011a-0f21-43a2-91b4-3b8734a4fc53", + "source_name": "result", + "sink_name": "Domain", + "is_static": true + }, + { + "id": "772ef883-8c97-4637-81b9-028fda6d6a80", + "source_id": "9e4e8cc7-8f72-40c8-93a4-5e73504fbff1", + "sink_id": "8d0906fe-9c39-46f5-ad68-608a81adb41e", + "source_name": "yes_output", + "sink_name": "value", + "is_static": false + }, + { + "id": "d5bfc1fa-3a43-427f-8073-36f1083e836d", + "source_id": "30576680-b666-45e3-86b4-4ed4520c8ac0", + "sink_id": "de461596-8a75-4b0e-ae2e-09206e48fdc3", + "source_name": "value", + "sink_name": "values_#_Search_Volume", + "is_static": false + }, + { + "id": "d9eeca3d-f84b-4e43-85cc-0fd6fffd0a8b", + "source_id": "25085d63-cbca-4a2a-8c4e-2d22e12de0be", + "sink_id": "951f011a-0f21-43a2-91b4-3b8734a4fc53", + "source_name": "result", + "sink_name": "Website Primary Topic", + "is_static": true + }, + { + "id": "b2b664e7-cf87-4651-bb5c-1256be2db489", + "source_id": "951f011a-0f21-43a2-91b4-3b8734a4fc53", + "sink_id": "bee9a01e-2ea8-4a0c-9a44-6150e7d6512b", + "source_name": "Primary Keyword", + "sink_name": "prompt_values_#_keyword", + "is_static": false + }, + { + "id": "ac0da1b7-1680-4748-8334-f832d2063899", + "source_id": "6429fe92-aeff-4a14-be9f-9d2d5021bad4", + "sink_id": "bee9a01e-2ea8-4a0c-9a44-6150e7d6512b", + "source_name": "result", + "sink_name": "prompt_values_#_brand_tone", + "is_static": true + }, + { + "id": "5241ccc9-8df8-41ee-9305-0d345fbfbf3d", + "source_id": "d9f8fff7-930a-4e84-bf1a-104702527283", + "sink_id": "951f011a-0f21-43a2-91b4-3b8734a4fc53", + "source_name": "value", + "sink_name": "Previous Posts", + "is_static": false + }, + { + "id": "f2c12cd4-fca4-4012-921c-264c1b65d999", + "source_id": "951f011a-0f21-43a2-91b4-3b8734a4fc53", + "sink_id": "932e0dfe-aec4-4d1f-be74-432e6c5b77f7", + "source_name": "Error", + "sink_name": "value", + "is_static": false + }, + { + "id": "1b9ae93d-e8f3-46aa-b983-5e4bd95f2f56", + "source_id": "951f011a-0f21-43a2-91b4-3b8734a4fc53", + "sink_id": "0939990c-e972-45b0-9a1c-03d61f65eee0", + "source_name": "Primary Keyword", + "sink_name": "keyword", + "is_static": false + }, + { + "id": "dd201eb6-26bf-4133-acd0-cb8cc94ee739", + "source_id": "9e4e8cc7-8f72-40c8-93a4-5e73504fbff1", + "sink_id": "bee9a01e-2ea8-4a0c-9a44-6150e7d6512b", + "source_name": "no_output", + "sink_name": "prompt_values_#_keyword_data", + "is_static": false + }, + { + "id": "cb4eedea-e089-4356-b153-f94520957ab1", + "source_id": "79d2d459-08ff-4ad3-a084-174243f4fb5e", + "sink_id": "de461596-8a75-4b0e-ae2e-09206e48fdc3", + "source_name": "output", + "sink_name": "values_#_Article Title", + "is_static": false + }, + { + "id": "0d13989b-9ad2-4da1-9f50-75639ed21775", + "source_id": "f5a81332-edeb-49c2-9460-cbfcc139d21d", + "sink_id": "da04c7e3-0a1e-4360-8feb-3cd6a76eec55", + "source_name": "response", + "sink_name": "value", + "is_static": false + }, + { + "id": 
"1bd68af3-2ba6-46a4-b140-eb472f3bcfd1", + "source_id": "30576680-b666-45e3-86b4-4ed4520c8ac0", + "sink_id": "fcb3b8f6-f4ae-4868-a4d6-4ebca0d9c304", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "c00b16c3-421c-4078-a335-03d510bf7213", + "source_id": "951f011a-0f21-43a2-91b4-3b8734a4fc53", + "sink_id": "de461596-8a75-4b0e-ae2e-09206e48fdc3", + "source_name": "Justification", + "sink_name": "values_#_SEO_Experts_Reason_for_keyword_choice", + "is_static": false + }, + { + "id": "6a2d8156-ae27-463b-bce6-7e8f4211c8f0", + "source_id": "b7eccd17-1fb1-40da-b41a-30b888cc2fdf", + "sink_id": "9e4e8cc7-8f72-40c8-93a4-5e73504fbff1", + "source_name": "value", + "sink_name": "no_value", + "is_static": false + }, + { + "id": "959f0fb4-02bd-4e93-b0b4-d187b10773ba", + "source_id": "bee9a01e-2ea8-4a0c-9a44-6150e7d6512b", + "sink_id": "79d2d459-08ff-4ad3-a084-174243f4fb5e", + "source_name": "response", + "sink_name": "input", + "is_static": false + }, + { + "id": "325265ca-819c-4fdd-a69f-1b75d2da03f7", + "source_id": "3800e8a5-e648-4686-988d-61eae960f126", + "sink_id": "bee9a01e-2ea8-4a0c-9a44-6150e7d6512b", + "source_name": "response", + "sink_name": "prompt_values_#_research_notes", + "is_static": false + }, + { + "id": "74e11c3f-2d41-4049-993b-142b362f5cd4", + "source_id": "d9a537f7-930d-4526-8eca-8279d9ea747a", + "sink_id": "de461596-8a75-4b0e-ae2e-09206e48fdc3", + "source_name": "date", + "sink_name": "values_#_Date", + "is_static": false + }, + { + "id": "2ed564ec-112c-424f-bc28-f9d328888b03", + "source_id": "f8a8a5fa-fa17-43f3-ab83-f7234e4148fe", + "sink_id": "a4ac5b96-607c-4924-a4fe-8ab8a59ecf2d", + "source_name": "Error", + "sink_name": "value", + "is_static": false + }, + { + "id": "6afb10cc-5dc0-4ecb-a228-31b88640a21f", + "source_id": "9a4d8599-a360-424e-a084-f19f02312c52", + "sink_id": "43ecf959-efbb-48be-bd72-17f9fc580b88", + "source_name": "image_url", + "sink_name": "featured_image", + "is_static": false + }, + { + "id": "83f01bb1-581d-4a5e-a7cd-29f6a6686585", + "source_id": "de461596-8a75-4b0e-ae2e-09206e48fdc3", + "sink_id": "1d0294f2-456a-493e-91fe-317ec29de58b", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "59a3eebb-7b26-4dfe-900b-35e3a06866bd", + "source_id": "d9f8fff7-930a-4e84-bf1a-104702527283", + "sink_id": "932e0dfe-aec4-4d1f-be74-432e6c5b77f7", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "9c5af979-7ca0-44a0-8350-e447a9e01765", + "source_id": "9aac9ea1-3413-4ce0-a9fc-26d5aec9ef93", + "sink_id": "f5a81332-edeb-49c2-9460-cbfcc139d21d", + "source_name": "result", + "sink_name": "prompt_values_#_website_description", + "is_static": true + }, + { + "id": "d006c5ea-ad1b-4572-a451-8d60a87b65dc", + "source_id": "79d2d459-08ff-4ad3-a084-174243f4fb5e", + "sink_id": "43ecf959-efbb-48be-bd72-17f9fc580b88", + "source_name": "output", + "sink_name": "title", + "is_static": false + }, + { + "id": "16445831-8df5-437b-9bb9-66c961ca6bc4", + "source_id": "f6fe5b4c-0003-4c25-861c-e61d6d139ad6", + "sink_id": "26446b03-3117-4f7e-af10-855fdd1089c3", + "source_name": "result", + "sink_name": "text", + "is_static": true + }, + { + "id": "d240ea3d-3104-44a5-9714-c255ab5dc99d", + "source_id": "7cbb535f-bb8c-49d3-9392-883c4eaa9372", + "sink_id": "bee9a01e-2ea8-4a0c-9a44-6150e7d6512b", + "source_name": "result", + "sink_name": "prompt_values_#_target_word_count", + "is_static": true + }, + { + "id": "634a7d56-d5f4-4133-a475-0be6e49c381a", + "source_id": 
"9aac9ea1-3413-4ce0-a9fc-26d5aec9ef93", + "sink_id": "951f011a-0f21-43a2-91b4-3b8734a4fc53", + "source_name": "result", + "sink_name": "Website Description", + "is_static": true + }, + { + "id": "b3cba97a-0157-4a76-bc83-b84e89a1bf20", + "source_id": "3800e8a5-e648-4686-988d-61eae960f126", + "sink_id": "aa29e890-8db6-4ec9-acf5-b7c798d4e604", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "1680109f-50c1-4783-a39d-db8dc6ea0609", + "source_id": "6fde0a6a-ef7e-439a-ba65-9b78ca0fd5ae", + "sink_id": "9a4d8599-a360-424e-a084-f19f02312c52", + "source_name": "output", + "sink_name": "prompt", + "is_static": false + }, + { + "id": "b140f563-da18-4942-8f03-f4e7bc650a24", + "source_id": "26446b03-3117-4f7e-af10-855fdd1089c3", + "sink_id": "502780bc-4065-4c4b-b066-ee1d0ef0c543", + "source_name": "positive", + "sink_name": "Wordpress Blog URL", + "is_static": false + }, + { + "id": "b42dc4c7-e7ba-4c20-9c10-3275867cc22d", + "source_id": "951f011a-0f21-43a2-91b4-3b8734a4fc53", + "sink_id": "bee9a01e-2ea8-4a0c-9a44-6150e7d6512b", + "source_name": "Justification", + "sink_name": "prompt_values_#_seo_expert_reason", + "is_static": false + }, + { + "id": "3f10a5ab-d964-4a24-8dcd-14d4f736a862", + "source_id": "951f011a-0f21-43a2-91b4-3b8734a4fc53", + "sink_id": "30576680-b666-45e3-86b4-4ed4520c8ac0", + "source_name": "Search Volume", + "sink_name": "value", + "is_static": false + }, + { + "id": "d76f1e26-3c01-4f62-a70f-a49cbf4b135a", + "source_id": "951f011a-0f21-43a2-91b4-3b8734a4fc53", + "sink_id": "de461596-8a75-4b0e-ae2e-09206e48fdc3", + "source_name": "Primary Keyword", + "sink_name": "values_#_Primary_Target_Keyword", + "is_static": false + }, + { + "id": "d1ba66d5-158f-44ec-adb3-158775365f3e", + "source_id": "951f011a-0f21-43a2-91b4-3b8734a4fc53", + "sink_id": "bee9a01e-2ea8-4a0c-9a44-6150e7d6512b", + "source_name": "Proposed Title", + "sink_name": "prompt_values_#_seo_expert_proposed_title", + "is_static": false + }, + { + "id": "2149a96a-c3ac-4d45-98f4-1c926ceae5b2", + "source_id": "bee9a01e-2ea8-4a0c-9a44-6150e7d6512b", + "sink_id": "ddb9e490-56a6-4203-8f1b-d944e30c253b", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "cc1b7d8b-391a-4906-a022-9fe2d73cffc9", + "source_id": "26446b03-3117-4f7e-af10-855fdd1089c3", + "sink_id": "43ecf959-efbb-48be-bd72-17f9fc580b88", + "source_name": "positive", + "sink_name": "site", + "is_static": false + }, + { + "id": "c6280938-afc1-40a6-8493-80c234ccb6a0", + "source_id": "bee9a01e-2ea8-4a0c-9a44-6150e7d6512b", + "sink_id": "233293ef-ce18-45b0-ab68-37d17d8bc6f9", + "source_name": "response", + "sink_name": "input", + "is_static": false + }, + { + "id": "77e0d0e4-da9e-47b3-812c-72a52c12513e", + "source_id": "43ecf959-efbb-48be-bd72-17f9fc580b88", + "sink_id": "de461596-8a75-4b0e-ae2e-09206e48fdc3", + "source_name": "post_url", + "sink_name": "values_#_Published URL", + "is_static": false + }, + { + "id": "5e2dc3d9-6e0b-432b-bd98-f8b072706f76", + "source_id": "502780bc-4065-4c4b-b066-ee1d0ef0c543", + "sink_id": "d9f8fff7-930a-4e84-bf1a-104702527283", + "source_name": "Blog Post Titles", + "sink_name": "value", + "is_static": false + } + ], + "forked_from_id": null, + "forked_from_version": null, + "sub_graphs": [ + { + "id": "e26a6fd5-654a-4cfd-afb9-9d71db746802", + "version": 11, + "is_active": false, + "name": "Wordpress Get Post Titles", + "description": "", + "instructions": null, + "recommended_schedule_cron": null, + "nodes": [ + { + "id": "7b6f2677-be00-49b6-a50b-e4e551b231f6", + 
"block_id": "6595ae1f-b924-42cb-9a41-551a0611c4b4", + "input_default": { + "files": [], + "method": "GET", + "headers": {}, + "files_name": "file", + "json_format": true + }, + "metadata": { + "position": { + "x": 848.5, + "y": 465.5 + } + }, + "input_links": [ + { + "id": "a606ea7d-f3c7-4a3b-9a0b-4bb2388a046a", + "source_id": "19591287-3f37-42c4-9f82-59f4f9e4facf", + "sink_id": "7b6f2677-be00-49b6-a50b-e4e551b231f6", + "source_name": "output", + "sink_name": "url", + "is_static": false + } + ], + "output_links": [ + { + "id": "c69e7a20-81c4-44ff-bb7a-5d389d68f649", + "source_id": "7b6f2677-be00-49b6-a50b-e4e551b231f6", + "sink_id": "c80e0834-fc0e-4841-9d9d-8c0c0ccd06c5", + "source_name": "response", + "sink_name": "input", + "is_static": false + }, + { + "id": "2f41c444-0645-4e64-9e07-d7069305f2bc", + "source_id": "7b6f2677-be00-49b6-a50b-e4e551b231f6", + "sink_id": "0bc3c929-2493-4d0c-9a77-3d831e193385", + "source_name": "client_error", + "sink_name": "value", + "is_static": false + }, + { + "id": "5060ea20-5ef9-4b95-976d-491767f55576", + "source_id": "7b6f2677-be00-49b6-a50b-e4e551b231f6", + "sink_id": "0bc3c929-2493-4d0c-9a77-3d831e193385", + "source_name": "server_error", + "sink_name": "value", + "is_static": false + }, + { + "id": "0d945642-b8ca-4f82-8c52-8f72346846db", + "source_id": "7b6f2677-be00-49b6-a50b-e4e551b231f6", + "sink_id": "0bc3c929-2493-4d0c-9a77-3d831e193385", + "source_name": "error", + "sink_name": "value", + "is_static": false + } + ] + }, + { + "id": "c80e0834-fc0e-4841-9d9d-8c0c0ccd06c5", + "block_id": "0e50422c-6dee-4145-83d6-3a5a392f65de", + "input_default": { + "key": "posts" + }, + "metadata": { + "position": { + "x": 1408.500114440918, + "y": 465.5000381469726 + } + }, + "input_links": [ + { + "id": "c69e7a20-81c4-44ff-bb7a-5d389d68f649", + "source_id": "7b6f2677-be00-49b6-a50b-e4e551b231f6", + "sink_id": "c80e0834-fc0e-4841-9d9d-8c0c0ccd06c5", + "source_name": "response", + "sink_name": "input", + "is_static": false + } + ], + "output_links": [ + { + "id": "cb1aa523-b93b-49d1-8d64-1da3782c5a18", + "source_id": "c80e0834-fc0e-4841-9d9d-8c0c0ccd06c5", + "sink_id": "df9b2f6b-8e92-44b1-b2b7-24fbcb3bc636", + "source_name": "output", + "sink_name": "items", + "is_static": false + }, + { + "id": "8fc356aa-c3e8-4b67-af05-f05a018c8f8c", + "source_id": "c80e0834-fc0e-4841-9d9d-8c0c0ccd06c5", + "sink_id": "9e938bd6-d757-44b8-9a27-4d237cebd370", + "source_name": "output", + "sink_name": "collection", + "is_static": false + }, + { + "id": "02de7dc8-6cd0-4880-8e3d-b8723b26ce47", + "source_id": "c80e0834-fc0e-4841-9d9d-8c0c0ccd06c5", + "sink_id": "0bc3c929-2493-4d0c-9a77-3d831e193385", + "source_name": "missing", + "sink_name": "value", + "is_static": false + } + ] + }, + { + "id": "df9b2f6b-8e92-44b1-b2b7-24fbcb3bc636", + "block_id": "f66a3543-28d3-4ab5-8945-9b336371e2ce", + "input_default": { + "items": [], + "items_object": {} + }, + "metadata": { + "position": { + "x": 1977.263221947672, + "y": 447.97382313346327 + } + }, + "input_links": [ + { + "id": "cb1aa523-b93b-49d1-8d64-1da3782c5a18", + "source_id": "c80e0834-fc0e-4841-9d9d-8c0c0ccd06c5", + "sink_id": "df9b2f6b-8e92-44b1-b2b7-24fbcb3bc636", + "source_name": "output", + "sink_name": "items", + "is_static": false + } + ], + "output_links": [ + { + "id": "640f7f20-704b-4f3a-9c86-c0bf3677996c", + "source_id": "df9b2f6b-8e92-44b1-b2b7-24fbcb3bc636", + "sink_id": "083a9d96-7683-4963-b18e-26bc68841f7c", + "source_name": "item", + "sink_name": "input", + "is_static": false + } + ] + }, + { + "id": 
"083a9d96-7683-4963-b18e-26bc68841f7c", + "block_id": "0e50422c-6dee-4145-83d6-3a5a392f65de", + "input_default": { + "key": "title" + }, + "metadata": { + "position": { + "x": 2526.320477819138, + "y": 456.78149165985286 + } + }, + "input_links": [ + { + "id": "640f7f20-704b-4f3a-9c86-c0bf3677996c", + "source_id": "df9b2f6b-8e92-44b1-b2b7-24fbcb3bc636", + "sink_id": "083a9d96-7683-4963-b18e-26bc68841f7c", + "source_name": "item", + "sink_name": "input", + "is_static": false + } + ], + "output_links": [ + { + "id": "6d3dddc3-c9b1-4c48-9cb6-c78deb322498", + "source_id": "083a9d96-7683-4963-b18e-26bc68841f7c", + "sink_id": "3b540af2-ab3c-4a70-84a9-0226862d6740", + "source_name": "output", + "sink_name": "entry", + "is_static": false + }, + { + "id": "29ae2bf1-4801-4a64-a4c0-190d42eb60ac", + "source_id": "083a9d96-7683-4963-b18e-26bc68841f7c", + "sink_id": "0bc3c929-2493-4d0c-9a77-3d831e193385", + "source_name": "missing", + "sink_name": "value", + "is_static": false + } + ] + }, + { + "id": "9e938bd6-d757-44b8-9a27-4d237cebd370", + "block_id": "3c9c2f42-b0c3-435f-ba35-05f7a25c772a", + "input_default": {}, + "metadata": { + "position": { + "x": 1997.1362790954677, + "y": 2760.2039857865184 + } + }, + "input_links": [ + { + "id": "8fc356aa-c3e8-4b67-af05-f05a018c8f8c", + "source_id": "c80e0834-fc0e-4841-9d9d-8c0c0ccd06c5", + "sink_id": "9e938bd6-d757-44b8-9a27-4d237cebd370", + "source_name": "output", + "sink_name": "collection", + "is_static": false + } + ], + "output_links": [ + { + "id": "1c8e8264-e788-47fb-96a3-e4ae5116fab8", + "source_id": "9e938bd6-d757-44b8-9a27-4d237cebd370", + "sink_id": "c7fe8eca-b2af-4874-8544-55490f4998a6", + "source_name": "count", + "sink_name": "input", + "is_static": false + } + ] + }, + { + "id": "3b540af2-ab3c-4a70-84a9-0226862d6740", + "block_id": "aeb08fc1-2fc1-4141-bc8e-f758f183a822", + "input_default": { + "list": [ + "TEMP" + ], + "entries": [] + }, + "metadata": { + "position": { + "x": 3113.406873655237, + "y": 457.33750112260316 + } + }, + "input_links": [ + { + "id": "6d3dddc3-c9b1-4c48-9cb6-c78deb322498", + "source_id": "083a9d96-7683-4963-b18e-26bc68841f7c", + "sink_id": "3b540af2-ab3c-4a70-84a9-0226862d6740", + "source_name": "output", + "sink_name": "entry", + "is_static": false + }, + { + "id": "0c1e4c82-8aab-4bf4-bb1d-f3f63a25dc16", + "source_id": "3b540af2-ab3c-4a70-84a9-0226862d6740", + "sink_id": "3b540af2-ab3c-4a70-84a9-0226862d6740", + "source_name": "updated_list", + "sink_name": "list", + "is_static": false + }, + { + "id": "5eb8ec7c-bc6e-41e2-be64-6d91a83abf3a", + "source_id": "c9222e64-2ea8-442e-85eb-b51f9a43c127", + "sink_id": "3b540af2-ab3c-4a70-84a9-0226862d6740", + "source_name": "list", + "sink_name": "list", + "is_static": false + } + ], + "output_links": [ + { + "id": "40840419-a0de-4b9e-bc8d-2f5215a4f7df", + "source_id": "3b540af2-ab3c-4a70-84a9-0226862d6740", + "sink_id": "b74ae8b7-0a96-45a4-ae39-d697c2bd9938", + "source_name": "updated_list", + "sink_name": "collection", + "is_static": false + }, + { + "id": "a64966eb-dac6-4ca9-b7d3-5736138b7ad8", + "source_id": "3b540af2-ab3c-4a70-84a9-0226862d6740", + "sink_id": "2e06f234-58a5-40d7-b27f-bf61690e5d1b", + "source_name": "updated_list", + "sink_name": "yes_value", + "is_static": false + }, + { + "id": "20e067d6-8242-40e1-a064-e3dc823ca041", + "source_id": "3b540af2-ab3c-4a70-84a9-0226862d6740", + "sink_id": "0bc3c929-2493-4d0c-9a77-3d831e193385", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "0c1e4c82-8aab-4bf4-bb1d-f3f63a25dc16", + 
"source_id": "3b540af2-ab3c-4a70-84a9-0226862d6740", + "sink_id": "3b540af2-ab3c-4a70-84a9-0226862d6740", + "source_name": "updated_list", + "sink_name": "list", + "is_static": false + } + ] + }, + { + "id": "b74ae8b7-0a96-45a4-ae39-d697c2bd9938", + "block_id": "3c9c2f42-b0c3-435f-ba35-05f7a25c772a", + "input_default": {}, + "metadata": { + "position": { + "x": 3673.4068005348, + "y": 457.3374812766046 + } + }, + "input_links": [ + { + "id": "40840419-a0de-4b9e-bc8d-2f5215a4f7df", + "source_id": "3b540af2-ab3c-4a70-84a9-0226862d6740", + "sink_id": "b74ae8b7-0a96-45a4-ae39-d697c2bd9938", + "source_name": "updated_list", + "sink_name": "collection", + "is_static": false + } + ], + "output_links": [ + { + "id": "d3ce3958-893f-43d8-83ee-2c5aa86dbce9", + "source_id": "b74ae8b7-0a96-45a4-ae39-d697c2bd9938", + "sink_id": "2e06f234-58a5-40d7-b27f-bf61690e5d1b", + "source_name": "count", + "sink_name": "value1", + "is_static": false + } + ] + }, + { + "id": "2e06f234-58a5-40d7-b27f-bf61690e5d1b", + "block_id": "715696a0-e1da-45c8-b209-c2fa9c3b0be6", + "input_default": { + "operator": ">" + }, + "metadata": { + "position": { + "x": 4233.407029416636, + "y": 457.3374812766046 + } + }, + "input_links": [ + { + "id": "d3ce3958-893f-43d8-83ee-2c5aa86dbce9", + "source_id": "b74ae8b7-0a96-45a4-ae39-d697c2bd9938", + "sink_id": "2e06f234-58a5-40d7-b27f-bf61690e5d1b", + "source_name": "count", + "sink_name": "value1", + "is_static": false + }, + { + "id": "a64966eb-dac6-4ca9-b7d3-5736138b7ad8", + "source_id": "3b540af2-ab3c-4a70-84a9-0226862d6740", + "sink_id": "2e06f234-58a5-40d7-b27f-bf61690e5d1b", + "source_name": "updated_list", + "sink_name": "yes_value", + "is_static": false + }, + { + "id": "9e2e280e-ea15-4f8b-8d63-f6030fec44db", + "source_id": "c7fe8eca-b2af-4874-8544-55490f4998a6", + "sink_id": "2e06f234-58a5-40d7-b27f-bf61690e5d1b", + "source_name": "output", + "sink_name": "value2", + "is_static": true + } + ], + "output_links": [ + { + "id": "ce55ba06-8e69-4d7f-b5a5-ec353db4ef0e", + "source_id": "2e06f234-58a5-40d7-b27f-bf61690e5d1b", + "sink_id": "9e7c1bd0-e8a2-42de-9c59-2c44c62020d3", + "source_name": "yes_output", + "sink_name": "list", + "is_static": false + } + ] + }, + { + "id": "9e7c1bd0-e8a2-42de-9c59-2c44c62020d3", + "block_id": "d93c5a93-ac7e-41c1-ae5c-ef67e6e9b826", + "input_default": { + "list": [], + "value": "TEMP", + "return_item": false + }, + "metadata": { + "position": { + "x": 4793.407090062505, + "y": 457.3375189606344 + } + }, + "input_links": [ + { + "id": "ce55ba06-8e69-4d7f-b5a5-ec353db4ef0e", + "source_id": "2e06f234-58a5-40d7-b27f-bf61690e5d1b", + "sink_id": "9e7c1bd0-e8a2-42de-9c59-2c44c62020d3", + "source_name": "yes_output", + "sink_name": "list", + "is_static": false + } + ], + "output_links": [ + { + "id": "4e03765c-0ab3-431f-84b2-adeaac4a85ff", + "source_id": "9e7c1bd0-e8a2-42de-9c59-2c44c62020d3", + "sink_id": "40cf195d-ab45-462d-a7b2-ad9456a117c6", + "source_name": "updated_list", + "sink_name": "value", + "is_static": false + }, + { + "id": "e5a1dafd-6426-487d-a744-c429604a1cce", + "source_id": "9e7c1bd0-e8a2-42de-9c59-2c44c62020d3", + "sink_id": "0bc3c929-2493-4d0c-9a77-3d831e193385", + "source_name": "error", + "sink_name": "value", + "is_static": false + } + ] + }, + { + "id": "40cf195d-ab45-462d-a7b2-ad9456a117c6", + "block_id": "363ae599-353e-4804-937e-b2ee3cef3da4", + "input_default": { + "name": "Blog Post Titles", + "secret": false, + "advanced": false + }, + "metadata": { + "position": { + "x": 5353.407547826177, + "y": 457.33755710760704 + } + }, + 
"input_links": [ + { + "id": "4e03765c-0ab3-431f-84b2-adeaac4a85ff", + "source_id": "9e7c1bd0-e8a2-42de-9c59-2c44c62020d3", + "sink_id": "40cf195d-ab45-462d-a7b2-ad9456a117c6", + "source_name": "updated_list", + "sink_name": "value", + "is_static": false + } + ], + "output_links": [] + }, + { + "id": "db429ad3-c802-48de-a090-80b028ffbc40", + "block_id": "7fcd3bcb-8e1b-4e69-903d-32d3d4a92158", + "input_default": { + "name": "Wordpress Blog URL", + "secret": false, + "advanced": false, + "description": "e.g aiespresso.wordpress.com - no \"http://\" or \"/\"s", + "placeholder_values": [] + }, + "metadata": { + "position": { + "x": -293.0142729917493, + "y": 460.6433177043922 + } + }, + "input_links": [], + "output_links": [ + { + "id": "6de5d480-3005-4c75-aac4-675185415362", + "source_id": "db429ad3-c802-48de-a090-80b028ffbc40", + "sink_id": "19591287-3f37-42c4-9f82-59f4f9e4facf", + "source_name": "result", + "sink_name": "values_#_url", + "is_static": true + } + ] + }, + { + "id": "19591287-3f37-42c4-9f82-59f4f9e4facf", + "block_id": "db7d8f02-2f44-4c55-ab7a-eae0941f0c30", + "input_default": { + "format": "https://public-api.wordpress.com/rest/v1.1/sites/{{url}}/posts/", + "values": {} + }, + "metadata": { + "position": { + "x": 266.9854934848886, + "y": 460.6433460811645 + } + }, + "input_links": [ + { + "id": "6de5d480-3005-4c75-aac4-675185415362", + "source_id": "db429ad3-c802-48de-a090-80b028ffbc40", + "sink_id": "19591287-3f37-42c4-9f82-59f4f9e4facf", + "source_name": "result", + "sink_name": "values_#_url", + "is_static": true + } + ], + "output_links": [ + { + "id": "a606ea7d-f3c7-4a3b-9a0b-4bb2388a046a", + "source_id": "19591287-3f37-42c4-9f82-59f4f9e4facf", + "sink_id": "7b6f2677-be00-49b6-a50b-e4e551b231f6", + "source_name": "output", + "sink_name": "url", + "is_static": false + } + ] + }, + { + "id": "0bc3c929-2493-4d0c-9a77-3d831e193385", + "block_id": "363ae599-353e-4804-937e-b2ee3cef3da4", + "input_default": { + "name": "Error", + "secret": false, + "advanced": false + }, + "metadata": { + "position": { + "x": 2179.568267604507, + "y": 4624.3507624591275 + } + }, + "input_links": [ + { + "id": "2f41c444-0645-4e64-9e07-d7069305f2bc", + "source_id": "7b6f2677-be00-49b6-a50b-e4e551b231f6", + "sink_id": "0bc3c929-2493-4d0c-9a77-3d831e193385", + "source_name": "client_error", + "sink_name": "value", + "is_static": false + }, + { + "id": "5060ea20-5ef9-4b95-976d-491767f55576", + "source_id": "7b6f2677-be00-49b6-a50b-e4e551b231f6", + "sink_id": "0bc3c929-2493-4d0c-9a77-3d831e193385", + "source_name": "server_error", + "sink_name": "value", + "is_static": false + }, + { + "id": "0d945642-b8ca-4f82-8c52-8f72346846db", + "source_id": "7b6f2677-be00-49b6-a50b-e4e551b231f6", + "sink_id": "0bc3c929-2493-4d0c-9a77-3d831e193385", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "02de7dc8-6cd0-4880-8e3d-b8723b26ce47", + "source_id": "c80e0834-fc0e-4841-9d9d-8c0c0ccd06c5", + "sink_id": "0bc3c929-2493-4d0c-9a77-3d831e193385", + "source_name": "missing", + "sink_name": "value", + "is_static": false + }, + { + "id": "29ae2bf1-4801-4a64-a4c0-190d42eb60ac", + "source_id": "083a9d96-7683-4963-b18e-26bc68841f7c", + "sink_id": "0bc3c929-2493-4d0c-9a77-3d831e193385", + "source_name": "missing", + "sink_name": "value", + "is_static": false + }, + { + "id": "20e067d6-8242-40e1-a064-e3dc823ca041", + "source_id": "3b540af2-ab3c-4a70-84a9-0226862d6740", + "sink_id": "0bc3c929-2493-4d0c-9a77-3d831e193385", + "source_name": "error", + "sink_name": "value", + "is_static": 
false + }, + { + "id": "e5a1dafd-6426-487d-a744-c429604a1cce", + "source_id": "9e7c1bd0-e8a2-42de-9c59-2c44c62020d3", + "sink_id": "0bc3c929-2493-4d0c-9a77-3d831e193385", + "source_name": "error", + "sink_name": "value", + "is_static": false + } + ], + "output_links": [] + }, + { + "id": "c7fe8eca-b2af-4874-8544-55490f4998a6", + "block_id": "1ff065e9-88e8-4358-9d82-8dc91f622ba9", + "input_default": {}, + "metadata": { + "position": { + "x": 2855.062585417521, + "y": 2732.0502477648593 + } + }, + "input_links": [ + { + "id": "1c8e8264-e788-47fb-96a3-e4ae5116fab8", + "source_id": "9e938bd6-d757-44b8-9a27-4d237cebd370", + "sink_id": "c7fe8eca-b2af-4874-8544-55490f4998a6", + "source_name": "count", + "sink_name": "input", + "is_static": false + } + ], + "output_links": [ + { + "id": "9e2e280e-ea15-4f8b-8d63-f6030fec44db", + "source_id": "c7fe8eca-b2af-4874-8544-55490f4998a6", + "sink_id": "2e06f234-58a5-40d7-b27f-bf61690e5d1b", + "source_name": "output", + "sink_name": "value2", + "is_static": true + } + ] + }, + { + "id": "c9222e64-2ea8-442e-85eb-b51f9a43c127", + "block_id": "a912d5c7-6e00-4542-b2a9-8034136930e4", + "input_default": { + "values": [ + "TEMP" + ] + }, + "metadata": { + "position": { + "x": 2406.044362323408, + "y": -2252.8326294404233 + } + }, + "input_links": [], + "output_links": [ + { + "id": "5eb8ec7c-bc6e-41e2-be64-6d91a83abf3a", + "source_id": "c9222e64-2ea8-442e-85eb-b51f9a43c127", + "sink_id": "3b540af2-ab3c-4a70-84a9-0226862d6740", + "source_name": "list", + "sink_name": "list", + "is_static": false + } + ] + } + ], + "links": [ + { + "id": "0d945642-b8ca-4f82-8c52-8f72346846db", + "source_id": "7b6f2677-be00-49b6-a50b-e4e551b231f6", + "sink_id": "0bc3c929-2493-4d0c-9a77-3d831e193385", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "ce55ba06-8e69-4d7f-b5a5-ec353db4ef0e", + "source_id": "2e06f234-58a5-40d7-b27f-bf61690e5d1b", + "sink_id": "9e7c1bd0-e8a2-42de-9c59-2c44c62020d3", + "source_name": "yes_output", + "sink_name": "list", + "is_static": false + }, + { + "id": "20e067d6-8242-40e1-a064-e3dc823ca041", + "source_id": "3b540af2-ab3c-4a70-84a9-0226862d6740", + "sink_id": "0bc3c929-2493-4d0c-9a77-3d831e193385", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "e5a1dafd-6426-487d-a744-c429604a1cce", + "source_id": "9e7c1bd0-e8a2-42de-9c59-2c44c62020d3", + "sink_id": "0bc3c929-2493-4d0c-9a77-3d831e193385", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "5060ea20-5ef9-4b95-976d-491767f55576", + "source_id": "7b6f2677-be00-49b6-a50b-e4e551b231f6", + "sink_id": "0bc3c929-2493-4d0c-9a77-3d831e193385", + "source_name": "server_error", + "sink_name": "value", + "is_static": false + }, + { + "id": "2f41c444-0645-4e64-9e07-d7069305f2bc", + "source_id": "7b6f2677-be00-49b6-a50b-e4e551b231f6", + "sink_id": "0bc3c929-2493-4d0c-9a77-3d831e193385", + "source_name": "client_error", + "sink_name": "value", + "is_static": false + }, + { + "id": "40840419-a0de-4b9e-bc8d-2f5215a4f7df", + "source_id": "3b540af2-ab3c-4a70-84a9-0226862d6740", + "sink_id": "b74ae8b7-0a96-45a4-ae39-d697c2bd9938", + "source_name": "updated_list", + "sink_name": "collection", + "is_static": false + }, + { + "id": "c69e7a20-81c4-44ff-bb7a-5d389d68f649", + "source_id": "7b6f2677-be00-49b6-a50b-e4e551b231f6", + "sink_id": "c80e0834-fc0e-4841-9d9d-8c0c0ccd06c5", + "source_name": "response", + "sink_name": "input", + "is_static": false + }, + { + "id": "6d3dddc3-c9b1-4c48-9cb6-c78deb322498", + 
"source_id": "083a9d96-7683-4963-b18e-26bc68841f7c", + "sink_id": "3b540af2-ab3c-4a70-84a9-0226862d6740", + "source_name": "output", + "sink_name": "entry", + "is_static": false + }, + { + "id": "640f7f20-704b-4f3a-9c86-c0bf3677996c", + "source_id": "df9b2f6b-8e92-44b1-b2b7-24fbcb3bc636", + "sink_id": "083a9d96-7683-4963-b18e-26bc68841f7c", + "source_name": "item", + "sink_name": "input", + "is_static": false + }, + { + "id": "9e2e280e-ea15-4f8b-8d63-f6030fec44db", + "source_id": "c7fe8eca-b2af-4874-8544-55490f4998a6", + "sink_id": "2e06f234-58a5-40d7-b27f-bf61690e5d1b", + "source_name": "output", + "sink_name": "value2", + "is_static": true + }, + { + "id": "d3ce3958-893f-43d8-83ee-2c5aa86dbce9", + "source_id": "b74ae8b7-0a96-45a4-ae39-d697c2bd9938", + "sink_id": "2e06f234-58a5-40d7-b27f-bf61690e5d1b", + "source_name": "count", + "sink_name": "value1", + "is_static": false + }, + { + "id": "a64966eb-dac6-4ca9-b7d3-5736138b7ad8", + "source_id": "3b540af2-ab3c-4a70-84a9-0226862d6740", + "sink_id": "2e06f234-58a5-40d7-b27f-bf61690e5d1b", + "source_name": "updated_list", + "sink_name": "yes_value", + "is_static": false + }, + { + "id": "6de5d480-3005-4c75-aac4-675185415362", + "source_id": "db429ad3-c802-48de-a090-80b028ffbc40", + "sink_id": "19591287-3f37-42c4-9f82-59f4f9e4facf", + "source_name": "result", + "sink_name": "values_#_url", + "is_static": true + }, + { + "id": "cb1aa523-b93b-49d1-8d64-1da3782c5a18", + "source_id": "c80e0834-fc0e-4841-9d9d-8c0c0ccd06c5", + "sink_id": "df9b2f6b-8e92-44b1-b2b7-24fbcb3bc636", + "source_name": "output", + "sink_name": "items", + "is_static": false + }, + { + "id": "0c1e4c82-8aab-4bf4-bb1d-f3f63a25dc16", + "source_id": "3b540af2-ab3c-4a70-84a9-0226862d6740", + "sink_id": "3b540af2-ab3c-4a70-84a9-0226862d6740", + "source_name": "updated_list", + "sink_name": "list", + "is_static": false + }, + { + "id": "4e03765c-0ab3-431f-84b2-adeaac4a85ff", + "source_id": "9e7c1bd0-e8a2-42de-9c59-2c44c62020d3", + "sink_id": "40cf195d-ab45-462d-a7b2-ad9456a117c6", + "source_name": "updated_list", + "sink_name": "value", + "is_static": false + }, + { + "id": "a606ea7d-f3c7-4a3b-9a0b-4bb2388a046a", + "source_id": "19591287-3f37-42c4-9f82-59f4f9e4facf", + "sink_id": "7b6f2677-be00-49b6-a50b-e4e551b231f6", + "source_name": "output", + "sink_name": "url", + "is_static": false + }, + { + "id": "8fc356aa-c3e8-4b67-af05-f05a018c8f8c", + "source_id": "c80e0834-fc0e-4841-9d9d-8c0c0ccd06c5", + "sink_id": "9e938bd6-d757-44b8-9a27-4d237cebd370", + "source_name": "output", + "sink_name": "collection", + "is_static": false + }, + { + "id": "1c8e8264-e788-47fb-96a3-e4ae5116fab8", + "source_id": "9e938bd6-d757-44b8-9a27-4d237cebd370", + "sink_id": "c7fe8eca-b2af-4874-8544-55490f4998a6", + "source_name": "count", + "sink_name": "input", + "is_static": false + }, + { + "id": "5eb8ec7c-bc6e-41e2-be64-6d91a83abf3a", + "source_id": "c9222e64-2ea8-442e-85eb-b51f9a43c127", + "sink_id": "3b540af2-ab3c-4a70-84a9-0226862d6740", + "source_name": "list", + "sink_name": "list", + "is_static": false + }, + { + "id": "29ae2bf1-4801-4a64-a4c0-190d42eb60ac", + "source_id": "083a9d96-7683-4963-b18e-26bc68841f7c", + "sink_id": "0bc3c929-2493-4d0c-9a77-3d831e193385", + "source_name": "missing", + "sink_name": "value", + "is_static": false + }, + { + "id": "02de7dc8-6cd0-4880-8e3d-b8723b26ce47", + "source_id": "c80e0834-fc0e-4841-9d9d-8c0c0ccd06c5", + "sink_id": "0bc3c929-2493-4d0c-9a77-3d831e193385", + "source_name": "missing", + "sink_name": "value", + "is_static": false + } + ], + "forked_from_id": 
null, + "forked_from_version": null, + "input_schema": { + "type": "object", + "properties": { + "Wordpress Blog URL": { + "advanced": false, + "anyOf": [ + { + "format": "short-text", + "type": "string" + }, + { + "type": "null" + } + ], + "secret": false, + "title": "Wordpress Blog URL", + "description": "e.g aiespresso.wordpress.com - no \"http://\" or \"/\"s" + } + }, + "required": [ + "Wordpress Blog URL" + ] + }, + "output_schema": { + "type": "object", + "properties": { + "Blog Post Titles": { + "advanced": false, + "secret": false, + "title": "Blog Post Titles" + }, + "Error": { + "advanced": false, + "secret": false, + "title": "Error" + } + }, + "required": [ + "Blog Post Titles", + "Error" + ] + }, + "has_external_trigger": false, + "has_human_in_the_loop": false, + "trigger_setup_info": null + }, + { + "id": "d43270ae-9538-4f4d-bf40-0e842ea42bbc", + "version": 15, + "is_active": true, + "name": "Text Humanizer", + "description": "Put in AI Generated text, this tool will attempt to make it appear less obviously AI generated.", + "instructions": null, + "recommended_schedule_cron": null, + "nodes": [ + { + "id": "15886b0c-1525-44b3-8b52-4b1be4fc4c18", + "block_id": "0b02b072-abe7-11ef-8372-fb5d162dd712", + "input_default": { + "timeout": 300, + "language": "python", + "setup_commands": [] + }, + "metadata": { + "position": { + "x": 1396.3000357443748, + "y": 610.461992453033 + } + }, + "input_links": [ + { + "id": "5eeb3e7d-130a-46a9-8939-af9fbb1db3d4", + "source_id": "8a844481-e687-469c-a4c6-6135a4fa7fd9", + "sink_id": "15886b0c-1525-44b3-8b52-4b1be4fc4c18", + "source_name": "output", + "sink_name": "code", + "is_static": false + } + ], + "output_links": [ + { + "id": "341de54d-aba8-4f6f-b0f3-225e12f54dee", + "source_id": "15886b0c-1525-44b3-8b52-4b1be4fc4c18", + "sink_id": "b72a68e2-95bc-4b6a-b012-cad51ce891e9", + "source_name": "stderr_logs", + "sink_name": "value", + "is_static": false + }, + { + "id": "d0cda6db-96b1-4cae-9a3b-89a7c786ed55", + "source_id": "15886b0c-1525-44b3-8b52-4b1be4fc4c18", + "sink_id": "b72a68e2-95bc-4b6a-b012-cad51ce891e9", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "68a7ac5c-0204-4d37-bd29-33d03c4a8b16", + "source_id": "15886b0c-1525-44b3-8b52-4b1be4fc4c18", + "sink_id": "64b313dc-0b6f-41c0-9c37-3bab18538c5a", + "source_name": "stdout_logs", + "sink_name": "value", + "is_static": false + } + ] + }, + { + "id": "8a844481-e687-469c-a4c6-6135a4fa7fd9", + "block_id": "db7d8f02-2f44-4c55-ab7a-eae0941f0c30", + "input_default": { + "format": "import re\n\nRE = re.compile(r\"[\u201c\u201d\u2018\u2019\u2014\u2013\u2026\u2032\u2033\u2212\u00d7\u00f7\u2264\u2265\u2260\u00b1\u2248\u2192\u2190\u2194\u21d2\u21d0\u2022\u00b7\u00a9\u00ae\u2122\u00bd\u00bc\u00be\u2153\u2154\u215b\u215c\u215d\u215e\u00ab\u00bb\u2039\u203a\\u00A0\\u2010\\u2011\\u2012\\u00AD]\")\n\nMAP = {\n \"\u201c\": '\"', \"\u201d\": '\"',\n \"\u2018\": \"'\", \"\u2019\": \"'\",\n \"\u2014\": \" - \", \"\u2013\": \"-\",\n \"\u2026\": \"...\",\n \"\u2032\": \"'\", \"\u2033\": '\"',\n \"\u2212\": \"-\", # U+2212 minus\n \"\u00d7\": \"x\", \"\u00f7\": \"/\",\n \"\u2264\": \"<=\", \"\u2265\": \">=\", \"\u2260\": \"!=\",\n \"\u00b1\": \"+/-\", \"\u2248\": \"~\",\n \"\u2192\": \"->\", \"\u2190\": \"<-\", \"\u2194\": \"<->\",\n \"\u21d2\": \"=>\", \"\u21d0\": \"<=\",\n \"\u2022\": \"*\", \"\u00b7\": \".\",\n \"\u00a9\": \"(C)\", \"\u00ae\": \"(R)\", \"\u2122\": \"(TM)\",\n \"\u00bd\": \"1/2\", \"\u00bc\": \"1/4\", \"\u00be\": \"3/4\",\n \"\u2153\": \"1/3\", 
\"\u2154\": \"2/3\",\n \"\u215b\": \"1/8\", \"\u215c\": \"3/8\", \"\u215d\": \"5/8\", \"\u215e\": \"7/8\",\n \"\u00ab\": \"<<\", \"\u00bb\": \">>\", \"\u2039\": \"<\", \"\u203a\": \">\",\n \"\\u00A0\": \" \", # NBSP -> space\n \"\\u2010\": \"-\", # HYPHEN\n \"\\u2011\": \"-\", # NON-BREAKING HYPHEN\n \"\\u2012\": \"-\", # FIGURE DASH\n \"\\u00AD\": \"\", # SOFT HYPHEN\n}\n\ninput_string = \"\"\"{{text | safe}}\"\"\"\nprint(RE.sub(lambda m: MAP[m.group(0)], input_string))\n", + "values": {} + }, + "metadata": { + "position": { + "x": 798.0204985759965, + "y": 604.6283090015096 + } + }, + "input_links": [ + { + "id": "f5c80b8e-c9b5-46f3-a1d1-c78ed476ced9", + "source_id": "2c805606-23e3-479f-a1dc-b3a1649dac4d", + "sink_id": "8a844481-e687-469c-a4c6-6135a4fa7fd9", + "source_name": "result", + "sink_name": "values_#_text", + "is_static": true + } + ], + "output_links": [ + { + "id": "5eeb3e7d-130a-46a9-8939-af9fbb1db3d4", + "source_id": "8a844481-e687-469c-a4c6-6135a4fa7fd9", + "sink_id": "15886b0c-1525-44b3-8b52-4b1be4fc4c18", + "source_name": "output", + "sink_name": "code", + "is_static": false + } + ] + }, + { + "id": "2c805606-23e3-479f-a1dc-b3a1649dac4d", + "block_id": "90a56ffb-7024-4b2b-ab50-e26c5e5ab8ba", + "input_default": { + "name": "AI Generated Text", + "secret": false, + "advanced": false, + "placeholder_values": [] + }, + "metadata": { + "position": { + "x": 205.77212429609472, + "y": 602.9791272931379 + } + }, + "input_links": [], + "output_links": [ + { + "id": "f5c80b8e-c9b5-46f3-a1d1-c78ed476ced9", + "source_id": "2c805606-23e3-479f-a1dc-b3a1649dac4d", + "sink_id": "8a844481-e687-469c-a4c6-6135a4fa7fd9", + "source_name": "result", + "sink_name": "values_#_text", + "is_static": true + } + ] + }, + { + "id": "64b313dc-0b6f-41c0-9c37-3bab18538c5a", + "block_id": "363ae599-353e-4804-937e-b2ee3cef3da4", + "input_default": { + "name": "Humanized Text", + "secret": false, + "advanced": false + }, + "metadata": { + "position": { + "x": 2551.9887740433414, + "y": 617.3378904696456 + } + }, + "input_links": [ + { + "id": "68a7ac5c-0204-4d37-bd29-33d03c4a8b16", + "source_id": "15886b0c-1525-44b3-8b52-4b1be4fc4c18", + "sink_id": "64b313dc-0b6f-41c0-9c37-3bab18538c5a", + "source_name": "stdout_logs", + "sink_name": "value", + "is_static": false + } + ], + "output_links": [] + }, + { + "id": "b72a68e2-95bc-4b6a-b012-cad51ce891e9", + "block_id": "363ae599-353e-4804-937e-b2ee3cef3da4", + "input_default": { + "name": "Error", + "secret": false, + "advanced": false + }, + "metadata": { + "position": { + "x": 3111.9890459714757, + "y": 617.3379217231886 + } + }, + "input_links": [ + { + "id": "341de54d-aba8-4f6f-b0f3-225e12f54dee", + "source_id": "15886b0c-1525-44b3-8b52-4b1be4fc4c18", + "sink_id": "b72a68e2-95bc-4b6a-b012-cad51ce891e9", + "source_name": "stderr_logs", + "sink_name": "value", + "is_static": false + }, + { + "id": "d0cda6db-96b1-4cae-9a3b-89a7c786ed55", + "source_id": "15886b0c-1525-44b3-8b52-4b1be4fc4c18", + "sink_id": "b72a68e2-95bc-4b6a-b012-cad51ce891e9", + "source_name": "error", + "sink_name": "value", + "is_static": false + } + ], + "output_links": [] + } + ], + "links": [ + { + "id": "f5c80b8e-c9b5-46f3-a1d1-c78ed476ced9", + "source_id": "2c805606-23e3-479f-a1dc-b3a1649dac4d", + "sink_id": "8a844481-e687-469c-a4c6-6135a4fa7fd9", + "source_name": "result", + "sink_name": "values_#_text", + "is_static": true + }, + { + "id": "5eeb3e7d-130a-46a9-8939-af9fbb1db3d4", + "source_id": "8a844481-e687-469c-a4c6-6135a4fa7fd9", + "sink_id": "15886b0c-1525-44b3-8b52-4b1be4fc4c18", + 
"source_name": "output", + "sink_name": "code", + "is_static": false + }, + { + "id": "341de54d-aba8-4f6f-b0f3-225e12f54dee", + "source_id": "15886b0c-1525-44b3-8b52-4b1be4fc4c18", + "sink_id": "b72a68e2-95bc-4b6a-b012-cad51ce891e9", + "source_name": "stderr_logs", + "sink_name": "value", + "is_static": false + }, + { + "id": "d0cda6db-96b1-4cae-9a3b-89a7c786ed55", + "source_id": "15886b0c-1525-44b3-8b52-4b1be4fc4c18", + "sink_id": "b72a68e2-95bc-4b6a-b012-cad51ce891e9", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "68a7ac5c-0204-4d37-bd29-33d03c4a8b16", + "source_id": "15886b0c-1525-44b3-8b52-4b1be4fc4c18", + "sink_id": "64b313dc-0b6f-41c0-9c37-3bab18538c5a", + "source_name": "stdout_logs", + "sink_name": "value", + "is_static": false + } + ], + "forked_from_id": null, + "forked_from_version": null, + "input_schema": { + "type": "object", + "properties": { + "AI Generated Text": { + "advanced": false, + "anyOf": [ + { + "format": "long-text", + "type": "string" + }, + { + "type": "null" + } + ], + "secret": false, + "title": "AI Generated Text" + } + }, + "required": [ + "AI Generated Text" + ] + }, + "output_schema": { + "type": "object", + "properties": { + "Humanized Text": { + "advanced": false, + "secret": false, + "title": "Humanized Text" + }, + "Error": { + "advanced": false, + "secret": false, + "title": "Error" + } + }, + "required": [ + "Humanized Text", + "Error" + ] + }, + "has_external_trigger": false, + "has_human_in_the_loop": false, + "trigger_setup_info": null + }, + { + "id": "062c0dfe-15c2-48ae-bf3a-4ec36d818d08", + "version": 38, + "is_active": false, + "name": "Keyword SEO Expert", + "description": "takes the guesswork out of SEO by analyzing your website with top keyword research tools, then recommending the most valuable keyword to target based on your domain and niche \u2014 so you know exactly where to focus for maximum search impact.", + "instructions": null, + "recommended_schedule_cron": null, + "nodes": [ + { + "id": "542e391b-121c-4ca1-8327-7fef6f8fac12", + "block_id": "3b191d9f-356f-482d-8238-ba04b6d18381", + "input_default": { + "model": "gpt-5-2025-08-07", + "retry": 3, + "prompt": "You are a seasoned SEO strategist, hired to perform expert quality SEO research, in order to select the next blog post topic for this website.\n\nYou must:\n- Treat this as a serious, high-stakes SEO decision where the site's growth depends on your choice. Pick the highest volume, low difficulty keyword possible which is relevant to the website.\n- Work like a real SEO expert would: explore multiple angles, refine ideas, and check underserved niches before making a recommendation.\n- Prefer extra research over premature completion. 
Continue until you are confident, based on the evidence gathered, that no stronger keyword opportunity exists.\n\nTools Available:\n1) Get Related Search Keywords \u2014 related keywords from Google SERP \u201csearches related to\u201d.\n2) Get Autocomplete Keyword Suggestions \u2014 autocomplete ideas with words before, after, or within the seed keyword.\n\nInputs:\n\n{{DOMAIN | safe}}\n\n\n{{SITE_TOPIC | safe}}\n\n\n{{SITE_DESC | safe}}\n\n\n{{POSTS | safe}}\n\n\nProcess:\n1) Generate multiple broad seed keyword ideas for the site\u2019s niche.\n2) For each seed:\n - Use BOTH tools to gather suggestions.\n - Merge/deduplicate results; note volume/intent/fit if available.\n3) After each round, reflect explicitly:\n \u201cDo I have enough evidence to pick the best keyword, or could I improve by exploring another angle or refining a seed?\u201d\n If improvement is possible, keep researching.\n4) Only when further research is unlikely to yield a better opportunity:\n - Select ONE Primary Keyword.\n - Propose a compelling blog title that includes the Primary Keyword.\n - Summarize the evidence and reasoning that supports the choice.\n\nOUTPUT FORMAT (strict):\nReturn EXACTLY seven XML blocks, plain text inside each, no nesting and no other content outside these tags:\n1) \u2026 \u2014 a chronological log of seeds tried, which tool(s) were used, notable findings.\n2) \u2026 \u2014 why certain keywords were kept or discarded; how they map to site goals/gaps.\n3) \u2026 \u2014 the chosen primary keyword (plain text only).\n4) \u2026 \u2014 the proposed blog title including the primary keyword.\n5) \u2026 \u2014 keyword difficulty for the chosen primary keyword (numeric if available; otherwise \u201cunknown\u201d).\n6) \u2026 \u2014 monthly search volume for the chosen primary keyword (numeric if available; otherwise \u201cunknown\u201d).\n7) \u2026 \u2014 brief evidence-backed rationale for the selection.\n\nRules:\n- Do not include any XML tags other than: research_log, selection_notes, primary_keyword, proposed_title, kd, sv, justification.\n- Do not nest or add attributes.\n- Keep kd and sv to a single value each (no units; plain number or \u201cunknown\u201d).\n- Keep all other structure as plain text lines and bullet points only.\n- Use as many tool calls as necessary to complete your task to a very high degree of quality.\n- Avoid repeating topics covered in previous_posts. Duplicate or overly similar topics must always be avoided.\n", + "sys_prompt": "Thinking carefully step by step decide which function to call. \n\nAlways choose a function call from the list of function signatures, and always provide the complete argument provided with the type matching the required jsonschema signature, no missing argument is allowed. \n\nIf you have already completed the task objective, you can end the task by providing the end result of your work as a finish message. \n\nCritical Reminder: All function parameters must always be be provided, you can't put \"null\" or nothing. \nNEVER provide `null` as an input to a tool, you must always fill out every tool input on your chosen tool\n\nYou may make as many sequential tool calls as needed to complete the research before providing the final output - the vast majority of tasks require many sequential tool calls to achieve. \nContinue using tools until you have high confidence in your choice, as defined by the main prompt.\nAlways call a tool with each message unless you are providing your final response. 
As soon as you call without a tall the task will be permanently ended.\n\nThe following note is regarding PARALLEL tool calls (multiple sequential is okay and heavily encouraged):\n", + "ollama_host": "localhost:11434", + "prompt_values": { + "DOMAIN": "aiespresso.wordpress.com", + "SITE_DESC": "A blog that makes AI approachable and easy to benefit from for the average person", + "SITE_TOPIC": "AI for Everyday People" + }, + "last_tool_output": null, + "multiple_tool_calls": false, + "conversation_history": [] + }, + "metadata": { + "position": { + "x": 352.68980628394695, + "y": 259.96845909220383 + } + }, + "input_links": [ + { + "id": "c08f8540-1033-458a-9bbd-19bfc8877bb8", + "source_id": "55548d2f-6f9d-45d1-aa42-88c259b0d87c", + "sink_id": "542e391b-121c-4ca1-8327-7fef6f8fac12", + "source_name": "result", + "sink_name": "prompt_values_#_DOMAIN", + "is_static": true + }, + { + "id": "642c7991-3941-44bd-8787-d3b1176f33b1", + "source_id": "1d7ec99e-e1c3-4fca-b8f7-c5dc8bc5d36e", + "sink_id": "542e391b-121c-4ca1-8327-7fef6f8fac12", + "source_name": "Error", + "sink_name": "last_tool_output", + "is_static": false + }, + { + "id": "586a626c-8ad0-400a-a7b2-3091c58e3bfe", + "source_id": "1d7ec99e-e1c3-4fca-b8f7-c5dc8bc5d36e", + "sink_id": "542e391b-121c-4ca1-8327-7fef6f8fac12", + "source_name": "Keyword Data", + "sink_name": "last_tool_output", + "is_static": false + }, + { + "id": "9cefe389-418b-4221-a40b-2493297ed6ec", + "source_id": "05dc4a44-a944-4543-904f-481c0a2d400e", + "sink_id": "542e391b-121c-4ca1-8327-7fef6f8fac12", + "source_name": "result", + "sink_name": "prompt_values_#_POSTS", + "is_static": true + }, + { + "id": "4ec7c4ac-361b-40ba-9fd7-b14b5605a51d", + "source_id": "c3659055-1b2f-4615-8789-91f0bbf87177", + "sink_id": "542e391b-121c-4ca1-8327-7fef6f8fac12", + "source_name": "Keyword Data", + "sink_name": "last_tool_output", + "is_static": false + }, + { + "id": "8a7fef86-932c-4f9f-952b-df2a7cc719f0", + "source_id": "1d7ec99e-e1c3-4fca-b8f7-c5dc8bc5d36e", + "sink_id": "542e391b-121c-4ca1-8327-7fef6f8fac12", + "source_name": "No Keywords Found", + "sink_name": "last_tool_output", + "is_static": false + }, + { + "id": "ffbfaade-ca79-4671-9bb3-0b170635bdc3", + "source_id": "fe19496a-6df6-4a22-9e4e-6d2bff38a4fe", + "sink_id": "542e391b-121c-4ca1-8327-7fef6f8fac12", + "source_name": "result", + "sink_name": "prompt_values_#_SITE_TOPIC", + "is_static": true + }, + { + "id": "2670a7c9-501e-4a79-8a71-c49e4bd7fa58", + "source_id": "c3659055-1b2f-4615-8789-91f0bbf87177", + "sink_id": "542e391b-121c-4ca1-8327-7fef6f8fac12", + "source_name": "Error", + "sink_name": "last_tool_output", + "is_static": false + }, + { + "id": "842a7d43-3086-4952-a2f6-761465cb375d", + "source_id": "c3659055-1b2f-4615-8789-91f0bbf87177", + "sink_id": "542e391b-121c-4ca1-8327-7fef6f8fac12", + "source_name": "No Keywords Found", + "sink_name": "last_tool_output", + "is_static": false + }, + { + "id": "627b7e64-a673-445f-9c83-57ff4e20f9b1", + "source_id": "542e391b-121c-4ca1-8327-7fef6f8fac12", + "sink_id": "542e391b-121c-4ca1-8327-7fef6f8fac12", + "source_name": "conversations", + "sink_name": "conversation_history", + "is_static": false + }, + { + "id": "f0c06198-971e-4638-9b0c-23533b891241", + "source_id": "0344ac9e-eed6-4405-a9c8-f0dcf0ca09e3", + "sink_id": "542e391b-121c-4ca1-8327-7fef6f8fac12", + "source_name": "result", + "sink_name": "prompt_values_#_SITE_DESC", + "is_static": true + } + ], + "output_links": [ + { + "id": "3424d022-438e-42b8-91a9-5eaaddefc98f", + "source_id": 
"542e391b-121c-4ca1-8327-7fef6f8fac12", + "sink_id": "c3659055-1b2f-4615-8789-91f0bbf87177", + "source_name": "tools_^_get_related_search_keywords_~_minimum_volume", + "sink_name": "Minimum Volume", + "is_static": false + }, + { + "id": "38c43ee6-072f-4b08-87f9-318b499b198d", + "source_id": "542e391b-121c-4ca1-8327-7fef6f8fac12", + "sink_id": "d4c81e1a-9d5c-457d-aeb3-a941787ff30b", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "10ef8efb-46ac-4c35-bc85-453b9db1907d", + "source_id": "542e391b-121c-4ca1-8327-7fef6f8fac12", + "sink_id": "c3659055-1b2f-4615-8789-91f0bbf87177", + "source_name": "tools_^_get_related_search_keywords_~_max_keyword_difficulty", + "sink_name": "Max Keyword Difficulty", + "is_static": false + }, + { + "id": "405c8acf-1abf-4a8d-8323-8edbea98d837", + "source_id": "542e391b-121c-4ca1-8327-7fef6f8fac12", + "sink_id": "c3659055-1b2f-4615-8789-91f0bbf87177", + "source_name": "tools_^_get_related_search_keywords_~_keyword", + "sink_name": "Keyword", + "is_static": false + }, + { + "id": "6d35548e-4440-493a-9b2d-1a836db49778", + "source_id": "542e391b-121c-4ca1-8327-7fef6f8fac12", + "sink_id": "1d7ec99e-e1c3-4fca-b8f7-c5dc8bc5d36e", + "source_name": "tools_^_get_autocomplete_keyword_suggestions_~_keyword", + "sink_name": "Keyword", + "is_static": false + }, + { + "id": "627b7e64-a673-445f-9c83-57ff4e20f9b1", + "source_id": "542e391b-121c-4ca1-8327-7fef6f8fac12", + "sink_id": "542e391b-121c-4ca1-8327-7fef6f8fac12", + "source_name": "conversations", + "sink_name": "conversation_history", + "is_static": false + }, + { + "id": "cc390dfe-ec89-4962-96ce-76bf20f373b9", + "source_id": "542e391b-121c-4ca1-8327-7fef6f8fac12", + "sink_id": "1d7ec99e-e1c3-4fca-b8f7-c5dc8bc5d36e", + "source_name": "tools_^_get_autocomplete_keyword_suggestions_~_max_keyword_difficulty", + "sink_name": "Max Keyword Difficulty", + "is_static": false + }, + { + "id": "da7242d2-5f0c-41ae-a3bf-4f6ce6aabc10", + "source_id": "542e391b-121c-4ca1-8327-7fef6f8fac12", + "sink_id": "9000deed-fde2-46ee-a12e-3660a0229a7a", + "source_name": "finished", + "sink_name": "input_xml", + "is_static": false + }, + { + "id": "948a9164-7270-43ba-998b-7280479fbc4f", + "source_id": "542e391b-121c-4ca1-8327-7fef6f8fac12", + "sink_id": "1d7ec99e-e1c3-4fca-b8f7-c5dc8bc5d36e", + "source_name": "tools_^_get_autocomplete_keyword_suggestions_~_minimum_volume", + "sink_name": "Minimum Volume", + "is_static": false + } + ] + }, + { + "id": "e3bafccc-ef89-4422-bb8c-4631a81a30ff", + "block_id": "363ae599-353e-4804-937e-b2ee3cef3da4", + "input_default": { + "name": "Primary Keyword", + "title": null, + "value": null, + "format": "", + "secret": false, + "advanced": false, + "description": null + }, + "metadata": { + "position": { + "x": 4656.850042368009, + "y": 1934.8347990637833 + } + }, + "input_links": [ + { + "id": "3895ff23-7887-4678-a698-139a3220734e", + "source_id": "a8465106-f0c2-4402-85ba-9dcaf1b5ac62", + "sink_id": "e3bafccc-ef89-4422-bb8c-4631a81a30ff", + "source_name": "output", + "sink_name": "value", + "is_static": false + } + ], + "output_links": [] + }, + { + "id": "fe19496a-6df6-4a22-9e4e-6d2bff38a4fe", + "block_id": "7fcd3bcb-8e1b-4e69-903d-32d3d4a92158", + "input_default": { + "name": "Website Primary Topic", + "title": null, + "value": null, + "secret": false, + "advanced": false, + "description": "A short phrase summarizing your main subject or niche. 
Example: \u201cAI for Everyday People.\u201d", + "placeholder_values": [] + }, + "metadata": { + "position": { + "x": -1032.7414381010028, + "y": 272.7029940933061 + } + }, + "input_links": [], + "output_links": [ + { + "id": "ffbfaade-ca79-4671-9bb3-0b170635bdc3", + "source_id": "fe19496a-6df6-4a22-9e4e-6d2bff38a4fe", + "sink_id": "542e391b-121c-4ca1-8327-7fef6f8fac12", + "source_name": "result", + "sink_name": "prompt_values_#_SITE_TOPIC", + "is_static": true + } + ] + }, + { + "id": "55548d2f-6f9d-45d1-aa42-88c259b0d87c", + "block_id": "7fcd3bcb-8e1b-4e69-903d-32d3d4a92158", + "input_default": { + "name": "Domain", + "title": null, + "value": null, + "secret": false, + "advanced": false, + "description": null, + "placeholder_values": [] + }, + "metadata": { + "position": { + "x": -504.58356355679166, + "y": 279.1213802574623 + } + }, + "input_links": [], + "output_links": [ + { + "id": "c08f8540-1033-458a-9bbd-19bfc8877bb8", + "source_id": "55548d2f-6f9d-45d1-aa42-88c259b0d87c", + "sink_id": "542e391b-121c-4ca1-8327-7fef6f8fac12", + "source_name": "result", + "sink_name": "prompt_values_#_DOMAIN", + "is_static": true + } + ] + }, + { + "id": "0344ac9e-eed6-4405-a9c8-f0dcf0ca09e3", + "block_id": "90a56ffb-7024-4b2b-ab50-e26c5e5ab8ba", + "input_default": { + "name": "Website Description", + "title": null, + "value": null, + "secret": false, + "advanced": false, + "description": "A brief explanation of your site\u2019s purpose, audience, and value. Example: \u201cA blog that makes AI approachable and easy to benefit from for the average person.\u201d", + "placeholder_values": [] + }, + "metadata": { + "position": { + "x": -1618.6352277845856, + "y": 268.3754316343051 + } + }, + "input_links": [], + "output_links": [ + { + "id": "f0c06198-971e-4638-9b0c-23533b891241", + "source_id": "0344ac9e-eed6-4405-a9c8-f0dcf0ca09e3", + "sink_id": "542e391b-121c-4ca1-8327-7fef6f8fac12", + "source_name": "result", + "sink_name": "prompt_values_#_SITE_DESC", + "is_static": true + } + ] + }, + { + "id": "05dc4a44-a944-4543-904f-481c0a2d400e", + "block_id": "90a56ffb-7024-4b2b-ab50-e26c5e5ab8ba", + "input_default": { + "name": "Previous Posts", + "title": null, + "value": null, + "secret": false, + "advanced": false, + "description": "A list of your website's previous blog post titles", + "placeholder_values": [] + }, + "metadata": { + "position": { + "x": -2199.218708524658, + "y": 266.2945888433429 + } + }, + "input_links": [], + "output_links": [ + { + "id": "9cefe389-418b-4221-a40b-2493297ed6ec", + "source_id": "05dc4a44-a944-4543-904f-481c0a2d400e", + "sink_id": "542e391b-121c-4ca1-8327-7fef6f8fac12", + "source_name": "result", + "sink_name": "prompt_values_#_POSTS", + "is_static": true + } + ] + }, + { + "id": "d4c81e1a-9d5c-457d-aeb3-a941787ff30b", + "block_id": "363ae599-353e-4804-937e-b2ee3cef3da4", + "input_default": { + "name": "Error", + "title": null, + "value": null, + "format": "", + "secret": false, + "advanced": false, + "description": null + }, + "metadata": { + "position": { + "x": 2146.9002151573304, + "y": 4143.429009250373 + } + }, + "input_links": [ + { + "id": "38c43ee6-072f-4b08-87f9-318b499b198d", + "source_id": "542e391b-121c-4ca1-8327-7fef6f8fac12", + "sink_id": "d4c81e1a-9d5c-457d-aeb3-a941787ff30b", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "51cefea1-15bd-48e4-82e1-9097eab637bc", + "source_id": "9000deed-fde2-46ee-a12e-3660a0229a7a", + "sink_id": "d4c81e1a-9d5c-457d-aeb3-a941787ff30b", + "source_name": "error", + "sink_name": 
"value", + "is_static": false + } + ], + "output_links": [] + }, + { + "id": "9000deed-fde2-46ee-a12e-3660a0229a7a", + "block_id": "286380af-9529-4b55-8be0-1d7c854abdb5", + "input_default": {}, + "metadata": { + "position": { + "x": 1974.4003297012903, + "y": 2973.429046826421 + } + }, + "input_links": [ + { + "id": "da7242d2-5f0c-41ae-a3bf-4f6ce6aabc10", + "source_id": "542e391b-121c-4ca1-8327-7fef6f8fac12", + "sink_id": "9000deed-fde2-46ee-a12e-3660a0229a7a", + "source_name": "finished", + "sink_name": "input_xml", + "is_static": false + } + ], + "output_links": [ + { + "id": "ccdd1c36-faf4-4dcb-931a-8ed3da6124a8", + "source_id": "9000deed-fde2-46ee-a12e-3660a0229a7a", + "sink_id": "9ba91bda-de3c-4b0d-b9d0-0cfbf28b9c26", + "source_name": "parsed_xml", + "sink_name": "input", + "is_static": false + }, + { + "id": "9bf42085-0f70-421d-806c-bd3cb049e016", + "source_id": "9000deed-fde2-46ee-a12e-3660a0229a7a", + "sink_id": "7530fa6d-d585-41fb-89bd-ac0b3f9ef3ca", + "source_name": "parsed_xml", + "sink_name": "input", + "is_static": false + }, + { + "id": "a8720e35-7300-4a05-bc4a-4515d8a87300", + "source_id": "9000deed-fde2-46ee-a12e-3660a0229a7a", + "sink_id": "a8465106-f0c2-4402-85ba-9dcaf1b5ac62", + "source_name": "parsed_xml", + "sink_name": "input", + "is_static": false + }, + { + "id": "51cefea1-15bd-48e4-82e1-9097eab637bc", + "source_id": "9000deed-fde2-46ee-a12e-3660a0229a7a", + "sink_id": "d4c81e1a-9d5c-457d-aeb3-a941787ff30b", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "791876d6-2b30-445a-a727-e23ec5f1e55c", + "source_id": "9000deed-fde2-46ee-a12e-3660a0229a7a", + "sink_id": "dc079b5e-d8a2-463b-a944-823df2d09b0e", + "source_name": "parsed_xml", + "sink_name": "input", + "is_static": false + }, + { + "id": "f7f057f7-2153-43f7-8f81-ad91d8359708", + "source_id": "9000deed-fde2-46ee-a12e-3660a0229a7a", + "sink_id": "b21a47a6-1a49-489f-b0bc-507c52b52b20", + "source_name": "parsed_xml", + "sink_name": "input", + "is_static": false + } + ] + }, + { + "id": "a8465106-f0c2-4402-85ba-9dcaf1b5ac62", + "block_id": "0e50422c-6dee-4145-83d6-3a5a392f65de", + "input_default": { + "key": "primary_keyword" + }, + "metadata": { + "position": { + "x": 3965.8326973926414, + "y": 1920.2696668872873 + } + }, + "input_links": [ + { + "id": "a8720e35-7300-4a05-bc4a-4515d8a87300", + "source_id": "9000deed-fde2-46ee-a12e-3660a0229a7a", + "sink_id": "a8465106-f0c2-4402-85ba-9dcaf1b5ac62", + "source_name": "parsed_xml", + "sink_name": "input", + "is_static": false + } + ], + "output_links": [ + { + "id": "3895ff23-7887-4678-a698-139a3220734e", + "source_id": "a8465106-f0c2-4402-85ba-9dcaf1b5ac62", + "sink_id": "e3bafccc-ef89-4422-bb8c-4631a81a30ff", + "source_name": "output", + "sink_name": "value", + "is_static": false + } + ] + }, + { + "id": "9ba91bda-de3c-4b0d-b9d0-0cfbf28b9c26", + "block_id": "0e50422c-6dee-4145-83d6-3a5a392f65de", + "input_default": { + "key": "justification" + }, + "metadata": { + "position": { + "x": 3963.5980556850923, + "y": 3859.0123170291718 + } + }, + "input_links": [ + { + "id": "ccdd1c36-faf4-4dcb-931a-8ed3da6124a8", + "source_id": "9000deed-fde2-46ee-a12e-3660a0229a7a", + "sink_id": "9ba91bda-de3c-4b0d-b9d0-0cfbf28b9c26", + "source_name": "parsed_xml", + "sink_name": "input", + "is_static": false + } + ], + "output_links": [ + { + "id": "db342600-82fe-434e-841a-9253bca321d1", + "source_id": "9ba91bda-de3c-4b0d-b9d0-0cfbf28b9c26", + "sink_id": "666e537d-1b77-4203-b3fc-633fa85047f2", + "source_name": "output", + "sink_name": "value", + 
"is_static": false + } + ] + }, + { + "id": "dc079b5e-d8a2-463b-a944-823df2d09b0e", + "block_id": "0e50422c-6dee-4145-83d6-3a5a392f65de", + "input_default": { + "key": "proposed_title" + }, + "metadata": { + "position": { + "x": 3953.3134885963573, + "y": 2902.045587327791 + } + }, + "input_links": [ + { + "id": "791876d6-2b30-445a-a727-e23ec5f1e55c", + "source_id": "9000deed-fde2-46ee-a12e-3660a0229a7a", + "sink_id": "dc079b5e-d8a2-463b-a944-823df2d09b0e", + "source_name": "parsed_xml", + "sink_name": "input", + "is_static": false + } + ], + "output_links": [ + { + "id": "c22687de-adf9-4cb9-abc1-959c324f2818", + "source_id": "dc079b5e-d8a2-463b-a944-823df2d09b0e", + "sink_id": "72b726cf-76bf-4556-8dcb-13c5941692d0", + "source_name": "output", + "sink_name": "value", + "is_static": false + } + ] + }, + { + "id": "72b726cf-76bf-4556-8dcb-13c5941692d0", + "block_id": "363ae599-353e-4804-937e-b2ee3cef3da4", + "input_default": { + "name": "Proposed Title", + "title": null, + "value": null, + "format": "", + "secret": false, + "advanced": false, + "description": null + }, + "metadata": { + "position": { + "x": 4709.161620441494, + "y": 2908.0903438523746 + } + }, + "input_links": [ + { + "id": "c22687de-adf9-4cb9-abc1-959c324f2818", + "source_id": "dc079b5e-d8a2-463b-a944-823df2d09b0e", + "sink_id": "72b726cf-76bf-4556-8dcb-13c5941692d0", + "source_name": "output", + "sink_name": "value", + "is_static": false + } + ], + "output_links": [] + }, + { + "id": "666e537d-1b77-4203-b3fc-633fa85047f2", + "block_id": "363ae599-353e-4804-937e-b2ee3cef3da4", + "input_default": { + "name": "Justification", + "title": null, + "value": null, + "format": "", + "secret": false, + "advanced": false, + "description": null + }, + "metadata": { + "position": { + "x": 4676.881380660359, + "y": 3866.38733581504 + } + }, + "input_links": [ + { + "id": "db342600-82fe-434e-841a-9253bca321d1", + "source_id": "9ba91bda-de3c-4b0d-b9d0-0cfbf28b9c26", + "sink_id": "666e537d-1b77-4203-b3fc-633fa85047f2", + "source_name": "output", + "sink_name": "value", + "is_static": false + } + ], + "output_links": [] + }, + { + "id": "d753bfcb-875a-4eab-baa0-d4d72c285e75", + "block_id": "363ae599-353e-4804-937e-b2ee3cef3da4", + "input_default": { + "name": "Keyword Difficulty", + "title": null, + "value": null, + "format": "", + "secret": false, + "advanced": false, + "description": null + }, + "metadata": { + "position": { + "x": 4665.022736582729, + "y": 4737.442694914714 + } + }, + "input_links": [ + { + "id": "2362bd80-c538-4346-800d-a2ef52584ccd", + "source_id": "b21a47a6-1a49-489f-b0bc-507c52b52b20", + "sink_id": "d753bfcb-875a-4eab-baa0-d4d72c285e75", + "source_name": "output", + "sink_name": "value", + "is_static": false + } + ], + "output_links": [] + }, + { + "id": "30a4a6e9-acc4-4d6b-9a60-9e5a095eabde", + "block_id": "363ae599-353e-4804-937e-b2ee3cef3da4", + "input_default": { + "name": "Search Volume", + "title": null, + "value": null, + "format": "", + "secret": false, + "advanced": false, + "description": null + }, + "metadata": { + "position": { + "x": 4654.590430839753, + "y": 5542.18112551929 + } + }, + "input_links": [ + { + "id": "08df15a6-fcb5-4d1f-9ca0-5cc66c16b372", + "source_id": "7530fa6d-d585-41fb-89bd-ac0b3f9ef3ca", + "sink_id": "30a4a6e9-acc4-4d6b-9a60-9e5a095eabde", + "source_name": "output", + "sink_name": "value", + "is_static": false + } + ], + "output_links": [] + }, + { + "id": "7530fa6d-d585-41fb-89bd-ac0b3f9ef3ca", + "block_id": "0e50422c-6dee-4145-83d6-3a5a392f65de", + "input_default": { + "key": "sv" 
+ }, + "metadata": { + "position": { + "x": 3962.535456565219, + "y": 5547.358192948138 + } + }, + "input_links": [ + { + "id": "9bf42085-0f70-421d-806c-bd3cb049e016", + "source_id": "9000deed-fde2-46ee-a12e-3660a0229a7a", + "sink_id": "7530fa6d-d585-41fb-89bd-ac0b3f9ef3ca", + "source_name": "parsed_xml", + "sink_name": "input", + "is_static": false + } + ], + "output_links": [ + { + "id": "08df15a6-fcb5-4d1f-9ca0-5cc66c16b372", + "source_id": "7530fa6d-d585-41fb-89bd-ac0b3f9ef3ca", + "sink_id": "30a4a6e9-acc4-4d6b-9a60-9e5a095eabde", + "source_name": "output", + "sink_name": "value", + "is_static": false + } + ] + }, + { + "id": "b21a47a6-1a49-489f-b0bc-507c52b52b20", + "block_id": "0e50422c-6dee-4145-83d6-3a5a392f65de", + "input_default": { + "key": "kd" + }, + "metadata": { + "position": { + "x": 3962.3501099410223, + "y": 4736.499717747163 + } + }, + "input_links": [ + { + "id": "f7f057f7-2153-43f7-8f81-ad91d8359708", + "source_id": "9000deed-fde2-46ee-a12e-3660a0229a7a", + "sink_id": "b21a47a6-1a49-489f-b0bc-507c52b52b20", + "source_name": "parsed_xml", + "sink_name": "input", + "is_static": false + } + ], + "output_links": [ + { + "id": "2362bd80-c538-4346-800d-a2ef52584ccd", + "source_id": "b21a47a6-1a49-489f-b0bc-507c52b52b20", + "sink_id": "d753bfcb-875a-4eab-baa0-d4d72c285e75", + "source_name": "output", + "sink_name": "value", + "is_static": false + } + ] + }, + { + "id": "c3659055-1b2f-4615-8789-91f0bbf87177", + "block_id": "e189baac-8c20-45a1-94a7-55177ea42565", + "input_default": { + "inputs": { + "Minimum Volume": 0, + "Max Keyword Difficulty": 100 + }, + "user_id": "b3e41ea4-2f4c-4964-927c-fe682d857bad", + "graph_id": "20b42974-fdb4-49f5-a9cc-04340f09251e", + "input_schema": { + "type": "object", + "required": [ + "Keyword" + ], + "properties": { + "Keyword": { + "anyOf": [ + { + "type": "string", + "format": "short-text" + }, + { + "type": "null" + } + ], + "title": "Keyword", + "secret": false, + "advanced": false, + "description": "The seed keyword to lookup" + }, + "Minimum Volume": { + "anyOf": [ + { + "type": "integer" + }, + { + "type": "null" + } + ], + "title": "Minimum Volume", + "secret": false, + "default": 0, + "advanced": false + }, + "Max Keyword Difficulty": { + "anyOf": [ + { + "type": "integer" + }, + { + "type": "null" + } + ], + "title": "Max Keyword Difficulty", + "secret": false, + "default": 100, + "advanced": false + } + } + }, + "graph_version": 73, + "output_schema": { + "type": "object", + "required": [ + "No Keywords Found", + "Keyword Data", + "Error" + ], + "properties": { + "Error": { + "title": "Error", + "secret": false, + "advanced": false + }, + "Keyword Data": { + "title": "Keyword Data", + "secret": false, + "advanced": false + }, + "No Keywords Found": { + "title": "No Keywords Found", + "secret": false, + "advanced": false + } + } + } + }, + "metadata": { + "position": { + "x": 1700.8005379414687, + "y": -195.69575688882134 + } + }, + "input_links": [ + { + "id": "3424d022-438e-42b8-91a9-5eaaddefc98f", + "source_id": "542e391b-121c-4ca1-8327-7fef6f8fac12", + "sink_id": "c3659055-1b2f-4615-8789-91f0bbf87177", + "source_name": "tools_^_get_related_search_keywords_~_minimum_volume", + "sink_name": "Minimum Volume", + "is_static": false + }, + { + "id": "10ef8efb-46ac-4c35-bc85-453b9db1907d", + "source_id": "542e391b-121c-4ca1-8327-7fef6f8fac12", + "sink_id": "c3659055-1b2f-4615-8789-91f0bbf87177", + "source_name": "tools_^_get_related_search_keywords_~_max_keyword_difficulty", + "sink_name": "Max Keyword Difficulty", + "is_static": false 
+ }, + { + "id": "405c8acf-1abf-4a8d-8323-8edbea98d837", + "source_id": "542e391b-121c-4ca1-8327-7fef6f8fac12", + "sink_id": "c3659055-1b2f-4615-8789-91f0bbf87177", + "source_name": "tools_^_get_related_search_keywords_~_keyword", + "sink_name": "Keyword", + "is_static": false + } + ], + "output_links": [ + { + "id": "4ec7c4ac-361b-40ba-9fd7-b14b5605a51d", + "source_id": "c3659055-1b2f-4615-8789-91f0bbf87177", + "sink_id": "542e391b-121c-4ca1-8327-7fef6f8fac12", + "source_name": "Keyword Data", + "sink_name": "last_tool_output", + "is_static": false + }, + { + "id": "2670a7c9-501e-4a79-8a71-c49e4bd7fa58", + "source_id": "c3659055-1b2f-4615-8789-91f0bbf87177", + "sink_id": "542e391b-121c-4ca1-8327-7fef6f8fac12", + "source_name": "Error", + "sink_name": "last_tool_output", + "is_static": false + }, + { + "id": "842a7d43-3086-4952-a2f6-761465cb375d", + "source_id": "c3659055-1b2f-4615-8789-91f0bbf87177", + "sink_id": "542e391b-121c-4ca1-8327-7fef6f8fac12", + "source_name": "No Keywords Found", + "sink_name": "last_tool_output", + "is_static": false + } + ] + }, + { + "id": "1d7ec99e-e1c3-4fca-b8f7-c5dc8bc5d36e", + "block_id": "e189baac-8c20-45a1-94a7-55177ea42565", + "input_default": { + "inputs": { + "Minimum Volume": 0, + "Max Keyword Difficulty": 100 + }, + "user_id": "b3e41ea4-2f4c-4964-927c-fe682d857bad", + "graph_id": "7c4d8d07-7bf2-440c-88d1-810bec7f4123", + "input_schema": { + "type": "object", + "required": [ + "Keyword" + ], + "properties": { + "Keyword": { + "anyOf": [ + { + "type": "string", + "format": "short-text" + }, + { + "type": "null" + } + ], + "title": "Keyword", + "secret": false, + "advanced": false, + "description": "The seed keyword to lookup" + }, + "Minimum Volume": { + "anyOf": [ + { + "type": "integer" + }, + { + "type": "null" + } + ], + "title": "Minimum Volume", + "secret": false, + "default": 0, + "advanced": false, + "description": "Positive whole number, e.g 1000" + }, + "Max Keyword Difficulty": { + "anyOf": [ + { + "type": "integer" + }, + { + "type": "null" + } + ], + "title": "Max Keyword Difficulty", + "secret": false, + "default": 100, + "advanced": false, + "description": "Positive whole number, range from 0 to 100" + } + } + }, + "graph_version": 35, + "output_schema": { + "type": "object", + "required": [ + "Keyword Data", + "No Keywords Found", + "Error" + ], + "properties": { + "Error": { + "title": "Error", + "secret": false, + "advanced": false + }, + "Keyword Data": { + "title": "Keyword Data", + "secret": false, + "advanced": false + }, + "No Keywords Found": { + "title": "No Keywords Found", + "secret": false, + "advanced": false + } + } + } + }, + "metadata": { + "position": { + "x": 1670.7222438114568, + "y": 1314.0768351227432 + } + }, + "input_links": [ + { + "id": "6d35548e-4440-493a-9b2d-1a836db49778", + "source_id": "542e391b-121c-4ca1-8327-7fef6f8fac12", + "sink_id": "1d7ec99e-e1c3-4fca-b8f7-c5dc8bc5d36e", + "source_name": "tools_^_get_autocomplete_keyword_suggestions_~_keyword", + "sink_name": "Keyword", + "is_static": false + }, + { + "id": "cc390dfe-ec89-4962-96ce-76bf20f373b9", + "source_id": "542e391b-121c-4ca1-8327-7fef6f8fac12", + "sink_id": "1d7ec99e-e1c3-4fca-b8f7-c5dc8bc5d36e", + "source_name": "tools_^_get_autocomplete_keyword_suggestions_~_max_keyword_difficulty", + "sink_name": "Max Keyword Difficulty", + "is_static": false + }, + { + "id": "948a9164-7270-43ba-998b-7280479fbc4f", + "source_id": "542e391b-121c-4ca1-8327-7fef6f8fac12", + "sink_id": "1d7ec99e-e1c3-4fca-b8f7-c5dc8bc5d36e", + "source_name": 
"tools_^_get_autocomplete_keyword_suggestions_~_minimum_volume", + "sink_name": "Minimum Volume", + "is_static": false + } + ], + "output_links": [ + { + "id": "642c7991-3941-44bd-8787-d3b1176f33b1", + "source_id": "1d7ec99e-e1c3-4fca-b8f7-c5dc8bc5d36e", + "sink_id": "542e391b-121c-4ca1-8327-7fef6f8fac12", + "source_name": "Error", + "sink_name": "last_tool_output", + "is_static": false + }, + { + "id": "586a626c-8ad0-400a-a7b2-3091c58e3bfe", + "source_id": "1d7ec99e-e1c3-4fca-b8f7-c5dc8bc5d36e", + "sink_id": "542e391b-121c-4ca1-8327-7fef6f8fac12", + "source_name": "Keyword Data", + "sink_name": "last_tool_output", + "is_static": false + }, + { + "id": "8a7fef86-932c-4f9f-952b-df2a7cc719f0", + "source_id": "1d7ec99e-e1c3-4fca-b8f7-c5dc8bc5d36e", + "sink_id": "542e391b-121c-4ca1-8327-7fef6f8fac12", + "source_name": "No Keywords Found", + "sink_name": "last_tool_output", + "is_static": false + } + ] + } + ], + "links": [ + { + "id": "ccdd1c36-faf4-4dcb-931a-8ed3da6124a8", + "source_id": "9000deed-fde2-46ee-a12e-3660a0229a7a", + "sink_id": "9ba91bda-de3c-4b0d-b9d0-0cfbf28b9c26", + "source_name": "parsed_xml", + "sink_name": "input", + "is_static": false + }, + { + "id": "38c43ee6-072f-4b08-87f9-318b499b198d", + "source_id": "542e391b-121c-4ca1-8327-7fef6f8fac12", + "sink_id": "d4c81e1a-9d5c-457d-aeb3-a941787ff30b", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "642c7991-3941-44bd-8787-d3b1176f33b1", + "source_id": "1d7ec99e-e1c3-4fca-b8f7-c5dc8bc5d36e", + "sink_id": "542e391b-121c-4ca1-8327-7fef6f8fac12", + "source_name": "Error", + "sink_name": "last_tool_output", + "is_static": false + }, + { + "id": "f7f057f7-2153-43f7-8f81-ad91d8359708", + "source_id": "9000deed-fde2-46ee-a12e-3660a0229a7a", + "sink_id": "b21a47a6-1a49-489f-b0bc-507c52b52b20", + "source_name": "parsed_xml", + "sink_name": "input", + "is_static": false + }, + { + "id": "c08f8540-1033-458a-9bbd-19bfc8877bb8", + "source_id": "55548d2f-6f9d-45d1-aa42-88c259b0d87c", + "sink_id": "542e391b-121c-4ca1-8327-7fef6f8fac12", + "source_name": "result", + "sink_name": "prompt_values_#_DOMAIN", + "is_static": true + }, + { + "id": "da7242d2-5f0c-41ae-a3bf-4f6ce6aabc10", + "source_id": "542e391b-121c-4ca1-8327-7fef6f8fac12", + "sink_id": "9000deed-fde2-46ee-a12e-3660a0229a7a", + "source_name": "finished", + "sink_name": "input_xml", + "is_static": false + }, + { + "id": "2670a7c9-501e-4a79-8a71-c49e4bd7fa58", + "source_id": "c3659055-1b2f-4615-8789-91f0bbf87177", + "sink_id": "542e391b-121c-4ca1-8327-7fef6f8fac12", + "source_name": "Error", + "sink_name": "last_tool_output", + "is_static": false + }, + { + "id": "3895ff23-7887-4678-a698-139a3220734e", + "source_id": "a8465106-f0c2-4402-85ba-9dcaf1b5ac62", + "sink_id": "e3bafccc-ef89-4422-bb8c-4631a81a30ff", + "source_name": "output", + "sink_name": "value", + "is_static": false + }, + { + "id": "4ec7c4ac-361b-40ba-9fd7-b14b5605a51d", + "source_id": "c3659055-1b2f-4615-8789-91f0bbf87177", + "sink_id": "542e391b-121c-4ca1-8327-7fef6f8fac12", + "source_name": "Keyword Data", + "sink_name": "last_tool_output", + "is_static": false + }, + { + "id": "9bf42085-0f70-421d-806c-bd3cb049e016", + "source_id": "9000deed-fde2-46ee-a12e-3660a0229a7a", + "sink_id": "7530fa6d-d585-41fb-89bd-ac0b3f9ef3ca", + "source_name": "parsed_xml", + "sink_name": "input", + "is_static": false + }, + { + "id": "c22687de-adf9-4cb9-abc1-959c324f2818", + "source_id": "dc079b5e-d8a2-463b-a944-823df2d09b0e", + "sink_id": "72b726cf-76bf-4556-8dcb-13c5941692d0", + "source_name": 
"output", + "sink_name": "value", + "is_static": false + }, + { + "id": "2362bd80-c538-4346-800d-a2ef52584ccd", + "source_id": "b21a47a6-1a49-489f-b0bc-507c52b52b20", + "sink_id": "d753bfcb-875a-4eab-baa0-d4d72c285e75", + "source_name": "output", + "sink_name": "value", + "is_static": false + }, + { + "id": "627b7e64-a673-445f-9c83-57ff4e20f9b1", + "source_id": "542e391b-121c-4ca1-8327-7fef6f8fac12", + "sink_id": "542e391b-121c-4ca1-8327-7fef6f8fac12", + "source_name": "conversations", + "sink_name": "conversation_history", + "is_static": false + }, + { + "id": "586a626c-8ad0-400a-a7b2-3091c58e3bfe", + "source_id": "1d7ec99e-e1c3-4fca-b8f7-c5dc8bc5d36e", + "sink_id": "542e391b-121c-4ca1-8327-7fef6f8fac12", + "source_name": "Keyword Data", + "sink_name": "last_tool_output", + "is_static": false + }, + { + "id": "6d35548e-4440-493a-9b2d-1a836db49778", + "source_id": "542e391b-121c-4ca1-8327-7fef6f8fac12", + "sink_id": "1d7ec99e-e1c3-4fca-b8f7-c5dc8bc5d36e", + "source_name": "tools_^_get_autocomplete_keyword_suggestions_~_keyword", + "sink_name": "Keyword", + "is_static": false + }, + { + "id": "51cefea1-15bd-48e4-82e1-9097eab637bc", + "source_id": "9000deed-fde2-46ee-a12e-3660a0229a7a", + "sink_id": "d4c81e1a-9d5c-457d-aeb3-a941787ff30b", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "405c8acf-1abf-4a8d-8323-8edbea98d837", + "source_id": "542e391b-121c-4ca1-8327-7fef6f8fac12", + "sink_id": "c3659055-1b2f-4615-8789-91f0bbf87177", + "source_name": "tools_^_get_related_search_keywords_~_keyword", + "sink_name": "Keyword", + "is_static": false + }, + { + "id": "cc390dfe-ec89-4962-96ce-76bf20f373b9", + "source_id": "542e391b-121c-4ca1-8327-7fef6f8fac12", + "sink_id": "1d7ec99e-e1c3-4fca-b8f7-c5dc8bc5d36e", + "source_name": "tools_^_get_autocomplete_keyword_suggestions_~_max_keyword_difficulty", + "sink_name": "Max Keyword Difficulty", + "is_static": false + }, + { + "id": "f0c06198-971e-4638-9b0c-23533b891241", + "source_id": "0344ac9e-eed6-4405-a9c8-f0dcf0ca09e3", + "sink_id": "542e391b-121c-4ca1-8327-7fef6f8fac12", + "source_name": "result", + "sink_name": "prompt_values_#_SITE_DESC", + "is_static": true + }, + { + "id": "791876d6-2b30-445a-a727-e23ec5f1e55c", + "source_id": "9000deed-fde2-46ee-a12e-3660a0229a7a", + "sink_id": "dc079b5e-d8a2-463b-a944-823df2d09b0e", + "source_name": "parsed_xml", + "sink_name": "input", + "is_static": false + }, + { + "id": "842a7d43-3086-4952-a2f6-761465cb375d", + "source_id": "c3659055-1b2f-4615-8789-91f0bbf87177", + "sink_id": "542e391b-121c-4ca1-8327-7fef6f8fac12", + "source_name": "No Keywords Found", + "sink_name": "last_tool_output", + "is_static": false + }, + { + "id": "8a7fef86-932c-4f9f-952b-df2a7cc719f0", + "source_id": "1d7ec99e-e1c3-4fca-b8f7-c5dc8bc5d36e", + "sink_id": "542e391b-121c-4ca1-8327-7fef6f8fac12", + "source_name": "No Keywords Found", + "sink_name": "last_tool_output", + "is_static": false + }, + { + "id": "3424d022-438e-42b8-91a9-5eaaddefc98f", + "source_id": "542e391b-121c-4ca1-8327-7fef6f8fac12", + "sink_id": "c3659055-1b2f-4615-8789-91f0bbf87177", + "source_name": "tools_^_get_related_search_keywords_~_minimum_volume", + "sink_name": "Minimum Volume", + "is_static": false + }, + { + "id": "9cefe389-418b-4221-a40b-2493297ed6ec", + "source_id": "05dc4a44-a944-4543-904f-481c0a2d400e", + "sink_id": "542e391b-121c-4ca1-8327-7fef6f8fac12", + "source_name": "result", + "sink_name": "prompt_values_#_POSTS", + "is_static": true + }, + { + "id": "10ef8efb-46ac-4c35-bc85-453b9db1907d", + "source_id": 
"542e391b-121c-4ca1-8327-7fef6f8fac12", + "sink_id": "c3659055-1b2f-4615-8789-91f0bbf87177", + "source_name": "tools_^_get_related_search_keywords_~_max_keyword_difficulty", + "sink_name": "Max Keyword Difficulty", + "is_static": false + }, + { + "id": "a8720e35-7300-4a05-bc4a-4515d8a87300", + "source_id": "9000deed-fde2-46ee-a12e-3660a0229a7a", + "sink_id": "a8465106-f0c2-4402-85ba-9dcaf1b5ac62", + "source_name": "parsed_xml", + "sink_name": "input", + "is_static": false + }, + { + "id": "08df15a6-fcb5-4d1f-9ca0-5cc66c16b372", + "source_id": "7530fa6d-d585-41fb-89bd-ac0b3f9ef3ca", + "sink_id": "30a4a6e9-acc4-4d6b-9a60-9e5a095eabde", + "source_name": "output", + "sink_name": "value", + "is_static": false + }, + { + "id": "ffbfaade-ca79-4671-9bb3-0b170635bdc3", + "source_id": "fe19496a-6df6-4a22-9e4e-6d2bff38a4fe", + "sink_id": "542e391b-121c-4ca1-8327-7fef6f8fac12", + "source_name": "result", + "sink_name": "prompt_values_#_SITE_TOPIC", + "is_static": true + }, + { + "id": "948a9164-7270-43ba-998b-7280479fbc4f", + "source_id": "542e391b-121c-4ca1-8327-7fef6f8fac12", + "sink_id": "1d7ec99e-e1c3-4fca-b8f7-c5dc8bc5d36e", + "source_name": "tools_^_get_autocomplete_keyword_suggestions_~_minimum_volume", + "sink_name": "Minimum Volume", + "is_static": false + }, + { + "id": "db342600-82fe-434e-841a-9253bca321d1", + "source_id": "9ba91bda-de3c-4b0d-b9d0-0cfbf28b9c26", + "sink_id": "666e537d-1b77-4203-b3fc-633fa85047f2", + "source_name": "output", + "sink_name": "value", + "is_static": false + } + ], + "forked_from_id": null, + "forked_from_version": null, + "input_schema": { + "type": "object", + "properties": { + "Website Primary Topic": { + "advanced": false, + "anyOf": [ + { + "format": "short-text", + "type": "string" + }, + { + "type": "null" + } + ], + "secret": false, + "title": "Website Primary Topic", + "description": "A short phrase summarizing your main subject or niche. Example: \u201cAI for Everyday People.\u201d" + }, + "Domain": { + "advanced": false, + "anyOf": [ + { + "format": "short-text", + "type": "string" + }, + { + "type": "null" + } + ], + "secret": false, + "title": "Domain" + }, + "Website Description": { + "advanced": false, + "anyOf": [ + { + "format": "long-text", + "type": "string" + }, + { + "type": "null" + } + ], + "secret": false, + "title": "Website Description", + "description": "A brief explanation of your site\u2019s purpose, audience, and value. 
Example: \u201cA blog that makes AI approachable and easy to benefit from for the average person.\u201d" + }, + "Previous Posts": { + "advanced": false, + "anyOf": [ + { + "format": "long-text", + "type": "string" + }, + { + "type": "null" + } + ], + "secret": false, + "title": "Previous Posts", + "description": "A list of your website's previous blog post titles" + } + }, + "required": [ + "Website Primary Topic", + "Domain", + "Website Description", + "Previous Posts" + ] + }, + "output_schema": { + "type": "object", + "properties": { + "Primary Keyword": { + "advanced": false, + "secret": false, + "title": "Primary Keyword" + }, + "Error": { + "advanced": false, + "secret": false, + "title": "Error" + }, + "Proposed Title": { + "advanced": false, + "secret": false, + "title": "Proposed Title" + }, + "Justification": { + "advanced": false, + "secret": false, + "title": "Justification" + }, + "Keyword Difficulty": { + "advanced": false, + "secret": false, + "title": "Keyword Difficulty" + }, + "Search Volume": { + "advanced": false, + "secret": false, + "title": "Search Volume" + } + }, + "required": [ + "Primary Keyword", + "Error", + "Proposed Title", + "Justification", + "Keyword Difficulty", + "Search Volume" + ] + }, + "has_external_trigger": false, + "has_human_in_the_loop": false, + "trigger_setup_info": null + }, + { + "id": "657ec6bb-6fa3-4482-9e2d-deefea0536c9", + "version": 73, + "is_active": true, + "name": "Get Related Search Keywords", + "description": "Get Keyword data on phrases Google suggests because real people search them alongside your seed keyword.", + "instructions": null, + "recommended_schedule_cron": null, + "nodes": [ + { + "id": "96e5a74b-8d2d-4ec7-9994-32e378fdaaa5", + "block_id": "cc10ff7b-7753-4ff2-9af6-9399b1a7eddc", + "input_default": { + "text": "= Secondary Keyword Data =" + }, + "metadata": { + "position": { + "x": -255.73234729532345, + "y": -407.6558618593143 + } + }, + "input_links": [], + "output_links": [] + }, + { + "id": "5c20ed44-6e12-4c03-9ecb-04f563935eb5", + "block_id": "715696a0-e1da-45c8-b209-c2fa9c3b0be6", + "input_default": { + "value2": "1", + "no_value": null, + "operator": "<", + "yes_value": "I couldn't find any secondary keywords at all, sorry!\nPlease run me again and let's see if we can fix that. \nIf this keeps happening then report this to my creator." 
+ }, + "metadata": { + "position": { + "x": 254.80072769380757, + "y": 550.4832899348573 + } + }, + "input_links": [ + { + "id": "a0c01e8e-50fb-4c1c-9ed4-825c3b40c33c", + "source_id": "622abbc3-2e79-4c32-b70d-31b9de4bb976", + "sink_id": "5c20ed44-6e12-4c03-9ecb-04f563935eb5", + "source_name": "value", + "sink_name": "no_value", + "is_static": false + }, + { + "id": "25cb3e2c-7a73-4fe9-92b6-1fe90920f89b", + "source_id": "d30c312c-4a28-4d1b-a55a-bdf354064b85", + "sink_id": "5c20ed44-6e12-4c03-9ecb-04f563935eb5", + "source_name": "total_count", + "sink_name": "value1", + "is_static": false + } + ], + "output_links": [ + { + "id": "fded76a6-ae6c-425d-b214-531a3d1d4e59", + "source_id": "5c20ed44-6e12-4c03-9ecb-04f563935eb5", + "sink_id": "bea42c1d-ed2b-4915-b1f3-5630de69e2b8", + "source_name": "yes_output", + "sink_name": "input", + "is_static": false + }, + { + "id": "843e0fbb-c624-41b5-b248-a0fc00a0799d", + "source_id": "5c20ed44-6e12-4c03-9ecb-04f563935eb5", + "sink_id": "c9f70151-fc39-48b6-8a38-085549c91cd7", + "source_name": "no_output", + "sink_name": "values_#_data", + "is_static": false + } + ] + }, + { + "id": "e39442a0-9d9f-412e-8157-98eb4fa46778", + "block_id": "cc10ff7b-7753-4ff2-9af6-9399b1a7eddc", + "input_default": { + "text": "Check if no secondary keywords were found" + }, + "metadata": { + "position": { + "x": 357.42288934544194, + "y": 119.11975196081374 + } + }, + "input_links": [], + "output_links": [] + }, + { + "id": "e56ce5ef-0c66-45ba-aacf-7688f790c979", + "block_id": "363ae599-353e-4804-937e-b2ee3cef3da4", + "input_default": { + "name": "No Keywords Found", + "title": null, + "value": null, + "format": "", + "secret": false, + "advanced": false, + "description": null + }, + "metadata": { + "position": { + "x": 1618.276865532203, + "y": 1847.8911781116722 + }, + "customized_name": "Error Output" + }, + "input_links": [ + { + "id": "f2e021ce-1f95-42e4-a87a-6d071633679c", + "source_id": "bea42c1d-ed2b-4915-b1f3-5630de69e2b8", + "sink_id": "e56ce5ef-0c66-45ba-aacf-7688f790c979", + "source_name": "output", + "sink_name": "value", + "is_static": true + } + ], + "output_links": [] + }, + { + "id": "778c43c0-a2e2-4181-ad22-296fc59ec949", + "block_id": "95d1b990-ce13-4d88-9737-ba5c2070c97b", + "input_default": { + "type": "list" + }, + "metadata": { + "position": { + "x": -2474.469109884351, + "y": 1051.8289532098283 + } + }, + "input_links": [ + { + "id": "f6f0a8dc-6097-464a-808a-dc7ca69be5cb", + "source_id": "1a51cc78-ddb1-4de7-8227-3e35e59a5f83", + "sink_id": "778c43c0-a2e2-4181-ad22-296fc59ec949", + "source_name": "output", + "sink_name": "value", + "is_static": false + } + ], + "output_links": [ + { + "id": "561aabd7-1e74-4213-9af3-3fc4f7dfc530", + "source_id": "778c43c0-a2e2-4181-ad22-296fc59ec949", + "sink_id": "2dfa1598-ea11-4d27-ba6d-18bdb83cdaf5", + "source_name": "error", + "sink_name": "value", + "is_static": false + } + ] + }, + { + "id": "2a83c026-2b6f-4152-b86c-e4aef59335f3", + "block_id": "7fcd3bcb-8e1b-4e69-903d-32d3d4a92158", + "input_default": { + "name": "Keyword", + "title": null, + "value": null, + "secret": false, + "advanced": false, + "description": "The seed keyword to lookup", + "placeholder_values": [] + }, + "metadata": { + "position": { + "x": -4276.457053315847, + "y": -709.6902532743586 + } + }, + "input_links": [], + "output_links": [ + { + "id": "d0807d37-4159-4d1b-b887-65f1c5ffda85", + "source_id": "2a83c026-2b6f-4152-b86c-e4aef59335f3", + "sink_id": "d30c312c-4a28-4d1b-a55a-bdf354064b85", + "source_name": "result", + "sink_name": "keyword", + 
"is_static": true + } + ] + }, + { + "id": "1a51cc78-ddb1-4de7-8227-3e35e59a5f83", + "block_id": "db7d8f02-2f44-4c55-ab7a-eae0941f0c30", + "input_default": { + "format": "[\n [\n \"keyword_data.keyword_properties.keyword_difficulty\",\n \"<=\",\n {{MAX_KD | safe}}\n ],\n \"and\",\n [\n \"keyword_data.keyword_info.search_volume\",\n \">=\",\n {{MIN_VOL | safe}}\n ]\n]", + "values": {} + }, + "metadata": { + "position": { + "x": -3087.714689964885, + "y": 1051.1149586572712 + } + }, + "input_links": [ + { + "id": "29896219-a0df-42a6-9947-aecfaf55011c", + "source_id": "23a17cfd-61f3-44ad-8b14-9ffdd36316af", + "sink_id": "1a51cc78-ddb1-4de7-8227-3e35e59a5f83", + "source_name": "result", + "sink_name": "values_#_MAX_KD", + "is_static": true + }, + { + "id": "c4a2a463-49dd-4984-baac-170421525c2d", + "source_id": "f71cd722-4873-4073-a5e4-a9ab3a61b0cc", + "sink_id": "1a51cc78-ddb1-4de7-8227-3e35e59a5f83", + "source_name": "result", + "sink_name": "values_#_MIN_VOL", + "is_static": true + } + ], + "output_links": [ + { + "id": "f6f0a8dc-6097-464a-808a-dc7ca69be5cb", + "source_id": "1a51cc78-ddb1-4de7-8227-3e35e59a5f83", + "sink_id": "778c43c0-a2e2-4181-ad22-296fc59ec949", + "source_name": "output", + "sink_name": "value", + "is_static": false + } + ] + }, + { + "id": "f71cd722-4873-4073-a5e4-a9ab3a61b0cc", + "block_id": "96dae2bb-97a2-41c2-bd2f-13a3b5a8ea98", + "input_default": { + "name": "Minimum Volume", + "title": null, + "value": 0, + "secret": false, + "advanced": false, + "description": null, + "placeholder_values": [] + }, + "metadata": { + "position": { + "x": -4246.868272179172, + "y": 1760.207949089364 + } + }, + "input_links": [], + "output_links": [ + { + "id": "62fe4ed9-9d11-47d6-bbc9-46f6fffe5995", + "source_id": "f71cd722-4873-4073-a5e4-a9ab3a61b0cc", + "sink_id": "c9f70151-fc39-48b6-8a38-085549c91cd7", + "source_name": "result", + "sink_name": "values_#_min_sv", + "is_static": true + }, + { + "id": "c4a2a463-49dd-4984-baac-170421525c2d", + "source_id": "f71cd722-4873-4073-a5e4-a9ab3a61b0cc", + "sink_id": "1a51cc78-ddb1-4de7-8227-3e35e59a5f83", + "source_name": "result", + "sink_name": "values_#_MIN_VOL", + "is_static": true + } + ] + }, + { + "id": "23a17cfd-61f3-44ad-8b14-9ffdd36316af", + "block_id": "96dae2bb-97a2-41c2-bd2f-13a3b5a8ea98", + "input_default": { + "name": "Max Keyword Difficulty", + "title": null, + "value": 100, + "secret": false, + "advanced": false, + "description": null, + "placeholder_values": [] + }, + "metadata": { + "position": { + "x": -4273.259259514494, + "y": 432.2099875522331 + } + }, + "input_links": [], + "output_links": [ + { + "id": "29896219-a0df-42a6-9947-aecfaf55011c", + "source_id": "23a17cfd-61f3-44ad-8b14-9ffdd36316af", + "sink_id": "1a51cc78-ddb1-4de7-8227-3e35e59a5f83", + "source_name": "result", + "sink_name": "values_#_MAX_KD", + "is_static": true + }, + { + "id": "601ede07-b921-4ae5-8c78-d58d2524416d", + "source_id": "23a17cfd-61f3-44ad-8b14-9ffdd36316af", + "sink_id": "c9f70151-fc39-48b6-8a38-085549c91cd7", + "source_name": "result", + "sink_name": "values_#_max_kd", + "is_static": true + } + ] + }, + { + "id": "bea42c1d-ed2b-4915-b1f3-5630de69e2b8", + "block_id": "1ff065e9-88e8-4358-9d82-8dc91f622ba9", + "input_default": { + "data": "No keywords found. Try using broader filters or a different keyword." 
+ }, + "metadata": { + "position": { + "x": 1054.5895743019075, + "y": 1846.0784770074652 + }, + "customized_name": "Return Stored Error Text" + }, + "input_links": [ + { + "id": "fded76a6-ae6c-425d-b214-531a3d1d4e59", + "source_id": "5c20ed44-6e12-4c03-9ecb-04f563935eb5", + "sink_id": "bea42c1d-ed2b-4915-b1f3-5630de69e2b8", + "source_name": "yes_output", + "sink_name": "input", + "is_static": false + } + ], + "output_links": [ + { + "id": "f2e021ce-1f95-42e4-a87a-6d071633679c", + "source_id": "bea42c1d-ed2b-4915-b1f3-5630de69e2b8", + "sink_id": "e56ce5ef-0c66-45ba-aacf-7688f790c979", + "source_name": "output", + "sink_name": "value", + "is_static": true + } + ] + }, + { + "id": "9137b39a-0c2c-4732-b849-ed8890971891", + "block_id": "363ae599-353e-4804-937e-b2ee3cef3da4", + "input_default": { + "name": "Keyword Data", + "title": null, + "value": null, + "format": "", + "secret": false, + "advanced": false, + "description": null + }, + "metadata": { + "position": { + "x": 2272.668218317573, + "y": 557.0925384278696 + }, + "customized_name": "Keyword Output" + }, + "input_links": [ + { + "id": "29bfd992-fbe6-4d66-9ba1-c524bfb23395", + "source_id": "f9b686b8-e1a3-450f-ba73-2c379fe56154", + "sink_id": "9137b39a-0c2c-4732-b849-ed8890971891", + "source_name": "stdout_logs", + "sink_name": "value", + "is_static": false + } + ], + "output_links": [] + }, + { + "id": "2dfa1598-ea11-4d27-ba6d-18bdb83cdaf5", + "block_id": "363ae599-353e-4804-937e-b2ee3cef3da4", + "input_default": { + "name": "Error", + "title": null, + "value": null, + "format": "", + "secret": false, + "advanced": false, + "description": null + }, + "metadata": { + "position": { + "x": -1693.0162804471381, + "y": 5489.841850218033 + } + }, + "input_links": [ + { + "id": "b855139c-d92c-4384-ad3c-712a2530d7ef", + "source_id": "f9b686b8-e1a3-450f-ba73-2c379fe56154", + "sink_id": "2dfa1598-ea11-4d27-ba6d-18bdb83cdaf5", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "561aabd7-1e74-4213-9af3-3fc4f7dfc530", + "source_id": "778c43c0-a2e2-4181-ad22-296fc59ec949", + "sink_id": "2dfa1598-ea11-4d27-ba6d-18bdb83cdaf5", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "8195a0d0-f8b1-4562-ad55-ebe0b57e86bb", + "source_id": "f9b686b8-e1a3-450f-ba73-2c379fe56154", + "sink_id": "2dfa1598-ea11-4d27-ba6d-18bdb83cdaf5", + "source_name": "stderr_logs", + "sink_name": "value", + "is_static": false + }, + { + "id": "6de4a61e-2226-49d0-b623-e75362759023", + "source_id": "622abbc3-2e79-4c32-b70d-31b9de4bb976", + "sink_id": "2dfa1598-ea11-4d27-ba6d-18bdb83cdaf5", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "21c45a45-7287-4c09-b868-c2ac133fcc3c", + "source_id": "d30c312c-4a28-4d1b-a55a-bdf354064b85", + "sink_id": "2dfa1598-ea11-4d27-ba6d-18bdb83cdaf5", + "source_name": "error", + "sink_name": "value", + "is_static": false + } + ], + "output_links": [] + }, + { + "id": "d30c312c-4a28-4d1b-a55a-bdf354064b85", + "block_id": "8f2e4d6a-1b3c-4a5e-9d7f-2c8e6a4b3f1d", + "input_default": { + "depth": 3, + "limit": 100, + "language_code": "en", + "location_code": 2840, + "include_serp_info": false, + "include_seed_keyword": true, + "include_clickstream_data": false + }, + "metadata": { + "position": { + "x": -963.1511852421794, + "y": 543.9685971066308 + } + }, + "input_links": [ + { + "id": "d0807d37-4159-4d1b-b887-65f1c5ffda85", + "source_id": "2a83c026-2b6f-4152-b86c-e4aef59335f3", + "sink_id": "d30c312c-4a28-4d1b-a55a-bdf354064b85", + 
"source_name": "result", + "sink_name": "keyword", + "is_static": true + } + ], + "output_links": [ + { + "id": "31bb49ac-8ef9-4518-9968-9b3bcaee9cb7", + "source_id": "d30c312c-4a28-4d1b-a55a-bdf354064b85", + "sink_id": "622abbc3-2e79-4c32-b70d-31b9de4bb976", + "source_name": "related_keywords", + "sink_name": "value", + "is_static": false + }, + { + "id": "25cb3e2c-7a73-4fe9-92b6-1fe90920f89b", + "source_id": "d30c312c-4a28-4d1b-a55a-bdf354064b85", + "sink_id": "5c20ed44-6e12-4c03-9ecb-04f563935eb5", + "source_name": "total_count", + "sink_name": "value1", + "is_static": false + }, + { + "id": "21c45a45-7287-4c09-b868-c2ac133fcc3c", + "source_id": "d30c312c-4a28-4d1b-a55a-bdf354064b85", + "sink_id": "2dfa1598-ea11-4d27-ba6d-18bdb83cdaf5", + "source_name": "error", + "sink_name": "value", + "is_static": false + } + ] + }, + { + "id": "622abbc3-2e79-4c32-b70d-31b9de4bb976", + "block_id": "95d1b990-ce13-4d88-9737-ba5c2070c97b", + "input_default": { + "type": "string" + }, + "metadata": { + "position": { + "x": -368.88245461178354, + "y": 547.6687175163374 + } + }, + "input_links": [ + { + "id": "31bb49ac-8ef9-4518-9968-9b3bcaee9cb7", + "source_id": "d30c312c-4a28-4d1b-a55a-bdf354064b85", + "sink_id": "622abbc3-2e79-4c32-b70d-31b9de4bb976", + "source_name": "related_keywords", + "sink_name": "value", + "is_static": false + } + ], + "output_links": [ + { + "id": "a0c01e8e-50fb-4c1c-9ed4-825c3b40c33c", + "source_id": "622abbc3-2e79-4c32-b70d-31b9de4bb976", + "sink_id": "5c20ed44-6e12-4c03-9ecb-04f563935eb5", + "source_name": "value", + "sink_name": "no_value", + "is_static": false + }, + { + "id": "6de4a61e-2226-49d0-b623-e75362759023", + "source_id": "622abbc3-2e79-4c32-b70d-31b9de4bb976", + "sink_id": "2dfa1598-ea11-4d27-ba6d-18bdb83cdaf5", + "source_name": "error", + "sink_name": "value", + "is_static": false + } + ] + }, + { + "id": "f9b686b8-e1a3-450f-ba73-2c379fe56154", + "block_id": "0b02b072-abe7-11ef-8372-fb5d162dd712", + "input_default": { + "code": "", + "timeout": 300, + "language": "python", + "template_id": "", + "setup_commands": [] + }, + "metadata": { + "position": { + "x": 1668.6920394987753, + "y": 559.6813869648646 + } + }, + "input_links": [ + { + "id": "6dda07ba-925d-4a8c-9e98-e60b5874cc35", + "source_id": "c9f70151-fc39-48b6-8a38-085549c91cd7", + "sink_id": "f9b686b8-e1a3-450f-ba73-2c379fe56154", + "source_name": "output", + "sink_name": "code", + "is_static": false + } + ], + "output_links": [ + { + "id": "b855139c-d92c-4384-ad3c-712a2530d7ef", + "source_id": "f9b686b8-e1a3-450f-ba73-2c379fe56154", + "sink_id": "2dfa1598-ea11-4d27-ba6d-18bdb83cdaf5", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "8195a0d0-f8b1-4562-ad55-ebe0b57e86bb", + "source_id": "f9b686b8-e1a3-450f-ba73-2c379fe56154", + "sink_id": "2dfa1598-ea11-4d27-ba6d-18bdb83cdaf5", + "source_name": "stderr_logs", + "sink_name": "value", + "is_static": false + }, + { + "id": "29bfd992-fbe6-4d66-9ba1-c524bfb23395", + "source_id": "f9b686b8-e1a3-450f-ba73-2c379fe56154", + "sink_id": "9137b39a-0c2c-4732-b849-ed8890971891", + "source_name": "stdout_logs", + "sink_name": "value", + "is_static": false + } + ] + }, + { + "id": "c9f70151-fc39-48b6-8a38-085549c91cd7", + "block_id": "db7d8f02-2f44-4c55-ab7a-eae0941f0c30", + "input_default": { + "format": "import json\n\ndef parse_loose_json(raw: str):\n raw = raw.strip()\n if not raw.startswith('['):\n raw = '[' + raw.rstrip(', \\n') + ']'\n return json.loads(raw)\n\ndef filter_keywords(items, min_volume: int, max_difficulty: 
int):\n filtered = [\n row for row in items\n if isinstance(row.get('search_volume'), (int, float))\n and isinstance(row.get('keyword_difficulty'), (int, float))\n and row['search_volume'] >= min_volume\n and row['keyword_difficulty'] <= max_difficulty\n ]\n filtered.sort(key=lambda r: (-r['search_volume'], r['keyword_difficulty']))\n return filtered\n\ndef _fmt(v):\n if v is None:\n return ''\n if isinstance(v, float):\n s = ('%.6f' % v).rstrip('0').rstrip('.')\n return s\n return str(v)\n\ndef to_markdown(rows, columns):\n def esc(x: str) -> str:\n return x.replace('|', r'\\|').replace('\\n', ' ').replace('\\r', ' ')\n header = '| ' + ' | '.join(columns) + ' |'\n sep = '| ' + ' | '.join('---' for _ in columns) + ' |'\n out = [header, sep]\n for r in rows:\n out.append('| ' + ' | '.join(esc(_fmt(r.get(c, ''))) for c in columns) + ' |')\n return '\\n'.join(out)\n\n# --- run ---\nraw = \"\"\"{{data | safe}}\"\"\"\ndata = parse_loose_json(raw)\nrecords = filter_keywords(data, min_volume={{min_sv | safe}}, max_difficulty={{max_kd | safe}})\n\n# Print Markdown Table\ncolumns = ['keyword','search_volume','keyword_difficulty','cpc','competition','serp_info','clickstream_data']\nprint(to_markdown(records, columns))\n", + "values": {} + }, + "metadata": { + "position": { + "x": 1093.6145350526099, + "y": 560.7299035903276 + } + }, + "input_links": [ + { + "id": "601ede07-b921-4ae5-8c78-d58d2524416d", + "source_id": "23a17cfd-61f3-44ad-8b14-9ffdd36316af", + "sink_id": "c9f70151-fc39-48b6-8a38-085549c91cd7", + "source_name": "result", + "sink_name": "values_#_max_kd", + "is_static": true + }, + { + "id": "62fe4ed9-9d11-47d6-bbc9-46f6fffe5995", + "source_id": "f71cd722-4873-4073-a5e4-a9ab3a61b0cc", + "sink_id": "c9f70151-fc39-48b6-8a38-085549c91cd7", + "source_name": "result", + "sink_name": "values_#_min_sv", + "is_static": true + }, + { + "id": "843e0fbb-c624-41b5-b248-a0fc00a0799d", + "source_id": "5c20ed44-6e12-4c03-9ecb-04f563935eb5", + "sink_id": "c9f70151-fc39-48b6-8a38-085549c91cd7", + "source_name": "no_output", + "sink_name": "values_#_data", + "is_static": false + } + ], + "output_links": [ + { + "id": "6dda07ba-925d-4a8c-9e98-e60b5874cc35", + "source_id": "c9f70151-fc39-48b6-8a38-085549c91cd7", + "sink_id": "f9b686b8-e1a3-450f-ba73-2c379fe56154", + "source_name": "output", + "sink_name": "code", + "is_static": false + } + ] + } + ], + "links": [ + { + "id": "62fe4ed9-9d11-47d6-bbc9-46f6fffe5995", + "source_id": "f71cd722-4873-4073-a5e4-a9ab3a61b0cc", + "sink_id": "c9f70151-fc39-48b6-8a38-085549c91cd7", + "source_name": "result", + "sink_name": "values_#_min_sv", + "is_static": true + }, + { + "id": "6dda07ba-925d-4a8c-9e98-e60b5874cc35", + "source_id": "c9f70151-fc39-48b6-8a38-085549c91cd7", + "sink_id": "f9b686b8-e1a3-450f-ba73-2c379fe56154", + "source_name": "output", + "sink_name": "code", + "is_static": false + }, + { + "id": "25cb3e2c-7a73-4fe9-92b6-1fe90920f89b", + "source_id": "d30c312c-4a28-4d1b-a55a-bdf354064b85", + "sink_id": "5c20ed44-6e12-4c03-9ecb-04f563935eb5", + "source_name": "total_count", + "sink_name": "value1", + "is_static": false + }, + { + "id": "f2e021ce-1f95-42e4-a87a-6d071633679c", + "source_id": "bea42c1d-ed2b-4915-b1f3-5630de69e2b8", + "sink_id": "e56ce5ef-0c66-45ba-aacf-7688f790c979", + "source_name": "output", + "sink_name": "value", + "is_static": true + }, + { + "id": "561aabd7-1e74-4213-9af3-3fc4f7dfc530", + "source_id": "778c43c0-a2e2-4181-ad22-296fc59ec949", + "sink_id": "2dfa1598-ea11-4d27-ba6d-18bdb83cdaf5", + "source_name": "error", + "sink_name": 
"value", + "is_static": false + }, + { + "id": "601ede07-b921-4ae5-8c78-d58d2524416d", + "source_id": "23a17cfd-61f3-44ad-8b14-9ffdd36316af", + "sink_id": "c9f70151-fc39-48b6-8a38-085549c91cd7", + "source_name": "result", + "sink_name": "values_#_max_kd", + "is_static": true + }, + { + "id": "6de4a61e-2226-49d0-b623-e75362759023", + "source_id": "622abbc3-2e79-4c32-b70d-31b9de4bb976", + "sink_id": "2dfa1598-ea11-4d27-ba6d-18bdb83cdaf5", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "f6f0a8dc-6097-464a-808a-dc7ca69be5cb", + "source_id": "1a51cc78-ddb1-4de7-8227-3e35e59a5f83", + "sink_id": "778c43c0-a2e2-4181-ad22-296fc59ec949", + "source_name": "output", + "sink_name": "value", + "is_static": false + }, + { + "id": "31bb49ac-8ef9-4518-9968-9b3bcaee9cb7", + "source_id": "d30c312c-4a28-4d1b-a55a-bdf354064b85", + "sink_id": "622abbc3-2e79-4c32-b70d-31b9de4bb976", + "source_name": "related_keywords", + "sink_name": "value", + "is_static": false + }, + { + "id": "843e0fbb-c624-41b5-b248-a0fc00a0799d", + "source_id": "5c20ed44-6e12-4c03-9ecb-04f563935eb5", + "sink_id": "c9f70151-fc39-48b6-8a38-085549c91cd7", + "source_name": "no_output", + "sink_name": "values_#_data", + "is_static": false + }, + { + "id": "b855139c-d92c-4384-ad3c-712a2530d7ef", + "source_id": "f9b686b8-e1a3-450f-ba73-2c379fe56154", + "sink_id": "2dfa1598-ea11-4d27-ba6d-18bdb83cdaf5", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "29896219-a0df-42a6-9947-aecfaf55011c", + "source_id": "23a17cfd-61f3-44ad-8b14-9ffdd36316af", + "sink_id": "1a51cc78-ddb1-4de7-8227-3e35e59a5f83", + "source_name": "result", + "sink_name": "values_#_MAX_KD", + "is_static": true + }, + { + "id": "8195a0d0-f8b1-4562-ad55-ebe0b57e86bb", + "source_id": "f9b686b8-e1a3-450f-ba73-2c379fe56154", + "sink_id": "2dfa1598-ea11-4d27-ba6d-18bdb83cdaf5", + "source_name": "stderr_logs", + "sink_name": "value", + "is_static": false + }, + { + "id": "21c45a45-7287-4c09-b868-c2ac133fcc3c", + "source_id": "d30c312c-4a28-4d1b-a55a-bdf354064b85", + "sink_id": "2dfa1598-ea11-4d27-ba6d-18bdb83cdaf5", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "29bfd992-fbe6-4d66-9ba1-c524bfb23395", + "source_id": "f9b686b8-e1a3-450f-ba73-2c379fe56154", + "sink_id": "9137b39a-0c2c-4732-b849-ed8890971891", + "source_name": "stdout_logs", + "sink_name": "value", + "is_static": false + }, + { + "id": "c4a2a463-49dd-4984-baac-170421525c2d", + "source_id": "f71cd722-4873-4073-a5e4-a9ab3a61b0cc", + "sink_id": "1a51cc78-ddb1-4de7-8227-3e35e59a5f83", + "source_name": "result", + "sink_name": "values_#_MIN_VOL", + "is_static": true + }, + { + "id": "d0807d37-4159-4d1b-b887-65f1c5ffda85", + "source_id": "2a83c026-2b6f-4152-b86c-e4aef59335f3", + "sink_id": "d30c312c-4a28-4d1b-a55a-bdf354064b85", + "source_name": "result", + "sink_name": "keyword", + "is_static": true + }, + { + "id": "fded76a6-ae6c-425d-b214-531a3d1d4e59", + "source_id": "5c20ed44-6e12-4c03-9ecb-04f563935eb5", + "sink_id": "bea42c1d-ed2b-4915-b1f3-5630de69e2b8", + "source_name": "yes_output", + "sink_name": "input", + "is_static": false + }, + { + "id": "a0c01e8e-50fb-4c1c-9ed4-825c3b40c33c", + "source_id": "622abbc3-2e79-4c32-b70d-31b9de4bb976", + "sink_id": "5c20ed44-6e12-4c03-9ecb-04f563935eb5", + "source_name": "value", + "sink_name": "no_value", + "is_static": false + } + ], + "forked_from_id": null, + "forked_from_version": null, + "input_schema": { + "type": "object", + "properties": { + "Keyword": { + 
"advanced": false, + "anyOf": [ + { + "format": "short-text", + "type": "string" + }, + { + "type": "null" + } + ], + "secret": false, + "title": "Keyword", + "description": "The seed keyword to lookup" + }, + "Minimum Volume": { + "advanced": false, + "anyOf": [ + { + "type": "integer" + }, + { + "type": "null" + } + ], + "secret": false, + "title": "Minimum Volume", + "default": 0 + }, + "Max Keyword Difficulty": { + "advanced": false, + "anyOf": [ + { + "type": "integer" + }, + { + "type": "null" + } + ], + "secret": false, + "title": "Max Keyword Difficulty", + "default": 100 + } + }, + "required": [ + "Keyword" + ] + }, + "output_schema": { + "type": "object", + "properties": { + "No Keywords Found": { + "advanced": false, + "secret": false, + "title": "No Keywords Found" + }, + "Keyword Data": { + "advanced": false, + "secret": false, + "title": "Keyword Data" + }, + "Error": { + "advanced": false, + "secret": false, + "title": "Error" + } + }, + "required": [ + "No Keywords Found", + "Keyword Data", + "Error" + ] + }, + "has_external_trigger": false, + "has_human_in_the_loop": false, + "trigger_setup_info": null + }, + { + "id": "2092cdc6-23ec-4d9a-b86b-4550d1ad4c36", + "version": 35, + "is_active": true, + "name": "Get Autocomplete Keyword Suggestions", + "description": "Keyword Suggestions that match the specified seed keyword with additional words before, after, or within the seed key phrase.", + "instructions": null, + "recommended_schedule_cron": null, + "nodes": [ + { + "id": "d4e0be80-95d9-43e5-b469-ece4e80df665", + "block_id": "cc10ff7b-7753-4ff2-9af6-9399b1a7eddc", + "input_default": { + "text": "= Keyword Selection =" + }, + "metadata": { + "position": { + "x": 840.6286051237548, + "y": 1680.1422247691685 + } + }, + "input_links": [], + "output_links": [] + }, + { + "id": "6ba3fdca-0c73-4e13-ab76-03e6dcdb979b", + "block_id": "715696a0-e1da-45c8-b209-c2fa9c3b0be6", + "input_default": { + "value2": "1", + "no_value": null, + "operator": "<", + "yes_value": null + }, + "metadata": { + "position": { + "x": 1720.2372837594887, + "y": 2163.078675814242 + } + }, + "input_links": [ + { + "id": "f974eb66-a394-4859-a83f-5b4e9ee3f2ad", + "source_id": "473ff4a2-8683-4114-968f-3c7c4368cfa5", + "sink_id": "6ba3fdca-0c73-4e13-ab76-03e6dcdb979b", + "source_name": "suggestions", + "sink_name": "no_value", + "is_static": false + }, + { + "id": "7bb2e0fc-2048-4449-83e1-b5869e828460", + "source_id": "473ff4a2-8683-4114-968f-3c7c4368cfa5", + "sink_id": "6ba3fdca-0c73-4e13-ab76-03e6dcdb979b", + "source_name": "total_count", + "sink_name": "value1", + "is_static": false + } + ], + "output_links": [ + { + "id": "01bc10f6-a73d-42b2-a01c-d79335c62dd3", + "source_id": "6ba3fdca-0c73-4e13-ab76-03e6dcdb979b", + "sink_id": "66c9c470-236c-4ed4-9a85-d3cfcd8c690d", + "source_name": "yes_output", + "sink_name": "input", + "is_static": false + }, + { + "id": "3dc26eaa-e0d2-4bde-86fa-22e7a8bbe4c8", + "source_id": "6ba3fdca-0c73-4e13-ab76-03e6dcdb979b", + "sink_id": "763702a4-65fa-4a07-a639-14a8042820a1", + "source_name": "no_output", + "sink_name": "value", + "is_static": false + } + ] + }, + { + "id": "651f3600-5bf5-41f9-8089-2b89092377ee", + "block_id": "cc10ff7b-7753-4ff2-9af6-9399b1a7eddc", + "input_default": { + "text": "Check if no keywords were found matching KD and Volume requirements" + }, + "metadata": { + "position": { + "x": 1829.1907186133253, + "y": 1672.384213626709 + } + }, + "input_links": [], + "output_links": [] + }, + { + "id": "07a23921-f966-4a3e-b514-7e015e98638a", + "block_id": 
"7fcd3bcb-8e1b-4e69-903d-32d3d4a92158", + "input_default": { + "name": "Keyword", + "title": null, + "value": null, + "secret": false, + "advanced": false, + "description": "The seed keyword to lookup", + "placeholder_values": [] + }, + "metadata": { + "position": { + "x": -732.4540122056117, + "y": 1363.9109711466199 + }, + "customized_name": "Keyword Input" + }, + "input_links": [], + "output_links": [ + { + "id": "266c742c-0089-43dc-b642-6b497580c886", + "source_id": "07a23921-f966-4a3e-b514-7e015e98638a", + "sink_id": "473ff4a2-8683-4114-968f-3c7c4368cfa5", + "source_name": "result", + "sink_name": "keyword", + "is_static": true + } + ] + }, + { + "id": "df083135-7bd5-4454-a914-b2347918ea86", + "block_id": "96dae2bb-97a2-41c2-bd2f-13a3b5a8ea98", + "input_default": { + "name": "Minimum Volume", + "title": null, + "value": 0, + "secret": false, + "advanced": false, + "description": "Positive whole number, e.g 1000", + "placeholder_values": [] + }, + "metadata": { + "position": { + "x": -727.7519034098402, + "y": 3874.28177329114 + }, + "customized_name": "Min SV Input" + }, + "input_links": [], + "output_links": [ + { + "id": "eaa8a11a-e30f-4682-83e8-90ba5d5ceabe", + "source_id": "df083135-7bd5-4454-a914-b2347918ea86", + "sink_id": "f7045b4b-cda6-452f-abb1-c520e6592d37", + "source_name": "result", + "sink_name": "values_#_min_sv", + "is_static": true + } + ] + }, + { + "id": "de25381e-bd13-4888-966c-7202e6062214", + "block_id": "96dae2bb-97a2-41c2-bd2f-13a3b5a8ea98", + "input_default": { + "name": "Max Keyword Difficulty", + "title": null, + "value": 100, + "secret": false, + "advanced": false, + "description": "Positive whole number, range from 0 to 100", + "placeholder_values": [] + }, + "metadata": { + "position": { + "x": -730.5947010919119, + "y": 2590.032416129025 + }, + "customized_name": "Max KD Input" + }, + "input_links": [], + "output_links": [ + { + "id": "70a1308e-12d9-4f18-a917-1598fdd4251b", + "source_id": "de25381e-bd13-4888-966c-7202e6062214", + "sink_id": "f7045b4b-cda6-452f-abb1-c520e6592d37", + "source_name": "result", + "sink_name": "values_#_max_kd", + "is_static": true + } + ] + }, + { + "id": "f0cc1f10-0a8c-4193-bffa-9a12e464102a", + "block_id": "363ae599-353e-4804-937e-b2ee3cef3da4", + "input_default": { + "name": "Keyword Data", + "title": null, + "value": null, + "format": "", + "secret": false, + "advanced": false, + "description": null + }, + "metadata": { + "position": { + "x": 4633.951805446384, + "y": 2618.278472627514 + } + }, + "input_links": [ + { + "id": "54d6eae1-a230-44dc-8966-99d458d00188", + "source_id": "32fd2417-d1b0-4c57-8265-0cba26f0b37c", + "sink_id": "f0cc1f10-0a8c-4193-bffa-9a12e464102a", + "source_name": "stdout_logs", + "sink_name": "value", + "is_static": false + } + ], + "output_links": [] + }, + { + "id": "7ccad556-6933-4c85-bce6-2b7623b6f749", + "block_id": "363ae599-353e-4804-937e-b2ee3cef3da4", + "input_default": { + "name": "No Keywords Found", + "title": null, + "value": null, + "format": "", + "secret": false, + "advanced": false, + "description": null + }, + "metadata": { + "position": { + "x": 3238.4091560687907, + "y": 4086.0327887130948 + } + }, + "input_links": [ + { + "id": "fc63c957-2ed8-42c9-ac52-fa72212aa250", + "source_id": "66c9c470-236c-4ed4-9a85-d3cfcd8c690d", + "sink_id": "7ccad556-6933-4c85-bce6-2b7623b6f749", + "source_name": "output", + "sink_name": "value", + "is_static": true + } + ], + "output_links": [] + }, + { + "id": "66c9c470-236c-4ed4-9a85-d3cfcd8c690d", + "block_id": 
"1ff065e9-88e8-4358-9d82-8dc91f622ba9", + "input_default": { + "data": "No keywords found. Try using broader filters or a different keyword." + }, + "metadata": { + "position": { + "x": 2543.9397489616244, + "y": 4112.758322808854 + } + }, + "input_links": [ + { + "id": "01bc10f6-a73d-42b2-a01c-d79335c62dd3", + "source_id": "6ba3fdca-0c73-4e13-ab76-03e6dcdb979b", + "sink_id": "66c9c470-236c-4ed4-9a85-d3cfcd8c690d", + "source_name": "yes_output", + "sink_name": "input", + "is_static": false + } + ], + "output_links": [ + { + "id": "fc63c957-2ed8-42c9-ac52-fa72212aa250", + "source_id": "66c9c470-236c-4ed4-9a85-d3cfcd8c690d", + "sink_id": "7ccad556-6933-4c85-bce6-2b7623b6f749", + "source_name": "output", + "sink_name": "value", + "is_static": true + } + ] + }, + { + "id": "473ff4a2-8683-4114-968f-3c7c4368cfa5", + "block_id": "73c3e7c4-2b3f-4e9f-9e3e-8f7a5c3e2d45", + "input_default": { + "limit": 100, + "language_code": "en", + "location_code": 2840, + "include_serp_info": false, + "include_seed_keyword": true, + "include_clickstream_data": false + }, + "metadata": { + "position": { + "x": 707.3895326817377, + "y": 2151.883264545683 + } + }, + "input_links": [ + { + "id": "266c742c-0089-43dc-b642-6b497580c886", + "source_id": "07a23921-f966-4a3e-b514-7e015e98638a", + "sink_id": "473ff4a2-8683-4114-968f-3c7c4368cfa5", + "source_name": "result", + "sink_name": "keyword", + "is_static": true + } + ], + "output_links": [ + { + "id": "f974eb66-a394-4859-a83f-5b4e9ee3f2ad", + "source_id": "473ff4a2-8683-4114-968f-3c7c4368cfa5", + "sink_id": "6ba3fdca-0c73-4e13-ab76-03e6dcdb979b", + "source_name": "suggestions", + "sink_name": "no_value", + "is_static": false + }, + { + "id": "7bb2e0fc-2048-4449-83e1-b5869e828460", + "source_id": "473ff4a2-8683-4114-968f-3c7c4368cfa5", + "sink_id": "6ba3fdca-0c73-4e13-ab76-03e6dcdb979b", + "source_name": "total_count", + "sink_name": "value1", + "is_static": false + }, + { + "id": "c9ec105e-99ab-423a-b641-521112d71db5", + "source_id": "473ff4a2-8683-4114-968f-3c7c4368cfa5", + "sink_id": "28c53d32-5901-41e4-9154-4973ae054581", + "source_name": "error", + "sink_name": "value", + "is_static": false + } + ] + }, + { + "id": "32fd2417-d1b0-4c57-8265-0cba26f0b37c", + "block_id": "0b02b072-abe7-11ef-8372-fb5d162dd712", + "input_default": { + "code": "", + "timeout": 300, + "language": "python", + "template_id": "", + "setup_commands": [] + }, + "metadata": { + "position": { + "x": 4009.2040452599676, + "y": 2620.3038028366636 + } + }, + "input_links": [ + { + "id": "1baa33d3-32c3-4e61-901b-abb0e9425afa", + "source_id": "f7045b4b-cda6-452f-abb1-c520e6592d37", + "sink_id": "32fd2417-d1b0-4c57-8265-0cba26f0b37c", + "source_name": "output", + "sink_name": "code", + "is_static": false + } + ], + "output_links": [ + { + "id": "8de1b1e0-56af-4394-9be6-e112dd9aac8d", + "source_id": "32fd2417-d1b0-4c57-8265-0cba26f0b37c", + "sink_id": "28c53d32-5901-41e4-9154-4973ae054581", + "source_name": "stderr_logs", + "sink_name": "value", + "is_static": false + }, + { + "id": "54d6eae1-a230-44dc-8966-99d458d00188", + "source_id": "32fd2417-d1b0-4c57-8265-0cba26f0b37c", + "sink_id": "f0cc1f10-0a8c-4193-bffa-9a12e464102a", + "source_name": "stdout_logs", + "sink_name": "value", + "is_static": false + }, + { + "id": "30528193-aede-49fa-a058-4c746b9e5a73", + "source_id": "32fd2417-d1b0-4c57-8265-0cba26f0b37c", + "sink_id": "28c53d32-5901-41e4-9154-4973ae054581", + "source_name": "error", + "sink_name": "value", + "is_static": false + } + ] + }, + { + "id": "f7045b4b-cda6-452f-abb1-c520e6592d37", 
+ "block_id": "db7d8f02-2f44-4c55-ab7a-eae0941f0c30", + "input_default": { + "format": "import json\n\ndef parse_loose_json(raw: str):\n raw = raw.strip()\n if not raw.startswith('['):\n raw = '[' + raw.rstrip(', \\n') + ']'\n return json.loads(raw)\n\ndef filter_keywords(items, min_volume: int, max_difficulty: int):\n filtered = [\n row for row in items\n if isinstance(row.get('search_volume'), (int, float))\n and isinstance(row.get('keyword_difficulty'), (int, float))\n and row['search_volume'] >= min_volume\n and row['keyword_difficulty'] <= max_difficulty\n ]\n filtered.sort(key=lambda r: (-r['search_volume'], r['keyword_difficulty']))\n return filtered\n\ndef _fmt(v):\n if v is None:\n return ''\n if isinstance(v, float):\n s = ('%.6f' % v).rstrip('0').rstrip('.')\n return s\n return str(v)\n\ndef to_markdown(rows, columns):\n def esc(x: str) -> str:\n return x.replace('|', r'\\|').replace('\\n', ' ').replace('\\r', ' ')\n header = '| ' + ' | '.join(columns) + ' |'\n sep = '| ' + ' | '.join('---' for _ in columns) + ' |'\n out = [header, sep]\n for r in rows:\n out.append('| ' + ' | '.join(esc(_fmt(r.get(c, ''))) for c in columns) + ' |')\n return '\\n'.join(out)\n\n# --- run ---\nraw = \"\"\"{{data | safe}}\"\"\"\ndata = parse_loose_json(raw)\nrecords = filter_keywords(data, min_volume={{min_sv | safe}}, max_difficulty={{max_kd | safe}})\n\n# Print Markdown Table\ncolumns = ['keyword','search_volume','keyword_difficulty','cpc','competition','serp_info','clickstream_data']\nprint(to_markdown(records, columns))\n", + "values": {} + }, + "metadata": { + "position": { + "x": 3434.126540813802, + "y": 2621.3523194621266 + } + }, + "input_links": [ + { + "id": "70dec687-6ab2-4ed9-b8f3-fe105647b973", + "source_id": "763702a4-65fa-4a07-a639-14a8042820a1", + "sink_id": "f7045b4b-cda6-452f-abb1-c520e6592d37", + "source_name": "value", + "sink_name": "values_#_data", + "is_static": false + }, + { + "id": "eaa8a11a-e30f-4682-83e8-90ba5d5ceabe", + "source_id": "df083135-7bd5-4454-a914-b2347918ea86", + "sink_id": "f7045b4b-cda6-452f-abb1-c520e6592d37", + "source_name": "result", + "sink_name": "values_#_min_sv", + "is_static": true + }, + { + "id": "70a1308e-12d9-4f18-a917-1598fdd4251b", + "source_id": "de25381e-bd13-4888-966c-7202e6062214", + "sink_id": "f7045b4b-cda6-452f-abb1-c520e6592d37", + "source_name": "result", + "sink_name": "values_#_max_kd", + "is_static": true + } + ], + "output_links": [ + { + "id": "1baa33d3-32c3-4e61-901b-abb0e9425afa", + "source_id": "f7045b4b-cda6-452f-abb1-c520e6592d37", + "sink_id": "32fd2417-d1b0-4c57-8265-0cba26f0b37c", + "source_name": "output", + "sink_name": "code", + "is_static": false + } + ] + }, + { + "id": "763702a4-65fa-4a07-a639-14a8042820a1", + "block_id": "95d1b990-ce13-4d88-9737-ba5c2070c97b", + "input_default": { + "type": "string" + }, + "metadata": { + "position": { + "x": 2864.578376287297, + "y": 2619.1367666532237 + }, + "customized_name": "Convert Keyword List to String" + }, + "input_links": [ + { + "id": "3dc26eaa-e0d2-4bde-86fa-22e7a8bbe4c8", + "source_id": "6ba3fdca-0c73-4e13-ab76-03e6dcdb979b", + "sink_id": "763702a4-65fa-4a07-a639-14a8042820a1", + "source_name": "no_output", + "sink_name": "value", + "is_static": false + } + ], + "output_links": [ + { + "id": "70dec687-6ab2-4ed9-b8f3-fe105647b973", + "source_id": "763702a4-65fa-4a07-a639-14a8042820a1", + "sink_id": "f7045b4b-cda6-452f-abb1-c520e6592d37", + "source_name": "value", + "sink_name": "values_#_data", + "is_static": false + }, + { + "id": 
"c0ff539f-d10b-4639-8305-f03549e30600", + "source_id": "763702a4-65fa-4a07-a639-14a8042820a1", + "sink_id": "28c53d32-5901-41e4-9154-4973ae054581", + "source_name": "error", + "sink_name": "value", + "is_static": false + } + ] + }, + { + "id": "28c53d32-5901-41e4-9154-4973ae054581", + "block_id": "363ae599-353e-4804-937e-b2ee3cef3da4", + "input_default": { + "name": "Error", + "title": null, + "value": null, + "format": "", + "secret": false, + "advanced": false, + "description": null + }, + "metadata": { + "position": { + "x": 2185.815586561798, + "y": 6180.142359767333 + }, + "customized_name": "Error" + }, + "input_links": [ + { + "id": "8de1b1e0-56af-4394-9be6-e112dd9aac8d", + "source_id": "32fd2417-d1b0-4c57-8265-0cba26f0b37c", + "sink_id": "28c53d32-5901-41e4-9154-4973ae054581", + "source_name": "stderr_logs", + "sink_name": "value", + "is_static": false + }, + { + "id": "c0ff539f-d10b-4639-8305-f03549e30600", + "source_id": "763702a4-65fa-4a07-a639-14a8042820a1", + "sink_id": "28c53d32-5901-41e4-9154-4973ae054581", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "30528193-aede-49fa-a058-4c746b9e5a73", + "source_id": "32fd2417-d1b0-4c57-8265-0cba26f0b37c", + "sink_id": "28c53d32-5901-41e4-9154-4973ae054581", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "c9ec105e-99ab-423a-b641-521112d71db5", + "source_id": "473ff4a2-8683-4114-968f-3c7c4368cfa5", + "sink_id": "28c53d32-5901-41e4-9154-4973ae054581", + "source_name": "error", + "sink_name": "value", + "is_static": false + } + ], + "output_links": [] + } + ], + "links": [ + { + "id": "eaa8a11a-e30f-4682-83e8-90ba5d5ceabe", + "source_id": "df083135-7bd5-4454-a914-b2347918ea86", + "sink_id": "f7045b4b-cda6-452f-abb1-c520e6592d37", + "source_name": "result", + "sink_name": "values_#_min_sv", + "is_static": true + }, + { + "id": "1baa33d3-32c3-4e61-901b-abb0e9425afa", + "source_id": "f7045b4b-cda6-452f-abb1-c520e6592d37", + "sink_id": "32fd2417-d1b0-4c57-8265-0cba26f0b37c", + "source_name": "output", + "sink_name": "code", + "is_static": false + }, + { + "id": "c0ff539f-d10b-4639-8305-f03549e30600", + "source_id": "763702a4-65fa-4a07-a639-14a8042820a1", + "sink_id": "28c53d32-5901-41e4-9154-4973ae054581", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "8de1b1e0-56af-4394-9be6-e112dd9aac8d", + "source_id": "32fd2417-d1b0-4c57-8265-0cba26f0b37c", + "sink_id": "28c53d32-5901-41e4-9154-4973ae054581", + "source_name": "stderr_logs", + "sink_name": "value", + "is_static": false + }, + { + "id": "70dec687-6ab2-4ed9-b8f3-fe105647b973", + "source_id": "763702a4-65fa-4a07-a639-14a8042820a1", + "sink_id": "f7045b4b-cda6-452f-abb1-c520e6592d37", + "source_name": "value", + "sink_name": "values_#_data", + "is_static": false + }, + { + "id": "7bb2e0fc-2048-4449-83e1-b5869e828460", + "source_id": "473ff4a2-8683-4114-968f-3c7c4368cfa5", + "sink_id": "6ba3fdca-0c73-4e13-ab76-03e6dcdb979b", + "source_name": "total_count", + "sink_name": "value1", + "is_static": false + }, + { + "id": "266c742c-0089-43dc-b642-6b497580c886", + "source_id": "07a23921-f966-4a3e-b514-7e015e98638a", + "sink_id": "473ff4a2-8683-4114-968f-3c7c4368cfa5", + "source_name": "result", + "sink_name": "keyword", + "is_static": true + }, + { + "id": "fc63c957-2ed8-42c9-ac52-fa72212aa250", + "source_id": "66c9c470-236c-4ed4-9a85-d3cfcd8c690d", + "sink_id": "7ccad556-6933-4c85-bce6-2b7623b6f749", + "source_name": "output", + "sink_name": "value", + "is_static": true + }, 
+ { + "id": "01bc10f6-a73d-42b2-a01c-d79335c62dd3", + "source_id": "6ba3fdca-0c73-4e13-ab76-03e6dcdb979b", + "sink_id": "66c9c470-236c-4ed4-9a85-d3cfcd8c690d", + "source_name": "yes_output", + "sink_name": "input", + "is_static": false + }, + { + "id": "c9ec105e-99ab-423a-b641-521112d71db5", + "source_id": "473ff4a2-8683-4114-968f-3c7c4368cfa5", + "sink_id": "28c53d32-5901-41e4-9154-4973ae054581", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "f974eb66-a394-4859-a83f-5b4e9ee3f2ad", + "source_id": "473ff4a2-8683-4114-968f-3c7c4368cfa5", + "sink_id": "6ba3fdca-0c73-4e13-ab76-03e6dcdb979b", + "source_name": "suggestions", + "sink_name": "no_value", + "is_static": false + }, + { + "id": "30528193-aede-49fa-a058-4c746b9e5a73", + "source_id": "32fd2417-d1b0-4c57-8265-0cba26f0b37c", + "sink_id": "28c53d32-5901-41e4-9154-4973ae054581", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "54d6eae1-a230-44dc-8966-99d458d00188", + "source_id": "32fd2417-d1b0-4c57-8265-0cba26f0b37c", + "sink_id": "f0cc1f10-0a8c-4193-bffa-9a12e464102a", + "source_name": "stdout_logs", + "sink_name": "value", + "is_static": false + }, + { + "id": "3dc26eaa-e0d2-4bde-86fa-22e7a8bbe4c8", + "source_id": "6ba3fdca-0c73-4e13-ab76-03e6dcdb979b", + "sink_id": "763702a4-65fa-4a07-a639-14a8042820a1", + "source_name": "no_output", + "sink_name": "value", + "is_static": false + }, + { + "id": "70a1308e-12d9-4f18-a917-1598fdd4251b", + "source_id": "de25381e-bd13-4888-966c-7202e6062214", + "sink_id": "f7045b4b-cda6-452f-abb1-c520e6592d37", + "source_name": "result", + "sink_name": "values_#_max_kd", + "is_static": true + } + ], + "forked_from_id": null, + "forked_from_version": null, + "input_schema": { + "type": "object", + "properties": { + "Keyword": { + "advanced": false, + "anyOf": [ + { + "format": "short-text", + "type": "string" + }, + { + "type": "null" + } + ], + "secret": false, + "title": "Keyword", + "description": "The seed keyword to lookup" + }, + "Minimum Volume": { + "advanced": false, + "anyOf": [ + { + "type": "integer" + }, + { + "type": "null" + } + ], + "secret": false, + "title": "Minimum Volume", + "description": "Positive whole number, e.g 1000", + "default": 0 + }, + "Max Keyword Difficulty": { + "advanced": false, + "anyOf": [ + { + "type": "integer" + }, + { + "type": "null" + } + ], + "secret": false, + "title": "Max Keyword Difficulty", + "description": "Positive whole number, range from 0 to 100", + "default": 100 + } + }, + "required": [ + "Keyword" + ] + }, + "output_schema": { + "type": "object", + "properties": { + "Keyword Data": { + "advanced": false, + "secret": false, + "title": "Keyword Data" + }, + "No Keywords Found": { + "advanced": false, + "secret": false, + "title": "No Keywords Found" + }, + "Error": { + "advanced": false, + "secret": false, + "title": "Error" + } + }, + "required": [ + "Keyword Data", + "No Keywords Found", + "Error" + ] + }, + "has_external_trigger": false, + "has_human_in_the_loop": false, + "trigger_setup_info": null + } + ], + "user_id": "", + "created_at": "2025-09-30T13:29:21.415Z", + "input_schema": { + "type": "object", + "properties": { + "brand_tone": { + "advanced": false, + "anyOf": [ + { + "type": "string" + }, + { + "type": "null" + } + ], + "secret": false, + "title": "Brand Tone", + "enum": [ + "Friendly", + "Professional", + "Technical", + "Casual" + ], + "description": "The voice and style for your content - choose the tone that best matches your brand and audience" + }, + 
"target_word_count": { + "advanced": false, + "anyOf": [ + { + "type": "string" + }, + { + "type": "null" + } + ], + "secret": false, + "title": "Blog Post Length", + "enum": [ + "500", + "1000", + "1500", + "2000" + ], + "description": "Blog post length (500=quick reads, 1000=standard posts, 1500=detailed guides, 2000=comprehensive content)" + }, + "blog_url": { + "advanced": false, + "secret": false, + "title": "WordPress Blog URL", + "description": "Your WordPress.com blog URL (e.g., https://yourblog.wordpress.com) - used for posting content via API" + }, + "Website Primary Topic": { + "advanced": false, + "secret": false, + "title": "Website Primary Topic", + "description": "The main subject or theme your blog is about (e.g., 'AI Automation', 'Viral Marketing')" + }, + "Website Description": { + "advanced": false, + "secret": false, + "title": "Website Description", + "description": "Describe your website, including its purpose, tone, and approach. \n\nFor example: \"A blog that makes AI approachable for the average person. We share clear, easy-to-follow guides and curated recommendations on which models and tools to use.\"" + } + }, + "required": [ + "brand_tone", + "target_word_count", + "blog_url", + "Website Primary Topic", + "Website Description" + ] + }, + "output_schema": { + "type": "object", + "properties": { + "CONTENT_GENERATION_FAILED": { + "advanced": false, + "secret": false, + "title": "Content Generation Failed", + "description": "The AI was unable to generate the blog article content. This may occur due to LLM API issues, invalid keyword input, or content policy restrictions. The agent cannot proceed with publishing without generated content.", + "default": "CONTENT_GENERATION_FAILED" + }, + "DICTIONARY_CREATE_FAILED": { + "advanced": false, + "secret": false, + "title": "DICTIONARY_CREATE_FAILED" + }, + "BLOG_PUBLISHED_SUCCESS": { + "advanced": false, + "secret": false, + "title": "Blog Post Published Successfully", + "description": "The blog post has been successfully generated, published to WordPress, and logged to Airtable. The SEO-optimized article is now live and tracking data has been recorded." + }, + "ERROR_CONVERTING_TYPE": { + "advanced": false, + "secret": false, + "title": "ERROR_CONVERTING_TYPE" + }, + "ERROR_DEEP_RESEARCH": { + "advanced": false, + "secret": false, + "title": "ERROR_DEEP_RESEARCH" + }, + "ERROR_GENERATING_IMAGE": { + "advanced": false, + "secret": false, + "title": "ERROR_GENERATING_IMAGE" + }, + "Error Humanizing Text": { + "advanced": false, + "secret": false, + "title": "Error Humanizing Text" + }, + "Hmm\u2026 that doesn\u2019t look like a valid WordPress blog address. \nIt should look like this: yourname.wordpress.com": { + "advanced": false, + "secret": false, + "title": "Hmm\u2026 that doesn\u2019t look like a valid WordPress blog address. \nIt should look like this: yourname.wordpress.com" + }, + "Secondary Keyword Error": { + "advanced": false, + "secret": false, + "title": "Secondary Keyword Error" + }, + "Error": { + "advanced": false, + "secret": false, + "title": "Error" + } + }, + "required": [ + "DICTIONARY_CREATE_FAILED", + "BLOG_PUBLISHED_SUCCESS", + "ERROR_CONVERTING_TYPE", + "ERROR_CONVERTING_TYPE", + "ERROR_DEEP_RESEARCH", + "ERROR_GENERATING_IMAGE", + "Error Humanizing Text", + "Hmm\u2026 that doesn\u2019t look like a valid WordPress blog address. 
\nIt should look like this: yourname.wordpress.com", + "Secondary Keyword Error", + "Error" + ] + }, + "has_external_trigger": false, + "has_human_in_the_loop": false, + "trigger_setup_info": null, + "credentials_input_schema": { + "properties": { + "openai_api_key_credentials": { + "credentials_provider": [ + "openai" + ], + "credentials_types": [ + "api_key" + ], + "properties": { + "id": { + "title": "Id", + "type": "string" + }, + "title": { + "anyOf": [ + { + "type": "string" + }, + { + "type": "null" + } + ], + "default": null, + "title": "Title" + }, + "provider": { + "const": "openai", + "title": "Provider", + "type": "string" + }, + "type": { + "const": "api_key", + "title": "Type", + "type": "string" + } + }, + "required": [ + "id", + "provider", + "type" + ], + "title": "CredentialsMetaInput[Literal[], Literal['api_key']]", + "type": "object", + "discriminator": "model", + "discriminator_mapping": { + "Llama-3.3-70B-Instruct": "llama_api", + "Llama-3.3-8B-Instruct": "llama_api", + "Llama-4-Maverick-17B-128E-Instruct-FP8": "llama_api", + "Llama-4-Scout-17B-16E-Instruct-FP8": "llama_api", + "Qwen/Qwen2.5-72B-Instruct-Turbo": "aiml_api", + "amazon/nova-lite-v1": "open_router", + "amazon/nova-micro-v1": "open_router", + "amazon/nova-pro-v1": "open_router", + "claude-3-7-sonnet-20250219": "anthropic", + "claude-3-haiku-20240307": "anthropic", + "claude-haiku-4-5-20251001": "anthropic", + "claude-opus-4-1-20250805": "anthropic", + "claude-opus-4-20250514": "anthropic", + "claude-opus-4-5-20251101": "anthropic", + "claude-sonnet-4-20250514": "anthropic", + "claude-sonnet-4-5-20250929": "anthropic", + "cohere/command-r-08-2024": "open_router", + "cohere/command-r-plus-08-2024": "open_router", + "deepseek/deepseek-chat": "open_router", + "deepseek/deepseek-r1-0528": "open_router", + "dolphin-mistral:latest": "ollama", + "google/gemini-2.0-flash-001": "open_router", + "google/gemini-2.0-flash-lite-001": "open_router", + "google/gemini-2.5-flash": "open_router", + "google/gemini-2.5-flash-lite-preview-06-17": "open_router", + "google/gemini-2.5-pro-preview-03-25": "open_router", + "google/gemini-3-pro-preview": "open_router", + "gpt-3.5-turbo": "openai", + "gpt-4-turbo": "openai", + "gpt-4.1-2025-04-14": "openai", + "gpt-4.1-mini-2025-04-14": "openai", + "gpt-4o": "openai", + "gpt-4o-mini": "openai", + "gpt-5-2025-08-07": "openai", + "gpt-5-chat-latest": "openai", + "gpt-5-mini-2025-08-07": "openai", + "gpt-5-nano-2025-08-07": "openai", + "gpt-5.1-2025-11-13": "openai", + "gryphe/mythomax-l2-13b": "open_router", + "llama-3.1-8b-instant": "groq", + "llama-3.3-70b-versatile": "groq", + "llama3": "ollama", + "llama3.1:405b": "ollama", + "llama3.2": "ollama", + "llama3.3": "ollama", + "meta-llama/Llama-3.2-3B-Instruct-Turbo": "aiml_api", + "meta-llama/Llama-3.3-70B-Instruct-Turbo": "aiml_api", + "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo": "aiml_api", + "meta-llama/llama-4-maverick": "open_router", + "meta-llama/llama-4-scout": "open_router", + "microsoft/wizardlm-2-8x22b": "open_router", + "mistralai/mistral-nemo": "open_router", + "moonshotai/kimi-k2": "open_router", + "nousresearch/hermes-3-llama-3.1-405b": "open_router", + "nousresearch/hermes-3-llama-3.1-70b": "open_router", + "nvidia/llama-3.1-nemotron-70b-instruct": "aiml_api", + "o1": "openai", + "o1-mini": "openai", + "o3-2025-04-16": "openai", + "o3-mini": "openai", + "openai/gpt-oss-120b": "open_router", + "openai/gpt-oss-20b": "open_router", + "perplexity/sonar": "open_router", + "perplexity/sonar-deep-research": "open_router", 
+ "perplexity/sonar-pro": "open_router", + "qwen/qwen3-235b-a22b-thinking-2507": "open_router", + "qwen/qwen3-coder": "open_router", + "v0-1.0-md": "v0", + "v0-1.5-lg": "v0", + "v0-1.5-md": "v0", + "x-ai/grok-4": "open_router", + "x-ai/grok-4-fast": "open_router", + "x-ai/grok-4.1-fast": "open_router", + "x-ai/grok-code-fast-1": "open_router" + }, + "discriminator_values": [ + "gpt-5-2025-08-07" + ] + }, + "open_router_api_key_credentials": { + "credentials_provider": [ + "open_router" + ], + "credentials_types": [ + "api_key" + ], + "properties": { + "id": { + "title": "Id", + "type": "string" + }, + "title": { + "anyOf": [ + { + "type": "string" + }, + { + "type": "null" + } + ], + "default": null, + "title": "Title" + }, + "provider": { + "const": "open_router", + "title": "Provider", + "type": "string" + }, + "type": { + "const": "api_key", + "title": "Type", + "type": "string" + } + }, + "required": [ + "id", + "provider", + "type" + ], + "title": "CredentialsMetaInput[Literal[], Literal['api_key']]", + "type": "object", + "discriminator": "model", + "discriminator_mapping": { + "Llama-3.3-70B-Instruct": "llama_api", + "Llama-3.3-8B-Instruct": "llama_api", + "Llama-4-Maverick-17B-128E-Instruct-FP8": "llama_api", + "Llama-4-Scout-17B-16E-Instruct-FP8": "llama_api", + "Qwen/Qwen2.5-72B-Instruct-Turbo": "aiml_api", + "amazon/nova-lite-v1": "open_router", + "amazon/nova-micro-v1": "open_router", + "amazon/nova-pro-v1": "open_router", + "claude-3-7-sonnet-20250219": "anthropic", + "claude-3-haiku-20240307": "anthropic", + "claude-haiku-4-5-20251001": "anthropic", + "claude-opus-4-1-20250805": "anthropic", + "claude-opus-4-20250514": "anthropic", + "claude-opus-4-5-20251101": "anthropic", + "claude-sonnet-4-20250514": "anthropic", + "claude-sonnet-4-5-20250929": "anthropic", + "cohere/command-r-08-2024": "open_router", + "cohere/command-r-plus-08-2024": "open_router", + "deepseek/deepseek-chat": "open_router", + "deepseek/deepseek-r1-0528": "open_router", + "dolphin-mistral:latest": "ollama", + "google/gemini-2.0-flash-001": "open_router", + "google/gemini-2.0-flash-lite-001": "open_router", + "google/gemini-2.5-flash": "open_router", + "google/gemini-2.5-flash-lite-preview-06-17": "open_router", + "google/gemini-2.5-pro-preview-03-25": "open_router", + "google/gemini-3-pro-preview": "open_router", + "gpt-3.5-turbo": "openai", + "gpt-4-turbo": "openai", + "gpt-4.1-2025-04-14": "openai", + "gpt-4.1-mini-2025-04-14": "openai", + "gpt-4o": "openai", + "gpt-4o-mini": "openai", + "gpt-5-2025-08-07": "openai", + "gpt-5-chat-latest": "openai", + "gpt-5-mini-2025-08-07": "openai", + "gpt-5-nano-2025-08-07": "openai", + "gpt-5.1-2025-11-13": "openai", + "gryphe/mythomax-l2-13b": "open_router", + "llama-3.1-8b-instant": "groq", + "llama-3.3-70b-versatile": "groq", + "llama3": "ollama", + "llama3.1:405b": "ollama", + "llama3.2": "ollama", + "llama3.3": "ollama", + "meta-llama/Llama-3.2-3B-Instruct-Turbo": "aiml_api", + "meta-llama/Llama-3.3-70B-Instruct-Turbo": "aiml_api", + "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo": "aiml_api", + "meta-llama/llama-4-maverick": "open_router", + "meta-llama/llama-4-scout": "open_router", + "microsoft/wizardlm-2-8x22b": "open_router", + "mistralai/mistral-nemo": "open_router", + "moonshotai/kimi-k2": "open_router", + "nousresearch/hermes-3-llama-3.1-405b": "open_router", + "nousresearch/hermes-3-llama-3.1-70b": "open_router", + "nvidia/llama-3.1-nemotron-70b-instruct": "aiml_api", + "o1": "openai", + "o1-mini": "openai", + "o3-2025-04-16": "openai", + "o3-mini": 
"openai", + "openai/gpt-oss-120b": "open_router", + "openai/gpt-oss-20b": "open_router", + "perplexity/sonar": "open_router", + "perplexity/sonar-deep-research": "open_router", + "perplexity/sonar-pro": "open_router", + "qwen/qwen3-235b-a22b-thinking-2507": "open_router", + "qwen/qwen3-coder": "open_router", + "v0-1.0-md": "v0", + "v0-1.5-lg": "v0", + "v0-1.5-md": "v0", + "x-ai/grok-4": "open_router", + "x-ai/grok-4-fast": "open_router", + "x-ai/grok-4.1-fast": "open_router", + "x-ai/grok-code-fast-1": "open_router" + }, + "discriminator_values": [ + "perplexity/sonar-deep-research" + ] + }, + "replicate_api_key_credentials": { + "credentials_provider": [ + "replicate" + ], + "credentials_types": [ + "api_key" + ], + "properties": { + "id": { + "title": "Id", + "type": "string" + }, + "title": { + "anyOf": [ + { + "type": "string" + }, + { + "type": "null" + } + ], + "default": null, + "title": "Title" + }, + "provider": { + "const": "replicate", + "title": "Provider", + "type": "string" + }, + "type": { + "const": "api_key", + "title": "Type", + "type": "string" + } + }, + "required": [ + "id", + "provider", + "type" + ], + "title": "CredentialsMetaInput[Literal[], Literal['api_key']]", + "type": "object", + "discriminator_values": [] + }, + "wordpress_oauth2_credentials": { + "credentials_provider": [ + "wordpress" + ], + "credentials_types": [ + "oauth2" + ], + "properties": { + "id": { + "title": "Id", + "type": "string" + }, + "title": { + "anyOf": [ + { + "type": "string" + }, + { + "type": "null" + } + ], + "default": null, + "title": "Title" + }, + "provider": { + "const": "wordpress", + "title": "Provider", + "type": "string" + }, + "type": { + "const": "oauth2", + "title": "Type", + "type": "string" + } + }, + "required": [ + "id", + "provider", + "type" + ], + "title": "CredentialsMetaInput[Literal[], Literal['oauth2']]", + "type": "object", + "discriminator_values": [] + }, + "dataforseo_user_password_credentials": { + "credentials_provider": [ + "dataforseo" + ], + "credentials_types": [ + "user_password" + ], + "properties": { + "id": { + "title": "Id", + "type": "string" + }, + "title": { + "anyOf": [ + { + "type": "string" + }, + { + "type": "null" + } + ], + "default": null, + "title": "Title" + }, + "provider": { + "const": "dataforseo", + "title": "Provider", + "type": "string" + }, + "type": { + "const": "user_password", + "title": "Type", + "type": "string" + } + }, + "required": [ + "id", + "provider", + "type" + ], + "title": "CredentialsMetaInput[Literal[], Literal['user_password']]", + "type": "object", + "discriminator_values": [] + }, + "e2b_api_key_credentials": { + "credentials_provider": [ + "e2b" + ], + "credentials_types": [ + "api_key" + ], + "properties": { + "id": { + "title": "Id", + "type": "string" + }, + "title": { + "anyOf": [ + { + "type": "string" + }, + { + "type": "null" + } + ], + "default": null, + "title": "Title" + }, + "provider": { + "const": "e2b", + "title": "Provider", + "type": "string" + }, + "type": { + "const": "api_key", + "title": "Type", + "type": "string" + } + }, + "required": [ + "id", + "provider", + "type" + ], + "title": "CredentialsMetaInput[Literal[], Literal['api_key']]", + "type": "object", + "discriminator_values": [] + } + }, + "required": [ + "openai_api_key_credentials", + "open_router_api_key_credentials", + "replicate_api_key_credentials", + "wordpress_oauth2_credentials", + "dataforseo_user_password_credentials", + "e2b_api_key_credentials" + ], + "title": "AutomatedSEOBlogWriterCredentialsInputSchema", + "type": 
"object" + } +} \ No newline at end of file diff --git a/autogpt_platform/backend/agents/agent_6e16e65a-ad34-4108-b4fd-4a23fced5ea2.json b/autogpt_platform/backend/agents/agent_6e16e65a-ad34-4108-b4fd-4a23fced5ea2.json new file mode 100644 index 0000000000..d25da2f332 --- /dev/null +++ b/autogpt_platform/backend/agents/agent_6e16e65a-ad34-4108-b4fd-4a23fced5ea2.json @@ -0,0 +1,1795 @@ +{ + "id": "f6138aaf-4f85-48d5-b905-db11a2ff82f3", + "version": 173, + "is_active": true, + "name": "Decision Maker Lead Finder", + "description": "Find the key decision-makers you need, fast.\n\nThis agent identifies business owners or CEOs of local companies in any area you choose. Simply enter what kind of businesses you\u2019re looking for and where, and it will:\n\n* Search the area and gather public information\n* Return names, roles, and contact details when available\n* Provide smart Google search suggestions if details aren\u2019t found\n\nPerfect for:\n\n* B2B sales teams seeking verified leads\n* Recruiters sourcing local talent\n* Researchers looking to connect with business leaders\n\nSave hours of manual searching and get straight to the people who matter most.", + "instructions": "Input the type of businesses that you'd like to contact and a target location and the agent will find you leads.\n\nStart off with a small number and increase the search results once you're happy.", + "recommended_schedule_cron": null, + "nodes": [ + { + "id": "7d2177fa-7974-4058-8da2-4247c4aae83f", + "block_id": "363ae599-353e-4804-937e-b2ee3cef3da4", + "input_default": { + "name": "Results", + "secret": false, + "advanced": false, + "escape_html": false + }, + "metadata": { + "position": { + "x": 3460.582732906506, + "y": 274.71786838896395 + } + }, + "input_links": [ + { + "id": "5121e8ff-68ff-4eee-a4f7-bdef9590da73", + "source_id": "0bc5330b-96da-4f43-8085-94d09d4dc4ea", + "sink_id": "7d2177fa-7974-4058-8da2-4247c4aae83f", + "source_name": "output", + "sink_name": "value", + "is_static": false + } + ], + "output_links": [], + "graph_id": "f6138aaf-4f85-48d5-b905-db11a2ff82f3", + "graph_version": 173, + "webhook_id": null, + "webhook": null + }, + { + "id": "1bd025e0-3ccf-4e63-a960-03325f4a6e42", + "block_id": "cc10ff7b-7753-4ff2-9af6-9399b1a7eddc", + "input_default": { + "text": "Here we do the initial google search to get back the results" + }, + "metadata": { + "position": { + "x": 445.7289615198955, + "y": -122.37319867142689 + } + }, + "input_links": [], + "output_links": [], + "graph_id": "f6138aaf-4f85-48d5-b905-db11a2ff82f3", + "graph_version": 173, + "webhook_id": null, + "webhook": null + }, + { + "id": "5119cbae-3244-4b5c-a5df-8ff949391e5f", + "block_id": "7fcd3bcb-8e1b-4e69-903d-32d3d4a92158", + "input_default": { + "name": "Location", + "value": "Edinburgh", + "secret": false, + "advanced": false, + "description": "Enter a specific location (e.g. city, neighborhood, or address). For example: \u201cSan Francisco\u201d, \u201cPalo Alto\u201d, or \u201cSoHo, NYC\u201d. 
This helps narrow the search to a specific area.", + "placeholder_values": [] + }, + "metadata": { + "position": { + "x": -1254.565608902577, + "y": -445.18205291322886 + } + }, + "input_links": [], + "output_links": [ + { + "id": "05dafd35-04f0-4a31-80a6-399642febdec", + "source_id": "5119cbae-3244-4b5c-a5df-8ff949391e5f", + "sink_id": "aba6fb51-53ac-4864-ba90-5765db928731", + "source_name": "result", + "sink_name": "values_#_Y", + "is_static": true + } + ], + "graph_id": "f6138aaf-4f85-48d5-b905-db11a2ff82f3", + "graph_version": 173, + "webhook_id": null, + "webhook": null + }, + { + "id": "b2286cfa-84f2-40ff-91a7-36528dc95d1b", + "block_id": "96dae2bb-97a2-41c2-bd2f-13a3b5a8ea98", + "input_default": { + "name": "Search Radius (meters, max 50,000)", + "value": 10000, + "secret": false, + "advanced": false, + "description": "Defines how far around the specified location we\u2019ll look for businesses. For example, 5000 meters covers a large neighborhood or small city.", + "placeholder_values": [] + }, + "metadata": { + "position": { + "x": -1235.999879770989, + "y": 516.5017041601284 + } + }, + "input_links": [], + "output_links": [ + { + "id": "62ef44f9-ed35-4eaa-a0ba-d9a023caa05b", + "source_id": "b2286cfa-84f2-40ff-91a7-36528dc95d1b", + "sink_id": "65c45aa1-b513-4741-bd1c-d26b15b1d9d3", + "source_name": "result", + "sink_name": "radius", + "is_static": true + } + ], + "graph_id": "f6138aaf-4f85-48d5-b905-db11a2ff82f3", + "graph_version": 173, + "webhook_id": null, + "webhook": null + }, + { + "id": "64d8b6ef-35ef-4b0c-af79-62d0185b1769", + "block_id": "96dae2bb-97a2-41c2-bd2f-13a3b5a8ea98", + "input_default": { + "name": "Max Number of Businesses to Analyze (max 60)", + "value": 3, + "secret": false, + "advanced": false, + "description": "The maximum number of businesses the agent will analyze from the search results. 
We\u2019ll attempt to identify the CEO or owner for each one.", + "placeholder_values": [] + }, + "metadata": { + "position": { + "x": -1241.9447129110408, + "y": 1597.7355422461178 + } + }, + "input_links": [], + "output_links": [ + { + "id": "755618e6-21ee-4017-89aa-1e0891b5ee35", + "source_id": "64d8b6ef-35ef-4b0c-af79-62d0185b1769", + "sink_id": "65c45aa1-b513-4741-bd1c-d26b15b1d9d3", + "source_name": "result", + "sink_name": "max_results", + "is_static": true + } + ], + "graph_id": "f6138aaf-4f85-48d5-b905-db11a2ff82f3", + "graph_version": 173, + "webhook_id": null, + "webhook": null + }, + { + "id": "786df672-a9cf-4f17-b18f-fd7355ce0c19", + "block_id": "363ae599-353e-4804-937e-b2ee3cef3da4", + "input_default": { + "name": "Error", + "secret": false, + "advanced": false, + "escape_html": false + }, + "metadata": { + "position": { + "x": 2286.9263830966697, + "y": 5098.192228390306 + } + }, + "input_links": [ + { + "id": "74e0063f-4742-4a5e-9545-e74b9d5aac71", + "source_id": "f0c906ed-b396-4c7a-bd61-9f5744378fc0", + "sink_id": "786df672-a9cf-4f17-b18f-fd7355ce0c19", + "source_name": "Error", + "sink_name": "value", + "is_static": false + }, + { + "id": "f621a673-5f85-417b-b166-45cf4f284d40", + "source_id": "0bc5330b-96da-4f43-8085-94d09d4dc4ea", + "sink_id": "786df672-a9cf-4f17-b18f-fd7355ce0c19", + "source_name": "missing", + "sink_name": "value", + "is_static": false + }, + { + "id": "e731e593-fb21-4768-9de4-17b17d0d62dc", + "source_id": "b9653cd3-5cd7-46da-b762-4ef0e3249f0d", + "sink_id": "786df672-a9cf-4f17-b18f-fd7355ce0c19", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "f9bf8eb1-316e-4ed7-92e2-db42493e7fc8", + "source_id": "65c45aa1-b513-4741-bd1c-d26b15b1d9d3", + "sink_id": "786df672-a9cf-4f17-b18f-fd7355ce0c19", + "source_name": "error", + "sink_name": "value", + "is_static": false + } + ], + "output_links": [], + "graph_id": "f6138aaf-4f85-48d5-b905-db11a2ff82f3", + "graph_version": 173, + "webhook_id": null, + "webhook": null + }, + { + "id": "65c45aa1-b513-4741-bd1c-d26b15b1d9d3", + "block_id": "f47ac10b-58cc-4372-a567-0e02b2c3d479", + "input_default": { + "radius": 5000, + "max_results": 20 + }, + "metadata": { + "position": { + "x": 388.18411547948523, + "y": 292.0431065385293 + } + }, + "input_links": [ + { + "id": "62ef44f9-ed35-4eaa-a0ba-d9a023caa05b", + "source_id": "b2286cfa-84f2-40ff-91a7-36528dc95d1b", + "sink_id": "65c45aa1-b513-4741-bd1c-d26b15b1d9d3", + "source_name": "result", + "sink_name": "radius", + "is_static": true + }, + { + "id": "9f9a549c-12dc-42c8-8aeb-52221c3e30a9", + "source_id": "aba6fb51-53ac-4864-ba90-5765db928731", + "sink_id": "65c45aa1-b513-4741-bd1c-d26b15b1d9d3", + "source_name": "output", + "sink_name": "query", + "is_static": false + }, + { + "id": "755618e6-21ee-4017-89aa-1e0891b5ee35", + "source_id": "64d8b6ef-35ef-4b0c-af79-62d0185b1769", + "sink_id": "65c45aa1-b513-4741-bd1c-d26b15b1d9d3", + "source_name": "result", + "sink_name": "max_results", + "is_static": true + } + ], + "output_links": [ + { + "id": "5ea771cc-8621-4bbe-8fe1-c6ff0837247e", + "source_id": "65c45aa1-b513-4741-bd1c-d26b15b1d9d3", + "sink_id": "b9653cd3-5cd7-46da-b762-4ef0e3249f0d", + "source_name": "place", + "sink_name": "value", + "is_static": false + }, + { + "id": "f9bf8eb1-316e-4ed7-92e2-db42493e7fc8", + "source_id": "65c45aa1-b513-4741-bd1c-d26b15b1d9d3", + "sink_id": "786df672-a9cf-4f17-b18f-fd7355ce0c19", + "source_name": "error", + "sink_name": "value", + "is_static": false + } + ], + "graph_id": 
"f6138aaf-4f85-48d5-b905-db11a2ff82f3", + "graph_version": 173, + "webhook_id": null, + "webhook": null + }, + { + "id": "373eef04-8ac1-4741-a9f6-469479e6abd3", + "block_id": "7fcd3bcb-8e1b-4e69-903d-32d3d4a92158", + "input_default": { + "name": "Business Type", + "value": "coffee roasters", + "secret": false, + "advanced": false, + "description": "Describe the kind of businesses you're targeting. For example: \u201cAI startups\u201d, \u201cdigital marketing agencies\u201d, or \u201ccoffee roasters\u201d. This will be used in a Google Maps search.", + "placeholder_values": [] + }, + "metadata": { + "position": { + "x": -1250.0050183525716, + "y": -1439.6118227804732 + } + }, + "input_links": [], + "output_links": [ + { + "id": "477c651e-24d2-40b7-a33d-c434c3b40fc8", + "source_id": "373eef04-8ac1-4741-a9f6-469479e6abd3", + "sink_id": "aba6fb51-53ac-4864-ba90-5765db928731", + "source_name": "result", + "sink_name": "values_#_X", + "is_static": true + } + ], + "graph_id": "f6138aaf-4f85-48d5-b905-db11a2ff82f3", + "graph_version": 173, + "webhook_id": null, + "webhook": null + }, + { + "id": "aba6fb51-53ac-4864-ba90-5765db928731", + "block_id": "db7d8f02-2f44-4c55-ab7a-eae0941f0c30", + "input_default": { + "format": "{{X}} in {{Y}}", + "values": {}, + "escape_html": false + }, + "metadata": { + "position": { + "x": -692.063978515958, + "y": -993.3284573108554 + } + }, + "input_links": [ + { + "id": "477c651e-24d2-40b7-a33d-c434c3b40fc8", + "source_id": "373eef04-8ac1-4741-a9f6-469479e6abd3", + "sink_id": "aba6fb51-53ac-4864-ba90-5765db928731", + "source_name": "result", + "sink_name": "values_#_X", + "is_static": true + }, + { + "id": "05dafd35-04f0-4a31-80a6-399642febdec", + "source_id": "5119cbae-3244-4b5c-a5df-8ff949391e5f", + "sink_id": "aba6fb51-53ac-4864-ba90-5765db928731", + "source_name": "result", + "sink_name": "values_#_Y", + "is_static": true + } + ], + "output_links": [ + { + "id": "9f9a549c-12dc-42c8-8aeb-52221c3e30a9", + "source_id": "aba6fb51-53ac-4864-ba90-5765db928731", + "sink_id": "65c45aa1-b513-4741-bd1c-d26b15b1d9d3", + "source_name": "output", + "sink_name": "query", + "is_static": false + } + ], + "graph_id": "f6138aaf-4f85-48d5-b905-db11a2ff82f3", + "graph_version": 173, + "webhook_id": null, + "webhook": null + }, + { + "id": "b9653cd3-5cd7-46da-b762-4ef0e3249f0d", + "block_id": "95d1b990-ce13-4d88-9737-ba5c2070c97b", + "input_default": { + "type": "string" + }, + "metadata": { + "position": { + "x": 1024.5114218164606, + "y": 293.3837921763587 + } + }, + "input_links": [ + { + "id": "5ea771cc-8621-4bbe-8fe1-c6ff0837247e", + "source_id": "65c45aa1-b513-4741-bd1c-d26b15b1d9d3", + "sink_id": "b9653cd3-5cd7-46da-b762-4ef0e3249f0d", + "source_name": "place", + "sink_name": "value", + "is_static": false + } + ], + "output_links": [ + { + "id": "6a390b2e-1d72-4283-8649-db59caf83671", + "source_id": "b9653cd3-5cd7-46da-b762-4ef0e3249f0d", + "sink_id": "f0c906ed-b396-4c7a-bd61-9f5744378fc0", + "source_name": "value", + "sink_name": "Business Info", + "is_static": false + }, + { + "id": "e731e593-fb21-4768-9de4-17b17d0d62dc", + "source_id": "b9653cd3-5cd7-46da-b762-4ef0e3249f0d", + "sink_id": "786df672-a9cf-4f17-b18f-fd7355ce0c19", + "source_name": "error", + "sink_name": "value", + "is_static": false + } + ], + "graph_id": "f6138aaf-4f85-48d5-b905-db11a2ff82f3", + "graph_version": 173, + "webhook_id": null, + "webhook": null + }, + { + "id": "f0c906ed-b396-4c7a-bd61-9f5744378fc0", + "block_id": "e189baac-8c20-45a1-94a7-55177ea42565", + "input_default": { + "inputs": {}, + 
"user_id": "b3e41ea4-2f4c-4964-927c-fe682d857bad", + "graph_id": "0c9c3670-b3b4-45ce-92d1-42b178fd92b9", + "input_schema": { + "type": "object", + "required": [ + "Business Info" + ], + "properties": { + "Business Info": { + "anyOf": [ + { + "type": "string", + "format": "long-text" + }, + { + "type": "null" + } + ], + "title": "Business Info", + "secret": false, + "advanced": false + } + } + }, + "graph_version": 10, + "output_schema": { + "type": "object", + "required": [ + "Results!", + "Error" + ], + "properties": { + "Error": { + "title": "Error", + "secret": false, + "advanced": false + }, + "Results!": { + "title": "Results!", + "secret": false, + "advanced": false + } + } + } + }, + "metadata": { + "position": { + "x": 1600.245816656499, + "y": 301.35868425022886 + }, + "customized_name": "Find Business CEO" + }, + "input_links": [ + { + "id": "6a390b2e-1d72-4283-8649-db59caf83671", + "source_id": "b9653cd3-5cd7-46da-b762-4ef0e3249f0d", + "sink_id": "f0c906ed-b396-4c7a-bd61-9f5744378fc0", + "source_name": "value", + "sink_name": "Business Info", + "is_static": false + } + ], + "output_links": [ + { + "id": "20ba5b01-cd71-483a-9f5d-5ad122671ec3", + "source_id": "f0c906ed-b396-4c7a-bd61-9f5744378fc0", + "sink_id": "60dab3ca-79d5-4e6d-b8e4-94f55fab3353", + "source_name": "Results!", + "sink_name": "input_xml", + "is_static": false + }, + { + "id": "74e0063f-4742-4a5e-9545-e74b9d5aac71", + "source_id": "f0c906ed-b396-4c7a-bd61-9f5744378fc0", + "sink_id": "786df672-a9cf-4f17-b18f-fd7355ce0c19", + "source_name": "Error", + "sink_name": "value", + "is_static": false + } + ], + "graph_id": "f6138aaf-4f85-48d5-b905-db11a2ff82f3", + "graph_version": 173, + "webhook_id": null, + "webhook": null + }, + { + "id": "60dab3ca-79d5-4e6d-b8e4-94f55fab3353", + "block_id": "286380af-9529-4b55-8be0-1d7c854abdb5", + "input_default": {}, + "metadata": { + "position": { + "x": 2324.8981774818467, + "y": 262.78406464486443 + } + }, + "input_links": [ + { + "id": "20ba5b01-cd71-483a-9f5d-5ad122671ec3", + "source_id": "f0c906ed-b396-4c7a-bd61-9f5744378fc0", + "sink_id": "60dab3ca-79d5-4e6d-b8e4-94f55fab3353", + "source_name": "Results!", + "sink_name": "input_xml", + "is_static": false + } + ], + "output_links": [ + { + "id": "d7042b49-fbdd-48dd-b06f-22622f537c97", + "source_id": "60dab3ca-79d5-4e6d-b8e4-94f55fab3353", + "sink_id": "0bc5330b-96da-4f43-8085-94d09d4dc4ea", + "source_name": "parsed_xml", + "sink_name": "input", + "is_static": false + } + ], + "graph_id": "f6138aaf-4f85-48d5-b905-db11a2ff82f3", + "graph_version": 173, + "webhook_id": null, + "webhook": null + }, + { + "id": "0bc5330b-96da-4f43-8085-94d09d4dc4ea", + "block_id": "0e50422c-6dee-4145-83d6-3a5a392f65de", + "input_default": { + "key": "identified_owner_ceo" + }, + "metadata": { + "position": { + "x": 2885.0176323637097, + "y": 266.28362722772454 + } + }, + "input_links": [ + { + "id": "d7042b49-fbdd-48dd-b06f-22622f537c97", + "source_id": "60dab3ca-79d5-4e6d-b8e4-94f55fab3353", + "sink_id": "0bc5330b-96da-4f43-8085-94d09d4dc4ea", + "source_name": "parsed_xml", + "sink_name": "input", + "is_static": false + } + ], + "output_links": [ + { + "id": "5121e8ff-68ff-4eee-a4f7-bdef9590da73", + "source_id": "0bc5330b-96da-4f43-8085-94d09d4dc4ea", + "sink_id": "7d2177fa-7974-4058-8da2-4247c4aae83f", + "source_name": "output", + "sink_name": "value", + "is_static": false + }, + { + "id": "f621a673-5f85-417b-b166-45cf4f284d40", + "source_id": "0bc5330b-96da-4f43-8085-94d09d4dc4ea", + "sink_id": "786df672-a9cf-4f17-b18f-fd7355ce0c19", + 
"source_name": "missing", + "sink_name": "value", + "is_static": false + } + ], + "graph_id": "f6138aaf-4f85-48d5-b905-db11a2ff82f3", + "graph_version": 173, + "webhook_id": null, + "webhook": null + } + ], + "links": [ + { + "id": "74e0063f-4742-4a5e-9545-e74b9d5aac71", + "source_id": "f0c906ed-b396-4c7a-bd61-9f5744378fc0", + "sink_id": "786df672-a9cf-4f17-b18f-fd7355ce0c19", + "source_name": "Error", + "sink_name": "value", + "is_static": false + }, + { + "id": "05dafd35-04f0-4a31-80a6-399642febdec", + "source_id": "5119cbae-3244-4b5c-a5df-8ff949391e5f", + "sink_id": "aba6fb51-53ac-4864-ba90-5765db928731", + "source_name": "result", + "sink_name": "values_#_Y", + "is_static": true + }, + { + "id": "6a390b2e-1d72-4283-8649-db59caf83671", + "source_id": "b9653cd3-5cd7-46da-b762-4ef0e3249f0d", + "sink_id": "f0c906ed-b396-4c7a-bd61-9f5744378fc0", + "source_name": "value", + "sink_name": "Business Info", + "is_static": false + }, + { + "id": "d7042b49-fbdd-48dd-b06f-22622f537c97", + "source_id": "60dab3ca-79d5-4e6d-b8e4-94f55fab3353", + "sink_id": "0bc5330b-96da-4f43-8085-94d09d4dc4ea", + "source_name": "parsed_xml", + "sink_name": "input", + "is_static": false + }, + { + "id": "f621a673-5f85-417b-b166-45cf4f284d40", + "source_id": "0bc5330b-96da-4f43-8085-94d09d4dc4ea", + "sink_id": "786df672-a9cf-4f17-b18f-fd7355ce0c19", + "source_name": "missing", + "sink_name": "value", + "is_static": false + }, + { + "id": "e731e593-fb21-4768-9de4-17b17d0d62dc", + "source_id": "b9653cd3-5cd7-46da-b762-4ef0e3249f0d", + "sink_id": "786df672-a9cf-4f17-b18f-fd7355ce0c19", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "f9bf8eb1-316e-4ed7-92e2-db42493e7fc8", + "source_id": "65c45aa1-b513-4741-bd1c-d26b15b1d9d3", + "sink_id": "786df672-a9cf-4f17-b18f-fd7355ce0c19", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "9f9a549c-12dc-42c8-8aeb-52221c3e30a9", + "source_id": "aba6fb51-53ac-4864-ba90-5765db928731", + "sink_id": "65c45aa1-b513-4741-bd1c-d26b15b1d9d3", + "source_name": "output", + "sink_name": "query", + "is_static": false + }, + { + "id": "20ba5b01-cd71-483a-9f5d-5ad122671ec3", + "source_id": "f0c906ed-b396-4c7a-bd61-9f5744378fc0", + "sink_id": "60dab3ca-79d5-4e6d-b8e4-94f55fab3353", + "source_name": "Results!", + "sink_name": "input_xml", + "is_static": false + }, + { + "id": "62ef44f9-ed35-4eaa-a0ba-d9a023caa05b", + "source_id": "b2286cfa-84f2-40ff-91a7-36528dc95d1b", + "sink_id": "65c45aa1-b513-4741-bd1c-d26b15b1d9d3", + "source_name": "result", + "sink_name": "radius", + "is_static": true + }, + { + "id": "755618e6-21ee-4017-89aa-1e0891b5ee35", + "source_id": "64d8b6ef-35ef-4b0c-af79-62d0185b1769", + "sink_id": "65c45aa1-b513-4741-bd1c-d26b15b1d9d3", + "source_name": "result", + "sink_name": "max_results", + "is_static": true + }, + { + "id": "5121e8ff-68ff-4eee-a4f7-bdef9590da73", + "source_id": "0bc5330b-96da-4f43-8085-94d09d4dc4ea", + "sink_id": "7d2177fa-7974-4058-8da2-4247c4aae83f", + "source_name": "output", + "sink_name": "value", + "is_static": false + }, + { + "id": "477c651e-24d2-40b7-a33d-c434c3b40fc8", + "source_id": "373eef04-8ac1-4741-a9f6-469479e6abd3", + "sink_id": "aba6fb51-53ac-4864-ba90-5765db928731", + "source_name": "result", + "sink_name": "values_#_X", + "is_static": true + }, + { + "id": "5ea771cc-8621-4bbe-8fe1-c6ff0837247e", + "source_id": "65c45aa1-b513-4741-bd1c-d26b15b1d9d3", + "sink_id": "b9653cd3-5cd7-46da-b762-4ef0e3249f0d", + "source_name": "place", + "sink_name": "value", + "is_static": 
false + } + ], + "forked_from_id": null, + "forked_from_version": null, + "sub_graphs": [ + { + "id": "d3d3907c-9974-4890-9fee-997519ba2711", + "version": 10, + "is_active": true, + "name": "Find Business CEO", + "description": "Input info on a business, this agent will find it's CEO / Leader for you.", + "instructions": null, + "recommended_schedule_cron": null, + "nodes": [ + { + "id": "54bfb5a3-051a-4c4a-a180-7d1fbb7b113e", + "block_id": "1ff065e9-88e8-4358-9d82-8dc91f622ba9", + "input_default": {}, + "metadata": { + "position": { + "x": -24.95592616214003, + "y": 439.8362208290606 + } + }, + "input_links": [ + { + "id": "91f1d36c-8afb-4037-9add-7ceb7c80fc2c", + "source_id": "61aad48d-ca01-46bd-9b17-67e03136bc17", + "sink_id": "54bfb5a3-051a-4c4a-a180-7d1fbb7b113e", + "source_name": "value", + "sink_name": "input", + "is_static": false + }, + { + "id": "1eb094ac-70a8-43bf-aef8-72259a9e952a", + "source_id": "61aad48d-ca01-46bd-9b17-67e03136bc17", + "sink_id": "54bfb5a3-051a-4c4a-a180-7d1fbb7b113e", + "source_name": "value", + "sink_name": "data", + "is_static": false + } + ], + "output_links": [ + { + "id": "069e3f5d-3a2e-442c-9ce8-8d4dac810fdf", + "source_id": "54bfb5a3-051a-4c4a-a180-7d1fbb7b113e", + "sink_id": "939682c2-fda2-42e2-b309-492ed9a9422a", + "source_name": "output", + "sink_name": "prompt_values_#_BUSINESS_INFO", + "is_static": true + } + ] + }, + { + "id": "506276f9-d5be-4e5c-9569-4fe7999d0cc0", + "block_id": "363ae599-353e-4804-937e-b2ee3cef3da4", + "input_default": { + "name": "Results!", + "secret": false, + "advanced": false, + "escape_html": false + }, + "metadata": { + "position": { + "x": 2318.871796113364, + "y": 1268.345179299749 + } + }, + "input_links": [ + { + "id": "3a4f43c0-658d-4164-90e2-b254867594fd", + "source_id": "939682c2-fda2-42e2-b309-492ed9a9422a", + "sink_id": "506276f9-d5be-4e5c-9569-4fe7999d0cc0", + "source_name": "finished", + "sink_name": "value", + "is_static": false + } + ], + "output_links": [] + }, + { + "id": "61aad48d-ca01-46bd-9b17-67e03136bc17", + "block_id": "95d1b990-ce13-4d88-9737-ba5c2070c97b", + "input_default": { + "type": "string" + }, + "metadata": { + "position": { + "x": -624.0481725216048, + "y": 445.81163897153147 + } + }, + "input_links": [ + { + "id": "240d49bf-9350-4d13-a33c-02f34c360a7e", + "source_id": "30e9d734-52e2-470c-9323-f2eb2a33e2fb", + "sink_id": "61aad48d-ca01-46bd-9b17-67e03136bc17", + "source_name": "result", + "sink_name": "value", + "is_static": true + } + ], + "output_links": [ + { + "id": "91f1d36c-8afb-4037-9add-7ceb7c80fc2c", + "source_id": "61aad48d-ca01-46bd-9b17-67e03136bc17", + "sink_id": "54bfb5a3-051a-4c4a-a180-7d1fbb7b113e", + "source_name": "value", + "sink_name": "input", + "is_static": false + }, + { + "id": "08b6f408-24f7-4556-913a-add356c5bf92", + "source_id": "61aad48d-ca01-46bd-9b17-67e03136bc17", + "sink_id": "6afdc5ce-37db-4238-8fea-86eb4c360fbe", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "1eb094ac-70a8-43bf-aef8-72259a9e952a", + "source_id": "61aad48d-ca01-46bd-9b17-67e03136bc17", + "sink_id": "54bfb5a3-051a-4c4a-a180-7d1fbb7b113e", + "source_name": "value", + "sink_name": "data", + "is_static": false + } + ] + }, + { + "id": "939682c2-fda2-42e2-b309-492ed9a9422a", + "block_id": "3b191d9f-356f-482d-8238-ba04b6d18381", + "input_default": { + "model": "gpt-5-mini-2025-08-07", + "retry": 3, + "prompt": "You are a business CEO/Leader identifier. \n\nYour task is to identify the CEO/Leader of the business below by conducting web research. 
In order to identify the CEO, use your web search tool. \n\nHere is the business to find the CEO of:\n \n{{BUSINESS_INFO}} \n \n\n\nOnce you can confidently identify the business owner or CEO based on the search results, provide the details in the following format in your \"finished\" message (no tool call).\n \nName: [Owner's or CEO's full name] \nPosition: [Owner or CEO] \nBusiness Name: [Name of the business] \nEmail: [Owner's or CEO's email address, if available] \nPhone Number: [Owner's or CEO's phone number, if available] \nBusiness Address: [Complete business address] \n\n\nOnly include the fields for which you have information from the search results. \nDo not make assumptions or add information that is not explicitly stated. \n\nIf you cannot yet confidently identify the business owner or CEO, use your web search tool to find more information about the business owner or CEO. \n\nWhen creating these search queries: \n1. Use specific details from the business information provided \n2. Focus on finding the owner's or CEO's identity and contact information \n3. Vary the types of searches (e.g., business name + CEO, business address + proprietor, written questions also work well etc.) \n4. Include location-specific searches if applicable \n5. Consider searching for business registration or public records \n6. Aim to uncover the owner's or CEO's contact details (email, phone number) if possible \n\nRemember: \n1. Never make assumptions when identifying the business, owner or CEO. \n2. If any piece of required information is missing, do not try to fill it in with guesses, never make assumptions\n3. Be certain of the owner's or CEO's identity before providing an response. \n4. Always call a tool, if you provide a message without a tool call or a \"finished\" message, this will be taken as your final answer.\n\nAnalyze the business information now and provide your response accordingly.", + "sys_prompt": "Thinking carefully step by step decide which function to call. Always choose a function call from the list of function signatures, and always provide the complete argument provided with the type matching the required jsonschema signature, no missing argument is allowed. If you have already completed the task objective, you can end the task by providing the end result of your work as a finish message. Function parameters that has no default value and not optional typed has to be provided. 
", + "ollama_host": "localhost:11434", + "prompt_values": {}, + "multiple_tool_calls": false, + "conversation_history": [] + }, + "metadata": { + "position": { + "x": 1143.4480508963065, + "y": 15.954572309820549 + } + }, + "input_links": [ + { + "id": "d1421048-3a10-46d8-995b-4fdcc8bca179", + "source_id": "c5b5c09a-152e-4932-ad72-1cfd0735b834", + "sink_id": "939682c2-fda2-42e2-b309-492ed9a9422a", + "source_name": "Answer", + "sink_name": "last_tool_output", + "is_static": false + }, + { + "id": "74066195-307d-4351-a8af-d0240d9a4977", + "source_id": "939682c2-fda2-42e2-b309-492ed9a9422a", + "sink_id": "939682c2-fda2-42e2-b309-492ed9a9422a", + "source_name": "conversations", + "sink_name": "conversation_history", + "is_static": false + }, + { + "id": "069e3f5d-3a2e-442c-9ce8-8d4dac810fdf", + "source_id": "54bfb5a3-051a-4c4a-a180-7d1fbb7b113e", + "sink_id": "939682c2-fda2-42e2-b309-492ed9a9422a", + "source_name": "output", + "sink_name": "prompt_values_#_BUSINESS_INFO", + "is_static": true + }, + { + "id": "d67e6d77-c214-4853-97b4-7a8c5a94f1f2", + "source_id": "c5b5c09a-152e-4932-ad72-1cfd0735b834", + "sink_id": "939682c2-fda2-42e2-b309-492ed9a9422a", + "source_name": "Error", + "sink_name": "last_tool_output", + "is_static": false + } + ], + "output_links": [ + { + "id": "647f0ed8-4e1c-4ea7-abd4-67d13022a343", + "source_id": "939682c2-fda2-42e2-b309-492ed9a9422a", + "sink_id": "c5b5c09a-152e-4932-ad72-1cfd0735b834", + "source_name": "tools_^_web_search_~_question", + "sink_name": "Question", + "is_static": false + }, + { + "id": "d2b063bf-a2a1-45b3-a705-c912c4b587aa", + "source_id": "939682c2-fda2-42e2-b309-492ed9a9422a", + "sink_id": "6afdc5ce-37db-4238-8fea-86eb4c360fbe", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "3a4f43c0-658d-4164-90e2-b254867594fd", + "source_id": "939682c2-fda2-42e2-b309-492ed9a9422a", + "sink_id": "506276f9-d5be-4e5c-9569-4fe7999d0cc0", + "source_name": "finished", + "sink_name": "value", + "is_static": false + }, + { + "id": "74066195-307d-4351-a8af-d0240d9a4977", + "source_id": "939682c2-fda2-42e2-b309-492ed9a9422a", + "sink_id": "939682c2-fda2-42e2-b309-492ed9a9422a", + "source_name": "conversations", + "sink_name": "conversation_history", + "is_static": false + } + ] + }, + { + "id": "c5b5c09a-152e-4932-ad72-1cfd0735b834", + "block_id": "e189baac-8c20-45a1-94a7-55177ea42565", + "input_default": { + "inputs": { + "Question": "Why is the sky blue?" 
+ }, + "user_id": "b3e41ea4-2f4c-4964-927c-fe682d857bad", + "graph_id": "06fa822d-4c66-43ad-a49e-cd3f2b148b82", + "input_schema": { + "type": "object", + "required": [], + "properties": { + "Question": { + "anyOf": [ + { + "type": "string", + "format": "short-text" + }, + { + "type": "null" + } + ], + "title": "Question", + "secret": false, + "default": "Why is the sky blue?", + "advanced": false + } + } + }, + "graph_version": 16, + "output_schema": { + "type": "object", + "required": [ + "Answer", + "Error" + ], + "properties": { + "Error": { + "title": "Error", + "secret": false, + "advanced": false + }, + "Answer": { + "title": "Answer", + "secret": false, + "advanced": false + } + } + } + }, + "metadata": { + "position": { + "x": 1892.787309780138, + "y": -337.34517929974896 + } + }, + "input_links": [ + { + "id": "647f0ed8-4e1c-4ea7-abd4-67d13022a343", + "source_id": "939682c2-fda2-42e2-b309-492ed9a9422a", + "sink_id": "c5b5c09a-152e-4932-ad72-1cfd0735b834", + "source_name": "tools_^_web_search_~_question", + "sink_name": "Question", + "is_static": false + } + ], + "output_links": [ + { + "id": "d1421048-3a10-46d8-995b-4fdcc8bca179", + "source_id": "c5b5c09a-152e-4932-ad72-1cfd0735b834", + "sink_id": "939682c2-fda2-42e2-b309-492ed9a9422a", + "source_name": "Answer", + "sink_name": "last_tool_output", + "is_static": false + }, + { + "id": "d67e6d77-c214-4853-97b4-7a8c5a94f1f2", + "source_id": "c5b5c09a-152e-4932-ad72-1cfd0735b834", + "sink_id": "939682c2-fda2-42e2-b309-492ed9a9422a", + "source_name": "Error", + "sink_name": "last_tool_output", + "is_static": false + } + ] + }, + { + "id": "30e9d734-52e2-470c-9323-f2eb2a33e2fb", + "block_id": "90a56ffb-7024-4b2b-ab50-e26c5e5ab8ba", + "input_default": { + "name": "Business Info", + "secret": false, + "advanced": false, + "placeholder_values": [] + }, + "metadata": { + "position": { + "x": -1289.439195278696, + "y": 434.231534401348 + } + }, + "input_links": [], + "output_links": [ + { + "id": "240d49bf-9350-4d13-a33c-02f34c360a7e", + "source_id": "30e9d734-52e2-470c-9323-f2eb2a33e2fb", + "sink_id": "61aad48d-ca01-46bd-9b17-67e03136bc17", + "source_name": "result", + "sink_name": "value", + "is_static": true + } + ] + }, + { + "id": "6afdc5ce-37db-4238-8fea-86eb4c360fbe", + "block_id": "363ae599-353e-4804-937e-b2ee3cef3da4", + "input_default": { + "name": "Error", + "secret": false, + "advanced": false, + "escape_html": false + }, + "metadata": { + "position": { + "x": 2288.060474752282, + "y": 2349.5595931194903 + } + }, + "input_links": [ + { + "id": "d2b063bf-a2a1-45b3-a705-c912c4b587aa", + "source_id": "939682c2-fda2-42e2-b309-492ed9a9422a", + "sink_id": "6afdc5ce-37db-4238-8fea-86eb4c360fbe", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "08b6f408-24f7-4556-913a-add356c5bf92", + "source_id": "61aad48d-ca01-46bd-9b17-67e03136bc17", + "sink_id": "6afdc5ce-37db-4238-8fea-86eb4c360fbe", + "source_name": "error", + "sink_name": "value", + "is_static": false + } + ], + "output_links": [] + } + ], + "links": [ + { + "id": "069e3f5d-3a2e-442c-9ce8-8d4dac810fdf", + "source_id": "54bfb5a3-051a-4c4a-a180-7d1fbb7b113e", + "sink_id": "939682c2-fda2-42e2-b309-492ed9a9422a", + "source_name": "output", + "sink_name": "prompt_values_#_BUSINESS_INFO", + "is_static": true + }, + { + "id": "d67e6d77-c214-4853-97b4-7a8c5a94f1f2", + "source_id": "c5b5c09a-152e-4932-ad72-1cfd0735b834", + "sink_id": "939682c2-fda2-42e2-b309-492ed9a9422a", + "source_name": "Error", + "sink_name": "last_tool_output", + "is_static": 
false + }, + { + "id": "647f0ed8-4e1c-4ea7-abd4-67d13022a343", + "source_id": "939682c2-fda2-42e2-b309-492ed9a9422a", + "sink_id": "c5b5c09a-152e-4932-ad72-1cfd0735b834", + "source_name": "tools_^_web_search_~_question", + "sink_name": "Question", + "is_static": false + }, + { + "id": "240d49bf-9350-4d13-a33c-02f34c360a7e", + "source_id": "30e9d734-52e2-470c-9323-f2eb2a33e2fb", + "sink_id": "61aad48d-ca01-46bd-9b17-67e03136bc17", + "source_name": "result", + "sink_name": "value", + "is_static": true + }, + { + "id": "1eb094ac-70a8-43bf-aef8-72259a9e952a", + "source_id": "61aad48d-ca01-46bd-9b17-67e03136bc17", + "sink_id": "54bfb5a3-051a-4c4a-a180-7d1fbb7b113e", + "source_name": "value", + "sink_name": "data", + "is_static": false + }, + { + "id": "3a4f43c0-658d-4164-90e2-b254867594fd", + "source_id": "939682c2-fda2-42e2-b309-492ed9a9422a", + "sink_id": "506276f9-d5be-4e5c-9569-4fe7999d0cc0", + "source_name": "finished", + "sink_name": "value", + "is_static": false + }, + { + "id": "d2b063bf-a2a1-45b3-a705-c912c4b587aa", + "source_id": "939682c2-fda2-42e2-b309-492ed9a9422a", + "sink_id": "6afdc5ce-37db-4238-8fea-86eb4c360fbe", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "91f1d36c-8afb-4037-9add-7ceb7c80fc2c", + "source_id": "61aad48d-ca01-46bd-9b17-67e03136bc17", + "sink_id": "54bfb5a3-051a-4c4a-a180-7d1fbb7b113e", + "source_name": "value", + "sink_name": "input", + "is_static": false + }, + { + "id": "08b6f408-24f7-4556-913a-add356c5bf92", + "source_id": "61aad48d-ca01-46bd-9b17-67e03136bc17", + "sink_id": "6afdc5ce-37db-4238-8fea-86eb4c360fbe", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "74066195-307d-4351-a8af-d0240d9a4977", + "source_id": "939682c2-fda2-42e2-b309-492ed9a9422a", + "sink_id": "939682c2-fda2-42e2-b309-492ed9a9422a", + "source_name": "conversations", + "sink_name": "conversation_history", + "is_static": false + }, + { + "id": "d1421048-3a10-46d8-995b-4fdcc8bca179", + "source_id": "c5b5c09a-152e-4932-ad72-1cfd0735b834", + "sink_id": "939682c2-fda2-42e2-b309-492ed9a9422a", + "source_name": "Answer", + "sink_name": "last_tool_output", + "is_static": false + } + ], + "forked_from_id": null, + "forked_from_version": null, + "input_schema": { + "type": "object", + "properties": { + "Business Info": { + "advanced": false, + "anyOf": [ + { + "format": "long-text", + "type": "string" + }, + { + "type": "null" + } + ], + "secret": false, + "title": "Business Info" + } + }, + "required": [ + "Business Info" + ] + }, + "output_schema": { + "type": "object", + "properties": { + "Results!": { + "advanced": false, + "secret": false, + "title": "Results!" 
+ }, + "Error": { + "advanced": false, + "secret": false, + "title": "Error" + } + }, + "required": [ + "Results!", + "Error" + ] + }, + "has_external_trigger": false, + "has_human_in_the_loop": false, + "trigger_setup_info": null + }, + { + "id": "c19b3799-6b6b-40e5-86c8-8496274cddeb", + "version": 16, + "is_active": true, + "name": "Web Search", + "description": "Quick web search using perplexity", + "instructions": null, + "recommended_schedule_cron": null, + "nodes": [ + { + "id": "f14630d5-4344-46c9-a2e4-4516e51aff54", + "block_id": "1f292d4a-41a4-4977-9684-7c8d560b9f91", + "input_default": { + "model": "perplexity/sonar-pro", + "retry": 3, + "ollama_host": "localhost:11434", + "prompt_values": {} + }, + "metadata": { + "position": { + "x": 227.84177932953094, + "y": 234.0155945699488 + } + }, + "input_links": [ + { + "id": "7056a45f-c928-46f5-aed2-b91f5779e3d8", + "source_id": "95c75f27-10a9-4caf-980a-4496a9fb1da0", + "sink_id": "f14630d5-4344-46c9-a2e4-4516e51aff54", + "source_name": "result", + "sink_name": "prompt", + "is_static": true + } + ], + "output_links": [ + { + "id": "e587d4a7-3753-443e-b4e9-f10b29ff0425", + "source_id": "f14630d5-4344-46c9-a2e4-4516e51aff54", + "sink_id": "a23d4c5c-e9ca-496c-a5e8-57534dfa9085", + "source_name": "response", + "sink_name": "value", + "is_static": false + }, + { + "id": "60e288be-3de9-417c-9c6a-10897293203c", + "source_id": "f14630d5-4344-46c9-a2e4-4516e51aff54", + "sink_id": "7dcd5103-e3f9-434c-9bde-1b6a844d979d", + "source_name": "error", + "sink_name": "value", + "is_static": false + } + ] + }, + { + "id": "95c75f27-10a9-4caf-980a-4496a9fb1da0", + "block_id": "7fcd3bcb-8e1b-4e69-903d-32d3d4a92158", + "input_default": { + "name": "Question", + "value": "Why is the sky blue?", + "secret": false, + "advanced": false, + "placeholder_values": [] + }, + "metadata": { + "position": { + "x": -360.0051846165286, + "y": 226.56203046586123 + } + }, + "input_links": [], + "output_links": [ + { + "id": "7056a45f-c928-46f5-aed2-b91f5779e3d8", + "source_id": "95c75f27-10a9-4caf-980a-4496a9fb1da0", + "sink_id": "f14630d5-4344-46c9-a2e4-4516e51aff54", + "source_name": "result", + "sink_name": "prompt", + "is_static": true + } + ] + }, + { + "id": "a23d4c5c-e9ca-496c-a5e8-57534dfa9085", + "block_id": "363ae599-353e-4804-937e-b2ee3cef3da4", + "input_default": { + "name": "Answer", + "title": null, + "value": null, + "format": "", + "secret": false, + "advanced": false, + "description": null + }, + "metadata": { + "position": { + "x": 813.9483980308338, + "y": 212.6556938463584 + } + }, + "input_links": [ + { + "id": "e587d4a7-3753-443e-b4e9-f10b29ff0425", + "source_id": "f14630d5-4344-46c9-a2e4-4516e51aff54", + "sink_id": "a23d4c5c-e9ca-496c-a5e8-57534dfa9085", + "source_name": "response", + "sink_name": "value", + "is_static": false + } + ], + "output_links": [] + }, + { + "id": "7dcd5103-e3f9-434c-9bde-1b6a844d979d", + "block_id": "363ae599-353e-4804-937e-b2ee3cef3da4", + "input_default": { + "name": "Error", + "title": null, + "value": null, + "format": "", + "secret": false, + "advanced": false, + "description": null + }, + "metadata": { + "position": { + "x": 1435.786315062846, + "y": 222.89116078136777 + } + }, + "input_links": [ + { + "id": "60e288be-3de9-417c-9c6a-10897293203c", + "source_id": "f14630d5-4344-46c9-a2e4-4516e51aff54", + "sink_id": "7dcd5103-e3f9-434c-9bde-1b6a844d979d", + "source_name": "error", + "sink_name": "value", + "is_static": false + } + ], + "output_links": [] + } + ], + "links": [ + { + "id": 
"60e288be-3de9-417c-9c6a-10897293203c", + "source_id": "f14630d5-4344-46c9-a2e4-4516e51aff54", + "sink_id": "7dcd5103-e3f9-434c-9bde-1b6a844d979d", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "7056a45f-c928-46f5-aed2-b91f5779e3d8", + "source_id": "95c75f27-10a9-4caf-980a-4496a9fb1da0", + "sink_id": "f14630d5-4344-46c9-a2e4-4516e51aff54", + "source_name": "result", + "sink_name": "prompt", + "is_static": true + }, + { + "id": "e587d4a7-3753-443e-b4e9-f10b29ff0425", + "source_id": "f14630d5-4344-46c9-a2e4-4516e51aff54", + "sink_id": "a23d4c5c-e9ca-496c-a5e8-57534dfa9085", + "source_name": "response", + "sink_name": "value", + "is_static": false + } + ], + "forked_from_id": null, + "forked_from_version": null, + "input_schema": { + "type": "object", + "properties": { + "Question": { + "advanced": false, + "anyOf": [ + { + "format": "short-text", + "type": "string" + }, + { + "type": "null" + } + ], + "secret": false, + "title": "Question", + "default": "Why is the sky blue?" + } + }, + "required": [] + }, + "output_schema": { + "type": "object", + "properties": { + "Answer": { + "advanced": false, + "secret": false, + "title": "Answer" + }, + "Error": { + "advanced": false, + "secret": false, + "title": "Error" + } + }, + "required": [ + "Answer", + "Error" + ] + }, + "has_external_trigger": false, + "has_human_in_the_loop": false, + "trigger_setup_info": null + } + ], + "user_id": "", + "created_at": "2025-10-13T20:55:15.988Z", + "input_schema": { + "type": "object", + "properties": { + "Location": { + "advanced": false, + "anyOf": [ + { + "format": "short-text", + "type": "string" + }, + { + "type": "null" + } + ], + "secret": false, + "title": "Location", + "description": "Enter a specific location (e.g. city, neighborhood, or address). For example: \u201cSan Francisco\u201d, \u201cPalo Alto\u201d, or \u201cSoHo, NYC\u201d. This helps narrow the search to a specific area.", + "default": "Edinburgh" + }, + "Search Radius (meters, max 50,000)": { + "advanced": false, + "anyOf": [ + { + "type": "integer" + }, + { + "type": "null" + } + ], + "secret": false, + "title": "Search Radius (meters, max 50,000)", + "description": "Defines how far around the specified location we\u2019ll look for businesses. For example, 5000 meters covers a large neighborhood or small city.", + "default": 10000 + }, + "Max Number of Businesses to Analyze (max 60)": { + "advanced": false, + "anyOf": [ + { + "type": "integer" + }, + { + "type": "null" + } + ], + "secret": false, + "title": "Max Number of Businesses to Analyze (max 60)", + "description": "The maximum number of businesses the agent will analyze from the search results. We\u2019ll attempt to identify the CEO or owner for each one.", + "default": 3 + }, + "Business Type": { + "advanced": false, + "anyOf": [ + { + "format": "short-text", + "type": "string" + }, + { + "type": "null" + } + ], + "secret": false, + "title": "Business Type", + "description": "Describe the kind of businesses you're targeting. For example: \u201cAI startups\u201d, \u201cdigital marketing agencies\u201d, or \u201ccoffee roasters\u201d. 
This will be used in a Google Maps search.", + "default": "coffee roasters" + } + }, + "required": [] + }, + "output_schema": { + "type": "object", + "properties": { + "Results": { + "advanced": false, + "secret": false, + "title": "Results" + }, + "Error": { + "advanced": false, + "secret": false, + "title": "Error" + } + }, + "required": [ + "Results", + "Error" + ] + }, + "has_external_trigger": false, + "has_human_in_the_loop": false, + "trigger_setup_info": null, + "credentials_input_schema": { + "properties": { + "google_maps_api_key_credentials": { + "credentials_provider": [ + "google_maps" + ], + "credentials_types": [ + "api_key" + ], + "properties": { + "id": { + "title": "Id", + "type": "string" + }, + "title": { + "anyOf": [ + { + "type": "string" + }, + { + "type": "null" + } + ], + "default": null, + "title": "Title" + }, + "provider": { + "const": "google_maps", + "title": "Provider", + "type": "string" + }, + "type": { + "const": "api_key", + "title": "Type", + "type": "string" + } + }, + "required": [ + "id", + "provider", + "type" + ], + "title": "CredentialsMetaInput[Literal[], Literal['api_key']]", + "type": "object", + "discriminator_values": [] + }, + "openai_api_key_credentials": { + "credentials_provider": [ + "openai" + ], + "credentials_types": [ + "api_key" + ], + "properties": { + "id": { + "title": "Id", + "type": "string" + }, + "title": { + "anyOf": [ + { + "type": "string" + }, + { + "type": "null" + } + ], + "default": null, + "title": "Title" + }, + "provider": { + "const": "openai", + "title": "Provider", + "type": "string" + }, + "type": { + "const": "api_key", + "title": "Type", + "type": "string" + } + }, + "required": [ + "id", + "provider", + "type" + ], + "title": "CredentialsMetaInput[Literal[], Literal['api_key']]", + "type": "object", + "discriminator": "model", + "discriminator_mapping": { + "Llama-3.3-70B-Instruct": "llama_api", + "Llama-3.3-8B-Instruct": "llama_api", + "Llama-4-Maverick-17B-128E-Instruct-FP8": "llama_api", + "Llama-4-Scout-17B-16E-Instruct-FP8": "llama_api", + "Qwen/Qwen2.5-72B-Instruct-Turbo": "aiml_api", + "amazon/nova-lite-v1": "open_router", + "amazon/nova-micro-v1": "open_router", + "amazon/nova-pro-v1": "open_router", + "claude-3-7-sonnet-20250219": "anthropic", + "claude-3-haiku-20240307": "anthropic", + "claude-haiku-4-5-20251001": "anthropic", + "claude-opus-4-1-20250805": "anthropic", + "claude-opus-4-20250514": "anthropic", + "claude-opus-4-5-20251101": "anthropic", + "claude-sonnet-4-20250514": "anthropic", + "claude-sonnet-4-5-20250929": "anthropic", + "cohere/command-r-08-2024": "open_router", + "cohere/command-r-plus-08-2024": "open_router", + "deepseek/deepseek-chat": "open_router", + "deepseek/deepseek-r1-0528": "open_router", + "dolphin-mistral:latest": "ollama", + "google/gemini-2.0-flash-001": "open_router", + "google/gemini-2.0-flash-lite-001": "open_router", + "google/gemini-2.5-flash": "open_router", + "google/gemini-2.5-flash-lite-preview-06-17": "open_router", + "google/gemini-2.5-pro-preview-03-25": "open_router", + "google/gemini-3-pro-preview": "open_router", + "gpt-3.5-turbo": "openai", + "gpt-4-turbo": "openai", + "gpt-4.1-2025-04-14": "openai", + "gpt-4.1-mini-2025-04-14": "openai", + "gpt-4o": "openai", + "gpt-4o-mini": "openai", + "gpt-5-2025-08-07": "openai", + "gpt-5-chat-latest": "openai", + "gpt-5-mini-2025-08-07": "openai", + "gpt-5-nano-2025-08-07": "openai", + "gpt-5.1-2025-11-13": "openai", + "gryphe/mythomax-l2-13b": "open_router", + "llama-3.1-8b-instant": "groq", + 
"llama-3.3-70b-versatile": "groq", + "llama3": "ollama", + "llama3.1:405b": "ollama", + "llama3.2": "ollama", + "llama3.3": "ollama", + "meta-llama/Llama-3.2-3B-Instruct-Turbo": "aiml_api", + "meta-llama/Llama-3.3-70B-Instruct-Turbo": "aiml_api", + "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo": "aiml_api", + "meta-llama/llama-4-maverick": "open_router", + "meta-llama/llama-4-scout": "open_router", + "microsoft/wizardlm-2-8x22b": "open_router", + "mistralai/mistral-nemo": "open_router", + "moonshotai/kimi-k2": "open_router", + "nousresearch/hermes-3-llama-3.1-405b": "open_router", + "nousresearch/hermes-3-llama-3.1-70b": "open_router", + "nvidia/llama-3.1-nemotron-70b-instruct": "aiml_api", + "o1": "openai", + "o1-mini": "openai", + "o3-2025-04-16": "openai", + "o3-mini": "openai", + "openai/gpt-oss-120b": "open_router", + "openai/gpt-oss-20b": "open_router", + "perplexity/sonar": "open_router", + "perplexity/sonar-deep-research": "open_router", + "perplexity/sonar-pro": "open_router", + "qwen/qwen3-235b-a22b-thinking-2507": "open_router", + "qwen/qwen3-coder": "open_router", + "v0-1.0-md": "v0", + "v0-1.5-lg": "v0", + "v0-1.5-md": "v0", + "x-ai/grok-4": "open_router", + "x-ai/grok-4-fast": "open_router", + "x-ai/grok-4.1-fast": "open_router", + "x-ai/grok-code-fast-1": "open_router" + }, + "discriminator_values": [ + "gpt-5-mini-2025-08-07" + ] + }, + "open_router_api_key_credentials": { + "credentials_provider": [ + "open_router" + ], + "credentials_types": [ + "api_key" + ], + "properties": { + "id": { + "title": "Id", + "type": "string" + }, + "title": { + "anyOf": [ + { + "type": "string" + }, + { + "type": "null" + } + ], + "default": null, + "title": "Title" + }, + "provider": { + "const": "open_router", + "title": "Provider", + "type": "string" + }, + "type": { + "const": "api_key", + "title": "Type", + "type": "string" + } + }, + "required": [ + "id", + "provider", + "type" + ], + "title": "CredentialsMetaInput[Literal[], Literal['api_key']]", + "type": "object", + "discriminator": "model", + "discriminator_mapping": { + "Llama-3.3-70B-Instruct": "llama_api", + "Llama-3.3-8B-Instruct": "llama_api", + "Llama-4-Maverick-17B-128E-Instruct-FP8": "llama_api", + "Llama-4-Scout-17B-16E-Instruct-FP8": "llama_api", + "Qwen/Qwen2.5-72B-Instruct-Turbo": "aiml_api", + "amazon/nova-lite-v1": "open_router", + "amazon/nova-micro-v1": "open_router", + "amazon/nova-pro-v1": "open_router", + "claude-3-7-sonnet-20250219": "anthropic", + "claude-3-haiku-20240307": "anthropic", + "claude-haiku-4-5-20251001": "anthropic", + "claude-opus-4-1-20250805": "anthropic", + "claude-opus-4-20250514": "anthropic", + "claude-opus-4-5-20251101": "anthropic", + "claude-sonnet-4-20250514": "anthropic", + "claude-sonnet-4-5-20250929": "anthropic", + "cohere/command-r-08-2024": "open_router", + "cohere/command-r-plus-08-2024": "open_router", + "deepseek/deepseek-chat": "open_router", + "deepseek/deepseek-r1-0528": "open_router", + "dolphin-mistral:latest": "ollama", + "google/gemini-2.0-flash-001": "open_router", + "google/gemini-2.0-flash-lite-001": "open_router", + "google/gemini-2.5-flash": "open_router", + "google/gemini-2.5-flash-lite-preview-06-17": "open_router", + "google/gemini-2.5-pro-preview-03-25": "open_router", + "google/gemini-3-pro-preview": "open_router", + "gpt-3.5-turbo": "openai", + "gpt-4-turbo": "openai", + "gpt-4.1-2025-04-14": "openai", + "gpt-4.1-mini-2025-04-14": "openai", + "gpt-4o": "openai", + "gpt-4o-mini": "openai", + "gpt-5-2025-08-07": "openai", + "gpt-5-chat-latest": "openai", + 
"gpt-5-mini-2025-08-07": "openai", + "gpt-5-nano-2025-08-07": "openai", + "gpt-5.1-2025-11-13": "openai", + "gryphe/mythomax-l2-13b": "open_router", + "llama-3.1-8b-instant": "groq", + "llama-3.3-70b-versatile": "groq", + "llama3": "ollama", + "llama3.1:405b": "ollama", + "llama3.2": "ollama", + "llama3.3": "ollama", + "meta-llama/Llama-3.2-3B-Instruct-Turbo": "aiml_api", + "meta-llama/Llama-3.3-70B-Instruct-Turbo": "aiml_api", + "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo": "aiml_api", + "meta-llama/llama-4-maverick": "open_router", + "meta-llama/llama-4-scout": "open_router", + "microsoft/wizardlm-2-8x22b": "open_router", + "mistralai/mistral-nemo": "open_router", + "moonshotai/kimi-k2": "open_router", + "nousresearch/hermes-3-llama-3.1-405b": "open_router", + "nousresearch/hermes-3-llama-3.1-70b": "open_router", + "nvidia/llama-3.1-nemotron-70b-instruct": "aiml_api", + "o1": "openai", + "o1-mini": "openai", + "o3-2025-04-16": "openai", + "o3-mini": "openai", + "openai/gpt-oss-120b": "open_router", + "openai/gpt-oss-20b": "open_router", + "perplexity/sonar": "open_router", + "perplexity/sonar-deep-research": "open_router", + "perplexity/sonar-pro": "open_router", + "qwen/qwen3-235b-a22b-thinking-2507": "open_router", + "qwen/qwen3-coder": "open_router", + "v0-1.0-md": "v0", + "v0-1.5-lg": "v0", + "v0-1.5-md": "v0", + "x-ai/grok-4": "open_router", + "x-ai/grok-4-fast": "open_router", + "x-ai/grok-4.1-fast": "open_router", + "x-ai/grok-code-fast-1": "open_router" + }, + "discriminator_values": [ + "perplexity/sonar-pro" + ] + } + }, + "required": [ + "google_maps_api_key_credentials", + "openai_api_key_credentials", + "open_router_api_key_credentials" + ], + "title": "DecisionMakerLeadFinderCredentialsInputSchema", + "type": "object" + } +} \ No newline at end of file diff --git a/autogpt_platform/backend/agents/agent_a03b0d8c-4751-43d6-a54e-c3b7856ba4e3.json b/autogpt_platform/backend/agents/agent_a03b0d8c-4751-43d6-a54e-c3b7856ba4e3.json new file mode 100644 index 0000000000..3a11726721 --- /dev/null +++ b/autogpt_platform/backend/agents/agent_a03b0d8c-4751-43d6-a54e-c3b7856ba4e3.json @@ -0,0 +1,1656 @@ +{ + "id": "6a6f8fcd-857d-469c-a98b-e90950a7851b", + "version": 57, + "is_active": true, + "name": "AI Video Generator: Create Viral-Ready Content in Seconds", + "description": "AI Shortform Video Generator: Create Viral-Ready Content in Seconds\n\nTransform trending topics into engaging shortform videos with this cutting-edge AI Video Generator. Perfect for content creators, social media managers, and marketers looking to capitalize on the latest news and viral trends. Simply input your desired video count and source website, and watch as the AI scours the internet for the hottest stories, crafting them into attention-grabbing scripts optimized for platforms like TikTok, Instagram Reels, and YouTube Shorts.\n\nKey features include:\n- Customizable video count (1-5 per generation)\n- Flexible source selection for trending topics\n- AI-driven script writing following best practices for shortform content\n- Hooks that capture attention in the first 3 seconds\n- Dual narrative storytelling for maximum engagement\n- SEO-optimized content to boost discoverability\n- Integration with video generation tools for seamless production\n\nFrom hook to conclusion, each script is meticulously crafted to maintain viewer interest, incorporating proven techniques like \"but so\" storytelling, visual metaphors, and strategically placed calls-to-action. 
The AI Shortform Video Generator streamlines your content creation process, allowing you to stay ahead of trends and consistently produce viral-worthy videos that resonate with your audience.", + "instructions": null, + "recommended_schedule_cron": null, + "nodes": [ + { + "id": "ca4397f3-3146-4721-8d45-027581169210", + "block_id": "3146e4fe-2cdd-4f29-bd12-0c9d5bb4deb0", + "input_default": { + "group": 1, + "dot_all": true, + "pattern": "(.*?)<\\/title>", + "find_all": false, + "case_sensitive": true + }, + "metadata": { + "position": { + "x": 4943.271705805728, + "y": 2591.4971054942084 + } + }, + "input_links": [ + { + "id": "dd046be2-e8a3-461b-9ca9-ff60d00c89e6", + "source_id": "f7c30e6b-4b42-4fa4-8c31-32475db03833", + "sink_id": "ca4397f3-3146-4721-8d45-027581169210", + "source_name": "response", + "sink_name": "text", + "is_static": false + } + ], + "output_links": [ + { + "id": "0c3311b3-fbcc-45db-bf90-f4ca892bdfbf", + "source_id": "ca4397f3-3146-4721-8d45-027581169210", + "sink_id": "98d315de-2f53-4f28-aec1-d254c307b86b", + "source_name": "positive", + "sink_name": "value", + "is_static": false + } + ], + "graph_id": "6a6f8fcd-857d-469c-a98b-e90950a7851b", + "graph_version": 57, + "webhook_id": null, + "webhook": null + }, + { + "id": "682e6436-c45e-44e4-a643-1562564302a2", + "block_id": "3146e4fe-2cdd-4f29-bd12-0c9d5bb4deb0", + "input_default": { + "group": 1, + "dot_all": true, + "pattern": "<news_story>(.*?)<\\/news_story>", + "find_all": false, + "case_sensitive": true + }, + "metadata": { + "position": { + "x": -1143.6210607383573, + "y": -10.931733959013457 + } + }, + "input_links": [ + { + "id": "fef2729e-95cc-4963-a956-95d33aa3bbc4", + "source_id": "6f373ad2-ff4f-44be-9328-27519f47afe3", + "sink_id": "682e6436-c45e-44e4-a643-1562564302a2", + "source_name": "response", + "sink_name": "text", + "is_static": false + } + ], + "output_links": [ + { + "id": "58c628e0-c1bb-4f7c-bcaa-a7cac3c53c4c", + "source_id": "682e6436-c45e-44e4-a643-1562564302a2", + "sink_id": "6ffba701-9296-4203-bbde-3c2c1a1247dd", + "source_name": "positive", + "sink_name": "prompt_values_#_FULL_STORY", + "is_static": false + }, + { + "id": "f7fd1362-8653-4283-a39c-201d177bc80e", + "source_id": "682e6436-c45e-44e4-a643-1562564302a2", + "sink_id": "d5ec4382-2bca-4655-bf38-5c6f65010b75", + "source_name": "positive", + "sink_name": "prompt_values_#_STORY", + "is_static": false + } + ], + "graph_id": "6a6f8fcd-857d-469c-a98b-e90950a7851b", + "graph_version": 57, + "webhook_id": null, + "webhook": null + }, + { + "id": "bb38c5ab-916d-439a-b566-7a51a5fd8433", + "block_id": "cc10ff7b-7753-4ff2-9af6-9399b1a7eddc", + "input_default": { + "text": "You can customise many of the properties of the video before generation, such as whether it should use AI generated video, which voices it should use, and the style of the video." 
+ }, + "metadata": { + "position": { + "x": 5053.550340120883, + "y": 803.8832822791481 + } + }, + "input_links": [], + "output_links": [], + "graph_id": "6a6f8fcd-857d-469c-a98b-e90950a7851b", + "graph_version": 57, + "webhook_id": null, + "webhook": null + }, + { + "id": "e14feaac-ab83-4708-9783-73c45b9c6fa8", + "block_id": "cc10ff7b-7753-4ff2-9af6-9399b1a7eddc", + "input_default": { + "text": "Press \"Advanced\" on this block and input your revid.ai API key.\n\nYou can get an API key by signing up and going to this page: https://www.revid.ai/account" + }, + "metadata": { + "position": { + "x": 5055.294942637052, + "y": 187.83618794207337 + } + }, + "input_links": [], + "output_links": [], + "graph_id": "6a6f8fcd-857d-469c-a98b-e90950a7851b", + "graph_version": 57, + "webhook_id": null, + "webhook": null + }, + { + "id": "75d91ec6-e3ff-4013-9a6d-43ea023e4989", + "block_id": "b29c1b50-5d0e-4d9f-8f9d-1b0e6fcbf0b1", + "input_default": { + "format": "%Y-%m-%d", + "offset": 0, + "trigger": "go" + }, + "metadata": { + "position": { + "x": 2195.9745600248634, + "y": 522.0698096335659 + } + }, + "input_links": [], + "output_links": [ + { + "id": "2f6c6627-e243-47b0-a78a-f65a4b0ba665", + "source_id": "75d91ec6-e3ff-4013-9a6d-43ea023e4989", + "sink_id": "6ffba701-9296-4203-bbde-3c2c1a1247dd", + "source_name": "date", + "sink_name": "prompt_values_#_DATE", + "is_static": false + } + ], + "graph_id": "6a6f8fcd-857d-469c-a98b-e90950a7851b", + "graph_version": 57, + "webhook_id": null, + "webhook": null + }, + { + "id": "98d315de-2f53-4f28-aec1-d254c307b86b", + "block_id": "363ae599-353e-4804-937e-b2ee3cef3da4", + "input_default": { + "name": "Short-Form Video Title", + "secret": false, + "advanced": false, + "description": "Hook-driven title optimised for TikTok, Instagram Reels & YouTube Shorts." + }, + "metadata": { + "position": { + "x": 5559.918574664145, + "y": 2596.448241637105 + } + }, + "input_links": [ + { + "id": "0c3311b3-fbcc-45db-bf90-f4ca892bdfbf", + "source_id": "ca4397f3-3146-4721-8d45-027581169210", + "sink_id": "98d315de-2f53-4f28-aec1-d254c307b86b", + "source_name": "positive", + "sink_name": "value", + "is_static": false + } + ], + "output_links": [], + "graph_id": "6a6f8fcd-857d-469c-a98b-e90950a7851b", + "graph_version": 57, + "webhook_id": null, + "webhook": null + }, + { + "id": "d9e779dc-8f71-4234-b91b-7e1c041e89a4", + "block_id": "363ae599-353e-4804-937e-b2ee3cef3da4", + "input_default": { + "name": "Video Script (+Visual Cues)", + "secret": false, + "advanced": false, + "description": "Complete spoken script with [visual directions] for your short-form video." 
+ }, + "metadata": { + "position": { + "x": 6250.997197019673, + "y": 2619.623453448885 + } + }, + "input_links": [ + { + "id": "c4a903cd-1d79-4ff8-b0f1-a112ec11a714", + "source_id": "6ffba701-9296-4203-bbde-3c2c1a1247dd", + "sink_id": "d9e779dc-8f71-4234-b91b-7e1c041e89a4", + "source_name": "response", + "sink_name": "value", + "is_static": false + } + ], + "output_links": [], + "graph_id": "6a6f8fcd-857d-469c-a98b-e90950a7851b", + "graph_version": 57, + "webhook_id": null, + "webhook": null + }, + { + "id": "6dbef347-9a49-4f24-9d1c-3d39c384f354", + "block_id": "361697fb-0c4f-4feb-aed3-8320c88c771b", + "input_default": { + "ratio": "9 / 16", + "voice": "Brian", + "frame_rate": 30, + "resolution": "720p", + "video_style": "movingImage", + "background_music": "Bladerunner 2049", + "generation_preset": "Default" + }, + "metadata": { + "position": { + "x": 4502.256834355213, + "y": -151.96056183992042 + } + }, + "input_links": [ + { + "id": "e2a7904d-fe01-49b4-bd10-e1fd23a31678", + "source_id": "6ffba701-9296-4203-bbde-3c2c1a1247dd", + "sink_id": "6dbef347-9a49-4f24-9d1c-3d39c384f354", + "source_name": "response", + "sink_name": "script", + "is_static": false + } + ], + "output_links": [ + { + "id": "8e93cfb8-5338-475c-8bab-4b4d3912a913", + "source_id": "6dbef347-9a49-4f24-9d1c-3d39c384f354", + "sink_id": "681a5773-4e22-4ba7-b972-8864110ee106", + "source_name": "video_url", + "sink_name": "value", + "is_static": false + }, + { + "id": "b751321b-4688-4ab7-af14-e8d15ae8e786", + "source_id": "6dbef347-9a49-4f24-9d1c-3d39c384f354", + "sink_id": "681a5773-4e22-4ba7-b972-8864110ee106", + "source_name": "error", + "sink_name": "value", + "is_static": false + } + ], + "graph_id": "6a6f8fcd-857d-469c-a98b-e90950a7851b", + "graph_version": 57, + "webhook_id": null, + "webhook": null + }, + { + "id": "d5ec4382-2bca-4655-bf38-5c6f65010b75", + "block_id": "1f292d4a-41a4-4977-9684-7c8d560b9f91", + "input_default": { + "model": "gpt-4o", + "retry": 3, + "prompt": "You are a search term generator. Your task is to take an input story and turn it into a concise Google search term that can be used for further research on the topic. Here is the input story:\n\n<story>\n{{STORY}}\n</story>\n\nTo create an effective search term:\n1. Identify the main topic or key elements of the story\n2. Choose 2-5 words or short phrases that best represent these elements\n3. Arrange these words in a logical order\n4. Avoid unnecessary words like articles (a, an, the) or conjunctions (and, or, but)\n\nYour output should be only the search term, without any additional explanation or commentary. 
The search term should be concise enough to be used in a Google search bar but descriptive enough to yield relevant results for further research on the story's topic.\n\nRespond with just your generated search term with no additional commentary or text.", + "ollama_host": "localhost:11434", + "prompt_values": {} + }, + "metadata": { + "position": { + "x": 18.004348090716576, + "y": -32.85966357648664 + } + }, + "input_links": [ + { + "id": "f7fd1362-8653-4283-a39c-201d177bc80e", + "source_id": "682e6436-c45e-44e4-a643-1562564302a2", + "sink_id": "d5ec4382-2bca-4655-bf38-5c6f65010b75", + "source_name": "positive", + "sink_name": "prompt_values_#_STORY", + "is_static": false + } + ], + "output_links": [ + { + "id": "287bc030-1ce3-433a-bef7-5936eff22f62", + "source_id": "d5ec4382-2bca-4655-bf38-5c6f65010b75", + "sink_id": "6ffba701-9296-4203-bbde-3c2c1a1247dd", + "source_name": "response", + "sink_name": "prompt_values_#_TOPIC", + "is_static": false + }, + { + "id": "336cf54b-f639-4ff0-bafd-fce3328d2046", + "source_id": "d5ec4382-2bca-4655-bf38-5c6f65010b75", + "sink_id": "bcc0c964-c614-4ce4-977e-36eff68a82a9", + "source_name": "response", + "sink_name": "query", + "is_static": false + }, + { + "id": "ea10a20b-f133-44fb-aa9d-e4cffd7d96c7", + "source_id": "d5ec4382-2bca-4655-bf38-5c6f65010b75", + "sink_id": "46cca925-1e4a-47ca-b2d3-ff2ef07ee917", + "source_name": "response", + "sink_name": "focus", + "is_static": false + } + ], + "graph_id": "6a6f8fcd-857d-469c-a98b-e90950a7851b", + "graph_version": 57, + "webhook_id": null, + "webhook": null + }, + { + "id": "06ff48a2-101e-4e20-ae92-b8ab94ad9df9", + "block_id": "db7d8f02-2f44-4c55-ab7a-eae0941f0c30", + "input_default": { + "format": "What is the latest news related to the following topic:\n\n<topic>\n{{TOPIC | safe}}\n</topic>\n\nTypically, you must return a list of the top 5-10 story headlines. \nIf however, if the user provided topic is clearly the headline of a very specific news story, just return just the most widely used headline of that story.", + "values": {} + }, + "metadata": { + "position": { + "x": -3240.1630809703283, + "y": -26.788488530455936 + } + }, + "input_links": [ + { + "id": "9164e2ea-7207-4c1d-88ea-32ae8bf0a429", + "source_id": "7168de4f-206f-43bc-8e36-55df20656b84", + "sink_id": "06ff48a2-101e-4e20-ae92-b8ab94ad9df9", + "source_name": "result", + "sink_name": "values_#_TOPIC", + "is_static": true + } + ], + "output_links": [ + { + "id": "9c1fdb9d-a860-4645-8e78-13b1c6a961e9", + "source_id": "06ff48a2-101e-4e20-ae92-b8ab94ad9df9", + "sink_id": "cab3dbde-1473-465a-8bf4-71ba69e5a5de", + "source_name": "output", + "sink_name": "prompt", + "is_static": false + } + ], + "graph_id": "6a6f8fcd-857d-469c-a98b-e90950a7851b", + "graph_version": 57, + "webhook_id": null, + "webhook": null + }, + { + "id": "7168de4f-206f-43bc-8e36-55df20656b84", + "block_id": "7fcd3bcb-8e1b-4e69-903d-32d3d4a92158", + "input_default": { + "name": "Video Topic / Headline", + "value": "AutoGPT", + "secret": false, + "advanced": false, + "description": "Type a broad topic (e.g. \u2018Climate Tech\u2019) or paste an exact headline (e.g. 
\u2018IBM unveils quantum roadmap\u2019) to generate a short-form video about it.", + "placeholder_values": [] + }, + "metadata": { + "position": { + "x": -4180.031840506213, + "y": -26.11710632181729 + } + }, + "input_links": [], + "output_links": [ + { + "id": "9164e2ea-7207-4c1d-88ea-32ae8bf0a429", + "source_id": "7168de4f-206f-43bc-8e36-55df20656b84", + "sink_id": "06ff48a2-101e-4e20-ae92-b8ab94ad9df9", + "source_name": "result", + "sink_name": "values_#_TOPIC", + "is_static": true + } + ], + "graph_id": "6a6f8fcd-857d-469c-a98b-e90950a7851b", + "graph_version": 57, + "webhook_id": null, + "webhook": null + }, + { + "id": "46cca925-1e4a-47ca-b2d3-ff2ef07ee917", + "block_id": "a0a69be1-4528-491c-a85a-a4ab6873e3f0", + "input_default": { + "focus": "general information", + "model": "gpt-4.1-2025-04-14", + "style": "detailed", + "ollama_host": "localhost:11434", + "chunk_overlap": 300 + }, + "metadata": { + "position": { + "x": 1155.0727064918613, + "y": -31.31013909615018 + } + }, + "input_links": [ + { + "id": "3a679d8a-c62d-498b-a83d-8b7241449526", + "source_id": "bcc0c964-c614-4ce4-977e-36eff68a82a9", + "sink_id": "46cca925-1e4a-47ca-b2d3-ff2ef07ee917", + "source_name": "results", + "sink_name": "text", + "is_static": false + }, + { + "id": "ea10a20b-f133-44fb-aa9d-e4cffd7d96c7", + "source_id": "d5ec4382-2bca-4655-bf38-5c6f65010b75", + "sink_id": "46cca925-1e4a-47ca-b2d3-ff2ef07ee917", + "source_name": "response", + "sink_name": "focus", + "is_static": false + } + ], + "output_links": [ + { + "id": "476a4503-87f1-40a4-8dd7-c188061a4870", + "source_id": "46cca925-1e4a-47ca-b2d3-ff2ef07ee917", + "sink_id": "6ffba701-9296-4203-bbde-3c2c1a1247dd", + "source_name": "summary", + "sink_name": "prompt_values_#_RESEARCH", + "is_static": false + } + ], + "graph_id": "6a6f8fcd-857d-469c-a98b-e90950a7851b", + "graph_version": 57, + "webhook_id": null, + "webhook": null + }, + { + "id": "6d53ddf1-b39d-4dbc-931a-aeae658f9edd", + "block_id": "363ae599-353e-4804-937e-b2ee3cef3da4", + "input_default": { + "name": "Optimised Hashtags", + "secret": false, + "advanced": false, + "description": "One or two high-reach hashtags to boost discoverability." 
+ }, + "metadata": { + "position": { + "x": 5546.663436566226, + "y": 4294.468802152407 + } + }, + "input_links": [ + { + "id": "ff3d2c2d-d0c4-48ac-a147-09518f3c920b", + "source_id": "c1edbb59-d97d-4f13-aaaf-9f089280e6d6", + "sink_id": "6d53ddf1-b39d-4dbc-931a-aeae658f9edd", + "source_name": "positive", + "sink_name": "value", + "is_static": false + } + ], + "output_links": [], + "graph_id": "6a6f8fcd-857d-469c-a98b-e90950a7851b", + "graph_version": 57, + "webhook_id": null, + "webhook": null + }, + { + "id": "c1edbb59-d97d-4f13-aaaf-9f089280e6d6", + "block_id": "3146e4fe-2cdd-4f29-bd12-0c9d5bb4deb0", + "input_default": { + "group": 1, + "dot_all": true, + "pattern": "<hashtags>(.*?)<\\/hashtags>", + "find_all": false, + "case_sensitive": true + }, + "metadata": { + "position": { + "x": 4989.880194526019, + "y": 4288.181263622385 + } + }, + "input_links": [ + { + "id": "348a7031-04b5-43c8-b20a-560772b6f8b4", + "source_id": "f7c30e6b-4b42-4fa4-8c31-32475db03833", + "sink_id": "c1edbb59-d97d-4f13-aaaf-9f089280e6d6", + "source_name": "response", + "sink_name": "text", + "is_static": false + } + ], + "output_links": [ + { + "id": "ff3d2c2d-d0c4-48ac-a147-09518f3c920b", + "source_id": "c1edbb59-d97d-4f13-aaaf-9f089280e6d6", + "sink_id": "6d53ddf1-b39d-4dbc-931a-aeae658f9edd", + "source_name": "positive", + "sink_name": "value", + "is_static": false + } + ], + "graph_id": "6a6f8fcd-857d-469c-a98b-e90950a7851b", + "graph_version": 57, + "webhook_id": null, + "webhook": null + }, + { + "id": "681a5773-4e22-4ba7-b972-8864110ee106", + "block_id": "363ae599-353e-4804-937e-b2ee3cef3da4", + "input_default": { + "name": "Video", + "secret": false, + "advanced": false, + "description": "The finished video created by the Agent" + }, + "metadata": { + "position": { + "x": 5416.720683710664, + "y": -176.92102066765722 + } + }, + "input_links": [ + { + "id": "8e93cfb8-5338-475c-8bab-4b4d3912a913", + "source_id": "6dbef347-9a49-4f24-9d1c-3d39c384f354", + "sink_id": "681a5773-4e22-4ba7-b972-8864110ee106", + "source_name": "video_url", + "sink_name": "value", + "is_static": false + }, + { + "id": "b751321b-4688-4ab7-af14-e8d15ae8e786", + "source_id": "6dbef347-9a49-4f24-9d1c-3d39c384f354", + "sink_id": "681a5773-4e22-4ba7-b972-8864110ee106", + "source_name": "error", + "sink_name": "value", + "is_static": false + } + ], + "output_links": [], + "graph_id": "6a6f8fcd-857d-469c-a98b-e90950a7851b", + "graph_version": 57, + "webhook_id": null, + "webhook": null + }, + { + "id": "bcc0c964-c614-4ce4-977e-36eff68a82a9", + "block_id": "87840993-2053-44b7-8da4-187ad4ee518c", + "input_default": {}, + "metadata": { + "position": { + "x": 594.3494633641187, + "y": -26.67385432381235 + } + }, + "input_links": [ + { + "id": "336cf54b-f639-4ff0-bafd-fce3328d2046", + "source_id": "d5ec4382-2bca-4655-bf38-5c6f65010b75", + "sink_id": "bcc0c964-c614-4ce4-977e-36eff68a82a9", + "source_name": "response", + "sink_name": "query", + "is_static": false + } + ], + "output_links": [ + { + "id": "3a679d8a-c62d-498b-a83d-8b7241449526", + "source_id": "bcc0c964-c614-4ce4-977e-36eff68a82a9", + "sink_id": "46cca925-1e4a-47ca-b2d3-ff2ef07ee917", + "source_name": "results", + "sink_name": "text", + "is_static": false + } + ], + "graph_id": "6a6f8fcd-857d-469c-a98b-e90950a7851b", + "graph_version": 57, + "webhook_id": null, + "webhook": null + }, + { + "id": "d7591a9b-daa7-4f03-852f-fb8d26fab57e", + "block_id": "cc10ff7b-7753-4ff2-9af6-9399b1a7eddc", + "input_default": { + "text": "WARNING: Depending on the settings you select and the length of 
the script, video generation can take a long time. In Extreme cases 10 minutes.\n\nSettings such as AI video take longer than stock footage." + }, + "metadata": { + "position": { + "x": 5055.294942637052, + "y": 495.83618794207337 + } + }, + "input_links": [], + "output_links": [], + "graph_id": "6a6f8fcd-857d-469c-a98b-e90950a7851b", + "graph_version": 57, + "webhook_id": null, + "webhook": null + }, + { + "id": "a6eb119e-833c-4702-b0d1-c6e01fa79df5", + "block_id": "cc10ff7b-7753-4ff2-9af6-9399b1a7eddc", + "input_default": { + "text": "Ask Perplexity (live web-searching answer AI)" + }, + "metadata": { + "position": { + "x": -2825.229479767265, + "y": -428.19471165159075 + } + }, + "input_links": [], + "output_links": [], + "graph_id": "6a6f8fcd-857d-469c-a98b-e90950a7851b", + "graph_version": 57, + "webhook_id": null, + "webhook": null + }, + { + "id": "c6d73929-e112-4c02-8d49-686ceda1fabd", + "block_id": "cc10ff7b-7753-4ff2-9af6-9399b1a7eddc", + "input_default": { + "text": "Select the news Story to make a video about" + }, + "metadata": { + "position": { + "x": -1394.0304079884431, + "y": -437.4027905400797 + } + }, + "input_links": [], + "output_links": [], + "graph_id": "6a6f8fcd-857d-469c-a98b-e90950a7851b", + "graph_version": 57, + "webhook_id": null, + "webhook": null + }, + { + "id": "fadf0fcc-6506-4423-8f0b-e8bb606524e2", + "block_id": "cc10ff7b-7753-4ff2-9af6-9399b1a7eddc", + "input_default": { + "text": "Conduct background research on the chosen news story" + }, + "metadata": { + "position": { + "x": 670.5595728367297, + "y": -440.5403142618513 + } + }, + "input_links": [], + "output_links": [], + "graph_id": "6a6f8fcd-857d-469c-a98b-e90950a7851b", + "graph_version": 57, + "webhook_id": null, + "webhook": null + }, + { + "id": "d38b848f-e8ed-40ed-ad89-31664875373d", + "block_id": "cc10ff7b-7753-4ff2-9af6-9399b1a7eddc", + "input_default": { + "text": "Write a highly virality optimised script using proven methods. \n\nThis step also directs the visuals." + }, + "metadata": { + "position": { + "x": 2798.2437838618944, + "y": -446.598452987355 + } + }, + "input_links": [], + "output_links": [], + "graph_id": "6a6f8fcd-857d-469c-a98b-e90950a7851b", + "graph_version": 57, + "webhook_id": null, + "webhook": null + }, + { + "id": "08db983c-3312-4a79-b195-9207a27f44f9", + "block_id": "cc10ff7b-7753-4ff2-9af6-9399b1a7eddc", + "input_default": { + "text": "Create the video!" + }, + "metadata": { + "position": { + "x": 4579.82086567059, + "y": -517.3760892955992 + } + }, + "input_links": [], + "output_links": [], + "graph_id": "6a6f8fcd-857d-469c-a98b-e90950a7851b", + "graph_version": 57, + "webhook_id": null, + "webhook": null + }, + { + "id": "7dc90a26-3094-472f-9441-f0a8eb5e4ac8", + "block_id": "cc10ff7b-7753-4ff2-9af6-9399b1a7eddc", + "input_default": { + "text": "Generate metadata for the video such as the title and hashtags." 
+ }, + "metadata": { + "position": { + "x": 4239.2662913385075, + "y": 2486.858941366123 + } + }, + "input_links": [], + "output_links": [], + "graph_id": "6a6f8fcd-857d-469c-a98b-e90950a7851b", + "graph_version": 57, + "webhook_id": null, + "webhook": null + }, + { + "id": "cab3dbde-1473-465a-8bf4-71ba69e5a5de", + "block_id": "1f292d4a-41a4-4977-9684-7c8d560b9f91", + "input_default": { + "model": "perplexity/sonar-pro", + "retry": 3, + "ollama_host": "localhost:11434", + "prompt_values": {} + }, + "metadata": { + "position": { + "x": -2670.231626894567, + "y": -42.249645219997035 + } + }, + "input_links": [ + { + "id": "9c1fdb9d-a860-4645-8e78-13b1c6a961e9", + "source_id": "06ff48a2-101e-4e20-ae92-b8ab94ad9df9", + "sink_id": "cab3dbde-1473-465a-8bf4-71ba69e5a5de", + "source_name": "output", + "sink_name": "prompt", + "is_static": false + } + ], + "output_links": [ + { + "id": "afb2edf8-8f05-4a6c-af7d-86c9ae0009b6", + "source_id": "cab3dbde-1473-465a-8bf4-71ba69e5a5de", + "sink_id": "6f373ad2-ff4f-44be-9328-27519f47afe3", + "source_name": "response", + "sink_name": "prompt_values_#_RESULTS", + "is_static": false + } + ], + "graph_id": "6a6f8fcd-857d-469c-a98b-e90950a7851b", + "graph_version": 57, + "webhook_id": null, + "webhook": null + }, + { + "id": "6ffba701-9296-4203-bbde-3c2c1a1247dd", + "block_id": "1f292d4a-41a4-4977-9684-7c8d560b9f91", + "input_default": { + "model": "claude-sonnet-4-5-20250929", + "retry": 3, + "prompt": "Write a shortform video script about this news story\n<story>\n{{TOPIC}}\n---\n{{FULL_STORY}}\n</story>\n\nUsing this in depth research:\n<research>\n{{RESEARCH}}\n</research>\n\nWhen writing your script, remember that today's date is {{DATE}}. Don't mention this, but take it into account when positioning events related to the story in time.\nEnsure what you write about is true and does not include any assumptions to avoid creating fake news.\nYour video should be from the perspective of reporting on the story.", + "sys_prompt": "You are a shortform video script writer. Write a short form video script based on the user's topic that exactly complies with the following guide, and uses the example script below for reference on format, detail level and quality. \n\nKey shortform video guide:\n```\nWriting:\n\n1. Hooks:\n - Keep hooks under 3 seconds, absolutely no longer\n - Make them visually pleasing and understandable without sound - test by watching on mute\n - Use foreshadowing to tease the video's conclusion, but don't reveal everything\n - Example: Start with a reaction to a $5 gift, but don't show the gift itself until the end\n - Ensure the hook is so visually strong it could be used as a thumbnail for a long-form video\n\n2. Storytelling:\n - \"But so\" storytelling: Constantly set up and resolve conflicts\n Example: \"I was at home but I got bored, so I ended up going for a walk, but it started raining, so I ended up trying to figure out what to do.\"\n - Dual narrative storytelling:\n - Voice: Focus on the main action or process\n - Visuals: Add emotional depth or context without explicitly stating it\n Example: Talking about making a $5 Valentine's gift while showing photos of grandma with late grandpa\n - Create multiple \"but\" moments to repeatedly grab attention\n\n3. Language:\n - Be ruthlessly concise - cut any word that doesn't add value\n - Use \"but\" and \"so\" frequently to create micro-cliffhangers\n - Avoid explaining what's visually obvious\n\n4. 
Content:\n - Address 2-3 potential viewer questions in each video\n - Provide multiple reasons for viewers to care (e.g., health benefits AND weight loss for a diet video)\n - Use numbered lists (e.g., \"3 reasons why\") to create clear structure and expectation\n\nDirecting:\n\n1. Visual Composition:\n - Center all important elements in the frame\n - Limit to 2-3 objects max in any given shot\n - First frame must be thumbnail-worthy - test this specifically\n\n2. Pacing:\n - Hook: 1-3 seconds, ultra-fast\n - Middle: Medium pace, but with micro-variations to maintain interest\n - Peak: Place a clear high point (joke, reveal, etc.) around 60-70% through\n - Quick ending: Last 2-3 seconds should wrap up decisively\n\n3. Emotional Journey:\n - Plan 3-4 distinct emotional beats in each video\n - Ensure the peak emotion (whether humor, surprise, etc.) is crystal clear\n - End on the second-strongest emotional beat\n\n4. Speaking:\n - Speak at a normal pace, but cut out all filler words\n - After every dense information bit, pause for 0.5-1 second\n - Vary tone to emphasize key points - don't be monotonous\n\n5. Visuals:\n - Every 2-3 seconds should have a visual change (new shot, text overlay, etc.)\n - Use visual metaphors to explain complex ideas quickly\n - Incorporate progress indicators (e.g., step counters) to maintain viewer orientation\n\nStructure:\n\n1. Overall Structure:\n - Hook (1-3 seconds): Instantly grab attention\n - Context (2-4 seconds): Quickly establish what's happening\n - Main content (20-25 seconds): Deliver value in bite-sized chunks\n - Peak (3-5 seconds): Clear climax or punchline\n - Ending (1-2 seconds): Swift wrap-up\n\n2. Retention Tactics:\n - Every single line must progress the story - ruthlessly cut any that don't\n - Place \"but\" moments every 5-7 seconds to maintain tension\n - Use visual progress indicators that are always on screen (e.g., step counter in corner)\n\n3. Length:\n - Aim for 30-35 seconds initially (Jenny's sweet spot is 34 seconds)\n - Test variations of +/- 5 seconds to find your ideal length\n\n4. Call-to-Action (CTA):\n - Place CTA at 50-60% mark in the video\n - Integrate CTA into the story (e.g., \"Subscribe to see what happened next\")\n - Keep CTA under 2 seconds\n\n5. Ending:\n - Last 1-2 seconds should be quick and decisive\n - Consider a cliffhanger ending to encourage rewatches\n - If part of a series, tease the next video in the last second\n\n6. Content Buckets:\n - Develop 2-4 distinct series with near-identical structures\n - Use exact same title format for each video in a series\n - Example bucket: \"$1 [food item] vs Restaurant version\"\n\n7. Testing and Iteration:\n - Create at least 3 versions of each video, changing only one element\n - Analyze first 24-48 hours of performance rigorously\n - Look for patterns in top 10% of videos and double down on those elements\n```\n\nExample Script:\nHere's an example Full Script with Visual Direction (full_script) for a video titled \"How many flavors can I get for $1\":\n```\n[close-up of ice cream machine with multiple flavors visible] How many flavors can you get with a dollar? \n[pan to show a full ice cream cup, price tag visible] That's going to be $20, and it's just vanilla. \n[Jenny holding a tiny cup, moving towards ice cream machine] So I brought a tiny cup to get every flavor without spending more than a dollar. \n[close-up of tiny cup under ice cream dispenser] What do these levers do? \n[Jenny's mom entering frame, looking concerned] Jenny, they're gonna kick you out! 
\n[cup filling with multiple colors of ice cream] I'm more concerned that the more flavors I added, the less space I had in my cup. \n[Jenny eating some ice cream from the cup] Oh my goodness, I put the same flavor twice on accident. \n[Jenny looking around furtively] Testing to make extra room. \n[store employee walking by in background] They're gonna kick us out. Look, I called the cops! \n[text overlay \"Subscribe for more challenges!\"] All this for one subscriber. \n[close-up of ice cream machine with multiple levers] So I hurry through the final flavors by pressing a special button that adds two flavors at once. \n[cup overflowing slightly, Jenny looking panicked] That's more than a dollar! Hurry up! \n[Jenny holding the overfilled cup, moving to cashier] 14 ice cream flavors... is it gonna be a dollar? \n[cash register display showing price] $2.48 \n[Jenny's disappointed face, quick cut to black] No ice cream for you... again. \n```\n\nThis script follows the key principles of:\n- A quick, visually appealing hook\n- Constant progression and conflict\n- Dual narrative storytelling (spoken story about getting flavors, visual story about potentially getting in trouble)\n- Foreshadowing the ending (mentioning \"again\" to hint at a previous video)\n- Quick, impactful ending\nRemember, every single second and line serves a purpose in moving the story forward and maintaining viewer engagement.\n\n- [text between brackets] will be used to guide the the visuals and ignored by the speaker\n- Breaking lines to force clip separation and media change\n\nEverything written outside of [ ] will be spoken aloud, so never include non-spoken words like \"*sigh*\" and just leave empty lines rather than marking sections as \"None\".\n\nOutput only the full script with no commentary or additional text, titles or parenthesis.", + "ollama_host": "localhost:11434", + "prompt_values": {} + }, + "metadata": { + "position": { + "x": 2757.2350969970985, + "y": -47.24840953640182 + } + }, + "input_links": [ + { + "id": "2f6c6627-e243-47b0-a78a-f65a4b0ba665", + "source_id": "75d91ec6-e3ff-4013-9a6d-43ea023e4989", + "sink_id": "6ffba701-9296-4203-bbde-3c2c1a1247dd", + "source_name": "date", + "sink_name": "prompt_values_#_DATE", + "is_static": false + }, + { + "id": "287bc030-1ce3-433a-bef7-5936eff22f62", + "source_id": "d5ec4382-2bca-4655-bf38-5c6f65010b75", + "sink_id": "6ffba701-9296-4203-bbde-3c2c1a1247dd", + "source_name": "response", + "sink_name": "prompt_values_#_TOPIC", + "is_static": false + }, + { + "id": "58c628e0-c1bb-4f7c-bcaa-a7cac3c53c4c", + "source_id": "682e6436-c45e-44e4-a643-1562564302a2", + "sink_id": "6ffba701-9296-4203-bbde-3c2c1a1247dd", + "source_name": "positive", + "sink_name": "prompt_values_#_FULL_STORY", + "is_static": false + }, + { + "id": "476a4503-87f1-40a4-8dd7-c188061a4870", + "source_id": "46cca925-1e4a-47ca-b2d3-ff2ef07ee917", + "sink_id": "6ffba701-9296-4203-bbde-3c2c1a1247dd", + "source_name": "summary", + "sink_name": "prompt_values_#_RESEARCH", + "is_static": false + } + ], + "output_links": [ + { + "id": "e2a7904d-fe01-49b4-bd10-e1fd23a31678", + "source_id": "6ffba701-9296-4203-bbde-3c2c1a1247dd", + "sink_id": "6dbef347-9a49-4f24-9d1c-3d39c384f354", + "source_name": "response", + "sink_name": "script", + "is_static": false + }, + { + "id": "a11e06c0-5592-41c1-97c8-6fb12ff29164", + "source_id": "6ffba701-9296-4203-bbde-3c2c1a1247dd", + "sink_id": "f7c30e6b-4b42-4fa4-8c31-32475db03833", + "source_name": "response", + "sink_name": "prompt_values_#_SCRIPT", + "is_static": 
false + }, + { + "id": "c4a903cd-1d79-4ff8-b0f1-a112ec11a714", + "source_id": "6ffba701-9296-4203-bbde-3c2c1a1247dd", + "sink_id": "d9e779dc-8f71-4234-b91b-7e1c041e89a4", + "source_name": "response", + "sink_name": "value", + "is_static": false + } + ], + "graph_id": "6a6f8fcd-857d-469c-a98b-e90950a7851b", + "graph_version": 57, + "webhook_id": null, + "webhook": null + }, + { + "id": "6f373ad2-ff4f-44be-9328-27519f47afe3", + "block_id": "1f292d4a-41a4-4977-9684-7c8d560b9f91", + "input_default": { + "model": "claude-sonnet-4-5-20250929", + "retry": 3, + "prompt": "Extract the News Story from the search results that people would find most engaging and interesting. The story you select must be related to the topic {{TOPIC}}.\n\n <search_results>\n{{RESULTS}}\n</search_results> \n\nOutput your concise analysis inside <analysis> tags, then output your final unambiguous choice of news story inside <news_story> tags in the following format:\n\n<news_story>\nTitle: [TITLE]\nDate of event: [Date/N/A]\nStory: [Story Text]\n</news_story>", + "ollama_host": "localhost:11434", + "prompt_values": {} + }, + "metadata": { + "position": { + "x": -1726.816139381607, + "y": -19.133506652531025 + } + }, + "input_links": [ + { + "id": "afb2edf8-8f05-4a6c-af7d-86c9ae0009b6", + "source_id": "cab3dbde-1473-465a-8bf4-71ba69e5a5de", + "sink_id": "6f373ad2-ff4f-44be-9328-27519f47afe3", + "source_name": "response", + "sink_name": "prompt_values_#_RESULTS", + "is_static": false + } + ], + "output_links": [ + { + "id": "fef2729e-95cc-4963-a956-95d33aa3bbc4", + "source_id": "6f373ad2-ff4f-44be-9328-27519f47afe3", + "sink_id": "682e6436-c45e-44e4-a643-1562564302a2", + "source_name": "response", + "sink_name": "text", + "is_static": false + } + ], + "graph_id": "6a6f8fcd-857d-469c-a98b-e90950a7851b", + "graph_version": 57, + "webhook_id": null, + "webhook": null + }, + { + "id": "f7c30e6b-4b42-4fa4-8c31-32475db03833", + "block_id": "1f292d4a-41a4-4977-9684-7c8d560b9f91", + "input_default": { + "model": "claude-sonnet-4-5-20250929", + "retry": 3, + "prompt": "You are an expert in creating engaging, viral shortform content. Your task is to analyze a given video script and generate an interesting title and relevant hashtags for it.\n\nHere is the video script you will be working with:\n\n<video_script>\n{{SCRIPT}}\n</video_script>\n\nCarefully read and analyze the script, paying attention to:\n- The main topic or theme\n- Key points or messages\n- Unique or attention-grabbing elements\n- The overall tone or style\n- The target audience\n\nAfter analyzing the script, follow these steps:\n\n1. Generate your thoughts on the script and potential titles/hashtags. Consider what would make the video stand out and appeal to the target audience. Write your thoughts inside <thoughts> tags.\n\n2. Create an unambiguous, interesting, and engaging title for the video. The title should:\n - Accurately reflect the content of the video\n - Be concise and attention-grabbing\n - Use language that resonates with the target audience\n - Potentially include a hook or create curiosity\n\n3. Generate 1-2 relevant hashtags for the video. 
The hashtags should:\n - Be closely related to the video's content\n - Be popular or trending (if applicable)\n - Help increase the video's discoverability\n\nProvide your output in the following format:\n<thoughts>\nYour analysis and thoughts on the script, potential titles, and hashtags\n</thoughts>\n\n<title>Your final, unambiguous, interesting, and engaging title for the video\n\nYour 1-2 most relevant hashtags, separated by a single space if using multiple", + "ollama_host": "localhost:11434", + "prompt_values": {} + }, + "metadata": { + "position": { + "x": 4160.482363056444, + "y": 2861.811609434119 + } + }, + "input_links": [ + { + "id": "a11e06c0-5592-41c1-97c8-6fb12ff29164", + "source_id": "6ffba701-9296-4203-bbde-3c2c1a1247dd", + "sink_id": "f7c30e6b-4b42-4fa4-8c31-32475db03833", + "source_name": "response", + "sink_name": "prompt_values_#_SCRIPT", + "is_static": false + } + ], + "output_links": [ + { + "id": "348a7031-04b5-43c8-b20a-560772b6f8b4", + "source_id": "f7c30e6b-4b42-4fa4-8c31-32475db03833", + "sink_id": "c1edbb59-d97d-4f13-aaaf-9f089280e6d6", + "source_name": "response", + "sink_name": "text", + "is_static": false + }, + { + "id": "dd046be2-e8a3-461b-9ca9-ff60d00c89e6", + "source_id": "f7c30e6b-4b42-4fa4-8c31-32475db03833", + "sink_id": "ca4397f3-3146-4721-8d45-027581169210", + "source_name": "response", + "sink_name": "text", + "is_static": false + } + ], + "graph_id": "6a6f8fcd-857d-469c-a98b-e90950a7851b", + "graph_version": 57, + "webhook_id": null, + "webhook": null + } + ], + "links": [ + { + "id": "c4a903cd-1d79-4ff8-b0f1-a112ec11a714", + "source_id": "6ffba701-9296-4203-bbde-3c2c1a1247dd", + "sink_id": "d9e779dc-8f71-4234-b91b-7e1c041e89a4", + "source_name": "response", + "sink_name": "value", + "is_static": false + }, + { + "id": "b751321b-4688-4ab7-af14-e8d15ae8e786", + "source_id": "6dbef347-9a49-4f24-9d1c-3d39c384f354", + "sink_id": "681a5773-4e22-4ba7-b972-8864110ee106", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "e2a7904d-fe01-49b4-bd10-e1fd23a31678", + "source_id": "6ffba701-9296-4203-bbde-3c2c1a1247dd", + "sink_id": "6dbef347-9a49-4f24-9d1c-3d39c384f354", + "source_name": "response", + "sink_name": "script", + "is_static": false + }, + { + "id": "287bc030-1ce3-433a-bef7-5936eff22f62", + "source_id": "d5ec4382-2bca-4655-bf38-5c6f65010b75", + "sink_id": "6ffba701-9296-4203-bbde-3c2c1a1247dd", + "source_name": "response", + "sink_name": "prompt_values_#_TOPIC", + "is_static": false + }, + { + "id": "476a4503-87f1-40a4-8dd7-c188061a4870", + "source_id": "46cca925-1e4a-47ca-b2d3-ff2ef07ee917", + "sink_id": "6ffba701-9296-4203-bbde-3c2c1a1247dd", + "source_name": "summary", + "sink_name": "prompt_values_#_RESEARCH", + "is_static": false + }, + { + "id": "dd046be2-e8a3-461b-9ca9-ff60d00c89e6", + "source_id": "f7c30e6b-4b42-4fa4-8c31-32475db03833", + "sink_id": "ca4397f3-3146-4721-8d45-027581169210", + "source_name": "response", + "sink_name": "text", + "is_static": false + }, + { + "id": "0c3311b3-fbcc-45db-bf90-f4ca892bdfbf", + "source_id": "ca4397f3-3146-4721-8d45-027581169210", + "sink_id": "98d315de-2f53-4f28-aec1-d254c307b86b", + "source_name": "positive", + "sink_name": "value", + "is_static": false + }, + { + "id": "9c1fdb9d-a860-4645-8e78-13b1c6a961e9", + "source_id": "06ff48a2-101e-4e20-ae92-b8ab94ad9df9", + "sink_id": "cab3dbde-1473-465a-8bf4-71ba69e5a5de", + "source_name": "output", + "sink_name": "prompt", + "is_static": false + }, + { + "id": "a11e06c0-5592-41c1-97c8-6fb12ff29164", + "source_id": 
"6ffba701-9296-4203-bbde-3c2c1a1247dd", + "sink_id": "f7c30e6b-4b42-4fa4-8c31-32475db03833", + "source_name": "response", + "sink_name": "prompt_values_#_SCRIPT", + "is_static": false + }, + { + "id": "336cf54b-f639-4ff0-bafd-fce3328d2046", + "source_id": "d5ec4382-2bca-4655-bf38-5c6f65010b75", + "sink_id": "bcc0c964-c614-4ce4-977e-36eff68a82a9", + "source_name": "response", + "sink_name": "query", + "is_static": false + }, + { + "id": "ff3d2c2d-d0c4-48ac-a147-09518f3c920b", + "source_id": "c1edbb59-d97d-4f13-aaaf-9f089280e6d6", + "sink_id": "6d53ddf1-b39d-4dbc-931a-aeae658f9edd", + "source_name": "positive", + "sink_name": "value", + "is_static": false + }, + { + "id": "8e93cfb8-5338-475c-8bab-4b4d3912a913", + "source_id": "6dbef347-9a49-4f24-9d1c-3d39c384f354", + "sink_id": "681a5773-4e22-4ba7-b972-8864110ee106", + "source_name": "video_url", + "sink_name": "value", + "is_static": false + }, + { + "id": "9164e2ea-7207-4c1d-88ea-32ae8bf0a429", + "source_id": "7168de4f-206f-43bc-8e36-55df20656b84", + "sink_id": "06ff48a2-101e-4e20-ae92-b8ab94ad9df9", + "source_name": "result", + "sink_name": "values_#_TOPIC", + "is_static": true + }, + { + "id": "ea10a20b-f133-44fb-aa9d-e4cffd7d96c7", + "source_id": "d5ec4382-2bca-4655-bf38-5c6f65010b75", + "sink_id": "46cca925-1e4a-47ca-b2d3-ff2ef07ee917", + "source_name": "response", + "sink_name": "focus", + "is_static": false + }, + { + "id": "afb2edf8-8f05-4a6c-af7d-86c9ae0009b6", + "source_id": "cab3dbde-1473-465a-8bf4-71ba69e5a5de", + "sink_id": "6f373ad2-ff4f-44be-9328-27519f47afe3", + "source_name": "response", + "sink_name": "prompt_values_#_RESULTS", + "is_static": false + }, + { + "id": "3a679d8a-c62d-498b-a83d-8b7241449526", + "source_id": "bcc0c964-c614-4ce4-977e-36eff68a82a9", + "sink_id": "46cca925-1e4a-47ca-b2d3-ff2ef07ee917", + "source_name": "results", + "sink_name": "text", + "is_static": false + }, + { + "id": "fef2729e-95cc-4963-a956-95d33aa3bbc4", + "source_id": "6f373ad2-ff4f-44be-9328-27519f47afe3", + "sink_id": "682e6436-c45e-44e4-a643-1562564302a2", + "source_name": "response", + "sink_name": "text", + "is_static": false + }, + { + "id": "2f6c6627-e243-47b0-a78a-f65a4b0ba665", + "source_id": "75d91ec6-e3ff-4013-9a6d-43ea023e4989", + "sink_id": "6ffba701-9296-4203-bbde-3c2c1a1247dd", + "source_name": "date", + "sink_name": "prompt_values_#_DATE", + "is_static": false + }, + { + "id": "348a7031-04b5-43c8-b20a-560772b6f8b4", + "source_id": "f7c30e6b-4b42-4fa4-8c31-32475db03833", + "sink_id": "c1edbb59-d97d-4f13-aaaf-9f089280e6d6", + "source_name": "response", + "sink_name": "text", + "is_static": false + }, + { + "id": "58c628e0-c1bb-4f7c-bcaa-a7cac3c53c4c", + "source_id": "682e6436-c45e-44e4-a643-1562564302a2", + "sink_id": "6ffba701-9296-4203-bbde-3c2c1a1247dd", + "source_name": "positive", + "sink_name": "prompt_values_#_FULL_STORY", + "is_static": false + }, + { + "id": "f7fd1362-8653-4283-a39c-201d177bc80e", + "source_id": "682e6436-c45e-44e4-a643-1562564302a2", + "sink_id": "d5ec4382-2bca-4655-bf38-5c6f65010b75", + "source_name": "positive", + "sink_name": "prompt_values_#_STORY", + "is_static": false + } + ], + "forked_from_id": null, + "forked_from_version": null, + "sub_graphs": [], + "user_id": "", + "created_at": "2025-04-28T13:04:18.367Z", + "input_schema": { + "type": "object", + "properties": { + "Video Topic / Headline": { + "advanced": false, + "anyOf": [ + { + "format": "short-text", + "type": "string" + }, + { + "type": "null" + } + ], + "secret": false, + "title": "Video Topic / Headline", + "description": "Type a 
broad topic (e.g. \u2018Climate Tech\u2019) or paste an exact headline (e.g. \u2018IBM unveils quantum roadmap\u2019) to generate a short-form video about it.", + "default": "AutoGPT" + } + }, + "required": [] + }, + "output_schema": { + "type": "object", + "properties": { + "Short-Form Video Title": { + "advanced": false, + "secret": false, + "title": "Short-Form Video Title", + "description": "Hook-driven title optimised for TikTok, Instagram Reels & YouTube Shorts." + }, + "Video Script (+Visual Cues)": { + "advanced": false, + "secret": false, + "title": "Video Script (+Visual Cues)", + "description": "Complete spoken script with [visual directions] for your short-form video." + }, + "Optimised Hashtags": { + "advanced": false, + "secret": false, + "title": "Optimised Hashtags", + "description": "One or two high-reach hashtags to boost discoverability." + }, + "Video": { + "advanced": false, + "secret": false, + "title": "Video", + "description": "The finished video created by the Agent" + } + }, + "required": [ + "Short-Form Video Title", + "Video Script (+Visual Cues)", + "Optimised Hashtags", + "Video" + ] + }, + "has_external_trigger": false, + "has_human_in_the_loop": false, + "trigger_setup_info": null, + "credentials_input_schema": { + "properties": { + "revid_api_key_credentials": { + "credentials_provider": [ + "revid" + ], + "credentials_types": [ + "api_key" + ], + "properties": { + "id": { + "title": "Id", + "type": "string" + }, + "title": { + "anyOf": [ + { + "type": "string" + }, + { + "type": "null" + } + ], + "default": null, + "title": "Title" + }, + "provider": { + "const": "revid", + "title": "Provider", + "type": "string" + }, + "type": { + "const": "api_key", + "title": "Type", + "type": "string" + } + }, + "required": [ + "id", + "provider", + "type" + ], + "title": "CredentialsMetaInput[Literal[], Literal['api_key']]", + "type": "object", + "discriminator_values": [] + }, + "openai_api_key_credentials": { + "credentials_provider": [ + "openai" + ], + "credentials_types": [ + "api_key" + ], + "properties": { + "id": { + "title": "Id", + "type": "string" + }, + "title": { + "anyOf": [ + { + "type": "string" + }, + { + "type": "null" + } + ], + "default": null, + "title": "Title" + }, + "provider": { + "const": "openai", + "title": "Provider", + "type": "string" + }, + "type": { + "const": "api_key", + "title": "Type", + "type": "string" + } + }, + "required": [ + "id", + "provider", + "type" + ], + "title": "CredentialsMetaInput[Literal[], Literal['api_key']]", + "type": "object", + "discriminator": "model", + "discriminator_mapping": { + "Llama-3.3-70B-Instruct": "llama_api", + "Llama-3.3-8B-Instruct": "llama_api", + "Llama-4-Maverick-17B-128E-Instruct-FP8": "llama_api", + "Llama-4-Scout-17B-16E-Instruct-FP8": "llama_api", + "Qwen/Qwen2.5-72B-Instruct-Turbo": "aiml_api", + "amazon/nova-lite-v1": "open_router", + "amazon/nova-micro-v1": "open_router", + "amazon/nova-pro-v1": "open_router", + "claude-3-7-sonnet-20250219": "anthropic", + "claude-3-haiku-20240307": "anthropic", + "claude-haiku-4-5-20251001": "anthropic", + "claude-opus-4-1-20250805": "anthropic", + "claude-opus-4-20250514": "anthropic", + "claude-opus-4-5-20251101": "anthropic", + "claude-sonnet-4-20250514": "anthropic", + "claude-sonnet-4-5-20250929": "anthropic", + "cohere/command-r-08-2024": "open_router", + "cohere/command-r-plus-08-2024": "open_router", + "deepseek/deepseek-chat": "open_router", + "deepseek/deepseek-r1-0528": "open_router", + "dolphin-mistral:latest": "ollama", + 
"google/gemini-2.0-flash-001": "open_router", + "google/gemini-2.0-flash-lite-001": "open_router", + "google/gemini-2.5-flash": "open_router", + "google/gemini-2.5-flash-lite-preview-06-17": "open_router", + "google/gemini-2.5-pro-preview-03-25": "open_router", + "google/gemini-3-pro-preview": "open_router", + "gpt-3.5-turbo": "openai", + "gpt-4-turbo": "openai", + "gpt-4.1-2025-04-14": "openai", + "gpt-4.1-mini-2025-04-14": "openai", + "gpt-4o": "openai", + "gpt-4o-mini": "openai", + "gpt-5-2025-08-07": "openai", + "gpt-5-chat-latest": "openai", + "gpt-5-mini-2025-08-07": "openai", + "gpt-5-nano-2025-08-07": "openai", + "gpt-5.1-2025-11-13": "openai", + "gryphe/mythomax-l2-13b": "open_router", + "llama-3.1-8b-instant": "groq", + "llama-3.3-70b-versatile": "groq", + "llama3": "ollama", + "llama3.1:405b": "ollama", + "llama3.2": "ollama", + "llama3.3": "ollama", + "meta-llama/Llama-3.2-3B-Instruct-Turbo": "aiml_api", + "meta-llama/Llama-3.3-70B-Instruct-Turbo": "aiml_api", + "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo": "aiml_api", + "meta-llama/llama-4-maverick": "open_router", + "meta-llama/llama-4-scout": "open_router", + "microsoft/wizardlm-2-8x22b": "open_router", + "mistralai/mistral-nemo": "open_router", + "moonshotai/kimi-k2": "open_router", + "nousresearch/hermes-3-llama-3.1-405b": "open_router", + "nousresearch/hermes-3-llama-3.1-70b": "open_router", + "nvidia/llama-3.1-nemotron-70b-instruct": "aiml_api", + "o1": "openai", + "o1-mini": "openai", + "o3-2025-04-16": "openai", + "o3-mini": "openai", + "openai/gpt-oss-120b": "open_router", + "openai/gpt-oss-20b": "open_router", + "perplexity/sonar": "open_router", + "perplexity/sonar-deep-research": "open_router", + "perplexity/sonar-pro": "open_router", + "qwen/qwen3-235b-a22b-thinking-2507": "open_router", + "qwen/qwen3-coder": "open_router", + "v0-1.0-md": "v0", + "v0-1.5-lg": "v0", + "v0-1.5-md": "v0", + "x-ai/grok-4": "open_router", + "x-ai/grok-4-fast": "open_router", + "x-ai/grok-4.1-fast": "open_router", + "x-ai/grok-code-fast-1": "open_router" + }, + "discriminator_values": [ + "gpt-4.1-2025-04-14", + "gpt-4o" + ] + }, + "jina_api_key_credentials": { + "credentials_provider": [ + "jina" + ], + "credentials_types": [ + "api_key" + ], + "properties": { + "id": { + "title": "Id", + "type": "string" + }, + "title": { + "anyOf": [ + { + "type": "string" + }, + { + "type": "null" + } + ], + "default": null, + "title": "Title" + }, + "provider": { + "const": "jina", + "title": "Provider", + "type": "string" + }, + "type": { + "const": "api_key", + "title": "Type", + "type": "string" + } + }, + "required": [ + "id", + "provider", + "type" + ], + "title": "CredentialsMetaInput[Literal[], Literal['api_key']]", + "type": "object", + "discriminator_values": [] + }, + "open_router_api_key_credentials": { + "credentials_provider": [ + "open_router" + ], + "credentials_types": [ + "api_key" + ], + "properties": { + "id": { + "title": "Id", + "type": "string" + }, + "title": { + "anyOf": [ + { + "type": "string" + }, + { + "type": "null" + } + ], + "default": null, + "title": "Title" + }, + "provider": { + "const": "open_router", + "title": "Provider", + "type": "string" + }, + "type": { + "const": "api_key", + "title": "Type", + "type": "string" + } + }, + "required": [ + "id", + "provider", + "type" + ], + "title": "CredentialsMetaInput[Literal[], Literal['api_key']]", + "type": "object", + "discriminator": "model", + "discriminator_mapping": { + "Llama-3.3-70B-Instruct": "llama_api", + "Llama-3.3-8B-Instruct": "llama_api", + 
"Llama-4-Maverick-17B-128E-Instruct-FP8": "llama_api", + "Llama-4-Scout-17B-16E-Instruct-FP8": "llama_api", + "Qwen/Qwen2.5-72B-Instruct-Turbo": "aiml_api", + "amazon/nova-lite-v1": "open_router", + "amazon/nova-micro-v1": "open_router", + "amazon/nova-pro-v1": "open_router", + "claude-3-7-sonnet-20250219": "anthropic", + "claude-3-haiku-20240307": "anthropic", + "claude-haiku-4-5-20251001": "anthropic", + "claude-opus-4-1-20250805": "anthropic", + "claude-opus-4-20250514": "anthropic", + "claude-opus-4-5-20251101": "anthropic", + "claude-sonnet-4-20250514": "anthropic", + "claude-sonnet-4-5-20250929": "anthropic", + "cohere/command-r-08-2024": "open_router", + "cohere/command-r-plus-08-2024": "open_router", + "deepseek/deepseek-chat": "open_router", + "deepseek/deepseek-r1-0528": "open_router", + "dolphin-mistral:latest": "ollama", + "google/gemini-2.0-flash-001": "open_router", + "google/gemini-2.0-flash-lite-001": "open_router", + "google/gemini-2.5-flash": "open_router", + "google/gemini-2.5-flash-lite-preview-06-17": "open_router", + "google/gemini-2.5-pro-preview-03-25": "open_router", + "google/gemini-3-pro-preview": "open_router", + "gpt-3.5-turbo": "openai", + "gpt-4-turbo": "openai", + "gpt-4.1-2025-04-14": "openai", + "gpt-4.1-mini-2025-04-14": "openai", + "gpt-4o": "openai", + "gpt-4o-mini": "openai", + "gpt-5-2025-08-07": "openai", + "gpt-5-chat-latest": "openai", + "gpt-5-mini-2025-08-07": "openai", + "gpt-5-nano-2025-08-07": "openai", + "gpt-5.1-2025-11-13": "openai", + "gryphe/mythomax-l2-13b": "open_router", + "llama-3.1-8b-instant": "groq", + "llama-3.3-70b-versatile": "groq", + "llama3": "ollama", + "llama3.1:405b": "ollama", + "llama3.2": "ollama", + "llama3.3": "ollama", + "meta-llama/Llama-3.2-3B-Instruct-Turbo": "aiml_api", + "meta-llama/Llama-3.3-70B-Instruct-Turbo": "aiml_api", + "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo": "aiml_api", + "meta-llama/llama-4-maverick": "open_router", + "meta-llama/llama-4-scout": "open_router", + "microsoft/wizardlm-2-8x22b": "open_router", + "mistralai/mistral-nemo": "open_router", + "moonshotai/kimi-k2": "open_router", + "nousresearch/hermes-3-llama-3.1-405b": "open_router", + "nousresearch/hermes-3-llama-3.1-70b": "open_router", + "nvidia/llama-3.1-nemotron-70b-instruct": "aiml_api", + "o1": "openai", + "o1-mini": "openai", + "o3-2025-04-16": "openai", + "o3-mini": "openai", + "openai/gpt-oss-120b": "open_router", + "openai/gpt-oss-20b": "open_router", + "perplexity/sonar": "open_router", + "perplexity/sonar-deep-research": "open_router", + "perplexity/sonar-pro": "open_router", + "qwen/qwen3-235b-a22b-thinking-2507": "open_router", + "qwen/qwen3-coder": "open_router", + "v0-1.0-md": "v0", + "v0-1.5-lg": "v0", + "v0-1.5-md": "v0", + "x-ai/grok-4": "open_router", + "x-ai/grok-4-fast": "open_router", + "x-ai/grok-4.1-fast": "open_router", + "x-ai/grok-code-fast-1": "open_router" + }, + "discriminator_values": [ + "perplexity/sonar-pro" + ] + }, + "anthropic_api_key_credentials": { + "credentials_provider": [ + "anthropic" + ], + "credentials_types": [ + "api_key" + ], + "properties": { + "id": { + "title": "Id", + "type": "string" + }, + "title": { + "anyOf": [ + { + "type": "string" + }, + { + "type": "null" + } + ], + "default": null, + "title": "Title" + }, + "provider": { + "const": "anthropic", + "title": "Provider", + "type": "string" + }, + "type": { + "const": "api_key", + "title": "Type", + "type": "string" + } + }, + "required": [ + "id", + "provider", + "type" + ], + "title": "CredentialsMetaInput[Literal[], 
Literal['api_key']]", + "type": "object", + "discriminator": "model", + "discriminator_mapping": { + "Llama-3.3-70B-Instruct": "llama_api", + "Llama-3.3-8B-Instruct": "llama_api", + "Llama-4-Maverick-17B-128E-Instruct-FP8": "llama_api", + "Llama-4-Scout-17B-16E-Instruct-FP8": "llama_api", + "Qwen/Qwen2.5-72B-Instruct-Turbo": "aiml_api", + "amazon/nova-lite-v1": "open_router", + "amazon/nova-micro-v1": "open_router", + "amazon/nova-pro-v1": "open_router", + "claude-3-7-sonnet-20250219": "anthropic", + "claude-3-haiku-20240307": "anthropic", + "claude-haiku-4-5-20251001": "anthropic", + "claude-opus-4-1-20250805": "anthropic", + "claude-opus-4-20250514": "anthropic", + "claude-opus-4-5-20251101": "anthropic", + "claude-sonnet-4-20250514": "anthropic", + "claude-sonnet-4-5-20250929": "anthropic", + "cohere/command-r-08-2024": "open_router", + "cohere/command-r-plus-08-2024": "open_router", + "deepseek/deepseek-chat": "open_router", + "deepseek/deepseek-r1-0528": "open_router", + "dolphin-mistral:latest": "ollama", + "google/gemini-2.0-flash-001": "open_router", + "google/gemini-2.0-flash-lite-001": "open_router", + "google/gemini-2.5-flash": "open_router", + "google/gemini-2.5-flash-lite-preview-06-17": "open_router", + "google/gemini-2.5-pro-preview-03-25": "open_router", + "google/gemini-3-pro-preview": "open_router", + "gpt-3.5-turbo": "openai", + "gpt-4-turbo": "openai", + "gpt-4.1-2025-04-14": "openai", + "gpt-4.1-mini-2025-04-14": "openai", + "gpt-4o": "openai", + "gpt-4o-mini": "openai", + "gpt-5-2025-08-07": "openai", + "gpt-5-chat-latest": "openai", + "gpt-5-mini-2025-08-07": "openai", + "gpt-5-nano-2025-08-07": "openai", + "gpt-5.1-2025-11-13": "openai", + "gryphe/mythomax-l2-13b": "open_router", + "llama-3.1-8b-instant": "groq", + "llama-3.3-70b-versatile": "groq", + "llama3": "ollama", + "llama3.1:405b": "ollama", + "llama3.2": "ollama", + "llama3.3": "ollama", + "meta-llama/Llama-3.2-3B-Instruct-Turbo": "aiml_api", + "meta-llama/Llama-3.3-70B-Instruct-Turbo": "aiml_api", + "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo": "aiml_api", + "meta-llama/llama-4-maverick": "open_router", + "meta-llama/llama-4-scout": "open_router", + "microsoft/wizardlm-2-8x22b": "open_router", + "mistralai/mistral-nemo": "open_router", + "moonshotai/kimi-k2": "open_router", + "nousresearch/hermes-3-llama-3.1-405b": "open_router", + "nousresearch/hermes-3-llama-3.1-70b": "open_router", + "nvidia/llama-3.1-nemotron-70b-instruct": "aiml_api", + "o1": "openai", + "o1-mini": "openai", + "o3-2025-04-16": "openai", + "o3-mini": "openai", + "openai/gpt-oss-120b": "open_router", + "openai/gpt-oss-20b": "open_router", + "perplexity/sonar": "open_router", + "perplexity/sonar-deep-research": "open_router", + "perplexity/sonar-pro": "open_router", + "qwen/qwen3-235b-a22b-thinking-2507": "open_router", + "qwen/qwen3-coder": "open_router", + "v0-1.0-md": "v0", + "v0-1.5-lg": "v0", + "v0-1.5-md": "v0", + "x-ai/grok-4": "open_router", + "x-ai/grok-4-fast": "open_router", + "x-ai/grok-4.1-fast": "open_router", + "x-ai/grok-code-fast-1": "open_router" + }, + "discriminator_values": [ + "claude-sonnet-4-5-20250929" + ] + } + }, + "required": [ + "revid_api_key_credentials", + "openai_api_key_credentials", + "jina_api_key_credentials", + "open_router_api_key_credentials", + "anthropic_api_key_credentials" + ], + "title": "AIVideoGenerator:CreateViral-ReadyContentinSecondsCredentialsInputSchema", + "type": "object" + } +} \ No newline at end of file diff --git 
a/autogpt_platform/backend/agents/agent_a548e507-09a7-4b30-909c-f63fcda10fff.json b/autogpt_platform/backend/agents/agent_a548e507-09a7-4b30-909c-f63fcda10fff.json new file mode 100644 index 0000000000..f86b6f0826 --- /dev/null +++ b/autogpt_platform/backend/agents/agent_a548e507-09a7-4b30-909c-f63fcda10fff.json @@ -0,0 +1,1886 @@ +{ + "id": "e25b5ce7-e807-4b5a-8708-cb94fa0e2ace", + "version": 81, + "is_active": true, + "name": "Lead Finder", + "description": "Turbo-charge your local lead generation with the AutoGPT Marketplace\u2019s top Google Maps prospecting agent. \u201cLead Finder: Local Businesses\u201d delivers verified, ready-to-contact prospects in any niche and city\u2014so you can focus on closing, not searching.\n\n**WHAT IT DOES**\n\u2022 Searches Google Maps via the official API (no scraping)\n\u2022 Prompts like \u201cdentists in Chicago\u201d or \u201ccoffee shops near me\u201d\n\u2022 Returns: Name, Website, Rating, Reviews, **Phone & Address**\n\u2022 Exports instantly to your CRM, sheet, or outreach workflow\n\n**WHY YOU\u2019LL LOVE IT**\n\u2713 Hyper-targeted leads in minutes\n\u2713 Unlimited searches & locations\n\u2713 Zero CAPTCHAs or IP blocks\n\u2713 Works on AutoGPT Cloud or self-hosted (with your API key)\n\u2713 Cut prospecting time by 90%\n\n**PERFECT FOR**\n\u2014 Marketers & PPC agencies\n\u2014 SEO consultants & designers\n\u2014 SaaS founders & sales teams\n\nStop scrolling directories\u2014start filling your pipeline. Start now and let AI prospect while you profit.\n\n\u2192 Click *Add to Library* and own your market today.", + "instructions": null, + "recommended_schedule_cron": null, + "nodes": [ + { + "id": "f0610fef-5f2f-4e98-8b45-3ac56dab56c6", + "block_id": "3146e4fe-2cdd-4f29-bd12-0c9d5bb4deb0", + "input_default": { + "group": 0, + "dot_all": true, + "pattern": "^(?!\\s*$).+", + "find_all": false, + "case_sensitive": true + }, + "metadata": { + "position": { + "x": 1697.320236665491, + "y": -2066.289253119125 + } + }, + "input_links": [ + { + "id": "2cfa5d5e-fb60-4e56-a51d-347a37ee982b", + "source_id": "7d54d6b5-d70d-4393-83c2-c434794ce9a7", + "sink_id": "f0610fef-5f2f-4e98-8b45-3ac56dab56c6", + "source_name": "output", + "sink_name": "text", + "is_static": false + } + ], + "output_links": [ + { + "id": "1580f5c8-1dc1-43a4-abdc-3a82e6805db2", + "source_id": "f0610fef-5f2f-4e98-8b45-3ac56dab56c6", + "sink_id": "5f26aa29-e3b4-45e6-9dbc-cefcb4209207", + "source_name": "negative", + "sink_name": "input", + "is_static": false + }, + { + "id": "ac1912f8-8a9c-420e-a37e-b4f74eff0df1", + "source_id": "f0610fef-5f2f-4e98-8b45-3ac56dab56c6", + "sink_id": "61ec7945-ebd2-402b-bdcc-4640304115cb", + "source_name": "positive", + "sink_name": "value", + "is_static": false + } + ], + "graph_id": "e25b5ce7-e807-4b5a-8708-cb94fa0e2ace", + "graph_version": 81, + "webhook_id": null, + "webhook": null + }, + { + "id": "fd06d8d6-cde2-4ecb-810b-d09f543c1187", + "block_id": "1ff065e9-88e8-4358-9d82-8dc91f622ba9", + "input_default": { + "data": "N/A" + }, + "metadata": { + "position": { + "x": 2317.712735488114, + "y": 1145.5918856678481 + } + }, + "input_links": [ + { + "id": "0bee5403-8d79-4bf9-b6dc-b0eab6236b4b", + "source_id": "9ef45c5c-df30-40c1-b6d3-3d5fbefcda0b", + "sink_id": "fd06d8d6-cde2-4ecb-810b-d09f543c1187", + "source_name": "negative", + "sink_name": "input", + "is_static": false + } + ], + "output_links": [ + { + "id": "95b8d8fd-ae38-4179-a02d-c0f6f6ba02aa", + "source_id": "fd06d8d6-cde2-4ecb-810b-d09f543c1187", + "sink_id": 
"76ef9396-957a-4e27-a113-cd4f85ecf6bc", + "source_name": "output", + "sink_name": "value", + "is_static": true + } + ], + "graph_id": "e25b5ce7-e807-4b5a-8708-cb94fa0e2ace", + "graph_version": 81, + "webhook_id": null, + "webhook": null + }, + { + "id": "61ec7945-ebd2-402b-bdcc-4640304115cb", + "block_id": "363ae599-353e-4804-937e-b2ee3cef3da4", + "input_default": { + "name": "Name", + "secret": false, + "advanced": false, + "escape_html": false + }, + "metadata": { + "position": { + "x": 2847.32026315085, + "y": -2058.5197452144816 + } + }, + "input_links": [ + { + "id": "c47483a4-04a0-4e8f-ae78-22e6ae0d8c64", + "source_id": "5f26aa29-e3b4-45e6-9dbc-cefcb4209207", + "sink_id": "61ec7945-ebd2-402b-bdcc-4640304115cb", + "source_name": "output", + "sink_name": "value", + "is_static": true + }, + { + "id": "ac1912f8-8a9c-420e-a37e-b4f74eff0df1", + "source_id": "f0610fef-5f2f-4e98-8b45-3ac56dab56c6", + "sink_id": "61ec7945-ebd2-402b-bdcc-4640304115cb", + "source_name": "positive", + "sink_name": "value", + "is_static": false + } + ], + "output_links": [], + "graph_id": "e25b5ce7-e807-4b5a-8708-cb94fa0e2ace", + "graph_version": 81, + "webhook_id": null, + "webhook": null + }, + { + "id": "46a34959-1a6c-4ecd-af02-13a800e09607", + "block_id": "1ff065e9-88e8-4358-9d82-8dc91f622ba9", + "input_default": { + "data": "N/A" + }, + "metadata": { + "position": { + "x": 2321.6222500793283, + "y": 3151.327567081995 + } + }, + "input_links": [ + { + "id": "fe6da5a0-810c-4520-8ab2-2aab995000e7", + "source_id": "7ccdbff5-6b66-457e-82db-99149a0e08c3", + "sink_id": "46a34959-1a6c-4ecd-af02-13a800e09607", + "source_name": "negative", + "sink_name": "input", + "is_static": false + } + ], + "output_links": [ + { + "id": "54bd9907-fd90-4edf-9946-b2ca17e7d726", + "source_id": "46a34959-1a6c-4ecd-af02-13a800e09607", + "sink_id": "0d867846-375c-4dfe-b32c-92fe92dc6483", + "source_name": "output", + "sink_name": "value", + "is_static": true + } + ], + "graph_id": "e25b5ce7-e807-4b5a-8708-cb94fa0e2ace", + "graph_version": 81, + "webhook_id": null, + "webhook": null + }, + { + "id": "36eab73e-a2c0-4b5c-bf2c-5d4671e1d5c1", + "block_id": "1ff065e9-88e8-4358-9d82-8dc91f622ba9", + "input_default": { + "data": "N/A" + }, + "metadata": { + "position": { + "x": 2299.03546771492, + "y": 5101.985232738642 + } + }, + "input_links": [ + { + "id": "e1515e09-9825-4ebf-a3fc-7369759d354e", + "source_id": "45946cdc-20b7-4b7f-8000-7db13ac73302", + "sink_id": "36eab73e-a2c0-4b5c-bf2c-5d4671e1d5c1", + "source_name": "negative", + "sink_name": "input", + "is_static": false + } + ], + "output_links": [ + { + "id": "a69bb20d-b0a0-4f71-b9a7-a82e2e845150", + "source_id": "36eab73e-a2c0-4b5c-bf2c-5d4671e1d5c1", + "sink_id": "197b946b-4cfa-4179-97b6-0523a04512ca", + "source_name": "output", + "sink_name": "value", + "is_static": true + } + ], + "graph_id": "e25b5ce7-e807-4b5a-8708-cb94fa0e2ace", + "graph_version": 81, + "webhook_id": null, + "webhook": null + }, + { + "id": "0b5116b0-f2cf-413a-83de-e7e4d048e6f0", + "block_id": "cc10ff7b-7753-4ff2-9af6-9399b1a7eddc", + "input_default": { + "text": "Here we do the initial google Maps search to get back the results.\n\nIf you're Self-Hosting the AutoGPT Platform, you'll need to get yourself a Google Maps API Key." 
+ }, + "metadata": { + "position": { + "x": -372.969708635105, + "y": 1036.185006880431 + } + }, + "input_links": [], + "output_links": [], + "graph_id": "e25b5ce7-e807-4b5a-8708-cb94fa0e2ace", + "graph_version": 81, + "webhook_id": null, + "webhook": null + }, + { + "id": "460555f5-6975-47e1-838a-da5d75a3a947", + "block_id": "363ae599-353e-4804-937e-b2ee3cef3da4", + "input_default": { + "name": "Website", + "secret": false, + "advanced": false, + "escape_html": false + }, + "metadata": { + "position": { + "x": 2879.8018180164377, + "y": 6738.704014950039 + } + }, + "input_links": [ + { + "id": "3d5d4401-cf25-4e55-8a1d-25224786b393", + "source_id": "eb8cc44d-d18d-40ff-baf2-9136d067b100", + "sink_id": "460555f5-6975-47e1-838a-da5d75a3a947", + "source_name": "output", + "sink_name": "value", + "is_static": true + }, + { + "id": "5793b7c2-5373-4015-88fd-6887164823ee", + "source_id": "5a51baf4-bddf-4888-9e67-6e78fbf24ab1", + "sink_id": "460555f5-6975-47e1-838a-da5d75a3a947", + "source_name": "positive", + "sink_name": "value", + "is_static": false + } + ], + "output_links": [], + "graph_id": "e25b5ce7-e807-4b5a-8708-cb94fa0e2ace", + "graph_version": 81, + "webhook_id": null, + "webhook": null + }, + { + "id": "5f26aa29-e3b4-45e6-9dbc-cefcb4209207", + "block_id": "1ff065e9-88e8-4358-9d82-8dc91f622ba9", + "input_default": { + "data": "N/A" + }, + "metadata": { + "position": { + "x": 2272.9880044509714, + "y": -2055.9332059174567 + } + }, + "input_links": [ + { + "id": "1580f5c8-1dc1-43a4-abdc-3a82e6805db2", + "source_id": "f0610fef-5f2f-4e98-8b45-3ac56dab56c6", + "sink_id": "5f26aa29-e3b4-45e6-9dbc-cefcb4209207", + "source_name": "negative", + "sink_name": "input", + "is_static": false + } + ], + "output_links": [ + { + "id": "c47483a4-04a0-4e8f-ae78-22e6ae0d8c64", + "source_id": "5f26aa29-e3b4-45e6-9dbc-cefcb4209207", + "sink_id": "61ec7945-ebd2-402b-bdcc-4640304115cb", + "source_name": "output", + "sink_name": "value", + "is_static": true + } + ], + "graph_id": "e25b5ce7-e807-4b5a-8708-cb94fa0e2ace", + "graph_version": 81, + "webhook_id": null, + "webhook": null + }, + { + "id": "eb8cc44d-d18d-40ff-baf2-9136d067b100", + "block_id": "1ff065e9-88e8-4358-9d82-8dc91f622ba9", + "input_default": { + "data": "N/A" + }, + "metadata": { + "position": { + "x": 2324.6507353066345, + "y": 6741.223660283022 + } + }, + "input_links": [ + { + "id": "d58df6d7-782c-4c53-b0cc-c6e03d8ce2ca", + "source_id": "5a51baf4-bddf-4888-9e67-6e78fbf24ab1", + "sink_id": "eb8cc44d-d18d-40ff-baf2-9136d067b100", + "source_name": "negative", + "sink_name": "input", + "is_static": false + } + ], + "output_links": [ + { + "id": "3d5d4401-cf25-4e55-8a1d-25224786b393", + "source_id": "eb8cc44d-d18d-40ff-baf2-9136d067b100", + "sink_id": "460555f5-6975-47e1-838a-da5d75a3a947", + "source_name": "output", + "sink_name": "value", + "is_static": true + } + ], + "graph_id": "e25b5ce7-e807-4b5a-8708-cb94fa0e2ace", + "graph_version": 81, + "webhook_id": null, + "webhook": null + }, + { + "id": "4a760ac4-1c75-49bb-8f3c-80f7ae432f2c", + "block_id": "1ff065e9-88e8-4358-9d82-8dc91f622ba9", + "input_default": { + "data": "N/A" + }, + "metadata": { + "position": { + "x": 2315.4267207511584, + "y": -404.255551965831 + } + }, + "input_links": [ + { + "id": "f532bfbe-c782-4e4c-843a-a7f863e120c9", + "source_id": "a20838ad-4b15-4be4-9ce0-79e99de1a8ff", + "sink_id": "4a760ac4-1c75-49bb-8f3c-80f7ae432f2c", + "source_name": "negative", + "sink_name": "input", + "is_static": false + } + ], + "output_links": [ + { + "id": 
"4ae66fd0-be6b-4f7e-a417-fcc0d083a257", + "source_id": "4a760ac4-1c75-49bb-8f3c-80f7ae432f2c", + "sink_id": "5e569677-da76-41d8-aa43-32f7ce3aeb8a", + "source_name": "output", + "sink_name": "value", + "is_static": true + } + ], + "graph_id": "e25b5ce7-e807-4b5a-8708-cb94fa0e2ace", + "graph_version": 81, + "webhook_id": null, + "webhook": null + }, + { + "id": "7ccdbff5-6b66-457e-82db-99149a0e08c3", + "block_id": "3146e4fe-2cdd-4f29-bd12-0c9d5bb4deb0", + "input_default": { + "group": 0, + "dot_all": true, + "pattern": "^(?!\\s*$).+", + "find_all": false, + "case_sensitive": true + }, + "metadata": { + "position": { + "x": 1771.7091159373263, + "y": 3151.7592879177296 + } + }, + "input_links": [ + { + "id": "cf5fea80-c13b-4654-aa72-ac9d121bc83b", + "source_id": "34eff5c0-daf9-45da-a73d-2034cbe50561", + "sink_id": "7ccdbff5-6b66-457e-82db-99149a0e08c3", + "source_name": "value", + "sink_name": "text", + "is_static": false + }, + { + "id": "97ebd438-0eeb-4f3a-b518-9f88b73ad633", + "source_id": "34eff5c0-daf9-45da-a73d-2034cbe50561", + "sink_id": "7ccdbff5-6b66-457e-82db-99149a0e08c3", + "source_name": "error", + "sink_name": "text", + "is_static": false + } + ], + "output_links": [ + { + "id": "f2db2232-4e08-41fb-b305-edb9424364c9", + "source_id": "7ccdbff5-6b66-457e-82db-99149a0e08c3", + "sink_id": "0d867846-375c-4dfe-b32c-92fe92dc6483", + "source_name": "positive", + "sink_name": "value", + "is_static": false + }, + { + "id": "fe6da5a0-810c-4520-8ab2-2aab995000e7", + "source_id": "7ccdbff5-6b66-457e-82db-99149a0e08c3", + "sink_id": "46a34959-1a6c-4ecd-af02-13a800e09607", + "source_name": "negative", + "sink_name": "input", + "is_static": false + } + ], + "graph_id": "e25b5ce7-e807-4b5a-8708-cb94fa0e2ace", + "graph_version": 81, + "webhook_id": null, + "webhook": null + }, + { + "id": "0d867846-375c-4dfe-b32c-92fe92dc6483", + "block_id": "363ae599-353e-4804-937e-b2ee3cef3da4", + "input_default": { + "name": "Rating", + "secret": false, + "advanced": false, + "escape_html": false + }, + "metadata": { + "position": { + "x": 2896.6577884948874, + "y": 3158.253346594326 + } + }, + "input_links": [ + { + "id": "f2db2232-4e08-41fb-b305-edb9424364c9", + "source_id": "7ccdbff5-6b66-457e-82db-99149a0e08c3", + "sink_id": "0d867846-375c-4dfe-b32c-92fe92dc6483", + "source_name": "positive", + "sink_name": "value", + "is_static": false + }, + { + "id": "54bd9907-fd90-4edf-9946-b2ca17e7d726", + "source_id": "46a34959-1a6c-4ecd-af02-13a800e09607", + "sink_id": "0d867846-375c-4dfe-b32c-92fe92dc6483", + "source_name": "output", + "sink_name": "value", + "is_static": true + } + ], + "output_links": [], + "graph_id": "e25b5ce7-e807-4b5a-8708-cb94fa0e2ace", + "graph_version": 81, + "webhook_id": null, + "webhook": null + }, + { + "id": "5a51baf4-bddf-4888-9e67-6e78fbf24ab1", + "block_id": "3146e4fe-2cdd-4f29-bd12-0c9d5bb4deb0", + "input_default": { + "group": 0, + "dot_all": true, + "pattern": "^(?!\\s*$).+", + "find_all": false, + "case_sensitive": true + }, + "metadata": { + "position": { + "x": 1776.9159165326862, + "y": 6742.839875910975 + } + }, + "input_links": [ + { + "id": "f281291e-1941-4c56-8093-a57aa7643938", + "source_id": "891698ed-e025-4a54-865c-964a868526d7", + "sink_id": "5a51baf4-bddf-4888-9e67-6e78fbf24ab1", + "source_name": "output", + "sink_name": "text", + "is_static": false + } + ], + "output_links": [ + { + "id": "d58df6d7-782c-4c53-b0cc-c6e03d8ce2ca", + "source_id": "5a51baf4-bddf-4888-9e67-6e78fbf24ab1", + "sink_id": "eb8cc44d-d18d-40ff-baf2-9136d067b100", + "source_name": "negative", + 
"sink_name": "input", + "is_static": false + }, + { + "id": "5793b7c2-5373-4015-88fd-6887164823ee", + "source_id": "5a51baf4-bddf-4888-9e67-6e78fbf24ab1", + "sink_id": "460555f5-6975-47e1-838a-da5d75a3a947", + "source_name": "positive", + "sink_name": "value", + "is_static": false + } + ], + "graph_id": "e25b5ce7-e807-4b5a-8708-cb94fa0e2ace", + "graph_version": 81, + "webhook_id": null, + "webhook": null + }, + { + "id": "fcb277c7-2094-4a4a-87bd-2c647affd115", + "block_id": "f47ac10b-58cc-4372-a567-0e02b2c3d479", + "input_default": { + "radius": 5000, + "max_results": 1 + }, + "metadata": { + "position": { + "x": -370.29653927884453, + "y": 1440.7256906576536 + } + }, + "input_links": [ + { + "id": "4f454037-fe3a-49bf-8c53-656d8a1c9606", + "source_id": "3e71a189-342b-4a37-9f4f-d4860bea245c", + "sink_id": "fcb277c7-2094-4a4a-87bd-2c647affd115", + "source_name": "result", + "sink_name": "max_results", + "is_static": true + }, + { + "id": "a4e970e1-337f-4f57-9f26-74c37a362324", + "source_id": "47b5c5f5-4d86-489a-8ae7-ed568f1d0a1f", + "sink_id": "fcb277c7-2094-4a4a-87bd-2c647affd115", + "source_name": "result", + "sink_name": "radius", + "is_static": true + }, + { + "id": "62595f52-9b70-4dc3-9b0c-d5232436f93b", + "source_id": "e382ef86-26ca-4e13-b47f-9fd285ea1ac1", + "sink_id": "fcb277c7-2094-4a4a-87bd-2c647affd115", + "source_name": "output", + "sink_name": "query", + "is_static": false + } + ], + "output_links": [ + { + "id": "f4a918ce-5b15-448b-a4ec-cd66e5c2e5d4", + "source_id": "fcb277c7-2094-4a4a-87bd-2c647affd115", + "sink_id": "7d54d6b5-d70d-4393-83c2-c434794ce9a7", + "source_name": "place", + "sink_name": "input", + "is_static": false + }, + { + "id": "595bce58-569e-4452-ae3c-acd04e686de4", + "source_id": "fcb277c7-2094-4a4a-87bd-2c647affd115", + "sink_id": "a30f45fa-93b2-4cdf-a1a1-b927eff02aeb", + "source_name": "place", + "sink_name": "input", + "is_static": false + }, + { + "id": "cfb52131-957d-431d-a1a6-14f31928b2b8", + "source_id": "fcb277c7-2094-4a4a-87bd-2c647affd115", + "sink_id": "350280cc-6c7e-4731-81a5-6eba97b9dc96", + "source_name": "place", + "sink_name": "input", + "is_static": false + }, + { + "id": "7a22a238-fc96-44eb-8e1e-cd88855e8bda", + "source_id": "fcb277c7-2094-4a4a-87bd-2c647affd115", + "sink_id": "bc185c6d-65de-4e42-ba8d-934f69419745", + "source_name": "place", + "sink_name": "input", + "is_static": false + }, + { + "id": "ead01791-cdac-4fc7-b54a-e4a12effdf3d", + "source_id": "fcb277c7-2094-4a4a-87bd-2c647affd115", + "sink_id": "03d3b23a-2e9d-439c-b0cf-6effbf1702a5", + "source_name": "place", + "sink_name": "input", + "is_static": false + }, + { + "id": "8dacb5f6-2766-409a-86a1-93823237d8f5", + "source_id": "fcb277c7-2094-4a4a-87bd-2c647affd115", + "sink_id": "891698ed-e025-4a54-865c-964a868526d7", + "source_name": "place", + "sink_name": "input", + "is_static": false + } + ], + "graph_id": "e25b5ce7-e807-4b5a-8708-cb94fa0e2ace", + "graph_version": 81, + "webhook_id": null, + "webhook": null + }, + { + "id": "197b946b-4cfa-4179-97b6-0523a04512ca", + "block_id": "363ae599-353e-4804-937e-b2ee3cef3da4", + "input_default": { + "name": "Number of Reviews", + "secret": false, + "advanced": false, + "escape_html": false + }, + "metadata": { + "position": { + "x": 2865.657540235487, + "y": 5108.552280213967 + } + }, + "input_links": [ + { + "id": "8e70d90e-292d-4a18-8753-183bbecc01c9", + "source_id": "45946cdc-20b7-4b7f-8000-7db13ac73302", + "sink_id": "197b946b-4cfa-4179-97b6-0523a04512ca", + "source_name": "positive", + "sink_name": "value", + "is_static": false + }, 
+ { + "id": "a69bb20d-b0a0-4f71-b9a7-a82e2e845150", + "source_id": "36eab73e-a2c0-4b5c-bf2c-5d4671e1d5c1", + "sink_id": "197b946b-4cfa-4179-97b6-0523a04512ca", + "source_name": "output", + "sink_name": "value", + "is_static": true + } + ], + "output_links": [], + "graph_id": "e25b5ce7-e807-4b5a-8708-cb94fa0e2ace", + "graph_version": 81, + "webhook_id": null, + "webhook": null + }, + { + "id": "f1c2ac5b-ac6a-4e95-a371-26193d19157e", + "block_id": "cc10ff7b-7753-4ff2-9af6-9399b1a7eddc", + "input_default": { + "text": "https://developers.google.com/maps/documentation/javascript/get-api-key" + }, + "metadata": { + "position": { + "x": -63.51102951717789, + "y": 1036.0200034451102 + } + }, + "input_links": [], + "output_links": [], + "graph_id": "e25b5ce7-e807-4b5a-8708-cb94fa0e2ace", + "graph_version": 81, + "webhook_id": null, + "webhook": null + }, + { + "id": "76ef9396-957a-4e27-a113-cd4f85ecf6bc", + "block_id": "363ae599-353e-4804-937e-b2ee3cef3da4", + "input_default": { + "name": "Phone Number", + "secret": false, + "advanced": false, + "escape_html": false + }, + "metadata": { + "position": { + "x": 2899.9197943242525, + "y": 1147.8867571738717 + } + }, + "input_links": [ + { + "id": "9c93f096-d8a7-491e-937e-90c55d0dfd57", + "source_id": "9ef45c5c-df30-40c1-b6d3-3d5fbefcda0b", + "sink_id": "76ef9396-957a-4e27-a113-cd4f85ecf6bc", + "source_name": "positive", + "sink_name": "value", + "is_static": false + }, + { + "id": "95b8d8fd-ae38-4179-a02d-c0f6f6ba02aa", + "source_id": "fd06d8d6-cde2-4ecb-810b-d09f543c1187", + "sink_id": "76ef9396-957a-4e27-a113-cd4f85ecf6bc", + "source_name": "output", + "sink_name": "value", + "is_static": true + } + ], + "output_links": [], + "graph_id": "e25b5ce7-e807-4b5a-8708-cb94fa0e2ace", + "graph_version": 81, + "webhook_id": null, + "webhook": null + }, + { + "id": "a20838ad-4b15-4be4-9ce0-79e99de1a8ff", + "block_id": "3146e4fe-2cdd-4f29-bd12-0c9d5bb4deb0", + "input_default": { + "group": 0, + "dot_all": true, + "pattern": "^(?!\\s*$).+", + "find_all": false, + "case_sensitive": true + }, + "metadata": { + "position": { + "x": 1724.8717771093939, + "y": -408.46103769754797 + } + }, + "input_links": [ + { + "id": "5addd51e-f0d6-4c37-a1b1-611708e424c6", + "source_id": "350280cc-6c7e-4731-81a5-6eba97b9dc96", + "sink_id": "a20838ad-4b15-4be4-9ce0-79e99de1a8ff", + "source_name": "output", + "sink_name": "text", + "is_static": false + } + ], + "output_links": [ + { + "id": "6b72b2ac-d536-4368-a86d-7384b64690a4", + "source_id": "a20838ad-4b15-4be4-9ce0-79e99de1a8ff", + "sink_id": "5e569677-da76-41d8-aa43-32f7ce3aeb8a", + "source_name": "positive", + "sink_name": "value", + "is_static": false + }, + { + "id": "f532bfbe-c782-4e4c-843a-a7f863e120c9", + "source_id": "a20838ad-4b15-4be4-9ce0-79e99de1a8ff", + "sink_id": "4a760ac4-1c75-49bb-8f3c-80f7ae432f2c", + "source_name": "negative", + "sink_name": "input", + "is_static": false + } + ], + "graph_id": "e25b5ce7-e807-4b5a-8708-cb94fa0e2ace", + "graph_version": 81, + "webhook_id": null, + "webhook": null + }, + { + "id": "45946cdc-20b7-4b7f-8000-7db13ac73302", + "block_id": "3146e4fe-2cdd-4f29-bd12-0c9d5bb4deb0", + "input_default": { + "group": 0, + "dot_all": true, + "pattern": "^(?!\\s*$).+", + "find_all": false, + "case_sensitive": true + }, + "metadata": { + "position": { + "x": 1734.5107817365715, + "y": 5104.56488050784 + } + }, + "input_links": [ + { + "id": "ed49401e-3125-40dd-b856-2d0521cb01e9", + "source_id": "5fe95f79-5ac3-49f4-94d7-308b41b0123c", + "sink_id": "45946cdc-20b7-4b7f-8000-7db13ac73302", + 
"source_name": "value", + "sink_name": "text", + "is_static": false + }, + { + "id": "d3228bda-f3a9-449f-b6e1-d279108a19a5", + "source_id": "5fe95f79-5ac3-49f4-94d7-308b41b0123c", + "sink_id": "45946cdc-20b7-4b7f-8000-7db13ac73302", + "source_name": "error", + "sink_name": "text", + "is_static": false + } + ], + "output_links": [ + { + "id": "8e70d90e-292d-4a18-8753-183bbecc01c9", + "source_id": "45946cdc-20b7-4b7f-8000-7db13ac73302", + "sink_id": "197b946b-4cfa-4179-97b6-0523a04512ca", + "source_name": "positive", + "sink_name": "value", + "is_static": false + }, + { + "id": "e1515e09-9825-4ebf-a3fc-7369759d354e", + "source_id": "45946cdc-20b7-4b7f-8000-7db13ac73302", + "sink_id": "36eab73e-a2c0-4b5c-bf2c-5d4671e1d5c1", + "source_name": "negative", + "sink_name": "input", + "is_static": false + } + ], + "graph_id": "e25b5ce7-e807-4b5a-8708-cb94fa0e2ace", + "graph_version": 81, + "webhook_id": null, + "webhook": null + }, + { + "id": "5e569677-da76-41d8-aa43-32f7ce3aeb8a", + "block_id": "363ae599-353e-4804-937e-b2ee3cef3da4", + "input_default": { + "name": "Address", + "secret": false, + "advanced": false, + "escape_html": false + }, + "metadata": { + "position": { + "x": 2874.8718742474557, + "y": -388.4610593468465 + } + }, + "input_links": [ + { + "id": "6b72b2ac-d536-4368-a86d-7384b64690a4", + "source_id": "a20838ad-4b15-4be4-9ce0-79e99de1a8ff", + "sink_id": "5e569677-da76-41d8-aa43-32f7ce3aeb8a", + "source_name": "positive", + "sink_name": "value", + "is_static": false + }, + { + "id": "4ae66fd0-be6b-4f7e-a417-fcc0d083a257", + "source_id": "4a760ac4-1c75-49bb-8f3c-80f7ae432f2c", + "sink_id": "5e569677-da76-41d8-aa43-32f7ce3aeb8a", + "source_name": "output", + "sink_name": "value", + "is_static": true + } + ], + "output_links": [], + "graph_id": "e25b5ce7-e807-4b5a-8708-cb94fa0e2ace", + "graph_version": 81, + "webhook_id": null, + "webhook": null + }, + { + "id": "9ef45c5c-df30-40c1-b6d3-3d5fbefcda0b", + "block_id": "3146e4fe-2cdd-4f29-bd12-0c9d5bb4deb0", + "input_default": { + "group": 0, + "dot_all": true, + "pattern": "^(?!\\s*$).+", + "find_all": false, + "case_sensitive": true + }, + "metadata": { + "position": { + "x": 1740.8272338961247, + "y": 1146.0307471534502 + } + }, + "input_links": [ + { + "id": "dd7d4b5c-ab16-45bc-936c-926ea3b24a56", + "source_id": "a30f45fa-93b2-4cdf-a1a1-b927eff02aeb", + "sink_id": "9ef45c5c-df30-40c1-b6d3-3d5fbefcda0b", + "source_name": "output", + "sink_name": "text", + "is_static": false + } + ], + "output_links": [ + { + "id": "0bee5403-8d79-4bf9-b6dc-b0eab6236b4b", + "source_id": "9ef45c5c-df30-40c1-b6d3-3d5fbefcda0b", + "sink_id": "fd06d8d6-cde2-4ecb-810b-d09f543c1187", + "source_name": "negative", + "sink_name": "input", + "is_static": false + }, + { + "id": "9c93f096-d8a7-491e-937e-90c55d0dfd57", + "source_id": "9ef45c5c-df30-40c1-b6d3-3d5fbefcda0b", + "sink_id": "76ef9396-957a-4e27-a113-cd4f85ecf6bc", + "source_name": "positive", + "sink_name": "value", + "is_static": false + } + ], + "graph_id": "e25b5ce7-e807-4b5a-8708-cb94fa0e2ace", + "graph_version": 81, + "webhook_id": null, + "webhook": null + }, + { + "id": "e382ef86-26ca-4e13-b47f-9fd285ea1ac1", + "block_id": "db7d8f02-2f44-4c55-ab7a-eae0941f0c30", + "input_default": { + "format": "{{TYPE}} in {{LOCATION}}", + "values": {}, + "escape_html": false + }, + "metadata": { + "position": { + "x": -1405.6996683435632, + "y": 734.5756913677108 + } + }, + "input_links": [ + { + "id": "cff3e237-f81d-4fb2-9aad-38f03d026b87", + "source_id": "4b7851a9-8c05-4823-888d-dda3dd71b1e1", + "sink_id": 
"e382ef86-26ca-4e13-b47f-9fd285ea1ac1", + "source_name": "result", + "sink_name": "values_#_LOCATION", + "is_static": true + }, + { + "id": "4fff9ca6-1671-49ed-827a-69b882555644", + "source_id": "46381f83-47f1-4b3f-8cce-9472ba040498", + "sink_id": "e382ef86-26ca-4e13-b47f-9fd285ea1ac1", + "source_name": "result", + "sink_name": "values_#_TYPE", + "is_static": true + } + ], + "output_links": [ + { + "id": "62595f52-9b70-4dc3-9b0c-d5232436f93b", + "source_id": "e382ef86-26ca-4e13-b47f-9fd285ea1ac1", + "sink_id": "fcb277c7-2094-4a4a-87bd-2c647affd115", + "source_name": "output", + "sink_name": "query", + "is_static": false + } + ], + "graph_id": "e25b5ce7-e807-4b5a-8708-cb94fa0e2ace", + "graph_version": 81, + "webhook_id": null, + "webhook": null + }, + { + "id": "46381f83-47f1-4b3f-8cce-9472ba040498", + "block_id": "7fcd3bcb-8e1b-4e69-903d-32d3d4a92158", + "input_default": { + "name": "What business type you want to look for?", + "value": "Coffee Roasters", + "secret": false, + "advanced": false, + "description": "The type of Business you're looking for, for example Hotels, Gyms or Restaurants.", + "placeholder_values": [] + }, + "metadata": { + "position": { + "x": -2506.207808097121, + "y": 735.3387323062221 + } + }, + "input_links": [], + "output_links": [ + { + "id": "4fff9ca6-1671-49ed-827a-69b882555644", + "source_id": "46381f83-47f1-4b3f-8cce-9472ba040498", + "sink_id": "e382ef86-26ca-4e13-b47f-9fd285ea1ac1", + "source_name": "result", + "sink_name": "values_#_TYPE", + "is_static": true + } + ], + "graph_id": "e25b5ce7-e807-4b5a-8708-cb94fa0e2ace", + "graph_version": 81, + "webhook_id": null, + "webhook": null + }, + { + "id": "4b7851a9-8c05-4823-888d-dda3dd71b1e1", + "block_id": "7fcd3bcb-8e1b-4e69-903d-32d3d4a92158", + "input_default": { + "name": "Where do you want to search?", + "value": "Edinburgh, UK", + "secret": false, + "advanced": false, + "description": "The area you want to search, for example London, New York or Tokyo.", + "placeholder_values": [] + }, + "metadata": { + "position": { + "x": -1964.9818115357355, + "y": 730.9882069015447 + } + }, + "input_links": [], + "output_links": [ + { + "id": "cff3e237-f81d-4fb2-9aad-38f03d026b87", + "source_id": "4b7851a9-8c05-4823-888d-dda3dd71b1e1", + "sink_id": "e382ef86-26ca-4e13-b47f-9fd285ea1ac1", + "source_name": "result", + "sink_name": "values_#_LOCATION", + "is_static": true + } + ], + "graph_id": "e25b5ce7-e807-4b5a-8708-cb94fa0e2ace", + "graph_version": 81, + "webhook_id": null, + "webhook": null + }, + { + "id": "47b5c5f5-4d86-489a-8ae7-ed568f1d0a1f", + "block_id": "96dae2bb-97a2-41c2-bd2f-13a3b5a8ea98", + "input_default": { + "name": "Search Distance (metres)", + "value": 50000, + "secret": false, + "advanced": false, + "description": "How far around the are to look for results (Max 50000 metres)", + "placeholder_values": [] + }, + "metadata": { + "position": { + "x": -2530.587066097787, + "y": 2194.680140573501 + } + }, + "input_links": [], + "output_links": [ + { + "id": "a4e970e1-337f-4f57-9f26-74c37a362324", + "source_id": "47b5c5f5-4d86-489a-8ae7-ed568f1d0a1f", + "sink_id": "fcb277c7-2094-4a4a-87bd-2c647affd115", + "source_name": "result", + "sink_name": "radius", + "is_static": true + } + ], + "graph_id": "e25b5ce7-e807-4b5a-8708-cb94fa0e2ace", + "graph_version": 81, + "webhook_id": null, + "webhook": null + }, + { + "id": "3e71a189-342b-4a37-9f4f-d4860bea245c", + "block_id": "96dae2bb-97a2-41c2-bd2f-13a3b5a8ea98", + "input_default": { + "name": "Number of Results", + "value": 3, + "secret": false, + "advanced": 
false, + "description": "The maximum number of results to return from the search, the actual result may be less. (max 60).", + "placeholder_values": [] + }, + "metadata": { + "position": { + "x": -1965.5247495264289, + "y": 2188.1229142270595 + } + }, + "input_links": [], + "output_links": [ + { + "id": "4f454037-fe3a-49bf-8c53-656d8a1c9606", + "source_id": "3e71a189-342b-4a37-9f4f-d4860bea245c", + "sink_id": "fcb277c7-2094-4a4a-87bd-2c647affd115", + "source_name": "result", + "sink_name": "max_results", + "is_static": true + } + ], + "graph_id": "e25b5ce7-e807-4b5a-8708-cb94fa0e2ace", + "graph_version": 81, + "webhook_id": null, + "webhook": null + }, + { + "id": "bc185c6d-65de-4e42-ba8d-934f69419745", + "block_id": "0e50422c-6dee-4145-83d6-3a5a392f65de", + "input_default": { + "key": "rating" + }, + "metadata": { + "position": { + "x": 619.9647728855725, + "y": 3122.1700573934104 + } + }, + "input_links": [ + { + "id": "7a22a238-fc96-44eb-8e1e-cd88855e8bda", + "source_id": "fcb277c7-2094-4a4a-87bd-2c647affd115", + "sink_id": "bc185c6d-65de-4e42-ba8d-934f69419745", + "source_name": "place", + "sink_name": "input", + "is_static": false + } + ], + "output_links": [ + { + "id": "3f43af55-3873-4ea3-b0ee-46250300892f", + "source_id": "bc185c6d-65de-4e42-ba8d-934f69419745", + "sink_id": "34eff5c0-daf9-45da-a73d-2034cbe50561", + "source_name": "output", + "sink_name": "value", + "is_static": false + } + ], + "graph_id": "e25b5ce7-e807-4b5a-8708-cb94fa0e2ace", + "graph_version": 81, + "webhook_id": null, + "webhook": null + }, + { + "id": "350280cc-6c7e-4731-81a5-6eba97b9dc96", + "block_id": "0e50422c-6dee-4145-83d6-3a5a392f65de", + "input_default": { + "key": "address" + }, + "metadata": { + "position": { + "x": 1150.02463887578, + "y": -413.4008616060557 + } + }, + "input_links": [ + { + "id": "cfb52131-957d-431d-a1a6-14f31928b2b8", + "source_id": "fcb277c7-2094-4a4a-87bd-2c647affd115", + "sink_id": "350280cc-6c7e-4731-81a5-6eba97b9dc96", + "source_name": "place", + "sink_name": "input", + "is_static": false + } + ], + "output_links": [ + { + "id": "5addd51e-f0d6-4c37-a1b1-611708e424c6", + "source_id": "350280cc-6c7e-4731-81a5-6eba97b9dc96", + "sink_id": "a20838ad-4b15-4be4-9ce0-79e99de1a8ff", + "source_name": "output", + "sink_name": "text", + "is_static": false + } + ], + "graph_id": "e25b5ce7-e807-4b5a-8708-cb94fa0e2ace", + "graph_version": 81, + "webhook_id": null, + "webhook": null + }, + { + "id": "a30f45fa-93b2-4cdf-a1a1-b927eff02aeb", + "block_id": "0e50422c-6dee-4145-83d6-3a5a392f65de", + "input_default": { + "key": "phone" + }, + "metadata": { + "position": { + "x": 1187.1724965166527, + "y": 1146.1097139851993 + } + }, + "input_links": [ + { + "id": "595bce58-569e-4452-ae3c-acd04e686de4", + "source_id": "fcb277c7-2094-4a4a-87bd-2c647affd115", + "sink_id": "a30f45fa-93b2-4cdf-a1a1-b927eff02aeb", + "source_name": "place", + "sink_name": "input", + "is_static": false + } + ], + "output_links": [ + { + "id": "dd7d4b5c-ab16-45bc-936c-926ea3b24a56", + "source_id": "a30f45fa-93b2-4cdf-a1a1-b927eff02aeb", + "sink_id": "9ef45c5c-df30-40c1-b6d3-3d5fbefcda0b", + "source_name": "output", + "sink_name": "text", + "is_static": false + } + ], + "graph_id": "e25b5ce7-e807-4b5a-8708-cb94fa0e2ace", + "graph_version": 81, + "webhook_id": null, + "webhook": null + }, + { + "id": "7d54d6b5-d70d-4393-83c2-c434794ce9a7", + "block_id": "0e50422c-6dee-4145-83d6-3a5a392f65de", + "input_default": { + "key": "name" + }, + "metadata": { + "position": { + "x": 1142.7247250461776, + "y": -2070.416862429695 + } + 
}, + "input_links": [ + { + "id": "f4a918ce-5b15-448b-a4ec-cd66e5c2e5d4", + "source_id": "fcb277c7-2094-4a4a-87bd-2c647affd115", + "sink_id": "7d54d6b5-d70d-4393-83c2-c434794ce9a7", + "source_name": "place", + "sink_name": "input", + "is_static": false + } + ], + "output_links": [ + { + "id": "2cfa5d5e-fb60-4e56-a51d-347a37ee982b", + "source_id": "7d54d6b5-d70d-4393-83c2-c434794ce9a7", + "sink_id": "f0610fef-5f2f-4e98-8b45-3ac56dab56c6", + "source_name": "output", + "sink_name": "text", + "is_static": false + } + ], + "graph_id": "e25b5ce7-e807-4b5a-8708-cb94fa0e2ace", + "graph_version": 81, + "webhook_id": null, + "webhook": null + }, + { + "id": "03d3b23a-2e9d-439c-b0cf-6effbf1702a5", + "block_id": "0e50422c-6dee-4145-83d6-3a5a392f65de", + "input_default": { + "key": "reviews" + }, + "metadata": { + "position": { + "x": 628.5636486050491, + "y": 5093.236632487861 + } + }, + "input_links": [ + { + "id": "ead01791-cdac-4fc7-b54a-e4a12effdf3d", + "source_id": "fcb277c7-2094-4a4a-87bd-2c647affd115", + "sink_id": "03d3b23a-2e9d-439c-b0cf-6effbf1702a5", + "source_name": "place", + "sink_name": "input", + "is_static": false + } + ], + "output_links": [ + { + "id": "ff15a08f-0a94-44dc-b4a3-a6dfd632d1c1", + "source_id": "03d3b23a-2e9d-439c-b0cf-6effbf1702a5", + "sink_id": "5fe95f79-5ac3-49f4-94d7-308b41b0123c", + "source_name": "output", + "sink_name": "value", + "is_static": false + } + ], + "graph_id": "e25b5ce7-e807-4b5a-8708-cb94fa0e2ace", + "graph_version": 81, + "webhook_id": null, + "webhook": null + }, + { + "id": "891698ed-e025-4a54-865c-964a868526d7", + "block_id": "0e50422c-6dee-4145-83d6-3a5a392f65de", + "input_default": { + "key": "website" + }, + "metadata": { + "position": { + "x": 1203.8900324117724, + "y": 6741.745315388359 + } + }, + "input_links": [ + { + "id": "8dacb5f6-2766-409a-86a1-93823237d8f5", + "source_id": "fcb277c7-2094-4a4a-87bd-2c647affd115", + "sink_id": "891698ed-e025-4a54-865c-964a868526d7", + "source_name": "place", + "sink_name": "input", + "is_static": false + } + ], + "output_links": [ + { + "id": "f281291e-1941-4c56-8093-a57aa7643938", + "source_id": "891698ed-e025-4a54-865c-964a868526d7", + "sink_id": "5a51baf4-bddf-4888-9e67-6e78fbf24ab1", + "source_name": "output", + "sink_name": "text", + "is_static": false + } + ], + "graph_id": "e25b5ce7-e807-4b5a-8708-cb94fa0e2ace", + "graph_version": 81, + "webhook_id": null, + "webhook": null + }, + { + "id": "5fe95f79-5ac3-49f4-94d7-308b41b0123c", + "block_id": "95d1b990-ce13-4d88-9737-ba5c2070c97b", + "input_default": { + "type": "string" + }, + "metadata": { + "position": { + "x": 1198.0222674824167, + "y": 5097.958112311722 + } + }, + "input_links": [ + { + "id": "ff15a08f-0a94-44dc-b4a3-a6dfd632d1c1", + "source_id": "03d3b23a-2e9d-439c-b0cf-6effbf1702a5", + "sink_id": "5fe95f79-5ac3-49f4-94d7-308b41b0123c", + "source_name": "output", + "sink_name": "value", + "is_static": false + } + ], + "output_links": [ + { + "id": "ed49401e-3125-40dd-b856-2d0521cb01e9", + "source_id": "5fe95f79-5ac3-49f4-94d7-308b41b0123c", + "sink_id": "45946cdc-20b7-4b7f-8000-7db13ac73302", + "source_name": "value", + "sink_name": "text", + "is_static": false + }, + { + "id": "d3228bda-f3a9-449f-b6e1-d279108a19a5", + "source_id": "5fe95f79-5ac3-49f4-94d7-308b41b0123c", + "sink_id": "45946cdc-20b7-4b7f-8000-7db13ac73302", + "source_name": "error", + "sink_name": "text", + "is_static": false + } + ], + "graph_id": "e25b5ce7-e807-4b5a-8708-cb94fa0e2ace", + "graph_version": 81, + "webhook_id": null, + "webhook": null + }, + { + "id": 
"34eff5c0-daf9-45da-a73d-2034cbe50561", + "block_id": "95d1b990-ce13-4d88-9737-ba5c2070c97b", + "input_default": { + "type": "string" + }, + "metadata": { + "position": { + "x": 1213.2804020096423, + "y": 3130.25175294644 + } + }, + "input_links": [ + { + "id": "3f43af55-3873-4ea3-b0ee-46250300892f", + "source_id": "bc185c6d-65de-4e42-ba8d-934f69419745", + "sink_id": "34eff5c0-daf9-45da-a73d-2034cbe50561", + "source_name": "output", + "sink_name": "value", + "is_static": false + } + ], + "output_links": [ + { + "id": "cf5fea80-c13b-4654-aa72-ac9d121bc83b", + "source_id": "34eff5c0-daf9-45da-a73d-2034cbe50561", + "sink_id": "7ccdbff5-6b66-457e-82db-99149a0e08c3", + "source_name": "value", + "sink_name": "text", + "is_static": false + }, + { + "id": "97ebd438-0eeb-4f3a-b518-9f88b73ad633", + "source_id": "34eff5c0-daf9-45da-a73d-2034cbe50561", + "sink_id": "7ccdbff5-6b66-457e-82db-99149a0e08c3", + "source_name": "error", + "sink_name": "text", + "is_static": false + } + ], + "graph_id": "e25b5ce7-e807-4b5a-8708-cb94fa0e2ace", + "graph_version": 81, + "webhook_id": null, + "webhook": null + } + ], + "links": [ + { + "id": "1580f5c8-1dc1-43a4-abdc-3a82e6805db2", + "source_id": "f0610fef-5f2f-4e98-8b45-3ac56dab56c6", + "sink_id": "5f26aa29-e3b4-45e6-9dbc-cefcb4209207", + "source_name": "negative", + "sink_name": "input", + "is_static": false + }, + { + "id": "5793b7c2-5373-4015-88fd-6887164823ee", + "source_id": "5a51baf4-bddf-4888-9e67-6e78fbf24ab1", + "sink_id": "460555f5-6975-47e1-838a-da5d75a3a947", + "source_name": "positive", + "sink_name": "value", + "is_static": false + }, + { + "id": "4f454037-fe3a-49bf-8c53-656d8a1c9606", + "source_id": "3e71a189-342b-4a37-9f4f-d4860bea245c", + "sink_id": "fcb277c7-2094-4a4a-87bd-2c647affd115", + "source_name": "result", + "sink_name": "max_results", + "is_static": true + }, + { + "id": "54bd9907-fd90-4edf-9946-b2ca17e7d726", + "source_id": "46a34959-1a6c-4ecd-af02-13a800e09607", + "sink_id": "0d867846-375c-4dfe-b32c-92fe92dc6483", + "source_name": "output", + "sink_name": "value", + "is_static": true + }, + { + "id": "cf5fea80-c13b-4654-aa72-ac9d121bc83b", + "source_id": "34eff5c0-daf9-45da-a73d-2034cbe50561", + "sink_id": "7ccdbff5-6b66-457e-82db-99149a0e08c3", + "source_name": "value", + "sink_name": "text", + "is_static": false + }, + { + "id": "595bce58-569e-4452-ae3c-acd04e686de4", + "source_id": "fcb277c7-2094-4a4a-87bd-2c647affd115", + "sink_id": "a30f45fa-93b2-4cdf-a1a1-b927eff02aeb", + "source_name": "place", + "sink_name": "input", + "is_static": false + }, + { + "id": "a69bb20d-b0a0-4f71-b9a7-a82e2e845150", + "source_id": "36eab73e-a2c0-4b5c-bf2c-5d4671e1d5c1", + "sink_id": "197b946b-4cfa-4179-97b6-0523a04512ca", + "source_name": "output", + "sink_name": "value", + "is_static": true + }, + { + "id": "9c93f096-d8a7-491e-937e-90c55d0dfd57", + "source_id": "9ef45c5c-df30-40c1-b6d3-3d5fbefcda0b", + "sink_id": "76ef9396-957a-4e27-a113-cd4f85ecf6bc", + "source_name": "positive", + "sink_name": "value", + "is_static": false + }, + { + "id": "4fff9ca6-1671-49ed-827a-69b882555644", + "source_id": "46381f83-47f1-4b3f-8cce-9472ba040498", + "sink_id": "e382ef86-26ca-4e13-b47f-9fd285ea1ac1", + "source_name": "result", + "sink_name": "values_#_TYPE", + "is_static": true + }, + { + "id": "d3228bda-f3a9-449f-b6e1-d279108a19a5", + "source_id": "5fe95f79-5ac3-49f4-94d7-308b41b0123c", + "sink_id": "45946cdc-20b7-4b7f-8000-7db13ac73302", + "source_name": "error", + "sink_name": "text", + "is_static": false + }, + { + "id": 
"c47483a4-04a0-4e8f-ae78-22e6ae0d8c64", + "source_id": "5f26aa29-e3b4-45e6-9dbc-cefcb4209207", + "sink_id": "61ec7945-ebd2-402b-bdcc-4640304115cb", + "source_name": "output", + "sink_name": "value", + "is_static": true + }, + { + "id": "a4e970e1-337f-4f57-9f26-74c37a362324", + "source_id": "47b5c5f5-4d86-489a-8ae7-ed568f1d0a1f", + "sink_id": "fcb277c7-2094-4a4a-87bd-2c647affd115", + "source_name": "result", + "sink_name": "radius", + "is_static": true + }, + { + "id": "8e70d90e-292d-4a18-8753-183bbecc01c9", + "source_id": "45946cdc-20b7-4b7f-8000-7db13ac73302", + "sink_id": "197b946b-4cfa-4179-97b6-0523a04512ca", + "source_name": "positive", + "sink_name": "value", + "is_static": false + }, + { + "id": "ff15a08f-0a94-44dc-b4a3-a6dfd632d1c1", + "source_id": "03d3b23a-2e9d-439c-b0cf-6effbf1702a5", + "sink_id": "5fe95f79-5ac3-49f4-94d7-308b41b0123c", + "source_name": "output", + "sink_name": "value", + "is_static": false + }, + { + "id": "dd7d4b5c-ab16-45bc-936c-926ea3b24a56", + "source_id": "a30f45fa-93b2-4cdf-a1a1-b927eff02aeb", + "sink_id": "9ef45c5c-df30-40c1-b6d3-3d5fbefcda0b", + "source_name": "output", + "sink_name": "text", + "is_static": false + }, + { + "id": "ed49401e-3125-40dd-b856-2d0521cb01e9", + "source_id": "5fe95f79-5ac3-49f4-94d7-308b41b0123c", + "sink_id": "45946cdc-20b7-4b7f-8000-7db13ac73302", + "source_name": "value", + "sink_name": "text", + "is_static": false + }, + { + "id": "e1515e09-9825-4ebf-a3fc-7369759d354e", + "source_id": "45946cdc-20b7-4b7f-8000-7db13ac73302", + "sink_id": "36eab73e-a2c0-4b5c-bf2c-5d4671e1d5c1", + "source_name": "negative", + "sink_name": "input", + "is_static": false + }, + { + "id": "0bee5403-8d79-4bf9-b6dc-b0eab6236b4b", + "source_id": "9ef45c5c-df30-40c1-b6d3-3d5fbefcda0b", + "sink_id": "fd06d8d6-cde2-4ecb-810b-d09f543c1187", + "source_name": "negative", + "sink_name": "input", + "is_static": false + }, + { + "id": "f4a918ce-5b15-448b-a4ec-cd66e5c2e5d4", + "source_id": "fcb277c7-2094-4a4a-87bd-2c647affd115", + "sink_id": "7d54d6b5-d70d-4393-83c2-c434794ce9a7", + "source_name": "place", + "sink_name": "input", + "is_static": false + }, + { + "id": "f281291e-1941-4c56-8093-a57aa7643938", + "source_id": "891698ed-e025-4a54-865c-964a868526d7", + "sink_id": "5a51baf4-bddf-4888-9e67-6e78fbf24ab1", + "source_name": "output", + "sink_name": "text", + "is_static": false + }, + { + "id": "d58df6d7-782c-4c53-b0cc-c6e03d8ce2ca", + "source_id": "5a51baf4-bddf-4888-9e67-6e78fbf24ab1", + "sink_id": "eb8cc44d-d18d-40ff-baf2-9136d067b100", + "source_name": "negative", + "sink_name": "input", + "is_static": false + }, + { + "id": "95b8d8fd-ae38-4179-a02d-c0f6f6ba02aa", + "source_id": "fd06d8d6-cde2-4ecb-810b-d09f543c1187", + "sink_id": "76ef9396-957a-4e27-a113-cd4f85ecf6bc", + "source_name": "output", + "sink_name": "value", + "is_static": true + }, + { + "id": "fe6da5a0-810c-4520-8ab2-2aab995000e7", + "source_id": "7ccdbff5-6b66-457e-82db-99149a0e08c3", + "sink_id": "46a34959-1a6c-4ecd-af02-13a800e09607", + "source_name": "negative", + "sink_name": "input", + "is_static": false + }, + { + "id": "ac1912f8-8a9c-420e-a37e-b4f74eff0df1", + "source_id": "f0610fef-5f2f-4e98-8b45-3ac56dab56c6", + "sink_id": "61ec7945-ebd2-402b-bdcc-4640304115cb", + "source_name": "positive", + "sink_name": "value", + "is_static": false + }, + { + "id": "4ae66fd0-be6b-4f7e-a417-fcc0d083a257", + "source_id": "4a760ac4-1c75-49bb-8f3c-80f7ae432f2c", + "sink_id": "5e569677-da76-41d8-aa43-32f7ce3aeb8a", + "source_name": "output", + "sink_name": "value", + "is_static": true + }, + { + 
"id": "f2db2232-4e08-41fb-b305-edb9424364c9", + "source_id": "7ccdbff5-6b66-457e-82db-99149a0e08c3", + "sink_id": "0d867846-375c-4dfe-b32c-92fe92dc6483", + "source_name": "positive", + "sink_name": "value", + "is_static": false + }, + { + "id": "f532bfbe-c782-4e4c-843a-a7f863e120c9", + "source_id": "a20838ad-4b15-4be4-9ce0-79e99de1a8ff", + "sink_id": "4a760ac4-1c75-49bb-8f3c-80f7ae432f2c", + "source_name": "negative", + "sink_name": "input", + "is_static": false + }, + { + "id": "2cfa5d5e-fb60-4e56-a51d-347a37ee982b", + "source_id": "7d54d6b5-d70d-4393-83c2-c434794ce9a7", + "sink_id": "f0610fef-5f2f-4e98-8b45-3ac56dab56c6", + "source_name": "output", + "sink_name": "text", + "is_static": false + }, + { + "id": "cfb52131-957d-431d-a1a6-14f31928b2b8", + "source_id": "fcb277c7-2094-4a4a-87bd-2c647affd115", + "sink_id": "350280cc-6c7e-4731-81a5-6eba97b9dc96", + "source_name": "place", + "sink_name": "input", + "is_static": false + }, + { + "id": "cff3e237-f81d-4fb2-9aad-38f03d026b87", + "source_id": "4b7851a9-8c05-4823-888d-dda3dd71b1e1", + "sink_id": "e382ef86-26ca-4e13-b47f-9fd285ea1ac1", + "source_name": "result", + "sink_name": "values_#_LOCATION", + "is_static": true + }, + { + "id": "97ebd438-0eeb-4f3a-b518-9f88b73ad633", + "source_id": "34eff5c0-daf9-45da-a73d-2034cbe50561", + "sink_id": "7ccdbff5-6b66-457e-82db-99149a0e08c3", + "source_name": "error", + "sink_name": "text", + "is_static": false + }, + { + "id": "7a22a238-fc96-44eb-8e1e-cd88855e8bda", + "source_id": "fcb277c7-2094-4a4a-87bd-2c647affd115", + "sink_id": "bc185c6d-65de-4e42-ba8d-934f69419745", + "source_name": "place", + "sink_name": "input", + "is_static": false + }, + { + "id": "3f43af55-3873-4ea3-b0ee-46250300892f", + "source_id": "bc185c6d-65de-4e42-ba8d-934f69419745", + "sink_id": "34eff5c0-daf9-45da-a73d-2034cbe50561", + "source_name": "output", + "sink_name": "value", + "is_static": false + }, + { + "id": "5addd51e-f0d6-4c37-a1b1-611708e424c6", + "source_id": "350280cc-6c7e-4731-81a5-6eba97b9dc96", + "sink_id": "a20838ad-4b15-4be4-9ce0-79e99de1a8ff", + "source_name": "output", + "sink_name": "text", + "is_static": false + }, + { + "id": "62595f52-9b70-4dc3-9b0c-d5232436f93b", + "source_id": "e382ef86-26ca-4e13-b47f-9fd285ea1ac1", + "sink_id": "fcb277c7-2094-4a4a-87bd-2c647affd115", + "source_name": "output", + "sink_name": "query", + "is_static": false + }, + { + "id": "6b72b2ac-d536-4368-a86d-7384b64690a4", + "source_id": "a20838ad-4b15-4be4-9ce0-79e99de1a8ff", + "sink_id": "5e569677-da76-41d8-aa43-32f7ce3aeb8a", + "source_name": "positive", + "sink_name": "value", + "is_static": false + }, + { + "id": "3d5d4401-cf25-4e55-8a1d-25224786b393", + "source_id": "eb8cc44d-d18d-40ff-baf2-9136d067b100", + "sink_id": "460555f5-6975-47e1-838a-da5d75a3a947", + "source_name": "output", + "sink_name": "value", + "is_static": true + }, + { + "id": "8dacb5f6-2766-409a-86a1-93823237d8f5", + "source_id": "fcb277c7-2094-4a4a-87bd-2c647affd115", + "sink_id": "891698ed-e025-4a54-865c-964a868526d7", + "source_name": "place", + "sink_name": "input", + "is_static": false + }, + { + "id": "ead01791-cdac-4fc7-b54a-e4a12effdf3d", + "source_id": "fcb277c7-2094-4a4a-87bd-2c647affd115", + "sink_id": "03d3b23a-2e9d-439c-b0cf-6effbf1702a5", + "source_name": "place", + "sink_name": "input", + "is_static": false + } + ], + "forked_from_id": null, + "forked_from_version": null, + "sub_graphs": [], + "user_id": "", + "created_at": "2025-10-13T17:03:42.423Z", + "input_schema": { + "type": "object", + "properties": { + "What business type you want to 
look for?": { + "advanced": false, + "anyOf": [ + { + "format": "short-text", + "type": "string" + }, + { + "type": "null" + } + ], + "secret": false, + "title": "What business type you want to look for?", + "description": "The type of Business you're looking for, for example Hotels, Gyms or Restaurants.", + "default": "Coffee Roasters" + }, + "Where do you want to search?": { + "advanced": false, + "anyOf": [ + { + "format": "short-text", + "type": "string" + }, + { + "type": "null" + } + ], + "secret": false, + "title": "Where do you want to search?", + "description": "The area you want to search, for example London, New York or Tokyo.", + "default": "Edinburgh, UK" + }, + "Search Distance (metres)": { + "advanced": false, + "anyOf": [ + { + "type": "integer" + }, + { + "type": "null" + } + ], + "secret": false, + "title": "Search Distance (metres)", + "description": "How far around the are to look for results (Max 50000 metres)", + "default": 50000 + }, + "Number of Results": { + "advanced": false, + "anyOf": [ + { + "type": "integer" + }, + { + "type": "null" + } + ], + "secret": false, + "title": "Number of Results", + "description": "The maximum number of results to return from the search, the actual result may be less. (max 60).", + "default": 3 + } + }, + "required": [] + }, + "output_schema": { + "type": "object", + "properties": { + "Name": { + "advanced": false, + "secret": false, + "title": "Name" + }, + "Website": { + "advanced": false, + "secret": false, + "title": "Website" + }, + "Rating": { + "advanced": false, + "secret": false, + "title": "Rating" + }, + "Number of Reviews": { + "advanced": false, + "secret": false, + "title": "Number of Reviews" + }, + "Phone Number": { + "advanced": false, + "secret": false, + "title": "Phone Number" + }, + "Address": { + "advanced": false, + "secret": false, + "title": "Address" + } + }, + "required": [ + "Name", + "Website", + "Rating", + "Number of Reviews", + "Phone Number", + "Address" + ] + }, + "has_external_trigger": false, + "has_human_in_the_loop": false, + "trigger_setup_info": null, + "credentials_input_schema": { + "properties": { + "google_maps_api_key_credentials": { + "credentials_provider": [ + "google_maps" + ], + "credentials_types": [ + "api_key" + ], + "properties": { + "id": { + "title": "Id", + "type": "string" + }, + "title": { + "anyOf": [ + { + "type": "string" + }, + { + "type": "null" + } + ], + "default": null, + "title": "Title" + }, + "provider": { + "const": "google_maps", + "title": "Provider", + "type": "string" + }, + "type": { + "const": "api_key", + "title": "Type", + "type": "string" + } + }, + "required": [ + "id", + "provider", + "type" + ], + "title": "CredentialsMetaInput[Literal[], Literal['api_key']]", + "type": "object", + "discriminator_values": [] + } + }, + "required": [ + "google_maps_api_key_credentials" + ], + "title": "LeadFinderCredentialsInputSchema", + "type": "object" + } +} \ No newline at end of file diff --git a/autogpt_platform/backend/agents/agent_b6f6f0d3-49f4-4e3b-8155-ffe9141b32c0.json b/autogpt_platform/backend/agents/agent_b6f6f0d3-49f4-4e3b-8155-ffe9141b32c0.json new file mode 100644 index 0000000000..02e4fd463f --- /dev/null +++ b/autogpt_platform/backend/agents/agent_b6f6f0d3-49f4-4e3b-8155-ffe9141b32c0.json @@ -0,0 +1,1676 @@ +{ + "id": "f410b776-3603-4f84-8348-e5db9c551322", + "version": 31, + "is_active": true, + "name": "Domain Name Finder", + "description": "Overview:\nFinding a domain name that fits your brand shouldn\u2019t take hours of searching and failed 
checks. The Domain Name Finder Agent turns your pitch into hundreds of creative, brand-ready domain ideas\u2014filtered by live availability so every result is actionable.\n\nHow It Works\n1. Input your product pitch, company name, or core keywords.\n2. The agent analyzes brand tone, audience, and industry context.\n3. It generates a list of unique, memorable domains that match your criteria.\n4. All names are pre-filtered for real-time availability, so you can register immediately.\n\n\nBusiness Value\nSave hours of guesswork and eliminate dead ends. Accelerate brand launches, startup naming, and campaign creation with ready-to-claim domains.\n\n\nKey Use Cases\n\u2022 Startup Founders: Quickly find brand-ready domains for MVP launches or rebrands.\n\u2022 Marketers: Test name options across campaigns with instant availability data.\n\u2022 Entrepreneurs: Validate ideas faster with instant domain options.", + "instructions": "Pitch your website idea, run the agent, then check back in 10 mins for your bespoke set of unregistered domain names - ranked by estimated valuation and relevance to your pitch.", + "recommended_schedule_cron": null, + "nodes": [ + { + "id": "570c679c-676d-448f-bb1c-b347f36970aa", + "block_id": "0e50422c-6dee-4145-83d6-3a5a392f65de", + "input_default": { + "key": "domains" + }, + "metadata": { + "position": { + "x": 1923.7273090275667, + "y": 465.50008994458585 + } + }, + "input_links": [ + { + "id": "4e403abc-3a34-414c-9240-7a30b6f22ac5", + "source_id": "63c243b9-7c2d-4d55-9693-7eca2e66fb67", + "sink_id": "570c679c-676d-448f-bb1c-b347f36970aa", + "source_name": "parsed_xml", + "sink_name": "input", + "is_static": false + } + ], + "output_links": [ + { + "id": "486aae4d-b3db-4008-842c-05f966f00b5b", + "source_id": "570c679c-676d-448f-bb1c-b347f36970aa", + "sink_id": "7e00122a-ad6b-4e0f-92e4-d079a2173fae", + "source_name": "missing", + "sink_name": "value", + "is_static": false + }, + { + "id": "c70b4120-64a3-44d0-a960-3674735314ff", + "source_id": "570c679c-676d-448f-bb1c-b347f36970aa", + "sink_id": "f3feafe5-4971-4604-9cfa-caf8e0cf60d8", + "source_name": "output", + "sink_name": "values_#_DOMAINS", + "is_static": false + } + ], + "graph_id": "f410b776-3603-4f84-8348-e5db9c551322", + "graph_version": 31, + "webhook_id": null, + "webhook": null + }, + { + "id": "2105e9c1-0698-4a01-9d02-02d791364498", + "block_id": "1f292d4a-41a4-4977-9684-7c8d560b9f91", + "input_default": { + "model": "gpt-5-2025-08-07", + "retry": 3, + "prompt": "Generate a creative and relevant list of possible domain names that could be available (unregistered) for the product described by the given product pitch. Prioritize short, single-word English .com domains that closely relate to the product or its key features, as these are the most valuable. Only suggest .com domains. Avoid suggesting long or awkward names; names that sound like real English words but may be invented are acceptable if memorable and brandable. 
Do not include off-topic, or low-value gibberish names.\n\nBefore listing any domains, internally consider:\n- What words or ideas are most core to the product pitch?\n- Which synonyms, metaphors, or conceptual connections would make good brands?\n- What domains are likely to realistically be available and not already taken?\n- Are there any exceptional reasons to consider a non-.com extension?\n- Whether the domains are concise, attractive, and easy to remember/pronounce.\n\nOutput only the domains in a comma-separated list, enclosed in XML tags, with no additional commentary or explanation.\n\nFormat:\nexample1.com,example2.com,example3.com\n\nExample Input:\n \"A voice-based fitness app\"\n\nExample Output:\nvotone.com,soundgym.com,voicetrain.com,fitchant.com\n(Realistic use cases may generate ~500k high-quality domains per prompt.)\n\nEdge Cases & Special Considerations:\n- Never generate multi-word domains with dashes or excessive length.\n- If a non-.com TLD is used, justify internally that it is a natural fit (e.g., \".ai\" for an artificial intelligence SaaS).\n- Exclude domains that are likely to be heavily trademarked or existing brands.\n- The output must always be ONLY the XML tag with comma-separated domains.\n- Aim to write at least 500 domains\n- NEVER comment inside the csv, otherwise the csv will be corrupted and the workflow will completely break. If you want to make a comment, leave a xml set OUTSIDE the xml set with your comments. No matter what do not break this rule.\n\nHere is the product pitch you will be working with:\n\n{{product_pitch | safe}}\n\n\n\nREMINDER: Focus on creative, concise, high value .com domains aligned with the product pitch; output only as example.com,example2.com", + "ollama_host": "localhost:11434", + "prompt_values": {} + }, + "metadata": { + "position": { + "x": 803.72695210037, + "y": 465.5000381469727 + } + }, + "input_links": [ + { + "id": "6097cc32-f6e1-4311-8804-e99af44201f4", + "source_id": "ed14af2c-2251-4cef-8a70-2337251c64c0", + "sink_id": "2105e9c1-0698-4a01-9d02-02d791364498", + "source_name": "output", + "sink_name": "prompt_values_#_product_pitch", + "is_static": true + } + ], + "output_links": [ + { + "id": "ddbcf470-de3a-4955-89aa-12291f26a059", + "source_id": "2105e9c1-0698-4a01-9d02-02d791364498", + "sink_id": "74ae8dfb-4ec3-442d-a965-9fcc95fc8a48", + "source_name": "error", + "sink_name": "input", + "is_static": false + }, + { + "id": "7427c0ae-ec91-4bd5-9a72-1ef24fc11430", + "source_id": "2105e9c1-0698-4a01-9d02-02d791364498", + "sink_id": "63c243b9-7c2d-4d55-9693-7eca2e66fb67", + "source_name": "response", + "sink_name": "input_xml", + "is_static": false + } + ], + "graph_id": "f410b776-3603-4f84-8348-e5db9c551322", + "graph_version": 31, + "webhook_id": null, + "webhook": null + }, + { + "id": "63c243b9-7c2d-4d55-9693-7eca2e66fb67", + "block_id": "286380af-9529-4b55-8be0-1d7c854abdb5", + "input_default": {}, + "metadata": { + "position": { + "x": 1363.7269529646674, + "y": 465.5000376599629 + } + }, + "input_links": [ + { + "id": "7427c0ae-ec91-4bd5-9a72-1ef24fc11430", + "source_id": "2105e9c1-0698-4a01-9d02-02d791364498", + "sink_id": "63c243b9-7c2d-4d55-9693-7eca2e66fb67", + "source_name": "response", + "sink_name": "input_xml", + "is_static": false + } + ], + "output_links": [ + { + "id": "2430664a-6580-428b-a317-36c7675539c1", + "source_id": "63c243b9-7c2d-4d55-9693-7eca2e66fb67", + "sink_id": "7e00122a-ad6b-4e0f-92e4-d079a2173fae", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + 
"id": "4e403abc-3a34-414c-9240-7a30b6f22ac5", + "source_id": "63c243b9-7c2d-4d55-9693-7eca2e66fb67", + "sink_id": "570c679c-676d-448f-bb1c-b347f36970aa", + "source_name": "parsed_xml", + "sink_name": "input", + "is_static": false + } + ], + "graph_id": "f410b776-3603-4f84-8348-e5db9c551322", + "graph_version": 31, + "webhook_id": null, + "webhook": null + }, + { + "id": "524bd1e9-17c2-4b11-ab3e-2f1809823c7e", + "block_id": "90a56ffb-7024-4b2b-ab50-e26c5e5ab8ba", + "input_default": { + "name": "Product Pitch", + "secret": false, + "advanced": false, + "description": "Pitch your product!", + "placeholder_values": [] + }, + "metadata": { + "position": { + "x": -534.5334675859784, + "y": 458.4006590838727 + } + }, + "input_links": [], + "output_links": [ + { + "id": "e53ea15b-4d84-494d-9867-71e454f7c867", + "source_id": "524bd1e9-17c2-4b11-ab3e-2f1809823c7e", + "sink_id": "55b1f3b7-9b38-45dd-8fb7-c4c7f0037c04", + "source_name": "result", + "sink_name": "prompt_values_#_product_pitch", + "is_static": true + }, + { + "id": "34ba9aec-eb91-4760-bde7-c98685b95f5a", + "source_id": "524bd1e9-17c2-4b11-ab3e-2f1809823c7e", + "sink_id": "ed14af2c-2251-4cef-8a70-2337251c64c0", + "source_name": "result", + "sink_name": "input", + "is_static": true + } + ], + "graph_id": "f410b776-3603-4f84-8348-e5db9c551322", + "graph_version": 31, + "webhook_id": null, + "webhook": null + }, + { + "id": "1177aa11-112e-4644-996c-cb9bab6e03ba", + "block_id": "0b02b072-abe7-11ef-8372-fb5d162dd712", + "input_default": { + "timeout": 300, + "language": "python", + "setup_commands": [], + "dispose_sandbox": true + }, + "metadata": { + "position": { + "x": 3084.835212232708, + "y": 469.16536240312246 + } + }, + "input_links": [ + { + "id": "9f73853c-27bc-40c0-ac1b-b6aee896c92c", + "source_id": "f3feafe5-4971-4604-9cfa-caf8e0cf60d8", + "sink_id": "1177aa11-112e-4644-996c-cb9bab6e03ba", + "source_name": "output", + "sink_name": "code", + "is_static": false + } + ], + "output_links": [ + { + "id": "c9ab9fb0-391c-46bc-b6e7-13fe3125a5f1", + "source_id": "1177aa11-112e-4644-996c-cb9bab6e03ba", + "sink_id": "7e00122a-ad6b-4e0f-92e4-d079a2173fae", + "source_name": "stderr_logs", + "sink_name": "value", + "is_static": false + }, + { + "id": "5f5dba52-1c34-4621-9047-94185ed471f6", + "source_id": "1177aa11-112e-4644-996c-cb9bab6e03ba", + "sink_id": "52555e06-ac46-4ed2-97dd-c2e46c0985f4", + "source_name": "response", + "sink_name": "input_xml", + "is_static": false + }, + { + "id": "0b604734-0417-4a56-b3c3-6c841ee764df", + "source_id": "1177aa11-112e-4644-996c-cb9bab6e03ba", + "sink_id": "52555e06-ac46-4ed2-97dd-c2e46c0985f4", + "source_name": "stdout_logs", + "sink_name": "input_xml", + "is_static": false + }, + { + "id": "ab8d3920-8582-426d-92a1-0cf3bdd31efe", + "source_id": "1177aa11-112e-4644-996c-cb9bab6e03ba", + "sink_id": "7e00122a-ad6b-4e0f-92e4-d079a2173fae", + "source_name": "error", + "sink_name": "value", + "is_static": false + } + ], + "graph_id": "f410b776-3603-4f84-8348-e5db9c551322", + "graph_version": 31, + "webhook_id": null, + "webhook": null + }, + { + "id": "55b1f3b7-9b38-45dd-8fb7-c4c7f0037c04", + "block_id": "1f292d4a-41a4-4977-9684-7c8d560b9f91", + "input_default": { + "model": "gpt-5-2025-08-07", + "retry": 3, + "prompt": "You are a domain valuation expert tasked with ranking a list of domain names based on their value and relevance to a specific product. You will be provided with two key pieces of information:\n\n1. A product pitch:\n\n{{product_pitch | safe}}\n\n\n2. 
A list of domain names to rank:\n\n{{domains | safe}}\n\n\nYour task is to analyze these domains and rank them based on two primary factors:\n1. The inherent value of the domain name (most important)\n2. The relevance and potential value of the domain for the specific product described in the product pitch\n\nPlease follow these steps:\n\n1. Read the product pitch carefully and extract key product features and target audience information. List these in tags.\n\n2. Identify relevant keywords for the product based on the pitch. List these in tags.\n\n3. For each domain in the provided list - carefully consider it, then pick your top 10 domains from the list for the following analysis:\n a. Evaluate the domain based on the following factors:\n - Length (number of characters)\n - Memorability\n - Brandability\n - Keyword relevance\n - TLD (Top-Level Domain)\n - Potential commercial value\n\n b. Calculate a Domain Quality Score (1-10) using the following method:\n - Length Score (LS):\n 10: 1\u20134 chars\n 9: 5\n 8: 6\n 7: 7\n 6: 8\n 5: 9\n 4: 10\n 3: 11\u201312\n 2: 13\u201314\n 1: 15+\n\n - Word Score (WS):\n * Base (BWC): 1 word =5, 2 words =4, 3w=3, 4w=2, 5+ words =1\n * Quality Modifier (WQM): +1 English dictionary word, 0 common/brandable, \u20131 meh brand/obscure foreign, \u20132 unpronounceable\n * Word Score (WS) = clamp(BWC + WQM, 1\u20135)\u00d72\n\n - Penalties: \u20131 hyphen, \u20131 number, \u20130.5 triple letter\n\n - Final Score = clamp(round(0.6\u00d7WS + 0.4\u00d7LS + penalties), 1\u201310)\n\n c. Consider how the domain relates to the product described in the product pitch.\n\nWrap your analysis in a single set of tags. \nInclude the following for each domain:\n1. Verify that the domain is present in the original list provided.\n2. A table with the following columns: Factor, Calculated Score, Justification.\n3. The calculated Domain Quality Score, showing each step of the calculation.\n4. An explanation of how the domain relates to the product pitch.\n\nAfter evaluating your potential top 10 domains, present your final ranking as a newline-separated list of domains inside a single complete set of XML tags. 
Ensure that only domains from the original list are included in your ranking.\n\nIf there are more than 30 domains in the list, just list your top 30.\n\nExample output format:\n\n\ndomain1.com\ndomain2.net\ndomain3.org\n\n\nPlease proceed with your evaluation and ranking of the provided domain list.", + "ollama_host": "localhost:11434", + "prompt_values": {} + }, + "metadata": { + "position": { + "x": 6417.294788842872, + "y": 443.3113161801738 + } + }, + "input_links": [ + { + "id": "e53ea15b-4d84-494d-9867-71e454f7c867", + "source_id": "524bd1e9-17c2-4b11-ab3e-2f1809823c7e", + "sink_id": "55b1f3b7-9b38-45dd-8fb7-c4c7f0037c04", + "source_name": "result", + "sink_name": "prompt_values_#_product_pitch", + "is_static": true + }, + { + "id": "4260a459-1cda-411e-9c32-1e886d49f444", + "source_id": "187169c5-388f-4be3-ae67-ff46fb091ba3", + "sink_id": "55b1f3b7-9b38-45dd-8fb7-c4c7f0037c04", + "source_name": "no_output", + "sink_name": "prompt_values_#_domains", + "is_static": false + } + ], + "output_links": [ + { + "id": "781b8b03-e645-42f6-b9ee-18727a04bb42", + "source_id": "55b1f3b7-9b38-45dd-8fb7-c4c7f0037c04", + "sink_id": "aa67b2a4-e886-4381-9bea-65fe4d6e0a07", + "source_name": "response", + "sink_name": "input_xml", + "is_static": false + }, + { + "id": "6b8523b5-ce6e-4cd4-80a6-6be54b9be4ab", + "source_id": "55b1f3b7-9b38-45dd-8fb7-c4c7f0037c04", + "sink_id": "7e00122a-ad6b-4e0f-92e4-d079a2173fae", + "source_name": "error", + "sink_name": "value", + "is_static": false + } + ], + "graph_id": "f410b776-3603-4f84-8348-e5db9c551322", + "graph_version": 31, + "webhook_id": null, + "webhook": null + }, + { + "id": "aa67b2a4-e886-4381-9bea-65fe4d6e0a07", + "block_id": "286380af-9529-4b55-8be0-1d7c854abdb5", + "input_default": {}, + "metadata": { + "position": { + "x": 6977.294417024638, + "y": 458.46844766434674 + } + }, + "input_links": [ + { + "id": "781b8b03-e645-42f6-b9ee-18727a04bb42", + "source_id": "55b1f3b7-9b38-45dd-8fb7-c4c7f0037c04", + "sink_id": "aa67b2a4-e886-4381-9bea-65fe4d6e0a07", + "source_name": "response", + "sink_name": "input_xml", + "is_static": false + } + ], + "output_links": [ + { + "id": "d28783ab-8818-4f28-ae66-d9a04ad33455", + "source_id": "aa67b2a4-e886-4381-9bea-65fe4d6e0a07", + "sink_id": "63df7668-8ada-4525-a1a1-7baae5efc103", + "source_name": "parsed_xml", + "sink_name": "input", + "is_static": false + }, + { + "id": "cf5cd1d2-24ac-4126-ad0f-d934b7862e3a", + "source_id": "aa67b2a4-e886-4381-9bea-65fe4d6e0a07", + "sink_id": "7e00122a-ad6b-4e0f-92e4-d079a2173fae", + "source_name": "error", + "sink_name": "value", + "is_static": false + } + ], + "graph_id": "f410b776-3603-4f84-8348-e5db9c551322", + "graph_version": 31, + "webhook_id": null, + "webhook": null + }, + { + "id": "63df7668-8ada-4525-a1a1-7baae5efc103", + "block_id": "0e50422c-6dee-4145-83d6-3a5a392f65de", + "input_default": { + "key": "domains" + }, + "metadata": { + "position": { + "x": 7537.294428844076, + "y": 458.46840935125437 + } + }, + "input_links": [ + { + "id": "d28783ab-8818-4f28-ae66-d9a04ad33455", + "source_id": "aa67b2a4-e886-4381-9bea-65fe4d6e0a07", + "sink_id": "63df7668-8ada-4525-a1a1-7baae5efc103", + "source_name": "parsed_xml", + "sink_name": "input", + "is_static": false + } + ], + "output_links": [ + { + "id": "d6bd5988-cee8-4ac9-873f-a5dbd3bc8da5", + "source_id": "63df7668-8ada-4525-a1a1-7baae5efc103", + "sink_id": "32f86709-810b-4f20-a2f7-a4027ff05094", + "source_name": "output", + "sink_name": "values_#_ranked_domains_block", + "is_static": false + }, + { + "id": 
"99886544-d322-4179-838f-a5b01292fcdb", + "source_id": "63df7668-8ada-4525-a1a1-7baae5efc103", + "sink_id": "7e00122a-ad6b-4e0f-92e4-d079a2173fae", + "source_name": "missing", + "sink_name": "value", + "is_static": false + } + ], + "graph_id": "f410b776-3603-4f84-8348-e5db9c551322", + "graph_version": 31, + "webhook_id": null, + "webhook": null + }, + { + "id": "ea561c3b-3d18-4b8e-8842-065d06309502", + "block_id": "363ae599-353e-4804-937e-b2ee3cef3da4", + "input_default": { + "name": "Domain Suggestions", + "secret": false, + "advanced": false, + "escape_html": false + }, + "metadata": { + "position": { + "x": 9300.394546088242, + "y": 452.78448671411047 + } + }, + "input_links": [ + { + "id": "d3c06626-b5b0-4660-8733-b5167cbdd26c", + "source_id": "32f86709-810b-4f20-a2f7-a4027ff05094", + "sink_id": "ea561c3b-3d18-4b8e-8842-065d06309502", + "source_name": "output", + "sink_name": "value", + "is_static": false + } + ], + "output_links": [], + "graph_id": "f410b776-3603-4f84-8348-e5db9c551322", + "graph_version": 31, + "webhook_id": null, + "webhook": null + }, + { + "id": "f3feafe5-4971-4604-9cfa-caf8e0cf60d8", + "block_id": "db7d8f02-2f44-4c55-ab7a-eae0941f0c30", + "input_default": { + "format": "#!/usr/bin/env python3\n\"\"\"\ncheck_domains_simple_com.py \u2014 Check .com domain availability via RDAP.\n\nHow to use:\n1) Put comma-separated .com domains into DOMAINS_TEXT below.\n - Each comma-delimited token must be an exact .com domain (e.g., \"foo.com\").\n - Anything that is not a valid .com domain counts as an input error.\n2) Run: python3 check_domains_simple_com.py\n3) Output summary sections and lists of available/unavailable/error tokens.\n\"\"\"\n\nfrom __future__ import annotations\n\nimport re\nfrom concurrent.futures import ThreadPoolExecutor, as_completed\nfrom typing import Dict, List, Tuple\n\nimport requests\nfrom requests.adapters import HTTPAdapter\n\ntry:\n from urllib3.util.retry import Retry # type: ignore\nexcept Exception: # Retry optional\n Retry = None # type: ignore\n\n# ---------------------------------------------------------------------\n# \ud83d\udcdd Paste your names/domains here (commas and/or newlines are fine).\nDOMAINS_TEXT = \"\"\"{{DOMAINS | safe}}\"\"\"\n# ---------------------------------------------------------------------\n\n# Concurrency / networking\nWORKERS = 80\nTIMEOUT = 12.0\n\nHEADERS = {\n \"Accept\": \"application/rdap+json, application/json;q=0.9, */*;q=0.1\",\n \"User-Agent\": \"domain-checker/1.0 (+https://example.com)\",\n}\n\n# Verisign RDAP for .com\nCOM_ENDPOINT = \"https://rdap.verisign.com/com/v1/domain/{}\"\n\n\ndef normalize_label(s: str) -> str:\n \"\"\"Lowercase and keep only a\u2013z and 0\u20139.\"\"\"\n return re.sub(r\"[^a-z0-9]\", \"\", s.lower())\n\nCOM_DOMAIN_RE = re.compile(r\"^(?=.{1,253}$)(?!-)[a-z0-9](?:[a-z0-9-]{0,61}[a-z0-9])?\\.com$\")\n\n\ndef domains_from_text(text: str) -> Tuple[List[str], List[str]]:\n \"\"\"Parse comma-separated tokens; accept only exact .com domains.\n\n Returns (valid_domains, invalid_tokens).\n \"\"\"\n tokens: List[str] = []\n for part in (text or \"\").split(\",\"):\n val = part.strip()\n if val:\n tokens.append(val)\n\n valid_domains: List[str] = []\n invalid_tokens: List[str] = []\n for token in tokens:\n s = token.strip().lower()\n if COM_DOMAIN_RE.match(s):\n # De-dup preserving order for domains only\n if s not in valid_domains:\n valid_domains.append(s)\n else:\n invalid_tokens.append(token)\n\n return valid_domains, invalid_tokens\n\n\ndef build_session(pool_size: int) -> 
requests.Session:\n session = requests.Session()\n session.headers.update(HEADERS)\n adapter_kwargs = {\"pool_connections\": pool_size, \"pool_maxsize\": pool_size}\n if Retry is not None:\n retry = Retry(\n total=3,\n backoff_factor=0.3,\n status_forcelist=(429, 500, 502, 503, 504),\n allowed_methods=(\"GET\",),\n raise_on_status=False,\n )\n adapter = HTTPAdapter(max_retries=retry, **adapter_kwargs)\n else:\n adapter = HTTPAdapter(**adapter_kwargs)\n session.mount(\"https://\", adapter)\n session.mount(\"http://\", adapter)\n return session\n\n\ndef rdap_lookup_com(domain: str, session: requests.Session) -> Dict[str, str]:\n \"\"\"Return {'domain': str, 'status': 'AVAILABLE'|'REGISTERED'|'ERROR'|'UNKNOWN_xx'}.\"\"\"\n url = COM_ENDPOINT.format(domain)\n try:\n r = session.get(url, timeout=TIMEOUT)\n except requests.RequestException as e:\n return {\"domain\": domain, \"status\": \"ERROR\", \"error\": str(e)}\n if r.status_code == 404:\n return {\"domain\": domain, \"status\": \"AVAILABLE\"}\n if r.status_code == 200:\n return {\"domain\": domain, \"status\": \"REGISTERED\"}\n return {\"domain\": domain, \"status\": f\"UNKNOWN_{r.status_code}\", \"error\": r.text[:200].replace(\"\\n\", \" \")}\n\n\ndef main() -> int:\n domains, invalid_tokens = domains_from_text(DOMAINS_TEXT)\n if not domains and not invalid_tokens:\n return 0\n\n # Preserve input order\n domain_to_index: Dict[str, int] = {d: i for i, d in enumerate(domains)}\n\n # Connection pool\n pool_size = max(1, min(WORKERS, len(domains)))\n session = build_session(pool_size)\n\n rows = []\n with ThreadPoolExecutor(max_workers=pool_size) as executor:\n futs = {executor.submit(rdap_lookup_com, d, session): d for d in domains}\n for fut in as_completed(futs):\n rows.append(fut.result())\n\n # Restore input order\n rows.sort(key=lambda r: domain_to_index.get(r.get(\"domain\", \"\"), 10**9))\n\n # Group domains by availability\n available = []\n unavailable = []\n errors = []\n for r in rows:\n if r.get(\"status\") == \"AVAILABLE\" and r.get(\"domain\"):\n available.append(r[\"domain\"])\n elif r.get(\"status\") == \"REGISTERED\" and r.get(\"domain\"):\n unavailable.append(r[\"domain\"])\n elif r.get(\"domain\"):\n errors.append(r[\"domain\"])\n\n # Include input parsing errors (non-.com tokens)\n errors.extend(invalid_tokens)\n\n # Print in requested format\n print(\"\")\n total = len(available) + len(unavailable)\n print(f\" I checked {total} domains for you.\")\n print(f\"Domains available to register now: {len(available)}\")\n print(f\"Domains already registered: {len(unavailable)}\")\n print(\"\")\n print(\"\")\n for domain in available:\n print(domain)\n print(\"\")\n print(\"\")\n for domain in unavailable:\n print(domain)\n print(\"\")\n print(\"\")\n for domain in errors:\n print(domain)\n print(\"\")\n\n return 0\n\n\nif __name__ == \"__main__\":\n _ = main()\n", + "values": {}, + "escape_html": false + }, + "metadata": { + "position": { + "x": 2490.3486926884043, + "y": 464.1087684425463 + } + }, + "input_links": [ + { + "id": "c70b4120-64a3-44d0-a960-3674735314ff", + "source_id": "570c679c-676d-448f-bb1c-b347f36970aa", + "sink_id": "f3feafe5-4971-4604-9cfa-caf8e0cf60d8", + "source_name": "output", + "sink_name": "values_#_DOMAINS", + "is_static": false + } + ], + "output_links": [ + { + "id": "9f73853c-27bc-40c0-ac1b-b6aee896c92c", + "source_id": "f3feafe5-4971-4604-9cfa-caf8e0cf60d8", + "sink_id": "1177aa11-112e-4644-996c-cb9bab6e03ba", + "source_name": "output", + "sink_name": "code", + "is_static": false + } + ], + 
"graph_id": "f410b776-3603-4f84-8348-e5db9c551322", + "graph_version": 31, + "webhook_id": null, + "webhook": null + }, + { + "id": "7e00122a-ad6b-4e0f-92e4-d079a2173fae", + "block_id": "363ae599-353e-4804-937e-b2ee3cef3da4", + "input_default": { + "name": "Error", + "secret": false, + "advanced": false, + "escape_html": false + }, + "metadata": { + "position": { + "x": 2215.3143256007343, + "y": 3870.960518458868 + } + }, + "input_links": [ + { + "id": "2430664a-6580-428b-a317-36c7675539c1", + "source_id": "63c243b9-7c2d-4d55-9693-7eca2e66fb67", + "sink_id": "7e00122a-ad6b-4e0f-92e4-d079a2173fae", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "c49011f5-3dd8-416a-9604-be450f0f465f", + "source_id": "52555e06-ac46-4ed2-97dd-c2e46c0985f4", + "sink_id": "7e00122a-ad6b-4e0f-92e4-d079a2173fae", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "c9ab9fb0-391c-46bc-b6e7-13fe3125a5f1", + "source_id": "1177aa11-112e-4644-996c-cb9bab6e03ba", + "sink_id": "7e00122a-ad6b-4e0f-92e4-d079a2173fae", + "source_name": "stderr_logs", + "sink_name": "value", + "is_static": false + }, + { + "id": "2607c55b-1f24-47f8-acd4-4df5a51171b6", + "source_id": "aaff98f5-ebea-4dbb-b927-35e2fb20be3c", + "sink_id": "7e00122a-ad6b-4e0f-92e4-d079a2173fae", + "source_name": "missing", + "sink_name": "value", + "is_static": false + }, + { + "id": "486aae4d-b3db-4008-842c-05f966f00b5b", + "source_id": "570c679c-676d-448f-bb1c-b347f36970aa", + "sink_id": "7e00122a-ad6b-4e0f-92e4-d079a2173fae", + "source_name": "missing", + "sink_name": "value", + "is_static": false + }, + { + "id": "27dd643a-09e8-4d49-814d-f8015c7a7f6f", + "source_id": "64bc1c5d-720c-4ac4-ae0f-1c20a9d41ad9", + "sink_id": "7e00122a-ad6b-4e0f-92e4-d079a2173fae", + "source_name": "missing", + "sink_name": "value", + "is_static": false + }, + { + "id": "ab8d3920-8582-426d-92a1-0cf3bdd31efe", + "source_id": "1177aa11-112e-4644-996c-cb9bab6e03ba", + "sink_id": "7e00122a-ad6b-4e0f-92e4-d079a2173fae", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "d3f1d3e7-36e5-450a-a810-9f534a12adac", + "source_id": "81a851af-8465-4154-83f5-36aad0ad2ba5", + "sink_id": "7e00122a-ad6b-4e0f-92e4-d079a2173fae", + "source_name": "no_output", + "sink_name": "value", + "is_static": false + }, + { + "id": "99886544-d322-4179-838f-a5b01292fcdb", + "source_id": "63df7668-8ada-4525-a1a1-7baae5efc103", + "sink_id": "7e00122a-ad6b-4e0f-92e4-d079a2173fae", + "source_name": "missing", + "sink_name": "value", + "is_static": false + }, + { + "id": "cf5cd1d2-24ac-4126-ad0f-d934b7862e3a", + "source_id": "aa67b2a4-e886-4381-9bea-65fe4d6e0a07", + "sink_id": "7e00122a-ad6b-4e0f-92e4-d079a2173fae", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "6b8523b5-ce6e-4cd4-80a6-6be54b9be4ab", + "source_id": "55b1f3b7-9b38-45dd-8fb7-c4c7f0037c04", + "sink_id": "7e00122a-ad6b-4e0f-92e4-d079a2173fae", + "source_name": "error", + "sink_name": "value", + "is_static": false + } + ], + "output_links": [], + "graph_id": "f410b776-3603-4f84-8348-e5db9c551322", + "graph_version": 31, + "webhook_id": null, + "webhook": null + }, + { + "id": "52555e06-ac46-4ed2-97dd-c2e46c0985f4", + "block_id": "286380af-9529-4b55-8be0-1d7c854abdb5", + "input_default": {}, + "metadata": { + "position": { + "x": 3636.000439211476, + "y": 457.0000605027634 + } + }, + "input_links": [ + { + "id": "5f5dba52-1c34-4621-9047-94185ed471f6", + "source_id": "1177aa11-112e-4644-996c-cb9bab6e03ba", 
+ "sink_id": "52555e06-ac46-4ed2-97dd-c2e46c0985f4", + "source_name": "response", + "sink_name": "input_xml", + "is_static": false + }, + { + "id": "0b604734-0417-4a56-b3c3-6c841ee764df", + "source_id": "1177aa11-112e-4644-996c-cb9bab6e03ba", + "sink_id": "52555e06-ac46-4ed2-97dd-c2e46c0985f4", + "source_name": "stdout_logs", + "sink_name": "input_xml", + "is_static": false + } + ], + "output_links": [ + { + "id": "73c613c5-92c5-4a72-a610-6f45a43212a4", + "source_id": "52555e06-ac46-4ed2-97dd-c2e46c0985f4", + "sink_id": "aaff98f5-ebea-4dbb-b927-35e2fb20be3c", + "source_name": "parsed_xml", + "sink_name": "input", + "is_static": false + }, + { + "id": "c49011f5-3dd8-416a-9604-be450f0f465f", + "source_id": "52555e06-ac46-4ed2-97dd-c2e46c0985f4", + "sink_id": "7e00122a-ad6b-4e0f-92e4-d079a2173fae", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "b77b60d6-23b9-4700-ade8-7afa1f4ee3d0", + "source_id": "52555e06-ac46-4ed2-97dd-c2e46c0985f4", + "sink_id": "64bc1c5d-720c-4ac4-ae0f-1c20a9d41ad9", + "source_name": "parsed_xml", + "sink_name": "input", + "is_static": false + } + ], + "graph_id": "f410b776-3603-4f84-8348-e5db9c551322", + "graph_version": 31, + "webhook_id": null, + "webhook": null + }, + { + "id": "64bc1c5d-720c-4ac4-ae0f-1c20a9d41ad9", + "block_id": "0e50422c-6dee-4145-83d6-3a5a392f65de", + "input_default": { + "key": "available" + }, + "metadata": { + "position": { + "x": 4338.8348745433095, + "y": 451.31610498162377 + } + }, + "input_links": [ + { + "id": "b77b60d6-23b9-4700-ade8-7afa1f4ee3d0", + "source_id": "52555e06-ac46-4ed2-97dd-c2e46c0985f4", + "sink_id": "64bc1c5d-720c-4ac4-ae0f-1c20a9d41ad9", + "source_name": "parsed_xml", + "sink_name": "input", + "is_static": false + } + ], + "output_links": [ + { + "id": "b00144a7-177c-46a1-aa25-c2be45823fc1", + "source_id": "64bc1c5d-720c-4ac4-ae0f-1c20a9d41ad9", + "sink_id": "187169c5-388f-4be3-ae67-ff46fb091ba3", + "source_name": "output", + "sink_name": "no_value", + "is_static": false + }, + { + "id": "87c5dbb4-6741-4c86-beca-f40e0dd79995", + "source_id": "64bc1c5d-720c-4ac4-ae0f-1c20a9d41ad9", + "sink_id": "84659cbb-1184-4040-a1af-2f2d785cac3e", + "source_name": "output", + "sink_name": "collection", + "is_static": false + }, + { + "id": "27dd643a-09e8-4d49-814d-f8015c7a7f6f", + "source_id": "64bc1c5d-720c-4ac4-ae0f-1c20a9d41ad9", + "sink_id": "7e00122a-ad6b-4e0f-92e4-d079a2173fae", + "source_name": "missing", + "sink_name": "value", + "is_static": false + } + ], + "graph_id": "f410b776-3603-4f84-8348-e5db9c551322", + "graph_version": 31, + "webhook_id": null, + "webhook": null + }, + { + "id": "aaff98f5-ebea-4dbb-b927-35e2fb20be3c", + "block_id": "0e50422c-6dee-4145-83d6-3a5a392f65de", + "input_default": { + "key": "results_summary" + }, + "metadata": { + "position": { + "x": 4211.577175606866, + "y": -649.3408490608967 + } + }, + "input_links": [ + { + "id": "73c613c5-92c5-4a72-a610-6f45a43212a4", + "source_id": "52555e06-ac46-4ed2-97dd-c2e46c0985f4", + "sink_id": "aaff98f5-ebea-4dbb-b927-35e2fb20be3c", + "source_name": "parsed_xml", + "sink_name": "input", + "is_static": false + } + ], + "output_links": [ + { + "id": "f7307ebc-189f-4554-ba9a-29b66f11fd6d", + "source_id": "aaff98f5-ebea-4dbb-b927-35e2fb20be3c", + "sink_id": "32f86709-810b-4f20-a2f7-a4027ff05094", + "source_name": "output", + "sink_name": "values_#_summary", + "is_static": false + }, + { + "id": "2607c55b-1f24-47f8-acd4-4df5a51171b6", + "source_id": "aaff98f5-ebea-4dbb-b927-35e2fb20be3c", + "sink_id": 
"7e00122a-ad6b-4e0f-92e4-d079a2173fae", + "source_name": "missing", + "sink_name": "value", + "is_static": false + } + ], + "graph_id": "f410b776-3603-4f84-8348-e5db9c551322", + "graph_version": 31, + "webhook_id": null, + "webhook": null + }, + { + "id": "ed14af2c-2251-4cef-8a70-2337251c64c0", + "block_id": "1ff065e9-88e8-4358-9d82-8dc91f622ba9", + "input_default": {}, + "metadata": { + "position": { + "x": 136.30286534603857, + "y": 457.04507015080037 + } + }, + "input_links": [ + { + "id": "de188d9a-a33f-4790-9891-914046e255b7", + "source_id": "81a851af-8465-4154-83f5-36aad0ad2ba5", + "sink_id": "ed14af2c-2251-4cef-8a70-2337251c64c0", + "source_name": "yes_output", + "sink_name": "input", + "is_static": false + }, + { + "id": "34ba9aec-eb91-4760-bde7-c98685b95f5a", + "source_id": "524bd1e9-17c2-4b11-ab3e-2f1809823c7e", + "sink_id": "ed14af2c-2251-4cef-8a70-2337251c64c0", + "source_name": "result", + "sink_name": "input", + "is_static": true + } + ], + "output_links": [ + { + "id": "6097cc32-f6e1-4311-8804-e99af44201f4", + "source_id": "ed14af2c-2251-4cef-8a70-2337251c64c0", + "sink_id": "2105e9c1-0698-4a01-9d02-02d791364498", + "source_name": "output", + "sink_name": "prompt_values_#_product_pitch", + "is_static": true + } + ], + "graph_id": "f410b776-3603-4f84-8348-e5db9c551322", + "graph_version": 31, + "webhook_id": null, + "webhook": null + }, + { + "id": "81a851af-8465-4154-83f5-36aad0ad2ba5", + "block_id": "715696a0-e1da-45c8-b209-c2fa9c3b0be6", + "input_default": { + "value2": "3", + "no_value": "The AI failed to successfully come up with domain names to check.\n\nThis could either be due to the input you provided, or an error in the system.\n\nPlease try again, but if this keeps happening please report it!", + "operator": "<" + }, + "metadata": { + "position": { + "x": 660.6968537410919, + "y": 2498.7048054763927 + } + }, + "input_links": [ + { + "id": "02f129e4-bd3b-4969-aceb-21ea389572e2", + "source_id": "0963256f-3d16-47b2-808c-91b6b2f88aee", + "sink_id": "81a851af-8465-4154-83f5-36aad0ad2ba5", + "source_name": "result", + "sink_name": "value1", + "is_static": false + } + ], + "output_links": [ + { + "id": "de188d9a-a33f-4790-9891-914046e255b7", + "source_id": "81a851af-8465-4154-83f5-36aad0ad2ba5", + "sink_id": "ed14af2c-2251-4cef-8a70-2337251c64c0", + "source_name": "yes_output", + "sink_name": "input", + "is_static": false + }, + { + "id": "d3f1d3e7-36e5-450a-a810-9f534a12adac", + "source_id": "81a851af-8465-4154-83f5-36aad0ad2ba5", + "sink_id": "7e00122a-ad6b-4e0f-92e4-d079a2173fae", + "source_name": "no_output", + "sink_name": "value", + "is_static": false + } + ], + "graph_id": "f410b776-3603-4f84-8348-e5db9c551322", + "graph_version": 31, + "webhook_id": null, + "webhook": null + }, + { + "id": "0963256f-3d16-47b2-808c-91b6b2f88aee", + "block_id": "b1ab9b19-67a6-406d-abf5-2dba76d00c79", + "input_default": { + "a": 0, + "b": 1, + "operation": "Add", + "round_result": false + }, + "metadata": { + "position": { + "x": 13.885754497174958, + "y": 2477.6485504601296 + } + }, + "input_links": [ + { + "id": "de32fe82-38e2-40cc-9308-810ceb3e382c", + "source_id": "74ae8dfb-4ec3-442d-a965-9fcc95fc8a48", + "sink_id": "0963256f-3d16-47b2-808c-91b6b2f88aee", + "source_name": "output", + "sink_name": "b", + "is_static": true + }, + { + "id": "caeaf988-2aac-46a3-b396-f14f4d61b4c7", + "source_id": "694737bd-4ca2-4927-9e3a-62d32e2bfcb3", + "sink_id": "0963256f-3d16-47b2-808c-91b6b2f88aee", + "source_name": "value", + "sink_name": "a", + "is_static": false + }, + { + "id": 
"8766de74-96ab-4607-8994-4f7b333a59d7", + "source_id": "0963256f-3d16-47b2-808c-91b6b2f88aee", + "sink_id": "0963256f-3d16-47b2-808c-91b6b2f88aee", + "source_name": "result", + "sink_name": "a", + "is_static": false + } + ], + "output_links": [ + { + "id": "02f129e4-bd3b-4969-aceb-21ea389572e2", + "source_id": "0963256f-3d16-47b2-808c-91b6b2f88aee", + "sink_id": "81a851af-8465-4154-83f5-36aad0ad2ba5", + "source_name": "result", + "sink_name": "value1", + "is_static": false + }, + { + "id": "8766de74-96ab-4607-8994-4f7b333a59d7", + "source_id": "0963256f-3d16-47b2-808c-91b6b2f88aee", + "sink_id": "0963256f-3d16-47b2-808c-91b6b2f88aee", + "source_name": "result", + "sink_name": "a", + "is_static": false + } + ], + "graph_id": "f410b776-3603-4f84-8348-e5db9c551322", + "graph_version": 31, + "webhook_id": null, + "webhook": null + }, + { + "id": "694737bd-4ca2-4927-9e3a-62d32e2bfcb3", + "block_id": "95d1b990-ce13-4d88-9737-ba5c2070c97b", + "input_default": { + "type": "number", + "value": "0" + }, + "metadata": { + "position": { + "x": -752.5331319770655, + "y": 2129.182323495746 + } + }, + "input_links": [], + "output_links": [ + { + "id": "caeaf988-2aac-46a3-b396-f14f4d61b4c7", + "source_id": "694737bd-4ca2-4927-9e3a-62d32e2bfcb3", + "sink_id": "0963256f-3d16-47b2-808c-91b6b2f88aee", + "source_name": "value", + "sink_name": "a", + "is_static": false + } + ], + "graph_id": "f410b776-3603-4f84-8348-e5db9c551322", + "graph_version": 31, + "webhook_id": null, + "webhook": null + }, + { + "id": "74ae8dfb-4ec3-442d-a965-9fcc95fc8a48", + "block_id": "1ff065e9-88e8-4358-9d82-8dc91f622ba9", + "input_default": {}, + "metadata": { + "position": { + "x": -739.4623196973166, + "y": 2733.163868913187 + } + }, + "input_links": [ + { + "id": "ddbcf470-de3a-4955-89aa-12291f26a059", + "source_id": "2105e9c1-0698-4a01-9d02-02d791364498", + "sink_id": "74ae8dfb-4ec3-442d-a965-9fcc95fc8a48", + "source_name": "error", + "sink_name": "input", + "is_static": false + }, + { + "id": "7939e7f5-d0b4-4d3d-b622-ab1edaf18bd5", + "source_id": "6214aa85-548d-4c78-a298-0499a4f1c7ad", + "sink_id": "74ae8dfb-4ec3-442d-a965-9fcc95fc8a48", + "source_name": "value", + "sink_name": "data", + "is_static": false + } + ], + "output_links": [ + { + "id": "de32fe82-38e2-40cc-9308-810ceb3e382c", + "source_id": "74ae8dfb-4ec3-442d-a965-9fcc95fc8a48", + "sink_id": "0963256f-3d16-47b2-808c-91b6b2f88aee", + "source_name": "output", + "sink_name": "b", + "is_static": true + } + ], + "graph_id": "f410b776-3603-4f84-8348-e5db9c551322", + "graph_version": 31, + "webhook_id": null, + "webhook": null + }, + { + "id": "6214aa85-548d-4c78-a298-0499a4f1c7ad", + "block_id": "95d1b990-ce13-4d88-9737-ba5c2070c97b", + "input_default": { + "type": "number", + "value": "1" + }, + "metadata": { + "position": { + "x": -1397.8285977211458, + "y": 2726.508771135722 + } + }, + "input_links": [], + "output_links": [ + { + "id": "7939e7f5-d0b4-4d3d-b622-ab1edaf18bd5", + "source_id": "6214aa85-548d-4c78-a298-0499a4f1c7ad", + "sink_id": "74ae8dfb-4ec3-442d-a965-9fcc95fc8a48", + "source_name": "value", + "sink_name": "data", + "is_static": false + } + ], + "graph_id": "f410b776-3603-4f84-8348-e5db9c551322", + "graph_version": 31, + "webhook_id": null, + "webhook": null + }, + { + "id": "32f86709-810b-4f20-a2f7-a4027ff05094", + "block_id": "db7d8f02-2f44-4c55-ab7a-eae0941f0c30", + "input_default": { + "format": "### Summary\n\n{{ summary \n | replace('\\r\\n', '\\n')\n | replace('\\n', ' \\n') }}\n\n### How This List Is Ranked\n\nThe suggestions are **ranked from 
best fit to broader options** based on:\n- **Fit with your pitch** (topic match, tone, audience)\n- **Brandability** (memorability, uniqueness, ease to say/spell)\n- **Clarity & length** (short, unambiguous, no filler)\n- **Domain quality** (clean .com bias; no hyphens/numbers)\n- **Noise penalties** (awkward word combos or ambiguity)\n\n> Tip: The top 5 are usually the \u201cbuy now\u201d candidates. If you love a lower-ranked name, go for it\u2014the ranking is guidance, not a rule.\n\n### Domain Suggestions (Ranked)\n\n{{ ranked_domains_block\n | replace('\\r\\n', '\\n')\n | replace('\\n', ' \\n') }}\n\n\n### What to Do Next\n\nThe domain names listed above are still available for registration. To claim one, purchase it from a domain registrar (a company that sells and manages domain names). Availability can change quickly, so check out promptly.\n\n**Popular registrars:**\n- [GoDaddy](https://www.godaddy.com/) \u2014 one of the largest registrars. \n- [Namecheap](https://www.namecheap.com/) \u2014 transparent pricing and easy UI. \n- [Domain.com](https://www.domain.com/) \u2014 long-standing, straightforward. \n- [Dynadot](https://www.dynadot.com/) \u2014 clean, low-friction checkout. \n- [IONOS](https://www.ionos.com/domains) \u2014 competitive first-year pricing.\n\n**Steps:**\n1. Pick your favourite from the top of the list. \n2. Add to cart and check out \u2014 confirm the price \u2014 usually **$10\u201315 per year** \n3. In your registrar dashboard, set up **DNS / name servers** to point the domain to your website or email. \n4. (Optional) Consider grabbing close variants or common misspellings for brand protection.\n", + "values": {}, + "escape_html": false + }, + "metadata": { + "position": { + "x": 8442.06365615613, + "y": 448.9943472999386 + } + }, + "input_links": [ + { + "id": "d2586c8d-edc6-4c00-afad-1815986aaf4d", + "source_id": "187169c5-388f-4be3-ae67-ff46fb091ba3", + "sink_id": "32f86709-810b-4f20-a2f7-a4027ff05094", + "source_name": "yes_output", + "sink_name": "values_#_ranked_domains_block", + "is_static": false + }, + { + "id": "f7307ebc-189f-4554-ba9a-29b66f11fd6d", + "source_id": "aaff98f5-ebea-4dbb-b927-35e2fb20be3c", + "sink_id": "32f86709-810b-4f20-a2f7-a4027ff05094", + "source_name": "output", + "sink_name": "values_#_summary", + "is_static": false + }, + { + "id": "d6bd5988-cee8-4ac9-873f-a5dbd3bc8da5", + "source_id": "63df7668-8ada-4525-a1a1-7baae5efc103", + "sink_id": "32f86709-810b-4f20-a2f7-a4027ff05094", + "source_name": "output", + "sink_name": "values_#_ranked_domains_block", + "is_static": false + } + ], + "output_links": [ + { + "id": "d3c06626-b5b0-4660-8733-b5167cbdd26c", + "source_id": "32f86709-810b-4f20-a2f7-a4027ff05094", + "sink_id": "ea561c3b-3d18-4b8e-8842-065d06309502", + "source_name": "output", + "sink_name": "value", + "is_static": false + } + ], + "graph_id": "f410b776-3603-4f84-8348-e5db9c551322", + "graph_version": 31, + "webhook_id": null, + "webhook": null + }, + { + "id": "84659cbb-1184-4040-a1af-2f2d785cac3e", + "block_id": "3c9c2f42-b0c3-435f-ba35-05f7a25c772a", + "input_default": {}, + "metadata": { + "position": { + "x": 4936.321448143547, + "y": 450.50529440807475 + } + }, + "input_links": [ + { + "id": "87c5dbb4-6741-4c86-beca-f40e0dd79995", + "source_id": "64bc1c5d-720c-4ac4-ae0f-1c20a9d41ad9", + "sink_id": "84659cbb-1184-4040-a1af-2f2d785cac3e", + "source_name": "output", + "sink_name": "collection", + "is_static": false + } + ], + "output_links": [ + { + "id": "f2696b8f-75f7-472e-92a5-d33fcd77a779", + "source_id": 
"84659cbb-1184-4040-a1af-2f2d785cac3e", + "sink_id": "187169c5-388f-4be3-ae67-ff46fb091ba3", + "source_name": "count", + "sink_name": "value1", + "is_static": false + } + ], + "graph_id": "f410b776-3603-4f84-8348-e5db9c551322", + "graph_version": 31, + "webhook_id": null, + "webhook": null + }, + { + "id": "187169c5-388f-4be3-ae67-ff46fb091ba3", + "block_id": "715696a0-e1da-45c8-b209-c2fa9c3b0be6", + "input_default": { + "value2": "1", + "operator": "<", + "yes_value": "Unfortunately, every single domain I checked was already registered by someone else. \n\nPlease try again with a slightly different idea, and hopefully I'll be able to find something that's available!" + }, + "metadata": { + "position": { + "x": 5513.162415604335, + "y": 445.1137642882928 + } + }, + "input_links": [ + { + "id": "b00144a7-177c-46a1-aa25-c2be45823fc1", + "source_id": "64bc1c5d-720c-4ac4-ae0f-1c20a9d41ad9", + "sink_id": "187169c5-388f-4be3-ae67-ff46fb091ba3", + "source_name": "output", + "sink_name": "no_value", + "is_static": false + }, + { + "id": "f2696b8f-75f7-472e-92a5-d33fcd77a779", + "source_id": "84659cbb-1184-4040-a1af-2f2d785cac3e", + "sink_id": "187169c5-388f-4be3-ae67-ff46fb091ba3", + "source_name": "count", + "sink_name": "value1", + "is_static": false + } + ], + "output_links": [ + { + "id": "d2586c8d-edc6-4c00-afad-1815986aaf4d", + "source_id": "187169c5-388f-4be3-ae67-ff46fb091ba3", + "sink_id": "32f86709-810b-4f20-a2f7-a4027ff05094", + "source_name": "yes_output", + "sink_name": "values_#_ranked_domains_block", + "is_static": false + }, + { + "id": "4260a459-1cda-411e-9c32-1e886d49f444", + "source_id": "187169c5-388f-4be3-ae67-ff46fb091ba3", + "sink_id": "55b1f3b7-9b38-45dd-8fb7-c4c7f0037c04", + "source_name": "no_output", + "sink_name": "prompt_values_#_domains", + "is_static": false + } + ], + "graph_id": "f410b776-3603-4f84-8348-e5db9c551322", + "graph_version": 31, + "webhook_id": null, + "webhook": null + } + ], + "links": [ + { + "id": "f7307ebc-189f-4554-ba9a-29b66f11fd6d", + "source_id": "aaff98f5-ebea-4dbb-b927-35e2fb20be3c", + "sink_id": "32f86709-810b-4f20-a2f7-a4027ff05094", + "source_name": "output", + "sink_name": "values_#_summary", + "is_static": false + }, + { + "id": "486aae4d-b3db-4008-842c-05f966f00b5b", + "source_id": "570c679c-676d-448f-bb1c-b347f36970aa", + "sink_id": "7e00122a-ad6b-4e0f-92e4-d079a2173fae", + "source_name": "missing", + "sink_name": "value", + "is_static": false + }, + { + "id": "caeaf988-2aac-46a3-b396-f14f4d61b4c7", + "source_id": "694737bd-4ca2-4927-9e3a-62d32e2bfcb3", + "sink_id": "0963256f-3d16-47b2-808c-91b6b2f88aee", + "source_name": "value", + "sink_name": "a", + "is_static": false + }, + { + "id": "34ba9aec-eb91-4760-bde7-c98685b95f5a", + "source_id": "524bd1e9-17c2-4b11-ab3e-2f1809823c7e", + "sink_id": "ed14af2c-2251-4cef-8a70-2337251c64c0", + "source_name": "result", + "sink_name": "input", + "is_static": true + }, + { + "id": "9f73853c-27bc-40c0-ac1b-b6aee896c92c", + "source_id": "f3feafe5-4971-4604-9cfa-caf8e0cf60d8", + "sink_id": "1177aa11-112e-4644-996c-cb9bab6e03ba", + "source_name": "output", + "sink_name": "code", + "is_static": false + }, + { + "id": "d6bd5988-cee8-4ac9-873f-a5dbd3bc8da5", + "source_id": "63df7668-8ada-4525-a1a1-7baae5efc103", + "sink_id": "32f86709-810b-4f20-a2f7-a4027ff05094", + "source_name": "output", + "sink_name": "values_#_ranked_domains_block", + "is_static": false + }, + { + "id": "87c5dbb4-6741-4c86-beca-f40e0dd79995", + "source_id": "64bc1c5d-720c-4ac4-ae0f-1c20a9d41ad9", + "sink_id": 
"84659cbb-1184-4040-a1af-2f2d785cac3e", + "source_name": "output", + "sink_name": "collection", + "is_static": false + }, + { + "id": "4e403abc-3a34-414c-9240-7a30b6f22ac5", + "source_id": "63c243b9-7c2d-4d55-9693-7eca2e66fb67", + "sink_id": "570c679c-676d-448f-bb1c-b347f36970aa", + "source_name": "parsed_xml", + "sink_name": "input", + "is_static": false + }, + { + "id": "d28783ab-8818-4f28-ae66-d9a04ad33455", + "source_id": "aa67b2a4-e886-4381-9bea-65fe4d6e0a07", + "sink_id": "63df7668-8ada-4525-a1a1-7baae5efc103", + "source_name": "parsed_xml", + "sink_name": "input", + "is_static": false + }, + { + "id": "8766de74-96ab-4607-8994-4f7b333a59d7", + "source_id": "0963256f-3d16-47b2-808c-91b6b2f88aee", + "sink_id": "0963256f-3d16-47b2-808c-91b6b2f88aee", + "source_name": "result", + "sink_name": "a", + "is_static": false + }, + { + "id": "2430664a-6580-428b-a317-36c7675539c1", + "source_id": "63c243b9-7c2d-4d55-9693-7eca2e66fb67", + "sink_id": "7e00122a-ad6b-4e0f-92e4-d079a2173fae", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "0b604734-0417-4a56-b3c3-6c841ee764df", + "source_id": "1177aa11-112e-4644-996c-cb9bab6e03ba", + "sink_id": "52555e06-ac46-4ed2-97dd-c2e46c0985f4", + "source_name": "stdout_logs", + "sink_name": "input_xml", + "is_static": false + }, + { + "id": "de188d9a-a33f-4790-9891-914046e255b7", + "source_id": "81a851af-8465-4154-83f5-36aad0ad2ba5", + "sink_id": "ed14af2c-2251-4cef-8a70-2337251c64c0", + "source_name": "yes_output", + "sink_name": "input", + "is_static": false + }, + { + "id": "ddbcf470-de3a-4955-89aa-12291f26a059", + "source_id": "2105e9c1-0698-4a01-9d02-02d791364498", + "sink_id": "74ae8dfb-4ec3-442d-a965-9fcc95fc8a48", + "source_name": "error", + "sink_name": "input", + "is_static": false + }, + { + "id": "7939e7f5-d0b4-4d3d-b622-ab1edaf18bd5", + "source_id": "6214aa85-548d-4c78-a298-0499a4f1c7ad", + "sink_id": "74ae8dfb-4ec3-442d-a965-9fcc95fc8a48", + "source_name": "value", + "sink_name": "data", + "is_static": false + }, + { + "id": "73c613c5-92c5-4a72-a610-6f45a43212a4", + "source_id": "52555e06-ac46-4ed2-97dd-c2e46c0985f4", + "sink_id": "aaff98f5-ebea-4dbb-b927-35e2fb20be3c", + "source_name": "parsed_xml", + "sink_name": "input", + "is_static": false + }, + { + "id": "c70b4120-64a3-44d0-a960-3674735314ff", + "source_id": "570c679c-676d-448f-bb1c-b347f36970aa", + "sink_id": "f3feafe5-4971-4604-9cfa-caf8e0cf60d8", + "source_name": "output", + "sink_name": "values_#_DOMAINS", + "is_static": false + }, + { + "id": "d3c06626-b5b0-4660-8733-b5167cbdd26c", + "source_id": "32f86709-810b-4f20-a2f7-a4027ff05094", + "sink_id": "ea561c3b-3d18-4b8e-8842-065d06309502", + "source_name": "output", + "sink_name": "value", + "is_static": false + }, + { + "id": "c49011f5-3dd8-416a-9604-be450f0f465f", + "source_id": "52555e06-ac46-4ed2-97dd-c2e46c0985f4", + "sink_id": "7e00122a-ad6b-4e0f-92e4-d079a2173fae", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "b77b60d6-23b9-4700-ade8-7afa1f4ee3d0", + "source_id": "52555e06-ac46-4ed2-97dd-c2e46c0985f4", + "sink_id": "64bc1c5d-720c-4ac4-ae0f-1c20a9d41ad9", + "source_name": "parsed_xml", + "sink_name": "input", + "is_static": false + }, + { + "id": "02f129e4-bd3b-4969-aceb-21ea389572e2", + "source_id": "0963256f-3d16-47b2-808c-91b6b2f88aee", + "sink_id": "81a851af-8465-4154-83f5-36aad0ad2ba5", + "source_name": "result", + "sink_name": "value1", + "is_static": false + }, + { + "id": "99886544-d322-4179-838f-a5b01292fcdb", + "source_id": 
"63df7668-8ada-4525-a1a1-7baae5efc103", + "sink_id": "7e00122a-ad6b-4e0f-92e4-d079a2173fae", + "source_name": "missing", + "sink_name": "value", + "is_static": false + }, + { + "id": "6097cc32-f6e1-4311-8804-e99af44201f4", + "source_id": "ed14af2c-2251-4cef-8a70-2337251c64c0", + "sink_id": "2105e9c1-0698-4a01-9d02-02d791364498", + "source_name": "output", + "sink_name": "prompt_values_#_product_pitch", + "is_static": true + }, + { + "id": "27dd643a-09e8-4d49-814d-f8015c7a7f6f", + "source_id": "64bc1c5d-720c-4ac4-ae0f-1c20a9d41ad9", + "sink_id": "7e00122a-ad6b-4e0f-92e4-d079a2173fae", + "source_name": "missing", + "sink_name": "value", + "is_static": false + }, + { + "id": "b00144a7-177c-46a1-aa25-c2be45823fc1", + "source_id": "64bc1c5d-720c-4ac4-ae0f-1c20a9d41ad9", + "sink_id": "187169c5-388f-4be3-ae67-ff46fb091ba3", + "source_name": "output", + "sink_name": "no_value", + "is_static": false + }, + { + "id": "ab8d3920-8582-426d-92a1-0cf3bdd31efe", + "source_id": "1177aa11-112e-4644-996c-cb9bab6e03ba", + "sink_id": "7e00122a-ad6b-4e0f-92e4-d079a2173fae", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "d2586c8d-edc6-4c00-afad-1815986aaf4d", + "source_id": "187169c5-388f-4be3-ae67-ff46fb091ba3", + "sink_id": "32f86709-810b-4f20-a2f7-a4027ff05094", + "source_name": "yes_output", + "sink_name": "values_#_ranked_domains_block", + "is_static": false + }, + { + "id": "d3f1d3e7-36e5-450a-a810-9f534a12adac", + "source_id": "81a851af-8465-4154-83f5-36aad0ad2ba5", + "sink_id": "7e00122a-ad6b-4e0f-92e4-d079a2173fae", + "source_name": "no_output", + "sink_name": "value", + "is_static": false + }, + { + "id": "cf5cd1d2-24ac-4126-ad0f-d934b7862e3a", + "source_id": "aa67b2a4-e886-4381-9bea-65fe4d6e0a07", + "sink_id": "7e00122a-ad6b-4e0f-92e4-d079a2173fae", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "781b8b03-e645-42f6-b9ee-18727a04bb42", + "source_id": "55b1f3b7-9b38-45dd-8fb7-c4c7f0037c04", + "sink_id": "aa67b2a4-e886-4381-9bea-65fe4d6e0a07", + "source_name": "response", + "sink_name": "input_xml", + "is_static": false + }, + { + "id": "e53ea15b-4d84-494d-9867-71e454f7c867", + "source_id": "524bd1e9-17c2-4b11-ab3e-2f1809823c7e", + "sink_id": "55b1f3b7-9b38-45dd-8fb7-c4c7f0037c04", + "source_name": "result", + "sink_name": "prompt_values_#_product_pitch", + "is_static": true + }, + { + "id": "6b8523b5-ce6e-4cd4-80a6-6be54b9be4ab", + "source_id": "55b1f3b7-9b38-45dd-8fb7-c4c7f0037c04", + "sink_id": "7e00122a-ad6b-4e0f-92e4-d079a2173fae", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "c9ab9fb0-391c-46bc-b6e7-13fe3125a5f1", + "source_id": "1177aa11-112e-4644-996c-cb9bab6e03ba", + "sink_id": "7e00122a-ad6b-4e0f-92e4-d079a2173fae", + "source_name": "stderr_logs", + "sink_name": "value", + "is_static": false + }, + { + "id": "7427c0ae-ec91-4bd5-9a72-1ef24fc11430", + "source_id": "2105e9c1-0698-4a01-9d02-02d791364498", + "sink_id": "63c243b9-7c2d-4d55-9693-7eca2e66fb67", + "source_name": "response", + "sink_name": "input_xml", + "is_static": false + }, + { + "id": "de32fe82-38e2-40cc-9308-810ceb3e382c", + "source_id": "74ae8dfb-4ec3-442d-a965-9fcc95fc8a48", + "sink_id": "0963256f-3d16-47b2-808c-91b6b2f88aee", + "source_name": "output", + "sink_name": "b", + "is_static": true + }, + { + "id": "4260a459-1cda-411e-9c32-1e886d49f444", + "source_id": "187169c5-388f-4be3-ae67-ff46fb091ba3", + "sink_id": "55b1f3b7-9b38-45dd-8fb7-c4c7f0037c04", + "source_name": "no_output", + "sink_name": 
"prompt_values_#_domains", + "is_static": false + }, + { + "id": "2607c55b-1f24-47f8-acd4-4df5a51171b6", + "source_id": "aaff98f5-ebea-4dbb-b927-35e2fb20be3c", + "sink_id": "7e00122a-ad6b-4e0f-92e4-d079a2173fae", + "source_name": "missing", + "sink_name": "value", + "is_static": false + }, + { + "id": "f2696b8f-75f7-472e-92a5-d33fcd77a779", + "source_id": "84659cbb-1184-4040-a1af-2f2d785cac3e", + "sink_id": "187169c5-388f-4be3-ae67-ff46fb091ba3", + "source_name": "count", + "sink_name": "value1", + "is_static": false + }, + { + "id": "5f5dba52-1c34-4621-9047-94185ed471f6", + "source_id": "1177aa11-112e-4644-996c-cb9bab6e03ba", + "sink_id": "52555e06-ac46-4ed2-97dd-c2e46c0985f4", + "source_name": "response", + "sink_name": "input_xml", + "is_static": false + } + ], + "forked_from_id": null, + "forked_from_version": null, + "sub_graphs": [], + "user_id": "", + "created_at": "2025-10-22T20:56:19.630Z", + "input_schema": { + "type": "object", + "properties": { + "Product Pitch": { + "advanced": false, + "anyOf": [ + { + "format": "long-text", + "type": "string" + }, + { + "type": "null" + } + ], + "secret": false, + "title": "Product Pitch", + "description": "Pitch your product!" + } + }, + "required": [ + "Product Pitch" + ] + }, + "output_schema": { + "type": "object", + "properties": { + "Domain Suggestions": { + "advanced": false, + "secret": false, + "title": "Domain Suggestions" + }, + "Error": { + "advanced": false, + "secret": false, + "title": "Error" + } + }, + "required": [ + "Domain Suggestions", + "Error" + ] + }, + "has_external_trigger": false, + "has_human_in_the_loop": false, + "trigger_setup_info": null, + "credentials_input_schema": { + "properties": { + "openai_api_key_credentials": { + "credentials_provider": [ + "openai" + ], + "credentials_types": [ + "api_key" + ], + "properties": { + "id": { + "title": "Id", + "type": "string" + }, + "title": { + "anyOf": [ + { + "type": "string" + }, + { + "type": "null" + } + ], + "default": null, + "title": "Title" + }, + "provider": { + "const": "openai", + "title": "Provider", + "type": "string" + }, + "type": { + "const": "api_key", + "title": "Type", + "type": "string" + } + }, + "required": [ + "id", + "provider", + "type" + ], + "title": "CredentialsMetaInput[Literal[], Literal['api_key']]", + "type": "object", + "discriminator": "model", + "discriminator_mapping": { + "Llama-3.3-70B-Instruct": "llama_api", + "Llama-3.3-8B-Instruct": "llama_api", + "Llama-4-Maverick-17B-128E-Instruct-FP8": "llama_api", + "Llama-4-Scout-17B-16E-Instruct-FP8": "llama_api", + "Qwen/Qwen2.5-72B-Instruct-Turbo": "aiml_api", + "amazon/nova-lite-v1": "open_router", + "amazon/nova-micro-v1": "open_router", + "amazon/nova-pro-v1": "open_router", + "claude-3-7-sonnet-20250219": "anthropic", + "claude-3-haiku-20240307": "anthropic", + "claude-haiku-4-5-20251001": "anthropic", + "claude-opus-4-1-20250805": "anthropic", + "claude-opus-4-20250514": "anthropic", + "claude-opus-4-5-20251101": "anthropic", + "claude-sonnet-4-20250514": "anthropic", + "claude-sonnet-4-5-20250929": "anthropic", + "cohere/command-r-08-2024": "open_router", + "cohere/command-r-plus-08-2024": "open_router", + "deepseek/deepseek-chat": "open_router", + "deepseek/deepseek-r1-0528": "open_router", + "dolphin-mistral:latest": "ollama", + "google/gemini-2.0-flash-001": "open_router", + "google/gemini-2.0-flash-lite-001": "open_router", + "google/gemini-2.5-flash": "open_router", + "google/gemini-2.5-flash-lite-preview-06-17": "open_router", + "google/gemini-2.5-pro-preview-03-25": 
"open_router", + "google/gemini-3-pro-preview": "open_router", + "gpt-3.5-turbo": "openai", + "gpt-4-turbo": "openai", + "gpt-4.1-2025-04-14": "openai", + "gpt-4.1-mini-2025-04-14": "openai", + "gpt-4o": "openai", + "gpt-4o-mini": "openai", + "gpt-5-2025-08-07": "openai", + "gpt-5-chat-latest": "openai", + "gpt-5-mini-2025-08-07": "openai", + "gpt-5-nano-2025-08-07": "openai", + "gpt-5.1-2025-11-13": "openai", + "gryphe/mythomax-l2-13b": "open_router", + "llama-3.1-8b-instant": "groq", + "llama-3.3-70b-versatile": "groq", + "llama3": "ollama", + "llama3.1:405b": "ollama", + "llama3.2": "ollama", + "llama3.3": "ollama", + "meta-llama/Llama-3.2-3B-Instruct-Turbo": "aiml_api", + "meta-llama/Llama-3.3-70B-Instruct-Turbo": "aiml_api", + "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo": "aiml_api", + "meta-llama/llama-4-maverick": "open_router", + "meta-llama/llama-4-scout": "open_router", + "microsoft/wizardlm-2-8x22b": "open_router", + "mistralai/mistral-nemo": "open_router", + "moonshotai/kimi-k2": "open_router", + "nousresearch/hermes-3-llama-3.1-405b": "open_router", + "nousresearch/hermes-3-llama-3.1-70b": "open_router", + "nvidia/llama-3.1-nemotron-70b-instruct": "aiml_api", + "o1": "openai", + "o1-mini": "openai", + "o3-2025-04-16": "openai", + "o3-mini": "openai", + "openai/gpt-oss-120b": "open_router", + "openai/gpt-oss-20b": "open_router", + "perplexity/sonar": "open_router", + "perplexity/sonar-deep-research": "open_router", + "perplexity/sonar-pro": "open_router", + "qwen/qwen3-235b-a22b-thinking-2507": "open_router", + "qwen/qwen3-coder": "open_router", + "v0-1.0-md": "v0", + "v0-1.5-lg": "v0", + "v0-1.5-md": "v0", + "x-ai/grok-4": "open_router", + "x-ai/grok-4-fast": "open_router", + "x-ai/grok-4.1-fast": "open_router", + "x-ai/grok-code-fast-1": "open_router" + }, + "discriminator_values": [ + "gpt-5-2025-08-07" + ] + }, + "e2b_api_key_credentials": { + "credentials_provider": [ + "e2b" + ], + "credentials_types": [ + "api_key" + ], + "properties": { + "id": { + "title": "Id", + "type": "string" + }, + "title": { + "anyOf": [ + { + "type": "string" + }, + { + "type": "null" + } + ], + "default": null, + "title": "Title" + }, + "provider": { + "const": "e2b", + "title": "Provider", + "type": "string" + }, + "type": { + "const": "api_key", + "title": "Type", + "type": "string" + } + }, + "required": [ + "id", + "provider", + "type" + ], + "title": "CredentialsMetaInput[Literal[], Literal['api_key']]", + "type": "object", + "discriminator_values": [] + } + }, + "required": [ + "openai_api_key_credentials", + "e2b_api_key_credentials" + ], + "title": "DomainNameFinderCredentialsInputSchema", + "type": "object" + } +} \ No newline at end of file diff --git a/autogpt_platform/backend/agents/agent_b8ceb480-a7a2-4c90-8513-181a49f7071f.json b/autogpt_platform/backend/agents/agent_b8ceb480-a7a2-4c90-8513-181a49f7071f.json new file mode 100644 index 0000000000..757d4efa48 --- /dev/null +++ b/autogpt_platform/backend/agents/agent_b8ceb480-a7a2-4c90-8513-181a49f7071f.json @@ -0,0 +1,5032 @@ +{ + "id": "ef561358-b8a2-407f-935f-18b25975b00e", + "version": 369, + "is_active": true, + "name": "Automated Support Agent", + "description": "Overview:\nSupport teams spend countless hours on basic tickets. This agent automates repetitive customer support tasks. It reads incoming requests, researches your knowledge base, and responds automatically when confident. 
When unsure, it escalates to a human for final resolution.\n\nHow it Works:\nNew support emails are routed to the agent.\nThe agent checks internal documentation for answers.\nIt measures confidence in the answer found and either replies directly or escalates to a human.\n\nBusiness Value:\nAutomating the easy 80 percent of support tickets allows your team to focus on high-value, complex customer issues, improving efficiency and response times.", + "instructions": null, + "recommended_schedule_cron": "*/15 * * * *", + "nodes": [ + { + "id": "c5a1791f-92c0-4dbd-8c6c-732db2be6225", + "block_id": "25310c70-b89b-43ba-b25c-4dfa7e2a481c", + "input_default": { + "query": "is:unread -label:agent-processed in:inbox newer_than:1d", + "max_results": 10 + }, + "metadata": { + "position": { + "x": -4377.036473228495, + "y": 217.00872154737638 + } + }, + "input_links": [], + "output_links": [ + { + "id": "fd4efd81-1307-4ff0-8f95-b5ab56a123c0", + "source_id": "c5a1791f-92c0-4dbd-8c6c-732db2be6225", + "sink_id": "10ad5631-6678-4912-93ee-78f9c118c695", + "source_name": "emails", + "sink_name": "list", + "is_static": false + }, + { + "id": "cf384d57-86bf-4489-b341-bb9150eff615", + "source_id": "c5a1791f-92c0-4dbd-8c6c-732db2be6225", + "sink_id": "fc75b556-8fab-424c-a4d5-e9755f658d73", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "f9e1a8ca-f81b-46ea-9fc8-6b9adcc95c6f", + "source_id": "c5a1791f-92c0-4dbd-8c6c-732db2be6225", + "sink_id": "881de375-cc67-4776-8323-80b32a4e3e20", + "source_name": "emails", + "sink_name": "yes_value", + "is_static": false + } + ], + "graph_id": "ef561358-b8a2-407f-935f-18b25975b00e", + "graph_version": 369, + "webhook_id": null, + "webhook": null + }, + { + "id": "fc75b556-8fab-424c-a4d5-e9755f658d73", + "block_id": "363ae599-353e-4804-937e-b2ee3cef3da4", + "input_default": { + "name": "Error Reading Emails", + "title": null, + "value": null, + "format": "", + "advanced": false, + "description": null + }, + "metadata": { + "position": { + "x": -3684.3197478553593, + "y": 7064.355577971482 + } + }, + "input_links": [ + { + "id": "cf384d57-86bf-4489-b341-bb9150eff615", + "source_id": "c5a1791f-92c0-4dbd-8c6c-732db2be6225", + "sink_id": "fc75b556-8fab-424c-a4d5-e9755f658d73", + "source_name": "error", + "sink_name": "value", + "is_static": false + } + ], + "output_links": [], + "graph_id": "ef561358-b8a2-407f-935f-18b25975b00e", + "graph_version": 369, + "webhook_id": null, + "webhook": null + }, + { + "id": "10ad5631-6678-4912-93ee-78f9c118c695", + "block_id": "896ed73b-27d0-41be-813c-c1c1dc856c03", + "input_default": { + "list": [] + }, + "metadata": { + "position": { + "x": -3656.9920633640163, + "y": 1183.6701728155692 + } + }, + "input_links": [ + { + "id": "fd4efd81-1307-4ff0-8f95-b5ab56a123c0", + "source_id": "c5a1791f-92c0-4dbd-8c6c-732db2be6225", + "sink_id": "10ad5631-6678-4912-93ee-78f9c118c695", + "source_name": "emails", + "sink_name": "list", + "is_static": false + } + ], + "output_links": [ + { + "id": "c47b1fae-8a85-491f-aaae-bc7be41f0b7a", + "source_id": "10ad5631-6678-4912-93ee-78f9c118c695", + "sink_id": "881de375-cc67-4776-8323-80b32a4e3e20", + "source_name": "is_empty", + "sink_name": "value1", + "is_static": false + } + ], + "graph_id": "ef561358-b8a2-407f-935f-18b25975b00e", + "graph_version": 369, + "webhook_id": null, + "webhook": null + }, + { + "id": "881de375-cc67-4776-8323-80b32a4e3e20", + "block_id": "715696a0-e1da-45c8-b209-c2fa9c3b0be6", + "input_default": { + "value2": "true", + "no_value": "No support 
tickets", + "operator": "!=", + "yes_value": "true" + }, + "metadata": { + "position": { + "x": -2968.4493212008188, + "y": 360.8576799085441 + } + }, + "input_links": [ + { + "id": "c47b1fae-8a85-491f-aaae-bc7be41f0b7a", + "source_id": "10ad5631-6678-4912-93ee-78f9c118c695", + "sink_id": "881de375-cc67-4776-8323-80b32a4e3e20", + "source_name": "is_empty", + "sink_name": "value1", + "is_static": false + }, + { + "id": "f9e1a8ca-f81b-46ea-9fc8-6b9adcc95c6f", + "source_id": "c5a1791f-92c0-4dbd-8c6c-732db2be6225", + "sink_id": "881de375-cc67-4776-8323-80b32a4e3e20", + "source_name": "emails", + "sink_name": "yes_value", + "is_static": false + }, + { + "id": "ab6e1ed4-a063-4b51-8387-962485e8d025", + "source_id": "b014ffc4-8b61-4893-a6fd-1b8ee26a3417", + "sink_id": "881de375-cc67-4776-8323-80b32a4e3e20", + "source_name": "value", + "sink_name": "value2", + "is_static": false + } + ], + "output_links": [ + { + "id": "4f17205a-0241-4f16-aeb3-fae33cf92fe9", + "source_id": "881de375-cc67-4776-8323-80b32a4e3e20", + "sink_id": "423e8b8c-ac1a-4d6a-9c70-1c778645d938", + "source_name": "yes_output", + "sink_name": "list", + "is_static": false + }, + { + "id": "055d3b60-1956-4185-8b5c-a09d375b81c7", + "source_id": "881de375-cc67-4776-8323-80b32a4e3e20", + "sink_id": "1d9be004-1d01-446d-b692-09bd20c74d08", + "source_name": "no_output", + "sink_name": "value", + "is_static": false + } + ], + "graph_id": "ef561358-b8a2-407f-935f-18b25975b00e", + "graph_version": 369, + "webhook_id": null, + "webhook": null + }, + { + "id": "1d9be004-1d01-446d-b692-09bd20c74d08", + "block_id": "363ae599-353e-4804-937e-b2ee3cef3da4", + "input_default": { + "name": "No New Support Tickets", + "title": null, + "value": null, + "format": "", + "advanced": false, + "description": null + }, + "metadata": { + "position": { + "x": -2483.577891425824, + "y": 7093.927637075307 + } + }, + "input_links": [ + { + "id": "055d3b60-1956-4185-8b5c-a09d375b81c7", + "source_id": "881de375-cc67-4776-8323-80b32a4e3e20", + "sink_id": "1d9be004-1d01-446d-b692-09bd20c74d08", + "source_name": "no_output", + "sink_name": "value", + "is_static": false + } + ], + "output_links": [], + "graph_id": "ef561358-b8a2-407f-935f-18b25975b00e", + "graph_version": 369, + "webhook_id": null, + "webhook": null + }, + { + "id": "cc7c1334-f7eb-43c1-8ac2-15a84cadd2d7", + "block_id": "0e50422c-6dee-4145-83d6-3a5a392f65de", + "input_default": { + "key": "body" + }, + "metadata": { + "position": { + "x": 4809.42894262303, + "y": 369.2029335908554 + } + }, + "input_links": [ + { + "id": "4774f0a4-63b4-4bfe-9be0-b0b434f9301c", + "source_id": "423e8b8c-ac1a-4d6a-9c70-1c778645d938", + "sink_id": "cc7c1334-f7eb-43c1-8ac2-15a84cadd2d7", + "source_name": "item", + "sink_name": "input", + "is_static": false + } + ], + "output_links": [ + { + "id": "d7e9f888-f634-47f4-8142-33855d2dc32b", + "source_id": "cc7c1334-f7eb-43c1-8ac2-15a84cadd2d7", + "sink_id": "21fa5920-eff4-4641-a598-0f90f73dce61", + "source_name": "output", + "sink_name": "Question", + "is_static": false + }, + { + "id": "79555ba8-df15-4927-84e7-81edef6a66df", + "source_id": "cc7c1334-f7eb-43c1-8ac2-15a84cadd2d7", + "sink_id": "6b2e19b1-5bb8-49bc-939c-8220d17e9dee", + "source_name": "output", + "sink_name": "prompt_values_#_customer_email_body", + "is_static": false + }, + { + "id": "9d40b3f6-529a-47ba-ae66-eddfd76456c1", + "source_id": "cc7c1334-f7eb-43c1-8ac2-15a84cadd2d7", + "sink_id": "f064e18f-2363-488b-ab47-dab1d85e55b7", + "source_name": "output", + "sink_name": "prompt_values_#_SUPPORT_EMAIL", + "is_static": 
false + } + ], + "graph_id": "ef561358-b8a2-407f-935f-18b25975b00e", + "graph_version": 369, + "webhook_id": null, + "webhook": null + }, + { + "id": "d28447e7-5d86-4a02-8dec-2ee625426591", + "block_id": "c0a8e994-ebf1-4a9c-a4d8-89d09c86741b", + "input_default": { + "name": "Documentation URL", + "title": null, + "value": "", + "advanced": false, + "description": null, + "placeholder_values": [] + }, + "metadata": { + "position": { + "x": 5537.586792212602, + "y": -96.0776608443798 + } + }, + "input_links": [], + "output_links": [ + { + "id": "8daee585-cde1-476c-a649-76e376582026", + "source_id": "d28447e7-5d86-4a02-8dec-2ee625426591", + "sink_id": "21fa5920-eff4-4641-a598-0f90f73dce61", + "source_name": "result", + "sink_name": "Documentation URL", + "is_static": true + } + ], + "graph_id": "ef561358-b8a2-407f-935f-18b25975b00e", + "graph_version": 369, + "webhook_id": null, + "webhook": null + }, + { + "id": "3ecbe89e-cb1e-497f-9056-75d6e082d414", + "block_id": "363ae599-353e-4804-937e-b2ee3cef3da4", + "input_default": { + "name": "Error Crawling Documentation", + "title": null, + "value": null, + "format": "", + "advanced": false, + "description": null + }, + "metadata": { + "position": { + "x": 8880.517064277346, + "y": 6808.114651414224 + } + }, + "input_links": [ + { + "id": "61c54e69-2605-4379-874b-ef630a97c3e9", + "source_id": "21fa5920-eff4-4641-a598-0f90f73dce61", + "sink_id": "3ecbe89e-cb1e-497f-9056-75d6e082d414", + "source_name": "ERROR", + "sink_name": "value", + "is_static": false + } + ], + "output_links": [], + "graph_id": "ef561358-b8a2-407f-935f-18b25975b00e", + "graph_version": 369, + "webhook_id": null, + "webhook": null + }, + { + "id": "6b2e19b1-5bb8-49bc-939c-8220d17e9dee", + "block_id": "ed55ac19-356e-4243-a6cb-bc599e9b716f", + "input_default": { + "model": "gpt-5-2025-08-07", + "retry": 3, + "prompt": "CUSTOMER EMAIL:\n{{customer_email_body | safe}}\n\nDOCUMENTATION RESEARCH FINDINGS (XML FORMAT):\n{{relevant_information | safe}}\n\nAnalyze the XML research findings above to determine if you can provide a complete, accurate answer to this customer's question. The XML contains:\n- Research confidence score\n- Verbatim documentation quotes with sources \n- Pre-drafted answer based on documentation\n- Source URLs\n\nUse a {{company_tone}} tone and only auto-reply if confidence is {{confidence_threshold}}% or higher.\n\nStart your email response with \"Hello,\" followed by two line breaks, then provide the email body content. Do not include a signature or closing.\n\nRespond in this JSON format:\n{\n \"confidence_score\": [number 0-100],\n \"should_auto_reply\": [string \"true\" or \"false\" with quotes],\n \"email_response\": \"[start with 'Hello,' two enter characters, then email body content - no signature]\",\n \"reasoning\": \"[brief explanation of your confidence assessment and decision]\"\n}\n\n\u2705 CORRECT Adaptation to Threshold 70%:\n{\n \"confidence_score\": 75,\n \"should_auto_reply\": \"true\",\n \"email_response\": \"Hello,\\n\\nBased on our documentation, PandaDoc API has rate limiting in place, though specific limits may vary by endpoint. For detailed rate limit information specific to your use case, I recommend checking the headers in API responses or contacting our technical team.\",\n \"reasoning\": \"Documentation mentions rate limiting exists but lacks specific numbers. 
At 70% threshold, I can provide helpful guidance even with partial information.\"\n}\n\n\u2705 CORRECT Adaptation to Threshold 90%:\n{\n \"confidence_score\": 75,\n \"should_auto_reply\": \"false\",\n \"email_response\": \"Hello,\\n\\nYour question about API rate limits requires specific technical details that would benefit from personalized assistance. I'm escalating this to our technical support team who can provide you with exact rate limit specifications.\",\n \"reasoning\": \"Documentation provides partial information about rate limiting, but at 90% threshold, I need more complete details to auto-reply confidently.\"\n}\n\n\u274c WRONG Examples (Ignoring Threshold):\nWrong - Same Response Regardless of Threshold:\n// At 70% threshold - WRONG\n{\n \"confidence_score\": 75,\n \"should_auto_reply\": \"false\", // \u2190 Should be true at 70%\n \"email_response\": \"drafted email response (best try) goes here\"\n \"reasoning\": \"Not confident enough\" // \u2190 Ignoring 70% threshold\n}\n// At 90% threshold - WRONG \n{\n \"confidence_score\": 75,\n \"should_auto_reply\": \"true\", // \u2190 Should be false at 90%\n \"email_response\": \"Hello,\\n\\nHere's partial information...\",\n \"reasoning\": \"Providing available info\" // \u2190 Ignoring 90% threshold\n}\n\nDo not ask people to contact human support on their own, as they have already contacted support, and setting should_auto_reply to false results in this ticket being escalated to a human.", + "sys_prompt": "You are a professional and intelligent customer support agent. Your job is to help customers with their questions using our documentation research findings. You must be helpful, accurate, and professional.\n\nThe relevant information will be provided in XML format containing:\n- Confidence score from documentation research\n- Verbatim quotes from documentation with sources\n- Pre-drafted answer based on documentation findings\n- Source URLs for reference\n\nUse a {{company_tone}} tone in your responses. Analyze if you can provide a complete, accurate answer to the customer's question using the available documentation research. Only auto-reply if you are confident ({{confidence_threshold}}%+) that you can fully address their question. 
If not confident enough, recommend escalation to human support.\n\nProvide just the greeting and email body content - do not include closings like \"Best regards, Support Team\"", + "list_result": false, + "ollama_host": "localhost:11434", + "prompt_values": {}, + "expected_format": { + "reasoning": "String (explanation of decision)", + "email_response": "String (email body content only, no greeting/signature)", + "confidence_score": "Number (0-100)", + "should_auto_reply": "String \"true\" or \"false\" - always use quotes" + }, + "conversation_history": [], + "compress_prompt_to_fit": true + }, + "metadata": { + "position": { + "x": 8950.116678128301, + "y": 219.86991069037902 + } + }, + "input_links": [ + { + "id": "56fe28df-b2b8-4f3b-8528-90d5f16c6069", + "source_id": "3ce011b7-40b5-4bb0-880b-203e52b23a3c", + "sink_id": "6b2e19b1-5bb8-49bc-939c-8220d17e9dee", + "source_name": "result", + "sink_name": "prompt_values_#_confidence_threshold", + "is_static": true + }, + { + "id": "79555ba8-df15-4927-84e7-81edef6a66df", + "source_id": "cc7c1334-f7eb-43c1-8ac2-15a84cadd2d7", + "sink_id": "6b2e19b1-5bb8-49bc-939c-8220d17e9dee", + "source_name": "output", + "sink_name": "prompt_values_#_customer_email_body", + "is_static": false + }, + { + "id": "a9b03ff1-e567-4622-892d-a9003f2269a7", + "source_id": "dc400c79-dc17-4bac-81ae-cdcbcc69d90b", + "sink_id": "6b2e19b1-5bb8-49bc-939c-8220d17e9dee", + "source_name": "result", + "sink_name": "prompt_values_#_company_tone", + "is_static": true + }, + { + "id": "a7864344-1be9-4ee2-88a8-575bbd851643", + "source_id": "21fa5920-eff4-4641-a598-0f90f73dce61", + "sink_id": "6b2e19b1-5bb8-49bc-939c-8220d17e9dee", + "source_name": "Answer", + "sink_name": "prompt_values_#_relevant_information", + "is_static": false + } + ], + "output_links": [ + { + "id": "48b7ebe9-c3d9-4dd1-a70e-9b6043e0a14e", + "source_id": "6b2e19b1-5bb8-49bc-939c-8220d17e9dee", + "sink_id": "19757868-f240-4e39-b484-fe3f84d3d237", + "source_name": "response", + "sink_name": "no_value", + "is_static": false + }, + { + "id": "8176c88c-d42f-4266-9749-e28de468e07d", + "source_id": "6b2e19b1-5bb8-49bc-939c-8220d17e9dee", + "sink_id": "98cd104e-fa1a-421f-9572-b4234ea55416", + "source_name": "response", + "sink_name": "input", + "is_static": false + }, + { + "id": "494f9e25-9d86-4635-b395-ee948ab2751e", + "source_id": "6b2e19b1-5bb8-49bc-939c-8220d17e9dee", + "sink_id": "64efc204-0d77-4f50-85f5-4e80e7bb9940", + "source_name": "response", + "sink_name": "input", + "is_static": false + }, + { + "id": "2a273dc3-041d-44bd-babe-4131a55fccc6", + "source_id": "6b2e19b1-5bb8-49bc-939c-8220d17e9dee", + "sink_id": "a67dde5f-182e-47e4-a811-a03cd7383f92", + "source_name": "response", + "sink_name": "input", + "is_static": false + }, + { + "id": "2dcdbdb1-8c60-4b1c-bcc8-c69f159c0126", + "source_id": "6b2e19b1-5bb8-49bc-939c-8220d17e9dee", + "sink_id": "88269a97-5378-4ea7-9a5b-45878a290752", + "source_name": "response", + "sink_name": "input", + "is_static": false + } + ], + "graph_id": "ef561358-b8a2-407f-935f-18b25975b00e", + "graph_version": 369, + "webhook_id": null, + "webhook": null + }, + { + "id": "a67dde5f-182e-47e4-a811-a03cd7383f92", + "block_id": "0e50422c-6dee-4145-83d6-3a5a392f65de", + "input_default": { + "key": "should_auto_reply" + }, + "metadata": { + "position": { + "x": 10059.8313825174, + "y": 233.51346374787636 + } + }, + "input_links": [ + { + "id": "2a273dc3-041d-44bd-babe-4131a55fccc6", + "source_id": "6b2e19b1-5bb8-49bc-939c-8220d17e9dee", + "sink_id": 
"a67dde5f-182e-47e4-a811-a03cd7383f92", + "source_name": "response", + "sink_name": "input", + "is_static": false + } + ], + "output_links": [ + { + "id": "afbaed62-c6e9-4876-a927-081a9a076215", + "source_id": "a67dde5f-182e-47e4-a811-a03cd7383f92", + "sink_id": "19757868-f240-4e39-b484-fe3f84d3d237", + "source_name": "output", + "sink_name": "value1", + "is_static": false + } + ], + "graph_id": "ef561358-b8a2-407f-935f-18b25975b00e", + "graph_version": 369, + "webhook_id": null, + "webhook": null + }, + { + "id": "19757868-f240-4e39-b484-fe3f84d3d237", + "block_id": "715696a0-e1da-45c8-b209-c2fa9c3b0be6", + "input_default": { + "value2": "true", + "no_value": null, + "operator": "==", + "yes_value": null + }, + "metadata": { + "position": { + "x": 10827.351694272655, + "y": 532.492316800443 + } + }, + "input_links": [ + { + "id": "afbaed62-c6e9-4876-a927-081a9a076215", + "source_id": "a67dde5f-182e-47e4-a811-a03cd7383f92", + "sink_id": "19757868-f240-4e39-b484-fe3f84d3d237", + "source_name": "output", + "sink_name": "value1", + "is_static": false + }, + { + "id": "48b7ebe9-c3d9-4dd1-a70e-9b6043e0a14e", + "source_id": "6b2e19b1-5bb8-49bc-939c-8220d17e9dee", + "sink_id": "19757868-f240-4e39-b484-fe3f84d3d237", + "source_name": "response", + "sink_name": "no_value", + "is_static": false + }, + { + "id": "1c72921e-805d-4edb-9a47-239609ecf3da", + "source_id": "88269a97-5378-4ea7-9a5b-45878a290752", + "sink_id": "19757868-f240-4e39-b484-fe3f84d3d237", + "source_name": "output", + "sink_name": "yes_value", + "is_static": false + } + ], + "output_links": [ + { + "id": "2c7a281e-57d9-46a7-9823-fc76b37b5cd4", + "source_id": "19757868-f240-4e39-b484-fe3f84d3d237", + "sink_id": "f064e18f-2363-488b-ab47-dab1d85e55b7", + "source_name": "no_output", + "sink_name": "prompt_values_#_AI_ANALYSIS", + "is_static": false + }, + { + "id": "9810c154-dda6-4e84-85c5-b432251e2c38", + "source_id": "19757868-f240-4e39-b484-fe3f84d3d237", + "sink_id": "5cd84e14-97f6-408f-bd9b-017f1301fed8", + "source_name": "yes_output", + "sink_name": "input_$_0", + "is_static": false + }, + { + "id": "313227ff-5500-4dee-9621-3f1858979029", + "source_id": "19757868-f240-4e39-b484-fe3f84d3d237", + "sink_id": "b30555a2-d3d9-47cb-9174-6197591625a0", + "source_name": "result", + "sink_name": "value", + "is_static": false + } + ], + "graph_id": "ef561358-b8a2-407f-935f-18b25975b00e", + "graph_version": 369, + "webhook_id": null, + "webhook": null + }, + { + "id": "88269a97-5378-4ea7-9a5b-45878a290752", + "block_id": "0e50422c-6dee-4145-83d6-3a5a392f65de", + "input_default": { + "key": "email_response" + }, + "metadata": { + "position": { + "x": 10059.491152442875, + "y": 1022.4886105087694 + } + }, + "input_links": [ + { + "id": "2dcdbdb1-8c60-4b1c-bcc8-c69f159c0126", + "source_id": "6b2e19b1-5bb8-49bc-939c-8220d17e9dee", + "sink_id": "88269a97-5378-4ea7-9a5b-45878a290752", + "source_name": "response", + "sink_name": "input", + "is_static": false + } + ], + "output_links": [ + { + "id": "1c72921e-805d-4edb-9a47-239609ecf3da", + "source_id": "88269a97-5378-4ea7-9a5b-45878a290752", + "sink_id": "19757868-f240-4e39-b484-fe3f84d3d237", + "source_name": "output", + "sink_name": "yes_value", + "is_static": false + } + ], + "graph_id": "ef561358-b8a2-407f-935f-18b25975b00e", + "graph_version": 369, + "webhook_id": null, + "webhook": null + }, + { + "id": "dc400c79-dc17-4bac-81ae-cdcbcc69d90b", + "block_id": "655d6fdf-a334-421c-b733-520549c07cd1", + "input_default": { + "name": "Company Tone", + "title": null, + "value": "", + "advanced": false, 
+ "description": null, + "placeholder_values": [ + "Professional", + "Friendly", + "Casual", + "Technical" + ] + }, + "metadata": { + "position": { + "x": 7244.420431667231, + "y": -75.86923121349153 + } + }, + "input_links": [], + "output_links": [ + { + "id": "a9b03ff1-e567-4622-892d-a9003f2269a7", + "source_id": "dc400c79-dc17-4bac-81ae-cdcbcc69d90b", + "sink_id": "6b2e19b1-5bb8-49bc-939c-8220d17e9dee", + "source_name": "result", + "sink_name": "prompt_values_#_company_tone", + "is_static": true + } + ], + "graph_id": "ef561358-b8a2-407f-935f-18b25975b00e", + "graph_version": 369, + "webhook_id": null, + "webhook": null + }, + { + "id": "3ce011b7-40b5-4bb0-880b-203e52b23a3c", + "block_id": "c0a8e994-ebf1-4a9c-a4d8-89d09c86741b", + "input_default": { + "name": "Confidence Threshold", + "title": "Auto-Reply Confidence Threshold", + "value": "", + "advanced": false, + "description": "Minimum confidence score (0-100) required for automatic responses", + "placeholder_values": [] + }, + "metadata": { + "position": { + "x": 7800.965353330381, + "y": -79.6138784743564 + } + }, + "input_links": [], + "output_links": [ + { + "id": "56fe28df-b2b8-4f3b-8528-90d5f16c6069", + "source_id": "3ce011b7-40b5-4bb0-880b-203e52b23a3c", + "sink_id": "6b2e19b1-5bb8-49bc-939c-8220d17e9dee", + "source_name": "result", + "sink_name": "prompt_values_#_confidence_threshold", + "is_static": true + }, + { + "id": "e4eea37c-032e-45af-83b2-58ded4b6b0e2", + "source_id": "3ce011b7-40b5-4bb0-880b-203e52b23a3c", + "sink_id": "21fa5920-eff4-4641-a598-0f90f73dce61", + "source_name": "result", + "sink_name": "Confidence Threshold", + "is_static": true + }, + { + "id": "de9c6cc7-3e9c-4ef1-9d0c-67dcc34ce309", + "source_id": "3ce011b7-40b5-4bb0-880b-203e52b23a3c", + "sink_id": "e6390819-6c64-49e4-a414-7969d05c099e", + "source_name": "result", + "sink_name": "value", + "is_static": true + } + ], + "graph_id": "ef561358-b8a2-407f-935f-18b25975b00e", + "graph_version": 369, + "webhook_id": null, + "webhook": null + }, + { + "id": "68be5e28-53eb-46ef-a40c-1cdc9d1d3459", + "block_id": "0e50422c-6dee-4145-83d6-3a5a392f65de", + "input_default": { + "key": "from_" + }, + "metadata": { + "position": { + "x": 10140.619289492886, + "y": -2330.431807974878 + } + }, + "input_links": [ + { + "id": "d662d709-54bd-4c4d-ac8b-8a5e24c8eb65", + "source_id": "423e8b8c-ac1a-4d6a-9c70-1c778645d938", + "sink_id": "68be5e28-53eb-46ef-a40c-1cdc9d1d3459", + "source_name": "item", + "sink_name": "input", + "is_static": false + } + ], + "output_links": [ + { + "id": "e850b133-a458-46e2-886c-82743a1a8bcd", + "source_id": "68be5e28-53eb-46ef-a40c-1cdc9d1d3459", + "sink_id": "e69c4fcb-ce36-411a-a205-7a07bdd08413", + "source_name": "output", + "sink_name": "to", + "is_static": false + }, + { + "id": "cd6f5657-085c-40dc-89f8-4cb849692017", + "source_id": "68be5e28-53eb-46ef-a40c-1cdc9d1d3459", + "sink_id": "0ba27931-43c0-4cc2-952c-14394fc0a10d", + "source_name": "output", + "sink_name": "values_#_Sender", + "is_static": false + } + ], + "graph_id": "ef561358-b8a2-407f-935f-18b25975b00e", + "graph_version": 369, + "webhook_id": null, + "webhook": null + }, + { + "id": "5cd84e14-97f6-408f-bd9b-017f1301fed8", + "block_id": "e30a4d42-7b7d-4e6a-b36e-1f9b8e3b7d85", + "input_default": { + "input": [ + "", + "" + ], + "delimiter": "\n\n" + }, + "metadata": { + "position": { + "x": 15784.293277139936, + "y": -2624.6877021448286 + } + }, + "input_links": [ + { + "id": "9810c154-dda6-4e84-85c5-b432251e2c38", + "source_id": "19757868-f240-4e39-b484-fe3f84d3d237", + 
"sink_id": "5cd84e14-97f6-408f-bd9b-017f1301fed8", + "source_name": "yes_output", + "sink_name": "input_$_0", + "is_static": false + }, + { + "id": "884103aa-a984-4715-a0f2-ab93bf641aa1", + "source_id": "b69fd82c-f602-4655-b208-e3c9785bc37f", + "sink_id": "5cd84e14-97f6-408f-bd9b-017f1301fed8", + "source_name": "result", + "sink_name": "input_$_1", + "is_static": true + } + ], + "output_links": [ + { + "id": "40d7e87b-802e-49c8-9653-8cf9ccccd275", + "source_id": "5cd84e14-97f6-408f-bd9b-017f1301fed8", + "sink_id": "e69c4fcb-ce36-411a-a205-7a07bdd08413", + "source_name": "output", + "sink_name": "body", + "is_static": false + } + ], + "graph_id": "ef561358-b8a2-407f-935f-18b25975b00e", + "graph_version": 369, + "webhook_id": null, + "webhook": null + }, + { + "id": "62628a43-7b56-4746-b996-9f9aecda8aa5", + "block_id": "0e50422c-6dee-4145-83d6-3a5a392f65de", + "input_default": { + "key": "subject" + }, + "metadata": { + "position": { + "x": 10106.409863397323, + "y": -5029.043156200651 + } + }, + "input_links": [ + { + "id": "8b58d74a-9028-4257-864b-b5e1ef137541", + "source_id": "423e8b8c-ac1a-4d6a-9c70-1c778645d938", + "sink_id": "62628a43-7b56-4746-b996-9f9aecda8aa5", + "source_name": "item", + "sink_name": "input", + "is_static": false + } + ], + "output_links": [ + { + "id": "0c90d29b-158d-4806-b1d4-f65dc449b3d0", + "source_id": "62628a43-7b56-4746-b996-9f9aecda8aa5", + "sink_id": "0ba27931-43c0-4cc2-952c-14394fc0a10d", + "source_name": "output", + "sink_name": "values_#_Subject", + "is_static": false + } + ], + "graph_id": "ef561358-b8a2-407f-935f-18b25975b00e", + "graph_version": 369, + "webhook_id": null, + "webhook": null + }, + { + "id": "27bc4042-6f70-4f36-8834-e2bd639c6b4b", + "block_id": "363ae599-353e-4804-937e-b2ee3cef3da4", + "input_default": { + "name": "Error Gmail Auto-Reply", + "title": null, + "value": null, + "format": "", + "advanced": false, + "description": null + }, + "metadata": { + "position": { + "x": 17891.79841778554, + "y": -2474.981748545754 + } + }, + "input_links": [ + { + "id": "0cc2a91c-6330-483e-88ea-65936d9671e3", + "source_id": "e69c4fcb-ce36-411a-a205-7a07bdd08413", + "sink_id": "27bc4042-6f70-4f36-8834-e2bd639c6b4b", + "source_name": "error", + "sink_name": "value", + "is_static": false + } + ], + "output_links": [], + "graph_id": "ef561358-b8a2-407f-935f-18b25975b00e", + "graph_version": 369, + "webhook_id": null, + "webhook": null + }, + { + "id": "6516c017-1d49-466f-bdb4-002802ae13f9", + "block_id": "f884b2fb-04f4-4265-9658-14f433926ac9", + "input_default": { + "label_name": "agent-processed" + }, + "metadata": { + "position": { + "x": 10790.509936804032, + "y": -3215.4666077875036 + } + }, + "input_links": [ + { + "id": "e8ae929b-92eb-43aa-a913-b436ac938bf9", + "source_id": "783c0eba-35e1-44d2-8734-ca398c65e82e", + "sink_id": "6516c017-1d49-466f-bdb4-002802ae13f9", + "source_name": "output", + "sink_name": "message_id", + "is_static": false + } + ], + "output_links": [], + "graph_id": "ef561358-b8a2-407f-935f-18b25975b00e", + "graph_version": 369, + "webhook_id": null, + "webhook": null + }, + { + "id": "783c0eba-35e1-44d2-8734-ca398c65e82e", + "block_id": "0e50422c-6dee-4145-83d6-3a5a392f65de", + "input_default": { + "key": "id" + }, + "metadata": { + "position": { + "x": 9589.754074815959, + "y": -3372.212683050522 + } + }, + "input_links": [ + { + "id": "c6c7095f-2953-4550-bdad-1c899b95a68c", + "source_id": "423e8b8c-ac1a-4d6a-9c70-1c778645d938", + "sink_id": "783c0eba-35e1-44d2-8734-ca398c65e82e", + "source_name": "item", + "sink_name": 
"input", + "is_static": false + } + ], + "output_links": [ + { + "id": "2738b173-49ff-4db6-997e-0ac3d6cb6cac", + "source_id": "783c0eba-35e1-44d2-8734-ca398c65e82e", + "sink_id": "e69c4fcb-ce36-411a-a205-7a07bdd08413", + "source_name": "output", + "sink_name": "parentMessageId", + "is_static": false + }, + { + "id": "e8ae929b-92eb-43aa-a913-b436ac938bf9", + "source_id": "783c0eba-35e1-44d2-8734-ca398c65e82e", + "sink_id": "6516c017-1d49-466f-bdb4-002802ae13f9", + "source_name": "output", + "sink_name": "message_id", + "is_static": false + }, + { + "id": "99e13790-2ac2-420d-97aa-00cfef148fcd", + "source_id": "783c0eba-35e1-44d2-8734-ca398c65e82e", + "sink_id": "0ba27931-43c0-4cc2-952c-14394fc0a10d", + "source_name": "output", + "sink_name": "values_#_Email ID", + "is_static": false + }, + { + "id": "7eafdff8-3401-4ead-8d44-37bfc650e082", + "source_id": "783c0eba-35e1-44d2-8734-ca398c65e82e", + "sink_id": "2bea668b-a6b5-467e-9bc6-787be7f21cf7", + "source_name": "output", + "sink_name": "messageId", + "is_static": false + } + ], + "graph_id": "ef561358-b8a2-407f-935f-18b25975b00e", + "graph_version": 369, + "webhook_id": null, + "webhook": null + }, + { + "id": "0ba27931-43c0-4cc2-952c-14394fc0a10d", + "block_id": "b924ddf4-de4f-4b56-9a85-358930dcbc91", + "input_default": { + "values": {} + }, + "metadata": { + "position": { + "x": 20813.931194689907, + "y": -998.3299549638214 + } + }, + "input_links": [ + { + "id": "0c90d29b-158d-4806-b1d4-f65dc449b3d0", + "source_id": "62628a43-7b56-4746-b996-9f9aecda8aa5", + "sink_id": "0ba27931-43c0-4cc2-952c-14394fc0a10d", + "source_name": "output", + "sink_name": "values_#_Subject", + "is_static": false + }, + { + "id": "f9fb44ad-ce32-421a-a717-8e6cf4c894da", + "source_id": "4b4c3539-4051-42d3-8777-2d1dc5ac6897", + "sink_id": "0ba27931-43c0-4cc2-952c-14394fc0a10d", + "source_name": "value", + "sink_name": "values_#_Confidence Score", + "is_static": false + }, + { + "id": "d3405fa9-d81f-4be2-925c-2739863bd2c8", + "source_id": "e6390819-6c64-49e4-a414-7969d05c099e", + "sink_id": "0ba27931-43c0-4cc2-952c-14394fc0a10d", + "source_name": "value", + "sink_name": "values_#_Confidence Threshold", + "is_static": false + }, + { + "id": "a4e9ac5e-ddc2-4260-ae2f-9740575857ba", + "source_id": "ef730afa-4de7-47d9-bcb7-11af07f2088b", + "sink_id": "0ba27931-43c0-4cc2-952c-14394fc0a10d", + "source_name": "output", + "sink_name": "values_#_Email Received", + "is_static": false + }, + { + "id": "f8b6e637-a7f1-4933-9f70-3f4d980187bc", + "source_id": "2c22b40e-db53-4a91-bb76-9a45e427ada1", + "sink_id": "0ba27931-43c0-4cc2-952c-14394fc0a10d", + "source_name": "yes_output", + "sink_name": "values_#_Status", + "is_static": false + }, + { + "id": "99e13790-2ac2-420d-97aa-00cfef148fcd", + "source_id": "783c0eba-35e1-44d2-8734-ca398c65e82e", + "sink_id": "0ba27931-43c0-4cc2-952c-14394fc0a10d", + "source_name": "output", + "sink_name": "values_#_Email ID", + "is_static": false + }, + { + "id": "ad3963d4-05b4-41d7-924b-176216497fa6", + "source_id": "2c22b40e-db53-4a91-bb76-9a45e427ada1", + "sink_id": "0ba27931-43c0-4cc2-952c-14394fc0a10d", + "source_name": "no_output", + "sink_name": "values_#_Status", + "is_static": false + }, + { + "id": "cd6f5657-085c-40dc-89f8-4cb849692017", + "source_id": "68be5e28-53eb-46ef-a40c-1cdc9d1d3459", + "sink_id": "0ba27931-43c0-4cc2-952c-14394fc0a10d", + "source_name": "output", + "sink_name": "values_#_Sender", + "is_static": false + }, + { + "id": "174a6e58-b69e-448a-8d00-d0e763e58cbb", + "source_id": "98cd104e-fa1a-421f-9572-b4234ea55416", + 
"sink_id": "0ba27931-43c0-4cc2-952c-14394fc0a10d", + "source_name": "output", + "sink_name": "values_#_Reasoning", + "is_static": false + } + ], + "output_links": [ + { + "id": "bf5877d9-85bc-48d6-a6a5-2337ed938837", + "source_id": "0ba27931-43c0-4cc2-952c-14394fc0a10d", + "sink_id": "916095c0-99f2-4778-b988-a15e1857456e", + "source_name": "dictionary", + "sink_name": "trigger", + "is_static": false + }, + { + "id": "e9ca04e6-ac35-415a-9d0e-f9710258e320", + "source_id": "0ba27931-43c0-4cc2-952c-14394fc0a10d", + "sink_id": "81c7afae-69bd-4916-9024-a0c6da636d5d", + "source_name": "dictionary", + "sink_name": "dictionary", + "is_static": false + }, + { + "id": "6309d780-4b93-42cd-a163-07aaf89e63d9", + "source_id": "0ba27931-43c0-4cc2-952c-14394fc0a10d", + "sink_id": "541121c0-1631-4acf-8e1e-5c61a8051c0f", + "source_name": "error", + "sink_name": "value", + "is_static": false + } + ], + "graph_id": "ef561358-b8a2-407f-935f-18b25975b00e", + "graph_version": 369, + "webhook_id": null, + "webhook": null + }, + { + "id": "541121c0-1631-4acf-8e1e-5c61a8051c0f", + "block_id": "363ae599-353e-4804-937e-b2ee3cef3da4", + "input_default": { + "name": "Dictionary Creation Failed", + "title": null, + "value": null, + "format": "", + "advanced": false, + "description": null + }, + "metadata": { + "position": { + "x": 22344.143138017123, + "y": 6329.340165898824 + } + }, + "input_links": [ + { + "id": "6309d780-4b93-42cd-a163-07aaf89e63d9", + "source_id": "0ba27931-43c0-4cc2-952c-14394fc0a10d", + "sink_id": "541121c0-1631-4acf-8e1e-5c61a8051c0f", + "source_name": "error", + "sink_name": "value", + "is_static": false + } + ], + "output_links": [], + "graph_id": "ef561358-b8a2-407f-935f-18b25975b00e", + "graph_version": 369, + "webhook_id": null, + "webhook": null + }, + { + "id": "04dced50-40cf-40f6-bf87-6e3c1ddb746a", + "block_id": "363ae599-353e-4804-937e-b2ee3cef3da4", + "input_default": { + "name": "Adding to List Process Failed", + "title": null, + "value": null, + "format": "", + "advanced": false, + "description": null + }, + "metadata": { + "position": { + "x": 23650.8757176323, + "y": 6337.7771719329885 + } + }, + "input_links": [ + { + "id": "5a5b9d6b-73c0-4bb0-a953-6502c54c4684", + "source_id": "72a1d15e-7711-4c82-bdae-b2fd243563ee", + "sink_id": "04dced50-40cf-40f6-bf87-6e3c1ddb746a", + "source_name": "error", + "sink_name": "value", + "is_static": false + } + ], + "output_links": [], + "graph_id": "ef561358-b8a2-407f-935f-18b25975b00e", + "graph_version": 369, + "webhook_id": null, + "webhook": null + }, + { + "id": "aa470a71-9f1b-49ce-8f20-4b1d34f6b093", + "block_id": "363ae599-353e-4804-937e-b2ee3cef3da4", + "input_default": { + "name": "Success", + "title": null, + "value": null, + "format": "", + "advanced": false, + "description": null + }, + "metadata": { + "position": { + "x": 25063.734638194343, + "y": -1145.3101308595953 + } + }, + "input_links": [ + { + "id": "fdea35df-93c6-4b20-8672-a5c2e67d0a84", + "source_id": "47679fdf-d4fd-42b8-9d28-548ddddc496e", + "sink_id": "aa470a71-9f1b-49ce-8f20-4b1d34f6b093", + "source_name": "response", + "sink_name": "value", + "is_static": false + } + ], + "output_links": [], + "graph_id": "ef561358-b8a2-407f-935f-18b25975b00e", + "graph_version": 369, + "webhook_id": null, + "webhook": null + }, + { + "id": "72a1d15e-7711-4c82-bdae-b2fd243563ee", + "block_id": "aeb08fc1-2fc1-4141-bc8e-f758f183a822", + "input_default": { + "list": [ + "" + ], + "entry": null, + "entries": [], + "position": null + }, + "metadata": { + "position": { + "x": 
22590.882048183783, + "y": -1003.9751813069879 + } + }, + "input_links": [ + { + "id": "093f5c0b-79a7-435f-9a49-d742df68da6f", + "source_id": "81c7afae-69bd-4916-9024-a0c6da636d5d", + "sink_id": "72a1d15e-7711-4c82-bdae-b2fd243563ee", + "source_name": "updated_dictionary", + "sink_name": "list_$_0", + "is_static": false + } + ], + "output_links": [ + { + "id": "5a5b9d6b-73c0-4bb0-a953-6502c54c4684", + "source_id": "72a1d15e-7711-4c82-bdae-b2fd243563ee", + "sink_id": "04dced50-40cf-40f6-bf87-6e3c1ddb746a", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "a090a603-1fcf-4e17-ba2f-f1affcfc1c34", + "source_id": "72a1d15e-7711-4c82-bdae-b2fd243563ee", + "sink_id": "8af604e1-68d7-416f-84fe-41b1ffc3749e", + "source_name": "updated_list", + "sink_name": "records", + "is_static": false + } + ], + "graph_id": "ef561358-b8a2-407f-935f-18b25975b00e", + "graph_version": 369, + "webhook_id": null, + "webhook": null + }, + { + "id": "64efc204-0d77-4f50-85f5-4e80e7bb9940", + "block_id": "0e50422c-6dee-4145-83d6-3a5a392f65de", + "input_default": { + "key": "confidence_score" + }, + "metadata": { + "position": { + "x": 10061.11286699683, + "y": 1978.064996400531 + } + }, + "input_links": [ + { + "id": "494f9e25-9d86-4635-b395-ee948ab2751e", + "source_id": "6b2e19b1-5bb8-49bc-939c-8220d17e9dee", + "sink_id": "64efc204-0d77-4f50-85f5-4e80e7bb9940", + "source_name": "response", + "sink_name": "input", + "is_static": false + } + ], + "output_links": [ + { + "id": "d1dc9a21-6804-447d-a8ca-652bbbf32f2a", + "source_id": "64efc204-0d77-4f50-85f5-4e80e7bb9940", + "sink_id": "4b4c3539-4051-42d3-8777-2d1dc5ac6897", + "source_name": "output", + "sink_name": "value", + "is_static": false + } + ], + "graph_id": "ef561358-b8a2-407f-935f-18b25975b00e", + "graph_version": 369, + "webhook_id": null, + "webhook": null + }, + { + "id": "ef730afa-4de7-47d9-bcb7-11af07f2088b", + "block_id": "0e50422c-6dee-4145-83d6-3a5a392f65de", + "input_default": { + "key": "date" + }, + "metadata": { + "position": { + "x": 10119.944318670057, + "y": -647.9166854102735 + } + }, + "input_links": [ + { + "id": "fcdf10ff-97cb-4277-aa34-25e52f8b03f9", + "source_id": "423e8b8c-ac1a-4d6a-9c70-1c778645d938", + "sink_id": "ef730afa-4de7-47d9-bcb7-11af07f2088b", + "source_name": "item", + "sink_name": "input", + "is_static": false + } + ], + "output_links": [ + { + "id": "a4e9ac5e-ddc2-4260-ae2f-9740575857ba", + "source_id": "ef730afa-4de7-47d9-bcb7-11af07f2088b", + "sink_id": "0ba27931-43c0-4cc2-952c-14394fc0a10d", + "source_name": "output", + "sink_name": "values_#_Email Received", + "is_static": false + } + ], + "graph_id": "ef561358-b8a2-407f-935f-18b25975b00e", + "graph_version": 369, + "webhook_id": null, + "webhook": null + }, + { + "id": "e6390819-6c64-49e4-a414-7969d05c099e", + "block_id": "95d1b990-ce13-4d88-9737-ba5c2070c97b", + "input_default": { + "type": "number" + }, + "metadata": { + "position": { + "x": 10128.92050805434, + "y": -1385.601365457166 + } + }, + "input_links": [ + { + "id": "de9c6cc7-3e9c-4ef1-9d0c-67dcc34ce309", + "source_id": "3ce011b7-40b5-4bb0-880b-203e52b23a3c", + "sink_id": "e6390819-6c64-49e4-a414-7969d05c099e", + "source_name": "result", + "sink_name": "value", + "is_static": true + } + ], + "output_links": [ + { + "id": "d3405fa9-d81f-4be2-925c-2739863bd2c8", + "source_id": "e6390819-6c64-49e4-a414-7969d05c099e", + "sink_id": "0ba27931-43c0-4cc2-952c-14394fc0a10d", + "source_name": "value", + "sink_name": "values_#_Confidence Threshold", + "is_static": false + } + ], + 
"graph_id": "ef561358-b8a2-407f-935f-18b25975b00e", + "graph_version": 369, + "webhook_id": null, + "webhook": null + }, + { + "id": "f3681bf4-67ca-418f-8185-50818078d8d1", + "block_id": "363ae599-353e-4804-937e-b2ee3cef3da4", + "input_default": { + "name": "Error in Gmail Escalation Process", + "title": null, + "value": null, + "format": "", + "advanced": false, + "description": null + }, + "metadata": { + "position": { + "x": 18348.59654909178, + "y": 5824.027615074887 + } + }, + "input_links": [ + { + "id": "da802c48-c8bd-4452-814d-89c3cafa64c0", + "source_id": "2bea668b-a6b5-467e-9bc6-787be7f21cf7", + "sink_id": "f3681bf4-67ca-418f-8185-50818078d8d1", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "8c7b0442-ce83-44cf-891b-43e71637a320", + "source_id": "f064e18f-2363-488b-ab47-dab1d85e55b7", + "sink_id": "f3681bf4-67ca-418f-8185-50818078d8d1", + "source_name": "error", + "sink_name": "value", + "is_static": false + } + ], + "output_links": [], + "graph_id": "ef561358-b8a2-407f-935f-18b25975b00e", + "graph_version": 369, + "webhook_id": null, + "webhook": null + }, + { + "id": "6b443996-b026-4422-bf7c-99d1174fd4fd", + "block_id": "c0a8e994-ebf1-4a9c-a4d8-89d09c86741b", + "input_default": { + "name": "Escalation Email", + "title": "Human Escalation Email Address", + "value": "", + "advanced": false, + "description": "Email address where complex support requests that require human attention will be forwarded", + "placeholder_values": [] + }, + "metadata": { + "position": { + "x": 15477.8947349296, + "y": 1485.943583344358 + } + }, + "input_links": [], + "output_links": [ + { + "id": "65ce669e-8224-44de-b797-1be31b104170", + "source_id": "6b443996-b026-4422-bf7c-99d1174fd4fd", + "sink_id": "2bea668b-a6b5-467e-9bc6-787be7f21cf7", + "source_name": "result", + "sink_name": "to", + "is_static": true + } + ], + "graph_id": "ef561358-b8a2-407f-935f-18b25975b00e", + "graph_version": 369, + "webhook_id": null, + "webhook": null + }, + { + "id": "98cd104e-fa1a-421f-9572-b4234ea55416", + "block_id": "0e50422c-6dee-4145-83d6-3a5a392f65de", + "input_default": { + "key": "reasoning" + }, + "metadata": { + "position": { + "x": 10083.343823760459, + "y": 2848.538600865504 + } + }, + "input_links": [ + { + "id": "8176c88c-d42f-4266-9749-e28de468e07d", + "source_id": "6b2e19b1-5bb8-49bc-939c-8220d17e9dee", + "sink_id": "98cd104e-fa1a-421f-9572-b4234ea55416", + "source_name": "response", + "sink_name": "input", + "is_static": false + } + ], + "output_links": [ + { + "id": "174a6e58-b69e-448a-8d00-d0e763e58cbb", + "source_id": "98cd104e-fa1a-421f-9572-b4234ea55416", + "sink_id": "0ba27931-43c0-4cc2-952c-14394fc0a10d", + "source_name": "output", + "sink_name": "values_#_Reasoning", + "is_static": false + } + ], + "graph_id": "ef561358-b8a2-407f-935f-18b25975b00e", + "graph_version": 369, + "webhook_id": null, + "webhook": null + }, + { + "id": "916095c0-99f2-4778-b988-a15e1857456e", + "block_id": "716a67b3-6760-42e7-86dc-18645c6e00fc", + "input_default": { + "format_type": { + "format": "%Y-%m-%d %H:%M:%S", + "timezone": "UTC", + "discriminator": "strftime" + } + }, + "metadata": { + "position": { + "x": 21440.42170327138, + "y": -1009.9581348960743 + } + }, + "input_links": [ + { + "id": "bf5877d9-85bc-48d6-a6a5-2337ed938837", + "source_id": "0ba27931-43c0-4cc2-952c-14394fc0a10d", + "sink_id": "916095c0-99f2-4778-b988-a15e1857456e", + "source_name": "dictionary", + "sink_name": "trigger", + "is_static": false + } + ], + "output_links": [ + { + "id": 
"30925949-b897-4ace-a9f8-9cb5cb792ec6", + "source_id": "916095c0-99f2-4778-b988-a15e1857456e", + "sink_id": "81c7afae-69bd-4916-9024-a0c6da636d5d", + "source_name": "date_time", + "sink_name": "value", + "is_static": false + } + ], + "graph_id": "ef561358-b8a2-407f-935f-18b25975b00e", + "graph_version": 369, + "webhook_id": null, + "webhook": null + }, + { + "id": "81c7afae-69bd-4916-9024-a0c6da636d5d", + "block_id": "31d1064e-7446-4693-a7d4-65e5ca1180d1", + "input_default": { + "key": "Timestamp", + "value": null, + "entries": {}, + "dictionary": {} + }, + "metadata": { + "position": { + "x": 22028.3509182162, + "y": -1012.617793657085 + } + }, + "input_links": [ + { + "id": "e9ca04e6-ac35-415a-9d0e-f9710258e320", + "source_id": "0ba27931-43c0-4cc2-952c-14394fc0a10d", + "sink_id": "81c7afae-69bd-4916-9024-a0c6da636d5d", + "source_name": "dictionary", + "sink_name": "dictionary", + "is_static": false + }, + { + "id": "30925949-b897-4ace-a9f8-9cb5cb792ec6", + "source_id": "916095c0-99f2-4778-b988-a15e1857456e", + "sink_id": "81c7afae-69bd-4916-9024-a0c6da636d5d", + "source_name": "date_time", + "sink_name": "value", + "is_static": false + } + ], + "output_links": [ + { + "id": "1e5c56ed-58ca-4346-93ae-62520ca7acef", + "source_id": "81c7afae-69bd-4916-9024-a0c6da636d5d", + "sink_id": "551eba5c-6f98-437a-a6bd-06fb74af9bf5", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "093f5c0b-79a7-435f-9a49-d742df68da6f", + "source_id": "81c7afae-69bd-4916-9024-a0c6da636d5d", + "sink_id": "72a1d15e-7711-4c82-bdae-b2fd243563ee", + "source_name": "updated_dictionary", + "sink_name": "list_$_0", + "is_static": false + } + ], + "graph_id": "ef561358-b8a2-407f-935f-18b25975b00e", + "graph_version": 369, + "webhook_id": null, + "webhook": null + }, + { + "id": "551eba5c-6f98-437a-a6bd-06fb74af9bf5", + "block_id": "363ae599-353e-4804-937e-b2ee3cef3da4", + "input_default": { + "name": "Error adding to Airtable Write payload", + "title": null, + "value": null, + "format": "", + "advanced": false, + "description": null + }, + "metadata": { + "position": { + "x": 22998.910025310455, + "y": 6336.245524799465 + } + }, + "input_links": [ + { + "id": "1e5c56ed-58ca-4346-93ae-62520ca7acef", + "source_id": "81c7afae-69bd-4916-9024-a0c6da636d5d", + "sink_id": "551eba5c-6f98-437a-a6bd-06fb74af9bf5", + "source_name": "error", + "sink_name": "value", + "is_static": false + } + ], + "output_links": [], + "graph_id": "ef561358-b8a2-407f-935f-18b25975b00e", + "graph_version": 369, + "webhook_id": null, + "webhook": null + }, + { + "id": "2c22b40e-db53-4a91-bb76-9a45e427ada1", + "block_id": "715696a0-e1da-45c8-b209-c2fa9c3b0be6", + "input_default": { + "value2": "true", + "no_value": "Escalated", + "operator": "==", + "yes_value": "Auto-replied" + }, + "metadata": { + "position": { + "x": 12239.520123451122, + "y": 1001.8327291456504 + } + }, + "input_links": [ + { + "id": "aeff641a-5a4d-4839-9971-8b60f408136c", + "source_id": "b30555a2-d3d9-47cb-9174-6197591625a0", + "sink_id": "2c22b40e-db53-4a91-bb76-9a45e427ada1", + "source_name": "value", + "sink_name": "value1", + "is_static": false + } + ], + "output_links": [ + { + "id": "f8b6e637-a7f1-4933-9f70-3f4d980187bc", + "source_id": "2c22b40e-db53-4a91-bb76-9a45e427ada1", + "sink_id": "0ba27931-43c0-4cc2-952c-14394fc0a10d", + "source_name": "yes_output", + "sink_name": "values_#_Status", + "is_static": false + }, + { + "id": "ad3963d4-05b4-41d7-924b-176216497fa6", + "source_id": "2c22b40e-db53-4a91-bb76-9a45e427ada1", + "sink_id": 
"0ba27931-43c0-4cc2-952c-14394fc0a10d", + "source_name": "no_output", + "sink_name": "values_#_Status", + "is_static": false + } + ], + "graph_id": "ef561358-b8a2-407f-935f-18b25975b00e", + "graph_version": 369, + "webhook_id": null, + "webhook": null + }, + { + "id": "b30555a2-d3d9-47cb-9174-6197591625a0", + "block_id": "95d1b990-ce13-4d88-9737-ba5c2070c97b", + "input_default": { + "type": "string" + }, + "metadata": { + "position": { + "x": 11576.631416001415, + "y": 1021.3723440115053 + } + }, + "input_links": [ + { + "id": "313227ff-5500-4dee-9621-3f1858979029", + "source_id": "19757868-f240-4e39-b484-fe3f84d3d237", + "sink_id": "b30555a2-d3d9-47cb-9174-6197591625a0", + "source_name": "result", + "sink_name": "value", + "is_static": false + } + ], + "output_links": [ + { + "id": "7e82bed8-34e4-46ee-bc7e-61b731cbffda", + "source_id": "b30555a2-d3d9-47cb-9174-6197591625a0", + "sink_id": "04d04750-c575-4252-a7c5-4732ffb1ab11", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "aeff641a-5a4d-4839-9971-8b60f408136c", + "source_id": "b30555a2-d3d9-47cb-9174-6197591625a0", + "sink_id": "2c22b40e-db53-4a91-bb76-9a45e427ada1", + "source_name": "value", + "sink_name": "value1", + "is_static": false + } + ], + "graph_id": "ef561358-b8a2-407f-935f-18b25975b00e", + "graph_version": 369, + "webhook_id": null, + "webhook": null + }, + { + "id": "04d04750-c575-4252-a7c5-4732ffb1ab11", + "block_id": "363ae599-353e-4804-937e-b2ee3cef3da4", + "input_default": { + "name": "Error Type Conversion", + "title": null, + "value": null, + "format": "", + "advanced": false, + "description": null + }, + "metadata": { + "position": { + "x": 12108.616167023609, + "y": 6335.446040316887 + } + }, + "input_links": [ + { + "id": "7e82bed8-34e4-46ee-bc7e-61b731cbffda", + "source_id": "b30555a2-d3d9-47cb-9174-6197591625a0", + "sink_id": "04d04750-c575-4252-a7c5-4732ffb1ab11", + "source_name": "error", + "sink_name": "value", + "is_static": false + } + ], + "output_links": [], + "graph_id": "ef561358-b8a2-407f-935f-18b25975b00e", + "graph_version": 369, + "webhook_id": null, + "webhook": null + }, + { + "id": "423e8b8c-ac1a-4d6a-9c70-1c778645d938", + "block_id": "262ca24c-1025-43cf-a578-534e23234e97", + "input_default": { + "list": [], + "index": 0 + }, + "metadata": { + "position": { + "x": -2318.4819333134533, + "y": 185.68971703440786 + } + }, + "input_links": [ + { + "id": "4f17205a-0241-4f16-aeb3-fae33cf92fe9", + "source_id": "881de375-cc67-4776-8323-80b32a4e3e20", + "sink_id": "423e8b8c-ac1a-4d6a-9c70-1c778645d938", + "source_name": "yes_output", + "sink_name": "list", + "is_static": false + } + ], + "output_links": [ + { + "id": "fcdf10ff-97cb-4277-aa34-25e52f8b03f9", + "source_id": "423e8b8c-ac1a-4d6a-9c70-1c778645d938", + "sink_id": "ef730afa-4de7-47d9-bcb7-11af07f2088b", + "source_name": "item", + "sink_name": "input", + "is_static": false + }, + { + "id": "bd6efba6-419e-4317-a820-17ef2801b2c9", + "source_id": "423e8b8c-ac1a-4d6a-9c70-1c778645d938", + "sink_id": "58a4593a-44cc-4c68-85fa-af97815ac94e", + "source_name": "item", + "sink_name": "input", + "is_static": false + }, + { + "id": "c6c7095f-2953-4550-bdad-1c899b95a68c", + "source_id": "423e8b8c-ac1a-4d6a-9c70-1c778645d938", + "sink_id": "783c0eba-35e1-44d2-8734-ca398c65e82e", + "source_name": "item", + "sink_name": "input", + "is_static": false + }, + { + "id": "4774f0a4-63b4-4bfe-9be0-b0b434f9301c", + "source_id": "423e8b8c-ac1a-4d6a-9c70-1c778645d938", + "sink_id": "cc7c1334-f7eb-43c1-8ac2-15a84cadd2d7", + 
"source_name": "item", + "sink_name": "input", + "is_static": false + }, + { + "id": "d662d709-54bd-4c4d-ac8b-8a5e24c8eb65", + "source_id": "423e8b8c-ac1a-4d6a-9c70-1c778645d938", + "sink_id": "68be5e28-53eb-46ef-a40c-1cdc9d1d3459", + "source_name": "item", + "sink_name": "input", + "is_static": false + }, + { + "id": "8b58d74a-9028-4257-864b-b5e1ef137541", + "source_id": "423e8b8c-ac1a-4d6a-9c70-1c778645d938", + "sink_id": "62628a43-7b56-4746-b996-9f9aecda8aa5", + "source_name": "item", + "sink_name": "input", + "is_static": false + } + ], + "graph_id": "ef561358-b8a2-407f-935f-18b25975b00e", + "graph_version": 369, + "webhook_id": null, + "webhook": null + }, + { + "id": "b014ffc4-8b61-4893-a6fd-1b8ee26a3417", + "block_id": "95d1b990-ce13-4d88-9737-ba5c2070c97b", + "input_default": { + "type": "boolean", + "value": "true" + }, + "metadata": { + "position": { + "x": -3727.3387428181913, + "y": 60.132298707885866 + } + }, + "input_links": [], + "output_links": [ + { + "id": "ab6e1ed4-a063-4b51-8387-962485e8d025", + "source_id": "b014ffc4-8b61-4893-a6fd-1b8ee26a3417", + "sink_id": "881de375-cc67-4776-8323-80b32a4e3e20", + "source_name": "value", + "sink_name": "value2", + "is_static": false + } + ], + "graph_id": "ef561358-b8a2-407f-935f-18b25975b00e", + "graph_version": 369, + "webhook_id": null, + "webhook": null + }, + { + "id": "4eaa8717-fef1-4792-8444-3f397830964a", + "block_id": "cc10ff7b-7753-4ff2-9af6-9399b1a7eddc", + "input_default": { + "text": "Option 2:\n== ESCALATE TO HUMAN ==" + }, + "metadata": { + "position": { + "x": 15623.254838255332, + "y": -395.47746668079077 + } + }, + "input_links": [], + "output_links": [], + "graph_id": "ef561358-b8a2-407f-935f-18b25975b00e", + "graph_version": 369, + "webhook_id": null, + "webhook": null + }, + { + "id": "b0aeb8e2-75d6-44e7-ae59-476c23c11a0e", + "block_id": "cc10ff7b-7753-4ff2-9af6-9399b1a7eddc", + "input_default": { + "text": "Option 1:\n== AI REPLY ==" + }, + "metadata": { + "position": { + "x": 15669.172950633385, + "y": -3349.135928920478 + } + }, + "input_links": [], + "output_links": [], + "graph_id": "ef561358-b8a2-407f-935f-18b25975b00e", + "graph_version": 369, + "webhook_id": null, + "webhook": null + }, + { + "id": "2324fbde-baac-4cef-ae10-6e5fb859b88e", + "block_id": "cc10ff7b-7753-4ff2-9af6-9399b1a7eddc", + "input_default": { + "text": "= Research Documentation =" + }, + "metadata": { + "position": { + "x": 6179.405102578336, + "y": -502.1749222877834 + } + }, + "input_links": [], + "output_links": [], + "graph_id": "ef561358-b8a2-407f-935f-18b25975b00e", + "graph_version": 369, + "webhook_id": null, + "webhook": null + }, + { + "id": "4049a1ce-91f3-4851-a5e6-8efbcbec1729", + "block_id": "cc10ff7b-7753-4ff2-9af6-9399b1a7eddc", + "input_default": { + "text": "== Log to Airtable ==" + }, + "metadata": { + "position": { + "x": 22603.352272532335, + "y": -1460.949491315006 + } + }, + "input_links": [], + "output_links": [], + "graph_id": "ef561358-b8a2-407f-935f-18b25975b00e", + "graph_version": 369, + "webhook_id": null, + "webhook": null + }, + { + "id": "f3ad60e4-48f7-4758-af84-ab6a0d0ec457", + "block_id": "cc10ff7b-7753-4ff2-9af6-9399b1a7eddc", + "input_default": { + "text": "Read Email, check if inbox is empty" + }, + "metadata": { + "position": { + "x": -2810.4847179418566, + "y": -467.50154184598614 + } + }, + "input_links": [], + "output_links": [], + "graph_id": "ef561358-b8a2-407f-935f-18b25975b00e", + "graph_version": 369, + "webhook_id": null, + "webhook": null + }, + { + "id": 
"f064e18f-2363-488b-ab47-dab1d85e55b7", + "block_id": "1f292d4a-41a4-4977-9684-7c8d560b9f91", + "input_default": { + "model": "gpt-4.1-2025-04-14", + "retry": 3, + "prompt": "You are an AI assistant tasked with escalating a customer support ticket to human support via email forwarding. You will be provided with an AI analysis of the customer request.\n\nHere is the AI analysis of the request:\n\n{{AI_ANALYSIS | safe}}\n\n\nCompose a clear, well-formatted escalation email to human support. Follow these guidelines:\n\n1. Address the email to \"Human Support Team\"\n2. Include the AI analysis with proper formatting and line breaks\n3. Use clear paragraph breaks and formatting for readability\n4. Keep the message concise but complete - the original email thread is already preserved in the forward\n\nYour email should follow this structure:\n- Brief greeting\n- Clear explanation of why this requires human attention\n- Well-formatted AI analysis with proper line breaks\n- Professional closing\n\n**IMPORTANT FORMATTING:**\n- Use proper line breaks (\\n\\n) between paragraphs\n- Format lists and bullet points clearly\n- Ensure the email is easy to read when forwarded\n\nWrite your email inside tags.", + "sys_prompt": "", + "ollama_host": "localhost:11434", + "prompt_values": {} + }, + "metadata": { + "position": { + "x": 15241.174623909594, + "y": -79.26315426453732 + } + }, + "input_links": [ + { + "id": "2c7a281e-57d9-46a7-9823-fc76b37b5cd4", + "source_id": "19757868-f240-4e39-b484-fe3f84d3d237", + "sink_id": "f064e18f-2363-488b-ab47-dab1d85e55b7", + "source_name": "no_output", + "sink_name": "prompt_values_#_AI_ANALYSIS", + "is_static": false + }, + { + "id": "9d40b3f6-529a-47ba-ae66-eddfd76456c1", + "source_id": "cc7c1334-f7eb-43c1-8ac2-15a84cadd2d7", + "sink_id": "f064e18f-2363-488b-ab47-dab1d85e55b7", + "source_name": "output", + "sink_name": "prompt_values_#_SUPPORT_EMAIL", + "is_static": false + } + ], + "output_links": [ + { + "id": "ef1796d4-d291-4547-b74c-ad9d402c2a41", + "source_id": "f064e18f-2363-488b-ab47-dab1d85e55b7", + "sink_id": "2bea668b-a6b5-467e-9bc6-787be7f21cf7", + "source_name": "response", + "sink_name": "forwardMessage", + "is_static": false + }, + { + "id": "8c7b0442-ce83-44cf-891b-43e71637a320", + "source_id": "f064e18f-2363-488b-ab47-dab1d85e55b7", + "sink_id": "f3681bf4-67ca-418f-8185-50818078d8d1", + "source_name": "error", + "sink_name": "value", + "is_static": false + } + ], + "graph_id": "ef561358-b8a2-407f-935f-18b25975b00e", + "graph_version": 369, + "webhook_id": null, + "webhook": null + }, + { + "id": "8af604e1-68d7-416f-84fe-41b1ffc3749e", + "block_id": "42527e98-47b6-44ce-ac0e-86b4883721d3", + "input_default": { + "records": [], + "typecast": false, + "return_fields_by_field_id": null + }, + "metadata": { + "position": { + "x": 23238.33408276495, + "y": -1152.3392111246394 + } + }, + "input_links": [ + { + "id": "91a93af1-7663-4560-8fcb-7cd41eb4f312", + "source_id": "df31ccd5-ebc2-48bb-8cd9-02fb15847a25", + "sink_id": "8af604e1-68d7-416f-84fe-41b1ffc3749e", + "source_name": "positive", + "sink_name": "base_id", + "is_static": false + }, + { + "id": "4ea3f29f-51fa-4435-9807-35e665d8728f", + "source_id": "4de3a835-ce7d-4503-af97-b947a31c3285", + "sink_id": "8af604e1-68d7-416f-84fe-41b1ffc3749e", + "source_name": "positive", + "sink_name": "table_id_or_name", + "is_static": false + }, + { + "id": "a090a603-1fcf-4e17-ba2f-f1affcfc1c34", + "source_id": "72a1d15e-7711-4c82-bdae-b2fd243563ee", + "sink_id": "8af604e1-68d7-416f-84fe-41b1ffc3749e", + "source_name": 
"updated_list", + "sink_name": "records", + "is_static": false + }, + { + "id": "13cc3a42-f363-4e86-92e7-b33c4afc00e1", + "source_id": "11ff0653-85cb-4096-a4ee-70e4326d3579", + "sink_id": "8af604e1-68d7-416f-84fe-41b1ffc3749e", + "source_name": "value", + "sink_name": "table_id_or_name", + "is_static": false + }, + { + "id": "e45f100c-c364-4f36-a235-bd8f90aa4530", + "source_id": "e48f04c5-d6b9-442a-aba1-8258f9aee729", + "sink_id": "8af604e1-68d7-416f-84fe-41b1ffc3749e", + "source_name": "value", + "sink_name": "base_id", + "is_static": false + } + ], + "output_links": [ + { + "id": "5643f208-2f9b-40c2-8aac-e09445071b7e", + "source_id": "8af604e1-68d7-416f-84fe-41b1ffc3749e", + "sink_id": "87521271-8efa-4177-a320-4ef68572fafe", + "source_name": "records", + "sink_name": "value", + "is_static": false + }, + { + "id": "dc116bb0-1d68-4b3a-aa46-8826f5d36efa", + "source_id": "8af604e1-68d7-416f-84fe-41b1ffc3749e", + "sink_id": "87521271-8efa-4177-a320-4ef68572fafe", + "source_name": "details", + "sink_name": "value", + "is_static": false + } + ], + "graph_id": "ef561358-b8a2-407f-935f-18b25975b00e", + "graph_version": 369, + "webhook_id": null, + "webhook": null + }, + { + "id": "b69fd82c-f602-4655-b208-e3c9785bc37f", + "block_id": "90a56ffb-7024-4b2b-ab50-e26c5e5ab8ba", + "input_default": { + "name": "Email Signature", + "title": null, + "value": "", + "advanced": false, + "description": null, + "placeholder_values": [] + }, + "metadata": { + "position": { + "x": 15106.407323571713, + "y": -2827.9020135426836 + } + }, + "input_links": [], + "output_links": [ + { + "id": "884103aa-a984-4715-a0f2-ab93bf641aa1", + "source_id": "b69fd82c-f602-4655-b208-e3c9785bc37f", + "sink_id": "5cd84e14-97f6-408f-bd9b-017f1301fed8", + "source_name": "result", + "sink_name": "input_$_1", + "is_static": true + } + ], + "graph_id": "ef561358-b8a2-407f-935f-18b25975b00e", + "graph_version": 369, + "webhook_id": null, + "webhook": null + }, + { + "id": "4b4c3539-4051-42d3-8777-2d1dc5ac6897", + "block_id": "95d1b990-ce13-4d88-9737-ba5c2070c97b", + "input_default": { + "type": "number" + }, + "metadata": { + "position": { + "x": 10961.223256335805, + "y": 1892.1933521811452 + } + }, + "input_links": [ + { + "id": "d1dc9a21-6804-447d-a8ca-652bbbf32f2a", + "source_id": "64efc204-0d77-4f50-85f5-4e80e7bb9940", + "sink_id": "4b4c3539-4051-42d3-8777-2d1dc5ac6897", + "source_name": "output", + "sink_name": "value", + "is_static": false + } + ], + "output_links": [ + { + "id": "f9fb44ad-ce32-421a-a717-8e6cf4c894da", + "source_id": "4b4c3539-4051-42d3-8777-2d1dc5ac6897", + "sink_id": "0ba27931-43c0-4cc2-952c-14394fc0a10d", + "source_name": "value", + "sink_name": "values_#_Confidence Score", + "is_static": false + } + ], + "graph_id": "ef561358-b8a2-407f-935f-18b25975b00e", + "graph_version": 369, + "webhook_id": null, + "webhook": null + }, + { + "id": "eee15323-2b62-42f9-8dc0-3cfaa4e191b7", + "block_id": "d8710fc9-6e29-481e-a7d5-165eb16f8471", + "input_default": { + "key": "airtable_base_id", + "scope": "within_agent", + "default_value": "\"no base\"" + }, + "metadata": { + "position": { + "x": -3221.9622839864496, + "y": -3931.8274474696573 + } + }, + "input_links": [], + "output_links": [ + { + "id": "e6475fc2-a119-483a-9f39-952c97c8e0f5", + "source_id": "eee15323-2b62-42f9-8dc0-3cfaa4e191b7", + "sink_id": "a04f4e4b-c45d-4e43-a35a-092e0ab8df95", + "source_name": "value", + "sink_name": "value1", + "is_static": true + }, + { + "id": "c07cbe40-08c0-4c71-b352-3c3002477d3b", + "source_id": 
"eee15323-2b62-42f9-8dc0-3cfaa4e191b7", + "sink_id": "df31ccd5-ebc2-48bb-8cd9-02fb15847a25", + "source_name": "value", + "sink_name": "text", + "is_static": true + }, + { + "id": "d79ae92e-092b-4c81-b007-5cccee7db054", + "source_id": "eee15323-2b62-42f9-8dc0-3cfaa4e191b7", + "sink_id": "df31ccd5-ebc2-48bb-8cd9-02fb15847a25", + "source_name": "value", + "sink_name": "data", + "is_static": true + } + ], + "graph_id": "ef561358-b8a2-407f-935f-18b25975b00e", + "graph_version": 369, + "webhook_id": null, + "webhook": null + }, + { + "id": "249d0bd7-0c1b-4669-bd7e-cf319f30db82", + "block_id": "cc10ff7b-7753-4ff2-9af6-9399b1a7eddc", + "input_default": { + "text": "Check airtable exists or create a new table" + }, + "metadata": { + "position": { + "x": -813.5937267932889, + "y": -5091.993650936468 + } + }, + "input_links": [], + "output_links": [], + "graph_id": "ef561358-b8a2-407f-935f-18b25975b00e", + "graph_version": 369, + "webhook_id": null, + "webhook": null + }, + { + "id": "a04f4e4b-c45d-4e43-a35a-092e0ab8df95", + "block_id": "715696a0-e1da-45c8-b209-c2fa9c3b0be6", + "input_default": { + "value2": "\"no base\"", + "no_value": null, + "operator": "==", + "yes_value": null + }, + "metadata": { + "position": { + "x": -756.7079259057305, + "y": -4470.959327044515 + } + }, + "input_links": [ + { + "id": "e6475fc2-a119-483a-9f39-952c97c8e0f5", + "source_id": "eee15323-2b62-42f9-8dc0-3cfaa4e191b7", + "sink_id": "a04f4e4b-c45d-4e43-a35a-092e0ab8df95", + "source_name": "value", + "sink_name": "value1", + "is_static": true + } + ], + "output_links": [ + { + "id": "c71a0e91-6429-4c30-ab69-7ee393f761f3", + "source_id": "a04f4e4b-c45d-4e43-a35a-092e0ab8df95", + "sink_id": "40597163-b473-4e62-9eb1-e633cf98c774", + "source_name": "yes_output", + "sink_name": "input", + "is_static": false + } + ], + "graph_id": "ef561358-b8a2-407f-935f-18b25975b00e", + "graph_version": 369, + "webhook_id": null, + "webhook": null + }, + { + "id": "40597163-b473-4e62-9eb1-e633cf98c774", + "block_id": "1ff065e9-88e8-4358-9d82-8dc91f622ba9", + "input_default": { + "data": "AutoGPT - Customer Support Agent" + }, + "metadata": { + "position": { + "x": 29.437117138455676, + "y": -3818.072750653035 + } + }, + "input_links": [ + { + "id": "c71a0e91-6429-4c30-ab69-7ee393f761f3", + "source_id": "a04f4e4b-c45d-4e43-a35a-092e0ab8df95", + "sink_id": "40597163-b473-4e62-9eb1-e633cf98c774", + "source_name": "yes_output", + "sink_name": "input", + "is_static": false + } + ], + "output_links": [ + { + "id": "29630e00-3eeb-4240-860b-3f0531ac57b6", + "source_id": "40597163-b473-4e62-9eb1-e633cf98c774", + "sink_id": "e061a66b-3e5f-475f-bfa3-ba364f662f7c", + "source_name": "output", + "sink_name": "name", + "is_static": true + } + ], + "graph_id": "ef561358-b8a2-407f-935f-18b25975b00e", + "graph_version": 369, + "webhook_id": null, + "webhook": null + }, + { + "id": "ee8e3c4a-92f4-4c69-b3ec-9c7405f33d22", + "block_id": "7fcd3bcb-8e1b-4e69-903d-32d3d4a92158", + "input_default": { + "name": "Airtable Workspace ID", + "title": null, + "value": "", + "advanced": false, + "description": "To get your Airtable workspace ID is to open Airtable in a browser, select your workspace, and check the URL in your browser\u2019s address bar. The workspace ID is the string that starts with \"wsp\" in the URL (for example,\u00a0https://airtable.com/workspaces/wspsqMNxxxxxxxxxxxxx). 
Do not include the trailing question mark if present.", + "placeholder_values": [] + }, + "metadata": { + "position": { + "x": -25.06951093488192, + "y": -4808.956221583233 + } + }, + "input_links": [], + "output_links": [ + { + "id": "4b6076ee-f741-4aff-908f-83bf0013c3a8", + "source_id": "ee8e3c4a-92f4-4c69-b3ec-9c7405f33d22", + "sink_id": "e061a66b-3e5f-475f-bfa3-ba364f662f7c", + "source_name": "result", + "sink_name": "workspace_id", + "is_static": true + } + ], + "graph_id": "ef561358-b8a2-407f-935f-18b25975b00e", + "graph_version": 369, + "webhook_id": null, + "webhook": null + }, + { + "id": "e061a66b-3e5f-475f-bfa3-ba364f662f7c", + "block_id": "f59b88a8-54ce-4676-a508-fd614b4e8dce", + "input_default": { + "tables": [ + { + "name": "Default table", + "fields": [ + { + "name": "Email ID", + "type": "singleLineText" + }, + { + "name": "Email Received", + "type": "dateTime", + "options": { + "timeZone": "utc", + "dateFormat": { + "name": "iso" + }, + "timeFormat": { + "name": "24hour" + } + } + }, + { + "name": "Sender", + "type": "email" + }, + { + "name": "Subject", + "type": "singleLineText" + }, + { + "name": "Confidence Score", + "type": "number", + "options": { + "precision": 0 + } + }, + { + "name": "Confidence Threshold", + "type": "number", + "options": { + "precision": 0 + } + }, + { + "name": "Status", + "type": "singleSelect", + "options": { + "choices": [ + { + "name": "Auto-replied", + "color": "greenBright" + }, + { + "name": "Escalated", + "color": "orangeBright" + } + ] + } + }, + { + "name": "Reasoning", + "type": "multilineText" + }, + { + "name": "Timestamp", + "type": "dateTime", + "options": { + "timeZone": "utc", + "dateFormat": { + "name": "iso" + }, + "timeFormat": { + "name": "24hour" + } + } + } + ], + "description": "Default table" + } + ] + }, + "metadata": { + "position": { + "x": 810.8016749879614, + "y": -4299.437787912821 + } + }, + "input_links": [ + { + "id": "4b6076ee-f741-4aff-908f-83bf0013c3a8", + "source_id": "ee8e3c4a-92f4-4c69-b3ec-9c7405f33d22", + "sink_id": "e061a66b-3e5f-475f-bfa3-ba364f662f7c", + "source_name": "result", + "sink_name": "workspace_id", + "is_static": true + }, + { + "id": "29630e00-3eeb-4240-860b-3f0531ac57b6", + "source_id": "40597163-b473-4e62-9eb1-e633cf98c774", + "sink_id": "e061a66b-3e5f-475f-bfa3-ba364f662f7c", + "source_name": "output", + "sink_name": "name", + "is_static": true + } + ], + "output_links": [ + { + "id": "2a92a7b0-3fb4-4a77-9e60-57f5885c9968", + "source_id": "e061a66b-3e5f-475f-bfa3-ba364f662f7c", + "sink_id": "cd060293-584c-4265-bc24-afd141ac57a3", + "source_name": "table", + "sink_name": "input", + "is_static": false + }, + { + "id": "4ec94486-f5ee-4481-b5e9-393a721b8d61", + "source_id": "e061a66b-3e5f-475f-bfa3-ba364f662f7c", + "sink_id": "e48f04c5-d6b9-442a-aba1-8258f9aee729", + "source_name": "base_id", + "sink_name": "value", + "is_static": false + } + ], + "graph_id": "ef561358-b8a2-407f-935f-18b25975b00e", + "graph_version": 369, + "webhook_id": null, + "webhook": null + }, + { + "id": "e48f04c5-d6b9-442a-aba1-8258f9aee729", + "block_id": "1d055e55-a2b9-4547-8311-907d05b0304d", + "input_default": { + "key": "airtable_base_id", + "scope": "within_agent" + }, + "metadata": { + "position": { + "x": 1687.1341506441627, + "y": -4407.308561781573 + } + }, + "input_links": [ + { + "id": "4ec94486-f5ee-4481-b5e9-393a721b8d61", + "source_id": "e061a66b-3e5f-475f-bfa3-ba364f662f7c", + "sink_id": "e48f04c5-d6b9-442a-aba1-8258f9aee729", + "source_name": "base_id", + "sink_name": "value", + "is_static": false 
+ } + ], + "output_links": [ + { + "id": "e45f100c-c364-4f36-a235-bd8f90aa4530", + "source_id": "e48f04c5-d6b9-442a-aba1-8258f9aee729", + "sink_id": "8af604e1-68d7-416f-84fe-41b1ffc3749e", + "source_name": "value", + "sink_name": "base_id", + "is_static": false + } + ], + "graph_id": "ef561358-b8a2-407f-935f-18b25975b00e", + "graph_version": 369, + "webhook_id": null, + "webhook": null + }, + { + "id": "cd060293-584c-4265-bc24-afd141ac57a3", + "block_id": "0e50422c-6dee-4145-83d6-3a5a392f65de", + "input_default": { + "key": "id", + "input": "id" + }, + "metadata": { + "position": { + "x": 1678.28350336602, + "y": -3429.956467741783 + } + }, + "input_links": [ + { + "id": "2a92a7b0-3fb4-4a77-9e60-57f5885c9968", + "source_id": "e061a66b-3e5f-475f-bfa3-ba364f662f7c", + "sink_id": "cd060293-584c-4265-bc24-afd141ac57a3", + "source_name": "table", + "sink_name": "input", + "is_static": false + } + ], + "output_links": [ + { + "id": "245482e9-6c54-4cd5-8e18-d7f20e68901a", + "source_id": "cd060293-584c-4265-bc24-afd141ac57a3", + "sink_id": "11ff0653-85cb-4096-a4ee-70e4326d3579", + "source_name": "output", + "sink_name": "value", + "is_static": false + } + ], + "graph_id": "ef561358-b8a2-407f-935f-18b25975b00e", + "graph_version": 369, + "webhook_id": null, + "webhook": null + }, + { + "id": "11ff0653-85cb-4096-a4ee-70e4326d3579", + "block_id": "1d055e55-a2b9-4547-8311-907d05b0304d", + "input_default": { + "key": "airtable_table_id", + "scope": "within_agent" + }, + "metadata": { + "position": { + "x": 2426.725598733371, + "y": -3408.9295586269513 + } + }, + "input_links": [ + { + "id": "245482e9-6c54-4cd5-8e18-d7f20e68901a", + "source_id": "cd060293-584c-4265-bc24-afd141ac57a3", + "sink_id": "11ff0653-85cb-4096-a4ee-70e4326d3579", + "source_name": "output", + "sink_name": "value", + "is_static": false + } + ], + "output_links": [ + { + "id": "13cc3a42-f363-4e86-92e7-b33c4afc00e1", + "source_id": "11ff0653-85cb-4096-a4ee-70e4326d3579", + "sink_id": "8af604e1-68d7-416f-84fe-41b1ffc3749e", + "source_name": "value", + "sink_name": "table_id_or_name", + "is_static": false + } + ], + "graph_id": "ef561358-b8a2-407f-935f-18b25975b00e", + "graph_version": 369, + "webhook_id": null, + "webhook": null + }, + { + "id": "5b1f1d9f-8c55-4569-beec-dc71c83183a6", + "block_id": "d8710fc9-6e29-481e-a7d5-165eb16f8471", + "input_default": { + "key": "airtable_table_id", + "scope": "within_agent", + "default_value": null + }, + "metadata": { + "position": { + "x": -3224.904869058506, + "y": -2872.282904679654 + } + }, + "input_links": [], + "output_links": [ + { + "id": "672bf51d-9db4-4bde-a189-60ac948b58f6", + "source_id": "5b1f1d9f-8c55-4569-beec-dc71c83183a6", + "sink_id": "4de3a835-ce7d-4503-af97-b947a31c3285", + "source_name": "value", + "sink_name": "text", + "is_static": true + }, + { + "id": "c9b30d0a-3a32-4a59-a381-e56f131b174a", + "source_id": "5b1f1d9f-8c55-4569-beec-dc71c83183a6", + "sink_id": "4de3a835-ce7d-4503-af97-b947a31c3285", + "source_name": "value", + "sink_name": "data", + "is_static": true + } + ], + "graph_id": "ef561358-b8a2-407f-935f-18b25975b00e", + "graph_version": 369, + "webhook_id": null, + "webhook": null + }, + { + "id": "58a4593a-44cc-4c68-85fa-af97815ac94e", + "block_id": "0e50422c-6dee-4145-83d6-3a5a392f65de", + "input_default": { + "key": "threadId" + }, + "metadata": { + "position": { + "x": 10229.8899989352, + "y": -3952.5560860773867 + } + }, + "input_links": [ + { + "id": "bd6efba6-419e-4317-a820-17ef2801b2c9", + "source_id": "423e8b8c-ac1a-4d6a-9c70-1c778645d938", + "sink_id": 
"58a4593a-44cc-4c68-85fa-af97815ac94e", + "source_name": "item", + "sink_name": "input", + "is_static": false + } + ], + "output_links": [ + { + "id": "2f38fd4d-4a91-4c0d-ae31-517e9fe8f5d1", + "source_id": "58a4593a-44cc-4c68-85fa-af97815ac94e", + "sink_id": "e69c4fcb-ce36-411a-a205-7a07bdd08413", + "source_name": "output", + "sink_name": "threadId", + "is_static": false + } + ], + "graph_id": "ef561358-b8a2-407f-935f-18b25975b00e", + "graph_version": 369, + "webhook_id": null, + "webhook": null + }, + { + "id": "e69c4fcb-ce36-411a-a205-7a07bdd08413", + "block_id": "12bf5a24-9b90-4f40-9090-4e86e6995e60", + "input_default": { + "cc": [], + "to": [], + "bcc": [], + "subject": "", + "replyAll": false, + "attachments": [], + "content_type": null + }, + "metadata": { + "position": { + "x": 17136.158668653152, + "y": -3233.1313109289067 + } + }, + "input_links": [ + { + "id": "2f38fd4d-4a91-4c0d-ae31-517e9fe8f5d1", + "source_id": "58a4593a-44cc-4c68-85fa-af97815ac94e", + "sink_id": "e69c4fcb-ce36-411a-a205-7a07bdd08413", + "source_name": "output", + "sink_name": "threadId", + "is_static": false + }, + { + "id": "e850b133-a458-46e2-886c-82743a1a8bcd", + "source_id": "68be5e28-53eb-46ef-a40c-1cdc9d1d3459", + "sink_id": "e69c4fcb-ce36-411a-a205-7a07bdd08413", + "source_name": "output", + "sink_name": "to", + "is_static": false + }, + { + "id": "40d7e87b-802e-49c8-9653-8cf9ccccd275", + "source_id": "5cd84e14-97f6-408f-bd9b-017f1301fed8", + "sink_id": "e69c4fcb-ce36-411a-a205-7a07bdd08413", + "source_name": "output", + "sink_name": "body", + "is_static": false + }, + { + "id": "2738b173-49ff-4db6-997e-0ac3d6cb6cac", + "source_id": "783c0eba-35e1-44d2-8734-ca398c65e82e", + "sink_id": "e69c4fcb-ce36-411a-a205-7a07bdd08413", + "source_name": "output", + "sink_name": "parentMessageId", + "is_static": false + } + ], + "output_links": [ + { + "id": "0cc2a91c-6330-483e-88ea-65936d9671e3", + "source_id": "e69c4fcb-ce36-411a-a205-7a07bdd08413", + "sink_id": "27bc4042-6f70-4f36-8834-e2bd639c6b4b", + "source_name": "error", + "sink_name": "value", + "is_static": false + } + ], + "graph_id": "ef561358-b8a2-407f-935f-18b25975b00e", + "graph_version": 369, + "webhook_id": null, + "webhook": null + }, + { + "id": "2bea668b-a6b5-467e-9bc6-787be7f21cf7", + "block_id": "64d2301c-b3f5-4174-8ac0-111ca1e1a7c0", + "input_default": { + "cc": [], + "to": [], + "bcc": [], + "subject": "", + "content_type": null, + "forwardMessage": "", + "includeAttachments": true, + "additionalAttachments": [] + }, + "metadata": { + "position": { + "x": 17336.98838098029, + "y": 675.8669215100489 + } + }, + "input_links": [ + { + "id": "ef1796d4-d291-4547-b74c-ad9d402c2a41", + "source_id": "f064e18f-2363-488b-ab47-dab1d85e55b7", + "sink_id": "2bea668b-a6b5-467e-9bc6-787be7f21cf7", + "source_name": "response", + "sink_name": "forwardMessage", + "is_static": false + }, + { + "id": "7eafdff8-3401-4ead-8d44-37bfc650e082", + "source_id": "783c0eba-35e1-44d2-8734-ca398c65e82e", + "sink_id": "2bea668b-a6b5-467e-9bc6-787be7f21cf7", + "source_name": "output", + "sink_name": "messageId", + "is_static": false + }, + { + "id": "65ce669e-8224-44de-b797-1be31b104170", + "source_id": "6b443996-b026-4422-bf7c-99d1174fd4fd", + "sink_id": "2bea668b-a6b5-467e-9bc6-787be7f21cf7", + "source_name": "result", + "sink_name": "to", + "is_static": true + } + ], + "output_links": [ + { + "id": "da802c48-c8bd-4452-814d-89c3cafa64c0", + "source_id": "2bea668b-a6b5-467e-9bc6-787be7f21cf7", + "sink_id": "f3681bf4-67ca-418f-8185-50818078d8d1", + "source_name": "error", + 
"sink_name": "value", + "is_static": false + } + ], + "graph_id": "ef561358-b8a2-407f-935f-18b25975b00e", + "graph_version": 369, + "webhook_id": null, + "webhook": null + }, + { + "id": "df31ccd5-ebc2-48bb-8cd9-02fb15847a25", + "block_id": "3060088f-6ed9-4928-9ba7-9c92823a7ccd", + "input_default": { + "match": "^app", + "dot_all": false, + "case_sensitive": true + }, + "metadata": { + "position": { + "x": -2579.875244856158, + "y": -3955.522593643197 + } + }, + "input_links": [ + { + "id": "c07cbe40-08c0-4c71-b352-3c3002477d3b", + "source_id": "eee15323-2b62-42f9-8dc0-3cfaa4e191b7", + "sink_id": "df31ccd5-ebc2-48bb-8cd9-02fb15847a25", + "source_name": "value", + "sink_name": "text", + "is_static": true + }, + { + "id": "d79ae92e-092b-4c81-b007-5cccee7db054", + "source_id": "eee15323-2b62-42f9-8dc0-3cfaa4e191b7", + "sink_id": "df31ccd5-ebc2-48bb-8cd9-02fb15847a25", + "source_name": "value", + "sink_name": "data", + "is_static": true + } + ], + "output_links": [ + { + "id": "91a93af1-7663-4560-8fcb-7cd41eb4f312", + "source_id": "df31ccd5-ebc2-48bb-8cd9-02fb15847a25", + "sink_id": "8af604e1-68d7-416f-84fe-41b1ffc3749e", + "source_name": "positive", + "sink_name": "base_id", + "is_static": false + } + ], + "graph_id": "ef561358-b8a2-407f-935f-18b25975b00e", + "graph_version": 369, + "webhook_id": null, + "webhook": null + }, + { + "id": "4de3a835-ce7d-4503-af97-b947a31c3285", + "block_id": "3060088f-6ed9-4928-9ba7-9c92823a7ccd", + "input_default": { + "match": "^tbl", + "dot_all": false, + "case_sensitive": true + }, + "metadata": { + "position": { + "x": -2620.9001532453476, + "y": -2868.9564795486635 + } + }, + "input_links": [ + { + "id": "672bf51d-9db4-4bde-a189-60ac948b58f6", + "source_id": "5b1f1d9f-8c55-4569-beec-dc71c83183a6", + "sink_id": "4de3a835-ce7d-4503-af97-b947a31c3285", + "source_name": "value", + "sink_name": "text", + "is_static": true + }, + { + "id": "c9b30d0a-3a32-4a59-a381-e56f131b174a", + "source_id": "5b1f1d9f-8c55-4569-beec-dc71c83183a6", + "sink_id": "4de3a835-ce7d-4503-af97-b947a31c3285", + "source_name": "value", + "sink_name": "data", + "is_static": true + } + ], + "output_links": [ + { + "id": "4ea3f29f-51fa-4435-9807-35e665d8728f", + "source_id": "4de3a835-ce7d-4503-af97-b947a31c3285", + "sink_id": "8af604e1-68d7-416f-84fe-41b1ffc3749e", + "source_name": "positive", + "sink_name": "table_id_or_name", + "is_static": false + } + ], + "graph_id": "ef561358-b8a2-407f-935f-18b25975b00e", + "graph_version": 369, + "webhook_id": null, + "webhook": null + }, + { + "id": "47679fdf-d4fd-42b8-9d28-548ddddc496e", + "block_id": "1f292d4a-41a4-4977-9684-7c8d560b9f91", + "input_default": { + "model": "claude-sonnet-4-20250514", + "retry": 3, + "prompt": "Format this object in nice, human readable markdown, without changing any of the content at all.\n\n```\n{{data | safe}}\n```\n\nRules:\n- Do not include the \"## Basic Information\" section.\n- Convert all timestamps to human readable formats\n- Rename the \"Processing information\" Timestamp to \"Reply Sent:\"\n- Rename \"Email ID\" to \"Airtable Email ID\"\n\nMost importantly:\nRespond with just the markdown, with no additional commentary, formatting or decoration.", + "sys_prompt": "", + "ollama_host": "localhost:11434", + "prompt_values": {} + }, + "metadata": { + "position": { + "x": 24453.96538518012, + "y": -1148.4255266128644 + } + }, + "input_links": [ + { + "id": "5d73f06f-4bbc-4e3f-b61b-1e0656aa73a5", + "source_id": "87521271-8efa-4177-a320-4ef68572fafe", + "sink_id": "47679fdf-d4fd-42b8-9d28-548ddddc496e", + 
"source_name": "value", + "sink_name": "prompt_values_#_data", + "is_static": false + } + ], + "output_links": [ + { + "id": "fdea35df-93c6-4b20-8672-a5c2e67d0a84", + "source_id": "47679fdf-d4fd-42b8-9d28-548ddddc496e", + "sink_id": "aa470a71-9f1b-49ce-8f20-4b1d34f6b093", + "source_name": "response", + "sink_name": "value", + "is_static": false + } + ], + "graph_id": "ef561358-b8a2-407f-935f-18b25975b00e", + "graph_version": 369, + "webhook_id": null, + "webhook": null + }, + { + "id": "87521271-8efa-4177-a320-4ef68572fafe", + "block_id": "95d1b990-ce13-4d88-9737-ba5c2070c97b", + "input_default": { + "type": "string" + }, + "metadata": { + "position": { + "x": 23895.90419815295, + "y": -1147.808785662202 + } + }, + "input_links": [ + { + "id": "5643f208-2f9b-40c2-8aac-e09445071b7e", + "source_id": "8af604e1-68d7-416f-84fe-41b1ffc3749e", + "sink_id": "87521271-8efa-4177-a320-4ef68572fafe", + "source_name": "records", + "sink_name": "value", + "is_static": false + }, + { + "id": "dc116bb0-1d68-4b3a-aa46-8826f5d36efa", + "source_id": "8af604e1-68d7-416f-84fe-41b1ffc3749e", + "sink_id": "87521271-8efa-4177-a320-4ef68572fafe", + "source_name": "details", + "sink_name": "value", + "is_static": false + } + ], + "output_links": [ + { + "id": "5d73f06f-4bbc-4e3f-b61b-1e0656aa73a5", + "source_id": "87521271-8efa-4177-a320-4ef68572fafe", + "sink_id": "47679fdf-d4fd-42b8-9d28-548ddddc496e", + "source_name": "value", + "sink_name": "prompt_values_#_data", + "is_static": false + } + ], + "graph_id": "ef561358-b8a2-407f-935f-18b25975b00e", + "graph_version": 369, + "webhook_id": null, + "webhook": null + }, + { + "id": "21fa5920-eff4-4641-a598-0f90f73dce61", + "block_id": "e189baac-8c20-45a1-94a7-55177ea42565", + "input_default": { + "inputs": {}, + "user_id": "b3e41ea4-2f4c-4964-927c-fe682d857bad", + "graph_id": "5981ea5d-f57f-42f6-8b5c-e09ce7e1b3c0", + "input_schema": { + "type": "object", + "required": [ + "Documentation URL", + "Question", + "Confidence Threshold" + ], + "properties": { + "Question": { + "anyOf": [ + { + "type": "string", + "format": "long-text" + }, + { + "type": "null" + } + ], + "title": "Question", + "advanced": false + }, + "Documentation URL": { + "anyOf": [ + { + "type": "string", + "format": "short-text" + }, + { + "type": "null" + } + ], + "title": "Documentation URL", + "advanced": false + }, + "Confidence Threshold": { + "anyOf": [ + { + "type": "integer" + }, + { + "type": "null" + } + ], + "title": "Auto-Reply Confidence Threshold", + "advanced": false, + "description": "Minimum confidence score (0-100) required for automatic responses" + } + } + }, + "graph_version": 38, + "output_schema": { + "type": "object", + "required": [ + "ERROR", + "Answer" + ], + "properties": { + "ERROR": { + "title": "ERROR", + "advanced": false + }, + "Answer": { + "title": "Answer", + "advanced": false + } + } + } + }, + "metadata": { + "position": { + "x": 7230.977824744452, + "y": 1565.234385573906 + } + }, + "input_links": [ + { + "id": "d7e9f888-f634-47f4-8142-33855d2dc32b", + "source_id": "cc7c1334-f7eb-43c1-8ac2-15a84cadd2d7", + "sink_id": "21fa5920-eff4-4641-a598-0f90f73dce61", + "source_name": "output", + "sink_name": "Question", + "is_static": false + }, + { + "id": "8daee585-cde1-476c-a649-76e376582026", + "source_id": "d28447e7-5d86-4a02-8dec-2ee625426591", + "sink_id": "21fa5920-eff4-4641-a598-0f90f73dce61", + "source_name": "result", + "sink_name": "Documentation URL", + "is_static": true + }, + { + "id": "e4eea37c-032e-45af-83b2-58ded4b6b0e2", + "source_id": 
"3ce011b7-40b5-4bb0-880b-203e52b23a3c", + "sink_id": "21fa5920-eff4-4641-a598-0f90f73dce61", + "source_name": "result", + "sink_name": "Confidence Threshold", + "is_static": true + } + ], + "output_links": [ + { + "id": "61c54e69-2605-4379-874b-ef630a97c3e9", + "source_id": "21fa5920-eff4-4641-a598-0f90f73dce61", + "sink_id": "3ecbe89e-cb1e-497f-9056-75d6e082d414", + "source_name": "ERROR", + "sink_name": "value", + "is_static": false + }, + { + "id": "a7864344-1be9-4ee2-88a8-575bbd851643", + "source_id": "21fa5920-eff4-4641-a598-0f90f73dce61", + "sink_id": "6b2e19b1-5bb8-49bc-939c-8220d17e9dee", + "source_name": "Answer", + "sink_name": "prompt_values_#_relevant_information", + "is_static": false + } + ], + "graph_id": "ef561358-b8a2-407f-935f-18b25975b00e", + "graph_version": 369, + "webhook_id": null, + "webhook": null + } + ], + "links": [ + { + "id": "8b58d74a-9028-4257-864b-b5e1ef137541", + "source_id": "423e8b8c-ac1a-4d6a-9c70-1c778645d938", + "sink_id": "62628a43-7b56-4746-b996-9f9aecda8aa5", + "source_name": "item", + "sink_name": "input", + "is_static": false + }, + { + "id": "c6c7095f-2953-4550-bdad-1c899b95a68c", + "source_id": "423e8b8c-ac1a-4d6a-9c70-1c778645d938", + "sink_id": "783c0eba-35e1-44d2-8734-ca398c65e82e", + "source_name": "item", + "sink_name": "input", + "is_static": false + }, + { + "id": "d662d709-54bd-4c4d-ac8b-8a5e24c8eb65", + "source_id": "423e8b8c-ac1a-4d6a-9c70-1c778645d938", + "sink_id": "68be5e28-53eb-46ef-a40c-1cdc9d1d3459", + "source_name": "item", + "sink_name": "input", + "is_static": false + }, + { + "id": "9d40b3f6-529a-47ba-ae66-eddfd76456c1", + "source_id": "cc7c1334-f7eb-43c1-8ac2-15a84cadd2d7", + "sink_id": "f064e18f-2363-488b-ab47-dab1d85e55b7", + "source_name": "output", + "sink_name": "prompt_values_#_SUPPORT_EMAIL", + "is_static": false + }, + { + "id": "2738b173-49ff-4db6-997e-0ac3d6cb6cac", + "source_id": "783c0eba-35e1-44d2-8734-ca398c65e82e", + "sink_id": "e69c4fcb-ce36-411a-a205-7a07bdd08413", + "source_name": "output", + "sink_name": "parentMessageId", + "is_static": false + }, + { + "id": "4f17205a-0241-4f16-aeb3-fae33cf92fe9", + "source_id": "881de375-cc67-4776-8323-80b32a4e3e20", + "sink_id": "423e8b8c-ac1a-4d6a-9c70-1c778645d938", + "source_name": "yes_output", + "sink_name": "list", + "is_static": false + }, + { + "id": "13cc3a42-f363-4e86-92e7-b33c4afc00e1", + "source_id": "11ff0653-85cb-4096-a4ee-70e4326d3579", + "sink_id": "8af604e1-68d7-416f-84fe-41b1ffc3749e", + "source_name": "value", + "sink_name": "table_id_or_name", + "is_static": false + }, + { + "id": "c71a0e91-6429-4c30-ab69-7ee393f761f3", + "source_id": "a04f4e4b-c45d-4e43-a35a-092e0ab8df95", + "sink_id": "40597163-b473-4e62-9eb1-e633cf98c774", + "source_name": "yes_output", + "sink_name": "input", + "is_static": false + }, + { + "id": "7e82bed8-34e4-46ee-bc7e-61b731cbffda", + "source_id": "b30555a2-d3d9-47cb-9174-6197591625a0", + "sink_id": "04d04750-c575-4252-a7c5-4732ffb1ab11", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "e850b133-a458-46e2-886c-82743a1a8bcd", + "source_id": "68be5e28-53eb-46ef-a40c-1cdc9d1d3459", + "sink_id": "e69c4fcb-ce36-411a-a205-7a07bdd08413", + "source_name": "output", + "sink_name": "to", + "is_static": false + }, + { + "id": "29630e00-3eeb-4240-860b-3f0531ac57b6", + "source_id": "40597163-b473-4e62-9eb1-e633cf98c774", + "sink_id": "e061a66b-3e5f-475f-bfa3-ba364f662f7c", + "source_name": "output", + "sink_name": "name", + "is_static": true + }, + { + "id": "d79ae92e-092b-4c81-b007-5cccee7db054", 
+ "source_id": "eee15323-2b62-42f9-8dc0-3cfaa4e191b7", + "sink_id": "df31ccd5-ebc2-48bb-8cd9-02fb15847a25", + "source_name": "value", + "sink_name": "data", + "is_static": true + }, + { + "id": "8c7b0442-ce83-44cf-891b-43e71637a320", + "source_id": "f064e18f-2363-488b-ab47-dab1d85e55b7", + "sink_id": "f3681bf4-67ca-418f-8185-50818078d8d1", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "4ea3f29f-51fa-4435-9807-35e665d8728f", + "source_id": "4de3a835-ce7d-4503-af97-b947a31c3285", + "sink_id": "8af604e1-68d7-416f-84fe-41b1ffc3749e", + "source_name": "positive", + "sink_name": "table_id_or_name", + "is_static": false + }, + { + "id": "e45f100c-c364-4f36-a235-bd8f90aa4530", + "source_id": "e48f04c5-d6b9-442a-aba1-8258f9aee729", + "sink_id": "8af604e1-68d7-416f-84fe-41b1ffc3749e", + "source_name": "value", + "sink_name": "base_id", + "is_static": false + }, + { + "id": "5a5b9d6b-73c0-4bb0-a953-6502c54c4684", + "source_id": "72a1d15e-7711-4c82-bdae-b2fd243563ee", + "sink_id": "04dced50-40cf-40f6-bf87-6e3c1ddb746a", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "79555ba8-df15-4927-84e7-81edef6a66df", + "source_id": "cc7c1334-f7eb-43c1-8ac2-15a84cadd2d7", + "sink_id": "6b2e19b1-5bb8-49bc-939c-8220d17e9dee", + "source_name": "output", + "sink_name": "prompt_values_#_customer_email_body", + "is_static": false + }, + { + "id": "56fe28df-b2b8-4f3b-8528-90d5f16c6069", + "source_id": "3ce011b7-40b5-4bb0-880b-203e52b23a3c", + "sink_id": "6b2e19b1-5bb8-49bc-939c-8220d17e9dee", + "source_name": "result", + "sink_name": "prompt_values_#_confidence_threshold", + "is_static": true + }, + { + "id": "494f9e25-9d86-4635-b395-ee948ab2751e", + "source_id": "6b2e19b1-5bb8-49bc-939c-8220d17e9dee", + "sink_id": "64efc204-0d77-4f50-85f5-4e80e7bb9940", + "source_name": "response", + "sink_name": "input", + "is_static": false + }, + { + "id": "91a93af1-7663-4560-8fcb-7cd41eb4f312", + "source_id": "df31ccd5-ebc2-48bb-8cd9-02fb15847a25", + "sink_id": "8af604e1-68d7-416f-84fe-41b1ffc3749e", + "source_name": "positive", + "sink_name": "base_id", + "is_static": false + }, + { + "id": "cf384d57-86bf-4489-b341-bb9150eff615", + "source_id": "c5a1791f-92c0-4dbd-8c6c-732db2be6225", + "sink_id": "fc75b556-8fab-424c-a4d5-e9755f658d73", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "fcdf10ff-97cb-4277-aa34-25e52f8b03f9", + "source_id": "423e8b8c-ac1a-4d6a-9c70-1c778645d938", + "sink_id": "ef730afa-4de7-47d9-bcb7-11af07f2088b", + "source_name": "item", + "sink_name": "input", + "is_static": false + }, + { + "id": "f9fb44ad-ce32-421a-a717-8e6cf4c894da", + "source_id": "4b4c3539-4051-42d3-8777-2d1dc5ac6897", + "sink_id": "0ba27931-43c0-4cc2-952c-14394fc0a10d", + "source_name": "value", + "sink_name": "values_#_Confidence Score", + "is_static": false + }, + { + "id": "f8b6e637-a7f1-4933-9f70-3f4d980187bc", + "source_id": "2c22b40e-db53-4a91-bb76-9a45e427ada1", + "sink_id": "0ba27931-43c0-4cc2-952c-14394fc0a10d", + "source_name": "yes_output", + "sink_name": "values_#_Status", + "is_static": false + }, + { + "id": "2f38fd4d-4a91-4c0d-ae31-517e9fe8f5d1", + "source_id": "58a4593a-44cc-4c68-85fa-af97815ac94e", + "sink_id": "e69c4fcb-ce36-411a-a205-7a07bdd08413", + "source_name": "output", + "sink_name": "threadId", + "is_static": false + }, + { + "id": "48b7ebe9-c3d9-4dd1-a70e-9b6043e0a14e", + "source_id": "6b2e19b1-5bb8-49bc-939c-8220d17e9dee", + "sink_id": "19757868-f240-4e39-b484-fe3f84d3d237", + "source_name": 
"response", + "sink_name": "no_value", + "is_static": false + }, + { + "id": "6309d780-4b93-42cd-a163-07aaf89e63d9", + "source_id": "0ba27931-43c0-4cc2-952c-14394fc0a10d", + "sink_id": "541121c0-1631-4acf-8e1e-5c61a8051c0f", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "dc116bb0-1d68-4b3a-aa46-8826f5d36efa", + "source_id": "8af604e1-68d7-416f-84fe-41b1ffc3749e", + "sink_id": "87521271-8efa-4177-a320-4ef68572fafe", + "source_name": "details", + "sink_name": "value", + "is_static": false + }, + { + "id": "de9c6cc7-3e9c-4ef1-9d0c-67dcc34ce309", + "source_id": "3ce011b7-40b5-4bb0-880b-203e52b23a3c", + "sink_id": "e6390819-6c64-49e4-a414-7969d05c099e", + "source_name": "result", + "sink_name": "value", + "is_static": true + }, + { + "id": "a7864344-1be9-4ee2-88a8-575bbd851643", + "source_id": "21fa5920-eff4-4641-a598-0f90f73dce61", + "sink_id": "6b2e19b1-5bb8-49bc-939c-8220d17e9dee", + "source_name": "Answer", + "sink_name": "prompt_values_#_relevant_information", + "is_static": false + }, + { + "id": "bf5877d9-85bc-48d6-a6a5-2337ed938837", + "source_id": "0ba27931-43c0-4cc2-952c-14394fc0a10d", + "sink_id": "916095c0-99f2-4778-b988-a15e1857456e", + "source_name": "dictionary", + "sink_name": "trigger", + "is_static": false + }, + { + "id": "afbaed62-c6e9-4876-a927-081a9a076215", + "source_id": "a67dde5f-182e-47e4-a811-a03cd7383f92", + "sink_id": "19757868-f240-4e39-b484-fe3f84d3d237", + "source_name": "output", + "sink_name": "value1", + "is_static": false + }, + { + "id": "2a273dc3-041d-44bd-babe-4131a55fccc6", + "source_id": "6b2e19b1-5bb8-49bc-939c-8220d17e9dee", + "sink_id": "a67dde5f-182e-47e4-a811-a03cd7383f92", + "source_name": "response", + "sink_name": "input", + "is_static": false + }, + { + "id": "313227ff-5500-4dee-9621-3f1858979029", + "source_id": "19757868-f240-4e39-b484-fe3f84d3d237", + "sink_id": "b30555a2-d3d9-47cb-9174-6197591625a0", + "source_name": "result", + "sink_name": "value", + "is_static": false + }, + { + "id": "c07cbe40-08c0-4c71-b352-3c3002477d3b", + "source_id": "eee15323-2b62-42f9-8dc0-3cfaa4e191b7", + "sink_id": "df31ccd5-ebc2-48bb-8cd9-02fb15847a25", + "source_name": "value", + "sink_name": "text", + "is_static": true + }, + { + "id": "5d73f06f-4bbc-4e3f-b61b-1e0656aa73a5", + "source_id": "87521271-8efa-4177-a320-4ef68572fafe", + "sink_id": "47679fdf-d4fd-42b8-9d28-548ddddc496e", + "source_name": "value", + "sink_name": "prompt_values_#_data", + "is_static": false + }, + { + "id": "2dcdbdb1-8c60-4b1c-bcc8-c69f159c0126", + "source_id": "6b2e19b1-5bb8-49bc-939c-8220d17e9dee", + "sink_id": "88269a97-5378-4ea7-9a5b-45878a290752", + "source_name": "response", + "sink_name": "input", + "is_static": false + }, + { + "id": "8176c88c-d42f-4266-9749-e28de468e07d", + "source_id": "6b2e19b1-5bb8-49bc-939c-8220d17e9dee", + "sink_id": "98cd104e-fa1a-421f-9572-b4234ea55416", + "source_name": "response", + "sink_name": "input", + "is_static": false + }, + { + "id": "055d3b60-1956-4185-8b5c-a09d375b81c7", + "source_id": "881de375-cc67-4776-8323-80b32a4e3e20", + "sink_id": "1d9be004-1d01-446d-b692-09bd20c74d08", + "source_name": "no_output", + "sink_name": "value", + "is_static": false + }, + { + "id": "c9b30d0a-3a32-4a59-a381-e56f131b174a", + "source_id": "5b1f1d9f-8c55-4569-beec-dc71c83183a6", + "sink_id": "4de3a835-ce7d-4503-af97-b947a31c3285", + "source_name": "value", + "sink_name": "data", + "is_static": true + }, + { + "id": "5643f208-2f9b-40c2-8aac-e09445071b7e", + "source_id": "8af604e1-68d7-416f-84fe-41b1ffc3749e", + 
"sink_id": "87521271-8efa-4177-a320-4ef68572fafe", + "source_name": "records", + "sink_name": "value", + "is_static": false + }, + { + "id": "c47b1fae-8a85-491f-aaae-bc7be41f0b7a", + "source_id": "10ad5631-6678-4912-93ee-78f9c118c695", + "sink_id": "881de375-cc67-4776-8323-80b32a4e3e20", + "source_name": "is_empty", + "sink_name": "value1", + "is_static": false + }, + { + "id": "0cc2a91c-6330-483e-88ea-65936d9671e3", + "source_id": "e69c4fcb-ce36-411a-a205-7a07bdd08413", + "sink_id": "27bc4042-6f70-4f36-8834-e2bd639c6b4b", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "1e5c56ed-58ca-4346-93ae-62520ca7acef", + "source_id": "81c7afae-69bd-4916-9024-a0c6da636d5d", + "sink_id": "551eba5c-6f98-437a-a6bd-06fb74af9bf5", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "245482e9-6c54-4cd5-8e18-d7f20e68901a", + "source_id": "cd060293-584c-4265-bc24-afd141ac57a3", + "sink_id": "11ff0653-85cb-4096-a4ee-70e4326d3579", + "source_name": "output", + "sink_name": "value", + "is_static": false + }, + { + "id": "e4eea37c-032e-45af-83b2-58ded4b6b0e2", + "source_id": "3ce011b7-40b5-4bb0-880b-203e52b23a3c", + "sink_id": "21fa5920-eff4-4641-a598-0f90f73dce61", + "source_name": "result", + "sink_name": "Confidence Threshold", + "is_static": true + }, + { + "id": "d1dc9a21-6804-447d-a8ca-652bbbf32f2a", + "source_id": "64efc204-0d77-4f50-85f5-4e80e7bb9940", + "sink_id": "4b4c3539-4051-42d3-8777-2d1dc5ac6897", + "source_name": "output", + "sink_name": "value", + "is_static": false + }, + { + "id": "a090a603-1fcf-4e17-ba2f-f1affcfc1c34", + "source_id": "72a1d15e-7711-4c82-bdae-b2fd243563ee", + "sink_id": "8af604e1-68d7-416f-84fe-41b1ffc3749e", + "source_name": "updated_list", + "sink_name": "records", + "is_static": false + }, + { + "id": "aeff641a-5a4d-4839-9971-8b60f408136c", + "source_id": "b30555a2-d3d9-47cb-9174-6197591625a0", + "sink_id": "2c22b40e-db53-4a91-bb76-9a45e427ada1", + "source_name": "value", + "sink_name": "value1", + "is_static": false + }, + { + "id": "4b6076ee-f741-4aff-908f-83bf0013c3a8", + "source_id": "ee8e3c4a-92f4-4c69-b3ec-9c7405f33d22", + "sink_id": "e061a66b-3e5f-475f-bfa3-ba364f662f7c", + "source_name": "result", + "sink_name": "workspace_id", + "is_static": true + }, + { + "id": "884103aa-a984-4715-a0f2-ab93bf641aa1", + "source_id": "b69fd82c-f602-4655-b208-e3c9785bc37f", + "sink_id": "5cd84e14-97f6-408f-bd9b-017f1301fed8", + "source_name": "result", + "sink_name": "input_$_1", + "is_static": true + }, + { + "id": "fdea35df-93c6-4b20-8672-a5c2e67d0a84", + "source_id": "47679fdf-d4fd-42b8-9d28-548ddddc496e", + "sink_id": "aa470a71-9f1b-49ce-8f20-4b1d34f6b093", + "source_name": "response", + "sink_name": "value", + "is_static": false + }, + { + "id": "61c54e69-2605-4379-874b-ef630a97c3e9", + "source_id": "21fa5920-eff4-4641-a598-0f90f73dce61", + "sink_id": "3ecbe89e-cb1e-497f-9056-75d6e082d414", + "source_name": "ERROR", + "sink_name": "value", + "is_static": false + }, + { + "id": "bd6efba6-419e-4317-a820-17ef2801b2c9", + "source_id": "423e8b8c-ac1a-4d6a-9c70-1c778645d938", + "sink_id": "58a4593a-44cc-4c68-85fa-af97815ac94e", + "source_name": "item", + "sink_name": "input", + "is_static": false + }, + { + "id": "ad3963d4-05b4-41d7-924b-176216497fa6", + "source_id": "2c22b40e-db53-4a91-bb76-9a45e427ada1", + "sink_id": "0ba27931-43c0-4cc2-952c-14394fc0a10d", + "source_name": "no_output", + "sink_name": "values_#_Status", + "is_static": false + }, + { + "id": "672bf51d-9db4-4bde-a189-60ac948b58f6", + 
"source_id": "5b1f1d9f-8c55-4569-beec-dc71c83183a6", + "sink_id": "4de3a835-ce7d-4503-af97-b947a31c3285", + "source_name": "value", + "sink_name": "text", + "is_static": true + }, + { + "id": "1c72921e-805d-4edb-9a47-239609ecf3da", + "source_id": "88269a97-5378-4ea7-9a5b-45878a290752", + "sink_id": "19757868-f240-4e39-b484-fe3f84d3d237", + "source_name": "output", + "sink_name": "yes_value", + "is_static": false + }, + { + "id": "e8ae929b-92eb-43aa-a913-b436ac938bf9", + "source_id": "783c0eba-35e1-44d2-8734-ca398c65e82e", + "sink_id": "6516c017-1d49-466f-bdb4-002802ae13f9", + "source_name": "output", + "sink_name": "message_id", + "is_static": false + }, + { + "id": "cd6f5657-085c-40dc-89f8-4cb849692017", + "source_id": "68be5e28-53eb-46ef-a40c-1cdc9d1d3459", + "sink_id": "0ba27931-43c0-4cc2-952c-14394fc0a10d", + "source_name": "output", + "sink_name": "values_#_Sender", + "is_static": false + }, + { + "id": "65ce669e-8224-44de-b797-1be31b104170", + "source_id": "6b443996-b026-4422-bf7c-99d1174fd4fd", + "sink_id": "2bea668b-a6b5-467e-9bc6-787be7f21cf7", + "source_name": "result", + "sink_name": "to", + "is_static": true + }, + { + "id": "da802c48-c8bd-4452-814d-89c3cafa64c0", + "source_id": "2bea668b-a6b5-467e-9bc6-787be7f21cf7", + "sink_id": "f3681bf4-67ca-418f-8185-50818078d8d1", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "7eafdff8-3401-4ead-8d44-37bfc650e082", + "source_id": "783c0eba-35e1-44d2-8734-ca398c65e82e", + "sink_id": "2bea668b-a6b5-467e-9bc6-787be7f21cf7", + "source_name": "output", + "sink_name": "messageId", + "is_static": false + }, + { + "id": "174a6e58-b69e-448a-8d00-d0e763e58cbb", + "source_id": "98cd104e-fa1a-421f-9572-b4234ea55416", + "sink_id": "0ba27931-43c0-4cc2-952c-14394fc0a10d", + "source_name": "output", + "sink_name": "values_#_Reasoning", + "is_static": false + }, + { + "id": "e9ca04e6-ac35-415a-9d0e-f9710258e320", + "source_id": "0ba27931-43c0-4cc2-952c-14394fc0a10d", + "sink_id": "81c7afae-69bd-4916-9024-a0c6da636d5d", + "source_name": "dictionary", + "sink_name": "dictionary", + "is_static": false + }, + { + "id": "a4e9ac5e-ddc2-4260-ae2f-9740575857ba", + "source_id": "ef730afa-4de7-47d9-bcb7-11af07f2088b", + "sink_id": "0ba27931-43c0-4cc2-952c-14394fc0a10d", + "source_name": "output", + "sink_name": "values_#_Email Received", + "is_static": false + }, + { + "id": "fd4efd81-1307-4ff0-8f95-b5ab56a123c0", + "source_id": "c5a1791f-92c0-4dbd-8c6c-732db2be6225", + "sink_id": "10ad5631-6678-4912-93ee-78f9c118c695", + "source_name": "emails", + "sink_name": "list", + "is_static": false + }, + { + "id": "99e13790-2ac2-420d-97aa-00cfef148fcd", + "source_id": "783c0eba-35e1-44d2-8734-ca398c65e82e", + "sink_id": "0ba27931-43c0-4cc2-952c-14394fc0a10d", + "source_name": "output", + "sink_name": "values_#_Email ID", + "is_static": false + }, + { + "id": "30925949-b897-4ace-a9f8-9cb5cb792ec6", + "source_id": "916095c0-99f2-4778-b988-a15e1857456e", + "sink_id": "81c7afae-69bd-4916-9024-a0c6da636d5d", + "source_name": "date_time", + "sink_name": "value", + "is_static": false + }, + { + "id": "e6475fc2-a119-483a-9f39-952c97c8e0f5", + "source_id": "eee15323-2b62-42f9-8dc0-3cfaa4e191b7", + "sink_id": "a04f4e4b-c45d-4e43-a35a-092e0ab8df95", + "source_name": "value", + "sink_name": "value1", + "is_static": true + }, + { + "id": "40d7e87b-802e-49c8-9653-8cf9ccccd275", + "source_id": "5cd84e14-97f6-408f-bd9b-017f1301fed8", + "sink_id": "e69c4fcb-ce36-411a-a205-7a07bdd08413", + "source_name": "output", + "sink_name": "body", + 
"is_static": false + }, + { + "id": "2a92a7b0-3fb4-4a77-9e60-57f5885c9968", + "source_id": "e061a66b-3e5f-475f-bfa3-ba364f662f7c", + "sink_id": "cd060293-584c-4265-bc24-afd141ac57a3", + "source_name": "table", + "sink_name": "input", + "is_static": false + }, + { + "id": "093f5c0b-79a7-435f-9a49-d742df68da6f", + "source_id": "81c7afae-69bd-4916-9024-a0c6da636d5d", + "sink_id": "72a1d15e-7711-4c82-bdae-b2fd243563ee", + "source_name": "updated_dictionary", + "sink_name": "list_$_0", + "is_static": false + }, + { + "id": "4ec94486-f5ee-4481-b5e9-393a721b8d61", + "source_id": "e061a66b-3e5f-475f-bfa3-ba364f662f7c", + "sink_id": "e48f04c5-d6b9-442a-aba1-8258f9aee729", + "source_name": "base_id", + "sink_name": "value", + "is_static": false + }, + { + "id": "2c7a281e-57d9-46a7-9823-fc76b37b5cd4", + "source_id": "19757868-f240-4e39-b484-fe3f84d3d237", + "sink_id": "f064e18f-2363-488b-ab47-dab1d85e55b7", + "source_name": "no_output", + "sink_name": "prompt_values_#_AI_ANALYSIS", + "is_static": false + }, + { + "id": "d7e9f888-f634-47f4-8142-33855d2dc32b", + "source_id": "cc7c1334-f7eb-43c1-8ac2-15a84cadd2d7", + "sink_id": "21fa5920-eff4-4641-a598-0f90f73dce61", + "source_name": "output", + "sink_name": "Question", + "is_static": false + }, + { + "id": "9810c154-dda6-4e84-85c5-b432251e2c38", + "source_id": "19757868-f240-4e39-b484-fe3f84d3d237", + "sink_id": "5cd84e14-97f6-408f-bd9b-017f1301fed8", + "source_name": "yes_output", + "sink_name": "input_$_0", + "is_static": false + }, + { + "id": "8daee585-cde1-476c-a649-76e376582026", + "source_id": "d28447e7-5d86-4a02-8dec-2ee625426591", + "sink_id": "21fa5920-eff4-4641-a598-0f90f73dce61", + "source_name": "result", + "sink_name": "Documentation URL", + "is_static": true + }, + { + "id": "ef1796d4-d291-4547-b74c-ad9d402c2a41", + "source_id": "f064e18f-2363-488b-ab47-dab1d85e55b7", + "sink_id": "2bea668b-a6b5-467e-9bc6-787be7f21cf7", + "source_name": "response", + "sink_name": "forwardMessage", + "is_static": false + }, + { + "id": "f9e1a8ca-f81b-46ea-9fc8-6b9adcc95c6f", + "source_id": "c5a1791f-92c0-4dbd-8c6c-732db2be6225", + "sink_id": "881de375-cc67-4776-8323-80b32a4e3e20", + "source_name": "emails", + "sink_name": "yes_value", + "is_static": false + }, + { + "id": "d3405fa9-d81f-4be2-925c-2739863bd2c8", + "source_id": "e6390819-6c64-49e4-a414-7969d05c099e", + "sink_id": "0ba27931-43c0-4cc2-952c-14394fc0a10d", + "source_name": "value", + "sink_name": "values_#_Confidence Threshold", + "is_static": false + }, + { + "id": "4774f0a4-63b4-4bfe-9be0-b0b434f9301c", + "source_id": "423e8b8c-ac1a-4d6a-9c70-1c778645d938", + "sink_id": "cc7c1334-f7eb-43c1-8ac2-15a84cadd2d7", + "source_name": "item", + "sink_name": "input", + "is_static": false + }, + { + "id": "ab6e1ed4-a063-4b51-8387-962485e8d025", + "source_id": "b014ffc4-8b61-4893-a6fd-1b8ee26a3417", + "sink_id": "881de375-cc67-4776-8323-80b32a4e3e20", + "source_name": "value", + "sink_name": "value2", + "is_static": false + }, + { + "id": "a9b03ff1-e567-4622-892d-a9003f2269a7", + "source_id": "dc400c79-dc17-4bac-81ae-cdcbcc69d90b", + "sink_id": "6b2e19b1-5bb8-49bc-939c-8220d17e9dee", + "source_name": "result", + "sink_name": "prompt_values_#_company_tone", + "is_static": true + }, + { + "id": "0c90d29b-158d-4806-b1d4-f65dc449b3d0", + "source_id": "62628a43-7b56-4746-b996-9f9aecda8aa5", + "sink_id": "0ba27931-43c0-4cc2-952c-14394fc0a10d", + "source_name": "output", + "sink_name": "values_#_Subject", + "is_static": false + } + ], + "forked_from_id": null, + "forked_from_version": null, + "sub_graphs": [ + 
{ + "id": "6e0b8d36-3e03-4292-8b2d-9202e2d0c5e5", + "version": 38, + "is_active": true, + "name": "Product Knowledge Agent", + "description": "Overview\nFinding the right detail in technical documentation and FAQs can be overwhelming. This agent solves that problem by exploring product manuals and guides on your behalf, giving you quick, accurate answers every time.\n\nHow It Works\n1. You ask a product-related question.\n2. The agent reviews docs,\u00a0 manuals, guides, and FAQs.\n3. It verifies accuracy and compiles the best answer.\n4. You receive a concise explanation with direct references to the docs.\n\n\nBusiness Value\n Accelerate onboarding, reduce support wait times, and empower teams with instant knowledge. By automating product Q&A, companies can improve customer satisfaction by cutting down on repetitive support tickets.", + "instructions": null, + "recommended_schedule_cron": null, + "nodes": [ + { + "id": "263a6854-5aed-4b9d-9d18-41b25e5002d0", + "block_id": "7fcd3bcb-8e1b-4e69-903d-32d3d4a92158", + "input_default": { + "name": "Documentation URL", + "advanced": false, + "placeholder_values": [] + }, + "metadata": { + "position": { + "x": 48.620022798759834, + "y": 214.4951509877074 + } + }, + "input_links": [], + "output_links": [ + { + "id": "0601d9e3-086d-41c6-b2fe-1fe1f7166fd5", + "source_id": "263a6854-5aed-4b9d-9d18-41b25e5002d0", + "sink_id": "e4d24a93-2107-41a2-b853-69b7b3603153", + "source_name": "result", + "sink_name": "url", + "is_static": true + }, + { + "id": "85507b2c-4eca-4d66-a414-486e65d5382f", + "source_id": "263a6854-5aed-4b9d-9d18-41b25e5002d0", + "sink_id": "12c4366e-81e7-4e65-b2b5-eb5fc8c65230", + "source_name": "result", + "sink_name": "values_#_DOCUMENTATION_URL", + "is_static": true + } + ] + }, + { + "id": "8a0577e3-315a-4bd8-99d9-da082316b729", + "block_id": "90a56ffb-7024-4b2b-ab50-e26c5e5ab8ba", + "input_default": { + "name": "Question", + "advanced": false, + "placeholder_values": [] + }, + "metadata": { + "position": { + "x": -1199.8907386673427, + "y": 215.28637177848216 + } + }, + "input_links": [], + "output_links": [ + { + "id": "de329c54-737a-4a44-85ee-44b23ac7f683", + "source_id": "8a0577e3-315a-4bd8-99d9-da082316b729", + "sink_id": "12c4366e-81e7-4e65-b2b5-eb5fc8c65230", + "source_name": "result", + "sink_name": "values_#_QUESTION", + "is_static": true + } + ] + }, + { + "id": "29c894cd-e453-4159-b2da-ebbafdaac2fe", + "block_id": "3b191d9f-356f-482d-8238-ba04b6d18381", + "input_default": { + "model": "gpt-5-2025-08-07", + "retry": 3, + "prompt": "**Current Task:**\nDocumentation Base URL: {{documentation_base_url}}\nCustomer Question: {{question}}\nRequired Confidence Threshold: {{confidence_threshold}}%\nSitemap: {{sitemap}}\n\n**Your Available Tools:**\n1. **FireCrawl Crawl** - Pass in a base URL to crawl through multiple related pages agentically\n2. **FireCrawl Extract** - Pass in a list of specific URLs and a custom prompt for targeted extraction\n\n**Search Process:**\n1. Analyze the sitemap and customer's question to identify key concepts and requirements\n2. Choose the optimal tool strategy:\n - Use **FireCrawl Crawl** when you need to explore multiple related pages in a section\n - Use **FireCrawl Extract** when you know specific URLs and want targeted information with custom prompts\n3. 
For each search result, perform documentation analysis:\n - Extract relevant quotes from documentation, cited verbatim\n - Identify specific instructions, troubleshooting steps, or policies\n - Note any limitations that might prevent a complete answer\n - Consider edge cases or limitations in potential answers\n4. Assess confidence level (0-100%) based on completeness and accuracy of found information\n5. Continue searching if below {{confidence_threshold}}% threshold using different tools/strategies\n6. Only include information explicitly stated in the documentation\n\n**Tool Usage Guidelines:**\n- **FireCrawl Crawl:** When sitemap shows related pages that need exploration (e.g., API section with multiple endpoints)\n- **FireCrawl Extract:** When sitemap reveals specific relevant pages and you need precise information with targeted prompts\n\n**Stop Conditions:**\n- Confidence reaches {{confidence_threshold}}% or higher\n- Documentation has been thoroughly searched with appropriate tools\n- No additional relevant information can be found\n\n**Final Output Format:**\n{\n \"confidence_score\": [X]%,\n \"relevant_information\": \"Exact text copied verbatim from documentation\",\n \"answer\": \"Complete response based solely on documentation findings\",\n \"source_urls\": [\"exact page URLs relevant to this answer\"],\n \"reasoning\": \"Why this confidence level was reached and which tools were used\"\n}\n\n**Important:** \n- Never make assumptions or include information not present in documentation\n- If insufficient information found, clearly state limitations\n- Ensure responses are accurate, professional, and solution-oriented\n\nBegin searching for: \"{{question}}\" in documentation at {{documentation_base_url}}", + "sys_prompt": "You are a documentation search agent that uses FireCrawl tools to find accurate answers to customer questions. Your goal is to thoroughly search documentation until you reach the specified confidence threshold or exhaust available sources.\n\n**Available Functions:**\n1. firecrawl_crawl(url, limit, only_main_content) - Explore multiple pages from a base URL\n2. 
firecrawl_extract(urls, prompt) - Extract specific information from known URLs with targeted prompts\n\n**Search Strategy:**\n- Analyze the sitemap to identify relevant documentation areas\n- Use firecrawl_extract when you know specific URLs contain the answer\n- Use firecrawl_crawl when you need to explore and discover relevant pages\n- Continue searching until your confidence reaches the specified confidence threshold OR you've exhausted relevant sources\n- Always provide complete function arguments matching the required schema\n\n**MANDATORY STOP CONDITIONS:**\nYou MUST stop searching and provide your final answer when ANY of these conditions are met:\n- You have made 5 tool calls total (hard limit)\n- Your confidence reaches the confidence threshold ({{CONFIDENCE_THRESHOLD}}%) or higher\n- You find no new relevant information in your last 2 consecutive searches\n- You are about to crawl/extract from URLs you've already processed\n- No more relevant URLs are available to search\n\n**Goal:** Raise your confidence level as high as possible through thorough documentation search.\n\n**Confidence Assessment:**\n- Evaluate how completely and accurately you can answer the customer's question\n- Consider: completeness of information, clarity of documentation, presence of contradictions\n- You must explicitly state your confidence percentage (0-100%)\n- Continue searching if below the confidence threshold ({{CONFIDENCE_THRESHOLD}}%) AND more sources are available AND stop conditions not met\n\n**Output Requirements:**\n- Extract verbatim quotes from documentation sources\n- Provide comprehensive answers based solely on documentation\n- Include relevant source URLs (limit to 2-3 most important)\n- Assess your final confidence level objectively\n\n**Tool Call Tracking:**\n- Keep count of your tool calls (1/5, 2/5, etc.)\n- After each tool call, evaluate if stop conditions are met\n- If stop conditions are met, immediately provide final output in XML format\n\nAlways choose a function call from the available tools and provide complete arguments. 
When finished, output your findings in the specified XML format.", + "ollama_host": "localhost:11434", + "prompt_values": {}, + "multiple_tool_calls": true, + "conversation_history": [] + }, + "metadata": { + "position": { + "x": 2693.3732369945164, + "y": 208.8074862369709 + } + }, + "input_links": [ + { + "id": "5f17f7fe-6990-4513-8a56-c58c349ba933", + "source_id": "29c894cd-e453-4159-b2da-ebbafdaac2fe", + "sink_id": "29c894cd-e453-4159-b2da-ebbafdaac2fe", + "source_name": "conversations", + "sink_name": "conversation_history", + "is_static": false + }, + { + "id": "ebbd1614-a4cc-4fdb-b5a7-8e09345675b6", + "source_id": "94c831ec-ecf0-477c-8d77-45dec19457d5", + "sink_id": "29c894cd-e453-4159-b2da-ebbafdaac2fe", + "source_name": "output_message", + "sink_name": "last_tool_output", + "is_static": false + }, + { + "id": "aa80e27a-4dd3-41b4-9d4a-aab33c0dd88d", + "source_id": "8d5f6594-d865-4aa7-8479-7f36b4c083df", + "sink_id": "29c894cd-e453-4159-b2da-ebbafdaac2fe", + "source_name": "output", + "sink_name": "prompt", + "is_static": true + } + ], + "output_links": [ + { + "id": "5f17f7fe-6990-4513-8a56-c58c349ba933", + "source_id": "29c894cd-e453-4159-b2da-ebbafdaac2fe", + "sink_id": "29c894cd-e453-4159-b2da-ebbafdaac2fe", + "source_name": "conversations", + "sink_name": "conversation_history", + "is_static": false + }, + { + "id": "a68a6096-56e2-4ee3-9c3d-3d6ed8d3e3d4", + "source_id": "29c894cd-e453-4159-b2da-ebbafdaac2fe", + "sink_id": "c9f24972-04a3-472e-a545-6cdd5e0201e1", + "source_name": "finished", + "sink_name": "value", + "is_static": false + }, + { + "id": "206be745-3a31-4b63-ae95-29a7a5822266", + "source_id": "29c894cd-e453-4159-b2da-ebbafdaac2fe", + "sink_id": "c9f24972-04a3-472e-a545-6cdd5e0201e1", + "source_name": "tools_^_agentoutputblock_~_value", + "sink_name": "value", + "is_static": false + }, + { + "id": "50c8dfdf-edd4-4ac0-9859-c4eee9737661", + "source_id": "29c894cd-e453-4159-b2da-ebbafdaac2fe", + "sink_id": "5f122699-682a-43de-b45e-2b2933f0f5ad", + "source_name": "tools_^_firecrawlextractblock_~_urls", + "sink_name": "urls", + "is_static": false + }, + { + "id": "43b50d09-91ad-4195-9282-143c5c074b85", + "source_id": "29c894cd-e453-4159-b2da-ebbafdaac2fe", + "sink_id": "5f122699-682a-43de-b45e-2b2933f0f5ad", + "source_name": "tools_^_firecrawlextractblock_~_prompt", + "sink_name": "prompt", + "is_static": false + }, + { + "id": "77e077d9-b442-4a0f-95da-fb878a8fe624", + "source_id": "29c894cd-e453-4159-b2da-ebbafdaac2fe", + "sink_id": "16fbae22-b2ed-42c7-890c-c8554fe74656", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "e6441aea-26d9-4292-80be-54ecdab260ae", + "source_id": "29c894cd-e453-4159-b2da-ebbafdaac2fe", + "sink_id": "8a2766d8-94c3-40f2-8e60-66cc63f6425e", + "source_name": "tools_^_firecrawlcrawlblock_~_url", + "sink_name": "url", + "is_static": false + } + ] + }, + { + "id": "a15a19b5-d771-4e30-bb76-2c368468df23", + "block_id": "96dae2bb-97a2-41c2-bd2f-13a3b5a8ea98", + "input_default": { + "name": "Confidence Threshold", + "title": "Auto-Reply Confidence Threshold", + "advanced": false, + "description": "Minimum confidence score (0-100) required for automatic responses", + "placeholder_values": [] + }, + "metadata": { + "position": { + "x": -573.5612071210447, + "y": 214.50574812392034 + } + }, + "input_links": [], + "output_links": [ + { + "id": "f96fd53d-c66d-437a-84a4-222abf018bde", + "source_id": "a15a19b5-d771-4e30-bb76-2c368468df23", + "sink_id": "12c4366e-81e7-4e65-b2b5-eb5fc8c65230", + "source_name": "result", + 
"sink_name": "values_#_CONFIDENCE_THRESHOLD", + "is_static": true + } + ] + }, + { + "id": "8a2766d8-94c3-40f2-8e60-66cc63f6425e", + "block_id": "bdbbaba0-03b7-4971-970e-699e2de6015e", + "input_default": { + "limit": 10, + "formats": [ + "markdown" + ], + "max_age": 3600000, + "wait_for": 0, + "only_main_content": true + }, + "metadata": { + "position": { + "x": 3635.6678690822937, + "y": -525.4558779233904 + } + }, + "input_links": [ + { + "id": "e6441aea-26d9-4292-80be-54ecdab260ae", + "source_id": "29c894cd-e453-4159-b2da-ebbafdaac2fe", + "sink_id": "8a2766d8-94c3-40f2-8e60-66cc63f6425e", + "source_name": "tools_^_firecrawlcrawlblock_~_url", + "sink_name": "url", + "is_static": false + } + ], + "output_links": [ + { + "id": "3a1f2995-6ea7-41e4-a10c-295febd1892a", + "source_id": "8a2766d8-94c3-40f2-8e60-66cc63f6425e", + "sink_id": "94c831ec-ecf0-477c-8d77-45dec19457d5", + "source_name": "markdown", + "sink_name": "input_message", + "is_static": false + } + ] + }, + { + "id": "16fbae22-b2ed-42c7-890c-c8554fe74656", + "block_id": "363ae599-353e-4804-937e-b2ee3cef3da4", + "input_default": { + "name": "ERROR", + "advanced": false + }, + "metadata": { + "position": { + "x": 3318.7470205644167, + "y": 2347.415524537313 + } + }, + "input_links": [ + { + "id": "77e077d9-b442-4a0f-95da-fb878a8fe624", + "source_id": "29c894cd-e453-4159-b2da-ebbafdaac2fe", + "sink_id": "16fbae22-b2ed-42c7-890c-c8554fe74656", + "source_name": "error", + "sink_name": "value", + "is_static": false + } + ], + "output_links": [] + }, + { + "id": "c9f24972-04a3-472e-a545-6cdd5e0201e1", + "block_id": "363ae599-353e-4804-937e-b2ee3cef3da4", + "input_default": { + "name": "Answer", + "advanced": false + }, + "metadata": { + "position": { + "x": 4740.84460411233, + "y": 1254.0475468406855 + } + }, + "input_links": [ + { + "id": "a68a6096-56e2-4ee3-9c3d-3d6ed8d3e3d4", + "source_id": "29c894cd-e453-4159-b2da-ebbafdaac2fe", + "sink_id": "c9f24972-04a3-472e-a545-6cdd5e0201e1", + "source_name": "finished", + "sink_name": "value", + "is_static": false + }, + { + "id": "206be745-3a31-4b63-ae95-29a7a5822266", + "source_id": "29c894cd-e453-4159-b2da-ebbafdaac2fe", + "sink_id": "c9f24972-04a3-472e-a545-6cdd5e0201e1", + "source_name": "tools_^_agentoutputblock_~_value", + "sink_name": "value", + "is_static": false + } + ], + "output_links": [] + }, + { + "id": "5f122699-682a-43de-b45e-2b2933f0f5ad", + "block_id": "d1774756-4d9e-40e6-bab1-47ec0ccd81b2", + "input_default": { + "urls": [], + "enable_web_search": false + }, + "metadata": { + "position": { + "x": 4262.589888165149, + "y": -538.1522431063202 + } + }, + "input_links": [ + { + "id": "50c8dfdf-edd4-4ac0-9859-c4eee9737661", + "source_id": "29c894cd-e453-4159-b2da-ebbafdaac2fe", + "sink_id": "5f122699-682a-43de-b45e-2b2933f0f5ad", + "source_name": "tools_^_firecrawlextractblock_~_urls", + "sink_name": "urls", + "is_static": false + }, + { + "id": "43b50d09-91ad-4195-9282-143c5c074b85", + "source_id": "29c894cd-e453-4159-b2da-ebbafdaac2fe", + "sink_id": "5f122699-682a-43de-b45e-2b2933f0f5ad", + "source_name": "tools_^_firecrawlextractblock_~_prompt", + "sink_name": "prompt", + "is_static": false + } + ], + "output_links": [ + { + "id": "5b904c69-fb0c-4bf7-b4c1-4429d5c2f17c", + "source_id": "5f122699-682a-43de-b45e-2b2933f0f5ad", + "sink_id": "94c831ec-ecf0-477c-8d77-45dec19457d5", + "source_name": "data", + "sink_name": "input_message", + "is_static": false + } + ] + }, + { + "id": "e4d24a93-2107-41a2-b853-69b7b3603153", + "block_id": "f0f43e2b-c943-48a0-a7f1-40136ca4d3b9", + 
"input_default": {}, + "metadata": { + "position": { + "x": 700.7848438430577, + "y": 219.16232116733937 + } + }, + "input_links": [ + { + "id": "0601d9e3-086d-41c6-b2fe-1fe1f7166fd5", + "source_id": "263a6854-5aed-4b9d-9d18-41b25e5002d0", + "sink_id": "e4d24a93-2107-41a2-b853-69b7b3603153", + "source_name": "result", + "sink_name": "url", + "is_static": true + } + ], + "output_links": [ + { + "id": "bdfce016-b3ec-4a81-a343-a5f2c063168d", + "source_id": "e4d24a93-2107-41a2-b853-69b7b3603153", + "sink_id": "12c4366e-81e7-4e65-b2b5-eb5fc8c65230", + "source_name": "links", + "sink_name": "values_#_SITEMAP_URL_LIST", + "is_static": false + } + ] + }, + { + "id": "12c4366e-81e7-4e65-b2b5-eb5fc8c65230", + "block_id": "db7d8f02-2f44-4c55-ab7a-eae0941f0c30", + "input_default": { + "format": "\nDocumentation Base URL: {{DOCUMENTATION_URL}}\nCustomer Question: {{QUESTION}}\nRequired Confidence Threshold: {{CONFIDENCE_THRESHOLD}}%\nAvailable Pages (Sitemap): {{SITEMAP_URL_LIST}}\n\n\n\nContinue searching and extracting information until you raise your confidence level as high as possible. Don't stop until you reach {{CONFIDENCE_THRESHOLD}}% confidence OR exhaust all relevant documentation sources.\n\n\n\nMethod:\n- Start with sitemap analysis from the sitemap url list\n- Use FireCrawl Extract for known specific URLs, FireCrawl Crawl for exploration\n- Keep searching to raise confidence - don't settle for partial answers\n- Extract as much relevant information as possible\n\n\n\nCRITICAL: FireCrawl Extract accepts maximum 10 URLs per request. Be highly selective and choose only the most comprehensive, relevant pages for extraction.\n\n\n\n- Use FireCrawl Crawl for broader exploration when you need to discover relevant content\n- Use FireCrawl Extract for up to 10 most critical URLs when you know exactly what pages contain the answer\n- If you identify more than 10 relevant URLs, prioritize: overview pages > specific subsections > examples\n\n\n\n\n [X]\n \n [Exact text from documentation]\n \n \n [Complete response based solely on documentation findings]\n \n \n [URL1]\n [URL2]\n [URL3]\n...\n \n\n\n\nFind the most comprehensive documentation-based answer for: \"{{QUESTION}}\"\n\nBegin by analyzing the sitemap url list and planning your search strategy to achieve the highest possible confidence level.", + "values": {} + }, + "metadata": { + "position": { + "x": 1366.0592785137596, + "y": 214.32942114785305 + } + }, + "input_links": [ + { + "id": "85507b2c-4eca-4d66-a414-486e65d5382f", + "source_id": "263a6854-5aed-4b9d-9d18-41b25e5002d0", + "sink_id": "12c4366e-81e7-4e65-b2b5-eb5fc8c65230", + "source_name": "result", + "sink_name": "values_#_DOCUMENTATION_URL", + "is_static": true + }, + { + "id": "bdfce016-b3ec-4a81-a343-a5f2c063168d", + "source_id": "e4d24a93-2107-41a2-b853-69b7b3603153", + "sink_id": "12c4366e-81e7-4e65-b2b5-eb5fc8c65230", + "source_name": "links", + "sink_name": "values_#_SITEMAP_URL_LIST", + "is_static": false + }, + { + "id": "f96fd53d-c66d-437a-84a4-222abf018bde", + "source_id": "a15a19b5-d771-4e30-bb76-2c368468df23", + "sink_id": "12c4366e-81e7-4e65-b2b5-eb5fc8c65230", + "source_name": "result", + "sink_name": "values_#_CONFIDENCE_THRESHOLD", + "is_static": true + }, + { + "id": "de329c54-737a-4a44-85ee-44b23ac7f683", + "source_id": "8a0577e3-315a-4bd8-99d9-da082316b729", + "sink_id": "12c4366e-81e7-4e65-b2b5-eb5fc8c65230", + "source_name": "result", + "sink_name": "values_#_QUESTION", + "is_static": true + } + ], + "output_links": [ + { + "id": 
"535b347d-fa2d-4461-8ade-f093cf73dde2", + "source_id": "12c4366e-81e7-4e65-b2b5-eb5fc8c65230", + "sink_id": "8d5f6594-d865-4aa7-8479-7f36b4c083df", + "source_name": "output", + "sink_name": "input", + "is_static": false + } + ] + }, + { + "id": "8d5f6594-d865-4aa7-8479-7f36b4c083df", + "block_id": "1ff065e9-88e8-4358-9d82-8dc91f622ba9", + "input_default": {}, + "metadata": { + "position": { + "x": 2009.3858195835292, + "y": 219.61895555965955 + } + }, + "input_links": [ + { + "id": "535b347d-fa2d-4461-8ade-f093cf73dde2", + "source_id": "12c4366e-81e7-4e65-b2b5-eb5fc8c65230", + "sink_id": "8d5f6594-d865-4aa7-8479-7f36b4c083df", + "source_name": "output", + "sink_name": "input", + "is_static": false + } + ], + "output_links": [ + { + "id": "aa80e27a-4dd3-41b4-9d4a-aab33c0dd88d", + "source_id": "8d5f6594-d865-4aa7-8479-7f36b4c083df", + "sink_id": "29c894cd-e453-4159-b2da-ebbafdaac2fe", + "source_name": "output", + "sink_name": "prompt", + "is_static": true + } + ] + }, + { + "id": "c6b1017c-a82e-401f-ae59-3a36b993a5c1", + "block_id": "cc10ff7b-7753-4ff2-9af6-9399b1a7eddc", + "input_default": { + "text": "Inputs" + }, + "metadata": { + "position": { + "x": -1195.91298968421, + "y": -189.12681492578457 + } + }, + "input_links": [], + "output_links": [] + }, + { + "id": "235155a9-e4f6-4a6d-90df-a3ba5bd31bbc", + "block_id": "cc10ff7b-7753-4ff2-9af6-9399b1a7eddc", + "input_default": { + "text": "Toolbox" + }, + "metadata": { + "position": { + "x": 3629.7459544349654, + "y": -902.1151785811265 + } + }, + "input_links": [], + "output_links": [] + }, + { + "id": "57e269dc-75d9-46cc-85be-85e693759c94", + "block_id": "cc10ff7b-7753-4ff2-9af6-9399b1a7eddc", + "input_default": { + "text": "Core: Smart Decision Maker and prompt" + }, + "metadata": { + "position": { + "x": 1361.3560589479616, + "y": -181.6344386113369 + } + }, + "input_links": [], + "output_links": [] + }, + { + "id": "68c9962e-c3b6-4483-a262-08fb93348698", + "block_id": "cc10ff7b-7753-4ff2-9af6-9399b1a7eddc", + "input_default": { + "text": "Output" + }, + "metadata": { + "position": { + "x": 4745.12015746418, + "y": 869.4735488177469 + } + }, + "input_links": [], + "output_links": [] + }, + { + "id": "94c831ec-ecf0-477c-8d77-45dec19457d5", + "block_id": "d67a9c52-5e4e-11e2-bcfd-0800200c9a71", + "input_default": { + "days": 0, + "hours": 0, + "repeat": 1, + "minutes": 0, + "seconds": "15", + "input_message": "timer finished" + }, + "metadata": { + "position": { + "x": 4908.496986570356, + "y": -532.4058692534887 + } + }, + "input_links": [ + { + "id": "5b904c69-fb0c-4bf7-b4c1-4429d5c2f17c", + "source_id": "5f122699-682a-43de-b45e-2b2933f0f5ad", + "sink_id": "94c831ec-ecf0-477c-8d77-45dec19457d5", + "source_name": "data", + "sink_name": "input_message", + "is_static": false + }, + { + "id": "3a1f2995-6ea7-41e4-a10c-295febd1892a", + "source_id": "8a2766d8-94c3-40f2-8e60-66cc63f6425e", + "sink_id": "94c831ec-ecf0-477c-8d77-45dec19457d5", + "source_name": "markdown", + "sink_name": "input_message", + "is_static": false + } + ], + "output_links": [ + { + "id": "ebbd1614-a4cc-4fdb-b5a7-8e09345675b6", + "source_id": "94c831ec-ecf0-477c-8d77-45dec19457d5", + "sink_id": "29c894cd-e453-4159-b2da-ebbafdaac2fe", + "source_name": "output_message", + "sink_name": "last_tool_output", + "is_static": false + } + ] + } + ], + "links": [ + { + "id": "43b50d09-91ad-4195-9282-143c5c074b85", + "source_id": "29c894cd-e453-4159-b2da-ebbafdaac2fe", + "sink_id": "5f122699-682a-43de-b45e-2b2933f0f5ad", + "source_name": "tools_^_firecrawlextractblock_~_prompt", + 
"sink_name": "prompt", + "is_static": false + }, + { + "id": "77e077d9-b442-4a0f-95da-fb878a8fe624", + "source_id": "29c894cd-e453-4159-b2da-ebbafdaac2fe", + "sink_id": "16fbae22-b2ed-42c7-890c-c8554fe74656", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "3a1f2995-6ea7-41e4-a10c-295febd1892a", + "source_id": "8a2766d8-94c3-40f2-8e60-66cc63f6425e", + "sink_id": "94c831ec-ecf0-477c-8d77-45dec19457d5", + "source_name": "markdown", + "sink_name": "input_message", + "is_static": false + }, + { + "id": "e6441aea-26d9-4292-80be-54ecdab260ae", + "source_id": "29c894cd-e453-4159-b2da-ebbafdaac2fe", + "sink_id": "8a2766d8-94c3-40f2-8e60-66cc63f6425e", + "source_name": "tools_^_firecrawlcrawlblock_~_url", + "sink_name": "url", + "is_static": false + }, + { + "id": "aa80e27a-4dd3-41b4-9d4a-aab33c0dd88d", + "source_id": "8d5f6594-d865-4aa7-8479-7f36b4c083df", + "sink_id": "29c894cd-e453-4159-b2da-ebbafdaac2fe", + "source_name": "output", + "sink_name": "prompt", + "is_static": true + }, + { + "id": "5b904c69-fb0c-4bf7-b4c1-4429d5c2f17c", + "source_id": "5f122699-682a-43de-b45e-2b2933f0f5ad", + "sink_id": "94c831ec-ecf0-477c-8d77-45dec19457d5", + "source_name": "data", + "sink_name": "input_message", + "is_static": false + }, + { + "id": "535b347d-fa2d-4461-8ade-f093cf73dde2", + "source_id": "12c4366e-81e7-4e65-b2b5-eb5fc8c65230", + "sink_id": "8d5f6594-d865-4aa7-8479-7f36b4c083df", + "source_name": "output", + "sink_name": "input", + "is_static": false + }, + { + "id": "de329c54-737a-4a44-85ee-44b23ac7f683", + "source_id": "8a0577e3-315a-4bd8-99d9-da082316b729", + "sink_id": "12c4366e-81e7-4e65-b2b5-eb5fc8c65230", + "source_name": "result", + "sink_name": "values_#_QUESTION", + "is_static": true + }, + { + "id": "bdfce016-b3ec-4a81-a343-a5f2c063168d", + "source_id": "e4d24a93-2107-41a2-b853-69b7b3603153", + "sink_id": "12c4366e-81e7-4e65-b2b5-eb5fc8c65230", + "source_name": "links", + "sink_name": "values_#_SITEMAP_URL_LIST", + "is_static": false + }, + { + "id": "a68a6096-56e2-4ee3-9c3d-3d6ed8d3e3d4", + "source_id": "29c894cd-e453-4159-b2da-ebbafdaac2fe", + "sink_id": "c9f24972-04a3-472e-a545-6cdd5e0201e1", + "source_name": "finished", + "sink_name": "value", + "is_static": false + }, + { + "id": "206be745-3a31-4b63-ae95-29a7a5822266", + "source_id": "29c894cd-e453-4159-b2da-ebbafdaac2fe", + "sink_id": "c9f24972-04a3-472e-a545-6cdd5e0201e1", + "source_name": "tools_^_agentoutputblock_~_value", + "sink_name": "value", + "is_static": false + }, + { + "id": "85507b2c-4eca-4d66-a414-486e65d5382f", + "source_id": "263a6854-5aed-4b9d-9d18-41b25e5002d0", + "sink_id": "12c4366e-81e7-4e65-b2b5-eb5fc8c65230", + "source_name": "result", + "sink_name": "values_#_DOCUMENTATION_URL", + "is_static": true + }, + { + "id": "5f17f7fe-6990-4513-8a56-c58c349ba933", + "source_id": "29c894cd-e453-4159-b2da-ebbafdaac2fe", + "sink_id": "29c894cd-e453-4159-b2da-ebbafdaac2fe", + "source_name": "conversations", + "sink_name": "conversation_history", + "is_static": false + }, + { + "id": "ebbd1614-a4cc-4fdb-b5a7-8e09345675b6", + "source_id": "94c831ec-ecf0-477c-8d77-45dec19457d5", + "sink_id": "29c894cd-e453-4159-b2da-ebbafdaac2fe", + "source_name": "output_message", + "sink_name": "last_tool_output", + "is_static": false + }, + { + "id": "f96fd53d-c66d-437a-84a4-222abf018bde", + "source_id": "a15a19b5-d771-4e30-bb76-2c368468df23", + "sink_id": "12c4366e-81e7-4e65-b2b5-eb5fc8c65230", + "source_name": "result", + "sink_name": "values_#_CONFIDENCE_THRESHOLD", + "is_static": true + }, + { + 
"id": "50c8dfdf-edd4-4ac0-9859-c4eee9737661", + "source_id": "29c894cd-e453-4159-b2da-ebbafdaac2fe", + "sink_id": "5f122699-682a-43de-b45e-2b2933f0f5ad", + "source_name": "tools_^_firecrawlextractblock_~_urls", + "sink_name": "urls", + "is_static": false + }, + { + "id": "0601d9e3-086d-41c6-b2fe-1fe1f7166fd5", + "source_id": "263a6854-5aed-4b9d-9d18-41b25e5002d0", + "sink_id": "e4d24a93-2107-41a2-b853-69b7b3603153", + "source_name": "result", + "sink_name": "url", + "is_static": true + } + ], + "forked_from_id": null, + "forked_from_version": null, + "input_schema": { + "type": "object", + "properties": { + "Documentation URL": { + "advanced": false, + "anyOf": [ + { + "format": "short-text", + "type": "string" + }, + { + "type": "null" + } + ], + "secret": false, + "title": "Documentation URL" + }, + "Question": { + "advanced": false, + "anyOf": [ + { + "format": "long-text", + "type": "string" + }, + { + "type": "null" + } + ], + "secret": false, + "title": "Question" + }, + "Confidence Threshold": { + "advanced": false, + "anyOf": [ + { + "type": "integer" + }, + { + "type": "null" + } + ], + "secret": false, + "title": "Auto-Reply Confidence Threshold", + "description": "Minimum confidence score (0-100) required for automatic responses" + } + }, + "required": [ + "Documentation URL", + "Question", + "Confidence Threshold" + ] + }, + "output_schema": { + "type": "object", + "properties": { + "ERROR": { + "advanced": false, + "secret": false, + "title": "ERROR" + }, + "Answer": { + "advanced": false, + "secret": false, + "title": "Answer" + } + }, + "required": [ + "ERROR", + "Answer" + ] + }, + "has_external_trigger": false, + "has_human_in_the_loop": false, + "trigger_setup_info": null + } + ], + "user_id": "", + "created_at": "2025-09-16T19:11:44.836Z", + "input_schema": { + "type": "object", + "properties": { + "Documentation URL": { + "advanced": false, + "secret": false, + "title": "Documentation URL", + "default": "" + }, + "Company Tone": { + "advanced": false, + "anyOf": [ + { + "type": "string" + }, + { + "type": "null" + } + ], + "secret": false, + "title": "Company Tone", + "enum": [ + "Professional", + "Friendly", + "Casual", + "Technical" + ], + "default": "" + }, + "Confidence Threshold": { + "advanced": false, + "secret": false, + "title": "Auto-Reply Confidence Threshold", + "description": "Minimum confidence score (0-100) required for automatic responses", + "default": "" + }, + "Escalation Email": { + "advanced": false, + "secret": false, + "title": "Human Escalation Email Address", + "description": "Email address where complex support requests that require human attention will be forwarded", + "default": "" + }, + "Email Signature": { + "advanced": false, + "anyOf": [ + { + "format": "long-text", + "type": "string" + }, + { + "type": "null" + } + ], + "secret": false, + "title": "Email Signature", + "default": "" + }, + "Airtable Workspace ID": { + "advanced": false, + "anyOf": [ + { + "format": "short-text", + "type": "string" + }, + { + "type": "null" + } + ], + "secret": false, + "title": "Airtable Workspace ID", + "description": "To get your Airtable workspace ID is to open Airtable in a browser, select your workspace, and check the URL in your browser\u2019s address bar. The workspace ID is the string that starts with \"wsp\" in the URL (for example,\u00a0https://airtable.com/workspaces/wspsqMNxxxxxxxxxxxxx). 
Do not include the trailing question mark if present.", + "default": "" + } + }, + "required": [] + }, + "output_schema": { + "type": "object", + "properties": { + "Error Reading Emails": { + "advanced": false, + "secret": false, + "title": "Error Reading Emails" + }, + "No New Support Tickets": { + "advanced": false, + "secret": false, + "title": "No New Support Tickets" + }, + "Error Crawling Documentation": { + "advanced": false, + "secret": false, + "title": "Error Crawling Documentation" + }, + "Error Gmail Auto-Reply": { + "advanced": false, + "secret": false, + "title": "Error Gmail Auto-Reply" + }, + "Dictionary Creation Failed": { + "advanced": false, + "secret": false, + "title": "Dictionary Creation Failed" + }, + "Adding to List Process Failed": { + "advanced": false, + "secret": false, + "title": "Adding to List Process Failed" + }, + "Success": { + "advanced": false, + "secret": false, + "title": "Success" + }, + "Error in Gmail Escalation Process": { + "advanced": false, + "secret": false, + "title": "Error in Gmail Escalation Process" + }, + "Error adding to Airtable Write payload": { + "advanced": false, + "secret": false, + "title": "Error adding to Airtable Write payload" + }, + "Error Type Conversion": { + "advanced": false, + "secret": false, + "title": "Error Type Conversion" + } + }, + "required": [ + "Error Reading Emails", + "No New Support Tickets", + "Error Crawling Documentation", + "Error Gmail Auto-Reply", + "Dictionary Creation Failed", + "Adding to List Process Failed", + "Success", + "Error in Gmail Escalation Process", + "Error adding to Airtable Write payload", + "Error Type Conversion" + ] + }, + "has_external_trigger": false, + "has_human_in_the_loop": false, + "trigger_setup_info": null, + "credentials_input_schema": { + "properties": { + "google_oauth2_credentials": { + "credentials_provider": [ + "google" + ], + "credentials_types": [ + "oauth2" + ], + "properties": { + "id": { + "title": "Id", + "type": "string" + }, + "title": { + "anyOf": [ + { + "type": "string" + }, + { + "type": "null" + } + ], + "default": null, + "title": "Title" + }, + "provider": { + "const": "google", + "title": "Provider", + "type": "string" + }, + "type": { + "const": "oauth2", + "title": "Type", + "type": "string" + } + }, + "required": [ + "id", + "provider", + "type" + ], + "title": "CredentialsMetaInput[Literal[], Literal['oauth2']]", + "type": "object", + "credentials_scopes": [ + "https://www.googleapis.com/auth/gmail.send", + "https://www.googleapis.com/auth/gmail.modify", + "https://www.googleapis.com/auth/gmail.readonly" + ], + "discriminator_values": [] + }, + "openai_api_key_credentials": { + "credentials_provider": [ + "openai" + ], + "credentials_types": [ + "api_key" + ], + "properties": { + "id": { + "title": "Id", + "type": "string" + }, + "title": { + "anyOf": [ + { + "type": "string" + }, + { + "type": "null" + } + ], + "default": null, + "title": "Title" + }, + "provider": { + "const": "openai", + "title": "Provider", + "type": "string" + }, + "type": { + "const": "api_key", + "title": "Type", + "type": "string" + } + }, + "required": [ + "id", + "provider", + "type" + ], + "title": "CredentialsMetaInput[Literal[], Literal['api_key']]", + "type": "object", + "discriminator": "model", + "discriminator_mapping": { + "Llama-3.3-70B-Instruct": "llama_api", + "Llama-3.3-8B-Instruct": "llama_api", + "Llama-4-Maverick-17B-128E-Instruct-FP8": "llama_api", + "Llama-4-Scout-17B-16E-Instruct-FP8": "llama_api", + "Qwen/Qwen2.5-72B-Instruct-Turbo": "aiml_api", + 
"amazon/nova-lite-v1": "open_router", + "amazon/nova-micro-v1": "open_router", + "amazon/nova-pro-v1": "open_router", + "claude-3-7-sonnet-20250219": "anthropic", + "claude-3-haiku-20240307": "anthropic", + "claude-haiku-4-5-20251001": "anthropic", + "claude-opus-4-1-20250805": "anthropic", + "claude-opus-4-20250514": "anthropic", + "claude-opus-4-5-20251101": "anthropic", + "claude-sonnet-4-20250514": "anthropic", + "claude-sonnet-4-5-20250929": "anthropic", + "cohere/command-r-08-2024": "open_router", + "cohere/command-r-plus-08-2024": "open_router", + "deepseek/deepseek-chat": "open_router", + "deepseek/deepseek-r1-0528": "open_router", + "dolphin-mistral:latest": "ollama", + "google/gemini-2.0-flash-001": "open_router", + "google/gemini-2.0-flash-lite-001": "open_router", + "google/gemini-2.5-flash": "open_router", + "google/gemini-2.5-flash-lite-preview-06-17": "open_router", + "google/gemini-2.5-pro-preview-03-25": "open_router", + "google/gemini-3-pro-preview": "open_router", + "gpt-3.5-turbo": "openai", + "gpt-4-turbo": "openai", + "gpt-4.1-2025-04-14": "openai", + "gpt-4.1-mini-2025-04-14": "openai", + "gpt-4o": "openai", + "gpt-4o-mini": "openai", + "gpt-5-2025-08-07": "openai", + "gpt-5-chat-latest": "openai", + "gpt-5-mini-2025-08-07": "openai", + "gpt-5-nano-2025-08-07": "openai", + "gpt-5.1-2025-11-13": "openai", + "gryphe/mythomax-l2-13b": "open_router", + "llama-3.1-8b-instant": "groq", + "llama-3.3-70b-versatile": "groq", + "llama3": "ollama", + "llama3.1:405b": "ollama", + "llama3.2": "ollama", + "llama3.3": "ollama", + "meta-llama/Llama-3.2-3B-Instruct-Turbo": "aiml_api", + "meta-llama/Llama-3.3-70B-Instruct-Turbo": "aiml_api", + "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo": "aiml_api", + "meta-llama/llama-4-maverick": "open_router", + "meta-llama/llama-4-scout": "open_router", + "microsoft/wizardlm-2-8x22b": "open_router", + "mistralai/mistral-nemo": "open_router", + "moonshotai/kimi-k2": "open_router", + "nousresearch/hermes-3-llama-3.1-405b": "open_router", + "nousresearch/hermes-3-llama-3.1-70b": "open_router", + "nvidia/llama-3.1-nemotron-70b-instruct": "aiml_api", + "o1": "openai", + "o1-mini": "openai", + "o3-2025-04-16": "openai", + "o3-mini": "openai", + "openai/gpt-oss-120b": "open_router", + "openai/gpt-oss-20b": "open_router", + "perplexity/sonar": "open_router", + "perplexity/sonar-deep-research": "open_router", + "perplexity/sonar-pro": "open_router", + "qwen/qwen3-235b-a22b-thinking-2507": "open_router", + "qwen/qwen3-coder": "open_router", + "v0-1.0-md": "v0", + "v0-1.5-lg": "v0", + "v0-1.5-md": "v0", + "x-ai/grok-4": "open_router", + "x-ai/grok-4-fast": "open_router", + "x-ai/grok-4.1-fast": "open_router", + "x-ai/grok-code-fast-1": "open_router" + }, + "discriminator_values": [ + "gpt-4.1-2025-04-14", + "gpt-5-2025-08-07" + ] + }, + "airtable_api_key-oauth2_credentials": { + "credentials_provider": [ + "airtable" + ], + "credentials_types": [ + "api_key", + "oauth2" + ], + "properties": { + "id": { + "title": "Id", + "type": "string" + }, + "title": { + "anyOf": [ + { + "type": "string" + }, + { + "type": "null" + } + ], + "default": null, + "title": "Title" + }, + "provider": { + "const": "airtable", + "title": "Provider", + "type": "string" + }, + "type": { + "enum": [ + "api_key", + "oauth2" + ], + "title": "Type", + "type": "string" + } + }, + "required": [ + "id", + "provider", + "type" + ], + "title": "CredentialsMetaInput[Literal[], Literal['api_key', 'oauth2']]", + "type": "object", + "discriminator_values": [] + }, + 
"anthropic_api_key_credentials": { + "credentials_provider": [ + "anthropic" + ], + "credentials_types": [ + "api_key" + ], + "properties": { + "id": { + "title": "Id", + "type": "string" + }, + "title": { + "anyOf": [ + { + "type": "string" + }, + { + "type": "null" + } + ], + "default": null, + "title": "Title" + }, + "provider": { + "const": "anthropic", + "title": "Provider", + "type": "string" + }, + "type": { + "const": "api_key", + "title": "Type", + "type": "string" + } + }, + "required": [ + "id", + "provider", + "type" + ], + "title": "CredentialsMetaInput[Literal[], Literal['api_key']]", + "type": "object", + "discriminator": "model", + "discriminator_mapping": { + "Llama-3.3-70B-Instruct": "llama_api", + "Llama-3.3-8B-Instruct": "llama_api", + "Llama-4-Maverick-17B-128E-Instruct-FP8": "llama_api", + "Llama-4-Scout-17B-16E-Instruct-FP8": "llama_api", + "Qwen/Qwen2.5-72B-Instruct-Turbo": "aiml_api", + "amazon/nova-lite-v1": "open_router", + "amazon/nova-micro-v1": "open_router", + "amazon/nova-pro-v1": "open_router", + "claude-3-7-sonnet-20250219": "anthropic", + "claude-3-haiku-20240307": "anthropic", + "claude-haiku-4-5-20251001": "anthropic", + "claude-opus-4-1-20250805": "anthropic", + "claude-opus-4-20250514": "anthropic", + "claude-opus-4-5-20251101": "anthropic", + "claude-sonnet-4-20250514": "anthropic", + "claude-sonnet-4-5-20250929": "anthropic", + "cohere/command-r-08-2024": "open_router", + "cohere/command-r-plus-08-2024": "open_router", + "deepseek/deepseek-chat": "open_router", + "deepseek/deepseek-r1-0528": "open_router", + "dolphin-mistral:latest": "ollama", + "google/gemini-2.0-flash-001": "open_router", + "google/gemini-2.0-flash-lite-001": "open_router", + "google/gemini-2.5-flash": "open_router", + "google/gemini-2.5-flash-lite-preview-06-17": "open_router", + "google/gemini-2.5-pro-preview-03-25": "open_router", + "google/gemini-3-pro-preview": "open_router", + "gpt-3.5-turbo": "openai", + "gpt-4-turbo": "openai", + "gpt-4.1-2025-04-14": "openai", + "gpt-4.1-mini-2025-04-14": "openai", + "gpt-4o": "openai", + "gpt-4o-mini": "openai", + "gpt-5-2025-08-07": "openai", + "gpt-5-chat-latest": "openai", + "gpt-5-mini-2025-08-07": "openai", + "gpt-5-nano-2025-08-07": "openai", + "gpt-5.1-2025-11-13": "openai", + "gryphe/mythomax-l2-13b": "open_router", + "llama-3.1-8b-instant": "groq", + "llama-3.3-70b-versatile": "groq", + "llama3": "ollama", + "llama3.1:405b": "ollama", + "llama3.2": "ollama", + "llama3.3": "ollama", + "meta-llama/Llama-3.2-3B-Instruct-Turbo": "aiml_api", + "meta-llama/Llama-3.3-70B-Instruct-Turbo": "aiml_api", + "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo": "aiml_api", + "meta-llama/llama-4-maverick": "open_router", + "meta-llama/llama-4-scout": "open_router", + "microsoft/wizardlm-2-8x22b": "open_router", + "mistralai/mistral-nemo": "open_router", + "moonshotai/kimi-k2": "open_router", + "nousresearch/hermes-3-llama-3.1-405b": "open_router", + "nousresearch/hermes-3-llama-3.1-70b": "open_router", + "nvidia/llama-3.1-nemotron-70b-instruct": "aiml_api", + "o1": "openai", + "o1-mini": "openai", + "o3-2025-04-16": "openai", + "o3-mini": "openai", + "openai/gpt-oss-120b": "open_router", + "openai/gpt-oss-20b": "open_router", + "perplexity/sonar": "open_router", + "perplexity/sonar-deep-research": "open_router", + "perplexity/sonar-pro": "open_router", + "qwen/qwen3-235b-a22b-thinking-2507": "open_router", + "qwen/qwen3-coder": "open_router", + "v0-1.0-md": "v0", + "v0-1.5-lg": "v0", + "v0-1.5-md": "v0", + "x-ai/grok-4": "open_router", + 
"x-ai/grok-4-fast": "open_router", + "x-ai/grok-4.1-fast": "open_router", + "x-ai/grok-code-fast-1": "open_router" + }, + "discriminator_values": [ + "claude-sonnet-4-20250514" + ] + }, + "firecrawl_api_key_credentials": { + "credentials_provider": [ + "firecrawl" + ], + "credentials_types": [ + "api_key" + ], + "properties": { + "id": { + "title": "Id", + "type": "string" + }, + "title": { + "anyOf": [ + { + "type": "string" + }, + { + "type": "null" + } + ], + "default": null, + "title": "Title" + }, + "provider": { + "const": "firecrawl", + "title": "Provider", + "type": "string" + }, + "type": { + "const": "api_key", + "title": "Type", + "type": "string" + } + }, + "required": [ + "id", + "provider", + "type" + ], + "title": "CredentialsMetaInput[Literal[], Literal['api_key']]", + "type": "object", + "discriminator_values": [] + } + }, + "required": [ + "google_oauth2_credentials", + "openai_api_key_credentials", + "airtable_api_key-oauth2_credentials", + "anthropic_api_key_credentials", + "firecrawl_api_key_credentials" + ], + "title": "AutomatedSupportAgentCredentialsInputSchema", + "type": "object" + } +} \ No newline at end of file diff --git a/autogpt_platform/backend/agents/agent_c775f60d-b99f-418b-8fe0-53172258c3ce.json b/autogpt_platform/backend/agents/agent_c775f60d-b99f-418b-8fe0-53172258c3ce.json new file mode 100644 index 0000000000..532c173a1b --- /dev/null +++ b/autogpt_platform/backend/agents/agent_c775f60d-b99f-418b-8fe0-53172258c3ce.json @@ -0,0 +1,1005 @@ +{ + "id": "1acef2da-b865-4503-b94e-bfd444eddd66", + "version": 16, + "is_active": false, + "name": "YouTube Transcription Scraper", + "description": "Effortlessly gather transcriptions from multiple YouTube videos with this agent. It scrapes and compiles video transcripts into a clean, organized list, making it easy to extract insights, quotes, or content from various sources in one go. 
Ideal for researchers, content creators, and marketers looking to quickly analyze or repurpose video content.", + "instructions": null, + "recommended_schedule_cron": null, + "nodes": [ + { + "id": "a5c6d874-24ce-4bd9-8722-40ef6e13372e", + "block_id": "3c9c2f42-b0c3-435f-ba35-05f7a25c772a", + "input_default": {}, + "metadata": { + "position": { + "x": 1242.570588258763, + "y": 1239.0648340008283 + } + }, + "input_links": [ + { + "id": "e816444c-5fca-42d4-b4d1-719294ccfaeb", + "source_id": "39cd36bc-bf6b-4f8f-9588-c178428c23b8", + "sink_id": "a5c6d874-24ce-4bd9-8722-40ef6e13372e", + "source_name": "generated_list", + "sink_name": "collection", + "is_static": false + } + ], + "output_links": [ + { + "id": "81581615-5c5c-41b0-906f-1cf259542c53", + "source_id": "a5c6d874-24ce-4bd9-8722-40ef6e13372e", + "sink_id": "b4a85365-6943-4b0a-928a-68b56377a454", + "source_name": "count", + "sink_name": "input", + "is_static": false + }, + { + "id": "53cb9832-a375-4084-a635-669f19924cec", + "source_id": "a5c6d874-24ce-4bd9-8722-40ef6e13372e", + "sink_id": "b4a85365-6943-4b0a-928a-68b56377a454", + "source_name": "count", + "sink_name": "data", + "is_static": false + } + ], + "graph_id": "1acef2da-b865-4503-b94e-bfd444eddd66", + "graph_version": 16, + "webhook_id": null, + "webhook": null + }, + { + "id": "5275b389-3d1a-472c-877d-bddd7245767a", + "block_id": "c0a8e994-ebf1-4a9c-a4d8-89d09c86741b", + "input_default": { + "name": "Search Query", + "value": "Auto_GPT" + }, + "metadata": { + "position": { + "x": -1066.187417387172, + "y": 1001.8552390983206 + } + }, + "input_links": [], + "output_links": [ + { + "id": "7ba96ec9-67df-458f-9650-3c8779c9584f", + "source_id": "5275b389-3d1a-472c-877d-bddd7245767a", + "sink_id": "46ec84d0-f03a-45f1-a154-4f694738a783", + "source_name": "result", + "sink_name": "values_#_QUERY", + "is_static": true + } + ], + "graph_id": "1acef2da-b865-4503-b94e-bfd444eddd66", + "graph_version": 16, + "webhook_id": null, + "webhook": null + }, + { + "id": "133946e1-3033-4c2d-85ab-2ae6b8f65857", + "block_id": "aeb08fc1-2fc1-4141-bc8e-f758f183a822", + "input_default": { + "entry": "Youtube Transcripts" + }, + "metadata": { + "position": { + "x": 1231.8185576522417, + "y": -915.1398277411396 + } + }, + "input_links": [], + "output_links": [ + { + "id": "26662256-353e-469e-906f-b413e3ce2f59", + "source_id": "133946e1-3033-4c2d-85ab-2ae6b8f65857", + "sink_id": "6231dfea-c224-4029-ae8c-96873dfed078", + "source_name": "updated_list", + "sink_name": "list", + "is_static": false + } + ], + "graph_id": "1acef2da-b865-4503-b94e-bfd444eddd66", + "graph_version": 16, + "webhook_id": null, + "webhook": null + }, + { + "id": "412ab7a5-ef93-483a-9137-3dfddd0ab455", + "block_id": "363ae599-353e-4804-937e-b2ee3cef3da4", + "input_default": { + "name": "Transcripts" + }, + "metadata": { + "position": { + "x": 3785.9696718713567, + "y": 657.2728830845492 + } + }, + "input_links": [ + { + "id": "fc871514-57ba-41cb-a8fd-e7b6623a5e26", + "source_id": "a563e3ed-9d51-4960-9b95-dbe29e8962cd", + "sink_id": "412ab7a5-ef93-483a-9137-3dfddd0ab455", + "source_name": "yes_output", + "sink_name": "value", + "is_static": false + } + ], + "output_links": [], + "graph_id": "1acef2da-b865-4503-b94e-bfd444eddd66", + "graph_version": 16, + "webhook_id": null, + "webhook": null + }, + { + "id": "a563e3ed-9d51-4960-9b95-dbe29e8962cd", + "block_id": "715696a0-e1da-45c8-b209-c2fa9c3b0be6", + "input_default": { + "operator": ">" + }, + "metadata": { + "position": { + "x": 3144.36290122744, + "y": 657.803932826998 + } + }, + 
"input_links": [ + { + "id": "04480881-a501-4c78-a555-a9a6c5781add", + "source_id": "b4a85365-6943-4b0a-928a-68b56377a454", + "sink_id": "a563e3ed-9d51-4960-9b95-dbe29e8962cd", + "source_name": "output", + "sink_name": "value2", + "is_static": true + }, + { + "id": "da50467b-9bd2-4e01-ac4d-25dd5153c0f9", + "source_id": "c38521df-c4ee-48c8-8cf1-a49de02fa417", + "sink_id": "a563e3ed-9d51-4960-9b95-dbe29e8962cd", + "source_name": "count", + "sink_name": "value1", + "is_static": false + }, + { + "id": "8bd9a887-980a-476d-8c0a-f6125b339347", + "source_id": "6231dfea-c224-4029-ae8c-96873dfed078", + "sink_id": "a563e3ed-9d51-4960-9b95-dbe29e8962cd", + "source_name": "updated_list", + "sink_name": "yes_value", + "is_static": false + } + ], + "output_links": [ + { + "id": "fc871514-57ba-41cb-a8fd-e7b6623a5e26", + "source_id": "a563e3ed-9d51-4960-9b95-dbe29e8962cd", + "sink_id": "412ab7a5-ef93-483a-9137-3dfddd0ab455", + "source_name": "yes_output", + "sink_name": "value", + "is_static": false + } + ], + "graph_id": "1acef2da-b865-4503-b94e-bfd444eddd66", + "graph_version": 16, + "webhook_id": null, + "webhook": null + }, + { + "id": "6231dfea-c224-4029-ae8c-96873dfed078", + "block_id": "aeb08fc1-2fc1-4141-bc8e-f758f183a822", + "input_default": {}, + "metadata": { + "position": { + "x": 1863.7263973535069, + "y": 176.40384128981023 + } + }, + "input_links": [ + { + "id": "fa234b34-2608-428f-b0f0-ee68bfc9d1c7", + "source_id": "6231dfea-c224-4029-ae8c-96873dfed078", + "sink_id": "6231dfea-c224-4029-ae8c-96873dfed078", + "source_name": "updated_list", + "sink_name": "list", + "is_static": false + }, + { + "id": "26662256-353e-469e-906f-b413e3ce2f59", + "source_id": "133946e1-3033-4c2d-85ab-2ae6b8f65857", + "sink_id": "6231dfea-c224-4029-ae8c-96873dfed078", + "source_name": "updated_list", + "sink_name": "list", + "is_static": false + }, + { + "id": "9afff64c-554f-4bf2-a338-85e2edfe44cf", + "source_id": "49ac5b72-9bf5-4697-a56c-a235988394e3", + "sink_id": "6231dfea-c224-4029-ae8c-96873dfed078", + "source_name": "transcript", + "sink_name": "entry", + "is_static": false + } + ], + "output_links": [ + { + "id": "fa234b34-2608-428f-b0f0-ee68bfc9d1c7", + "source_id": "6231dfea-c224-4029-ae8c-96873dfed078", + "sink_id": "6231dfea-c224-4029-ae8c-96873dfed078", + "source_name": "updated_list", + "sink_name": "list", + "is_static": false + }, + { + "id": "84460e51-1efa-4f54-8c87-316a63666dc8", + "source_id": "6231dfea-c224-4029-ae8c-96873dfed078", + "sink_id": "c38521df-c4ee-48c8-8cf1-a49de02fa417", + "source_name": "updated_list", + "sink_name": "collection", + "is_static": false + }, + { + "id": "8bd9a887-980a-476d-8c0a-f6125b339347", + "source_id": "6231dfea-c224-4029-ae8c-96873dfed078", + "sink_id": "a563e3ed-9d51-4960-9b95-dbe29e8962cd", + "source_name": "updated_list", + "sink_name": "yes_value", + "is_static": false + } + ], + "graph_id": "1acef2da-b865-4503-b94e-bfd444eddd66", + "graph_version": 16, + "webhook_id": null, + "webhook": null + }, + { + "id": "fafb0d60-a5b6-4188-a144-6ae8e68f65a9", + "block_id": "436c3984-57fd-4b85-8e9a-459b356883bd", + "input_default": {}, + "metadata": { + "position": { + "x": 100.13599586516995, + "y": 998.8123871077987 + } + }, + "input_links": [ + { + "id": "661711fd-dea6-4766-aea2-7cf3177c4dfb", + "source_id": "46ec84d0-f03a-45f1-a154-4f694738a783", + "sink_id": "fafb0d60-a5b6-4188-a144-6ae8e68f65a9", + "source_name": "output", + "sink_name": "url", + "is_static": false + } + ], + "output_links": [ + { + "id": "1eb955fb-d20e-405d-b89c-a0abd8fbe58e", + "source_id": 
"fafb0d60-a5b6-4188-a144-6ae8e68f65a9", + "sink_id": "39cd36bc-bf6b-4f8f-9588-c178428c23b8", + "source_name": "content", + "sink_name": "source_data", + "is_static": false + } + ], + "graph_id": "1acef2da-b865-4503-b94e-bfd444eddd66", + "graph_version": 16, + "webhook_id": null, + "webhook": null + }, + { + "id": "a2e83b14-29ca-4f99-b79d-c4e309a86ca8", + "block_id": "cc10ff7b-7753-4ff2-9af6-9399b1a7eddc", + "input_default": { + "text": "Note for self-host users: \n\nPress \"Advanced\" and enter your Anthropic API key ->\n\nhttps://docs.anthropic.com/en/api/getting-started" + }, + "metadata": { + "position": { + "x": 314.2957291286988, + "y": 341.8904744260496 + } + }, + "input_links": [], + "output_links": [], + "graph_id": "1acef2da-b865-4503-b94e-bfd444eddd66", + "graph_version": 16, + "webhook_id": null, + "webhook": null + }, + { + "id": "5c0b3dc2-3bd8-4ee7-a315-83d89219a33e", + "block_id": "cc10ff7b-7753-4ff2-9af6-9399b1a7eddc", + "input_default": { + "text": "At this point we have all the transcriptions in a list." + }, + "metadata": { + "position": { + "x": 3555.3575982644275, + "y": 235.32122600538992 + } + }, + "input_links": [], + "output_links": [], + "graph_id": "1acef2da-b865-4503-b94e-bfd444eddd66", + "graph_version": 16, + "webhook_id": null, + "webhook": null + }, + { + "id": "0b2bfa81-c3b8-4189-b983-c6364c21272e", + "block_id": "c0a8e994-ebf1-4a9c-a4d8-89d09c86741b", + "input_default": { + "name": "Number of Videos", + "value": "3", + "description": "The number of videos to collect the transcripts of" + }, + "metadata": { + "position": { + "x": -1059.1936308573434, + "y": 160.45170411762848 + } + }, + "input_links": [], + "output_links": [ + { + "id": "a90eb9e6-d4de-46e8-8b62-614f9c6ca61b", + "source_id": "0b2bfa81-c3b8-4189-b983-c6364c21272e", + "sink_id": "ac2654e8-546d-4748-a4b9-49e7662d3d1a", + "source_name": "result", + "sink_name": "values_#_NUMBER", + "is_static": true + } + ], + "graph_id": "1acef2da-b865-4503-b94e-bfd444eddd66", + "graph_version": 16, + "webhook_id": null, + "webhook": null + }, + { + "id": "b4a85365-6943-4b0a-928a-68b56377a454", + "block_id": "1ff065e9-88e8-4358-9d82-8dc91f622ba9", + "input_default": {}, + "metadata": { + "position": { + "x": 1859.949466461855, + "y": 1241.0348156680416 + } + }, + "input_links": [ + { + "id": "81581615-5c5c-41b0-906f-1cf259542c53", + "source_id": "a5c6d874-24ce-4bd9-8722-40ef6e13372e", + "sink_id": "b4a85365-6943-4b0a-928a-68b56377a454", + "source_name": "count", + "sink_name": "input", + "is_static": false + }, + { + "id": "53cb9832-a375-4084-a635-669f19924cec", + "source_id": "a5c6d874-24ce-4bd9-8722-40ef6e13372e", + "sink_id": "b4a85365-6943-4b0a-928a-68b56377a454", + "source_name": "count", + "sink_name": "data", + "is_static": false + } + ], + "output_links": [ + { + "id": "04480881-a501-4c78-a555-a9a6c5781add", + "source_id": "b4a85365-6943-4b0a-928a-68b56377a454", + "sink_id": "a563e3ed-9d51-4960-9b95-dbe29e8962cd", + "source_name": "output", + "sink_name": "value2", + "is_static": true + } + ], + "graph_id": "1acef2da-b865-4503-b94e-bfd444eddd66", + "graph_version": 16, + "webhook_id": null, + "webhook": null + }, + { + "id": "49ac5b72-9bf5-4697-a56c-a235988394e3", + "block_id": "f3a8f7e1-4b1d-4e5f-9f2a-7c3d5a2e6b4c", + "input_default": {}, + "metadata": { + "position": { + "x": 1234.9739670999406, + "y": 172.36828137640848 + } + }, + "input_links": [ + { + "id": "b33653ed-1334-43a9-ad19-24d393de5ec7", + "source_id": "39cd36bc-bf6b-4f8f-9588-c178428c23b8", + "sink_id": 
"49ac5b72-9bf5-4697-a56c-a235988394e3", + "source_name": "list_item", + "sink_name": "youtube_url", + "is_static": false + } + ], + "output_links": [ + { + "id": "9afff64c-554f-4bf2-a338-85e2edfe44cf", + "source_id": "49ac5b72-9bf5-4697-a56c-a235988394e3", + "sink_id": "6231dfea-c224-4029-ae8c-96873dfed078", + "source_name": "transcript", + "sink_name": "entry", + "is_static": false + } + ], + "graph_id": "1acef2da-b865-4503-b94e-bfd444eddd66", + "graph_version": 16, + "webhook_id": null, + "webhook": null + }, + { + "id": "c38521df-c4ee-48c8-8cf1-a49de02fa417", + "block_id": "3c9c2f42-b0c3-435f-ba35-05f7a25c772a", + "input_default": {}, + "metadata": { + "position": { + "x": 2525.209901802294, + "y": 174.92728320007268 + } + }, + "input_links": [ + { + "id": "84460e51-1efa-4f54-8c87-316a63666dc8", + "source_id": "6231dfea-c224-4029-ae8c-96873dfed078", + "sink_id": "c38521df-c4ee-48c8-8cf1-a49de02fa417", + "source_name": "updated_list", + "sink_name": "collection", + "is_static": false + } + ], + "output_links": [ + { + "id": "da50467b-9bd2-4e01-ac4d-25dd5153c0f9", + "source_id": "c38521df-c4ee-48c8-8cf1-a49de02fa417", + "sink_id": "a563e3ed-9d51-4960-9b95-dbe29e8962cd", + "source_name": "count", + "sink_name": "value1", + "is_static": false + } + ], + "graph_id": "1acef2da-b865-4503-b94e-bfd444eddd66", + "graph_version": 16, + "webhook_id": null, + "webhook": null + }, + { + "id": "ac2654e8-546d-4748-a4b9-49e7662d3d1a", + "block_id": "db7d8f02-2f44-4c55-ab7a-eae0941f0c30", + "input_default": { + "format": "List the top {{NUMBER}} YouTube video urls results in this data.\n\nThe urls look like this:\n```https://www.youtube.com/watch?v=.....```", + "values": {} + }, + "metadata": { + "position": { + "x": -476.1955531511852, + "y": 165.59810105279777 + } + }, + "input_links": [ + { + "id": "a90eb9e6-d4de-46e8-8b62-614f9c6ca61b", + "source_id": "0b2bfa81-c3b8-4189-b983-c6364c21272e", + "sink_id": "ac2654e8-546d-4748-a4b9-49e7662d3d1a", + "source_name": "result", + "sink_name": "values_#_NUMBER", + "is_static": true + } + ], + "output_links": [ + { + "id": "1266f0f1-d6b2-4cdd-ac1a-7169340fc6f4", + "source_id": "ac2654e8-546d-4748-a4b9-49e7662d3d1a", + "sink_id": "39cd36bc-bf6b-4f8f-9588-c178428c23b8", + "source_name": "output", + "sink_name": "focus", + "is_static": false + } + ], + "graph_id": "1acef2da-b865-4503-b94e-bfd444eddd66", + "graph_version": 16, + "webhook_id": null, + "webhook": null + }, + { + "id": "46ec84d0-f03a-45f1-a154-4f694738a783", + "block_id": "db7d8f02-2f44-4c55-ab7a-eae0941f0c30", + "input_default": { + "format": "https://www.youtube.com/results?search_query={{QUERY}}", + "values": {} + }, + "metadata": { + "position": { + "x": -483.5962787777845, + "y": 995.4859020054615 + } + }, + "input_links": [ + { + "id": "7ba96ec9-67df-458f-9650-3c8779c9584f", + "source_id": "5275b389-3d1a-472c-877d-bddd7245767a", + "sink_id": "46ec84d0-f03a-45f1-a154-4f694738a783", + "source_name": "result", + "sink_name": "values_#_QUERY", + "is_static": true + } + ], + "output_links": [ + { + "id": "661711fd-dea6-4766-aea2-7cf3177c4dfb", + "source_id": "46ec84d0-f03a-45f1-a154-4f694738a783", + "sink_id": "fafb0d60-a5b6-4188-a144-6ae8e68f65a9", + "source_name": "output", + "sink_name": "url", + "is_static": false + } + ], + "graph_id": "1acef2da-b865-4503-b94e-bfd444eddd66", + "graph_version": 16, + "webhook_id": null, + "webhook": null + }, + { + "id": "39cd36bc-bf6b-4f8f-9588-c178428c23b8", + "block_id": "9c0b0450-d199-458b-a731-072189dd6593", + "input_default": { + "focus": "List the top 5 
YouTube video urls results in this data.\n\nThe urls look like this:\n```https://www.youtube.com/watch?v=.....```", + "model": "claude-sonnet-4-5-20250929" + }, + "metadata": { + "position": { + "x": 661.4989974785728, + "y": 171 + } + }, + "input_links": [ + { + "id": "1eb955fb-d20e-405d-b89c-a0abd8fbe58e", + "source_id": "fafb0d60-a5b6-4188-a144-6ae8e68f65a9", + "sink_id": "39cd36bc-bf6b-4f8f-9588-c178428c23b8", + "source_name": "content", + "sink_name": "source_data", + "is_static": false + }, + { + "id": "1266f0f1-d6b2-4cdd-ac1a-7169340fc6f4", + "source_id": "ac2654e8-546d-4748-a4b9-49e7662d3d1a", + "sink_id": "39cd36bc-bf6b-4f8f-9588-c178428c23b8", + "source_name": "output", + "sink_name": "focus", + "is_static": false + } + ], + "output_links": [ + { + "id": "b33653ed-1334-43a9-ad19-24d393de5ec7", + "source_id": "39cd36bc-bf6b-4f8f-9588-c178428c23b8", + "sink_id": "49ac5b72-9bf5-4697-a56c-a235988394e3", + "source_name": "list_item", + "sink_name": "youtube_url", + "is_static": false + }, + { + "id": "e816444c-5fca-42d4-b4d1-719294ccfaeb", + "source_id": "39cd36bc-bf6b-4f8f-9588-c178428c23b8", + "sink_id": "a5c6d874-24ce-4bd9-8722-40ef6e13372e", + "source_name": "generated_list", + "sink_name": "collection", + "is_static": false + } + ], + "graph_id": "1acef2da-b865-4503-b94e-bfd444eddd66", + "graph_version": 16, + "webhook_id": null, + "webhook": null + } + ], + "links": [ + { + "id": "04480881-a501-4c78-a555-a9a6c5781add", + "source_id": "b4a85365-6943-4b0a-928a-68b56377a454", + "sink_id": "a563e3ed-9d51-4960-9b95-dbe29e8962cd", + "source_name": "output", + "sink_name": "value2", + "is_static": true + }, + { + "id": "7ba96ec9-67df-458f-9650-3c8779c9584f", + "source_id": "5275b389-3d1a-472c-877d-bddd7245767a", + "sink_id": "46ec84d0-f03a-45f1-a154-4f694738a783", + "source_name": "result", + "sink_name": "values_#_QUERY", + "is_static": true + }, + { + "id": "26662256-353e-469e-906f-b413e3ce2f59", + "source_id": "133946e1-3033-4c2d-85ab-2ae6b8f65857", + "sink_id": "6231dfea-c224-4029-ae8c-96873dfed078", + "source_name": "updated_list", + "sink_name": "list", + "is_static": false + }, + { + "id": "1eb955fb-d20e-405d-b89c-a0abd8fbe58e", + "source_id": "fafb0d60-a5b6-4188-a144-6ae8e68f65a9", + "sink_id": "39cd36bc-bf6b-4f8f-9588-c178428c23b8", + "source_name": "content", + "sink_name": "source_data", + "is_static": false + }, + { + "id": "fa234b34-2608-428f-b0f0-ee68bfc9d1c7", + "source_id": "6231dfea-c224-4029-ae8c-96873dfed078", + "sink_id": "6231dfea-c224-4029-ae8c-96873dfed078", + "source_name": "updated_list", + "sink_name": "list", + "is_static": false + }, + { + "id": "a90eb9e6-d4de-46e8-8b62-614f9c6ca61b", + "source_id": "0b2bfa81-c3b8-4189-b983-c6364c21272e", + "sink_id": "ac2654e8-546d-4748-a4b9-49e7662d3d1a", + "source_name": "result", + "sink_name": "values_#_NUMBER", + "is_static": true + }, + { + "id": "fc871514-57ba-41cb-a8fd-e7b6623a5e26", + "source_id": "a563e3ed-9d51-4960-9b95-dbe29e8962cd", + "sink_id": "412ab7a5-ef93-483a-9137-3dfddd0ab455", + "source_name": "yes_output", + "sink_name": "value", + "is_static": false + }, + { + "id": "8bd9a887-980a-476d-8c0a-f6125b339347", + "source_id": "6231dfea-c224-4029-ae8c-96873dfed078", + "sink_id": "a563e3ed-9d51-4960-9b95-dbe29e8962cd", + "source_name": "updated_list", + "sink_name": "yes_value", + "is_static": false + }, + { + "id": "661711fd-dea6-4766-aea2-7cf3177c4dfb", + "source_id": "46ec84d0-f03a-45f1-a154-4f694738a783", + "sink_id": "fafb0d60-a5b6-4188-a144-6ae8e68f65a9", + "source_name": "output", + "sink_name": "url", 
+ "is_static": false + }, + { + "id": "e816444c-5fca-42d4-b4d1-719294ccfaeb", + "source_id": "39cd36bc-bf6b-4f8f-9588-c178428c23b8", + "sink_id": "a5c6d874-24ce-4bd9-8722-40ef6e13372e", + "source_name": "generated_list", + "sink_name": "collection", + "is_static": false + }, + { + "id": "84460e51-1efa-4f54-8c87-316a63666dc8", + "source_id": "6231dfea-c224-4029-ae8c-96873dfed078", + "sink_id": "c38521df-c4ee-48c8-8cf1-a49de02fa417", + "source_name": "updated_list", + "sink_name": "collection", + "is_static": false + }, + { + "id": "da50467b-9bd2-4e01-ac4d-25dd5153c0f9", + "source_id": "c38521df-c4ee-48c8-8cf1-a49de02fa417", + "sink_id": "a563e3ed-9d51-4960-9b95-dbe29e8962cd", + "source_name": "count", + "sink_name": "value1", + "is_static": false + }, + { + "id": "53cb9832-a375-4084-a635-669f19924cec", + "source_id": "a5c6d874-24ce-4bd9-8722-40ef6e13372e", + "sink_id": "b4a85365-6943-4b0a-928a-68b56377a454", + "source_name": "count", + "sink_name": "data", + "is_static": false + }, + { + "id": "b33653ed-1334-43a9-ad19-24d393de5ec7", + "source_id": "39cd36bc-bf6b-4f8f-9588-c178428c23b8", + "sink_id": "49ac5b72-9bf5-4697-a56c-a235988394e3", + "source_name": "list_item", + "sink_name": "youtube_url", + "is_static": false + }, + { + "id": "81581615-5c5c-41b0-906f-1cf259542c53", + "source_id": "a5c6d874-24ce-4bd9-8722-40ef6e13372e", + "sink_id": "b4a85365-6943-4b0a-928a-68b56377a454", + "source_name": "count", + "sink_name": "input", + "is_static": false + }, + { + "id": "1266f0f1-d6b2-4cdd-ac1a-7169340fc6f4", + "source_id": "ac2654e8-546d-4748-a4b9-49e7662d3d1a", + "sink_id": "39cd36bc-bf6b-4f8f-9588-c178428c23b8", + "source_name": "output", + "sink_name": "focus", + "is_static": false + }, + { + "id": "9afff64c-554f-4bf2-a338-85e2edfe44cf", + "source_id": "49ac5b72-9bf5-4697-a56c-a235988394e3", + "sink_id": "6231dfea-c224-4029-ae8c-96873dfed078", + "source_name": "transcript", + "sink_name": "entry", + "is_static": false + } + ], + "forked_from_id": null, + "forked_from_version": null, + "sub_graphs": [], + "user_id": "", + "created_at": "2024-12-20T16:34:24.107Z", + "input_schema": { + "type": "object", + "properties": { + "Search Query": { + "advanced": false, + "secret": false, + "title": "Search Query", + "default": "Auto_GPT" + }, + "Number of Videos": { + "advanced": false, + "secret": false, + "title": "Number of Videos", + "description": "The number of videos to collect the transcripts of", + "default": "3" + } + }, + "required": [] + }, + "output_schema": { + "type": "object", + "properties": { + "Transcripts": { + "advanced": false, + "secret": false, + "title": "Transcripts" + } + }, + "required": [ + "Transcripts" + ] + }, + "has_external_trigger": false, + "has_human_in_the_loop": false, + "trigger_setup_info": null, + "credentials_input_schema": { + "properties": { + "jina_api_key_credentials": { + "credentials_provider": [ + "jina" + ], + "credentials_types": [ + "api_key" + ], + "properties": { + "id": { + "title": "Id", + "type": "string" + }, + "title": { + "anyOf": [ + { + "type": "string" + }, + { + "type": "null" + } + ], + "default": null, + "title": "Title" + }, + "provider": { + "const": "jina", + "title": "Provider", + "type": "string" + }, + "type": { + "const": "api_key", + "title": "Type", + "type": "string" + } + }, + "required": [ + "id", + "provider", + "type" + ], + "title": "CredentialsMetaInput[Literal[], Literal['api_key']]", + "type": "object", + "discriminator_values": [] + }, + "webshare_proxy_user_password_credentials": { + "credentials_provider": [ + 
"webshare_proxy" + ], + "credentials_types": [ + "user_password" + ], + "properties": { + "id": { + "title": "Id", + "type": "string" + }, + "title": { + "anyOf": [ + { + "type": "string" + }, + { + "type": "null" + } + ], + "default": null, + "title": "Title" + }, + "provider": { + "const": "webshare_proxy", + "title": "Provider", + "type": "string" + }, + "type": { + "const": "user_password", + "title": "Type", + "type": "string" + } + }, + "required": [ + "id", + "provider", + "type" + ], + "title": "CredentialsMetaInput[Literal[], Literal['user_password']]", + "type": "object", + "discriminator_values": [] + }, + "anthropic_api_key_credentials": { + "credentials_provider": [ + "anthropic" + ], + "credentials_types": [ + "api_key" + ], + "properties": { + "id": { + "title": "Id", + "type": "string" + }, + "title": { + "anyOf": [ + { + "type": "string" + }, + { + "type": "null" + } + ], + "default": null, + "title": "Title" + }, + "provider": { + "const": "anthropic", + "title": "Provider", + "type": "string" + }, + "type": { + "const": "api_key", + "title": "Type", + "type": "string" + } + }, + "required": [ + "id", + "provider", + "type" + ], + "title": "CredentialsMetaInput[Literal[], Literal['api_key']]", + "type": "object", + "discriminator": "model", + "discriminator_mapping": { + "Llama-3.3-70B-Instruct": "llama_api", + "Llama-3.3-8B-Instruct": "llama_api", + "Llama-4-Maverick-17B-128E-Instruct-FP8": "llama_api", + "Llama-4-Scout-17B-16E-Instruct-FP8": "llama_api", + "Qwen/Qwen2.5-72B-Instruct-Turbo": "aiml_api", + "amazon/nova-lite-v1": "open_router", + "amazon/nova-micro-v1": "open_router", + "amazon/nova-pro-v1": "open_router", + "claude-3-7-sonnet-20250219": "anthropic", + "claude-3-haiku-20240307": "anthropic", + "claude-haiku-4-5-20251001": "anthropic", + "claude-opus-4-1-20250805": "anthropic", + "claude-opus-4-20250514": "anthropic", + "claude-opus-4-5-20251101": "anthropic", + "claude-sonnet-4-20250514": "anthropic", + "claude-sonnet-4-5-20250929": "anthropic", + "cohere/command-r-08-2024": "open_router", + "cohere/command-r-plus-08-2024": "open_router", + "deepseek/deepseek-chat": "open_router", + "deepseek/deepseek-r1-0528": "open_router", + "dolphin-mistral:latest": "ollama", + "google/gemini-2.0-flash-001": "open_router", + "google/gemini-2.0-flash-lite-001": "open_router", + "google/gemini-2.5-flash": "open_router", + "google/gemini-2.5-flash-lite-preview-06-17": "open_router", + "google/gemini-2.5-pro-preview-03-25": "open_router", + "google/gemini-3-pro-preview": "open_router", + "gpt-3.5-turbo": "openai", + "gpt-4-turbo": "openai", + "gpt-4.1-2025-04-14": "openai", + "gpt-4.1-mini-2025-04-14": "openai", + "gpt-4o": "openai", + "gpt-4o-mini": "openai", + "gpt-5-2025-08-07": "openai", + "gpt-5-chat-latest": "openai", + "gpt-5-mini-2025-08-07": "openai", + "gpt-5-nano-2025-08-07": "openai", + "gpt-5.1-2025-11-13": "openai", + "gryphe/mythomax-l2-13b": "open_router", + "llama-3.1-8b-instant": "groq", + "llama-3.3-70b-versatile": "groq", + "llama3": "ollama", + "llama3.1:405b": "ollama", + "llama3.2": "ollama", + "llama3.3": "ollama", + "meta-llama/Llama-3.2-3B-Instruct-Turbo": "aiml_api", + "meta-llama/Llama-3.3-70B-Instruct-Turbo": "aiml_api", + "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo": "aiml_api", + "meta-llama/llama-4-maverick": "open_router", + "meta-llama/llama-4-scout": "open_router", + "microsoft/wizardlm-2-8x22b": "open_router", + "mistralai/mistral-nemo": "open_router", + "moonshotai/kimi-k2": "open_router", + "nousresearch/hermes-3-llama-3.1-405b": 
"open_router", + "nousresearch/hermes-3-llama-3.1-70b": "open_router", + "nvidia/llama-3.1-nemotron-70b-instruct": "aiml_api", + "o1": "openai", + "o1-mini": "openai", + "o3-2025-04-16": "openai", + "o3-mini": "openai", + "openai/gpt-oss-120b": "open_router", + "openai/gpt-oss-20b": "open_router", + "perplexity/sonar": "open_router", + "perplexity/sonar-deep-research": "open_router", + "perplexity/sonar-pro": "open_router", + "qwen/qwen3-235b-a22b-thinking-2507": "open_router", + "qwen/qwen3-coder": "open_router", + "v0-1.0-md": "v0", + "v0-1.5-lg": "v0", + "v0-1.5-md": "v0", + "x-ai/grok-4": "open_router", + "x-ai/grok-4-fast": "open_router", + "x-ai/grok-4.1-fast": "open_router", + "x-ai/grok-code-fast-1": "open_router" + }, + "discriminator_values": [ + "claude-sonnet-4-5-20250929" + ] + } + }, + "required": [ + "jina_api_key_credentials", + "webshare_proxy_user_password_credentials", + "anthropic_api_key_credentials" + ], + "title": "YouTubeTranscriptionScraperCredentialsInputSchema", + "type": "object" + } +} \ No newline at end of file diff --git a/autogpt_platform/backend/agents/agent_d85882b8-633f-44ce-a315-c20a8c123d19.json b/autogpt_platform/backend/agents/agent_d85882b8-633f-44ce-a315-c20a8c123d19.json new file mode 100644 index 0000000000..144ecf8e94 --- /dev/null +++ b/autogpt_platform/backend/agents/agent_d85882b8-633f-44ce-a315-c20a8c123d19.json @@ -0,0 +1,403 @@ +{ + "id": "ed2091cf-5b27-45a9-b3ea-42396f95b256", + "version": 12, + "is_active": true, + "name": "Flux AI Image Generator", + "description": "Transform ideas into breathtaking images with this AI-powered Image Generator. Using cutting-edge Flux AI technology, the tool crafts highly detailed, photorealistic visuals from simple text prompts. Perfect for artists, marketers, and content creators, this generator produces unique images tailored to user specifications. From fantastical scenes to lifelike portraits, users can unleash creativity with professional-quality results in seconds. 
Easy to use and endlessly versatile, bring imagination to life with the AI Image Generator today!", + "instructions": null, + "recommended_schedule_cron": null, + "nodes": [ + { + "id": "7482c59d-725f-4686-82b9-0dfdc4e92316", + "block_id": "cc10ff7b-7753-4ff2-9af6-9399b1a7eddc", + "input_default": { + "text": "Press the \"Advanced\" toggle and input your replicate API key.\n\nYou can get one here:\nhttps://replicate.com/account/api-tokens\n" + }, + "metadata": { + "position": { + "x": 872.8268131538296, + "y": 614.9436919065381 + } + }, + "input_links": [], + "output_links": [], + "graph_id": "ed2091cf-5b27-45a9-b3ea-42396f95b256", + "graph_version": 12, + "webhook_id": null, + "webhook": null + }, + { + "id": "0d1dec1a-e4ee-4349-9673-449a01bbf14e", + "block_id": "363ae599-353e-4804-937e-b2ee3cef3da4", + "input_default": { + "name": "Generated Image" + }, + "metadata": { + "position": { + "x": 1453.6844137728922, + "y": 963.2466395125115 + } + }, + "input_links": [ + { + "id": "06665d23-2f3d-4445-8f22-573446fcff5b", + "source_id": "50bc23e9-f2b7-4959-8710-99679ed9eeea", + "sink_id": "0d1dec1a-e4ee-4349-9673-449a01bbf14e", + "source_name": "result", + "sink_name": "value", + "is_static": false + } + ], + "output_links": [], + "graph_id": "ed2091cf-5b27-45a9-b3ea-42396f95b256", + "graph_version": 12, + "webhook_id": null, + "webhook": null + }, + { + "id": "6f24c45f-1548-4eda-9784-da06ce0abef8", + "block_id": "c0a8e994-ebf1-4a9c-a4d8-89d09c86741b", + "input_default": { + "name": "Image Subject", + "value": "Otto the friendly, purple \"Chief Automation Octopus\" helping people automate their tedious tasks.", + "description": "The subject of the image" + }, + "metadata": { + "position": { + "x": -314.43009631839783, + "y": 962.935949165938 + } + }, + "input_links": [], + "output_links": [ + { + "id": "1077c61a-a32a-4ed7-becf-11bcf835b914", + "source_id": "6f24c45f-1548-4eda-9784-da06ce0abef8", + "sink_id": "0d1bca9a-d9b8-4bfd-a19c-fe50b54f4b12", + "source_name": "result", + "sink_name": "prompt_values_#_TOPIC", + "is_static": true + } + ], + "graph_id": "ed2091cf-5b27-45a9-b3ea-42396f95b256", + "graph_version": 12, + "webhook_id": null, + "webhook": null + }, + { + "id": "50bc23e9-f2b7-4959-8710-99679ed9eeea", + "block_id": "90f8c45e-e983-4644-aa0b-b4ebe2f531bc", + "input_default": { + "prompt": "dog", + "output_format": "png", + "replicate_model_name": "Flux Pro 1.1" + }, + "metadata": { + "position": { + "x": 873.0119949791526, + "y": 966.1604399052493 + } + }, + "input_links": [ + { + "id": "a17ec505-9377-4700-8fe0-124ca81d43a9", + "source_id": "0d1bca9a-d9b8-4bfd-a19c-fe50b54f4b12", + "sink_id": "50bc23e9-f2b7-4959-8710-99679ed9eeea", + "source_name": "response", + "sink_name": "prompt", + "is_static": false + } + ], + "output_links": [ + { + "id": "06665d23-2f3d-4445-8f22-573446fcff5b", + "source_id": "50bc23e9-f2b7-4959-8710-99679ed9eeea", + "sink_id": "0d1dec1a-e4ee-4349-9673-449a01bbf14e", + "source_name": "result", + "sink_name": "value", + "is_static": false + } + ], + "graph_id": "ed2091cf-5b27-45a9-b3ea-42396f95b256", + "graph_version": 12, + "webhook_id": null, + "webhook": null + }, + { + "id": "0d1bca9a-d9b8-4bfd-a19c-fe50b54f4b12", + "block_id": "1f292d4a-41a4-4977-9684-7c8d560b9f91", + "input_default": { + "model": "gpt-4o-mini", + "prompt": "Generate an incredibly detailed, photorealistic image prompt about {{TOPIC}}, describing the camera it's taken with and prompting the diffusion model to use all the best quality techniques.\n\nOutput only the prompt with no additional 
commentary.", + "prompt_values": {} + }, + "metadata": { + "position": { + "x": 277.3057034159709, + "y": 962.8382498113764 + } + }, + "input_links": [ + { + "id": "1077c61a-a32a-4ed7-becf-11bcf835b914", + "source_id": "6f24c45f-1548-4eda-9784-da06ce0abef8", + "sink_id": "0d1bca9a-d9b8-4bfd-a19c-fe50b54f4b12", + "source_name": "result", + "sink_name": "prompt_values_#_TOPIC", + "is_static": true + } + ], + "output_links": [ + { + "id": "a17ec505-9377-4700-8fe0-124ca81d43a9", + "source_id": "0d1bca9a-d9b8-4bfd-a19c-fe50b54f4b12", + "sink_id": "50bc23e9-f2b7-4959-8710-99679ed9eeea", + "source_name": "response", + "sink_name": "prompt", + "is_static": false + } + ], + "graph_id": "ed2091cf-5b27-45a9-b3ea-42396f95b256", + "graph_version": 12, + "webhook_id": null, + "webhook": null + } + ], + "links": [ + { + "id": "1077c61a-a32a-4ed7-becf-11bcf835b914", + "source_id": "6f24c45f-1548-4eda-9784-da06ce0abef8", + "sink_id": "0d1bca9a-d9b8-4bfd-a19c-fe50b54f4b12", + "source_name": "result", + "sink_name": "prompt_values_#_TOPIC", + "is_static": true + }, + { + "id": "06665d23-2f3d-4445-8f22-573446fcff5b", + "source_id": "50bc23e9-f2b7-4959-8710-99679ed9eeea", + "sink_id": "0d1dec1a-e4ee-4349-9673-449a01bbf14e", + "source_name": "result", + "sink_name": "value", + "is_static": false + }, + { + "id": "a17ec505-9377-4700-8fe0-124ca81d43a9", + "source_id": "0d1bca9a-d9b8-4bfd-a19c-fe50b54f4b12", + "sink_id": "50bc23e9-f2b7-4959-8710-99679ed9eeea", + "source_name": "response", + "sink_name": "prompt", + "is_static": false + } + ], + "forked_from_id": null, + "forked_from_version": null, + "sub_graphs": [], + "user_id": "", + "created_at": "2024-12-20T18:46:11.492Z", + "input_schema": { + "type": "object", + "properties": { + "Image Subject": { + "advanced": false, + "secret": false, + "title": "Image Subject", + "description": "The subject of the image", + "default": "Otto the friendly, purple \"Chief Automation Octopus\" helping people automate their tedious tasks." 
+ } + }, + "required": [] + }, + "output_schema": { + "type": "object", + "properties": { + "Generated Image": { + "advanced": false, + "secret": false, + "title": "Generated Image" + } + }, + "required": [ + "Generated Image" + ] + }, + "has_external_trigger": false, + "has_human_in_the_loop": false, + "trigger_setup_info": null, + "credentials_input_schema": { + "properties": { + "replicate_api_key_credentials": { + "credentials_provider": [ + "replicate" + ], + "credentials_types": [ + "api_key" + ], + "properties": { + "id": { + "title": "Id", + "type": "string" + }, + "title": { + "anyOf": [ + { + "type": "string" + }, + { + "type": "null" + } + ], + "default": null, + "title": "Title" + }, + "provider": { + "const": "replicate", + "title": "Provider", + "type": "string" + }, + "type": { + "const": "api_key", + "title": "Type", + "type": "string" + } + }, + "required": [ + "id", + "provider", + "type" + ], + "title": "CredentialsMetaInput[Literal[], Literal['api_key']]", + "type": "object", + "discriminator_values": [] + }, + "openai_api_key_credentials": { + "credentials_provider": [ + "openai" + ], + "credentials_types": [ + "api_key" + ], + "properties": { + "id": { + "title": "Id", + "type": "string" + }, + "title": { + "anyOf": [ + { + "type": "string" + }, + { + "type": "null" + } + ], + "default": null, + "title": "Title" + }, + "provider": { + "const": "openai", + "title": "Provider", + "type": "string" + }, + "type": { + "const": "api_key", + "title": "Type", + "type": "string" + } + }, + "required": [ + "id", + "provider", + "type" + ], + "title": "CredentialsMetaInput[Literal[], Literal['api_key']]", + "type": "object", + "discriminator": "model", + "discriminator_mapping": { + "Llama-3.3-70B-Instruct": "llama_api", + "Llama-3.3-8B-Instruct": "llama_api", + "Llama-4-Maverick-17B-128E-Instruct-FP8": "llama_api", + "Llama-4-Scout-17B-16E-Instruct-FP8": "llama_api", + "Qwen/Qwen2.5-72B-Instruct-Turbo": "aiml_api", + "amazon/nova-lite-v1": "open_router", + "amazon/nova-micro-v1": "open_router", + "amazon/nova-pro-v1": "open_router", + "claude-3-7-sonnet-20250219": "anthropic", + "claude-3-haiku-20240307": "anthropic", + "claude-haiku-4-5-20251001": "anthropic", + "claude-opus-4-1-20250805": "anthropic", + "claude-opus-4-20250514": "anthropic", + "claude-opus-4-5-20251101": "anthropic", + "claude-sonnet-4-20250514": "anthropic", + "claude-sonnet-4-5-20250929": "anthropic", + "cohere/command-r-08-2024": "open_router", + "cohere/command-r-plus-08-2024": "open_router", + "deepseek/deepseek-chat": "open_router", + "deepseek/deepseek-r1-0528": "open_router", + "dolphin-mistral:latest": "ollama", + "google/gemini-2.0-flash-001": "open_router", + "google/gemini-2.0-flash-lite-001": "open_router", + "google/gemini-2.5-flash": "open_router", + "google/gemini-2.5-flash-lite-preview-06-17": "open_router", + "google/gemini-2.5-pro-preview-03-25": "open_router", + "google/gemini-3-pro-preview": "open_router", + "gpt-3.5-turbo": "openai", + "gpt-4-turbo": "openai", + "gpt-4.1-2025-04-14": "openai", + "gpt-4.1-mini-2025-04-14": "openai", + "gpt-4o": "openai", + "gpt-4o-mini": "openai", + "gpt-5-2025-08-07": "openai", + "gpt-5-chat-latest": "openai", + "gpt-5-mini-2025-08-07": "openai", + "gpt-5-nano-2025-08-07": "openai", + "gpt-5.1-2025-11-13": "openai", + "gryphe/mythomax-l2-13b": "open_router", + "llama-3.1-8b-instant": "groq", + "llama-3.3-70b-versatile": "groq", + "llama3": "ollama", + "llama3.1:405b": "ollama", + "llama3.2": "ollama", + "llama3.3": "ollama", + 
"meta-llama/Llama-3.2-3B-Instruct-Turbo": "aiml_api", + "meta-llama/Llama-3.3-70B-Instruct-Turbo": "aiml_api", + "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo": "aiml_api", + "meta-llama/llama-4-maverick": "open_router", + "meta-llama/llama-4-scout": "open_router", + "microsoft/wizardlm-2-8x22b": "open_router", + "mistralai/mistral-nemo": "open_router", + "moonshotai/kimi-k2": "open_router", + "nousresearch/hermes-3-llama-3.1-405b": "open_router", + "nousresearch/hermes-3-llama-3.1-70b": "open_router", + "nvidia/llama-3.1-nemotron-70b-instruct": "aiml_api", + "o1": "openai", + "o1-mini": "openai", + "o3-2025-04-16": "openai", + "o3-mini": "openai", + "openai/gpt-oss-120b": "open_router", + "openai/gpt-oss-20b": "open_router", + "perplexity/sonar": "open_router", + "perplexity/sonar-deep-research": "open_router", + "perplexity/sonar-pro": "open_router", + "qwen/qwen3-235b-a22b-thinking-2507": "open_router", + "qwen/qwen3-coder": "open_router", + "v0-1.0-md": "v0", + "v0-1.5-lg": "v0", + "v0-1.5-md": "v0", + "x-ai/grok-4": "open_router", + "x-ai/grok-4-fast": "open_router", + "x-ai/grok-4.1-fast": "open_router", + "x-ai/grok-code-fast-1": "open_router" + }, + "discriminator_values": [ + "gpt-4o-mini" + ] + } + }, + "required": [ + "replicate_api_key_credentials", + "openai_api_key_credentials" + ], + "title": "FluxAIImageGeneratorCredentialsInputSchema", + "type": "object" + } +} \ No newline at end of file diff --git a/autogpt_platform/backend/agents/agent_e437cc95-e671-489d-b915-76561fba8c7f.json b/autogpt_platform/backend/agents/agent_e437cc95-e671-489d-b915-76561fba8c7f.json new file mode 100644 index 0000000000..b3b29da570 --- /dev/null +++ b/autogpt_platform/backend/agents/agent_e437cc95-e671-489d-b915-76561fba8c7f.json @@ -0,0 +1,1064 @@ +{ + "id": "1f14dce1-9cbd-4c96-aeaf-630675ef3a6e", + "version": 17, + "is_active": true, + "name": "AI YouTube-to-Blog Converter", + "description": "Effortlessly turn YouTube videos into high-quality, SEO-optimized blog posts with this innovative AI YouTube-to-Blog Converter. Perfect for content creators, marketers, and bloggers, this tool analyses video content and generates well-structured articles tailored to your specifications. Simply input a YouTube URL, set your desired tone and word count, and let the AI work its magic. The converter extracts key points, maintains the original message, and enhances readability for a text audience. With options for casual, professional, educational, or formal tones, it adapts to various niches and target readers. Featuring smart SEO optimization, engaging titles, and clear subheadings, this tool helps repurpose video content into shareable, search-engine-friendly blog posts. 
Expand your content strategy and reach a wider audience by transforming your YouTube videos into compelling written content with the AI YouTube-to-Blog Converter.", + "instructions": null, + "recommended_schedule_cron": null, + "nodes": [ + { + "id": "7f37f7a6-6fb9-4c8b-9992-0638abfd7919", + "block_id": "c0a8e994-ebf1-4a9c-a4d8-89d09c86741b", + "input_default": { + "name": "YouTube URL", + "title": null, + "value": "https://www.youtube.com/watch?v=J1Mzd1ZeSaU", + "secret": false, + "advanced": false, + "description": "Enter the URL of the YouTube video you want to convert to a blog post", + "placeholder_values": [] + }, + "metadata": { + "position": { + "x": -1083.525390625, + "y": -72.1875 + } + }, + "input_links": [], + "output_links": [ + { + "id": "7548d09d-7ac9-4c3a-901e-f29a2e24e729", + "source_id": "7f37f7a6-6fb9-4c8b-9992-0638abfd7919", + "sink_id": "9fa16df6-d785-4b0a-9f58-7f42544d5cdf", + "source_name": "result", + "sink_name": "youtube_url", + "is_static": true + } + ], + "graph_id": "1f14dce1-9cbd-4c96-aeaf-630675ef3a6e", + "graph_version": 17, + "webhook_id": null, + "webhook": null + }, + { + "id": "13013c6b-505c-408f-aa30-d43a4e0b4309", + "block_id": "363ae599-353e-4804-937e-b2ee3cef3da4", + "input_default": { + "name": "Blog Post", + "title": null, + "value": null, + "format": "", + "secret": false, + "advanced": false, + "description": "The main body of the newly written blog post." + }, + "metadata": { + "position": { + "x": 3771.885044160941, + "y": -19.86303750155819 + } + }, + "input_links": [ + { + "id": "0f89a6e6-d2c4-484f-92de-abcec1f6ecbb", + "source_id": "ef4553a0-4e84-45f1-b972-f6e7123d68e7", + "sink_id": "13013c6b-505c-408f-aa30-d43a4e0b4309", + "source_name": "output", + "sink_name": "value", + "is_static": false + } + ], + "output_links": [], + "graph_id": "1f14dce1-9cbd-4c96-aeaf-630675ef3a6e", + "graph_version": 17, + "webhook_id": null, + "webhook": null + }, + { + "id": "96280d45-4bdd-4610-958b-054775e77108", + "block_id": "c0a8e994-ebf1-4a9c-a4d8-89d09c86741b", + "input_default": { + "name": "Blog Tone", + "title": null, + "value": "Educational", + "secret": false, + "advanced": false, + "description": "Select the desired tone for the blog post", + "placeholder_values": [ + "Professional", + "Casual", + "Educational", + "Conversational", + "Formal" + ] + }, + "metadata": { + "position": { + "x": -526.960448013501, + "y": 1230.0347965176616 + } + }, + "input_links": [], + "output_links": [ + { + "id": "fa2f1f6a-4798-4e75-aede-44cc1d423f2b", + "source_id": "96280d45-4bdd-4610-958b-054775e77108", + "sink_id": "2b85cb39-9cfc-411c-a9fb-a3af03df9e00", + "source_name": "result", + "sink_name": "prompt_values_#_TONE", + "is_static": true + } + ], + "graph_id": "1f14dce1-9cbd-4c96-aeaf-630675ef3a6e", + "graph_version": 17, + "webhook_id": null, + "webhook": null + }, + { + "id": "9fa16df6-d785-4b0a-9f58-7f42544d5cdf", + "block_id": "f3a8f7e1-4b1d-4e5f-9f2a-7c3d5a2e6b4c", + "input_default": {}, + "metadata": { + "position": { + "x": -521.103515625, + "y": -69.931640625 + } + }, + "input_links": [ + { + "id": "7548d09d-7ac9-4c3a-901e-f29a2e24e729", + "source_id": "7f37f7a6-6fb9-4c8b-9992-0638abfd7919", + "sink_id": "9fa16df6-d785-4b0a-9f58-7f42544d5cdf", + "source_name": "result", + "sink_name": "youtube_url", + "is_static": true + } + ], + "output_links": [ + { + "id": "223f88a6-4d00-4fbf-93b1-e6d3b005e46a", + "source_id": "9fa16df6-d785-4b0a-9f58-7f42544d5cdf", + "sink_id": "c8e51364-ecad-407f-b4e5-9f5496744285", + "source_name": "error", + "sink_name": "value", + 
"is_static": false + }, + { + "id": "cda30ed3-c4bb-429b-ac45-641606b75959", + "source_id": "9fa16df6-d785-4b0a-9f58-7f42544d5cdf", + "sink_id": "1806c84c-2ff2-4045-9fc7-12b5be4086ab", + "source_name": "transcript", + "sink_name": "prompt_values_#_TRANSCRIPT", + "is_static": false + } + ], + "graph_id": "1f14dce1-9cbd-4c96-aeaf-630675ef3a6e", + "graph_version": 17, + "webhook_id": null, + "webhook": null + }, + { + "id": "abad3718-38d8-4b54-8383-565b6bb0b8d2", + "block_id": "c0a8e994-ebf1-4a9c-a4d8-89d09c86741b", + "input_default": { + "name": "Blog Length", + "title": null, + "value": "4000", + "secret": false, + "advanced": false, + "description": "Enter the desired word count for the blog post", + "placeholder_values": [] + }, + "metadata": { + "position": { + "x": -1090.8784904050794, + "y": 1225.5455600527607 + } + }, + "input_links": [], + "output_links": [ + { + "id": "9c0078a5-d586-43a6-b893-23cc246fba63", + "source_id": "abad3718-38d8-4b54-8383-565b6bb0b8d2", + "sink_id": "2b85cb39-9cfc-411c-a9fb-a3af03df9e00", + "source_name": "result", + "sink_name": "prompt_values_#_WORD_COUNT", + "is_static": true + } + ], + "graph_id": "1f14dce1-9cbd-4c96-aeaf-630675ef3a6e", + "graph_version": 17, + "webhook_id": null, + "webhook": null + }, + { + "id": "1806c84c-2ff2-4045-9fc7-12b5be4086ab", + "block_id": "1f292d4a-41a4-4977-9684-7c8d560b9f91", + "input_default": { + "model": "gpt-4o", + "retry": 3, + "prompt": "You are an expert content analyst tasked with extracting comprehensive information from a video transcript. Your goal is to provide a detailed analysis and summary of the transcript, ensuring no important details are lost.\n\nHere is the video transcript you need to analyze:\n\n\n{{TRANSCRIPT}}\n\n\nPlease follow these steps to analyze the transcript:\n\n1. Carefully read the entire transcript.\n2. Identify potential mistranscriptions and infer the most likely correct words based on context.\n3. Extract the main topics discussed in the video.\n4. For each main topic, identify all relevant key points and supporting details.\n5. Analyze the overall structure and flow of the content.\n6. Compile a comprehensive summary of all the information extracted from the transcript.\n\nBefore providing your final output, wrap your analysis inside tags. Include the following:\n\n1. List potential mistranscriptions and their likely corrections, quoting the original text.\n2. Identify and quote key phrases for each main topic.\n3. Outline the content structure by noting transitions between topics.\n4. Explain your reasoning for extracting main topics and analyzing the content structure.\n\nAfter your analysis, present your findings in the following format, enclosed in tags:\n\n1. Main Topics: List all main topics discussed in the video.\n2. Detailed Breakdown: For each main topic, provide:\n a. A brief description of the topic\n b. All key points related to the topic\n c. Any supporting details or examples mentioned\n3. Content Structure: Describe the overall structure and flow of the video content.\n4. Comprehensive Summary: Provide a detailed summary that captures all significant information from the transcript.\n\nExample output structure (replace with actual content):\n\n\n1. Main Topics:\n - Topic A\n - Topic B\n - Topic C\n\n2. Detailed Breakdown:\n Topic A:\n a. Description: [Brief description of Topic A]\n b. Key points:\n - Point 1\n - Point 2\n c. Supporting details:\n - Detail 1\n - Detail 2\n\n [Repeat for Topics B and C]\n\n3. 
Content Structure:\n [Description of overall structure and flow]\n\n4. Comprehensive Summary:\n [Detailed summary of all significant information]\n\n\nRemember, the goal is to extract and present as much valuable information as possible from the transcript, ensuring no important details are lost. Be aware that the transcript is auto-generated, so there might be mistranscriptions.", + "sys_prompt": "", + "ollama_host": "localhost:11434", + "prompt_values": {} + }, + "metadata": { + "position": { + "x": 36.80664062500006, + "y": -72.1875 + } + }, + "input_links": [ + { + "id": "cda30ed3-c4bb-429b-ac45-641606b75959", + "source_id": "9fa16df6-d785-4b0a-9f58-7f42544d5cdf", + "sink_id": "1806c84c-2ff2-4045-9fc7-12b5be4086ab", + "source_name": "transcript", + "sink_name": "prompt_values_#_TRANSCRIPT", + "is_static": false + } + ], + "output_links": [ + { + "id": "8b062f6a-baa4-495b-ba1e-1bfc8824b3a1", + "source_id": "1806c84c-2ff2-4045-9fc7-12b5be4086ab", + "sink_id": "c8e51364-ecad-407f-b4e5-9f5496744285", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "b747c24b-b846-40a6-905d-4b1dd56f5050", + "source_id": "1806c84c-2ff2-4045-9fc7-12b5be4086ab", + "sink_id": "2b85cb39-9cfc-411c-a9fb-a3af03df9e00", + "source_name": "response", + "sink_name": "prompt_values_#_ANALYSIS", + "is_static": false + } + ], + "graph_id": "1f14dce1-9cbd-4c96-aeaf-630675ef3a6e", + "graph_version": 17, + "webhook_id": null, + "webhook": null + }, + { + "id": "2b85cb39-9cfc-411c-a9fb-a3af03df9e00", + "block_id": "1f292d4a-41a4-4977-9684-7c8d560b9f91", + "input_default": { + "model": "o1", + "retry": 3, + "prompt": "Create a well-structured blog post based on the following video content analysis:\n\n{{ANALYSIS}}\n\nUse the following parameters:\n- Write the blog post in the following tone: {{TONE}}\n- Target word count: {{WORD_COUNT}}\n\nThe blog post should include:\n1. An engaging title\n2. An introduction that hooks the reader\n3. Main body with appropriate headings and subheadings\n4. A conclusion that summarizes key points\n5. 
SEO optimization (include relevant keywords naturally)\n\nFormat the blog post in plaintext, as beautifully as the medium allows.\n\nOutput the title inside a complete set of xml tags.\nOutput the blogpost inside a complete set of xml tags.\nOutput a list of the used keywords inside a complete set of xml tags\n\nNEVER write xml tags inside of these xml tags, even if mentioned in the video, otherwise the parser will completely fail.", + "sys_prompt": "", + "ollama_host": "localhost:11434", + "prompt_values": {} + }, + "metadata": { + "position": { + "x": 576.2402343750001, + "y": -80.80078125000006 + } + }, + "input_links": [ + { + "id": "fa2f1f6a-4798-4e75-aede-44cc1d423f2b", + "source_id": "96280d45-4bdd-4610-958b-054775e77108", + "sink_id": "2b85cb39-9cfc-411c-a9fb-a3af03df9e00", + "source_name": "result", + "sink_name": "prompt_values_#_TONE", + "is_static": true + }, + { + "id": "b747c24b-b846-40a6-905d-4b1dd56f5050", + "source_id": "1806c84c-2ff2-4045-9fc7-12b5be4086ab", + "sink_id": "2b85cb39-9cfc-411c-a9fb-a3af03df9e00", + "source_name": "response", + "sink_name": "prompt_values_#_ANALYSIS", + "is_static": false + }, + { + "id": "9c0078a5-d586-43a6-b893-23cc246fba63", + "source_id": "abad3718-38d8-4b54-8383-565b6bb0b8d2", + "sink_id": "2b85cb39-9cfc-411c-a9fb-a3af03df9e00", + "source_name": "result", + "sink_name": "prompt_values_#_WORD_COUNT", + "is_static": true + } + ], + "output_links": [ + { + "id": "5fbdce15-1141-49e4-a3be-243fdc3c81da", + "source_id": "2b85cb39-9cfc-411c-a9fb-a3af03df9e00", + "sink_id": "ffdcd234-9c01-4576-8768-b1f092f4b3bb", + "source_name": "response", + "sink_name": "input_xml", + "is_static": false + }, + { + "id": "c5108577-ba7f-4878-bfb4-c45cc4d78dba", + "source_id": "2b85cb39-9cfc-411c-a9fb-a3af03df9e00", + "sink_id": "c8e51364-ecad-407f-b4e5-9f5496744285", + "source_name": "error", + "sink_name": "value", + "is_static": false + } + ], + "graph_id": "1f14dce1-9cbd-4c96-aeaf-630675ef3a6e", + "graph_version": 17, + "webhook_id": null, + "webhook": null + }, + { + "id": "c8e51364-ecad-407f-b4e5-9f5496744285", + "block_id": "363ae599-353e-4804-937e-b2ee3cef3da4", + "input_default": { + "name": "Error", + "title": null, + "value": null, + "format": "", + "secret": false, + "advanced": false, + "description": null + }, + "metadata": { + "position": { + "x": 3680.5777660860595, + "y": 1050.178642988597 + } + }, + "input_links": [ + { + "id": "223f88a6-4d00-4fbf-93b1-e6d3b005e46a", + "source_id": "9fa16df6-d785-4b0a-9f58-7f42544d5cdf", + "sink_id": "c8e51364-ecad-407f-b4e5-9f5496744285", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "da4f2ae2-372c-4e33-a350-6d2e07430cc1", + "source_id": "b6786db8-d6aa-42de-9d19-515d5e7d9fc6", + "sink_id": "c8e51364-ecad-407f-b4e5-9f5496744285", + "source_name": "missing", + "sink_name": "value", + "is_static": false + }, + { + "id": "94343835-0dc5-4ab8-9755-5ef1a3ca320e", + "source_id": "ffdcd234-9c01-4576-8768-b1f092f4b3bb", + "sink_id": "c8e51364-ecad-407f-b4e5-9f5496744285", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "678226df-d4f0-440b-841e-493be3b57df8", + "source_id": "ef4553a0-4e84-45f1-b972-f6e7123d68e7", + "sink_id": "c8e51364-ecad-407f-b4e5-9f5496744285", + "source_name": "missing", + "sink_name": "value", + "is_static": false + }, + { + "id": "02e273fe-df05-471a-935e-991f4ab4a65e", + "source_id": "3c3b6d76-54de-4b89-8022-c9ef7ca34b86", + "sink_id": "c8e51364-ecad-407f-b4e5-9f5496744285", + "source_name": "missing", + "sink_name": 
"value", + "is_static": false + }, + { + "id": "8b062f6a-baa4-495b-ba1e-1bfc8824b3a1", + "source_id": "1806c84c-2ff2-4045-9fc7-12b5be4086ab", + "sink_id": "c8e51364-ecad-407f-b4e5-9f5496744285", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "c5108577-ba7f-4878-bfb4-c45cc4d78dba", + "source_id": "2b85cb39-9cfc-411c-a9fb-a3af03df9e00", + "sink_id": "c8e51364-ecad-407f-b4e5-9f5496744285", + "source_name": "error", + "sink_name": "value", + "is_static": false + } + ], + "output_links": [], + "graph_id": "1f14dce1-9cbd-4c96-aeaf-630675ef3a6e", + "graph_version": 17, + "webhook_id": null, + "webhook": null + }, + { + "id": "ffdcd234-9c01-4576-8768-b1f092f4b3bb", + "block_id": "286380af-9529-4b55-8be0-1d7c854abdb5", + "input_default": {}, + "metadata": { + "position": { + "x": 1251.681602785506, + "y": -39.58716347603245 + } + }, + "input_links": [ + { + "id": "5fbdce15-1141-49e4-a3be-243fdc3c81da", + "source_id": "2b85cb39-9cfc-411c-a9fb-a3af03df9e00", + "sink_id": "ffdcd234-9c01-4576-8768-b1f092f4b3bb", + "source_name": "response", + "sink_name": "input_xml", + "is_static": false + } + ], + "output_links": [ + { + "id": "94343835-0dc5-4ab8-9755-5ef1a3ca320e", + "source_id": "ffdcd234-9c01-4576-8768-b1f092f4b3bb", + "sink_id": "c8e51364-ecad-407f-b4e5-9f5496744285", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "16f51325-b332-46e5-bce5-b5f5fc72fb2b", + "source_id": "ffdcd234-9c01-4576-8768-b1f092f4b3bb", + "sink_id": "b6786db8-d6aa-42de-9d19-515d5e7d9fc6", + "source_name": "parsed_xml", + "sink_name": "input", + "is_static": false + }, + { + "id": "018b486e-8d8e-499b-902e-58466d444034", + "source_id": "ffdcd234-9c01-4576-8768-b1f092f4b3bb", + "sink_id": "3c3b6d76-54de-4b89-8022-c9ef7ca34b86", + "source_name": "parsed_xml", + "sink_name": "input", + "is_static": false + }, + { + "id": "0ae3eff0-5861-4fda-a3a5-4d9755255c67", + "source_id": "ffdcd234-9c01-4576-8768-b1f092f4b3bb", + "sink_id": "ef4553a0-4e84-45f1-b972-f6e7123d68e7", + "source_name": "parsed_xml", + "sink_name": "input", + "is_static": false + } + ], + "graph_id": "1f14dce1-9cbd-4c96-aeaf-630675ef3a6e", + "graph_version": 17, + "webhook_id": null, + "webhook": null + }, + { + "id": "3c3b6d76-54de-4b89-8022-c9ef7ca34b86", + "block_id": "0e50422c-6dee-4145-83d6-3a5a392f65de", + "input_default": { + "key": "title" + }, + "metadata": { + "position": { + "x": 2437.644150293325, + "y": -26.895480664250428 + } + }, + "input_links": [ + { + "id": "018b486e-8d8e-499b-902e-58466d444034", + "source_id": "ffdcd234-9c01-4576-8768-b1f092f4b3bb", + "sink_id": "3c3b6d76-54de-4b89-8022-c9ef7ca34b86", + "source_name": "parsed_xml", + "sink_name": "input", + "is_static": false + } + ], + "output_links": [ + { + "id": "02e273fe-df05-471a-935e-991f4ab4a65e", + "source_id": "3c3b6d76-54de-4b89-8022-c9ef7ca34b86", + "sink_id": "c8e51364-ecad-407f-b4e5-9f5496744285", + "source_name": "missing", + "sink_name": "value", + "is_static": false + }, + { + "id": "c0972c6b-b63f-4f20-a952-f72cdd518fbe", + "source_id": "3c3b6d76-54de-4b89-8022-c9ef7ca34b86", + "sink_id": "8fabda6c-b71f-479b-a729-14f2c97c8430", + "source_name": "output", + "sink_name": "value", + "is_static": false + } + ], + "graph_id": "1f14dce1-9cbd-4c96-aeaf-630675ef3a6e", + "graph_version": 17, + "webhook_id": null, + "webhook": null + }, + { + "id": "b6786db8-d6aa-42de-9d19-515d5e7d9fc6", + "block_id": "0e50422c-6dee-4145-83d6-3a5a392f65de", + "input_default": { + "key": "keywords" + }, + "metadata": { + 
"position": { + "x": 3047.168210470221, + "y": -18.693735013756466 + } + }, + "input_links": [ + { + "id": "16f51325-b332-46e5-bce5-b5f5fc72fb2b", + "source_id": "ffdcd234-9c01-4576-8768-b1f092f4b3bb", + "sink_id": "b6786db8-d6aa-42de-9d19-515d5e7d9fc6", + "source_name": "parsed_xml", + "sink_name": "input", + "is_static": false + } + ], + "output_links": [ + { + "id": "da4f2ae2-372c-4e33-a350-6d2e07430cc1", + "source_id": "b6786db8-d6aa-42de-9d19-515d5e7d9fc6", + "sink_id": "c8e51364-ecad-407f-b4e5-9f5496744285", + "source_name": "missing", + "sink_name": "value", + "is_static": false + }, + { + "id": "b2609fad-e411-47cc-89e2-6991edc373ec", + "source_id": "b6786db8-d6aa-42de-9d19-515d5e7d9fc6", + "sink_id": "30772e58-9ffe-4b02-a1c8-1e7f5f395a02", + "source_name": "output", + "sink_name": "value", + "is_static": false + } + ], + "graph_id": "1f14dce1-9cbd-4c96-aeaf-630675ef3a6e", + "graph_version": 17, + "webhook_id": null, + "webhook": null + }, + { + "id": "ef4553a0-4e84-45f1-b972-f6e7123d68e7", + "block_id": "0e50422c-6dee-4145-83d6-3a5a392f65de", + "input_default": { + "key": "blog_post" + }, + "metadata": { + "position": { + "x": 1842.6779133913146, + "y": -27.334164720592128 + } + }, + "input_links": [ + { + "id": "0ae3eff0-5861-4fda-a3a5-4d9755255c67", + "source_id": "ffdcd234-9c01-4576-8768-b1f092f4b3bb", + "sink_id": "ef4553a0-4e84-45f1-b972-f6e7123d68e7", + "source_name": "parsed_xml", + "sink_name": "input", + "is_static": false + } + ], + "output_links": [ + { + "id": "678226df-d4f0-440b-841e-493be3b57df8", + "source_id": "ef4553a0-4e84-45f1-b972-f6e7123d68e7", + "sink_id": "c8e51364-ecad-407f-b4e5-9f5496744285", + "source_name": "missing", + "sink_name": "value", + "is_static": false + }, + { + "id": "0f89a6e6-d2c4-484f-92de-abcec1f6ecbb", + "source_id": "ef4553a0-4e84-45f1-b972-f6e7123d68e7", + "sink_id": "13013c6b-505c-408f-aa30-d43a4e0b4309", + "source_name": "output", + "sink_name": "value", + "is_static": false + } + ], + "graph_id": "1f14dce1-9cbd-4c96-aeaf-630675ef3a6e", + "graph_version": 17, + "webhook_id": null, + "webhook": null + }, + { + "id": "8fabda6c-b71f-479b-a729-14f2c97c8430", + "block_id": "363ae599-353e-4804-937e-b2ee3cef3da4", + "input_default": { + "name": "Title", + "title": null, + "value": null, + "format": "", + "secret": false, + "advanced": false, + "description": "The title of the blog post." + }, + "metadata": { + "position": { + "x": 4373.488009835746, + "y": -15.821288142690591 + } + }, + "input_links": [ + { + "id": "c0972c6b-b63f-4f20-a952-f72cdd518fbe", + "source_id": "3c3b6d76-54de-4b89-8022-c9ef7ca34b86", + "sink_id": "8fabda6c-b71f-479b-a729-14f2c97c8430", + "source_name": "output", + "sink_name": "value", + "is_static": false + } + ], + "output_links": [], + "graph_id": "1f14dce1-9cbd-4c96-aeaf-630675ef3a6e", + "graph_version": 17, + "webhook_id": null, + "webhook": null + }, + { + "id": "30772e58-9ffe-4b02-a1c8-1e7f5f395a02", + "block_id": "363ae599-353e-4804-937e-b2ee3cef3da4", + "input_default": { + "name": "Target Keywords", + "title": null, + "value": null, + "format": "", + "secret": false, + "advanced": false, + "description": "Target keywords used in the blog post." 
+ }, + "metadata": { + "position": { + "x": 4975.268157067363, + "y": -15.821335826406298 + } + }, + "input_links": [ + { + "id": "b2609fad-e411-47cc-89e2-6991edc373ec", + "source_id": "b6786db8-d6aa-42de-9d19-515d5e7d9fc6", + "sink_id": "30772e58-9ffe-4b02-a1c8-1e7f5f395a02", + "source_name": "output", + "sink_name": "value", + "is_static": false + } + ], + "output_links": [], + "graph_id": "1f14dce1-9cbd-4c96-aeaf-630675ef3a6e", + "graph_version": 17, + "webhook_id": null, + "webhook": null + } + ], + "links": [ + { + "id": "b747c24b-b846-40a6-905d-4b1dd56f5050", + "source_id": "1806c84c-2ff2-4045-9fc7-12b5be4086ab", + "sink_id": "2b85cb39-9cfc-411c-a9fb-a3af03df9e00", + "source_name": "response", + "sink_name": "prompt_values_#_ANALYSIS", + "is_static": false + }, + { + "id": "94343835-0dc5-4ab8-9755-5ef1a3ca320e", + "source_id": "ffdcd234-9c01-4576-8768-b1f092f4b3bb", + "sink_id": "c8e51364-ecad-407f-b4e5-9f5496744285", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "9c0078a5-d586-43a6-b893-23cc246fba63", + "source_id": "abad3718-38d8-4b54-8383-565b6bb0b8d2", + "sink_id": "2b85cb39-9cfc-411c-a9fb-a3af03df9e00", + "source_name": "result", + "sink_name": "prompt_values_#_WORD_COUNT", + "is_static": true + }, + { + "id": "da4f2ae2-372c-4e33-a350-6d2e07430cc1", + "source_id": "b6786db8-d6aa-42de-9d19-515d5e7d9fc6", + "sink_id": "c8e51364-ecad-407f-b4e5-9f5496744285", + "source_name": "missing", + "sink_name": "value", + "is_static": false + }, + { + "id": "0f89a6e6-d2c4-484f-92de-abcec1f6ecbb", + "source_id": "ef4553a0-4e84-45f1-b972-f6e7123d68e7", + "sink_id": "13013c6b-505c-408f-aa30-d43a4e0b4309", + "source_name": "output", + "sink_name": "value", + "is_static": false + }, + { + "id": "0ae3eff0-5861-4fda-a3a5-4d9755255c67", + "source_id": "ffdcd234-9c01-4576-8768-b1f092f4b3bb", + "sink_id": "ef4553a0-4e84-45f1-b972-f6e7123d68e7", + "source_name": "parsed_xml", + "sink_name": "input", + "is_static": false + }, + { + "id": "7548d09d-7ac9-4c3a-901e-f29a2e24e729", + "source_id": "7f37f7a6-6fb9-4c8b-9992-0638abfd7919", + "sink_id": "9fa16df6-d785-4b0a-9f58-7f42544d5cdf", + "source_name": "result", + "sink_name": "youtube_url", + "is_static": true + }, + { + "id": "16f51325-b332-46e5-bce5-b5f5fc72fb2b", + "source_id": "ffdcd234-9c01-4576-8768-b1f092f4b3bb", + "sink_id": "b6786db8-d6aa-42de-9d19-515d5e7d9fc6", + "source_name": "parsed_xml", + "sink_name": "input", + "is_static": false + }, + { + "id": "c5108577-ba7f-4878-bfb4-c45cc4d78dba", + "source_id": "2b85cb39-9cfc-411c-a9fb-a3af03df9e00", + "sink_id": "c8e51364-ecad-407f-b4e5-9f5496744285", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "018b486e-8d8e-499b-902e-58466d444034", + "source_id": "ffdcd234-9c01-4576-8768-b1f092f4b3bb", + "sink_id": "3c3b6d76-54de-4b89-8022-c9ef7ca34b86", + "source_name": "parsed_xml", + "sink_name": "input", + "is_static": false + }, + { + "id": "b2609fad-e411-47cc-89e2-6991edc373ec", + "source_id": "b6786db8-d6aa-42de-9d19-515d5e7d9fc6", + "sink_id": "30772e58-9ffe-4b02-a1c8-1e7f5f395a02", + "source_name": "output", + "sink_name": "value", + "is_static": false + }, + { + "id": "8b062f6a-baa4-495b-ba1e-1bfc8824b3a1", + "source_id": "1806c84c-2ff2-4045-9fc7-12b5be4086ab", + "sink_id": "c8e51364-ecad-407f-b4e5-9f5496744285", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "678226df-d4f0-440b-841e-493be3b57df8", + "source_id": "ef4553a0-4e84-45f1-b972-f6e7123d68e7", + "sink_id": 
"c8e51364-ecad-407f-b4e5-9f5496744285", + "source_name": "missing", + "sink_name": "value", + "is_static": false + }, + { + "id": "c0972c6b-b63f-4f20-a952-f72cdd518fbe", + "source_id": "3c3b6d76-54de-4b89-8022-c9ef7ca34b86", + "sink_id": "8fabda6c-b71f-479b-a729-14f2c97c8430", + "source_name": "output", + "sink_name": "value", + "is_static": false + }, + { + "id": "fa2f1f6a-4798-4e75-aede-44cc1d423f2b", + "source_id": "96280d45-4bdd-4610-958b-054775e77108", + "sink_id": "2b85cb39-9cfc-411c-a9fb-a3af03df9e00", + "source_name": "result", + "sink_name": "prompt_values_#_TONE", + "is_static": true + }, + { + "id": "02e273fe-df05-471a-935e-991f4ab4a65e", + "source_id": "3c3b6d76-54de-4b89-8022-c9ef7ca34b86", + "sink_id": "c8e51364-ecad-407f-b4e5-9f5496744285", + "source_name": "missing", + "sink_name": "value", + "is_static": false + }, + { + "id": "223f88a6-4d00-4fbf-93b1-e6d3b005e46a", + "source_id": "9fa16df6-d785-4b0a-9f58-7f42544d5cdf", + "sink_id": "c8e51364-ecad-407f-b4e5-9f5496744285", + "source_name": "error", + "sink_name": "value", + "is_static": false + }, + { + "id": "cda30ed3-c4bb-429b-ac45-641606b75959", + "source_id": "9fa16df6-d785-4b0a-9f58-7f42544d5cdf", + "sink_id": "1806c84c-2ff2-4045-9fc7-12b5be4086ab", + "source_name": "transcript", + "sink_name": "prompt_values_#_TRANSCRIPT", + "is_static": false + }, + { + "id": "5fbdce15-1141-49e4-a3be-243fdc3c81da", + "source_id": "2b85cb39-9cfc-411c-a9fb-a3af03df9e00", + "sink_id": "ffdcd234-9c01-4576-8768-b1f092f4b3bb", + "source_name": "response", + "sink_name": "input_xml", + "is_static": false + } + ], + "forked_from_id": null, + "forked_from_version": null, + "sub_graphs": [], + "user_id": "", + "created_at": "2025-04-20T11:23:11.671Z", + "input_schema": { + "type": "object", + "properties": { + "YouTube URL": { + "advanced": false, + "secret": false, + "title": "YouTube URL", + "description": "Enter the URL of the YouTube video you want to convert to a blog post", + "default": "https://www.youtube.com/watch?v=J1Mzd1ZeSaU" + }, + "Blog Tone": { + "advanced": false, + "secret": false, + "title": "Blog Tone", + "enum": [ + "Professional", + "Casual", + "Educational", + "Conversational", + "Formal" + ], + "description": "Select the desired tone for the blog post", + "default": "Educational" + }, + "Blog Length": { + "advanced": false, + "secret": false, + "title": "Blog Length", + "description": "Enter the desired word count for the blog post", + "default": "4000" + } + }, + "required": [] + }, + "output_schema": { + "type": "object", + "properties": { + "Blog Post": { + "advanced": false, + "secret": false, + "title": "Blog Post", + "description": "The main body of the newly written blog post." + }, + "Error": { + "advanced": false, + "secret": false, + "title": "Error" + }, + "Title": { + "advanced": false, + "secret": false, + "title": "Title", + "description": "The title of the blog post." + }, + "Target Keywords": { + "advanced": false, + "secret": false, + "title": "Target Keywords", + "description": "Target keywords used in the blog post." 
+ } + }, + "required": [ + "Blog Post", + "Error", + "Title", + "Target Keywords" + ] + }, + "has_external_trigger": false, + "has_human_in_the_loop": false, + "trigger_setup_info": null, + "credentials_input_schema": { + "properties": { + "webshare_proxy_user_password_credentials": { + "credentials_provider": [ + "webshare_proxy" + ], + "credentials_types": [ + "user_password" + ], + "properties": { + "id": { + "title": "Id", + "type": "string" + }, + "title": { + "anyOf": [ + { + "type": "string" + }, + { + "type": "null" + } + ], + "default": null, + "title": "Title" + }, + "provider": { + "const": "webshare_proxy", + "title": "Provider", + "type": "string" + }, + "type": { + "const": "user_password", + "title": "Type", + "type": "string" + } + }, + "required": [ + "id", + "provider", + "type" + ], + "title": "CredentialsMetaInput[Literal[], Literal['user_password']]", + "type": "object", + "discriminator_values": [] + }, + "openai_api_key_credentials": { + "credentials_provider": [ + "openai" + ], + "credentials_types": [ + "api_key" + ], + "properties": { + "id": { + "title": "Id", + "type": "string" + }, + "title": { + "anyOf": [ + { + "type": "string" + }, + { + "type": "null" + } + ], + "default": null, + "title": "Title" + }, + "provider": { + "const": "openai", + "title": "Provider", + "type": "string" + }, + "type": { + "const": "api_key", + "title": "Type", + "type": "string" + } + }, + "required": [ + "id", + "provider", + "type" + ], + "title": "CredentialsMetaInput[Literal[], Literal['api_key']]", + "type": "object", + "discriminator": "model", + "discriminator_mapping": { + "Llama-3.3-70B-Instruct": "llama_api", + "Llama-3.3-8B-Instruct": "llama_api", + "Llama-4-Maverick-17B-128E-Instruct-FP8": "llama_api", + "Llama-4-Scout-17B-16E-Instruct-FP8": "llama_api", + "Qwen/Qwen2.5-72B-Instruct-Turbo": "aiml_api", + "amazon/nova-lite-v1": "open_router", + "amazon/nova-micro-v1": "open_router", + "amazon/nova-pro-v1": "open_router", + "claude-3-7-sonnet-20250219": "anthropic", + "claude-3-haiku-20240307": "anthropic", + "claude-haiku-4-5-20251001": "anthropic", + "claude-opus-4-1-20250805": "anthropic", + "claude-opus-4-20250514": "anthropic", + "claude-opus-4-5-20251101": "anthropic", + "claude-sonnet-4-20250514": "anthropic", + "claude-sonnet-4-5-20250929": "anthropic", + "cohere/command-r-08-2024": "open_router", + "cohere/command-r-plus-08-2024": "open_router", + "deepseek/deepseek-chat": "open_router", + "deepseek/deepseek-r1-0528": "open_router", + "dolphin-mistral:latest": "ollama", + "google/gemini-2.0-flash-001": "open_router", + "google/gemini-2.0-flash-lite-001": "open_router", + "google/gemini-2.5-flash": "open_router", + "google/gemini-2.5-flash-lite-preview-06-17": "open_router", + "google/gemini-2.5-pro-preview-03-25": "open_router", + "google/gemini-3-pro-preview": "open_router", + "gpt-3.5-turbo": "openai", + "gpt-4-turbo": "openai", + "gpt-4.1-2025-04-14": "openai", + "gpt-4.1-mini-2025-04-14": "openai", + "gpt-4o": "openai", + "gpt-4o-mini": "openai", + "gpt-5-2025-08-07": "openai", + "gpt-5-chat-latest": "openai", + "gpt-5-mini-2025-08-07": "openai", + "gpt-5-nano-2025-08-07": "openai", + "gpt-5.1-2025-11-13": "openai", + "gryphe/mythomax-l2-13b": "open_router", + "llama-3.1-8b-instant": "groq", + "llama-3.3-70b-versatile": "groq", + "llama3": "ollama", + "llama3.1:405b": "ollama", + "llama3.2": "ollama", + "llama3.3": "ollama", + "meta-llama/Llama-3.2-3B-Instruct-Turbo": "aiml_api", + "meta-llama/Llama-3.3-70B-Instruct-Turbo": "aiml_api", + 
"meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo": "aiml_api", + "meta-llama/llama-4-maverick": "open_router", + "meta-llama/llama-4-scout": "open_router", + "microsoft/wizardlm-2-8x22b": "open_router", + "mistralai/mistral-nemo": "open_router", + "moonshotai/kimi-k2": "open_router", + "nousresearch/hermes-3-llama-3.1-405b": "open_router", + "nousresearch/hermes-3-llama-3.1-70b": "open_router", + "nvidia/llama-3.1-nemotron-70b-instruct": "aiml_api", + "o1": "openai", + "o1-mini": "openai", + "o3-2025-04-16": "openai", + "o3-mini": "openai", + "openai/gpt-oss-120b": "open_router", + "openai/gpt-oss-20b": "open_router", + "perplexity/sonar": "open_router", + "perplexity/sonar-deep-research": "open_router", + "perplexity/sonar-pro": "open_router", + "qwen/qwen3-235b-a22b-thinking-2507": "open_router", + "qwen/qwen3-coder": "open_router", + "v0-1.0-md": "v0", + "v0-1.5-lg": "v0", + "v0-1.5-md": "v0", + "x-ai/grok-4": "open_router", + "x-ai/grok-4-fast": "open_router", + "x-ai/grok-4.1-fast": "open_router", + "x-ai/grok-code-fast-1": "open_router" + }, + "discriminator_values": [ + "gpt-4o", + "o1" + ] + } + }, + "required": [ + "webshare_proxy_user_password_credentials", + "openai_api_key_credentials" + ], + "title": "AIYouTube-to-BlogConverterCredentialsInputSchema", + "type": "object" + } +} \ No newline at end of file diff --git a/autogpt_platform/backend/agents/agent_e7bb29a1-23c7-4fee-aa3b-5426174b8c52.json b/autogpt_platform/backend/agents/agent_e7bb29a1-23c7-4fee-aa3b-5426174b8c52.json new file mode 100644 index 0000000000..96ef0335ce --- /dev/null +++ b/autogpt_platform/backend/agents/agent_e7bb29a1-23c7-4fee-aa3b-5426174b8c52.json @@ -0,0 +1,1094 @@ +{ + "id": "1d79c52b-0daa-4bc9-9de1-08d9986db033", + "version": 51, + "is_active": true, + "name": "YouTube to LinkedIn Post Converter", + "description": "Seamlessly convert YouTube videos into compelling LinkedIn posts with this innovative AI-powered tool. Perfect for content creators, marketers, and professionals looking to repurpose video content for their LinkedIn network. Simply input a YouTube URL, select your preferred post structure, content focus, and tone, and let the AI work its magic. This versatile converter analyses video transcripts, extracts key insights, and crafts tailored LinkedIn posts that resonate with your professional audience. Choose from various post styles like personal achievements, lessons learned, thought leadership, or curated content. Customize your message with options for inspirational, conversational, or analytical tones. The tool ensures your post captures the essence of the video while optimizing for LinkedIn's format and engagement best practices. Elevate your LinkedIn presence, share valuable insights, and boost your professional brand by transforming video content into strategically crafted posts with the YouTube to LinkedIn Post Converter.", + "instructions": null, + "recommended_schedule_cron": null, + "nodes": [ + { + "id": "41fea0d5-2cb8-4149-93d6-488ceb673cb1", + "block_id": "c0a8e994-ebf1-4a9c-a4d8-89d09c86741b", + "input_default": { + "name": "Structure: How do you want to present your LinkedIn Post?", + "title": null, + "value": "Curated Content", + "secret": false, + "advanced": false, + "description": "The *format or structure* in which you present your content. 
It's about *how* you package your message to make it engaging and accessible.", + "placeholder_values": [ + "Personal Achievement Story", + "Lesson Learned", + "Thought Leadership", + "Question or Poll", + "Curated Content" + ] + }, + "metadata": { + "position": { + "x": 742.2727851273153, + "y": -162.73965069729454 + } + }, + "input_links": [], + "output_links": [ + { + "id": "c6f52852-fc1a-42c0-bc56-e20e69ecac41", + "source_id": "41fea0d5-2cb8-4149-93d6-488ceb673cb1", + "sink_id": "5b5fbcbc-c3c9-405a-88d2-5af6f0572f0f", + "source_name": "result", + "sink_name": "values_#_linkedin_post_style", + "is_static": true + } + ], + "graph_id": "1d79c52b-0daa-4bc9-9de1-08d9986db033", + "graph_version": 51, + "webhook_id": null, + "webhook": null + }, + { + "id": "e236e485-3488-46d2-b2fd-b7d6453c7000", + "block_id": "c0a8e994-ebf1-4a9c-a4d8-89d09c86741b", + "input_default": { + "name": "Content: What is the main idea or message you want to convey.", + "title": null, + "value": "Recommendation", + "secret": false, + "advanced": false, + "description": "The main message or purpose of your post\u2014the core idea or topic you want to convey to your audience.", + "placeholder_values": [ + "Hot Take", + "Key Takeaways", + "Thought Leadership", + "Discussion Starter", + "Recommendation", + "Controversial Opinion" + ] + }, + "metadata": { + "position": { + "x": -975.4693913493425, + "y": 1265.6679205588327 + } + }, + "input_links": [], + "output_links": [ + { + "id": "817be9fc-3ba9-48f0-8e95-43243bf659d1", + "source_id": "e236e485-3488-46d2-b2fd-b7d6453c7000", + "sink_id": "1400eee9-a2d9-420a-ab66-9470e1b76470", + "source_name": "result", + "sink_name": "values_#_post_type", + "is_static": true + }, + { + "id": "5d35de06-be69-4f22-9696-9db0cb94e02d", + "source_id": "e236e485-3488-46d2-b2fd-b7d6453c7000", + "sink_id": "5b5fbcbc-c3c9-405a-88d2-5af6f0572f0f", + "source_name": "result", + "sink_name": "values_#_post_type", + "is_static": true + } + ], + "graph_id": "1d79c52b-0daa-4bc9-9de1-08d9986db033", + "graph_version": 51, + "webhook_id": null, + "webhook": null + }, + { + "id": "343cce54-6cb6-49e0-847d-a5e664ba91b2", + "block_id": "f3a8f7e1-4b1d-4e5f-9f2a-7c3d5a2e6b4c", + "input_default": {}, + "metadata": { + "position": { + "x": -366.84945248598535, + "y": -174.34246704308515 + } + }, + "input_links": [ + { + "id": "d7311030-3b77-424e-9b84-a2faa53cdc1e", + "source_id": "865d00df-c739-4e7f-a1dd-a9028a8565f0", + "sink_id": "343cce54-6cb6-49e0-847d-a5e664ba91b2", + "source_name": "result", + "sink_name": "youtube_url", + "is_static": true + } + ], + "output_links": [ + { + "id": "6e5a3deb-2d67-4eed-93a7-bb2ab105a0de", + "source_id": "343cce54-6cb6-49e0-847d-a5e664ba91b2", + "sink_id": "36dc1854-65f5-4b28-8146-ede4ffc6f0c0", + "source_name": "transcript", + "sink_name": "prompt", + "is_static": false + }, + { + "id": "50771fc3-c5d0-4b9c-8ed1-7fb2d098b665", + "source_id": "343cce54-6cb6-49e0-847d-a5e664ba91b2", + "sink_id": "5b5fbcbc-c3c9-405a-88d2-5af6f0572f0f", + "source_name": "transcript", + "sink_name": "values_#_upload_transcript", + "is_static": false + } + ], + "graph_id": "1d79c52b-0daa-4bc9-9de1-08d9986db033", + "graph_version": 51, + "webhook_id": null, + "webhook": null + }, + { + "id": "54b1767c-e70f-4130-8e1b-682ac9f0b50e", + "block_id": "c0a8e994-ebf1-4a9c-a4d8-89d09c86741b", + "input_default": { + "name": "Tone: How do you want to express yourself in your post?", + "title": null, + "value": "Inspirational", + "secret": false, + "advanced": false, + "description": "Focuses on the tone, voice, 
and language used in your post\u2014the manner in which you express your ideas. It's about the way you communicate.", + "placeholder_values": [ + "Conversational", + "Storytelling", + "Analytical", + "Inspirational", + "Direct and No-Nonsense" + ] + }, + "metadata": { + "position": { + "x": -964.4798031030377, + "y": 2676.6786854942006 + } + }, + "input_links": [], + "output_links": [ + { + "id": "681a0afa-854d-4cfa-b0b6-ef8faceeda95", + "source_id": "54b1767c-e70f-4130-8e1b-682ac9f0b50e", + "sink_id": "19d45e40-f7d6-48e8-a740-cc832dd1d330", + "source_name": "result", + "sink_name": "values_#_writing_style", + "is_static": true + }, + { + "id": "dc77df0f-23db-4ea5-a2c4-ede925057953", + "source_id": "54b1767c-e70f-4130-8e1b-682ac9f0b50e", + "sink_id": "5b5fbcbc-c3c9-405a-88d2-5af6f0572f0f", + "source_name": "result", + "sink_name": "values_#_writing_style", + "is_static": true + } + ], + "graph_id": "1d79c52b-0daa-4bc9-9de1-08d9986db033", + "graph_version": 51, + "webhook_id": null, + "webhook": null + }, + { + "id": "804f744e-b2ca-4730-9ed4-0a63a59eb809", + "block_id": "1f292d4a-41a4-4977-9684-7c8d560b9f91", + "input_default": { + "model": "gpt-4o", + "retry": 3, + "sys_prompt": "", + "ollama_host": "localhost:11434", + "prompt_values": {} + }, + "metadata": { + "position": { + "x": 202.0733827539823, + "y": 2671.747616069122 + } + }, + "input_links": [ + { + "id": "50edde5d-e0f7-4daf-b7b0-ffa68837a253", + "source_id": "19d45e40-f7d6-48e8-a740-cc832dd1d330", + "sink_id": "804f744e-b2ca-4730-9ed4-0a63a59eb809", + "source_name": "output", + "sink_name": "prompt", + "is_static": false + } + ], + "output_links": [ + { + "id": "f2bf9eb0-815b-4dc3-b498-9a0f9f2c27ba", + "source_id": "804f744e-b2ca-4730-9ed4-0a63a59eb809", + "sink_id": "5b5fbcbc-c3c9-405a-88d2-5af6f0572f0f", + "source_name": "response", + "sink_name": "values_#_writing_style_description", + "is_static": false + } + ], + "graph_id": "1d79c52b-0daa-4bc9-9de1-08d9986db033", + "graph_version": 51, + "webhook_id": null, + "webhook": null + }, + { + "id": "36dc1854-65f5-4b28-8146-ede4ffc6f0c0", + "block_id": "1f292d4a-41a4-4977-9684-7c8d560b9f91", + "input_default": { + "model": "gpt-4o", + "retry": 3, + "sys_prompt": "You're a seasoned content analyst with expertise in creating engaging short-form content. 
Your task is to analyze the given transcript and extract key insights, memorable quotes, and main points that would be suitable for a LinkedIn post.", + "ollama_host": "localhost:11434", + "prompt_values": {} + }, + "metadata": { + "position": { + "x": 173.53779914382744, + "y": -164.85629259652507 + } + }, + "input_links": [ + { + "id": "6e5a3deb-2d67-4eed-93a7-bb2ab105a0de", + "source_id": "343cce54-6cb6-49e0-847d-a5e664ba91b2", + "sink_id": "36dc1854-65f5-4b28-8146-ede4ffc6f0c0", + "source_name": "transcript", + "sink_name": "prompt", + "is_static": false + } + ], + "output_links": [ + { + "id": "7d8cb205-8c1b-4d07-827d-801da8ef7ffc", + "source_id": "36dc1854-65f5-4b28-8146-ede4ffc6f0c0", + "sink_id": "5b5fbcbc-c3c9-405a-88d2-5af6f0572f0f", + "source_name": "response", + "sink_name": "values_#_content_analysis", + "is_static": false + } + ], + "graph_id": "1d79c52b-0daa-4bc9-9de1-08d9986db033", + "graph_version": 51, + "webhook_id": null, + "webhook": null + }, + { + "id": "0424732b-b2e9-48a0-bd0d-a4a8c1704c5c", + "block_id": "1f292d4a-41a4-4977-9684-7c8d560b9f91", + "input_default": { + "model": "gpt-4o", + "retry": 3, + "sys_prompt": "", + "ollama_host": "localhost:11434", + "prompt_values": {} + }, + "metadata": { + "position": { + "x": 181.98730870654288, + "y": 1274.0823202416145 + } + }, + "input_links": [ + { + "id": "38f6a641-4619-4115-99be-0efca1fa0070", + "source_id": "1400eee9-a2d9-420a-ab66-9470e1b76470", + "sink_id": "0424732b-b2e9-48a0-bd0d-a4a8c1704c5c", + "source_name": "output", + "sink_name": "prompt", + "is_static": false + } + ], + "output_links": [ + { + "id": "da96e08e-e297-42e9-a0f7-7a386746bff9", + "source_id": "0424732b-b2e9-48a0-bd0d-a4a8c1704c5c", + "sink_id": "5b5fbcbc-c3c9-405a-88d2-5af6f0572f0f", + "source_name": "response", + "sink_name": "values_#_linkedin_post_style_guide", + "is_static": false + } + ], + "graph_id": "1d79c52b-0daa-4bc9-9de1-08d9986db033", + "graph_version": 51, + "webhook_id": null, + "webhook": null + }, + { + "id": "4d4971f7-5ff0-4695-a4e8-a048e4b38a0e", + "block_id": "363ae599-353e-4804-937e-b2ee3cef3da4", + "input_default": { + "name": "LinkedIn Post", + "title": null, + "value": null, + "format": "", + "secret": false, + "advanced": false, + "description": null + }, + "metadata": { + "position": { + "x": 2758.2750890246175, + "y": 1011.5042359300692 + } + }, + "input_links": [ + { + "id": "bd89a482-38e4-44b7-b783-009e8d10b287", + "source_id": "04eab06b-dfa6-4ae3-bff3-dfd956537239", + "sink_id": "4d4971f7-5ff0-4695-a4e8-a048e4b38a0e", + "source_name": "response", + "sink_name": "value", + "is_static": false + } + ], + "output_links": [], + "graph_id": "1d79c52b-0daa-4bc9-9de1-08d9986db033", + "graph_version": 51, + "webhook_id": null, + "webhook": null + }, + { + "id": "19d45e40-f7d6-48e8-a740-cc832dd1d330", + "block_id": "db7d8f02-2f44-4c55-ab7a-eae0941f0c30", + "input_default": { + "format": "I want to create a LinkedIn post using the writing style {{writing_style}}. 
Please provide a detailed description of this writing style, including its key characteristics, tone, typical sentence structures, and any specific language patterns or techniques associated with this style.", + "values": {} + }, + "metadata": { + "position": { + "x": -346.42784978402483, + "y": 2672.3771034612896 + } + }, + "input_links": [ + { + "id": "681a0afa-854d-4cfa-b0b6-ef8faceeda95", + "source_id": "54b1767c-e70f-4130-8e1b-682ac9f0b50e", + "sink_id": "19d45e40-f7d6-48e8-a740-cc832dd1d330", + "source_name": "result", + "sink_name": "values_#_writing_style", + "is_static": true + } + ], + "output_links": [ + { + "id": "50edde5d-e0f7-4daf-b7b0-ffa68837a253", + "source_id": "19d45e40-f7d6-48e8-a740-cc832dd1d330", + "sink_id": "804f744e-b2ca-4730-9ed4-0a63a59eb809", + "source_name": "output", + "sink_name": "prompt", + "is_static": false + } + ], + "graph_id": "1d79c52b-0daa-4bc9-9de1-08d9986db033", + "graph_version": 51, + "webhook_id": null, + "webhook": null + }, + { + "id": "1400eee9-a2d9-420a-ab66-9470e1b76470", + "block_id": "db7d8f02-2f44-4c55-ab7a-eae0941f0c30", + "input_default": { + "format": "I want to create a specific type of LinkedIn post. The post type is {{post_type}}. Please provide a detailed description of this post type, including its key characteristics, typical structure, and best practices for creating an engaging post in this style.", + "values": {} + }, + "metadata": { + "position": { + "x": -356.7832100774149, + "y": 1274.7151696370713 + } + }, + "input_links": [ + { + "id": "817be9fc-3ba9-48f0-8e95-43243bf659d1", + "source_id": "e236e485-3488-46d2-b2fd-b7d6453c7000", + "sink_id": "1400eee9-a2d9-420a-ab66-9470e1b76470", + "source_name": "result", + "sink_name": "values_#_post_type", + "is_static": true + } + ], + "output_links": [ + { + "id": "38f6a641-4619-4115-99be-0efca1fa0070", + "source_id": "1400eee9-a2d9-420a-ab66-9470e1b76470", + "sink_id": "0424732b-b2e9-48a0-bd0d-a4a8c1704c5c", + "source_name": "output", + "sink_name": "prompt", + "is_static": false + } + ], + "graph_id": "1d79c52b-0daa-4bc9-9de1-08d9986db033", + "graph_version": 51, + "webhook_id": null, + "webhook": null + }, + { + "id": "5b5fbcbc-c3c9-405a-88d2-5af6f0572f0f", + "block_id": "db7d8f02-2f44-4c55-ab7a-eae0941f0c30", + "input_default": { + "format": "\n{{upload_transcript}}\n\n\n\nFrom the above video transcript, take the best {{post_type}} content and create a LinkedIn post in the {{linkedin_post_style}} style, written in the {{writing_style}} voice. 
\nUse the insights and guidelines provided in the following Video Content Analysis, LinkedIn Post Style Guide, and Writing Style Description to craft an engaging and effective post.\n\n### Video Content Analysis:\n\n{{content_analysis}}\n\n\n### LinkedIn Post Style Guide:\n\n{{linkedin_post_style_guide}}\n\n\n### {{writing_style}} style description:\n\n{{writing_style_description}}\n", + "values": {} + }, + "metadata": { + "position": { + "x": 1536.312780421781, + "y": 1009.7455928932694 + } + }, + "input_links": [ + { + "id": "5d35de06-be69-4f22-9696-9db0cb94e02d", + "source_id": "e236e485-3488-46d2-b2fd-b7d6453c7000", + "sink_id": "5b5fbcbc-c3c9-405a-88d2-5af6f0572f0f", + "source_name": "result", + "sink_name": "values_#_post_type", + "is_static": true + }, + { + "id": "f2bf9eb0-815b-4dc3-b498-9a0f9f2c27ba", + "source_id": "804f744e-b2ca-4730-9ed4-0a63a59eb809", + "sink_id": "5b5fbcbc-c3c9-405a-88d2-5af6f0572f0f", + "source_name": "response", + "sink_name": "values_#_writing_style_description", + "is_static": false + }, + { + "id": "dc77df0f-23db-4ea5-a2c4-ede925057953", + "source_id": "54b1767c-e70f-4130-8e1b-682ac9f0b50e", + "sink_id": "5b5fbcbc-c3c9-405a-88d2-5af6f0572f0f", + "source_name": "result", + "sink_name": "values_#_writing_style", + "is_static": true + }, + { + "id": "50771fc3-c5d0-4b9c-8ed1-7fb2d098b665", + "source_id": "343cce54-6cb6-49e0-847d-a5e664ba91b2", + "sink_id": "5b5fbcbc-c3c9-405a-88d2-5af6f0572f0f", + "source_name": "transcript", + "sink_name": "values_#_upload_transcript", + "is_static": false + }, + { + "id": "7d8cb205-8c1b-4d07-827d-801da8ef7ffc", + "source_id": "36dc1854-65f5-4b28-8146-ede4ffc6f0c0", + "sink_id": "5b5fbcbc-c3c9-405a-88d2-5af6f0572f0f", + "source_name": "response", + "sink_name": "values_#_content_analysis", + "is_static": false + }, + { + "id": "c6f52852-fc1a-42c0-bc56-e20e69ecac41", + "source_id": "41fea0d5-2cb8-4149-93d6-488ceb673cb1", + "sink_id": "5b5fbcbc-c3c9-405a-88d2-5af6f0572f0f", + "source_name": "result", + "sink_name": "values_#_linkedin_post_style", + "is_static": true + }, + { + "id": "da96e08e-e297-42e9-a0f7-7a386746bff9", + "source_id": "0424732b-b2e9-48a0-bd0d-a4a8c1704c5c", + "sink_id": "5b5fbcbc-c3c9-405a-88d2-5af6f0572f0f", + "source_name": "response", + "sink_name": "values_#_linkedin_post_style_guide", + "is_static": false + } + ], + "output_links": [ + { + "id": "2274a7dc-ff54-4de7-aae0-409cb54d88da", + "source_id": "5b5fbcbc-c3c9-405a-88d2-5af6f0572f0f", + "sink_id": "04eab06b-dfa6-4ae3-bff3-dfd956537239", + "source_name": "output", + "sink_name": "prompt", + "is_static": false + } + ], + "graph_id": "1d79c52b-0daa-4bc9-9de1-08d9986db033", + "graph_version": 51, + "webhook_id": null, + "webhook": null + }, + { + "id": "865d00df-c739-4e7f-a1dd-a9028a8565f0", + "block_id": "7fcd3bcb-8e1b-4e69-903d-32d3d4a92158", + "input_default": { + "name": "Source YouTube Video", + "title": null, + "value": "https://www.youtube.com/watch?v=KWonAsyKF3g", + "secret": false, + "advanced": false, + "description": "Add the URL of the YouTube video you want to write a LinkedIn Post about.", + "placeholder_values": [] + }, + "metadata": { + "position": { + "x": -966.1451592697463, + "y": -174.75074398841113 + } + }, + "input_links": [], + "output_links": [ + { + "id": "d7311030-3b77-424e-9b84-a2faa53cdc1e", + "source_id": "865d00df-c739-4e7f-a1dd-a9028a8565f0", + "sink_id": "343cce54-6cb6-49e0-847d-a5e664ba91b2", + "source_name": "result", + "sink_name": "youtube_url", + "is_static": true + } + ], + "graph_id": 
"1d79c52b-0daa-4bc9-9de1-08d9986db033", + "graph_version": 51, + "webhook_id": null, + "webhook": null + }, + { + "id": "04eab06b-dfa6-4ae3-bff3-dfd956537239", + "block_id": "1f292d4a-41a4-4977-9684-7c8d560b9f91", + "input_default": { + "model": "claude-sonnet-4-5-20250929", + "retry": 3, + "sys_prompt": "Output only the LinkedIn post with no additional commentary or parenthesis.", + "ollama_host": "localhost:11434", + "prompt_values": {} + }, + "metadata": { + "position": { + "x": 2141.310928112576, + "y": 1011.6502333729907 + } + }, + "input_links": [ + { + "id": "2274a7dc-ff54-4de7-aae0-409cb54d88da", + "source_id": "5b5fbcbc-c3c9-405a-88d2-5af6f0572f0f", + "sink_id": "04eab06b-dfa6-4ae3-bff3-dfd956537239", + "source_name": "output", + "sink_name": "prompt", + "is_static": false + } + ], + "output_links": [ + { + "id": "bd89a482-38e4-44b7-b783-009e8d10b287", + "source_id": "04eab06b-dfa6-4ae3-bff3-dfd956537239", + "sink_id": "4d4971f7-5ff0-4695-a4e8-a048e4b38a0e", + "source_name": "response", + "sink_name": "value", + "is_static": false + } + ], + "graph_id": "1d79c52b-0daa-4bc9-9de1-08d9986db033", + "graph_version": 51, + "webhook_id": null, + "webhook": null + } + ], + "links": [ + { + "id": "5d35de06-be69-4f22-9696-9db0cb94e02d", + "source_id": "e236e485-3488-46d2-b2fd-b7d6453c7000", + "sink_id": "5b5fbcbc-c3c9-405a-88d2-5af6f0572f0f", + "source_name": "result", + "sink_name": "values_#_post_type", + "is_static": true + }, + { + "id": "38f6a641-4619-4115-99be-0efca1fa0070", + "source_id": "1400eee9-a2d9-420a-ab66-9470e1b76470", + "sink_id": "0424732b-b2e9-48a0-bd0d-a4a8c1704c5c", + "source_name": "output", + "sink_name": "prompt", + "is_static": false + }, + { + "id": "681a0afa-854d-4cfa-b0b6-ef8faceeda95", + "source_id": "54b1767c-e70f-4130-8e1b-682ac9f0b50e", + "sink_id": "19d45e40-f7d6-48e8-a740-cc832dd1d330", + "source_name": "result", + "sink_name": "values_#_writing_style", + "is_static": true + }, + { + "id": "d7311030-3b77-424e-9b84-a2faa53cdc1e", + "source_id": "865d00df-c739-4e7f-a1dd-a9028a8565f0", + "sink_id": "343cce54-6cb6-49e0-847d-a5e664ba91b2", + "source_name": "result", + "sink_name": "youtube_url", + "is_static": true + }, + { + "id": "2274a7dc-ff54-4de7-aae0-409cb54d88da", + "source_id": "5b5fbcbc-c3c9-405a-88d2-5af6f0572f0f", + "sink_id": "04eab06b-dfa6-4ae3-bff3-dfd956537239", + "source_name": "output", + "sink_name": "prompt", + "is_static": false + }, + { + "id": "50edde5d-e0f7-4daf-b7b0-ffa68837a253", + "source_id": "19d45e40-f7d6-48e8-a740-cc832dd1d330", + "sink_id": "804f744e-b2ca-4730-9ed4-0a63a59eb809", + "source_name": "output", + "sink_name": "prompt", + "is_static": false + }, + { + "id": "6e5a3deb-2d67-4eed-93a7-bb2ab105a0de", + "source_id": "343cce54-6cb6-49e0-847d-a5e664ba91b2", + "sink_id": "36dc1854-65f5-4b28-8146-ede4ffc6f0c0", + "source_name": "transcript", + "sink_name": "prompt", + "is_static": false + }, + { + "id": "50771fc3-c5d0-4b9c-8ed1-7fb2d098b665", + "source_id": "343cce54-6cb6-49e0-847d-a5e664ba91b2", + "sink_id": "5b5fbcbc-c3c9-405a-88d2-5af6f0572f0f", + "source_name": "transcript", + "sink_name": "values_#_upload_transcript", + "is_static": false + }, + { + "id": "817be9fc-3ba9-48f0-8e95-43243bf659d1", + "source_id": "e236e485-3488-46d2-b2fd-b7d6453c7000", + "sink_id": "1400eee9-a2d9-420a-ab66-9470e1b76470", + "source_name": "result", + "sink_name": "values_#_post_type", + "is_static": true + }, + { + "id": "f2bf9eb0-815b-4dc3-b498-9a0f9f2c27ba", + "source_id": "804f744e-b2ca-4730-9ed4-0a63a59eb809", + "sink_id": 
"5b5fbcbc-c3c9-405a-88d2-5af6f0572f0f", + "source_name": "response", + "sink_name": "values_#_writing_style_description", + "is_static": false + }, + { + "id": "bd89a482-38e4-44b7-b783-009e8d10b287", + "source_id": "04eab06b-dfa6-4ae3-bff3-dfd956537239", + "sink_id": "4d4971f7-5ff0-4695-a4e8-a048e4b38a0e", + "source_name": "response", + "sink_name": "value", + "is_static": false + }, + { + "id": "c6f52852-fc1a-42c0-bc56-e20e69ecac41", + "source_id": "41fea0d5-2cb8-4149-93d6-488ceb673cb1", + "sink_id": "5b5fbcbc-c3c9-405a-88d2-5af6f0572f0f", + "source_name": "result", + "sink_name": "values_#_linkedin_post_style", + "is_static": true + }, + { + "id": "7d8cb205-8c1b-4d07-827d-801da8ef7ffc", + "source_id": "36dc1854-65f5-4b28-8146-ede4ffc6f0c0", + "sink_id": "5b5fbcbc-c3c9-405a-88d2-5af6f0572f0f", + "source_name": "response", + "sink_name": "values_#_content_analysis", + "is_static": false + }, + { + "id": "dc77df0f-23db-4ea5-a2c4-ede925057953", + "source_id": "54b1767c-e70f-4130-8e1b-682ac9f0b50e", + "sink_id": "5b5fbcbc-c3c9-405a-88d2-5af6f0572f0f", + "source_name": "result", + "sink_name": "values_#_writing_style", + "is_static": true + }, + { + "id": "da96e08e-e297-42e9-a0f7-7a386746bff9", + "source_id": "0424732b-b2e9-48a0-bd0d-a4a8c1704c5c", + "sink_id": "5b5fbcbc-c3c9-405a-88d2-5af6f0572f0f", + "source_name": "response", + "sink_name": "values_#_linkedin_post_style_guide", + "is_static": false + } + ], + "forked_from_id": null, + "forked_from_version": null, + "sub_graphs": [], + "user_id": "", + "created_at": "2025-04-25T04:55:41.897Z", + "input_schema": { + "type": "object", + "properties": { + "Structure: How do you want to present your LinkedIn Post?": { + "advanced": false, + "secret": false, + "title": "Structure: How do you want to present your LinkedIn Post?", + "enum": [ + "Personal Achievement Story", + "Lesson Learned", + "Thought Leadership", + "Question or Poll", + "Curated Content" + ], + "description": "The *format or structure* in which you present your content. It's about *how* you package your message to make it engaging and accessible.", + "default": "Curated Content" + }, + "Content: What is the main idea or message you want to convey.": { + "advanced": false, + "secret": false, + "title": "Content: What is the main idea or message you want to convey.", + "enum": [ + "Hot Take", + "Key Takeaways", + "Thought Leadership", + "Discussion Starter", + "Recommendation", + "Controversial Opinion" + ], + "description": "The main message or purpose of your post\u2014the core idea or topic you want to convey to your audience.", + "default": "Recommendation" + }, + "Tone: How do you want to express yourself in your post?": { + "advanced": false, + "secret": false, + "title": "Tone: How do you want to express yourself in your post?", + "enum": [ + "Conversational", + "Storytelling", + "Analytical", + "Inspirational", + "Direct and No-Nonsense" + ], + "description": "Focuses on the tone, voice, and language used in your post\u2014the manner in which you express your ideas. 
It's about the way you communicate.", + "default": "Inspirational" + }, + "Source YouTube Video": { + "advanced": false, + "anyOf": [ + { + "format": "short-text", + "type": "string" + }, + { + "type": "null" + } + ], + "secret": false, + "title": "Source YouTube Video", + "description": "Add the URL of the YouTube video you want to write a LinkedIn Post about.", + "default": "https://www.youtube.com/watch?v=KWonAsyKF3g" + } + }, + "required": [] + }, + "output_schema": { + "type": "object", + "properties": { + "LinkedIn Post": { + "advanced": false, + "secret": false, + "title": "LinkedIn Post" + } + }, + "required": [ + "LinkedIn Post" + ] + }, + "has_external_trigger": false, + "has_human_in_the_loop": false, + "trigger_setup_info": null, + "credentials_input_schema": { + "properties": { + "webshare_proxy_user_password_credentials": { + "credentials_provider": [ + "webshare_proxy" + ], + "credentials_types": [ + "user_password" + ], + "properties": { + "id": { + "title": "Id", + "type": "string" + }, + "title": { + "anyOf": [ + { + "type": "string" + }, + { + "type": "null" + } + ], + "default": null, + "title": "Title" + }, + "provider": { + "const": "webshare_proxy", + "title": "Provider", + "type": "string" + }, + "type": { + "const": "user_password", + "title": "Type", + "type": "string" + } + }, + "required": [ + "id", + "provider", + "type" + ], + "title": "CredentialsMetaInput[Literal[], Literal['user_password']]", + "type": "object", + "discriminator_values": [] + }, + "openai_api_key_credentials": { + "credentials_provider": [ + "openai" + ], + "credentials_types": [ + "api_key" + ], + "properties": { + "id": { + "title": "Id", + "type": "string" + }, + "title": { + "anyOf": [ + { + "type": "string" + }, + { + "type": "null" + } + ], + "default": null, + "title": "Title" + }, + "provider": { + "const": "openai", + "title": "Provider", + "type": "string" + }, + "type": { + "const": "api_key", + "title": "Type", + "type": "string" + } + }, + "required": [ + "id", + "provider", + "type" + ], + "title": "CredentialsMetaInput[Literal[], Literal['api_key']]", + "type": "object", + "discriminator": "model", + "discriminator_mapping": { + "Llama-3.3-70B-Instruct": "llama_api", + "Llama-3.3-8B-Instruct": "llama_api", + "Llama-4-Maverick-17B-128E-Instruct-FP8": "llama_api", + "Llama-4-Scout-17B-16E-Instruct-FP8": "llama_api", + "Qwen/Qwen2.5-72B-Instruct-Turbo": "aiml_api", + "amazon/nova-lite-v1": "open_router", + "amazon/nova-micro-v1": "open_router", + "amazon/nova-pro-v1": "open_router", + "claude-3-7-sonnet-20250219": "anthropic", + "claude-3-haiku-20240307": "anthropic", + "claude-haiku-4-5-20251001": "anthropic", + "claude-opus-4-1-20250805": "anthropic", + "claude-opus-4-20250514": "anthropic", + "claude-opus-4-5-20251101": "anthropic", + "claude-sonnet-4-20250514": "anthropic", + "claude-sonnet-4-5-20250929": "anthropic", + "cohere/command-r-08-2024": "open_router", + "cohere/command-r-plus-08-2024": "open_router", + "deepseek/deepseek-chat": "open_router", + "deepseek/deepseek-r1-0528": "open_router", + "dolphin-mistral:latest": "ollama", + "google/gemini-2.0-flash-001": "open_router", + "google/gemini-2.0-flash-lite-001": "open_router", + "google/gemini-2.5-flash": "open_router", + "google/gemini-2.5-flash-lite-preview-06-17": "open_router", + "google/gemini-2.5-pro-preview-03-25": "open_router", + "google/gemini-3-pro-preview": "open_router", + "gpt-3.5-turbo": "openai", + "gpt-4-turbo": "openai", + "gpt-4.1-2025-04-14": "openai", + "gpt-4.1-mini-2025-04-14": "openai", + 
"gpt-4o": "openai", + "gpt-4o-mini": "openai", + "gpt-5-2025-08-07": "openai", + "gpt-5-chat-latest": "openai", + "gpt-5-mini-2025-08-07": "openai", + "gpt-5-nano-2025-08-07": "openai", + "gpt-5.1-2025-11-13": "openai", + "gryphe/mythomax-l2-13b": "open_router", + "llama-3.1-8b-instant": "groq", + "llama-3.3-70b-versatile": "groq", + "llama3": "ollama", + "llama3.1:405b": "ollama", + "llama3.2": "ollama", + "llama3.3": "ollama", + "meta-llama/Llama-3.2-3B-Instruct-Turbo": "aiml_api", + "meta-llama/Llama-3.3-70B-Instruct-Turbo": "aiml_api", + "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo": "aiml_api", + "meta-llama/llama-4-maverick": "open_router", + "meta-llama/llama-4-scout": "open_router", + "microsoft/wizardlm-2-8x22b": "open_router", + "mistralai/mistral-nemo": "open_router", + "moonshotai/kimi-k2": "open_router", + "nousresearch/hermes-3-llama-3.1-405b": "open_router", + "nousresearch/hermes-3-llama-3.1-70b": "open_router", + "nvidia/llama-3.1-nemotron-70b-instruct": "aiml_api", + "o1": "openai", + "o1-mini": "openai", + "o3-2025-04-16": "openai", + "o3-mini": "openai", + "openai/gpt-oss-120b": "open_router", + "openai/gpt-oss-20b": "open_router", + "perplexity/sonar": "open_router", + "perplexity/sonar-deep-research": "open_router", + "perplexity/sonar-pro": "open_router", + "qwen/qwen3-235b-a22b-thinking-2507": "open_router", + "qwen/qwen3-coder": "open_router", + "v0-1.0-md": "v0", + "v0-1.5-lg": "v0", + "v0-1.5-md": "v0", + "x-ai/grok-4": "open_router", + "x-ai/grok-4-fast": "open_router", + "x-ai/grok-4.1-fast": "open_router", + "x-ai/grok-code-fast-1": "open_router" + }, + "discriminator_values": [ + "gpt-4o" + ] + }, + "anthropic_api_key_credentials": { + "credentials_provider": [ + "anthropic" + ], + "credentials_types": [ + "api_key" + ], + "properties": { + "id": { + "title": "Id", + "type": "string" + }, + "title": { + "anyOf": [ + { + "type": "string" + }, + { + "type": "null" + } + ], + "default": null, + "title": "Title" + }, + "provider": { + "const": "anthropic", + "title": "Provider", + "type": "string" + }, + "type": { + "const": "api_key", + "title": "Type", + "type": "string" + } + }, + "required": [ + "id", + "provider", + "type" + ], + "title": "CredentialsMetaInput[Literal[], Literal['api_key']]", + "type": "object", + "discriminator": "model", + "discriminator_mapping": { + "Llama-3.3-70B-Instruct": "llama_api", + "Llama-3.3-8B-Instruct": "llama_api", + "Llama-4-Maverick-17B-128E-Instruct-FP8": "llama_api", + "Llama-4-Scout-17B-16E-Instruct-FP8": "llama_api", + "Qwen/Qwen2.5-72B-Instruct-Turbo": "aiml_api", + "amazon/nova-lite-v1": "open_router", + "amazon/nova-micro-v1": "open_router", + "amazon/nova-pro-v1": "open_router", + "claude-3-7-sonnet-20250219": "anthropic", + "claude-3-haiku-20240307": "anthropic", + "claude-haiku-4-5-20251001": "anthropic", + "claude-opus-4-1-20250805": "anthropic", + "claude-opus-4-20250514": "anthropic", + "claude-opus-4-5-20251101": "anthropic", + "claude-sonnet-4-20250514": "anthropic", + "claude-sonnet-4-5-20250929": "anthropic", + "cohere/command-r-08-2024": "open_router", + "cohere/command-r-plus-08-2024": "open_router", + "deepseek/deepseek-chat": "open_router", + "deepseek/deepseek-r1-0528": "open_router", + "dolphin-mistral:latest": "ollama", + "google/gemini-2.0-flash-001": "open_router", + "google/gemini-2.0-flash-lite-001": "open_router", + "google/gemini-2.5-flash": "open_router", + "google/gemini-2.5-flash-lite-preview-06-17": "open_router", + "google/gemini-2.5-pro-preview-03-25": "open_router", + 
"google/gemini-3-pro-preview": "open_router", + "gpt-3.5-turbo": "openai", + "gpt-4-turbo": "openai", + "gpt-4.1-2025-04-14": "openai", + "gpt-4.1-mini-2025-04-14": "openai", + "gpt-4o": "openai", + "gpt-4o-mini": "openai", + "gpt-5-2025-08-07": "openai", + "gpt-5-chat-latest": "openai", + "gpt-5-mini-2025-08-07": "openai", + "gpt-5-nano-2025-08-07": "openai", + "gpt-5.1-2025-11-13": "openai", + "gryphe/mythomax-l2-13b": "open_router", + "llama-3.1-8b-instant": "groq", + "llama-3.3-70b-versatile": "groq", + "llama3": "ollama", + "llama3.1:405b": "ollama", + "llama3.2": "ollama", + "llama3.3": "ollama", + "meta-llama/Llama-3.2-3B-Instruct-Turbo": "aiml_api", + "meta-llama/Llama-3.3-70B-Instruct-Turbo": "aiml_api", + "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo": "aiml_api", + "meta-llama/llama-4-maverick": "open_router", + "meta-llama/llama-4-scout": "open_router", + "microsoft/wizardlm-2-8x22b": "open_router", + "mistralai/mistral-nemo": "open_router", + "moonshotai/kimi-k2": "open_router", + "nousresearch/hermes-3-llama-3.1-405b": "open_router", + "nousresearch/hermes-3-llama-3.1-70b": "open_router", + "nvidia/llama-3.1-nemotron-70b-instruct": "aiml_api", + "o1": "openai", + "o1-mini": "openai", + "o3-2025-04-16": "openai", + "o3-mini": "openai", + "openai/gpt-oss-120b": "open_router", + "openai/gpt-oss-20b": "open_router", + "perplexity/sonar": "open_router", + "perplexity/sonar-deep-research": "open_router", + "perplexity/sonar-pro": "open_router", + "qwen/qwen3-235b-a22b-thinking-2507": "open_router", + "qwen/qwen3-coder": "open_router", + "v0-1.0-md": "v0", + "v0-1.5-lg": "v0", + "v0-1.5-md": "v0", + "x-ai/grok-4": "open_router", + "x-ai/grok-4-fast": "open_router", + "x-ai/grok-4.1-fast": "open_router", + "x-ai/grok-code-fast-1": "open_router" + }, + "discriminator_values": [ + "claude-sonnet-4-5-20250929" + ] + } + }, + "required": [ + "webshare_proxy_user_password_credentials", + "openai_api_key_credentials", + "anthropic_api_key_credentials" + ], + "title": "YouTubetoLinkedInPostConverterCredentialsInputSchema", + "type": "object" + } +} \ No newline at end of file diff --git a/autogpt_platform/backend/agents/agent_eafa21d3-bf14-4f63-a97f-a5ee41df83b3.json b/autogpt_platform/backend/agents/agent_eafa21d3-bf14-4f63-a97f-a5ee41df83b3.json new file mode 100644 index 0000000000..f8df8d7989 --- /dev/null +++ b/autogpt_platform/backend/agents/agent_eafa21d3-bf14-4f63-a97f-a5ee41df83b3.json @@ -0,0 +1,1560 @@ +{ + "id": "9afd017f-565f-4090-b5df-d01bc4094292", + "version": 16, + "is_active": false, + "name": "LinkedIn Post Generator", + "description": "Convert YouTube transcripts into polished LinkedIn posts with custom style, tone, and length options. Optimized for maximum engagement.", + "instructions": null, + "recommended_schedule_cron": null, + "nodes": [ + { + "id": "a6567a56-c74b-4fd4-b640-9a9fc09f1b3e", + "block_id": "c0a8e994-ebf1-4a9c-a4d8-89d09c86741b", + "input_default": { + "name": "Style", + "title": null, + "value": "Persuasive", + "secret": false, + "advanced": false, + "description": "Specifies the overall writing style of the blog post. 
This determines how the content is presented, such as in a professional, conversational, academic, narrative, or persuasive manner.", + "placeholder_values": [ + "Professional", + "Conversational", + "Academic", + "Narrative", + "Persuasive", + "Humorous", + "Informal", + "Formal" + ] + }, + "metadata": { + "position": { + "x": -2643.282376938525, + "y": 2212.604543124934 + } + }, + "input_links": [], + "output_links": [ + { + "id": "fbc76ac7-75c1-4ee0-8cc3-3581f549bd59", + "source_id": "a6567a56-c74b-4fd4-b640-9a9fc09f1b3e", + "sink_id": "95cbf9be-59cb-4240-aead-e537af599976", + "source_name": "result", + "sink_name": "prompt_values_#_STYLE", + "is_static": true + } + ], + "graph_id": "9afd017f-565f-4090-b5df-d01bc4094292", + "graph_version": 16, + "webhook_id": null, + "webhook": null + }, + { + "id": "c0d0d5d5-221b-4ce7-911b-09e40442fd82", + "block_id": "aeb08fc1-2fc1-4141-bc8e-f758f183a822", + "input_default": { + "list": [], + "entry": null, + "entries": [], + "position": null + }, + "metadata": { + "position": { + "x": 1855.4446907401616, + "y": -14.075410817135948 + } + }, + "input_links": [ + { + "id": "31b1f11d-ecff-4d0b-b55c-5cd5f59ea1d6", + "source_id": "7278c40d-aecf-4677-852a-b4f69db2e927", + "sink_id": "c0d0d5d5-221b-4ce7-911b-09e40442fd82", + "source_name": "transcript", + "sink_name": "entry", + "is_static": false + }, + { + "id": "2e8d10c5-a067-4c64-8fc5-a2db5ab82351", + "source_id": "c0d0d5d5-221b-4ce7-911b-09e40442fd82", + "sink_id": "c0d0d5d5-221b-4ce7-911b-09e40442fd82", + "source_name": "updated_list", + "sink_name": "list", + "is_static": false + }, + { + "id": "94b3e4e7-02fd-4c24-aa70-7a0d64cc83e2", + "source_id": "f0f83bf3-4c5a-4a69-a795-84916a9d02c0", + "sink_id": "c0d0d5d5-221b-4ce7-911b-09e40442fd82", + "source_name": "updated_list", + "sink_name": "list", + "is_static": false + } + ], + "output_links": [ + { + "id": "a58d1e78-2548-47e0-990c-8f7cae2bdbcf", + "source_id": "c0d0d5d5-221b-4ce7-911b-09e40442fd82", + "sink_id": "b10632a5-b59a-4240-a6e5-c8adeabaa784", + "source_name": "updated_list", + "sink_name": "collection", + "is_static": false + }, + { + "id": "2e8d10c5-a067-4c64-8fc5-a2db5ab82351", + "source_id": "c0d0d5d5-221b-4ce7-911b-09e40442fd82", + "sink_id": "c0d0d5d5-221b-4ce7-911b-09e40442fd82", + "source_name": "updated_list", + "sink_name": "list", + "is_static": false + }, + { + "id": "af84836d-c9de-46ea-8d1a-07400d6e5dfe", + "source_id": "c0d0d5d5-221b-4ce7-911b-09e40442fd82", + "sink_id": "87506078-97fc-4591-bb19-1e4a95af72a8", + "source_name": "updated_list", + "sink_name": "yes_value", + "is_static": false + } + ], + "graph_id": "9afd017f-565f-4090-b5df-d01bc4094292", + "graph_version": 16, + "webhook_id": null, + "webhook": null + }, + { + "id": "aaeee594-70f9-4bc3-a9a8-b2951e0809e2", + "block_id": "c0a8e994-ebf1-4a9c-a4d8-89d09c86741b", + "input_default": { + "name": "Target Word Count", + "title": null, + "value": "100", + "secret": false, + "advanced": false, + "description": "Sets the target length of the blog post in words. This allows you to control the content length as needed. 
\nPlease note that due to the nature of LLMs this will not be exact.", + "placeholder_values": [ + "100", + "200", + "300" + ] + }, + "metadata": { + "position": { + "x": 72.17400805594986, + "y": 2194.0648866119127 + } + }, + "input_links": [], + "output_links": [ + { + "id": "85629334-5e62-42ea-9184-c60e4a89b2fd", + "source_id": "aaeee594-70f9-4bc3-a9a8-b2951e0809e2", + "sink_id": "95cbf9be-59cb-4240-aead-e537af599976", + "source_name": "result", + "sink_name": "prompt_values_#_MIN_WORD_COUNT", + "is_static": true + } + ], + "graph_id": "9afd017f-565f-4090-b5df-d01bc4094292", + "graph_version": 16, + "webhook_id": null, + "webhook": null + }, + { + "id": "727df6b8-7208-4137-a7e4-c17d2bc2cea1", + "block_id": "c0a8e994-ebf1-4a9c-a4d8-89d09c86741b", + "input_default": { + "name": "Objective", + "title": null, + "value": "Persuasive", + "secret": false, + "advanced": false, + "description": "Defines the main goal or purpose of the blog post. It guides the focus of the content, whether it's meant to inform, persuade, entertain, analyze, educate, inspire, critique, or raise awareness about the topic.", + "placeholder_values": [ + "Informative", + "Persuasive", + "Entertaining", + "Analytical", + "Educational", + "Inspirational", + "Critical", + "Awareness-raising" + ] + }, + "metadata": { + "position": { + "x": -2110.3821388638826, + "y": 2212.2704551656807 + } + }, + "input_links": [], + "output_links": [ + { + "id": "ff4e71a9-2e2d-4d58-ae81-441756b67e22", + "source_id": "727df6b8-7208-4137-a7e4-c17d2bc2cea1", + "sink_id": "95cbf9be-59cb-4240-aead-e537af599976", + "source_name": "result", + "sink_name": "prompt_values_#_OBJECTIVE", + "is_static": true + } + ], + "graph_id": "9afd017f-565f-4090-b5df-d01bc4094292", + "graph_version": 16, + "webhook_id": null, + "webhook": null + }, + { + "id": "b10632a5-b59a-4240-a6e5-c8adeabaa784", + "block_id": "3c9c2f42-b0c3-435f-ba35-05f7a25c772a", + "input_default": {}, + "metadata": { + "position": { + "x": 2508.716052917633, + "y": 174.92728320007268 + } + }, + "input_links": [ + { + "id": "a58d1e78-2548-47e0-990c-8f7cae2bdbcf", + "source_id": "c0d0d5d5-221b-4ce7-911b-09e40442fd82", + "sink_id": "b10632a5-b59a-4240-a6e5-c8adeabaa784", + "source_name": "updated_list", + "sink_name": "collection", + "is_static": false + } + ], + "output_links": [ + { + "id": "64a031c6-d680-4cf9-adae-4374e19bd5f4", + "source_id": "b10632a5-b59a-4240-a6e5-c8adeabaa784", + "sink_id": "87506078-97fc-4591-bb19-1e4a95af72a8", + "source_name": "count", + "sink_name": "value1", + "is_static": false + } + ], + "graph_id": "9afd017f-565f-4090-b5df-d01bc4094292", + "graph_version": 16, + "webhook_id": null, + "webhook": null + }, + { + "id": "132eb8d1-49b6-47f4-9b6f-114b5b929989", + "block_id": "c0a8e994-ebf1-4a9c-a4d8-89d09c86741b", + "input_default": { + "name": "Clarity", + "title": null, + "value": "Clear and Accessible", + "secret": false, + "advanced": false, + "description": "Indicates the level of complexity in the explanations. 
It tailors the content to the audience's understanding, ranging from clear and accessible language to detailed and technical descriptions.", + "placeholder_values": [ + "Clear and Accessible", + "Detailed and Technical", + "Simple and Straightforward", + "Comprehensive", + "Concise", + "Elaborate" + ] + }, + "metadata": { + "position": { + "x": -995.962495730203, + "y": 2207.5679670937093 + } + }, + "input_links": [], + "output_links": [ + { + "id": "36566600-abaf-479e-8310-380ba8716a82", + "source_id": "132eb8d1-49b6-47f4-9b6f-114b5b929989", + "sink_id": "95cbf9be-59cb-4240-aead-e537af599976", + "source_name": "result", + "sink_name": "prompt_values_#_CLARITY", + "is_static": true + } + ], + "graph_id": "9afd017f-565f-4090-b5df-d01bc4094292", + "graph_version": 16, + "webhook_id": null, + "webhook": null + }, + { + "id": "287df4a8-2bf9-4cbf-a32f-dca95aeca0e7", + "block_id": "c0a8e994-ebf1-4a9c-a4d8-89d09c86741b", + "input_default": { + "name": "Number of Videos", + "title": null, + "value": "3", + "secret": false, + "advanced": false, + "description": "The number of videos to collect the transcripts of", + "placeholder_values": [] + }, + "metadata": { + "position": { + "x": -1065.6909670444297, + "y": -191.861475611625 + } + }, + "input_links": [], + "output_links": [ + { + "id": "abc225ed-e1d3-462c-bcc0-88a14d17dd03", + "source_id": "287df4a8-2bf9-4cbf-a32f-dca95aeca0e7", + "sink_id": "8f9d7438-bfdf-44a0-8e07-116143247616", + "source_name": "result", + "sink_name": "values_#_NUMBER", + "is_static": true + } + ], + "graph_id": "9afd017f-565f-4090-b5df-d01bc4094292", + "graph_version": 16, + "webhook_id": null, + "webhook": null + }, + { + "id": "87506078-97fc-4591-bb19-1e4a95af72a8", + "block_id": "715696a0-e1da-45c8-b209-c2fa9c3b0be6", + "input_default": { + "no_value": null, + "operator": ">", + "yes_value": null + }, + "metadata": { + "position": { + "x": 3144.36290122744, + "y": 657.803932826998 + } + }, + "input_links": [ + { + "id": "4a91777b-946b-4c3c-b9a4-3fa3645c1e39", + "source_id": "dad79a25-9633-4b61-ad6b-344b3d7f4d1c", + "sink_id": "87506078-97fc-4591-bb19-1e4a95af72a8", + "source_name": "output", + "sink_name": "value2", + "is_static": true + }, + { + "id": "af84836d-c9de-46ea-8d1a-07400d6e5dfe", + "source_id": "c0d0d5d5-221b-4ce7-911b-09e40442fd82", + "sink_id": "87506078-97fc-4591-bb19-1e4a95af72a8", + "source_name": "updated_list", + "sink_name": "yes_value", + "is_static": false + }, + { + "id": "64a031c6-d680-4cf9-adae-4374e19bd5f4", + "source_id": "b10632a5-b59a-4240-a6e5-c8adeabaa784", + "sink_id": "87506078-97fc-4591-bb19-1e4a95af72a8", + "source_name": "count", + "sink_name": "value1", + "is_static": false + } + ], + "output_links": [ + { + "id": "56d063cd-9a82-49cd-8a8e-e8a452009e1a", + "source_id": "87506078-97fc-4591-bb19-1e4a95af72a8", + "sink_id": "49998ca9-0549-4198-98f6-a9ae39907c4b", + "source_name": "yes_output", + "sink_name": "value", + "is_static": false + } + ], + "graph_id": "9afd017f-565f-4090-b5df-d01bc4094292", + "graph_version": 16, + "webhook_id": null, + "webhook": null + }, + { + "id": "464d3fe5-33e8-47bd-b196-e7d0adcd582c", + "block_id": "c0a8e994-ebf1-4a9c-a4d8-89d09c86741b", + "input_default": { + "name": "Tone", + "title": null, + "value": "Engaging", + "secret": false, + "advanced": false, + "description": " Sets the mood or attitude conveyed through the writing. 
This affects how the reader perceives the content, with options like engaging, serious, light-hearted, friendly, formal, optimistic, or urgent.", + "placeholder_values": [ + "Engaging", + "Serious", + "Light-hearted", + "Friendly", + "Formal", + "Conversational", + "Optimistic", + "Urgent" + ] + }, + "metadata": { + "position": { + "x": -1551.6698388997072, + "y": 2204.312773278323 + } + }, + "input_links": [], + "output_links": [ + { + "id": "9c1f7321-f3b7-428e-8767-fdc717f72597", + "source_id": "464d3fe5-33e8-47bd-b196-e7d0adcd582c", + "sink_id": "95cbf9be-59cb-4240-aead-e537af599976", + "source_name": "result", + "sink_name": "prompt_values_#_TONE", + "is_static": true + } + ], + "graph_id": "9afd017f-565f-4090-b5df-d01bc4094292", + "graph_version": 16, + "webhook_id": null, + "webhook": null + }, + { + "id": "bffd38ad-4c3c-4f37-b3ea-9d1788e527b8", + "block_id": "3c9c2f42-b0c3-435f-ba35-05f7a25c772a", + "input_default": {}, + "metadata": { + "position": { + "x": 1228.32902223793, + "y": 857.7922977651456 + } + }, + "input_links": [ + { + "id": "aedc1c04-41fc-4ddc-a498-2180e04448d0", + "source_id": "32d9fa39-50c4-4727-b96b-03e88f8ab26b", + "sink_id": "bffd38ad-4c3c-4f37-b3ea-9d1788e527b8", + "source_name": "generated_list", + "sink_name": "collection", + "is_static": false + } + ], + "output_links": [ + { + "id": "3e931141-56e9-4904-b70a-0ad36415e7be", + "source_id": "bffd38ad-4c3c-4f37-b3ea-9d1788e527b8", + "sink_id": "dad79a25-9633-4b61-ad6b-344b3d7f4d1c", + "source_name": "count", + "sink_name": "data", + "is_static": false + }, + { + "id": "04bd40a3-52a6-4b2b-93fe-5685066ffeb1", + "source_id": "bffd38ad-4c3c-4f37-b3ea-9d1788e527b8", + "sink_id": "dad79a25-9633-4b61-ad6b-344b3d7f4d1c", + "source_name": "count", + "sink_name": "input", + "is_static": false + } + ], + "graph_id": "9afd017f-565f-4090-b5df-d01bc4094292", + "graph_version": 16, + "webhook_id": null, + "webhook": null + }, + { + "id": "dad79a25-9633-4b61-ad6b-344b3d7f4d1c", + "block_id": "1ff065e9-88e8-4358-9d82-8dc91f622ba9", + "input_default": { + "data": null + }, + "metadata": { + "position": { + "x": 1855.59224552268, + "y": 1010.1021058917578 + } + }, + "input_links": [ + { + "id": "3e931141-56e9-4904-b70a-0ad36415e7be", + "source_id": "bffd38ad-4c3c-4f37-b3ea-9d1788e527b8", + "sink_id": "dad79a25-9633-4b61-ad6b-344b3d7f4d1c", + "source_name": "count", + "sink_name": "data", + "is_static": false + }, + { + "id": "04bd40a3-52a6-4b2b-93fe-5685066ffeb1", + "source_id": "bffd38ad-4c3c-4f37-b3ea-9d1788e527b8", + "sink_id": "dad79a25-9633-4b61-ad6b-344b3d7f4d1c", + "source_name": "count", + "sink_name": "input", + "is_static": false + } + ], + "output_links": [ + { + "id": "4a91777b-946b-4c3c-b9a4-3fa3645c1e39", + "source_id": "dad79a25-9633-4b61-ad6b-344b3d7f4d1c", + "sink_id": "87506078-97fc-4591-bb19-1e4a95af72a8", + "source_name": "output", + "sink_name": "value2", + "is_static": true + } + ], + "graph_id": "9afd017f-565f-4090-b5df-d01bc4094292", + "graph_version": 16, + "webhook_id": null, + "webhook": null + }, + { + "id": "67d559a9-89ca-4f1c-888b-7d5e4b934824", + "block_id": "363ae599-353e-4804-937e-b2ee3cef3da4", + "input_default": { + "name": "Blog Post", + "title": null, + "value": null, + "format": "", + "secret": false, + "advanced": false, + "description": "The full blog post written by the Agent" + }, + "metadata": { + "position": { + "x": 5258.257136947356, + "y": 575.0003353591338 + } + }, + "input_links": [ + { + "id": "3531f653-5cb2-4107-9522-d9aaebd9d333", + "source_id": 
"95cbf9be-59cb-4240-aead-e537af599976", + "sink_id": "67d559a9-89ca-4f1c-888b-7d5e4b934824", + "source_name": "response", + "sink_name": "value", + "is_static": false + } + ], + "output_links": [], + "graph_id": "9afd017f-565f-4090-b5df-d01bc4094292", + "graph_version": 16, + "webhook_id": null, + "webhook": null + }, + { + "id": "b527d6c7-2dce-4c0b-997b-017dc078ed91", + "block_id": "c0a8e994-ebf1-4a9c-a4d8-89d09c86741b", + "input_default": { + "name": "Opinion", + "title": null, + "value": "Positive", + "secret": false, + "advanced": false, + "description": "Represents the perspective or stance taken in the blog post. This shapes the viewpoint presented to the reader, whether it's positive, negative, neutral, balanced, critical, supportive, or skeptical.", + "placeholder_values": [ + "Positive", + "Negative", + "Neutral", + "Balanced", + "Critical", + "Supportive", + "Skeptical" + ] + }, + "metadata": { + "position": { + "x": -464.49460685891376, + "y": 2201.5046753139704 + } + }, + "input_links": [], + "output_links": [ + { + "id": "f505026b-2026-4bc1-b3c4-fb991cea03ea", + "source_id": "b527d6c7-2dce-4c0b-997b-017dc078ed91", + "sink_id": "95cbf9be-59cb-4240-aead-e537af599976", + "source_name": "result", + "sink_name": "prompt_values_#_OPINION", + "is_static": true + } + ], + "graph_id": "9afd017f-565f-4090-b5df-d01bc4094292", + "graph_version": 16, + "webhook_id": null, + "webhook": null + }, + { + "id": "7278c40d-aecf-4677-852a-b4f69db2e927", + "block_id": "f3a8f7e1-4b1d-4e5f-9f2a-7c3d5a2e6b4c", + "input_default": {}, + "metadata": { + "position": { + "x": 1230.8331137932678, + "y": -154.75912985073822 + } + }, + "input_links": [ + { + "id": "02035f8b-3b5c-491f-a192-4030a1c05f86", + "source_id": "32d9fa39-50c4-4727-b96b-03e88f8ab26b", + "sink_id": "7278c40d-aecf-4677-852a-b4f69db2e927", + "source_name": "list_item", + "sink_name": "youtube_url", + "is_static": false + } + ], + "output_links": [ + { + "id": "31b1f11d-ecff-4d0b-b55c-5cd5f59ea1d6", + "source_id": "7278c40d-aecf-4677-852a-b4f69db2e927", + "sink_id": "c0d0d5d5-221b-4ce7-911b-09e40442fd82", + "source_name": "transcript", + "sink_name": "entry", + "is_static": false + } + ], + "graph_id": "9afd017f-565f-4090-b5df-d01bc4094292", + "graph_version": 16, + "webhook_id": null, + "webhook": null + }, + { + "id": "eeb6298f-9559-4422-8641-a2841492dbf0", + "block_id": "436c3984-57fd-4b85-8e9a-459b356883bd", + "input_default": { + "raw_content": false + }, + "metadata": { + "position": { + "x": 100.13599586516995, + "y": 998.8123871077987 + } + }, + "input_links": [ + { + "id": "bb434ddb-3d01-4969-bdf9-e1d983628b9c", + "source_id": "7032ccd7-3e7a-4d27-952e-53ccb7ef0994", + "sink_id": "eeb6298f-9559-4422-8641-a2841492dbf0", + "source_name": "output", + "sink_name": "url", + "is_static": false + } + ], + "output_links": [ + { + "id": "f752040c-a0f3-4fe0-9e9e-65da9cff67a4", + "source_id": "eeb6298f-9559-4422-8641-a2841492dbf0", + "sink_id": "32d9fa39-50c4-4727-b96b-03e88f8ab26b", + "source_name": "content", + "sink_name": "source_data", + "is_static": false + } + ], + "graph_id": "9afd017f-565f-4090-b5df-d01bc4094292", + "graph_version": 16, + "webhook_id": null, + "webhook": null + }, + { + "id": "7cc813d0-574c-4e72-ad89-2d9869c2b8b6", + "block_id": "c0a8e994-ebf1-4a9c-a4d8-89d09c86741b", + "input_default": { + "name": "Topic", + "title": null, + "value": "Auto_GPT", + "secret": false, + "advanced": false, + "description": "The topic of the post you want to write. 
This is also the query that will be searched on YouTube for research.", + "placeholder_values": [] + }, + "metadata": { + "position": { + "x": -1052.9923382794432, + "y": 1203.0801954911867 + } + }, + "input_links": [], + "output_links": [ + { + "id": "b336b185-bb36-41ec-9464-5e80263087a2", + "source_id": "7cc813d0-574c-4e72-ad89-2d9869c2b8b6", + "sink_id": "95cbf9be-59cb-4240-aead-e537af599976", + "source_name": "result", + "sink_name": "prompt_values_#_TOPIC", + "is_static": true + }, + { + "id": "a3e3dcac-b431-4b04-858d-ec9be1e3a027", + "source_id": "7cc813d0-574c-4e72-ad89-2d9869c2b8b6", + "sink_id": "7032ccd7-3e7a-4d27-952e-53ccb7ef0994", + "source_name": "result", + "sink_name": "values_#_QUERY", + "is_static": true + } + ], + "graph_id": "9afd017f-565f-4090-b5df-d01bc4094292", + "graph_version": 16, + "webhook_id": null, + "webhook": null + }, + { + "id": "f0f83bf3-4c5a-4a69-a795-84916a9d02c0", + "block_id": "aeb08fc1-2fc1-4141-bc8e-f758f183a822", + "input_default": { + "list": [], + "entry": "Youtube Transcripts", + "entries": [], + "position": null + }, + "metadata": { + "position": { + "x": 1234.7784174183614, + "y": -1194.4414362453076 + } + }, + "input_links": [], + "output_links": [ + { + "id": "94b3e4e7-02fd-4c24-aa70-7a0d64cc83e2", + "source_id": "f0f83bf3-4c5a-4a69-a795-84916a9d02c0", + "sink_id": "c0d0d5d5-221b-4ce7-911b-09e40442fd82", + "source_name": "updated_list", + "sink_name": "list", + "is_static": false + } + ], + "graph_id": "9afd017f-565f-4090-b5df-d01bc4094292", + "graph_version": 16, + "webhook_id": null, + "webhook": null + }, + { + "id": "8f9d7438-bfdf-44a0-8e07-116143247616", + "block_id": "db7d8f02-2f44-4c55-ab7a-eae0941f0c30", + "input_default": { + "format": "List the top {{NUMBER}} YouTube video urls results in this data.\n\nThe urls look like this:\n```https://www.youtube.com/watch?v=.....```", + "values": {} + }, + "metadata": { + "position": { + "x": -482.6928893382714, + "y": -186.71507867645573 + } + }, + "input_links": [ + { + "id": "abc225ed-e1d3-462c-bcc0-88a14d17dd03", + "source_id": "287df4a8-2bf9-4cbf-a32f-dca95aeca0e7", + "sink_id": "8f9d7438-bfdf-44a0-8e07-116143247616", + "source_name": "result", + "sink_name": "values_#_NUMBER", + "is_static": true + } + ], + "output_links": [ + { + "id": "47d0bc20-ae7d-4082-9a1d-9d42d5229d85", + "source_id": "8f9d7438-bfdf-44a0-8e07-116143247616", + "sink_id": "32d9fa39-50c4-4727-b96b-03e88f8ab26b", + "source_name": "output", + "sink_name": "focus", + "is_static": false + } + ], + "graph_id": "9afd017f-565f-4090-b5df-d01bc4094292", + "graph_version": 16, + "webhook_id": null, + "webhook": null + }, + { + "id": "7032ccd7-3e7a-4d27-952e-53ccb7ef0994", + "block_id": "db7d8f02-2f44-4c55-ab7a-eae0941f0c30", + "input_default": { + "format": "https://www.youtube.com/results?search_query={{QUERY}}", + "values": {} + }, + "metadata": { + "position": { + "x": -483.5962787777845, + "y": 995.4859020054615 + } + }, + "input_links": [ + { + "id": "a3e3dcac-b431-4b04-858d-ec9be1e3a027", + "source_id": "7cc813d0-574c-4e72-ad89-2d9869c2b8b6", + "sink_id": "7032ccd7-3e7a-4d27-952e-53ccb7ef0994", + "source_name": "result", + "sink_name": "values_#_QUERY", + "is_static": true + } + ], + "output_links": [ + { + "id": "bb434ddb-3d01-4969-bdf9-e1d983628b9c", + "source_id": "7032ccd7-3e7a-4d27-952e-53ccb7ef0994", + "sink_id": "eeb6298f-9559-4422-8641-a2841492dbf0", + "source_name": "output", + "sink_name": "url", + "is_static": false + } + ], + "graph_id": "9afd017f-565f-4090-b5df-d01bc4094292", + "graph_version": 16, + 
"webhook_id": null, + "webhook": null + }, + { + "id": "49998ca9-0549-4198-98f6-a9ae39907c4b", + "block_id": "95d1b990-ce13-4d88-9737-ba5c2070c97b", + "input_default": { + "type": "string" + }, + "metadata": { + "position": { + "x": 3778.0381982319714, + "y": 1092.8217591079524 + } + }, + "input_links": [ + { + "id": "56d063cd-9a82-49cd-8a8e-e8a452009e1a", + "source_id": "87506078-97fc-4591-bb19-1e4a95af72a8", + "sink_id": "49998ca9-0549-4198-98f6-a9ae39907c4b", + "source_name": "yes_output", + "sink_name": "value", + "is_static": false + } + ], + "output_links": [ + { + "id": "051f1e02-0bfa-436f-bab3-91489a6aa9a5", + "source_id": "49998ca9-0549-4198-98f6-a9ae39907c4b", + "sink_id": "95cbf9be-59cb-4240-aead-e537af599976", + "source_name": "value", + "sink_name": "prompt_values_#_TRANSCRIPTS", + "is_static": false + } + ], + "graph_id": "9afd017f-565f-4090-b5df-d01bc4094292", + "graph_version": 16, + "webhook_id": null, + "webhook": null + }, + { + "id": "32d9fa39-50c4-4727-b96b-03e88f8ab26b", + "block_id": "9c0b0450-d199-458b-a731-072189dd6593", + "input_default": { + "focus": "List the top 5 YouTube video urls results in this data.\n\nThe urls look like this:\n```https://www.youtube.com/watch?v=.....```", + "model": "claude-sonnet-4-5-20250929", + "max_retries": 3, + "ollama_host": "localhost:11434", + "source_data": null + }, + "metadata": { + "position": { + "x": 674.6940765863018, + "y": 174.29876977693223 + } + }, + "input_links": [ + { + "id": "f752040c-a0f3-4fe0-9e9e-65da9cff67a4", + "source_id": "eeb6298f-9559-4422-8641-a2841492dbf0", + "sink_id": "32d9fa39-50c4-4727-b96b-03e88f8ab26b", + "source_name": "content", + "sink_name": "source_data", + "is_static": false + }, + { + "id": "47d0bc20-ae7d-4082-9a1d-9d42d5229d85", + "source_id": "8f9d7438-bfdf-44a0-8e07-116143247616", + "sink_id": "32d9fa39-50c4-4727-b96b-03e88f8ab26b", + "source_name": "output", + "sink_name": "focus", + "is_static": false + } + ], + "output_links": [ + { + "id": "02035f8b-3b5c-491f-a192-4030a1c05f86", + "source_id": "32d9fa39-50c4-4727-b96b-03e88f8ab26b", + "sink_id": "7278c40d-aecf-4677-852a-b4f69db2e927", + "source_name": "list_item", + "sink_name": "youtube_url", + "is_static": false + }, + { + "id": "aedc1c04-41fc-4ddc-a498-2180e04448d0", + "source_id": "32d9fa39-50c4-4727-b96b-03e88f8ab26b", + "sink_id": "bffd38ad-4c3c-4f37-b3ea-9d1788e527b8", + "source_name": "generated_list", + "sink_name": "collection", + "is_static": false + } + ], + "graph_id": "9afd017f-565f-4090-b5df-d01bc4094292", + "graph_version": 16, + "webhook_id": null, + "webhook": null + }, + { + "id": "95cbf9be-59cb-4240-aead-e537af599976", + "block_id": "1f292d4a-41a4-4977-9684-7c8d560b9f91", + "input_default": { + "model": "claude-sonnet-4-5-20250929", + "retry": 3, + "prompt": "You are tasked with writing a {{STYLE}} LinkedIn post based on transcripts from top-result YouTube videos on a specific topic. Your goal is to create an {{OBJECTIVE}} piece that delivers valuable insights in LinkedIn's professional format.\n\nHere are the transcripts you will be working with:\n\n\n{{TRANSCRIPTS}}\n\n\nThe topic of the LinkedIn post is: {{TOPIC}}\n\nTo complete this task, follow these steps:\n\n1. Carefully read through all the provided transcripts.\n\n2. Identify the most impactful insights and key takeaways that would resonate with a professional LinkedIn audience.\n\n3. 
Structure your LinkedIn post to include:\n - A compelling hook that stops the scroll\n - 2-3 main points that deliver value\n - A clear call-to-action or thought-provoking question\n - Relevant hashtags (3-5 maximum)\n - Strategic line breaks for better readability\n\n4. Write the post, ensuring that you:\n - Use a {{TONE}} tone appropriate for LinkedIn's professional environment\n - Incorporate insights from multiple transcripts to demonstrate expertise\n - Explain complex concepts in a {{CLARITY}} manner\n - Break up text into easily digestible chunks\n - Include relevant statistics or data mentioned in the transcripts, if applicable\n - Avoid directly quoting from the transcripts; instead, paraphrase and synthesize the information\n - Present the content from a {{OPINION}} perspective\n\n5. The post must be {{MIN_WORD_COUNT}} words, considering LinkedIn's optimal post length (1,300 characters maximum for standard posts).\n\n6. Format your post with:\n - Strategic emojis (if appropriate for the topic)\n - Bullet points or numbered lists when relevant\n - Line breaks between key points\n - Clear paragraph separation\n - But AVOID using ** or other markdown style formatting, as LinkedIn will not render it.\n\n7. After writing the post, review it for:\n - Professional tone\n - Mobile-first readability\n - Strategic use of white space\n - Engagement potential\n - Character count compliance\n\n8. Output only the final LinkedIn post, without any additional commentary or parentheses. Do not include any meta-information about the writing process or the original transcripts.\n\nBegin writing the LinkedIn post now, and present it as a complete, ready-to-publish piece.", + "sys_prompt": "", + "ollama_host": "localhost:11434", + "prompt_values": {} + }, + "metadata": { + "position": { + "x": 4602.504770765923, + "y": 517.5856999003662 + } + }, + "input_links": [ + { + "id": "85629334-5e62-42ea-9184-c60e4a89b2fd", + "source_id": "aaeee594-70f9-4bc3-a9a8-b2951e0809e2", + "sink_id": "95cbf9be-59cb-4240-aead-e537af599976", + "source_name": "result", + "sink_name": "prompt_values_#_MIN_WORD_COUNT", + "is_static": true + }, + { + "id": "b336b185-bb36-41ec-9464-5e80263087a2", + "source_id": "7cc813d0-574c-4e72-ad89-2d9869c2b8b6", + "sink_id": "95cbf9be-59cb-4240-aead-e537af599976", + "source_name": "result", + "sink_name": "prompt_values_#_TOPIC", + "is_static": true + }, + { + "id": "fbc76ac7-75c1-4ee0-8cc3-3581f549bd59", + "source_id": "a6567a56-c74b-4fd4-b640-9a9fc09f1b3e", + "sink_id": "95cbf9be-59cb-4240-aead-e537af599976", + "source_name": "result", + "sink_name": "prompt_values_#_STYLE", + "is_static": true + }, + { + "id": "ff4e71a9-2e2d-4d58-ae81-441756b67e22", + "source_id": "727df6b8-7208-4137-a7e4-c17d2bc2cea1", + "sink_id": "95cbf9be-59cb-4240-aead-e537af599976", + "source_name": "result", + "sink_name": "prompt_values_#_OBJECTIVE", + "is_static": true + }, + { + "id": "f505026b-2026-4bc1-b3c4-fb991cea03ea", + "source_id": "b527d6c7-2dce-4c0b-997b-017dc078ed91", + "sink_id": "95cbf9be-59cb-4240-aead-e537af599976", + "source_name": "result", + "sink_name": "prompt_values_#_OPINION", + "is_static": true + }, + { + "id": "36566600-abaf-479e-8310-380ba8716a82", + "source_id": "132eb8d1-49b6-47f4-9b6f-114b5b929989", + "sink_id": "95cbf9be-59cb-4240-aead-e537af599976", + "source_name": "result", + "sink_name": "prompt_values_#_CLARITY", + "is_static": true + }, + { + "id": "9c1f7321-f3b7-428e-8767-fdc717f72597", + "source_id": "464d3fe5-33e8-47bd-b196-e7d0adcd582c", + "sink_id": 
"95cbf9be-59cb-4240-aead-e537af599976", + "source_name": "result", + "sink_name": "prompt_values_#_TONE", + "is_static": true + }, + { + "id": "051f1e02-0bfa-436f-bab3-91489a6aa9a5", + "source_id": "49998ca9-0549-4198-98f6-a9ae39907c4b", + "sink_id": "95cbf9be-59cb-4240-aead-e537af599976", + "source_name": "value", + "sink_name": "prompt_values_#_TRANSCRIPTS", + "is_static": false + } + ], + "output_links": [ + { + "id": "3531f653-5cb2-4107-9522-d9aaebd9d333", + "source_id": "95cbf9be-59cb-4240-aead-e537af599976", + "sink_id": "67d559a9-89ca-4f1c-888b-7d5e4b934824", + "source_name": "response", + "sink_name": "value", + "is_static": false + } + ], + "graph_id": "9afd017f-565f-4090-b5df-d01bc4094292", + "graph_version": 16, + "webhook_id": null, + "webhook": null + } + ], + "links": [ + { + "id": "02035f8b-3b5c-491f-a192-4030a1c05f86", + "source_id": "32d9fa39-50c4-4727-b96b-03e88f8ab26b", + "sink_id": "7278c40d-aecf-4677-852a-b4f69db2e927", + "source_name": "list_item", + "sink_name": "youtube_url", + "is_static": false + }, + { + "id": "fbc76ac7-75c1-4ee0-8cc3-3581f549bd59", + "source_id": "a6567a56-c74b-4fd4-b640-9a9fc09f1b3e", + "sink_id": "95cbf9be-59cb-4240-aead-e537af599976", + "source_name": "result", + "sink_name": "prompt_values_#_STYLE", + "is_static": true + }, + { + "id": "f505026b-2026-4bc1-b3c4-fb991cea03ea", + "source_id": "b527d6c7-2dce-4c0b-997b-017dc078ed91", + "sink_id": "95cbf9be-59cb-4240-aead-e537af599976", + "source_name": "result", + "sink_name": "prompt_values_#_OPINION", + "is_static": true + }, + { + "id": "31b1f11d-ecff-4d0b-b55c-5cd5f59ea1d6", + "source_id": "7278c40d-aecf-4677-852a-b4f69db2e927", + "sink_id": "c0d0d5d5-221b-4ce7-911b-09e40442fd82", + "source_name": "transcript", + "sink_name": "entry", + "is_static": false + }, + { + "id": "94b3e4e7-02fd-4c24-aa70-7a0d64cc83e2", + "source_id": "f0f83bf3-4c5a-4a69-a795-84916a9d02c0", + "sink_id": "c0d0d5d5-221b-4ce7-911b-09e40442fd82", + "source_name": "updated_list", + "sink_name": "list", + "is_static": false + }, + { + "id": "aedc1c04-41fc-4ddc-a498-2180e04448d0", + "source_id": "32d9fa39-50c4-4727-b96b-03e88f8ab26b", + "sink_id": "bffd38ad-4c3c-4f37-b3ea-9d1788e527b8", + "source_name": "generated_list", + "sink_name": "collection", + "is_static": false + }, + { + "id": "47d0bc20-ae7d-4082-9a1d-9d42d5229d85", + "source_id": "8f9d7438-bfdf-44a0-8e07-116143247616", + "sink_id": "32d9fa39-50c4-4727-b96b-03e88f8ab26b", + "source_name": "output", + "sink_name": "focus", + "is_static": false + }, + { + "id": "a58d1e78-2548-47e0-990c-8f7cae2bdbcf", + "source_id": "c0d0d5d5-221b-4ce7-911b-09e40442fd82", + "sink_id": "b10632a5-b59a-4240-a6e5-c8adeabaa784", + "source_name": "updated_list", + "sink_name": "collection", + "is_static": false + }, + { + "id": "85629334-5e62-42ea-9184-c60e4a89b2fd", + "source_id": "aaeee594-70f9-4bc3-a9a8-b2951e0809e2", + "sink_id": "95cbf9be-59cb-4240-aead-e537af599976", + "source_name": "result", + "sink_name": "prompt_values_#_MIN_WORD_COUNT", + "is_static": true + }, + { + "id": "ff4e71a9-2e2d-4d58-ae81-441756b67e22", + "source_id": "727df6b8-7208-4137-a7e4-c17d2bc2cea1", + "sink_id": "95cbf9be-59cb-4240-aead-e537af599976", + "source_name": "result", + "sink_name": "prompt_values_#_OBJECTIVE", + "is_static": true + }, + { + "id": "a3e3dcac-b431-4b04-858d-ec9be1e3a027", + "source_id": "7cc813d0-574c-4e72-ad89-2d9869c2b8b6", + "sink_id": "7032ccd7-3e7a-4d27-952e-53ccb7ef0994", + "source_name": "result", + "sink_name": "values_#_QUERY", + "is_static": true + }, + { + "id": 
"af84836d-c9de-46ea-8d1a-07400d6e5dfe", + "source_id": "c0d0d5d5-221b-4ce7-911b-09e40442fd82", + "sink_id": "87506078-97fc-4591-bb19-1e4a95af72a8", + "source_name": "updated_list", + "sink_name": "yes_value", + "is_static": false + }, + { + "id": "9c1f7321-f3b7-428e-8767-fdc717f72597", + "source_id": "464d3fe5-33e8-47bd-b196-e7d0adcd582c", + "sink_id": "95cbf9be-59cb-4240-aead-e537af599976", + "source_name": "result", + "sink_name": "prompt_values_#_TONE", + "is_static": true + }, + { + "id": "04bd40a3-52a6-4b2b-93fe-5685066ffeb1", + "source_id": "bffd38ad-4c3c-4f37-b3ea-9d1788e527b8", + "sink_id": "dad79a25-9633-4b61-ad6b-344b3d7f4d1c", + "source_name": "count", + "sink_name": "input", + "is_static": false + }, + { + "id": "f752040c-a0f3-4fe0-9e9e-65da9cff67a4", + "source_id": "eeb6298f-9559-4422-8641-a2841492dbf0", + "sink_id": "32d9fa39-50c4-4727-b96b-03e88f8ab26b", + "source_name": "content", + "sink_name": "source_data", + "is_static": false + }, + { + "id": "b336b185-bb36-41ec-9464-5e80263087a2", + "source_id": "7cc813d0-574c-4e72-ad89-2d9869c2b8b6", + "sink_id": "95cbf9be-59cb-4240-aead-e537af599976", + "source_name": "result", + "sink_name": "prompt_values_#_TOPIC", + "is_static": true + }, + { + "id": "abc225ed-e1d3-462c-bcc0-88a14d17dd03", + "source_id": "287df4a8-2bf9-4cbf-a32f-dca95aeca0e7", + "sink_id": "8f9d7438-bfdf-44a0-8e07-116143247616", + "source_name": "result", + "sink_name": "values_#_NUMBER", + "is_static": true + }, + { + "id": "bb434ddb-3d01-4969-bdf9-e1d983628b9c", + "source_id": "7032ccd7-3e7a-4d27-952e-53ccb7ef0994", + "sink_id": "eeb6298f-9559-4422-8641-a2841492dbf0", + "source_name": "output", + "sink_name": "url", + "is_static": false + }, + { + "id": "3e931141-56e9-4904-b70a-0ad36415e7be", + "source_id": "bffd38ad-4c3c-4f37-b3ea-9d1788e527b8", + "sink_id": "dad79a25-9633-4b61-ad6b-344b3d7f4d1c", + "source_name": "count", + "sink_name": "data", + "is_static": false + }, + { + "id": "64a031c6-d680-4cf9-adae-4374e19bd5f4", + "source_id": "b10632a5-b59a-4240-a6e5-c8adeabaa784", + "sink_id": "87506078-97fc-4591-bb19-1e4a95af72a8", + "source_name": "count", + "sink_name": "value1", + "is_static": false + }, + { + "id": "4a91777b-946b-4c3c-b9a4-3fa3645c1e39", + "source_id": "dad79a25-9633-4b61-ad6b-344b3d7f4d1c", + "sink_id": "87506078-97fc-4591-bb19-1e4a95af72a8", + "source_name": "output", + "sink_name": "value2", + "is_static": true + }, + { + "id": "3531f653-5cb2-4107-9522-d9aaebd9d333", + "source_id": "95cbf9be-59cb-4240-aead-e537af599976", + "sink_id": "67d559a9-89ca-4f1c-888b-7d5e4b934824", + "source_name": "response", + "sink_name": "value", + "is_static": false + }, + { + "id": "36566600-abaf-479e-8310-380ba8716a82", + "source_id": "132eb8d1-49b6-47f4-9b6f-114b5b929989", + "sink_id": "95cbf9be-59cb-4240-aead-e537af599976", + "source_name": "result", + "sink_name": "prompt_values_#_CLARITY", + "is_static": true + }, + { + "id": "051f1e02-0bfa-436f-bab3-91489a6aa9a5", + "source_id": "49998ca9-0549-4198-98f6-a9ae39907c4b", + "sink_id": "95cbf9be-59cb-4240-aead-e537af599976", + "source_name": "value", + "sink_name": "prompt_values_#_TRANSCRIPTS", + "is_static": false + }, + { + "id": "2e8d10c5-a067-4c64-8fc5-a2db5ab82351", + "source_id": "c0d0d5d5-221b-4ce7-911b-09e40442fd82", + "sink_id": "c0d0d5d5-221b-4ce7-911b-09e40442fd82", + "source_name": "updated_list", + "sink_name": "list", + "is_static": false + }, + { + "id": "56d063cd-9a82-49cd-8a8e-e8a452009e1a", + "source_id": "87506078-97fc-4591-bb19-1e4a95af72a8", + "sink_id": 
"49998ca9-0549-4198-98f6-a9ae39907c4b", + "source_name": "yes_output", + "sink_name": "value", + "is_static": false + } + ], + "forked_from_id": null, + "forked_from_version": null, + "sub_graphs": [], + "user_id": "", + "created_at": "2025-04-20T18:34:57.261Z", + "input_schema": { + "type": "object", + "properties": { + "Style": { + "advanced": false, + "secret": false, + "title": "Style", + "enum": [ + "Professional", + "Conversational", + "Academic", + "Narrative", + "Persuasive", + "Humorous", + "Informal", + "Formal" + ], + "description": "Specifies the overall writing style of the blog post. This determines how the content is presented, such as in a professional, conversational, academic, narrative, or persuasive manner.", + "default": "Persuasive" + }, + "Target Word Count": { + "advanced": false, + "secret": false, + "title": "Target Word Count", + "enum": [ + "100", + "200", + "300" + ], + "description": "Sets the target length of the blog post in words. This allows you to control the content length as needed. \nPlease note that due to the nature of LLMs this will not be exact.", + "default": "100" + }, + "Objective": { + "advanced": false, + "secret": false, + "title": "Objective", + "enum": [ + "Informative", + "Persuasive", + "Entertaining", + "Analytical", + "Educational", + "Inspirational", + "Critical", + "Awareness-raising" + ], + "description": "Defines the main goal or purpose of the blog post. It guides the focus of the content, whether it's meant to inform, persuade, entertain, analyze, educate, inspire, critique, or raise awareness about the topic.", + "default": "Persuasive" + }, + "Clarity": { + "advanced": false, + "secret": false, + "title": "Clarity", + "enum": [ + "Clear and Accessible", + "Detailed and Technical", + "Simple and Straightforward", + "Comprehensive", + "Concise", + "Elaborate" + ], + "description": "Indicates the level of complexity in the explanations. It tailors the content to the audience's understanding, ranging from clear and accessible language to detailed and technical descriptions.", + "default": "Clear and Accessible" + }, + "Number of Videos": { + "advanced": false, + "secret": false, + "title": "Number of Videos", + "description": "The number of videos to collect the transcripts of", + "default": "3" + }, + "Tone": { + "advanced": false, + "secret": false, + "title": "Tone", + "enum": [ + "Engaging", + "Serious", + "Light-hearted", + "Friendly", + "Formal", + "Conversational", + "Optimistic", + "Urgent" + ], + "description": " Sets the mood or attitude conveyed through the writing. This affects how the reader perceives the content, with options like engaging, serious, light-hearted, friendly, formal, optimistic, or urgent.", + "default": "Engaging" + }, + "Opinion": { + "advanced": false, + "secret": false, + "title": "Opinion", + "enum": [ + "Positive", + "Negative", + "Neutral", + "Balanced", + "Critical", + "Supportive", + "Skeptical" + ], + "description": "Represents the perspective or stance taken in the blog post. This shapes the viewpoint presented to the reader, whether it's positive, negative, neutral, balanced, critical, supportive, or skeptical.", + "default": "Positive" + }, + "Topic": { + "advanced": false, + "secret": false, + "title": "Topic", + "description": "The topic of the post you want to write. 
This is also the query that will be searched on YouTube for research.", + "default": "Auto_GPT" + } + }, + "required": [] + }, + "output_schema": { + "type": "object", + "properties": { + "Blog Post": { + "advanced": false, + "secret": false, + "title": "Blog Post", + "description": "The full blog post written by the Agent" + } + }, + "required": [ + "Blog Post" + ] + }, + "has_external_trigger": false, + "has_human_in_the_loop": false, + "trigger_setup_info": null, + "credentials_input_schema": { + "properties": { + "webshare_proxy_user_password_credentials": { + "credentials_provider": [ + "webshare_proxy" + ], + "credentials_types": [ + "user_password" + ], + "properties": { + "id": { + "title": "Id", + "type": "string" + }, + "title": { + "anyOf": [ + { + "type": "string" + }, + { + "type": "null" + } + ], + "default": null, + "title": "Title" + }, + "provider": { + "const": "webshare_proxy", + "title": "Provider", + "type": "string" + }, + "type": { + "const": "user_password", + "title": "Type", + "type": "string" + } + }, + "required": [ + "id", + "provider", + "type" + ], + "title": "CredentialsMetaInput[Literal[], Literal['user_password']]", + "type": "object", + "discriminator_values": [] + }, + "jina_api_key_credentials": { + "credentials_provider": [ + "jina" + ], + "credentials_types": [ + "api_key" + ], + "properties": { + "id": { + "title": "Id", + "type": "string" + }, + "title": { + "anyOf": [ + { + "type": "string" + }, + { + "type": "null" + } + ], + "default": null, + "title": "Title" + }, + "provider": { + "const": "jina", + "title": "Provider", + "type": "string" + }, + "type": { + "const": "api_key", + "title": "Type", + "type": "string" + } + }, + "required": [ + "id", + "provider", + "type" + ], + "title": "CredentialsMetaInput[Literal[], Literal['api_key']]", + "type": "object", + "discriminator_values": [] + }, + "anthropic_api_key_credentials": { + "credentials_provider": [ + "anthropic" + ], + "credentials_types": [ + "api_key" + ], + "properties": { + "id": { + "title": "Id", + "type": "string" + }, + "title": { + "anyOf": [ + { + "type": "string" + }, + { + "type": "null" + } + ], + "default": null, + "title": "Title" + }, + "provider": { + "const": "anthropic", + "title": "Provider", + "type": "string" + }, + "type": { + "const": "api_key", + "title": "Type", + "type": "string" + } + }, + "required": [ + "id", + "provider", + "type" + ], + "title": "CredentialsMetaInput[Literal[], Literal['api_key']]", + "type": "object", + "discriminator": "model", + "discriminator_mapping": { + "Llama-3.3-70B-Instruct": "llama_api", + "Llama-3.3-8B-Instruct": "llama_api", + "Llama-4-Maverick-17B-128E-Instruct-FP8": "llama_api", + "Llama-4-Scout-17B-16E-Instruct-FP8": "llama_api", + "Qwen/Qwen2.5-72B-Instruct-Turbo": "aiml_api", + "amazon/nova-lite-v1": "open_router", + "amazon/nova-micro-v1": "open_router", + "amazon/nova-pro-v1": "open_router", + "claude-3-7-sonnet-20250219": "anthropic", + "claude-3-haiku-20240307": "anthropic", + "claude-haiku-4-5-20251001": "anthropic", + "claude-opus-4-1-20250805": "anthropic", + "claude-opus-4-20250514": "anthropic", + "claude-opus-4-5-20251101": "anthropic", + "claude-sonnet-4-20250514": "anthropic", + "claude-sonnet-4-5-20250929": "anthropic", + "cohere/command-r-08-2024": "open_router", + "cohere/command-r-plus-08-2024": "open_router", + "deepseek/deepseek-chat": "open_router", + "deepseek/deepseek-r1-0528": "open_router", + "dolphin-mistral:latest": "ollama", + "google/gemini-2.0-flash-001": "open_router", + 
"google/gemini-2.0-flash-lite-001": "open_router", + "google/gemini-2.5-flash": "open_router", + "google/gemini-2.5-flash-lite-preview-06-17": "open_router", + "google/gemini-2.5-pro-preview-03-25": "open_router", + "google/gemini-3-pro-preview": "open_router", + "gpt-3.5-turbo": "openai", + "gpt-4-turbo": "openai", + "gpt-4.1-2025-04-14": "openai", + "gpt-4.1-mini-2025-04-14": "openai", + "gpt-4o": "openai", + "gpt-4o-mini": "openai", + "gpt-5-2025-08-07": "openai", + "gpt-5-chat-latest": "openai", + "gpt-5-mini-2025-08-07": "openai", + "gpt-5-nano-2025-08-07": "openai", + "gpt-5.1-2025-11-13": "openai", + "gryphe/mythomax-l2-13b": "open_router", + "llama-3.1-8b-instant": "groq", + "llama-3.3-70b-versatile": "groq", + "llama3": "ollama", + "llama3.1:405b": "ollama", + "llama3.2": "ollama", + "llama3.3": "ollama", + "meta-llama/Llama-3.2-3B-Instruct-Turbo": "aiml_api", + "meta-llama/Llama-3.3-70B-Instruct-Turbo": "aiml_api", + "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo": "aiml_api", + "meta-llama/llama-4-maverick": "open_router", + "meta-llama/llama-4-scout": "open_router", + "microsoft/wizardlm-2-8x22b": "open_router", + "mistralai/mistral-nemo": "open_router", + "moonshotai/kimi-k2": "open_router", + "nousresearch/hermes-3-llama-3.1-405b": "open_router", + "nousresearch/hermes-3-llama-3.1-70b": "open_router", + "nvidia/llama-3.1-nemotron-70b-instruct": "aiml_api", + "o1": "openai", + "o1-mini": "openai", + "o3-2025-04-16": "openai", + "o3-mini": "openai", + "openai/gpt-oss-120b": "open_router", + "openai/gpt-oss-20b": "open_router", + "perplexity/sonar": "open_router", + "perplexity/sonar-deep-research": "open_router", + "perplexity/sonar-pro": "open_router", + "qwen/qwen3-235b-a22b-thinking-2507": "open_router", + "qwen/qwen3-coder": "open_router", + "v0-1.0-md": "v0", + "v0-1.5-lg": "v0", + "v0-1.5-md": "v0", + "x-ai/grok-4": "open_router", + "x-ai/grok-4-fast": "open_router", + "x-ai/grok-4.1-fast": "open_router", + "x-ai/grok-code-fast-1": "open_router" + }, + "discriminator_values": [ + "claude-sonnet-4-5-20250929" + ] + } + }, + "required": [ + "webshare_proxy_user_password_credentials", + "jina_api_key_credentials", + "anthropic_api_key_credentials" + ], + "title": "LinkedInPostGeneratorCredentialsInputSchema", + "type": "object" + } +} \ No newline at end of file diff --git a/autogpt_platform/backend/agents/agent_f2cc74bb-f43f-4395-9c35-ecb30b5b4fc9.json b/autogpt_platform/backend/agents/agent_f2cc74bb-f43f-4395-9c35-ecb30b5b4fc9.json new file mode 100644 index 0000000000..24772cf01b --- /dev/null +++ b/autogpt_platform/backend/agents/agent_f2cc74bb-f43f-4395-9c35-ecb30b5b4fc9.json @@ -0,0 +1,505 @@ +{ + "id": "0d440799-44ba-4d6c-85b3-b3739f1e1287", + "version": 12, + "is_active": true, + "name": "AI Webpage Copy Improver", + "description": "Elevate your web content with this powerful AI Webpage Copy Improver. Designed for marketers, SEO specialists, and web developers, this tool analyses and enhances website copy for maximum impact. Using advanced language models, it optimizes text for better clarity, SEO performance, and increased conversion rates. The AI examines your existing content, identifies areas for improvement, and generates refined copy that maintains your brand voice while boosting engagement. From homepage headlines to product descriptions, transform your web presence with AI-driven insights. Improve readability, incorporate targeted keywords, and craft compelling calls-to-action - all with the click of a button. 
Take your digital marketing to the next level with the AI Webpage Copy Improver.", + "instructions": null, + "recommended_schedule_cron": null, + "nodes": [ + { + "id": "130ec496-f75d-4fe2-9cd6-8c00d08ea4a7", + "block_id": "363ae599-353e-4804-937e-b2ee3cef3da4", + "input_default": { + "name": "Improved Webpage Copy" + }, + "metadata": { + "position": { + "x": 1039.5884372540172, + "y": -0.8359099621230968 + } + }, + "input_links": [ + { + "id": "d4334477-3616-454f-a430-614ca27f5b36", + "source_id": "c9924577-70d8-4ccb-9106-6f796df09ef9", + "sink_id": "130ec496-f75d-4fe2-9cd6-8c00d08ea4a7", + "source_name": "response", + "sink_name": "value", + "is_static": false + } + ], + "output_links": [], + "graph_id": "0d440799-44ba-4d6c-85b3-b3739f1e1287", + "graph_version": 12, + "webhook_id": null, + "webhook": null + }, + { + "id": "cefccd07-fe70-4feb-bf76-46b20aaa5d35", + "block_id": "363ae599-353e-4804-937e-b2ee3cef3da4", + "input_default": { + "name": "Original Page Analysis", + "description": "Analysis of the webpage as it currently stands." + }, + "metadata": { + "position": { + "x": 1037.7724103954706, + "y": -606.5934325506903 + } + }, + "input_links": [ + { + "id": "f979ab78-0903-4f19-a7c2-a419d5d81aef", + "source_id": "08612ce2-625b-4c17-accd-3acace7b6477", + "sink_id": "cefccd07-fe70-4feb-bf76-46b20aaa5d35", + "source_name": "response", + "sink_name": "value", + "is_static": false + } + ], + "output_links": [], + "graph_id": "0d440799-44ba-4d6c-85b3-b3739f1e1287", + "graph_version": 12, + "webhook_id": null, + "webhook": null + }, + { + "id": "375f8bc3-afd9-4025-ad8e-9aeb329af7ce", + "block_id": "c0a8e994-ebf1-4a9c-a4d8-89d09c86741b", + "input_default": { + "name": "Homepage URL", + "value": "https://agpt.co", + "description": "Enter the URL of the homepage you want to improve" + }, + "metadata": { + "position": { + "x": -1195.1455674454749, + "y": 0 + } + }, + "input_links": [], + "output_links": [ + { + "id": "cbb12335-fefd-4560-9fff-98675130fbad", + "source_id": "375f8bc3-afd9-4025-ad8e-9aeb329af7ce", + "sink_id": "b40595c6-dba3-4779-a129-cd4f01fff103", + "source_name": "result", + "sink_name": "url", + "is_static": true + } + ], + "graph_id": "0d440799-44ba-4d6c-85b3-b3739f1e1287", + "graph_version": 12, + "webhook_id": null, + "webhook": null + }, + { + "id": "b40595c6-dba3-4779-a129-cd4f01fff103", + "block_id": "436c3984-57fd-4b85-8e9a-459b356883bd", + "input_default": { + "raw_content": false + }, + "metadata": { + "position": { + "x": -631.7330786555249, + "y": 1.9638396496230826 + } + }, + "input_links": [ + { + "id": "cbb12335-fefd-4560-9fff-98675130fbad", + "source_id": "375f8bc3-afd9-4025-ad8e-9aeb329af7ce", + "sink_id": "b40595c6-dba3-4779-a129-cd4f01fff103", + "source_name": "result", + "sink_name": "url", + "is_static": true + } + ], + "output_links": [ + { + "id": "adfa6113-77b3-4e32-b136-3e694b87553e", + "source_id": "b40595c6-dba3-4779-a129-cd4f01fff103", + "sink_id": "c9924577-70d8-4ccb-9106-6f796df09ef9", + "source_name": "content", + "sink_name": "prompt_values_#_CONTENT", + "is_static": false + }, + { + "id": "5d5656fd-4208-4296-bc70-e39cc31caada", + "source_id": "b40595c6-dba3-4779-a129-cd4f01fff103", + "sink_id": "08612ce2-625b-4c17-accd-3acace7b6477", + "source_name": "content", + "sink_name": "prompt_values_#_CONTENT", + "is_static": false + } + ], + "graph_id": "0d440799-44ba-4d6c-85b3-b3739f1e1287", + "graph_version": 12, + "webhook_id": null, + "webhook": null + }, + { + "id": "c9924577-70d8-4ccb-9106-6f796df09ef9", + "block_id": 
"1f292d4a-41a4-4977-9684-7c8d560b9f91", + "input_default": { + "model": "gpt-4o", + "prompt": "Current Webpage Content:\n```\n{{CONTENT}}\n```\n\nBased on the following analysis of the webpage content:\n\n```\n{{ANALYSIS}}\n```\n\nRewrite and improve the content to address the identified issues. Focus on:\n1. Enhancing clarity and readability\n2. Optimizing for SEO (suggest and incorporate relevant keywords)\n3. Improving calls-to-action for better conversion rates\n4. Refining the structure and organization\n5. Maintaining brand consistency while improving the overall tone\n\nProvide the improved content in HTML format inside a code-block with \"```\" backticks, preserving the original structure where appropriate. Also, include a brief summary of the changes made and their potential impact.", + "prompt_values": {} + }, + "metadata": { + "position": { + "x": 488.37278423303917, + "y": 0 + } + }, + "input_links": [ + { + "id": "adfa6113-77b3-4e32-b136-3e694b87553e", + "source_id": "b40595c6-dba3-4779-a129-cd4f01fff103", + "sink_id": "c9924577-70d8-4ccb-9106-6f796df09ef9", + "source_name": "content", + "sink_name": "prompt_values_#_CONTENT", + "is_static": false + }, + { + "id": "6bcca45d-c9d5-439e-ac43-e4a1264d8f57", + "source_id": "08612ce2-625b-4c17-accd-3acace7b6477", + "sink_id": "c9924577-70d8-4ccb-9106-6f796df09ef9", + "source_name": "response", + "sink_name": "prompt_values_#_ANALYSIS", + "is_static": false + } + ], + "output_links": [ + { + "id": "d4334477-3616-454f-a430-614ca27f5b36", + "source_id": "c9924577-70d8-4ccb-9106-6f796df09ef9", + "sink_id": "130ec496-f75d-4fe2-9cd6-8c00d08ea4a7", + "source_name": "response", + "sink_name": "value", + "is_static": false + } + ], + "graph_id": "0d440799-44ba-4d6c-85b3-b3739f1e1287", + "graph_version": 12, + "webhook_id": null, + "webhook": null + }, + { + "id": "08612ce2-625b-4c17-accd-3acace7b6477", + "block_id": "1f292d4a-41a4-4977-9684-7c8d560b9f91", + "input_default": { + "model": "gpt-4o", + "prompt": "Analyze the following webpage content and provide a detailed report on its current state, including strengths and weaknesses in terms of clarity, SEO optimization, and potential for conversion:\n\n{{CONTENT}}\n\nInclude observations on:\n1. Overall readability and clarity\n2. Use of keywords and SEO-friendly language\n3. Effectiveness of calls-to-action\n4. Structure and organization of content\n5. 
Tone and brand consistency", + "prompt_values": {} + }, + "metadata": { + "position": { + "x": -72.66206703605442, + "y": -0.58403945075381 + } + }, + "input_links": [ + { + "id": "5d5656fd-4208-4296-bc70-e39cc31caada", + "source_id": "b40595c6-dba3-4779-a129-cd4f01fff103", + "sink_id": "08612ce2-625b-4c17-accd-3acace7b6477", + "source_name": "content", + "sink_name": "prompt_values_#_CONTENT", + "is_static": false + } + ], + "output_links": [ + { + "id": "f979ab78-0903-4f19-a7c2-a419d5d81aef", + "source_id": "08612ce2-625b-4c17-accd-3acace7b6477", + "sink_id": "cefccd07-fe70-4feb-bf76-46b20aaa5d35", + "source_name": "response", + "sink_name": "value", + "is_static": false + }, + { + "id": "6bcca45d-c9d5-439e-ac43-e4a1264d8f57", + "source_id": "08612ce2-625b-4c17-accd-3acace7b6477", + "sink_id": "c9924577-70d8-4ccb-9106-6f796df09ef9", + "source_name": "response", + "sink_name": "prompt_values_#_ANALYSIS", + "is_static": false + } + ], + "graph_id": "0d440799-44ba-4d6c-85b3-b3739f1e1287", + "graph_version": 12, + "webhook_id": null, + "webhook": null + } + ], + "links": [ + { + "id": "adfa6113-77b3-4e32-b136-3e694b87553e", + "source_id": "b40595c6-dba3-4779-a129-cd4f01fff103", + "sink_id": "c9924577-70d8-4ccb-9106-6f796df09ef9", + "source_name": "content", + "sink_name": "prompt_values_#_CONTENT", + "is_static": false + }, + { + "id": "d4334477-3616-454f-a430-614ca27f5b36", + "source_id": "c9924577-70d8-4ccb-9106-6f796df09ef9", + "sink_id": "130ec496-f75d-4fe2-9cd6-8c00d08ea4a7", + "source_name": "response", + "sink_name": "value", + "is_static": false + }, + { + "id": "5d5656fd-4208-4296-bc70-e39cc31caada", + "source_id": "b40595c6-dba3-4779-a129-cd4f01fff103", + "sink_id": "08612ce2-625b-4c17-accd-3acace7b6477", + "source_name": "content", + "sink_name": "prompt_values_#_CONTENT", + "is_static": false + }, + { + "id": "f979ab78-0903-4f19-a7c2-a419d5d81aef", + "source_id": "08612ce2-625b-4c17-accd-3acace7b6477", + "sink_id": "cefccd07-fe70-4feb-bf76-46b20aaa5d35", + "source_name": "response", + "sink_name": "value", + "is_static": false + }, + { + "id": "6bcca45d-c9d5-439e-ac43-e4a1264d8f57", + "source_id": "08612ce2-625b-4c17-accd-3acace7b6477", + "sink_id": "c9924577-70d8-4ccb-9106-6f796df09ef9", + "source_name": "response", + "sink_name": "prompt_values_#_ANALYSIS", + "is_static": false + }, + { + "id": "cbb12335-fefd-4560-9fff-98675130fbad", + "source_id": "375f8bc3-afd9-4025-ad8e-9aeb329af7ce", + "sink_id": "b40595c6-dba3-4779-a129-cd4f01fff103", + "source_name": "result", + "sink_name": "url", + "is_static": true + } + ], + "forked_from_id": null, + "forked_from_version": null, + "sub_graphs": [], + "user_id": "", + "created_at": "2024-12-20T19:47:22.036Z", + "input_schema": { + "type": "object", + "properties": { + "Homepage URL": { + "advanced": false, + "secret": false, + "title": "Homepage URL", + "description": "Enter the URL of the homepage you want to improve", + "default": "https://agpt.co" + } + }, + "required": [] + }, + "output_schema": { + "type": "object", + "properties": { + "Improved Webpage Copy": { + "advanced": false, + "secret": false, + "title": "Improved Webpage Copy" + }, + "Original Page Analysis": { + "advanced": false, + "secret": false, + "title": "Original Page Analysis", + "description": "Analysis of the webpage as it currently stands." 
+ } + }, + "required": [ + "Improved Webpage Copy", + "Original Page Analysis" + ] + }, + "has_external_trigger": false, + "has_human_in_the_loop": false, + "trigger_setup_info": null, + "credentials_input_schema": { + "properties": { + "jina_api_key_credentials": { + "credentials_provider": [ + "jina" + ], + "credentials_types": [ + "api_key" + ], + "properties": { + "id": { + "title": "Id", + "type": "string" + }, + "title": { + "anyOf": [ + { + "type": "string" + }, + { + "type": "null" + } + ], + "default": null, + "title": "Title" + }, + "provider": { + "const": "jina", + "title": "Provider", + "type": "string" + }, + "type": { + "const": "api_key", + "title": "Type", + "type": "string" + } + }, + "required": [ + "id", + "provider", + "type" + ], + "title": "CredentialsMetaInput[Literal[], Literal['api_key']]", + "type": "object", + "discriminator_values": [] + }, + "openai_api_key_credentials": { + "credentials_provider": [ + "openai" + ], + "credentials_types": [ + "api_key" + ], + "properties": { + "id": { + "title": "Id", + "type": "string" + }, + "title": { + "anyOf": [ + { + "type": "string" + }, + { + "type": "null" + } + ], + "default": null, + "title": "Title" + }, + "provider": { + "const": "openai", + "title": "Provider", + "type": "string" + }, + "type": { + "const": "api_key", + "title": "Type", + "type": "string" + } + }, + "required": [ + "id", + "provider", + "type" + ], + "title": "CredentialsMetaInput[Literal[], Literal['api_key']]", + "type": "object", + "discriminator": "model", + "discriminator_mapping": { + "Llama-3.3-70B-Instruct": "llama_api", + "Llama-3.3-8B-Instruct": "llama_api", + "Llama-4-Maverick-17B-128E-Instruct-FP8": "llama_api", + "Llama-4-Scout-17B-16E-Instruct-FP8": "llama_api", + "Qwen/Qwen2.5-72B-Instruct-Turbo": "aiml_api", + "amazon/nova-lite-v1": "open_router", + "amazon/nova-micro-v1": "open_router", + "amazon/nova-pro-v1": "open_router", + "claude-3-7-sonnet-20250219": "anthropic", + "claude-3-haiku-20240307": "anthropic", + "claude-haiku-4-5-20251001": "anthropic", + "claude-opus-4-1-20250805": "anthropic", + "claude-opus-4-20250514": "anthropic", + "claude-opus-4-5-20251101": "anthropic", + "claude-sonnet-4-20250514": "anthropic", + "claude-sonnet-4-5-20250929": "anthropic", + "cohere/command-r-08-2024": "open_router", + "cohere/command-r-plus-08-2024": "open_router", + "deepseek/deepseek-chat": "open_router", + "deepseek/deepseek-r1-0528": "open_router", + "dolphin-mistral:latest": "ollama", + "google/gemini-2.0-flash-001": "open_router", + "google/gemini-2.0-flash-lite-001": "open_router", + "google/gemini-2.5-flash": "open_router", + "google/gemini-2.5-flash-lite-preview-06-17": "open_router", + "google/gemini-2.5-pro-preview-03-25": "open_router", + "google/gemini-3-pro-preview": "open_router", + "gpt-3.5-turbo": "openai", + "gpt-4-turbo": "openai", + "gpt-4.1-2025-04-14": "openai", + "gpt-4.1-mini-2025-04-14": "openai", + "gpt-4o": "openai", + "gpt-4o-mini": "openai", + "gpt-5-2025-08-07": "openai", + "gpt-5-chat-latest": "openai", + "gpt-5-mini-2025-08-07": "openai", + "gpt-5-nano-2025-08-07": "openai", + "gpt-5.1-2025-11-13": "openai", + "gryphe/mythomax-l2-13b": "open_router", + "llama-3.1-8b-instant": "groq", + "llama-3.3-70b-versatile": "groq", + "llama3": "ollama", + "llama3.1:405b": "ollama", + "llama3.2": "ollama", + "llama3.3": "ollama", + "meta-llama/Llama-3.2-3B-Instruct-Turbo": "aiml_api", + "meta-llama/Llama-3.3-70B-Instruct-Turbo": "aiml_api", + "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo": "aiml_api", + 
"meta-llama/llama-4-maverick": "open_router", + "meta-llama/llama-4-scout": "open_router", + "microsoft/wizardlm-2-8x22b": "open_router", + "mistralai/mistral-nemo": "open_router", + "moonshotai/kimi-k2": "open_router", + "nousresearch/hermes-3-llama-3.1-405b": "open_router", + "nousresearch/hermes-3-llama-3.1-70b": "open_router", + "nvidia/llama-3.1-nemotron-70b-instruct": "aiml_api", + "o1": "openai", + "o1-mini": "openai", + "o3-2025-04-16": "openai", + "o3-mini": "openai", + "openai/gpt-oss-120b": "open_router", + "openai/gpt-oss-20b": "open_router", + "perplexity/sonar": "open_router", + "perplexity/sonar-deep-research": "open_router", + "perplexity/sonar-pro": "open_router", + "qwen/qwen3-235b-a22b-thinking-2507": "open_router", + "qwen/qwen3-coder": "open_router", + "v0-1.0-md": "v0", + "v0-1.5-lg": "v0", + "v0-1.5-md": "v0", + "x-ai/grok-4": "open_router", + "x-ai/grok-4-fast": "open_router", + "x-ai/grok-4.1-fast": "open_router", + "x-ai/grok-code-fast-1": "open_router" + }, + "discriminator_values": [ + "gpt-4o" + ] + } + }, + "required": [ + "jina_api_key_credentials", + "openai_api_key_credentials" + ], + "title": "AIWebpageCopyImproverCredentialsInputSchema", + "type": "object" + } +} \ No newline at end of file diff --git a/autogpt_platform/backend/agents/agent_fc2c9976-0962-4625-a27b-d316573a9e7f.json b/autogpt_platform/backend/agents/agent_fc2c9976-0962-4625-a27b-d316573a9e7f.json new file mode 100644 index 0000000000..d2a83bcdfe --- /dev/null +++ b/autogpt_platform/backend/agents/agent_fc2c9976-0962-4625-a27b-d316573a9e7f.json @@ -0,0 +1,615 @@ +{ + "id": "4c6b68cb-bb75-4044-b1cb-2cee3fd39b26", + "version": 29, + "is_active": true, + "name": "Email Address Finder", + "description": "Input information of a business and find their email address", + "instructions": null, + "recommended_schedule_cron": null, + "nodes": [ + { + "id": "04cad535-9f1a-4876-8b07-af5897d8c282", + "block_id": "c0a8e994-ebf1-4a9c-a4d8-89d09c86741b", + "input_default": { + "name": "Address", + "value": "USA" + }, + "metadata": { + "position": { + "x": 1047.9357219838776, + "y": 1067.9123910370954 + } + }, + "input_links": [], + "output_links": [ + { + "id": "aac29f7b-3cd1-4c91-9a2a-72a8301c0957", + "source_id": "04cad535-9f1a-4876-8b07-af5897d8c282", + "sink_id": "28b5ddcc-dc20-41cc-ad21-c54ff459f694", + "source_name": "result", + "sink_name": "values_#_ADDRESS", + "is_static": true + } + ], + "graph_id": "4c6b68cb-bb75-4044-b1cb-2cee3fd39b26", + "graph_version": 29, + "webhook_id": null, + "webhook": null + }, + { + "id": "a6e7355e-5bf8-4b09-b11c-a5e140389981", + "block_id": "3146e4fe-2cdd-4f29-bd12-0c9d5bb4deb0", + "input_default": { + "group": 1, + "pattern": "(.*?)<\\/email>" + }, + "metadata": { + "position": { + "x": 3381.2821481740634, + "y": 246.091098184158 + } + }, + "input_links": [ + { + "id": "9f8188ce-1f3d-46fb-acda-b2a57c0e5da6", + "source_id": "510937b3-0134-4e45-b2ba-05a447bbaf50", + "sink_id": "a6e7355e-5bf8-4b09-b11c-a5e140389981", + "source_name": "response", + "sink_name": "text", + "is_static": false + } + ], + "output_links": [ + { + "id": "b15b5143-27b7-486e-a166-4095e72e5235", + "source_id": "a6e7355e-5bf8-4b09-b11c-a5e140389981", + "sink_id": "266b7255-11c4-4b88-99e2-85db31a2e865", + "source_name": "negative", + "sink_name": "values_#_Result", + "is_static": false + }, + { + "id": "23591872-3c6b-4562-87d3-5b6ade698e48", + "source_id": "a6e7355e-5bf8-4b09-b11c-a5e140389981", + "sink_id": "310c8fab-2ae6-4158-bd48-01dbdc434130", + "source_name": "positive", + "sink_name": 
"value", + "is_static": false + } + ], + "graph_id": "4c6b68cb-bb75-4044-b1cb-2cee3fd39b26", + "graph_version": 29, + "webhook_id": null, + "webhook": null + }, + { + "id": "310c8fab-2ae6-4158-bd48-01dbdc434130", + "block_id": "363ae599-353e-4804-937e-b2ee3cef3da4", + "input_default": { + "name": "Email" + }, + "metadata": { + "position": { + "x": 4525.4246310882, + "y": 246.36913665010354 + } + }, + "input_links": [ + { + "id": "d87b07ea-dcec-4d38-a644-2c1d741ea3cb", + "source_id": "266b7255-11c4-4b88-99e2-85db31a2e865", + "sink_id": "310c8fab-2ae6-4158-bd48-01dbdc434130", + "source_name": "output", + "sink_name": "value", + "is_static": false + }, + { + "id": "23591872-3c6b-4562-87d3-5b6ade698e48", + "source_id": "a6e7355e-5bf8-4b09-b11c-a5e140389981", + "sink_id": "310c8fab-2ae6-4158-bd48-01dbdc434130", + "source_name": "positive", + "sink_name": "value", + "is_static": false + } + ], + "output_links": [], + "graph_id": "4c6b68cb-bb75-4044-b1cb-2cee3fd39b26", + "graph_version": 29, + "webhook_id": null, + "webhook": null + }, + { + "id": "4a41df99-ffe2-4c12-b528-632979c9c030", + "block_id": "87840993-2053-44b7-8da4-187ad4ee518c", + "input_default": {}, + "metadata": { + "position": { + "x": 2182.7499999999995, + "y": 242.00001144409185 + } + }, + "input_links": [ + { + "id": "2e411d3d-79ba-4958-9c1c-b76a45a2e649", + "source_id": "28b5ddcc-dc20-41cc-ad21-c54ff459f694", + "sink_id": "4a41df99-ffe2-4c12-b528-632979c9c030", + "source_name": "output", + "sink_name": "query", + "is_static": false + } + ], + "output_links": [ + { + "id": "899cc7d8-a96b-4107-b3c6-4c78edcf0c6b", + "source_id": "4a41df99-ffe2-4c12-b528-632979c9c030", + "sink_id": "510937b3-0134-4e45-b2ba-05a447bbaf50", + "source_name": "results", + "sink_name": "prompt_values_#_WEBSITE_CONTENT", + "is_static": false + } + ], + "graph_id": "4c6b68cb-bb75-4044-b1cb-2cee3fd39b26", + "graph_version": 29, + "webhook_id": null, + "webhook": null + }, + { + "id": "9708a10a-8be0-4c44-abb3-bd0f7c594794", + "block_id": "c0a8e994-ebf1-4a9c-a4d8-89d09c86741b", + "input_default": { + "name": "Business Name", + "value": "Tim Cook" + }, + "metadata": { + "position": { + "x": 1049.9704155272595, + "y": 244.49931152418344 + } + }, + "input_links": [], + "output_links": [ + { + "id": "946b522c-365f-4ee0-96f9-28863d9882ea", + "source_id": "9708a10a-8be0-4c44-abb3-bd0f7c594794", + "sink_id": "28b5ddcc-dc20-41cc-ad21-c54ff459f694", + "source_name": "result", + "sink_name": "values_#_NAME", + "is_static": true + }, + { + "id": "43e920a7-0bb4-4fae-9a22-91df95c7342a", + "source_id": "9708a10a-8be0-4c44-abb3-bd0f7c594794", + "sink_id": "510937b3-0134-4e45-b2ba-05a447bbaf50", + "source_name": "result", + "sink_name": "prompt_values_#_BUSINESS_NAME", + "is_static": true + } + ], + "graph_id": "4c6b68cb-bb75-4044-b1cb-2cee3fd39b26", + "graph_version": 29, + "webhook_id": null, + "webhook": null + }, + { + "id": "28b5ddcc-dc20-41cc-ad21-c54ff459f694", + "block_id": "db7d8f02-2f44-4c55-ab7a-eae0941f0c30", + "input_default": { + "format": "Email Address of {{NAME}}, {{ADDRESS}}", + "values": {} + }, + "metadata": { + "position": { + "x": 1625.25, + "y": 243.25001144409185 + } + }, + "input_links": [ + { + "id": "946b522c-365f-4ee0-96f9-28863d9882ea", + "source_id": "9708a10a-8be0-4c44-abb3-bd0f7c594794", + "sink_id": "28b5ddcc-dc20-41cc-ad21-c54ff459f694", + "source_name": "result", + "sink_name": "values_#_NAME", + "is_static": true + }, + { + "id": "aac29f7b-3cd1-4c91-9a2a-72a8301c0957", + "source_id": "04cad535-9f1a-4876-8b07-af5897d8c282", + "sink_id": 
"28b5ddcc-dc20-41cc-ad21-c54ff459f694", + "source_name": "result", + "sink_name": "values_#_ADDRESS", + "is_static": true + } + ], + "output_links": [ + { + "id": "2e411d3d-79ba-4958-9c1c-b76a45a2e649", + "source_id": "28b5ddcc-dc20-41cc-ad21-c54ff459f694", + "sink_id": "4a41df99-ffe2-4c12-b528-632979c9c030", + "source_name": "output", + "sink_name": "query", + "is_static": false + } + ], + "graph_id": "4c6b68cb-bb75-4044-b1cb-2cee3fd39b26", + "graph_version": 29, + "webhook_id": null, + "webhook": null + }, + { + "id": "266b7255-11c4-4b88-99e2-85db31a2e865", + "block_id": "db7d8f02-2f44-4c55-ab7a-eae0941f0c30", + "input_default": { + "format": "Failed to find email. \nResult:\n{{RESULT}}", + "values": {} + }, + "metadata": { + "position": { + "x": 3949.7493830805934, + "y": 705.209819698647 + } + }, + "input_links": [ + { + "id": "b15b5143-27b7-486e-a166-4095e72e5235", + "source_id": "a6e7355e-5bf8-4b09-b11c-a5e140389981", + "sink_id": "266b7255-11c4-4b88-99e2-85db31a2e865", + "source_name": "negative", + "sink_name": "values_#_Result", + "is_static": false + } + ], + "output_links": [ + { + "id": "d87b07ea-dcec-4d38-a644-2c1d741ea3cb", + "source_id": "266b7255-11c4-4b88-99e2-85db31a2e865", + "sink_id": "310c8fab-2ae6-4158-bd48-01dbdc434130", + "source_name": "output", + "sink_name": "value", + "is_static": false + } + ], + "graph_id": "4c6b68cb-bb75-4044-b1cb-2cee3fd39b26", + "graph_version": 29, + "webhook_id": null, + "webhook": null + }, + { + "id": "510937b3-0134-4e45-b2ba-05a447bbaf50", + "block_id": "1f292d4a-41a4-4977-9684-7c8d560b9f91", + "input_default": { + "model": "claude-sonnet-4-5-20250929", + "prompt": "\n{{WEBSITE_CONTENT}}\n\n\nExtract the Contact Email of {{BUSINESS_NAME}}.\n\nIf no email that can be used to contact {{BUSINESS_NAME}} is present, output `N/A`.\nDo not share any emails other than the email for this specific entity.\n\nIf multiple present pick the likely best one.\n\nRespond with the email (or N/A) inside tags.\n\nExample Response:\n\n\nThere were many emails present, but luckily one was for {{BUSINESS_NAME}} which I have included below.\n\n\nexample@email.com\n", + "prompt_values": {} + }, + "metadata": { + "position": { + "x": 2774.879259081777, + "y": 243.3102035752969 + } + }, + "input_links": [ + { + "id": "43e920a7-0bb4-4fae-9a22-91df95c7342a", + "source_id": "9708a10a-8be0-4c44-abb3-bd0f7c594794", + "sink_id": "510937b3-0134-4e45-b2ba-05a447bbaf50", + "source_name": "result", + "sink_name": "prompt_values_#_BUSINESS_NAME", + "is_static": true + }, + { + "id": "899cc7d8-a96b-4107-b3c6-4c78edcf0c6b", + "source_id": "4a41df99-ffe2-4c12-b528-632979c9c030", + "sink_id": "510937b3-0134-4e45-b2ba-05a447bbaf50", + "source_name": "results", + "sink_name": "prompt_values_#_WEBSITE_CONTENT", + "is_static": false + } + ], + "output_links": [ + { + "id": "9f8188ce-1f3d-46fb-acda-b2a57c0e5da6", + "source_id": "510937b3-0134-4e45-b2ba-05a447bbaf50", + "sink_id": "a6e7355e-5bf8-4b09-b11c-a5e140389981", + "source_name": "response", + "sink_name": "text", + "is_static": false + } + ], + "graph_id": "4c6b68cb-bb75-4044-b1cb-2cee3fd39b26", + "graph_version": 29, + "webhook_id": null, + "webhook": null + } + ], + "links": [ + { + "id": "9f8188ce-1f3d-46fb-acda-b2a57c0e5da6", + "source_id": "510937b3-0134-4e45-b2ba-05a447bbaf50", + "sink_id": "a6e7355e-5bf8-4b09-b11c-a5e140389981", + "source_name": "response", + "sink_name": "text", + "is_static": false + }, + { + "id": "b15b5143-27b7-486e-a166-4095e72e5235", + "source_id": "a6e7355e-5bf8-4b09-b11c-a5e140389981", + 
"sink_id": "266b7255-11c4-4b88-99e2-85db31a2e865", + "source_name": "negative", + "sink_name": "values_#_Result", + "is_static": false + }, + { + "id": "d87b07ea-dcec-4d38-a644-2c1d741ea3cb", + "source_id": "266b7255-11c4-4b88-99e2-85db31a2e865", + "sink_id": "310c8fab-2ae6-4158-bd48-01dbdc434130", + "source_name": "output", + "sink_name": "value", + "is_static": false + }, + { + "id": "946b522c-365f-4ee0-96f9-28863d9882ea", + "source_id": "9708a10a-8be0-4c44-abb3-bd0f7c594794", + "sink_id": "28b5ddcc-dc20-41cc-ad21-c54ff459f694", + "source_name": "result", + "sink_name": "values_#_NAME", + "is_static": true + }, + { + "id": "23591872-3c6b-4562-87d3-5b6ade698e48", + "source_id": "a6e7355e-5bf8-4b09-b11c-a5e140389981", + "sink_id": "310c8fab-2ae6-4158-bd48-01dbdc434130", + "source_name": "positive", + "sink_name": "value", + "is_static": false + }, + { + "id": "43e920a7-0bb4-4fae-9a22-91df95c7342a", + "source_id": "9708a10a-8be0-4c44-abb3-bd0f7c594794", + "sink_id": "510937b3-0134-4e45-b2ba-05a447bbaf50", + "source_name": "result", + "sink_name": "prompt_values_#_BUSINESS_NAME", + "is_static": true + }, + { + "id": "2e411d3d-79ba-4958-9c1c-b76a45a2e649", + "source_id": "28b5ddcc-dc20-41cc-ad21-c54ff459f694", + "sink_id": "4a41df99-ffe2-4c12-b528-632979c9c030", + "source_name": "output", + "sink_name": "query", + "is_static": false + }, + { + "id": "aac29f7b-3cd1-4c91-9a2a-72a8301c0957", + "source_id": "04cad535-9f1a-4876-8b07-af5897d8c282", + "sink_id": "28b5ddcc-dc20-41cc-ad21-c54ff459f694", + "source_name": "result", + "sink_name": "values_#_ADDRESS", + "is_static": true + }, + { + "id": "899cc7d8-a96b-4107-b3c6-4c78edcf0c6b", + "source_id": "4a41df99-ffe2-4c12-b528-632979c9c030", + "sink_id": "510937b3-0134-4e45-b2ba-05a447bbaf50", + "source_name": "results", + "sink_name": "prompt_values_#_WEBSITE_CONTENT", + "is_static": false + } + ], + "forked_from_id": null, + "forked_from_version": null, + "sub_graphs": [], + "user_id": "", + "created_at": "2025-01-03T00:46:30.244Z", + "input_schema": { + "type": "object", + "properties": { + "Address": { + "advanced": false, + "secret": false, + "title": "Address", + "default": "USA" + }, + "Business Name": { + "advanced": false, + "secret": false, + "title": "Business Name", + "default": "Tim Cook" + } + }, + "required": [] + }, + "output_schema": { + "type": "object", + "properties": { + "Email": { + "advanced": false, + "secret": false, + "title": "Email" + } + }, + "required": [ + "Email" + ] + }, + "has_external_trigger": false, + "has_human_in_the_loop": false, + "trigger_setup_info": null, + "credentials_input_schema": { + "properties": { + "jina_api_key_credentials": { + "credentials_provider": [ + "jina" + ], + "credentials_types": [ + "api_key" + ], + "properties": { + "id": { + "title": "Id", + "type": "string" + }, + "title": { + "anyOf": [ + { + "type": "string" + }, + { + "type": "null" + } + ], + "default": null, + "title": "Title" + }, + "provider": { + "const": "jina", + "title": "Provider", + "type": "string" + }, + "type": { + "const": "api_key", + "title": "Type", + "type": "string" + } + }, + "required": [ + "id", + "provider", + "type" + ], + "title": "CredentialsMetaInput[Literal[], Literal['api_key']]", + "type": "object", + "discriminator_values": [] + }, + "anthropic_api_key_credentials": { + "credentials_provider": [ + "anthropic" + ], + "credentials_types": [ + "api_key" + ], + "properties": { + "id": { + "title": "Id", + "type": "string" + }, + "title": { + "anyOf": [ + { + "type": "string" + }, + { + "type": "null" + 
} + ], + "default": null, + "title": "Title" + }, + "provider": { + "const": "anthropic", + "title": "Provider", + "type": "string" + }, + "type": { + "const": "api_key", + "title": "Type", + "type": "string" + } + }, + "required": [ + "id", + "provider", + "type" + ], + "title": "CredentialsMetaInput[Literal[], Literal['api_key']]", + "type": "object", + "discriminator": "model", + "discriminator_mapping": { + "Llama-3.3-70B-Instruct": "llama_api", + "Llama-3.3-8B-Instruct": "llama_api", + "Llama-4-Maverick-17B-128E-Instruct-FP8": "llama_api", + "Llama-4-Scout-17B-16E-Instruct-FP8": "llama_api", + "Qwen/Qwen2.5-72B-Instruct-Turbo": "aiml_api", + "amazon/nova-lite-v1": "open_router", + "amazon/nova-micro-v1": "open_router", + "amazon/nova-pro-v1": "open_router", + "claude-3-7-sonnet-20250219": "anthropic", + "claude-3-haiku-20240307": "anthropic", + "claude-haiku-4-5-20251001": "anthropic", + "claude-opus-4-1-20250805": "anthropic", + "claude-opus-4-20250514": "anthropic", + "claude-opus-4-5-20251101": "anthropic", + "claude-sonnet-4-20250514": "anthropic", + "claude-sonnet-4-5-20250929": "anthropic", + "cohere/command-r-08-2024": "open_router", + "cohere/command-r-plus-08-2024": "open_router", + "deepseek/deepseek-chat": "open_router", + "deepseek/deepseek-r1-0528": "open_router", + "dolphin-mistral:latest": "ollama", + "google/gemini-2.0-flash-001": "open_router", + "google/gemini-2.0-flash-lite-001": "open_router", + "google/gemini-2.5-flash": "open_router", + "google/gemini-2.5-flash-lite-preview-06-17": "open_router", + "google/gemini-2.5-pro-preview-03-25": "open_router", + "google/gemini-3-pro-preview": "open_router", + "gpt-3.5-turbo": "openai", + "gpt-4-turbo": "openai", + "gpt-4.1-2025-04-14": "openai", + "gpt-4.1-mini-2025-04-14": "openai", + "gpt-4o": "openai", + "gpt-4o-mini": "openai", + "gpt-5-2025-08-07": "openai", + "gpt-5-chat-latest": "openai", + "gpt-5-mini-2025-08-07": "openai", + "gpt-5-nano-2025-08-07": "openai", + "gpt-5.1-2025-11-13": "openai", + "gryphe/mythomax-l2-13b": "open_router", + "llama-3.1-8b-instant": "groq", + "llama-3.3-70b-versatile": "groq", + "llama3": "ollama", + "llama3.1:405b": "ollama", + "llama3.2": "ollama", + "llama3.3": "ollama", + "meta-llama/Llama-3.2-3B-Instruct-Turbo": "aiml_api", + "meta-llama/Llama-3.3-70B-Instruct-Turbo": "aiml_api", + "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo": "aiml_api", + "meta-llama/llama-4-maverick": "open_router", + "meta-llama/llama-4-scout": "open_router", + "microsoft/wizardlm-2-8x22b": "open_router", + "mistralai/mistral-nemo": "open_router", + "moonshotai/kimi-k2": "open_router", + "nousresearch/hermes-3-llama-3.1-405b": "open_router", + "nousresearch/hermes-3-llama-3.1-70b": "open_router", + "nvidia/llama-3.1-nemotron-70b-instruct": "aiml_api", + "o1": "openai", + "o1-mini": "openai", + "o3-2025-04-16": "openai", + "o3-mini": "openai", + "openai/gpt-oss-120b": "open_router", + "openai/gpt-oss-20b": "open_router", + "perplexity/sonar": "open_router", + "perplexity/sonar-deep-research": "open_router", + "perplexity/sonar-pro": "open_router", + "qwen/qwen3-235b-a22b-thinking-2507": "open_router", + "qwen/qwen3-coder": "open_router", + "v0-1.0-md": "v0", + "v0-1.5-lg": "v0", + "v0-1.5-md": "v0", + "x-ai/grok-4": "open_router", + "x-ai/grok-4-fast": "open_router", + "x-ai/grok-4.1-fast": "open_router", + "x-ai/grok-code-fast-1": "open_router" + }, + "discriminator_values": [ + "claude-sonnet-4-5-20250929" + ] + } + }, + "required": [ + "jina_api_key_credentials", + "anthropic_api_key_credentials" + ], + 
"title": "EmailAddressFinderCredentialsInputSchema", + "type": "object" + } +} \ No newline at end of file diff --git a/autogpt_platform/backend/backend/server/__init__.py b/autogpt_platform/backend/backend/api/__init__.py similarity index 100% rename from autogpt_platform/backend/backend/server/__init__.py rename to autogpt_platform/backend/backend/api/__init__.py diff --git a/autogpt_platform/backend/backend/server/conftest.py b/autogpt_platform/backend/backend/api/conftest.py similarity index 100% rename from autogpt_platform/backend/backend/server/conftest.py rename to autogpt_platform/backend/backend/api/conftest.py diff --git a/autogpt_platform/backend/backend/server/conn_manager.py b/autogpt_platform/backend/backend/api/conn_manager.py similarity index 73% rename from autogpt_platform/backend/backend/server/conn_manager.py rename to autogpt_platform/backend/backend/api/conn_manager.py index 0430028610..52e0f50f69 100644 --- a/autogpt_platform/backend/backend/server/conn_manager.py +++ b/autogpt_platform/backend/backend/api/conn_manager.py @@ -1,13 +1,14 @@ +import asyncio from typing import Dict, Set from fastapi import WebSocket +from backend.api.model import NotificationPayload, WSMessage, WSMethod from backend.data.execution import ( ExecutionEventType, GraphExecutionEvent, NodeExecutionEvent, ) -from backend.server.model import WSMessage, WSMethod _EVENT_TYPE_TO_METHOD_MAP: dict[ExecutionEventType, WSMethod] = { ExecutionEventType.GRAPH_EXEC_UPDATE: WSMethod.GRAPH_EXECUTION_EVENT, @@ -19,15 +20,24 @@ class ConnectionManager: def __init__(self): self.active_connections: Set[WebSocket] = set() self.subscriptions: Dict[str, Set[WebSocket]] = {} + self.user_connections: Dict[str, Set[WebSocket]] = {} - async def connect_socket(self, websocket: WebSocket): + async def connect_socket(self, websocket: WebSocket, *, user_id: str): await websocket.accept() self.active_connections.add(websocket) + if user_id not in self.user_connections: + self.user_connections[user_id] = set() + self.user_connections[user_id].add(websocket) - def disconnect_socket(self, websocket: WebSocket): - self.active_connections.remove(websocket) + def disconnect_socket(self, websocket: WebSocket, *, user_id: str): + self.active_connections.discard(websocket) for subscribers in self.subscriptions.values(): subscribers.discard(websocket) + user_conns = self.user_connections.get(user_id) + if user_conns is not None: + user_conns.discard(websocket) + if not user_conns: + self.user_connections.pop(user_id, None) async def subscribe_graph_exec( self, *, user_id: str, graph_exec_id: str, websocket: WebSocket @@ -92,6 +102,26 @@ class ConnectionManager: return n_sent + async def send_notification( + self, *, user_id: str, payload: NotificationPayload + ) -> int: + """Send a notification to all websocket connections belonging to a user.""" + message = WSMessage( + method=WSMethod.NOTIFICATION, + data=payload.model_dump(), + ).model_dump_json() + + connections = tuple(self.user_connections.get(user_id, set())) + if not connections: + return 0 + + await asyncio.gather( + *(connection.send_text(message) for connection in connections), + return_exceptions=True, + ) + + return len(connections) + async def _subscribe(self, channel_key: str, websocket: WebSocket) -> str: if channel_key not in self.subscriptions: self.subscriptions[channel_key] = set() diff --git a/autogpt_platform/backend/backend/server/conn_manager_test.py b/autogpt_platform/backend/backend/api/conn_manager_test.py similarity index 84% rename from 
autogpt_platform/backend/backend/server/conn_manager_test.py rename to autogpt_platform/backend/backend/api/conn_manager_test.py index 401a9eaf81..71dbc0ffee 100644 --- a/autogpt_platform/backend/backend/server/conn_manager_test.py +++ b/autogpt_platform/backend/backend/api/conn_manager_test.py @@ -4,13 +4,13 @@ from unittest.mock import AsyncMock import pytest from fastapi import WebSocket +from backend.api.conn_manager import ConnectionManager +from backend.api.model import NotificationPayload, WSMessage, WSMethod from backend.data.execution import ( ExecutionStatus, GraphExecutionEvent, NodeExecutionEvent, ) -from backend.server.conn_manager import ConnectionManager -from backend.server.model import WSMessage, WSMethod @pytest.fixture @@ -29,8 +29,9 @@ def mock_websocket() -> AsyncMock: async def test_connect( connection_manager: ConnectionManager, mock_websocket: AsyncMock ) -> None: - await connection_manager.connect_socket(mock_websocket) + await connection_manager.connect_socket(mock_websocket, user_id="user-1") assert mock_websocket in connection_manager.active_connections + assert mock_websocket in connection_manager.user_connections["user-1"] mock_websocket.accept.assert_called_once() @@ -39,11 +40,13 @@ def test_disconnect( ) -> None: connection_manager.active_connections.add(mock_websocket) connection_manager.subscriptions["test_channel_42"] = {mock_websocket} + connection_manager.user_connections["user-1"] = {mock_websocket} - connection_manager.disconnect_socket(mock_websocket) + connection_manager.disconnect_socket(mock_websocket, user_id="user-1") assert mock_websocket not in connection_manager.active_connections assert mock_websocket not in connection_manager.subscriptions["test_channel_42"] + assert "user-1" not in connection_manager.user_connections @pytest.mark.asyncio @@ -207,3 +210,22 @@ async def test_send_execution_result_no_subscribers( await connection_manager.send_execution_update(result) mock_websocket.send_text.assert_not_called() + + +@pytest.mark.asyncio +async def test_send_notification( + connection_manager: ConnectionManager, mock_websocket: AsyncMock +) -> None: + connection_manager.user_connections["user-1"] = {mock_websocket} + + await connection_manager.send_notification( + user_id="user-1", payload=NotificationPayload(type="info", event="hey") + ) + + mock_websocket.send_text.assert_called_once() + sent_message = mock_websocket.send_text.call_args[0][0] + expected_message = WSMessage( + method=WSMethod.NOTIFICATION, + data={"type": "info", "event": "hey"}, + ).model_dump_json() + assert sent_message == expected_message diff --git a/autogpt_platform/backend/backend/server/external/api.py b/autogpt_platform/backend/backend/api/external/fastapi_app.py similarity index 60% rename from autogpt_platform/backend/backend/server/external/api.py rename to autogpt_platform/backend/backend/api/external/fastapi_app.py index ee2cec2fa1..b55c918a74 100644 --- a/autogpt_platform/backend/backend/server/external/api.py +++ b/autogpt_platform/backend/backend/api/external/fastapi_app.py @@ -1,23 +1,23 @@ from fastapi import FastAPI +from backend.api.middleware.security import SecurityHeadersMiddleware from backend.monitoring.instrumentation import instrument_fastapi -from backend.server.middleware.security import SecurityHeadersMiddleware -from .routes.v1 import v1_router +from .v1.routes import v1_router -external_app = FastAPI( +external_api = FastAPI( title="AutoGPT External API", description="External API for AutoGPT integrations", docs_url="/docs", version="1.0", ) 
-external_app.add_middleware(SecurityHeadersMiddleware) -external_app.include_router(v1_router, prefix="/v1") +external_api.add_middleware(SecurityHeadersMiddleware) +external_api.include_router(v1_router, prefix="/v1") # Add Prometheus instrumentation instrument_fastapi( - external_app, + external_api, service_name="external-api", expose_endpoint=True, endpoint="/metrics", diff --git a/autogpt_platform/backend/backend/api/external/middleware.py b/autogpt_platform/backend/backend/api/external/middleware.py new file mode 100644 index 0000000000..0c278e1715 --- /dev/null +++ b/autogpt_platform/backend/backend/api/external/middleware.py @@ -0,0 +1,107 @@ +from fastapi import HTTPException, Security, status +from fastapi.security import APIKeyHeader, HTTPAuthorizationCredentials, HTTPBearer +from prisma.enums import APIKeyPermission + +from backend.data.auth.api_key import APIKeyInfo, validate_api_key +from backend.data.auth.base import APIAuthorizationInfo +from backend.data.auth.oauth import ( + InvalidClientError, + InvalidTokenError, + OAuthAccessTokenInfo, + validate_access_token, +) + +api_key_header = APIKeyHeader(name="X-API-Key", auto_error=False) +bearer_auth = HTTPBearer(auto_error=False) + + +async def require_api_key(api_key: str | None = Security(api_key_header)) -> APIKeyInfo: + """Middleware for API key authentication only""" + if api_key is None: + raise HTTPException( + status_code=status.HTTP_401_UNAUTHORIZED, detail="Missing API key" + ) + + api_key_obj = await validate_api_key(api_key) + + if not api_key_obj: + raise HTTPException( + status_code=status.HTTP_401_UNAUTHORIZED, detail="Invalid API key" + ) + + return api_key_obj + + +async def require_access_token( + bearer: HTTPAuthorizationCredentials | None = Security(bearer_auth), +) -> OAuthAccessTokenInfo: + """Middleware for OAuth access token authentication only""" + if bearer is None: + raise HTTPException( + status_code=status.HTTP_401_UNAUTHORIZED, + detail="Missing Authorization header", + ) + + try: + token_info, _ = await validate_access_token(bearer.credentials) + except (InvalidClientError, InvalidTokenError) as e: + raise HTTPException(status_code=status.HTTP_401_UNAUTHORIZED, detail=str(e)) + + return token_info + + +async def require_auth( + api_key: str | None = Security(api_key_header), + bearer: HTTPAuthorizationCredentials | None = Security(bearer_auth), +) -> APIAuthorizationInfo: + """ + Unified authentication middleware supporting both API keys and OAuth tokens. + + Supports two authentication methods, which are checked in order: + 1. X-API-Key header (existing API key authentication) + 2. Authorization: Bearer header (OAuth access token) + + Returns: + APIAuthorizationInfo: base class of both APIKeyInfo and OAuthAccessTokenInfo. + """ + # Try API key first + if api_key is not None: + api_key_info = await validate_api_key(api_key) + if api_key_info: + return api_key_info + raise HTTPException( + status_code=status.HTTP_401_UNAUTHORIZED, detail="Invalid API key" + ) + + # Try OAuth bearer token + if bearer is not None: + try: + token_info, _ = await validate_access_token(bearer.credentials) + return token_info + except (InvalidClientError, InvalidTokenError) as e: + raise HTTPException(status_code=status.HTTP_401_UNAUTHORIZED, detail=str(e)) + + # No credentials provided + raise HTTPException( + status_code=status.HTTP_401_UNAUTHORIZED, + detail="Missing authentication. 
Provide API key or access token.", + ) + + +def require_permission(permission: APIKeyPermission): + """ + Dependency function for checking specific permissions + (works with API keys and OAuth tokens) + """ + + async def check_permission( + auth: APIAuthorizationInfo = Security(require_auth), + ) -> APIAuthorizationInfo: + if permission not in auth.scopes: + raise HTTPException( + status_code=status.HTTP_403_FORBIDDEN, + detail=f"Missing required permission: {permission.value}", + ) + return auth + + return check_permission diff --git a/autogpt_platform/backend/backend/server/external/routes/__init__.py b/autogpt_platform/backend/backend/api/external/v1/__init__.py similarity index 100% rename from autogpt_platform/backend/backend/server/external/routes/__init__.py rename to autogpt_platform/backend/backend/api/external/v1/__init__.py diff --git a/autogpt_platform/backend/backend/api/external/v1/integrations.py b/autogpt_platform/backend/backend/api/external/v1/integrations.py new file mode 100644 index 0000000000..a3df481a67 --- /dev/null +++ b/autogpt_platform/backend/backend/api/external/v1/integrations.py @@ -0,0 +1,655 @@ +""" +External API endpoints for integrations and credentials. + +This module provides endpoints for external applications (like Autopilot) to: +- Initiate OAuth flows with custom callback URLs +- Complete OAuth flows by exchanging authorization codes +- Create API key, user/password, and host-scoped credentials +- List and manage user credentials +""" + +import logging +from typing import TYPE_CHECKING, Annotated, Any, Literal, Optional, Union +from urllib.parse import urlparse + +from fastapi import APIRouter, Body, HTTPException, Path, Security, status +from prisma.enums import APIKeyPermission +from pydantic import BaseModel, Field, SecretStr + +from backend.api.external.middleware import require_permission +from backend.api.features.integrations.models import get_all_provider_names +from backend.data.auth.base import APIAuthorizationInfo +from backend.data.model import ( + APIKeyCredentials, + Credentials, + CredentialsType, + HostScopedCredentials, + OAuth2Credentials, + UserPasswordCredentials, +) +from backend.integrations.creds_manager import IntegrationCredentialsManager +from backend.integrations.oauth import CREDENTIALS_BY_PROVIDER, HANDLERS_BY_NAME +from backend.integrations.providers import ProviderName +from backend.util.settings import Settings + +if TYPE_CHECKING: + from backend.integrations.oauth import BaseOAuthHandler + +logger = logging.getLogger(__name__) +settings = Settings() +creds_manager = IntegrationCredentialsManager() + +integrations_router = APIRouter(prefix="/integrations", tags=["integrations"]) + + +# ==================== Request/Response Models ==================== # + + +class OAuthInitiateRequest(BaseModel): + """Request model for initiating an OAuth flow.""" + + callback_url: str = Field( + ..., description="The external app's callback URL for OAuth redirect" + ) + scopes: list[str] = Field( + default_factory=list, description="OAuth scopes to request" + ) + state_metadata: dict[str, Any] = Field( + default_factory=dict, + description="Arbitrary metadata to echo back on completion", + ) + + +class OAuthInitiateResponse(BaseModel): + """Response model for OAuth initiation.""" + + login_url: str = Field(..., description="URL to redirect user for OAuth consent") + state_token: str = Field(..., description="State token for CSRF protection") + expires_at: int = Field( + ..., description="Unix timestamp when the state token expires" + 
) + + +class OAuthCompleteRequest(BaseModel): + """Request model for completing an OAuth flow.""" + + code: str = Field(..., description="Authorization code from OAuth provider") + state_token: str = Field(..., description="State token from initiate request") + + +class OAuthCompleteResponse(BaseModel): + """Response model for OAuth completion.""" + + credentials_id: str = Field(..., description="ID of the stored credentials") + provider: str = Field(..., description="Provider name") + type: str = Field(..., description="Credential type (oauth2)") + title: Optional[str] = Field(None, description="Credential title") + scopes: list[str] = Field(default_factory=list, description="Granted scopes") + username: Optional[str] = Field(None, description="Username from provider") + state_metadata: dict[str, Any] = Field( + default_factory=dict, description="Echoed metadata from initiate request" + ) + + +class CredentialSummary(BaseModel): + """Summary of a credential without sensitive data.""" + + id: str + provider: str + type: CredentialsType + title: Optional[str] = None + scopes: Optional[list[str]] = None + username: Optional[str] = None + host: Optional[str] = None + + +class ProviderInfo(BaseModel): + """Information about an integration provider.""" + + name: str + supports_oauth: bool = False + supports_api_key: bool = False + supports_user_password: bool = False + supports_host_scoped: bool = False + default_scopes: list[str] = Field(default_factory=list) + + +# ==================== Credential Creation Models ==================== # + + +class CreateAPIKeyCredentialRequest(BaseModel): + """Request model for creating API key credentials.""" + + type: Literal["api_key"] = "api_key" + api_key: str = Field(..., description="The API key") + title: str = Field(..., description="A name for this credential") + expires_at: Optional[int] = Field( + None, description="Unix timestamp when the API key expires" + ) + + +class CreateUserPasswordCredentialRequest(BaseModel): + """Request model for creating username/password credentials.""" + + type: Literal["user_password"] = "user_password" + username: str = Field(..., description="Username") + password: str = Field(..., description="Password") + title: str = Field(..., description="A name for this credential") + + +class CreateHostScopedCredentialRequest(BaseModel): + """Request model for creating host-scoped credentials.""" + + type: Literal["host_scoped"] = "host_scoped" + host: str = Field(..., description="Host/domain pattern to match") + headers: dict[str, str] = Field(..., description="Headers to include in requests") + title: str = Field(..., description="A name for this credential") + + +# Union type for credential creation +CreateCredentialRequest = Annotated[ + CreateAPIKeyCredentialRequest + | CreateUserPasswordCredentialRequest + | CreateHostScopedCredentialRequest, + Field(discriminator="type"), +] + + +class CreateCredentialResponse(BaseModel): + """Response model for credential creation.""" + + id: str + provider: str + type: CredentialsType + title: Optional[str] = None + + +# ==================== Helper Functions ==================== # + + +def validate_callback_url(callback_url: str) -> bool: + """Validate that the callback URL is from an allowed origin.""" + allowed_origins = settings.config.external_oauth_callback_origins + + try: + parsed = urlparse(callback_url) + callback_origin = f"{parsed.scheme}://{parsed.netloc}" + + for allowed in allowed_origins: + # Simple origin matching + if callback_origin == allowed: + return True + + # 
Allow localhost with any port in development (proper hostname check) + if parsed.hostname == "localhost": + for allowed in allowed_origins: + allowed_parsed = urlparse(allowed) + if allowed_parsed.hostname == "localhost": + return True + + return False + except Exception: + return False + + +def _get_oauth_handler_for_external( + provider_name: str, redirect_uri: str +) -> "BaseOAuthHandler": + """Get an OAuth handler configured with an external redirect URI.""" + # Ensure blocks are loaded so SDK providers are available + try: + from backend.blocks import load_all_blocks + + load_all_blocks() + except Exception as e: + logger.warning(f"Failed to load blocks: {e}") + + if provider_name not in HANDLERS_BY_NAME: + raise HTTPException( + status_code=status.HTTP_404_NOT_FOUND, + detail=f"Provider '{provider_name}' does not support OAuth", + ) + + # Check if this provider has custom OAuth credentials + oauth_credentials = CREDENTIALS_BY_PROVIDER.get(provider_name) + + if oauth_credentials and not oauth_credentials.use_secrets: + import os + + client_id = ( + os.getenv(oauth_credentials.client_id_env_var) + if oauth_credentials.client_id_env_var + else None + ) + client_secret = ( + os.getenv(oauth_credentials.client_secret_env_var) + if oauth_credentials.client_secret_env_var + else None + ) + else: + client_id = getattr(settings.secrets, f"{provider_name}_client_id", None) + client_secret = getattr( + settings.secrets, f"{provider_name}_client_secret", None + ) + + if not (client_id and client_secret): + logger.error(f"Attempt to use unconfigured {provider_name} OAuth integration") + raise HTTPException( + status_code=status.HTTP_501_NOT_IMPLEMENTED, + detail={ + "message": f"Integration with provider '{provider_name}' is not configured.", + "hint": "Set client ID and secret in the application's deployment environment", + }, + ) + + handler_class = HANDLERS_BY_NAME[provider_name] + return handler_class( + client_id=client_id, + client_secret=client_secret, + redirect_uri=redirect_uri, + ) + + +# ==================== Endpoints ==================== # + + +@integrations_router.get("/providers", response_model=list[ProviderInfo]) +async def list_providers( + auth: APIAuthorizationInfo = Security( + require_permission(APIKeyPermission.READ_INTEGRATIONS) + ), +) -> list[ProviderInfo]: + """ + List all available integration providers. + + Returns a list of all providers with their supported credential types. + Most providers support API key credentials, and some also support OAuth. 
+ """ + # Ensure blocks are loaded + try: + from backend.blocks import load_all_blocks + + load_all_blocks() + except Exception as e: + logger.warning(f"Failed to load blocks: {e}") + + from backend.sdk.registry import AutoRegistry + + providers = [] + for name in get_all_provider_names(): + supports_oauth = name in HANDLERS_BY_NAME + handler_class = HANDLERS_BY_NAME.get(name) + default_scopes = ( + getattr(handler_class, "DEFAULT_SCOPES", []) if handler_class else [] + ) + + # Check if provider has specific auth types from SDK registration + sdk_provider = AutoRegistry.get_provider(name) + if sdk_provider and sdk_provider.supported_auth_types: + supports_api_key = "api_key" in sdk_provider.supported_auth_types + supports_user_password = ( + "user_password" in sdk_provider.supported_auth_types + ) + supports_host_scoped = "host_scoped" in sdk_provider.supported_auth_types + else: + # Fallback for legacy providers + supports_api_key = True # All providers can accept API keys + supports_user_password = name in ("smtp",) + supports_host_scoped = name == "http" + + providers.append( + ProviderInfo( + name=name, + supports_oauth=supports_oauth, + supports_api_key=supports_api_key, + supports_user_password=supports_user_password, + supports_host_scoped=supports_host_scoped, + default_scopes=default_scopes, + ) + ) + + return providers + + +@integrations_router.post( + "/{provider}/oauth/initiate", + response_model=OAuthInitiateResponse, + summary="Initiate OAuth flow", +) +async def initiate_oauth( + provider: Annotated[str, Path(title="The OAuth provider")], + request: OAuthInitiateRequest, + auth: APIAuthorizationInfo = Security( + require_permission(APIKeyPermission.MANAGE_INTEGRATIONS) + ), +) -> OAuthInitiateResponse: + """ + Initiate an OAuth flow for an external application. + + This endpoint allows external apps to start an OAuth flow with a custom + callback URL. The callback URL must be from an allowed origin configured + in the platform settings. + + Returns a login URL to redirect the user to, along with a state token + for CSRF protection. + """ + # Validate callback URL + if not validate_callback_url(request.callback_url): + raise HTTPException( + status_code=status.HTTP_400_BAD_REQUEST, + detail=( + f"Callback URL origin is not allowed. 
" + f"Allowed origins: {settings.config.external_oauth_callback_origins}", + ), + ) + + # Validate provider + try: + provider_name = ProviderName(provider) + except ValueError: + # Check if it's a dynamically registered provider + if provider not in HANDLERS_BY_NAME: + raise HTTPException( + status_code=status.HTTP_404_NOT_FOUND, + detail=f"Provider '{provider}' not found", + ) + provider_name = provider + + # Get OAuth handler with external callback URL + handler = _get_oauth_handler_for_external( + provider if isinstance(provider_name, str) else provider_name.value, + request.callback_url, + ) + + # Store state token with external flow metadata + # Note: initiated_by_api_key_id is only available for API key auth, not OAuth + api_key_id = getattr(auth, "id", None) if auth.type == "api_key" else None + state_token, code_challenge = await creds_manager.store.store_state_token( + user_id=auth.user_id, + provider=provider if isinstance(provider_name, str) else provider_name.value, + scopes=request.scopes, + callback_url=request.callback_url, + state_metadata=request.state_metadata, + initiated_by_api_key_id=api_key_id, + ) + + # Build login URL + login_url = handler.get_login_url( + request.scopes, state_token, code_challenge=code_challenge + ) + + # Calculate expiration (10 minutes from now) + from datetime import datetime, timedelta, timezone + + expires_at = int((datetime.now(timezone.utc) + timedelta(minutes=10)).timestamp()) + + return OAuthInitiateResponse( + login_url=login_url, + state_token=state_token, + expires_at=expires_at, + ) + + +@integrations_router.post( + "/{provider}/oauth/complete", + response_model=OAuthCompleteResponse, + summary="Complete OAuth flow", +) +async def complete_oauth( + provider: Annotated[str, Path(title="The OAuth provider")], + request: OAuthCompleteRequest, + auth: APIAuthorizationInfo = Security( + require_permission(APIKeyPermission.MANAGE_INTEGRATIONS) + ), +) -> OAuthCompleteResponse: + """ + Complete an OAuth flow by exchanging the authorization code for tokens. + + This endpoint should be called after the user has authorized the application + and been redirected back to the external app's callback URL with an + authorization code. 
+ """ + # Verify state token + valid_state = await creds_manager.store.verify_state_token( + auth.user_id, request.state_token, provider + ) + + if not valid_state: + logger.warning(f"Invalid or expired state token for provider {provider}") + raise HTTPException( + status_code=status.HTTP_400_BAD_REQUEST, + detail="Invalid or expired state token", + ) + + # Verify this is an external flow (callback_url must be set) + if not valid_state.callback_url: + raise HTTPException( + status_code=status.HTTP_400_BAD_REQUEST, + detail="State token was not created for external OAuth flow", + ) + + # Get OAuth handler with the original callback URL + handler = _get_oauth_handler_for_external(provider, valid_state.callback_url) + + try: + scopes = valid_state.scopes + scopes = handler.handle_default_scopes(scopes) + + credentials = await handler.exchange_code_for_tokens( + request.code, scopes, valid_state.code_verifier + ) + + # Handle Linear's space-separated scopes + if len(credentials.scopes) == 1 and " " in credentials.scopes[0]: + credentials.scopes = credentials.scopes[0].split(" ") + + # Check scope mismatch + if not set(scopes).issubset(set(credentials.scopes)): + logger.warning( + f"Granted scopes {credentials.scopes} for provider {provider} " + f"do not include all requested scopes {scopes}" + ) + + except Exception as e: + logger.error(f"OAuth2 Code->Token exchange failed for provider {provider}: {e}") + raise HTTPException( + status_code=status.HTTP_400_BAD_REQUEST, + detail=f"OAuth2 callback failed to exchange code for tokens: {str(e)}", + ) + + # Store credentials + await creds_manager.create(auth.user_id, credentials) + + logger.info(f"Successfully completed external OAuth for provider {provider}") + + return OAuthCompleteResponse( + credentials_id=credentials.id, + provider=credentials.provider, + type=credentials.type, + title=credentials.title, + scopes=credentials.scopes, + username=credentials.username, + state_metadata=valid_state.state_metadata, + ) + + +@integrations_router.get("/credentials", response_model=list[CredentialSummary]) +async def list_credentials( + auth: APIAuthorizationInfo = Security( + require_permission(APIKeyPermission.READ_INTEGRATIONS) + ), +) -> list[CredentialSummary]: + """ + List all credentials for the authenticated user. + + Returns metadata about each credential without exposing sensitive tokens. + """ + credentials = await creds_manager.store.get_all_creds(auth.user_id) + return [ + CredentialSummary( + id=cred.id, + provider=cred.provider, + type=cred.type, + title=cred.title, + scopes=cred.scopes if isinstance(cred, OAuth2Credentials) else None, + username=cred.username if isinstance(cred, OAuth2Credentials) else None, + host=cred.host if isinstance(cred, HostScopedCredentials) else None, + ) + for cred in credentials + ] + + +@integrations_router.get( + "/{provider}/credentials", response_model=list[CredentialSummary] +) +async def list_credentials_by_provider( + provider: Annotated[str, Path(title="The provider to list credentials for")], + auth: APIAuthorizationInfo = Security( + require_permission(APIKeyPermission.READ_INTEGRATIONS) + ), +) -> list[CredentialSummary]: + """ + List credentials for a specific provider. 
+ """ + credentials = await creds_manager.store.get_creds_by_provider( + auth.user_id, provider + ) + return [ + CredentialSummary( + id=cred.id, + provider=cred.provider, + type=cred.type, + title=cred.title, + scopes=cred.scopes if isinstance(cred, OAuth2Credentials) else None, + username=cred.username if isinstance(cred, OAuth2Credentials) else None, + host=cred.host if isinstance(cred, HostScopedCredentials) else None, + ) + for cred in credentials + ] + + +@integrations_router.post( + "/{provider}/credentials", + response_model=CreateCredentialResponse, + status_code=status.HTTP_201_CREATED, + summary="Create credentials", +) +async def create_credential( + provider: Annotated[str, Path(title="The provider to create credentials for")], + request: Union[ + CreateAPIKeyCredentialRequest, + CreateUserPasswordCredentialRequest, + CreateHostScopedCredentialRequest, + ] = Body(..., discriminator="type"), + auth: APIAuthorizationInfo = Security( + require_permission(APIKeyPermission.MANAGE_INTEGRATIONS) + ), +) -> CreateCredentialResponse: + """ + Create non-OAuth credentials for a provider. + + Supports creating: + - API key credentials (type: "api_key") + - Username/password credentials (type: "user_password") + - Host-scoped credentials (type: "host_scoped") + + For OAuth credentials, use the OAuth initiate/complete flow instead. + """ + # Validate provider exists + all_providers = get_all_provider_names() + if provider not in all_providers: + raise HTTPException( + status_code=status.HTTP_404_NOT_FOUND, + detail=f"Provider '{provider}' not found", + ) + + # Create the appropriate credential type + credentials: Credentials + if request.type == "api_key": + credentials = APIKeyCredentials( + provider=provider, + api_key=SecretStr(request.api_key), + title=request.title, + expires_at=request.expires_at, + ) + elif request.type == "user_password": + credentials = UserPasswordCredentials( + provider=provider, + username=SecretStr(request.username), + password=SecretStr(request.password), + title=request.title, + ) + elif request.type == "host_scoped": + # Convert string headers to SecretStr + secret_headers = {k: SecretStr(v) for k, v in request.headers.items()} + credentials = HostScopedCredentials( + provider=provider, + host=request.host, + headers=secret_headers, + title=request.title, + ) + else: + raise HTTPException( + status_code=status.HTTP_400_BAD_REQUEST, + detail=f"Unsupported credential type: {request.type}", + ) + + # Store credentials + try: + await creds_manager.create(auth.user_id, credentials) + except Exception as e: + logger.error(f"Failed to store credentials: {e}") + raise HTTPException( + status_code=status.HTTP_500_INTERNAL_SERVER_ERROR, + detail=f"Failed to store credentials: {str(e)}", + ) + + logger.info(f"Created {request.type} credentials for provider {provider}") + + return CreateCredentialResponse( + id=credentials.id, + provider=provider, + type=credentials.type, + title=credentials.title, + ) + + +class DeleteCredentialResponse(BaseModel): + """Response model for deleting a credential.""" + + deleted: bool = Field(..., description="Whether the credential was deleted") + credentials_id: str = Field(..., description="ID of the deleted credential") + + +@integrations_router.delete( + "/{provider}/credentials/{cred_id}", + response_model=DeleteCredentialResponse, +) +async def delete_credential( + provider: Annotated[str, Path(title="The provider")], + cred_id: Annotated[str, Path(title="The credential ID to delete")], + auth: APIAuthorizationInfo = Security( + 
require_permission(APIKeyPermission.DELETE_INTEGRATIONS) + ), +) -> DeleteCredentialResponse: + """ + Delete a credential. + + Note: This does not revoke the tokens with the provider. For full cleanup, + use the main API's delete endpoint which handles webhook cleanup and + token revocation. + """ + creds = await creds_manager.store.get_creds_by_id(auth.user_id, cred_id) + if not creds: + raise HTTPException( + status_code=status.HTTP_404_NOT_FOUND, detail="Credentials not found" + ) + if creds.provider != provider: + raise HTTPException( + status_code=status.HTTP_404_NOT_FOUND, + detail="Credentials do not match the specified provider", + ) + + await creds_manager.delete(auth.user_id, cred_id) + + return DeleteCredentialResponse(deleted=True, credentials_id=cred_id) diff --git a/autogpt_platform/backend/backend/api/external/v1/routes.py b/autogpt_platform/backend/backend/api/external/v1/routes.py new file mode 100644 index 0000000000..58e15dc6a3 --- /dev/null +++ b/autogpt_platform/backend/backend/api/external/v1/routes.py @@ -0,0 +1,328 @@ +import logging +import urllib.parse +from collections import defaultdict +from typing import Annotated, Any, Literal, Optional, Sequence + +from fastapi import APIRouter, Body, HTTPException, Security +from prisma.enums import AgentExecutionStatus, APIKeyPermission +from pydantic import BaseModel, Field +from typing_extensions import TypedDict + +import backend.api.features.store.cache as store_cache +import backend.api.features.store.model as store_model +import backend.data.block +from backend.api.external.middleware import require_permission +from backend.data import execution as execution_db +from backend.data import graph as graph_db +from backend.data import user as user_db +from backend.data.auth.base import APIAuthorizationInfo +from backend.data.block import BlockInput, CompletedBlockOutput +from backend.executor.utils import add_graph_execution +from backend.util.settings import Settings + +from .integrations import integrations_router +from .tools import tools_router + +settings = Settings() +logger = logging.getLogger(__name__) + +v1_router = APIRouter() + +v1_router.include_router(integrations_router) +v1_router.include_router(tools_router) + + +class UserInfoResponse(BaseModel): + id: str + name: Optional[str] + email: str + timezone: str = Field( + description="The user's last known timezone (e.g. 
'Europe/Amsterdam'), " + "or 'not-set' if not set" + ) + + +@v1_router.get( + path="/me", + tags=["user", "meta"], +) +async def get_user_info( + auth: APIAuthorizationInfo = Security( + require_permission(APIKeyPermission.IDENTITY) + ), +) -> UserInfoResponse: + user = await user_db.get_user_by_id(auth.user_id) + + return UserInfoResponse( + id=user.id, + name=user.name, + email=user.email, + timezone=user.timezone, + ) + + +@v1_router.get( + path="/blocks", + tags=["blocks"], + dependencies=[Security(require_permission(APIKeyPermission.READ_BLOCK))], +) +async def get_graph_blocks() -> Sequence[dict[Any, Any]]: + blocks = [block() for block in backend.data.block.get_blocks().values()] + return [b.to_dict() for b in blocks if not b.disabled] + + +@v1_router.post( + path="/blocks/{block_id}/execute", + tags=["blocks"], + dependencies=[Security(require_permission(APIKeyPermission.EXECUTE_BLOCK))], +) +async def execute_graph_block( + block_id: str, + data: BlockInput, + auth: APIAuthorizationInfo = Security( + require_permission(APIKeyPermission.EXECUTE_BLOCK) + ), +) -> CompletedBlockOutput: + obj = backend.data.block.get_block(block_id) + if not obj: + raise HTTPException(status_code=404, detail=f"Block #{block_id} not found.") + + output = defaultdict(list) + async for name, data in obj.execute(data): + output[name].append(data) + return output + + +@v1_router.post( + path="/graphs/{graph_id}/execute/{graph_version}", + tags=["graphs"], +) +async def execute_graph( + graph_id: str, + graph_version: int, + node_input: Annotated[dict[str, Any], Body(..., embed=True, default_factory=dict)], + auth: APIAuthorizationInfo = Security( + require_permission(APIKeyPermission.EXECUTE_GRAPH) + ), +) -> dict[str, Any]: + try: + graph_exec = await add_graph_execution( + graph_id=graph_id, + user_id=auth.user_id, + inputs=node_input, + graph_version=graph_version, + ) + return {"id": graph_exec.id} + except Exception as e: + msg = str(e).encode().decode("unicode_escape") + raise HTTPException(status_code=400, detail=msg) + + +class ExecutionNode(TypedDict): + node_id: str + input: Any + output: dict[str, Any] + + +class GraphExecutionResult(TypedDict): + execution_id: str + status: str + nodes: list[ExecutionNode] + output: Optional[list[dict[str, str]]] + + +@v1_router.get( + path="/graphs/{graph_id}/executions/{graph_exec_id}/results", + tags=["graphs"], +) +async def get_graph_execution_results( + graph_id: str, + graph_exec_id: str, + auth: APIAuthorizationInfo = Security( + require_permission(APIKeyPermission.READ_GRAPH) + ), +) -> GraphExecutionResult: + graph_exec = await execution_db.get_graph_execution( + user_id=auth.user_id, + execution_id=graph_exec_id, + include_node_executions=True, + ) + if not graph_exec: + raise HTTPException( + status_code=404, detail=f"Graph execution #{graph_exec_id} not found." 
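A hedged sketch of how an API consumer might chain the execute and results endpoints above; the base URL and API-key header are assumptions, while the paths, the embedded `node_input` body field, and the terminal statuses mirror the code in this diff.

```python
# Hypothetical polling client for graph execution via the external API.
import time

import httpx

BASE_URL = "https://api.example.com/external/v1"  # assumed mount point
HEADERS = {"X-API-Key": "agpt_..."}  # assumed API-key header name


def run_graph(graph_id: str, graph_version: int, inputs: dict) -> dict:
    with httpx.Client(base_url=BASE_URL, headers=HEADERS, timeout=30) as client:
        # Body(..., embed=True) nests the inputs under a "node_input" key.
        exec_id = client.post(
            f"/graphs/{graph_id}/execute/{graph_version}",
            json={"node_input": inputs},
        ).json()["id"]

        # Poll until the execution is finished; "output" is only populated
        # when the status is COMPLETED.
        while True:
            result = client.get(
                f"/graphs/{graph_id}/executions/{exec_id}/results"
            ).json()
            if result["status"] in ("COMPLETED", "FAILED", "TERMINATED"):
                return result
            time.sleep(2)
```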
+ ) + + if not await graph_db.get_graph( + graph_id=graph_exec.graph_id, + version=graph_exec.graph_version, + user_id=auth.user_id, + ): + raise HTTPException(status_code=404, detail=f"Graph #{graph_id} not found.") + + return GraphExecutionResult( + execution_id=graph_exec_id, + status=graph_exec.status.value, + nodes=[ + ExecutionNode( + node_id=node_exec.node_id, + input=node_exec.input_data.get("value", node_exec.input_data), + output={k: v for k, v in node_exec.output_data.items()}, + ) + for node_exec in graph_exec.node_executions + ], + output=( + [ + {name: value} + for name, values in graph_exec.outputs.items() + for value in values + ] + if graph_exec.status == AgentExecutionStatus.COMPLETED + else None + ), + ) + + +############################################## +############### Store Endpoints ############## +############################################## + + +@v1_router.get( + path="/store/agents", + tags=["store"], + dependencies=[Security(require_permission(APIKeyPermission.READ_STORE))], + response_model=store_model.StoreAgentsResponse, +) +async def get_store_agents( + featured: bool = False, + creator: str | None = None, + sorted_by: Literal["rating", "runs", "name", "updated_at"] | None = None, + search_query: str | None = None, + category: str | None = None, + page: int = 1, + page_size: int = 20, +) -> store_model.StoreAgentsResponse: + """ + Get a paginated list of agents from the store with optional filtering and sorting. + + Args: + featured: Filter to only show featured agents + creator: Filter agents by creator username + sorted_by: Sort agents by "runs", "rating", "name", or "updated_at" + search_query: Search agents by name, subheading and description + category: Filter agents by category + page: Page number for pagination (default 1) + page_size: Number of agents per page (default 20) + + Returns: + StoreAgentsResponse: Paginated list of agents matching the filters + """ + if page < 1: + raise HTTPException(status_code=422, detail="Page must be greater than 0") + + if page_size < 1: + raise HTTPException(status_code=422, detail="Page size must be greater than 0") + + agents = await store_cache._get_cached_store_agents( + featured=featured, + creator=creator, + sorted_by=sorted_by, + search_query=search_query, + category=category, + page=page, + page_size=page_size, + ) + return agents + + +@v1_router.get( + path="/store/agents/{username}/{agent_name}", + tags=["store"], + dependencies=[Security(require_permission(APIKeyPermission.READ_STORE))], + response_model=store_model.StoreAgentDetails, +) +async def get_store_agent( + username: str, + agent_name: str, +) -> store_model.StoreAgentDetails: + """ + Get details of a specific store agent by username and agent name. 
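As a usage illustration for the store listing endpoint above (the base URL and API-key header are assumptions; the query parameters follow the `get_store_agents` signature):

```python
# Hypothetical read-only query against the marketplace listing endpoint.
import httpx

BASE_URL = "https://api.example.com/external/v1"  # assumed mount point
HEADERS = {"X-API-Key": "agpt_..."}  # assumed API-key header name

params = {
    "search_query": "email assistant",
    "sorted_by": "rating",  # one of: rating, runs, name, updated_at
    "page": 1,
    "page_size": 20,
}
resp = httpx.get(f"{BASE_URL}/store/agents", params=params, headers=HEADERS)
resp.raise_for_status()
data = resp.json()
print(f'{data["pagination"]["total_items"]} agents match the filters')
```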
+ + Args: + username: Creator's username + agent_name: Name/slug of the agent + + Returns: + StoreAgentDetails: Detailed information about the agent + """ + username = urllib.parse.unquote(username).lower() + agent_name = urllib.parse.unquote(agent_name).lower() + agent = await store_cache._get_cached_agent_details( + username=username, agent_name=agent_name + ) + return agent + + +@v1_router.get( + path="/store/creators", + tags=["store"], + dependencies=[Security(require_permission(APIKeyPermission.READ_STORE))], + response_model=store_model.CreatorsResponse, +) +async def get_store_creators( + featured: bool = False, + search_query: str | None = None, + sorted_by: Literal["agent_rating", "agent_runs", "num_agents"] | None = None, + page: int = 1, + page_size: int = 20, +) -> store_model.CreatorsResponse: + """ + Get a paginated list of store creators with optional filtering and sorting. + + Args: + featured: Filter to only show featured creators + search_query: Search creators by profile description + sorted_by: Sort by "agent_rating", "agent_runs", or "num_agents" + page: Page number for pagination (default 1) + page_size: Number of creators per page (default 20) + + Returns: + CreatorsResponse: Paginated list of creators matching the filters + """ + if page < 1: + raise HTTPException(status_code=422, detail="Page must be greater than 0") + + if page_size < 1: + raise HTTPException(status_code=422, detail="Page size must be greater than 0") + + creators = await store_cache._get_cached_store_creators( + featured=featured, + search_query=search_query, + sorted_by=sorted_by, + page=page, + page_size=page_size, + ) + return creators + + +@v1_router.get( + path="/store/creators/{username}", + tags=["store"], + dependencies=[Security(require_permission(APIKeyPermission.READ_STORE))], + response_model=store_model.CreatorDetails, +) +async def get_store_creator( + username: str, +) -> store_model.CreatorDetails: + """ + Get details of a specific store creator by username. + + Args: + username: Creator's username + + Returns: + CreatorDetails: Detailed information about the creator + """ + username = urllib.parse.unquote(username).lower() + creator = await store_cache._get_cached_creator_details(username=username) + return creator diff --git a/autogpt_platform/backend/backend/api/external/v1/tools.py b/autogpt_platform/backend/backend/api/external/v1/tools.py new file mode 100644 index 0000000000..9e362fb32c --- /dev/null +++ b/autogpt_platform/backend/backend/api/external/v1/tools.py @@ -0,0 +1,152 @@ +"""External API routes for chat tools - stateless HTTP endpoints. + +Note: These endpoints use ephemeral sessions that are not persisted to Redis. +As a result, session-based rate limiting (max_agent_runs, max_agent_schedules) +is not enforced for external API calls. Each request creates a fresh session +with zeroed counters. Rate limiting for external API consumers should be +handled separately (e.g., via API key quotas). 
+""" + +import logging +from typing import Any + +from fastapi import APIRouter, Security +from prisma.enums import APIKeyPermission +from pydantic import BaseModel, Field + +from backend.api.external.middleware import require_permission +from backend.api.features.chat.model import ChatSession +from backend.api.features.chat.tools import find_agent_tool, run_agent_tool +from backend.api.features.chat.tools.models import ToolResponseBase +from backend.data.auth.base import APIAuthorizationInfo + +logger = logging.getLogger(__name__) + +tools_router = APIRouter(prefix="/tools", tags=["tools"]) + +# Note: We use Security() as a function parameter dependency (auth: APIAuthorizationInfo = Security(...)) +# rather than in the decorator's dependencies= list. This avoids duplicate permission checks +# while still enforcing auth AND giving us access to auth for extracting user_id. + + +# Request models +class FindAgentRequest(BaseModel): + query: str = Field(..., description="Search query for finding agents") + + +class RunAgentRequest(BaseModel): + """Request to run or schedule an agent. + + The tool automatically handles the setup flow: + - First call returns available inputs so user can decide what values to use + - Returns missing credentials if user needs to configure them + - Executes when inputs are provided OR use_defaults=true + - Schedules execution if schedule_name and cron are provided + """ + + username_agent_slug: str = Field( + ..., + description="The marketplace agent slug (e.g., 'username/agent-name')", + ) + inputs: dict[str, Any] = Field( + default_factory=dict, + description="Dictionary of input values for the agent", + ) + use_defaults: bool = Field( + default=False, + description="Set to true to run with default values (user must confirm)", + ) + schedule_name: str | None = Field( + None, + description="Name for scheduled execution (triggers scheduling mode)", + ) + cron: str | None = Field( + None, + description="Cron expression (5 fields: minute hour day month weekday)", + ) + timezone: str = Field( + default="UTC", + description="IANA timezone (e.g., 'America/New_York', 'UTC')", + ) + + +def _create_ephemeral_session(user_id: str | None) -> ChatSession: + """Create an ephemeral session for stateless API requests.""" + return ChatSession.new(user_id) + + +@tools_router.post( + path="/find-agent", +) +async def find_agent( + request: FindAgentRequest, + auth: APIAuthorizationInfo = Security( + require_permission(APIKeyPermission.USE_TOOLS) + ), +) -> dict[str, Any]: + """ + Search for agents in the marketplace based on capabilities and user needs. + + Args: + request: Search query for finding agents + + Returns: + List of matching agents or no results response + """ + session = _create_ephemeral_session(auth.user_id) + result = await find_agent_tool._execute( + user_id=auth.user_id, + session=session, + query=request.query, + ) + return _response_to_dict(result) + + +@tools_router.post( + path="/run-agent", +) +async def run_agent( + request: RunAgentRequest, + auth: APIAuthorizationInfo = Security( + require_permission(APIKeyPermission.USE_TOOLS) + ), +) -> dict[str, Any]: + """ + Run or schedule an agent from the marketplace. 
+ + The endpoint automatically handles the setup flow: + - Returns missing inputs if required fields are not provided + - Returns missing credentials if user needs to configure them + - Executes immediately if all requirements are met + - Schedules execution if schedule_name and cron are provided + + For scheduled execution: + - Cron format: "minute hour day month weekday" + - Examples: "0 9 * * 1-5" (9am weekdays), "0 0 * * *" (daily at midnight) + - Timezone: Use IANA timezone names like "America/New_York" + + Args: + request: Agent slug, inputs, and optional schedule config + + Returns: + - setup_requirements: If inputs or credentials are missing + - execution_started: If agent was run or scheduled successfully + - error: If something went wrong + """ + session = _create_ephemeral_session(auth.user_id) + result = await run_agent_tool._execute( + user_id=auth.user_id, + session=session, + username_agent_slug=request.username_agent_slug, + inputs=request.inputs, + use_defaults=request.use_defaults, + schedule_name=request.schedule_name or "", + cron=request.cron or "", + timezone=request.timezone, + ) + return _response_to_dict(result) + + +def _response_to_dict(result: ToolResponseBase) -> dict[str, Any]: + """Convert a tool response to a dictionary for JSON serialization.""" + return result.model_dump() diff --git a/autogpt_platform/backend/backend/server/routers/postmark/__init__.py b/autogpt_platform/backend/backend/api/features/__init__.py similarity index 100% rename from autogpt_platform/backend/backend/server/routers/postmark/__init__.py rename to autogpt_platform/backend/backend/api/features/__init__.py diff --git a/autogpt_platform/backend/backend/server/v2/library/__init__.py b/autogpt_platform/backend/backend/api/features/admin/__init__.py similarity index 100% rename from autogpt_platform/backend/backend/server/v2/library/__init__.py rename to autogpt_platform/backend/backend/api/features/admin/__init__.py diff --git a/autogpt_platform/backend/backend/server/v2/admin/credit_admin_routes.py b/autogpt_platform/backend/backend/api/features/admin/credit_admin_routes.py similarity index 96% rename from autogpt_platform/backend/backend/server/v2/admin/credit_admin_routes.py rename to autogpt_platform/backend/backend/api/features/admin/credit_admin_routes.py index e4ea2c7f32..8930172c7f 100644 --- a/autogpt_platform/backend/backend/server/v2/admin/credit_admin_routes.py +++ b/autogpt_platform/backend/backend/api/features/admin/credit_admin_routes.py @@ -6,9 +6,10 @@ from fastapi import APIRouter, Body, Security from prisma.enums import CreditTransactionType from backend.data.credit import admin_get_user_history, get_user_credit_model -from backend.server.v2.admin.model import AddUserCreditsResponse, UserHistoryResponse from backend.util.json import SafeJson +from .model import AddUserCreditsResponse, UserHistoryResponse + logger = logging.getLogger(__name__) diff --git a/autogpt_platform/backend/backend/server/v2/admin/credit_admin_routes_test.py b/autogpt_platform/backend/backend/api/features/admin/credit_admin_routes_test.py similarity index 90% rename from autogpt_platform/backend/backend/server/v2/admin/credit_admin_routes_test.py rename to autogpt_platform/backend/backend/api/features/admin/credit_admin_routes_test.py index 0248da352f..db2d3cb41a 100644 --- a/autogpt_platform/backend/backend/server/v2/admin/credit_admin_routes_test.py +++ b/autogpt_platform/backend/backend/api/features/admin/credit_admin_routes_test.py @@ -9,14 +9,15 @@ import pytest_mock from 
autogpt_libs.auth.jwt_utils import get_jwt_payload from pytest_snapshot.plugin import Snapshot -import backend.server.v2.admin.credit_admin_routes as credit_admin_routes -import backend.server.v2.admin.model as admin_model from backend.data.model import UserTransaction from backend.util.json import SafeJson from backend.util.models import Pagination +from .credit_admin_routes import router as credit_admin_router +from .model import UserHistoryResponse + app = fastapi.FastAPI() -app.include_router(credit_admin_routes.router) +app.include_router(credit_admin_router) client = fastapi.testclient.TestClient(app) @@ -30,7 +31,7 @@ def setup_app_admin_auth(mock_jwt_admin): def test_add_user_credits_success( - mocker: pytest_mock.MockFixture, + mocker: pytest_mock.MockerFixture, configured_snapshot: Snapshot, admin_user_id: str, target_user_id: str, @@ -42,7 +43,7 @@ def test_add_user_credits_success( return_value=(1500, "transaction-123-uuid") ) mocker.patch( - "backend.server.v2.admin.credit_admin_routes.get_user_credit_model", + "backend.api.features.admin.credit_admin_routes.get_user_credit_model", return_value=mock_credit_model, ) @@ -84,7 +85,7 @@ def test_add_user_credits_success( def test_add_user_credits_negative_amount( - mocker: pytest_mock.MockFixture, + mocker: pytest_mock.MockerFixture, snapshot: Snapshot, ) -> None: """Test credit deduction by admin (negative amount)""" @@ -94,7 +95,7 @@ def test_add_user_credits_negative_amount( return_value=(200, "transaction-456-uuid") ) mocker.patch( - "backend.server.v2.admin.credit_admin_routes.get_user_credit_model", + "backend.api.features.admin.credit_admin_routes.get_user_credit_model", return_value=mock_credit_model, ) @@ -119,12 +120,12 @@ def test_add_user_credits_negative_amount( def test_get_user_history_success( - mocker: pytest_mock.MockFixture, + mocker: pytest_mock.MockerFixture, snapshot: Snapshot, ) -> None: """Test successful retrieval of user credit history""" # Mock the admin_get_user_history function - mock_history_response = admin_model.UserHistoryResponse( + mock_history_response = UserHistoryResponse( history=[ UserTransaction( user_id="user-1", @@ -150,7 +151,7 @@ def test_get_user_history_success( ) mocker.patch( - "backend.server.v2.admin.credit_admin_routes.admin_get_user_history", + "backend.api.features.admin.credit_admin_routes.admin_get_user_history", return_value=mock_history_response, ) @@ -170,12 +171,12 @@ def test_get_user_history_success( def test_get_user_history_with_filters( - mocker: pytest_mock.MockFixture, + mocker: pytest_mock.MockerFixture, snapshot: Snapshot, ) -> None: """Test user credit history with search and filter parameters""" # Mock the admin_get_user_history function - mock_history_response = admin_model.UserHistoryResponse( + mock_history_response = UserHistoryResponse( history=[ UserTransaction( user_id="user-3", @@ -194,7 +195,7 @@ def test_get_user_history_with_filters( ) mock_get_history = mocker.patch( - "backend.server.v2.admin.credit_admin_routes.admin_get_user_history", + "backend.api.features.admin.credit_admin_routes.admin_get_user_history", return_value=mock_history_response, ) @@ -230,12 +231,12 @@ def test_get_user_history_with_filters( def test_get_user_history_empty_results( - mocker: pytest_mock.MockFixture, + mocker: pytest_mock.MockerFixture, snapshot: Snapshot, ) -> None: """Test user credit history with no results""" # Mock empty history response - mock_history_response = admin_model.UserHistoryResponse( + mock_history_response = UserHistoryResponse( history=[], 
pagination=Pagination( total_items=0, @@ -246,7 +247,7 @@ def test_get_user_history_empty_results( ) mocker.patch( - "backend.server.v2.admin.credit_admin_routes.admin_get_user_history", + "backend.api.features.admin.credit_admin_routes.admin_get_user_history", return_value=mock_history_response, ) diff --git a/autogpt_platform/backend/backend/api/features/admin/execution_analytics_routes.py b/autogpt_platform/backend/backend/api/features/admin/execution_analytics_routes.py new file mode 100644 index 0000000000..00f0bda884 --- /dev/null +++ b/autogpt_platform/backend/backend/api/features/admin/execution_analytics_routes.py @@ -0,0 +1,474 @@ +import asyncio +import logging +from datetime import datetime +from typing import Optional + +from autogpt_libs.auth import get_user_id, requires_admin_user +from fastapi import APIRouter, HTTPException, Security +from pydantic import BaseModel, Field + +from backend.blocks.llm import LlmModel +from backend.data.analytics import ( + AccuracyTrendsResponse, + get_accuracy_trends_and_alerts, +) +from backend.data.execution import ( + ExecutionStatus, + GraphExecutionMeta, + get_graph_executions, + update_graph_execution_stats, +) +from backend.data.model import GraphExecutionStats +from backend.executor.activity_status_generator import ( + DEFAULT_SYSTEM_PROMPT, + DEFAULT_USER_PROMPT, + generate_activity_status_for_execution, +) +from backend.executor.manager import get_db_async_client +from backend.util.settings import Settings + +logger = logging.getLogger(__name__) + + +class ExecutionAnalyticsRequest(BaseModel): + graph_id: str = Field(..., description="Graph ID to analyze") + graph_version: Optional[int] = Field(None, description="Optional graph version") + user_id: Optional[str] = Field(None, description="Optional user ID filter") + created_after: Optional[datetime] = Field( + None, description="Optional created date lower bound" + ) + model_name: str = Field("gpt-4o-mini", description="Model to use for generation") + batch_size: int = Field( + 10, description="Batch size for concurrent processing", le=25, ge=1 + ) + system_prompt: Optional[str] = Field( + None, description="Custom system prompt (default: built-in prompt)" + ) + user_prompt: Optional[str] = Field( + None, + description="Custom user prompt with {{GRAPH_NAME}} and {{EXECUTION_DATA}} placeholders (default: built-in prompt)", + ) + skip_existing: bool = Field( + True, + description="Whether to skip executions that already have activity status and correctness score", + ) + + +class ExecutionAnalyticsResult(BaseModel): + agent_id: str + version_id: int + user_id: str + exec_id: str + summary_text: Optional[str] + score: Optional[float] + status: str # "success", "failed", "skipped" + error_message: Optional[str] = None + + +class ExecutionAnalyticsResponse(BaseModel): + total_executions: int + processed_executions: int + successful_analytics: int + failed_analytics: int + skipped_executions: int + results: list[ExecutionAnalyticsResult] + + +class ModelInfo(BaseModel): + value: str + label: str + provider: str + + +class ExecutionAnalyticsConfig(BaseModel): + available_models: list[ModelInfo] + default_system_prompt: str + default_user_prompt: str + recommended_model: str + + +class AccuracyTrendsRequest(BaseModel): + graph_id: str = Field(..., description="Graph ID to analyze", min_length=1) + user_id: Optional[str] = Field(None, description="Optional user ID filter") + days_back: int = Field(30, description="Number of days to look back", ge=7, le=90) + drop_threshold: float = Field( + 
10.0, description="Alert threshold percentage", ge=1.0, le=50.0 + ) + include_historical: bool = Field( + False, description="Include historical data for charts" + ) + + +router = APIRouter( + prefix="/admin", + tags=["admin", "execution_analytics"], + dependencies=[Security(requires_admin_user)], +) + + +@router.get( + "/execution_analytics/config", + response_model=ExecutionAnalyticsConfig, + summary="Get Execution Analytics Configuration", +) +async def get_execution_analytics_config( + admin_user_id: str = Security(get_user_id), +): + """ + Get the configuration for execution analytics including: + - Available AI models with metadata + - Default system and user prompts + - Recommended model selection + """ + logger.info(f"Admin user {admin_user_id} requesting execution analytics config") + + # Generate model list from LlmModel enum with provider information + available_models = [] + + # Function to generate friendly display names from model values + def generate_model_label(model: LlmModel) -> str: + """Generate a user-friendly label from the model enum value.""" + value = model.value + + # For all models, convert underscores/hyphens to spaces and title case + # e.g., "gpt-4-turbo" -> "GPT 4 Turbo", "claude-3-haiku-20240307" -> "Claude 3 Haiku" + parts = value.replace("_", "-").split("-") + + # Handle provider prefixes (e.g., "google/", "x-ai/") + if "/" in value: + _, model_name = value.split("/", 1) + parts = model_name.replace("_", "-").split("-") + + # Capitalize and format parts + formatted_parts = [] + for part in parts: + # Skip date-like patterns - check for various date formats: + # - Long dates like "20240307" (8 digits) + # - Year components like "2024", "2025" (4 digit years >= 2020) + # - Month/day components like "04", "16" when they appear to be dates + if part.isdigit(): + if len(part) >= 8: # Long date format like "20240307" + continue + elif len(part) == 4 and int(part) >= 2020: # Year like "2024", "2025" + continue + elif len(part) <= 2 and int(part) <= 31: # Month/day like "04", "16" + # Skip if this looks like a date component (basic heuristic) + continue + # Keep version numbers as-is + if part.replace(".", "").isdigit(): + formatted_parts.append(part) + # Capitalize normal words + else: + formatted_parts.append( + part.upper() + if part.upper() in ["GPT", "LLM", "API", "V0"] + else part.capitalize() + ) + + model_name = " ".join(formatted_parts) + + # Format provider name for better display + provider_name = model.provider.replace("_", " ").title() + + # Return with provider prefix for clarity + return f"{provider_name}: {model_name}" + + # Include all LlmModel values (no more filtering by hardcoded list) + recommended_model = LlmModel.GPT4O_MINI.value + for model in LlmModel: + label = generate_model_label(model) + # Add "(Recommended)" suffix to the recommended model + if model.value == recommended_model: + label += " (Recommended)" + + available_models.append( + ModelInfo( + value=model.value, + label=label, + provider=model.provider, + ) + ) + + # Sort models by provider and name for better UX + available_models.sort(key=lambda x: (x.provider, x.label)) + + return ExecutionAnalyticsConfig( + available_models=available_models, + default_system_prompt=DEFAULT_SYSTEM_PROMPT, + default_user_prompt=DEFAULT_USER_PROMPT, + recommended_model=recommended_model, + ) + + +@router.post( + "/execution_analytics", + response_model=ExecutionAnalyticsResponse, + summary="Generate Execution Analytics", +) +async def generate_execution_analytics( + request: 
ExecutionAnalyticsRequest, + admin_user_id: str = Security(get_user_id), +): + """ + Generate activity summaries and correctness scores for graph executions. + + This endpoint: + 1. Fetches all completed executions matching the criteria + 2. Identifies executions missing activity_status or correctness_score + 3. Generates missing data using AI in batches + 4. Updates the database with new stats + 5. Returns a detailed report of the analytics operation + """ + logger.info( + f"Admin user {admin_user_id} starting execution analytics generation for graph {request.graph_id}" + ) + + try: + # Validate model configuration + settings = Settings() + if not settings.secrets.openai_internal_api_key: + raise HTTPException(status_code=500, detail="OpenAI API key not configured") + + # Get database client + db_client = get_db_async_client() + + # Fetch executions to process + executions = await get_graph_executions( + graph_id=request.graph_id, + graph_version=request.graph_version, + user_id=request.user_id, + created_time_gte=request.created_after, + statuses=[ + ExecutionStatus.COMPLETED, + ExecutionStatus.FAILED, + ExecutionStatus.TERMINATED, + ], # Only process finished executions + ) + + logger.info( + f"Found {len(executions)} total executions for graph {request.graph_id}" + ) + + # Filter executions that need analytics generation + executions_to_process = [] + for execution in executions: + # Skip if we should skip existing analytics and both activity_status and correctness_score exist + if ( + request.skip_existing + and execution.stats + and execution.stats.activity_status + and execution.stats.correctness_score is not None + ): + continue + + # Add execution to processing list + executions_to_process.append(execution) + + logger.info( + f"Found {len(executions_to_process)} executions needing analytics generation" + ) + + # Create results for ALL executions - processed and skipped + results = [] + successful_count = 0 + failed_count = 0 + + # Process executions that need analytics generation + if executions_to_process: + total_batches = len( + range(0, len(executions_to_process), request.batch_size) + ) + + for batch_idx, i in enumerate( + range(0, len(executions_to_process), request.batch_size) + ): + batch = executions_to_process[i : i + request.batch_size] + logger.info( + f"Processing batch {batch_idx + 1}/{total_batches} with {len(batch)} executions" + ) + + batch_results = await _process_batch(batch, request, db_client) + + for result in batch_results: + results.append(result) + if result.status == "success": + successful_count += 1 + elif result.status == "failed": + failed_count += 1 + + # Small delay between batches to avoid overwhelming the LLM API + if batch_idx < total_batches - 1: # Don't delay after the last batch + await asyncio.sleep(2) + + # Add ALL executions to results (both processed and skipped) + for execution in executions: + # Skip if already processed (added to results above) + if execution in executions_to_process: + continue + + results.append( + ExecutionAnalyticsResult( + agent_id=execution.graph_id, + version_id=execution.graph_version, + user_id=execution.user_id, + exec_id=execution.id, + summary_text=( + execution.stats.activity_status if execution.stats else None + ), + score=( + execution.stats.correctness_score if execution.stats else None + ), + status="skipped", + error_message=None, # Not an error - just already processed + ) + ) + + response = ExecutionAnalyticsResponse( + total_executions=len(executions), + 
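The batching strategy above (fixed-size batches, concurrent processing within a batch, a pause between batches so the LLM API is not flooded) reduces to a small standalone sketch; `process_one`, the sample data, and the 2-second delay are illustrative stand-ins.

```python
# Minimal sketch of batch-wise concurrent processing with an inter-batch delay.
import asyncio


async def process_one(item: int) -> int:
    await asyncio.sleep(0.1)  # placeholder for the per-execution LLM call
    return item * 2


async def process_in_batches(items: list[int], batch_size: int = 10) -> list[int]:
    results: list[int] = []
    batches = [items[i : i + batch_size] for i in range(0, len(items), batch_size)]
    for idx, batch in enumerate(batches):
        # Run the whole batch concurrently, as _process_batch does with gather.
        results.extend(await asyncio.gather(*(process_one(i) for i in batch)))
        if idx < len(batches) - 1:  # no delay after the final batch
            await asyncio.sleep(2)
    return results


if __name__ == "__main__":
    print(asyncio.run(process_in_batches(list(range(25)), batch_size=10)))
```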
processed_executions=len(executions_to_process), + successful_analytics=successful_count, + failed_analytics=failed_count, + skipped_executions=len(executions) - len(executions_to_process), + results=results, + ) + + logger.info( + f"Analytics generation completed: {successful_count} successful, {failed_count} failed, " + f"{response.skipped_executions} skipped" + ) + + return response + + except Exception as e: + logger.exception(f"Error during execution analytics generation: {e}") + raise HTTPException(status_code=500, detail=str(e)) + + +async def _process_batch( + executions, request: ExecutionAnalyticsRequest, db_client +) -> list[ExecutionAnalyticsResult]: + """Process a batch of executions concurrently.""" + + async def process_single_execution(execution) -> ExecutionAnalyticsResult: + try: + # Generate activity status and score using the specified model + # Convert stats to GraphExecutionStats if needed + if execution.stats: + if isinstance(execution.stats, GraphExecutionMeta.Stats): + stats_for_generation = execution.stats.to_db() + else: + # Already GraphExecutionStats + stats_for_generation = execution.stats + else: + stats_for_generation = GraphExecutionStats() + + activity_response = await generate_activity_status_for_execution( + graph_exec_id=execution.id, + graph_id=execution.graph_id, + graph_version=execution.graph_version, + execution_stats=stats_for_generation, + db_client=db_client, + user_id=execution.user_id, + execution_status=execution.status, + model_name=request.model_name, + skip_feature_flag=True, # Admin endpoint bypasses feature flags + system_prompt=request.system_prompt or DEFAULT_SYSTEM_PROMPT, + user_prompt=request.user_prompt or DEFAULT_USER_PROMPT, + skip_existing=request.skip_existing, + ) + + if not activity_response: + return ExecutionAnalyticsResult( + agent_id=execution.graph_id, + version_id=execution.graph_version, + user_id=execution.user_id, + exec_id=execution.id, + summary_text=None, + score=None, + status="skipped", + error_message="Activity generation returned None", + ) + + # Update the execution stats + # Convert GraphExecutionMeta.Stats to GraphExecutionStats for DB compatibility + if execution.stats: + if isinstance(execution.stats, GraphExecutionMeta.Stats): + updated_stats = execution.stats.to_db() + else: + # Already GraphExecutionStats + updated_stats = execution.stats + else: + updated_stats = GraphExecutionStats() + + updated_stats.activity_status = activity_response["activity_status"] + updated_stats.correctness_score = activity_response["correctness_score"] + + # Save to database with correct stats type + await update_graph_execution_stats( + graph_exec_id=execution.id, stats=updated_stats + ) + + return ExecutionAnalyticsResult( + agent_id=execution.graph_id, + version_id=execution.graph_version, + user_id=execution.user_id, + exec_id=execution.id, + summary_text=activity_response["activity_status"], + score=activity_response["correctness_score"], + status="success", + ) + + except Exception as e: + logger.exception(f"Error processing execution {execution.id}: {e}") + return ExecutionAnalyticsResult( + agent_id=execution.graph_id, + version_id=execution.graph_version, + user_id=execution.user_id, + exec_id=execution.id, + summary_text=None, + score=None, + status="failed", + error_message=str(e), + ) + + # Process all executions in the batch concurrently + return await asyncio.gather( + *[process_single_execution(execution) for execution in executions] + ) + + +@router.get( + "/execution_accuracy_trends", + 
response_model=AccuracyTrendsResponse, + summary="Get Execution Accuracy Trends and Alerts", +) +async def get_execution_accuracy_trends( + graph_id: str, + user_id: Optional[str] = None, + days_back: int = 30, + drop_threshold: float = 10.0, + include_historical: bool = False, + admin_user_id: str = Security(get_user_id), +) -> AccuracyTrendsResponse: + """ + Get execution accuracy trends with moving averages and alert detection. + Simple single-query approach. + """ + logger.info( + f"Admin user {admin_user_id} requesting accuracy trends for graph {graph_id}" + ) + + try: + result = await get_accuracy_trends_and_alerts( + graph_id=graph_id, + days_back=days_back, + user_id=user_id, + drop_threshold=drop_threshold, + include_historical=include_historical, + ) + + return result + + except Exception as e: + logger.exception(f"Error getting accuracy trends for graph {graph_id}: {e}") + raise HTTPException(status_code=500, detail=str(e)) diff --git a/autogpt_platform/backend/backend/server/v2/admin/model.py b/autogpt_platform/backend/backend/api/features/admin/model.py similarity index 100% rename from autogpt_platform/backend/backend/server/v2/admin/model.py rename to autogpt_platform/backend/backend/api/features/admin/model.py diff --git a/autogpt_platform/backend/backend/server/v2/admin/store_admin_routes.py b/autogpt_platform/backend/backend/api/features/admin/store_admin_routes.py similarity index 84% rename from autogpt_platform/backend/backend/server/v2/admin/store_admin_routes.py rename to autogpt_platform/backend/backend/api/features/admin/store_admin_routes.py index c611c43f5a..9c4b89fee6 100644 --- a/autogpt_platform/backend/backend/server/v2/admin/store_admin_routes.py +++ b/autogpt_platform/backend/backend/api/features/admin/store_admin_routes.py @@ -7,9 +7,9 @@ import fastapi import fastapi.responses import prisma.enums -import backend.server.v2.store.cache as store_cache -import backend.server.v2.store.db -import backend.server.v2.store.model +import backend.api.features.store.cache as store_cache +import backend.api.features.store.db as store_db +import backend.api.features.store.model as store_model import backend.util.json logger = logging.getLogger(__name__) @@ -24,7 +24,7 @@ router = fastapi.APIRouter( @router.get( "/listings", summary="Get Admin Listings History", - response_model=backend.server.v2.store.model.StoreListingsWithVersionsResponse, + response_model=store_model.StoreListingsWithVersionsResponse, ) async def get_admin_listings_with_versions( status: typing.Optional[prisma.enums.SubmissionStatus] = None, @@ -48,7 +48,7 @@ async def get_admin_listings_with_versions( StoreListingsWithVersionsResponse with listings and their versions """ try: - listings = await backend.server.v2.store.db.get_admin_listings_with_versions( + listings = await store_db.get_admin_listings_with_versions( status=status, search_query=search, page=page, @@ -68,11 +68,11 @@ async def get_admin_listings_with_versions( @router.post( "/submissions/{store_listing_version_id}/review", summary="Review Store Submission", - response_model=backend.server.v2.store.model.StoreSubmission, + response_model=store_model.StoreSubmission, ) async def review_submission( store_listing_version_id: str, - request: backend.server.v2.store.model.ReviewSubmissionRequest, + request: store_model.ReviewSubmissionRequest, user_id: str = fastapi.Security(autogpt_libs.auth.get_user_id), ): """ @@ -87,12 +87,10 @@ async def review_submission( StoreSubmission with updated review information """ try: - already_approved = ( 
- await backend.server.v2.store.db.check_submission_already_approved( - store_listing_version_id=store_listing_version_id, - ) + already_approved = await store_db.check_submission_already_approved( + store_listing_version_id=store_listing_version_id, ) - submission = await backend.server.v2.store.db.review_store_submission( + submission = await store_db.review_store_submission( store_listing_version_id=store_listing_version_id, is_approved=request.is_approved, external_comments=request.comments, @@ -136,7 +134,7 @@ async def admin_download_agent_file( Raises: HTTPException: If the agent is not found or an unexpected error occurs. """ - graph_data = await backend.server.v2.store.db.get_agent_as_admin( + graph_data = await store_db.get_agent_as_admin( user_id=user_id, store_listing_version_id=store_listing_version_id, ) diff --git a/autogpt_platform/backend/backend/server/routers/analytics.py b/autogpt_platform/backend/backend/api/features/analytics.py similarity index 94% rename from autogpt_platform/backend/backend/server/routers/analytics.py rename to autogpt_platform/backend/backend/api/features/analytics.py index 98c2dd8e96..73a4590dcb 100644 --- a/autogpt_platform/backend/backend/server/routers/analytics.py +++ b/autogpt_platform/backend/backend/api/features/analytics.py @@ -6,10 +6,11 @@ from typing import Annotated import fastapi import pydantic from autogpt_libs.auth import get_user_id +from autogpt_libs.auth.dependencies import requires_user import backend.data.analytics -router = fastapi.APIRouter() +router = fastapi.APIRouter(dependencies=[fastapi.Security(requires_user)]) logger = logging.getLogger(__name__) diff --git a/autogpt_platform/backend/backend/api/features/analytics_test.py b/autogpt_platform/backend/backend/api/features/analytics_test.py new file mode 100644 index 0000000000..2493bdb7e4 --- /dev/null +++ b/autogpt_platform/backend/backend/api/features/analytics_test.py @@ -0,0 +1,340 @@ +"""Tests for analytics API endpoints.""" + +import json +from unittest.mock import AsyncMock, Mock + +import fastapi +import fastapi.testclient +import pytest +import pytest_mock +from pytest_snapshot.plugin import Snapshot + +from .analytics import router as analytics_router + +app = fastapi.FastAPI() +app.include_router(analytics_router) + +client = fastapi.testclient.TestClient(app) + + +@pytest.fixture(autouse=True) +def setup_app_auth(mock_jwt_user): + """Setup auth overrides for all tests in this module.""" + from autogpt_libs.auth.jwt_utils import get_jwt_payload + + app.dependency_overrides[get_jwt_payload] = mock_jwt_user["get_jwt_payload"] + yield + app.dependency_overrides.clear() + + +# ============================================================================= +# /log_raw_metric endpoint tests +# ============================================================================= + + +def test_log_raw_metric_success( + mocker: pytest_mock.MockFixture, + configured_snapshot: Snapshot, + test_user_id: str, +) -> None: + """Test successful raw metric logging.""" + mock_result = Mock(id="metric-123-uuid") + mock_log_metric = mocker.patch( + "backend.data.analytics.log_raw_metric", + new_callable=AsyncMock, + return_value=mock_result, + ) + + request_data = { + "metric_name": "page_load_time", + "metric_value": 2.5, + "data_string": "/dashboard", + } + + response = client.post("/log_raw_metric", json=request_data) + + assert response.status_code == 200, f"Unexpected response: {response.text}" + assert response.json() == "metric-123-uuid" + + mock_log_metric.assert_called_once_with( 
+ user_id=test_user_id, + metric_name="page_load_time", + metric_value=2.5, + data_string="/dashboard", + ) + + configured_snapshot.assert_match( + json.dumps({"metric_id": response.json()}, indent=2, sort_keys=True), + "analytics_log_metric_success", + ) + + +@pytest.mark.parametrize( + "metric_value,metric_name,data_string,test_id", + [ + (100, "api_calls_count", "external_api", "integer_value"), + (0, "error_count", "no_errors", "zero_value"), + (-5.2, "temperature_delta", "cooling", "negative_value"), + (1.23456789, "precision_test", "float_precision", "float_precision"), + (999999999, "large_number", "max_value", "large_number"), + (0.0000001, "tiny_number", "min_value", "tiny_number"), + ], +) +def test_log_raw_metric_various_values( + mocker: pytest_mock.MockFixture, + configured_snapshot: Snapshot, + metric_value: float, + metric_name: str, + data_string: str, + test_id: str, +) -> None: + """Test raw metric logging with various metric values.""" + mock_result = Mock(id=f"metric-{test_id}-uuid") + mocker.patch( + "backend.data.analytics.log_raw_metric", + new_callable=AsyncMock, + return_value=mock_result, + ) + + request_data = { + "metric_name": metric_name, + "metric_value": metric_value, + "data_string": data_string, + } + + response = client.post("/log_raw_metric", json=request_data) + + assert response.status_code == 200, f"Failed for {test_id}: {response.text}" + + configured_snapshot.assert_match( + json.dumps( + {"metric_id": response.json(), "test_case": test_id}, + indent=2, + sort_keys=True, + ), + f"analytics_metric_{test_id}", + ) + + +@pytest.mark.parametrize( + "invalid_data,expected_error", + [ + ({}, "Field required"), + ({"metric_name": "test"}, "Field required"), + ( + {"metric_name": "test", "metric_value": "not_a_number", "data_string": "x"}, + "Input should be a valid number", + ), + ( + {"metric_name": "", "metric_value": 1.0, "data_string": "test"}, + "String should have at least 1 character", + ), + ( + {"metric_name": "test", "metric_value": 1.0, "data_string": ""}, + "String should have at least 1 character", + ), + ], + ids=[ + "empty_request", + "missing_metric_value_and_data_string", + "invalid_metric_value_type", + "empty_metric_name", + "empty_data_string", + ], +) +def test_log_raw_metric_validation_errors( + invalid_data: dict, + expected_error: str, +) -> None: + """Test validation errors for invalid metric requests.""" + response = client.post("/log_raw_metric", json=invalid_data) + + assert response.status_code == 422 + error_detail = response.json() + assert "detail" in error_detail, f"Missing 'detail' in error: {error_detail}" + + error_text = json.dumps(error_detail) + assert ( + expected_error in error_text + ), f"Expected '{expected_error}' in error response: {error_text}" + + +def test_log_raw_metric_service_error( + mocker: pytest_mock.MockFixture, + test_user_id: str, +) -> None: + """Test error handling when analytics service fails.""" + mocker.patch( + "backend.data.analytics.log_raw_metric", + new_callable=AsyncMock, + side_effect=Exception("Database connection failed"), + ) + + request_data = { + "metric_name": "test_metric", + "metric_value": 1.0, + "data_string": "test", + } + + response = client.post("/log_raw_metric", json=request_data) + + assert response.status_code == 500 + error_detail = response.json()["detail"] + assert "Database connection failed" in error_detail["message"] + assert "hint" in error_detail + + +# ============================================================================= +# /log_raw_analytics endpoint 
tests +# ============================================================================= + + +def test_log_raw_analytics_success( + mocker: pytest_mock.MockFixture, + configured_snapshot: Snapshot, + test_user_id: str, +) -> None: + """Test successful raw analytics logging.""" + mock_result = Mock(id="analytics-789-uuid") + mock_log_analytics = mocker.patch( + "backend.data.analytics.log_raw_analytics", + new_callable=AsyncMock, + return_value=mock_result, + ) + + request_data = { + "type": "user_action", + "data": { + "action": "button_click", + "button_id": "submit_form", + "timestamp": "2023-01-01T00:00:00Z", + "metadata": {"form_type": "registration", "fields_filled": 5}, + }, + "data_index": "button_click_submit_form", + } + + response = client.post("/log_raw_analytics", json=request_data) + + assert response.status_code == 200, f"Unexpected response: {response.text}" + assert response.json() == "analytics-789-uuid" + + mock_log_analytics.assert_called_once_with( + test_user_id, + "user_action", + request_data["data"], + "button_click_submit_form", + ) + + configured_snapshot.assert_match( + json.dumps({"analytics_id": response.json()}, indent=2, sort_keys=True), + "analytics_log_analytics_success", + ) + + +def test_log_raw_analytics_complex_data( + mocker: pytest_mock.MockFixture, + configured_snapshot: Snapshot, +) -> None: + """Test raw analytics logging with complex nested data structures.""" + mock_result = Mock(id="analytics-complex-uuid") + mocker.patch( + "backend.data.analytics.log_raw_analytics", + new_callable=AsyncMock, + return_value=mock_result, + ) + + request_data = { + "type": "agent_execution", + "data": { + "agent_id": "agent_123", + "execution_id": "exec_456", + "status": "completed", + "duration_ms": 3500, + "nodes_executed": 15, + "blocks_used": [ + {"block_id": "llm_block", "count": 3}, + {"block_id": "http_block", "count": 5}, + {"block_id": "code_block", "count": 2}, + ], + "errors": [], + "metadata": { + "trigger": "manual", + "user_tier": "premium", + "environment": "production", + }, + }, + "data_index": "agent_123_exec_456", + } + + response = client.post("/log_raw_analytics", json=request_data) + + assert response.status_code == 200 + + configured_snapshot.assert_match( + json.dumps( + {"analytics_id": response.json(), "logged_data": request_data["data"]}, + indent=2, + sort_keys=True, + ), + "analytics_log_analytics_complex_data", + ) + + +@pytest.mark.parametrize( + "invalid_data,expected_error", + [ + ({}, "Field required"), + ({"type": "test"}, "Field required"), + ( + {"type": "test", "data": "not_a_dict", "data_index": "test"}, + "Input should be a valid dictionary", + ), + ({"type": "test", "data": {"key": "value"}}, "Field required"), + ], + ids=[ + "empty_request", + "missing_data_and_data_index", + "invalid_data_type", + "missing_data_index", + ], +) +def test_log_raw_analytics_validation_errors( + invalid_data: dict, + expected_error: str, +) -> None: + """Test validation errors for invalid analytics requests.""" + response = client.post("/log_raw_analytics", json=invalid_data) + + assert response.status_code == 422 + error_detail = response.json() + assert "detail" in error_detail, f"Missing 'detail' in error: {error_detail}" + + error_text = json.dumps(error_detail) + assert ( + expected_error in error_text + ), f"Expected '{expected_error}' in error response: {error_text}" + + +def test_log_raw_analytics_service_error( + mocker: pytest_mock.MockFixture, + test_user_id: str, +) -> None: + """Test error handling when analytics service fails.""" 
+ mocker.patch( + "backend.data.analytics.log_raw_analytics", + new_callable=AsyncMock, + side_effect=Exception("Analytics DB unreachable"), + ) + + request_data = { + "type": "test_event", + "data": {"key": "value"}, + "data_index": "test_index", + } + + response = client.post("/log_raw_analytics", json=request_data) + + assert response.status_code == 500 + error_detail = response.json()["detail"] + assert "Analytics DB unreachable" in error_detail["message"] + assert "hint" in error_detail diff --git a/autogpt_platform/backend/backend/server/v2/store/__init__.py b/autogpt_platform/backend/backend/api/features/builder/__init__.py similarity index 100% rename from autogpt_platform/backend/backend/server/v2/store/__init__.py rename to autogpt_platform/backend/backend/api/features/builder/__init__.py diff --git a/autogpt_platform/backend/backend/api/features/builder/db.py b/autogpt_platform/backend/backend/api/features/builder/db.py new file mode 100644 index 0000000000..7177fa4dc6 --- /dev/null +++ b/autogpt_platform/backend/backend/api/features/builder/db.py @@ -0,0 +1,689 @@ +import logging +from dataclasses import dataclass +from datetime import datetime, timedelta, timezone +from difflib import SequenceMatcher +from typing import Sequence + +import prisma + +import backend.api.features.library.db as library_db +import backend.api.features.library.model as library_model +import backend.api.features.store.db as store_db +import backend.api.features.store.model as store_model +import backend.data.block +from backend.blocks import load_all_blocks +from backend.blocks.llm import LlmModel +from backend.data.block import AnyBlockSchema, BlockCategory, BlockInfo, BlockSchema +from backend.data.db import query_raw_with_schema +from backend.integrations.providers import ProviderName +from backend.util.cache import cached +from backend.util.models import Pagination + +from .model import ( + BlockCategoryResponse, + BlockResponse, + BlockType, + CountResponse, + FilterType, + Provider, + ProviderResponse, + SearchEntry, +) + +logger = logging.getLogger(__name__) +llm_models = [name.name.lower().replace("_", " ") for name in LlmModel] + +MAX_LIBRARY_AGENT_RESULTS = 100 +MAX_MARKETPLACE_AGENT_RESULTS = 100 +MIN_SCORE_FOR_FILTERED_RESULTS = 10.0 + +SearchResultItem = BlockInfo | library_model.LibraryAgent | store_model.StoreAgent + + +@dataclass +class _ScoredItem: + item: SearchResultItem + filter_type: FilterType + score: float + sort_key: str + + +@dataclass +class _SearchCacheEntry: + items: list[SearchResultItem] + total_items: dict[FilterType, int] + + +def get_block_categories(category_blocks: int = 3) -> list[BlockCategoryResponse]: + categories: dict[BlockCategory, BlockCategoryResponse] = {} + + for block_type in load_all_blocks().values(): + block: AnyBlockSchema = block_type() + # Skip disabled blocks + if block.disabled: + continue + # Skip blocks that don't have categories (all should have at least one) + if not block.categories: + continue + + # Add block to the categories + for category in block.categories: + if category not in categories: + categories[category] = BlockCategoryResponse( + name=category.name.lower(), + total_blocks=0, + blocks=[], + ) + + categories[category].total_blocks += 1 + + # Append if the category has less than the specified number of blocks + if len(categories[category].blocks) < category_blocks: + categories[category].blocks.append(block.get_info()) + + # Sort categories by name + return sorted(categories.values(), key=lambda x: x.name) + + +def get_blocks( + 
*, + category: str | None = None, + type: BlockType | None = None, + provider: ProviderName | None = None, + page: int = 1, + page_size: int = 50, +) -> BlockResponse: + """ + Get blocks based on either category, type or provider. + Providing nothing fetches all block types. + """ + # Only one of category, type, or provider can be specified + if (category and type) or (category and provider) or (type and provider): + raise ValueError("Only one of category, type, or provider can be specified") + + blocks: list[AnyBlockSchema] = [] + skip = (page - 1) * page_size + take = page_size + total = 0 + + for block_type in load_all_blocks().values(): + block: AnyBlockSchema = block_type() + # Skip disabled blocks + if block.disabled: + continue + # Skip blocks that don't match the category + if category and category not in {c.name.lower() for c in block.categories}: + continue + # Skip blocks that don't match the type + if ( + (type == "input" and block.block_type.value != "Input") + or (type == "output" and block.block_type.value != "Output") + or (type == "action" and block.block_type.value in ("Input", "Output")) + ): + continue + # Skip blocks that don't match the provider + if provider: + credentials_info = block.input_schema.get_credentials_fields_info().values() + if not any(provider in info.provider for info in credentials_info): + continue + + total += 1 + if skip > 0: + skip -= 1 + continue + if take > 0: + take -= 1 + blocks.append(block) + + return BlockResponse( + blocks=[b.get_info() for b in blocks], + pagination=Pagination( + total_items=total, + total_pages=(total + page_size - 1) // page_size, + current_page=page, + page_size=page_size, + ), + ) + + +def get_block_by_id(block_id: str) -> BlockInfo | None: + """ + Get a specific block by its ID. + """ + for block_type in load_all_blocks().values(): + block: AnyBlockSchema = block_type() + if block.id == block_id: + return block.get_info() + return None + + +async def update_search(user_id: str, search: SearchEntry) -> str: + """ + Upsert a search request for the user and return the search ID. + """ + if search.search_id: + # Update existing search + await prisma.models.BuilderSearchHistory.prisma().update( + where={ + "id": search.search_id, + }, + data={ + "searchQuery": search.search_query or "", + "filter": search.filter or [], # type: ignore + "byCreator": search.by_creator or [], + }, + ) + return search.search_id + else: + # Create new search + new_search = await prisma.models.BuilderSearchHistory.prisma().create( + data={ + "userId": user_id, + "searchQuery": search.search_query or "", + "filter": search.filter or [], # type: ignore + "byCreator": search.by_creator or [], + } + ) + return new_search.id + + +async def get_recent_searches(user_id: str, limit: int = 5) -> list[SearchEntry]: + """ + Get the user's most recent search requests. + """ + searches = await prisma.models.BuilderSearchHistory.prisma().find_many( + where={ + "userId": user_id, + }, + order={ + "updatedAt": "desc", + }, + take=limit, + ) + return [ + SearchEntry( + search_query=s.searchQuery, + filter=s.filter, # type: ignore + by_creator=s.byCreator, + search_id=s.id, + ) + for s in searches + ] + + +async def get_sorted_search_results( + *, + user_id: str, + search_query: str | None, + filters: Sequence[FilterType], + by_creator: Sequence[str] | None = None, +) -> _SearchCacheEntry: + normalized_filters: tuple[FilterType, ...] = tuple(sorted(set(filters or []))) + normalized_creators: tuple[str, ...] 
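The skip/take pagination used by `get_blocks` above (and by `get_providers` later in this file) boils down to a simple in-memory pattern, sketched here with made-up data:

```python
# Minimal sketch of skip/take pagination over an in-memory collection.
def paginate(items: list[str], page: int, page_size: int) -> tuple[list[str], int]:
    skip = (page - 1) * page_size
    take = page_size
    total = 0
    page_items: list[str] = []
    for item in items:
        total += 1  # every item counts toward the total, even off-page ones
        if skip > 0:
            skip -= 1
            continue
        if take > 0:
            take -= 1
            page_items.append(item)
    return page_items, total


print(paginate([f"block-{i}" for i in range(7)], page=2, page_size=3))
# -> (['block-3', 'block-4', 'block-5'], 7)
```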
= tuple(sorted(set(by_creator or []))) + return await _build_cached_search_results( + user_id=user_id, + search_query=search_query or "", + filters=normalized_filters, + by_creator=normalized_creators, + ) + + +@cached(ttl_seconds=300, shared_cache=True) +async def _build_cached_search_results( + user_id: str, + search_query: str, + filters: tuple[FilterType, ...], + by_creator: tuple[str, ...], +) -> _SearchCacheEntry: + normalized_query = (search_query or "").strip().lower() + + include_blocks = "blocks" in filters + include_integrations = "integrations" in filters + include_library_agents = "my_agents" in filters + include_marketplace_agents = "marketplace_agents" in filters + + scored_items: list[_ScoredItem] = [] + total_items: dict[FilterType, int] = { + "blocks": 0, + "integrations": 0, + "marketplace_agents": 0, + "my_agents": 0, + } + + block_results, block_total, integration_total = _collect_block_results( + normalized_query=normalized_query, + include_blocks=include_blocks, + include_integrations=include_integrations, + ) + scored_items.extend(block_results) + total_items["blocks"] = block_total + total_items["integrations"] = integration_total + + if include_library_agents: + library_response = await library_db.list_library_agents( + user_id=user_id, + search_term=search_query or None, + page=1, + page_size=MAX_LIBRARY_AGENT_RESULTS, + ) + total_items["my_agents"] = library_response.pagination.total_items + scored_items.extend( + _build_library_items( + agents=library_response.agents, + normalized_query=normalized_query, + ) + ) + + if include_marketplace_agents: + marketplace_response = await store_db.get_store_agents( + creators=list(by_creator) or None, + search_query=search_query or None, + page=1, + page_size=MAX_MARKETPLACE_AGENT_RESULTS, + ) + total_items["marketplace_agents"] = marketplace_response.pagination.total_items + scored_items.extend( + _build_marketplace_items( + agents=marketplace_response.agents, + normalized_query=normalized_query, + ) + ) + + sorted_items = sorted( + scored_items, + key=lambda entry: (-entry.score, entry.sort_key, entry.filter_type), + ) + + return _SearchCacheEntry( + items=[entry.item for entry in sorted_items], + total_items=total_items, + ) + + +def _collect_block_results( + *, + normalized_query: str, + include_blocks: bool, + include_integrations: bool, +) -> tuple[list[_ScoredItem], int, int]: + results: list[_ScoredItem] = [] + block_count = 0 + integration_count = 0 + + if not include_blocks and not include_integrations: + return results, block_count, integration_count + + for block_type in load_all_blocks().values(): + block: AnyBlockSchema = block_type() + if block.disabled: + continue + + block_info = block.get_info() + credentials = list(block.input_schema.get_credentials_fields().values()) + is_integration = len(credentials) > 0 + + if is_integration and not include_integrations: + continue + if not is_integration and not include_blocks: + continue + + score = _score_block(block, block_info, normalized_query) + if not _should_include_item(score, normalized_query): + continue + + filter_type: FilterType = "integrations" if is_integration else "blocks" + if is_integration: + integration_count += 1 + else: + block_count += 1 + + results.append( + _ScoredItem( + item=block_info, + filter_type=filter_type, + score=score, + sort_key=_get_item_name(block_info), + ) + ) + + return results, block_count, integration_count + + +def _build_library_items( + *, + agents: list[library_model.LibraryAgent], + normalized_query: str, +) -> 
list[_ScoredItem]: + results: list[_ScoredItem] = [] + + for agent in agents: + score = _score_library_agent(agent, normalized_query) + if not _should_include_item(score, normalized_query): + continue + + results.append( + _ScoredItem( + item=agent, + filter_type="my_agents", + score=score, + sort_key=_get_item_name(agent), + ) + ) + + return results + + +def _build_marketplace_items( + *, + agents: list[store_model.StoreAgent], + normalized_query: str, +) -> list[_ScoredItem]: + results: list[_ScoredItem] = [] + + for agent in agents: + score = _score_store_agent(agent, normalized_query) + if not _should_include_item(score, normalized_query): + continue + + results.append( + _ScoredItem( + item=agent, + filter_type="marketplace_agents", + score=score, + sort_key=_get_item_name(agent), + ) + ) + + return results + + +def get_providers( + query: str = "", + page: int = 1, + page_size: int = 50, +) -> ProviderResponse: + providers = [] + query = query.lower() + + skip = (page - 1) * page_size + take = page_size + + all_providers = _get_all_providers() + + for provider in all_providers.values(): + if ( + query not in provider.name.value.lower() + and query not in provider.description.lower() + ): + continue + if skip > 0: + skip -= 1 + continue + if take > 0: + take -= 1 + providers.append(provider) + + total = len(all_providers) + + return ProviderResponse( + providers=providers, + pagination=Pagination( + total_items=total, + total_pages=(total + page_size - 1) // page_size, + current_page=page, + page_size=page_size, + ), + ) + + +async def get_counts(user_id: str) -> CountResponse: + my_agents = await prisma.models.LibraryAgent.prisma().count( + where={ + "userId": user_id, + "isDeleted": False, + "isArchived": False, + } + ) + counts = await _get_static_counts() + return CountResponse( + my_agents=my_agents, + **counts, + ) + + +@cached(ttl_seconds=3600) +async def _get_static_counts(): + """ + Get counts of blocks, integrations, and marketplace agents. + This is cached to avoid unnecessary database queries and calculations. 
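+ Results are cached for one hour; the per-user "my_agents" count is computed
+ separately in get_counts() so it stays fresh for each user.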
+ """ + all_blocks = 0 + input_blocks = 0 + action_blocks = 0 + output_blocks = 0 + integrations = 0 + + for block_type in load_all_blocks().values(): + block: AnyBlockSchema = block_type() + if block.disabled: + continue + + all_blocks += 1 + + if block.block_type.value == "Input": + input_blocks += 1 + elif block.block_type.value == "Output": + output_blocks += 1 + else: + action_blocks += 1 + + credentials = list(block.input_schema.get_credentials_fields().values()) + if len(credentials) > 0: + integrations += 1 + + marketplace_agents = await prisma.models.StoreAgent.prisma().count() + + return { + "all_blocks": all_blocks, + "input_blocks": input_blocks, + "action_blocks": action_blocks, + "output_blocks": output_blocks, + "integrations": integrations, + "marketplace_agents": marketplace_agents, + } + + +def _matches_llm_model(schema_cls: type[BlockSchema], query: str) -> bool: + for field in schema_cls.model_fields.values(): + if field.annotation == LlmModel: + # Check if query matches any value in llm_models + if any(query in name for name in llm_models): + return True + return False + + +def _score_block( + block: AnyBlockSchema, + block_info: BlockInfo, + normalized_query: str, +) -> float: + if not normalized_query: + return 0.0 + + name = block_info.name.lower() + description = block_info.description.lower() + score = _score_primary_fields(name, description, normalized_query) + + category_text = " ".join( + category.get("category", "").lower() for category in block_info.categories + ) + score += _score_additional_field(category_text, normalized_query, 12, 6) + + credentials_info = block.input_schema.get_credentials_fields_info().values() + provider_names = [ + provider.value.lower() + for info in credentials_info + for provider in info.provider + ] + provider_text = " ".join(provider_names) + score += _score_additional_field(provider_text, normalized_query, 15, 6) + + if _matches_llm_model(block.input_schema, normalized_query): + score += 20 + + return score + + +def _score_library_agent( + agent: library_model.LibraryAgent, + normalized_query: str, +) -> float: + if not normalized_query: + return 0.0 + + name = agent.name.lower() + description = (agent.description or "").lower() + instructions = (agent.instructions or "").lower() + + score = _score_primary_fields(name, description, normalized_query) + score += _score_additional_field(instructions, normalized_query, 15, 6) + score += _score_additional_field( + agent.creator_name.lower(), normalized_query, 10, 5 + ) + + return score + + +def _score_store_agent( + agent: store_model.StoreAgent, + normalized_query: str, +) -> float: + if not normalized_query: + return 0.0 + + name = agent.agent_name.lower() + description = agent.description.lower() + sub_heading = agent.sub_heading.lower() + + score = _score_primary_fields(name, description, normalized_query) + score += _score_additional_field(sub_heading, normalized_query, 12, 6) + score += _score_additional_field(agent.creator.lower(), normalized_query, 10, 5) + + return score + + +def _score_primary_fields(name: str, description: str, query: str) -> float: + score = 0.0 + if name == query: + score += 120 + elif name.startswith(query): + score += 90 + elif query in name: + score += 60 + + score += SequenceMatcher(None, name, query).ratio() * 50 + if description: + if query in description: + score += 30 + score += SequenceMatcher(None, description, query).ratio() * 25 + return score + + +def _score_additional_field( + value: str, + query: str, + contains_weight: float, + 
similarity_weight: float, +) -> float: + if not value or not query: + return 0.0 + + score = 0.0 + if query in value: + score += contains_weight + score += SequenceMatcher(None, value, query).ratio() * similarity_weight + return score + + +def _should_include_item(score: float, normalized_query: str) -> bool: + if not normalized_query: + return True + return score >= MIN_SCORE_FOR_FILTERED_RESULTS + + +def _get_item_name(item: SearchResultItem) -> str: + if isinstance(item, BlockInfo): + return item.name.lower() + if isinstance(item, library_model.LibraryAgent): + return item.name.lower() + return item.agent_name.lower() + + +@cached(ttl_seconds=3600) +def _get_all_providers() -> dict[ProviderName, Provider]: + providers: dict[ProviderName, Provider] = {} + + for block_type in load_all_blocks().values(): + block: AnyBlockSchema = block_type() + if block.disabled: + continue + + credentials_info = block.input_schema.get_credentials_fields_info().values() + for info in credentials_info: + for provider in info.provider: # provider is a ProviderName enum member + if provider in providers: + providers[provider].integration_count += 1 + else: + providers[provider] = Provider( + name=provider, description="", integration_count=1 + ) + return providers + + +@cached(ttl_seconds=3600) +async def get_suggested_blocks(count: int = 5) -> list[BlockInfo]: + suggested_blocks = [] + # Sum the number of executions for each block type + # Prisma cannot group by nested relations, so we do a raw query + # Calculate the cutoff timestamp + timestamp_threshold = datetime.now(timezone.utc) - timedelta(days=30) + + results = await query_raw_with_schema( + """ + SELECT + agent_node."agentBlockId" AS block_id, + COUNT(execution.id) AS execution_count + FROM {schema_prefix}"AgentNodeExecution" execution + JOIN {schema_prefix}"AgentNode" agent_node ON execution."agentNodeId" = agent_node.id + WHERE execution."endedTime" >= $1::timestamp + GROUP BY agent_node."agentBlockId" + ORDER BY execution_count DESC; + """, + timestamp_threshold, + ) + + # Get the top blocks based on execution count + # But ignore Input and Output blocks + blocks: list[tuple[BlockInfo, int]] = [] + + for block_type in load_all_blocks().values(): + block: AnyBlockSchema = block_type() + if block.disabled or block.block_type in ( + backend.data.block.BlockType.INPUT, + backend.data.block.BlockType.OUTPUT, + backend.data.block.BlockType.AGENT, + ): + continue + # Find the execution count for this block + execution_count = next( + (row["execution_count"] for row in results if row["block_id"] == block.id), + 0, + ) + blocks.append((block.get_info(), execution_count)) + # Sort blocks by execution count + blocks.sort(key=lambda x: x[1], reverse=True) + + suggested_blocks = [block[0] for block in blocks] + + # Return the top blocks + return suggested_blocks[:count] diff --git a/autogpt_platform/backend/backend/server/v2/builder/model.py b/autogpt_platform/backend/backend/api/features/builder/model.py similarity index 74% rename from autogpt_platform/backend/backend/server/v2/builder/model.py rename to autogpt_platform/backend/backend/api/features/builder/model.py index e1a7e744fd..fcd19dba94 100644 --- a/autogpt_platform/backend/backend/server/v2/builder/model.py +++ b/autogpt_platform/backend/backend/api/features/builder/model.py @@ -2,8 +2,8 @@ from typing import Literal from pydantic import BaseModel -import backend.server.v2.library.model as library_model -import backend.server.v2.store.model as store_model +import backend.api.features.library.model 
as library_model +import backend.api.features.store.model as store_model from backend.data.block import BlockInfo from backend.integrations.providers import ProviderName from backend.util.models import Pagination @@ -18,10 +18,17 @@ FilterType = Literal[ BlockType = Literal["all", "input", "action", "output"] +class SearchEntry(BaseModel): + search_query: str | None = None + filter: list[FilterType] | None = None + by_creator: list[str] | None = None + search_id: str | None = None + + # Suggestions class SuggestionsResponse(BaseModel): otto_suggestions: list[str] - recent_searches: list[str] + recent_searches: list[SearchEntry] providers: list[ProviderName] top_blocks: list[BlockInfo] @@ -32,7 +39,7 @@ class BlockCategoryResponse(BaseModel): total_blocks: int blocks: list[BlockInfo] - model_config = {"use_enum_values": False} # <== use enum names like "AI" + model_config = {"use_enum_values": False} # Use enum names like "AI" # Input/Action/Output and see all for block categories @@ -53,17 +60,11 @@ class ProviderResponse(BaseModel): pagination: Pagination -class SearchBlocksResponse(BaseModel): - blocks: BlockResponse - total_block_count: int - total_integration_count: int - - class SearchResponse(BaseModel): items: list[BlockInfo | library_model.LibraryAgent | store_model.StoreAgent] + search_id: str total_items: dict[FilterType, int] - page: int - more_pages: bool + pagination: Pagination class CountResponse(BaseModel): diff --git a/autogpt_platform/backend/backend/server/v2/builder/routes.py b/autogpt_platform/backend/backend/api/features/builder/routes.py similarity index 65% rename from autogpt_platform/backend/backend/server/v2/builder/routes.py rename to autogpt_platform/backend/backend/api/features/builder/routes.py index ebc9fd5baf..7fe9cab189 100644 --- a/autogpt_platform/backend/backend/server/v2/builder/routes.py +++ b/autogpt_platform/backend/backend/api/features/builder/routes.py @@ -4,15 +4,12 @@ from typing import Annotated, Sequence import fastapi from autogpt_libs.auth.dependencies import get_user_id, requires_user -import backend.server.v2.builder.db as builder_db -import backend.server.v2.builder.model as builder_model -import backend.server.v2.library.db as library_db -import backend.server.v2.library.model as library_model -import backend.server.v2.store.db as store_db -import backend.server.v2.store.model as store_model from backend.integrations.providers import ProviderName from backend.util.models import Pagination +from . import db as builder_db +from . import model as builder_model + logger = logging.getLogger(__name__) router = fastapi.APIRouter( @@ -45,7 +42,9 @@ def sanitize_query(query: str | None) -> str | None: summary="Get Builder suggestions", response_model=builder_model.SuggestionsResponse, ) -async def get_suggestions() -> builder_model.SuggestionsResponse: +async def get_suggestions( + user_id: Annotated[str, fastapi.Security(get_user_id)], +) -> builder_model.SuggestionsResponse: """ Get all suggestions for the Blocks Menu. """ @@ -55,11 +54,7 @@ async def get_suggestions() -> builder_model.SuggestionsResponse: "Help me create a list", "Help me feed my data to Google Maps", ], - recent_searches=[ - "image generation", - "deepfake", - "competitor analysis", - ], + recent_searches=await builder_db.get_recent_searches(user_id), providers=[ ProviderName.TWITTER, ProviderName.GITHUB, @@ -147,7 +142,6 @@ async def get_providers( ) -# Not using post method because on frontend, orval doesn't support Infinite Query with POST method. 
@router.get( "/search", summary="Builder search", @@ -157,7 +151,7 @@ async def get_providers( async def search( user_id: Annotated[str, fastapi.Security(get_user_id)], search_query: Annotated[str | None, fastapi.Query()] = None, - filter: Annotated[list[str] | None, fastapi.Query()] = None, + filter: Annotated[list[builder_model.FilterType] | None, fastapi.Query()] = None, search_id: Annotated[str | None, fastapi.Query()] = None, by_creator: Annotated[list[str] | None, fastapi.Query()] = None, page: Annotated[int, fastapi.Query()] = 1, @@ -176,69 +170,43 @@ async def search( ] search_query = sanitize_query(search_query) - # Blocks&Integrations - blocks = builder_model.SearchBlocksResponse( - blocks=builder_model.BlockResponse( - blocks=[], - pagination=Pagination.empty(), - ), - total_block_count=0, - total_integration_count=0, + # Get all possible results + cached_results = await builder_db.get_sorted_search_results( + user_id=user_id, + search_query=search_query, + filters=filter, + by_creator=by_creator, ) - if "blocks" in filter or "integrations" in filter: - blocks = builder_db.search_blocks( - include_blocks="blocks" in filter, - include_integrations="integrations" in filter, - query=search_query or "", - page=page, - page_size=page_size, - ) - # Library Agents - my_agents = library_model.LibraryAgentResponse( - agents=[], - pagination=Pagination.empty(), + # Paginate results + total_combined_items = len(cached_results.items) + pagination = Pagination( + total_items=total_combined_items, + total_pages=(total_combined_items + page_size - 1) // page_size, + current_page=page, + page_size=page_size, ) - if "my_agents" in filter: - my_agents = await library_db.list_library_agents( - user_id=user_id, - search_term=search_query, - page=page, - page_size=page_size, - ) - # Marketplace Agents - marketplace_agents = store_model.StoreAgentsResponse( - agents=[], - pagination=Pagination.empty(), - ) - if "marketplace_agents" in filter: - marketplace_agents = await store_db.get_store_agents( - creators=by_creator, + start_idx = (page - 1) * page_size + end_idx = start_idx + page_size + paginated_items = cached_results.items[start_idx:end_idx] + + # Update the search entry by id + search_id = await builder_db.update_search( + user_id, + builder_model.SearchEntry( search_query=search_query, - page=page, - page_size=page_size, - ) - - more_pages = False - if ( - blocks.blocks.pagination.current_page < blocks.blocks.pagination.total_pages - or my_agents.pagination.current_page < my_agents.pagination.total_pages - or marketplace_agents.pagination.current_page - < marketplace_agents.pagination.total_pages - ): - more_pages = True + filter=filter, + by_creator=by_creator, + search_id=search_id, + ), + ) return builder_model.SearchResponse( - items=blocks.blocks.blocks + my_agents.agents + marketplace_agents.agents, - total_items={ - "blocks": blocks.total_block_count, - "integrations": blocks.total_integration_count, - "marketplace_agents": marketplace_agents.pagination.total_items, - "my_agents": my_agents.pagination.total_items, - }, - page=page, - more_pages=more_pages, + items=paginated_items, + search_id=search_id, + total_items=cached_results.total_items, + pagination=pagination, ) diff --git a/autogpt_platform/backend/backend/server/v2/turnstile/__init__.py b/autogpt_platform/backend/backend/api/features/chat/__init__.py similarity index 100% rename from autogpt_platform/backend/backend/server/v2/turnstile/__init__.py rename to autogpt_platform/backend/backend/api/features/chat/__init__.py diff 
--git a/autogpt_platform/backend/backend/api/features/chat/config.py b/autogpt_platform/backend/backend/api/features/chat/config.py new file mode 100644 index 0000000000..5b8f16298e --- /dev/null +++ b/autogpt_platform/backend/backend/api/features/chat/config.py @@ -0,0 +1,118 @@ +"""Configuration management for chat system.""" + +import os +from pathlib import Path + +from pydantic import Field, field_validator +from pydantic_settings import BaseSettings + + +class ChatConfig(BaseSettings): + """Configuration for the chat system.""" + + # OpenAI API Configuration + model: str = Field( + default="qwen/qwen3-235b-a22b-2507", description="Default model to use" + ) + api_key: str | None = Field(default=None, description="OpenAI API key") + base_url: str | None = Field( + default="https://openrouter.ai/api/v1", + description="Base URL for API (e.g., for OpenRouter)", + ) + + # Session TTL Configuration - 12 hours + session_ttl: int = Field(default=43200, description="Session TTL in seconds") + + # System Prompt Configuration + system_prompt_path: str = Field( + default="prompts/chat_system.md", + description="Path to system prompt file relative to chat module", + ) + + # Streaming Configuration + max_context_messages: int = Field( + default=50, ge=1, le=200, description="Maximum context messages" + ) + + stream_timeout: int = Field(default=300, description="Stream timeout in seconds") + max_retries: int = Field(default=3, description="Maximum number of retries") + max_agent_runs: int = Field(default=3, description="Maximum number of agent runs") + max_agent_schedules: int = Field( + default=3, description="Maximum number of agent schedules" + ) + + @field_validator("api_key", mode="before") + @classmethod + def get_api_key(cls, v): + """Get API key from environment if not provided.""" + if v is None: + # Try to get from environment variables + # First check for CHAT_API_KEY (Pydantic prefix) + v = os.getenv("CHAT_API_KEY") + if not v: + # Fall back to OPEN_ROUTER_API_KEY + v = os.getenv("OPEN_ROUTER_API_KEY") + if not v: + # Fall back to OPENAI_API_KEY + v = os.getenv("OPENAI_API_KEY") + return v + + @field_validator("base_url", mode="before") + @classmethod + def get_base_url(cls, v): + """Get base URL from environment if not provided.""" + if v is None: + # Check for OpenRouter or custom base URL + v = os.getenv("CHAT_BASE_URL") + if not v: + v = os.getenv("OPENROUTER_BASE_URL") + if not v: + v = os.getenv("OPENAI_BASE_URL") + if not v: + v = "https://openrouter.ai/api/v1" + return v + + def get_system_prompt(self, **template_vars) -> str: + """Load and render the system prompt from file. 
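+
+ A Jinja2 template at "<system_prompt_path>.j2" takes precedence when present;
+ otherwise the markdown file itself is loaded and "{key}" placeholders are
+ replaced with the given template variables.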
+ + Args: + **template_vars: Variables to substitute in the template + + Returns: + Rendered system prompt string + + """ + # Get the path relative to this module + module_dir = Path(__file__).parent + prompt_path = module_dir / self.system_prompt_path + + # Check for .j2 extension first (Jinja2 template) + j2_path = Path(str(prompt_path) + ".j2") + if j2_path.exists(): + try: + from jinja2 import Template + + template = Template(j2_path.read_text()) + return template.render(**template_vars) + except ImportError: + # Jinja2 not installed, fall back to reading as plain text + return j2_path.read_text() + + # Check for markdown file + if prompt_path.exists(): + content = prompt_path.read_text() + + # Simple variable substitution if Jinja2 is not available + for key, value in template_vars.items(): + placeholder = f"{{{key}}}" + content = content.replace(placeholder, str(value)) + + return content + raise FileNotFoundError(f"System prompt file not found: {prompt_path}") + + class Config: + """Pydantic config.""" + + env_file = ".env" + env_file_encoding = "utf-8" + extra = "ignore" # Ignore extra environment variables diff --git a/autogpt_platform/backend/backend/api/features/chat/model.py b/autogpt_platform/backend/backend/api/features/chat/model.py new file mode 100644 index 0000000000..b8aea5a334 --- /dev/null +++ b/autogpt_platform/backend/backend/api/features/chat/model.py @@ -0,0 +1,204 @@ +import logging +import uuid +from datetime import UTC, datetime + +from openai.types.chat import ( + ChatCompletionAssistantMessageParam, + ChatCompletionDeveloperMessageParam, + ChatCompletionFunctionMessageParam, + ChatCompletionMessageParam, + ChatCompletionSystemMessageParam, + ChatCompletionToolMessageParam, + ChatCompletionUserMessageParam, +) +from openai.types.chat.chat_completion_assistant_message_param import FunctionCall +from openai.types.chat.chat_completion_message_tool_call_param import ( + ChatCompletionMessageToolCallParam, + Function, +) +from pydantic import BaseModel + +from backend.data.redis_client import get_redis_async +from backend.util.exceptions import RedisError + +from .config import ChatConfig + +logger = logging.getLogger(__name__) +config = ChatConfig() + + +class ChatMessage(BaseModel): + role: str + content: str | None = None + name: str | None = None + tool_call_id: str | None = None + refusal: str | None = None + tool_calls: list[dict] | None = None + function_call: dict | None = None + + +class Usage(BaseModel): + prompt_tokens: int + completion_tokens: int + total_tokens: int + + +class ChatSession(BaseModel): + session_id: str + user_id: str | None + messages: list[ChatMessage] + usage: list[Usage] + credentials: dict[str, dict] = {} # Map of provider -> credential metadata + started_at: datetime + updated_at: datetime + successful_agent_runs: dict[str, int] = {} + successful_agent_schedules: dict[str, int] = {} + + @staticmethod + def new(user_id: str | None) -> "ChatSession": + return ChatSession( + session_id=str(uuid.uuid4()), + user_id=user_id, + messages=[], + usage=[], + credentials={}, + started_at=datetime.now(UTC), + updated_at=datetime.now(UTC), + ) + + def to_openai_messages(self) -> list[ChatCompletionMessageParam]: + messages = [] + for message in self.messages: + if message.role == "developer": + m = ChatCompletionDeveloperMessageParam( + role="developer", + content=message.content or "", + ) + if message.name: + m["name"] = message.name + messages.append(m) + elif message.role == "system": + m = ChatCompletionSystemMessageParam( + role="system", 
+ content=message.content or "", + ) + if message.name: + m["name"] = message.name + messages.append(m) + elif message.role == "user": + m = ChatCompletionUserMessageParam( + role="user", + content=message.content or "", + ) + if message.name: + m["name"] = message.name + messages.append(m) + elif message.role == "assistant": + m = ChatCompletionAssistantMessageParam( + role="assistant", + content=message.content or "", + ) + if message.function_call: + m["function_call"] = FunctionCall( + arguments=message.function_call["arguments"], + name=message.function_call["name"], + ) + if message.refusal: + m["refusal"] = message.refusal + if message.tool_calls: + t: list[ChatCompletionMessageToolCallParam] = [] + for tool_call in message.tool_calls: + # Tool calls are stored with nested structure: {id, type, function: {name, arguments}} + function_data = tool_call.get("function", {}) + + # Skip tool calls that are missing required fields + if "id" not in tool_call or "name" not in function_data: + logger.warning( + f"Skipping invalid tool call: missing required fields. " + f"Got: {tool_call.keys()}, function keys: {function_data.keys()}" + ) + continue + + # Arguments are stored as a JSON string + arguments_str = function_data.get("arguments", "{}") + + t.append( + ChatCompletionMessageToolCallParam( + id=tool_call["id"], + type="function", + function=Function( + arguments=arguments_str, + name=function_data["name"], + ), + ) + ) + m["tool_calls"] = t + if message.name: + m["name"] = message.name + messages.append(m) + elif message.role == "tool": + messages.append( + ChatCompletionToolMessageParam( + role="tool", + content=message.content or "", + tool_call_id=message.tool_call_id or "", + ) + ) + elif message.role == "function": + messages.append( + ChatCompletionFunctionMessageParam( + role="function", + content=message.content, + name=message.name or "", + ) + ) + return messages + + +async def get_chat_session( + session_id: str, + user_id: str | None, +) -> ChatSession | None: + """Get a chat session by ID.""" + redis_key = f"chat:session:{session_id}" + async_redis = await get_redis_async() + + raw_session: bytes | None = await async_redis.get(redis_key) + + if raw_session is None: + logger.warning(f"Session {session_id} not found in Redis") + return None + + try: + session = ChatSession.model_validate_json(raw_session) + except Exception as e: + logger.error(f"Failed to deserialize session {session_id}: {e}", exc_info=True) + raise RedisError(f"Corrupted session data for {session_id}") from e + + if session.user_id is not None and session.user_id != user_id: + logger.warning( + f"Session {session_id} user id mismatch: {session.user_id} != {user_id}" + ) + return None + + return session + + +async def upsert_chat_session( + session: ChatSession, +) -> ChatSession: + """Update a chat session with the given messages.""" + + redis_key = f"chat:session:{session.session_id}" + + async_redis = await get_redis_async() + resp = await async_redis.setex( + redis_key, config.session_ttl, session.model_dump_json() + ) + + if not resp: + raise RedisError( + f"Failed to persist chat session {session.session_id} to Redis: {resp}" + ) + + return session diff --git a/autogpt_platform/backend/backend/api/features/chat/model_test.py b/autogpt_platform/backend/backend/api/features/chat/model_test.py new file mode 100644 index 0000000000..b7f4c8a493 --- /dev/null +++ b/autogpt_platform/backend/backend/api/features/chat/model_test.py @@ -0,0 +1,70 @@ +import pytest + +from .model import ( + ChatMessage, + 
ChatSession, + Usage, + get_chat_session, + upsert_chat_session, +) + +messages = [ + ChatMessage(content="Hello, how are you?", role="user"), + ChatMessage( + content="I'm fine, thank you!", + role="assistant", + tool_calls=[ + { + "id": "t123", + "type": "function", + "function": { + "name": "get_weather", + "arguments": '{"city": "New York"}', + }, + } + ], + ), + ChatMessage( + content="I'm using the tool to get the weather", + role="tool", + tool_call_id="t123", + ), +] + + +@pytest.mark.asyncio(loop_scope="session") +async def test_chatsession_serialization_deserialization(): + s = ChatSession.new(user_id="abc123") + s.messages = messages + s.usage = [Usage(prompt_tokens=100, completion_tokens=200, total_tokens=300)] + serialized = s.model_dump_json() + s2 = ChatSession.model_validate_json(serialized) + assert s2.model_dump() == s.model_dump() + + +@pytest.mark.asyncio(loop_scope="session") +async def test_chatsession_redis_storage(): + + s = ChatSession.new(user_id=None) + s.messages = messages + + s = await upsert_chat_session(s) + + s2 = await get_chat_session( + session_id=s.session_id, + user_id=s.user_id, + ) + + assert s2 == s + + +@pytest.mark.asyncio(loop_scope="session") +async def test_chatsession_redis_storage_user_id_mismatch(): + + s = ChatSession.new(user_id="abc123") + s.messages = messages + s = await upsert_chat_session(s) + + s2 = await get_chat_session(s.session_id, None) + + assert s2 is None diff --git a/autogpt_platform/backend/backend/api/features/chat/prompts/chat_system.md b/autogpt_platform/backend/backend/api/features/chat/prompts/chat_system.md new file mode 100644 index 0000000000..a660ca805e --- /dev/null +++ b/autogpt_platform/backend/backend/api/features/chat/prompts/chat_system.md @@ -0,0 +1,104 @@ +You are Otto, an AI Co-Pilot and Forward Deployed Engineer for AutoGPT, an AI Business Automation tool. Your mission is to help users quickly find and set up AutoGPT agents to solve their business problems. + +Here are the functions available to you: + + +1. **find_agent** - Search for agents that solve the user's problem +2. **run_agent** - Run or schedule an agent (automatically handles setup) + + +## HOW run_agent WORKS + +The `run_agent` tool automatically handles the entire setup flow: + +1. **First call** (no inputs) → Returns available inputs so user can decide what values to use +2. **Credentials check** → If missing, UI automatically prompts user to add them (you don't need to mention this) +3. **Execution** → Runs when you provide `inputs` OR set `use_defaults=true` + +Parameters: +- `username_agent_slug` (required): Agent identifier like "creator/agent-name" +- `inputs`: Object with input values for the agent +- `use_defaults`: Set to `true` to run with default values (only after user confirms) +- `schedule_name` + `cron`: For scheduled execution + +## WORKFLOW + +1. **find_agent** - Search for agents that solve the user's problem +2. **run_agent** (first call, no inputs) - Get available inputs for the agent +3. **Ask user** what values they want to use OR if they want to use defaults +4. **run_agent** (second call) - Either with `inputs={...}` or `use_defaults=true` + +## YOUR APPROACH + +**Step 1: Understand the Problem** +- Ask maximum 1-2 targeted questions +- Focus on: What business problem are they solving? 
+- Move quickly to searching for solutions + +**Step 2: Find Agents** +- Use `find_agent` immediately with relevant keywords +- Suggest the best option from search results +- Explain briefly how it solves their problem + +**Step 3: Get Agent Inputs** +- Call `run_agent(username_agent_slug="creator/agent-name")` without inputs +- This returns the available inputs (required and optional) +- Present these to the user and ask what values they want + +**Step 4: Run with User's Choice** +- If user provides values: `run_agent(username_agent_slug="...", inputs={...})` +- If user says "use defaults": `run_agent(username_agent_slug="...", use_defaults=true)` +- On success, share the agent link with the user + +**For Scheduled Execution:** +- Add `schedule_name` and `cron` parameters +- Example: `run_agent(username_agent_slug="...", inputs={...}, schedule_name="Daily Report", cron="0 9 * * *")` + +## FUNCTION CALL FORMAT + +To call a function, use this exact format: +`function_name(parameter="value")` + +Examples: +- `find_agent(query="social media automation")` +- `run_agent(username_agent_slug="creator/agent-name")` (get inputs) +- `run_agent(username_agent_slug="creator/agent-name", inputs={"topic": "AI news"})` +- `run_agent(username_agent_slug="creator/agent-name", use_defaults=true)` + +## KEY RULES + +**What You DON'T Do:** +- Don't help with login (frontend handles this) +- Don't mention or explain credentials to the user (frontend handles this automatically) +- Don't run agents without first showing available inputs to the user +- Don't use `use_defaults=true` without user explicitly confirming +- Don't write responses longer than 3 sentences + +**What You DO:** +- Always call run_agent first without inputs to see what's available +- Ask user what values they want OR if they want to use defaults +- Keep all responses to maximum 3 sentences +- Include the agent link in your response after successful execution + +**Error Handling:** +- Authentication needed → "Please sign in via the interface" +- Credentials missing → The UI handles this automatically. Focus on asking the user about input values instead. + +## RESPONSE STRUCTURE + +Before responding, wrap your analysis in tags to systematically plan your approach: +- Extract the key business problem or request from the user's message +- Determine what function call (if any) you need to make next +- Plan your response to stay under the 3-sentence maximum + +Example interaction: +``` +User: "Run the AI news agent for me" +Otto: run_agent(username_agent_slug="autogpt/ai-news") +[Tool returns: Agent accepts inputs - Required: topic. Optional: num_articles (default: 5)] +Otto: The AI News agent needs a topic. What topic would you like news about, or should I use the defaults? 
+User: "Use defaults" +Otto: run_agent(username_agent_slug="autogpt/ai-news", use_defaults=true) +``` + +KEEP ANSWERS TO 3 SENTENCES diff --git a/autogpt_platform/backend/backend/api/features/chat/response_model.py b/autogpt_platform/backend/backend/api/features/chat/response_model.py new file mode 100644 index 0000000000..2d38820bd5 --- /dev/null +++ b/autogpt_platform/backend/backend/api/features/chat/response_model.py @@ -0,0 +1,101 @@ +from enum import Enum +from typing import Any + +from pydantic import BaseModel, Field + + +class ResponseType(str, Enum): + """Types of streaming responses.""" + + TEXT_CHUNK = "text_chunk" + TEXT_ENDED = "text_ended" + TOOL_CALL = "tool_call" + TOOL_CALL_START = "tool_call_start" + TOOL_RESPONSE = "tool_response" + ERROR = "error" + USAGE = "usage" + STREAM_END = "stream_end" + + +class StreamBaseResponse(BaseModel): + """Base response model for all streaming responses.""" + + type: ResponseType + timestamp: str | None = None + + def to_sse(self) -> str: + """Convert to SSE format.""" + return f"data: {self.model_dump_json()}\n\n" + + +class StreamTextChunk(StreamBaseResponse): + """Streaming text content from the assistant.""" + + type: ResponseType = ResponseType.TEXT_CHUNK + content: str = Field(..., description="Text content chunk") + + +class StreamToolCallStart(StreamBaseResponse): + """Tool call started notification.""" + + type: ResponseType = ResponseType.TOOL_CALL_START + tool_name: str = Field(..., description="Name of the tool that was executed") + tool_id: str = Field(..., description="Unique tool call ID") + + +class StreamToolCall(StreamBaseResponse): + """Tool invocation notification.""" + + type: ResponseType = ResponseType.TOOL_CALL + tool_id: str = Field(..., description="Unique tool call ID") + tool_name: str = Field(..., description="Name of the tool being called") + arguments: dict[str, Any] = Field( + default_factory=dict, description="Tool arguments" + ) + + +class StreamToolExecutionResult(StreamBaseResponse): + """Tool execution result.""" + + type: ResponseType = ResponseType.TOOL_RESPONSE + tool_id: str = Field(..., description="Tool call ID this responds to") + tool_name: str = Field(..., description="Name of the tool that was executed") + result: str | dict[str, Any] = Field(..., description="Tool execution result") + success: bool = Field( + default=True, description="Whether the tool execution succeeded" + ) + + +class StreamUsage(StreamBaseResponse): + """Token usage statistics.""" + + type: ResponseType = ResponseType.USAGE + prompt_tokens: int + completion_tokens: int + total_tokens: int + + +class StreamError(StreamBaseResponse): + """Error response.""" + + type: ResponseType = ResponseType.ERROR + message: str = Field(..., description="Error message") + code: str | None = Field(default=None, description="Error code") + details: dict[str, Any] | None = Field( + default=None, description="Additional error details" + ) + + +class StreamTextEnded(StreamBaseResponse): + """Text streaming completed marker.""" + + type: ResponseType = ResponseType.TEXT_ENDED + + +class StreamEnd(StreamBaseResponse): + """End of stream marker.""" + + type: ResponseType = ResponseType.STREAM_END + summary: dict[str, Any] | None = Field( + default=None, description="Stream summary statistics" + ) diff --git a/autogpt_platform/backend/backend/api/features/chat/routes.py b/autogpt_platform/backend/backend/api/features/chat/routes.py new file mode 100644 index 0000000000..667335d048 --- /dev/null +++ 
b/autogpt_platform/backend/backend/api/features/chat/routes.py @@ -0,0 +1,219 @@ +"""Chat API routes for chat session management and streaming via SSE.""" + +import logging +from collections.abc import AsyncGenerator +from typing import Annotated + +from autogpt_libs import auth +from fastapi import APIRouter, Depends, Query, Security +from fastapi.responses import StreamingResponse +from pydantic import BaseModel + +from backend.util.exceptions import NotFoundError + +from . import service as chat_service +from .config import ChatConfig + +config = ChatConfig() + + +logger = logging.getLogger(__name__) + +router = APIRouter( + tags=["chat"], +) + +# ========== Request/Response Models ========== + + +class CreateSessionResponse(BaseModel): + """Response model containing information on a newly created chat session.""" + + id: str + created_at: str + user_id: str | None + + +class SessionDetailResponse(BaseModel): + """Response model providing complete details for a chat session, including messages.""" + + id: str + created_at: str + updated_at: str + user_id: str | None + messages: list[dict] + + +# ========== Routes ========== + + +@router.post( + "/sessions", +) +async def create_session( + user_id: Annotated[str | None, Depends(auth.get_user_id)], +) -> CreateSessionResponse: + """ + Create a new chat session. + + Initiates a new chat session for either an authenticated or anonymous user. + + Args: + user_id: The optional authenticated user ID parsed from the JWT. If missing, creates an anonymous session. + + Returns: + CreateSessionResponse: Details of the created session. + + """ + logger.info( + f"Creating session with user_id: " + f"...{user_id[-8:] if user_id and len(user_id) > 8 else ''}" + ) + + session = await chat_service.create_chat_session(user_id) + + return CreateSessionResponse( + id=session.session_id, + created_at=session.started_at.isoformat(), + user_id=session.user_id or None, + ) + + +@router.get( + "/sessions/{session_id}", +) +async def get_session( + session_id: str, + user_id: Annotated[str | None, Depends(auth.get_user_id)], +) -> SessionDetailResponse: + """ + Retrieve the details of a specific chat session. + + Looks up a chat session by ID for the given user (if authenticated) and returns all session data including messages. + + Args: + session_id: The unique identifier for the desired chat session. + user_id: The optional authenticated user ID, or None for anonymous access. + + Returns: + SessionDetailResponse: Details for the requested session; raises NotFoundError if not found. + + """ + session = await chat_service.get_session(session_id, user_id) + if not session: + raise NotFoundError(f"Session {session_id} not found") + return SessionDetailResponse( + id=session.session_id, + created_at=session.started_at.isoformat(), + updated_at=session.updated_at.isoformat(), + user_id=session.user_id or None, + messages=[message.model_dump() for message in session.messages], + ) + + +@router.get( + "/sessions/{session_id}/stream", +) +async def stream_chat( + session_id: str, + message: Annotated[str, Query(min_length=1, max_length=10000)], + user_id: str | None = Depends(auth.get_user_id), + is_user_message: bool = Query(default=True), +): + """ + Stream chat responses for a session. 
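+
+ The session must already exist: unknown session IDs raise NotFoundError before
+ any streaming output is sent.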
+ + Streams the AI/completion responses in real time over Server-Sent Events (SSE), including: + - Text fragments as they are generated + - Tool call UI elements (if invoked) + - Tool execution results + + Args: + session_id: The chat session identifier to associate with the streamed messages. + message: The user's new message to process. + user_id: Optional authenticated user ID. + is_user_message: Whether the message is a user message. + Returns: + StreamingResponse: SSE-formatted response chunks. + + """ + # Validate session exists before starting the stream + # This prevents errors after the response has already started + session = await chat_service.get_session(session_id, user_id) + + if not session: + raise NotFoundError(f"Session {session_id} not found. ") + if session.user_id is None and user_id is not None: + session = await chat_service.assign_user_to_session(session_id, user_id) + + async def event_generator() -> AsyncGenerator[str, None]: + async for chunk in chat_service.stream_chat_completion( + session_id, + message, + is_user_message=is_user_message, + user_id=user_id, + session=session, # Pass pre-fetched session to avoid double-fetch + ): + yield chunk.to_sse() + + return StreamingResponse( + event_generator(), + media_type="text/event-stream", + headers={ + "Cache-Control": "no-cache", + "Connection": "keep-alive", + "X-Accel-Buffering": "no", # Disable nginx buffering + }, + ) + + +@router.patch( + "/sessions/{session_id}/assign-user", + dependencies=[Security(auth.requires_user)], + status_code=200, +) +async def session_assign_user( + session_id: str, + user_id: Annotated[str, Security(auth.get_user_id)], +) -> dict: + """ + Assign an authenticated user to a chat session. + + Used (typically post-login) to claim an existing anonymous session as the current authenticated user. + + Args: + session_id: The identifier for the (previously anonymous) session. + user_id: The authenticated user's ID to associate with the session. + + Returns: + dict: Status of the assignment. + + """ + await chat_service.assign_user_to_session(session_id, user_id) + return {"status": "ok"} + + +# ========== Health Check ========== + + +@router.get("/health", status_code=200) +async def health_check() -> dict: + """ + Health check endpoint for the chat service. + + Performs a full cycle test of session creation, assignment, and retrieval. Should always return healthy + if the service and data layer are operational. + + Returns: + dict: A status dictionary indicating health, service name, and API version. 
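+ Note: each call creates, claims, and re-reads a temporary session in Redis, so a
+ Redis outage surfaces here as an error response rather than a healthy status.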
+ + """ + session = await chat_service.create_chat_session(None) + await chat_service.assign_user_to_session(session.session_id, "test_user") + await chat_service.get_session(session.session_id, "test_user") + + return { + "status": "healthy", + "service": "chat", + "version": "0.1.0", + } diff --git a/autogpt_platform/backend/backend/api/features/chat/service.py b/autogpt_platform/backend/backend/api/features/chat/service.py new file mode 100644 index 0000000000..2d96d4abcd --- /dev/null +++ b/autogpt_platform/backend/backend/api/features/chat/service.py @@ -0,0 +1,538 @@ +import logging +from collections.abc import AsyncGenerator +from datetime import UTC, datetime +from typing import Any + +import orjson +from openai import AsyncOpenAI +from openai.types.chat import ChatCompletionChunk, ChatCompletionToolParam + +from backend.util.exceptions import NotFoundError + +from .config import ChatConfig +from .model import ( + ChatMessage, + ChatSession, + Usage, + get_chat_session, + upsert_chat_session, +) +from .response_model import ( + StreamBaseResponse, + StreamEnd, + StreamError, + StreamTextChunk, + StreamTextEnded, + StreamToolCall, + StreamToolCallStart, + StreamToolExecutionResult, + StreamUsage, +) +from .tools import execute_tool, tools + +logger = logging.getLogger(__name__) + +config = ChatConfig() +client = AsyncOpenAI(api_key=config.api_key, base_url=config.base_url) + + +async def create_chat_session( + user_id: str | None = None, +) -> ChatSession: + """ + Create a new chat session and persist it to the database. + """ + session = ChatSession.new(user_id) + # Persist the session immediately so it can be used for streaming + return await upsert_chat_session(session) + + +async def get_session( + session_id: str, + user_id: str | None = None, +) -> ChatSession | None: + """ + Get a chat session by ID. + """ + return await get_chat_session(session_id, user_id) + + +async def assign_user_to_session( + session_id: str, + user_id: str, +) -> ChatSession: + """ + Assign a user to a chat session. + """ + session = await get_chat_session(session_id, None) + if not session: + raise NotFoundError(f"Session {session_id} not found") + session.user_id = user_id + return await upsert_chat_session(session) + + +async def stream_chat_completion( + session_id: str, + message: str | None = None, + is_user_message: bool = True, + user_id: str | None = None, + retry_count: int = 0, + session: ChatSession | None = None, +) -> AsyncGenerator[StreamBaseResponse, None]: + """Main entry point for streaming chat completions with database handling. + + This function handles all database operations and delegates streaming + to the internal _stream_chat_chunks function. + + Args: + session_id: Chat session ID + user_message: User's input message + user_id: User ID for authentication (None for anonymous) + session: Optional pre-loaded session object (for recursive calls to avoid Redis refetch) + + Yields: + StreamBaseResponse objects formatted as SSE + + Raises: + NotFoundError: If session_id is invalid + ValueError: If max_context_messages is exceeded + + """ + logger.info( + f"Streaming chat completion for session {session_id} for message {message} and user id {user_id}. 
Message is user message: {is_user_message}" + ) + + # Only fetch from Redis if session not provided (initial call) + if session is None: + session = await get_chat_session(session_id, user_id) + logger.info( + f"Fetched session from Redis: {session.session_id if session else 'None'}, " + f"message_count={len(session.messages) if session else 0}" + ) + else: + logger.info( + f"Using provided session object: {session.session_id}, " + f"message_count={len(session.messages)}" + ) + + if not session: + raise NotFoundError( + f"Session {session_id} not found. Please create a new session first." + ) + + if message: + session.messages.append( + ChatMessage( + role="user" if is_user_message else "assistant", content=message + ) + ) + logger.info( + f"Appended message (role={'user' if is_user_message else 'assistant'}), " + f"new message_count={len(session.messages)}" + ) + + if len(session.messages) > config.max_context_messages: + raise ValueError(f"Max messages exceeded: {config.max_context_messages}") + + logger.info( + f"Upserting session: {session.session_id} with user id {session.user_id}, " + f"message_count={len(session.messages)}" + ) + session = await upsert_chat_session(session) + assert session, "Session not found" + + assistant_response = ChatMessage( + role="assistant", + content="", + ) + + has_yielded_end = False + has_yielded_error = False + has_done_tool_call = False + has_received_text = False + text_streaming_ended = False + tool_response_messages: list[ChatMessage] = [] + accumulated_tool_calls: list[dict[str, Any]] = [] + should_retry = False + + try: + async for chunk in _stream_chat_chunks( + session=session, + tools=tools, + ): + + if isinstance(chunk, StreamTextChunk): + content = chunk.content or "" + assert assistant_response.content is not None + assistant_response.content += content + has_received_text = True + yield chunk + elif isinstance(chunk, StreamToolCallStart): + # Emit text_ended before first tool call, but only if we've received text + if has_received_text and not text_streaming_ended: + yield StreamTextEnded() + text_streaming_ended = True + yield chunk + elif isinstance(chunk, StreamToolCall): + # Accumulate tool calls in OpenAI format + accumulated_tool_calls.append( + { + "id": chunk.tool_id, + "type": "function", + "function": { + "name": chunk.tool_name, + "arguments": orjson.dumps(chunk.arguments).decode("utf-8"), + }, + } + ) + elif isinstance(chunk, StreamToolExecutionResult): + result_content = ( + chunk.result + if isinstance(chunk.result, str) + else orjson.dumps(chunk.result).decode("utf-8") + ) + tool_response_messages.append( + ChatMessage( + role="tool", + content=result_content, + tool_call_id=chunk.tool_id, + ) + ) + has_done_tool_call = True + # Track if any tool execution failed + if not chunk.success: + logger.warning( + f"Tool {chunk.tool_name} (ID: {chunk.tool_id}) execution failed" + ) + yield chunk + elif isinstance(chunk, StreamEnd): + if not has_done_tool_call: + has_yielded_end = True + yield chunk + elif isinstance(chunk, StreamError): + has_yielded_error = True + elif isinstance(chunk, StreamUsage): + session.usage.append( + Usage( + prompt_tokens=chunk.prompt_tokens, + completion_tokens=chunk.completion_tokens, + total_tokens=chunk.total_tokens, + ) + ) + else: + logger.error(f"Unknown chunk type: {type(chunk)}", exc_info=True) + except Exception as e: + logger.error(f"Error during stream: {e!s}", exc_info=True) + + # Check if this is a retryable error (JSON parsing, incomplete tool calls, etc.) 
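+ # Only structured parsing failures are retried; anything else falls through to the
+ # error path below, which saves partial progress and reports the failure to the client.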
+ is_retryable = isinstance(e, (orjson.JSONDecodeError, KeyError, TypeError)) + + if is_retryable and retry_count < config.max_retries: + logger.info( + f"Retryable error encountered. Attempt {retry_count + 1}/{config.max_retries}" + ) + should_retry = True + else: + # Non-retryable error or max retries exceeded + # Save any partial progress before reporting error + messages_to_save: list[ChatMessage] = [] + + # Add assistant message if it has content or tool calls + if accumulated_tool_calls: + assistant_response.tool_calls = accumulated_tool_calls + if assistant_response.content or assistant_response.tool_calls: + messages_to_save.append(assistant_response) + + # Add tool response messages after assistant message + messages_to_save.extend(tool_response_messages) + + session.messages.extend(messages_to_save) + await upsert_chat_session(session) + + if not has_yielded_error: + error_message = str(e) + if not is_retryable: + error_message = f"Non-retryable error: {error_message}" + elif retry_count >= config.max_retries: + error_message = ( + f"Max retries ({config.max_retries}) exceeded: {error_message}" + ) + + error_response = StreamError( + message=error_message, + timestamp=datetime.now(UTC).isoformat(), + ) + yield error_response + if not has_yielded_end: + yield StreamEnd( + timestamp=datetime.now(UTC).isoformat(), + ) + return + + # Handle retry outside of exception handler to avoid nesting + if should_retry and retry_count < config.max_retries: + logger.info( + f"Retrying stream_chat_completion for session {session_id}, attempt {retry_count + 1}" + ) + async for chunk in stream_chat_completion( + session_id=session.session_id, + user_id=user_id, + retry_count=retry_count + 1, + session=session, + ): + yield chunk + return # Exit after retry to avoid double-saving in finally block + + # Normal completion path - save session and handle tool call continuation + logger.info( + f"Normal completion path: session={session.session_id}, " + f"current message_count={len(session.messages)}" + ) + + # Build the messages list in the correct order + messages_to_save: list[ChatMessage] = [] + + # Add assistant message with tool_calls if any + if accumulated_tool_calls: + assistant_response.tool_calls = accumulated_tool_calls + logger.info( + f"Added {len(accumulated_tool_calls)} tool calls to assistant message" + ) + if assistant_response.content or assistant_response.tool_calls: + messages_to_save.append(assistant_response) + logger.info( + f"Saving assistant message with content_len={len(assistant_response.content or '')}, tool_calls={len(assistant_response.tool_calls or [])}" + ) + + # Add tool response messages after assistant message + messages_to_save.extend(tool_response_messages) + logger.info( + f"Saving {len(tool_response_messages)} tool response messages, " + f"total_to_save={len(messages_to_save)}" + ) + + session.messages.extend(messages_to_save) + logger.info(f"Extended session messages, new message_count={len(session.messages)}") + await upsert_chat_session(session) + + # If we did a tool call, stream the chat completion again to get the next response + if has_done_tool_call: + logger.info( + "Tool call executed, streaming chat completion again to get assistant response" + ) + async for chunk in stream_chat_completion( + session_id=session.session_id, + user_id=user_id, + session=session, # Pass session object to avoid Redis refetch + ): + yield chunk + + +async def _stream_chat_chunks( + session: ChatSession, + tools: list[ChatCompletionToolParam], +) -> 
AsyncGenerator[StreamBaseResponse, None]: + """ + Pure streaming function for OpenAI chat completions with tool calling. + + This function is database-agnostic and focuses only on streaming logic. + + Args: + messages: Conversation context as ChatCompletionMessageParam list + session_id: Session ID + user_id: User ID for tool execution + + Yields: + SSE formatted JSON response objects + + """ + model = config.model + + logger.info("Starting pure chat stream") + + # Loop to handle tool calls and continue conversation + while True: + try: + logger.info("Creating OpenAI chat completion stream...") + + # Create the stream with proper types + stream = await client.chat.completions.create( + model=model, + messages=session.to_openai_messages(), + tools=tools, + tool_choice="auto", + stream=True, + ) + + # Variables to accumulate tool calls + tool_calls: list[dict[str, Any]] = [] + active_tool_call_idx: int | None = None + finish_reason: str | None = None + # Track which tool call indices have had their start event emitted + emitted_start_for_idx: set[int] = set() + + # Process the stream + chunk: ChatCompletionChunk + async for chunk in stream: + if chunk.usage: + yield StreamUsage( + prompt_tokens=chunk.usage.prompt_tokens, + completion_tokens=chunk.usage.completion_tokens, + total_tokens=chunk.usage.total_tokens, + ) + + if chunk.choices: + choice = chunk.choices[0] + delta = choice.delta + + # Capture finish reason + if choice.finish_reason: + finish_reason = choice.finish_reason + logger.info(f"Finish reason: {finish_reason}") + + # Handle content streaming + if delta.content: + # Stream the text chunk + text_response = StreamTextChunk( + content=delta.content, + timestamp=datetime.now(UTC).isoformat(), + ) + yield text_response + + # Handle tool calls + if delta.tool_calls: + for tc_chunk in delta.tool_calls: + idx = tc_chunk.index + + # Update active tool call index if needed + if ( + active_tool_call_idx is None + or active_tool_call_idx != idx + ): + active_tool_call_idx = idx + + # Ensure we have a tool call object at this index + while len(tool_calls) <= idx: + tool_calls.append( + { + "id": "", + "type": "function", + "function": { + "name": "", + "arguments": "", + }, + }, + ) + + # Accumulate the tool call data + if tc_chunk.id: + tool_calls[idx]["id"] = tc_chunk.id + if tc_chunk.function: + if tc_chunk.function.name: + tool_calls[idx]["function"][ + "name" + ] = tc_chunk.function.name + if tc_chunk.function.arguments: + tool_calls[idx]["function"][ + "arguments" + ] += tc_chunk.function.arguments + + # Emit StreamToolCallStart only after we have the tool call ID + if ( + idx not in emitted_start_for_idx + and tool_calls[idx]["id"] + and tool_calls[idx]["function"]["name"] + ): + yield StreamToolCallStart( + tool_id=tool_calls[idx]["id"], + tool_name=tool_calls[idx]["function"]["name"], + timestamp=datetime.now(UTC).isoformat(), + ) + emitted_start_for_idx.add(idx) + logger.info(f"Stream complete. 
Finish reason: {finish_reason}") + + # Yield all accumulated tool calls after the stream is complete + # This ensures all tool call arguments have been fully received + for idx, tool_call in enumerate(tool_calls): + try: + async for tc in _yield_tool_call(tool_calls, idx, session): + yield tc + except (orjson.JSONDecodeError, KeyError, TypeError) as e: + logger.error( + f"Failed to parse tool call {idx}: {e}", + exc_info=True, + extra={"tool_call": tool_call}, + ) + yield StreamError( + message=f"Invalid tool call arguments for tool {tool_call.get('function', {}).get('name', 'unknown')}: {e}", + timestamp=datetime.now(UTC).isoformat(), + ) + # Re-raise to trigger retry logic in the parent function + raise + + yield StreamEnd( + timestamp=datetime.now(UTC).isoformat(), + ) + return + except Exception as e: + logger.error(f"Error in stream: {e!s}", exc_info=True) + error_response = StreamError( + message=str(e), + timestamp=datetime.now(UTC).isoformat(), + ) + yield error_response + yield StreamEnd( + timestamp=datetime.now(UTC).isoformat(), + ) + return + + +async def _yield_tool_call( + tool_calls: list[dict[str, Any]], + yield_idx: int, + session: ChatSession, +) -> AsyncGenerator[StreamBaseResponse, None]: + """ + Yield a tool call and its execution result. + + Raises: + orjson.JSONDecodeError: If tool call arguments cannot be parsed as JSON + KeyError: If expected tool call fields are missing + TypeError: If tool call structure is invalid + """ + logger.info(f"Yielding tool call: {tool_calls[yield_idx]}") + + # Parse tool call arguments - exceptions will propagate to caller + arguments = orjson.loads(tool_calls[yield_idx]["function"]["arguments"]) + + yield StreamToolCall( + tool_id=tool_calls[yield_idx]["id"], + tool_name=tool_calls[yield_idx]["function"]["name"], + arguments=arguments, + timestamp=datetime.now(UTC).isoformat(), + ) + + tool_execution_response: StreamToolExecutionResult = await execute_tool( + tool_name=tool_calls[yield_idx]["function"]["name"], + parameters=arguments, + tool_call_id=tool_calls[yield_idx]["id"], + user_id=session.user_id, + session=session, + ) + logger.info(f"Yielding Tool execution response: {tool_execution_response}") + yield tool_execution_response + + +if __name__ == "__main__": + import asyncio + + async def main(): + session = await create_chat_session() + async for chunk in stream_chat_completion( + session.session_id, + "Please find me an agent that can help me with my business. Call the tool twice once with the query 'money printing agent' and once with the query 'money generating agent'", + user_id=session.user_id, + ): + print(chunk) + + asyncio.run(main()) diff --git a/autogpt_platform/backend/backend/api/features/chat/service_test.py b/autogpt_platform/backend/backend/api/features/chat/service_test.py new file mode 100644 index 0000000000..d1af22a71a --- /dev/null +++ b/autogpt_platform/backend/backend/api/features/chat/service_test.py @@ -0,0 +1,81 @@ +import logging +from os import getenv + +import pytest + +from . import service as chat_service +from .response_model import ( + StreamEnd, + StreamError, + StreamTextChunk, + StreamToolExecutionResult, +) + +logger = logging.getLogger(__name__) + + +@pytest.mark.asyncio(loop_scope="session") +async def test_stream_chat_completion(): + """ + Test the stream_chat_completion function. 
+ """ + api_key: str | None = getenv("OPEN_ROUTER_API_KEY") + if not api_key: + return pytest.skip("OPEN_ROUTER_API_KEY is not set, skipping test") + + session = await chat_service.create_chat_session() + + has_errors = False + has_ended = False + assistant_message = "" + async for chunk in chat_service.stream_chat_completion( + session.session_id, "Hello, how are you?", user_id=session.user_id + ): + logger.info(chunk) + if isinstance(chunk, StreamError): + has_errors = True + if isinstance(chunk, StreamTextChunk): + assistant_message += chunk.content + if isinstance(chunk, StreamEnd): + has_ended = True + + assert has_ended, "Chat completion did not end" + assert not has_errors, "Error occurred while streaming chat completion" + assert assistant_message, "Assistant message is empty" + + +@pytest.mark.asyncio(loop_scope="session") +async def test_stream_chat_completion_with_tool_calls(): + """ + Test the stream_chat_completion function. + """ + api_key: str | None = getenv("OPEN_ROUTER_API_KEY") + if not api_key: + return pytest.skip("OPEN_ROUTER_API_KEY is not set, skipping test") + + session = await chat_service.create_chat_session() + session = await chat_service.upsert_chat_session(session) + + has_errors = False + has_ended = False + had_tool_calls = False + async for chunk in chat_service.stream_chat_completion( + session.session_id, + "Please find me an agent that can help me with my business. Use the query 'moneny printing agent'", + user_id=session.user_id, + ): + logger.info(chunk) + if isinstance(chunk, StreamError): + has_errors = True + + if isinstance(chunk, StreamEnd): + has_ended = True + if isinstance(chunk, StreamToolExecutionResult): + had_tool_calls = True + + assert has_ended, "Chat completion did not end" + assert not has_errors, "Error occurred while streaming chat completion" + assert had_tool_calls, "Tool calls did not occur" + session = await chat_service.get_session(session.session_id) + assert session, "Session not found" + assert session.usage, "Usage is empty" diff --git a/autogpt_platform/backend/backend/api/features/chat/tools/__init__.py b/autogpt_platform/backend/backend/api/features/chat/tools/__init__.py new file mode 100644 index 0000000000..5b9b8649a8 --- /dev/null +++ b/autogpt_platform/backend/backend/api/features/chat/tools/__init__.py @@ -0,0 +1,41 @@ +from typing import TYPE_CHECKING, Any + +from openai.types.chat import ChatCompletionToolParam + +from backend.api.features.chat.model import ChatSession + +from .base import BaseTool +from .find_agent import FindAgentTool +from .run_agent import RunAgentTool + +if TYPE_CHECKING: + from backend.api.features.chat.response_model import StreamToolExecutionResult + +# Initialize tool instances +find_agent_tool = FindAgentTool() +run_agent_tool = RunAgentTool() + +# Export tools as OpenAI format +tools: list[ChatCompletionToolParam] = [ + find_agent_tool.as_openai_tool(), + run_agent_tool.as_openai_tool(), +] + + +async def execute_tool( + tool_name: str, + parameters: dict[str, Any], + user_id: str | None, + session: ChatSession, + tool_call_id: str, +) -> "StreamToolExecutionResult": + + tool_map: dict[str, BaseTool] = { + "find_agent": find_agent_tool, + "run_agent": run_agent_tool, + } + if tool_name not in tool_map: + raise ValueError(f"Tool {tool_name} not found") + return await tool_map[tool_name].execute( + user_id, session, tool_call_id, **parameters + ) diff --git a/autogpt_platform/backend/backend/api/features/chat/tools/_test_data.py 
b/autogpt_platform/backend/backend/api/features/chat/tools/_test_data.py new file mode 100644 index 0000000000..f75b7bb0d0 --- /dev/null +++ b/autogpt_platform/backend/backend/api/features/chat/tools/_test_data.py @@ -0,0 +1,464 @@ +import uuid +from datetime import UTC, datetime +from os import getenv + +import pytest +from pydantic import SecretStr + +from backend.api.features.chat.model import ChatSession +from backend.api.features.store import db as store_db +from backend.blocks.firecrawl.scrape import FirecrawlScrapeBlock +from backend.blocks.io import AgentInputBlock, AgentOutputBlock +from backend.blocks.llm import AITextGeneratorBlock +from backend.data.db import prisma +from backend.data.graph import Graph, Link, Node, create_graph +from backend.data.model import APIKeyCredentials +from backend.data.user import get_or_create_user +from backend.integrations.credentials_store import IntegrationCredentialsStore + + +def make_session(user_id: str | None = None): + return ChatSession( + session_id=str(uuid.uuid4()), + user_id=user_id, + messages=[], + usage=[], + started_at=datetime.now(UTC), + updated_at=datetime.now(UTC), + successful_agent_runs={}, + successful_agent_schedules={}, + ) + + +@pytest.fixture(scope="session") +async def setup_test_data(): + """ + Set up test data for run_agent tests: + 1. Create a test user + 2. Create a test graph (agent input -> agent output) + 3. Create a store listing and store listing version + 4. Approve the store listing version + """ + # 1. Create a test user + user_data = { + "sub": f"test-user-{uuid.uuid4()}", + "email": f"test-{uuid.uuid4()}@example.com", + } + user = await get_or_create_user(user_data) + + # 1b. Create a profile with username for the user (required for store agent lookup) + username = user.email.split("@")[0] + await prisma.profile.create( + data={ + "userId": user.id, + "username": username, + "name": f"Test User {username}", + "description": "Test user profile", + "links": [], # Required field - empty array for test profiles + } + ) + + # 2. Create a test graph with agent input -> agent output + graph_id = str(uuid.uuid4()) + + # Create input node + input_node_id = str(uuid.uuid4()) + input_block = AgentInputBlock() + input_node = Node( + id=input_node_id, + block_id=input_block.id, + input_default={ + "name": "test_input", + "title": "Test Input", + "value": "", + "advanced": False, + "description": "Test input field", + "placeholder_values": [], + }, + metadata={"position": {"x": 0, "y": 0}}, + ) + + # Create output node + output_node_id = str(uuid.uuid4()) + output_block = AgentOutputBlock() + output_node = Node( + id=output_node_id, + block_id=output_block.id, + input_default={ + "name": "test_output", + "title": "Test Output", + "value": "", + "format": "", + "advanced": False, + "description": "Test output field", + }, + metadata={"position": {"x": 200, "y": 0}}, + ) + + # Create link from input to output + link = Link( + source_id=input_node_id, + sink_id=output_node_id, + source_name="result", + sink_name="value", + is_static=True, + ) + + # Create the graph + graph = Graph( + id=graph_id, + version=1, + is_active=True, + name="Test Agent", + description="A simple test agent for testing", + nodes=[input_node, output_node], + links=[link], + ) + + created_graph = await create_graph(graph, user.id) + + # 3. 
Create a store listing and store listing version for the agent + # Use unique slug to avoid constraint violations + unique_slug = f"test-agent-{str(uuid.uuid4())[:8]}" + store_submission = await store_db.create_store_submission( + user_id=user.id, + agent_id=created_graph.id, + agent_version=created_graph.version, + slug=unique_slug, + name="Test Agent", + description="A simple test agent", + sub_heading="Test agent for unit tests", + categories=["testing"], + image_urls=["https://example.com/image.jpg"], + ) + + assert store_submission.store_listing_version_id is not None + # 4. Approve the store listing version + await store_db.review_store_submission( + store_listing_version_id=store_submission.store_listing_version_id, + is_approved=True, + external_comments="Approved for testing", + internal_comments="Test approval", + reviewer_id=user.id, + ) + + return { + "user": user, + "graph": created_graph, + "store_submission": store_submission, + } + + +@pytest.fixture(scope="session") +async def setup_llm_test_data(): + """ + Set up test data for LLM agent tests: + 1. Create a test user + 2. Create test OpenAI credentials for the user + 3. Create a test graph with input -> LLM block -> output + 4. Create and approve a store listing + """ + key = getenv("OPENAI_API_KEY") + if not key: + return pytest.skip("OPENAI_API_KEY is not set") + + # 1. Create a test user + user_data = { + "sub": f"test-user-{uuid.uuid4()}", + "email": f"test-{uuid.uuid4()}@example.com", + } + user = await get_or_create_user(user_data) + + # 1b. Create a profile with username for the user (required for store agent lookup) + username = user.email.split("@")[0] + await prisma.profile.create( + data={ + "userId": user.id, + "username": username, + "name": f"Test User {username}", + "description": "Test user profile for LLM tests", + "links": [], # Required field - empty array for test profiles + } + ) + + # 2. Create test OpenAI credentials for the user + credentials = APIKeyCredentials( + id=str(uuid.uuid4()), + provider="openai", + api_key=SecretStr("test-openai-api-key"), + title="Test OpenAI API Key", + expires_at=None, + ) + + # Store the credentials + creds_store = IntegrationCredentialsStore() + await creds_store.add_creds(user.id, credentials) + + # 3. 
Create a test graph with input -> LLM block -> output + graph_id = str(uuid.uuid4()) + + # Create input node for the prompt + input_node_id = str(uuid.uuid4()) + input_block = AgentInputBlock() + input_node = Node( + id=input_node_id, + block_id=input_block.id, + input_default={ + "name": "user_prompt", + "title": "User Prompt", + "value": "", + "advanced": False, + "description": "Prompt for the LLM", + "placeholder_values": [], + }, + metadata={"position": {"x": 0, "y": 0}}, + ) + + # Create LLM block node + llm_node_id = str(uuid.uuid4()) + llm_block = AITextGeneratorBlock() + llm_node = Node( + id=llm_node_id, + block_id=llm_block.id, + input_default={ + "model": "gpt-4o-mini", + "sys_prompt": "You are a helpful assistant.", + "retry": 3, + "prompt_values": {}, + "credentials": { + "provider": "openai", + "id": credentials.id, + "type": "api_key", + "title": credentials.title, + }, + }, + metadata={"position": {"x": 300, "y": 0}}, + ) + + # Create output node + output_node_id = str(uuid.uuid4()) + output_block = AgentOutputBlock() + output_node = Node( + id=output_node_id, + block_id=output_block.id, + input_default={ + "name": "llm_response", + "title": "LLM Response", + "value": "", + "format": "", + "advanced": False, + "description": "Response from the LLM", + }, + metadata={"position": {"x": 600, "y": 0}}, + ) + + # Create links + # Link input.result -> llm.prompt + link1 = Link( + source_id=input_node_id, + sink_id=llm_node_id, + source_name="result", + sink_name="prompt", + is_static=True, + ) + + # Link llm.response -> output.value + link2 = Link( + source_id=llm_node_id, + sink_id=output_node_id, + source_name="response", + sink_name="value", + is_static=False, + ) + + # Create the graph + graph = Graph( + id=graph_id, + version=1, + is_active=True, + name="LLM Test Agent", + description="An agent that uses an LLM to process text", + nodes=[input_node, llm_node, output_node], + links=[link1, link2], + ) + + created_graph = await create_graph(graph, user.id) + + # 4. Create and approve a store listing + unique_slug = f"llm-test-agent-{str(uuid.uuid4())[:8]}" + store_submission = await store_db.create_store_submission( + user_id=user.id, + agent_id=created_graph.id, + agent_version=created_graph.version, + slug=unique_slug, + name="LLM Test Agent", + description="An agent with LLM capabilities", + sub_heading="Test agent with OpenAI integration", + categories=["testing", "ai"], + image_urls=["https://example.com/image.jpg"], + ) + assert store_submission.store_listing_version_id is not None + await store_db.review_store_submission( + store_listing_version_id=store_submission.store_listing_version_id, + is_approved=True, + external_comments="Approved for testing", + internal_comments="Test approval for LLM agent", + reviewer_id=user.id, + ) + + return { + "user": user, + "graph": created_graph, + "credentials": credentials, + "store_submission": store_submission, + } + + +@pytest.fixture(scope="session") +async def setup_firecrawl_test_data(): + """ + Set up test data for Firecrawl agent tests (missing credentials scenario): + 1. Create a test user (WITHOUT Firecrawl credentials) + 2. Create a test graph with input -> Firecrawl block -> output + 3. Create and approve a store listing + """ + # 1. Create a test user + user_data = { + "sub": f"test-user-{uuid.uuid4()}", + "email": f"test-{uuid.uuid4()}@example.com", + } + user = await get_or_create_user(user_data) + + # 1b. 
Create a profile with username for the user (required for store agent lookup) + username = user.email.split("@")[0] + await prisma.profile.create( + data={ + "userId": user.id, + "username": username, + "name": f"Test User {username}", + "description": "Test user profile for Firecrawl tests", + "links": [], # Required field - empty array for test profiles + } + ) + + # NOTE: We deliberately do NOT create Firecrawl credentials for this user + # This tests the scenario where required credentials are missing + + # 2. Create a test graph with input -> Firecrawl block -> output + graph_id = str(uuid.uuid4()) + + # Create input node for the URL + input_node_id = str(uuid.uuid4()) + input_block = AgentInputBlock() + input_node = Node( + id=input_node_id, + block_id=input_block.id, + input_default={ + "name": "url", + "title": "URL to Scrape", + "value": "", + "advanced": False, + "description": "URL for Firecrawl to scrape", + "placeholder_values": [], + }, + metadata={"position": {"x": 0, "y": 0}}, + ) + + # Create Firecrawl block node + firecrawl_node_id = str(uuid.uuid4()) + firecrawl_block = FirecrawlScrapeBlock() + firecrawl_node = Node( + id=firecrawl_node_id, + block_id=firecrawl_block.id, + input_default={ + "limit": 10, + "only_main_content": True, + "max_age": 3600000, + "wait_for": 200, + "formats": ["markdown"], + "credentials": { + "provider": "firecrawl", + "id": "test-firecrawl-id", + "type": "api_key", + "title": "Firecrawl API Key", + }, + }, + metadata={"position": {"x": 300, "y": 0}}, + ) + + # Create output node + output_node_id = str(uuid.uuid4()) + output_block = AgentOutputBlock() + output_node = Node( + id=output_node_id, + block_id=output_block.id, + input_default={ + "name": "scraped_data", + "title": "Scraped Data", + "value": "", + "format": "", + "advanced": False, + "description": "Data scraped by Firecrawl", + }, + metadata={"position": {"x": 600, "y": 0}}, + ) + + # Create links + # Link input.result -> firecrawl.url + link1 = Link( + source_id=input_node_id, + sink_id=firecrawl_node_id, + source_name="result", + sink_name="url", + is_static=True, + ) + + # Link firecrawl.markdown -> output.value + link2 = Link( + source_id=firecrawl_node_id, + sink_id=output_node_id, + source_name="markdown", + sink_name="value", + is_static=False, + ) + + # Create the graph + graph = Graph( + id=graph_id, + version=1, + is_active=True, + name="Firecrawl Test Agent", + description="An agent that uses Firecrawl to scrape websites", + nodes=[input_node, firecrawl_node, output_node], + links=[link1, link2], + ) + + created_graph = await create_graph(graph, user.id) + + # 3. 
Create and approve a store listing + unique_slug = f"firecrawl-test-agent-{str(uuid.uuid4())[:8]}" + store_submission = await store_db.create_store_submission( + user_id=user.id, + agent_id=created_graph.id, + agent_version=created_graph.version, + slug=unique_slug, + name="Firecrawl Test Agent", + description="An agent with Firecrawl integration (no credentials)", + sub_heading="Test agent requiring Firecrawl credentials", + categories=["testing", "scraping"], + image_urls=["https://example.com/image.jpg"], + ) + assert store_submission.store_listing_version_id is not None + await store_db.review_store_submission( + store_listing_version_id=store_submission.store_listing_version_id, + is_approved=True, + external_comments="Approved for testing", + internal_comments="Test approval for Firecrawl agent", + reviewer_id=user.id, + ) + + return { + "user": user, + "graph": created_graph, + "store_submission": store_submission, + } diff --git a/autogpt_platform/backend/backend/api/features/chat/tools/base.py b/autogpt_platform/backend/backend/api/features/chat/tools/base.py new file mode 100644 index 0000000000..b4c9d8d731 --- /dev/null +++ b/autogpt_platform/backend/backend/api/features/chat/tools/base.py @@ -0,0 +1,119 @@ +"""Base classes and shared utilities for chat tools.""" + +import logging +from typing import Any + +from openai.types.chat import ChatCompletionToolParam + +from backend.api.features.chat.model import ChatSession +from backend.api.features.chat.response_model import StreamToolExecutionResult + +from .models import ErrorResponse, NeedLoginResponse, ToolResponseBase + +logger = logging.getLogger(__name__) + + +class BaseTool: + """Base class for all chat tools.""" + + @property + def name(self) -> str: + """Tool name for OpenAI function calling.""" + raise NotImplementedError + + @property + def description(self) -> str: + """Tool description for OpenAI.""" + raise NotImplementedError + + @property + def parameters(self) -> dict[str, Any]: + """Tool parameters schema for OpenAI.""" + raise NotImplementedError + + @property + def requires_auth(self) -> bool: + """Whether this tool requires authentication.""" + return False + + def as_openai_tool(self) -> ChatCompletionToolParam: + """Convert to OpenAI tool format.""" + return ChatCompletionToolParam( + type="function", + function={ + "name": self.name, + "description": self.description, + "parameters": self.parameters, + }, + ) + + async def execute( + self, + user_id: str | None, + session: ChatSession, + tool_call_id: str, + **kwargs, + ) -> StreamToolExecutionResult: + """Execute the tool with authentication check. 
+ + Args: + user_id: User ID (may be anonymous like "anon_123") + session: The active chat session + tool_call_id: ID of the tool call being executed + **kwargs: Tool-specific parameters + + Returns: + Pydantic response object + + """ + if self.requires_auth and not user_id: + logger.error( + f"Attempted tool call for {self.name} but user not authenticated" + ) + return StreamToolExecutionResult( + tool_id=tool_call_id, + tool_name=self.name, + result=NeedLoginResponse( + message=f"Please sign in to use {self.name}", + session_id=session.session_id, + ).model_dump_json(), + success=False, + ) + + try: + result = await self._execute(user_id, session, **kwargs) + return StreamToolExecutionResult( + tool_id=tool_call_id, + tool_name=self.name, + result=result.model_dump_json(), + ) + except Exception as e: + logger.error(f"Error in {self.name}: {e}", exc_info=True) + return StreamToolExecutionResult( + tool_id=tool_call_id, + tool_name=self.name, + result=ErrorResponse( + message=f"An error occurred while executing {self.name}", + error=str(e), + session_id=session.session_id, + ).model_dump_json(), + success=False, + ) + + async def _execute( + self, + user_id: str | None, + session: ChatSession, + **kwargs, + ) -> ToolResponseBase: + """Internal execution logic to be implemented by subclasses. + + Args: + user_id: User ID (authenticated or anonymous) + session: The active chat session + **kwargs: Tool-specific parameters + + Returns: + Pydantic response object + + """ + raise NotImplementedError diff --git a/autogpt_platform/backend/backend/api/features/chat/tools/find_agent.py b/autogpt_platform/backend/backend/api/features/chat/tools/find_agent.py new file mode 100644 index 0000000000..3ad071f412 --- /dev/null +++ b/autogpt_platform/backend/backend/api/features/chat/tools/find_agent.py @@ -0,0 +1,129 @@ +"""Tool for discovering agents from marketplace and user library.""" + +import logging +from typing import Any + +from backend.api.features.chat.model import ChatSession +from backend.api.features.store import db as store_db +from backend.util.exceptions import DatabaseError, NotFoundError + +from .base import BaseTool +from .models import ( + AgentCarouselResponse, + AgentInfo, + ErrorResponse, + NoResultsResponse, + ToolResponseBase, +) + +logger = logging.getLogger(__name__) + + +class FindAgentTool(BaseTool): + """Tool for discovering agents based on user needs.""" + + @property + def name(self) -> str: + return "find_agent" + + @property + def description(self) -> str: + return ( + "Discover agents from the marketplace based on capabilities and user needs." + ) + + @property + def parameters(self) -> dict[str, Any]: + return { + "type": "object", + "properties": { + "query": { + "type": "string", + "description": "Search query describing what the user wants to accomplish. Use single keywords for best results.", + }, + }, + "required": ["query"], + } + + async def _execute( + self, + user_id: str | None, + session: ChatSession, + **kwargs, + ) -> ToolResponseBase: + """Search for agents in the marketplace. 
+ + Args: + user_id: User ID (may be anonymous) + session: The active chat session + query: Search query + + Returns: + AgentCarouselResponse: List of agents found in the marketplace + NoResultsResponse: No agents found in the marketplace + ErrorResponse: Error message + """ + query = kwargs.get("query", "").strip() + session_id = session.session_id + if not query: + return ErrorResponse( + message="Please provide a search query", + session_id=session_id, + ) + agents = [] + try: + logger.info(f"Searching marketplace for: {query}") + store_results = await store_db.get_store_agents( + search_query=query, + page_size=5, + ) + + logger.info(f"Find agents tool found {len(store_results.agents)} agents") + for agent in store_results.agents: + agent_id = f"{agent.creator}/{agent.slug}" + logger.info(f"Building agent ID = {agent_id}") + agents.append( + AgentInfo( + id=agent_id, + name=agent.agent_name, + description=agent.description or "", + source="marketplace", + in_library=False, + creator=agent.creator, + category="general", + rating=agent.rating, + runs=agent.runs, + is_featured=False, + ), + ) + except NotFoundError: + pass + except DatabaseError as e: + logger.error(f"Error searching agents: {e}", exc_info=True) + return ErrorResponse( + message="Failed to search for agents. Please try again.", + error=str(e), + session_id=session_id, + ) + if not agents: + return NoResultsResponse( + message=f"No agents found matching '{query}'. Try different keywords or browse the marketplace. If three consecutive find_agent calls return no agents, stop trying and ask the user if there is anything else you can help with.", + session_id=session_id, + suggestions=[ + "Try more general terms", + "Browse categories in the marketplace", + "Check spelling", + ], + ) + + # Return formatted carousel + title = ( + f"Found {len(agents)} agent{'s' if len(agents) != 1 else ''} for '{query}'" + ) + return AgentCarouselResponse( + message="Now you have found some options for the user to choose from. You can add a link to a recommended agent at /marketplace/agent/agent_id. Please ask the user if they would like to use any of these agents. 
If they do, please call the get_agent_details tool for this agent.", + title=title, + agents=agents, + count=len(agents), + session_id=session_id, + ) diff --git a/autogpt_platform/backend/backend/api/features/chat/tools/models.py b/autogpt_platform/backend/backend/api/features/chat/tools/models.py new file mode 100644 index 0000000000..a3fbbe025c --- /dev/null +++ b/autogpt_platform/backend/backend/api/features/chat/tools/models.py @@ -0,0 +1,175 @@ +"""Pydantic models for tool responses.""" + +from enum import Enum +from typing import Any + +from pydantic import BaseModel, Field + +from backend.data.model import CredentialsMetaInput + + +class ResponseType(str, Enum): + """Types of tool responses.""" + + AGENT_CAROUSEL = "agent_carousel" + AGENT_DETAILS = "agent_details" + SETUP_REQUIREMENTS = "setup_requirements" + EXECUTION_STARTED = "execution_started" + NEED_LOGIN = "need_login" + ERROR = "error" + NO_RESULTS = "no_results" + SUCCESS = "success" + + +# Base response model +class ToolResponseBase(BaseModel): + """Base model for all tool responses.""" + + type: ResponseType + message: str + session_id: str | None = None + + +# Agent discovery models +class AgentInfo(BaseModel): + """Information about an agent.""" + + id: str + name: str + description: str + source: str = Field(description="marketplace or library") + in_library: bool = False + creator: str | None = None + category: str | None = None + rating: float | None = None + runs: int | None = None + is_featured: bool | None = None + status: str | None = None + can_access_graph: bool | None = None + has_external_trigger: bool | None = None + new_output: bool | None = None + graph_id: str | None = None + + +class AgentCarouselResponse(ToolResponseBase): + """Response for find_agent tool.""" + + type: ResponseType = ResponseType.AGENT_CAROUSEL + title: str = "Available Agents" + agents: list[AgentInfo] + count: int + name: str = "agent_carousel" + + +class NoResultsResponse(ToolResponseBase): + """Response when no agents found.""" + + type: ResponseType = ResponseType.NO_RESULTS + suggestions: list[str] = [] + name: str = "no_results" + + +# Agent details models +class InputField(BaseModel): + """Input field specification.""" + + name: str + type: str = "string" + description: str = "" + required: bool = False + default: Any | None = None + options: list[Any] | None = None + format: str | None = None + + +class ExecutionOptions(BaseModel): + """Available execution options for an agent.""" + + manual: bool = True + scheduled: bool = True + webhook: bool = False + + +class AgentDetails(BaseModel): + """Detailed agent information.""" + + id: str + name: str + description: str + in_library: bool = False + inputs: dict[str, Any] = {} + credentials: list[CredentialsMetaInput] = [] + execution_options: ExecutionOptions = Field(default_factory=ExecutionOptions) + trigger_info: dict[str, Any] | None = None + + +class AgentDetailsResponse(ToolResponseBase): + """Response for get_details action.""" + + type: ResponseType = ResponseType.AGENT_DETAILS + agent: AgentDetails + user_authenticated: bool = False + graph_id: str | None = None + graph_version: int | None = None + + +# Setup info models +class UserReadiness(BaseModel): + """User readiness status.""" + + has_all_credentials: bool = False + missing_credentials: dict[str, Any] = {} + ready_to_run: bool = False + + +class SetupInfo(BaseModel): + """Complete setup information.""" + + agent_id: str + agent_name: str + requirements: dict[str, list[Any]] = Field( + default_factory=lambda: { + 
"credentials": [], + "inputs": [], + "execution_modes": [], + }, + ) + user_readiness: UserReadiness = Field(default_factory=UserReadiness) + + +class SetupRequirementsResponse(ToolResponseBase): + """Response for validate action.""" + + type: ResponseType = ResponseType.SETUP_REQUIREMENTS + setup_info: SetupInfo + graph_id: str | None = None + graph_version: int | None = None + + +# Execution models +class ExecutionStartedResponse(ToolResponseBase): + """Response for run/schedule actions.""" + + type: ResponseType = ResponseType.EXECUTION_STARTED + execution_id: str + graph_id: str + graph_name: str + library_agent_id: str | None = None + library_agent_link: str | None = None + status: str = "QUEUED" + + +# Auth/error models +class NeedLoginResponse(ToolResponseBase): + """Response when login is needed.""" + + type: ResponseType = ResponseType.NEED_LOGIN + agent_info: dict[str, Any] | None = None + + +class ErrorResponse(ToolResponseBase): + """Response for errors.""" + + type: ResponseType = ResponseType.ERROR + error: str | None = None + details: dict[str, Any] | None = None diff --git a/autogpt_platform/backend/backend/api/features/chat/tools/run_agent.py b/autogpt_platform/backend/backend/api/features/chat/tools/run_agent.py new file mode 100644 index 0000000000..931e075021 --- /dev/null +++ b/autogpt_platform/backend/backend/api/features/chat/tools/run_agent.py @@ -0,0 +1,501 @@ +"""Unified tool for agent operations with automatic state detection.""" + +import logging +from typing import Any + +from pydantic import BaseModel, Field, field_validator + +from backend.api.features.chat.config import ChatConfig +from backend.api.features.chat.model import ChatSession +from backend.data.graph import GraphModel +from backend.data.model import CredentialsMetaInput +from backend.data.user import get_user_by_id +from backend.executor import utils as execution_utils +from backend.util.clients import get_scheduler_client +from backend.util.exceptions import DatabaseError, NotFoundError +from backend.util.timezone_utils import ( + convert_utc_time_to_user_timezone, + get_user_timezone_or_utc, +) + +from .base import BaseTool +from .models import ( + AgentDetails, + AgentDetailsResponse, + ErrorResponse, + ExecutionOptions, + ExecutionStartedResponse, + SetupInfo, + SetupRequirementsResponse, + ToolResponseBase, + UserReadiness, +) +from .utils import ( + check_user_has_required_credentials, + extract_credentials_from_schema, + fetch_graph_from_store_slug, + get_or_create_library_agent, + match_user_credentials_to_graph, +) + +logger = logging.getLogger(__name__) +config = ChatConfig() + +# Constants for response messages +MSG_DO_NOT_RUN_AGAIN = "Do not run again unless explicitly requested." +MSG_DO_NOT_SCHEDULE_AGAIN = "Do not schedule again unless explicitly requested." +MSG_ASK_USER_FOR_VALUES = ( + "Ask the user what values to use, or call again with use_defaults=true " + "to run with default values." +) +MSG_WHAT_VALUES_TO_USE = ( + "What values would you like to use, or would you like to run with defaults?" 
+) + + +class RunAgentInput(BaseModel): + """Input parameters for the run_agent tool.""" + + username_agent_slug: str = "" + inputs: dict[str, Any] = Field(default_factory=dict) + use_defaults: bool = False + schedule_name: str = "" + cron: str = "" + timezone: str = "UTC" + + @field_validator( + "username_agent_slug", "schedule_name", "cron", "timezone", mode="before" + ) + @classmethod + def strip_strings(cls, v: Any) -> Any: + """Strip whitespace from string fields.""" + return v.strip() if isinstance(v, str) else v + + +class RunAgentTool(BaseTool): + """Unified tool for agent operations with automatic state detection. + + The tool automatically determines what to do based on provided parameters: + 1. Fetches agent details (always, silently) + 2. Checks if required inputs are provided + 3. Checks if user has required credentials + 4. Runs immediately OR schedules (if cron is provided) + + The response tells the caller what's missing or confirms execution. + """ + + @property + def name(self) -> str: + return "run_agent" + + @property + def description(self) -> str: + return """Run or schedule an agent from the marketplace. + + The tool automatically handles the setup flow: + - Returns missing inputs if required fields are not provided + - Returns missing credentials if user needs to configure them + - Executes immediately if all requirements are met + - Schedules execution if cron expression is provided + + For scheduled execution, provide: schedule_name, cron, and optionally timezone.""" + + @property + def parameters(self) -> dict[str, Any]: + return { + "type": "object", + "properties": { + "username_agent_slug": { + "type": "string", + "description": "Agent identifier in format 'username/agent-name'", + }, + "inputs": { + "type": "object", + "description": "Input values for the agent", + "additionalProperties": True, + }, + "use_defaults": { + "type": "boolean", + "description": "Set to true to run with default values (user must confirm)", + }, + "schedule_name": { + "type": "string", + "description": "Name for scheduled execution (triggers scheduling mode)", + }, + "cron": { + "type": "string", + "description": "Cron expression (5 fields: min hour day month weekday)", + }, + "timezone": { + "type": "string", + "description": "IANA timezone for schedule (default: UTC)", + }, + }, + "required": ["username_agent_slug"], + } + + @property + def requires_auth(self) -> bool: + """All operations require authentication.""" + return True + + async def _execute( + self, + user_id: str | None, + session: ChatSession, + **kwargs, + ) -> ToolResponseBase: + """Execute the tool with automatic state detection.""" + params = RunAgentInput(**kwargs) + session_id = session.session_id + + # Validate agent slug format + if not params.username_agent_slug or "/" not in params.username_agent_slug: + return ErrorResponse( + message="Please provide an agent slug in format 'username/agent-name'", + session_id=session_id, + ) + + # Auth is required + if not user_id: + return ErrorResponse( + message="Authentication required. 
Please sign in to use this tool.", + session_id=session_id, + ) + + # Determine if this is a schedule request + is_schedule = bool(params.schedule_name or params.cron) + + try: + # Step 1: Fetch agent details (always happens first) + username, agent_name = params.username_agent_slug.split("/", 1) + graph, store_agent = await fetch_graph_from_store_slug(username, agent_name) + + if not graph: + return ErrorResponse( + message=f"Agent '{params.username_agent_slug}' not found in marketplace", + session_id=session_id, + ) + + # Step 2: Check credentials + graph_credentials, missing_creds = await match_user_credentials_to_graph( + user_id, graph + ) + + if missing_creds: + # Return credentials needed response with input data info + # The UI handles credential setup automatically, so the message + # focuses on asking about input data + credentials = extract_credentials_from_schema( + graph.credentials_input_schema + ) + missing_creds_check = await check_user_has_required_credentials( + user_id, credentials + ) + missing_credentials_dict = { + c.id: c.model_dump() for c in missing_creds_check + } + + return SetupRequirementsResponse( + message=self._build_inputs_message(graph, MSG_WHAT_VALUES_TO_USE), + session_id=session_id, + setup_info=SetupInfo( + agent_id=graph.id, + agent_name=graph.name, + user_readiness=UserReadiness( + has_all_credentials=False, + missing_credentials=missing_credentials_dict, + ready_to_run=False, + ), + requirements={ + "credentials": [c.model_dump() for c in credentials], + "inputs": self._get_inputs_list(graph.input_schema), + "execution_modes": self._get_execution_modes(graph), + }, + ), + graph_id=graph.id, + graph_version=graph.version, + ) + + # Step 3: Check inputs + # Get all available input fields from schema + input_properties = graph.input_schema.get("properties", {}) + required_fields = set(graph.input_schema.get("required", [])) + provided_inputs = set(params.inputs.keys()) + + # If agent has inputs but none were provided AND use_defaults is not set, + # always show what's available first so user can decide + if input_properties and not provided_inputs and not params.use_defaults: + credentials = extract_credentials_from_schema( + graph.credentials_input_schema + ) + return AgentDetailsResponse( + message=self._build_inputs_message(graph, MSG_ASK_USER_FOR_VALUES), + session_id=session_id, + agent=self._build_agent_details(graph, credentials), + user_authenticated=True, + graph_id=graph.id, + graph_version=graph.version, + ) + + # Check if required inputs are missing (and not using defaults) + missing_inputs = required_fields - provided_inputs + + if missing_inputs and not params.use_defaults: + # Return agent details with missing inputs info + credentials = extract_credentials_from_schema( + graph.credentials_input_schema + ) + return AgentDetailsResponse( + message=( + f"Agent '{graph.name}' is missing required inputs: " + f"{', '.join(missing_inputs)}. " + "Please provide these values to run the agent." 
+ ), + session_id=session_id, + agent=self._build_agent_details(graph, credentials), + user_authenticated=True, + graph_id=graph.id, + graph_version=graph.version, + ) + + # Step 4: Execute or Schedule + if is_schedule: + return await self._schedule_agent( + user_id=user_id, + session=session, + graph=graph, + graph_credentials=graph_credentials, + inputs=params.inputs, + schedule_name=params.schedule_name, + cron=params.cron, + timezone=params.timezone, + ) + else: + return await self._run_agent( + user_id=user_id, + session=session, + graph=graph, + graph_credentials=graph_credentials, + inputs=params.inputs, + ) + + except NotFoundError as e: + return ErrorResponse( + message=f"Agent '{params.username_agent_slug}' not found", + error=str(e) if str(e) else "not_found", + session_id=session_id, + ) + except DatabaseError as e: + logger.error(f"Database error: {e}", exc_info=True) + return ErrorResponse( + message=f"Failed to process request: {e!s}", + error=str(e), + session_id=session_id, + ) + except Exception as e: + logger.error(f"Error processing agent request: {e}", exc_info=True) + return ErrorResponse( + message=f"Failed to process request: {e!s}", + error=str(e), + session_id=session_id, + ) + + def _get_inputs_list(self, input_schema: dict[str, Any]) -> list[dict[str, Any]]: + """Extract inputs list from schema.""" + inputs_list = [] + if isinstance(input_schema, dict) and "properties" in input_schema: + for field_name, field_schema in input_schema["properties"].items(): + inputs_list.append( + { + "name": field_name, + "title": field_schema.get("title", field_name), + "type": field_schema.get("type", "string"), + "description": field_schema.get("description", ""), + "required": field_name in input_schema.get("required", []), + } + ) + return inputs_list + + def _get_execution_modes(self, graph: GraphModel) -> list[str]: + """Get available execution modes for the graph.""" + trigger_info = graph.trigger_setup_info + if trigger_info is None: + return ["manual", "scheduled"] + return ["webhook"] + + def _build_inputs_message( + self, + graph: GraphModel, + suffix: str, + ) -> str: + """Build a message describing available inputs for an agent.""" + inputs_list = self._get_inputs_list(graph.input_schema) + required_names = [i["name"] for i in inputs_list if i["required"]] + optional_names = [i["name"] for i in inputs_list if not i["required"]] + + message_parts = [f"Agent '{graph.name}' accepts the following inputs:"] + if required_names: + message_parts.append(f"Required: {', '.join(required_names)}.") + if optional_names: + message_parts.append( + f"Optional (have defaults): {', '.join(optional_names)}." 
+ ) + if not inputs_list: + message_parts = [f"Agent '{graph.name}' has no required inputs."] + message_parts.append(suffix) + + return " ".join(message_parts) + + def _build_agent_details( + self, + graph: GraphModel, + credentials: list[CredentialsMetaInput], + ) -> AgentDetails: + """Build AgentDetails from a graph.""" + trigger_info = ( + graph.trigger_setup_info.model_dump() if graph.trigger_setup_info else None + ) + return AgentDetails( + id=graph.id, + name=graph.name, + description=graph.description, + inputs=graph.input_schema, + credentials=credentials, + execution_options=ExecutionOptions( + manual=trigger_info is None, + scheduled=trigger_info is None, + webhook=trigger_info is not None, + ), + trigger_info=trigger_info, + ) + + async def _run_agent( + self, + user_id: str, + session: ChatSession, + graph: GraphModel, + graph_credentials: dict[str, CredentialsMetaInput], + inputs: dict[str, Any], + ) -> ToolResponseBase: + """Execute an agent immediately.""" + session_id = session.session_id + + # Check rate limits + if session.successful_agent_runs.get(graph.id, 0) >= config.max_agent_runs: + return ErrorResponse( + message="Maximum agent runs reached for this session. Please try again later.", + session_id=session_id, + ) + + # Get or create library agent + library_agent = await get_or_create_library_agent(graph, user_id) + + # Execute + execution = await execution_utils.add_graph_execution( + graph_id=library_agent.graph_id, + user_id=user_id, + inputs=inputs, + graph_credentials_inputs=graph_credentials, + ) + + # Track successful run + session.successful_agent_runs[library_agent.graph_id] = ( + session.successful_agent_runs.get(library_agent.graph_id, 0) + 1 + ) + + library_agent_link = f"/library/agents/{library_agent.id}" + return ExecutionStartedResponse( + message=( + f"Agent '{library_agent.name}' execution started successfully. " + f"View at {library_agent_link}. 
" + f"{MSG_DO_NOT_RUN_AGAIN}" + ), + session_id=session_id, + execution_id=execution.id, + graph_id=library_agent.graph_id, + graph_name=library_agent.name, + library_agent_id=library_agent.id, + library_agent_link=library_agent_link, + ) + + async def _schedule_agent( + self, + user_id: str, + session: ChatSession, + graph: GraphModel, + graph_credentials: dict[str, CredentialsMetaInput], + inputs: dict[str, Any], + schedule_name: str, + cron: str, + timezone: str, + ) -> ToolResponseBase: + """Set up scheduled execution for an agent.""" + session_id = session.session_id + + # Validate schedule params + if not schedule_name: + return ErrorResponse( + message="schedule_name is required for scheduled execution", + session_id=session_id, + ) + if not cron: + return ErrorResponse( + message="cron expression is required for scheduled execution", + session_id=session_id, + ) + + # Check rate limits + if ( + session.successful_agent_schedules.get(graph.id, 0) + >= config.max_agent_schedules + ): + return ErrorResponse( + message="Maximum agent schedules reached for this session.", + session_id=session_id, + ) + + # Get or create library agent + library_agent = await get_or_create_library_agent(graph, user_id) + + # Get user timezone + user = await get_user_by_id(user_id) + user_timezone = get_user_timezone_or_utc(user.timezone if user else timezone) + + # Create schedule + result = await get_scheduler_client().add_execution_schedule( + user_id=user_id, + graph_id=library_agent.graph_id, + graph_version=library_agent.graph_version, + name=schedule_name, + cron=cron, + input_data=inputs, + input_credentials=graph_credentials, + user_timezone=user_timezone, + ) + + # Convert next_run_time to user timezone for display + if result.next_run_time: + result.next_run_time = convert_utc_time_to_user_timezone( + result.next_run_time, user_timezone + ) + + # Track successful schedule + session.successful_agent_schedules[library_agent.graph_id] = ( + session.successful_agent_schedules.get(library_agent.graph_id, 0) + 1 + ) + + library_agent_link = f"/library/agents/{library_agent.id}" + return ExecutionStartedResponse( + message=( + f"Agent '{library_agent.name}' scheduled successfully as '{schedule_name}'. " + f"View at {library_agent_link}. 
" + f"{MSG_DO_NOT_SCHEDULE_AGAIN}" + ), + session_id=session_id, + execution_id=result.id, + graph_id=library_agent.graph_id, + graph_name=library_agent.name, + library_agent_id=library_agent.id, + library_agent_link=library_agent_link, + ) diff --git a/autogpt_platform/backend/backend/api/features/chat/tools/run_agent_test.py b/autogpt_platform/backend/backend/api/features/chat/tools/run_agent_test.py new file mode 100644 index 0000000000..ebad1a0050 --- /dev/null +++ b/autogpt_platform/backend/backend/api/features/chat/tools/run_agent_test.py @@ -0,0 +1,391 @@ +import uuid + +import orjson +import pytest + +from ._test_data import ( + make_session, + setup_firecrawl_test_data, + setup_llm_test_data, + setup_test_data, +) +from .run_agent import RunAgentTool + +# This is so the formatter doesn't remove the fixture imports +setup_llm_test_data = setup_llm_test_data +setup_test_data = setup_test_data +setup_firecrawl_test_data = setup_firecrawl_test_data + + +@pytest.mark.asyncio(scope="session") +async def test_run_agent(setup_test_data): + """Test that the run_agent tool successfully executes an approved agent""" + # Use test data from fixture + user = setup_test_data["user"] + graph = setup_test_data["graph"] + store_submission = setup_test_data["store_submission"] + + # Create the tool instance + tool = RunAgentTool() + + # Build the proper marketplace agent_id format: username/slug + agent_marketplace_id = f"{user.email.split('@')[0]}/{store_submission.slug}" + + # Build the session + session = make_session(user_id=user.id) + + # Execute the tool + response = await tool.execute( + user_id=user.id, + session_id=str(uuid.uuid4()), + tool_call_id=str(uuid.uuid4()), + username_agent_slug=agent_marketplace_id, + inputs={"test_input": "Hello World"}, + session=session, + ) + + # Verify the response + assert response is not None + assert hasattr(response, "result") + # Parse the result JSON to verify the execution started + + assert isinstance(response.result, str) + result_data = orjson.loads(response.result) + assert "execution_id" in result_data + assert "graph_id" in result_data + assert result_data["graph_id"] == graph.id + assert "graph_name" in result_data + assert result_data["graph_name"] == "Test Agent" + + +@pytest.mark.asyncio(scope="session") +async def test_run_agent_missing_inputs(setup_test_data): + """Test that the run_agent tool returns error when inputs are missing""" + # Use test data from fixture + user = setup_test_data["user"] + store_submission = setup_test_data["store_submission"] + + # Create the tool instance + tool = RunAgentTool() + + # Build the proper marketplace agent_id format + agent_marketplace_id = f"{user.email.split('@')[0]}/{store_submission.slug}" + + # Build the session + session = make_session(user_id=user.id) + + # Execute the tool without required inputs + response = await tool.execute( + user_id=user.id, + session_id=str(uuid.uuid4()), + tool_call_id=str(uuid.uuid4()), + username_agent_slug=agent_marketplace_id, + inputs={}, # Missing required input + session=session, + ) + + # Verify that we get an error response + assert response is not None + assert hasattr(response, "result") + # The tool should return an ErrorResponse when setup info indicates not ready + + assert isinstance(response.result, str) + result_data = orjson.loads(response.result) + assert "message" in result_data + + +@pytest.mark.asyncio(scope="session") +async def test_run_agent_invalid_agent_id(setup_test_data): + """Test that the run_agent tool returns error for invalid agent 
ID""" + # Use test data from fixture + user = setup_test_data["user"] + + # Create the tool instance + tool = RunAgentTool() + + # Build the session + session = make_session(user_id=user.id) + + # Execute the tool with invalid agent ID + response = await tool.execute( + user_id=user.id, + session_id=str(uuid.uuid4()), + tool_call_id=str(uuid.uuid4()), + username_agent_slug="invalid/agent-id", + inputs={"test_input": "Hello World"}, + session=session, + ) + + # Verify that we get an error response + assert response is not None + assert hasattr(response, "result") + + assert isinstance(response.result, str) + result_data = orjson.loads(response.result) + assert "message" in result_data + # Should get an error about failed setup or not found + assert any( + phrase in result_data["message"].lower() for phrase in ["not found", "failed"] + ) + + +@pytest.mark.asyncio(scope="session") +async def test_run_agent_with_llm_credentials(setup_llm_test_data): + """Test that run_agent works with an agent requiring LLM credentials""" + # Use test data from fixture + user = setup_llm_test_data["user"] + graph = setup_llm_test_data["graph"] + store_submission = setup_llm_test_data["store_submission"] + + # Create the tool instance + tool = RunAgentTool() + + # Build the proper marketplace agent_id format + agent_marketplace_id = f"{user.email.split('@')[0]}/{store_submission.slug}" + + # Build the session + session = make_session(user_id=user.id) + + # Execute the tool with a prompt for the LLM + response = await tool.execute( + user_id=user.id, + session_id=str(uuid.uuid4()), + tool_call_id=str(uuid.uuid4()), + username_agent_slug=agent_marketplace_id, + inputs={"user_prompt": "What is 2+2?"}, + session=session, + ) + + # Verify the response + assert response is not None + assert hasattr(response, "result") + + # Parse the result JSON to verify the execution started + + assert isinstance(response.result, str) + result_data = orjson.loads(response.result) + + # Should successfully start execution since credentials are available + assert "execution_id" in result_data + assert "graph_id" in result_data + assert result_data["graph_id"] == graph.id + assert "graph_name" in result_data + assert result_data["graph_name"] == "LLM Test Agent" + + +@pytest.mark.asyncio(scope="session") +async def test_run_agent_shows_available_inputs_when_none_provided(setup_test_data): + """Test that run_agent returns available inputs when called without inputs or use_defaults.""" + user = setup_test_data["user"] + store_submission = setup_test_data["store_submission"] + + tool = RunAgentTool() + agent_marketplace_id = f"{user.email.split('@')[0]}/{store_submission.slug}" + session = make_session(user_id=user.id) + + # Execute without inputs and without use_defaults + response = await tool.execute( + user_id=user.id, + session_id=str(uuid.uuid4()), + tool_call_id=str(uuid.uuid4()), + username_agent_slug=agent_marketplace_id, + inputs={}, + use_defaults=False, + session=session, + ) + + assert response is not None + assert hasattr(response, "result") + assert isinstance(response.result, str) + result_data = orjson.loads(response.result) + + # Should return agent_details type showing available inputs + assert result_data.get("type") == "agent_details" + assert "agent" in result_data + assert "message" in result_data + # Message should mention inputs + assert "inputs" in result_data["message"].lower() + + +@pytest.mark.asyncio(scope="session") +async def test_run_agent_with_use_defaults(setup_test_data): + """Test that run_agent 
executes successfully with use_defaults=True.""" + user = setup_test_data["user"] + graph = setup_test_data["graph"] + store_submission = setup_test_data["store_submission"] + + tool = RunAgentTool() + agent_marketplace_id = f"{user.email.split('@')[0]}/{store_submission.slug}" + session = make_session(user_id=user.id) + + # Execute with use_defaults=True (no explicit inputs) + response = await tool.execute( + user_id=user.id, + session_id=str(uuid.uuid4()), + tool_call_id=str(uuid.uuid4()), + username_agent_slug=agent_marketplace_id, + inputs={}, + use_defaults=True, + session=session, + ) + + assert response is not None + assert hasattr(response, "result") + assert isinstance(response.result, str) + result_data = orjson.loads(response.result) + + # Should execute successfully + assert "execution_id" in result_data + assert result_data["graph_id"] == graph.id + + +@pytest.mark.asyncio(scope="session") +async def test_run_agent_missing_credentials(setup_firecrawl_test_data): + """Test that run_agent returns setup_requirements when credentials are missing.""" + user = setup_firecrawl_test_data["user"] + store_submission = setup_firecrawl_test_data["store_submission"] + + tool = RunAgentTool() + agent_marketplace_id = f"{user.email.split('@')[0]}/{store_submission.slug}" + session = make_session(user_id=user.id) + + # Execute - user doesn't have firecrawl credentials + response = await tool.execute( + user_id=user.id, + session_id=str(uuid.uuid4()), + tool_call_id=str(uuid.uuid4()), + username_agent_slug=agent_marketplace_id, + inputs={"url": "https://example.com"}, + session=session, + ) + + assert response is not None + assert hasattr(response, "result") + assert isinstance(response.result, str) + result_data = orjson.loads(response.result) + + # Should return setup_requirements type with missing credentials + assert result_data.get("type") == "setup_requirements" + assert "setup_info" in result_data + setup_info = result_data["setup_info"] + assert "user_readiness" in setup_info + assert setup_info["user_readiness"]["has_all_credentials"] is False + assert len(setup_info["user_readiness"]["missing_credentials"]) > 0 + + +@pytest.mark.asyncio(scope="session") +async def test_run_agent_invalid_slug_format(setup_test_data): + """Test that run_agent returns error for invalid slug format (no slash).""" + user = setup_test_data["user"] + + tool = RunAgentTool() + session = make_session(user_id=user.id) + + # Execute with invalid slug format + response = await tool.execute( + user_id=user.id, + session_id=str(uuid.uuid4()), + tool_call_id=str(uuid.uuid4()), + username_agent_slug="no-slash-here", + inputs={}, + session=session, + ) + + assert response is not None + assert hasattr(response, "result") + assert isinstance(response.result, str) + result_data = orjson.loads(response.result) + + # Should return error + assert result_data.get("type") == "error" + assert "username/agent-name" in result_data["message"] + + +@pytest.mark.asyncio(scope="session") +async def test_run_agent_unauthenticated(): + """Test that run_agent returns need_login for unauthenticated users.""" + tool = RunAgentTool() + session = make_session(user_id=None) + + # Execute without user_id + response = await tool.execute( + user_id=None, + session_id=str(uuid.uuid4()), + tool_call_id=str(uuid.uuid4()), + username_agent_slug="test/test-agent", + inputs={}, + session=session, + ) + + assert response is not None + assert hasattr(response, "result") + assert isinstance(response.result, str) + result_data = 
orjson.loads(response.result) + + # Base tool returns need_login type for unauthenticated users + assert result_data.get("type") == "need_login" + assert "sign in" in result_data["message"].lower() + + +@pytest.mark.asyncio(scope="session") +async def test_run_agent_schedule_without_cron(setup_test_data): + """Test that run_agent returns error when scheduling without cron expression.""" + user = setup_test_data["user"] + store_submission = setup_test_data["store_submission"] + + tool = RunAgentTool() + agent_marketplace_id = f"{user.email.split('@')[0]}/{store_submission.slug}" + session = make_session(user_id=user.id) + + # Try to schedule without cron + response = await tool.execute( + user_id=user.id, + session_id=str(uuid.uuid4()), + tool_call_id=str(uuid.uuid4()), + username_agent_slug=agent_marketplace_id, + inputs={"test_input": "test"}, + schedule_name="My Schedule", + cron="", # Empty cron + session=session, + ) + + assert response is not None + assert hasattr(response, "result") + assert isinstance(response.result, str) + result_data = orjson.loads(response.result) + + # Should return error about missing cron + assert result_data.get("type") == "error" + assert "cron" in result_data["message"].lower() + + +@pytest.mark.asyncio(scope="session") +async def test_run_agent_schedule_without_name(setup_test_data): + """Test that run_agent returns error when scheduling without schedule_name.""" + user = setup_test_data["user"] + store_submission = setup_test_data["store_submission"] + + tool = RunAgentTool() + agent_marketplace_id = f"{user.email.split('@')[0]}/{store_submission.slug}" + session = make_session(user_id=user.id) + + # Try to schedule without schedule_name + response = await tool.execute( + user_id=user.id, + session_id=str(uuid.uuid4()), + tool_call_id=str(uuid.uuid4()), + username_agent_slug=agent_marketplace_id, + inputs={"test_input": "test"}, + schedule_name="", # Empty name + cron="0 9 * * *", + session=session, + ) + + assert response is not None + assert hasattr(response, "result") + assert isinstance(response.result, str) + result_data = orjson.loads(response.result) + + # Should return error about missing schedule_name + assert result_data.get("type") == "error" + assert "schedule_name" in result_data["message"].lower() diff --git a/autogpt_platform/backend/backend/api/features/chat/tools/utils.py b/autogpt_platform/backend/backend/api/features/chat/tools/utils.py new file mode 100644 index 0000000000..19e092c312 --- /dev/null +++ b/autogpt_platform/backend/backend/api/features/chat/tools/utils.py @@ -0,0 +1,288 @@ +"""Shared utilities for chat tools.""" + +import logging +from typing import Any + +from backend.api.features.library import db as library_db +from backend.api.features.library import model as library_model +from backend.api.features.store import db as store_db +from backend.data import graph as graph_db +from backend.data.graph import GraphModel +from backend.data.model import CredentialsMetaInput +from backend.integrations.creds_manager import IntegrationCredentialsManager +from backend.util.exceptions import NotFoundError + +logger = logging.getLogger(__name__) + + +async def fetch_graph_from_store_slug( + username: str, + agent_name: str, +) -> tuple[GraphModel | None, Any | None]: + """ + Fetch graph from store by username/agent_name slug. + + Args: + username: Creator's username + agent_name: Agent name/slug + + Returns: + tuple[Graph | None, StoreAgentDetails | None]: The graph and store agent details, + or (None, None) if not found. 
+ + Raises: + DatabaseError: If there's a database error during lookup. + """ + try: + store_agent = await store_db.get_store_agent_details(username, agent_name) + except NotFoundError: + return None, None + + # Get the graph from store listing version + graph_meta = await store_db.get_available_graph( + store_agent.store_listing_version_id + ) + graph = await graph_db.get_graph( + graph_id=graph_meta.id, + version=graph_meta.version, + user_id=None, # Public access + include_subgraphs=True, + ) + return graph, store_agent + + +def extract_credentials_from_schema( + credentials_input_schema: dict[str, Any] | None, +) -> list[CredentialsMetaInput]: + """ + Extract credential requirements from graph's credentials_input_schema. + + This consolidates duplicated logic from get_agent_details.py and setup_agent.py. + + Args: + credentials_input_schema: The credentials_input_schema from a Graph object + + Returns: + List of CredentialsMetaInput with provider and type info + """ + credentials: list[CredentialsMetaInput] = [] + + if ( + not isinstance(credentials_input_schema, dict) + or "properties" not in credentials_input_schema + ): + return credentials + + for cred_name, cred_schema in credentials_input_schema["properties"].items(): + provider = _extract_provider_from_schema(cred_schema) + cred_type = _extract_credential_type_from_schema(cred_schema) + + credentials.append( + CredentialsMetaInput( + id=cred_name, + title=cred_schema.get("title", cred_name), + provider=provider, # type: ignore + type=cred_type, # type: ignore + ) + ) + + return credentials + + +def extract_credentials_as_dict( + credentials_input_schema: dict[str, Any] | None, +) -> dict[str, CredentialsMetaInput]: + """ + Extract credential requirements as a dict keyed by field name. + + Args: + credentials_input_schema: The credentials_input_schema from a Graph object + + Returns: + Dict mapping field name to CredentialsMetaInput + """ + credentials: dict[str, CredentialsMetaInput] = {} + + if ( + not isinstance(credentials_input_schema, dict) + or "properties" not in credentials_input_schema + ): + return credentials + + for cred_name, cred_schema in credentials_input_schema["properties"].items(): + provider = _extract_provider_from_schema(cred_schema) + cred_type = _extract_credential_type_from_schema(cred_schema) + + credentials[cred_name] = CredentialsMetaInput( + id=cred_name, + title=cred_schema.get("title", cred_name), + provider=provider, # type: ignore + type=cred_type, # type: ignore + ) + + return credentials + + +def _extract_provider_from_schema(cred_schema: dict[str, Any]) -> str: + """Extract provider from credential schema.""" + if "credentials_provider" in cred_schema and cred_schema["credentials_provider"]: + return cred_schema["credentials_provider"][0] + if "properties" in cred_schema and "provider" in cred_schema["properties"]: + return cred_schema["properties"]["provider"].get("const", "unknown") + return "unknown" + + +def _extract_credential_type_from_schema(cred_schema: dict[str, Any]) -> str: + """Extract credential type from credential schema.""" + if "credentials_types" in cred_schema and cred_schema["credentials_types"]: + return cred_schema["credentials_types"][0] + if "properties" in cred_schema and "type" in cred_schema["properties"]: + return cred_schema["properties"]["type"].get("const", "api_key") + return "api_key" + + +async def get_or_create_library_agent( + graph: GraphModel, + user_id: str, +) -> library_model.LibraryAgent: + """ + Get existing library agent or create new one. 
+ + This consolidates duplicated logic from run_agent.py and setup_agent.py. + + Args: + graph: The Graph to add to library + user_id: The user's ID + + Returns: + LibraryAgent instance + """ + existing = await library_db.get_library_agent_by_graph_id( + graph_id=graph.id, user_id=user_id + ) + if existing: + return existing + + library_agents = await library_db.create_library_agent( + graph=graph, + user_id=user_id, + create_library_agents_for_sub_graphs=False, + ) + assert len(library_agents) == 1, "Expected 1 library agent to be created" + return library_agents[0] + + +async def match_user_credentials_to_graph( + user_id: str, + graph: GraphModel, +) -> tuple[dict[str, CredentialsMetaInput], list[str]]: + """ + Match user's available credentials against graph's required credentials. + + Uses graph.aggregate_credentials_inputs() which handles credentials from + multiple nodes and uses frozensets for provider matching. + + Args: + user_id: The user's ID + graph: The Graph with credential requirements + + Returns: + tuple[matched_credentials dict, missing_credential_descriptions list] + """ + graph_credentials_inputs: dict[str, CredentialsMetaInput] = {} + missing_creds: list[str] = [] + + # Get aggregated credentials requirements from the graph + aggregated_creds = graph.aggregate_credentials_inputs() + logger.debug( + f"Matching credentials for graph {graph.id}: {len(aggregated_creds)} required" + ) + + if not aggregated_creds: + return graph_credentials_inputs, missing_creds + + # Get all available credentials for the user + creds_manager = IntegrationCredentialsManager() + available_creds = await creds_manager.store.get_all_creds(user_id) + + # For each required credential field, find a matching user credential + # field_info.provider is a frozenset because aggregate_credentials_inputs() + # combines requirements from multiple nodes. A credential matches if its + # provider is in the set of acceptable providers. + for credential_field_name, ( + credential_requirements, + _node_fields, + ) in aggregated_creds.items(): + # Find first matching credential by provider and type + matching_cred = next( + ( + cred + for cred in available_creds + if cred.provider in credential_requirements.provider + and cred.type in credential_requirements.supported_types + ), + None, + ) + + if matching_cred: + try: + graph_credentials_inputs[credential_field_name] = CredentialsMetaInput( + id=matching_cred.id, + provider=matching_cred.provider, # type: ignore + type=matching_cred.type, + title=matching_cred.title, + ) + except Exception as e: + logger.error( + f"Failed to create CredentialsMetaInput for field '{credential_field_name}': " + f"provider={matching_cred.provider}, type={matching_cred.type}, " + f"credential_id={matching_cred.id}", + exc_info=True, + ) + missing_creds.append( + f"{credential_field_name} (validation failed: {e})" + ) + else: + missing_creds.append( + f"{credential_field_name} " + f"(requires provider in {list(credential_requirements.provider)}, " + f"type in {list(credential_requirements.supported_types)})" + ) + + logger.info( + f"Credential matching complete: {len(graph_credentials_inputs)}/{len(aggregated_creds)} matched" + ) + + return graph_credentials_inputs, missing_creds + + +async def check_user_has_required_credentials( + user_id: str, + required_credentials: list[CredentialsMetaInput], +) -> list[CredentialsMetaInput]: + """ + Check which required credentials the user is missing. 
+ + Args: + user_id: The user's ID + required_credentials: List of required credentials + + Returns: + List of missing credentials (empty if user has all) + """ + if not required_credentials: + return [] + + creds_manager = IntegrationCredentialsManager() + available_creds = await creds_manager.store.get_all_creds(user_id) + + missing: list[CredentialsMetaInput] = [] + for required in required_credentials: + has_matching = any( + cred.provider == required.provider and cred.type == required.type + for cred in available_creds + ) + if not has_matching: + missing.append(required) + + return missing diff --git a/autogpt_platform/backend/backend/api/features/executions/__init__.py b/autogpt_platform/backend/backend/api/features/executions/__init__.py new file mode 100644 index 0000000000..e69de29bb2 diff --git a/autogpt_platform/backend/backend/api/features/executions/review/__init__.py b/autogpt_platform/backend/backend/api/features/executions/review/__init__.py new file mode 100644 index 0000000000..e69de29bb2 diff --git a/autogpt_platform/backend/backend/api/features/executions/review/model.py b/autogpt_platform/backend/backend/api/features/executions/review/model.py new file mode 100644 index 0000000000..74f72fe1ff --- /dev/null +++ b/autogpt_platform/backend/backend/api/features/executions/review/model.py @@ -0,0 +1,204 @@ +import json +from datetime import datetime +from typing import TYPE_CHECKING, Any, Dict, List, Union + +from prisma.enums import ReviewStatus +from pydantic import BaseModel, Field, field_validator, model_validator + +if TYPE_CHECKING: + from prisma.models import PendingHumanReview + +# SafeJson-compatible type alias for review data +SafeJsonData = Union[Dict[str, Any], List[Any], str, int, float, bool, None] + + +class PendingHumanReviewModel(BaseModel): + """Response model for pending human review data. + + Represents a human review request that is awaiting user action. + Contains all necessary information for a user to review and approve + or reject data from a Human-in-the-Loop block execution. 
+ + Attributes: + node_exec_id: ID of the node execution that created this review (primary key) + user_id: ID of the user who must perform the review + graph_exec_id: ID of the graph execution containing the node + graph_id: ID of the graph template being executed + graph_version: Version number of the graph template + payload: The actual data payload awaiting review + instructions: Instructions or message for the reviewer + editable: Whether the reviewer can edit the data + status: Current review status (WAITING, APPROVED, or REJECTED) + review_message: Optional message from the reviewer + was_edited: Whether the data was modified during review + processed: Whether the review result has been processed by the execution engine + created_at: Timestamp when review was created + updated_at: Timestamp when review was last modified + reviewed_at: Timestamp when review was completed (if applicable) + """ + + node_exec_id: str = Field(description="Node execution ID (primary key)") + user_id: str = Field(description="User ID associated with the review") + graph_exec_id: str = Field(description="Graph execution ID") + graph_id: str = Field(description="Graph ID") + graph_version: int = Field(description="Graph version") + payload: SafeJsonData = Field(description="The actual data payload awaiting review") + instructions: str | None = Field( + description="Instructions or message for the reviewer", default=None + ) + editable: bool = Field(description="Whether the reviewer can edit the data") + status: ReviewStatus = Field(description="Review status") + review_message: str | None = Field( + description="Optional message from the reviewer", default=None + ) + was_edited: bool | None = Field( + description="Whether the data was modified during review", default=None + ) + processed: bool = Field( + description="Whether the review result has been processed by the execution engine", + default=False, + ) + created_at: datetime = Field(description="When the review was created") + updated_at: datetime | None = Field( + description="When the review was last updated", default=None + ) + reviewed_at: datetime | None = Field( + description="When the review was completed", default=None + ) + + @classmethod + def from_db(cls, review: "PendingHumanReview") -> "PendingHumanReviewModel": + """ + Convert a database model to a response model. + + Uses the new flat database structure with separate columns for + payload, instructions, and editable flag. + + Handles invalid data gracefully by using safe defaults.
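+
+        For example, the camelCase Prisma columns map onto the snake_case model
+        fields: review.nodeExecId -> node_exec_id, review.reviewMessage ->
+        review_message, review.reviewedAt -> reviewed_at.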
+ """ + return cls( + node_exec_id=review.nodeExecId, + user_id=review.userId, + graph_exec_id=review.graphExecId, + graph_id=review.graphId, + graph_version=review.graphVersion, + payload=review.payload, + instructions=review.instructions, + editable=review.editable, + status=review.status, + review_message=review.reviewMessage, + was_edited=review.wasEdited, + processed=review.processed, + created_at=review.createdAt, + updated_at=review.updatedAt, + reviewed_at=review.reviewedAt, + ) + + +class ReviewItem(BaseModel): + """Single review item for processing.""" + + node_exec_id: str = Field(description="Node execution ID to review") + approved: bool = Field( + description="Whether this review is approved (True) or rejected (False)" + ) + message: str | None = Field( + None, description="Optional review message", max_length=2000 + ) + reviewed_data: SafeJsonData | None = Field( + None, description="Optional edited data (ignored if approved=False)" + ) + + @field_validator("reviewed_data") + @classmethod + def validate_reviewed_data(cls, v): + """Validate that reviewed_data is safe and properly structured.""" + if v is None: + return v + + # Validate SafeJson compatibility + def validate_safejson_type(obj): + """Ensure object only contains SafeJson compatible types.""" + if obj is None: + return True + elif isinstance(obj, (str, int, float, bool)): + return True + elif isinstance(obj, dict): + return all( + isinstance(k, str) and validate_safejson_type(v) + for k, v in obj.items() + ) + elif isinstance(obj, list): + return all(validate_safejson_type(item) for item in obj) + else: + return False + + if not validate_safejson_type(v): + raise ValueError("reviewed_data contains non-SafeJson compatible types") + + # Validate data size to prevent DoS attacks + try: + json_str = json.dumps(v) + if len(json_str) > 1000000: # 1MB limit + raise ValueError("reviewed_data is too large (max 1MB)") + except (TypeError, ValueError) as e: + raise ValueError(f"reviewed_data must be JSON serializable: {str(e)}") + + # Ensure no dangerous nested structures (prevent infinite recursion) + def check_depth(obj, max_depth=10, current_depth=0): + """Recursively check object nesting depth to prevent stack overflow attacks.""" + if current_depth > max_depth: + raise ValueError("reviewed_data has excessive nesting depth") + + if isinstance(obj, dict): + for value in obj.values(): + check_depth(value, max_depth, current_depth + 1) + elif isinstance(obj, list): + for item in obj: + check_depth(item, max_depth, current_depth + 1) + + check_depth(v) + return v + + @field_validator("message") + @classmethod + def validate_message(cls, v): + """Validate and sanitize review message.""" + if v is not None and len(v.strip()) == 0: + return None + return v + + +class ReviewRequest(BaseModel): + """Request model for processing ALL pending reviews for an execution. + + This request must include ALL pending reviews for a graph execution. + Each review will be either approved (with optional data modifications) + or rejected (data ignored). The execution will resume only after ALL reviews are processed. 
+ """ + + reviews: List[ReviewItem] = Field( + description="All reviews with their approval status, data, and messages" + ) + + @model_validator(mode="after") + def validate_review_completeness(self): + """Validate that we have at least one review to process and no duplicates.""" + if not self.reviews: + raise ValueError("At least one review must be provided") + + # Ensure no duplicate node_exec_ids + node_ids = [review.node_exec_id for review in self.reviews] + if len(node_ids) != len(set(node_ids)): + duplicates = [nid for nid in set(node_ids) if node_ids.count(nid) > 1] + raise ValueError(f"Duplicate review IDs found: {', '.join(duplicates)}") + + return self + + +class ReviewResponse(BaseModel): + """Response from review endpoint.""" + + approved_count: int = Field(description="Number of reviews successfully approved") + rejected_count: int = Field(description="Number of reviews successfully rejected") + failed_count: int = Field(description="Number of reviews that failed processing") + error: str | None = Field(None, description="Error message if operation failed") diff --git a/autogpt_platform/backend/backend/api/features/executions/review/review_routes_test.py b/autogpt_platform/backend/backend/api/features/executions/review/review_routes_test.py new file mode 100644 index 0000000000..c4eba0befc --- /dev/null +++ b/autogpt_platform/backend/backend/api/features/executions/review/review_routes_test.py @@ -0,0 +1,492 @@ +import datetime + +import fastapi +import fastapi.testclient +import pytest +import pytest_mock +from prisma.enums import ReviewStatus +from pytest_snapshot.plugin import Snapshot + +from backend.api.rest_api import handle_internal_http_error + +from .model import PendingHumanReviewModel +from .routes import router + +# Using a fixed timestamp for reproducible tests +FIXED_NOW = datetime.datetime(2023, 1, 1, 0, 0, 0, tzinfo=datetime.timezone.utc) + +app = fastapi.FastAPI() +app.include_router(router, prefix="/api/review") +app.add_exception_handler(ValueError, handle_internal_http_error(400)) + +client = fastapi.testclient.TestClient(app) + + +@pytest.fixture(autouse=True) +def setup_app_auth(mock_jwt_user): + """Setup auth overrides for all tests in this module""" + from autogpt_libs.auth.jwt_utils import get_jwt_payload + + app.dependency_overrides[get_jwt_payload] = mock_jwt_user["get_jwt_payload"] + yield + app.dependency_overrides.clear() + + +@pytest.fixture +def sample_pending_review(test_user_id: str) -> PendingHumanReviewModel: + """Create a sample pending review for testing""" + return PendingHumanReviewModel( + node_exec_id="test_node_123", + user_id=test_user_id, + graph_exec_id="test_graph_exec_456", + graph_id="test_graph_789", + graph_version=1, + payload={"data": "test payload", "value": 42}, + instructions="Please review this data", + editable=True, + status=ReviewStatus.WAITING, + review_message=None, + was_edited=None, + processed=False, + created_at=FIXED_NOW, + updated_at=None, + reviewed_at=None, + ) + + +def test_get_pending_reviews_empty( + mocker: pytest_mock.MockerFixture, + snapshot: Snapshot, + test_user_id: str, +) -> None: + """Test getting pending reviews when none exist""" + mock_get_reviews = mocker.patch( + "backend.api.features.executions.review.routes.get_pending_reviews_for_user" + ) + mock_get_reviews.return_value = [] + + response = client.get("/api/review/pending") + + assert response.status_code == 200 + assert response.json() == [] + mock_get_reviews.assert_called_once_with(test_user_id, 1, 25) + + +def 
test_get_pending_reviews_with_data( + mocker: pytest_mock.MockerFixture, + sample_pending_review: PendingHumanReviewModel, + snapshot: Snapshot, + test_user_id: str, +) -> None: + """Test getting pending reviews with data""" + mock_get_reviews = mocker.patch( + "backend.api.features.executions.review.routes.get_pending_reviews_for_user" + ) + mock_get_reviews.return_value = [sample_pending_review] + + response = client.get("/api/review/pending?page=2&page_size=10") + + assert response.status_code == 200 + data = response.json() + assert len(data) == 1 + assert data[0]["node_exec_id"] == "test_node_123" + assert data[0]["status"] == "WAITING" + mock_get_reviews.assert_called_once_with(test_user_id, 2, 10) + + +def test_get_pending_reviews_for_execution_success( + mocker: pytest_mock.MockerFixture, + sample_pending_review: PendingHumanReviewModel, + snapshot: Snapshot, + test_user_id: str, +) -> None: + """Test getting pending reviews for specific execution""" + mock_get_graph_execution = mocker.patch( + "backend.api.features.executions.review.routes.get_graph_execution_meta" + ) + mock_get_graph_execution.return_value = { + "id": "test_graph_exec_456", + "user_id": test_user_id, + } + + mock_get_reviews = mocker.patch( + "backend.api.features.executions.review.routes.get_pending_reviews_for_execution" + ) + mock_get_reviews.return_value = [sample_pending_review] + + response = client.get("/api/review/execution/test_graph_exec_456") + + assert response.status_code == 200 + data = response.json() + assert len(data) == 1 + assert data[0]["graph_exec_id"] == "test_graph_exec_456" + + +def test_get_pending_reviews_for_execution_not_available( + mocker: pytest_mock.MockerFixture, +) -> None: + """Test access denied when user doesn't own the execution""" + mock_get_graph_execution = mocker.patch( + "backend.api.features.executions.review.routes.get_graph_execution_meta" + ) + mock_get_graph_execution.return_value = None + + response = client.get("/api/review/execution/test_graph_exec_456") + + assert response.status_code == 404 + assert "not found" in response.json()["detail"] + + +def test_process_review_action_approve_success( + mocker: pytest_mock.MockerFixture, + sample_pending_review: PendingHumanReviewModel, + test_user_id: str, +) -> None: + """Test successful review approval""" + # Mock the route functions + + mock_get_reviews_for_execution = mocker.patch( + "backend.api.features.executions.review.routes.get_pending_reviews_for_execution" + ) + mock_get_reviews_for_execution.return_value = [sample_pending_review] + + mock_process_all_reviews = mocker.patch( + "backend.api.features.executions.review.routes.process_all_reviews_for_execution" + ) + # Create approved review for return + approved_review = PendingHumanReviewModel( + node_exec_id="test_node_123", + user_id=test_user_id, + graph_exec_id="test_graph_exec_456", + graph_id="test_graph_789", + graph_version=1, + payload={"data": "modified payload", "value": 50}, + instructions="Please review this data", + editable=True, + status=ReviewStatus.APPROVED, + review_message="Looks good", + was_edited=True, + processed=False, + created_at=FIXED_NOW, + updated_at=FIXED_NOW, + reviewed_at=FIXED_NOW, + ) + mock_process_all_reviews.return_value = {"test_node_123": approved_review} + + mock_has_pending = mocker.patch( + "backend.api.features.executions.review.routes.has_pending_reviews_for_graph_exec" + ) + mock_has_pending.return_value = False + + mocker.patch("backend.api.features.executions.review.routes.add_graph_execution") + + request_data 
= { + "reviews": [ + { + "node_exec_id": "test_node_123", + "approved": True, + "message": "Looks good", + "reviewed_data": {"data": "modified payload", "value": 50}, + } + ] + } + + response = client.post("/api/review/action", json=request_data) + + assert response.status_code == 200 + data = response.json() + assert data["approved_count"] == 1 + assert data["rejected_count"] == 0 + assert data["failed_count"] == 0 + assert data["error"] is None + + +def test_process_review_action_reject_success( + mocker: pytest_mock.MockerFixture, + sample_pending_review: PendingHumanReviewModel, + test_user_id: str, +) -> None: + """Test successful review rejection""" + # Mock the route functions + + mock_get_reviews_for_execution = mocker.patch( + "backend.api.features.executions.review.routes.get_pending_reviews_for_execution" + ) + mock_get_reviews_for_execution.return_value = [sample_pending_review] + + mock_process_all_reviews = mocker.patch( + "backend.api.features.executions.review.routes.process_all_reviews_for_execution" + ) + rejected_review = PendingHumanReviewModel( + node_exec_id="test_node_123", + user_id=test_user_id, + graph_exec_id="test_graph_exec_456", + graph_id="test_graph_789", + graph_version=1, + payload={"data": "test payload"}, + instructions="Please review", + editable=True, + status=ReviewStatus.REJECTED, + review_message="Rejected by user", + was_edited=False, + processed=False, + created_at=FIXED_NOW, + updated_at=None, + reviewed_at=FIXED_NOW, + ) + mock_process_all_reviews.return_value = {"test_node_123": rejected_review} + + mock_has_pending = mocker.patch( + "backend.api.features.executions.review.routes.has_pending_reviews_for_graph_exec" + ) + mock_has_pending.return_value = False + + request_data = { + "reviews": [ + { + "node_exec_id": "test_node_123", + "approved": False, + "message": None, + } + ] + } + + response = client.post("/api/review/action", json=request_data) + + assert response.status_code == 200 + data = response.json() + assert data["approved_count"] == 0 + assert data["rejected_count"] == 1 + assert data["failed_count"] == 0 + assert data["error"] is None + + +def test_process_review_action_mixed_success( + mocker: pytest_mock.MockerFixture, + sample_pending_review: PendingHumanReviewModel, + test_user_id: str, +) -> None: + """Test mixed approve/reject operations""" + # Create a second review + second_review = PendingHumanReviewModel( + node_exec_id="test_node_456", + user_id=test_user_id, + graph_exec_id="test_graph_exec_456", + graph_id="test_graph_789", + graph_version=1, + payload={"data": "second payload"}, + instructions="Second review", + editable=False, + status=ReviewStatus.WAITING, + review_message=None, + was_edited=None, + processed=False, + created_at=FIXED_NOW, + updated_at=None, + reviewed_at=None, + ) + + # Mock the route functions + + mock_get_reviews_for_execution = mocker.patch( + "backend.api.features.executions.review.routes.get_pending_reviews_for_execution" + ) + mock_get_reviews_for_execution.return_value = [sample_pending_review, second_review] + + mock_process_all_reviews = mocker.patch( + "backend.api.features.executions.review.routes.process_all_reviews_for_execution" + ) + # Create approved version of first review + approved_review = PendingHumanReviewModel( + node_exec_id="test_node_123", + user_id=test_user_id, + graph_exec_id="test_graph_exec_456", + graph_id="test_graph_789", + graph_version=1, + payload={"data": "modified"}, + instructions="Please review", + editable=True, + status=ReviewStatus.APPROVED, + 
review_message="Approved", + was_edited=True, + processed=False, + created_at=FIXED_NOW, + updated_at=None, + reviewed_at=FIXED_NOW, + ) + # Create rejected version of second review + rejected_review = PendingHumanReviewModel( + node_exec_id="test_node_456", + user_id=test_user_id, + graph_exec_id="test_graph_exec_456", + graph_id="test_graph_789", + graph_version=1, + payload={"data": "second payload"}, + instructions="Second review", + editable=False, + status=ReviewStatus.REJECTED, + review_message="Rejected by user", + was_edited=False, + processed=False, + created_at=FIXED_NOW, + updated_at=None, + reviewed_at=FIXED_NOW, + ) + mock_process_all_reviews.return_value = { + "test_node_123": approved_review, + "test_node_456": rejected_review, + } + + mock_has_pending = mocker.patch( + "backend.api.features.executions.review.routes.has_pending_reviews_for_graph_exec" + ) + mock_has_pending.return_value = False + + request_data = { + "reviews": [ + { + "node_exec_id": "test_node_123", + "approved": True, + "message": "Approved", + "reviewed_data": {"data": "modified"}, + }, + { + "node_exec_id": "test_node_456", + "approved": False, + "message": None, + }, + ] + } + + response = client.post("/api/review/action", json=request_data) + + assert response.status_code == 200 + data = response.json() + assert data["approved_count"] == 1 + assert data["rejected_count"] == 1 + assert data["failed_count"] == 0 + assert data["error"] is None + + +def test_process_review_action_empty_request( + mocker: pytest_mock.MockerFixture, + test_user_id: str, +) -> None: + """Test error when no reviews provided""" + request_data = {"reviews": []} + + response = client.post("/api/review/action", json=request_data) + + assert response.status_code == 422 + response_data = response.json() + # Pydantic validation error format + assert isinstance(response_data["detail"], list) + assert len(response_data["detail"]) > 0 + assert "At least one review must be provided" in response_data["detail"][0]["msg"] + + +def test_process_review_action_review_not_found( + mocker: pytest_mock.MockerFixture, + test_user_id: str, +) -> None: + """Test error when review is not found""" + # Mock the functions that extract graph execution ID from the request + mock_get_reviews_for_execution = mocker.patch( + "backend.api.features.executions.review.routes.get_pending_reviews_for_execution" + ) + mock_get_reviews_for_execution.return_value = [] # No reviews found + + # Mock process_all_reviews to simulate not finding reviews + mock_process_all_reviews = mocker.patch( + "backend.api.features.executions.review.routes.process_all_reviews_for_execution" + ) + # This should raise a ValueError with "Reviews not found" message based on the data/human_review.py logic + mock_process_all_reviews.side_effect = ValueError( + "Reviews not found or access denied for IDs: nonexistent_node" + ) + + request_data = { + "reviews": [ + { + "node_exec_id": "nonexistent_node", + "approved": True, + "message": "Test", + } + ] + } + + response = client.post("/api/review/action", json=request_data) + + assert response.status_code == 400 + assert "Reviews not found" in response.json()["detail"] + + +def test_process_review_action_partial_failure( + mocker: pytest_mock.MockerFixture, + sample_pending_review: PendingHumanReviewModel, + test_user_id: str, +) -> None: + """Test handling of partial failures in review processing""" + # Mock the route functions + mock_get_reviews_for_execution = mocker.patch( + 
"backend.api.features.executions.review.routes.get_pending_reviews_for_execution" + ) + mock_get_reviews_for_execution.return_value = [sample_pending_review] + + # Mock partial failure in processing + mock_process_all_reviews = mocker.patch( + "backend.api.features.executions.review.routes.process_all_reviews_for_execution" + ) + mock_process_all_reviews.side_effect = ValueError("Some reviews failed validation") + + request_data = { + "reviews": [ + { + "node_exec_id": "test_node_123", + "approved": True, + "message": "Test", + } + ] + } + + response = client.post("/api/review/action", json=request_data) + + assert response.status_code == 400 + assert "Some reviews failed validation" in response.json()["detail"] + + +def test_process_review_action_invalid_node_exec_id( + mocker: pytest_mock.MockerFixture, + sample_pending_review: PendingHumanReviewModel, + test_user_id: str, +) -> None: + """Test failure when trying to process review with invalid node execution ID""" + # Mock the route functions + mock_get_reviews_for_execution = mocker.patch( + "backend.api.features.executions.review.routes.get_pending_reviews_for_execution" + ) + mock_get_reviews_for_execution.return_value = [sample_pending_review] + + # Mock validation failure - this should return 400, not 500 + mock_process_all_reviews = mocker.patch( + "backend.api.features.executions.review.routes.process_all_reviews_for_execution" + ) + mock_process_all_reviews.side_effect = ValueError( + "Invalid node execution ID format" + ) + + request_data = { + "reviews": [ + { + "node_exec_id": "invalid-node-format", + "approved": True, + "message": "Test", + } + ] + } + + response = client.post("/api/review/action", json=request_data) + + # Should be a 400 Bad Request, not 500 Internal Server Error + assert response.status_code == 400 + assert "Invalid node execution ID format" in response.json()["detail"] diff --git a/autogpt_platform/backend/backend/api/features/executions/review/routes.py b/autogpt_platform/backend/backend/api/features/executions/review/routes.py new file mode 100644 index 0000000000..88646046da --- /dev/null +++ b/autogpt_platform/backend/backend/api/features/executions/review/routes.py @@ -0,0 +1,186 @@ +import logging +from typing import List + +import autogpt_libs.auth as autogpt_auth_lib +from fastapi import APIRouter, HTTPException, Query, Security, status +from prisma.enums import ReviewStatus + +from backend.data.execution import get_graph_execution_meta +from backend.data.human_review import ( + get_pending_reviews_for_execution, + get_pending_reviews_for_user, + has_pending_reviews_for_graph_exec, + process_all_reviews_for_execution, +) +from backend.executor.utils import add_graph_execution + +from .model import PendingHumanReviewModel, ReviewRequest, ReviewResponse + +logger = logging.getLogger(__name__) + + +router = APIRouter( + tags=["v2", "executions", "review"], + dependencies=[Security(autogpt_auth_lib.requires_user)], +) + + +@router.get( + "/pending", + summary="Get Pending Reviews", + response_model=List[PendingHumanReviewModel], + responses={ + 200: {"description": "List of pending reviews"}, + 500: {"description": "Server error", "content": {"application/json": {}}}, + }, +) +async def list_pending_reviews( + user_id: str = Security(autogpt_auth_lib.get_user_id), + page: int = Query(1, ge=1, description="Page number (1-indexed)"), + page_size: int = Query(25, ge=1, le=100, description="Number of reviews per page"), +) -> List[PendingHumanReviewModel]: + """Get all pending reviews for the current 
user. + + Retrieves all reviews with status "WAITING" that belong to the authenticated user. + Results are ordered by creation time (newest first). + + Args: + user_id: Authenticated user ID from security dependency + + Returns: + List of pending review objects with status converted to typed literals + + Raises: + HTTPException: If authentication fails or database error occurs + + Note: + Reviews with invalid status values are logged as warnings but excluded + from results rather than failing the entire request. + """ + + return await get_pending_reviews_for_user(user_id, page, page_size) + + +@router.get( + "/execution/{graph_exec_id}", + summary="Get Pending Reviews for Execution", + response_model=List[PendingHumanReviewModel], + responses={ + 200: {"description": "List of pending reviews for the execution"}, + 404: {"description": "Graph execution not found"}, + 500: {"description": "Server error", "content": {"application/json": {}}}, + }, +) +async def list_pending_reviews_for_execution( + graph_exec_id: str, + user_id: str = Security(autogpt_auth_lib.get_user_id), +) -> List[PendingHumanReviewModel]: + """Get all pending reviews for a specific graph execution. + + Retrieves all reviews with status "WAITING" for the specified graph execution + that belong to the authenticated user. Results are ordered by creation time + (oldest first) to preserve review order within the execution. + + Args: + graph_exec_id: ID of the graph execution to get reviews for + user_id: Authenticated user ID from security dependency + + Returns: + List of pending review objects for the specified execution + + Raises: + HTTPException: + - 404: If the graph execution doesn't exist or isn't owned by this user + - 500: If authentication fails or database error occurs + + Note: + Only returns reviews owned by the authenticated user for security. + Reviews with invalid status are excluded with warning logs. 
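+
+    Example:
+        The route tests mount this router under /api/review, so a request there
+        looks like: GET /api/review/execution/{graph_exec_id}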
+ """ + + # Verify user owns the graph execution before returning reviews + graph_exec = await get_graph_execution_meta( + user_id=user_id, execution_id=graph_exec_id + ) + if not graph_exec: + raise HTTPException( + status_code=status.HTTP_404_NOT_FOUND, + detail=f"Graph execution #{graph_exec_id} not found", + ) + + return await get_pending_reviews_for_execution(graph_exec_id, user_id) + + +@router.post("/action", response_model=ReviewResponse) +async def process_review_action( + request: ReviewRequest, + user_id: str = Security(autogpt_auth_lib.get_user_id), +) -> ReviewResponse: + """Process reviews with approve or reject actions.""" + + # Collect all node exec IDs from the request + all_request_node_ids = {review.node_exec_id for review in request.reviews} + + if not all_request_node_ids: + raise HTTPException( + status_code=status.HTTP_400_BAD_REQUEST, + detail="At least one review must be provided", + ) + + # Build review decisions map + review_decisions = {} + for review in request.reviews: + review_status = ( + ReviewStatus.APPROVED if review.approved else ReviewStatus.REJECTED + ) + review_decisions[review.node_exec_id] = ( + review_status, + review.reviewed_data, + review.message, + ) + + # Process all reviews + updated_reviews = await process_all_reviews_for_execution( + user_id=user_id, + review_decisions=review_decisions, + ) + + # Count results + approved_count = sum( + 1 + for review in updated_reviews.values() + if review.status == ReviewStatus.APPROVED + ) + rejected_count = sum( + 1 + for review in updated_reviews.values() + if review.status == ReviewStatus.REJECTED + ) + + # Resume execution if we processed some reviews + if updated_reviews: + # Get graph execution ID from any processed review + first_review = next(iter(updated_reviews.values())) + graph_exec_id = first_review.graph_exec_id + + # Check if any pending reviews remain for this execution + still_has_pending = await has_pending_reviews_for_graph_exec(graph_exec_id) + + if not still_has_pending: + # Resume execution + try: + await add_graph_execution( + graph_id=first_review.graph_id, + user_id=user_id, + graph_exec_id=graph_exec_id, + ) + logger.info(f"Resumed execution {graph_exec_id}") + except Exception as e: + logger.error(f"Failed to resume execution {graph_exec_id}: {str(e)}") + + return ReviewResponse( + approved_count=approved_count, + rejected_count=rejected_count, + failed_count=0, + error=None, + ) diff --git a/autogpt_platform/backend/backend/api/features/integrations/__init__.py b/autogpt_platform/backend/backend/api/features/integrations/__init__.py new file mode 100644 index 0000000000..e69de29bb2 diff --git a/autogpt_platform/backend/backend/server/integrations/models.py b/autogpt_platform/backend/backend/api/features/integrations/models.py similarity index 100% rename from autogpt_platform/backend/backend/server/integrations/models.py rename to autogpt_platform/backend/backend/api/features/integrations/models.py diff --git a/autogpt_platform/backend/backend/server/integrations/router.py b/autogpt_platform/backend/backend/api/features/integrations/router.py similarity index 72% rename from autogpt_platform/backend/backend/server/integrations/router.py rename to autogpt_platform/backend/backend/api/features/integrations/router.py index cc509c698e..f5dd8c092b 100644 --- a/autogpt_platform/backend/backend/server/integrations/router.py +++ b/autogpt_platform/backend/backend/api/features/integrations/router.py @@ -1,7 +1,7 @@ import asyncio import logging from datetime import datetime, timedelta, 
timezone -from typing import TYPE_CHECKING, Annotated, Awaitable, List, Literal +from typing import TYPE_CHECKING, Annotated, List, Literal from autogpt_libs.auth import get_user_id from fastapi import ( @@ -17,9 +17,12 @@ from fastapi import ( from pydantic import BaseModel, Field, SecretStr from starlette.status import HTTP_500_INTERNAL_SERVER_ERROR, HTTP_502_BAD_GATEWAY -from backend.data.graph import get_graph, set_node_webhook +from backend.api.features.library.db import set_preset_webhook, update_preset +from backend.api.features.library.model import LibraryAgentPreset +from backend.data.graph import NodeModel, get_graph, set_node_webhook from backend.data.integrations import ( WebhookEvent, + WebhookWithRelations, get_all_webhooks_by_creds, get_webhook, publish_webhook_event, @@ -32,7 +35,11 @@ from backend.data.model import ( OAuth2Credentials, UserIntegrations, ) -from backend.data.onboarding import complete_webhook_trigger_step +from backend.data.onboarding import ( + OnboardingStep, + complete_onboarding_step, + increment_runs, +) from backend.data.user import get_user_integrations from backend.executor.utils import add_graph_execution from backend.integrations.ayrshare import AyrshareClient, SocialPlatform @@ -40,15 +47,16 @@ from backend.integrations.creds_manager import IntegrationCredentialsManager from backend.integrations.oauth import CREDENTIALS_BY_PROVIDER, HANDLERS_BY_NAME from backend.integrations.providers import ProviderName from backend.integrations.webhooks import get_webhook_manager -from backend.server.integrations.models import ( - ProviderConstants, - ProviderNamesResponse, - get_all_provider_names, +from backend.util.exceptions import ( + GraphNotInLibraryError, + MissingConfigError, + NeedConfirmation, + NotFoundError, ) -from backend.server.v2.library.db import set_preset_webhook, update_preset -from backend.util.exceptions import MissingConfigError, NeedConfirmation, NotFoundError from backend.util.settings import Settings +from .models import ProviderConstants, ProviderNamesResponse, get_all_provider_names + if TYPE_CHECKING: from backend.integrations.oauth import BaseOAuthHandler @@ -369,65 +377,24 @@ async def webhook_ingress_generic( if not (webhook.triggered_nodes or webhook.triggered_presets): return - executions: list[Awaitable] = [] - await complete_webhook_trigger_step(user_id) + await complete_onboarding_step(user_id, OnboardingStep.TRIGGER_WEBHOOK) + await increment_runs(user_id) - for node in webhook.triggered_nodes: - logger.debug(f"Webhook-attached node: {node}") - if not node.is_triggered_by_event_type(event_type): - logger.debug(f"Node #{node.id} doesn't trigger on event {event_type}") - continue - logger.debug(f"Executing graph #{node.graph_id} node #{node.id}") - executions.append( - add_graph_execution( - user_id=webhook.user_id, - graph_id=node.graph_id, - graph_version=node.graph_version, - nodes_input_masks={node.id: {"payload": payload}}, - ) + # Execute all triggers concurrently for better performance + tasks = [] + tasks.extend( + _execute_webhook_node_trigger(node, webhook, webhook_id, event_type, payload) + for node in webhook.triggered_nodes + ) + tasks.extend( + _execute_webhook_preset_trigger( + preset, webhook, webhook_id, event_type, payload ) - for preset in webhook.triggered_presets: - logger.debug(f"Webhook-attached preset: {preset}") - if not preset.is_active: - logger.debug(f"Preset #{preset.id} is inactive") - continue + for preset in webhook.triggered_presets + ) - graph = await get_graph(preset.graph_id, 
preset.graph_version, webhook.user_id) - if not graph: - logger.error( - f"User #{webhook.user_id} has preset #{preset.id} for graph " - f"#{preset.graph_id} v{preset.graph_version}, " - "but no access to the graph itself." - ) - logger.info(f"Automatically deactivating broken preset #{preset.id}") - await update_preset(preset.user_id, preset.id, is_active=False) - continue - if not (trigger_node := graph.webhook_input_node): - # NOTE: this should NEVER happen, but we log and handle it gracefully - logger.error( - f"Preset #{preset.id} is triggered by webhook #{webhook.id}, but graph " - f"#{preset.graph_id} v{preset.graph_version} has no webhook input node" - ) - await set_preset_webhook(preset.user_id, preset.id, None) - continue - if not trigger_node.block.is_triggered_by_event_type(preset.inputs, event_type): - logger.debug(f"Preset #{preset.id} doesn't trigger on event {event_type}") - continue - logger.debug(f"Executing preset #{preset.id} for webhook #{webhook.id}") - - executions.append( - add_graph_execution( - user_id=webhook.user_id, - graph_id=preset.graph_id, - preset_id=preset.id, - graph_version=preset.graph_version, - graph_credentials_inputs=preset.credentials, - nodes_input_masks={ - trigger_node.id: {**preset.inputs, "payload": payload} - }, - ) - ) - asyncio.gather(*executions) + if tasks: + await asyncio.gather(*tasks, return_exceptions=True) @router.post("/webhooks/{webhook_id}/ping") @@ -456,6 +423,105 @@ async def webhook_ping( return True +async def _execute_webhook_node_trigger( + node: NodeModel, + webhook: WebhookWithRelations, + webhook_id: str, + event_type: str, + payload: dict, +) -> None: + """Execute a webhook-triggered node.""" + logger.debug(f"Webhook-attached node: {node}") + if not node.is_triggered_by_event_type(event_type): + logger.debug(f"Node #{node.id} doesn't trigger on event {event_type}") + return + logger.debug(f"Executing graph #{node.graph_id} node #{node.id}") + try: + await add_graph_execution( + user_id=webhook.user_id, + graph_id=node.graph_id, + graph_version=node.graph_version, + nodes_input_masks={node.id: {"payload": payload}}, + ) + except GraphNotInLibraryError as e: + logger.warning( + f"Webhook #{webhook_id} execution blocked for " + f"deleted/archived graph #{node.graph_id} (node #{node.id}): {e}" + ) + # Clean up orphaned webhook trigger for this graph + await _cleanup_orphaned_webhook_for_graph( + node.graph_id, webhook.user_id, webhook_id + ) + except Exception: + logger.exception( + f"Failed to execute graph #{node.graph_id} via webhook #{webhook_id}" + ) + # Continue processing - webhook should be resilient to individual failures + + +async def _execute_webhook_preset_trigger( + preset: LibraryAgentPreset, + webhook: WebhookWithRelations, + webhook_id: str, + event_type: str, + payload: dict, +) -> None: + """Execute a webhook-triggered preset.""" + logger.debug(f"Webhook-attached preset: {preset}") + if not preset.is_active: + logger.debug(f"Preset #{preset.id} is inactive") + return + + graph = await get_graph( + preset.graph_id, preset.graph_version, user_id=webhook.user_id + ) + if not graph: + logger.error( + f"User #{webhook.user_id} has preset #{preset.id} for graph " + f"#{preset.graph_id} v{preset.graph_version}, " + "but no access to the graph itself." 
+ ) + logger.info(f"Automatically deactivating broken preset #{preset.id}") + await update_preset(preset.user_id, preset.id, is_active=False) + return + if not (trigger_node := graph.webhook_input_node): + # NOTE: this should NEVER happen, but we log and handle it gracefully + logger.error( + f"Preset #{preset.id} is triggered by webhook #{webhook.id}, but graph " + f"#{preset.graph_id} v{preset.graph_version} has no webhook input node" + ) + await set_preset_webhook(preset.user_id, preset.id, None) + return + if not trigger_node.block.is_triggered_by_event_type(preset.inputs, event_type): + logger.debug(f"Preset #{preset.id} doesn't trigger on event {event_type}") + return + logger.debug(f"Executing preset #{preset.id} for webhook #{webhook.id}") + + try: + await add_graph_execution( + user_id=webhook.user_id, + graph_id=preset.graph_id, + preset_id=preset.id, + graph_version=preset.graph_version, + graph_credentials_inputs=preset.credentials, + nodes_input_masks={trigger_node.id: {**preset.inputs, "payload": payload}}, + ) + except GraphNotInLibraryError as e: + logger.warning( + f"Webhook #{webhook_id} execution blocked for " + f"deleted/archived graph #{preset.graph_id} (preset #{preset.id}): {e}" + ) + # Clean up orphaned webhook trigger for this graph + await _cleanup_orphaned_webhook_for_graph( + preset.graph_id, webhook.user_id, webhook_id + ) + except Exception: + logger.exception( + f"Failed to execute preset #{preset.id} via webhook #{webhook_id}" + ) + # Continue processing - webhook should be resilient to individual failures + + # --------------------------- UTILITIES ---------------------------- # @@ -496,6 +562,98 @@ async def remove_all_webhooks_for_credentials( logger.warning(f"Webhook #{webhook.id} failed to prune") + +async def _cleanup_orphaned_webhook_for_graph( + graph_id: str, user_id: str, webhook_id: str +) -> None: + """ + Clean up orphaned webhook connections for a specific graph when execution fails with GraphNotInLibraryError. This happens when an agent is pulled from the Marketplace or deleted + but webhook triggers still exist.
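+
+    The cleanup happens in three steps: detach triggered nodes that belong to
+    the graph, detach triggered presets that belong to the graph, and finally
+    prune the webhook itself if no triggers remain.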
+ """ + try: + webhook = await get_webhook(webhook_id, include_relations=True) + if not webhook or webhook.user_id != user_id: + logger.warning( + f"Webhook {webhook_id} not found or doesn't belong to user {user_id}" + ) + return + + nodes_removed = 0 + presets_removed = 0 + + # Remove triggered nodes that belong to the deleted graph + for node in webhook.triggered_nodes: + if node.graph_id == graph_id: + try: + await set_node_webhook(node.id, None) + nodes_removed += 1 + logger.info( + f"Removed orphaned webhook trigger from node {node.id} " + f"in deleted/archived graph {graph_id}" + ) + except Exception: + logger.exception( + f"Failed to remove webhook trigger from node {node.id}" + ) + + # Remove triggered presets that belong to the deleted graph + for preset in webhook.triggered_presets: + if preset.graph_id == graph_id: + try: + await set_preset_webhook(user_id, preset.id, None) + presets_removed += 1 + logger.info( + f"Removed orphaned webhook trigger from preset {preset.id} " + f"for deleted/archived graph {graph_id}" + ) + except Exception: + logger.exception( + f"Failed to remove webhook trigger from preset {preset.id}" + ) + + if nodes_removed > 0 or presets_removed > 0: + logger.info( + f"Cleaned up orphaned webhook #{webhook_id}: " + f"removed {nodes_removed} nodes and {presets_removed} presets " + f"for deleted/archived graph #{graph_id}" + ) + + # Check if webhook has any remaining triggers, if not, prune it + updated_webhook = await get_webhook(webhook_id, include_relations=True) + if ( + not updated_webhook.triggered_nodes + and not updated_webhook.triggered_presets + ): + try: + webhook_manager = get_webhook_manager( + ProviderName(webhook.provider) + ) + credentials = ( + await creds_manager.get(user_id, webhook.credentials_id) + if webhook.credentials_id + else None + ) + success = await webhook_manager.prune_webhook_if_dangling( + user_id, webhook.id, credentials + ) + if success: + logger.info( + f"Pruned orphaned webhook #{webhook_id} " + f"with no remaining triggers" + ) + else: + logger.warning( + f"Failed to prune orphaned webhook #{webhook_id}" + ) + except Exception: + logger.exception(f"Failed to prune orphaned webhook #{webhook_id}") + + except Exception: + logger.exception( + f"Failed to cleanup orphaned webhook #{webhook_id} for graph #{graph_id}" + ) + + def _get_provider_oauth_handler( req: Request, provider_name: ProviderName ) -> "BaseOAuthHandler": diff --git a/autogpt_platform/backend/backend/api/features/library/__init__.py b/autogpt_platform/backend/backend/api/features/library/__init__.py new file mode 100644 index 0000000000..e69de29bb2 diff --git a/autogpt_platform/backend/backend/server/v2/library/db.py b/autogpt_platform/backend/backend/api/features/library/db.py similarity index 83% rename from autogpt_platform/backend/backend/server/v2/library/db.py rename to autogpt_platform/backend/backend/api/features/library/db.py index 6e9082492a..69ed0d2730 100644 --- a/autogpt_platform/backend/backend/server/v2/library/db.py +++ b/autogpt_platform/backend/backend/api/features/library/db.py @@ -4,27 +4,30 @@ from typing import Literal, Optional import fastapi import prisma.errors -import prisma.fields import prisma.models import prisma.types +import backend.api.features.store.exceptions as store_exceptions +import backend.api.features.store.image_gen as store_image_gen +import backend.api.features.store.media as store_media import backend.data.graph as graph_db -import backend.server.v2.library.model as library_model -import 
backend.server.v2.store.exceptions as store_exceptions -import backend.server.v2.store.image_gen as store_image_gen -import backend.server.v2.store.media as store_media +import backend.data.integrations as integrations_db from backend.data.block import BlockInput from backend.data.db import transaction from backend.data.execution import get_graph_execution +from backend.data.graph import GraphSettings from backend.data.includes import AGENT_PRESET_INCLUDE, library_agent_include from backend.data.model import CredentialsMetaInput from backend.integrations.creds_manager import IntegrationCredentialsManager from backend.integrations.webhooks.graph_lifecycle_hooks import on_graph_activate +from backend.util.clients import get_scheduler_client from backend.util.exceptions import DatabaseError, NotFoundError from backend.util.json import SafeJson from backend.util.models import Pagination from backend.util.settings import Config +from . import model as library_model + logger = logging.getLogger(__name__) config = Config() integration_creds_manager = IntegrationCredentialsManager() @@ -260,6 +263,30 @@ async def get_library_agent(id: str, user_id: str) -> library_model.LibraryAgent if not library_agent: raise NotFoundError(f"Library agent #{id} not found") + # Fetch marketplace listing if the agent has been published + store_listing = None + profile = None + if library_agent.AgentGraph: + store_listing = await prisma.models.StoreListing.prisma().find_first( + where={ + "agentGraphId": library_agent.AgentGraph.id, + "isDeleted": False, + "hasApprovedVersion": True, + }, + include={ + "ActiveVersion": True, + }, + ) + if ( + store_listing + and store_listing.ActiveVersion + and store_listing.owningUserId + ): + # Fetch Profile separately since User doesn't have a direct Profile relation + profile = await prisma.models.Profile.prisma().find_first( + where={"userId": store_listing.owningUserId} + ) + return library_model.LibraryAgent.from_db( library_agent, sub_graphs=( @@ -267,6 +294,8 @@ async def get_library_agent(id: str, user_id: str) -> library_model.LibraryAgent if library_agent.AgentGraph else None ), + store_listing=store_listing, + profile=profile, ) except prisma.errors.PrismaError as e: @@ -372,6 +401,24 @@ async def add_generated_agent_image( ) +def _initialize_graph_settings(graph: graph_db.GraphModel) -> GraphSettings: + """ + Initialize GraphSettings based on graph content. + + Args: + graph: The graph to analyze + + Returns: + GraphSettings with appropriate human_in_the_loop_safe_mode value + """ + if graph.has_human_in_the_loop: + # Graph has HITL blocks - set safe mode to True by default + return GraphSettings(human_in_the_loop_safe_mode=True) + else: + # Graph has no HITL blocks - keep None + return GraphSettings(human_in_the_loop_safe_mode=None) + + async def create_library_agent( graph: graph_db.GraphModel, user_id: str, @@ -394,8 +441,7 @@ async def create_library_agent( DatabaseError: If there's an error during creation or if image generation fails. 
""" logger.info( - f"Creating library agent for graph #{graph.id} v{graph.version}; " - f"user #{user_id}" + f"Creating library agent for graph #{graph.id} v{graph.version}; user:" ) graph_entries = ( [graph, *graph.sub_graphs] if create_library_agents_for_sub_graphs else [graph] @@ -418,6 +464,9 @@ async def create_library_agent( } } }, + settings=SafeJson( + _initialize_graph_settings(graph_entry).model_dump() + ), ), include=library_agent_include( user_id, include_nodes=False, include_executions=False @@ -438,7 +487,7 @@ async def update_agent_version_in_library( user_id: str, agent_graph_id: str, agent_graph_version: int, -) -> None: +) -> library_model.LibraryAgent: """ Updates the agent version in the library if useGraphIsActiveVersion is True. @@ -462,7 +511,7 @@ async def update_agent_version_in_library( "useGraphIsActiveVersion": True, }, ) - await prisma.models.LibraryAgent.prisma().update( + lib = await prisma.models.LibraryAgent.prisma().update( where={"id": library_agent.id}, data={ "AgentGraph": { @@ -474,7 +523,12 @@ async def update_agent_version_in_library( }, }, }, + include={"AgentGraph": True}, ) + if lib is None: + raise NotFoundError(f"Library agent {library_agent.id} not found") + + return library_model.LibraryAgent.from_db(lib) except prisma.errors.PrismaError as e: logger.error(f"Database error updating agent version in library: {e}") raise DatabaseError("Failed to update agent version in library") from e @@ -484,9 +538,11 @@ async def update_library_agent( library_agent_id: str, user_id: str, auto_update_version: Optional[bool] = None, + graph_version: Optional[int] = None, is_favorite: Optional[bool] = None, is_archived: Optional[bool] = None, is_deleted: Optional[Literal[False]] = None, + settings: Optional[GraphSettings] = None, ) -> library_model.LibraryAgent: """ Updates the specified LibraryAgent record. @@ -495,8 +551,10 @@ async def update_library_agent( library_agent_id: The ID of the LibraryAgent to update. user_id: The owner of this LibraryAgent. auto_update_version: Whether the agent should auto-update to active version. + graph_version: Specific graph version to update to. is_favorite: Whether this agent is marked as a favorite. is_archived: Whether this agent is archived. + settings: User-specific settings for this library agent. Returns: The updated LibraryAgent. 
@@ -507,8 +565,8 @@ async def update_library_agent( """ logger.debug( f"Updating library agent {library_agent_id} for user {user_id} with " - f"auto_update_version={auto_update_version}, is_favorite={is_favorite}, " - f"is_archived={is_archived}" + f"auto_update_version={auto_update_version}, graph_version={graph_version}, " + f"is_favorite={is_favorite}, is_archived={is_archived}, settings={settings}" ) update_fields: prisma.types.LibraryAgentUpdateManyMutationInput = {} if auto_update_version is not None: @@ -523,10 +581,25 @@ async def update_library_agent( "Use delete_library_agent() to (soft-)delete library agents" ) update_fields["isDeleted"] = is_deleted - if not update_fields: - raise ValueError("No values were passed to update") + if settings is not None: + update_fields["settings"] = SafeJson(settings.model_dump()) try: + # If graph_version is provided, update to that specific version + if graph_version is not None: + # Get the current agent to find its graph_id + agent = await get_library_agent(id=library_agent_id, user_id=user_id) + # Update to the specified version using existing function + return await update_agent_version_in_library( + user_id=user_id, + agent_graph_id=agent.graph_id, + agent_graph_version=graph_version, + ) + + # Otherwise, just update the simple fields + if not update_fields: + raise ValueError("No values were passed to update") + n_updated = await prisma.models.LibraryAgent.prisma().update_many( where={"id": library_agent_id, "userId": user_id}, data=update_fields, @@ -543,21 +616,118 @@ async def update_library_agent( raise DatabaseError("Failed to update library agent") from e +async def update_library_agent_settings( + user_id: str, + agent_id: str, + settings: GraphSettings, +) -> library_model.LibraryAgent: + """ + Updates the settings for a specific LibraryAgent. + + Args: + user_id: The owner of the LibraryAgent. + agent_id: The ID of the LibraryAgent to update. + settings: New GraphSettings to apply. + + Returns: + The updated LibraryAgent. + + Raises: + NotFoundError: If the specified LibraryAgent does not exist. + DatabaseError: If there's an error in the update operation. 
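+
+    Note:
+        This is a thin wrapper around update_library_agent() that updates only
+        the settings field.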
+ """ + return await update_library_agent( + library_agent_id=agent_id, + user_id=user_id, + settings=settings, + ) + + async def delete_library_agent( library_agent_id: str, user_id: str, soft_delete: bool = True ) -> None: + # First get the agent to find the graph_id for cleanup + library_agent = await prisma.models.LibraryAgent.prisma().find_unique( + where={"id": library_agent_id}, include={"AgentGraph": True} + ) + + if not library_agent or library_agent.userId != user_id: + raise NotFoundError(f"Library agent #{library_agent_id} not found") + + graph_id = library_agent.agentGraphId + + # Clean up associated schedules and webhooks BEFORE deleting the agent + # This prevents executions from starting after agent deletion + await _cleanup_schedules_for_graph(graph_id=graph_id, user_id=user_id) + await _cleanup_webhooks_for_graph(graph_id=graph_id, user_id=user_id) + + # Delete the library agent after cleanup if soft_delete: deleted_count = await prisma.models.LibraryAgent.prisma().update_many( - where={"id": library_agent_id, "userId": user_id}, data={"isDeleted": True} + where={"id": library_agent_id, "userId": user_id}, + data={"isDeleted": True}, ) else: deleted_count = await prisma.models.LibraryAgent.prisma().delete_many( where={"id": library_agent_id, "userId": user_id} ) + if deleted_count < 1: raise NotFoundError(f"Library agent #{library_agent_id} not found") +async def _cleanup_schedules_for_graph(graph_id: str, user_id: str) -> None: + """ + Clean up all schedules for a specific graph and user. + + Args: + graph_id: The ID of the graph + user_id: The ID of the user + """ + scheduler_client = get_scheduler_client() + schedules = await scheduler_client.get_execution_schedules( + graph_id=graph_id, user_id=user_id + ) + + for schedule in schedules: + try: + await scheduler_client.delete_schedule( + schedule_id=schedule.id, user_id=user_id + ) + logger.info(f"Deleted schedule {schedule.id} for graph {graph_id}") + except Exception: + logger.exception( + f"Failed to delete schedule {schedule.id} for graph {graph_id}" + ) + + +async def _cleanup_webhooks_for_graph(graph_id: str, user_id: str) -> None: + """ + Clean up webhook connections for a specific graph and user. + Unlinks webhooks from this graph and deletes them if no other triggers remain. 
+ + Args: + graph_id: The ID of the graph + user_id: The ID of the user + """ + # Find all webhooks that trigger nodes in this graph + webhooks = await integrations_db.find_webhooks_by_graph_id( + graph_id=graph_id, user_id=user_id + ) + + for webhook in webhooks: + try: + # Unlink webhook from this graph's nodes and presets + await integrations_db.unlink_webhook_from_graph( + webhook_id=webhook.id, graph_id=graph_id, user_id=user_id + ) + logger.info(f"Unlinked webhook {webhook.id} from graph {graph_id}") + except Exception: + logger.exception( + f"Failed to unlink webhook {webhook.id} from graph {graph_id}" + ) + + async def delete_library_agent_by_graph_id(graph_id: str, user_id: str) -> None: """ Deletes a library agent for the given user @@ -609,6 +779,18 @@ async def add_store_agent_to_library( graph = store_listing_version.AgentGraph + # Convert to GraphModel to check for HITL blocks + graph_model = await graph_db.get_graph( + graph_id=graph.id, + version=graph.version, + user_id=user_id, + include_subgraphs=False, + ) + if not graph_model: + raise store_exceptions.AgentNotFoundError( + f"Graph #{graph.id} v{graph.version} not found or accessible" + ) + # Check if user already has this agent existing_library_agent = await prisma.models.LibraryAgent.prisma().find_unique( where={ @@ -643,6 +825,9 @@ async def add_store_agent_to_library( } }, "isCreatedByUser": False, + "settings": SafeJson( + _initialize_graph_settings(graph_model).model_dump() + ), }, include=library_agent_include( user_id, include_nodes=False, include_executions=False diff --git a/autogpt_platform/backend/backend/server/v2/library/db_test.py b/autogpt_platform/backend/backend/api/features/library/db_test.py similarity index 77% rename from autogpt_platform/backend/backend/server/v2/library/db_test.py rename to autogpt_platform/backend/backend/api/features/library/db_test.py index 2d42d26cfa..6023177070 100644 --- a/autogpt_platform/backend/backend/server/v2/library/db_test.py +++ b/autogpt_platform/backend/backend/api/features/library/db_test.py @@ -1,16 +1,15 @@ from datetime import datetime import prisma.enums -import prisma.errors import prisma.models -import prisma.types import pytest -import backend.server.v2.library.db as db -import backend.server.v2.store.exceptions +import backend.api.features.store.exceptions from backend.data.db import connect from backend.data.includes import library_agent_include +from . 
import db + @pytest.mark.asyncio async def test_get_library_agents(mocker): @@ -32,6 +31,7 @@ async def test_get_library_agents(mocker): id="ua1", userId="test-user", agentGraphId="agent2", + settings="{}", # type: ignore agentGraphVersion=1, isCreatedByUser=False, isDeleted=False, @@ -87,7 +87,7 @@ async def test_add_agent_to_library(mocker): await connect() # Mock the transaction context - mock_transaction = mocker.patch("backend.server.v2.library.db.transaction") + mock_transaction = mocker.patch("backend.api.features.library.db.transaction") mock_transaction.return_value.__aenter__ = mocker.AsyncMock(return_value=None) mock_transaction.return_value.__aexit__ = mocker.AsyncMock(return_value=None) # Mock data @@ -123,6 +123,7 @@ async def test_add_agent_to_library(mocker): id="ua1", userId="test-user", agentGraphId=mock_store_listing_data.agentGraphId, + settings="{}", # type: ignore agentGraphVersion=1, isCreatedByUser=False, isDeleted=False, @@ -148,8 +149,18 @@ async def test_add_agent_to_library(mocker): return_value=mock_library_agent_data ) + # Mock graph_db.get_graph function that's called to check for HITL blocks + mock_graph_db = mocker.patch("backend.api.features.library.db.graph_db") + mock_graph_model = mocker.Mock() + mock_graph_model.nodes = ( + [] + ) # Empty list so _has_human_in_the_loop_blocks returns False + mock_graph_db.get_graph = mocker.AsyncMock(return_value=mock_graph_model) + # Mock the model conversion - mock_from_db = mocker.patch("backend.server.v2.library.model.LibraryAgent.from_db") + mock_from_db = mocker.patch( + "backend.api.features.library.model.LibraryAgent.from_db" + ) mock_from_db.return_value = mocker.Mock() # Call function @@ -169,17 +180,29 @@ async def test_add_agent_to_library(mocker): }, include={"AgentGraph": True}, ) - mock_library_agent.return_value.create.assert_called_once_with( - data={ - "User": {"connect": {"id": "test-user"}}, - "AgentGraph": { - "connect": {"graphVersionId": {"id": "agent1", "version": 1}} - }, - "isCreatedByUser": False, - }, - include=library_agent_include( - "test-user", include_nodes=False, include_executions=False - ), + # Check that create was called with the expected data including settings + create_call_args = mock_library_agent.return_value.create.call_args + assert create_call_args is not None + + # Verify the main structure + expected_data = { + "User": {"connect": {"id": "test-user"}}, + "AgentGraph": {"connect": {"graphVersionId": {"id": "agent1", "version": 1}}}, + "isCreatedByUser": False, + } + + actual_data = create_call_args[1]["data"] + # Check that all expected fields are present + for key, value in expected_data.items(): + assert actual_data[key] == value + + # Check that settings field is present and is a SafeJson object + assert "settings" in actual_data + assert hasattr(actual_data["settings"], "__class__") # Should be a SafeJson object + + # Check include parameter + assert create_call_args[1]["include"] == library_agent_include( + "test-user", include_nodes=False, include_executions=False ) @@ -195,7 +218,7 @@ async def test_add_agent_to_library_not_found(mocker): ) # Call function and verify exception - with pytest.raises(backend.server.v2.store.exceptions.AgentNotFoundError): + with pytest.raises(backend.api.features.store.exceptions.AgentNotFoundError): await db.add_store_agent_to_library("version123", "test-user") # Verify mock called correctly diff --git a/autogpt_platform/backend/backend/server/v2/library/model.py b/autogpt_platform/backend/backend/api/features/library/model.py 
similarity index 83% rename from autogpt_platform/backend/backend/server/v2/library/model.py rename to autogpt_platform/backend/backend/api/features/library/model.py index f4c1a35177..c20f82afae 100644 --- a/autogpt_platform/backend/backend/server/v2/library/model.py +++ b/autogpt_platform/backend/backend/api/features/library/model.py @@ -6,8 +6,8 @@ import prisma.enums import prisma.models import pydantic -import backend.data.block as block_model -import backend.data.graph as graph_model +from backend.data.block import BlockInput +from backend.data.graph import GraphModel, GraphSettings, GraphTriggerInfo from backend.data.model import CredentialsMetaInput, is_credentials_field_name from backend.util.models import Pagination @@ -22,6 +22,23 @@ class LibraryAgentStatus(str, Enum): ERROR = "ERROR" # Agent is in an error state +class MarketplaceListingCreator(pydantic.BaseModel): + """Creator information for a marketplace listing.""" + + name: str + id: str + slug: str + + +class MarketplaceListing(pydantic.BaseModel): + """Marketplace listing information for a library agent.""" + + id: str + name: str + slug: str + creator: MarketplaceListingCreator + + class LibraryAgent(pydantic.BaseModel): """ Represents an agent in the library, including metadata for display and @@ -39,6 +56,7 @@ class LibraryAgent(pydantic.BaseModel): status: LibraryAgentStatus + created_at: datetime.datetime updated_at: datetime.datetime name: str @@ -54,7 +72,7 @@ class LibraryAgent(pydantic.BaseModel): has_external_trigger: bool = pydantic.Field( description="Whether the agent has an external trigger (e.g. webhook) node" ) - trigger_setup_info: Optional[graph_model.GraphTriggerInfo] = None + trigger_setup_info: Optional[GraphTriggerInfo] = None # Indicates whether there's a new output (based on recent runs) new_output: bool @@ -71,10 +89,18 @@ class LibraryAgent(pydantic.BaseModel): # Recommended schedule cron (from marketplace agents) recommended_schedule_cron: str | None = None + # User-specific settings for this library agent + settings: GraphSettings = pydantic.Field(default_factory=GraphSettings) + + # Marketplace listing information if the agent has been published + marketplace_listing: Optional["MarketplaceListing"] = None + @staticmethod def from_db( agent: prisma.models.LibraryAgent, sub_graphs: Optional[list[prisma.models.AgentGraph]] = None, + store_listing: Optional[prisma.models.StoreListing] = None, + profile: Optional[prisma.models.Profile] = None, ) -> "LibraryAgent": """ Factory method that constructs a LibraryAgent from a Prisma LibraryAgent @@ -83,7 +109,9 @@ class LibraryAgent(pydantic.BaseModel): if not agent.AgentGraph: raise ValueError("Associated Agent record is required.") - graph = graph_model.GraphModel.from_db(agent.AgentGraph, sub_graphs=sub_graphs) + graph = GraphModel.from_db(agent.AgentGraph, sub_graphs=sub_graphs) + + created_at = agent.createdAt agent_updated_at = agent.AgentGraph.updatedAt lib_agent_updated_at = agent.updatedAt @@ -116,6 +144,21 @@ class LibraryAgent(pydantic.BaseModel): # Hard-coded to True until a method to check is implemented is_latest_version = True + # Build marketplace_listing if available + marketplace_listing_data = None + if store_listing and store_listing.ActiveVersion and profile: + creator_data = MarketplaceListingCreator( + name=profile.name, + id=profile.id, + slug=profile.username, + ) + marketplace_listing_data = MarketplaceListing( + id=store_listing.id, + name=store_listing.ActiveVersion.name, + slug=store_listing.slug, + creator=creator_data, + ) 
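+        # marketplace_listing_data stays None when no active store listing or
+        # creator profile is supplied, so the LibraryAgent is returned without
+        # marketplace_listing.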
+ return LibraryAgent( id=agent.id, graph_id=agent.agentGraphId, @@ -124,6 +167,7 @@ class LibraryAgent(pydantic.BaseModel): creator_name=creator_name, creator_image_url=creator_image_url, status=status, + created_at=created_at, updated_at=updated_at, name=graph.name, description=graph.description, @@ -140,6 +184,8 @@ class LibraryAgent(pydantic.BaseModel): is_latest_version=is_latest_version, is_favorite=agent.isFavorite, recommended_schedule_cron=agent.AgentGraph.recommendedScheduleCron, + settings=GraphSettings.model_validate(agent.settings), + marketplace_listing=marketplace_listing_data, ) @@ -207,7 +253,7 @@ class LibraryAgentPresetCreatable(pydantic.BaseModel): graph_id: str graph_version: int - inputs: block_model.BlockInput + inputs: BlockInput credentials: dict[str, CredentialsMetaInput] name: str @@ -236,7 +282,7 @@ class LibraryAgentPresetUpdatable(pydantic.BaseModel): Request model used when updating a preset for a library agent. """ - inputs: Optional[block_model.BlockInput] = None + inputs: Optional[BlockInput] = None credentials: Optional[dict[str, CredentialsMetaInput]] = None name: Optional[str] = None @@ -279,7 +325,7 @@ class LibraryAgentPreset(LibraryAgentPresetCreatable): "Webhook must be included in AgentPreset query when webhookId is set" ) - input_data: block_model.BlockInput = {} + input_data: BlockInput = {} input_credentials: dict[str, CredentialsMetaInput] = {} for preset_input in preset.InputPresets: @@ -339,9 +385,15 @@ class LibraryAgentUpdateRequest(pydantic.BaseModel): auto_update_version: Optional[bool] = pydantic.Field( default=None, description="Auto-update the agent version" ) + graph_version: Optional[int] = pydantic.Field( + default=None, description="Specific graph version to update to" + ) is_favorite: Optional[bool] = pydantic.Field( default=None, description="Mark the agent as a favorite" ) is_archived: Optional[bool] = pydantic.Field( default=None, description="Archive the agent" ) + settings: Optional[GraphSettings] = pydantic.Field( + default=None, description="User-specific settings for this library agent" + ) diff --git a/autogpt_platform/backend/backend/server/v2/library/model_test.py b/autogpt_platform/backend/backend/api/features/library/model_test.py similarity index 95% rename from autogpt_platform/backend/backend/server/v2/library/model_test.py rename to autogpt_platform/backend/backend/api/features/library/model_test.py index d90ecf6f7a..a32b19322d 100644 --- a/autogpt_platform/backend/backend/server/v2/library/model_test.py +++ b/autogpt_platform/backend/backend/api/features/library/model_test.py @@ -3,7 +3,7 @@ import datetime import prisma.models import pytest -import backend.server.v2.library.model as library_model +from . 
import model as library_model @pytest.mark.asyncio diff --git a/autogpt_platform/backend/backend/server/v2/library/routes/__init__.py b/autogpt_platform/backend/backend/api/features/library/routes/__init__.py similarity index 100% rename from autogpt_platform/backend/backend/server/v2/library/routes/__init__.py rename to autogpt_platform/backend/backend/api/features/library/routes/__init__.py diff --git a/autogpt_platform/backend/backend/server/v2/library/routes/agents.py b/autogpt_platform/backend/backend/api/features/library/routes/agents.py similarity index 91% rename from autogpt_platform/backend/backend/server/v2/library/routes/agents.py rename to autogpt_platform/backend/backend/api/features/library/routes/agents.py index 1bdf255ce5..38c34dd3b8 100644 --- a/autogpt_platform/backend/backend/server/v2/library/routes/agents.py +++ b/autogpt_platform/backend/backend/api/features/library/routes/agents.py @@ -1,15 +1,18 @@ import logging -from typing import Optional +from typing import Literal, Optional import autogpt_libs.auth as autogpt_auth_lib from fastapi import APIRouter, Body, HTTPException, Query, Security, status from fastapi.responses import Response +from prisma.enums import OnboardingStep -import backend.server.v2.library.db as library_db -import backend.server.v2.library.model as library_model -import backend.server.v2.store.exceptions as store_exceptions +import backend.api.features.store.exceptions as store_exceptions +from backend.data.onboarding import complete_onboarding_step from backend.util.exceptions import DatabaseError, NotFoundError +from .. import db as library_db +from .. import model as library_model + logger = logging.getLogger(__name__) router = APIRouter( @@ -22,7 +25,9 @@ router = APIRouter( @router.get( "", summary="List Library Agents", + response_model=library_model.LibraryAgentResponse, responses={ + 200: {"description": "List of library agents"}, 500: {"description": "Server error", "content": {"application/json": {}}}, }, ) @@ -155,7 +160,12 @@ async def get_library_agent_by_graph_id( @router.get( "/marketplace/{store_listing_version_id}", summary="Get Agent By Store ID", - tags=["store, library"], + tags=["store", "library"], + response_model=library_model.LibraryAgent | None, + responses={ + 200: {"description": "Library agent found"}, + 404: {"description": "Agent not found"}, + }, ) async def get_library_agent_by_store_listing_version_id( store_listing_version_id: str, @@ -193,6 +203,9 @@ async def get_library_agent_by_store_listing_version_id( ) async def add_marketplace_agent_to_library( store_listing_version_id: str = Body(embed=True), + source: Literal["onboarding", "marketplace"] = Body( + default="marketplace", embed=True + ), user_id: str = Security(autogpt_auth_lib.get_user_id), ) -> library_model.LibraryAgent: """ @@ -210,10 +223,15 @@ async def add_marketplace_agent_to_library( HTTPException(500): If a server/database error occurs. 
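For reference, a hedged client-side sketch of the updated request body follows; the `/agents` path is taken from `routes_test.py` further down, the IDs are placeholders, and the final mount point of the library router in the deployed API may differ.

```python
# Hedged example (not part of the diff): adding a marketplace agent during
# onboarding. The "/agents" path matches routes_test.py; IDs are placeholders.
import httpx


async def add_agent_during_onboarding(client: httpx.AsyncClient) -> dict:
    response = await client.post(
        "/agents",
        json={
            "store_listing_version_id": "store-listing-version-id",
            # source="onboarding" skips completing the MARKETPLACE_ADD_AGENT
            # onboarding step; the default "marketplace" completes it.
            "source": "onboarding",
        },
    )
    response.raise_for_status()
    return response.json()  # the created/looked-up LibraryAgent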
""" try: - return await library_db.add_store_agent_to_library( + agent = await library_db.add_store_agent_to_library( store_listing_version_id=store_listing_version_id, user_id=user_id, ) + if source != "onboarding": + await complete_onboarding_step( + user_id, OnboardingStep.MARKETPLACE_ADD_AGENT + ) + return agent except store_exceptions.AgentNotFoundError as e: logger.warning( @@ -267,8 +285,10 @@ async def update_library_agent( library_agent_id=library_agent_id, user_id=user_id, auto_update_version=payload.auto_update_version, + graph_version=payload.graph_version, is_favorite=payload.is_favorite, is_archived=payload.is_archived, + settings=payload.settings, ) except NotFoundError as e: raise HTTPException( diff --git a/autogpt_platform/backend/backend/server/v2/library/routes/presets.py b/autogpt_platform/backend/backend/api/features/library/routes/presets.py similarity index 99% rename from autogpt_platform/backend/backend/server/v2/library/routes/presets.py rename to autogpt_platform/backend/backend/api/features/library/routes/presets.py index 12bc77629a..cd4c04e0f2 100644 --- a/autogpt_platform/backend/backend/server/v2/library/routes/presets.py +++ b/autogpt_platform/backend/backend/api/features/library/routes/presets.py @@ -4,18 +4,20 @@ from typing import Any, Optional import autogpt_libs.auth as autogpt_auth_lib from fastapi import APIRouter, Body, HTTPException, Query, Security, status -import backend.server.v2.library.db as db -import backend.server.v2.library.model as models from backend.data.execution import GraphExecutionMeta from backend.data.graph import get_graph from backend.data.integrations import get_webhook from backend.data.model import CredentialsMetaInput +from backend.data.onboarding import increment_runs from backend.executor.utils import add_graph_execution, make_node_credentials_input_map from backend.integrations.creds_manager import IntegrationCredentialsManager from backend.integrations.webhooks import get_webhook_manager from backend.integrations.webhooks.utils import setup_webhook_for_block from backend.util.exceptions import NotFoundError +from .. import db +from .. import model as models + logger = logging.getLogger(__name__) credentials_manager = IntegrationCredentialsManager() @@ -401,6 +403,8 @@ async def execute_preset( merged_node_input = preset.inputs | inputs merged_credential_inputs = preset.credentials | credential_inputs + await increment_runs(user_id) + return await add_graph_execution( user_id=user_id, graph_id=preset.graph_id, diff --git a/autogpt_platform/backend/backend/server/v2/library/routes_test.py b/autogpt_platform/backend/backend/api/features/library/routes_test.py similarity index 89% rename from autogpt_platform/backend/backend/server/v2/library/routes_test.py rename to autogpt_platform/backend/backend/api/features/library/routes_test.py index 85f66c3df2..ad28b5b6bd 100644 --- a/autogpt_platform/backend/backend/server/v2/library/routes_test.py +++ b/autogpt_platform/backend/backend/api/features/library/routes_test.py @@ -1,15 +1,17 @@ import datetime import json +from unittest.mock import AsyncMock import fastapi.testclient import pytest import pytest_mock from pytest_snapshot.plugin import Snapshot -import backend.server.v2.library.model as library_model -from backend.server.v2.library.routes import router as library_router from backend.util.models import Pagination +from . 
import model as library_model +from .routes import router as library_router + app = fastapi.FastAPI() app.include_router(library_router) @@ -55,6 +57,7 @@ async def test_get_library_agents_success( can_access_graph=True, is_latest_version=True, is_favorite=False, + created_at=datetime.datetime(2023, 1, 1, 0, 0, 0), updated_at=datetime.datetime(2023, 1, 1, 0, 0, 0), ), library_model.LibraryAgent( @@ -76,6 +79,7 @@ async def test_get_library_agents_success( can_access_graph=False, is_latest_version=True, is_favorite=False, + created_at=datetime.datetime(2023, 1, 1, 0, 0, 0), updated_at=datetime.datetime(2023, 1, 1, 0, 0, 0), ), ], @@ -83,7 +87,7 @@ async def test_get_library_agents_success( total_items=2, total_pages=1, current_page=1, page_size=50 ), ) - mock_db_call = mocker.patch("backend.server.v2.library.db.list_library_agents") + mock_db_call = mocker.patch("backend.api.features.library.db.list_library_agents") mock_db_call.return_value = mocked_value response = client.get("/agents?search_term=test") @@ -109,7 +113,7 @@ async def test_get_library_agents_success( def test_get_library_agents_error(mocker: pytest_mock.MockFixture, test_user_id: str): - mock_db_call = mocker.patch("backend.server.v2.library.db.list_library_agents") + mock_db_call = mocker.patch("backend.api.features.library.db.list_library_agents") mock_db_call.side_effect = Exception("Test error") response = client.get("/agents?search_term=test") @@ -149,6 +153,7 @@ async def test_get_favorite_library_agents_success( can_access_graph=True, is_latest_version=True, is_favorite=True, + created_at=datetime.datetime(2023, 1, 1, 0, 0, 0), updated_at=datetime.datetime(2023, 1, 1, 0, 0, 0), ), ], @@ -157,7 +162,7 @@ async def test_get_favorite_library_agents_success( ), ) mock_db_call = mocker.patch( - "backend.server.v2.library.db.list_favorite_library_agents" + "backend.api.features.library.db.list_favorite_library_agents" ) mock_db_call.return_value = mocked_value @@ -180,7 +185,7 @@ def test_get_favorite_library_agents_error( mocker: pytest_mock.MockFixture, test_user_id: str ): mock_db_call = mocker.patch( - "backend.server.v2.library.db.list_favorite_library_agents" + "backend.api.features.library.db.list_favorite_library_agents" ) mock_db_call.side_effect = Exception("Test error") @@ -214,13 +219,18 @@ def test_add_agent_to_library_success( can_access_graph=True, is_latest_version=True, is_favorite=False, + created_at=FIXED_NOW, updated_at=FIXED_NOW, ) mock_db_call = mocker.patch( - "backend.server.v2.library.db.add_store_agent_to_library" + "backend.api.features.library.db.add_store_agent_to_library" ) mock_db_call.return_value = mock_library_agent + mock_complete_onboarding = mocker.patch( + "backend.api.features.library.routes.agents.complete_onboarding_step", + new_callable=AsyncMock, + ) response = client.post( "/agents", json={"store_listing_version_id": "test-version-id"} @@ -235,11 +245,12 @@ def test_add_agent_to_library_success( mock_db_call.assert_called_once_with( store_listing_version_id="test-version-id", user_id=test_user_id ) + mock_complete_onboarding.assert_awaited_once() def test_add_agent_to_library_error(mocker: pytest_mock.MockFixture, test_user_id: str): mock_db_call = mocker.patch( - "backend.server.v2.library.db.add_store_agent_to_library" + "backend.api.features.library.db.add_store_agent_to_library" ) mock_db_call.side_effect = Exception("Test error") diff --git a/autogpt_platform/backend/backend/api/features/oauth.py b/autogpt_platform/backend/backend/api/features/oauth.py new file mode 100644 
index 0000000000..023a433951 --- /dev/null +++ b/autogpt_platform/backend/backend/api/features/oauth.py @@ -0,0 +1,833 @@ +""" +OAuth 2.0 Provider Endpoints + +Implements OAuth 2.0 Authorization Code flow with PKCE support. + +Flow: +1. User clicks "Login with AutoGPT" in 3rd party app +2. App redirects user to /auth/authorize with client_id, redirect_uri, scope, state +3. User sees consent screen (if not already logged in, redirects to login first) +4. User approves → backend creates authorization code +5. User redirected back to app with code +6. App exchanges code for access/refresh tokens at /api/oauth/token +7. App uses access token to call external API endpoints +""" + +import io +import logging +import os +import uuid +from datetime import datetime +from typing import Literal, Optional +from urllib.parse import urlencode + +from autogpt_libs.auth import get_user_id +from fastapi import APIRouter, Body, HTTPException, Security, UploadFile, status +from gcloud.aio import storage as async_storage +from PIL import Image +from prisma.enums import APIKeyPermission +from pydantic import BaseModel, Field + +from backend.data.auth.oauth import ( + InvalidClientError, + InvalidGrantError, + OAuthApplicationInfo, + TokenIntrospectionResult, + consume_authorization_code, + create_access_token, + create_authorization_code, + create_refresh_token, + get_oauth_application, + get_oauth_application_by_id, + introspect_token, + list_user_oauth_applications, + refresh_tokens, + revoke_access_token, + revoke_refresh_token, + update_oauth_application, + validate_client_credentials, + validate_redirect_uri, + validate_scopes, +) +from backend.util.settings import Settings +from backend.util.virus_scanner import scan_content_safe + +settings = Settings() +logger = logging.getLogger(__name__) + +router = APIRouter() + + +# ============================================================================ +# Request/Response Models +# ============================================================================ + + +class TokenResponse(BaseModel): + """OAuth 2.0 token response""" + + token_type: Literal["Bearer"] = "Bearer" + access_token: str + access_token_expires_at: datetime + refresh_token: str + refresh_token_expires_at: datetime + scopes: list[str] + + +class ErrorResponse(BaseModel): + """OAuth 2.0 error response""" + + error: str + error_description: Optional[str] = None + + +class OAuthApplicationPublicInfo(BaseModel): + """Public information about an OAuth application (for consent screen)""" + + name: str + description: Optional[str] = None + logo_url: Optional[str] = None + scopes: list[str] + + +# ============================================================================ +# Application Info Endpoint +# ============================================================================ + + +@router.get( + "/app/{client_id}", + responses={ + 404: {"description": "Application not found or disabled"}, + }, +) +async def get_oauth_app_info( + client_id: str, user_id: str = Security(get_user_id) +) -> OAuthApplicationPublicInfo: + """ + Get public information about an OAuth application. + + This endpoint is used by the consent screen to display application details + to the user before they authorize access. 
+ + Returns: + - name: Application name + - description: Application description (if provided) + - scopes: List of scopes the application is allowed to request + """ + app = await get_oauth_application(client_id) + if not app or not app.is_active: + raise HTTPException( + status_code=status.HTTP_404_NOT_FOUND, + detail="Application not found", + ) + + return OAuthApplicationPublicInfo( + name=app.name, + description=app.description, + logo_url=app.logo_url, + scopes=[s.value for s in app.scopes], + ) + + +# ============================================================================ +# Authorization Endpoint +# ============================================================================ + + +class AuthorizeRequest(BaseModel): + """OAuth 2.0 authorization request""" + + client_id: str = Field(description="Client identifier") + redirect_uri: str = Field(description="Redirect URI") + scopes: list[str] = Field(description="List of scopes") + state: str = Field(description="Anti-CSRF token from client") + response_type: str = Field( + default="code", description="Must be 'code' for authorization code flow" + ) + code_challenge: str = Field(description="PKCE code challenge (required)") + code_challenge_method: Literal["S256", "plain"] = Field( + default="S256", description="PKCE code challenge method (S256 recommended)" + ) + + +class AuthorizeResponse(BaseModel): + """OAuth 2.0 authorization response with redirect URL""" + + redirect_url: str = Field(description="URL to redirect the user to") + + +@router.post("/authorize") +async def authorize( + request: AuthorizeRequest = Body(), + user_id: str = Security(get_user_id), +) -> AuthorizeResponse: + """ + OAuth 2.0 Authorization Endpoint + + User must be logged in (authenticated with Supabase JWT). + This endpoint creates an authorization code and returns a redirect URL. + + PKCE (Proof Key for Code Exchange) is REQUIRED for all authorization requests. + + The frontend consent screen should call this endpoint after the user approves, + then redirect the user to the returned `redirect_url`. + + Request Body: + - client_id: The OAuth application's client ID + - redirect_uri: Where to redirect after authorization (must match registered URI) + - scopes: List of permissions (e.g., "EXECUTE_GRAPH READ_GRAPH") + - state: Anti-CSRF token provided by client (will be returned in redirect) + - response_type: Must be "code" (for authorization code flow) + - code_challenge: PKCE code challenge (required) + - code_challenge_method: "S256" (recommended) or "plain" + + Returns: + - redirect_url: The URL to redirect the user to (includes authorization code) + + Error cases return a redirect_url with error parameters, or raise HTTPException + for critical errors (like invalid redirect_uri). 
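To make the flow described above concrete, here is a hedged sketch of a third-party client driving the code-plus-PKCE exchange end to end. The `/api/oauth/*` paths follow the accompanying tests, the client credentials and redirect URI are placeholders, and the call to `/authorize` is assumed to carry the logged-in user's session credentials (the tests override `get_user_id` for this).

```python
# Hedged client sketch (not part of the diff): authorization code + PKCE flow
# against the endpoints defined in this file. Client credentials, redirect URI,
# and the /api/oauth mount point are assumptions taken from the tests below.
import base64
import hashlib
import secrets
from urllib.parse import parse_qs, urlparse

import httpx


def make_pkce_pair() -> tuple[str, str]:
    verifier = secrets.token_urlsafe(32)
    challenge = (
        base64.urlsafe_b64encode(hashlib.sha256(verifier.encode("ascii")).digest())
        .decode("ascii")
        .rstrip("=")  # S256 = base64url(SHA-256(verifier)) without padding
    )
    return verifier, challenge


async def run_pkce_flow(client: httpx.AsyncClient) -> dict:
    verifier, challenge = make_pkce_pair()

    # 1. With the user's session, ask the backend to issue an authorization code.
    authz = await client.post(
        "/api/oauth/authorize",
        json={
            "client_id": "my_client_id",
            "redirect_uri": "https://example.com/callback",
            "scopes": ["EXECUTE_GRAPH", "READ_GRAPH"],
            "state": "anti-csrf-state",
            "response_type": "code",
            "code_challenge": challenge,
            "code_challenge_method": "S256",
        },
    )
    code = parse_qs(urlparse(authz.json()["redirect_url"]).query)["code"][0]

    # 2. Exchange the code (plus the PKCE verifier) for access/refresh tokens.
    tokens = await client.post(
        "/api/oauth/token",
        json={
            "grant_type": "authorization_code",
            "code": code,
            "redirect_uri": "https://example.com/callback",
            "client_id": "my_client_id",
            "client_secret": "my_client_secret",
            "code_verifier": verifier,
        },
    )
    return tokens.json()
```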
+ """ + try: + # Validate response_type + if request.response_type != "code": + return _error_redirect_url( + request.redirect_uri, + request.state, + "unsupported_response_type", + "Only 'code' response type is supported", + ) + + # Get application + app = await get_oauth_application(request.client_id) + if not app: + return _error_redirect_url( + request.redirect_uri, + request.state, + "invalid_client", + "Unknown client_id", + ) + + if not app.is_active: + return _error_redirect_url( + request.redirect_uri, + request.state, + "invalid_client", + "Application is not active", + ) + + # Validate redirect URI + if not validate_redirect_uri(app, request.redirect_uri): + # For invalid redirect_uri, we can't redirect safely + # Must return error instead + raise HTTPException( + status_code=status.HTTP_400_BAD_REQUEST, + detail=( + "Invalid redirect_uri. " + f"Must be one of: {', '.join(app.redirect_uris)}" + ), + ) + + # Parse and validate scopes + try: + requested_scopes = [APIKeyPermission(s.strip()) for s in request.scopes] + except ValueError as e: + return _error_redirect_url( + request.redirect_uri, + request.state, + "invalid_scope", + f"Invalid scope: {e}", + ) + + if not requested_scopes: + return _error_redirect_url( + request.redirect_uri, + request.state, + "invalid_scope", + "At least one scope is required", + ) + + if not validate_scopes(app, requested_scopes): + return _error_redirect_url( + request.redirect_uri, + request.state, + "invalid_scope", + "Application is not authorized for all requested scopes. " + f"Allowed: {', '.join(s.value for s in app.scopes)}", + ) + + # Create authorization code + auth_code = await create_authorization_code( + application_id=app.id, + user_id=user_id, + scopes=requested_scopes, + redirect_uri=request.redirect_uri, + code_challenge=request.code_challenge, + code_challenge_method=request.code_challenge_method, + ) + + # Build redirect URL with authorization code + params = { + "code": auth_code.code, + "state": request.state, + } + redirect_url = f"{request.redirect_uri}?{urlencode(params)}" + + logger.info( + f"Authorization code issued for user #{user_id} " + f"and app {app.name} (#{app.id})" + ) + + return AuthorizeResponse(redirect_url=redirect_url) + + except HTTPException: + raise + except Exception as e: + logger.error(f"Error in authorization endpoint: {e}", exc_info=True) + return _error_redirect_url( + request.redirect_uri, + request.state, + "server_error", + "An unexpected error occurred", + ) + + +def _error_redirect_url( + redirect_uri: str, + state: str, + error: str, + error_description: Optional[str] = None, +) -> AuthorizeResponse: + """Helper to build redirect URL with OAuth error parameters""" + params = { + "error": error, + "state": state, + } + if error_description: + params["error_description"] = error_description + + redirect_url = f"{redirect_uri}?{urlencode(params)}" + return AuthorizeResponse(redirect_url=redirect_url) + + +# ============================================================================ +# Token Endpoint +# ============================================================================ + + +class TokenRequestByCode(BaseModel): + grant_type: Literal["authorization_code"] + code: str = Field(description="Authorization code") + redirect_uri: str = Field( + description="Redirect URI (must match authorization request)" + ) + client_id: str + client_secret: str + code_verifier: str = Field(description="PKCE code verifier") + + +class TokenRequestByRefreshToken(BaseModel): + grant_type: Literal["refresh_token"] 
+ refresh_token: str + client_id: str + client_secret: str + + +@router.post("/token") +async def token( + request: TokenRequestByCode | TokenRequestByRefreshToken = Body(), +) -> TokenResponse: + """ + OAuth 2.0 Token Endpoint + + Exchanges authorization code or refresh token for access token. + + Grant Types: + 1. authorization_code: Exchange authorization code for tokens + - Required: grant_type, code, redirect_uri, client_id, client_secret + - Optional: code_verifier (required if PKCE was used) + + 2. refresh_token: Exchange refresh token for new access token + - Required: grant_type, refresh_token, client_id, client_secret + + Returns: + - access_token: Bearer token for API access (1 hour TTL) + - token_type: "Bearer" + - expires_in: Seconds until access token expires + - refresh_token: Token for refreshing access (30 days TTL) + - scopes: List of scopes + """ + # Validate client credentials + try: + app = await validate_client_credentials( + request.client_id, request.client_secret + ) + except InvalidClientError as e: + raise HTTPException( + status_code=status.HTTP_401_UNAUTHORIZED, + detail=str(e), + ) + + # Handle authorization_code grant + if request.grant_type == "authorization_code": + # Consume authorization code + try: + user_id, scopes = await consume_authorization_code( + code=request.code, + application_id=app.id, + redirect_uri=request.redirect_uri, + code_verifier=request.code_verifier, + ) + except InvalidGrantError as e: + raise HTTPException( + status_code=status.HTTP_400_BAD_REQUEST, + detail=str(e), + ) + + # Create access and refresh tokens + access_token = await create_access_token(app.id, user_id, scopes) + refresh_token = await create_refresh_token(app.id, user_id, scopes) + + logger.info( + f"Access token issued for user #{user_id} and app {app.name} (#{app.id})" + "via authorization code" + ) + + if not access_token.token or not refresh_token.token: + raise HTTPException( + status_code=status.HTTP_500_INTERNAL_SERVER_ERROR, + detail="Failed to generate tokens", + ) + + return TokenResponse( + token_type="Bearer", + access_token=access_token.token.get_secret_value(), + access_token_expires_at=access_token.expires_at, + refresh_token=refresh_token.token.get_secret_value(), + refresh_token_expires_at=refresh_token.expires_at, + scopes=list(s.value for s in scopes), + ) + + # Handle refresh_token grant + elif request.grant_type == "refresh_token": + # Refresh access token + try: + new_access_token, new_refresh_token = await refresh_tokens( + request.refresh_token, app.id + ) + except InvalidGrantError as e: + raise HTTPException( + status_code=status.HTTP_400_BAD_REQUEST, + detail=str(e), + ) + + logger.info( + f"Tokens refreshed for user #{new_access_token.user_id} " + f"by app {app.name} (#{app.id})" + ) + + if not new_access_token.token or not new_refresh_token.token: + raise HTTPException( + status_code=status.HTTP_500_INTERNAL_SERVER_ERROR, + detail="Failed to generate tokens", + ) + + return TokenResponse( + token_type="Bearer", + access_token=new_access_token.token.get_secret_value(), + access_token_expires_at=new_access_token.expires_at, + refresh_token=new_refresh_token.token.get_secret_value(), + refresh_token_expires_at=new_refresh_token.expires_at, + scopes=list(s.value for s in new_access_token.scopes), + ) + + else: + raise HTTPException( + status_code=status.HTTP_400_BAD_REQUEST, + detail=f"Unsupported grant_type: {request.grant_type}. 
" + "Must be 'authorization_code' or 'refresh_token'", + ) + + +# ============================================================================ +# Token Introspection Endpoint +# ============================================================================ + + +@router.post("/introspect") +async def introspect( + token: str = Body(description="Token to introspect"), + token_type_hint: Optional[Literal["access_token", "refresh_token"]] = Body( + None, description="Hint about token type ('access_token' or 'refresh_token')" + ), + client_id: str = Body(description="Client identifier"), + client_secret: str = Body(description="Client secret"), +) -> TokenIntrospectionResult: + """ + OAuth 2.0 Token Introspection Endpoint (RFC 7662) + + Allows clients to check if a token is valid and get its metadata. + + Returns: + - active: Whether the token is currently active + - scopes: List of authorized scopes (if active) + - client_id: The client the token was issued to (if active) + - user_id: The user the token represents (if active) + - exp: Expiration timestamp (if active) + - token_type: "access_token" or "refresh_token" (if active) + """ + # Validate client credentials + try: + await validate_client_credentials(client_id, client_secret) + except InvalidClientError as e: + raise HTTPException( + status_code=status.HTTP_401_UNAUTHORIZED, + detail=str(e), + ) + + # Introspect the token + return await introspect_token(token, token_type_hint) + + +# ============================================================================ +# Token Revocation Endpoint +# ============================================================================ + + +@router.post("/revoke") +async def revoke( + token: str = Body(description="Token to revoke"), + token_type_hint: Optional[Literal["access_token", "refresh_token"]] = Body( + None, description="Hint about token type ('access_token' or 'refresh_token')" + ), + client_id: str = Body(description="Client identifier"), + client_secret: str = Body(description="Client secret"), +): + """ + OAuth 2.0 Token Revocation Endpoint (RFC 7009) + + Allows clients to revoke an access or refresh token. + + Note: Revoking a refresh token does NOT revoke associated access tokens. + Revoking an access token does NOT revoke the associated refresh token. + """ + # Validate client credentials + try: + app = await validate_client_credentials(client_id, client_secret) + except InvalidClientError as e: + raise HTTPException( + status_code=status.HTTP_401_UNAUTHORIZED, + detail=str(e), + ) + + # Try to revoke as access token first + # Note: We pass app.id to ensure the token belongs to the authenticated app + if token_type_hint != "refresh_token": + revoked = await revoke_access_token(token, app.id) + if revoked: + logger.info( + f"Access token revoked for app {app.name} (#{app.id}); " + f"user #{revoked.user_id}" + ) + return {"status": "ok"} + + # Try to revoke as refresh token + revoked = await revoke_refresh_token(token, app.id) + if revoked: + logger.info( + f"Refresh token revoked for app {app.name} (#{app.id}); " + f"user #{revoked.user_id}" + ) + return {"status": "ok"} + + # Per RFC 7009, revocation endpoint returns 200 even if token not found + # or if token belongs to a different application. + # This prevents token scanning attacks. 
+ logger.warning(f"Unsuccessful token revocation attempt by app {app.name} #{app.id}") + return {"status": "ok"} + + +# ============================================================================ +# Application Management Endpoints (for app owners) +# ============================================================================ + + +@router.get("/apps/mine") +async def list_my_oauth_apps( + user_id: str = Security(get_user_id), +) -> list[OAuthApplicationInfo]: + """ + List all OAuth applications owned by the current user. + + Returns a list of OAuth applications with their details including: + - id, name, description, logo_url + - client_id (public identifier) + - redirect_uris, grant_types, scopes + - is_active status + - created_at, updated_at timestamps + + Note: client_secret is never returned for security reasons. + """ + return await list_user_oauth_applications(user_id) + + +@router.patch("/apps/{app_id}/status") +async def update_app_status( + app_id: str, + user_id: str = Security(get_user_id), + is_active: bool = Body(description="Whether the app should be active", embed=True), +) -> OAuthApplicationInfo: + """ + Enable or disable an OAuth application. + + Only the application owner can update the status. + When disabled, the application cannot be used for new authorizations + and existing access tokens will fail validation. + + Returns the updated application info. + """ + updated_app = await update_oauth_application( + app_id=app_id, + owner_id=user_id, + is_active=is_active, + ) + + if not updated_app: + raise HTTPException( + status_code=status.HTTP_404_NOT_FOUND, + detail="Application not found or you don't have permission to update it", + ) + + action = "enabled" if is_active else "disabled" + logger.info(f"OAuth app {updated_app.name} (#{app_id}) {action} by user #{user_id}") + + return updated_app + + +class UpdateAppLogoRequest(BaseModel): + logo_url: str = Field(description="URL of the uploaded logo image") + + +@router.patch("/apps/{app_id}/logo") +async def update_app_logo( + app_id: str, + request: UpdateAppLogoRequest = Body(), + user_id: str = Security(get_user_id), +) -> OAuthApplicationInfo: + """ + Update the logo URL for an OAuth application. + + Only the application owner can update the logo. + The logo should be uploaded first using the media upload endpoint, + then this endpoint is called with the resulting URL. + + Logo requirements: + - Must be square (1:1 aspect ratio) + - Minimum 512x512 pixels + - Maximum 2048x2048 pixels + + Returns the updated application info. 
+ """ + if ( + not (app := await get_oauth_application_by_id(app_id)) + or app.owner_id != user_id + ): + raise HTTPException( + status_code=status.HTTP_404_NOT_FOUND, + detail="OAuth App not found", + ) + + # Delete the current app logo file (if any and it's in our cloud storage) + await _delete_app_current_logo_file(app) + + updated_app = await update_oauth_application( + app_id=app_id, + owner_id=user_id, + logo_url=request.logo_url, + ) + + if not updated_app: + raise HTTPException( + status_code=status.HTTP_404_NOT_FOUND, + detail="Application not found or you don't have permission to update it", + ) + + logger.info( + f"OAuth app {updated_app.name} (#{app_id}) logo updated by user #{user_id}" + ) + + return updated_app + + +# Logo upload constraints +LOGO_MIN_SIZE = 512 +LOGO_MAX_SIZE = 2048 +LOGO_ALLOWED_TYPES = {"image/jpeg", "image/png", "image/webp"} +LOGO_MAX_FILE_SIZE = 3 * 1024 * 1024 # 3MB + + +@router.post("/apps/{app_id}/logo/upload") +async def upload_app_logo( + app_id: str, + file: UploadFile, + user_id: str = Security(get_user_id), +) -> OAuthApplicationInfo: + """ + Upload a logo image for an OAuth application. + + Requirements: + - Image must be square (1:1 aspect ratio) + - Minimum 512x512 pixels + - Maximum 2048x2048 pixels + - Allowed formats: JPEG, PNG, WebP + - Maximum file size: 3MB + + The image is uploaded to cloud storage and the app's logoUrl is updated. + Returns the updated application info. + """ + # Verify ownership to reduce vulnerability to DoS(torage) or DoM(oney) attacks + if ( + not (app := await get_oauth_application_by_id(app_id)) + or app.owner_id != user_id + ): + raise HTTPException( + status_code=status.HTTP_404_NOT_FOUND, + detail="OAuth App not found", + ) + + # Check GCS configuration + if not settings.config.media_gcs_bucket_name: + raise HTTPException( + status_code=status.HTTP_503_SERVICE_UNAVAILABLE, + detail="Media storage is not configured", + ) + + # Validate content type + content_type = file.content_type + if content_type not in LOGO_ALLOWED_TYPES: + raise HTTPException( + status_code=status.HTTP_400_BAD_REQUEST, + detail=f"Invalid file type. Allowed: JPEG, PNG, WebP. Got: {content_type}", + ) + + # Read file content + try: + file_bytes = await file.read() + except Exception as e: + logger.error(f"Error reading logo file: {e}") + raise HTTPException( + status_code=status.HTTP_400_BAD_REQUEST, + detail="Failed to read uploaded file", + ) + + # Check file size + if len(file_bytes) > LOGO_MAX_FILE_SIZE: + raise HTTPException( + status_code=status.HTTP_400_BAD_REQUEST, + detail=( + "File too large. " + f"Maximum size is {LOGO_MAX_FILE_SIZE // 1024 // 1024}MB" + ), + ) + + # Validate image dimensions + try: + image = Image.open(io.BytesIO(file_bytes)) + width, height = image.size + + if width != height: + raise HTTPException( + status_code=status.HTTP_400_BAD_REQUEST, + detail=f"Logo must be square. Got {width}x{height}", + ) + + if width < LOGO_MIN_SIZE: + raise HTTPException( + status_code=status.HTTP_400_BAD_REQUEST, + detail=f"Logo too small. Minimum {LOGO_MIN_SIZE}x{LOGO_MIN_SIZE}. " + f"Got {width}x{height}", + ) + + if width > LOGO_MAX_SIZE: + raise HTTPException( + status_code=status.HTTP_400_BAD_REQUEST, + detail=f"Logo too large. Maximum {LOGO_MAX_SIZE}x{LOGO_MAX_SIZE}. 
" + f"Got {width}x{height}", + ) + except HTTPException: + raise + except Exception as e: + logger.error(f"Error validating logo image: {e}") + raise HTTPException( + status_code=status.HTTP_400_BAD_REQUEST, + detail="Invalid image file", + ) + + # Scan for viruses + filename = file.filename or "logo" + await scan_content_safe(file_bytes, filename=filename) + + # Generate unique filename + file_ext = os.path.splitext(filename)[1].lower() or ".png" + unique_filename = f"{uuid.uuid4()}{file_ext}" + storage_path = f"oauth-apps/{app_id}/logo/{unique_filename}" + + # Upload to GCS + try: + async with async_storage.Storage() as async_client: + bucket_name = settings.config.media_gcs_bucket_name + + await async_client.upload( + bucket_name, storage_path, file_bytes, content_type=content_type + ) + + logo_url = f"https://storage.googleapis.com/{bucket_name}/{storage_path}" + except Exception as e: + logger.error(f"Error uploading logo to GCS: {e}") + raise HTTPException( + status_code=status.HTTP_500_INTERNAL_SERVER_ERROR, + detail="Failed to upload logo", + ) + + # Delete the current app logo file (if any and it's in our cloud storage) + await _delete_app_current_logo_file(app) + + # Update the app with the new logo URL + updated_app = await update_oauth_application( + app_id=app_id, + owner_id=user_id, + logo_url=logo_url, + ) + + if not updated_app: + raise HTTPException( + status_code=status.HTTP_404_NOT_FOUND, + detail="Application not found or you don't have permission to update it", + ) + + logger.info( + f"OAuth app {updated_app.name} (#{app_id}) logo uploaded by user #{user_id}" + ) + + return updated_app + + +async def _delete_app_current_logo_file(app: OAuthApplicationInfo): + """ + Delete the current logo file for the given app, if there is one in our cloud storage + """ + bucket_name = settings.config.media_gcs_bucket_name + storage_base_url = f"https://storage.googleapis.com/{bucket_name}/" + + if app.logo_url and app.logo_url.startswith(storage_base_url): + # Parse blob path from URL: https://storage.googleapis.com/{bucket}/{path} + old_path = app.logo_url.replace(storage_base_url, "") + try: + async with async_storage.Storage() as async_client: + await async_client.delete(bucket_name, old_path) + logger.info(f"Deleted old logo for OAuth app #{app.id}: {old_path}") + except Exception as e: + # Log but don't fail - the new logo was uploaded successfully + logger.warning( + f"Failed to delete old logo for OAuth app #{app.id}: {e}", exc_info=e + ) diff --git a/autogpt_platform/backend/backend/api/features/oauth_test.py b/autogpt_platform/backend/backend/api/features/oauth_test.py new file mode 100644 index 0000000000..5f6b85a88a --- /dev/null +++ b/autogpt_platform/backend/backend/api/features/oauth_test.py @@ -0,0 +1,1784 @@ +""" +End-to-end integration tests for OAuth 2.0 Provider Endpoints. + +These tests hit the actual API endpoints and database, testing the complete +OAuth flow from endpoint to database. + +Tests cover: +1. Authorization endpoint - creating authorization codes +2. Token endpoint - exchanging codes for tokens and refreshing +3. Token introspection endpoint - checking token validity +4. Token revocation endpoint - revoking tokens +5. 
Complete OAuth flow end-to-end +""" + +import base64 +import hashlib +import secrets +import uuid +from typing import AsyncGenerator + +import httpx +import pytest +from autogpt_libs.api_key.keysmith import APIKeySmith +from prisma.enums import APIKeyPermission +from prisma.models import OAuthAccessToken as PrismaOAuthAccessToken +from prisma.models import OAuthApplication as PrismaOAuthApplication +from prisma.models import OAuthAuthorizationCode as PrismaOAuthAuthorizationCode +from prisma.models import OAuthRefreshToken as PrismaOAuthRefreshToken +from prisma.models import User as PrismaUser + +from backend.api.rest_api import app + +keysmith = APIKeySmith() + + +# ============================================================================ +# Test Fixtures +# ============================================================================ + + +@pytest.fixture +def test_user_id() -> str: + """Test user ID for OAuth tests.""" + return str(uuid.uuid4()) + + +@pytest.fixture +async def test_user(server, test_user_id: str): + """Create a test user in the database.""" + await PrismaUser.prisma().create( + data={ + "id": test_user_id, + "email": f"oauth-test-{test_user_id}@example.com", + "name": "OAuth Test User", + } + ) + + yield test_user_id + + # Cleanup - delete in correct order due to foreign key constraints + await PrismaOAuthAccessToken.prisma().delete_many(where={"userId": test_user_id}) + await PrismaOAuthRefreshToken.prisma().delete_many(where={"userId": test_user_id}) + await PrismaOAuthAuthorizationCode.prisma().delete_many( + where={"userId": test_user_id} + ) + await PrismaOAuthApplication.prisma().delete_many(where={"ownerId": test_user_id}) + await PrismaUser.prisma().delete(where={"id": test_user_id}) + + +@pytest.fixture +async def test_oauth_app(test_user: str): + """Create a test OAuth application in the database.""" + app_id = str(uuid.uuid4()) + client_id = f"test_client_{secrets.token_urlsafe(8)}" + # Secret must start with "agpt_" prefix for keysmith verification to work + client_secret_plaintext = f"agpt_secret_{secrets.token_urlsafe(16)}" + client_secret_hash, client_secret_salt = keysmith.hash_key(client_secret_plaintext) + + await PrismaOAuthApplication.prisma().create( + data={ + "id": app_id, + "name": "Test OAuth App", + "description": "Test application for integration tests", + "clientId": client_id, + "clientSecret": client_secret_hash, + "clientSecretSalt": client_secret_salt, + "redirectUris": [ + "https://example.com/callback", + "http://localhost:3000/callback", + ], + "grantTypes": ["authorization_code", "refresh_token"], + "scopes": [APIKeyPermission.EXECUTE_GRAPH, APIKeyPermission.READ_GRAPH], + "ownerId": test_user, + "isActive": True, + } + ) + + yield { + "id": app_id, + "client_id": client_id, + "client_secret": client_secret_plaintext, + "redirect_uri": "https://example.com/callback", + } + + # Cleanup is handled by test_user fixture (cascade delete) + + +def generate_pkce() -> tuple[str, str]: + """Generate PKCE code verifier and challenge.""" + verifier = secrets.token_urlsafe(32) + challenge = ( + base64.urlsafe_b64encode(hashlib.sha256(verifier.encode("ascii")).digest()) + .decode("ascii") + .rstrip("=") + ) + return verifier, challenge + + +@pytest.fixture +def pkce_credentials() -> tuple[str, str]: + """Generate PKCE code verifier and challenge as a fixture.""" + return generate_pkce() + + +@pytest.fixture +async def client(server, test_user: str) -> AsyncGenerator[httpx.AsyncClient, None]: + """ + Create an async HTTP client that talks 
directly to the FastAPI app. + + Uses ASGI transport so we don't need an actual HTTP server running. + Also overrides get_user_id dependency to return our test user. + + Depends on `server` to ensure the DB is connected and `test_user` to ensure + the user exists in the database before running tests. + """ + from autogpt_libs.auth import get_user_id + + # Override get_user_id dependency to return our test user + def override_get_user_id(): + return test_user + + # Store original override if any + original_override = app.dependency_overrides.get(get_user_id) + + # Set our override + app.dependency_overrides[get_user_id] = override_get_user_id + + try: + async with httpx.AsyncClient( + transport=httpx.ASGITransport(app=app), + base_url="http://test", + ) as http_client: + yield http_client + finally: + # Restore original override + if original_override is not None: + app.dependency_overrides[get_user_id] = original_override + else: + app.dependency_overrides.pop(get_user_id, None) + + +# ============================================================================ +# Authorization Endpoint Integration Tests +# ============================================================================ + + +@pytest.mark.asyncio(loop_scope="session") +async def test_authorize_creates_code_in_database( + client: httpx.AsyncClient, + test_user: str, + test_oauth_app: dict, + pkce_credentials: tuple[str, str], +): + """Test that authorization endpoint creates a code in the database.""" + verifier, challenge = pkce_credentials + + response = await client.post( + "/api/oauth/authorize", + json={ + "client_id": test_oauth_app["client_id"], + "redirect_uri": test_oauth_app["redirect_uri"], + "scopes": ["EXECUTE_GRAPH", "READ_GRAPH"], + "state": "test_state_123", + "response_type": "code", + "code_challenge": challenge, + "code_challenge_method": "S256", + }, + follow_redirects=False, + ) + + assert response.status_code == 200 + redirect_url = response.json()["redirect_url"] + + # Parse the redirect URL to get the authorization code + from urllib.parse import parse_qs, urlparse + + parsed = urlparse(redirect_url) + query_params = parse_qs(parsed.query) + + assert "code" in query_params, f"Expected 'code' in query params: {query_params}" + auth_code = query_params["code"][0] + assert query_params["state"][0] == "test_state_123" + + # Verify code exists in database + db_code = await PrismaOAuthAuthorizationCode.prisma().find_unique( + where={"code": auth_code} + ) + + assert db_code is not None + assert db_code.userId == test_user + assert db_code.applicationId == test_oauth_app["id"] + assert db_code.redirectUri == test_oauth_app["redirect_uri"] + assert APIKeyPermission.EXECUTE_GRAPH in db_code.scopes + assert APIKeyPermission.READ_GRAPH in db_code.scopes + assert db_code.usedAt is None # Not yet consumed + assert db_code.codeChallenge == challenge + assert db_code.codeChallengeMethod == "S256" + + +@pytest.mark.asyncio(loop_scope="session") +async def test_authorize_with_pkce_stores_challenge( + client: httpx.AsyncClient, + test_user: str, + test_oauth_app: dict, + pkce_credentials: tuple[str, str], +): + """Test that PKCE code challenge is stored correctly.""" + verifier, challenge = pkce_credentials + + response = await client.post( + "/api/oauth/authorize", + json={ + "client_id": test_oauth_app["client_id"], + "redirect_uri": test_oauth_app["redirect_uri"], + "scopes": ["EXECUTE_GRAPH"], + "state": "pkce_test_state", + "response_type": "code", + "code_challenge": challenge, + "code_challenge_method": "S256", + }, 
+ follow_redirects=False, + ) + + assert response.status_code == 200 + + from urllib.parse import parse_qs, urlparse + + auth_code = parse_qs(urlparse(response.json()["redirect_url"]).query)["code"][0] + + # Verify PKCE challenge is stored + db_code = await PrismaOAuthAuthorizationCode.prisma().find_unique( + where={"code": auth_code} + ) + + assert db_code is not None + assert db_code.codeChallenge == challenge + assert db_code.codeChallengeMethod == "S256" + + +@pytest.mark.asyncio(loop_scope="session") +async def test_authorize_invalid_client_returns_error( + client: httpx.AsyncClient, + test_user: str, + test_oauth_app: dict, +): + """Test that invalid client_id returns error in redirect.""" + _, challenge = generate_pkce() + + response = await client.post( + "/api/oauth/authorize", + json={ + "client_id": "nonexistent_client_id", + "redirect_uri": test_oauth_app["redirect_uri"], + "scopes": ["EXECUTE_GRAPH"], + "state": "error_test", + "response_type": "code", + "code_challenge": challenge, + "code_challenge_method": "S256", + }, + follow_redirects=False, + ) + + assert response.status_code == 200 + from urllib.parse import parse_qs, urlparse + + query_params = parse_qs(urlparse(response.json()["redirect_url"]).query) + assert query_params["error"][0] == "invalid_client" + + +@pytest.fixture +async def inactive_oauth_app(test_user: str): + """Create an inactive test OAuth application in the database.""" + app_id = str(uuid.uuid4()) + client_id = f"inactive_client_{secrets.token_urlsafe(8)}" + client_secret_plaintext = f"agpt_secret_{secrets.token_urlsafe(16)}" + client_secret_hash, client_secret_salt = keysmith.hash_key(client_secret_plaintext) + + await PrismaOAuthApplication.prisma().create( + data={ + "id": app_id, + "name": "Inactive OAuth App", + "description": "Inactive test application", + "clientId": client_id, + "clientSecret": client_secret_hash, + "clientSecretSalt": client_secret_salt, + "redirectUris": ["https://example.com/callback"], + "grantTypes": ["authorization_code", "refresh_token"], + "scopes": [APIKeyPermission.EXECUTE_GRAPH], + "ownerId": test_user, + "isActive": False, # Inactive! 
+ } + ) + + yield { + "id": app_id, + "client_id": client_id, + "client_secret": client_secret_plaintext, + "redirect_uri": "https://example.com/callback", + } + + +@pytest.mark.asyncio(loop_scope="session") +async def test_authorize_inactive_app( + client: httpx.AsyncClient, + test_user: str, + inactive_oauth_app: dict, +): + """Test that authorization with inactive app returns error.""" + _, challenge = generate_pkce() + + response = await client.post( + "/api/oauth/authorize", + json={ + "client_id": inactive_oauth_app["client_id"], + "redirect_uri": inactive_oauth_app["redirect_uri"], + "scopes": ["EXECUTE_GRAPH"], + "state": "inactive_test", + "response_type": "code", + "code_challenge": challenge, + "code_challenge_method": "S256", + }, + follow_redirects=False, + ) + + assert response.status_code == 200 + from urllib.parse import parse_qs, urlparse + + query_params = parse_qs(urlparse(response.json()["redirect_url"]).query) + assert query_params["error"][0] == "invalid_client" + + +@pytest.mark.asyncio(loop_scope="session") +async def test_authorize_invalid_redirect_uri( + client: httpx.AsyncClient, + test_user: str, + test_oauth_app: dict, +): + """Test authorization with unregistered redirect_uri returns HTTP error.""" + _, challenge = generate_pkce() + + response = await client.post( + "/api/oauth/authorize", + json={ + "client_id": test_oauth_app["client_id"], + "redirect_uri": "https://malicious.com/callback", + "scopes": ["EXECUTE_GRAPH"], + "state": "invalid_redirect_test", + "response_type": "code", + "code_challenge": challenge, + "code_challenge_method": "S256", + }, + follow_redirects=False, + ) + + # Invalid redirect_uri should return HTTP 400, not a redirect + assert response.status_code == 400 + assert "redirect_uri" in response.json()["detail"].lower() + + +@pytest.mark.asyncio(loop_scope="session") +async def test_authorize_invalid_scope( + client: httpx.AsyncClient, + test_user: str, + test_oauth_app: dict, +): + """Test authorization with invalid scope value.""" + _, challenge = generate_pkce() + + response = await client.post( + "/api/oauth/authorize", + json={ + "client_id": test_oauth_app["client_id"], + "redirect_uri": test_oauth_app["redirect_uri"], + "scopes": ["INVALID_SCOPE_NAME"], + "state": "invalid_scope_test", + "response_type": "code", + "code_challenge": challenge, + "code_challenge_method": "S256", + }, + follow_redirects=False, + ) + + assert response.status_code == 200 + from urllib.parse import parse_qs, urlparse + + query_params = parse_qs(urlparse(response.json()["redirect_url"]).query) + assert query_params["error"][0] == "invalid_scope" + + +@pytest.mark.asyncio(loop_scope="session") +async def test_authorize_unauthorized_scope( + client: httpx.AsyncClient, + test_user: str, + test_oauth_app: dict, +): + """Test authorization requesting scope not authorized for app.""" + _, challenge = generate_pkce() + + # The test_oauth_app only has EXECUTE_GRAPH and READ_GRAPH scopes + # DELETE_GRAPH is not in the app's allowed scopes + response = await client.post( + "/api/oauth/authorize", + json={ + "client_id": test_oauth_app["client_id"], + "redirect_uri": test_oauth_app["redirect_uri"], + "scopes": ["DELETE_GRAPH"], # Not authorized for this app + "state": "unauthorized_scope_test", + "response_type": "code", + "code_challenge": challenge, + "code_challenge_method": "S256", + }, + follow_redirects=False, + ) + + assert response.status_code == 200 + from urllib.parse import parse_qs, urlparse + + query_params = 
parse_qs(urlparse(response.json()["redirect_url"]).query) + assert query_params["error"][0] == "invalid_scope" + + +@pytest.mark.asyncio(loop_scope="session") +async def test_authorize_unsupported_response_type( + client: httpx.AsyncClient, + test_user: str, + test_oauth_app: dict, +): + """Test authorization with unsupported response_type.""" + _, challenge = generate_pkce() + + response = await client.post( + "/api/oauth/authorize", + json={ + "client_id": test_oauth_app["client_id"], + "redirect_uri": test_oauth_app["redirect_uri"], + "scopes": ["EXECUTE_GRAPH"], + "state": "unsupported_response_test", + "response_type": "token", # Implicit flow not supported + "code_challenge": challenge, + "code_challenge_method": "S256", + }, + follow_redirects=False, + ) + + assert response.status_code == 200 + from urllib.parse import parse_qs, urlparse + + query_params = parse_qs(urlparse(response.json()["redirect_url"]).query) + assert query_params["error"][0] == "unsupported_response_type" + + +# ============================================================================ +# Token Endpoint Integration Tests - Authorization Code Grant +# ============================================================================ + + +@pytest.mark.asyncio(loop_scope="session") +async def test_token_exchange_creates_tokens_in_database( + client: httpx.AsyncClient, + test_user: str, + test_oauth_app: dict, +): + """Test that token exchange creates access and refresh tokens in database.""" + from urllib.parse import parse_qs, urlparse + + verifier, challenge = generate_pkce() + + # First get an authorization code + auth_response = await client.post( + "/api/oauth/authorize", + json={ + "client_id": test_oauth_app["client_id"], + "redirect_uri": test_oauth_app["redirect_uri"], + "scopes": ["EXECUTE_GRAPH", "READ_GRAPH"], + "state": "token_test_state", + "response_type": "code", + "code_challenge": challenge, + "code_challenge_method": "S256", + }, + follow_redirects=False, + ) + + auth_code = parse_qs(urlparse(auth_response.json()["redirect_url"]).query)["code"][ + 0 + ] + + # Exchange code for tokens + token_response = await client.post( + "/api/oauth/token", + json={ + "grant_type": "authorization_code", + "code": auth_code, + "redirect_uri": test_oauth_app["redirect_uri"], + "client_id": test_oauth_app["client_id"], + "client_secret": test_oauth_app["client_secret"], + "code_verifier": verifier, + }, + ) + + assert token_response.status_code == 200 + tokens = token_response.json() + + assert "access_token" in tokens + assert "refresh_token" in tokens + assert tokens["token_type"] == "Bearer" + assert "EXECUTE_GRAPH" in tokens["scopes"] + assert "READ_GRAPH" in tokens["scopes"] + + # Verify access token exists in database (hashed) + access_token_hash = hashlib.sha256(tokens["access_token"].encode()).hexdigest() + db_access_token = await PrismaOAuthAccessToken.prisma().find_unique( + where={"token": access_token_hash} + ) + + assert db_access_token is not None + assert db_access_token.userId == test_user + assert db_access_token.applicationId == test_oauth_app["id"] + assert db_access_token.revokedAt is None + + # Verify refresh token exists in database (hashed) + refresh_token_hash = hashlib.sha256(tokens["refresh_token"].encode()).hexdigest() + db_refresh_token = await PrismaOAuthRefreshToken.prisma().find_unique( + where={"token": refresh_token_hash} + ) + + assert db_refresh_token is not None + assert db_refresh_token.userId == test_user + assert db_refresh_token.applicationId == test_oauth_app["id"] + assert 
db_refresh_token.revokedAt is None + + # Verify authorization code is marked as used + db_code = await PrismaOAuthAuthorizationCode.prisma().find_unique( + where={"code": auth_code} + ) + assert db_code is not None + assert db_code.usedAt is not None + + +@pytest.mark.asyncio(loop_scope="session") +async def test_authorization_code_cannot_be_reused( + client: httpx.AsyncClient, + test_user: str, + test_oauth_app: dict, +): + """Test that authorization code can only be used once.""" + from urllib.parse import parse_qs, urlparse + + verifier, challenge = generate_pkce() + + # Get authorization code + auth_response = await client.post( + "/api/oauth/authorize", + json={ + "client_id": test_oauth_app["client_id"], + "redirect_uri": test_oauth_app["redirect_uri"], + "scopes": ["EXECUTE_GRAPH"], + "state": "reuse_test", + "response_type": "code", + "code_challenge": challenge, + "code_challenge_method": "S256", + }, + follow_redirects=False, + ) + + auth_code = parse_qs(urlparse(auth_response.json()["redirect_url"]).query)["code"][ + 0 + ] + + # First exchange - should succeed + first_response = await client.post( + "/api/oauth/token", + json={ + "grant_type": "authorization_code", + "code": auth_code, + "redirect_uri": test_oauth_app["redirect_uri"], + "client_id": test_oauth_app["client_id"], + "client_secret": test_oauth_app["client_secret"], + "code_verifier": verifier, + }, + ) + assert first_response.status_code == 200 + + # Second exchange - should fail + second_response = await client.post( + "/api/oauth/token", + json={ + "grant_type": "authorization_code", + "code": auth_code, + "redirect_uri": test_oauth_app["redirect_uri"], + "client_id": test_oauth_app["client_id"], + "client_secret": test_oauth_app["client_secret"], + "code_verifier": verifier, + }, + ) + assert second_response.status_code == 400 + assert "already used" in second_response.json()["detail"] + + +@pytest.mark.asyncio(loop_scope="session") +async def test_token_exchange_with_invalid_client_secret( + client: httpx.AsyncClient, + test_user: str, + test_oauth_app: dict, +): + """Test that token exchange fails with invalid client secret.""" + from urllib.parse import parse_qs, urlparse + + verifier, challenge = generate_pkce() + + # Get authorization code + auth_response = await client.post( + "/api/oauth/authorize", + json={ + "client_id": test_oauth_app["client_id"], + "redirect_uri": test_oauth_app["redirect_uri"], + "scopes": ["EXECUTE_GRAPH"], + "state": "bad_secret_test", + "response_type": "code", + "code_challenge": challenge, + "code_challenge_method": "S256", + }, + follow_redirects=False, + ) + + auth_code = parse_qs(urlparse(auth_response.json()["redirect_url"]).query)["code"][ + 0 + ] + + # Try to exchange with wrong secret + response = await client.post( + "/api/oauth/token", + json={ + "grant_type": "authorization_code", + "code": auth_code, + "redirect_uri": test_oauth_app["redirect_uri"], + "client_id": test_oauth_app["client_id"], + "client_secret": "wrong_secret", + "code_verifier": verifier, + }, + ) + + assert response.status_code == 401 + + +@pytest.mark.asyncio(loop_scope="session") +async def test_token_authorization_code_invalid_code( + client: httpx.AsyncClient, + test_user: str, + test_oauth_app: dict, +): + """Test token exchange with invalid/nonexistent authorization code.""" + response = await client.post( + "/api/oauth/token", + json={ + "grant_type": "authorization_code", + "code": "nonexistent_invalid_code_xyz", + "redirect_uri": test_oauth_app["redirect_uri"], + "client_id": 
test_oauth_app["client_id"], + "client_secret": test_oauth_app["client_secret"], + "code_verifier": "", + }, + ) + + assert response.status_code == 400 + assert "not found" in response.json()["detail"].lower() + + +@pytest.mark.asyncio(loop_scope="session") +async def test_token_authorization_code_expired( + client: httpx.AsyncClient, + test_user: str, + test_oauth_app: dict, +): + """Test token exchange with expired authorization code.""" + from datetime import datetime, timedelta, timezone + + # Create an expired authorization code directly in the database + expired_code = f"expired_code_{secrets.token_urlsafe(16)}" + now = datetime.now(timezone.utc) + + await PrismaOAuthAuthorizationCode.prisma().create( + data={ + "code": expired_code, + "applicationId": test_oauth_app["id"], + "userId": test_user, + "scopes": [APIKeyPermission.EXECUTE_GRAPH], + "redirectUri": test_oauth_app["redirect_uri"], + "expiresAt": now - timedelta(hours=1), # Already expired + } + ) + + response = await client.post( + "/api/oauth/token", + json={ + "grant_type": "authorization_code", + "code": expired_code, + "redirect_uri": test_oauth_app["redirect_uri"], + "client_id": test_oauth_app["client_id"], + "client_secret": test_oauth_app["client_secret"], + "code_verifier": "", + }, + ) + + assert response.status_code == 400 + assert "expired" in response.json()["detail"].lower() + + +@pytest.mark.asyncio(loop_scope="session") +async def test_token_authorization_code_redirect_uri_mismatch( + client: httpx.AsyncClient, + test_user: str, + test_oauth_app: dict, +): + """Test token exchange with mismatched redirect_uri.""" + from urllib.parse import parse_qs, urlparse + + verifier, challenge = generate_pkce() + + # Get authorization code with one redirect_uri + auth_response = await client.post( + "/api/oauth/authorize", + json={ + "client_id": test_oauth_app["client_id"], + "redirect_uri": test_oauth_app["redirect_uri"], + "scopes": ["EXECUTE_GRAPH"], + "state": "redirect_mismatch_test", + "response_type": "code", + "code_challenge": challenge, + "code_challenge_method": "S256", + }, + follow_redirects=False, + ) + + auth_code = parse_qs(urlparse(auth_response.json()["redirect_url"]).query)["code"][ + 0 + ] + + # Try to exchange with different redirect_uri + # Note: localhost:3000 is in the app's registered redirect_uris + response = await client.post( + "/api/oauth/token", + json={ + "grant_type": "authorization_code", + "code": auth_code, + # Different redirect_uri from authorization request + "redirect_uri": "http://localhost:3000/callback", + "client_id": test_oauth_app["client_id"], + "client_secret": test_oauth_app["client_secret"], + "code_verifier": verifier, + }, + ) + + assert response.status_code == 400 + assert "redirect_uri" in response.json()["detail"].lower() + + +@pytest.mark.asyncio(loop_scope="session") +async def test_token_authorization_code_pkce_failure( + client: httpx.AsyncClient, + test_user: str, + test_oauth_app: dict, + pkce_credentials: tuple[str, str], +): + """Test token exchange with PKCE verification failure (wrong verifier).""" + from urllib.parse import parse_qs, urlparse + + verifier, challenge = pkce_credentials + + # Get authorization code with PKCE challenge + auth_response = await client.post( + "/api/oauth/authorize", + json={ + "client_id": test_oauth_app["client_id"], + "redirect_uri": test_oauth_app["redirect_uri"], + "scopes": ["EXECUTE_GRAPH"], + "state": "pkce_failure_test", + "response_type": "code", + "code_challenge": challenge, + "code_challenge_method": "S256", + }, + 
follow_redirects=False, + ) + + auth_code = parse_qs(urlparse(auth_response.json()["redirect_url"]).query)["code"][ + 0 + ] + + # Try to exchange with wrong verifier + response = await client.post( + "/api/oauth/token", + json={ + "grant_type": "authorization_code", + "code": auth_code, + "redirect_uri": test_oauth_app["redirect_uri"], + "client_id": test_oauth_app["client_id"], + "client_secret": test_oauth_app["client_secret"], + "code_verifier": "wrong_verifier_that_does_not_match", + }, + ) + + assert response.status_code == 400 + assert "pkce" in response.json()["detail"].lower() + + +# ============================================================================ +# Token Endpoint Integration Tests - Refresh Token Grant +# ============================================================================ + + +@pytest.mark.asyncio(loop_scope="session") +async def test_refresh_token_creates_new_tokens( + client: httpx.AsyncClient, + test_user: str, + test_oauth_app: dict, +): + """Test that refresh token grant creates new access and refresh tokens.""" + from urllib.parse import parse_qs, urlparse + + verifier, challenge = generate_pkce() + + # Get initial tokens + auth_response = await client.post( + "/api/oauth/authorize", + json={ + "client_id": test_oauth_app["client_id"], + "redirect_uri": test_oauth_app["redirect_uri"], + "scopes": ["EXECUTE_GRAPH"], + "state": "refresh_test", + "response_type": "code", + "code_challenge": challenge, + "code_challenge_method": "S256", + }, + follow_redirects=False, + ) + + auth_code = parse_qs(urlparse(auth_response.json()["redirect_url"]).query)["code"][ + 0 + ] + + initial_response = await client.post( + "/api/oauth/token", + json={ + "grant_type": "authorization_code", + "code": auth_code, + "redirect_uri": test_oauth_app["redirect_uri"], + "client_id": test_oauth_app["client_id"], + "client_secret": test_oauth_app["client_secret"], + "code_verifier": verifier, + }, + ) + initial_tokens = initial_response.json() + + # Use refresh token to get new tokens + refresh_response = await client.post( + "/api/oauth/token", + json={ + "grant_type": "refresh_token", + "refresh_token": initial_tokens["refresh_token"], + "client_id": test_oauth_app["client_id"], + "client_secret": test_oauth_app["client_secret"], + }, + ) + + assert refresh_response.status_code == 200 + new_tokens = refresh_response.json() + + # Tokens should be different + assert new_tokens["access_token"] != initial_tokens["access_token"] + assert new_tokens["refresh_token"] != initial_tokens["refresh_token"] + + # Old refresh token should be revoked in database + old_refresh_hash = hashlib.sha256( + initial_tokens["refresh_token"].encode() + ).hexdigest() + old_db_token = await PrismaOAuthRefreshToken.prisma().find_unique( + where={"token": old_refresh_hash} + ) + assert old_db_token is not None + assert old_db_token.revokedAt is not None + + # New tokens should exist and be valid + new_access_hash = hashlib.sha256(new_tokens["access_token"].encode()).hexdigest() + new_db_access = await PrismaOAuthAccessToken.prisma().find_unique( + where={"token": new_access_hash} + ) + assert new_db_access is not None + assert new_db_access.revokedAt is None + + +@pytest.mark.asyncio(loop_scope="session") +async def test_token_refresh_invalid_token( + client: httpx.AsyncClient, + test_oauth_app: dict, +): + """Test token refresh with invalid/nonexistent refresh token.""" + response = await client.post( + "/api/oauth/token", + json={ + "grant_type": "refresh_token", + "refresh_token": 
"completely_invalid_refresh_token_xyz", + "client_id": test_oauth_app["client_id"], + "client_secret": test_oauth_app["client_secret"], + }, + ) + + assert response.status_code == 400 + assert "not found" in response.json()["detail"].lower() + + +@pytest.mark.asyncio(loop_scope="session") +async def test_token_refresh_expired( + client: httpx.AsyncClient, + test_user: str, + test_oauth_app: dict, +): + """Test token refresh with expired refresh token.""" + from datetime import datetime, timedelta, timezone + + # Create an expired refresh token directly in the database + expired_token_value = f"expired_refresh_{secrets.token_urlsafe(16)}" + expired_token_hash = hashlib.sha256(expired_token_value.encode()).hexdigest() + now = datetime.now(timezone.utc) + + await PrismaOAuthRefreshToken.prisma().create( + data={ + "token": expired_token_hash, + "applicationId": test_oauth_app["id"], + "userId": test_user, + "scopes": [APIKeyPermission.EXECUTE_GRAPH], + "expiresAt": now - timedelta(days=1), # Already expired + } + ) + + response = await client.post( + "/api/oauth/token", + json={ + "grant_type": "refresh_token", + "refresh_token": expired_token_value, + "client_id": test_oauth_app["client_id"], + "client_secret": test_oauth_app["client_secret"], + }, + ) + + assert response.status_code == 400 + assert "expired" in response.json()["detail"].lower() + + +@pytest.mark.asyncio(loop_scope="session") +async def test_token_refresh_revoked( + client: httpx.AsyncClient, + test_user: str, + test_oauth_app: dict, +): + """Test token refresh with revoked refresh token.""" + from datetime import datetime, timedelta, timezone + + # Create a revoked refresh token directly in the database + revoked_token_value = f"revoked_refresh_{secrets.token_urlsafe(16)}" + revoked_token_hash = hashlib.sha256(revoked_token_value.encode()).hexdigest() + now = datetime.now(timezone.utc) + + await PrismaOAuthRefreshToken.prisma().create( + data={ + "token": revoked_token_hash, + "applicationId": test_oauth_app["id"], + "userId": test_user, + "scopes": [APIKeyPermission.EXECUTE_GRAPH], + "expiresAt": now + timedelta(days=30), # Not expired + "revokedAt": now - timedelta(hours=1), # But revoked + } + ) + + response = await client.post( + "/api/oauth/token", + json={ + "grant_type": "refresh_token", + "refresh_token": revoked_token_value, + "client_id": test_oauth_app["client_id"], + "client_secret": test_oauth_app["client_secret"], + }, + ) + + assert response.status_code == 400 + assert "revoked" in response.json()["detail"].lower() + + +@pytest.fixture +async def other_oauth_app(test_user: str): + """Create a second OAuth application for cross-app tests.""" + app_id = str(uuid.uuid4()) + client_id = f"other_client_{secrets.token_urlsafe(8)}" + client_secret_plaintext = f"agpt_other_{secrets.token_urlsafe(16)}" + client_secret_hash, client_secret_salt = keysmith.hash_key(client_secret_plaintext) + + await PrismaOAuthApplication.prisma().create( + data={ + "id": app_id, + "name": "Other OAuth App", + "description": "Second test application", + "clientId": client_id, + "clientSecret": client_secret_hash, + "clientSecretSalt": client_secret_salt, + "redirectUris": ["https://other.example.com/callback"], + "grantTypes": ["authorization_code", "refresh_token"], + "scopes": [APIKeyPermission.EXECUTE_GRAPH], + "ownerId": test_user, + "isActive": True, + } + ) + + yield { + "id": app_id, + "client_id": client_id, + "client_secret": client_secret_plaintext, + "redirect_uri": "https://other.example.com/callback", + } + + 
+@pytest.mark.asyncio(loop_scope="session") +async def test_token_refresh_wrong_application( + client: httpx.AsyncClient, + test_user: str, + test_oauth_app: dict, + other_oauth_app: dict, +): + """Test token refresh with token from different application.""" + from datetime import datetime, timedelta, timezone + + # Create a refresh token for `test_oauth_app` + token_value = f"app1_refresh_{secrets.token_urlsafe(16)}" + token_hash = hashlib.sha256(token_value.encode()).hexdigest() + now = datetime.now(timezone.utc) + + await PrismaOAuthRefreshToken.prisma().create( + data={ + "token": token_hash, + "applicationId": test_oauth_app["id"], # Belongs to test_oauth_app + "userId": test_user, + "scopes": [APIKeyPermission.EXECUTE_GRAPH], + "expiresAt": now + timedelta(days=30), + } + ) + + # Try to use it with `other_oauth_app` + response = await client.post( + "/api/oauth/token", + json={ + "grant_type": "refresh_token", + "refresh_token": token_value, + "client_id": other_oauth_app["client_id"], + "client_secret": other_oauth_app["client_secret"], + }, + ) + + assert response.status_code == 400 + assert "does not belong" in response.json()["detail"].lower() + + +# ============================================================================ +# Token Introspection Integration Tests +# ============================================================================ + + +@pytest.mark.asyncio(loop_scope="session") +async def test_introspect_valid_access_token( + client: httpx.AsyncClient, + test_user: str, + test_oauth_app: dict, +): + """Test introspection returns correct info for valid access token.""" + from urllib.parse import parse_qs, urlparse + + verifier, challenge = generate_pkce() + + # Get tokens + auth_response = await client.post( + "/api/oauth/authorize", + json={ + "client_id": test_oauth_app["client_id"], + "redirect_uri": test_oauth_app["redirect_uri"], + "scopes": ["EXECUTE_GRAPH", "READ_GRAPH"], + "state": "introspect_test", + "response_type": "code", + "code_challenge": challenge, + "code_challenge_method": "S256", + }, + follow_redirects=False, + ) + + auth_code = parse_qs(urlparse(auth_response.json()["redirect_url"]).query)["code"][ + 0 + ] + + token_response = await client.post( + "/api/oauth/token", + json={ + "grant_type": "authorization_code", + "code": auth_code, + "redirect_uri": test_oauth_app["redirect_uri"], + "client_id": test_oauth_app["client_id"], + "client_secret": test_oauth_app["client_secret"], + "code_verifier": verifier, + }, + ) + tokens = token_response.json() + + # Introspect the access token + introspect_response = await client.post( + "/api/oauth/introspect", + json={ + "token": tokens["access_token"], + "token_type_hint": "access_token", + "client_id": test_oauth_app["client_id"], + "client_secret": test_oauth_app["client_secret"], + }, + ) + + assert introspect_response.status_code == 200 + data = introspect_response.json() + + assert data["active"] is True + assert data["token_type"] == "access_token" + assert data["user_id"] == test_user + assert data["client_id"] == test_oauth_app["client_id"] + assert "EXECUTE_GRAPH" in data["scopes"] + assert "READ_GRAPH" in data["scopes"] + + +@pytest.mark.asyncio(loop_scope="session") +async def test_introspect_invalid_token_returns_inactive( + client: httpx.AsyncClient, + test_oauth_app: dict, +): + """Test introspection returns inactive for non-existent token.""" + introspect_response = await client.post( + "/api/oauth/introspect", + json={ + "token": "completely_invalid_token_that_does_not_exist", + 
"client_id": test_oauth_app["client_id"], + "client_secret": test_oauth_app["client_secret"], + }, + ) + + assert introspect_response.status_code == 200 + assert introspect_response.json()["active"] is False + + +@pytest.mark.asyncio(loop_scope="session") +async def test_introspect_active_refresh_token( + client: httpx.AsyncClient, + test_user: str, + test_oauth_app: dict, +): + """Test introspection returns correct info for valid refresh token.""" + from urllib.parse import parse_qs, urlparse + + verifier, challenge = generate_pkce() + + # Get tokens via the full flow + auth_response = await client.post( + "/api/oauth/authorize", + json={ + "client_id": test_oauth_app["client_id"], + "redirect_uri": test_oauth_app["redirect_uri"], + "scopes": ["EXECUTE_GRAPH", "READ_GRAPH"], + "state": "introspect_refresh_test", + "response_type": "code", + "code_challenge": challenge, + "code_challenge_method": "S256", + }, + follow_redirects=False, + ) + + auth_code = parse_qs(urlparse(auth_response.json()["redirect_url"]).query)["code"][ + 0 + ] + + token_response = await client.post( + "/api/oauth/token", + json={ + "grant_type": "authorization_code", + "code": auth_code, + "redirect_uri": test_oauth_app["redirect_uri"], + "client_id": test_oauth_app["client_id"], + "client_secret": test_oauth_app["client_secret"], + "code_verifier": verifier, + }, + ) + tokens = token_response.json() + + # Introspect the refresh token + introspect_response = await client.post( + "/api/oauth/introspect", + json={ + "token": tokens["refresh_token"], + "token_type_hint": "refresh_token", + "client_id": test_oauth_app["client_id"], + "client_secret": test_oauth_app["client_secret"], + }, + ) + + assert introspect_response.status_code == 200 + data = introspect_response.json() + + assert data["active"] is True + assert data["token_type"] == "refresh_token" + assert data["user_id"] == test_user + assert data["client_id"] == test_oauth_app["client_id"] + + +@pytest.mark.asyncio(loop_scope="session") +async def test_introspect_invalid_client( + client: httpx.AsyncClient, + test_oauth_app: dict, +): + """Test introspection with invalid client credentials.""" + introspect_response = await client.post( + "/api/oauth/introspect", + json={ + "token": "some_token", + "client_id": test_oauth_app["client_id"], + "client_secret": "wrong_secret_value", + }, + ) + + assert introspect_response.status_code == 401 + + +@pytest.mark.asyncio(loop_scope="session") +async def test_validate_access_token_fails_when_app_disabled( + test_user: str, +): + """ + Test that validate_access_token raises InvalidClientError when the app is disabled. + + This tests the security feature where disabling an OAuth application + immediately invalidates all its access tokens. 
+ """ + from datetime import datetime, timedelta, timezone + + from backend.data.auth.oauth import InvalidClientError, validate_access_token + + # Create an OAuth app + app_id = str(uuid.uuid4()) + client_id = f"disable_test_{secrets.token_urlsafe(8)}" + client_secret_plaintext = f"agpt_disable_{secrets.token_urlsafe(16)}" + client_secret_hash, client_secret_salt = keysmith.hash_key(client_secret_plaintext) + + await PrismaOAuthApplication.prisma().create( + data={ + "id": app_id, + "name": "App To Be Disabled", + "description": "Test app for disabled validation", + "clientId": client_id, + "clientSecret": client_secret_hash, + "clientSecretSalt": client_secret_salt, + "redirectUris": ["https://example.com/callback"], + "grantTypes": ["authorization_code"], + "scopes": [APIKeyPermission.EXECUTE_GRAPH], + "ownerId": test_user, + "isActive": True, + } + ) + + # Create an access token directly in the database + token_plaintext = f"test_token_{secrets.token_urlsafe(32)}" + token_hash = hashlib.sha256(token_plaintext.encode()).hexdigest() + now = datetime.now(timezone.utc) + + await PrismaOAuthAccessToken.prisma().create( + data={ + "token": token_hash, + "applicationId": app_id, + "userId": test_user, + "scopes": [APIKeyPermission.EXECUTE_GRAPH], + "expiresAt": now + timedelta(hours=1), + } + ) + + # Token should be valid while app is active + token_info, _ = await validate_access_token(token_plaintext) + assert token_info.user_id == test_user + + # Disable the app + await PrismaOAuthApplication.prisma().update( + where={"id": app_id}, + data={"isActive": False}, + ) + + # Token should now fail validation with InvalidClientError + with pytest.raises(InvalidClientError, match="disabled"): + await validate_access_token(token_plaintext) + + # Cleanup + await PrismaOAuthApplication.prisma().delete(where={"id": app_id}) + + +# ============================================================================ +# Token Revocation Integration Tests +# ============================================================================ + + +@pytest.mark.asyncio(loop_scope="session") +async def test_revoke_access_token_updates_database( + client: httpx.AsyncClient, + test_user: str, + test_oauth_app: dict, +): + """Test that revoking access token updates database.""" + from urllib.parse import parse_qs, urlparse + + verifier, challenge = generate_pkce() + + # Get tokens + auth_response = await client.post( + "/api/oauth/authorize", + json={ + "client_id": test_oauth_app["client_id"], + "redirect_uri": test_oauth_app["redirect_uri"], + "scopes": ["EXECUTE_GRAPH"], + "state": "revoke_access_test", + "response_type": "code", + "code_challenge": challenge, + "code_challenge_method": "S256", + }, + follow_redirects=False, + ) + + auth_code = parse_qs(urlparse(auth_response.json()["redirect_url"]).query)["code"][ + 0 + ] + + token_response = await client.post( + "/api/oauth/token", + json={ + "grant_type": "authorization_code", + "code": auth_code, + "redirect_uri": test_oauth_app["redirect_uri"], + "client_id": test_oauth_app["client_id"], + "client_secret": test_oauth_app["client_secret"], + "code_verifier": verifier, + }, + ) + tokens = token_response.json() + + # Verify token is not revoked in database + access_hash = hashlib.sha256(tokens["access_token"].encode()).hexdigest() + db_token_before = await PrismaOAuthAccessToken.prisma().find_unique( + where={"token": access_hash} + ) + assert db_token_before is not None + assert db_token_before.revokedAt is None + + # Revoke the token + revoke_response = await 
client.post( + "/api/oauth/revoke", + json={ + "token": tokens["access_token"], + "token_type_hint": "access_token", + "client_id": test_oauth_app["client_id"], + "client_secret": test_oauth_app["client_secret"], + }, + ) + + assert revoke_response.status_code == 200 + assert revoke_response.json()["status"] == "ok" + + # Verify token is now revoked in database + db_token_after = await PrismaOAuthAccessToken.prisma().find_unique( + where={"token": access_hash} + ) + assert db_token_after is not None + assert db_token_after.revokedAt is not None + + +@pytest.mark.asyncio(loop_scope="session") +async def test_revoke_unknown_token_returns_ok( + client: httpx.AsyncClient, + test_oauth_app: dict, +): + """Test that revoking unknown token returns 200 (per RFC 7009).""" + revoke_response = await client.post( + "/api/oauth/revoke", + json={ + "token": "unknown_token_that_does_not_exist_anywhere", + "client_id": test_oauth_app["client_id"], + "client_secret": test_oauth_app["client_secret"], + }, + ) + + # Per RFC 7009, should return 200 even for unknown tokens + assert revoke_response.status_code == 200 + assert revoke_response.json()["status"] == "ok" + + +@pytest.mark.asyncio(loop_scope="session") +async def test_revoke_refresh_token_updates_database( + client: httpx.AsyncClient, + test_user: str, + test_oauth_app: dict, +): + """Test that revoking refresh token updates database.""" + from urllib.parse import parse_qs, urlparse + + verifier, challenge = generate_pkce() + + # Get tokens + auth_response = await client.post( + "/api/oauth/authorize", + json={ + "client_id": test_oauth_app["client_id"], + "redirect_uri": test_oauth_app["redirect_uri"], + "scopes": ["EXECUTE_GRAPH"], + "state": "revoke_refresh_test", + "response_type": "code", + "code_challenge": challenge, + "code_challenge_method": "S256", + }, + follow_redirects=False, + ) + + auth_code = parse_qs(urlparse(auth_response.json()["redirect_url"]).query)["code"][ + 0 + ] + + token_response = await client.post( + "/api/oauth/token", + json={ + "grant_type": "authorization_code", + "code": auth_code, + "redirect_uri": test_oauth_app["redirect_uri"], + "client_id": test_oauth_app["client_id"], + "client_secret": test_oauth_app["client_secret"], + "code_verifier": verifier, + }, + ) + tokens = token_response.json() + + # Verify refresh token is not revoked in database + refresh_hash = hashlib.sha256(tokens["refresh_token"].encode()).hexdigest() + db_token_before = await PrismaOAuthRefreshToken.prisma().find_unique( + where={"token": refresh_hash} + ) + assert db_token_before is not None + assert db_token_before.revokedAt is None + + # Revoke the refresh token + revoke_response = await client.post( + "/api/oauth/revoke", + json={ + "token": tokens["refresh_token"], + "token_type_hint": "refresh_token", + "client_id": test_oauth_app["client_id"], + "client_secret": test_oauth_app["client_secret"], + }, + ) + + assert revoke_response.status_code == 200 + assert revoke_response.json()["status"] == "ok" + + # Verify refresh token is now revoked in database + db_token_after = await PrismaOAuthRefreshToken.prisma().find_unique( + where={"token": refresh_hash} + ) + assert db_token_after is not None + assert db_token_after.revokedAt is not None + + +@pytest.mark.asyncio(loop_scope="session") +async def test_revoke_invalid_client( + client: httpx.AsyncClient, + test_oauth_app: dict, +): + """Test revocation with invalid client credentials.""" + revoke_response = await client.post( + "/api/oauth/revoke", + json={ + "token": "some_token", + 
"client_id": test_oauth_app["client_id"], + "client_secret": "wrong_secret_value", + }, + ) + + assert revoke_response.status_code == 401 + + +@pytest.mark.asyncio(loop_scope="session") +async def test_revoke_token_from_different_app_fails_silently( + client: httpx.AsyncClient, + test_user: str, + test_oauth_app: dict, +): + """ + Test that an app cannot revoke tokens belonging to a different app. + + Per RFC 7009, the endpoint still returns 200 OK (to prevent token scanning), + but the token should remain valid in the database. + """ + from urllib.parse import parse_qs, urlparse + + verifier, challenge = generate_pkce() + + # Get tokens for app 1 + auth_response = await client.post( + "/api/oauth/authorize", + json={ + "client_id": test_oauth_app["client_id"], + "redirect_uri": test_oauth_app["redirect_uri"], + "scopes": ["EXECUTE_GRAPH"], + "state": "cross_app_revoke_test", + "response_type": "code", + "code_challenge": challenge, + "code_challenge_method": "S256", + }, + follow_redirects=False, + ) + + auth_code = parse_qs(urlparse(auth_response.json()["redirect_url"]).query)["code"][ + 0 + ] + + token_response = await client.post( + "/api/oauth/token", + json={ + "grant_type": "authorization_code", + "code": auth_code, + "redirect_uri": test_oauth_app["redirect_uri"], + "client_id": test_oauth_app["client_id"], + "client_secret": test_oauth_app["client_secret"], + "code_verifier": verifier, + }, + ) + tokens = token_response.json() + + # Create a second OAuth app + app2_id = str(uuid.uuid4()) + app2_client_id = f"test_client_app2_{secrets.token_urlsafe(8)}" + app2_client_secret_plaintext = f"agpt_secret_app2_{secrets.token_urlsafe(16)}" + app2_client_secret_hash, app2_client_secret_salt = keysmith.hash_key( + app2_client_secret_plaintext + ) + + await PrismaOAuthApplication.prisma().create( + data={ + "id": app2_id, + "name": "Second Test OAuth App", + "description": "Second test application for cross-app revocation test", + "clientId": app2_client_id, + "clientSecret": app2_client_secret_hash, + "clientSecretSalt": app2_client_secret_salt, + "redirectUris": ["https://other-app.com/callback"], + "grantTypes": ["authorization_code", "refresh_token"], + "scopes": [APIKeyPermission.EXECUTE_GRAPH, APIKeyPermission.READ_GRAPH], + "ownerId": test_user, + "isActive": True, + } + ) + + # App 2 tries to revoke App 1's access token + revoke_response = await client.post( + "/api/oauth/revoke", + json={ + "token": tokens["access_token"], + "token_type_hint": "access_token", + "client_id": app2_client_id, + "client_secret": app2_client_secret_plaintext, + }, + ) + + # Per RFC 7009, returns 200 OK even if token not found/not owned + assert revoke_response.status_code == 200 + assert revoke_response.json()["status"] == "ok" + + # But the token should NOT be revoked in the database + access_hash = hashlib.sha256(tokens["access_token"].encode()).hexdigest() + db_token = await PrismaOAuthAccessToken.prisma().find_unique( + where={"token": access_hash} + ) + assert db_token is not None + assert db_token.revokedAt is None, "Token should NOT be revoked by different app" + + # Now app 1 revokes its own token - should work + revoke_response2 = await client.post( + "/api/oauth/revoke", + json={ + "token": tokens["access_token"], + "token_type_hint": "access_token", + "client_id": test_oauth_app["client_id"], + "client_secret": test_oauth_app["client_secret"], + }, + ) + + assert revoke_response2.status_code == 200 + + # Token should now be revoked + db_token_after = await 
PrismaOAuthAccessToken.prisma().find_unique( + where={"token": access_hash} + ) + assert db_token_after is not None + assert db_token_after.revokedAt is not None, "Token should be revoked by own app" + + # Cleanup second app + await PrismaOAuthApplication.prisma().delete(where={"id": app2_id}) + + +# ============================================================================ +# Complete End-to-End OAuth Flow Test +# ============================================================================ + + +@pytest.mark.asyncio(loop_scope="session") +async def test_complete_oauth_flow_end_to_end( + client: httpx.AsyncClient, + test_user: str, + test_oauth_app: dict, + pkce_credentials: tuple[str, str], +): + """ + Test the complete OAuth 2.0 flow from authorization to token refresh. + + This is a comprehensive integration test that verifies the entire + OAuth flow works correctly with real API calls and database operations. + """ + from urllib.parse import parse_qs, urlparse + + verifier, challenge = pkce_credentials + + # Step 1: Authorization request with PKCE + auth_response = await client.post( + "/api/oauth/authorize", + json={ + "client_id": test_oauth_app["client_id"], + "redirect_uri": test_oauth_app["redirect_uri"], + "scopes": ["EXECUTE_GRAPH", "READ_GRAPH"], + "state": "e2e_test_state", + "response_type": "code", + "code_challenge": challenge, + "code_challenge_method": "S256", + }, + follow_redirects=False, + ) + + assert auth_response.status_code == 200 + + redirect_url = auth_response.json()["redirect_url"] + query = parse_qs(urlparse(redirect_url).query) + + assert query["state"][0] == "e2e_test_state" + auth_code = query["code"][0] + + # Verify authorization code in database + db_code = await PrismaOAuthAuthorizationCode.prisma().find_unique( + where={"code": auth_code} + ) + assert db_code is not None + assert db_code.codeChallenge == challenge + + # Step 2: Exchange code for tokens + token_response = await client.post( + "/api/oauth/token", + json={ + "grant_type": "authorization_code", + "code": auth_code, + "redirect_uri": test_oauth_app["redirect_uri"], + "client_id": test_oauth_app["client_id"], + "client_secret": test_oauth_app["client_secret"], + "code_verifier": verifier, + }, + ) + + assert token_response.status_code == 200 + tokens = token_response.json() + assert "access_token" in tokens + assert "refresh_token" in tokens + + # Verify code is marked as used + db_code_used = await PrismaOAuthAuthorizationCode.prisma().find_unique_or_raise( + where={"code": auth_code} + ) + assert db_code_used.usedAt is not None + + # Step 3: Introspect access token + introspect_response = await client.post( + "/api/oauth/introspect", + json={ + "token": tokens["access_token"], + "client_id": test_oauth_app["client_id"], + "client_secret": test_oauth_app["client_secret"], + }, + ) + + assert introspect_response.status_code == 200 + introspect_data = introspect_response.json() + assert introspect_data["active"] is True + assert introspect_data["user_id"] == test_user + + # Step 4: Refresh tokens + refresh_response = await client.post( + "/api/oauth/token", + json={ + "grant_type": "refresh_token", + "refresh_token": tokens["refresh_token"], + "client_id": test_oauth_app["client_id"], + "client_secret": test_oauth_app["client_secret"], + }, + ) + + assert refresh_response.status_code == 200 + new_tokens = refresh_response.json() + assert new_tokens["access_token"] != tokens["access_token"] + assert new_tokens["refresh_token"] != tokens["refresh_token"] + + # Verify old refresh token is 
revoked + old_refresh_hash = hashlib.sha256(tokens["refresh_token"].encode()).hexdigest() + old_db_refresh = await PrismaOAuthRefreshToken.prisma().find_unique_or_raise( + where={"token": old_refresh_hash} + ) + assert old_db_refresh.revokedAt is not None + + # Step 5: Verify new access token works + new_introspect = await client.post( + "/api/oauth/introspect", + json={ + "token": new_tokens["access_token"], + "client_id": test_oauth_app["client_id"], + "client_secret": test_oauth_app["client_secret"], + }, + ) + + assert new_introspect.status_code == 200 + assert new_introspect.json()["active"] is True + + # Step 6: Revoke new access token + revoke_response = await client.post( + "/api/oauth/revoke", + json={ + "token": new_tokens["access_token"], + "token_type_hint": "access_token", + "client_id": test_oauth_app["client_id"], + "client_secret": test_oauth_app["client_secret"], + }, + ) + + assert revoke_response.status_code == 200 + + # Step 7: Verify revoked token is inactive + final_introspect = await client.post( + "/api/oauth/introspect", + json={ + "token": new_tokens["access_token"], + "client_id": test_oauth_app["client_id"], + "client_secret": test_oauth_app["client_secret"], + }, + ) + + assert final_introspect.status_code == 200 + assert final_introspect.json()["active"] is False + + # Verify in database + new_access_hash = hashlib.sha256(new_tokens["access_token"].encode()).hexdigest() + db_revoked = await PrismaOAuthAccessToken.prisma().find_unique_or_raise( + where={"token": new_access_hash} + ) + assert db_revoked.revokedAt is not None diff --git a/autogpt_platform/backend/backend/api/features/otto/__init__.py b/autogpt_platform/backend/backend/api/features/otto/__init__.py new file mode 100644 index 0000000000..e69de29bb2 diff --git a/autogpt_platform/backend/backend/server/v2/otto/models.py b/autogpt_platform/backend/backend/api/features/otto/models.py similarity index 100% rename from autogpt_platform/backend/backend/server/v2/otto/models.py rename to autogpt_platform/backend/backend/api/features/otto/models.py diff --git a/autogpt_platform/backend/backend/server/v2/otto/routes.py b/autogpt_platform/backend/backend/api/features/otto/routes.py similarity index 100% rename from autogpt_platform/backend/backend/server/v2/otto/routes.py rename to autogpt_platform/backend/backend/api/features/otto/routes.py diff --git a/autogpt_platform/backend/backend/server/v2/otto/routes_test.py b/autogpt_platform/backend/backend/api/features/otto/routes_test.py similarity index 97% rename from autogpt_platform/backend/backend/server/v2/otto/routes_test.py rename to autogpt_platform/backend/backend/api/features/otto/routes_test.py index 2641babe2b..416bcdee76 100644 --- a/autogpt_platform/backend/backend/server/v2/otto/routes_test.py +++ b/autogpt_platform/backend/backend/api/features/otto/routes_test.py @@ -6,9 +6,9 @@ import pytest import pytest_mock from pytest_snapshot.plugin import Snapshot -import backend.server.v2.otto.models as otto_models -import backend.server.v2.otto.routes as otto_routes -from backend.server.v2.otto.service import OttoService +from . import models as otto_models +from . 
import models as otto_models +from .
import routes as otto_routes +from .service import OttoService app = fastapi.FastAPI() app.include_router(otto_routes.router) diff --git a/autogpt_platform/backend/backend/server/v2/otto/service.py b/autogpt_platform/backend/backend/api/features/otto/service.py similarity index 97% rename from autogpt_platform/backend/backend/server/v2/otto/service.py rename to autogpt_platform/backend/backend/api/features/otto/service.py index 8efa4f642f..5f00022ff2 100644 --- a/autogpt_platform/backend/backend/server/v2/otto/service.py +++ b/autogpt_platform/backend/backend/api/features/otto/service.py @@ -27,7 +27,9 @@ class OttoService: return None try: - graph = await graph_db.get_graph(request.graph_id, user_id=user_id) + graph = await graph_db.get_graph( + graph_id=request.graph_id, version=None, user_id=user_id + ) if not graph: return None diff --git a/autogpt_platform/backend/backend/api/features/postmark/__init__.py b/autogpt_platform/backend/backend/api/features/postmark/__init__.py new file mode 100644 index 0000000000..e69de29bb2 diff --git a/autogpt_platform/backend/backend/server/routers/postmark/models.py b/autogpt_platform/backend/backend/api/features/postmark/models.py similarity index 100% rename from autogpt_platform/backend/backend/server/routers/postmark/models.py rename to autogpt_platform/backend/backend/api/features/postmark/models.py diff --git a/autogpt_platform/backend/backend/server/routers/postmark/postmark.py b/autogpt_platform/backend/backend/api/features/postmark/postmark.py similarity index 96% rename from autogpt_platform/backend/backend/server/routers/postmark/postmark.py rename to autogpt_platform/backend/backend/api/features/postmark/postmark.py index 2190aa5fce..224e30fa9d 100644 --- a/autogpt_platform/backend/backend/server/routers/postmark/postmark.py +++ b/autogpt_platform/backend/backend/api/features/postmark/postmark.py @@ -4,12 +4,15 @@ from typing import Annotated from fastapi import APIRouter, Body, HTTPException, Query, Security from fastapi.responses import JSONResponse +from backend.api.utils.api_key_auth import APIKeyAuthenticator from backend.data.user import ( get_user_by_email, set_user_email_verification, unsubscribe_user_by_token, ) -from backend.server.routers.postmark.models import ( +from backend.util.settings import Settings + +from .models import ( PostmarkBounceEnum, PostmarkBounceWebhook, PostmarkClickWebhook, @@ -19,8 +22,6 @@ from backend.server.routers.postmark.models import ( PostmarkSubscriptionChangeWebhook, PostmarkWebhook, ) -from backend.server.utils.api_key_auth import APIKeyAuthenticator -from backend.util.settings import Settings logger = logging.getLogger(__name__) settings = Settings() diff --git a/autogpt_platform/backend/backend/server/v2/store/README.md b/autogpt_platform/backend/backend/api/features/store/README.md similarity index 100% rename from autogpt_platform/backend/backend/server/v2/store/README.md rename to autogpt_platform/backend/backend/api/features/store/README.md diff --git a/autogpt_platform/backend/backend/api/features/store/__init__.py b/autogpt_platform/backend/backend/api/features/store/__init__.py new file mode 100644 index 0000000000..e69de29bb2 diff --git a/autogpt_platform/backend/backend/server/v2/store/cache.py b/autogpt_platform/backend/backend/api/features/store/cache.py similarity index 74% rename from autogpt_platform/backend/backend/server/v2/store/cache.py rename to autogpt_platform/backend/backend/api/features/store/cache.py index 6d62e684ac..5d9bc24e5d 100644 --- 
a/autogpt_platform/backend/backend/server/v2/store/cache.py +++ b/autogpt_platform/backend/backend/api/features/store/cache.py @@ -1,6 +1,9 @@ -import backend.server.v2.store.db +from typing import Literal + from backend.util.cache import cached +from . import db as store_db + ############################################## ############### Caches ####################### ############################################## @@ -20,14 +23,14 @@ def clear_all_caches(): async def _get_cached_store_agents( featured: bool, creator: str | None, - sorted_by: str | None, + sorted_by: Literal["rating", "runs", "name", "updated_at"] | None, search_query: str | None, category: str | None, page: int, page_size: int, ): """Cached helper to get store agents.""" - return await backend.server.v2.store.db.get_store_agents( + return await store_db.get_store_agents( featured=featured, creators=[creator] if creator else None, sorted_by=sorted_by, @@ -40,10 +43,12 @@ async def _get_cached_store_agents( # Cache individual agent details for 15 minutes @cached(maxsize=200, ttl_seconds=300, shared_cache=True) -async def _get_cached_agent_details(username: str, agent_name: str): +async def _get_cached_agent_details( + username: str, agent_name: str, include_changelog: bool = False +): """Cached helper to get agent details.""" - return await backend.server.v2.store.db.get_store_agent_details( - username=username, agent_name=agent_name + return await store_db.get_store_agent_details( + username=username, agent_name=agent_name, include_changelog=include_changelog ) @@ -52,12 +57,12 @@ async def _get_cached_agent_details(username: str, agent_name: str): async def _get_cached_store_creators( featured: bool, search_query: str | None, - sorted_by: str | None, + sorted_by: Literal["agent_rating", "agent_runs", "num_agents"] | None, page: int, page_size: int, ): """Cached helper to get store creators.""" - return await backend.server.v2.store.db.get_store_creators( + return await store_db.get_store_creators( featured=featured, search_query=search_query, sorted_by=sorted_by, @@ -70,6 +75,4 @@ async def _get_cached_store_creators( @cached(maxsize=100, ttl_seconds=300, shared_cache=True) async def _get_cached_creator_details(username: str): """Cached helper to get creator details.""" - return await backend.server.v2.store.db.get_store_creator_details( - username=username.lower() - ) + return await store_db.get_store_creator_details(username=username.lower()) diff --git a/autogpt_platform/backend/backend/server/v2/store/db.py b/autogpt_platform/backend/backend/api/features/store/db.py similarity index 82% rename from autogpt_platform/backend/backend/server/v2/store/db.py rename to autogpt_platform/backend/backend/api/features/store/db.py index 7e3e78aa77..8e5a39df89 100644 --- a/autogpt_platform/backend/backend/server/v2/store/db.py +++ b/autogpt_platform/backend/backend/api/features/store/db.py @@ -1,6 +1,8 @@ import asyncio import logging +import typing from datetime import datetime, timezone +from typing import Literal import fastapi import prisma.enums @@ -8,9 +10,7 @@ import prisma.errors import prisma.models import prisma.types -import backend.server.v2.store.exceptions -import backend.server.v2.store.model -from backend.data.db import transaction +from backend.data.db import query_raw_with_schema, transaction from backend.data.graph import ( GraphMeta, GraphModel, @@ -28,6 +28,9 @@ from backend.notifications.notifications import queue_notification_async from backend.util.exceptions import DatabaseError from backend.util.settings 
import Settings +from . import exceptions as store_exceptions +from . import model as store_model + logger = logging.getLogger(__name__) settings = Settings() @@ -37,103 +40,193 @@ DEFAULT_ADMIN_NAME = "AutoGPT Admin" DEFAULT_ADMIN_EMAIL = "admin@autogpt.co" -def sanitize_query(query: str | None) -> str | None: - if query is None: - return query - query = query.strip()[:100] - return ( - query.replace("\\", "\\\\") - .replace("%", "\\%") - .replace("_", "\\_") - .replace("[", "\\[") - .replace("]", "\\]") - .replace("'", "\\'") - .replace('"', '\\"') - .replace(";", "\\;") - .replace("--", "\\--") - .replace("/*", "\\/*") - .replace("*/", "\\*/") - ) - - async def get_store_agents( featured: bool = False, creators: list[str] | None = None, - sorted_by: str | None = None, + sorted_by: Literal["rating", "runs", "name", "updated_at"] | None = None, search_query: str | None = None, category: str | None = None, page: int = 1, page_size: int = 20, -) -> backend.server.v2.store.model.StoreAgentsResponse: +) -> store_model.StoreAgentsResponse: """ Get PUBLIC store agents from the StoreAgent view """ logger.debug( f"Getting store agents. featured={featured}, creators={creators}, sorted_by={sorted_by}, search={search_query}, category={category}, page={page}" ) - search_term = sanitize_query(search_query) - where_clause: prisma.types.StoreAgentWhereInput = {"is_available": True} - if featured: - where_clause["featured"] = featured - if creators: - where_clause["creator_username"] = {"in": creators} - if category: - where_clause["categories"] = {"has": category} - - if search_term: - where_clause["OR"] = [ - {"agent_name": {"contains": search_term, "mode": "insensitive"}}, - {"description": {"contains": search_term, "mode": "insensitive"}}, - ] - - order_by = [] - if sorted_by == "rating": - order_by.append({"rating": "desc"}) - elif sorted_by == "runs": - order_by.append({"runs": "desc"}) - elif sorted_by == "name": - order_by.append({"agent_name": "asc"}) try: - agents = await prisma.models.StoreAgent.prisma().find_many( - where=where_clause, - order=order_by, - skip=(page - 1) * page_size, - take=page_size, - ) + # If search_query is provided, use full-text search + if search_query: + offset = (page - 1) * page_size - total = await prisma.models.StoreAgent.prisma().count(where=where_clause) - total_pages = (total + page_size - 1) // page_size + # Whitelist allowed order_by columns + ALLOWED_ORDER_BY = { + "rating": "rating DESC, rank DESC", + "runs": "runs DESC, rank DESC", + "name": "agent_name ASC, rank ASC", + "updated_at": "updated_at DESC, rank DESC", + } - store_agents: list[backend.server.v2.store.model.StoreAgent] = [] - for agent in agents: - try: - # Create the StoreAgent object safely - store_agent = backend.server.v2.store.model.StoreAgent( - slug=agent.slug, - agent_name=agent.agent_name, - agent_image=agent.agent_image[0] if agent.agent_image else "", - creator=agent.creator_username or "Needs Profile", - creator_avatar=agent.creator_avatar or "", - sub_heading=agent.sub_heading, - description=agent.description, - runs=agent.runs, - rating=agent.rating, - ) - # Add to the list only if creation was successful - store_agents.append(store_agent) - except Exception as e: - # Skip this agent if there was an error - # You could log the error here if needed - logger.error( - f"Error parsing Store agent when getting store agents from db: {e}" - ) - continue + # Validate and get order clause + if sorted_by and sorted_by in ALLOWED_ORDER_BY: + order_by_clause = ALLOWED_ORDER_BY[sorted_by] + 
else: + order_by_clause = "updated_at DESC, rank DESC" + + # Build WHERE conditions and parameters list + where_parts: list[str] = [] + params: list[typing.Any] = [search_query] # $1 - search term + param_index = 2 # Start at $2 for next parameter + + # Always filter for available agents + where_parts.append("is_available = true") + + if featured: + where_parts.append("featured = true") + + if creators and creators: + # Use ANY with array parameter + where_parts.append(f"creator_username = ANY(${param_index})") + params.append(creators) + param_index += 1 + + if category and category: + where_parts.append(f"${param_index} = ANY(categories)") + params.append(category) + param_index += 1 + + sql_where_clause: str = " AND ".join(where_parts) if where_parts else "1=1" + + # Add pagination params + params.extend([page_size, offset]) + limit_param = f"${param_index}" + offset_param = f"${param_index + 1}" + + # Execute full-text search query with parameterized values + sql_query = f""" + SELECT + slug, + agent_name, + agent_image, + creator_username, + creator_avatar, + sub_heading, + description, + runs, + rating, + categories, + featured, + is_available, + updated_at, + ts_rank_cd(search, query) AS rank + FROM {{schema_prefix}}"StoreAgent", + plainto_tsquery('english', $1) AS query + WHERE {sql_where_clause} + AND search @@ query + ORDER BY {order_by_clause} + LIMIT {limit_param} OFFSET {offset_param} + """ + + # Count query for pagination - only uses search term parameter + count_query = f""" + SELECT COUNT(*) as count + FROM {{schema_prefix}}"StoreAgent", + plainto_tsquery('english', $1) AS query + WHERE {sql_where_clause} + AND search @@ query + """ + + # Execute both queries with parameters + agents = await query_raw_with_schema(sql_query, *params) + + # For count, use params without pagination (last 2 params) + count_params = params[:-2] + count_result = await query_raw_with_schema(count_query, *count_params) + + total = count_result[0]["count"] if count_result else 0 + total_pages = (total + page_size - 1) // page_size + + # Convert raw results to StoreAgent models + store_agents: list[store_model.StoreAgent] = [] + for agent in agents: + try: + store_agent = store_model.StoreAgent( + slug=agent["slug"], + agent_name=agent["agent_name"], + agent_image=( + agent["agent_image"][0] if agent["agent_image"] else "" + ), + creator=agent["creator_username"] or "Needs Profile", + creator_avatar=agent["creator_avatar"] or "", + sub_heading=agent["sub_heading"], + description=agent["description"], + runs=agent["runs"], + rating=agent["rating"], + ) + store_agents.append(store_agent) + except Exception as e: + logger.error(f"Error parsing Store agent from search results: {e}") + continue + + else: + # Non-search query path (original logic) + where_clause: prisma.types.StoreAgentWhereInput = {"is_available": True} + if featured: + where_clause["featured"] = featured + if creators: + where_clause["creator_username"] = {"in": creators} + if category: + where_clause["categories"] = {"has": category} + + order_by = [] + if sorted_by == "rating": + order_by.append({"rating": "desc"}) + elif sorted_by == "runs": + order_by.append({"runs": "desc"}) + elif sorted_by == "name": + order_by.append({"agent_name": "asc"}) + + agents = await prisma.models.StoreAgent.prisma().find_many( + where=where_clause, + order=order_by, + skip=(page - 1) * page_size, + take=page_size, + ) + + total = await prisma.models.StoreAgent.prisma().count(where=where_clause) + total_pages = (total + page_size - 1) // page_size + + 
store_agents: list[store_model.StoreAgent] = [] + for agent in agents: + try: + # Create the StoreAgent object safely + store_agent = store_model.StoreAgent( + slug=agent.slug, + agent_name=agent.agent_name, + agent_image=agent.agent_image[0] if agent.agent_image else "", + creator=agent.creator_username or "Needs Profile", + creator_avatar=agent.creator_avatar or "", + sub_heading=agent.sub_heading, + description=agent.description, + runs=agent.runs, + rating=agent.rating, + ) + # Add to the list only if creation was successful + store_agents.append(store_agent) + except Exception as e: + # Skip this agent if there was an error + # You could log the error here if needed + logger.error( + f"Error parsing Store agent when getting store agents from db: {e}" + ) + continue logger.debug(f"Found {len(store_agents)} agents") - return backend.server.v2.store.model.StoreAgentsResponse( + return store_model.StoreAgentsResponse( agents=store_agents, - pagination=backend.server.v2.store.model.Pagination( + pagination=store_model.Pagination( current_page=page, total_items=total, total_pages=total_pages, @@ -164,8 +257,8 @@ async def log_search_term(search_query: str): async def get_store_agent_details( - username: str, agent_name: str -) -> backend.server.v2.store.model.StoreAgentDetails: + username: str, agent_name: str, include_changelog: bool = False +) -> store_model.StoreAgentDetails: """Get PUBLIC store agent details from the StoreAgent view""" logger.debug(f"Getting store agent details for {username}/{agent_name}") @@ -176,7 +269,7 @@ async def get_store_agent_details( if not agent: logger.warning(f"Agent not found: {username}/{agent_name}") - raise backend.server.v2.store.exceptions.AgentNotFoundError( + raise store_exceptions.AgentNotFoundError( f"Agent {username}/{agent_name} not found" ) @@ -229,12 +322,34 @@ async def get_store_agent_details( else: recommended_schedule_cron = None + # Fetch changelog data if requested + changelog_data = None + if include_changelog and store_listing: + changelog_versions = ( + await prisma.models.StoreListingVersion.prisma().find_many( + where={ + "storeListingId": store_listing.id, + "submissionStatus": prisma.enums.SubmissionStatus.APPROVED, + }, + order=[{"version": "desc"}], + ) + ) + changelog_data = [ + store_model.ChangelogEntry( + version=str(version.version), + changes_summary=version.changesSummary or "No changes recorded", + date=version.createdAt, + ) + for version in changelog_versions + ] + logger.debug(f"Found agent details for {username}/{agent_name}") - return backend.server.v2.store.model.StoreAgentDetails( + return store_model.StoreAgentDetails( store_listing_version_id=agent.storeListingVersionId, slug=agent.slug, agent_name=agent.agent_name, agent_video=agent.agent_video or "", + agent_output_demo=agent.agent_output_demo or "", agent_image=agent.agent_image, creator=agent.creator_username or "", creator_avatar=agent.creator_avatar or "", @@ -244,12 +359,15 @@ async def get_store_agent_details( runs=agent.runs, rating=agent.rating, versions=agent.versions, + agentGraphVersions=agent.agentGraphVersions, + agentGraphId=agent.agentGraphId, last_updated=agent.updated_at, active_version_id=active_version_id, has_approved_version=has_approved_version, recommended_schedule_cron=recommended_schedule_cron, + changelog=changelog_data, ) - except backend.server.v2.store.exceptions.AgentNotFoundError: + except store_exceptions.AgentNotFoundError: raise except Exception as e: logger.error(f"Error getting store agent details: {e}") @@ -285,7 +403,7 @@ 
async def get_available_graph(store_listing_version_id: str) -> GraphMeta: async def get_store_agent_by_version_id( store_listing_version_id: str, -) -> backend.server.v2.store.model.StoreAgentDetails: +) -> store_model.StoreAgentDetails: logger.debug(f"Getting store agent details for {store_listing_version_id}") try: @@ -295,16 +413,17 @@ async def get_store_agent_by_version_id( if not agent: logger.warning(f"Agent not found: {store_listing_version_id}") - raise backend.server.v2.store.exceptions.AgentNotFoundError( + raise store_exceptions.AgentNotFoundError( f"Agent {store_listing_version_id} not found" ) logger.debug(f"Found agent details for {store_listing_version_id}") - return backend.server.v2.store.model.StoreAgentDetails( + return store_model.StoreAgentDetails( store_listing_version_id=agent.storeListingVersionId, slug=agent.slug, agent_name=agent.agent_name, agent_video=agent.agent_video or "", + agent_output_demo=agent.agent_output_demo or "", agent_image=agent.agent_image, creator=agent.creator_username or "", creator_avatar=agent.creator_avatar or "", @@ -314,9 +433,11 @@ async def get_store_agent_by_version_id( runs=agent.runs, rating=agent.rating, versions=agent.versions, + agentGraphVersions=agent.agentGraphVersions, + agentGraphId=agent.agentGraphId, last_updated=agent.updated_at, ) - except backend.server.v2.store.exceptions.AgentNotFoundError: + except store_exceptions.AgentNotFoundError: raise except Exception as e: logger.error(f"Error getting store agent details: {e}") @@ -326,10 +447,10 @@ async def get_store_agent_by_version_id( async def get_store_creators( featured: bool = False, search_query: str | None = None, - sorted_by: str | None = None, + sorted_by: Literal["agent_rating", "agent_runs", "num_agents"] | None = None, page: int = 1, page_size: int = 20, -) -> backend.server.v2.store.model.CreatorsResponse: +) -> store_model.CreatorsResponse: """Get PUBLIC store creators from the Creator view""" logger.debug( f"Getting store creators. 
featured={featured}, search={search_query}, sorted_by={sorted_by}, page={page}" @@ -404,7 +525,7 @@ async def get_store_creators( # Convert to response model creator_models = [ - backend.server.v2.store.model.Creator( + store_model.Creator( username=creator.username, name=creator.name, description=creator.description, @@ -418,9 +539,9 @@ async def get_store_creators( ] logger.debug(f"Found {len(creator_models)} creators") - return backend.server.v2.store.model.CreatorsResponse( + return store_model.CreatorsResponse( creators=creator_models, - pagination=backend.server.v2.store.model.Pagination( + pagination=store_model.Pagination( current_page=page, total_items=total, total_pages=total_pages, @@ -434,7 +555,7 @@ async def get_store_creators( async def get_store_creator_details( username: str, -) -> backend.server.v2.store.model.CreatorDetails: +) -> store_model.CreatorDetails: logger.debug(f"Getting store creator details for {username}") try: @@ -445,12 +566,10 @@ async def get_store_creator_details( if not creator: logger.warning(f"Creator not found: {username}") - raise backend.server.v2.store.exceptions.CreatorNotFoundError( - f"Creator {username} not found" - ) + raise store_exceptions.CreatorNotFoundError(f"Creator {username} not found") logger.debug(f"Found creator details for {username}") - return backend.server.v2.store.model.CreatorDetails( + return store_model.CreatorDetails( name=creator.name, username=creator.username, description=creator.description, @@ -460,7 +579,7 @@ async def get_store_creator_details( agent_runs=creator.agent_runs, top_categories=creator.top_categories, ) - except backend.server.v2.store.exceptions.CreatorNotFoundError: + except store_exceptions.CreatorNotFoundError: raise except Exception as e: logger.error(f"Error getting store creator details: {e}") @@ -469,7 +588,7 @@ async def get_store_creator_details( async def get_store_submissions( user_id: str, page: int = 1, page_size: int = 20 -) -> backend.server.v2.store.model.StoreSubmissionsResponse: +) -> store_model.StoreSubmissionsResponse: """Get store submissions for the authenticated user -- not an admin""" logger.debug(f"Getting store submissions for user {user_id}, page={page}") @@ -494,7 +613,7 @@ async def get_store_submissions( # Convert to response models submission_models = [] for sub in submissions: - submission_model = backend.server.v2.store.model.StoreSubmission( + submission_model = store_model.StoreSubmission( agent_id=sub.agent_id, agent_version=sub.agent_version, name=sub.name, @@ -519,9 +638,9 @@ async def get_store_submissions( submission_models.append(submission_model) logger.debug(f"Found {len(submission_models)} submissions") - return backend.server.v2.store.model.StoreSubmissionsResponse( + return store_model.StoreSubmissionsResponse( submissions=submission_models, - pagination=backend.server.v2.store.model.Pagination( + pagination=store_model.Pagination( current_page=page, total_items=total, total_pages=total_pages, @@ -532,9 +651,9 @@ async def get_store_submissions( except Exception as e: logger.error(f"Error fetching store submissions: {e}") # Return empty response rather than exposing internal errors - return backend.server.v2.store.model.StoreSubmissionsResponse( + return store_model.StoreSubmissionsResponse( submissions=[], - pagination=backend.server.v2.store.model.Pagination( + pagination=store_model.Pagination( current_page=page, total_items=0, total_pages=0, @@ -567,7 +686,7 @@ async def delete_store_submission( if not submission: logger.warning(f"Submission not found 
for user {user_id}: {submission_id}") - raise backend.server.v2.store.exceptions.SubmissionNotFoundError( + raise store_exceptions.SubmissionNotFoundError( f"Submission not found for this user. User ID: {user_id}, Submission ID: {submission_id}" ) @@ -591,6 +710,7 @@ async def create_store_submission( slug: str, name: str, video_url: str | None = None, + agent_output_demo_url: str | None = None, image_urls: list[str] = [], description: str = "", instructions: str | None = None, @@ -598,7 +718,7 @@ async def create_store_submission( categories: list[str] = [], changes_summary: str | None = "Initial Submission", recommended_schedule_cron: str | None = None, -) -> backend.server.v2.store.model.StoreSubmission: +) -> store_model.StoreSubmission: """ Create the first (and only) store listing and thus submission as a normal user @@ -639,7 +759,7 @@ async def create_store_submission( logger.warning( f"Agent not found for user {user_id}: {agent_id} v{agent_version}" ) - raise backend.server.v2.store.exceptions.AgentNotFoundError( + raise store_exceptions.AgentNotFoundError( f"Agent not found for this user. User ID: {user_id}, Agent ID: {agent_id}, Version: {agent_version}" ) @@ -685,6 +805,7 @@ async def create_store_submission( agentGraphVersion=agent_version, name=name, videoUrl=video_url, + agentOutputDemoUrl=agent_output_demo_url, imageUrls=image_urls, description=description, instructions=instructions, @@ -711,7 +832,7 @@ async def create_store_submission( logger.debug(f"Created store listing for agent {agent_id}") # Return submission details - return backend.server.v2.store.model.StoreSubmission( + return store_model.StoreSubmission( agent_id=agent_id, agent_version=agent_version, name=name, @@ -734,7 +855,7 @@ async def create_store_submission( logger.debug( f"Slug '{slug}' is already in use by another agent (agent_id: {agent_id}) for user {user_id}" ) - raise backend.server.v2.store.exceptions.SlugAlreadyInUseError( + raise store_exceptions.SlugAlreadyInUseError( f"The URL slug '{slug}' is already in use by another one of your agents. Please choose a different slug." ) from exc else: @@ -743,8 +864,8 @@ async def create_store_submission( f"Unique constraint violated (not slug): {error_str}" ) from exc except ( - backend.server.v2.store.exceptions.AgentNotFoundError, - backend.server.v2.store.exceptions.ListingExistsError, + store_exceptions.AgentNotFoundError, + store_exceptions.ListingExistsError, ): raise except prisma.errors.PrismaError as e: @@ -757,6 +878,7 @@ async def edit_store_submission( store_listing_version_id: str, name: str, video_url: str | None = None, + agent_output_demo_url: str | None = None, image_urls: list[str] = [], description: str = "", sub_heading: str = "", @@ -764,7 +886,7 @@ async def edit_store_submission( changes_summary: str | None = "Update submission", recommended_schedule_cron: str | None = None, instructions: str | None = None, -) -> backend.server.v2.store.model.StoreSubmission: +) -> store_model.StoreSubmission: """ Edit an existing store listing submission. 
@@ -806,7 +928,7 @@ async def edit_store_submission( ) if not current_version: - raise backend.server.v2.store.exceptions.SubmissionNotFoundError( + raise store_exceptions.SubmissionNotFoundError( f"Store listing version not found: {store_listing_version_id}" ) @@ -815,7 +937,7 @@ async def edit_store_submission( not current_version.StoreListing or current_version.StoreListing.owningUserId != user_id ): - raise backend.server.v2.store.exceptions.UnauthorizedError( + raise store_exceptions.UnauthorizedError( f"User {user_id} does not own submission {store_listing_version_id}" ) @@ -824,7 +946,7 @@ async def edit_store_submission( # Check if we can edit this submission if current_version.submissionStatus == prisma.enums.SubmissionStatus.REJECTED: - raise backend.server.v2.store.exceptions.InvalidOperationError( + raise store_exceptions.InvalidOperationError( "Cannot edit a rejected submission" ) @@ -838,6 +960,7 @@ async def edit_store_submission( store_listing_id=current_version.storeListingId, name=name, video_url=video_url, + agent_output_demo_url=agent_output_demo_url, image_urls=image_urls, description=description, sub_heading=sub_heading, @@ -855,6 +978,7 @@ async def edit_store_submission( data=prisma.types.StoreListingVersionUpdateInput( name=name, videoUrl=video_url, + agentOutputDemoUrl=agent_output_demo_url, imageUrls=image_urls, description=description, categories=categories, @@ -871,7 +995,7 @@ async def edit_store_submission( if not updated_version: raise DatabaseError("Failed to update store listing version") - return backend.server.v2.store.model.StoreSubmission( + return store_model.StoreSubmission( agent_id=current_version.agentGraphId, agent_version=current_version.agentGraphVersion, name=name, @@ -892,16 +1016,16 @@ async def edit_store_submission( ) else: - raise backend.server.v2.store.exceptions.InvalidOperationError( + raise store_exceptions.InvalidOperationError( f"Cannot edit submission with status: {current_version.submissionStatus}" ) except ( - backend.server.v2.store.exceptions.SubmissionNotFoundError, - backend.server.v2.store.exceptions.UnauthorizedError, - backend.server.v2.store.exceptions.AgentNotFoundError, - backend.server.v2.store.exceptions.ListingExistsError, - backend.server.v2.store.exceptions.InvalidOperationError, + store_exceptions.SubmissionNotFoundError, + store_exceptions.UnauthorizedError, + store_exceptions.AgentNotFoundError, + store_exceptions.ListingExistsError, + store_exceptions.InvalidOperationError, ): raise except prisma.errors.PrismaError as e: @@ -916,6 +1040,7 @@ async def create_store_version( store_listing_id: str, name: str, video_url: str | None = None, + agent_output_demo_url: str | None = None, image_urls: list[str] = [], description: str = "", instructions: str | None = None, @@ -923,7 +1048,7 @@ async def create_store_version( categories: list[str] = [], changes_summary: str | None = "Initial submission", recommended_schedule_cron: str | None = None, -) -> backend.server.v2.store.model.StoreSubmission: +) -> store_model.StoreSubmission: """ Create a new version for an existing store listing @@ -956,7 +1081,7 @@ async def create_store_version( ) if not listing: - raise backend.server.v2.store.exceptions.ListingNotFoundError( + raise store_exceptions.ListingNotFoundError( f"Store listing not found. 
User ID: {user_id}, Listing ID: {store_listing_id}" ) @@ -968,7 +1093,7 @@ async def create_store_version( ) if not agent: - raise backend.server.v2.store.exceptions.AgentNotFoundError( + raise store_exceptions.AgentNotFoundError( f"Agent not found for this user. User ID: {user_id}, Agent ID: {agent_id}, Version: {agent_version}" ) @@ -985,6 +1110,7 @@ async def create_store_version( agentGraphVersion=agent_version, name=name, videoUrl=video_url, + agentOutputDemoUrl=agent_output_demo_url, imageUrls=image_urls, description=description, instructions=instructions, @@ -1002,7 +1128,7 @@ async def create_store_version( f"Created new version for listing {store_listing_id} of agent {agent_id}" ) # Return submission details - return backend.server.v2.store.model.StoreSubmission( + return store_model.StoreSubmission( agent_id=agent_id, agent_version=agent_version, name=name, @@ -1029,7 +1155,7 @@ async def create_store_review( store_listing_version_id: str, score: int, comments: str | None = None, -) -> backend.server.v2.store.model.StoreReview: +) -> store_model.StoreReview: """Create a review for a store listing as a user to detail their experience""" try: data = prisma.types.StoreListingReviewUpsertInput( @@ -1054,7 +1180,7 @@ async def create_store_review( data=data, ) - return backend.server.v2.store.model.StoreReview( + return store_model.StoreReview( score=review.score, comments=review.comments, ) @@ -1066,7 +1192,7 @@ async def create_store_review( async def get_user_profile( user_id: str, -) -> backend.server.v2.store.model.ProfileDetails | None: +) -> store_model.ProfileDetails | None: logger.debug(f"Getting user profile for {user_id}") try: @@ -1076,7 +1202,7 @@ async def get_user_profile( if not profile: return None - return backend.server.v2.store.model.ProfileDetails( + return store_model.ProfileDetails( name=profile.name, username=profile.username, description=profile.description, @@ -1089,8 +1215,8 @@ async def get_user_profile( async def update_profile( - user_id: str, profile: backend.server.v2.store.model.Profile -) -> backend.server.v2.store.model.CreatorDetails: + user_id: str, profile: store_model.Profile +) -> store_model.CreatorDetails: """ Update the store profile for a user or create a new one if it doesn't exist. Args: @@ -1113,7 +1239,7 @@ async def update_profile( where={"userId": user_id} ) if not existing_profile: - raise backend.server.v2.store.exceptions.ProfileNotFoundError( + raise store_exceptions.ProfileNotFoundError( f"Profile not found for user {user_id}. This should not be possible." 
) @@ -1149,7 +1275,7 @@ async def update_profile( logger.error(f"Failed to update profile for user {user_id}") raise DatabaseError("Failed to update profile") - return backend.server.v2.store.model.CreatorDetails( + return store_model.CreatorDetails( name=updated_profile.name, username=updated_profile.username, description=updated_profile.description, @@ -1169,7 +1295,7 @@ async def get_my_agents( user_id: str, page: int = 1, page_size: int = 20, -) -> backend.server.v2.store.model.MyAgentsResponse: +) -> store_model.MyAgentsResponse: """Get the agents for the authenticated user""" logger.debug(f"Getting my agents for user {user_id}, page={page}") @@ -1206,7 +1332,7 @@ async def get_my_agents( total_pages = (total + page_size - 1) // page_size my_agents = [ - backend.server.v2.store.model.MyAgent( + store_model.MyAgent( agent_id=graph.id, agent_version=graph.version, agent_name=graph.name or "", @@ -1219,9 +1345,9 @@ async def get_my_agents( if (graph := library_agent.AgentGraph) ] - return backend.server.v2.store.model.MyAgentsResponse( + return store_model.MyAgentsResponse( agents=my_agents, - pagination=backend.server.v2.store.model.Pagination( + pagination=store_model.Pagination( current_page=page, total_items=total, total_pages=total_pages, @@ -1247,6 +1373,7 @@ async def get_agent(store_listing_version_id: str) -> GraphModel: graph = await get_graph( graph_id=store_listing_version.agentGraphId, version=store_listing_version.agentGraphVersion, + user_id=None, for_export=True, ) if not graph: @@ -1367,7 +1494,7 @@ async def review_store_submission( external_comments: str, internal_comments: str, reviewer_id: str, -) -> backend.server.v2.store.model.StoreSubmission: +) -> store_model.StoreSubmission: """Review a store listing submission as an admin.""" try: store_listing_version = ( @@ -1580,7 +1707,7 @@ async def review_store_submission( pass # Convert to Pydantic model for consistency - return backend.server.v2.store.model.StoreSubmission( + return store_model.StoreSubmission( agent_id=submission.agentGraphId, agent_version=submission.agentGraphVersion, name=submission.name, @@ -1615,7 +1742,7 @@ async def get_admin_listings_with_versions( search_query: str | None = None, page: int = 1, page_size: int = 20, -) -> backend.server.v2.store.model.StoreListingsWithVersionsResponse: +) -> store_model.StoreListingsWithVersionsResponse: """ Get store listings for admins with all their versions. 
@@ -1640,22 +1767,21 @@ async def get_admin_listings_with_versions( if status: where_dict["Versions"] = {"some": {"submissionStatus": status}} - sanitized_query = sanitize_query(search_query) - if sanitized_query: + if search_query: # Find users with matching email matching_users = await prisma.models.User.prisma().find_many( - where={"email": {"contains": sanitized_query, "mode": "insensitive"}}, + where={"email": {"contains": search_query, "mode": "insensitive"}}, ) user_ids = [user.id for user in matching_users] # Set up OR conditions where_dict["OR"] = [ - {"slug": {"contains": sanitized_query, "mode": "insensitive"}}, + {"slug": {"contains": search_query, "mode": "insensitive"}}, { "Versions": { "some": { - "name": {"contains": sanitized_query, "mode": "insensitive"} + "name": {"contains": search_query, "mode": "insensitive"} } } }, @@ -1663,7 +1789,7 @@ async def get_admin_listings_with_versions( "Versions": { "some": { "description": { - "contains": sanitized_query, + "contains": search_query, "mode": "insensitive", } } @@ -1673,7 +1799,7 @@ async def get_admin_listings_with_versions( "Versions": { "some": { "subHeading": { - "contains": sanitized_query, + "contains": search_query, "mode": "insensitive", } } @@ -1715,10 +1841,10 @@ async def get_admin_listings_with_versions( # Convert to response models listings_with_versions = [] for listing in listings: - versions: list[backend.server.v2.store.model.StoreSubmission] = [] + versions: list[store_model.StoreSubmission] = [] # If we have versions, turn them into StoreSubmission models for version in listing.Versions or []: - version_model = backend.server.v2.store.model.StoreSubmission( + version_model = store_model.StoreSubmission( agent_id=version.agentGraphId, agent_version=version.agentGraphVersion, name=version.name, @@ -1746,26 +1872,24 @@ async def get_admin_listings_with_versions( creator_email = listing.OwningUser.email if listing.OwningUser else None - listing_with_versions = ( - backend.server.v2.store.model.StoreListingWithVersions( - listing_id=listing.id, - slug=listing.slug, - agent_id=listing.agentGraphId, - agent_version=listing.agentGraphVersion, - active_version_id=listing.activeVersionId, - has_approved_version=listing.hasApprovedVersion, - creator_email=creator_email, - latest_version=latest_version, - versions=versions, - ) + listing_with_versions = store_model.StoreListingWithVersions( + listing_id=listing.id, + slug=listing.slug, + agent_id=listing.agentGraphId, + agent_version=listing.agentGraphVersion, + active_version_id=listing.activeVersionId, + has_approved_version=listing.hasApprovedVersion, + creator_email=creator_email, + latest_version=latest_version, + versions=versions, ) listings_with_versions.append(listing_with_versions) logger.debug(f"Found {len(listings_with_versions)} listings for admin") - return backend.server.v2.store.model.StoreListingsWithVersionsResponse( + return store_model.StoreListingsWithVersionsResponse( listings=listings_with_versions, - pagination=backend.server.v2.store.model.Pagination( + pagination=store_model.Pagination( current_page=page, total_items=total, total_pages=total_pages, @@ -1775,9 +1899,9 @@ async def get_admin_listings_with_versions( except Exception as e: logger.error(f"Error fetching admin store listings: {e}") # Return empty response rather than exposing internal errors - return backend.server.v2.store.model.StoreListingsWithVersionsResponse( + return store_model.StoreListingsWithVersionsResponse( listings=[], - 
pagination=backend.server.v2.store.model.Pagination( + pagination=store_model.Pagination( current_page=page, total_items=0, total_pages=0, diff --git a/autogpt_platform/backend/backend/server/v2/store/db_test.py b/autogpt_platform/backend/backend/api/features/store/db_test.py similarity index 83% rename from autogpt_platform/backend/backend/server/v2/store/db_test.py rename to autogpt_platform/backend/backend/api/features/store/db_test.py index dacad92e35..b48ce5db95 100644 --- a/autogpt_platform/backend/backend/server/v2/store/db_test.py +++ b/autogpt_platform/backend/backend/api/features/store/db_test.py @@ -6,8 +6,8 @@ import prisma.models import pytest from prisma import Prisma -import backend.server.v2.store.db as db -from backend.server.v2.store.model import Profile +from . import db +from .model import Profile @pytest.fixture(autouse=True) @@ -20,7 +20,7 @@ async def setup_prisma(): yield -@pytest.mark.asyncio +@pytest.mark.asyncio(loop_scope="session") async def test_get_store_agents(mocker): # Mock data mock_agents = [ @@ -40,6 +40,8 @@ async def test_get_store_agents(mocker): runs=10, rating=4.5, versions=["1.0"], + agentGraphVersions=["1"], + agentGraphId="test-graph-id", updated_at=datetime.now(), is_available=False, useForOnboarding=False, @@ -64,7 +66,7 @@ async def test_get_store_agents(mocker): mock_store_agent.return_value.count.assert_called_once() -@pytest.mark.asyncio +@pytest.mark.asyncio(loop_scope="session") async def test_get_store_agent_details(mocker): # Mock data mock_agent = prisma.models.StoreAgent( @@ -83,6 +85,8 @@ async def test_get_store_agent_details(mocker): runs=10, rating=4.5, versions=["1.0"], + agentGraphVersions=["1"], + agentGraphId="test-graph-id", updated_at=datetime.now(), is_available=False, useForOnboarding=False, @@ -105,6 +109,8 @@ async def test_get_store_agent_details(mocker): runs=15, rating=4.8, versions=["1.0", "2.0"], + agentGraphVersions=["1", "2"], + agentGraphId="test-graph-id-active", updated_at=datetime.now(), is_available=True, useForOnboarding=False, @@ -173,7 +179,7 @@ async def test_get_store_agent_details(mocker): mock_store_listing_db.return_value.find_first.assert_called_once() -@pytest.mark.asyncio +@pytest.mark.asyncio(loop_scope="session") async def test_get_store_creator_details(mocker): # Mock data mock_creator_data = prisma.models.Creator( @@ -210,7 +216,7 @@ async def test_get_store_creator_details(mocker): ) -@pytest.mark.asyncio +@pytest.mark.asyncio(loop_scope="session") async def test_create_store_submission(mocker): # Mock data mock_agent = prisma.models.AgentGraph( @@ -282,7 +288,7 @@ async def test_create_store_submission(mocker): mock_store_listing.return_value.create.assert_called_once() -@pytest.mark.asyncio +@pytest.mark.asyncio(loop_scope="session") async def test_update_profile(mocker): # Mock data mock_profile = prisma.models.Profile( @@ -327,7 +333,7 @@ async def test_update_profile(mocker): mock_profile_db.return_value.update.assert_called_once() -@pytest.mark.asyncio +@pytest.mark.asyncio(loop_scope="session") async def test_get_user_profile(mocker): # Mock data mock_profile = prisma.models.Profile( @@ -359,3 +365,49 @@ async def test_get_user_profile(mocker): assert result.description == "Test description" assert result.links == ["link1", "link2"] assert result.avatar_url == "avatar.jpg" + + +@pytest.mark.asyncio(loop_scope="session") +async def test_get_store_agents_with_search_parameterized(mocker): + """Test that search query uses parameterized SQL - validates the fix works""" + + # Call function 
with search query containing potential SQL injection + malicious_search = "test'; DROP TABLE StoreAgent; --" + result = await db.get_store_agents(search_query=malicious_search) + + # Verify query executed safely + assert isinstance(result.agents, list) + + +@pytest.mark.asyncio(loop_scope="session") +async def test_get_store_agents_with_search_and_filters_parameterized(): + """Test parameterized SQL with multiple filters""" + + # Call with multiple filters including potential injection attempts + result = await db.get_store_agents( + search_query="test", + creators=["creator1'; DROP TABLE Users; --", "creator2"], + category="AI'; DELETE FROM StoreAgent; --", + featured=True, + sorted_by="rating", + page=1, + page_size=20, + ) + + # Verify the query executed without error + assert isinstance(result.agents, list) + + +@pytest.mark.asyncio(loop_scope="session") +async def test_get_store_agents_search_category_array_injection(): + """Test that category parameter is safely passed as a parameter""" + # Try SQL injection via category + malicious_category = "AI'; DROP TABLE StoreAgent; --" + result = await db.get_store_agents( + search_query="test", + category=malicious_category, + ) + + # Verify the query executed without error + # Category should be parameterized, preventing SQL injection + assert isinstance(result.agents, list) diff --git a/autogpt_platform/backend/backend/server/v2/store/exceptions.py b/autogpt_platform/backend/backend/api/features/store/exceptions.py similarity index 100% rename from autogpt_platform/backend/backend/server/v2/store/exceptions.py rename to autogpt_platform/backend/backend/api/features/store/exceptions.py diff --git a/autogpt_platform/backend/backend/server/v2/store/image_gen.py b/autogpt_platform/backend/backend/api/features/store/image_gen.py similarity index 100% rename from autogpt_platform/backend/backend/server/v2/store/image_gen.py rename to autogpt_platform/backend/backend/api/features/store/image_gen.py diff --git a/autogpt_platform/backend/backend/server/v2/store/media.py b/autogpt_platform/backend/backend/api/features/store/media.py similarity index 81% rename from autogpt_platform/backend/backend/server/v2/store/media.py rename to autogpt_platform/backend/backend/api/features/store/media.py index 88542dd2c8..cfdc71567a 100644 --- a/autogpt_platform/backend/backend/server/v2/store/media.py +++ b/autogpt_platform/backend/backend/api/features/store/media.py @@ -5,11 +5,12 @@ import uuid import fastapi from gcloud.aio import storage as async_storage -import backend.server.v2.store.exceptions from backend.util.exceptions import MissingConfigError from backend.util.settings import Settings from backend.util.virus_scanner import scan_content_safe +from . 
import exceptions as store_exceptions + logger = logging.getLogger(__name__) ALLOWED_IMAGE_TYPES = {"image/jpeg", "image/png", "image/gif", "image/webp"} @@ -68,61 +69,55 @@ async def upload_media( await file.seek(0) # Reset file pointer except Exception as e: logger.error(f"Error reading file content: {str(e)}") - raise backend.server.v2.store.exceptions.FileReadError( - "Failed to read file content" - ) from e + raise store_exceptions.FileReadError("Failed to read file content") from e # Validate file signature/magic bytes if file.content_type in ALLOWED_IMAGE_TYPES: # Check image file signatures if content.startswith(b"\xff\xd8\xff"): # JPEG if file.content_type != "image/jpeg": - raise backend.server.v2.store.exceptions.InvalidFileTypeError( + raise store_exceptions.InvalidFileTypeError( "File signature does not match content type" ) elif content.startswith(b"\x89PNG\r\n\x1a\n"): # PNG if file.content_type != "image/png": - raise backend.server.v2.store.exceptions.InvalidFileTypeError( + raise store_exceptions.InvalidFileTypeError( "File signature does not match content type" ) elif content.startswith(b"GIF87a") or content.startswith(b"GIF89a"): # GIF if file.content_type != "image/gif": - raise backend.server.v2.store.exceptions.InvalidFileTypeError( + raise store_exceptions.InvalidFileTypeError( "File signature does not match content type" ) elif content.startswith(b"RIFF") and content[8:12] == b"WEBP": # WebP if file.content_type != "image/webp": - raise backend.server.v2.store.exceptions.InvalidFileTypeError( + raise store_exceptions.InvalidFileTypeError( "File signature does not match content type" ) else: - raise backend.server.v2.store.exceptions.InvalidFileTypeError( - "Invalid image file signature" - ) + raise store_exceptions.InvalidFileTypeError("Invalid image file signature") elif file.content_type in ALLOWED_VIDEO_TYPES: # Check video file signatures if content.startswith(b"\x00\x00\x00") and (content[4:8] == b"ftyp"): # MP4 if file.content_type != "video/mp4": - raise backend.server.v2.store.exceptions.InvalidFileTypeError( + raise store_exceptions.InvalidFileTypeError( "File signature does not match content type" ) elif content.startswith(b"\x1a\x45\xdf\xa3"): # WebM if file.content_type != "video/webm": - raise backend.server.v2.store.exceptions.InvalidFileTypeError( + raise store_exceptions.InvalidFileTypeError( "File signature does not match content type" ) else: - raise backend.server.v2.store.exceptions.InvalidFileTypeError( - "Invalid video file signature" - ) + raise store_exceptions.InvalidFileTypeError("Invalid video file signature") settings = Settings() # Check required settings first before doing any file processing if not settings.config.media_gcs_bucket_name: logger.error("Missing GCS bucket name setting") - raise backend.server.v2.store.exceptions.StorageConfigError( + raise store_exceptions.StorageConfigError( "Missing storage bucket configuration" ) @@ -137,7 +132,7 @@ async def upload_media( and content_type not in ALLOWED_VIDEO_TYPES ): logger.warning(f"Invalid file type attempted: {content_type}") - raise backend.server.v2.store.exceptions.InvalidFileTypeError( + raise store_exceptions.InvalidFileTypeError( f"File type not supported. Must be jpeg, png, gif, webp, mp4 or webm. 
Content type: {content_type}" ) @@ -150,16 +145,14 @@ async def upload_media( file_size += len(chunk) if file_size > MAX_FILE_SIZE: logger.warning(f"File size too large: {file_size} bytes") - raise backend.server.v2.store.exceptions.FileSizeTooLargeError( + raise store_exceptions.FileSizeTooLargeError( "File too large. Maximum size is 50MB" ) - except backend.server.v2.store.exceptions.FileSizeTooLargeError: + except store_exceptions.FileSizeTooLargeError: raise except Exception as e: logger.error(f"Error reading file chunks: {str(e)}") - raise backend.server.v2.store.exceptions.FileReadError( - "Failed to read uploaded file" - ) from e + raise store_exceptions.FileReadError("Failed to read uploaded file") from e # Reset file pointer await file.seek(0) @@ -198,14 +191,14 @@ async def upload_media( except Exception as e: logger.error(f"GCS storage error: {str(e)}") - raise backend.server.v2.store.exceptions.StorageUploadError( + raise store_exceptions.StorageUploadError( "Failed to upload file to storage" ) from e - except backend.server.v2.store.exceptions.MediaUploadError: + except store_exceptions.MediaUploadError: raise except Exception as e: logger.exception("Unexpected error in upload_media") - raise backend.server.v2.store.exceptions.MediaUploadError( + raise store_exceptions.MediaUploadError( "Unexpected error during media upload" ) from e diff --git a/autogpt_platform/backend/backend/server/v2/store/media_test.py b/autogpt_platform/backend/backend/api/features/store/media_test.py similarity index 75% rename from autogpt_platform/backend/backend/server/v2/store/media_test.py rename to autogpt_platform/backend/backend/api/features/store/media_test.py index 3722d2fdc3..7f3899c8a5 100644 --- a/autogpt_platform/backend/backend/server/v2/store/media_test.py +++ b/autogpt_platform/backend/backend/api/features/store/media_test.py @@ -6,17 +6,18 @@ import fastapi import pytest import starlette.datastructures -import backend.server.v2.store.exceptions -import backend.server.v2.store.media from backend.util.settings import Settings +from . import exceptions as store_exceptions +from . 
import media as store_media + @pytest.fixture def mock_settings(monkeypatch): settings = Settings() settings.config.media_gcs_bucket_name = "test-bucket" settings.config.google_application_credentials = "test-credentials" - monkeypatch.setattr("backend.server.v2.store.media.Settings", lambda: settings) + monkeypatch.setattr("backend.api.features.store.media.Settings", lambda: settings) return settings @@ -32,12 +33,13 @@ def mock_storage_client(mocker): # Mock the constructor to return our mock client mocker.patch( - "backend.server.v2.store.media.async_storage.Storage", return_value=mock_client + "backend.api.features.store.media.async_storage.Storage", + return_value=mock_client, ) # Mock virus scanner to avoid actual scanning mocker.patch( - "backend.server.v2.store.media.scan_content_safe", new_callable=AsyncMock + "backend.api.features.store.media.scan_content_safe", new_callable=AsyncMock ) return mock_client @@ -53,7 +55,7 @@ async def test_upload_media_success(mock_settings, mock_storage_client): headers=starlette.datastructures.Headers({"content-type": "image/jpeg"}), ) - result = await backend.server.v2.store.media.upload_media("test-user", test_file) + result = await store_media.upload_media("test-user", test_file) assert result.startswith( "https://storage.googleapis.com/test-bucket/users/test-user/images/" @@ -69,8 +71,8 @@ async def test_upload_media_invalid_type(mock_settings, mock_storage_client): headers=starlette.datastructures.Headers({"content-type": "text/plain"}), ) - with pytest.raises(backend.server.v2.store.exceptions.InvalidFileTypeError): - await backend.server.v2.store.media.upload_media("test-user", test_file) + with pytest.raises(store_exceptions.InvalidFileTypeError): + await store_media.upload_media("test-user", test_file) mock_storage_client.upload.assert_not_called() @@ -79,7 +81,7 @@ async def test_upload_media_missing_credentials(monkeypatch): settings = Settings() settings.config.media_gcs_bucket_name = "" settings.config.google_application_credentials = "" - monkeypatch.setattr("backend.server.v2.store.media.Settings", lambda: settings) + monkeypatch.setattr("backend.api.features.store.media.Settings", lambda: settings) test_file = fastapi.UploadFile( filename="laptop.jpeg", @@ -87,8 +89,8 @@ async def test_upload_media_missing_credentials(monkeypatch): headers=starlette.datastructures.Headers({"content-type": "image/jpeg"}), ) - with pytest.raises(backend.server.v2.store.exceptions.StorageConfigError): - await backend.server.v2.store.media.upload_media("test-user", test_file) + with pytest.raises(store_exceptions.StorageConfigError): + await store_media.upload_media("test-user", test_file) async def test_upload_media_video_type(mock_settings, mock_storage_client): @@ -98,7 +100,7 @@ async def test_upload_media_video_type(mock_settings, mock_storage_client): headers=starlette.datastructures.Headers({"content-type": "video/mp4"}), ) - result = await backend.server.v2.store.media.upload_media("test-user", test_file) + result = await store_media.upload_media("test-user", test_file) assert result.startswith( "https://storage.googleapis.com/test-bucket/users/test-user/videos/" @@ -117,8 +119,8 @@ async def test_upload_media_file_too_large(mock_settings, mock_storage_client): headers=starlette.datastructures.Headers({"content-type": "image/jpeg"}), ) - with pytest.raises(backend.server.v2.store.exceptions.FileSizeTooLargeError): - await backend.server.v2.store.media.upload_media("test-user", test_file) + with 
pytest.raises(store_exceptions.FileSizeTooLargeError): + await store_media.upload_media("test-user", test_file) async def test_upload_media_file_read_error(mock_settings, mock_storage_client): @@ -129,8 +131,8 @@ async def test_upload_media_file_read_error(mock_settings, mock_storage_client): ) test_file.read = unittest.mock.AsyncMock(side_effect=Exception("Read error")) - with pytest.raises(backend.server.v2.store.exceptions.FileReadError): - await backend.server.v2.store.media.upload_media("test-user", test_file) + with pytest.raises(store_exceptions.FileReadError): + await store_media.upload_media("test-user", test_file) async def test_upload_media_png_success(mock_settings, mock_storage_client): @@ -140,7 +142,7 @@ async def test_upload_media_png_success(mock_settings, mock_storage_client): headers=starlette.datastructures.Headers({"content-type": "image/png"}), ) - result = await backend.server.v2.store.media.upload_media("test-user", test_file) + result = await store_media.upload_media("test-user", test_file) assert result.startswith( "https://storage.googleapis.com/test-bucket/users/test-user/images/" ) @@ -154,7 +156,7 @@ async def test_upload_media_gif_success(mock_settings, mock_storage_client): headers=starlette.datastructures.Headers({"content-type": "image/gif"}), ) - result = await backend.server.v2.store.media.upload_media("test-user", test_file) + result = await store_media.upload_media("test-user", test_file) assert result.startswith( "https://storage.googleapis.com/test-bucket/users/test-user/images/" ) @@ -168,7 +170,7 @@ async def test_upload_media_webp_success(mock_settings, mock_storage_client): headers=starlette.datastructures.Headers({"content-type": "image/webp"}), ) - result = await backend.server.v2.store.media.upload_media("test-user", test_file) + result = await store_media.upload_media("test-user", test_file) assert result.startswith( "https://storage.googleapis.com/test-bucket/users/test-user/images/" ) @@ -182,7 +184,7 @@ async def test_upload_media_webm_success(mock_settings, mock_storage_client): headers=starlette.datastructures.Headers({"content-type": "video/webm"}), ) - result = await backend.server.v2.store.media.upload_media("test-user", test_file) + result = await store_media.upload_media("test-user", test_file) assert result.startswith( "https://storage.googleapis.com/test-bucket/users/test-user/videos/" ) @@ -196,8 +198,8 @@ async def test_upload_media_mismatched_signature(mock_settings, mock_storage_cli headers=starlette.datastructures.Headers({"content-type": "image/jpeg"}), ) - with pytest.raises(backend.server.v2.store.exceptions.InvalidFileTypeError): - await backend.server.v2.store.media.upload_media("test-user", test_file) + with pytest.raises(store_exceptions.InvalidFileTypeError): + await store_media.upload_media("test-user", test_file) async def test_upload_media_invalid_signature(mock_settings, mock_storage_client): @@ -207,5 +209,5 @@ async def test_upload_media_invalid_signature(mock_settings, mock_storage_client headers=starlette.datastructures.Headers({"content-type": "image/jpeg"}), ) - with pytest.raises(backend.server.v2.store.exceptions.InvalidFileTypeError): - await backend.server.v2.store.media.upload_media("test-user", test_file) + with pytest.raises(store_exceptions.InvalidFileTypeError): + await store_media.upload_media("test-user", test_file) diff --git a/autogpt_platform/backend/backend/server/v2/store/model.py b/autogpt_platform/backend/backend/api/features/store/model.py similarity index 91% rename from 
autogpt_platform/backend/backend/server/v2/store/model.py rename to autogpt_platform/backend/backend/api/features/store/model.py index ce2aabaa28..972898b296 100644 --- a/autogpt_platform/backend/backend/server/v2/store/model.py +++ b/autogpt_platform/backend/backend/api/features/store/model.py @@ -7,6 +7,12 @@ import pydantic from backend.util.models import Pagination +class ChangelogEntry(pydantic.BaseModel): + version: str + changes_summary: str + date: datetime.datetime + + class MyAgent(pydantic.BaseModel): agent_id: str agent_version: int @@ -44,6 +50,7 @@ class StoreAgentDetails(pydantic.BaseModel): slug: str agent_name: str agent_video: str + agent_output_demo: str agent_image: list[str] creator: str creator_avatar: str @@ -54,12 +61,17 @@ class StoreAgentDetails(pydantic.BaseModel): runs: int rating: float versions: list[str] + agentGraphVersions: list[str] + agentGraphId: str last_updated: datetime.datetime recommended_schedule_cron: str | None = None active_version_id: str | None = None has_approved_version: bool = False + # Optional changelog data when include_changelog=True + changelog: list[ChangelogEntry] | None = None + class Creator(pydantic.BaseModel): name: str @@ -121,6 +133,7 @@ class StoreSubmission(pydantic.BaseModel): # Additional fields for editing video_url: str | None = None + agent_output_demo_url: str | None = None categories: list[str] = [] @@ -157,6 +170,7 @@ class StoreSubmissionRequest(pydantic.BaseModel): name: str sub_heading: str video_url: str | None = None + agent_output_demo_url: str | None = None image_urls: list[str] = [] description: str = "" instructions: str | None = None @@ -169,6 +183,7 @@ class StoreSubmissionEditRequest(pydantic.BaseModel): name: str sub_heading: str video_url: str | None = None + agent_output_demo_url: str | None = None image_urls: list[str] = [] description: str = "" instructions: str | None = None diff --git a/autogpt_platform/backend/backend/server/v2/store/model_test.py b/autogpt_platform/backend/backend/api/features/store/model_test.py similarity index 83% rename from autogpt_platform/backend/backend/server/v2/store/model_test.py rename to autogpt_platform/backend/backend/api/features/store/model_test.py index ec90fe6854..a37966601b 100644 --- a/autogpt_platform/backend/backend/server/v2/store/model_test.py +++ b/autogpt_platform/backend/backend/api/features/store/model_test.py @@ -2,11 +2,11 @@ import datetime import prisma.enums -import backend.server.v2.store.model +from . 
import model as store_model def test_pagination(): - pagination = backend.server.v2.store.model.Pagination( + pagination = store_model.Pagination( total_items=100, total_pages=5, current_page=2, page_size=20 ) assert pagination.total_items == 100 @@ -16,7 +16,7 @@ def test_pagination(): def test_store_agent(): - agent = backend.server.v2.store.model.StoreAgent( + agent = store_model.StoreAgent( slug="test-agent", agent_name="Test Agent", agent_image="test.jpg", @@ -34,9 +34,9 @@ def test_store_agent(): def test_store_agents_response(): - response = backend.server.v2.store.model.StoreAgentsResponse( + response = store_model.StoreAgentsResponse( agents=[ - backend.server.v2.store.model.StoreAgent( + store_model.StoreAgent( slug="test-agent", agent_name="Test Agent", agent_image="test.jpg", @@ -48,7 +48,7 @@ def test_store_agents_response(): rating=4.5, ) ], - pagination=backend.server.v2.store.model.Pagination( + pagination=store_model.Pagination( total_items=1, total_pages=1, current_page=1, page_size=20 ), ) @@ -57,11 +57,12 @@ def test_store_agents_response(): def test_store_agent_details(): - details = backend.server.v2.store.model.StoreAgentDetails( + details = store_model.StoreAgentDetails( store_listing_version_id="version123", slug="test-agent", agent_name="Test Agent", agent_video="video.mp4", + agent_output_demo="demo.mp4", agent_image=["image1.jpg", "image2.jpg"], creator="creator1", creator_avatar="avatar.jpg", @@ -71,6 +72,8 @@ def test_store_agent_details(): runs=50, rating=4.5, versions=["1.0", "2.0"], + agentGraphVersions=["1", "2"], + agentGraphId="test-graph-id", last_updated=datetime.datetime.now(), ) assert details.slug == "test-agent" @@ -80,7 +83,7 @@ def test_store_agent_details(): def test_creator(): - creator = backend.server.v2.store.model.Creator( + creator = store_model.Creator( agent_rating=4.8, agent_runs=1000, name="Test Creator", @@ -95,9 +98,9 @@ def test_creator(): def test_creators_response(): - response = backend.server.v2.store.model.CreatorsResponse( + response = store_model.CreatorsResponse( creators=[ - backend.server.v2.store.model.Creator( + store_model.Creator( agent_rating=4.8, agent_runs=1000, name="Test Creator", @@ -108,7 +111,7 @@ def test_creators_response(): is_featured=False, ) ], - pagination=backend.server.v2.store.model.Pagination( + pagination=store_model.Pagination( total_items=1, total_pages=1, current_page=1, page_size=20 ), ) @@ -117,7 +120,7 @@ def test_creators_response(): def test_creator_details(): - details = backend.server.v2.store.model.CreatorDetails( + details = store_model.CreatorDetails( name="Test Creator", username="creator1", description="Test description", @@ -134,7 +137,7 @@ def test_creator_details(): def test_store_submission(): - submission = backend.server.v2.store.model.StoreSubmission( + submission = store_model.StoreSubmission( agent_id="agent123", agent_version=1, sub_heading="Test subheading", @@ -153,9 +156,9 @@ def test_store_submission(): def test_store_submissions_response(): - response = backend.server.v2.store.model.StoreSubmissionsResponse( + response = store_model.StoreSubmissionsResponse( submissions=[ - backend.server.v2.store.model.StoreSubmission( + store_model.StoreSubmission( agent_id="agent123", agent_version=1, sub_heading="Test subheading", @@ -169,7 +172,7 @@ def test_store_submissions_response(): rating=4.5, ) ], - pagination=backend.server.v2.store.model.Pagination( + pagination=store_model.Pagination( total_items=1, total_pages=1, current_page=1, page_size=20 ), ) @@ -178,7 +181,7 @@ def 
test_store_submissions_response(): def test_store_submission_request(): - request = backend.server.v2.store.model.StoreSubmissionRequest( + request = store_model.StoreSubmissionRequest( agent_id="agent123", agent_version=1, slug="test-agent", diff --git a/autogpt_platform/backend/backend/server/v2/store/routes.py b/autogpt_platform/backend/backend/api/features/store/routes.py similarity index 54% rename from autogpt_platform/backend/backend/server/v2/store/routes.py rename to autogpt_platform/backend/backend/api/features/store/routes.py index 5dca0b22df..7816b25d5a 100644 --- a/autogpt_platform/backend/backend/server/v2/store/routes.py +++ b/autogpt_platform/backend/backend/api/features/store/routes.py @@ -2,20 +2,21 @@ import logging import tempfile import typing import urllib.parse +from typing import Literal import autogpt_libs.auth import fastapi import fastapi.responses import backend.data.graph -import backend.server.v2.store.cache as store_cache -import backend.server.v2.store.db -import backend.server.v2.store.exceptions -import backend.server.v2.store.image_gen -import backend.server.v2.store.media -import backend.server.v2.store.model import backend.util.json +from . import cache as store_cache +from . import db as store_db +from . import image_gen as store_image_gen +from . import media as store_media +from . import model as store_model + logger = logging.getLogger(__name__) router = fastapi.APIRouter() @@ -31,7 +32,7 @@ router = fastapi.APIRouter() summary="Get user profile", tags=["store", "private"], dependencies=[fastapi.Security(autogpt_libs.auth.requires_user)], - response_model=backend.server.v2.store.model.ProfileDetails, + response_model=store_model.ProfileDetails, ) async def get_profile( user_id: str = fastapi.Security(autogpt_libs.auth.get_user_id), @@ -40,23 +41,13 @@ async def get_profile( Get the profile details for the authenticated user. Cached for 1 hour per user. 
""" - try: - profile = await backend.server.v2.store.db.get_user_profile(user_id) - if profile is None: - return fastapi.responses.JSONResponse( - status_code=404, - content={"detail": "Profile not found"}, - ) - return profile - except Exception as e: - logger.exception("Failed to fetch user profile for %s: %s", user_id, e) + profile = await store_db.get_user_profile(user_id) + if profile is None: return fastapi.responses.JSONResponse( - status_code=500, - content={ - "detail": "Failed to retrieve user profile", - "hint": "Check database connection.", - }, + status_code=404, + content={"detail": "Profile not found"}, ) + return profile @router.post( @@ -64,10 +55,10 @@ async def get_profile( summary="Update user profile", tags=["store", "private"], dependencies=[fastapi.Security(autogpt_libs.auth.requires_user)], - response_model=backend.server.v2.store.model.CreatorDetails, + response_model=store_model.CreatorDetails, ) async def update_or_create_profile( - profile: backend.server.v2.store.model.Profile, + profile: store_model.Profile, user_id: str = fastapi.Security(autogpt_libs.auth.get_user_id), ): """ @@ -83,20 +74,8 @@ async def update_or_create_profile( Raises: HTTPException: If there is an error updating the profile """ - try: - updated_profile = await backend.server.v2.store.db.update_profile( - user_id=user_id, profile=profile - ) - return updated_profile - except Exception as e: - logger.exception("Failed to update profile for user %s: %s", user_id, e) - return fastapi.responses.JSONResponse( - status_code=500, - content={ - "detail": "Failed to update user profile", - "hint": "Validate request data.", - }, - ) + updated_profile = await store_db.update_profile(user_id=user_id, profile=profile) + return updated_profile ############################################## @@ -108,12 +87,12 @@ async def update_or_create_profile( "/agents", summary="List store agents", tags=["store", "public"], - response_model=backend.server.v2.store.model.StoreAgentsResponse, + response_model=store_model.StoreAgentsResponse, ) async def get_agents( featured: bool = False, creator: str | None = None, - sorted_by: str | None = None, + sorted_by: Literal["rating", "runs", "name", "updated_at"] | None = None, search_query: str | None = None, category: str | None = None, page: int = 1, @@ -155,56 +134,41 @@ async def get_agents( status_code=422, detail="Page size must be greater than 0" ) - try: - agents = await store_cache._get_cached_store_agents( - featured=featured, - creator=creator, - sorted_by=sorted_by, - search_query=search_query, - category=category, - page=page, - page_size=page_size, - ) - return agents - except Exception as e: - logger.exception("Failed to retrieve store agents: %s", e) - return fastapi.responses.JSONResponse( - status_code=500, - content={ - "detail": "Failed to retrieve store agents", - "hint": "Check database or search parameters.", - }, - ) + agents = await store_cache._get_cached_store_agents( + featured=featured, + creator=creator, + sorted_by=sorted_by, + search_query=search_query, + category=category, + page=page, + page_size=page_size, + ) + return agents @router.get( "/agents/{username}/{agent_name}", summary="Get specific agent", tags=["store", "public"], - response_model=backend.server.v2.store.model.StoreAgentDetails, + response_model=store_model.StoreAgentDetails, ) -async def get_agent(username: str, agent_name: str): +async def get_agent( + username: str, + agent_name: str, + include_changelog: bool = fastapi.Query(default=False), +): """ This is only used on 
the AgentDetails Page. It returns the store listing agents details. """ - try: - username = urllib.parse.unquote(username).lower() - # URL decode the agent name since it comes from the URL path - agent_name = urllib.parse.unquote(agent_name).lower() - agent = await store_cache._get_cached_agent_details( - username=username, agent_name=agent_name - ) - return agent - except Exception: - logger.exception("Exception occurred whilst getting store agent details") - return fastapi.responses.JSONResponse( - status_code=500, - content={ - "detail": "An error occurred while retrieving the store agent details" - }, - ) + username = urllib.parse.unquote(username).lower() + # URL decode the agent name since it comes from the URL path + agent_name = urllib.parse.unquote(agent_name).lower() + agent = await store_cache._get_cached_agent_details( + username=username, agent_name=agent_name, include_changelog=include_changelog + ) + return agent @router.get( @@ -213,21 +177,14 @@ async def get_agent(username: str, agent_name: str): tags=["store"], dependencies=[fastapi.Security(autogpt_libs.auth.requires_user)], ) -async def get_graph_meta_by_store_listing_version_id(store_listing_version_id: str): +async def get_graph_meta_by_store_listing_version_id( + store_listing_version_id: str, +) -> backend.data.graph.GraphMeta: """ Get Agent Graph from Store Listing Version ID. """ - try: - graph = await backend.server.v2.store.db.get_available_graph( - store_listing_version_id - ) - return graph - except Exception: - logger.exception("Exception occurred whilst getting agent graph") - return fastapi.responses.JSONResponse( - status_code=500, - content={"detail": "An error occurred while retrieving the agent graph"}, - ) + graph = await store_db.get_available_graph(store_listing_version_id) + return graph @router.get( @@ -235,24 +192,15 @@ async def get_graph_meta_by_store_listing_version_id(store_listing_version_id: s summary="Get agent by version", tags=["store"], dependencies=[fastapi.Security(autogpt_libs.auth.requires_user)], - response_model=backend.server.v2.store.model.StoreAgentDetails, + response_model=store_model.StoreAgentDetails, ) async def get_store_agent(store_listing_version_id: str): """ Get Store Agent Details from Store Listing Version ID. 
""" - try: - agent = await backend.server.v2.store.db.get_store_agent_by_version_id( - store_listing_version_id - ) + agent = await store_db.get_store_agent_by_version_id(store_listing_version_id) - return agent - except Exception: - logger.exception("Exception occurred whilst getting store agent") - return fastapi.responses.JSONResponse( - status_code=500, - content={"detail": "An error occurred while retrieving the store agent"}, - ) + return agent @router.post( @@ -260,12 +208,12 @@ async def get_store_agent(store_listing_version_id: str): summary="Create agent review", tags=["store"], dependencies=[fastapi.Security(autogpt_libs.auth.requires_user)], - response_model=backend.server.v2.store.model.StoreReview, + response_model=store_model.StoreReview, ) async def create_review( username: str, agent_name: str, - review: backend.server.v2.store.model.StoreReviewCreate, + review: store_model.StoreReviewCreate, user_id: str = fastapi.Security(autogpt_libs.auth.get_user_id), ): """ @@ -280,24 +228,17 @@ async def create_review( Returns: The created review """ - try: - username = urllib.parse.unquote(username).lower() - agent_name = urllib.parse.unquote(agent_name).lower() - # Create the review - created_review = await backend.server.v2.store.db.create_store_review( - user_id=user_id, - store_listing_version_id=review.store_listing_version_id, - score=review.score, - comments=review.comments, - ) + username = urllib.parse.unquote(username).lower() + agent_name = urllib.parse.unquote(agent_name).lower() + # Create the review + created_review = await store_db.create_store_review( + user_id=user_id, + store_listing_version_id=review.store_listing_version_id, + score=review.score, + comments=review.comments, + ) - return created_review - except Exception: - logger.exception("Exception occurred whilst creating store review") - return fastapi.responses.JSONResponse( - status_code=500, - content={"detail": "An error occurred while creating the store review"}, - ) + return created_review ############################################## @@ -309,12 +250,12 @@ async def create_review( "/creators", summary="List store creators", tags=["store", "public"], - response_model=backend.server.v2.store.model.CreatorsResponse, + response_model=store_model.CreatorsResponse, ) async def get_creators( featured: bool = False, search_query: str | None = None, - sorted_by: str | None = None, + sorted_by: Literal["agent_rating", "agent_runs", "num_agents"] | None = None, page: int = 1, page_size: int = 20, ): @@ -340,28 +281,21 @@ async def get_creators( status_code=422, detail="Page size must be greater than 0" ) - try: - creators = await store_cache._get_cached_store_creators( - featured=featured, - search_query=search_query, - sorted_by=sorted_by, - page=page, - page_size=page_size, - ) - return creators - except Exception: - logger.exception("Exception occurred whilst getting store creators") - return fastapi.responses.JSONResponse( - status_code=500, - content={"detail": "An error occurred while retrieving the store creators"}, - ) + creators = await store_cache._get_cached_store_creators( + featured=featured, + search_query=search_query, + sorted_by=sorted_by, + page=page, + page_size=page_size, + ) + return creators @router.get( "/creator/{username}", summary="Get creator details", tags=["store", "public"], - response_model=backend.server.v2.store.model.CreatorDetails, + response_model=store_model.CreatorDetails, ) async def get_creator( username: str, @@ -370,18 +304,9 @@ async def get_creator( Get the details of a 
creator. - Creator Details Page """ - try: - username = urllib.parse.unquote(username).lower() - creator = await store_cache._get_cached_creator_details(username=username) - return creator - except Exception: - logger.exception("Exception occurred whilst getting creator details") - return fastapi.responses.JSONResponse( - status_code=500, - content={ - "detail": "An error occurred while retrieving the creator details" - }, - ) + username = urllib.parse.unquote(username).lower() + creator = await store_cache._get_cached_creator_details(username=username) + return creator ############################################ @@ -394,7 +319,7 @@ async def get_creator( summary="Get my agents", tags=["store", "private"], dependencies=[fastapi.Security(autogpt_libs.auth.requires_user)], - response_model=backend.server.v2.store.model.MyAgentsResponse, + response_model=store_model.MyAgentsResponse, ) async def get_my_agents( user_id: str = fastapi.Security(autogpt_libs.auth.get_user_id), @@ -404,17 +329,8 @@ async def get_my_agents( """ Get user's own agents. """ - try: - agents = await backend.server.v2.store.db.get_my_agents( - user_id, page=page, page_size=page_size - ) - return agents - except Exception: - logger.exception("Exception occurred whilst getting my agents") - return fastapi.responses.JSONResponse( - status_code=500, - content={"detail": "An error occurred while retrieving the my agents"}, - ) + agents = await store_db.get_my_agents(user_id, page=page, page_size=page_size) + return agents @router.delete( @@ -438,19 +354,12 @@ async def delete_submission( Returns: bool: True if the submission was successfully deleted, False otherwise """ - try: - result = await backend.server.v2.store.db.delete_store_submission( - user_id=user_id, - submission_id=submission_id, - ) + result = await store_db.delete_store_submission( + user_id=user_id, + submission_id=submission_id, + ) - return result - except Exception: - logger.exception("Exception occurred whilst deleting store submission") - return fastapi.responses.JSONResponse( - status_code=500, - content={"detail": "An error occurred while deleting the store submission"}, - ) + return result @router.get( @@ -458,7 +367,7 @@ async def delete_submission( summary="List my submissions", tags=["store", "private"], dependencies=[fastapi.Security(autogpt_libs.auth.requires_user)], - response_model=backend.server.v2.store.model.StoreSubmissionsResponse, + response_model=store_model.StoreSubmissionsResponse, ) async def get_submissions( user_id: str = fastapi.Security(autogpt_libs.auth.get_user_id), @@ -488,21 +397,12 @@ async def get_submissions( raise fastapi.HTTPException( status_code=422, detail="Page size must be greater than 0" ) - try: - listings = await backend.server.v2.store.db.get_store_submissions( - user_id=user_id, - page=page, - page_size=page_size, - ) - return listings - except Exception: - logger.exception("Exception occurred whilst getting store submissions") - return fastapi.responses.JSONResponse( - status_code=500, - content={ - "detail": "An error occurred while retrieving the store submissions" - }, - ) + listings = await store_db.get_store_submissions( + user_id=user_id, + page=page, + page_size=page_size, + ) + return listings @router.post( @@ -510,10 +410,10 @@ async def get_submissions( summary="Create store submission", tags=["store", "private"], dependencies=[fastapi.Security(autogpt_libs.auth.requires_user)], - response_model=backend.server.v2.store.model.StoreSubmission, + response_model=store_model.StoreSubmission, ) async def 
create_submission( - submission_request: backend.server.v2.store.model.StoreSubmissionRequest, + submission_request: store_model.StoreSubmissionRequest, user_id: str = fastapi.Security(autogpt_libs.auth.get_user_id), ): """ @@ -529,36 +429,24 @@ async def create_submission( Raises: HTTPException: If there is an error creating the submission """ - try: - result = await backend.server.v2.store.db.create_store_submission( - user_id=user_id, - agent_id=submission_request.agent_id, - agent_version=submission_request.agent_version, - slug=submission_request.slug, - name=submission_request.name, - video_url=submission_request.video_url, - image_urls=submission_request.image_urls, - description=submission_request.description, - instructions=submission_request.instructions, - sub_heading=submission_request.sub_heading, - categories=submission_request.categories, - changes_summary=submission_request.changes_summary or "Initial Submission", - recommended_schedule_cron=submission_request.recommended_schedule_cron, - ) + result = await store_db.create_store_submission( + user_id=user_id, + agent_id=submission_request.agent_id, + agent_version=submission_request.agent_version, + slug=submission_request.slug, + name=submission_request.name, + video_url=submission_request.video_url, + agent_output_demo_url=submission_request.agent_output_demo_url, + image_urls=submission_request.image_urls, + description=submission_request.description, + instructions=submission_request.instructions, + sub_heading=submission_request.sub_heading, + categories=submission_request.categories, + changes_summary=submission_request.changes_summary or "Initial Submission", + recommended_schedule_cron=submission_request.recommended_schedule_cron, + ) - return result - except backend.server.v2.store.exceptions.SlugAlreadyInUseError as e: - logger.warning("Slug already in use: %s", str(e)) - return fastapi.responses.JSONResponse( - status_code=409, - content={"detail": str(e)}, - ) - except Exception: - logger.exception("Exception occurred whilst creating store submission") - return fastapi.responses.JSONResponse( - status_code=500, - content={"detail": "An error occurred while creating the store submission"}, - ) + return result @router.put( @@ -566,11 +454,11 @@ async def create_submission( summary="Edit store submission", tags=["store", "private"], dependencies=[fastapi.Security(autogpt_libs.auth.requires_user)], - response_model=backend.server.v2.store.model.StoreSubmission, + response_model=store_model.StoreSubmission, ) async def edit_submission( store_listing_version_id: str, - submission_request: backend.server.v2.store.model.StoreSubmissionEditRequest, + submission_request: store_model.StoreSubmissionEditRequest, user_id: str = fastapi.Security(autogpt_libs.auth.get_user_id), ): """ @@ -587,11 +475,12 @@ async def edit_submission( Raises: HTTPException: If there is an error editing the submission """ - result = await backend.server.v2.store.db.edit_store_submission( + result = await store_db.edit_store_submission( user_id=user_id, store_listing_version_id=store_listing_version_id, name=submission_request.name, video_url=submission_request.video_url, + agent_output_demo_url=submission_request.agent_output_demo_url, image_urls=submission_request.image_urls, description=submission_request.description, instructions=submission_request.instructions, @@ -627,36 +516,8 @@ async def upload_submission_media( Raises: HTTPException: If there is an error uploading the media """ - try: - media_url = await 
backend.server.v2.store.media.upload_media( - user_id=user_id, file=file - ) - return media_url - except backend.server.v2.store.exceptions.VirusDetectedError as e: - logger.warning(f"Virus detected in uploaded file: {e.threat_name}") - return fastapi.responses.JSONResponse( - status_code=400, - content={ - "detail": f"File rejected due to virus detection: {e.threat_name}", - "error_type": "virus_detected", - "threat_name": e.threat_name, - }, - ) - except backend.server.v2.store.exceptions.VirusScanError as e: - logger.error(f"Virus scanning failed: {str(e)}") - return fastapi.responses.JSONResponse( - status_code=503, - content={ - "detail": "Virus scanning service unavailable. Please try again later.", - "error_type": "virus_scan_failed", - }, - ) - except Exception: - logger.exception("Exception occurred whilst uploading submission media") - return fastapi.responses.JSONResponse( - status_code=500, - content={"detail": "An error occurred while uploading the media file"}, - ) + media_url = await store_media.upload_media(user_id=user_id, file=file) + return media_url @router.post( @@ -679,44 +540,35 @@ async def generate_image( Returns: JSONResponse: JSON containing the URL of the generated image """ - try: - agent = await backend.data.graph.get_graph(agent_id, user_id=user_id) + agent = await backend.data.graph.get_graph( + graph_id=agent_id, version=None, user_id=user_id + ) - if not agent: - raise fastapi.HTTPException( - status_code=404, detail=f"Agent with ID {agent_id} not found" - ) - # Use .jpeg here since we are generating JPEG images - filename = f"agent_{agent_id}.jpeg" + if not agent: + raise fastapi.HTTPException( + status_code=404, detail=f"Agent with ID {agent_id} not found" + ) + # Use .jpeg here since we are generating JPEG images + filename = f"agent_{agent_id}.jpeg" - existing_url = await backend.server.v2.store.media.check_media_exists( - user_id, filename - ) - if existing_url: - logger.info(f"Using existing image for agent {agent_id}") - return fastapi.responses.JSONResponse(content={"image_url": existing_url}) - # Generate agent image as JPEG - image = await backend.server.v2.store.image_gen.generate_agent_image( - agent=agent - ) + existing_url = await store_media.check_media_exists(user_id, filename) + if existing_url: + logger.info(f"Using existing image for agent {agent_id}") + return fastapi.responses.JSONResponse(content={"image_url": existing_url}) + # Generate agent image as JPEG + image = await store_image_gen.generate_agent_image(agent=agent) - # Create UploadFile with the correct filename and content_type - image_file = fastapi.UploadFile( - file=image, - filename=filename, - ) + # Create UploadFile with the correct filename and content_type + image_file = fastapi.UploadFile( + file=image, + filename=filename, + ) - image_url = await backend.server.v2.store.media.upload_media( - user_id=user_id, file=image_file, use_file_name=True - ) + image_url = await store_media.upload_media( + user_id=user_id, file=image_file, use_file_name=True + ) - return fastapi.responses.JSONResponse(content={"image_url": image_url}) - except Exception: - logger.exception("Exception occurred whilst generating submission image") - return fastapi.responses.JSONResponse( - status_code=500, - content={"detail": "An error occurred while generating the image"}, - ) + return fastapi.responses.JSONResponse(content={"image_url": image_url}) @router.get( @@ -741,7 +593,7 @@ async def download_agent_file( Raises: HTTPException: If the agent is not found or an unexpected error occurs. 
""" - graph_data = await backend.server.v2.store.db.get_agent(store_listing_version_id) + graph_data = await store_db.get_agent(store_listing_version_id) file_name = f"agent_{graph_data.id}_v{graph_data.version or 'latest'}.json" # Sending graph as a stream (similar to marketplace v1) diff --git a/autogpt_platform/backend/backend/server/v2/store/routes_test.py b/autogpt_platform/backend/backend/api/features/store/routes_test.py similarity index 76% rename from autogpt_platform/backend/backend/server/v2/store/routes_test.py rename to autogpt_platform/backend/backend/api/features/store/routes_test.py index 8dd83149b0..7fdc0b9ebb 100644 --- a/autogpt_platform/backend/backend/server/v2/store/routes_test.py +++ b/autogpt_platform/backend/backend/api/features/store/routes_test.py @@ -8,15 +8,15 @@ import pytest import pytest_mock from pytest_snapshot.plugin import Snapshot -import backend.server.v2.store.model -import backend.server.v2.store.routes +from . import model as store_model +from . import routes as store_routes # Using a fixed timestamp for reproducible tests # 2023 date is intentionally used to ensure tests work regardless of current year FIXED_NOW = datetime.datetime(2023, 1, 1, 0, 0, 0) app = fastapi.FastAPI() -app.include_router(backend.server.v2.store.routes.router) +app.include_router(store_routes.router) client = fastapi.testclient.TestClient(app) @@ -35,23 +35,21 @@ def test_get_agents_defaults( mocker: pytest_mock.MockFixture, snapshot: Snapshot, ) -> None: - mocked_value = backend.server.v2.store.model.StoreAgentsResponse( + mocked_value = store_model.StoreAgentsResponse( agents=[], - pagination=backend.server.v2.store.model.Pagination( + pagination=store_model.Pagination( current_page=0, total_items=0, total_pages=0, page_size=10, ), ) - mock_db_call = mocker.patch("backend.server.v2.store.db.get_store_agents") + mock_db_call = mocker.patch("backend.api.features.store.db.get_store_agents") mock_db_call.return_value = mocked_value response = client.get("/agents") assert response.status_code == 200 - data = backend.server.v2.store.model.StoreAgentsResponse.model_validate( - response.json() - ) + data = store_model.StoreAgentsResponse.model_validate(response.json()) assert data.pagination.total_pages == 0 assert data.agents == [] @@ -72,9 +70,9 @@ def test_get_agents_featured( mocker: pytest_mock.MockFixture, snapshot: Snapshot, ) -> None: - mocked_value = backend.server.v2.store.model.StoreAgentsResponse( + mocked_value = store_model.StoreAgentsResponse( agents=[ - backend.server.v2.store.model.StoreAgent( + store_model.StoreAgent( slug="featured-agent", agent_name="Featured Agent", agent_image="featured.jpg", @@ -86,20 +84,18 @@ def test_get_agents_featured( rating=4.5, ) ], - pagination=backend.server.v2.store.model.Pagination( + pagination=store_model.Pagination( current_page=1, total_items=1, total_pages=1, page_size=20, ), ) - mock_db_call = mocker.patch("backend.server.v2.store.db.get_store_agents") + mock_db_call = mocker.patch("backend.api.features.store.db.get_store_agents") mock_db_call.return_value = mocked_value response = client.get("/agents?featured=true") assert response.status_code == 200 - data = backend.server.v2.store.model.StoreAgentsResponse.model_validate( - response.json() - ) + data = store_model.StoreAgentsResponse.model_validate(response.json()) assert len(data.agents) == 1 assert data.agents[0].slug == "featured-agent" snapshot.snapshot_dir = "snapshots" @@ -119,9 +115,9 @@ def test_get_agents_by_creator( mocker: pytest_mock.MockFixture, snapshot: 
Snapshot, ) -> None: - mocked_value = backend.server.v2.store.model.StoreAgentsResponse( + mocked_value = store_model.StoreAgentsResponse( agents=[ - backend.server.v2.store.model.StoreAgent( + store_model.StoreAgent( slug="creator-agent", agent_name="Creator Agent", agent_image="agent.jpg", @@ -133,20 +129,18 @@ def test_get_agents_by_creator( rating=4.0, ) ], - pagination=backend.server.v2.store.model.Pagination( + pagination=store_model.Pagination( current_page=1, total_items=1, total_pages=1, page_size=20, ), ) - mock_db_call = mocker.patch("backend.server.v2.store.db.get_store_agents") + mock_db_call = mocker.patch("backend.api.features.store.db.get_store_agents") mock_db_call.return_value = mocked_value response = client.get("/agents?creator=specific-creator") assert response.status_code == 200 - data = backend.server.v2.store.model.StoreAgentsResponse.model_validate( - response.json() - ) + data = store_model.StoreAgentsResponse.model_validate(response.json()) assert len(data.agents) == 1 assert data.agents[0].creator == "specific-creator" snapshot.snapshot_dir = "snapshots" @@ -166,9 +160,9 @@ def test_get_agents_sorted( mocker: pytest_mock.MockFixture, snapshot: Snapshot, ) -> None: - mocked_value = backend.server.v2.store.model.StoreAgentsResponse( + mocked_value = store_model.StoreAgentsResponse( agents=[ - backend.server.v2.store.model.StoreAgent( + store_model.StoreAgent( slug="top-agent", agent_name="Top Agent", agent_image="top.jpg", @@ -180,20 +174,18 @@ def test_get_agents_sorted( rating=5.0, ) ], - pagination=backend.server.v2.store.model.Pagination( + pagination=store_model.Pagination( current_page=1, total_items=1, total_pages=1, page_size=20, ), ) - mock_db_call = mocker.patch("backend.server.v2.store.db.get_store_agents") + mock_db_call = mocker.patch("backend.api.features.store.db.get_store_agents") mock_db_call.return_value = mocked_value response = client.get("/agents?sorted_by=runs") assert response.status_code == 200 - data = backend.server.v2.store.model.StoreAgentsResponse.model_validate( - response.json() - ) + data = store_model.StoreAgentsResponse.model_validate(response.json()) assert len(data.agents) == 1 assert data.agents[0].runs == 1000 snapshot.snapshot_dir = "snapshots" @@ -213,9 +205,9 @@ def test_get_agents_search( mocker: pytest_mock.MockFixture, snapshot: Snapshot, ) -> None: - mocked_value = backend.server.v2.store.model.StoreAgentsResponse( + mocked_value = store_model.StoreAgentsResponse( agents=[ - backend.server.v2.store.model.StoreAgent( + store_model.StoreAgent( slug="search-agent", agent_name="Search Agent", agent_image="search.jpg", @@ -227,20 +219,18 @@ def test_get_agents_search( rating=4.2, ) ], - pagination=backend.server.v2.store.model.Pagination( + pagination=store_model.Pagination( current_page=1, total_items=1, total_pages=1, page_size=20, ), ) - mock_db_call = mocker.patch("backend.server.v2.store.db.get_store_agents") + mock_db_call = mocker.patch("backend.api.features.store.db.get_store_agents") mock_db_call.return_value = mocked_value response = client.get("/agents?search_query=specific") assert response.status_code == 200 - data = backend.server.v2.store.model.StoreAgentsResponse.model_validate( - response.json() - ) + data = store_model.StoreAgentsResponse.model_validate(response.json()) assert len(data.agents) == 1 assert "specific" in data.agents[0].description.lower() snapshot.snapshot_dir = "snapshots" @@ -260,9 +250,9 @@ def test_get_agents_category( mocker: pytest_mock.MockFixture, snapshot: Snapshot, ) -> None: - 
mocked_value = backend.server.v2.store.model.StoreAgentsResponse( + mocked_value = store_model.StoreAgentsResponse( agents=[ - backend.server.v2.store.model.StoreAgent( + store_model.StoreAgent( slug="category-agent", agent_name="Category Agent", agent_image="category.jpg", @@ -274,20 +264,18 @@ def test_get_agents_category( rating=4.1, ) ], - pagination=backend.server.v2.store.model.Pagination( + pagination=store_model.Pagination( current_page=1, total_items=1, total_pages=1, page_size=20, ), ) - mock_db_call = mocker.patch("backend.server.v2.store.db.get_store_agents") + mock_db_call = mocker.patch("backend.api.features.store.db.get_store_agents") mock_db_call.return_value = mocked_value response = client.get("/agents?category=test-category") assert response.status_code == 200 - data = backend.server.v2.store.model.StoreAgentsResponse.model_validate( - response.json() - ) + data = store_model.StoreAgentsResponse.model_validate(response.json()) assert len(data.agents) == 1 snapshot.snapshot_dir = "snapshots" snapshot.assert_match(json.dumps(response.json(), indent=2), "agts_category") @@ -306,9 +294,9 @@ def test_get_agents_pagination( mocker: pytest_mock.MockFixture, snapshot: Snapshot, ) -> None: - mocked_value = backend.server.v2.store.model.StoreAgentsResponse( + mocked_value = store_model.StoreAgentsResponse( agents=[ - backend.server.v2.store.model.StoreAgent( + store_model.StoreAgent( slug=f"agent-{i}", agent_name=f"Agent {i}", agent_image=f"agent{i}.jpg", @@ -321,20 +309,18 @@ def test_get_agents_pagination( ) for i in range(5) ], - pagination=backend.server.v2.store.model.Pagination( + pagination=store_model.Pagination( current_page=2, total_items=15, total_pages=3, page_size=5, ), ) - mock_db_call = mocker.patch("backend.server.v2.store.db.get_store_agents") + mock_db_call = mocker.patch("backend.api.features.store.db.get_store_agents") mock_db_call.return_value = mocked_value response = client.get("/agents?page=2&page_size=5") assert response.status_code == 200 - data = backend.server.v2.store.model.StoreAgentsResponse.model_validate( - response.json() - ) + data = store_model.StoreAgentsResponse.model_validate(response.json()) assert len(data.agents) == 5 assert data.pagination.current_page == 2 assert data.pagination.page_size == 5 @@ -365,7 +351,7 @@ def test_get_agents_malformed_request(mocker: pytest_mock.MockFixture): assert response.status_code == 422 # Verify no DB calls were made - mock_db_call = mocker.patch("backend.server.v2.store.db.get_store_agents") + mock_db_call = mocker.patch("backend.api.features.store.db.get_store_agents") mock_db_call.assert_not_called() @@ -373,11 +359,12 @@ def test_get_agent_details( mocker: pytest_mock.MockFixture, snapshot: Snapshot, ) -> None: - mocked_value = backend.server.v2.store.model.StoreAgentDetails( + mocked_value = store_model.StoreAgentDetails( store_listing_version_id="test-version-id", slug="test-agent", agent_name="Test Agent", agent_video="video.mp4", + agent_output_demo="demo.mp4", agent_image=["image1.jpg", "image2.jpg"], creator="creator1", creator_avatar="avatar1.jpg", @@ -387,46 +374,46 @@ def test_get_agent_details( runs=100, rating=4.5, versions=["1.0.0", "1.1.0"], + agentGraphVersions=["1", "2"], + agentGraphId="test-graph-id", last_updated=FIXED_NOW, ) - mock_db_call = mocker.patch("backend.server.v2.store.db.get_store_agent_details") + mock_db_call = mocker.patch("backend.api.features.store.db.get_store_agent_details") mock_db_call.return_value = mocked_value response = 
client.get("/agents/creator1/test-agent") assert response.status_code == 200 - data = backend.server.v2.store.model.StoreAgentDetails.model_validate( - response.json() - ) + data = store_model.StoreAgentDetails.model_validate(response.json()) assert data.agent_name == "Test Agent" assert data.creator == "creator1" snapshot.snapshot_dir = "snapshots" snapshot.assert_match(json.dumps(response.json(), indent=2), "agt_details") - mock_db_call.assert_called_once_with(username="creator1", agent_name="test-agent") + mock_db_call.assert_called_once_with( + username="creator1", agent_name="test-agent", include_changelog=False + ) def test_get_creators_defaults( mocker: pytest_mock.MockFixture, snapshot: Snapshot, ) -> None: - mocked_value = backend.server.v2.store.model.CreatorsResponse( + mocked_value = store_model.CreatorsResponse( creators=[], - pagination=backend.server.v2.store.model.Pagination( + pagination=store_model.Pagination( current_page=0, total_items=0, total_pages=0, page_size=10, ), ) - mock_db_call = mocker.patch("backend.server.v2.store.db.get_store_creators") + mock_db_call = mocker.patch("backend.api.features.store.db.get_store_creators") mock_db_call.return_value = mocked_value response = client.get("/creators") assert response.status_code == 200 - data = backend.server.v2.store.model.CreatorsResponse.model_validate( - response.json() - ) + data = store_model.CreatorsResponse.model_validate(response.json()) assert data.pagination.total_pages == 0 assert data.creators == [] snapshot.snapshot_dir = "snapshots" @@ -440,9 +427,9 @@ def test_get_creators_pagination( mocker: pytest_mock.MockFixture, snapshot: Snapshot, ) -> None: - mocked_value = backend.server.v2.store.model.CreatorsResponse( + mocked_value = store_model.CreatorsResponse( creators=[ - backend.server.v2.store.model.Creator( + store_model.Creator( name=f"Creator {i}", username=f"creator{i}", description=f"Creator {i} description", @@ -454,22 +441,20 @@ def test_get_creators_pagination( ) for i in range(5) ], - pagination=backend.server.v2.store.model.Pagination( + pagination=store_model.Pagination( current_page=2, total_items=15, total_pages=3, page_size=5, ), ) - mock_db_call = mocker.patch("backend.server.v2.store.db.get_store_creators") + mock_db_call = mocker.patch("backend.api.features.store.db.get_store_creators") mock_db_call.return_value = mocked_value response = client.get("/creators?page=2&page_size=5") assert response.status_code == 200 - data = backend.server.v2.store.model.CreatorsResponse.model_validate( - response.json() - ) + data = store_model.CreatorsResponse.model_validate(response.json()) assert len(data.creators) == 5 assert data.pagination.current_page == 2 assert data.pagination.page_size == 5 @@ -494,7 +479,7 @@ def test_get_creators_malformed_request(mocker: pytest_mock.MockFixture): assert response.status_code == 422 # Verify no DB calls were made - mock_db_call = mocker.patch("backend.server.v2.store.db.get_store_creators") + mock_db_call = mocker.patch("backend.api.features.store.db.get_store_creators") mock_db_call.assert_not_called() @@ -502,7 +487,7 @@ def test_get_creator_details( mocker: pytest_mock.MockFixture, snapshot: Snapshot, ) -> None: - mocked_value = backend.server.v2.store.model.CreatorDetails( + mocked_value = store_model.CreatorDetails( name="Test User", username="creator1", description="Test creator description", @@ -512,13 +497,15 @@ def test_get_creator_details( agent_runs=1000, top_categories=["category1", "category2"], ) - mock_db_call = 
mocker.patch("backend.server.v2.store.db.get_store_creator_details") + mock_db_call = mocker.patch( + "backend.api.features.store.db.get_store_creator_details" + ) mock_db_call.return_value = mocked_value response = client.get("/creator/creator1") assert response.status_code == 200 - data = backend.server.v2.store.model.CreatorDetails.model_validate(response.json()) + data = store_model.CreatorDetails.model_validate(response.json()) assert data.username == "creator1" assert data.name == "Test User" snapshot.snapshot_dir = "snapshots" @@ -531,9 +518,9 @@ def test_get_submissions_success( snapshot: Snapshot, test_user_id: str, ) -> None: - mocked_value = backend.server.v2.store.model.StoreSubmissionsResponse( + mocked_value = store_model.StoreSubmissionsResponse( submissions=[ - backend.server.v2.store.model.StoreSubmission( + store_model.StoreSubmission( name="Test Agent", description="Test agent description", image_urls=["test.jpg"], @@ -549,22 +536,20 @@ def test_get_submissions_success( categories=["test-category"], ) ], - pagination=backend.server.v2.store.model.Pagination( + pagination=store_model.Pagination( current_page=1, total_items=1, total_pages=1, page_size=20, ), ) - mock_db_call = mocker.patch("backend.server.v2.store.db.get_store_submissions") + mock_db_call = mocker.patch("backend.api.features.store.db.get_store_submissions") mock_db_call.return_value = mocked_value response = client.get("/submissions") assert response.status_code == 200 - data = backend.server.v2.store.model.StoreSubmissionsResponse.model_validate( - response.json() - ) + data = store_model.StoreSubmissionsResponse.model_validate(response.json()) assert len(data.submissions) == 1 assert data.submissions[0].name == "Test Agent" assert data.pagination.current_page == 1 @@ -578,24 +563,22 @@ def test_get_submissions_pagination( snapshot: Snapshot, test_user_id: str, ) -> None: - mocked_value = backend.server.v2.store.model.StoreSubmissionsResponse( + mocked_value = store_model.StoreSubmissionsResponse( submissions=[], - pagination=backend.server.v2.store.model.Pagination( + pagination=store_model.Pagination( current_page=2, total_items=10, total_pages=2, page_size=5, ), ) - mock_db_call = mocker.patch("backend.server.v2.store.db.get_store_submissions") + mock_db_call = mocker.patch("backend.api.features.store.db.get_store_submissions") mock_db_call.return_value = mocked_value response = client.get("/submissions?page=2&page_size=5") assert response.status_code == 200 - data = backend.server.v2.store.model.StoreSubmissionsResponse.model_validate( - response.json() - ) + data = store_model.StoreSubmissionsResponse.model_validate(response.json()) assert data.pagination.current_page == 2 assert data.pagination.page_size == 5 snapshot.snapshot_dir = "snapshots" @@ -617,5 +600,5 @@ def test_get_submissions_malformed_request(mocker: pytest_mock.MockFixture): assert response.status_code == 422 # Verify no DB calls were made - mock_db_call = mocker.patch("backend.server.v2.store.db.get_store_submissions") + mock_db_call = mocker.patch("backend.api.features.store.db.get_store_submissions") mock_db_call.assert_not_called() diff --git a/autogpt_platform/backend/backend/server/v2/store/test_cache_delete.py b/autogpt_platform/backend/backend/api/features/store/test_cache_delete.py similarity index 96% rename from autogpt_platform/backend/backend/server/v2/store/test_cache_delete.py rename to autogpt_platform/backend/backend/api/features/store/test_cache_delete.py index 4111de0ee8..dd9be1f4ab 100644 --- 
a/autogpt_platform/backend/backend/server/v2/store/test_cache_delete.py +++ b/autogpt_platform/backend/backend/api/features/store/test_cache_delete.py @@ -8,10 +8,11 @@ from unittest.mock import AsyncMock, patch import pytest -from backend.server.v2.store import cache as store_cache -from backend.server.v2.store.model import StoreAgent, StoreAgentsResponse from backend.util.models import Pagination +from . import cache as store_cache +from .model import StoreAgent, StoreAgentsResponse + class TestCacheDeletion: """Test cache deletion functionality for store routes.""" @@ -43,7 +44,7 @@ class TestCacheDeletion: ) with patch( - "backend.server.v2.store.db.get_store_agents", + "backend.api.features.store.db.get_store_agents", new_callable=AsyncMock, return_value=mock_response, ) as mock_db: @@ -152,7 +153,7 @@ class TestCacheDeletion: ) with patch( - "backend.server.v2.store.db.get_store_agents", + "backend.api.features.store.db.get_store_agents", new_callable=AsyncMock, return_value=mock_response, ): @@ -203,7 +204,7 @@ class TestCacheDeletion: ) with patch( - "backend.server.v2.store.db.get_store_agents", + "backend.api.features.store.db.get_store_agents", new_callable=AsyncMock, return_value=mock_response, ) as mock_db: diff --git a/autogpt_platform/backend/backend/server/routers/v1.py b/autogpt_platform/backend/backend/api/features/v1.py similarity index 84% rename from autogpt_platform/backend/backend/server/routers/v1.py rename to autogpt_platform/backend/backend/api/features/v1.py index 3a5799919f..9b05b4755f 100644 --- a/autogpt_platform/backend/backend/server/routers/v1.py +++ b/autogpt_platform/backend/backend/api/features/v1.py @@ -5,7 +5,7 @@ import time import uuid from collections import defaultdict from datetime import datetime, timezone -from typing import Annotated, Any, Sequence +from typing import Annotated, Any, Sequence, get_args import pydantic import stripe @@ -28,12 +28,21 @@ from pydantic import BaseModel from starlette.status import HTTP_204_NO_CONTENT, HTTP_404_NOT_FOUND from typing_extensions import Optional, TypedDict -import backend.server.integrations.router -import backend.server.routers.analytics -import backend.server.v2.library.db as library_db -from backend.data import api_key as api_key_db +from backend.api.model import ( + CreateAPIKeyRequest, + CreateAPIKeyResponse, + CreateGraph, + GraphExecutionSource, + RequestTopUp, + SetGraphActiveVersion, + TimezoneResponse, + UpdatePermissionsRequest, + UpdateTimezoneRequest, + UploadFileResponse, +) from backend.data import execution as execution_db from backend.data import graph as graph_db +from backend.data.auth import api_key as api_key_db from backend.data.block import BlockInput, CompletedBlockOutput, get_block, get_blocks from backend.data.credit import ( AutoTopUpConfig, @@ -44,14 +53,20 @@ from backend.data.credit import ( get_user_credit_model, set_auto_top_up, ) -from backend.data.execution import UserContext -from backend.data.model import CredentialsMetaInput +from backend.data.graph import GraphSettings +from backend.data.model import CredentialsMetaInput, UserOnboarding from backend.data.notifications import NotificationPreference, NotificationPreferenceDTO from backend.data.onboarding import ( + FrontendOnboardingStep, + OnboardingStep, UserOnboardingUpdate, + complete_onboarding_step, + complete_re_run_agent, get_recommended_agents, get_user_onboarding, + increment_runs, onboarding_enabled, + reset_user_onboarding, update_user_onboarding, ) from backend.data.user import ( @@ -73,21 +88,11 @@ from 
backend.monitoring.instrumentation import ( record_graph_execution, record_graph_operation, ) -from backend.server.model import ( - CreateAPIKeyRequest, - CreateAPIKeyResponse, - CreateGraph, - RequestTopUp, - SetGraphActiveVersion, - TimezoneResponse, - UpdatePermissionsRequest, - UpdateTimezoneRequest, - UploadFileResponse, -) from backend.util.cache import cached from backend.util.clients import get_scheduler_client from backend.util.cloud_storage import get_cloud_storage_handler from backend.util.exceptions import GraphValidationError, NotFoundError +from backend.util.feature_flag import Flag, is_feature_enabled from backend.util.json import dumps from backend.util.settings import Settings from backend.util.timezone_utils import ( @@ -96,6 +101,10 @@ from backend.util.timezone_utils import ( ) from backend.util.virus_scanner import scan_content_safe +from .library import db as library_db +from .library import model as library_model +from .store.model import StoreAgentDetails + def _create_file_size_error(size_bytes: int, max_size_mb: int) -> HTTPException: """Create standardized file size error response.""" @@ -108,22 +117,10 @@ def _create_file_size_error(size_bytes: int, max_size_mb: int) -> HTTPException: settings = Settings() logger = logging.getLogger(__name__) + # Define the API routes v1_router = APIRouter() -v1_router.include_router( - backend.server.integrations.router.router, - prefix="/integrations", - tags=["integrations"], -) - -v1_router.include_router( - backend.server.routers.analytics.router, - prefix="/analytics", - tags=["analytics"], - dependencies=[Security(requires_user)], -) - ######################################################## ##################### Auth ############################# @@ -217,9 +214,10 @@ async def update_preferences( @v1_router.get( "/onboarding", - summary="Get onboarding status", + summary="Onboarding state", tags=["onboarding"], dependencies=[Security(requires_user)], + response_model=UserOnboarding, ) async def get_onboarding(user_id: Annotated[str, Security(get_user_id)]): return await get_user_onboarding(user_id) @@ -227,9 +225,10 @@ async def get_onboarding(user_id: Annotated[str, Security(get_user_id)]): @v1_router.patch( "/onboarding", - summary="Update onboarding progress", + summary="Update onboarding state", tags=["onboarding"], dependencies=[Security(requires_user)], + response_model=UserOnboarding, ) async def update_onboarding( user_id: Annotated[str, Security(get_user_id)], data: UserOnboardingUpdate @@ -237,28 +236,53 @@ async def update_onboarding( return await update_user_onboarding(user_id, data) +@v1_router.post( + "/onboarding/step", + summary="Complete onboarding step", + tags=["onboarding"], + dependencies=[Security(requires_user)], +) +async def onboarding_complete_step( + user_id: Annotated[str, Security(get_user_id)], step: FrontendOnboardingStep +): + if step not in get_args(FrontendOnboardingStep): + raise HTTPException(status_code=400, detail="Invalid onboarding step") + return await complete_onboarding_step(user_id, step) + + @v1_router.get( "/onboarding/agents", - summary="Get recommended agents", + summary="Recommended onboarding agents", tags=["onboarding"], dependencies=[Security(requires_user)], ) async def get_onboarding_agents( user_id: Annotated[str, Security(get_user_id)], -): +) -> list[StoreAgentDetails]: return await get_recommended_agents(user_id) @v1_router.get( "/onboarding/enabled", - summary="Check onboarding enabled", + summary="Is onboarding enabled", tags=["onboarding", "public"], 
dependencies=[Security(requires_user)], ) -async def is_onboarding_enabled(): +async def is_onboarding_enabled() -> bool: return await onboarding_enabled() +@v1_router.post( + "/onboarding/reset", + summary="Reset onboarding progress", + tags=["onboarding"], + dependencies=[Security(requires_user)], + response_model=UserOnboarding, +) +async def reset_onboarding(user_id: Annotated[str, Security(get_user_id)]): + return await reset_user_onboarding(user_id) + + ######################################################## ##################### Blocks ########################### ######################################################## @@ -342,19 +366,15 @@ async def execute_graph_block( if not obj: raise HTTPException(status_code=404, detail=f"Block #{block_id} not found.") - # Get user context for block execution user = await get_user_by_id(user_id) if not user: raise HTTPException(status_code=404, detail="User not found.") - user_context = UserContext(timezone=user.timezone) - start_time = time.time() try: output = defaultdict(list) async for name, data in obj.execute( data, - user_context=user_context, user_id=user_id, # Note: graph_exec_id and graph_id are not available for direct block execution ): @@ -746,7 +766,12 @@ async def create_new_graph( # as the graph already valid and no sub-graphs are returned back. await graph_db.create_graph(graph, user_id=user_id) await library_db.create_library_agent(graph, user_id=user_id) - return await on_graph_activate(graph, user_id=user_id) + activated_graph = await on_graph_activate(graph, user_id=user_id) + + if create_graph.source == "builder": + await complete_onboarding_step(user_id, OnboardingStep.BUILDER_SAVE_AGENT) + + return activated_graph @v1_router.delete( @@ -758,7 +783,9 @@ async def create_new_graph( async def delete_graph( graph_id: str, user_id: Annotated[str, Security(get_user_id)] ) -> DeleteGraphResponse: - if active_version := await graph_db.get_graph(graph_id, user_id=user_id): + if active_version := await graph_db.get_graph( + graph_id=graph_id, version=None, user_id=user_id + ): await on_graph_deactivate(active_version, user_id=user_id) return {"version_counts": await graph_db.delete_graph(graph_id, user_id=user_id)} @@ -795,9 +822,7 @@ async def update_graph( if new_graph_version.is_active: # Keep the library agent up to date with the new active version - await library_db.update_agent_version_in_library( - user_id, graph.id, graph.version - ) + await _update_library_agent_version_and_settings(user_id, new_graph_version) # Handle activation of the new graph first to ensure continuity new_graph_version = await on_graph_activate(new_graph_version, user_id=user_id) @@ -838,7 +863,11 @@ async def set_graph_active_version( if not new_active_graph: raise HTTPException(404, f"Graph #{graph_id} v{new_active_version} not found") - current_active_graph = await graph_db.get_graph(graph_id, user_id=user_id) + current_active_graph = await graph_db.get_graph( + graph_id=graph_id, + version=None, + user_id=user_id, + ) # Handle activation of the new graph first to ensure continuity await on_graph_activate(new_active_graph, user_id=user_id) @@ -850,15 +879,65 @@ async def set_graph_active_version( ) # Keep the library agent up to date with the new active version - await library_db.update_agent_version_in_library( - user_id, new_active_graph.id, new_active_graph.version - ) + await _update_library_agent_version_and_settings(user_id, new_active_graph) if current_active_graph and current_active_graph.version != new_active_version: # Handle 
deactivation of the previously active version await on_graph_deactivate(current_active_graph, user_id=user_id) +async def _update_library_agent_version_and_settings( + user_id: str, agent_graph: graph_db.GraphModel +) -> library_model.LibraryAgent: + # Keep the library agent up to date with the new active version + library = await library_db.update_agent_version_in_library( + user_id, agent_graph.id, agent_graph.version + ) + # If the graph has HITL node, initialize the setting if it's not already set. + if ( + agent_graph.has_human_in_the_loop + and library.settings.human_in_the_loop_safe_mode is None + ): + await library_db.update_library_agent_settings( + user_id=user_id, + agent_id=library.id, + settings=library.settings.model_copy( + update={"human_in_the_loop_safe_mode": True} + ), + ) + return library + + +@v1_router.patch( + path="/graphs/{graph_id}/settings", + summary="Update graph settings", + tags=["graphs"], + dependencies=[Security(requires_user)], +) +async def update_graph_settings( + graph_id: str, + settings: GraphSettings, + user_id: Annotated[str, Security(get_user_id)], +) -> GraphSettings: + """Update graph settings for the user's library agent.""" + # Get the library agent for this graph + library_agent = await library_db.get_library_agent_by_graph_id( + graph_id=graph_id, user_id=user_id + ) + if not library_agent: + raise HTTPException(404, f"Graph #{graph_id} not found in user's library") + + # Update the library agent settings + updated_agent = await library_db.update_library_agent_settings( + user_id=user_id, + agent_id=library_agent.id, + settings=settings, + ) + + # Return the updated settings + return GraphSettings.model_validate(updated_agent.settings) + + @v1_router.post( path="/graphs/{graph_id}/execute/{graph_version}", summary="Execute graph agent", @@ -872,6 +951,7 @@ async def execute_graph( credentials_inputs: Annotated[ dict[str, CredentialsMetaInput], Body(..., embed=True, default_factory=dict) ], + source: Annotated[GraphExecutionSource | None, Body(embed=True)] = None, graph_version: Optional[int] = None, preset_id: Optional[str] = None, ) -> execution_db.GraphExecutionMeta: @@ -895,6 +975,14 @@ async def execute_graph( # Record successful graph execution record_graph_execution(graph_id=graph_id, status="success", user_id=user_id) record_graph_operation(operation="execute", status="success") + await increment_runs(user_id) + await complete_re_run_agent(user_id, graph_id) + if source == "library": + await complete_onboarding_step( + user_id, OnboardingStep.MARKETPLACE_RUN_AGENT + ) + elif source == "builder": + await complete_onboarding_step(user_id, OnboardingStep.BUILDER_RUN_AGENT) return result except GraphValidationError as e: # Record failed graph execution @@ -975,7 +1063,12 @@ async def list_graphs_executions( page=1, page_size=250, ) - return paginated_result.executions + + # Apply feature flags to filter out disabled features + filtered_executions = await hide_activity_summaries_if_disabled( + paginated_result.executions, user_id + ) + return filtered_executions @v1_router.get( @@ -992,13 +1085,47 @@ async def list_graph_executions( 25, ge=1, le=100, description="Number of executions per page" ), ) -> execution_db.GraphExecutionsPaginated: - return await execution_db.get_graph_executions_paginated( + paginated_result = await execution_db.get_graph_executions_paginated( graph_id=graph_id, user_id=user_id, page=page, page_size=page_size, ) + # Apply feature flags to filter out disabled features + filtered_executions = await 
hide_activity_summaries_if_disabled( + paginated_result.executions, user_id + ) + onboarding = await get_user_onboarding(user_id) + if ( + onboarding.onboardingAgentExecutionId + and onboarding.onboardingAgentExecutionId + in [exec.id for exec in filtered_executions] + and OnboardingStep.GET_RESULTS not in onboarding.completedSteps + ): + await complete_onboarding_step(user_id, OnboardingStep.GET_RESULTS) + + return execution_db.GraphExecutionsPaginated( + executions=filtered_executions, pagination=paginated_result.pagination + ) + + +async def hide_activity_summaries_if_disabled( + executions: list[execution_db.GraphExecutionMeta], user_id: str +) -> list[execution_db.GraphExecutionMeta]: + """Hide activity summaries and scores if AI_ACTIVITY_STATUS feature is disabled.""" + if await is_feature_enabled(Flag.AI_ACTIVITY_STATUS, user_id): + return executions # Return as-is if feature is enabled + + # Filter out activity features if disabled + filtered_executions = [] + for execution in executions: + if execution.stats: + filtered_stats = execution.stats.without_activity_features() + execution = execution.model_copy(update={"stats": filtered_stats}) + filtered_executions.append(execution) + return filtered_executions + @v1_router.get( path="/graphs/{graph_id}/executions/{graph_exec_id}", @@ -1011,25 +1138,52 @@ async def get_graph_execution( graph_exec_id: str, user_id: Annotated[str, Security(get_user_id)], ) -> execution_db.GraphExecution | execution_db.GraphExecutionWithNodes: - graph = await graph_db.get_graph(graph_id=graph_id, user_id=user_id) - if not graph: - raise HTTPException( - status_code=HTTP_404_NOT_FOUND, detail=f"Graph #{graph_id} not found" - ) - result = await execution_db.get_graph_execution( user_id=user_id, execution_id=graph_exec_id, - include_node_executions=graph.user_id == user_id, + include_node_executions=True, ) if not result or result.graph_id != graph_id: raise HTTPException( status_code=404, detail=f"Graph execution #{graph_exec_id} not found." 
) + if not await graph_db.get_graph( + graph_id=result.graph_id, + version=result.graph_version, + user_id=user_id, + ): + raise HTTPException( + status_code=HTTP_404_NOT_FOUND, detail=f"Graph #{graph_id} not found" + ) + + # Apply feature flags to filter out disabled features + result = await hide_activity_summary_if_disabled(result, user_id) + onboarding = await get_user_onboarding(user_id) + if ( + onboarding.onboardingAgentExecutionId == graph_exec_id + and OnboardingStep.GET_RESULTS not in onboarding.completedSteps + ): + await complete_onboarding_step(user_id, OnboardingStep.GET_RESULTS) + return result +async def hide_activity_summary_if_disabled( + execution: execution_db.GraphExecution | execution_db.GraphExecutionWithNodes, + user_id: str, +) -> execution_db.GraphExecution | execution_db.GraphExecutionWithNodes: + """Hide activity summary and score for a single execution if AI_ACTIVITY_STATUS feature is disabled.""" + if await is_feature_enabled(Flag.AI_ACTIVITY_STATUS, user_id): + return execution # Return as-is if feature is enabled + + # Filter out activity features if disabled + if execution.stats: + filtered_stats = execution.stats.without_activity_features() + return execution.model_copy(update={"stats": filtered_stats}) + return execution + + @v1_router.delete( path="/executions/{graph_exec_id}", summary="Delete graph execution", @@ -1090,7 +1244,7 @@ async def enable_execution_sharing( ) # Return the share URL - frontend_url = Settings().config.frontend_base_url or "http://localhost:3000" + frontend_url = settings.config.frontend_base_url or "http://localhost:3000" share_url = f"{frontend_url}/share/{share_token}" return ShareResponse(share_url=share_url, share_token=share_token) @@ -1128,7 +1282,7 @@ async def disable_execution_sharing( async def get_shared_execution( share_token: Annotated[ str, - Path(regex=r"^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$"), + Path(pattern=r"^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$"), ], ) -> execution_db.SharedExecutionResponse: """Get a shared graph execution by share token (no auth required).""" @@ -1202,6 +1356,8 @@ async def create_graph_execution_schedule( result.next_run_time, user_timezone ) + await complete_onboarding_step(user_id, OnboardingStep.SCHEDULE_AGENT) + return result diff --git a/autogpt_platform/backend/backend/server/routers/v1_test.py b/autogpt_platform/backend/backend/api/features/v1_test.py similarity index 91% rename from autogpt_platform/backend/backend/server/routers/v1_test.py rename to autogpt_platform/backend/backend/api/features/v1_test.py index 69e1b5f2ae..a186d38810 100644 --- a/autogpt_platform/backend/backend/server/routers/v1_test.py +++ b/autogpt_platform/backend/backend/api/features/v1_test.py @@ -11,13 +11,13 @@ import starlette.datastructures from fastapi import HTTPException, UploadFile from pytest_snapshot.plugin import Snapshot -import backend.server.routers.v1 as v1_routes from backend.data.credit import AutoTopUpConfig from backend.data.graph import GraphModel -from backend.server.routers.v1 import upload_file + +from .v1 import upload_file, v1_router app = fastapi.FastAPI() -app.include_router(v1_routes.v1_router) +app.include_router(v1_router) client = fastapi.testclient.TestClient(app) @@ -50,7 +50,7 @@ def test_get_or_create_user_route( } mocker.patch( - "backend.server.routers.v1.get_or_create_user", + "backend.api.features.v1.get_or_create_user", return_value=mock_user, ) @@ -71,7 +71,7 @@ def test_update_user_email_route( ) -> None: """Test 
update user email endpoint""" mocker.patch( - "backend.server.routers.v1.update_user_email", + "backend.api.features.v1.update_user_email", return_value=None, ) @@ -107,7 +107,7 @@ def test_get_graph_blocks( # Mock get_blocks mocker.patch( - "backend.server.routers.v1.get_blocks", + "backend.api.features.v1.get_blocks", return_value={"test-block": lambda: mock_block}, ) @@ -146,7 +146,7 @@ def test_execute_graph_block( mock_block.execute = mock_execute mocker.patch( - "backend.server.routers.v1.get_block", + "backend.api.features.v1.get_block", return_value=mock_block, ) @@ -155,7 +155,7 @@ def test_execute_graph_block( mock_user.timezone = "UTC" mocker.patch( - "backend.server.routers.v1.get_user_by_id", + "backend.api.features.v1.get_user_by_id", return_value=mock_user, ) @@ -181,7 +181,7 @@ def test_execute_graph_block_not_found( ) -> None: """Test execute block with non-existent block""" mocker.patch( - "backend.server.routers.v1.get_block", + "backend.api.features.v1.get_block", return_value=None, ) @@ -200,7 +200,7 @@ def test_get_user_credits( mock_credit_model = Mock() mock_credit_model.get_credits = AsyncMock(return_value=1000) mocker.patch( - "backend.server.routers.v1.get_user_credit_model", + "backend.api.features.v1.get_user_credit_model", return_value=mock_credit_model, ) @@ -227,7 +227,7 @@ def test_request_top_up( return_value="https://checkout.example.com/session123" ) mocker.patch( - "backend.server.routers.v1.get_user_credit_model", + "backend.api.features.v1.get_user_credit_model", return_value=mock_credit_model, ) @@ -254,7 +254,7 @@ def test_get_auto_top_up( mock_config = AutoTopUpConfig(threshold=100, amount=500) mocker.patch( - "backend.server.routers.v1.get_auto_top_up", + "backend.api.features.v1.get_auto_top_up", return_value=mock_config, ) @@ -279,7 +279,7 @@ def test_configure_auto_top_up( """Test configure auto top-up endpoint - this test would have caught the enum casting bug""" # Mock the set_auto_top_up function to avoid database operations mocker.patch( - "backend.server.routers.v1.set_auto_top_up", + "backend.api.features.v1.set_auto_top_up", return_value=None, ) @@ -289,7 +289,7 @@ def test_configure_auto_top_up( mock_credit_model.top_up_credits.return_value = None mocker.patch( - "backend.server.routers.v1.get_user_credit_model", + "backend.api.features.v1.get_user_credit_model", return_value=mock_credit_model, ) @@ -311,7 +311,7 @@ def test_configure_auto_top_up_validation_errors( ) -> None: """Test configure auto top-up endpoint validation""" # Mock set_auto_top_up to avoid database operations for successful case - mocker.patch("backend.server.routers.v1.set_auto_top_up") + mocker.patch("backend.api.features.v1.set_auto_top_up") # Mock credit model to avoid Stripe API calls for the successful case mock_credit_model = mocker.AsyncMock() @@ -319,7 +319,7 @@ def test_configure_auto_top_up_validation_errors( mock_credit_model.top_up_credits.return_value = None mocker.patch( - "backend.server.routers.v1.get_user_credit_model", + "backend.api.features.v1.get_user_credit_model", return_value=mock_credit_model, ) @@ -393,7 +393,7 @@ def test_get_graph( ) mocker.patch( - "backend.server.routers.v1.graph_db.get_graph", + "backend.api.features.v1.graph_db.get_graph", return_value=mock_graph, ) @@ -415,7 +415,7 @@ def test_get_graph_not_found( ) -> None: """Test get graph with non-existent ID""" mocker.patch( - "backend.server.routers.v1.graph_db.get_graph", + "backend.api.features.v1.graph_db.get_graph", return_value=None, ) @@ -443,15 +443,15 @@ def 
test_delete_graph( ) mocker.patch( - "backend.server.routers.v1.graph_db.get_graph", + "backend.api.features.v1.graph_db.get_graph", return_value=mock_graph, ) mocker.patch( - "backend.server.routers.v1.on_graph_deactivate", + "backend.api.features.v1.on_graph_deactivate", return_value=None, ) mocker.patch( - "backend.server.routers.v1.graph_db.delete_graph", + "backend.api.features.v1.graph_db.delete_graph", return_value=3, # Number of versions deleted ) @@ -498,8 +498,8 @@ async def test_upload_file_success(test_user_id: str): ) # Mock dependencies - with patch("backend.server.routers.v1.scan_content_safe") as mock_scan, patch( - "backend.server.routers.v1.get_cloud_storage_handler" + with patch("backend.api.features.v1.scan_content_safe") as mock_scan, patch( + "backend.api.features.v1.get_cloud_storage_handler" ) as mock_handler_getter: mock_scan.return_value = None @@ -550,8 +550,8 @@ async def test_upload_file_no_filename(test_user_id: str): ), ) - with patch("backend.server.routers.v1.scan_content_safe") as mock_scan, patch( - "backend.server.routers.v1.get_cloud_storage_handler" + with patch("backend.api.features.v1.scan_content_safe") as mock_scan, patch( + "backend.api.features.v1.get_cloud_storage_handler" ) as mock_handler_getter: mock_scan.return_value = None @@ -610,7 +610,7 @@ async def test_upload_file_virus_scan_failure(test_user_id: str): headers=starlette.datastructures.Headers({"content-type": "text/plain"}), ) - with patch("backend.server.routers.v1.scan_content_safe") as mock_scan: + with patch("backend.api.features.v1.scan_content_safe") as mock_scan: # Mock virus scan to raise exception mock_scan.side_effect = RuntimeError("Virus detected!") @@ -631,8 +631,8 @@ async def test_upload_file_cloud_storage_failure(test_user_id: str): headers=starlette.datastructures.Headers({"content-type": "text/plain"}), ) - with patch("backend.server.routers.v1.scan_content_safe") as mock_scan, patch( - "backend.server.routers.v1.get_cloud_storage_handler" + with patch("backend.api.features.v1.scan_content_safe") as mock_scan, patch( + "backend.api.features.v1.get_cloud_storage_handler" ) as mock_handler_getter: mock_scan.return_value = None @@ -678,8 +678,8 @@ async def test_upload_file_gcs_not_configured_fallback(test_user_id: str): headers=starlette.datastructures.Headers({"content-type": "text/plain"}), ) - with patch("backend.server.routers.v1.scan_content_safe") as mock_scan, patch( - "backend.server.routers.v1.get_cloud_storage_handler" + with patch("backend.api.features.v1.scan_content_safe") as mock_scan, patch( + "backend.api.features.v1.get_cloud_storage_handler" ) as mock_handler_getter: mock_scan.return_value = None diff --git a/autogpt_platform/backend/backend/server/middleware/security.py b/autogpt_platform/backend/backend/api/middleware/security.py similarity index 100% rename from autogpt_platform/backend/backend/server/middleware/security.py rename to autogpt_platform/backend/backend/api/middleware/security.py diff --git a/autogpt_platform/backend/backend/server/middleware/security_test.py b/autogpt_platform/backend/backend/api/middleware/security_test.py similarity index 98% rename from autogpt_platform/backend/backend/server/middleware/security_test.py rename to autogpt_platform/backend/backend/api/middleware/security_test.py index 462e5b27ed..57137afc9a 100644 --- a/autogpt_platform/backend/backend/server/middleware/security_test.py +++ b/autogpt_platform/backend/backend/api/middleware/security_test.py @@ -3,7 +3,7 @@ from fastapi import FastAPI from 
fastapi.testclient import TestClient from starlette.applications import Starlette -from backend.server.middleware.security import SecurityHeadersMiddleware +from backend.api.middleware.security import SecurityHeadersMiddleware @pytest.fixture diff --git a/autogpt_platform/backend/backend/server/model.py b/autogpt_platform/backend/backend/api/model.py similarity index 75% rename from autogpt_platform/backend/backend/server/model.py rename to autogpt_platform/backend/backend/api/model.py index bbb904a794..5e13e20450 100644 --- a/autogpt_platform/backend/backend/server/model.py +++ b/autogpt_platform/backend/backend/api/model.py @@ -1,9 +1,10 @@ import enum -from typing import Any, Optional +from typing import Any, Literal, Optional import pydantic +from prisma.enums import OnboardingStep -from backend.data.api_key import APIKeyInfo, APIKeyPermission +from backend.data.auth.api_key import APIKeyInfo, APIKeyPermission from backend.data.graph import Graph from backend.util.timezone_name import TimeZoneName @@ -14,6 +15,7 @@ class WSMethod(enum.Enum): UNSUBSCRIBE = "unsubscribe" GRAPH_EXECUTION_EVENT = "graph_execution_event" NODE_EXECUTION_EVENT = "node_execution_event" + NOTIFICATION = "notification" ERROR = "error" HEARTBEAT = "heartbeat" @@ -34,8 +36,13 @@ class WSSubscribeGraphExecutionsRequest(pydantic.BaseModel): graph_id: str +GraphCreationSource = Literal["builder", "upload"] +GraphExecutionSource = Literal["builder", "library", "onboarding"] + + class CreateGraph(pydantic.BaseModel): graph: Graph + source: GraphCreationSource | None = None class CreateAPIKeyRequest(pydantic.BaseModel): @@ -76,3 +83,14 @@ class TimezoneResponse(pydantic.BaseModel): class UpdateTimezoneRequest(pydantic.BaseModel): timezone: TimeZoneName + + +class NotificationPayload(pydantic.BaseModel): + type: str + event: str + + model_config = pydantic.ConfigDict(extra="allow") + + +class OnboardingNotificationPayload(NotificationPayload): + step: OnboardingStep | None diff --git a/autogpt_platform/backend/backend/server/rest_api.py b/autogpt_platform/backend/backend/api/rest_api.py similarity index 71% rename from autogpt_platform/backend/backend/server/rest_api.py rename to autogpt_platform/backend/backend/api/rest_api.py index 7c3d97b748..147f62e781 100644 --- a/autogpt_platform/backend/backend/server/rest_api.py +++ b/autogpt_platform/backend/backend/api/rest_api.py @@ -16,38 +16,50 @@ from fastapi.middleware.gzip import GZipMiddleware from fastapi.routing import APIRoute from prisma.errors import PrismaError +import backend.api.features.admin.credit_admin_routes +import backend.api.features.admin.execution_analytics_routes +import backend.api.features.admin.store_admin_routes +import backend.api.features.builder +import backend.api.features.builder.routes +import backend.api.features.chat.routes as chat_routes +import backend.api.features.executions.review.routes +import backend.api.features.library.db +import backend.api.features.library.model +import backend.api.features.library.routes +import backend.api.features.oauth +import backend.api.features.otto.routes +import backend.api.features.postmark.postmark +import backend.api.features.store.model +import backend.api.features.store.routes +import backend.api.features.v1 import backend.data.block import backend.data.db import backend.data.graph import backend.data.user import backend.integrations.webhooks.utils -import backend.server.routers.postmark.postmark -import backend.server.routers.v1 -import backend.server.v2.admin.credit_admin_routes -import 
backend.server.v2.admin.store_admin_routes -import backend.server.v2.builder -import backend.server.v2.builder.routes -import backend.server.v2.library.db -import backend.server.v2.library.model -import backend.server.v2.library.routes -import backend.server.v2.otto.routes -import backend.server.v2.store.model -import backend.server.v2.store.routes -import backend.server.v2.turnstile.routes import backend.util.service import backend.util.settings from backend.blocks.llm import LlmModel from backend.data.model import Credentials from backend.integrations.providers import ProviderName from backend.monitoring.instrumentation import instrument_fastapi -from backend.server.external.api import external_app -from backend.server.middleware.security import SecurityHeadersMiddleware from backend.util import json from backend.util.cloud_storage import shutdown_cloud_storage_handler -from backend.util.exceptions import NotAuthorizedError, NotFoundError +from backend.util.exceptions import ( + MissingConfigError, + NotAuthorizedError, + NotFoundError, +) from backend.util.feature_flag import initialize_launchdarkly, shutdown_launchdarkly from backend.util.service import UnhealthyServiceError +from .external.fastapi_app import external_api +from .features.analytics import router as analytics_router +from .features.integrations.router import router as integrations_router +from .middleware.security import SecurityHeadersMiddleware +from .utils.cors import build_cors_params +from .utils.openapi import sort_openapi + settings = backend.util.settings.Settings() logger = logging.getLogger(__name__) @@ -168,6 +180,9 @@ app.add_middleware(GZipMiddleware, minimum_size=50_000) # 50KB threshold # Add 401 responses to authenticated endpoints in OpenAPI spec add_auth_responses_to_openapi(app) +# Sort OpenAPI schema to eliminate diff on refactors +sort_openapi(app) + # Add Prometheus instrumentation instrument_fastapi( app, @@ -187,6 +202,7 @@ def handle_internal_http_error(status_code: int = 500, log_error: bool = True): request.method, request.url.path, exc, + exc_info=exc, ) hint = ( @@ -241,45 +257,71 @@ app.add_exception_handler(NotFoundError, handle_internal_http_error(404, False)) app.add_exception_handler(NotAuthorizedError, handle_internal_http_error(403, False)) app.add_exception_handler(RequestValidationError, validation_error_handler) app.add_exception_handler(pydantic.ValidationError, validation_error_handler) +app.add_exception_handler(MissingConfigError, handle_internal_http_error(503)) app.add_exception_handler(ValueError, handle_internal_http_error(400)) app.add_exception_handler(Exception, handle_internal_http_error(500)) -app.include_router(backend.server.routers.v1.v1_router, tags=["v1"], prefix="/api") +app.include_router(backend.api.features.v1.v1_router, tags=["v1"], prefix="/api") app.include_router( - backend.server.v2.store.routes.router, tags=["v2"], prefix="/api/store" + integrations_router, + prefix="/api/integrations", + tags=["v1", "integrations"], ) app.include_router( - backend.server.v2.builder.routes.router, tags=["v2"], prefix="/api/builder" + analytics_router, + prefix="/api/analytics", + tags=["analytics"], ) app.include_router( - backend.server.v2.admin.store_admin_routes.router, + backend.api.features.store.routes.router, tags=["v2"], prefix="/api/store" +) +app.include_router( + backend.api.features.builder.routes.router, tags=["v2"], prefix="/api/builder" +) +app.include_router( + backend.api.features.admin.store_admin_routes.router, tags=["v2", "admin"], prefix="/api/store", 
) app.include_router( - backend.server.v2.admin.credit_admin_routes.router, + backend.api.features.admin.credit_admin_routes.router, tags=["v2", "admin"], prefix="/api/credits", ) app.include_router( - backend.server.v2.library.routes.router, tags=["v2"], prefix="/api/library" + backend.api.features.admin.execution_analytics_routes.router, + tags=["v2", "admin"], + prefix="/api/executions", ) app.include_router( - backend.server.v2.otto.routes.router, tags=["v2", "otto"], prefix="/api/otto" + backend.api.features.executions.review.routes.router, + tags=["v2", "executions", "review"], + prefix="/api/review", ) app.include_router( - backend.server.v2.turnstile.routes.router, - tags=["v2", "turnstile"], - prefix="/api/turnstile", + backend.api.features.library.routes.router, tags=["v2"], prefix="/api/library" +) +app.include_router( + backend.api.features.otto.routes.router, tags=["v2", "otto"], prefix="/api/otto" ) app.include_router( - backend.server.routers.postmark.postmark.router, + backend.api.features.postmark.postmark.router, tags=["v1", "email"], prefix="/api/email", ) +app.include_router( + chat_routes.router, + tags=["v2", "chat"], + prefix="/api/chat", +) +app.include_router( + backend.api.features.oauth.router, + tags=["oauth"], + prefix="/api/oauth", +) -app.mount("/external-api", external_app) +app.mount("/external-api", external_api) @app.get(path="/health", tags=["health"], dependencies=[]) @@ -291,39 +333,39 @@ async def health(): class AgentServer(backend.util.service.AppProcess): def run(self): + cors_params = build_cors_params( + settings.config.backend_cors_allow_origins, + settings.config.app_env, + ) + server_app = starlette.middleware.cors.CORSMiddleware( app=app, - allow_origins=settings.config.backend_cors_allow_origins, + **cors_params, allow_credentials=True, allow_methods=["*"], # Allows all methods allow_headers=["*"], # Allows all headers ) - config = backend.util.settings.Config() - - # Configure uvicorn with performance optimizations from Kludex FastAPI tips - uvicorn_config = { - "app": server_app, - "host": config.agent_api_host, - "port": config.agent_api_port, - "log_config": None, - # Use httptools for HTTP parsing (if available) - "http": "httptools", - # Only use uvloop on Unix-like systems (not supported on Windows) - "loop": "uvloop" if platform.system() != "Windows" else "auto", - } # Only add debug in local environment (not supported in all uvicorn versions) - if config.app_env == backend.util.settings.AppEnvironment.LOCAL: + if settings.config.app_env == backend.util.settings.AppEnvironment.LOCAL: import os # Enable asyncio debug mode via environment variable os.environ["PYTHONASYNCIODEBUG"] = "1" - uvicorn.run(**uvicorn_config) - - def cleanup(self): - super().cleanup() - logger.info(f"[{self.service_name}] ⏳ Shutting down Agent Server...") + # Configure uvicorn with performance optimizations from Kludex FastAPI tips + uvicorn.run( + app=server_app, + host=settings.config.agent_api_host, + port=settings.config.agent_api_port, + log_config=None, + # Use httptools for HTTP parsing (if available) + http="httptools", + # Only use uvloop on Unix-like systems (not supported on Windows) + loop="uvloop" if platform.system() != "Windows" else "auto", + # Disable WebSockets since this service doesn't have any WebSocket endpoints + ws="none", + ) @staticmethod async def test_execute_graph( @@ -332,7 +374,7 @@ class AgentServer(backend.util.service.AppProcess): graph_version: Optional[int] = None, node_input: Optional[dict[str, Any]] = None, ): - return 
await backend.server.routers.v1.execute_graph( + return await backend.api.features.v1.execute_graph( user_id=user_id, graph_id=graph_id, graph_version=graph_version, @@ -347,16 +389,16 @@ class AgentServer(backend.util.service.AppProcess): user_id: str, for_export: bool = False, ): - return await backend.server.routers.v1.get_graph( + return await backend.api.features.v1.get_graph( graph_id, user_id, graph_version, for_export ) @staticmethod async def test_create_graph( - create_graph: backend.server.routers.v1.CreateGraph, + create_graph: backend.api.features.v1.CreateGraph, user_id: str, ): - return await backend.server.routers.v1.create_new_graph(create_graph, user_id) + return await backend.api.features.v1.create_new_graph(create_graph, user_id) @staticmethod async def test_get_graph_run_status(graph_exec_id: str, user_id: str): @@ -372,45 +414,45 @@ class AgentServer(backend.util.service.AppProcess): @staticmethod async def test_delete_graph(graph_id: str, user_id: str): """Used for clean-up after a test run""" - await backend.server.v2.library.db.delete_library_agent_by_graph_id( + await backend.api.features.library.db.delete_library_agent_by_graph_id( graph_id=graph_id, user_id=user_id ) - return await backend.server.routers.v1.delete_graph(graph_id, user_id) + return await backend.api.features.v1.delete_graph(graph_id, user_id) @staticmethod async def test_get_presets(user_id: str, page: int = 1, page_size: int = 10): - return await backend.server.v2.library.routes.presets.list_presets( + return await backend.api.features.library.routes.presets.list_presets( user_id=user_id, page=page, page_size=page_size ) @staticmethod async def test_get_preset(preset_id: str, user_id: str): - return await backend.server.v2.library.routes.presets.get_preset( + return await backend.api.features.library.routes.presets.get_preset( preset_id=preset_id, user_id=user_id ) @staticmethod async def test_create_preset( - preset: backend.server.v2.library.model.LibraryAgentPresetCreatable, + preset: backend.api.features.library.model.LibraryAgentPresetCreatable, user_id: str, ): - return await backend.server.v2.library.routes.presets.create_preset( + return await backend.api.features.library.routes.presets.create_preset( preset=preset, user_id=user_id ) @staticmethod async def test_update_preset( preset_id: str, - preset: backend.server.v2.library.model.LibraryAgentPresetUpdatable, + preset: backend.api.features.library.model.LibraryAgentPresetUpdatable, user_id: str, ): - return await backend.server.v2.library.routes.presets.update_preset( + return await backend.api.features.library.routes.presets.update_preset( preset_id=preset_id, preset=preset, user_id=user_id ) @staticmethod async def test_delete_preset(preset_id: str, user_id: str): - return await backend.server.v2.library.routes.presets.delete_preset( + return await backend.api.features.library.routes.presets.delete_preset( preset_id=preset_id, user_id=user_id ) @@ -420,7 +462,7 @@ class AgentServer(backend.util.service.AppProcess): user_id: str, inputs: Optional[dict[str, Any]] = None, ): - return await backend.server.v2.library.routes.presets.execute_preset( + return await backend.api.features.library.routes.presets.execute_preset( preset_id=preset_id, user_id=user_id, inputs=inputs or {}, @@ -429,18 +471,20 @@ class AgentServer(backend.util.service.AppProcess): @staticmethod async def test_create_store_listing( - request: backend.server.v2.store.model.StoreSubmissionRequest, user_id: str + request: 
backend.api.features.store.model.StoreSubmissionRequest, user_id: str ): - return await backend.server.v2.store.routes.create_submission(request, user_id) + return await backend.api.features.store.routes.create_submission( + request, user_id + ) ### ADMIN ### @staticmethod async def test_review_store_listing( - request: backend.server.v2.store.model.ReviewSubmissionRequest, + request: backend.api.features.store.model.ReviewSubmissionRequest, user_id: str, ): - return await backend.server.v2.admin.store_admin_routes.review_submission( + return await backend.api.features.admin.store_admin_routes.review_submission( request.store_listing_version_id, request, user_id ) @@ -450,10 +494,7 @@ class AgentServer(backend.util.service.AppProcess): provider: ProviderName, credentials: Credentials, ) -> Credentials: - from backend.server.integrations.router import ( - create_credentials, - get_credential, - ) + from .features.integrations.router import create_credentials, get_credential try: return await create_credentials( diff --git a/autogpt_platform/backend/backend/server/test_helpers.py b/autogpt_platform/backend/backend/api/test_helpers.py similarity index 81% rename from autogpt_platform/backend/backend/server/test_helpers.py rename to autogpt_platform/backend/backend/api/test_helpers.py index 98073f0992..c6ba333a2e 100644 --- a/autogpt_platform/backend/backend/server/test_helpers.py +++ b/autogpt_platform/backend/backend/api/test_helpers.py @@ -1,7 +1,8 @@ """Helper functions for improved test assertions and error handling.""" import json -from typing import Any, Dict, Optional +from contextlib import contextmanager +from typing import Any, Dict, Iterator, Optional def assert_response_status( @@ -107,3 +108,24 @@ def assert_mock_called_with_partial(mock_obj: Any, **expected_kwargs: Any) -> No assert ( actual_kwargs[key] == expected_value ), f"Mock called with {key}={actual_kwargs[key]}, expected {expected_value}" + + +@contextmanager +def override_config(settings: Any, attribute: str, value: Any) -> Iterator[None]: + """Temporarily override a config attribute for testing. + + Warning: Directly mutates settings.config. If config is reloaded or cached + elsewhere during the test, side effects may leak. Use with caution in + parallel tests or when config is accessed globally. 
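+
+    Typical usage (illustrative; see ws_api_test.py in this change)::
+
+        with override_config(settings, "app_env", AppEnvironment.LOCAL):
+            WebsocketServer().run()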
+ + Args: + settings: The settings object containing .config + attribute: The config attribute name to override + value: The temporary value to set + """ + original = getattr(settings.config, attribute) + setattr(settings.config, attribute, value) + try: + yield + finally: + setattr(settings.config, attribute, original) diff --git a/autogpt_platform/backend/backend/server/utils/api_key_auth.py b/autogpt_platform/backend/backend/api/utils/api_key_auth.py similarity index 100% rename from autogpt_platform/backend/backend/server/utils/api_key_auth.py rename to autogpt_platform/backend/backend/api/utils/api_key_auth.py diff --git a/autogpt_platform/backend/backend/server/utils/api_key_auth_test.py b/autogpt_platform/backend/backend/api/utils/api_key_auth_test.py similarity index 99% rename from autogpt_platform/backend/backend/server/utils/api_key_auth_test.py rename to autogpt_platform/backend/backend/api/utils/api_key_auth_test.py index df6af6633c..39c3150561 100644 --- a/autogpt_platform/backend/backend/server/utils/api_key_auth_test.py +++ b/autogpt_platform/backend/backend/api/utils/api_key_auth_test.py @@ -8,7 +8,7 @@ import pytest from fastapi import HTTPException, Request from starlette.status import HTTP_401_UNAUTHORIZED, HTTP_403_FORBIDDEN -from backend.server.utils.api_key_auth import APIKeyAuthenticator +from backend.api.utils.api_key_auth import APIKeyAuthenticator from backend.util.exceptions import MissingConfigError diff --git a/autogpt_platform/backend/backend/api/utils/cors.py b/autogpt_platform/backend/backend/api/utils/cors.py new file mode 100644 index 0000000000..b8a230eb83 --- /dev/null +++ b/autogpt_platform/backend/backend/api/utils/cors.py @@ -0,0 +1,67 @@ +from __future__ import annotations + +import re +from typing import List, Sequence, TypedDict + +from backend.util.settings import AppEnvironment + + +class CorsParams(TypedDict): + allow_origins: List[str] + allow_origin_regex: str | None + + +def build_cors_params(origins: Sequence[str], app_env: AppEnvironment) -> CorsParams: + allow_origins: List[str] = [] + regex_patterns: List[str] = [] + + if app_env == AppEnvironment.PRODUCTION: + for origin in origins: + if origin.startswith("regex:"): + pattern = origin[len("regex:") :] + pattern_lower = pattern.lower() + if "localhost" in pattern_lower or "127.0.0.1" in pattern_lower: + raise ValueError( + f"Production environment cannot allow localhost origins via regex: {pattern}" + ) + try: + compiled = re.compile(pattern) + test_urls = [ + "http://localhost:3000", + "http://127.0.0.1:3000", + "https://localhost:8000", + "https://127.0.0.1:8000", + ] + for test_url in test_urls: + if compiled.search(test_url): + raise ValueError( + f"Production regex pattern matches localhost/127.0.0.1: {pattern}" + ) + except re.error: + pass + continue + + lowered = origin.lower() + if "localhost" in lowered or "127.0.0.1" in lowered: + raise ValueError( + "Production environment cannot allow localhost origins" + ) + + for origin in origins: + if origin.startswith("regex:"): + regex_patterns.append(origin[len("regex:") :]) + else: + allow_origins.append(origin) + + allow_origin_regex = None + if regex_patterns: + if len(regex_patterns) == 1: + allow_origin_regex = f"^(?:{regex_patterns[0]})$" + else: + combined_pattern = "|".join(f"(?:{pattern})" for pattern in regex_patterns) + allow_origin_regex = f"^(?:{combined_pattern})$" + + return { + "allow_origins": allow_origins, + "allow_origin_regex": allow_origin_regex, + } diff --git 
a/autogpt_platform/backend/backend/api/utils/cors_test.py b/autogpt_platform/backend/backend/api/utils/cors_test.py new file mode 100644 index 0000000000..011974383b --- /dev/null +++ b/autogpt_platform/backend/backend/api/utils/cors_test.py @@ -0,0 +1,62 @@ +import pytest + +from backend.api.utils.cors import build_cors_params +from backend.util.settings import AppEnvironment + + +def test_build_cors_params_splits_regex_patterns() -> None: + origins = [ + "https://app.example.com", + "regex:https://.*\\.example\\.com", + ] + + result = build_cors_params(origins, AppEnvironment.LOCAL) + + assert result["allow_origins"] == ["https://app.example.com"] + assert result["allow_origin_regex"] == "^(?:https://.*\\.example\\.com)$" + + +def test_build_cors_params_combines_multiple_regex_patterns() -> None: + origins = [ + "regex:https://alpha.example.com", + "regex:https://beta.example.com", + ] + + result = build_cors_params(origins, AppEnvironment.DEVELOPMENT) + + assert result["allow_origins"] == [] + assert result["allow_origin_regex"] == ( + "^(?:(?:https://alpha.example.com)|(?:https://beta.example.com))$" + ) + + +def test_build_cors_params_blocks_localhost_literal_in_production() -> None: + with pytest.raises(ValueError): + build_cors_params(["http://localhost:3000"], AppEnvironment.PRODUCTION) + + +def test_build_cors_params_blocks_localhost_regex_in_production() -> None: + with pytest.raises(ValueError): + build_cors_params(["regex:https://.*localhost.*"], AppEnvironment.PRODUCTION) + + +def test_build_cors_params_blocks_case_insensitive_localhost_regex() -> None: + with pytest.raises(ValueError, match="localhost origins via regex"): + build_cors_params(["regex:https://(?i)LOCALHOST.*"], AppEnvironment.PRODUCTION) + + +def test_build_cors_params_blocks_regex_matching_localhost_at_runtime() -> None: + with pytest.raises(ValueError, match="matches localhost"): + build_cors_params(["regex:https?://.*:3000"], AppEnvironment.PRODUCTION) + + +def test_build_cors_params_allows_vercel_preview_regex() -> None: + result = build_cors_params( + ["regex:https://autogpt-git-[a-z0-9-]+\\.vercel\\.app"], + AppEnvironment.PRODUCTION, + ) + + assert result["allow_origins"] == [] + assert result["allow_origin_regex"] == ( + "^(?:https://autogpt-git-[a-z0-9-]+\\.vercel\\.app)$" + ) diff --git a/autogpt_platform/backend/backend/api/utils/openapi.py b/autogpt_platform/backend/backend/api/utils/openapi.py new file mode 100644 index 0000000000..757b220fd0 --- /dev/null +++ b/autogpt_platform/backend/backend/api/utils/openapi.py @@ -0,0 +1,41 @@ +from fastapi import FastAPI + + +def sort_openapi(app: FastAPI) -> None: + """ + Patch a FastAPI instance's `openapi()` method to sort the endpoints, + schemas, and responses. 
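+
+    The sorted schema is cached on ``app.openapi_schema``, so ordering is
+    deterministic and pure route refactors (such as the backend.server ->
+    backend.api move) do not produce spurious spec diffs.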
+ """ + wrapped_openapi = app.openapi + + def custom_openapi(): + if app.openapi_schema: + return app.openapi_schema + + openapi_schema = wrapped_openapi() + + # Sort endpoints + openapi_schema["paths"] = dict(sorted(openapi_schema["paths"].items())) + + # Sort endpoints -> methods + for p in openapi_schema["paths"].keys(): + openapi_schema["paths"][p] = dict( + sorted(openapi_schema["paths"][p].items()) + ) + + # Sort endpoints -> methods -> responses + for m in openapi_schema["paths"][p].keys(): + openapi_schema["paths"][p][m]["responses"] = dict( + sorted(openapi_schema["paths"][p][m]["responses"].items()) + ) + + # Sort schemas and responses as well + for k in openapi_schema["components"].keys(): + openapi_schema["components"][k] = dict( + sorted(openapi_schema["components"][k].items()) + ) + + app.openapi_schema = openapi_schema + return openapi_schema + + app.openapi = custom_openapi diff --git a/autogpt_platform/backend/backend/server/ws_api.py b/autogpt_platform/backend/backend/api/ws_api.py similarity index 89% rename from autogpt_platform/backend/backend/server/ws_api.py rename to autogpt_platform/backend/backend/api/ws_api.py index dc8a64d79f..b71fdb3526 100644 --- a/autogpt_platform/backend/backend/server/ws_api.py +++ b/autogpt_platform/backend/backend/api/ws_api.py @@ -9,19 +9,21 @@ from autogpt_libs.auth.jwt_utils import parse_jwt_token from fastapi import Depends, FastAPI, WebSocket, WebSocketDisconnect from starlette.middleware.cors import CORSMiddleware -from backend.data.execution import AsyncRedisExecutionEventBus -from backend.data.user import DEFAULT_USER_ID -from backend.monitoring.instrumentation import ( - instrument_fastapi, - update_websocket_connections, -) -from backend.server.conn_manager import ConnectionManager -from backend.server.model import ( +from backend.api.conn_manager import ConnectionManager +from backend.api.model import ( WSMessage, WSMethod, WSSubscribeGraphExecutionRequest, WSSubscribeGraphExecutionsRequest, ) +from backend.api.utils.cors import build_cors_params +from backend.data.execution import AsyncRedisExecutionEventBus +from backend.data.notification_bus import AsyncRedisNotificationEventBus +from backend.data.user import DEFAULT_USER_ID +from backend.monitoring.instrumentation import ( + instrument_fastapi, + update_websocket_connections, +) from backend.util.retry import continuous_retry from backend.util.service import AppProcess from backend.util.settings import AppEnvironment, Config, Settings @@ -61,9 +63,21 @@ def get_connection_manager(): @continuous_retry() async def event_broadcaster(manager: ConnectionManager): - event_queue = AsyncRedisExecutionEventBus() - async for event in event_queue.listen("*"): - await manager.send_execution_update(event) + execution_bus = AsyncRedisExecutionEventBus() + notification_bus = AsyncRedisNotificationEventBus() + + async def execution_worker(): + async for event in execution_bus.listen("*"): + await manager.send_execution_update(event) + + async def notification_worker(): + async for notification in notification_bus.listen("*"): + await manager.send_notification( + user_id=notification.user_id, + payload=notification.payload, + ) + + await asyncio.gather(execution_worker(), notification_worker()) async def authenticate_websocket(websocket: WebSocket) -> str: @@ -228,7 +242,7 @@ async def websocket_router( user_id = await authenticate_websocket(websocket) if not user_id: return - await manager.connect_socket(websocket) + await manager.connect_socket(websocket, user_id=user_id) # Track 
WebSocket connection update_websocket_connections(user_id, 1) @@ -301,7 +315,7 @@ async def websocket_router( ) except WebSocketDisconnect: - manager.disconnect_socket(websocket) + manager.disconnect_socket(websocket, user_id=user_id) logger.debug("WebSocket client disconnected") finally: update_websocket_connections(user_id, -1) @@ -315,9 +329,13 @@ async def health(): class WebsocketServer(AppProcess): def run(self): logger.info(f"CORS allow origins: {settings.config.backend_cors_allow_origins}") + cors_params = build_cors_params( + settings.config.backend_cors_allow_origins, + settings.config.app_env, + ) server_app = CORSMiddleware( app=app, - allow_origins=settings.config.backend_cors_allow_origins, + **cors_params, allow_credentials=True, allow_methods=["*"], allow_headers=["*"], @@ -327,9 +345,6 @@ class WebsocketServer(AppProcess): server_app, host=Config().websocket_server_host, port=Config().websocket_server_port, + ws="websockets-sansio", log_config=None, ) - - def cleanup(self): - super().cleanup() - logger.info(f"[{self.service_name}] ⏳ Shutting down WebSocket Server...") diff --git a/autogpt_platform/backend/backend/server/ws_api_test.py b/autogpt_platform/backend/backend/api/ws_api_test.py similarity index 73% rename from autogpt_platform/backend/backend/server/ws_api_test.py rename to autogpt_platform/backend/backend/api/ws_api_test.py index c9c27eb086..edab1bbded 100644 --- a/autogpt_platform/backend/backend/server/ws_api_test.py +++ b/autogpt_platform/backend/backend/api/ws_api_test.py @@ -6,15 +6,17 @@ import pytest from fastapi import WebSocket, WebSocketDisconnect from pytest_snapshot.plugin import Snapshot -from backend.data.user import DEFAULT_USER_ID -from backend.server.conn_manager import ConnectionManager -from backend.server.ws_api import ( - WSMessage, - WSMethod, +from backend.api.conn_manager import ConnectionManager +from backend.api.test_helpers import override_config +from backend.api.ws_api import AppEnvironment, WebsocketServer, WSMessage, WSMethod +from backend.api.ws_api import app as websocket_app +from backend.api.ws_api import ( handle_subscribe, handle_unsubscribe, + settings, websocket_router, ) +from backend.data.user import DEFAULT_USER_ID @pytest.fixture @@ -29,13 +31,54 @@ def mock_manager() -> AsyncMock: return AsyncMock(spec=ConnectionManager) +def test_websocket_server_uses_cors_helper(mocker) -> None: + cors_params = { + "allow_origins": ["https://app.example.com"], + "allow_origin_regex": None, + } + mocker.patch("backend.api.ws_api.uvicorn.run") + cors_middleware = mocker.patch( + "backend.api.ws_api.CORSMiddleware", return_value=object() + ) + build_cors = mocker.patch( + "backend.api.ws_api.build_cors_params", return_value=cors_params + ) + + with override_config( + settings, "backend_cors_allow_origins", cors_params["allow_origins"] + ), override_config(settings, "app_env", AppEnvironment.LOCAL): + WebsocketServer().run() + + build_cors.assert_called_once_with( + cors_params["allow_origins"], AppEnvironment.LOCAL + ) + cors_middleware.assert_called_once_with( + app=websocket_app, + allow_origins=cors_params["allow_origins"], + allow_origin_regex=cors_params["allow_origin_regex"], + allow_credentials=True, + allow_methods=["*"], + allow_headers=["*"], + ) + + +def test_websocket_server_blocks_localhost_in_production(mocker) -> None: + mocker.patch("backend.api.ws_api.uvicorn.run") + + with override_config( + settings, "backend_cors_allow_origins", ["http://localhost:3000"] + ), override_config(settings, "app_env", 
AppEnvironment.PRODUCTION): + with pytest.raises(ValueError): + WebsocketServer().run() + + @pytest.mark.asyncio async def test_websocket_router_subscribe( mock_websocket: AsyncMock, mock_manager: AsyncMock, snapshot: Snapshot, mocker ) -> None: # Mock the authenticate_websocket function to ensure it returns a valid user_id mocker.patch( - "backend.server.ws_api.authenticate_websocket", return_value=DEFAULT_USER_ID + "backend.api.ws_api.authenticate_websocket", return_value=DEFAULT_USER_ID ) mock_websocket.receive_text.side_effect = [ @@ -53,7 +96,9 @@ async def test_websocket_router_subscribe( cast(WebSocket, mock_websocket), cast(ConnectionManager, mock_manager) ) - mock_manager.connect_socket.assert_called_once_with(mock_websocket) + mock_manager.connect_socket.assert_called_once_with( + mock_websocket, user_id=DEFAULT_USER_ID + ) mock_manager.subscribe_graph_exec.assert_called_once_with( user_id=DEFAULT_USER_ID, graph_exec_id="test-graph-exec-1", @@ -72,7 +117,9 @@ async def test_websocket_router_subscribe( snapshot.snapshot_dir = "snapshots" snapshot.assert_match(json.dumps(parsed_message, indent=2, sort_keys=True), "sub") - mock_manager.disconnect_socket.assert_called_once_with(mock_websocket) + mock_manager.disconnect_socket.assert_called_once_with( + mock_websocket, user_id=DEFAULT_USER_ID + ) @pytest.mark.asyncio @@ -81,7 +128,7 @@ async def test_websocket_router_unsubscribe( ) -> None: # Mock the authenticate_websocket function to ensure it returns a valid user_id mocker.patch( - "backend.server.ws_api.authenticate_websocket", return_value=DEFAULT_USER_ID + "backend.api.ws_api.authenticate_websocket", return_value=DEFAULT_USER_ID ) mock_websocket.receive_text.side_effect = [ @@ -99,7 +146,9 @@ async def test_websocket_router_unsubscribe( cast(WebSocket, mock_websocket), cast(ConnectionManager, mock_manager) ) - mock_manager.connect_socket.assert_called_once_with(mock_websocket) + mock_manager.connect_socket.assert_called_once_with( + mock_websocket, user_id=DEFAULT_USER_ID + ) mock_manager.unsubscribe_graph_exec.assert_called_once_with( user_id=DEFAULT_USER_ID, graph_exec_id="test-graph-exec-1", @@ -115,7 +164,9 @@ async def test_websocket_router_unsubscribe( snapshot.snapshot_dir = "snapshots" snapshot.assert_match(json.dumps(parsed_message, indent=2, sort_keys=True), "unsub") - mock_manager.disconnect_socket.assert_called_once_with(mock_websocket) + mock_manager.disconnect_socket.assert_called_once_with( + mock_websocket, user_id=DEFAULT_USER_ID + ) @pytest.mark.asyncio @@ -124,7 +175,7 @@ async def test_websocket_router_invalid_method( ) -> None: # Mock the authenticate_websocket function to ensure it returns a valid user_id mocker.patch( - "backend.server.ws_api.authenticate_websocket", return_value=DEFAULT_USER_ID + "backend.api.ws_api.authenticate_websocket", return_value=DEFAULT_USER_ID ) mock_websocket.receive_text.side_effect = [ @@ -136,11 +187,15 @@ async def test_websocket_router_invalid_method( cast(WebSocket, mock_websocket), cast(ConnectionManager, mock_manager) ) - mock_manager.connect_socket.assert_called_once_with(mock_websocket) + mock_manager.connect_socket.assert_called_once_with( + mock_websocket, user_id=DEFAULT_USER_ID + ) mock_websocket.send_text.assert_called_once() assert '"method":"error"' in mock_websocket.send_text.call_args[0][0] assert '"success":false' in mock_websocket.send_text.call_args[0][0] - mock_manager.disconnect_socket.assert_called_once_with(mock_websocket) + mock_manager.disconnect_socket.assert_called_once_with( + mock_websocket, 
user_id=DEFAULT_USER_ID + ) @pytest.mark.asyncio diff --git a/autogpt_platform/backend/backend/app.py b/autogpt_platform/backend/backend/app.py index 596962ae0b..0afed130ed 100644 --- a/autogpt_platform/backend/backend/app.py +++ b/autogpt_platform/backend/backend/app.py @@ -36,10 +36,10 @@ def main(**kwargs): Run all the processes required for the AutoGPT-server (REST and WebSocket APIs). """ + from backend.api.rest_api import AgentServer + from backend.api.ws_api import WebsocketServer from backend.executor import DatabaseManager, ExecutionManager, Scheduler from backend.notifications import NotificationManager - from backend.server.rest_api import AgentServer - from backend.server.ws_api import WebsocketServer run_processes( DatabaseManager().set_log_level("warning"), diff --git a/autogpt_platform/backend/backend/blocks/agent.py b/autogpt_platform/backend/backend/blocks/agent.py index 68fef45ada..0efc0a3369 100644 --- a/autogpt_platform/backend/backend/blocks/agent.py +++ b/autogpt_platform/backend/backend/blocks/agent.py @@ -7,10 +7,11 @@ from backend.data.block import ( BlockInput, BlockOutput, BlockSchema, + BlockSchemaInput, BlockType, get_block, ) -from backend.data.execution import ExecutionStatus, NodesInputMasks +from backend.data.execution import ExecutionContext, ExecutionStatus, NodesInputMasks from backend.data.model import NodeExecutionStats, SchemaField from backend.util.json import validate_with_jsonschema from backend.util.retry import func_retry @@ -19,7 +20,7 @@ _logger = logging.getLogger(__name__) class AgentExecutorBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): user_id: str = SchemaField(description="User ID") graph_id: str = SchemaField(description="Graph ID") graph_version: int = SchemaField(description="Graph Version") @@ -53,6 +54,7 @@ class AgentExecutorBlock(Block): return validate_with_jsonschema(cls.get_input_schema(data), data) class Output(BlockSchema): + # Use BlockSchema to avoid automatic error field that could clash with graph outputs pass def __init__(self): @@ -65,8 +67,14 @@ class AgentExecutorBlock(Block): categories={BlockCategory.AGENT}, ) - async def run(self, input_data: Input, **kwargs) -> BlockOutput: - + async def run( + self, + input_data: Input, + *, + graph_exec_id: str, + execution_context: ExecutionContext, + **kwargs, + ) -> BlockOutput: from backend.executor import utils as execution_utils graph_exec = await execution_utils.add_graph_execution( @@ -75,6 +83,9 @@ class AgentExecutorBlock(Block): user_id=input_data.user_id, inputs=input_data.inputs, nodes_input_masks=input_data.nodes_input_masks, + execution_context=execution_context.model_copy( + update={"parent_execution_id": graph_exec_id}, + ), ) logger = execution_utils.LogMetadata( diff --git a/autogpt_platform/backend/backend/blocks/ai_condition.py b/autogpt_platform/backend/backend/blocks/ai_condition.py index 18f9046a57..de43c29a90 100644 --- a/autogpt_platform/backend/backend/blocks/ai_condition.py +++ b/autogpt_platform/backend/backend/blocks/ai_condition.py @@ -10,7 +10,12 @@ from backend.blocks.llm import ( LLMResponse, llm_call, ) -from backend.data.block import BlockCategory, BlockOutput, BlockSchema +from backend.data.block import ( + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, +) from backend.data.model import APIKeyCredentials, NodeExecutionStats, SchemaField @@ -23,7 +28,7 @@ class AIConditionBlock(AIBlockBase): It provides the same yes/no data pass-through functionality as the standard ConditionBlock. 
""" - class Input(BlockSchema): + class Input(BlockSchemaInput): input_value: Any = SchemaField( description="The input value to evaluate with the AI condition", placeholder="Enter the value to be evaluated (text, number, or any data)", @@ -50,7 +55,7 @@ class AIConditionBlock(AIBlockBase): ) credentials: AICredentials = AICredentialsField() - class Output(BlockSchema): + class Output(BlockSchemaOutput): result: bool = SchemaField( description="The result of the AI condition evaluation (True or False)" ) diff --git a/autogpt_platform/backend/backend/blocks/ai_image_customizer.py b/autogpt_platform/backend/backend/blocks/ai_image_customizer.py index d0d3ec6b1d..83178e924d 100644 --- a/autogpt_platform/backend/backend/blocks/ai_image_customizer.py +++ b/autogpt_platform/backend/backend/blocks/ai_image_customizer.py @@ -1,3 +1,4 @@ +import asyncio from enum import Enum from typing import Literal @@ -5,7 +6,13 @@ from pydantic import SecretStr from replicate.client import Client as ReplicateClient from replicate.helpers import FileOutput -from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema +from backend.data.block import ( + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, +) from backend.data.model import ( APIKeyCredentials, CredentialsField, @@ -13,11 +20,26 @@ from backend.data.model import ( SchemaField, ) from backend.integrations.providers import ProviderName -from backend.util.file import MediaFileType +from backend.util.file import MediaFileType, store_media_file class GeminiImageModel(str, Enum): NANO_BANANA = "google/nano-banana" + NANO_BANANA_PRO = "google/nano-banana-pro" + + +class AspectRatio(str, Enum): + MATCH_INPUT_IMAGE = "match_input_image" + ASPECT_1_1 = "1:1" + ASPECT_2_3 = "2:3" + ASPECT_3_2 = "3:2" + ASPECT_3_4 = "3:4" + ASPECT_4_3 = "4:3" + ASPECT_4_5 = "4:5" + ASPECT_5_4 = "5:4" + ASPECT_9_16 = "9:16" + ASPECT_16_9 = "16:9" + ASPECT_21_9 = "21:9" class OutputFormat(str, Enum): @@ -42,7 +64,7 @@ TEST_CREDENTIALS_INPUT = { class AIImageCustomizerBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: CredentialsMetaInput[ Literal[ProviderName.REPLICATE], Literal["api_key"] ] = CredentialsField( @@ -62,15 +84,19 @@ class AIImageCustomizerBlock(Block): default=[], title="Input Images", ) + aspect_ratio: AspectRatio = SchemaField( + description="Aspect ratio of the generated image", + default=AspectRatio.MATCH_INPUT_IMAGE, + title="Aspect Ratio", + ) output_format: OutputFormat = SchemaField( description="Format of the output image", default=OutputFormat.PNG, title="Output Format", ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): image_url: MediaFileType = SchemaField(description="URL of the generated image") - error: str = SchemaField(description="Error message if generation failed") def __init__(self): super().__init__( @@ -86,6 +112,7 @@ class AIImageCustomizerBlock(Block): "prompt": "Make the scene more vibrant and colorful", "model": GeminiImageModel.NANO_BANANA, "images": [], + "aspect_ratio": AspectRatio.MATCH_INPUT_IMAGE, "output_format": OutputFormat.JPG, "credentials": TEST_CREDENTIALS_INPUT, }, @@ -110,11 +137,25 @@ class AIImageCustomizerBlock(Block): **kwargs, ) -> BlockOutput: try: + # Convert local file paths to Data URIs (base64) so Replicate can access them + processed_images = await asyncio.gather( + *( + store_media_file( + graph_exec_id=graph_exec_id, + file=img, + user_id=user_id, + return_content=True, + ) + for img in input_data.images + ) + ) + 
result = await self.run_model( api_key=credentials.api_key, model_name=input_data.model.value, prompt=input_data.prompt, - images=input_data.images, + images=processed_images, + aspect_ratio=input_data.aspect_ratio.value, output_format=input_data.output_format.value, ) yield "image_url", result @@ -127,12 +168,14 @@ class AIImageCustomizerBlock(Block): model_name: str, prompt: str, images: list[MediaFileType], + aspect_ratio: str, output_format: str, ) -> MediaFileType: client = ReplicateClient(api_token=api_key.get_secret_value()) input_params: dict = { "prompt": prompt, + "aspect_ratio": aspect_ratio, "output_format": output_format, } diff --git a/autogpt_platform/backend/backend/blocks/ai_image_generator_block.py b/autogpt_platform/backend/backend/blocks/ai_image_generator_block.py index 39c0d4ac54..8c7b6e6102 100644 --- a/autogpt_platform/backend/backend/blocks/ai_image_generator_block.py +++ b/autogpt_platform/backend/backend/blocks/ai_image_generator_block.py @@ -5,7 +5,7 @@ from pydantic import SecretStr from replicate.client import Client as ReplicateClient from replicate.helpers import FileOutput -from backend.data.block import Block, BlockCategory, BlockSchema +from backend.data.block import Block, BlockCategory, BlockSchemaInput, BlockSchemaOutput from backend.data.model import ( APIKeyCredentials, CredentialsField, @@ -60,6 +60,14 @@ SIZE_TO_RECRAFT_DIMENSIONS = { ImageSize.TALL: "1024x1536", } +SIZE_TO_NANO_BANANA_RATIO = { + ImageSize.SQUARE: "1:1", + ImageSize.LANDSCAPE: "4:3", + ImageSize.PORTRAIT: "3:4", + ImageSize.WIDE: "16:9", + ImageSize.TALL: "9:16", +} + class ImageStyle(str, Enum): """ @@ -98,10 +106,11 @@ class ImageGenModel(str, Enum): FLUX_ULTRA = "Flux 1.1 Pro Ultra" RECRAFT = "Recraft v3" SD3_5 = "Stable Diffusion 3.5 Medium" + NANO_BANANA_PRO = "Nano Banana Pro" class AIImageGeneratorBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: CredentialsMetaInput[ Literal[ProviderName.REPLICATE], Literal["api_key"] ] = CredentialsField( @@ -135,9 +144,8 @@ class AIImageGeneratorBlock(Block): title="Image Style", ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): image_url: str = SchemaField(description="URL of the generated image") - error: str = SchemaField(description="Error message if generation failed") def __init__(self): super().__init__( @@ -262,6 +270,20 @@ class AIImageGeneratorBlock(Block): ) return output + elif input_data.model == ImageGenModel.NANO_BANANA_PRO: + # Use Nano Banana Pro (Google Gemini 3 Pro Image) + input_params = { + "prompt": modified_prompt, + "aspect_ratio": SIZE_TO_NANO_BANANA_RATIO[input_data.size], + "resolution": "2K", # Default to 2K for good quality/cost balance + "output_format": "jpg", + "safety_filter_level": "block_only_high", # Most permissive + } + output = await self._run_client( + credentials, "google/nano-banana-pro", input_params + ) + return output + except Exception as e: raise RuntimeError(f"Failed to generate image: {str(e)}") diff --git a/autogpt_platform/backend/backend/blocks/ai_music_generator.py b/autogpt_platform/backend/backend/blocks/ai_music_generator.py index 92182fb16a..1ecb78f95e 100644 --- a/autogpt_platform/backend/backend/blocks/ai_music_generator.py +++ b/autogpt_platform/backend/backend/blocks/ai_music_generator.py @@ -6,7 +6,13 @@ from typing import Literal from pydantic import SecretStr from replicate.client import Client as ReplicateClient -from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema +from backend.data.block import 
( + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, +) from backend.data.model import ( APIKeyCredentials, CredentialsField, @@ -54,7 +60,7 @@ class NormalizationStrategy(str, Enum): class AIMusicGeneratorBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: CredentialsMetaInput[ Literal[ProviderName.REPLICATE], Literal["api_key"] ] = CredentialsField( @@ -107,9 +113,8 @@ class AIMusicGeneratorBlock(Block): title="Normalization Strategy", ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): result: str = SchemaField(description="URL of the generated audio file") - error: str = SchemaField(description="Error message if the model run failed") def __init__(self): super().__init__( diff --git a/autogpt_platform/backend/backend/blocks/ai_shortform_video_block.py b/autogpt_platform/backend/backend/blocks/ai_shortform_video_block.py index fe13687970..7242ff8304 100644 --- a/autogpt_platform/backend/backend/blocks/ai_shortform_video_block.py +++ b/autogpt_platform/backend/backend/blocks/ai_shortform_video_block.py @@ -6,7 +6,13 @@ from typing import Literal from pydantic import SecretStr -from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema +from backend.data.block import ( + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, +) from backend.data.model import ( APIKeyCredentials, CredentialsField, @@ -14,6 +20,7 @@ from backend.data.model import ( SchemaField, ) from backend.integrations.providers import ProviderName +from backend.util.exceptions import BlockExecutionError from backend.util.request import Requests TEST_CREDENTIALS = APIKeyCredentials( @@ -148,7 +155,7 @@ logger = logging.getLogger(__name__) class AIShortformVideoCreatorBlock(Block): """Creates a short‑form text‑to‑video clip using stock or AI imagery.""" - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: CredentialsMetaInput[ Literal[ProviderName.REVID], Literal["api_key"] ] = CredentialsField( @@ -187,9 +194,8 @@ class AIShortformVideoCreatorBlock(Block): placeholder=VisualMediaType.STOCK_VIDEOS, ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): video_url: str = SchemaField(description="The URL of the created video") - error: str = SchemaField(description="Error message if the request failed") async def create_webhook(self) -> tuple[str, str]: """Create a new webhook URL for receiving notifications.""" @@ -241,7 +247,11 @@ class AIShortformVideoCreatorBlock(Block): await asyncio.sleep(10) logger.error("Video creation timed out") - raise TimeoutError("Video creation timed out") + raise BlockExecutionError( + message="Video creation timed out", + block_name=self.name, + block_id=self.id, + ) def __init__(self): super().__init__( @@ -336,7 +346,7 @@ class AIShortformVideoCreatorBlock(Block): class AIAdMakerVideoCreatorBlock(Block): """Generates a 30‑second vertical AI advert using optional user‑supplied imagery.""" - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: CredentialsMetaInput[ Literal[ProviderName.REVID], Literal["api_key"] ] = CredentialsField( @@ -364,9 +374,8 @@ class AIAdMakerVideoCreatorBlock(Block): description="Restrict visuals to supplied images only.", default=True ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): video_url: str = SchemaField(description="URL of the finished advert") - error: str = SchemaField(description="Error message on failure") async def create_webhook(self) -> tuple[str, str]: 
"""Create a new webhook URL for receiving notifications.""" @@ -418,7 +427,11 @@ class AIAdMakerVideoCreatorBlock(Block): await asyncio.sleep(10) logger.error("Video creation timed out") - raise TimeoutError("Video creation timed out") + raise BlockExecutionError( + message="Video creation timed out", + block_name=self.name, + block_id=self.id, + ) def __init__(self): super().__init__( @@ -524,7 +537,7 @@ class AIAdMakerVideoCreatorBlock(Block): class AIScreenshotToVideoAdBlock(Block): """Creates an advert where the supplied screenshot is narrated by an AI avatar.""" - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: CredentialsMetaInput[ Literal[ProviderName.REVID], Literal["api_key"] ] = CredentialsField(description="Revid.ai API key") @@ -542,9 +555,8 @@ class AIScreenshotToVideoAdBlock(Block): default=AudioTrack.DONT_STOP_ME_ABSTRACT_FUTURE_BASS ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): video_url: str = SchemaField(description="Rendered video URL") - error: str = SchemaField(description="Error, if encountered") async def create_webhook(self) -> tuple[str, str]: """Create a new webhook URL for receiving notifications.""" @@ -596,7 +608,11 @@ class AIScreenshotToVideoAdBlock(Block): await asyncio.sleep(10) logger.error("Video creation timed out") - raise TimeoutError("Video creation timed out") + raise BlockExecutionError( + message="Video creation timed out", + block_name=self.name, + block_id=self.id, + ) def __init__(self): super().__init__( diff --git a/autogpt_platform/backend/backend/blocks/airtable/_api.py b/autogpt_platform/backend/backend/blocks/airtable/_api.py index a4d321d9ab..53ace72d98 100644 --- a/autogpt_platform/backend/backend/blocks/airtable/_api.py +++ b/autogpt_platform/backend/backend/blocks/airtable/_api.py @@ -1371,7 +1371,7 @@ async def create_base( if tables: params["tables"] = tables - print(params) + logger.debug(f"Creating Airtable base with params: {params}") response = await Requests().post( "https://api.airtable.com/v0/meta/bases", diff --git a/autogpt_platform/backend/backend/blocks/airtable/bases.py b/autogpt_platform/backend/backend/blocks/airtable/bases.py index c6212ffed4..550f6b7fdc 100644 --- a/autogpt_platform/backend/backend/blocks/airtable/bases.py +++ b/autogpt_platform/backend/backend/blocks/airtable/bases.py @@ -9,7 +9,8 @@ from backend.sdk import ( Block, BlockCategory, BlockOutput, - BlockSchema, + BlockSchemaInput, + BlockSchemaOutput, CredentialsMetaInput, SchemaField, ) @@ -23,7 +24,7 @@ class AirtableCreateBaseBlock(Block): Creates a new base in an Airtable workspace, or returns existing base if one with the same name exists. """ - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: CredentialsMetaInput = airtable.credentials_field( description="Airtable API credentials" ) @@ -53,7 +54,7 @@ class AirtableCreateBaseBlock(Block): ], ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): base_id: str = SchemaField(description="The ID of the created or found base") tables: list[dict] = SchemaField(description="Array of table objects") table: dict = SchemaField(description="A single table object") @@ -118,7 +119,7 @@ class AirtableListBasesBlock(Block): Lists all bases in an Airtable workspace that the user has access to. 
""" - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: CredentialsMetaInput = airtable.credentials_field( description="Airtable API credentials" ) @@ -129,7 +130,7 @@ class AirtableListBasesBlock(Block): description="Pagination offset from previous request", default="" ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): bases: list[dict] = SchemaField(description="Array of base objects") offset: Optional[str] = SchemaField( description="Offset for next page (null if no more bases)", default=None diff --git a/autogpt_platform/backend/backend/blocks/airtable/records.py b/autogpt_platform/backend/backend/blocks/airtable/records.py index 2bbe35b313..a876658f0d 100644 --- a/autogpt_platform/backend/backend/blocks/airtable/records.py +++ b/autogpt_platform/backend/backend/blocks/airtable/records.py @@ -9,7 +9,8 @@ from backend.sdk import ( Block, BlockCategory, BlockOutput, - BlockSchema, + BlockSchemaInput, + BlockSchemaOutput, CredentialsMetaInput, SchemaField, ) @@ -31,7 +32,7 @@ class AirtableListRecordsBlock(Block): Lists records from an Airtable table with optional filtering, sorting, and pagination. """ - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: CredentialsMetaInput = airtable.credentials_field( description="Airtable API credentials" ) @@ -65,7 +66,7 @@ class AirtableListRecordsBlock(Block): default=False, ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): records: list[dict] = SchemaField(description="Array of record objects") offset: Optional[str] = SchemaField( description="Offset for next page (null if no more records)", default=None @@ -137,7 +138,7 @@ class AirtableGetRecordBlock(Block): Retrieves a single record from an Airtable table by its ID. """ - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: CredentialsMetaInput = airtable.credentials_field( description="Airtable API credentials" ) @@ -153,7 +154,7 @@ class AirtableGetRecordBlock(Block): default=False, ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): id: str = SchemaField(description="The record ID") fields: dict = SchemaField(description="The record fields") created_time: str = SchemaField(description="The record created time") @@ -217,7 +218,7 @@ class AirtableCreateRecordsBlock(Block): Creates one or more records in an Airtable table. """ - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: CredentialsMetaInput = airtable.credentials_field( description="Airtable API credentials" ) @@ -239,7 +240,7 @@ class AirtableCreateRecordsBlock(Block): default=None, ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): records: list[dict] = SchemaField(description="Array of created record objects") details: dict = SchemaField(description="Details of the created records") @@ -290,7 +291,7 @@ class AirtableUpdateRecordsBlock(Block): Updates one or more existing records in an Airtable table. """ - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: CredentialsMetaInput = airtable.credentials_field( description="Airtable API credentials" ) @@ -306,7 +307,7 @@ class AirtableUpdateRecordsBlock(Block): default=None, ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): records: list[dict] = SchemaField(description="Array of updated record objects") def __init__(self): @@ -339,7 +340,7 @@ class AirtableDeleteRecordsBlock(Block): Deletes one or more records from an Airtable table. 
""" - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: CredentialsMetaInput = airtable.credentials_field( description="Airtable API credentials" ) @@ -351,7 +352,7 @@ class AirtableDeleteRecordsBlock(Block): description="Array of upto 10 record IDs to delete" ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): records: list[dict] = SchemaField(description="Array of deletion results") def __init__(self): diff --git a/autogpt_platform/backend/backend/blocks/airtable/schema.py b/autogpt_platform/backend/backend/blocks/airtable/schema.py index 5d2006a2ff..715df6f0eb 100644 --- a/autogpt_platform/backend/backend/blocks/airtable/schema.py +++ b/autogpt_platform/backend/backend/blocks/airtable/schema.py @@ -7,7 +7,8 @@ from backend.sdk import ( Block, BlockCategory, BlockOutput, - BlockSchema, + BlockSchemaInput, + BlockSchemaOutput, CredentialsMetaInput, Requests, SchemaField, @@ -23,13 +24,13 @@ class AirtableListSchemaBlock(Block): fields, and views. """ - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: CredentialsMetaInput = airtable.credentials_field( description="Airtable API credentials" ) base_id: str = SchemaField(description="The Airtable base ID") - class Output(BlockSchema): + class Output(BlockSchemaOutput): base_schema: dict = SchemaField( description="Complete base schema with tables, fields, and views" ) @@ -66,7 +67,7 @@ class AirtableCreateTableBlock(Block): Creates a new table in an Airtable base with specified fields and views. """ - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: CredentialsMetaInput = airtable.credentials_field( description="Airtable API credentials" ) @@ -77,7 +78,7 @@ class AirtableCreateTableBlock(Block): default=[{"name": "Name", "type": "singleLineText"}], ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): table: dict = SchemaField(description="Created table object") table_id: str = SchemaField(description="ID of the created table") @@ -109,7 +110,7 @@ class AirtableUpdateTableBlock(Block): Updates an existing table's properties such as name or description. """ - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: CredentialsMetaInput = airtable.credentials_field( description="Airtable API credentials" ) @@ -125,7 +126,7 @@ class AirtableUpdateTableBlock(Block): description="The date dependency of the table to update", default=None ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): table: dict = SchemaField(description="Updated table object") def __init__(self): @@ -157,7 +158,7 @@ class AirtableCreateFieldBlock(Block): Adds a new field (column) to an existing Airtable table. """ - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: CredentialsMetaInput = airtable.credentials_field( description="Airtable API credentials" ) @@ -176,7 +177,7 @@ class AirtableCreateFieldBlock(Block): description="The options of the field to create", default=None ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): field: dict = SchemaField(description="Created field object") field_id: str = SchemaField(description="ID of the created field") @@ -209,7 +210,7 @@ class AirtableUpdateFieldBlock(Block): Updates an existing field's properties in an Airtable table. 
""" - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: CredentialsMetaInput = airtable.credentials_field( description="Airtable API credentials" ) @@ -225,7 +226,7 @@ class AirtableUpdateFieldBlock(Block): advanced=False, ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): field: dict = SchemaField(description="Updated field object") def __init__(self): diff --git a/autogpt_platform/backend/backend/blocks/airtable/triggers.py b/autogpt_platform/backend/backend/blocks/airtable/triggers.py index 2cfc8178e3..03ed3182be 100644 --- a/autogpt_platform/backend/backend/blocks/airtable/triggers.py +++ b/autogpt_platform/backend/backend/blocks/airtable/triggers.py @@ -3,7 +3,8 @@ from backend.sdk import ( Block, BlockCategory, BlockOutput, - BlockSchema, + BlockSchemaInput, + BlockSchemaOutput, BlockType, BlockWebhookConfig, CredentialsMetaInput, @@ -32,7 +33,7 @@ class AirtableWebhookTriggerBlock(Block): Thin wrapper just forwards the payloads one at a time to the next block. """ - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: CredentialsMetaInput = airtable.credentials_field( description="Airtable API credentials" ) @@ -43,7 +44,7 @@ class AirtableWebhookTriggerBlock(Block): description="Airtable webhook event filter" ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): payload: WebhookPayload = SchemaField(description="Airtable webhook payload") def __init__(self): diff --git a/autogpt_platform/backend/backend/blocks/apollo/organization.py b/autogpt_platform/backend/backend/blocks/apollo/organization.py index 10abec0825..b6f8a7e06a 100644 --- a/autogpt_platform/backend/backend/blocks/apollo/organization.py +++ b/autogpt_platform/backend/backend/blocks/apollo/organization.py @@ -10,14 +10,20 @@ from backend.blocks.apollo.models import ( PrimaryPhone, SearchOrganizationsRequest, ) -from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema +from backend.data.block import ( + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, +) from backend.data.model import CredentialsField, SchemaField class SearchOrganizationsBlock(Block): """Search for organizations in Apollo""" - class Input(BlockSchema): + class Input(BlockSchemaInput): organization_num_employees_range: list[int] = SchemaField( description="""The number range of employees working for the company. This enables you to find companies based on headcount. You can add multiple ranges to expand your search results. 
@@ -69,7 +75,7 @@ To find IDs, identify the values for organization_id when you call this endpoint description="Apollo credentials", ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): organizations: list[Organization] = SchemaField( description="List of organizations found", default_factory=list, diff --git a/autogpt_platform/backend/backend/blocks/apollo/people.py b/autogpt_platform/backend/backend/blocks/apollo/people.py index 0ef35cd445..a58321ecfc 100644 --- a/autogpt_platform/backend/backend/blocks/apollo/people.py +++ b/autogpt_platform/backend/backend/blocks/apollo/people.py @@ -14,14 +14,20 @@ from backend.blocks.apollo.models import ( SearchPeopleRequest, SenorityLevels, ) -from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema +from backend.data.block import ( + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, +) from backend.data.model import CredentialsField, SchemaField class SearchPeopleBlock(Block): """Search for people in Apollo""" - class Input(BlockSchema): + class Input(BlockSchemaInput): person_titles: list[str] = SchemaField( description="""Job titles held by the people you want to find. For a person to be included in search results, they only need to match 1 of the job titles you add. Adding more job titles expands your search results. @@ -109,7 +115,7 @@ class SearchPeopleBlock(Block): description="Apollo credentials", ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): people: list[Contact] = SchemaField( description="List of people found", default_factory=list, diff --git a/autogpt_platform/backend/backend/blocks/apollo/person.py b/autogpt_platform/backend/backend/blocks/apollo/person.py index dad8ab733f..84b86d2bfd 100644 --- a/autogpt_platform/backend/backend/blocks/apollo/person.py +++ b/autogpt_platform/backend/backend/blocks/apollo/person.py @@ -6,14 +6,20 @@ from backend.blocks.apollo._auth import ( ApolloCredentialsInput, ) from backend.blocks.apollo.models import Contact, EnrichPersonRequest -from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema +from backend.data.block import ( + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, +) from backend.data.model import CredentialsField, SchemaField class GetPersonDetailBlock(Block): """Get detailed person data with Apollo API, including email reveal""" - class Input(BlockSchema): + class Input(BlockSchemaInput): person_id: str = SchemaField( description="Apollo person ID to enrich (most accurate method)", default="", @@ -68,7 +74,7 @@ class GetPersonDetailBlock(Block): description="Apollo credentials", ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): contact: Contact = SchemaField( description="Enriched contact information", ) diff --git a/autogpt_platform/backend/backend/blocks/ayrshare/_util.py b/autogpt_platform/backend/backend/blocks/ayrshare/_util.py index a647e933df..8d0b9914f9 100644 --- a/autogpt_platform/backend/backend/blocks/ayrshare/_util.py +++ b/autogpt_platform/backend/backend/blocks/ayrshare/_util.py @@ -3,7 +3,7 @@ from typing import Optional from pydantic import BaseModel, Field -from backend.data.block import BlockSchema +from backend.data.block import BlockSchemaInput from backend.data.model import SchemaField, UserIntegrations from backend.integrations.ayrshare import AyrshareClient from backend.util.clients import get_database_manager_async_client @@ -17,7 +17,7 @@ async def get_profile_key(user_id: str): return 
user_integrations.managed_credentials.ayrshare_profile_key -class BaseAyrshareInput(BlockSchema): +class BaseAyrshareInput(BlockSchemaInput): """Base input model for Ayrshare social media posts with common fields.""" post: str = SchemaField( diff --git a/autogpt_platform/backend/backend/blocks/ayrshare/post_to_bluesky.py b/autogpt_platform/backend/backend/blocks/ayrshare/post_to_bluesky.py index 0d6eeed0a1..df0d5ad269 100644 --- a/autogpt_platform/backend/backend/blocks/ayrshare/post_to_bluesky.py +++ b/autogpt_platform/backend/backend/blocks/ayrshare/post_to_bluesky.py @@ -3,7 +3,7 @@ from backend.sdk import ( Block, BlockCategory, BlockOutput, - BlockSchema, + BlockSchemaOutput, BlockType, SchemaField, ) @@ -38,7 +38,7 @@ class PostToBlueskyBlock(Block): advanced=True, ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): post_result: PostResponse = SchemaField(description="The result of the post") post: PostIds = SchemaField(description="The result of the post") diff --git a/autogpt_platform/backend/backend/blocks/ayrshare/post_to_facebook.py b/autogpt_platform/backend/backend/blocks/ayrshare/post_to_facebook.py index dccc443ef9..a9087915e6 100644 --- a/autogpt_platform/backend/backend/blocks/ayrshare/post_to_facebook.py +++ b/autogpt_platform/backend/backend/blocks/ayrshare/post_to_facebook.py @@ -3,7 +3,7 @@ from backend.sdk import ( Block, BlockCategory, BlockOutput, - BlockSchema, + BlockSchemaOutput, BlockType, SchemaField, ) @@ -101,7 +101,7 @@ class PostToFacebookBlock(Block): description="URL for custom link preview", default="", advanced=True ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): post_result: PostResponse = SchemaField(description="The result of the post") post: PostIds = SchemaField(description="The result of the post") diff --git a/autogpt_platform/backend/backend/blocks/ayrshare/post_to_gmb.py b/autogpt_platform/backend/backend/blocks/ayrshare/post_to_gmb.py index 5c510cccb3..1f223f1f80 100644 --- a/autogpt_platform/backend/backend/blocks/ayrshare/post_to_gmb.py +++ b/autogpt_platform/backend/backend/blocks/ayrshare/post_to_gmb.py @@ -3,7 +3,7 @@ from backend.sdk import ( Block, BlockCategory, BlockOutput, - BlockSchema, + BlockSchemaOutput, BlockType, SchemaField, ) @@ -94,7 +94,7 @@ class PostToGMBBlock(Block): advanced=True, ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): post_result: PostResponse = SchemaField(description="The result of the post") post: PostIds = SchemaField(description="The result of the post") diff --git a/autogpt_platform/backend/backend/blocks/ayrshare/post_to_instagram.py b/autogpt_platform/backend/backend/blocks/ayrshare/post_to_instagram.py index 1fc7c77df0..06d80db528 100644 --- a/autogpt_platform/backend/backend/blocks/ayrshare/post_to_instagram.py +++ b/autogpt_platform/backend/backend/blocks/ayrshare/post_to_instagram.py @@ -5,7 +5,7 @@ from backend.sdk import ( Block, BlockCategory, BlockOutput, - BlockSchema, + BlockSchemaOutput, BlockType, SchemaField, ) @@ -94,7 +94,7 @@ class PostToInstagramBlock(Block): advanced=True, ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): post_result: PostResponse = SchemaField(description="The result of the post") post: PostIds = SchemaField(description="The result of the post") diff --git a/autogpt_platform/backend/backend/blocks/ayrshare/post_to_linkedin.py b/autogpt_platform/backend/backend/blocks/ayrshare/post_to_linkedin.py index 7fad89e838..961587d201 100644 --- 
a/autogpt_platform/backend/backend/blocks/ayrshare/post_to_linkedin.py +++ b/autogpt_platform/backend/backend/blocks/ayrshare/post_to_linkedin.py @@ -3,7 +3,7 @@ from backend.sdk import ( Block, BlockCategory, BlockOutput, - BlockSchema, + BlockSchemaOutput, BlockType, SchemaField, ) @@ -94,7 +94,7 @@ class PostToLinkedInBlock(Block): advanced=True, ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): post_result: PostResponse = SchemaField(description="The result of the post") post: PostIds = SchemaField(description="The result of the post") diff --git a/autogpt_platform/backend/backend/blocks/ayrshare/post_to_pinterest.py b/autogpt_platform/backend/backend/blocks/ayrshare/post_to_pinterest.py index 55bae81618..834cd4e301 100644 --- a/autogpt_platform/backend/backend/blocks/ayrshare/post_to_pinterest.py +++ b/autogpt_platform/backend/backend/blocks/ayrshare/post_to_pinterest.py @@ -3,7 +3,7 @@ from backend.sdk import ( Block, BlockCategory, BlockOutput, - BlockSchema, + BlockSchemaOutput, BlockType, SchemaField, ) @@ -73,7 +73,7 @@ class PostToPinterestBlock(Block): advanced=True, ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): post_result: PostResponse = SchemaField(description="The result of the post") post: PostIds = SchemaField(description="The result of the post") diff --git a/autogpt_platform/backend/backend/blocks/ayrshare/post_to_reddit.py b/autogpt_platform/backend/backend/blocks/ayrshare/post_to_reddit.py index c193f94d2e..1df721f424 100644 --- a/autogpt_platform/backend/backend/blocks/ayrshare/post_to_reddit.py +++ b/autogpt_platform/backend/backend/blocks/ayrshare/post_to_reddit.py @@ -3,7 +3,7 @@ from backend.sdk import ( Block, BlockCategory, BlockOutput, - BlockSchema, + BlockSchemaOutput, BlockType, SchemaField, ) @@ -19,7 +19,7 @@ class PostToRedditBlock(Block): pass # Uses all base fields - class Output(BlockSchema): + class Output(BlockSchemaOutput): post_result: PostResponse = SchemaField(description="The result of the post") post: PostIds = SchemaField(description="The result of the post") diff --git a/autogpt_platform/backend/backend/blocks/ayrshare/post_to_snapchat.py b/autogpt_platform/backend/backend/blocks/ayrshare/post_to_snapchat.py index 8de728e569..3645f7cc9b 100644 --- a/autogpt_platform/backend/backend/blocks/ayrshare/post_to_snapchat.py +++ b/autogpt_platform/backend/backend/blocks/ayrshare/post_to_snapchat.py @@ -3,7 +3,7 @@ from backend.sdk import ( Block, BlockCategory, BlockOutput, - BlockSchema, + BlockSchemaOutput, BlockType, SchemaField, ) @@ -43,7 +43,7 @@ class PostToSnapchatBlock(Block): advanced=True, ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): post_result: PostResponse = SchemaField(description="The result of the post") post: PostIds = SchemaField(description="The result of the post") diff --git a/autogpt_platform/backend/backend/blocks/ayrshare/post_to_telegram.py b/autogpt_platform/backend/backend/blocks/ayrshare/post_to_telegram.py index a18d9a6cb1..a220cbe9e8 100644 --- a/autogpt_platform/backend/backend/blocks/ayrshare/post_to_telegram.py +++ b/autogpt_platform/backend/backend/blocks/ayrshare/post_to_telegram.py @@ -3,7 +3,7 @@ from backend.sdk import ( Block, BlockCategory, BlockOutput, - BlockSchema, + BlockSchemaOutput, BlockType, SchemaField, ) @@ -38,7 +38,7 @@ class PostToTelegramBlock(Block): advanced=True, ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): post_result: PostResponse = SchemaField(description="The result of the post") post: PostIds = 
SchemaField(description="The result of the post") diff --git a/autogpt_platform/backend/backend/blocks/ayrshare/post_to_threads.py b/autogpt_platform/backend/backend/blocks/ayrshare/post_to_threads.py index 6fdf06f1a5..75983b2d13 100644 --- a/autogpt_platform/backend/backend/blocks/ayrshare/post_to_threads.py +++ b/autogpt_platform/backend/backend/blocks/ayrshare/post_to_threads.py @@ -3,7 +3,7 @@ from backend.sdk import ( Block, BlockCategory, BlockOutput, - BlockSchema, + BlockSchemaOutput, BlockType, SchemaField, ) @@ -31,7 +31,7 @@ class PostToThreadsBlock(Block): advanced=False, ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): post_result: PostResponse = SchemaField(description="The result of the post") post: PostIds = SchemaField(description="The result of the post") diff --git a/autogpt_platform/backend/backend/blocks/ayrshare/post_to_tiktok.py b/autogpt_platform/backend/backend/blocks/ayrshare/post_to_tiktok.py index 581e65b74e..2d68f10ff0 100644 --- a/autogpt_platform/backend/backend/blocks/ayrshare/post_to_tiktok.py +++ b/autogpt_platform/backend/backend/blocks/ayrshare/post_to_tiktok.py @@ -5,7 +5,7 @@ from backend.sdk import ( Block, BlockCategory, BlockOutput, - BlockSchema, + BlockSchemaOutput, BlockType, SchemaField, ) @@ -98,7 +98,7 @@ class PostToTikTokBlock(Block): advanced=True, ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): post_result: PostResponse = SchemaField(description="The result of the post") post: PostIds = SchemaField(description="The result of the post") diff --git a/autogpt_platform/backend/backend/blocks/ayrshare/post_to_x.py b/autogpt_platform/backend/backend/blocks/ayrshare/post_to_x.py index bc23ac2c78..bbecd31ed4 100644 --- a/autogpt_platform/backend/backend/blocks/ayrshare/post_to_x.py +++ b/autogpt_platform/backend/backend/blocks/ayrshare/post_to_x.py @@ -3,7 +3,7 @@ from backend.sdk import ( Block, BlockCategory, BlockOutput, - BlockSchema, + BlockSchemaOutput, BlockType, SchemaField, ) @@ -97,7 +97,7 @@ class PostToXBlock(Block): advanced=True, ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): post_result: PostResponse = SchemaField(description="The result of the post") post: PostIds = SchemaField(description="The result of the post") diff --git a/autogpt_platform/backend/backend/blocks/ayrshare/post_to_youtube.py b/autogpt_platform/backend/backend/blocks/ayrshare/post_to_youtube.py index fbcc9afce6..8a366ba5c5 100644 --- a/autogpt_platform/backend/backend/blocks/ayrshare/post_to_youtube.py +++ b/autogpt_platform/backend/backend/blocks/ayrshare/post_to_youtube.py @@ -6,7 +6,7 @@ from backend.sdk import ( Block, BlockCategory, BlockOutput, - BlockSchema, + BlockSchemaOutput, BlockType, SchemaField, ) @@ -119,7 +119,7 @@ class PostToYouTubeBlock(Block): advanced=True, ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): post_result: PostResponse = SchemaField(description="The result of the post") post: PostIds = SchemaField(description="The result of the post") diff --git a/autogpt_platform/backend/backend/blocks/baas/bots.py b/autogpt_platform/backend/backend/blocks/baas/bots.py index da8400bb2c..68af9a675e 100644 --- a/autogpt_platform/backend/backend/blocks/baas/bots.py +++ b/autogpt_platform/backend/backend/blocks/baas/bots.py @@ -9,7 +9,8 @@ from backend.sdk import ( Block, BlockCategory, BlockOutput, - BlockSchema, + BlockSchemaInput, + BlockSchemaOutput, CredentialsMetaInput, SchemaField, ) @@ -23,7 +24,7 @@ class BaasBotJoinMeetingBlock(Block): Deploy a bot immediately or at a 
scheduled start_time to join and record a meeting. """ - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: CredentialsMetaInput = baas.credentials_field( description="Meeting BaaS API credentials" ) @@ -57,7 +58,7 @@ class BaasBotJoinMeetingBlock(Block): description="Custom metadata to attach to the bot", default={} ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): bot_id: str = SchemaField(description="UUID of the deployed bot") join_response: dict = SchemaField( description="Full response from join operation" @@ -103,13 +104,13 @@ class BaasBotLeaveMeetingBlock(Block): Force the bot to exit the call. """ - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: CredentialsMetaInput = baas.credentials_field( description="Meeting BaaS API credentials" ) bot_id: str = SchemaField(description="UUID of the bot to remove from meeting") - class Output(BlockSchema): + class Output(BlockSchemaOutput): left: bool = SchemaField(description="Whether the bot successfully left") def __init__(self): @@ -138,7 +139,7 @@ class BaasBotFetchMeetingDataBlock(Block): Pull MP4 URL, transcript & metadata for a completed meeting. """ - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: CredentialsMetaInput = baas.credentials_field( description="Meeting BaaS API credentials" ) @@ -147,7 +148,7 @@ class BaasBotFetchMeetingDataBlock(Block): description="Include transcript data in response", default=True ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): mp4_url: str = SchemaField( description="URL to download the meeting recording (time-limited)" ) @@ -185,13 +186,13 @@ class BaasBotDeleteRecordingBlock(Block): Purge MP4 + transcript data for privacy or storage management. """ - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: CredentialsMetaInput = baas.credentials_field( description="Meeting BaaS API credentials" ) bot_id: str = SchemaField(description="UUID of the bot whose data to delete") - class Output(BlockSchema): + class Output(BlockSchemaOutput): deleted: bool = SchemaField( description="Whether the data was successfully deleted" ) diff --git a/autogpt_platform/backend/backend/blocks/bannerbear/text_overlay.py b/autogpt_platform/backend/backend/blocks/bannerbear/text_overlay.py index 7dbf0096db..16d46c0d99 100644 --- a/autogpt_platform/backend/backend/blocks/bannerbear/text_overlay.py +++ b/autogpt_platform/backend/backend/blocks/bannerbear/text_overlay.py @@ -11,7 +11,8 @@ from backend.sdk import ( Block, BlockCategory, BlockOutput, - BlockSchema, + BlockSchemaInput, + BlockSchemaOutput, CredentialsMetaInput, Requests, SchemaField, @@ -27,7 +28,7 @@ TEST_CREDENTIALS = APIKeyCredentials( ) -class TextModification(BlockSchema): +class TextModification(BlockSchemaInput): name: str = SchemaField( description="The name of the layer to modify in the template" ) @@ -60,7 +61,7 @@ class TextModification(BlockSchema): class BannerbearTextOverlayBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: CredentialsMetaInput = bannerbear.credentials_field( description="API credentials for Bannerbear" ) @@ -96,7 +97,7 @@ class BannerbearTextOverlayBlock(Block): advanced=True, ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): success: bool = SchemaField( description="Whether the image generation was successfully initiated" ) @@ -105,7 +106,6 @@ class BannerbearTextOverlayBlock(Block): ) uid: str = SchemaField(description="Unique identifier for the generated 
image") status: str = SchemaField(description="Status of the image generation") - error: str = SchemaField(description="Error message if the operation failed") def __init__(self): super().__init__( diff --git a/autogpt_platform/backend/backend/blocks/basic.py b/autogpt_platform/backend/backend/blocks/basic.py index ef251489c7..a5f558f5c5 100644 --- a/autogpt_platform/backend/backend/blocks/basic.py +++ b/autogpt_platform/backend/backend/blocks/basic.py @@ -1,14 +1,21 @@ import enum from typing import Any -from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema, BlockType +from backend.data.block import ( + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, + BlockType, +) from backend.data.model import SchemaField from backend.util.file import store_media_file from backend.util.type import MediaFileType, convert class FileStoreBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): file_in: MediaFileType = SchemaField( description="The file to store in the temporary directory, it can be a URL, data URI, or local path." ) @@ -19,7 +26,7 @@ class FileStoreBlock(Block): title="Produce Base64 Output", ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): file_out: MediaFileType = SchemaField( description="The relative path to the stored file in the temporary directory." ) @@ -57,7 +64,7 @@ class StoreValueBlock(Block): The block output will be static, the output can be consumed multiple times. """ - class Input(BlockSchema): + class Input(BlockSchemaInput): input: Any = SchemaField( description="Trigger the block to produce the output. " "The value is only used when `data` is None." @@ -68,7 +75,7 @@ class StoreValueBlock(Block): default=None, ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): output: Any = SchemaField(description="The stored data retained in the block.") def __init__(self): @@ -94,10 +101,10 @@ class StoreValueBlock(Block): class PrintToConsoleBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): text: Any = SchemaField(description="The data to print to the console.") - class Output(BlockSchema): + class Output(BlockSchemaOutput): output: Any = SchemaField(description="The data printed to the console.") status: str = SchemaField(description="The status of the print operation.") @@ -121,10 +128,10 @@ class PrintToConsoleBlock(Block): class NoteBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): text: str = SchemaField(description="The text to display in the sticky note.") - class Output(BlockSchema): + class Output(BlockSchemaOutput): output: str = SchemaField(description="The text to display in the sticky note.") def __init__(self): @@ -154,15 +161,14 @@ class TypeOptions(enum.Enum): class UniversalTypeConverterBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): value: Any = SchemaField( description="The value to convert to a universal type." ) type: TypeOptions = SchemaField(description="The type to convert the value to.") - class Output(BlockSchema): + class Output(BlockSchemaOutput): value: Any = SchemaField(description="The converted value.") - error: str = SchemaField(description="Error message if conversion failed.") def __init__(self): super().__init__( @@ -195,10 +201,10 @@ class ReverseListOrderBlock(Block): A block which takes in a list and returns it in the opposite order. 
""" - class Input(BlockSchema): + class Input(BlockSchemaInput): input_list: list[Any] = SchemaField(description="The list to reverse") - class Output(BlockSchema): + class Output(BlockSchemaOutput): reversed_list: list[Any] = SchemaField(description="The list in reversed order") def __init__(self): diff --git a/autogpt_platform/backend/backend/blocks/block.py b/autogpt_platform/backend/backend/blocks/block.py index e1745d3055..95c92a41ab 100644 --- a/autogpt_platform/backend/backend/blocks/block.py +++ b/autogpt_platform/backend/backend/blocks/block.py @@ -2,7 +2,13 @@ import os import re from typing import Type -from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema +from backend.data.block import ( + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, +) from backend.data.model import SchemaField @@ -15,12 +21,12 @@ class BlockInstallationBlock(Block): for development purposes only. """ - class Input(BlockSchema): + class Input(BlockSchemaInput): code: str = SchemaField( description="Python code of the block to be installed", ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): success: str = SchemaField( description="Success message if the block is installed successfully", ) diff --git a/autogpt_platform/backend/backend/blocks/branching.py b/autogpt_platform/backend/backend/blocks/branching.py index fa66c2d30d..e9177a8b65 100644 --- a/autogpt_platform/backend/backend/blocks/branching.py +++ b/autogpt_platform/backend/backend/blocks/branching.py @@ -1,7 +1,13 @@ from enum import Enum from typing import Any -from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema +from backend.data.block import ( + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, +) from backend.data.model import SchemaField from backend.util.type import convert @@ -16,7 +22,7 @@ class ComparisonOperator(Enum): class ConditionBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): value1: Any = SchemaField( description="Enter the first value for comparison", placeholder="For example: 10 or 'hello' or True", @@ -40,7 +46,7 @@ class ConditionBlock(Block): default=None, ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): result: bool = SchemaField( description="The result of the condition evaluation (True or False)" ) @@ -100,7 +106,10 @@ class ConditionBlock(Block): ComparisonOperator.LESS_THAN_OR_EQUAL: lambda a, b: a <= b, } - result = comparison_funcs[operator](value1, value2) + try: + result = comparison_funcs[operator](value1, value2) + except Exception as e: + raise ValueError(f"Comparison failed: {e}") from e yield "result", result @@ -111,7 +120,7 @@ class ConditionBlock(Block): class IfInputMatchesBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): input: Any = SchemaField( description="The input to match against", placeholder="For example: 10 or 'hello' or True", @@ -131,7 +140,7 @@ class IfInputMatchesBlock(Block): default=None, ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): result: bool = SchemaField( description="The result of the condition evaluation (True or False)" ) diff --git a/autogpt_platform/backend/backend/blocks/code_executor.py b/autogpt_platform/backend/backend/blocks/code_executor.py index 20f2ec038e..be6f2bba55 100644 --- a/autogpt_platform/backend/backend/blocks/code_executor.py +++ b/autogpt_platform/backend/backend/blocks/code_executor.py @@ -4,9 +4,15 @@ from typing import Any, Literal, Optional from 
e2b_code_interpreter import AsyncSandbox from e2b_code_interpreter import Result as E2BExecutionResult from e2b_code_interpreter.charts import Chart as E2BExecutionResultChart -from pydantic import BaseModel, JsonValue, SecretStr +from pydantic import BaseModel, Field, JsonValue, SecretStr -from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema +from backend.data.block import ( + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, +) from backend.data.model import ( APIKeyCredentials, CredentialsField, @@ -61,7 +67,7 @@ class MainCodeExecutionResult(BaseModel): jpeg: Optional[str] = None pdf: Optional[str] = None latex: Optional[str] = None - json: Optional[JsonValue] = None # type: ignore (reportIncompatibleMethodOverride) + json_data: Optional[JsonValue] = Field(None, alias="json") javascript: Optional[str] = None data: Optional[dict] = None chart: Optional[Chart] = None @@ -159,7 +165,7 @@ class ExecuteCodeBlock(Block, BaseE2BExecutorMixin): # TODO : Add support to upload and download files # NOTE: Currently, you can only customize the CPU and Memory # by creating a pre customized sandbox template - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: CredentialsMetaInput[ Literal[ProviderName.E2B], Literal["api_key"] ] = CredentialsField( @@ -217,7 +223,7 @@ class ExecuteCodeBlock(Block, BaseE2BExecutorMixin): advanced=True, ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): main_result: MainCodeExecutionResult = SchemaField( title="Main Result", description="The main result from the code execution" ) @@ -232,7 +238,6 @@ class ExecuteCodeBlock(Block, BaseE2BExecutorMixin): description="Standard output logs from execution" ) stderr_logs: str = SchemaField(description="Standard error logs from execution") - error: str = SchemaField(description="Error message if execution failed") def __init__(self): super().__init__( @@ -296,7 +301,7 @@ class ExecuteCodeBlock(Block, BaseE2BExecutorMixin): class InstantiateCodeSandboxBlock(Block, BaseE2BExecutorMixin): - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: CredentialsMetaInput[ Literal[ProviderName.E2B], Literal["api_key"] ] = CredentialsField( @@ -346,7 +351,7 @@ class InstantiateCodeSandboxBlock(Block, BaseE2BExecutorMixin): advanced=True, ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): sandbox_id: str = SchemaField(description="ID of the sandbox instance") response: str = SchemaField( title="Text Result", @@ -356,7 +361,6 @@ class InstantiateCodeSandboxBlock(Block, BaseE2BExecutorMixin): description="Standard output logs from execution" ) stderr_logs: str = SchemaField(description="Standard error logs from execution") - error: str = SchemaField(description="Error message if execution failed") def __init__(self): super().__init__( @@ -421,7 +425,7 @@ class InstantiateCodeSandboxBlock(Block, BaseE2BExecutorMixin): class ExecuteCodeStepBlock(Block, BaseE2BExecutorMixin): - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: CredentialsMetaInput[ Literal[ProviderName.E2B], Literal["api_key"] ] = CredentialsField( @@ -454,7 +458,7 @@ class ExecuteCodeStepBlock(Block, BaseE2BExecutorMixin): default=False, ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): main_result: MainCodeExecutionResult = SchemaField( title="Main Result", description="The main result from the code execution" ) @@ -469,7 +473,6 @@ class ExecuteCodeStepBlock(Block, BaseE2BExecutorMixin): description="Standard output 
logs from execution"
         )
         stderr_logs: str = SchemaField(description="Standard error logs from execution")
-        error: str = SchemaField(description="Error message if execution failed")
 
     def __init__(self):
         super().__init__(
diff --git a/autogpt_platform/backend/backend/blocks/code_extraction_block.py b/autogpt_platform/backend/backend/blocks/code_extraction_block.py
index c421d40092..98f40c7a8b 100644
--- a/autogpt_platform/backend/backend/blocks/code_extraction_block.py
+++ b/autogpt_platform/backend/backend/blocks/code_extraction_block.py
@@ -1,17 +1,23 @@
 import re
 
-from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
+from backend.data.block import (
+    Block,
+    BlockCategory,
+    BlockOutput,
+    BlockSchemaInput,
+    BlockSchemaOutput,
+)
 from backend.data.model import SchemaField
 
 
 class CodeExtractionBlock(Block):
-    class Input(BlockSchema):
+    class Input(BlockSchemaInput):
         text: str = SchemaField(
             description="Text containing code blocks to extract (e.g., AI response)",
             placeholder="Enter text containing code blocks",
         )
 
-    class Output(BlockSchema):
+    class Output(BlockSchemaOutput):
         html: str = SchemaField(description="Extracted HTML code")
         css: str = SchemaField(description="Extracted CSS code")
         javascript: str = SchemaField(description="Extracted JavaScript code")
diff --git a/autogpt_platform/backend/backend/blocks/codex.py b/autogpt_platform/backend/backend/blocks/codex.py
new file mode 100644
index 0000000000..1b907cafce
--- /dev/null
+++ b/autogpt_platform/backend/backend/blocks/codex.py
@@ -0,0 +1,224 @@
+from dataclasses import dataclass
+from enum import Enum
+from typing import Any, Literal
+
+from openai import AsyncOpenAI
+from openai.types.responses import Response as OpenAIResponse
+from pydantic import SecretStr
+
+from backend.data.block import (
+    Block,
+    BlockCategory,
+    BlockOutput,
+    BlockSchemaInput,
+    BlockSchemaOutput,
+)
+from backend.data.model import (
+    APIKeyCredentials,
+    CredentialsField,
+    CredentialsMetaInput,
+    NodeExecutionStats,
+    SchemaField,
+)
+from backend.integrations.providers import ProviderName
+
+
+@dataclass
+class CodexCallResult:
+    """Structured response returned by Codex invocations."""
+
+    response: str
+    reasoning: str
+    response_id: str
+
+
+class CodexModel(str, Enum):
+    """Codex-capable OpenAI models."""
+
+    GPT5_1_CODEX = "gpt-5.1-codex"
+
+
+class CodexReasoningEffort(str, Enum):
+    """Configuration for the Responses API reasoning effort."""
+
+    NONE = "none"
+    LOW = "low"
+    MEDIUM = "medium"
+    HIGH = "high"
+
+
+CodexCredentials = CredentialsMetaInput[
+    Literal[ProviderName.OPENAI], Literal["api_key"]
+]
+
+TEST_CREDENTIALS = APIKeyCredentials(
+    id="e2fcb203-3f2d-4ad4-a344-8df3bc7db36b",
+    provider="openai",
+    api_key=SecretStr("mock-openai-api-key"),
+    title="Mock OpenAI API key",
+    expires_at=None,
+)
+TEST_CREDENTIALS_INPUT = {
+    "provider": TEST_CREDENTIALS.provider,
+    "id": TEST_CREDENTIALS.id,
+    "type": TEST_CREDENTIALS.type,
+    "title": TEST_CREDENTIALS.title,
+}
+
+
+def CodexCredentialsField() -> CodexCredentials:
+    return CredentialsField(
+        description="OpenAI API key with access to Codex models (Responses API).",
+    )
+
+
+class CodeGenerationBlock(Block):
+    """Block that talks to Codex models via the OpenAI Responses API."""
+
+    class Input(BlockSchemaInput):
+        prompt: str = SchemaField(
+            description="Primary coding request passed to the Codex model.",
+            placeholder="Generate a Python function that reverses a list.",
+        )
+        system_prompt: str = SchemaField(
+            title="System Prompt",
+            default=(
+                "You are Codex, an elite software engineer. "
+                "Favor concise, working code and highlight important caveats."
+            ),
+            description="Optional instructions injected via the Responses API instructions field.",
+            advanced=True,
+        )
+        model: CodexModel = SchemaField(
+            title="Codex Model",
+            default=CodexModel.GPT5_1_CODEX,
+            description="Codex-optimized model served via the Responses API.",
+            advanced=False,
+        )
+        reasoning_effort: CodexReasoningEffort = SchemaField(
+            title="Reasoning Effort",
+            default=CodexReasoningEffort.MEDIUM,
+            description="Controls the Responses API reasoning budget. Select 'none' to skip reasoning configs.",
+            advanced=True,
+        )
+        max_output_tokens: int | None = SchemaField(
+            title="Max Output Tokens",
+            default=2048,
+            description="Upper bound for generated tokens (hard limit 128,000). Leave blank to let OpenAI decide.",
+            advanced=True,
+        )
+        credentials: CodexCredentials = CodexCredentialsField()
+
+    class Output(BlockSchemaOutput):
+        response: str = SchemaField(
+            description="Code-focused response returned by the Codex model."
+        )
+        reasoning: str = SchemaField(
+            description="Reasoning summary returned by the model, if available.",
+            default="",
+        )
+        response_id: str = SchemaField(
+            description="ID of the Responses API call for auditing/debugging.",
+            default="",
+        )
+
+    def __init__(self):
+        super().__init__(
+            id="86a2a099-30df-47b4-b7e4-34ae5f83e0d5",
+            description="Generate or refactor code using OpenAI's Codex (Responses API).",
+            categories={BlockCategory.AI, BlockCategory.DEVELOPER_TOOLS},
+            input_schema=CodeGenerationBlock.Input,
+            output_schema=CodeGenerationBlock.Output,
+            test_input=[
+                {
+                    "prompt": "Write a TypeScript function that deduplicates an array.",
+                    "credentials": TEST_CREDENTIALS_INPUT,
+                }
+            ],
+            test_output=[
+                ("response", str),
+                ("reasoning", str),
+                ("response_id", str),
+            ],
+            test_mock={
+                "call_codex": lambda *_args, **_kwargs: CodexCallResult(
+                    response="function dedupe(items: T[]): T[] { return [...new Set(items)]; }",
+                    reasoning="Used Set to remove duplicates in O(n).",
+                    response_id="resp_test",
+                )
+            },
+            test_credentials=TEST_CREDENTIALS,
+        )
+        self.execution_stats = NodeExecutionStats()
+
+    async def call_codex(
+        self,
+        *,
+        credentials: APIKeyCredentials,
+        model: CodexModel,
+        prompt: str,
+        system_prompt: str,
+        max_output_tokens: int | None,
+        reasoning_effort: CodexReasoningEffort,
+    ) -> CodexCallResult:
+        """Invoke the OpenAI Responses API."""
+        client = AsyncOpenAI(api_key=credentials.api_key.get_secret_value())
+
+        request_payload: dict[str, Any] = {
+            "model": model.value,
+            "input": prompt,
+        }
+        if system_prompt:
+            request_payload["instructions"] = system_prompt
+        if max_output_tokens is not None:
+            request_payload["max_output_tokens"] = max_output_tokens
+        if reasoning_effort != CodexReasoningEffort.NONE:
+            request_payload["reasoning"] = {"effort": reasoning_effort.value}
+
+        response = await client.responses.create(**request_payload)
+        if not isinstance(response, OpenAIResponse):
+            raise TypeError(f"Expected OpenAIResponse, got {type(response).__name__}")
+
+        # Extract data directly from typed response
+        text_output = response.output_text or ""
+        reasoning_summary = (
+            str(response.reasoning.summary)
+            if response.reasoning and response.reasoning.summary
+            else ""
+        )
+        response_id = response.id or ""
+
+        # Update usage stats
+        self.execution_stats.input_token_count = (
+            response.usage.input_tokens if response.usage else 0
+        )
+        self.execution_stats.output_token_count = (
+            response.usage.output_tokens if response.usage else 0
+        )
+        self.execution_stats.llm_call_count += 1
+
+        return CodexCallResult(
+            response=text_output,
+            reasoning=reasoning_summary,
+            response_id=response_id,
+        )
+
+    async def run(
+        self,
+        input_data: Input,
+        *,
+        credentials: APIKeyCredentials,
+        **_kwargs,
+    ) -> BlockOutput:
+        result = await self.call_codex(
+            credentials=credentials,
+            model=input_data.model,
+            prompt=input_data.prompt,
+            system_prompt=input_data.system_prompt,
+            max_output_tokens=input_data.max_output_tokens,
+            reasoning_effort=input_data.reasoning_effort,
+        )
+
+        yield "response", result.response
+        yield "reasoning", result.reasoning
+        yield "response_id", result.response_id
diff --git a/autogpt_platform/backend/backend/blocks/compass/triggers.py b/autogpt_platform/backend/backend/blocks/compass/triggers.py
index 6eac52ce53..f6ac8dfd81 100644
--- a/autogpt_platform/backend/backend/blocks/compass/triggers.py
+++ b/autogpt_platform/backend/backend/blocks/compass/triggers.py
@@ -5,7 +5,8 @@ from backend.data.block import (
     BlockCategory,
     BlockManualWebhookConfig,
     BlockOutput,
-    BlockSchema,
+    BlockSchemaInput,
+    BlockSchemaOutput,
 )
 from backend.data.model import SchemaField
 from backend.integrations.providers import ProviderName
@@ -27,10 +28,10 @@ class TranscriptionDataModel(BaseModel):
 
 
 class CompassAITriggerBlock(Block):
-    class Input(BlockSchema):
+    class Input(BlockSchemaInput):
         payload: TranscriptionDataModel = SchemaField(hidden=True)
 
-    class Output(BlockSchema):
+    class Output(BlockSchemaOutput):
         transcription: str = SchemaField(
             description="The contents of the compass transcription."
         )
diff --git a/autogpt_platform/backend/backend/blocks/count_words_and_char_block.py b/autogpt_platform/backend/backend/blocks/count_words_and_char_block.py
index ddbcf07876..20a5077a2d 100644
--- a/autogpt_platform/backend/backend/blocks/count_words_and_char_block.py
+++ b/autogpt_platform/backend/backend/blocks/count_words_and_char_block.py
@@ -1,16 +1,22 @@
-from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
+from backend.data.block import (
+    Block,
+    BlockCategory,
+    BlockOutput,
+    BlockSchemaInput,
+    BlockSchemaOutput,
+)
 from backend.data.model import SchemaField
 
 
 class WordCharacterCountBlock(Block):
-    class Input(BlockSchema):
+    class Input(BlockSchemaInput):
         text: str = SchemaField(
             description="Input text to count words and characters",
             placeholder="Enter your text here",
             advanced=False,
         )
 
-    class Output(BlockSchema):
+    class Output(BlockSchemaOutput):
         word_count: int = SchemaField(description="Number of words in the input text")
         character_count: int = SchemaField(
             description="Number of characters in the input text"
diff --git a/autogpt_platform/backend/backend/blocks/data_manipulation.py b/autogpt_platform/backend/backend/blocks/data_manipulation.py
index ca674519d2..94921e26c0 100644
--- a/autogpt_platform/backend/backend/blocks/data_manipulation.py
+++ b/autogpt_platform/backend/backend/blocks/data_manipulation.py
@@ -1,6 +1,12 @@
 from typing import Any, List
 
-from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
+from backend.data.block import (
+    Block,
+    BlockCategory,
+    BlockOutput,
+    BlockSchemaInput,
+    BlockSchemaOutput,
+)
 from backend.data.model import SchemaField
 from backend.util.json import loads
 from backend.util.mock import MockObject
@@ -12,13 +18,13 @@ from backend.util.prompt import estimate_token_count_str
 
 
 class CreateDictionaryBlock(Block):
-    class Input(BlockSchema):
+    class Input(BlockSchemaInput):
         values: dict[str, Any] =
SchemaField( description="Key-value pairs to create the dictionary with", placeholder="e.g., {'name': 'Alice', 'age': 25}", ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): dictionary: dict[str, Any] = SchemaField( description="The created dictionary containing the specified key-value pairs" ) @@ -62,7 +68,7 @@ class CreateDictionaryBlock(Block): class AddToDictionaryBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): dictionary: dict[Any, Any] = SchemaField( default_factory=dict, description="The dictionary to add the entry to. If not provided, a new dictionary will be created.", @@ -86,11 +92,10 @@ class AddToDictionaryBlock(Block): advanced=True, ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): updated_dictionary: dict = SchemaField( description="The dictionary with the new entry added." ) - error: str = SchemaField(description="Error message if the operation failed.") def __init__(self): super().__init__( @@ -141,11 +146,11 @@ class AddToDictionaryBlock(Block): class FindInDictionaryBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): input: Any = SchemaField(description="Dictionary to lookup from") key: str | int = SchemaField(description="Key to lookup in the dictionary") - class Output(BlockSchema): + class Output(BlockSchemaOutput): output: Any = SchemaField(description="Value found for the given key") missing: Any = SchemaField( description="Value of the input that missing the key" @@ -201,7 +206,7 @@ class FindInDictionaryBlock(Block): class RemoveFromDictionaryBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): dictionary: dict[Any, Any] = SchemaField( description="The dictionary to modify." ) @@ -210,12 +215,11 @@ class RemoveFromDictionaryBlock(Block): default=False, description="Whether to return the removed value." ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): updated_dictionary: dict[Any, Any] = SchemaField( description="The dictionary after removal." ) removed_value: Any = SchemaField(description="The removed value if requested.") - error: str = SchemaField(description="Error message if the operation failed.") def __init__(self): super().__init__( @@ -251,19 +255,18 @@ class RemoveFromDictionaryBlock(Block): class ReplaceDictionaryValueBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): dictionary: dict[Any, Any] = SchemaField( description="The dictionary to modify." ) key: str | int = SchemaField(description="Key to replace the value for.") value: Any = SchemaField(description="The new value for the given key.") - class Output(BlockSchema): + class Output(BlockSchemaOutput): updated_dictionary: dict[Any, Any] = SchemaField( description="The dictionary after replacement." 
) old_value: Any = SchemaField(description="The value that was replaced.") - error: str = SchemaField(description="Error message if the operation failed.") def __init__(self): super().__init__( @@ -300,10 +303,10 @@ class ReplaceDictionaryValueBlock(Block): class DictionaryIsEmptyBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): dictionary: dict[Any, Any] = SchemaField(description="The dictionary to check.") - class Output(BlockSchema): + class Output(BlockSchemaOutput): is_empty: bool = SchemaField(description="True if the dictionary is empty.") def __init__(self): @@ -327,7 +330,7 @@ class DictionaryIsEmptyBlock(Block): class CreateListBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): values: List[Any] = SchemaField( description="A list of values to be combined into a new list.", placeholder="e.g., ['Alice', 25, True]", @@ -343,11 +346,10 @@ class CreateListBlock(Block): advanced=True, ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): list: List[Any] = SchemaField( description="The created list containing the specified values." ) - error: str = SchemaField(description="Error message if list creation failed.") def __init__(self): super().__init__( @@ -404,7 +406,7 @@ class CreateListBlock(Block): class AddToListBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): list: List[Any] = SchemaField( default_factory=list, advanced=False, @@ -425,11 +427,10 @@ class AddToListBlock(Block): description="The position to insert the new entry. If not provided, the entry will be appended to the end of the list.", ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): updated_list: List[Any] = SchemaField( description="The list with the new entry added." ) - error: str = SchemaField(description="Error message if the operation failed.") def __init__(self): super().__init__( @@ -484,11 +485,11 @@ class AddToListBlock(Block): class FindInListBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): list: List[Any] = SchemaField(description="The list to search in.") value: Any = SchemaField(description="The value to search for.") - class Output(BlockSchema): + class Output(BlockSchemaOutput): index: int = SchemaField(description="The index of the value in the list.") found: bool = SchemaField( description="Whether the value was found in the list." @@ -526,15 +527,14 @@ class FindInListBlock(Block): class GetListItemBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): list: List[Any] = SchemaField(description="The list to get the item from.") index: int = SchemaField( description="The 0-based index of the item (supports negative indices)." ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): item: Any = SchemaField(description="The item at the specified index.") - error: str = SchemaField(description="Error message if the operation failed.") def __init__(self): super().__init__( @@ -561,7 +561,7 @@ class GetListItemBlock(Block): class RemoveFromListBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): list: List[Any] = SchemaField(description="The list to modify.") value: Any = SchemaField( default=None, description="Value to remove from the list." @@ -574,10 +574,9 @@ class RemoveFromListBlock(Block): default=False, description="Whether to return the removed item." 
) - class Output(BlockSchema): + class Output(BlockSchemaOutput): updated_list: List[Any] = SchemaField(description="The list after removal.") removed_item: Any = SchemaField(description="The removed item if requested.") - error: str = SchemaField(description="Error message if the operation failed.") def __init__(self): super().__init__( @@ -618,17 +617,16 @@ class RemoveFromListBlock(Block): class ReplaceListItemBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): list: List[Any] = SchemaField(description="The list to modify.") index: int = SchemaField( description="Index of the item to replace (supports negative indices)." ) value: Any = SchemaField(description="The new value for the given index.") - class Output(BlockSchema): + class Output(BlockSchemaOutput): updated_list: List[Any] = SchemaField(description="The list after replacement.") old_item: Any = SchemaField(description="The item that was replaced.") - error: str = SchemaField(description="Error message if the operation failed.") def __init__(self): super().__init__( @@ -663,10 +661,10 @@ class ReplaceListItemBlock(Block): class ListIsEmptyBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): list: List[Any] = SchemaField(description="The list to check.") - class Output(BlockSchema): + class Output(BlockSchemaOutput): is_empty: bool = SchemaField(description="True if the list is empty.") def __init__(self): diff --git a/autogpt_platform/backend/backend/blocks/dataforseo/keyword_suggestions.py b/autogpt_platform/backend/backend/blocks/dataforseo/keyword_suggestions.py index 1a04f8e598..a1ecc86386 100644 --- a/autogpt_platform/backend/backend/blocks/dataforseo/keyword_suggestions.py +++ b/autogpt_platform/backend/backend/blocks/dataforseo/keyword_suggestions.py @@ -8,7 +8,8 @@ from backend.sdk import ( Block, BlockCategory, BlockOutput, - BlockSchema, + BlockSchemaInput, + BlockSchemaOutput, CredentialsMetaInput, SchemaField, UserPasswordCredentials, @@ -18,7 +19,7 @@ from ._api import DataForSeoClient from ._config import dataforseo -class KeywordSuggestion(BlockSchema): +class KeywordSuggestion(BlockSchemaInput): """Schema for a keyword suggestion result.""" keyword: str = SchemaField(description="The keyword suggestion") @@ -45,7 +46,7 @@ class KeywordSuggestion(BlockSchema): class DataForSeoKeywordSuggestionsBlock(Block): """Block for getting keyword suggestions from DataForSEO Labs.""" - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: CredentialsMetaInput = dataforseo.credentials_field( description="DataForSEO credentials (username and password)" ) @@ -77,7 +78,7 @@ class DataForSeoKeywordSuggestionsBlock(Block): le=3000, ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): suggestions: List[KeywordSuggestion] = SchemaField( description="List of keyword suggestions with metrics" ) @@ -90,7 +91,6 @@ class DataForSeoKeywordSuggestionsBlock(Block): seed_keyword: str = SchemaField( description="The seed keyword used for the query" ) - error: str = SchemaField(description="Error message if the API call failed") def __init__(self): super().__init__( @@ -213,12 +213,12 @@ class DataForSeoKeywordSuggestionsBlock(Block): class KeywordSuggestionExtractorBlock(Block): """Extracts individual fields from a KeywordSuggestion object.""" - class Input(BlockSchema): + class Input(BlockSchemaInput): suggestion: KeywordSuggestion = SchemaField( description="The keyword suggestion object to extract fields from" ) - class Output(BlockSchema): + class 
Output(BlockSchemaOutput): keyword: str = SchemaField(description="The keyword suggestion") search_volume: Optional[int] = SchemaField( description="Monthly search volume", default=None diff --git a/autogpt_platform/backend/backend/blocks/dataforseo/related_keywords.py b/autogpt_platform/backend/backend/blocks/dataforseo/related_keywords.py index f0c26c5b06..7a7fbdd11a 100644 --- a/autogpt_platform/backend/backend/blocks/dataforseo/related_keywords.py +++ b/autogpt_platform/backend/backend/blocks/dataforseo/related_keywords.py @@ -8,7 +8,8 @@ from backend.sdk import ( Block, BlockCategory, BlockOutput, - BlockSchema, + BlockSchemaInput, + BlockSchemaOutput, CredentialsMetaInput, SchemaField, UserPasswordCredentials, @@ -18,7 +19,7 @@ from ._api import DataForSeoClient from ._config import dataforseo -class RelatedKeyword(BlockSchema): +class RelatedKeyword(BlockSchemaInput): """Schema for a related keyword result.""" keyword: str = SchemaField(description="The related keyword") @@ -45,7 +46,7 @@ class RelatedKeyword(BlockSchema): class DataForSeoRelatedKeywordsBlock(Block): """Block for getting related keywords from DataForSEO Labs.""" - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: CredentialsMetaInput = dataforseo.credentials_field( description="DataForSEO credentials (username and password)" ) @@ -85,7 +86,7 @@ class DataForSeoRelatedKeywordsBlock(Block): le=4, ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): related_keywords: List[RelatedKeyword] = SchemaField( description="List of related keywords with metrics" ) @@ -98,7 +99,6 @@ class DataForSeoRelatedKeywordsBlock(Block): seed_keyword: str = SchemaField( description="The seed keyword used for the query" ) - error: str = SchemaField(description="Error message if the API call failed") def __init__(self): super().__init__( @@ -231,12 +231,12 @@ class DataForSeoRelatedKeywordsBlock(Block): class RelatedKeywordExtractorBlock(Block): """Extracts individual fields from a RelatedKeyword object.""" - class Input(BlockSchema): + class Input(BlockSchemaInput): related_keyword: RelatedKeyword = SchemaField( description="The related keyword object to extract fields from" ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): keyword: str = SchemaField(description="The related keyword") search_volume: Optional[int] = SchemaField( description="Monthly search volume", default=None diff --git a/autogpt_platform/backend/backend/blocks/decoder_block.py b/autogpt_platform/backend/backend/blocks/decoder_block.py index 754d79b068..7a7406bd1a 100644 --- a/autogpt_platform/backend/backend/blocks/decoder_block.py +++ b/autogpt_platform/backend/backend/blocks/decoder_block.py @@ -1,17 +1,23 @@ import codecs -from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema +from backend.data.block import ( + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, +) from backend.data.model import SchemaField class TextDecoderBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): text: str = SchemaField( description="A string containing escaped characters to be decoded", placeholder='Your entire text block with \\n and \\" escaped characters', ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): decoded_text: str = SchemaField( description="The decoded text with escape sequences processed" ) diff --git a/autogpt_platform/backend/backend/blocks/discord/bot_blocks.py b/autogpt_platform/backend/backend/blocks/discord/bot_blocks.py index 
a0fde74e69..5ecd730f47 100644 --- a/autogpt_platform/backend/backend/blocks/discord/bot_blocks.py +++ b/autogpt_platform/backend/backend/blocks/discord/bot_blocks.py @@ -1,16 +1,23 @@ import base64 import io import mimetypes +from enum import Enum from pathlib import Path -from typing import Any +from typing import Any, Literal, cast -import aiohttp import discord from pydantic import SecretStr -from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema +from backend.data.block import ( + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, +) from backend.data.model import APIKeyCredentials, SchemaField from backend.util.file import store_media_file +from backend.util.request import Requests from backend.util.type import MediaFileType from ._auth import ( @@ -27,11 +34,24 @@ TEST_CREDENTIALS = TEST_BOT_CREDENTIALS TEST_CREDENTIALS_INPUT = TEST_BOT_CREDENTIALS_INPUT +class ThreadArchiveDuration(str, Enum): + """Discord thread auto-archive duration options""" + + ONE_HOUR = "60" + ONE_DAY = "1440" + THREE_DAYS = "4320" + ONE_WEEK = "10080" + + def to_minutes(self) -> int: + """Convert the duration string to minutes for Discord API""" + return int(self.value) + + class ReadDiscordMessagesBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: DiscordCredentials = DiscordCredentialsField() - class Output(BlockSchema): + class Output(BlockSchemaOutput): message_content: str = SchemaField( description="The content of the message received" ) @@ -114,10 +134,9 @@ class ReadDiscordMessagesBlock(Block): if message.attachments: attachment = message.attachments[0] # Process the first attachment if attachment.filename.endswith((".txt", ".py")): - async with aiohttp.ClientSession() as session: - async with session.get(attachment.url) as response: - file_content = response.text() - self.output_data += f"\n\nFile from user: {attachment.filename}\nContent: {file_content}" + response = await Requests().get(attachment.url) + file_content = response.text() + self.output_data += f"\n\nFile from user: {attachment.filename}\nContent: {file_content}" await client.close() @@ -165,7 +184,7 @@ class ReadDiscordMessagesBlock(Block): class SendDiscordMessageBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: DiscordCredentials = DiscordCredentialsField() message_content: str = SchemaField( description="The content of the message to send" @@ -179,7 +198,7 @@ class SendDiscordMessageBlock(Block): default="", ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): status: str = SchemaField( description="The status of the operation (e.g., 'Message sent', 'Error')" ) @@ -311,7 +330,7 @@ class SendDiscordMessageBlock(Block): class SendDiscordDMBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: DiscordCredentials = DiscordCredentialsField() user_id: str = SchemaField( description="The Discord user ID to send the DM to (e.g., '123456789012345678')" @@ -320,7 +339,7 @@ class SendDiscordDMBlock(Block): description="The content of the direct message to send" ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): status: str = SchemaField(description="The status of the operation") message_id: str = SchemaField(description="The ID of the sent message") @@ -400,7 +419,7 @@ class SendDiscordDMBlock(Block): class SendDiscordEmbedBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: DiscordCredentials = DiscordCredentialsField() 
channel_identifier: str = SchemaField( description="Channel ID or channel name to send the embed to" @@ -437,7 +456,7 @@ class SendDiscordEmbedBlock(Block): default=[], ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): status: str = SchemaField(description="Operation status") message_id: str = SchemaField(description="ID of the sent embed message") @@ -587,7 +606,7 @@ class SendDiscordEmbedBlock(Block): class SendDiscordFileBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: DiscordCredentials = DiscordCredentialsField() channel_identifier: str = SchemaField( description="Channel ID or channel name to send the file to" @@ -608,7 +627,7 @@ class SendDiscordFileBlock(Block): description="Optional message to send with the file", default="" ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): status: str = SchemaField(description="Operation status") message_id: str = SchemaField(description="ID of the sent message") @@ -699,16 +718,15 @@ class SendDiscordFileBlock(Block): elif file.startswith(("http://", "https://")): # URL - download the file - async with aiohttp.ClientSession() as session: - async with session.get(file) as response: - file_bytes = await response.read() + response = await Requests().get(file) + file_bytes = response.content - # Try to get filename from URL if not provided - if not filename: - from urllib.parse import urlparse + # Try to get filename from URL if not provided + if not filename: + from urllib.parse import urlparse - path = urlparse(file).path - detected_filename = Path(path).name or "download" + path = urlparse(file).path + detected_filename = Path(path).name or "download" else: # Local file path - read from stored media file # This would be a path from a previous block's output @@ -790,7 +808,7 @@ class SendDiscordFileBlock(Block): class ReplyToDiscordMessageBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: DiscordCredentials = DiscordCredentialsField() channel_id: str = SchemaField( description="The channel ID where the message to reply to is located" @@ -801,7 +819,7 @@ class ReplyToDiscordMessageBlock(Block): description="Whether to mention the original message author", default=True ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): status: str = SchemaField(description="Operation status") reply_id: str = SchemaField(description="ID of the reply message") @@ -915,13 +933,13 @@ class ReplyToDiscordMessageBlock(Block): class DiscordUserInfoBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: DiscordCredentials = DiscordCredentialsField() user_id: str = SchemaField( description="The Discord user ID to get information about" ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): user_id: str = SchemaField( description="The user's ID (passed through for chaining)" ) @@ -1032,7 +1050,7 @@ class DiscordUserInfoBlock(Block): class DiscordChannelInfoBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: DiscordCredentials = DiscordCredentialsField() channel_identifier: str = SchemaField( description="Channel name or channel ID to look up" @@ -1043,7 +1061,7 @@ class DiscordChannelInfoBlock(Block): default="", ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): channel_id: str = SchemaField(description="The channel's ID") channel_name: str = SchemaField(description="The channel's name") server_id: str = SchemaField(description="The server's ID") @@ -1162,3 
+1180,211 @@ class DiscordChannelInfoBlock(Block): raise ValueError(f"Login error occurred: {login_err}") except Exception as e: raise ValueError(f"An error occurred: {e}") + + +class CreateDiscordThreadBlock(Block): + class Input(BlockSchemaInput): + credentials: DiscordCredentials = DiscordCredentialsField() + channel_name: str = SchemaField( + description="Channel ID or channel name to create the thread in" + ) + server_name: str = SchemaField( + description="Server name (only needed if using channel name)", + advanced=True, + default="", + ) + thread_name: str = SchemaField(description="The name of the thread to create") + is_private: bool = SchemaField( + description="Whether to create a private thread (requires Boost Level 2+) or public thread", + default=False, + ) + auto_archive_duration: ThreadArchiveDuration = SchemaField( + description="Duration before the thread is automatically archived", + advanced=True, + default=ThreadArchiveDuration.ONE_WEEK, + ) + message_content: str = SchemaField( + description="Optional initial message to send in the thread", + advanced=True, + default="", + ) + + class Output(BlockSchemaOutput): + status: str = SchemaField(description="Operation status") + thread_id: str = SchemaField(description="ID of the created thread") + thread_name: str = SchemaField(description="Name of the created thread") + + def __init__(self): + super().__init__( + id="e8f3c9a2-7b5d-4f1e-9c6a-3d8e2b4f7a1c", + input_schema=CreateDiscordThreadBlock.Input, + output_schema=CreateDiscordThreadBlock.Output, + description="Creates a new thread in a Discord channel.", + categories={BlockCategory.SOCIAL}, + test_input={ + "channel_name": "general", + "thread_name": "Test Thread", + "is_private": False, + "auto_archive_duration": ThreadArchiveDuration.ONE_HOUR, + "credentials": TEST_CREDENTIALS_INPUT, + }, + test_output=[ + ("status", "Thread created successfully"), + ("thread_id", "123456789012345678"), + ("thread_name", "Test Thread"), + ], + test_mock={ + "create_thread": lambda *args, **kwargs: { + "status": "Thread created successfully", + "thread_id": "123456789012345678", + "thread_name": "Test Thread", + } + }, + test_credentials=TEST_CREDENTIALS, + ) + + async def create_thread( + self, + token: str, + channel_name: str, + server_name: str | None, + thread_name: str, + is_private: bool, + auto_archive_duration: ThreadArchiveDuration, + message_content: str, + ) -> dict: + intents = discord.Intents.default() + intents.guilds = True + intents.message_content = True # Required for sending messages in threads + client = discord.Client(intents=intents) + + result = {} + + @client.event + async def on_ready(): + channel = None + + # Try to parse as channel ID first + try: + channel_id = int(channel_name) + try: + channel = await client.fetch_channel(channel_id) + except discord.errors.NotFound: + result["status"] = f"Channel with ID {channel_id} not found" + await client.close() + return + except discord.errors.Forbidden: + result["status"] = ( + f"Bot does not have permission to view channel {channel_id}" + ) + await client.close() + return + except ValueError: + # Not an ID, treat as channel name + # Collect all matching channels to detect duplicates + matching_channels = [] + for guild in client.guilds: + # Skip guilds if server_name is provided and doesn't match + if ( + server_name + and server_name.strip() + and guild.name != server_name + ): + continue + for ch in guild.text_channels: + if ch.name == channel_name: + matching_channels.append(ch) + + if not matching_channels: 
+ result["status"] = f"Channel not found: {channel_name}" + await client.close() + return + elif len(matching_channels) > 1: + result["status"] = ( + f"Multiple channels named '{channel_name}' found. " + "Please specify server_name to disambiguate." + ) + await client.close() + return + else: + channel = matching_channels[0] + + if not channel: + result["status"] = "Failed to resolve channel" + await client.close() + return + + # Type check - ensure it's a text channel that can create threads + if not hasattr(channel, "create_thread"): + result["status"] = ( + f"Channel {channel_name} cannot create threads (not a text channel)" + ) + await client.close() + return + + # After the hasattr check, we know channel is a TextChannel + channel = cast(discord.TextChannel, channel) + + try: + # Create the thread using discord.py 2.0+ API + thread_type = ( + discord.ChannelType.private_thread + if is_private + else discord.ChannelType.public_thread + ) + + # Cast to the specific Literal type that discord.py expects + duration_minutes = cast( + Literal[60, 1440, 4320, 10080], auto_archive_duration.to_minutes() + ) + + # The 'type' parameter exists in discord.py 2.0+ but isn't in type stubs yet + # pyright: ignore[reportCallIssue] + thread = await channel.create_thread( + name=thread_name, + type=thread_type, + auto_archive_duration=duration_minutes, + ) + + # Send initial message if provided + if message_content: + await thread.send(message_content) + + result["status"] = "Thread created successfully" + result["thread_id"] = str(thread.id) + result["thread_name"] = thread.name + + except discord.errors.Forbidden as e: + result["status"] = ( + f"Bot does not have permission to create threads in this channel. {str(e)}" + ) + except Exception as e: + result["status"] = f"Error creating thread: {str(e)}" + finally: + await client.close() + + await client.start(token) + return result + + async def run( + self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs + ) -> BlockOutput: + try: + result = await self.create_thread( + token=credentials.api_key.get_secret_value(), + channel_name=input_data.channel_name, + server_name=input_data.server_name or None, + thread_name=input_data.thread_name, + is_private=input_data.is_private, + auto_archive_duration=input_data.auto_archive_duration, + message_content=input_data.message_content, + ) + + yield "status", result.get("status", "Unknown error") + if "thread_id" in result: + yield "thread_id", result["thread_id"] + if "thread_name" in result: + yield "thread_name", result["thread_name"] + + except discord.errors.LoginFailure as login_err: + raise ValueError(f"Login error occurred: {login_err}") diff --git a/autogpt_platform/backend/backend/blocks/discord/oauth_blocks.py b/autogpt_platform/backend/backend/blocks/discord/oauth_blocks.py index 31d2df65c2..ca20eb6337 100644 --- a/autogpt_platform/backend/backend/blocks/discord/oauth_blocks.py +++ b/autogpt_platform/backend/backend/blocks/discord/oauth_blocks.py @@ -2,7 +2,13 @@ Discord OAuth-based blocks. """ -from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema +from backend.data.block import ( + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, +) from backend.data.model import OAuth2Credentials, SchemaField from ._api import DiscordOAuthUser, get_current_user @@ -21,12 +27,12 @@ class DiscordGetCurrentUserBlock(Block): This block requires Discord OAuth2 credentials (not bot tokens). 
""" - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: DiscordOAuthCredentialsInput = DiscordOAuthCredentialsField( ["identify"] ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): user_id: str = SchemaField(description="The authenticated user's Discord ID") username: str = SchemaField(description="The user's username") avatar_url: str = SchemaField(description="URL to the user's avatar image") diff --git a/autogpt_platform/backend/backend/blocks/email_block.py b/autogpt_platform/backend/backend/blocks/email_block.py index 3738bf0de8..fad2f411cb 100644 --- a/autogpt_platform/backend/backend/blocks/email_block.py +++ b/autogpt_platform/backend/backend/blocks/email_block.py @@ -1,11 +1,19 @@ import smtplib +import socket +import ssl from email.mime.multipart import MIMEMultipart from email.mime.text import MIMEText from typing import Literal from pydantic import BaseModel, ConfigDict, SecretStr -from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema +from backend.data.block import ( + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, +) from backend.data.model import ( CredentialsField, CredentialsMetaInput, @@ -42,16 +50,14 @@ def SMTPCredentialsField() -> SMTPCredentialsInput: class SMTPConfig(BaseModel): - smtp_server: str = SchemaField( - default="smtp.example.com", description="SMTP server address" - ) + smtp_server: str = SchemaField(description="SMTP server address") smtp_port: int = SchemaField(default=25, description="SMTP port number") model_config = ConfigDict(title="SMTP Config") class SendEmailBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): to_email: str = SchemaField( description="Recipient email address", placeholder="recipient@example.com" ) @@ -61,13 +67,10 @@ class SendEmailBlock(Block): body: str = SchemaField( description="Body of the email", placeholder="Enter the email body" ) - config: SMTPConfig = SchemaField( - description="SMTP Config", - default=SMTPConfig(), - ) + config: SMTPConfig = SchemaField(description="SMTP Config") credentials: SMTPCredentialsInput = SMTPCredentialsField() - class Output(BlockSchema): + class Output(BlockSchemaOutput): status: str = SchemaField(description="Status of the email sending operation") error: str = SchemaField( description="Error message if the email sending failed" @@ -114,7 +117,7 @@ class SendEmailBlock(Block): msg["Subject"] = subject msg.attach(MIMEText(body, "plain")) - with smtplib.SMTP(smtp_server, smtp_port) as server: + with smtplib.SMTP(smtp_server, smtp_port, timeout=30) as server: server.starttls() server.login(smtp_username, smtp_password) server.sendmail(smtp_username, to_email, msg.as_string()) @@ -124,10 +127,59 @@ class SendEmailBlock(Block): async def run( self, input_data: Input, *, credentials: SMTPCredentials, **kwargs ) -> BlockOutput: - yield "status", self.send_email( - config=input_data.config, - to_email=input_data.to_email, - subject=input_data.subject, - body=input_data.body, - credentials=credentials, - ) + try: + status = self.send_email( + config=input_data.config, + to_email=input_data.to_email, + subject=input_data.subject, + body=input_data.body, + credentials=credentials, + ) + yield "status", status + except socket.gaierror: + yield "error", ( + f"Cannot connect to SMTP server '{input_data.config.smtp_server}'. " + "Please verify the server address is correct." 
+ ) + except socket.timeout: + yield "error", ( + f"Connection timeout to '{input_data.config.smtp_server}' " + f"on port {input_data.config.smtp_port}. " + "The server may be down or unreachable." + ) + except ConnectionRefusedError: + yield "error", ( + f"Connection refused to '{input_data.config.smtp_server}' " + f"on port {input_data.config.smtp_port}. " + "Common SMTP ports are: 587 (TLS), 465 (SSL), 25 (plain). " + "Please verify the port is correct." + ) + except smtplib.SMTPNotSupportedError: + yield "error", ( + f"STARTTLS not supported by server '{input_data.config.smtp_server}'. " + "Try using port 465 for SSL or port 25 for unencrypted connection." + ) + except ssl.SSLError as e: + yield "error", ( + f"SSL/TLS error when connecting to '{input_data.config.smtp_server}': {str(e)}. " + "The server may require a different security protocol." + ) + except smtplib.SMTPAuthenticationError: + yield "error", ( + "Authentication failed. Please verify your username and password are correct." + ) + except smtplib.SMTPRecipientsRefused: + yield "error", ( + f"Recipient email address '{input_data.to_email}' was rejected by the server. " + "Please verify the email address is valid." + ) + except smtplib.SMTPSenderRefused: + yield "error", ( + "Sender email address defined in the credentials that were used " + "was rejected by the server. " + "Please verify your account is authorized to send emails." + ) + except smtplib.SMTPDataError as e: + yield "error", f"Email data rejected by server: {str(e)}" + except Exception as e: + raise e diff --git a/autogpt_platform/backend/backend/blocks/enrichlayer/linkedin.py b/autogpt_platform/backend/backend/blocks/enrichlayer/linkedin.py index 52d593eb0e..974ad28eed 100644 --- a/autogpt_platform/backend/backend/blocks/enrichlayer/linkedin.py +++ b/autogpt_platform/backend/backend/blocks/enrichlayer/linkedin.py @@ -8,7 +8,13 @@ which provides access to LinkedIn profile data and related information.
import logging from typing import Optional -from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema +from backend.data.block import ( + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, +) from backend.data.model import APIKeyCredentials, CredentialsField, SchemaField from backend.util.type import MediaFileType @@ -29,7 +35,7 @@ logger = logging.getLogger(__name__) class GetLinkedinProfileBlock(Block): """Block to fetch LinkedIn profile data using Enrichlayer API.""" - class Input(BlockSchema): + class Input(BlockSchemaInput): """Input schema for GetLinkedinProfileBlock.""" linkedin_url: str = SchemaField( @@ -80,13 +86,12 @@ class GetLinkedinProfileBlock(Block): description="Enrichlayer API credentials" ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): """Output schema for GetLinkedinProfileBlock.""" profile: PersonProfileResponse = SchemaField( description="LinkedIn profile data" ) - error: str = SchemaField(description="Error message if the request failed") def __init__(self): """Initialize GetLinkedinProfileBlock.""" @@ -199,7 +204,7 @@ class GetLinkedinProfileBlock(Block): class LinkedinPersonLookupBlock(Block): """Block to look up LinkedIn profiles by person's information using Enrichlayer API.""" - class Input(BlockSchema): + class Input(BlockSchemaInput): """Input schema for LinkedinPersonLookupBlock.""" first_name: str = SchemaField( @@ -242,13 +247,12 @@ class LinkedinPersonLookupBlock(Block): description="Enrichlayer API credentials" ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): """Output schema for LinkedinPersonLookupBlock.""" lookup_result: PersonLookupResponse = SchemaField( description="LinkedIn profile lookup result" ) - error: str = SchemaField(description="Error message if the request failed") def __init__(self): """Initialize LinkedinPersonLookupBlock.""" @@ -346,7 +350,7 @@ class LinkedinPersonLookupBlock(Block): class LinkedinRoleLookupBlock(Block): """Block to look up LinkedIn profiles by role in a company using Enrichlayer API.""" - class Input(BlockSchema): + class Input(BlockSchemaInput): """Input schema for LinkedinRoleLookupBlock.""" role: str = SchemaField( @@ -366,13 +370,12 @@ class LinkedinRoleLookupBlock(Block): description="Enrichlayer API credentials" ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): """Output schema for LinkedinRoleLookupBlock.""" role_lookup_result: RoleLookupResponse = SchemaField( description="LinkedIn role lookup result" ) - error: str = SchemaField(description="Error message if the request failed") def __init__(self): """Initialize LinkedinRoleLookupBlock.""" @@ -449,7 +452,7 @@ class LinkedinRoleLookupBlock(Block): class GetLinkedinProfilePictureBlock(Block): """Block to get LinkedIn profile pictures using Enrichlayer API.""" - class Input(BlockSchema): + class Input(BlockSchemaInput): """Input schema for GetLinkedinProfilePictureBlock.""" linkedin_profile_url: str = SchemaField( @@ -460,13 +463,12 @@ class GetLinkedinProfilePictureBlock(Block): description="Enrichlayer API credentials" ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): """Output schema for GetLinkedinProfilePictureBlock.""" profile_picture_url: MediaFileType = SchemaField( description="LinkedIn profile picture URL" ) - error: str = SchemaField(description="Error message if the request failed") def __init__(self): """Initialize GetLinkedinProfilePictureBlock.""" diff --git a/autogpt_platform/backend/backend/blocks/exa/_test.py 
b/autogpt_platform/backend/backend/blocks/exa/_test.py new file mode 100644 index 0000000000..4ab5d4f9ef --- /dev/null +++ b/autogpt_platform/backend/backend/blocks/exa/_test.py @@ -0,0 +1,22 @@ +""" +Test credentials and helpers for Exa blocks. +""" + +from pydantic import SecretStr + +from backend.data.model import APIKeyCredentials + +TEST_CREDENTIALS = APIKeyCredentials( + id="01234567-89ab-cdef-0123-456789abcdef", + provider="exa", + api_key=SecretStr("mock-exa-api-key"), + title="Mock Exa API key", + expires_at=None, +) + +TEST_CREDENTIALS_INPUT = { + "provider": TEST_CREDENTIALS.provider, + "id": TEST_CREDENTIALS.id, + "type": TEST_CREDENTIALS.type, + "title": TEST_CREDENTIALS.title, +} diff --git a/autogpt_platform/backend/backend/blocks/exa/answers.py b/autogpt_platform/backend/backend/blocks/exa/answers.py index fa3f6b403f..9033d6b5f8 100644 --- a/autogpt_platform/backend/backend/blocks/exa/answers.py +++ b/autogpt_platform/backend/backend/blocks/exa/answers.py @@ -1,55 +1,59 @@ +from typing import Optional + +from exa_py import AsyncExa +from exa_py.api import AnswerResponse +from pydantic import BaseModel + from backend.sdk import ( APIKeyCredentials, - BaseModel, Block, BlockCategory, BlockOutput, - BlockSchema, + BlockSchemaInput, + BlockSchemaOutput, CredentialsMetaInput, - Requests, + MediaFileType, SchemaField, ) from ._config import exa -class CostBreakdown(BaseModel): - keywordSearch: float - neuralSearch: float - contentText: float - contentHighlight: float - contentSummary: float +class AnswerCitation(BaseModel): + """Citation model for answer endpoint.""" + id: str = SchemaField(description="The temporary ID for the document") + url: str = SchemaField(description="The URL of the search result") + title: Optional[str] = SchemaField(description="The title of the search result") + author: Optional[str] = SchemaField(description="The author of the content") + publishedDate: Optional[str] = SchemaField( + description="An estimate of the creation date" + ) + text: Optional[str] = SchemaField(description="The full text content of the source") + image: Optional[MediaFileType] = SchemaField( + description="The URL of the image associated with the result" + ) + favicon: Optional[MediaFileType] = SchemaField( + description="The URL of the favicon for the domain" + ) -class SearchBreakdown(BaseModel): - search: float - contents: float - breakdown: CostBreakdown - - -class PerRequestPrices(BaseModel): - neuralSearch_1_25_results: float - neuralSearch_26_100_results: float - neuralSearch_100_plus_results: float - keywordSearch_1_100_results: float - keywordSearch_100_plus_results: float - - -class PerPagePrices(BaseModel): - contentText: float - contentHighlight: float - contentSummary: float - - -class CostDollars(BaseModel): - total: float - breakDown: list[SearchBreakdown] - perRequestPrices: PerRequestPrices - perPagePrices: PerPagePrices + @classmethod + def from_sdk(cls, sdk_citation) -> "AnswerCitation": + """Convert SDK AnswerResult (dataclass) to our Pydantic model.""" + return cls( + id=getattr(sdk_citation, "id", ""), + url=getattr(sdk_citation, "url", ""), + title=getattr(sdk_citation, "title", None), + author=getattr(sdk_citation, "author", None), + publishedDate=getattr(sdk_citation, "published_date", None), + text=getattr(sdk_citation, "text", None), + image=getattr(sdk_citation, "image", None), + favicon=getattr(sdk_citation, "favicon", None), + ) class ExaAnswerBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: 
CredentialsMetaInput = exa.credentials_field( description="The Exa integration requires an API Key." ) @@ -58,31 +62,21 @@ class ExaAnswerBlock(Block): placeholder="What is the latest valuation of SpaceX?", ) text: bool = SchemaField( - default=False, - description="If true, the response includes full text content in the search results", - advanced=True, - ) - model: str = SchemaField( - default="exa", - description="The search model to use (exa or exa-pro)", - placeholder="exa", - advanced=True, + description="Include full text content in the search results used for the answer", + default=True, ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): answer: str = SchemaField( description="The generated answer based on search results" ) - citations: list[dict] = SchemaField( - description="Search results used to generate the answer", - default_factory=list, + citations: list[AnswerCitation] = SchemaField( + description="Search results used to generate the answer" ) - cost_dollars: CostDollars = SchemaField( - description="Cost breakdown of the request" - ) - error: str = SchemaField( - description="Error message if the request failed", default="" + citation: AnswerCitation = SchemaField( + description="Individual citation from the answer" ) + error: str = SchemaField(description="Error message if the request failed") def __init__(self): super().__init__( @@ -96,26 +90,24 @@ class ExaAnswerBlock(Block): async def run( self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs ) -> BlockOutput: - url = "https://api.exa.ai/answer" - headers = { - "Content-Type": "application/json", - "x-api-key": credentials.api_key.get_secret_value(), - } + aexa = AsyncExa(api_key=credentials.api_key.get_secret_value()) - # Build the payload - payload = { - "query": input_data.query, - "text": input_data.text, - "model": input_data.model, - } + # Get answer using SDK (stream=False for blocks) - this IS async, needs await + response = await aexa.answer( + query=input_data.query, text=input_data.text, stream=False + ) - try: - response = await Requests().post(url, headers=headers, json=payload) - data = response.json() + # this should remain true as long as they don't start defaulting to streaming only. + # provides a bit of safety for sdk updates. + assert type(response) is AnswerResponse - yield "answer", data.get("answer", "") - yield "citations", data.get("citations", []) - yield "cost_dollars", data.get("costDollars", {}) + yield "answer", response.answer - except Exception as e: - yield "error", str(e) + citations = [ + AnswerCitation.from_sdk(sdk_citation) + for sdk_citation in response.citations or [] + ] + + yield "citations", citations + for citation in citations: + yield "citation", citation diff --git a/autogpt_platform/backend/backend/blocks/exa/code_context.py b/autogpt_platform/backend/backend/blocks/exa/code_context.py new file mode 100644 index 0000000000..962d13fdfa --- /dev/null +++ b/autogpt_platform/backend/backend/blocks/exa/code_context.py @@ -0,0 +1,118 @@ +""" +Exa Code Context Block + +Provides code search capabilities to find relevant code snippets and examples +from open source repositories, documentation, and Stack Overflow. 
+""" + +from typing import Union + +from pydantic import BaseModel + +from backend.sdk import ( + APIKeyCredentials, + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, + CredentialsMetaInput, + Requests, + SchemaField, +) + +from ._config import exa + + +class CodeContextResponse(BaseModel): + """Stable output model for code context responses.""" + + request_id: str + query: str + response: str + results_count: int + cost_dollars: str + search_time: float + output_tokens: int + + @classmethod + def from_api(cls, data: dict) -> "CodeContextResponse": + """Convert API response to our stable model.""" + return cls( + request_id=data.get("requestId", ""), + query=data.get("query", ""), + response=data.get("response", ""), + results_count=data.get("resultsCount", 0), + cost_dollars=data.get("costDollars", ""), + search_time=data.get("searchTime", 0.0), + output_tokens=data.get("outputTokens", 0), + ) + + +class ExaCodeContextBlock(Block): + """Get relevant code snippets and examples from open source repositories.""" + + class Input(BlockSchemaInput): + credentials: CredentialsMetaInput = exa.credentials_field( + description="The Exa integration requires an API Key." + ) + query: str = SchemaField( + description="Search query to find relevant code snippets. Describe what you're trying to do or what code you're looking for.", + placeholder="how to use React hooks for state management", + ) + tokens_num: Union[str, int] = SchemaField( + default="dynamic", + description="Token limit for response. Use 'dynamic' for automatic sizing, 5000 for standard queries, or 10000 for comprehensive examples.", + placeholder="dynamic", + ) + + class Output(BlockSchemaOutput): + request_id: str = SchemaField(description="Unique identifier for this request") + query: str = SchemaField(description="The search query used") + response: str = SchemaField( + description="Formatted code snippets and contextual examples with sources" + ) + results_count: int = SchemaField( + description="Number of code sources found and included" + ) + cost_dollars: str = SchemaField(description="Cost of this request in dollars") + search_time: float = SchemaField( + description="Time taken to search in milliseconds" + ) + output_tokens: int = SchemaField(description="Number of tokens in the response") + + def __init__(self): + super().__init__( + id="8f9e0d1c-2b3a-4567-8901-23456789abcd", + description="Search billions of GitHub repos, docs, and Stack Overflow for relevant code examples", + categories={BlockCategory.SEARCH, BlockCategory.DEVELOPER_TOOLS}, + input_schema=ExaCodeContextBlock.Input, + output_schema=ExaCodeContextBlock.Output, + ) + + async def run( + self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs + ) -> BlockOutput: + url = "https://api.exa.ai/context" + headers = { + "Content-Type": "application/json", + "x-api-key": credentials.api_key.get_secret_value(), + } + + payload = { + "query": input_data.query, + "tokensNum": input_data.tokens_num, + } + + response = await Requests().post(url, headers=headers, json=payload) + data = response.json() + + context = CodeContextResponse.from_api(data) + + yield "request_id", context.request_id + yield "query", context.query + yield "response", context.response + yield "results_count", context.results_count + yield "cost_dollars", context.cost_dollars + yield "search_time", context.search_time + yield "output_tokens", context.output_tokens diff --git a/autogpt_platform/backend/backend/blocks/exa/contents.py 
b/autogpt_platform/backend/backend/blocks/exa/contents.py index ec537232d2..9ab854fa85 100644 --- a/autogpt_platform/backend/backend/blocks/exa/contents.py +++ b/autogpt_platform/backend/backend/blocks/exa/contents.py @@ -1,39 +1,127 @@ +from enum import Enum +from typing import Optional + +from exa_py import AsyncExa +from pydantic import BaseModel + from backend.sdk import ( APIKeyCredentials, Block, BlockCategory, BlockOutput, - BlockSchema, + BlockSchemaInput, + BlockSchemaOutput, CredentialsMetaInput, - Requests, SchemaField, ) from ._config import exa -from .helpers import ContentSettings +from .helpers import ( + CostDollars, + ExaSearchResults, + ExtrasSettings, + HighlightSettings, + LivecrawlTypes, + SummarySettings, +) + + +class ContentStatusTag(str, Enum): + CRAWL_NOT_FOUND = "CRAWL_NOT_FOUND" + CRAWL_TIMEOUT = "CRAWL_TIMEOUT" + CRAWL_LIVECRAWL_TIMEOUT = "CRAWL_LIVECRAWL_TIMEOUT" + SOURCE_NOT_AVAILABLE = "SOURCE_NOT_AVAILABLE" + CRAWL_UNKNOWN_ERROR = "CRAWL_UNKNOWN_ERROR" + + +class ContentError(BaseModel): + tag: Optional[ContentStatusTag] = SchemaField( + default=None, description="Specific error type" + ) + httpStatusCode: Optional[int] = SchemaField( + default=None, description="The corresponding HTTP status code" + ) + + +class ContentStatus(BaseModel): + id: str = SchemaField(description="The URL that was requested") + status: str = SchemaField( + description="Status of the content fetch operation (success or error)" + ) + error: Optional[ContentError] = SchemaField( + default=None, description="Error details, only present when status is 'error'" + ) class ExaContentsBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: CredentialsMetaInput = exa.credentials_field( description="The Exa integration requires an API Key." 
) - ids: list[str] = SchemaField( - description="Array of document IDs obtained from searches" + urls: list[str] = SchemaField( + description="Array of URLs to crawl (preferred over 'ids')", + default_factory=list, + advanced=False, ) - contents: ContentSettings = SchemaField( - description="Content retrieval settings", - default=ContentSettings(), + ids: list[str] = SchemaField( + description="[DEPRECATED - use 'urls' instead] Array of document IDs obtained from searches", + default_factory=list, + advanced=True, + ) + text: bool = SchemaField( + description="Retrieve text content from pages", + default=True, + ) + highlights: HighlightSettings = SchemaField( + description="Text snippets most relevant from each page", + default=HighlightSettings(), + ) + summary: SummarySettings = SchemaField( + description="LLM-generated summary of the webpage", + default=SummarySettings(), + ) + livecrawl: Optional[LivecrawlTypes] = SchemaField( + description="Livecrawling options: never, fallback (default), always, preferred", + default=LivecrawlTypes.FALLBACK, + advanced=True, + ) + livecrawl_timeout: Optional[int] = SchemaField( + description="Timeout for livecrawling in milliseconds", + default=10000, + advanced=True, + ) + subpages: Optional[int] = SchemaField( + description="Number of subpages to crawl", default=0, ge=0, advanced=True + ) + subpage_target: Optional[str | list[str]] = SchemaField( + description="Keyword(s) to find specific subpages of search results", + default=None, + advanced=True, + ) + extras: ExtrasSettings = SchemaField( + description="Extra parameters for additional content", + default=ExtrasSettings(), advanced=True, ) - class Output(BlockSchema): - results: list = SchemaField( - description="List of document contents", default_factory=list + class Output(BlockSchemaOutput): + results: list[ExaSearchResults] = SchemaField( + description="List of document contents with metadata" ) - error: str = SchemaField( - description="Error message if the request failed", default="" + result: ExaSearchResults = SchemaField( + description="Single document content result" ) + context: str = SchemaField( + description="A formatted string of the results ready for LLMs" + ) + request_id: str = SchemaField(description="Unique identifier for the request") + statuses: list[ContentStatus] = SchemaField( + description="Status information for each requested URL" + ) + cost_dollars: Optional[CostDollars] = SchemaField( + description="Cost breakdown for the request" + ) + error: str = SchemaField(description="Error message if the request failed") def __init__(self): super().__init__( @@ -47,23 +135,91 @@ class ExaContentsBlock(Block): async def run( self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs ) -> BlockOutput: - url = "https://api.exa.ai/contents" - headers = { - "Content-Type": "application/json", - "x-api-key": credentials.api_key.get_secret_value(), - } + if not input_data.urls and not input_data.ids: + raise ValueError("Either 'urls' or 'ids' must be provided") - # Convert ContentSettings to API format - payload = { - "ids": input_data.ids, - "text": input_data.contents.text, - "highlights": input_data.contents.highlights, - "summary": input_data.contents.summary, - } + sdk_kwargs = {} - try: - response = await Requests().post(url, headers=headers, json=payload) - data = response.json() - yield "results", data.get("results", []) - except Exception as e: - yield "error", str(e) + # Prefer urls over ids + if input_data.urls: + sdk_kwargs["urls"] = input_data.urls + elif 
input_data.ids: + sdk_kwargs["ids"] = input_data.ids + + if input_data.text: + sdk_kwargs["text"] = {"includeHtmlTags": True} + + # Handle highlights - only include if modified from defaults + if input_data.highlights and ( + input_data.highlights.num_sentences != 1 + or input_data.highlights.highlights_per_url != 1 + or input_data.highlights.query is not None + ): + highlights_dict = {} + highlights_dict["numSentences"] = input_data.highlights.num_sentences + highlights_dict["highlightsPerUrl"] = ( + input_data.highlights.highlights_per_url + ) + if input_data.highlights.query: + highlights_dict["query"] = input_data.highlights.query + sdk_kwargs["highlights"] = highlights_dict + + # Handle summary - only include if modified from defaults + if input_data.summary and ( + input_data.summary.query is not None + or input_data.summary.schema is not None + ): + summary_dict = {} + if input_data.summary.query: + summary_dict["query"] = input_data.summary.query + if input_data.summary.schema: + summary_dict["schema"] = input_data.summary.schema + sdk_kwargs["summary"] = summary_dict + + if input_data.livecrawl: + sdk_kwargs["livecrawl"] = input_data.livecrawl.value + + if input_data.livecrawl_timeout is not None: + sdk_kwargs["livecrawl_timeout"] = input_data.livecrawl_timeout + + if input_data.subpages is not None: + sdk_kwargs["subpages"] = input_data.subpages + + if input_data.subpage_target: + sdk_kwargs["subpage_target"] = input_data.subpage_target + + # Handle extras - only include if modified from defaults + if input_data.extras and ( + input_data.extras.links > 0 or input_data.extras.image_links > 0 + ): + extras_dict = {} + if input_data.extras.links: + extras_dict["links"] = input_data.extras.links + if input_data.extras.image_links: + extras_dict["image_links"] = input_data.extras.image_links + sdk_kwargs["extras"] = extras_dict + + # Always enable context for LLM-ready output + sdk_kwargs["context"] = True + + aexa = AsyncExa(api_key=credentials.api_key.get_secret_value()) + response = await aexa.get_contents(**sdk_kwargs) + + converted_results = [ + ExaSearchResults.from_sdk(sdk_result) + for sdk_result in response.results or [] + ] + + yield "results", converted_results + + for result in converted_results: + yield "result", result + + if response.context: + yield "context", response.context + + if response.statuses: + yield "statuses", response.statuses + + if response.cost_dollars: + yield "cost_dollars", response.cost_dollars diff --git a/autogpt_platform/backend/backend/blocks/exa/helpers.py b/autogpt_platform/backend/backend/blocks/exa/helpers.py index d32ea2dc64..f31f01c78a 100644 --- a/autogpt_platform/backend/backend/blocks/exa/helpers.py +++ b/autogpt_platform/backend/backend/blocks/exa/helpers.py @@ -1,51 +1,150 @@ -from typing import Optional +from enum import Enum +from typing import Any, Dict, Literal, Optional, Union -from backend.sdk import BaseModel, SchemaField +from backend.sdk import BaseModel, MediaFileType, SchemaField -class TextSettings(BaseModel): - max_characters: int = SchemaField( - default=1000, +class LivecrawlTypes(str, Enum): + NEVER = "never" + FALLBACK = "fallback" + ALWAYS = "always" + PREFERRED = "preferred" + + +class TextEnabled(BaseModel): + discriminator: Literal["enabled"] = "enabled" + + +class TextDisabled(BaseModel): + discriminator: Literal["disabled"] = "disabled" + + +class TextAdvanced(BaseModel): + discriminator: Literal["advanced"] = "advanced" + max_characters: Optional[int] = SchemaField( + default=None, description="Maximum number 
of characters to return", placeholder="1000", ) include_html_tags: bool = SchemaField( default=False, - description="Whether to include HTML tags in the text", + description="Include HTML tags in the response, helps LLMs understand text structure", placeholder="False", ) class HighlightSettings(BaseModel): num_sentences: int = SchemaField( - default=3, + default=1, description="Number of sentences per highlight", - placeholder="3", + placeholder="1", + ge=1, ) highlights_per_url: int = SchemaField( - default=3, + default=1, description="Number of highlights per URL", - placeholder="3", + placeholder="1", + ge=1, + ) + query: Optional[str] = SchemaField( + default=None, + description="Custom query to direct the LLM's selection of highlights", + placeholder="Key advancements", ) class SummarySettings(BaseModel): query: Optional[str] = SchemaField( - default="", - description="Query string for summarization", - placeholder="Enter query", + default=None, + description="Custom query for the LLM-generated summary", + placeholder="Main developments", + ) + schema: Optional[dict] = SchemaField( # type: ignore + default=None, + description="JSON schema for structured output from summary", + advanced=True, + ) + + +class ExtrasSettings(BaseModel): + links: int = SchemaField( + default=0, + description="Number of URLs to return from each webpage", + placeholder="1", + ge=0, + ) + image_links: int = SchemaField( + default=0, + description="Number of images to return for each result", + placeholder="1", + ge=0, + ) + + +class ContextEnabled(BaseModel): + discriminator: Literal["enabled"] = "enabled" + + +class ContextDisabled(BaseModel): + discriminator: Literal["disabled"] = "disabled" + + +class ContextAdvanced(BaseModel): + discriminator: Literal["advanced"] = "advanced" + max_characters: Optional[int] = SchemaField( + default=None, + description="Maximum character limit for context string", + placeholder="10000", ) class ContentSettings(BaseModel): - text: TextSettings = SchemaField( - default=TextSettings(), + text: Optional[Union[bool, TextEnabled, TextDisabled, TextAdvanced]] = SchemaField( + default=None, + description="Text content retrieval. 
Boolean for simple enable/disable or object for advanced settings", ) - highlights: HighlightSettings = SchemaField( - default=HighlightSettings(), + highlights: Optional[HighlightSettings] = SchemaField( + default=None, + description="Text snippets most relevant from each page", ) - summary: SummarySettings = SchemaField( - default=SummarySettings(), + summary: Optional[SummarySettings] = SchemaField( + default=None, + description="LLM-generated summary of the webpage", + ) + livecrawl: Optional[LivecrawlTypes] = SchemaField( + default=None, + description="Livecrawling options: never, fallback, always, preferred", + advanced=True, + ) + livecrawl_timeout: Optional[int] = SchemaField( + default=None, + description="Timeout for livecrawling in milliseconds", + placeholder="10000", + advanced=True, + ) + subpages: Optional[int] = SchemaField( + default=None, + description="Number of subpages to crawl", + placeholder="0", + ge=0, + advanced=True, + ) + subpage_target: Optional[Union[str, list[str]]] = SchemaField( + default=None, + description="Keyword(s) to find specific subpages of search results", + advanced=True, + ) + extras: Optional[ExtrasSettings] = SchemaField( + default=None, + description="Extra parameters for additional content", + advanced=True, + ) + context: Optional[Union[bool, ContextEnabled, ContextDisabled, ContextAdvanced]] = ( + SchemaField( + default=None, + description="Format search results into a context string for LLMs", + advanced=True, + ) ) @@ -127,3 +226,225 @@ class WebsetEnrichmentConfig(BaseModel): default=None, description="Options for the enrichment", ) + + +# Shared result models +class ExaSearchExtras(BaseModel): + links: list[str] = SchemaField( + default_factory=list, description="Array of links from the search result" + ) + imageLinks: list[str] = SchemaField( + default_factory=list, description="Array of image links from the search result" + ) + + +class ExaSearchResults(BaseModel): + title: str | None = None + url: str | None = None + publishedDate: str | None = None + author: str | None = None + id: str + image: MediaFileType | None = None + favicon: MediaFileType | None = None + text: str | None = None + highlights: list[str] = SchemaField(default_factory=list) + highlightScores: list[float] = SchemaField(default_factory=list) + summary: str | None = None + subpages: list[dict] = SchemaField(default_factory=list) + extras: ExaSearchExtras | None = None + + @classmethod + def from_sdk(cls, sdk_result) -> "ExaSearchResults": + """Convert SDK Result (dataclass) to our Pydantic model.""" + return cls( + id=getattr(sdk_result, "id", ""), + url=getattr(sdk_result, "url", None), + title=getattr(sdk_result, "title", None), + author=getattr(sdk_result, "author", None), + publishedDate=getattr(sdk_result, "published_date", None), + text=getattr(sdk_result, "text", None), + highlights=getattr(sdk_result, "highlights", None) or [], + highlightScores=getattr(sdk_result, "highlight_scores", None) or [], + summary=getattr(sdk_result, "summary", None), + subpages=getattr(sdk_result, "subpages", None) or [], + image=getattr(sdk_result, "image", None), + favicon=getattr(sdk_result, "favicon", None), + extras=getattr(sdk_result, "extras", None), + ) + + +# Cost tracking models +class CostBreakdown(BaseModel): + keywordSearch: float = SchemaField(default=0.0) + neuralSearch: float = SchemaField(default=0.0) + contentText: float = SchemaField(default=0.0) + contentHighlight: float = SchemaField(default=0.0) + contentSummary: float = SchemaField(default=0.0) + + +class 
CostBreakdownItem(BaseModel): + search: float = SchemaField(default=0.0) + contents: float = SchemaField(default=0.0) + breakdown: CostBreakdown = SchemaField(default_factory=CostBreakdown) + + +class PerRequestPrices(BaseModel): + neuralSearch_1_25_results: float = SchemaField(default=0.005) + neuralSearch_26_100_results: float = SchemaField(default=0.025) + neuralSearch_100_plus_results: float = SchemaField(default=1.0) + keywordSearch_1_100_results: float = SchemaField(default=0.0025) + keywordSearch_100_plus_results: float = SchemaField(default=3.0) + + +class PerPagePrices(BaseModel): + contentText: float = SchemaField(default=0.001) + contentHighlight: float = SchemaField(default=0.001) + contentSummary: float = SchemaField(default=0.001) + + +class CostDollars(BaseModel): + total: float = SchemaField(description="Total dollar cost for your request") + breakDown: list[CostBreakdownItem] = SchemaField( + default_factory=list, description="Breakdown of costs by operation type" + ) + perRequestPrices: PerRequestPrices = SchemaField( + default_factory=PerRequestPrices, + description="Standard price per request for different operations", + ) + perPagePrices: PerPagePrices = SchemaField( + default_factory=PerPagePrices, + description="Standard price per page for different content operations", + ) + + +# Helper functions for payload processing +def process_text_field( + text: Union[bool, TextEnabled, TextDisabled, TextAdvanced, None] +) -> Optional[Union[bool, Dict[str, Any]]]: + """Process text field for API payload.""" + if text is None: + return None + + # Handle backward compatibility with boolean + if isinstance(text, bool): + return text + elif isinstance(text, TextDisabled): + return False + elif isinstance(text, TextEnabled): + return True + elif isinstance(text, TextAdvanced): + text_dict = {} + if text.max_characters: + text_dict["maxCharacters"] = text.max_characters + if text.include_html_tags: + text_dict["includeHtmlTags"] = text.include_html_tags + return text_dict if text_dict else True + return None + + +def process_contents_settings(contents: Optional[ContentSettings]) -> Dict[str, Any]: + """Process ContentSettings into API payload format.""" + if not contents: + return {} + + content_settings = {} + + # Handle text field (can be boolean or object) + text_value = process_text_field(contents.text) + if text_value is not None: + content_settings["text"] = text_value + + # Handle highlights + if contents.highlights: + highlights_dict: Dict[str, Any] = { + "numSentences": contents.highlights.num_sentences, + "highlightsPerUrl": contents.highlights.highlights_per_url, + } + if contents.highlights.query: + highlights_dict["query"] = contents.highlights.query + content_settings["highlights"] = highlights_dict + + if contents.summary: + summary_dict = {} + if contents.summary.query: + summary_dict["query"] = contents.summary.query + if contents.summary.schema: + summary_dict["schema"] = contents.summary.schema + content_settings["summary"] = summary_dict + + if contents.livecrawl: + content_settings["livecrawl"] = contents.livecrawl.value + + if contents.livecrawl_timeout is not None: + content_settings["livecrawlTimeout"] = contents.livecrawl_timeout + + if contents.subpages is not None: + content_settings["subpages"] = contents.subpages + + if contents.subpage_target: + content_settings["subpageTarget"] = contents.subpage_target + + if contents.extras: + extras_dict = {} + if contents.extras.links: + extras_dict["links"] = contents.extras.links + if contents.extras.image_links: 
+ extras_dict["imageLinks"] = contents.extras.image_links + content_settings["extras"] = extras_dict + + context_value = process_context_field(contents.context) + if context_value is not None: + content_settings["context"] = context_value + + return content_settings + + +def process_context_field( + context: Union[bool, dict, ContextEnabled, ContextDisabled, ContextAdvanced, None] +) -> Optional[Union[bool, Dict[str, int]]]: + """Process context field for API payload.""" + if context is None: + return None + + # Handle backward compatibility with boolean + if isinstance(context, bool): + return context if context else None + elif isinstance(context, dict) and "maxCharacters" in context: + return {"maxCharacters": context["maxCharacters"]} + elif isinstance(context, ContextDisabled): + return None # Don't send context field at all when disabled + elif isinstance(context, ContextEnabled): + return True + elif isinstance(context, ContextAdvanced): + if context.max_characters: + return {"maxCharacters": context.max_characters} + return True + return None + + +def format_date_fields( + input_data: Any, date_field_mapping: Dict[str, str] +) -> Dict[str, str]: + """Format datetime fields for API payload.""" + formatted_dates = {} + for input_field, api_field in date_field_mapping.items(): + value = getattr(input_data, input_field, None) + if value: + formatted_dates[api_field] = value.strftime("%Y-%m-%dT%H:%M:%S.000Z") + return formatted_dates + + +def add_optional_fields( + input_data: Any, + field_mapping: Dict[str, str], + payload: Dict[str, Any], + process_enums: bool = False, +) -> None: + """Add optional fields to payload if they have values.""" + for input_field, api_field in field_mapping.items(): + value = getattr(input_data, input_field, None) + if value: # Only add non-empty values + if process_enums and hasattr(value, "value"): + payload[api_field] = value.value + else: + payload[api_field] = value diff --git a/autogpt_platform/backend/backend/blocks/exa/model.py b/autogpt_platform/backend/backend/blocks/exa/model.py deleted file mode 100644 index 69b223f467..0000000000 --- a/autogpt_platform/backend/backend/blocks/exa/model.py +++ /dev/null @@ -1,247 +0,0 @@ -from datetime import datetime -from enum import Enum -from typing import Any, Dict, List, Optional - -from pydantic import BaseModel, Field - - -# Enum definitions based on available options -class WebsetStatus(str, Enum): - IDLE = "idle" - PENDING = "pending" - RUNNING = "running" - PAUSED = "paused" - - -class WebsetSearchStatus(str, Enum): - CREATED = "created" - # Add more if known, based on example it's "created" - - -class ImportStatus(str, Enum): - PENDING = "pending" - # Add more if known - - -class ImportFormat(str, Enum): - CSV = "csv" - # Add more if known - - -class EnrichmentStatus(str, Enum): - PENDING = "pending" - # Add more if known - - -class EnrichmentFormat(str, Enum): - TEXT = "text" - # Add more if known - - -class MonitorStatus(str, Enum): - ENABLED = "enabled" - # Add more if known - - -class MonitorBehaviorType(str, Enum): - SEARCH = "search" - # Add more if known - - -class MonitorRunStatus(str, Enum): - CREATED = "created" - # Add more if known - - -class CanceledReason(str, Enum): - WEBSET_DELETED = "webset_deleted" - # Add more if known - - -class FailedReason(str, Enum): - INVALID_FORMAT = "invalid_format" - # Add more if known - - -class Confidence(str, Enum): - HIGH = "high" - # Add more if known - - -# Nested models - - -class Entity(BaseModel): - type: str - - -class Criterion(BaseModel): - 
description: str - successRate: Optional[int] = None - - -class ExcludeItem(BaseModel): - source: str = Field(default="import") - id: str - - -class Relationship(BaseModel): - definition: str - limit: Optional[float] = None - - -class ScopeItem(BaseModel): - source: str = Field(default="import") - id: str - relationship: Optional[Relationship] = None - - -class Progress(BaseModel): - found: int - analyzed: int - completion: int - timeLeft: int - - -class Bounds(BaseModel): - min: int - max: int - - -class Expected(BaseModel): - total: int - confidence: str = Field(default="high") # Use str or Confidence enum - bounds: Bounds - - -class Recall(BaseModel): - expected: Expected - reasoning: str - - -class WebsetSearch(BaseModel): - id: str - object: str = Field(default="webset_search") - status: str = Field(default="created") # Or use WebsetSearchStatus - websetId: str - query: str - entity: Entity - criteria: List[Criterion] - count: int - behavior: str = Field(default="override") - exclude: List[ExcludeItem] - scope: List[ScopeItem] - progress: Progress - recall: Recall - metadata: Dict[str, Any] = Field(default_factory=dict) - canceledAt: Optional[datetime] = None - canceledReason: Optional[str] = Field(default=None) # Or use CanceledReason - createdAt: datetime - updatedAt: datetime - - -class ImportEntity(BaseModel): - type: str - - -class Import(BaseModel): - id: str - object: str = Field(default="import") - status: str = Field(default="pending") # Or use ImportStatus - format: str = Field(default="csv") # Or use ImportFormat - entity: ImportEntity - title: str - count: int - metadata: Dict[str, Any] = Field(default_factory=dict) - failedReason: Optional[str] = Field(default=None) # Or use FailedReason - failedAt: Optional[datetime] = None - failedMessage: Optional[str] = None - createdAt: datetime - updatedAt: datetime - - -class Option(BaseModel): - label: str - - -class WebsetEnrichment(BaseModel): - id: str - object: str = Field(default="webset_enrichment") - status: str = Field(default="pending") # Or use EnrichmentStatus - websetId: str - title: str - description: str - format: str = Field(default="text") # Or use EnrichmentFormat - options: List[Option] - instructions: str - metadata: Dict[str, Any] = Field(default_factory=dict) - createdAt: datetime - updatedAt: datetime - - -class Cadence(BaseModel): - cron: str - timezone: str = Field(default="Etc/UTC") - - -class BehaviorConfig(BaseModel): - query: Optional[str] = None - criteria: Optional[List[Criterion]] = None - entity: Optional[Entity] = None - count: Optional[int] = None - behavior: Optional[str] = Field(default=None) - - -class Behavior(BaseModel): - type: str = Field(default="search") # Or use MonitorBehaviorType - config: BehaviorConfig - - -class MonitorRun(BaseModel): - id: str - object: str = Field(default="monitor_run") - status: str = Field(default="created") # Or use MonitorRunStatus - monitorId: str - type: str = Field(default="search") - completedAt: Optional[datetime] = None - failedAt: Optional[datetime] = None - failedReason: Optional[str] = None - canceledAt: Optional[datetime] = None - createdAt: datetime - updatedAt: datetime - - -class Monitor(BaseModel): - id: str - object: str = Field(default="monitor") - status: str = Field(default="enabled") # Or use MonitorStatus - websetId: str - cadence: Cadence - behavior: Behavior - lastRun: Optional[MonitorRun] = None - nextRunAt: Optional[datetime] = None - metadata: Dict[str, Any] = Field(default_factory=dict) - createdAt: datetime - updatedAt: datetime - - 
-class Webset(BaseModel): - id: str - object: str = Field(default="webset") - status: WebsetStatus - externalId: Optional[str] = None - title: Optional[str] = None - searches: List[WebsetSearch] - imports: List[Import] - enrichments: List[WebsetEnrichment] - monitors: List[Monitor] - streams: List[Any] - createdAt: datetime - updatedAt: datetime - metadata: Dict[str, Any] = Field(default_factory=dict) - - -class ListWebsets(BaseModel): - data: List[Webset] - hasMore: bool - nextCursor: Optional[str] = None diff --git a/autogpt_platform/backend/backend/blocks/exa/research.py b/autogpt_platform/backend/backend/blocks/exa/research.py new file mode 100644 index 0000000000..c35a1048df --- /dev/null +++ b/autogpt_platform/backend/backend/blocks/exa/research.py @@ -0,0 +1,518 @@ +""" +Exa Research Task Blocks + +Provides asynchronous research capabilities that explore the web, gather sources, +synthesize findings, and return structured results with citations. +""" + +import asyncio +import time +from enum import Enum +from typing import Any, Dict, List, Optional + +from pydantic import BaseModel + +from backend.sdk import ( + APIKeyCredentials, + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, + CredentialsMetaInput, + Requests, + SchemaField, +) + +from ._config import exa + + +class ResearchModel(str, Enum): + """Available research models.""" + + FAST = "exa-research-fast" + STANDARD = "exa-research" + PRO = "exa-research-pro" + + +class ResearchStatus(str, Enum): + """Research task status.""" + + PENDING = "pending" + RUNNING = "running" + COMPLETED = "completed" + CANCELED = "canceled" + FAILED = "failed" + + +class ResearchCostModel(BaseModel): + """Cost breakdown for a research request.""" + + total: float + num_searches: int + num_pages: int + reasoning_tokens: int + + @classmethod + def from_api(cls, data: dict) -> "ResearchCostModel": + """Convert API response, rounding fractional counts to integers.""" + return cls( + total=data.get("total", 0.0), + num_searches=int(round(data.get("numSearches", 0))), + num_pages=int(round(data.get("numPages", 0))), + reasoning_tokens=int(round(data.get("reasoningTokens", 0))), + ) + + +class ResearchOutputModel(BaseModel): + """Research output with content and optional structured data.""" + + content: str + parsed: Optional[Dict[str, Any]] = None + + +class ResearchTaskModel(BaseModel): + """Stable output model for research tasks.""" + + research_id: str + created_at: int + model: str + instructions: str + status: str + output_schema: Optional[Dict[str, Any]] = None + output: Optional[ResearchOutputModel] = None + cost_dollars: Optional[ResearchCostModel] = None + finished_at: Optional[int] = None + error: Optional[str] = None + + @classmethod + def from_api(cls, data: dict) -> "ResearchTaskModel": + """Convert API response to our stable model.""" + output_data = data.get("output") + output = None + if output_data: + output = ResearchOutputModel( + content=output_data.get("content", ""), + parsed=output_data.get("parsed"), + ) + + cost_data = data.get("costDollars") + cost = None + if cost_data: + cost = ResearchCostModel.from_api(cost_data) + + return cls( + research_id=data.get("researchId", ""), + created_at=data.get("createdAt", 0), + model=data.get("model", "exa-research"), + instructions=data.get("instructions", ""), + status=data.get("status", "pending"), + output_schema=data.get("outputSchema"), + output=output, + cost_dollars=cost, + finished_at=data.get("finishedAt"), + error=data.get("error"), + ) + + +class 
ExaCreateResearchBlock(Block): + """Create an asynchronous research task that explores the web and synthesizes findings.""" + + class Input(BlockSchemaInput): + credentials: CredentialsMetaInput = exa.credentials_field( + description="The Exa integration requires an API Key." + ) + instructions: str = SchemaField( + description="Research instructions - clearly define what information to find, how to conduct research, and desired output format.", + placeholder="Research the top 5 AI coding assistants, their features, pricing, and user reviews", + ) + model: ResearchModel = SchemaField( + default=ResearchModel.STANDARD, + description="Research model: 'fast' for quick results, 'standard' for balanced quality, 'pro' for thorough analysis", + ) + output_schema: Optional[dict] = SchemaField( + default=None, + description="JSON Schema to enforce structured output. When provided, results are validated and returned as parsed JSON.", + advanced=True, + ) + wait_for_completion: bool = SchemaField( + default=True, + description="Wait for research to complete before returning. Ensures you get results immediately.", + ) + polling_timeout: int = SchemaField( + default=600, + description="Maximum time to wait for completion in seconds (only if wait_for_completion is True)", + advanced=True, + ge=1, + le=3600, + ) + + class Output(BlockSchemaOutput): + research_id: str = SchemaField( + description="Unique identifier for tracking this research request" + ) + status: str = SchemaField(description="Final status of the research") + model: str = SchemaField(description="The research model used") + instructions: str = SchemaField( + description="The research instructions provided" + ) + created_at: int = SchemaField( + description="When the research was created (Unix timestamp in ms)" + ) + output_content: Optional[str] = SchemaField( + description="Research output as text (only if wait_for_completion was True and completed)" + ) + output_parsed: Optional[dict] = SchemaField( + description="Structured JSON output (only if wait_for_completion and outputSchema were provided)" + ) + cost_total: Optional[float] = SchemaField( + description="Total cost in USD (only if wait_for_completion was True and completed)" + ) + elapsed_time: Optional[float] = SchemaField( + description="Time taken to complete in seconds (only if wait_for_completion was True)" + ) + + def __init__(self): + super().__init__( + id="a1f2e3d4-c5b6-4a78-9012-3456789abcde", + description="Create research task with optional waiting - explores web and synthesizes findings with citations", + categories={BlockCategory.SEARCH, BlockCategory.AI}, + input_schema=ExaCreateResearchBlock.Input, + output_schema=ExaCreateResearchBlock.Output, + ) + + async def run( + self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs + ) -> BlockOutput: + url = "https://api.exa.ai/research/v1" + headers = { + "Content-Type": "application/json", + "x-api-key": credentials.api_key.get_secret_value(), + } + + payload: Dict[str, Any] = { + "model": input_data.model.value, + "instructions": input_data.instructions, + } + + if input_data.output_schema: + payload["outputSchema"] = input_data.output_schema + + response = await Requests().post(url, headers=headers, json=payload) + data = response.json() + + research_id = data.get("researchId", "") + + if input_data.wait_for_completion: + start_time = time.time() + get_url = f"https://api.exa.ai/research/v1/{research_id}" + get_headers = {"x-api-key": credentials.api_key.get_secret_value()} + check_interval = 10 + + 
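+ # Poll the research task until it reaches a terminal status ("completed",
+ # "failed", or "canceled"), sleeping check_interval seconds between checks;
+ # if polling_timeout elapses first, a ValueError is raised after the loop.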
while time.time() - start_time < input_data.polling_timeout: + poll_response = await Requests().get(url=get_url, headers=get_headers) + poll_data = poll_response.json() + + status = poll_data.get("status", "") + + if status in ["completed", "failed", "canceled"]: + elapsed = time.time() - start_time + research = ResearchTaskModel.from_api(poll_data) + + yield "research_id", research.research_id + yield "status", research.status + yield "model", research.model + yield "instructions", research.instructions + yield "created_at", research.created_at + yield "elapsed_time", elapsed + + if research.output: + yield "output_content", research.output.content + yield "output_parsed", research.output.parsed + + if research.cost_dollars: + yield "cost_total", research.cost_dollars.total + return + + await asyncio.sleep(check_interval) + + raise ValueError( + f"Research did not complete within {input_data.polling_timeout} seconds" + ) + else: + yield "research_id", research_id + yield "status", data.get("status", "pending") + yield "model", data.get("model", input_data.model.value) + yield "instructions", data.get("instructions", input_data.instructions) + yield "created_at", data.get("createdAt", 0) + + +class ExaGetResearchBlock(Block): + """Get the status and results of a research task.""" + + class Input(BlockSchemaInput): + credentials: CredentialsMetaInput = exa.credentials_field( + description="The Exa integration requires an API Key." + ) + research_id: str = SchemaField( + description="The ID of the research task to retrieve", + placeholder="01jszdfs0052sg4jc552sg4jc5", + ) + include_events: bool = SchemaField( + default=False, + description="Include detailed event log of research operations", + advanced=True, + ) + + class Output(BlockSchemaOutput): + research_id: str = SchemaField(description="The research task identifier") + status: str = SchemaField( + description="Current status: pending, running, completed, canceled, or failed" + ) + instructions: str = SchemaField( + description="The original research instructions" + ) + model: str = SchemaField(description="The research model used") + created_at: int = SchemaField( + description="When research was created (Unix timestamp in ms)" + ) + finished_at: Optional[int] = SchemaField( + description="When research finished (Unix timestamp in ms, if completed/canceled/failed)" + ) + output_content: Optional[str] = SchemaField( + description="Research output as text (if completed)" + ) + output_parsed: Optional[dict] = SchemaField( + description="Structured JSON output matching outputSchema (if provided and completed)" + ) + cost_total: Optional[float] = SchemaField( + description="Total cost in USD (if completed)" + ) + cost_searches: Optional[int] = SchemaField( + description="Number of searches performed (if completed)" + ) + cost_pages: Optional[int] = SchemaField( + description="Number of pages crawled (if completed)" + ) + cost_reasoning_tokens: Optional[int] = SchemaField( + description="AI tokens used for reasoning (if completed)" + ) + error_message: Optional[str] = SchemaField( + description="Error message if research failed" + ) + events: Optional[List[dict]] = SchemaField( + description="Detailed event log (if include_events was True)" + ) + + def __init__(self): + super().__init__( + id="b2e3f4a5-6789-4bcd-9012-3456789abcde", + description="Get status and results of a research task", + categories={BlockCategory.SEARCH}, + input_schema=ExaGetResearchBlock.Input, + output_schema=ExaGetResearchBlock.Output, + ) + + async def run( + self, 
input_data: Input, *, credentials: APIKeyCredentials, **kwargs + ) -> BlockOutput: + url = f"https://api.exa.ai/research/v1/{input_data.research_id}" + headers = { + "x-api-key": credentials.api_key.get_secret_value(), + } + + params = {} + if input_data.include_events: + params["events"] = "true" + + response = await Requests().get(url, headers=headers, params=params) + data = response.json() + + research = ResearchTaskModel.from_api(data) + + yield "research_id", research.research_id + yield "status", research.status + yield "instructions", research.instructions + yield "model", research.model + yield "created_at", research.created_at + yield "finished_at", research.finished_at + + if research.output: + yield "output_content", research.output.content + yield "output_parsed", research.output.parsed + + if research.cost_dollars: + yield "cost_total", research.cost_dollars.total + yield "cost_searches", research.cost_dollars.num_searches + yield "cost_pages", research.cost_dollars.num_pages + yield "cost_reasoning_tokens", research.cost_dollars.reasoning_tokens + + yield "error_message", research.error + + if input_data.include_events: + yield "events", data.get("events", []) + + +class ExaWaitForResearchBlock(Block): + """Wait for a research task to complete with progress tracking.""" + + class Input(BlockSchemaInput): + credentials: CredentialsMetaInput = exa.credentials_field( + description="The Exa integration requires an API Key." + ) + research_id: str = SchemaField( + description="The ID of the research task to wait for", + placeholder="01jszdfs0052sg4jc552sg4jc5", + ) + timeout: int = SchemaField( + default=600, + description="Maximum time to wait in seconds", + ge=1, + le=3600, + ) + check_interval: int = SchemaField( + default=10, + description="Seconds between status checks", + advanced=True, + ge=1, + le=60, + ) + + class Output(BlockSchemaOutput): + research_id: str = SchemaField(description="The research task identifier") + final_status: str = SchemaField(description="Final status when polling stopped") + output_content: Optional[str] = SchemaField( + description="Research output as text (if completed)" + ) + output_parsed: Optional[dict] = SchemaField( + description="Structured JSON output (if outputSchema was provided and completed)" + ) + cost_total: Optional[float] = SchemaField(description="Total cost in USD") + elapsed_time: float = SchemaField(description="Total time waited in seconds") + timed_out: bool = SchemaField( + description="Whether polling timed out before completion" + ) + + def __init__(self): + super().__init__( + id="c3d4e5f6-7890-4abc-9012-3456789abcde", + description="Wait for a research task to complete with configurable timeout", + categories={BlockCategory.SEARCH}, + input_schema=ExaWaitForResearchBlock.Input, + output_schema=ExaWaitForResearchBlock.Output, + ) + + async def run( + self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs + ) -> BlockOutput: + start_time = time.time() + url = f"https://api.exa.ai/research/v1/{input_data.research_id}" + headers = { + "x-api-key": credentials.api_key.get_secret_value(), + } + + while time.time() - start_time < input_data.timeout: + response = await Requests().get(url, headers=headers) + data = response.json() + + status = data.get("status", "") + + if status in ["completed", "failed", "canceled"]: + elapsed = time.time() - start_time + research = ResearchTaskModel.from_api(data) + + yield "research_id", research.research_id + yield "final_status", research.status + yield "elapsed_time", 
elapsed + yield "timed_out", False + + if research.output: + yield "output_content", research.output.content + yield "output_parsed", research.output.parsed + + if research.cost_dollars: + yield "cost_total", research.cost_dollars.total + + return + + await asyncio.sleep(input_data.check_interval) + + elapsed = time.time() - start_time + response = await Requests().get(url, headers=headers) + data = response.json() + + yield "research_id", input_data.research_id + yield "final_status", data.get("status", "unknown") + yield "elapsed_time", elapsed + yield "timed_out", True + + +class ExaListResearchBlock(Block): + """List all research tasks with pagination support.""" + + class Input(BlockSchemaInput): + credentials: CredentialsMetaInput = exa.credentials_field( + description="The Exa integration requires an API Key." + ) + cursor: Optional[str] = SchemaField( + default=None, + description="Cursor for pagination through results", + advanced=True, + ) + limit: int = SchemaField( + default=10, + description="Number of research tasks to return (1-50)", + ge=1, + le=50, + advanced=True, + ) + + class Output(BlockSchemaOutput): + research_tasks: List[ResearchTaskModel] = SchemaField( + description="List of research tasks ordered by creation time (newest first)" + ) + research_task: ResearchTaskModel = SchemaField( + description="Individual research task (yielded for each task)" + ) + has_more: bool = SchemaField( + description="Whether there are more tasks to paginate through" + ) + next_cursor: Optional[str] = SchemaField( + description="Cursor for the next page of results" + ) + + def __init__(self): + super().__init__( + id="d4e5f6a7-8901-4bcd-9012-3456789abcde", + description="List all research tasks with pagination support", + categories={BlockCategory.SEARCH}, + input_schema=ExaListResearchBlock.Input, + output_schema=ExaListResearchBlock.Output, + ) + + async def run( + self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs + ) -> BlockOutput: + url = "https://api.exa.ai/research/v1" + headers = { + "x-api-key": credentials.api_key.get_secret_value(), + } + + params: Dict[str, Any] = { + "limit": input_data.limit, + } + if input_data.cursor: + params["cursor"] = input_data.cursor + + response = await Requests().get(url, headers=headers, params=params) + data = response.json() + + tasks = [ResearchTaskModel.from_api(task) for task in data.get("data", [])] + + yield "research_tasks", tasks + + for task in tasks: + yield "research_task", task + + yield "has_more", data.get("hasMore", False) + yield "next_cursor", data.get("nextCursor") diff --git a/autogpt_platform/backend/backend/blocks/exa/search.py b/autogpt_platform/backend/backend/blocks/exa/search.py index 4bc772f7f7..7e4ccfc538 100644 --- a/autogpt_platform/backend/backend/blocks/exa/search.py +++ b/autogpt_platform/backend/backend/blocks/exa/search.py @@ -1,32 +1,66 @@ from datetime import datetime +from enum import Enum +from typing import Optional + +from exa_py import AsyncExa from backend.sdk import ( APIKeyCredentials, Block, BlockCategory, BlockOutput, - BlockSchema, + BlockSchemaInput, + BlockSchemaOutput, CredentialsMetaInput, - Requests, SchemaField, ) from ._config import exa -from .helpers import ContentSettings +from .helpers import ( + ContentSettings, + CostDollars, + ExaSearchResults, + process_contents_settings, +) + + +class ExaSearchTypes(Enum): + KEYWORD = "keyword" + NEURAL = "neural" + FAST = "fast" + AUTO = "auto" + + +class ExaSearchCategories(Enum): + COMPANY = "company" + RESEARCH_PAPER = 
"research paper" + NEWS = "news" + PDF = "pdf" + GITHUB = "github" + TWEET = "tweet" + PERSONAL_SITE = "personal site" + LINKEDIN_PROFILE = "linkedin profile" + FINANCIAL_REPORT = "financial report" class ExaSearchBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: CredentialsMetaInput = exa.credentials_field( description="The Exa integration requires an API Key." ) query: str = SchemaField(description="The search query") - use_auto_prompt: bool = SchemaField( - description="Whether to use autoprompt", default=True, advanced=True + type: ExaSearchTypes = SchemaField( + description="Type of search", default=ExaSearchTypes.AUTO, advanced=True ) - type: str = SchemaField(description="Type of search", default="", advanced=True) - category: str = SchemaField( - description="Category to search within", default="", advanced=True + category: ExaSearchCategories | None = SchemaField( + description="Category to search within: company, research paper, news, pdf, github, tweet, personal site, linkedin profile, financial report", + default=None, + advanced=True, + ) + user_location: str | None = SchemaField( + description="The two-letter ISO country code of the user (e.g., 'US')", + default=None, + advanced=True, ) number_of_results: int = SchemaField( description="Number of results to return", default=10, advanced=True @@ -39,17 +73,17 @@ class ExaSearchBlock(Block): default_factory=list, advanced=True, ) - start_crawl_date: datetime = SchemaField( - description="Start date for crawled content" + start_crawl_date: datetime | None = SchemaField( + description="Start date for crawled content", advanced=True, default=None ) - end_crawl_date: datetime = SchemaField( - description="End date for crawled content" + end_crawl_date: datetime | None = SchemaField( + description="End date for crawled content", advanced=True, default=None ) - start_published_date: datetime = SchemaField( - description="Start date for published content" + start_published_date: datetime | None = SchemaField( + description="Start date for published content", advanced=True, default=None ) - end_published_date: datetime = SchemaField( - description="End date for published content" + end_published_date: datetime | None = SchemaField( + description="End date for published content", advanced=True, default=None ) include_text: list[str] = SchemaField( description="Text patterns to include", default_factory=list, advanced=True @@ -62,14 +96,30 @@ class ExaSearchBlock(Block): default=ContentSettings(), advanced=True, ) + moderation: bool = SchemaField( + description="Enable content moderation to filter unsafe content from search results", + default=False, + advanced=True, + ) - class Output(BlockSchema): - results: list = SchemaField( - description="List of search results", default_factory=list + class Output(BlockSchemaOutput): + results: list[ExaSearchResults] = SchemaField( + description="List of search results" ) - error: str = SchemaField( - description="Error message if the request failed", + result: ExaSearchResults = SchemaField(description="Single search result") + context: str = SchemaField( + description="A formatted string of the search results ready for LLMs." ) + search_type: str = SchemaField( + description="For auto searches, indicates which search type was selected." 
+ )
+ resolved_search_type: str = SchemaField(
+ description="The search type that was actually used for this request (neural or keyword)"
+ )
+ cost_dollars: Optional[CostDollars] = SchemaField(
+ description="Cost breakdown for the request"
+ )
+ error: str = SchemaField(description="Error message if the request failed")
def __init__(self):
super().__init__(
@@ -83,51 +133,76 @@ class ExaSearchBlock(Block):
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
- url = "https://api.exa.ai/search"
- headers = {
- "Content-Type": "application/json",
- "x-api-key": credentials.api_key.get_secret_value(),
- }
-
- payload = {
+ sdk_kwargs = {
"query": input_data.query,
- "useAutoprompt": input_data.use_auto_prompt,
- "numResults": input_data.number_of_results,
- "contents": input_data.contents.model_dump(),
+ "num_results": input_data.number_of_results,
}
- date_field_mapping = {
- "start_crawl_date": "startCrawlDate",
- "end_crawl_date": "endCrawlDate",
- "start_published_date": "startPublishedDate",
- "end_published_date": "endPublishedDate",
- }
+ if input_data.type:
+ sdk_kwargs["type"] = input_data.type.value
- # Add dates if they exist
- for input_field, api_field in date_field_mapping.items():
- value = getattr(input_data, input_field, None)
- if value:
- payload[api_field] = value.strftime("%Y-%m-%dT%H:%M:%S.000Z")
+ if input_data.category:
+ sdk_kwargs["category"] = input_data.category.value
- optional_field_mapping = {
- "type": "type",
- "category": "category",
- "include_domains": "includeDomains",
- "exclude_domains": "excludeDomains",
- "include_text": "includeText",
- "exclude_text": "excludeText",
- }
+ if input_data.user_location:
+ sdk_kwargs["user_location"] = input_data.user_location
- # Add other fields
- for input_field, api_field in optional_field_mapping.items():
- value = getattr(input_data, input_field)
- if value: # Only add non-empty values
- payload[api_field] = value
+ # Handle domains
+ if input_data.include_domains:
+ sdk_kwargs["include_domains"] = input_data.include_domains
+ if input_data.exclude_domains:
+ sdk_kwargs["exclude_domains"] = input_data.exclude_domains
- try:
- response = await Requests().post(url, headers=headers, json=payload)
- data = response.json()
- # Extract just the results array from the response
- yield "results", data.get("results", [])
- except Exception as e:
- yield "error", str(e)
+ # Handle dates
+ if input_data.start_crawl_date:
+ sdk_kwargs["start_crawl_date"] = input_data.start_crawl_date.isoformat()
+ if input_data.end_crawl_date:
+ sdk_kwargs["end_crawl_date"] = input_data.end_crawl_date.isoformat()
+ if input_data.start_published_date:
+ sdk_kwargs["start_published_date"] = (
+ input_data.start_published_date.isoformat()
+ )
+ if input_data.end_published_date:
+ sdk_kwargs["end_published_date"] = input_data.end_published_date.isoformat()
+
+ # Handle text filters
+ if input_data.include_text:
+ sdk_kwargs["include_text"] = input_data.include_text
+ if input_data.exclude_text:
+ sdk_kwargs["exclude_text"] = input_data.exclude_text
+
+ if input_data.moderation:
+ sdk_kwargs["moderation"] = input_data.moderation
+
+ # check if we need to use search_and_contents
+ content_settings = process_contents_settings(input_data.contents)
+
+ aexa = AsyncExa(api_key=credentials.api_key.get_secret_value())
+
+ if content_settings:
+ sdk_kwargs["text"] = content_settings.get("text", False)
+ if "highlights" in content_settings:
+ sdk_kwargs["highlights"] = content_settings["highlights"]
+ if 
"summary" in content_settings: + sdk_kwargs["summary"] = content_settings["summary"] + response = await aexa.search_and_contents(**sdk_kwargs) + else: + response = await aexa.search(**sdk_kwargs) + + converted_results = [ + ExaSearchResults.from_sdk(sdk_result) + for sdk_result in response.results or [] + ] + + yield "results", converted_results + for result in converted_results: + yield "result", result + + if response.context: + yield "context", response.context + + if response.resolved_search_type: + yield "resolved_search_type", response.resolved_search_type + + if response.cost_dollars: + yield "cost_dollars", response.cost_dollars diff --git a/autogpt_platform/backend/backend/blocks/exa/similar.py b/autogpt_platform/backend/backend/blocks/exa/similar.py index 940b9676c8..e2c592ff05 100644 --- a/autogpt_platform/backend/backend/blocks/exa/similar.py +++ b/autogpt_platform/backend/backend/blocks/exa/similar.py @@ -1,23 +1,30 @@ from datetime import datetime -from typing import Any +from typing import Optional + +from exa_py import AsyncExa from backend.sdk import ( APIKeyCredentials, Block, BlockCategory, BlockOutput, - BlockSchema, + BlockSchemaInput, + BlockSchemaOutput, CredentialsMetaInput, - Requests, SchemaField, ) from ._config import exa -from .helpers import ContentSettings +from .helpers import ( + ContentSettings, + CostDollars, + ExaSearchResults, + process_contents_settings, +) class ExaFindSimilarBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: CredentialsMetaInput = exa.credentials_field( description="The Exa integration requires an API Key." ) @@ -28,7 +35,7 @@ class ExaFindSimilarBlock(Block): description="Number of results to return", default=10, advanced=True ) include_domains: list[str] = SchemaField( - description="Domains to include in search", + description="List of domains to include in the search. 
If specified, results will only come from these domains.", default_factory=list, advanced=True, ) @@ -37,17 +44,17 @@ class ExaFindSimilarBlock(Block): default_factory=list, advanced=True, ) - start_crawl_date: datetime = SchemaField( - description="Start date for crawled content" + start_crawl_date: Optional[datetime] = SchemaField( + description="Start date for crawled content", advanced=True, default=None ) - end_crawl_date: datetime = SchemaField( - description="End date for crawled content" + end_crawl_date: Optional[datetime] = SchemaField( + description="End date for crawled content", advanced=True, default=None ) - start_published_date: datetime = SchemaField( - description="Start date for published content" + start_published_date: Optional[datetime] = SchemaField( + description="Start date for published content", advanced=True, default=None ) - end_published_date: datetime = SchemaField( - description="End date for published content" + end_published_date: Optional[datetime] = SchemaField( + description="End date for published content", advanced=True, default=None ) include_text: list[str] = SchemaField( description="Text patterns to include (max 1 string, up to 5 words)", @@ -64,15 +71,27 @@ class ExaFindSimilarBlock(Block): default=ContentSettings(), advanced=True, ) + moderation: bool = SchemaField( + description="Enable content moderation to filter unsafe content from search results", + default=False, + advanced=True, + ) - class Output(BlockSchema): - results: list[Any] = SchemaField( - description="List of similar documents with title, URL, published date, author, and score", - default_factory=list, + class Output(BlockSchemaOutput): + results: list[ExaSearchResults] = SchemaField( + description="List of similar documents with metadata and content" ) - error: str = SchemaField( - description="Error message if the request failed", default="" + result: ExaSearchResults = SchemaField( + description="Single similar document result" ) + context: str = SchemaField( + description="A formatted string of the results ready for LLMs." 
+ ) + request_id: str = SchemaField(description="Unique identifier for the request") + cost_dollars: Optional[CostDollars] = SchemaField( + description="Cost breakdown for the request" + ) + error: str = SchemaField(description="Error message if the request failed") def __init__(self): super().__init__( @@ -86,47 +105,65 @@ class ExaFindSimilarBlock(Block): async def run( self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs ) -> BlockOutput: - url = "https://api.exa.ai/findSimilar" - headers = { - "Content-Type": "application/json", - "x-api-key": credentials.api_key.get_secret_value(), - } - - payload = { + sdk_kwargs = { "url": input_data.url, - "numResults": input_data.number_of_results, - "contents": input_data.contents.model_dump(), + "num_results": input_data.number_of_results, } - optional_field_mapping = { - "include_domains": "includeDomains", - "exclude_domains": "excludeDomains", - "include_text": "includeText", - "exclude_text": "excludeText", - } + # Handle domains + if input_data.include_domains: + sdk_kwargs["include_domains"] = input_data.include_domains + if input_data.exclude_domains: + sdk_kwargs["exclude_domains"] = input_data.exclude_domains - # Add optional fields if they have values - for input_field, api_field in optional_field_mapping.items(): - value = getattr(input_data, input_field) - if value: # Only add non-empty values - payload[api_field] = value + # Handle dates + if input_data.start_crawl_date: + sdk_kwargs["start_crawl_date"] = input_data.start_crawl_date.isoformat() + if input_data.end_crawl_date: + sdk_kwargs["end_crawl_date"] = input_data.end_crawl_date.isoformat() + if input_data.start_published_date: + sdk_kwargs["start_published_date"] = ( + input_data.start_published_date.isoformat() + ) + if input_data.end_published_date: + sdk_kwargs["end_published_date"] = input_data.end_published_date.isoformat() - date_field_mapping = { - "start_crawl_date": "startCrawlDate", - "end_crawl_date": "endCrawlDate", - "start_published_date": "startPublishedDate", - "end_published_date": "endPublishedDate", - } + # Handle text filters + if input_data.include_text: + sdk_kwargs["include_text"] = input_data.include_text + if input_data.exclude_text: + sdk_kwargs["exclude_text"] = input_data.exclude_text - # Add dates if they exist - for input_field, api_field in date_field_mapping.items(): - value = getattr(input_data, input_field, None) - if value: - payload[api_field] = value.strftime("%Y-%m-%dT%H:%M:%S.000Z") + if input_data.moderation: + sdk_kwargs["moderation"] = input_data.moderation - try: - response = await Requests().post(url, headers=headers, json=payload) - data = response.json() - yield "results", data.get("results", []) - except Exception as e: - yield "error", str(e) + # check if we need to use find_similar_and_contents + content_settings = process_contents_settings(input_data.contents) + + aexa = AsyncExa(api_key=credentials.api_key.get_secret_value()) + + if content_settings: + # Use find_similar_and_contents when contents are requested + sdk_kwargs["text"] = content_settings.get("text", False) + if "highlights" in content_settings: + sdk_kwargs["highlights"] = content_settings["highlights"] + if "summary" in content_settings: + sdk_kwargs["summary"] = content_settings["summary"] + response = await aexa.find_similar_and_contents(**sdk_kwargs) + else: + response = await aexa.find_similar(**sdk_kwargs) + + converted_results = [ + ExaSearchResults.from_sdk(sdk_result) + for sdk_result in response.results or [] + ] + + yield "results", 
converted_results + for result in converted_results: + yield "result", result + + if response.context: + yield "context", response.context + + if response.cost_dollars: + yield "cost_dollars", response.cost_dollars diff --git a/autogpt_platform/backend/backend/blocks/exa/webhook_blocks.py b/autogpt_platform/backend/backend/blocks/exa/webhook_blocks.py index eb3854ed9c..6995930c8f 100644 --- a/autogpt_platform/backend/backend/blocks/exa/webhook_blocks.py +++ b/autogpt_platform/backend/backend/blocks/exa/webhook_blocks.py @@ -9,7 +9,8 @@ from backend.sdk import ( Block, BlockCategory, BlockOutput, - BlockSchema, + BlockSchemaInput, + BlockSchemaOutput, BlockType, BlockWebhookConfig, CredentialsMetaInput, @@ -84,7 +85,7 @@ class ExaWebsetWebhookBlock(Block): including creation, updates, searches, and exports. """ - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: CredentialsMetaInput = exa.credentials_field( description="Exa API credentials for webhook management" ) @@ -104,7 +105,7 @@ class ExaWebsetWebhookBlock(Block): description="Webhook payload data", default={}, hidden=True ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): event_type: str = SchemaField(description="Type of event that occurred") event_id: str = SchemaField(description="Unique identifier for this event") webset_id: str = SchemaField(description="ID of the affected webset") @@ -131,45 +132,33 @@ class ExaWebsetWebhookBlock(Block): async def run(self, input_data: Input, **kwargs) -> BlockOutput: """Process incoming Exa webhook payload.""" - try: - payload = input_data.payload + payload = input_data.payload - # Extract event details - event_type = payload.get("eventType", "unknown") - event_id = payload.get("eventId", "") + # Extract event details + event_type = payload.get("eventType", "unknown") + event_id = payload.get("eventId", "") - # Get webset ID from payload or input - webset_id = payload.get("websetId", input_data.webset_id) + # Get webset ID from payload or input + webset_id = payload.get("websetId", input_data.webset_id) - # Check if we should process this event based on filter - should_process = self._should_process_event( - event_type, input_data.event_filter - ) + # Check if we should process this event based on filter + should_process = self._should_process_event(event_type, input_data.event_filter) - if not should_process: - # Skip events that don't match our filter - return + if not should_process: + # Skip events that don't match our filter + return - # Extract event data - event_data = payload.get("data", {}) - timestamp = payload.get("occurredAt", payload.get("createdAt", "")) - metadata = payload.get("metadata", {}) + # Extract event data + event_data = payload.get("data", {}) + timestamp = payload.get("occurredAt", payload.get("createdAt", "")) + metadata = payload.get("metadata", {}) - yield "event_type", event_type - yield "event_id", event_id - yield "webset_id", webset_id - yield "data", event_data - yield "timestamp", timestamp - yield "metadata", metadata - - except Exception as e: - # Handle errors gracefully - yield "event_type", "error" - yield "event_id", "" - yield "webset_id", input_data.webset_id - yield "data", {"error": str(e)} - yield "timestamp", "" - yield "metadata", {} + yield "event_type", event_type + yield "event_id", event_id + yield "webset_id", webset_id + yield "data", event_data + yield "timestamp", timestamp + yield "metadata", metadata def _should_process_event( self, event_type: str, event_filter: WebsetEventFilter diff --git 
a/autogpt_platform/backend/backend/blocks/exa/websets.py b/autogpt_platform/backend/backend/blocks/exa/websets.py index 37e0971d92..069911af66 100644 --- a/autogpt_platform/backend/backend/blocks/exa/websets.py +++ b/autogpt_platform/backend/backend/blocks/exa/websets.py @@ -1,8 +1,9 @@ +import time from datetime import datetime from enum import Enum from typing import Annotated, Any, Dict, List, Optional -from exa_py import Exa +from exa_py import AsyncExa, Exa from exa_py.websets.types import ( CreateCriterionParameters, CreateEnrichmentParameters, @@ -31,9 +32,9 @@ from backend.sdk import ( Block, BlockCategory, BlockOutput, - BlockSchema, + BlockSchemaInput, + BlockSchemaOutput, CredentialsMetaInput, - Requests, SchemaField, ) @@ -104,7 +105,7 @@ class Webset(BaseModel): class ExaCreateWebsetBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: CredentialsMetaInput = exa.credentials_field( description="The Exa integration requires an API Key." ) @@ -219,15 +220,32 @@ class ExaCreateWebsetBlock(Block): advanced=True, ) - class Output(BlockSchema): - webset: Webset = SchemaField( - description="The unique identifier for the created webset" + # Polling parameters + wait_for_initial_results: bool = SchemaField( + default=True, + description="Wait for the initial search to complete before returning. This ensures you get results immediately.", + ) + polling_timeout: int = SchemaField( + default=300, + description="Maximum time to wait for completion in seconds (only used if wait_for_initial_results is True)", + advanced=True, + ge=1, + le=600, + ) + + class Output(BlockSchemaOutput): + webset: Webset = SchemaField(description="The created webset with full details") + initial_item_count: Optional[int] = SchemaField( + description="Number of items found in the initial search (only if wait_for_initial_results was True)" + ) + completion_time: Optional[float] = SchemaField( + description="Time taken to complete the initial search in seconds (only if wait_for_initial_results was True)" ) def __init__(self): super().__init__( id="0cda29ff-c549-4a19-8805-c982b7d4ec34", - description="Create a new Exa Webset for persistent web search collections", + description="Create a new Exa Webset for persistent web search collections with optional waiting for initial results", categories={BlockCategory.SEARCH}, input_schema=ExaCreateWebsetBlock.Input, output_schema=ExaCreateWebsetBlock.Output, @@ -239,9 +257,6 @@ class ExaCreateWebsetBlock(Block): exa = Exa(credentials.api_key.get_secret_value()) - # ------------------------------------------------------------ - # Build entity (if explicitly provided) - # ------------------------------------------------------------ entity = None if input_data.search_entity_type == SearchEntityType.COMPANY: entity = WebsetCompanyEntity(type="company") @@ -259,9 +274,6 @@ class ExaCreateWebsetBlock(Block): type="custom", description=input_data.search_entity_description ) - # ------------------------------------------------------------ - # Build criteria list - # ------------------------------------------------------------ criteria = None if input_data.search_criteria: criteria = [ @@ -269,9 +281,6 @@ class ExaCreateWebsetBlock(Block): for item in input_data.search_criteria ] - # ------------------------------------------------------------ - # Build exclude sources list - # ------------------------------------------------------------ exclude_items = None if input_data.search_exclude_sources: exclude_items = [] @@ -288,9 +297,6 @@ class 
ExaCreateWebsetBlock(Block): source_enum = ImportSource.import_ exclude_items.append(ExcludeItem(source=source_enum, id=src_id)) - # ------------------------------------------------------------ - # Build scope list - # ------------------------------------------------------------ scope_items = None if input_data.search_scope_sources: scope_items = [] @@ -319,9 +325,6 @@ class ExaCreateWebsetBlock(Block): ScopeItem(source=src_enum, id=src_id, relationship=relationship) ) - # ------------------------------------------------------------ - # Assemble search parameters (only if a query is provided) - # ------------------------------------------------------------ search_params = None if input_data.search_query: search_params = CreateWebsetParametersSearch( @@ -333,9 +336,6 @@ class ExaCreateWebsetBlock(Block): scope=scope_items, ) - # ------------------------------------------------------------ - # Build imports list - # ------------------------------------------------------------ imports_params = None if input_data.import_sources: imports_params = [] @@ -349,9 +349,6 @@ class ExaCreateWebsetBlock(Block): source_enum = ImportSource.import_ imports_params.append(ImportItem(source=source_enum, id=src_id)) - # ------------------------------------------------------------ - # Build enrichment list - # ------------------------------------------------------------ enrichments_params = None if input_data.enrichment_descriptions: enrichments_params = [] @@ -386,25 +383,136 @@ class ExaCreateWebsetBlock(Block): ) ) - # ------------------------------------------------------------ - # Create the webset - # ------------------------------------------------------------ - webset = exa.websets.create( - params=CreateWebsetParameters( - search=search_params, - imports=imports_params, - enrichments=enrichments_params, - external_id=input_data.external_id, - metadata=input_data.metadata, + try: + start_time = time.time() + webset = exa.websets.create( + params=CreateWebsetParameters( + search=search_params, + imports=imports_params, + enrichments=enrichments_params, + external_id=input_data.external_id, + metadata=input_data.metadata, + ) ) + + webset_result = Webset.model_validate(webset.model_dump(by_alias=True)) + + # If wait_for_initial_results is True, poll for completion + if input_data.wait_for_initial_results and search_params: + final_webset = exa.websets.wait_until_idle( + id=webset_result.id, + timeout=input_data.polling_timeout, + poll_interval=5, + ) + completion_time = time.time() - start_time + + item_count = 0 + if final_webset.searches: + for search in final_webset.searches: + if search.progress: + item_count += search.progress.found + + yield "webset", webset_result + yield "initial_item_count", item_count + yield "completion_time", completion_time + else: + yield "webset", webset_result + + except ValueError as e: + raise ValueError(f"Invalid webset configuration: {e}") from e + + +class ExaCreateOrFindWebsetBlock(Block): + """Create a new webset or return existing one if external_id already exists (idempotent).""" + + class Input(BlockSchemaInput): + credentials: CredentialsMetaInput = exa.credentials_field( + description="The Exa integration requires an API Key." 
) - # Use alias field names returned from Exa SDK so that nested models validate correctly - yield "webset", Webset.model_validate(webset.model_dump(by_alias=True)) + external_id: str = SchemaField( + description="External identifier for this webset - used to find existing or create new", + placeholder="my-unique-webset-id", + ) + + search_query: Optional[str] = SchemaField( + default=None, + description="Search query (optional - only needed if creating new webset)", + placeholder="Marketing agencies based in the US", + ) + search_count: int = SchemaField( + default=10, + description="Number of items to find in initial search", + ge=1, + le=1000, + ) + + metadata: Optional[dict] = SchemaField( + default=None, + description="Key-value pairs to associate with the webset", + advanced=True, + ) + + class Output(BlockSchemaOutput): + webset: Webset = SchemaField( + description="The webset (existing or newly created)" + ) + was_created: bool = SchemaField( + description="True if webset was newly created, False if it already existed" + ) + + def __init__(self): + super().__init__( + id="214542b6-3603-4bea-bc07-f51c2871cbd9", + description="Create a new webset or return existing one by external_id (idempotent operation)", + categories={BlockCategory.SEARCH}, + input_schema=ExaCreateOrFindWebsetBlock.Input, + output_schema=ExaCreateOrFindWebsetBlock.Output, + ) + + async def run( + self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs + ) -> BlockOutput: + import httpx + + aexa = AsyncExa(api_key=credentials.api_key.get_secret_value()) + + try: + webset = aexa.websets.get(id=input_data.external_id) + webset_result = Webset.model_validate(webset.model_dump(by_alias=True)) + + yield "webset", webset_result + yield "was_created", False + + except httpx.HTTPStatusError as e: + if e.response.status_code == 404: + # Not found - create new webset + search_params = None + if input_data.search_query: + search_params = CreateWebsetParametersSearch( + query=input_data.search_query, + count=input_data.search_count, + ) + + webset = aexa.websets.create( + params=CreateWebsetParameters( + search=search_params, + external_id=input_data.external_id, + metadata=input_data.metadata, + ) + ) + + webset_result = Webset.model_validate(webset.model_dump(by_alias=True)) + + yield "webset", webset_result + yield "was_created", True + else: + # Other HTTP errors should propagate + raise class ExaUpdateWebsetBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: CredentialsMetaInput = exa.credentials_field( description="The Exa integration requires an API Key." 
) @@ -417,21 +525,16 @@ class ExaUpdateWebsetBlock(Block): description="Key-value pairs to associate with this webset (set to null to clear)", ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): webset_id: str = SchemaField(description="The unique identifier for the webset") status: str = SchemaField(description="The status of the webset") external_id: Optional[str] = SchemaField( - description="The external identifier for the webset", default=None - ) - metadata: dict = SchemaField( - description="Updated metadata for the webset", default_factory=dict + description="The external identifier for the webset" ) + metadata: dict = SchemaField(description="Updated metadata for the webset") updated_at: str = SchemaField( description="The date and time the webset was updated" ) - error: str = SchemaField( - description="Error message if the request failed", default="" - ) def __init__(self): super().__init__( @@ -445,37 +548,31 @@ class ExaUpdateWebsetBlock(Block): async def run( self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs ) -> BlockOutput: - url = f"https://api.exa.ai/websets/v0/websets/{input_data.webset_id}" - headers = { - "Content-Type": "application/json", - "x-api-key": credentials.api_key.get_secret_value(), - } + aexa = AsyncExa(api_key=credentials.api_key.get_secret_value()) - # Build the payload payload = {} if input_data.metadata is not None: payload["metadata"] = input_data.metadata - try: - response = await Requests().post(url, headers=headers, json=payload) - data = response.json() + sdk_webset = aexa.websets.update(id=input_data.webset_id, params=payload) - yield "webset_id", data.get("id", "") - yield "status", data.get("status", "") - yield "external_id", data.get("externalId") - yield "metadata", data.get("metadata", {}) - yield "updated_at", data.get("updatedAt", "") + status_str = ( + sdk_webset.status.value + if hasattr(sdk_webset.status, "value") + else str(sdk_webset.status) + ) - except Exception as e: - yield "error", str(e) - yield "webset_id", "" - yield "status", "" - yield "metadata", {} - yield "updated_at", "" + yield "webset_id", sdk_webset.id + yield "status", status_str + yield "external_id", sdk_webset.external_id + yield "metadata", sdk_webset.metadata or {} + yield "updated_at", ( + sdk_webset.updated_at.isoformat() if sdk_webset.updated_at else "" + ) class ExaListWebsetsBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: CredentialsMetaInput = exa.credentials_field( description="The Exa integration requires an API Key." 
) @@ -497,19 +594,13 @@ class ExaListWebsetsBlock(Block): advanced=True, ) - class Output(BlockSchema): - websets: list[Webset] = SchemaField( - description="List of websets", default_factory=list - ) + class Output(BlockSchemaOutput): + websets: list[Webset] = SchemaField(description="List of websets") has_more: bool = SchemaField( - description="Whether there are more results to paginate through", - default=False, + description="Whether there are more results to paginate through" ) next_cursor: Optional[str] = SchemaField( - description="Cursor for the next page of results", default=None - ) - error: str = SchemaField( - description="Error message if the request failed", default="" + description="Cursor for the next page of results" ) def __init__(self): @@ -524,33 +615,24 @@ class ExaListWebsetsBlock(Block): async def run( self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs ) -> BlockOutput: - url = "https://api.exa.ai/websets/v0/websets" - headers = { - "x-api-key": credentials.api_key.get_secret_value(), - } + aexa = AsyncExa(api_key=credentials.api_key.get_secret_value()) - params: dict[str, Any] = { - "limit": input_data.limit, - } - if input_data.cursor: - params["cursor"] = input_data.cursor + response = aexa.websets.list( + cursor=input_data.cursor, + limit=input_data.limit, + ) - try: - response = await Requests().get(url, headers=headers, params=params) - data = response.json() + websets_data = [ + w.model_dump(by_alias=True, exclude_none=True) for w in response.data + ] - yield "websets", data.get("data", []) - yield "has_more", data.get("hasMore", False) - yield "next_cursor", data.get("nextCursor") - - except Exception as e: - yield "error", str(e) - yield "websets", [] - yield "has_more", False + yield "websets", websets_data + yield "has_more", response.has_more + yield "next_cursor", response.next_cursor class ExaGetWebsetBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: CredentialsMetaInput = exa.credentials_field( description="The Exa integration requires an API Key." 
) @@ -559,28 +641,21 @@ class ExaGetWebsetBlock(Block): placeholder="webset-id-or-external-id", ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): webset_id: str = SchemaField(description="The unique identifier for the webset") status: str = SchemaField(description="The status of the webset") external_id: Optional[str] = SchemaField( - description="The external identifier for the webset", default=None + description="The external identifier for the webset" ) searches: list[dict] = SchemaField( - description="The searches performed on the webset", default_factory=list + description="The searches performed on the webset" ) enrichments: list[dict] = SchemaField( - description="The enrichments applied to the webset", default_factory=list - ) - monitors: list[dict] = SchemaField( - description="The monitors for the webset", default_factory=list - ) - items: Optional[list[dict]] = SchemaField( - description="The items in the webset (if expand_items is true)", - default=None, + description="The enrichments applied to the webset" ) + monitors: list[dict] = SchemaField(description="The monitors for the webset") metadata: dict = SchemaField( - description="Key-value pairs associated with the webset", - default_factory=dict, + description="Key-value pairs associated with the webset" ) created_at: str = SchemaField( description="The date and time the webset was created" @@ -588,9 +663,6 @@ class ExaGetWebsetBlock(Block): updated_at: str = SchemaField( description="The date and time the webset was last updated" ) - error: str = SchemaField( - description="Error message if the request failed", default="" - ) def __init__(self): super().__init__( @@ -604,40 +676,46 @@ class ExaGetWebsetBlock(Block): async def run( self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs ) -> BlockOutput: - url = f"https://api.exa.ai/websets/v0/websets/{input_data.webset_id}" - headers = { - "x-api-key": credentials.api_key.get_secret_value(), - } + aexa = AsyncExa(api_key=credentials.api_key.get_secret_value()) - try: - response = await Requests().get(url, headers=headers) - data = response.json() + sdk_webset = aexa.websets.get(id=input_data.webset_id) - yield "webset_id", data.get("id", "") - yield "status", data.get("status", "") - yield "external_id", data.get("externalId") - yield "searches", data.get("searches", []) - yield "enrichments", data.get("enrichments", []) - yield "monitors", data.get("monitors", []) - yield "items", data.get("items") - yield "metadata", data.get("metadata", {}) - yield "created_at", data.get("createdAt", "") - yield "updated_at", data.get("updatedAt", "") + status_str = ( + sdk_webset.status.value + if hasattr(sdk_webset.status, "value") + else str(sdk_webset.status) + ) - except Exception as e: - yield "error", str(e) - yield "webset_id", "" - yield "status", "" - yield "searches", [] - yield "enrichments", [] - yield "monitors", [] - yield "metadata", {} - yield "created_at", "" - yield "updated_at", "" + searches_data = [ + s.model_dump(by_alias=True, exclude_none=True) + for s in sdk_webset.searches or [] + ] + enrichments_data = [ + e.model_dump(by_alias=True, exclude_none=True) + for e in sdk_webset.enrichments or [] + ] + monitors_data = [ + m.model_dump(by_alias=True, exclude_none=True) + for m in sdk_webset.monitors or [] + ] + + yield "webset_id", sdk_webset.id + yield "status", status_str + yield "external_id", sdk_webset.external_id + yield "searches", searches_data + yield "enrichments", enrichments_data + yield "monitors", monitors_data + yield 
"metadata", sdk_webset.metadata or {} + yield "created_at", ( + sdk_webset.created_at.isoformat() if sdk_webset.created_at else "" + ) + yield "updated_at", ( + sdk_webset.updated_at.isoformat() if sdk_webset.updated_at else "" + ) class ExaDeleteWebsetBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: CredentialsMetaInput = exa.credentials_field( description="The Exa integration requires an API Key." ) @@ -646,20 +724,15 @@ class ExaDeleteWebsetBlock(Block): placeholder="webset-id-or-external-id", ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): webset_id: str = SchemaField( description="The unique identifier for the deleted webset" ) external_id: Optional[str] = SchemaField( - description="The external identifier for the deleted webset", default=None + description="The external identifier for the deleted webset" ) status: str = SchemaField(description="The status of the deleted webset") - success: str = SchemaField( - description="Whether the deletion was successful", default="true" - ) - error: str = SchemaField( - description="Error message if the request failed", default="" - ) + success: str = SchemaField(description="Whether the deletion was successful") def __init__(self): super().__init__( @@ -673,29 +746,24 @@ class ExaDeleteWebsetBlock(Block): async def run( self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs ) -> BlockOutput: - url = f"https://api.exa.ai/websets/v0/websets/{input_data.webset_id}" - headers = { - "x-api-key": credentials.api_key.get_secret_value(), - } + aexa = AsyncExa(api_key=credentials.api_key.get_secret_value()) - try: - response = await Requests().delete(url, headers=headers) - data = response.json() + deleted_webset = aexa.websets.delete(id=input_data.webset_id) - yield "webset_id", data.get("id", "") - yield "external_id", data.get("externalId") - yield "status", data.get("status", "") - yield "success", "true" + status_str = ( + deleted_webset.status.value + if hasattr(deleted_webset.status, "value") + else str(deleted_webset.status) + ) - except Exception as e: - yield "error", str(e) - yield "webset_id", "" - yield "status", "" - yield "success", "false" + yield "webset_id", deleted_webset.id + yield "external_id", deleted_webset.external_id + yield "status", status_str + yield "success", "true" class ExaCancelWebsetBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: CredentialsMetaInput = exa.credentials_field( description="The Exa integration requires an API Key." 
) @@ -704,19 +772,16 @@ class ExaCancelWebsetBlock(Block): placeholder="webset-id-or-external-id", ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): webset_id: str = SchemaField(description="The unique identifier for the webset") status: str = SchemaField( description="The status of the webset after cancellation" ) external_id: Optional[str] = SchemaField( - description="The external identifier for the webset", default=None + description="The external identifier for the webset" ) success: str = SchemaField( - description="Whether the cancellation was successful", default="true" - ) - error: str = SchemaField( - description="Error message if the request failed", default="" + description="Whether the cancellation was successful" ) def __init__(self): @@ -731,22 +796,612 @@ class ExaCancelWebsetBlock(Block): async def run( self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs ) -> BlockOutput: - url = f"https://api.exa.ai/websets/v0/websets/{input_data.webset_id}/cancel" - headers = { - "x-api-key": credentials.api_key.get_secret_value(), + aexa = AsyncExa(api_key=credentials.api_key.get_secret_value()) + + canceled_webset = aexa.websets.cancel(id=input_data.webset_id) + + status_str = ( + canceled_webset.status.value + if hasattr(canceled_webset.status, "value") + else str(canceled_webset.status) + ) + + yield "webset_id", canceled_webset.id + yield "status", status_str + yield "external_id", canceled_webset.external_id + yield "success", "true" + + +# Mirrored models for Preview response stability +class PreviewCriterionModel(BaseModel): + """Stable model for preview criteria.""" + + description: str + + @classmethod + def from_sdk(cls, sdk_criterion) -> "PreviewCriterionModel": + """Convert SDK criterion to our model.""" + return cls(description=sdk_criterion.description) + + +class PreviewEnrichmentModel(BaseModel): + """Stable model for preview enrichment.""" + + description: str + format: str + options: List[str] + + @classmethod + def from_sdk(cls, sdk_enrichment) -> "PreviewEnrichmentModel": + """Convert SDK enrichment to our model.""" + format_str = ( + sdk_enrichment.format.value + if hasattr(sdk_enrichment.format, "value") + else str(sdk_enrichment.format) + ) + + options_list = [] + if sdk_enrichment.options: + for opt in sdk_enrichment.options: + opt_dict = opt.model_dump(by_alias=True) + options_list.append(opt_dict.get("label", "")) + + return cls( + description=sdk_enrichment.description, + format=format_str, + options=options_list, + ) + + +class PreviewSearchModel(BaseModel): + """Stable model for preview search details.""" + + entity_type: str + entity_description: Optional[str] + criteria: List[PreviewCriterionModel] + + @classmethod + def from_sdk(cls, sdk_search) -> "PreviewSearchModel": + """Convert SDK search preview to our model.""" + # Extract entity type from union + entity_dict = sdk_search.entity.model_dump(by_alias=True) + entity_type = entity_dict.get("type", "auto") + entity_description = entity_dict.get("description") + + # Convert criteria + criteria = [ + PreviewCriterionModel.from_sdk(c) for c in sdk_search.criteria or [] + ] + + return cls( + entity_type=entity_type, + entity_description=entity_description, + criteria=criteria, + ) + + +class PreviewWebsetModel(BaseModel): + """Stable model for preview response.""" + + search: PreviewSearchModel + enrichments: List[PreviewEnrichmentModel] + + @classmethod + def from_sdk(cls, sdk_preview) -> "PreviewWebsetModel": + """Convert SDK PreviewWebsetResponse to our model.""" + + 
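+ # Convert the SDK preview into the stable mirrored models defined above
+ # (search entity, criteria, and enrichment columns) for downstream blocks.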
search = PreviewSearchModel.from_sdk(sdk_preview.search) + enrichments = [ + PreviewEnrichmentModel.from_sdk(e) for e in sdk_preview.enrichments or [] + ] + + return cls(search=search, enrichments=enrichments) + + +class ExaPreviewWebsetBlock(Block): + class Input(BlockSchemaInput): + credentials: CredentialsMetaInput = exa.credentials_field( + description="The Exa integration requires an API Key." + ) + query: str = SchemaField( + description="Your search query to preview. Use this to see how Exa will interpret your search before creating a webset.", + placeholder="Marketing agencies based in the US, with brands worked with and city", + ) + entity_type: Optional[SearchEntityType] = SchemaField( + default=None, + description="Entity type to force: 'company', 'person', 'article', 'research_paper', or 'custom'. If not provided, Exa will auto-detect.", + advanced=True, + ) + entity_description: Optional[str] = SchemaField( + default=None, + description="Description for custom entity type (required when entity_type is 'custom')", + advanced=True, + ) + + class Output(BlockSchemaOutput): + preview: PreviewWebsetModel = SchemaField( + description="Full preview response with search and enrichment details" + ) + entity_type: str = SchemaField( + description="The detected or specified entity type" + ) + entity_description: Optional[str] = SchemaField( + description="Description of the entity type" + ) + criteria: list[PreviewCriterionModel] = SchemaField( + description="Generated search criteria that will be used" + ) + enrichment_columns: list[PreviewEnrichmentModel] = SchemaField( + description="Available enrichment columns that can be extracted" + ) + interpretation: str = SchemaField( + description="Human-readable interpretation of how the query will be processed" + ) + suggestions: list[str] = SchemaField( + description="Suggestions for improving the query" + ) + + def __init__(self): + super().__init__( + id="f8c4e2a1-9b3d-4e5f-a6c7-d8e9f0a1b2c3", + description="Preview how a search query will be interpreted before creating a webset. 
Helps understand entity detection, criteria generation, and available enrichments.", + categories={BlockCategory.SEARCH}, + input_schema=ExaPreviewWebsetBlock.Input, + output_schema=ExaPreviewWebsetBlock.Output, + ) + + async def run( + self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs + ) -> BlockOutput: + aexa = AsyncExa(api_key=credentials.api_key.get_secret_value()) + + payload: dict[str, Any] = { + "query": input_data.query, } - try: - response = await Requests().post(url, headers=headers) - data = response.json() + if input_data.entity_type: + entity: dict[str, Any] = {"type": input_data.entity_type.value} + if ( + input_data.entity_type == SearchEntityType.CUSTOM + and input_data.entity_description + ): + entity["description"] = input_data.entity_description + payload["entity"] = entity - yield "webset_id", data.get("id", "") - yield "status", data.get("status", "") - yield "external_id", data.get("externalId") - yield "success", "true" + sdk_preview = aexa.websets.preview(params=payload) - except Exception as e: - yield "error", str(e) - yield "webset_id", "" - yield "status", "" - yield "success", "false" + preview = PreviewWebsetModel.from_sdk(sdk_preview) + + entity_type = preview.search.entity_type + entity_description = preview.search.entity_description + criteria = preview.search.criteria + enrichments = preview.enrichments + + # Generate interpretation + interpretation = f"Query will search for {entity_type}" + if entity_description: + interpretation += f" ({entity_description})" + if criteria: + interpretation += f" with {len(criteria)} criteria" + if enrichments: + interpretation += f" and {len(enrichments)} available enrichment columns" + + # Generate suggestions + suggestions = [] + if not criteria: + suggestions.append( + "Consider adding specific criteria to narrow your search" + ) + if not enrichments: + suggestions.append( + "Consider specifying what data points you want to extract" + ) + + # Yield full model first + yield "preview", preview + + # Then yield individual fields for graph flexibility + yield "entity_type", entity_type + yield "entity_description", entity_description + yield "criteria", criteria + yield "enrichment_columns", enrichments + yield "interpretation", interpretation + yield "suggestions", suggestions + + +class ExaWebsetStatusBlock(Block): + """Get a quick status overview of a webset without fetching all details.""" + + class Input(BlockSchemaInput): + credentials: CredentialsMetaInput = exa.credentials_field( + description="The Exa integration requires an API Key." 
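# Editor's sketch (assumes only the payload shape built in ExaPreviewWebsetBlock.run
# above; the query and description values are made up): the params dict passed to
# aexa.websets.preview() for an auto-detected entity versus a forced custom entity.
auto_detect_params = {"query": "Marketing agencies based in the US"}

custom_entity_params = {
    "query": "Open-source vector databases",
    "entity": {
        "type": "custom",                    # SearchEntityType.CUSTOM.value
        "description": "software project",   # only sent when the type is custom
    },
}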
+ ) + webset_id: str = SchemaField( + description="The ID or external ID of the Webset", + placeholder="webset-id-or-external-id", + ) + + class Output(BlockSchemaOutput): + webset_id: str = SchemaField(description="The webset identifier") + status: str = SchemaField( + description="Current status (idle, running, paused, etc.)" + ) + item_count: int = SchemaField(description="Total number of items in the webset") + search_count: int = SchemaField(description="Number of searches performed") + enrichment_count: int = SchemaField( + description="Number of enrichments configured" + ) + monitor_count: int = SchemaField(description="Number of monitors configured") + last_updated: str = SchemaField(description="When the webset was last updated") + is_processing: bool = SchemaField( + description="Whether any operations are currently running" + ) + + def __init__(self): + super().__init__( + id="47cc3cd8-840f-4ec4-8d40-fcaba75fbe1a", + description="Get a quick status overview of a webset", + categories={BlockCategory.SEARCH}, + input_schema=ExaWebsetStatusBlock.Input, + output_schema=ExaWebsetStatusBlock.Output, + ) + + async def run( + self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs + ) -> BlockOutput: + aexa = AsyncExa(api_key=credentials.api_key.get_secret_value()) + + webset = aexa.websets.get(id=input_data.webset_id) + + status = ( + webset.status.value + if hasattr(webset.status, "value") + else str(webset.status) + ) + is_processing = status in ["running", "pending"] + + # Estimate item count from search progress + item_count = 0 + if webset.searches: + for search in webset.searches: + if search.progress: + item_count += search.progress.found + + # Count searches, enrichments, monitors + search_count = len(webset.searches or []) + enrichment_count = len(webset.enrichments or []) + monitor_count = len(webset.monitors or []) + + yield "webset_id", webset.id + yield "status", status + yield "item_count", item_count + yield "search_count", search_count + yield "enrichment_count", enrichment_count + yield "monitor_count", monitor_count + yield "last_updated", webset.updated_at.isoformat() if webset.updated_at else "" + yield "is_processing", is_processing + + +# Summary models for ExaWebsetSummaryBlock +class SearchSummaryModel(BaseModel): + """Summary of searches in a webset.""" + + total_searches: int + completed_searches: int + total_items_found: int + queries: List[str] + + +class EnrichmentSummaryModel(BaseModel): + """Summary of enrichments in a webset.""" + + total_enrichments: int + completed_enrichments: int + enrichment_types: List[str] + titles: List[str] + + +class MonitorSummaryModel(BaseModel): + """Summary of monitors in a webset.""" + + total_monitors: int + active_monitors: int + next_run: Optional[datetime] = None + + +class WebsetStatisticsModel(BaseModel): + """Various statistics about a webset.""" + + total_operations: int + is_processing: bool + has_monitors: bool + avg_items_per_search: float + + +class ExaWebsetSummaryBlock(Block): + """Get a comprehensive summary of a webset including samples and statistics.""" + + class Input(BlockSchemaInput): + credentials: CredentialsMetaInput = exa.credentials_field( + description="The Exa integration requires an API Key." 
+ ) + webset_id: str = SchemaField( + description="The ID or external ID of the Webset", + placeholder="webset-id-or-external-id", + ) + include_sample_items: bool = SchemaField( + default=True, + description="Include sample items in the summary", + ) + sample_size: int = SchemaField( + default=3, + description="Number of sample items to include", + ge=0, + le=10, + ) + include_search_details: bool = SchemaField( + default=True, + description="Include details about searches", + ) + include_enrichment_details: bool = SchemaField( + default=True, + description="Include details about enrichments", + ) + + class Output(BlockSchemaOutput): + webset_id: str = SchemaField(description="The webset identifier") + status: str = SchemaField(description="Current status") + entity_type: str = SchemaField(description="Type of entities in the webset") + total_items: int = SchemaField(description="Total number of items") + sample_items: list[Dict[str, Any]] = SchemaField( + description="Sample items from the webset" + ) + search_summary: SearchSummaryModel = SchemaField( + description="Summary of searches performed" + ) + enrichment_summary: EnrichmentSummaryModel = SchemaField( + description="Summary of enrichments applied" + ) + monitor_summary: MonitorSummaryModel = SchemaField( + description="Summary of monitors configured" + ) + statistics: WebsetStatisticsModel = SchemaField( + description="Various statistics about the webset" + ) + created_at: str = SchemaField(description="When the webset was created") + updated_at: str = SchemaField(description="When the webset was last updated") + + def __init__(self): + super().__init__( + id="9eff1710-a49b-490e-b486-197bf8b23c61", + description="Get a comprehensive summary of a webset with samples and statistics", + categories={BlockCategory.SEARCH}, + input_schema=ExaWebsetSummaryBlock.Input, + output_schema=ExaWebsetSummaryBlock.Output, + ) + + async def run( + self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs + ) -> BlockOutput: + aexa = AsyncExa(api_key=credentials.api_key.get_secret_value()) + + webset = aexa.websets.get(id=input_data.webset_id) + + # Extract basic info + webset_id = webset.id + status = ( + webset.status.value + if hasattr(webset.status, "value") + else str(webset.status) + ) + + # Determine entity type from searches + entity_type = "unknown" + searches = webset.searches or [] + if searches: + first_search = searches[0] + if first_search.entity: + entity_dict = first_search.entity.model_dump( + by_alias=True, exclude_none=True + ) + entity_type = entity_dict.get("type", "unknown") + + # Get sample items if requested + sample_items_data = [] + total_items = 0 + + if input_data.include_sample_items and input_data.sample_size > 0: + items_response = aexa.websets.items.list( + webset_id=input_data.webset_id, limit=input_data.sample_size + ) + sample_items_data = [ + item.model_dump(by_alias=True, exclude_none=True) + for item in items_response.data + ] + total_items = len(sample_items_data) + + # Build search summary using Pydantic model + search_summary = SearchSummaryModel( + total_searches=0, + completed_searches=0, + total_items_found=0, + queries=[], + ) + if input_data.include_search_details and searches: + search_summary = SearchSummaryModel( + total_searches=len(searches), + completed_searches=sum( + 1 + for s in searches + if (s.status.value if hasattr(s.status, "value") else str(s.status)) + == "completed" + ), + total_items_found=int( + sum(s.progress.found if s.progress else 0 for s in searches) + ), + 
queries=[s.query for s in searches[:3]], # First 3 queries + ) + + # Build enrichment summary using Pydantic model + enrichment_summary = EnrichmentSummaryModel( + total_enrichments=0, + completed_enrichments=0, + enrichment_types=[], + titles=[], + ) + enrichments = webset.enrichments or [] + if input_data.include_enrichment_details and enrichments: + enrichment_summary = EnrichmentSummaryModel( + total_enrichments=len(enrichments), + completed_enrichments=sum( + 1 + for e in enrichments + if (e.status.value if hasattr(e.status, "value") else str(e.status)) + == "completed" + ), + enrichment_types=list( + set( + ( + e.format.value + if e.format and hasattr(e.format, "value") + else str(e.format) if e.format else "text" + ) + for e in enrichments + ) + ), + titles=[(e.title or e.description or "")[:50] for e in enrichments[:3]], + ) + + # Build monitor summary using Pydantic model + monitors = webset.monitors or [] + next_run_dt = None + if monitors: + next_runs = [m.next_run_at for m in monitors if m.next_run_at] + if next_runs: + next_run_dt = min(next_runs) + + monitor_summary = MonitorSummaryModel( + total_monitors=len(monitors), + active_monitors=sum( + 1 + for m in monitors + if (m.status.value if hasattr(m.status, "value") else str(m.status)) + == "enabled" + ), + next_run=next_run_dt, + ) + + # Build statistics using Pydantic model + statistics = WebsetStatisticsModel( + total_operations=len(searches) + len(enrichments), + is_processing=status in ["running", "pending"], + has_monitors=len(monitors) > 0, + avg_items_per_search=( + search_summary.total_items_found / len(searches) if searches else 0 + ), + ) + + yield "webset_id", webset_id + yield "status", status + yield "entity_type", entity_type + yield "total_items", total_items + yield "sample_items", sample_items_data + yield "search_summary", search_summary + yield "enrichment_summary", enrichment_summary + yield "monitor_summary", monitor_summary + yield "statistics", statistics + yield "created_at", webset.created_at.isoformat() if webset.created_at else "" + yield "updated_at", webset.updated_at.isoformat() if webset.updated_at else "" + + +class ExaWebsetReadyCheckBlock(Block): + """Check if a webset is ready for the next operation (conditional workflow helper).""" + + class Input(BlockSchemaInput): + credentials: CredentialsMetaInput = exa.credentials_field( + description="The Exa integration requires an API Key." 
+ ) + webset_id: str = SchemaField( + description="The ID or external ID of the Webset to check", + placeholder="webset-id-or-external-id", + ) + min_items: int = SchemaField( + default=1, + description="Minimum number of items required to be 'ready'", + ge=0, + ) + + class Output(BlockSchemaOutput): + is_ready: bool = SchemaField( + description="True if webset is idle AND has minimum items" + ) + status: str = SchemaField(description="Current webset status") + item_count: int = SchemaField(description="Number of items in webset") + has_searches: bool = SchemaField( + description="Whether webset has any searches configured" + ) + has_enrichments: bool = SchemaField( + description="Whether webset has any enrichments" + ) + recommendation: str = SchemaField( + description="Suggested next action (ready_to_process, waiting_for_results, needs_search, etc.)" + ) + + def __init__(self): + super().__init__( + id="faf9f0f3-e659-4264-b33b-284a02166bec", + description="Check if webset is ready for next operation - enables conditional workflow branching", + categories={BlockCategory.SEARCH, BlockCategory.LOGIC}, + input_schema=ExaWebsetReadyCheckBlock.Input, + output_schema=ExaWebsetReadyCheckBlock.Output, + ) + + async def run( + self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs + ) -> BlockOutput: + aexa = AsyncExa(api_key=credentials.api_key.get_secret_value()) + + # Get webset details + webset = aexa.websets.get(id=input_data.webset_id) + + status = ( + webset.status.value + if hasattr(webset.status, "value") + else str(webset.status) + ) + + # Estimate item count from search progress + item_count = 0 + if webset.searches: + for search in webset.searches: + if search.progress: + item_count += search.progress.found + + # Determine readiness + is_idle = status == "idle" + has_min_items = item_count >= input_data.min_items + is_ready = is_idle and has_min_items + + # Check resources + has_searches = len(webset.searches or []) > 0 + has_enrichments = len(webset.enrichments or []) > 0 + + # Generate recommendation + recommendation = "" + if not has_searches: + recommendation = "needs_search" + elif status in ["running", "pending"]: + recommendation = "waiting_for_results" + elif not has_min_items: + recommendation = "insufficient_items" + elif not has_enrichments: + recommendation = "ready_to_enrich" + else: + recommendation = "ready_to_process" + + yield "is_ready", is_ready + yield "status", status + yield "item_count", item_count + yield "has_searches", has_searches + yield "has_enrichments", has_enrichments + yield "recommendation", recommendation diff --git a/autogpt_platform/backend/backend/blocks/exa/websets_enrichment.py b/autogpt_platform/backend/backend/blocks/exa/websets_enrichment.py new file mode 100644 index 0000000000..242700713a --- /dev/null +++ b/autogpt_platform/backend/backend/blocks/exa/websets_enrichment.py @@ -0,0 +1,554 @@ +""" +Exa Websets Enrichment Management Blocks + +This module provides blocks for creating and managing enrichments on webset items, +allowing extraction of additional structured data from existing items. 
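# Editor's restatement of the recommendation ladder implemented in
# ExaWebsetReadyCheckBlock.run above (the block is the source of truth; this
# standalone function only makes the branching order easier to scan):
def _recommend(status: str, has_searches: bool, has_enrichments: bool,
               item_count: int, min_items: int) -> str:
    if not has_searches:
        return "needs_search"
    if status in ("running", "pending"):
        return "waiting_for_results"
    if item_count < min_items:
        return "insufficient_items"
    if not has_enrichments:
        return "ready_to_enrich"
    return "ready_to_process"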
+""" + +from enum import Enum +from typing import Any, Dict, List, Optional + +from exa_py import AsyncExa +from exa_py.websets.types import WebsetEnrichment as SdkWebsetEnrichment +from pydantic import BaseModel + +from backend.sdk import ( + APIKeyCredentials, + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, + CredentialsMetaInput, + Requests, + SchemaField, +) + +from ._config import exa + + +# Mirrored model for stability +class WebsetEnrichmentModel(BaseModel): + """Stable output model mirroring SDK WebsetEnrichment.""" + + id: str + webset_id: str + status: str + title: Optional[str] + description: str + format: str + options: List[str] + instructions: Optional[str] + metadata: Dict[str, Any] + created_at: str + updated_at: str + + @classmethod + def from_sdk(cls, enrichment: SdkWebsetEnrichment) -> "WebsetEnrichmentModel": + """Convert SDK WebsetEnrichment to our stable model.""" + # Extract options + options_list = [] + if enrichment.options: + for option in enrichment.options: + option_dict = option.model_dump(by_alias=True) + options_list.append(option_dict.get("label", "")) + + return cls( + id=enrichment.id, + webset_id=enrichment.webset_id, + status=( + enrichment.status.value + if hasattr(enrichment.status, "value") + else str(enrichment.status) + ), + title=enrichment.title, + description=enrichment.description, + format=( + enrichment.format.value + if enrichment.format and hasattr(enrichment.format, "value") + else "text" + ), + options=options_list, + instructions=enrichment.instructions, + metadata=enrichment.metadata if enrichment.metadata else {}, + created_at=( + enrichment.created_at.isoformat() if enrichment.created_at else "" + ), + updated_at=( + enrichment.updated_at.isoformat() if enrichment.updated_at else "" + ), + ) + + +class EnrichmentFormat(str, Enum): + """Format types for enrichment responses.""" + + TEXT = "text" # Free text response + DATE = "date" # Date/datetime format + NUMBER = "number" # Numeric value + OPTIONS = "options" # Multiple choice from provided options + EMAIL = "email" # Email address format + PHONE = "phone" # Phone number format + + +class ExaCreateEnrichmentBlock(Block): + """Create a new enrichment to extract additional data from webset items.""" + + class Input(BlockSchemaInput): + credentials: CredentialsMetaInput = exa.credentials_field( + description="The Exa integration requires an API Key." 
+ ) + webset_id: str = SchemaField( + description="The ID or external ID of the Webset", + placeholder="webset-id-or-external-id", + ) + description: str = SchemaField( + description="What data to extract from each item", + placeholder="Extract the company's main product or service offering", + ) + title: Optional[str] = SchemaField( + default=None, + description="Short title for this enrichment (auto-generated if not provided)", + placeholder="Main Product", + ) + format: EnrichmentFormat = SchemaField( + default=EnrichmentFormat.TEXT, + description="Expected format of the extracted data", + ) + options: list[str] = SchemaField( + default_factory=list, + description="Available options when format is 'options'", + placeholder='["B2B", "B2C", "Both", "Unknown"]', + advanced=True, + ) + apply_to_existing: bool = SchemaField( + default=True, + description="Apply this enrichment to existing items in the webset", + ) + metadata: Optional[dict] = SchemaField( + default=None, + description="Metadata to attach to the enrichment", + advanced=True, + ) + wait_for_completion: bool = SchemaField( + default=False, + description="Wait for the enrichment to complete on existing items", + ) + polling_timeout: int = SchemaField( + default=300, + description="Maximum time to wait for completion in seconds", + advanced=True, + ge=1, + le=600, + ) + + class Output(BlockSchemaOutput): + enrichment_id: str = SchemaField( + description="The unique identifier for the created enrichment" + ) + webset_id: str = SchemaField( + description="The webset this enrichment belongs to" + ) + status: str = SchemaField(description="Current status of the enrichment") + title: str = SchemaField(description="Title of the enrichment") + description: str = SchemaField( + description="Description of what data is extracted" + ) + format: str = SchemaField(description="Format of the extracted data") + instructions: str = SchemaField( + description="Generated instructions for the enrichment" + ) + items_enriched: Optional[int] = SchemaField( + description="Number of items enriched (if wait_for_completion was True)" + ) + completion_time: Optional[float] = SchemaField( + description="Time taken to complete in seconds (if wait_for_completion was True)" + ) + + def __init__(self): + super().__init__( + id="71146ae8-0cb1-4a15-8cde-eae30de71cb6", + description="Create enrichments to extract additional structured data from webset items", + categories={BlockCategory.AI, BlockCategory.SEARCH}, + input_schema=ExaCreateEnrichmentBlock.Input, + output_schema=ExaCreateEnrichmentBlock.Output, + ) + + async def run( + self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs + ) -> BlockOutput: + import time + + # Build the payload + payload: dict[str, Any] = { + "description": input_data.description, + "format": input_data.format.value, + } + + # Add title if provided + if input_data.title: + payload["title"] = input_data.title + + # Add options for 'options' format + if input_data.format == EnrichmentFormat.OPTIONS and input_data.options: + payload["options"] = [{"label": opt} for opt in input_data.options] + + # Add metadata if provided + if input_data.metadata: + payload["metadata"] = input_data.metadata + + start_time = time.time() + + # Use AsyncExa SDK + aexa = AsyncExa(api_key=credentials.api_key.get_secret_value()) + + sdk_enrichment = aexa.websets.enrichments.create( + webset_id=input_data.webset_id, params=payload + ) + + enrichment_id = sdk_enrichment.id + status = ( + sdk_enrichment.status.value + if 
hasattr(sdk_enrichment.status, "value") + else str(sdk_enrichment.status) + ) + + # If wait_for_completion is True and apply_to_existing is True, poll for completion + if input_data.wait_for_completion and input_data.apply_to_existing: + import asyncio + + poll_interval = 5 + max_interval = 30 + poll_start = time.time() + items_enriched = 0 + + while time.time() - poll_start < input_data.polling_timeout: + current_enrich = aexa.websets.enrichments.get( + webset_id=input_data.webset_id, id=enrichment_id + ) + current_status = ( + current_enrich.status.value + if hasattr(current_enrich.status, "value") + else str(current_enrich.status) + ) + + if current_status in ["completed", "failed", "cancelled"]: + # Estimate items from webset searches + webset = aexa.websets.get(id=input_data.webset_id) + if webset.searches: + for search in webset.searches: + if search.progress: + items_enriched += search.progress.found + completion_time = time.time() - start_time + + yield "enrichment_id", enrichment_id + yield "webset_id", input_data.webset_id + yield "status", current_status + yield "title", sdk_enrichment.title + yield "description", input_data.description + yield "format", input_data.format.value + yield "instructions", sdk_enrichment.instructions + yield "items_enriched", items_enriched + yield "completion_time", completion_time + return + + await asyncio.sleep(poll_interval) + poll_interval = min(poll_interval * 1.5, max_interval) + + # Timeout + completion_time = time.time() - start_time + yield "enrichment_id", enrichment_id + yield "webset_id", input_data.webset_id + yield "status", status + yield "title", sdk_enrichment.title + yield "description", input_data.description + yield "format", input_data.format.value + yield "instructions", sdk_enrichment.instructions + yield "items_enriched", 0 + yield "completion_time", completion_time + else: + yield "enrichment_id", enrichment_id + yield "webset_id", input_data.webset_id + yield "status", status + yield "title", sdk_enrichment.title + yield "description", input_data.description + yield "format", input_data.format.value + yield "instructions", sdk_enrichment.instructions + + +class ExaGetEnrichmentBlock(Block): + """Get the status and details of a webset enrichment.""" + + class Input(BlockSchemaInput): + credentials: CredentialsMetaInput = exa.credentials_field( + description="The Exa integration requires an API Key." 
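# Editor's sketch of the wait_for_completion polling loop above: start at a 5s
# interval, grow it by 1.5x per attempt, cap it at 30s, and stop once
# polling_timeout elapses. The helper below is hypothetical (the block inlines
# this logic rather than calling a function like this):
import asyncio
import time
from typing import Awaitable, Callable


async def _poll_until(check: Callable[[], Awaitable[bool]], timeout: float) -> bool:
    """Return True if check() reports completion before the timeout expires."""
    interval, start = 5.0, time.time()
    while time.time() - start < timeout:
        if await check():
            return True
        await asyncio.sleep(interval)
        interval = min(interval * 1.5, 30.0)
    return False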
+ ) + webset_id: str = SchemaField( + description="The ID or external ID of the Webset", + placeholder="webset-id-or-external-id", + ) + enrichment_id: str = SchemaField( + description="The ID of the enrichment to retrieve", + placeholder="enrichment-id", + ) + + class Output(BlockSchemaOutput): + enrichment_id: str = SchemaField( + description="The unique identifier for the enrichment" + ) + status: str = SchemaField(description="Current status of the enrichment") + title: str = SchemaField(description="Title of the enrichment") + description: str = SchemaField( + description="Description of what data is extracted" + ) + format: str = SchemaField(description="Format of the extracted data") + options: list[str] = SchemaField( + description="Available options (for 'options' format)" + ) + instructions: str = SchemaField( + description="Generated instructions for the enrichment" + ) + created_at: str = SchemaField(description="When the enrichment was created") + updated_at: str = SchemaField( + description="When the enrichment was last updated" + ) + metadata: dict = SchemaField(description="Metadata attached to the enrichment") + + def __init__(self): + super().__init__( + id="b8c9d0e1-f2a3-4567-89ab-cdef01234567", + description="Get the status and details of a webset enrichment", + categories={BlockCategory.SEARCH}, + input_schema=ExaGetEnrichmentBlock.Input, + output_schema=ExaGetEnrichmentBlock.Output, + ) + + async def run( + self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs + ) -> BlockOutput: + # Use AsyncExa SDK + aexa = AsyncExa(api_key=credentials.api_key.get_secret_value()) + + sdk_enrichment = aexa.websets.enrichments.get( + webset_id=input_data.webset_id, id=input_data.enrichment_id + ) + + enrichment = WebsetEnrichmentModel.from_sdk(sdk_enrichment) + + yield "enrichment_id", enrichment.id + yield "status", enrichment.status + yield "title", enrichment.title + yield "description", enrichment.description + yield "format", enrichment.format + yield "options", enrichment.options + yield "instructions", enrichment.instructions + yield "created_at", enrichment.created_at + yield "updated_at", enrichment.updated_at + yield "metadata", enrichment.metadata + + +class ExaUpdateEnrichmentBlock(Block): + """Update an existing enrichment configuration.""" + + class Input(BlockSchemaInput): + credentials: CredentialsMetaInput = exa.credentials_field( + description="The Exa integration requires an API Key." 
+ ) + webset_id: str = SchemaField( + description="The ID or external ID of the Webset", + placeholder="webset-id-or-external-id", + ) + enrichment_id: str = SchemaField( + description="The ID of the enrichment to update", + placeholder="enrichment-id", + ) + description: Optional[str] = SchemaField( + default=None, + description="New description for what data to extract", + ) + format: Optional[EnrichmentFormat] = SchemaField( + default=None, + description="New format for the extracted data", + ) + options: Optional[list[str]] = SchemaField( + default=None, + description="New options when format is 'options'", + ) + metadata: Optional[dict] = SchemaField( + default=None, + description="New metadata to attach to the enrichment", + ) + + class Output(BlockSchemaOutput): + enrichment_id: str = SchemaField( + description="The unique identifier for the enrichment" + ) + status: str = SchemaField(description="Current status of the enrichment") + title: str = SchemaField(description="Title of the enrichment") + description: str = SchemaField(description="Updated description") + format: str = SchemaField(description="Updated format") + success: str = SchemaField(description="Whether the update was successful") + + def __init__(self): + super().__init__( + id="c8d5c5fb-9684-4a29-bd2a-5b38d71776c9", + description="Update an existing enrichment configuration", + categories={BlockCategory.SEARCH}, + input_schema=ExaUpdateEnrichmentBlock.Input, + output_schema=ExaUpdateEnrichmentBlock.Output, + ) + + async def run( + self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs + ) -> BlockOutput: + url = f"https://api.exa.ai/websets/v0/websets/{input_data.webset_id}/enrichments/{input_data.enrichment_id}" + headers = { + "Content-Type": "application/json", + "x-api-key": credentials.api_key.get_secret_value(), + } + + # Build the update payload + payload = {} + + if input_data.description is not None: + payload["description"] = input_data.description + + if input_data.format is not None: + payload["format"] = input_data.format.value + + if input_data.options is not None: + payload["options"] = [{"label": opt} for opt in input_data.options] + + if input_data.metadata is not None: + payload["metadata"] = input_data.metadata + + try: + response = await Requests().patch(url, headers=headers, json=payload) + data = response.json() + + yield "enrichment_id", data.get("id", "") + yield "status", data.get("status", "") + yield "title", data.get("title", "") + yield "description", data.get("description", "") + yield "format", data.get("format", "") + yield "success", "true" + + except ValueError as e: + # Re-raise user input validation errors + raise ValueError(f"Failed to update enrichment: {e}") from e + # Let all other exceptions propagate naturally + + +class ExaDeleteEnrichmentBlock(Block): + """Delete an enrichment from a webset.""" + + class Input(BlockSchemaInput): + credentials: CredentialsMetaInput = exa.credentials_field( + description="The Exa integration requires an API Key." 
+ ) + webset_id: str = SchemaField( + description="The ID or external ID of the Webset", + placeholder="webset-id-or-external-id", + ) + enrichment_id: str = SchemaField( + description="The ID of the enrichment to delete", + placeholder="enrichment-id", + ) + + class Output(BlockSchemaOutput): + enrichment_id: str = SchemaField(description="The ID of the deleted enrichment") + success: str = SchemaField(description="Whether the deletion was successful") + + def __init__(self): + super().__init__( + id="b250de56-2ca6-4237-a7b8-b5684892189f", + description="Delete an enrichment from a webset", + categories={BlockCategory.SEARCH}, + input_schema=ExaDeleteEnrichmentBlock.Input, + output_schema=ExaDeleteEnrichmentBlock.Output, + ) + + async def run( + self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs + ) -> BlockOutput: + # Use AsyncExa SDK + aexa = AsyncExa(api_key=credentials.api_key.get_secret_value()) + + deleted_enrichment = aexa.websets.enrichments.delete( + webset_id=input_data.webset_id, id=input_data.enrichment_id + ) + + yield "enrichment_id", deleted_enrichment.id + yield "success", "true" + + +class ExaCancelEnrichmentBlock(Block): + """Cancel a running enrichment operation.""" + + class Input(BlockSchemaInput): + credentials: CredentialsMetaInput = exa.credentials_field( + description="The Exa integration requires an API Key." + ) + webset_id: str = SchemaField( + description="The ID or external ID of the Webset", + placeholder="webset-id-or-external-id", + ) + enrichment_id: str = SchemaField( + description="The ID of the enrichment to cancel", + placeholder="enrichment-id", + ) + + class Output(BlockSchemaOutput): + enrichment_id: str = SchemaField( + description="The ID of the canceled enrichment" + ) + status: str = SchemaField(description="Status after cancellation") + items_enriched_before_cancel: int = SchemaField( + description="Approximate number of items enriched before cancellation" + ) + success: str = SchemaField( + description="Whether the cancellation was successful" + ) + + def __init__(self): + super().__init__( + id="7e1f8f0f-b6ab-43b3-bd1d-0c534a649295", + description="Cancel a running enrichment operation", + categories={BlockCategory.SEARCH}, + input_schema=ExaCancelEnrichmentBlock.Input, + output_schema=ExaCancelEnrichmentBlock.Output, + ) + + async def run( + self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs + ) -> BlockOutput: + # Use AsyncExa SDK + aexa = AsyncExa(api_key=credentials.api_key.get_secret_value()) + + canceled_enrichment = aexa.websets.enrichments.cancel( + webset_id=input_data.webset_id, id=input_data.enrichment_id + ) + + # Try to estimate how many items were enriched before cancellation + items_enriched = 0 + items_response = aexa.websets.items.list( + webset_id=input_data.webset_id, limit=100 + ) + + for sdk_item in items_response.data: + # Check if this enrichment is present + for enrich_result in sdk_item.enrichments: + if enrich_result.enrichment_id == input_data.enrichment_id: + items_enriched += 1 + break + + status = ( + canceled_enrichment.status.value + if hasattr(canceled_enrichment.status, "value") + else str(canceled_enrichment.status) + ) + + yield "enrichment_id", canceled_enrichment.id + yield "status", status + yield "items_enriched_before_cancel", items_enriched + yield "success", "true" diff --git a/autogpt_platform/backend/backend/blocks/exa/websets_import_export.py b/autogpt_platform/backend/backend/blocks/exa/websets_import_export.py new file mode 100644 index 0000000000..9a91c7a4d0 
--- /dev/null +++ b/autogpt_platform/backend/backend/blocks/exa/websets_import_export.py @@ -0,0 +1,676 @@ +""" +Exa Websets Import/Export Management Blocks + +This module provides blocks for importing data into websets from CSV files +and exporting webset data in various formats. +""" + +import csv +import json +from enum import Enum +from io import StringIO +from typing import Optional, Union + +from exa_py import AsyncExa +from exa_py.websets.types import CreateImportResponse +from exa_py.websets.types import Import as SdkImport +from pydantic import BaseModel + +from backend.sdk import ( + APIKeyCredentials, + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, + CredentialsMetaInput, + SchemaField, +) + +from ._config import exa +from ._test import TEST_CREDENTIALS, TEST_CREDENTIALS_INPUT + + +# Mirrored model for stability - don't use SDK types directly in block outputs +class ImportModel(BaseModel): + """Stable output model mirroring SDK Import.""" + + id: str + status: str + title: str + format: str + entity_type: str + count: int + upload_url: Optional[str] # Only in CreateImportResponse + upload_valid_until: Optional[str] # Only in CreateImportResponse + failed_reason: str + failed_message: str + metadata: dict + created_at: str + updated_at: str + + @classmethod + def from_sdk( + cls, import_obj: Union[SdkImport, CreateImportResponse] + ) -> "ImportModel": + """Convert SDK Import or CreateImportResponse to our stable model.""" + # Extract entity type from union (may be None) + entity_type = "unknown" + if import_obj.entity: + entity_dict = import_obj.entity.model_dump(by_alias=True, exclude_none=True) + entity_type = entity_dict.get("type", "unknown") + + # Handle status enum + status_str = ( + import_obj.status.value + if hasattr(import_obj.status, "value") + else str(import_obj.status) + ) + + # Handle format enum + format_str = ( + import_obj.format.value + if hasattr(import_obj.format, "value") + else str(import_obj.format) + ) + + # Handle failed_reason enum (may be None or enum) + failed_reason_str = "" + if import_obj.failed_reason: + failed_reason_str = ( + import_obj.failed_reason.value + if hasattr(import_obj.failed_reason, "value") + else str(import_obj.failed_reason) + ) + + return cls( + id=import_obj.id, + status=status_str, + title=import_obj.title or "", + format=format_str, + entity_type=entity_type, + count=int(import_obj.count or 0), + upload_url=getattr( + import_obj, "upload_url", None + ), # Only in CreateImportResponse + upload_valid_until=getattr( + import_obj, "upload_valid_until", None + ), # Only in CreateImportResponse + failed_reason=failed_reason_str, + failed_message=import_obj.failed_message or "", + metadata=import_obj.metadata or {}, + created_at=( + import_obj.created_at.isoformat() if import_obj.created_at else "" + ), + updated_at=( + import_obj.updated_at.isoformat() if import_obj.updated_at else "" + ), + ) + + +class ImportFormat(str, Enum): + """Supported import formats.""" + + CSV = "csv" + # JSON = "json" # Future support + + +class ImportEntityType(str, Enum): + """Entity types for imports.""" + + COMPANY = "company" + PERSON = "person" + ARTICLE = "article" + RESEARCH_PAPER = "research_paper" + CUSTOM = "custom" + + +class ExportFormat(str, Enum): + """Supported export formats.""" + + JSON = "json" + CSV = "csv" + JSON_LINES = "jsonl" + + +class ExaCreateImportBlock(Block): + """Create an import to load external data that can be used with websets.""" + + class Input(BlockSchemaInput): + credentials: 
CredentialsMetaInput = exa.credentials_field( + description="The Exa integration requires an API Key." + ) + title: str = SchemaField( + description="Title for this import", + placeholder="Customer List Import", + ) + csv_data: str = SchemaField( + description="CSV data to import (as a string)", + placeholder="name,url\nAcme Corp,https://acme.com\nExample Inc,https://example.com", + ) + entity_type: ImportEntityType = SchemaField( + default=ImportEntityType.COMPANY, + description="Type of entities being imported", + ) + entity_description: Optional[str] = SchemaField( + default=None, + description="Description for custom entity type", + advanced=True, + ) + identifier_column: int = SchemaField( + default=0, + description="Column index containing the identifier (0-based)", + ge=0, + ) + url_column: Optional[int] = SchemaField( + default=None, + description="Column index containing URLs (optional)", + ge=0, + advanced=True, + ) + metadata: Optional[dict] = SchemaField( + default=None, + description="Metadata to attach to the import", + advanced=True, + ) + + class Output(BlockSchemaOutput): + import_id: str = SchemaField( + description="The unique identifier for the created import" + ) + status: str = SchemaField(description="Current status of the import") + title: str = SchemaField(description="Title of the import") + count: int = SchemaField(description="Number of items in the import") + entity_type: str = SchemaField(description="Type of entities imported") + upload_url: Optional[str] = SchemaField( + description="Upload URL for CSV data (only if csv_data not provided in request)" + ) + upload_valid_until: Optional[str] = SchemaField( + description="Expiration time for upload URL (only if upload_url is provided)" + ) + created_at: str = SchemaField(description="When the import was created") + + def __init__(self): + super().__init__( + id="020a35d8-8a53-4e60-8b60-1de5cbab1df3", + description="Import CSV data to use with websets for targeted searches", + categories={BlockCategory.DATA}, + input_schema=ExaCreateImportBlock.Input, + output_schema=ExaCreateImportBlock.Output, + test_input={ + "credentials": TEST_CREDENTIALS_INPUT, + "title": "Test Import", + "csv_data": "name,url\nAcme,https://acme.com", + "entity_type": ImportEntityType.COMPANY, + "identifier_column": 0, + }, + test_output=[ + ("import_id", "import-123"), + ("status", "pending"), + ("title", "Test Import"), + ("count", 1), + ("entity_type", "company"), + ("upload_url", None), + ("upload_valid_until", None), + ("created_at", "2024-01-01T00:00:00"), + ], + test_credentials=TEST_CREDENTIALS, + test_mock=self._create_test_mock(), + ) + + @staticmethod + def _create_test_mock(): + """Create test mocks for the AsyncExa SDK.""" + from datetime import datetime + from unittest.mock import MagicMock + + # Create mock SDK import object + mock_import = MagicMock() + mock_import.id = "import-123" + mock_import.status = MagicMock(value="pending") + mock_import.title = "Test Import" + mock_import.format = MagicMock(value="csv") + mock_import.count = 1 + mock_import.upload_url = None + mock_import.upload_valid_until = None + mock_import.failed_reason = None + mock_import.failed_message = "" + mock_import.metadata = {} + mock_import.created_at = datetime.fromisoformat("2024-01-01T00:00:00") + mock_import.updated_at = datetime.fromisoformat("2024-01-01T00:00:00") + + # Mock entity + mock_entity = MagicMock() + mock_entity.model_dump = MagicMock(return_value={"type": "company"}) + mock_import.entity = mock_entity + + return { + "_get_client": 
lambda *args, **kwargs: MagicMock( + websets=MagicMock( + imports=MagicMock(create=lambda *args, **kwargs: mock_import) + ) + ) + } + + def _get_client(self, api_key: str) -> AsyncExa: + """Get Exa client (separated for testing).""" + return AsyncExa(api_key=api_key) + + async def run( + self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs + ) -> BlockOutput: + aexa = self._get_client(credentials.api_key.get_secret_value()) + + csv_reader = csv.reader(StringIO(input_data.csv_data)) + rows = list(csv_reader) + count = len(rows) - 1 if len(rows) > 1 else 0 + + size = len(input_data.csv_data.encode("utf-8")) + + payload = { + "title": input_data.title, + "format": ImportFormat.CSV.value, + "count": count, + "size": size, + "csv": { + "identifier": input_data.identifier_column, + }, + } + + # Add URL column if specified + if input_data.url_column is not None: + payload["csv"]["url"] = input_data.url_column + + # Add entity configuration + entity = {"type": input_data.entity_type.value} + if ( + input_data.entity_type == ImportEntityType.CUSTOM + and input_data.entity_description + ): + entity["description"] = input_data.entity_description + payload["entity"] = entity + + # Add metadata if provided + if input_data.metadata: + payload["metadata"] = input_data.metadata + + sdk_import = aexa.websets.imports.create( + params=payload, csv_data=input_data.csv_data + ) + + import_obj = ImportModel.from_sdk(sdk_import) + + yield "import_id", import_obj.id + yield "status", import_obj.status + yield "title", import_obj.title + yield "count", import_obj.count + yield "entity_type", import_obj.entity_type + yield "upload_url", import_obj.upload_url + yield "upload_valid_until", import_obj.upload_valid_until + yield "created_at", import_obj.created_at + + +class ExaGetImportBlock(Block): + """Get the status and details of an import.""" + + class Input(BlockSchemaInput): + credentials: CredentialsMetaInput = exa.credentials_field( + description="The Exa integration requires an API Key." 
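# Editor's worked example of the count/size bookkeeping in ExaCreateImportBlock.run
# above (the CSV content is illustrative):
import csv
from io import StringIO

csv_data = "name,url\nAcme Corp,https://acme.com\nExample Inc,https://example.com"
rows = list(csv.reader(StringIO(csv_data)))       # 3 rows, header included
count = len(rows) - 1 if len(rows) > 1 else 0     # 2 data rows, as computed in run()
size = len(csv_data.encode("utf-8"))              # byte size reported in "size"
params = {
    "title": "Customer List Import",
    "format": "csv",
    "count": count,
    "size": size,
    "csv": {"identifier": 0},                     # identifier_column is 0-based
    "entity": {"type": "company"},
}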
+ ) + import_id: str = SchemaField( + description="The ID of the import to retrieve", + placeholder="import-id", + ) + + class Output(BlockSchemaOutput): + import_id: str = SchemaField(description="The unique identifier for the import") + status: str = SchemaField(description="Current status of the import") + title: str = SchemaField(description="Title of the import") + format: str = SchemaField(description="Format of the imported data") + entity_type: str = SchemaField(description="Type of entities imported") + count: int = SchemaField(description="Number of items imported") + upload_url: Optional[str] = SchemaField( + description="Upload URL for CSV data (if import not yet uploaded)" + ) + upload_valid_until: Optional[str] = SchemaField( + description="Expiration time for upload URL (if applicable)" + ) + failed_reason: Optional[str] = SchemaField( + description="Reason for failure (if applicable)" + ) + failed_message: Optional[str] = SchemaField( + description="Detailed failure message (if applicable)" + ) + created_at: str = SchemaField(description="When the import was created") + updated_at: str = SchemaField(description="When the import was last updated") + metadata: dict = SchemaField(description="Metadata attached to the import") + + def __init__(self): + super().__init__( + id="236663c8-a8dc-45f7-a050-2676bb0a3dd2", + description="Get the status and details of an import", + categories={BlockCategory.DATA}, + input_schema=ExaGetImportBlock.Input, + output_schema=ExaGetImportBlock.Output, + ) + + async def run( + self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs + ) -> BlockOutput: + # Use AsyncExa SDK + aexa = AsyncExa(api_key=credentials.api_key.get_secret_value()) + + sdk_import = aexa.websets.imports.get(import_id=input_data.import_id) + + import_obj = ImportModel.from_sdk(sdk_import) + + # Yield all fields + yield "import_id", import_obj.id + yield "status", import_obj.status + yield "title", import_obj.title + yield "format", import_obj.format + yield "entity_type", import_obj.entity_type + yield "count", import_obj.count + yield "upload_url", import_obj.upload_url + yield "upload_valid_until", import_obj.upload_valid_until + yield "failed_reason", import_obj.failed_reason + yield "failed_message", import_obj.failed_message + yield "created_at", import_obj.created_at + yield "updated_at", import_obj.updated_at + yield "metadata", import_obj.metadata + + +class ExaListImportsBlock(Block): + """List all imports with pagination.""" + + class Input(BlockSchemaInput): + credentials: CredentialsMetaInput = exa.credentials_field( + description="The Exa integration requires an API Key." 
+ ) + limit: int = SchemaField( + default=25, + description="Number of imports to return", + ge=1, + le=100, + ) + cursor: Optional[str] = SchemaField( + default=None, + description="Cursor for pagination", + advanced=True, + ) + + class Output(BlockSchemaOutput): + imports: list[dict] = SchemaField(description="List of imports") + import_item: dict = SchemaField( + description="Individual import (yielded for each import)" + ) + has_more: bool = SchemaField( + description="Whether there are more imports to paginate through" + ) + next_cursor: Optional[str] = SchemaField( + description="Cursor for the next page of results" + ) + + def __init__(self): + super().__init__( + id="65323630-f7e9-4692-a624-184ba14c0686", + description="List all imports with pagination support", + categories={BlockCategory.DATA}, + input_schema=ExaListImportsBlock.Input, + output_schema=ExaListImportsBlock.Output, + ) + + async def run( + self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs + ) -> BlockOutput: + # Use AsyncExa SDK + aexa = AsyncExa(api_key=credentials.api_key.get_secret_value()) + + response = aexa.websets.imports.list( + cursor=input_data.cursor, + limit=input_data.limit, + ) + + # Convert SDK imports to our stable models + imports = [ImportModel.from_sdk(i) for i in response.data] + + yield "imports", [i.model_dump() for i in imports] + + for import_obj in imports: + yield "import_item", import_obj.model_dump() + + yield "has_more", response.has_more + yield "next_cursor", response.next_cursor + + +class ExaDeleteImportBlock(Block): + """Delete an import.""" + + class Input(BlockSchemaInput): + credentials: CredentialsMetaInput = exa.credentials_field( + description="The Exa integration requires an API Key." + ) + import_id: str = SchemaField( + description="The ID of the import to delete", + placeholder="import-id", + ) + + class Output(BlockSchemaOutput): + import_id: str = SchemaField(description="The ID of the deleted import") + success: str = SchemaField(description="Whether the deletion was successful") + + def __init__(self): + super().__init__( + id="81ae30ed-c7ba-4b5d-8483-b726846e570c", + description="Delete an import", + categories={BlockCategory.DATA}, + input_schema=ExaDeleteImportBlock.Input, + output_schema=ExaDeleteImportBlock.Output, + ) + + async def run( + self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs + ) -> BlockOutput: + # Use AsyncExa SDK + aexa = AsyncExa(api_key=credentials.api_key.get_secret_value()) + + deleted_import = aexa.websets.imports.delete(import_id=input_data.import_id) + + yield "import_id", deleted_import.id + yield "success", "true" + + +class ExaExportWebsetBlock(Block): + """Export all data from a webset in various formats.""" + + class Input(BlockSchemaInput): + credentials: CredentialsMetaInput = exa.credentials_field( + description="The Exa integration requires an API Key." 
+ ) + webset_id: str = SchemaField( + description="The ID or external ID of the Webset to export", + placeholder="webset-id-or-external-id", + ) + format: ExportFormat = SchemaField( + default=ExportFormat.JSON, + description="Export format", + ) + include_content: bool = SchemaField( + default=True, + description="Include full content in export", + ) + include_enrichments: bool = SchemaField( + default=True, + description="Include enrichment data in export", + ) + max_items: int = SchemaField( + default=100, + description="Maximum number of items to export", + ge=1, + le=100, + ) + + class Output(BlockSchemaOutput): + export_data: str = SchemaField( + description="Exported data in the requested format" + ) + item_count: int = SchemaField(description="Number of items exported") + total_items: int = SchemaField( + description="Total number of items in the webset" + ) + truncated: bool = SchemaField( + description="Whether the export was truncated due to max_items limit" + ) + format: str = SchemaField(description="Format of the exported data") + + def __init__(self): + super().__init__( + id="5da9d0fd-4b5b-4318-8302-8f71d0ccce9d", + description="Export webset data in JSON, CSV, or JSON Lines format", + categories={BlockCategory.DATA}, + input_schema=ExaExportWebsetBlock.Input, + output_schema=ExaExportWebsetBlock.Output, + test_input={ + "credentials": TEST_CREDENTIALS_INPUT, + "webset_id": "test-webset", + "format": ExportFormat.JSON, + "include_content": True, + "include_enrichments": True, + "max_items": 10, + }, + test_output=[ + ("export_data", str), + ("item_count", 2), + ("total_items", 2), + ("truncated", False), + ("format", "json"), + ], + test_credentials=TEST_CREDENTIALS, + test_mock=self._create_test_mock(), + ) + + @staticmethod + def _create_test_mock(): + """Create test mocks for the AsyncExa SDK.""" + from unittest.mock import MagicMock + + # Create mock webset items + mock_item1 = MagicMock() + mock_item1.model_dump = MagicMock( + return_value={ + "id": "item-1", + "url": "https://example.com", + "title": "Test Item 1", + } + ) + + mock_item2 = MagicMock() + mock_item2.model_dump = MagicMock( + return_value={ + "id": "item-2", + "url": "https://example.org", + "title": "Test Item 2", + } + ) + + # Create mock iterator + mock_items = [mock_item1, mock_item2] + + return { + "_get_client": lambda *args, **kwargs: MagicMock( + websets=MagicMock( + items=MagicMock(list_all=lambda *args, **kwargs: iter(mock_items)) + ) + ) + } + + def _get_client(self, api_key: str) -> AsyncExa: + """Get Exa client (separated for testing).""" + return AsyncExa(api_key=api_key) + + async def run( + self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs + ) -> BlockOutput: + # Use AsyncExa SDK + aexa = self._get_client(credentials.api_key.get_secret_value()) + + try: + all_items = [] + + # Use SDK's list_all iterator to fetch items + item_iterator = aexa.websets.items.list_all( + webset_id=input_data.webset_id, limit=input_data.max_items + ) + + for sdk_item in item_iterator: + if len(all_items) >= input_data.max_items: + break + + # Convert to dict for export + item_dict = sdk_item.model_dump(by_alias=True, exclude_none=True) + all_items.append(item_dict) + + # Calculate total and truncated + total_items = len(all_items) # SDK doesn't provide total count + truncated = len(all_items) >= input_data.max_items + + # Process items based on include flags + if not input_data.include_content: + for item in all_items: + item.pop("content", None) + + if not input_data.include_enrichments: + 
for item in all_items: + item.pop("enrichments", None) + + # Format the export data + export_data = "" + + if input_data.format == ExportFormat.JSON: + export_data = json.dumps(all_items, indent=2, default=str) + + elif input_data.format == ExportFormat.JSON_LINES: + lines = [json.dumps(item, default=str) for item in all_items] + export_data = "\n".join(lines) + + elif input_data.format == ExportFormat.CSV: + # Extract all unique keys for CSV headers + all_keys = set() + for item in all_items: + all_keys.update(self._flatten_dict(item).keys()) + + # Create CSV + output = StringIO() + writer = csv.DictWriter(output, fieldnames=sorted(all_keys)) + writer.writeheader() + + for item in all_items: + flat_item = self._flatten_dict(item) + writer.writerow(flat_item) + + export_data = output.getvalue() + + yield "export_data", export_data + yield "item_count", len(all_items) + yield "total_items", total_items + yield "truncated", truncated + yield "format", input_data.format.value + + except ValueError as e: + # Re-raise user input validation errors + raise ValueError(f"Failed to export webset: {e}") from e + # Let all other exceptions propagate naturally + + def _flatten_dict(self, d: dict, parent_key: str = "", sep: str = "_") -> dict: + """Flatten nested dictionaries for CSV export.""" + items = [] + for k, v in d.items(): + new_key = f"{parent_key}{sep}{k}" if parent_key else k + if isinstance(v, dict): + items.extend(self._flatten_dict(v, new_key, sep=sep).items()) + elif isinstance(v, list): + # Convert lists to JSON strings for CSV + items.append((new_key, json.dumps(v, default=str))) + else: + items.append((new_key, v)) + return dict(items) diff --git a/autogpt_platform/backend/backend/blocks/exa/websets_items.py b/autogpt_platform/backend/backend/blocks/exa/websets_items.py new file mode 100644 index 0000000000..3c5d0d51a8 --- /dev/null +++ b/autogpt_platform/backend/backend/blocks/exa/websets_items.py @@ -0,0 +1,591 @@ +""" +Exa Websets Item Management Blocks + +This module provides blocks for managing items within Exa websets, including +retrieving, listing, deleting, and bulk operations on webset items. 
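# Editor's worked example of the _flatten_dict helper above: nested dict keys are
# joined with "_" and lists are JSON-encoded so each value fits in one CSV cell.
item = {
    "id": "item-1",
    "properties": {"company": {"name": "Acme Corp"}},
    "tags": ["b2b", "saas"],
}
# self._flatten_dict(item) would yield:
# {"id": "item-1", "properties_company_name": "Acme Corp", "tags": '["b2b", "saas"]'}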
+""" + +from typing import Any, Dict, List, Optional + +from exa_py import AsyncExa +from exa_py.websets.types import WebsetItem as SdkWebsetItem +from exa_py.websets.types import ( + WebsetItemArticleProperties, + WebsetItemCompanyProperties, + WebsetItemCustomProperties, + WebsetItemPersonProperties, + WebsetItemResearchPaperProperties, +) +from pydantic import AnyUrl, BaseModel + +from backend.sdk import ( + APIKeyCredentials, + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, + CredentialsMetaInput, + SchemaField, +) + +from ._config import exa + + +# Mirrored model for enrichment results +class EnrichmentResultModel(BaseModel): + """Stable output model mirroring SDK EnrichmentResult.""" + + enrichment_id: str + format: str + result: Optional[List[str]] + reasoning: Optional[str] + references: List[Dict[str, Any]] + + @classmethod + def from_sdk(cls, sdk_enrich) -> "EnrichmentResultModel": + """Convert SDK EnrichmentResult to our model.""" + format_str = ( + sdk_enrich.format.value + if hasattr(sdk_enrich.format, "value") + else str(sdk_enrich.format) + ) + + # Convert references to dicts + references_list = [] + if sdk_enrich.references: + for ref in sdk_enrich.references: + references_list.append(ref.model_dump(by_alias=True, exclude_none=True)) + + return cls( + enrichment_id=sdk_enrich.enrichment_id, + format=format_str, + result=sdk_enrich.result, + reasoning=sdk_enrich.reasoning, + references=references_list, + ) + + +# Mirrored model for stability - don't use SDK types directly in block outputs +class WebsetItemModel(BaseModel): + """Stable output model mirroring SDK WebsetItem.""" + + id: str + url: Optional[AnyUrl] + title: str + content: str + entity_data: Dict[str, Any] + enrichments: Dict[str, EnrichmentResultModel] + created_at: str + updated_at: str + + @classmethod + def from_sdk(cls, item: SdkWebsetItem) -> "WebsetItemModel": + """Convert SDK WebsetItem to our stable model.""" + # Extract properties from the union type + properties_dict = {} + url_value = None + title = "" + content = "" + + if hasattr(item, "properties") and item.properties: + properties_dict = item.properties.model_dump( + by_alias=True, exclude_none=True + ) + + # URL is always available on all property types + url_value = item.properties.url + + # Extract title using isinstance checks on the union type + if isinstance(item.properties, WebsetItemPersonProperties): + title = item.properties.person.name + content = "" # Person type has no content + elif isinstance(item.properties, WebsetItemCompanyProperties): + title = item.properties.company.name + content = item.properties.content or "" + elif isinstance(item.properties, WebsetItemArticleProperties): + title = item.properties.description + content = item.properties.content or "" + elif isinstance(item.properties, WebsetItemResearchPaperProperties): + title = item.properties.description + content = item.properties.content or "" + elif isinstance(item.properties, WebsetItemCustomProperties): + title = item.properties.description + content = item.properties.content or "" + else: + # Fallback + title = item.properties.description + content = getattr(item.properties, "content", "") + + # Convert enrichments from list to dict keyed by enrichment_id using Pydantic models + enrichments_dict: Dict[str, EnrichmentResultModel] = {} + if hasattr(item, "enrichments") and item.enrichments: + for sdk_enrich in item.enrichments: + enrich_model = EnrichmentResultModel.from_sdk(sdk_enrich) + enrichments_dict[enrich_model.enrichment_id] 
= enrich_model + + return cls( + id=item.id, + url=url_value, + title=title, + content=content or "", + entity_data=properties_dict, + enrichments=enrichments_dict, + created_at=item.created_at.isoformat() if item.created_at else "", + updated_at=item.updated_at.isoformat() if item.updated_at else "", + ) + + +class ExaGetWebsetItemBlock(Block): + """Get a specific item from a webset by its ID.""" + + class Input(BlockSchemaInput): + credentials: CredentialsMetaInput = exa.credentials_field( + description="The Exa integration requires an API Key." + ) + webset_id: str = SchemaField( + description="The ID or external ID of the Webset", + placeholder="webset-id-or-external-id", + ) + item_id: str = SchemaField( + description="The ID of the specific item to retrieve", + placeholder="item-id", + ) + + class Output(BlockSchemaOutput): + item_id: str = SchemaField(description="The unique identifier for the item") + url: str = SchemaField(description="The URL of the original source") + title: str = SchemaField(description="The title of the item") + content: str = SchemaField(description="The main content of the item") + entity_data: dict = SchemaField(description="Entity-specific structured data") + enrichments: dict = SchemaField(description="Enrichment data added to the item") + created_at: str = SchemaField( + description="When the item was added to the webset" + ) + updated_at: str = SchemaField(description="When the item was last updated") + + def __init__(self): + super().__init__( + id="c4a7d9e2-8f3b-4a6c-9d8e-a5b6c7d8e9f0", + description="Get a specific item from a webset by its ID", + categories={BlockCategory.SEARCH}, + input_schema=ExaGetWebsetItemBlock.Input, + output_schema=ExaGetWebsetItemBlock.Output, + ) + + async def run( + self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs + ) -> BlockOutput: + aexa = AsyncExa(api_key=credentials.api_key.get_secret_value()) + + sdk_item = aexa.websets.items.get( + webset_id=input_data.webset_id, id=input_data.item_id + ) + + item = WebsetItemModel.from_sdk(sdk_item) + + yield "item_id", item.id + yield "url", item.url + yield "title", item.title + yield "content", item.content + yield "entity_data", item.entity_data + yield "enrichments", item.enrichments + yield "created_at", item.created_at + yield "updated_at", item.updated_at + + +class ExaListWebsetItemsBlock(Block): + """List items in a webset with pagination and optional filtering.""" + + class Input(BlockSchemaInput): + credentials: CredentialsMetaInput = exa.credentials_field( + description="The Exa integration requires an API Key." 
+ ) + webset_id: str = SchemaField( + description="The ID or external ID of the Webset", + placeholder="webset-id-or-external-id", + ) + limit: int = SchemaField( + default=25, + description="Number of items to return (1-100)", + ge=1, + le=100, + ) + cursor: Optional[str] = SchemaField( + default=None, + description="Cursor for pagination through results", + advanced=True, + ) + wait_for_items: bool = SchemaField( + default=False, + description="Wait for items to be available if webset is still processing", + advanced=True, + ) + wait_timeout: int = SchemaField( + default=60, + description="Maximum time to wait for items in seconds", + advanced=True, + ge=1, + le=300, + ) + + class Output(BlockSchemaOutput): + items: list[WebsetItemModel] = SchemaField( + description="List of webset items", + ) + webset_id: str = SchemaField( + description="The ID of the webset", + ) + item: WebsetItemModel = SchemaField( + description="Individual item (yielded for each item in the list)", + ) + has_more: bool = SchemaField( + description="Whether there are more items to paginate through", + ) + next_cursor: Optional[str] = SchemaField( + description="Cursor for the next page of results", + ) + + def __init__(self): + super().__init__( + id="7b5e8c9f-01a2-43c4-95e6-f7a8b9c0d1e2", + description="List items in a webset with pagination support", + categories={BlockCategory.SEARCH}, + input_schema=ExaListWebsetItemsBlock.Input, + output_schema=ExaListWebsetItemsBlock.Output, + ) + + async def run( + self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs + ) -> BlockOutput: + aexa = AsyncExa(api_key=credentials.api_key.get_secret_value()) + + if input_data.wait_for_items: + import asyncio + import time + + start_time = time.time() + interval = 2 + response = None + + while time.time() - start_time < input_data.wait_timeout: + response = aexa.websets.items.list( + webset_id=input_data.webset_id, + cursor=input_data.cursor, + limit=input_data.limit, + ) + + if response.data: + break + + await asyncio.sleep(interval) + interval = min(interval * 1.2, 10) + + if not response: + response = aexa.websets.items.list( + webset_id=input_data.webset_id, + cursor=input_data.cursor, + limit=input_data.limit, + ) + else: + response = aexa.websets.items.list( + webset_id=input_data.webset_id, + cursor=input_data.cursor, + limit=input_data.limit, + ) + + items = [WebsetItemModel.from_sdk(item) for item in response.data] + + yield "items", items + + for item in items: + yield "item", item + + yield "has_more", response.has_more + yield "next_cursor", response.next_cursor + yield "webset_id", input_data.webset_id + + +class ExaDeleteWebsetItemBlock(Block): + """Delete a specific item from a webset.""" + + class Input(BlockSchemaInput): + credentials: CredentialsMetaInput = exa.credentials_field( + description="The Exa integration requires an API Key." 
+ ) + webset_id: str = SchemaField( + description="The ID or external ID of the Webset", + placeholder="webset-id-or-external-id", + ) + item_id: str = SchemaField( + description="The ID of the item to delete", + placeholder="item-id", + ) + + class Output(BlockSchemaOutput): + item_id: str = SchemaField(description="The ID of the deleted item") + success: str = SchemaField(description="Whether the deletion was successful") + + def __init__(self): + super().__init__( + id="12c57fbe-c270-4877-a2b6-d2d05529ba79", + description="Delete a specific item from a webset", + categories={BlockCategory.SEARCH}, + input_schema=ExaDeleteWebsetItemBlock.Input, + output_schema=ExaDeleteWebsetItemBlock.Output, + ) + + async def run( + self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs + ) -> BlockOutput: + aexa = AsyncExa(api_key=credentials.api_key.get_secret_value()) + + deleted_item = aexa.websets.items.delete( + webset_id=input_data.webset_id, id=input_data.item_id + ) + + yield "item_id", deleted_item.id + yield "success", "true" + + +class ExaBulkWebsetItemsBlock(Block): + """Get all items from a webset in a single operation (with size limits).""" + + class Input(BlockSchemaInput): + credentials: CredentialsMetaInput = exa.credentials_field( + description="The Exa integration requires an API Key." + ) + webset_id: str = SchemaField( + description="The ID or external ID of the Webset", + placeholder="webset-id-or-external-id", + ) + max_items: int = SchemaField( + default=100, + description="Maximum number of items to retrieve (1-1000). Note: Large values may take longer.", + ge=1, + le=1000, + ) + include_enrichments: bool = SchemaField( + default=True, + description="Include enrichment data for each item", + ) + include_content: bool = SchemaField( + default=True, + description="Include full content for each item", + ) + + class Output(BlockSchemaOutput): + items: list[WebsetItemModel] = SchemaField( + description="All items from the webset" + ) + item: WebsetItemModel = SchemaField( + description="Individual item (yielded for each item)" + ) + total_retrieved: int = SchemaField( + description="Total number of items retrieved" + ) + truncated: bool = SchemaField( + description="Whether results were truncated due to max_items limit" + ) + + def __init__(self): + super().__init__( + id="dbd619f5-476e-4395-af9a-a7a7c0fb8c4e", + description="Get all items from a webset in bulk (with configurable limits)", + categories={BlockCategory.SEARCH}, + input_schema=ExaBulkWebsetItemsBlock.Input, + output_schema=ExaBulkWebsetItemsBlock.Output, + ) + + async def run( + self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs + ) -> BlockOutput: + # Use AsyncExa SDK + aexa = AsyncExa(api_key=credentials.api_key.get_secret_value()) + + all_items: List[WebsetItemModel] = [] + item_iterator = aexa.websets.items.list_all( + webset_id=input_data.webset_id, limit=input_data.max_items + ) + + for sdk_item in item_iterator: + if len(all_items) >= input_data.max_items: + break + + item = WebsetItemModel.from_sdk(sdk_item) + + if not input_data.include_enrichments: + item.enrichments = {} + if not input_data.include_content: + item.content = "" + + all_items.append(item) + + yield "items", all_items + + for item in all_items: + yield "item", item + + yield "total_retrieved", len(all_items) + yield "truncated", len(all_items) >= input_data.max_items + + +class ExaWebsetItemsSummaryBlock(Block): + """Get a summary of items in a webset without retrieving all data.""" + + class 
Input(BlockSchemaInput): + credentials: CredentialsMetaInput = exa.credentials_field( + description="The Exa integration requires an API Key." + ) + webset_id: str = SchemaField( + description="The ID or external ID of the Webset", + placeholder="webset-id-or-external-id", + ) + sample_size: int = SchemaField( + default=5, + description="Number of sample items to include", + ge=0, + le=10, + ) + + class Output(BlockSchemaOutput): + total_items: int = SchemaField( + description="Total number of items in the webset" + ) + entity_type: str = SchemaField(description="Type of entities in the webset") + sample_items: list[WebsetItemModel] = SchemaField( + description="Sample of items from the webset" + ) + enrichment_columns: list[str] = SchemaField( + description="List of enrichment columns available" + ) + + def __init__(self): + super().__init__( + id="db7813ad-10bd-4652-8623-5667d6fecdd5", + description="Get a summary of webset items without retrieving all data", + categories={BlockCategory.SEARCH}, + input_schema=ExaWebsetItemsSummaryBlock.Input, + output_schema=ExaWebsetItemsSummaryBlock.Output, + ) + + async def run( + self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs + ) -> BlockOutput: + # Use AsyncExa SDK + aexa = AsyncExa(api_key=credentials.api_key.get_secret_value()) + + webset = aexa.websets.get(id=input_data.webset_id) + + entity_type = "unknown" + if webset.searches: + first_search = webset.searches[0] + if first_search.entity: + # The entity is a union type, extract type field + entity_dict = first_search.entity.model_dump(by_alias=True) + entity_type = entity_dict.get("type", "unknown") + + # Get enrichment columns + enrichment_columns = [] + if webset.enrichments: + enrichment_columns = [ + e.title if e.title else e.description for e in webset.enrichments + ] + + # Get sample items if requested + sample_items: List[WebsetItemModel] = [] + if input_data.sample_size > 0: + items_response = aexa.websets.items.list( + webset_id=input_data.webset_id, limit=input_data.sample_size + ) + # Convert to our stable models + sample_items = [ + WebsetItemModel.from_sdk(item) for item in items_response.data + ] + + total_items = 0 + if webset.searches: + for search in webset.searches: + if search.progress: + total_items += search.progress.found + + yield "total_items", total_items + yield "entity_type", entity_type + yield "sample_items", sample_items + yield "enrichment_columns", enrichment_columns + + +class ExaGetNewItemsBlock(Block): + """Get items added to a webset since a specific cursor (incremental processing helper).""" + + class Input(BlockSchemaInput): + credentials: CredentialsMetaInput = exa.credentials_field( + description="The Exa integration requires an API Key." + ) + webset_id: str = SchemaField( + description="The ID or external ID of the Webset", + placeholder="webset-id-or-external-id", + ) + since_cursor: Optional[str] = SchemaField( + default=None, + description="Cursor from previous run - only items after this will be returned. 
Leave empty on first run.", + placeholder="cursor-from-previous-run", + ) + max_items: int = SchemaField( + default=100, + description="Maximum number of new items to retrieve", + ge=1, + le=1000, + ) + + class Output(BlockSchemaOutput): + new_items: list[WebsetItemModel] = SchemaField( + description="Items added since the cursor" + ) + item: WebsetItemModel = SchemaField( + description="Individual item (yielded for each new item)" + ) + count: int = SchemaField(description="Number of new items found") + next_cursor: Optional[str] = SchemaField( + description="Save this cursor for the next run to get only newer items" + ) + has_more: bool = SchemaField( + description="Whether there are more new items beyond max_items" + ) + + def __init__(self): + super().__init__( + id="3ff9bdf5-9613-4d21-8a60-90eb8b69c414", + description="Get items added since a cursor - enables incremental processing without reprocessing", + categories={BlockCategory.SEARCH, BlockCategory.DATA}, + input_schema=ExaGetNewItemsBlock.Input, + output_schema=ExaGetNewItemsBlock.Output, + ) + + async def run( + self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs + ) -> BlockOutput: + # Use AsyncExa SDK + aexa = AsyncExa(api_key=credentials.api_key.get_secret_value()) + + # Get items starting from cursor + response = aexa.websets.items.list( + webset_id=input_data.webset_id, + cursor=input_data.since_cursor, + limit=input_data.max_items, + ) + + # Convert SDK items to our stable models + new_items = [WebsetItemModel.from_sdk(item) for item in response.data] + + # Yield the full list + yield "new_items", new_items + + # Yield individual items for processing + for item in new_items: + yield "item", item + + # Yield metadata for next run + yield "count", len(new_items) + yield "next_cursor", response.next_cursor + yield "has_more", response.has_more diff --git a/autogpt_platform/backend/backend/blocks/exa/websets_monitor.py b/autogpt_platform/backend/backend/blocks/exa/websets_monitor.py new file mode 100644 index 0000000000..b10fd65310 --- /dev/null +++ b/autogpt_platform/backend/backend/blocks/exa/websets_monitor.py @@ -0,0 +1,600 @@ +""" +Exa Websets Monitor Management Blocks + +This module provides blocks for creating and managing monitors that automatically +keep websets updated with fresh data on a schedule. 
+""" + +from enum import Enum +from typing import Optional + +from exa_py import AsyncExa +from exa_py.websets.types import Monitor as SdkMonitor +from pydantic import BaseModel + +from backend.sdk import ( + APIKeyCredentials, + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, + CredentialsMetaInput, + SchemaField, +) + +from ._config import exa +from ._test import TEST_CREDENTIALS, TEST_CREDENTIALS_INPUT + + +# Mirrored model for stability - don't use SDK types directly in block outputs +class MonitorModel(BaseModel): + """Stable output model mirroring SDK Monitor.""" + + id: str + status: str + webset_id: str + behavior_type: str + behavior_config: dict + cron_expression: str + timezone: str + next_run_at: str + last_run: dict + metadata: dict + created_at: str + updated_at: str + + @classmethod + def from_sdk(cls, monitor: SdkMonitor) -> "MonitorModel": + """Convert SDK Monitor to our stable model.""" + # Extract behavior information + behavior_dict = monitor.behavior.model_dump(by_alias=True, exclude_none=True) + behavior_type = behavior_dict.get("type", "unknown") + behavior_config = behavior_dict.get("config", {}) + + # Extract cadence information + cadence_dict = monitor.cadence.model_dump(by_alias=True, exclude_none=True) + cron_expr = cadence_dict.get("cron", "") + timezone = cadence_dict.get("timezone", "Etc/UTC") + + # Extract last run information + last_run_dict = {} + if monitor.last_run: + last_run_dict = monitor.last_run.model_dump( + by_alias=True, exclude_none=True + ) + + # Handle status enum + status_str = ( + monitor.status.value + if hasattr(monitor.status, "value") + else str(monitor.status) + ) + + return cls( + id=monitor.id, + status=status_str, + webset_id=monitor.webset_id, + behavior_type=behavior_type, + behavior_config=behavior_config, + cron_expression=cron_expr, + timezone=timezone, + next_run_at=monitor.next_run_at.isoformat() if monitor.next_run_at else "", + last_run=last_run_dict, + metadata=monitor.metadata or {}, + created_at=monitor.created_at.isoformat() if monitor.created_at else "", + updated_at=monitor.updated_at.isoformat() if monitor.updated_at else "", + ) + + +class MonitorStatus(str, Enum): + """Status of a monitor.""" + + ENABLED = "enabled" + DISABLED = "disabled" + PAUSED = "paused" + + +class MonitorBehaviorType(str, Enum): + """Type of behavior for a monitor.""" + + SEARCH = "search" # Run new searches + REFRESH = "refresh" # Refresh existing items + + +class SearchBehavior(str, Enum): + """How search results interact with existing items.""" + + APPEND = "append" + OVERRIDE = "override" + + +class ExaCreateMonitorBlock(Block): + """Create a monitor to automatically keep a webset updated on a schedule.""" + + class Input(BlockSchemaInput): + credentials: CredentialsMetaInput = exa.credentials_field( + description="The Exa integration requires an API Key." 
+ ) + webset_id: str = SchemaField( + description="The ID or external ID of the Webset to monitor", + placeholder="webset-id-or-external-id", + ) + + # Schedule configuration + cron_expression: str = SchemaField( + description="Cron expression for scheduling (5 fields, max once per day)", + placeholder="0 9 * * 1", # Every Monday at 9 AM + ) + timezone: str = SchemaField( + default="Etc/UTC", + description="IANA timezone for the schedule", + placeholder="America/New_York", + advanced=True, + ) + + # Behavior configuration + behavior_type: MonitorBehaviorType = SchemaField( + default=MonitorBehaviorType.SEARCH, + description="Type of monitor behavior (search for new items or refresh existing)", + ) + + # Search configuration (for SEARCH behavior) + search_query: Optional[str] = SchemaField( + default=None, + description="Search query for finding new items (required for search behavior)", + placeholder="AI startups that raised funding in the last week", + ) + search_count: int = SchemaField( + default=10, + description="Number of items to find in each search", + ge=1, + le=100, + ) + search_criteria: list[str] = SchemaField( + default_factory=list, + description="Criteria that items must meet", + advanced=True, + ) + search_behavior: SearchBehavior = SchemaField( + default=SearchBehavior.APPEND, + description="How new results interact with existing items", + advanced=True, + ) + entity_type: Optional[str] = SchemaField( + default=None, + description="Type of entity to search for (company, person, etc.)", + advanced=True, + ) + + # Refresh configuration (for REFRESH behavior) + refresh_content: bool = SchemaField( + default=True, + description="Refresh content from source URLs (for refresh behavior)", + advanced=True, + ) + refresh_enrichments: bool = SchemaField( + default=True, + description="Re-run enrichments on items (for refresh behavior)", + advanced=True, + ) + + # Metadata + metadata: Optional[dict] = SchemaField( + default=None, + description="Metadata to attach to the monitor", + advanced=True, + ) + + class Output(BlockSchemaOutput): + monitor_id: str = SchemaField( + description="The unique identifier for the created monitor" + ) + webset_id: str = SchemaField(description="The webset this monitor belongs to") + status: str = SchemaField(description="Status of the monitor") + behavior_type: str = SchemaField(description="Type of monitor behavior") + next_run_at: Optional[str] = SchemaField( + description="When the monitor will next run" + ) + cron_expression: str = SchemaField(description="The schedule cron expression") + timezone: str = SchemaField(description="The timezone for scheduling") + created_at: str = SchemaField(description="When the monitor was created") + + def __init__(self): + super().__init__( + id="f8a9b0c1-d2e3-4567-890a-bcdef1234567", + description="Create automated monitors to keep websets updated with fresh data on a schedule", + categories={BlockCategory.SEARCH}, + input_schema=ExaCreateMonitorBlock.Input, + output_schema=ExaCreateMonitorBlock.Output, + test_input={ + "credentials": TEST_CREDENTIALS_INPUT, + "webset_id": "test-webset", + "cron_expression": "0 9 * * 1", + "behavior_type": MonitorBehaviorType.SEARCH, + "search_query": "AI startups", + "search_count": 10, + }, + test_output=[ + ("monitor_id", "monitor-123"), + ("webset_id", "test-webset"), + ("status", "enabled"), + ("behavior_type", "search"), + ("next_run_at", "2024-01-01T00:00:00"), + ("cron_expression", "0 9 * * 1"), + ("timezone", "Etc/UTC"), + ("created_at", "2024-01-01T00:00:00"), + ], + 
test_credentials=TEST_CREDENTIALS, + test_mock=self._create_test_mock(), + ) + + @staticmethod + def _create_test_mock(): + """Create test mocks for the AsyncExa SDK.""" + from datetime import datetime + from unittest.mock import MagicMock + + # Create mock SDK monitor object + mock_monitor = MagicMock() + mock_monitor.id = "monitor-123" + mock_monitor.status = MagicMock(value="enabled") + mock_monitor.webset_id = "test-webset" + mock_monitor.next_run_at = datetime.fromisoformat("2024-01-01T00:00:00") + mock_monitor.created_at = datetime.fromisoformat("2024-01-01T00:00:00") + mock_monitor.updated_at = datetime.fromisoformat("2024-01-01T00:00:00") + mock_monitor.metadata = {} + mock_monitor.last_run = None + + # Mock behavior + mock_behavior = MagicMock() + mock_behavior.model_dump = MagicMock( + return_value={"type": "search", "config": {}} + ) + mock_monitor.behavior = mock_behavior + + # Mock cadence + mock_cadence = MagicMock() + mock_cadence.model_dump = MagicMock( + return_value={"cron": "0 9 * * 1", "timezone": "Etc/UTC"} + ) + mock_monitor.cadence = mock_cadence + + return { + "_get_client": lambda *args, **kwargs: MagicMock( + websets=MagicMock( + monitors=MagicMock(create=lambda *args, **kwargs: mock_monitor) + ) + ) + } + + def _get_client(self, api_key: str) -> AsyncExa: + """Get Exa client (separated for testing).""" + return AsyncExa(api_key=api_key) + + async def run( + self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs + ) -> BlockOutput: + aexa = self._get_client(credentials.api_key.get_secret_value()) + + # Build the payload + payload = { + "websetId": input_data.webset_id, + "cadence": { + "cron": input_data.cron_expression, + "timezone": input_data.timezone, + }, + } + + # Build behavior configuration based on type + if input_data.behavior_type == MonitorBehaviorType.SEARCH: + behavior_config = { + "query": input_data.search_query or "", + "count": input_data.search_count, + "behavior": input_data.search_behavior.value, + } + + if input_data.search_criteria: + behavior_config["criteria"] = [ + {"description": c} for c in input_data.search_criteria + ] + + if input_data.entity_type: + behavior_config["entity"] = {"type": input_data.entity_type} + + payload["behavior"] = { + "type": "search", + "config": behavior_config, + } + else: + # REFRESH behavior + payload["behavior"] = { + "type": "refresh", + "config": { + "content": input_data.refresh_content, + "enrichments": input_data.refresh_enrichments, + }, + } + + # Add metadata if provided + if input_data.metadata: + payload["metadata"] = input_data.metadata + + sdk_monitor = aexa.websets.monitors.create(params=payload) + + monitor = MonitorModel.from_sdk(sdk_monitor) + + # Yield all fields + yield "monitor_id", monitor.id + yield "webset_id", monitor.webset_id + yield "status", monitor.status + yield "behavior_type", monitor.behavior_type + yield "next_run_at", monitor.next_run_at + yield "cron_expression", monitor.cron_expression + yield "timezone", monitor.timezone + yield "created_at", monitor.created_at + + +class ExaGetMonitorBlock(Block): + """Get the details and status of a monitor.""" + + class Input(BlockSchemaInput): + credentials: CredentialsMetaInput = exa.credentials_field( + description="The Exa integration requires an API Key." 
+ ) + monitor_id: str = SchemaField( + description="The ID of the monitor to retrieve", + placeholder="monitor-id", + ) + + class Output(BlockSchemaOutput): + monitor_id: str = SchemaField( + description="The unique identifier for the monitor" + ) + webset_id: str = SchemaField(description="The webset this monitor belongs to") + status: str = SchemaField(description="Current status of the monitor") + behavior_type: str = SchemaField(description="Type of monitor behavior") + behavior_config: dict = SchemaField( + description="Configuration for the monitor behavior" + ) + cron_expression: str = SchemaField(description="The schedule cron expression") + timezone: str = SchemaField(description="The timezone for scheduling") + next_run_at: Optional[str] = SchemaField( + description="When the monitor will next run" + ) + last_run: Optional[dict] = SchemaField( + description="Information about the last run" + ) + created_at: str = SchemaField(description="When the monitor was created") + updated_at: str = SchemaField(description="When the monitor was last updated") + metadata: dict = SchemaField(description="Metadata attached to the monitor") + + def __init__(self): + super().__init__( + id="5c852a2d-d505-4a56-b711-7def8dd14e72", + description="Get the details and status of a webset monitor", + categories={BlockCategory.SEARCH}, + input_schema=ExaGetMonitorBlock.Input, + output_schema=ExaGetMonitorBlock.Output, + ) + + async def run( + self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs + ) -> BlockOutput: + # Use AsyncExa SDK + aexa = AsyncExa(api_key=credentials.api_key.get_secret_value()) + + sdk_monitor = aexa.websets.monitors.get(monitor_id=input_data.monitor_id) + + monitor = MonitorModel.from_sdk(sdk_monitor) + + # Yield all fields + yield "monitor_id", monitor.id + yield "webset_id", monitor.webset_id + yield "status", monitor.status + yield "behavior_type", monitor.behavior_type + yield "behavior_config", monitor.behavior_config + yield "cron_expression", monitor.cron_expression + yield "timezone", monitor.timezone + yield "next_run_at", monitor.next_run_at + yield "last_run", monitor.last_run + yield "created_at", monitor.created_at + yield "updated_at", monitor.updated_at + yield "metadata", monitor.metadata + + +class ExaUpdateMonitorBlock(Block): + """Update a monitor's configuration.""" + + class Input(BlockSchemaInput): + credentials: CredentialsMetaInput = exa.credentials_field( + description="The Exa integration requires an API Key." 
+ ) + monitor_id: str = SchemaField( + description="The ID of the monitor to update", + placeholder="monitor-id", + ) + status: Optional[MonitorStatus] = SchemaField( + default=None, + description="New status for the monitor", + ) + cron_expression: Optional[str] = SchemaField( + default=None, + description="New cron expression for scheduling", + ) + timezone: Optional[str] = SchemaField( + default=None, + description="New timezone for the schedule", + advanced=True, + ) + metadata: Optional[dict] = SchemaField( + default=None, + description="New metadata for the monitor", + advanced=True, + ) + + class Output(BlockSchemaOutput): + monitor_id: str = SchemaField( + description="The unique identifier for the monitor" + ) + status: str = SchemaField(description="Updated status of the monitor") + next_run_at: Optional[str] = SchemaField( + description="When the monitor will next run" + ) + updated_at: str = SchemaField(description="When the monitor was updated") + success: str = SchemaField(description="Whether the update was successful") + + def __init__(self): + super().__init__( + id="245102c3-6af3-4515-a308-c2210b7939d2", + description="Update a monitor's status, schedule, or metadata", + categories={BlockCategory.SEARCH}, + input_schema=ExaUpdateMonitorBlock.Input, + output_schema=ExaUpdateMonitorBlock.Output, + ) + + async def run( + self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs + ) -> BlockOutput: + # Use AsyncExa SDK + aexa = AsyncExa(api_key=credentials.api_key.get_secret_value()) + + # Build update payload + payload = {} + + if input_data.status is not None: + payload["status"] = input_data.status.value + + if input_data.cron_expression is not None or input_data.timezone is not None: + cadence = {} + if input_data.cron_expression: + cadence["cron"] = input_data.cron_expression + if input_data.timezone: + cadence["timezone"] = input_data.timezone + payload["cadence"] = cadence + + if input_data.metadata is not None: + payload["metadata"] = input_data.metadata + + sdk_monitor = aexa.websets.monitors.update( + monitor_id=input_data.monitor_id, params=payload + ) + + # Convert to our stable model + monitor = MonitorModel.from_sdk(sdk_monitor) + + # Yield fields + yield "monitor_id", monitor.id + yield "status", monitor.status + yield "next_run_at", monitor.next_run_at + yield "updated_at", monitor.updated_at + yield "success", "true" + + +class ExaDeleteMonitorBlock(Block): + """Delete a monitor from a webset.""" + + class Input(BlockSchemaInput): + credentials: CredentialsMetaInput = exa.credentials_field( + description="The Exa integration requires an API Key." 
+ ) + monitor_id: str = SchemaField( + description="The ID of the monitor to delete", + placeholder="monitor-id", + ) + + class Output(BlockSchemaOutput): + monitor_id: str = SchemaField(description="The ID of the deleted monitor") + success: str = SchemaField(description="Whether the deletion was successful") + + def __init__(self): + super().__init__( + id="f16f9b10-0c4d-4db8-997d-7b96b6026094", + description="Delete a monitor from a webset", + categories={BlockCategory.SEARCH}, + input_schema=ExaDeleteMonitorBlock.Input, + output_schema=ExaDeleteMonitorBlock.Output, + ) + + async def run( + self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs + ) -> BlockOutput: + # Use AsyncExa SDK + aexa = AsyncExa(api_key=credentials.api_key.get_secret_value()) + + deleted_monitor = aexa.websets.monitors.delete(monitor_id=input_data.monitor_id) + + yield "monitor_id", deleted_monitor.id + yield "success", "true" + + +class ExaListMonitorsBlock(Block): + """List all monitors with pagination.""" + + class Input(BlockSchemaInput): + credentials: CredentialsMetaInput = exa.credentials_field( + description="The Exa integration requires an API Key." + ) + webset_id: Optional[str] = SchemaField( + default=None, + description="Filter monitors by webset ID", + placeholder="webset-id", + ) + limit: int = SchemaField( + default=25, + description="Number of monitors to return", + ge=1, + le=100, + ) + cursor: Optional[str] = SchemaField( + default=None, + description="Cursor for pagination", + advanced=True, + ) + + class Output(BlockSchemaOutput): + monitors: list[dict] = SchemaField(description="List of monitors") + monitor: dict = SchemaField( + description="Individual monitor (yielded for each monitor)" + ) + has_more: bool = SchemaField( + description="Whether there are more monitors to paginate through" + ) + next_cursor: Optional[str] = SchemaField( + description="Cursor for the next page of results" + ) + + def __init__(self): + super().__init__( + id="f06e2b38-5397-4e8f-aa85-491149dd98df", + description="List all monitors with optional webset filtering", + categories={BlockCategory.SEARCH}, + input_schema=ExaListMonitorsBlock.Input, + output_schema=ExaListMonitorsBlock.Output, + ) + + async def run( + self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs + ) -> BlockOutput: + # Use AsyncExa SDK + aexa = AsyncExa(api_key=credentials.api_key.get_secret_value()) + + response = aexa.websets.monitors.list( + cursor=input_data.cursor, + limit=input_data.limit, + webset_id=input_data.webset_id, + ) + + # Convert SDK monitors to our stable models + monitors = [MonitorModel.from_sdk(m) for m in response.data] + + # Yield the full list + yield "monitors", [m.model_dump() for m in monitors] + + # Yield individual monitors for graph chaining + for monitor in monitors: + yield "monitor", monitor.model_dump() + + # Yield pagination metadata + yield "has_more", response.has_more + yield "next_cursor", response.next_cursor diff --git a/autogpt_platform/backend/backend/blocks/exa/websets_polling.py b/autogpt_platform/backend/backend/blocks/exa/websets_polling.py new file mode 100644 index 0000000000..4aa86567b4 --- /dev/null +++ b/autogpt_platform/backend/backend/blocks/exa/websets_polling.py @@ -0,0 +1,600 @@ +""" +Exa Websets Polling Blocks + +This module provides dedicated polling blocks for waiting on webset operations +to complete, with progress tracking and timeout management. 
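+
+Blocks defined here: ExaWaitForWebsetBlock, ExaWaitForSearchBlock, and
+ExaWaitForEnrichmentBlock. Each polls the Exa API until the target reaches a
+terminal state or the timeout elapses, backing off exponentially between
+status checks.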
+""" + +import asyncio +import time +from enum import Enum +from typing import Any, Dict + +from exa_py import AsyncExa +from pydantic import BaseModel + +from backend.sdk import ( + APIKeyCredentials, + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, + CredentialsMetaInput, + SchemaField, +) + +from ._config import exa + +# Import WebsetItemModel for use in enrichment samples +# This is safe as websets_items doesn't import from websets_polling +from .websets_items import WebsetItemModel + + +# Model for sample enrichment data +class SampleEnrichmentModel(BaseModel): + """Sample enrichment result for display.""" + + item_id: str + item_title: str + enrichment_data: Dict[str, Any] + + +class WebsetTargetStatus(str, Enum): + IDLE = "idle" + COMPLETED = "completed" + RUNNING = "running" + PAUSED = "paused" + ANY_COMPLETE = "any_complete" # Either idle or completed + + +class ExaWaitForWebsetBlock(Block): + """Wait for a webset to reach a specific status with progress tracking.""" + + class Input(BlockSchemaInput): + credentials: CredentialsMetaInput = exa.credentials_field( + description="The Exa integration requires an API Key." + ) + webset_id: str = SchemaField( + description="The ID or external ID of the Webset to monitor", + placeholder="webset-id-or-external-id", + ) + target_status: WebsetTargetStatus = SchemaField( + default=WebsetTargetStatus.IDLE, + description="Status to wait for (idle=all operations complete, completed=search done, running=actively processing)", + ) + timeout: int = SchemaField( + default=300, + description="Maximum time to wait in seconds", + ge=1, + le=1800, # 30 minutes max + ) + check_interval: int = SchemaField( + default=5, + description="Initial interval between status checks in seconds", + advanced=True, + ge=1, + le=60, + ) + max_interval: int = SchemaField( + default=30, + description="Maximum interval between checks (for exponential backoff)", + advanced=True, + ge=5, + le=120, + ) + include_progress: bool = SchemaField( + default=True, + description="Include detailed progress information in output", + ) + + class Output(BlockSchemaOutput): + webset_id: str = SchemaField(description="The webset ID that was monitored") + final_status: str = SchemaField(description="The final status of the webset") + elapsed_time: float = SchemaField(description="Total time elapsed in seconds") + item_count: int = SchemaField(description="Number of items found") + search_progress: dict = SchemaField( + description="Detailed search progress information" + ) + enrichment_progress: dict = SchemaField( + description="Detailed enrichment progress information" + ) + timed_out: bool = SchemaField(description="Whether the operation timed out") + + def __init__(self): + super().__init__( + id="619d71e8-b72a-434d-8bd4-23376dd0342c", + description="Wait for a webset to reach a specific status with progress tracking", + categories={BlockCategory.SEARCH}, + input_schema=ExaWaitForWebsetBlock.Input, + output_schema=ExaWaitForWebsetBlock.Output, + ) + + async def run( + self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs + ) -> BlockOutput: + start_time = time.time() + aexa = AsyncExa(api_key=credentials.api_key.get_secret_value()) + + try: + if input_data.target_status in [ + WebsetTargetStatus.IDLE, + WebsetTargetStatus.ANY_COMPLETE, + ]: + final_webset = aexa.websets.wait_until_idle( + id=input_data.webset_id, + timeout=input_data.timeout, + poll_interval=input_data.check_interval, + ) + + elapsed = time.time() - start_time + + 
status_str = ( + final_webset.status.value + if hasattr(final_webset.status, "value") + else str(final_webset.status) + ) + + item_count = 0 + if final_webset.searches: + for search in final_webset.searches: + if search.progress: + item_count += search.progress.found + + # Extract progress if requested + search_progress = {} + enrichment_progress = {} + if input_data.include_progress: + webset_dict = final_webset.model_dump( + by_alias=True, exclude_none=True + ) + search_progress = self._extract_search_progress(webset_dict) + enrichment_progress = self._extract_enrichment_progress(webset_dict) + + yield "webset_id", input_data.webset_id + yield "final_status", status_str + yield "elapsed_time", elapsed + yield "item_count", item_count + if input_data.include_progress: + yield "search_progress", search_progress + yield "enrichment_progress", enrichment_progress + yield "timed_out", False + else: + # For other status targets, manually poll + interval = input_data.check_interval + while time.time() - start_time < input_data.timeout: + # Get current webset status + webset = aexa.websets.get(id=input_data.webset_id) + current_status = ( + webset.status.value + if hasattr(webset.status, "value") + else str(webset.status) + ) + + # Check if target status reached + if current_status == input_data.target_status.value: + elapsed = time.time() - start_time + + # Estimate item count from search progress + item_count = 0 + if webset.searches: + for search in webset.searches: + if search.progress: + item_count += search.progress.found + + search_progress = {} + enrichment_progress = {} + if input_data.include_progress: + webset_dict = webset.model_dump( + by_alias=True, exclude_none=True + ) + search_progress = self._extract_search_progress(webset_dict) + enrichment_progress = self._extract_enrichment_progress( + webset_dict + ) + + yield "webset_id", input_data.webset_id + yield "final_status", current_status + yield "elapsed_time", elapsed + yield "item_count", item_count + if input_data.include_progress: + yield "search_progress", search_progress + yield "enrichment_progress", enrichment_progress + yield "timed_out", False + return + + # Wait before next check with exponential backoff + await asyncio.sleep(interval) + interval = min(interval * 1.5, input_data.max_interval) + + # Timeout reached + elapsed = time.time() - start_time + webset = aexa.websets.get(id=input_data.webset_id) + final_status = ( + webset.status.value + if hasattr(webset.status, "value") + else str(webset.status) + ) + + item_count = 0 + if webset.searches: + for search in webset.searches: + if search.progress: + item_count += search.progress.found + + search_progress = {} + enrichment_progress = {} + if input_data.include_progress: + webset_dict = webset.model_dump(by_alias=True, exclude_none=True) + search_progress = self._extract_search_progress(webset_dict) + enrichment_progress = self._extract_enrichment_progress(webset_dict) + + yield "webset_id", input_data.webset_id + yield "final_status", final_status + yield "elapsed_time", elapsed + yield "item_count", item_count + if input_data.include_progress: + yield "search_progress", search_progress + yield "enrichment_progress", enrichment_progress + yield "timed_out", True + + except asyncio.TimeoutError: + raise ValueError( + f"Polling timed out after {input_data.timeout} seconds" + ) from None + + def _extract_search_progress(self, webset_data: dict) -> dict: + """Extract search progress information from webset data.""" + progress = {} + searches = 
webset_data.get("searches", []) + + for idx, search in enumerate(searches): + search_id = search.get("id", f"search_{idx}") + search_progress = search.get("progress", {}) + + progress[search_id] = { + "status": search.get("status", "unknown"), + "found": search_progress.get("found", 0), + "analyzed": search_progress.get("analyzed", 0), + "completion": search_progress.get("completion", 0), + "time_left": search_progress.get("timeLeft", 0), + } + + return progress + + def _extract_enrichment_progress(self, webset_data: dict) -> dict: + """Extract enrichment progress information from webset data.""" + progress = {} + enrichments = webset_data.get("enrichments", []) + + for idx, enrichment in enumerate(enrichments): + enrich_id = enrichment.get("id", f"enrichment_{idx}") + + progress[enrich_id] = { + "status": enrichment.get("status", "unknown"), + "title": enrichment.get("title", ""), + "description": enrichment.get("description", ""), + } + + return progress + + +class ExaWaitForSearchBlock(Block): + """Wait for a specific webset search to complete with progress tracking.""" + + class Input(BlockSchemaInput): + credentials: CredentialsMetaInput = exa.credentials_field( + description="The Exa integration requires an API Key." + ) + webset_id: str = SchemaField( + description="The ID or external ID of the Webset", + placeholder="webset-id-or-external-id", + ) + search_id: str = SchemaField( + description="The ID of the search to monitor", + placeholder="search-id", + ) + timeout: int = SchemaField( + default=300, + description="Maximum time to wait in seconds", + ge=1, + le=1800, + ) + check_interval: int = SchemaField( + default=5, + description="Initial interval between status checks in seconds", + advanced=True, + ge=1, + le=60, + ) + + class Output(BlockSchemaOutput): + search_id: str = SchemaField(description="The search ID that was monitored") + final_status: str = SchemaField(description="The final status of the search") + items_found: int = SchemaField( + description="Number of items found by the search" + ) + items_analyzed: int = SchemaField(description="Number of items analyzed") + completion_percentage: int = SchemaField( + description="Completion percentage (0-100)" + ) + elapsed_time: float = SchemaField(description="Total time elapsed in seconds") + recall_info: dict = SchemaField( + description="Information about expected results and confidence" + ) + timed_out: bool = SchemaField(description="Whether the operation timed out") + + def __init__(self): + super().__init__( + id="14da21ae-40a1-41bc-a111-c8e5c9ef012b", + description="Wait for a specific webset search to complete with progress tracking", + categories={BlockCategory.SEARCH}, + input_schema=ExaWaitForSearchBlock.Input, + output_schema=ExaWaitForSearchBlock.Output, + ) + + async def run( + self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs + ) -> BlockOutput: + start_time = time.time() + interval = input_data.check_interval + max_interval = 30 + # Use AsyncExa SDK + aexa = AsyncExa(api_key=credentials.api_key.get_secret_value()) + + try: + while time.time() - start_time < input_data.timeout: + # Get current search status using SDK + search = aexa.websets.searches.get( + webset_id=input_data.webset_id, id=input_data.search_id + ) + + # Extract status + status = ( + search.status.value + if hasattr(search.status, "value") + else str(search.status) + ) + + # Check if search is complete + if status in ["completed", "failed", "canceled"]: + elapsed = time.time() - start_time + + # Extract progress 
information + progress_dict = {} + if search.progress: + progress_dict = search.progress.model_dump( + by_alias=True, exclude_none=True + ) + + # Extract recall information + recall_info = {} + if search.recall: + recall_dict = search.recall.model_dump( + by_alias=True, exclude_none=True + ) + expected = recall_dict.get("expected", {}) + recall_info = { + "expected_total": expected.get("total", 0), + "confidence": expected.get("confidence", ""), + "min_expected": expected.get("bounds", {}).get("min", 0), + "max_expected": expected.get("bounds", {}).get("max", 0), + "reasoning": recall_dict.get("reasoning", ""), + } + + yield "search_id", input_data.search_id + yield "final_status", status + yield "items_found", progress_dict.get("found", 0) + yield "items_analyzed", progress_dict.get("analyzed", 0) + yield "completion_percentage", progress_dict.get("completion", 0) + yield "elapsed_time", elapsed + yield "recall_info", recall_info + yield "timed_out", False + + return + + # Wait before next check with exponential backoff + await asyncio.sleep(interval) + interval = min(interval * 1.5, max_interval) + + # Timeout reached + elapsed = time.time() - start_time + + # Get last known status + search = aexa.websets.searches.get( + webset_id=input_data.webset_id, id=input_data.search_id + ) + final_status = ( + search.status.value + if hasattr(search.status, "value") + else str(search.status) + ) + + progress_dict = {} + if search.progress: + progress_dict = search.progress.model_dump( + by_alias=True, exclude_none=True + ) + + yield "search_id", input_data.search_id + yield "final_status", final_status + yield "items_found", progress_dict.get("found", 0) + yield "items_analyzed", progress_dict.get("analyzed", 0) + yield "completion_percentage", progress_dict.get("completion", 0) + yield "elapsed_time", elapsed + yield "timed_out", True + + except asyncio.TimeoutError: + raise ValueError( + f"Search polling timed out after {input_data.timeout} seconds" + ) from None + + +class ExaWaitForEnrichmentBlock(Block): + """Wait for a webset enrichment to complete with progress tracking.""" + + class Input(BlockSchemaInput): + credentials: CredentialsMetaInput = exa.credentials_field( + description="The Exa integration requires an API Key." 
+ ) + webset_id: str = SchemaField( + description="The ID or external ID of the Webset", + placeholder="webset-id-or-external-id", + ) + enrichment_id: str = SchemaField( + description="The ID of the enrichment to monitor", + placeholder="enrichment-id", + ) + timeout: int = SchemaField( + default=300, + description="Maximum time to wait in seconds", + ge=1, + le=1800, + ) + check_interval: int = SchemaField( + default=5, + description="Initial interval between status checks in seconds", + advanced=True, + ge=1, + le=60, + ) + sample_results: bool = SchemaField( + default=True, + description="Include sample enrichment results in output", + ) + + class Output(BlockSchemaOutput): + enrichment_id: str = SchemaField( + description="The enrichment ID that was monitored" + ) + final_status: str = SchemaField( + description="The final status of the enrichment" + ) + items_enriched: int = SchemaField( + description="Number of items successfully enriched" + ) + enrichment_title: str = SchemaField( + description="Title/description of the enrichment" + ) + elapsed_time: float = SchemaField(description="Total time elapsed in seconds") + sample_data: list[SampleEnrichmentModel] = SchemaField( + description="Sample of enriched data (if requested)" + ) + timed_out: bool = SchemaField(description="Whether the operation timed out") + + def __init__(self): + super().__init__( + id="a11865c3-ac80-4721-8a40-ac4e3b71a558", + description="Wait for a webset enrichment to complete with progress tracking", + categories={BlockCategory.SEARCH}, + input_schema=ExaWaitForEnrichmentBlock.Input, + output_schema=ExaWaitForEnrichmentBlock.Output, + ) + + async def run( + self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs + ) -> BlockOutput: + start_time = time.time() + interval = input_data.check_interval + max_interval = 30 + # Use AsyncExa SDK + aexa = AsyncExa(api_key=credentials.api_key.get_secret_value()) + + try: + while time.time() - start_time < input_data.timeout: + # Get current enrichment status using SDK + enrichment = aexa.websets.enrichments.get( + webset_id=input_data.webset_id, id=input_data.enrichment_id + ) + + # Extract status + status = ( + enrichment.status.value + if hasattr(enrichment.status, "value") + else str(enrichment.status) + ) + + # Check if enrichment is complete + if status in ["completed", "failed", "canceled"]: + elapsed = time.time() - start_time + + # Get sample enriched items if requested + sample_data = [] + items_enriched = 0 + + if input_data.sample_results and status == "completed": + sample_data, items_enriched = ( + await self._get_sample_enrichments( + input_data.webset_id, input_data.enrichment_id, aexa + ) + ) + + yield "enrichment_id", input_data.enrichment_id + yield "final_status", status + yield "items_enriched", items_enriched + yield "enrichment_title", enrichment.title or enrichment.description or "" + yield "elapsed_time", elapsed + if input_data.sample_results: + yield "sample_data", sample_data + yield "timed_out", False + + return + + # Wait before next check with exponential backoff + await asyncio.sleep(interval) + interval = min(interval * 1.5, max_interval) + + # Timeout reached + elapsed = time.time() - start_time + + # Get last known status + enrichment = aexa.websets.enrichments.get( + webset_id=input_data.webset_id, id=input_data.enrichment_id + ) + final_status = ( + enrichment.status.value + if hasattr(enrichment.status, "value") + else str(enrichment.status) + ) + title = enrichment.title or enrichment.description or "" + + yield 
"enrichment_id", input_data.enrichment_id + yield "final_status", final_status + yield "items_enriched", 0 + yield "enrichment_title", title + yield "elapsed_time", elapsed + yield "timed_out", True + + except asyncio.TimeoutError: + raise ValueError( + f"Enrichment polling timed out after {input_data.timeout} seconds" + ) from None + + async def _get_sample_enrichments( + self, webset_id: str, enrichment_id: str, aexa: AsyncExa + ) -> tuple[list[SampleEnrichmentModel], int]: + """Get sample enriched data and count.""" + # Get a few items to see enrichment results using SDK + response = aexa.websets.items.list(webset_id=webset_id, limit=5) + + sample_data: list[SampleEnrichmentModel] = [] + enriched_count = 0 + + for sdk_item in response.data: + # Convert to our WebsetItemModel first + item = WebsetItemModel.from_sdk(sdk_item) + + # Check if this item has the enrichment we're looking for + if enrichment_id in item.enrichments: + enriched_count += 1 + enrich_model = item.enrichments[enrichment_id] + + # Create sample using our typed model + sample = SampleEnrichmentModel( + item_id=item.id, + item_title=item.title, + enrichment_data=enrich_model.model_dump(exclude_none=True), + ) + sample_data.append(sample) + + return sample_data, enriched_count diff --git a/autogpt_platform/backend/backend/blocks/exa/websets_search.py b/autogpt_platform/backend/backend/blocks/exa/websets_search.py new file mode 100644 index 0000000000..ff0974f021 --- /dev/null +++ b/autogpt_platform/backend/backend/blocks/exa/websets_search.py @@ -0,0 +1,650 @@ +""" +Exa Websets Search Management Blocks + +This module provides blocks for creating and managing searches within websets, +including adding new searches, checking status, and canceling operations. +""" + +from enum import Enum +from typing import Any, Dict, List, Optional + +from exa_py import AsyncExa +from exa_py.websets.types import WebsetSearch as SdkWebsetSearch +from pydantic import BaseModel + +from backend.sdk import ( + APIKeyCredentials, + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, + CredentialsMetaInput, + SchemaField, +) + +from ._config import exa + + +# Mirrored model for stability +class WebsetSearchModel(BaseModel): + """Stable output model mirroring SDK WebsetSearch.""" + + id: str + webset_id: str + status: str + query: str + entity_type: str + criteria: List[Dict[str, Any]] + count: int + behavior: str + progress: Dict[str, Any] + recall: Optional[Dict[str, Any]] + created_at: str + updated_at: str + canceled_at: Optional[str] + canceled_reason: Optional[str] + metadata: Dict[str, Any] + + @classmethod + def from_sdk(cls, search: SdkWebsetSearch) -> "WebsetSearchModel": + """Convert SDK WebsetSearch to our stable model.""" + # Extract entity type + entity_type = "auto" + if search.entity: + entity_dict = search.entity.model_dump(by_alias=True) + entity_type = entity_dict.get("type", "auto") + + # Convert criteria + criteria = [c.model_dump(by_alias=True) for c in search.criteria] + + # Convert progress + progress_dict = {} + if search.progress: + progress_dict = search.progress.model_dump(by_alias=True) + + # Convert recall + recall_dict = None + if search.recall: + recall_dict = search.recall.model_dump(by_alias=True) + + return cls( + id=search.id, + webset_id=search.webset_id, + status=( + search.status.value + if hasattr(search.status, "value") + else str(search.status) + ), + query=search.query, + entity_type=entity_type, + criteria=criteria, + count=search.count, + behavior=search.behavior.value if 
search.behavior else "override", + progress=progress_dict, + recall=recall_dict, + created_at=search.created_at.isoformat() if search.created_at else "", + updated_at=search.updated_at.isoformat() if search.updated_at else "", + canceled_at=search.canceled_at.isoformat() if search.canceled_at else None, + canceled_reason=( + search.canceled_reason.value if search.canceled_reason else None + ), + metadata=search.metadata if search.metadata else {}, + ) + + +class SearchBehavior(str, Enum): + """Behavior for how new search results interact with existing items.""" + + OVERRIDE = "override" # Replace existing items + APPEND = "append" # Add to existing items + MERGE = "merge" # Merge with existing items + + +class SearchEntityType(str, Enum): + COMPANY = "company" + PERSON = "person" + ARTICLE = "article" + RESEARCH_PAPER = "research_paper" + CUSTOM = "custom" + AUTO = "auto" + + +class ExaCreateWebsetSearchBlock(Block): + """Add a new search to an existing webset.""" + + class Input(BlockSchemaInput): + credentials: CredentialsMetaInput = exa.credentials_field( + description="The Exa integration requires an API Key." + ) + webset_id: str = SchemaField( + description="The ID or external ID of the Webset", + placeholder="webset-id-or-external-id", + ) + query: str = SchemaField( + description="Search query describing what to find", + placeholder="Engineering managers at Fortune 500 companies", + ) + count: int = SchemaField( + default=10, + description="Number of items to find", + ge=1, + le=1000, + ) + + # Entity configuration + entity_type: SearchEntityType = SchemaField( + default=SearchEntityType.AUTO, + description="Type of entity to search for", + ) + entity_description: Optional[str] = SchemaField( + default=None, + description="Description for custom entity type", + advanced=True, + ) + + # Criteria for verification + criteria: list[str] = SchemaField( + default_factory=list, + description="List of criteria that items must meet. 
If not provided, auto-detected from query.", + advanced=True, + ) + + # Advanced search options + behavior: SearchBehavior = SchemaField( + default=SearchBehavior.APPEND, + description="How new results interact with existing items", + advanced=True, + ) + recall: bool = SchemaField( + default=True, + description="Enable recall estimation for expected results", + advanced=True, + ) + + # Exclude sources + exclude_source_ids: list[str] = SchemaField( + default_factory=list, + description="IDs of imports/websets to exclude from results", + advanced=True, + ) + exclude_source_types: list[str] = SchemaField( + default_factory=list, + description="Types of sources to exclude ('import' or 'webset')", + advanced=True, + ) + + # Scope sources + scope_source_ids: list[str] = SchemaField( + default_factory=list, + description="IDs of imports/websets to limit search scope to", + advanced=True, + ) + scope_source_types: list[str] = SchemaField( + default_factory=list, + description="Types of scope sources ('import' or 'webset')", + advanced=True, + ) + scope_relationships: list[str] = SchemaField( + default_factory=list, + description="Relationship definitions for hop searches", + advanced=True, + ) + scope_relationship_limits: list[int] = SchemaField( + default_factory=list, + description="Limits on related entities to find", + advanced=True, + ) + + metadata: Optional[dict] = SchemaField( + default=None, + description="Metadata to attach to the search", + advanced=True, + ) + + # Polling options + wait_for_completion: bool = SchemaField( + default=False, + description="Wait for the search to complete before returning", + ) + polling_timeout: int = SchemaField( + default=300, + description="Maximum time to wait for completion in seconds", + advanced=True, + ge=1, + le=600, + ) + + class Output(BlockSchemaOutput): + search_id: str = SchemaField( + description="The unique identifier for the created search" + ) + webset_id: str = SchemaField(description="The webset this search belongs to") + status: str = SchemaField(description="Current status of the search") + query: str = SchemaField(description="The search query") + expected_results: dict = SchemaField( + description="Recall estimation of expected results" + ) + items_found: Optional[int] = SchemaField( + description="Number of items found (if wait_for_completion was True)" + ) + completion_time: Optional[float] = SchemaField( + description="Time taken to complete in seconds (if wait_for_completion was True)" + ) + + def __init__(self): + super().__init__( + id="342ff776-2e2c-4cdb-b392-4eeb34b21d5f", + description="Add a new search to an existing webset to find more items", + categories={BlockCategory.SEARCH}, + input_schema=ExaCreateWebsetSearchBlock.Input, + output_schema=ExaCreateWebsetSearchBlock.Output, + ) + + async def run( + self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs + ) -> BlockOutput: + import time + + # Build the payload + payload = { + "query": input_data.query, + "count": input_data.count, + "behavior": input_data.behavior.value, + "recall": input_data.recall, + } + + # Add entity configuration + if input_data.entity_type != SearchEntityType.AUTO: + entity = {"type": input_data.entity_type.value} + if ( + input_data.entity_type == SearchEntityType.CUSTOM + and input_data.entity_description + ): + entity["description"] = input_data.entity_description + payload["entity"] = entity + + # Add criteria if provided + if input_data.criteria: + payload["criteria"] = [{"description": c} for c in input_data.criteria] + + # 
Add exclude sources + if input_data.exclude_source_ids: + exclude_list = [] + for idx, src_id in enumerate(input_data.exclude_source_ids): + src_type = "import" + if input_data.exclude_source_types and idx < len( + input_data.exclude_source_types + ): + src_type = input_data.exclude_source_types[idx] + exclude_list.append({"source": src_type, "id": src_id}) + payload["exclude"] = exclude_list + + # Add scope sources + if input_data.scope_source_ids: + scope_list: list[dict[str, Any]] = [] + for idx, src_id in enumerate(input_data.scope_source_ids): + scope_item: dict[str, Any] = {"source": "import", "id": src_id} + + if input_data.scope_source_types and idx < len( + input_data.scope_source_types + ): + scope_item["source"] = input_data.scope_source_types[idx] + + # Add relationship if provided + if input_data.scope_relationships and idx < len( + input_data.scope_relationships + ): + relationship: dict[str, Any] = { + "definition": input_data.scope_relationships[idx] + } + if input_data.scope_relationship_limits and idx < len( + input_data.scope_relationship_limits + ): + relationship["limit"] = input_data.scope_relationship_limits[ + idx + ] + scope_item["relationship"] = relationship + + scope_list.append(scope_item) + payload["scope"] = scope_list + + # Add metadata if provided + if input_data.metadata: + payload["metadata"] = input_data.metadata + + start_time = time.time() + + aexa = AsyncExa(api_key=credentials.api_key.get_secret_value()) + + sdk_search = aexa.websets.searches.create( + webset_id=input_data.webset_id, params=payload + ) + + search_id = sdk_search.id + status = ( + sdk_search.status.value + if hasattr(sdk_search.status, "value") + else str(sdk_search.status) + ) + + # Extract expected results from recall + expected_results = {} + if sdk_search.recall: + recall_dict = sdk_search.recall.model_dump(by_alias=True) + expected = recall_dict.get("expected", {}) + expected_results = { + "total": expected.get("total", 0), + "confidence": expected.get("confidence", ""), + "min": expected.get("bounds", {}).get("min", 0), + "max": expected.get("bounds", {}).get("max", 0), + "reasoning": recall_dict.get("reasoning", ""), + } + + # If wait_for_completion is True, poll for completion + if input_data.wait_for_completion: + import asyncio + + poll_interval = 5 + max_interval = 30 + poll_start = time.time() + + while time.time() - poll_start < input_data.polling_timeout: + current_search = aexa.websets.searches.get( + webset_id=input_data.webset_id, id=search_id + ) + current_status = ( + current_search.status.value + if hasattr(current_search.status, "value") + else str(current_search.status) + ) + + if current_status in ["completed", "failed", "cancelled"]: + items_found = 0 + if current_search.progress: + items_found = current_search.progress.found + completion_time = time.time() - start_time + + yield "search_id", search_id + yield "webset_id", input_data.webset_id + yield "status", current_status + yield "query", input_data.query + yield "expected_results", expected_results + yield "items_found", items_found + yield "completion_time", completion_time + return + + await asyncio.sleep(poll_interval) + poll_interval = min(poll_interval * 1.5, max_interval) + + # Timeout - yield what we have + yield "search_id", search_id + yield "webset_id", input_data.webset_id + yield "status", status + yield "query", input_data.query + yield "expected_results", expected_results + yield "items_found", 0 + yield "completion_time", time.time() - start_time + else: + yield "search_id", search_id + 
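As an aside on the polling branch above: `wait_for_completion` uses a capped exponential backoff, starting at 5 seconds, multiplying the interval by 1.5 after each poll, and never exceeding 30 seconds or the configured timeout. A minimal, self-contained sketch of that pattern, assuming a caller-supplied `fetch_status` coroutine and the same terminal states used above (`completed`, `failed`, `cancelled`):

```python
import asyncio
import time
from typing import Awaitable, Callable

# Terminal statuses assumed from the block above; adjust if the API uses others.
TERMINAL_STATES = {"completed", "failed", "cancelled"}


async def poll_until_done(
    fetch_status: Callable[[], Awaitable[str]],
    timeout: float = 300.0,
    initial_interval: float = 5.0,
    max_interval: float = 30.0,
    backoff: float = 1.5,
) -> str:
    """Poll fetch_status() with capped exponential backoff until a terminal state or timeout."""
    start = time.time()
    interval = initial_interval
    status = "unknown"
    while time.time() - start < timeout:
        status = await fetch_status()
        if status in TERMINAL_STATES:
            return status
        await asyncio.sleep(interval)
        interval = min(interval * backoff, max_interval)
    return status  # timed out; caller sees the last observed status
```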
yield "webset_id", input_data.webset_id + yield "status", status + yield "query", input_data.query + yield "expected_results", expected_results + + +class ExaGetWebsetSearchBlock(Block): + """Get the status and details of a webset search.""" + + class Input(BlockSchemaInput): + credentials: CredentialsMetaInput = exa.credentials_field( + description="The Exa integration requires an API Key." + ) + webset_id: str = SchemaField( + description="The ID or external ID of the Webset", + placeholder="webset-id-or-external-id", + ) + search_id: str = SchemaField( + description="The ID of the search to retrieve", + placeholder="search-id", + ) + + class Output(BlockSchemaOutput): + search_id: str = SchemaField(description="The unique identifier for the search") + status: str = SchemaField(description="Current status of the search") + query: str = SchemaField(description="The search query") + entity_type: str = SchemaField(description="Type of entity being searched") + criteria: list[dict] = SchemaField(description="Criteria used for verification") + progress: dict = SchemaField(description="Search progress information") + recall: dict = SchemaField(description="Recall estimation information") + created_at: str = SchemaField(description="When the search was created") + updated_at: str = SchemaField(description="When the search was last updated") + canceled_at: Optional[str] = SchemaField( + description="When the search was canceled (if applicable)" + ) + canceled_reason: Optional[str] = SchemaField( + description="Reason for cancellation (if applicable)" + ) + metadata: dict = SchemaField(description="Metadata attached to the search") + + def __init__(self): + super().__init__( + id="4fa3e627-a0ff-485f-8732-52148051646c", + description="Get the status and details of a webset search", + categories={BlockCategory.SEARCH}, + input_schema=ExaGetWebsetSearchBlock.Input, + output_schema=ExaGetWebsetSearchBlock.Output, + ) + + async def run( + self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs + ) -> BlockOutput: + # Use AsyncExa SDK + aexa = AsyncExa(api_key=credentials.api_key.get_secret_value()) + + sdk_search = aexa.websets.searches.get( + webset_id=input_data.webset_id, id=input_data.search_id + ) + + search = WebsetSearchModel.from_sdk(sdk_search) + + # Extract progress information + progress_info = { + "found": search.progress.get("found", 0), + "analyzed": search.progress.get("analyzed", 0), + "completion": search.progress.get("completion", 0), + "time_left": search.progress.get("timeLeft", 0), + } + + # Extract recall information + recall_data = {} + if search.recall: + expected = search.recall.get("expected", {}) + recall_data = { + "expected_total": expected.get("total", 0), + "confidence": expected.get("confidence", ""), + "min_expected": expected.get("bounds", {}).get("min", 0), + "max_expected": expected.get("bounds", {}).get("max", 0), + "reasoning": search.recall.get("reasoning", ""), + } + + yield "search_id", search.id + yield "status", search.status + yield "query", search.query + yield "entity_type", search.entity_type + yield "criteria", search.criteria + yield "progress", progress_info + yield "recall", recall_data + yield "created_at", search.created_at + yield "updated_at", search.updated_at + yield "canceled_at", search.canceled_at + yield "canceled_reason", search.canceled_reason + yield "metadata", search.metadata + + +class ExaCancelWebsetSearchBlock(Block): + """Cancel a running webset search.""" + + class Input(BlockSchemaInput): + credentials: 
CredentialsMetaInput = exa.credentials_field( + description="The Exa integration requires an API Key." + ) + webset_id: str = SchemaField( + description="The ID or external ID of the Webset", + placeholder="webset-id-or-external-id", + ) + search_id: str = SchemaField( + description="The ID of the search to cancel", + placeholder="search-id", + ) + + class Output(BlockSchemaOutput): + search_id: str = SchemaField(description="The ID of the canceled search") + status: str = SchemaField(description="Status after cancellation") + items_found_before_cancel: int = SchemaField( + description="Number of items found before cancellation" + ) + success: str = SchemaField( + description="Whether the cancellation was successful" + ) + + def __init__(self): + super().__init__( + id="74ef9f1e-ae89-4c7f-9d7d-d217214815b4", + description="Cancel a running webset search", + categories={BlockCategory.SEARCH}, + input_schema=ExaCancelWebsetSearchBlock.Input, + output_schema=ExaCancelWebsetSearchBlock.Output, + ) + + async def run( + self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs + ) -> BlockOutput: + # Use AsyncExa SDK + aexa = AsyncExa(api_key=credentials.api_key.get_secret_value()) + + canceled_search = aexa.websets.searches.cancel( + webset_id=input_data.webset_id, id=input_data.search_id + ) + + # Extract items found before cancellation + items_found = 0 + if canceled_search.progress: + items_found = canceled_search.progress.found + + status = ( + canceled_search.status.value + if hasattr(canceled_search.status, "value") + else str(canceled_search.status) + ) + + yield "search_id", canceled_search.id + yield "status", status + yield "items_found_before_cancel", items_found + yield "success", "true" + + +class ExaFindOrCreateSearchBlock(Block): + """Find existing search by query or create new one (prevents duplicate searches).""" + + class Input(BlockSchemaInput): + credentials: CredentialsMetaInput = exa.credentials_field( + description="The Exa integration requires an API Key." 
+ ) + webset_id: str = SchemaField( + description="The ID or external ID of the Webset", + placeholder="webset-id-or-external-id", + ) + query: str = SchemaField( + description="Search query to find or create", + placeholder="AI companies in San Francisco", + ) + count: int = SchemaField( + default=10, + description="Number of items to find (only used if creating new search)", + ge=1, + le=1000, + ) + entity_type: SearchEntityType = SchemaField( + default=SearchEntityType.AUTO, + description="Entity type (only used if creating)", + advanced=True, + ) + behavior: SearchBehavior = SchemaField( + default=SearchBehavior.OVERRIDE, + description="Search behavior (only used if creating)", + advanced=True, + ) + + class Output(BlockSchemaOutput): + search_id: str = SchemaField(description="The search ID (existing or new)") + webset_id: str = SchemaField(description="The webset ID") + status: str = SchemaField(description="Current search status") + query: str = SchemaField(description="The search query") + was_created: bool = SchemaField( + description="True if search was newly created, False if already existed" + ) + items_found: int = SchemaField( + description="Number of items found (0 if still running)" + ) + + def __init__(self): + super().__init__( + id="cbdb05ac-cb73-4b03-a493-6d34e9a011da", + description="Find existing search by query or create new - prevents duplicate searches in workflows", + categories={BlockCategory.SEARCH}, + input_schema=ExaFindOrCreateSearchBlock.Input, + output_schema=ExaFindOrCreateSearchBlock.Output, + ) + + async def run( + self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs + ) -> BlockOutput: + # Use AsyncExa SDK + aexa = AsyncExa(api_key=credentials.api_key.get_secret_value()) + + # Get webset to check existing searches + webset = aexa.websets.get(id=input_data.webset_id) + + # Look for existing search with same query + existing_search = None + if webset.searches: + for search in webset.searches: + if search.query.strip().lower() == input_data.query.strip().lower(): + existing_search = search + break + + if existing_search: + # Found existing search + search = WebsetSearchModel.from_sdk(existing_search) + + yield "search_id", search.id + yield "webset_id", input_data.webset_id + yield "status", search.status + yield "query", search.query + yield "was_created", False + yield "items_found", search.progress.get("found", 0) + else: + # Create new search + payload: Dict[str, Any] = { + "query": input_data.query, + "count": input_data.count, + "behavior": input_data.behavior.value, + } + + # Add entity if not auto + if input_data.entity_type != SearchEntityType.AUTO: + payload["entity"] = {"type": input_data.entity_type.value} + + sdk_search = aexa.websets.searches.create( + webset_id=input_data.webset_id, params=payload + ) + + search = WebsetSearchModel.from_sdk(sdk_search) + + yield "search_id", search.id + yield "webset_id", input_data.webset_id + yield "status", search.status + yield "query", search.query + yield "was_created", True + yield "items_found", 0 # Newly created, no items yet diff --git a/autogpt_platform/backend/backend/blocks/fal/ai_video_generator.py b/autogpt_platform/backend/backend/blocks/fal/ai_video_generator.py index 2e795f0d78..2a71548dcc 100644 --- a/autogpt_platform/backend/backend/blocks/fal/ai_video_generator.py +++ b/autogpt_platform/backend/backend/blocks/fal/ai_video_generator.py @@ -10,7 +10,13 @@ from backend.blocks.fal._auth import ( FalCredentialsField, FalCredentialsInput, ) -from backend.data.block import 
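Note that `ExaFindOrCreateSearchBlock` above treats two searches as duplicates when their queries match after `strip().lower()` normalization. The matching rule in isolation, as a hedged sketch (the `SearchLike` protocol is purely illustrative, not an SDK type):

```python
from typing import Optional, Protocol, Sequence


class SearchLike(Protocol):
    """Anything exposing a `query` string, e.g. an SDK webset search object."""

    query: str


def find_matching_search(
    existing: Sequence[SearchLike], query: str
) -> Optional[SearchLike]:
    """Return the first search whose query matches case-insensitively, else None."""
    wanted = query.strip().lower()
    for search in existing:
        if search.query.strip().lower() == wanted:
            return search
    return None
```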
Block, BlockCategory, BlockOutput, BlockSchema +from backend.data.block import ( + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, +) from backend.data.model import SchemaField from backend.util.request import ClientResponseError, Requests @@ -24,7 +30,7 @@ class FalModel(str, Enum): class AIVideoGeneratorBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): prompt: str = SchemaField( description="Description of the video to generate.", placeholder="A dog running in a field.", @@ -36,7 +42,7 @@ class AIVideoGeneratorBlock(Block): ) credentials: FalCredentialsInput = FalCredentialsField() - class Output(BlockSchema): + class Output(BlockSchemaOutput): video_url: str = SchemaField(description="The URL of the generated video.") error: str = SchemaField( description="Error message if video generation failed." diff --git a/autogpt_platform/backend/backend/blocks/firecrawl/crawl.py b/autogpt_platform/backend/backend/blocks/firecrawl/crawl.py index b20452f777..eced461a8a 100644 --- a/autogpt_platform/backend/backend/blocks/firecrawl/crawl.py +++ b/autogpt_platform/backend/backend/blocks/firecrawl/crawl.py @@ -9,7 +9,8 @@ from backend.sdk import ( Block, BlockCategory, BlockOutput, - BlockSchema, + BlockSchemaInput, + BlockSchemaOutput, CredentialsMetaInput, SchemaField, ) @@ -19,7 +20,7 @@ from ._format_utils import convert_to_format_options class FirecrawlCrawlBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: CredentialsMetaInput = firecrawl.credentials_field() url: str = SchemaField(description="The URL to crawl") limit: int = SchemaField(description="The number of pages to crawl", default=10) @@ -39,7 +40,7 @@ class FirecrawlCrawlBlock(Block): description="The format of the crawl", default=[ScrapeFormat.MARKDOWN] ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): data: list[dict[str, Any]] = SchemaField(description="The result of the crawl") markdown: str = SchemaField(description="The markdown of the crawl") html: str = SchemaField(description="The html of the crawl") @@ -55,6 +56,10 @@ class FirecrawlCrawlBlock(Block): change_tracking: dict[str, Any] = SchemaField( description="The change tracking of the crawl" ) + error: str = SchemaField( + description="Error message if the crawl failed", + default="", + ) def __init__(self): super().__init__( diff --git a/autogpt_platform/backend/backend/blocks/firecrawl/extract.py b/autogpt_platform/backend/backend/blocks/firecrawl/extract.py index caef24d126..e5fd5ec9f3 100755 --- a/autogpt_platform/backend/backend/blocks/firecrawl/extract.py +++ b/autogpt_platform/backend/backend/blocks/firecrawl/extract.py @@ -9,18 +9,20 @@ from backend.sdk import ( BlockCost, BlockCostType, BlockOutput, - BlockSchema, + BlockSchemaInput, + BlockSchemaOutput, CredentialsMetaInput, SchemaField, cost, ) +from backend.util.exceptions import BlockExecutionError from ._config import firecrawl @cost(BlockCost(2, BlockCostType.RUN)) class FirecrawlExtractBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: CredentialsMetaInput = firecrawl.credentials_field() urls: list[str] = SchemaField( description="The URLs to crawl - at least one is required. Wildcards are supported. 
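The `BlockSchema` → `BlockSchemaInput`/`BlockSchemaOutput` swap seen in the fal and Firecrawl hunks repeats across most of the files below. For orientation only, a hypothetical minimal block written against the new bases (the UUID, category, and field names are placeholders, not code from this PR):

```python
from backend.data.block import (
    Block,
    BlockCategory,
    BlockOutput,
    BlockSchemaInput,
    BlockSchemaOutput,
)
from backend.data.model import SchemaField


class ExampleEchoBlock(Block):
    """Hypothetical block illustrating the new schema base classes."""

    class Input(BlockSchemaInput):  # previously: class Input(BlockSchema)
        text: str = SchemaField(description="Text to echo back")

    class Output(BlockSchemaOutput):  # previously: class Output(BlockSchema)
        text: str = SchemaField(description="The echoed text")
        # Several hunks in this PR drop explicit `error` outputs when adopting
        # BlockSchemaOutput; others keep a defaulted `error` field, as above.

    def __init__(self):
        super().__init__(
            id="00000000-0000-0000-0000-000000000000",  # placeholder UUID
            description="Echo the input text (illustration only)",
            categories={BlockCategory.BASIC},  # assumed category
            input_schema=ExampleEchoBlock.Input,
            output_schema=ExampleEchoBlock.Output,
        )

    async def run(self, input_data: Input, **kwargs) -> BlockOutput:
        yield "text", input_data.text
```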
(/*)" @@ -37,8 +39,12 @@ class FirecrawlExtractBlock(Block): default=False, ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): data: dict[str, Any] = SchemaField(description="The result of the crawl") + error: str = SchemaField( + description="Error message if the extraction failed", + default="", + ) def __init__(self): super().__init__( @@ -54,11 +60,18 @@ class FirecrawlExtractBlock(Block): ) -> BlockOutput: app = FirecrawlApp(api_key=credentials.api_key.get_secret_value()) - extract_result = app.extract( - urls=input_data.urls, - prompt=input_data.prompt, - schema=input_data.output_schema, - enable_web_search=input_data.enable_web_search, - ) + try: + extract_result = app.extract( + urls=input_data.urls, + prompt=input_data.prompt, + schema=input_data.output_schema, + enable_web_search=input_data.enable_web_search, + ) + except Exception as e: + raise BlockExecutionError( + message=f"Extract failed: {e}", + block_name=self.name, + block_id=self.id, + ) from e yield "data", extract_result.data diff --git a/autogpt_platform/backend/backend/blocks/firecrawl/map.py b/autogpt_platform/backend/backend/blocks/firecrawl/map.py index 3e33f90461..e2e04adac0 100644 --- a/autogpt_platform/backend/backend/blocks/firecrawl/map.py +++ b/autogpt_platform/backend/backend/blocks/firecrawl/map.py @@ -7,7 +7,8 @@ from backend.sdk import ( Block, BlockCategory, BlockOutput, - BlockSchema, + BlockSchemaInput, + BlockSchemaOutput, CredentialsMetaInput, SchemaField, ) @@ -16,16 +17,20 @@ from ._config import firecrawl class FirecrawlMapWebsiteBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: CredentialsMetaInput = firecrawl.credentials_field() url: str = SchemaField(description="The website url to map") - class Output(BlockSchema): + class Output(BlockSchemaOutput): links: list[str] = SchemaField(description="List of URLs found on the website") results: list[dict[str, Any]] = SchemaField( description="List of search results with url, title, and description" ) + error: str = SchemaField( + description="Error message if the map failed", + default="", + ) def __init__(self): super().__init__( diff --git a/autogpt_platform/backend/backend/blocks/firecrawl/scrape.py b/autogpt_platform/backend/backend/blocks/firecrawl/scrape.py index 2adde7e6d2..2c1a68d6d9 100644 --- a/autogpt_platform/backend/backend/blocks/firecrawl/scrape.py +++ b/autogpt_platform/backend/backend/blocks/firecrawl/scrape.py @@ -8,7 +8,8 @@ from backend.sdk import ( Block, BlockCategory, BlockOutput, - BlockSchema, + BlockSchemaInput, + BlockSchemaOutput, CredentialsMetaInput, SchemaField, ) @@ -18,7 +19,7 @@ from ._format_utils import convert_to_format_options class FirecrawlScrapeBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: CredentialsMetaInput = firecrawl.credentials_field() url: str = SchemaField(description="The URL to crawl") limit: int = SchemaField(description="The number of pages to crawl", default=10) @@ -38,7 +39,7 @@ class FirecrawlScrapeBlock(Block): description="The format of the crawl", default=[ScrapeFormat.MARKDOWN] ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): data: dict[str, Any] = SchemaField(description="The result of the crawl") markdown: str = SchemaField(description="The markdown of the crawl") html: str = SchemaField(description="The html of the crawl") @@ -54,6 +55,10 @@ class FirecrawlScrapeBlock(Block): change_tracking: dict[str, Any] = SchemaField( description="The change tracking of the crawl" ) + error: 
str = SchemaField( + description="Error message if the scrape failed", + default="", + ) def __init__(self): super().__init__( diff --git a/autogpt_platform/backend/backend/blocks/firecrawl/search.py b/autogpt_platform/backend/backend/blocks/firecrawl/search.py index 7af8796111..a2769a0f96 100644 --- a/autogpt_platform/backend/backend/blocks/firecrawl/search.py +++ b/autogpt_platform/backend/backend/blocks/firecrawl/search.py @@ -9,7 +9,8 @@ from backend.sdk import ( Block, BlockCategory, BlockOutput, - BlockSchema, + BlockSchemaInput, + BlockSchemaOutput, CredentialsMetaInput, SchemaField, ) @@ -19,7 +20,7 @@ from ._format_utils import convert_to_format_options class FirecrawlSearchBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: CredentialsMetaInput = firecrawl.credentials_field() query: str = SchemaField(description="The query to search for") limit: int = SchemaField(description="The number of pages to crawl", default=10) @@ -35,9 +36,13 @@ class FirecrawlSearchBlock(Block): description="Returns the content of the search if specified", default=[] ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): data: dict[str, Any] = SchemaField(description="The result of the search") site: dict[str, Any] = SchemaField(description="The site of the search") + error: str = SchemaField( + description="Error message if the search failed", + default="", + ) def __init__(self): super().__init__( diff --git a/autogpt_platform/backend/backend/blocks/flux_kontext.py b/autogpt_platform/backend/backend/blocks/flux_kontext.py index e3729240fa..dd8375c4ce 100644 --- a/autogpt_platform/backend/backend/blocks/flux_kontext.py +++ b/autogpt_platform/backend/backend/blocks/flux_kontext.py @@ -5,7 +5,13 @@ from pydantic import SecretStr from replicate.client import Client as ReplicateClient from replicate.helpers import FileOutput -from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema +from backend.data.block import ( + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, +) from backend.data.model import ( APIKeyCredentials, CredentialsField, @@ -13,6 +19,7 @@ from backend.data.model import ( SchemaField, ) from backend.integrations.providers import ProviderName +from backend.util.exceptions import ModerationError from backend.util.file import MediaFileType, store_media_file TEST_CREDENTIALS = APIKeyCredentials( @@ -57,7 +64,7 @@ class AspectRatio(str, Enum): class AIImageEditorBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: CredentialsMetaInput[ Literal[ProviderName.REPLICATE], Literal["api_key"] ] = CredentialsField( @@ -90,11 +97,10 @@ class AIImageEditorBlock(Block): title="Model", ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): output_image: MediaFileType = SchemaField( description="URL of the transformed image" ) - error: str = SchemaField(description="Error message if generation failed") def __init__(self): super().__init__( @@ -148,6 +154,8 @@ class AIImageEditorBlock(Block): ), aspect_ratio=input_data.aspect_ratio.value, seed=input_data.seed, + user_id=user_id, + graph_exec_id=graph_exec_id, ) yield "output_image", result @@ -159,6 +167,8 @@ class AIImageEditorBlock(Block): input_image_b64: Optional[str], aspect_ratio: str, seed: Optional[int], + user_id: str, + graph_exec_id: str, ) -> MediaFileType: client = ReplicateClient(api_token=api_key.get_secret_value()) input_params = { @@ -168,11 +178,21 @@ class AIImageEditorBlock(Block): **({"seed": 
seed} if seed is not None else {}), } - output: FileOutput | list[FileOutput] = await client.async_run( # type: ignore - model_name, - input=input_params, - wait=False, - ) + try: + output: FileOutput | list[FileOutput] = await client.async_run( # type: ignore + model_name, + input=input_params, + wait=False, + ) + except Exception as e: + if "flagged as sensitive" in str(e).lower(): + raise ModerationError( + message="Content was flagged as sensitive by the model provider", + user_id=user_id, + graph_exec_id=graph_exec_id, + moderation_type="model_provider", + ) + raise ValueError(f"Model execution failed: {e}") from e if isinstance(output, list) and output: output = output[0] diff --git a/autogpt_platform/backend/backend/blocks/generic_webhook/triggers.py b/autogpt_platform/backend/backend/blocks/generic_webhook/triggers.py index fdddeb1437..dc554a0a88 100644 --- a/autogpt_platform/backend/backend/blocks/generic_webhook/triggers.py +++ b/autogpt_platform/backend/backend/blocks/generic_webhook/triggers.py @@ -3,7 +3,8 @@ from backend.sdk import ( BlockCategory, BlockManualWebhookConfig, BlockOutput, - BlockSchema, + BlockSchemaInput, + BlockSchemaOutput, ProviderBuilder, ProviderName, SchemaField, @@ -19,14 +20,14 @@ generic_webhook = ( class GenericWebhookTriggerBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): payload: dict = SchemaField(hidden=True, default_factory=dict) constants: dict = SchemaField( description="The constants to be set when the block is put on the graph", default_factory=dict, ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): payload: dict = SchemaField( description="The complete webhook payload that was received from the generic webhook." ) diff --git a/autogpt_platform/backend/backend/blocks/github/checks.py b/autogpt_platform/backend/backend/blocks/github/checks.py index 9b9aecdf07..02bc8d2400 100644 --- a/autogpt_platform/backend/backend/blocks/github/checks.py +++ b/autogpt_platform/backend/backend/blocks/github/checks.py @@ -3,7 +3,13 @@ from typing import Optional from pydantic import BaseModel -from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema +from backend.data.block import ( + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, +) from backend.data.model import SchemaField from ._api import get_api @@ -39,7 +45,7 @@ class ChecksConclusion(Enum): class GithubCreateCheckRunBlock(Block): """Block for creating a new check run on a GitHub repository.""" - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: GithubCredentialsInput = GithubCredentialsField("repo:status") repo_url: str = SchemaField( description="URL of the GitHub repository", @@ -76,7 +82,7 @@ class GithubCreateCheckRunBlock(Block): default="", ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): class CheckRunResult(BaseModel): id: int html_url: str @@ -211,7 +217,7 @@ class GithubCreateCheckRunBlock(Block): class GithubUpdateCheckRunBlock(Block): """Block for updating an existing check run on a GitHub repository.""" - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: GithubCredentialsInput = GithubCredentialsField("repo:status") repo_url: str = SchemaField( description="URL of the GitHub repository", @@ -239,7 +245,7 @@ class GithubUpdateCheckRunBlock(Block): default=None, ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): class CheckRunResult(BaseModel): id: int html_url: str @@ -249,7 +255,6 @@ class 
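The flux_kontext change above converts provider failures whose message mentions content flagging into a dedicated `ModerationError`, and everything else into a `ValueError`. A condensed sketch of that pattern, reusing the keyword arguments visible in the hunk (the `call_model` coroutine parameter is a stand-in, not an API from the codebase):

```python
from backend.util.exceptions import ModerationError


async def run_with_moderation_check(call_model, *, user_id: str, graph_exec_id: str):
    """Await a provider call, converting content-policy rejections into ModerationError."""
    try:
        return await call_model()
    except Exception as e:
        # The provider surfaces policy rejections in the error message text.
        if "flagged as sensitive" in str(e).lower():
            raise ModerationError(
                message="Content was flagged as sensitive by the model provider",
                user_id=user_id,
                graph_exec_id=graph_exec_id,
                moderation_type="model_provider",
            ) from e
        raise ValueError(f"Model execution failed: {e}") from e
```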
GithubUpdateCheckRunBlock(Block): check_run: CheckRunResult = SchemaField( description="Details of the updated check run" ) - error: str = SchemaField(description="Error message if check run update failed") def __init__(self): super().__init__( diff --git a/autogpt_platform/backend/backend/blocks/github/ci.py b/autogpt_platform/backend/backend/blocks/github/ci.py index 25adc04202..8ba58e389e 100644 --- a/autogpt_platform/backend/backend/blocks/github/ci.py +++ b/autogpt_platform/backend/backend/blocks/github/ci.py @@ -5,7 +5,13 @@ from typing import Optional from typing_extensions import TypedDict -from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema +from backend.data.block import ( + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, +) from backend.data.model import SchemaField from ._api import get_api @@ -37,7 +43,7 @@ class CheckRunConclusion(Enum): class GithubGetCIResultsBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: GithubCredentialsInput = GithubCredentialsField("repo") repo: str = SchemaField( description="GitHub repository", @@ -60,7 +66,7 @@ class GithubGetCIResultsBlock(Block): advanced=True, ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): class CheckRunItem(TypedDict, total=False): id: int name: str @@ -104,7 +110,6 @@ class GithubGetCIResultsBlock(Block): total_checks: int = SchemaField(description="Total number of CI checks") passed_checks: int = SchemaField(description="Number of passed checks") failed_checks: int = SchemaField(description="Number of failed checks") - error: str = SchemaField(description="Error message if the operation failed") def __init__(self): super().__init__( diff --git a/autogpt_platform/backend/backend/blocks/github/example_payloads/discussion.created.json b/autogpt_platform/backend/backend/blocks/github/example_payloads/discussion.created.json new file mode 100644 index 0000000000..6b0d73dda3 --- /dev/null +++ b/autogpt_platform/backend/backend/blocks/github/example_payloads/discussion.created.json @@ -0,0 +1,108 @@ +{ + "action": "created", + "discussion": { + "repository_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT", + "category": { + "id": 12345678, + "node_id": "DIC_kwDOJKSTjM4CXXXX", + "repository_id": 614765452, + "emoji": ":pray:", + "name": "Q&A", + "description": "Ask the community for help", + "created_at": "2023-03-16T09:21:07Z", + "updated_at": "2023-03-16T09:21:07Z", + "slug": "q-a", + "is_answerable": true + }, + "answer_html_url": null, + "answer_chosen_at": null, + "answer_chosen_by": null, + "html_url": "https://github.com/Significant-Gravitas/AutoGPT/discussions/9999", + "id": 5000000001, + "node_id": "D_kwDOJKSTjM4AYYYY", + "number": 9999, + "title": "How do I configure custom blocks?", + "user": { + "login": "curious-user", + "id": 22222222, + "node_id": "MDQ6VXNlcjIyMjIyMjIy", + "avatar_url": "https://avatars.githubusercontent.com/u/22222222?v=4", + "url": "https://api.github.com/users/curious-user", + "html_url": "https://github.com/curious-user", + "type": "User", + "site_admin": false + }, + "state": "open", + "state_reason": null, + "locked": false, + "comments": 0, + "created_at": "2024-12-01T17:00:00Z", + "updated_at": "2024-12-01T17:00:00Z", + "author_association": "NONE", + "active_lock_reason": null, + "body": "## Question\n\nI'm trying to create a custom block for my specific use case. I've read the documentation but I'm not sure how to:\n\n1. Define the input/output schema\n2. 
Handle authentication\n3. Test my block locally\n\nCan someone point me to examples or provide guidance?\n\n## Environment\n\n- AutoGPT Platform version: latest\n- Python: 3.11", + "reactions": { + "url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/discussions/9999/reactions", + "total_count": 0, + "+1": 0, + "-1": 0, + "laugh": 0, + "hooray": 0, + "confused": 0, + "heart": 0, + "rocket": 0, + "eyes": 0 + }, + "timeline_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/discussions/9999/timeline" + }, + "repository": { + "id": 614765452, + "node_id": "R_kgDOJKSTjA", + "name": "AutoGPT", + "full_name": "Significant-Gravitas/AutoGPT", + "private": false, + "owner": { + "login": "Significant-Gravitas", + "id": 130738209, + "node_id": "O_kgDOB8roIQ", + "avatar_url": "https://avatars.githubusercontent.com/u/130738209?v=4", + "url": "https://api.github.com/users/Significant-Gravitas", + "html_url": "https://github.com/Significant-Gravitas", + "type": "Organization", + "site_admin": false + }, + "html_url": "https://github.com/Significant-Gravitas/AutoGPT", + "description": "AutoGPT is the vision of accessible AI for everyone, to use and to build on.", + "fork": false, + "url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT", + "created_at": "2023-03-16T09:21:07Z", + "updated_at": "2024-12-01T17:00:00Z", + "pushed_at": "2024-12-01T12:00:00Z", + "stargazers_count": 170000, + "watchers_count": 170000, + "language": "Python", + "has_discussions": true, + "forks_count": 45000, + "visibility": "public", + "default_branch": "master" + }, + "organization": { + "login": "Significant-Gravitas", + "id": 130738209, + "node_id": "O_kgDOB8roIQ", + "url": "https://api.github.com/orgs/Significant-Gravitas", + "avatar_url": "https://avatars.githubusercontent.com/u/130738209?v=4", + "description": "" + }, + "sender": { + "login": "curious-user", + "id": 22222222, + "node_id": "MDQ6VXNlcjIyMjIyMjIy", + "avatar_url": "https://avatars.githubusercontent.com/u/22222222?v=4", + "gravatar_id": "", + "url": "https://api.github.com/users/curious-user", + "html_url": "https://github.com/curious-user", + "type": "User", + "site_admin": false + } +} diff --git a/autogpt_platform/backend/backend/blocks/github/example_payloads/issues.opened.json b/autogpt_platform/backend/backend/blocks/github/example_payloads/issues.opened.json new file mode 100644 index 0000000000..078d5da0be --- /dev/null +++ b/autogpt_platform/backend/backend/blocks/github/example_payloads/issues.opened.json @@ -0,0 +1,112 @@ +{ + "action": "opened", + "issue": { + "url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/issues/12345", + "repository_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT", + "labels_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/issues/12345/labels{/name}", + "comments_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/issues/12345/comments", + "events_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/issues/12345/events", + "html_url": "https://github.com/Significant-Gravitas/AutoGPT/issues/12345", + "id": 2000000001, + "node_id": "I_kwDOJKSTjM5wXXXX", + "number": 12345, + "title": "Bug: Application crashes when processing large files", + "user": { + "login": "bug-reporter", + "id": 11111111, + "node_id": "MDQ6VXNlcjExMTExMTEx", + "avatar_url": "https://avatars.githubusercontent.com/u/11111111?v=4", + "url": "https://api.github.com/users/bug-reporter", + "html_url": "https://github.com/bug-reporter", + "type": "User", 
+ "site_admin": false + }, + "labels": [ + { + "id": 5272676214, + "node_id": "LA_kwDOJKSTjM8AAAABOkandg", + "url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/labels/bug", + "name": "bug", + "color": "d73a4a", + "default": true, + "description": "Something isn't working" + } + ], + "state": "open", + "locked": false, + "assignee": null, + "assignees": [], + "milestone": null, + "comments": 0, + "created_at": "2024-12-01T16:00:00Z", + "updated_at": "2024-12-01T16:00:00Z", + "closed_at": null, + "author_association": "NONE", + "active_lock_reason": null, + "body": "## Description\n\nWhen I try to process a file larger than 100MB, the application crashes with an out of memory error.\n\n## Steps to Reproduce\n\n1. Open the application\n2. Select a file larger than 100MB\n3. Click 'Process'\n4. Application crashes\n\n## Expected Behavior\n\nThe application should handle large files gracefully.\n\n## Environment\n\n- OS: Ubuntu 22.04\n- Python: 3.11\n- AutoGPT Version: 1.0.0", + "reactions": { + "url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/issues/12345/reactions", + "total_count": 0, + "+1": 0, + "-1": 0, + "laugh": 0, + "hooray": 0, + "confused": 0, + "heart": 0, + "rocket": 0, + "eyes": 0 + }, + "timeline_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/issues/12345/timeline", + "state_reason": null + }, + "repository": { + "id": 614765452, + "node_id": "R_kgDOJKSTjA", + "name": "AutoGPT", + "full_name": "Significant-Gravitas/AutoGPT", + "private": false, + "owner": { + "login": "Significant-Gravitas", + "id": 130738209, + "node_id": "O_kgDOB8roIQ", + "avatar_url": "https://avatars.githubusercontent.com/u/130738209?v=4", + "url": "https://api.github.com/users/Significant-Gravitas", + "html_url": "https://github.com/Significant-Gravitas", + "type": "Organization", + "site_admin": false + }, + "html_url": "https://github.com/Significant-Gravitas/AutoGPT", + "description": "AutoGPT is the vision of accessible AI for everyone, to use and to build on.", + "fork": false, + "url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT", + "created_at": "2023-03-16T09:21:07Z", + "updated_at": "2024-12-01T16:00:00Z", + "pushed_at": "2024-12-01T12:00:00Z", + "stargazers_count": 170000, + "watchers_count": 170000, + "language": "Python", + "forks_count": 45000, + "open_issues_count": 190, + "visibility": "public", + "default_branch": "master" + }, + "organization": { + "login": "Significant-Gravitas", + "id": 130738209, + "node_id": "O_kgDOB8roIQ", + "url": "https://api.github.com/orgs/Significant-Gravitas", + "avatar_url": "https://avatars.githubusercontent.com/u/130738209?v=4", + "description": "" + }, + "sender": { + "login": "bug-reporter", + "id": 11111111, + "node_id": "MDQ6VXNlcjExMTExMTEx", + "avatar_url": "https://avatars.githubusercontent.com/u/11111111?v=4", + "gravatar_id": "", + "url": "https://api.github.com/users/bug-reporter", + "html_url": "https://github.com/bug-reporter", + "type": "User", + "site_admin": false + } +} diff --git a/autogpt_platform/backend/backend/blocks/github/example_payloads/release.published.json b/autogpt_platform/backend/backend/blocks/github/example_payloads/release.published.json new file mode 100644 index 0000000000..eac8461e59 --- /dev/null +++ b/autogpt_platform/backend/backend/blocks/github/example_payloads/release.published.json @@ -0,0 +1,97 @@ +{ + "action": "published", + "release": { + "url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/releases/123456789", + "assets_url": 
"https://api.github.com/repos/Significant-Gravitas/AutoGPT/releases/123456789/assets", + "upload_url": "https://uploads.github.com/repos/Significant-Gravitas/AutoGPT/releases/123456789/assets{?name,label}", + "html_url": "https://github.com/Significant-Gravitas/AutoGPT/releases/tag/v1.0.0", + "id": 123456789, + "author": { + "login": "ntindle", + "id": 12345678, + "node_id": "MDQ6VXNlcjEyMzQ1Njc4", + "avatar_url": "https://avatars.githubusercontent.com/u/12345678?v=4", + "gravatar_id": "", + "url": "https://api.github.com/users/ntindle", + "html_url": "https://github.com/ntindle", + "type": "User", + "site_admin": false + }, + "node_id": "RE_kwDOJKSTjM4HWwAA", + "tag_name": "v1.0.0", + "target_commitish": "master", + "name": "AutoGPT Platform v1.0.0", + "draft": false, + "prerelease": false, + "created_at": "2024-12-01T10:00:00Z", + "published_at": "2024-12-01T12:00:00Z", + "assets": [ + { + "url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/releases/assets/987654321", + "id": 987654321, + "node_id": "RA_kwDOJKSTjM4HWwBB", + "name": "autogpt-v1.0.0.zip", + "label": "Release Package", + "content_type": "application/zip", + "state": "uploaded", + "size": 52428800, + "download_count": 0, + "created_at": "2024-12-01T11:30:00Z", + "updated_at": "2024-12-01T11:35:00Z", + "browser_download_url": "https://github.com/Significant-Gravitas/AutoGPT/releases/download/v1.0.0/autogpt-v1.0.0.zip" + } + ], + "tarball_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/tarball/v1.0.0", + "zipball_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/zipball/v1.0.0", + "body": "## What's New\n\n- Feature 1: Amazing new capability\n- Feature 2: Performance improvements\n- Bug fixes and stability improvements\n\n## Breaking Changes\n\nNone\n\n## Contributors\n\nThanks to all our contributors!" 
+ }, + "repository": { + "id": 614765452, + "node_id": "R_kgDOJKSTjA", + "name": "AutoGPT", + "full_name": "Significant-Gravitas/AutoGPT", + "private": false, + "owner": { + "login": "Significant-Gravitas", + "id": 130738209, + "node_id": "O_kgDOB8roIQ", + "avatar_url": "https://avatars.githubusercontent.com/u/130738209?v=4", + "url": "https://api.github.com/users/Significant-Gravitas", + "html_url": "https://github.com/Significant-Gravitas", + "type": "Organization", + "site_admin": false + }, + "html_url": "https://github.com/Significant-Gravitas/AutoGPT", + "description": "AutoGPT is the vision of accessible AI for everyone, to use and to build on.", + "fork": false, + "url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT", + "created_at": "2023-03-16T09:21:07Z", + "updated_at": "2024-12-01T12:00:00Z", + "pushed_at": "2024-12-01T12:00:00Z", + "stargazers_count": 170000, + "watchers_count": 170000, + "language": "Python", + "forks_count": 45000, + "visibility": "public", + "default_branch": "master" + }, + "organization": { + "login": "Significant-Gravitas", + "id": 130738209, + "node_id": "O_kgDOB8roIQ", + "url": "https://api.github.com/orgs/Significant-Gravitas", + "avatar_url": "https://avatars.githubusercontent.com/u/130738209?v=4", + "description": "" + }, + "sender": { + "login": "ntindle", + "id": 12345678, + "node_id": "MDQ6VXNlcjEyMzQ1Njc4", + "avatar_url": "https://avatars.githubusercontent.com/u/12345678?v=4", + "gravatar_id": "", + "url": "https://api.github.com/users/ntindle", + "html_url": "https://github.com/ntindle", + "type": "User", + "site_admin": false + } +} diff --git a/autogpt_platform/backend/backend/blocks/github/example_payloads/star.created.json b/autogpt_platform/backend/backend/blocks/github/example_payloads/star.created.json new file mode 100644 index 0000000000..cb2dfd7522 --- /dev/null +++ b/autogpt_platform/backend/backend/blocks/github/example_payloads/star.created.json @@ -0,0 +1,53 @@ +{ + "action": "created", + "starred_at": "2024-12-01T15:30:00Z", + "repository": { + "id": 614765452, + "node_id": "R_kgDOJKSTjA", + "name": "AutoGPT", + "full_name": "Significant-Gravitas/AutoGPT", + "private": false, + "owner": { + "login": "Significant-Gravitas", + "id": 130738209, + "node_id": "O_kgDOB8roIQ", + "avatar_url": "https://avatars.githubusercontent.com/u/130738209?v=4", + "url": "https://api.github.com/users/Significant-Gravitas", + "html_url": "https://github.com/Significant-Gravitas", + "type": "Organization", + "site_admin": false + }, + "html_url": "https://github.com/Significant-Gravitas/AutoGPT", + "description": "AutoGPT is the vision of accessible AI for everyone, to use and to build on.", + "fork": false, + "url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT", + "created_at": "2023-03-16T09:21:07Z", + "updated_at": "2024-12-01T15:30:00Z", + "pushed_at": "2024-12-01T12:00:00Z", + "stargazers_count": 170001, + "watchers_count": 170001, + "language": "Python", + "forks_count": 45000, + "visibility": "public", + "default_branch": "master" + }, + "organization": { + "login": "Significant-Gravitas", + "id": 130738209, + "node_id": "O_kgDOB8roIQ", + "url": "https://api.github.com/orgs/Significant-Gravitas", + "avatar_url": "https://avatars.githubusercontent.com/u/130738209?v=4", + "description": "" + }, + "sender": { + "login": "awesome-contributor", + "id": 98765432, + "node_id": "MDQ6VXNlcjk4NzY1NDMy", + "avatar_url": "https://avatars.githubusercontent.com/u/98765432?v=4", + "gravatar_id": "", + "url": 
"https://api.github.com/users/awesome-contributor", + "html_url": "https://github.com/awesome-contributor", + "type": "User", + "site_admin": false + } +} diff --git a/autogpt_platform/backend/backend/blocks/github/issues.py b/autogpt_platform/backend/backend/blocks/github/issues.py index 29766cabc7..b8187ac1f5 100644 --- a/autogpt_platform/backend/backend/blocks/github/issues.py +++ b/autogpt_platform/backend/backend/blocks/github/issues.py @@ -3,7 +3,13 @@ from urllib.parse import urlparse from typing_extensions import TypedDict -from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema +from backend.data.block import ( + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, +) from backend.data.model import SchemaField from ._api import convert_comment_url_to_api_endpoint, get_api @@ -24,7 +30,7 @@ def is_github_url(url: str) -> bool: # --8<-- [start:GithubCommentBlockExample] class GithubCommentBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: GithubCredentialsInput = GithubCredentialsField("repo") issue_url: str = SchemaField( description="URL of the GitHub issue or pull request", @@ -35,7 +41,7 @@ class GithubCommentBlock(Block): placeholder="Enter your comment", ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): id: int = SchemaField(description="ID of the created comment") url: str = SchemaField(description="URL to the comment on GitHub") error: str = SchemaField( @@ -112,7 +118,7 @@ class GithubCommentBlock(Block): class GithubUpdateCommentBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: GithubCredentialsInput = GithubCredentialsField("repo") comment_url: str = SchemaField( description="URL of the GitHub comment", @@ -135,7 +141,7 @@ class GithubUpdateCommentBlock(Block): placeholder="Enter your comment", ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): id: int = SchemaField(description="ID of the updated comment") url: str = SchemaField(description="URL to the comment on GitHub") error: str = SchemaField( @@ -219,14 +225,14 @@ class GithubUpdateCommentBlock(Block): class GithubListCommentsBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: GithubCredentialsInput = GithubCredentialsField("repo") issue_url: str = SchemaField( description="URL of the GitHub issue or pull request", placeholder="https://github.com/owner/repo/issues/1", ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): class CommentItem(TypedDict): id: int body: str @@ -239,7 +245,6 @@ class GithubListCommentsBlock(Block): comments: list[CommentItem] = SchemaField( description="List of comments with their ID, body, user, and URL" ) - error: str = SchemaField(description="Error message if listing comments failed") def __init__(self): super().__init__( @@ -335,7 +340,7 @@ class GithubListCommentsBlock(Block): class GithubMakeIssueBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: GithubCredentialsInput = GithubCredentialsField("repo") repo_url: str = SchemaField( description="URL of the GitHub repository", @@ -348,7 +353,7 @@ class GithubMakeIssueBlock(Block): description="Body of the issue", placeholder="Enter the issue body" ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): number: int = SchemaField(description="Number of the created issue") url: str = SchemaField(description="URL of the created issue") error: str = SchemaField( @@ -410,14 +415,14 @@ class 
GithubMakeIssueBlock(Block): class GithubReadIssueBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: GithubCredentialsInput = GithubCredentialsField("repo") issue_url: str = SchemaField( description="URL of the GitHub issue", placeholder="https://github.com/owner/repo/issues/1", ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): title: str = SchemaField(description="Title of the issue") body: str = SchemaField(description="Body of the issue") user: str = SchemaField(description="User who created the issue") @@ -483,14 +488,14 @@ class GithubReadIssueBlock(Block): class GithubListIssuesBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: GithubCredentialsInput = GithubCredentialsField("repo") repo_url: str = SchemaField( description="URL of the GitHub repository", placeholder="https://github.com/owner/repo", ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): class IssueItem(TypedDict): title: str url: str @@ -501,7 +506,6 @@ class GithubListIssuesBlock(Block): issues: list[IssueItem] = SchemaField( description="List of issues with their title and URL" ) - error: str = SchemaField(description="Error message if listing issues failed") def __init__(self): super().__init__( @@ -573,7 +577,7 @@ class GithubListIssuesBlock(Block): class GithubAddLabelBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: GithubCredentialsInput = GithubCredentialsField("repo") issue_url: str = SchemaField( description="URL of the GitHub issue or pull request", @@ -584,7 +588,7 @@ class GithubAddLabelBlock(Block): placeholder="Enter the label", ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): status: str = SchemaField(description="Status of the label addition operation") error: str = SchemaField( description="Error message if the label addition failed" @@ -633,7 +637,7 @@ class GithubAddLabelBlock(Block): class GithubRemoveLabelBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: GithubCredentialsInput = GithubCredentialsField("repo") issue_url: str = SchemaField( description="URL of the GitHub issue or pull request", @@ -644,7 +648,7 @@ class GithubRemoveLabelBlock(Block): placeholder="Enter the label", ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): status: str = SchemaField(description="Status of the label removal operation") error: str = SchemaField( description="Error message if the label removal failed" @@ -694,7 +698,7 @@ class GithubRemoveLabelBlock(Block): class GithubAssignIssueBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: GithubCredentialsInput = GithubCredentialsField("repo") issue_url: str = SchemaField( description="URL of the GitHub issue", @@ -705,7 +709,7 @@ class GithubAssignIssueBlock(Block): placeholder="Enter the username", ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): status: str = SchemaField( description="Status of the issue assignment operation" ) @@ -760,7 +764,7 @@ class GithubAssignIssueBlock(Block): class GithubUnassignIssueBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: GithubCredentialsInput = GithubCredentialsField("repo") issue_url: str = SchemaField( description="URL of the GitHub issue", @@ -771,7 +775,7 @@ class GithubUnassignIssueBlock(Block): placeholder="Enter the username", ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): status: str = SchemaField( 
description="Status of the issue unassignment operation" ) diff --git a/autogpt_platform/backend/backend/blocks/github/pull_requests.py b/autogpt_platform/backend/backend/blocks/github/pull_requests.py index 90370f8166..9049037716 100644 --- a/autogpt_platform/backend/backend/blocks/github/pull_requests.py +++ b/autogpt_platform/backend/backend/blocks/github/pull_requests.py @@ -2,7 +2,13 @@ import re from typing_extensions import TypedDict -from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema +from backend.data.block import ( + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, +) from backend.data.model import SchemaField from ._api import get_api @@ -16,14 +22,14 @@ from ._auth import ( class GithubListPullRequestsBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: GithubCredentialsInput = GithubCredentialsField("repo") repo_url: str = SchemaField( description="URL of the GitHub repository", placeholder="https://github.com/owner/repo", ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): class PRItem(TypedDict): title: str url: str @@ -108,7 +114,7 @@ class GithubListPullRequestsBlock(Block): class GithubMakePullRequestBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: GithubCredentialsInput = GithubCredentialsField("repo") repo_url: str = SchemaField( description="URL of the GitHub repository", @@ -135,7 +141,7 @@ class GithubMakePullRequestBlock(Block): placeholder="Enter the base branch", ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): number: int = SchemaField(description="Number of the created pull request") url: str = SchemaField(description="URL of the created pull request") error: str = SchemaField( @@ -209,7 +215,7 @@ class GithubMakePullRequestBlock(Block): class GithubReadPullRequestBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: GithubCredentialsInput = GithubCredentialsField("repo") pr_url: str = SchemaField( description="URL of the GitHub pull request", @@ -221,7 +227,7 @@ class GithubReadPullRequestBlock(Block): advanced=False, ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): title: str = SchemaField(description="Title of the pull request") body: str = SchemaField(description="Body of the pull request") author: str = SchemaField(description="User who created the pull request") @@ -325,7 +331,7 @@ class GithubReadPullRequestBlock(Block): class GithubAssignPRReviewerBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: GithubCredentialsInput = GithubCredentialsField("repo") pr_url: str = SchemaField( description="URL of the GitHub pull request", @@ -336,7 +342,7 @@ class GithubAssignPRReviewerBlock(Block): placeholder="Enter the reviewer's username", ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): status: str = SchemaField( description="Status of the reviewer assignment operation" ) @@ -392,7 +398,7 @@ class GithubAssignPRReviewerBlock(Block): class GithubUnassignPRReviewerBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: GithubCredentialsInput = GithubCredentialsField("repo") pr_url: str = SchemaField( description="URL of the GitHub pull request", @@ -403,7 +409,7 @@ class GithubUnassignPRReviewerBlock(Block): placeholder="Enter the reviewer's username", ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): status: str = SchemaField( description="Status of the 
reviewer unassignment operation" ) @@ -459,14 +465,14 @@ class GithubUnassignPRReviewerBlock(Block): class GithubListPRReviewersBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: GithubCredentialsInput = GithubCredentialsField("repo") pr_url: str = SchemaField( description="URL of the GitHub pull request", placeholder="https://github.com/owner/repo/pull/1", ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): class ReviewerItem(TypedDict): username: str url: str diff --git a/autogpt_platform/backend/backend/blocks/github/repo.py b/autogpt_platform/backend/backend/blocks/github/repo.py index 08c1d038d3..78ce26bfad 100644 --- a/autogpt_platform/backend/backend/blocks/github/repo.py +++ b/autogpt_platform/backend/backend/blocks/github/repo.py @@ -2,7 +2,13 @@ import base64 from typing_extensions import TypedDict -from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema +from backend.data.block import ( + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, +) from backend.data.model import SchemaField from ._api import get_api @@ -16,14 +22,14 @@ from ._auth import ( class GithubListTagsBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: GithubCredentialsInput = GithubCredentialsField("repo") repo_url: str = SchemaField( description="URL of the GitHub repository", placeholder="https://github.com/owner/repo", ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): class TagItem(TypedDict): name: str url: str @@ -34,7 +40,6 @@ class GithubListTagsBlock(Block): tags: list[TagItem] = SchemaField( description="List of tags with their name and file tree browser URL" ) - error: str = SchemaField(description="Error message if listing tags failed") def __init__(self): super().__init__( @@ -111,14 +116,14 @@ class GithubListTagsBlock(Block): class GithubListBranchesBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: GithubCredentialsInput = GithubCredentialsField("repo") repo_url: str = SchemaField( description="URL of the GitHub repository", placeholder="https://github.com/owner/repo", ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): class BranchItem(TypedDict): name: str url: str @@ -130,7 +135,6 @@ class GithubListBranchesBlock(Block): branches: list[BranchItem] = SchemaField( description="List of branches with their name and file tree browser URL" ) - error: str = SchemaField(description="Error message if listing branches failed") def __init__(self): super().__init__( @@ -207,7 +211,7 @@ class GithubListBranchesBlock(Block): class GithubListDiscussionsBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: GithubCredentialsInput = GithubCredentialsField("repo") repo_url: str = SchemaField( description="URL of the GitHub repository", @@ -217,7 +221,7 @@ class GithubListDiscussionsBlock(Block): description="Number of discussions to fetch", default=5 ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): class DiscussionItem(TypedDict): title: str url: str @@ -323,14 +327,14 @@ class GithubListDiscussionsBlock(Block): class GithubListReleasesBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: GithubCredentialsInput = GithubCredentialsField("repo") repo_url: str = SchemaField( description="URL of the GitHub repository", placeholder="https://github.com/owner/repo", ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): 
class ReleaseItem(TypedDict): name: str url: str @@ -342,7 +346,6 @@ class GithubListReleasesBlock(Block): releases: list[ReleaseItem] = SchemaField( description="List of releases with their name and file tree browser URL" ) - error: str = SchemaField(description="Error message if listing releases failed") def __init__(self): super().__init__( @@ -414,7 +417,7 @@ class GithubListReleasesBlock(Block): class GithubReadFileBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: GithubCredentialsInput = GithubCredentialsField("repo") repo_url: str = SchemaField( description="URL of the GitHub repository", @@ -430,7 +433,7 @@ class GithubReadFileBlock(Block): default="master", ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): text_content: str = SchemaField( description="Content of the file (decoded as UTF-8 text)" ) @@ -438,7 +441,6 @@ class GithubReadFileBlock(Block): description="Raw base64-encoded content of the file" ) size: int = SchemaField(description="The size of the file (in bytes)") - error: str = SchemaField(description="Error message if the file reading failed") def __init__(self): super().__init__( @@ -501,7 +503,7 @@ class GithubReadFileBlock(Block): class GithubReadFolderBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: GithubCredentialsInput = GithubCredentialsField("repo") repo_url: str = SchemaField( description="URL of the GitHub repository", @@ -517,7 +519,7 @@ class GithubReadFolderBlock(Block): default="master", ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): class DirEntry(TypedDict): name: str path: str @@ -625,7 +627,7 @@ class GithubReadFolderBlock(Block): class GithubMakeBranchBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: GithubCredentialsInput = GithubCredentialsField("repo") repo_url: str = SchemaField( description="URL of the GitHub repository", @@ -640,7 +642,7 @@ class GithubMakeBranchBlock(Block): placeholder="source_branch_name", ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): status: str = SchemaField(description="Status of the branch creation operation") error: str = SchemaField( description="Error message if the branch creation failed" @@ -705,7 +707,7 @@ class GithubMakeBranchBlock(Block): class GithubDeleteBranchBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: GithubCredentialsInput = GithubCredentialsField("repo") repo_url: str = SchemaField( description="URL of the GitHub repository", @@ -716,7 +718,7 @@ class GithubDeleteBranchBlock(Block): placeholder="branch_name", ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): status: str = SchemaField(description="Status of the branch deletion operation") error: str = SchemaField( description="Error message if the branch deletion failed" @@ -766,7 +768,7 @@ class GithubDeleteBranchBlock(Block): class GithubCreateFileBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: GithubCredentialsInput = GithubCredentialsField("repo") repo_url: str = SchemaField( description="URL of the GitHub repository", @@ -789,7 +791,7 @@ class GithubCreateFileBlock(Block): default="Create new file", ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): url: str = SchemaField(description="URL of the created file") sha: str = SchemaField(description="SHA of the commit") error: str = SchemaField( @@ -868,7 +870,7 @@ class GithubCreateFileBlock(Block): class 
GithubUpdateFileBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: GithubCredentialsInput = GithubCredentialsField("repo") repo_url: str = SchemaField( description="URL of the GitHub repository", @@ -891,10 +893,9 @@ class GithubUpdateFileBlock(Block): default="Update file", ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): url: str = SchemaField(description="URL of the updated file") sha: str = SchemaField(description="SHA of the commit") - error: str = SchemaField(description="Error message if the file update failed") def __init__(self): super().__init__( @@ -974,7 +975,7 @@ class GithubUpdateFileBlock(Block): class GithubCreateRepositoryBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: GithubCredentialsInput = GithubCredentialsField("repo") name: str = SchemaField( description="Name of the repository to create", @@ -998,7 +999,7 @@ class GithubCreateRepositoryBlock(Block): default="", ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): url: str = SchemaField(description="URL of the created repository") clone_url: str = SchemaField(description="Git clone URL of the repository") error: str = SchemaField( @@ -1077,14 +1078,14 @@ class GithubCreateRepositoryBlock(Block): class GithubListStargazersBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: GithubCredentialsInput = GithubCredentialsField("repo") repo_url: str = SchemaField( description="URL of the GitHub repository", placeholder="https://github.com/owner/repo", ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): class StargazerItem(TypedDict): username: str url: str diff --git a/autogpt_platform/backend/backend/blocks/github/reviews.py b/autogpt_platform/backend/backend/blocks/github/reviews.py index 2b909da8ff..11718d1402 100644 --- a/autogpt_platform/backend/backend/blocks/github/reviews.py +++ b/autogpt_platform/backend/backend/blocks/github/reviews.py @@ -4,7 +4,13 @@ from typing import Any, List, Optional from typing_extensions import TypedDict -from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema +from backend.data.block import ( + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, +) from backend.data.model import SchemaField from ._api import get_api @@ -26,7 +32,7 @@ class ReviewEvent(Enum): class GithubCreatePRReviewBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): class ReviewComment(TypedDict, total=False): path: str position: Optional[int] @@ -61,7 +67,7 @@ class GithubCreatePRReviewBlock(Block): advanced=True, ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): review_id: int = SchemaField(description="ID of the created review") state: str = SchemaField( description="State of the review (e.g., PENDING, COMMENTED, APPROVED, CHANGES_REQUESTED)" @@ -197,7 +203,7 @@ class GithubCreatePRReviewBlock(Block): class GithubListPRReviewsBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: GithubCredentialsInput = GithubCredentialsField("repo") repo: str = SchemaField( description="GitHub repository", @@ -208,7 +214,7 @@ class GithubListPRReviewsBlock(Block): placeholder="123", ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): class ReviewItem(TypedDict): id: int user: str @@ -223,7 +229,6 @@ class GithubListPRReviewsBlock(Block): reviews: list[ReviewItem] = SchemaField( description="List of all reviews on the pull request" ) - error: str 
= SchemaField(description="Error message if listing reviews failed") def __init__(self): super().__init__( @@ -317,7 +322,7 @@ class GithubListPRReviewsBlock(Block): class GithubSubmitPendingReviewBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: GithubCredentialsInput = GithubCredentialsField("repo") repo: str = SchemaField( description="GitHub repository", @@ -336,7 +341,7 @@ class GithubSubmitPendingReviewBlock(Block): default=ReviewEvent.COMMENT, ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): state: str = SchemaField(description="State of the submitted review") html_url: str = SchemaField(description="URL of the submitted review") error: str = SchemaField( @@ -415,7 +420,7 @@ class GithubSubmitPendingReviewBlock(Block): class GithubResolveReviewDiscussionBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: GithubCredentialsInput = GithubCredentialsField("repo") repo: str = SchemaField( description="GitHub repository", @@ -434,9 +439,8 @@ class GithubResolveReviewDiscussionBlock(Block): default=True, ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): success: bool = SchemaField(description="Whether the operation was successful") - error: str = SchemaField(description="Error message if the operation failed") def __init__(self): super().__init__( @@ -579,7 +583,7 @@ class GithubResolveReviewDiscussionBlock(Block): class GithubGetPRReviewCommentsBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: GithubCredentialsInput = GithubCredentialsField("repo") repo: str = SchemaField( description="GitHub repository", @@ -596,7 +600,7 @@ class GithubGetPRReviewCommentsBlock(Block): advanced=True, ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): class CommentItem(TypedDict): id: int user: str @@ -616,7 +620,6 @@ class GithubGetPRReviewCommentsBlock(Block): comments: list[CommentItem] = SchemaField( description="List of all review comments on the pull request" ) - error: str = SchemaField(description="Error message if getting comments failed") def __init__(self): super().__init__( @@ -744,7 +747,7 @@ class GithubGetPRReviewCommentsBlock(Block): class GithubCreateCommentObjectBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): path: str = SchemaField( description="The file path to comment on", placeholder="src/main.py", @@ -781,7 +784,7 @@ class GithubCreateCommentObjectBlock(Block): advanced=True, ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): comment_object: dict = SchemaField( description="The comment object formatted for GitHub API" ) diff --git a/autogpt_platform/backend/backend/blocks/github/statuses.py b/autogpt_platform/backend/backend/blocks/github/statuses.py index a7e2b006aa..42826a8a51 100644 --- a/autogpt_platform/backend/backend/blocks/github/statuses.py +++ b/autogpt_platform/backend/backend/blocks/github/statuses.py @@ -3,7 +3,13 @@ from typing import Optional from pydantic import BaseModel -from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema +from backend.data.block import ( + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, +) from backend.data.model import SchemaField from ._api import get_api @@ -26,7 +32,7 @@ class StatusState(Enum): class GithubCreateStatusBlock(Block): """Block for creating a commit status on a GitHub repository.""" - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: 
GithubFineGrainedAPICredentialsInput = ( GithubFineGrainedAPICredentialsField("repo:status") ) @@ -54,7 +60,7 @@ class GithubCreateStatusBlock(Block): advanced=False, ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): class StatusResult(BaseModel): id: int url: str @@ -66,7 +72,6 @@ class GithubCreateStatusBlock(Block): updated_at: str status: StatusResult = SchemaField(description="Details of the created status") - error: str = SchemaField(description="Error message if status creation failed") def __init__(self): super().__init__( diff --git a/autogpt_platform/backend/backend/blocks/github/triggers.py b/autogpt_platform/backend/backend/blocks/github/triggers.py index 83b1689b89..2fc568a468 100644 --- a/autogpt_platform/backend/backend/blocks/github/triggers.py +++ b/autogpt_platform/backend/backend/blocks/github/triggers.py @@ -8,7 +8,8 @@ from backend.data.block import ( Block, BlockCategory, BlockOutput, - BlockSchema, + BlockSchemaInput, + BlockSchemaOutput, BlockWebhookConfig, ) from backend.data.model import SchemaField @@ -26,7 +27,7 @@ logger = logging.getLogger(__name__) # --8<-- [start:GithubTriggerExample] class GitHubTriggerBase: - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: GithubCredentialsInput = GithubCredentialsField("repo") repo: str = SchemaField( description=( @@ -40,7 +41,7 @@ class GitHubTriggerBase: payload: dict = SchemaField(hidden=True, default_factory=dict) # --8<-- [end:example-payload-field] - class Output(BlockSchema): + class Output(BlockSchemaOutput): payload: dict = SchemaField( description="The complete webhook payload that was received from GitHub. " "Includes information about the affected resource (e.g. pull request), " @@ -158,3 +159,391 @@ class GithubPullRequestTriggerBlock(GitHubTriggerBase, Block): # --8<-- [end:GithubTriggerExample] + + +class GithubStarTriggerBlock(GitHubTriggerBase, Block): + """Trigger block for GitHub star events - useful for milestone celebrations.""" + + EXAMPLE_PAYLOAD_FILE = ( + Path(__file__).parent / "example_payloads" / "star.created.json" + ) + + class Input(GitHubTriggerBase.Input): + class EventsFilter(BaseModel): + """ + https://docs.github.com/en/webhooks/webhook-events-and-payloads#star + """ + + created: bool = False + deleted: bool = False + + events: EventsFilter = SchemaField( + title="Events", description="The star events to subscribe to" + ) + + class Output(GitHubTriggerBase.Output): + event: str = SchemaField( + description="The star event that triggered the webhook ('created' or 'deleted')" + ) + starred_at: str = SchemaField( + description="ISO timestamp when the repo was starred (empty if deleted)" + ) + stargazers_count: int = SchemaField( + description="Current number of stars on the repository" + ) + repository_name: str = SchemaField( + description="Full name of the repository (owner/repo)" + ) + repository_url: str = SchemaField(description="URL to the repository") + + def __init__(self): + from backend.integrations.webhooks.github import GithubWebhookType + + example_payload = json.loads( + self.EXAMPLE_PAYLOAD_FILE.read_text(encoding="utf-8") + ) + + super().__init__( + id="551e0a35-100b-49b7-89b8-3031322239b6", + description="This block triggers on GitHub star events. 
" + "Useful for celebrating milestones (e.g., 1k, 10k stars) or tracking engagement.", + categories={BlockCategory.DEVELOPER_TOOLS, BlockCategory.INPUT}, + input_schema=GithubStarTriggerBlock.Input, + output_schema=GithubStarTriggerBlock.Output, + webhook_config=BlockWebhookConfig( + provider=ProviderName.GITHUB, + webhook_type=GithubWebhookType.REPO, + resource_format="{repo}", + event_filter_input="events", + event_format="star.{event}", + ), + test_input={ + "repo": "Significant-Gravitas/AutoGPT", + "events": {"created": True}, + "credentials": TEST_CREDENTIALS_INPUT, + "payload": example_payload, + }, + test_credentials=TEST_CREDENTIALS, + test_output=[ + ("payload", example_payload), + ("triggered_by_user", example_payload["sender"]), + ("event", example_payload["action"]), + ("starred_at", example_payload.get("starred_at", "")), + ("stargazers_count", example_payload["repository"]["stargazers_count"]), + ("repository_name", example_payload["repository"]["full_name"]), + ("repository_url", example_payload["repository"]["html_url"]), + ], + ) + + async def run(self, input_data: Input, **kwargs) -> BlockOutput: # type: ignore + async for name, value in super().run(input_data, **kwargs): + yield name, value + yield "event", input_data.payload["action"] + yield "starred_at", input_data.payload.get("starred_at", "") + yield "stargazers_count", input_data.payload["repository"]["stargazers_count"] + yield "repository_name", input_data.payload["repository"]["full_name"] + yield "repository_url", input_data.payload["repository"]["html_url"] + + +class GithubReleaseTriggerBlock(GitHubTriggerBase, Block): + """Trigger block for GitHub release events - ideal for announcing new versions.""" + + EXAMPLE_PAYLOAD_FILE = ( + Path(__file__).parent / "example_payloads" / "release.published.json" + ) + + class Input(GitHubTriggerBase.Input): + class EventsFilter(BaseModel): + """ + https://docs.github.com/en/webhooks/webhook-events-and-payloads#release + """ + + published: bool = False + unpublished: bool = False + created: bool = False + edited: bool = False + deleted: bool = False + prereleased: bool = False + released: bool = False + + events: EventsFilter = SchemaField( + title="Events", description="The release events to subscribe to" + ) + + class Output(GitHubTriggerBase.Output): + event: str = SchemaField( + description="The release event that triggered the webhook (e.g., 'published')" + ) + release: dict = SchemaField(description="The full release object") + release_url: str = SchemaField(description="URL to the release page") + tag_name: str = SchemaField(description="The release tag name (e.g., 'v1.0.0')") + release_name: str = SchemaField(description="Human-readable release name") + body: str = SchemaField(description="Release notes/description") + prerelease: bool = SchemaField(description="Whether this is a prerelease") + draft: bool = SchemaField(description="Whether this is a draft release") + assets: list = SchemaField(description="List of release assets/files") + + def __init__(self): + from backend.integrations.webhooks.github import GithubWebhookType + + example_payload = json.loads( + self.EXAMPLE_PAYLOAD_FILE.read_text(encoding="utf-8") + ) + + super().__init__( + id="2052dd1b-74e1-46ac-9c87-c7a0e057b60b", + description="This block triggers on GitHub release events. 
" + "Perfect for automating announcements to Discord, Twitter, or other platforms.", + categories={BlockCategory.DEVELOPER_TOOLS, BlockCategory.INPUT}, + input_schema=GithubReleaseTriggerBlock.Input, + output_schema=GithubReleaseTriggerBlock.Output, + webhook_config=BlockWebhookConfig( + provider=ProviderName.GITHUB, + webhook_type=GithubWebhookType.REPO, + resource_format="{repo}", + event_filter_input="events", + event_format="release.{event}", + ), + test_input={ + "repo": "Significant-Gravitas/AutoGPT", + "events": {"published": True}, + "credentials": TEST_CREDENTIALS_INPUT, + "payload": example_payload, + }, + test_credentials=TEST_CREDENTIALS, + test_output=[ + ("payload", example_payload), + ("triggered_by_user", example_payload["sender"]), + ("event", example_payload["action"]), + ("release", example_payload["release"]), + ("release_url", example_payload["release"]["html_url"]), + ("tag_name", example_payload["release"]["tag_name"]), + ("release_name", example_payload["release"]["name"]), + ("body", example_payload["release"]["body"]), + ("prerelease", example_payload["release"]["prerelease"]), + ("draft", example_payload["release"]["draft"]), + ("assets", example_payload["release"]["assets"]), + ], + ) + + async def run(self, input_data: Input, **kwargs) -> BlockOutput: # type: ignore + async for name, value in super().run(input_data, **kwargs): + yield name, value + release = input_data.payload["release"] + yield "event", input_data.payload["action"] + yield "release", release + yield "release_url", release["html_url"] + yield "tag_name", release["tag_name"] + yield "release_name", release.get("name", "") + yield "body", release.get("body", "") + yield "prerelease", release["prerelease"] + yield "draft", release["draft"] + yield "assets", release["assets"] + + +class GithubIssuesTriggerBlock(GitHubTriggerBase, Block): + """Trigger block for GitHub issues events - great for triage and notifications.""" + + EXAMPLE_PAYLOAD_FILE = ( + Path(__file__).parent / "example_payloads" / "issues.opened.json" + ) + + class Input(GitHubTriggerBase.Input): + class EventsFilter(BaseModel): + """ + https://docs.github.com/en/webhooks/webhook-events-and-payloads#issues + """ + + opened: bool = False + edited: bool = False + deleted: bool = False + closed: bool = False + reopened: bool = False + assigned: bool = False + unassigned: bool = False + labeled: bool = False + unlabeled: bool = False + locked: bool = False + unlocked: bool = False + transferred: bool = False + milestoned: bool = False + demilestoned: bool = False + pinned: bool = False + unpinned: bool = False + + events: EventsFilter = SchemaField( + title="Events", description="The issue events to subscribe to" + ) + + class Output(GitHubTriggerBase.Output): + event: str = SchemaField( + description="The issue event that triggered the webhook (e.g., 'opened')" + ) + number: int = SchemaField(description="The issue number") + issue: dict = SchemaField(description="The full issue object") + issue_url: str = SchemaField(description="URL to the issue") + issue_title: str = SchemaField(description="The issue title") + issue_body: str = SchemaField(description="The issue body/description") + labels: list = SchemaField(description="List of labels on the issue") + assignees: list = SchemaField(description="List of assignees") + state: str = SchemaField(description="Issue state ('open' or 'closed')") + + def __init__(self): + from backend.integrations.webhooks.github import GithubWebhookType + + example_payload = json.loads( + 
self.EXAMPLE_PAYLOAD_FILE.read_text(encoding="utf-8") + ) + + super().__init__( + id="b2605464-e486-4bf4-aad3-d8a213c8a48a", + description="This block triggers on GitHub issues events. " + "Useful for automated triage, notifications, and welcoming first-time contributors.", + categories={BlockCategory.DEVELOPER_TOOLS, BlockCategory.INPUT}, + input_schema=GithubIssuesTriggerBlock.Input, + output_schema=GithubIssuesTriggerBlock.Output, + webhook_config=BlockWebhookConfig( + provider=ProviderName.GITHUB, + webhook_type=GithubWebhookType.REPO, + resource_format="{repo}", + event_filter_input="events", + event_format="issues.{event}", + ), + test_input={ + "repo": "Significant-Gravitas/AutoGPT", + "events": {"opened": True}, + "credentials": TEST_CREDENTIALS_INPUT, + "payload": example_payload, + }, + test_credentials=TEST_CREDENTIALS, + test_output=[ + ("payload", example_payload), + ("triggered_by_user", example_payload["sender"]), + ("event", example_payload["action"]), + ("number", example_payload["issue"]["number"]), + ("issue", example_payload["issue"]), + ("issue_url", example_payload["issue"]["html_url"]), + ("issue_title", example_payload["issue"]["title"]), + ("issue_body", example_payload["issue"]["body"]), + ("labels", example_payload["issue"]["labels"]), + ("assignees", example_payload["issue"]["assignees"]), + ("state", example_payload["issue"]["state"]), + ], + ) + + async def run(self, input_data: Input, **kwargs) -> BlockOutput: # type: ignore + async for name, value in super().run(input_data, **kwargs): + yield name, value + issue = input_data.payload["issue"] + yield "event", input_data.payload["action"] + yield "number", issue["number"] + yield "issue", issue + yield "issue_url", issue["html_url"] + yield "issue_title", issue["title"] + yield "issue_body", issue.get("body") or "" + yield "labels", issue["labels"] + yield "assignees", issue["assignees"] + yield "state", issue["state"] + + +class GithubDiscussionTriggerBlock(GitHubTriggerBase, Block): + """Trigger block for GitHub discussion events - perfect for community Q&A sync.""" + + EXAMPLE_PAYLOAD_FILE = ( + Path(__file__).parent / "example_payloads" / "discussion.created.json" + ) + + class Input(GitHubTriggerBase.Input): + class EventsFilter(BaseModel): + """ + https://docs.github.com/en/webhooks/webhook-events-and-payloads#discussion + """ + + created: bool = False + edited: bool = False + deleted: bool = False + answered: bool = False + unanswered: bool = False + labeled: bool = False + unlabeled: bool = False + locked: bool = False + unlocked: bool = False + category_changed: bool = False + transferred: bool = False + pinned: bool = False + unpinned: bool = False + + events: EventsFilter = SchemaField( + title="Events", description="The discussion events to subscribe to" + ) + + class Output(GitHubTriggerBase.Output): + event: str = SchemaField( + description="The discussion event that triggered the webhook" + ) + number: int = SchemaField(description="The discussion number") + discussion: dict = SchemaField(description="The full discussion object") + discussion_url: str = SchemaField(description="URL to the discussion") + title: str = SchemaField(description="The discussion title") + body: str = SchemaField(description="The discussion body") + category: dict = SchemaField(description="The discussion category object") + category_name: str = SchemaField(description="Name of the category") + state: str = SchemaField(description="Discussion state") + + def __init__(self): + from backend.integrations.webhooks.github 
import GithubWebhookType + + example_payload = json.loads( + self.EXAMPLE_PAYLOAD_FILE.read_text(encoding="utf-8") + ) + + super().__init__( + id="87f847b3-d81a-424e-8e89-acadb5c9d52b", + description="This block triggers on GitHub Discussions events. " + "Great for syncing Q&A to Discord or auto-responding to common questions. " + "Note: Discussions must be enabled on the repository.", + categories={BlockCategory.DEVELOPER_TOOLS, BlockCategory.INPUT}, + input_schema=GithubDiscussionTriggerBlock.Input, + output_schema=GithubDiscussionTriggerBlock.Output, + webhook_config=BlockWebhookConfig( + provider=ProviderName.GITHUB, + webhook_type=GithubWebhookType.REPO, + resource_format="{repo}", + event_filter_input="events", + event_format="discussion.{event}", + ), + test_input={ + "repo": "Significant-Gravitas/AutoGPT", + "events": {"created": True}, + "credentials": TEST_CREDENTIALS_INPUT, + "payload": example_payload, + }, + test_credentials=TEST_CREDENTIALS, + test_output=[ + ("payload", example_payload), + ("triggered_by_user", example_payload["sender"]), + ("event", example_payload["action"]), + ("number", example_payload["discussion"]["number"]), + ("discussion", example_payload["discussion"]), + ("discussion_url", example_payload["discussion"]["html_url"]), + ("title", example_payload["discussion"]["title"]), + ("body", example_payload["discussion"]["body"]), + ("category", example_payload["discussion"]["category"]), + ("category_name", example_payload["discussion"]["category"]["name"]), + ("state", example_payload["discussion"]["state"]), + ], + ) + + async def run(self, input_data: Input, **kwargs) -> BlockOutput: # type: ignore + async for name, value in super().run(input_data, **kwargs): + yield name, value + discussion = input_data.payload["discussion"] + yield "event", input_data.payload["action"] + yield "number", discussion["number"] + yield "discussion", discussion + yield "discussion_url", discussion["html_url"] + yield "title", discussion["title"] + yield "body", discussion.get("body") or "" + yield "category", discussion["category"] + yield "category_name", discussion["category"]["name"] + yield "state", discussion["state"] diff --git a/autogpt_platform/backend/backend/blocks/google/_drive.py b/autogpt_platform/backend/backend/blocks/google/_drive.py new file mode 100644 index 0000000000..cb2b52821c --- /dev/null +++ b/autogpt_platform/backend/backend/blocks/google/_drive.py @@ -0,0 +1,155 @@ +from typing import Any, Literal, Optional + +from pydantic import BaseModel, ConfigDict, Field + +from backend.data.model import SchemaField + +AttachmentView = Literal[ + "DOCS", + "DOCUMENTS", + "SPREADSHEETS", + "PRESENTATIONS", + "DOCS_IMAGES", + "FOLDERS", +] +ATTACHMENT_VIEWS: tuple[AttachmentView, ...] 
= ( + "DOCS", + "DOCUMENTS", + "SPREADSHEETS", + "PRESENTATIONS", + "DOCS_IMAGES", + "FOLDERS", +) + + +class _GoogleDriveFileBase(BaseModel): + """Internal base class for Google Drive file representation.""" + + model_config = ConfigDict(populate_by_name=True) + + id: str = Field(description="Google Drive file/folder ID") + name: Optional[str] = Field(None, description="File/folder name") + mime_type: Optional[str] = Field( + None, + alias="mimeType", + description="MIME type (e.g., application/vnd.google-apps.document)", + ) + url: Optional[str] = Field(None, description="URL to open the file") + icon_url: Optional[str] = Field(None, alias="iconUrl", description="Icon URL") + is_folder: Optional[bool] = Field( + None, alias="isFolder", description="Whether this is a folder" + ) + + +class GoogleDriveFile(_GoogleDriveFileBase): + """ + Represents a Google Drive file/folder with optional credentials for chaining. + + Used for both inputs and outputs in Google Drive blocks. The `_credentials_id` + field enables chaining between blocks - when one block outputs a file, the + next block can use the same credentials to access it. + + When used with GoogleDriveFileField(), the frontend renders a combined + auth + file picker UI that automatically populates `_credentials_id`. + """ + + # Hidden field for credential ID - populated by frontend, preserved in outputs + credentials_id: Optional[str] = Field( + None, + alias="_credentials_id", + description="Internal: credential ID for authentication", + ) + + +def GoogleDriveFileField( + *, + title: str, + description: str | None = None, + credentials_kwarg: str = "credentials", + credentials_scopes: list[str] | None = None, + allowed_views: list[AttachmentView] | None = None, + allowed_mime_types: list[str] | None = None, + placeholder: str | None = None, + **kwargs: Any, +) -> Any: + """ + Creates a Google Drive file input field with auto-generated credentials. + + This field type produces a single UI element that handles both: + 1. Google OAuth authentication + 2. File selection via Google Drive Picker + + The system automatically generates a credentials field, and the credentials + are passed to the run() method using the specified kwarg name. + + Args: + title: Field title shown in UI + description: Field description/help text + credentials_kwarg: Name of the kwarg that will receive GoogleCredentials + in the run() method (default: "credentials") + credentials_scopes: OAuth scopes required (default: drive.file) + allowed_views: List of view types to show in picker (default: ["DOCS"]) + allowed_mime_types: Filter by MIME types + placeholder: Placeholder text for the button + **kwargs: Additional SchemaField arguments + + Returns: + Field definition that produces GoogleDriveFile + + Example: + >>> class MyBlock(Block): + ... class Input(BlockSchemaInput): + ... spreadsheet: GoogleDriveFile = GoogleDriveFileField( + ... title="Select Spreadsheet", + ... credentials_kwarg="creds", + ... allowed_views=["SPREADSHEETS"], + ... ) + ... + ... async def run( + ... self, input_data: Input, *, creds: GoogleCredentials, **kwargs + ... ): + ... # creds is automatically populated + ... 
file = input_data.spreadsheet + """ + + # Determine scopes - drive.file is sufficient for picker-selected files + scopes = credentials_scopes or ["https://www.googleapis.com/auth/drive.file"] + + # Build picker configuration with auto_credentials embedded + picker_config = { + "multiselect": False, + "allow_folder_selection": False, + "allowed_views": list(allowed_views) if allowed_views else ["DOCS"], + "scopes": scopes, + # Auto-credentials config tells frontend to include _credentials_id in output + "auto_credentials": { + "provider": "google", + "type": "oauth2", + "scopes": scopes, + "kwarg_name": credentials_kwarg, + }, + } + + if allowed_mime_types: + picker_config["allowed_mime_types"] = list(allowed_mime_types) + + return SchemaField( + default=None, + title=title, + description=description, + placeholder=placeholder or "Select from Google Drive", + # Use google-drive-picker format so frontend renders existing component + format="google-drive-picker", + advanced=False, + json_schema_extra={ + "google_drive_picker_config": picker_config, + # Also keep auto_credentials at top level for backend detection + "auto_credentials": { + "provider": "google", + "type": "oauth2", + "scopes": scopes, + "kwarg_name": credentials_kwarg, + }, + **kwargs, + }, + ) diff --git a/autogpt_platform/backend/backend/blocks/google/calendar.py b/autogpt_platform/backend/backend/blocks/google/calendar.py index 339daab430..55c41f047c 100644 --- a/autogpt_platform/backend/backend/blocks/google/calendar.py +++ b/autogpt_platform/backend/backend/blocks/google/calendar.py @@ -8,7 +8,13 @@ from google.oauth2.credentials import Credentials from googleapiclient.discovery import build from pydantic import BaseModel -from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema +from backend.data.block import ( + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, +) from backend.data.model import SchemaField from backend.util.settings import Settings @@ -43,7 +49,7 @@ class CalendarEvent(BaseModel): class GoogleCalendarReadEventsBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: GoogleCredentialsInput = GoogleCredentialsField( ["https://www.googleapis.com/auth/calendar.readonly"] ) @@ -73,7 +79,7 @@ class GoogleCalendarReadEventsBlock(Block): description="Include events you've declined", default=False ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): events: list[CalendarEvent] = SchemaField( description="List of calendar events in the requested time range", default_factory=list, @@ -379,7 +385,7 @@ class RecurringEvent(BaseModel): class GoogleCalendarCreateEventBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: GoogleCredentialsInput = GoogleCredentialsField( ["https://www.googleapis.com/auth/calendar"] ) @@ -433,12 +439,11 @@ class GoogleCalendarCreateEventBlock(Block): default_factory=lambda: [ReminderPreset.TEN_MINUTES], ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): event_id: str = SchemaField(description="ID of the created event") event_link: str = SchemaField( description="Link to view the event in Google Calendar" ) - error: str = SchemaField(description="Error message if event creation failed") def __init__(self): super().__init__( diff --git a/autogpt_platform/backend/backend/blocks/google/gmail.py b/autogpt_platform/backend/backend/blocks/google/gmail.py index 9efeed0331..bded362314 100644 --- a/autogpt_platform/backend/backend/blocks/google/gmail.py +++ 
b/autogpt_platform/backend/backend/blocks/google/gmail.py @@ -14,7 +14,13 @@ from google.oauth2.credentials import Credentials from googleapiclient.discovery import build from pydantic import BaseModel, Field -from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema +from backend.data.block import ( + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, +) from backend.data.model import SchemaField from backend.util.file import MediaFileType, get_exec_file_path, store_media_file from backend.util.settings import Settings @@ -320,7 +326,7 @@ class GmailBase(Block, ABC): class GmailReadBlock(GmailBase): - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: GoogleCredentialsInput = GoogleCredentialsField( ["https://www.googleapis.com/auth/gmail.readonly"] ) @@ -333,7 +339,7 @@ class GmailReadBlock(GmailBase): default=10, ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): email: Email = SchemaField( description="Email data", ) @@ -516,7 +522,7 @@ class GmailSendBlock(GmailBase): - Attachment support for multiple files """ - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: GoogleCredentialsInput = GoogleCredentialsField( ["https://www.googleapis.com/auth/gmail.send"] ) @@ -540,7 +546,7 @@ class GmailSendBlock(GmailBase): description="Files to attach", default_factory=list, advanced=True ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): result: GmailSendResult = SchemaField( description="Send confirmation", ) @@ -618,7 +624,7 @@ class GmailCreateDraftBlock(GmailBase): - Attachment support for multiple files """ - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: GoogleCredentialsInput = GoogleCredentialsField( ["https://www.googleapis.com/auth/gmail.modify"] ) @@ -642,7 +648,7 @@ class GmailCreateDraftBlock(GmailBase): description="Files to attach", default_factory=list, advanced=True ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): result: GmailDraftResult = SchemaField( description="Draft creation result", ) @@ -721,12 +727,12 @@ class GmailCreateDraftBlock(GmailBase): class GmailListLabelsBlock(GmailBase): - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: GoogleCredentialsInput = GoogleCredentialsField( ["https://www.googleapis.com/auth/gmail.labels"] ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): result: list[dict] = SchemaField( description="List of labels", ) @@ -779,7 +785,7 @@ class GmailListLabelsBlock(GmailBase): class GmailAddLabelBlock(GmailBase): - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: GoogleCredentialsInput = GoogleCredentialsField( ["https://www.googleapis.com/auth/gmail.modify"] ) @@ -790,7 +796,7 @@ class GmailAddLabelBlock(GmailBase): description="Label name to add", ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): result: GmailLabelResult = SchemaField( description="Label addition result", ) @@ -865,7 +871,7 @@ class GmailAddLabelBlock(GmailBase): class GmailRemoveLabelBlock(GmailBase): - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: GoogleCredentialsInput = GoogleCredentialsField( ["https://www.googleapis.com/auth/gmail.modify"] ) @@ -876,7 +882,7 @@ class GmailRemoveLabelBlock(GmailBase): description="Label name to remove", ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): result: GmailLabelResult = SchemaField( description="Label removal result", ) @@ -941,17 +947,16 @@ class 
GmailRemoveLabelBlock(GmailBase): class GmailGetThreadBlock(GmailBase): - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: GoogleCredentialsInput = GoogleCredentialsField( ["https://www.googleapis.com/auth/gmail.readonly"] ) threadId: str = SchemaField(description="Gmail thread ID") - class Output(BlockSchema): + class Output(BlockSchemaOutput): thread: Thread = SchemaField( description="Gmail thread with decoded message bodies" ) - error: str = SchemaField(description="Error message if any") def __init__(self): super().__init__( @@ -1218,7 +1223,7 @@ class GmailReplyBlock(GmailBase): - Full Unicode/emoji support with UTF-8 encoding """ - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: GoogleCredentialsInput = GoogleCredentialsField( [ "https://www.googleapis.com/auth/gmail.send", @@ -1246,14 +1251,13 @@ class GmailReplyBlock(GmailBase): description="Files to attach", default_factory=list, advanced=True ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): messageId: str = SchemaField(description="Sent message ID") threadId: str = SchemaField(description="Thread ID") message: dict = SchemaField(description="Raw Gmail message object") email: Email = SchemaField( description="Parsed email object with decoded body and attachments" ) - error: str = SchemaField(description="Error message if any") def __init__(self): super().__init__( @@ -1368,7 +1372,7 @@ class GmailDraftReplyBlock(GmailBase): - Full Unicode/emoji support with UTF-8 encoding """ - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: GoogleCredentialsInput = GoogleCredentialsField( [ "https://www.googleapis.com/auth/gmail.modify", @@ -1396,12 +1400,11 @@ class GmailDraftReplyBlock(GmailBase): description="Files to attach", default_factory=list, advanced=True ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): draftId: str = SchemaField(description="Created draft ID") messageId: str = SchemaField(description="Draft message ID") threadId: str = SchemaField(description="Thread ID") status: str = SchemaField(description="Draft creation status") - error: str = SchemaField(description="Error message if any") def __init__(self): super().__init__( @@ -1482,14 +1485,13 @@ class GmailDraftReplyBlock(GmailBase): class GmailGetProfileBlock(GmailBase): - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: GoogleCredentialsInput = GoogleCredentialsField( ["https://www.googleapis.com/auth/gmail.readonly"] ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): profile: Profile = SchemaField(description="Gmail user profile information") - error: str = SchemaField(description="Error message if any") def __init__(self): super().__init__( @@ -1555,7 +1557,7 @@ class GmailForwardBlock(GmailBase): - Manual content type override option """ - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: GoogleCredentialsInput = GoogleCredentialsField( [ "https://www.googleapis.com/auth/gmail.send", @@ -1589,11 +1591,10 @@ class GmailForwardBlock(GmailBase): advanced=True, ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): messageId: str = SchemaField(description="Forwarded message ID") threadId: str = SchemaField(description="Thread ID") status: str = SchemaField(description="Forward status") - error: str = SchemaField(description="Error message if any") def __init__(self): super().__init__( diff --git a/autogpt_platform/backend/backend/blocks/google/sheets.py 
b/autogpt_platform/backend/backend/blocks/google/sheets.py index 6e63958c82..7b9ba2161e 100644 --- a/autogpt_platform/backend/backend/blocks/google/sheets.py +++ b/autogpt_platform/backend/backend/blocks/google/sheets.py @@ -1,11 +1,20 @@ import asyncio +import csv +import io +import re from enum import Enum -from typing import Any from google.oauth2.credentials import Credentials from googleapiclient.discovery import build -from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema +from backend.blocks.google._drive import GoogleDriveFile, GoogleDriveFileField +from backend.data.block import ( + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, +) from backend.data.model import SchemaField from backend.util.settings import Settings @@ -124,36 +133,8 @@ def sheet_id_by_name(service, spreadsheet_id: str, sheet_name: str) -> int | Non return None -def _convert_dicts_to_rows( - data: list[dict[str, Any]], headers: list[str] -) -> list[list[str]]: - """Convert list of dictionaries to list of rows using the specified header order. - - Args: - data: List of dictionaries to convert - headers: List of column headers to use for ordering - - Returns: - List of rows where each row is a list of string values in header order - """ - if not data: - return [] - - if not headers: - raise ValueError("Headers are required when using list[dict] format") - - rows = [] - for item in data: - row = [] - for header in headers: - value = item.get(header, "") - row.append(str(value) if value is not None else "") - rows.append(row) - - return rows - - def _build_sheets_service(credentials: GoogleCredentials): + """Build Sheets service from platform credentials (with refresh token).""" settings = Settings() creds = Credentials( token=( @@ -174,6 +155,63 @@ def _build_sheets_service(credentials: GoogleCredentials): return build("sheets", "v4", credentials=creds) +def _build_drive_service(credentials: GoogleCredentials): + """Build Drive service from platform credentials (with refresh token).""" + settings = Settings() + creds = Credentials( + token=( + credentials.access_token.get_secret_value() + if credentials.access_token + else None + ), + refresh_token=( + credentials.refresh_token.get_secret_value() + if credentials.refresh_token + else None + ), + token_uri="https://oauth2.googleapis.com/token", + client_id=settings.secrets.google_client_id, + client_secret=settings.secrets.google_client_secret, + scopes=credentials.scopes, + ) + return build("drive", "v3", credentials=creds) + + +def _validate_spreadsheet_file(spreadsheet_file: "GoogleDriveFile") -> str | None: + """Validate that the selected file is a Google Sheets spreadsheet. + + Returns None if valid, error message string if invalid. + """ + if spreadsheet_file.mime_type != "application/vnd.google-apps.spreadsheet": + file_type = spreadsheet_file.mime_type + file_name = spreadsheet_file.name + if file_type == "text/csv": + return f"Cannot use CSV file '{file_name}' with Google Sheets block. Please use a CSV reader block instead, or convert the CSV to a Google Sheets spreadsheet first." + elif file_type in [ + "application/vnd.ms-excel", + "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet", + ]: + return f"Cannot use Excel file '{file_name}' with Google Sheets block. Please use an Excel reader block instead, or convert to Google Sheets first." + else: + return f"Cannot use file '{file_name}' (type: {file_type}) with Google Sheets block. This block only works with Google Sheets spreadsheets." 
+ return None + + +def _handle_sheets_api_error(error_msg: str, operation: str = "access") -> str: + """Convert common Google Sheets API errors to user-friendly messages.""" + if "Request contains an invalid argument" in error_msg: + return f"Invalid request to Google Sheets API. This usually means the file is not a Google Sheets spreadsheet, the range is invalid, or you don't have permission to {operation} this file." + elif "The caller does not have permission" in error_msg or "Forbidden" in error_msg: + if operation in ["write", "modify", "update", "append", "clear"]: + return "Permission denied. You don't have edit access to this spreadsheet. Make sure it's shared with edit permissions." + else: + return "Permission denied. You don't have access to this spreadsheet. Make sure it's shared with you and try re-selecting the file." + elif "not found" in error_msg.lower() or "does not exist" in error_msg.lower(): + return "Spreadsheet not found. The file may have been deleted or the link is invalid." + else: + return f"Failed to {operation} Google Sheet: {error_msg}" + + class SheetOperation(str, Enum): CREATE = "create" DELETE = "delete" @@ -195,7 +233,18 @@ class BatchOperationType(str, Enum): CLEAR = "clear" -class BatchOperation(BlockSchema): +class PublicAccessRole(str, Enum): + READER = "reader" + COMMENTER = "commenter" + + +class ShareRole(str, Enum): + READER = "reader" + WRITER = "writer" + COMMENTER = "commenter" + + +class BatchOperation(BlockSchemaInput): type: BatchOperationType = SchemaField( description="The type of operation to perform" ) @@ -206,22 +255,26 @@ class BatchOperation(BlockSchema): class GoogleSheetsReadBlock(Block): - class Input(BlockSchema): - credentials: GoogleCredentialsInput = GoogleCredentialsField( - ["https://www.googleapis.com/auth/spreadsheets.readonly"] - ) - spreadsheet_id: str = SchemaField( - description="The ID or URL of the spreadsheet to read from", - title="Spreadsheet ID or URL", + class Input(BlockSchemaInput): + spreadsheet: GoogleDriveFile = GoogleDriveFileField( + title="Spreadsheet", + description="Select a Google Sheets spreadsheet", + credentials_kwarg="credentials", + allowed_views=["SPREADSHEETS"], + allowed_mime_types=["application/vnd.google-apps.spreadsheet"], ) range: str = SchemaField( description="The A1 notation of the range to read", + placeholder="Sheet1!A1:Z1000", ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): result: list[list[str]] = SchemaField( description="The data read from the spreadsheet", ) + spreadsheet: GoogleDriveFile = SchemaField( + description="The spreadsheet as a GoogleDriveFile (for chaining to other blocks)", + ) error: str = SchemaField( description="Error message if any", ) @@ -235,9 +288,12 @@ class GoogleSheetsReadBlock(Block): output_schema=GoogleSheetsReadBlock.Output, disabled=GOOGLE_SHEETS_DISABLED, test_input={ - "spreadsheet_id": "1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms", + "spreadsheet": { + "id": "1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms", + "name": "Test Spreadsheet", + "mimeType": "application/vnd.google-apps.spreadsheet", + }, "range": "Sheet1!A1:B2", - "credentials": TEST_CREDENTIALS_INPUT, }, test_credentials=TEST_CREDENTIALS, test_output=[ @@ -248,6 +304,18 @@ class GoogleSheetsReadBlock(Block): ["Alice", "85"], ], ), + ( + "spreadsheet", + GoogleDriveFile( + id="1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms", + name="Test Spreadsheet", + mimeType="application/vnd.google-apps.spreadsheet", + 
url="https://docs.google.com/spreadsheets/d/1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms/edit", + iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png", + isFolder=False, + _credentials_id=None, + ), + ), ], test_mock={ "_read_sheet": lambda *args, **kwargs: [ @@ -260,39 +328,80 @@ class GoogleSheetsReadBlock(Block): async def run( self, input_data: Input, *, credentials: GoogleCredentials, **kwargs ) -> BlockOutput: - service = _build_sheets_service(credentials) - spreadsheet_id = extract_spreadsheet_id(input_data.spreadsheet_id) - data = await asyncio.to_thread( - self._read_sheet, service, spreadsheet_id, input_data.range - ) - yield "result", data + if not input_data.spreadsheet: + yield "error", "No spreadsheet selected" + return + + # Check if the selected file is actually a Google Sheets spreadsheet + validation_error = _validate_spreadsheet_file(input_data.spreadsheet) + if validation_error: + yield "error", validation_error + return + try: + service = _build_sheets_service(credentials) + spreadsheet_id = input_data.spreadsheet.id + data = await asyncio.to_thread( + self._read_sheet, service, spreadsheet_id, input_data.range + ) + yield "result", data + # Output the GoogleDriveFile for chaining (preserves credentials_id) + yield "spreadsheet", GoogleDriveFile( + id=spreadsheet_id, + name=input_data.spreadsheet.name, + mimeType="application/vnd.google-apps.spreadsheet", + url=f"https://docs.google.com/spreadsheets/d/{spreadsheet_id}/edit", + iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png", + isFolder=False, + _credentials_id=input_data.spreadsheet.credentials_id, + ) + except Exception as e: + yield "error", _handle_sheets_api_error(str(e), "read") def _read_sheet(self, service, spreadsheet_id: str, range: str) -> list[list[str]]: sheet = service.spreadsheets() - result = sheet.values().get(spreadsheetId=spreadsheet_id, range=range).execute() + range_to_use = range or "A:Z" + sheet_name, cell_range = parse_a1_notation(range_to_use) + if sheet_name: + cleaned_sheet = sheet_name.strip().strip("'\"") + formatted_sheet = format_sheet_name(cleaned_sheet) + cell_part = cell_range.strip() if cell_range else "" + if cell_part: + range_to_use = f"{formatted_sheet}!{cell_part}" + else: + range_to_use = f"{formatted_sheet}!A:Z" + # If no sheet name, keep the original range (e.g., "A1:B2" or "B:B") + result = ( + sheet.values() + .get(spreadsheetId=spreadsheet_id, range=range_to_use) + .execute() + ) return result.get("values", []) class GoogleSheetsWriteBlock(Block): - class Input(BlockSchema): - credentials: GoogleCredentialsInput = GoogleCredentialsField( - ["https://www.googleapis.com/auth/spreadsheets"] - ) - spreadsheet_id: str = SchemaField( - description="The ID or URL of the spreadsheet to write to", - title="Spreadsheet ID or URL", + class Input(BlockSchemaInput): + spreadsheet: GoogleDriveFile = GoogleDriveFileField( + title="Spreadsheet", + description="Select a Google Sheets spreadsheet", + credentials_kwarg="credentials", + allowed_views=["SPREADSHEETS"], + allowed_mime_types=["application/vnd.google-apps.spreadsheet"], ) range: str = SchemaField( description="The A1 notation of the range to write", + placeholder="Sheet1!A1:B2", ) values: list[list[str]] = SchemaField( description="The data to write to the spreadsheet", ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): result: dict = SchemaField( description="The result of the write operation", ) + spreadsheet: GoogleDriveFile = SchemaField( + description="The 
spreadsheet as a GoogleDriveFile (for chaining to other blocks)", + ) error: str = SchemaField( description="Error message if any", ) @@ -306,13 +415,16 @@ class GoogleSheetsWriteBlock(Block): output_schema=GoogleSheetsWriteBlock.Output, disabled=GOOGLE_SHEETS_DISABLED, test_input={ - "spreadsheet_id": "1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms", + "spreadsheet": { + "id": "1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms", + "name": "Test Spreadsheet", + "mimeType": "application/vnd.google-apps.spreadsheet", + }, "range": "Sheet1!A1:B2", "values": [ ["Name", "Score"], ["Bob", "90"], ], - "credentials": TEST_CREDENTIALS_INPUT, }, test_credentials=TEST_CREDENTIALS, test_output=[ @@ -320,6 +432,18 @@ class GoogleSheetsWriteBlock(Block): "result", {"updatedCells": 4, "updatedColumns": 2, "updatedRows": 2}, ), + ( + "spreadsheet", + GoogleDriveFile( + id="1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms", + name="Test Spreadsheet", + mimeType="application/vnd.google-apps.spreadsheet", + url="https://docs.google.com/spreadsheets/d/1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms/edit", + iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png", + isFolder=False, + _credentials_id=None, + ), + ), ], test_mock={ "_write_sheet": lambda *args, **kwargs: { @@ -333,16 +457,45 @@ class GoogleSheetsWriteBlock(Block): async def run( self, input_data: Input, *, credentials: GoogleCredentials, **kwargs ) -> BlockOutput: - service = _build_sheets_service(credentials) - spreadsheet_id = extract_spreadsheet_id(input_data.spreadsheet_id) - result = await asyncio.to_thread( - self._write_sheet, - service, - spreadsheet_id, - input_data.range, - input_data.values, - ) - yield "result", result + if not input_data.spreadsheet: + yield "error", "No spreadsheet selected" + return + + # Check if the selected file is actually a Google Sheets spreadsheet + validation_error = _validate_spreadsheet_file(input_data.spreadsheet) + if validation_error: + # Customize message for write operations on CSV files + if "CSV file" in validation_error: + yield "error", validation_error.replace( + "Please use a CSV reader block instead, or", + "CSV files are read-only through Google Drive. 
Please", + ) + else: + yield "error", validation_error + return + + try: + service = _build_sheets_service(credentials) + result = await asyncio.to_thread( + self._write_sheet, + service, + input_data.spreadsheet.id, + input_data.range, + input_data.values, + ) + yield "result", result + # Output the GoogleDriveFile for chaining (preserves credentials_id) + yield "spreadsheet", GoogleDriveFile( + id=input_data.spreadsheet.id, + name=input_data.spreadsheet.name, + mimeType="application/vnd.google-apps.spreadsheet", + url=f"https://docs.google.com/spreadsheets/d/{input_data.spreadsheet.id}/edit", + iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png", + isFolder=False, + _credentials_id=input_data.spreadsheet.credentials_id, + ) + except Exception as e: + yield "error", _handle_sheets_api_error(str(e), "write") def _write_sheet( self, service, spreadsheet_id: str, range: str, values: list[list[str]] @@ -362,70 +515,71 @@ class GoogleSheetsWriteBlock(Block): return result -class GoogleSheetsAppendBlock(Block): - class Input(BlockSchema): - credentials: GoogleCredentialsInput = GoogleCredentialsField( - ["https://www.googleapis.com/auth/spreadsheets"] +class GoogleSheetsAppendRowBlock(Block): + """Append a single row to the end of a Google Sheet.""" + + class Input(BlockSchemaInput): + spreadsheet: GoogleDriveFile = GoogleDriveFileField( + title="Spreadsheet", + description="Select a Google Sheets spreadsheet", + credentials_kwarg="credentials", + allowed_views=["SPREADSHEETS"], + allowed_mime_types=["application/vnd.google-apps.spreadsheet"], ) - spreadsheet_id: str = SchemaField( - description="Spreadsheet ID or URL", - title="Spreadsheet ID or URL", + row: list[str] = SchemaField( + description="Row values to append (e.g., ['Alice', 'alice@example.com', '25'])", ) sheet_name: str = SchemaField( - description="Optional sheet to append to (defaults to first sheet)", + description="Sheet to append to (optional, defaults to first sheet)", default="", ) - values: list[list[str]] = SchemaField( - description="Rows to append as list of rows (list[list[str]])", - default=[], - ) - dict_values: list[dict[str, Any]] = SchemaField( - description="Rows to append as list of dictionaries (list[dict])", - default=[], - ) - headers: list[str] = SchemaField( - description="Column headers to use for ordering dict values (required when dict_values is provided)", - default=[], - ) - range: str = SchemaField( - description="Range to append to (e.g. 'A:A' for column A only, 'A:C' for columns A-C, or leave empty for unlimited columns). When empty, data will span as many columns as needed.", - default="", - advanced=True, - ) value_input_option: ValueInputOption = SchemaField( - description="How input data should be interpreted", + description="How values are interpreted. USER_ENTERED: parsed like typed input (e.g., '=SUM(A1:A5)' becomes a formula, '1/2/2024' becomes a date). 
RAW: stored as-is without parsing.", default=ValueInputOption.USER_ENTERED, advanced=True, ) - insert_data_option: InsertDataOption = SchemaField( - description="How new data should be inserted", - default=InsertDataOption.INSERT_ROWS, - advanced=True, - ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): result: dict = SchemaField(description="Append API response") - error: str = SchemaField(description="Error message, if any") + spreadsheet: GoogleDriveFile = SchemaField( + description="The spreadsheet for chaining to other blocks", + ) + error: str = SchemaField(description="Error message if any") def __init__(self): super().__init__( id="531d50c0-d6b9-4cf9-a013-7bf783d313c7", - description="Append data to a Google Sheet. Use 'values' for list of rows (list[list[str]]) or 'dict_values' with 'headers' for list of dictionaries (list[dict]). Data is added to the next empty row without overwriting existing content. Leave range empty for unlimited columns, or specify range like 'A:A' to constrain to specific columns.", + description="Append or Add a single row to the end of a Google Sheet. The row is added after the last row with data.", categories={BlockCategory.DATA}, - input_schema=GoogleSheetsAppendBlock.Input, - output_schema=GoogleSheetsAppendBlock.Output, + input_schema=GoogleSheetsAppendRowBlock.Input, + output_schema=GoogleSheetsAppendRowBlock.Output, disabled=GOOGLE_SHEETS_DISABLED, test_input={ - "spreadsheet_id": "1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms", - "values": [["Charlie", "95"]], - "credentials": TEST_CREDENTIALS_INPUT, + "spreadsheet": { + "id": "1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms", + "name": "Test Spreadsheet", + "mimeType": "application/vnd.google-apps.spreadsheet", + }, + "row": ["Charlie", "95"], }, test_credentials=TEST_CREDENTIALS, test_output=[ ("result", {"updatedCells": 2, "updatedColumns": 2, "updatedRows": 1}), + ( + "spreadsheet", + GoogleDriveFile( + id="1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms", + name="Test Spreadsheet", + mimeType="application/vnd.google-apps.spreadsheet", + url="https://docs.google.com/spreadsheets/d/1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms/edit", + iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png", + isFolder=False, + _credentials_id=None, + ), + ), ], test_mock={ - "_append_sheet": lambda *args, **kwargs: { + "_append_row": lambda *args, **kwargs: { "updatedCells": 2, "updatedColumns": 2, "updatedRows": 1, @@ -436,89 +590,94 @@ class GoogleSheetsAppendBlock(Block): async def run( self, input_data: Input, *, credentials: GoogleCredentials, **kwargs ) -> BlockOutput: - service = _build_sheets_service(credentials) - spreadsheet_id = extract_spreadsheet_id(input_data.spreadsheet_id) - # Determine which values to use and convert if needed - processed_values: list[list[str]] + if not input_data.spreadsheet: + yield "error", "No spreadsheet selected" + return - # Validate that only one format is provided - if input_data.values and input_data.dict_values: - raise ValueError("Provide either 'values' or 'dict_values', not both") + validation_error = _validate_spreadsheet_file(input_data.spreadsheet) + if validation_error: + yield "error", validation_error + return - if input_data.dict_values: - if not input_data.headers: - raise ValueError("Headers are required when using dict_values") - processed_values = _convert_dicts_to_rows( - input_data.dict_values, input_data.headers + if not input_data.row: + yield "error", "Row data is required" + return + + try: + service = 
_build_sheets_service(credentials) + result = await asyncio.to_thread( + self._append_row, + service, + input_data.spreadsheet.id, + input_data.sheet_name, + input_data.row, + input_data.value_input_option, ) - elif input_data.values: - processed_values = input_data.values - else: - raise ValueError("Either 'values' or 'dict_values' must be provided") + yield "result", result + yield "spreadsheet", GoogleDriveFile( + id=input_data.spreadsheet.id, + name=input_data.spreadsheet.name, + mimeType="application/vnd.google-apps.spreadsheet", + url=f"https://docs.google.com/spreadsheets/d/{input_data.spreadsheet.id}/edit", + iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png", + isFolder=False, + _credentials_id=input_data.spreadsheet.credentials_id, + ) + except Exception as e: + yield "error", f"Failed to append row: {str(e)}" - result = await asyncio.to_thread( - self._append_sheet, - service, - spreadsheet_id, - input_data.sheet_name, - processed_values, - input_data.range, - input_data.value_input_option, - input_data.insert_data_option, - ) - yield "result", result - - def _append_sheet( + def _append_row( self, service, spreadsheet_id: str, sheet_name: str, - values: list[list[str]], - range: str, + row: list[str], value_input_option: ValueInputOption, - insert_data_option: InsertDataOption, ) -> dict: - target_sheet = resolve_sheet_name(service, spreadsheet_id, sheet_name) + target_sheet = resolve_sheet_name(service, spreadsheet_id, sheet_name or None) formatted_sheet = format_sheet_name(target_sheet) - # If no range specified, use A1 to let Google Sheets find the next empty row with unlimited columns - # If range specified, use it to constrain columns (e.g., A:A for column A only) - if range: - append_range = f"{formatted_sheet}!{range}" - else: - # Use A1 as starting point for unlimited columns - Google Sheets will find next empty row - append_range = f"{formatted_sheet}!A1" - body = {"values": values} - return ( + append_range = f"{formatted_sheet}!A1" + body = {"values": [row]} # Wrap single row in list for API + result = ( service.spreadsheets() .values() .append( spreadsheetId=spreadsheet_id, range=append_range, valueInputOption=value_input_option.value, - insertDataOption=insert_data_option.value, + insertDataOption="INSERT_ROWS", body=body, ) .execute() ) + return { + "updatedCells": result.get("updates", {}).get("updatedCells", 0), + "updatedRows": result.get("updates", {}).get("updatedRows", 0), + "updatedColumns": result.get("updates", {}).get("updatedColumns", 0), + } class GoogleSheetsClearBlock(Block): - class Input(BlockSchema): - credentials: GoogleCredentialsInput = GoogleCredentialsField( - ["https://www.googleapis.com/auth/spreadsheets"] - ) - spreadsheet_id: str = SchemaField( - description="The ID or URL of the spreadsheet to clear", - title="Spreadsheet ID or URL", + class Input(BlockSchemaInput): + spreadsheet: GoogleDriveFile = GoogleDriveFileField( + title="Spreadsheet", + description="Select a Google Sheets spreadsheet", + credentials_kwarg="credentials", + allowed_views=["SPREADSHEETS"], + allowed_mime_types=["application/vnd.google-apps.spreadsheet"], ) range: str = SchemaField( description="The A1 notation of the range to clear", + placeholder="Sheet1!A1:B2", ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): result: dict = SchemaField( description="The result of the clear operation", ) + spreadsheet: GoogleDriveFile = SchemaField( + description="The spreadsheet as a GoogleDriveFile (for chaining to other blocks)", + ) 
error: str = SchemaField( description="Error message if any", ) @@ -532,13 +691,28 @@ class GoogleSheetsClearBlock(Block): output_schema=GoogleSheetsClearBlock.Output, disabled=GOOGLE_SHEETS_DISABLED, test_input={ - "spreadsheet_id": "1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms", + "spreadsheet": { + "id": "1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms", + "name": "Test Spreadsheet", + "mimeType": "application/vnd.google-apps.spreadsheet", + }, "range": "Sheet1!A1:B2", - "credentials": TEST_CREDENTIALS_INPUT, }, test_credentials=TEST_CREDENTIALS, test_output=[ ("result", {"clearedRange": "Sheet1!A1:B2"}), + ( + "spreadsheet", + GoogleDriveFile( + id="1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms", + name="Test Spreadsheet", + mimeType="application/vnd.google-apps.spreadsheet", + url="https://docs.google.com/spreadsheets/d/1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms/edit", + iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png", + isFolder=False, + _credentials_id=None, + ), + ), ], test_mock={ "_clear_range": lambda *args, **kwargs: { @@ -550,15 +724,37 @@ class GoogleSheetsClearBlock(Block): async def run( self, input_data: Input, *, credentials: GoogleCredentials, **kwargs ) -> BlockOutput: - service = _build_sheets_service(credentials) - spreadsheet_id = extract_spreadsheet_id(input_data.spreadsheet_id) - result = await asyncio.to_thread( - self._clear_range, - service, - spreadsheet_id, - input_data.range, - ) - yield "result", result + if not input_data.spreadsheet: + yield "error", "No spreadsheet selected" + return + + # Check if the selected file is actually a Google Sheets spreadsheet + validation_error = _validate_spreadsheet_file(input_data.spreadsheet) + if validation_error: + yield "error", validation_error + return + + try: + service = _build_sheets_service(credentials) + result = await asyncio.to_thread( + self._clear_range, + service, + input_data.spreadsheet.id, + input_data.range, + ) + yield "result", result + # Output the GoogleDriveFile for chaining (preserves credentials_id) + yield "spreadsheet", GoogleDriveFile( + id=input_data.spreadsheet.id, + name=input_data.spreadsheet.name, + mimeType="application/vnd.google-apps.spreadsheet", + url=f"https://docs.google.com/spreadsheets/d/{input_data.spreadsheet.id}/edit", + iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png", + isFolder=False, + _credentials_id=input_data.spreadsheet.credentials_id, + ) + except Exception as e: + yield "error", f"Failed to clear Google Sheet range: {str(e)}" def _clear_range(self, service, spreadsheet_id: str, range: str) -> dict: result = ( @@ -571,19 +767,22 @@ class GoogleSheetsClearBlock(Block): class GoogleSheetsMetadataBlock(Block): - class Input(BlockSchema): - credentials: GoogleCredentialsInput = GoogleCredentialsField( - ["https://www.googleapis.com/auth/spreadsheets.readonly"] - ) - spreadsheet_id: str = SchemaField( - description="The ID or URL of the spreadsheet to get metadata for", - title="Spreadsheet ID or URL", + class Input(BlockSchemaInput): + spreadsheet: GoogleDriveFile = GoogleDriveFileField( + title="Spreadsheet", + description="Select a Google Sheets spreadsheet", + credentials_kwarg="credentials", + allowed_views=["SPREADSHEETS"], + allowed_mime_types=["application/vnd.google-apps.spreadsheet"], ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): result: dict = SchemaField( description="The metadata of the spreadsheet including sheets info", ) + spreadsheet: GoogleDriveFile = SchemaField( + 
description="The spreadsheet as a GoogleDriveFile (for chaining to other blocks)", + ) error: str = SchemaField( description="Error message if any", ) @@ -597,8 +796,11 @@ class GoogleSheetsMetadataBlock(Block): output_schema=GoogleSheetsMetadataBlock.Output, disabled=GOOGLE_SHEETS_DISABLED, test_input={ - "spreadsheet_id": "1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms", - "credentials": TEST_CREDENTIALS_INPUT, + "spreadsheet": { + "id": "1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms", + "name": "Test Spreadsheet", + "mimeType": "application/vnd.google-apps.spreadsheet", + }, }, test_credentials=TEST_CREDENTIALS, test_output=[ @@ -609,6 +811,18 @@ class GoogleSheetsMetadataBlock(Block): "sheets": [{"title": "Sheet1", "sheetId": 0}], }, ), + ( + "spreadsheet", + GoogleDriveFile( + id="1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms", + name="Test Spreadsheet", + mimeType="application/vnd.google-apps.spreadsheet", + url="https://docs.google.com/spreadsheets/d/1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms/edit", + iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png", + isFolder=False, + _credentials_id=None, + ), + ), ], test_mock={ "_get_metadata": lambda *args, **kwargs: { @@ -621,14 +835,36 @@ class GoogleSheetsMetadataBlock(Block): async def run( self, input_data: Input, *, credentials: GoogleCredentials, **kwargs ) -> BlockOutput: - service = _build_sheets_service(credentials) - spreadsheet_id = extract_spreadsheet_id(input_data.spreadsheet_id) - result = await asyncio.to_thread( - self._get_metadata, - service, - spreadsheet_id, - ) - yield "result", result + if not input_data.spreadsheet: + yield "error", "No spreadsheet selected" + return + + # Check if the selected file is actually a Google Sheets spreadsheet + validation_error = _validate_spreadsheet_file(input_data.spreadsheet) + if validation_error: + yield "error", validation_error + return + + try: + service = _build_sheets_service(credentials) + result = await asyncio.to_thread( + self._get_metadata, + service, + input_data.spreadsheet.id, + ) + yield "result", result + # Output the GoogleDriveFile for chaining (preserves credentials_id) + yield "spreadsheet", GoogleDriveFile( + id=input_data.spreadsheet.id, + name=input_data.spreadsheet.name, + mimeType="application/vnd.google-apps.spreadsheet", + url=f"https://docs.google.com/spreadsheets/d/{input_data.spreadsheet.id}/edit", + iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png", + isFolder=False, + _credentials_id=input_data.spreadsheet.credentials_id, + ) + except Exception as e: + yield "error", f"Failed to get spreadsheet metadata: {str(e)}" def _get_metadata(self, service, spreadsheet_id: str) -> dict: result = ( @@ -652,13 +888,13 @@ class GoogleSheetsMetadataBlock(Block): class GoogleSheetsManageSheetBlock(Block): - class Input(BlockSchema): - credentials: GoogleCredentialsInput = GoogleCredentialsField( - ["https://www.googleapis.com/auth/spreadsheets"] - ) - spreadsheet_id: str = SchemaField( - description="Spreadsheet ID or URL", - title="Spreadsheet ID or URL", + class Input(BlockSchemaInput): + spreadsheet: GoogleDriveFile = GoogleDriveFileField( + title="Spreadsheet", + description="Select a Google Sheets spreadsheet", + credentials_kwarg="credentials", + allowed_views=["SPREADSHEETS"], + allowed_mime_types=["application/vnd.google-apps.spreadsheet"], ) operation: SheetOperation = SchemaField(description="Operation to perform") sheet_name: str = SchemaField( @@ -672,9 +908,14 @@ class GoogleSheetsManageSheetBlock(Block): 
description="New sheet name for copy", default="" ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): result: dict = SchemaField(description="Operation result") - error: str = SchemaField(description="Error message, if any") + spreadsheet: GoogleDriveFile = SchemaField( + description="The spreadsheet as a GoogleDriveFile (for chaining to other blocks)", + ) + error: str = SchemaField( + description="Error message if any", + ) def __init__(self): super().__init__( @@ -685,13 +926,30 @@ class GoogleSheetsManageSheetBlock(Block): output_schema=GoogleSheetsManageSheetBlock.Output, disabled=GOOGLE_SHEETS_DISABLED, test_input={ - "spreadsheet_id": "1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms", + "spreadsheet": { + "id": "1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms", + "name": "Test Spreadsheet", + "mimeType": "application/vnd.google-apps.spreadsheet", + }, "operation": SheetOperation.CREATE, "sheet_name": "NewSheet", - "credentials": TEST_CREDENTIALS_INPUT, }, test_credentials=TEST_CREDENTIALS, - test_output=[("result", {"success": True, "sheetId": 123})], + test_output=[ + ("result", {"success": True, "sheetId": 123}), + ( + "spreadsheet", + GoogleDriveFile( + id="1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms", + name="Test Spreadsheet", + mimeType="application/vnd.google-apps.spreadsheet", + url="https://docs.google.com/spreadsheets/d/1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms/edit", + iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png", + isFolder=False, + _credentials_id=None, + ), + ), + ], test_mock={ "_manage_sheet": lambda *args, **kwargs: { "success": True, @@ -703,18 +961,40 @@ class GoogleSheetsManageSheetBlock(Block): async def run( self, input_data: Input, *, credentials: GoogleCredentials, **kwargs ) -> BlockOutput: - service = _build_sheets_service(credentials) - spreadsheet_id = extract_spreadsheet_id(input_data.spreadsheet_id) - result = await asyncio.to_thread( - self._manage_sheet, - service, - spreadsheet_id, - input_data.operation, - input_data.sheet_name, - input_data.source_sheet_id, - input_data.destination_sheet_name, - ) - yield "result", result + if not input_data.spreadsheet: + yield "error", "No spreadsheet selected" + return + + # Check if the selected file is actually a Google Sheets spreadsheet + validation_error = _validate_spreadsheet_file(input_data.spreadsheet) + if validation_error: + yield "error", validation_error + return + + try: + service = _build_sheets_service(credentials) + result = await asyncio.to_thread( + self._manage_sheet, + service, + input_data.spreadsheet.id, + input_data.operation, + input_data.sheet_name, + input_data.source_sheet_id, + input_data.destination_sheet_name, + ) + yield "result", result + # Output the GoogleDriveFile for chaining (preserves credentials_id) + yield "spreadsheet", GoogleDriveFile( + id=input_data.spreadsheet.id, + name=input_data.spreadsheet.name, + mimeType="application/vnd.google-apps.spreadsheet", + url=f"https://docs.google.com/spreadsheets/d/{input_data.spreadsheet.id}/edit", + iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png", + isFolder=False, + _credentials_id=input_data.spreadsheet.credentials_id, + ) + except Exception as e: + yield "error", f"Failed to manage sheet: {str(e)}" def _manage_sheet( self, @@ -727,17 +1007,21 @@ class GoogleSheetsManageSheetBlock(Block): ) -> dict: requests = [] - # Ensure a target sheet name when needed - target_name = resolve_sheet_name(service, spreadsheet_id, sheet_name) - if operation == 
SheetOperation.CREATE: + # For CREATE, use sheet_name directly or default to "New Sheet" + target_name = sheet_name or "New Sheet" requests.append({"addSheet": {"properties": {"title": target_name}}}) elif operation == SheetOperation.DELETE: + # For DELETE, resolve sheet name (fall back to first sheet if empty) + target_name = resolve_sheet_name( + service, spreadsheet_id, sheet_name or None + ) sid = sheet_id_by_name(service, spreadsheet_id, target_name) if sid is None: return {"error": f"Sheet '{target_name}' not found"} requests.append({"deleteSheet": {"sheetId": sid}}) elif operation == SheetOperation.COPY: + # For COPY, use source_sheet_id and destination_sheet_name directly requests.append( { "duplicateSheet": { @@ -760,22 +1044,25 @@ class GoogleSheetsManageSheetBlock(Block): class GoogleSheetsBatchOperationsBlock(Block): - class Input(BlockSchema): - credentials: GoogleCredentialsInput = GoogleCredentialsField( - ["https://www.googleapis.com/auth/spreadsheets"] - ) - spreadsheet_id: str = SchemaField( - description="The ID or URL of the spreadsheet to perform batch operations on", - title="Spreadsheet ID or URL", + class Input(BlockSchemaInput): + spreadsheet: GoogleDriveFile = GoogleDriveFileField( + title="Spreadsheet", + description="Select a Google Sheets spreadsheet", + credentials_kwarg="credentials", + allowed_views=["SPREADSHEETS"], + allowed_mime_types=["application/vnd.google-apps.spreadsheet"], ) operations: list[BatchOperation] = SchemaField( description="List of operations to perform", ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): result: dict = SchemaField( description="The result of the batch operations", ) + spreadsheet: GoogleDriveFile = SchemaField( + description="The spreadsheet as a GoogleDriveFile (for chaining to other blocks)", + ) error: str = SchemaField( description="Error message if any", ) @@ -789,7 +1076,11 @@ class GoogleSheetsBatchOperationsBlock(Block): output_schema=GoogleSheetsBatchOperationsBlock.Output, disabled=GOOGLE_SHEETS_DISABLED, test_input={ - "spreadsheet_id": "1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms", + "spreadsheet": { + "id": "1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms", + "name": "Test Spreadsheet", + "mimeType": "application/vnd.google-apps.spreadsheet", + }, "operations": [ { "type": BatchOperationType.UPDATE, @@ -802,11 +1093,22 @@ class GoogleSheetsBatchOperationsBlock(Block): "values": [["Data1", "Data2"]], }, ], - "credentials": TEST_CREDENTIALS_INPUT, }, test_credentials=TEST_CREDENTIALS, test_output=[ ("result", {"totalUpdatedCells": 4, "replies": []}), + ( + "spreadsheet", + GoogleDriveFile( + id="1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms", + name="Test Spreadsheet", + mimeType="application/vnd.google-apps.spreadsheet", + url="https://docs.google.com/spreadsheets/d/1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms/edit", + iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png", + isFolder=False, + _credentials_id=None, + ), + ), ], test_mock={ "_batch_operations": lambda *args, **kwargs: { @@ -819,15 +1121,37 @@ class GoogleSheetsBatchOperationsBlock(Block): async def run( self, input_data: Input, *, credentials: GoogleCredentials, **kwargs ) -> BlockOutput: - service = _build_sheets_service(credentials) - spreadsheet_id = extract_spreadsheet_id(input_data.spreadsheet_id) - result = await asyncio.to_thread( - self._batch_operations, - service, - spreadsheet_id, - input_data.operations, - ) - yield "result", result + if not input_data.spreadsheet: + yield "error", "No spreadsheet 
selected" + return + + # Check if the selected file is actually a Google Sheets spreadsheet + validation_error = _validate_spreadsheet_file(input_data.spreadsheet) + if validation_error: + yield "error", validation_error + return + + try: + service = _build_sheets_service(credentials) + result = await asyncio.to_thread( + self._batch_operations, + service, + input_data.spreadsheet.id, + input_data.operations, + ) + yield "result", result + # Output the GoogleDriveFile for chaining (preserves credentials_id) + yield "spreadsheet", GoogleDriveFile( + id=input_data.spreadsheet.id, + name=input_data.spreadsheet.name, + mimeType="application/vnd.google-apps.spreadsheet", + url=f"https://docs.google.com/spreadsheets/d/{input_data.spreadsheet.id}/edit", + iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png", + isFolder=False, + _credentials_id=input_data.spreadsheet.credentials_id, + ) + except Exception as e: + yield "error", f"Failed to perform batch operations: {str(e)}" def _batch_operations( self, service, spreadsheet_id: str, operations: list[BatchOperation] @@ -877,13 +1201,13 @@ class GoogleSheetsBatchOperationsBlock(Block): class GoogleSheetsFindReplaceBlock(Block): - class Input(BlockSchema): - credentials: GoogleCredentialsInput = GoogleCredentialsField( - ["https://www.googleapis.com/auth/spreadsheets"] - ) - spreadsheet_id: str = SchemaField( - description="The ID or URL of the spreadsheet to perform find/replace on", - title="Spreadsheet ID or URL", + class Input(BlockSchemaInput): + spreadsheet: GoogleDriveFile = GoogleDriveFileField( + title="Spreadsheet", + description="Select a Google Sheets spreadsheet", + credentials_kwarg="credentials", + allowed_views=["SPREADSHEETS"], + allowed_mime_types=["application/vnd.google-apps.spreadsheet"], ) find_text: str = SchemaField( description="The text to find", @@ -904,10 +1228,13 @@ class GoogleSheetsFindReplaceBlock(Block): default=False, ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): result: dict = SchemaField( description="The result of the find/replace operation including number of replacements", ) + spreadsheet: GoogleDriveFile = SchemaField( + description="The spreadsheet as a GoogleDriveFile (for chaining to other blocks)", + ) error: str = SchemaField( description="Error message if any", ) @@ -921,16 +1248,31 @@ class GoogleSheetsFindReplaceBlock(Block): output_schema=GoogleSheetsFindReplaceBlock.Output, disabled=GOOGLE_SHEETS_DISABLED, test_input={ - "spreadsheet_id": "1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms", + "spreadsheet": { + "id": "1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms", + "name": "Test Spreadsheet", + "mimeType": "application/vnd.google-apps.spreadsheet", + }, "find_text": "old_value", "replace_text": "new_value", "match_case": False, "match_entire_cell": False, - "credentials": TEST_CREDENTIALS_INPUT, }, test_credentials=TEST_CREDENTIALS, test_output=[ ("result", {"occurrencesChanged": 5}), + ( + "spreadsheet", + GoogleDriveFile( + id="1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms", + name="Test Spreadsheet", + mimeType="application/vnd.google-apps.spreadsheet", + url="https://docs.google.com/spreadsheets/d/1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms/edit", + iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png", + isFolder=False, + _credentials_id=None, + ), + ), ], test_mock={ "_find_replace": lambda *args, **kwargs: {"occurrencesChanged": 5}, @@ -940,19 +1282,41 @@ class GoogleSheetsFindReplaceBlock(Block): async def run( self, input_data: 
Input, *, credentials: GoogleCredentials, **kwargs ) -> BlockOutput: - service = _build_sheets_service(credentials) - spreadsheet_id = extract_spreadsheet_id(input_data.spreadsheet_id) - result = await asyncio.to_thread( - self._find_replace, - service, - spreadsheet_id, - input_data.find_text, - input_data.replace_text, - input_data.sheet_id, - input_data.match_case, - input_data.match_entire_cell, - ) - yield "result", result + if not input_data.spreadsheet: + yield "error", "No spreadsheet selected" + return + + # Check if the selected file is actually a Google Sheets spreadsheet + validation_error = _validate_spreadsheet_file(input_data.spreadsheet) + if validation_error: + yield "error", validation_error + return + + try: + service = _build_sheets_service(credentials) + result = await asyncio.to_thread( + self._find_replace, + service, + input_data.spreadsheet.id, + input_data.find_text, + input_data.replace_text, + input_data.sheet_id, + input_data.match_case, + input_data.match_entire_cell, + ) + yield "result", result + # Output the GoogleDriveFile for chaining (preserves credentials_id) + yield "spreadsheet", GoogleDriveFile( + id=input_data.spreadsheet.id, + name=input_data.spreadsheet.name, + mimeType="application/vnd.google-apps.spreadsheet", + url=f"https://docs.google.com/spreadsheets/d/{input_data.spreadsheet.id}/edit", + iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png", + isFolder=False, + _credentials_id=input_data.spreadsheet.credentials_id, + ) + except Exception as e: + yield "error", f"Failed to find/replace in Google Sheet: {str(e)}" def _find_replace( self, @@ -987,13 +1351,13 @@ class GoogleSheetsFindReplaceBlock(Block): class GoogleSheetsFindBlock(Block): - class Input(BlockSchema): - credentials: GoogleCredentialsInput = GoogleCredentialsField( - ["https://www.googleapis.com/auth/spreadsheets.readonly"] - ) - spreadsheet_id: str = SchemaField( - description="The ID or URL of the spreadsheet to search in", - title="Spreadsheet ID or URL", + class Input(BlockSchemaInput): + spreadsheet: GoogleDriveFile = GoogleDriveFileField( + title="Spreadsheet", + description="Select a Google Sheets spreadsheet", + credentials_kwarg="credentials", + allowed_views=["SPREADSHEETS"], + allowed_mime_types=["application/vnd.google-apps.spreadsheet"], ) find_text: str = SchemaField( description="The text to find", @@ -1020,7 +1384,7 @@ class GoogleSheetsFindBlock(Block): advanced=True, ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): result: dict = SchemaField( description="The result of the find operation including locations and count", ) @@ -1030,6 +1394,9 @@ class GoogleSheetsFindBlock(Block): count: int = SchemaField( description="Number of occurrences found", ) + spreadsheet: GoogleDriveFile = SchemaField( + description="The spreadsheet as a GoogleDriveFile (for chaining to other blocks)", + ) error: str = SchemaField( description="Error message if any", ) @@ -1043,13 +1410,16 @@ class GoogleSheetsFindBlock(Block): output_schema=GoogleSheetsFindBlock.Output, disabled=GOOGLE_SHEETS_DISABLED, test_input={ - "spreadsheet_id": "1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms", + "spreadsheet": { + "id": "1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms", + "name": "Test Spreadsheet", + "mimeType": "application/vnd.google-apps.spreadsheet", + }, "find_text": "search_value", "match_case": False, "match_entire_cell": False, "find_all": True, "range": "Sheet1!A1:C10", - "credentials": TEST_CREDENTIALS_INPUT, }, test_credentials=TEST_CREDENTIALS, 
test_output=[ @@ -1063,6 +1433,18 @@ class GoogleSheetsFindBlock(Block): ], ), ("result", {"success": True}), + ( + "spreadsheet", + GoogleDriveFile( + id="1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms", + name="Test Spreadsheet", + mimeType="application/vnd.google-apps.spreadsheet", + url="https://docs.google.com/spreadsheets/d/1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms/edit", + iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png", + isFolder=False, + _credentials_id=None, + ), + ), ], test_mock={ "_find_text": lambda *args, **kwargs: { @@ -1079,22 +1461,44 @@ class GoogleSheetsFindBlock(Block): async def run( self, input_data: Input, *, credentials: GoogleCredentials, **kwargs ) -> BlockOutput: - service = _build_sheets_service(credentials) - spreadsheet_id = extract_spreadsheet_id(input_data.spreadsheet_id) - result = await asyncio.to_thread( - self._find_text, - service, - spreadsheet_id, - input_data.find_text, - input_data.sheet_id, - input_data.match_case, - input_data.match_entire_cell, - input_data.find_all, - input_data.range, - ) - yield "count", result["count"] - yield "locations", result["locations"] - yield "result", {"success": True} + if not input_data.spreadsheet: + yield "error", "No spreadsheet selected" + return + + # Check if the selected file is actually a Google Sheets spreadsheet + validation_error = _validate_spreadsheet_file(input_data.spreadsheet) + if validation_error: + yield "error", validation_error + return + + try: + service = _build_sheets_service(credentials) + result = await asyncio.to_thread( + self._find_text, + service, + input_data.spreadsheet.id, + input_data.find_text, + input_data.sheet_id, + input_data.match_case, + input_data.match_entire_cell, + input_data.find_all, + input_data.range, + ) + yield "count", result["count"] + yield "locations", result["locations"] + yield "result", {"success": True} + # Output the GoogleDriveFile for chaining (preserves credentials_id) + yield "spreadsheet", GoogleDriveFile( + id=input_data.spreadsheet.id, + name=input_data.spreadsheet.name, + mimeType="application/vnd.google-apps.spreadsheet", + url=f"https://docs.google.com/spreadsheets/d/{input_data.spreadsheet.id}/edit", + iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png", + isFolder=False, + _credentials_id=input_data.spreadsheet.credentials_id, + ) + except Exception as e: + yield "error", f"Failed to find text in Google Sheet: {str(e)}" def _find_text( self, @@ -1255,24 +1659,32 @@ class GoogleSheetsFindBlock(Block): class GoogleSheetsFormatBlock(Block): - class Input(BlockSchema): - credentials: GoogleCredentialsInput = GoogleCredentialsField( - ["https://www.googleapis.com/auth/spreadsheets"] + class Input(BlockSchemaInput): + spreadsheet: GoogleDriveFile = GoogleDriveFileField( + title="Spreadsheet", + description="Select a Google Sheets spreadsheet", + credentials_kwarg="credentials", + allowed_views=["SPREADSHEETS"], + allowed_mime_types=["application/vnd.google-apps.spreadsheet"], ) - spreadsheet_id: str = SchemaField( - description="Spreadsheet ID or URL", - title="Spreadsheet ID or URL", + range: str = SchemaField( + description="A1 notation – sheet optional", + placeholder="Sheet1!A1:B2", ) - range: str = SchemaField(description="A1 notation – sheet optional") background_color: dict = SchemaField(default={}) text_color: dict = SchemaField(default={}) bold: bool = SchemaField(default=False) italic: bool = SchemaField(default=False) font_size: int = SchemaField(default=10) - class 
Output(BlockSchema): + class Output(BlockSchemaOutput): result: dict = SchemaField(description="API response or success flag") - error: str = SchemaField(description="Error message, if any") + spreadsheet: GoogleDriveFile = SchemaField( + description="The spreadsheet as a GoogleDriveFile (for chaining to other blocks)", + ) + error: str = SchemaField( + description="Error message if any", + ) def __init__(self): super().__init__( @@ -1283,37 +1695,76 @@ class GoogleSheetsFormatBlock(Block): output_schema=GoogleSheetsFormatBlock.Output, disabled=GOOGLE_SHEETS_DISABLED, test_input={ - "spreadsheet_id": "1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms", + "spreadsheet": { + "id": "1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms", + "name": "Test Spreadsheet", + "mimeType": "application/vnd.google-apps.spreadsheet", + }, "range": "A1:B2", "background_color": {"red": 1.0, "green": 0.9, "blue": 0.9}, "bold": True, - "credentials": TEST_CREDENTIALS_INPUT, }, test_credentials=TEST_CREDENTIALS, - test_output=[("result", {"success": True})], + test_output=[ + ("result", {"success": True}), + ( + "spreadsheet", + GoogleDriveFile( + id="1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms", + name="Test Spreadsheet", + mimeType="application/vnd.google-apps.spreadsheet", + url="https://docs.google.com/spreadsheets/d/1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms/edit", + iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png", + isFolder=False, + _credentials_id=None, + ), + ), + ], test_mock={"_format_cells": lambda *args, **kwargs: {"success": True}}, ) async def run( self, input_data: Input, *, credentials: GoogleCredentials, **kwargs ) -> BlockOutput: - service = _build_sheets_service(credentials) - spreadsheet_id = extract_spreadsheet_id(input_data.spreadsheet_id) - result = await asyncio.to_thread( - self._format_cells, - service, - spreadsheet_id, - input_data.range, - input_data.background_color, - input_data.text_color, - input_data.bold, - input_data.italic, - input_data.font_size, - ) - if "error" in result: - yield "error", result["error"] - else: - yield "result", result + if not input_data.spreadsheet: + yield "error", "No spreadsheet selected" + return + + # Check if the selected file is actually a Google Sheets spreadsheet + validation_error = _validate_spreadsheet_file(input_data.spreadsheet) + if validation_error: + yield "error", validation_error + return + + try: + service = _build_sheets_service(credentials) + result = await asyncio.to_thread( + self._format_cells, + service, + input_data.spreadsheet.id, + input_data.range, + input_data.background_color, + input_data.text_color, + input_data.bold, + input_data.italic, + input_data.font_size, + ) + if "error" in result: + yield "error", result["error"] + else: + yield "result", result + # Output the GoogleDriveFile for chaining (preserves credentials_id) + yield "spreadsheet", GoogleDriveFile( + id=input_data.spreadsheet.id, + name=input_data.spreadsheet.name, + mimeType="application/vnd.google-apps.spreadsheet", + url=f"https://docs.google.com/spreadsheets/d/{input_data.spreadsheet.id}/edit", + iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png", + isFolder=False, + _credentials_id=input_data.spreadsheet.credentials_id, + ) + except Exception as e: + yield "error", f"Failed to format Google Sheet cells: {str(e)}" def _format_cells( self, @@ -1383,9 +1834,10 @@ class GoogleSheetsFormatBlock(Block): class GoogleSheetsCreateSpreadsheetBlock(Block): - class Input(BlockSchema): + class 
Input(BlockSchemaInput): + # Explicit credentials since this block creates a file (no file picker) credentials: GoogleCredentialsInput = GoogleCredentialsField( - ["https://www.googleapis.com/auth/spreadsheets"] + ["https://www.googleapis.com/auth/drive.file"] ) title: str = SchemaField( description="The title of the new spreadsheet", @@ -1395,10 +1847,13 @@ class GoogleSheetsCreateSpreadsheetBlock(Block): default=["Sheet1"], ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): result: dict = SchemaField( description="The result containing spreadsheet ID and URL", ) + spreadsheet: GoogleDriveFile = SchemaField( + description="The created spreadsheet as a GoogleDriveFile (for chaining to other blocks)", + ) spreadsheet_id: str = SchemaField( description="The ID of the created spreadsheet", ) @@ -1418,12 +1873,26 @@ class GoogleSheetsCreateSpreadsheetBlock(Block): output_schema=GoogleSheetsCreateSpreadsheetBlock.Output, disabled=GOOGLE_SHEETS_DISABLED, test_input={ + "credentials": TEST_CREDENTIALS_INPUT, "title": "Test Spreadsheet", "sheet_names": ["Sheet1", "Data", "Summary"], - "credentials": TEST_CREDENTIALS_INPUT, }, test_credentials=TEST_CREDENTIALS, test_output=[ + ( + "spreadsheet", + GoogleDriveFile( + id="1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms", + name="Test Spreadsheet", + mimeType="application/vnd.google-apps.spreadsheet", + url="https://docs.google.com/spreadsheets/d/1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms/edit", + iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png", + isFolder=False, + _credentials_id=TEST_CREDENTIALS_INPUT[ + "id" + ], # Preserves credential ID for chaining + ), + ), ("spreadsheet_id", "1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms"), ( "spreadsheet_url", @@ -1435,6 +1904,7 @@ class GoogleSheetsCreateSpreadsheetBlock(Block): "_create_spreadsheet": lambda *args, **kwargs: { "spreadsheetId": "1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms", "spreadsheetUrl": "https://docs.google.com/spreadsheets/d/1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms/edit", + "title": "Test Spreadsheet", }, }, ) @@ -1442,10 +1912,12 @@ class GoogleSheetsCreateSpreadsheetBlock(Block): async def run( self, input_data: Input, *, credentials: GoogleCredentials, **kwargs ) -> BlockOutput: - service = _build_sheets_service(credentials) + drive_service = _build_drive_service(credentials) + sheets_service = _build_sheets_service(credentials) result = await asyncio.to_thread( self._create_spreadsheet, - service, + drive_service, + sheets_service, input_data.title, input_data.sheet_names, ) @@ -1453,43 +1925,4607 @@ class GoogleSheetsCreateSpreadsheetBlock(Block): if "error" in result: yield "error", result["error"] else: - yield "spreadsheet_id", result["spreadsheetId"] - yield "spreadsheet_url", result["spreadsheetUrl"] - yield "result", {"success": True} - - def _create_spreadsheet(self, service, title: str, sheet_names: list[str]) -> dict: - try: - # Create the initial spreadsheet - spreadsheet_body = { - "properties": {"title": title}, - "sheets": [ - { - "properties": { - "title": sheet_names[0] if sheet_names else "Sheet1" - } - } - ], - } - - result = service.spreadsheets().create(body=spreadsheet_body).execute() spreadsheet_id = result["spreadsheetId"] spreadsheet_url = result["spreadsheetUrl"] + # Output the GoogleDriveFile for chaining (includes credentials_id) + yield "spreadsheet", GoogleDriveFile( + id=spreadsheet_id, + name=result.get("title", input_data.title), + mimeType="application/vnd.google-apps.spreadsheet", + url=spreadsheet_url, 
+ iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png", + isFolder=False, + _credentials_id=input_data.credentials.id, # Preserve credentials for chaining + ) + yield "spreadsheet_id", spreadsheet_id + yield "spreadsheet_url", spreadsheet_url + yield "result", {"success": True} + + def _create_spreadsheet( + self, drive_service, sheets_service, title: str, sheet_names: list[str] + ) -> dict: + try: + # Create blank spreadsheet using Drive API + file_metadata = { + "name": title, + "mimeType": "application/vnd.google-apps.spreadsheet", + } + result = ( + drive_service.files() + .create(body=file_metadata, fields="id, webViewLink") + .execute() + ) + + spreadsheet_id = result["id"] + spreadsheet_url = result.get( + "webViewLink", + f"https://docs.google.com/spreadsheets/d/{spreadsheet_id}/edit", + ) + + # Rename first sheet if custom name provided (default is "Sheet1") + if sheet_names and sheet_names[0] != "Sheet1": + # Get first sheet ID and rename it + meta = ( + sheets_service.spreadsheets() + .get(spreadsheetId=spreadsheet_id) + .execute() + ) + first_sheet_id = meta["sheets"][0]["properties"]["sheetId"] + sheets_service.spreadsheets().batchUpdate( + spreadsheetId=spreadsheet_id, + body={ + "requests": [ + { + "updateSheetProperties": { + "properties": { + "sheetId": first_sheet_id, + "title": sheet_names[0], + }, + "fields": "title", + } + } + ] + }, + ).execute() # Add additional sheets if requested if len(sheet_names) > 1: - requests = [] - for sheet_name in sheet_names[1:]: - requests.append({"addSheet": {"properties": {"title": sheet_name}}}) - - if requests: - batch_body = {"requests": requests} - service.spreadsheets().batchUpdate( - spreadsheetId=spreadsheet_id, body=batch_body - ).execute() + requests = [ + {"addSheet": {"properties": {"title": name}}} + for name in sheet_names[1:] + ] + sheets_service.spreadsheets().batchUpdate( + spreadsheetId=spreadsheet_id, body={"requests": requests} + ).execute() return { "spreadsheetId": spreadsheet_id, "spreadsheetUrl": spreadsheet_url, + "title": title, } except Exception as e: return {"error": str(e)} + + +class GoogleSheetsUpdateCellBlock(Block): + """Update a single cell in a Google Sheets spreadsheet.""" + + class Input(BlockSchemaInput): + spreadsheet: GoogleDriveFile = GoogleDriveFileField( + title="Spreadsheet", + description="Select a Google Sheets spreadsheet", + credentials_kwarg="credentials", + allowed_views=["SPREADSHEETS"], + allowed_mime_types=["application/vnd.google-apps.spreadsheet"], + ) + cell: str = SchemaField( + description="Cell address in A1 notation (e.g., 'A1', 'Sheet1!B2')", + placeholder="A1", + ) + value: str = SchemaField( + description="Value to write to the cell", + ) + value_input_option: ValueInputOption = SchemaField( + description="How input data should be interpreted", + default=ValueInputOption.USER_ENTERED, + advanced=True, + ) + + class Output(BlockSchemaOutput): + result: dict = SchemaField( + description="The result of the update operation", + ) + spreadsheet: GoogleDriveFile = SchemaField( + description="The spreadsheet as a GoogleDriveFile (for chaining to other blocks)", + ) + error: str = SchemaField( + description="Error message if any", + ) + + def __init__(self): + super().__init__( + id="df521b68-62d9-42e4-924f-fb6c245516fc", + description="Update a single cell in a Google Sheets spreadsheet.", + categories={BlockCategory.DATA}, + input_schema=GoogleSheetsUpdateCellBlock.Input, + output_schema=GoogleSheetsUpdateCellBlock.Output, + 
disabled=GOOGLE_SHEETS_DISABLED, + test_input={ + "spreadsheet": { + "id": "1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms", + "name": "Test Spreadsheet", + "mimeType": "application/vnd.google-apps.spreadsheet", + }, + "cell": "A1", + "value": "Hello World", + }, + test_credentials=TEST_CREDENTIALS, + test_output=[ + ( + "result", + {"updatedCells": 1, "updatedColumns": 1, "updatedRows": 1}, + ), + ( + "spreadsheet", + GoogleDriveFile( + id="1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms", + name="Test Spreadsheet", + mimeType="application/vnd.google-apps.spreadsheet", + url="https://docs.google.com/spreadsheets/d/1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms/edit", + iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png", + isFolder=False, + _credentials_id=None, + ), + ), + ], + test_mock={ + "_update_cell": lambda *args, **kwargs: { + "updatedCells": 1, + "updatedColumns": 1, + "updatedRows": 1, + }, + }, + ) + + async def run( + self, input_data: Input, *, credentials: GoogleCredentials, **kwargs + ) -> BlockOutput: + try: + if not input_data.spreadsheet: + yield "error", "No spreadsheet selected" + return + + # Check if the selected file is actually a Google Sheets spreadsheet + validation_error = _validate_spreadsheet_file(input_data.spreadsheet) + if validation_error: + yield "error", validation_error + return + + service = _build_sheets_service(credentials) + result = await asyncio.to_thread( + self._update_cell, + service, + input_data.spreadsheet.id, + input_data.cell, + input_data.value, + input_data.value_input_option, + ) + + yield "result", result + # Output the GoogleDriveFile for chaining (preserves credentials_id) + yield "spreadsheet", GoogleDriveFile( + id=input_data.spreadsheet.id, + name=input_data.spreadsheet.name, + mimeType="application/vnd.google-apps.spreadsheet", + url=f"https://docs.google.com/spreadsheets/d/{input_data.spreadsheet.id}/edit", + iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png", + isFolder=False, + _credentials_id=input_data.spreadsheet.credentials_id, + ) + except Exception as e: + yield "error", _handle_sheets_api_error(str(e), "update") + + def _update_cell( + self, + service, + spreadsheet_id: str, + cell: str, + value: str, + value_input_option: ValueInputOption, + ) -> dict: + body = {"values": [[value]]} + result = ( + service.spreadsheets() + .values() + .update( + spreadsheetId=spreadsheet_id, + range=cell, + valueInputOption=value_input_option.value, + body=body, + ) + .execute() + ) + return { + "updatedCells": result.get("updatedCells", 0), + "updatedRows": result.get("updatedRows", 0), + "updatedColumns": result.get("updatedColumns", 0), + } + + +class FilterOperator(str, Enum): + EQUALS = "equals" + NOT_EQUALS = "not_equals" + CONTAINS = "contains" + NOT_CONTAINS = "not_contains" + GREATER_THAN = "greater_than" + LESS_THAN = "less_than" + GREATER_THAN_OR_EQUAL = "greater_than_or_equal" + LESS_THAN_OR_EQUAL = "less_than_or_equal" + IS_EMPTY = "is_empty" + IS_NOT_EMPTY = "is_not_empty" + + +class SortOrder(str, Enum): + ASCENDING = "ascending" + DESCENDING = "descending" + + +def _column_letter_to_index(letter: str) -> int: + """Convert column letter (A, B, ..., Z, AA, AB, ...) 
to 0-based index.""" + result = 0 + for char in letter.upper(): + result = result * 26 + (ord(char) - ord("A") + 1) + return result - 1 + + +def _index_to_column_letter(index: int) -> str: + """Convert 0-based column index to column letter (A, B, ..., Z, AA, AB, ...).""" + result = "" + index += 1 # Convert to 1-based + while index > 0: + index, remainder = divmod(index - 1, 26) + result = chr(ord("A") + remainder) + result + return result + + +def _apply_filter( + cell_value: str, + filter_value: str, + operator: FilterOperator, + match_case: bool, +) -> bool: + """Apply a filter condition to a cell value.""" + if operator == FilterOperator.IS_EMPTY: + return cell_value.strip() == "" + if operator == FilterOperator.IS_NOT_EMPTY: + return cell_value.strip() != "" + + # For comparison operators, apply case sensitivity + compare_cell = cell_value if match_case else cell_value.lower() + compare_filter = filter_value if match_case else filter_value.lower() + + if operator == FilterOperator.EQUALS: + return compare_cell == compare_filter + elif operator == FilterOperator.NOT_EQUALS: + return compare_cell != compare_filter + elif operator == FilterOperator.CONTAINS: + return compare_filter in compare_cell + elif operator == FilterOperator.NOT_CONTAINS: + return compare_filter not in compare_cell + elif operator in ( + FilterOperator.GREATER_THAN, + FilterOperator.LESS_THAN, + FilterOperator.GREATER_THAN_OR_EQUAL, + FilterOperator.LESS_THAN_OR_EQUAL, + ): + # Try numeric comparison first + try: + num_cell = float(cell_value) + num_filter = float(filter_value) + if operator == FilterOperator.GREATER_THAN: + return num_cell > num_filter + elif operator == FilterOperator.LESS_THAN: + return num_cell < num_filter + elif operator == FilterOperator.GREATER_THAN_OR_EQUAL: + return num_cell >= num_filter + elif operator == FilterOperator.LESS_THAN_OR_EQUAL: + return num_cell <= num_filter + except ValueError: + # Fall back to string comparison + if operator == FilterOperator.GREATER_THAN: + return compare_cell > compare_filter + elif operator == FilterOperator.LESS_THAN: + return compare_cell < compare_filter + elif operator == FilterOperator.GREATER_THAN_OR_EQUAL: + return compare_cell >= compare_filter + elif operator == FilterOperator.LESS_THAN_OR_EQUAL: + return compare_cell <= compare_filter + + return False + + +class GoogleSheetsFilterRowsBlock(Block): + """Filter rows in a Google Sheet based on column conditions.""" + + class Input(BlockSchemaInput): + spreadsheet: GoogleDriveFile = GoogleDriveFileField( + title="Spreadsheet", + description="Select a Google Sheets spreadsheet", + credentials_kwarg="credentials", + allowed_views=["SPREADSHEETS"], + allowed_mime_types=["application/vnd.google-apps.spreadsheet"], + ) + sheet_name: str = SchemaField( + description="Sheet name (optional, defaults to first sheet)", + default="", + ) + filter_column: str = SchemaField( + description="Column to filter on (header name or column letter like 'A', 'B')", + placeholder="Status", + ) + filter_value: str = SchemaField( + description="Value to filter by (not used for is_empty/is_not_empty operators)", + default="", + ) + operator: FilterOperator = SchemaField( + description="Filter comparison operator", + default=FilterOperator.EQUALS, + ) + match_case: bool = SchemaField( + description="Whether to match case in comparisons", + default=False, + ) + include_header: bool = SchemaField( + description="Include header row in output", + default=True, + ) + + class Output(BlockSchemaOutput): + rows: list[list[str]] = 
SchemaField( + description="Filtered rows (including header if requested)", + ) + row_indices: list[int] = SchemaField( + description="Original 1-based row indices of matching rows (useful for deletion)", + ) + count: int = SchemaField( + description="Number of matching rows (excluding header)", + ) + spreadsheet: GoogleDriveFile = SchemaField( + description="The spreadsheet for chaining", + ) + error: str = SchemaField(description="Error message if any") + + def __init__(self): + super().__init__( + id="582195c2-ccee-4fc2-b646-18f72eb9906c", + description="Filter rows in a Google Sheet based on a column condition. Returns matching rows and their indices.", + categories={BlockCategory.DATA}, + input_schema=GoogleSheetsFilterRowsBlock.Input, + output_schema=GoogleSheetsFilterRowsBlock.Output, + disabled=GOOGLE_SHEETS_DISABLED, + test_input={ + "spreadsheet": { + "id": "1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms", + "name": "Test Spreadsheet", + "mimeType": "application/vnd.google-apps.spreadsheet", + }, + "filter_column": "Status", + "filter_value": "Active", + "operator": FilterOperator.EQUALS, + }, + test_credentials=TEST_CREDENTIALS, + test_output=[ + ( + "rows", + [ + ["Name", "Status", "Score"], + ["Alice", "Active", "85"], + ["Charlie", "Active", "92"], + ], + ), + ("row_indices", [2, 4]), + ("count", 2), + ( + "spreadsheet", + GoogleDriveFile( + id="1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms", + name="Test Spreadsheet", + mimeType="application/vnd.google-apps.spreadsheet", + url="https://docs.google.com/spreadsheets/d/1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms/edit", + iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png", + isFolder=False, + _credentials_id=None, + ), + ), + ], + test_mock={ + "_filter_rows": lambda *args, **kwargs: { + "rows": [ + ["Name", "Status", "Score"], + ["Alice", "Active", "85"], + ["Charlie", "Active", "92"], + ], + "row_indices": [2, 4], + "count": 2, + }, + }, + ) + + async def run( + self, input_data: Input, *, credentials: GoogleCredentials, **kwargs + ) -> BlockOutput: + if not input_data.spreadsheet: + yield "error", "No spreadsheet selected" + return + + validation_error = _validate_spreadsheet_file(input_data.spreadsheet) + if validation_error: + yield "error", validation_error + return + + try: + service = _build_sheets_service(credentials) + result = await asyncio.to_thread( + self._filter_rows, + service, + input_data.spreadsheet.id, + input_data.sheet_name, + input_data.filter_column, + input_data.filter_value, + input_data.operator, + input_data.match_case, + input_data.include_header, + ) + yield "rows", result["rows"] + yield "row_indices", result["row_indices"] + yield "count", result["count"] + yield "spreadsheet", GoogleDriveFile( + id=input_data.spreadsheet.id, + name=input_data.spreadsheet.name, + mimeType="application/vnd.google-apps.spreadsheet", + url=f"https://docs.google.com/spreadsheets/d/{input_data.spreadsheet.id}/edit", + iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png", + isFolder=False, + _credentials_id=input_data.spreadsheet.credentials_id, + ) + except Exception as e: + yield "error", f"Failed to filter rows: {str(e)}" + + def _filter_rows( + self, + service, + spreadsheet_id: str, + sheet_name: str, + filter_column: str, + filter_value: str, + operator: FilterOperator, + match_case: bool, + include_header: bool, + ) -> dict: + # Resolve sheet name + target_sheet = resolve_sheet_name(service, spreadsheet_id, sheet_name or None) + formatted_sheet = 
format_sheet_name(target_sheet) + + # Read all data from the sheet + result = ( + service.spreadsheets() + .values() + .get(spreadsheetId=spreadsheet_id, range=formatted_sheet) + .execute() + ) + all_rows = result.get("values", []) + + if not all_rows: + return {"rows": [], "row_indices": [], "count": 0} + + header = all_rows[0] + data_rows = all_rows[1:] + + # Determine filter column index + filter_col_idx = -1 + + # First, try to match against header names (handles "ID", "No", "To", etc.) + for idx, col_name in enumerate(header): + if (match_case and col_name == filter_column) or ( + not match_case and col_name.lower() == filter_column.lower() + ): + filter_col_idx = idx + break + + # If no header match and looks like a column letter (A, B, AA, etc.), try that + if filter_col_idx < 0 and filter_column.isalpha() and len(filter_column) <= 2: + filter_col_idx = _column_letter_to_index(filter_column) + # Validate column letter is within data range + if filter_col_idx >= len(header): + raise ValueError( + f"Column '{filter_column}' (index {filter_col_idx}) is out of range. " + f"Sheet only has {len(header)} columns (A-{_index_to_column_letter(len(header) - 1)})." + ) + + if filter_col_idx < 0: + raise ValueError( + f"Column '{filter_column}' not found. Available columns: {header}" + ) + + # Filter rows + filtered_rows = [] + row_indices = [] + + for row_idx, row in enumerate(data_rows): + # Get cell value (handle rows shorter than filter column) + cell_value = row[filter_col_idx] if filter_col_idx < len(row) else "" + + if _apply_filter(str(cell_value), filter_value, operator, match_case): + filtered_rows.append(row) + row_indices.append(row_idx + 2) # +2 for 1-based index and header + + # Prepare output + output_rows = [] + if include_header: + output_rows.append(header) + output_rows.extend(filtered_rows) + + return { + "rows": output_rows, + "row_indices": row_indices, + "count": len(filtered_rows), + } + + +class GoogleSheetsLookupRowBlock(Block): + """Look up a row by matching a value in a column (VLOOKUP-style).""" + + class Input(BlockSchemaInput): + spreadsheet: GoogleDriveFile = GoogleDriveFileField( + title="Spreadsheet", + description="Select a Google Sheets spreadsheet", + credentials_kwarg="credentials", + allowed_views=["SPREADSHEETS"], + allowed_mime_types=["application/vnd.google-apps.spreadsheet"], + ) + sheet_name: str = SchemaField( + description="Sheet name (optional, defaults to first sheet)", + default="", + ) + lookup_column: str = SchemaField( + description="Column to search in (header name or column letter)", + placeholder="ID", + ) + lookup_value: str = SchemaField( + description="Value to search for", + ) + return_columns: list[str] = SchemaField( + description="Columns to return (header names or letters). 
Empty = all columns.", + default=[], + ) + match_case: bool = SchemaField( + description="Whether to match case", + default=False, + ) + + class Output(BlockSchemaOutput): + row: list[str] = SchemaField( + description="The matching row (all or selected columns)", + ) + row_dict: dict[str, str] = SchemaField( + description="The matching row as a dictionary (header: value)", + ) + row_index: int = SchemaField( + description="1-based row index of the match", + ) + found: bool = SchemaField( + description="Whether a match was found", + ) + spreadsheet: GoogleDriveFile = SchemaField( + description="The spreadsheet for chaining", + ) + error: str = SchemaField(description="Error message if any") + + def __init__(self): + super().__init__( + id="e58c0bad-6597-400c-9548-d151ec428ffc", + description="Look up a row by finding a value in a specific column. Returns the first matching row and optionally specific columns.", + categories={BlockCategory.DATA}, + input_schema=GoogleSheetsLookupRowBlock.Input, + output_schema=GoogleSheetsLookupRowBlock.Output, + disabled=GOOGLE_SHEETS_DISABLED, + test_input={ + "spreadsheet": { + "id": "1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms", + "name": "Test Spreadsheet", + "mimeType": "application/vnd.google-apps.spreadsheet", + }, + "lookup_column": "ID", + "lookup_value": "123", + "return_columns": ["Name", "Email"], + }, + test_credentials=TEST_CREDENTIALS, + test_output=[ + ("row", ["Alice", "alice@example.com"]), + ("row_dict", {"Name": "Alice", "Email": "alice@example.com"}), + ("row_index", 2), + ("found", True), + ( + "spreadsheet", + GoogleDriveFile( + id="1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms", + name="Test Spreadsheet", + mimeType="application/vnd.google-apps.spreadsheet", + url="https://docs.google.com/spreadsheets/d/1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms/edit", + iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png", + isFolder=False, + _credentials_id=None, + ), + ), + ], + test_mock={ + "_lookup_row": lambda *args, **kwargs: { + "row": ["Alice", "alice@example.com"], + "row_dict": {"Name": "Alice", "Email": "alice@example.com"}, + "row_index": 2, + "found": True, + }, + }, + ) + + async def run( + self, input_data: Input, *, credentials: GoogleCredentials, **kwargs + ) -> BlockOutput: + if not input_data.spreadsheet: + yield "error", "No spreadsheet selected" + return + + validation_error = _validate_spreadsheet_file(input_data.spreadsheet) + if validation_error: + yield "error", validation_error + return + + try: + service = _build_sheets_service(credentials) + result = await asyncio.to_thread( + self._lookup_row, + service, + input_data.spreadsheet.id, + input_data.sheet_name, + input_data.lookup_column, + input_data.lookup_value, + input_data.return_columns, + input_data.match_case, + ) + yield "row", result["row"] + yield "row_dict", result["row_dict"] + yield "row_index", result["row_index"] + yield "found", result["found"] + yield "spreadsheet", GoogleDriveFile( + id=input_data.spreadsheet.id, + name=input_data.spreadsheet.name, + mimeType="application/vnd.google-apps.spreadsheet", + url=f"https://docs.google.com/spreadsheets/d/{input_data.spreadsheet.id}/edit", + iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png", + isFolder=False, + _credentials_id=input_data.spreadsheet.credentials_id, + ) + except Exception as e: + yield "error", f"Failed to lookup row: {str(e)}" + + def _lookup_row( + self, + service, + spreadsheet_id: str, + sheet_name: str, + lookup_column: str, + lookup_value: 
str, + return_columns: list[str], + match_case: bool, + ) -> dict: + target_sheet = resolve_sheet_name(service, spreadsheet_id, sheet_name or None) + formatted_sheet = format_sheet_name(target_sheet) + + result = ( + service.spreadsheets() + .values() + .get(spreadsheetId=spreadsheet_id, range=formatted_sheet) + .execute() + ) + all_rows = result.get("values", []) + + if not all_rows: + return {"row": [], "row_dict": {}, "row_index": 0, "found": False} + + header = all_rows[0] + data_rows = all_rows[1:] + + # Find lookup column index - first try header name match, then column letter + lookup_col_idx = -1 + for idx, col_name in enumerate(header): + if (match_case and col_name == lookup_column) or ( + not match_case and col_name.lower() == lookup_column.lower() + ): + lookup_col_idx = idx + break + + # If no header match and looks like a column letter, try that + if lookup_col_idx < 0 and lookup_column.isalpha() and len(lookup_column) <= 2: + lookup_col_idx = _column_letter_to_index(lookup_column) + # Validate column letter is within data range + if lookup_col_idx >= len(header): + raise ValueError( + f"Column '{lookup_column}' (index {lookup_col_idx}) is out of range. " + f"Sheet only has {len(header)} columns (A-{_index_to_column_letter(len(header) - 1)})." + ) + + if lookup_col_idx < 0: + raise ValueError( + f"Lookup column '{lookup_column}' not found. Available: {header}" + ) + + # Find return column indices - first try header name match, then column letter + return_col_indices = [] + return_col_headers = [] + if return_columns: + for ret_col in return_columns: + found = False + # First try header name match + for idx, col_name in enumerate(header): + if (match_case and col_name == ret_col) or ( + not match_case and col_name.lower() == ret_col.lower() + ): + return_col_indices.append(idx) + return_col_headers.append(col_name) + found = True + break + + # If no header match and looks like a column letter, try that + if not found and ret_col.isalpha() and len(ret_col) <= 2: + idx = _column_letter_to_index(ret_col) + # Validate column letter is within data range + if idx >= len(header): + raise ValueError( + f"Return column '{ret_col}' (index {idx}) is out of range. " + f"Sheet only has {len(header)} columns (A-{_index_to_column_letter(len(header) - 1)})." + ) + return_col_indices.append(idx) + return_col_headers.append(header[idx]) + found = True + + if not found: + raise ValueError( + f"Return column '{ret_col}' not found. 
Available: {header}" + ) + else: + return_col_indices = list(range(len(header))) + return_col_headers = header + + # Search for matching row + compare_value = lookup_value if match_case else lookup_value.lower() + + for row_idx, row in enumerate(data_rows): + cell_value = row[lookup_col_idx] if lookup_col_idx < len(row) else "" + compare_cell = str(cell_value) if match_case else str(cell_value).lower() + + if compare_cell == compare_value: + # Found a match - extract requested columns + result_row = [] + result_dict = {} + for i, col_idx in enumerate(return_col_indices): + value = row[col_idx] if col_idx < len(row) else "" + result_row.append(value) + result_dict[return_col_headers[i]] = value + + return { + "row": result_row, + "row_dict": result_dict, + "row_index": row_idx + 2, + "found": True, + } + + return {"row": [], "row_dict": {}, "row_index": 0, "found": False} + + +class GoogleSheetsDeleteRowsBlock(Block): + """Delete rows from a Google Sheet by row indices.""" + + class Input(BlockSchemaInput): + spreadsheet: GoogleDriveFile = GoogleDriveFileField( + title="Spreadsheet", + description="Select a Google Sheets spreadsheet", + credentials_kwarg="credentials", + allowed_views=["SPREADSHEETS"], + allowed_mime_types=["application/vnd.google-apps.spreadsheet"], + ) + sheet_name: str = SchemaField( + description="Sheet name (optional, defaults to first sheet)", + default="", + ) + row_indices: list[int] = SchemaField( + description="1-based row indices to delete (e.g., [2, 5, 7])", + ) + + class Output(BlockSchemaOutput): + result: dict = SchemaField( + description="Result of the delete operation", + ) + deleted_count: int = SchemaField( + description="Number of rows deleted", + ) + spreadsheet: GoogleDriveFile = SchemaField( + description="The spreadsheet for chaining", + ) + error: str = SchemaField(description="Error message if any") + + def __init__(self): + super().__init__( + id="24bcd490-b02d-44c6-847d-b62a2319f5eb", + description="Delete specific rows from a Google Sheet by their row indices. 
Works well with FilterRowsBlock output.", + categories={BlockCategory.DATA}, + input_schema=GoogleSheetsDeleteRowsBlock.Input, + output_schema=GoogleSheetsDeleteRowsBlock.Output, + disabled=GOOGLE_SHEETS_DISABLED, + test_input={ + "spreadsheet": { + "id": "1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms", + "name": "Test Spreadsheet", + "mimeType": "application/vnd.google-apps.spreadsheet", + }, + "row_indices": [2, 5], + }, + test_credentials=TEST_CREDENTIALS, + test_output=[ + ("result", {"success": True}), + ("deleted_count", 2), + ( + "spreadsheet", + GoogleDriveFile( + id="1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms", + name="Test Spreadsheet", + mimeType="application/vnd.google-apps.spreadsheet", + url="https://docs.google.com/spreadsheets/d/1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms/edit", + iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png", + isFolder=False, + _credentials_id=None, + ), + ), + ], + test_mock={ + "_delete_rows": lambda *args, **kwargs: { + "success": True, + "deleted_count": 2, + }, + }, + ) + + async def run( + self, input_data: Input, *, credentials: GoogleCredentials, **kwargs + ) -> BlockOutput: + if not input_data.spreadsheet: + yield "error", "No spreadsheet selected" + return + + validation_error = _validate_spreadsheet_file(input_data.spreadsheet) + if validation_error: + yield "error", validation_error + return + + try: + service = _build_sheets_service(credentials) + result = await asyncio.to_thread( + self._delete_rows, + service, + input_data.spreadsheet.id, + input_data.sheet_name, + input_data.row_indices, + ) + yield "result", {"success": True} + yield "deleted_count", result["deleted_count"] + yield "spreadsheet", GoogleDriveFile( + id=input_data.spreadsheet.id, + name=input_data.spreadsheet.name, + mimeType="application/vnd.google-apps.spreadsheet", + url=f"https://docs.google.com/spreadsheets/d/{input_data.spreadsheet.id}/edit", + iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png", + isFolder=False, + _credentials_id=input_data.spreadsheet.credentials_id, + ) + except Exception as e: + yield "error", f"Failed to delete rows: {str(e)}" + + def _delete_rows( + self, + service, + spreadsheet_id: str, + sheet_name: str, + row_indices: list[int], + ) -> dict: + if not row_indices: + return {"success": True, "deleted_count": 0} + + target_sheet = resolve_sheet_name(service, spreadsheet_id, sheet_name or None) + sheet_id = sheet_id_by_name(service, spreadsheet_id, target_sheet) + + if sheet_id is None: + raise ValueError(f"Sheet '{target_sheet}' not found") + + # Deduplicate and sort row indices in descending order to delete from bottom to top + # Deduplication prevents deleting wrong rows if same index appears multiple times + sorted_indices = sorted(set(row_indices), reverse=True) + + # Build delete requests + requests = [] + for row_idx in sorted_indices: + # Convert to 0-based index + start_idx = row_idx - 1 + requests.append( + { + "deleteDimension": { + "range": { + "sheetId": sheet_id, + "dimension": "ROWS", + "startIndex": start_idx, + "endIndex": start_idx + 1, + } + } + } + ) + + service.spreadsheets().batchUpdate( + spreadsheetId=spreadsheet_id, body={"requests": requests} + ).execute() + + return {"success": True, "deleted_count": len(sorted_indices)} + + +class GoogleSheetsGetColumnBlock(Block): + """Get all values from a specific column by header name.""" + + class Input(BlockSchemaInput): + spreadsheet: GoogleDriveFile = GoogleDriveFileField( + title="Spreadsheet", + description="Select a 
Google Sheets spreadsheet", + credentials_kwarg="credentials", + allowed_views=["SPREADSHEETS"], + allowed_mime_types=["application/vnd.google-apps.spreadsheet"], + ) + sheet_name: str = SchemaField( + description="Sheet name (optional, defaults to first sheet)", + default="", + ) + column: str = SchemaField( + description="Column to extract (header name or column letter like 'A', 'B')", + placeholder="Email", + ) + include_header: bool = SchemaField( + description="Include the header in output", + default=False, + ) + skip_empty: bool = SchemaField( + description="Skip empty cells", + default=False, + ) + + class Output(BlockSchemaOutput): + values: list[str] = SchemaField( + description="List of values from the column", + ) + count: int = SchemaField( + description="Number of values (excluding header if not included)", + ) + column_index: int = SchemaField( + description="0-based column index", + ) + spreadsheet: GoogleDriveFile = SchemaField( + description="The spreadsheet for chaining", + ) + error: str = SchemaField(description="Error message if any") + + def __init__(self): + super().__init__( + id="108d911f-e109-47fb-addc-2259792ee850", + description="Extract all values from a specific column. Useful for getting a list of emails, IDs, or any single field.", + categories={BlockCategory.DATA}, + input_schema=GoogleSheetsGetColumnBlock.Input, + output_schema=GoogleSheetsGetColumnBlock.Output, + disabled=GOOGLE_SHEETS_DISABLED, + test_input={ + "spreadsheet": { + "id": "1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms", + "name": "Test Spreadsheet", + "mimeType": "application/vnd.google-apps.spreadsheet", + }, + "column": "Email", + "include_header": False, + "skip_empty": True, + }, + test_credentials=TEST_CREDENTIALS, + test_output=[ + ( + "values", + ["alice@example.com", "bob@example.com", "charlie@example.com"], + ), + ("count", 3), + ("column_index", 2), + ( + "spreadsheet", + GoogleDriveFile( + id="1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms", + name="Test Spreadsheet", + mimeType="application/vnd.google-apps.spreadsheet", + url="https://docs.google.com/spreadsheets/d/1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms/edit", + iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png", + isFolder=False, + _credentials_id=None, + ), + ), + ], + test_mock={ + "_get_column": lambda *args, **kwargs: { + "values": [ + "alice@example.com", + "bob@example.com", + "charlie@example.com", + ], + "count": 3, + "column_index": 2, + }, + }, + ) + + async def run( + self, input_data: Input, *, credentials: GoogleCredentials, **kwargs + ) -> BlockOutput: + if not input_data.spreadsheet: + yield "error", "No spreadsheet selected" + return + + validation_error = _validate_spreadsheet_file(input_data.spreadsheet) + if validation_error: + yield "error", validation_error + return + + try: + service = _build_sheets_service(credentials) + result = await asyncio.to_thread( + self._get_column, + service, + input_data.spreadsheet.id, + input_data.sheet_name, + input_data.column, + input_data.include_header, + input_data.skip_empty, + ) + yield "values", result["values"] + yield "count", result["count"] + yield "column_index", result["column_index"] + yield "spreadsheet", GoogleDriveFile( + id=input_data.spreadsheet.id, + name=input_data.spreadsheet.name, + mimeType="application/vnd.google-apps.spreadsheet", + url=f"https://docs.google.com/spreadsheets/d/{input_data.spreadsheet.id}/edit", + iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png", + isFolder=False, + 
_credentials_id=input_data.spreadsheet.credentials_id, + ) + except Exception as e: + yield "error", f"Failed to get column: {str(e)}" + + def _get_column( + self, + service, + spreadsheet_id: str, + sheet_name: str, + column: str, + include_header: bool, + skip_empty: bool, + ) -> dict: + target_sheet = resolve_sheet_name(service, spreadsheet_id, sheet_name or None) + formatted_sheet = format_sheet_name(target_sheet) + + result = ( + service.spreadsheets() + .values() + .get(spreadsheetId=spreadsheet_id, range=formatted_sheet) + .execute() + ) + all_rows = result.get("values", []) + + if not all_rows: + return {"values": [], "count": 0, "column_index": -1} + + header = all_rows[0] + + # Find column index - first try header name match, then column letter + col_idx = -1 + for idx, col_name in enumerate(header): + if col_name.lower() == column.lower(): + col_idx = idx + break + + # If no header match and looks like a column letter, try that + if col_idx < 0 and column.isalpha() and len(column) <= 2: + col_idx = _column_letter_to_index(column) + # Validate column letter is within data range + if col_idx >= len(header): + raise ValueError( + f"Column '{column}' (index {col_idx}) is out of range. " + f"Sheet only has {len(header)} columns (A-{_index_to_column_letter(len(header) - 1)})." + ) + + if col_idx < 0: + raise ValueError( + f"Column '{column}' not found. Available columns: {header}" + ) + + # Extract column values + values = [] + start_row = 0 if include_header else 1 + + for row in all_rows[start_row:]: + value = row[col_idx] if col_idx < len(row) else "" + if skip_empty and not str(value).strip(): + continue + values.append(str(value)) + + return {"values": values, "count": len(values), "column_index": col_idx} + + +class GoogleSheetsSortBlock(Block): + """Sort a Google Sheet by one or more columns.""" + + class Input(BlockSchemaInput): + spreadsheet: GoogleDriveFile = GoogleDriveFileField( + title="Spreadsheet", + description="Select a Google Sheets spreadsheet", + credentials_kwarg="credentials", + allowed_views=["SPREADSHEETS"], + allowed_mime_types=["application/vnd.google-apps.spreadsheet"], + ) + sheet_name: str = SchemaField( + description="Sheet name (optional, defaults to first sheet)", + default="", + ) + sort_column: str = SchemaField( + description="Primary column to sort by (header name or column letter)", + placeholder="Date", + ) + sort_order: SortOrder = SchemaField( + description="Sort order for primary column", + default=SortOrder.ASCENDING, + ) + secondary_column: str = SchemaField( + description="Secondary column to sort by (optional)", + default="", + ) + secondary_order: SortOrder = SchemaField( + description="Sort order for secondary column", + default=SortOrder.ASCENDING, + ) + has_header: bool = SchemaField( + description="Whether the data has a header row (header won't be sorted)", + default=True, + ) + + class Output(BlockSchemaOutput): + result: dict = SchemaField( + description="Result of the sort operation", + ) + spreadsheet: GoogleDriveFile = SchemaField( + description="The spreadsheet for chaining", + ) + error: str = SchemaField(description="Error message if any") + + def __init__(self): + super().__init__( + id="a265bd84-c93b-459d-bbe0-94e6addaa38f", + description="Sort a Google Sheet by one or two columns. 
The sheet is sorted in-place.", + categories={BlockCategory.DATA}, + input_schema=GoogleSheetsSortBlock.Input, + output_schema=GoogleSheetsSortBlock.Output, + disabled=GOOGLE_SHEETS_DISABLED, + test_input={ + "spreadsheet": { + "id": "1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms", + "name": "Test Spreadsheet", + "mimeType": "application/vnd.google-apps.spreadsheet", + }, + "sort_column": "Score", + "sort_order": SortOrder.DESCENDING, + }, + test_credentials=TEST_CREDENTIALS, + test_output=[ + ("result", {"success": True}), + ( + "spreadsheet", + GoogleDriveFile( + id="1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms", + name="Test Spreadsheet", + mimeType="application/vnd.google-apps.spreadsheet", + url="https://docs.google.com/spreadsheets/d/1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms/edit", + iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png", + isFolder=False, + _credentials_id=None, + ), + ), + ], + test_mock={ + "_sort_sheet": lambda *args, **kwargs: {"success": True}, + }, + ) + + async def run( + self, input_data: Input, *, credentials: GoogleCredentials, **kwargs + ) -> BlockOutput: + if not input_data.spreadsheet: + yield "error", "No spreadsheet selected" + return + + validation_error = _validate_spreadsheet_file(input_data.spreadsheet) + if validation_error: + yield "error", validation_error + return + + try: + service = _build_sheets_service(credentials) + result = await asyncio.to_thread( + self._sort_sheet, + service, + input_data.spreadsheet.id, + input_data.sheet_name, + input_data.sort_column, + input_data.sort_order, + input_data.secondary_column, + input_data.secondary_order, + input_data.has_header, + ) + yield "result", result + yield "spreadsheet", GoogleDriveFile( + id=input_data.spreadsheet.id, + name=input_data.spreadsheet.name, + mimeType="application/vnd.google-apps.spreadsheet", + url=f"https://docs.google.com/spreadsheets/d/{input_data.spreadsheet.id}/edit", + iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png", + isFolder=False, + _credentials_id=input_data.spreadsheet.credentials_id, + ) + except Exception as e: + yield "error", f"Failed to sort sheet: {str(e)}" + + def _sort_sheet( + self, + service, + spreadsheet_id: str, + sheet_name: str, + sort_column: str, + sort_order: SortOrder, + secondary_column: str, + secondary_order: SortOrder, + has_header: bool, + ) -> dict: + target_sheet = resolve_sheet_name(service, spreadsheet_id, sheet_name or None) + sheet_id = sheet_id_by_name(service, spreadsheet_id, target_sheet) + + if sheet_id is None: + raise ValueError(f"Sheet '{target_sheet}' not found") + + # Get sheet metadata to find column indices and grid properties + meta = service.spreadsheets().get(spreadsheetId=spreadsheet_id).execute() + sheet_meta = None + for sheet in meta.get("sheets", []): + if sheet.get("properties", {}).get("sheetId") == sheet_id: + sheet_meta = sheet + break + + if not sheet_meta: + raise ValueError(f"Could not find metadata for sheet '{target_sheet}'") + + grid_props = sheet_meta.get("properties", {}).get("gridProperties", {}) + row_count = grid_props.get("rowCount", 1000) + col_count = grid_props.get("columnCount", 26) + + # Get header to resolve column names + formatted_sheet = format_sheet_name(target_sheet) + header_result = ( + service.spreadsheets() + .values() + .get(spreadsheetId=spreadsheet_id, range=f"{formatted_sheet}!1:1") + .execute() + ) + header = ( + header_result.get("values", [[]])[0] if header_result.get("values") else [] + ) + + # Find primary sort column index - 
first try header name match, then column letter + sort_col_idx = -1 + for idx, col_name in enumerate(header): + if col_name.lower() == sort_column.lower(): + sort_col_idx = idx + break + + # If no header match and looks like a column letter, try that + if sort_col_idx < 0 and sort_column.isalpha() and len(sort_column) <= 2: + sort_col_idx = _column_letter_to_index(sort_column) + # Validate column letter is within data range + if sort_col_idx >= len(header): + raise ValueError( + f"Sort column '{sort_column}' (index {sort_col_idx}) is out of range. " + f"Sheet only has {len(header)} columns (A-{_index_to_column_letter(len(header) - 1)})." + ) + + if sort_col_idx < 0: + raise ValueError( + f"Sort column '{sort_column}' not found. Available: {header}" + ) + + # Build sort specs + sort_specs = [ + { + "dimensionIndex": sort_col_idx, + "sortOrder": ( + "ASCENDING" if sort_order == SortOrder.ASCENDING else "DESCENDING" + ), + } + ] + + # Add secondary sort if specified + if secondary_column: + sec_col_idx = -1 + # First try header name match + for idx, col_name in enumerate(header): + if col_name.lower() == secondary_column.lower(): + sec_col_idx = idx + break + + # If no header match and looks like a column letter, try that + if ( + sec_col_idx < 0 + and secondary_column.isalpha() + and len(secondary_column) <= 2 + ): + sec_col_idx = _column_letter_to_index(secondary_column) + # Validate column letter is within data range + if sec_col_idx >= len(header): + raise ValueError( + f"Secondary sort column '{secondary_column}' (index {sec_col_idx}) is out of range. " + f"Sheet only has {len(header)} columns (A-{_index_to_column_letter(len(header) - 1)})." + ) + + if sec_col_idx < 0: + raise ValueError( + f"Secondary sort column '{secondary_column}' not found. Available: {header}" + ) + + sort_specs.append( + { + "dimensionIndex": sec_col_idx, + "sortOrder": ( + "ASCENDING" + if secondary_order == SortOrder.ASCENDING + else "DESCENDING" + ), + } + ) + + # Build sort range request + start_row = 1 if has_header else 0 # Skip header if present + + request = { + "sortRange": { + "range": { + "sheetId": sheet_id, + "startRowIndex": start_row, + "endRowIndex": row_count, + "startColumnIndex": 0, + "endColumnIndex": col_count, + }, + "sortSpecs": sort_specs, + } + } + + service.spreadsheets().batchUpdate( + spreadsheetId=spreadsheet_id, body={"requests": [request]} + ).execute() + + return {"success": True} + + +class GoogleSheetsGetUniqueValuesBlock(Block): + """Get unique values from a column.""" + + class Input(BlockSchemaInput): + spreadsheet: GoogleDriveFile = GoogleDriveFileField( + title="Spreadsheet", + description="Select a Google Sheets spreadsheet", + credentials_kwarg="credentials", + allowed_views=["SPREADSHEETS"], + allowed_mime_types=["application/vnd.google-apps.spreadsheet"], + ) + sheet_name: str = SchemaField( + description="Sheet name (optional, defaults to first sheet)", + default="", + ) + column: str = SchemaField( + description="Column to get unique values from (header name or column letter)", + placeholder="Category", + ) + include_count: bool = SchemaField( + description="Include count of each unique value", + default=False, + ) + sort_by_count: bool = SchemaField( + description="Sort results by count (most frequent first)", + default=False, + ) + + class Output(BlockSchemaOutput): + values: list[str] = SchemaField( + description="List of unique values", + ) + counts: dict[str, int] = SchemaField( + description="Count of each unique value (if include_count is True)", + ) + 
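
For reference, the same column-resolution pattern recurs throughout this hunk (`_get_column`, `_sort_sheet`, `_get_unique_values`, `_remove_duplicates`, `_delete_column`): try a case-insensitive header-name match first, then fall back to treating the input as a column letter, bounds-checked against the header width. Below is a minimal standalone sketch of that pattern; the letter/index helpers are hypothetical stand-ins for the module's `_column_letter_to_index` / `_index_to_column_letter`, whose actual implementations are not shown in this hunk.

```python
# Illustrative sketch of the "header name or column letter" resolution used by
# the blocks above. The two helper implementations are assumptions, not the
# module's actual _column_letter_to_index / _index_to_column_letter.

def column_letter_to_index(letter: str) -> int:
    """'A' -> 0, 'Z' -> 25, 'AA' -> 26 (0-based)."""
    n = 0
    for ch in letter.upper():
        n = n * 26 + ord(ch) - ord("A") + 1
    return n - 1

def index_to_column_letter(idx: int) -> str:
    """0 -> 'A', 25 -> 'Z', 26 -> 'AA'."""
    s, n = "", idx + 1
    while n:
        n, r = divmod(n - 1, 26)
        s = chr(ord("A") + r) + s
    return s

def resolve_column(column: str, header: list[str]) -> int:
    """Return a 0-based column index for `column`, by header name or letter."""
    # 1) case-insensitive header-name match
    for idx, name in enumerate(header):
        if name.lower() == column.lower():
            return idx
    # 2) fall back to a column letter, validated against the header width
    if column.isalpha() and len(column) <= 2:
        idx = column_letter_to_index(column)
        if idx >= len(header):
            raise ValueError(
                f"Column '{column}' (index {idx}) is out of range. Sheet only has "
                f"{len(header)} columns (A-{index_to_column_letter(len(header) - 1)})."
            )
        return idx
    raise ValueError(f"Column '{column}' not found. Available columns: {header}")

# resolve_column("email", ["Name", "Email", "Status"]) -> 1
# resolve_column("C", ["Name", "Email", "Status"])     -> 2
```

The bounds check mirrors the error messages above: a letter that points past the header is rejected instead of silently operating on a column the sheet does not use.
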
total_unique: int = SchemaField( + description="Total number of unique values", + ) + spreadsheet: GoogleDriveFile = SchemaField( + description="The spreadsheet for chaining", + ) + error: str = SchemaField(description="Error message if any") + + def __init__(self): + super().__init__( + id="0f296c0b-6b6e-4280-b96e-ae1459b98dff", + description="Get unique values from a column. Useful for building dropdown options or finding distinct categories.", + categories={BlockCategory.DATA}, + input_schema=GoogleSheetsGetUniqueValuesBlock.Input, + output_schema=GoogleSheetsGetUniqueValuesBlock.Output, + disabled=GOOGLE_SHEETS_DISABLED, + test_input={ + "spreadsheet": { + "id": "1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms", + "name": "Test Spreadsheet", + "mimeType": "application/vnd.google-apps.spreadsheet", + }, + "column": "Status", + "include_count": True, + }, + test_credentials=TEST_CREDENTIALS, + test_output=[ + ("values", ["Active", "Inactive", "Pending"]), + ("counts", {"Active": 5, "Inactive": 3, "Pending": 2}), + ("total_unique", 3), + ( + "spreadsheet", + GoogleDriveFile( + id="1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms", + name="Test Spreadsheet", + mimeType="application/vnd.google-apps.spreadsheet", + url="https://docs.google.com/spreadsheets/d/1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms/edit", + iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png", + isFolder=False, + _credentials_id=None, + ), + ), + ], + test_mock={ + "_get_unique_values": lambda *args, **kwargs: { + "values": ["Active", "Inactive", "Pending"], + "counts": {"Active": 5, "Inactive": 3, "Pending": 2}, + "total_unique": 3, + }, + }, + ) + + async def run( + self, input_data: Input, *, credentials: GoogleCredentials, **kwargs + ) -> BlockOutput: + if not input_data.spreadsheet: + yield "error", "No spreadsheet selected" + return + + validation_error = _validate_spreadsheet_file(input_data.spreadsheet) + if validation_error: + yield "error", validation_error + return + + try: + service = _build_sheets_service(credentials) + result = await asyncio.to_thread( + self._get_unique_values, + service, + input_data.spreadsheet.id, + input_data.sheet_name, + input_data.column, + input_data.include_count, + input_data.sort_by_count, + ) + yield "values", result["values"] + yield "counts", result["counts"] + yield "total_unique", result["total_unique"] + yield "spreadsheet", GoogleDriveFile( + id=input_data.spreadsheet.id, + name=input_data.spreadsheet.name, + mimeType="application/vnd.google-apps.spreadsheet", + url=f"https://docs.google.com/spreadsheets/d/{input_data.spreadsheet.id}/edit", + iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png", + isFolder=False, + _credentials_id=input_data.spreadsheet.credentials_id, + ) + except Exception as e: + yield "error", f"Failed to get unique values: {str(e)}" + + def _get_unique_values( + self, + service, + spreadsheet_id: str, + sheet_name: str, + column: str, + include_count: bool, + sort_by_count: bool, + ) -> dict: + target_sheet = resolve_sheet_name(service, spreadsheet_id, sheet_name or None) + formatted_sheet = format_sheet_name(target_sheet) + + result = ( + service.spreadsheets() + .values() + .get(spreadsheetId=spreadsheet_id, range=formatted_sheet) + .execute() + ) + all_rows = result.get("values", []) + + if not all_rows: + return {"values": [], "counts": {}, "total_unique": 0} + + header = all_rows[0] + + # Find column index - first try header name match, then column letter + col_idx = -1 + for idx, col_name in 
enumerate(header): + if col_name.lower() == column.lower(): + col_idx = idx + break + + # If no header match and looks like a column letter, try that + if col_idx < 0 and column.isalpha() and len(column) <= 2: + col_idx = _column_letter_to_index(column) + # Validate column letter is within data range + if col_idx >= len(header): + raise ValueError( + f"Column '{column}' (index {col_idx}) is out of range. " + f"Sheet only has {len(header)} columns (A-{_index_to_column_letter(len(header) - 1)})." + ) + + if col_idx < 0: + raise ValueError( + f"Column '{column}' not found. Available columns: {header}" + ) + + # Count values + value_counts: dict[str, int] = {} + for row in all_rows[1:]: # Skip header + value = str(row[col_idx]) if col_idx < len(row) else "" + if value.strip(): # Skip empty values + value_counts[value] = value_counts.get(value, 0) + 1 + + # Sort values + if sort_by_count: + sorted_items = sorted(value_counts.items(), key=lambda x: -x[1]) + unique_values = [item[0] for item in sorted_items] + else: + unique_values = sorted(value_counts.keys()) + + return { + "values": unique_values, + "counts": value_counts if include_count else {}, + "total_unique": len(unique_values), + } + + +class GoogleSheetsInsertRowBlock(Block): + """Insert a single row at a specific position in a Google Sheet.""" + + class Input(BlockSchemaInput): + spreadsheet: GoogleDriveFile = GoogleDriveFileField( + title="Spreadsheet", + description="Select a Google Sheets spreadsheet", + credentials_kwarg="credentials", + allowed_views=["SPREADSHEETS"], + allowed_mime_types=["application/vnd.google-apps.spreadsheet"], + ) + row: list[str] = SchemaField( + description="Row values to insert (e.g., ['Alice', 'alice@example.com', '25'])", + ) + row_index: int = SchemaField( + description="1-based row index where to insert (existing rows shift down)", + placeholder="2", + ) + sheet_name: str = SchemaField( + description="Sheet name (optional, defaults to first sheet)", + default="", + ) + value_input_option: ValueInputOption = SchemaField( + description="How values are interpreted. USER_ENTERED: parsed like typed input (e.g., '=SUM(A1:A5)' becomes a formula, '1/2/2024' becomes a date). RAW: stored as-is without parsing.", + default=ValueInputOption.USER_ENTERED, + advanced=True, + ) + + class Output(BlockSchemaOutput): + result: dict = SchemaField(description="Result of the insert operation") + spreadsheet: GoogleDriveFile = SchemaField( + description="The spreadsheet for chaining", + ) + error: str = SchemaField(description="Error message if any") + + def __init__(self): + super().__init__( + id="03eda5df-8080-4ed1-bfdf-212f543d657e", + description="Insert a single row at a specific position. 
Existing rows shift down.", + categories={BlockCategory.DATA}, + input_schema=GoogleSheetsInsertRowBlock.Input, + output_schema=GoogleSheetsInsertRowBlock.Output, + disabled=GOOGLE_SHEETS_DISABLED, + test_input={ + "spreadsheet": { + "id": "1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms", + "name": "Test Spreadsheet", + "mimeType": "application/vnd.google-apps.spreadsheet", + }, + "row": ["New", "Row", "Data"], + "row_index": 3, + }, + test_credentials=TEST_CREDENTIALS, + test_output=[ + ("result", {"success": True}), + ( + "spreadsheet", + GoogleDriveFile( + id="1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms", + name="Test Spreadsheet", + mimeType="application/vnd.google-apps.spreadsheet", + url="https://docs.google.com/spreadsheets/d/1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms/edit", + iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png", + isFolder=False, + _credentials_id=None, + ), + ), + ], + test_mock={ + "_insert_row": lambda *args, **kwargs: {"success": True}, + }, + ) + + async def run( + self, input_data: Input, *, credentials: GoogleCredentials, **kwargs + ) -> BlockOutput: + if not input_data.spreadsheet: + yield "error", "No spreadsheet selected" + return + + validation_error = _validate_spreadsheet_file(input_data.spreadsheet) + if validation_error: + yield "error", validation_error + return + + if not input_data.row: + yield "error", "Row data is required" + return + + try: + service = _build_sheets_service(credentials) + result = await asyncio.to_thread( + self._insert_row, + service, + input_data.spreadsheet.id, + input_data.sheet_name, + input_data.row_index, + input_data.row, + input_data.value_input_option, + ) + yield "result", result + yield "spreadsheet", GoogleDriveFile( + id=input_data.spreadsheet.id, + name=input_data.spreadsheet.name, + mimeType="application/vnd.google-apps.spreadsheet", + url=f"https://docs.google.com/spreadsheets/d/{input_data.spreadsheet.id}/edit", + iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png", + isFolder=False, + _credentials_id=input_data.spreadsheet.credentials_id, + ) + except Exception as e: + yield "error", f"Failed to insert row: {str(e)}" + + def _insert_row( + self, + service, + spreadsheet_id: str, + sheet_name: str, + row_index: int, + row: list[str], + value_input_option: ValueInputOption, + ) -> dict: + target_sheet = resolve_sheet_name(service, spreadsheet_id, sheet_name or None) + sheet_id = sheet_id_by_name(service, spreadsheet_id, target_sheet) + + if sheet_id is None: + raise ValueError(f"Sheet '{target_sheet}' not found") + + start_idx = row_index - 1 # Convert to 0-based + + # First, insert an empty row + insert_request = { + "insertDimension": { + "range": { + "sheetId": sheet_id, + "dimension": "ROWS", + "startIndex": start_idx, + "endIndex": start_idx + 1, + }, + "inheritFromBefore": start_idx > 0, + } + } + + service.spreadsheets().batchUpdate( + spreadsheetId=spreadsheet_id, body={"requests": [insert_request]} + ).execute() + + # Then, write the values + formatted_sheet = format_sheet_name(target_sheet) + write_range = f"{formatted_sheet}!A{row_index}" + + service.spreadsheets().values().update( + spreadsheetId=spreadsheet_id, + range=write_range, + valueInputOption=value_input_option.value, + body={"values": [row]}, # Wrap single row in list for API + ).execute() + + return {"success": True} + + +class GoogleSheetsAddColumnBlock(Block): + """Add a new column with a header to a Google Sheet.""" + + class Input(BlockSchemaInput): + spreadsheet: GoogleDriveFile = 
GoogleDriveFileField( + title="Spreadsheet", + description="Select a Google Sheets spreadsheet", + credentials_kwarg="credentials", + allowed_views=["SPREADSHEETS"], + allowed_mime_types=["application/vnd.google-apps.spreadsheet"], + ) + sheet_name: str = SchemaField( + description="Sheet name (optional, defaults to first sheet)", + default="", + ) + header: str = SchemaField( + description="Header name for the new column", + placeholder="New Column", + ) + position: str = SchemaField( + description="Where to add: 'end' for last column, or column letter (e.g., 'C') to insert before", + default="end", + ) + default_value: str = SchemaField( + description="Default value to fill in all data rows (optional). Requires existing data rows.", + default="", + ) + + class Output(BlockSchemaOutput): + result: dict = SchemaField( + description="Result of the operation", + ) + column_letter: str = SchemaField( + description="Letter of the new column (e.g., 'D')", + ) + column_index: int = SchemaField( + description="0-based index of the new column", + ) + spreadsheet: GoogleDriveFile = SchemaField( + description="The spreadsheet for chaining", + ) + error: str = SchemaField(description="Error message if any") + + def __init__(self): + super().__init__( + id="cac51050-fc9e-4e63-987a-66c2ba2a127b", + description="Add a new column with a header. Can add at the end or insert at a specific position.", + categories={BlockCategory.DATA}, + input_schema=GoogleSheetsAddColumnBlock.Input, + output_schema=GoogleSheetsAddColumnBlock.Output, + disabled=GOOGLE_SHEETS_DISABLED, + test_input={ + "spreadsheet": { + "id": "1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms", + "name": "Test Spreadsheet", + "mimeType": "application/vnd.google-apps.spreadsheet", + }, + "header": "New Status", + "position": "end", + }, + test_credentials=TEST_CREDENTIALS, + test_output=[ + ("result", {"success": True}), + ("column_letter", "D"), + ("column_index", 3), + ( + "spreadsheet", + GoogleDriveFile( + id="1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms", + name="Test Spreadsheet", + mimeType="application/vnd.google-apps.spreadsheet", + url="https://docs.google.com/spreadsheets/d/1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms/edit", + iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png", + isFolder=False, + _credentials_id=None, + ), + ), + ], + test_mock={ + "_add_column": lambda *args, **kwargs: { + "success": True, + "column_letter": "D", + "column_index": 3, + }, + }, + ) + + async def run( + self, input_data: Input, *, credentials: GoogleCredentials, **kwargs + ) -> BlockOutput: + if not input_data.spreadsheet: + yield "error", "No spreadsheet selected" + return + + validation_error = _validate_spreadsheet_file(input_data.spreadsheet) + if validation_error: + yield "error", validation_error + return + + try: + service = _build_sheets_service(credentials) + result = await asyncio.to_thread( + self._add_column, + service, + input_data.spreadsheet.id, + input_data.sheet_name, + input_data.header, + input_data.position, + input_data.default_value, + ) + yield "result", {"success": True} + yield "column_letter", result["column_letter"] + yield "column_index", result["column_index"] + yield "spreadsheet", GoogleDriveFile( + id=input_data.spreadsheet.id, + name=input_data.spreadsheet.name, + mimeType="application/vnd.google-apps.spreadsheet", + url=f"https://docs.google.com/spreadsheets/d/{input_data.spreadsheet.id}/edit", + iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png", + isFolder=False, + 
_credentials_id=input_data.spreadsheet.credentials_id, + ) + except Exception as e: + yield "error", f"Failed to add column: {str(e)}" + + def _add_column( + self, + service, + spreadsheet_id: str, + sheet_name: str, + header: str, + position: str, + default_value: str, + ) -> dict: + target_sheet = resolve_sheet_name(service, spreadsheet_id, sheet_name or None) + formatted_sheet = format_sheet_name(target_sheet) + sheet_id = sheet_id_by_name(service, spreadsheet_id, target_sheet) + + if sheet_id is None: + raise ValueError(f"Sheet '{target_sheet}' not found") + + # Get current data to determine column count and row count + result = ( + service.spreadsheets() + .values() + .get(spreadsheetId=spreadsheet_id, range=formatted_sheet) + .execute() + ) + all_rows = result.get("values", []) + current_col_count = max(len(row) for row in all_rows) if all_rows else 0 + row_count = len(all_rows) + + # Determine target column index + if position.lower() == "end": + col_idx = current_col_count + elif position.isalpha() and len(position) <= 2: + col_idx = _column_letter_to_index(position) + # Insert a new column at this position + insert_request = { + "insertDimension": { + "range": { + "sheetId": sheet_id, + "dimension": "COLUMNS", + "startIndex": col_idx, + "endIndex": col_idx + 1, + }, + "inheritFromBefore": col_idx > 0, + } + } + service.spreadsheets().batchUpdate( + spreadsheetId=spreadsheet_id, body={"requests": [insert_request]} + ).execute() + else: + raise ValueError( + f"Invalid position: '{position}'. Use 'end' or a column letter." + ) + + col_letter = _index_to_column_letter(col_idx) + + # Write header + header_range = f"{formatted_sheet}!{col_letter}1" + service.spreadsheets().values().update( + spreadsheetId=spreadsheet_id, + range=header_range, + valueInputOption="USER_ENTERED", + body={"values": [[header]]}, + ).execute() + + # Fill default value if provided and there are data rows + if default_value and row_count > 1: + values_to_fill = [[default_value]] * (row_count - 1) + data_range = f"{formatted_sheet}!{col_letter}2:{col_letter}{row_count}" + service.spreadsheets().values().update( + spreadsheetId=spreadsheet_id, + range=data_range, + valueInputOption="USER_ENTERED", + body={"values": values_to_fill}, + ).execute() + + return { + "success": True, + "column_letter": col_letter, + "column_index": col_idx, + } + + +class GoogleSheetsGetRowCountBlock(Block): + """Get the number of rows in a Google Sheet.""" + + class Input(BlockSchemaInput): + spreadsheet: GoogleDriveFile = GoogleDriveFileField( + title="Spreadsheet", + description="Select a Google Sheets spreadsheet", + credentials_kwarg="credentials", + allowed_views=["SPREADSHEETS"], + allowed_mime_types=["application/vnd.google-apps.spreadsheet"], + ) + sheet_name: str = SchemaField( + description="Sheet name (optional, defaults to first sheet)", + default="", + ) + include_header: bool = SchemaField( + description="Include header row in count", + default=True, + ) + count_empty: bool = SchemaField( + description="Count rows with only empty cells", + default=False, + ) + + class Output(BlockSchemaOutput): + total_rows: int = SchemaField( + description="Total number of rows", + ) + data_rows: int = SchemaField( + description="Number of data rows (excluding header)", + ) + last_row: int = SchemaField( + description="1-based index of the last row with data", + ) + column_count: int = SchemaField( + description="Number of columns", + ) + spreadsheet: GoogleDriveFile = SchemaField( + description="The spreadsheet for chaining", + ) + 
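
As a side note, `_add_column` above resolves the `position` input into a concrete target column before touching the sheet: `'end'` means one past the widest existing row (no `insertDimension` needed), while a column letter triggers an `insertDimension` at that index so existing columns shift right. A small self-contained sketch of just that decision follows; the letter/index helpers are again hypothetical stand-ins for the module's own.

```python
# Illustrative sketch of the position handling in _add_column; helper
# implementations are assumptions, not the module's actual helpers.

def _letter_to_idx(letter: str) -> int:
    n = 0
    for ch in letter.upper():
        n = n * 26 + ord(ch) - ord("A") + 1
    return n - 1

def _idx_to_letter(idx: int) -> str:
    s, n = "", idx + 1
    while n:
        n, r = divmod(n - 1, 26)
        s = chr(ord("A") + r) + s
    return s

def plan_new_column(position: str, rows: list[list[str]]) -> tuple[int, str, bool]:
    """Return (0-based index, column letter, whether an insertDimension is needed)."""
    current_col_count = max((len(r) for r in rows), default=0)
    if position.lower() == "end":
        # Append after the last used column; nothing shifts.
        return current_col_count, _idx_to_letter(current_col_count), False
    if position.isalpha() and len(position) <= 2:
        # Insert before this letter; existing columns shift right.
        idx = _letter_to_idx(position)
        return idx, _idx_to_letter(idx), True
    raise ValueError(f"Invalid position: '{position}'. Use 'end' or a column letter.")

# plan_new_column("end", [["Name", "Email"], ["Alice", "a@x.com"]]) -> (2, "C", False)
# plan_new_column("A",   [["Name", "Email"], ["Alice", "a@x.com"]]) -> (0, "A", True)
```

After that decision, the block itself writes the header into `{letter}1` and, when `default_value` is set and data rows exist, fills `{letter}2:{letter}{row_count}` with that value.
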
error: str = SchemaField(description="Error message if any") + + def __init__(self): + super().__init__( + id="080cc84b-a94a-4fb4-90e3-dcc55ee783af", + description="Get row count and dimensions of a Google Sheet. Useful for knowing where data ends.", + categories={BlockCategory.DATA}, + input_schema=GoogleSheetsGetRowCountBlock.Input, + output_schema=GoogleSheetsGetRowCountBlock.Output, + disabled=GOOGLE_SHEETS_DISABLED, + test_input={ + "spreadsheet": { + "id": "1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms", + "name": "Test Spreadsheet", + "mimeType": "application/vnd.google-apps.spreadsheet", + }, + }, + test_credentials=TEST_CREDENTIALS, + test_output=[ + ("total_rows", 101), + ("data_rows", 100), + ("last_row", 101), + ("column_count", 5), + ( + "spreadsheet", + GoogleDriveFile( + id="1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms", + name="Test Spreadsheet", + mimeType="application/vnd.google-apps.spreadsheet", + url="https://docs.google.com/spreadsheets/d/1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms/edit", + iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png", + isFolder=False, + _credentials_id=None, + ), + ), + ], + test_mock={ + "_get_row_count": lambda *args, **kwargs: { + "total_rows": 101, + "data_rows": 100, + "last_row": 101, + "column_count": 5, + }, + }, + ) + + async def run( + self, input_data: Input, *, credentials: GoogleCredentials, **kwargs + ) -> BlockOutput: + if not input_data.spreadsheet: + yield "error", "No spreadsheet selected" + return + + validation_error = _validate_spreadsheet_file(input_data.spreadsheet) + if validation_error: + yield "error", validation_error + return + + try: + service = _build_sheets_service(credentials) + result = await asyncio.to_thread( + self._get_row_count, + service, + input_data.spreadsheet.id, + input_data.sheet_name, + input_data.include_header, + input_data.count_empty, + ) + yield "total_rows", result["total_rows"] + yield "data_rows", result["data_rows"] + yield "last_row", result["last_row"] + yield "column_count", result["column_count"] + yield "spreadsheet", GoogleDriveFile( + id=input_data.spreadsheet.id, + name=input_data.spreadsheet.name, + mimeType="application/vnd.google-apps.spreadsheet", + url=f"https://docs.google.com/spreadsheets/d/{input_data.spreadsheet.id}/edit", + iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png", + isFolder=False, + _credentials_id=input_data.spreadsheet.credentials_id, + ) + except Exception as e: + yield "error", f"Failed to get row count: {str(e)}" + + def _get_row_count( + self, + service, + spreadsheet_id: str, + sheet_name: str, + include_header: bool, + count_empty: bool, + ) -> dict: + target_sheet = resolve_sheet_name(service, spreadsheet_id, sheet_name or None) + formatted_sheet = format_sheet_name(target_sheet) + + result = ( + service.spreadsheets() + .values() + .get(spreadsheetId=spreadsheet_id, range=formatted_sheet) + .execute() + ) + all_rows = result.get("values", []) + + if not all_rows: + return { + "total_rows": 0, + "data_rows": 0, + "last_row": 0, + "column_count": 0, + } + + # Count non-empty rows + if count_empty: + total_rows = len(all_rows) + last_row = total_rows + else: + # Find last row with actual data + last_row = 0 + for idx, row in enumerate(all_rows): + if any(str(cell).strip() for cell in row): + last_row = idx + 1 + total_rows = last_row + + data_rows = total_rows - 1 if total_rows > 0 else 0 + if not include_header: + total_rows = data_rows + + column_count = max(len(row) for row in all_rows) if all_rows 
else 0 + + return { + "total_rows": total_rows, + "data_rows": data_rows, + "last_row": last_row, + "column_count": column_count, + } + + +class GoogleSheetsRemoveDuplicatesBlock(Block): + """Remove duplicate rows from a Google Sheet based on specified columns.""" + + class Input(BlockSchemaInput): + spreadsheet: GoogleDriveFile = GoogleDriveFileField( + title="Spreadsheet", + description="Select a Google Sheets spreadsheet", + credentials_kwarg="credentials", + allowed_views=["SPREADSHEETS"], + allowed_mime_types=["application/vnd.google-apps.spreadsheet"], + ) + sheet_name: str = SchemaField( + description="Sheet name (optional, defaults to first sheet)", + default="", + ) + columns: list[str] = SchemaField( + description="Columns to check for duplicates (header names or letters). Empty = all columns.", + default=[], + ) + keep: str = SchemaField( + description="Which duplicate to keep: 'first' or 'last'", + default="first", + ) + match_case: bool = SchemaField( + description="Whether to match case when comparing", + default=False, + ) + + class Output(BlockSchemaOutput): + result: dict = SchemaField( + description="Result of the operation", + ) + removed_count: int = SchemaField( + description="Number of duplicate rows removed", + ) + remaining_rows: int = SchemaField( + description="Number of rows remaining", + ) + spreadsheet: GoogleDriveFile = SchemaField( + description="The spreadsheet for chaining", + ) + error: str = SchemaField(description="Error message if any") + + def __init__(self): + super().__init__( + id="6eb50ff7-205b-400e-8ecc-1ce8d50075be", + description="Remove duplicate rows based on specified columns. Keeps either the first or last occurrence.", + categories={BlockCategory.DATA}, + input_schema=GoogleSheetsRemoveDuplicatesBlock.Input, + output_schema=GoogleSheetsRemoveDuplicatesBlock.Output, + disabled=GOOGLE_SHEETS_DISABLED, + test_input={ + "spreadsheet": { + "id": "1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms", + "name": "Test Spreadsheet", + "mimeType": "application/vnd.google-apps.spreadsheet", + }, + "columns": ["Email"], + "keep": "first", + }, + test_credentials=TEST_CREDENTIALS, + test_output=[ + ("result", {"success": True}), + ("removed_count", 5), + ("remaining_rows", 95), + ( + "spreadsheet", + GoogleDriveFile( + id="1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms", + name="Test Spreadsheet", + mimeType="application/vnd.google-apps.spreadsheet", + url="https://docs.google.com/spreadsheets/d/1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms/edit", + iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png", + isFolder=False, + _credentials_id=None, + ), + ), + ], + test_mock={ + "_remove_duplicates": lambda *args, **kwargs: { + "success": True, + "removed_count": 5, + "remaining_rows": 95, + }, + }, + ) + + async def run( + self, input_data: Input, *, credentials: GoogleCredentials, **kwargs + ) -> BlockOutput: + if not input_data.spreadsheet: + yield "error", "No spreadsheet selected" + return + + validation_error = _validate_spreadsheet_file(input_data.spreadsheet) + if validation_error: + yield "error", validation_error + return + + try: + service = _build_sheets_service(credentials) + result = await asyncio.to_thread( + self._remove_duplicates, + service, + input_data.spreadsheet.id, + input_data.sheet_name, + input_data.columns, + input_data.keep, + input_data.match_case, + ) + yield "result", {"success": True} + yield "removed_count", result["removed_count"] + yield "remaining_rows", result["remaining_rows"] + yield "spreadsheet", 
GoogleDriveFile( + id=input_data.spreadsheet.id, + name=input_data.spreadsheet.name, + mimeType="application/vnd.google-apps.spreadsheet", + url=f"https://docs.google.com/spreadsheets/d/{input_data.spreadsheet.id}/edit", + iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png", + isFolder=False, + _credentials_id=input_data.spreadsheet.credentials_id, + ) + except Exception as e: + yield "error", f"Failed to remove duplicates: {str(e)}" + + def _remove_duplicates( + self, + service, + spreadsheet_id: str, + sheet_name: str, + columns: list[str], + keep: str, + match_case: bool, + ) -> dict: + target_sheet = resolve_sheet_name(service, spreadsheet_id, sheet_name or None) + formatted_sheet = format_sheet_name(target_sheet) + sheet_id = sheet_id_by_name(service, spreadsheet_id, target_sheet) + + if sheet_id is None: + raise ValueError(f"Sheet '{target_sheet}' not found") + + # Read all data + result = ( + service.spreadsheets() + .values() + .get(spreadsheetId=spreadsheet_id, range=formatted_sheet) + .execute() + ) + all_rows = result.get("values", []) + + if len(all_rows) <= 1: # Only header or empty + return { + "success": True, + "removed_count": 0, + "remaining_rows": len(all_rows), + } + + header = all_rows[0] + data_rows = all_rows[1:] + + # Determine which column indices to use for comparison + # First try header name match, then column letter + if columns: + col_indices = [] + for col in columns: + found = False + # First try header name match + for idx, col_name in enumerate(header): + if col_name.lower() == col.lower(): + col_indices.append(idx) + found = True + break + + # If no header match and looks like a column letter, try that + if not found and col.isalpha() and len(col) <= 2: + col_idx = _column_letter_to_index(col) + # Validate column letter is within data range + if col_idx >= len(header): + raise ValueError( + f"Column '{col}' (index {col_idx}) is out of range. " + f"Sheet only has {len(header)} columns (A-{_index_to_column_letter(len(header) - 1)})." + ) + col_indices.append(col_idx) + found = True + + if not found: + raise ValueError( + f"Column '{col}' not found in sheet. 
" + f"Available columns: {', '.join(header)}" + ) + else: + col_indices = list(range(len(header))) + + # Find duplicates + seen: dict[tuple, int] = {} + rows_to_delete: list[int] = [] + + for row_idx, row in enumerate(data_rows): + # Build key from specified columns + key_parts = [] + for col_idx in col_indices: + value = str(row[col_idx]) if col_idx < len(row) else "" + if not match_case: + value = value.lower() + key_parts.append(value) + key = tuple(key_parts) + + if key in seen: + if keep == "first": + # Delete this row (keep the first one we saw) + rows_to_delete.append(row_idx + 2) # +2 for 1-based and header + else: + # Delete the previous row, then update seen to keep this one + prev_row = seen[key] + rows_to_delete.append(prev_row) + seen[key] = row_idx + 2 + else: + seen[key] = row_idx + 2 + + if not rows_to_delete: + return { + "success": True, + "removed_count": 0, + "remaining_rows": len(all_rows), + } + + # Sort in descending order to delete from bottom to top + rows_to_delete = sorted(set(rows_to_delete), reverse=True) + + # Delete rows + requests = [] + for row_idx in rows_to_delete: + start_idx = row_idx - 1 + requests.append( + { + "deleteDimension": { + "range": { + "sheetId": sheet_id, + "dimension": "ROWS", + "startIndex": start_idx, + "endIndex": start_idx + 1, + } + } + } + ) + + service.spreadsheets().batchUpdate( + spreadsheetId=spreadsheet_id, body={"requests": requests} + ).execute() + + remaining = len(all_rows) - len(rows_to_delete) + return { + "success": True, + "removed_count": len(rows_to_delete), + "remaining_rows": remaining, + } + + +class GoogleSheetsUpdateRowBlock(Block): + """Update a specific row by index with new values.""" + + class Input(BlockSchemaInput): + spreadsheet: GoogleDriveFile = GoogleDriveFileField( + title="Spreadsheet", + description="Select a Google Sheets spreadsheet", + credentials_kwarg="credentials", + allowed_views=["SPREADSHEETS"], + allowed_mime_types=["application/vnd.google-apps.spreadsheet"], + ) + sheet_name: str = SchemaField( + description="Sheet name (optional, defaults to first sheet)", + default="", + ) + row_index: int = SchemaField( + description="1-based row index to update", + ) + values: list[str] = SchemaField( + description="New values for the row (in column order)", + default=[], + ) + dict_values: dict[str, str] = SchemaField( + description="Values as dict with column headers as keys (alternative to values)", + default={}, + ) + + class Output(BlockSchemaOutput): + result: dict = SchemaField( + description="Result of the update operation", + ) + spreadsheet: GoogleDriveFile = SchemaField( + description="The spreadsheet for chaining", + ) + error: str = SchemaField(description="Error message if any") + + def __init__(self): + super().__init__( + id="b8a934d5-fca0-4be3-9fc2-a99bf63bd385", + description="Update a specific row by its index. 
Can use list or dict format for values.", + categories={BlockCategory.DATA}, + input_schema=GoogleSheetsUpdateRowBlock.Input, + output_schema=GoogleSheetsUpdateRowBlock.Output, + disabled=GOOGLE_SHEETS_DISABLED, + test_input={ + "spreadsheet": { + "id": "1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms", + "name": "Test Spreadsheet", + "mimeType": "application/vnd.google-apps.spreadsheet", + }, + "row_index": 5, + "dict_values": {"Name": "Updated Name", "Status": "Active"}, + }, + test_credentials=TEST_CREDENTIALS, + test_output=[ + ("result", {"success": True, "updatedCells": 2}), + ( + "spreadsheet", + GoogleDriveFile( + id="1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms", + name="Test Spreadsheet", + mimeType="application/vnd.google-apps.spreadsheet", + url="https://docs.google.com/spreadsheets/d/1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms/edit", + iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png", + isFolder=False, + _credentials_id=None, + ), + ), + ], + test_mock={ + "_update_row": lambda *args, **kwargs: { + "success": True, + "updatedCells": 2, + }, + }, + ) + + async def run( + self, input_data: Input, *, credentials: GoogleCredentials, **kwargs + ) -> BlockOutput: + if not input_data.spreadsheet: + yield "error", "No spreadsheet selected" + return + + validation_error = _validate_spreadsheet_file(input_data.spreadsheet) + if validation_error: + yield "error", validation_error + return + + if not input_data.values and not input_data.dict_values: + yield "error", "Either values or dict_values must be provided" + return + + try: + service = _build_sheets_service(credentials) + result = await asyncio.to_thread( + self._update_row, + service, + input_data.spreadsheet.id, + input_data.sheet_name, + input_data.row_index, + input_data.values, + input_data.dict_values, + ) + yield "result", result + yield "spreadsheet", GoogleDriveFile( + id=input_data.spreadsheet.id, + name=input_data.spreadsheet.name, + mimeType="application/vnd.google-apps.spreadsheet", + url=f"https://docs.google.com/spreadsheets/d/{input_data.spreadsheet.id}/edit", + iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png", + isFolder=False, + _credentials_id=input_data.spreadsheet.credentials_id, + ) + except Exception as e: + yield "error", f"Failed to update row: {str(e)}" + + def _update_row( + self, + service, + spreadsheet_id: str, + sheet_name: str, + row_index: int, + values: list[str], + dict_values: dict[str, str], + ) -> dict: + target_sheet = resolve_sheet_name(service, spreadsheet_id, sheet_name or None) + formatted_sheet = format_sheet_name(target_sheet) + + if dict_values: + # Get header to map column names to indices + header_result = ( + service.spreadsheets() + .values() + .get(spreadsheetId=spreadsheet_id, range=f"{formatted_sheet}!1:1") + .execute() + ) + header = ( + header_result.get("values", [[]])[0] + if header_result.get("values") + else [] + ) + + # Get current row values + row_range = f"{formatted_sheet}!{row_index}:{row_index}" + current_result = ( + service.spreadsheets() + .values() + .get(spreadsheetId=spreadsheet_id, range=row_range) + .execute() + ) + current_row = ( + current_result.get("values", [[]])[0] + if current_result.get("values") + else [] + ) + + # Extend current row to match header length + while len(current_row) < len(header): + current_row.append("") + + # Update specific columns from dict - validate all column names first + for col_name in dict_values.keys(): + found = False + for h in header: + if h.lower() == 
col_name.lower(): + found = True + break + if not found: + raise ValueError( + f"Column '{col_name}' not found in sheet. " + f"Available columns: {', '.join(header)}" + ) + + # Now apply updates + updated_count = 0 + for col_name, value in dict_values.items(): + for idx, h in enumerate(header): + if h.lower() == col_name.lower(): + current_row[idx] = value + updated_count += 1 + break + + values = current_row + else: + updated_count = len(values) + + # Write the row + write_range = f"{formatted_sheet}!A{row_index}" + service.spreadsheets().values().update( + spreadsheetId=spreadsheet_id, + range=write_range, + valueInputOption="USER_ENTERED", + body={"values": [values]}, + ).execute() + + return {"success": True, "updatedCells": updated_count} + + +class GoogleSheetsGetRowBlock(Block): + """Get a specific row by its index.""" + + class Input(BlockSchemaInput): + spreadsheet: GoogleDriveFile = GoogleDriveFileField( + title="Spreadsheet", + description="Select a Google Sheets spreadsheet", + credentials_kwarg="credentials", + allowed_views=["SPREADSHEETS"], + allowed_mime_types=["application/vnd.google-apps.spreadsheet"], + ) + sheet_name: str = SchemaField( + description="Sheet name (optional, defaults to first sheet)", + default="", + ) + row_index: int = SchemaField( + description="1-based row index to retrieve", + ) + + class Output(BlockSchemaOutput): + row: list[str] = SchemaField( + description="The row values as a list", + ) + row_dict: dict[str, str] = SchemaField( + description="The row as a dictionary (header: value)", + ) + spreadsheet: GoogleDriveFile = SchemaField( + description="The spreadsheet for chaining", + ) + error: str = SchemaField(description="Error message if any") + + def __init__(self): + super().__init__( + id="c4be9390-2431-4682-9769-7025b22a5fa7", + description="Get a specific row by its index. 
Returns both list and dict formats.", + categories={BlockCategory.DATA}, + input_schema=GoogleSheetsGetRowBlock.Input, + output_schema=GoogleSheetsGetRowBlock.Output, + disabled=GOOGLE_SHEETS_DISABLED, + test_input={ + "spreadsheet": { + "id": "1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms", + "name": "Test Spreadsheet", + "mimeType": "application/vnd.google-apps.spreadsheet", + }, + "row_index": 3, + }, + test_credentials=TEST_CREDENTIALS, + test_output=[ + ("row", ["Alice", "Active", "85"]), + ("row_dict", {"Name": "Alice", "Status": "Active", "Score": "85"}), + ( + "spreadsheet", + GoogleDriveFile( + id="1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms", + name="Test Spreadsheet", + mimeType="application/vnd.google-apps.spreadsheet", + url="https://docs.google.com/spreadsheets/d/1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms/edit", + iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png", + isFolder=False, + _credentials_id=None, + ), + ), + ], + test_mock={ + "_get_row": lambda *args, **kwargs: { + "row": ["Alice", "Active", "85"], + "row_dict": {"Name": "Alice", "Status": "Active", "Score": "85"}, + }, + }, + ) + + async def run( + self, input_data: Input, *, credentials: GoogleCredentials, **kwargs + ) -> BlockOutput: + if not input_data.spreadsheet: + yield "error", "No spreadsheet selected" + return + + validation_error = _validate_spreadsheet_file(input_data.spreadsheet) + if validation_error: + yield "error", validation_error + return + + try: + service = _build_sheets_service(credentials) + result = await asyncio.to_thread( + self._get_row, + service, + input_data.spreadsheet.id, + input_data.sheet_name, + input_data.row_index, + ) + yield "row", result["row"] + yield "row_dict", result["row_dict"] + yield "spreadsheet", GoogleDriveFile( + id=input_data.spreadsheet.id, + name=input_data.spreadsheet.name, + mimeType="application/vnd.google-apps.spreadsheet", + url=f"https://docs.google.com/spreadsheets/d/{input_data.spreadsheet.id}/edit", + iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png", + isFolder=False, + _credentials_id=input_data.spreadsheet.credentials_id, + ) + except Exception as e: + yield "error", f"Failed to get row: {str(e)}" + + def _get_row( + self, + service, + spreadsheet_id: str, + sheet_name: str, + row_index: int, + ) -> dict: + target_sheet = resolve_sheet_name(service, spreadsheet_id, sheet_name or None) + formatted_sheet = format_sheet_name(target_sheet) + + # Get header + header_result = ( + service.spreadsheets() + .values() + .get(spreadsheetId=spreadsheet_id, range=f"{formatted_sheet}!1:1") + .execute() + ) + header = ( + header_result.get("values", [[]])[0] if header_result.get("values") else [] + ) + + # Get the row + row_range = f"{formatted_sheet}!{row_index}:{row_index}" + row_result = ( + service.spreadsheets() + .values() + .get(spreadsheetId=spreadsheet_id, range=row_range) + .execute() + ) + row = row_result.get("values", [[]])[0] if row_result.get("values") else [] + + # Build dictionary + row_dict = {} + for idx, h in enumerate(header): + row_dict[h] = row[idx] if idx < len(row) else "" + + return {"row": row, "row_dict": row_dict} + + +class GoogleSheetsDeleteColumnBlock(Block): + """Delete a column from a Google Sheet.""" + + class Input(BlockSchemaInput): + spreadsheet: GoogleDriveFile = GoogleDriveFileField( + title="Spreadsheet", + description="Select a Google Sheets spreadsheet", + credentials_kwarg="credentials", + allowed_views=["SPREADSHEETS"], + 
allowed_mime_types=["application/vnd.google-apps.spreadsheet"], + ) + sheet_name: str = SchemaField( + description="Sheet name (optional, defaults to first sheet)", + default="", + ) + column: str = SchemaField( + description="Column to delete (header name or column letter like 'A', 'B')", + ) + + class Output(BlockSchemaOutput): + result: dict = SchemaField( + description="Result of the delete operation", + ) + spreadsheet: GoogleDriveFile = SchemaField( + description="The spreadsheet for chaining", + ) + error: str = SchemaField(description="Error message if any") + + def __init__(self): + super().__init__( + id="59b266b6-5cce-4661-a1d3-c417e64d68e9", + description="Delete a column by header name or column letter.", + categories={BlockCategory.DATA}, + input_schema=GoogleSheetsDeleteColumnBlock.Input, + output_schema=GoogleSheetsDeleteColumnBlock.Output, + disabled=GOOGLE_SHEETS_DISABLED, + test_input={ + "spreadsheet": { + "id": "1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms", + "name": "Test Spreadsheet", + "mimeType": "application/vnd.google-apps.spreadsheet", + }, + "column": "Status", + }, + test_credentials=TEST_CREDENTIALS, + test_output=[ + ("result", {"success": True}), + ( + "spreadsheet", + GoogleDriveFile( + id="1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms", + name="Test Spreadsheet", + mimeType="application/vnd.google-apps.spreadsheet", + url="https://docs.google.com/spreadsheets/d/1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms/edit", + iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png", + isFolder=False, + _credentials_id=None, + ), + ), + ], + test_mock={ + "_delete_column": lambda *args, **kwargs: {"success": True}, + }, + ) + + async def run( + self, input_data: Input, *, credentials: GoogleCredentials, **kwargs + ) -> BlockOutput: + if not input_data.spreadsheet: + yield "error", "No spreadsheet selected" + return + + validation_error = _validate_spreadsheet_file(input_data.spreadsheet) + if validation_error: + yield "error", validation_error + return + + try: + service = _build_sheets_service(credentials) + result = await asyncio.to_thread( + self._delete_column, + service, + input_data.spreadsheet.id, + input_data.sheet_name, + input_data.column, + ) + yield "result", result + yield "spreadsheet", GoogleDriveFile( + id=input_data.spreadsheet.id, + name=input_data.spreadsheet.name, + mimeType="application/vnd.google-apps.spreadsheet", + url=f"https://docs.google.com/spreadsheets/d/{input_data.spreadsheet.id}/edit", + iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png", + isFolder=False, + _credentials_id=input_data.spreadsheet.credentials_id, + ) + except Exception as e: + yield "error", f"Failed to delete column: {str(e)}" + + def _delete_column( + self, + service, + spreadsheet_id: str, + sheet_name: str, + column: str, + ) -> dict: + target_sheet = resolve_sheet_name(service, spreadsheet_id, sheet_name or None) + formatted_sheet = format_sheet_name(target_sheet) + sheet_id = sheet_id_by_name(service, spreadsheet_id, target_sheet) + + if sheet_id is None: + raise ValueError(f"Sheet '{target_sheet}' not found") + + # Get header to find column by name or validate column letter + header_result = ( + service.spreadsheets() + .values() + .get(spreadsheetId=spreadsheet_id, range=f"{formatted_sheet}!1:1") + .execute() + ) + header = ( + header_result.get("values", [[]])[0] if header_result.get("values") else [] + ) + + # Find column index - first try header name match, then column letter + col_idx = -1 + for idx, h in 
enumerate(header): + if h.lower() == column.lower(): + col_idx = idx + break + + # If no header match and looks like a column letter, try that + if col_idx < 0 and column.isalpha() and len(column) <= 2: + col_idx = _column_letter_to_index(column) + # Validate column letter is within data range + if col_idx >= len(header): + raise ValueError( + f"Column '{column}' (index {col_idx}) is out of range. " + f"Sheet only has {len(header)} columns (A-{_index_to_column_letter(len(header) - 1)})." + ) + + if col_idx < 0: + raise ValueError(f"Column '{column}' not found") + + # Delete the column + request = { + "deleteDimension": { + "range": { + "sheetId": sheet_id, + "dimension": "COLUMNS", + "startIndex": col_idx, + "endIndex": col_idx + 1, + } + } + } + + service.spreadsheets().batchUpdate( + spreadsheetId=spreadsheet_id, body={"requests": [request]} + ).execute() + + return {"success": True} + + +class GoogleSheetsCreateNamedRangeBlock(Block): + """Create a named range in a Google Sheet.""" + + class Input(BlockSchemaInput): + spreadsheet: GoogleDriveFile = GoogleDriveFileField( + title="Spreadsheet", + description="Select a Google Sheets spreadsheet", + credentials_kwarg="credentials", + allowed_views=["SPREADSHEETS"], + allowed_mime_types=["application/vnd.google-apps.spreadsheet"], + ) + sheet_name: str = SchemaField( + description="Sheet name (optional, defaults to first sheet)", + default="", + ) + name: str = SchemaField( + description="Name for the range (e.g., 'SalesData', 'CustomerList')", + placeholder="MyNamedRange", + ) + range: str = SchemaField( + description="Cell range in A1 notation (e.g., 'A1:D10', 'B2:B100')", + placeholder="A1:D10", + ) + + class Output(BlockSchemaOutput): + result: dict = SchemaField( + description="Result of the operation", + ) + named_range_id: str = SchemaField( + description="ID of the created named range", + ) + spreadsheet: GoogleDriveFile = SchemaField( + description="The spreadsheet for chaining", + ) + error: str = SchemaField(description="Error message if any") + + def __init__(self): + super().__init__( + id="a2707376-8016-494b-98c4-d0e2752ab9cb", + description="Create a named range to reference cells by name instead of A1 notation.", + categories={BlockCategory.DATA}, + input_schema=GoogleSheetsCreateNamedRangeBlock.Input, + output_schema=GoogleSheetsCreateNamedRangeBlock.Output, + disabled=GOOGLE_SHEETS_DISABLED, + test_input={ + "spreadsheet": { + "id": "1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms", + "name": "Test Spreadsheet", + "mimeType": "application/vnd.google-apps.spreadsheet", + }, + "name": "SalesData", + "range": "A1:D10", + }, + test_credentials=TEST_CREDENTIALS, + test_output=[ + ("result", {"success": True}), + ("named_range_id", "nr_12345"), + ( + "spreadsheet", + GoogleDriveFile( + id="1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms", + name="Test Spreadsheet", + mimeType="application/vnd.google-apps.spreadsheet", + url="https://docs.google.com/spreadsheets/d/1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms/edit", + iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png", + isFolder=False, + _credentials_id=None, + ), + ), + ], + test_mock={ + "_create_named_range": lambda *args, **kwargs: { + "success": True, + "named_range_id": "nr_12345", + }, + }, + ) + + async def run( + self, input_data: Input, *, credentials: GoogleCredentials, **kwargs + ) -> BlockOutput: + if not input_data.spreadsheet: + yield "error", "No spreadsheet selected" + return + + validation_error = 
_validate_spreadsheet_file(input_data.spreadsheet) + if validation_error: + yield "error", validation_error + return + + try: + service = _build_sheets_service(credentials) + result = await asyncio.to_thread( + self._create_named_range, + service, + input_data.spreadsheet.id, + input_data.sheet_name, + input_data.name, + input_data.range, + ) + yield "result", {"success": True} + yield "named_range_id", result["named_range_id"] + yield "spreadsheet", GoogleDriveFile( + id=input_data.spreadsheet.id, + name=input_data.spreadsheet.name, + mimeType="application/vnd.google-apps.spreadsheet", + url=f"https://docs.google.com/spreadsheets/d/{input_data.spreadsheet.id}/edit", + iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png", + isFolder=False, + _credentials_id=input_data.spreadsheet.credentials_id, + ) + except Exception as e: + yield "error", f"Failed to create named range: {str(e)}" + + def _create_named_range( + self, + service, + spreadsheet_id: str, + sheet_name: str, + name: str, + range_str: str, + ) -> dict: + target_sheet = resolve_sheet_name(service, spreadsheet_id, sheet_name or None) + sheet_id = sheet_id_by_name(service, spreadsheet_id, target_sheet) + + if sheet_id is None: + raise ValueError(f"Sheet '{target_sheet}' not found") + + # Parse range to get grid coordinates + # Handle both "A1:D10" and "Sheet1!A1:D10" formats + if "!" in range_str: + range_str = range_str.split("!")[1] + + # Parse start and end cells + match = re.match(r"([A-Z]+)(\d+):([A-Z]+)(\d+)", range_str.upper()) + if not match: + raise ValueError(f"Invalid range format: {range_str}") + + start_col = _column_letter_to_index(match.group(1)) + start_row = int(match.group(2)) - 1 # 0-based + end_col = _column_letter_to_index(match.group(3)) + 1 # exclusive + end_row = int(match.group(4)) # exclusive (already 1-based becomes 0-based + 1) + + request = { + "addNamedRange": { + "namedRange": { + "name": name, + "range": { + "sheetId": sheet_id, + "startRowIndex": start_row, + "endRowIndex": end_row, + "startColumnIndex": start_col, + "endColumnIndex": end_col, + }, + } + } + } + + result = ( + service.spreadsheets() + .batchUpdate(spreadsheetId=spreadsheet_id, body={"requests": [request]}) + .execute() + ) + + # Extract the named range ID from the response + named_range_id = "" + replies = result.get("replies", []) + if replies and "addNamedRange" in replies[0]: + named_range_id = replies[0]["addNamedRange"]["namedRange"]["namedRangeId"] + + return {"success": True, "named_range_id": named_range_id} + + +class GoogleSheetsListNamedRangesBlock(Block): + """List all named ranges in a Google Sheet.""" + + class Input(BlockSchemaInput): + spreadsheet: GoogleDriveFile = GoogleDriveFileField( + title="Spreadsheet", + description="Select a Google Sheets spreadsheet", + credentials_kwarg="credentials", + allowed_views=["SPREADSHEETS"], + allowed_mime_types=["application/vnd.google-apps.spreadsheet"], + ) + + class Output(BlockSchemaOutput): + named_ranges: list[dict] = SchemaField( + description="List of named ranges with name, id, and range info", + ) + count: int = SchemaField( + description="Number of named ranges", + ) + spreadsheet: GoogleDriveFile = SchemaField( + description="The spreadsheet for chaining", + ) + error: str = SchemaField(description="Error message if any") + + def __init__(self): + super().__init__( + id="b81a9d27-3997-4860-9303-cc68086db13a", + description="List all named ranges in a spreadsheet.", + categories={BlockCategory.DATA}, + 
input_schema=GoogleSheetsListNamedRangesBlock.Input, + output_schema=GoogleSheetsListNamedRangesBlock.Output, + disabled=GOOGLE_SHEETS_DISABLED, + test_input={ + "spreadsheet": { + "id": "1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms", + "name": "Test Spreadsheet", + "mimeType": "application/vnd.google-apps.spreadsheet", + }, + }, + test_credentials=TEST_CREDENTIALS, + test_output=[ + ( + "named_ranges", + [ + {"name": "SalesData", "id": "nr_1", "range": "Sheet1!A1:D10"}, + { + "name": "CustomerList", + "id": "nr_2", + "range": "Sheet1!E1:F50", + }, + ], + ), + ("count", 2), + ( + "spreadsheet", + GoogleDriveFile( + id="1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms", + name="Test Spreadsheet", + mimeType="application/vnd.google-apps.spreadsheet", + url="https://docs.google.com/spreadsheets/d/1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms/edit", + iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png", + isFolder=False, + _credentials_id=None, + ), + ), + ], + test_mock={ + "_list_named_ranges": lambda *args, **kwargs: { + "named_ranges": [ + {"name": "SalesData", "id": "nr_1", "range": "Sheet1!A1:D10"}, + { + "name": "CustomerList", + "id": "nr_2", + "range": "Sheet1!E1:F50", + }, + ], + "count": 2, + }, + }, + ) + + async def run( + self, input_data: Input, *, credentials: GoogleCredentials, **kwargs + ) -> BlockOutput: + if not input_data.spreadsheet: + yield "error", "No spreadsheet selected" + return + + validation_error = _validate_spreadsheet_file(input_data.spreadsheet) + if validation_error: + yield "error", validation_error + return + + try: + service = _build_sheets_service(credentials) + result = await asyncio.to_thread( + self._list_named_ranges, + service, + input_data.spreadsheet.id, + ) + yield "named_ranges", result["named_ranges"] + yield "count", result["count"] + yield "spreadsheet", GoogleDriveFile( + id=input_data.spreadsheet.id, + name=input_data.spreadsheet.name, + mimeType="application/vnd.google-apps.spreadsheet", + url=f"https://docs.google.com/spreadsheets/d/{input_data.spreadsheet.id}/edit", + iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png", + isFolder=False, + _credentials_id=input_data.spreadsheet.credentials_id, + ) + except Exception as e: + yield "error", f"Failed to list named ranges: {str(e)}" + + def _list_named_ranges( + self, + service, + spreadsheet_id: str, + ) -> dict: + # Get spreadsheet metadata including named ranges + meta = service.spreadsheets().get(spreadsheetId=spreadsheet_id).execute() + + named_ranges_list = [] + named_ranges = meta.get("namedRanges", []) + + # Get sheet names for reference + sheets = { + sheet["properties"]["sheetId"]: sheet["properties"]["title"] + for sheet in meta.get("sheets", []) + } + + for nr in named_ranges: + range_info = nr.get("range", {}) + sheet_id = range_info.get("sheetId", 0) + sheet_name = sheets.get(sheet_id, "Sheet1") + + # Convert grid range back to A1 notation + start_col = _index_to_column_letter(range_info.get("startColumnIndex", 0)) + end_col = _index_to_column_letter(range_info.get("endColumnIndex", 1) - 1) + start_row = range_info.get("startRowIndex", 0) + 1 + end_row = range_info.get("endRowIndex", 1) + + range_str = f"{sheet_name}!{start_col}{start_row}:{end_col}{end_row}" + + named_ranges_list.append( + { + "name": nr.get("name", ""), + "id": nr.get("namedRangeId", ""), + "range": range_str, + } + ) + + return {"named_ranges": named_ranges_list, "count": len(named_ranges_list)} + + +class GoogleSheetsAddDropdownBlock(Block): + """Add a dropdown 
(data validation) to cells.""" + + class Input(BlockSchemaInput): + spreadsheet: GoogleDriveFile = GoogleDriveFileField( + title="Spreadsheet", + description="Select a Google Sheets spreadsheet", + credentials_kwarg="credentials", + allowed_views=["SPREADSHEETS"], + allowed_mime_types=["application/vnd.google-apps.spreadsheet"], + ) + sheet_name: str = SchemaField( + description="Sheet name (optional, defaults to first sheet)", + default="", + ) + range: str = SchemaField( + description="Cell range to add dropdown to (e.g., 'B2:B100')", + placeholder="B2:B100", + ) + options: list[str] = SchemaField( + description="List of dropdown options", + ) + strict: bool = SchemaField( + description="Reject input not in the list", + default=True, + ) + show_dropdown: bool = SchemaField( + description="Show dropdown arrow in cells", + default=True, + ) + + class Output(BlockSchemaOutput): + result: dict = SchemaField( + description="Result of the operation", + ) + spreadsheet: GoogleDriveFile = SchemaField( + description="The spreadsheet for chaining", + ) + error: str = SchemaField(description="Error message if any") + + def __init__(self): + super().__init__( + id="725431c9-71ba-4fce-b829-5a3e495a8a88", + description="Add a dropdown list (data validation) to cells. Useful for enforcing valid inputs.", + categories={BlockCategory.DATA}, + input_schema=GoogleSheetsAddDropdownBlock.Input, + output_schema=GoogleSheetsAddDropdownBlock.Output, + disabled=GOOGLE_SHEETS_DISABLED, + test_input={ + "spreadsheet": { + "id": "1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms", + "name": "Test Spreadsheet", + "mimeType": "application/vnd.google-apps.spreadsheet", + }, + "range": "B2:B100", + "options": ["Active", "Inactive", "Pending"], + }, + test_credentials=TEST_CREDENTIALS, + test_output=[ + ("result", {"success": True}), + ( + "spreadsheet", + GoogleDriveFile( + id="1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms", + name="Test Spreadsheet", + mimeType="application/vnd.google-apps.spreadsheet", + url="https://docs.google.com/spreadsheets/d/1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms/edit", + iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png", + isFolder=False, + _credentials_id=None, + ), + ), + ], + test_mock={ + "_add_dropdown": lambda *args, **kwargs: {"success": True}, + }, + ) + + async def run( + self, input_data: Input, *, credentials: GoogleCredentials, **kwargs + ) -> BlockOutput: + if not input_data.spreadsheet: + yield "error", "No spreadsheet selected" + return + + validation_error = _validate_spreadsheet_file(input_data.spreadsheet) + if validation_error: + yield "error", validation_error + return + + if not input_data.options: + yield "error", "Options list cannot be empty" + return + + try: + service = _build_sheets_service(credentials) + result = await asyncio.to_thread( + self._add_dropdown, + service, + input_data.spreadsheet.id, + input_data.sheet_name, + input_data.range, + input_data.options, + input_data.strict, + input_data.show_dropdown, + ) + yield "result", result + yield "spreadsheet", GoogleDriveFile( + id=input_data.spreadsheet.id, + name=input_data.spreadsheet.name, + mimeType="application/vnd.google-apps.spreadsheet", + url=f"https://docs.google.com/spreadsheets/d/{input_data.spreadsheet.id}/edit", + iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png", + isFolder=False, + _credentials_id=input_data.spreadsheet.credentials_id, + ) + except Exception as e: + yield "error", f"Failed to add dropdown: {str(e)}" + + def _add_dropdown( + 
self, + service, + spreadsheet_id: str, + sheet_name: str, + range_str: str, + options: list[str], + strict: bool, + show_dropdown: bool, + ) -> dict: + target_sheet = resolve_sheet_name(service, spreadsheet_id, sheet_name or None) + sheet_id = sheet_id_by_name(service, spreadsheet_id, target_sheet) + + if sheet_id is None: + raise ValueError(f"Sheet '{target_sheet}' not found") + + # Parse range + if "!" in range_str: + range_str = range_str.split("!")[1] + + match = re.match(r"([A-Z]+)(\d+):([A-Z]+)(\d+)", range_str.upper()) + if not match: + raise ValueError(f"Invalid range format: {range_str}") + + start_col = _column_letter_to_index(match.group(1)) + start_row = int(match.group(2)) - 1 + end_col = _column_letter_to_index(match.group(3)) + 1 + end_row = int(match.group(4)) + + # Build condition values + condition_values = [{"userEnteredValue": opt} for opt in options] + + request = { + "setDataValidation": { + "range": { + "sheetId": sheet_id, + "startRowIndex": start_row, + "endRowIndex": end_row, + "startColumnIndex": start_col, + "endColumnIndex": end_col, + }, + "rule": { + "condition": { + "type": "ONE_OF_LIST", + "values": condition_values, + }, + "strict": strict, + "showCustomUi": show_dropdown, + }, + } + } + + service.spreadsheets().batchUpdate( + spreadsheetId=spreadsheet_id, body={"requests": [request]} + ).execute() + + return {"success": True} + + +class GoogleSheetsCopyToSpreadsheetBlock(Block): + """Copy a sheet to another spreadsheet.""" + + class Input(BlockSchemaInput): + source_spreadsheet: GoogleDriveFile = GoogleDriveFileField( + title="Source Spreadsheet", + description="Select the source spreadsheet", + credentials_kwarg="credentials", + allowed_views=["SPREADSHEETS"], + allowed_mime_types=["application/vnd.google-apps.spreadsheet"], + ) + source_sheet_name: str = SchemaField( + description="Sheet to copy (optional, defaults to first sheet)", + default="", + ) + destination_spreadsheet_id: str = SchemaField( + description="ID of the destination spreadsheet", + ) + + class Output(BlockSchemaOutput): + result: dict = SchemaField( + description="Result of the copy operation", + ) + new_sheet_id: int = SchemaField( + description="ID of the new sheet in the destination", + ) + new_sheet_name: str = SchemaField( + description="Name of the new sheet", + ) + spreadsheet: GoogleDriveFile = SchemaField( + description="The source spreadsheet for chaining", + ) + error: str = SchemaField(description="Error message if any") + + def __init__(self): + super().__init__( + id="740eec3f-2b51-4e95-b87f-22ce2acafdfa", + description="Copy a sheet from one spreadsheet to another.", + categories={BlockCategory.DATA}, + input_schema=GoogleSheetsCopyToSpreadsheetBlock.Input, + output_schema=GoogleSheetsCopyToSpreadsheetBlock.Output, + disabled=GOOGLE_SHEETS_DISABLED, + test_input={ + "source_spreadsheet": { + "id": "1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms", + "name": "Source Spreadsheet", + "mimeType": "application/vnd.google-apps.spreadsheet", + }, + "destination_spreadsheet_id": "dest_spreadsheet_id_123", + }, + test_credentials=TEST_CREDENTIALS, + test_output=[ + ("result", {"success": True}), + ("new_sheet_id", 12345), + ("new_sheet_name", "Copy of Sheet1"), + ( + "spreadsheet", + GoogleDriveFile( + id="1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms", + name="Source Spreadsheet", + mimeType="application/vnd.google-apps.spreadsheet", + url="https://docs.google.com/spreadsheets/d/1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms/edit", + 
iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png", + isFolder=False, + _credentials_id=None, + ), + ), + ], + test_mock={ + "_copy_to_spreadsheet": lambda *args, **kwargs: { + "success": True, + "new_sheet_id": 12345, + "new_sheet_name": "Copy of Sheet1", + }, + }, + ) + + async def run( + self, input_data: Input, *, credentials: GoogleCredentials, **kwargs + ) -> BlockOutput: + if not input_data.source_spreadsheet: + yield "error", "No source spreadsheet selected" + return + + validation_error = _validate_spreadsheet_file(input_data.source_spreadsheet) + if validation_error: + yield "error", validation_error + return + + try: + service = _build_sheets_service(credentials) + result = await asyncio.to_thread( + self._copy_to_spreadsheet, + service, + input_data.source_spreadsheet.id, + input_data.source_sheet_name, + input_data.destination_spreadsheet_id, + ) + yield "result", {"success": True} + yield "new_sheet_id", result["new_sheet_id"] + yield "new_sheet_name", result["new_sheet_name"] + yield "spreadsheet", GoogleDriveFile( + id=input_data.source_spreadsheet.id, + name=input_data.source_spreadsheet.name, + mimeType="application/vnd.google-apps.spreadsheet", + url=f"https://docs.google.com/spreadsheets/d/{input_data.source_spreadsheet.id}/edit", + iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png", + isFolder=False, + _credentials_id=input_data.source_spreadsheet.credentials_id, + ) + except Exception as e: + yield "error", f"Failed to copy sheet: {str(e)}" + + def _copy_to_spreadsheet( + self, + service, + source_spreadsheet_id: str, + source_sheet_name: str, + destination_spreadsheet_id: str, + ) -> dict: + target_sheet = resolve_sheet_name( + service, source_spreadsheet_id, source_sheet_name or None + ) + sheet_id = sheet_id_by_name(service, source_spreadsheet_id, target_sheet) + + if sheet_id is None: + raise ValueError(f"Sheet '{target_sheet}' not found") + + result = ( + service.spreadsheets() + .sheets() + .copyTo( + spreadsheetId=source_spreadsheet_id, + sheetId=sheet_id, + body={"destinationSpreadsheetId": destination_spreadsheet_id}, + ) + .execute() + ) + + return { + "success": True, + "new_sheet_id": result.get("sheetId", 0), + "new_sheet_name": result.get("title", ""), + } + + +class GoogleSheetsProtectRangeBlock(Block): + """Protect a range from editing.""" + + class Input(BlockSchemaInput): + spreadsheet: GoogleDriveFile = GoogleDriveFileField( + title="Spreadsheet", + description="Select a Google Sheets spreadsheet", + credentials_kwarg="credentials", + allowed_views=["SPREADSHEETS"], + allowed_mime_types=["application/vnd.google-apps.spreadsheet"], + ) + sheet_name: str = SchemaField( + description="Sheet name (optional, defaults to first sheet)", + default="", + ) + range: str = SchemaField( + description="Cell range to protect (e.g., 'A1:D10'). 
Leave empty to protect entire sheet.", + default="", + ) + description: str = SchemaField( + description="Description for the protected range", + default="Protected by automation", + ) + warning_only: bool = SchemaField( + description="Show warning but allow editing (vs blocking completely)", + default=False, + ) + + class Output(BlockSchemaOutput): + result: dict = SchemaField( + description="Result of the operation", + ) + protection_id: int = SchemaField( + description="ID of the protection", + ) + spreadsheet: GoogleDriveFile = SchemaField( + description="The spreadsheet for chaining", + ) + error: str = SchemaField(description="Error message if any") + + def __init__(self): + super().__init__( + id="d0e4f5d1-76e7-4082-9be8-e656ec1f432d", + description="Protect a cell range or entire sheet from editing.", + categories={BlockCategory.DATA}, + input_schema=GoogleSheetsProtectRangeBlock.Input, + output_schema=GoogleSheetsProtectRangeBlock.Output, + disabled=GOOGLE_SHEETS_DISABLED, + test_input={ + "spreadsheet": { + "id": "1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms", + "name": "Test Spreadsheet", + "mimeType": "application/vnd.google-apps.spreadsheet", + }, + "range": "A1:D10", + "description": "Header row protection", + }, + test_credentials=TEST_CREDENTIALS, + test_output=[ + ("result", {"success": True}), + ("protection_id", 12345), + ( + "spreadsheet", + GoogleDriveFile( + id="1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms", + name="Test Spreadsheet", + mimeType="application/vnd.google-apps.spreadsheet", + url="https://docs.google.com/spreadsheets/d/1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms/edit", + iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png", + isFolder=False, + _credentials_id=None, + ), + ), + ], + test_mock={ + "_protect_range": lambda *args, **kwargs: { + "success": True, + "protection_id": 12345, + }, + }, + ) + + async def run( + self, input_data: Input, *, credentials: GoogleCredentials, **kwargs + ) -> BlockOutput: + if not input_data.spreadsheet: + yield "error", "No spreadsheet selected" + return + + validation_error = _validate_spreadsheet_file(input_data.spreadsheet) + if validation_error: + yield "error", validation_error + return + + try: + service = _build_sheets_service(credentials) + result = await asyncio.to_thread( + self._protect_range, + service, + input_data.spreadsheet.id, + input_data.sheet_name, + input_data.range, + input_data.description, + input_data.warning_only, + ) + yield "result", {"success": True} + yield "protection_id", result["protection_id"] + yield "spreadsheet", GoogleDriveFile( + id=input_data.spreadsheet.id, + name=input_data.spreadsheet.name, + mimeType="application/vnd.google-apps.spreadsheet", + url=f"https://docs.google.com/spreadsheets/d/{input_data.spreadsheet.id}/edit", + iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png", + isFolder=False, + _credentials_id=input_data.spreadsheet.credentials_id, + ) + except Exception as e: + yield "error", f"Failed to protect range: {str(e)}" + + def _protect_range( + self, + service, + spreadsheet_id: str, + sheet_name: str, + range_str: str, + description: str, + warning_only: bool, + ) -> dict: + target_sheet = resolve_sheet_name(service, spreadsheet_id, sheet_name or None) + sheet_id = sheet_id_by_name(service, spreadsheet_id, target_sheet) + + if sheet_id is None: + raise ValueError(f"Sheet '{target_sheet}' not found") + + protected_range: dict = {"sheetId": sheet_id} + + if range_str: + # Parse specific range + if "!" 
in range_str: + range_str = range_str.split("!")[1] + + match = re.match(r"([A-Z]+)(\d+):([A-Z]+)(\d+)", range_str.upper()) + if not match: + raise ValueError(f"Invalid range format: {range_str}") + + protected_range["startRowIndex"] = int(match.group(2)) - 1 + protected_range["endRowIndex"] = int(match.group(4)) + protected_range["startColumnIndex"] = _column_letter_to_index( + match.group(1) + ) + protected_range["endColumnIndex"] = ( + _column_letter_to_index(match.group(3)) + 1 + ) + + request = { + "addProtectedRange": { + "protectedRange": { + "range": protected_range, + "description": description, + "warningOnly": warning_only, + } + } + } + + result = ( + service.spreadsheets() + .batchUpdate(spreadsheetId=spreadsheet_id, body={"requests": [request]}) + .execute() + ) + + protection_id = 0 + replies = result.get("replies", []) + if replies and "addProtectedRange" in replies[0]: + protection_id = replies[0]["addProtectedRange"]["protectedRange"][ + "protectedRangeId" + ] + + return {"success": True, "protection_id": protection_id} + + +class GoogleSheetsExportCsvBlock(Block): + """Export a sheet as CSV data.""" + + class Input(BlockSchemaInput): + spreadsheet: GoogleDriveFile = GoogleDriveFileField( + title="Spreadsheet", + description="The spreadsheet to export from", + credentials_kwarg="credentials", + allowed_views=["SPREADSHEETS"], + allowed_mime_types=["application/vnd.google-apps.spreadsheet"], + ) + sheet_name: str = SchemaField( + default="", + description="Name of the sheet to export. Defaults to first sheet.", + ) + include_headers: bool = SchemaField( + default=True, + description="Include the first row (headers) in the CSV output", + ) + + class Output(BlockSchemaOutput): + csv_data: str = SchemaField(description="The sheet data as CSV string") + row_count: int = SchemaField(description="Number of rows exported") + spreadsheet: GoogleDriveFile = SchemaField( + description="The spreadsheet for chaining" + ) + error: str = SchemaField(description="Error message if export failed") + + def __init__(self): + super().__init__( + id="2617e68a-43b3-441f-8b11-66bb041105b8", + description="Export a Google Sheet as CSV data", + categories={BlockCategory.DATA}, + input_schema=GoogleSheetsExportCsvBlock.Input, + output_schema=GoogleSheetsExportCsvBlock.Output, + disabled=GOOGLE_SHEETS_DISABLED, + test_input={ + "spreadsheet": { + "id": "1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms", + "name": "Test Spreadsheet", + "mimeType": "application/vnd.google-apps.spreadsheet", + }, + }, + test_credentials=TEST_CREDENTIALS, + test_output=[ + ("csv_data", "Name,Email,Status\nJohn,john@test.com,Active\n"), + ("row_count", 2), + ( + "spreadsheet", + GoogleDriveFile( + id="1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms", + name="Test Spreadsheet", + mimeType="application/vnd.google-apps.spreadsheet", + url="https://docs.google.com/spreadsheets/d/1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms/edit", + iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png", + isFolder=False, + _credentials_id=None, + ), + ), + ], + test_mock={ + "_export_csv": lambda *args, **kwargs: { + "csv_data": "Name,Email,Status\nJohn,john@test.com,Active\n", + "row_count": 2, + }, + }, + ) + + async def run( + self, input_data: Input, *, credentials: GoogleCredentials, **kwargs + ) -> BlockOutput: + if not input_data.spreadsheet: + yield "error", "No spreadsheet selected" + return + + validation_error = _validate_spreadsheet_file(input_data.spreadsheet) + if validation_error: + yield "error", 
validation_error + return + + try: + service = _build_sheets_service(credentials) + result = await asyncio.to_thread( + self._export_csv, + service, + input_data.spreadsheet.id, + input_data.sheet_name, + input_data.include_headers, + ) + yield "csv_data", result["csv_data"] + yield "row_count", result["row_count"] + yield "spreadsheet", GoogleDriveFile( + id=input_data.spreadsheet.id, + name=input_data.spreadsheet.name, + mimeType="application/vnd.google-apps.spreadsheet", + url=f"https://docs.google.com/spreadsheets/d/{input_data.spreadsheet.id}/edit", + iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png", + isFolder=False, + _credentials_id=input_data.spreadsheet.credentials_id, + ) + except Exception as e: + yield "error", f"Failed to export CSV: {str(e)}" + + def _export_csv( + self, + service, + spreadsheet_id: str, + sheet_name: str, + include_headers: bool, + ) -> dict: + target_sheet = resolve_sheet_name(service, spreadsheet_id, sheet_name or None) + range_name = f"'{target_sheet}'" + + result = ( + service.spreadsheets() + .values() + .get(spreadsheetId=spreadsheet_id, range=range_name) + .execute() + ) + + rows = result.get("values", []) + + # Skip header row if not including headers + if not include_headers and rows: + rows = rows[1:] + + output = io.StringIO() + writer = csv.writer(output) + for row in rows: + writer.writerow(row) + + csv_data = output.getvalue() + return {"csv_data": csv_data, "row_count": len(rows)} + + +class GoogleSheetsImportCsvBlock(Block): + """Import CSV data into a sheet.""" + + class Input(BlockSchemaInput): + spreadsheet: GoogleDriveFile = GoogleDriveFileField( + title="Spreadsheet", + description="The spreadsheet to import into", + credentials_kwarg="credentials", + allowed_views=["SPREADSHEETS"], + allowed_mime_types=["application/vnd.google-apps.spreadsheet"], + ) + csv_data: str = SchemaField(description="CSV data to import") + sheet_name: str = SchemaField( + default="", + description="Name of the sheet. 
Defaults to first sheet.", + ) + start_cell: str = SchemaField( + default="A1", + description="Cell to start importing at (e.g., A1, B2)", + ) + clear_existing: bool = SchemaField( + default=False, + description="Clear existing data before importing", + ) + + class Output(BlockSchemaOutput): + result: dict = SchemaField(description="Import result") + rows_imported: int = SchemaField(description="Number of rows imported") + spreadsheet: GoogleDriveFile = SchemaField( + description="The spreadsheet for chaining" + ) + error: str = SchemaField(description="Error message if import failed") + + def __init__(self): + super().__init__( + id="cb992884-1ff2-450a-8f1b-7650d63e3aa0", + description="Import CSV data into a Google Sheet", + categories={BlockCategory.DATA}, + input_schema=GoogleSheetsImportCsvBlock.Input, + output_schema=GoogleSheetsImportCsvBlock.Output, + disabled=GOOGLE_SHEETS_DISABLED, + test_input={ + "spreadsheet": { + "id": "1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms", + "name": "Test Spreadsheet", + "mimeType": "application/vnd.google-apps.spreadsheet", + }, + "csv_data": "Name,Email,Status\nJohn,john@test.com,Active\n", + }, + test_credentials=TEST_CREDENTIALS, + test_output=[ + ("result", {"success": True}), + ("rows_imported", 2), + ( + "spreadsheet", + GoogleDriveFile( + id="1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms", + name="Test Spreadsheet", + mimeType="application/vnd.google-apps.spreadsheet", + url="https://docs.google.com/spreadsheets/d/1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms/edit", + iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png", + isFolder=False, + _credentials_id=None, + ), + ), + ], + test_mock={ + "_import_csv": lambda *args, **kwargs: { + "success": True, + "rows_imported": 2, + }, + }, + ) + + async def run( + self, input_data: Input, *, credentials: GoogleCredentials, **kwargs + ) -> BlockOutput: + if not input_data.spreadsheet: + yield "error", "No spreadsheet selected" + return + + validation_error = _validate_spreadsheet_file(input_data.spreadsheet) + if validation_error: + yield "error", validation_error + return + + try: + service = _build_sheets_service(credentials) + result = await asyncio.to_thread( + self._import_csv, + service, + input_data.spreadsheet.id, + input_data.csv_data, + input_data.sheet_name, + input_data.start_cell, + input_data.clear_existing, + ) + yield "result", {"success": True} + yield "rows_imported", result["rows_imported"] + yield "spreadsheet", GoogleDriveFile( + id=input_data.spreadsheet.id, + name=input_data.spreadsheet.name, + mimeType="application/vnd.google-apps.spreadsheet", + url=f"https://docs.google.com/spreadsheets/d/{input_data.spreadsheet.id}/edit", + iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png", + isFolder=False, + _credentials_id=input_data.spreadsheet.credentials_id, + ) + except Exception as e: + yield "error", f"Failed to import CSV: {str(e)}" + + def _import_csv( + self, + service, + spreadsheet_id: str, + csv_data: str, + sheet_name: str, + start_cell: str, + clear_existing: bool, + ) -> dict: + target_sheet = resolve_sheet_name(service, spreadsheet_id, sheet_name or None) + + # Parse CSV data + reader = csv.reader(io.StringIO(csv_data)) + rows = list(reader) + + if not rows: + return {"success": True, "rows_imported": 0} + + # Clear existing data if requested + if clear_existing: + service.spreadsheets().values().clear( + spreadsheetId=spreadsheet_id, + range=f"'{target_sheet}'", + ).execute() + + # Write data + range_name = 
f"'{target_sheet}'!{start_cell}" + service.spreadsheets().values().update( + spreadsheetId=spreadsheet_id, + range=range_name, + valueInputOption="RAW", + body={"values": rows}, + ).execute() + + return {"success": True, "rows_imported": len(rows)} + + +class GoogleSheetsAddNoteBlock(Block): + """Add a note (comment) to a cell.""" + + class Input(BlockSchemaInput): + spreadsheet: GoogleDriveFile = GoogleDriveFileField( + title="Spreadsheet", + description="The spreadsheet to add note to", + credentials_kwarg="credentials", + allowed_views=["SPREADSHEETS"], + allowed_mime_types=["application/vnd.google-apps.spreadsheet"], + ) + cell: str = SchemaField( + description="Cell to add note to (e.g., A1, B2)", + ) + note: str = SchemaField(description="Note text to add") + sheet_name: str = SchemaField( + default="", + description="Name of the sheet. Defaults to first sheet.", + ) + + class Output(BlockSchemaOutput): + result: dict = SchemaField(description="Result of the operation") + spreadsheet: GoogleDriveFile = SchemaField( + description="The spreadsheet for chaining" + ) + error: str = SchemaField(description="Error message if operation failed") + + def __init__(self): + super().__init__( + id="774ac529-74f9-41da-bbba-6a06a51a5d7e", + description="Add a note to a cell in a Google Sheet", + categories={BlockCategory.DATA}, + input_schema=GoogleSheetsAddNoteBlock.Input, + output_schema=GoogleSheetsAddNoteBlock.Output, + disabled=GOOGLE_SHEETS_DISABLED, + test_input={ + "spreadsheet": { + "id": "1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms", + "name": "Test Spreadsheet", + "mimeType": "application/vnd.google-apps.spreadsheet", + }, + "cell": "A1", + "note": "This is a test note", + }, + test_credentials=TEST_CREDENTIALS, + test_output=[ + ("result", {"success": True}), + ( + "spreadsheet", + GoogleDriveFile( + id="1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms", + name="Test Spreadsheet", + mimeType="application/vnd.google-apps.spreadsheet", + url="https://docs.google.com/spreadsheets/d/1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms/edit", + iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png", + isFolder=False, + _credentials_id=None, + ), + ), + ], + test_mock={ + "_add_note": lambda *args, **kwargs: {"success": True}, + }, + ) + + async def run( + self, input_data: Input, *, credentials: GoogleCredentials, **kwargs + ) -> BlockOutput: + if not input_data.spreadsheet: + yield "error", "No spreadsheet selected" + return + + validation_error = _validate_spreadsheet_file(input_data.spreadsheet) + if validation_error: + yield "error", validation_error + return + + try: + service = _build_sheets_service(credentials) + await asyncio.to_thread( + self._add_note, + service, + input_data.spreadsheet.id, + input_data.sheet_name, + input_data.cell, + input_data.note, + ) + yield "result", {"success": True} + yield "spreadsheet", GoogleDriveFile( + id=input_data.spreadsheet.id, + name=input_data.spreadsheet.name, + mimeType="application/vnd.google-apps.spreadsheet", + url=f"https://docs.google.com/spreadsheets/d/{input_data.spreadsheet.id}/edit", + iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png", + isFolder=False, + _credentials_id=input_data.spreadsheet.credentials_id, + ) + except Exception as e: + yield "error", f"Failed to add note: {str(e)}" + + def _add_note( + self, + service, + spreadsheet_id: str, + sheet_name: str, + cell: str, + note: str, + ) -> dict: + target_sheet = resolve_sheet_name(service, spreadsheet_id, sheet_name or None) + sheet_id = 
sheet_id_by_name(service, spreadsheet_id, target_sheet) + + if sheet_id is None: + raise ValueError(f"Sheet '{target_sheet}' not found") + + # Parse cell reference + match = re.match(r"([A-Z]+)(\d+)", cell.upper()) + if not match: + raise ValueError(f"Invalid cell reference: {cell}") + + col_index = _column_letter_to_index(match.group(1)) + row_index = int(match.group(2)) - 1 + + request = { + "updateCells": { + "rows": [{"values": [{"note": note}]}], + "fields": "note", + "start": { + "sheetId": sheet_id, + "rowIndex": row_index, + "columnIndex": col_index, + }, + } + } + + service.spreadsheets().batchUpdate( + spreadsheetId=spreadsheet_id, body={"requests": [request]} + ).execute() + + return {"success": True} + + +class GoogleSheetsGetNotesBlock(Block): + """Get notes from cells in a range.""" + + class Input(BlockSchemaInput): + spreadsheet: GoogleDriveFile = GoogleDriveFileField( + title="Spreadsheet", + description="The spreadsheet to get notes from", + credentials_kwarg="credentials", + allowed_views=["SPREADSHEETS"], + allowed_mime_types=["application/vnd.google-apps.spreadsheet"], + ) + range: str = SchemaField( + default="A1:Z100", + description="Range to get notes from (e.g., A1:B10)", + ) + sheet_name: str = SchemaField( + default="", + description="Name of the sheet. Defaults to first sheet.", + ) + + class Output(BlockSchemaOutput): + notes: list[dict] = SchemaField(description="List of notes with cell and text") + count: int = SchemaField(description="Number of notes found") + spreadsheet: GoogleDriveFile = SchemaField( + description="The spreadsheet for chaining" + ) + error: str = SchemaField(description="Error message if operation failed") + + def __init__(self): + super().__init__( + id="fa16834f-fff4-4d7a-9f7f-531ced90492b", + description="Get notes from cells in a Google Sheet", + categories={BlockCategory.DATA}, + input_schema=GoogleSheetsGetNotesBlock.Input, + output_schema=GoogleSheetsGetNotesBlock.Output, + disabled=GOOGLE_SHEETS_DISABLED, + test_input={ + "spreadsheet": { + "id": "1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms", + "name": "Test Spreadsheet", + "mimeType": "application/vnd.google-apps.spreadsheet", + }, + }, + test_credentials=TEST_CREDENTIALS, + test_output=[ + ( + "notes", + [ + {"cell": "A1", "note": "Header note"}, + {"cell": "B2", "note": "Data note"}, + ], + ), + ("count", 2), + ( + "spreadsheet", + GoogleDriveFile( + id="1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms", + name="Test Spreadsheet", + mimeType="application/vnd.google-apps.spreadsheet", + url="https://docs.google.com/spreadsheets/d/1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms/edit", + iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png", + isFolder=False, + _credentials_id=None, + ), + ), + ], + test_mock={ + "_get_notes": lambda *args, **kwargs: { + "notes": [ + {"cell": "A1", "note": "Header note"}, + {"cell": "B2", "note": "Data note"}, + ], + }, + }, + ) + + async def run( + self, input_data: Input, *, credentials: GoogleCredentials, **kwargs + ) -> BlockOutput: + if not input_data.spreadsheet: + yield "error", "No spreadsheet selected" + return + + validation_error = _validate_spreadsheet_file(input_data.spreadsheet) + if validation_error: + yield "error", validation_error + return + + try: + service = _build_sheets_service(credentials) + result = await asyncio.to_thread( + self._get_notes, + service, + input_data.spreadsheet.id, + input_data.sheet_name, + input_data.range, + ) + notes = result["notes"] + yield "notes", notes + yield "count", len(notes) + 
yield "spreadsheet", GoogleDriveFile( + id=input_data.spreadsheet.id, + name=input_data.spreadsheet.name, + mimeType="application/vnd.google-apps.spreadsheet", + url=f"https://docs.google.com/spreadsheets/d/{input_data.spreadsheet.id}/edit", + iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png", + isFolder=False, + _credentials_id=input_data.spreadsheet.credentials_id, + ) + except Exception as e: + yield "error", f"Failed to get notes: {str(e)}" + + def _get_notes( + self, + service, + spreadsheet_id: str, + sheet_name: str, + range_str: str, + ) -> dict: + + target_sheet = resolve_sheet_name(service, spreadsheet_id, sheet_name or None) + full_range = f"'{target_sheet}'!{range_str}" + + # Get spreadsheet data including notes + result = ( + service.spreadsheets() + .get( + spreadsheetId=spreadsheet_id, + ranges=[full_range], + includeGridData=True, + ) + .execute() + ) + + notes = [] + sheets = result.get("sheets", []) + + for sheet in sheets: + data = sheet.get("data", []) + for grid_data in data: + start_row = grid_data.get("startRow", 0) + start_col = grid_data.get("startColumn", 0) + row_data = grid_data.get("rowData", []) + + for row_idx, row in enumerate(row_data): + values = row.get("values", []) + for col_idx, cell in enumerate(values): + note = cell.get("note") + if note: + col_letter = _index_to_column_letter(start_col + col_idx) + cell_ref = f"{col_letter}{start_row + row_idx + 1}" + notes.append({"cell": cell_ref, "note": note}) + + return {"notes": notes} + + +class GoogleSheetsShareSpreadsheetBlock(Block): + """Share a spreadsheet with specific users or make it accessible.""" + + class Input(BlockSchemaInput): + spreadsheet: GoogleDriveFile = GoogleDriveFileField( + title="Spreadsheet", + description="The spreadsheet to share", + credentials_kwarg="credentials", + allowed_views=["SPREADSHEETS"], + allowed_mime_types=["application/vnd.google-apps.spreadsheet"], + ) + email: str = SchemaField( + default="", + description="Email address to share with. 
Leave empty for link sharing.", + ) + role: ShareRole = SchemaField( + default=ShareRole.READER, + description="Permission role for the user", + ) + send_notification: bool = SchemaField( + default=True, + description="Send notification email to the user", + ) + message: str = SchemaField( + default="", + description="Optional message to include in notification email", + ) + + class Output(BlockSchemaOutput): + result: dict = SchemaField(description="Result of the share operation") + share_link: str = SchemaField(description="Link to the spreadsheet") + spreadsheet: GoogleDriveFile = SchemaField( + description="The spreadsheet for chaining" + ) + error: str = SchemaField(description="Error message if share failed") + + def __init__(self): + super().__init__( + id="3e47e8ac-511a-4eb6-89c5-a6bcedc4236f", + description="Share a Google Spreadsheet with users or get shareable link", + categories={BlockCategory.DATA}, + input_schema=GoogleSheetsShareSpreadsheetBlock.Input, + output_schema=GoogleSheetsShareSpreadsheetBlock.Output, + disabled=GOOGLE_SHEETS_DISABLED, + test_input={ + "spreadsheet": { + "id": "1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms", + "name": "Test Spreadsheet", + "mimeType": "application/vnd.google-apps.spreadsheet", + }, + "email": "test@example.com", + "role": "reader", + }, + test_credentials=TEST_CREDENTIALS, + test_output=[ + ("result", {"success": True}), + ( + "share_link", + "https://docs.google.com/spreadsheets/d/1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms/edit", + ), + ( + "spreadsheet", + GoogleDriveFile( + id="1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms", + name="Test Spreadsheet", + mimeType="application/vnd.google-apps.spreadsheet", + url="https://docs.google.com/spreadsheets/d/1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms/edit", + iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png", + isFolder=False, + _credentials_id=None, + ), + ), + ], + test_mock={ + "_share_spreadsheet": lambda *args, **kwargs: { + "success": True, + "share_link": "https://docs.google.com/spreadsheets/d/1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms/edit", + }, + }, + ) + + async def run( + self, input_data: Input, *, credentials: GoogleCredentials, **kwargs + ) -> BlockOutput: + if not input_data.spreadsheet: + yield "error", "No spreadsheet selected" + return + + validation_error = _validate_spreadsheet_file(input_data.spreadsheet) + if validation_error: + yield "error", validation_error + return + + try: + service = _build_drive_service(credentials) + result = await asyncio.to_thread( + self._share_spreadsheet, + service, + input_data.spreadsheet.id, + input_data.email, + input_data.role, + input_data.send_notification, + input_data.message, + ) + yield "result", {"success": True} + yield "share_link", result["share_link"] + yield "spreadsheet", GoogleDriveFile( + id=input_data.spreadsheet.id, + name=input_data.spreadsheet.name, + mimeType="application/vnd.google-apps.spreadsheet", + url=f"https://docs.google.com/spreadsheets/d/{input_data.spreadsheet.id}/edit", + iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png", + isFolder=False, + _credentials_id=input_data.spreadsheet.credentials_id, + ) + except Exception as e: + yield "error", f"Failed to share spreadsheet: {str(e)}" + + def _share_spreadsheet( + self, + service, + spreadsheet_id: str, + email: str, + role: ShareRole, + send_notification: bool, + message: str, + ) -> dict: + share_link = f"https://docs.google.com/spreadsheets/d/{spreadsheet_id}/edit" + + if email: + # Share with 
specific user + permission = {"type": "user", "role": role.value, "emailAddress": email} + + kwargs: dict = { + "fileId": spreadsheet_id, + "body": permission, + "sendNotificationEmail": send_notification, + } + if message: + kwargs["emailMessage"] = message + + service.permissions().create(**kwargs).execute() + else: + # Get shareable link - use reader or commenter only (writer not allowed for "anyone") + link_role = "reader" if role == ShareRole.WRITER else role.value + permission = {"type": "anyone", "role": link_role} + service.permissions().create( + fileId=spreadsheet_id, body=permission + ).execute() + share_link += "?usp=sharing" + + return {"success": True, "share_link": share_link} + + +class GoogleSheetsSetPublicAccessBlock(Block): + """Make a spreadsheet publicly accessible or private.""" + + class Input(BlockSchemaInput): + spreadsheet: GoogleDriveFile = GoogleDriveFileField( + title="Spreadsheet", + description="The spreadsheet to modify access for", + credentials_kwarg="credentials", + allowed_views=["SPREADSHEETS"], + allowed_mime_types=["application/vnd.google-apps.spreadsheet"], + ) + public: bool = SchemaField( + default=True, + description="True to make public, False to make private", + ) + role: PublicAccessRole = SchemaField( + default=PublicAccessRole.READER, + description="Permission role for public access", + ) + + class Output(BlockSchemaOutput): + result: dict = SchemaField(description="Result of the operation") + share_link: str = SchemaField(description="Link to the spreadsheet") + spreadsheet: GoogleDriveFile = SchemaField( + description="The spreadsheet for chaining" + ) + error: str = SchemaField(description="Error message if operation failed") + + def __init__(self): + super().__init__( + id="d08d46cd-088b-4ba7-a545-45050f33b889", + description="Make a Google Spreadsheet public or private", + categories={BlockCategory.DATA}, + input_schema=GoogleSheetsSetPublicAccessBlock.Input, + output_schema=GoogleSheetsSetPublicAccessBlock.Output, + disabled=GOOGLE_SHEETS_DISABLED, + test_input={ + "spreadsheet": { + "id": "1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms", + "name": "Test Spreadsheet", + "mimeType": "application/vnd.google-apps.spreadsheet", + }, + "public": True, + }, + test_credentials=TEST_CREDENTIALS, + test_output=[ + ("result", {"success": True, "is_public": True}), + ( + "share_link", + "https://docs.google.com/spreadsheets/d/1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms/edit?usp=sharing", + ), + ( + "spreadsheet", + GoogleDriveFile( + id="1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms", + name="Test Spreadsheet", + mimeType="application/vnd.google-apps.spreadsheet", + url="https://docs.google.com/spreadsheets/d/1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms/edit", + iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png", + isFolder=False, + _credentials_id=None, + ), + ), + ], + test_mock={ + "_set_public_access": lambda *args, **kwargs: { + "success": True, + "is_public": True, + "share_link": "https://docs.google.com/spreadsheets/d/1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms/edit?usp=sharing", + }, + }, + ) + + async def run( + self, input_data: Input, *, credentials: GoogleCredentials, **kwargs + ) -> BlockOutput: + if not input_data.spreadsheet: + yield "error", "No spreadsheet selected" + return + + validation_error = _validate_spreadsheet_file(input_data.spreadsheet) + if validation_error: + yield "error", validation_error + return + + try: + service = _build_drive_service(credentials) + result = await asyncio.to_thread( + 
self._set_public_access, + service, + input_data.spreadsheet.id, + input_data.public, + input_data.role, + ) + yield "result", {"success": True, "is_public": result["is_public"]} + yield "share_link", result["share_link"] + yield "spreadsheet", GoogleDriveFile( + id=input_data.spreadsheet.id, + name=input_data.spreadsheet.name, + mimeType="application/vnd.google-apps.spreadsheet", + url=f"https://docs.google.com/spreadsheets/d/{input_data.spreadsheet.id}/edit", + iconUrl="https://www.gstatic.com/images/branding/product/1x/sheets_48dp.png", + isFolder=False, + _credentials_id=input_data.spreadsheet.credentials_id, + ) + except Exception as e: + yield "error", f"Failed to set public access: {str(e)}" + + def _set_public_access( + self, + service, + spreadsheet_id: str, + public: bool, + role: PublicAccessRole, + ) -> dict: + share_link = f"https://docs.google.com/spreadsheets/d/{spreadsheet_id}/edit" + + if public: + # Make public + permission = {"type": "anyone", "role": role.value} + service.permissions().create( + fileId=spreadsheet_id, body=permission + ).execute() + share_link += "?usp=sharing" + else: + # Make private - remove 'anyone' permissions + permissions = service.permissions().list(fileId=spreadsheet_id).execute() + for perm in permissions.get("permissions", []): + if perm.get("type") == "anyone": + service.permissions().delete( + fileId=spreadsheet_id, permissionId=perm["id"] + ).execute() + + return {"success": True, "is_public": public, "share_link": share_link} diff --git a/autogpt_platform/backend/backend/blocks/google_maps.py b/autogpt_platform/backend/backend/blocks/google_maps.py index 01e81c69c9..2ee2959326 100644 --- a/autogpt_platform/backend/backend/blocks/google_maps.py +++ b/autogpt_platform/backend/backend/blocks/google_maps.py @@ -3,7 +3,13 @@ from typing import Literal import googlemaps from pydantic import BaseModel, SecretStr -from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema +from backend.data.block import ( + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, +) from backend.data.model import ( APIKeyCredentials, CredentialsField, @@ -37,7 +43,7 @@ class Place(BaseModel): class GoogleMapsSearchBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: CredentialsMetaInput[ Literal[ProviderName.GOOGLE_MAPS], Literal["api_key"] ] = CredentialsField(description="Google Maps API Key") @@ -58,9 +64,8 @@ class GoogleMapsSearchBlock(Block): le=60, ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): place: Place = SchemaField(description="Place found") - error: str = SchemaField(description="Error message if the search failed") def __init__(self): super().__init__( diff --git a/autogpt_platform/backend/backend/blocks/http.py b/autogpt_platform/backend/backend/blocks/http.py index c07c1ca508..9b27a3b129 100644 --- a/autogpt_platform/backend/backend/blocks/http.py +++ b/autogpt_platform/backend/backend/blocks/http.py @@ -8,7 +8,13 @@ from typing import Literal import aiofiles from pydantic import SecretStr -from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema +from backend.data.block import ( + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, +) from backend.data.model import ( CredentialsField, CredentialsMetaInput, @@ -62,7 +68,7 @@ class HttpMethod(Enum): class SendWebRequestBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): url: str = SchemaField( description="The URL to send the 
request to", placeholder="https://api.example.com", @@ -93,7 +99,7 @@ class SendWebRequestBlock(Block): default_factory=list, ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): response: object = SchemaField(description="The response from the server") client_error: object = SchemaField(description="Errors on 4xx status codes") server_error: object = SchemaField(description="Errors on 5xx status codes") @@ -178,7 +184,13 @@ class SendWebRequestBlock(Block): ) # ─── Execute request ───────────────────────────────────────── - response = await Requests().request( + # Use raise_for_status=False so HTTP errors (4xx, 5xx) are returned + # as response objects instead of raising exceptions, allowing proper + # handling via client_error and server_error outputs + response = await Requests( + raise_for_status=False, + retry_max_attempts=1, # allow callers to handle HTTP errors immediately + ).request( input_data.method.value, input_data.url, headers=input_data.headers, diff --git a/autogpt_platform/backend/backend/blocks/hubspot/company.py b/autogpt_platform/backend/backend/blocks/hubspot/company.py index 3026112259..dee9169e59 100644 --- a/autogpt_platform/backend/backend/blocks/hubspot/company.py +++ b/autogpt_platform/backend/backend/blocks/hubspot/company.py @@ -3,13 +3,19 @@ from backend.blocks.hubspot._auth import ( HubSpotCredentialsField, HubSpotCredentialsInput, ) -from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema +from backend.data.block import ( + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, +) from backend.data.model import SchemaField from backend.util.request import Requests class HubSpotCompanyBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: HubSpotCredentialsInput = HubSpotCredentialsField() operation: str = SchemaField( description="Operation to perform (create, update, get)", default="get" @@ -22,7 +28,7 @@ class HubSpotCompanyBlock(Block): description="Company domain for get/update operations", default="" ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): company: dict = SchemaField(description="Company information") status: str = SchemaField(description="Operation status") diff --git a/autogpt_platform/backend/backend/blocks/hubspot/contact.py b/autogpt_platform/backend/backend/blocks/hubspot/contact.py index 2029adaca1..b4451c3b8b 100644 --- a/autogpt_platform/backend/backend/blocks/hubspot/contact.py +++ b/autogpt_platform/backend/backend/blocks/hubspot/contact.py @@ -3,13 +3,19 @@ from backend.blocks.hubspot._auth import ( HubSpotCredentialsField, HubSpotCredentialsInput, ) -from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema +from backend.data.block import ( + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, +) from backend.data.model import SchemaField from backend.util.request import Requests class HubSpotContactBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: HubSpotCredentialsInput = HubSpotCredentialsField() operation: str = SchemaField( description="Operation to perform (create, update, get)", default="get" @@ -22,7 +28,7 @@ class HubSpotContactBlock(Block): description="Email address for get/update operations", default="" ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): contact: dict = SchemaField(description="Contact information") status: str = SchemaField(description="Operation status") diff --git 
a/autogpt_platform/backend/backend/blocks/hubspot/engagement.py b/autogpt_platform/backend/backend/blocks/hubspot/engagement.py index 7e4dbc3d01..683607c5b3 100644 --- a/autogpt_platform/backend/backend/blocks/hubspot/engagement.py +++ b/autogpt_platform/backend/backend/blocks/hubspot/engagement.py @@ -5,13 +5,19 @@ from backend.blocks.hubspot._auth import ( HubSpotCredentialsField, HubSpotCredentialsInput, ) -from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema +from backend.data.block import ( + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, +) from backend.data.model import SchemaField from backend.util.request import Requests class HubSpotEngagementBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: HubSpotCredentialsInput = HubSpotCredentialsField() operation: str = SchemaField( description="Operation to perform (send_email, track_engagement)", @@ -29,7 +35,7 @@ class HubSpotEngagementBlock(Block): default=30, ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): result: dict = SchemaField(description="Operation result") status: str = SchemaField(description="Operation status") diff --git a/autogpt_platform/backend/backend/blocks/human_in_the_loop.py b/autogpt_platform/backend/backend/blocks/human_in_the_loop.py new file mode 100644 index 0000000000..13c9fb31db --- /dev/null +++ b/autogpt_platform/backend/backend/blocks/human_in_the_loop.py @@ -0,0 +1,166 @@ +import logging +from typing import Any + +from prisma.enums import ReviewStatus + +from backend.data.block import ( + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, + BlockType, +) +from backend.data.execution import ExecutionContext, ExecutionStatus +from backend.data.human_review import ReviewResult +from backend.data.model import SchemaField +from backend.executor.manager import async_update_node_execution_status +from backend.util.clients import get_database_manager_async_client + +logger = logging.getLogger(__name__) + + +class HumanInTheLoopBlock(Block): + """ + This block pauses execution and waits for human approval or modification of the data. + + When executed, it creates a pending review entry and sets the node execution status + to REVIEW. The execution will remain paused until a human user either: + - Approves the data (with or without modifications) + - Rejects the data + + This is useful for workflows that require human validation or intervention before + proceeding to the next steps. 
+ """ + + class Input(BlockSchemaInput): + data: Any = SchemaField(description="The data to be reviewed by a human user") + name: str = SchemaField( + description="A descriptive name for what this data represents", + ) + editable: bool = SchemaField( + description="Whether the human reviewer can edit the data", + default=True, + advanced=True, + ) + + class Output(BlockSchemaOutput): + approved_data: Any = SchemaField( + description="The data when approved (may be modified by reviewer)" + ) + rejected_data: Any = SchemaField( + description="The data when rejected (may be modified by reviewer)" + ) + review_message: str = SchemaField( + description="Any message provided by the reviewer", default="" + ) + + def __init__(self): + super().__init__( + id="8b2a7b3c-6e9d-4a5f-8c1b-2e3f4a5b6c7d", + description="Pause execution and wait for human approval or modification of data", + categories={BlockCategory.BASIC}, + input_schema=HumanInTheLoopBlock.Input, + output_schema=HumanInTheLoopBlock.Output, + block_type=BlockType.HUMAN_IN_THE_LOOP, + test_input={ + "data": {"name": "John Doe", "age": 30}, + "name": "User profile data", + "editable": True, + }, + test_output=[ + ("approved_data", {"name": "John Doe", "age": 30}), + ], + test_mock={ + "get_or_create_human_review": lambda *_args, **_kwargs: ReviewResult( + data={"name": "John Doe", "age": 30}, + status=ReviewStatus.APPROVED, + message="", + processed=False, + node_exec_id="test-node-exec-id", + ), + "update_node_execution_status": lambda *_args, **_kwargs: None, + "update_review_processed_status": lambda *_args, **_kwargs: None, + }, + ) + + async def get_or_create_human_review(self, **kwargs): + return await get_database_manager_async_client().get_or_create_human_review( + **kwargs + ) + + async def update_node_execution_status(self, **kwargs): + return await async_update_node_execution_status( + db_client=get_database_manager_async_client(), **kwargs + ) + + async def update_review_processed_status(self, node_exec_id: str, processed: bool): + return await get_database_manager_async_client().update_review_processed_status( + node_exec_id, processed + ) + + async def run( + self, + input_data: Input, + *, + user_id: str, + node_exec_id: str, + graph_exec_id: str, + graph_id: str, + graph_version: int, + execution_context: ExecutionContext, + **kwargs, + ) -> BlockOutput: + if not execution_context.safe_mode: + logger.info( + f"HITL block skipping review for node {node_exec_id} - safe mode disabled" + ) + yield "approved_data", input_data.data + yield "review_message", "Auto-approved (safe mode disabled)" + return + + try: + result = await self.get_or_create_human_review( + user_id=user_id, + node_exec_id=node_exec_id, + graph_exec_id=graph_exec_id, + graph_id=graph_id, + graph_version=graph_version, + input_data=input_data.data, + message=input_data.name, + editable=input_data.editable, + ) + except Exception as e: + logger.error(f"Error in HITL block for node {node_exec_id}: {str(e)}") + raise + + if result is None: + logger.info( + f"HITL block pausing execution for node {node_exec_id} - awaiting human review" + ) + try: + await self.update_node_execution_status( + exec_id=node_exec_id, + status=ExecutionStatus.REVIEW, + ) + return + except Exception as e: + logger.error( + f"Failed to update node status for HITL block {node_exec_id}: {str(e)}" + ) + raise + + if not result.processed: + await self.update_review_processed_status( + node_exec_id=node_exec_id, processed=True + ) + + if result.status == ReviewStatus.APPROVED: + yield 
"approved_data", result.data + if result.message: + yield "review_message", result.message + + elif result.status == ReviewStatus.REJECTED: + yield "rejected_data", result.data + if result.message: + yield "review_message", result.message diff --git a/autogpt_platform/backend/backend/blocks/ideogram.py b/autogpt_platform/backend/backend/blocks/ideogram.py index ef5aca2489..09a384c74a 100644 --- a/autogpt_platform/backend/backend/blocks/ideogram.py +++ b/autogpt_platform/backend/backend/blocks/ideogram.py @@ -2,9 +2,14 @@ from enum import Enum from typing import Any, Dict, Literal, Optional from pydantic import SecretStr -from requests.exceptions import RequestException -from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema +from backend.data.block import ( + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, +) from backend.data.model import ( APIKeyCredentials, CredentialsField, @@ -84,7 +89,7 @@ class UpscaleOption(str, Enum): class IdeogramModelBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: CredentialsMetaInput[ Literal[ProviderName.IDEOGRAM], Literal["api_key"] ] = CredentialsField( @@ -154,9 +159,8 @@ class IdeogramModelBlock(Block): advanced=True, ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): result: str = SchemaField(description="Generated image URL") - error: str = SchemaField(description="Error message if the model run failed") def __init__(self): super().__init__( @@ -327,8 +331,8 @@ class IdeogramModelBlock(Block): try: response = await Requests().post(url, headers=headers, json=data) return response.json()["data"][0]["url"] - except RequestException as e: - raise Exception(f"Failed to fetch image with V3 endpoint: {str(e)}") + except Exception as e: + raise ValueError(f"Failed to fetch image with V3 endpoint: {e}") from e async def _run_model_legacy( self, @@ -380,8 +384,8 @@ class IdeogramModelBlock(Block): try: response = await Requests().post(url, headers=headers, json=data) return response.json()["data"][0]["url"] - except RequestException as e: - raise Exception(f"Failed to fetch image with legacy endpoint: {str(e)}") + except Exception as e: + raise ValueError(f"Failed to fetch image with legacy endpoint: {e}") from e async def upscale_image(self, api_key: SecretStr, image_url: str): url = "https://api.ideogram.ai/upscale" @@ -408,5 +412,5 @@ class IdeogramModelBlock(Block): return (response.json())["data"][0]["url"] - except RequestException as e: - raise Exception(f"Failed to upscale image: {str(e)}") + except Exception as e: + raise ValueError(f"Failed to upscale image: {e}") from e diff --git a/autogpt_platform/backend/backend/blocks/io.py b/autogpt_platform/backend/backend/blocks/io.py index 37618671eb..07f09eb349 100644 --- a/autogpt_platform/backend/backend/blocks/io.py +++ b/autogpt_platform/backend/backend/blocks/io.py @@ -2,7 +2,16 @@ import copy from datetime import date, time from typing import Any, Optional -from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema, BlockType +# Import for Google Drive file input block +from backend.blocks.google._drive import AttachmentView, GoogleDriveFile +from backend.data.block import ( + Block, + BlockCategory, + BlockOutput, + BlockSchema, + BlockSchemaInput, + BlockType, +) from backend.data.model import SchemaField from backend.util.file import store_media_file from backend.util.mock import MockObject @@ -22,7 +31,7 @@ class AgentInputBlock(Block): It Outputs the value passed as input. 
""" - class Input(BlockSchema): + class Input(BlockSchemaInput): name: str = SchemaField(description="The name of the input.") value: Any = SchemaField( description="The value to be passed as input.", @@ -60,6 +69,7 @@ class AgentInputBlock(Block): return schema class Output(BlockSchema): + # Use BlockSchema to avoid automatic error field for interface definition result: Any = SchemaField(description="The value passed as input.") def __init__(self, **kwargs): @@ -109,7 +119,7 @@ class AgentOutputBlock(Block): If formatting fails or no `format` is provided, the raw `value` is output. """ - class Input(BlockSchema): + class Input(BlockSchemaInput): value: Any = SchemaField( description="The value to be recorded as output.", default=None, @@ -151,6 +161,7 @@ class AgentOutputBlock(Block): return self.get_field_schema("value") class Output(BlockSchema): + # Use BlockSchema to avoid automatic error field for interface definition output: Any = SchemaField(description="The value recorded as output.") name: Any = SchemaField(description="The name of the value recorded as output.") @@ -637,6 +648,119 @@ class AgentTableInputBlock(AgentInputBlock): yield "result", input_data.value if input_data.value is not None else [] +class AgentGoogleDriveFileInputBlock(AgentInputBlock): + """ + This block allows users to select a file from Google Drive. + + It provides a Google Drive file picker UI that handles both authentication + and file selection. The selected file information (ID, name, URL, etc.) + is output for use by other blocks like Google Sheets Read. + """ + + class Input(AgentInputBlock.Input): + value: Optional[GoogleDriveFile] = SchemaField( + description="The selected Google Drive file.", + default=None, + advanced=False, + title="Selected File", + ) + allowed_views: list[AttachmentView] = SchemaField( + description="Which views to show in the file picker (DOCS, SPREADSHEETS, PRESENTATIONS, etc.).", + default_factory=lambda: ["DOCS", "SPREADSHEETS", "PRESENTATIONS"], + advanced=False, + title="Allowed Views", + ) + allow_folder_selection: bool = SchemaField( + description="Whether to allow selecting folders.", + default=False, + advanced=True, + title="Allow Folder Selection", + ) + + def generate_schema(self): + """Generate schema for the value field with Google Drive picker format.""" + schema = super().generate_schema() + + # Default scopes for drive.file access + scopes = ["https://www.googleapis.com/auth/drive.file"] + + # Build picker configuration + picker_config = { + "multiselect": False, # Single file selection only for now + "allow_folder_selection": self.allow_folder_selection, + "allowed_views": ( + list(self.allowed_views) if self.allowed_views else ["DOCS"] + ), + "scopes": scopes, + # Auto-credentials config tells frontend to include _credentials_id in output + "auto_credentials": { + "provider": "google", + "type": "oauth2", + "scopes": scopes, + "kwarg_name": "credentials", + }, + } + + # Set format and config for frontend to render Google Drive picker + schema["format"] = "google-drive-picker" + schema["google_drive_picker_config"] = picker_config + # Also keep auto_credentials at top level for backend detection + schema["auto_credentials"] = { + "provider": "google", + "type": "oauth2", + "scopes": scopes, + "kwarg_name": "credentials", + } + + if self.value is not None: + schema["default"] = self.value.model_dump() + + return schema + + class Output(AgentInputBlock.Output): + result: GoogleDriveFile = SchemaField( + description="The selected Google Drive file with ID, name, 
URL, and other metadata." + ) + + def __init__(self): + test_file = GoogleDriveFile.model_validate( + { + "id": "test-file-id", + "name": "Test Spreadsheet", + "mimeType": "application/vnd.google-apps.spreadsheet", + "url": "https://docs.google.com/spreadsheets/d/test-file-id", + } + ) + super().__init__( + id="d3b32f15-6fd7-40e3-be52-e083f51b19a2", + description="Block for selecting a file from Google Drive.", + disabled=not config.enable_agent_input_subtype_blocks, + input_schema=AgentGoogleDriveFileInputBlock.Input, + output_schema=AgentGoogleDriveFileInputBlock.Output, + test_input=[ + { + "name": "spreadsheet_input", + "description": "Select a spreadsheet from Google Drive", + "allowed_views": ["SPREADSHEETS"], + "value": { + "id": "test-file-id", + "name": "Test Spreadsheet", + "mimeType": "application/vnd.google-apps.spreadsheet", + "url": "https://docs.google.com/spreadsheets/d/test-file-id", + }, + } + ], + test_output=[("result", test_file)], + ) + + async def run(self, input_data: Input, *args, **kwargs) -> BlockOutput: + """ + Yields the selected Google Drive file. + """ + if input_data.value is not None: + yield "result", input_data.value + + IO_BLOCK_IDs = [ AgentInputBlock().id, AgentOutputBlock().id, @@ -649,4 +773,5 @@ IO_BLOCK_IDs = [ AgentDropdownInputBlock().id, AgentToggleInputBlock().id, AgentTableInputBlock().id, + AgentGoogleDriveFileInputBlock().id, ] diff --git a/autogpt_platform/backend/backend/blocks/iteration.py b/autogpt_platform/backend/backend/blocks/iteration.py index 45864c5a3d..441f73fc4a 100644 --- a/autogpt_platform/backend/backend/blocks/iteration.py +++ b/autogpt_platform/backend/backend/blocks/iteration.py @@ -1,12 +1,18 @@ from typing import Any -from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema +from backend.data.block import ( + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, +) from backend.data.model import SchemaField from backend.util.json import loads class StepThroughItemsBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): items: list = SchemaField( advanced=False, description="The list or dictionary of items to iterate over", @@ -26,7 +32,7 @@ class StepThroughItemsBlock(Block): default="", ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): item: Any = SchemaField(description="The current item in the iteration") key: Any = SchemaField( description="The key or index of the current item in the iteration", diff --git a/autogpt_platform/backend/backend/blocks/jina/chunking.py b/autogpt_platform/backend/backend/blocks/jina/chunking.py index 052fa8e815..9a9b242aae 100644 --- a/autogpt_platform/backend/backend/blocks/jina/chunking.py +++ b/autogpt_platform/backend/backend/blocks/jina/chunking.py @@ -3,13 +3,19 @@ from backend.blocks.jina._auth import ( JinaCredentialsField, JinaCredentialsInput, ) -from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema +from backend.data.block import ( + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, +) from backend.data.model import SchemaField from backend.util.request import Requests class JinaChunkingBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): texts: list = SchemaField(description="List of texts to chunk") credentials: JinaCredentialsInput = JinaCredentialsField() @@ -20,7 +26,7 @@ class JinaChunkingBlock(Block): description="Whether to return token information", default=False ) - class Output(BlockSchema): + class 
Output(BlockSchemaOutput): chunks: list = SchemaField(description="List of chunked texts") tokens: list = SchemaField( description="List of token information for each chunk", diff --git a/autogpt_platform/backend/backend/blocks/jina/embeddings.py b/autogpt_platform/backend/backend/blocks/jina/embeddings.py index abc2f9d6ae..0f6cf68c6c 100644 --- a/autogpt_platform/backend/backend/blocks/jina/embeddings.py +++ b/autogpt_platform/backend/backend/blocks/jina/embeddings.py @@ -3,13 +3,19 @@ from backend.blocks.jina._auth import ( JinaCredentialsField, JinaCredentialsInput, ) -from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema +from backend.data.block import ( + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, +) from backend.data.model import SchemaField from backend.util.request import Requests class JinaEmbeddingBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): texts: list = SchemaField(description="List of texts to embed") credentials: JinaCredentialsInput = JinaCredentialsField() model: str = SchemaField( @@ -17,7 +23,7 @@ class JinaEmbeddingBlock(Block): default="jina-embeddings-v2-base-en", ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): embeddings: list = SchemaField(description="List of embeddings") def __init__(self): diff --git a/autogpt_platform/backend/backend/blocks/jina/fact_checker.py b/autogpt_platform/backend/backend/blocks/jina/fact_checker.py index 663ce4ae16..3367ab99e6 100644 --- a/autogpt_platform/backend/backend/blocks/jina/fact_checker.py +++ b/autogpt_platform/backend/backend/blocks/jina/fact_checker.py @@ -8,7 +8,13 @@ from backend.blocks.jina._auth import ( JinaCredentialsField, JinaCredentialsInput, ) -from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema +from backend.data.block import ( + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, +) from backend.data.model import SchemaField from backend.util.request import Requests @@ -20,13 +26,13 @@ class Reference(TypedDict): class FactCheckerBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): statement: str = SchemaField( description="The statement to check for factuality" ) credentials: JinaCredentialsInput = JinaCredentialsField() - class Output(BlockSchema): + class Output(BlockSchemaOutput): factuality: float = SchemaField( description="The factuality score of the statement" ) @@ -36,7 +42,6 @@ class FactCheckerBlock(Block): description="List of references supporting or contradicting the statement", default=[], ) - error: str = SchemaField(description="Error message if the check fails") def __init__(self): super().__init__( diff --git a/autogpt_platform/backend/backend/blocks/jina/search.py b/autogpt_platform/backend/backend/blocks/jina/search.py index 90a6eea51c..05cddcc1df 100644 --- a/autogpt_platform/backend/backend/blocks/jina/search.py +++ b/autogpt_platform/backend/backend/blocks/jina/search.py @@ -8,20 +8,26 @@ from backend.blocks.jina._auth import ( JinaCredentialsInput, ) from backend.blocks.search import GetRequest -from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema +from backend.data.block import ( + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, +) from backend.data.model import SchemaField +from backend.util.exceptions import BlockExecutionError class SearchTheWebBlock(Block, GetRequest): - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: 
JinaCredentialsInput = JinaCredentialsField() query: str = SchemaField(description="The search query to search the web for") - class Output(BlockSchema): + class Output(BlockSchemaOutput): results: str = SchemaField( description="The search results including content from top 5 URLs" ) - error: str = SchemaField(description="Error message if the search fails") def __init__(self): super().__init__( @@ -51,14 +57,24 @@ class SearchTheWebBlock(Block, GetRequest): # Prepend the Jina Search URL to the encoded query jina_search_url = f"https://s.jina.ai/{encoded_query}" - results = await self.get_request(jina_search_url, headers=headers, json=False) + + try: + results = await self.get_request( + jina_search_url, headers=headers, json=False + ) + except Exception as e: + raise BlockExecutionError( + message=f"Search failed: {e}", + block_name=self.name, + block_id=self.id, + ) from e # Output the search results yield "results", results class ExtractWebsiteContentBlock(Block, GetRequest): - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: JinaCredentialsInput = JinaCredentialsField() url: str = SchemaField(description="The URL to scrape the content from") raw_content: bool = SchemaField( @@ -68,7 +84,7 @@ class ExtractWebsiteContentBlock(Block, GetRequest): advanced=True, ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): content: str = SchemaField(description="The scraped content from the given URL") error: str = SchemaField( description="Error message if the content cannot be retrieved" diff --git a/autogpt_platform/backend/backend/blocks/linear/_api.py b/autogpt_platform/backend/backend/blocks/linear/_api.py index d79aaa39b4..477b8a209c 100644 --- a/autogpt_platform/backend/backend/blocks/linear/_api.py +++ b/autogpt_platform/backend/backend/blocks/linear/_api.py @@ -265,3 +265,68 @@ class LinearClient: return [Issue(**issue) for issue in issues["searchIssues"]["nodes"]] except LinearAPIException as e: raise e + + async def try_get_issues( + self, project: str, status: str, is_assigned: bool, include_comments: bool + ) -> list[Issue]: + try: + query = """ + query IssuesByProjectStatusAndAssignee( + $projectName: String! + $statusName: String! + $isAssigned: Boolean! + $includeComments: Boolean! 
= false + ) { + issues( + filter: { + project: { name: { eq: $projectName } } + state: { name: { eq: $statusName } } + assignee: { null: $isAssigned } + } + ) { + nodes { + id + title + identifier + description + createdAt + priority + assignee { + id + name + } + project { + id + name + } + state { + id + name + } + comments @include(if: $includeComments) { + nodes { + id + body + createdAt + user { + id + name + } + } + } + } + } + } + """ + + variables: dict[str, Any] = { + "projectName": project, + "statusName": status, + "isAssigned": not is_assigned, + "includeComments": include_comments, + } + + issues = await self.query(query, variables) + return [Issue(**issue) for issue in issues["issues"]["nodes"]] + except LinearAPIException as e: + raise e diff --git a/autogpt_platform/backend/backend/blocks/linear/_config.py b/autogpt_platform/backend/backend/blocks/linear/_config.py index c5337c481c..c0f76ecb02 100644 --- a/autogpt_platform/backend/backend/blocks/linear/_config.py +++ b/autogpt_platform/backend/backend/blocks/linear/_config.py @@ -62,10 +62,10 @@ TEST_CREDENTIALS_OAUTH = OAuth2Credentials( title="Mock Linear API key", username="mock-linear-username", access_token=SecretStr("mock-linear-access-token"), - access_token_expires_at=None, + access_token_expires_at=1672531200, # Mock expiration time for short-lived token refresh_token=SecretStr("mock-linear-refresh-token"), refresh_token_expires_at=None, - scopes=["mock-linear-scopes"], + scopes=["read", "write"], ) TEST_CREDENTIALS_API_KEY = APIKeyCredentials( diff --git a/autogpt_platform/backend/backend/blocks/linear/_oauth.py b/autogpt_platform/backend/backend/blocks/linear/_oauth.py index d1eb4f6bfc..66dc6456f8 100644 --- a/autogpt_platform/backend/backend/blocks/linear/_oauth.py +++ b/autogpt_platform/backend/backend/blocks/linear/_oauth.py @@ -2,7 +2,9 @@ Linear OAuth handler implementation. """ +import base64 import json +import time from typing import Optional from urllib.parse import urlencode @@ -38,8 +40,9 @@ class LinearOAuthHandler(BaseOAuthHandler): self.client_secret = client_secret self.redirect_uri = redirect_uri self.auth_base_url = "https://linear.app/oauth/authorize" - self.token_url = "https://api.linear.app/oauth/token" # Correct token URL + self.token_url = "https://api.linear.app/oauth/token" self.revoke_url = "https://api.linear.app/oauth/revoke" + self.migrate_url = "https://api.linear.app/oauth/migrate_old_token" def get_login_url( self, scopes: list[str], state: str, code_challenge: Optional[str] @@ -82,19 +85,84 @@ class LinearOAuthHandler(BaseOAuthHandler): return True # Linear doesn't return JSON on successful revoke + async def migrate_old_token( + self, credentials: OAuth2Credentials + ) -> OAuth2Credentials: + """ + Migrate an old long-lived token to a new short-lived token with refresh token. + + This uses Linear's /oauth/migrate_old_token endpoint to exchange current + long-lived tokens for short-lived tokens with refresh tokens without + requiring users to re-authorize. 
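Taken together with `get_access_token` and `needs_migration` further down, the migration endpoint described above gives the handler a three-branch token lifecycle: migrate old long-lived tokens, refresh short-lived ones when they near expiry, otherwise use the token as-is. A rough, hypothetical sketch of those branches (a plain dataclass instead of `OAuth2Credentials`, synchronous callables instead of the async HTTP calls):

```python
import time
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class Creds:
    access_token: str
    refresh_token: Optional[str] = None
    access_token_expires_at: Optional[int] = None  # unix timestamp


def needs_migration(c: Creds) -> bool:
    # Old Linear tokens: long-lived, no expiry and no refresh token.
    return c.access_token_expires_at is None and c.refresh_token is None


def needs_refresh(c: Creds, margin: int = 300) -> bool:
    return (
        c.access_token_expires_at is not None
        and c.access_token_expires_at - margin < time.time()
    )


def resolve(
    c: Creds,
    migrate: Callable[[Creds], Creds],
    refresh: Callable[[Creds], Creds],
) -> Creds:
    if needs_migration(c):
        c = migrate(c)  # POST /oauth/migrate_old_token -> short-lived token + refresh token
    if needs_refresh(c):
        c = refresh(c)  # standard refresh_token grant
    return c
```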
+ """ + if not credentials.access_token: + raise ValueError("No access token to migrate") + + request_body = { + "client_id": self.client_id, + "client_secret": self.client_secret, + } + + headers = { + "Authorization": f"Bearer {credentials.access_token.get_secret_value()}", + "Content-Type": "application/x-www-form-urlencoded", + } + + response = await Requests().post( + self.migrate_url, data=request_body, headers=headers + ) + + if not response.ok: + try: + error_data = response.json() + error_message = error_data.get("error", "Unknown error") + error_description = error_data.get("error_description", "") + if error_description: + error_message = f"{error_message}: {error_description}" + except json.JSONDecodeError: + error_message = response.text + raise LinearAPIException( + f"Failed to migrate Linear token ({response.status}): {error_message}", + response.status, + ) + + token_data = response.json() + + # Extract token expiration + now = int(time.time()) + expires_in = token_data.get("expires_in") + access_token_expires_at = None + if expires_in: + access_token_expires_at = now + expires_in + + new_credentials = OAuth2Credentials( + provider=self.PROVIDER_NAME, + title=credentials.title, + username=credentials.username, + access_token=token_data["access_token"], + scopes=credentials.scopes, # Preserve original scopes + refresh_token=token_data.get("refresh_token"), + access_token_expires_at=access_token_expires_at, + refresh_token_expires_at=None, + ) + + new_credentials.id = credentials.id + return new_credentials + async def _refresh_tokens( self, credentials: OAuth2Credentials ) -> OAuth2Credentials: if not credentials.refresh_token: raise ValueError( - "No refresh token available." - ) # Linear uses non-expiring tokens + "No refresh token available. Token may need to be migrated to the new refresh token system." 
+ ) return await self._request_tokens( { "refresh_token": credentials.refresh_token.get_secret_value(), "grant_type": "refresh_token", - } + }, + current_credentials=credentials, ) async def _request_tokens( @@ -102,16 +170,33 @@ class LinearOAuthHandler(BaseOAuthHandler): params: dict[str, str], current_credentials: Optional[OAuth2Credentials] = None, ) -> OAuth2Credentials: + # Determine if this is a refresh token request + is_refresh = params.get("grant_type") == "refresh_token" + + # Build request body with appropriate grant_type request_body = { "client_id": self.client_id, "client_secret": self.client_secret, - "grant_type": "authorization_code", # Ensure grant_type is correct **params, } - headers = { - "Content-Type": "application/x-www-form-urlencoded" - } # Correct header for token request + # Set default grant_type if not provided + if "grant_type" not in request_body: + request_body["grant_type"] = "authorization_code" + + headers = {"Content-Type": "application/x-www-form-urlencoded"} + + # For refresh token requests, support HTTP Basic Authentication as recommended + if is_refresh: + # Option 1: Use HTTP Basic Auth (preferred by Linear) + client_credentials = f"{self.client_id}:{self.client_secret}" + encoded_credentials = base64.b64encode(client_credentials.encode()).decode() + headers["Authorization"] = f"Basic {encoded_credentials}" + + # Remove client credentials from body when using Basic Auth + request_body.pop("client_id", None) + request_body.pop("client_secret", None) + response = await Requests().post( self.token_url, data=request_body, headers=headers ) @@ -120,6 +205,9 @@ class LinearOAuthHandler(BaseOAuthHandler): try: error_data = response.json() error_message = error_data.get("error", "Unknown error") + error_description = error_data.get("error_description", "") + if error_description: + error_message = f"{error_message}: {error_description}" except json.JSONDecodeError: error_message = response.text raise LinearAPIException( @@ -129,27 +217,84 @@ class LinearOAuthHandler(BaseOAuthHandler): token_data = response.json() - # Note: Linear access tokens do not expire, so we set expires_at to None + # Extract token expiration if provided (for new refresh token implementation) + now = int(time.time()) + expires_in = token_data.get("expires_in") + access_token_expires_at = None + if expires_in: + access_token_expires_at = now + expires_in + + # Get username - preserve from current credentials if refreshing + username = None + if current_credentials and is_refresh: + username = current_credentials.username + elif "user" in token_data: + username = token_data["user"].get("name", "Unknown User") + else: + # Fetch username using the access token + username = await self._request_username(token_data["access_token"]) + new_credentials = OAuth2Credentials( provider=self.PROVIDER_NAME, title=current_credentials.title if current_credentials else None, - username=token_data.get("user", {}).get( - "name", "Unknown User" - ), # extract name or set appropriate + username=username or "Unknown User", access_token=token_data["access_token"], - scopes=token_data["scope"].split( - "," - ), # Linear returns comma-separated scopes - refresh_token=token_data.get( - "refresh_token" - ), # Linear uses non-expiring tokens so this might be null - access_token_expires_at=None, - refresh_token_expires_at=None, + scopes=( + token_data["scope"].split(",") + if "scope" in token_data + else (current_credentials.scopes if current_credentials else []) + ), + 
refresh_token=token_data.get("refresh_token"), + access_token_expires_at=access_token_expires_at, + refresh_token_expires_at=None, # Linear doesn't provide refresh token expiration ) + if current_credentials: new_credentials.id = current_credentials.id + return new_credentials + async def get_access_token(self, credentials: OAuth2Credentials) -> str: + """ + Returns a valid access token, handling migration and refresh as needed. + + This overrides the base implementation to handle Linear's token migration + from old long-lived tokens to new short-lived tokens with refresh tokens. + """ + # If token has no expiration and no refresh token, it might be an old token + # that needs migration + if ( + credentials.access_token_expires_at is None + and credentials.refresh_token is None + ): + try: + # Attempt to migrate the old token + migrated_credentials = await self.migrate_old_token(credentials) + # Update the credentials store would need to be handled by the caller + # For now, use the migrated credentials for this request + credentials = migrated_credentials + except LinearAPIException: + # Migration failed, try to use the old token as-is + # This maintains backward compatibility + pass + + # Use the standard refresh logic from the base class + if self.needs_refresh(credentials): + credentials = await self.refresh_tokens(credentials) + + return credentials.access_token.get_secret_value() + + def needs_migration(self, credentials: OAuth2Credentials) -> bool: + """ + Check if credentials represent an old long-lived token that needs migration. + + Old tokens have no expiration time and no refresh token. + """ + return ( + credentials.access_token_expires_at is None + and credentials.refresh_token is None + ) + async def _request_username(self, access_token: str) -> Optional[str]: # Use the LinearClient to fetch user details using GraphQL from ._api import LinearClient diff --git a/autogpt_platform/backend/backend/blocks/linear/comment.py b/autogpt_platform/backend/backend/blocks/linear/comment.py index 17cd54c212..33a757fab4 100644 --- a/autogpt_platform/backend/backend/blocks/linear/comment.py +++ b/autogpt_platform/backend/backend/blocks/linear/comment.py @@ -3,7 +3,8 @@ from backend.sdk import ( Block, BlockCategory, BlockOutput, - BlockSchema, + BlockSchemaInput, + BlockSchemaOutput, CredentialsMetaInput, OAuth2Credentials, SchemaField, @@ -22,7 +23,7 @@ from .models import CreateCommentResponse class LinearCreateCommentBlock(Block): """Block for creating comments on Linear issues""" - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: CredentialsMetaInput = linear.credentials_field( description="Linear credentials with comment creation permissions", required_scopes={LinearScope.COMMENTS_CREATE}, @@ -30,12 +31,11 @@ class LinearCreateCommentBlock(Block): issue_id: str = SchemaField(description="ID of the issue to comment on") comment: str = SchemaField(description="Comment text to add to the issue") - class Output(BlockSchema): + class Output(BlockSchemaOutput): comment_id: str = SchemaField(description="ID of the created comment") comment_body: str = SchemaField( description="Text content of the created comment" ) - error: str = SchemaField(description="Error message if comment creation failed") def __init__(self): super().__init__( diff --git a/autogpt_platform/backend/backend/blocks/linear/issues.py b/autogpt_platform/backend/backend/blocks/linear/issues.py index cd0fa0e98a..baac01214c 100644 --- a/autogpt_platform/backend/backend/blocks/linear/issues.py +++ 
b/autogpt_platform/backend/backend/blocks/linear/issues.py @@ -3,7 +3,8 @@ from backend.sdk import ( Block, BlockCategory, BlockOutput, - BlockSchema, + BlockSchemaInput, + BlockSchemaOutput, CredentialsMetaInput, OAuth2Credentials, SchemaField, @@ -22,7 +23,7 @@ from .models import CreateIssueResponse, Issue class LinearCreateIssueBlock(Block): """Block for creating issues on Linear""" - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: CredentialsMetaInput = linear.credentials_field( description="Linear credentials with issue creation permissions", required_scopes={LinearScope.ISSUES_CREATE}, @@ -43,10 +44,9 @@ class LinearCreateIssueBlock(Block): default=None, ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): issue_id: str = SchemaField(description="ID of the created issue") issue_title: str = SchemaField(description="Title of the created issue") - error: str = SchemaField(description="Error message if issue creation failed") def __init__(self): super().__init__( @@ -129,14 +129,14 @@ class LinearCreateIssueBlock(Block): class LinearSearchIssuesBlock(Block): """Block for searching issues on Linear""" - class Input(BlockSchema): + class Input(BlockSchemaInput): term: str = SchemaField(description="Term to search for issues") credentials: CredentialsMetaInput = linear.credentials_field( description="Linear credentials with read permissions", required_scopes={LinearScope.READ}, ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): issues: list[Issue] = SchemaField(description="List of issues") def __init__(self): @@ -203,3 +203,106 @@ class LinearSearchIssuesBlock(Block): yield "error", str(e) except Exception as e: yield "error", f"Unexpected error: {str(e)}" + + +class LinearGetProjectIssuesBlock(Block): + """Block for getting issues from a Linear project filtered by status and assignee""" + + class Input(BlockSchemaInput): + credentials: CredentialsMetaInput = linear.credentials_field( + description="Linear credentials with read permissions", + required_scopes={LinearScope.READ}, + ) + project: str = SchemaField(description="Name of the project to get issues from") + status: str = SchemaField( + description="Status/state name to filter issues by (e.g., 'In Progress', 'Done')" + ) + is_assigned: bool = SchemaField( + description="Filter by assignee status - True to get assigned issues, False to get unassigned issues", + default=False, + ) + include_comments: bool = SchemaField( + description="Whether to include comments in the response", + default=False, + ) + + class Output(BlockSchemaOutput): + issues: list[Issue] = SchemaField( + description="List of issues matching the criteria" + ) + + def __init__(self): + super().__init__( + id="c7d3f1e8-45a9-4b2c-9f81-3e6a8d7c5b1a", + description="Gets issues from a Linear project filtered by status and assignee", + input_schema=self.Input, + output_schema=self.Output, + categories={BlockCategory.PRODUCTIVITY, BlockCategory.ISSUE_TRACKING}, + test_input={ + "project": "Test Project", + "status": "In Progress", + "is_assigned": False, + "include_comments": False, + "credentials": TEST_CREDENTIALS_INPUT_OAUTH, + }, + test_credentials=TEST_CREDENTIALS_OAUTH, + test_output=[ + ( + "issues", + [ + Issue( + id="abc123", + identifier="TST-123", + title="Test issue", + description="Test description", + priority=1, + ) + ], + ), + ], + test_mock={ + "get_project_issues": lambda *args, **kwargs: [ + Issue( + id="abc123", + identifier="TST-123", + title="Test issue", + description="Test description", + 
priority=1, + ) + ] + }, + ) + + @staticmethod + async def get_project_issues( + credentials: OAuth2Credentials | APIKeyCredentials, + project: str, + status: str, + is_assigned: bool, + include_comments: bool, + ) -> list[Issue]: + client = LinearClient(credentials=credentials) + response: list[Issue] = await client.try_get_issues( + project=project, + status=status, + is_assigned=is_assigned, + include_comments=include_comments, + ) + return response + + async def run( + self, + input_data: Input, + *, + credentials: OAuth2Credentials | APIKeyCredentials, + **kwargs, + ) -> BlockOutput: + """Execute getting project issues""" + issues = await self.get_project_issues( + credentials=credentials, + project=input_data.project, + status=input_data.status, + is_assigned=input_data.is_assigned, + include_comments=input_data.include_comments, + ) + yield "issues", issues diff --git a/autogpt_platform/backend/backend/blocks/linear/models.py b/autogpt_platform/backend/backend/blocks/linear/models.py index 7113435e06..bfeaa13656 100644 --- a/autogpt_platform/backend/backend/blocks/linear/models.py +++ b/autogpt_platform/backend/backend/blocks/linear/models.py @@ -1,9 +1,16 @@ from backend.sdk import BaseModel +class User(BaseModel): + id: str + name: str + + class Comment(BaseModel): id: str body: str + createdAt: str | None = None + user: User | None = None class CreateCommentInput(BaseModel): @@ -20,22 +27,26 @@ class CreateCommentResponseWrapper(BaseModel): commentCreate: CreateCommentResponse +class Project(BaseModel): + id: str + name: str + description: str | None = None + priority: int | None = None + progress: float | None = None + content: str | None = None + + class Issue(BaseModel): id: str identifier: str title: str description: str | None priority: int + project: Project | None = None + createdAt: str | None = None + comments: list[Comment] | None = None + assignee: User | None = None class CreateIssueResponse(BaseModel): issue: Issue - - -class Project(BaseModel): - id: str - name: str - description: str - priority: int - progress: float - content: str | None diff --git a/autogpt_platform/backend/backend/blocks/linear/projects.py b/autogpt_platform/backend/backend/blocks/linear/projects.py index 4eeb1ed99d..841867b137 100644 --- a/autogpt_platform/backend/backend/blocks/linear/projects.py +++ b/autogpt_platform/backend/backend/blocks/linear/projects.py @@ -3,7 +3,8 @@ from backend.sdk import ( Block, BlockCategory, BlockOutput, - BlockSchema, + BlockSchemaInput, + BlockSchemaOutput, CredentialsMetaInput, OAuth2Credentials, SchemaField, @@ -22,16 +23,15 @@ from .models import Project class LinearSearchProjectsBlock(Block): """Block for searching projects on Linear""" - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: CredentialsMetaInput = linear.credentials_field( description="Linear credentials with read permissions", required_scopes={LinearScope.READ}, ) term: str = SchemaField(description="Term to search for projects") - class Output(BlockSchema): + class Output(BlockSchemaOutput): projects: list[Project] = SchemaField(description="List of projects") - error: str = SchemaField(description="Error message if issue creation failed") def __init__(self): super().__init__( diff --git a/autogpt_platform/backend/backend/blocks/llm.py b/autogpt_platform/backend/backend/blocks/llm.py index bc7d3122f6..447c28783f 100644 --- a/autogpt_platform/backend/backend/blocks/llm.py +++ b/autogpt_platform/backend/backend/blocks/llm.py @@ -1,6 +1,5 @@ # This file contains a lot of 
prompt block strings that would trigger "line too long" # flake8: noqa: E501 -import ast import logging import re import secrets @@ -16,7 +15,13 @@ from anthropic.types import ToolParam from groq import AsyncGroq from pydantic import BaseModel, SecretStr -from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema +from backend.data.block import ( + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, +) from backend.data.model import ( APIKeyCredentials, CredentialsField, @@ -88,6 +93,7 @@ class LlmModel(str, Enum, metaclass=LlmModelMeta): O1_MINI = "o1-mini" # GPT-5 models GPT5 = "gpt-5-2025-08-07" + GPT5_1 = "gpt-5.1-2025-11-13" GPT5_MINI = "gpt-5-mini-2025-08-07" GPT5_NANO = "gpt-5-nano-2025-08-07" GPT5_CHAT = "gpt-5-chat-latest" @@ -101,11 +107,10 @@ class LlmModel(str, Enum, metaclass=LlmModelMeta): CLAUDE_4_1_OPUS = "claude-opus-4-1-20250805" CLAUDE_4_OPUS = "claude-opus-4-20250514" CLAUDE_4_SONNET = "claude-sonnet-4-20250514" + CLAUDE_4_5_OPUS = "claude-opus-4-5-20251101" CLAUDE_4_5_SONNET = "claude-sonnet-4-5-20250929" CLAUDE_4_5_HAIKU = "claude-haiku-4-5-20251001" CLAUDE_3_7_SONNET = "claude-3-7-sonnet-20250219" - CLAUDE_3_5_SONNET = "claude-3-5-sonnet-latest" - CLAUDE_3_5_HAIKU = "claude-3-5-haiku-latest" CLAUDE_3_HAIKU = "claude-3-haiku-20240307" # AI/ML API models AIML_API_QWEN2_5_72B = "Qwen/Qwen2.5-72B-Instruct-Turbo" @@ -114,13 +119,8 @@ class LlmModel(str, Enum, metaclass=LlmModelMeta): AIML_API_META_LLAMA_3_1_70B = "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo" AIML_API_LLAMA_3_2_3B = "meta-llama/Llama-3.2-3B-Instruct-Turbo" # Groq models - GEMMA2_9B = "gemma2-9b-it" LLAMA3_3_70B = "llama-3.3-70b-versatile" LLAMA3_1_8B = "llama-3.1-8b-instant" - LLAMA3_70B = "llama3-70b-8192" - LLAMA3_8B = "llama3-8b-8192" - # Groq preview models - DEEPSEEK_LLAMA_70B = "deepseek-r1-distill-llama-70b" # Ollama models OLLAMA_LLAMA3_3 = "llama3.3" OLLAMA_LLAMA3_2 = "llama3.2" @@ -130,8 +130,8 @@ class LlmModel(str, Enum, metaclass=LlmModelMeta): # OpenRouter models OPENAI_GPT_OSS_120B = "openai/gpt-oss-120b" OPENAI_GPT_OSS_20B = "openai/gpt-oss-20b" - GEMINI_FLASH_1_5 = "google/gemini-flash-1.5" GEMINI_2_5_PRO = "google/gemini-2.5-pro-preview-03-25" + GEMINI_3_PRO_PREVIEW = "google/gemini-3-pro-preview" GEMINI_2_5_FLASH = "google/gemini-2.5-flash" GEMINI_2_0_FLASH = "google/gemini-2.0-flash-001" GEMINI_2_5_FLASH_LITE_PREVIEW = "google/gemini-2.5-flash-lite-preview-06-17" @@ -154,6 +154,9 @@ class LlmModel(str, Enum, metaclass=LlmModelMeta): META_LLAMA_4_SCOUT = "meta-llama/llama-4-scout" META_LLAMA_4_MAVERICK = "meta-llama/llama-4-maverick" GROK_4 = "x-ai/grok-4" + GROK_4_FAST = "x-ai/grok-4-fast" + GROK_4_1_FAST = "x-ai/grok-4.1-fast" + GROK_CODE_FAST_1 = "x-ai/grok-code-fast-1" KIMI_K2 = "moonshotai/kimi-k2" QWEN3_235B_A22B_THINKING = "qwen/qwen3-235b-a22b-thinking-2507" QWEN3_CODER = "qwen/qwen3-coder" @@ -192,6 +195,7 @@ MODEL_METADATA = { LlmModel.O1_MINI: ModelMetadata("openai", 128000, 65536), # o1-mini-2024-09-12 # GPT-5 models LlmModel.GPT5: ModelMetadata("openai", 400000, 128000), + LlmModel.GPT5_1: ModelMetadata("openai", 400000, 128000), LlmModel.GPT5_MINI: ModelMetadata("openai", 400000, 128000), LlmModel.GPT5_NANO: ModelMetadata("openai", 400000, 128000), LlmModel.GPT5_CHAT: ModelMetadata("openai", 400000, 16384), @@ -215,6 +219,9 @@ MODEL_METADATA = { LlmModel.CLAUDE_4_SONNET: ModelMetadata( "anthropic", 200000, 64000 ), # claude-4-sonnet-20250514 + LlmModel.CLAUDE_4_5_OPUS: ModelMetadata( + "anthropic", 200000, 64000 + ), # 
claude-opus-4-5-20251101 LlmModel.CLAUDE_4_5_SONNET: ModelMetadata( "anthropic", 200000, 64000 ), # claude-sonnet-4-5-20250929 @@ -224,12 +231,6 @@ MODEL_METADATA = { LlmModel.CLAUDE_3_7_SONNET: ModelMetadata( "anthropic", 200000, 64000 ), # claude-3-7-sonnet-20250219 - LlmModel.CLAUDE_3_5_SONNET: ModelMetadata( - "anthropic", 200000, 8192 - ), # claude-3-5-sonnet-20241022 - LlmModel.CLAUDE_3_5_HAIKU: ModelMetadata( - "anthropic", 200000, 8192 - ), # claude-3-5-haiku-20241022 LlmModel.CLAUDE_3_HAIKU: ModelMetadata( "anthropic", 200000, 4096 ), # claude-3-haiku-20240307 @@ -240,12 +241,8 @@ MODEL_METADATA = { LlmModel.AIML_API_META_LLAMA_3_1_70B: ModelMetadata("aiml_api", 131000, 2000), LlmModel.AIML_API_LLAMA_3_2_3B: ModelMetadata("aiml_api", 128000, None), # https://console.groq.com/docs/models - LlmModel.GEMMA2_9B: ModelMetadata("groq", 8192, None), LlmModel.LLAMA3_3_70B: ModelMetadata("groq", 128000, 32768), LlmModel.LLAMA3_1_8B: ModelMetadata("groq", 128000, 8192), - LlmModel.LLAMA3_70B: ModelMetadata("groq", 8192, None), - LlmModel.LLAMA3_8B: ModelMetadata("groq", 8192, None), - LlmModel.DEEPSEEK_LLAMA_70B: ModelMetadata("groq", 128000, None), # https://ollama.com/library LlmModel.OLLAMA_LLAMA3_3: ModelMetadata("ollama", 8192, None), LlmModel.OLLAMA_LLAMA3_2: ModelMetadata("ollama", 8192, None), @@ -253,8 +250,8 @@ MODEL_METADATA = { LlmModel.OLLAMA_LLAMA3_405B: ModelMetadata("ollama", 8192, None), LlmModel.OLLAMA_DOLPHIN: ModelMetadata("ollama", 32768, None), # https://openrouter.ai/models - LlmModel.GEMINI_FLASH_1_5: ModelMetadata("open_router", 1000000, 8192), LlmModel.GEMINI_2_5_PRO: ModelMetadata("open_router", 1050000, 8192), + LlmModel.GEMINI_3_PRO_PREVIEW: ModelMetadata("open_router", 1048576, 65535), LlmModel.GEMINI_2_5_FLASH: ModelMetadata("open_router", 1048576, 65535), LlmModel.GEMINI_2_0_FLASH: ModelMetadata("open_router", 1048576, 8192), LlmModel.GEMINI_2_5_FLASH_LITE_PREVIEW: ModelMetadata( @@ -266,12 +263,12 @@ MODEL_METADATA = { LlmModel.COHERE_COMMAND_R_PLUS_08_2024: ModelMetadata("open_router", 128000, 4096), LlmModel.DEEPSEEK_CHAT: ModelMetadata("open_router", 64000, 2048), LlmModel.DEEPSEEK_R1_0528: ModelMetadata("open_router", 163840, 163840), - LlmModel.PERPLEXITY_SONAR: ModelMetadata("open_router", 127000, 127000), + LlmModel.PERPLEXITY_SONAR: ModelMetadata("open_router", 127000, 8000), LlmModel.PERPLEXITY_SONAR_PRO: ModelMetadata("open_router", 200000, 8000), LlmModel.PERPLEXITY_SONAR_DEEP_RESEARCH: ModelMetadata( "open_router", 128000, - 128000, + 16000, ), LlmModel.NOUSRESEARCH_HERMES_3_LLAMA_3_1_405B: ModelMetadata( "open_router", 131000, 4096 @@ -289,6 +286,9 @@ MODEL_METADATA = { LlmModel.META_LLAMA_4_SCOUT: ModelMetadata("open_router", 131072, 131072), LlmModel.META_LLAMA_4_MAVERICK: ModelMetadata("open_router", 1048576, 1000000), LlmModel.GROK_4: ModelMetadata("open_router", 256000, 256000), + LlmModel.GROK_4_FAST: ModelMetadata("open_router", 2000000, 30000), + LlmModel.GROK_4_1_FAST: ModelMetadata("open_router", 2000000, 30000), + LlmModel.GROK_CODE_FAST_1: ModelMetadata("open_router", 256000, 10000), LlmModel.KIMI_K2: ModelMetadata("open_router", 131000, 131000), LlmModel.QWEN3_235B_A22B_THINKING: ModelMetadata("open_router", 262144, 262144), LlmModel.QWEN3_CODER: ModelMetadata("open_router", 262144, 262144), @@ -774,7 +774,7 @@ class AIBlockBase(Block, ABC): class AIStructuredResponseGeneratorBlock(AIBlockBase): - class Input(BlockSchema): + class Input(BlockSchemaInput): prompt: str = SchemaField( description="The prompt to send to the language 
model.", placeholder="Enter your prompt here...", @@ -811,7 +811,7 @@ class AIStructuredResponseGeneratorBlock(AIBlockBase): default="", description="The system prompt to provide additional context to the model.", ) - conversation_history: list[dict] = SchemaField( + conversation_history: list[dict] | None = SchemaField( default_factory=list, description="The conversation history to provide context for the prompt.", ) @@ -841,12 +841,11 @@ class AIStructuredResponseGeneratorBlock(AIBlockBase): description="Ollama host for local models", ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): response: dict[str, Any] | list[dict[str, Any]] = SchemaField( description="The response object generated by the language model." ) prompt: list = SchemaField(description="The prompt sent to the language model.") - error: str = SchemaField(description="Error message if the API call failed.") def __init__(self): super().__init__( @@ -919,7 +918,7 @@ class AIStructuredResponseGeneratorBlock(AIBlockBase): self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs ) -> BlockOutput: logger.debug(f"Calling LLM with input data: {input_data}") - prompt = [json.to_dict(p) for p in input_data.conversation_history] + prompt = [json.to_dict(p) for p in input_data.conversation_history or [] if p] values = input_data.prompt_values if values: @@ -1215,7 +1214,7 @@ def trim_prompt(s: str) -> str: class AITextGeneratorBlock(AIBlockBase): - class Input(BlockSchema): + class Input(BlockSchemaInput): prompt: str = SchemaField( description="The prompt to send to the language model. You can use any of the {keys} from Prompt Values to fill in the prompt with values from the prompt values dictionary by putting them in curly braces.", placeholder="Enter your prompt here...", @@ -1253,12 +1252,11 @@ class AITextGeneratorBlock(AIBlockBase): description="The maximum number of tokens to generate in the chat completion.", ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): response: str = SchemaField( description="The response generated by the language model." ) prompt: list = SchemaField(description="The prompt sent to the language model.") - error: str = SchemaField(description="Error message if the API call failed.") def __init__(self): super().__init__( @@ -1312,7 +1310,7 @@ class SummaryStyle(Enum): class AITextSummarizerBlock(AIBlockBase): - class Input(BlockSchema): + class Input(BlockSchemaInput): text: str = SchemaField( description="The text to summarize.", placeholder="Enter the text to summarize here...", @@ -1352,10 +1350,9 @@ class AITextSummarizerBlock(AIBlockBase): description="Ollama host for local models", ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): summary: str = SchemaField(description="The final summary of the text.") prompt: list = SchemaField(description="The prompt sent to the language model.") - error: str = SchemaField(description="Error message if the API call failed.") def __init__(self): super().__init__( @@ -1459,7 +1456,20 @@ class AITextSummarizerBlock(AIBlockBase): credentials=credentials, ) - return llm_response["summary"] + summary = llm_response["summary"] + + # Validate that the LLM returned a string and not a list or other type + if not isinstance(summary, str): + from backend.util.truncate import truncate + + truncated_summary = truncate(summary, 500) + raise ValueError( + f"LLM generation failed: Expected a string summary, but received {type(summary).__name__}. " + f"The language model incorrectly formatted its response. 
" + f"Received value: {json.dumps(truncated_summary)}" + ) + + return summary async def _combine_summaries( self, summaries: list[str], input_data: Input, credentials: APIKeyCredentials @@ -1481,7 +1491,20 @@ class AITextSummarizerBlock(AIBlockBase): credentials=credentials, ) - return llm_response["final_summary"] + final_summary = llm_response["final_summary"] + + # Validate that the LLM returned a string and not a list or other type + if not isinstance(final_summary, str): + from backend.util.truncate import truncate + + truncated_final_summary = truncate(final_summary, 500) + raise ValueError( + f"LLM generation failed: Expected a string final summary, but received {type(final_summary).__name__}. " + f"The language model incorrectly formatted its response. " + f"Received value: {json.dumps(truncated_final_summary)}" + ) + + return final_summary else: # If combined summaries are still too long, recursively summarize block = AITextSummarizerBlock() @@ -1499,7 +1522,7 @@ class AITextSummarizerBlock(AIBlockBase): class AIConversationBlock(AIBlockBase): - class Input(BlockSchema): + class Input(BlockSchemaInput): prompt: str = SchemaField( description="The prompt to send to the language model.", placeholder="Enter your prompt here...", @@ -1526,12 +1549,11 @@ class AIConversationBlock(AIBlockBase): description="Ollama host for local models", ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): response: str = SchemaField( description="The model's response to the conversation." ) prompt: list = SchemaField(description="The prompt sent to the language model.") - error: str = SchemaField(description="Error message if the API call failed.") def __init__(self): super().__init__( @@ -1562,7 +1584,9 @@ class AIConversationBlock(AIBlockBase): ("prompt", list), ], test_mock={ - "llm_call": lambda *args, **kwargs: "The 2020 World Series was played at Globe Life Field in Arlington, Texas." + "llm_call": lambda *args, **kwargs: dict( + response="The 2020 World Series was played at Globe Life Field in Arlington, Texas." + ) }, ) @@ -1591,12 +1615,12 @@ class AIConversationBlock(AIBlockBase): ), credentials=credentials, ) - yield "response", response + yield "response", response["response"] yield "prompt", self.prompt class AIListGeneratorBlock(AIBlockBase): - class Input(BlockSchema): + class Input(BlockSchemaInput): focus: str | None = SchemaField( description="The focus of the list to generate.", placeholder="The top 5 most interesting news stories in the data.", @@ -1622,6 +1646,17 @@ class AIListGeneratorBlock(AIBlockBase): ge=1, le=5, ) + force_json_output: bool = SchemaField( + title="Restrict LLM to pure JSON output", + default=False, + description=( + "Whether to force the LLM to produce a JSON-only response. " + "This can increase the block's reliability, " + "but may also reduce the quality of the response " + "because it prohibits the LLM from reasoning " + "before providing its JSON response." 
+ ), + ) max_tokens: int | None = SchemaField( advanced=True, default=None, @@ -1633,20 +1668,17 @@ class AIListGeneratorBlock(AIBlockBase): description="Ollama host for local models", ) - class Output(BlockSchema): - generated_list: List[str] = SchemaField(description="The generated list.") + class Output(BlockSchemaOutput): + generated_list: list[str] = SchemaField(description="The generated list.") list_item: str = SchemaField( description="Each individual item in the list.", ) prompt: list = SchemaField(description="The prompt sent to the language model.") - error: str = SchemaField( - description="Error message if the list generation failed." - ) def __init__(self): super().__init__( id="9c0b0450-d199-458b-a731-072189dd6593", - description="Generate a Python list based on the given prompt using a Large Language Model (LLM).", + description="Generate a list of values based on the given prompt using a Large Language Model (LLM).", categories={BlockCategory.AI, BlockCategory.TEXT}, input_schema=AIListGeneratorBlock.Input, output_schema=AIListGeneratorBlock.Output, @@ -1663,6 +1695,7 @@ class AIListGeneratorBlock(AIBlockBase): "model": LlmModel.GPT4O, "credentials": TEST_CREDENTIALS_INPUT, "max_retries": 3, + "force_json_output": False, }, test_credentials=TEST_CREDENTIALS, test_output=[ @@ -1679,7 +1712,13 @@ class AIListGeneratorBlock(AIBlockBase): ], test_mock={ "llm_call": lambda input_data, credentials: { - "response": "['Zylora Prime', 'Kharon-9', 'Vortexia', 'Oceara', 'Draknos']" + "list": [ + "Zylora Prime", + "Kharon-9", + "Vortexia", + "Oceara", + "Draknos", + ] }, }, ) @@ -1688,7 +1727,7 @@ class AIListGeneratorBlock(AIBlockBase): self, input_data: AIStructuredResponseGeneratorBlock.Input, credentials: APIKeyCredentials, - ) -> dict[str, str]: + ) -> dict[str, Any]: llm_block = AIStructuredResponseGeneratorBlock() response = await llm_block.run_once( input_data, "response", credentials=credentials @@ -1696,71 +1735,23 @@ class AIListGeneratorBlock(AIBlockBase): self.merge_llm_stats(llm_block) return response - @staticmethod - def string_to_list(string): - """ - Converts a string representation of a list into an actual Python list object. - """ - logger.debug(f"Converting string to list. Input string: {string}") - try: - # Use ast.literal_eval to safely evaluate the string - python_list = ast.literal_eval(string) - if isinstance(python_list, list): - logger.debug(f"Successfully converted string to list: {python_list}") - return python_list - else: - logger.error(f"The provided string '{string}' is not a valid list") - raise ValueError(f"The provided string '{string}' is not a valid list.") - except (SyntaxError, ValueError) as e: - logger.error(f"Failed to convert string to list: {e}") - raise ValueError("Invalid list format. Could not convert to list.") - async def run( self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs ) -> BlockOutput: logger.debug(f"Starting AIListGeneratorBlock.run with input data: {input_data}") - # Check for API key - api_key_check = credentials.api_key.get_secret_value() - if not api_key_check: - raise ValueError("No LLM API key provided.") + # Create a proper expected format for the structured response generator + expected_format = { + "list": "A JSON array containing the generated string values" + } + if input_data.force_json_output: + # Add reasoning field for better performance + expected_format = { + "reasoning": "... 
(optional)", + **expected_format, + } - # Prepare the system prompt - sys_prompt = """You are a Python list generator. Your task is to generate a Python list based on the user's prompt. - |Respond ONLY with a valid python list. - |The list can contain strings, numbers, or nested lists as appropriate. - |Do not include any explanations or additional text. - - |Valid Example string formats: - - |Example 1: - |``` - |['1', '2', '3', '4'] - |``` - - |Example 2: - |``` - |[['1', '2'], ['3', '4'], ['5', '6']] - |``` - - |Example 3: - |``` - |['1', ['2', '3'], ['4', ['5', '6']]] - |``` - - |Example 4: - |``` - |['a', 'b', 'c'] - |``` - - |Example 5: - |``` - |['1', '2.5', 'string', 'True', ['False', 'None']] - |``` - - |Do not include any explanations or additional text, just respond with the list in the format specified above. - """ - # If a focus is provided, add it to the prompt + # Build the prompt if input_data.focus: prompt = f"Generate a list with the following focus:\n\n\n{input_data.focus}" else: @@ -1768,7 +1759,7 @@ class AIListGeneratorBlock(AIBlockBase): if input_data.source_data: prompt = "Extract the main focus of the source data to a list.\ni.e if the source data is a news website, the focus would be the news stories rather than the social links in the footer." else: - # No focus or source data provided, generat a random list + # No focus or source data provided, generate a random list prompt = "Generate a random list." # If the source data is provided, add it to the prompt @@ -1778,63 +1769,56 @@ class AIListGeneratorBlock(AIBlockBase): else: prompt += "\n\nInvent the data to generate the list from." - for attempt in range(input_data.max_retries): - try: - logger.debug("Calling LLM") - llm_response = await self.llm_call( - AIStructuredResponseGeneratorBlock.Input( - sys_prompt=sys_prompt, - prompt=prompt, - credentials=input_data.credentials, - model=input_data.model, - expected_format={}, # Do not use structured response - ollama_host=input_data.ollama_host, - ), - credentials=credentials, - ) + # Use the structured response generator to handle all the complexity + response_obj = await self.llm_call( + AIStructuredResponseGeneratorBlock.Input( + sys_prompt=self.SYSTEM_PROMPT, + prompt=prompt, + credentials=input_data.credentials, + model=input_data.model, + expected_format=expected_format, + force_json_output=input_data.force_json_output, + retry=input_data.max_retries, + max_tokens=input_data.max_tokens, + ollama_host=input_data.ollama_host, + ), + credentials=credentials, + ) + logger.debug(f"Response object: {response_obj}") - logger.debug(f"LLM response: {llm_response}") + # Extract the list from the response object + if isinstance(response_obj, dict) and "list" in response_obj: + parsed_list = response_obj["list"] + else: + # Fallback - treat the whole response as the list + parsed_list = response_obj - # Extract Response string - response_string = llm_response["response"] - logger.debug(f"Response string: {response_string}") + # Validate that we got a list + if not isinstance(parsed_list, list): + raise ValueError( + f"Expected a list, but got {type(parsed_list).__name__}: {parsed_list}" + ) - # Convert the string to a Python list - logger.debug("Converting string to Python list") - parsed_list = self.string_to_list(response_string) - logger.debug(f"Parsed list: {parsed_list}") + logger.debug(f"Parsed list: {parsed_list}") - # If we reach here, we have a valid Python list - logger.debug("Successfully generated a valid Python list") - yield "generated_list", parsed_list - 
yield "prompt", self.prompt + # Yield the results + yield "generated_list", parsed_list + yield "prompt", self.prompt - # Yield each item in the list - for item in parsed_list: - yield "list_item", item - return + # Yield each item in the list + for item in parsed_list: + yield "list_item", item - except Exception as e: - logger.error(f"Error in attempt {attempt + 1}: {str(e)}") - if attempt == input_data.max_retries - 1: - logger.error( - f"Failed to generate a valid Python list after {input_data.max_retries} attempts" - ) - raise RuntimeError( - f"Failed to generate a valid Python list after {input_data.max_retries} attempts. Last error: {str(e)}" - ) - else: - # Add a retry prompt - logger.debug("Preparing retry prompt") - prompt = f""" - The previous attempt failed due to `{e}` - Generate a valid Python list based on the original prompt. - Remember to respond ONLY with a valid Python list as per the format specified earlier. - Original prompt: - ```{prompt}``` - - Respond only with the list in the format specified with no commentary or apologies. - """ - logger.debug(f"Retry prompt: {prompt}") - - logger.debug("AIListGeneratorBlock.run completed") + SYSTEM_PROMPT = trim_prompt( + """ + |You are a JSON array generator. Your task is to generate a JSON array of string values based on the user's prompt. + | + |The 'list' field should contain a JSON array with the generated string values. + |The array can contain ONLY strings. + | + |Valid JSON array formats include: + |• ["string1", "string2", "string3"] + | + |Ensure you provide a proper JSON array with only string values in the 'list' field. + """ + ) diff --git a/autogpt_platform/backend/backend/blocks/maths.py b/autogpt_platform/backend/backend/blocks/maths.py index 0559d9673d..ad6dc67bbe 100644 --- a/autogpt_platform/backend/backend/blocks/maths.py +++ b/autogpt_platform/backend/backend/blocks/maths.py @@ -2,7 +2,13 @@ import operator from enum import Enum from typing import Any -from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema +from backend.data.block import ( + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, +) from backend.data.model import SchemaField @@ -15,7 +21,7 @@ class Operation(Enum): class CalculatorBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): operation: Operation = SchemaField( description="Choose the math operation you want to perform", placeholder="Select an operation", @@ -31,7 +37,7 @@ class CalculatorBlock(Block): default=False, ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): result: float = SchemaField(description="The result of your calculation") def __init__(self): @@ -85,13 +91,13 @@ class CalculatorBlock(Block): class CountItemsBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): collection: Any = SchemaField( description="Enter the collection you want to count. 
This can be a list, dictionary, string, or any other iterable.", placeholder="For example: [1, 2, 3] or {'a': 1, 'b': 2} or 'hello'", ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): count: int = SchemaField(description="The number of items in the collection") def __init__(self): diff --git a/autogpt_platform/backend/backend/blocks/media.py b/autogpt_platform/backend/backend/blocks/media.py index d642eac5e4..c8d4b4768f 100644 --- a/autogpt_platform/backend/backend/blocks/media.py +++ b/autogpt_platform/backend/backend/blocks/media.py @@ -6,14 +6,20 @@ from moviepy.audio.io.AudioFileClip import AudioFileClip from moviepy.video.fx.Loop import Loop from moviepy.video.io.VideoFileClip import VideoFileClip -from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema +from backend.data.block import ( + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, +) from backend.data.model import SchemaField from backend.util.file import MediaFileType, get_exec_file_path, store_media_file class MediaDurationBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): media_in: MediaFileType = SchemaField( description="Media input (URL, data URI, or local path)." ) @@ -22,13 +28,10 @@ class MediaDurationBlock(Block): default=True, ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): duration: float = SchemaField( description="Duration of the media file (in seconds)." ) - error: str = SchemaField( - description="Error message if something fails.", default="" - ) def __init__(self): super().__init__( @@ -70,7 +73,7 @@ class LoopVideoBlock(Block): Block for looping (repeating) a video clip until a given duration or number of loops. """ - class Input(BlockSchema): + class Input(BlockSchemaInput): video_in: MediaFileType = SchemaField( description="The input video (can be a URL, data URI, or local path)." ) @@ -90,13 +93,10 @@ class LoopVideoBlock(Block): default="file_path", ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): video_out: str = SchemaField( description="Looped video returned either as a relative path or a data URI." ) - error: str = SchemaField( - description="Error message if something fails.", default="" - ) def __init__(self): super().__init__( @@ -166,7 +166,7 @@ class AddAudioToVideoBlock(Block): Optionally scale the volume of the new track. """ - class Input(BlockSchema): + class Input(BlockSchemaInput): video_in: MediaFileType = SchemaField( description="Video input (URL, data URI, or local path)." ) @@ -182,13 +182,10 @@ class AddAudioToVideoBlock(Block): default="file_path", ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): video_out: MediaFileType = SchemaField( description="Final video (with attached audio), as a path or data URI." 
) - error: str = SchemaField( - description="Error message if something fails.", default="" - ) def __init__(self): super().__init__( diff --git a/autogpt_platform/backend/backend/blocks/medium.py b/autogpt_platform/backend/backend/blocks/medium.py index a8964ca940..713a8274f9 100644 --- a/autogpt_platform/backend/backend/blocks/medium.py +++ b/autogpt_platform/backend/backend/blocks/medium.py @@ -3,7 +3,13 @@ from typing import List, Literal from pydantic import SecretStr -from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema +from backend.data.block import ( + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, +) from backend.data.model import ( APIKeyCredentials, BlockSecret, @@ -37,7 +43,7 @@ class PublishToMediumStatus(str, Enum): class PublishToMediumBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): author_id: BlockSecret = SecretField( key="medium_author_id", description="""The Medium AuthorID of the user. You can get this by calling the /me endpoint of the Medium API.\n\ncurl -H "Authorization: Bearer YOUR_ACCESS_TOKEN" https://api.medium.com/v1/me" the response will contain the authorId field.""", @@ -84,7 +90,7 @@ class PublishToMediumBlock(Block): description="The Medium integration can be used with any API key with sufficient permissions for the blocks it is used on.", ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): post_id: str = SchemaField(description="The ID of the created Medium post") post_url: str = SchemaField(description="The URL of the created Medium post") published_at: int = SchemaField( diff --git a/autogpt_platform/backend/backend/blocks/mem0.py b/autogpt_platform/backend/backend/blocks/mem0.py index 0aae1c316a..b8dc11064a 100644 --- a/autogpt_platform/backend/backend/blocks/mem0.py +++ b/autogpt_platform/backend/backend/blocks/mem0.py @@ -3,7 +3,7 @@ from typing import Any, Literal, Optional, Union from mem0 import MemoryClient from pydantic import BaseModel, SecretStr -from backend.data.block import Block, BlockOutput, BlockSchema +from backend.data.block import Block, BlockOutput, BlockSchemaInput, BlockSchemaOutput from backend.data.model import ( APIKeyCredentials, CredentialsField, @@ -55,7 +55,7 @@ class AddMemoryBlock(Block, Mem0Base): Always limited by user_id and optional graph_id and graph_exec_id""" - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: CredentialsMetaInput[ Literal[ProviderName.MEM0], Literal["api_key"] ] = CredentialsField(description="Mem0 API key credentials") @@ -74,13 +74,12 @@ class AddMemoryBlock(Block, Mem0Base): description="Limit the memory to the agent", default=True ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): action: str = SchemaField(description="Action of the operation") memory: str = SchemaField(description="Memory created") results: list[dict[str, str]] = SchemaField( description="List of all results from the operation" ) - error: str = SchemaField(description="Error message if operation fails") def __init__(self): super().__init__( @@ -172,7 +171,7 @@ class AddMemoryBlock(Block, Mem0Base): class SearchMemoryBlock(Block, Mem0Base): """Block for searching memories in Mem0""" - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: CredentialsMetaInput[ Literal[ProviderName.MEM0], Literal["api_key"] ] = CredentialsField(description="Mem0 API key credentials") @@ -201,9 +200,8 @@ class SearchMemoryBlock(Block, Mem0Base): description="Limit the memory to the agent", 
default=True ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): memories: Any = SchemaField(description="List of matching memories") - error: str = SchemaField(description="Error message if operation fails") def __init__(self): super().__init__( @@ -266,7 +264,7 @@ class SearchMemoryBlock(Block, Mem0Base): class GetAllMemoriesBlock(Block, Mem0Base): """Block for retrieving all memories from Mem0""" - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: CredentialsMetaInput[ Literal[ProviderName.MEM0], Literal["api_key"] ] = CredentialsField(description="Mem0 API key credentials") @@ -289,9 +287,8 @@ class GetAllMemoriesBlock(Block, Mem0Base): description="Limit the memory to the agent", default=True ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): memories: Any = SchemaField(description="List of memories") - error: str = SchemaField(description="Error message if operation fails") def __init__(self): super().__init__( @@ -353,7 +350,7 @@ class GetAllMemoriesBlock(Block, Mem0Base): class GetLatestMemoryBlock(Block, Mem0Base): """Block for retrieving the latest memory from Mem0""" - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: CredentialsMetaInput[ Literal[ProviderName.MEM0], Literal["api_key"] ] = CredentialsField(description="Mem0 API key credentials") @@ -380,12 +377,11 @@ class GetLatestMemoryBlock(Block, Mem0Base): description="Limit the memory to the agent", default=True ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): memory: Optional[dict[str, Any]] = SchemaField( description="Latest memory if found" ) found: bool = SchemaField(description="Whether a memory was found") - error: str = SchemaField(description="Error message if operation fails") def __init__(self): super().__init__( diff --git a/autogpt_platform/backend/backend/blocks/notion/create_page.py b/autogpt_platform/backend/backend/blocks/notion/create_page.py index cd7a259c40..5edef144e3 100644 --- a/autogpt_platform/backend/backend/blocks/notion/create_page.py +++ b/autogpt_platform/backend/backend/blocks/notion/create_page.py @@ -4,7 +4,13 @@ from typing import Any, Dict, List, Optional from pydantic import model_validator -from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema +from backend.data.block import ( + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, +) from backend.data.model import OAuth2Credentials, SchemaField from ._api import NotionClient @@ -20,7 +26,7 @@ from ._auth import ( class NotionCreatePageBlock(Block): """Create a new page in Notion with content.""" - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: NotionCredentialsInput = NotionCredentialsField() parent_page_id: Optional[str] = SchemaField( description="Parent page ID to create the page under. 
Either this OR parent_database_id is required.", @@ -58,10 +64,9 @@ class NotionCreatePageBlock(Block): ) return self - class Output(BlockSchema): + class Output(BlockSchemaOutput): page_id: str = SchemaField(description="ID of the created page.") page_url: str = SchemaField(description="URL of the created page.") - error: str = SchemaField(description="Error message if the operation failed.") def __init__(self): super().__init__( diff --git a/autogpt_platform/backend/backend/blocks/notion/read_database.py b/autogpt_platform/backend/backend/blocks/notion/read_database.py index 115842940d..5720bea2f8 100644 --- a/autogpt_platform/backend/backend/blocks/notion/read_database.py +++ b/autogpt_platform/backend/backend/blocks/notion/read_database.py @@ -2,7 +2,13 @@ from __future__ import annotations from typing import Any, Dict, List, Optional -from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema +from backend.data.block import ( + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, +) from backend.data.model import OAuth2Credentials, SchemaField from ._api import NotionClient, parse_rich_text @@ -18,7 +24,7 @@ from ._auth import ( class NotionReadDatabaseBlock(Block): """Query a Notion database and retrieve entries with their properties.""" - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: NotionCredentialsInput = NotionCredentialsField() database_id: str = SchemaField( description="Notion database ID. Must be accessible by the connected integration.", @@ -44,7 +50,7 @@ class NotionReadDatabaseBlock(Block): le=100, ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): entries: List[Dict[str, Any]] = SchemaField( description="List of database entries with their properties." ) @@ -59,7 +65,6 @@ class NotionReadDatabaseBlock(Block): ) count: int = SchemaField(description="Number of entries retrieved.") database_title: str = SchemaField(description="Title of the database.") - error: str = SchemaField(description="Error message if the operation failed.") def __init__(self): super().__init__( diff --git a/autogpt_platform/backend/backend/blocks/notion/read_page.py b/autogpt_platform/backend/backend/blocks/notion/read_page.py index f3d50f93a2..400fd2a929 100644 --- a/autogpt_platform/backend/backend/blocks/notion/read_page.py +++ b/autogpt_platform/backend/backend/blocks/notion/read_page.py @@ -1,6 +1,12 @@ from __future__ import annotations -from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema +from backend.data.block import ( + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, +) from backend.data.model import OAuth2Credentials, SchemaField from ._api import NotionClient @@ -16,15 +22,14 @@ from ._auth import ( class NotionReadPageBlock(Block): """Read a Notion page by ID and return its raw JSON.""" - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: NotionCredentialsInput = NotionCredentialsField() page_id: str = SchemaField( description="Notion page ID. Must be accessible by the connected integration. 
You can get this from the page URL notion.so/A-Page-586edd711467478da59fe3ce29a1ffab would be 586edd711467478da59fe35e29a1ffab", ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): page: dict = SchemaField(description="Raw Notion page JSON.") - error: str = SchemaField(description="Error message if the operation failed.") def __init__(self): super().__init__( diff --git a/autogpt_platform/backend/backend/blocks/notion/read_page_markdown.py b/autogpt_platform/backend/backend/blocks/notion/read_page_markdown.py index 323b748e1b..7ed87eaef9 100644 --- a/autogpt_platform/backend/backend/blocks/notion/read_page_markdown.py +++ b/autogpt_platform/backend/backend/blocks/notion/read_page_markdown.py @@ -1,6 +1,12 @@ from __future__ import annotations -from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema +from backend.data.block import ( + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, +) from backend.data.model import OAuth2Credentials, SchemaField from ._api import NotionClient, blocks_to_markdown, extract_page_title @@ -16,7 +22,7 @@ from ._auth import ( class NotionReadPageMarkdownBlock(Block): """Read a Notion page and convert it to clean Markdown format.""" - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: NotionCredentialsInput = NotionCredentialsField() page_id: str = SchemaField( description="Notion page ID. Must be accessible by the connected integration. You can get this from the page URL notion.so/A-Page-586edd711467478da59fe35e29a1ffab would be 586edd711467478da59fe35e29a1ffab", @@ -26,10 +32,9 @@ class NotionReadPageMarkdownBlock(Block): default=True, ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): markdown: str = SchemaField(description="Page content in Markdown format.") title: str = SchemaField(description="Page title.") - error: str = SchemaField(description="Error message if the operation failed.") def __init__(self): super().__init__( diff --git a/autogpt_platform/backend/backend/blocks/notion/search.py b/autogpt_platform/backend/backend/blocks/notion/search.py index 24ef67fe41..1983763537 100644 --- a/autogpt_platform/backend/backend/blocks/notion/search.py +++ b/autogpt_platform/backend/backend/blocks/notion/search.py @@ -4,7 +4,13 @@ from typing import List, Optional from pydantic import BaseModel -from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema +from backend.data.block import ( + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, +) from backend.data.model import OAuth2Credentials, SchemaField from ._api import NotionClient, extract_page_title, parse_rich_text @@ -35,7 +41,7 @@ class NotionSearchResult(BaseModel): class NotionSearchBlock(Block): """Search across your Notion workspace for pages and databases.""" - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: NotionCredentialsInput = NotionCredentialsField() query: str = SchemaField( description="Search query text. Leave empty to get all accessible pages/databases.", @@ -49,7 +55,7 @@ class NotionSearchBlock(Block): description="Maximum number of results to return", default=20, ge=1, le=100 ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): results: List[NotionSearchResult] = SchemaField( description="List of search results with title, type, URL, and metadata." ) @@ -60,7 +66,6 @@ class NotionSearchBlock(Block): description="List of IDs from search results for batch operations." 
) count: int = SchemaField(description="Number of results found.") - error: str = SchemaField(description="Error message if the operation failed.") def __init__(self): super().__init__( diff --git a/autogpt_platform/backend/backend/blocks/nvidia/deepfake.py b/autogpt_platform/backend/backend/blocks/nvidia/deepfake.py index f5205f6e72..f60b649839 100644 --- a/autogpt_platform/backend/backend/blocks/nvidia/deepfake.py +++ b/autogpt_platform/backend/backend/blocks/nvidia/deepfake.py @@ -3,14 +3,20 @@ from backend.blocks.nvidia._auth import ( NvidiaCredentialsField, NvidiaCredentialsInput, ) -from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema +from backend.data.block import ( + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, +) from backend.data.model import SchemaField from backend.util.request import Requests from backend.util.type import MediaFileType class NvidiaDeepfakeDetectBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: NvidiaCredentialsInput = NvidiaCredentialsField() image_base64: MediaFileType = SchemaField( description="Image to analyze for deepfakes", @@ -20,7 +26,7 @@ class NvidiaDeepfakeDetectBlock(Block): default=False, ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): status: str = SchemaField( description="Detection status (SUCCESS, ERROR, CONTENT_FILTERED)", ) diff --git a/autogpt_platform/backend/backend/blocks/perplexity.py b/autogpt_platform/backend/backend/blocks/perplexity.py index 989ecac254..e2796718a9 100644 --- a/autogpt_platform/backend/backend/blocks/perplexity.py +++ b/autogpt_platform/backend/backend/blocks/perplexity.py @@ -6,7 +6,13 @@ from typing import Any, Literal import openai from pydantic import SecretStr -from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema +from backend.data.block import ( + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, +) from backend.data.model import ( APIKeyCredentials, CredentialsField, @@ -54,7 +60,7 @@ def PerplexityCredentialsField() -> PerplexityCredentials: class PerplexityBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): prompt: str = SchemaField( description="The query to send to the Perplexity model.", placeholder="Enter your query here...", @@ -78,14 +84,13 @@ class PerplexityBlock(Block): description="The maximum number of tokens to generate.", ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): response: str = SchemaField( description="The response from the Perplexity model." ) annotations: list[dict[str, Any]] = SchemaField( description="List of URL citations and annotations from the response." 
) - error: str = SchemaField(description="Error message if the API call failed.") def __init__(self): super().__init__( diff --git a/autogpt_platform/backend/backend/blocks/persistence.py b/autogpt_platform/backend/backend/blocks/persistence.py index 8b165569b5..a327fd22c7 100644 --- a/autogpt_platform/backend/backend/blocks/persistence.py +++ b/autogpt_platform/backend/backend/blocks/persistence.py @@ -1,7 +1,13 @@ import logging from typing import Any, Literal -from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema +from backend.data.block import ( + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, +) from backend.data.model import SchemaField from backend.util.clients import get_database_manager_async_client @@ -22,7 +28,7 @@ def get_storage_key(key: str, scope: StorageScope, graph_id: str) -> str: class PersistInformationBlock(Block): """Block for persisting key-value data for the current user with configurable scope""" - class Input(BlockSchema): + class Input(BlockSchemaInput): key: str = SchemaField(description="Key to store the information under") value: Any = SchemaField(description="Value to store") scope: StorageScope = SchemaField( @@ -30,7 +36,7 @@ class PersistInformationBlock(Block): default="within_agent", ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): value: Any = SchemaField(description="Value that was stored") def __init__(self): @@ -90,7 +96,7 @@ class PersistInformationBlock(Block): class RetrieveInformationBlock(Block): """Block for retrieving key-value data for the current user with configurable scope""" - class Input(BlockSchema): + class Input(BlockSchemaInput): key: str = SchemaField(description="Key to retrieve the information for") scope: StorageScope = SchemaField( description="Scope of persistence: within_agent (shared across all runs of this agent) or across_agents (shared across all agents for this user)", @@ -100,7 +106,7 @@ class RetrieveInformationBlock(Block): description="Default value to return if key is not found", default=None ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): value: Any = SchemaField(description="Retrieved value or default value") def __init__(self): diff --git a/autogpt_platform/backend/backend/blocks/pinecone.py b/autogpt_platform/backend/backend/blocks/pinecone.py index 529940b7cf..878f6f72fb 100644 --- a/autogpt_platform/backend/backend/blocks/pinecone.py +++ b/autogpt_platform/backend/backend/blocks/pinecone.py @@ -3,7 +3,13 @@ from typing import Any, Literal from pinecone import Pinecone, ServerlessSpec -from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema +from backend.data.block import ( + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, +) from backend.data.model import ( APIKeyCredentials, CredentialsField, @@ -27,7 +33,7 @@ def PineconeCredentialsField() -> PineconeCredentialsInput: class PineconeInitBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: PineconeCredentialsInput = PineconeCredentialsField() index_name: str = SchemaField(description="Name of the Pinecone index") dimension: int = SchemaField( @@ -43,7 +49,7 @@ class PineconeInitBlock(Block): description="Region for serverless", default="us-east-1" ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): index: str = SchemaField(description="Name of the initialized Pinecone index") message: str = SchemaField(description="Status message") @@ -83,7 +89,7 @@ class 
PineconeInitBlock(Block): class PineconeQueryBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: PineconeCredentialsInput = PineconeCredentialsField() query_vector: list = SchemaField(description="Query vector") namespace: str = SchemaField( @@ -102,7 +108,7 @@ class PineconeQueryBlock(Block): host: str = SchemaField(description="Host for pinecone", default="") idx_name: str = SchemaField(description="Index name for pinecone") - class Output(BlockSchema): + class Output(BlockSchemaOutput): results: Any = SchemaField(description="Query results from Pinecone") combined_results: Any = SchemaField( description="Combined results from Pinecone" @@ -166,7 +172,7 @@ class PineconeQueryBlock(Block): class PineconeInsertBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: PineconeCredentialsInput = PineconeCredentialsField() index: str = SchemaField(description="Initialized Pinecone index") chunks: list = SchemaField(description="List of text chunks to ingest") @@ -181,7 +187,7 @@ class PineconeInsertBlock(Block): default_factory=dict, ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): upsert_response: str = SchemaField( description="Response from Pinecone upsert operation" ) diff --git a/autogpt_platform/backend/backend/blocks/reddit.py b/autogpt_platform/backend/backend/blocks/reddit.py index b7fddf0d9a..231e7affef 100644 --- a/autogpt_platform/backend/backend/blocks/reddit.py +++ b/autogpt_platform/backend/backend/blocks/reddit.py @@ -1,10 +1,17 @@ +import logging from datetime import datetime, timezone from typing import Iterator, Literal import praw from pydantic import BaseModel, SecretStr -from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema +from backend.data.block import ( + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, +) from backend.data.model import ( CredentialsField, CredentialsMetaInput, @@ -58,6 +65,7 @@ class RedditComment(BaseModel): settings = Settings() +logger = logging.getLogger(__name__) def get_praw(creds: RedditCredentials) -> praw.Reddit: @@ -71,12 +79,12 @@ def get_praw(creds: RedditCredentials) -> praw.Reddit: me = client.user.me() if not me: raise ValueError("Invalid Reddit credentials.") - print(f"Logged in as Reddit user: {me.name}") + logger.info(f"Logged in as Reddit user: {me.name}") return client class GetRedditPostsBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): subreddit: str = SchemaField( description="Subreddit name, excluding the /r/ prefix", default="writingprompts", @@ -94,7 +102,7 @@ class GetRedditPostsBlock(Block): description="Number of posts to fetch", default=10 ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): post: RedditPost = SchemaField(description="Reddit post") posts: list[RedditPost] = SchemaField(description="List of all Reddit posts") @@ -194,11 +202,11 @@ class GetRedditPostsBlock(Block): class PostRedditCommentBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: RedditCredentialsInput = RedditCredentialsField() data: RedditComment = SchemaField(description="Reddit comment") - class Output(BlockSchema): + class Output(BlockSchemaOutput): comment_id: str = SchemaField(description="Posted comment ID") def __init__(self): diff --git a/autogpt_platform/backend/backend/blocks/replicate/flux_advanced.py b/autogpt_platform/backend/backend/blocks/replicate/flux_advanced.py index 477b1fa3e2..c112ce75c4 100644 --- 
a/autogpt_platform/backend/backend/blocks/replicate/flux_advanced.py +++ b/autogpt_platform/backend/backend/blocks/replicate/flux_advanced.py @@ -10,7 +10,13 @@ from backend.blocks.replicate._auth import ( ReplicateCredentialsInput, ) from backend.blocks.replicate._helper import ReplicateOutputs, extract_result -from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema +from backend.data.block import ( + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, +) from backend.data.model import APIKeyCredentials, CredentialsField, SchemaField @@ -38,7 +44,7 @@ class ImageType(str, Enum): class ReplicateFluxAdvancedModelBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: ReplicateCredentialsInput = CredentialsField( description="The Replicate integration can be used with " "any API key with sufficient permissions for the blocks it is used on.", @@ -105,9 +111,8 @@ class ReplicateFluxAdvancedModelBlock(Block): title="Safety Tolerance", ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): result: str = SchemaField(description="Generated output") - error: str = SchemaField(description="Error message if the model run failed") def __init__(self): super().__init__( diff --git a/autogpt_platform/backend/backend/blocks/replicate/replicate_block.py b/autogpt_platform/backend/backend/blocks/replicate/replicate_block.py index ca0477788a..7ee054d02e 100644 --- a/autogpt_platform/backend/backend/blocks/replicate/replicate_block.py +++ b/autogpt_platform/backend/backend/blocks/replicate/replicate_block.py @@ -10,8 +10,15 @@ from backend.blocks.replicate._auth import ( ReplicateCredentialsInput, ) from backend.blocks.replicate._helper import ReplicateOutputs, extract_result -from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema +from backend.data.block import ( + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, +) from backend.data.model import APIKeyCredentials, CredentialsField, SchemaField +from backend.util.exceptions import BlockExecutionError, BlockInputError logger = logging.getLogger(__name__) @@ -27,7 +34,7 @@ class ReplicateModelBlock(Block): - Get structured outputs with prediction metadata """ - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: ReplicateCredentialsInput = CredentialsField( description="Enter your Replicate API key to access the model API. 
You can obtain an API key from https://replicate.com/account/api-tokens.", ) @@ -49,11 +56,10 @@ class ReplicateModelBlock(Block): advanced=True, ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): result: str = SchemaField(description="The output from the Replicate model") status: str = SchemaField(description="Status of the prediction") model_name: str = SchemaField(description="Name of the model used") - error: str = SchemaField(description="Error message if any", default="") def __init__(self): super().__init__( @@ -106,9 +112,27 @@ class ReplicateModelBlock(Block): yield "status", "succeeded" yield "model_name", input_data.model_name except Exception as e: - error_msg = f"Unexpected error running Replicate model: {str(e)}" - logger.error(error_msg) - raise RuntimeError(error_msg) + error_msg = str(e) + logger.error(f"Error running Replicate model: {error_msg}") + + # Input validation errors (422, 400) → BlockInputError + if ( + "422" in error_msg + or "Input validation failed" in error_msg + or "400" in error_msg + ): + raise BlockInputError( + message=f"Invalid model inputs: {error_msg}", + block_name=self.name, + block_id=self.id, + ) from e + # Everything else → BlockExecutionError + else: + raise BlockExecutionError( + message=f"Replicate model error: {error_msg}", + block_name=self.name, + block_id=self.id, + ) from e async def run_model(self, model_ref: str, model_inputs: dict, api_key: SecretStr): """ diff --git a/autogpt_platform/backend/backend/blocks/rss.py b/autogpt_platform/backend/backend/blocks/rss.py index 8d8dc91d09..a23b3ee25c 100644 --- a/autogpt_platform/backend/backend/blocks/rss.py +++ b/autogpt_platform/backend/backend/blocks/rss.py @@ -1,15 +1,20 @@ import asyncio import logging -import urllib.parse -import urllib.request from datetime import datetime, timedelta, timezone from typing import Any import feedparser import pydantic -from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema +from backend.data.block import ( + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, +) from backend.data.model import SchemaField +from backend.util.request import Requests class RSSEntry(pydantic.BaseModel): @@ -22,7 +27,7 @@ class RSSEntry(pydantic.BaseModel): class ReadRSSFeedBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): rss_url: str = SchemaField( description="The URL of the RSS feed to read", placeholder="https://example.com/rss", @@ -41,7 +46,7 @@ class ReadRSSFeedBlock(Block): default=True, ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): entry: RSSEntry = SchemaField(description="The RSS item") entries: list[RSSEntry] = SchemaField(description="List of all RSS entries") @@ -103,35 +108,29 @@ class ReadRSSFeedBlock(Block): ) @staticmethod - def parse_feed(url: str) -> dict[str, Any]: + async def parse_feed(url: str) -> dict[str, Any]: # Security fix: Add protection against memory exhaustion attacks MAX_FEED_SIZE = 10 * 1024 * 1024 # 10MB limit for RSS feeds - # Validate URL - parsed_url = urllib.parse.urlparse(url) - if parsed_url.scheme not in ("http", "https"): - raise ValueError(f"Invalid URL scheme: {parsed_url.scheme}") - - # Download with size limit + # Download feed content with size limit try: - with urllib.request.urlopen(url, timeout=30) as response: - # Check content length if available - content_length = response.headers.get("Content-Length") - if content_length and int(content_length) > MAX_FEED_SIZE: - raise ValueError( - f"Feed too large: 
{content_length} bytes exceeds {MAX_FEED_SIZE} limit" - ) + response = await Requests(raise_for_status=True).get(url) - # Read with size limit - content = response.read(MAX_FEED_SIZE + 1) - if len(content) > MAX_FEED_SIZE: - raise ValueError( - f"Feed too large: exceeds {MAX_FEED_SIZE} byte limit" - ) + # Check content length if available + content_length = response.headers.get("Content-Length") + if content_length and int(content_length) > MAX_FEED_SIZE: + raise ValueError( + f"Feed too large: {content_length} bytes exceeds {MAX_FEED_SIZE} limit" + ) - # Parse with feedparser using the validated content - # feedparser has built-in protection against XML attacks - return feedparser.parse(content) # type: ignore + # Get content with size limit + content = response.content + if len(content) > MAX_FEED_SIZE: + raise ValueError(f"Feed too large: exceeds {MAX_FEED_SIZE} byte limit") + + # Parse with feedparser using the validated content + # feedparser has built-in protection against XML attacks + return feedparser.parse(content) # type: ignore except Exception as e: # Log error and return empty feed logging.warning(f"Failed to parse RSS feed from {url}: {e}") @@ -145,7 +144,7 @@ class ReadRSSFeedBlock(Block): while keep_going: keep_going = input_data.run_continuously - feed = self.parse_feed(input_data.rss_url) + feed = await self.parse_feed(input_data.rss_url) all_entries = [] for entry in feed["entries"]: diff --git a/autogpt_platform/backend/backend/blocks/sampling.py b/autogpt_platform/backend/backend/blocks/sampling.py index ffd509ff75..b4463947a7 100644 --- a/autogpt_platform/backend/backend/blocks/sampling.py +++ b/autogpt_platform/backend/backend/blocks/sampling.py @@ -3,7 +3,13 @@ from collections import defaultdict from enum import Enum from typing import Any, Dict, List, Optional, Union -from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema +from backend.data.block import ( + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, +) from backend.data.model import SchemaField @@ -19,7 +25,7 @@ class SamplingMethod(str, Enum): class DataSamplingBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): data: Union[Dict[str, Any], List[Union[dict, List[Any]]]] = SchemaField( description="The dataset to sample from. Can be a single dictionary, a list of dictionaries, or a list of lists.", placeholder="{'id': 1, 'value': 'a'} or [{'id': 1, 'value': 'a'}, {'id': 2, 'value': 'b'}, ...]", @@ -54,7 +60,7 @@ class DataSamplingBlock(Block): default=None, ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): sampled_data: List[Union[dict, List[Any]]] = SchemaField( description="The sampled subset of the input data." 
) diff --git a/autogpt_platform/backend/backend/blocks/screenshotone.py b/autogpt_platform/backend/backend/blocks/screenshotone.py index 5ca97e77f7..1f8947376b 100644 --- a/autogpt_platform/backend/backend/blocks/screenshotone.py +++ b/autogpt_platform/backend/backend/blocks/screenshotone.py @@ -4,7 +4,13 @@ from typing import Literal from pydantic import SecretStr -from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema +from backend.data.block import ( + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, +) from backend.data.model import ( APIKeyCredentials, CredentialsField, @@ -25,7 +31,7 @@ class Format(str, Enum): class ScreenshotWebPageBlock(Block): """Block for taking screenshots using ScreenshotOne API""" - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: CredentialsMetaInput[ Literal[ProviderName.SCREENSHOTONE], Literal["api_key"] ] = CredentialsField(description="The ScreenshotOne API key") @@ -56,9 +62,8 @@ class ScreenshotWebPageBlock(Block): description="Whether to enable caching", default=False ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): image: MediaFileType = SchemaField(description="The screenshot image data") - error: str = SchemaField(description="Error message if the screenshot failed") def __init__(self): super().__init__( diff --git a/autogpt_platform/backend/backend/blocks/search.py b/autogpt_platform/backend/backend/blocks/search.py index 51eadf215e..2d10dffab6 100644 --- a/autogpt_platform/backend/backend/blocks/search.py +++ b/autogpt_platform/backend/backend/blocks/search.py @@ -4,7 +4,13 @@ from urllib.parse import quote from pydantic import SecretStr from backend.blocks.helpers.http import GetRequest -from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema +from backend.data.block import ( + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, +) from backend.data.model import ( APIKeyCredentials, CredentialsField, @@ -15,10 +21,10 @@ from backend.integrations.providers import ProviderName class GetWikipediaSummaryBlock(Block, GetRequest): - class Input(BlockSchema): + class Input(BlockSchemaInput): topic: str = SchemaField(description="The topic to fetch the summary for") - class Output(BlockSchema): + class Output(BlockSchemaOutput): summary: str = SchemaField(description="The summary of the given topic") error: str = SchemaField( description="Error message if the summary cannot be retrieved" @@ -39,10 +45,16 @@ class GetWikipediaSummaryBlock(Block, GetRequest): async def run(self, input_data: Input, **kwargs) -> BlockOutput: topic = input_data.topic url = f"https://en.wikipedia.org/api/rest_v1/page/summary/{topic}" - response = await self.get_request(url, json=True) - if "extract" not in response: - raise RuntimeError(f"Unable to parse Wikipedia response: {response}") - yield "summary", response["extract"] + + # Note: User-Agent is now automatically set by the request library + # to comply with Wikimedia's robot policy (https://w.wiki/4wJS) + try: + response = await self.get_request(url, json=True) + if "extract" not in response: + raise ValueError(f"Unable to parse Wikipedia response: {response}") + yield "summary", response["extract"] + except Exception as e: + raise ValueError(f"Failed to fetch Wikipedia summary: {e}") from e TEST_CREDENTIALS = APIKeyCredentials( @@ -61,7 +73,7 @@ TEST_CREDENTIALS_INPUT = { class GetWeatherInformationBlock(Block, GetRequest): - class Input(BlockSchema): + class 
Input(BlockSchemaInput): location: str = SchemaField( description="Location to get weather information for" ) @@ -76,7 +88,7 @@ class GetWeatherInformationBlock(Block, GetRequest): description="Whether to use Celsius or Fahrenheit for temperature", ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): temperature: str = SchemaField( description="Temperature in the specified location" ) diff --git a/autogpt_platform/backend/backend/blocks/slant3d/filament.py b/autogpt_platform/backend/backend/blocks/slant3d/filament.py index 0659a45561..f2b9eae38d 100644 --- a/autogpt_platform/backend/backend/blocks/slant3d/filament.py +++ b/autogpt_platform/backend/backend/blocks/slant3d/filament.py @@ -1,6 +1,6 @@ from typing import List -from backend.data.block import BlockOutput, BlockSchema +from backend.data.block import BlockOutput, BlockSchemaInput, BlockSchemaOutput from backend.data.model import APIKeyCredentials, SchemaField from ._api import ( @@ -16,14 +16,13 @@ from .base import Slant3DBlockBase class Slant3DFilamentBlock(Slant3DBlockBase): """Block for retrieving available filaments""" - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: Slant3DCredentialsInput = Slant3DCredentialsField() - class Output(BlockSchema): + class Output(BlockSchemaOutput): filaments: List[Filament] = SchemaField( description="List of available filaments" ) - error: str = SchemaField(description="Error message if request failed") def __init__(self): super().__init__( diff --git a/autogpt_platform/backend/backend/blocks/slant3d/order.py b/autogpt_platform/backend/backend/blocks/slant3d/order.py index 43a5802468..4ece3fc51e 100644 --- a/autogpt_platform/backend/backend/blocks/slant3d/order.py +++ b/autogpt_platform/backend/backend/blocks/slant3d/order.py @@ -1,7 +1,7 @@ import uuid from typing import List -from backend.data.block import BlockOutput, BlockSchema +from backend.data.block import BlockOutput, BlockSchemaInput, BlockSchemaOutput from backend.data.model import APIKeyCredentials, SchemaField from backend.util.settings import BehaveAs, Settings @@ -21,7 +21,7 @@ settings = Settings() class Slant3DCreateOrderBlock(Slant3DBlockBase): """Block for creating new orders""" - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: Slant3DCredentialsInput = Slant3DCredentialsField() order_number: str = SchemaField( description="Your custom order number (or leave blank for a random one)", @@ -36,9 +36,8 @@ class Slant3DCreateOrderBlock(Slant3DBlockBase): advanced=False, ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): order_id: str = SchemaField(description="Slant3D order ID") - error: str = SchemaField(description="Error message if order failed") def __init__(self): super().__init__( @@ -97,7 +96,7 @@ class Slant3DCreateOrderBlock(Slant3DBlockBase): class Slant3DEstimateOrderBlock(Slant3DBlockBase): """Block for getting order cost estimates""" - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: Slant3DCredentialsInput = Slant3DCredentialsField() order_number: str = SchemaField( description="Your custom order number (or leave blank for a random one)", @@ -112,11 +111,10 @@ class Slant3DEstimateOrderBlock(Slant3DBlockBase): advanced=False, ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): total_price: float = SchemaField(description="Total price in USD") shipping_cost: float = SchemaField(description="Shipping cost") printing_cost: float = SchemaField(description="Printing cost") - error: str = 
SchemaField(description="Error message if estimation failed") def __init__(self): super().__init__( @@ -184,7 +182,7 @@ class Slant3DEstimateOrderBlock(Slant3DBlockBase): class Slant3DEstimateShippingBlock(Slant3DBlockBase): """Block for getting shipping cost estimates""" - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: Slant3DCredentialsInput = Slant3DCredentialsField() order_number: str = SchemaField( description="Your custom order number (or leave blank for a random one)", @@ -198,10 +196,9 @@ class Slant3DEstimateShippingBlock(Slant3DBlockBase): advanced=False, ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): shipping_cost: float = SchemaField(description="Estimated shipping cost") currency_code: str = SchemaField(description="Currency code (e.g., 'usd')") - error: str = SchemaField(description="Error message if estimation failed") def __init__(self): super().__init__( @@ -267,12 +264,11 @@ class Slant3DEstimateShippingBlock(Slant3DBlockBase): class Slant3DGetOrdersBlock(Slant3DBlockBase): """Block for retrieving all orders""" - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: Slant3DCredentialsInput = Slant3DCredentialsField() - class Output(BlockSchema): + class Output(BlockSchemaOutput): orders: List[str] = SchemaField(description="List of orders with their details") - error: str = SchemaField(description="Error message if request failed") def __init__(self): super().__init__( @@ -323,16 +319,15 @@ class Slant3DGetOrdersBlock(Slant3DBlockBase): class Slant3DTrackingBlock(Slant3DBlockBase): """Block for tracking order status and shipping""" - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: Slant3DCredentialsInput = Slant3DCredentialsField() order_id: str = SchemaField(description="Slant3D order ID to track") - class Output(BlockSchema): + class Output(BlockSchemaOutput): status: str = SchemaField(description="Order status") tracking_numbers: List[str] = SchemaField( description="List of tracking numbers" ) - error: str = SchemaField(description="Error message if tracking failed") def __init__(self): super().__init__( @@ -373,13 +368,12 @@ class Slant3DTrackingBlock(Slant3DBlockBase): class Slant3DCancelOrderBlock(Slant3DBlockBase): """Block for canceling orders""" - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: Slant3DCredentialsInput = Slant3DCredentialsField() order_id: str = SchemaField(description="Slant3D order ID to cancel") - class Output(BlockSchema): + class Output(BlockSchemaOutput): status: str = SchemaField(description="Cancellation status message") - error: str = SchemaField(description="Error message if cancellation failed") def __init__(self): super().__init__( diff --git a/autogpt_platform/backend/backend/blocks/slant3d/slicing.py b/autogpt_platform/backend/backend/blocks/slant3d/slicing.py index 6abe3045ac..1952b162d2 100644 --- a/autogpt_platform/backend/backend/blocks/slant3d/slicing.py +++ b/autogpt_platform/backend/backend/blocks/slant3d/slicing.py @@ -1,4 +1,4 @@ -from backend.data.block import BlockOutput, BlockSchema +from backend.data.block import BlockOutput, BlockSchemaInput, BlockSchemaOutput from backend.data.model import APIKeyCredentials, SchemaField from ._api import ( @@ -13,16 +13,15 @@ from .base import Slant3DBlockBase class Slant3DSlicerBlock(Slant3DBlockBase): """Block for slicing 3D model files""" - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: Slant3DCredentialsInput = Slant3DCredentialsField() file_url: 
str = SchemaField( description="URL of the 3D model file to slice (STL)" ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): message: str = SchemaField(description="Response message") price: float = SchemaField(description="Calculated price for printing") - error: str = SchemaField(description="Error message if slicing failed") def __init__(self): super().__init__( diff --git a/autogpt_platform/backend/backend/blocks/slant3d/webhook.py b/autogpt_platform/backend/backend/blocks/slant3d/webhook.py index 22f87b468d..e5a2d72568 100644 --- a/autogpt_platform/backend/backend/blocks/slant3d/webhook.py +++ b/autogpt_platform/backend/backend/blocks/slant3d/webhook.py @@ -4,7 +4,8 @@ from backend.data.block import ( Block, BlockCategory, BlockOutput, - BlockSchema, + BlockSchemaInput, + BlockSchemaOutput, BlockWebhookConfig, ) from backend.data.model import SchemaField @@ -24,12 +25,12 @@ settings = Settings() class Slant3DTriggerBase: """Base class for Slant3D webhook triggers""" - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: Slant3DCredentialsInput = Slant3DCredentialsField() # Webhook URL is handled by the webhook system payload: dict = SchemaField(hidden=True, default_factory=dict) - class Output(BlockSchema): + class Output(BlockSchemaOutput): payload: dict = SchemaField( description="The complete webhook payload received from Slant3D" ) diff --git a/autogpt_platform/backend/backend/blocks/smart_decision_maker.py b/autogpt_platform/backend/backend/blocks/smart_decision_maker.py index cd1abf9718..e2e5cfa3e4 100644 --- a/autogpt_platform/backend/backend/blocks/smart_decision_maker.py +++ b/autogpt_platform/backend/backend/blocks/smart_decision_maker.py @@ -1,8 +1,11 @@ import logging import re from collections import Counter +from concurrent.futures import Future from typing import TYPE_CHECKING, Any +from pydantic import BaseModel + import backend.blocks.llm as llm from backend.blocks.agent import AgentExecutorBlock from backend.data.block import ( @@ -10,24 +13,51 @@ from backend.data.block import ( BlockCategory, BlockInput, BlockOutput, - BlockSchema, + BlockSchemaInput, + BlockSchemaOutput, BlockType, ) from backend.data.dynamic_fields import ( extract_base_field_name, get_dynamic_field_description, is_dynamic_field, + is_tool_pin, ) +from backend.data.execution import ExecutionContext from backend.data.model import NodeExecutionStats, SchemaField from backend.util import json from backend.util.clients import get_database_manager_async_client +from backend.util.prompt import MAIN_OBJECTIVE_PREFIX if TYPE_CHECKING: from backend.data.graph import Link, Node + from backend.executor.manager import ExecutionProcessor logger = logging.getLogger(__name__) +class ToolInfo(BaseModel): + """Processed tool call information.""" + + tool_call: Any # The original tool call object from LLM response + tool_name: str # The function name + tool_def: dict[str, Any] # The tool definition from tool_functions + input_data: dict[str, Any] # Processed input data ready for tool execution + field_mapping: dict[str, str] # Field name mapping for the tool + + +class ExecutionParams(BaseModel): + """Tool execution parameters.""" + + user_id: str + graph_id: str + node_id: str + graph_version: int + graph_exec_id: str + node_exec_id: str + execution_context: "ExecutionContext" + + def _get_tool_requests(entry: dict[str, Any]) -> list[str]: """ Return a list of tool_call_ids if the entry is a tool request. 
@@ -103,6 +133,50 @@ def _create_tool_response(call_id: str, output: Any) -> dict[str, Any]: return {"role": "tool", "tool_call_id": call_id, "content": content} +def _combine_tool_responses(tool_outputs: list[dict[str, Any]]) -> list[dict[str, Any]]: + """ + Combine multiple Anthropic tool responses into a single user message. + For non-Anthropic formats, returns the original list unchanged. + """ + if len(tool_outputs) <= 1: + return tool_outputs + + # Anthropic responses have role="user", type="message", and content is a list with tool_result items + anthropic_responses = [ + output + for output in tool_outputs + if ( + output.get("role") == "user" + and output.get("type") == "message" + and isinstance(output.get("content"), list) + and any( + item.get("type") == "tool_result" + for item in output.get("content", []) + if isinstance(item, dict) + ) + ) + ] + + if len(anthropic_responses) > 1: + combined_content = [ + item for response in anthropic_responses for item in response["content"] + ] + + combined_response = { + "role": "user", + "type": "message", + "content": combined_content, + } + + non_anthropic_responses = [ + output for output in tool_outputs if output not in anthropic_responses + ] + + return [combined_response] + non_anthropic_responses + + return tool_outputs + + def _convert_raw_response_to_dict(raw_response: Any) -> dict[str, Any]: """ Safely convert raw_response to dictionary format for conversation history. @@ -119,13 +193,16 @@ def _convert_raw_response_to_dict(raw_response: Any) -> dict[str, Any]: return json.to_dict(raw_response) -def get_pending_tool_calls(conversation_history: list[Any]) -> dict[str, int]: +def get_pending_tool_calls(conversation_history: list[Any] | None) -> dict[str, int]: """ All the tool calls entry in the conversation history requires a response. This function returns the pending tool calls that has not generated an output yet. Return: dict[str, int] - A dictionary of pending tool call IDs with their count. """ + if not conversation_history: + return {} + pending_calls = Counter() for history in conversation_history: for call_id in _get_tool_requests(history): @@ -142,7 +219,7 @@ class SmartDecisionMakerBlock(Block): A block that uses a language model to make smart decisions based on a given prompt. """ - class Input(BlockSchema): + class Input(BlockSchemaInput): prompt: str = SchemaField( description="The prompt to send to the language model.", placeholder="Enter your prompt here...", @@ -171,7 +248,7 @@ class SmartDecisionMakerBlock(Block): "Function parameters that has no default value and not optional typed has to be provided. ", description="The system prompt to provide additional context to the model.", ) - conversation_history: list[dict] = SchemaField( + conversation_history: list[dict] | None = SchemaField( default_factory=list, description="The conversation history to provide context for the prompt.", ) @@ -199,6 +276,17 @@ class SmartDecisionMakerBlock(Block): default="localhost:11434", description="Ollama host for local models", ) + agent_mode_max_iterations: int = SchemaField( + title="Agent Mode Max Iterations", + description="Maximum iterations for agent mode. 
0 = traditional mode (single LLM call, yield tool calls for external execution), -1 = infinite agent mode (loop until finished), 1+ = agent mode with max iterations limit.", + advanced=True, + default=0, + ) + conversation_compaction: bool = SchemaField( + default=True, + title="Context window auto-compaction", + description="Automatically compact the context window once it hits the limit", + ) @classmethod def get_missing_links(cls, data: BlockInput, links: list["Link"]) -> set[str]: @@ -254,8 +342,7 @@ class SmartDecisionMakerBlock(Block): return set() - class Output(BlockSchema): - error: str = SchemaField(description="Error message if the API call failed.") + class Output(BlockSchemaOutput): tools: Any = SchemaField(description="The tools that are available to use.") finished: str = SchemaField( description="The finished message to display to the user." @@ -367,8 +454,9 @@ class SmartDecisionMakerBlock(Block): "required": sorted(required_fields), } - # Store field mapping for later use in output processing + # Store field mapping and node info for later use in output processing tool_function["_field_mapping"] = field_mapping + tool_function["_sink_node_id"] = sink_node.id return {"type": "function", "function": tool_function} @@ -431,10 +519,13 @@ class SmartDecisionMakerBlock(Block): "strict": True, } + # Store node info for later use in output processing + tool_function["_sink_node_id"] = sink_node.id + return {"type": "function", "function": tool_function} @staticmethod - async def _create_function_signature( + async def _create_tool_node_signatures( node_id: str, ) -> list[dict[str, Any]]: """ @@ -450,7 +541,7 @@ class SmartDecisionMakerBlock(Block): tools = [ (link, node) for link, node in await db_client.get_connected_output_nodes(node_id) - if link.source_name.startswith("tools_^_") and link.source_id == node_id + if is_tool_pin(link.source_name) and link.source_id == node_id ] if not tools: raise ValueError("There is no next node to execute.") @@ -498,6 +589,7 @@ class SmartDecisionMakerBlock(Block): Returns the response if successful, raises ValueError if validation fails. """ resp = await llm.llm_call( + compress_prompt_to_fit=input_data.conversation_compaction, credentials=credentials, llm_model=input_data.model, prompt=current_prompt, @@ -538,8 +630,14 @@ class SmartDecisionMakerBlock(Block): ), None, ) - if tool_def is None and len(tool_functions) == 1: - tool_def = tool_functions[0] + if tool_def is None: + if len(tool_functions) == 1: + tool_def = tool_functions[0] + else: + validation_errors_list.append( + f"Tool call for '{tool_name}' does not match any known " + "tool definition." + ) # Get parameters schema from tool definition if ( @@ -579,6 +677,291 @@ class SmartDecisionMakerBlock(Block): return resp + def _process_tool_calls( + self, response, tool_functions: list[dict[str, Any]] + ) -> list[ToolInfo]: + """Process tool calls and extract tool definitions, arguments, and input data. 
+ + Returns a list of tool info dicts with: + - tool_call: The original tool call object + - tool_name: The function name + - tool_def: The tool definition from tool_functions + - input_data: Processed input data dict (includes None values) + - field_mapping: Field name mapping for the tool + """ + if not response.tool_calls: + return [] + + processed_tools = [] + for tool_call in response.tool_calls: + tool_name = tool_call.function.name + tool_args = json.loads(tool_call.function.arguments) + + tool_def = next( + ( + tool + for tool in tool_functions + if tool["function"]["name"] == tool_name + ), + None, + ) + if not tool_def: + if len(tool_functions) == 1: + tool_def = tool_functions[0] + else: + continue + + # Build input data for the tool + input_data = {} + field_mapping = tool_def["function"].get("_field_mapping", {}) + if "function" in tool_def and "parameters" in tool_def["function"]: + expected_args = tool_def["function"]["parameters"].get("properties", {}) + for clean_arg_name in expected_args: + original_field_name = field_mapping.get( + clean_arg_name, clean_arg_name + ) + arg_value = tool_args.get(clean_arg_name) + # Include all expected parameters, even if None (for backward compatibility with tests) + input_data[original_field_name] = arg_value + + processed_tools.append( + ToolInfo( + tool_call=tool_call, + tool_name=tool_name, + tool_def=tool_def, + input_data=input_data, + field_mapping=field_mapping, + ) + ) + + return processed_tools + + def _update_conversation( + self, prompt: list[dict], response, tool_outputs: list | None = None + ): + """Update conversation history with response and tool outputs.""" + # Don't add separate reasoning message with tool calls (breaks Anthropic's tool_use->tool_result pairing) + assistant_message = _convert_raw_response_to_dict(response.raw_response) + has_tool_calls = isinstance(assistant_message.get("content"), list) and any( + item.get("type") == "tool_use" + for item in assistant_message.get("content", []) + ) + + if response.reasoning and not has_tool_calls: + prompt.append( + {"role": "assistant", "content": f"[Reasoning]: {response.reasoning}"} + ) + + prompt.append(assistant_message) + + if tool_outputs: + prompt.extend(tool_outputs) + + async def _execute_single_tool_with_manager( + self, + tool_info: ToolInfo, + execution_params: ExecutionParams, + execution_processor: "ExecutionProcessor", + ) -> dict: + """Execute a single tool using the execution manager for proper integration.""" + # Lazy imports to avoid circular dependencies + from backend.data.execution import NodeExecutionEntry + + tool_call = tool_info.tool_call + tool_def = tool_info.tool_def + raw_input_data = tool_info.input_data + + # Get sink node and field mapping + sink_node_id = tool_def["function"]["_sink_node_id"] + + # Use proper database operations for tool execution + db_client = get_database_manager_async_client() + + # Get target node + target_node = await db_client.get_node(sink_node_id) + if not target_node: + raise ValueError(f"Target node {sink_node_id} not found") + + # Create proper node execution using upsert_execution_input + node_exec_result = None + final_input_data = None + + # Add all inputs to the execution + if not raw_input_data: + raise ValueError(f"Tool call has no input data: {tool_call}") + + for input_name, input_value in raw_input_data.items(): + node_exec_result, final_input_data = await db_client.upsert_execution_input( + node_id=sink_node_id, + graph_exec_id=execution_params.graph_exec_id, + input_name=input_name, + 
input_data=input_value, + ) + + assert node_exec_result is not None, "node_exec_result should not be None" + + # Create NodeExecutionEntry for execution manager + node_exec_entry = NodeExecutionEntry( + user_id=execution_params.user_id, + graph_exec_id=execution_params.graph_exec_id, + graph_id=execution_params.graph_id, + graph_version=execution_params.graph_version, + node_exec_id=node_exec_result.node_exec_id, + node_id=sink_node_id, + block_id=target_node.block_id, + inputs=final_input_data or {}, + execution_context=execution_params.execution_context, + ) + + # Use the execution manager to execute the tool node + try: + # Get NodeExecutionProgress from the execution manager's running nodes + node_exec_progress = execution_processor.running_node_execution[ + sink_node_id + ] + + # Use the execution manager's own graph stats + graph_stats_pair = ( + execution_processor.execution_stats, + execution_processor.execution_stats_lock, + ) + + # Create a completed future for the task tracking system + node_exec_future = Future() + node_exec_progress.add_task( + node_exec_id=node_exec_result.node_exec_id, + task=node_exec_future, + ) + + # Execute the node directly since we're in the SmartDecisionMaker context + node_exec_future.set_result( + await execution_processor.on_node_execution( + node_exec=node_exec_entry, + node_exec_progress=node_exec_progress, + nodes_input_masks=None, + graph_stats_pair=graph_stats_pair, + ) + ) + + # Get outputs from database after execution completes using database manager client + node_outputs = await db_client.get_execution_outputs_by_node_exec_id( + node_exec_result.node_exec_id + ) + + # Create tool response + tool_response_content = ( + json.dumps(node_outputs) + if node_outputs + else "Tool executed successfully" + ) + return _create_tool_response(tool_call.id, tool_response_content) + + except Exception as e: + logger.error(f"Tool execution with manager failed: {e}") + # Return error response + return _create_tool_response( + tool_call.id, f"Tool execution failed: {str(e)}" + ) + + async def _execute_tools_agent_mode( + self, + input_data, + credentials, + tool_functions: list[dict[str, Any]], + prompt: list[dict], + graph_exec_id: str, + node_id: str, + node_exec_id: str, + user_id: str, + graph_id: str, + graph_version: int, + execution_context: ExecutionContext, + execution_processor: "ExecutionProcessor", + ): + """Execute tools in agent mode with a loop until finished.""" + max_iterations = input_data.agent_mode_max_iterations + iteration = 0 + + # Execution parameters for tool execution + execution_params = ExecutionParams( + user_id=user_id, + graph_id=graph_id, + node_id=node_id, + graph_version=graph_version, + graph_exec_id=graph_exec_id, + node_exec_id=node_exec_id, + execution_context=execution_context, + ) + + current_prompt = list(prompt) + + while max_iterations < 0 or iteration < max_iterations: + iteration += 1 + logger.debug(f"Agent mode iteration {iteration}") + + # Prepare prompt for this iteration + iteration_prompt = list(current_prompt) + + # On the last iteration, add a special system message to encourage completion + if max_iterations > 0 and iteration == max_iterations: + last_iteration_message = { + "role": "system", + "content": f"{MAIN_OBJECTIVE_PREFIX}This is your last iteration ({iteration}/{max_iterations}). " + "Try to complete the task with the information you have. If you cannot fully complete it, " + "provide a summary of what you've accomplished and what remains to be done. 
" + "Prefer finishing with a clear response rather than making additional tool calls.", + } + iteration_prompt.append(last_iteration_message) + + # Get LLM response + try: + response = await self._attempt_llm_call_with_validation( + credentials, input_data, iteration_prompt, tool_functions + ) + except Exception as e: + yield "error", f"LLM call failed in agent mode iteration {iteration}: {str(e)}" + return + + # Process tool calls + processed_tools = self._process_tool_calls(response, tool_functions) + + # If no tool calls, we're done + if not processed_tools: + yield "finished", response.response + self._update_conversation(current_prompt, response) + yield "conversations", current_prompt + return + + # Execute tools and collect responses + tool_outputs = [] + for tool_info in processed_tools: + try: + tool_response = await self._execute_single_tool_with_manager( + tool_info, execution_params, execution_processor + ) + tool_outputs.append(tool_response) + except Exception as e: + logger.error(f"Tool execution failed: {e}") + # Create error response for the tool + error_response = _create_tool_response( + tool_info.tool_call.id, f"Error: {str(e)}" + ) + tool_outputs.append(error_response) + + tool_outputs = _combine_tool_responses(tool_outputs) + + self._update_conversation(current_prompt, response, tool_outputs) + + # Yield intermediate conversation state + yield "conversations", current_prompt + + # If we reach max iterations, yield the current state + if max_iterations < 0: + yield "finished", f"Agent mode completed after {iteration} iterations" + else: + yield "finished", f"Agent mode completed after {max_iterations} iterations (limit reached)" + yield "conversations", current_prompt + async def run( self, input_data: Input, @@ -589,15 +972,19 @@ class SmartDecisionMakerBlock(Block): graph_exec_id: str, node_exec_id: str, user_id: str, + graph_version: int, + execution_context: ExecutionContext, + execution_processor: "ExecutionProcessor", **kwargs, ) -> BlockOutput: - tool_functions = await self._create_function_signature(node_id) + + tool_functions = await self._create_tool_node_signatures(node_id) yield "tool_functions", json.dumps(tool_functions) - input_data.conversation_history = input_data.conversation_history or [] - prompt = [json.to_dict(p) for p in input_data.conversation_history if p] + conversation_history = input_data.conversation_history or [] + prompt = [json.to_dict(p) for p in conversation_history if p] - pending_tool_calls = get_pending_tool_calls(input_data.conversation_history) + pending_tool_calls = get_pending_tool_calls(conversation_history) if pending_tool_calls and input_data.last_tool_output is None: raise ValueError(f"Tool call requires an output for {pending_tool_calls}") @@ -634,24 +1021,52 @@ class SmartDecisionMakerBlock(Block): input_data.prompt = llm.fmt.format_string(input_data.prompt, values) input_data.sys_prompt = llm.fmt.format_string(input_data.sys_prompt, values) - prefix = "[Main Objective Prompt]: " - if input_data.sys_prompt and not any( - p["role"] == "system" and p["content"].startswith(prefix) for p in prompt + p["role"] == "system" and p["content"].startswith(MAIN_OBJECTIVE_PREFIX) + for p in prompt ): - prompt.append({"role": "system", "content": prefix + input_data.sys_prompt}) + prompt.append( + { + "role": "system", + "content": MAIN_OBJECTIVE_PREFIX + input_data.sys_prompt, + } + ) if input_data.prompt and not any( - p["role"] == "user" and p["content"].startswith(prefix) for p in prompt + p["role"] == "user" and 
p["content"].startswith(MAIN_OBJECTIVE_PREFIX) + for p in prompt ): - prompt.append({"role": "user", "content": prefix + input_data.prompt}) + prompt.append( + {"role": "user", "content": MAIN_OBJECTIVE_PREFIX + input_data.prompt} + ) + # Execute tools based on the selected mode + if input_data.agent_mode_max_iterations != 0: + # In agent mode, execute tools directly in a loop until finished + async for result in self._execute_tools_agent_mode( + input_data=input_data, + credentials=credentials, + tool_functions=tool_functions, + prompt=prompt, + graph_exec_id=graph_exec_id, + node_id=node_id, + node_exec_id=node_exec_id, + user_id=user_id, + graph_id=graph_id, + graph_version=graph_version, + execution_context=execution_context, + execution_processor=execution_processor, + ): + yield result + return + + # One-off mode: single LLM call and yield tool calls for external execution current_prompt = list(prompt) max_attempts = max(1, int(input_data.retry)) response = None last_error = None - for attempt in range(max_attempts): + for _ in range(max_attempts): try: response = await self._attempt_llm_call_with_validation( credentials, input_data, current_prompt, tool_functions @@ -661,9 +1076,9 @@ class SmartDecisionMakerBlock(Block): except ValueError as e: last_error = e error_feedback = ( - "Your tool call had parameter errors. Please fix the following issues and try again:\n" + "Your tool call had errors. Please fix the following issues and try again:\n" + f"- {str(e)}\n" - + "\nPlease make sure to use the exact parameter names as specified in the function schema." + + "\nPlease make sure to use the exact tool and parameter names as specified in the function schema." ) current_prompt = list(current_prompt) + [ {"role": "user", "content": error_feedback} @@ -690,21 +1105,23 @@ class SmartDecisionMakerBlock(Block): ), None, ) - if ( - tool_def - and "function" in tool_def - and "parameters" in tool_def["function"] - ): + if not tool_def: + # NOTE: This matches the logic in _attempt_llm_call_with_validation and + # relies on its validation for the assumption that this is valid to use. 
+ if len(tool_functions) == 1: + tool_def = tool_functions[0] + else: + # This should not happen due to prior validation + continue + + if "function" in tool_def and "parameters" in tool_def["function"]: expected_args = tool_def["function"]["parameters"].get("properties", {}) else: expected_args = {arg: {} for arg in tool_args.keys()} - # Get field mapping from tool definition - field_mapping = ( - tool_def.get("function", {}).get("_field_mapping", {}) - if tool_def - else {} - ) + # Get the sink node ID and field mapping from tool definition + field_mapping = tool_def["function"].get("_field_mapping", {}) + sink_node_id = tool_def["function"]["_sink_node_id"] for clean_arg_name in expected_args: # arg_name is now always the cleaned field name (for Anthropic API compliance) @@ -712,9 +1129,8 @@ class SmartDecisionMakerBlock(Block): original_field_name = field_mapping.get(clean_arg_name, clean_arg_name) arg_value = tool_args.get(clean_arg_name) - sanitized_tool_name = self.cleanup(tool_name) sanitized_arg_name = self.cleanup(original_field_name) - emit_key = f"tools_^_{sanitized_tool_name}_~_{sanitized_arg_name}" + emit_key = f"tools_^_{sink_node_id}_~_{sanitized_arg_name}" logger.debug( "[SmartDecisionMakerBlock|geid:%s|neid:%s] emit %s", diff --git a/autogpt_platform/backend/backend/blocks/smartlead/campaign.py b/autogpt_platform/backend/backend/blocks/smartlead/campaign.py index 112004d7ad..c3bf930068 100644 --- a/autogpt_platform/backend/backend/blocks/smartlead/campaign.py +++ b/autogpt_platform/backend/backend/blocks/smartlead/campaign.py @@ -16,14 +16,20 @@ from backend.blocks.smartlead.models import ( SaveSequencesResponse, Sequence, ) -from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema +from backend.data.block import ( + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, +) from backend.data.model import CredentialsField, SchemaField class CreateCampaignBlock(Block): """Create a campaign in SmartLead""" - class Input(BlockSchema): + class Input(BlockSchemaInput): name: str = SchemaField( description="The name of the campaign", ) @@ -31,7 +37,7 @@ class CreateCampaignBlock(Block): description="SmartLead credentials", ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): id: int = SchemaField( description="The ID of the created campaign", ) @@ -105,7 +111,7 @@ class CreateCampaignBlock(Block): class AddLeadToCampaignBlock(Block): """Add a lead to a campaign in SmartLead""" - class Input(BlockSchema): + class Input(BlockSchemaInput): campaign_id: int = SchemaField( description="The ID of the campaign to add the lead to", ) @@ -123,7 +129,7 @@ class AddLeadToCampaignBlock(Block): description="SmartLead credentials", ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): campaign_id: int = SchemaField( description="The ID of the campaign the lead was added to (passed through)", ) @@ -242,7 +248,7 @@ class AddLeadToCampaignBlock(Block): class SaveCampaignSequencesBlock(Block): """Save sequences within a campaign""" - class Input(BlockSchema): + class Input(BlockSchemaInput): campaign_id: int = SchemaField( description="The ID of the campaign to save sequences for", ) @@ -255,7 +261,7 @@ class SaveCampaignSequencesBlock(Block): description="SmartLead credentials", ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): data: dict | str | None = SchemaField( description="Data from the API", default=None, diff --git a/autogpt_platform/backend/backend/blocks/spreadsheet.py 
b/autogpt_platform/backend/backend/blocks/spreadsheet.py index 1de849e1b0..211aac23f4 100644 --- a/autogpt_platform/backend/backend/blocks/spreadsheet.py +++ b/autogpt_platform/backend/backend/blocks/spreadsheet.py @@ -1,13 +1,19 @@ from pathlib import Path -from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema +from backend.data.block import ( + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, +) from backend.data.model import ContributorDetails, SchemaField from backend.util.file import get_exec_file_path, store_media_file from backend.util.type import MediaFileType class ReadSpreadsheetBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): contents: str | None = SchemaField( description="The contents of the CSV/spreadsheet data to read", placeholder="a, b, c\n1,2,3\n4,5,6", @@ -52,7 +58,7 @@ class ReadSpreadsheetBlock(Block): default=False, ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): row: dict[str, str] = SchemaField( description="The data produced from each row in the spreadsheet" ) diff --git a/autogpt_platform/backend/backend/blocks/stagehand/blocks.py b/autogpt_platform/backend/backend/blocks/stagehand/blocks.py index 50ec368dd2..be1d736962 100644 --- a/autogpt_platform/backend/backend/blocks/stagehand/blocks.py +++ b/autogpt_platform/backend/backend/blocks/stagehand/blocks.py @@ -1,6 +1,7 @@ import logging import signal import threading +import warnings from contextlib import contextmanager from enum import Enum @@ -21,11 +22,15 @@ from backend.sdk import ( Block, BlockCategory, BlockOutput, - BlockSchema, + BlockSchemaInput, + BlockSchemaOutput, CredentialsMetaInput, SchemaField, ) +# Suppress false positive cleanup warning of litellm (a dependency of stagehand) +warnings.filterwarnings("ignore", module="litellm.llms.custom_httpx") + # Store the original method original_register_signal_handlers = stagehand.main.Stagehand._register_signal_handlers @@ -118,7 +123,7 @@ class StagehandRecommendedLlmModel(str, Enum): class StagehandObserveBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): # Browserbase credentials (Stagehand provider) or raw API key stagehand_credentials: CredentialsMetaInput = ( stagehand_provider.credentials_field( @@ -151,7 +156,7 @@ class StagehandObserveBlock(Block): default=45000, ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): selector: str = SchemaField(description="XPath selector to locate element.") description: str = SchemaField(description="Human-readable description") method: str | None = SchemaField(description="Suggested action method") @@ -211,7 +216,7 @@ class StagehandObserveBlock(Block): class StagehandActBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): # Browserbase credentials (Stagehand provider) or raw API key stagehand_credentials: CredentialsMetaInput = ( stagehand_provider.credentials_field( @@ -252,7 +257,7 @@ class StagehandActBlock(Block): default=60000, ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): success: bool = SchemaField( description="Whether the action was completed successfully" ) @@ -311,7 +316,7 @@ class StagehandActBlock(Block): class StagehandExtractBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): # Browserbase credentials (Stagehand provider) or raw API key stagehand_credentials: CredentialsMetaInput = ( stagehand_provider.credentials_field( @@ -344,7 +349,7 @@ class StagehandExtractBlock(Block): default=45000, ) - class 
Output(BlockSchema): + class Output(BlockSchemaOutput): extraction: str = SchemaField(description="Extracted data from the page.") def __init__(self): diff --git a/autogpt_platform/backend/backend/blocks/system/library_operations.py b/autogpt_platform/backend/backend/blocks/system/library_operations.py index 2cf1aec0db..116da64599 100644 --- a/autogpt_platform/backend/backend/blocks/system/library_operations.py +++ b/autogpt_platform/backend/backend/blocks/system/library_operations.py @@ -3,7 +3,13 @@ from typing import Any from pydantic import BaseModel -from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema +from backend.data.block import ( + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, +) from backend.data.model import SchemaField from backend.util.clients import get_database_manager_async_client @@ -30,7 +36,7 @@ class AddToLibraryFromStoreBlock(Block): This enables users to easily import agents from the marketplace into their personal collection. """ - class Input(BlockSchema): + class Input(BlockSchemaInput): store_listing_version_id: str = SchemaField( description="The ID of the store listing version to add to library" ) @@ -39,7 +45,7 @@ class AddToLibraryFromStoreBlock(Block): default=None, ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): success: bool = SchemaField( description="Whether the agent was successfully added to library" ) @@ -134,7 +140,7 @@ class ListLibraryAgentsBlock(Block): Block that lists all agents in the user's library. """ - class Input(BlockSchema): + class Input(BlockSchemaInput): search_query: str | None = SchemaField( description="Optional search query to filter agents", default=None ) @@ -145,7 +151,7 @@ class ListLibraryAgentsBlock(Block): description="Page number for pagination", default=1, ge=1 ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): agents: list[LibraryAgent] = SchemaField( description="List of agents in the library", default_factory=list, diff --git a/autogpt_platform/backend/backend/blocks/system/store_operations.py b/autogpt_platform/backend/backend/blocks/system/store_operations.py index 6f5763bc93..e9b7a01ebe 100644 --- a/autogpt_platform/backend/backend/blocks/system/store_operations.py +++ b/autogpt_platform/backend/backend/blocks/system/store_operations.py @@ -3,7 +3,13 @@ from typing import Literal from pydantic import BaseModel -from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema +from backend.data.block import ( + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, +) from backend.data.model import SchemaField from backend.util.clients import get_database_manager_async_client @@ -59,11 +65,11 @@ class GetStoreAgentDetailsBlock(Block): Block that retrieves detailed information about an agent from the store. """ - class Input(BlockSchema): + class Input(BlockSchemaInput): creator: str = SchemaField(description="The username of the agent creator") slug: str = SchemaField(description="The name of the agent") - class Output(BlockSchema): + class Output(BlockSchemaOutput): found: bool = SchemaField( description="Whether the agent was found in the store" ) @@ -163,21 +169,21 @@ class SearchStoreAgentsBlock(Block): Block that searches for agents in the store based on various criteria. 
""" - class Input(BlockSchema): + class Input(BlockSchemaInput): query: str | None = SchemaField( description="Search query to find agents", default=None ) category: str | None = SchemaField( description="Filter by category", default=None ) - sort_by: Literal["rating", "runs", "name", "recent"] = SchemaField( + sort_by: Literal["rating", "runs", "name", "updated_at"] = SchemaField( description="How to sort the results", default="rating" ) limit: int = SchemaField( description="Maximum number of results to return", default=10, ge=1, le=100 ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): agents: list[StoreAgent] = SchemaField( description="List of agents matching the search criteria", default_factory=list, @@ -272,24 +278,18 @@ class SearchStoreAgentsBlock(Block): self, query: str | None = None, category: str | None = None, - sort_by: str = "rating", + sort_by: Literal["rating", "runs", "name", "updated_at"] = "rating", limit: int = 10, ) -> SearchAgentsResponse: """ Search for agents in the store using the existing store database function. """ # Map our sort_by to the store's sorted_by parameter - sorted_by_map = { - "rating": "most_popular", - "runs": "most_runs", - "name": "alphabetical", - "recent": "recently_updated", - } result = await get_database_manager_async_client().get_store_agents( featured=False, creators=None, - sorted_by=sorted_by_map.get(sort_by, "most_popular"), + sorted_by=sort_by, search_query=query, category=category, page=1, diff --git a/autogpt_platform/backend/backend/blocks/talking_head.py b/autogpt_platform/backend/backend/blocks/talking_head.py index 3861cb7752..b33561d7aa 100644 --- a/autogpt_platform/backend/backend/blocks/talking_head.py +++ b/autogpt_platform/backend/backend/blocks/talking_head.py @@ -3,7 +3,13 @@ from typing import Literal from pydantic import SecretStr -from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema +from backend.data.block import ( + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, +) from backend.data.model import ( APIKeyCredentials, CredentialsField, @@ -29,7 +35,7 @@ TEST_CREDENTIALS_INPUT = { class CreateTalkingAvatarVideoBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: CredentialsMetaInput[ Literal[ProviderName.D_ID], Literal["api_key"] ] = CredentialsField( @@ -70,9 +76,8 @@ class CreateTalkingAvatarVideoBlock(Block): description="Interval between polling attempts in seconds", default=10, ge=5 ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): video_url: str = SchemaField(description="The URL of the created video") - error: str = SchemaField(description="Error message if the request failed") def __init__(self): super().__init__( diff --git a/autogpt_platform/backend/backend/blocks/test/test_block.py b/autogpt_platform/backend/backend/blocks/test/test_block.py index 14f17a9764..7a1fdbcc73 100644 --- a/autogpt_platform/backend/backend/blocks/test/test_block.py +++ b/autogpt_platform/backend/backend/blocks/test/test_block.py @@ -1,17 +1,27 @@ -from typing import Type +from typing import Any, Type import pytest -from backend.data.block import Block, get_blocks +from backend.data.block import Block, BlockSchemaInput, get_blocks +from backend.data.model import SchemaField from backend.util.test import execute_block_test +SKIP_BLOCK_TESTS = { + "HumanInTheLoopBlock", +} -@pytest.mark.parametrize("block", get_blocks().values(), ids=lambda b: b.name) + +@pytest.mark.parametrize("block", get_blocks().values(), 
ids=lambda b: b().name) async def test_available_blocks(block: Type[Block]): - await execute_block_test(block()) + block_instance = block() + if block_instance.__class__.__name__ in SKIP_BLOCK_TESTS: + pytest.skip( + f"Skipping {block_instance.__class__.__name__} - requires external service" + ) + await execute_block_test(block_instance) -@pytest.mark.parametrize("block", get_blocks().values(), ids=lambda b: b.name) +@pytest.mark.parametrize("block", get_blocks().values(), ids=lambda b: b().name) async def test_block_ids_valid(block: Type[Block]): # add the tests here to check they are uuid4 import uuid @@ -123,3 +133,148 @@ async def test_block_ids_valid(block: Type[Block]): ), f"Block {block.name} ID is UUID version {parsed_uuid.version}, expected version 4" except ValueError: pytest.fail(f"Block {block.name} has invalid UUID format: {block_instance.id}") + + +class TestAutoCredentialsFieldsValidation: + """Tests for auto_credentials field validation in BlockSchema.""" + + def test_duplicate_auto_credentials_kwarg_name_raises_error(self): + """Test that duplicate kwarg_name in auto_credentials raises ValueError.""" + + class DuplicateKwargSchema(BlockSchemaInput): + """Schema with duplicate auto_credentials kwarg_name.""" + + # Both fields explicitly use the same kwarg_name "credentials" + file1: dict[str, Any] | None = SchemaField( + description="First file input", + default=None, + json_schema_extra={ + "auto_credentials": { + "provider": "google", + "type": "oauth2", + "scopes": ["https://www.googleapis.com/auth/drive.file"], + "kwarg_name": "credentials", + } + }, + ) + file2: dict[str, Any] | None = SchemaField( + description="Second file input", + default=None, + json_schema_extra={ + "auto_credentials": { + "provider": "google", + "type": "oauth2", + "scopes": ["https://www.googleapis.com/auth/drive.file"], + "kwarg_name": "credentials", # Duplicate kwarg_name! 
+ } + }, + ) + + with pytest.raises(ValueError) as exc_info: + DuplicateKwargSchema.get_auto_credentials_fields() + + error_message = str(exc_info.value) + assert "Duplicate auto_credentials kwarg_name 'credentials'" in error_message + assert "file1" in error_message + assert "file2" in error_message + + def test_unique_auto_credentials_kwarg_names_succeed(self): + """Test that unique kwarg_name values work correctly.""" + + class UniqueKwargSchema(BlockSchemaInput): + """Schema with unique auto_credentials kwarg_name values.""" + + file1: dict[str, Any] | None = SchemaField( + description="First file input", + default=None, + json_schema_extra={ + "auto_credentials": { + "provider": "google", + "type": "oauth2", + "scopes": ["https://www.googleapis.com/auth/drive.file"], + "kwarg_name": "file1_credentials", + } + }, + ) + file2: dict[str, Any] | None = SchemaField( + description="Second file input", + default=None, + json_schema_extra={ + "auto_credentials": { + "provider": "google", + "type": "oauth2", + "scopes": ["https://www.googleapis.com/auth/drive.file"], + "kwarg_name": "file2_credentials", # Different kwarg_name + } + }, + ) + + # Should not raise + result = UniqueKwargSchema.get_auto_credentials_fields() + + assert "file1_credentials" in result + assert "file2_credentials" in result + assert result["file1_credentials"]["field_name"] == "file1" + assert result["file2_credentials"]["field_name"] == "file2" + + def test_default_kwarg_name_is_credentials(self): + """Test that missing kwarg_name defaults to 'credentials'.""" + + class DefaultKwargSchema(BlockSchemaInput): + """Schema with auto_credentials missing kwarg_name.""" + + file: dict[str, Any] | None = SchemaField( + description="File input", + default=None, + json_schema_extra={ + "auto_credentials": { + "provider": "google", + "type": "oauth2", + "scopes": ["https://www.googleapis.com/auth/drive.file"], + # No kwarg_name specified - should default to "credentials" + } + }, + ) + + result = DefaultKwargSchema.get_auto_credentials_fields() + + assert "credentials" in result + assert result["credentials"]["field_name"] == "file" + + def test_duplicate_default_kwarg_name_raises_error(self): + """Test that two fields with default kwarg_name raises ValueError.""" + + class DefaultDuplicateSchema(BlockSchemaInput): + """Schema where both fields omit kwarg_name, defaulting to 'credentials'.""" + + file1: dict[str, Any] | None = SchemaField( + description="First file input", + default=None, + json_schema_extra={ + "auto_credentials": { + "provider": "google", + "type": "oauth2", + "scopes": ["https://www.googleapis.com/auth/drive.file"], + # No kwarg_name - defaults to "credentials" + } + }, + ) + file2: dict[str, Any] | None = SchemaField( + description="Second file input", + default=None, + json_schema_extra={ + "auto_credentials": { + "provider": "google", + "type": "oauth2", + "scopes": ["https://www.googleapis.com/auth/drive.file"], + # No kwarg_name - also defaults to "credentials" + } + }, + ) + + with pytest.raises(ValueError) as exc_info: + DefaultDuplicateSchema.get_auto_credentials_fields() + + assert "Duplicate auto_credentials kwarg_name 'credentials'" in str( + exc_info.value + ) diff --git a/autogpt_platform/backend/backend/blocks/test/test_llm.py b/autogpt_platform/backend/backend/blocks/test/test_llm.py index 07e8a7f732..090587767a 100644 --- a/autogpt_platform/backend/backend/blocks/test/test_llm.py +++ b/autogpt_platform/backend/backend/blocks/test/test_llm.py @@ -362,40 +362,25 @@ class TestLLMStatsTracking: 
assert block.execution_stats.llm_call_count == 1 # Check output - assert outputs["response"] == {"response": "AI response to conversation"} + assert outputs["response"] == "AI response to conversation" @pytest.mark.asyncio - async def test_ai_list_generator_with_retries(self): - """Test that AIListGeneratorBlock correctly tracks stats with retries.""" + async def test_ai_list_generator_basic_functionality(self): + """Test that AIListGeneratorBlock correctly works with structured responses.""" import backend.blocks.llm as llm block = llm.AIListGeneratorBlock() - # Counter to track calls - call_count = 0 - + # Mock the llm_call to return a structured response async def mock_llm_call(input_data, credentials): - nonlocal call_count - call_count += 1 - - # Update stats - if hasattr(block, "execution_stats") and block.execution_stats: - block.execution_stats.input_token_count += 40 - block.execution_stats.output_token_count += 20 - block.execution_stats.llm_call_count += 1 - else: - block.execution_stats = NodeExecutionStats( - input_token_count=40, - output_token_count=20, - llm_call_count=1, - ) - - if call_count == 1: - # First call returns invalid format - return {"response": "not a valid list"} - else: - # Second call returns valid list - return {"response": "['item1', 'item2', 'item3']"} + # Update stats to simulate LLM call + block.execution_stats = NodeExecutionStats( + input_token_count=50, + output_token_count=30, + llm_call_count=1, + ) + # Return a structured response with the expected format + return {"list": ["item1", "item2", "item3"]} block.llm_call = mock_llm_call # type: ignore @@ -413,14 +398,20 @@ class TestLLMStatsTracking: ): outputs[output_name] = output_data - # Check stats - should have 2 calls - assert call_count == 2 - assert block.execution_stats.input_token_count == 80 # 40 * 2 - assert block.execution_stats.output_token_count == 40 # 20 * 2 - assert block.execution_stats.llm_call_count == 2 + # Check stats + assert block.execution_stats.input_token_count == 50 + assert block.execution_stats.output_token_count == 30 + assert block.execution_stats.llm_call_count == 1 # Check output assert outputs["generated_list"] == ["item1", "item2", "item3"] + # Check that individual items were yielded + # Note: outputs dict will only contain the last value for each key + # So we need to check that the list_item output exists + assert "list_item" in outputs + # The list_item output should be the last item in the list + assert outputs["list_item"] == "item3" + assert "prompt" in outputs @pytest.mark.asyncio async def test_merge_llm_stats(self): @@ -500,3 +491,181 @@ class TestLLMStatsTracking: # Check output assert "response" in outputs assert outputs["response"] == {"result": "test"} + + +class TestAITextSummarizerValidation: + """Test that AITextSummarizerBlock validates LLM responses are strings.""" + + @pytest.mark.asyncio + async def test_summarize_chunk_rejects_list_response(self): + """Test that _summarize_chunk raises ValueError when LLM returns a list instead of string.""" + import backend.blocks.llm as llm + + block = llm.AITextSummarizerBlock() + + # Mock llm_call to return a list instead of a string + async def mock_llm_call(input_data, credentials): + # Simulate LLM returning a list when it should return a string + return {"summary": ["bullet point 1", "bullet point 2", "bullet point 3"]} + + block.llm_call = mock_llm_call # type: ignore + + # Create input data + input_data = llm.AITextSummarizerBlock.Input( + text="Some text to summarize", + model=llm.LlmModel.GPT4O, + 
credentials=llm.TEST_CREDENTIALS_INPUT, # type: ignore + style=llm.SummaryStyle.BULLET_POINTS, + ) + + # Should raise ValueError with descriptive message + with pytest.raises(ValueError) as exc_info: + await block._summarize_chunk( + "Some text to summarize", + input_data, + credentials=llm.TEST_CREDENTIALS, + ) + + error_message = str(exc_info.value) + assert "Expected a string summary" in error_message + assert "received list" in error_message + assert "incorrectly formatted" in error_message + + @pytest.mark.asyncio + async def test_combine_summaries_rejects_list_response(self): + """Test that _combine_summaries raises ValueError when LLM returns a list instead of string.""" + import backend.blocks.llm as llm + + block = llm.AITextSummarizerBlock() + + # Mock llm_call to return a list instead of a string + async def mock_llm_call(input_data, credentials): + # Check if this is the final summary call + if "final_summary" in input_data.expected_format: + # Simulate LLM returning a list when it should return a string + return { + "final_summary": [ + "bullet point 1", + "bullet point 2", + "bullet point 3", + ] + } + else: + return {"summary": "Valid summary"} + + block.llm_call = mock_llm_call # type: ignore + + # Create input data + input_data = llm.AITextSummarizerBlock.Input( + text="Some text to summarize", + model=llm.LlmModel.GPT4O, + credentials=llm.TEST_CREDENTIALS_INPUT, # type: ignore + style=llm.SummaryStyle.BULLET_POINTS, + max_tokens=1000, + ) + + # Should raise ValueError with descriptive message + with pytest.raises(ValueError) as exc_info: + await block._combine_summaries( + ["summary 1", "summary 2"], + input_data, + credentials=llm.TEST_CREDENTIALS, + ) + + error_message = str(exc_info.value) + assert "Expected a string final summary" in error_message + assert "received list" in error_message + assert "incorrectly formatted" in error_message + + @pytest.mark.asyncio + async def test_summarize_chunk_accepts_valid_string_response(self): + """Test that _summarize_chunk accepts valid string responses.""" + import backend.blocks.llm as llm + + block = llm.AITextSummarizerBlock() + + # Mock llm_call to return a valid string + async def mock_llm_call(input_data, credentials): + return {"summary": "This is a valid string summary"} + + block.llm_call = mock_llm_call # type: ignore + + # Create input data + input_data = llm.AITextSummarizerBlock.Input( + text="Some text to summarize", + model=llm.LlmModel.GPT4O, + credentials=llm.TEST_CREDENTIALS_INPUT, # type: ignore + ) + + # Should not raise any error + result = await block._summarize_chunk( + "Some text to summarize", + input_data, + credentials=llm.TEST_CREDENTIALS, + ) + + assert result == "This is a valid string summary" + assert isinstance(result, str) + + @pytest.mark.asyncio + async def test_combine_summaries_accepts_valid_string_response(self): + """Test that _combine_summaries accepts valid string responses.""" + import backend.blocks.llm as llm + + block = llm.AITextSummarizerBlock() + + # Mock llm_call to return a valid string + async def mock_llm_call(input_data, credentials): + return {"final_summary": "This is a valid final summary string"} + + block.llm_call = mock_llm_call # type: ignore + + # Create input data + input_data = llm.AITextSummarizerBlock.Input( + text="Some text to summarize", + model=llm.LlmModel.GPT4O, + credentials=llm.TEST_CREDENTIALS_INPUT, # type: ignore + max_tokens=1000, + ) + + # Should not raise any error + result = await block._combine_summaries( + ["summary 1", "summary 2"], + 
input_data, + credentials=llm.TEST_CREDENTIALS, + ) + + assert result == "This is a valid final summary string" + assert isinstance(result, str) + + @pytest.mark.asyncio + async def test_summarize_chunk_rejects_dict_response(self): + """Test that _summarize_chunk raises ValueError when LLM returns a dict instead of string.""" + import backend.blocks.llm as llm + + block = llm.AITextSummarizerBlock() + + # Mock llm_call to return a dict instead of a string + async def mock_llm_call(input_data, credentials): + return {"summary": {"nested": "object", "with": "data"}} + + block.llm_call = mock_llm_call # type: ignore + + # Create input data + input_data = llm.AITextSummarizerBlock.Input( + text="Some text to summarize", + model=llm.LlmModel.GPT4O, + credentials=llm.TEST_CREDENTIALS_INPUT, # type: ignore + ) + + # Should raise ValueError + with pytest.raises(ValueError) as exc_info: + await block._summarize_chunk( + "Some text to summarize", + input_data, + credentials=llm.TEST_CREDENTIALS, + ) + + error_message = str(exc_info.value) + assert "Expected a string summary" in error_message + assert "received dict" in error_message diff --git a/autogpt_platform/backend/backend/blocks/test/test_smart_decision_maker.py b/autogpt_platform/backend/backend/blocks/test/test_smart_decision_maker.py index c58a88249c..deff4278f9 100644 --- a/autogpt_platform/backend/backend/blocks/test/test_smart_decision_maker.py +++ b/autogpt_platform/backend/backend/blocks/test/test_smart_decision_maker.py @@ -1,10 +1,14 @@ import logging +import threading +from collections import defaultdict +from unittest.mock import AsyncMock, MagicMock, patch import pytest +from backend.api.model import CreateGraph +from backend.api.rest_api import AgentServer +from backend.data.execution import ExecutionContext from backend.data.model import ProviderName, User -from backend.server.model import CreateGraph -from backend.server.rest_api import AgentServer from backend.usecases.sample import create_test_graph, create_test_user from backend.util.test import SpinTestServer, wait_execution @@ -17,10 +21,10 @@ async def create_graph(s: SpinTestServer, g, u: User): async def create_credentials(s: SpinTestServer, u: User): - import backend.blocks.llm as llm + import backend.blocks.llm as llm_module provider = ProviderName.OPENAI - credentials = llm.TEST_CREDENTIALS + credentials = llm_module.TEST_CREDENTIALS return await s.agent_server.test_create_credentials(u.id, provider, credentials) @@ -165,7 +169,7 @@ async def test_smart_decision_maker_function_signature(server: SpinTestServer): ) test_graph = await create_graph(server, test_graph, test_user) - tool_functions = await SmartDecisionMakerBlock._create_function_signature( + tool_functions = await SmartDecisionMakerBlock._create_tool_node_signatures( test_graph.nodes[0].id ) assert tool_functions is not None, "Tool functions should not be None" @@ -196,8 +200,6 @@ async def test_smart_decision_maker_function_signature(server: SpinTestServer): @pytest.mark.asyncio async def test_smart_decision_maker_tracks_llm_stats(): """Test that SmartDecisionMakerBlock correctly tracks LLM usage stats.""" - from unittest.mock import MagicMock, patch - import backend.blocks.llm as llm_module from backend.blocks.smart_decision_maker import SmartDecisionMakerBlock @@ -215,8 +217,7 @@ async def test_smart_decision_maker_tracks_llm_stats(): "content": "I need to think about this.", } - # Mock the _create_function_signature method to avoid database calls - from unittest.mock import AsyncMock + # Mock the 
_create_tool_node_signatures method to avoid database calls with patch( "backend.blocks.llm.llm_call", @@ -224,7 +225,7 @@ async def test_smart_decision_maker_tracks_llm_stats(): return_value=mock_response, ), patch.object( SmartDecisionMakerBlock, - "_create_function_signature", + "_create_tool_node_signatures", new_callable=AsyncMock, return_value=[], ): @@ -234,10 +235,19 @@ async def test_smart_decision_maker_tracks_llm_stats(): prompt="Should I continue with this task?", model=llm_module.LlmModel.GPT4O, credentials=llm_module.TEST_CREDENTIALS_INPUT, # type: ignore + agent_mode_max_iterations=0, ) # Execute the block outputs = {} + # Create execution context + + mock_execution_context = ExecutionContext(safe_mode=False) + + # Create a mock execution processor for tests + + mock_execution_processor = MagicMock() + async for output_name, output_data in block.run( input_data, credentials=llm_module.TEST_CREDENTIALS, @@ -246,6 +256,9 @@ async def test_smart_decision_maker_tracks_llm_stats(): graph_exec_id="test-exec-id", node_exec_id="test-node-exec-id", user_id="test-user-id", + graph_version=1, + execution_context=mock_execution_context, + execution_processor=mock_execution_processor, ): outputs[output_name] = output_data @@ -263,8 +276,6 @@ async def test_smart_decision_maker_tracks_llm_stats(): @pytest.mark.asyncio async def test_smart_decision_maker_parameter_validation(): """Test that SmartDecisionMakerBlock correctly validates tool call parameters.""" - from unittest.mock import MagicMock, patch - import backend.blocks.llm as llm_module from backend.blocks.smart_decision_maker import SmartDecisionMakerBlock @@ -293,6 +304,7 @@ async def test_smart_decision_maker_parameter_validation(): }, "required": ["query", "max_keyword_difficulty"], }, + "_sink_node_id": "test-sink-node-id", }, } ] @@ -310,15 +322,13 @@ async def test_smart_decision_maker_parameter_validation(): mock_response_with_typo.reasoning = None mock_response_with_typo.raw_response = {"role": "assistant", "content": None} - from unittest.mock import AsyncMock - with patch( "backend.blocks.llm.llm_call", new_callable=AsyncMock, return_value=mock_response_with_typo, ) as mock_llm_call, patch.object( SmartDecisionMakerBlock, - "_create_function_signature", + "_create_tool_node_signatures", new_callable=AsyncMock, return_value=mock_tool_functions, ): @@ -328,8 +338,17 @@ async def test_smart_decision_maker_parameter_validation(): model=llm_module.LlmModel.GPT4O, credentials=llm_module.TEST_CREDENTIALS_INPUT, # type: ignore retry=2, # Set retry to 2 for testing + agent_mode_max_iterations=0, ) + # Create execution context + + mock_execution_context = ExecutionContext(safe_mode=False) + + # Create a mock execution processor for tests + + mock_execution_processor = MagicMock() + # Should raise ValueError after retries due to typo'd parameter name with pytest.raises(ValueError) as exc_info: outputs = {} @@ -341,6 +360,9 @@ async def test_smart_decision_maker_parameter_validation(): graph_exec_id="test-exec-id", node_exec_id="test-node-exec-id", user_id="test-user-id", + graph_version=1, + execution_context=mock_execution_context, + execution_processor=mock_execution_processor, ): outputs[output_name] = output_data @@ -367,15 +389,13 @@ async def test_smart_decision_maker_parameter_validation(): mock_response_missing_required.reasoning = None mock_response_missing_required.raw_response = {"role": "assistant", "content": None} - from unittest.mock import AsyncMock - with patch( "backend.blocks.llm.llm_call", new_callable=AsyncMock, 
return_value=mock_response_missing_required, ), patch.object( SmartDecisionMakerBlock, - "_create_function_signature", + "_create_tool_node_signatures", new_callable=AsyncMock, return_value=mock_tool_functions, ): @@ -384,8 +404,17 @@ async def test_smart_decision_maker_parameter_validation(): prompt="Search for keywords", model=llm_module.LlmModel.GPT4O, credentials=llm_module.TEST_CREDENTIALS_INPUT, # type: ignore + agent_mode_max_iterations=0, ) + # Create execution context + + mock_execution_context = ExecutionContext(safe_mode=False) + + # Create a mock execution processor for tests + + mock_execution_processor = MagicMock() + # Should raise ValueError due to missing required parameter with pytest.raises(ValueError) as exc_info: outputs = {} @@ -397,6 +426,9 @@ async def test_smart_decision_maker_parameter_validation(): graph_exec_id="test-exec-id", node_exec_id="test-node-exec-id", user_id="test-user-id", + graph_version=1, + execution_context=mock_execution_context, + execution_processor=mock_execution_processor, ): outputs[output_name] = output_data @@ -417,15 +449,13 @@ async def test_smart_decision_maker_parameter_validation(): mock_response_valid.reasoning = None mock_response_valid.raw_response = {"role": "assistant", "content": None} - from unittest.mock import AsyncMock - with patch( "backend.blocks.llm.llm_call", new_callable=AsyncMock, return_value=mock_response_valid, ), patch.object( SmartDecisionMakerBlock, - "_create_function_signature", + "_create_tool_node_signatures", new_callable=AsyncMock, return_value=mock_tool_functions, ): @@ -434,10 +464,19 @@ async def test_smart_decision_maker_parameter_validation(): prompt="Search for keywords", model=llm_module.LlmModel.GPT4O, credentials=llm_module.TEST_CREDENTIALS_INPUT, # type: ignore + agent_mode_max_iterations=0, ) # Should succeed - optional parameter missing is OK outputs = {} + # Create execution context + + mock_execution_context = ExecutionContext(safe_mode=False) + + # Create a mock execution processor for tests + + mock_execution_processor = MagicMock() + async for output_name, output_data in block.run( input_data, credentials=llm_module.TEST_CREDENTIALS, @@ -446,17 +485,20 @@ async def test_smart_decision_maker_parameter_validation(): graph_exec_id="test-exec-id", node_exec_id="test-node-exec-id", user_id="test-user-id", + graph_version=1, + execution_context=mock_execution_context, + execution_processor=mock_execution_processor, ): outputs[output_name] = output_data # Verify tool outputs were generated correctly - assert "tools_^_search_keywords_~_query" in outputs - assert outputs["tools_^_search_keywords_~_query"] == "test" - assert "tools_^_search_keywords_~_max_keyword_difficulty" in outputs - assert outputs["tools_^_search_keywords_~_max_keyword_difficulty"] == 50 + assert "tools_^_test-sink-node-id_~_query" in outputs + assert outputs["tools_^_test-sink-node-id_~_query"] == "test" + assert "tools_^_test-sink-node-id_~_max_keyword_difficulty" in outputs + assert outputs["tools_^_test-sink-node-id_~_max_keyword_difficulty"] == 50 # Optional parameter should be None when not provided - assert "tools_^_search_keywords_~_optional_param" in outputs - assert outputs["tools_^_search_keywords_~_optional_param"] is None + assert "tools_^_test-sink-node-id_~_optional_param" in outputs + assert outputs["tools_^_test-sink-node-id_~_optional_param"] is None # Test case 4: Valid tool call with ALL parameters (should succeed) mock_tool_call_all_params = MagicMock() @@ -471,15 +513,13 @@ async def 
test_smart_decision_maker_parameter_validation(): mock_response_all_params.reasoning = None mock_response_all_params.raw_response = {"role": "assistant", "content": None} - from unittest.mock import AsyncMock - with patch( "backend.blocks.llm.llm_call", new_callable=AsyncMock, return_value=mock_response_all_params, ), patch.object( SmartDecisionMakerBlock, - "_create_function_signature", + "_create_tool_node_signatures", new_callable=AsyncMock, return_value=mock_tool_functions, ): @@ -488,10 +528,19 @@ async def test_smart_decision_maker_parameter_validation(): prompt="Search for keywords", model=llm_module.LlmModel.GPT4O, credentials=llm_module.TEST_CREDENTIALS_INPUT, # type: ignore + agent_mode_max_iterations=0, ) # Should succeed with all parameters outputs = {} + # Create execution context + + mock_execution_context = ExecutionContext(safe_mode=False) + + # Create a mock execution processor for tests + + mock_execution_processor = MagicMock() + async for output_name, output_data in block.run( input_data, credentials=llm_module.TEST_CREDENTIALS, @@ -500,20 +549,21 @@ async def test_smart_decision_maker_parameter_validation(): graph_exec_id="test-exec-id", node_exec_id="test-node-exec-id", user_id="test-user-id", + graph_version=1, + execution_context=mock_execution_context, + execution_processor=mock_execution_processor, ): outputs[output_name] = output_data # Verify all tool outputs were generated correctly - assert outputs["tools_^_search_keywords_~_query"] == "test" - assert outputs["tools_^_search_keywords_~_max_keyword_difficulty"] == 50 - assert outputs["tools_^_search_keywords_~_optional_param"] == "custom_value" + assert outputs["tools_^_test-sink-node-id_~_query"] == "test" + assert outputs["tools_^_test-sink-node-id_~_max_keyword_difficulty"] == 50 + assert outputs["tools_^_test-sink-node-id_~_optional_param"] == "custom_value" @pytest.mark.asyncio async def test_smart_decision_maker_raw_response_conversion(): """Test that SmartDecisionMaker correctly handles different raw_response types with retry mechanism.""" - from unittest.mock import MagicMock, patch - import backend.blocks.llm as llm_module from backend.blocks.smart_decision_maker import SmartDecisionMakerBlock @@ -530,6 +580,7 @@ async def test_smart_decision_maker_raw_response_conversion(): "properties": {"param": {"type": "string"}}, "required": ["param"], }, + "_sink_node_id": "test-sink-node-id", }, } ] @@ -582,13 +633,12 @@ async def test_smart_decision_maker_raw_response_conversion(): ) # Mock llm_call to return different responses on different calls - from unittest.mock import AsyncMock with patch( "backend.blocks.llm.llm_call", new_callable=AsyncMock ) as mock_llm_call, patch.object( SmartDecisionMakerBlock, - "_create_function_signature", + "_create_tool_node_signatures", new_callable=AsyncMock, return_value=mock_tool_functions, ): @@ -601,10 +651,19 @@ async def test_smart_decision_maker_raw_response_conversion(): model=llm_module.LlmModel.GPT4O, credentials=llm_module.TEST_CREDENTIALS_INPUT, # type: ignore retry=2, + agent_mode_max_iterations=0, ) # Should succeed after retry, demonstrating our helper function works outputs = {} + # Create execution context + + mock_execution_context = ExecutionContext(safe_mode=False) + + # Create a mock execution processor for tests + + mock_execution_processor = MagicMock() + async for output_name, output_data in block.run( input_data, credentials=llm_module.TEST_CREDENTIALS, @@ -613,12 +672,15 @@ async def test_smart_decision_maker_raw_response_conversion(): 
graph_exec_id="test-exec-id", node_exec_id="test-node-exec-id", user_id="test-user-id", + graph_version=1, + execution_context=mock_execution_context, + execution_processor=mock_execution_processor, ): outputs[output_name] = output_data # Verify the tool output was generated successfully - assert "tools_^_test_tool_~_param" in outputs - assert outputs["tools_^_test_tool_~_param"] == "test_value" + assert "tools_^_test-sink-node-id_~_param" in outputs + assert outputs["tools_^_test-sink-node-id_~_param"] == "test_value" # Verify conversation history was properly maintained assert "conversations" in outputs @@ -648,15 +710,13 @@ async def test_smart_decision_maker_raw_response_conversion(): "I'll help you with that." # Ollama returns string ) - from unittest.mock import AsyncMock - with patch( "backend.blocks.llm.llm_call", new_callable=AsyncMock, return_value=mock_response_ollama, ), patch.object( SmartDecisionMakerBlock, - "_create_function_signature", + "_create_tool_node_signatures", new_callable=AsyncMock, return_value=[], # No tools for this test ): @@ -664,9 +724,18 @@ async def test_smart_decision_maker_raw_response_conversion(): prompt="Simple prompt", model=llm_module.LlmModel.GPT4O, credentials=llm_module.TEST_CREDENTIALS_INPUT, # type: ignore + agent_mode_max_iterations=0, ) outputs = {} + # Create execution context + + mock_execution_context = ExecutionContext(safe_mode=False) + + # Create a mock execution processor for tests + + mock_execution_processor = MagicMock() + async for output_name, output_data in block.run( input_data, credentials=llm_module.TEST_CREDENTIALS, @@ -675,6 +744,9 @@ async def test_smart_decision_maker_raw_response_conversion(): graph_exec_id="test-exec-id", node_exec_id="test-node-exec-id", user_id="test-user-id", + graph_version=1, + execution_context=mock_execution_context, + execution_processor=mock_execution_processor, ): outputs[output_name] = output_data @@ -694,15 +766,13 @@ async def test_smart_decision_maker_raw_response_conversion(): "content": "Test response", } # Dict format - from unittest.mock import AsyncMock - with patch( "backend.blocks.llm.llm_call", new_callable=AsyncMock, return_value=mock_response_dict, ), patch.object( SmartDecisionMakerBlock, - "_create_function_signature", + "_create_tool_node_signatures", new_callable=AsyncMock, return_value=[], ): @@ -710,6 +780,160 @@ async def test_smart_decision_maker_raw_response_conversion(): prompt="Another test", model=llm_module.LlmModel.GPT4O, credentials=llm_module.TEST_CREDENTIALS_INPUT, # type: ignore + agent_mode_max_iterations=0, + ) + + outputs = {} + # Create execution context + + mock_execution_context = ExecutionContext(safe_mode=False) + + # Create a mock execution processor for tests + + mock_execution_processor = MagicMock() + + async for output_name, output_data in block.run( + input_data, + credentials=llm_module.TEST_CREDENTIALS, + graph_id="test-graph-id", + node_id="test-node-id", + graph_exec_id="test-exec-id", + node_exec_id="test-node-exec-id", + user_id="test-user-id", + graph_version=1, + execution_context=mock_execution_context, + execution_processor=mock_execution_processor, + ): + outputs[output_name] = output_data + + assert "finished" in outputs + assert outputs["finished"] == "Test response" + + +@pytest.mark.asyncio +async def test_smart_decision_maker_agent_mode(): + """Test that agent mode executes tools directly and loops until finished.""" + import backend.blocks.llm as llm_module + from backend.blocks.smart_decision_maker import SmartDecisionMakerBlock + 
+ block = SmartDecisionMakerBlock() + + # Mock tool call that requires multiple iterations + mock_tool_call_1 = MagicMock() + mock_tool_call_1.id = "call_1" + mock_tool_call_1.function.name = "search_keywords" + mock_tool_call_1.function.arguments = ( + '{"query": "test", "max_keyword_difficulty": 50}' + ) + + mock_response_1 = MagicMock() + mock_response_1.response = None + mock_response_1.tool_calls = [mock_tool_call_1] + mock_response_1.prompt_tokens = 50 + mock_response_1.completion_tokens = 25 + mock_response_1.reasoning = "Using search tool" + mock_response_1.raw_response = { + "role": "assistant", + "content": None, + "tool_calls": [{"id": "call_1", "type": "function"}], + } + + # Final response with no tool calls (finished) + mock_response_2 = MagicMock() + mock_response_2.response = "Task completed successfully" + mock_response_2.tool_calls = [] + mock_response_2.prompt_tokens = 30 + mock_response_2.completion_tokens = 15 + mock_response_2.reasoning = None + mock_response_2.raw_response = { + "role": "assistant", + "content": "Task completed successfully", + } + + # Mock the LLM call to return different responses on each iteration + llm_call_mock = AsyncMock() + llm_call_mock.side_effect = [mock_response_1, mock_response_2] + + # Mock tool node signatures + mock_tool_signatures = [ + { + "type": "function", + "function": { + "name": "search_keywords", + "_sink_node_id": "test-sink-node-id", + "_field_mapping": {}, + "parameters": { + "properties": { + "query": {"type": "string"}, + "max_keyword_difficulty": {"type": "integer"}, + }, + "required": ["query", "max_keyword_difficulty"], + }, + }, + } + ] + + # Mock database and execution components + mock_db_client = AsyncMock() + mock_node = MagicMock() + mock_node.block_id = "test-block-id" + mock_db_client.get_node.return_value = mock_node + + # Mock upsert_execution_input to return proper NodeExecutionResult and input data + mock_node_exec_result = MagicMock() + mock_node_exec_result.node_exec_id = "test-tool-exec-id" + mock_input_data = {"query": "test", "max_keyword_difficulty": 50} + mock_db_client.upsert_execution_input.return_value = ( + mock_node_exec_result, + mock_input_data, + ) + + # No longer need mock_execute_node since we use execution_processor.on_node_execution + + with patch("backend.blocks.llm.llm_call", llm_call_mock), patch.object( + block, "_create_tool_node_signatures", return_value=mock_tool_signatures + ), patch( + "backend.blocks.smart_decision_maker.get_database_manager_async_client", + return_value=mock_db_client, + ), patch( + "backend.executor.manager.async_update_node_execution_status", + new_callable=AsyncMock, + ), patch( + "backend.integrations.creds_manager.IntegrationCredentialsManager" + ): + + # Create a mock execution context + + mock_execution_context = ExecutionContext( + safe_mode=False, + ) + + # Create a mock execution processor for agent mode tests + + mock_execution_processor = AsyncMock() + # Configure the execution processor mock with required attributes + mock_execution_processor.running_node_execution = defaultdict(MagicMock) + mock_execution_processor.execution_stats = MagicMock() + mock_execution_processor.execution_stats_lock = threading.Lock() + + # Mock the on_node_execution method to return successful stats + mock_node_stats = MagicMock() + mock_node_stats.error = None # No error + mock_execution_processor.on_node_execution = AsyncMock( + return_value=mock_node_stats + ) + + # Mock the get_execution_outputs_by_node_exec_id method + 
mock_db_client.get_execution_outputs_by_node_exec_id.return_value = { + "result": {"status": "success", "data": "search completed"} + } + + # Test agent mode with max_iterations = 3 + input_data = SmartDecisionMakerBlock.Input( + prompt="Complete this task using tools", + model=llm_module.LlmModel.GPT4O, + credentials=llm_module.TEST_CREDENTIALS_INPUT, # type: ignore + agent_mode_max_iterations=3, # Enable agent mode with 3 max iterations ) outputs = {} @@ -721,8 +945,115 @@ async def test_smart_decision_maker_raw_response_conversion(): graph_exec_id="test-exec-id", node_exec_id="test-node-exec-id", user_id="test-user-id", + graph_version=1, + execution_context=mock_execution_context, + execution_processor=mock_execution_processor, ): outputs[output_name] = output_data + # Verify agent mode behavior + assert "tool_functions" in outputs # tool_functions is yielded in both modes assert "finished" in outputs - assert outputs["finished"] == "Test response" + assert outputs["finished"] == "Task completed successfully" + assert "conversations" in outputs + + # Verify the conversation includes tool responses + conversations = outputs["conversations"] + assert len(conversations) > 2 # Should have multiple conversation entries + + # Verify LLM was called twice (once for tool call, once for finish) + assert llm_call_mock.call_count == 2 + + # Verify tool was executed via execution processor + assert mock_execution_processor.on_node_execution.call_count == 1 + + +@pytest.mark.asyncio +async def test_smart_decision_maker_traditional_mode_default(): + """Test that default behavior (agent_mode_max_iterations=0) works as traditional mode.""" + import backend.blocks.llm as llm_module + from backend.blocks.smart_decision_maker import SmartDecisionMakerBlock + + block = SmartDecisionMakerBlock() + + # Mock tool call + mock_tool_call = MagicMock() + mock_tool_call.function.name = "search_keywords" + mock_tool_call.function.arguments = ( + '{"query": "test", "max_keyword_difficulty": 50}' + ) + + mock_response = MagicMock() + mock_response.response = None + mock_response.tool_calls = [mock_tool_call] + mock_response.prompt_tokens = 50 + mock_response.completion_tokens = 25 + mock_response.reasoning = None + mock_response.raw_response = {"role": "assistant", "content": None} + + mock_tool_signatures = [ + { + "type": "function", + "function": { + "name": "search_keywords", + "_sink_node_id": "test-sink-node-id", + "_field_mapping": {}, + "parameters": { + "properties": { + "query": {"type": "string"}, + "max_keyword_difficulty": {"type": "integer"}, + }, + "required": ["query", "max_keyword_difficulty"], + }, + }, + } + ] + + with patch( + "backend.blocks.llm.llm_call", + new_callable=AsyncMock, + return_value=mock_response, + ), patch.object( + block, "_create_tool_node_signatures", return_value=mock_tool_signatures + ): + + # Test default behavior (traditional mode) + input_data = SmartDecisionMakerBlock.Input( + prompt="Test prompt", + model=llm_module.LlmModel.GPT4O, + credentials=llm_module.TEST_CREDENTIALS_INPUT, # type: ignore + agent_mode_max_iterations=0, # Traditional mode + ) + + # Create execution context + + mock_execution_context = ExecutionContext(safe_mode=False) + + # Create a mock execution processor for tests + + mock_execution_processor = MagicMock() + + outputs = {} + async for output_name, output_data in block.run( + input_data, + credentials=llm_module.TEST_CREDENTIALS, + graph_id="test-graph-id", + node_id="test-node-id", + graph_exec_id="test-exec-id", + 
node_exec_id="test-node-exec-id", + user_id="test-user-id", + graph_version=1, + execution_context=mock_execution_context, + execution_processor=mock_execution_processor, + ): + outputs[output_name] = output_data + + # Verify traditional mode behavior + assert ( + "tool_functions" in outputs + ) # Should yield tool_functions in traditional mode + assert ( + "tools_^_test-sink-node-id_~_query" in outputs + ) # Should yield individual tool parameters + assert "tools_^_test-sink-node-id_~_max_keyword_difficulty" in outputs + assert "conversations" in outputs diff --git a/autogpt_platform/backend/backend/blocks/test/test_smart_decision_maker_dynamic_fields.py b/autogpt_platform/backend/backend/blocks/test/test_smart_decision_maker_dynamic_fields.py index d51687712b..d6a0c0fe39 100644 --- a/autogpt_platform/backend/backend/blocks/test/test_smart_decision_maker_dynamic_fields.py +++ b/autogpt_platform/backend/backend/blocks/test/test_smart_decision_maker_dynamic_fields.py @@ -1,7 +1,7 @@ """Comprehensive tests for SmartDecisionMakerBlock dynamic field handling.""" import json -from unittest.mock import AsyncMock, Mock, patch +from unittest.mock import AsyncMock, MagicMock, Mock, patch import pytest @@ -192,7 +192,7 @@ async def test_create_block_function_signature_with_object_fields(): @pytest.mark.asyncio -async def test_create_function_signature(): +async def test_create_tool_node_signatures(): """Test that the mapping between sanitized and original field names is built correctly.""" block = SmartDecisionMakerBlock() @@ -241,7 +241,7 @@ async def test_create_function_signature(): ] # Call the method that builds signatures - tool_functions = await block._create_function_signature("test_node_id") + tool_functions = await block._create_tool_node_signatures("test_node_id") # Verify we got 2 tool functions (one for dict, one for list) assert len(tool_functions) == 2 @@ -308,10 +308,47 @@ async def test_output_yielding_with_dynamic_fields(): ) as mock_llm: mock_llm.return_value = mock_response - # Mock the function signature creation - with patch.object( - block, "_create_function_signature", new_callable=AsyncMock + # Mock the database manager to avoid HTTP calls during tool execution + with patch( + "backend.blocks.smart_decision_maker.get_database_manager_async_client" + ) as mock_db_manager, patch.object( + block, "_create_tool_node_signatures", new_callable=AsyncMock ) as mock_sig: + # Set up the mock database manager + mock_db_client = AsyncMock() + mock_db_manager.return_value = mock_db_client + + # Mock the node retrieval + mock_target_node = Mock() + mock_target_node.id = "test-sink-node-id" + mock_target_node.block_id = "CreateDictionaryBlock" + mock_target_node.block = Mock() + mock_target_node.block.name = "Create Dictionary" + mock_db_client.get_node.return_value = mock_target_node + + # Mock the execution result creation + mock_node_exec_result = Mock() + mock_node_exec_result.node_exec_id = "mock-node-exec-id" + mock_final_input_data = { + "values_#_name": "Alice", + "values_#_age": 30, + "values_#_email": "alice@example.com", + } + mock_db_client.upsert_execution_input.return_value = ( + mock_node_exec_result, + mock_final_input_data, + ) + + # Mock the output retrieval + mock_outputs = { + "values_#_name": "Alice", + "values_#_age": 30, + "values_#_email": "alice@example.com", + } + mock_db_client.get_execution_outputs_by_node_exec_id.return_value = ( + mock_outputs + ) + mock_sig.return_value = [ { "type": "function", @@ -325,6 +362,7 @@ async def 
test_output_yielding_with_dynamic_fields(): "values___email": {"type": "string"}, }, }, + "_sink_node_id": "test-sink-node-id", }, } ] @@ -336,10 +374,16 @@ async def test_output_yielding_with_dynamic_fields(): prompt="Create a user dictionary", credentials=llm.TEST_CREDENTIALS_INPUT, model=llm.LlmModel.GPT4O, + agent_mode_max_iterations=0, # Use traditional mode to test output yielding ) # Run the block outputs = {} + from backend.data.execution import ExecutionContext + + mock_execution_context = ExecutionContext(safe_mode=False) + mock_execution_processor = MagicMock() + async for output_name, output_value in block.run( input_data, credentials=llm.TEST_CREDENTIALS, @@ -348,19 +392,22 @@ async def test_output_yielding_with_dynamic_fields(): graph_exec_id="test_exec", node_exec_id="test_node_exec", user_id="test_user", + graph_version=1, + execution_context=mock_execution_context, + execution_processor=mock_execution_processor, ): outputs[output_name] = output_value - # Verify the outputs use sanitized field names (matching frontend normalizeToolName) - assert "tools_^_createdictionaryblock_~_values___name" in outputs - assert outputs["tools_^_createdictionaryblock_~_values___name"] == "Alice" + # Verify the outputs use sink node ID in output keys + assert "tools_^_test-sink-node-id_~_values___name" in outputs + assert outputs["tools_^_test-sink-node-id_~_values___name"] == "Alice" - assert "tools_^_createdictionaryblock_~_values___age" in outputs - assert outputs["tools_^_createdictionaryblock_~_values___age"] == 30 + assert "tools_^_test-sink-node-id_~_values___age" in outputs + assert outputs["tools_^_test-sink-node-id_~_values___age"] == 30 - assert "tools_^_createdictionaryblock_~_values___email" in outputs + assert "tools_^_test-sink-node-id_~_values___email" in outputs assert ( - outputs["tools_^_createdictionaryblock_~_values___email"] + outputs["tools_^_test-sink-node-id_~_values___email"] == "alice@example.com" ) @@ -488,7 +535,7 @@ async def test_validation_errors_dont_pollute_conversation(): # Mock the function signature creation with patch.object( - block, "_create_function_signature", new_callable=AsyncMock + block, "_create_tool_node_signatures", new_callable=AsyncMock ) as mock_sig: mock_sig.return_value = [ { @@ -505,49 +552,113 @@ async def test_validation_errors_dont_pollute_conversation(): }, "required": ["correct_param"], }, + "_sink_node_id": "test-sink-node-id", }, } ] - # Create input data - from backend.blocks import llm + # Mock the database manager to avoid HTTP calls during tool execution + with patch( + "backend.blocks.smart_decision_maker.get_database_manager_async_client" + ) as mock_db_manager: + # Set up the mock database manager for agent mode + mock_db_client = AsyncMock() + mock_db_manager.return_value = mock_db_client - input_data = block.input_schema( - prompt="Test prompt", - credentials=llm.TEST_CREDENTIALS_INPUT, - model=llm.LlmModel.GPT4O, - retry=3, # Allow retries - ) + # Mock the node retrieval + mock_target_node = Mock() + mock_target_node.id = "test-sink-node-id" + mock_target_node.block_id = "TestBlock" + mock_target_node.block = Mock() + mock_target_node.block.name = "Test Block" + mock_db_client.get_node.return_value = mock_target_node - # Run the block - outputs = {} - async for output_name, output_value in block.run( - input_data, - credentials=llm.TEST_CREDENTIALS, - graph_id="test_graph", - node_id="test_node", - graph_exec_id="test_exec", - node_exec_id="test_node_exec", - user_id="test_user", - ): - outputs[output_name] = 
output_value + # Mock the execution result creation + mock_node_exec_result = Mock() + mock_node_exec_result.node_exec_id = "mock-node-exec-id" + mock_final_input_data = {"correct_param": "value"} + mock_db_client.upsert_execution_input.return_value = ( + mock_node_exec_result, + mock_final_input_data, + ) - # Verify we had 2 LLM calls (initial + retry) - assert call_count == 2 + # Mock the output retrieval + mock_outputs = {"correct_param": "value"} + mock_db_client.get_execution_outputs_by_node_exec_id.return_value = ( + mock_outputs + ) - # Check the final conversation output - final_conversation = outputs.get("conversations", []) + # Create input data + from backend.blocks import llm - # The final conversation should NOT contain the validation error message - error_messages = [ - msg - for msg in final_conversation - if msg.get("role") == "user" - and "parameter errors" in msg.get("content", "") - ] - assert ( - len(error_messages) == 0 - ), "Validation error leaked into final conversation" + input_data = block.input_schema( + prompt="Test prompt", + credentials=llm.TEST_CREDENTIALS_INPUT, + model=llm.LlmModel.GPT4O, + retry=3, # Allow retries + agent_mode_max_iterations=1, + ) - # The final conversation should only have the successful response - assert final_conversation[-1]["content"] == "valid" + # Run the block + outputs = {} + from backend.data.execution import ExecutionContext + + mock_execution_context = ExecutionContext(safe_mode=False) + + # Create a proper mock execution processor for agent mode + from collections import defaultdict + + mock_execution_processor = AsyncMock() + mock_execution_processor.execution_stats = MagicMock() + mock_execution_processor.execution_stats_lock = MagicMock() + + # Create a mock NodeExecutionProgress for the sink node + mock_node_exec_progress = MagicMock() + mock_node_exec_progress.add_task = MagicMock() + mock_node_exec_progress.pop_output = MagicMock( + return_value=None + ) # No outputs to process + + # Set up running_node_execution as a defaultdict that returns our mock for any key + mock_execution_processor.running_node_execution = defaultdict( + lambda: mock_node_exec_progress + ) + + # Mock the on_node_execution method that gets called during tool execution + mock_node_stats = MagicMock() + mock_node_stats.error = None + mock_execution_processor.on_node_execution.return_value = ( + mock_node_stats + ) + + async for output_name, output_value in block.run( + input_data, + credentials=llm.TEST_CREDENTIALS, + graph_id="test_graph", + node_id="test_node", + graph_exec_id="test_exec", + node_exec_id="test_node_exec", + user_id="test_user", + graph_version=1, + execution_context=mock_execution_context, + execution_processor=mock_execution_processor, + ): + outputs[output_name] = output_value + + # Verify we had at least 1 LLM call + assert call_count >= 1 + + # Check the final conversation output + final_conversation = outputs.get("conversations", []) + + # The final conversation should NOT contain validation error messages + # Even if retries don't happen in agent mode, we should not leak errors + error_messages = [ + msg + for msg in final_conversation + if msg.get("role") == "user" + and "parameter errors" in msg.get("content", "") + ] + assert ( + len(error_messages) == 0 + ), "Validation error leaked into final conversation" diff --git a/autogpt_platform/backend/backend/blocks/text.py b/autogpt_platform/backend/backend/blocks/text.py index b6dae2c840..5e58e27101 100644 --- a/autogpt_platform/backend/backend/blocks/text.py +++ 
b/autogpt_platform/backend/backend/blocks/text.py @@ -4,7 +4,13 @@ from typing import Any import regex # Has built-in timeout support -from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema +from backend.data.block import ( + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, +) from backend.data.model import SchemaField from backend.util import json, text from backend.util.file import get_exec_file_path, store_media_file @@ -14,7 +20,7 @@ formatter = text.TextFormatter() class MatchTextPatternBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): text: Any = SchemaField(description="Text to match") match: str = SchemaField(description="Pattern (Regex) to match") data: Any = SchemaField(description="Data to be forwarded to output") @@ -23,7 +29,7 @@ class MatchTextPatternBlock(Block): ) dot_all: bool = SchemaField(description="Dot matches all", default=True) - class Output(BlockSchema): + class Output(BlockSchemaOutput): positive: Any = SchemaField(description="Output data if match is found") negative: Any = SchemaField(description="Output data if match is not found") @@ -68,7 +74,7 @@ class MatchTextPatternBlock(Block): class ExtractTextInformationBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): text: Any = SchemaField(description="Text to parse") pattern: str = SchemaField(description="Pattern (Regex) to parse") group: int = SchemaField(description="Group number to extract", default=0) @@ -78,7 +84,7 @@ class ExtractTextInformationBlock(Block): dot_all: bool = SchemaField(description="Dot matches all", default=True) find_all: bool = SchemaField(description="Find all matches", default=False) - class Output(BlockSchema): + class Output(BlockSchemaOutput): positive: str = SchemaField(description="Extracted text") negative: str = SchemaField(description="Original text") matched_results: list[str] = SchemaField(description="List of matched results") @@ -237,7 +243,7 @@ class ExtractTextInformationBlock(Block): class FillTextTemplateBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): values: dict[str, Any] = SchemaField( description="Values (dict) to be used in format. These values can be used by putting them in double curly braces in the format template. e.g. {{value_name}}.", ) @@ -250,7 +256,7 @@ class FillTextTemplateBlock(Block): description="Whether to escape special characters in the inserted values to be HTML-safe. 
Enable for HTML output, disable for plain text.", ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): output: str = SchemaField(description="Formatted text") def __init__(self): @@ -287,13 +293,13 @@ class FillTextTemplateBlock(Block): class CombineTextsBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): input: list[str] = SchemaField(description="text input to combine") delimiter: str = SchemaField( description="Delimiter to combine texts", default="" ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): output: str = SchemaField(description="Combined text") def __init__(self): @@ -319,14 +325,14 @@ class CombineTextsBlock(Block): class TextSplitBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): text: str = SchemaField(description="The text to split.") delimiter: str = SchemaField(description="The delimiter to split the text by.") strip: bool = SchemaField( description="Whether to strip the text.", default=True ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): texts: list[str] = SchemaField( description="The text split into a list of strings." ) @@ -359,12 +365,12 @@ class TextSplitBlock(Block): class TextReplaceBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): text: str = SchemaField(description="The text to replace.") old: str = SchemaField(description="The old text to replace.") new: str = SchemaField(description="The new text to replace with.") - class Output(BlockSchema): + class Output(BlockSchemaOutput): output: str = SchemaField(description="The text with the replaced text.") def __init__(self): @@ -387,7 +393,7 @@ class TextReplaceBlock(Block): class FileReadBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): file_input: MediaFileType = SchemaField( description="The file to read from (URL, data URI, or local path)" ) @@ -417,7 +423,7 @@ class FileReadBlock(Block): advanced=True, ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): content: str = SchemaField( description="File content, yielded as individual chunks when delimiter or size limits are applied" ) diff --git a/autogpt_platform/backend/backend/blocks/text_to_speech_block.py b/autogpt_platform/backend/backend/blocks/text_to_speech_block.py index f0b7c107de..8fe9e1cda7 100644 --- a/autogpt_platform/backend/backend/blocks/text_to_speech_block.py +++ b/autogpt_platform/backend/backend/blocks/text_to_speech_block.py @@ -2,7 +2,13 @@ from typing import Any, Literal from pydantic import SecretStr -from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema +from backend.data.block import ( + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, +) from backend.data.model import ( APIKeyCredentials, CredentialsField, @@ -28,7 +34,7 @@ TEST_CREDENTIALS_INPUT = { class UnrealTextToSpeechBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): text: str = SchemaField( description="The text to be converted to speech", placeholder="Enter the text you want to convert to speech", @@ -45,9 +51,8 @@ class UnrealTextToSpeechBlock(Block): "any API key with sufficient permissions for the blocks it is used on.", ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): mp3_url: str = SchemaField(description="The URL of the generated MP3 file") - error: str = SchemaField(description="Error message if the API call failed") def __init__(self): super().__init__( diff --git a/autogpt_platform/backend/backend/blocks/time_blocks.py 
b/autogpt_platform/backend/backend/blocks/time_blocks.py index 61fd7606f3..3a1f4c678e 100644 --- a/autogpt_platform/backend/backend/blocks/time_blocks.py +++ b/autogpt_platform/backend/backend/blocks/time_blocks.py @@ -7,8 +7,14 @@ from zoneinfo import ZoneInfo from pydantic import BaseModel -from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema -from backend.data.execution import UserContext +from backend.data.block import ( + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, +) +from backend.data.execution import ExecutionContext from backend.data.model import SchemaField # Shared timezone literal type for all time/date blocks @@ -131,7 +137,7 @@ class TimeISO8601Format(BaseModel): class GetCurrentTimeBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): trigger: str = SchemaField( description="Trigger any data to output the current time" ) @@ -141,7 +147,7 @@ class GetCurrentTimeBlock(Block): default=TimeStrftimeFormat(discriminator="strftime"), ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): time: str = SchemaField( description="Current time in the specified format (default: %H:%M:%S)" ) @@ -182,10 +188,9 @@ class GetCurrentTimeBlock(Block): ) async def run( - self, input_data: Input, *, user_context: UserContext, **kwargs + self, input_data: Input, *, execution_context: ExecutionContext, **kwargs ) -> BlockOutput: - # Extract timezone from user_context (always present) - effective_timezone = user_context.timezone + effective_timezone = execution_context.user_timezone # Get the appropriate timezone tz = _get_timezone(input_data.format_type, effective_timezone) @@ -221,7 +226,7 @@ class DateISO8601Format(BaseModel): class GetCurrentDateBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): trigger: str = SchemaField( description="Trigger any data to output the current date" ) @@ -236,7 +241,7 @@ class GetCurrentDateBlock(Block): default=DateStrftimeFormat(discriminator="strftime"), ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): date: str = SchemaField( description="Current date in the specified format (default: YYYY-MM-DD)" ) @@ -292,10 +297,10 @@ class GetCurrentDateBlock(Block): ], ) - async def run(self, input_data: Input, **kwargs) -> BlockOutput: - # Extract timezone from user_context (required keyword argument) - user_context: UserContext = kwargs["user_context"] - effective_timezone = user_context.timezone + async def run( + self, input_data: Input, *, execution_context: ExecutionContext, **kwargs + ) -> BlockOutput: + effective_timezone = execution_context.user_timezone try: offset = int(input_data.offset) @@ -332,7 +337,7 @@ class ISO8601Format(BaseModel): class GetCurrentDateAndTimeBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): trigger: str = SchemaField( description="Trigger any data to output the current date and time" ) @@ -342,7 +347,7 @@ class GetCurrentDateAndTimeBlock(Block): default=StrftimeFormat(discriminator="strftime"), ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): date_time: str = SchemaField( description="Current date and time in the specified format (default: YYYY-MM-DD HH:MM:SS)" ) @@ -398,10 +403,10 @@ class GetCurrentDateAndTimeBlock(Block): ], ) - async def run(self, input_data: Input, **kwargs) -> BlockOutput: - # Extract timezone from user_context (required keyword argument) - user_context: UserContext = kwargs["user_context"] - effective_timezone = user_context.timezone + async 
def run( + self, input_data: Input, *, execution_context: ExecutionContext, **kwargs + ) -> BlockOutput: + effective_timezone = execution_context.user_timezone # Get the appropriate timezone tz = _get_timezone(input_data.format_type, effective_timezone) @@ -419,7 +424,7 @@ class GetCurrentDateAndTimeBlock(Block): class CountdownTimerBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): input_message: Any = SchemaField( advanced=False, description="Message to output after the timer finishes", @@ -442,7 +447,7 @@ class CountdownTimerBlock(Block): default=1, ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): output_message: Any = SchemaField( description="Message after the timer finishes" ) diff --git a/autogpt_platform/backend/backend/blocks/todoist/comments.py b/autogpt_platform/backend/backend/blocks/todoist/comments.py index 703afb696f..f11534cbe3 100644 --- a/autogpt_platform/backend/backend/blocks/todoist/comments.py +++ b/autogpt_platform/backend/backend/blocks/todoist/comments.py @@ -12,7 +12,13 @@ from backend.blocks.todoist._auth import ( TodoistCredentialsField, TodoistCredentialsInput, ) -from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema +from backend.data.block import ( + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, +) from backend.data.model import SchemaField @@ -29,7 +35,7 @@ class ProjectId(BaseModel): class TodoistCreateCommentBlock(Block): """Creates a new comment on a Todoist task or project""" - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: TodoistCredentialsInput = TodoistCredentialsField([]) content: str = SchemaField(description="Comment content") id_type: Union[TaskId, ProjectId] = SchemaField( @@ -42,7 +48,7 @@ class TodoistCreateCommentBlock(Block): description="Optional file attachment", default=None ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): id: str = SchemaField(description="ID of created comment") content: str = SchemaField(description="Comment content") posted_at: str = SchemaField(description="Comment timestamp") @@ -53,8 +59,6 @@ class TodoistCreateCommentBlock(Block): description="Associated project ID", default=None ) - error: str = SchemaField(description="Error message if the request failed") - def __init__(self): super().__init__( id="1bba7e54-2310-4a31-8e6f-54d5f9ab7459", @@ -146,7 +150,7 @@ class TodoistCreateCommentBlock(Block): class TodoistGetCommentsBlock(Block): """Get all comments for a Todoist task or project""" - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: TodoistCredentialsInput = TodoistCredentialsField([]) id_type: Union[TaskId, ProjectId] = SchemaField( discriminator="discriminator", @@ -155,9 +159,8 @@ class TodoistGetCommentsBlock(Block): advanced=False, ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): comments: list = SchemaField(description="List of comments") - error: str = SchemaField(description="Error message if the request failed") def __init__(self): super().__init__( @@ -244,11 +247,11 @@ class TodoistGetCommentsBlock(Block): class TodoistGetCommentBlock(Block): """Get a single comment from Todoist using comment ID""" - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: TodoistCredentialsInput = TodoistCredentialsField([]) comment_id: str = SchemaField(description="Comment ID to retrieve") - class Output(BlockSchema): + class Output(BlockSchemaOutput): content: str = SchemaField(description="Comment content") id: str = 
SchemaField(description="Comment ID") posted_at: str = SchemaField(description="Comment timestamp") @@ -262,8 +265,6 @@ class TodoistGetCommentBlock(Block): description="Optional file attachment", default=None ) - error: str = SchemaField(description="Error message if the request failed") - def __init__(self): super().__init__( id="a809d264-ddf2-11ef-9764-32d3674e8b7e", @@ -334,14 +335,13 @@ class TodoistGetCommentBlock(Block): class TodoistUpdateCommentBlock(Block): """Updates a Todoist comment""" - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: TodoistCredentialsInput = TodoistCredentialsField([]) comment_id: str = SchemaField(description="Comment ID to update") content: str = SchemaField(description="New content for the comment") - class Output(BlockSchema): + class Output(BlockSchemaOutput): success: bool = SchemaField(description="Whether the update was successful") - error: str = SchemaField(description="Error message if the request failed") def __init__(self): super().__init__( @@ -394,13 +394,12 @@ class TodoistUpdateCommentBlock(Block): class TodoistDeleteCommentBlock(Block): """Deletes a Todoist comment""" - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: TodoistCredentialsInput = TodoistCredentialsField([]) comment_id: str = SchemaField(description="Comment ID to delete") - class Output(BlockSchema): + class Output(BlockSchemaOutput): success: bool = SchemaField(description="Whether the deletion was successful") - error: str = SchemaField(description="Error message if the request failed") def __init__(self): super().__init__( diff --git a/autogpt_platform/backend/backend/blocks/todoist/labels.py b/autogpt_platform/backend/backend/blocks/todoist/labels.py index 4700ebb6c0..8107459567 100644 --- a/autogpt_platform/backend/backend/blocks/todoist/labels.py +++ b/autogpt_platform/backend/backend/blocks/todoist/labels.py @@ -10,14 +10,20 @@ from backend.blocks.todoist._auth import ( TodoistCredentialsInput, ) from backend.blocks.todoist._types import Colors -from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema +from backend.data.block import ( + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, +) from backend.data.model import SchemaField class TodoistCreateLabelBlock(Block): """Creates a new label in Todoist""" - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: TodoistCredentialsInput = TodoistCredentialsField([]) name: str = SchemaField(description="Name of the label") order: Optional[int] = SchemaField(description="Label order", default=None) @@ -28,13 +34,12 @@ class TodoistCreateLabelBlock(Block): description="Whether the label is a favorite", default=False ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): id: str = SchemaField(description="ID of the created label") name: str = SchemaField(description="Name of the label") color: str = SchemaField(description="Color of the label") order: int = SchemaField(description="Label order") is_favorite: bool = SchemaField(description="Favorite status") - error: str = SchemaField(description="Error message if the request failed") def __init__(self): super().__init__( @@ -116,14 +121,13 @@ class TodoistCreateLabelBlock(Block): class TodoistListLabelsBlock(Block): """Gets all personal labels from Todoist""" - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: TodoistCredentialsInput = TodoistCredentialsField([]) - class Output(BlockSchema): + class Output(BlockSchemaOutput): 
labels: list = SchemaField(description="List of complete label data") label_ids: list = SchemaField(description="List of label IDs") label_names: list = SchemaField(description="List of label names") - error: str = SchemaField(description="Error message if the request failed") def __init__(self): super().__init__( @@ -194,19 +198,17 @@ class TodoistListLabelsBlock(Block): class TodoistGetLabelBlock(Block): """Gets a personal label from Todoist by ID""" - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: TodoistCredentialsInput = TodoistCredentialsField([]) label_id: str = SchemaField(description="ID of the label to retrieve") - class Output(BlockSchema): + class Output(BlockSchemaOutput): id: str = SchemaField(description="ID of the label") name: str = SchemaField(description="Name of the label") color: str = SchemaField(description="Color of the label") order: int = SchemaField(description="Label order") is_favorite: bool = SchemaField(description="Favorite status") - error: str = SchemaField(description="Error message if the request failed") - def __init__(self): super().__init__( id="7f236514-de14-11ef-bd7a-32d3674e8b7e", @@ -272,7 +274,7 @@ class TodoistGetLabelBlock(Block): class TodoistUpdateLabelBlock(Block): """Updates a personal label in Todoist using ID""" - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: TodoistCredentialsInput = TodoistCredentialsField([]) label_id: str = SchemaField(description="ID of the label to update") name: Optional[str] = SchemaField( @@ -286,9 +288,8 @@ class TodoistUpdateLabelBlock(Block): description="Whether the label is a favorite (true/false)", default=False ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): success: bool = SchemaField(description="Whether the update was successful") - error: str = SchemaField(description="Error message if the request failed") def __init__(self): super().__init__( @@ -354,13 +355,12 @@ class TodoistUpdateLabelBlock(Block): class TodoistDeleteLabelBlock(Block): """Deletes a personal label in Todoist""" - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: TodoistCredentialsInput = TodoistCredentialsField([]) label_id: str = SchemaField(description="ID of the label to delete") - class Output(BlockSchema): + class Output(BlockSchemaOutput): success: bool = SchemaField(description="Whether the deletion was successful") - error: str = SchemaField(description="Error message if the request failed") def __init__(self): super().__init__( @@ -407,12 +407,11 @@ class TodoistDeleteLabelBlock(Block): class TodoistGetSharedLabelsBlock(Block): """Gets all shared labels from Todoist""" - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: TodoistCredentialsInput = TodoistCredentialsField([]) - class Output(BlockSchema): + class Output(BlockSchemaOutput): labels: list = SchemaField(description="List of shared label names") - error: str = SchemaField(description="Error message if the request failed") def __init__(self): super().__init__( @@ -462,14 +461,13 @@ class TodoistGetSharedLabelsBlock(Block): class TodoistRenameSharedLabelsBlock(Block): """Renames all instances of a shared label""" - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: TodoistCredentialsInput = TodoistCredentialsField([]) name: str = SchemaField(description="The name of the existing label to rename") new_name: str = SchemaField(description="The new name for the label") - class Output(BlockSchema): + class Output(BlockSchemaOutput): success: 
bool = SchemaField(description="Whether the rename was successful") - error: str = SchemaField(description="Error message if the request failed") def __init__(self): super().__init__( @@ -519,13 +517,12 @@ class TodoistRenameSharedLabelsBlock(Block): class TodoistRemoveSharedLabelsBlock(Block): """Removes all instances of a shared label""" - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: TodoistCredentialsInput = TodoistCredentialsField([]) name: str = SchemaField(description="The name of the label to remove") - class Output(BlockSchema): + class Output(BlockSchemaOutput): success: bool = SchemaField(description="Whether the removal was successful") - error: str = SchemaField(description="Error message if the request failed") def __init__(self): super().__init__( diff --git a/autogpt_platform/backend/backend/blocks/todoist/projects.py b/autogpt_platform/backend/backend/blocks/todoist/projects.py index 33ad7950fa..c6d345c116 100644 --- a/autogpt_platform/backend/backend/blocks/todoist/projects.py +++ b/autogpt_platform/backend/backend/blocks/todoist/projects.py @@ -10,24 +10,29 @@ from backend.blocks.todoist._auth import ( TodoistCredentialsInput, ) from backend.blocks.todoist._types import Colors -from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema +from backend.data.block import ( + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, +) from backend.data.model import SchemaField class TodoistListProjectsBlock(Block): """Gets all projects for a Todoist user""" - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: TodoistCredentialsInput = TodoistCredentialsField([]) - class Output(BlockSchema): + class Output(BlockSchemaOutput): names_list: list[str] = SchemaField(description="List of project names") ids_list: list[str] = SchemaField(description="List of project IDs") url_list: list[str] = SchemaField(description="List of project URLs") complete_data: list[dict] = SchemaField( description="Complete project data including all fields" ) - error: str = SchemaField(description="Error message if request failed") def __init__(self): super().__init__( @@ -121,7 +126,7 @@ class TodoistListProjectsBlock(Block): class TodoistCreateProjectBlock(Block): """Creates a new project in Todoist""" - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: TodoistCredentialsInput = TodoistCredentialsField([]) name: str = SchemaField(description="Name of the project", advanced=False) parent_id: Optional[str] = SchemaField( @@ -141,9 +146,8 @@ class TodoistCreateProjectBlock(Block): description="Display style (list or board)", default=None, advanced=True ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): success: bool = SchemaField(description="Whether the creation was successful") - error: str = SchemaField(description="Error message if the request failed") def __init__(self): super().__init__( @@ -211,20 +215,19 @@ class TodoistCreateProjectBlock(Block): class TodoistGetProjectBlock(Block): """Gets details for a specific Todoist project""" - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: TodoistCredentialsInput = TodoistCredentialsField([]) project_id: str = SchemaField( description="ID of the project to get details for", advanced=False ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): project_id: str = SchemaField(description="ID of project") project_name: str = SchemaField(description="Name of project") project_url: str = 
SchemaField(description="URL of project") complete_data: dict = SchemaField( description="Complete project data including all fields" ) - error: str = SchemaField(description="Error message if the request failed") def __init__(self): super().__init__( @@ -305,7 +308,7 @@ class TodoistGetProjectBlock(Block): class TodoistUpdateProjectBlock(Block): """Updates an existing project in Todoist""" - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: TodoistCredentialsInput = TodoistCredentialsField([]) project_id: str = SchemaField( description="ID of project to update", advanced=False @@ -325,9 +328,8 @@ class TodoistUpdateProjectBlock(Block): description="Display style (list or board)", default=None, advanced=True ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): success: bool = SchemaField(description="Whether the update was successful") - error: str = SchemaField(description="Error message if the request failed") def __init__(self): super().__init__( @@ -401,15 +403,14 @@ class TodoistUpdateProjectBlock(Block): class TodoistDeleteProjectBlock(Block): """Deletes a project and all of its sections and tasks""" - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: TodoistCredentialsInput = TodoistCredentialsField([]) project_id: str = SchemaField( description="ID of project to delete", advanced=False ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): success: bool = SchemaField(description="Whether the deletion was successful") - error: str = SchemaField(description="Error message if the request failed") def __init__(self): super().__init__( @@ -459,13 +460,13 @@ class TodoistDeleteProjectBlock(Block): class TodoistListCollaboratorsBlock(Block): """Gets all collaborators for a Todoist project""" - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: TodoistCredentialsInput = TodoistCredentialsField([]) project_id: str = SchemaField( description="ID of the project to get collaborators for", advanced=False ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): collaborator_ids: list[str] = SchemaField( description="List of collaborator IDs" ) @@ -478,7 +479,6 @@ class TodoistListCollaboratorsBlock(Block): complete_data: list[dict] = SchemaField( description="Complete collaborator data including all fields" ) - error: str = SchemaField(description="Error message if the request failed") def __init__(self): super().__init__( diff --git a/autogpt_platform/backend/backend/blocks/todoist/sections.py b/autogpt_platform/backend/backend/blocks/todoist/sections.py index 764f7e166e..52dceb70b9 100644 --- a/autogpt_platform/backend/backend/blocks/todoist/sections.py +++ b/autogpt_platform/backend/backend/blocks/todoist/sections.py @@ -9,26 +9,31 @@ from backend.blocks.todoist._auth import ( TodoistCredentialsField, TodoistCredentialsInput, ) -from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema +from backend.data.block import ( + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, +) from backend.data.model import SchemaField class TodoistListSectionsBlock(Block): """Gets all sections for a Todoist project""" - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: TodoistCredentialsInput = TodoistCredentialsField([]) project_id: Optional[str] = SchemaField( description="Optional project ID to filter sections" ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): names_list: list[str] = SchemaField(description="List of section 
names") ids_list: list[str] = SchemaField(description="List of section IDs") complete_data: list[dict] = SchemaField( description="Complete section data including all fields" ) - error: str = SchemaField(description="Error message if the request failed") def __init__(self): super().__init__( @@ -123,13 +128,13 @@ class TodoistListSectionsBlock(Block): # class TodoistCreateSectionBlock(Block): # """Creates a new section in a Todoist project""" -# class Input(BlockSchema): +# class Input(BlockSchemaInput): # credentials: TodoistCredentialsInput = TodoistCredentialsField([]) # name: str = SchemaField(description="Section name") # project_id: str = SchemaField(description="Project ID this section should belong to") # order: Optional[int] = SchemaField(description="Optional order among other sections", default=None) -# class Output(BlockSchema): +# class Output(BlockSchemaOutput): # success: bool = SchemaField(description="Whether section was successfully created") # error: str = SchemaField(description="Error message if the request failed") @@ -191,16 +196,15 @@ class TodoistListSectionsBlock(Block): class TodoistGetSectionBlock(Block): """Gets a single section from Todoist by ID""" - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: TodoistCredentialsInput = TodoistCredentialsField([]) section_id: str = SchemaField(description="ID of section to fetch") - class Output(BlockSchema): + class Output(BlockSchemaOutput): id: str = SchemaField(description="ID of section") project_id: str = SchemaField(description="Project ID the section belongs to") order: int = SchemaField(description="Order of the section") name: str = SchemaField(description="Name of the section") - error: str = SchemaField(description="Error message if the request failed") def __init__(self): super().__init__( @@ -261,15 +265,14 @@ class TodoistGetSectionBlock(Block): class TodoistDeleteSectionBlock(Block): """Deletes a section and all its tasks from Todoist""" - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: TodoistCredentialsInput = TodoistCredentialsField([]) section_id: str = SchemaField(description="ID of section to delete") - class Output(BlockSchema): + class Output(BlockSchemaOutput): success: bool = SchemaField( description="Whether section was successfully deleted" ) - error: str = SchemaField(description="Error message if the request failed") def __init__(self): super().__init__( diff --git a/autogpt_platform/backend/backend/blocks/todoist/tasks.py b/autogpt_platform/backend/backend/blocks/todoist/tasks.py index d50124a9ef..183a3340b3 100644 --- a/autogpt_platform/backend/backend/blocks/todoist/tasks.py +++ b/autogpt_platform/backend/backend/blocks/todoist/tasks.py @@ -12,14 +12,20 @@ from backend.blocks.todoist._auth import ( TodoistCredentialsField, TodoistCredentialsInput, ) -from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema +from backend.data.block import ( + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, +) from backend.data.model import SchemaField class TodoistCreateTaskBlock(Block): """Creates a new task in a Todoist project""" - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: TodoistCredentialsInput = TodoistCredentialsField([]) content: str = SchemaField(description="Task content", advanced=False) description: Optional[str] = SchemaField( @@ -72,13 +78,12 @@ class TodoistCreateTaskBlock(Block): advanced=True, ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): id: 
str = SchemaField(description="Task ID") url: str = SchemaField(description="Task URL") complete_data: dict = SchemaField( description="Complete task data as dictionary" ) - error: str = SchemaField(description="Error message if request failed") def __init__(self): super().__init__( @@ -181,7 +186,7 @@ class TodoistCreateTaskBlock(Block): class TodoistGetTasksBlock(Block): """Get active tasks from Todoist""" - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: TodoistCredentialsInput = TodoistCredentialsField([]) project_id: Optional[str] = SchemaField( description="Filter tasks by project ID", default=None, advanced=False @@ -204,13 +209,12 @@ class TodoistGetTasksBlock(Block): description="List of task IDs to retrieve", default=None, advanced=False ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): ids: list[str] = SchemaField(description="Task IDs") urls: list[str] = SchemaField(description="Task URLs") complete_data: list[dict] = SchemaField( description="Complete task data as dictionary" ) - error: str = SchemaField(description="Error message if request failed") def __init__(self): super().__init__( @@ -293,17 +297,16 @@ class TodoistGetTasksBlock(Block): class TodoistGetTaskBlock(Block): """Get an active task from Todoist""" - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: TodoistCredentialsInput = TodoistCredentialsField([]) task_id: str = SchemaField(description="Task ID to retrieve") - class Output(BlockSchema): + class Output(BlockSchemaOutput): project_id: str = SchemaField(description="Project ID containing the task") url: str = SchemaField(description="Task URL") complete_data: dict = SchemaField( description="Complete task data as dictionary" ) - error: str = SchemaField(description="Error message if request failed") def __init__(self): super().__init__( @@ -367,7 +370,7 @@ class TodoistGetTaskBlock(Block): class TodoistUpdateTaskBlock(Block): """Updates an existing task in Todoist""" - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: TodoistCredentialsInput = TodoistCredentialsField([]) task_id: str = SchemaField(description="Task ID to update") content: str = SchemaField(description="Task content", advanced=False) @@ -421,9 +424,8 @@ class TodoistUpdateTaskBlock(Block): advanced=True, ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): success: bool = SchemaField(description="Whether the update was successful") - error: str = SchemaField(description="Error message if request failed") def __init__(self): super().__init__( @@ -506,15 +508,14 @@ class TodoistUpdateTaskBlock(Block): class TodoistCloseTaskBlock(Block): """Closes a task in Todoist""" - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: TodoistCredentialsInput = TodoistCredentialsField([]) task_id: str = SchemaField(description="Task ID to close") - class Output(BlockSchema): + class Output(BlockSchemaOutput): success: bool = SchemaField( description="Whether the task was successfully closed" ) - error: str = SchemaField(description="Error message if request failed") def __init__(self): super().__init__( @@ -557,15 +558,14 @@ class TodoistCloseTaskBlock(Block): class TodoistReopenTaskBlock(Block): """Reopens a task in Todoist""" - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: TodoistCredentialsInput = TodoistCredentialsField([]) task_id: str = SchemaField(description="Task ID to reopen") - class Output(BlockSchema): + class Output(BlockSchemaOutput): success: bool = 
SchemaField( description="Whether the task was successfully reopened" ) - error: str = SchemaField(description="Error message if request failed") def __init__(self): super().__init__( @@ -610,15 +610,14 @@ class TodoistReopenTaskBlock(Block): class TodoistDeleteTaskBlock(Block): """Deletes a task in Todoist""" - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: TodoistCredentialsInput = TodoistCredentialsField([]) task_id: str = SchemaField(description="Task ID to delete") - class Output(BlockSchema): + class Output(BlockSchemaOutput): success: bool = SchemaField( description="Whether the task was successfully deleted" ) - error: str = SchemaField(description="Error message if request failed") def __init__(self): super().__init__( diff --git a/autogpt_platform/backend/backend/blocks/twitter/_builders.py b/autogpt_platform/backend/backend/blocks/twitter/_builders.py index 6dc450c247..5f396b11ad 100644 --- a/autogpt_platform/backend/backend/blocks/twitter/_builders.py +++ b/autogpt_platform/backend/backend/blocks/twitter/_builders.py @@ -1,4 +1,4 @@ -from datetime import datetime +from datetime import datetime, timedelta, timezone from typing import Any, Dict from backend.blocks.twitter._mappers import ( @@ -237,6 +237,12 @@ class TweetDurationBuilder: def add_start_time(self, start_time: datetime | None): if start_time: + # Twitter API requires start_time to be at least 10 seconds before now + max_start_time = datetime.now(timezone.utc) - timedelta(seconds=10) + if start_time.tzinfo is None: + start_time = start_time.replace(tzinfo=timezone.utc) + if start_time > max_start_time: + start_time = max_start_time self.params["start_time"] = start_time return self diff --git a/autogpt_platform/backend/backend/blocks/twitter/_serializer.py b/autogpt_platform/backend/backend/blocks/twitter/_serializer.py index 906c524456..bb570d995d 100644 --- a/autogpt_platform/backend/backend/blocks/twitter/_serializer.py +++ b/autogpt_platform/backend/backend/blocks/twitter/_serializer.py @@ -51,8 +51,10 @@ class ResponseDataSerializer(BaseSerializer): return serialized_item @classmethod - def serialize_list(cls, data: List[Dict[str, Any]]) -> List[Dict[str, Any]]: + def serialize_list(cls, data: List[Dict[str, Any]] | None) -> List[Dict[str, Any]]: """Serializes a list of dictionary items""" + if not data: + return [] return [cls.serialize_dict(item) for item in data] diff --git a/autogpt_platform/backend/backend/blocks/twitter/_types.py b/autogpt_platform/backend/backend/blocks/twitter/_types.py index 2b404e4f56..88050ed545 100644 --- a/autogpt_platform/backend/backend/blocks/twitter/_types.py +++ b/autogpt_platform/backend/backend/blocks/twitter/_types.py @@ -3,7 +3,7 @@ from enum import Enum from pydantic import BaseModel -from backend.data.block import BlockSchema +from backend.data.block import BlockSchemaInput from backend.data.model import SchemaField # -------------- Tweets ----------------- @@ -255,7 +255,7 @@ class ListFieldsFilter(BaseModel): # --------- [Input Types] ------------- -class TweetExpansionInputs(BlockSchema): +class TweetExpansionInputs(BlockSchemaInput): expansions: ExpansionFilter | None = SchemaField( description="Choose what extra information you want to get with your tweets. 
For example:\n- Select 'Media_Keys' to get media details\n- Select 'Author_User_ID' to get user information\n- Select 'Place_ID' to get location details", @@ -300,7 +300,7 @@ class TweetExpansionInputs(BlockSchema): ) -class DMEventExpansionInputs(BlockSchema): +class DMEventExpansionInputs(BlockSchemaInput): expansions: DMEventExpansionFilter | None = SchemaField( description="Select expansions to include related data objects in the 'includes' section.", placeholder="Enter expansions", @@ -337,7 +337,7 @@ class DMEventExpansionInputs(BlockSchema): ) -class UserExpansionInputs(BlockSchema): +class UserExpansionInputs(BlockSchemaInput): expansions: UserExpansionsFilter | None = SchemaField( description="Choose what extra information you want to get with user data. Currently only 'pinned_tweet_id' is available to see a user's pinned tweet.", placeholder="Select extra user information to include", @@ -360,7 +360,7 @@ class UserExpansionInputs(BlockSchema): ) -class SpaceExpansionInputs(BlockSchema): +class SpaceExpansionInputs(BlockSchemaInput): expansions: SpaceExpansionsFilter | None = SchemaField( description="Choose additional information you want to get with your Twitter Spaces:\n- Select 'Invited_Users' to see who was invited\n- Select 'Speakers' to see who can speak\n- Select 'Creator' to get details about who made the Space\n- Select 'Hosts' to see who's hosting\n- Select 'Topics' to see Space topics", placeholder="Pick what extra information you want to see about the Space", @@ -383,7 +383,7 @@ class SpaceExpansionInputs(BlockSchema): ) -class ListExpansionInputs(BlockSchema): +class ListExpansionInputs(BlockSchemaInput): expansions: ListExpansionsFilter | None = SchemaField( description="Choose what extra information you want to get with your Twitter Lists:\n- Select 'List_Owner_ID' to get details about who owns the list\n\nThis will let you see more details about the list owner when you also select user fields below.", placeholder="Pick what extra list information you want to see", @@ -406,9 +406,9 @@ class ListExpansionInputs(BlockSchema): ) -class TweetTimeWindowInputs(BlockSchema): +class TweetTimeWindowInputs(BlockSchemaInput): start_time: datetime | None = SchemaField( - description="Start time in YYYY-MM-DDTHH:mm:ssZ format", + description="Start time in YYYY-MM-DDTHH:mm:ssZ format. 
If set to a time less than 10 seconds ago, it will be automatically adjusted to 10 seconds ago (Twitter API requirement).", placeholder="Enter start time", default=None, advanced=False, diff --git a/autogpt_platform/backend/backend/blocks/twitter/direct_message/direct_message_lookup.py b/autogpt_platform/backend/backend/blocks/twitter/direct_message/direct_message_lookup.py index 99c5bcab79..0ce8e08535 100644 --- a/autogpt_platform/backend/backend/blocks/twitter/direct_message/direct_message_lookup.py +++ b/autogpt_platform/backend/backend/blocks/twitter/direct_message/direct_message_lookup.py @@ -5,7 +5,7 @@ # from tweepy.client import Response # from backend.blocks.twitter._serializer import IncludesSerializer, ResponseDataSerializer -# from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema +# from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema, BlockSchemaInput, BlockSchemaOutput # from backend.data.model import SchemaField # from backend.blocks.twitter._builders import DMExpansionsBuilder # from backend.blocks.twitter._types import DMEventExpansion, DMEventExpansionInputs, DMEventType, DMMediaField, DMTweetField, TweetUserFields @@ -49,7 +49,7 @@ # default="" # ) -# class Output(BlockSchema): +# class Output(BlockSchemaOutput): # # Common outputs # event_ids: list[str] = SchemaField(description="DM Event IDs") # event_texts: list[str] = SchemaField(description="DM Event text contents") diff --git a/autogpt_platform/backend/backend/blocks/twitter/direct_message/manage_direct_message.py b/autogpt_platform/backend/backend/blocks/twitter/direct_message/manage_direct_message.py index 19fdb2819f..cbbe019f37 100644 --- a/autogpt_platform/backend/backend/blocks/twitter/direct_message/manage_direct_message.py +++ b/autogpt_platform/backend/backend/blocks/twitter/direct_message/manage_direct_message.py @@ -5,7 +5,7 @@ # import tweepy # from tweepy.client import Response -# from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema +# from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema, BlockSchemaInput, BlockSchemaOutput # from backend.data.model import SchemaField # from backend.blocks.twitter.tweepy_exceptions import handle_tweepy_exception # from backend.blocks.twitter._auth import ( @@ -22,7 +22,7 @@ # Sends a direct message to a Twitter user # """ -# class Input(BlockSchema): +# class Input(BlockSchemaInput): # credentials: TwitterCredentialsInput = TwitterCredentialsField( # ["offline.access", "direct_messages.write"] # ) @@ -54,7 +54,7 @@ # default="" # ) -# class Output(BlockSchema): +# class Output(BlockSchemaOutput): # dm_event_id: str = SchemaField(description="ID of the sent direct message") # dm_conversation_id_: str = SchemaField(description="ID of the conversation") # error: str = SchemaField(description="Error message if sending failed") @@ -148,7 +148,7 @@ # Creates a new group direct message conversation on Twitter # """ -# class Input(BlockSchema): +# class Input(BlockSchemaInput): # credentials: TwitterCredentialsInput = TwitterCredentialsField( # ["offline.access", "dm.write","dm.read","tweet.read","user.read"] # ) @@ -174,7 +174,7 @@ # advanced=False # ) -# class Output(BlockSchema): +# class Output(BlockSchemaOutput): # dm_event_id: str = SchemaField(description="ID of the sent direct message") # dm_conversation_id: str = SchemaField(description="ID of the conversation") # error: str = SchemaField(description="Error message if sending failed") diff --git 
a/autogpt_platform/backend/backend/blocks/twitter/lists/list_follows.py b/autogpt_platform/backend/backend/blocks/twitter/lists/list_follows.py index 62c6c05f0c..5616e0ce14 100644 --- a/autogpt_platform/backend/backend/blocks/twitter/lists/list_follows.py +++ b/autogpt_platform/backend/backend/blocks/twitter/lists/list_follows.py @@ -13,7 +13,13 @@ from backend.blocks.twitter._auth import ( # from backend.blocks.twitter._builders import UserExpansionsBuilder # from backend.blocks.twitter._types import TweetFields, TweetUserFields, UserExpansionInputs, UserExpansions from backend.blocks.twitter.tweepy_exceptions import handle_tweepy_exception -from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema +from backend.data.block import ( + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, +) from backend.data.model import SchemaField # from tweepy.client import Response @@ -24,7 +30,7 @@ class TwitterUnfollowListBlock(Block): Unfollows a Twitter list for the authenticated user """ - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: TwitterCredentialsInput = TwitterCredentialsField( ["follows.write", "offline.access"] ) @@ -34,9 +40,8 @@ class TwitterUnfollowListBlock(Block): placeholder="Enter list ID", ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): success: bool = SchemaField(description="Whether the unfollow was successful") - error: str = SchemaField(description="Error message if the request failed") def __init__(self): super().__init__( @@ -87,7 +92,7 @@ class TwitterFollowListBlock(Block): Follows a Twitter list for the authenticated user """ - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: TwitterCredentialsInput = TwitterCredentialsField( ["tweet.read", "users.read", "list.write", "offline.access"] ) @@ -97,9 +102,8 @@ class TwitterFollowListBlock(Block): placeholder="Enter list ID", ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): success: bool = SchemaField(description="Whether the follow was successful") - error: str = SchemaField(description="Error message if the request failed") def __init__(self): super().__init__( @@ -178,7 +182,7 @@ class TwitterFollowListBlock(Block): # advanced=True, # ) -# class Output(BlockSchema): +# class Output(BlockSchemaOutput): # user_ids: list[str] = SchemaField(description="List of user IDs of followers") # usernames: list[str] = SchemaField(description="List of usernames of followers") # next_token: str = SchemaField(description="Token for next page of results") @@ -340,7 +344,7 @@ class TwitterFollowListBlock(Block): # advanced=True, # ) -# class Output(BlockSchema): +# class Output(BlockSchemaOutput): # list_ids: list[str] = SchemaField(description="List of list IDs") # list_names: list[str] = SchemaField(description="List of list names") # data: list[dict] = SchemaField(description="Complete list data") diff --git a/autogpt_platform/backend/backend/blocks/twitter/lists/list_lookup.py b/autogpt_platform/backend/backend/blocks/twitter/lists/list_lookup.py index 6dbaf2b23d..6b46f00a37 100644 --- a/autogpt_platform/backend/backend/blocks/twitter/lists/list_lookup.py +++ b/autogpt_platform/backend/backend/blocks/twitter/lists/list_lookup.py @@ -23,7 +23,7 @@ from backend.blocks.twitter._types import ( TweetUserFieldsFilter, ) from backend.blocks.twitter.tweepy_exceptions import handle_tweepy_exception -from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema +from backend.data.block import Block, 
BlockCategory, BlockOutput, BlockSchemaOutput from backend.data.model import SchemaField @@ -42,7 +42,7 @@ class TwitterGetListBlock(Block): placeholder="Enter list ID", ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): # Common outputs id: str = SchemaField(description="ID of the Twitter List") name: str = SchemaField(description="Name of the Twitter List") @@ -55,7 +55,6 @@ class TwitterGetListBlock(Block): description="Additional data requested via expansions" ) meta: dict = SchemaField(description="Metadata about the response") - error: str = SchemaField(description="Error message if the request failed") def __init__(self): super().__init__( @@ -201,7 +200,7 @@ class TwitterGetOwnedListsBlock(Block): default="", ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): # Common outputs list_ids: list[str] = SchemaField(description="List ids of the owned lists") list_names: list[str] = SchemaField(description="List names of the owned lists") @@ -213,7 +212,6 @@ class TwitterGetOwnedListsBlock(Block): description="Additional data requested via expansions" ) meta: dict = SchemaField(description="Metadata about the response") - error: str = SchemaField(description="Error message if the request failed") def __init__(self): super().__init__( diff --git a/autogpt_platform/backend/backend/blocks/twitter/lists/list_members.py b/autogpt_platform/backend/backend/blocks/twitter/lists/list_members.py index 9bcd8f15a2..32ffb9e5b6 100644 --- a/autogpt_platform/backend/backend/blocks/twitter/lists/list_members.py +++ b/autogpt_platform/backend/backend/blocks/twitter/lists/list_members.py @@ -29,7 +29,13 @@ from backend.blocks.twitter._types import ( UserExpansionsFilter, ) from backend.blocks.twitter.tweepy_exceptions import handle_tweepy_exception -from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema +from backend.data.block import ( + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, +) from backend.data.model import SchemaField @@ -38,7 +44,7 @@ class TwitterRemoveListMemberBlock(Block): Removes a member from a Twitter List that the authenticated user owns """ - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: TwitterCredentialsInput = TwitterCredentialsField( ["list.write", "users.read", "tweet.read", "offline.access"] ) @@ -53,11 +59,10 @@ class TwitterRemoveListMemberBlock(Block): placeholder="Enter user ID to remove", ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): success: bool = SchemaField( description="Whether the member was successfully removed" ) - error: str = SchemaField(description="Error message if the removal failed") def __init__(self): super().__init__( @@ -112,7 +117,7 @@ class TwitterAddListMemberBlock(Block): Adds a member to a Twitter List that the authenticated user owns """ - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: TwitterCredentialsInput = TwitterCredentialsField( ["list.write", "users.read", "tweet.read", "offline.access"] ) @@ -127,11 +132,10 @@ class TwitterAddListMemberBlock(Block): placeholder="Enter user ID to add", ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): success: bool = SchemaField( description="Whether the member was successfully added" ) - error: str = SchemaField(description="Error message if the addition failed") def __init__(self): super().__init__( @@ -210,7 +214,7 @@ class TwitterGetListMembersBlock(Block): advanced=True, ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): 
ids: list[str] = SchemaField(description="List of member user IDs") usernames: list[str] = SchemaField(description="List of member usernames") next_token: str = SchemaField(description="Next token for pagination") @@ -223,8 +227,6 @@ class TwitterGetListMembersBlock(Block): ) meta: dict = SchemaField(description="Metadata including pagination info") - error: str = SchemaField(description="Error message if the request failed") - def __init__(self): super().__init__( id="4dba046e-a62f-11ef-b69a-87240c84b4c7", @@ -391,7 +393,7 @@ class TwitterGetListMembershipsBlock(Block): default="", ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): list_ids: list[str] = SchemaField(description="List of list IDs") next_token: str = SchemaField(description="Next token for pagination") @@ -400,7 +402,6 @@ class TwitterGetListMembershipsBlock(Block): description="Additional data requested via expansions" ) meta: dict = SchemaField(description="Metadata about pagination") - error: str = SchemaField(description="Error message if the request failed") def __init__(self): super().__init__( diff --git a/autogpt_platform/backend/backend/blocks/twitter/lists/list_tweets_lookup.py b/autogpt_platform/backend/backend/blocks/twitter/lists/list_tweets_lookup.py index bda25e1d2d..e43980683e 100644 --- a/autogpt_platform/backend/backend/blocks/twitter/lists/list_tweets_lookup.py +++ b/autogpt_platform/backend/backend/blocks/twitter/lists/list_tweets_lookup.py @@ -26,7 +26,7 @@ from backend.blocks.twitter._types import ( TweetUserFieldsFilter, ) from backend.blocks.twitter.tweepy_exceptions import handle_tweepy_exception -from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema +from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchemaOutput from backend.data.model import SchemaField @@ -59,7 +59,7 @@ class TwitterGetListTweetsBlock(Block): advanced=True, ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): # Common outputs tweet_ids: list[str] = SchemaField(description="List of tweet IDs") texts: list[str] = SchemaField(description="List of tweet texts") @@ -73,7 +73,6 @@ class TwitterGetListTweetsBlock(Block): meta: dict = SchemaField( description="Response metadata including pagination tokens" ) - error: str = SchemaField(description="Error message if the request failed") def __init__(self): super().__init__( diff --git a/autogpt_platform/backend/backend/blocks/twitter/lists/manage_lists.py b/autogpt_platform/backend/backend/blocks/twitter/lists/manage_lists.py index 2ba8158f9c..4092fbaa93 100644 --- a/autogpt_platform/backend/backend/blocks/twitter/lists/manage_lists.py +++ b/autogpt_platform/backend/backend/blocks/twitter/lists/manage_lists.py @@ -12,7 +12,13 @@ from backend.blocks.twitter._auth import ( TwitterCredentialsInput, ) from backend.blocks.twitter.tweepy_exceptions import handle_tweepy_exception -from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema +from backend.data.block import ( + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, +) from backend.data.model import SchemaField @@ -21,7 +27,7 @@ class TwitterDeleteListBlock(Block): Deletes a Twitter List owned by the authenticated user """ - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: TwitterCredentialsInput = TwitterCredentialsField( ["list.write", "offline.access"] ) @@ -31,9 +37,8 @@ class TwitterDeleteListBlock(Block): placeholder="Enter list ID", ) - class Output(BlockSchema): + class 
Output(BlockSchemaOutput): success: bool = SchemaField(description="Whether the deletion was successful") - error: str = SchemaField(description="Error message if the request failed") def __init__(self): super().__init__( @@ -84,7 +89,7 @@ class TwitterUpdateListBlock(Block): Updates a Twitter List owned by the authenticated user """ - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: TwitterCredentialsInput = TwitterCredentialsField( ["list.write", "offline.access"] ) @@ -109,9 +114,8 @@ class TwitterUpdateListBlock(Block): advanced=False, ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): success: bool = SchemaField(description="Whether the update was successful") - error: str = SchemaField(description="Error message if the request failed") def __init__(self): super().__init__( @@ -180,7 +184,7 @@ class TwitterCreateListBlock(Block): Creates a Twitter List owned by the authenticated user """ - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: TwitterCredentialsInput = TwitterCredentialsField( ["list.write", "offline.access"] ) @@ -205,10 +209,9 @@ class TwitterCreateListBlock(Block): default=False, ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): url: str = SchemaField(description="URL of the created list") list_id: str = SchemaField(description="ID of the created list") - error: str = SchemaField(description="Error message if the request failed") def __init__(self): super().__init__( diff --git a/autogpt_platform/backend/backend/blocks/twitter/lists/pinned_lists.py b/autogpt_platform/backend/backend/blocks/twitter/lists/pinned_lists.py index a31d1059f6..7bc5bb543f 100644 --- a/autogpt_platform/backend/backend/blocks/twitter/lists/pinned_lists.py +++ b/autogpt_platform/backend/backend/blocks/twitter/lists/pinned_lists.py @@ -23,7 +23,13 @@ from backend.blocks.twitter._types import ( TweetUserFieldsFilter, ) from backend.blocks.twitter.tweepy_exceptions import handle_tweepy_exception -from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema +from backend.data.block import ( + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, +) from backend.data.model import SchemaField @@ -32,7 +38,7 @@ class TwitterUnpinListBlock(Block): Enables the authenticated user to unpin a List. """ - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: TwitterCredentialsInput = TwitterCredentialsField( ["list.write", "users.read", "tweet.read", "offline.access"] ) @@ -42,9 +48,8 @@ class TwitterUnpinListBlock(Block): placeholder="Enter list ID", ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): success: bool = SchemaField(description="Whether the unpin was successful") - error: str = SchemaField(description="Error message if the request failed") def __init__(self): super().__init__( @@ -96,7 +101,7 @@ class TwitterPinListBlock(Block): Enables the authenticated user to pin a List. 
""" - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: TwitterCredentialsInput = TwitterCredentialsField( ["list.write", "users.read", "tweet.read", "offline.access"] ) @@ -106,9 +111,8 @@ class TwitterPinListBlock(Block): placeholder="Enter list ID", ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): success: bool = SchemaField(description="Whether the pin was successful") - error: str = SchemaField(description="Error message if the request failed") def __init__(self): super().__init__( @@ -165,7 +169,7 @@ class TwitterGetPinnedListsBlock(Block): ["lists.read", "users.read", "offline.access"] ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): list_ids: list[str] = SchemaField(description="List IDs of the pinned lists") list_names: list[str] = SchemaField( description="List names of the pinned lists" @@ -178,7 +182,6 @@ class TwitterGetPinnedListsBlock(Block): description="Additional data requested via expansions" ) meta: dict = SchemaField(description="Metadata about the response") - error: str = SchemaField(description="Error message if the request failed") def __init__(self): super().__init__( diff --git a/autogpt_platform/backend/backend/blocks/twitter/spaces/search_spaces.py b/autogpt_platform/backend/backend/blocks/twitter/spaces/search_spaces.py index 77b28fa654..bd013cecc1 100644 --- a/autogpt_platform/backend/backend/blocks/twitter/spaces/search_spaces.py +++ b/autogpt_platform/backend/backend/blocks/twitter/spaces/search_spaces.py @@ -24,7 +24,7 @@ from backend.blocks.twitter._types import ( TweetUserFieldsFilter, ) from backend.blocks.twitter.tweepy_exceptions import handle_tweepy_exception -from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema +from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchemaOutput from backend.data.model import SchemaField @@ -56,7 +56,7 @@ class TwitterSearchSpacesBlock(Block): default=SpaceStatesFilter.all, ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): # Common outputs that user commonly uses ids: list[str] = SchemaField(description="List of space IDs") titles: list[str] = SchemaField(description="List of space titles") @@ -70,8 +70,6 @@ class TwitterSearchSpacesBlock(Block): ) meta: dict = SchemaField(description="Metadata including pagination info") - error: str = SchemaField(description="Error message if the request failed") - def __init__(self): super().__init__( id="aaefdd48-a62f-11ef-a73c-3f44df63e276", diff --git a/autogpt_platform/backend/backend/blocks/twitter/spaces/spaces_lookup.py b/autogpt_platform/backend/backend/blocks/twitter/spaces/spaces_lookup.py index d4ff5459e4..2c99d3ba3a 100644 --- a/autogpt_platform/backend/backend/blocks/twitter/spaces/spaces_lookup.py +++ b/autogpt_platform/backend/backend/blocks/twitter/spaces/spaces_lookup.py @@ -36,7 +36,7 @@ from backend.blocks.twitter._types import ( UserExpansionsFilter, ) from backend.blocks.twitter.tweepy_exceptions import handle_tweepy_exception -from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema +from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchemaOutput from backend.data.model import SchemaField @@ -76,7 +76,7 @@ class TwitterGetSpacesBlock(Block): advanced=False, ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): # Common outputs ids: list[str] = SchemaField(description="List of space IDs") titles: list[str] = SchemaField(description="List of space titles") @@ -86,7 +86,6 @@ class 
TwitterGetSpacesBlock(Block): includes: dict = SchemaField( description="Additional data requested via expansions" ) - error: str = SchemaField(description="Error message if the request failed") def __init__(self): super().__init__( @@ -231,7 +230,7 @@ class TwitterGetSpaceByIdBlock(Block): placeholder="Enter Space ID", ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): # Common outputs id: str = SchemaField(description="Space ID") title: str = SchemaField(description="Space title") @@ -242,7 +241,6 @@ class TwitterGetSpaceByIdBlock(Block): includes: dict = SchemaField( description="Additional data requested via expansions" ) - error: str = SchemaField(description="Error message if the request failed") def __init__(self): super().__init__( @@ -393,7 +391,7 @@ class TwitterGetSpaceBuyersBlock(Block): placeholder="Enter Space ID", ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): # Common outputs buyer_ids: list[str] = SchemaField(description="List of buyer IDs") usernames: list[str] = SchemaField(description="List of buyer usernames") @@ -403,7 +401,6 @@ class TwitterGetSpaceBuyersBlock(Block): includes: dict = SchemaField( description="Additional data requested via expansions" ) - error: str = SchemaField(description="Error message if the request failed") def __init__(self): super().__init__( @@ -521,7 +518,7 @@ class TwitterGetSpaceTweetsBlock(Block): placeholder="Enter Space ID", ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): # Common outputs tweet_ids: list[str] = SchemaField(description="List of tweet IDs") texts: list[str] = SchemaField(description="List of tweet texts") @@ -532,7 +529,6 @@ class TwitterGetSpaceTweetsBlock(Block): description="Additional data requested via expansions" ) meta: dict = SchemaField(description="Response metadata") - error: str = SchemaField(description="Error message if the request failed") def __init__(self): super().__init__( diff --git a/autogpt_platform/backend/backend/blocks/twitter/tweets/bookmark.py b/autogpt_platform/backend/backend/blocks/twitter/tweets/bookmark.py index ec8976fc2f..b69002837e 100644 --- a/autogpt_platform/backend/backend/blocks/twitter/tweets/bookmark.py +++ b/autogpt_platform/backend/backend/blocks/twitter/tweets/bookmark.py @@ -26,7 +26,13 @@ from backend.blocks.twitter._types import ( TweetUserFieldsFilter, ) from backend.blocks.twitter.tweepy_exceptions import handle_tweepy_exception -from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema +from backend.data.block import ( + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, +) from backend.data.model import SchemaField @@ -35,7 +41,7 @@ class TwitterBookmarkTweetBlock(Block): Bookmark a tweet on Twitter """ - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: TwitterCredentialsInput = TwitterCredentialsField( ["tweet.read", "bookmark.write", "users.read", "offline.access"] ) @@ -45,9 +51,8 @@ class TwitterBookmarkTweetBlock(Block): placeholder="Enter tweet ID", ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): success: bool = SchemaField(description="Whether the bookmark was successful") - error: str = SchemaField(description="Error message if the bookmark failed") def __init__(self): super().__init__( @@ -123,7 +128,7 @@ class TwitterGetBookmarkedTweetsBlock(Block): advanced=True, ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): # Common Outputs that user commonly uses id: list[str] = SchemaField(description="All Tweet 
IDs") text: list[str] = SchemaField(description="All Tweet texts") @@ -140,8 +145,6 @@ class TwitterGetBookmarkedTweetsBlock(Block): ) next_token: str = SchemaField(description="Next token for pagination") - error: str = SchemaField(description="Error message if the request failed") - def __init__(self): super().__init__( id="ed26783e-a62f-11ef-9a21-c77c57dd8a1f", @@ -308,7 +311,7 @@ class TwitterRemoveBookmarkTweetBlock(Block): Remove a bookmark for a tweet on Twitter """ - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: TwitterCredentialsInput = TwitterCredentialsField( ["tweet.read", "bookmark.write", "users.read", "offline.access"] ) @@ -318,7 +321,7 @@ class TwitterRemoveBookmarkTweetBlock(Block): placeholder="Enter tweet ID", ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): success: bool = SchemaField( description="Whether the bookmark was successfully removed" ) diff --git a/autogpt_platform/backend/backend/blocks/twitter/tweets/hide.py b/autogpt_platform/backend/backend/blocks/twitter/tweets/hide.py index 65faa315ae..f9992ea7c0 100644 --- a/autogpt_platform/backend/backend/blocks/twitter/tweets/hide.py +++ b/autogpt_platform/backend/backend/blocks/twitter/tweets/hide.py @@ -9,7 +9,13 @@ from backend.blocks.twitter._auth import ( TwitterCredentialsInput, ) from backend.blocks.twitter.tweepy_exceptions import handle_tweepy_exception -from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema +from backend.data.block import ( + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, +) from backend.data.model import SchemaField @@ -18,7 +24,7 @@ class TwitterHideReplyBlock(Block): Hides a reply of one of your tweets """ - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: TwitterCredentialsInput = TwitterCredentialsField( ["tweet.read", "tweet.moderate.write", "users.read", "offline.access"] ) @@ -28,9 +34,8 @@ class TwitterHideReplyBlock(Block): placeholder="Enter tweet ID", ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): success: bool = SchemaField(description="Whether the operation was successful") - error: str = SchemaField(description="Error message if the request failed") def __init__(self): super().__init__( @@ -90,7 +95,7 @@ class TwitterUnhideReplyBlock(Block): Unhides a reply to a tweet """ - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: TwitterCredentialsInput = TwitterCredentialsField( ["tweet.read", "tweet.moderate.write", "users.read", "offline.access"] ) @@ -100,9 +105,8 @@ class TwitterUnhideReplyBlock(Block): placeholder="Enter tweet ID", ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): success: bool = SchemaField(description="Whether the operation was successful") - error: str = SchemaField(description="Error message if the request failed") def __init__(self): super().__init__( diff --git a/autogpt_platform/backend/backend/blocks/twitter/tweets/like.py b/autogpt_platform/backend/backend/blocks/twitter/tweets/like.py index 8bbc30e8e9..2d499257a9 100644 --- a/autogpt_platform/backend/backend/blocks/twitter/tweets/like.py +++ b/autogpt_platform/backend/backend/blocks/twitter/tweets/like.py @@ -31,7 +31,13 @@ from backend.blocks.twitter._types import ( UserExpansionsFilter, ) from backend.blocks.twitter.tweepy_exceptions import handle_tweepy_exception -from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema +from backend.data.block import ( + Block, + BlockCategory, + BlockOutput, + 
BlockSchemaInput, + BlockSchemaOutput, +) from backend.data.model import SchemaField @@ -40,7 +46,7 @@ class TwitterLikeTweetBlock(Block): Likes a tweet """ - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: TwitterCredentialsInput = TwitterCredentialsField( ["tweet.read", "like.write", "users.read", "offline.access"] ) @@ -50,9 +56,8 @@ class TwitterLikeTweetBlock(Block): placeholder="Enter tweet ID", ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): success: bool = SchemaField(description="Whether the operation was successful") - error: str = SchemaField(description="Error message if the request failed") def __init__(self): super().__init__( @@ -134,7 +139,7 @@ class TwitterGetLikingUsersBlock(Block): advanced=True, ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): # Common Outputs that user commonly uses id: list[str] = SchemaField(description="All User IDs who liked the tweet") username: list[str] = SchemaField( @@ -152,7 +157,6 @@ class TwitterGetLikingUsersBlock(Block): ) # error - error: str = SchemaField(description="Error message if the request failed") def __init__(self): super().__init__( @@ -309,7 +313,7 @@ class TwitterGetLikedTweetsBlock(Block): advanced=True, ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): # Common Outputs that user commonly uses ids: list[str] = SchemaField(description="All Tweet IDs") texts: list[str] = SchemaField(description="All Tweet texts") @@ -331,7 +335,6 @@ class TwitterGetLikedTweetsBlock(Block): ) # error - error: str = SchemaField(description="Error message if the request failed") def __init__(self): super().__init__( @@ -514,7 +517,7 @@ class TwitterUnlikeTweetBlock(Block): Unlikes a tweet that was previously liked """ - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: TwitterCredentialsInput = TwitterCredentialsField( ["tweet.read", "like.write", "users.read", "offline.access"] ) @@ -524,9 +527,8 @@ class TwitterUnlikeTweetBlock(Block): placeholder="Enter tweet ID", ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): success: bool = SchemaField(description="Whether the operation was successful") - error: str = SchemaField(description="Error message if the request failed") def __init__(self): super().__init__( diff --git a/autogpt_platform/backend/backend/blocks/twitter/tweets/manage.py b/autogpt_platform/backend/backend/blocks/twitter/tweets/manage.py index 6dca0d74c8..875e22738b 100644 --- a/autogpt_platform/backend/backend/blocks/twitter/tweets/manage.py +++ b/autogpt_platform/backend/backend/blocks/twitter/tweets/manage.py @@ -35,7 +35,13 @@ from backend.blocks.twitter._types import ( TweetUserFieldsFilter, ) from backend.blocks.twitter.tweepy_exceptions import handle_tweepy_exception -from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema +from backend.data.block import ( + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, +) from backend.data.model import SchemaField @@ -71,7 +77,7 @@ class TwitterPostTweetBlock(Block): Create a tweet on Twitter with the option to include one additional element such as a media, quote, or deep link. 
""" - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: TwitterCredentialsInput = TwitterCredentialsField( ["tweet.read", "tweet.write", "users.read", "offline.access"] ) @@ -118,7 +124,7 @@ class TwitterPostTweetBlock(Block): default=TweetReplySettingsFilter(All_Users=True), ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): tweet_id: str = SchemaField(description="ID of the created tweet") tweet_url: str = SchemaField(description="URL to the tweet") error: str = SchemaField( @@ -240,7 +246,7 @@ class TwitterDeleteTweetBlock(Block): Deletes a tweet on Twitter using twitter Id """ - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: TwitterCredentialsInput = TwitterCredentialsField( ["tweet.read", "tweet.write", "users.read", "offline.access"] ) @@ -250,7 +256,7 @@ class TwitterDeleteTweetBlock(Block): placeholder="Enter tweet ID", ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): success: bool = SchemaField( description="Whether the tweet was successfully deleted" ) @@ -335,7 +341,7 @@ class TwitterSearchRecentTweetsBlock(Block): advanced=True, ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): # Common Outputs that user commonly uses tweet_ids: list[str] = SchemaField(description="All Tweet IDs") tweet_texts: list[str] = SchemaField(description="All Tweet texts") @@ -351,7 +357,6 @@ class TwitterSearchRecentTweetsBlock(Block): ) # error - error: str = SchemaField(description="Error message if the request failed") def __init__(self): super().__init__( diff --git a/autogpt_platform/backend/backend/blocks/twitter/tweets/quote.py b/autogpt_platform/backend/backend/blocks/twitter/tweets/quote.py index b15271b072..fc6c336e20 100644 --- a/autogpt_platform/backend/backend/blocks/twitter/tweets/quote.py +++ b/autogpt_platform/backend/backend/blocks/twitter/tweets/quote.py @@ -27,7 +27,7 @@ from backend.blocks.twitter._types import ( TweetUserFieldsFilter, ) from backend.blocks.twitter.tweepy_exceptions import handle_tweepy_exception -from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema +from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchemaOutput from backend.data.model import SchemaField @@ -62,7 +62,7 @@ class TwitterGetQuoteTweetsBlock(Block): default="", ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): # Common Outputs that user commonly uses ids: list = SchemaField(description="All Tweet IDs ") texts: list = SchemaField(description="All Tweet texts") @@ -78,7 +78,6 @@ class TwitterGetQuoteTweetsBlock(Block): ) # error - error: str = SchemaField(description="Error message if the request failed") def __init__(self): super().__init__( diff --git a/autogpt_platform/backend/backend/blocks/twitter/tweets/retweet.py b/autogpt_platform/backend/backend/blocks/twitter/tweets/retweet.py index 9b1ba81b78..1f65f90ea3 100644 --- a/autogpt_platform/backend/backend/blocks/twitter/tweets/retweet.py +++ b/autogpt_platform/backend/backend/blocks/twitter/tweets/retweet.py @@ -23,7 +23,13 @@ from backend.blocks.twitter._types import ( UserExpansionsFilter, ) from backend.blocks.twitter.tweepy_exceptions import handle_tweepy_exception -from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema +from backend.data.block import ( + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, +) from backend.data.model import SchemaField @@ -32,7 +38,7 @@ class TwitterRetweetBlock(Block): Retweets a tweet on Twitter """ - class 
Input(BlockSchema): + class Input(BlockSchemaInput): credentials: TwitterCredentialsInput = TwitterCredentialsField( ["tweet.read", "tweet.write", "users.read", "offline.access"] ) @@ -42,9 +48,8 @@ class TwitterRetweetBlock(Block): placeholder="Enter tweet ID", ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): success: bool = SchemaField(description="Whether the retweet was successful") - error: str = SchemaField(description="Error message if the retweet failed") def __init__(self): super().__init__( @@ -107,7 +112,7 @@ class TwitterRemoveRetweetBlock(Block): Removes a retweet on Twitter """ - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: TwitterCredentialsInput = TwitterCredentialsField( ["tweet.read", "tweet.write", "users.read", "offline.access"] ) @@ -117,11 +122,10 @@ class TwitterRemoveRetweetBlock(Block): placeholder="Enter tweet ID", ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): success: bool = SchemaField( description="Whether the retweet was successfully removed" ) - error: str = SchemaField(description="Error message if the removal failed") def __init__(self): super().__init__( @@ -207,7 +211,7 @@ class TwitterGetRetweetersBlock(Block): default="", ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): # Common Outputs that user commonly uses ids: list = SchemaField(description="List of user ids who retweeted") names: list = SchemaField(description="List of user names who retweeted") @@ -225,8 +229,6 @@ class TwitterGetRetweetersBlock(Block): description="Provides metadata such as pagination info (next_token) or result counts" ) - error: str = SchemaField(description="Error message if the request failed") - def __init__(self): super().__init__( id="ad7aa6fa-a630-11ef-a6b0-e7ca640aa030", diff --git a/autogpt_platform/backend/backend/blocks/twitter/tweets/timeline.py b/autogpt_platform/backend/backend/blocks/twitter/tweets/timeline.py index ca89039c2e..9f07beba66 100644 --- a/autogpt_platform/backend/backend/blocks/twitter/tweets/timeline.py +++ b/autogpt_platform/backend/backend/blocks/twitter/tweets/timeline.py @@ -31,7 +31,7 @@ from backend.blocks.twitter._types import ( TweetUserFieldsFilter, ) from backend.blocks.twitter.tweepy_exceptions import handle_tweepy_exception -from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema +from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchemaOutput from backend.data.model import SchemaField @@ -60,7 +60,7 @@ class TwitterGetUserMentionsBlock(Block): description="Token for pagination", default="", advanced=True ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): # Common Outputs that user commonly uses ids: list[str] = SchemaField(description="List of Tweet IDs") texts: list[str] = SchemaField(description="All Tweet texts") @@ -83,7 +83,6 @@ class TwitterGetUserMentionsBlock(Block): ) # error - error: str = SchemaField(description="Error message if the request failed") def __init__(self): super().__init__( @@ -302,7 +301,7 @@ class TwitterGetHomeTimelineBlock(Block): description="Token for pagination", default="", advanced=True ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): # Common Outputs that user commonly uses ids: list[str] = SchemaField(description="List of Tweet IDs") texts: list[str] = SchemaField(description="All Tweet texts") @@ -325,7 +324,6 @@ class TwitterGetHomeTimelineBlock(Block): ) # error - error: str = SchemaField(description="Error message if the request failed") def 
__init__(self): super().__init__( @@ -539,7 +537,7 @@ class TwitterGetUserTweetsBlock(Block): description="Token for pagination", default="", advanced=True ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): # Common Outputs that user commonly uses ids: list[str] = SchemaField(description="List of Tweet IDs") texts: list[str] = SchemaField(description="All Tweet texts") @@ -562,7 +560,6 @@ class TwitterGetUserTweetsBlock(Block): ) # error - error: str = SchemaField(description="Error message if the request failed") def __init__(self): super().__init__( diff --git a/autogpt_platform/backend/backend/blocks/twitter/tweets/tweet_lookup.py b/autogpt_platform/backend/backend/blocks/twitter/tweets/tweet_lookup.py index 5021161b9e..540aa1395f 100644 --- a/autogpt_platform/backend/backend/blocks/twitter/tweets/tweet_lookup.py +++ b/autogpt_platform/backend/backend/blocks/twitter/tweets/tweet_lookup.py @@ -26,7 +26,7 @@ from backend.blocks.twitter._types import ( TweetUserFieldsFilter, ) from backend.blocks.twitter.tweepy_exceptions import handle_tweepy_exception -from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema +from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchemaOutput from backend.data.model import SchemaField @@ -45,7 +45,7 @@ class TwitterGetTweetBlock(Block): placeholder="Enter tweet ID", ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): # Common Outputs that user commonly uses id: str = SchemaField(description="Tweet ID") text: str = SchemaField(description="Tweet text") @@ -59,8 +59,6 @@ class TwitterGetTweetBlock(Block): ) meta: dict = SchemaField(description="Metadata about the tweet") - error: str = SchemaField(description="Error message if the request failed") - def __init__(self): super().__init__( id="f5155c3a-a630-11ef-9cc1-a309988b4d92", @@ -204,7 +202,7 @@ class TwitterGetTweetsBlock(Block): placeholder="Enter tweet IDs", ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): # Common Outputs that user commonly uses ids: list[str] = SchemaField(description="All Tweet IDs") texts: list[str] = SchemaField(description="All Tweet texts") @@ -222,8 +220,6 @@ class TwitterGetTweetsBlock(Block): ) meta: dict = SchemaField(description="Metadata about the tweets") - error: str = SchemaField(description="Error message if the request failed") - def __init__(self): super().__init__( id="e7cc5420-a630-11ef-bfaf-13bdd8096a51", diff --git a/autogpt_platform/backend/backend/blocks/twitter/users/blocks.py b/autogpt_platform/backend/backend/blocks/twitter/users/blocks.py index ca118e91e2..1c192aa6b5 100644 --- a/autogpt_platform/backend/backend/blocks/twitter/users/blocks.py +++ b/autogpt_platform/backend/backend/blocks/twitter/users/blocks.py @@ -20,7 +20,7 @@ from backend.blocks.twitter._types import ( UserExpansionsFilter, ) from backend.blocks.twitter.tweepy_exceptions import handle_tweepy_exception -from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema +from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchemaOutput from backend.data.model import SchemaField @@ -48,7 +48,7 @@ class TwitterGetBlockedUsersBlock(Block): advanced=True, ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): user_ids: list[str] = SchemaField(description="List of blocked user IDs") usernames_: list[str] = SchemaField(description="List of blocked usernames") included: dict = SchemaField( @@ -56,7 +56,6 @@ class TwitterGetBlockedUsersBlock(Block): ) meta: dict = 
SchemaField(description="Metadata including pagination info") next_token: str = SchemaField(description="Next token for pagination") - error: str = SchemaField(description="Error message if the request failed") def __init__(self): super().__init__( diff --git a/autogpt_platform/backend/backend/blocks/twitter/users/follows.py b/autogpt_platform/backend/backend/blocks/twitter/users/follows.py index 160ffe9b35..537aea6031 100644 --- a/autogpt_platform/backend/backend/blocks/twitter/users/follows.py +++ b/autogpt_platform/backend/backend/blocks/twitter/users/follows.py @@ -23,7 +23,13 @@ from backend.blocks.twitter._types import ( UserExpansionsFilter, ) from backend.blocks.twitter.tweepy_exceptions import handle_tweepy_exception -from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema +from backend.data.block import ( + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, +) from backend.data.model import SchemaField @@ -33,7 +39,7 @@ class TwitterUnfollowUserBlock(Block): The request succeeds with no action when the authenticated user sends a request to a user they're not following or have already unfollowed. """ - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: TwitterCredentialsInput = TwitterCredentialsField( ["users.read", "users.write", "follows.write", "offline.access"] ) @@ -43,11 +49,10 @@ class TwitterUnfollowUserBlock(Block): placeholder="Enter target user ID", ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): success: bool = SchemaField( description="Whether the unfollow action was successful" ) - error: str = SchemaField(description="Error message if the request failed") def __init__(self): super().__init__( @@ -105,7 +110,7 @@ class TwitterFollowUserBlock(Block): public Tweets. 
""" - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: TwitterCredentialsInput = TwitterCredentialsField( ["users.read", "users.write", "follows.write", "offline.access"] ) @@ -115,11 +120,10 @@ class TwitterFollowUserBlock(Block): placeholder="Enter target user ID", ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): success: bool = SchemaField( description="Whether the follow action was successful" ) - error: str = SchemaField(description="Error message if the request failed") def __init__(self): super().__init__( @@ -196,7 +200,7 @@ class TwitterGetFollowersBlock(Block): advanced=True, ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): ids: list[str] = SchemaField(description="List of follower user IDs") usernames: list[str] = SchemaField(description="List of follower usernames") next_token: str = SchemaField(description="Next token for pagination") @@ -207,8 +211,6 @@ class TwitterGetFollowersBlock(Block): ) meta: dict = SchemaField(description="Metadata including pagination info") - error: str = SchemaField(description="Error message if the request failed") - def __init__(self): super().__init__( id="30f66410-a631-11ef-8fe7-d7f888b4f43c", @@ -370,7 +372,7 @@ class TwitterGetFollowingBlock(Block): advanced=True, ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): ids: list[str] = SchemaField(description="List of following user IDs") usernames: list[str] = SchemaField(description="List of following usernames") next_token: str = SchemaField(description="Next token for pagination") @@ -381,8 +383,6 @@ class TwitterGetFollowingBlock(Block): ) meta: dict = SchemaField(description="Metadata including pagination info") - error: str = SchemaField(description="Error message if the request failed") - def __init__(self): super().__init__( id="264a399c-a631-11ef-a97d-bfde4ca91173", diff --git a/autogpt_platform/backend/backend/blocks/twitter/users/mutes.py b/autogpt_platform/backend/backend/blocks/twitter/users/mutes.py index 36bb4028f9..e22aec94dc 100644 --- a/autogpt_platform/backend/backend/blocks/twitter/users/mutes.py +++ b/autogpt_platform/backend/backend/blocks/twitter/users/mutes.py @@ -23,7 +23,13 @@ from backend.blocks.twitter._types import ( UserExpansionsFilter, ) from backend.blocks.twitter.tweepy_exceptions import handle_tweepy_exception -from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema +from backend.data.block import ( + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, +) from backend.data.model import SchemaField @@ -33,7 +39,7 @@ class TwitterUnmuteUserBlock(Block): The request succeeds with no action when the user sends a request to a user they're not muting or have already unmuted. 
""" - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: TwitterCredentialsInput = TwitterCredentialsField( ["users.read", "users.write", "offline.access"] ) @@ -43,11 +49,10 @@ class TwitterUnmuteUserBlock(Block): placeholder="Enter target user ID", ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): success: bool = SchemaField( description="Whether the unmute action was successful" ) - error: str = SchemaField(description="Error message if the request failed") def __init__(self): super().__init__( @@ -121,7 +126,7 @@ class TwitterGetMutedUsersBlock(Block): advanced=True, ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): ids: list[str] = SchemaField(description="List of muted user IDs") usernames: list[str] = SchemaField(description="List of muted usernames") next_token: str = SchemaField(description="Next token for pagination") @@ -132,8 +137,6 @@ class TwitterGetMutedUsersBlock(Block): ) meta: dict = SchemaField(description="Metadata including pagination info") - error: str = SchemaField(description="Error message if the request failed") - def __init__(self): super().__init__( id="475024da-a631-11ef-9ccd-f724b8b03cda", @@ -269,7 +272,7 @@ class TwitterMuteUserBlock(Block): Allows a user to mute another user specified by target user ID """ - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: TwitterCredentialsInput = TwitterCredentialsField( ["users.read", "users.write", "offline.access"] ) @@ -279,11 +282,10 @@ class TwitterMuteUserBlock(Block): placeholder="Enter target user ID", ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): success: bool = SchemaField( description="Whether the mute action was successful" ) - error: str = SchemaField(description="Error message if the request failed") def __init__(self): super().__init__( diff --git a/autogpt_platform/backend/backend/blocks/twitter/users/user_lookup.py b/autogpt_platform/backend/backend/blocks/twitter/users/user_lookup.py index 585ebff3db..67c7d14c9b 100644 --- a/autogpt_platform/backend/backend/blocks/twitter/users/user_lookup.py +++ b/autogpt_platform/backend/backend/blocks/twitter/users/user_lookup.py @@ -24,7 +24,7 @@ from backend.blocks.twitter._types import ( UserExpansionsFilter, ) from backend.blocks.twitter.tweepy_exceptions import handle_tweepy_exception -from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema +from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchemaOutput from backend.data.model import SchemaField @@ -56,7 +56,7 @@ class TwitterGetUserBlock(Block): advanced=False, ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): # Common outputs id: str = SchemaField(description="User ID") username_: str = SchemaField(description="User username") @@ -67,7 +67,6 @@ class TwitterGetUserBlock(Block): included: dict = SchemaField( description="Additional data requested via expansions" ) - error: str = SchemaField(description="Error message if the request failed") def __init__(self): super().__init__( @@ -233,7 +232,7 @@ class TwitterGetUsersBlock(Block): advanced=False, ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): # Common outputs ids: list[str] = SchemaField(description="User IDs") usernames_: list[str] = SchemaField(description="User usernames") @@ -244,7 +243,6 @@ class TwitterGetUsersBlock(Block): included: dict = SchemaField( description="Additional data requested via expansions" ) - error: str = SchemaField(description="Error message if the request 
failed") def __init__(self): super().__init__( diff --git a/autogpt_platform/backend/backend/blocks/wolfram/llm_api.py b/autogpt_platform/backend/backend/blocks/wolfram/llm_api.py index 5586e0d8ef..cbae674bdf 100644 --- a/autogpt_platform/backend/backend/blocks/wolfram/llm_api.py +++ b/autogpt_platform/backend/backend/blocks/wolfram/llm_api.py @@ -4,7 +4,8 @@ from backend.sdk import ( BlockCategory, BlockCostType, BlockOutput, - BlockSchema, + BlockSchemaInput, + BlockSchemaOutput, CredentialsMetaInput, ProviderBuilder, SchemaField, @@ -25,13 +26,13 @@ class AskWolframBlock(Block): Ask Wolfram Alpha a question. """ - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: CredentialsMetaInput = wolfram.credentials_field( description="Wolfram Alpha API credentials" ) question: str = SchemaField(description="The question to ask") - class Output(BlockSchema): + class Output(BlockSchemaOutput): answer: str = SchemaField(description="The answer to the question") def __init__(self): diff --git a/autogpt_platform/backend/backend/blocks/wordpress/blog.py b/autogpt_platform/backend/backend/blocks/wordpress/blog.py index 5474b7afda..c0ad5eca54 100644 --- a/autogpt_platform/backend/backend/blocks/wordpress/blog.py +++ b/autogpt_platform/backend/backend/blocks/wordpress/blog.py @@ -2,7 +2,8 @@ from backend.sdk import ( Block, BlockCategory, BlockOutput, - BlockSchema, + BlockSchemaInput, + BlockSchemaOutput, Credentials, CredentialsMetaInput, SchemaField, @@ -17,7 +18,7 @@ class WordPressCreatePostBlock(Block): Creates a new post on a WordPress.com site or Jetpack-enabled site and publishes it. """ - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: CredentialsMetaInput = wordpress.credentials_field() site: str = SchemaField( description="Site ID or domain (e.g., 'myblog.wordpress.com' or '123456789')" @@ -49,7 +50,7 @@ class WordPressCreatePostBlock(Block): description="URLs of images to sideload and attach to the post", default=[] ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): post_id: int = SchemaField(description="The ID of the created post") post_url: str = SchemaField(description="The full URL of the created post") short_url: str = SchemaField(description="The shortened wp.me URL") diff --git a/autogpt_platform/backend/backend/blocks/xml_parser.py b/autogpt_platform/backend/backend/blocks/xml_parser.py index cd2c5b2514..bcc06e3386 100644 --- a/autogpt_platform/backend/backend/blocks/xml_parser.py +++ b/autogpt_platform/backend/backend/blocks/xml_parser.py @@ -1,15 +1,15 @@ from gravitasml.parser import Parser from gravitasml.token import tokenize -from backend.data.block import Block, BlockOutput, BlockSchema +from backend.data.block import Block, BlockOutput, BlockSchemaInput, BlockSchemaOutput from backend.data.model import SchemaField class XMLParserBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): input_xml: str = SchemaField(description="input xml to be parsed") - class Output(BlockSchema): + class Output(BlockSchemaOutput): parsed_xml: dict = SchemaField(description="output parsed xml to dict") error: str = SchemaField(description="Error in parsing") diff --git a/autogpt_platform/backend/backend/blocks/youtube.py b/autogpt_platform/backend/backend/blocks/youtube.py index bb8c61449e..322cac35a8 100644 --- a/autogpt_platform/backend/backend/blocks/youtube.py +++ b/autogpt_platform/backend/backend/blocks/youtube.py @@ -1,22 +1,69 @@ +import logging +from typing import Literal from urllib.parse import parse_qs, 
urlparse +from pydantic import SecretStr from youtube_transcript_api._api import YouTubeTranscriptApi +from youtube_transcript_api._errors import NoTranscriptFound from youtube_transcript_api._transcripts import FetchedTranscript from youtube_transcript_api.formatters import TextFormatter +from youtube_transcript_api.proxies import WebshareProxyConfig -from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema -from backend.data.model import SchemaField +from backend.data.block import ( + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, +) +from backend.data.model import ( + CredentialsField, + CredentialsMetaInput, + SchemaField, + UserPasswordCredentials, +) +from backend.integrations.providers import ProviderName + +logger = logging.getLogger(__name__) + +TEST_CREDENTIALS = UserPasswordCredentials( + id="01234567-89ab-cdef-0123-456789abcdef", + provider="webshare_proxy", + username=SecretStr("mock-webshare-username"), + password=SecretStr("mock-webshare-password"), + title="Mock Webshare Proxy credentials", +) + +TEST_CREDENTIALS_INPUT = { + "provider": TEST_CREDENTIALS.provider, + "id": TEST_CREDENTIALS.id, + "type": TEST_CREDENTIALS.type, + "title": TEST_CREDENTIALS.title, +} + +WebshareProxyCredentials = UserPasswordCredentials +WebshareProxyCredentialsInput = CredentialsMetaInput[ + Literal[ProviderName.WEBSHARE_PROXY], + Literal["user_password"], +] + + +def WebshareProxyCredentialsField() -> WebshareProxyCredentialsInput: + return CredentialsField( + description="Webshare proxy credentials for fetching YouTube transcripts", + ) class TranscribeYoutubeVideoBlock(Block): - class Input(BlockSchema): + class Input(BlockSchemaInput): youtube_url: str = SchemaField( title="YouTube URL", description="The URL of the YouTube video to transcribe", placeholder="https://www.youtube.com/watch?v=dQw4w9WgXcQ", ) + credentials: WebshareProxyCredentialsInput = WebshareProxyCredentialsField() - class Output(BlockSchema): + class Output(BlockSchemaOutput): video_id: str = SchemaField(description="The extracted YouTube video ID") transcript: str = SchemaField(description="The transcribed text of the video") error: str = SchemaField( @@ -28,9 +75,12 @@ class TranscribeYoutubeVideoBlock(Block): id="f3a8f7e1-4b1d-4e5f-9f2a-7c3d5a2e6b4c", input_schema=TranscribeYoutubeVideoBlock.Input, output_schema=TranscribeYoutubeVideoBlock.Output, - description="Transcribes a YouTube video.", + description="Transcribes a YouTube video using a proxy.", categories={BlockCategory.SOCIAL}, - test_input={"youtube_url": "https://www.youtube.com/watch?v=dQw4w9WgXcQ"}, + test_input={ + "youtube_url": "https://www.youtube.com/watch?v=dQw4w9WgXcQ", + "credentials": TEST_CREDENTIALS_INPUT, + }, test_output=[ ("video_id", "dQw4w9WgXcQ"), ( @@ -38,8 +88,9 @@ class TranscribeYoutubeVideoBlock(Block): "Never gonna give you up\nNever gonna let you down", ), ], + test_credentials=TEST_CREDENTIALS, test_mock={ - "get_transcript": lambda video_id: [ + "get_transcript": lambda video_id, credentials: [ {"text": "Never gonna give you up"}, {"text": "Never gonna let you down"}, ], @@ -62,9 +113,42 @@ class TranscribeYoutubeVideoBlock(Block): return parsed_url.path.split("/")[2] raise ValueError(f"Invalid YouTube URL: {url}") - @staticmethod - def get_transcript(video_id: str) -> FetchedTranscript: - return YouTubeTranscriptApi().fetch(video_id=video_id) + def get_transcript( + self, video_id: str, credentials: WebshareProxyCredentials + ) -> FetchedTranscript: + """ + Get transcript for a 
video, preferring English but falling back to any available language. + + :param video_id: The YouTube video ID + :param credentials: The Webshare proxy credentials + :return: The fetched transcript + :raises: Any exception except NoTranscriptFound for requested languages + """ + logger.warning( + "Using Webshare proxy for YouTube transcript fetch (video_id=%s)", + video_id, + ) + proxy_config = WebshareProxyConfig( + proxy_username=credentials.username.get_secret_value(), + proxy_password=credentials.password.get_secret_value(), + ) + + api = YouTubeTranscriptApi(proxy_config=proxy_config) + try: + # Try to get English transcript first (default behavior) + return api.fetch(video_id=video_id) + except NoTranscriptFound: + # If English is not available, get the first available transcript + transcript_list = api.list(video_id) + # Try manually created transcripts first, then generated ones + available_transcripts = list( + transcript_list._manually_created_transcripts.values() + ) + list(transcript_list._generated_transcripts.values()) + if available_transcripts: + # Fetch the first available transcript + return available_transcripts[0].fetch() + # If no transcripts at all, re-raise the original error + raise @staticmethod def format_transcript(transcript: FetchedTranscript) -> str: @@ -72,11 +156,17 @@ class TranscribeYoutubeVideoBlock(Block): transcript_text = formatter.format_transcript(transcript) return transcript_text - async def run(self, input_data: Input, **kwargs) -> BlockOutput: + async def run( + self, + input_data: Input, + *, + credentials: WebshareProxyCredentials, + **kwargs, + ) -> BlockOutput: video_id = self.extract_video_id(input_data.youtube_url) yield "video_id", video_id - transcript = self.get_transcript(video_id) + transcript = self.get_transcript(video_id, credentials) transcript_text = self.format_transcript(transcript=transcript) yield "transcript", transcript_text diff --git a/autogpt_platform/backend/backend/blocks/zerobounce/validate_emails.py b/autogpt_platform/backend/backend/blocks/zerobounce/validate_emails.py index 6bb96d0a8f..fa5283f324 100644 --- a/autogpt_platform/backend/backend/blocks/zerobounce/validate_emails.py +++ b/autogpt_platform/backend/backend/blocks/zerobounce/validate_emails.py @@ -14,7 +14,13 @@ from backend.blocks.zerobounce._auth import ( ZeroBounceCredentials, ZeroBounceCredentialsInput, ) -from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema +from backend.data.block import ( + Block, + BlockCategory, + BlockOutput, + BlockSchemaInput, + BlockSchemaOutput, +) from backend.data.model import CredentialsField, SchemaField @@ -82,7 +88,7 @@ class Response(BaseModel): class ValidateEmailsBlock(Block): """Search for people in Apollo""" - class Input(BlockSchema): + class Input(BlockSchemaInput): email: str = SchemaField( description="Email to validate", ) @@ -94,7 +100,7 @@ class ValidateEmailsBlock(Block): description="ZeroBounce credentials", ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): response: Response = SchemaField( description="Response from ZeroBounce", ) diff --git a/autogpt_platform/backend/backend/check_db.py b/autogpt_platform/backend/backend/check_db.py index 591c519f84..7e1c3ee14f 100644 --- a/autogpt_platform/backend/backend/check_db.py +++ b/autogpt_platform/backend/backend/check_db.py @@ -5,6 +5,8 @@ from datetime import datetime from faker import Faker from prisma import Prisma +from backend.data.db import query_raw_with_schema + faker = Faker() @@ -15,9 +17,9 @@ async def 
check_cron_job(db): try: # Check if pg_cron extension exists - extension_check = await db.query_raw("CREATE EXTENSION pg_cron;") + extension_check = await query_raw_with_schema("CREATE EXTENSION pg_cron;") print(extension_check) - extension_check = await db.query_raw( + extension_check = await query_raw_with_schema( "SELECT COUNT(*) as count FROM pg_extension WHERE extname = 'pg_cron'" ) if extension_check[0]["count"] == 0: @@ -25,7 +27,7 @@ async def check_cron_job(db): return False # Check if the refresh job exists - job_check = await db.query_raw( + job_check = await query_raw_with_schema( """ SELECT jobname, schedule, command FROM cron.job @@ -55,33 +57,33 @@ async def get_materialized_view_counts(db): print("-" * 40) # Get counts from mv_agent_run_counts - agent_runs = await db.query_raw( + agent_runs = await query_raw_with_schema( """ SELECT COUNT(*) as total_agents, SUM(run_count) as total_runs, MAX(run_count) as max_runs, MIN(run_count) as min_runs - FROM mv_agent_run_counts + FROM {schema_prefix}mv_agent_run_counts """ ) # Get counts from mv_review_stats - review_stats = await db.query_raw( + review_stats = await query_raw_with_schema( """ SELECT COUNT(*) as total_listings, SUM(review_count) as total_reviews, AVG(avg_rating) as overall_avg_rating - FROM mv_review_stats + FROM {schema_prefix}mv_review_stats """ ) # Get sample data from StoreAgent view - store_agents = await db.query_raw( + store_agents = await query_raw_with_schema( """ SELECT COUNT(*) as total_store_agents, AVG(runs) as avg_runs, AVG(rating) as avg_rating - FROM "StoreAgent" + FROM {schema_prefix}"StoreAgent" """ ) diff --git a/autogpt_platform/backend/backend/check_store_data.py b/autogpt_platform/backend/backend/check_store_data.py index 10aa6507ba..c17393a6d4 100644 --- a/autogpt_platform/backend/backend/check_store_data.py +++ b/autogpt_platform/backend/backend/check_store_data.py @@ -5,6 +5,8 @@ import asyncio from prisma import Prisma +from backend.data.db import query_raw_with_schema + async def check_store_data(db): """Check what store data exists in the database.""" @@ -89,11 +91,11 @@ async def check_store_data(db): sa.creator_username, sa.categories, sa.updated_at - FROM "StoreAgent" sa + FROM {schema_prefix}"StoreAgent" sa LIMIT 10; """ - store_agents = await db.query_raw(query) + store_agents = await query_raw_with_schema(query) print(f"Total store agents in view: {len(store_agents)}") if store_agents: @@ -111,22 +113,22 @@ async def check_store_data(db): # Check for any APPROVED store listing versions query = """ SELECT COUNT(*) as count - FROM "StoreListingVersion" + FROM {schema_prefix}"StoreListingVersion" WHERE "submissionStatus" = 'APPROVED' """ - result = await db.query_raw(query) + result = await query_raw_with_schema(query) approved_count = result[0]["count"] if result else 0 print(f"Approved store listing versions: {approved_count}") # Check for store listings with hasApprovedVersion = true query = """ SELECT COUNT(*) as count - FROM "StoreListing" + FROM {schema_prefix}"StoreListing" WHERE "hasApprovedVersion" = true AND "isDeleted" = false """ - result = await db.query_raw(query) + result = await query_raw_with_schema(query) has_approved_count = result[0]["count"] if result else 0 print(f"Store listings with approved versions: {has_approved_count}") @@ -134,10 +136,10 @@ async def check_store_data(db): query = """ SELECT COUNT(DISTINCT "agentGraphId") as unique_agents, COUNT(*) as total_executions - FROM "AgentGraphExecution" + FROM {schema_prefix}"AgentGraphExecution" """ - result = await 
db.query_raw(query) + result = await query_raw_with_schema(query) if result: print("\nAgent Graph Executions:") print(f" Unique agents with executions: {result[0]['unique_agents']}") diff --git a/autogpt_platform/backend/backend/cli.py b/autogpt_platform/backend/backend/cli.py index 988961b2de..d6eaca1dd0 100755 --- a/autogpt_platform/backend/backend/cli.py +++ b/autogpt_platform/backend/backend/cli.py @@ -45,9 +45,6 @@ class MainApp(AppProcess): app.main(silent=True) - def cleanup(self): - pass - @click.group() def main(): @@ -247,11 +244,7 @@ def websocket(server_address: str, graph_exec_id: str): import websockets.asyncio.client - from backend.server.ws_api import ( - WSMessage, - WSMethod, - WSSubscribeGraphExecutionRequest, - ) + from backend.api.ws_api import WSMessage, WSMethod, WSSubscribeGraphExecutionRequest async def send_message(server_address: str): uri = f"ws://{server_address}" diff --git a/autogpt_platform/backend/backend/cli/__init__.py b/autogpt_platform/backend/backend/cli/__init__.py new file mode 100644 index 0000000000..d96b0c7d49 --- /dev/null +++ b/autogpt_platform/backend/backend/cli/__init__.py @@ -0,0 +1 @@ +"""CLI utilities for backend development & administration""" diff --git a/autogpt_platform/backend/backend/cli/generate_openapi_json.py b/autogpt_platform/backend/backend/cli/generate_openapi_json.py new file mode 100644 index 0000000000..de74c0b5d2 --- /dev/null +++ b/autogpt_platform/backend/backend/cli/generate_openapi_json.py @@ -0,0 +1,57 @@ +#!/usr/bin/env python3 +""" +Script to generate OpenAPI JSON specification for the FastAPI app. + +This script imports the FastAPI app from backend.api.rest_api and outputs +the OpenAPI specification as JSON to stdout or a specified file. + +Usage: + `poetry run python generate_openapi_json.py` + `poetry run python generate_openapi_json.py --output openapi.json` + `poetry run python generate_openapi_json.py --indent 4 --output openapi.json` +""" + +import json +import os +from pathlib import Path + +import click + + +@click.command() +@click.option( + "--output", + type=click.Path(dir_okay=False, path_type=Path), + help="Output file path (default: stdout)", +) +@click.option( + "--pretty", + type=click.BOOL, + default=False, + help="Pretty-print JSON output (indented 2 spaces)", +) +def main(output: Path, pretty: bool): + """Generate and output the OpenAPI JSON specification.""" + openapi_schema = get_openapi_schema() + + json_output = json.dumps(openapi_schema, indent=2 if pretty else None) + + if output: + output.write_text(json_output) + click.echo(f"✅ OpenAPI specification written to {output}\n\nPreview:") + click.echo(f"\n{json_output[:500]} ...") + else: + print(json_output) + + +def get_openapi_schema(): + """Get the OpenAPI schema from the FastAPI app""" + from backend.api.rest_api import app + + return app.openapi() + + +if __name__ == "__main__": + os.environ["LOG_LEVEL"] = "ERROR" # disable stdout log output + + main() diff --git a/autogpt_platform/backend/backend/cli/oauth_tool.py b/autogpt_platform/backend/backend/cli/oauth_tool.py new file mode 100755 index 0000000000..57982d359b --- /dev/null +++ b/autogpt_platform/backend/backend/cli/oauth_tool.py @@ -0,0 +1,1177 @@ +#!/usr/bin/env python3 +""" +OAuth Application Credential Generator and Test Server + +Generates client IDs, client secrets, and SQL INSERT statements for OAuth applications. +Also provides a test server to test the OAuth flows end-to-end. 
+ +Usage: + # Generate credentials interactively (recommended) + poetry run oauth-tool generate-app + + # Generate credentials with all options provided + poetry run oauth-tool generate-app \\ + --name "My App" \\ + --description "My application description" \\ + --redirect-uris "https://app.example.com/callback,http://localhost:3000/callback" \\ + --scopes "EXECUTE_GRAPH,READ_GRAPH" + + # Mix of options and interactive prompts + poetry run oauth-tool generate-app --name "My App" + + # Hash an existing plaintext secret (for secret rotation) + poetry run oauth-tool hash-secret "my-plaintext-secret" + + # Validate a plaintext secret against a hash and salt + poetry run oauth-tool validate-secret "my-plaintext-secret" "hash" "salt" + + # Run a test server to test OAuth flows + poetry run oauth-tool test-server --owner-id YOUR_USER_ID +""" + +import asyncio +import base64 +import hashlib +import secrets +import sys +import uuid +from datetime import datetime +from typing import Optional +from urllib.parse import urlparse + +import click +from autogpt_libs.api_key.keysmith import APIKeySmith +from prisma.enums import APIKeyPermission + +keysmith = APIKeySmith() + + +def generate_client_id() -> str: + """Generate a unique client ID""" + return f"agpt_client_{secrets.token_urlsafe(16)}" + + +def generate_client_secret() -> tuple[str, str, str]: + """ + Generate a client secret with its hash and salt. + Returns (plaintext_secret, hashed_secret, salt) + """ + # Generate a secure random secret (32 bytes = 256 bits of entropy) + plaintext = f"agpt_secret_{secrets.token_urlsafe(32)}" + + # Hash using Scrypt (same as API keys) + hashed, salt = keysmith.hash_key(plaintext) + + return plaintext, hashed, salt + + +def hash_secret(plaintext: str) -> tuple[str, str]: + """Hash a plaintext secret using Scrypt. Returns (hash, salt)""" + return keysmith.hash_key(plaintext) + + +def validate_secret(plaintext: str, hash_value: str, salt: str) -> bool: + """Validate a plaintext secret against a stored hash and salt""" + return keysmith.verify_key(plaintext, hash_value, salt) + + +def generate_app_credentials( + name: str, + redirect_uris: list[str], + scopes: list[str], + description: str | None = None, + grant_types: list[str] | None = None, +) -> dict: + """ + Generate complete credentials for an OAuth application. 
+ + Returns dict with: + - id: UUID for the application + - name: Application name + - description: Application description + - client_id: Client identifier (plaintext) + - client_secret_plaintext: Client secret (SENSITIVE - show only once) + - client_secret_hash: Hashed client secret (for database) + - redirect_uris: List of allowed redirect URIs + - grant_types: List of allowed grant types + - scopes: List of allowed scopes + """ + if grant_types is None: + grant_types = ["authorization_code", "refresh_token"] + + # Validate scopes + try: + validated_scopes = [APIKeyPermission(s.strip()) for s in scopes if s.strip()] + except ValueError as e: + raise ValueError(f"Invalid scope: {e}") + + if not validated_scopes: + raise ValueError("At least one scope is required") + + # Generate credentials + app_id = str(uuid.uuid4()) + client_id = generate_client_id() + client_secret_plaintext, client_secret_hash, client_secret_salt = ( + generate_client_secret() + ) + + return { + "id": app_id, + "name": name, + "description": description, + "client_id": client_id, + "client_secret_plaintext": client_secret_plaintext, + "client_secret_hash": client_secret_hash, + "client_secret_salt": client_secret_salt, + "redirect_uris": redirect_uris, + "grant_types": grant_types, + "scopes": [s.value for s in validated_scopes], + } + + +def format_sql_insert(creds: dict) -> str: + """ + Format credentials as a SQL INSERT statement. + + The statement includes placeholders that must be replaced: + - YOUR_USER_ID_HERE: Replace with the owner's user ID + """ + now_iso = datetime.utcnow().isoformat() + + # Format arrays for PostgreSQL + redirect_uris_pg = ( + "{" + ",".join(f'"{uri}"' for uri in creds["redirect_uris"]) + "}" + ) + grant_types_pg = "{" + ",".join(f'"{gt}"' for gt in creds["grant_types"]) + "}" + scopes_pg = "{" + ",".join(creds["scopes"]) + "}" + + sql = f""" +-- ============================================================ +-- OAuth Application: {creds['name']} +-- Generated: {now_iso} UTC +-- ============================================================ + +INSERT INTO "OAuthApplication" ( + id, + "createdAt", + "updatedAt", + name, + description, + "clientId", + "clientSecret", + "clientSecretSalt", + "redirectUris", + "grantTypes", + scopes, + "ownerId", + "isActive" +) +VALUES ( + '{creds['id']}', + NOW(), + NOW(), + '{creds['name']}', + {f"'{creds['description']}'" if creds['description'] else 'NULL'}, + '{creds['client_id']}', + '{creds['client_secret_hash']}', + '{creds['client_secret_salt']}', + ARRAY{redirect_uris_pg}::TEXT[], + ARRAY{grant_types_pg}::TEXT[], + ARRAY{scopes_pg}::"APIKeyPermission"[], + 'YOUR_USER_ID_HERE', -- ⚠️ REPLACE with actual owner user ID + true +); + +-- ============================================================ +-- ⚠️ IMPORTANT: Save these credentials securely! +-- ============================================================ +-- +-- Client ID: {creds['client_id']} +-- Client Secret: {creds['client_secret_plaintext']} +-- +-- ⚠️ The client secret is shown ONLY ONCE! +-- ⚠️ Store it securely and share only with the application developer. +-- ⚠️ Never commit it to version control. +-- +-- The client secret has been hashed in the database using Scrypt. +-- The plaintext secret above is needed by the application to authenticate. 
+-- ============================================================ + +-- To verify the application was created: +-- SELECT "clientId", name, scopes, "redirectUris", "isActive" +-- FROM "OAuthApplication" +-- WHERE "clientId" = '{creds['client_id']}'; +""" + return sql + + +@click.group() +def cli(): + """OAuth Application Credential Generator + + Generates client IDs, client secrets, and SQL INSERT statements for OAuth applications. + Does NOT directly insert into the database - outputs SQL for manual execution. + """ + pass + + +AVAILABLE_SCOPES = [ + "EXECUTE_GRAPH", + "READ_GRAPH", + "EXECUTE_BLOCK", + "READ_BLOCK", + "READ_STORE", + "USE_TOOLS", + "MANAGE_INTEGRATIONS", + "READ_INTEGRATIONS", + "DELETE_INTEGRATIONS", +] + +DEFAULT_GRANT_TYPES = ["authorization_code", "refresh_token"] + + +def prompt_for_name() -> str: + """Prompt for application name""" + return click.prompt("Application name", type=str) + + +def prompt_for_description() -> str | None: + """Prompt for application description""" + description = click.prompt( + "Application description (optional, press Enter to skip)", + type=str, + default="", + show_default=False, + ) + return description if description else None + + +def prompt_for_redirect_uris() -> list[str]: + """Prompt for redirect URIs interactively""" + click.echo("\nRedirect URIs (enter one per line, empty line to finish):") + click.echo(" Example: https://app.example.com/callback") + uris = [] + while True: + uri = click.prompt(" URI", type=str, default="", show_default=False) + if not uri: + if not uris: + click.echo(" At least one redirect URI is required.") + continue + break + uris.append(uri.strip()) + return uris + + +def prompt_for_scopes() -> list[str]: + """Prompt for scopes interactively with a menu""" + click.echo("\nAvailable scopes:") + for i, scope in enumerate(AVAILABLE_SCOPES, 1): + click.echo(f" {i}. 
{scope}") + + click.echo( + "\nSelect scopes by number (comma-separated) or enter scope names directly:" + ) + click.echo(" Example: 1,2 or EXECUTE_GRAPH,READ_GRAPH") + + while True: + selection = click.prompt("Scopes", type=str) + scopes = [] + + for item in selection.split(","): + item = item.strip() + if not item: + continue + + # Check if it's a number + if item.isdigit(): + idx = int(item) - 1 + if 0 <= idx < len(AVAILABLE_SCOPES): + scopes.append(AVAILABLE_SCOPES[idx]) + else: + click.echo(f" Invalid number: {item}") + scopes = [] + break + # Check if it's a valid scope name + elif item.upper() in AVAILABLE_SCOPES: + scopes.append(item.upper()) + else: + click.echo(f" Invalid scope: {item}") + scopes = [] + break + + if scopes: + return scopes + click.echo(" Please enter valid scope numbers or names.") + + +def prompt_for_grant_types() -> list[str] | None: + """Prompt for grant types interactively""" + click.echo(f"\nGrant types (default: {', '.join(DEFAULT_GRANT_TYPES)})") + grant_types_input = click.prompt( + "Grant types (comma-separated, press Enter for default)", + type=str, + default="", + show_default=False, + ) + + if not grant_types_input: + return None # Use default + + return [gt.strip() for gt in grant_types_input.split(",") if gt.strip()] + + +@cli.command(name="generate-app") +@click.option( + "--name", + default=None, + help="Application name (e.g., 'My Cool App')", +) +@click.option( + "--description", + default=None, + help="Application description", +) +@click.option( + "--redirect-uris", + default=None, + help="Comma-separated list of redirect URIs (e.g., 'https://app.example.com/callback,http://localhost:3000/callback')", +) +@click.option( + "--scopes", + default=None, + help="Comma-separated list of scopes (e.g., 'EXECUTE_GRAPH,READ_GRAPH')", +) +@click.option( + "--grant-types", + default=None, + help="Comma-separated list of grant types (default: 'authorization_code,refresh_token')", +) +def generate_app( + name: str | None, + description: str | None, + redirect_uris: str | None, + scopes: str | None, + grant_types: str | None, +): + """Generate credentials for a new OAuth application + + All options are optional. If not provided, you will be prompted interactively. 
+ """ + # Interactive prompts for missing required values + if name is None: + name = prompt_for_name() + + if description is None: + description = prompt_for_description() + + if redirect_uris is None: + redirect_uris_list = prompt_for_redirect_uris() + else: + redirect_uris_list = [uri.strip() for uri in redirect_uris.split(",")] + + if scopes is None: + scopes_list = prompt_for_scopes() + else: + scopes_list = [scope.strip() for scope in scopes.split(",")] + + if grant_types is None: + grant_types_list = prompt_for_grant_types() + else: + grant_types_list = [gt.strip() for gt in grant_types.split(",")] + + try: + creds = generate_app_credentials( + name=name, + description=description, + redirect_uris=redirect_uris_list, + scopes=scopes_list, + grant_types=grant_types_list, + ) + + sql = format_sql_insert(creds) + click.echo(sql) + + except ValueError as e: + click.echo(f"Error: {e}", err=True) + sys.exit(1) + + +@cli.command(name="hash-secret") +@click.argument("secret") +def hash_secret_command(secret): + """Hash a plaintext secret using Scrypt""" + hashed, salt = hash_secret(secret) + click.echo(f"Hash: {hashed}") + click.echo(f"Salt: {salt}") + + +@cli.command(name="validate-secret") +@click.argument("secret") +@click.argument("hash") +@click.argument("salt") +def validate_secret_command(secret, hash, salt): + """Validate a plaintext secret against a hash and salt""" + is_valid = validate_secret(secret, hash, salt) + if is_valid: + click.echo("✓ Secret is valid!") + sys.exit(0) + else: + click.echo("✗ Secret is invalid!", err=True) + sys.exit(1) + + +# ============================================================================ +# Test Server Command +# ============================================================================ + +TEST_APP_NAME = "OAuth Test App (CLI)" +TEST_APP_DESCRIPTION = "Temporary test application created by oauth_admin CLI" +TEST_SERVER_PORT = 9876 + + +def generate_pkce() -> tuple[str, str]: + """Generate PKCE code_verifier and code_challenge (S256)""" + code_verifier = secrets.token_urlsafe(32) + code_challenge = ( + base64.urlsafe_b64encode(hashlib.sha256(code_verifier.encode()).digest()) + .decode() + .rstrip("=") + ) + return code_verifier, code_challenge + + +def create_test_html( + platform_url: str, + client_id: str, + client_secret: str, + redirect_uri: str, + backend_url: str, +) -> str: + """Generate HTML page for test OAuth client""" + return f""" + + + + + OAuth Test Client + + + +
+      [HTML/JS test-client markup is garbled beyond recovery in this view; the rendered page
+       shows the title "🔐 OAuth Test Client", the subtitle "Test the "Sign in with AutoGPT"
+       and Integration Setup flows", the {client_id}, a "📋 Request Log" panel initialized to
+       "Waiting for action...", and a "⚙️ Configuration" panel listing {platform_url},
+       {backend_url}, and {redirect_uri}.]
      + + + + +""" + + +async def create_test_app_in_db( + owner_id: str, + redirect_uri: str, +) -> dict: + """Create a temporary test OAuth application in the database""" + from prisma.models import OAuthApplication + + from backend.data import db + + # Connect to database + await db.connect() + + # Generate credentials + creds = generate_app_credentials( + name=TEST_APP_NAME, + description=TEST_APP_DESCRIPTION, + redirect_uris=[redirect_uri], + scopes=AVAILABLE_SCOPES, # All scopes for testing + ) + + # Insert into database + app = await OAuthApplication.prisma().create( + data={ + "id": creds["id"], + "name": creds["name"], + "description": creds["description"], + "clientId": creds["client_id"], + "clientSecret": creds["client_secret_hash"], + "clientSecretSalt": creds["client_secret_salt"], + "redirectUris": creds["redirect_uris"], + "grantTypes": creds["grant_types"], + "scopes": creds["scopes"], + "ownerId": owner_id, + "isActive": True, + } + ) + + click.echo(f"✓ Created test OAuth application: {app.clientId}") + + return { + "id": app.id, + "client_id": app.clientId, + "client_secret": creds["client_secret_plaintext"], + } + + +async def cleanup_test_app(app_id: str) -> None: + """Remove test application and all associated tokens from database""" + from prisma.models import ( + OAuthAccessToken, + OAuthApplication, + OAuthAuthorizationCode, + OAuthRefreshToken, + ) + + from backend.data import db + + if not db.is_connected(): + await db.connect() + + click.echo("\n🧹 Cleaning up test data...") + + # Delete authorization codes + deleted_codes = await OAuthAuthorizationCode.prisma().delete_many( + where={"applicationId": app_id} + ) + if deleted_codes: + click.echo(f" Deleted {deleted_codes} authorization code(s)") + + # Delete access tokens + deleted_access = await OAuthAccessToken.prisma().delete_many( + where={"applicationId": app_id} + ) + if deleted_access: + click.echo(f" Deleted {deleted_access} access token(s)") + + # Delete refresh tokens + deleted_refresh = await OAuthRefreshToken.prisma().delete_many( + where={"applicationId": app_id} + ) + if deleted_refresh: + click.echo(f" Deleted {deleted_refresh} refresh token(s)") + + # Delete the application itself + await OAuthApplication.prisma().delete(where={"id": app_id}) + click.echo(" Deleted test OAuth application") + + await db.disconnect() + click.echo("✓ Cleanup complete!") + + +def run_test_server( + port: int, + platform_url: str, + backend_url: str, + client_id: str, + client_secret: str, +) -> None: + """Run a simple HTTP server for testing OAuth flows""" + import json as json_module + import threading + from http.server import BaseHTTPRequestHandler, HTTPServer + from urllib.request import Request, urlopen + + redirect_uri = f"http://localhost:{port}/callback" + + html_content = create_test_html( + platform_url=platform_url, + client_id=client_id, + client_secret=client_secret, + redirect_uri=redirect_uri, + backend_url=backend_url, + ) + + class TestHandler(BaseHTTPRequestHandler): + def do_GET(self): + from urllib.parse import parse_qs + + # Parse the path + parsed = urlparse(self.path) + + # Serve the test page for root and callback + if parsed.path in ["/", "/callback"]: + self.send_response(200) + self.send_header("Content-Type", "text/html; charset=utf-8") + self.end_headers() + self.wfile.write(html_content.encode()) + + # Proxy API calls to backend (avoids CORS issues) + # Supports both /proxy/api/* and /proxy/external-api/* + elif parsed.path.startswith("/proxy/"): + try: + # Extract the API path and token 
from query params + api_path = parsed.path[len("/proxy") :] + query_params = parse_qs(parsed.query) + token = query_params.get("token", [None])[0] + + headers = {} + if token: + headers["Authorization"] = f"Bearer {token}" + + req = Request( + f"{backend_url}{api_path}", + headers=headers, + method="GET", + ) + + with urlopen(req) as response: + response_body = response.read() + self.send_response(response.status) + self.send_header("Content-Type", "application/json") + self.end_headers() + self.wfile.write(response_body) + + except Exception as e: + error_msg = str(e) + status_code = 500 + if hasattr(e, "code"): + status_code = e.code # type: ignore + if hasattr(e, "read"): + try: + error_body = e.read().decode() # type: ignore + error_data = json_module.loads(error_body) + error_msg = error_data.get("detail", error_msg) + except Exception: + pass + + self.send_response(status_code) + self.send_header("Content-Type", "application/json") + self.end_headers() + self.wfile.write(json_module.dumps({"detail": error_msg}).encode()) + + else: + self.send_response(404) + self.end_headers() + + def do_POST(self): + # Parse the path + parsed = urlparse(self.path) + + # Proxy token exchange to backend (avoids CORS issues) + if parsed.path == "/proxy/token": + try: + # Read request body + content_length = int(self.headers.get("Content-Length", 0)) + body = self.rfile.read(content_length) + + # Forward to backend + req = Request( + f"{backend_url}/api/oauth/token", + data=body, + headers={"Content-Type": "application/json"}, + method="POST", + ) + + with urlopen(req) as response: + response_body = response.read() + self.send_response(response.status) + self.send_header("Content-Type", "application/json") + self.end_headers() + self.wfile.write(response_body) + + except Exception as e: + error_msg = str(e) + # Try to extract error detail from urllib error + if hasattr(e, "read"): + try: + error_body = e.read().decode() # type: ignore + error_data = json_module.loads(error_body) + error_msg = error_data.get("detail", error_msg) + except Exception: + pass + + self.send_response(500) + self.send_header("Content-Type", "application/json") + self.end_headers() + self.wfile.write(json_module.dumps({"detail": error_msg}).encode()) + else: + self.send_response(404) + self.end_headers() + + def log_message(self, format, *args): + # Suppress default logging + pass + + server = HTTPServer(("localhost", port), TestHandler) + click.echo(f"\n🚀 Test server running at http://localhost:{port}") + click.echo(" Open this URL in your browser to test the OAuth flows\n") + + # Run server in a daemon thread + server_thread = threading.Thread(target=server.serve_forever, daemon=True) + server_thread.start() + + # Use a simple polling loop that can be interrupted + try: + while server_thread.is_alive(): + server_thread.join(timeout=1.0) + except KeyboardInterrupt: + pass + + click.echo("\n\n⏹️ Server stopped") + server.shutdown() + + +async def setup_and_cleanup_test_app( + owner_id: str, + redirect_uri: str, + port: int, + platform_url: str, + backend_url: str, +) -> None: + """ + Async context manager that handles test app lifecycle. + Creates the app, yields control to run the server, then cleans up. 
+ """ + app_info: Optional[dict] = None + + try: + # Create test app in database + click.echo("\n📝 Creating temporary OAuth application...") + app_info = await create_test_app_in_db(owner_id, redirect_uri) + + click.echo(f"\n Client ID: {app_info['client_id']}") + click.echo(f" Client Secret: {app_info['client_secret'][:30]}...") + + # Run the test server (blocking, synchronous) + click.echo("\n" + "-" * 60) + click.echo(" Press Ctrl+C to stop the server and clean up") + click.echo("-" * 60) + + run_test_server( + port=port, + platform_url=platform_url, + backend_url=backend_url, + client_id=app_info["client_id"], + client_secret=app_info["client_secret"], + ) + + finally: + # Always clean up - we're still in the same event loop + if app_info: + try: + await cleanup_test_app(app_info["id"]) + except Exception as e: + click.echo(f"\n⚠️ Cleanup error: {e}", err=True) + click.echo( + f" You may need to manually delete app with ID: {app_info['id']}" + ) + + +@cli.command(name="test-server") +@click.option( + "--owner-id", + required=True, + help="User ID to own the temporary test OAuth application", +) +@click.option( + "--port", + default=TEST_SERVER_PORT, + help=f"Port to run the test server on (default: {TEST_SERVER_PORT})", +) +@click.option( + "--platform-url", + default="http://localhost:3000", + help="AutoGPT Platform frontend URL (default: http://localhost:3000)", +) +@click.option( + "--backend-url", + default="http://localhost:8006", + help="AutoGPT Platform backend URL (default: http://localhost:8006)", +) +def test_server_command( + owner_id: str, + port: int, + platform_url: str, + backend_url: str, +): + """Run a test server to test OAuth flows interactively + + This command: + 1. Creates a temporary OAuth application in the database + 2. Starts a minimal web server that acts as a third-party client + 3. Lets you test "Sign in with AutoGPT" and Integration Setup flows + 4. 
Cleans up all test data (app, tokens, codes) when you stop the server + + Example: + poetry run oauth-tool test-server --owner-id YOUR_USER_ID + + The test server will be available at http://localhost:9876 + """ + redirect_uri = f"http://localhost:{port}/callback" + + click.echo("=" * 60) + click.echo(" OAuth Test Server") + click.echo("=" * 60) + click.echo(f"\n Owner ID: {owner_id}") + click.echo(f" Platform URL: {platform_url}") + click.echo(f" Backend URL: {backend_url}") + click.echo(f" Test Server: http://localhost:{port}") + click.echo(f" Redirect URI: {redirect_uri}") + click.echo("\n" + "=" * 60) + + try: + # Run everything in a single event loop to keep Prisma client happy + asyncio.run( + setup_and_cleanup_test_app( + owner_id=owner_id, + redirect_uri=redirect_uri, + port=port, + platform_url=platform_url, + backend_url=backend_url, + ) + ) + except KeyboardInterrupt: + # Already handled inside, just exit cleanly + pass + except Exception as e: + click.echo(f"\n❌ Error: {e}", err=True) + sys.exit(1) + + +if __name__ == "__main__": + cli() diff --git a/autogpt_platform/backend/backend/data/__init__.py b/autogpt_platform/backend/backend/data/__init__.py index 31ab09a5df..c98667e362 100644 --- a/autogpt_platform/backend/backend/data/__init__.py +++ b/autogpt_platform/backend/backend/data/__init__.py @@ -1,4 +1,4 @@ -from backend.server.v2.library.model import LibraryAgentPreset +from backend.api.features.library.model import LibraryAgentPreset from .graph import NodeModel from .integrations import Webhook # noqa: F401 diff --git a/autogpt_platform/backend/backend/data/analytics.py b/autogpt_platform/backend/backend/data/analytics.py index fde2d3fd6e..7419539026 100644 --- a/autogpt_platform/backend/backend/data/analytics.py +++ b/autogpt_platform/backend/backend/data/analytics.py @@ -1,12 +1,45 @@ import logging +from datetime import datetime, timedelta, timezone +from typing import Optional import prisma.types +from pydantic import BaseModel +from backend.data.db import query_raw_with_schema from backend.util.json import SafeJson logger = logging.getLogger(__name__) +class AccuracyAlertData(BaseModel): + """Alert data when accuracy drops significantly.""" + + graph_id: str + user_id: Optional[str] + drop_percent: float + three_day_avg: float + seven_day_avg: float + detected_at: datetime + + +class AccuracyLatestData(BaseModel): + """Latest execution accuracy data point.""" + + date: datetime + daily_score: Optional[float] + three_day_avg: Optional[float] + seven_day_avg: Optional[float] + fourteen_day_avg: Optional[float] + + +class AccuracyTrendsResponse(BaseModel): + """Response model for accuracy trends and alerts.""" + + latest_data: AccuracyLatestData + alert: Optional[AccuracyAlertData] + historical_data: Optional[list[AccuracyLatestData]] = None + + async def log_raw_analytics( user_id: str, type: str, @@ -43,3 +76,217 @@ async def log_raw_metric( ) return result + + +async def get_accuracy_trends_and_alerts( + graph_id: str, + days_back: int = 30, + user_id: Optional[str] = None, + drop_threshold: float = 10.0, + include_historical: bool = False, +) -> AccuracyTrendsResponse: + """Get accuracy trends and detect alerts for a specific graph.""" + query_template = """ + WITH daily_scores AS ( + SELECT + DATE(e."createdAt") as execution_date, + AVG(CASE + WHEN e.stats IS NOT NULL + AND e.stats::json->>'correctness_score' IS NOT NULL + AND e.stats::json->>'correctness_score' != 'null' + THEN (e.stats::json->>'correctness_score')::float * 100 + ELSE NULL + END) as daily_score + 
FROM {schema_prefix}"AgentGraphExecution" e + WHERE e."agentGraphId" = $1::text + AND e."isDeleted" = false + AND e."createdAt" >= $2::timestamp + AND e."executionStatus" IN ('COMPLETED', 'FAILED', 'TERMINATED') + {user_filter} + GROUP BY DATE(e."createdAt") + HAVING COUNT(*) >= 3 -- Need at least 3 executions per day + ), + trends AS ( + SELECT + execution_date, + daily_score, + AVG(daily_score) OVER ( + ORDER BY execution_date + ROWS BETWEEN 2 PRECEDING AND CURRENT ROW + ) as three_day_avg, + AVG(daily_score) OVER ( + ORDER BY execution_date + ROWS BETWEEN 6 PRECEDING AND CURRENT ROW + ) as seven_day_avg, + AVG(daily_score) OVER ( + ORDER BY execution_date + ROWS BETWEEN 13 PRECEDING AND CURRENT ROW + ) as fourteen_day_avg + FROM daily_scores + ) + SELECT *, + CASE + WHEN three_day_avg IS NOT NULL AND seven_day_avg IS NOT NULL AND seven_day_avg > 0 + THEN ((seven_day_avg - three_day_avg) / seven_day_avg * 100) + ELSE NULL + END as drop_percent + FROM trends + ORDER BY execution_date DESC + {limit_clause} + """ + + start_date = datetime.now(timezone.utc) - timedelta(days=days_back) + params = [graph_id, start_date] + user_filter = "" + if user_id: + user_filter = 'AND e."userId" = $3::text' + params.append(user_id) + + # Determine limit clause + limit_clause = "" if include_historical else "LIMIT 1" + + final_query = query_template.format( + schema_prefix="{schema_prefix}", + user_filter=user_filter, + limit_clause=limit_clause, + ) + + result = await query_raw_with_schema(final_query, *params) + + if not result: + return AccuracyTrendsResponse( + latest_data=AccuracyLatestData( + date=datetime.now(timezone.utc), + daily_score=None, + three_day_avg=None, + seven_day_avg=None, + fourteen_day_avg=None, + ), + alert=None, + ) + + latest = result[0] + + alert = None + if ( + latest["drop_percent"] is not None + and latest["drop_percent"] >= drop_threshold + and latest["three_day_avg"] is not None + and latest["seven_day_avg"] is not None + ): + alert = AccuracyAlertData( + graph_id=graph_id, + user_id=user_id, + drop_percent=float(latest["drop_percent"]), + three_day_avg=float(latest["three_day_avg"]), + seven_day_avg=float(latest["seven_day_avg"]), + detected_at=datetime.now(timezone.utc), + ) + + # Prepare historical data if requested + historical_data = None + if include_historical: + historical_data = [] + for row in result: + historical_data.append( + AccuracyLatestData( + date=row["execution_date"], + daily_score=( + float(row["daily_score"]) + if row["daily_score"] is not None + else None + ), + three_day_avg=( + float(row["three_day_avg"]) + if row["three_day_avg"] is not None + else None + ), + seven_day_avg=( + float(row["seven_day_avg"]) + if row["seven_day_avg"] is not None + else None + ), + fourteen_day_avg=( + float(row["fourteen_day_avg"]) + if row["fourteen_day_avg"] is not None + else None + ), + ) + ) + + return AccuracyTrendsResponse( + latest_data=AccuracyLatestData( + date=latest["execution_date"], + daily_score=( + float(latest["daily_score"]) + if latest["daily_score"] is not None + else None + ), + three_day_avg=( + float(latest["three_day_avg"]) + if latest["three_day_avg"] is not None + else None + ), + seven_day_avg=( + float(latest["seven_day_avg"]) + if latest["seven_day_avg"] is not None + else None + ), + fourteen_day_avg=( + float(latest["fourteen_day_avg"]) + if latest["fourteen_day_avg"] is not None + else None + ), + ), + alert=alert, + historical_data=historical_data, + ) + + +class MarketplaceGraphData(BaseModel): + """Data structure for marketplace graph 
monitoring.""" + + graph_id: str + user_id: Optional[str] + execution_count: int + + +async def get_marketplace_graphs_for_monitoring( + days_back: int = 30, + min_executions: int = 10, +) -> list[MarketplaceGraphData]: + """Get published marketplace graphs with recent executions for monitoring.""" + query_template = """ + WITH marketplace_graphs AS ( + SELECT DISTINCT + slv."agentGraphId" as graph_id, + slv."agentGraphVersion" as graph_version + FROM {schema_prefix}"StoreListing" sl + JOIN {schema_prefix}"StoreListingVersion" slv ON sl."activeVersionId" = slv."id" + WHERE sl."hasApprovedVersion" = true + AND sl."isDeleted" = false + ) + SELECT DISTINCT + mg.graph_id, + NULL as user_id, -- Marketplace graphs don't have a specific user_id for monitoring + COUNT(*) as execution_count + FROM marketplace_graphs mg + JOIN {schema_prefix}"AgentGraphExecution" e ON e."agentGraphId" = mg.graph_id + WHERE e."createdAt" >= $1::timestamp + AND e."isDeleted" = false + AND e."executionStatus" IN ('COMPLETED', 'FAILED', 'TERMINATED') + GROUP BY mg.graph_id + HAVING COUNT(*) >= $2 + ORDER BY execution_count DESC + """ + start_date = datetime.now(timezone.utc) - timedelta(days=days_back) + result = await query_raw_with_schema(query_template, start_date, min_executions) + + return [ + MarketplaceGraphData( + graph_id=row["graph_id"], + user_id=row["user_id"], + execution_count=int(row["execution_count"]), + ) + for row in result + ] diff --git a/autogpt_platform/backend/backend/data/api_key.py b/autogpt_platform/backend/backend/data/auth/api_key.py similarity index 95% rename from autogpt_platform/backend/backend/data/api_key.py rename to autogpt_platform/backend/backend/data/auth/api_key.py index 45194897de..2ecd5be9a5 100644 --- a/autogpt_platform/backend/backend/data/api_key.py +++ b/autogpt_platform/backend/backend/data/auth/api_key.py @@ -1,22 +1,24 @@ import logging import uuid from datetime import datetime, timezone -from typing import Optional +from typing import Literal, Optional from autogpt_libs.api_key.keysmith import APIKeySmith from prisma.enums import APIKeyPermission, APIKeyStatus from prisma.models import APIKey as PrismaAPIKey from prisma.types import APIKeyWhereUniqueInput -from pydantic import BaseModel, Field +from pydantic import Field from backend.data.includes import MAX_USER_API_KEYS_FETCH from backend.util.exceptions import NotAuthorizedError, NotFoundError +from .base import APIAuthorizationInfo + logger = logging.getLogger(__name__) keysmith = APIKeySmith() -class APIKeyInfo(BaseModel): +class APIKeyInfo(APIAuthorizationInfo): id: str name: str head: str = Field( @@ -26,12 +28,9 @@ class APIKeyInfo(BaseModel): description=f"The last {APIKeySmith.TAIL_LENGTH} characters of the key" ) status: APIKeyStatus - permissions: list[APIKeyPermission] - created_at: datetime - last_used_at: Optional[datetime] = None - revoked_at: Optional[datetime] = None description: Optional[str] = None - user_id: str + + type: Literal["api_key"] = "api_key" # type: ignore @staticmethod def from_db(api_key: PrismaAPIKey): @@ -41,7 +40,7 @@ class APIKeyInfo(BaseModel): head=api_key.head, tail=api_key.tail, status=APIKeyStatus(api_key.status), - permissions=[APIKeyPermission(p) for p in api_key.permissions], + scopes=[APIKeyPermission(p) for p in api_key.permissions], created_at=api_key.createdAt, last_used_at=api_key.lastUsedAt, revoked_at=api_key.revokedAt, @@ -211,7 +210,7 @@ async def suspend_api_key(key_id: str, user_id: str) -> APIKeyInfo: def has_permission(api_key: APIKeyInfo, required_permission: 
APIKeyPermission) -> bool: - return required_permission in api_key.permissions + return required_permission in api_key.scopes async def get_api_key_by_id(key_id: str, user_id: str) -> Optional[APIKeyInfo]: diff --git a/autogpt_platform/backend/backend/data/auth/base.py b/autogpt_platform/backend/backend/data/auth/base.py new file mode 100644 index 0000000000..e307b5f49f --- /dev/null +++ b/autogpt_platform/backend/backend/data/auth/base.py @@ -0,0 +1,15 @@ +from datetime import datetime +from typing import Literal, Optional + +from prisma.enums import APIKeyPermission +from pydantic import BaseModel + + +class APIAuthorizationInfo(BaseModel): + user_id: str + scopes: list[APIKeyPermission] + type: Literal["oauth", "api_key"] + created_at: datetime + expires_at: Optional[datetime] = None + last_used_at: Optional[datetime] = None + revoked_at: Optional[datetime] = None diff --git a/autogpt_platform/backend/backend/data/auth/oauth.py b/autogpt_platform/backend/backend/data/auth/oauth.py new file mode 100644 index 0000000000..e49586194c --- /dev/null +++ b/autogpt_platform/backend/backend/data/auth/oauth.py @@ -0,0 +1,872 @@ +""" +OAuth 2.0 Provider Data Layer + +Handles management of OAuth applications, authorization codes, +access tokens, and refresh tokens. + +Hashing strategy: +- Access tokens & Refresh tokens: SHA256 (deterministic, allows direct lookup by hash) +- Client secrets: Scrypt with salt (lookup by client_id, then verify with salt) +""" + +import hashlib +import logging +import secrets +import uuid +from datetime import datetime, timedelta, timezone +from typing import Literal, Optional + +from autogpt_libs.api_key.keysmith import APIKeySmith +from prisma.enums import APIKeyPermission as APIPermission +from prisma.models import OAuthAccessToken as PrismaOAuthAccessToken +from prisma.models import OAuthApplication as PrismaOAuthApplication +from prisma.models import OAuthAuthorizationCode as PrismaOAuthAuthorizationCode +from prisma.models import OAuthRefreshToken as PrismaOAuthRefreshToken +from prisma.types import OAuthApplicationUpdateInput +from pydantic import BaseModel, Field, SecretStr + +from .base import APIAuthorizationInfo + +logger = logging.getLogger(__name__) +keysmith = APIKeySmith() # Only used for client secret hashing (Scrypt) + + +def _generate_token() -> str: + """Generate a cryptographically secure random token.""" + return secrets.token_urlsafe(32) + + +def _hash_token(token: str) -> str: + """Hash a token using SHA256 (deterministic, for direct lookup).""" + return hashlib.sha256(token.encode()).hexdigest() + + +# Token TTLs +AUTHORIZATION_CODE_TTL = timedelta(minutes=10) +ACCESS_TOKEN_TTL = timedelta(hours=1) +REFRESH_TOKEN_TTL = timedelta(days=30) + +ACCESS_TOKEN_PREFIX = "agpt_xt_" +REFRESH_TOKEN_PREFIX = "agpt_rt_" + + +# ============================================================================ +# Exception Classes +# ============================================================================ + + +class OAuthError(Exception): + """Base OAuth error""" + + pass + + +class InvalidClientError(OAuthError): + """Invalid client_id or client_secret""" + + pass + + +class InvalidGrantError(OAuthError): + """Invalid or expired authorization code/refresh token""" + + def __init__(self, reason: str): + self.reason = reason + super().__init__(f"Invalid grant: {reason}") + + +class InvalidTokenError(OAuthError): + """Invalid, expired, or revoked token""" + + def __init__(self, reason: str): + self.reason = reason + super().__init__(f"Invalid token: {reason}") + + 
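The module docstring above distinguishes two hashing strategies: deterministic SHA256 for access and refresh tokens (so the stored hash itself can act as a lookup key) and salted Scrypt for client secrets (fetch the row by `client_id`, then verify against the stored salt). A minimal, self-contained sketch of that difference, using `hashlib` directly with illustrative cost parameters rather than the actual `APIKeySmith` implementation:

```python
import hashlib
import os


def sha256_lookup_hash(token: str) -> str:
    # Deterministic: equal tokens always yield equal hashes, so the hash
    # can be used directly as a unique database lookup key.
    return hashlib.sha256(token.encode()).hexdigest()


def scrypt_hash(secret: str, salt: bytes | None = None) -> tuple[str, str]:
    # Salted: every secret gets its own random salt, so hashes are not
    # comparable across records. Cost parameters here are illustrative,
    # not necessarily what APIKeySmith uses.
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.scrypt(secret.encode(), salt=salt, n=2**14, r=8, p=1)
    return digest.hex(), salt.hex()


def scrypt_verify(secret: str, stored_hash: str, stored_salt: str) -> bool:
    # Re-derive the hash with the stored salt and compare.
    recomputed, _ = scrypt_hash(secret, bytes.fromhex(stored_salt))
    return recomputed == stored_hash
```

This is why, in the code below, `validate_access_token` can look a token up with `find_unique(where={"token": token_hash})`, while `validate_client_credentials` first loads the application record and then calls `verify_secret` against it.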
+# ============================================================================ +# Data Models +# ============================================================================ + + +class OAuthApplicationInfo(BaseModel): + """OAuth application information (without client secret hash)""" + + id: str + name: str + description: Optional[str] = None + logo_url: Optional[str] = None + client_id: str + redirect_uris: list[str] + grant_types: list[str] + scopes: list[APIPermission] + owner_id: str + is_active: bool + created_at: datetime + updated_at: datetime + + @staticmethod + def from_db(app: PrismaOAuthApplication): + return OAuthApplicationInfo( + id=app.id, + name=app.name, + description=app.description, + logo_url=app.logoUrl, + client_id=app.clientId, + redirect_uris=app.redirectUris, + grant_types=app.grantTypes, + scopes=[APIPermission(s) for s in app.scopes], + owner_id=app.ownerId, + is_active=app.isActive, + created_at=app.createdAt, + updated_at=app.updatedAt, + ) + + +class OAuthApplicationInfoWithSecret(OAuthApplicationInfo): + """OAuth application with client secret hash (for validation)""" + + client_secret_hash: str + client_secret_salt: str + + @staticmethod + def from_db(app: PrismaOAuthApplication): + return OAuthApplicationInfoWithSecret( + **OAuthApplicationInfo.from_db(app).model_dump(), + client_secret_hash=app.clientSecret, + client_secret_salt=app.clientSecretSalt, + ) + + def verify_secret(self, plaintext_secret: str) -> bool: + """Verify a plaintext client secret against the stored hash""" + # Use keysmith.verify_key() with stored salt + return keysmith.verify_key( + plaintext_secret, self.client_secret_hash, self.client_secret_salt + ) + + +class OAuthAuthorizationCodeInfo(BaseModel): + """Authorization code information""" + + id: str + code: str + created_at: datetime + expires_at: datetime + application_id: str + user_id: str + scopes: list[APIPermission] + redirect_uri: str + code_challenge: Optional[str] = None + code_challenge_method: Optional[str] = None + used_at: Optional[datetime] = None + + @property + def is_used(self) -> bool: + return self.used_at is not None + + @staticmethod + def from_db(code: PrismaOAuthAuthorizationCode): + return OAuthAuthorizationCodeInfo( + id=code.id, + code=code.code, + created_at=code.createdAt, + expires_at=code.expiresAt, + application_id=code.applicationId, + user_id=code.userId, + scopes=[APIPermission(s) for s in code.scopes], + redirect_uri=code.redirectUri, + code_challenge=code.codeChallenge, + code_challenge_method=code.codeChallengeMethod, + used_at=code.usedAt, + ) + + +class OAuthAccessTokenInfo(APIAuthorizationInfo): + """Access token information""" + + id: str + expires_at: datetime # type: ignore + application_id: str + + type: Literal["oauth"] = "oauth" # type: ignore + + @staticmethod + def from_db(token: PrismaOAuthAccessToken): + return OAuthAccessTokenInfo( + id=token.id, + user_id=token.userId, + scopes=[APIPermission(s) for s in token.scopes], + created_at=token.createdAt, + expires_at=token.expiresAt, + last_used_at=None, + revoked_at=token.revokedAt, + application_id=token.applicationId, + ) + + +class OAuthAccessToken(OAuthAccessTokenInfo): + """Access token with plaintext token included (sensitive)""" + + token: SecretStr = Field(description="Plaintext token (sensitive)") + + @staticmethod + def from_db(token: PrismaOAuthAccessToken, plaintext_token: str): # type: ignore + return OAuthAccessToken( + **OAuthAccessTokenInfo.from_db(token).model_dump(), + token=SecretStr(plaintext_token), + ) + + +class 
OAuthRefreshTokenInfo(BaseModel): + """Refresh token information""" + + id: str + user_id: str + scopes: list[APIPermission] + created_at: datetime + expires_at: datetime + application_id: str + revoked_at: Optional[datetime] = None + + @property + def is_revoked(self) -> bool: + return self.revoked_at is not None + + @staticmethod + def from_db(token: PrismaOAuthRefreshToken): + return OAuthRefreshTokenInfo( + id=token.id, + user_id=token.userId, + scopes=[APIPermission(s) for s in token.scopes], + created_at=token.createdAt, + expires_at=token.expiresAt, + application_id=token.applicationId, + revoked_at=token.revokedAt, + ) + + +class OAuthRefreshToken(OAuthRefreshTokenInfo): + """Refresh token with plaintext token included (sensitive)""" + + token: SecretStr = Field(description="Plaintext token (sensitive)") + + @staticmethod + def from_db(token: PrismaOAuthRefreshToken, plaintext_token: str): # type: ignore + return OAuthRefreshToken( + **OAuthRefreshTokenInfo.from_db(token).model_dump(), + token=SecretStr(plaintext_token), + ) + + +class TokenIntrospectionResult(BaseModel): + """Result of token introspection (RFC 7662)""" + + active: bool + scopes: Optional[list[str]] = None + client_id: Optional[str] = None + user_id: Optional[str] = None + exp: Optional[int] = None # Unix timestamp + token_type: Optional[Literal["access_token", "refresh_token"]] = None + + +# ============================================================================ +# OAuth Application Management +# ============================================================================ + + +async def get_oauth_application(client_id: str) -> Optional[OAuthApplicationInfo]: + """Get OAuth application by client ID (without secret)""" + app = await PrismaOAuthApplication.prisma().find_unique( + where={"clientId": client_id} + ) + if not app: + return None + return OAuthApplicationInfo.from_db(app) + + +async def get_oauth_application_with_secret( + client_id: str, +) -> Optional[OAuthApplicationInfoWithSecret]: + """Get OAuth application by client ID (with secret hash for validation)""" + app = await PrismaOAuthApplication.prisma().find_unique( + where={"clientId": client_id} + ) + if not app: + return None + return OAuthApplicationInfoWithSecret.from_db(app) + + +async def validate_client_credentials( + client_id: str, client_secret: str +) -> OAuthApplicationInfo: + """ + Validate client credentials and return application info. 
+ + Raises: + InvalidClientError: If client_id or client_secret is invalid, or app is inactive + """ + app = await get_oauth_application_with_secret(client_id) + if not app: + raise InvalidClientError("Invalid client_id") + + if not app.is_active: + raise InvalidClientError("Application is not active") + + # Verify client secret + if not app.verify_secret(client_secret): + raise InvalidClientError("Invalid client_secret") + + # Return without secret hash + return OAuthApplicationInfo(**app.model_dump(exclude={"client_secret_hash"})) + + +def validate_redirect_uri(app: OAuthApplicationInfo, redirect_uri: str) -> bool: + """Validate that redirect URI is registered for the application""" + return redirect_uri in app.redirect_uris + + +def validate_scopes( + app: OAuthApplicationInfo, requested_scopes: list[APIPermission] +) -> bool: + """Validate that all requested scopes are allowed for the application""" + return all(scope in app.scopes for scope in requested_scopes) + + +# ============================================================================ +# Authorization Code Flow +# ============================================================================ + + +def _generate_authorization_code() -> str: + """Generate a cryptographically secure authorization code""" + # 32 bytes = 256 bits of entropy + return secrets.token_urlsafe(32) + + +async def create_authorization_code( + application_id: str, + user_id: str, + scopes: list[APIPermission], + redirect_uri: str, + code_challenge: Optional[str] = None, + code_challenge_method: Optional[Literal["S256", "plain"]] = None, +) -> OAuthAuthorizationCodeInfo: + """ + Create a new authorization code. + Expires in 10 minutes and can only be used once. + """ + code = _generate_authorization_code() + now = datetime.now(timezone.utc) + expires_at = now + AUTHORIZATION_CODE_TTL + + saved_code = await PrismaOAuthAuthorizationCode.prisma().create( + data={ + "id": str(uuid.uuid4()), + "code": code, + "expiresAt": expires_at, + "applicationId": application_id, + "userId": user_id, + "scopes": [s for s in scopes], + "redirectUri": redirect_uri, + "codeChallenge": code_challenge, + "codeChallengeMethod": code_challenge_method, + } + ) + + return OAuthAuthorizationCodeInfo.from_db(saved_code) + + +async def consume_authorization_code( + code: str, + application_id: str, + redirect_uri: str, + code_verifier: Optional[str] = None, +) -> tuple[str, list[APIPermission]]: + """ + Consume an authorization code and return (user_id, scopes). 
+ + This marks the code as used and validates: + - Code exists and matches application + - Code is not expired + - Code has not been used + - Redirect URI matches + - PKCE code verifier matches (if code challenge was provided) + + Raises: + InvalidGrantError: If code is invalid, expired, used, or PKCE fails + """ + auth_code = await PrismaOAuthAuthorizationCode.prisma().find_unique( + where={"code": code} + ) + + if not auth_code: + raise InvalidGrantError("authorization code not found") + + # Validate application + if auth_code.applicationId != application_id: + raise InvalidGrantError( + "authorization code does not belong to this application" + ) + + # Check if already used + if auth_code.usedAt is not None: + raise InvalidGrantError( + f"authorization code already used at {auth_code.usedAt}" + ) + + # Check expiration + now = datetime.now(timezone.utc) + if auth_code.expiresAt < now: + raise InvalidGrantError("authorization code expired") + + # Validate redirect URI + if auth_code.redirectUri != redirect_uri: + raise InvalidGrantError("redirect_uri mismatch") + + # Validate PKCE if code challenge was provided + if auth_code.codeChallenge: + if not code_verifier: + raise InvalidGrantError("code_verifier required but not provided") + + if not _verify_pkce( + code_verifier, auth_code.codeChallenge, auth_code.codeChallengeMethod + ): + raise InvalidGrantError("PKCE verification failed") + + # Mark code as used + await PrismaOAuthAuthorizationCode.prisma().update( + where={"code": code}, + data={"usedAt": now}, + ) + + return auth_code.userId, [APIPermission(s) for s in auth_code.scopes] + + +def _verify_pkce( + code_verifier: str, code_challenge: str, code_challenge_method: Optional[str] +) -> bool: + """ + Verify PKCE code verifier against code challenge. + + Supports: + - S256: SHA256(code_verifier) == code_challenge + - plain: code_verifier == code_challenge + """ + if code_challenge_method == "S256": + # Hash the verifier with SHA256 and base64url encode + hashed = hashlib.sha256(code_verifier.encode("ascii")).digest() + computed_challenge = ( + secrets.token_urlsafe(len(hashed)).encode("ascii").decode("ascii") + ) + # For proper base64url encoding + import base64 + + computed_challenge = ( + base64.urlsafe_b64encode(hashed).decode("ascii").rstrip("=") + ) + return secrets.compare_digest(computed_challenge, code_challenge) + elif code_challenge_method == "plain" or code_challenge_method is None: + # Plain comparison + return secrets.compare_digest(code_verifier, code_challenge) + else: + logger.warning(f"Unsupported code challenge method: {code_challenge_method}") + return False + + +# ============================================================================ +# Access Token Management +# ============================================================================ + + +async def create_access_token( + application_id: str, user_id: str, scopes: list[APIPermission] +) -> OAuthAccessToken: + """ + Create a new access token. + Returns OAuthAccessToken (with plaintext token). 
+ """ + plaintext_token = ACCESS_TOKEN_PREFIX + _generate_token() + token_hash = _hash_token(plaintext_token) + now = datetime.now(timezone.utc) + expires_at = now + ACCESS_TOKEN_TTL + + saved_token = await PrismaOAuthAccessToken.prisma().create( + data={ + "id": str(uuid.uuid4()), + "token": token_hash, # SHA256 hash for direct lookup + "expiresAt": expires_at, + "applicationId": application_id, + "userId": user_id, + "scopes": [s for s in scopes], + } + ) + + return OAuthAccessToken.from_db(saved_token, plaintext_token=plaintext_token) + + +async def validate_access_token( + token: str, +) -> tuple[OAuthAccessTokenInfo, OAuthApplicationInfo]: + """ + Validate an access token and return token info. + + Raises: + InvalidTokenError: If token is invalid, expired, or revoked + InvalidClientError: If the client application is not marked as active + """ + token_hash = _hash_token(token) + + # Direct lookup by hash + access_token = await PrismaOAuthAccessToken.prisma().find_unique( + where={"token": token_hash}, include={"Application": True} + ) + + if not access_token: + raise InvalidTokenError("access token not found") + + if not access_token.Application: # should be impossible + raise InvalidClientError("Client application not found") + + if not access_token.Application.isActive: + raise InvalidClientError("Client application is disabled") + + if access_token.revokedAt is not None: + raise InvalidTokenError("access token has been revoked") + + # Check expiration + now = datetime.now(timezone.utc) + if access_token.expiresAt < now: + raise InvalidTokenError("access token expired") + + return ( + OAuthAccessTokenInfo.from_db(access_token), + OAuthApplicationInfo.from_db(access_token.Application), + ) + + +async def revoke_access_token( + token: str, application_id: str +) -> OAuthAccessTokenInfo | None: + """ + Revoke an access token. + + Args: + token: The plaintext access token to revoke + application_id: The application ID making the revocation request. + Only tokens belonging to this application will be revoked. + + Returns: + OAuthAccessTokenInfo if token was found and revoked, None otherwise. + + Note: + Always performs exactly 2 DB queries regardless of outcome to prevent + timing side-channel attacks that could reveal token existence. + """ + try: + token_hash = _hash_token(token) + + # Use update_many to filter by both token and applicationId + updated_count = await PrismaOAuthAccessToken.prisma().update_many( + where={ + "token": token_hash, + "applicationId": application_id, + "revokedAt": None, + }, + data={"revokedAt": datetime.now(timezone.utc)}, + ) + + # Always perform second query to ensure constant time + result = await PrismaOAuthAccessToken.prisma().find_unique( + where={"token": token_hash} + ) + + # Only return result if we actually revoked something + if updated_count == 0: + return None + + return OAuthAccessTokenInfo.from_db(result) if result else None + except Exception as e: + logger.exception(f"Error revoking access token: {e}") + return None + + +# ============================================================================ +# Refresh Token Management +# ============================================================================ + + +async def create_refresh_token( + application_id: str, user_id: str, scopes: list[APIPermission] +) -> OAuthRefreshToken: + """ + Create a new refresh token. + Returns OAuthRefreshToken (with plaintext token). 
+ """ + plaintext_token = REFRESH_TOKEN_PREFIX + _generate_token() + token_hash = _hash_token(plaintext_token) + now = datetime.now(timezone.utc) + expires_at = now + REFRESH_TOKEN_TTL + + saved_token = await PrismaOAuthRefreshToken.prisma().create( + data={ + "id": str(uuid.uuid4()), + "token": token_hash, # SHA256 hash for direct lookup + "expiresAt": expires_at, + "applicationId": application_id, + "userId": user_id, + "scopes": [s for s in scopes], + } + ) + + return OAuthRefreshToken.from_db(saved_token, plaintext_token=plaintext_token) + + +async def refresh_tokens( + refresh_token: str, application_id: str +) -> tuple[OAuthAccessToken, OAuthRefreshToken]: + """ + Use a refresh token to create new access and refresh tokens. + Returns (new_access_token, new_refresh_token) both with plaintext tokens included. + + Raises: + InvalidGrantError: If refresh token is invalid, expired, or revoked + """ + token_hash = _hash_token(refresh_token) + + # Direct lookup by hash + rt = await PrismaOAuthRefreshToken.prisma().find_unique(where={"token": token_hash}) + + if not rt: + raise InvalidGrantError("refresh token not found") + + # NOTE: no need to check Application.isActive, this is checked by the token endpoint + + if rt.revokedAt is not None: + raise InvalidGrantError("refresh token has been revoked") + + # Validate application + if rt.applicationId != application_id: + raise InvalidGrantError("refresh token does not belong to this application") + + # Check expiration + now = datetime.now(timezone.utc) + if rt.expiresAt < now: + raise InvalidGrantError("refresh token expired") + + # Revoke old refresh token + await PrismaOAuthRefreshToken.prisma().update( + where={"token": token_hash}, + data={"revokedAt": now}, + ) + + # Create new access and refresh tokens with same scopes + scopes = [APIPermission(s) for s in rt.scopes] + new_access_token = await create_access_token( + rt.applicationId, + rt.userId, + scopes, + ) + new_refresh_token = await create_refresh_token( + rt.applicationId, + rt.userId, + scopes, + ) + + return new_access_token, new_refresh_token + + +async def revoke_refresh_token( + token: str, application_id: str +) -> OAuthRefreshTokenInfo | None: + """ + Revoke a refresh token. + + Args: + token: The plaintext refresh token to revoke + application_id: The application ID making the revocation request. + Only tokens belonging to this application will be revoked. + + Returns: + OAuthRefreshTokenInfo if token was found and revoked, None otherwise. + + Note: + Always performs exactly 2 DB queries regardless of outcome to prevent + timing side-channel attacks that could reveal token existence. 
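
`refresh_tokens` implements single-use rotation: the presented refresh token is revoked before a new access/refresh pair with the same scopes is minted. A sketch of how a token endpoint handler might call it for `grant_type=refresh_token`; the module path is assumed, not shown in this hunk:

```python
# Assumed import path; the actual module for these helpers is not visible here.
from backend.data.oauth import InvalidGrantError, refresh_tokens


async def handle_refresh_grant(application_id: str, presented_token: str):
    """Handle grant_type=refresh_token for an already-authenticated client."""
    try:
        # refresh_tokens() revokes the presented token before minting the new
        # pair, so the caller must hand both new tokens back to the client.
        return await refresh_tokens(presented_token, application_id)
    except InvalidGrantError as exc:
        # Per RFC 6749, invalid_grant tells the client to re-run authorization.
        return {"error": "invalid_grant", "error_description": str(exc)}
```
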
+ """ + try: + token_hash = _hash_token(token) + + # Use update_many to filter by both token and applicationId + updated_count = await PrismaOAuthRefreshToken.prisma().update_many( + where={ + "token": token_hash, + "applicationId": application_id, + "revokedAt": None, + }, + data={"revokedAt": datetime.now(timezone.utc)}, + ) + + # Always perform second query to ensure constant time + result = await PrismaOAuthRefreshToken.prisma().find_unique( + where={"token": token_hash} + ) + + # Only return result if we actually revoked something + if updated_count == 0: + return None + + return OAuthRefreshTokenInfo.from_db(result) if result else None + except Exception as e: + logger.exception(f"Error revoking refresh token: {e}") + return None + + +# ============================================================================ +# Token Introspection +# ============================================================================ + + +async def introspect_token( + token: str, + token_type_hint: Optional[Literal["access_token", "refresh_token"]] = None, +) -> TokenIntrospectionResult: + """ + Introspect a token and return its metadata (RFC 7662). + + Returns TokenIntrospectionResult with active=True and metadata if valid, + or active=False if the token is invalid/expired/revoked. + """ + # Try as access token first (or if hint says "access_token") + if token_type_hint != "refresh_token": + try: + token_info, app = await validate_access_token(token) + return TokenIntrospectionResult( + active=True, + scopes=list(s.value for s in token_info.scopes), + client_id=app.client_id if app else None, + user_id=token_info.user_id, + exp=int(token_info.expires_at.timestamp()), + token_type="access_token", + ) + except InvalidTokenError: + pass # Try as refresh token + + # Try as refresh token + token_hash = _hash_token(token) + refresh_token = await PrismaOAuthRefreshToken.prisma().find_unique( + where={"token": token_hash} + ) + + if refresh_token and refresh_token.revokedAt is None: + # Check if valid (not expired) + now = datetime.now(timezone.utc) + if refresh_token.expiresAt > now: + app = await get_oauth_application_by_id(refresh_token.applicationId) + return TokenIntrospectionResult( + active=True, + scopes=list(s for s in refresh_token.scopes), + client_id=app.client_id if app else None, + user_id=refresh_token.userId, + exp=int(refresh_token.expiresAt.timestamp()), + token_type="refresh_token", + ) + + # Token not found or inactive + return TokenIntrospectionResult(active=False) + + +async def get_oauth_application_by_id(app_id: str) -> Optional[OAuthApplicationInfo]: + """Get OAuth application by ID""" + app = await PrismaOAuthApplication.prisma().find_unique(where={"id": app_id}) + if not app: + return None + return OAuthApplicationInfo.from_db(app) + + +async def list_user_oauth_applications(user_id: str) -> list[OAuthApplicationInfo]: + """Get all OAuth applications owned by a user""" + apps = await PrismaOAuthApplication.prisma().find_many( + where={"ownerId": user_id}, + order={"createdAt": "desc"}, + ) + return [OAuthApplicationInfo.from_db(app) for app in apps] + + +async def update_oauth_application( + app_id: str, + *, + owner_id: str, + is_active: Optional[bool] = None, + logo_url: Optional[str] = None, +) -> Optional[OAuthApplicationInfo]: + """ + Update OAuth application active status. + Only the owner can update their app's status. + + Returns the updated app info, or None if app not found or not owned by user. 
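
`introspect_token` follows RFC 7662: it tries the access-token path first unless the hint says otherwise, and answers `active=False` rather than raising for unknown, expired, or revoked tokens. A small usage sketch (module path assumed):

```python
from backend.data.oauth import introspect_token  # assumed module path


async def describe_token(token: str) -> dict:
    """Shape an RFC 7662-style response from introspect_token()."""
    result = await introspect_token(token, token_type_hint="access_token")
    if not result.active:
        # Unknown, expired and revoked tokens all collapse to {"active": false}.
        return {"active": False}
    return {
        "active": True,
        "scope": " ".join(result.scopes),
        "client_id": result.client_id,
        "exp": result.exp,
        "token_type": result.token_type,
    }
```
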
+ """ + # First verify ownership + app = await PrismaOAuthApplication.prisma().find_first( + where={"id": app_id, "ownerId": owner_id} + ) + if not app: + return None + + patch: OAuthApplicationUpdateInput = {} + if is_active is not None: + patch["isActive"] = is_active + if logo_url: + patch["logoUrl"] = logo_url + if not patch: + return OAuthApplicationInfo.from_db(app) # return unchanged + + updated_app = await PrismaOAuthApplication.prisma().update( + where={"id": app_id}, + data=patch, + ) + return OAuthApplicationInfo.from_db(updated_app) if updated_app else None + + +# ============================================================================ +# Token Cleanup +# ============================================================================ + + +async def cleanup_expired_oauth_tokens() -> dict[str, int]: + """ + Delete expired OAuth tokens from the database. + + This removes: + - Expired authorization codes (10 min TTL) + - Expired access tokens (1 hour TTL) + - Expired refresh tokens (30 day TTL) + + Returns a dict with counts of deleted tokens by type. + """ + now = datetime.now(timezone.utc) + + # Delete expired authorization codes + codes_result = await PrismaOAuthAuthorizationCode.prisma().delete_many( + where={"expiresAt": {"lt": now}} + ) + + # Delete expired access tokens + access_result = await PrismaOAuthAccessToken.prisma().delete_many( + where={"expiresAt": {"lt": now}} + ) + + # Delete expired refresh tokens + refresh_result = await PrismaOAuthRefreshToken.prisma().delete_many( + where={"expiresAt": {"lt": now}} + ) + + deleted = { + "authorization_codes": codes_result, + "access_tokens": access_result, + "refresh_tokens": refresh_result, + } + + total = sum(deleted.values()) + if total > 0: + logger.info(f"Cleaned up {total} expired OAuth tokens: {deleted}") + + return deleted diff --git a/autogpt_platform/backend/backend/data/block.py b/autogpt_platform/backend/backend/data/block.py index b96211a829..727688dcf0 100644 --- a/autogpt_platform/backend/backend/data/block.py +++ b/autogpt_platform/backend/backend/data/block.py @@ -13,6 +13,7 @@ from typing import ( Optional, Sequence, Type, + TypeAlias, TypeVar, cast, get_origin, @@ -28,6 +29,13 @@ from backend.data.model import NodeExecutionStats from backend.integrations.providers import ProviderName from backend.util import json from backend.util.cache import cached +from backend.util.exceptions import ( + BlockError, + BlockExecutionError, + BlockInputError, + BlockOutputError, + BlockUnknownError, +) from backend.util.settings import Config from .model import ( @@ -35,6 +43,7 @@ from .model import ( Credentials, CredentialsFieldInfo, CredentialsMetaInput, + SchemaField, is_credentials_field_name, ) @@ -62,6 +71,7 @@ class BlockType(Enum): AGENT = "Agent" AI = "AI" AYRSHARE = "Ayrshare" + HUMAN_IN_THE_LOOP = "Human In The Loop" class BlockCategory(Enum): @@ -256,14 +266,61 @@ class BlockSchema(BaseModel): ) } + @classmethod + def get_auto_credentials_fields(cls) -> dict[str, dict[str, Any]]: + """ + Get fields that have auto_credentials metadata (e.g., GoogleDriveFileInput). + + Returns a dict mapping kwarg_name -> {field_name, auto_credentials_config} + + Raises: + ValueError: If multiple fields have the same kwarg_name, as this would + cause silent overwriting and only the last field would be processed. 
+ """ + result: dict[str, dict[str, Any]] = {} + schema = cls.jsonschema() + properties = schema.get("properties", {}) + + for field_name, field_schema in properties.items(): + auto_creds = field_schema.get("auto_credentials") + if auto_creds: + kwarg_name = auto_creds.get("kwarg_name", "credentials") + if kwarg_name in result: + raise ValueError( + f"Duplicate auto_credentials kwarg_name '{kwarg_name}' " + f"in fields '{result[kwarg_name]['field_name']}' and " + f"'{field_name}' on {cls.__qualname__}" + ) + result[kwarg_name] = { + "field_name": field_name, + "config": auto_creds, + } + return result + @classmethod def get_credentials_fields_info(cls) -> dict[str, CredentialsFieldInfo]: - return { - field_name: CredentialsFieldInfo.model_validate( + result = {} + + # Regular credentials fields + for field_name in cls.get_credentials_fields().keys(): + result[field_name] = CredentialsFieldInfo.model_validate( cls.get_field_schema(field_name), by_alias=True ) - for field_name in cls.get_credentials_fields().keys() - } + + # Auto-generated credentials fields (from GoogleDriveFileInput etc.) + for kwarg_name, info in cls.get_auto_credentials_fields().items(): + config = info["config"] + # Build a schema-like dict that CredentialsFieldInfo can parse + auto_schema = { + "credentials_provider": [config.get("provider", "google")], + "credentials_types": [config.get("type", "oauth2")], + "credentials_scopes": config.get("scopes"), + } + result[kwarg_name] = CredentialsFieldInfo.model_validate( + auto_schema, by_alias=True + ) + + return result @classmethod def get_input_defaults(cls, data: BlockInput) -> BlockInput: @@ -279,14 +336,42 @@ class BlockSchema(BaseModel): return cls.get_required_fields() - set(data) -BlockSchemaInputType = TypeVar("BlockSchemaInputType", bound=BlockSchema) -BlockSchemaOutputType = TypeVar("BlockSchemaOutputType", bound=BlockSchema) +class BlockSchemaInput(BlockSchema): + """ + Base schema class for block inputs. + All block input schemas should extend this class for consistency. + """ - -class EmptySchema(BlockSchema): pass +class BlockSchemaOutput(BlockSchema): + """ + Base schema class for block outputs that includes a standard error field. + All block output schemas should extend this class to ensure consistent error handling. 
+ """ + + error: str = SchemaField( + description="Error message if the operation failed", default="" + ) + + +BlockSchemaInputType = TypeVar("BlockSchemaInputType", bound=BlockSchemaInput) +BlockSchemaOutputType = TypeVar("BlockSchemaOutputType", bound=BlockSchemaOutput) + + +class EmptyInputSchema(BlockSchemaInput): + pass + + +class EmptyOutputSchema(BlockSchemaOutput): + pass + + +# For backward compatibility - will be deprecated +EmptySchema = EmptyOutputSchema + + # --8<-- [start:BlockWebhookConfig] class BlockManualWebhookConfig(BaseModel): """ @@ -344,8 +429,8 @@ class Block(ABC, Generic[BlockSchemaInputType, BlockSchemaOutputType]): description: str = "", contributors: list[ContributorDetails] = [], categories: set[BlockCategory] | None = None, - input_schema: Type[BlockSchemaInputType] = EmptySchema, - output_schema: Type[BlockSchemaOutputType] = EmptySchema, + input_schema: Type[BlockSchemaInputType] = EmptyInputSchema, + output_schema: Type[BlockSchemaOutputType] = EmptyOutputSchema, test_input: BlockInput | list[BlockInput] | None = None, test_output: BlockTestOutput | list[BlockTestOutput] | None = None, test_mock: dict[str, Any] | None = None, @@ -512,9 +597,29 @@ class Block(ABC, Generic[BlockSchemaInputType, BlockSchemaOutputType]): ) async def execute(self, input_data: BlockInput, **kwargs) -> BlockOutput: + try: + async for output_name, output_data in self._execute(input_data, **kwargs): + yield output_name, output_data + except Exception as ex: + if isinstance(ex, BlockError): + raise ex + else: + raise ( + BlockExecutionError + if isinstance(ex, ValueError) + else BlockUnknownError + )( + message=str(ex), + block_name=self.name, + block_id=self.id, + ) from ex + + async def _execute(self, input_data: BlockInput, **kwargs) -> BlockOutput: if error := self.input_schema.validate_data(input_data): - raise ValueError( - f"Unable to execute block with invalid input data: {error}" + raise BlockInputError( + message=f"Unable to execute block with invalid input data: {error}", + block_name=self.name, + block_id=self.id, ) async for output_name, output_data in self.run( @@ -522,11 +627,17 @@ class Block(ABC, Generic[BlockSchemaInputType, BlockSchemaOutputType]): **kwargs, ): if output_name == "error": - raise RuntimeError(output_data) + raise BlockExecutionError( + message=output_data, block_name=self.name, block_id=self.id + ) if self.block_type == BlockType.STANDARD and ( error := self.output_schema.validate_field(output_name, output_data) ): - raise ValueError(f"Block produced an invalid output data: {error}") + raise BlockOutputError( + message=f"Block produced an invalid output data: {error}", + block_name=self.name, + block_id=self.id, + ) yield output_name, output_data def is_triggered_by_event_type( @@ -546,6 +657,10 @@ class Block(ABC, Generic[BlockSchemaInputType, BlockSchemaOutputType]): ] +# Type alias for any block with standard input/output schemas +AnyBlockSchema: TypeAlias = Block[BlockSchemaInput, BlockSchemaOutput] + + # ======================= Block Helper Functions ======================= # @@ -556,7 +671,7 @@ def get_blocks() -> dict[str, Type[Block]]: def is_block_auth_configured( - block_cls: type["Block[BlockSchema, BlockSchema]"], + block_cls: type[AnyBlockSchema], ) -> bool: """ Check if a block has a valid authentication method configured at runtime. 
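
With the new base classes, a block's output schema inherits the shared `error` field, and failures raised from `run` surface as typed `BlockError` subclasses via `execute`. A minimal illustrative block; the placeholder id and the constructor arguments simply mirror the signature visible in this hunk:

```python
from backend.data.block import Block, BlockOutput, BlockSchemaInput, BlockSchemaOutput
from backend.data.model import SchemaField


class GreetInput(BlockSchemaInput):
    name: str = SchemaField(description="Who to greet")


class GreetOutput(BlockSchemaOutput):
    # The `error` output field is inherited from BlockSchemaOutput.
    greeting: str = SchemaField(description="Rendered greeting", default="")


class GreetBlock(Block):
    def __init__(self):
        super().__init__(
            id="00000000-0000-0000-0000-000000000000",  # placeholder UUID
            description="Produces a greeting for the given name",
            input_schema=GreetInput,
            output_schema=GreetOutput,
        )

    async def run(self, input_data: GreetInput, **kwargs) -> BlockOutput:
        if not input_data.name:
            # Yielding on "error" is surfaced as a BlockExecutionError by execute().
            yield "error", "name must not be empty"
            return
        yield "greeting", f"Hello, {input_data.name}!"
```
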
@@ -593,11 +708,6 @@ def is_block_auth_configured( f"Block {block_cls.__name__} has only optional credential inputs" " - will work without credentials configured" ) - if len(credential_inputs) > 1: - logger.warning( - f"Block {block_cls.__name__} has multiple credential inputs: " - f"{', '.join(credential_inputs.keys())}" - ) # Check if the credential inputs for this block are correctly configured for field_name, field_info in credential_inputs.items(): @@ -717,7 +827,7 @@ async def initialize_blocks() -> None: # Note on the return type annotation: https://github.com/microsoft/pyright/issues/10281 -def get_block(block_id: str) -> Block[BlockSchema, BlockSchema] | None: +def get_block(block_id: str) -> AnyBlockSchema | None: cls = get_blocks().get(block_id) return cls() if cls else None @@ -738,3 +848,12 @@ def get_io_block_ids() -> Sequence[str]: for id, B in get_blocks().items() if B().block_type in (BlockType.INPUT, BlockType.OUTPUT) ] + + +@cached(ttl_seconds=3600) +def get_human_in_the_loop_block_ids() -> Sequence[str]: + return [ + id + for id, B in get_blocks().items() + if B().block_type == BlockType.HUMAN_IN_THE_LOOP + ] diff --git a/autogpt_platform/backend/backend/data/block_cost_config.py b/autogpt_platform/backend/backend/data/block_cost_config.py index 32087ff4e6..6bb32d3a47 100644 --- a/autogpt_platform/backend/backend/data/block_cost_config.py +++ b/autogpt_platform/backend/backend/data/block_cost_config.py @@ -1,10 +1,17 @@ from typing import Type +from backend.blocks.ai_image_customizer import AIImageCustomizerBlock, GeminiImageModel +from backend.blocks.ai_image_generator_block import AIImageGeneratorBlock, ImageGenModel from backend.blocks.ai_music_generator import AIMusicGeneratorBlock -from backend.blocks.ai_shortform_video_block import AIShortformVideoCreatorBlock +from backend.blocks.ai_shortform_video_block import ( + AIAdMakerVideoCreatorBlock, + AIScreenshotToVideoAdBlock, + AIShortformVideoCreatorBlock, +) from backend.blocks.apollo.organization import SearchOrganizationsBlock from backend.blocks.apollo.people import SearchPeopleBlock from backend.blocks.apollo.person import GetPersonDetailBlock +from backend.blocks.codex import CodeGenerationBlock, CodexModel from backend.blocks.enrichlayer.linkedin import ( GetLinkedinProfileBlock, GetLinkedinProfilePictureBlock, @@ -57,9 +64,10 @@ MODEL_COST: dict[LlmModel, int] = { LlmModel.O1_MINI: 4, # GPT-5 models LlmModel.GPT5: 2, + LlmModel.GPT5_1: 5, LlmModel.GPT5_MINI: 1, LlmModel.GPT5_NANO: 1, - LlmModel.GPT5_CHAT: 2, + LlmModel.GPT5_CHAT: 5, LlmModel.GPT41: 2, LlmModel.GPT41_MINI: 1, LlmModel.GPT4O_MINI: 1, @@ -70,31 +78,26 @@ MODEL_COST: dict[LlmModel, int] = { LlmModel.CLAUDE_4_OPUS: 21, LlmModel.CLAUDE_4_SONNET: 5, LlmModel.CLAUDE_4_5_HAIKU: 4, + LlmModel.CLAUDE_4_5_OPUS: 14, LlmModel.CLAUDE_4_5_SONNET: 9, LlmModel.CLAUDE_3_7_SONNET: 5, - LlmModel.CLAUDE_3_5_SONNET: 4, - LlmModel.CLAUDE_3_5_HAIKU: 1, # $0.80 / $4.00 LlmModel.CLAUDE_3_HAIKU: 1, LlmModel.AIML_API_QWEN2_5_72B: 1, LlmModel.AIML_API_LLAMA3_1_70B: 1, LlmModel.AIML_API_LLAMA3_3_70B: 1, LlmModel.AIML_API_META_LLAMA_3_1_70B: 1, LlmModel.AIML_API_LLAMA_3_2_3B: 1, - LlmModel.LLAMA3_8B: 1, - LlmModel.LLAMA3_70B: 1, - LlmModel.GEMMA2_9B: 1, LlmModel.LLAMA3_3_70B: 1, # $0.59 / $0.79 LlmModel.LLAMA3_1_8B: 1, LlmModel.OLLAMA_LLAMA3_3: 1, LlmModel.OLLAMA_LLAMA3_2: 1, LlmModel.OLLAMA_LLAMA3_8B: 1, LlmModel.OLLAMA_LLAMA3_405B: 1, - LlmModel.DEEPSEEK_LLAMA_70B: 1, # ? / ? 
LlmModel.OLLAMA_DOLPHIN: 1, LlmModel.OPENAI_GPT_OSS_120B: 1, LlmModel.OPENAI_GPT_OSS_20B: 1, - LlmModel.GEMINI_FLASH_1_5: 1, LlmModel.GEMINI_2_5_PRO: 4, + LlmModel.GEMINI_3_PRO_PREVIEW: 5, LlmModel.MISTRAL_NEMO: 1, LlmModel.COHERE_COMMAND_R_08_2024: 1, LlmModel.COHERE_COMMAND_R_PLUS_08_2024: 3, @@ -116,6 +119,9 @@ MODEL_COST: dict[LlmModel, int] = { LlmModel.LLAMA_API_LLAMA3_3_8B: 1, LlmModel.LLAMA_API_LLAMA3_3_70B: 1, LlmModel.GROK_4: 9, + LlmModel.GROK_4_FAST: 1, + LlmModel.GROK_4_1_FAST: 1, + LlmModel.GROK_CODE_FAST_1: 1, LlmModel.KIMI_K2: 1, LlmModel.QWEN3_235B_A22B_THINKING: 1, LlmModel.QWEN3_CODER: 9, @@ -261,6 +267,20 @@ BLOCK_COSTS: dict[Type[Block], list[BlockCost]] = { AIStructuredResponseGeneratorBlock: LLM_COST, AITextSummarizerBlock: LLM_COST, AIListGeneratorBlock: LLM_COST, + CodeGenerationBlock: [ + BlockCost( + cost_type=BlockCostType.RUN, + cost_filter={ + "model": CodexModel.GPT5_1_CODEX, + "credentials": { + "id": openai_credentials.id, + "provider": openai_credentials.provider, + "type": openai_credentials.type, + }, + }, + cost_amount=5, + ) + ], CreateTalkingAvatarVideoBlock: [ BlockCost( cost_amount=15, @@ -323,7 +343,31 @@ BLOCK_COSTS: dict[Type[Block], list[BlockCost]] = { ], AIShortformVideoCreatorBlock: [ BlockCost( - cost_amount=50, + cost_amount=307, + cost_filter={ + "credentials": { + "id": revid_credentials.id, + "provider": revid_credentials.provider, + "type": revid_credentials.type, + } + }, + ) + ], + AIAdMakerVideoCreatorBlock: [ + BlockCost( + cost_amount=714, + cost_filter={ + "credentials": { + "id": revid_credentials.id, + "provider": revid_credentials.provider, + "type": revid_credentials.type, + } + }, + ) + ], + AIScreenshotToVideoAdBlock: [ + BlockCost( + cost_amount=612, cost_filter={ "credentials": { "id": revid_credentials.id, @@ -514,4 +558,85 @@ BLOCK_COSTS: dict[Type[Block], list[BlockCost]] = { }, ) ], + AIImageGeneratorBlock: [ + BlockCost( + cost_amount=5, # SD3.5 Medium: ~$0.035 per image + cost_filter={ + "model": ImageGenModel.SD3_5, + "credentials": { + "id": replicate_credentials.id, + "provider": replicate_credentials.provider, + "type": replicate_credentials.type, + }, + }, + ), + BlockCost( + cost_amount=6, # Flux 1.1 Pro: ~$0.04 per image + cost_filter={ + "model": ImageGenModel.FLUX, + "credentials": { + "id": replicate_credentials.id, + "provider": replicate_credentials.provider, + "type": replicate_credentials.type, + }, + }, + ), + BlockCost( + cost_amount=10, # Flux 1.1 Pro Ultra: ~$0.08 per image + cost_filter={ + "model": ImageGenModel.FLUX_ULTRA, + "credentials": { + "id": replicate_credentials.id, + "provider": replicate_credentials.provider, + "type": replicate_credentials.type, + }, + }, + ), + BlockCost( + cost_amount=7, # Recraft v3: ~$0.05 per image + cost_filter={ + "model": ImageGenModel.RECRAFT, + "credentials": { + "id": replicate_credentials.id, + "provider": replicate_credentials.provider, + "type": replicate_credentials.type, + }, + }, + ), + BlockCost( + cost_amount=14, # Nano Banana Pro: $0.14 per image at 2K + cost_filter={ + "model": ImageGenModel.NANO_BANANA_PRO, + "credentials": { + "id": replicate_credentials.id, + "provider": replicate_credentials.provider, + "type": replicate_credentials.type, + }, + }, + ), + ], + AIImageCustomizerBlock: [ + BlockCost( + cost_amount=10, # Nano Banana (original) + cost_filter={ + "model": GeminiImageModel.NANO_BANANA, + "credentials": { + "id": replicate_credentials.id, + "provider": replicate_credentials.provider, + "type": replicate_credentials.type, + }, + }, + 
), + BlockCost( + cost_amount=14, # Nano Banana Pro: $0.14 per image at 2K + cost_filter={ + "model": GeminiImageModel.NANO_BANANA_PRO, + "credentials": { + "id": replicate_credentials.id, + "provider": replicate_credentials.provider, + "type": replicate_credentials.type, + }, + }, + ), + ], } diff --git a/autogpt_platform/backend/backend/data/credit.py b/autogpt_platform/backend/backend/data/credit.py index a8942d3b2e..95f0b158e1 100644 --- a/autogpt_platform/backend/backend/data/credit.py +++ b/autogpt_platform/backend/backend/data/credit.py @@ -16,6 +16,7 @@ from prisma.models import CreditRefundRequest, CreditTransaction, User, UserBala from prisma.types import CreditRefundRequestCreateInput, CreditTransactionWhereInput from pydantic import BaseModel +from backend.api.features.admin.model import UserHistoryResponse from backend.data.block_cost_config import BLOCK_COSTS from backend.data.db import query_raw_with_schema from backend.data.includes import MAX_CREDIT_REFUND_REQUESTS_FETCH @@ -29,7 +30,6 @@ from backend.data.model import ( from backend.data.notifications import NotificationEventModel, RefundRequestData from backend.data.user import get_user_by_id, get_user_email_by_id from backend.notifications.notifications import queue_notification_async -from backend.server.v2.admin.model import UserHistoryResponse from backend.util.exceptions import InsufficientBalanceError from backend.util.feature_flag import Flag, is_feature_enabled from backend.util.json import SafeJson, dumps diff --git a/autogpt_platform/backend/backend/data/credit_test.py b/autogpt_platform/backend/backend/data/credit_test.py index 8e9487f74a..391a373b86 100644 --- a/autogpt_platform/backend/backend/data/credit_test.py +++ b/autogpt_platform/backend/backend/data/credit_test.py @@ -7,7 +7,7 @@ from prisma.models import CreditTransaction, UserBalance from backend.blocks.llm import AITextGeneratorBlock from backend.data.block import get_block from backend.data.credit import BetaUserCredit, UsageTransactionMetadata -from backend.data.execution import NodeExecutionEntry, UserContext +from backend.data.execution import ExecutionContext, NodeExecutionEntry from backend.data.user import DEFAULT_USER_ID from backend.executor.utils import block_usage_cost from backend.integrations.credentials_store import openai_credentials @@ -73,6 +73,7 @@ async def test_block_credit_usage(server: SpinTestServer): NodeExecutionEntry( user_id=DEFAULT_USER_ID, graph_id="test_graph", + graph_version=1, node_id="test_node", graph_exec_id="test_graph_exec", node_exec_id="test_node_exec", @@ -85,7 +86,7 @@ async def test_block_credit_usage(server: SpinTestServer): "type": openai_credentials.type, }, }, - user_context=UserContext(timezone="UTC"), + execution_context=ExecutionContext(user_timezone="UTC"), ), ) assert spending_amount_1 > 0 @@ -94,12 +95,13 @@ async def test_block_credit_usage(server: SpinTestServer): NodeExecutionEntry( user_id=DEFAULT_USER_ID, graph_id="test_graph", + graph_version=1, node_id="test_node", graph_exec_id="test_graph_exec", node_exec_id="test_node_exec", block_id=AITextGeneratorBlock().id, inputs={"model": "gpt-4-turbo", "api_key": "owned_api_key"}, - user_context=UserContext(timezone="UTC"), + execution_context=ExecutionContext(user_timezone="UTC"), ), ) assert spending_amount_2 == 0 diff --git a/autogpt_platform/backend/backend/data/db.py b/autogpt_platform/backend/backend/data/db.py index 7fab1e3619..31a27e9163 100644 --- a/autogpt_platform/backend/backend/data/db.py +++ 
b/autogpt_platform/backend/backend/data/db.py @@ -1,6 +1,7 @@ import logging import os from contextlib import asynccontextmanager +from datetime import timedelta from urllib.parse import parse_qsl, urlencode, urlparse, urlunparse from uuid import uuid4 @@ -82,17 +83,19 @@ async def disconnect(): raise ConnectionError("Failed to disconnect from Prisma.") -# Transaction timeout constant (in milliseconds) -TRANSACTION_TIMEOUT = 30000 # 30 seconds - Increased from 15s to prevent timeout errors during graph creation under load +# Transaction timeout constant: +# increased from 15s to prevent timeout errors during graph creation under load. +TRANSACTION_TIMEOUT = timedelta(seconds=30) @asynccontextmanager -async def transaction(timeout: int = TRANSACTION_TIMEOUT): +async def transaction(timeout: timedelta = TRANSACTION_TIMEOUT): """ Create a database transaction with optional timeout. Args: - timeout: Transaction timeout in milliseconds. If None, uses TRANSACTION_TIMEOUT (15s). + timeout: Transaction timeout as a timedelta. + Defaults to `TRANSACTION_TIMEOUT` (30s). """ async with prisma.tx(timeout=timeout) as tx: yield tx @@ -108,7 +111,7 @@ def get_database_schema() -> str: async def query_raw_with_schema(query_template: str, *args) -> list[dict]: """Execute raw SQL query with proper schema handling.""" schema = get_database_schema() - schema_prefix = f"{schema}." if schema != "public" else "" + schema_prefix = f'"{schema}".' if schema != "public" else "" formatted_query = query_template.format(schema_prefix=schema_prefix) import prisma as prisma_module diff --git a/autogpt_platform/backend/backend/data/dynamic_fields.py b/autogpt_platform/backend/backend/data/dynamic_fields.py index 775394d189..51dc7bd41d 100644 --- a/autogpt_platform/backend/backend/data/dynamic_fields.py +++ b/autogpt_platform/backend/backend/data/dynamic_fields.py @@ -92,6 +92,18 @@ def get_dynamic_field_description(field_name: str) -> str: return f"Value for {field_name}" +def is_tool_pin(name: str) -> bool: + """Check if a pin name represents a tool connection.""" + return name.startswith("tools_^_") or name == "tools" + + +def sanitize_pin_name(name: str) -> str: + sanitized_name = extract_base_field_name(name) + if is_tool_pin(sanitized_name): + return "tools" + return sanitized_name + + # --------------------------------------------------------------------------- # # Dynamic field parsing and merging utilities # --------------------------------------------------------------------------- # @@ -137,30 +149,64 @@ def _tokenise(path: str) -> list[tuple[str, str]] | None: return tokens -def parse_execution_output(output: tuple[str, Any], name: str) -> Any: +def parse_execution_output( + output_item: tuple[str, Any], + link_output_selector: str, + sink_node_id: str | None = None, + sink_pin_name: str | None = None, +) -> Any: """ - Retrieve a nested value out of `output` using the flattened *name*. + Retrieve a nested value out of `output` using the flattened `link_output_selector`. - On any failure (wrong name, wrong type, out-of-range, bad path) - returns **None**. + On any failure (wrong name, wrong type, out-of-range, bad path) returns **None**. + + ### Special Case: Tool pins + For regular output pins, the `output_item`'s name will simply be the field name, and + `link_output_selector` (= the `source_name` of the link) may provide a "selector" + used to extract part of the output value and route it through the link + to the next node. 
+ + However, for tool pins, it is the other way around: the `output_item`'s name + provides the routing information (`tools_^_{sink_node_id}_~_{field_name}`), + and the `link_output_selector` is simply `"tools"` + (or `"tools_^_{tool_name}_~_{field_name}"` for backward compatibility). Args: - output: Tuple of (base_name, data) representing a block output entry - name: The flattened field name to extract from the output data + output_item: Tuple of (base_name, data) representing a block output entry. + link_output_selector: The flattened field name to extract from the output data. + sink_node_id: Sink node ID, used for tool use routing. + sink_pin_name: Sink pin name, used for tool use routing. Returns: - The value at the specified path, or None if not found/invalid + The value at the specified path, or `None` if not found/invalid. """ - base_name, data = output + output_pin_name, data = output_item + + # Special handling for tool pins + if is_tool_pin(link_output_selector) and ( # "tools" or "tools_^_…" + output_pin_name.startswith("tools_^_") and "_~_" in output_pin_name + ): + if not (sink_node_id and sink_pin_name): + raise ValueError( + "sink_node_id and sink_pin_name must be provided for tool pin routing" + ) + + # Extract routing information from emit key: tools_^_{node_id}_~_{field} + selector = output_pin_name[8:] # Remove "tools_^_" prefix + target_node_id, target_input_pin = selector.split("_~_", 1) + if target_node_id == sink_node_id and target_input_pin == sink_pin_name: + return data + else: + return None # Exact match → whole object - if name == base_name: + if link_output_selector == output_pin_name: return data # Must start with the expected name - if not name.startswith(base_name): + if not link_output_selector.startswith(output_pin_name): return None - path = name[len(base_name) :] + path = link_output_selector[len(output_pin_name) :] if not path: return None # nothing left to parse diff --git a/autogpt_platform/backend/backend/data/execution.py b/autogpt_platform/backend/backend/data/execution.py index 6c40e55a31..020a5a1906 100644 --- a/autogpt_platform/backend/backend/data/execution.py +++ b/autogpt_platform/backend/backend/data/execution.py @@ -5,6 +5,7 @@ from enum import Enum from multiprocessing import Manager from queue import Empty from typing import ( + TYPE_CHECKING, Annotated, Any, AsyncGenerator, @@ -34,6 +35,7 @@ from prisma.types import ( AgentNodeExecutionKeyValueDataCreateInput, AgentNodeExecutionUpdateInput, AgentNodeExecutionWhereInput, + AgentNodeExecutionWhereUniqueInput, ) from pydantic import BaseModel, ConfigDict, JsonValue, ValidationError from pydantic.fields import Field @@ -64,12 +66,27 @@ from .includes import ( ) from .model import CredentialsMetaInput, GraphExecutionStats, NodeExecutionStats +if TYPE_CHECKING: + pass + T = TypeVar("T") logger = logging.getLogger(__name__) config = Config() +class ExecutionContext(BaseModel): + """ + Unified context that carries execution-level data throughout the entire execution flow. + This includes information needed by blocks, sub-graphs, and execution management. 
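
The tool-pin branch above inverts the usual routing: the emitted pin name carries the destination, while the link's selector is just `tools`. A small, self-contained illustration with made-up node and pin names:

```python
from backend.data.dynamic_fields import parse_execution_output

# A tool-calling block emits on a pin named "tools_^_{sink_node_id}_~_{field}".
output_item = ("tools_^_node-b_~_query", {"q": "weather in Paris"})

# The link's source selector is simply "tools"; routing comes from the emit key.
value = parse_execution_output(
    output_item,
    "tools",
    sink_node_id="node-b",
    sink_pin_name="query",
)
assert value == {"q": "weather in Paris"}

# A different sink does not match the emit key, so nothing is routed to it.
assert (
    parse_execution_output(
        output_item, "tools", sink_node_id="node-c", sink_pin_name="query"
    )
    is None
)
```
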
+ """ + + safe_mode: bool = True + user_timezone: str = "UTC" + root_execution_id: Optional[str] = None + parent_execution_id: Optional[str] = None + + # -------------------------- Models -------------------------- # @@ -96,11 +113,14 @@ NodesInputMasks = Mapping[str, NodeInputMask] VALID_STATUS_TRANSITIONS = { ExecutionStatus.QUEUED: [ ExecutionStatus.INCOMPLETE, + ExecutionStatus.TERMINATED, # For resuming halted execution + ExecutionStatus.REVIEW, # For resuming after review ], ExecutionStatus.RUNNING: [ ExecutionStatus.INCOMPLETE, ExecutionStatus.QUEUED, ExecutionStatus.TERMINATED, # For resuming halted execution + ExecutionStatus.REVIEW, # For resuming after review ], ExecutionStatus.COMPLETED: [ ExecutionStatus.RUNNING, @@ -109,11 +129,16 @@ VALID_STATUS_TRANSITIONS = { ExecutionStatus.INCOMPLETE, ExecutionStatus.QUEUED, ExecutionStatus.RUNNING, + ExecutionStatus.REVIEW, ], ExecutionStatus.TERMINATED: [ ExecutionStatus.INCOMPLETE, ExecutionStatus.QUEUED, ExecutionStatus.RUNNING, + ExecutionStatus.REVIEW, + ], + ExecutionStatus.REVIEW: [ + ExecutionStatus.RUNNING, ], } @@ -175,6 +200,10 @@ class GraphExecutionMeta(BaseDbModel): default=None, description="AI-generated summary of what the agent did", ) + correctness_score: float | None = Field( + default=None, + description="AI-generated score (0.0-1.0) indicating how well the execution achieved its intended purpose", + ) def to_db(self) -> GraphExecutionStats: return GraphExecutionStats( @@ -187,6 +216,13 @@ class GraphExecutionMeta(BaseDbModel): node_error_count=self.node_error_count, error=self.error, activity_status=self.activity_status, + correctness_score=self.correctness_score, + ) + + def without_activity_features(self) -> "GraphExecutionMeta.Stats": + """Return a copy of stats with activity features (activity_status, correctness_score) set to None.""" + return self.model_copy( + update={"activity_status": None, "correctness_score": None} ) stats: Stats | None @@ -244,6 +280,7 @@ class GraphExecutionMeta(BaseDbModel): else stats.error ), activity_status=stats.activity_status, + correctness_score=stats.correctness_score, ) if stats else None @@ -344,7 +381,7 @@ class GraphExecutionWithNodes(GraphExecution): def to_graph_execution_entry( self, - user_context: "UserContext", + execution_context: ExecutionContext, compiled_nodes_input_masks: Optional[NodesInputMasks] = None, ): return GraphExecutionEntry( @@ -353,7 +390,7 @@ class GraphExecutionWithNodes(GraphExecution): graph_version=self.graph_version or 0, graph_exec_id=self.id, nodes_input_masks=compiled_nodes_input_masks, - user_context=user_context, + execution_context=execution_context, ) @@ -426,17 +463,18 @@ class NodeExecutionResult(BaseModel): ) def to_node_execution_entry( - self, user_context: "UserContext" + self, execution_context: ExecutionContext ) -> "NodeExecutionEntry": return NodeExecutionEntry( user_id=self.user_id, graph_exec_id=self.graph_exec_id, graph_id=self.graph_id, + graph_version=self.graph_version, node_exec_id=self.node_exec_id, node_id=self.node_id, block_id=self.block_id, inputs=self.input_data, - user_context=user_context, + execution_context=execution_context, ) @@ -446,6 +484,7 @@ class NodeExecutionResult(BaseModel): async def get_graph_executions( graph_exec_id: Optional[str] = None, graph_id: Optional[str] = None, + graph_version: Optional[int] = None, user_id: Optional[str] = None, statuses: Optional[list[ExecutionStatus]] = None, created_time_gte: Optional[datetime] = None, @@ -462,6 +501,8 @@ async def get_graph_executions( 
where_filter["userId"] = user_id if graph_id: where_filter["agentGraphId"] = graph_id + if graph_version is not None: + where_filter["agentGraphVersion"] = graph_version if created_time_gte or created_time_lte: where_filter["createdAt"] = { "gte": created_time_gte or datetime.min.replace(tzinfo=timezone.utc), @@ -632,6 +673,25 @@ async def get_graph_execution( ) +async def get_child_graph_executions( + parent_exec_id: str, +) -> list[GraphExecutionMeta]: + """ + Get all child executions of a parent execution. + + Args: + parent_exec_id: Parent graph execution ID + + Returns: + List of child graph executions + """ + children = await AgentGraphExecution.prisma().find_many( + where={"parentGraphExecutionId": parent_exec_id, "isDeleted": False} + ) + + return [GraphExecutionMeta.from_db(child) for child in children] + + async def create_graph_execution( graph_id: str, graph_version: int, @@ -641,6 +701,7 @@ async def create_graph_execution( preset_id: Optional[str] = None, credential_inputs: Optional[Mapping[str, CredentialsMetaInput]] = None, nodes_input_masks: Optional[NodesInputMasks] = None, + parent_graph_exec_id: Optional[str] = None, ) -> GraphExecutionWithNodes: """ Create a new AgentGraphExecution record. @@ -677,6 +738,7 @@ async def create_graph_execution( }, "userId": user_id, "agentPresetId": preset_id, + "parentGraphExecutionId": parent_graph_exec_id, }, include=GRAPH_EXECUTION_INCLUDE_WITH_NODES, ) @@ -690,7 +752,7 @@ async def upsert_execution_input( input_name: str, input_data: JsonValue, node_exec_id: str | None = None, -) -> tuple[str, BlockInput]: +) -> tuple[NodeExecutionResult, BlockInput]: """ Insert AgentNodeExecutionInputOutput record for as one of AgentNodeExecution.Input. If there is no AgentNodeExecution that has no `input_name` as input, create new one. @@ -723,7 +785,7 @@ async def upsert_execution_input( existing_execution = await AgentNodeExecution.prisma().find_first( where=existing_exec_query_filter, order={"addedTime": "asc"}, - include={"Input": True}, + include={"Input": True, "GraphExecution": True}, ) json_input_data = SafeJson(input_data) @@ -735,7 +797,7 @@ async def upsert_execution_input( referencedByInputExecId=existing_execution.id, ) ) - return existing_execution.id, { + return NodeExecutionResult.from_db(existing_execution), { **{ input_data.name: type_utils.convert(input_data.data, JsonValue) for input_data in existing_execution.Input or [] @@ -750,9 +812,10 @@ async def upsert_execution_input( agentGraphExecutionId=graph_exec_id, executionStatus=ExecutionStatus.INCOMPLETE, Input={"create": {"name": input_name, "data": json_input_data}}, - ) + ), + include={"GraphExecution": True}, ) - return result.id, {input_name: input_data} + return NodeExecutionResult.from_db(result), {input_name: input_data} else: raise ValueError( @@ -777,6 +840,30 @@ async def upsert_execution_output( await AgentNodeExecutionInputOutput.prisma().create(data=data) +async def get_execution_outputs_by_node_exec_id( + node_exec_id: str, +) -> dict[str, Any]: + """ + Get all execution outputs for a specific node execution ID. 
+ + Args: + node_exec_id: The node execution ID to get outputs for + + Returns: + Dictionary mapping output names to their data values + """ + outputs = await AgentNodeExecutionInputOutput.prisma().find_many( + where={"referencedByOutputExecId": node_exec_id} + ) + + result = {} + for output in outputs: + if output.data is not None: + result[output.name] = type_utils.convert(output.data, JsonValue) + + return result + + async def update_graph_execution_start_time( graph_exec_id: str, ) -> GraphExecution | None: @@ -848,9 +935,25 @@ async def update_node_execution_status_batch( node_exec_ids: list[str], status: ExecutionStatus, stats: dict[str, Any] | None = None, -): - await AgentNodeExecution.prisma().update_many( - where={"id": {"in": node_exec_ids}}, +) -> int: + # Validate status transitions - allowed_from should never be empty for valid statuses + allowed_from = VALID_STATUS_TRANSITIONS.get(status, []) + if not allowed_from: + raise ValueError( + f"Invalid status transition: {status} has no valid source statuses" + ) + + # For batch updates, we filter to only update nodes with valid current statuses + where_clause = cast( + AgentNodeExecutionWhereInput, + { + "id": {"in": node_exec_ids}, + "executionStatus": {"in": [s.value for s in allowed_from]}, + }, + ) + + return await AgentNodeExecution.prisma().update_many( + where=where_clause, data=_get_update_status_data(status, None, stats), ) @@ -864,15 +967,32 @@ async def update_node_execution_status( if status == ExecutionStatus.QUEUED and execution_data is None: raise ValueError("Execution data must be provided when queuing an execution.") - res = await AgentNodeExecution.prisma().update( - where={"id": node_exec_id}, + # Validate status transitions - allowed_from should never be empty for valid statuses + allowed_from = VALID_STATUS_TRANSITIONS.get(status, []) + if not allowed_from: + raise ValueError( + f"Invalid status transition: {status} has no valid source statuses" + ) + + if res := await AgentNodeExecution.prisma().update( + where=cast( + AgentNodeExecutionWhereUniqueInput, + { + "id": node_exec_id, + "executionStatus": {"in": [s.value for s in allowed_from]}, + }, + ), data=_get_update_status_data(status, execution_data, stats), include=EXECUTION_RESULT_INCLUDE, - ) - if not res: - raise ValueError(f"Execution {node_exec_id} not found.") + ): + return NodeExecutionResult.from_db(res) - return NodeExecutionResult.from_db(res) + if res := await AgentNodeExecution.prisma().find_unique( + where={"id": node_exec_id}, include=EXECUTION_RESULT_INCLUDE + ): + return NodeExecutionResult.from_db(res) + + raise ValueError(f"Execution {node_exec_id} not found.") def _get_update_status_data( @@ -926,17 +1046,17 @@ async def get_node_execution(node_exec_id: str) -> NodeExecutionResult | None: return NodeExecutionResult.from_db(execution) -async def get_node_executions( +def _build_node_execution_where_clause( graph_exec_id: str | None = None, node_id: str | None = None, block_ids: list[str] | None = None, statuses: list[ExecutionStatus] | None = None, - limit: int | None = None, created_time_gte: datetime | None = None, created_time_lte: datetime | None = None, - include_exec_data: bool = True, -) -> list[NodeExecutionResult]: - """⚠️ No `user_id` check: DO NOT USE without check in user-facing endpoints.""" +) -> AgentNodeExecutionWhereInput: + """ + Build where clause for node execution queries. 
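
Both update helpers above now gate on `VALID_STATUS_TRANSITIONS`: a batch update silently skips rows whose current status is not a legal source for the target status and reports how many rows actually moved. A sketch, assuming `ExecutionStatus` is importable from `backend.data.execution` as its module-level usage suggests:

```python
from backend.data.execution import (
    ExecutionStatus,
    VALID_STATUS_TRANSITIONS,
    update_node_execution_status_batch,
)

# Target status -> allowed current statuses; e.g. RUNNING can only be entered
# from INCOMPLETE, QUEUED, TERMINATED or REVIEW.
assert ExecutionStatus.QUEUED in VALID_STATUS_TRANSITIONS[ExecutionStatus.RUNNING]


async def mark_running(node_exec_ids: list[str]) -> int:
    # Rows in any other current status are left untouched; the returned count
    # tells the caller how many executions were actually transitioned.
    return await update_node_execution_status_batch(
        node_exec_ids, ExecutionStatus.RUNNING
    )
```
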
+ """ where_clause: AgentNodeExecutionWhereInput = {} if graph_exec_id: where_clause["agentGraphExecutionId"] = graph_exec_id @@ -953,6 +1073,29 @@ async def get_node_executions( "lte": created_time_lte or datetime.max.replace(tzinfo=timezone.utc), } + return where_clause + + +async def get_node_executions( + graph_exec_id: str | None = None, + node_id: str | None = None, + block_ids: list[str] | None = None, + statuses: list[ExecutionStatus] | None = None, + limit: int | None = None, + created_time_gte: datetime | None = None, + created_time_lte: datetime | None = None, + include_exec_data: bool = True, +) -> list[NodeExecutionResult]: + """⚠️ No `user_id` check: DO NOT USE without check in user-facing endpoints.""" + where_clause = _build_node_execution_where_clause( + graph_exec_id=graph_exec_id, + node_id=node_id, + block_ids=block_ids, + statuses=statuses, + created_time_gte=created_time_gte, + created_time_lte=created_time_lte, + ) + executions = await AgentNodeExecution.prisma().find_many( where=where_clause, include=( @@ -994,30 +1137,29 @@ async def get_latest_node_execution( # ----------------- Execution Infrastructure ----------------- # -class UserContext(BaseModel): - """Generic user context for graph execution containing user-specific settings.""" - - timezone: str - - class GraphExecutionEntry(BaseModel): + model_config = {"extra": "ignore"} + user_id: str graph_exec_id: str graph_id: str graph_version: int nodes_input_masks: Optional[NodesInputMasks] = None - user_context: UserContext + execution_context: ExecutionContext = Field(default_factory=ExecutionContext) class NodeExecutionEntry(BaseModel): + model_config = {"extra": "ignore"} + user_id: str graph_exec_id: str graph_id: str + graph_version: int node_exec_id: str node_id: str block_id: str inputs: BlockInput - user_context: UserContext + execution_context: ExecutionContext = Field(default_factory=ExecutionContext) class ExecutionQueue(Generic[T]): @@ -1351,3 +1493,35 @@ async def get_graph_execution_by_share_token( created_at=execution.createdAt, outputs=outputs, ) + + +async def get_frequently_executed_graphs( + days_back: int = 30, + min_executions: int = 10, +) -> list[dict]: + """Get graphs that have been frequently executed for monitoring.""" + query_template = """ + SELECT DISTINCT + e."agentGraphId" as graph_id, + e."userId" as user_id, + COUNT(*) as execution_count + FROM {schema_prefix}"AgentGraphExecution" e + WHERE e."createdAt" >= $1::timestamp + AND e."isDeleted" = false + AND e."executionStatus" IN ('COMPLETED', 'FAILED', 'TERMINATED') + GROUP BY e."agentGraphId", e."userId" + HAVING COUNT(*) >= $2 + ORDER BY execution_count DESC + """ + + start_date = datetime.now(timezone.utc) - timedelta(days=days_back) + result = await query_raw_with_schema(query_template, start_date, min_executions) + + return [ + { + "graph_id": row["graph_id"], + "user_id": row["user_id"], + "execution_count": int(row["execution_count"]), + } + for row in result + ] diff --git a/autogpt_platform/backend/backend/data/graph.py b/autogpt_platform/backend/backend/data/graph.py index bf9285c84b..0757a86f4a 100644 --- a/autogpt_platform/backend/backend/data/graph.py +++ b/autogpt_platform/backend/backend/data/graph.py @@ -1,3 +1,4 @@ +import asyncio import logging import uuid from collections import defaultdict @@ -5,7 +6,13 @@ from datetime import datetime, timezone from typing import TYPE_CHECKING, Any, Literal, Optional, cast from prisma.enums import SubmissionStatus -from prisma.models import AgentGraph, AgentNode, AgentNodeLink, 
StoreListingVersion +from prisma.models import ( + AgentGraph, + AgentNode, + AgentNodeLink, + LibraryAgent, + StoreListingVersion, +) from prisma.types import ( AgentGraphCreateInput, AgentGraphWhereInput, @@ -20,7 +27,7 @@ from backend.blocks.agent import AgentExecutorBlock from backend.blocks.io import AgentInputBlock, AgentOutputBlock from backend.blocks.llm import LlmModel from backend.data.db import prisma as db -from backend.data.dynamic_fields import extract_base_field_name +from backend.data.dynamic_fields import is_tool_pin, sanitize_pin_name from backend.data.includes import MAX_GRAPH_VERSIONS_FETCH from backend.data.model import ( CredentialsField, @@ -30,10 +37,12 @@ from backend.data.model import ( ) from backend.integrations.providers import ProviderName from backend.util import type as type_utils +from backend.util.exceptions import GraphNotAccessibleError, GraphNotInLibraryError from backend.util.json import SafeJson from backend.util.models import Pagination from .block import ( + AnyBlockSchema, Block, BlockInput, BlockSchema, @@ -52,6 +61,10 @@ if TYPE_CHECKING: logger = logging.getLogger(__name__) +class GraphSettings(BaseModel): + human_in_the_loop_safe_mode: bool | None = None + + class Link(BaseDbModel): source_id: str sink_id: str @@ -82,7 +95,7 @@ class Node(BaseDbModel): output_links: list[Link] = [] @property - def block(self) -> "Block[BlockSchema, BlockSchema] | _UnknownBlockBase": + def block(self) -> AnyBlockSchema | "_UnknownBlockBase": """Get the block for this node. Returns UnknownBlock if block is deleted/missing.""" block = get_block(self.block_id) if not block: @@ -216,6 +229,15 @@ class BaseGraph(BaseDbModel): def has_external_trigger(self) -> bool: return self.webhook_input_node is not None + @computed_field + @property + def has_human_in_the_loop(self) -> bool: + return any( + node.block_id + for node in self.nodes + if node.block.block_type == BlockType.HUMAN_IN_THE_LOOP + ) + @property def webhook_input_node(self) -> Node | None: return next( @@ -570,9 +592,9 @@ class GraphModel(Graph): nodes_input_masks.get(node.id, {}) if nodes_input_masks else {} ) provided_inputs = set( - [_sanitize_pin_name(name) for name in node.input_default] + [sanitize_pin_name(name) for name in node.input_default] + [ - _sanitize_pin_name(link.sink_name) + sanitize_pin_name(link.sink_name) for link in input_links.get(node.id, []) ] + ([name for name in node_input_mask] if node_input_mask else []) @@ -688,7 +710,7 @@ class GraphModel(Graph): f"{prefix}, {node.block_id} is invalid block id, available blocks: {blocks}" ) - sanitized_name = _sanitize_pin_name(name) + sanitized_name = sanitize_pin_name(name) vals = node.input_default if i == 0: fields = ( @@ -702,7 +724,7 @@ class GraphModel(Graph): if block.block_type not in [BlockType.AGENT] else vals.get("input_schema", {}).get("properties", {}).keys() ) - if sanitized_name not in fields and not _is_tool_pin(name): + if sanitized_name not in fields and not is_tool_pin(name): fields_msg = f"Allowed fields: {fields}" raise ValueError(f"{prefix}, `{name}` invalid, {fields_msg}") @@ -742,17 +764,6 @@ class GraphModel(Graph): ) -def _is_tool_pin(name: str) -> bool: - return name.startswith("tools_^_") - - -def _sanitize_pin_name(name: str) -> str: - sanitized_name = extract_base_field_name(name) - if _is_tool_pin(sanitized_name): - return "tools" - return sanitized_name - - class GraphMeta(Graph): user_id: str @@ -887,10 +898,12 @@ async def get_graph_metadata(graph_id: str, version: int | None = None) -> Graph async def 
get_graph( graph_id: str, - version: int | None = None, - user_id: str | None = None, + version: int | None, + user_id: str | None, + *, for_export: bool = False, include_subgraphs: bool = False, + skip_access_check: bool = False, ) -> GraphModel | None: """ Retrieves a graph from the DB. @@ -898,35 +911,43 @@ async def get_graph( Returns `None` if the record is not found. """ - where_clause: AgentGraphWhereInput = { - "id": graph_id, - } + graph = None - if version is not None: - where_clause["version"] = version - - graph = await AgentGraph.prisma().find_first( - where=where_clause, - include=AGENT_GRAPH_INCLUDE, - order={"version": "desc"}, - ) - if graph is None: - return None - - if graph.userId != user_id: - store_listing_filter: StoreListingVersionWhereInput = { - "agentGraphId": graph_id, - "isDeleted": False, - "submissionStatus": SubmissionStatus.APPROVED, + # Only search graph directly on owned graph (or access check is skipped) + if skip_access_check or user_id is not None: + graph_where_clause: AgentGraphWhereInput = { + "id": graph_id, } if version is not None: - store_listing_filter["agentGraphVersion"] = version + graph_where_clause["version"] = version + if not skip_access_check and user_id is not None: + graph_where_clause["userId"] = user_id - # For access, the graph must be owned by the user or listed in the store - if not await StoreListingVersion.prisma().find_first( - where=store_listing_filter, order={"agentGraphVersion": "desc"} + graph = await AgentGraph.prisma().find_first( + where=graph_where_clause, + include=AGENT_GRAPH_INCLUDE, + order={"version": "desc"}, + ) + + # Use store listed graph to find not owned graph + if graph is None: + store_where_clause: StoreListingVersionWhereInput = { + "agentGraphId": graph_id, + "submissionStatus": SubmissionStatus.APPROVED, + "isDeleted": False, + } + if version is not None: + store_where_clause["agentGraphVersion"] = version + + if store_listing := await StoreListingVersion.prisma().find_first( + where=store_where_clause, + order={"agentGraphVersion": "desc"}, + include={"AgentGraph": {"include": AGENT_GRAPH_INCLUDE}}, ): - return None + graph = store_listing.AgentGraph + + if graph is None: + return None if include_subgraphs or for_export: sub_graphs = await get_sub_graphs(graph) @@ -969,13 +990,8 @@ async def get_graph_as_admin( # For access, the graph must be owned by the user or listed in the store if graph is None or ( graph.userId != user_id - and not ( - await StoreListingVersion.prisma().find_first( - where={ - "agentGraphId": graph_id, - "agentGraphVersion": version or graph.version, - } - ) + and not await is_graph_published_in_marketplace( + graph_id, version or graph.version ) ): return None @@ -1102,6 +1118,121 @@ async def delete_graph(graph_id: str, user_id: str) -> int: return entries_count +async def get_graph_settings(user_id: str, graph_id: str) -> GraphSettings: + lib = await LibraryAgent.prisma().find_first( + where={ + "userId": user_id, + "agentGraphId": graph_id, + "isDeleted": False, + "isArchived": False, + }, + order={"agentGraphVersion": "desc"}, + ) + if not lib or not lib.settings: + return GraphSettings() + + try: + return GraphSettings.model_validate(lib.settings) + except Exception: + logger.warning( + f"Malformed settings for LibraryAgent user={user_id} graph={graph_id}" + ) + return GraphSettings() + + +async def validate_graph_execution_permissions( + user_id: str, graph_id: str, graph_version: int, is_sub_graph: bool = False +) -> None: + """ + Validate that a user has permission to 
execute a specific graph. + + This function performs comprehensive authorization checks and raises specific + exceptions for different types of failures to enable appropriate error handling. + + ## Logic + A user can execute a graph if any of these is true: + 1. They own the graph and some version of it is still listed in their library + 2. The graph is published in the marketplace and listed in their library + 3. The graph is published in the marketplace and is being executed as a sub-agent + + Args: + graph_id: The ID of the graph to check + user_id: The ID of the user + graph_version: The version of the graph to check + is_sub_graph: Whether this is being executed as a sub-graph. + If `True`, the graph isn't required to be in the user's Library. + + Raises: + GraphNotAccessibleError: If the graph is not accessible to the user. + GraphNotInLibraryError: If the graph is not in the user's library (deleted/archived). + NotAuthorizedError: If the user lacks execution permissions for other reasons + """ + graph, library_agent = await asyncio.gather( + AgentGraph.prisma().find_unique( + where={"graphVersionId": {"id": graph_id, "version": graph_version}} + ), + LibraryAgent.prisma().find_first( + where={ + "userId": user_id, + "agentGraphId": graph_id, + "isDeleted": False, + "isArchived": False, + } + ), + ) + + # Step 1: Check if user owns this graph + user_owns_graph = graph and graph.userId == user_id + + # Step 2: Check if agent is in the library *and not deleted* + user_has_in_library = library_agent is not None + + # Step 3: Apply permission logic + if not ( + user_owns_graph + or await is_graph_published_in_marketplace(graph_id, graph_version) + ): + raise GraphNotAccessibleError( + f"You do not have access to graph #{graph_id} v{graph_version}: " + "it is not owned by you and not available in the Marketplace" + ) + elif not (user_has_in_library or is_sub_graph): + raise GraphNotInLibraryError(f"Graph #{graph_id} is not in your library") + + # Step 6: Check execution-specific permissions (raises generic NotAuthorizedError) + # Additional authorization checks beyond the above: + # 1. Check if user has execution credits (future) + # 2. Check if graph is suspended/disabled (future) + # 3. Check rate limiting rules (future) + # 4. Check organization-level permissions (future) + + # For now, the above check logic is sufficient for execution permission. + # Future enhancements can add more granular permission checks here. + # When adding new checks, raise NotAuthorizedError for non-library issues. + + +async def is_graph_published_in_marketplace(graph_id: str, graph_version: int) -> bool: + """ + Check if a graph is published in the marketplace. + + Params: + graph_id: The ID of the graph to check + graph_version: The version of the graph to check + + Returns: + True if the graph is published and approved in the marketplace, False otherwise + """ + marketplace_listing = await StoreListingVersion.prisma().find_first( + where={ + "agentGraphId": graph_id, + "agentGraphVersion": graph_version, + "submissionStatus": SubmissionStatus.APPROVED, + "isDeleted": False, + } + ) + return marketplace_listing is not None + + async def create_graph(graph: Graph, user_id: str) -> GraphModel: async with transaction() as tx: await __create_graph(tx, graph, user_id) @@ -1116,7 +1247,7 @@ async def fork_graph(graph_id: str, graph_version: int, user_id: str) -> GraphMo """ Forks a graph by copying it and all its nodes and links to a new graph. 
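
`validate_graph_execution_permissions` raises distinct exceptions for "not accessible" versus "not in library", which lets callers map the two cases to different responses. A hedged sketch of that mapping; the HTTP layer shown here is illustrative and not part of this diff:

```python
from fastapi import HTTPException

from backend.data.graph import validate_graph_execution_permissions
from backend.util.exceptions import GraphNotAccessibleError, GraphNotInLibraryError


async def ensure_can_execute(user_id: str, graph_id: str, graph_version: int) -> None:
    try:
        await validate_graph_execution_permissions(
            user_id, graph_id, graph_version, is_sub_graph=False
        )
    except GraphNotAccessibleError as exc:
        # Not owned by the user and not published in the Marketplace.
        raise HTTPException(status_code=403, detail=str(exc))
    except GraphNotInLibraryError as exc:
        # Accessible, but removed/archived from the user's library.
        raise HTTPException(status_code=404, detail=str(exc))
```
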
""" - graph = await get_graph(graph_id, graph_version, user_id, True) + graph = await get_graph(graph_id, graph_version, user_id=user_id, for_export=True) if not graph: raise ValueError(f"Graph {graph_id} v{graph_version} not found") diff --git a/autogpt_platform/backend/backend/data/graph_test.py b/autogpt_platform/backend/backend/data/graph_test.py index 920ad8afa8..044d75e0ca 100644 --- a/autogpt_platform/backend/backend/data/graph_test.py +++ b/autogpt_platform/backend/backend/data/graph_test.py @@ -6,14 +6,14 @@ import fastapi.exceptions import pytest from pytest_snapshot.plugin import Snapshot -import backend.server.v2.store.model as store +import backend.api.features.store.model as store +from backend.api.model import CreateGraph from backend.blocks.basic import StoreValueBlock from backend.blocks.io import AgentInputBlock, AgentOutputBlock -from backend.data.block import BlockSchema +from backend.data.block import BlockSchema, BlockSchemaInput from backend.data.graph import Graph, Link, Node from backend.data.model import SchemaField from backend.data.user import DEFAULT_USER_ID -from backend.server.model import CreateGraph from backend.usecases.sample import create_test_user from backend.util.test import SpinTestServer @@ -166,11 +166,13 @@ async def test_get_input_schema(server: SpinTestServer, snapshot: Snapshot): create_graph, DEFAULT_USER_ID ) - class ExpectedInputSchema(BlockSchema): + class ExpectedInputSchema(BlockSchemaInput): in_key_a: Any = SchemaField(title="Key A", default="A", advanced=True) in_key_b: Any = SchemaField(title="in_key_b", advanced=False) class ExpectedOutputSchema(BlockSchema): + # Note: Graph output schemas are dynamically generated and don't inherit + # from BlockSchemaOutput, so we use BlockSchema as the base instead out_key: Any = SchemaField( description="This is an output key", title="out_key", diff --git a/autogpt_platform/backend/backend/data/human_review.py b/autogpt_platform/backend/backend/data/human_review.py new file mode 100644 index 0000000000..de7a30759e --- /dev/null +++ b/autogpt_platform/backend/backend/data/human_review.py @@ -0,0 +1,258 @@ +""" +Data layer for Human In The Loop (HITL) review operations. +Handles all database operations for pending human reviews. +""" + +import asyncio +import logging +from datetime import datetime, timezone +from typing import Optional + +from prisma.enums import ReviewStatus +from prisma.models import PendingHumanReview +from prisma.types import PendingHumanReviewUpdateInput +from pydantic import BaseModel + +from backend.api.features.executions.review.model import ( + PendingHumanReviewModel, + SafeJsonData, +) +from backend.util.json import SafeJson + +logger = logging.getLogger(__name__) + + +class ReviewResult(BaseModel): + """Result of a review operation.""" + + data: Optional[SafeJsonData] = None + status: ReviewStatus + message: str = "" + processed: bool + node_exec_id: str + + +async def get_or_create_human_review( + user_id: str, + node_exec_id: str, + graph_exec_id: str, + graph_id: str, + graph_version: int, + input_data: SafeJsonData, + message: str, + editable: bool, +) -> Optional[ReviewResult]: + """ + Get existing review or create a new pending review entry. + + Uses upsert with empty update to get existing or create new review in a single operation. 
+ + Args: + user_id: ID of the user who owns this review + node_exec_id: ID of the node execution + graph_exec_id: ID of the graph execution + graph_id: ID of the graph template + graph_version: Version of the graph template + input_data: The data to be reviewed + message: Instructions for the reviewer + editable: Whether the data can be edited + + Returns: + ReviewResult if the review is complete, None if waiting for human input + """ + try: + logger.debug(f"Getting or creating review for node {node_exec_id}") + + # Upsert - get existing or create new review + review = await PendingHumanReview.prisma().upsert( + where={"nodeExecId": node_exec_id}, + data={ + "create": { + "userId": user_id, + "nodeExecId": node_exec_id, + "graphExecId": graph_exec_id, + "graphId": graph_id, + "graphVersion": graph_version, + "payload": SafeJson(input_data), + "instructions": message, + "editable": editable, + "status": ReviewStatus.WAITING, + }, + "update": {}, # Do nothing on update - keep existing review as is + }, + ) + + logger.info( + f"Review {'created' if review.createdAt == review.updatedAt else 'retrieved'} for node {node_exec_id} with status {review.status}" + ) + except Exception as e: + logger.error( + f"Database error in get_or_create_human_review for node {node_exec_id}: {str(e)}" + ) + raise + + # Early return if already processed + if review.processed: + return None + + # If pending, return None to continue waiting, otherwise return the review result + if review.status == ReviewStatus.WAITING: + return None + else: + return ReviewResult( + data=review.payload, + status=review.status, + message=review.reviewMessage or "", + processed=review.processed, + node_exec_id=review.nodeExecId, + ) + + +async def has_pending_reviews_for_graph_exec(graph_exec_id: str) -> bool: + """ + Check if a graph execution has any pending reviews. + + Args: + graph_exec_id: The graph execution ID to check + + Returns: + True if there are reviews waiting for human input, False otherwise + """ + # Check if there are any reviews waiting for human input + count = await PendingHumanReview.prisma().count( + where={"graphExecId": graph_exec_id, "status": ReviewStatus.WAITING} + ) + return count > 0 + + +async def get_pending_reviews_for_user( + user_id: str, page: int = 1, page_size: int = 25 +) -> list["PendingHumanReviewModel"]: + """ + Get all pending reviews for a user with pagination. + + Args: + user_id: User ID to get reviews for + page: Page number (1-indexed) + page_size: Number of reviews per page + + Returns: + List of pending review models + """ + # Calculate offset for pagination + offset = (page - 1) * page_size + + reviews = await PendingHumanReview.prisma().find_many( + where={"userId": user_id, "status": ReviewStatus.WAITING}, + order={"createdAt": "desc"}, + skip=offset, + take=page_size, + ) + + return [PendingHumanReviewModel.from_db(review) for review in reviews] + + +async def get_pending_reviews_for_execution( + graph_exec_id: str, user_id: str +) -> list["PendingHumanReviewModel"]: + """ + Get all pending reviews for a specific graph execution. 
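To show how this data layer is meant to be driven, a short sketch of a human-in-the-loop step follows. Only `get_or_create_human_review`, the `ReviewResult` fields, and the `ReviewStatus` values are taken from this module; the surrounding executor object (`node_exec`) and its attribute names are simplifying assumptions.

```python
# Sketch of a HITL step driving the data layer above. `node_exec` stands in for
# whatever execution context the caller has; see lead-in for assumptions.
from prisma.enums import ReviewStatus

from backend.data.human_review import get_or_create_human_review


async def request_human_review(node_exec, input_data: dict):
    result = await get_or_create_human_review(
        user_id=node_exec.user_id,
        node_exec_id=node_exec.id,
        graph_exec_id=node_exec.graph_exec_id,
        graph_id=node_exec.graph_id,
        graph_version=node_exec.graph_version,
        input_data=input_data,
        message="Please review this data before it is used downstream.",
        editable=True,
    )
    if result is None:
        # Review is still WAITING (or already processed): keep the node paused.
        return None
    if result.status == ReviewStatus.APPROVED:
        # The reviewer may have edited the payload; continue with what they approved.
        return result.data
    # REJECTED: surface the reviewer's message as the failure reason.
    raise RuntimeError(f"Review rejected: {result.message}")
```

Reviewer decisions themselves are applied in bulk through `process_all_reviews_for_execution`, which validates ownership, WAITING status, and editability before updating the rows.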
+ + Args: + graph_exec_id: Graph execution ID + user_id: User ID for security validation + + Returns: + List of pending review models + """ + reviews = await PendingHumanReview.prisma().find_many( + where={ + "userId": user_id, + "graphExecId": graph_exec_id, + "status": ReviewStatus.WAITING, + }, + order={"createdAt": "asc"}, + ) + + return [PendingHumanReviewModel.from_db(review) for review in reviews] + + +async def process_all_reviews_for_execution( + user_id: str, + review_decisions: dict[str, tuple[ReviewStatus, SafeJsonData | None, str | None]], +) -> dict[str, PendingHumanReviewModel]: + """Process all pending reviews for an execution with approve/reject decisions. + + Args: + user_id: User ID for ownership validation + review_decisions: Map of node_exec_id -> (status, reviewed_data, message) + + Returns: + Dict of node_exec_id -> updated review model + """ + if not review_decisions: + return {} + + node_exec_ids = list(review_decisions.keys()) + + # Get all reviews for validation + reviews = await PendingHumanReview.prisma().find_many( + where={ + "nodeExecId": {"in": node_exec_ids}, + "userId": user_id, + "status": ReviewStatus.WAITING, + }, + ) + + # Validate all reviews can be processed + if len(reviews) != len(node_exec_ids): + missing_ids = set(node_exec_ids) - {review.nodeExecId for review in reviews} + raise ValueError( + f"Reviews not found, access denied, or not in WAITING status: {', '.join(missing_ids)}" + ) + + # Create parallel update tasks + update_tasks = [] + + for review in reviews: + new_status, reviewed_data, message = review_decisions[review.nodeExecId] + has_data_changes = reviewed_data is not None and reviewed_data != review.payload + + # Check edit permissions for actual data modifications + if has_data_changes and not review.editable: + raise ValueError(f"Review {review.nodeExecId} is not editable") + + update_data: PendingHumanReviewUpdateInput = { + "status": new_status, + "reviewMessage": message, + "wasEdited": has_data_changes, + "reviewedAt": datetime.now(timezone.utc), + } + + if has_data_changes: + update_data["payload"] = SafeJson(reviewed_data) + + task = PendingHumanReview.prisma().update( + where={"nodeExecId": review.nodeExecId}, + data=update_data, + ) + update_tasks.append(task) + + # Execute all updates in parallel and get updated reviews + updated_reviews = await asyncio.gather(*update_tasks) + + # Note: Execution resumption is now handled at the API layer after ALL reviews + # for an execution are processed (both approved and rejected) + + # Return as dict for easy access + return { + review.nodeExecId: PendingHumanReviewModel.from_db(review) + for review in updated_reviews + } + + +async def update_review_processed_status(node_exec_id: str, processed: bool) -> None: + """Update the processed status of a review.""" + await PendingHumanReview.prisma().update( + where={"nodeExecId": node_exec_id}, data={"processed": processed} + ) diff --git a/autogpt_platform/backend/backend/data/human_review_test.py b/autogpt_platform/backend/backend/data/human_review_test.py new file mode 100644 index 0000000000..c349fdde46 --- /dev/null +++ b/autogpt_platform/backend/backend/data/human_review_test.py @@ -0,0 +1,342 @@ +import datetime +from unittest.mock import AsyncMock, Mock + +import pytest +import pytest_mock +from prisma.enums import ReviewStatus + +from backend.data.human_review import ( + get_or_create_human_review, + get_pending_reviews_for_execution, + get_pending_reviews_for_user, + has_pending_reviews_for_graph_exec, + 
process_all_reviews_for_execution, +) + + +@pytest.fixture +def sample_db_review(): + """Create a sample database review object""" + mock_review = Mock() + mock_review.nodeExecId = "test_node_123" + mock_review.userId = "test-user-123" + mock_review.graphExecId = "test_graph_exec_456" + mock_review.graphId = "test_graph_789" + mock_review.graphVersion = 1 + mock_review.payload = {"data": "test payload"} + mock_review.instructions = "Please review" + mock_review.editable = True + mock_review.status = ReviewStatus.WAITING + mock_review.reviewMessage = None + mock_review.wasEdited = False + mock_review.processed = False + mock_review.createdAt = datetime.datetime.now(datetime.timezone.utc) + mock_review.updatedAt = None + mock_review.reviewedAt = None + return mock_review + + +@pytest.mark.asyncio +async def test_get_or_create_human_review_new( + mocker: pytest_mock.MockFixture, + sample_db_review, +): + """Test creating a new human review""" + # Mock the upsert to return a new review (created_at == updated_at) + sample_db_review.status = ReviewStatus.WAITING + sample_db_review.processed = False + + mock_upsert = mocker.patch("backend.data.human_review.PendingHumanReview.prisma") + mock_upsert.return_value.upsert = AsyncMock(return_value=sample_db_review) + + result = await get_or_create_human_review( + user_id="test-user-123", + node_exec_id="test_node_123", + graph_exec_id="test_graph_exec_456", + graph_id="test_graph_789", + graph_version=1, + input_data={"data": "test payload"}, + message="Please review", + editable=True, + ) + + # Should return None for pending reviews (waiting for human input) + assert result is None + + +@pytest.mark.asyncio +async def test_get_or_create_human_review_approved( + mocker: pytest_mock.MockFixture, + sample_db_review, +): + """Test retrieving an already approved review""" + # Set up review as already approved + sample_db_review.status = ReviewStatus.APPROVED + sample_db_review.processed = False + sample_db_review.reviewMessage = "Looks good" + + mock_upsert = mocker.patch("backend.data.human_review.PendingHumanReview.prisma") + mock_upsert.return_value.upsert = AsyncMock(return_value=sample_db_review) + + result = await get_or_create_human_review( + user_id="test-user-123", + node_exec_id="test_node_123", + graph_exec_id="test_graph_exec_456", + graph_id="test_graph_789", + graph_version=1, + input_data={"data": "test payload"}, + message="Please review", + editable=True, + ) + + # Should return the approved result + assert result is not None + assert result.status == ReviewStatus.APPROVED + assert result.data == {"data": "test payload"} + assert result.message == "Looks good" + + +@pytest.mark.asyncio +async def test_has_pending_reviews_for_graph_exec_true( + mocker: pytest_mock.MockFixture, +): + """Test when there are pending reviews""" + mock_count = mocker.patch("backend.data.human_review.PendingHumanReview.prisma") + mock_count.return_value.count = AsyncMock(return_value=2) + + result = await has_pending_reviews_for_graph_exec("test_graph_exec") + + assert result is True + + +@pytest.mark.asyncio +async def test_has_pending_reviews_for_graph_exec_false( + mocker: pytest_mock.MockFixture, +): + """Test when there are no pending reviews""" + mock_count = mocker.patch("backend.data.human_review.PendingHumanReview.prisma") + mock_count.return_value.count = AsyncMock(return_value=0) + + result = await has_pending_reviews_for_graph_exec("test_graph_exec") + + assert result is False + + +@pytest.mark.asyncio +async def test_get_pending_reviews_for_user( + 
mocker: pytest_mock.MockFixture, + sample_db_review, +): + """Test getting pending reviews for a user with pagination""" + mock_find_many = mocker.patch("backend.data.human_review.PendingHumanReview.prisma") + mock_find_many.return_value.find_many = AsyncMock(return_value=[sample_db_review]) + + result = await get_pending_reviews_for_user("test_user", page=2, page_size=10) + + assert len(result) == 1 + assert result[0].node_exec_id == "test_node_123" + + # Verify pagination parameters + call_args = mock_find_many.return_value.find_many.call_args + assert call_args.kwargs["skip"] == 10 # (page-1) * page_size = (2-1) * 10 + assert call_args.kwargs["take"] == 10 + + +@pytest.mark.asyncio +async def test_get_pending_reviews_for_execution( + mocker: pytest_mock.MockFixture, + sample_db_review, +): + """Test getting pending reviews for specific execution""" + mock_find_many = mocker.patch("backend.data.human_review.PendingHumanReview.prisma") + mock_find_many.return_value.find_many = AsyncMock(return_value=[sample_db_review]) + + result = await get_pending_reviews_for_execution( + "test_graph_exec_456", "test-user-123" + ) + + assert len(result) == 1 + assert result[0].graph_exec_id == "test_graph_exec_456" + + # Verify it filters by execution and user + call_args = mock_find_many.return_value.find_many.call_args + where_clause = call_args.kwargs["where"] + assert where_clause["userId"] == "test-user-123" + assert where_clause["graphExecId"] == "test_graph_exec_456" + assert where_clause["status"] == ReviewStatus.WAITING + + +@pytest.mark.asyncio +async def test_process_all_reviews_for_execution_success( + mocker: pytest_mock.MockFixture, + sample_db_review, +): + """Test successful processing of reviews for an execution""" + # Mock finding reviews + mock_prisma = mocker.patch("backend.data.human_review.PendingHumanReview.prisma") + mock_prisma.return_value.find_many = AsyncMock(return_value=[sample_db_review]) + + # Mock updating reviews + updated_review = Mock() + updated_review.nodeExecId = "test_node_123" + updated_review.userId = "test-user-123" + updated_review.graphExecId = "test_graph_exec_456" + updated_review.graphId = "test_graph_789" + updated_review.graphVersion = 1 + updated_review.payload = {"data": "modified"} + updated_review.instructions = "Please review" + updated_review.editable = True + updated_review.status = ReviewStatus.APPROVED + updated_review.reviewMessage = "Approved" + updated_review.wasEdited = True + updated_review.processed = False + updated_review.createdAt = datetime.datetime.now(datetime.timezone.utc) + updated_review.updatedAt = datetime.datetime.now(datetime.timezone.utc) + updated_review.reviewedAt = datetime.datetime.now(datetime.timezone.utc) + mock_prisma.return_value.update = AsyncMock(return_value=updated_review) + + # Mock gather to simulate parallel updates + mocker.patch( + "backend.data.human_review.asyncio.gather", + new=AsyncMock(return_value=[updated_review]), + ) + + result = await process_all_reviews_for_execution( + user_id="test-user-123", + review_decisions={ + "test_node_123": (ReviewStatus.APPROVED, {"data": "modified"}, "Approved") + }, + ) + + assert len(result) == 1 + assert "test_node_123" in result + assert result["test_node_123"].status == ReviewStatus.APPROVED + + +@pytest.mark.asyncio +async def test_process_all_reviews_for_execution_validation_errors( + mocker: pytest_mock.MockFixture, +): + """Test validation errors in process_all_reviews_for_execution""" + # Mock finding fewer reviews than requested (some not found) + 
mock_find_many = mocker.patch("backend.data.human_review.PendingHumanReview.prisma") + mock_find_many.return_value.find_many = AsyncMock( + return_value=[] + ) # No reviews found + + with pytest.raises(ValueError, match="Reviews not found"): + await process_all_reviews_for_execution( + user_id="test-user-123", + review_decisions={ + "nonexistent_node": (ReviewStatus.APPROVED, {"data": "test"}, "message") + }, + ) + + +@pytest.mark.asyncio +async def test_process_all_reviews_edit_permission_error( + mocker: pytest_mock.MockFixture, + sample_db_review, +): + """Test editing non-editable review""" + # Set review as non-editable + sample_db_review.editable = False + + # Mock finding reviews + mock_find_many = mocker.patch("backend.data.human_review.PendingHumanReview.prisma") + mock_find_many.return_value.find_many = AsyncMock(return_value=[sample_db_review]) + + with pytest.raises(ValueError, match="not editable"): + await process_all_reviews_for_execution( + user_id="test-user-123", + review_decisions={ + "test_node_123": ( + ReviewStatus.APPROVED, + {"data": "modified"}, + "message", + ) + }, + ) + + +@pytest.mark.asyncio +async def test_process_all_reviews_mixed_approval_rejection( + mocker: pytest_mock.MockFixture, + sample_db_review, +): + """Test processing mixed approval and rejection decisions""" + # Create second review for rejection + second_review = Mock() + second_review.nodeExecId = "test_node_456" + second_review.userId = "test-user-123" + second_review.graphExecId = "test_graph_exec_456" + second_review.graphId = "test_graph_789" + second_review.graphVersion = 1 + second_review.payload = {"data": "original"} + second_review.instructions = "Second review" + second_review.editable = True + second_review.status = ReviewStatus.WAITING + second_review.reviewMessage = None + second_review.wasEdited = False + second_review.processed = False + second_review.createdAt = datetime.datetime.now(datetime.timezone.utc) + second_review.updatedAt = None + second_review.reviewedAt = None + + # Mock finding reviews + mock_find_many = mocker.patch("backend.data.human_review.PendingHumanReview.prisma") + mock_find_many.return_value.find_many = AsyncMock( + return_value=[sample_db_review, second_review] + ) + + # Mock updating reviews + approved_review = Mock() + approved_review.nodeExecId = "test_node_123" + approved_review.userId = "test-user-123" + approved_review.graphExecId = "test_graph_exec_456" + approved_review.graphId = "test_graph_789" + approved_review.graphVersion = 1 + approved_review.payload = {"data": "modified"} + approved_review.instructions = "Please review" + approved_review.editable = True + approved_review.status = ReviewStatus.APPROVED + approved_review.reviewMessage = "Approved" + approved_review.wasEdited = True + approved_review.processed = False + approved_review.createdAt = datetime.datetime.now(datetime.timezone.utc) + approved_review.updatedAt = datetime.datetime.now(datetime.timezone.utc) + approved_review.reviewedAt = datetime.datetime.now(datetime.timezone.utc) + + rejected_review = Mock() + rejected_review.nodeExecId = "test_node_456" + rejected_review.userId = "test-user-123" + rejected_review.graphExecId = "test_graph_exec_456" + rejected_review.graphId = "test_graph_789" + rejected_review.graphVersion = 1 + rejected_review.payload = {"data": "original"} + rejected_review.instructions = "Please review" + rejected_review.editable = True + rejected_review.status = ReviewStatus.REJECTED + rejected_review.reviewMessage = "Rejected" + rejected_review.wasEdited = False 
+ rejected_review.processed = False + rejected_review.createdAt = datetime.datetime.now(datetime.timezone.utc) + rejected_review.updatedAt = datetime.datetime.now(datetime.timezone.utc) + rejected_review.reviewedAt = datetime.datetime.now(datetime.timezone.utc) + + mocker.patch( + "backend.data.human_review.asyncio.gather", + new=AsyncMock(return_value=[approved_review, rejected_review]), + ) + + result = await process_all_reviews_for_execution( + user_id="test-user-123", + review_decisions={ + "test_node_123": (ReviewStatus.APPROVED, {"data": "modified"}, "Approved"), + "test_node_456": (ReviewStatus.REJECTED, None, "Rejected"), + }, + ) + + assert len(result) == 2 + assert "test_node_123" in result + assert "test_node_456" in result diff --git a/autogpt_platform/backend/backend/data/integrations.py b/autogpt_platform/backend/backend/data/integrations.py index 82f9d7a8bb..5f44f928bd 100644 --- a/autogpt_platform/backend/backend/data/integrations.py +++ b/autogpt_platform/backend/backend/data/integrations.py @@ -1,7 +1,7 @@ import logging -from typing import AsyncGenerator, Literal, Optional, overload +from typing import TYPE_CHECKING, AsyncGenerator, Literal, Optional, overload -from prisma.models import IntegrationWebhook +from prisma.models import AgentNode, AgentPreset, IntegrationWebhook from prisma.types import ( IntegrationWebhookCreateInput, IntegrationWebhookUpdateInput, @@ -15,12 +15,16 @@ from backend.data.includes import ( INTEGRATION_WEBHOOK_INCLUDE, MAX_INTEGRATION_WEBHOOKS_FETCH, ) +from backend.integrations.creds_manager import IntegrationCredentialsManager from backend.integrations.providers import ProviderName +from backend.integrations.webhooks import get_webhook_manager from backend.integrations.webhooks.utils import webhook_ingress_url -from backend.server.v2.library.model import LibraryAgentPreset from backend.util.exceptions import NotFoundError from backend.util.json import SafeJson +if TYPE_CHECKING: + from backend.api.features.library.model import LibraryAgentPreset + from .db import BaseDbModel from .graph import NodeModel @@ -62,7 +66,7 @@ class Webhook(BaseDbModel): class WebhookWithRelations(Webhook): triggered_nodes: list[NodeModel] - triggered_presets: list[LibraryAgentPreset] + triggered_presets: list["LibraryAgentPreset"] @staticmethod def from_db(webhook: IntegrationWebhook): @@ -71,6 +75,12 @@ class WebhookWithRelations(Webhook): "AgentNodes and AgentPresets must be included in " "IntegrationWebhook query with relations" ) + # LibraryAgentPreset import is moved to TYPE_CHECKING to avoid circular import: + # integrations.py → library/model.py → integrations.py (for Webhook) + # Runtime import is used in WebhookWithRelations.from_db() method instead + # Import at runtime to avoid circular dependency + from backend.api.features.library.model import LibraryAgentPreset + return WebhookWithRelations( **Webhook.from_db(webhook).model_dump(), triggered_nodes=[NodeModel.from_db(node) for node in webhook.AgentNodes], @@ -237,6 +247,77 @@ async def update_webhook( return Webhook.from_db(_updated_webhook) +async def find_webhooks_by_graph_id(graph_id: str, user_id: str) -> list[Webhook]: + """ + Find all webhooks that trigger nodes OR presets in a specific graph for a user. 
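The comment block above explains the deferred-import fix for the `integrations.py` ↔ `library/model.py` cycle. As a generic illustration of that pattern (the file names `a.py`/`b.py` are invented for the sketch, not taken from the codebase):

```python
# --- b.py (the module that closes the cycle by importing a at the top) ---
# from dataclasses import dataclass
# import a
#
# @dataclass
# class B:
#     value: int

# --- a.py (needs B for type annotations only) ---
from __future__ import annotations

from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # Seen by type checkers only; never executed, so it cannot form a cycle.
    from b import B


def make_b(value: int) -> "B":
    from b import B  # runtime import, deferred until both modules are initialized

    return B(value)
```

This mirrors how `integrations.py` keeps `LibraryAgentPreset` usable as a type hint while importing it only inside `WebhookWithRelations.from_db()`.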
+ + Args: + graph_id: The ID of the graph + user_id: The ID of the user + + Returns: + list[Webhook]: List of webhooks associated with the graph + """ + where_clause: IntegrationWebhookWhereInput = { + "userId": user_id, + "OR": [ + # Webhooks that trigger nodes in this graph + {"AgentNodes": {"some": {"agentGraphId": graph_id}}}, + # Webhooks that trigger presets for this graph + {"AgentPresets": {"some": {"agentGraphId": graph_id}}}, + ], + } + webhooks = await IntegrationWebhook.prisma().find_many(where=where_clause) + return [Webhook.from_db(webhook) for webhook in webhooks] + + +async def unlink_webhook_from_graph( + webhook_id: str, graph_id: str, user_id: str +) -> None: + """ + Unlink a webhook from all nodes and presets in a specific graph. + If the webhook has no remaining triggers, it will be automatically deleted + and deregistered with the provider. + + Args: + webhook_id: The ID of the webhook + graph_id: The ID of the graph to unlink from + user_id: The ID of the user (for authorization) + """ + # Avoid circular imports + from backend.api.features.library.db import set_preset_webhook + from backend.data.graph import set_node_webhook + + # Find all nodes in this graph that use this webhook + nodes = await AgentNode.prisma().find_many( + where={"agentGraphId": graph_id, "webhookId": webhook_id} + ) + + # Unlink webhook from each node + for node in nodes: + await set_node_webhook(node.id, None) + + # Find all presets for this graph that use this webhook + presets = await AgentPreset.prisma().find_many( + where={"agentGraphId": graph_id, "webhookId": webhook_id, "userId": user_id} + ) + + # Unlink webhook from each preset + for preset in presets: + await set_preset_webhook(user_id, preset.id, None) + + # Check if webhook needs cleanup (prune_webhook_if_dangling handles the trigger check) + webhook = await get_webhook(webhook_id, include_relations=False) + webhook_manager = get_webhook_manager(webhook.provider) + creds_manager = IntegrationCredentialsManager() + credentials = ( + await creds_manager.get(user_id, webhook.credentials_id) + if webhook.credentials_id + else None + ) + await webhook_manager.prune_webhook_if_dangling(user_id, webhook.id, credentials) + + async def delete_webhook(user_id: str, webhook_id: str) -> None: deleted = await IntegrationWebhook.prisma().delete_many( where={"id": webhook_id, "userId": user_id} diff --git a/autogpt_platform/backend/backend/data/model.py b/autogpt_platform/backend/backend/data/model.py index bd78632ba8..2cc73f6b7b 100644 --- a/autogpt_platform/backend/backend/data/model.py +++ b/autogpt_platform/backend/backend/data/model.py @@ -22,7 +22,7 @@ from typing import ( from urllib.parse import urlparse from uuid import uuid4 -from prisma.enums import CreditTransactionType +from prisma.enums import CreditTransactionType, OnboardingStep from pydantic import ( BaseModel, ConfigDict, @@ -46,6 +46,7 @@ from backend.util.settings import Secrets # Type alias for any provider name (including custom ones) AnyProviderName = str # Will be validated as ProviderName at runtime +USER_TIMEZONE_NOT_SET = "not-set" class User(BaseModel): @@ -98,7 +99,7 @@ class User(BaseModel): # User timezone for scheduling and time display timezone: str = Field( - default="not-set", + default=USER_TIMEZONE_NOT_SET, description="User timezone (IANA timezone identifier or 'not-set')", ) @@ -155,7 +156,7 @@ class User(BaseModel): notify_on_daily_summary=prisma_user.notifyOnDailySummary or True, notify_on_weekly_summary=prisma_user.notifyOnWeeklySummary or True, 
notify_on_monthly_summary=prisma_user.notifyOnMonthlySummary or True, - timezone=prisma_user.timezone or "not-set", + timezone=prisma_user.timezone or USER_TIMEZONE_NOT_SET, ) @@ -347,6 +348,9 @@ class APIKeyCredentials(_BaseCredentials): """Unix timestamp (seconds) indicating when the API key expires (if at all)""" def auth_header(self) -> str: + # Linear API keys should not have Bearer prefix + if self.provider == "linear": + return self.api_key.get_secret_value() return f"Bearer {self.api_key.get_secret_value()}" @@ -430,6 +434,18 @@ class OAuthState(BaseModel): code_verifier: Optional[str] = None """Unix timestamp (seconds) indicating when this OAuth state expires""" scopes: list[str] + # Fields for external API OAuth flows + callback_url: Optional[str] = None + """External app's callback URL for OAuth redirect""" + state_metadata: dict[str, Any] = Field(default_factory=dict) + """Metadata to echo back to external app on completion""" + initiated_by_api_key_id: Optional[str] = None + """ID of the API key that initiated this OAuth flow""" + + @property + def is_external(self) -> bool: + """Whether this OAuth flow was initiated via external API.""" + return self.callback_url is not None class UserMetadata(BaseModel): @@ -830,6 +846,10 @@ class GraphExecutionStats(BaseModel): activity_status: Optional[str] = Field( default=None, description="AI-generated summary of what the agent did" ) + correctness_score: Optional[float] = Field( + default=None, + description="AI-generated score (0.0-1.0) indicating how well the execution achieved its intended purpose", + ) class UserExecutionSummaryStats(BaseModel): @@ -848,3 +868,20 @@ class UserExecutionSummaryStats(BaseModel): total_execution_time: float = Field(default=0) average_execution_time: float = Field(default=0) cost_breakdown: dict[str, float] = Field(default_factory=dict) + + +class UserOnboarding(BaseModel): + userId: str + completedSteps: list[OnboardingStep] + walletShown: bool + notified: list[OnboardingStep] + rewardedFor: list[OnboardingStep] + usageReason: Optional[str] + integrations: list[str] + otherIntegrations: Optional[str] + selectedStoreListingVersionId: Optional[str] + agentInput: Optional[dict[str, Any]] + onboardingAgentExecutionId: Optional[str] + agentRuns: int + lastRunAt: Optional[datetime] + consecutiveRunDays: int diff --git a/autogpt_platform/backend/backend/data/notification_bus.py b/autogpt_platform/backend/backend/data/notification_bus.py new file mode 100644 index 0000000000..fbd484d379 --- /dev/null +++ b/autogpt_platform/backend/backend/data/notification_bus.py @@ -0,0 +1,38 @@ +from __future__ import annotations + +from typing import AsyncGenerator + +from pydantic import BaseModel, field_serializer + +from backend.api.model import NotificationPayload +from backend.data.event_bus import AsyncRedisEventBus +from backend.util.settings import Settings + + +class NotificationEvent(BaseModel): + """Generic notification event destined for websocket delivery.""" + + user_id: str + payload: NotificationPayload + + @field_serializer("payload") + def serialize_payload(self, payload: NotificationPayload): + """Ensure extra fields survive Redis serialization.""" + return payload.model_dump() + + +class AsyncRedisNotificationEventBus(AsyncRedisEventBus[NotificationEvent]): + Model = NotificationEvent # type: ignore + + @property + def event_bus_name(self) -> str: + return Settings().config.notification_event_bus_name + + async def publish(self, event: NotificationEvent) -> None: + await self.publish_event(event, 
event.user_id) + + async def listen( + self, user_id: str = "*" + ) -> AsyncGenerator[NotificationEvent, None]: + async for event in self.listen_events(user_id): + yield event diff --git a/autogpt_platform/backend/backend/data/onboarding.py b/autogpt_platform/backend/backend/data/onboarding.py index 6bfc9b494d..cc63b89afd 100644 --- a/autogpt_platform/backend/backend/data/onboarding.py +++ b/autogpt_platform/backend/backend/data/onboarding.py @@ -1,6 +1,7 @@ import re -from datetime import datetime -from typing import Any, Optional +from datetime import datetime, timedelta, timezone +from typing import Any, Literal, Optional +from zoneinfo import ZoneInfo import prisma import pydantic @@ -8,12 +9,18 @@ from prisma.enums import OnboardingStep from prisma.models import UserOnboarding from prisma.types import UserOnboardingCreateInput, UserOnboardingUpdateInput -from backend.data.block import get_blocks +from backend.api.features.store.model import StoreAgentDetails +from backend.api.model import OnboardingNotificationPayload +from backend.data import execution as execution_db from backend.data.credit import get_user_credit_model -from backend.data.model import CredentialsMetaInput -from backend.server.v2.store.model import StoreAgentDetails +from backend.data.notification_bus import ( + AsyncRedisNotificationEventBus, + NotificationEvent, +) +from backend.data.user import get_user_by_id from backend.util.cache import cached from backend.util.json import SafeJson +from backend.util.timezone_utils import get_user_timezone_or_utc # Mapping from user reason id to categories to search for when choosing agent to show REASON_MAPPING: dict[str, list[str]] = { @@ -26,9 +33,20 @@ REASON_MAPPING: dict[str, list[str]] = { POINTS_AGENT_COUNT = 50 # Number of agents to calculate points for MIN_AGENT_COUNT = 2 # Minimum number of marketplace agents to enable onboarding +FrontendOnboardingStep = Literal[ + OnboardingStep.WELCOME, + OnboardingStep.USAGE_REASON, + OnboardingStep.INTEGRATIONS, + OnboardingStep.AGENT_CHOICE, + OnboardingStep.AGENT_NEW_RUN, + OnboardingStep.AGENT_INPUT, + OnboardingStep.CONGRATS, + OnboardingStep.MARKETPLACE_VISIT, + OnboardingStep.BUILDER_OPEN, +] + class UserOnboardingUpdate(pydantic.BaseModel): - completedSteps: Optional[list[OnboardingStep]] = None walletShown: Optional[bool] = None notified: Optional[list[OnboardingStep]] = None usageReason: Optional[str] = None @@ -37,9 +55,6 @@ class UserOnboardingUpdate(pydantic.BaseModel): selectedStoreListingVersionId: Optional[str] = None agentInput: Optional[dict[str, Any]] = None onboardingAgentExecutionId: Optional[str] = None - agentRuns: Optional[int] = None - lastRunAt: Optional[datetime] = None - consecutiveRunDays: Optional[int] = None async def get_user_onboarding(user_id: str): @@ -52,30 +67,36 @@ async def get_user_onboarding(user_id: str): ) +async def reset_user_onboarding(user_id: str): + return await UserOnboarding.prisma().upsert( + where={"userId": user_id}, + data={ + "create": UserOnboardingCreateInput(userId=user_id), + "update": { + "completedSteps": [], + "walletShown": False, + "notified": [], + "usageReason": None, + "integrations": [], + "otherIntegrations": None, + "selectedStoreListingVersionId": None, + "agentInput": prisma.Json({}), + "onboardingAgentExecutionId": None, + "agentRuns": 0, + "lastRunAt": None, + "consecutiveRunDays": 0, + }, + }, + ) + + async def update_user_onboarding(user_id: str, data: UserOnboardingUpdate): update: UserOnboardingUpdateInput = {} - if data.completedSteps is not None: - 
update["completedSteps"] = list(set(data.completedSteps)) - for step in ( - OnboardingStep.AGENT_NEW_RUN, - OnboardingStep.MARKETPLACE_VISIT, - OnboardingStep.MARKETPLACE_ADD_AGENT, - OnboardingStep.MARKETPLACE_RUN_AGENT, - OnboardingStep.BUILDER_SAVE_AGENT, - OnboardingStep.RE_RUN_AGENT, - OnboardingStep.SCHEDULE_AGENT, - OnboardingStep.RUN_AGENTS, - OnboardingStep.RUN_3_DAYS, - OnboardingStep.TRIGGER_WEBHOOK, - OnboardingStep.RUN_14_DAYS, - OnboardingStep.RUN_AGENTS_100, - ): - if step in data.completedSteps: - await reward_user(user_id, step) - if data.walletShown is not None: + onboarding = await get_user_onboarding(user_id) + if data.walletShown: update["walletShown"] = data.walletShown if data.notified is not None: - update["notified"] = list(set(data.notified)) + update["notified"] = list(set(data.notified + onboarding.notified)) if data.usageReason is not None: update["usageReason"] = data.usageReason if data.integrations is not None: @@ -88,12 +109,6 @@ async def update_user_onboarding(user_id: str, data: UserOnboardingUpdate): update["agentInput"] = SafeJson(data.agentInput) if data.onboardingAgentExecutionId is not None: update["onboardingAgentExecutionId"] = data.onboardingAgentExecutionId - if data.agentRuns is not None: - update["agentRuns"] = data.agentRuns - if data.lastRunAt is not None: - update["lastRunAt"] = data.lastRunAt - if data.consecutiveRunDays is not None: - update["consecutiveRunDays"] = data.consecutiveRunDays return await UserOnboarding.prisma().upsert( where={"userId": user_id}, @@ -104,7 +119,7 @@ async def update_user_onboarding(user_id: str, data: UserOnboardingUpdate): ) -async def reward_user(user_id: str, step: OnboardingStep): +async def _reward_user(user_id: str, onboarding: UserOnboarding, step: OnboardingStep): reward = 0 match step: # Reward user when they clicked New Run during onboarding @@ -138,41 +153,70 @@ async def reward_user(user_id: str, step: OnboardingStep): if reward == 0: return - onboarding = await get_user_onboarding(user_id) - # Skip if already rewarded if step in onboarding.rewardedFor: return - onboarding.rewardedFor.append(step) user_credit_model = await get_user_credit_model(user_id) await user_credit_model.onboarding_reward(user_id, reward, step) await UserOnboarding.prisma().update( where={"userId": user_id}, data={ - "completedSteps": list(set(onboarding.completedSteps + [step])), - "rewardedFor": onboarding.rewardedFor, + "rewardedFor": list(set(onboarding.rewardedFor + [step])), }, ) -async def complete_webhook_trigger_step(user_id: str): +async def complete_onboarding_step(user_id: str, step: OnboardingStep): """ - Completes the TRIGGER_WEBHOOK onboarding step for the user if not already completed. + Completes the specified onboarding step for the user if not already completed. 
""" - onboarding = await get_user_onboarding(user_id) - if OnboardingStep.TRIGGER_WEBHOOK not in onboarding.completedSteps: - await update_user_onboarding( - user_id, - UserOnboardingUpdate( - completedSteps=onboarding.completedSteps - + [OnboardingStep.TRIGGER_WEBHOOK] - ), + if step not in onboarding.completedSteps: + await UserOnboarding.prisma().update( + where={"userId": user_id}, + data={ + "completedSteps": list(set(onboarding.completedSteps + [step])), + }, ) + await _reward_user(user_id, onboarding, step) + await _send_onboarding_notification(user_id, step) -def clean_and_split(text: str) -> list[str]: +async def _send_onboarding_notification( + user_id: str, step: OnboardingStep | None, event: str = "step_completed" +): + """ + Sends an onboarding notification to the user. + """ + payload = OnboardingNotificationPayload( + type="onboarding", + event=event, + step=step, + ) + await AsyncRedisNotificationEventBus().publish( + NotificationEvent(user_id=user_id, payload=payload) + ) + + +async def complete_re_run_agent(user_id: str, graph_id: str) -> None: + """ + Complete RE_RUN_AGENT step when a user runs a graph they've run before. + Keeps overhead low by only counting executions if the step is still pending. + """ + onboarding = await get_user_onboarding(user_id) + if OnboardingStep.RE_RUN_AGENT in onboarding.completedSteps: + return + + # Includes current execution, so count > 1 means there was at least one prior run. + previous_exec_count = await execution_db.get_graph_executions_count( + user_id=user_id, graph_id=graph_id + ) + if previous_exec_count > 1: + await complete_onboarding_step(user_id, OnboardingStep.RE_RUN_AGENT) + + +def _clean_and_split(text: str) -> list[str]: """ Removes all special characters from a string, truncates it to 100 characters, and splits it by whitespace and commas. 
@@ -195,7 +239,7 @@ def clean_and_split(text: str) -> list[str]: return words -def calculate_points( +def _calculate_points( agent, categories: list[str], custom: list[str], integrations: list[str] ) -> int: """ @@ -239,18 +283,85 @@ def calculate_points( return int(points) -def get_credentials_blocks() -> dict[str, str]: - # Returns a dictionary of block id to credentials field name - creds: dict[str, str] = {} - blocks = get_blocks() - for id, block in blocks.items(): - for field_name, field_info in block().input_schema.model_fields.items(): - if field_info.annotation == CredentialsMetaInput: - creds[id] = field_name - return creds +def _normalize_datetime(value: datetime | None) -> datetime | None: + if value is None: + return None + if value.tzinfo is None: + return value.replace(tzinfo=timezone.utc) + return value.astimezone(timezone.utc) -CREDENTIALS_FIELDS: dict[str, str] = get_credentials_blocks() +def _calculate_consecutive_run_days( + last_run_at: datetime | None, current_consecutive_days: int, user_timezone: str +) -> tuple[datetime, int]: + tz = ZoneInfo(user_timezone) + local_now = datetime.now(tz) + normalized_last_run = _normalize_datetime(last_run_at) + + if normalized_last_run is None: + return local_now.astimezone(timezone.utc), 1 + + last_run_local = normalized_last_run.astimezone(tz) + last_run_date = last_run_local.date() + today = local_now.date() + + if last_run_date == today: + return local_now.astimezone(timezone.utc), current_consecutive_days + + if last_run_date == today - timedelta(days=1): + return local_now.astimezone(timezone.utc), current_consecutive_days + 1 + + return local_now.astimezone(timezone.utc), 1 + + +def _get_run_milestone_steps( + new_run_count: int, consecutive_days: int +) -> list[OnboardingStep]: + milestones: list[OnboardingStep] = [] + if new_run_count >= 10: + milestones.append(OnboardingStep.RUN_AGENTS) + if new_run_count >= 100: + milestones.append(OnboardingStep.RUN_AGENTS_100) + if consecutive_days >= 3: + milestones.append(OnboardingStep.RUN_3_DAYS) + if consecutive_days >= 14: + milestones.append(OnboardingStep.RUN_14_DAYS) + return milestones + + +async def _get_user_timezone(user_id: str) -> str: + user = await get_user_by_id(user_id) + return get_user_timezone_or_utc(user.timezone if user else None) + + +async def increment_runs(user_id: str): + """ + Increment a user's run counters and trigger any onboarding milestones. 
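The milestone helper above is deterministic, so a small worked example can pin down its behaviour; the streak helper is time-dependent and is only described in comments. Importing the underscore-prefixed helper here is purely for illustration.

```python
# Minimal illustration of _get_run_milestone_steps. The streak rule implemented
# by _calculate_consecutive_run_days: a run on the same local day keeps the
# streak, a run on the next day increments it, any longer gap resets it to 1.
from prisma.enums import OnboardingStep

from backend.data.onboarding import _get_run_milestone_steps

# The 10th run on the 3rd consecutive day unlocks two milestones at once.
assert _get_run_milestone_steps(new_run_count=10, consecutive_days=3) == [
    OnboardingStep.RUN_AGENTS,
    OnboardingStep.RUN_3_DAYS,
]

# Below both thresholds: no milestones yet.
assert _get_run_milestone_steps(new_run_count=9, consecutive_days=2) == []
```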
+ """ + user_timezone = await _get_user_timezone(user_id) + onboarding = await get_user_onboarding(user_id) + new_run_count = onboarding.agentRuns + 1 + last_run_at, consecutive_run_days = _calculate_consecutive_run_days( + onboarding.lastRunAt, onboarding.consecutiveRunDays, user_timezone + ) + + await UserOnboarding.prisma().update( + where={"userId": user_id}, + data={ + "agentRuns": {"increment": 1}, + "lastRunAt": last_run_at, + "consecutiveRunDays": consecutive_run_days, + }, + ) + + milestones = _get_run_milestone_steps(new_run_count, consecutive_run_days) + new_steps = [step for step in milestones if step not in onboarding.completedSteps] + + for step in new_steps: + await complete_onboarding_step(user_id, step) + # Send progress notification if no steps were completed, so client refetches onboarding state + if not new_steps: + await _send_onboarding_notification(user_id, None, event="increment_runs") async def get_recommended_agents(user_id: str) -> list[StoreAgentDetails]: @@ -259,7 +370,7 @@ async def get_recommended_agents(user_id: str) -> list[StoreAgentDetails]: where_clause: dict[str, Any] = {} - custom = clean_and_split((user_onboarding.usageReason or "").lower()) + custom = _clean_and_split((user_onboarding.usageReason or "").lower()) if categories: where_clause["OR"] = [ @@ -307,7 +418,7 @@ async def get_recommended_agents(user_id: str) -> list[StoreAgentDetails]: # Calculate points for the first X agents and choose the top 2 agent_points = [] for agent in storeAgents[:POINTS_AGENT_COUNT]: - points = calculate_points( + points = _calculate_points( agent, categories, custom, user_onboarding.integrations ) agent_points.append((agent, points)) @@ -321,6 +432,7 @@ async def get_recommended_agents(user_id: str) -> list[StoreAgentDetails]: slug=agent.slug, agent_name=agent.agent_name, agent_video=agent.agent_video or "", + agent_output_demo=agent.agent_output_demo or "", agent_image=agent.agent_image, creator=agent.creator_username, creator_avatar=agent.creator_avatar, @@ -330,6 +442,8 @@ async def get_recommended_agents(user_id: str) -> list[StoreAgentDetails]: runs=agent.runs, rating=agent.rating, versions=agent.versions, + agentGraphVersions=agent.agentGraphVersions, + agentGraphId=agent.agentGraphId, last_updated=agent.updated_at, ) for agent in recommended_agents diff --git a/autogpt_platform/backend/backend/data/partial_types.py b/autogpt_platform/backend/backend/data/partial_types.py new file mode 100644 index 0000000000..befa32219f --- /dev/null +++ b/autogpt_platform/backend/backend/data/partial_types.py @@ -0,0 +1,5 @@ +import prisma.models + + +class StoreAgentWithRank(prisma.models.StoreAgent): + rank: float diff --git a/autogpt_platform/backend/backend/executor/activity_status_generator.py b/autogpt_platform/backend/backend/executor/activity_status_generator.py index b9d1ed7bd5..3bc6bcb876 100644 --- a/autogpt_platform/backend/backend/executor/activity_status_generator.py +++ b/autogpt_platform/backend/backend/executor/activity_status_generator.py @@ -13,12 +13,11 @@ except ImportError: from pydantic import SecretStr -from backend.blocks.llm import LlmModel, llm_call +from backend.blocks.llm import AIStructuredResponseGeneratorBlock, LlmModel from backend.data.block import get_block from backend.data.execution import ExecutionStatus, NodeExecutionResult from backend.data.model import APIKeyCredentials, GraphExecutionStats from backend.util.feature_flag import Flag, is_feature_enabled -from backend.util.retry import func_retry from backend.util.settings import 
Settings from backend.util.truncate import truncate @@ -28,6 +27,101 @@ if TYPE_CHECKING: logger = logging.getLogger(__name__) +# Default system prompt template for activity status generation +DEFAULT_SYSTEM_PROMPT = """You are an AI assistant analyzing what an agent execution accomplished and whether it worked correctly. +You need to provide both a user-friendly summary AND a correctness assessment. + +FOR THE ACTIVITY STATUS: +- Write from the user's perspective about what they accomplished, NOT about technical execution details +- Focus on the ACTUAL TASK the user wanted done, not the internal workflow steps +- Avoid technical terms like 'workflow', 'execution', 'components', 'nodes', 'processing', etc. +- Keep it to 3 sentences maximum. Be conversational and human-friendly + +FOR THE CORRECTNESS SCORE: +- Provide a score from 0.0 to 1.0 indicating how well the execution achieved its intended purpose +- Use this scoring guide: + 0.0-0.2: Failure - The result clearly did not meet the task requirements + 0.2-0.4: Poor - Major issues; only small parts of the goal were achieved + 0.4-0.6: Partial Success - Some objectives met, but with noticeable gaps or inaccuracies + 0.6-0.8: Mostly Successful - Largely achieved the intended outcome, with minor flaws + 0.8-1.0: Success - Fully met or exceeded the task requirements +- Base the score on actual outputs produced, not just technical completion + +UNDERSTAND THE INTENDED PURPOSE: +- FIRST: Read the graph description carefully to understand what the user wanted to accomplish +- The graph name and description tell you the main goal/intention of this automation +- Use this intended purpose as your PRIMARY criteria for success/failure evaluation +- Ask yourself: 'Did this execution actually accomplish what the graph was designed to do?' + +CRITICAL OUTPUT ANALYSIS: +- Check if blocks that should produce user-facing results actually produced outputs +- Blocks with names containing 'Output', 'Post', 'Create', 'Send', 'Publish', 'Generate' are usually meant to produce final results +- If these critical blocks have NO outputs (empty recent_outputs), the task likely FAILED even if status shows 'completed' +- Sub-agents (AgentExecutorBlock) that produce no outputs usually indicate failed sub-tasks +- Most importantly: Does the execution result match what the graph description promised to deliver? + +SUCCESS EVALUATION BASED ON INTENTION: +- If the graph is meant to 'create blog posts' → check if blog content was actually created +- If the graph is meant to 'send emails' → check if emails were actually sent +- If the graph is meant to 'analyze data' → check if analysis results were produced +- If the graph is meant to 'generate reports' → check if reports were generated +- Technical completion ≠ goal achievement. 
Focus on whether the USER'S INTENDED OUTCOME was delivered + +IMPORTANT: Be HONEST about what actually happened: +- If the input was invalid/nonsensical, say so directly +- If the task failed, explain what went wrong in simple terms +- If errors occurred, focus on what the user needs to know +- Only claim success if the INTENDED PURPOSE was genuinely accomplished AND produced expected outputs +- Don't sugar-coat failures or present them as helpful feedback +- ESPECIALLY: If the graph's main purpose wasn't achieved, this is a failure regardless of 'completed' status + +Understanding Errors: +- Node errors: Individual steps may fail but the overall task might still complete (e.g., one data source fails but others work) +- Graph error (in overall_status.graph_error): This means the entire execution failed and nothing was accomplished +- Missing outputs from critical blocks: Even if no errors, this means the task failed to produce expected results +- Focus on whether the graph's intended purpose was fulfilled, not whether technical steps completed""" + +# Default user prompt template for activity status generation +DEFAULT_USER_PROMPT = """A user ran '{{GRAPH_NAME}}' to accomplish something. Based on this execution data, +provide both an activity summary and correctness assessment: + +{{EXECUTION_DATA}} + +ANALYSIS CHECKLIST: +1. READ graph_info.description FIRST - this tells you what the user intended to accomplish +2. Check overall_status.graph_error - if present, the entire execution failed +3. Look for nodes with 'Output', 'Post', 'Create', 'Send', 'Publish', 'Generate' in their block_name +4. Check if these critical blocks have empty recent_outputs arrays - this indicates failure +5. Look for AgentExecutorBlock (sub-agents) with no outputs - this suggests sub-task failures +6. Count how many nodes produced outputs vs total nodes - low ratio suggests problems +7. MOST IMPORTANT: Does the execution outcome match what graph_info.description promised? + +INTENTION-BASED EVALUATION: +- If description mentions 'blog writing' → did it create blog content? +- If description mentions 'email automation' → were emails actually sent? +- If description mentions 'data analysis' → were analysis results produced? +- If description mentions 'content generation' → was content actually generated? +- If description mentions 'social media posting' → were posts actually made? +- Match the outputs to the stated intention, not just technical completion + +PROVIDE: +activity_status: 1-3 sentences about what the user accomplished, such as: +- 'I analyzed your resume and provided detailed feedback for the IT industry.' +- 'I couldn't complete the task because critical steps failed to produce any results.' +- 'I failed to generate the content you requested due to missing API access.' +- 'I extracted key information from your documents and organized it into a summary.' +- 'The task failed because the blog post creation step didn't produce any output.' 
+ +correctness_score: A float score from 0.0 to 1.0 based on how well the intended purpose was achieved: +- 0.0-0.2: Failure (didn't meet requirements) +- 0.2-0.4: Poor (major issues, minimal achievement) +- 0.4-0.6: Partial Success (some objectives met with gaps) +- 0.6-0.8: Mostly Successful (largely achieved with minor flaws) +- 0.8-1.0: Success (fully met or exceeded requirements) + +BE CRITICAL: If the graph's intended purpose (from description) wasn't achieved, use a low score (0.0-0.4) even if status is 'completed'.""" + + class ErrorInfo(TypedDict): """Type definition for error information.""" @@ -70,6 +164,13 @@ class NodeRelation(TypedDict): sink_block_name: NotRequired[str] # Optional, only set if block exists +class ActivityStatusResponse(TypedDict): + """Type definition for structured activity status response.""" + + activity_status: str + correctness_score: float + + def _truncate_uuid(uuid_str: str) -> str: """Truncate UUID to first segment to reduce payload size.""" if not uuid_str: @@ -85,9 +186,14 @@ async def generate_activity_status_for_execution( db_client: "DatabaseManagerAsyncClient", user_id: str, execution_status: ExecutionStatus | None = None, -) -> str | None: + model_name: str = "gpt-4o-mini", + skip_feature_flag: bool = False, + system_prompt: str = DEFAULT_SYSTEM_PROMPT, + user_prompt: str = DEFAULT_USER_PROMPT, + skip_existing: bool = True, +) -> ActivityStatusResponse | None: """ - Generate an AI-based activity status summary for a graph execution. + Generate an AI-based activity status summary and correctness assessment for a graph execution. This function handles all the data collection and AI generation logic, keeping the manager integration simple. @@ -100,15 +206,37 @@ async def generate_activity_status_for_execution( db_client: Database client for fetching data user_id: User ID for LaunchDarkly feature flag evaluation execution_status: The overall execution status (COMPLETED, FAILED, TERMINATED) + model_name: AI model to use for generation (default: gpt-4o-mini) + skip_feature_flag: Whether to skip LaunchDarkly feature flag check + system_prompt: Custom system prompt template (default: DEFAULT_SYSTEM_PROMPT) + user_prompt: Custom user prompt template with placeholders (default: DEFAULT_USER_PROMPT) + skip_existing: Whether to skip if activity_status and correctness_score already exist Returns: - AI-generated activity status string, or None if feature is disabled + AI-generated activity status response with activity_status and correctness_status, + or None if feature is disabled or skipped """ # Check LaunchDarkly feature flag for AI activity status generation with full context support - if not await is_feature_enabled(Flag.AI_ACTIVITY_STATUS, user_id): + if not skip_feature_flag and not await is_feature_enabled( + Flag.AI_ACTIVITY_STATUS, user_id + ): logger.debug("AI activity status generation is disabled via LaunchDarkly") return None + # Check if we should skip existing data (for admin regeneration option) + if ( + skip_existing + and execution_stats.activity_status + and execution_stats.correctness_score is not None + ): + logger.debug( + f"Skipping activity status generation for {graph_exec_id}: already exists" + ) + return { + "activity_status": execution_stats.activity_status, + "correctness_score": execution_stats.correctness_score, + } + # Check if we have OpenAI API key try: settings = Settings() @@ -125,7 +253,12 @@ async def generate_activity_status_for_execution( # Get graph metadata and full graph structure for name, description, and 
links graph_metadata = await db_client.get_graph_metadata(graph_id, graph_version) - graph = await db_client.get_graph(graph_id, graph_version) + graph = await db_client.get_graph( + graph_id=graph_id, + version=graph_version, + user_id=user_id, + skip_access_check=True, + ) graph_name = graph_metadata.name if graph_metadata else f"Graph {graph_id}" graph_description = graph_metadata.description if graph_metadata else "" @@ -141,76 +274,23 @@ async def generate_activity_status_for_execution( execution_status, ) - # Prepare prompt for AI + # Prepare execution data as JSON for template substitution + execution_data_json = json.dumps(execution_data, indent=2) + + # Perform template substitution for user prompt + user_prompt_content = user_prompt.replace("{{GRAPH_NAME}}", graph_name).replace( + "{{EXECUTION_DATA}}", execution_data_json + ) + + # Prepare prompt for AI with structured output requirements prompt = [ { "role": "system", - "content": ( - "You are an AI assistant summarizing what you just did for a user in simple, friendly language. " - "Write from the user's perspective about what they accomplished, NOT about technical execution details. " - "Focus on the ACTUAL TASK the user wanted done, not the internal workflow steps. " - "Avoid technical terms like 'workflow', 'execution', 'components', 'nodes', 'processing', etc. " - "Keep it to 3 sentences maximum. Be conversational and human-friendly.\n\n" - "UNDERSTAND THE INTENDED PURPOSE:\n" - "- FIRST: Read the graph description carefully to understand what the user wanted to accomplish\n" - "- The graph name and description tell you the main goal/intention of this automation\n" - "- Use this intended purpose as your PRIMARY criteria for success/failure evaluation\n" - "- Ask yourself: 'Did this execution actually accomplish what the graph was designed to do?'\n\n" - "CRITICAL OUTPUT ANALYSIS:\n" - "- Check if blocks that should produce user-facing results actually produced outputs\n" - "- Blocks with names containing 'Output', 'Post', 'Create', 'Send', 'Publish', 'Generate' are usually meant to produce final results\n" - "- If these critical blocks have NO outputs (empty recent_outputs), the task likely FAILED even if status shows 'completed'\n" - "- Sub-agents (AgentExecutorBlock) that produce no outputs usually indicate failed sub-tasks\n" - "- Most importantly: Does the execution result match what the graph description promised to deliver?\n\n" - "SUCCESS EVALUATION BASED ON INTENTION:\n" - "- If the graph is meant to 'create blog posts' → check if blog content was actually created\n" - "- If the graph is meant to 'send emails' → check if emails were actually sent\n" - "- If the graph is meant to 'analyze data' → check if analysis results were produced\n" - "- If the graph is meant to 'generate reports' → check if reports were generated\n" - "- Technical completion ≠ goal achievement. 
Focus on whether the USER'S INTENDED OUTCOME was delivered\n\n" - "IMPORTANT: Be HONEST about what actually happened:\n" - "- If the input was invalid/nonsensical, say so directly\n" - "- If the task failed, explain what went wrong in simple terms\n" - "- If errors occurred, focus on what the user needs to know\n" - "- Only claim success if the INTENDED PURPOSE was genuinely accomplished AND produced expected outputs\n" - "- Don't sugar-coat failures or present them as helpful feedback\n" - "- ESPECIALLY: If the graph's main purpose wasn't achieved, this is a failure regardless of 'completed' status\n\n" - "Understanding Errors:\n" - "- Node errors: Individual steps may fail but the overall task might still complete (e.g., one data source fails but others work)\n" - "- Graph error (in overall_status.graph_error): This means the entire execution failed and nothing was accomplished\n" - "- Missing outputs from critical blocks: Even if no errors, this means the task failed to produce expected results\n" - "- Focus on whether the graph's intended purpose was fulfilled, not whether technical steps completed" - ), + "content": system_prompt, }, { "role": "user", - "content": ( - f"A user ran '{graph_name}' to accomplish something. Based on this execution data, " - f"write what they achieved in simple, user-friendly terms:\n\n" - f"{json.dumps(execution_data, indent=2)}\n\n" - "ANALYSIS CHECKLIST:\n" - "1. READ graph_info.description FIRST - this tells you what the user intended to accomplish\n" - "2. Check overall_status.graph_error - if present, the entire execution failed\n" - "3. Look for nodes with 'Output', 'Post', 'Create', 'Send', 'Publish', 'Generate' in their block_name\n" - "4. Check if these critical blocks have empty recent_outputs arrays - this indicates failure\n" - "5. Look for AgentExecutorBlock (sub-agents) with no outputs - this suggests sub-task failures\n" - "6. Count how many nodes produced outputs vs total nodes - low ratio suggests problems\n" - "7. MOST IMPORTANT: Does the execution outcome match what graph_info.description promised?\n\n" - "INTENTION-BASED EVALUATION:\n" - "- If description mentions 'blog writing' → did it create blog content?\n" - "- If description mentions 'email automation' → were emails actually sent?\n" - "- If description mentions 'data analysis' → were analysis results produced?\n" - "- If description mentions 'content generation' → was content actually generated?\n" - "- If description mentions 'social media posting' → were posts actually made?\n" - "- Match the outputs to the stated intention, not just technical completion\n\n" - "Write 1-3 sentences about what the user accomplished, such as:\n" - "- 'I analyzed your resume and provided detailed feedback for the IT industry.'\n" - "- 'I couldn't complete the task because critical steps failed to produce any results.'\n" - "- 'I failed to generate the content you requested due to missing API access.'\n" - "- 'I extracted key information from your documents and organized it into a summary.'\n" - "- 'The task failed because the blog post creation step didn't produce any output.'\n\n" - "BE CRITICAL: If the graph's intended purpose (from description) wasn't achieved, report this as a failure even if status is 'completed'." 
- ), + "content": user_prompt_content, }, ] @@ -227,17 +307,61 @@ async def generate_activity_status_for_execution( title="System OpenAI", ) - # Make LLM call using current event loop - activity_status = await _call_llm_direct(credentials, prompt) + # Define expected response format + expected_format = { + "activity_status": "A user-friendly 1-3 sentence summary of what was accomplished", + "correctness_score": "Float score from 0.0 to 1.0 indicating how well the execution achieved its intended purpose", + } - logger.debug( - f"Generated activity status for {graph_exec_id}: {activity_status}" + # Use existing AIStructuredResponseGeneratorBlock for structured LLM call + structured_block = AIStructuredResponseGeneratorBlock() + + # Convert credentials to the format expected by AIStructuredResponseGeneratorBlock + credentials_input = { + "provider": credentials.provider, + "id": credentials.id, + "type": credentials.type, + "title": credentials.title, + } + + structured_input = AIStructuredResponseGeneratorBlock.Input( + prompt=prompt[1]["content"], # User prompt content + sys_prompt=prompt[0]["content"], # System prompt content + expected_format=expected_format, + model=LlmModel(model_name), + credentials=credentials_input, # type: ignore + max_tokens=150, + retry=3, ) - return activity_status + # Execute the structured LLM call + async for output_name, output_data in structured_block.run( + structured_input, credentials=credentials + ): + if output_name == "response": + response = output_data + break + else: + raise RuntimeError("Failed to get response from structured LLM call") + + # Create typed response with validation + correctness_score = float(response["correctness_score"]) + # Clamp score to valid range + correctness_score = max(0.0, min(1.0, correctness_score)) + + activity_response: ActivityStatusResponse = { + "activity_status": response["activity_status"], + "correctness_score": correctness_score, + } + + logger.debug( + f"Generated activity status for {graph_exec_id}: {activity_response}" + ) + + return activity_response except Exception as e: - logger.error( + logger.exception( f"Failed to generate activity status for execution {graph_exec_id}: {str(e)}" ) return None @@ -448,23 +572,3 @@ def _build_execution_summary( ), }, } - - -@func_retry -async def _call_llm_direct( - credentials: APIKeyCredentials, prompt: list[dict[str, str]] -) -> str: - """Make direct LLM call.""" - - response = await llm_call( - credentials=credentials, - llm_model=LlmModel.GPT4O_MINI, - prompt=prompt, - max_tokens=150, - compress_prompt_to_fit=True, - ) - - if response and response.response: - return response.response.strip() - else: - return "Unable to generate activity summary" diff --git a/autogpt_platform/backend/backend/executor/activity_status_generator_test.py b/autogpt_platform/backend/backend/executor/activity_status_generator_test.py index 206be21d02..c3ce0b6bf0 100644 --- a/autogpt_platform/backend/backend/executor/activity_status_generator_test.py +++ b/autogpt_platform/backend/backend/executor/activity_status_generator_test.py @@ -7,12 +7,11 @@ from unittest.mock import AsyncMock, MagicMock, patch import pytest -from backend.blocks.llm import LLMResponse +from backend.blocks.llm import LlmModel, LLMResponse from backend.data.execution import ExecutionStatus, NodeExecutionResult from backend.data.model import GraphExecutionStats from backend.executor.activity_status_generator import ( _build_execution_summary, - _call_llm_direct, generate_activity_status_for_execution, ) @@ -373,25 
+372,24 @@ class TestLLMCall: """Tests for LLM calling functionality.""" @pytest.mark.asyncio - async def test_call_llm_direct_success(self): - """Test successful LLM call.""" + async def test_structured_llm_call_success(self): + """Test successful structured LLM call.""" from pydantic import SecretStr + from backend.blocks.llm import AIStructuredResponseGeneratorBlock from backend.data.model import APIKeyCredentials - mock_response = LLMResponse( - raw_response={}, - prompt=[], - response="Agent successfully processed user input and generated response.", - tool_calls=None, - prompt_tokens=50, - completion_tokens=20, - ) - - with patch( - "backend.executor.activity_status_generator.llm_call" - ) as mock_llm_call: - mock_llm_call.return_value = mock_response + with patch("backend.blocks.llm.llm_call") as mock_llm_call, patch( + "backend.blocks.llm.secrets.token_hex", return_value="test123" + ): + mock_llm_call.return_value = LLMResponse( + raw_response={}, + prompt=[], + response='{"activity_status": "Test completed successfully", "correctness_score": 0.9}', + tool_calls=None, + prompt_tokens=50, + completion_tokens=20, + ) credentials = APIKeyCredentials( id="test", @@ -401,26 +399,61 @@ class TestLLMCall: ) prompt = [{"role": "user", "content": "Test prompt"}] + expected_format = { + "activity_status": "User-friendly summary", + "correctness_score": "Float score from 0.0 to 1.0", + } - result = await _call_llm_direct(credentials, prompt) + # Create structured block and input + structured_block = AIStructuredResponseGeneratorBlock() + credentials_input = { + "provider": credentials.provider, + "id": credentials.id, + "type": credentials.type, + "title": credentials.title, + } - assert ( - result - == "Agent successfully processed user input and generated response." 
+ structured_input = AIStructuredResponseGeneratorBlock.Input( + prompt=prompt[0]["content"], + expected_format=expected_format, + model=LlmModel.GPT4O_MINI, + credentials=credentials_input, # type: ignore ) - mock_llm_call.assert_called_once() + + # Execute the structured LLM call + result = None + async for output_name, output_data in structured_block.run( + structured_input, credentials=credentials + ): + if output_name == "response": + result = output_data + break + + assert result is not None + assert result["activity_status"] == "Test completed successfully" + assert result["correctness_score"] == 0.9 + mock_llm_call.assert_called() @pytest.mark.asyncio - async def test_call_llm_direct_no_response(self): - """Test LLM call with no response.""" + async def test_structured_llm_call_validation_error(self): + """Test structured LLM call with validation error.""" from pydantic import SecretStr + from backend.blocks.llm import AIStructuredResponseGeneratorBlock from backend.data.model import APIKeyCredentials - with patch( - "backend.executor.activity_status_generator.llm_call" - ) as mock_llm_call: - mock_llm_call.return_value = None + with patch("backend.blocks.llm.llm_call") as mock_llm_call, patch( + "backend.blocks.llm.secrets.token_hex", return_value="test123" + ): + # Return invalid JSON that will fail validation (missing required field) + mock_llm_call.return_value = LLMResponse( + raw_response={}, + prompt=[], + response='{"activity_status": "Test completed successfully"}', + tool_calls=None, + prompt_tokens=50, + completion_tokens=20, + ) credentials = APIKeyCredentials( id="test", @@ -430,10 +463,36 @@ class TestLLMCall: ) prompt = [{"role": "user", "content": "Test prompt"}] + expected_format = { + "activity_status": "User-friendly summary", + "correctness_score": "Float score from 0.0 to 1.0", + } - result = await _call_llm_direct(credentials, prompt) + # Create structured block and input + structured_block = AIStructuredResponseGeneratorBlock() + credentials_input = { + "provider": credentials.provider, + "id": credentials.id, + "type": credentials.type, + "title": credentials.title, + } - assert result == "Unable to generate activity summary" + structured_input = AIStructuredResponseGeneratorBlock.Input( + prompt=prompt[0]["content"], + expected_format=expected_format, + model=LlmModel.GPT4O_MINI, + credentials=credentials_input, # type: ignore + retry=1, # Use fewer retries for faster test + ) + + with pytest.raises( + Exception + ): # AIStructuredResponseGeneratorBlock may raise different exceptions + async for output_name, output_data in structured_block.run( + structured_input, credentials=credentials + ): + if output_name == "response": + break class TestGenerateActivityStatusForExecution: @@ -461,17 +520,25 @@ class TestGenerateActivityStatusForExecution: ) as mock_get_block, patch( "backend.executor.activity_status_generator.Settings" ) as mock_settings, patch( - "backend.executor.activity_status_generator._call_llm_direct" - ) as mock_llm, patch( + "backend.executor.activity_status_generator.AIStructuredResponseGeneratorBlock" + ) as mock_structured_block, patch( "backend.executor.activity_status_generator.is_feature_enabled", return_value=True, ): mock_get_block.side_effect = lambda block_id: mock_blocks.get(block_id) mock_settings.return_value.secrets.openai_internal_api_key = "test_key" - mock_llm.return_value = ( - "I analyzed your data and provided the requested insights." 
- ) + + # Mock the structured block to return our expected response + mock_instance = mock_structured_block.return_value + + async def mock_run(*args, **kwargs): + yield "response", { + "activity_status": "I analyzed your data and provided the requested insights.", + "correctness_score": 0.85, + } + + mock_instance.run = mock_run result = await generate_activity_status_for_execution( graph_exec_id="test_exec", @@ -482,11 +549,16 @@ class TestGenerateActivityStatusForExecution: user_id="test_user", ) - assert result == "I analyzed your data and provided the requested insights." + assert result is not None + assert ( + result["activity_status"] + == "I analyzed your data and provided the requested insights." + ) + assert result["correctness_score"] == 0.85 mock_db_client.get_node_executions.assert_called_once() mock_db_client.get_graph_metadata.assert_called_once() mock_db_client.get_graph.assert_called_once() - mock_llm.assert_called_once() + mock_structured_block.assert_called_once() @pytest.mark.asyncio async def test_generate_status_feature_disabled(self, mock_execution_stats): @@ -574,15 +646,25 @@ class TestGenerateActivityStatusForExecution: ) as mock_get_block, patch( "backend.executor.activity_status_generator.Settings" ) as mock_settings, patch( - "backend.executor.activity_status_generator._call_llm_direct" - ) as mock_llm, patch( + "backend.executor.activity_status_generator.AIStructuredResponseGeneratorBlock" + ) as mock_structured_block, patch( "backend.executor.activity_status_generator.is_feature_enabled", return_value=True, ): mock_get_block.side_effect = lambda block_id: mock_blocks.get(block_id) mock_settings.return_value.secrets.openai_internal_api_key = "test_key" - mock_llm.return_value = "Agent completed execution." + + # Mock the structured block to return our expected response + mock_instance = mock_structured_block.return_value + + async def mock_run(*args, **kwargs): + yield "response", { + "activity_status": "Agent completed execution.", + "correctness_score": 0.8, + } + + mock_instance.run = mock_run result = await generate_activity_status_for_execution( graph_exec_id="test_exec", @@ -593,10 +675,11 @@ class TestGenerateActivityStatusForExecution: user_id="test_user", ) - assert result == "Agent completed execution." - # Should use fallback graph name in prompt - call_args = mock_llm.call_args[0][1] # prompt argument - assert "Graph test_graph" in call_args[1]["content"] + assert result is not None + assert result["activity_status"] == "Agent completed execution." 
+ assert result["correctness_score"] == 0.8 + # The structured block should have been instantiated + assert mock_structured_block.called class TestIntegration: @@ -626,8 +709,8 @@ class TestIntegration: ) as mock_get_block, patch( "backend.executor.activity_status_generator.Settings" ) as mock_settings, patch( - "backend.executor.activity_status_generator.llm_call" - ) as mock_llm_call, patch( + "backend.executor.activity_status_generator.AIStructuredResponseGeneratorBlock" + ) as mock_structured_block, patch( "backend.executor.activity_status_generator.is_feature_enabled", return_value=True, ): @@ -635,15 +718,16 @@ class TestIntegration: mock_get_block.side_effect = lambda block_id: mock_blocks.get(block_id) mock_settings.return_value.secrets.openai_internal_api_key = "test_key" - mock_response = LLMResponse( - raw_response={}, - prompt=[], - response=expected_activity, - tool_calls=None, - prompt_tokens=100, - completion_tokens=30, - ) - mock_llm_call.return_value = mock_response + # Mock the structured block to return our expected response + mock_instance = mock_structured_block.return_value + + async def mock_run(*args, **kwargs): + yield "response", { + "activity_status": expected_activity, + "correctness_score": 0.3, # Low score since there was a failure + } + + mock_instance.run = mock_run result = await generate_activity_status_for_execution( graph_exec_id="test_exec", @@ -654,24 +738,14 @@ class TestIntegration: user_id="test_user", ) - assert result == expected_activity + assert result is not None + assert result["activity_status"] == expected_activity + assert result["correctness_score"] == 0.3 - # Verify the correct data was passed to LLM - llm_call_args = mock_llm_call.call_args - prompt = llm_call_args[1]["prompt"] - - # Check system prompt - assert prompt[0]["role"] == "system" - assert "user's perspective" in prompt[0]["content"] - - # Check user prompt contains expected data - user_content = prompt[1]["content"] - assert "Test Integration Agent" in user_content - assert "user-friendly terms" in user_content.lower() - - # Verify that execution data is present in the prompt - assert "{" in user_content # Should contain JSON data - assert "overall_status" in user_content + # Verify the structured block was called + assert mock_structured_block.called + # The structured block should have been instantiated + mock_structured_block.assert_called_once() @pytest.mark.asyncio async def test_manager_integration_with_disabled_feature( diff --git a/autogpt_platform/backend/backend/server/v2/AutoMod/__init__.py b/autogpt_platform/backend/backend/executor/automod/__init__.py similarity index 100% rename from autogpt_platform/backend/backend/server/v2/AutoMod/__init__.py rename to autogpt_platform/backend/backend/executor/automod/__init__.py diff --git a/autogpt_platform/backend/backend/server/v2/AutoMod/manager.py b/autogpt_platform/backend/backend/executor/automod/manager.py similarity index 99% rename from autogpt_platform/backend/backend/server/v2/AutoMod/manager.py rename to autogpt_platform/backend/backend/executor/automod/manager.py index 181fcec248..81001196dd 100644 --- a/autogpt_platform/backend/backend/server/v2/AutoMod/manager.py +++ b/autogpt_platform/backend/backend/executor/automod/manager.py @@ -9,16 +9,13 @@ if TYPE_CHECKING: from pydantic import ValidationError from backend.data.execution import ExecutionStatus -from backend.server.v2.AutoMod.models import ( - AutoModRequest, - AutoModResponse, - ModerationConfig, -) from backend.util.exceptions import ModerationError 
from backend.util.feature_flag import Flag, is_feature_enabled from backend.util.request import Requests from backend.util.settings import Settings +from .models import AutoModRequest, AutoModResponse, ModerationConfig + logger = logging.getLogger(__name__) diff --git a/autogpt_platform/backend/backend/server/v2/AutoMod/models.py b/autogpt_platform/backend/backend/executor/automod/models.py similarity index 100% rename from autogpt_platform/backend/backend/server/v2/AutoMod/models.py rename to autogpt_platform/backend/backend/executor/automod/models.py diff --git a/autogpt_platform/backend/backend/executor/database.py b/autogpt_platform/backend/backend/executor/database.py index 8f15f73774..af68bf526d 100644 --- a/autogpt_platform/backend/backend/executor/database.py +++ b/autogpt_platform/backend/backend/executor/database.py @@ -1,12 +1,25 @@ import logging -from typing import Callable, Concatenate, ParamSpec, TypeVar, cast +from contextlib import asynccontextmanager +from typing import TYPE_CHECKING, Callable, Concatenate, ParamSpec, TypeVar, cast +from backend.api.features.library.db import ( + add_store_agent_to_library, + list_library_agents, +) +from backend.api.features.store.db import get_store_agent_details, get_store_agents from backend.data import db +from backend.data.analytics import ( + get_accuracy_trends_and_alerts, + get_marketplace_graphs_for_monitoring, +) from backend.data.credit import UsageTransactionMetadata, get_user_credit_model from backend.data.execution import ( create_graph_execution, get_block_error_stats, + get_child_graph_executions, get_execution_kv_data, + get_execution_outputs_by_node_exec_id, + get_frequently_executed_graphs, get_graph_execution_meta, get_graph_executions, get_graph_executions_count, @@ -26,7 +39,14 @@ from backend.data.graph import ( get_connected_output_nodes, get_graph, get_graph_metadata, + get_graph_settings, get_node, + validate_graph_execution_permissions, +) +from backend.data.human_review import ( + get_or_create_human_review, + has_pending_reviews_for_graph_exec, + update_review_processed_status, ) from backend.data.notifications import ( clear_all_user_notification_batches, @@ -39,14 +59,13 @@ from backend.data.notifications import ( ) from backend.data.user import ( get_active_user_ids_in_timerange, + get_user_by_id, get_user_email_by_id, get_user_email_verification, get_user_integrations, get_user_notification_preference, update_user_integrations, ) -from backend.server.v2.library.db import add_store_agent_to_library, list_library_agents -from backend.server.v2.store.db import get_store_agent_details, get_store_agents from backend.util.service import ( AppService, AppServiceClient, @@ -56,6 +75,9 @@ from backend.util.service import ( ) from backend.util.settings import Config +if TYPE_CHECKING: + from fastapi import FastAPI + config = Config() logger = logging.getLogger(__name__) P = ParamSpec("P") @@ -75,15 +97,17 @@ async def _get_credits(user_id: str) -> int: class DatabaseManager(AppService): - def run_service(self) -> None: - logger.info(f"[{self.service_name}] ⏳ Connecting to Database...") - self.run_and_wait(db.connect()) - super().run_service() + @asynccontextmanager + async def lifespan(self, app: "FastAPI"): + async with super().lifespan(app): + logger.info(f"[{self.service_name}] ⏳ Connecting to Database...") + await db.connect() - def cleanup(self): - super().cleanup() - logger.info(f"[{self.service_name}] ⏳ Disconnecting Database...") - self.run_and_wait(db.disconnect()) + logger.info(f"[{self.service_name}] ✅ 
Ready") + yield + + logger.info(f"[{self.service_name}] ⏳ Disconnecting Database...") + await db.disconnect() async def health_check(self) -> str: if not db.is_connected(): @@ -113,6 +137,7 @@ class DatabaseManager(AppService): return cast(Callable[Concatenate[object, P], R], expose(f)) # Executions + get_child_graph_executions = _(get_child_graph_executions) get_graph_executions = _(get_graph_executions) get_graph_executions_count = _(get_graph_executions_count) get_graph_execution_meta = _(get_graph_execution_meta) @@ -126,15 +151,20 @@ class DatabaseManager(AppService): update_graph_execution_stats = _(update_graph_execution_stats) upsert_execution_input = _(upsert_execution_input) upsert_execution_output = _(upsert_execution_output) + get_execution_outputs_by_node_exec_id = _(get_execution_outputs_by_node_exec_id) get_execution_kv_data = _(get_execution_kv_data) set_execution_kv_data = _(set_execution_kv_data) get_block_error_stats = _(get_block_error_stats) + get_accuracy_trends_and_alerts = _(get_accuracy_trends_and_alerts) + get_frequently_executed_graphs = _(get_frequently_executed_graphs) + get_marketplace_graphs_for_monitoring = _(get_marketplace_graphs_for_monitoring) # Graphs get_node = _(get_node) get_graph = _(get_graph) get_connected_output_nodes = _(get_connected_output_nodes) get_graph_metadata = _(get_graph_metadata) + get_graph_settings = _(get_graph_settings) # Credits spend_credits = _(_spend_credits, name="spend_credits") @@ -146,10 +176,16 @@ class DatabaseManager(AppService): # User Comms - async get_active_user_ids_in_timerange = _(get_active_user_ids_in_timerange) + get_user_by_id = _(get_user_by_id) get_user_email_by_id = _(get_user_email_by_id) get_user_email_verification = _(get_user_email_verification) get_user_notification_preference = _(get_user_notification_preference) + # Human In The Loop + get_or_create_human_review = _(get_or_create_human_review) + has_pending_reviews_for_graph_exec = _(has_pending_reviews_for_graph_exec) + update_review_processed_status = _(update_review_processed_status) + # Notifications - async clear_all_user_notification_batches = _(clear_all_user_notification_batches) create_or_add_to_user_notification_batch = _( @@ -166,6 +202,7 @@ class DatabaseManager(AppService): # Library list_library_agents = _(list_library_agents) add_store_agent_to_library = _(add_store_agent_to_library) + validate_graph_execution_permissions = _(validate_graph_execution_permissions) # Store get_store_agents = _(get_store_agents) @@ -202,6 +239,13 @@ class DatabaseManagerClient(AppServiceClient): # Block error monitoring get_block_error_stats = _(d.get_block_error_stats) + # Execution accuracy monitoring + get_accuracy_trends_and_alerts = _(d.get_accuracy_trends_and_alerts) + get_frequently_executed_graphs = _(d.get_frequently_executed_graphs) + get_marketplace_graphs_for_monitoring = _(d.get_marketplace_graphs_for_monitoring) + + # Human In The Loop + has_pending_reviews_for_graph_exec = _(d.has_pending_reviews_for_graph_exec) # User Emails get_user_email_by_id = _(d.get_user_email_by_id) @@ -209,6 +253,7 @@ class DatabaseManagerClient(AppServiceClient): # Library list_library_agents = _(d.list_library_agents) add_store_agent_to_library = _(d.add_store_agent_to_library) + validate_graph_execution_permissions = _(d.validate_graph_execution_permissions) # Store get_store_agents = _(d.get_store_agents) @@ -223,17 +268,21 @@ class DatabaseManagerAsyncClient(AppServiceClient): return DatabaseManager create_graph_execution = d.create_graph_execution + 
get_child_graph_executions = d.get_child_graph_executions get_connected_output_nodes = d.get_connected_output_nodes get_latest_node_execution = d.get_latest_node_execution get_graph = d.get_graph get_graph_metadata = d.get_graph_metadata + get_graph_settings = d.get_graph_settings get_graph_execution_meta = d.get_graph_execution_meta get_node = d.get_node get_node_execution = d.get_node_execution get_node_executions = d.get_node_executions + get_user_by_id = d.get_user_by_id get_user_integrations = d.get_user_integrations upsert_execution_input = d.upsert_execution_input upsert_execution_output = d.upsert_execution_output + get_execution_outputs_by_node_exec_id = d.get_execution_outputs_by_node_exec_id update_graph_execution_stats = d.update_graph_execution_stats update_node_execution_status = d.update_node_execution_status update_node_execution_status_batch = d.update_node_execution_status_batch @@ -241,6 +290,10 @@ class DatabaseManagerAsyncClient(AppServiceClient): get_execution_kv_data = d.get_execution_kv_data set_execution_kv_data = d.set_execution_kv_data + # Human In The Loop + get_or_create_human_review = d.get_or_create_human_review + update_review_processed_status = d.update_review_processed_status + # User Comms get_active_user_ids_in_timerange = d.get_active_user_ids_in_timerange get_user_email_by_id = d.get_user_email_by_id @@ -263,6 +316,7 @@ class DatabaseManagerAsyncClient(AppServiceClient): # Library list_library_agents = d.list_library_agents add_store_agent_to_library = d.add_store_agent_to_library + validate_graph_execution_permissions = d.validate_graph_execution_permissions # Store get_store_agents = d.get_store_agents diff --git a/autogpt_platform/backend/backend/executor/manager.py b/autogpt_platform/backend/backend/executor/manager.py index 2fb7d315e3..161e68b0d6 100644 --- a/autogpt_platform/backend/backend/executor/manager.py +++ b/autogpt_platform/backend/backend/executor/manager.py @@ -29,6 +29,7 @@ from backend.data.block import ( from backend.data.credit import UsageTransactionMetadata from backend.data.dynamic_fields import parse_execution_output from backend.data.execution import ( + ExecutionContext, ExecutionQueue, ExecutionStatus, GraphExecution, @@ -36,7 +37,6 @@ from backend.data.execution import ( NodeExecutionEntry, NodeExecutionResult, NodesInputMasks, - UserContext, ) from backend.data.graph import Link, Node from backend.data.model import GraphExecutionStats, NodeExecutionStats @@ -48,25 +48,8 @@ from backend.data.notifications import ( ZeroBalanceData, ) from backend.data.rabbitmq import SyncRabbitMQ -from backend.executor.activity_status_generator import ( - generate_activity_status_for_execution, -) -from backend.executor.utils import ( - GRACEFUL_SHUTDOWN_TIMEOUT_SECONDS, - GRAPH_EXECUTION_CANCEL_QUEUE_NAME, - GRAPH_EXECUTION_QUEUE_NAME, - CancelExecutionEvent, - ExecutionOutputEntry, - LogMetadata, - NodeExecutionProgress, - block_usage_cost, - create_execution_queue_config, - execution_usage_cost, - validate_exec, -) from backend.integrations.creds_manager import IntegrationCredentialsManager from backend.notifications.notifications import queue_notification -from backend.server.v2.AutoMod.manager import automod_manager from backend.util import json from backend.util.clients import ( get_async_execution_event_bus, @@ -93,7 +76,24 @@ from backend.util.retry import ( ) from backend.util.settings import Settings +from .activity_status_generator import generate_activity_status_for_execution +from .automod.manager import automod_manager from 
.cluster_lock import ClusterLock +from .utils import ( + GRACEFUL_SHUTDOWN_TIMEOUT_SECONDS, + GRAPH_EXECUTION_CANCEL_QUEUE_NAME, + GRAPH_EXECUTION_EXCHANGE, + GRAPH_EXECUTION_QUEUE_NAME, + GRAPH_EXECUTION_ROUTING_KEY, + CancelExecutionEvent, + ExecutionOutputEntry, + LogMetadata, + NodeExecutionProgress, + block_usage_cost, + create_execution_queue_config, + execution_usage_cost, + validate_exec, +) if TYPE_CHECKING: from backend.executor import DatabaseManagerAsyncClient, DatabaseManagerClient @@ -131,9 +131,8 @@ def execute_graph( cluster_lock: ClusterLock, ): """Execute graph using thread-local ExecutionProcessor instance""" - return _tls.processor.on_graph_execution( - graph_exec_entry, cancel_event, cluster_lock - ) + processor: ExecutionProcessor = _tls.processor + return processor.on_graph_execution(graph_exec_entry, cancel_event, cluster_lock) T = TypeVar("T") @@ -141,8 +140,8 @@ T = TypeVar("T") async def execute_node( node: Node, - creds_manager: IntegrationCredentialsManager, data: NodeExecutionEntry, + execution_processor: "ExecutionProcessor", execution_stats: NodeExecutionStats | None = None, nodes_input_masks: Optional[NodesInputMasks] = None, ) -> BlockOutput: @@ -162,9 +161,12 @@ async def execute_node( user_id = data.user_id graph_exec_id = data.graph_exec_id graph_id = data.graph_id + graph_version = data.graph_version node_exec_id = data.node_exec_id node_id = data.node_id node_block = node.block + execution_context = data.execution_context + creds_manager = execution_processor.creds_manager log_metadata = LogMetadata( logger=_logger, @@ -202,28 +204,66 @@ async def execute_node( # Inject extra execution arguments for the blocks via kwargs extra_exec_kwargs: dict = { "graph_id": graph_id, + "graph_version": graph_version, "node_id": node_id, "graph_exec_id": graph_exec_id, "node_exec_id": node_exec_id, "user_id": user_id, + "execution_context": execution_context, + "execution_processor": execution_processor, } - # Add user context from NodeExecutionEntry - extra_exec_kwargs["user_context"] = data.user_context - # Last-minute fetch credentials + acquire a system-wide read-write lock to prevent # changes during execution. ⚠️ This means a set of credentials can only be used by # one (running) block at a time; simultaneous execution of blocks using same # credentials is not supported. 
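The comment above describes the credential-locking model: a given set of credentials may only be held by one running block at a time, so the executor acquires a lock per credential ID and releases it in a `finally` clause. Below is a minimal in-process sketch of that pattern for illustration only; the real `IntegrationCredentialsManager` uses a system-wide Redis lock, and all names in the sketch are hypothetical.

```python
import asyncio
from collections import defaultdict
from contextlib import asynccontextmanager

# Hypothetical in-process stand-in for the system-wide Redis lock described above:
# one lock per credential ID, so two blocks can never use the same credentials at once.
_credential_locks: dict[str, asyncio.Lock] = defaultdict(asyncio.Lock)


@asynccontextmanager
async def hold_credentials(credentials_id: str):
    lock = _credential_locks[credentials_id]
    await lock.acquire()
    try:
        yield credentials_id  # safe to use the credentials while the lock is held
    finally:
        lock.release()  # always release, even if the block raised


async def run_block(name: str) -> None:
    async with hold_credentials("openai-key-123"):
        print(f"{name} holds the credentials")
        await asyncio.sleep(0.1)


async def main() -> None:
    # The second block waits until the first releases the lock.
    await asyncio.gather(run_block("block-a"), run_block("block-b"))


asyncio.run(main())
```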
- creds_lock = None + creds_locks: list[AsyncRedisLock] = [] input_model = cast(type[BlockSchema], node_block.input_schema) + + # Handle regular credentials fields for field_name, input_type in input_model.get_credentials_fields().items(): credentials_meta = input_type(**input_data[field_name]) - credentials, creds_lock = await creds_manager.acquire( - user_id, credentials_meta.id - ) + credentials, lock = await creds_manager.acquire(user_id, credentials_meta.id) + creds_locks.append(lock) extra_exec_kwargs[field_name] = credentials + # Handle auto-generated credentials (e.g., from GoogleDriveFileInput) + for kwarg_name, info in input_model.get_auto_credentials_fields().items(): + field_name = info["field_name"] + field_data = input_data.get(field_name) + if field_data and isinstance(field_data, dict): + # Check if _credentials_id key exists in the field data + if "_credentials_id" in field_data: + cred_id = field_data["_credentials_id"] + if cred_id: + # Credential ID provided - acquire credentials + provider = info.get("config", {}).get( + "provider", "external service" + ) + file_name = field_data.get("name", "selected file") + try: + credentials, lock = await creds_manager.acquire( + user_id, cred_id + ) + creds_locks.append(lock) + extra_exec_kwargs[kwarg_name] = credentials + except ValueError: + # Credential was deleted or doesn't exist + raise ValueError( + f"Authentication expired for '{file_name}' in field '{field_name}'. " + f"The saved {provider.capitalize()} credentials no longer exist. " + f"Please re-select the file to re-authenticate." + ) + # else: _credentials_id is explicitly None, skip credentials (for chained data) + else: + # _credentials_id key missing entirely - this is an error + provider = info.get("config", {}).get("provider", "external service") + file_name = field_data.get("name", "selected file") + raise ValueError( + f"Authentication missing for '{file_name}' in field '{field_name}'. " + f"Please re-select the file to authenticate with {provider.capitalize()}." 
+ ) + output_size = 0 # sentry tracking nonsense to get user counts for blocks because isolation scopes don't work :( @@ -239,8 +279,8 @@ async def execute_node( scope.set_tag("node_id", node_id) scope.set_tag("block_name", node_block.name) scope.set_tag("block_id", node_block.id) - for k, v in (data.user_context or UserContext(timezone="UTC")).model_dump().items(): - scope.set_tag(f"user_context.{k}", v) + for k, v in execution_context.model_dump().items(): + scope.set_tag(f"execution_context.{k}", v) try: async for output_name, output_data in node_block.execute( @@ -250,19 +290,24 @@ async def execute_node( output_size += len(json.dumps(output_data)) log_metadata.debug("Node produced output", **{output_name: output_data}) yield output_name, output_data - except Exception: + except Exception as ex: # Capture exception WITH context still set before restoring scope - sentry_sdk.capture_exception(scope=scope) + sentry_sdk.capture_exception(error=ex, scope=scope) sentry_sdk.flush() # Ensure it's sent before we restore scope # Re-raise to maintain normal error flow raise finally: - # Ensure credentials are released even if execution fails - if creds_lock and (await creds_lock.locked()) and (await creds_lock.owned()): - try: - await creds_lock.release() - except Exception as e: - log_metadata.error(f"Failed to release credentials lock: {e}") + # Ensure all credentials are released even if execution fails + for creds_lock in creds_locks: + if ( + creds_lock + and (await creds_lock.locked()) + and (await creds_lock.owned()) + ): + try: + await creds_lock.release() + except Exception as e: + log_metadata.error(f"Failed to release credentials lock: {e}") # Update execution stats if execution_stats is not None: @@ -282,9 +327,10 @@ async def _enqueue_next_nodes( user_id: str, graph_exec_id: str, graph_id: str, + graph_version: int, log_metadata: LogMetadata, nodes_input_masks: Optional[NodesInputMasks], - user_context: UserContext, + execution_context: ExecutionContext, ) -> list[NodeExecutionEntry]: async def add_enqueued_execution( node_exec_id: str, node_id: str, block_id: str, data: BlockInput @@ -299,11 +345,12 @@ async def _enqueue_next_nodes( user_id=user_id, graph_exec_id=graph_exec_id, graph_id=graph_id, + graph_version=graph_version, node_exec_id=node_exec_id, node_id=node_id, block_id=block_id, inputs=data, - user_context=user_context, + execution_context=execution_context, ) async def register_next_executions(node_link: Link) -> list[NodeExecutionEntry]: @@ -320,7 +367,9 @@ async def _enqueue_next_nodes( next_node_id = node_link.sink_id output_name, _ = output - next_data = parse_execution_output(output, next_output_name) + next_data = parse_execution_output( + output, next_output_name, next_node_id, next_input_name + ) if next_data is None and output_name != next_output_name: return enqueued_executions next_node = await db_client.get_node(next_node_id) @@ -330,17 +379,14 @@ async def _enqueue_next_nodes( # Or the same input to be consumed multiple times. async with synchronized(f"upsert_input-{next_node_id}-{graph_exec_id}"): # Add output data to the earliest incomplete execution, or create a new one. 
- next_node_exec_id, next_node_input = await db_client.upsert_execution_input( + next_node_exec, next_node_input = await db_client.upsert_execution_input( node_id=next_node_id, graph_exec_id=graph_exec_id, input_name=next_input_name, input_data=next_data, ) - await async_update_node_execution_status( - db_client=db_client, - exec_id=next_node_exec_id, - status=ExecutionStatus.INCOMPLETE, - ) + next_node_exec_id = next_node_exec.node_exec_id + await send_async_execution_update(next_node_exec) # Complete missing static input pins data using the last execution input. static_link_names = { @@ -369,7 +415,7 @@ async def _enqueue_next_nodes( # Incomplete input data, skip queueing the execution. if not next_node_input: - log_metadata.warning(f"Skipped queueing {suffix}") + log_metadata.info(f"Skipped queueing {suffix}") return enqueued_executions # Input is complete, enqueue the execution. @@ -561,8 +607,8 @@ class ExecutionProcessor: async for output_name, output_data in execute_node( node=node, - creds_manager=self.creds_manager, data=node_exec, + execution_processor=self, execution_stats=stats, nodes_input_masks=nodes_input_masks, ): @@ -656,6 +702,16 @@ class ExecutionProcessor: log_metadata.info( f"⚙️ Graph execution #{graph_exec.graph_exec_id} is already running, continuing where it left off." ) + elif exec_meta.status == ExecutionStatus.REVIEW: + exec_meta.status = ExecutionStatus.RUNNING + log_metadata.info( + f"⚙️ Graph execution #{graph_exec.graph_exec_id} was waiting for review, resuming execution." + ) + update_graph_execution_state( + db_client=db_client, + graph_exec_id=graph_exec.graph_exec_id, + status=ExecutionStatus.RUNNING, + ) elif exec_meta.status == ExecutionStatus.FAILED: exec_meta.status = ExecutionStatus.RUNNING log_metadata.info( @@ -693,31 +749,36 @@ class ExecutionProcessor: raise status exec_meta.status = status - # Activity status handling - activity_status = asyncio.run_coroutine_threadsafe( - generate_activity_status_for_execution( - graph_exec_id=graph_exec.graph_exec_id, - graph_id=graph_exec.graph_id, - graph_version=graph_exec.graph_version, - execution_stats=exec_stats, - db_client=get_db_async_client(), - user_id=graph_exec.user_id, - execution_status=status, - ), - self.node_execution_loop, - ).result(timeout=60.0) - if activity_status is not None: - exec_stats.activity_status = activity_status - log_metadata.info(f"Generated activity status: {activity_status}") + if status in [ExecutionStatus.COMPLETED, ExecutionStatus.FAILED]: + activity_response = asyncio.run_coroutine_threadsafe( + generate_activity_status_for_execution( + graph_exec_id=graph_exec.graph_exec_id, + graph_id=graph_exec.graph_id, + graph_version=graph_exec.graph_version, + execution_stats=exec_stats, + db_client=get_db_async_client(), + user_id=graph_exec.user_id, + execution_status=status, + ), + self.node_execution_loop, + ).result(timeout=60.0) + else: + activity_response = None + if activity_response is not None: + exec_stats.activity_status = activity_response["activity_status"] + exec_stats.correctness_score = activity_response["correctness_score"] + log_metadata.info( + f"Generated activity status: {activity_response['activity_status']} " + f"(correctness: {activity_response['correctness_score']:.2f})" + ) else: log_metadata.debug( - "Activity status generation disabled, not setting field" + "Activity status generation disabled, not setting fields" ) - + finally: # Communication handling self._handle_agent_run_notif(db_client, graph_exec, exec_stats) - finally: 
update_graph_execution_state( db_client=db_client, graph_exec_id=graph_exec.graph_exec_id, @@ -798,12 +859,17 @@ class ExecutionProcessor: execution_stats_lock = threading.Lock() # State holders ---------------------------------------------------- - running_node_execution: dict[str, NodeExecutionProgress] = defaultdict( + self.running_node_execution: dict[str, NodeExecutionProgress] = defaultdict( NodeExecutionProgress ) - running_node_evaluation: dict[str, Future] = {} + self.running_node_evaluation: dict[str, Future] = {} + self.execution_stats = execution_stats + self.execution_stats_lock = execution_stats_lock execution_queue = ExecutionQueue[NodeExecutionEntry]() + running_node_execution = self.running_node_execution + running_node_evaluation = self.running_node_evaluation + try: if db_client.get_credits(graph_exec.user_id) <= 0: raise InsufficientBalanceError( @@ -838,14 +904,18 @@ class ExecutionProcessor: ExecutionStatus.RUNNING, ExecutionStatus.QUEUED, ExecutionStatus.TERMINATED, + ExecutionStatus.REVIEW, ], ): - node_entry = node_exec.to_node_execution_entry(graph_exec.user_context) + node_entry = node_exec.to_node_execution_entry( + graph_exec.execution_context + ) execution_queue.add(node_entry) # ------------------------------------------------------------ # Main dispatch / polling loop ----------------------------- # ------------------------------------------------------------ + while not execution_queue.empty(): if cancel.is_set(): break @@ -999,7 +1069,12 @@ class ExecutionProcessor: elif error is not None: execution_status = ExecutionStatus.FAILED else: - execution_status = ExecutionStatus.COMPLETED + if db_client.has_pending_reviews_for_graph_exec( + graph_exec.graph_exec_id + ): + execution_status = ExecutionStatus.REVIEW + else: + execution_status = ExecutionStatus.COMPLETED if error: execution_stats.error = str(error) or type(error).__name__ @@ -1135,9 +1210,10 @@ class ExecutionProcessor: user_id=graph_exec.user_id, graph_exec_id=graph_exec.graph_exec_id, graph_id=graph_exec.graph_id, + graph_version=graph_exec.graph_version, log_metadata=log_metadata, nodes_input_masks=nodes_input_masks, - user_context=graph_exec.user_context, + execution_context=graph_exec.execution_context, ): execution_queue.add(next_execution) @@ -1335,6 +1411,9 @@ class ExecutionManager(AppProcess): return self._run_client def run(self): + logger.info( + f"[{self.service_name}] 🆔 Pod assigned executor_id: {self.executor_id}" + ) logger.info(f"[{self.service_name}] ⏳ Spawn max-{self.pool_size} workers...") pool_size_gauge.set(self.pool_size) @@ -1456,14 +1535,43 @@ class ExecutionManager(AppProcess): @func_retry def _ack_message(reject: bool, requeue: bool): - """Acknowledge or reject the message based on execution status.""" + """ + Acknowledge or reject the message based on execution status. 
+ + Args: + reject: Whether to reject the message + requeue: Whether to requeue the message + """ # Connection can be lost, so always get a fresh channel channel = self.run_client.get_channel() if reject: - channel.connection.add_callback_threadsafe( - lambda: channel.basic_nack(delivery_tag, requeue=requeue) - ) + if requeue and settings.config.requeue_by_republishing: + # Send rejected message to back of queue using republishing + def _republish_to_back(): + try: + # First republish to back of queue + self.run_client.publish_message( + routing_key=GRAPH_EXECUTION_ROUTING_KEY, + message=body.decode(), # publish_message expects string, not bytes + exchange=GRAPH_EXECUTION_EXCHANGE, + ) + # Then reject without requeue (message already republished) + channel.basic_nack(delivery_tag, requeue=False) + logger.info("Message requeued to back of queue") + except Exception as e: + logger.error( + f"[{self.service_name}] Failed to requeue message to back: {e}" + ) + # Fall back to traditional requeue on failure + channel.basic_nack(delivery_tag, requeue=True) + + channel.connection.add_callback_threadsafe(_republish_to_back) + else: + # Traditional requeue (goes to front) or no requeue + channel.connection.add_callback_threadsafe( + lambda: channel.basic_nack(delivery_tag, requeue=requeue) + ) else: channel.connection.add_callback_threadsafe( lambda: channel.basic_ack(delivery_tag) @@ -1495,10 +1603,33 @@ class ExecutionManager(AppProcess): graph_exec_id = graph_exec_entry.graph_exec_id user_id = graph_exec_entry.user_id graph_id = graph_exec_entry.graph_id + root_exec_id = graph_exec_entry.execution_context.root_execution_id + parent_exec_id = graph_exec_entry.execution_context.parent_execution_id + logger.info( - f"[{self.service_name}] Received RUN for graph_exec_id={graph_exec_id}, user_id={user_id}" + f"[{self.service_name}] Received RUN for graph_exec_id={graph_exec_id}, user_id={user_id}, executor_id={self.executor_id}" + + (f", root={root_exec_id}" if root_exec_id else "") + + (f", parent={parent_exec_id}" if parent_exec_id else "") ) + # Check if root execution is already terminated (prevents orphaned child executions) + if root_exec_id and root_exec_id != graph_exec_id: + parent_exec = get_db_client().get_graph_execution_meta( + execution_id=root_exec_id, + user_id=user_id, + ) + if parent_exec and parent_exec.status == ExecutionStatus.TERMINATED: + logger.info( + f"[{self.service_name}] Skipping execution {graph_exec_id} - parent {root_exec_id} is TERMINATED" + ) + # Mark this child as terminated since parent was stopped + get_db_client().update_graph_execution_stats( + graph_exec_id=graph_exec_id, + status=ExecutionStatus.TERMINATED, + ) + _ack_message(reject=False, requeue=False) + return + # Check user rate limit before processing try: # Only check executions from the last 24 hours for performance @@ -1546,7 +1677,7 @@ class ExecutionManager(AppProcess): # Either someone else has it or Redis is unavailable if current_owner is not None: logger.warning( - f"[{self.service_name}] Graph {graph_exec_id} already running on pod {current_owner}" + f"[{self.service_name}] Graph {graph_exec_id} already running on pod {current_owner}, current executor_id={self.executor_id}" ) _ack_message(reject=True, requeue=False) else: @@ -1555,18 +1686,30 @@ class ExecutionManager(AppProcess): ) _ack_message(reject=True, requeue=True) return - self._execution_locks[graph_exec_id] = cluster_lock - logger.info( - f"[{self.service_name}] Acquired cluster lock for {graph_exec_id} with executor {self.executor_id}" 
- ) + # Wrap entire block after successful lock acquisition + try: + self._execution_locks[graph_exec_id] = cluster_lock - cancel_event = threading.Event() + logger.info( + f"[{self.service_name}] Successfully acquired cluster lock for {graph_exec_id}, executor_id={self.executor_id}" + ) - future = self.executor.submit( - execute_graph, graph_exec_entry, cancel_event, cluster_lock - ) - self.active_graph_runs[graph_exec_id] = (future, cancel_event) + cancel_event = threading.Event() + future = self.executor.submit( + execute_graph, graph_exec_entry, cancel_event, cluster_lock + ) + self.active_graph_runs[graph_exec_id] = (future, cancel_event) + except Exception as e: + logger.warning( + f"[{self.service_name}] Failed to setup execution for {graph_exec_id}: {type(e).__name__}: {e}" + ) + # Release cluster lock before requeue + cluster_lock.release() + if graph_exec_id in self._execution_locks: + del self._execution_locks[graph_exec_id] + _ack_message(reject=True, requeue=True) + return self._update_prompt_metrics() def _on_run_done(f: Future): @@ -1586,6 +1729,9 @@ class ExecutionManager(AppProcess): finally: # Release the cluster-wide execution lock if graph_exec_id in self._execution_locks: + logger.info( + f"[{self.service_name}] Releasing cluster lock for {graph_exec_id}, executor_id={self.executor_id}" + ) self._execution_locks[graph_exec_id].release() del self._execution_locks[graph_exec_id] self._cleanup_completed_runs() @@ -1714,6 +1860,8 @@ class ExecutionManager(AppProcess): logger.info(f"{prefix} ✅ Finished GraphExec cleanup") + super().cleanup() + # ------- UTILITIES ------- # diff --git a/autogpt_platform/backend/backend/executor/manager_test.py b/autogpt_platform/backend/backend/executor/manager_test.py index 0b37c2e6a7..bdfdb5d724 100644 --- a/autogpt_platform/backend/backend/executor/manager_test.py +++ b/autogpt_platform/backend/backend/executor/manager_test.py @@ -3,16 +3,16 @@ import logging import fastapi.responses import pytest -import backend.server.v2.library.model -import backend.server.v2.store.model +import backend.api.features.library.model +import backend.api.features.store.model +from backend.api.model import CreateGraph +from backend.api.rest_api import AgentServer from backend.blocks.basic import StoreValueBlock from backend.blocks.data_manipulation import FindInDictionaryBlock from backend.blocks.io import AgentInputBlock from backend.blocks.maths import CalculatorBlock, Operation from backend.data import execution, graph from backend.data.model import User -from backend.server.model import CreateGraph -from backend.server.rest_api import AgentServer from backend.usecases.sample import create_test_graph, create_test_user from backend.util.test import SpinTestServer, wait_execution @@ -356,7 +356,7 @@ async def test_execute_preset(server: SpinTestServer): test_graph = await create_graph(server, test_graph, test_user) # Create preset with initial values - preset = backend.server.v2.library.model.LibraryAgentPresetCreatable( + preset = backend.api.features.library.model.LibraryAgentPresetCreatable( name="Test Preset With Clash", description="Test preset with clashing input values", graph_id=test_graph.id, @@ -444,7 +444,7 @@ async def test_execute_preset_with_clash(server: SpinTestServer): test_graph = await create_graph(server, test_graph, test_user) # Create preset with initial values - preset = backend.server.v2.library.model.LibraryAgentPresetCreatable( + preset = backend.api.features.library.model.LibraryAgentPresetCreatable( name="Test Preset With Clash", 
description="Test preset with clashing input values", graph_id=test_graph.id, @@ -485,7 +485,7 @@ async def test_store_listing_graph(server: SpinTestServer): test_user = await create_test_user() test_graph = await create_graph(server, create_test_graph(), test_user) - store_submission_request = backend.server.v2.store.model.StoreSubmissionRequest( + store_submission_request = backend.api.features.store.model.StoreSubmissionRequest( agent_id=test_graph.id, agent_version=test_graph.version, slug=test_graph.id, @@ -514,13 +514,21 @@ async def test_store_listing_graph(server: SpinTestServer): admin_user = await create_test_user(alt_user=True) await server.agent_server.test_review_store_listing( - backend.server.v2.store.model.ReviewSubmissionRequest( + backend.api.features.store.model.ReviewSubmissionRequest( store_listing_version_id=slv_id, is_approved=True, comments="Test comments", ), user_id=admin_user.id, ) + + # Add the approved store listing to the admin user's library so they can execute it + from backend.api.features.library.db import add_store_agent_to_library + + await add_store_agent_to_library( + store_listing_version_id=slv_id, user_id=admin_user.id + ) + alt_test_user = admin_user data = {"input_1": "Hello", "input_2": "World"} diff --git a/autogpt_platform/backend/backend/executor/scheduler.py b/autogpt_platform/backend/backend/executor/scheduler.py index 77ed652886..06c50bf82e 100644 --- a/autogpt_platform/backend/backend/executor/scheduler.py +++ b/autogpt_platform/backend/backend/executor/scheduler.py @@ -2,6 +2,7 @@ import asyncio import logging import os import threading +import uuid from enum import Enum from typing import Optional from urllib.parse import parse_qs, urlencode, urlparse, urlunparse @@ -22,19 +23,29 @@ from dotenv import load_dotenv from pydantic import BaseModel, Field, ValidationError from sqlalchemy import MetaData, create_engine +from backend.data.auth.oauth import cleanup_expired_oauth_tokens from backend.data.block import BlockInput from backend.data.execution import GraphExecutionWithNodes from backend.data.model import CredentialsMetaInput +from backend.data.onboarding import increment_runs from backend.executor import utils as execution_utils from backend.monitoring import ( NotificationJobArgs, process_existing_batches, process_weekly_summary, report_block_error_rates, + report_execution_accuracy_alerts, report_late_executions, ) +from backend.util.clients import get_scheduler_client from backend.util.cloud_storage import cleanup_expired_files_async -from backend.util.exceptions import NotAuthorizedError, NotFoundError +from backend.util.exceptions import ( + GraphNotFoundError, + GraphNotInLibraryError, + GraphValidationError, + NotAuthorizedError, + NotFoundError, +) from backend.util.logging import PrefixFilter from backend.util.retry import func_retry from backend.util.service import ( @@ -145,6 +156,7 @@ async def _execute_graph(**kwargs): inputs=args.input_data, graph_credentials_inputs=args.input_credentials, ) + await increment_runs(args.user_id) elapsed = asyncio.get_event_loop().time() - start_time logger.info( f"Graph execution started with ID {graph_exec.id} for graph {args.graph_id} " @@ -155,6 +167,12 @@ async def _execute_graph(**kwargs): f"Graph execution {graph_exec.id} took {elapsed:.2f}s to create/publish - " f"this is unusually slow and may indicate resource contention" ) + except GraphNotFoundError as e: + await _handle_graph_not_available(e, args, start_time) + except GraphNotInLibraryError as e: + await 
_handle_graph_not_available(e, args, start_time) + except GraphValidationError: + await _handle_graph_validation_error(args) except Exception as e: elapsed = asyncio.get_event_loop().time() - start_time logger.error( @@ -163,12 +181,79 @@ async def _execute_graph(**kwargs): ) +async def _handle_graph_validation_error(args: "GraphExecutionJobArgs") -> None: + logger.error( + f"Scheduled Graph {args.graph_id} failed validation. Unscheduling graph" + ) + if args.schedule_id: + scheduler_client = get_scheduler_client() + await scheduler_client.delete_schedule( + schedule_id=args.schedule_id, + user_id=args.user_id, + ) + else: + logger.error( + f"Unable to unschedule graph: {args.graph_id} as this is an old job with no associated schedule_id please remove manually" + ) + + +async def _handle_graph_not_available( + e: Exception, args: "GraphExecutionJobArgs", start_time: float +) -> None: + elapsed = asyncio.get_event_loop().time() - start_time + logger.warning( + f"Scheduled execution blocked for deleted/archived graph {args.graph_id} " + f"(user {args.user_id}) after {elapsed:.2f}s: {e}" + ) + # Clean up orphaned schedules for this graph + await _cleanup_orphaned_schedules_for_graph(args.graph_id, args.user_id) + + +async def _cleanup_orphaned_schedules_for_graph(graph_id: str, user_id: str) -> None: + """ + Clean up orphaned schedules for a specific graph when execution fails with GraphNotAccessibleError. + This happens when an agent is pulled from the Marketplace or deleted + but schedules still exist. + """ + # Use scheduler client to access the scheduler service + scheduler_client = get_scheduler_client() + + # Find all schedules for this graph and user + schedules = await scheduler_client.get_execution_schedules( + graph_id=graph_id, user_id=user_id + ) + + for schedule in schedules: + try: + await scheduler_client.delete_schedule( + schedule_id=schedule.id, user_id=user_id + ) + logger.info( + f"Cleaned up orphaned schedule {schedule.id} for deleted/archived graph {graph_id}" + ) + except Exception: + logger.exception( + f"Failed to delete orphaned schedule {schedule.id} for graph {graph_id}" + ) + + def cleanup_expired_files(): """Clean up expired files from cloud storage.""" # Wait for completion run_async(cleanup_expired_files_async()) +def cleanup_oauth_tokens(): + """Clean up expired OAuth tokens from the database.""" + # Wait for completion + run_async(cleanup_expired_oauth_tokens()) + + +def execution_accuracy_alerts(): + """Check execution accuracy and send alerts if drops are detected.""" + return report_execution_accuracy_alerts() + + # Monitoring functions are now imported from monitoring module @@ -179,9 +264,11 @@ class Jobstores(Enum): class GraphExecutionJobArgs(BaseModel): + schedule_id: str | None = None user_id: str graph_id: str graph_version: int + agent_name: str | None = None cron: str input_data: BlockInput input_credentials: dict[str, CredentialsMetaInput] = Field(default_factory=dict) @@ -248,7 +335,7 @@ class Scheduler(AppService): raise UnhealthyServiceError("Scheduler is still initializing") # Check if we're in the middle of cleanup - if self.cleaned_up: + if self._shutting_down: return await super().health_check() # Normal operation - check if scheduler is running @@ -366,6 +453,28 @@ class Scheduler(AppService): jobstore=Jobstores.EXECUTION.value, ) + # OAuth Token Cleanup - configurable interval + self.scheduler.add_job( + cleanup_oauth_tokens, + id="cleanup_oauth_tokens", + trigger="interval", + replace_existing=True, + 
seconds=config.oauth_token_cleanup_interval_hours + * 3600, # Convert hours to seconds + jobstore=Jobstores.EXECUTION.value, + ) + + # Execution Accuracy Monitoring - configurable interval + self.scheduler.add_job( + execution_accuracy_alerts, + id="report_execution_accuracy_alerts", + trigger="interval", + replace_existing=True, + seconds=config.execution_accuracy_check_interval_hours + * 3600, # Convert hours to seconds + jobstore=Jobstores.EXECUTION.value, + ) + self.scheduler.add_listener(job_listener, EVENT_JOB_EXECUTED | EVENT_JOB_ERROR) self.scheduler.add_listener(job_missed_listener, EVENT_JOB_MISSED) self.scheduler.add_listener(job_max_instances_listener, EVENT_JOB_MAX_INSTANCES) @@ -375,7 +484,6 @@ class Scheduler(AppService): super().run_service() def cleanup(self): - super().cleanup() if self.scheduler: logger.info("⏳ Shutting down scheduler...") self.scheduler.shutdown(wait=True) @@ -390,7 +498,7 @@ class Scheduler(AppService): logger.info("⏳ Waiting for event loop thread to finish...") _event_loop_thread.join(timeout=SCHEDULER_OPERATION_TIMEOUT_SECONDS) - logger.info("Scheduler cleanup complete.") + super().cleanup() @expose def add_graph_execution_schedule( @@ -428,11 +536,14 @@ class Scheduler(AppService): logger.info( f"Scheduling job for user {user_id} with timezone {user_timezone} (cron: {cron})" ) + schedule_id = str(uuid.uuid4()) job_args = GraphExecutionJobArgs( + schedule_id=schedule_id, user_id=user_id, graph_id=graph_id, graph_version=graph_version, + agent_name=name, cron=cron, input_data=input_data, input_credentials=input_credentials, @@ -444,6 +555,7 @@ class Scheduler(AppService): trigger=CronTrigger.from_crontab(cron, timezone=user_timezone), jobstore=Jobstores.EXECUTION.value, replace_existing=True, + id=schedule_id, ) logger.info( f"Added job {job.id} with cron schedule '{cron}' in timezone {user_timezone}, input data: {input_data}" @@ -510,6 +622,16 @@ class Scheduler(AppService): """Manually trigger cleanup of expired cloud storage files.""" return cleanup_expired_files() + @expose + def execute_cleanup_oauth_tokens(self): + """Manually trigger cleanup of expired OAuth tokens.""" + return cleanup_oauth_tokens() + + @expose + def execute_report_execution_accuracy_alerts(self): + """Manually trigger execution accuracy alert checking.""" + return execution_accuracy_alerts() + class SchedulerClient(AppServiceClient): @classmethod diff --git a/autogpt_platform/backend/backend/executor/scheduler_test.py b/autogpt_platform/backend/backend/executor/scheduler_test.py index c4fa35d46c..21acbaf0e1 100644 --- a/autogpt_platform/backend/backend/executor/scheduler_test.py +++ b/autogpt_platform/backend/backend/executor/scheduler_test.py @@ -1,7 +1,7 @@ import pytest +from backend.api.model import CreateGraph from backend.data import db -from backend.server.model import CreateGraph from backend.usecases.sample import create_test_graph, create_test_user from backend.util.clients import get_scheduler_client from backend.util.test import SpinTestServer diff --git a/autogpt_platform/backend/backend/executor/utils.py b/autogpt_platform/backend/backend/executor/utils.py index 7273e83fea..bcd3dcf3b6 100644 --- a/autogpt_platform/backend/backend/executor/utils.py +++ b/autogpt_platform/backend/backend/executor/utils.py @@ -10,6 +10,7 @@ from pydantic import BaseModel, JsonValue, ValidationError from backend.data import execution as execution_db from backend.data import graph as graph_db +from backend.data import user as user_db from backend.data.block import ( Block, 
BlockCostType, @@ -24,48 +25,32 @@ from backend.data.db import prisma # Import dynamic field utilities from centralized location from backend.data.dynamic_fields import merge_execution_input from backend.data.execution import ( + ExecutionContext, ExecutionStatus, + GraphExecutionMeta, GraphExecutionStats, GraphExecutionWithNodes, NodesInputMasks, - UserContext, + get_graph_execution, ) from backend.data.graph import GraphModel, Node -from backend.data.model import CredentialsMetaInput +from backend.data.model import USER_TIMEZONE_NOT_SET, CredentialsMetaInput from backend.data.rabbitmq import Exchange, ExchangeType, Queue, RabbitMQConfig -from backend.data.user import get_user_by_id from backend.util.clients import ( get_async_execution_event_bus, get_async_execution_queue, get_database_manager_async_client, get_integration_credentials_store, ) -from backend.util.exceptions import GraphValidationError, NotFoundError -from backend.util.logging import TruncatedLogger +from backend.util.exceptions import ( + GraphNotFoundError, + GraphValidationError, + NotFoundError, +) +from backend.util.logging import TruncatedLogger, is_structured_logging_enabled from backend.util.settings import Config from backend.util.type import convert - -async def get_user_context(user_id: str) -> UserContext: - """ - Get UserContext for a user, always returns a valid context with timezone. - Defaults to UTC if user has no timezone set. - """ - user_context = UserContext(timezone="UTC") # Default to UTC - try: - user = await get_user_by_id(user_id) - if user and user.timezone and user.timezone != "not-set": - user_context.timezone = user.timezone - logger.debug(f"Retrieved user context: timezone={user.timezone}") - else: - logger.debug("User has no timezone set, using UTC") - except Exception as e: - logger.warning(f"Could not fetch user timezone: {e}") - # Continue with UTC as default - - return user_context - - config = Config() logger = TruncatedLogger(logging.getLogger(__name__), prefix="[GraphExecutorUtil]") @@ -93,7 +78,11 @@ class LogMetadata(TruncatedLogger): "node_id": node_id, "block_name": block_name, } - prefix = f"[ExecutionManager|uid:{user_id}|gid:{graph_id}|nid:{node_id}]|geid:{graph_eid}|neid:{node_eid}|{block_name}]" + prefix = ( + "[ExecutionManager]" + if is_structured_logging_enabled() + else f"[ExecutionManager|uid:{user_id}|gid:{graph_id}|nid:{node_id}]|geid:{graph_eid}|neid:{node_eid}|{block_name}]" # noqa + ) super().__init__( logger, max_length=max_length, @@ -466,6 +455,7 @@ async def validate_and_construct_node_execution_input( graph_version: Optional[int] = None, graph_credentials_inputs: Optional[Mapping[str, CredentialsMetaInput]] = None, nodes_input_masks: Optional[NodesInputMasks] = None, + is_sub_graph: bool = False, ) -> tuple[GraphModel, list[tuple[str, BlockInput]], NodesInputMasks]: """ Public wrapper that handles graph fetching, credential mapping, and validation+construction. @@ -499,9 +489,21 @@ async def validate_and_construct_node_execution_input( user_id=user_id, version=graph_version, include_subgraphs=True, + # Execution/access permission is checked by validate_graph_execution_permissions + skip_access_check=True, ) if not graph: - raise NotFoundError(f"Graph #{graph_id} not found.") + raise GraphNotFoundError(f"Graph #{graph_id} not found.") + + # Validate that the user has permission to execute this graph + # This checks both library membership and execution permissions, + # raising specific exceptions for appropriate error handling. 
+ await gdb.validate_graph_execution_permissions( + user_id=user_id, + graph_id=graph.id, + graph_version=graph.version, + is_sub_graph=is_sub_graph, + ) nodes_input_masks = _merge_nodes_input_masks( ( @@ -602,20 +604,74 @@ class CancelExecutionEvent(BaseModel): graph_exec_id: str +async def _get_child_executions(parent_exec_id: str) -> list["GraphExecutionMeta"]: + """ + Get all child executions of a parent execution, using the direct execution DB when connected and the database manager client otherwise. + + Args: + parent_exec_id: Parent graph execution ID + + Returns: + List of child graph executions + """ + from backend.data.db import prisma + + if prisma.is_connected(): + edb = execution_db + else: + edb = get_database_manager_async_client() + + return await edb.get_child_graph_executions(parent_exec_id) + + async def stop_graph_execution( user_id: str, graph_exec_id: str, wait_timeout: float = 15.0, + cascade: bool = True, ): """ + Stop a graph execution and optionally all its child executions. + Mechanism: - 1. Set the cancel event - 2. Graph executor's cancel handler thread detects the event, terminates workers, + 1. Set the cancel event for this execution + 2. If cascade=True, recursively stop all child executions + 3. Graph executor's cancel handler thread detects the event, terminates workers, reinitializes worker pool, and returns. - 3. Update execution statuses in DB and set `error` outputs to `"TERMINATED"`. + 4. Update execution statuses in DB and set `error` outputs to `"TERMINATED"`. + + Args: + user_id: User ID who owns the execution + graph_exec_id: Graph execution ID to stop + wait_timeout: Maximum time to wait for execution to stop (seconds) + cascade: If True, recursively stop all child executions """ queue_client = await get_async_execution_queue() db = execution_db if prisma.is_connected() else get_database_manager_async_client() + + # First, find and stop all child executions if cascading + if cascade: + children = await _get_child_executions(graph_exec_id) + logger.info( + f"Stopping {len(children)} child executions of execution {graph_exec_id}" + ) + + # Stop all children in parallel (recursively, with cascading enabled) + if children: + await asyncio.gather( + *[ + stop_graph_execution( + user_id=user_id, + graph_exec_id=child.id, + wait_timeout=wait_timeout, + cascade=True, # Recursively cascade to grandchildren + ) + for child in children + ], + return_exceptions=True, # Don't fail parent stop if child stop fails + ) + + # Now stop this execution await queue_client.publish_message( routing_key="", message=CancelExecutionEvent(graph_exec_id=graph_exec_id).model_dump_json(), @@ -679,6 +735,8 @@ async def add_graph_execution( graph_version: Optional[int] = None, graph_credentials_inputs: Optional[Mapping[str, CredentialsMetaInput]] = None, nodes_input_masks: Optional[NodesInputMasks] = None, + execution_context: Optional[ExecutionContext] = None, + graph_exec_id: Optional[str] = None, ) -> GraphExecutionWithNodes: """ Adds a graph execution to the queue and returns the execution entry. @@ -692,31 +750,55 @@ async def add_graph_execution( graph_credentials_inputs: Credentials inputs to use in the execution. Keys should map to the keys generated by `GraphModel.aggregate_credentials_inputs`. nodes_input_masks: Node inputs to use in the execution. + execution_context: Execution context for the run; carries the parent execution ID for nested (sub-graph) executions. + graph_exec_id: If provided, resume this existing execution instead of creating a new one. Returns: GraphExecutionEntry: The entry for the graph execution.
Raises: ValueError: If the graph is not found or if there are validation errors. + NotFoundError: If graph_exec_id is provided but execution is not found. """ if prisma.is_connected(): edb = execution_db + udb = user_db + gdb = graph_db else: - edb = get_database_manager_async_client() + edb = udb = gdb = get_database_manager_async_client() - graph, starting_nodes_input, compiled_nodes_input_masks = ( - await validate_and_construct_node_execution_input( - graph_id=graph_id, + # Get or create the graph execution + if graph_exec_id: + # Resume existing execution + graph_exec = await get_graph_execution( user_id=user_id, - graph_inputs=inputs or {}, - graph_version=graph_version, - graph_credentials_inputs=graph_credentials_inputs, - nodes_input_masks=nodes_input_masks, + execution_id=graph_exec_id, + include_node_executions=True, + ) + + if not graph_exec: + raise NotFoundError(f"Graph execution #{graph_exec_id} not found.") + + # Use existing execution's compiled input masks + compiled_nodes_input_masks = graph_exec.nodes_input_masks or {} + + logger.info(f"Resuming graph execution #{graph_exec.id} for graph #{graph_id}") + else: + parent_exec_id = ( + execution_context.parent_execution_id if execution_context else None + ) + + # Create new execution + graph, starting_nodes_input, compiled_nodes_input_masks = ( + await validate_and_construct_node_execution_input( + graph_id=graph_id, + user_id=user_id, + graph_inputs=inputs or {}, + graph_version=graph_version, + graph_credentials_inputs=graph_credentials_inputs, + nodes_input_masks=nodes_input_masks, + is_sub_graph=parent_exec_id is not None, + ) ) - ) - graph_exec = None - try: - # Sanity check: running add_graph_execution with the properties of - # the graph_exec created here should create the same execution again. graph_exec = await edb.create_graph_execution( user_id=user_id, graph_id=graph_id, @@ -726,18 +808,38 @@ async def add_graph_execution( nodes_input_masks=nodes_input_masks, starting_nodes_input=starting_nodes_input, preset_id=preset_id, + parent_graph_exec_id=parent_exec_id, ) - graph_exec_entry = graph_exec.to_graph_execution_entry( - user_context=await get_user_context(user_id), - compiled_nodes_input_masks=compiled_nodes_input_masks, - ) logger.info( f"Created graph execution #{graph_exec.id} for graph " - f"#{graph_id} with {len(starting_nodes_input)} starting nodes. " - f"Now publishing to execution queue." 
+ f"#{graph_id} with {len(starting_nodes_input)} starting nodes" ) + # Generate execution context if it's not provided + if execution_context is None: + user = await udb.get_user_by_id(user_id) + settings = await gdb.get_graph_settings(user_id=user_id, graph_id=graph_id) + + execution_context = ExecutionContext( + safe_mode=( + settings.human_in_the_loop_safe_mode + if settings.human_in_the_loop_safe_mode is not None + else True + ), + user_timezone=( + user.timezone if user.timezone != USER_TIMEZONE_NOT_SET else "UTC" + ), + root_execution_id=graph_exec.id, + ) + + try: + graph_exec_entry = graph_exec.to_graph_execution_entry( + compiled_nodes_input_masks=compiled_nodes_input_masks, + execution_context=execution_context, + ) + logger.info(f"Publishing execution {graph_exec.id} to execution queue") + exec_queue = await get_async_execution_queue() await exec_queue.publish_message( routing_key=GRAPH_EXECUTION_ROUTING_KEY, diff --git a/autogpt_platform/backend/backend/executor/utils_test.py b/autogpt_platform/backend/backend/executor/utils_test.py index 64f40bbcb6..8854214e14 100644 --- a/autogpt_platform/backend/backend/executor/utils_test.py +++ b/autogpt_platform/backend/backend/executor/utils_test.py @@ -111,6 +111,35 @@ def test_parse_execution_output(): parse_execution_output(output, "result_@_attr_$_0_#_key") is None ) # Should fail at @_attr + # Test case 7: Tool pin routing with matching node ID and pin name + output = ("tools_^_node123_~_query", "search term") + assert parse_execution_output(output, "tools", "node123", "query") == "search term" + + # Test case 8: Tool pin routing with node ID mismatch + output = ("tools_^_node123_~_query", "search term") + assert parse_execution_output(output, "tools", "node456", "query") is None + + # Test case 9: Tool pin routing with pin name mismatch + output = ("tools_^_node123_~_query", "search term") + assert parse_execution_output(output, "tools", "node123", "different_pin") is None + + # Test case 10: Tool pin routing with complex field names + output = ("tools_^_node789_~_nested_field", {"key": "value"}) + result = parse_execution_output(output, "tools", "node789", "nested_field") + assert result == {"key": "value"} + + # Test case 11: Tool pin routing missing required parameters should raise error + output = ("tools_^_node123_~_query", "search term") + try: + parse_execution_output(output, "tools", "node123") # Missing sink_pin_name + assert False, "Should have raised ValueError" + except ValueError as e: + assert "must be provided for tool pin routing" in str(e) + + # Test case 12: Non-tool pin with similar pattern should use normal logic + output = ("tools_^_node123_~_query", "search term") + assert parse_execution_output(output, "different_name", "node123", "query") is None + def test_merge_execution_input(): # Test case for basic list extraction @@ -319,9 +348,6 @@ async def test_add_graph_execution_is_repeatable(mocker: MockerFixture): mock_graph_exec.node_executions = [] # Add this to avoid AttributeError mock_graph_exec.to_graph_execution_entry.return_value = mocker.MagicMock() - # Mock user context - mock_user_context = {"user_id": user_id, "context": "test_context"} - # Mock the queue and event bus mock_queue = mocker.AsyncMock() mock_event_bus = mocker.MagicMock() @@ -333,7 +359,8 @@ async def test_add_graph_execution_is_repeatable(mocker: MockerFixture): ) mock_edb = mocker.patch("backend.executor.utils.execution_db") mock_prisma = mocker.patch("backend.executor.utils.prisma") - mock_get_user_context = 
mocker.patch("backend.executor.utils.get_user_context") + mock_udb = mocker.patch("backend.executor.utils.user_db") + mock_gdb = mocker.patch("backend.executor.utils.graph_db") mock_get_queue = mocker.patch("backend.executor.utils.get_async_execution_queue") mock_get_event_bus = mocker.patch( "backend.executor.utils.get_async_execution_event_bus" @@ -351,7 +378,14 @@ async def test_add_graph_execution_is_repeatable(mocker: MockerFixture): return_value=mock_graph_exec ) mock_edb.update_node_execution_status_batch = mocker.AsyncMock() - mock_get_user_context.return_value = mock_user_context + # Mock user and settings data + mock_user = mocker.MagicMock() + mock_user.timezone = "UTC" + mock_settings = mocker.MagicMock() + mock_settings.human_in_the_loop_safe_mode = True + + mock_udb.get_user_by_id = mocker.AsyncMock(return_value=mock_user) + mock_gdb.get_graph_settings = mocker.AsyncMock(return_value=mock_settings) mock_get_queue.return_value = mock_queue mock_get_event_bus.return_value = mock_event_bus @@ -379,6 +413,7 @@ async def test_add_graph_execution_is_repeatable(mocker: MockerFixture): nodes_input_masks=nodes_input_masks, starting_nodes_input=starting_nodes_input, preset_id=preset_id, + parent_graph_exec_id=None, ) # Set up the graph execution mock to have properties we can extract diff --git a/autogpt_platform/backend/backend/integrations/credentials_store.py b/autogpt_platform/backend/backend/integrations/credentials_store.py index 75ae346d5d..7d805913b2 100644 --- a/autogpt_platform/backend/backend/integrations/credentials_store.py +++ b/autogpt_platform/backend/backend/integrations/credentials_store.py @@ -15,6 +15,7 @@ from backend.data.model import ( OAuth2Credentials, OAuthState, UserIntegrations, + UserPasswordCredentials, ) from backend.data.redis_client import get_redis_async from backend.util.settings import Settings @@ -207,6 +208,14 @@ v0_credentials = APIKeyCredentials( expires_at=None, ) +webshare_proxy_credentials = UserPasswordCredentials( + id="a5b3c7d9-2e4f-4a6b-8c1d-9e0f1a2b3c4d", + provider="webshare_proxy", + username=SecretStr(settings.secrets.webshare_proxy_username), + password=SecretStr(settings.secrets.webshare_proxy_password), + title="Use Credits for Webshare Proxy", +) + DEFAULT_CREDENTIALS = [ ollama_credentials, revid_credentials, @@ -233,6 +242,7 @@ DEFAULT_CREDENTIALS = [ google_maps_credentials, llama_api_credentials, v0_credentials, + webshare_proxy_credentials, ] @@ -321,6 +331,11 @@ class IntegrationCredentialsStore: all_credentials.append(zerobounce_credentials) if settings.secrets.google_maps_api_key: all_credentials.append(google_maps_credentials) + if ( + settings.secrets.webshare_proxy_username + and settings.secrets.webshare_proxy_password + ): + all_credentials.append(webshare_proxy_credentials) return all_credentials async def get_creds_by_id( @@ -399,7 +414,15 @@ class IntegrationCredentialsStore: # ===================== OAUTH STATES ===================== # async def store_state_token( - self, user_id: str, provider: str, scopes: list[str], use_pkce: bool = False + self, + user_id: str, + provider: str, + scopes: list[str], + use_pkce: bool = False, + # New parameters for external API OAuth flows + callback_url: Optional[str] = None, + state_metadata: Optional[dict] = None, + initiated_by_api_key_id: Optional[str] = None, ) -> tuple[str, str]: token = secrets.token_urlsafe(32) expires_at = datetime.now(timezone.utc) + timedelta(minutes=10) @@ -412,6 +435,10 @@ class IntegrationCredentialsStore: code_verifier=code_verifier, 
expires_at=int(expires_at.timestamp()), scopes=scopes, + # External API OAuth flow fields + callback_url=callback_url, + state_metadata=state_metadata or {}, + initiated_by_api_key_id=initiated_by_api_key_id, ) async with self.edit_user_integrations(user_id) as user_integrations: diff --git a/autogpt_platform/backend/backend/integrations/providers.py b/autogpt_platform/backend/backend/integrations/providers.py index 3564ad32a8..3af5006ca4 100644 --- a/autogpt_platform/backend/backend/integrations/providers.py +++ b/autogpt_platform/backend/backend/integrations/providers.py @@ -49,6 +49,7 @@ class ProviderName(str, Enum): TODOIST = "todoist" UNREAL_SPEECH = "unreal_speech" V0 = "v0" + WEBSHARE_PROXY = "webshare_proxy" ZEROBOUNCE = "zerobounce" @classmethod diff --git a/autogpt_platform/backend/backend/integrations/webhooks/_base.py b/autogpt_platform/backend/backend/integrations/webhooks/_base.py index 9342a6417b..7daf0dc6de 100644 --- a/autogpt_platform/backend/backend/integrations/webhooks/_base.py +++ b/autogpt_platform/backend/backend/integrations/webhooks/_base.py @@ -105,11 +105,15 @@ class BaseWebhooksManager(ABC, Generic[WT]): webhook = await integrations.get_webhook(webhook_id, include_relations=True) if webhook.triggered_nodes or webhook.triggered_presets: # Don't prune webhook if in use + logger.info( + f"Webhook #{webhook_id} kept as it has triggers in other graphs" + ) return False if credentials: await self._deregister_webhook(webhook, credentials) await integrations.delete_webhook(user_id, webhook.id) + logger.info(f"Webhook #{webhook_id} deleted as it had no remaining triggers") return True # --8<-- [start:BaseWebhooksManager3] diff --git a/autogpt_platform/backend/backend/integrations/webhooks/_manual_base.py b/autogpt_platform/backend/backend/integrations/webhooks/_manual_base.py index cf749a3cf9..fd9eb00e2a 100644 --- a/autogpt_platform/backend/backend/integrations/webhooks/_manual_base.py +++ b/autogpt_platform/backend/backend/integrations/webhooks/_manual_base.py @@ -18,7 +18,9 @@ class ManualWebhookManagerBase(BaseWebhooksManager[WT]): ingress_url: str, secret: str, ) -> tuple[str, dict]: - print(ingress_url) # FIXME: pass URL to user in front end + # TODO: pass ingress_url to user in frontend + # See: https://github.com/Significant-Gravitas/AutoGPT/issues/8537 + logger.debug(f"Manual webhook registered with ingress URL: {ingress_url}") return "", {} diff --git a/autogpt_platform/backend/backend/integrations/webhooks/utils.py b/autogpt_platform/backend/backend/integrations/webhooks/utils.py index 0bf9e6a3f4..79316c4c0e 100644 --- a/autogpt_platform/backend/backend/integrations/webhooks/utils.py +++ b/autogpt_platform/backend/backend/integrations/webhooks/utils.py @@ -9,7 +9,7 @@ from backend.util.settings import Config from . 
import get_webhook_manager, supports_webhooks if TYPE_CHECKING: - from backend.data.block import Block, BlockSchema + from backend.data.block import AnyBlockSchema from backend.data.integrations import Webhook from backend.data.model import Credentials from backend.integrations.providers import ProviderName @@ -29,7 +29,7 @@ def webhook_ingress_url(provider_name: "ProviderName", webhook_id: str) -> str: async def setup_webhook_for_block( user_id: str, - trigger_block: "Block[BlockSchema, BlockSchema]", + trigger_block: "AnyBlockSchema", trigger_config: dict[str, JsonValue], # = Trigger block inputs for_graph_id: Optional[str] = None, for_preset_id: Optional[str] = None, @@ -149,10 +149,10 @@ async def setup_webhook_for_block( async def migrate_legacy_triggered_graphs(): from prisma.models import AgentGraph + from backend.api.features.library.db import create_preset + from backend.api.features.library.model import LibraryAgentPresetCreatable from backend.data.graph import AGENT_GRAPH_INCLUDE, GraphModel, set_node_webhook from backend.data.model import is_credentials_field_name - from backend.server.v2.library.db import create_preset - from backend.server.v2.library.model import LibraryAgentPresetCreatable triggered_graphs = [ GraphModel.from_db(_graph) diff --git a/autogpt_platform/backend/backend/monitoring/__init__.py b/autogpt_platform/backend/backend/monitoring/__init__.py index 5f9ff30917..f2d22f5af7 100644 --- a/autogpt_platform/backend/backend/monitoring/__init__.py +++ b/autogpt_platform/backend/backend/monitoring/__init__.py @@ -1,5 +1,6 @@ """Monitoring module for platform health and alerting.""" +from .accuracy_monitor import AccuracyMonitor, report_execution_accuracy_alerts from .block_error_monitor import BlockErrorMonitor, report_block_error_rates from .late_execution_monitor import ( LateExecutionException, @@ -13,10 +14,12 @@ from .notification_monitor import ( ) __all__ = [ + "AccuracyMonitor", "BlockErrorMonitor", "LateExecutionMonitor", "LateExecutionException", "NotificationJobArgs", + "report_execution_accuracy_alerts", "report_block_error_rates", "report_late_executions", "process_existing_batches", diff --git a/autogpt_platform/backend/backend/monitoring/accuracy_monitor.py b/autogpt_platform/backend/backend/monitoring/accuracy_monitor.py new file mode 100644 index 0000000000..1a3c7eb2f1 --- /dev/null +++ b/autogpt_platform/backend/backend/monitoring/accuracy_monitor.py @@ -0,0 +1,107 @@ +"""Execution accuracy monitoring module.""" + +import logging + +from backend.util.clients import ( + get_database_manager_client, + get_notification_manager_client, +) +from backend.util.metrics import DiscordChannel, sentry_capture_error +from backend.util.settings import Config + +logger = logging.getLogger(__name__) +config = Config() + + +class AccuracyMonitor: + """Monitor execution accuracy trends and send alerts for drops.""" + + def __init__(self, drop_threshold: float = 10.0): + self.config = config + self.notification_client = get_notification_manager_client() + self.database_client = get_database_manager_client() + self.drop_threshold = drop_threshold + + def check_execution_accuracy_alerts(self) -> str: + """Check marketplace agents for accuracy drops and send alerts.""" + try: + logger.info("Checking execution accuracy for marketplace agents") + + # Get marketplace graphs using database client + graphs = self.database_client.get_marketplace_graphs_for_monitoring( + days_back=30, min_executions=10 + ) + + alerts_found = 0 + + for graph_data in graphs: + result = 
self.database_client.get_accuracy_trends_and_alerts( + graph_id=graph_data.graph_id, + user_id=graph_data.user_id, + days_back=21, # 3 weeks + drop_threshold=self.drop_threshold, + ) + + if result.alert: + alert = result.alert + + # Get graph details for better alert info + try: + graph_info = self.database_client.get_graph_metadata( + graph_id=alert.graph_id + ) + graph_name = graph_info.name if graph_info else "Unknown Agent" + except Exception: + graph_name = "Unknown Agent" + + # Create detailed alert message + alert_msg = ( + f"🚨 **AGENT ACCURACY DROP DETECTED**\n\n" + f"**Agent:** {graph_name}\n" + f"**Graph ID:** `{alert.graph_id}`\n" + f"**Accuracy Drop:** {alert.drop_percent:.1f}%\n" + f"**Recent Performance:**\n" + f" • 3-day average: {alert.three_day_avg:.1f}%\n" + f" • 7-day average: {alert.seven_day_avg:.1f}%\n" + ) + + if alert.user_id: + alert_msg += f"**Owner:** {alert.user_id}\n" + + # Send individual alert for each agent (not batched) + self.notification_client.discord_system_alert( + alert_msg, DiscordChannel.PRODUCT + ) + alerts_found += 1 + logger.warning( + f"Sent accuracy alert for agent: {graph_name} ({alert.graph_id})" + ) + + if alerts_found > 0: + return f"Alert sent for {alerts_found} agents with accuracy drops" + + logger.info("No execution accuracy alerts detected") + return "No accuracy alerts detected" + + except Exception as e: + logger.exception(f"Error checking execution accuracy alerts: {e}") + + error = Exception(f"Error checking execution accuracy alerts: {e}") + msg = str(error) + sentry_capture_error(error) + self.notification_client.discord_system_alert(msg, DiscordChannel.PRODUCT) + return msg + + +def report_execution_accuracy_alerts(drop_threshold: float = 10.0) -> str: + """ + Check execution accuracy and send alerts if drops are detected. 
+ + Args: + drop_threshold: Percentage drop threshold to trigger alerts (default 10.0%) + + Returns: + Status message indicating results of the check + """ + monitor = AccuracyMonitor(drop_threshold=drop_threshold) + return monitor.check_execution_accuracy_alerts() diff --git a/autogpt_platform/backend/backend/monitoring/instrumentation.py b/autogpt_platform/backend/backend/monitoring/instrumentation.py index 898324deaa..bd384b4ad2 100644 --- a/autogpt_platform/backend/backend/monitoring/instrumentation.py +++ b/autogpt_platform/backend/backend/monitoring/instrumentation.py @@ -143,6 +143,9 @@ def instrument_fastapi( ) # Create instrumentator with default metrics + # Use service-specific inprogress_name to avoid duplicate registration + # when multiple FastAPI apps are instrumented in the same process + service_subsystem = service_name.replace("-", "_") instrumentator = Instrumentator( should_group_status_codes=True, should_ignore_untemplated=True, should_instrument_requests_inprogress=True, excluded_handlers=excluded_handlers or ["/health", "/readiness"], env_var_name="ENABLE_METRICS", - inprogress_name="autogpt_http_requests_inprogress", + inprogress_name=f"autogpt_{service_subsystem}_http_requests_inprogress", inprogress_labels=True, ) diff --git a/autogpt_platform/backend/backend/notifications/email.py b/autogpt_platform/backend/backend/notifications/email.py index 84202ea6a9..5ca71eefeb 100644 --- a/autogpt_platform/backend/backend/notifications/email.py +++ b/autogpt_platform/backend/backend/notifications/email.py @@ -44,6 +44,8 @@ class EmailSender: self.postmark = None self.formatter = TextFormatter() + MAX_EMAIL_CHARS = 5_000_000 # safely under Postmark's 5 MB (5,242,880 char) limit + def send_templated( self, notification: NotificationType, @@ -54,21 +56,19 @@ class EmailSender: ), user_unsub_link: str | None = None, ): - """Send an email to a user using a template pulled from the notification type""" + """Send an email to a user using a template pulled from the notification type, falling back to a summary email if the rendered body is too large""" if not self.postmark: logger.warning("Postmark client not initialized, email not sent") return + template = self._get_template(notification) base_url = ( settings.config.frontend_base_url or settings.config.platform_base_url ) - # Handle the case when data is a list - template_data = data - if isinstance(data, list): - # Create a dictionary with a 'notifications' key containing the list - template_data = {"notifications": data} + # Normalize data: wrap list payloads under a 'notifications' key + template_data = {"notifications": data} if isinstance(data, list) else data try: subject, full_message = self.formatter.format_email( @@ -82,24 +82,37 @@ class EmailSender: logger.error(f"Error formatting full message: {e}") raise e - # Check email size (Postmark limit is 5MB = 5,242,880 characters) + # Check email size & send summary if too large email_size = len(full_message) - if email_size > 5_000_000: # Leave some buffer + if email_size > self.MAX_EMAIL_CHARS: logger.warning( f"Email size ({email_size} chars) exceeds safe limit. " - f"This should have been chunked before calling send_templated." + "Sending summary email instead."
) - raise ValueError( - f"Email body too large: {email_size} characters (limit: 5,242,880)" + + # Create lightweight summary + summary_message = ( + f"⚠️ Your agent '{getattr(data, 'agent_name', 'Unknown')}' " + f"generated a very large output ({email_size / 1_000_000:.2f} MB).\n\n" + f"Execution time: {getattr(data, 'execution_time', 'N/A')}\n" + f"Credits used: {getattr(data, 'credits_used', 'N/A')}\n" + f"View full results: {base_url}/executions/{getattr(data, 'id', 'N/A')}" ) + self._send_email( + user_email=user_email, + subject=f"{subject} (Output Too Large)", + body=summary_message, + user_unsubscribe_link=user_unsub_link, + ) + return # Skip sending full email + logger.debug(f"Sending email with size: {email_size} characters") - self._send_email( user_email=user_email, - user_unsubscribe_link=user_unsub_link, subject=subject, body=full_message, + user_unsubscribe_link=user_unsub_link, ) def _get_template(self, notification: NotificationType): @@ -137,7 +150,6 @@ class EmailSender: To=user_email, Subject=subject, HtmlBody=body, - # Headers default to None internally so this is fine Headers=( { "List-Unsubscribe-Post": "List-Unsubscribe=One-Click", diff --git a/autogpt_platform/backend/backend/notifications/notifications.py b/autogpt_platform/backend/backend/notifications/notifications.py index 1199be46d1..3503bc911a 100644 --- a/autogpt_platform/backend/backend/notifications/notifications.py +++ b/autogpt_platform/backend/backend/notifications/notifications.py @@ -1017,10 +1017,14 @@ class NotificationManager(AppService): logger.exception(f"Fatal error in consumer for {queue_name}: {e}") raise - @continuous_retry() def run_service(self): - self.run_and_wait(self._run_service()) + # Queue the main _run_service task + asyncio.run_coroutine_threadsafe(self._run_service(), self.shared_event_loop) + # Start the main event loop + super().run_service() + + @continuous_retry() async def _run_service(self): logger.info(f"[{self.service_name}] ⏳ Configuring RabbitMQ...") self.rabbitmq_service = rabbitmq.AsyncRabbitMQ(self.rabbitmq_config) @@ -1086,10 +1090,11 @@ class NotificationManager(AppService): def cleanup(self): """Cleanup service resources""" self.running = False - super().cleanup() - logger.info(f"[{self.service_name}] ⏳ Disconnecting RabbitMQ...") + logger.info("⏳ Disconnecting RabbitMQ...") self.run_and_wait(self.rabbitmq_service.disconnect()) + super().cleanup() + class NotificationManagerClient(AppServiceClient): @classmethod diff --git a/autogpt_platform/backend/backend/rest.py b/autogpt_platform/backend/backend/rest.py index b601144c6f..96a807c125 100644 --- a/autogpt_platform/backend/backend/rest.py +++ b/autogpt_platform/backend/backend/rest.py @@ -1,5 +1,5 @@ +from backend.api.rest_api import AgentServer from backend.app import run_processes -from backend.server.rest_api import AgentServer def main(): diff --git a/autogpt_platform/backend/backend/sdk/__init__.py b/autogpt_platform/backend/backend/sdk/__init__.py index a26cb9b679..b3a23dc735 100644 --- a/autogpt_platform/backend/backend/sdk/__init__.py +++ b/autogpt_platform/backend/backend/sdk/__init__.py @@ -23,6 +23,8 @@ from backend.data.block import ( BlockManualWebhookConfig, BlockOutput, BlockSchema, + BlockSchemaInput, + BlockSchemaOutput, BlockType, BlockWebhookConfig, ) @@ -122,6 +124,8 @@ __all__ = [ "BlockCategory", "BlockOutput", "BlockSchema", + "BlockSchemaInput", + "BlockSchemaOutput", "BlockType", "BlockWebhookConfig", "BlockManualWebhookConfig", diff --git 
a/autogpt_platform/backend/backend/server/external/middleware.py b/autogpt_platform/backend/backend/server/external/middleware.py deleted file mode 100644 index af84c92687..0000000000 --- a/autogpt_platform/backend/backend/server/external/middleware.py +++ /dev/null @@ -1,36 +0,0 @@ -from fastapi import HTTPException, Security -from fastapi.security import APIKeyHeader -from prisma.enums import APIKeyPermission - -from backend.data.api_key import APIKeyInfo, has_permission, validate_api_key - -api_key_header = APIKeyHeader(name="X-API-Key", auto_error=False) - - -async def require_api_key(api_key: str | None = Security(api_key_header)) -> APIKeyInfo: - """Base middleware for API key authentication""" - if api_key is None: - raise HTTPException(status_code=401, detail="Missing API key") - - api_key_obj = await validate_api_key(api_key) - - if not api_key_obj: - raise HTTPException(status_code=401, detail="Invalid API key") - - return api_key_obj - - -def require_permission(permission: APIKeyPermission): - """Dependency function for checking specific permissions""" - - async def check_permission( - api_key: APIKeyInfo = Security(require_api_key), - ) -> APIKeyInfo: - if not has_permission(api_key, permission): - raise HTTPException( - status_code=403, - detail=f"API key lacks the required permission '{permission}'", - ) - return api_key - - return check_permission diff --git a/autogpt_platform/backend/backend/server/external/routes/v1.py b/autogpt_platform/backend/backend/server/external/routes/v1.py deleted file mode 100644 index db232ab811..0000000000 --- a/autogpt_platform/backend/backend/server/external/routes/v1.py +++ /dev/null @@ -1,143 +0,0 @@ -import logging -from collections import defaultdict -from typing import Annotated, Any, Optional, Sequence - -from fastapi import APIRouter, Body, HTTPException, Security -from prisma.enums import AgentExecutionStatus, APIKeyPermission -from typing_extensions import TypedDict - -import backend.data.block -from backend.data import execution as execution_db -from backend.data import graph as graph_db -from backend.data.api_key import APIKeyInfo -from backend.data.block import BlockInput, CompletedBlockOutput -from backend.executor.utils import add_graph_execution -from backend.server.external.middleware import require_permission -from backend.util.settings import Settings - -settings = Settings() -logger = logging.getLogger(__name__) - -v1_router = APIRouter() - - -class NodeOutput(TypedDict): - key: str - value: Any - - -class ExecutionNode(TypedDict): - node_id: str - input: Any - output: dict[str, Any] - - -class ExecutionNodeOutput(TypedDict): - node_id: str - outputs: list[NodeOutput] - - -class GraphExecutionResult(TypedDict): - execution_id: str - status: str - nodes: list[ExecutionNode] - output: Optional[list[dict[str, str]]] - - -@v1_router.get( - path="/blocks", - tags=["blocks"], - dependencies=[Security(require_permission(APIKeyPermission.READ_BLOCK))], -) -async def get_graph_blocks() -> Sequence[dict[Any, Any]]: - blocks = [block() for block in backend.data.block.get_blocks().values()] - return [b.to_dict() for b in blocks if not b.disabled] - - -@v1_router.post( - path="/blocks/{block_id}/execute", - tags=["blocks"], - dependencies=[Security(require_permission(APIKeyPermission.EXECUTE_BLOCK))], -) -async def execute_graph_block( - block_id: str, - data: BlockInput, - api_key: APIKeyInfo = Security(require_permission(APIKeyPermission.EXECUTE_BLOCK)), -) -> CompletedBlockOutput: - obj = backend.data.block.get_block(block_id) - if 
not obj: - raise HTTPException(status_code=404, detail=f"Block #{block_id} not found.") - - output = defaultdict(list) - async for name, data in obj.execute(data): - output[name].append(data) - return output - - -@v1_router.post( - path="/graphs/{graph_id}/execute/{graph_version}", - tags=["graphs"], -) -async def execute_graph( - graph_id: str, - graph_version: int, - node_input: Annotated[dict[str, Any], Body(..., embed=True, default_factory=dict)], - api_key: APIKeyInfo = Security(require_permission(APIKeyPermission.EXECUTE_GRAPH)), -) -> dict[str, Any]: - try: - graph_exec = await add_graph_execution( - graph_id=graph_id, - user_id=api_key.user_id, - inputs=node_input, - graph_version=graph_version, - ) - return {"id": graph_exec.id} - except Exception as e: - msg = str(e).encode().decode("unicode_escape") - raise HTTPException(status_code=400, detail=msg) - - -@v1_router.get( - path="/graphs/{graph_id}/executions/{graph_exec_id}/results", - tags=["graphs"], -) -async def get_graph_execution_results( - graph_id: str, - graph_exec_id: str, - api_key: APIKeyInfo = Security(require_permission(APIKeyPermission.READ_GRAPH)), -) -> GraphExecutionResult: - graph = await graph_db.get_graph(graph_id, user_id=api_key.user_id) - if not graph: - raise HTTPException(status_code=404, detail=f"Graph #{graph_id} not found.") - - graph_exec = await execution_db.get_graph_execution( - user_id=api_key.user_id, - execution_id=graph_exec_id, - include_node_executions=True, - ) - if not graph_exec: - raise HTTPException( - status_code=404, detail=f"Graph execution #{graph_exec_id} not found." - ) - - return GraphExecutionResult( - execution_id=graph_exec_id, - status=graph_exec.status.value, - nodes=[ - ExecutionNode( - node_id=node_exec.node_id, - input=node_exec.input_data.get("value", node_exec.input_data), - output={k: v for k, v in node_exec.output_data.items()}, - ) - for node_exec in graph_exec.node_executions - ], - output=( - [ - {name: value} - for name, values in graph_exec.outputs.items() - for value in values - ] - if graph_exec.status == AgentExecutionStatus.COMPLETED - else None - ), - ) diff --git a/autogpt_platform/backend/backend/server/routers/analytics_improved_test.py b/autogpt_platform/backend/backend/server/routers/analytics_improved_test.py deleted file mode 100644 index 7040faa0b5..0000000000 --- a/autogpt_platform/backend/backend/server/routers/analytics_improved_test.py +++ /dev/null @@ -1,150 +0,0 @@ -"""Example of analytics tests with improved error handling and assertions.""" - -import json -from unittest.mock import AsyncMock, Mock - -import fastapi -import fastapi.testclient -import pytest -import pytest_mock -from pytest_snapshot.plugin import Snapshot - -import backend.server.routers.analytics as analytics_routes -from backend.server.test_helpers import ( - assert_error_response_structure, - assert_mock_called_with_partial, - assert_response_status, - safe_parse_json, -) - -app = fastapi.FastAPI() -app.include_router(analytics_routes.router) - -client = fastapi.testclient.TestClient(app) - - -@pytest.fixture(autouse=True) -def setup_app_auth(mock_jwt_user): - """Setup auth overrides for all tests in this module""" - from autogpt_libs.auth.jwt_utils import get_jwt_payload - - app.dependency_overrides[get_jwt_payload] = mock_jwt_user["get_jwt_payload"] - yield - app.dependency_overrides.clear() - - -def test_log_raw_metric_success_improved( - mocker: pytest_mock.MockFixture, - configured_snapshot: Snapshot, - test_user_id: str, -) -> None: - """Test successful raw metric 
logging with improved assertions.""" - # Mock the analytics function - mock_result = Mock(id="metric-123-uuid") - - mock_log_metric = mocker.patch( - "backend.data.analytics.log_raw_metric", - new_callable=AsyncMock, - return_value=mock_result, - ) - - request_data = { - "metric_name": "page_load_time", - "metric_value": 2.5, - "data_string": "/dashboard", - } - - response = client.post("/log_raw_metric", json=request_data) - - # Improved assertions with better error messages - assert_response_status(response, 200, "Metric logging should succeed") - response_data = safe_parse_json(response, "Metric response parsing") - - assert response_data == "metric-123-uuid", f"Unexpected response: {response_data}" - - # Verify the function was called with correct parameters - assert_mock_called_with_partial( - mock_log_metric, - user_id=test_user_id, - metric_name="page_load_time", - metric_value=2.5, - data_string="/dashboard", - ) - - # Snapshot test the response - configured_snapshot.assert_match( - json.dumps({"metric_id": response_data}, indent=2, sort_keys=True), - "analytics_log_metric_success_improved", - ) - - -def test_log_raw_metric_invalid_request_improved() -> None: - """Test invalid metric request with improved error assertions.""" - # Test missing required fields - response = client.post("/log_raw_metric", json={}) - - error_data = assert_error_response_structure( - response, expected_status=422, expected_error_fields=["loc", "msg", "type"] - ) - - # Verify specific error details - detail = error_data["detail"] - assert isinstance(detail, list), "Error detail should be a list" - assert len(detail) > 0, "Should have at least one error" - - # Check that required fields are mentioned in errors - error_fields = [error["loc"][-1] for error in detail if "loc" in error] - assert "metric_name" in error_fields, "Should report missing metric_name" - assert "metric_value" in error_fields, "Should report missing metric_value" - assert "data_string" in error_fields, "Should report missing data_string" - - -def test_log_raw_metric_type_validation_improved( - mocker: pytest_mock.MockFixture, -) -> None: - """Test metric type validation with improved assertions.""" - # Mock the analytics function to avoid event loop issues - mocker.patch( - "backend.data.analytics.log_raw_metric", - new_callable=AsyncMock, - return_value=Mock(id="test-id"), - ) - - invalid_requests = [ - { - "data": { - "metric_name": "test", - "metric_value": "not_a_number", # Invalid type - "data_string": "test", - }, - "expected_error": "Input should be a valid number", - }, - { - "data": { - "metric_name": "", # Empty string - "metric_value": 1.0, - "data_string": "test", - }, - "expected_error": "String should have at least 1 character", - }, - { - "data": { - "metric_name": "test", - "metric_value": 123, # Valid number - "data_string": "", # Empty data_string - }, - "expected_error": "String should have at least 1 character", - }, - ] - - for test_case in invalid_requests: - response = client.post("/log_raw_metric", json=test_case["data"]) - - error_data = assert_error_response_structure(response, expected_status=422) - - # Check that expected error is in the response - error_text = json.dumps(error_data) - assert ( - test_case["expected_error"] in error_text - or test_case["expected_error"].lower() in error_text.lower() - ), f"Expected error '{test_case['expected_error']}' not found in: {error_text}" diff --git a/autogpt_platform/backend/backend/server/routers/analytics_parametrized_test.py 
b/autogpt_platform/backend/backend/server/routers/analytics_parametrized_test.py deleted file mode 100644 index 9dbf03b727..0000000000 --- a/autogpt_platform/backend/backend/server/routers/analytics_parametrized_test.py +++ /dev/null @@ -1,115 +0,0 @@ -"""Example of parametrized tests for analytics endpoints.""" - -import json -from unittest.mock import AsyncMock, Mock - -import fastapi -import fastapi.testclient -import pytest -import pytest_mock -from pytest_snapshot.plugin import Snapshot - -import backend.server.routers.analytics as analytics_routes - -app = fastapi.FastAPI() -app.include_router(analytics_routes.router) - -client = fastapi.testclient.TestClient(app) - - -@pytest.fixture(autouse=True) -def setup_app_auth(mock_jwt_user): - """Setup auth overrides for all tests in this module""" - from autogpt_libs.auth.jwt_utils import get_jwt_payload - - app.dependency_overrides[get_jwt_payload] = mock_jwt_user["get_jwt_payload"] - yield - app.dependency_overrides.clear() - - -@pytest.mark.parametrize( - "metric_value,metric_name,data_string,test_id", - [ - (100, "api_calls_count", "external_api", "integer_value"), - (0, "error_count", "no_errors", "zero_value"), - (-5.2, "temperature_delta", "cooling", "negative_value"), - (1.23456789, "precision_test", "float_precision", "float_precision"), - (999999999, "large_number", "max_value", "large_number"), - (0.0000001, "tiny_number", "min_value", "tiny_number"), - ], -) -def test_log_raw_metric_values_parametrized( - mocker: pytest_mock.MockFixture, - configured_snapshot: Snapshot, - metric_value: float, - metric_name: str, - data_string: str, - test_id: str, -) -> None: - """Test raw metric logging with various metric values using parametrize.""" - # Mock the analytics function - mock_result = Mock(id=f"metric-{test_id}-uuid") - - mocker.patch( - "backend.data.analytics.log_raw_metric", - new_callable=AsyncMock, - return_value=mock_result, - ) - - request_data = { - "metric_name": metric_name, - "metric_value": metric_value, - "data_string": data_string, - } - - response = client.post("/log_raw_metric", json=request_data) - - # Better error handling - assert response.status_code == 200, f"Failed for {test_id}: {response.text}" - response_data = response.json() - - # Snapshot test the response - configured_snapshot.assert_match( - json.dumps( - {"metric_id": response_data, "test_case": test_id}, indent=2, sort_keys=True - ), - f"analytics_metric_{test_id}", - ) - - -@pytest.mark.parametrize( - "invalid_data,expected_error", - [ - ({}, "Field required"), # Missing all fields - ({"metric_name": "test"}, "Field required"), # Missing metric_value - ( - {"metric_name": "test", "metric_value": "not_a_number"}, - "Input should be a valid number", - ), # Invalid type - ( - {"metric_name": "", "metric_value": 1.0, "data_string": "test"}, - "String should have at least 1 character", - ), # Empty name - ], -) -def test_log_raw_metric_invalid_requests_parametrized( - mocker: pytest_mock.MockFixture, - invalid_data: dict, - expected_error: str, -) -> None: - """Test invalid metric requests with parametrize.""" - # Mock the analytics function to avoid event loop issues - mocker.patch( - "backend.data.analytics.log_raw_metric", - new_callable=AsyncMock, - return_value=Mock(id="test-id"), - ) - - response = client.post("/log_raw_metric", json=invalid_data) - - assert response.status_code == 422 - error_detail = response.json() - assert "detail" in error_detail - # Verify error message contains expected error - error_text = json.dumps(error_detail) - assert 
expected_error in error_text or expected_error.lower() in error_text.lower() diff --git a/autogpt_platform/backend/backend/server/routers/analytics_test.py b/autogpt_platform/backend/backend/server/routers/analytics_test.py deleted file mode 100644 index 16ee6708dc..0000000000 --- a/autogpt_platform/backend/backend/server/routers/analytics_test.py +++ /dev/null @@ -1,284 +0,0 @@ -import json -from unittest.mock import AsyncMock, Mock - -import fastapi -import fastapi.testclient -import pytest -import pytest_mock -from pytest_snapshot.plugin import Snapshot - -import backend.server.routers.analytics as analytics_routes - -app = fastapi.FastAPI() -app.include_router(analytics_routes.router) - -client = fastapi.testclient.TestClient(app) - - -@pytest.fixture(autouse=True) -def setup_app_auth(mock_jwt_user): - """Setup auth overrides for all tests in this module""" - from autogpt_libs.auth.jwt_utils import get_jwt_payload - - app.dependency_overrides[get_jwt_payload] = mock_jwt_user["get_jwt_payload"] - yield - app.dependency_overrides.clear() - - -def test_log_raw_metric_success( - mocker: pytest_mock.MockFixture, - configured_snapshot: Snapshot, - test_user_id: str, -) -> None: - """Test successful raw metric logging""" - - # Mock the analytics function - mock_result = Mock(id="metric-123-uuid") - - mock_log_metric = mocker.patch( - "backend.data.analytics.log_raw_metric", - new_callable=AsyncMock, - return_value=mock_result, - ) - - request_data = { - "metric_name": "page_load_time", - "metric_value": 2.5, - "data_string": "/dashboard", - } - - response = client.post("/log_raw_metric", json=request_data) - - assert response.status_code == 200 - response_data = response.json() - assert response_data == "metric-123-uuid" - - # Verify the function was called with correct parameters - mock_log_metric.assert_called_once_with( - user_id=test_user_id, - metric_name="page_load_time", - metric_value=2.5, - data_string="/dashboard", - ) - - # Snapshot test the response - configured_snapshot.assert_match( - json.dumps({"metric_id": response.json()}, indent=2, sort_keys=True), - "analytics_log_metric_success", - ) - - -def test_log_raw_metric_various_values( - mocker: pytest_mock.MockFixture, - configured_snapshot: Snapshot, -) -> None: - """Test raw metric logging with various metric values""" - - # Mock the analytics function - mock_result = Mock(id="metric-456-uuid") - - mocker.patch( - "backend.data.analytics.log_raw_metric", - new_callable=AsyncMock, - return_value=mock_result, - ) - - # Test with integer value - request_data = { - "metric_name": "api_calls_count", - "metric_value": 100, - "data_string": "external_api", - } - - response = client.post("/log_raw_metric", json=request_data) - assert response.status_code == 200 - - # Test with zero value - request_data = { - "metric_name": "error_count", - "metric_value": 0, - "data_string": "no_errors", - } - - response = client.post("/log_raw_metric", json=request_data) - assert response.status_code == 200 - - # Test with negative value - request_data = { - "metric_name": "temperature_delta", - "metric_value": -5.2, - "data_string": "cooling", - } - - response = client.post("/log_raw_metric", json=request_data) - assert response.status_code == 200 - - # Snapshot the last response - configured_snapshot.assert_match( - json.dumps({"metric_id": response.json()}, indent=2, sort_keys=True), - "analytics_log_metric_various_values", - ) - - -def test_log_raw_analytics_success( - mocker: pytest_mock.MockFixture, - configured_snapshot: Snapshot, - 
test_user_id: str, -) -> None: - """Test successful raw analytics logging""" - - # Mock the analytics function - mock_result = Mock(id="analytics-789-uuid") - - mock_log_analytics = mocker.patch( - "backend.data.analytics.log_raw_analytics", - new_callable=AsyncMock, - return_value=mock_result, - ) - - request_data = { - "type": "user_action", - "data": { - "action": "button_click", - "button_id": "submit_form", - "timestamp": "2023-01-01T00:00:00Z", - "metadata": { - "form_type": "registration", - "fields_filled": 5, - }, - }, - "data_index": "button_click_submit_form", - } - - response = client.post("/log_raw_analytics", json=request_data) - - assert response.status_code == 200 - response_data = response.json() - assert response_data == "analytics-789-uuid" - - # Verify the function was called with correct parameters - mock_log_analytics.assert_called_once_with( - test_user_id, - "user_action", - request_data["data"], - "button_click_submit_form", - ) - - # Snapshot test the response - configured_snapshot.assert_match( - json.dumps({"analytics_id": response_data}, indent=2, sort_keys=True), - "analytics_log_analytics_success", - ) - - -def test_log_raw_analytics_complex_data( - mocker: pytest_mock.MockFixture, - configured_snapshot: Snapshot, -) -> None: - """Test raw analytics logging with complex nested data""" - - # Mock the analytics function - mock_result = Mock(id="analytics-complex-uuid") - - mocker.patch( - "backend.data.analytics.log_raw_analytics", - new_callable=AsyncMock, - return_value=mock_result, - ) - - request_data = { - "type": "agent_execution", - "data": { - "agent_id": "agent_123", - "execution_id": "exec_456", - "status": "completed", - "duration_ms": 3500, - "nodes_executed": 15, - "blocks_used": [ - {"block_id": "llm_block", "count": 3}, - {"block_id": "http_block", "count": 5}, - {"block_id": "code_block", "count": 2}, - ], - "errors": [], - "metadata": { - "trigger": "manual", - "user_tier": "premium", - "environment": "production", - }, - }, - "data_index": "agent_123_exec_456", - } - - response = client.post("/log_raw_analytics", json=request_data) - - assert response.status_code == 200 - response_data = response.json() - - # Snapshot test the complex data structure - configured_snapshot.assert_match( - json.dumps( - { - "analytics_id": response_data, - "logged_data": request_data["data"], - }, - indent=2, - sort_keys=True, - ), - "analytics_log_analytics_complex_data", - ) - - -def test_log_raw_metric_invalid_request() -> None: - """Test raw metric logging with invalid request data""" - # Missing required fields - response = client.post("/log_raw_metric", json={}) - assert response.status_code == 422 - - # Invalid metric_value type - response = client.post( - "/log_raw_metric", - json={ - "metric_name": "test", - "metric_value": "not_a_number", - "data_string": "test", - }, - ) - assert response.status_code == 422 - - # Missing data_string - response = client.post( - "/log_raw_metric", - json={ - "metric_name": "test", - "metric_value": 1.0, - }, - ) - assert response.status_code == 422 - - -def test_log_raw_analytics_invalid_request() -> None: - """Test raw analytics logging with invalid request data""" - # Missing required fields - response = client.post("/log_raw_analytics", json={}) - assert response.status_code == 422 - - # Invalid data type (should be dict) - response = client.post( - "/log_raw_analytics", - json={ - "type": "test", - "data": "not_a_dict", - "data_index": "test", - }, - ) - assert response.status_code == 422 - - # Missing data_index - 
response = client.post( - "/log_raw_analytics", - json={ - "type": "test", - "data": {"key": "value"}, - }, - ) - assert response.status_code == 422 diff --git a/autogpt_platform/backend/backend/server/v2/builder/db.py b/autogpt_platform/backend/backend/server/v2/builder/db.py deleted file mode 100644 index 0b34f50c92..0000000000 --- a/autogpt_platform/backend/backend/server/v2/builder/db.py +++ /dev/null @@ -1,381 +0,0 @@ -import logging -from datetime import datetime, timedelta, timezone - -import prisma - -import backend.data.block -from backend.blocks import load_all_blocks -from backend.blocks.llm import LlmModel -from backend.data.block import Block, BlockCategory, BlockInfo, BlockSchema -from backend.integrations.providers import ProviderName -from backend.server.v2.builder.model import ( - BlockCategoryResponse, - BlockResponse, - BlockType, - CountResponse, - Provider, - ProviderResponse, - SearchBlocksResponse, -) -from backend.util.cache import cached -from backend.util.models import Pagination - -logger = logging.getLogger(__name__) -llm_models = [name.name.lower().replace("_", " ") for name in LlmModel] -_static_counts_cache: dict | None = None -_suggested_blocks: list[BlockInfo] | None = None - - -def get_block_categories(category_blocks: int = 3) -> list[BlockCategoryResponse]: - categories: dict[BlockCategory, BlockCategoryResponse] = {} - - for block_type in load_all_blocks().values(): - block: Block[BlockSchema, BlockSchema] = block_type() - # Skip disabled blocks - if block.disabled: - continue - # Skip blocks that don't have categories (all should have at least one) - if not block.categories: - continue - - # Add block to the categories - for category in block.categories: - if category not in categories: - categories[category] = BlockCategoryResponse( - name=category.name.lower(), - total_blocks=0, - blocks=[], - ) - - categories[category].total_blocks += 1 - - # Append if the category has less than the specified number of blocks - if len(categories[category].blocks) < category_blocks: - categories[category].blocks.append(block.get_info()) - - # Sort categories by name - return sorted(categories.values(), key=lambda x: x.name) - - -def get_blocks( - *, - category: str | None = None, - type: BlockType | None = None, - provider: ProviderName | None = None, - page: int = 1, - page_size: int = 50, -) -> BlockResponse: - """ - Get blocks based on either category, type or provider. - Providing nothing fetches all block types. 
- """ - # Only one of category, type, or provider can be specified - if (category and type) or (category and provider) or (type and provider): - raise ValueError("Only one of category, type, or provider can be specified") - - blocks: list[Block[BlockSchema, BlockSchema]] = [] - skip = (page - 1) * page_size - take = page_size - total = 0 - - for block_type in load_all_blocks().values(): - block: Block[BlockSchema, BlockSchema] = block_type() - # Skip disabled blocks - if block.disabled: - continue - # Skip blocks that don't match the category - if category and category not in {c.name.lower() for c in block.categories}: - continue - # Skip blocks that don't match the type - if ( - (type == "input" and block.block_type.value != "Input") - or (type == "output" and block.block_type.value != "Output") - or (type == "action" and block.block_type.value in ("Input", "Output")) - ): - continue - # Skip blocks that don't match the provider - if provider: - credentials_info = block.input_schema.get_credentials_fields_info().values() - if not any(provider in info.provider for info in credentials_info): - continue - - total += 1 - if skip > 0: - skip -= 1 - continue - if take > 0: - take -= 1 - blocks.append(block) - - return BlockResponse( - blocks=[b.get_info() for b in blocks], - pagination=Pagination( - total_items=total, - total_pages=(total + page_size - 1) // page_size, - current_page=page, - page_size=page_size, - ), - ) - - -def get_block_by_id(block_id: str) -> BlockInfo | None: - """ - Get a specific block by its ID. - """ - for block_type in load_all_blocks().values(): - block: Block[BlockSchema, BlockSchema] = block_type() - if block.id == block_id: - return block.get_info() - return None - - -def search_blocks( - include_blocks: bool = True, - include_integrations: bool = True, - query: str = "", - page: int = 1, - page_size: int = 50, -) -> SearchBlocksResponse: - """ - Get blocks based on the filter and query. - `providers` only applies for `integrations` filter. 
- """ - blocks: list[Block[BlockSchema, BlockSchema]] = [] - query = query.lower() - - total = 0 - skip = (page - 1) * page_size - take = page_size - block_count = 0 - integration_count = 0 - - for block_type in load_all_blocks().values(): - block: Block[BlockSchema, BlockSchema] = block_type() - # Skip disabled blocks - if block.disabled: - continue - # Skip blocks that don't match the query - if ( - query not in block.name.lower() - and query not in block.description.lower() - and not _matches_llm_model(block.input_schema, query) - ): - continue - keep = False - credentials = list(block.input_schema.get_credentials_fields().values()) - if include_integrations and len(credentials) > 0: - keep = True - integration_count += 1 - if include_blocks and len(credentials) == 0: - keep = True - block_count += 1 - - if not keep: - continue - - total += 1 - if skip > 0: - skip -= 1 - continue - if take > 0: - take -= 1 - blocks.append(block) - - return SearchBlocksResponse( - blocks=BlockResponse( - blocks=[b.get_info() for b in blocks], - pagination=Pagination( - total_items=total, - total_pages=(total + page_size - 1) // page_size, - current_page=page, - page_size=page_size, - ), - ), - total_block_count=block_count, - total_integration_count=integration_count, - ) - - -def get_providers( - query: str = "", - page: int = 1, - page_size: int = 50, -) -> ProviderResponse: - providers = [] - query = query.lower() - - skip = (page - 1) * page_size - take = page_size - - all_providers = _get_all_providers() - - for provider in all_providers.values(): - if ( - query not in provider.name.value.lower() - and query not in provider.description.lower() - ): - continue - if skip > 0: - skip -= 1 - continue - if take > 0: - take -= 1 - providers.append(provider) - - total = len(all_providers) - - return ProviderResponse( - providers=providers, - pagination=Pagination( - total_items=total, - total_pages=(total + page_size - 1) // page_size, - current_page=page, - page_size=page_size, - ), - ) - - -async def get_counts(user_id: str) -> CountResponse: - my_agents = await prisma.models.LibraryAgent.prisma().count( - where={ - "userId": user_id, - "isDeleted": False, - "isArchived": False, - } - ) - counts = await _get_static_counts() - return CountResponse( - my_agents=my_agents, - **counts, - ) - - -async def _get_static_counts(): - """ - Get counts of blocks, integrations, and marketplace agents. - This is cached to avoid unnecessary database queries and calculations. - Can't use functools.cache here because the function is async. 
- """ - global _static_counts_cache - if _static_counts_cache is not None: - return _static_counts_cache - - all_blocks = 0 - input_blocks = 0 - action_blocks = 0 - output_blocks = 0 - integrations = 0 - - for block_type in load_all_blocks().values(): - block: Block[BlockSchema, BlockSchema] = block_type() - if block.disabled: - continue - - all_blocks += 1 - - if block.block_type.value == "Input": - input_blocks += 1 - elif block.block_type.value == "Output": - output_blocks += 1 - else: - action_blocks += 1 - - credentials = list(block.input_schema.get_credentials_fields().values()) - if len(credentials) > 0: - integrations += 1 - - marketplace_agents = await prisma.models.StoreAgent.prisma().count() - - _static_counts_cache = { - "all_blocks": all_blocks, - "input_blocks": input_blocks, - "action_blocks": action_blocks, - "output_blocks": output_blocks, - "integrations": integrations, - "marketplace_agents": marketplace_agents, - } - - return _static_counts_cache - - -def _matches_llm_model(schema_cls: type[BlockSchema], query: str) -> bool: - for field in schema_cls.model_fields.values(): - if field.annotation == LlmModel: - # Check if query matches any value in llm_models - if any(query in name for name in llm_models): - return True - return False - - -@cached(ttl_seconds=3600) -def _get_all_providers() -> dict[ProviderName, Provider]: - providers: dict[ProviderName, Provider] = {} - - for block_type in load_all_blocks().values(): - block: Block[BlockSchema, BlockSchema] = block_type() - if block.disabled: - continue - - credentials_info = block.input_schema.get_credentials_fields_info().values() - for info in credentials_info: - for provider in info.provider: # provider is a ProviderName enum member - if provider in providers: - providers[provider].integration_count += 1 - else: - providers[provider] = Provider( - name=provider, description="", integration_count=1 - ) - return providers - - -async def get_suggested_blocks(count: int = 5) -> list[BlockInfo]: - global _suggested_blocks - - if _suggested_blocks is not None and len(_suggested_blocks) >= count: - return _suggested_blocks[:count] - - _suggested_blocks = [] - # Sum the number of executions for each block type - # Prisma cannot group by nested relations, so we do a raw query - # Calculate the cutoff timestamp - timestamp_threshold = datetime.now(timezone.utc) - timedelta(days=30) - - results = await prisma.get_client().query_raw( - """ - SELECT - agent_node."agentBlockId" AS block_id, - COUNT(execution.id) AS execution_count - FROM "AgentNodeExecution" execution - JOIN "AgentNode" agent_node ON execution."agentNodeId" = agent_node.id - WHERE execution."endedTime" >= $1::timestamp - GROUP BY agent_node."agentBlockId" - ORDER BY execution_count DESC; - """, - timestamp_threshold, - ) - - # Get the top blocks based on execution count - # But ignore Input and Output blocks - blocks: list[tuple[BlockInfo, int]] = [] - - for block_type in load_all_blocks().values(): - block: Block[BlockSchema, BlockSchema] = block_type() - if block.disabled or block.block_type in ( - backend.data.block.BlockType.INPUT, - backend.data.block.BlockType.OUTPUT, - backend.data.block.BlockType.AGENT, - ): - continue - # Find the execution count for this block - execution_count = next( - (row["execution_count"] for row in results if row["block_id"] == block.id), - 0, - ) - blocks.append((block.get_info(), execution_count)) - # Sort blocks by execution count - blocks.sort(key=lambda x: x[1], reverse=True) - - _suggested_blocks = [block[0] for block in 
blocks] - - # Return the top blocks - return _suggested_blocks[:count] diff --git a/autogpt_platform/backend/backend/server/v2/turnstile/models.py b/autogpt_platform/backend/backend/server/v2/turnstile/models.py deleted file mode 100644 index 9410b89511..0000000000 --- a/autogpt_platform/backend/backend/server/v2/turnstile/models.py +++ /dev/null @@ -1,30 +0,0 @@ -from typing import Optional - -from pydantic import BaseModel, Field - - -class TurnstileVerifyRequest(BaseModel): - """Request model for verifying a Turnstile token.""" - - token: str = Field(description="The Turnstile token to verify") - action: Optional[str] = Field( - default=None, description="The action that the user is attempting to perform" - ) - - -class TurnstileVerifyResponse(BaseModel): - """Response model for the Turnstile verification endpoint.""" - - success: bool = Field(description="Whether the token verification was successful") - error: Optional[str] = Field( - default=None, description="Error message if verification failed" - ) - challenge_timestamp: Optional[str] = Field( - default=None, description="Timestamp of the challenge (ISO format)" - ) - hostname: Optional[str] = Field( - default=None, description="Hostname of the site where the challenge was solved" - ) - action: Optional[str] = Field( - default=None, description="The action associated with this verification" - ) diff --git a/autogpt_platform/backend/backend/server/v2/turnstile/routes.py b/autogpt_platform/backend/backend/server/v2/turnstile/routes.py deleted file mode 100644 index 7a4fe5bafa..0000000000 --- a/autogpt_platform/backend/backend/server/v2/turnstile/routes.py +++ /dev/null @@ -1,112 +0,0 @@ -import logging - -import aiohttp -from fastapi import APIRouter - -from backend.util.settings import Settings - -from .models import TurnstileVerifyRequest, TurnstileVerifyResponse - -logger = logging.getLogger(__name__) - -router = APIRouter() -settings = Settings() - - -@router.post( - "/verify", response_model=TurnstileVerifyResponse, summary="Verify Turnstile Token" -) -async def verify_turnstile_token( - request: TurnstileVerifyRequest, -) -> TurnstileVerifyResponse: - """ - Verify a Cloudflare Turnstile token. - This endpoint verifies a token returned by the Cloudflare Turnstile challenge - on the client side. It returns whether the verification was successful. - """ - logger.info(f"Verifying Turnstile token for action: {request.action}") - return await verify_token(request) - - -async def verify_token(request: TurnstileVerifyRequest) -> TurnstileVerifyResponse: - """ - Verify a Cloudflare Turnstile token by making a request to the Cloudflare API. - """ - # Get the secret key from settings - turnstile_secret_key = settings.secrets.turnstile_secret_key - turnstile_verify_url = settings.secrets.turnstile_verify_url - - if not turnstile_secret_key: - logger.error( - "Turnstile secret key missing. Set TURNSTILE_SECRET_KEY to enable verification." 
- ) - return TurnstileVerifyResponse( - success=False, - error="CONFIGURATION_ERROR", - challenge_timestamp=None, - hostname=None, - action=None, - ) - - try: - async with aiohttp.ClientSession() as session: - payload = { - "secret": turnstile_secret_key, - "response": request.token, - } - - if request.action: - payload["action"] = request.action - - logger.debug(f"Verifying Turnstile token with action: {request.action}") - - async with session.post( - turnstile_verify_url, - data=payload, - timeout=aiohttp.ClientTimeout(total=10), - ) as response: - if response.status != 200: - error_text = await response.text() - logger.error(f"Turnstile API error: {error_text}") - return TurnstileVerifyResponse( - success=False, - error=f"API_ERROR: {response.status}", - challenge_timestamp=None, - hostname=None, - action=None, - ) - - data = await response.json() - logger.debug(f"Turnstile API response: {data}") - - # Parse the response and return a structured object - return TurnstileVerifyResponse( - success=data.get("success", False), - error=( - data.get("error-codes", None)[0] - if data.get("error-codes") - else None - ), - challenge_timestamp=data.get("challenge_timestamp"), - hostname=data.get("hostname"), - action=data.get("action"), - ) - - except aiohttp.ClientError as e: - logger.error(f"Connection error to Turnstile API: {str(e)}") - return TurnstileVerifyResponse( - success=False, - error=f"CONNECTION_ERROR: {str(e)}", - challenge_timestamp=None, - hostname=None, - action=None, - ) - except Exception as e: - logger.error(f"Unexpected error in Turnstile verification: {str(e)}") - return TurnstileVerifyResponse( - success=False, - error=f"UNEXPECTED_ERROR: {str(e)}", - challenge_timestamp=None, - hostname=None, - action=None, - ) diff --git a/autogpt_platform/backend/backend/server/v2/turnstile/routes_test.py b/autogpt_platform/backend/backend/server/v2/turnstile/routes_test.py deleted file mode 100644 index 5a9260131f..0000000000 --- a/autogpt_platform/backend/backend/server/v2/turnstile/routes_test.py +++ /dev/null @@ -1,32 +0,0 @@ -import fastapi -import fastapi.testclient -import pytest_mock - -import backend.server.v2.turnstile.routes as turnstile_routes - -app = fastapi.FastAPI() -app.include_router(turnstile_routes.router) - -client = fastapi.testclient.TestClient(app) - - -def test_verify_turnstile_token_no_secret_key(mocker: pytest_mock.MockFixture) -> None: - """Test token verification without secret key configured""" - # Mock the settings with no secret key - mock_settings = mocker.patch("backend.server.v2.turnstile.routes.settings") - mock_settings.secrets.turnstile_secret_key = None - - request_data = {"token": "test_token", "action": "login"} - response = client.post("/verify", json=request_data) - - assert response.status_code == 200 - response_data = response.json() - assert response_data["success"] is False - assert response_data["error"] == "CONFIGURATION_ERROR" - - -def test_verify_turnstile_token_invalid_request() -> None: - """Test token verification with invalid request data""" - # Missing token - response = client.post("/verify", json={"action": "login"}) - assert response.status_code == 422 diff --git a/autogpt_platform/backend/backend/util/cache.py b/autogpt_platform/backend/backend/util/cache.py index c718d4ef90..757ba45b42 100644 --- a/autogpt_platform/backend/backend/util/cache.py +++ b/autogpt_platform/backend/backend/util/cache.py @@ -27,6 +27,7 @@ from backend.util.settings import Settings P = ParamSpec("P") R = TypeVar("R") R_co = TypeVar("R_co", 
covariant=True) +T = TypeVar("T") logger = logging.getLogger(__name__) settings = Settings() @@ -143,7 +144,7 @@ def cached( ttl_seconds: int, shared_cache: bool = False, refresh_ttl_on_get: bool = False, -) -> Callable[[Callable], CachedFunction]: +) -> Callable[[Callable[P, R]], CachedFunction[P, R]]: """ Thundering herd safe cache decorator for both sync and async functions. @@ -169,7 +170,7 @@ def cached( return {"result": param} """ - def decorator(target_func): + def decorator(target_func: Callable[P, R]) -> CachedFunction[P, R]: cache_storage: dict[tuple, CachedValue] = {} _event_loop_locks: dict[Any, asyncio.Lock] = {} @@ -386,7 +387,7 @@ def cached( setattr(wrapper, "cache_info", cache_info) setattr(wrapper, "cache_delete", cache_delete) - return cast(CachedFunction, wrapper) + return cast(CachedFunction[P, R], wrapper) return decorator diff --git a/autogpt_platform/backend/backend/util/cloud_storage.py b/autogpt_platform/backend/backend/util/cloud_storage.py index 1cb38d2be6..93fb9039ec 100644 --- a/autogpt_platform/backend/backend/util/cloud_storage.py +++ b/autogpt_platform/backend/backend/util/cloud_storage.py @@ -66,7 +66,6 @@ class CloudStorageHandler: connector=aiohttp.TCPConnector( limit=100, # Connection pool limit force_close=False, # Reuse connections - enable_cleanup_closed=True, ) ) diff --git a/autogpt_platform/backend/backend/util/exceptions.py b/autogpt_platform/backend/backend/util/exceptions.py index 892f14470a..6d0192c0e5 100644 --- a/autogpt_platform/backend/backend/util/exceptions.py +++ b/autogpt_platform/backend/backend/util/exceptions.py @@ -1,6 +1,41 @@ from typing import Mapping +class BlockError(Exception): + """An error occurred during the running of a block""" + + def __init__(self, message: str, block_name: str, block_id: str) -> None: + super().__init__( + f"raised by {block_name} with message: {message}. 
block_id: {block_id}" + ) + + +class BlockInputError(BlockError, ValueError): + """The block had incorrect inputs, resulting in an error condition""" + + +class BlockOutputError(BlockError, ValueError): + """The block had incorrect outputs, resulting in an error condition""" + + +class BlockExecutionError(BlockError, ValueError): + """The block failed to execute at runtime, resulting in a handled error""" + + def __init__(self, message: str | None, block_name: str, block_id: str) -> None: + if message is None: + message = "Output error was None" + super().__init__(message, block_name, block_id) + + +class BlockUnknownError(BlockError): + """Critical unknown error with block handling""" + + def __init__(self, message: str | None, block_name: str, block_id: str) -> None: + if not message: + message = "Unknown error occurred" + super().__init__(message, block_name, block_id) + + class MissingConfigError(Exception): """The attempted operation requires configuration which is not available""" @@ -9,6 +44,10 @@ class NotFoundError(ValueError): """The requested record was not found, resulting in an error condition""" +class GraphNotFoundError(ValueError): + """The requested Agent Graph was not found, resulting in an error condition""" + + class NeedConfirmation(Exception): """The user must explicitly confirm that they want to proceed""" @@ -17,6 +56,14 @@ class NotAuthorizedError(ValueError): """The user is not authorized to perform the requested operation""" +class GraphNotAccessibleError(NotAuthorizedError): + """Raised when attempting to execute a graph that is not accessible to the user.""" + + +class GraphNotInLibraryError(GraphNotAccessibleError): + """Raised when attempting to execute a graph that is not / no longer in the user's library.""" + + class InsufficientBalanceError(ValueError): user_id: str message: str @@ -92,3 +139,9 @@ class DatabaseError(Exception): """Raised when there is an error interacting with the database""" pass + + +class RedisError(Exception): + """Raised when there is an error interacting with Redis""" + + pass diff --git a/autogpt_platform/backend/backend/util/exceptions_test.py b/autogpt_platform/backend/backend/util/exceptions_test.py new file mode 100644 index 0000000000..3821356db4 --- /dev/null +++ b/autogpt_platform/backend/backend/util/exceptions_test.py @@ -0,0 +1,125 @@ +from backend.util.exceptions import ( + BlockError, + BlockExecutionError, + BlockInputError, + BlockOutputError, + BlockUnknownError, +) + + +class TestBlockError: + """Tests for BlockError and its subclasses.""" + + def test_block_error_message_format(self): + """Test that BlockError formats the message correctly.""" + error = BlockError( + message="Test error", block_name="TestBlock", block_id="test-123" + ) + assert ( + str(error) + == "raised by TestBlock with message: Test error. 
block_id: test-123" + ) + + def test_block_input_error_inherits_format(self): + """Test that BlockInputError uses parent's message format.""" + error = BlockInputError( + message="Invalid input", block_name="TestBlock", block_id="test-123" + ) + assert "raised by TestBlock with message: Invalid input" in str(error) + + def test_block_output_error_inherits_format(self): + """Test that BlockOutputError uses parent's message format.""" + error = BlockOutputError( + message="Invalid output", block_name="TestBlock", block_id="test-123" + ) + assert "raised by TestBlock with message: Invalid output" in str(error) + + +class TestBlockExecutionErrorNoneHandling: + """Tests for BlockExecutionError handling of None messages.""" + + def test_execution_error_with_none_message(self): + """Test that None message is replaced with descriptive text.""" + error = BlockExecutionError( + message=None, block_name="TestBlock", block_id="test-123" + ) + assert "Output error was None" in str(error) + assert "raised by TestBlock with message: Output error was None" in str(error) + + def test_execution_error_with_valid_message(self): + """Test that valid messages are preserved.""" + error = BlockExecutionError( + message="Actual error", block_name="TestBlock", block_id="test-123" + ) + assert "Actual error" in str(error) + assert "Output error was None" not in str(error) + + def test_execution_error_with_empty_string(self): + """Test that empty string message is NOT replaced (only None is).""" + error = BlockExecutionError( + message="", block_name="TestBlock", block_id="test-123" + ) + # Empty string is falsy but not None, so it's preserved + assert "raised by TestBlock with message: . block_id:" in str(error) + + +class TestBlockUnknownErrorNoneHandling: + """Tests for BlockUnknownError handling of None/empty messages.""" + + def test_unknown_error_with_none_message(self): + """Test that None message is replaced with descriptive text.""" + error = BlockUnknownError( + message=None, block_name="TestBlock", block_id="test-123" + ) + assert "Unknown error occurred" in str(error) + + def test_unknown_error_with_empty_string(self): + """Test that empty string is replaced with descriptive text.""" + error = BlockUnknownError( + message="", block_name="TestBlock", block_id="test-123" + ) + assert "Unknown error occurred" in str(error) + + def test_unknown_error_with_valid_message(self): + """Test that valid messages are preserved.""" + error = BlockUnknownError( + message="Something went wrong", block_name="TestBlock", block_id="test-123" + ) + assert "Something went wrong" in str(error) + assert "Unknown error occurred" not in str(error) + + +class TestBlockErrorInheritance: + """Tests for proper exception inheritance.""" + + def test_block_execution_error_is_value_error(self): + """Test that BlockExecutionError is a ValueError.""" + error = BlockExecutionError( + message="test", block_name="TestBlock", block_id="test-123" + ) + assert isinstance(error, ValueError) + assert isinstance(error, BlockError) + + def test_block_input_error_is_value_error(self): + """Test that BlockInputError is a ValueError.""" + error = BlockInputError( + message="test", block_name="TestBlock", block_id="test-123" + ) + assert isinstance(error, ValueError) + assert isinstance(error, BlockError) + + def test_block_output_error_is_value_error(self): + """Test that BlockOutputError is a ValueError.""" + error = BlockOutputError( + message="test", block_name="TestBlock", block_id="test-123" + ) + assert isinstance(error, ValueError) + assert 
isinstance(error, BlockError) + + def test_block_unknown_error_is_not_value_error(self): + """Test that BlockUnknownError is NOT a ValueError.""" + error = BlockUnknownError( + message="test", block_name="TestBlock", block_id="test-123" + ) + assert not isinstance(error, ValueError) + assert isinstance(error, BlockError) diff --git a/autogpt_platform/backend/backend/util/feature_flag.py b/autogpt_platform/backend/backend/util/feature_flag.py index 92625b0be9..fbd3573112 100644 --- a/autogpt_platform/backend/backend/util/feature_flag.py +++ b/autogpt_platform/backend/backend/util/feature_flag.py @@ -5,7 +5,8 @@ from functools import wraps from typing import Any, Awaitable, Callable, TypeVar import ldclient -from fastapi import HTTPException +from autogpt_libs.auth.dependencies import get_optional_user_id +from fastapi import HTTPException, Security from ldclient import Context, LDClient from ldclient.config import Config from typing_extensions import ParamSpec @@ -36,6 +37,7 @@ class Flag(str, Enum): BETA_BLOCKS = "beta-blocks" AGENT_ACTIVITY = "agent-activity" ENABLE_PLATFORM_PAYMENT = "enable-platform-payment" + CHAT = "chat" def is_configured() -> bool: @@ -63,9 +65,9 @@ def initialize_launchdarkly() -> None: config = Config(sdk_key) ldclient.set_config(config) + global _is_initialized + _is_initialized = True if ldclient.get().is_initialized(): - global _is_initialized - _is_initialized = True logger.info("LaunchDarkly client initialized successfully") else: logger.error("LaunchDarkly client failed to initialize") @@ -218,7 +220,8 @@ def feature_flag( if not get_client().is_initialized(): logger.warning( - f"LaunchDarkly not initialized, using default={default}" + "LaunchDarkly not initialized, " + f"using default {flag_key}={repr(default)}" ) is_enabled = default else: @@ -232,8 +235,9 @@ def feature_flag( else: # Log warning and use default for non-boolean values logger.warning( - f"Feature flag {flag_key} returned non-boolean value: {flag_value} (type: {type(flag_value).__name__}). " - f"Using default={default}" + f"Feature flag {flag_key} returned non-boolean value: " + f"{repr(flag_value)} (type: {type(flag_value).__name__}). " + f"Using default value {repr(default)}" ) is_enabled = default @@ -250,6 +254,72 @@ def feature_flag( return decorator +def create_feature_flag_dependency( + flag_key: Flag, + default: bool = False, +) -> Callable[[str | None], Awaitable[None]]: + """ + Create a FastAPI dependency that checks a feature flag. + + This dependency automatically extracts the user_id from the JWT token + (if present) for proper LaunchDarkly user targeting, while still + supporting anonymous access. + + Args: + flag_key: The Flag enum value to check + default: Default value if flag evaluation fails + + Returns: + An async dependency function that raises HTTPException if flag is disabled + + Example: + router = APIRouter( + dependencies=[Depends(create_feature_flag_dependency(Flag.CHAT))] + ) + """ + + async def check_feature_flag( + user_id: str | None = Security(get_optional_user_id), + ) -> None: + """Check if feature flag is enabled for the user. + + The user_id is automatically injected from JWT authentication if present, + or None for anonymous access. 
+ """ + # For routes that don't require authentication, use anonymous context + check_user_id = user_id or "anonymous" + + if not is_configured(): + logger.debug( + f"LaunchDarkly not configured, using default {flag_key.value}={default}" + ) + if not default: + raise HTTPException(status_code=404, detail="Feature not available") + return + + try: + client = get_client() + if not client.is_initialized(): + logger.debug( + f"LaunchDarkly not initialized, using default {flag_key.value}={default}" + ) + if not default: + raise HTTPException(status_code=404, detail="Feature not available") + return + + is_enabled = await is_feature_enabled(flag_key, check_user_id, default) + + if not is_enabled: + raise HTTPException(status_code=404, detail="Feature not available") + except Exception as e: + logger.warning( + f"LaunchDarkly error for flag {flag_key.value}: {e}, using default={default}" + ) + raise HTTPException(status_code=500, detail="Failed to check feature flag") + + return check_feature_flag + + @contextlib.contextmanager def mock_flag_variation(flag_key: str, return_value: Any): """Context manager for testing feature flags.""" diff --git a/autogpt_platform/backend/backend/util/file.py b/autogpt_platform/backend/backend/util/file.py index edf302fb34..dc8f86ea41 100644 --- a/autogpt_platform/backend/backend/util/file.py +++ b/autogpt_platform/backend/backend/util/file.py @@ -14,12 +14,47 @@ from backend.util.virus_scanner import scan_content_safe TEMP_DIR = Path(tempfile.gettempdir()).resolve() +# Maximum filename length (conservative limit for most filesystems) +MAX_FILENAME_LENGTH = 200 + + +def sanitize_filename(filename: str) -> str: + """ + Sanitize and truncate filename to prevent filesystem errors. + """ + # Remove or replace invalid characters + sanitized = re.sub(r'[<>:"/\\|?*\n\r\t]', "_", filename) + + # Truncate if too long + if len(sanitized) > MAX_FILENAME_LENGTH: + # Keep the extension if possible + if "." in sanitized: + name, ext = sanitized.rsplit(".", 1) + max_name_length = MAX_FILENAME_LENGTH - len(ext) - 1 + sanitized = name[:max_name_length] + "." + ext + else: + sanitized = sanitized[:MAX_FILENAME_LENGTH] + + # Ensure it's not empty or just dots + if not sanitized or sanitized.strip(".") == "": + sanitized = f"file_{uuid.uuid4().hex[:8]}" + + return sanitized + def get_exec_file_path(graph_exec_id: str, path: str) -> str: """ Utility to build an absolute path in the {temp}/exec_file/{exec_id}/... folder. """ - return str(TEMP_DIR / "exec_file" / graph_exec_id / path) + try: + full_path = TEMP_DIR / "exec_file" / graph_exec_id / path + return str(full_path) + except OSError as e: + if "File name too long" in str(e): + raise ValueError( + f"File path too long: {len(path)} characters. Maximum path length exceeded." 
+ ) from e + raise ValueError(f"Invalid file path: {e}") from e def clean_exec_files(graph_exec_id: str, file: str = "") -> None: @@ -117,8 +152,11 @@ async def store_media_file( # Generate filename from cloud path _, path_part = cloud_storage.parse_cloud_path(file) - filename = Path(path_part).name or f"{uuid.uuid4()}.bin" - target_path = _ensure_inside_base(base_path / filename, base_path) + filename = sanitize_filename(Path(path_part).name or f"{uuid.uuid4()}.bin") + try: + target_path = _ensure_inside_base(base_path / filename, base_path) + except OSError as e: + raise ValueError(f"Invalid file path '{filename}': {e}") from e # Check file size limit if len(cloud_content) > MAX_FILE_SIZE: @@ -144,7 +182,10 @@ async def store_media_file( # Generate filename and decode extension = _extension_from_mime(mime_type) filename = f"{uuid.uuid4()}{extension}" - target_path = _ensure_inside_base(base_path / filename, base_path) + try: + target_path = _ensure_inside_base(base_path / filename, base_path) + except OSError as e: + raise ValueError(f"Invalid file path '{filename}': {e}") from e content = base64.b64decode(b64_content) # Check file size limit @@ -160,8 +201,11 @@ async def store_media_file( elif file.startswith(("http://", "https://")): # URL parsed_url = urlparse(file) - filename = Path(parsed_url.path).name or f"{uuid.uuid4()}" - target_path = _ensure_inside_base(base_path / filename, base_path) + filename = sanitize_filename(Path(parsed_url.path).name or f"{uuid.uuid4()}") + try: + target_path = _ensure_inside_base(base_path / filename, base_path) + except OSError as e: + raise ValueError(f"Invalid file path '{filename}': {e}") from e # Download and save resp = await Requests().get(file) @@ -177,8 +221,12 @@ async def store_media_file( target_path.write_bytes(resp.content) else: - # Local path - target_path = _ensure_inside_base(base_path / file, base_path) + # Local path - sanitize the filename part to prevent long filename errors + sanitized_file = sanitize_filename(file) + try: + target_path = _ensure_inside_base(base_path / sanitized_file, base_path) + except OSError as e: + raise ValueError(f"Invalid file path '{sanitized_file}': {e}") from e if not target_path.is_file(): raise ValueError(f"Local file does not exist: {target_path}") diff --git a/autogpt_platform/backend/backend/util/logging.py b/autogpt_platform/backend/backend/util/logging.py index f07d154dab..61280d93f1 100644 --- a/autogpt_platform/backend/backend/util/logging.py +++ b/autogpt_platform/backend/backend/util/logging.py @@ -8,10 +8,7 @@ settings = Settings() def configure_logging(): import autogpt_libs.logging.config - if ( - settings.config.behave_as == BehaveAs.LOCAL - or settings.config.app_env == AppEnvironment.LOCAL - ): + if not is_structured_logging_enabled(): autogpt_libs.logging.config.configure_logging(force_cloud_logging=False) else: autogpt_libs.logging.config.configure_logging(force_cloud_logging=True) @@ -20,6 +17,14 @@ def configure_logging(): logging.getLogger("httpx").setLevel(logging.WARNING) +def is_structured_logging_enabled() -> bool: + """Check if structured logging (cloud logging) is enabled.""" + return not ( + settings.config.behave_as == BehaveAs.LOCAL + or settings.config.app_env == AppEnvironment.LOCAL + ) + + class TruncatedLogger: def __init__( self, diff --git a/autogpt_platform/backend/backend/util/metrics.py b/autogpt_platform/backend/backend/util/metrics.py index 7922fef516..3982b5fabb 100644 --- a/autogpt_platform/backend/backend/util/metrics.py +++ 
b/autogpt_platform/backend/backend/util/metrics.py @@ -3,15 +3,17 @@ from enum import Enum import sentry_sdk from pydantic import SecretStr +from sentry_sdk.integrations import DidNotEnable from sentry_sdk.integrations.anthropic import AnthropicIntegration from sentry_sdk.integrations.asyncio import AsyncioIntegration from sentry_sdk.integrations.launchdarkly import LaunchDarklyIntegration from sentry_sdk.integrations.logging import LoggingIntegration -from backend.util.feature_flag import get_client, is_configured +from backend.util import feature_flag from backend.util.settings import Settings settings = Settings() +logger = logging.getLogger(__name__) class DiscordChannel(str, Enum): @@ -22,8 +24,11 @@ class DiscordChannel(str, Enum): def sentry_init(): sentry_dsn = settings.secrets.sentry_dsn integrations = [] - if is_configured(): - integrations.append(LaunchDarklyIntegration(get_client())) + if feature_flag.is_configured(): + try: + integrations.append(LaunchDarklyIntegration(feature_flag.get_client())) + except DidNotEnable as e: + logger.error(f"Error enabling LaunchDarklyIntegration for Sentry: {e}") sentry_sdk.init( dsn=sentry_dsn, traces_sample_rate=1.0, diff --git a/autogpt_platform/backend/backend/util/process.py b/autogpt_platform/backend/backend/util/process.py index 4b968163ae..beadcfd296 100644 --- a/autogpt_platform/backend/backend/util/process.py +++ b/autogpt_platform/backend/backend/util/process.py @@ -19,7 +19,8 @@ class AppProcess(ABC): """ process: Optional[Process] = None - cleaned_up = False + _shutting_down: bool = False + _cleaned_up: bool = False if "forkserver" in get_all_start_methods(): set_start_method("forkserver", force=True) @@ -43,7 +44,6 @@ class AppProcess(ABC): def service_name(self) -> str: return self.__class__.__name__ - @abstractmethod def cleanup(self): """ Implement this method on a subclass to do post-execution cleanup, @@ -65,7 +65,8 @@ class AppProcess(ABC): self.run() except BaseException as e: logger.warning( - f"[{self.service_name}] Termination request: {type(e).__name__}; {e} executing cleanup." 
+ f"[{self.service_name}] 🛑 Terminating because of {type(e).__name__}: {e}", # noqa + exc_info=e if not isinstance(e, SystemExit) else None, ) # Send error to Sentry before cleanup if not isinstance(e, (KeyboardInterrupt, SystemExit)): @@ -76,8 +77,12 @@ class AppProcess(ABC): except Exception: pass # Silently ignore if Sentry isn't available finally: - self.cleanup() - logger.info(f"[{self.service_name}] Terminated.") + if not self._cleaned_up: + self._cleaned_up = True + logger.info(f"[{self.service_name}] 🧹 Running cleanup") + self.cleanup() + logger.info(f"[{self.service_name}] ✅ Cleanup done") + logger.info(f"[{self.service_name}] 🛑 Terminated") @staticmethod def llprint(message: str): @@ -88,8 +93,8 @@ class AppProcess(ABC): os.write(sys.stdout.fileno(), (message + "\n").encode()) def _self_terminate(self, signum: int, frame): - if not self.cleaned_up: - self.cleaned_up = True + if not self._shutting_down: + self._shutting_down = True sys.exit(0) else: self.llprint( diff --git a/autogpt_platform/backend/backend/util/prompt.py b/autogpt_platform/backend/backend/util/prompt.py index a39f0367dd..775d1c932b 100644 --- a/autogpt_platform/backend/backend/util/prompt.py +++ b/autogpt_platform/backend/backend/util/prompt.py @@ -5,6 +5,13 @@ from tiktoken import encoding_for_model from backend.util import json +# ---------------------------------------------------------------------------# +# CONSTANTS # +# ---------------------------------------------------------------------------# + +# Message prefixes for important system messages that should be protected during compression +MAIN_OBJECTIVE_PREFIX = "[Main Objective Prompt]: " + # ---------------------------------------------------------------------------# # INTERNAL UTILITIES # # ---------------------------------------------------------------------------# @@ -63,6 +70,55 @@ def _msg_tokens(msg: dict, enc) -> int: return WRAPPER + content_tokens + tool_call_tokens +def _is_tool_message(msg: dict) -> bool: + """Check if a message contains tool calls or results that should be protected.""" + content = msg.get("content") + + # Check for Anthropic-style tool messages + if isinstance(content, list) and any( + isinstance(item, dict) and item.get("type") in ("tool_use", "tool_result") + for item in content + ): + return True + + # Check for OpenAI-style tool calls in the message + if "tool_calls" in msg or msg.get("role") == "tool": + return True + + return False + + +def _is_objective_message(msg: dict) -> bool: + """Check if a message contains objective/system prompts that should be absolutely protected.""" + content = msg.get("content", "") + if isinstance(content, str): + # Protect any message with the main objective prefix + return content.startswith(MAIN_OBJECTIVE_PREFIX) + return False + + +def _truncate_tool_message_content(msg: dict, enc, max_tokens: int) -> None: + """ + Carefully truncate tool message content while preserving tool structure. + Only truncates tool_result content, leaves tool_use intact. 
+ """ + content = msg.get("content") + if not isinstance(content, list): + return + + for item in content: + # Only process tool_result items, leave tool_use blocks completely intact + if not (isinstance(item, dict) and item.get("type") == "tool_result"): + continue + + result_content = item.get("content", "") + if ( + isinstance(result_content, str) + and _tok_len(result_content, enc) > max_tokens + ): + item["content"] = _truncate_middle_tokens(result_content, enc, max_tokens) + + def _truncate_middle_tokens(text: str, enc, max_tok: int) -> str: """ Return *text* shortened to ≈max_tok tokens by keeping the head & tail @@ -140,13 +196,21 @@ def compress_prompt( return sum(_msg_tokens(m, enc) for m in msgs) original_token_count = total_tokens() + if original_token_count + reserve <= target_tokens: return msgs # ---- STEP 0 : normalise content -------------------------------------- # Convert non-string payloads to strings so token counting is coherent. - for m in msgs[1:-1]: # keep the first & last intact + for i, m in enumerate(msgs): if not isinstance(m.get("content"), str) and m.get("content") is not None: + if _is_tool_message(m): + continue + + # Keep first and last messages intact (unless they're tool messages) + if i == 0 or i == len(msgs) - 1: + continue + # Reasonable 20k-char ceiling prevents pathological blobs content_str = json.dumps(m["content"], separators=(",", ":")) if len(content_str) > 20_000: @@ -157,34 +221,45 @@ def compress_prompt( cap = start_cap while total_tokens() + reserve > target_tokens and cap >= floor_cap: for m in msgs[1:-1]: # keep first & last intact - if _tok_len(m.get("content") or "", enc) > cap: - m["content"] = _truncate_middle_tokens(m["content"], enc, cap) + if _is_tool_message(m): + # For tool messages, only truncate tool result content, preserve structure + _truncate_tool_message_content(m, enc, cap) + continue + + if _is_objective_message(m): + # Never truncate objective messages - they contain the core task + continue + + content = m.get("content") or "" + if _tok_len(content, enc) > cap: + m["content"] = _truncate_middle_tokens(content, enc, cap) cap //= 2 # tighten the screw # ---- STEP 2 : middle-out deletion ----------------------------------- while total_tokens() + reserve > target_tokens and len(msgs) > 2: + # Identify all deletable messages (not first/last, not tool messages, not objective messages) + deletable_indices = [] + for i in range(1, len(msgs) - 1): # Skip first and last + if not _is_tool_message(msgs[i]) and not _is_objective_message(msgs[i]): + deletable_indices.append(i) + + if not deletable_indices: + break # nothing more we can drop + + # Delete from center outward - find the index closest to center centre = len(msgs) // 2 - # Build a symmetrical centre-out index walk: centre, centre+1, centre-1, ... 
- order = [centre] + [ - i - for pair in zip(range(centre + 1, len(msgs) - 1), range(centre - 1, 0, -1)) - for i in pair - ] - removed = False - for i in order: - msg = msgs[i] - if "tool_calls" in msg or msg.get("role") == "tool": - continue # protect tool shells - del msgs[i] - removed = True - break - if not removed: # nothing more we can drop - break + to_delete = min(deletable_indices, key=lambda i: abs(i - centre)) + del msgs[to_delete] # ---- STEP 3 : final safety-net trim on first & last ------------------ cap = start_cap while total_tokens() + reserve > target_tokens and cap >= floor_cap: for idx in (0, -1): # first and last + if _is_tool_message(msgs[idx]): + # For tool messages at first/last position, truncate tool result content only + _truncate_tool_message_content(msgs[idx], enc, cap) + continue + text = msgs[idx].get("content") or "" if _tok_len(text, enc) > cap: msgs[idx]["content"] = _truncate_middle_tokens(text, enc, cap) diff --git a/autogpt_platform/backend/backend/util/request.py b/autogpt_platform/backend/backend/util/request.py index bbe493265a..9744372b15 100644 --- a/autogpt_platform/backend/backend/util/request.py +++ b/autogpt_platform/backend/backend/util/request.py @@ -11,10 +11,36 @@ from urllib.parse import quote, urljoin, urlparse import aiohttp import idna from aiohttp import FormData, abc -from tenacity import retry, retry_if_result, wait_exponential_jitter +from tenacity import ( + RetryCallState, + retry, + retry_if_result, + stop_after_attempt, + wait_exponential_jitter, +) from backend.util.json import loads + +class HTTPClientError(Exception): + """4xx client errors (400-499)""" + + def __init__(self, message: str, status_code: int): + super().__init__(message) + self.status_code = status_code + + +class HTTPServerError(Exception): + """5xx server errors (500-599)""" + + def __init__(self, message: str, status_code: int): + super().__init__(message) + self.status_code = status_code + + +# Default User-Agent for all requests +DEFAULT_USER_AGENT = "AutoGPT-Platform/1.0 (https://github.com/Significant-Gravitas/AutoGPT; info@agpt.co) aiohttp" + # Retry status codes for which we will automatically retry the request THROTTLE_RETRY_STATUS_CODES: set[int] = {429, 500, 502, 503, 504, 408} @@ -175,10 +201,15 @@ async def validate_url( f"for hostname {ascii_hostname} is not allowed." ) + # Reconstruct the netloc with IDNA-encoded hostname and preserve port + netloc = ascii_hostname + if parsed.port: + netloc = f"{ascii_hostname}:{parsed.port}" + return ( URL( parsed.scheme, - ascii_hostname, + netloc, quote(parsed.path, safe="/%:@"), parsed.params, parsed.query, @@ -280,6 +311,20 @@ class Response: return 200 <= self.status < 300 +def _return_last_result(retry_state: RetryCallState) -> "Response": + """ + Ensure the final attempt's response is returned when retrying stops. 
+ """ + if retry_state.outcome is None: + raise RuntimeError("Retry state is missing an outcome.") + + exception = retry_state.outcome.exception() + if exception is not None: + raise exception + + return retry_state.outcome.result() + + class Requests: """ A wrapper around an aiohttp ClientSession that validates URLs before @@ -294,6 +339,7 @@ class Requests: extra_url_validator: Callable[[URL], URL] | None = None, extra_headers: dict[str, str] | None = None, retry_max_wait: float = 300.0, + retry_max_attempts: int | None = None, ): self.trusted_origins = [] for url in trusted_origins or []: @@ -306,6 +352,9 @@ class Requests: self.extra_url_validator = extra_url_validator self.extra_headers = extra_headers self.retry_max_wait = retry_max_wait + if retry_max_attempts is not None and retry_max_attempts < 1: + raise ValueError("retry_max_attempts must be None or >= 1") + self.retry_max_attempts = retry_max_attempts async def request( self, @@ -320,11 +369,17 @@ class Requests: max_redirects: int = 10, **kwargs, ) -> Response: - @retry( - wait=wait_exponential_jitter(max=self.retry_max_wait), - retry=retry_if_result(lambda r: r.status in THROTTLE_RETRY_STATUS_CODES), - reraise=True, - ) + retry_kwargs: dict[str, Any] = { + "wait": wait_exponential_jitter(max=self.retry_max_wait), + "retry": retry_if_result(lambda r: r.status in THROTTLE_RETRY_STATUS_CODES), + "reraise": True, + } + + if self.retry_max_attempts is not None: + retry_kwargs["stop"] = stop_after_attempt(self.retry_max_attempts) + retry_kwargs["retry_error_callback"] = _return_last_result + + @retry(**retry_kwargs) async def _make_request() -> Response: return await self._request( method=method, @@ -415,11 +470,18 @@ class Requests: if self.extra_headers is not None: req_headers.update(self.extra_headers) + # Set default User-Agent if not provided + if "User-Agent" not in req_headers and "user-agent" not in req_headers: + req_headers["User-Agent"] = DEFAULT_USER_AGENT + # Override Host header if using IP connection if connector: req_headers["Host"] = hostname # Override data if files are provided + # Set max_field_size to handle servers with large headers (e.g., long CSP headers) + # Default is 8190 bytes, we increase to 16KB to accommodate legitimate large headers + session_kwargs["max_field_size"] = 16384 async with aiohttp.ClientSession(**session_kwargs) as session: # Perform the request with redirects disabled for manual handling @@ -438,9 +500,16 @@ class Requests: response.raise_for_status() except ClientResponseError as e: body = await response.read() - raise Exception( - f"HTTP {response.status} Error: {response.reason}, Body: {body.decode(errors='replace')}" - ) from e + error_message = f"HTTP {response.status} Error: {response.reason}, Body: {body.decode(errors='replace')}" + + # Raise specific exceptions based on status code range + if 400 <= response.status <= 499: + raise HTTPClientError(error_message, response.status) from e + elif 500 <= response.status <= 599: + raise HTTPServerError(error_message, response.status) from e + else: + # Generic fallback for other HTTP errors + raise Exception(error_message) from e # If allowed and a redirect is received, follow the redirect manually if allow_redirects and response.status in (301, 302, 303, 307, 308): diff --git a/autogpt_platform/backend/backend/util/request_test.py b/autogpt_platform/backend/backend/util/request_test.py index 57717ff77f..eaabbf4e09 100644 --- a/autogpt_platform/backend/backend/util/request_test.py +++ 
b/autogpt_platform/backend/backend/util/request_test.py @@ -1,4 +1,5 @@ import pytest +from aiohttp import web from backend.util.request import pin_url, validate_url @@ -110,3 +111,63 @@ async def test_dns_rebinding_fix( assert expected_ip in pinned_url # The unpinned URL's hostname should match our original IDNA encoded hostname assert url.hostname == hostname + + +@pytest.mark.asyncio +async def test_large_header_handling(): + """Test that ClientSession with max_field_size=16384 can handle large headers (>8190 bytes)""" + import aiohttp + + # Create a test server that returns large headers + async def large_header_handler(request): + # Create a header value larger than the default aiohttp max_field_size (8190 bytes) + # Simulate a long CSP header or similar legitimate large header + large_value = "policy-" + "x" * 8500 + return web.Response( + text="OK", + headers={"X-Large-Header": large_value}, + ) + + app = web.Application() + app.router.add_get("/large-header", large_header_handler) + + # Start test server + runner = web.AppRunner(app) + await runner.setup() + site = web.TCPSite(runner, "127.0.0.1", 0) + await site.start() + + try: + # Get the port from the server + server = site._server + assert server is not None + sockets = getattr(server, "sockets", None) + assert sockets is not None + port = sockets[0].getsockname()[1] + + # Test with default max_field_size (should fail) + default_failed = False + try: + async with aiohttp.ClientSession() as session: + async with session.get(f"http://127.0.0.1:{port}/large-header") as resp: + await resp.read() + except Exception: + # Expected: any error with default settings when header > 8190 bytes + default_failed = True + + assert default_failed, "Expected error with default max_field_size" + + # Test with increased max_field_size (should succeed) + # This is the fix: setting max_field_size=16384 allows headers up to 16KB + async with aiohttp.ClientSession(max_field_size=16384) as session: + async with session.get(f"http://127.0.0.1:{port}/large-header") as resp: + body = await resp.read() + # Verify the response is successful + assert resp.status == 200 + assert "X-Large-Header" in resp.headers + # Verify the large header value was received + assert len(resp.headers["X-Large-Header"]) > 8190 + assert body == b"OK" + + finally: + await runner.cleanup() diff --git a/autogpt_platform/backend/backend/util/service.py b/autogpt_platform/backend/backend/util/service.py index f53b7f82f3..00b938c170 100644 --- a/autogpt_platform/backend/backend/util/service.py +++ b/autogpt_platform/backend/backend/util/service.py @@ -4,9 +4,12 @@ import concurrent.futures import inspect import logging import os +import signal +import sys import threading import time from abc import ABC, abstractmethod +from contextlib import asynccontextmanager from functools import update_wrapper from typing import ( Any, @@ -25,6 +28,7 @@ from typing import ( import httpx import uvicorn from fastapi import FastAPI, Request, responses +from prisma.errors import DataError from pydantic import BaseModel, TypeAdapter, create_model import backend.util.exceptions as exceptions @@ -111,14 +115,44 @@ class BaseAppService(AppProcess, ABC): return target_host def run_service(self) -> None: - while True: - time.sleep(10) + # HACK: run the main event loop outside the main thread to disable Uvicorn's + # internal signal handlers, since there is no config option for this :( + shared_asyncio_thread = threading.Thread( + target=self._run_shared_event_loop, + daemon=True, + 
name=f"{self.service_name}-shared-event-loop", + ) + shared_asyncio_thread.start() + shared_asyncio_thread.join() + + def _run_shared_event_loop(self) -> None: + try: + self.shared_event_loop.run_forever() + finally: + logger.info(f"[{self.service_name}] 🛑 Shared event loop stopped") + self.shared_event_loop.close() # ensure held resources are released def run_and_wait(self, coro: Coroutine[Any, Any, T]) -> T: return asyncio.run_coroutine_threadsafe(coro, self.shared_event_loop).result() def run(self): - self.shared_event_loop = asyncio.get_event_loop() + self.shared_event_loop = asyncio.new_event_loop() + asyncio.set_event_loop(self.shared_event_loop) + + def cleanup(self): + """ + **💡 Overriding `AppService.lifespan` may be a more convenient option.** + + Implement this method on a subclass to do post-execution cleanup, + e.g. disconnecting from a database or terminating child processes. + + **Note:** if you override this method in a subclass, it must call + `super().cleanup()` *at the end*! + """ + # Stop the shared event loop to allow resource clean-up + self.shared_event_loop.call_soon_threadsafe(self.shared_event_loop.stop) + + super().cleanup() class RemoteCallError(BaseModel): @@ -160,6 +194,7 @@ EXCEPTION_MAPPING = { e.__name__: e for e in [ ValueError, + DataError, RuntimeError, TimeoutError, ConnectionError, @@ -179,6 +214,7 @@ EXCEPTION_MAPPING = { class AppService(BaseAppService, ABC): fastapi_app: FastAPI + http_server: uvicorn.Server | None = None log_level: str = "info" def set_log_level(self, log_level: str): @@ -190,11 +226,10 @@ class AppService(BaseAppService, ABC): def _handle_internal_http_error(status_code: int = 500, log_error: bool = True): def handler(request: Request, exc: Exception): if log_error: - if status_code == 500: - log = logger.exception - else: - log = logger.error - log(f"{request.method} {request.url.path} failed: {exc}") + logger.error( + f"{request.method} {request.url.path} failed: {exc}", + exc_info=exc if status_code == 500 else None, + ) return responses.JSONResponse( status_code=status_code, content=RemoteCallError( @@ -256,13 +291,13 @@ class AppService(BaseAppService, ABC): return sync_endpoint - @conn_retry("FastAPI server", "Starting FastAPI server") + @conn_retry("FastAPI server", "Running FastAPI server") def __start_fastapi(self): logger.info( f"[{self.service_name}] Starting RPC server at http://{api_host}:{self.get_port()}" ) - server = uvicorn.Server( + self.http_server = uvicorn.Server( uvicorn.Config( self.fastapi_app, host=api_host, @@ -271,18 +306,76 @@ class AppService(BaseAppService, ABC): log_level=self.log_level, ) ) - self.shared_event_loop.run_until_complete(server.serve()) + self.run_and_wait(self.http_server.serve()) + + # Perform clean-up when the server exits + if not self._cleaned_up: + self._cleaned_up = True + logger.info(f"[{self.service_name}] 🧹 Running cleanup") + self.cleanup() + logger.info(f"[{self.service_name}] ✅ Cleanup done") + + def _self_terminate(self, signum: int, frame): + """Pass SIGTERM to Uvicorn so it can shut down gracefully""" + signame = signal.Signals(signum).name + if not self._shutting_down: + self._shutting_down = True + if self.http_server: + logger.info( + f"[{self.service_name}] 🛑 Received {signame} ({signum}) - " + "Entering RPC server graceful shutdown" + ) + self.http_server.handle_exit(signum, frame) # stop accepting requests + + # NOTE: Actually stopping the process is triggered by: + # 1. The call to self.cleanup() at the end of __start_fastapi() 👆🏼 + # 2. 
BaseAppService.cleanup() stopping the shared event loop + else: + logger.warning( + f"[{self.service_name}] {signame} received before HTTP server init." + " Terminating..." + ) + sys.exit(0) + + else: + # Expedite shutdown on second SIGTERM + logger.info( + f"[{self.service_name}] 🛑🛑 Received {signame} ({signum}), " + "but shutdown is already underway. Terminating..." + ) + sys.exit(0) + + @asynccontextmanager + async def lifespan(self, app: FastAPI): + """ + The FastAPI/Uvicorn server's lifespan manager, used for setup and shutdown. + + You can extend and use this in a subclass like: + ``` + @asynccontextmanager + async def lifespan(self, app: FastAPI): + async with super().lifespan(app): + await db.connect() + yield + await db.disconnect() + ``` + """ + # Startup - this runs before Uvicorn starts accepting connections + + yield + + # Shutdown - this runs when FastAPI/Uvicorn shuts down + logger.info(f"[{self.service_name}] ✅ FastAPI has finished") async def health_check(self) -> str: - """ - A method to check the health of the process. - """ + """A method to check the health of the process.""" return "OK" def run(self): sentry_init() super().run() - self.fastapi_app = FastAPI() + + self.fastapi_app = FastAPI(lifespan=self.lifespan) # Add Prometheus instrumentation to all services try: @@ -320,12 +413,19 @@ class AppService(BaseAppService, ABC): self.fastapi_app.add_exception_handler( ValueError, self._handle_internal_http_error(400) ) + self.fastapi_app.add_exception_handler( + DataError, self._handle_internal_http_error(400) + ) self.fastapi_app.add_exception_handler( Exception, self._handle_internal_http_error(500) ) # Start the FastAPI server in a separate thread. - api_thread = threading.Thread(target=self.__start_fastapi, daemon=True) + api_thread = threading.Thread( + target=self.__start_fastapi, + daemon=True, + name=f"{self.service_name}-http-server", + ) api_thread.start() # Run the main service loop (blocking). 
@@ -377,6 +477,7 @@ def get_service_client( exclude_exceptions=( # Don't retry these specific exceptions that won't be fixed by retrying ValueError, # Invalid input/parameters + DataError, # Prisma data integrity errors (foreign key, unique constraints) KeyError, # Missing required data TypeError, # Wrong data types AttributeError, # Missing attributes diff --git a/autogpt_platform/backend/backend/util/service_test.py b/autogpt_platform/backend/backend/util/service_test.py index 1683c64220..faa0dd6c84 100644 --- a/autogpt_platform/backend/backend/util/service_test.py +++ b/autogpt_platform/backend/backend/util/service_test.py @@ -1,3 +1,5 @@ +import asyncio +import contextlib import time from functools import cached_property from unittest.mock import Mock @@ -18,20 +20,11 @@ from backend.util.service import ( TEST_SERVICE_PORT = 8765 -def wait_for_service_ready(service_client_type, timeout_seconds=30): - """Helper method to wait for a service to be ready using health check with retry.""" - client = get_service_client(service_client_type, request_retry=True) - client.health_check() # This will retry until service is ready - - class ServiceTest(AppService): def __init__(self): super().__init__() self.fail_count = 0 - def cleanup(self): - pass - @classmethod def get_port(cls) -> int: return TEST_SERVICE_PORT @@ -41,10 +34,17 @@ class ServiceTest(AppService): result = super().__enter__() # Wait for the service to be ready - wait_for_service_ready(ServiceTestClient) + self.wait_until_ready() return result + def wait_until_ready(self, timeout_seconds: int = 5): + """Helper method to wait for a service to be ready using health check with retry.""" + client = get_service_client( + ServiceTestClient, call_timeout=timeout_seconds, request_retry=True + ) + client.health_check() # This will retry until service is ready\ + @expose def add(self, a: int, b: int) -> int: return a + b @@ -490,3 +490,167 @@ class TestHTTPErrorRetryBehavior: ) assert exc_info.value.status_code == status_code + + +class TestGracefulShutdownService(AppService): + """Test service with slow endpoints for testing graceful shutdown""" + + @classmethod + def get_port(cls) -> int: + return 18999 # Use a specific test port + + def __init__(self): + super().__init__() + self.request_log = [] + self.cleanup_called = False + self.cleanup_completed = False + + @expose + async def slow_endpoint(self, duration: int = 5) -> dict: + """Endpoint that takes time to complete""" + start_time = time.time() + self.request_log.append(f"slow_endpoint started at {start_time}") + + await asyncio.sleep(duration) + + end_time = time.time() + result = { + "message": "completed", + "duration": end_time - start_time, + "start_time": start_time, + "end_time": end_time, + } + self.request_log.append(f"slow_endpoint completed at {end_time}") + return result + + @expose + def fast_endpoint(self) -> dict: + """Fast endpoint for testing rejection during shutdown""" + timestamp = time.time() + self.request_log.append(f"fast_endpoint called at {timestamp}") + return {"message": "fast", "timestamp": timestamp} + + def cleanup(self): + """Override cleanup to track when it's called""" + self.cleanup_called = True + self.request_log.append(f"cleanup started at {time.time()}") + + # Call parent cleanup + super().cleanup() + + self.cleanup_completed = True + self.request_log.append(f"cleanup completed at {time.time()}") + + +@pytest.fixture(scope="function") +async def test_service(): + """Run the test service in a separate process""" + + service = 
TestGracefulShutdownService() + service.start(background=True) + + base_url = f"http://localhost:{service.get_port()}" + + await wait_until_service_ready(base_url) + yield service, base_url + + service.stop() + + +async def wait_until_service_ready(base_url: str, timeout: float = 10): + start_time = time.time() + while time.time() - start_time <= timeout: + async with httpx.AsyncClient(timeout=5) as client: + with contextlib.suppress(httpx.ConnectError): + response = await client.get(f"{base_url}/health_check", timeout=5) + + if response.status_code == 200 and response.json() == "OK": + return + + await asyncio.sleep(0.5) + + raise RuntimeError(f"Service at {base_url} not available after {timeout} seconds") + + +async def send_slow_request(base_url: str) -> dict: + """Send a slow request and return the result""" + async with httpx.AsyncClient(timeout=30) as client: + response = await client.post(f"{base_url}/slow_endpoint", json={"duration": 5}) + assert response.status_code == 200 + return response.json() + + +@pytest.mark.asyncio +async def test_graceful_shutdown(test_service): + """Test that AppService handles graceful shutdown correctly""" + service, test_service_url = test_service + + # Start a slow request that should complete even after shutdown + slow_task = asyncio.create_task(send_slow_request(test_service_url)) + + # Give the slow request time to start + await asyncio.sleep(1) + + # Send SIGTERM to the service process + shutdown_start_time = time.time() + service.process.terminate() # This sends SIGTERM + + # Wait a moment for shutdown to start + await asyncio.sleep(0.5) + + # Try to send a new request - should be rejected or connection refused + try: + async with httpx.AsyncClient(timeout=5) as client: + response = await client.post(f"{test_service_url}/fast_endpoint", json={}) + # Should get 503 Service Unavailable during shutdown + assert response.status_code == 503 + assert "shutting down" in response.json()["detail"].lower() + except httpx.ConnectError: + # Connection refused is also acceptable - server stopped accepting + pass + + # The slow request should still complete successfully + slow_result = await slow_task + assert slow_result["message"] == "completed" + assert 4.9 < slow_result["duration"] < 5.5 # Should have taken ~5 seconds + + # Wait for the service to fully shut down + service.process.join(timeout=15) + shutdown_end_time = time.time() + + # Verify the service actually terminated + assert not service.process.is_alive() + + # Verify shutdown took reasonable time (slow request - 1s + cleanup) + shutdown_duration = shutdown_end_time - shutdown_start_time + assert 4 <= shutdown_duration <= 6 # ~5s request - 1s + buffer + + print(f"Shutdown took {shutdown_duration:.2f} seconds") + print(f"Slow request completed in: {slow_result['duration']:.2f} seconds") + + +@pytest.mark.asyncio +async def test_health_check_during_shutdown(test_service): + """Test that health checks behave correctly during shutdown""" + service, test_service_url = test_service + + # Health check should pass initially + async with httpx.AsyncClient(timeout=5) as client: + response = await client.get(f"{test_service_url}/health_check") + assert response.status_code == 200 + + # Send SIGTERM + service.process.terminate() + + # Wait for shutdown to begin + await asyncio.sleep(1) + + # Health check should now fail or connection should be refused + try: + async with httpx.AsyncClient(timeout=5) as client: + response = await client.get(f"{test_service_url}/health_check") + # Could either get 503, 500 
(unhealthy), or connection error + assert response.status_code in [500, 503] + except (httpx.ConnectError, httpx.ConnectTimeout): + # Connection refused/timeout is also acceptable + pass diff --git a/autogpt_platform/backend/backend/util/settings.py b/autogpt_platform/backend/backend/util/settings.py index f07f998a38..0f17b1215c 100644 --- a/autogpt_platform/backend/backend/util/settings.py +++ b/autogpt_platform/backend/backend/util/settings.py @@ -1,5 +1,6 @@ import json import os +import re from enum import Enum from typing import Any, Dict, Generic, List, Set, Tuple, Type, TypeVar @@ -71,6 +72,11 @@ class Config(UpdateTrackingModel["Config"], BaseSettings): description="Maximum number of workers to use for graph execution.", ) + requeue_by_republishing: bool = Field( + default=True, + description="Send rate-limited messages to back of queue by republishing instead of front requeue to prevent blocking other users.", + ) + # FastAPI Thread Pool Configuration # IMPORTANT: FastAPI automatically offloads ALL sync functions to a thread pool: # - Sync endpoint functions (def instead of async def) @@ -179,6 +185,12 @@ class Config(UpdateTrackingModel["Config"], BaseSettings): description="Number of top blocks with most errors to show when no blocks exceed threshold (0 to disable).", ) + # Execution Accuracy Monitoring + execution_accuracy_check_interval_hours: int = Field( + default=24, + description="Interval in hours between execution accuracy alert checks.", + ) + model_config = SettingsConfigDict( env_file=".env", extra="allow", @@ -350,6 +362,13 @@ class Config(UpdateTrackingModel["Config"], BaseSettings): description="Hours between cloud storage cleanup runs (1-24 hours)", ) + oauth_token_cleanup_interval_hours: int = Field( + default=6, + ge=1, + le=24, + description="Hours between OAuth token cleanup runs (1-24 hours)", + ) + upload_file_size_limit_mb: int = Field( default=256, ge=1, @@ -412,6 +431,11 @@ class Config(UpdateTrackingModel["Config"], BaseSettings): description="Name of the event bus", ) + notification_event_bus_name: str = Field( + default="notification_event", + description="Name of the websocket notification event bus", + ) + trust_endpoints_for_requests: List[str] = Field( default_factory=list, description="A whitelist of trusted internal endpoints for the backend to make requests to.", @@ -422,34 +446,68 @@ class Config(UpdateTrackingModel["Config"], BaseSettings): description="Maximum message size limit for communication with the message bus", ) - backend_cors_allow_origins: List[str] = Field(default=["http://localhost:3000"]) + backend_cors_allow_origins: List[str] = Field( + default=["http://localhost:3000"], + description="Allowed Origins for CORS. Supports exact URLs (http/https) or entries prefixed with " + '"regex:" to match via regular expression.', + ) + + external_oauth_callback_origins: List[str] = Field( + default=["http://localhost:3000"], + description="Allowed callback URL origins for external OAuth flows. 
" + "External apps (like Autopilot) must have their callback URLs start with one of these origins.", + ) @field_validator("backend_cors_allow_origins") @classmethod def validate_cors_allow_origins(cls, v: List[str]) -> List[str]: - out = [] - port = None - has_localhost = False - has_127_0_0_1 = False - for url in v: - url = url.strip() - if url.startswith(("http://", "https://")): - if "localhost" in url: - port = url.split(":")[2] - has_localhost = True - if "127.0.0.1" in url: - port = url.split(":")[2] - has_127_0_0_1 = True - out.append(url) - else: - raise ValueError(f"Invalid URL: {url}") + validated: List[str] = [] + localhost_ports: set[str] = set() + ip127_ports: set[str] = set() - if has_127_0_0_1 and not has_localhost: - out.append(f"http://localhost:{port}") - if has_localhost and not has_127_0_0_1: - out.append(f"http://127.0.0.1:{port}") + for raw_origin in v: + origin = raw_origin.strip() + if origin.startswith("regex:"): + pattern = origin[len("regex:") :] + if not pattern: + raise ValueError("Invalid regex pattern: pattern cannot be empty") + try: + re.compile(pattern) + except re.error as exc: + raise ValueError( + f"Invalid regex pattern '{pattern}': {exc}" + ) from exc + validated.append(origin) + continue - return out + if origin.startswith(("http://", "https://")): + if "localhost" in origin: + try: + port = origin.split(":")[2] + localhost_ports.add(port) + except IndexError as exc: + raise ValueError( + "localhost origins must include an explicit port, e.g. http://localhost:3000" + ) from exc + if "127.0.0.1" in origin: + try: + port = origin.split(":")[2] + ip127_ports.add(port) + except IndexError as exc: + raise ValueError( + "127.0.0.1 origins must include an explicit port, e.g. http://127.0.0.1:3000" + ) from exc + validated.append(origin) + continue + + raise ValueError(f"Invalid URL or regex origin: {origin}") + + for port in ip127_ports - localhost_ports: + validated.append(f"http://localhost:{port}") + for port in localhost_ports - ip127_ports: + validated.append(f"http://127.0.0.1:{port}") + + return validated @classmethod def settings_customise_sources( @@ -498,16 +556,6 @@ class Secrets(UpdateTrackingModel["Secrets"], BaseSettings): description="The secret key to use for the unsubscribe user by token", ) - # Cloudflare Turnstile credentials - turnstile_secret_key: str = Field( - default="", - description="Cloudflare Turnstile backend secret key", - ) - turnstile_verify_url: str = Field( - default="https://challenges.cloudflare.com/turnstile/v0/siteverify", - description="Cloudflare Turnstile verify URL", - ) - # OAuth server credentials for integrations # --8<-- [start:OAuthServerCredentialsExample] github_client_id: str = Field(default="", description="GitHub OAuth client ID") @@ -542,6 +590,12 @@ class Secrets(UpdateTrackingModel["Secrets"], BaseSettings): open_router_api_key: str = Field(default="", description="Open Router API Key") llama_api_key: str = Field(default="", description="Llama API Key") v0_api_key: str = Field(default="", description="v0 by Vercel API key") + webshare_proxy_username: str = Field( + default="", description="Webshare Proxy Username" + ) + webshare_proxy_password: str = Field( + default="", description="Webshare Proxy Password" + ) reddit_client_id: str = Field(default="", description="Reddit client ID") reddit_client_secret: str = Field(default="", description="Reddit client secret") diff --git a/autogpt_platform/backend/backend/util/test.py b/autogpt_platform/backend/backend/util/test.py index 13b3365446..1e8244ff8e 
100644 --- a/autogpt_platform/backend/backend/util/test.py +++ b/autogpt_platform/backend/backend/util/test.py @@ -6,19 +6,19 @@ from typing import Sequence, cast from autogpt_libs.auth import get_user_id +from backend.api.rest_api import AgentServer from backend.data import db from backend.data.block import Block, BlockSchema, initialize_blocks from backend.data.execution import ( + ExecutionContext, ExecutionStatus, NodeExecutionResult, - UserContext, get_graph_execution, ) from backend.data.model import _BaseCredentials from backend.data.user import create_default_user from backend.executor import DatabaseManager, ExecutionManager, Scheduler from backend.notifications.notifications import NotificationManager -from backend.server.rest_api import AgentServer log = logging.getLogger(__name__) @@ -140,9 +140,12 @@ async def execute_block_test(block: Block): "graph_exec_id": str(uuid.uuid4()), "node_exec_id": str(uuid.uuid4()), "user_id": str(uuid.uuid4()), - "user_context": UserContext(timezone="UTC"), # Default for tests + "graph_version": 1, # Default version for tests + "execution_context": ExecutionContext(), } input_model = cast(type[BlockSchema], block.input_schema) + + # Handle regular credentials fields credentials_input_fields = input_model.get_credentials_fields() if len(credentials_input_fields) == 1 and isinstance( block.test_credentials, _BaseCredentials @@ -157,6 +160,18 @@ async def execute_block_test(block: Block): if field_name in block.test_credentials: extra_exec_kwargs[field_name] = block.test_credentials[field_name] + # Handle auto-generated credentials (e.g., from GoogleDriveFileInput) + auto_creds_fields = input_model.get_auto_credentials_fields() + if auto_creds_fields and block.test_credentials: + if isinstance(block.test_credentials, _BaseCredentials): + # Single credentials object - use for all auto_credentials kwargs + for kwarg_name in auto_creds_fields.keys(): + extra_exec_kwargs[kwarg_name] = block.test_credentials + elif isinstance(block.test_credentials, dict): + for kwarg_name in auto_creds_fields.keys(): + if kwarg_name in block.test_credentials: + extra_exec_kwargs[kwarg_name] = block.test_credentials[kwarg_name] + for input_data in block.test_input: log.info(f"{prefix} in: {input_data}") diff --git a/autogpt_platform/backend/backend/util/text.py b/autogpt_platform/backend/backend/util/text.py index 45681aeaec..be80299ba7 100644 --- a/autogpt_platform/backend/backend/util/text.py +++ b/autogpt_platform/backend/backend/util/text.py @@ -3,6 +3,7 @@ import logging import bleach from bleach.css_sanitizer import CSSSanitizer from jinja2 import BaseLoader +from jinja2.exceptions import TemplateError from jinja2.sandbox import SandboxedEnvironment from markupsafe import Markup @@ -101,8 +102,11 @@ class TextFormatter: def format_string(self, template_str: str, values=None, **kwargs) -> str: """Regular template rendering with escaping""" - template = self.env.from_string(template_str) - return template.render(values or {}, **kwargs) + try: + template = self.env.from_string(template_str) + return template.render(values or {}, **kwargs) + except TemplateError as e: + raise ValueError(e) from e def format_email( self, diff --git a/autogpt_platform/backend/backend/util/timezone_utils.py b/autogpt_platform/backend/backend/util/timezone_utils.py index 6a6c438085..76614a8357 100644 --- a/autogpt_platform/backend/backend/util/timezone_utils.py +++ b/autogpt_platform/backend/backend/util/timezone_utils.py @@ -10,6 +10,8 @@ from zoneinfo import ZoneInfo from croniter 
import croniter +from backend.data.model import USER_TIMEZONE_NOT_SET + logger = logging.getLogger(__name__) @@ -138,7 +140,7 @@ def get_user_timezone_or_utc(user_timezone: Optional[str]) -> str: Returns: Valid timezone string (user's preference or UTC fallback) """ - if not user_timezone or user_timezone == "not-set": + if not user_timezone or user_timezone == USER_TIMEZONE_NOT_SET: return "UTC" if validate_timezone(user_timezone): diff --git a/autogpt_platform/backend/backend/util/type.py b/autogpt_platform/backend/backend/util/type.py index e1cda80203..2402011669 100644 --- a/autogpt_platform/backend/backend/util/type.py +++ b/autogpt_platform/backend/backend/util/type.py @@ -5,6 +5,13 @@ from typing import Any, Type, TypeVar, Union, cast, get_args, get_origin, overlo from prisma import Json as PrismaJson +def _is_type_or_subclass(origin: Any, target_type: type) -> bool: + """Check if origin is exactly the target type or a subclass of it.""" + return origin is target_type or ( + isinstance(origin, type) and issubclass(origin, target_type) + ) + + class ConversionError(ValueError): pass @@ -138,7 +145,11 @@ def _try_convert(value: Any, target_type: Any, raise_on_mismatch: bool) -> Any: if origin is None: origin = target_type - if origin not in [list, dict, tuple, str, set, int, float, bool]: + # Early return for unsupported types (skip subclasses of supported types) + supported_types = [list, dict, tuple, str, set, int, float, bool] + if origin not in supported_types and not ( + isinstance(origin, type) and any(issubclass(origin, t) for t in supported_types) + ): return value # Handle the case when value is already of the target type @@ -168,44 +179,47 @@ def _try_convert(value: Any, target_type: Any, raise_on_mismatch: bool) -> Any: raise TypeError(f"Value {value} is not of expected type {target_type}") else: # Need to convert value to the origin type - if origin is list: - value = __convert_list(value) + if _is_type_or_subclass(origin, list): + converted_list = __convert_list(value) if args: - return [convert(v, args[0]) for v in value] - else: - return value - elif origin is dict: - value = __convert_dict(value) + converted_list = [convert(v, args[0]) for v in converted_list] + return origin(converted_list) if origin is not list else converted_list + elif _is_type_or_subclass(origin, dict): + converted_dict = __convert_dict(value) if args: key_type, val_type = args - return { - convert(k, key_type): convert(v, val_type) for k, v in value.items() + converted_dict = { + convert(k, key_type): convert(v, val_type) + for k, v in converted_dict.items() } - else: - return value - elif origin is tuple: - value = __convert_tuple(value) + return origin(converted_dict) if origin is not dict else converted_dict + elif _is_type_or_subclass(origin, tuple): + converted_tuple = __convert_tuple(value) if args: if len(args) == 1: - return tuple(convert(v, args[0]) for v in value) + converted_tuple = tuple( + convert(v, args[0]) for v in converted_tuple + ) else: - return tuple(convert(v, t) for v, t in zip(value, args)) - else: - return value - elif origin is str: - return __convert_str(value) - elif origin is set: + converted_tuple = tuple( + convert(v, t) for v, t in zip(converted_tuple, args) + ) + return origin(converted_tuple) if origin is not tuple else converted_tuple + elif _is_type_or_subclass(origin, str): + converted_str = __convert_str(value) + return origin(converted_str) if origin is not str else converted_str + elif _is_type_or_subclass(origin, set): value = __convert_set(value) if args: 
return {convert(v, args[0]) for v in value} else: return value - elif origin is int: - return __convert_num(value, int) - elif origin is float: - return __convert_num(value, float) - elif origin is bool: + elif _is_type_or_subclass(origin, bool): return __convert_bool(value) + elif _is_type_or_subclass(origin, int): + return __convert_num(value, int) + elif _is_type_or_subclass(origin, float): + return __convert_num(value, float) else: return value diff --git a/autogpt_platform/backend/backend/util/type_test.py b/autogpt_platform/backend/backend/util/type_test.py index becadf48b2..920776edbf 100644 --- a/autogpt_platform/backend/backend/util/type_test.py +++ b/autogpt_platform/backend/backend/util/type_test.py @@ -32,3 +32,17 @@ def test_type_conversion(): assert convert("5", List[int]) == [5] assert convert("[5,4,2]", List[int]) == [5, 4, 2] assert convert([5, 4, 2], List[str]) == ["5", "4", "2"] + + # Test the specific case that was failing: empty list to Optional[str] + assert convert([], Optional[str]) == "[]" + assert convert([], str) == "[]" + + # Test the actual failing case: empty list to ShortTextType + from backend.util.type import ShortTextType + + assert convert([], Optional[ShortTextType]) == "[]" + assert convert([], ShortTextType) == "[]" + + # Test other empty list conversions + assert convert([], int) == 0 # len([]) = 0 + assert convert([], Optional[int]) == 0 diff --git a/autogpt_platform/backend/backend/util/virus_scanner.py b/autogpt_platform/backend/backend/util/virus_scanner.py index 1ea31cac95..aa43e5f5d9 100644 --- a/autogpt_platform/backend/backend/util/virus_scanner.py +++ b/autogpt_platform/backend/backend/util/virus_scanner.py @@ -196,7 +196,7 @@ async def scan_content_safe(content: bytes, *, filename: str = "unknown") -> Non VirusDetectedError: If virus is found VirusScanError: If scanning fails """ - from backend.server.v2.store.exceptions import VirusDetectedError, VirusScanError + from backend.api.features.store.exceptions import VirusDetectedError, VirusScanError try: result = await get_virus_scanner().scan_file(content, filename=filename) diff --git a/autogpt_platform/backend/backend/util/virus_scanner_test.py b/autogpt_platform/backend/backend/util/virus_scanner_test.py index 81b5ad3342..77010c7320 100644 --- a/autogpt_platform/backend/backend/util/virus_scanner_test.py +++ b/autogpt_platform/backend/backend/util/virus_scanner_test.py @@ -3,7 +3,7 @@ from unittest.mock import AsyncMock, Mock, patch import pytest -from backend.server.v2.store.exceptions import VirusDetectedError, VirusScanError +from backend.api.features.store.exceptions import VirusDetectedError, VirusScanError from backend.util.virus_scanner import ( VirusScannerService, VirusScannerSettings, diff --git a/autogpt_platform/backend/backend/ws.py b/autogpt_platform/backend/backend/ws.py index 3b15a60eb0..77e2e82a90 100644 --- a/autogpt_platform/backend/backend/ws.py +++ b/autogpt_platform/backend/backend/ws.py @@ -1,5 +1,5 @@ +from backend.api.ws_api import WebsocketServer from backend.app import run_processes -from backend.server.ws_api import WebsocketServer def main(): diff --git a/autogpt_platform/backend/migrations/20251016093049_add_full_text_search/migration.sql b/autogpt_platform/backend/migrations/20251016093049_add_full_text_search/migration.sql new file mode 100644 index 0000000000..b3f90ebd3c --- /dev/null +++ b/autogpt_platform/backend/migrations/20251016093049_add_full_text_search/migration.sql @@ -0,0 +1,100 @@ +-- AlterTable +ALTER TABLE "StoreListingVersion" ADD COLUMN 
"search" tsvector DEFAULT ''::tsvector; + +-- Add trigger to update the search column with the tsvector of the agent +-- Function to be invoked by trigger + +-- Drop the trigger first +DROP TRIGGER IF EXISTS "update_tsvector" ON "StoreListingVersion"; + +-- Drop the function completely +DROP FUNCTION IF EXISTS update_tsvector_column(); + +-- Now recreate it fresh +CREATE OR REPLACE FUNCTION update_tsvector_column() RETURNS TRIGGER AS $$ +BEGIN + NEW.search := to_tsvector('english', + COALESCE(NEW.name, '') || ' ' || + COALESCE(NEW.description, '') || ' ' || + COALESCE(NEW."subHeading", '') + ); + RETURN NEW; +END; +$$ LANGUAGE plpgsql SECURITY DEFINER SET search_path = platform, pg_temp; + +-- Recreate the trigger +CREATE TRIGGER "update_tsvector" +BEFORE INSERT OR UPDATE ON "StoreListingVersion" +FOR EACH ROW +EXECUTE FUNCTION update_tsvector_column(); + +UPDATE "StoreListingVersion" +SET search = to_tsvector('english', + COALESCE(name, '') || ' ' || + COALESCE(description, '') || ' ' || + COALESCE("subHeading", '') +) +WHERE search IS NULL; + +-- Drop and recreate the StoreAgent view with isAvailable field +DROP VIEW IF EXISTS "StoreAgent"; + +CREATE OR REPLACE VIEW "StoreAgent" AS +WITH latest_versions AS ( + SELECT + "storeListingId", + MAX(version) AS max_version + FROM "StoreListingVersion" + WHERE "submissionStatus" = 'APPROVED' + GROUP BY "storeListingId" +), +agent_versions AS ( + SELECT + "storeListingId", + array_agg(DISTINCT version::text ORDER BY version::text) AS versions + FROM "StoreListingVersion" + WHERE "submissionStatus" = 'APPROVED' + GROUP BY "storeListingId" +) +SELECT + sl.id AS listing_id, + slv.id AS "storeListingVersionId", + slv."createdAt" AS updated_at, + sl.slug, + COALESCE(slv.name, '') AS agent_name, + slv."videoUrl" AS agent_video, + COALESCE(slv."imageUrls", ARRAY[]::text[]) AS agent_image, + slv."isFeatured" AS featured, + p.username AS creator_username, -- Allow NULL for malformed sub-agents + p."avatarUrl" AS creator_avatar, -- Allow NULL for malformed sub-agents + slv."subHeading" AS sub_heading, + slv.description, + slv.categories, + slv.search, + COALESCE(ar.run_count, 0::bigint) AS runs, + COALESCE(rs.avg_rating, 0.0)::double precision AS rating, + COALESCE(av.versions, ARRAY[slv.version::text]) AS versions, + COALESCE(sl."useForOnboarding", false) AS "useForOnboarding", + slv."isAvailable" AS is_available -- Add isAvailable field to filter sub-agents +FROM "StoreListing" sl +JOIN latest_versions lv + ON sl.id = lv."storeListingId" +JOIN "StoreListingVersion" slv + ON slv."storeListingId" = lv."storeListingId" + AND slv.version = lv.max_version + AND slv."submissionStatus" = 'APPROVED' +JOIN "AgentGraph" a + ON slv."agentGraphId" = a.id + AND slv."agentGraphVersion" = a.version +LEFT JOIN "Profile" p + ON sl."owningUserId" = p."userId" +LEFT JOIN "mv_review_stats" rs + ON sl.id = rs."storeListingId" +LEFT JOIN "mv_agent_run_counts" ar + ON a.id = ar."agentGraphId" +LEFT JOIN agent_versions av + ON sl.id = av."storeListingId" +WHERE sl."isDeleted" = false + AND sl."hasApprovedVersion" = true; + +COMMIT; \ No newline at end of file diff --git a/autogpt_platform/backend/migrations/20251027162201_migrate_claude_3_5_to_4_5_models/migration.sql b/autogpt_platform/backend/migrations/20251027162201_migrate_claude_3_5_to_4_5_models/migration.sql new file mode 100644 index 0000000000..f85db06378 --- /dev/null +++ b/autogpt_platform/backend/migrations/20251027162201_migrate_claude_3_5_to_4_5_models/migration.sql @@ -0,0 +1,21 @@ +-- Migrate Claude 3.5 models 
to Claude 4.5 models +-- This updates all AgentNode blocks that use deprecated Claude 3.5 models to the new 4.5 models +-- See: https://docs.anthropic.com/en/docs/about-claude/models/legacy-model-guide + +-- Update Claude 3.5 Sonnet to Claude 4.5 Sonnet +UPDATE "AgentNode" +SET "constantInput" = JSONB_SET( + "constantInput"::jsonb, + '{model}', + '"claude-sonnet-4-5-20250929"'::jsonb + ) +WHERE "constantInput"::jsonb->>'model' = 'claude-3-5-sonnet-latest'; + +-- Update Claude 3.5 Haiku to Claude 4.5 Haiku +UPDATE "AgentNode" +SET "constantInput" = JSONB_SET( + "constantInput"::jsonb, + '{model}', + '"claude-haiku-4-5-20251001"'::jsonb + ) +WHERE "constantInput"::jsonb->>'model' = 'claude-3-5-haiku-latest'; diff --git a/autogpt_platform/backend/migrations/20251028000000_add_parent_graph_execution_tracking/migration.sql b/autogpt_platform/backend/migrations/20251028000000_add_parent_graph_execution_tracking/migration.sql new file mode 100644 index 0000000000..ff1f93239c --- /dev/null +++ b/autogpt_platform/backend/migrations/20251028000000_add_parent_graph_execution_tracking/migration.sql @@ -0,0 +1,11 @@ +-- Add parent execution tracking for nested agent graphs +-- This enables cascading stop operations and prevents orphaned child executions + +-- AlterTable +ALTER TABLE "AgentGraphExecution" ADD COLUMN "parentGraphExecutionId" TEXT; + +-- CreateIndex +CREATE INDEX "AgentGraphExecution_parentGraphExecutionId_idx" ON "AgentGraphExecution"("parentGraphExecutionId"); + +-- AddForeignKey +ALTER TABLE "AgentGraphExecution" ADD CONSTRAINT "AgentGraphExecution_parentGraphExecutionId_fkey" FOREIGN KEY ("parentGraphExecutionId") REFERENCES "AgentGraphExecution"("id") ON DELETE SET NULL ON UPDATE CASCADE; diff --git a/autogpt_platform/backend/migrations/20251106091413_migrate_deprecated_groq_openrouter_models/migration.sql b/autogpt_platform/backend/migrations/20251106091413_migrate_deprecated_groq_openrouter_models/migration.sql new file mode 100644 index 0000000000..c750301ae0 --- /dev/null +++ b/autogpt_platform/backend/migrations/20251106091413_migrate_deprecated_groq_openrouter_models/migration.sql @@ -0,0 +1,53 @@ +-- Migrate deprecated Groq and OpenRouter models to their replacements +-- This updates all AgentNode blocks that use deprecated models that have been decommissioned +-- Deprecated models: +-- - deepseek-r1-distill-llama-70b (Groq - decommissioned) +-- - gemma2-9b-it (Groq - decommissioned) +-- - llama3-70b-8192 (Groq - decommissioned) +-- - llama3-8b-8192 (Groq - decommissioned) +-- - google/gemini-flash-1.5 (OpenRouter - no endpoints found) + +-- Update llama3-70b-8192 to llama-3.3-70b-versatile +UPDATE "AgentNode" +SET "constantInput" = JSONB_SET( + "constantInput"::jsonb, + '{model}', + '"llama-3.3-70b-versatile"'::jsonb + ) +WHERE "constantInput"::jsonb->>'model' = 'llama3-70b-8192'; + +-- Update llama3-8b-8192 to llama-3.1-8b-instant +UPDATE "AgentNode" +SET "constantInput" = JSONB_SET( + "constantInput"::jsonb, + '{model}', + '"llama-3.1-8b-instant"'::jsonb + ) +WHERE "constantInput"::jsonb->>'model' = 'llama3-8b-8192'; + +-- Update google/gemini-flash-1.5 to google/gemini-2.5-flash +UPDATE "AgentNode" +SET "constantInput" = JSONB_SET( + "constantInput"::jsonb, + '{model}', + '"google/gemini-2.5-flash"'::jsonb + ) +WHERE "constantInput"::jsonb->>'model' = 'google/gemini-flash-1.5'; + +-- Update deepseek-r1-distill-llama-70b to gpt-5-chat-latest (no direct replacement) +UPDATE "AgentNode" +SET "constantInput" = JSONB_SET( + "constantInput"::jsonb, + '{model}', + 
'"gpt-5-chat-latest"'::jsonb + ) +WHERE "constantInput"::jsonb->>'model' = 'deepseek-r1-distill-llama-70b'; + +-- Update gemma2-9b-it to gpt-5-chat-latest (no direct replacement) +UPDATE "AgentNode" +SET "constantInput" = JSONB_SET( + "constantInput"::jsonb, + '{model}', + '"gpt-5-chat-latest"'::jsonb + ) +WHERE "constantInput"::jsonb->>'model' = 'gemma2-9b-it'; diff --git a/autogpt_platform/backend/migrations/20251117102522_add_human_in_the_loop_table/migration.sql b/autogpt_platform/backend/migrations/20251117102522_add_human_in_the_loop_table/migration.sql new file mode 100644 index 0000000000..5a2cc2f722 --- /dev/null +++ b/autogpt_platform/backend/migrations/20251117102522_add_human_in_the_loop_table/migration.sql @@ -0,0 +1,44 @@ +-- CreateEnum +CREATE TYPE "ReviewStatus" AS ENUM ('WAITING', 'APPROVED', 'REJECTED'); + +-- AlterEnum +ALTER TYPE "AgentExecutionStatus" ADD VALUE 'REVIEW'; + +-- CreateTable +CREATE TABLE "PendingHumanReview" ( + "nodeExecId" TEXT NOT NULL, + "userId" TEXT NOT NULL, + "graphExecId" TEXT NOT NULL, + "graphId" TEXT NOT NULL, + "graphVersion" INTEGER NOT NULL, + "payload" JSONB NOT NULL, + "instructions" TEXT, + "editable" BOOLEAN NOT NULL DEFAULT true, + "status" "ReviewStatus" NOT NULL DEFAULT 'WAITING', + "reviewMessage" TEXT, + "wasEdited" BOOLEAN, + "processed" BOOLEAN NOT NULL DEFAULT false, + "createdAt" TIMESTAMP(3) NOT NULL DEFAULT CURRENT_TIMESTAMP, + "updatedAt" TIMESTAMP(3), + "reviewedAt" TIMESTAMP(3), + + CONSTRAINT "PendingHumanReview_pkey" PRIMARY KEY ("nodeExecId") +); + +-- CreateIndex +CREATE INDEX "PendingHumanReview_userId_status_idx" ON "PendingHumanReview"("userId", "status"); + +-- CreateIndex +CREATE INDEX "PendingHumanReview_graphExecId_status_idx" ON "PendingHumanReview"("graphExecId", "status"); + +-- CreateIndex +CREATE UNIQUE INDEX "PendingHumanReview_nodeExecId_key" ON "PendingHumanReview"("nodeExecId"); + +-- AddForeignKey +ALTER TABLE "PendingHumanReview" ADD CONSTRAINT "PendingHumanReview_userId_fkey" FOREIGN KEY ("userId") REFERENCES "User"("id") ON DELETE CASCADE ON UPDATE CASCADE; + +-- AddForeignKey +ALTER TABLE "PendingHumanReview" ADD CONSTRAINT "PendingHumanReview_nodeExecId_fkey" FOREIGN KEY ("nodeExecId") REFERENCES "AgentNodeExecution"("id") ON DELETE CASCADE ON UPDATE CASCADE; + +-- AddForeignKey +ALTER TABLE "PendingHumanReview" ADD CONSTRAINT "PendingHumanReview_graphExecId_fkey" FOREIGN KEY ("graphExecId") REFERENCES "AgentGraphExecution"("id") ON DELETE CASCADE ON UPDATE CASCADE; diff --git a/autogpt_platform/backend/migrations/20251126141555_add_api_key_store_permissions/migration.sql b/autogpt_platform/backend/migrations/20251126141555_add_api_key_store_permissions/migration.sql new file mode 100644 index 0000000000..c921244c99 --- /dev/null +++ b/autogpt_platform/backend/migrations/20251126141555_add_api_key_store_permissions/migration.sql @@ -0,0 +1,2 @@ +-- AlterEnum +ALTER TYPE "APIKeyPermission" ADD VALUE 'READ_STORE'; diff --git a/autogpt_platform/backend/migrations/20251127092500_add_api_key_tools_permission/migration.sql b/autogpt_platform/backend/migrations/20251127092500_add_api_key_tools_permission/migration.sql new file mode 100644 index 0000000000..11549032e9 --- /dev/null +++ b/autogpt_platform/backend/migrations/20251127092500_add_api_key_tools_permission/migration.sql @@ -0,0 +1,2 @@ +-- AlterEnum +ALTER TYPE "APIKeyPermission" ADD VALUE 'USE_TOOLS'; diff --git a/autogpt_platform/backend/migrations/20251127144817_add_api_key_integration_permissions/migration.sql 
b/autogpt_platform/backend/migrations/20251127144817_add_api_key_integration_permissions/migration.sql new file mode 100644 index 0000000000..f3abc99947 --- /dev/null +++ b/autogpt_platform/backend/migrations/20251127144817_add_api_key_integration_permissions/migration.sql @@ -0,0 +1,4 @@ +-- AlterEnum +ALTER TYPE "APIKeyPermission" ADD VALUE 'MANAGE_INTEGRATIONS'; +ALTER TYPE "APIKeyPermission" ADD VALUE 'READ_INTEGRATIONS'; +ALTER TYPE "APIKeyPermission" ADD VALUE 'DELETE_INTEGRATIONS'; diff --git a/autogpt_platform/backend/migrations/20251128112407_add_library_agent_settings/migration.sql b/autogpt_platform/backend/migrations/20251128112407_add_library_agent_settings/migration.sql new file mode 100644 index 0000000000..a9cf141ce2 --- /dev/null +++ b/autogpt_platform/backend/migrations/20251128112407_add_library_agent_settings/migration.sql @@ -0,0 +1,2 @@ +-- AlterTable +ALTER TABLE "LibraryAgent" ADD COLUMN "settings" JSONB NOT NULL DEFAULT '{}'; diff --git a/autogpt_platform/backend/migrations/20251204012214_add_marketplace_agent_output_video_column/migration.sql b/autogpt_platform/backend/migrations/20251204012214_add_marketplace_agent_output_video_column/migration.sql new file mode 100644 index 0000000000..bdecc9678b --- /dev/null +++ b/autogpt_platform/backend/migrations/20251204012214_add_marketplace_agent_output_video_column/migration.sql @@ -0,0 +1,64 @@ +-- AlterTable +ALTER TABLE "StoreListingVersion" ADD COLUMN "agentOutputDemoUrl" TEXT; + +-- Drop and recreate the StoreAgent view with agentOutputDemoUrl field +DROP VIEW IF EXISTS "StoreAgent"; + +CREATE OR REPLACE VIEW "StoreAgent" AS +WITH latest_versions AS ( + SELECT + "storeListingId", + MAX(version) AS max_version + FROM "StoreListingVersion" + WHERE "submissionStatus" = 'APPROVED' + GROUP BY "storeListingId" +), +agent_versions AS ( + SELECT + "storeListingId", + array_agg(DISTINCT version::text ORDER BY version::text) AS versions + FROM "StoreListingVersion" + WHERE "submissionStatus" = 'APPROVED' + GROUP BY "storeListingId" +) +SELECT + sl.id AS listing_id, + slv.id AS "storeListingVersionId", + slv."createdAt" AS updated_at, + sl.slug, + COALESCE(slv.name, '') AS agent_name, + slv."videoUrl" AS agent_video, + slv."agentOutputDemoUrl" AS agent_output_demo, + COALESCE(slv."imageUrls", ARRAY[]::text[]) AS agent_image, + slv."isFeatured" AS featured, + p.username AS creator_username, -- Allow NULL for malformed sub-agents + p."avatarUrl" AS creator_avatar, -- Allow NULL for malformed sub-agents + slv."subHeading" AS sub_heading, + slv.description, + slv.categories, + slv.search, + COALESCE(ar.run_count, 0::bigint) AS runs, + COALESCE(rs.avg_rating, 0.0)::double precision AS rating, + COALESCE(av.versions, ARRAY[slv.version::text]) AS versions, + slv."isAvailable" AS is_available, + COALESCE(sl."useForOnboarding", false) AS "useForOnboarding" +FROM "StoreListing" sl +JOIN latest_versions lv + ON sl.id = lv."storeListingId" +JOIN "StoreListingVersion" slv + ON slv."storeListingId" = lv."storeListingId" + AND slv.version = lv.max_version + AND slv."submissionStatus" = 'APPROVED' +JOIN "AgentGraph" a + ON slv."agentGraphId" = a.id + AND slv."agentGraphVersion" = a.version +LEFT JOIN "Profile" p + ON sl."owningUserId" = p."userId" +LEFT JOIN "mv_review_stats" rs + ON sl.id = rs."storeListingId" +LEFT JOIN "mv_agent_run_counts" ar + ON a.id = ar."agentGraphId" +LEFT JOIN agent_versions av + ON sl.id = av."storeListingId" +WHERE sl."isDeleted" = false + AND sl."hasApprovedVersion" = true; diff --git 
a/autogpt_platform/backend/migrations/20251209182537_add_builder_search/migration.sql b/autogpt_platform/backend/migrations/20251209182537_add_builder_search/migration.sql new file mode 100644 index 0000000000..8b9786e47c --- /dev/null +++ b/autogpt_platform/backend/migrations/20251209182537_add_builder_search/migration.sql @@ -0,0 +1,15 @@ +-- Create BuilderSearchHistory table +CREATE TABLE "BuilderSearchHistory" ( + "id" TEXT NOT NULL, + "userId" TEXT NOT NULL, + "createdAt" TIMESTAMP(3) NOT NULL DEFAULT CURRENT_TIMESTAMP, + "updatedAt" TIMESTAMP(3) NOT NULL DEFAULT CURRENT_TIMESTAMP, + "searchQuery" TEXT NOT NULL, + "filter" TEXT[] DEFAULT ARRAY[]::TEXT[], + "byCreator" TEXT[] DEFAULT ARRAY[]::TEXT[], + + CONSTRAINT "BuilderSearchHistory_pkey" PRIMARY KEY ("id") +); + +-- Define User foreign relation +ALTER TABLE "BuilderSearchHistory" ADD CONSTRAINT "BuilderSearchHistory_userId_fkey" FOREIGN KEY ("userId") REFERENCES "User"("id") ON DELETE CASCADE ON UPDATE CASCADE; diff --git a/autogpt_platform/backend/migrations/20251212165920_add_oauth_provider_support/migration.sql b/autogpt_platform/backend/migrations/20251212165920_add_oauth_provider_support/migration.sql new file mode 100644 index 0000000000..9c8672c4c3 --- /dev/null +++ b/autogpt_platform/backend/migrations/20251212165920_add_oauth_provider_support/migration.sql @@ -0,0 +1,129 @@ +-- CreateTable +CREATE TABLE "OAuthApplication" ( + "id" TEXT NOT NULL, + "createdAt" TIMESTAMP(3) NOT NULL DEFAULT CURRENT_TIMESTAMP, + "updatedAt" TIMESTAMP(3) NOT NULL, + "name" TEXT NOT NULL, + "description" TEXT, + "clientId" TEXT NOT NULL, + "clientSecret" TEXT NOT NULL, + "clientSecretSalt" TEXT NOT NULL, + "redirectUris" TEXT[], + "grantTypes" TEXT[] DEFAULT ARRAY['authorization_code', 'refresh_token']::TEXT[], + "scopes" "APIKeyPermission"[], + "ownerId" TEXT NOT NULL, + "isActive" BOOLEAN NOT NULL DEFAULT true, + + CONSTRAINT "OAuthApplication_pkey" PRIMARY KEY ("id") +); + +-- CreateTable +CREATE TABLE "OAuthAuthorizationCode" ( + "id" TEXT NOT NULL, + "code" TEXT NOT NULL, + "createdAt" TIMESTAMP(3) NOT NULL DEFAULT CURRENT_TIMESTAMP, + "expiresAt" TIMESTAMP(3) NOT NULL, + "applicationId" TEXT NOT NULL, + "userId" TEXT NOT NULL, + "scopes" "APIKeyPermission"[], + "redirectUri" TEXT NOT NULL, + "codeChallenge" TEXT, + "codeChallengeMethod" TEXT, + "usedAt" TIMESTAMP(3), + + CONSTRAINT "OAuthAuthorizationCode_pkey" PRIMARY KEY ("id") +); + +-- CreateTable +CREATE TABLE "OAuthAccessToken" ( + "id" TEXT NOT NULL, + "token" TEXT NOT NULL, + "createdAt" TIMESTAMP(3) NOT NULL DEFAULT CURRENT_TIMESTAMP, + "expiresAt" TIMESTAMP(3) NOT NULL, + "applicationId" TEXT NOT NULL, + "userId" TEXT NOT NULL, + "scopes" "APIKeyPermission"[], + "revokedAt" TIMESTAMP(3), + + CONSTRAINT "OAuthAccessToken_pkey" PRIMARY KEY ("id") +); + +-- CreateTable +CREATE TABLE "OAuthRefreshToken" ( + "id" TEXT NOT NULL, + "token" TEXT NOT NULL, + "createdAt" TIMESTAMP(3) NOT NULL DEFAULT CURRENT_TIMESTAMP, + "expiresAt" TIMESTAMP(3) NOT NULL, + "applicationId" TEXT NOT NULL, + "userId" TEXT NOT NULL, + "scopes" "APIKeyPermission"[], + "revokedAt" TIMESTAMP(3), + + CONSTRAINT "OAuthRefreshToken_pkey" PRIMARY KEY ("id") +); + +-- CreateIndex +CREATE UNIQUE INDEX "OAuthApplication_clientId_key" ON "OAuthApplication"("clientId"); + +-- CreateIndex +CREATE INDEX "OAuthApplication_clientId_idx" ON "OAuthApplication"("clientId"); + +-- CreateIndex +CREATE INDEX "OAuthApplication_ownerId_idx" ON "OAuthApplication"("ownerId"); + +-- CreateIndex +CREATE UNIQUE INDEX 
"OAuthAuthorizationCode_code_key" ON "OAuthAuthorizationCode"("code"); + +-- CreateIndex +CREATE INDEX "OAuthAuthorizationCode_code_idx" ON "OAuthAuthorizationCode"("code"); + +-- CreateIndex +CREATE INDEX "OAuthAuthorizationCode_applicationId_userId_idx" ON "OAuthAuthorizationCode"("applicationId", "userId"); + +-- CreateIndex +CREATE INDEX "OAuthAuthorizationCode_expiresAt_idx" ON "OAuthAuthorizationCode"("expiresAt"); + +-- CreateIndex +CREATE UNIQUE INDEX "OAuthAccessToken_token_key" ON "OAuthAccessToken"("token"); + +-- CreateIndex +CREATE INDEX "OAuthAccessToken_token_idx" ON "OAuthAccessToken"("token"); + +-- CreateIndex +CREATE INDEX "OAuthAccessToken_userId_applicationId_idx" ON "OAuthAccessToken"("userId", "applicationId"); + +-- CreateIndex +CREATE INDEX "OAuthAccessToken_expiresAt_idx" ON "OAuthAccessToken"("expiresAt"); + +-- CreateIndex +CREATE UNIQUE INDEX "OAuthRefreshToken_token_key" ON "OAuthRefreshToken"("token"); + +-- CreateIndex +CREATE INDEX "OAuthRefreshToken_token_idx" ON "OAuthRefreshToken"("token"); + +-- CreateIndex +CREATE INDEX "OAuthRefreshToken_userId_applicationId_idx" ON "OAuthRefreshToken"("userId", "applicationId"); + +-- CreateIndex +CREATE INDEX "OAuthRefreshToken_expiresAt_idx" ON "OAuthRefreshToken"("expiresAt"); + +-- AddForeignKey +ALTER TABLE "OAuthApplication" ADD CONSTRAINT "OAuthApplication_ownerId_fkey" FOREIGN KEY ("ownerId") REFERENCES "User"("id") ON DELETE CASCADE ON UPDATE CASCADE; + +-- AddForeignKey +ALTER TABLE "OAuthAuthorizationCode" ADD CONSTRAINT "OAuthAuthorizationCode_applicationId_fkey" FOREIGN KEY ("applicationId") REFERENCES "OAuthApplication"("id") ON DELETE CASCADE ON UPDATE CASCADE; + +-- AddForeignKey +ALTER TABLE "OAuthAuthorizationCode" ADD CONSTRAINT "OAuthAuthorizationCode_userId_fkey" FOREIGN KEY ("userId") REFERENCES "User"("id") ON DELETE CASCADE ON UPDATE CASCADE; + +-- AddForeignKey +ALTER TABLE "OAuthAccessToken" ADD CONSTRAINT "OAuthAccessToken_applicationId_fkey" FOREIGN KEY ("applicationId") REFERENCES "OAuthApplication"("id") ON DELETE CASCADE ON UPDATE CASCADE; + +-- AddForeignKey +ALTER TABLE "OAuthAccessToken" ADD CONSTRAINT "OAuthAccessToken_userId_fkey" FOREIGN KEY ("userId") REFERENCES "User"("id") ON DELETE CASCADE ON UPDATE CASCADE; + +-- AddForeignKey +ALTER TABLE "OAuthRefreshToken" ADD CONSTRAINT "OAuthRefreshToken_applicationId_fkey" FOREIGN KEY ("applicationId") REFERENCES "OAuthApplication"("id") ON DELETE CASCADE ON UPDATE CASCADE; + +-- AddForeignKey +ALTER TABLE "OAuthRefreshToken" ADD CONSTRAINT "OAuthRefreshToken_userId_fkey" FOREIGN KEY ("userId") REFERENCES "User"("id") ON DELETE CASCADE ON UPDATE CASCADE; diff --git a/autogpt_platform/backend/migrations/20251216182139_fix_store_submission_agent_version/migration.sql b/autogpt_platform/backend/migrations/20251216182139_fix_store_submission_agent_version/migration.sql new file mode 100644 index 0000000000..676fe641b6 --- /dev/null +++ b/autogpt_platform/backend/migrations/20251216182139_fix_store_submission_agent_version/migration.sql @@ -0,0 +1,45 @@ +-- Fix StoreSubmission view to use agentGraphVersion instead of version for agent_version field +-- This ensures that submission.agent_version returns the actual agent graph version, not the store listing version number + +BEGIN; + +-- Recreate the view with the corrected agent_version field (using agentGraphVersion instead of version) +CREATE OR REPLACE VIEW "StoreSubmission" AS +SELECT + sl.id AS listing_id, + sl."owningUserId" AS user_id, + slv."agentGraphId" AS agent_id, + 
slv."agentGraphVersion" AS agent_version, + sl.slug, + COALESCE(slv.name, '') AS name, + slv."subHeading" AS sub_heading, + slv.description, + slv.instructions, + slv."imageUrls" AS image_urls, + slv."submittedAt" AS date_submitted, + slv."submissionStatus" AS status, + COALESCE(ar.run_count, 0::bigint) AS runs, + COALESCE(avg(sr.score::numeric), 0.0)::double precision AS rating, + slv.id AS store_listing_version_id, + slv."reviewerId" AS reviewer_id, + slv."reviewComments" AS review_comments, + slv."internalComments" AS internal_comments, + slv."reviewedAt" AS reviewed_at, + slv."changesSummary" AS changes_summary, + slv."videoUrl" AS video_url, + slv.categories +FROM "StoreListing" sl + JOIN "StoreListingVersion" slv ON slv."storeListingId" = sl.id + LEFT JOIN "StoreListingReview" sr ON sr."storeListingVersionId" = slv.id + LEFT JOIN ( + SELECT "AgentGraphExecution"."agentGraphId", count(*) AS run_count + FROM "AgentGraphExecution" + GROUP BY "AgentGraphExecution"."agentGraphId" + ) ar ON ar."agentGraphId" = slv."agentGraphId" +WHERE sl."isDeleted" = false +GROUP BY sl.id, sl."owningUserId", slv.id, slv."agentGraphId", slv."agentGraphVersion", sl.slug, slv.name, + slv."subHeading", slv.description, slv.instructions, slv."imageUrls", slv."submittedAt", + slv."submissionStatus", slv."reviewerId", slv."reviewComments", slv."internalComments", + slv."reviewedAt", slv."changesSummary", slv."videoUrl", slv.categories, ar.run_count; + +COMMIT; \ No newline at end of file diff --git a/autogpt_platform/backend/migrations/20251217174500_fix_store_agent_versions_to_graph_versions/migration.sql b/autogpt_platform/backend/migrations/20251217174500_fix_store_agent_versions_to_graph_versions/migration.sql new file mode 100644 index 0000000000..495ac113b4 --- /dev/null +++ b/autogpt_platform/backend/migrations/20251217174500_fix_store_agent_versions_to_graph_versions/migration.sql @@ -0,0 +1,81 @@ +-- Add agentGraphVersions field to StoreAgent view for consistent version comparison +-- This keeps the existing versions field unchanged and adds a new field with graph versions +-- This makes it safe for version comparison with LibraryAgent.graph_version + +BEGIN; + +-- Drop and recreate the StoreAgent view with new agentGraphVersions field +DROP VIEW IF EXISTS "StoreAgent"; + +CREATE OR REPLACE VIEW "StoreAgent" AS +WITH latest_versions AS ( + SELECT + "storeListingId", + MAX(version) AS max_version + FROM "StoreListingVersion" + WHERE "submissionStatus" = 'APPROVED' + GROUP BY "storeListingId" +), +agent_versions AS ( + SELECT + "storeListingId", + array_agg(DISTINCT version::text ORDER BY version::text) AS versions + FROM "StoreListingVersion" + WHERE "submissionStatus" = 'APPROVED' + GROUP BY "storeListingId" +), +agent_graph_versions AS ( + SELECT + "storeListingId", + array_agg(DISTINCT "agentGraphVersion"::text ORDER BY "agentGraphVersion"::text) AS graph_versions + FROM "StoreListingVersion" + WHERE "submissionStatus" = 'APPROVED' + GROUP BY "storeListingId" +) +SELECT + sl.id AS listing_id, + slv.id AS "storeListingVersionId", + slv."createdAt" AS updated_at, + sl.slug, + COALESCE(slv.name, '') AS agent_name, + slv."videoUrl" AS agent_video, + slv."agentOutputDemoUrl" AS agent_output_demo, + COALESCE(slv."imageUrls", ARRAY[]::text[]) AS agent_image, + slv."isFeatured" AS featured, + p.username AS creator_username, -- Allow NULL for malformed sub-agents + p."avatarUrl" AS creator_avatar, -- Allow NULL for malformed sub-agents + slv."subHeading" AS sub_heading, + slv.description, + slv.categories, + 
slv.search, + COALESCE(ar.run_count, 0::bigint) AS runs, + COALESCE(rs.avg_rating, 0.0)::double precision AS rating, + COALESCE(av.versions, ARRAY[slv.version::text]) AS versions, + COALESCE(agv.graph_versions, ARRAY[slv."agentGraphVersion"::text]) AS "agentGraphVersions", + slv."agentGraphId", + slv."isAvailable" AS is_available, + COALESCE(sl."useForOnboarding", false) AS "useForOnboarding" +FROM "StoreListing" sl +JOIN latest_versions lv + ON sl.id = lv."storeListingId" +JOIN "StoreListingVersion" slv + ON slv."storeListingId" = lv."storeListingId" + AND slv.version = lv.max_version + AND slv."submissionStatus" = 'APPROVED' +JOIN "AgentGraph" a + ON slv."agentGraphId" = a.id + AND slv."agentGraphVersion" = a.version +LEFT JOIN "Profile" p + ON sl."owningUserId" = p."userId" +LEFT JOIN "mv_review_stats" rs + ON sl.id = rs."storeListingId" +LEFT JOIN "mv_agent_run_counts" ar + ON a.id = ar."agentGraphId" +LEFT JOIN agent_versions av + ON sl.id = av."storeListingId" +LEFT JOIN agent_graph_versions agv + ON sl.id = agv."storeListingId" +WHERE sl."isDeleted" = false + AND sl."hasApprovedVersion" = true; + +COMMIT; \ No newline at end of file diff --git a/autogpt_platform/backend/migrations/20251218231330_add_oauth_app_logo/migration.sql b/autogpt_platform/backend/migrations/20251218231330_add_oauth_app_logo/migration.sql new file mode 100644 index 0000000000..c9c8c76df1 --- /dev/null +++ b/autogpt_platform/backend/migrations/20251218231330_add_oauth_app_logo/migration.sql @@ -0,0 +1,5 @@ +-- AlterEnum +ALTER TYPE "APIKeyPermission" ADD VALUE 'IDENTITY'; + +-- AlterTable +ALTER TABLE "OAuthApplication" ADD COLUMN "logoUrl" TEXT; diff --git a/autogpt_platform/backend/poetry.lock b/autogpt_platform/backend/poetry.lock index a7d9497944..d7aeec9b6d 100644 --- a/autogpt_platform/backend/poetry.lock +++ b/autogpt_platform/backend/poetry.lock @@ -272,14 +272,14 @@ trio = ["trio (>=0.26.1)"] [[package]] name = "apscheduler" -version = "3.11.0" +version = "3.11.1" description = "In-process task scheduler with Cron-like capabilities" optional = false python-versions = ">=3.8" groups = ["main"] files = [ - {file = "APScheduler-3.11.0-py3-none-any.whl", hash = "sha256:fc134ca32e50f5eadcc4938e3a4545ab19131435e851abb40b34d63d5141c6da"}, - {file = "apscheduler-3.11.0.tar.gz", hash = "sha256:4c622d250b0955a65d5d0eb91c33e6d43fd879834bf541e0a18661ae60460133"}, + {file = "apscheduler-3.11.1-py3-none-any.whl", hash = "sha256:6162cb5683cb09923654fa9bdd3130c4be4bfda6ad8990971c9597ecd52965d2"}, + {file = "apscheduler-3.11.1.tar.gz", hash = "sha256:0db77af6400c84d1747fe98a04b8b58f0080c77d11d338c4f507a9752880f221"}, ] [package.dependencies] @@ -1240,14 +1240,14 @@ tests = ["coverage", "coveralls", "dill", "mock", "nose"] [[package]] name = "faker" -version = "37.8.0" +version = "38.2.0" description = "Faker is a Python package that generates fake data for you." 
optional = false -python-versions = ">=3.9" +python-versions = ">=3.10" groups = ["dev"] files = [ - {file = "faker-37.8.0-py3-none-any.whl", hash = "sha256:b08233118824423b5fc239f7dd51f145e7018082b4164f8da6a9994e1f1ae793"}, - {file = "faker-37.8.0.tar.gz", hash = "sha256:090bb5abbec2b30949a95ce1ba6b20d1d0ed222883d63483a0d4be4a970d6fb8"}, + {file = "faker-38.2.0-py3-none-any.whl", hash = "sha256:35fe4a0a79dee0dc4103a6083ee9224941e7d3594811a50e3969e547b0d2ee65"}, + {file = "faker-38.2.0.tar.gz", hash = "sha256:20672803db9c7cb97f9b56c18c54b915b6f1d8991f63d1d673642dc43f5ce7ab"}, ] [package.dependencies] @@ -4165,14 +4165,14 @@ test = ["betamax (>=0.8,<0.9)", "pytest (>=2.7.3)", "urllib3 (==1.26.*)"] [[package]] name = "pre-commit" -version = "4.3.0" +version = "4.4.0" description = "A framework for managing and maintaining multi-language pre-commit hooks." optional = false -python-versions = ">=3.9" +python-versions = ">=3.10" groups = ["dev"] files = [ - {file = "pre_commit-4.3.0-py2.py3-none-any.whl", hash = "sha256:2b0747ad7e6e967169136edffee14c16e148a778a54e4f967921aa1ebf2308d8"}, - {file = "pre_commit-4.3.0.tar.gz", hash = "sha256:499fe450cc9d42e9d58e606262795ecb64dd05438943c62b66f6a8673da30b16"}, + {file = "pre_commit-4.4.0-py2.py3-none-any.whl", hash = "sha256:b35ea52957cbf83dcc5d8ee636cbead8624e3a15fbfa61a370e42158ac8a5813"}, + {file = "pre_commit-4.4.0.tar.gz", hash = "sha256:f0233ebab440e9f17cabbb558706eb173d19ace965c68cdce2c081042b4fab15"}, ] [package.dependencies] @@ -4913,14 +4913,14 @@ files = [ [[package]] name = "pyright" -version = "1.1.406" +version = "1.1.407" description = "Command line wrapper for pyright" optional = false python-versions = ">=3.7" groups = ["dev"] files = [ - {file = "pyright-1.1.406-py3-none-any.whl", hash = "sha256:1d81fb43c2407bf566e97e57abb01c811973fdb21b2df8df59f870f688bdca71"}, - {file = "pyright-1.1.406.tar.gz", hash = "sha256:c4872bc58c9643dac09e8a2e74d472c62036910b3bd37a32813989ef7576ea2c"}, + {file = "pyright-1.1.407-py3-none-any.whl", hash = "sha256:6dd419f54fcc13f03b52285796d65e639786373f433e243f8b94cf93a7444d21"}, + {file = "pyright-1.1.407.tar.gz", hash = "sha256:099674dba5c10489832d4a4b2d302636152a9a42d317986c38474c76fe562262"}, ] [package.dependencies] @@ -5765,31 +5765,31 @@ pyasn1 = ">=0.1.3" [[package]] name = "ruff" -version = "0.13.3" +version = "0.14.5" description = "An extremely fast Python linter and code formatter, written in Rust." 
optional = false python-versions = ">=3.7" groups = ["dev"] files = [ - {file = "ruff-0.13.3-py3-none-linux_armv6l.whl", hash = "sha256:311860a4c5e19189c89d035638f500c1e191d283d0cc2f1600c8c80d6dcd430c"}, - {file = "ruff-0.13.3-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:2bdad6512fb666b40fcadb65e33add2b040fc18a24997d2e47fee7d66f7fcae2"}, - {file = "ruff-0.13.3-py3-none-macosx_11_0_arm64.whl", hash = "sha256:fc6fa4637284708d6ed4e5e970d52fc3b76a557d7b4e85a53013d9d201d93286"}, - {file = "ruff-0.13.3-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1c9e6469864f94a98f412f20ea143d547e4c652f45e44f369d7b74ee78185838"}, - {file = "ruff-0.13.3-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:5bf62b705f319476c78891e0e97e965b21db468b3c999086de8ffb0d40fd2822"}, - {file = "ruff-0.13.3-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:78cc1abed87ce40cb07ee0667ce99dbc766c9f519eabfd948ed87295d8737c60"}, - {file = "ruff-0.13.3-py3-none-manylinux_2_17_ppc64.manylinux2014_ppc64.whl", hash = "sha256:4fb75e7c402d504f7a9a259e0442b96403fa4a7310ffe3588d11d7e170d2b1e3"}, - {file = "ruff-0.13.3-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:17b951f9d9afb39330b2bdd2dd144ce1c1335881c277837ac1b50bfd99985ed3"}, - {file = "ruff-0.13.3-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:6052f8088728898e0a449f0dde8fafc7ed47e4d878168b211977e3e7e854f662"}, - {file = "ruff-0.13.3-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:dc742c50f4ba72ce2a3be362bd359aef7d0d302bf7637a6f942eaa763bd292af"}, - {file = "ruff-0.13.3-py3-none-manylinux_2_31_riscv64.whl", hash = "sha256:8e5640349493b378431637019366bbd73c927e515c9c1babfea3e932f5e68e1d"}, - {file = "ruff-0.13.3-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:6b139f638a80eae7073c691a5dd8d581e0ba319540be97c343d60fb12949c8d0"}, - {file = "ruff-0.13.3-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:6b547def0a40054825de7cfa341039ebdfa51f3d4bfa6a0772940ed351d2746c"}, - {file = "ruff-0.13.3-py3-none-musllinux_1_2_i686.whl", hash = "sha256:9cc48a3564423915c93573f1981d57d101e617839bef38504f85f3677b3a0a3e"}, - {file = "ruff-0.13.3-py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:1a993b17ec03719c502881cb2d5f91771e8742f2ca6de740034433a97c561989"}, - {file = "ruff-0.13.3-py3-none-win32.whl", hash = "sha256:f14e0d1fe6460f07814d03c6e32e815bff411505178a1f539a38f6097d3e8ee3"}, - {file = "ruff-0.13.3-py3-none-win_amd64.whl", hash = "sha256:621e2e5812b691d4f244638d693e640f188bacbb9bc793ddd46837cea0503dd2"}, - {file = "ruff-0.13.3-py3-none-win_arm64.whl", hash = "sha256:9e9e9d699841eaf4c2c798fa783df2fabc680b72059a02ca0ed81c460bc58330"}, - {file = "ruff-0.13.3.tar.gz", hash = "sha256:5b0ba0db740eefdfbcce4299f49e9eaefc643d4d007749d77d047c2bab19908e"}, + {file = "ruff-0.14.5-py3-none-linux_armv6l.whl", hash = "sha256:f3b8248123b586de44a8018bcc9fefe31d23dda57a34e6f0e1e53bd51fd63594"}, + {file = "ruff-0.14.5-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:f7a75236570318c7a30edd7f5491945f0169de738d945ca8784500b517163a72"}, + {file = "ruff-0.14.5-py3-none-macosx_11_0_arm64.whl", hash = "sha256:6d146132d1ee115f8802356a2dc9a634dbf58184c51bff21f313e8cd1c74899a"}, + {file = "ruff-0.14.5-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e2380596653dcd20b057794d55681571a257a42327da8894b93bbd6111aa801f"}, + {file = "ruff-0.14.5-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = 
"sha256:2d1fa985a42b1f075a098fa1ab9d472b712bdb17ad87a8ec86e45e7fa6273e68"}, + {file = "ruff-0.14.5-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:88f0770d42b7fa02bbefddde15d235ca3aa24e2f0137388cc15b2dcbb1f7c7a7"}, + {file = "ruff-0.14.5-py3-none-manylinux_2_17_ppc64.manylinux2014_ppc64.whl", hash = "sha256:3676cb02b9061fee7294661071c4709fa21419ea9176087cb77e64410926eb78"}, + {file = "ruff-0.14.5-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:b595bedf6bc9cab647c4a173a61acf4f1ac5f2b545203ba82f30fcb10b0318fb"}, + {file = "ruff-0.14.5-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:f55382725ad0bdb2e8ee2babcbbfb16f124f5a59496a2f6a46f1d9d99d93e6e2"}, + {file = "ruff-0.14.5-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7497d19dce23976bdaca24345ae131a1d38dcfe1b0850ad8e9e6e4fa321a6e19"}, + {file = "ruff-0.14.5-py3-none-manylinux_2_31_riscv64.whl", hash = "sha256:410e781f1122d6be4f446981dd479470af86537fb0b8857f27a6e872f65a38e4"}, + {file = "ruff-0.14.5-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:c01be527ef4c91a6d55e53b337bfe2c0f82af024cc1a33c44792d6844e2331e1"}, + {file = "ruff-0.14.5-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:f66e9bb762e68d66e48550b59c74314168ebb46199886c5c5aa0b0fbcc81b151"}, + {file = "ruff-0.14.5-py3-none-musllinux_1_2_i686.whl", hash = "sha256:d93be8f1fa01022337f1f8f3bcaa7ffee2d0b03f00922c45c2207954f351f465"}, + {file = "ruff-0.14.5-py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:c135d4b681f7401fe0e7312017e41aba9b3160861105726b76cfa14bc25aa367"}, + {file = "ruff-0.14.5-py3-none-win32.whl", hash = "sha256:c83642e6fccfb6dea8b785eb9f456800dcd6a63f362238af5fc0c83d027dd08b"}, + {file = "ruff-0.14.5-py3-none-win_amd64.whl", hash = "sha256:9d55d7af7166f143c94eae1db3312f9ea8f95a4defef1979ed516dbb38c27621"}, + {file = "ruff-0.14.5-py3-none-win_arm64.whl", hash = "sha256:4b700459d4649e2594b31f20a9de33bc7c19976d4746d8d0798ad959621d64a4"}, + {file = "ruff-0.14.5.tar.gz", hash = "sha256:8d3b48d7d8aad423d3137af7ab6c8b1e38e4de104800f0d596990f6ada1a9fc1"}, ] [[package]] @@ -5823,14 +5823,14 @@ files = [ [[package]] name = "sentry-sdk" -version = "2.33.2" +version = "2.44.0" description = "Python client for Sentry (https://sentry.io)" optional = false python-versions = ">=3.6" groups = ["main"] files = [ - {file = "sentry_sdk-2.33.2-py2.py3-none-any.whl", hash = "sha256:8d57a3b4861b243aa9d558fda75509ad487db14f488cbdb6c78c614979d77632"}, - {file = "sentry_sdk-2.33.2.tar.gz", hash = "sha256:e85002234b7b8efac9b74c2d91dbd4f8f3970dc28da8798e39530e65cb740f94"}, + {file = "sentry_sdk-2.44.0-py2.py3-none-any.whl", hash = "sha256:9e36a0372b881e8f92fdbff4564764ce6cec4b7f25424d0a3a8d609c9e4651a7"}, + {file = "sentry_sdk-2.44.0.tar.gz", hash = "sha256:5b1fe54dfafa332e900b07dd8f4dfe35753b64e78e7d9b1655a28fd3065e2493"}, ] [package.dependencies] @@ -5858,20 +5858,25 @@ django = ["django (>=1.8)"] falcon = ["falcon (>=1.4)"] fastapi = ["fastapi (>=0.79.0)"] flask = ["blinker (>=1.1)", "flask (>=0.11)", "markupsafe"] +google-genai = ["google-genai (>=1.29.0)"] grpcio = ["grpcio (>=1.21.1)", "protobuf (>=3.8.0)"] http2 = ["httpcore[http2] (==1.*)"] httpx = ["httpx (>=0.16.0)"] huey = ["huey (>=2)"] huggingface-hub = ["huggingface_hub (>=0.22)"] langchain = ["langchain (>=0.0.210)"] +langgraph = ["langgraph (>=0.6.6)"] launchdarkly = ["launchdarkly-server-sdk (>=9.8.0)"] +litellm = ["litellm (>=1.77.5)"] litestar = ["litestar (>=2.0.0)"] loguru = ["loguru (>=0.5)"] +mcp = ["mcp (>=1.15.0)"] openai 
= ["openai (>=1.0.0)", "tiktoken (>=0.3.0)"] openfeature = ["openfeature-sdk (>=0.7.1)"] opentelemetry = ["opentelemetry-distro (>=0.35b0)"] opentelemetry-experimental = ["opentelemetry-distro"] pure-eval = ["asttokens", "executing", "pure_eval"] +pydantic-ai = ["pydantic-ai (>=1.0.0)"] pymongo = ["pymongo (>=3.1)"] pyspark = ["pyspark (>=2.4.4)"] quart = ["blinker (>=1.1)", "quart (>=0.16.1)"] @@ -7274,4 +7279,4 @@ cffi = ["cffi (>=1.11)"] [metadata] lock-version = "2.1" python-versions = ">=3.10,<3.14" -content-hash = "e299540948bc5fc1c29aa18b45c79fd0e7a69c3ceb066046fa938110c594bcfa" +content-hash = "13b191b2a1989d3321ff713c66ff6f5f4f3b82d15df4d407e0e5dbf87d7522c4" diff --git a/autogpt_platform/backend/pyproject.toml b/autogpt_platform/backend/pyproject.toml index 562707d4d4..3c175e2fcd 100644 --- a/autogpt_platform/backend/pyproject.toml +++ b/autogpt_platform/backend/pyproject.toml @@ -13,7 +13,7 @@ aio-pika = "^9.5.5" aiohttp = "^3.10.0" aiodns = "^3.5.0" anthropic = "^0.59.0" -apscheduler = "^3.11.0" +apscheduler = "^3.11.1" autogpt-libs = { path = "../autogpt_libs", develop = true } bleach = { extras = ["css"], version = "^6.2.0" } click = "^8.2.0" @@ -58,7 +58,7 @@ python-multipart = "^0.0.20" redis = "^6.2.0" regex = "^2025.9.18" replicate = "^1.0.6" -sentry-sdk = {extras = ["anthropic", "fastapi", "launchdarkly", "openai", "sqlalchemy"], version = "^2.33.2"} +sentry-sdk = {extras = ["anthropic", "fastapi", "launchdarkly", "openai", "sqlalchemy"], version = "^2.44.0"} sqlalchemy = "^2.0.40" strenum = "^0.4.9" stripe = "^11.5.0" @@ -86,16 +86,16 @@ stagehand = "^0.5.1" [tool.poetry.group.dev.dependencies] aiohappyeyeballs = "^2.6.1" black = "^24.10.0" -faker = "^37.8.0" +faker = "^38.2.0" httpx = "^0.28.1" isort = "^5.13.2" poethepoet = "^0.37.0" -pre-commit = "^4.3.0" -pyright = "^1.1.406" +pre-commit = "^4.4.0" +pyright = "^1.1.407" pytest-mock = "^3.15.1" pytest-watcher = "^0.4.2" requests = "^2.32.5" -ruff = "^0.13.3" +ruff = "^0.14.5" # NOTE: please insert new dependencies in their alphabetical location [build-system] @@ -114,6 +114,9 @@ cli = "backend.cli:main" format = "linter:format" lint = "linter:lint" test = "run_tests:test" +load-store-agents = "test.load_store_agents:run" +export-api-schema = "backend.cli.generate_openapi_json:main" +oauth-tool = "backend.cli.oauth_tool:cli" [tool.isort] profile = "black" diff --git a/autogpt_platform/backend/schema.prisma b/autogpt_platform/backend/schema.prisma index fba1492f5a..2f6c109c03 100644 --- a/autogpt_platform/backend/schema.prisma +++ b/autogpt_platform/backend/schema.prisma @@ -5,10 +5,11 @@ datasource db { } generator client { - provider = "prisma-client-py" - recursive_type_depth = -1 - interface = "asyncio" - previewFeatures = ["views"] + provider = "prisma-client-py" + recursive_type_depth = -1 + interface = "asyncio" + previewFeatures = ["views", "fullTextSearch"] + partial_type_generator = "backend/data/partial_types.py" } // User model to mirror Auth provider users @@ -52,12 +53,20 @@ model User { Profile Profile[] UserOnboarding UserOnboarding? 
+ BuilderSearchHistory BuilderSearchHistory[] StoreListings StoreListing[] StoreListingReviews StoreListingReview[] StoreVersionsReviewed StoreListingVersion[] APIKeys APIKey[] IntegrationWebhooks IntegrationWebhook[] NotificationBatches UserNotificationBatch[] + PendingHumanReviews PendingHumanReview[] + + // OAuth Provider relations + OAuthApplications OAuthApplication[] + OAuthAuthorizationCodes OAuthAuthorizationCode[] + OAuthAccessTokens OAuthAccessToken[] + OAuthRefreshTokens OAuthRefreshToken[] } enum OnboardingStep { @@ -112,6 +121,19 @@ model UserOnboarding { User User @relation(fields: [userId], references: [id], onDelete: Cascade) } +model BuilderSearchHistory { + id String @id @default(uuid()) + createdAt DateTime @default(now()) + updatedAt DateTime @default(now()) @updatedAt + + searchQuery String + filter String[] @default([]) + byCreator String[] @default([]) + + userId String + User User @relation(fields: [userId], references: [id], onDelete: Cascade) +} + // This model describes the Agent Graph/Flow (Multi Agent System). model AgentGraph { id String @default(uuid()) @@ -264,6 +286,8 @@ model LibraryAgent { isArchived Boolean @default(false) isDeleted Boolean @default(false) + settings Json @default("{}") + @@unique([userId, agentGraphId, agentGraphVersion]) @@index([agentGraphId, agentGraphVersion]) @@index([creatorId]) @@ -350,6 +374,7 @@ enum AgentExecutionStatus { COMPLETED TERMINATED FAILED + REVIEW } // This model describes the execution of an AgentGraph. @@ -382,16 +407,24 @@ model AgentGraphExecution { stats Json? + // Parent-child execution tracking for nested agent graphs + parentGraphExecutionId String? + ParentExecution AgentGraphExecution? @relation("ParentChildExecution", fields: [parentGraphExecutionId], references: [id], onDelete: SetNull) + ChildExecutions AgentGraphExecution[] @relation("ParentChildExecution") + // Sharing fields isShared Boolean @default(false) shareToken String? @unique sharedAt DateTime? + PendingHumanReviews PendingHumanReview[] + @@index([agentGraphId, agentGraphVersion]) @@index([userId, isDeleted, createdAt]) @@index([createdAt]) @@index([agentPresetId]) @@index([shareToken]) + @@index([parentGraphExecutionId]) } // This model describes the execution of an AgentNode. @@ -416,6 +449,8 @@ model AgentNodeExecution { stats Json? + PendingHumanReview PendingHumanReview? + @@index([agentGraphExecutionId, agentNodeId, executionStatus]) @@index([agentNodeId, executionStatus]) @@index([addedTime, queuedTime]) @@ -457,6 +492,39 @@ model AgentNodeExecutionKeyValueData { @@id([userId, key]) } +enum ReviewStatus { + WAITING + APPROVED + REJECTED +} + +// Pending human reviews for Human-in-the-loop blocks +model PendingHumanReview { + nodeExecId String @id + userId String + graphExecId String + graphId String + graphVersion Int + payload Json // The actual payload data to be reviewed + instructions String? // Instructions/message for the reviewer + editable Boolean @default(true) // Whether the reviewer can edit the data + status ReviewStatus @default(WAITING) + reviewMessage String? // Optional message from the reviewer + wasEdited Boolean? // Whether the data was modified during review + processed Boolean @default(false) // Whether the review result has been processed by the execution engine + createdAt DateTime @default(now()) + updatedAt DateTime? @updatedAt + reviewedAt DateTime? 
+ + User User @relation(fields: [userId], references: [id], onDelete: Cascade) + NodeExecution AgentNodeExecution @relation(fields: [nodeExecId], references: [id], onDelete: Cascade) + GraphExecution AgentGraphExecution @relation(fields: [graphExecId], references: [id], onDelete: Cascade) + + @@unique([nodeExecId]) // One pending review per node execution + @@index([userId, status]) + @@index([graphExecId, status]) +} + // Webhook that is registered with a provider and propagates to one or more nodes model IntegrationWebhook { id String @id @default(uuid()) @@ -653,22 +721,26 @@ view StoreAgent { storeListingVersionId String updated_at DateTime - slug String - agent_name String - agent_video String? - agent_image String[] + slug String + agent_name String + agent_video String? + agent_output_demo String? + agent_image String[] - featured Boolean @default(false) + featured Boolean @default(false) creator_username String? creator_avatar String? sub_heading String description String categories String[] - runs Int - rating Float - versions String[] - is_available Boolean @default(true) - useForOnboarding Boolean @default(false) + search Unsupported("tsvector")? @default(dbgenerated("''::tsvector")) + runs Int + rating Float + versions String[] + agentGraphVersions String[] + agentGraphId String + is_available Boolean @default(true) + useForOnboarding Boolean @default(false) // Materialized views used (refreshed every 15 minutes via pg_cron): // - mv_agent_run_counts - Pre-aggregated agent execution counts by agentGraphId @@ -747,7 +819,7 @@ model StoreListing { slug String // Allow this agent to be used during onboarding - useForOnboarding Boolean @default(false) + useForOnboarding Boolean @default(false) // The currently active version that should be shown to users activeVersionId String? @unique @@ -784,13 +856,14 @@ model StoreListingVersion { AgentGraph AgentGraph @relation(fields: [agentGraphId, agentGraphVersion], references: [id, version]) // Content fields - name String - subHeading String - videoUrl String? - imageUrls String[] - description String - instructions String? - categories String[] + name String + subHeading String + videoUrl String? + agentOutputDemoUrl String? + imageUrls String[] + description String + instructions String? + categories String[] isFeatured Boolean @default(false) @@ -798,6 +871,8 @@ model StoreListingVersion { // Old versions can be made unavailable by the author if desired isAvailable Boolean @default(true) + search Unsupported("tsvector")? @default(dbgenerated("''::tsvector")) + // Version workflow state submissionStatus SubmissionStatus @default(DRAFT) submittedAt DateTime? 
@@ -857,10 +932,16 @@ enum SubmissionStatus { } enum APIKeyPermission { + IDENTITY // Info about the authenticated user EXECUTE_GRAPH // Can execute agent graphs READ_GRAPH // Can get graph versions and details EXECUTE_BLOCK // Can execute individual blocks READ_BLOCK // Can get block information + READ_STORE // Can read store agents and creators + USE_TOOLS // Can use chat tools via external API + MANAGE_INTEGRATIONS // Can initiate OAuth flows and complete them + READ_INTEGRATIONS // Can list credentials and providers + DELETE_INTEGRATIONS // Can delete credentials } model APIKey { @@ -903,3 +984,113 @@ enum APIKeyStatus { REVOKED SUSPENDED } + +//////////////////////////////////////////////////////////// +//////////////////////////////////////////////////////////// +////////////// OAUTH PROVIDER TABLES ////////////////// +//////////////////////////////////////////////////////////// +//////////////////////////////////////////////////////////// + +// OAuth2 applications that can access AutoGPT on behalf of users +model OAuthApplication { + id String @id @default(uuid()) + createdAt DateTime @default(now()) + updatedAt DateTime @updatedAt + + // Application metadata + name String + description String? + logoUrl String? // URL to app logo stored in GCS + clientId String @unique + clientSecret String // Hashed with Scrypt (same as API keys) + clientSecretSalt String // Salt for Scrypt hashing + + // OAuth configuration + redirectUris String[] // Allowed callback URLs + grantTypes String[] @default(["authorization_code", "refresh_token"]) + scopes APIKeyPermission[] // Which permissions the app can request + + // Application management + ownerId String + Owner User @relation(fields: [ownerId], references: [id], onDelete: Cascade) + isActive Boolean @default(true) + + // Relations + AuthorizationCodes OAuthAuthorizationCode[] + AccessTokens OAuthAccessToken[] + RefreshTokens OAuthRefreshToken[] + + @@index([clientId]) + @@index([ownerId]) +} + +// Temporary authorization codes (10 min TTL) +model OAuthAuthorizationCode { + id String @id @default(uuid()) + code String @unique + createdAt DateTime @default(now()) + expiresAt DateTime // Now + 10 minutes + + applicationId String + Application OAuthApplication @relation(fields: [applicationId], references: [id], onDelete: Cascade) + + userId String + User User @relation(fields: [userId], references: [id], onDelete: Cascade) + + scopes APIKeyPermission[] + redirectUri String // Must match one from application + + // PKCE (Proof Key for Code Exchange) support + codeChallenge String? + codeChallengeMethod String? // "S256" or "plain" + + usedAt DateTime? // Set when code is consumed + + @@index([code]) + @@index([applicationId, userId]) + @@index([expiresAt]) // For cleanup +} + +// Access tokens (1 hour TTL) +model OAuthAccessToken { + id String @id @default(uuid()) + token String @unique // SHA256 hash of plaintext token + createdAt DateTime @default(now()) + expiresAt DateTime // Now + 1 hour + + applicationId String + Application OAuthApplication @relation(fields: [applicationId], references: [id], onDelete: Cascade) + + userId String + User User @relation(fields: [userId], references: [id], onDelete: Cascade) + + scopes APIKeyPermission[] + + revokedAt DateTime? 
// Set when token is revoked + + @@index([token]) // For token lookup + @@index([userId, applicationId]) + @@index([expiresAt]) // For cleanup +} + +// Refresh tokens (30 days TTL) +model OAuthRefreshToken { + id String @id @default(uuid()) + token String @unique // SHA256 hash of plaintext token + createdAt DateTime @default(now()) + expiresAt DateTime // Now + 30 days + + applicationId String + Application OAuthApplication @relation(fields: [applicationId], references: [id], onDelete: Cascade) + + userId String + User User @relation(fields: [userId], references: [id], onDelete: Cascade) + + scopes APIKeyPermission[] + + revokedAt DateTime? // Set when token is revoked + + @@index([token]) // For token lookup + @@index([userId, applicationId]) + @@index([expiresAt]) // For cleanup +} diff --git a/autogpt_platform/backend/snapshots/agt_details b/autogpt_platform/backend/snapshots/agt_details index 0718ccab5d..0d69f1c23a 100644 --- a/autogpt_platform/backend/snapshots/agt_details +++ b/autogpt_platform/backend/snapshots/agt_details @@ -3,6 +3,7 @@ "slug": "test-agent", "agent_name": "Test Agent", "agent_video": "video.mp4", + "agent_output_demo": "demo.mp4", "agent_image": [ "image1.jpg", "image2.jpg" @@ -22,8 +23,14 @@ "1.0.0", "1.1.0" ], + "agentGraphVersions": [ + "1", + "2" + ], + "agentGraphId": "test-graph-id", "last_updated": "2023-01-01T00:00:00", "recommended_schedule_cron": null, "active_version_id": null, - "has_approved_version": false + "has_approved_version": false, + "changelog": null } \ No newline at end of file diff --git a/autogpt_platform/backend/snapshots/grph_single b/autogpt_platform/backend/snapshots/grph_single index d9207eb205..7ba26f6171 100644 --- a/autogpt_platform/backend/snapshots/grph_single +++ b/autogpt_platform/backend/snapshots/grph_single @@ -9,6 +9,7 @@ "forked_from_id": null, "forked_from_version": null, "has_external_trigger": false, + "has_human_in_the_loop": false, "id": "graph-123", "input_schema": { "properties": {}, diff --git a/autogpt_platform/backend/snapshots/grphs_all b/autogpt_platform/backend/snapshots/grphs_all index 42f4174d7b..d54df2bc18 100644 --- a/autogpt_platform/backend/snapshots/grphs_all +++ b/autogpt_platform/backend/snapshots/grphs_all @@ -9,6 +9,7 @@ "forked_from_id": null, "forked_from_version": null, "has_external_trigger": false, + "has_human_in_the_loop": false, "id": "graph-123", "input_schema": { "properties": {}, diff --git a/autogpt_platform/backend/snapshots/lib_agts_search b/autogpt_platform/backend/snapshots/lib_agts_search index 4da74a6ab4..d1feb7d16d 100644 --- a/autogpt_platform/backend/snapshots/lib_agts_search +++ b/autogpt_platform/backend/snapshots/lib_agts_search @@ -8,6 +8,7 @@ "creator_name": "Test Creator", "creator_image_url": "", "status": "COMPLETED", + "created_at": "2023-01-01T00:00:00", "updated_at": "2023-01-01T00:00:00", "name": "Test Agent 1", "description": "Test Description 1", @@ -30,7 +31,11 @@ "can_access_graph": true, "is_latest_version": true, "is_favorite": false, - "recommended_schedule_cron": null + "recommended_schedule_cron": null, + "settings": { + "human_in_the_loop_safe_mode": null + }, + "marketplace_listing": null }, { "id": "test-agent-2", @@ -40,6 +45,7 @@ "creator_name": "Test Creator", "creator_image_url": "", "status": "COMPLETED", + "created_at": "2023-01-01T00:00:00", "updated_at": "2023-01-01T00:00:00", "name": "Test Agent 2", "description": "Test Description 2", @@ -62,7 +68,11 @@ "can_access_graph": false, "is_latest_version": true, "is_favorite": false, - 
"recommended_schedule_cron": null + "recommended_schedule_cron": null, + "settings": { + "human_in_the_loop_safe_mode": null + }, + "marketplace_listing": null } ], "pagination": { diff --git a/autogpt_platform/backend/snapshots/sub_success b/autogpt_platform/backend/snapshots/sub_success index a3816c4384..13e2ec570d 100644 --- a/autogpt_platform/backend/snapshots/sub_success +++ b/autogpt_platform/backend/snapshots/sub_success @@ -23,6 +23,7 @@ "reviewed_at": null, "changes_summary": null, "video_url": "test.mp4", + "agent_output_demo_url": null, "categories": [ "test-category" ] diff --git a/autogpt_platform/backend/test/blocks/test_youtube.py b/autogpt_platform/backend/test/blocks/test_youtube.py new file mode 100644 index 0000000000..1af7c31b9b --- /dev/null +++ b/autogpt_platform/backend/test/blocks/test_youtube.py @@ -0,0 +1,179 @@ +from unittest.mock import Mock, patch + +import pytest +from pydantic import SecretStr +from youtube_transcript_api._errors import NoTranscriptFound +from youtube_transcript_api._transcripts import FetchedTranscript, Transcript +from youtube_transcript_api.proxies import WebshareProxyConfig + +from backend.blocks.youtube import TEST_CREDENTIALS, TranscribeYoutubeVideoBlock +from backend.data.model import UserPasswordCredentials +from backend.integrations.providers import ProviderName + + +class TestTranscribeYoutubeVideoBlock: + """Test cases for TranscribeYoutubeVideoBlock language fallback functionality.""" + + def setup_method(self): + """Set up test fixtures.""" + self.youtube_block = TranscribeYoutubeVideoBlock() + self.credentials = TEST_CREDENTIALS + + def test_extract_video_id_standard_url(self): + """Test extracting video ID from standard YouTube URL.""" + url = "https://www.youtube.com/watch?v=dQw4w9WgXcQ" + video_id = self.youtube_block.extract_video_id(url) + assert video_id == "dQw4w9WgXcQ" + + def test_extract_video_id_short_url(self): + """Test extracting video ID from shortened youtu.be URL.""" + url = "https://youtu.be/dQw4w9WgXcQ" + video_id = self.youtube_block.extract_video_id(url) + assert video_id == "dQw4w9WgXcQ" + + def test_extract_video_id_embed_url(self): + """Test extracting video ID from embed URL.""" + url = "https://www.youtube.com/embed/dQw4w9WgXcQ" + video_id = self.youtube_block.extract_video_id(url) + assert video_id == "dQw4w9WgXcQ" + + @patch("backend.blocks.youtube.YouTubeTranscriptApi") + def test_get_transcript_english_available(self, mock_api_class): + """Test getting transcript when English is available.""" + # Setup mock + mock_api = Mock() + mock_api_class.return_value = mock_api + mock_transcript = Mock(spec=FetchedTranscript) + mock_api.fetch.return_value = mock_transcript + + # Execute + result = self.youtube_block.get_transcript("test_video_id", self.credentials) + + # Assert + assert result == mock_transcript + mock_api_class.assert_called_once() + proxy_config = mock_api_class.call_args[1]["proxy_config"] + assert isinstance(proxy_config, WebshareProxyConfig) + mock_api.fetch.assert_called_once_with(video_id="test_video_id") + mock_api.list.assert_not_called() + + @patch("backend.blocks.youtube.YouTubeTranscriptApi") + def test_get_transcript_with_custom_credentials(self, mock_api_class): + """Test getting transcript with custom proxy credentials.""" + # Setup mock + mock_api = Mock() + mock_api_class.return_value = mock_api + mock_transcript = Mock(spec=FetchedTranscript) + mock_api.fetch.return_value = mock_transcript + + credentials = UserPasswordCredentials( + provider=ProviderName.WEBSHARE_PROXY, + 
username=SecretStr("custom_user"), + password=SecretStr("custom_pass"), + ) + + # Execute + result = self.youtube_block.get_transcript("test_video_id", credentials) + + # Assert + assert result == mock_transcript + mock_api_class.assert_called_once() + proxy_config = mock_api_class.call_args[1]["proxy_config"] + assert isinstance(proxy_config, WebshareProxyConfig) + assert proxy_config.proxy_username == "custom_user" + assert proxy_config.proxy_password == "custom_pass" + mock_api.fetch.assert_called_once_with(video_id="test_video_id") + mock_api.list.assert_not_called() + + @patch("backend.blocks.youtube.YouTubeTranscriptApi") + def test_get_transcript_fallback_to_first_available(self, mock_api_class): + """Test fallback to first available language when English is not available.""" + # Setup mock + mock_api = Mock() + mock_api_class.return_value = mock_api + + # Create mock transcript list with Hungarian transcript + mock_transcript_list = Mock() + mock_transcript_hu = Mock(spec=Transcript) + mock_fetched_transcript = Mock(spec=FetchedTranscript) + mock_transcript_hu.fetch.return_value = mock_fetched_transcript + + # Set up the transcript list to have manually created transcripts empty + # and generated transcripts with Hungarian + mock_transcript_list._manually_created_transcripts = {} + mock_transcript_list._generated_transcripts = {"hu": mock_transcript_hu} + + # Mock API to raise NoTranscriptFound for English, then return list + mock_api.fetch.side_effect = NoTranscriptFound( + "test_video_id", ("en",), mock_transcript_list + ) + mock_api.list.return_value = mock_transcript_list + + # Execute + result = self.youtube_block.get_transcript("test_video_id", self.credentials) + + # Assert + assert result == mock_fetched_transcript + mock_api_class.assert_called_once() + mock_api.fetch.assert_called_once_with(video_id="test_video_id") + mock_api.list.assert_called_once_with("test_video_id") + mock_transcript_hu.fetch.assert_called_once() + + @patch("backend.blocks.youtube.YouTubeTranscriptApi") + def test_get_transcript_prefers_manually_created(self, mock_api_class): + """Test that manually created transcripts are preferred over generated ones.""" + # Setup mock + mock_api = Mock() + mock_api_class.return_value = mock_api + + # Create mock transcript list with both manual and generated transcripts + mock_transcript_list = Mock() + mock_transcript_manual = Mock(spec=Transcript) + mock_transcript_generated = Mock(spec=Transcript) + mock_fetched_manual = Mock(spec=FetchedTranscript) + mock_transcript_manual.fetch.return_value = mock_fetched_manual + + # Set up the transcript list + mock_transcript_list._manually_created_transcripts = { + "es": mock_transcript_manual + } + mock_transcript_list._generated_transcripts = {"hu": mock_transcript_generated} + + # Mock API to raise NoTranscriptFound for English + mock_api.fetch.side_effect = NoTranscriptFound( + "test_video_id", ("en",), mock_transcript_list + ) + mock_api.list.return_value = mock_transcript_list + + # Execute + result = self.youtube_block.get_transcript("test_video_id", self.credentials) + + # Assert - should use manually created transcript first + assert result == mock_fetched_manual + mock_api_class.assert_called_once() + mock_transcript_manual.fetch.assert_called_once() + mock_transcript_generated.fetch.assert_not_called() + + @patch("backend.blocks.youtube.YouTubeTranscriptApi") + def test_get_transcript_no_transcripts_available(self, mock_api_class): + """Test that exception is re-raised when no transcripts are available at 
all.""" + # Setup mock + mock_api = Mock() + mock_api_class.return_value = mock_api + + # Create mock transcript list with no transcripts + mock_transcript_list = Mock() + mock_transcript_list._manually_created_transcripts = {} + mock_transcript_list._generated_transcripts = {} + + # Mock API to raise NoTranscriptFound + original_exception = NoTranscriptFound( + "test_video_id", ("en",), mock_transcript_list + ) + mock_api.fetch.side_effect = original_exception + mock_api.list.return_value = mock_transcript_list + + # Execute and assert exception is raised + with pytest.raises(NoTranscriptFound): + self.youtube_block.get_transcript("test_video_id", self.credentials) + mock_api_class.assert_called_once() diff --git a/autogpt_platform/backend/test/e2e_test_data.py b/autogpt_platform/backend/test/e2e_test_data.py index fc6119c4cc..d7576cdad3 100644 --- a/autogpt_platform/backend/test/e2e_test_data.py +++ b/autogpt_platform/backend/test/e2e_test_data.py @@ -23,16 +23,18 @@ from typing import Any, Dict, List from faker import Faker -from backend.data.api_key import create_api_key +# Import API functions from the backend +from backend.api.features.library.db import create_library_agent, create_preset +from backend.api.features.library.model import LibraryAgentPresetCreatable +from backend.api.features.store.db import ( + create_store_submission, + review_store_submission, +) +from backend.data.auth.api_key import create_api_key from backend.data.credit import get_user_credit_model from backend.data.db import prisma from backend.data.graph import Graph, Link, Node, create_graph - -# Import API functions from the backend from backend.data.user import get_or_create_user -from backend.server.v2.library.db import create_library_agent, create_preset -from backend.server.v2.library.model import LibraryAgentPresetCreatable -from backend.server.v2.store.db import create_store_submission, review_store_submission from backend.util.clients import get_supabase faker = Faker() @@ -402,7 +404,9 @@ class TestDataCreator: from backend.data.graph import get_graph graph = await get_graph( - graph_data["id"], graph_data.get("version", 1), user["id"] + graph_data["id"], + graph_data.get("version", 1), + user_id=user["id"], ) if graph: # Use the API function to create library agent @@ -462,7 +466,7 @@ class TestDataCreator: api_keys = [] for user in self.users: - from backend.data.api_key import APIKeyPermission + from backend.data.auth.api_key import APIKeyPermission try: # Use the API function to create API key diff --git a/autogpt_platform/backend/test/load_store_agents.py b/autogpt_platform/backend/test/load_store_agents.py new file mode 100644 index 0000000000..b9d8e0478e --- /dev/null +++ b/autogpt_platform/backend/test/load_store_agents.py @@ -0,0 +1,455 @@ +""" +Load Store Agents Script + +This script loads the exported store agents from the agents/ folder into the test database. 
+It creates: +- A user and profile for the 'autogpt' creator +- AgentGraph records from JSON files +- StoreListing and StoreListingVersion records from CSV metadata +- Approves agents that have is_available=true in the CSV + +Usage: + cd backend + poetry run load-store-agents +""" + +import asyncio +import csv +import json +import re +from datetime import datetime +from pathlib import Path + +import prisma.enums +from prisma import Json, Prisma +from prisma.types import ( + AgentBlockCreateInput, + AgentGraphCreateInput, + AgentNodeCreateInput, + AgentNodeLinkCreateInput, + ProfileCreateInput, + StoreListingCreateInput, + StoreListingVersionCreateInput, + UserCreateInput, +) + +# Path to agents folder (relative to backend directory) +AGENTS_DIR = Path(__file__).parent.parent / "agents" +CSV_FILE = AGENTS_DIR / "StoreAgent_rows.csv" + +# User constants for the autogpt creator (test data, not production) +# Fixed uuid4 for idempotency - same user is reused across script runs +AUTOGPT_USER_ID = "79d96c73-e6f5-4656-a83a-185b41ee0d06" +AUTOGPT_EMAIL = "autogpt-test@agpt.co" +AUTOGPT_USERNAME = "autogpt" + + +async def initialize_blocks(db: Prisma) -> set[str]: + """Initialize agent blocks in the database from the registered blocks. + + Returns a set of block IDs that exist in the database. + """ + from backend.data.block import get_blocks + + print(" Initializing agent blocks...") + blocks = get_blocks() + created_count = 0 + block_ids = set() + + for block_cls in blocks.values(): + block = block_cls() + block_ids.add(block.id) + existing_block = await db.agentblock.find_first( + where={"OR": [{"id": block.id}, {"name": block.name}]} + ) + if not existing_block: + await db.agentblock.create( + data=AgentBlockCreateInput( + id=block.id, + name=block.name, + inputSchema=json.dumps(block.input_schema.jsonschema()), + outputSchema=json.dumps(block.output_schema.jsonschema()), + ) + ) + created_count += 1 + elif block.id != existing_block.id or block.name != existing_block.name: + await db.agentblock.update( + where={"id": existing_block.id}, + data={ + "id": block.id, + "name": block.name, + "inputSchema": json.dumps(block.input_schema.jsonschema()), + "outputSchema": json.dumps(block.output_schema.jsonschema()), + }, + ) + + print(f" Initialized {len(blocks)} blocks ({created_count} new)") + return block_ids + + +async def ensure_block_exists( + db: Prisma, block_id: str, known_blocks: set[str] +) -> bool: + """Ensure a block exists in the database, create a placeholder if needed. + + Returns True if the block exists (or was created), False otherwise. 
+ """ + if block_id in known_blocks: + return True + + # Check if it already exists in the database + existing = await db.agentblock.find_unique(where={"id": block_id}) + if existing: + known_blocks.add(block_id) + return True + + # Create a placeholder block + print(f" Creating placeholder block: {block_id}") + try: + await db.agentblock.create( + data=AgentBlockCreateInput( + id=block_id, + name=f"Placeholder_{block_id[:8]}", + inputSchema="{}", + outputSchema="{}", + ) + ) + known_blocks.add(block_id) + return True + except Exception as e: + print(f" Warning: Could not create placeholder block {block_id}: {e}") + return False + + +def parse_image_urls(image_str: str) -> list[str]: + """Parse the image URLs from CSV format like ["url1","url2"].""" + if not image_str or image_str == "[]": + return [] + try: + return json.loads(image_str) + except json.JSONDecodeError: + return [] + + +def parse_categories(categories_str: str) -> list[str]: + """Parse categories from CSV format like ["cat1","cat2"].""" + if not categories_str or categories_str == "[]": + return [] + try: + return json.loads(categories_str) + except json.JSONDecodeError: + return [] + + +def sanitize_slug(slug: str) -> str: + """Ensure slug only contains valid characters.""" + return re.sub(r"[^a-z0-9-]", "", slug.lower()) + + +async def create_user_and_profile(db: Prisma) -> None: + """Create the autogpt user and profile if they don't exist.""" + # Check if user exists + existing_user = await db.user.find_unique(where={"id": AUTOGPT_USER_ID}) + if existing_user: + print(f"User {AUTOGPT_USER_ID} already exists, skipping user creation") + else: + print(f"Creating user {AUTOGPT_USER_ID}") + await db.user.create( + data=UserCreateInput( + id=AUTOGPT_USER_ID, + email=AUTOGPT_EMAIL, + name="AutoGPT", + metadata=Json({}), + integrations="", + ) + ) + + # Check if profile exists + existing_profile = await db.profile.find_first(where={"userId": AUTOGPT_USER_ID}) + if existing_profile: + print( + f"Profile for user {AUTOGPT_USER_ID} already exists, skipping profile creation" + ) + else: + print(f"Creating profile for user {AUTOGPT_USER_ID}") + await db.profile.create( + data=ProfileCreateInput( + userId=AUTOGPT_USER_ID, + name="AutoGPT", + username=AUTOGPT_USERNAME, + description="Official AutoGPT agents and templates", + links=["https://agpt.co"], + avatarUrl="https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/4b5781a6-49e1-433c-9a75-65af1be5c02d.png", + ) + ) + + +async def load_csv_metadata() -> dict[str, dict]: + """Load CSV metadata and return a dict keyed by storeListingVersionId.""" + metadata = {} + with open(CSV_FILE, "r", encoding="utf-8") as f: + reader = csv.DictReader(f) + for row in reader: + version_id = row["storeListingVersionId"] + metadata[version_id] = { + "listing_id": row["listing_id"], + "store_listing_version_id": version_id, + "slug": sanitize_slug(row["slug"]), + "agent_name": row["agent_name"], + "agent_video": row["agent_video"] if row["agent_video"] else None, + "agent_image": parse_image_urls(row["agent_image"]), + "featured": row["featured"].lower() == "true", + "sub_heading": row["sub_heading"], + "description": row["description"], + "categories": parse_categories(row["categories"]), + "use_for_onboarding": row["useForOnboarding"].lower() == "true", + "is_available": row["is_available"].lower() == "true", + } + return metadata + + +async def load_agent_json(json_path: Path) -> dict: + """Load and parse an agent JSON file.""" + with 
open(json_path, "r", encoding="utf-8") as f: + return json.load(f) + + +async def create_agent_graph( + db: Prisma, agent_data: dict, known_blocks: set[str] +) -> tuple[str, int]: + """Create an AgentGraph and its nodes/links from JSON data.""" + graph_id = agent_data["id"] + version = agent_data.get("version", 1) + + # Check if graph already exists + existing_graph = await db.agentgraph.find_unique( + where={"graphVersionId": {"id": graph_id, "version": version}} + ) + if existing_graph: + print(f" Graph {graph_id} v{version} already exists, skipping") + return graph_id, version + + print( + f" Creating graph {graph_id} v{version}: {agent_data.get('name', 'Unnamed')}" + ) + + # Create the main graph + await db.agentgraph.create( + data=AgentGraphCreateInput( + id=graph_id, + version=version, + name=agent_data.get("name"), + description=agent_data.get("description"), + instructions=agent_data.get("instructions"), + recommendedScheduleCron=agent_data.get("recommended_schedule_cron"), + isActive=agent_data.get("is_active", True), + userId=AUTOGPT_USER_ID, + forkedFromId=agent_data.get("forked_from_id"), + forkedFromVersion=agent_data.get("forked_from_version"), + ) + ) + + # Create nodes + nodes = agent_data.get("nodes", []) + for node in nodes: + block_id = node["block_id"] + # Ensure the block exists (create placeholder if needed) + block_exists = await ensure_block_exists(db, block_id, known_blocks) + if not block_exists: + print( + f" Skipping node {node['id']} - block {block_id} could not be created" + ) + continue + + await db.agentnode.create( + data=AgentNodeCreateInput( + id=node["id"], + agentBlockId=block_id, + agentGraphId=graph_id, + agentGraphVersion=version, + constantInput=Json(node.get("input_default", {})), + metadata=Json(node.get("metadata", {})), + ) + ) + + # Create links + links = agent_data.get("links", []) + for link in links: + await db.agentnodelink.create( + data=AgentNodeLinkCreateInput( + id=link["id"], + agentNodeSourceId=link["source_id"], + agentNodeSinkId=link["sink_id"], + sourceName=link["source_name"], + sinkName=link["sink_name"], + isStatic=link.get("is_static", False), + ) + ) + + # Handle sub_graphs recursively + sub_graphs = agent_data.get("sub_graphs", []) + for sub_graph in sub_graphs: + await create_agent_graph(db, sub_graph, known_blocks) + + return graph_id, version + + +async def create_store_listing( + db: Prisma, + graph_id: str, + graph_version: int, + metadata: dict, +) -> None: + """Create StoreListing and StoreListingVersion for an agent.""" + listing_id = metadata["listing_id"] + version_id = metadata["store_listing_version_id"] + + # Check if listing already exists + existing_listing = await db.storelisting.find_unique(where={"id": listing_id}) + if existing_listing: + print(f" Store listing {listing_id} already exists, skipping") + return + + print(f" Creating store listing: {metadata['agent_name']}") + + # Determine if this should be approved + is_approved = metadata["is_available"] + submission_status = ( + prisma.enums.SubmissionStatus.APPROVED + if is_approved + else prisma.enums.SubmissionStatus.PENDING + ) + + # Create the store listing first (without activeVersionId - will update after) + await db.storelisting.create( + data=StoreListingCreateInput( + id=listing_id, + slug=metadata["slug"], + agentGraphId=graph_id, + agentGraphVersion=graph_version, + owningUserId=AUTOGPT_USER_ID, + hasApprovedVersion=is_approved, + useForOnboarding=metadata["use_for_onboarding"], + ) + ) + + # Create the store listing version + await 
db.storelistingversion.create( + data=StoreListingVersionCreateInput( + id=version_id, + version=1, + agentGraphId=graph_id, + agentGraphVersion=graph_version, + name=metadata["agent_name"], + subHeading=metadata["sub_heading"], + videoUrl=metadata["agent_video"], + imageUrls=metadata["agent_image"], + description=metadata["description"], + categories=metadata["categories"], + isFeatured=metadata["featured"], + isAvailable=metadata["is_available"], + submissionStatus=submission_status, + submittedAt=datetime.now() if is_approved else None, + reviewedAt=datetime.now() if is_approved else None, + storeListingId=listing_id, + ) + ) + + # Update the store listing with the active version if approved + if is_approved: + await db.storelisting.update( + where={"id": listing_id}, + data={"ActiveVersion": {"connect": {"id": version_id}}}, + ) + + +async def main(): + """Main function to load all store agents.""" + print("=" * 60) + print("Loading Store Agents into Test Database") + print("=" * 60) + + db = Prisma() + await db.connect() + + try: + # Step 0: Initialize agent blocks + print("\n[Step 0] Initializing agent blocks...") + known_blocks = await initialize_blocks(db) + + # Step 1: Create user and profile + print("\n[Step 1] Creating user and profile...") + await create_user_and_profile(db) + + # Step 2: Load CSV metadata + print("\n[Step 2] Loading CSV metadata...") + csv_metadata = await load_csv_metadata() + print(f" Found {len(csv_metadata)} store listing entries in CSV") + + # Step 3: Find all JSON files and match with CSV + print("\n[Step 3] Processing agent JSON files...") + json_files = list(AGENTS_DIR.glob("agent_*.json")) + print(f" Found {len(json_files)} agent JSON files") + + # Build mapping from version_id to json file + loaded_graphs = {} # graph_id -> (graph_id, version) + failed_agents = [] + + for json_file in json_files: + # Extract the version ID from filename (agent_.json) + version_id = json_file.stem.replace("agent_", "") + + if version_id not in csv_metadata: + print( + f" Warning: {json_file.name} not found in CSV metadata, skipping" + ) + continue + + metadata = csv_metadata[version_id] + agent_name = metadata["agent_name"] + print(f"\nProcessing: {agent_name}") + + # Use a transaction per agent to prevent dangling resources + try: + async with db.tx() as tx: + # Load and create the agent graph + agent_data = await load_agent_json(json_file) + graph_id, graph_version = await create_agent_graph( + tx, agent_data, known_blocks + ) + loaded_graphs[graph_id] = (graph_id, graph_version) + + # Create store listing + await create_store_listing(tx, graph_id, graph_version, metadata) + except Exception as e: + print(f" Error loading agent '{agent_name}': {e}") + failed_agents.append(agent_name) + continue + + # Step 4: Refresh materialized views + print("\n[Step 4] Refreshing materialized views...") + try: + await db.execute_raw("SELECT refresh_store_materialized_views();") + print(" Materialized views refreshed successfully") + except Exception as e: + print(f" Warning: Could not refresh materialized views: {e}") + + print("\n" + "=" * 60) + print(f"Successfully loaded {len(loaded_graphs)} agents") + if failed_agents: + print( + f"Failed to load {len(failed_agents)} agents: {', '.join(failed_agents)}" + ) + print("=" * 60) + + finally: + await db.disconnect() + + +def run(): + """Entry point for poetry script.""" + asyncio.run(main()) + + +if __name__ == "__main__": + run() diff --git a/autogpt_platform/backend/test/sdk/test_sdk_block_creation.py 
b/autogpt_platform/backend/test/sdk/test_sdk_block_creation.py index bb26913e56..1f7a253a5a 100644 --- a/autogpt_platform/backend/test/sdk/test_sdk_block_creation.py +++ b/autogpt_platform/backend/test/sdk/test_sdk_block_creation.py @@ -16,6 +16,8 @@ from backend.sdk import ( BlockCostType, BlockOutput, BlockSchema, + BlockSchemaInput, + BlockSchemaOutput, CredentialsMetaInput, OAuth2Credentials, ProviderBuilder, @@ -36,11 +38,11 @@ class TestBasicBlockCreation: class SimpleBlock(Block): """A simple test block.""" - class Input(BlockSchema): + class Input(BlockSchemaInput): text: str = SchemaField(description="Input text") count: int = SchemaField(description="Repeat count", default=1) - class Output(BlockSchema): + class Output(BlockSchemaOutput): result: str = SchemaField(description="Output result") def __init__(self): @@ -77,13 +79,13 @@ class TestBasicBlockCreation: class APIBlock(Block): """A block that requires API credentials.""" - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: CredentialsMetaInput = test_api.credentials_field( description="API credentials for test service", ) query: str = SchemaField(description="API query") - class Output(BlockSchema): + class Output(BlockSchemaOutput): response: str = SchemaField(description="API response") authenticated: bool = SchemaField(description="Was authenticated") @@ -141,10 +143,10 @@ class TestBasicBlockCreation: class MultiOutputBlock(Block): """Block with multiple outputs.""" - class Input(BlockSchema): + class Input(BlockSchemaInput): text: str = SchemaField(description="Input text") - class Output(BlockSchema): + class Output(BlockSchemaOutput): uppercase: str = SchemaField(description="Uppercase version") lowercase: str = SchemaField(description="Lowercase version") length: int = SchemaField(description="Text length") @@ -189,13 +191,13 @@ class TestBlockWithProvider: class TestServiceBlock(Block): """Block for test service.""" - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: CredentialsMetaInput = test_service.credentials_field( description="Test service credentials", ) action: str = SchemaField(description="Action to perform") - class Output(BlockSchema): + class Output(BlockSchemaOutput): result: str = SchemaField(description="Action result") provider_name: str = SchemaField(description="Provider used") @@ -254,7 +256,7 @@ class TestComplexBlockScenarios: class OptionalFieldBlock(Block): """Block with optional fields.""" - class Input(BlockSchema): + class Input(BlockSchemaInput): required_field: str = SchemaField(description="Required field") optional_field: Optional[str] = SchemaField( description="Optional field", @@ -265,7 +267,7 @@ class TestComplexBlockScenarios: default="default value", ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): has_optional: bool = SchemaField(description="Has optional value") optional_value: Optional[str] = SchemaField( description="Optional value" @@ -321,13 +323,13 @@ class TestComplexBlockScenarios: class ComplexBlock(Block): """Block with complex types.""" - class Input(BlockSchema): + class Input(BlockSchemaInput): items: list[str] = SchemaField(description="List of items") mapping: dict[str, int] = SchemaField( description="String to int mapping" ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): item_count: int = SchemaField(description="Number of items") total_value: int = SchemaField(description="Sum of mapping values") combined: list[str] = SchemaField(description="Combined results") @@ -375,14 +377,14 @@ 
class TestComplexBlockScenarios: class ErrorHandlingBlock(Block): """Block that demonstrates error handling.""" - class Input(BlockSchema): + class Input(BlockSchemaInput): value: int = SchemaField(description="Input value") should_error: bool = SchemaField( description="Whether to trigger an error", default=False, ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): result: int = SchemaField(description="Result") error_message: Optional[str] = SchemaField( description="Error if any", default=None @@ -436,6 +438,134 @@ class TestComplexBlockScenarios: ): pass + @pytest.mark.asyncio + async def test_block_error_field_override(self): + """Test block that overrides the automatic error field from BlockSchemaOutput.""" + + class ErrorFieldOverrideBlock(Block): + """Block that defines its own error field with different type.""" + + class Input(BlockSchemaInput): + value: int = SchemaField(description="Input value") + + class Output(BlockSchemaOutput): + result: int = SchemaField(description="Result") + # Override the error field with different description/default but same type + error: str = SchemaField( + description="Custom error field with specific validation codes", + default="NO_ERROR", + ) + + def __init__(self): + super().__init__( + id="error-field-override-block", + description="Block that overrides the error field", + categories={BlockCategory.DEVELOPER_TOOLS}, + input_schema=ErrorFieldOverrideBlock.Input, + output_schema=ErrorFieldOverrideBlock.Output, + ) + + async def run(self, input_data: Input, **kwargs) -> BlockOutput: + if input_data.value < 0: + yield "error", "VALIDATION_ERROR:VALUE_NEGATIVE" + yield "result", 0 + else: + yield "result", input_data.value * 2 + yield "error", "NO_ERROR" + + # Test alternative approach: Block that doesn't inherit from BlockSchemaOutput + class FlexibleErrorBlock(Block): + """Block that defines its own error structure by not inheriting BlockSchemaOutput.""" + + class Input(BlockSchemaInput): + value: int = SchemaField(description="Input value") + + # Use BlockSchemaInput as base to avoid automatic error field + class Output(BlockSchema): # Not BlockSchemaOutput! 
+ result: int = SchemaField(description="Result") + error: Optional[dict[str, str]] = SchemaField( + description="Structured error information", + default=None, + ) + + def __init__(self): + super().__init__( + id="flexible-error-block", + description="Block with flexible error structure", + categories={BlockCategory.DEVELOPER_TOOLS}, + input_schema=FlexibleErrorBlock.Input, + output_schema=FlexibleErrorBlock.Output, + ) + + async def run(self, input_data: Input, **kwargs) -> BlockOutput: + if input_data.value < 0: + yield "error", { + "type": "ValidationError", + "message": "Value must be non-negative", + } + yield "result", 0 + else: + yield "result", input_data.value * 2 + yield "error", None + + # Test 1: String-based error override (constrained by BlockSchemaOutput) + string_error_block = ErrorFieldOverrideBlock() + outputs = {} + async for name, value in string_error_block.run( + ErrorFieldOverrideBlock.Input(value=5) + ): + outputs[name] = value + + assert outputs["result"] == 10 + assert outputs["error"] == "NO_ERROR" + + # Test string error with failure + outputs = {} + async for name, value in string_error_block.run( + ErrorFieldOverrideBlock.Input(value=-3) + ): + outputs[name] = value + + assert outputs["result"] == 0 + assert outputs["error"] == "VALIDATION_ERROR:VALUE_NEGATIVE" + + # Test 2: Structured error (using BlockSchema base) + flexible_block = FlexibleErrorBlock() + outputs = {} + async for name, value in flexible_block.run(FlexibleErrorBlock.Input(value=5)): + outputs[name] = value + + assert outputs["result"] == 10 + assert outputs["error"] is None + + # Test structured error with failure + outputs = {} + async for name, value in flexible_block.run(FlexibleErrorBlock.Input(value=-3)): + outputs[name] = value + + assert outputs["result"] == 0 + assert outputs["error"] == { + "type": "ValidationError", + "message": "Value must be non-negative", + } + + # Verify schema differences + string_schema = string_error_block.output_schema.jsonschema() + flexible_schema = flexible_block.output_schema.jsonschema() + + # String error field + string_error_field = string_schema["properties"]["error"] + assert string_error_field.get("type") == "string" + assert string_error_field.get("default") == "NO_ERROR" + + # Structured error field + flexible_error_field = flexible_schema["properties"]["error"] + # Should be object or anyOf with object/null for Optional[dict] + assert ( + "anyOf" in flexible_error_field + or flexible_error_field.get("type") == "object" + ) + class TestAuthenticationVariants: """Test complex authentication scenarios including OAuth, API keys, and scopes.""" @@ -458,14 +588,14 @@ class TestAuthenticationVariants: class OAuthScopedBlock(Block): """Block requiring OAuth2 with specific scopes.""" - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: CredentialsMetaInput = oauth_provider.credentials_field( description="OAuth2 credentials with scopes", scopes=["read:user", "write:data"], ) resource: str = SchemaField(description="Resource to access") - class Output(BlockSchema): + class Output(BlockSchemaOutput): data: str = SchemaField(description="Retrieved data") scopes_used: list[str] = SchemaField( description="Scopes that were used" @@ -548,14 +678,14 @@ class TestAuthenticationVariants: class MixedAuthBlock(Block): """Block supporting multiple authentication methods.""" - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: CredentialsMetaInput = mixed_provider.credentials_field( description="API key or OAuth2 
credentials", supported_credential_types=["api_key", "oauth2"], ) operation: str = SchemaField(description="Operation to perform") - class Output(BlockSchema): + class Output(BlockSchemaOutput): result: str = SchemaField(description="Operation result") auth_type: str = SchemaField(description="Authentication type used") auth_details: dict[str, Any] = SchemaField(description="Auth details") @@ -674,7 +804,7 @@ class TestAuthenticationVariants: class MultiCredentialBlock(Block): """Block requiring credentials from multiple services.""" - class Input(BlockSchema): + class Input(BlockSchemaInput): primary_credentials: CredentialsMetaInput = ( primary_provider.credentials_field( description="Primary service API key" @@ -690,7 +820,7 @@ class TestAuthenticationVariants: default=True, ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): primary_data: str = SchemaField(description="Data from primary service") secondary_data: str = SchemaField( description="Data from secondary service" @@ -793,7 +923,7 @@ class TestAuthenticationVariants: class ScopeValidationBlock(Block): """Block that validates OAuth scopes.""" - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: CredentialsMetaInput = scoped_provider.credentials_field( description="OAuth credentials with specific scopes", scopes=["user:read", "user:write"], # Required scopes @@ -803,7 +933,7 @@ class TestAuthenticationVariants: default=False, ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): allowed_operations: list[str] = SchemaField( description="Operations allowed with current scopes" ) diff --git a/autogpt_platform/backend/test/sdk/test_sdk_webhooks.py b/autogpt_platform/backend/test/sdk/test_sdk_webhooks.py index 65101c8fe6..a8c1f8b7e1 100644 --- a/autogpt_platform/backend/test/sdk/test_sdk_webhooks.py +++ b/autogpt_platform/backend/test/sdk/test_sdk_webhooks.py @@ -17,7 +17,8 @@ from backend.sdk import ( Block, BlockCategory, BlockOutput, - BlockSchema, + BlockSchemaInput, + BlockSchemaOutput, BlockWebhookConfig, Credentials, CredentialsField, @@ -84,7 +85,7 @@ class TestWebhooksManager(BaseWebhooksManager): class TestWebhookBlock(Block): """Test webhook block implementation.""" - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: CredentialsMetaInput = CredentialsField( provider="test_webhooks", supported_credential_types={"api_key"}, @@ -105,7 +106,7 @@ class TestWebhookBlock(Block): default={}, ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): webhook_id: str = SchemaField(description="Registered webhook ID") is_active: bool = SchemaField(description="Webhook is active") event_count: int = SchemaField(description="Number of events configured") @@ -202,7 +203,7 @@ class TestWebhookBlockCreation: class FilteredWebhookBlock(Block): """Webhook block with filtering.""" - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: CredentialsMetaInput = CredentialsField( provider="test_webhooks", supported_credential_types={"api_key"}, @@ -217,7 +218,7 @@ class TestWebhookBlockCreation: default={}, ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): webhook_active: bool = SchemaField(description="Webhook active") filter_summary: str = SchemaField(description="Active filters") @@ -352,7 +353,7 @@ class TestWebhookManagerIntegration: class IntegratedWebhookBlock(Block): """Block using integrated webhook manager.""" - class Input(BlockSchema): + class Input(BlockSchemaInput): credentials: CredentialsMetaInput = CredentialsField( 
provider="integrated_webhooks", supported_credential_types={"api_key"}, @@ -363,7 +364,7 @@ class TestWebhookManagerIntegration: default={}, ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): status: str = SchemaField(description="Webhook status") manager_type: str = SchemaField(description="Manager type used") @@ -429,7 +430,7 @@ class TestWebhookEventHandling: class WebhookEventBlock(Block): """Block that processes webhook events.""" - class Input(BlockSchema): + class Input(BlockSchemaInput): event_type: str = SchemaField(description="Type of webhook event") payload: dict = SchemaField(description="Webhook payload") verify_signature: bool = SchemaField( @@ -437,7 +438,7 @@ class TestWebhookEventHandling: default=True, ) - class Output(BlockSchema): + class Output(BlockSchemaOutput): processed: bool = SchemaField(description="Event was processed") event_summary: str = SchemaField(description="Summary of event") action_required: bool = SchemaField(description="Action required") diff --git a/autogpt_platform/backend/test/test_data_creator.py b/autogpt_platform/backend/test/test_data_creator.py index 6bb3b9c4e2..befb1dcacd 100644 --- a/autogpt_platform/backend/test/test_data_creator.py +++ b/autogpt_platform/backend/test/test_data_creator.py @@ -21,6 +21,7 @@ import random from datetime import datetime import prisma.enums +import pytest from autogpt_libs.api_key.keysmith import APIKeySmith from faker import Faker from prisma import Json, Prisma @@ -498,9 +499,6 @@ async def main(): if store_listing_versions and random.random() < 0.5 else None ), - "agentInput": ( - Json({"test": "data"}) if random.random() < 0.3 else None - ), "onboardingAgentExecutionId": ( random.choice(agent_graph_executions).id if agent_graph_executions and random.random() < 0.3 @@ -570,5 +568,11 @@ async def main(): print("Test data creation completed successfully!") +@pytest.mark.asyncio +@pytest.mark.integration +async def test_main_function_runs_without_errors(): + await main() + + if __name__ == "__main__": asyncio.run(main()) diff --git a/autogpt_platform/backend/test_requeue_integration.py b/autogpt_platform/backend/test_requeue_integration.py new file mode 100644 index 0000000000..da1e00e357 --- /dev/null +++ b/autogpt_platform/backend/test_requeue_integration.py @@ -0,0 +1,349 @@ +#!/usr/bin/env python3 +""" +Integration test for the requeue fix implementation. +Tests actual RabbitMQ behavior to verify that republishing sends messages to back of queue. 
+""" + +import json +import time +from threading import Event +from typing import List + +from backend.data.rabbitmq import SyncRabbitMQ +from backend.executor.utils import create_execution_queue_config + + +class QueueOrderTester: + """Helper class to test message ordering in RabbitMQ using a dedicated test queue.""" + + def __init__(self): + self.received_messages: List[dict] = [] + self.stop_consuming = Event() + self.queue_client = SyncRabbitMQ(create_execution_queue_config()) + self.queue_client.connect() + + # Use a dedicated test queue name to avoid conflicts + self.test_queue_name = "test_requeue_ordering" + self.test_exchange = "test_exchange" + self.test_routing_key = "test.requeue" + + def setup_queue(self): + """Set up a dedicated test queue for testing.""" + channel = self.queue_client.get_channel() + + # Declare test exchange + channel.exchange_declare( + exchange=self.test_exchange, exchange_type="direct", durable=True + ) + + # Declare test queue + channel.queue_declare( + queue=self.test_queue_name, durable=True, auto_delete=False + ) + + # Bind queue to exchange + channel.queue_bind( + exchange=self.test_exchange, + queue=self.test_queue_name, + routing_key=self.test_routing_key, + ) + + # Purge the queue to start fresh + channel.queue_purge(self.test_queue_name) + print(f"✅ Test queue {self.test_queue_name} setup and purged") + + def create_test_message(self, message_id: str, user_id: str = "test-user") -> str: + """Create a test graph execution message.""" + return json.dumps( + { + "graph_exec_id": f"exec-{message_id}", + "graph_id": f"graph-{message_id}", + "user_id": user_id, + "execution_context": {"timezone": "UTC"}, + "nodes_input_masks": {}, + "starting_nodes_input": [], + } + ) + + def publish_message(self, message: str): + """Publish a message to the test queue.""" + channel = self.queue_client.get_channel() + channel.basic_publish( + exchange=self.test_exchange, + routing_key=self.test_routing_key, + body=message, + ) + + def consume_messages(self, max_messages: int = 10, timeout: float = 5.0): + """Consume messages and track their order.""" + + def callback(ch, method, properties, body): + try: + message_data = json.loads(body.decode()) + self.received_messages.append(message_data) + ch.basic_ack(delivery_tag=method.delivery_tag) + + if len(self.received_messages) >= max_messages: + self.stop_consuming.set() + except Exception as e: + print(f"Error processing message: {e}") + ch.basic_nack(delivery_tag=method.delivery_tag, requeue=False) + + # Use synchronous consumption with blocking + channel = self.queue_client.get_channel() + + # Check if there are messages in the queue first + method_frame, header_frame, body = channel.basic_get( + queue=self.test_queue_name, auto_ack=False + ) + if method_frame: + # There are messages, set up consumer + channel.basic_nack( + delivery_tag=method_frame.delivery_tag, requeue=True + ) # Put message back + + # Set up consumer + channel.basic_consume( + queue=self.test_queue_name, + on_message_callback=callback, + ) + + # Consume with timeout + start_time = time.time() + while ( + not self.stop_consuming.is_set() + and (time.time() - start_time) < timeout + and len(self.received_messages) < max_messages + ): + try: + channel.connection.process_data_events(time_limit=0.1) + except Exception as e: + print(f"Error during consumption: {e}") + break + + # Cancel the consumer + try: + channel.cancel() + except Exception: + pass + else: + # No messages in queue - this might be expected for some tests + pass + + return 
self.received_messages + + def cleanup(self): + """Clean up test resources.""" + try: + channel = self.queue_client.get_channel() + channel.queue_delete(queue=self.test_queue_name) + channel.exchange_delete(exchange=self.test_exchange) + print(f"✅ Test queue {self.test_queue_name} cleaned up") + except Exception as e: + print(f"⚠️ Cleanup issue: {e}") + + +def test_queue_ordering_behavior(): + """ + Integration test to verify that our republishing method sends messages to back of queue. + This tests the actual fix for the rate limiting queue blocking issue. + """ + tester = QueueOrderTester() + + try: + tester.setup_queue() + + print("🧪 Testing actual RabbitMQ queue ordering behavior...") + + # Test 1: Normal FIFO behavior + print("1. Testing normal FIFO queue behavior") + + # Publish messages in order: A, B, C + msg_a = tester.create_test_message("A") + msg_b = tester.create_test_message("B") + msg_c = tester.create_test_message("C") + + tester.publish_message(msg_a) + tester.publish_message(msg_b) + tester.publish_message(msg_c) + + # Consume and verify FIFO order: A, B, C + tester.received_messages = [] + tester.stop_consuming.clear() + messages = tester.consume_messages(max_messages=3) + + assert len(messages) == 3, f"Expected 3 messages, got {len(messages)}" + assert ( + messages[0]["graph_exec_id"] == "exec-A" + ), f"First message should be A, got {messages[0]['graph_exec_id']}" + assert ( + messages[1]["graph_exec_id"] == "exec-B" + ), f"Second message should be B, got {messages[1]['graph_exec_id']}" + assert ( + messages[2]["graph_exec_id"] == "exec-C" + ), f"Third message should be C, got {messages[2]['graph_exec_id']}" + + print("✅ FIFO order confirmed: A -> B -> C") + + # Test 2: Rate limiting simulation - the key test! + print("2. Testing rate limiting fix scenario") + + # Simulate the scenario where user1 is rate limited + user1_msg = tester.create_test_message("RATE-LIMITED", "user1") + user2_msg1 = tester.create_test_message("USER2-1", "user2") + user2_msg2 = tester.create_test_message("USER2-2", "user2") + + # Initially publish user1 message (gets consumed, then rate limited on retry) + tester.publish_message(user1_msg) + + # Other users publish their messages + tester.publish_message(user2_msg1) + tester.publish_message(user2_msg2) + + # Now simulate: user1 message gets "requeued" using our new republishing method + # This is what happens in manager.py when requeue_by_republishing=True + tester.publish_message(user1_msg) # Goes to back via our method + + # Expected order: RATE-LIMITED, USER2-1, USER2-2, RATE-LIMITED (republished to back) + # This shows that user2 messages get processed instead of being blocked + tester.received_messages = [] + tester.stop_consuming.clear() + messages = tester.consume_messages(max_messages=4) + + assert len(messages) == 4, f"Expected 4 messages, got {len(messages)}" + + # The key verification: user2 messages are NOT blocked by user1's rate-limited message + user2_messages = [msg for msg in messages if msg["user_id"] == "user2"] + assert len(user2_messages) == 2, "Both user2 messages should be processed" + assert user2_messages[0]["graph_exec_id"] == "exec-USER2-1" + assert user2_messages[1]["graph_exec_id"] == "exec-USER2-2" + + print("✅ Rate limiting fix confirmed: user2 executions NOT blocked by user1") + + # Test 3: Verify our method behaves like going to back of queue + print("3. 
Testing republishing sends messages to back") + + # Start with message X in queue + msg_x = tester.create_test_message("X") + tester.publish_message(msg_x) + + # Add message Y + msg_y = tester.create_test_message("Y") + tester.publish_message(msg_y) + + # Republish X (simulates requeue using our method) + tester.publish_message(msg_x) + + # Expected: X, Y, X (X was republished to back) + tester.received_messages = [] + tester.stop_consuming.clear() + messages = tester.consume_messages(max_messages=3) + + assert len(messages) == 3 + # Y should come before the republished X + y_index = next( + i for i, msg in enumerate(messages) if msg["graph_exec_id"] == "exec-Y" + ) + republished_x_index = next( + i + for i, msg in enumerate(messages[1:], 1) + if msg["graph_exec_id"] == "exec-X" + ) + + assert ( + y_index < republished_x_index + ), f"Y should come before republished X, but got order: {[m['graph_exec_id'] for m in messages]}" + + print("✅ Republishing confirmed: messages go to back of queue") + + print("🎉 All integration tests passed!") + print("🎉 Our republishing method works correctly with real RabbitMQ") + print("🎉 Queue blocking issue is fixed!") + + finally: + tester.cleanup() + + +def test_traditional_requeue_behavior(): + """ + Test that traditional requeue (basic_nack with requeue=True) sends messages to FRONT of queue. + This validates our hypothesis about why queue blocking occurs. + """ + tester = QueueOrderTester() + + try: + tester.setup_queue() + print("🧪 Testing traditional requeue behavior (basic_nack with requeue=True)") + + # Step 1: Publish message A + msg_a = tester.create_test_message("A") + tester.publish_message(msg_a) + + # Step 2: Publish message B + msg_b = tester.create_test_message("B") + tester.publish_message(msg_b) + + # Step 3: Consume message A and requeue it using traditional method + channel = tester.queue_client.get_channel() + method_frame, header_frame, body = channel.basic_get( + queue=tester.test_queue_name, auto_ack=False + ) + + assert method_frame is not None, "Should have received message A" + consumed_msg = json.loads(body.decode()) + assert ( + consumed_msg["graph_exec_id"] == "exec-A" + ), f"Should have consumed message A, got {consumed_msg['graph_exec_id']}" + + # Traditional requeue: basic_nack with requeue=True (sends to FRONT) + channel.basic_nack(delivery_tag=method_frame.delivery_tag, requeue=True) + print(f"🔄 Traditional requeue (to FRONT): {consumed_msg['graph_exec_id']}") + + # Step 4: Consume all messages using basic_get for reliability + received_messages = [] + + # Get first message + method_frame, header_frame, body = channel.basic_get( + queue=tester.test_queue_name, auto_ack=True + ) + if method_frame: + msg = json.loads(body.decode()) + received_messages.append(msg) + + # Get second message + method_frame, header_frame, body = channel.basic_get( + queue=tester.test_queue_name, auto_ack=True + ) + if method_frame: + msg = json.loads(body.decode()) + received_messages.append(msg) + + # CRITICAL ASSERTION: Traditional requeue should put A at FRONT + # Expected order: A (requeued to front), B + assert ( + len(received_messages) == 2 + ), f"Expected 2 messages, got {len(received_messages)}" + + first_msg = received_messages[0]["graph_exec_id"] + second_msg = received_messages[1]["graph_exec_id"] + + # This is the critical test: requeued message A should come BEFORE B + assert ( + first_msg == "exec-A" + ), f"Traditional requeue should put A at FRONT, but first message was: {first_msg}" + assert ( + second_msg == "exec-B" + ), f"B 
should come after requeued A, but second message was: {second_msg}" + + print( + "✅ HYPOTHESIS CONFIRMED: Traditional requeue sends messages to FRONT of queue" + ) + print(f" Order: {first_msg} (requeued to front) → {second_msg}") + print(" This explains why rate-limited messages block other users!") + + finally: + tester.cleanup() + + +if __name__ == "__main__": + test_queue_ordering_behavior() diff --git a/autogpt_platform/frontend/.env.default b/autogpt_platform/frontend/.env.default index acc9946a9f..dc3f67efab 100644 --- a/autogpt_platform/frontend/.env.default +++ b/autogpt_platform/frontend/.env.default @@ -1,18 +1,32 @@ - NEXT_PUBLIC_SUPABASE_URL=http://localhost:8000 - NEXT_PUBLIC_SUPABASE_ANON_KEY=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyAgCiAgICAicm9sZSI6ICJhbm9uIiwKICAgICJpc3MiOiAic3VwYWJhc2UtZGVtbyIsCiAgICAiaWF0IjogMTY0MTc2OTIwMCwKICAgICJleHAiOiAxNzk5NTM1NjAwCn0.dc_X5iR_VP_qT0zsiyj_I_OZ2T9FtRU2BBNWN8Bu4GE +# Supabase +NEXT_PUBLIC_SUPABASE_URL=http://localhost:8000 +NEXT_PUBLIC_SUPABASE_ANON_KEY=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyAgCiAgICAicm9sZSI6ICJhbm9uIiwKICAgICJpc3MiOiAic3VwYWJhc2UtZGVtbyIsCiAgICAiaWF0IjogMTY0MTc2OTIwMCwKICAgICJleHAiOiAxNzk5NTM1NjAwCn0.dc_X5iR_VP_qT0zsiyj_I_OZ2T9FtRU2BBNWN8Bu4GE - NEXT_PUBLIC_AGPT_SERVER_URL=http://localhost:8006/api - NEXT_PUBLIC_AGPT_WS_SERVER_URL=ws://localhost:8001/ws - NEXT_PUBLIC_FRONTEND_BASE_URL=http://localhost:3000 +# Back-end services +NEXT_PUBLIC_AGPT_SERVER_URL=http://localhost:8006/api +NEXT_PUBLIC_AGPT_WS_SERVER_URL=ws://localhost:8001/ws +NEXT_PUBLIC_FRONTEND_BASE_URL=http://localhost:3000 - NEXT_PUBLIC_APP_ENV=local - NEXT_PUBLIC_BEHAVE_AS=LOCAL +# Env config +NEXT_PUBLIC_APP_ENV=local +NEXT_PUBLIC_BEHAVE_AS=LOCAL - NEXT_PUBLIC_LAUNCHDARKLY_ENABLED=false - NEXT_PUBLIC_LAUNCHDARKLY_CLIENT_ID=687ab1372f497809b131e06e +# Feature flags +NEXT_PUBLIC_LAUNCHDARKLY_ENABLED=false +NEXT_PUBLIC_LAUNCHDARKLY_CLIENT_ID=687ab1372f497809b131e06e - NEXT_PUBLIC_TURNSTILE=disabled - NEXT_PUBLIC_REACT_QUERY_DEVTOOL=true +# Debugging +NEXT_PUBLIC_REACT_QUERY_DEVTOOL=true +NEXT_PUBLIC_GA_MEASUREMENT_ID=G-FH2XK2W4GN - NEXT_PUBLIC_GA_MEASUREMENT_ID=G-FH2XK2W4GN - \ No newline at end of file +# Google Drive Picker +NEXT_PUBLIC_GOOGLE_CLIENT_ID= +NEXT_PUBLIC_GOOGLE_API_KEY= +NEXT_PUBLIC_GOOGLE_APP_ID= + +# Cloudflare CAPTCHA +NEXT_PUBLIC_CLOUDFLARE_TURNSTILE_SITE_KEY= +NEXT_PUBLIC_TURNSTILE=disabled + +# PR previews +NEXT_PUBLIC_PREVIEW_STEALING_DEV= \ No newline at end of file diff --git a/autogpt_platform/frontend/.npmrc b/autogpt_platform/frontend/.npmrc index 15cb462b44..6a028fa5c3 100644 --- a/autogpt_platform/frontend/.npmrc +++ b/autogpt_platform/frontend/.npmrc @@ -1,2 +1,3 @@ # Configure pnpm to save exact versions -save-exact=true \ No newline at end of file +save-exact=true +engine-strict=true \ No newline at end of file diff --git a/autogpt_platform/frontend/.prettierignore b/autogpt_platform/frontend/.prettierignore index e4a8a24cc5..83318bff9c 100644 --- a/autogpt_platform/frontend/.prettierignore +++ b/autogpt_platform/frontend/.prettierignore @@ -2,7 +2,6 @@ node_modules pnpm-lock.yaml .next .auth -build public Dockerfile .prettierignore diff --git a/autogpt_platform/frontend/CONTRIBUTING.md b/autogpt_platform/frontend/CONTRIBUTING.md new file mode 100644 index 0000000000..048c088350 --- /dev/null +++ b/autogpt_platform/frontend/CONTRIBUTING.md @@ -0,0 +1,765 @@ +
+# AutoGPT Frontend • Contributing ⌨️
+
+**Next.js App Router • Client-first • Type-safe generated API hooks • Tailwind + shadcn/ui**
      + +--- + +## ☕️ Summary + +This document is your reference for contributing to the AutoGPT Frontend. It adapts legacy guidelines to our current stack and practices. + +- Architecture and stack +- Component structure and design system +- Data fetching (generated API hooks) +- Feature flags +- Naming and code conventions +- Tooling, scripts, and testing +- PR process and checklist + +This is a living document. Open a pull request any time to improve it. + +--- + +## 🚀 Quick Start FAQ + +New to the codebase? Here are shortcuts to common tasks: + +### I need to make a new page + +1. Create page in `src/app/(platform)/your-feature/page.tsx` +2. If it has logic, create `usePage.ts` hook next to it +3. Create sub-components in `components/` folder +4. Use generated API hooks for data fetching +5. If page needs auth, ensure it's in the `(platform)` route group + +**Example structure:** + +``` +app/(platform)/dashboard/ + page.tsx + useDashboardPage.ts + components/ + StatsPanel/ + StatsPanel.tsx + useStatsPanel.ts +``` + +See [Component structure](#-component-structure) and [Styling](#-styling) and [Data fetching patterns](#-data-fetching-patterns) sections. + +### I need to update an existing component in a page + +1. Find the page `src/app/(platform)/your-feature/page.tsx` +2. Check its `components/` folder +3. If needing to update its logic, check the `use[Component].ts` hook +4. If the update is related to rendering, check `[Component].tsx` file + +See [Component structure](#-component-structure) and [Styling](#-styling) sections. + +### I need to make a new API call and show it on the UI + +1. Ensure the backend endpoint exists in the OpenAPI spec +2. Regenerate API client: `pnpm generate:api` +3. Import the generated hook by typing the operation name (auto-import) +4. Use the hook in your component/custom hook +5. Handle loading, error, and success states + +**Example:** + +```tsx +import { useGetV2ListLibraryAgents } from "@/app/api/__generated__/endpoints/library/library"; + +export function useAgentList() { + const { data, isLoading, isError, error } = useGetV2ListLibraryAgents(); + + return { + agents: data?.data || [], + isLoading, + isError, + error, + }; +} +``` + +See [Data fetching patterns](#-data-fetching-patterns) for more examples. + +### I need to create a new component in the Design System + +1. Determine the atomic level: atom, molecule, or organism +2. Create folder: `src/components/[level]/ComponentName/` +3. Create `ComponentName.tsx` (render logic) +4. If logic exists, create `useComponentName.ts` +5. Create `ComponentName.stories.tsx` for Storybook +6. Use Tailwind + design tokens (avoid hardcoded values) +7. Only use Phosphor icons +8. Test in Storybook: `pnpm storybook` +9. Verify in Chromatic after PR + +**Example structure:** + +``` +src/components/molecules/DataCard/ + DataCard.tsx + DataCard.stories.tsx + useDataCard.ts +``` + +See [Component structure](#-component-structure) and [Styling](#-styling) sections. + +--- + +## 📟 Contribution process + +### 1) Branch off `dev` + +- Branch from `dev` for features and fixes +- Keep PRs focused (aim for one ticket per PR) +- Use conventional commit messages with a scope (e.g., `feat(frontend): add X`) + +### 2) Feature flags + +If a feature will ship across multiple PRs, guard it with a flag so we can merge iteratively. 
+ +- Use [LaunchDarkly](https://www.launchdarkly.com) based flags (see Feature Flags below) +- Avoid long-lived feature branches + +### 3) Open PR and get reviews ✅ + +Before requesting review: + +- [x] Code follows architecture and conventions here +- [x] `pnpm format && pnpm lint && pnpm types` pass +- [x] Relevant tests pass locally: `pnpm test` (and/or Storybook tests) +- [x] If touching UI, validate against our design system and stories + +### 4) Merge to `dev` + +- Use squash merges +- Follow conventional commit message format for the squash title + +--- + +## 📂 Architecture & Stack + +### Next.js App Router + +- We use the [Next.js App Router](https://nextjs.org/docs/app) in `src/app` +- Use [route segments](https://nextjs.org/docs/app/building-your-application/routing) with semantic URLs; no `pages/` + +### Component good practices + +- Default to client components +- Use server components only when: + - SEO requires server-rendered HTML, or + - Extreme first-byte performance justifies it + - If you render server-side data, prefer server-side prefetch + client hydration (see examples below and [React Query SSR & Hydration](https://tanstack.com/query/latest/docs/framework/react/guides/ssr)) +- Prefer using [Next.js API routes](https://nextjs.org/docs/pages/building-your-application/routing/api-routes) when possible over [server actions](https://nextjs.org/docs/14/app/building-your-application/data-fetching/server-actions-and-mutations) +- Keep components small and simple + - favour composition and splitting large components into smaller bits of UI + - [colocate state](https://kentcdodds.com/blog/state-colocation-will-make-your-react-app-faster) when possible + - keep render/side-effects split for [separation of concerns](https://en.wikipedia.org/wiki/Separation_of_concerns) + - do not over-complicate or re-invent the wheel + +**❓ Why a client-side first design vs server components/actions?** + +While server components and actions are cool and cutting-edge, they introduce a layer of complexity which not always justified by the benefits they deliver. Defaulting to client-first keeps things simple in the mental model of the developer, specially for those developers less familiar with Next.js or heavy Front-end development. 
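+For the "prefer Next.js API routes over server actions" guidance above, here is a minimal, illustrative sketch of an App Router route handler. The `/api/health` path and payload are placeholders for this example, not an existing endpoint:
+
+```ts
+// src/app/api/health/route.ts — illustrative only
+import { NextResponse } from "next/server";
+
+export async function GET() {
+  // Keep route handlers thin: validate input, call services, return JSON
+  return NextResponse.json({ status: "ok" });
+}
+```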
+ +### Data fetching: prefer generated API hooks + +- We generate a type-safe client and React Query hooks from the backend OpenAPI spec via [Orval](https://orval.dev/) +- Prefer the generated hooks under `src/app/api/__generated__/endpoints/...` +- Treat `BackendAPI` and code under `src/lib/autogpt-server-api/*` as deprecated; do not introduce new usages +- Use [Zod](https://zod.dev/) schemas from the generated client where applicable + +### State management + +- Prefer [React Query](https://tanstack.com/query/latest/docs/framework/react/overview) for server state, colocated near consumers (see [state colocation](https://kentcdodds.com/blog/state-colocation-will-make-your-react-app-faster)) +- Co-locate UI state inside components/hooks; keep global state minimal + +### Styling and components + +- [Tailwind CSS](https://tailwindcss.com/docs) + [shadcn/ui](https://ui.shadcn.com/) ([Radix Primitives](https://www.radix-ui.com/docs/primitives/overview/introduction) under the hood) +- Use the design system under `src/components` for primitives and building blocks +- Do not use anything under `src/components/_legacy__`; migrate away from it when touching old code +- Reference the design system catalog on Chromatic: [`https://dev--670f94474adee5e32c896b98.chromatic.com/`](https://dev--670f94474adee5e32c896b98.chromatic.com/) +- Use the [`tailwind-scrollbar`](https://www.npmjs.com/package/tailwind-scrollbar) plugin utilities for scrollbar styling + +--- + +## 🧱 Component structure + +For components, separate render logic from data/behavior, and keep implementation details local. + +**Most components should follow this structure.** Pages are just bigger components made of smaller ones, and sub-components can have their own nested sub-components when dealing with complex features. 
+ +### Basic structure + +When a component has non-trivial logic: + +``` +FeatureX/ + FeatureX.tsx (render logic only) + useFeatureX.ts (hook; data fetching, behavior, state) + helpers.ts (pure helpers used by the hook) + components/ (optional, subcomponents local to FeatureX) +``` + +### Example: Page with nested components + +```tsx +// Page composition +app/(platform)/dashboard/ + page.tsx + useDashboardPage.ts + components/ # (Sub-components the dashboard page is made of) + StatsPanel/ + StatsPanel.tsx + useStatsPanel.ts + helpers.ts + components/ # (Sub-components belonging to StatsPanel) + StatCard/ + StatCard.tsx + ActivityFeed/ + ActivityFeed.tsx + useActivityFeed.ts +``` + +### Guidelines + +- Prefer function declarations for components and handlers +- Only use arrow functions for small inline lambdas (e.g., in `map`) +- Avoid barrel files and `index.ts` re-exports +- Keep component files focused and readable; push complex logic to `helpers.ts` +- Abstract reusable, cross-feature logic into `src/services/` or `src/lib/utils.ts` as appropriate +- Build components encapsulated so they can be easily reused and abstracted elsewhere +- Nest sub-components within a `components/` folder when they're local to the parent feature + +### Exceptions + +When to simplify the structure: + +**Small hook logic (3-4 lines)** + +If the hook logic is minimal, keep it inline with the render function: + +```tsx +export function ActivityAlert() { + const [isVisible, setIsVisible] = useState(true); + if (!isVisible) return null; + + return ( + setIsVisible(false)}>New activity detected + ); +} +``` + +**Render-only components** + +Components with no hook logic can be direct files in `components/` without a folder: + +``` +components/ + ActivityAlert.tsx (render-only, no folder needed) + StatsPanel/ (has hook logic, needs folder) + StatsPanel.tsx + useStatsPanel.ts +``` + +### Hook file structure + +When separating logic into a custom hook: + +```tsx +// useStatsPanel.ts +export function useStatsPanel() { + const [data, setData] = useState([]); + const [isLoading, setIsLoading] = useState(true); + + useEffect(() => { + fetchStats().then(setData); + }, []); + + return { + data, + isLoading, + refresh: () => fetchStats().then(setData), + }; +} +``` + +Rules: + +- **Always return an object** that exposes data and methods to the view +- **Export a single function** named after the component (e.g., `useStatsPanel` for `StatsPanel.tsx`) +- **Abstract into helpers.ts** when hook logic grows large, so the hook file remains readable by scanning without diving into implementation details + +--- + +## 🔄 Data fetching patterns + +All API hooks are generated from the backend OpenAPI specification using [Orval](https://orval.dev/). The hooks are type-safe and follow the operation names defined in the backend API. + +### How to discover hooks + +Most of the time you can rely on auto-import by typing the endpoint or operation name. Your IDE will suggest the generated hooks based on the OpenAPI operation IDs. 
+ +**Examples of hook naming patterns:** + +- `GET /api/v1/notifications` → `useGetV1GetNotificationPreferences` +- `POST /api/v2/store/agents` → `usePostV2CreateStoreAgent` +- `DELETE /api/v2/store/submissions/{id}` → `useDeleteV2DeleteStoreSubmission` +- `GET /api/v2/library/agents` → `useGetV2ListLibraryAgents` + +**Pattern**: `use{Method}{Version}{OperationName}` + +You can also explore the generated hooks by browsing `src/app/api/__generated__/endpoints/` which is organized by API tags (e.g., `auth`, `store`, `library`). + +**OpenAPI specs:** + +- Production: [https://backend.agpt.co/openapi.json](https://backend.agpt.co/openapi.json) +- Staging: [https://dev-server.agpt.co/openapi.json](https://dev-server.agpt.co/openapi.json) + +### Generated hooks (client) + +Prefer the generated React Query hooks (via Orval + React Query): + +```tsx +import { useGetV1GetNotificationPreferences } from "@/app/api/__generated__/endpoints/auth/auth"; + +export function PreferencesPanel() { + const { data, isLoading, isError } = useGetV1GetNotificationPreferences({ + query: { + select: (res) => res.data, + }, + }); + + if (isLoading) return null; + if (isError) throw new Error("Failed to load preferences"); + return
<pre>{JSON.stringify(data, null, 2)}</pre>
      ; +} +``` + +### Generated mutations (client) + +```tsx +import { useQueryClient } from "@tanstack/react-query"; +import { + useDeleteV2DeleteStoreSubmission, + getGetV2ListMySubmissionsQueryKey, +} from "@/app/api/__generated__/endpoints/store/store"; + +export function DeleteSubmissionButton({ + submissionId, +}: { + submissionId: string; +}) { + const queryClient = useQueryClient(); + const { mutateAsync: deleteSubmission, isPending } = + useDeleteV2DeleteStoreSubmission({ + mutation: { + onSuccess: () => { + queryClient.invalidateQueries({ + queryKey: getGetV2ListMySubmissionsQueryKey(), + }); + }, + }, + }); + + async function onClick() { + await deleteSubmission({ submissionId }); + } + + return ( + + ); +} +``` + +### Server-side prefetch + client hydration + +Use server-side prefetch to improve TTFB while keeping the component tree client-first (see [React Query SSR & Hydration](https://tanstack.com/query/latest/docs/framework/react/guides/ssr)): + +```tsx +// in a server component +import { getQueryClient } from "@/lib/tanstack-query/getQueryClient"; +import { HydrationBoundary, dehydrate } from "@tanstack/react-query"; +import { + prefetchGetV2ListStoreAgentsQuery, + prefetchGetV2ListStoreCreatorsQuery, +} from "@/app/api/__generated__/endpoints/store/store"; + +export default async function MarketplacePage() { + const queryClient = getQueryClient(); + + await Promise.all([ + prefetchGetV2ListStoreAgentsQuery(queryClient, { featured: true }), + prefetchGetV2ListStoreAgentsQuery(queryClient, { sorted_by: "runs" }), + prefetchGetV2ListStoreCreatorsQuery(queryClient, { + featured: true, + sorted_by: "num_agents", + }), + ]); + + return ( + + {/* Client component tree goes here */} + + ); +} +``` + +Notes: + +- Do not introduce new usages of `BackendAPI` or `src/lib/autogpt-server-api/*` +- Keep transformations and mapping logic close to the consumer (hook), not in the view + +--- + +## ⚠️ Error handling + +The app has multiple error handling strategies depending on the type of error: + +### Render/runtime errors + +Use `` to display render or runtime errors gracefully: + +```tsx +import { ErrorCard } from "@/components/molecules/ErrorCard"; + +export function DataPanel() { + const { data, isLoading, isError, error } = useGetData(); + + if (isLoading) return ; + if (isError) return ; + + return
<div>{data.content}</div>
      ; +} +``` + +### API mutation errors + +Display mutation errors using toast notifications: + +```tsx +import { useToast } from "@/components/ui/use-toast"; + +export function useUpdateSettings() { + const { toast } = useToast(); + const { mutateAsync: updateSettings } = useUpdateSettingsMutation({ + mutation: { + onError: (error) => { + toast({ + title: "Failed to update settings", + description: error.message, + variant: "destructive", + }); + }, + }, + }); + + return { updateSettings }; +} +``` + +### Manual Sentry capture + +When needed, you can manually capture exceptions to Sentry: + +```tsx +import * as Sentry from "@sentry/nextjs"; + +try { + await riskyOperation(); +} catch (error) { + Sentry.captureException(error, { + tags: { context: "feature-x" }, + extra: { metadata: additionalData }, + }); + throw error; +} +``` + +### Global error boundaries + +The app has error boundaries already configured to: + +- Capture uncaught errors globally and send them to Sentry +- Display a user-friendly error UI when something breaks +- Prevent the entire app from crashing + +You don't need to wrap components in error boundaries manually unless you need custom error recovery logic. + +--- + +## 🚩 Feature Flags + +- Flags are powered by [LaunchDarkly](https://docs.launchdarkly.com/) +- Use the helper APIs under `src/services/feature-flags` + +Check a flag in a client component: + +```tsx +import { Flag, useGetFlag } from "@/services/feature-flags/use-get-flag"; + +export function AgentActivityPanel() { + const enabled = useGetFlag(Flag.AGENT_ACTIVITY); + if (!enabled) return null; + return
<div>Feature is enabled!</div>
      ; +} +``` + +Protect a route or page component: + +```tsx +import { withFeatureFlag } from "@/services/feature-flags/with-feature-flag"; + +export const MyFeaturePage = withFeatureFlag(function Page() { + return
<div>My feature page</div>
      ; +}, "my-feature-flag"); +``` + +Local dev and Playwright: + +- Set `NEXT_PUBLIC_PW_TEST=true` to use mocked flag values during local development and tests + +Adding new flags: + +1. Add the flag to the `Flag` enum and `FlagValues` type +2. Provide a mock value in the mock map +3. Configure the flag in LaunchDarkly + +--- + +## 📙 Naming conventions + +General: + +- Variables and functions should read like plain English +- Prefer `const` over `let` unless reassignment is required +- Use searchable constants instead of magic numbers + +Files: + +- Components and hooks: `PascalCase` for component files, `camelCase` for hooks +- Other files: `kebab-case` +- Do not create barrel files or `index.ts` re-exports + +Types: + +- Prefer `interface` for object shapes +- Component props should be `interface Props { ... }` +- Use precise types; avoid `any` and unsafe casts + +Parameters: + +- If more than one parameter is needed, pass a single `Args` object for clarity + +Comments: + +- Keep comments minimal; code should be clear by itself +- Only document non-obvious intent, invariants, or caveats + +Functions: + +- Prefer function declarations for components and handlers +- Only use arrow functions for small inline callbacks + +Control flow: + +- Use early returns to reduce nesting +- Avoid catching errors unless you handle them meaningfully + +--- + +## 🎨 Styling + +- Use Tailwind utilities; prefer semantic, composable class names +- Use shadcn/ui components as building blocks when available +- Use the `tailwind-scrollbar` utilities for scrollbar styling +- Keep responsive and dark-mode behavior consistent with the design system + +Additional requirements: + +- Do not import shadcn primitives directly in feature code; only use components exposed in our design system under `src/components`. shadcn is a low-level skeleton we style on top of and is not meant to be consumed directly. +- Prefer design tokens over Tailwind's default theme whenever possible (e.g., color, spacing, radius, and typography tokens). Avoid hardcoded values and default palette if a token exists. + +--- + +## ⚠️ Errors and ⏳ Loading + +- **Errors**: Use the `ErrorCard` component from the design system to display API/HTTP errors and retry actions. Keep error derivation/mapping in hooks; pass the final message to the component. + - Component: `src/components/molecules/ErrorCard/ErrorCard.tsx` +- **Loading**: Use the `Skeleton` component(s) from the design system for loading states. Favor domain-appropriate skeleton layouts (lists, cards, tables) over spinners. + - See Storybook examples under Atoms/Skeleton for patterns. + +--- + +## 🧭 Responsive and mobile-first + +- Build mobile-first. Ensure new UI looks great from a 375px viewport width (iPhone SE) upwards. +- Validate layouts at common breakpoints (375, 768, 1024, 1280). Prefer stacking and progressive disclosure on small screens. + +--- + +## 🧰 State for complex flows + +For components/flows with complex state, multi-step wizards, or cross-component coordination, prefer a small co-located store using [Zustand](https://github.com/pmndrs/zustand). + +Guidelines: + +- Co-locate the store with the feature (e.g., `FeatureX/store.ts`). +- Expose typed selectors to minimize re-renders. +- Keep effects and API calls in hooks; stores hold state and pure actions. 
+ +Example: simple store with selectors + +```ts +import { create } from "zustand"; + +interface WizardState { + step: number; + data: Record; + next(): void; + back(): void; + setField(args: { key: string; value: unknown }): void; +} + +export const useWizardStore = create((set) => ({ + step: 0, + data: {}, + next() { + set((state) => ({ step: state.step + 1 })); + }, + back() { + set((state) => ({ step: Math.max(0, state.step - 1) })); + }, + setField({ key, value }) { + set((state) => ({ data: { ...state.data, [key]: value } })); + }, +})); + +// Usage in a component (selectors keep updates scoped) +function WizardFooter() { + const step = useWizardStore((s) => s.step); + const next = useWizardStore((s) => s.next); + const back = useWizardStore((s) => s.back); + + return ( +
+    <>
+      <button onClick={back} disabled={step === 0}>Back</button>
+      <button onClick={next}>Next</button>
+    </>
      + ); +} +``` + +Example: async action coordinated via hook + store + +```ts +// FeatureX/useFeatureX.ts +import { useMutation } from "@tanstack/react-query"; +import { useWizardStore } from "./store"; + +export function useFeatureX() { + const setField = useWizardStore((s) => s.setField); + const next = useWizardStore((s) => s.next); + + const { mutateAsync: save, isPending } = useMutation({ + mutationFn: async (payload: unknown) => { + // call API here + return payload; + }, + onSuccess(data) { + setField({ key: "result", value: data }); + next(); + }, + }); + + return { save, isSaving: isPending }; +} +``` + +--- + +## 🖼 Icons + +- Only use Phosphor Icons. Treat all other icon libraries as deprecated for new code. + - Package: `@phosphor-icons/react` + - Site: [`https://phosphoricons.com/`](https://phosphoricons.com/) + +Example usage: + +```tsx +import { Plus } from "@phosphor-icons/react"; + +export function CreateButton() { + return ( + + ); +} +``` + +--- + +## 🧪 Testing & Storybook + +- End-to-end: [Playwright](https://playwright.dev/docs/intro) (`pnpm test`, `pnpm test-ui`) +- [Storybook](https://storybook.js.org/docs) for isolated UI development (`pnpm storybook` / `pnpm build-storybook`) +- For Storybook tests in CI, see [`@storybook/test-runner`](https://storybook.js.org/docs/writing-tests/test-runner) (`test-storybook:ci`) +- When changing components in `src/components`, update or add stories and visually verify in Storybook/Chromatic + +--- + +## 🛠 Tooling & Scripts + +Common scripts (see `package.json` for full list): + +- `pnpm dev` — Start Next.js dev server (generates API client first) +- `pnpm build` — Build for production +- `pnpm start` — Start production server +- `pnpm lint` — ESLint + Prettier check +- `pnpm format` — Format code +- `pnpm types` — Type-check +- `pnpm storybook` — Run Storybook +- `pnpm test` — Run Playwright tests + +Generated API client: + +- `pnpm generate:api` — Fetch OpenAPI spec and regenerate the client + +--- + +## ✅ PR checklist (Frontend) + +- Client-first: server components only for SEO or extreme TTFB needs +- Uses generated API hooks; no new `BackendAPI` usages +- UI uses `src/components` primitives; no new `_legacy__` components +- Logic is separated into `use*.ts` and `helpers.ts` when non-trivial +- Reusable logic extracted to `src/services/` or `src/lib/utils.ts` when appropriate +- Navigation uses the Next.js router +- Lint, format, type-check, and tests pass locally +- Stories updated/added if UI changed; verified in Storybook + +--- + +## ♻️ Migration guidance + +When touching legacy code: + +- Replace usages of `src/components/_legacy__/*` with the modern design system components under `src/components` +- Replace `BackendAPI` or `src/lib/autogpt-server-api/*` with generated API hooks +- Move presentational logic into render files and data/behavior into hooks +- Keep one-off transformations in local `helpers.ts`; move reusable logic to `src/services/` or `src/lib/utils.ts` + +--- + +## 📚 References + +- Design system (Chromatic): [`https://dev--670f94474adee5e32c896b98.chromatic.com/`](https://dev--670f94474adee5e32c896b98.chromatic.com/) +- Project README for setup and API client examples: `autogpt_platform/frontend/README.md` +- Conventional Commits: [conventionalcommits.org](https://www.conventionalcommits.org/) diff --git a/autogpt_platform/frontend/README.md b/autogpt_platform/frontend/README.md index ebc6764455..f4541cdd33 100644 --- a/autogpt_platform/frontend/README.md +++ b/autogpt_platform/frontend/README.md @@ 
-4,20 +4,12 @@ This is the frontend for AutoGPT's next generation This project uses [**pnpm**](https://pnpm.io/) as the package manager via **corepack**. [Corepack](https://github.com/nodejs/corepack) is a Node.js tool that automatically manages package managers without requiring global installations. +For architecture, conventions, data fetching, feature flags, design system usage, state management, and PR process, see [CONTRIBUTING.md](./CONTRIBUTING.md). + ### Prerequisites Make sure you have Node.js 16.10+ installed. Corepack is included with Node.js by default. -### ⚠️ Migrating from yarn - -> This project was previously using yarn1, make sure to clean up the old files if you set it up previously with yarn: -> -> ```bash -> rm -f yarn.lock && rm -rf node_modules -> ``` -> -> Then follow the setup steps below. - ## Setup ### 1. **Enable corepack** (run this once on your system): @@ -96,184 +88,13 @@ Every time a new Front-end dependency is added by you or others, you will need t This project uses [`next/font`](https://nextjs.org/docs/basic-features/font-optimization) to automatically optimize and load Inter, a custom Google Font. -## 🔄 Data Fetching Strategy +## 🔄 Data Fetching -> [!NOTE] -> You don't need to run the OpenAPI commands below to run the Front-end. You will only need to run them when adding or modifying endpoints on the Backend API and wanting to use those on the Frontend. - -This project uses an auto-generated API client powered by [**Orval**](https://orval.dev/), which creates type-safe API clients from OpenAPI specifications. - -### How It Works - -1. **Backend Requirements**: Each API endpoint needs a summary and tag in the OpenAPI spec -2. **Operation ID Generation**: FastAPI generates operation IDs using the pattern `{method}{tag}{summary}` -3. **Spec Fetching**: The OpenAPI spec is fetched from `http://localhost:8006/openapi.json` and saved to the frontend -4. **Spec Transformation**: The OpenAPI spec is cleaned up using a custom transformer (see `autogpt_platform/frontend/src/app/api/transformers`) -5. 
**Client Generation**: Auto-generated client includes TypeScript types, API endpoints, and Zod schemas, organized by tags - -### API Client Commands - -```bash -# Fetch OpenAPI spec from backend and generate client -pnpm generate:api - -# Only fetch the OpenAPI spec -pnpm fetch:openapi - -# Only generate the client (after spec is fetched) -pnpm generate:api-client -``` - -### Using the Generated Client - -The generated client provides React Query hooks for both queries and mutations: - -#### Queries (GET requests) - -```typescript -import { useGetV1GetNotificationPreferences } from "@/app/api/__generated__/endpoints/auth/auth"; - -const { data, isLoading, isError } = useGetV1GetNotificationPreferences({ - query: { - select: (res) => res.data, - // Other React Query options - }, -}); -``` - -#### Mutations (POST, PUT, DELETE requests) - -```typescript -import { useDeleteV2DeleteStoreSubmission } from "@/app/api/__generated__/endpoints/store/store"; -import { getGetV2ListMySubmissionsQueryKey } from "@/app/api/__generated__/endpoints/store/store"; -import { useQueryClient } from "@tanstack/react-query"; - -const queryClient = useQueryClient(); - -const { mutateAsync: deleteSubmission } = useDeleteV2DeleteStoreSubmission({ - mutation: { - onSuccess: () => { - // Invalidate related queries to refresh data - queryClient.invalidateQueries({ - queryKey: getGetV2ListMySubmissionsQueryKey(), - }); - }, - }, -}); - -// Usage -await deleteSubmission({ - submissionId: submission_id, -}); -``` - -#### Server Actions - -For server-side operations, you can also use the generated client functions directly: - -```typescript -import { postV1UpdateNotificationPreferences } from "@/app/api/__generated__/endpoints/auth/auth"; - -// In a server action -const preferences = { - email: "user@example.com", - preferences: { - AGENT_RUN: true, - ZERO_BALANCE: false, - // ... other preferences - }, - daily_limit: 0, -}; - -await postV1UpdateNotificationPreferences(preferences); -``` - -#### Server-Side Prefetching - -For server-side components, you can prefetch data on the server and hydrate it in the client cache. This allows immediate access to cached data when queries are called: - -```typescript -import { getQueryClient } from "@/lib/tanstack-query/getQueryClient"; -import { - prefetchGetV2ListStoreAgentsQuery, - prefetchGetV2ListStoreCreatorsQuery -} from "@/app/api/__generated__/endpoints/store/store"; -import { HydrationBoundary, dehydrate } from "@tanstack/react-query"; - -// In your server component -const queryClient = getQueryClient(); - -await Promise.all([ - prefetchGetV2ListStoreAgentsQuery(queryClient, { - featured: true, - }), - prefetchGetV2ListStoreAgentsQuery(queryClient, { - sorted_by: "runs", - }), - prefetchGetV2ListStoreCreatorsQuery(queryClient, { - featured: true, - sorted_by: "num_agents", - }), -]); - -return ( - - - -); -``` - -This pattern improves performance by serving pre-fetched data from the server while maintaining the benefits of client-side React Query features. - -### Configuration - -The Orval configuration is located in `autogpt_platform/frontend/orval.config.ts`. It generates two separate clients: - -1. **autogpt_api_client**: React Query hooks for client-side data fetching -2. **autogpt_zod_schema**: Zod schemas for validation - -For more details, see the [Orval documentation](https://orval.dev/) or check the configuration file. +See [CONTRIBUTING.md](./CONTRIBUTING.md) for guidance on generated API hooks, SSR + hydration patterns, and usage examples. 
You generally do not need to run OpenAPI commands unless adding/modifying backend endpoints. ## 🚩 Feature Flags -This project uses [LaunchDarkly](https://launchdarkly.com/) for feature flags, allowing us to control feature rollouts and A/B testing. - -### Using Feature Flags - -#### Check if a feature is enabled - -```typescript -import { Flag, useGetFlag } from "@/services/feature-flags/use-get-flag"; - -function MyComponent() { - const isAgentActivityEnabled = useGetFlag(Flag.AGENT_ACTIVITY); - - if (!isAgentActivityEnabled) { - return null; // Hide feature - } - - return
<div>Feature is enabled!</div>
      ; -} -``` - -#### Protect entire components - -```typescript -import { withFeatureFlag } from "@/services/feature-flags/with-feature-flag"; - -const MyFeaturePage = withFeatureFlag(MyPageComponent, "my-feature-flag"); -``` - -### Testing with Feature Flags - -For local development or running Playwright tests locally, use mocked feature flags by setting `NEXT_PUBLIC_PW_TEST=true` in your `.env` file. This bypasses LaunchDarkly and uses the mock values defined in the code. - -### Adding New Flags - -1. Add the flag to the `Flag` enum in `use-get-flag.ts` -2. Add the flag type to `FlagValues` type -3. Add mock value to `mockFlags` for testing -4. Configure the flag in LaunchDarkly dashboard +See [CONTRIBUTING.md](./CONTRIBUTING.md) for feature flag usage patterns, local development with mocks, and how to add new flags. ## 🚚 Deploy @@ -333,7 +154,7 @@ By integrating Storybook into our development workflow, we can streamline UI dev - [**Tailwind CSS**](https://tailwindcss.com/) - Utility-first CSS framework - [**shadcn/ui**](https://ui.shadcn.com/) - Re-usable components built with Radix UI and Tailwind CSS - [**Radix UI**](https://www.radix-ui.com/) - Headless UI components for accessibility -- [**Lucide React**](https://lucide.dev/guide/packages/lucide-react) - Beautiful & consistent icons +- [**Phosphor Icons**](https://phosphoricons.com/) - Icon set used across the app - [**Framer Motion**](https://motion.dev/) - Animation library for React ### Development & Testing diff --git a/autogpt_platform/frontend/instrumentation-client.ts b/autogpt_platform/frontend/instrumentation-client.ts index 3e601b1136..86fe015e62 100644 --- a/autogpt_platform/frontend/instrumentation-client.ts +++ b/autogpt_platform/frontend/instrumentation-client.ts @@ -2,26 +2,23 @@ // The config you add here will be used whenever a users loads a page in their browser. // https://docs.sentry.io/platforms/javascript/guides/nextjs/ -import { - AppEnv, - BehaveAs, - getAppEnv, - getBehaveAs, - getEnvironmentStr, -} from "@/lib/utils"; +import { consent } from "@/services/consent/cookies"; +import { environment } from "@/services/environment"; import * as Sentry from "@sentry/nextjs"; -const isProdOrDev = [AppEnv.PROD, AppEnv.DEV].includes(getAppEnv()); - -const isCloud = getBehaveAs() === BehaveAs.CLOUD; +const isProdOrDev = environment.isProd() || environment.isDev(); +const isCloud = environment.isCloud(); const isDisabled = process.env.DISABLE_SENTRY === "true"; const shouldEnable = !isDisabled && isProdOrDev && isCloud; +// Check for monitoring consent (includes session replay) +const hasMonitoringConsent = consent.hasConsentFor("monitoring"); + Sentry.init({ dsn: "https://fe4e4aa4a283391808a5da396da20159@o4505260022104064.ingest.us.sentry.io/4507946746380288", - environment: getEnvironmentStr(), + environment: environment.getEnvironmentStr(), enabled: shouldEnable, @@ -57,10 +54,12 @@ Sentry.init({ // Define how likely Replay events are sampled. // This sets the sample rate to be 10%. You may want this to be 100% while // in development and sample at a lower rate in production - replaysSessionSampleRate: 0.1, + // GDPR: Only enable if user has consented to monitoring + replaysSessionSampleRate: hasMonitoringConsent ? 0.1 : 0, // Define how likely Replay events are sampled when an error occurs. - replaysOnErrorSampleRate: 1.0, + // GDPR: Only enable if user has consented to monitoring + replaysOnErrorSampleRate: hasMonitoringConsent ? 
1.0 : 0, // Setting this option to true will print useful information to the console while you're setting up Sentry. debug: false, diff --git a/autogpt_platform/frontend/next.config.mjs b/autogpt_platform/frontend/next.config.mjs index 5fab1f2ee3..e4e4cdf544 100644 --- a/autogpt_platform/frontend/next.config.mjs +++ b/autogpt_platform/frontend/next.config.mjs @@ -3,8 +3,18 @@ import { withSentryConfig } from "@sentry/nextjs"; /** @type {import('next').NextConfig} */ const nextConfig = { productionBrowserSourceMaps: true, + experimental: { + serverActions: { + bodySizeLimit: "256mb", + }, + // Increase body size limit for API routes (file uploads) - 256MB to match backend limit + proxyClientMaxBodySize: "256mb", + middlewareClientMaxBodySize: "256mb", + }, images: { domains: [ + // We dont need to maintain alphabetical order here + // as we are doing logical grouping of domains "images.unsplash.com", "ddz4ak4pa3d19.cloudfront.net", "upload.wikimedia.org", @@ -12,6 +22,7 @@ const nextConfig = { "ideogram.ai", // for generated images "picsum.photos", // for placeholder images + "example.com", // for local test data images ], remotePatterns: [ { @@ -31,7 +42,8 @@ const nextConfig = { }, ], }, - output: "standalone", + // Vercel has its own deployment mechanism and doesn't need standalone mode + ...(process.env.VERCEL ? {} : { output: "standalone" }), transpilePackages: ["geist"], }; @@ -77,10 +89,10 @@ export default isDevelopmentBuild // This helps Sentry with sourcemaps... https://docs.sentry.io/platforms/javascript/guides/nextjs/sourcemaps/ sourcemaps: { - disable: false, // Source maps are enabled by default - assets: ["**/*.js", "**/*.js.map"], // Specify which files to upload - ignore: ["**/node_modules/**"], // Files to exclude - deleteSourcemapsAfterUpload: true, // Security: delete after upload + disable: false, + assets: [".next/**/*.js", ".next/**/*.js.map"], + ignore: ["**/node_modules/**"], + deleteSourcemapsAfterUpload: false, // Source is public anyway :) }, // Automatically tree-shake Sentry logger statements to reduce bundle size diff --git a/autogpt_platform/frontend/orval.config.ts b/autogpt_platform/frontend/orval.config.ts index de305c1acc..dff857e1b6 100644 --- a/autogpt_platform/frontend/orval.config.ts +++ b/autogpt_platform/frontend/orval.config.ts @@ -41,6 +41,12 @@ export default defineConfig({ useInfiniteQueryParam: "page", }, }, + "getV2List presets": { + query: { + useInfinite: true, + useInfiniteQueryParam: "page", + }, + }, "getV1List graph executions": { query: { useInfinite: true, diff --git a/autogpt_platform/frontend/package.json b/autogpt_platform/frontend/package.json index cbef96c4bd..4cbd867cd8 100644 --- a/autogpt_platform/frontend/package.json +++ b/autogpt_platform/frontend/package.json @@ -2,6 +2,9 @@ "name": "frontend", "version": "0.3.4", "private": true, + "engines": { + "node": "22.x" + }, "scripts": { "dev": "pnpm run generate:api:force && next dev --turbo", "build": "next build", @@ -26,8 +29,7 @@ ], "dependencies": { "@faker-js/faker": "10.0.0", - "@hookform/resolvers": "5.2.1", - "@marsidev/react-turnstile": "1.3.1", + "@hookform/resolvers": "5.2.2", "@next/third-parties": "15.4.6", "@phosphor-icons/react": "2.1.10", "@radix-ui/react-alert-dialog": "1.1.15", @@ -52,48 +54,50 @@ "@rjsf/core": "5.24.13", "@rjsf/utils": "5.24.13", "@rjsf/validator-ajv8": "5.24.13", - "@sentry/nextjs": "10.15.0", - "@supabase/ssr": "0.6.1", - "@supabase/supabase-js": "2.55.0", - "@tanstack/react-query": "5.85.3", + "@sentry/nextjs": "10.27.0", + "@supabase/ssr": 
"0.7.0", + "@supabase/supabase-js": "2.78.0", + "@tanstack/react-query": "5.90.6", "@tanstack/react-table": "8.21.3", "@types/jaro-winkler": "0.2.4", "@vercel/analytics": "1.5.0", "@vercel/speed-insights": "1.2.0", - "@xyflow/react": "12.8.3", + "@xyflow/react": "12.9.2", "boring-avatars": "1.11.2", "class-variance-authority": "0.7.1", "clsx": "2.1.1", "cmdk": "1.1.1", "cookie": "1.0.2", "date-fns": "4.1.0", - "dotenv": "17.2.1", + "dotenv": "17.2.3", "elliptic": "6.6.1", "embla-carousel-react": "8.6.0", - "framer-motion": "12.23.12", - "geist": "1.4.2", + "flatbush": "4.5.0", + "framer-motion": "12.23.24", + "geist": "1.5.1", "highlight.js": "11.11.1", "jaro-winkler": "0.2.8", - "katex": "0.16.22", - "launchdarkly-react-client-sdk": "3.8.1", + "katex": "0.16.25", + "launchdarkly-react-client-sdk": "3.9.0", "lodash": "4.17.21", - "lucide-react": "0.539.0", + "lucide-react": "0.552.0", "moment": "2.30.1", - "next": "15.4.7", + "next": "15.4.10", "next-themes": "0.4.6", - "nuqs": "2.4.3", + "nuqs": "2.7.2", "party-js": "2.2.0", "react": "18.3.1", - "react-day-picker": "9.8.1", + "react-currency-input-field": "4.0.3", + "react-day-picker": "9.11.1", "react-dom": "18.3.1", "react-drag-drop-files": "2.4.0", - "react-hook-form": "7.62.0", + "react-hook-form": "7.66.0", "react-icons": "5.5.0", "react-markdown": "9.0.3", "react-modal": "3.16.3", "react-shepherd": "6.1.9", "react-window": "1.8.11", - "recharts": "3.1.2", + "recharts": "3.3.0", "rehype-autolink-headings": "7.1.0", "rehype-highlight": "7.0.2", "rehype-katex": "7.0.1", @@ -103,7 +107,7 @@ "shepherd.js": "14.5.1", "sonner": "2.0.7", "tailwind-merge": "2.6.0", - "tailwind-scrollbar": "4.0.2", + "tailwind-scrollbar": "3.1.0", "tailwindcss-animate": "1.0.7", "uuid": "11.1.0", "vaul": "1.1.2", @@ -111,47 +115,46 @@ "zustand": "5.0.8" }, "devDependencies": { - "@chromatic-com/storybook": "4.1.1", - "@playwright/test": "1.55.0", + "@chromatic-com/storybook": "4.1.2", + "@playwright/test": "1.56.1", "@storybook/addon-a11y": "9.1.5", "@storybook/addon-docs": "9.1.5", "@storybook/addon-links": "9.1.5", "@storybook/addon-onboarding": "9.1.5", "@storybook/nextjs": "9.1.5", - "@tanstack/eslint-plugin-query": "5.86.0", - "@tanstack/react-query-devtools": "5.87.3", + "@tanstack/eslint-plugin-query": "5.91.2", + "@tanstack/react-query-devtools": "5.90.2", "@types/canvas-confetti": "1.9.0", "@types/lodash": "4.17.20", "@types/negotiator": "0.6.4", - "@types/node": "24.3.1", + "@types/node": "24.10.0", "@types/react": "18.3.17", "@types/react-dom": "18.3.5", "@types/react-modal": "3.16.3", "@types/react-window": "1.8.8", - "axe-playwright": "2.1.0", - "chromatic": "13.1.4", + "axe-playwright": "2.2.2", + "chromatic": "13.3.3", "concurrently": "9.2.1", - "cross-env": "7.0.3", + "cross-env": "10.1.0", "eslint": "8.57.1", - "eslint-config-next": "15.5.2", + "eslint-config-next": "15.5.7", "eslint-plugin-storybook": "9.1.5", - "import-in-the-middle": "1.14.2", - "msw": "2.11.1", - "msw-storybook-addon": "2.0.5", - "orval": "7.11.2", - "pbkdf2": "3.1.3", + "msw": "2.11.6", + "msw-storybook-addon": "2.0.6", + "orval": "7.13.0", + "pbkdf2": "3.1.5", "postcss": "8.5.6", "prettier": "3.6.2", - "prettier-plugin-tailwindcss": "0.6.14", + "prettier-plugin-tailwindcss": "0.7.1", "require-in-the-middle": "7.5.2", "storybook": "9.1.5", "tailwindcss": "3.4.17", - "typescript": "5.9.2" + "typescript": "5.9.3" }, "msw": { "workerDirectory": [ "public" ] }, - "packageManager": "pnpm@10.11.1+sha256.211e9990148495c9fc30b7e58396f7eeda83d9243eb75407ea4f8650fb161f7c" + 
"packageManager": "pnpm@10.20.0+sha512.cf9998222162dd85864d0a8102e7892e7ba4ceadebbf5a31f9c2fce48dfce317a9c53b9f6464d1ef9042cba2e02ae02a9f7c143a2b438cd93c91840f0192b9dd" } diff --git a/autogpt_platform/frontend/playwright.config.ts b/autogpt_platform/frontend/playwright.config.ts index 66a76a910b..7604e8e88a 100644 --- a/autogpt_platform/frontend/playwright.config.ts +++ b/autogpt_platform/frontend/playwright.config.ts @@ -8,9 +8,7 @@ import dotenv from "dotenv"; import path from "path"; dotenv.config({ path: path.resolve(__dirname, ".env") }); dotenv.config({ path: path.resolve(__dirname, "../backend/.env") }); -/** - * See https://playwright.dev/docs/test-configuration. - */ + export default defineConfig({ testDir: "./src/tests", /* Global setup file that runs before all tests */ @@ -37,11 +35,32 @@ export default defineConfig({ /* Helps debugging failures */ trace: "retain-on-failure", video: "retain-on-failure", + + /* Auto-accept cookies in all tests to prevent banner interference */ + storageState: { + cookies: [], + origins: [ + { + origin: "http://localhost:3000", + localStorage: [ + { + name: "autogpt_cookie_consent", + value: JSON.stringify({ + hasConsented: true, + timestamp: Date.now(), + analytics: true, + monitoring: true, + }), + }, + ], + }, + ], + }, }, /* Maximum time one test can run for */ timeout: 25000, - /* Configure web server to start automatically */ + /* Configure web server to start automatically (local dev only) */ webServer: { command: "pnpm start", url: "http://localhost:3000", diff --git a/autogpt_platform/frontend/pnpm-lock.yaml b/autogpt_platform/frontend/pnpm-lock.yaml index 56e5e48018..54843fc589 100644 --- a/autogpt_platform/frontend/pnpm-lock.yaml +++ b/autogpt_platform/frontend/pnpm-lock.yaml @@ -12,14 +12,11 @@ importers: specifier: 10.0.0 version: 10.0.0 '@hookform/resolvers': - specifier: 5.2.1 - version: 5.2.1(react-hook-form@7.62.0(react@18.3.1)) - '@marsidev/react-turnstile': - specifier: 1.3.1 - version: 1.3.1(react-dom@18.3.1(react@18.3.1))(react@18.3.1) + specifier: 5.2.2 + version: 5.2.2(react-hook-form@7.66.0(react@18.3.1)) '@next/third-parties': specifier: 15.4.6 - version: 15.4.6(next@15.4.7(@babel/core@7.28.4)(@opentelemetry/api@1.9.0)(@playwright/test@1.55.0)(react-dom@18.3.1(react@18.3.1))(react@18.3.1))(react@18.3.1) + version: 15.4.6(next@15.4.10(@babel/core@7.28.4)(@opentelemetry/api@1.9.0)(@playwright/test@1.56.1)(react-dom@18.3.1(react@18.3.1))(react@18.3.1))(react@18.3.1) '@phosphor-icons/react': specifier: 2.1.10 version: 2.1.10(react-dom@18.3.1(react@18.3.1))(react@18.3.1) @@ -90,17 +87,17 @@ importers: specifier: 5.24.13 version: 5.24.13(@rjsf/utils@5.24.13(react@18.3.1)) '@sentry/nextjs': - specifier: 10.15.0 - version: 10.15.0(@opentelemetry/context-async-hooks@2.1.0(@opentelemetry/api@1.9.0))(@opentelemetry/core@2.1.0(@opentelemetry/api@1.9.0))(@opentelemetry/sdk-trace-base@2.1.0(@opentelemetry/api@1.9.0))(next@15.4.7(@babel/core@7.28.4)(@opentelemetry/api@1.9.0)(@playwright/test@1.55.0)(react-dom@18.3.1(react@18.3.1))(react@18.3.1))(react@18.3.1)(webpack@5.101.3(esbuild@0.25.9)) + specifier: 10.27.0 + version: 10.27.0(@opentelemetry/context-async-hooks@2.2.0(@opentelemetry/api@1.9.0))(@opentelemetry/core@2.2.0(@opentelemetry/api@1.9.0))(@opentelemetry/sdk-trace-base@2.2.0(@opentelemetry/api@1.9.0))(next@15.4.10(@babel/core@7.28.4)(@opentelemetry/api@1.9.0)(@playwright/test@1.56.1)(react-dom@18.3.1(react@18.3.1))(react@18.3.1))(react@18.3.1)(webpack@5.101.3(esbuild@0.25.9)) '@supabase/ssr': - specifier: 0.6.1 - version: 
0.6.1(@supabase/supabase-js@2.55.0) + specifier: 0.7.0 + version: 0.7.0(@supabase/supabase-js@2.78.0) '@supabase/supabase-js': - specifier: 2.55.0 - version: 2.55.0 + specifier: 2.78.0 + version: 2.78.0 '@tanstack/react-query': - specifier: 5.85.3 - version: 5.85.3(react@18.3.1) + specifier: 5.90.6 + version: 5.90.6(react@18.3.1) '@tanstack/react-table': specifier: 8.21.3 version: 8.21.3(react-dom@18.3.1(react@18.3.1))(react@18.3.1) @@ -109,13 +106,13 @@ importers: version: 0.2.4 '@vercel/analytics': specifier: 1.5.0 - version: 1.5.0(next@15.4.7(@babel/core@7.28.4)(@opentelemetry/api@1.9.0)(@playwright/test@1.55.0)(react-dom@18.3.1(react@18.3.1))(react@18.3.1))(react@18.3.1) + version: 1.5.0(next@15.4.10(@babel/core@7.28.4)(@opentelemetry/api@1.9.0)(@playwright/test@1.56.1)(react-dom@18.3.1(react@18.3.1))(react@18.3.1))(react@18.3.1) '@vercel/speed-insights': specifier: 1.2.0 - version: 1.2.0(next@15.4.7(@babel/core@7.28.4)(@opentelemetry/api@1.9.0)(@playwright/test@1.55.0)(react-dom@18.3.1(react@18.3.1))(react@18.3.1))(react@18.3.1) + version: 1.2.0(next@15.4.10(@babel/core@7.28.4)(@opentelemetry/api@1.9.0)(@playwright/test@1.56.1)(react-dom@18.3.1(react@18.3.1))(react@18.3.1))(react@18.3.1) '@xyflow/react': - specifier: 12.8.3 - version: 12.8.3(@types/react@18.3.17)(immer@10.1.3)(react-dom@18.3.1(react@18.3.1))(react@18.3.1) + specifier: 12.9.2 + version: 12.9.2(@types/react@18.3.17)(immer@10.1.3)(react-dom@18.3.1(react@18.3.1))(react@18.3.1) boring-avatars: specifier: 1.11.2 version: 1.11.2 @@ -135,20 +132,23 @@ importers: specifier: 4.1.0 version: 4.1.0 dotenv: - specifier: 17.2.1 - version: 17.2.1 + specifier: 17.2.3 + version: 17.2.3 elliptic: specifier: 6.6.1 version: 6.6.1 embla-carousel-react: specifier: 8.6.0 version: 8.6.0(react@18.3.1) + flatbush: + specifier: 4.5.0 + version: 4.5.0 framer-motion: - specifier: 12.23.12 - version: 12.23.12(@emotion/is-prop-valid@1.2.2)(react-dom@18.3.1(react@18.3.1))(react@18.3.1) + specifier: 12.23.24 + version: 12.23.24(@emotion/is-prop-valid@1.2.2)(react-dom@18.3.1(react@18.3.1))(react@18.3.1) geist: - specifier: 1.4.2 - version: 1.4.2(next@15.4.7(@babel/core@7.28.4)(@opentelemetry/api@1.9.0)(@playwright/test@1.55.0)(react-dom@18.3.1(react@18.3.1))(react@18.3.1)) + specifier: 1.5.1 + version: 1.5.1(next@15.4.10(@babel/core@7.28.4)(@opentelemetry/api@1.9.0)(@playwright/test@1.56.1)(react-dom@18.3.1(react@18.3.1))(react@18.3.1)) highlight.js: specifier: 11.11.1 version: 11.11.1 @@ -156,38 +156,41 @@ importers: specifier: 0.2.8 version: 0.2.8 katex: - specifier: 0.16.22 - version: 0.16.22 + specifier: 0.16.25 + version: 0.16.25 launchdarkly-react-client-sdk: - specifier: 3.8.1 - version: 3.8.1(react-dom@18.3.1(react@18.3.1))(react@18.3.1) + specifier: 3.9.0 + version: 3.9.0(react-dom@18.3.1(react@18.3.1))(react@18.3.1) lodash: specifier: 4.17.21 version: 4.17.21 lucide-react: - specifier: 0.539.0 - version: 0.539.0(react@18.3.1) + specifier: 0.552.0 + version: 0.552.0(react@18.3.1) moment: specifier: 2.30.1 version: 2.30.1 next: - specifier: 15.4.7 - version: 15.4.7(@babel/core@7.28.4)(@opentelemetry/api@1.9.0)(@playwright/test@1.55.0)(react-dom@18.3.1(react@18.3.1))(react@18.3.1) + specifier: 15.4.10 + version: 15.4.10(@babel/core@7.28.4)(@opentelemetry/api@1.9.0)(@playwright/test@1.56.1)(react-dom@18.3.1(react@18.3.1))(react@18.3.1) next-themes: specifier: 0.4.6 version: 0.4.6(react-dom@18.3.1(react@18.3.1))(react@18.3.1) nuqs: - specifier: 2.4.3 - version: 
2.4.3(next@15.4.7(@babel/core@7.28.4)(@opentelemetry/api@1.9.0)(@playwright/test@1.55.0)(react-dom@18.3.1(react@18.3.1))(react@18.3.1))(react@18.3.1) + specifier: 2.7.2 + version: 2.7.2(next@15.4.10(@babel/core@7.28.4)(@opentelemetry/api@1.9.0)(@playwright/test@1.56.1)(react-dom@18.3.1(react@18.3.1))(react@18.3.1))(react@18.3.1) party-js: specifier: 2.2.0 version: 2.2.0 react: specifier: 18.3.1 version: 18.3.1 + react-currency-input-field: + specifier: 4.0.3 + version: 4.0.3(react@18.3.1) react-day-picker: - specifier: 9.8.1 - version: 9.8.1(react@18.3.1) + specifier: 9.11.1 + version: 9.11.1(react@18.3.1) react-dom: specifier: 18.3.1 version: 18.3.1(react@18.3.1) @@ -195,8 +198,8 @@ importers: specifier: 2.4.0 version: 2.4.0(react-dom@18.3.1(react@18.3.1))(react@18.3.1) react-hook-form: - specifier: 7.62.0 - version: 7.62.0(react@18.3.1) + specifier: 7.66.0 + version: 7.66.0(react@18.3.1) react-icons: specifier: 5.5.0 version: 5.5.0(react@18.3.1) @@ -208,13 +211,13 @@ importers: version: 3.16.3(react-dom@18.3.1(react@18.3.1))(react@18.3.1) react-shepherd: specifier: 6.1.9 - version: 6.1.9(react-dom@18.3.1(react@18.3.1))(react@18.3.1)(typescript@5.9.2) + version: 6.1.9(react-dom@18.3.1(react@18.3.1))(react@18.3.1)(typescript@5.9.3) react-window: specifier: 1.8.11 version: 1.8.11(react-dom@18.3.1(react@18.3.1))(react@18.3.1) recharts: - specifier: 3.1.2 - version: 3.1.2(@types/react@18.3.17)(react-dom@18.3.1(react@18.3.1))(react-is@18.3.1)(react@18.3.1)(redux@5.0.1) + specifier: 3.3.0 + version: 3.3.0(@types/react@18.3.17)(react-dom@18.3.1(react@18.3.1))(react-is@18.3.1)(react@18.3.1)(redux@5.0.1) rehype-autolink-headings: specifier: 7.1.0 version: 7.1.0 @@ -243,8 +246,8 @@ importers: specifier: 2.6.0 version: 2.6.0 tailwind-scrollbar: - specifier: 4.0.2 - version: 4.0.2(react@18.3.1)(tailwindcss@3.4.17) + specifier: 3.1.0 + version: 3.1.0(tailwindcss@3.4.17) tailwindcss-animate: specifier: 1.0.7 version: 1.0.7(tailwindcss@3.4.17) @@ -262,32 +265,32 @@ importers: version: 5.0.8(@types/react@18.3.17)(immer@10.1.3)(react@18.3.1)(use-sync-external-store@1.5.0(react@18.3.1)) devDependencies: '@chromatic-com/storybook': - specifier: 4.1.1 - version: 4.1.1(storybook@9.1.5(@testing-library/dom@10.4.1)(msw@2.11.1(@types/node@24.3.1)(typescript@5.9.2))(prettier@3.6.2)) + specifier: 4.1.2 + version: 4.1.2(storybook@9.1.5(@testing-library/dom@10.4.1)(msw@2.11.6(@types/node@24.10.0)(typescript@5.9.3))(prettier@3.6.2)) '@playwright/test': - specifier: 1.55.0 - version: 1.55.0 + specifier: 1.56.1 + version: 1.56.1 '@storybook/addon-a11y': specifier: 9.1.5 - version: 9.1.5(storybook@9.1.5(@testing-library/dom@10.4.1)(msw@2.11.1(@types/node@24.3.1)(typescript@5.9.2))(prettier@3.6.2)) + version: 9.1.5(storybook@9.1.5(@testing-library/dom@10.4.1)(msw@2.11.6(@types/node@24.10.0)(typescript@5.9.3))(prettier@3.6.2)) '@storybook/addon-docs': specifier: 9.1.5 - version: 9.1.5(@types/react@18.3.17)(storybook@9.1.5(@testing-library/dom@10.4.1)(msw@2.11.1(@types/node@24.3.1)(typescript@5.9.2))(prettier@3.6.2)) + version: 9.1.5(@types/react@18.3.17)(storybook@9.1.5(@testing-library/dom@10.4.1)(msw@2.11.6(@types/node@24.10.0)(typescript@5.9.3))(prettier@3.6.2)) '@storybook/addon-links': specifier: 9.1.5 - version: 9.1.5(react@18.3.1)(storybook@9.1.5(@testing-library/dom@10.4.1)(msw@2.11.1(@types/node@24.3.1)(typescript@5.9.2))(prettier@3.6.2)) + version: 9.1.5(react@18.3.1)(storybook@9.1.5(@testing-library/dom@10.4.1)(msw@2.11.6(@types/node@24.10.0)(typescript@5.9.3))(prettier@3.6.2)) '@storybook/addon-onboarding': 
specifier: 9.1.5 - version: 9.1.5(storybook@9.1.5(@testing-library/dom@10.4.1)(msw@2.11.1(@types/node@24.3.1)(typescript@5.9.2))(prettier@3.6.2)) + version: 9.1.5(storybook@9.1.5(@testing-library/dom@10.4.1)(msw@2.11.6(@types/node@24.10.0)(typescript@5.9.3))(prettier@3.6.2)) '@storybook/nextjs': specifier: 9.1.5 - version: 9.1.5(esbuild@0.25.9)(next@15.4.7(@babel/core@7.28.4)(@opentelemetry/api@1.9.0)(@playwright/test@1.55.0)(react-dom@18.3.1(react@18.3.1))(react@18.3.1))(react-dom@18.3.1(react@18.3.1))(react@18.3.1)(storybook@9.1.5(@testing-library/dom@10.4.1)(msw@2.11.1(@types/node@24.3.1)(typescript@5.9.2))(prettier@3.6.2))(type-fest@4.41.0)(typescript@5.9.2)(webpack-hot-middleware@2.26.1)(webpack@5.101.3(esbuild@0.25.9)) + version: 9.1.5(esbuild@0.25.9)(next@15.4.10(@babel/core@7.28.4)(@opentelemetry/api@1.9.0)(@playwright/test@1.56.1)(react-dom@18.3.1(react@18.3.1))(react@18.3.1))(react-dom@18.3.1(react@18.3.1))(react@18.3.1)(storybook@9.1.5(@testing-library/dom@10.4.1)(msw@2.11.6(@types/node@24.10.0)(typescript@5.9.3))(prettier@3.6.2))(type-fest@4.41.0)(typescript@5.9.3)(webpack-hot-middleware@2.26.1)(webpack@5.101.3(esbuild@0.25.9)) '@tanstack/eslint-plugin-query': - specifier: 5.86.0 - version: 5.86.0(eslint@8.57.1)(typescript@5.9.2) + specifier: 5.91.2 + version: 5.91.2(eslint@8.57.1)(typescript@5.9.3) '@tanstack/react-query-devtools': - specifier: 5.87.3 - version: 5.87.3(@tanstack/react-query@5.85.3(react@18.3.1))(react@18.3.1) + specifier: 5.90.2 + version: 5.90.2(@tanstack/react-query@5.90.6(react@18.3.1))(react@18.3.1) '@types/canvas-confetti': specifier: 1.9.0 version: 1.9.0 @@ -298,8 +301,8 @@ importers: specifier: 0.6.4 version: 0.6.4 '@types/node': - specifier: 24.3.1 - version: 24.3.1 + specifier: 24.10.0 + version: 24.10.0 '@types/react': specifier: 18.3.17 version: 18.3.17 @@ -313,41 +316,38 @@ importers: specifier: 1.8.8 version: 1.8.8 axe-playwright: - specifier: 2.1.0 - version: 2.1.0(playwright@1.55.0) + specifier: 2.2.2 + version: 2.2.2(playwright@1.56.1) chromatic: - specifier: 13.1.4 - version: 13.1.4 + specifier: 13.3.3 + version: 13.3.3 concurrently: specifier: 9.2.1 version: 9.2.1 cross-env: - specifier: 7.0.3 - version: 7.0.3 + specifier: 10.1.0 + version: 10.1.0 eslint: specifier: 8.57.1 version: 8.57.1 eslint-config-next: - specifier: 15.5.2 - version: 15.5.2(eslint@8.57.1)(typescript@5.9.2) + specifier: 15.5.7 + version: 15.5.7(eslint@8.57.1)(typescript@5.9.3) eslint-plugin-storybook: specifier: 9.1.5 - version: 9.1.5(eslint@8.57.1)(storybook@9.1.5(@testing-library/dom@10.4.1)(msw@2.11.1(@types/node@24.3.1)(typescript@5.9.2))(prettier@3.6.2))(typescript@5.9.2) - import-in-the-middle: - specifier: 1.14.2 - version: 1.14.2 + version: 9.1.5(eslint@8.57.1)(storybook@9.1.5(@testing-library/dom@10.4.1)(msw@2.11.6(@types/node@24.10.0)(typescript@5.9.3))(prettier@3.6.2))(typescript@5.9.3) msw: - specifier: 2.11.1 - version: 2.11.1(@types/node@24.3.1)(typescript@5.9.2) + specifier: 2.11.6 + version: 2.11.6(@types/node@24.10.0)(typescript@5.9.3) msw-storybook-addon: - specifier: 2.0.5 - version: 2.0.5(msw@2.11.1(@types/node@24.3.1)(typescript@5.9.2)) + specifier: 2.0.6 + version: 2.0.6(msw@2.11.6(@types/node@24.10.0)(typescript@5.9.3)) orval: - specifier: 7.11.2 - version: 7.11.2(openapi-types@12.1.3) + specifier: 7.13.0 + version: 7.13.0(openapi-types@12.1.3)(typescript@5.9.3) pbkdf2: - specifier: 3.1.3 - version: 3.1.3 + specifier: 3.1.5 + version: 3.1.5 postcss: specifier: 8.5.6 version: 8.5.6 @@ -355,20 +355,20 @@ importers: specifier: 3.6.2 version: 3.6.2 
prettier-plugin-tailwindcss: - specifier: 0.6.14 - version: 0.6.14(prettier@3.6.2) + specifier: 0.7.1 + version: 0.7.1(prettier@3.6.2) require-in-the-middle: specifier: 7.5.2 version: 7.5.2 storybook: specifier: 9.1.5 - version: 9.1.5(@testing-library/dom@10.4.1)(msw@2.11.1(@types/node@24.3.1)(typescript@5.9.2))(prettier@3.6.2) + version: 9.1.5(@testing-library/dom@10.4.1)(msw@2.11.6(@types/node@24.10.0)(typescript@5.9.3))(prettier@3.6.2) tailwindcss: specifier: 3.4.17 version: 3.4.17 typescript: - specifier: 5.9.2 - version: 5.9.2 + specifier: 5.9.3 + version: 5.9.3 packages: @@ -379,8 +379,8 @@ packages: resolution: {integrity: sha512-UrcABB+4bUrFABwbluTIBErXwvbsU/V7TZWfmbgJfbkwiBuziS9gxdODUyuiecfdGQ85jglMW6juS3+z5TsKLw==} engines: {node: '>=10'} - '@apidevtools/json-schema-ref-parser@11.7.2': - resolution: {integrity: sha512-4gY54eEGEstClvEkGnwVkTkrx0sqwemEFG5OSRRn3tD91XH0+Q8XIkYIfo7IwEWPpJZwILb9GUXeShtplRc/eA==} + '@apidevtools/json-schema-ref-parser@14.0.1': + resolution: {integrity: sha512-Oc96zvmxx1fqoSEdUmfmvvb59/KDOnUoJ7s2t7bISyAn0XEz57LCCw8k2Y4Pf3mwKaZLMciESALORLgfe2frCw==} engines: {node: '>= 16'} '@apidevtools/openapi-schemas@2.1.0': @@ -390,13 +390,19 @@ packages: '@apidevtools/swagger-methods@3.0.2': resolution: {integrity: sha512-QAkD5kK2b1WfjDS/UQn/qQkbwF31uqRjPTrsCs5ZG9BQGAkjwvqGFjjPqAuzac/IYzpPtRzjCP1WrTuAIjMrXg==} - '@apidevtools/swagger-parser@10.1.1': - resolution: {integrity: sha512-u/kozRnsPO/x8QtKYJOqoGtC4kH6yg1lfYkB9Au0WhYB0FNLpyFusttQtvhlwjtG3rOwiRz4D8DnnXa8iEpIKA==} + '@apidevtools/swagger-parser@12.1.0': + resolution: {integrity: sha512-e5mJoswsnAX0jG+J09xHFYQXb/bUc5S3pLpMxUuRUA2H8T2kni3yEoyz2R3Dltw5f4A6j6rPNMpWTK+iVDFlng==} peerDependencies: openapi-types: '>=7' - '@asyncapi/specs@6.9.0': - resolution: {integrity: sha512-gatFEH2hfJXWmv3vogIjBZfiIbPRC/ISn9UEHZZLZDdMBO0USxt3AFgCC9AY1P+eNE7zjXddXCIT7gz32XOK4g==} + '@apm-js-collab/code-transformer@0.8.2': + resolution: {integrity: sha512-YRjJjNq5KFSjDUoqu5pFUWrrsvGOxl6c3bu+uMFc9HNNptZ2rNU/TI2nLw4jnhQNtka972Ee2m3uqbvDQtPeCA==} + + '@apm-js-collab/tracing-hooks@0.3.1': + resolution: {integrity: sha512-Vu1CbmPURlN5fTboVuKMoJjbO5qcq9fA5YXpskx3dXe/zTBvjODFoerw+69rVBlRLrJpwPqSDqEuJDEKIrTldw==} + + '@asyncapi/specs@6.10.0': + resolution: {integrity: sha512-vB5oKLsdrLUORIZ5BXortZTlVyGWWMC1Nud/0LtgxQ3Yn2738HigAD6EVqScvpPsDUI/bcLVsYEXN4dtXQHVng==} '@babel/code-frame@7.27.1': resolution: {integrity: sha512-cjQ7ZlQ0Mv3b47hABuTevyTuYN4i+loJKGeV9flcCgIK37cCXRh+L1bd3iBHlynerhQ7BhCkn2BPbQUL+rGqFg==} @@ -947,10 +953,6 @@ packages: peerDependencies: '@babel/core': ^7.0.0-0 - '@babel/runtime@7.28.3': - resolution: {integrity: sha512-9uIQ10o0WGdpP6GDhXcdOJPJuDgFtIDtN/9+ArJQ2NAfAmiuhTQdzkaTGR33v43GYS2UrSA0eX2pPPHoFVvpxA==} - engines: {node: '>=6.9.0'} - '@babel/runtime@7.28.4': resolution: {integrity: sha512-Q/N6JNWvIvPnLDvjlE1OUBLPQHH6l3CltCEsHIujp45zQUSSh8K+gHnaEX45yAT1nyngnINhvWtzN+Nb9D8RAQ==} engines: {node: '>=6.9.0'} @@ -967,30 +969,29 @@ packages: resolution: {integrity: sha512-bkFqkLhh3pMBUQQkpVgWDWq/lqzc2678eUyDlTBhRqhCHFguYYGM0Efga7tYk4TogG/3x0EEl66/OQ+WGbWB/Q==} engines: {node: '>=6.9.0'} - '@bundled-es-modules/cookie@2.0.1': - resolution: {integrity: sha512-8o+5fRPLNbjbdGRRmJj3h6Hh1AQJf2dk3qQ/5ZFb+PXkRNiSoMGGUKlsgLfrxneb72axVJyIYji64E2+nNfYyw==} - - '@bundled-es-modules/statuses@1.0.1': - resolution: {integrity: sha512-yn7BklA5acgcBr+7w064fGV+SGIFySjCKpqjcWgBAIfrAkY+4GQTJJHQMeT3V/sgz23VTEVV8TtOmkvJAhFVfg==} - - '@chromatic-com/storybook@4.1.1': - resolution: {integrity: 
sha512-+Ib4cHtEjKl/Do+4LyU0U1FhLPbIU2Q/zgbOKHBCV+dTC4T3/vGzPqiGsgkdnZyTsK/zXg96LMPSPC4jjOiapg==} + '@chromatic-com/storybook@4.1.2': + resolution: {integrity: sha512-QAWGtHwib0qsP5CcO64aJCF75zpFgpKK3jNpxILzQiPK3sVo4EmnVGJVdwcZWpWrGdH8E4YkncGoitw4EXzKMg==} engines: {node: '>=20.0.0', yarn: '>=1.22.18'} peerDependencies: - storybook: ^0.0.0-0 || ^9.0.0 || ^9.1.0-0 || ^9.2.0-0 || ^10.0.0-0 + storybook: ^0.0.0-0 || ^9.0.0 || ^9.1.0-0 || ^9.2.0-0 || ^10.0.0-0 || ^10.1.0-0 || ^10.2.0-0 || ^10.3.0-0 - '@date-fns/tz@1.2.0': - resolution: {integrity: sha512-LBrd7MiJZ9McsOgxqWX7AaxrDjcFVjWH/tIKJd7pnR7McaslGYOP1QmmiBXdJH/H/yLCT+rcQ7FaPBUxRGUtrg==} + '@commander-js/extra-typings@14.0.0': + resolution: {integrity: sha512-hIn0ncNaJRLkZrxBIp5AsW/eXEHNKYQBh0aPdoUqNgD+Io3NIykQqpKFyKcuasZhicGaEZJX/JBSIkZ4e5x8Dg==} + peerDependencies: + commander: ~14.0.0 - '@emnapi/core@1.5.0': - resolution: {integrity: sha512-sbP8GzB1WDzacS8fgNPpHlp6C9VZe+SJP3F90W9rLemaQj2PzIuTEl1qDOYQf58YIpyjViI24y9aPWCjEzY2cg==} + '@date-fns/tz@1.4.1': + resolution: {integrity: sha512-P5LUNhtbj6YfI3iJjw5EL9eUAG6OitD0W3fWQcpQjDRc/QIsL0tRNuO1PcDvPccWL1fSTXXdE1ds+l95DV/OFA==} - '@emnapi/runtime@1.4.5': - resolution: {integrity: sha512-++LApOtY0pEEz1zrd9vy1/zXVaVJJ/EbAF3u0fXIzPJEDtnITsBGbbK0EkM72amhl/R5b+5xx0Y/QhcVOpuulg==} + '@emnapi/core@1.7.1': + resolution: {integrity: sha512-o1uhUASyo921r2XtHYOHy7gdkGLge8ghBEQHMWmyJFoXlpU58kIrhhN3w26lpQb6dspetweapMn2CSNwQ8I4wg==} '@emnapi/runtime@1.5.0': resolution: {integrity: sha512-97/BJ3iXHww3djw6hYIfErCZFee7qCtrneuLa20UXFCOTCfBM2cvQHjWJ2EG0s0MtdNwInarqCTz35i4wWXHsQ==} + '@emnapi/runtime@1.7.1': + resolution: {integrity: sha512-PVtJr5CmLwYAU9PZDMITZoR5iAOShYREoR45EyyLrbntV50mdePTgUn4AmOw90Ifcj+x2kRjdzr1HP3RrNiHGA==} + '@emnapi/wasi-threads@1.1.0': resolution: {integrity: sha512-WI0DdZ8xFSbgMjR1sFsKABJ/C5OnRrjT06JXbZKexJGrDuPTzZdDYfFlsgcCXCyf+suG5QU2e/y1Wo2V/OapLQ==} @@ -1003,168 +1004,321 @@ packages: '@emotion/unitless@0.8.1': resolution: {integrity: sha512-KOEGMu6dmJZtpadb476IsZBclKvILjopjUii3V+7MnXIQCYh8W3NgNcgwo21n9LXZX6EDIKvqfjYxXebDwxKmQ==} + '@epic-web/invariant@1.0.0': + resolution: {integrity: sha512-lrTPqgvfFQtR/eY/qkIzp98OGdNJu0m5ji3q/nJI8v3SXkRKEnWiOxMmbvcSoAIzv/cGiuvRy57k4suKQSAdwA==} + + '@esbuild/aix-ppc64@0.25.11': + resolution: {integrity: sha512-Xt1dOL13m8u0WE8iplx9Ibbm+hFAO0GsU2P34UNoDGvZYkY8ifSiy6Zuc1lYxfG7svWE2fzqCUmFp5HCn51gJg==} + engines: {node: '>=18'} + cpu: [ppc64] + os: [aix] + '@esbuild/aix-ppc64@0.25.9': resolution: {integrity: sha512-OaGtL73Jck6pBKjNIe24BnFE6agGl+6KxDtTfHhy1HmhthfKouEcOhqpSL64K4/0WCtbKFLOdzD/44cJ4k9opA==} engines: {node: '>=18'} cpu: [ppc64] os: [aix] + '@esbuild/android-arm64@0.25.11': + resolution: {integrity: sha512-9slpyFBc4FPPz48+f6jyiXOx/Y4v34TUeDDXJpZqAWQn/08lKGeD8aDp9TMn9jDz2CiEuHwfhRmGBvpnd/PWIQ==} + engines: {node: '>=18'} + cpu: [arm64] + os: [android] + '@esbuild/android-arm64@0.25.9': resolution: {integrity: sha512-IDrddSmpSv51ftWslJMvl3Q2ZT98fUSL2/rlUXuVqRXHCs5EUF1/f+jbjF5+NG9UffUDMCiTyh8iec7u8RlTLg==} engines: {node: '>=18'} cpu: [arm64] os: [android] + '@esbuild/android-arm@0.25.11': + resolution: {integrity: sha512-uoa7dU+Dt3HYsethkJ1k6Z9YdcHjTrSb5NUy66ZfZaSV8hEYGD5ZHbEMXnqLFlbBflLsl89Zke7CAdDJ4JI+Gg==} + engines: {node: '>=18'} + cpu: [arm] + os: [android] + '@esbuild/android-arm@0.25.9': resolution: {integrity: sha512-5WNI1DaMtxQ7t7B6xa572XMXpHAaI/9Hnhk8lcxF4zVN4xstUgTlvuGDorBguKEnZO70qwEcLpfifMLoxiPqHQ==} engines: {node: '>=18'} cpu: [arm] os: [android] + '@esbuild/android-x64@0.25.11': + resolution: {integrity: 
sha512-Sgiab4xBjPU1QoPEIqS3Xx+R2lezu0LKIEcYe6pftr56PqPygbB7+szVnzoShbx64MUupqoE0KyRlN7gezbl8g==} + engines: {node: '>=18'} + cpu: [x64] + os: [android] + '@esbuild/android-x64@0.25.9': resolution: {integrity: sha512-I853iMZ1hWZdNllhVZKm34f4wErd4lMyeV7BLzEExGEIZYsOzqDWDf+y082izYUE8gtJnYHdeDpN/6tUdwvfiw==} engines: {node: '>=18'} cpu: [x64] os: [android] + '@esbuild/darwin-arm64@0.25.11': + resolution: {integrity: sha512-VekY0PBCukppoQrycFxUqkCojnTQhdec0vevUL/EDOCnXd9LKWqD/bHwMPzigIJXPhC59Vd1WFIL57SKs2mg4w==} + engines: {node: '>=18'} + cpu: [arm64] + os: [darwin] + '@esbuild/darwin-arm64@0.25.9': resolution: {integrity: sha512-XIpIDMAjOELi/9PB30vEbVMs3GV1v2zkkPnuyRRURbhqjyzIINwj+nbQATh4H9GxUgH1kFsEyQMxwiLFKUS6Rg==} engines: {node: '>=18'} cpu: [arm64] os: [darwin] + '@esbuild/darwin-x64@0.25.11': + resolution: {integrity: sha512-+hfp3yfBalNEpTGp9loYgbknjR695HkqtY3d3/JjSRUyPg/xd6q+mQqIb5qdywnDxRZykIHs3axEqU6l1+oWEQ==} + engines: {node: '>=18'} + cpu: [x64] + os: [darwin] + '@esbuild/darwin-x64@0.25.9': resolution: {integrity: sha512-jhHfBzjYTA1IQu8VyrjCX4ApJDnH+ez+IYVEoJHeqJm9VhG9Dh2BYaJritkYK3vMaXrf7Ogr/0MQ8/MeIefsPQ==} engines: {node: '>=18'} cpu: [x64] os: [darwin] + '@esbuild/freebsd-arm64@0.25.11': + resolution: {integrity: sha512-CmKjrnayyTJF2eVuO//uSjl/K3KsMIeYeyN7FyDBjsR3lnSJHaXlVoAK8DZa7lXWChbuOk7NjAc7ygAwrnPBhA==} + engines: {node: '>=18'} + cpu: [arm64] + os: [freebsd] + '@esbuild/freebsd-arm64@0.25.9': resolution: {integrity: sha512-z93DmbnY6fX9+KdD4Ue/H6sYs+bhFQJNCPZsi4XWJoYblUqT06MQUdBCpcSfuiN72AbqeBFu5LVQTjfXDE2A6Q==} engines: {node: '>=18'} cpu: [arm64] os: [freebsd] + '@esbuild/freebsd-x64@0.25.11': + resolution: {integrity: sha512-Dyq+5oscTJvMaYPvW3x3FLpi2+gSZTCE/1ffdwuM6G1ARang/mb3jvjxs0mw6n3Lsw84ocfo9CrNMqc5lTfGOw==} + engines: {node: '>=18'} + cpu: [x64] + os: [freebsd] + '@esbuild/freebsd-x64@0.25.9': resolution: {integrity: sha512-mrKX6H/vOyo5v71YfXWJxLVxgy1kyt1MQaD8wZJgJfG4gq4DpQGpgTB74e5yBeQdyMTbgxp0YtNj7NuHN0PoZg==} engines: {node: '>=18'} cpu: [x64] os: [freebsd] + '@esbuild/linux-arm64@0.25.11': + resolution: {integrity: sha512-Qr8AzcplUhGvdyUF08A1kHU3Vr2O88xxP0Tm8GcdVOUm25XYcMPp2YqSVHbLuXzYQMf9Bh/iKx7YPqECs6ffLA==} + engines: {node: '>=18'} + cpu: [arm64] + os: [linux] + '@esbuild/linux-arm64@0.25.9': resolution: {integrity: sha512-BlB7bIcLT3G26urh5Dmse7fiLmLXnRlopw4s8DalgZ8ef79Jj4aUcYbk90g8iCa2467HX8SAIidbL7gsqXHdRw==} engines: {node: '>=18'} cpu: [arm64] os: [linux] + '@esbuild/linux-arm@0.25.11': + resolution: {integrity: sha512-TBMv6B4kCfrGJ8cUPo7vd6NECZH/8hPpBHHlYI3qzoYFvWu2AdTvZNuU/7hsbKWqu/COU7NIK12dHAAqBLLXgw==} + engines: {node: '>=18'} + cpu: [arm] + os: [linux] + '@esbuild/linux-arm@0.25.9': resolution: {integrity: sha512-HBU2Xv78SMgaydBmdor38lg8YDnFKSARg1Q6AT0/y2ezUAKiZvc211RDFHlEZRFNRVhcMamiToo7bDx3VEOYQw==} engines: {node: '>=18'} cpu: [arm] os: [linux] + '@esbuild/linux-ia32@0.25.11': + resolution: {integrity: sha512-TmnJg8BMGPehs5JKrCLqyWTVAvielc615jbkOirATQvWWB1NMXY77oLMzsUjRLa0+ngecEmDGqt5jiDC6bfvOw==} + engines: {node: '>=18'} + cpu: [ia32] + os: [linux] + '@esbuild/linux-ia32@0.25.9': resolution: {integrity: sha512-e7S3MOJPZGp2QW6AK6+Ly81rC7oOSerQ+P8L0ta4FhVi+/j/v2yZzx5CqqDaWjtPFfYz21Vi1S0auHrap3Ma3A==} engines: {node: '>=18'} cpu: [ia32] os: [linux] + '@esbuild/linux-loong64@0.25.11': + resolution: {integrity: sha512-DIGXL2+gvDaXlaq8xruNXUJdT5tF+SBbJQKbWy/0J7OhU8gOHOzKmGIlfTTl6nHaCOoipxQbuJi7O++ldrxgMw==} + engines: {node: '>=18'} + cpu: [loong64] + os: [linux] + '@esbuild/linux-loong64@0.25.9': resolution: {integrity: 
sha512-Sbe10Bnn0oUAB2AalYztvGcK+o6YFFA/9829PhOCUS9vkJElXGdphz0A3DbMdP8gmKkqPmPcMJmJOrI3VYB1JQ==} engines: {node: '>=18'} cpu: [loong64] os: [linux] + '@esbuild/linux-mips64el@0.25.11': + resolution: {integrity: sha512-Osx1nALUJu4pU43o9OyjSCXokFkFbyzjXb6VhGIJZQ5JZi8ylCQ9/LFagolPsHtgw6himDSyb5ETSfmp4rpiKQ==} + engines: {node: '>=18'} + cpu: [mips64el] + os: [linux] + '@esbuild/linux-mips64el@0.25.9': resolution: {integrity: sha512-YcM5br0mVyZw2jcQeLIkhWtKPeVfAerES5PvOzaDxVtIyZ2NUBZKNLjC5z3/fUlDgT6w89VsxP2qzNipOaaDyA==} engines: {node: '>=18'} cpu: [mips64el] os: [linux] + '@esbuild/linux-ppc64@0.25.11': + resolution: {integrity: sha512-nbLFgsQQEsBa8XSgSTSlrnBSrpoWh7ioFDUmwo158gIm5NNP+17IYmNWzaIzWmgCxq56vfr34xGkOcZ7jX6CPw==} + engines: {node: '>=18'} + cpu: [ppc64] + os: [linux] + '@esbuild/linux-ppc64@0.25.9': resolution: {integrity: sha512-++0HQvasdo20JytyDpFvQtNrEsAgNG2CY1CLMwGXfFTKGBGQT3bOeLSYE2l1fYdvML5KUuwn9Z8L1EWe2tzs1w==} engines: {node: '>=18'} cpu: [ppc64] os: [linux] + '@esbuild/linux-riscv64@0.25.11': + resolution: {integrity: sha512-HfyAmqZi9uBAbgKYP1yGuI7tSREXwIb438q0nqvlpxAOs3XnZ8RsisRfmVsgV486NdjD7Mw2UrFSw51lzUk1ww==} + engines: {node: '>=18'} + cpu: [riscv64] + os: [linux] + '@esbuild/linux-riscv64@0.25.9': resolution: {integrity: sha512-uNIBa279Y3fkjV+2cUjx36xkx7eSjb8IvnL01eXUKXez/CBHNRw5ekCGMPM0BcmqBxBcdgUWuUXmVWwm4CH9kg==} engines: {node: '>=18'} cpu: [riscv64] os: [linux] + '@esbuild/linux-s390x@0.25.11': + resolution: {integrity: sha512-HjLqVgSSYnVXRisyfmzsH6mXqyvj0SA7pG5g+9W7ESgwA70AXYNpfKBqh1KbTxmQVaYxpzA/SvlB9oclGPbApw==} + engines: {node: '>=18'} + cpu: [s390x] + os: [linux] + '@esbuild/linux-s390x@0.25.9': resolution: {integrity: sha512-Mfiphvp3MjC/lctb+7D287Xw1DGzqJPb/J2aHHcHxflUo+8tmN/6d4k6I2yFR7BVo5/g7x2Monq4+Yew0EHRIA==} engines: {node: '>=18'} cpu: [s390x] os: [linux] + '@esbuild/linux-x64@0.25.11': + resolution: {integrity: sha512-HSFAT4+WYjIhrHxKBwGmOOSpphjYkcswF449j6EjsjbinTZbp8PJtjsVK1XFJStdzXdy/jaddAep2FGY+wyFAQ==} + engines: {node: '>=18'} + cpu: [x64] + os: [linux] + '@esbuild/linux-x64@0.25.9': resolution: {integrity: sha512-iSwByxzRe48YVkmpbgoxVzn76BXjlYFXC7NvLYq+b+kDjyyk30J0JY47DIn8z1MO3K0oSl9fZoRmZPQI4Hklzg==} engines: {node: '>=18'} cpu: [x64] os: [linux] + '@esbuild/netbsd-arm64@0.25.11': + resolution: {integrity: sha512-hr9Oxj1Fa4r04dNpWr3P8QKVVsjQhqrMSUzZzf+LZcYjZNqhA3IAfPQdEh1FLVUJSiu6sgAwp3OmwBfbFgG2Xg==} + engines: {node: '>=18'} + cpu: [arm64] + os: [netbsd] + '@esbuild/netbsd-arm64@0.25.9': resolution: {integrity: sha512-9jNJl6FqaUG+COdQMjSCGW4QiMHH88xWbvZ+kRVblZsWrkXlABuGdFJ1E9L7HK+T0Yqd4akKNa/lO0+jDxQD4Q==} engines: {node: '>=18'} cpu: [arm64] os: [netbsd] + '@esbuild/netbsd-x64@0.25.11': + resolution: {integrity: sha512-u7tKA+qbzBydyj0vgpu+5h5AeudxOAGncb8N6C9Kh1N4n7wU1Xw1JDApsRjpShRpXRQlJLb9wY28ELpwdPcZ7A==} + engines: {node: '>=18'} + cpu: [x64] + os: [netbsd] + '@esbuild/netbsd-x64@0.25.9': resolution: {integrity: sha512-RLLdkflmqRG8KanPGOU7Rpg829ZHu8nFy5Pqdi9U01VYtG9Y0zOG6Vr2z4/S+/3zIyOxiK6cCeYNWOFR9QP87g==} engines: {node: '>=18'} cpu: [x64] os: [netbsd] + '@esbuild/openbsd-arm64@0.25.11': + resolution: {integrity: sha512-Qq6YHhayieor3DxFOoYM1q0q1uMFYb7cSpLD2qzDSvK1NAvqFi8Xgivv0cFC6J+hWVw2teCYltyy9/m/14ryHg==} + engines: {node: '>=18'} + cpu: [arm64] + os: [openbsd] + '@esbuild/openbsd-arm64@0.25.9': resolution: {integrity: sha512-YaFBlPGeDasft5IIM+CQAhJAqS3St3nJzDEgsgFixcfZeyGPCd6eJBWzke5piZuZ7CtL656eOSYKk4Ls2C0FRQ==} engines: {node: '>=18'} cpu: [arm64] os: [openbsd] + '@esbuild/openbsd-x64@0.25.11': + resolution: {integrity: 
sha512-CN+7c++kkbrckTOz5hrehxWN7uIhFFlmS/hqziSFVWpAzpWrQoAG4chH+nN3Be+Kzv/uuo7zhX716x3Sn2Jduw==} + engines: {node: '>=18'} + cpu: [x64] + os: [openbsd] + '@esbuild/openbsd-x64@0.25.9': resolution: {integrity: sha512-1MkgTCuvMGWuqVtAvkpkXFmtL8XhWy+j4jaSO2wxfJtilVCi0ZE37b8uOdMItIHz4I6z1bWWtEX4CJwcKYLcuA==} engines: {node: '>=18'} cpu: [x64] os: [openbsd] + '@esbuild/openharmony-arm64@0.25.11': + resolution: {integrity: sha512-rOREuNIQgaiR+9QuNkbkxubbp8MSO9rONmwP5nKncnWJ9v5jQ4JxFnLu4zDSRPf3x4u+2VN4pM4RdyIzDty/wQ==} + engines: {node: '>=18'} + cpu: [arm64] + os: [openharmony] + '@esbuild/openharmony-arm64@0.25.9': resolution: {integrity: sha512-4Xd0xNiMVXKh6Fa7HEJQbrpP3m3DDn43jKxMjxLLRjWnRsfxjORYJlXPO4JNcXtOyfajXorRKY9NkOpTHptErg==} engines: {node: '>=18'} cpu: [arm64] os: [openharmony] + '@esbuild/sunos-x64@0.25.11': + resolution: {integrity: sha512-nq2xdYaWxyg9DcIyXkZhcYulC6pQ2FuCgem3LI92IwMgIZ69KHeY8T4Y88pcwoLIjbed8n36CyKoYRDygNSGhA==} + engines: {node: '>=18'} + cpu: [x64] + os: [sunos] + '@esbuild/sunos-x64@0.25.9': resolution: {integrity: sha512-WjH4s6hzo00nNezhp3wFIAfmGZ8U7KtrJNlFMRKxiI9mxEK1scOMAaa9i4crUtu+tBr+0IN6JCuAcSBJZfnphw==} engines: {node: '>=18'} cpu: [x64] os: [sunos] + '@esbuild/win32-arm64@0.25.11': + resolution: {integrity: sha512-3XxECOWJq1qMZ3MN8srCJ/QfoLpL+VaxD/WfNRm1O3B4+AZ/BnLVgFbUV3eiRYDMXetciH16dwPbbHqwe1uU0Q==} + engines: {node: '>=18'} + cpu: [arm64] + os: [win32] + '@esbuild/win32-arm64@0.25.9': resolution: {integrity: sha512-mGFrVJHmZiRqmP8xFOc6b84/7xa5y5YvR1x8djzXpJBSv/UsNK6aqec+6JDjConTgvvQefdGhFDAs2DLAds6gQ==} engines: {node: '>=18'} cpu: [arm64] os: [win32] + '@esbuild/win32-ia32@0.25.11': + resolution: {integrity: sha512-3ukss6gb9XZ8TlRyJlgLn17ecsK4NSQTmdIXRASVsiS2sQ6zPPZklNJT5GR5tE/MUarymmy8kCEf5xPCNCqVOA==} + engines: {node: '>=18'} + cpu: [ia32] + os: [win32] + '@esbuild/win32-ia32@0.25.9': resolution: {integrity: sha512-b33gLVU2k11nVx1OhX3C8QQP6UHQK4ZtN56oFWvVXvz2VkDoe6fbG8TOgHFxEvqeqohmRnIHe5A1+HADk4OQww==} engines: {node: '>=18'} cpu: [ia32] os: [win32] + '@esbuild/win32-x64@0.25.11': + resolution: {integrity: sha512-D7Hpz6A2L4hzsRpPaCYkQnGOotdUpDzSGRIv9I+1ITdHROSFUWW95ZPZWQmGka1Fg7W3zFJowyn9WGwMJ0+KPA==} + engines: {node: '>=18'} + cpu: [x64] + os: [win32] + '@esbuild/win32-x64@0.25.9': resolution: {integrity: sha512-PPOl1mi6lpLNQxnGoyAfschAodRFYXJ+9fs6WHXz7CSWKbOqiMZsubC+BQsVKuul+3vKLuwTHsS2c2y9EoKwxQ==} engines: {node: '>=18'} cpu: [x64] os: [win32] - '@eslint-community/eslint-utils@4.7.0': - resolution: {integrity: sha512-dyybb3AcajC7uha6CvhdVRJqaKyn7w2YKqKyAN37NKYgZT36w+iRb0Dymmc5qEJ549c/S31cMMSFd75bteCpCw==} - engines: {node: ^12.22.0 || ^14.17.0 || >=16.0.0} - peerDependencies: - eslint: ^6.0.0 || ^7.0.0 || >=8.0.0 - '@eslint-community/eslint-utils@4.9.0': resolution: {integrity: sha512-ayVFHdtZ+hsq1t2Dy24wCmGXGe4q9Gu3smhLYALJrr473ZH27MsnSL+LKUlimp4BWJqMDMLmPpx/Q9R3OAlL4g==} engines: {node: ^12.22.0 || ^14.17.0 || >=16.0.0} @@ -1175,6 +1329,10 @@ packages: resolution: {integrity: sha512-CCZCDJuduB9OUkFkY2IgppNZMi2lBQgD2qzwXkEia16cge2pijY/aXi96CJMquDMn3nJdlPV1A5KrJEXwfLNzQ==} engines: {node: ^12.0.0 || ^14.0.0 || >=16.0.0} + '@eslint-community/regexpp@4.12.2': + resolution: {integrity: sha512-EriSTlt5OC9/7SXkRSCAhfSxxoSUgBm33OH+IkwbdpgoqsSsUg7y3uh+IICI/Qg4BBWr3U2i39RpmycbxMq4ew==} + engines: {node: ^12.0.0 || ^14.0.0 || >=16.0.0} + '@eslint/eslintrc@2.1.4': resolution: {integrity: sha512-269Z39MS6wVJtsoUl10L60WdkhJVdPG24Q4eZTH3nnF6lpvSShEK3wQjDX9JRWAUPvPh7COouPpU9IrqaZFvtQ==} engines: {node: ^12.22.0 || ^14.17.0 || >=16.0.0} @@ -1205,11 +1363,11 
@@ packages: '@floating-ui/utils@0.2.10': resolution: {integrity: sha512-aGTxbpbg8/b5JfU1HXSrbH3wXZuLPJcNEcZQFMxLs3oSzgtVu6nFPkbbGGUvBcUjKV2YyB9Wxxabo+HEH9tcRQ==} - '@gerrit0/mini-shiki@3.9.2': - resolution: {integrity: sha512-Tvsj+AOO4Z8xLRJK900WkyfxHsZQu+Zm1//oT1w443PO6RiYMoq/4NGOhaNuZoUMYsjKIAPVQ6eOFMddj6yphQ==} + '@gerrit0/mini-shiki@3.14.0': + resolution: {integrity: sha512-c5X8fwPLOtUS8TVdqhynz9iV0GlOtFUT1ppXYzUUlEXe4kbZ/mvMT8wXoT8kCwUka+zsiloq7sD3pZ3+QVTuNQ==} - '@hookform/resolvers@5.2.1': - resolution: {integrity: sha512-u0+6X58gkjMcxur1wRWokA7XsiiBJ6aK17aPZxhkoYiK5J+HcTx0Vhu9ovXe6H+dVpO6cjrn2FkJTryXEMlryQ==} + '@hookform/resolvers@5.2.2': + resolution: {integrity: sha512-A/IxlMLShx3KjV/HeTcTfaMxdwy690+L/ZADoeaTltLx+CVuzkeVIPuybK3jrRfw7YZnmdKsVVHAlEPIAEUNlA==} peerDependencies: react-hook-form: ^7.55.0 @@ -1230,8 +1388,8 @@ packages: resolution: {integrity: sha512-AoFbSarOqFBYH+1TZ9Ahkm2IWYSi5v0pBk88fpV+5b3qGJukypX8PwvCWADjuyIccKg48/F73a6hTTkBzDQ2UA==} engines: {node: '>=16.0.0'} - '@ibm-cloud/openapi-ruleset@1.31.2': - resolution: {integrity: sha512-g3YYNTiX6zW7quFvDD9szu+54oHj6+4vz8g3/ikOacVsVEX072CvhjX9zRZf1WH4zDXv8KbprsxV+osZQbXPlg==} + '@ibm-cloud/openapi-ruleset@1.33.3': + resolution: {integrity: sha512-lOxglXIzUZwsw5WsbgZraxxzAYMdXYyiMNOioxYJYTd55ZuN4XEERoPdV5v1oPTdKedHEUSQu5siiSHToENFdA==} engines: {node: '>=16.0.0'} '@img/sharp-darwin-arm64@0.34.3': @@ -1356,8 +1514,12 @@ packages: cpu: [x64] os: [win32] - '@inquirer/confirm@5.1.16': - resolution: {integrity: sha512-j1a5VstaK5KQy8Mu8cHmuQvN1Zc62TbLhjJxwHvKPPKEoowSF6h/0UdOpA9DNdWZ+9Inq73+puRq1df6OJ8Sag==} + '@inquirer/ansi@1.0.1': + resolution: {integrity: sha512-yqq0aJW/5XPhi5xOAL1xRCpe1eh8UFVgYFpFsjEqmIR8rKLyP+HINvFXwUaxYICflJrVlxnp7lLN6As735kVpw==} + engines: {node: '>=18'} + + '@inquirer/confirm@5.1.19': + resolution: {integrity: sha512-wQNz9cfcxrtEnUyG5PndC8g3gZ7lGDBzmWiXZkX8ot3vfZ+/BLjR8EvyGX4YzQLeVqtAlY/YScZpW7CW8qMoDQ==} engines: {node: '>=18'} peerDependencies: '@types/node': '>=18' @@ -1365,8 +1527,8 @@ packages: '@types/node': optional: true - '@inquirer/core@10.2.0': - resolution: {integrity: sha512-NyDSjPqhSvpZEMZrLCYUquWNl+XC/moEcVFqS55IEYIYsY0a1cUCevSqk7ctOlnm/RaSBU5psFryNlxcmGrjaA==} + '@inquirer/core@10.3.0': + resolution: {integrity: sha512-Uv2aPPPSK5jeCplQmQ9xadnFx2Zhj9b5Dj7bU6ZeCdDNNY11nhYy4btcSdtDguHqCT2h5oNeQTcUNSGGLA7NTA==} engines: {node: '>=18'} peerDependencies: '@types/node': '>=18' @@ -1374,12 +1536,12 @@ packages: '@types/node': optional: true - '@inquirer/figures@1.0.13': - resolution: {integrity: sha512-lGPVU3yO9ZNqA7vTYz26jny41lE7yoQansmqdMLBEfqaGsmdg7V3W9mK9Pvb5IL4EVZ9GnSDGMO/cJXud5dMaw==} + '@inquirer/figures@1.0.14': + resolution: {integrity: sha512-DbFgdt+9/OZYFM+19dbpXOSeAstPy884FPy1KjDu4anWwymZeOYhMY1mdFri172htv6mvc/uvIAAi7b7tvjJBQ==} engines: {node: '>=18'} - '@inquirer/type@3.0.8': - resolution: {integrity: sha512-lg9Whz8onIHRthWaN1Q9EGLa/0LFJjyM8mEUbL1eTi6yMGvBf8gvyDLtxSXztQsxMvhxxNpJYrwa1YHdq+w4Jw==} + '@inquirer/type@3.0.9': + resolution: {integrity: sha512-QPaNt/nmE2bLGQa9b7wwyRJoLZ7pN6rcyXvzU0YCmivmJyq1BVo94G98tStRWkoD1RgDX5C+dPlhhHzNdu/W/w==} engines: {node: '>=18'} peerDependencies: '@types/node': '>=18' @@ -1410,9 +1572,6 @@ packages: '@jridgewell/trace-mapping@0.3.30': resolution: {integrity: sha512-GQ7Nw5G2lTu/BtHTKfXhKHok2WGetd4XYcVKGx00SjAk8GMwgJM3zr6zORiPGuOE+/vkc90KtTosSSvaCjKb2Q==} - '@jsdevtools/ono@7.1.3': - resolution: {integrity: sha512-4JQNk+3mVzK3xh2rqd6RB4J46qUR19azEHBneZyTZM+c456qOrbbM/5xcR8huNCCcbVt7+UmizG6GuUvPvKUYg==} - '@jsep-plugin/assignment@1.3.0': 
resolution: {integrity: sha512-VVgV+CXrhbMI3aSusQyclHkenWSAm95WaiKrMxRFam3JSUiIaQjoMIw2sEs/OX4XifnqeQUN4DYbJjlA8EfktQ==} engines: {node: '>= 10.16.0'} @@ -1431,20 +1590,14 @@ packages: peerDependencies: jsep: ^0.4.0||^1.0.0 - '@marsidev/react-turnstile@1.3.1': - resolution: {integrity: sha512-h2THG/75k4Y049hgjSGPIcajxXnh+IZAiXVbryQyVmagkboN7pJtBgR16g8akjwUBSfRrg6jw6KvPDjscQflog==} - peerDependencies: - react: ^17.0.2 || ^18.0.0 || ^19.0 - react-dom: ^17.0.2 || ^18.0.0 || ^19.0 - '@mdx-js/react@3.1.1': resolution: {integrity: sha512-f++rKLQgUVYDAtECQ6fn/is15GkEH9+nZPM3MS0RcxVqoTfawHvDlSCH7JbMhAM6uJ32v3eXLvLmLvjGu7PTQw==} peerDependencies: '@types/react': '>=16' react: '>=16' - '@mswjs/interceptors@0.39.6': - resolution: {integrity: sha512-bndDP83naYYkfayr/qhBHMhk0YGwS1iv6vaEGcr0SQbO0IZtbOPqjKjds/WcG+bJA+1T5vCx6kprKOzn5Bg+Vw==} + '@mswjs/interceptors@0.40.0': + resolution: {integrity: sha512-EFd6cVbHsgLa6wa4RljGj6Wk75qoHxUSyc5asLyyPSyuhIcdS2Q3Phw6ImS1q+CkALthJRShiYfKANcQMuMqsQ==} engines: {node: '>=18'} '@napi-rs/wasm-runtime@0.2.12': @@ -1453,56 +1606,56 @@ packages: '@neoconfetti/react@1.0.0': resolution: {integrity: sha512-klcSooChXXOzIm+SE5IISIAn3bYzYfPjbX7D7HoqZL84oAfgREeSg5vSIaSFH+DaGzzvImTyWe1OyrJ67vik4A==} - '@next/env@15.4.7': - resolution: {integrity: sha512-PrBIpO8oljZGTOe9HH0miix1w5MUiGJ/q83Jge03mHEE0E3pyqzAy2+l5G6aJDbXoobmxPJTVhbCuwlLtjSHwg==} + '@next/env@15.4.10': + resolution: {integrity: sha512-knhmoJ0Vv7VRf6pZEPSnciUG1S4bIhWx+qTYBW/AjxEtlzsiNORPk8sFDCEvqLfmKuey56UB9FL1UdHEV3uBrg==} - '@next/eslint-plugin-next@15.5.2': - resolution: {integrity: sha512-lkLrRVxcftuOsJNhWatf1P2hNVfh98k/omQHrCEPPriUypR6RcS13IvLdIrEvkm9AH2Nu2YpR5vLqBuy6twH3Q==} + '@next/eslint-plugin-next@15.5.7': + resolution: {integrity: sha512-DtRU2N7BkGr8r+pExfuWHwMEPX5SD57FeA6pxdgCHODo+b/UgIgjE+rgWKtJAbEbGhVZ2jtHn4g3wNhWFoNBQQ==} - '@next/swc-darwin-arm64@15.4.7': - resolution: {integrity: sha512-2Dkb+VUTp9kHHkSqtws4fDl2Oxms29HcZBwFIda1X7Ztudzy7M6XF9HDS2dq85TmdN47VpuhjE+i6wgnIboVzQ==} + '@next/swc-darwin-arm64@15.4.8': + resolution: {integrity: sha512-Pf6zXp7yyQEn7sqMxur6+kYcywx5up1J849psyET7/8pG2gQTVMjU3NzgIt8SeEP5to3If/SaWmaA6H6ysBr1A==} engines: {node: '>= 10'} cpu: [arm64] os: [darwin] - '@next/swc-darwin-x64@15.4.7': - resolution: {integrity: sha512-qaMnEozKdWezlmh1OGDVFueFv2z9lWTcLvt7e39QA3YOvZHNpN2rLs/IQLwZaUiw2jSvxW07LxMCWtOqsWFNQg==} + '@next/swc-darwin-x64@15.4.8': + resolution: {integrity: sha512-xla6AOfz68a6kq3gRQccWEvFC/VRGJmA/QuSLENSO7CZX5WIEkSz7r1FdXUjtGCQ1c2M+ndUAH7opdfLK1PQbw==} engines: {node: '>= 10'} cpu: [x64] os: [darwin] - '@next/swc-linux-arm64-gnu@15.4.7': - resolution: {integrity: sha512-ny7lODPE7a15Qms8LZiN9wjNWIeI+iAZOFDOnv2pcHStncUr7cr9lD5XF81mdhrBXLUP9yT9RzlmSWKIazWoDw==} + '@next/swc-linux-arm64-gnu@15.4.8': + resolution: {integrity: sha512-y3fmp+1Px/SJD+5ntve5QLZnGLycsxsVPkTzAc3zUiXYSOlTPqT8ynfmt6tt4fSo1tAhDPmryXpYKEAcoAPDJw==} engines: {node: '>= 10'} cpu: [arm64] os: [linux] - '@next/swc-linux-arm64-musl@15.4.7': - resolution: {integrity: sha512-4SaCjlFR/2hGJqZLLWycccy1t+wBrE/vyJWnYaZJhUVHccpGLG5q0C+Xkw4iRzUIkE+/dr90MJRUym3s1+vO8A==} + '@next/swc-linux-arm64-musl@15.4.8': + resolution: {integrity: sha512-DX/L8VHzrr1CfwaVjBQr3GWCqNNFgyWJbeQ10Lx/phzbQo3JNAxUok1DZ8JHRGcL6PgMRgj6HylnLNndxn4Z6A==} engines: {node: '>= 10'} cpu: [arm64] os: [linux] - '@next/swc-linux-x64-gnu@15.4.7': - resolution: {integrity: sha512-2uNXjxvONyRidg00VwvlTYDwC9EgCGNzPAPYbttIATZRxmOZ3hllk/YYESzHZb65eyZfBR5g9xgCZjRAl9YYGg==} + '@next/swc-linux-x64-gnu@15.4.8': + resolution: {integrity: 
sha512-9fLAAXKAL3xEIFdKdzG5rUSvSiZTLLTCc6JKq1z04DR4zY7DbAPcRvNm3K1inVhTiQCs19ZRAgUerHiVKMZZIA==} engines: {node: '>= 10'} cpu: [x64] os: [linux] - '@next/swc-linux-x64-musl@15.4.7': - resolution: {integrity: sha512-ceNbPjsFgLscYNGKSu4I6LYaadq2B8tcK116nVuInpHHdAWLWSwVK6CHNvCi0wVS9+TTArIFKJGsEyVD1H+4Kg==} + '@next/swc-linux-x64-musl@15.4.8': + resolution: {integrity: sha512-s45V7nfb5g7dbS7JK6XZDcapicVrMMvX2uYgOHP16QuKH/JA285oy6HcxlKqwUNaFY/UC6EvQ8QZUOo19cBKSA==} engines: {node: '>= 10'} cpu: [x64] os: [linux] - '@next/swc-win32-arm64-msvc@15.4.7': - resolution: {integrity: sha512-pZyxmY1iHlZJ04LUL7Css8bNvsYAMYOY9JRwFA3HZgpaNKsJSowD09Vg2R9734GxAcLJc2KDQHSCR91uD6/AAw==} + '@next/swc-win32-arm64-msvc@15.4.8': + resolution: {integrity: sha512-KjgeQyOAq7t/HzAJcWPGA8X+4WY03uSCZ2Ekk98S9OgCFsb6lfBE3dbUzUuEQAN2THbwYgFfxX2yFTCMm8Kehw==} engines: {node: '>= 10'} cpu: [arm64] os: [win32] - '@next/swc-win32-x64-msvc@15.4.7': - resolution: {integrity: sha512-HjuwPJ7BeRzgl3KrjKqD2iDng0eQIpIReyhpF5r4yeAHFwWRuAhfW92rWv/r3qeQHEwHsLRzFDvMqRjyM5DI6A==} + '@next/swc-win32-x64-msvc@15.4.8': + resolution: {integrity: sha512-Exsmf/+42fWVnLMaZHzshukTBxZrSwuuLKFvqhGHJ+mC1AokqieLY/XzAl3jc/CqhXLqLY3RRjkKJ9YnLPcRWg==} engines: {node: '>= 10'} cpu: [x64] os: [win32] @@ -1538,186 +1691,176 @@ packages: '@open-draft/until@2.1.0': resolution: {integrity: sha512-U69T3ItWHvLwGg5eJ0n3I62nWuE6ilHlmz7zM0npLBRvPRd7e6NYmg54vvRtP5mZG7kZqZCFVdsTWo7BPtBujg==} - '@opentelemetry/api-logs@0.204.0': - resolution: {integrity: sha512-DqxY8yoAaiBPivoJD4UtgrMS8gEmzZ5lnaxzPojzLVHBGqPxgWm4zcuvcUHZiqQ6kRX2Klel2r9y8cA2HAtqpw==} + '@opentelemetry/api-logs@0.208.0': + resolution: {integrity: sha512-CjruKY9V6NMssL/T1kAFgzosF1v9o6oeN+aX5JB/C/xPNtmgIJqcXHG7fA82Ou1zCpWGl4lROQUKwUNE1pMCyg==} engines: {node: '>=8.0.0'} - '@opentelemetry/api-logs@0.57.2': - resolution: {integrity: sha512-uIX52NnTM0iBh84MShlpouI7UKqkZ7MrUszTmaypHBu4r7NofznSnQRfJ+uUeDtQDj6w8eFGg5KBLDAwAPz1+A==} - engines: {node: '>=14'} - '@opentelemetry/api@1.9.0': resolution: {integrity: sha512-3giAOQvZiH5F9bMlMiv8+GSPMeqg0dbaeo58/0SlA9sxSqZhnUtxzX9/2FzyhS9sWQf5S0GJE0AKBrFqjpeYcg==} engines: {node: '>=8.0.0'} - '@opentelemetry/context-async-hooks@2.1.0': - resolution: {integrity: sha512-zOyetmZppnwTyPrt4S7jMfXiSX9yyfF0hxlA8B5oo2TtKl+/RGCy7fi4DrBfIf3lCPrkKsRBWZZD7RFojK7FDg==} + '@opentelemetry/context-async-hooks@2.2.0': + resolution: {integrity: sha512-qRkLWiUEZNAmYapZ7KGS5C4OmBLcP/H2foXeOEaowYCR0wi89fHejrfYfbuLVCMLp/dWZXKvQusdbUEZjERfwQ==} engines: {node: ^18.19.0 || >=20.6.0} peerDependencies: '@opentelemetry/api': '>=1.0.0 <1.10.0' - '@opentelemetry/core@2.1.0': - resolution: {integrity: sha512-RMEtHsxJs/GiHHxYT58IY57UXAQTuUnZVco6ymDEqTNlJKTimM4qPUPVe8InNFyBjhHBEAx4k3Q8LtNayBsbUQ==} + '@opentelemetry/core@2.2.0': + resolution: {integrity: sha512-FuabnnUm8LflnieVxs6eP7Z383hgQU4W1e3KJS6aOG3RxWxcHyBxH8fDMHNgu/gFx/M2jvTOW/4/PHhLz6bjWw==} engines: {node: ^18.19.0 || >=20.6.0} peerDependencies: '@opentelemetry/api': '>=1.0.0 <1.10.0' - '@opentelemetry/instrumentation-amqplib@0.51.0': - resolution: {integrity: sha512-XGmjYwjVRktD4agFnWBWQXo9SiYHKBxR6Ag3MLXwtLE4R99N3a08kGKM5SC1qOFKIELcQDGFEFT9ydXMH00Luw==} + '@opentelemetry/instrumentation-amqplib@0.55.0': + resolution: {integrity: sha512-5ULoU8p+tWcQw5PDYZn8rySptGSLZHNX/7srqo2TioPnAAcvTy6sQFQXsNPrAnyRRtYGMetXVyZUy5OaX1+IfA==} engines: {node: ^18.19.0 || >=20.6.0} peerDependencies: '@opentelemetry/api': ^1.3.0 - '@opentelemetry/instrumentation-connect@0.48.0': - resolution: {integrity: 
sha512-OMjc3SFL4pC16PeK+tDhwP7MRvDPalYCGSvGqUhX5rASkI2H0RuxZHOWElYeXkV0WP+70Gw6JHWac/2Zqwmhdw==} + '@opentelemetry/instrumentation-connect@0.52.0': + resolution: {integrity: sha512-GXPxfNB5szMbV3I9b7kNWSmQBoBzw7MT0ui6iU/p+NIzVx3a06Ri2cdQO7tG9EKb4aKSLmfX9Cw5cKxXqX6Ohg==} engines: {node: ^18.19.0 || >=20.6.0} peerDependencies: '@opentelemetry/api': ^1.3.0 - '@opentelemetry/instrumentation-dataloader@0.22.0': - resolution: {integrity: sha512-bXnTcwtngQsI1CvodFkTemrrRSQjAjZxqHVc+CJZTDnidT0T6wt3jkKhnsjU/Kkkc0lacr6VdRpCu2CUWa0OKw==} + '@opentelemetry/instrumentation-dataloader@0.26.0': + resolution: {integrity: sha512-P2BgnFfTOarZ5OKPmYfbXfDFjQ4P9WkQ1Jji7yH5/WwB6Wm/knynAoA1rxbjWcDlYupFkyT0M1j6XLzDzy0aCA==} engines: {node: ^18.19.0 || >=20.6.0} peerDependencies: '@opentelemetry/api': ^1.3.0 - '@opentelemetry/instrumentation-express@0.53.0': - resolution: {integrity: sha512-r/PBafQmFYRjuxLYEHJ3ze1iBnP2GDA1nXOSS6E02KnYNZAVjj6WcDA1MSthtdAUUK0XnotHvvWM8/qz7DMO5A==} + '@opentelemetry/instrumentation-express@0.57.0': + resolution: {integrity: sha512-HAdx/o58+8tSR5iW+ru4PHnEejyKrAy9fYFhlEI81o10nYxrGahnMAHWiSjhDC7UQSY3I4gjcPgSKQz4rm/asg==} engines: {node: ^18.19.0 || >=20.6.0} peerDependencies: '@opentelemetry/api': ^1.3.0 - '@opentelemetry/instrumentation-fs@0.24.0': - resolution: {integrity: sha512-HjIxJ6CBRD770KNVaTdMXIv29Sjz4C1kPCCK5x1Ujpc6SNnLGPqUVyJYZ3LUhhnHAqdbrl83ogVWjCgeT4Q0yw==} + '@opentelemetry/instrumentation-fs@0.28.0': + resolution: {integrity: sha512-FFvg8fq53RRXVBRHZViP+EMxMR03tqzEGpuq55lHNbVPyFklSVfQBN50syPhK5UYYwaStx0eyCtHtbRreusc5g==} engines: {node: ^18.19.0 || >=20.6.0} peerDependencies: '@opentelemetry/api': ^1.3.0 - '@opentelemetry/instrumentation-generic-pool@0.48.0': - resolution: {integrity: sha512-TLv/On8pufynNR+pUbpkyvuESVASZZKMlqCm4bBImTpXKTpqXaJJ3o/MUDeMlM91rpen+PEv2SeyOKcHCSlgag==} + '@opentelemetry/instrumentation-generic-pool@0.52.0': + resolution: {integrity: sha512-ISkNcv5CM2IwvsMVL31Tl61/p2Zm2I2NAsYq5SSBgOsOndT0TjnptjufYVScCnD5ZLD1tpl4T3GEYULLYOdIdQ==} engines: {node: ^18.19.0 || >=20.6.0} peerDependencies: '@opentelemetry/api': ^1.3.0 - '@opentelemetry/instrumentation-graphql@0.52.0': - resolution: {integrity: sha512-3fEJ8jOOMwopvldY16KuzHbRhPk8wSsOTSF0v2psmOCGewh6ad+ZbkTx/xyUK9rUdUMWAxRVU0tFpj4Wx1vkPA==} + '@opentelemetry/instrumentation-graphql@0.56.0': + resolution: {integrity: sha512-IPvNk8AFoVzTAM0Z399t34VDmGDgwT6rIqCUug8P9oAGerl2/PEIYMPOl/rerPGu+q8gSWdmbFSjgg7PDVRd3Q==} engines: {node: ^18.19.0 || >=20.6.0} peerDependencies: '@opentelemetry/api': ^1.3.0 - '@opentelemetry/instrumentation-hapi@0.51.0': - resolution: {integrity: sha512-qyf27DaFNL1Qhbo/da+04MSCw982B02FhuOS5/UF+PMhM61CcOiu7fPuXj8TvbqyReQuJFljXE6UirlvoT/62g==} + '@opentelemetry/instrumentation-hapi@0.55.0': + resolution: {integrity: sha512-prqAkRf9e4eEpy4G3UcR32prKE8NLNlA90TdEU1UsghOTg0jUvs40Jz8LQWFEs5NbLbXHYGzB4CYVkCI8eWEVQ==} engines: {node: ^18.19.0 || >=20.6.0} peerDependencies: '@opentelemetry/api': ^1.3.0 - '@opentelemetry/instrumentation-http@0.204.0': - resolution: {integrity: sha512-1afJYyGRA4OmHTv0FfNTrTAzoEjPQUYgd+8ih/lX0LlZBnGio/O80vxA0lN3knsJPS7FiDrsDrWq25K7oAzbkw==} + '@opentelemetry/instrumentation-http@0.208.0': + resolution: {integrity: sha512-rhmK46DRWEbQQB77RxmVXGyjs6783crXCnFjYQj+4tDH/Kpv9Rbg3h2kaNyp5Vz2emF1f9HOQQvZoHzwMWOFZQ==} engines: {node: ^18.19.0 || >=20.6.0} peerDependencies: '@opentelemetry/api': ^1.3.0 - '@opentelemetry/instrumentation-ioredis@0.52.0': - resolution: {integrity: 
sha512-rUvlyZwI90HRQPYicxpDGhT8setMrlHKokCtBtZgYxQWRF5RBbG4q0pGtbZvd7kyseuHbFpA3I/5z7M8b/5ywg==} + '@opentelemetry/instrumentation-ioredis@0.56.0': + resolution: {integrity: sha512-XSWeqsd3rKSsT3WBz/JKJDcZD4QYElZEa0xVdX8f9dh4h4QgXhKRLorVsVkK3uXFbC2sZKAS2Ds+YolGwD83Dg==} engines: {node: ^18.19.0 || >=20.6.0} peerDependencies: '@opentelemetry/api': ^1.3.0 - '@opentelemetry/instrumentation-kafkajs@0.14.0': - resolution: {integrity: sha512-kbB5yXS47dTIdO/lfbbXlzhvHFturbux4EpP0+6H78Lk0Bn4QXiZQW7rmZY1xBCY16mNcCb8Yt0mhz85hTnSVA==} + '@opentelemetry/instrumentation-kafkajs@0.18.0': + resolution: {integrity: sha512-KCL/1HnZN5zkUMgPyOxfGjLjbXjpd4odDToy+7c+UsthIzVLFf99LnfIBE8YSSrYE4+uS7OwJMhvhg3tWjqMBg==} engines: {node: ^18.19.0 || >=20.6.0} peerDependencies: '@opentelemetry/api': ^1.3.0 - '@opentelemetry/instrumentation-knex@0.49.0': - resolution: {integrity: sha512-NKsRRT27fbIYL4Ix+BjjP8h4YveyKc+2gD6DMZbr5R5rUeDqfC8+DTfIt3c3ex3BIc5Vvek4rqHnN7q34ZetLQ==} + '@opentelemetry/instrumentation-knex@0.53.0': + resolution: {integrity: sha512-xngn5cH2mVXFmiT1XfQ1aHqq1m4xb5wvU6j9lSgLlihJ1bXzsO543cpDwjrZm2nMrlpddBf55w8+bfS4qDh60g==} engines: {node: ^18.19.0 || >=20.6.0} peerDependencies: '@opentelemetry/api': ^1.3.0 - '@opentelemetry/instrumentation-koa@0.52.0': - resolution: {integrity: sha512-JJSBYLDx/mNSy8Ibi/uQixu2rH0bZODJa8/cz04hEhRaiZQoeJ5UrOhO/mS87IdgVsHrnBOsZ6vDu09znupyuA==} + '@opentelemetry/instrumentation-koa@0.57.0': + resolution: {integrity: sha512-3JS8PU/D5E3q295mwloU2v7c7/m+DyCqdu62BIzWt+3u9utjxC9QS7v6WmUNuoDN3RM+Q+D1Gpj13ERo+m7CGg==} + engines: {node: ^18.19.0 || >=20.6.0} + peerDependencies: + '@opentelemetry/api': ^1.9.0 + + '@opentelemetry/instrumentation-lru-memoizer@0.53.0': + resolution: {integrity: sha512-LDwWz5cPkWWr0HBIuZUjslyvijljTwmwiItpMTHujaULZCxcYE9eU44Qf/pbVC8TulT0IhZi+RoGvHKXvNhysw==} engines: {node: ^18.19.0 || >=20.6.0} peerDependencies: '@opentelemetry/api': ^1.3.0 - '@opentelemetry/instrumentation-lru-memoizer@0.49.0': - resolution: {integrity: sha512-ctXu+O/1HSadAxtjoEg2w307Z5iPyLOMM8IRNwjaKrIpNAthYGSOanChbk1kqY6zU5CrpkPHGdAT6jk8dXiMqw==} + '@opentelemetry/instrumentation-mongodb@0.61.0': + resolution: {integrity: sha512-OV3i2DSoY5M/pmLk+68xr5RvkHU8DRB3DKMzYJdwDdcxeLs62tLbkmRyqJZsYf3Ht7j11rq35pHOWLuLzXL7pQ==} engines: {node: ^18.19.0 || >=20.6.0} peerDependencies: '@opentelemetry/api': ^1.3.0 - '@opentelemetry/instrumentation-mongodb@0.57.0': - resolution: {integrity: sha512-KD6Rg0KSHWDkik+qjIOWoksi1xqSpix8TSPfquIK1DTmd9OTFb5PHmMkzJe16TAPVEuElUW8gvgP59cacFcrMQ==} + '@opentelemetry/instrumentation-mongoose@0.55.0': + resolution: {integrity: sha512-5afj0HfF6aM6Nlqgu6/PPHFk8QBfIe3+zF9FGpX76jWPS0/dujoEYn82/XcLSaW5LPUDW8sni+YeK0vTBNri+w==} engines: {node: ^18.19.0 || >=20.6.0} peerDependencies: '@opentelemetry/api': ^1.3.0 - '@opentelemetry/instrumentation-mongoose@0.51.0': - resolution: {integrity: sha512-gwWaAlhhV2By7XcbyU3DOLMvzsgeaymwP/jktDC+/uPkCmgB61zurwqOQdeiRq9KAf22Y2dtE5ZLXxytJRbEVA==} + '@opentelemetry/instrumentation-mysql2@0.55.0': + resolution: {integrity: sha512-0cs8whQG55aIi20gnK8B7cco6OK6N+enNhW0p5284MvqJ5EPi+I1YlWsWXgzv/V2HFirEejkvKiI4Iw21OqDWg==} engines: {node: ^18.19.0 || >=20.6.0} peerDependencies: '@opentelemetry/api': ^1.3.0 - '@opentelemetry/instrumentation-mysql2@0.51.0': - resolution: {integrity: sha512-zT2Wg22Xn43RyfU3NOUmnFtb5zlDI0fKcijCj9AcK9zuLZ4ModgtLXOyBJSSfO+hsOCZSC1v/Fxwj+nZJFdzLQ==} + '@opentelemetry/instrumentation-mysql@0.54.0': + resolution: {integrity: 
sha512-bqC1YhnwAeWmRzy1/Xf9cDqxNG2d/JDkaxnqF5N6iJKN1eVWI+vg7NfDkf52/Nggp3tl1jcC++ptC61BD6738A==} engines: {node: ^18.19.0 || >=20.6.0} peerDependencies: '@opentelemetry/api': ^1.3.0 - '@opentelemetry/instrumentation-mysql@0.50.0': - resolution: {integrity: sha512-duKAvMRI3vq6u9JwzIipY9zHfikN20bX05sL7GjDeLKr2qV0LQ4ADtKST7KStdGcQ+MTN5wghWbbVdLgNcB3rA==} + '@opentelemetry/instrumentation-pg@0.61.0': + resolution: {integrity: sha512-UeV7KeTnRSM7ECHa3YscoklhUtTQPs6V6qYpG283AB7xpnPGCUCUfECFT9jFg6/iZOQTt3FHkB1wGTJCNZEvPw==} engines: {node: ^18.19.0 || >=20.6.0} peerDependencies: '@opentelemetry/api': ^1.3.0 - '@opentelemetry/instrumentation-pg@0.57.0': - resolution: {integrity: sha512-dWLGE+r5lBgm2A8SaaSYDE3OKJ/kwwy5WLyGyzor8PLhUL9VnJRiY6qhp4njwhnljiLtzeffRtG2Mf/YyWLeTw==} + '@opentelemetry/instrumentation-redis@0.57.0': + resolution: {integrity: sha512-bCxTHQFXzrU3eU1LZnOZQ3s5LURxQPDlU3/upBzlWY77qOI1GZuGofazj3jtzjctMJeBEJhNwIFEgRPBX1kp/Q==} engines: {node: ^18.19.0 || >=20.6.0} peerDependencies: '@opentelemetry/api': ^1.3.0 - '@opentelemetry/instrumentation-redis@0.53.0': - resolution: {integrity: sha512-WUHV8fr+8yo5RmzyU7D5BIE1zwiaNQcTyZPwtxlfr7px6NYYx7IIpSihJK7WA60npWynfxxK1T67RAVF0Gdfjg==} + '@opentelemetry/instrumentation-tedious@0.27.0': + resolution: {integrity: sha512-jRtyUJNZppPBjPae4ZjIQ2eqJbcRaRfJkr0lQLHFmOU/no5A6e9s1OHLd5XZyZoBJ/ymngZitanyRRA5cniseA==} engines: {node: ^18.19.0 || >=20.6.0} peerDependencies: '@opentelemetry/api': ^1.3.0 - '@opentelemetry/instrumentation-tedious@0.23.0': - resolution: {integrity: sha512-3TMTk/9VtlRonVTaU4tCzbg4YqW+Iq/l5VnN2e5whP6JgEg/PKfrGbqQ+CxQWNLfLaQYIUgEZqAn5gk/inh1uQ==} - engines: {node: ^18.19.0 || >=20.6.0} - peerDependencies: - '@opentelemetry/api': ^1.3.0 - - '@opentelemetry/instrumentation-undici@0.15.0': - resolution: {integrity: sha512-sNFGA/iCDlVkNjzTzPRcudmI11vT/WAfAguRdZY9IspCw02N4WSC72zTuQhSMheh2a1gdeM9my1imnKRvEEvEg==} + '@opentelemetry/instrumentation-undici@0.19.0': + resolution: {integrity: sha512-Pst/RhR61A2OoZQZkn6OLpdVpXp6qn3Y92wXa6umfJe9rV640r4bc6SWvw4pPN6DiQqPu2c8gnSSZPDtC6JlpQ==} engines: {node: ^18.19.0 || >=20.6.0} peerDependencies: '@opentelemetry/api': ^1.7.0 - '@opentelemetry/instrumentation@0.204.0': - resolution: {integrity: sha512-vV5+WSxktzoMP8JoYWKeopChy6G3HKk4UQ2hESCRDUUTZqQ3+nM3u8noVG0LmNfRWwcFBnbZ71GKC7vaYYdJ1g==} + '@opentelemetry/instrumentation@0.208.0': + resolution: {integrity: sha512-Eju0L4qWcQS+oXxi6pgh7zvE2byogAkcsVv0OjHF/97iOz1N/aKE6etSGowYkie+YA1uo6DNwdSxaaNnLvcRlA==} engines: {node: ^18.19.0 || >=20.6.0} peerDependencies: '@opentelemetry/api': ^1.3.0 - '@opentelemetry/instrumentation@0.57.2': - resolution: {integrity: sha512-BdBGhQBh8IjZ2oIIX6F2/Q3LKm/FDDKi6ccYKcBTeilh6SNdNKveDOLk73BkSJjQLJk6qe4Yh+hHw1UPhCDdrg==} - engines: {node: '>=14'} - peerDependencies: - '@opentelemetry/api': ^1.3.0 - - '@opentelemetry/redis-common@0.38.0': - resolution: {integrity: sha512-4Wc0AWURII2cfXVVoZ6vDqK+s5n4K5IssdrlVrvGsx6OEOKdghKtJZqXAHWFiZv4nTDLH2/2fldjIHY8clMOjQ==} + '@opentelemetry/redis-common@0.38.2': + resolution: {integrity: sha512-1BCcU93iwSRZvDAgwUxC/DV4T/406SkMfxGqu5ojc3AvNI+I9GhV7v0J1HljsczuuhcnFLYqD5VmwVXfCGHzxA==} engines: {node: ^18.19.0 || >=20.6.0} - '@opentelemetry/resources@2.1.0': - resolution: {integrity: sha512-1CJjf3LCvoefUOgegxi8h6r4B/wLSzInyhGP2UmIBYNlo4Qk5CZ73e1eEyWmfXvFtm1ybkmfb2DqWvspsYLrWw==} + '@opentelemetry/resources@2.2.0': + resolution: {integrity: sha512-1pNQf/JazQTMA0BiO5NINUzH0cbLbbl7mntLa4aJNmCCXSj0q03T5ZXXL0zw4G55TjdL9Tz32cznGClf+8zr5A==} engines: {node: ^18.19.0 || >=20.6.0} 
peerDependencies: '@opentelemetry/api': '>=1.3.0 <1.10.0' - '@opentelemetry/sdk-trace-base@2.1.0': - resolution: {integrity: sha512-uTX9FBlVQm4S2gVQO1sb5qyBLq/FPjbp+tmGoxu4tIgtYGmBYB44+KX/725RFDe30yBSaA9Ml9fqphe1hbUyLQ==} + '@opentelemetry/sdk-trace-base@2.2.0': + resolution: {integrity: sha512-xWQgL0Bmctsalg6PaXExmzdedSp3gyKV8mQBwK/j9VGdCDu2fmXIb2gAehBKbkXCpJ4HPkgv3QfoJWRT4dHWbw==} engines: {node: ^18.19.0 || >=20.6.0} peerDependencies: '@opentelemetry/api': '>=1.3.0 <1.10.0' @@ -1726,41 +1869,41 @@ packages: resolution: {integrity: sha512-JD6DerIKdJGmRp4jQyX5FlrQjA4tjOw1cvfsPAZXfOOEErMUHjPcPSICS+6WnM0nB0efSFARh0KAZss+bvExOA==} engines: {node: '>=14'} - '@opentelemetry/sql-common@0.41.0': - resolution: {integrity: sha512-pmzXctVbEERbqSfiAgdes9Y63xjoOyXcD7B6IXBkVb+vbM7M9U98mn33nGXxPf4dfYR0M+vhcKRZmbSJ7HfqFA==} + '@opentelemetry/sql-common@0.41.2': + resolution: {integrity: sha512-4mhWm3Z8z+i508zQJ7r6Xi7y4mmoJpdvH0fZPFRkWrdp5fq7hhZ2HhYokEOLkfqSMgPR4Z9EyB3DBkbKGOqZiQ==} engines: {node: ^18.19.0 || >=20.6.0} peerDependencies: '@opentelemetry/api': ^1.1.0 - '@orval/angular@7.11.2': - resolution: {integrity: sha512-v7I3MXlc1DTFHZlCo10uqBmss/4puXi1EbYdlYGfeZ2sYQiwtRFEYAMnSIxHzMtdtI4jd7iDEH0fZRA7W6yloA==} + '@orval/angular@7.13.0': + resolution: {integrity: sha512-r/qKpfBWMilze0fzGpguFLOzSGS5AxI8Heaw8+zJ4Nky2+OURDCR2ImCCbeNz0rp12Vd8ovpgEUQIYTbciaapw==} - '@orval/axios@7.11.2': - resolution: {integrity: sha512-X5TJTFofCeJrQcHWoH0wz/032DBhPOQuZUUOPYO3DItOnq9/nfHJYKnUfg13wtYw0LVjCxyTZpeGLUBZnY804A==} + '@orval/axios@7.13.0': + resolution: {integrity: sha512-Uf7wvP94TEbgAMd6ueBNEiw7YtmCvc8Heu/aTpIoQj1aas5myG4DS22udgtuxo17UiGryuX8pwYITltX64lrUw==} - '@orval/core@7.11.2': - resolution: {integrity: sha512-5k2j4ro53yZ3J+tGMu3LpLgVb2OBtxNDgyrJik8qkrFyuORBLx/a+AJRFoPYwZmtnMZzzRXoH4J/fbpW5LXIyg==} + '@orval/core@7.13.0': + resolution: {integrity: sha512-fGwf/ZtwEbiSV1keKunGI7Tu6N6f95LlurBHC1fjsOhixzzVzJS3QofHvuYPtckOPRdMEWjAJsiQCpgrB4OOpw==} - '@orval/fetch@7.11.2': - resolution: {integrity: sha512-FuupASqk4Dn8ZET7u5Ra5djKy22KfRfec60zRR/o5+L5iQkWKEe/A5DBT1PwjTMnp9789PEGlFPQjZNwMG98Tg==} + '@orval/fetch@7.13.0': + resolution: {integrity: sha512-B5aI7GG1Xsfw1DIGqKaEGAZei516cJq+NfB1Fy5gZEuvoQUjvTzm9yIw4F85TZEaaMzad/ZqvpySg8bjSfW7vA==} - '@orval/hono@7.11.2': - resolution: {integrity: sha512-SddhKMYMB/dJH3YQx3xi0Zd+4tfhrEkqJdqQaYLXgENJiw0aGbdaZTdY6mb/e6qP38TTK6ME2PkYOqwkl2DQ7g==} + '@orval/hono@7.13.0': + resolution: {integrity: sha512-B9OvDAYch63KoC0wL99xXLBS0oTCO+rvT+yxBu+tMfoovvWj5cQLeX/DbZa/896MxyfiD/z9dCHuUtzLPaLxzQ==} - '@orval/mcp@7.11.2': - resolution: {integrity: sha512-9kGKko8wLuCbeETp8Pd8lXLtBpLzEJfR2kl2m19AI3nAoHXE/Tnn3KgjMIg0qvCcsRXGXdYJB7wfxy2URdAxVA==} + '@orval/mcp@7.13.0': + resolution: {integrity: sha512-ESH3zoLptftH++DxVr0okToysixdIsDo0eSrtRk0CeKZyGm03UmCnsBplF/xI3WvuImEWO46CrbBYlrHWvGgLg==} - '@orval/mock@7.11.2': - resolution: {integrity: sha512-+uRq6BT6NU2z0UQtgeD6FMuLAxQ5bjJ5PZK3AsbDYFRSmAWUWoeaQcoWyF38F4t7ez779beGs3AlUg+z0Ec4rQ==} + '@orval/mock@7.13.0': + resolution: {integrity: sha512-6qunGaem/s+jkxhtummbEOeJ/ab4dVydFJ9AxmI1mZVevMVz4lbD9Yyq9IQpZhn1G++amOtDyDQ4AC8RRvOzAg==} - '@orval/query@7.11.2': - resolution: {integrity: sha512-C/it+wNfcDtuvpB6h/78YwWU+Rjk7eU1Av8jAoGnvxMRli4nnzhSZ83HMILGhYQbE9WcfNZxQJ6OaBoTWqACPg==} + '@orval/query@7.13.0': + resolution: {integrity: sha512-5E1obQpt81ixJ62UsMr82DODYXl39oSccbXZ8EVv6oROhJyanFks///9WrKEqQPXIzPfqlStyjaY6bJvCjC8JA==} - '@orval/swr@7.11.2': - resolution: {integrity: 
sha512-95GkKLVy67xJvsiVvK4nTOsCpebWM54FvQdKQaqlJ0FGCNUbqDjVRwBKbjP6dLc/B3wTmBAWlFSLbdVmjGCTYg==} + '@orval/swr@7.13.0': + resolution: {integrity: sha512-SwORHlcLzbidhmxHGh8NET6ZxUZeMikfO+bI6vsayHpopCD7EkJNmbn4v3mDg1bwdzFP8M4drffKVj47KR9AAQ==} - '@orval/zod@7.11.2': - resolution: {integrity: sha512-4MzTg5Wms8/LlM3CbYu80dvCbP88bVlQjnYsBdFXuEv0K2GYkBCAhVOrmXCVrPXE89neV6ABkvWQeuKZQpkdxQ==} + '@orval/zod@7.13.0': + resolution: {integrity: sha512-jEEj0uRO5D5D1CHKQdth5Atl5Ap4/P21SMiOFmVpiArlXr4LQtMpbkiPVM2tsQXIhtC38c9oMt8+rOx1rYSjcw==} '@phosphor-icons/react@2.1.10': resolution: {integrity: sha512-vt8Tvq8GLjheAZZYa+YG/pW7HDbov8El/MANW8pOAz4eGxrwhnbfrQZq0Cp4q8zBEu8NIhHdnr+r8thnfRSNYA==} @@ -1773,8 +1916,8 @@ packages: resolution: {integrity: sha512-+1VkjdD0QBLPodGrJUeqarH8VAIvQODIbwh9XpP5Syisf7YoQgsJKPNFoqqLQlu+VQ/tVSshMR6loPMn8U+dPg==} engines: {node: '>=14'} - '@playwright/test@1.55.0': - resolution: {integrity: sha512-04IXzPwHrW69XusN/SIdDdKZBzMfOT9UNT/YiJit/xpy2VuAoB8NHc8Aplb96zsWDddLnbkPL3TsmrS04ZU2xQ==} + '@playwright/test@1.56.1': + resolution: {integrity: sha512-vSMYtL/zOcFpvJCW71Q/OEGQb7KYBPAdKh35WNSkaZA75JlAO8ED8UN6GUNTm3drWomcbcqRPFqQbLae8yBTdg==} engines: {node: '>=18'} hasBin: true @@ -1804,8 +1947,8 @@ packages: webpack-plugin-serve: optional: true - '@prisma/instrumentation@6.15.0': - resolution: {integrity: sha512-6TXaH6OmDkMOQvOxwLZ8XS51hU2v4A3vmE2pSijCIiGRJYyNeMcL6nMHQMyYdZRD8wl7LF3Wzc+AMPMV/9Oo7A==} + '@prisma/instrumentation@6.19.0': + resolution: {integrity: sha512-QcuYy25pkXM8BJ37wVFBO7Zh34nyRV1GOb2n3lPkkbRYfl4hWl3PTcImP41P0KrzVXfa/45p6eVCos27x3exIg==} peerDependencies: '@opentelemetry/api': ^1.8 @@ -2483,136 +2626,196 @@ packages: '@rtsao/scc@1.1.0': resolution: {integrity: sha512-zt6OdqaDoOnJ1ZYsCYGt9YmWzDXl4vQdKTyJev62gFhRGKdx7mcT54V9KIjg+d2wi9EXsPvAPKe7i7WjfVWB8g==} - '@rushstack/eslint-patch@1.12.0': - resolution: {integrity: sha512-5EwMtOqvJMMa3HbmxLlF74e+3/HhwBTMcvt3nqVJgGCozO6hzIPOBlwm8mGVNR9SN2IJpxSnlxczyDjcn7qIyw==} + '@rushstack/eslint-patch@1.15.0': + resolution: {integrity: sha512-ojSshQPKwVvSMR8yT2L/QtUkV5SXi/IfDiJ4/8d6UbTPjiHVmxZzUAzGD8Tzks1b9+qQkZa0isUOvYObedITaw==} '@scarf/scarf@1.4.0': resolution: {integrity: sha512-xxeapPiUXdZAE3che6f3xogoJPeZgig6omHEy1rIY5WVsB3H2BHNnZH+gHG6x91SCWyQCzWGsuL2Hh3ClO5/qQ==} - '@sentry-internal/browser-utils@10.15.0': - resolution: {integrity: sha512-hJxo6rj3cMqiYlZd6PC8o/i2FG6hRnZdHcJkfm1HXgWCRgdCPilKghL6WU+B2H5dLyRKJ17nWjDAVQPRdCxO9w==} + '@sentry-internal/browser-utils@10.27.0': + resolution: {integrity: sha512-17tO6AXP+rmVQtLJ3ROQJF2UlFmvMWp7/8RDT5x9VM0w0tY31z8Twc0gw2KA7tcDxa5AaHDUbf9heOf+R6G6ow==} engines: {node: '>=18'} - '@sentry-internal/feedback@10.15.0': - resolution: {integrity: sha512-EP+NvdU9yfmepGzQwz0jnqhd0DBxHzrP16TsJIVXJe93QJ+gumdN3XQ0lvYtEC9zHuU08DghRLjfI1kLRfGzdQ==} + '@sentry-internal/feedback@10.27.0': + resolution: {integrity: sha512-UecsIDJcv7VBwycge/MDvgSRxzevDdcItE1i0KSwlPz00rVVxLY9kV28PJ4I2E7r6/cIaP9BkbWegCEcv09NuA==} engines: {node: '>=18'} - '@sentry-internal/replay-canvas@10.15.0': - resolution: {integrity: sha512-SXgUWArk+haUJ24W6pIm9IiwmIk3WxeQyFUxFfMUetSRb06CVAoNjPb0YuzKIeuFYJb6hDPGQ9UWhShnQpTmkw==} + '@sentry-internal/replay-canvas@10.27.0': + resolution: {integrity: sha512-inhsRYSVBpu3BI1kZphXj6uB59baJpYdyHeIPCiTfdFNBE5tngNH0HS/aedZ1g9zICw290lwvpuyrWJqp4VBng==} engines: {node: '>=18'} - '@sentry-internal/replay@10.15.0': - resolution: {integrity: sha512-vHBAFVdDfa51oqPWyRCK4fOIFhFeE2mVlqBWrBb+S3vCNcmtpvqJUq6o4sjSYcQzdZQpMSp5/Lj8Y3a8x/ed7w==} + '@sentry-internal/replay@10.27.0': + resolution: 
{integrity: sha512-tKSzHq1hNzB619Ssrqo25cqdQJ84R3xSSLsUWEnkGO/wcXJvpZy94gwdoS+KmH18BB1iRRRGtnMxZcUkiPSesw==} engines: {node: '>=18'} '@sentry/babel-plugin-component-annotate@4.3.0': resolution: {integrity: sha512-OuxqBprXRyhe8Pkfyz/4yHQJc5c3lm+TmYWSSx8u48g5yKewSQDOxkiLU5pAk3WnbLPy8XwU/PN+2BG0YFU9Nw==} engines: {node: '>= 14'} - '@sentry/browser@10.15.0': - resolution: {integrity: sha512-YV42VgW7xdmY23u7+nQLNJXDVilNTP0d5WWkHDxeI/uD6AAvn3GyKjx1YMG/KCulxva3dPDPEUunzDm3al26Sw==} + '@sentry/babel-plugin-component-annotate@4.6.1': + resolution: {integrity: sha512-aSIk0vgBqv7PhX6/Eov+vlI4puCE0bRXzUG5HdCsHBpAfeMkI8Hva6kSOusnzKqs8bf04hU7s3Sf0XxGTj/1AA==} + engines: {node: '>= 14'} + + '@sentry/browser@10.27.0': + resolution: {integrity: sha512-G8q362DdKp9y1b5qkQEmhTFzyWTOVB0ps1rflok0N6bVA75IEmSDX1pqJsNuY3qy14VsVHYVwQBJQsNltQLS0g==} engines: {node: '>=18'} '@sentry/bundler-plugin-core@4.3.0': resolution: {integrity: sha512-dmR4DJhJ4jqVWGWppuTL2blNFqOZZnt4aLkewbD1myFG3KVfUx8CrMQWEmGjkgPOtj5TO6xH9PyTJjXC6o5tnA==} engines: {node: '>= 14'} + '@sentry/bundler-plugin-core@4.6.1': + resolution: {integrity: sha512-WPeRbnMXm927m4Kr69NTArPfI+p5/34FHftdCRI3LFPMyhZDzz6J3wLy4hzaVUgmMf10eLzmq2HGEMvpQmdynA==} + engines: {node: '>= 14'} + '@sentry/cli-darwin@2.55.0': resolution: {integrity: sha512-jGHE7SHHzqXUmnsmRLgorVH6nmMmTjQQXdPZbSL5tRtH8d3OIYrVNr5D72DSgD26XAPBDMV0ibqOQ9NKoiSpfA==} engines: {node: '>=10'} os: [darwin] + '@sentry/cli-darwin@2.58.2': + resolution: {integrity: sha512-MArsb3zLhA2/cbd4rTm09SmTpnEuZCoZOpuZYkrpDw1qzBVJmRFA1W1hGAQ9puzBIk/ubY3EUhhzuU3zN2uD6w==} + engines: {node: '>=10'} + os: [darwin] + '@sentry/cli-linux-arm64@2.55.0': resolution: {integrity: sha512-jNB/0/gFcOuDCaY/TqeuEpsy/k52dwyk1SOV3s1ku4DUsln6govTppeAGRewY3T1Rj9B2vgIWTrnB8KVh9+Rgg==} engines: {node: '>=10'} cpu: [arm64] os: [linux, freebsd, android] + '@sentry/cli-linux-arm64@2.58.2': + resolution: {integrity: sha512-ay3OeObnbbPrt45cjeUyQjsx5ain1laj1tRszWj37NkKu55NZSp4QCg1gGBZ0gBGhckI9nInEsmKtix00alw2g==} + engines: {node: '>=10'} + cpu: [arm64] + os: [linux, freebsd, android] + '@sentry/cli-linux-arm@2.55.0': resolution: {integrity: sha512-ATjU0PsiWADSPLF/kZroLZ7FPKd5W9TDWHVkKNwIUNTei702LFgTjNeRwOIzTgSvG3yTmVEqtwFQfFN/7hnVXQ==} engines: {node: '>=10'} cpu: [arm] os: [linux, freebsd, android] + '@sentry/cli-linux-arm@2.58.2': + resolution: {integrity: sha512-HU9lTCzcHqCz/7Mt5n+cv+nFuJdc1hGD2h35Uo92GgxX3/IujNvOUfF+nMX9j6BXH6hUt73R5c0Ycq9+a3Parg==} + engines: {node: '>=10'} + cpu: [arm] + os: [linux, freebsd, android] + '@sentry/cli-linux-i686@2.55.0': resolution: {integrity: sha512-8LZjo6PncTM6bWdaggscNOi5r7F/fqRREsCwvd51dcjGj7Kp1plqo9feEzYQ+jq+KUzVCiWfHrUjddFmYyZJrg==} engines: {node: '>=10'} cpu: [x86, ia32] os: [linux, freebsd, android] + '@sentry/cli-linux-i686@2.58.2': + resolution: {integrity: sha512-CN9p0nfDFsAT1tTGBbzOUGkIllwS3hygOUyTK7LIm9z+UHw5uNgNVqdM/3Vg+02ymjkjISNB3/+mqEM5osGXdA==} + engines: {node: '>=10'} + cpu: [x86, ia32] + os: [linux, freebsd, android] + '@sentry/cli-linux-x64@2.55.0': resolution: {integrity: sha512-5LUVvq74Yj2cZZy5g5o/54dcWEaX4rf3myTHy73AKhRj1PABtOkfexOLbF9xSrZy95WXWaXyeH+k5n5z/vtHfA==} engines: {node: '>=10'} cpu: [x64] os: [linux, freebsd, android] + '@sentry/cli-linux-x64@2.58.2': + resolution: {integrity: sha512-oX/LLfvWaJO50oBVOn4ZvG2SDWPq0MN8SV9eg5tt2nviq+Ryltfr7Rtoo+HfV+eyOlx1/ZXhq9Wm7OT3cQuz+A==} + engines: {node: '>=10'} + cpu: [x64] + os: [linux, freebsd, android] + '@sentry/cli-win32-arm64@2.55.0': resolution: {integrity: 
sha512-cWIQdzm1pfLwPARsV6dUb8TVd6Y3V1A2VWxjTons3Ift6GvtVmiAe0OWL8t2Yt95i8v61kTD/6Tq21OAaogqzA==} engines: {node: '>=10'} cpu: [arm64] os: [win32] + '@sentry/cli-win32-arm64@2.58.2': + resolution: {integrity: sha512-+cl3x2HPVMpoSVGVM1IDWlAEREZrrVQj4xBb0TRKII7g3hUxRsAIcsrr7+tSkie++0FuH4go/b5fGAv51OEF3w==} + engines: {node: '>=10'} + cpu: [arm64] + os: [win32] + '@sentry/cli-win32-i686@2.55.0': resolution: {integrity: sha512-ldepCn2t9r4I0wvgk7NRaA7coJyy4rTQAzM66u9j5nTEsUldf66xym6esd5ZZRAaJUjffqvHqUIr/lrieTIrVg==} engines: {node: '>=10'} cpu: [x86, ia32] os: [win32] + '@sentry/cli-win32-i686@2.58.2': + resolution: {integrity: sha512-omFVr0FhzJ8oTJSg1Kf+gjLgzpYklY0XPfLxZ5iiMiYUKwF5uo1RJRdkUOiEAv0IqpUKnmKcmVCLaDxsWclB7Q==} + engines: {node: '>=10'} + cpu: [x86, ia32] + os: [win32] + '@sentry/cli-win32-x64@2.55.0': resolution: {integrity: sha512-4hPc/I/9tXx+HLTdTGwlagtAfDSIa2AoTUP30tl32NAYQhx9a6niUbPAemK2qfxesiufJ7D2djX83rCw6WnJVA==} engines: {node: '>=10'} cpu: [x64] os: [win32] + '@sentry/cli-win32-x64@2.58.2': + resolution: {integrity: sha512-2NAFs9UxVbRztQbgJSP5i8TB9eJQ7xraciwj/93djrSMHSEbJ0vC47TME0iifgvhlHMs5vqETOKJtfbbpQAQFA==} + engines: {node: '>=10'} + cpu: [x64] + os: [win32] + '@sentry/cli@2.55.0': resolution: {integrity: sha512-cynvcIM2xL8ddwELyFRSpZQw4UtFZzoM2rId2l9vg7+wDREPDocMJB9lEQpBIo3eqhp9JswqUT037yjO6iJ5Sw==} engines: {node: '>= 10'} hasBin: true - '@sentry/core@10.15.0': - resolution: {integrity: sha512-J7WsQvb9G6nsVgWkTHwyX7wR2djtEACYCx19hAnRbSGIg+ysVG+7Ti3RL4bz9/VXfcxsz346cleKc7ljhynYlQ==} + '@sentry/cli@2.58.2': + resolution: {integrity: sha512-U4u62V4vaTWF+o40Mih8aOpQKqKUbZQt9A3LorIJwaE3tO3XFLRI70eWtW2se1Qmy0RZ74zB14nYcFNFl2t4Rw==} + engines: {node: '>= 10'} + hasBin: true + + '@sentry/core@10.27.0': + resolution: {integrity: sha512-Zc68kdH7tWTDtDbV1zWIbo3Jv0fHAU2NsF5aD2qamypKgfSIMSbWVxd22qZyDBkaX8gWIPm/0Sgx6aRXRBXrYQ==} engines: {node: '>=18'} - '@sentry/nextjs@10.15.0': - resolution: {integrity: sha512-u3WLeeYgQH2Ug2SSdUu5ChMDKnWXeDXP7Bn+dRO01Y1/5NrMjoXO2w33ak03SLaZltPJFsRuMcfBtYoLA9BNlw==} + '@sentry/nextjs@10.27.0': + resolution: {integrity: sha512-O3b7y4JgVyj70ucW7lfyFLSXTCvztu7qOdFzFl2LwIstzFIZzt6v7ICOhP3FEEC7Lxn5teNb6xVBDtu8vYr20g==} engines: {node: '>=18'} peerDependencies: - next: ^13.2.0 || ^14.0 || ^15.0.0-rc.0 + next: ^13.2.0 || ^14.0 || ^15.0.0-rc.0 || ^16.0.0-0 - '@sentry/node-core@10.15.0': - resolution: {integrity: sha512-X6QAHulgfkpONYrXNK2QXfW02ja5FS31sn5DWfCDO8ggHej/u2mrf5nwnUU8vilSwbInHmiMpkUswGEKYDEKTA==} + '@sentry/node-core@10.27.0': + resolution: {integrity: sha512-Dzo1I64Psb7AkpyKVUlR9KYbl4wcN84W4Wet3xjLmVKMgrCo2uAT70V4xIacmoMH5QLZAx0nGfRy9yRCd4nzBg==} engines: {node: '>=18'} peerDependencies: '@opentelemetry/api': ^1.9.0 - '@opentelemetry/context-async-hooks': ^1.30.1 || ^2.1.0 - '@opentelemetry/core': ^1.30.1 || ^2.1.0 + '@opentelemetry/context-async-hooks': ^1.30.1 || ^2.1.0 || ^2.2.0 + '@opentelemetry/core': ^1.30.1 || ^2.1.0 || ^2.2.0 '@opentelemetry/instrumentation': '>=0.57.1 <1' - '@opentelemetry/resources': ^1.30.1 || ^2.1.0 - '@opentelemetry/sdk-trace-base': ^1.30.1 || ^2.1.0 + '@opentelemetry/resources': ^1.30.1 || ^2.1.0 || ^2.2.0 + '@opentelemetry/sdk-trace-base': ^1.30.1 || ^2.1.0 || ^2.2.0 '@opentelemetry/semantic-conventions': ^1.37.0 - '@sentry/node@10.15.0': - resolution: {integrity: sha512-5V9BX55DEIscU/S5+AEIQuIMKKbSd+MVo1/x5UkOceBxfiA0KUmgQ0POIpUEZqGCS9rpQ5fEajByRXAQ7bjaWA==} + '@sentry/node@10.27.0': + resolution: {integrity: sha512-1cQZ4+QqV9juW64Jku1SMSz+PoZV+J59lotz4oYFvCNYzex8hRAnDKvNiKW1IVg5mEEkz98mg1fvcUtiw7GTiQ==} 
engines: {node: '>=18'} - '@sentry/opentelemetry@10.15.0': - resolution: {integrity: sha512-j+uk3bfxGgsBejwpq78iRZ+aBOKR/fWcJi72MBTboTEK3B4LINO65PyJqwOhcZOJVVAPL6IK1+sWQp4RL24GTg==} + '@sentry/opentelemetry@10.27.0': + resolution: {integrity: sha512-z2vXoicuGiqlRlgL9HaYJgkin89ncMpNQy0Kje6RWyhpzLe8BRgUXlgjux7WrSrcbopDdC1OttSpZsJ/Wjk7fg==} engines: {node: '>=18'} peerDependencies: '@opentelemetry/api': ^1.9.0 - '@opentelemetry/context-async-hooks': ^1.30.1 || ^2.1.0 - '@opentelemetry/core': ^1.30.1 || ^2.1.0 - '@opentelemetry/sdk-trace-base': ^1.30.1 || ^2.1.0 + '@opentelemetry/context-async-hooks': ^1.30.1 || ^2.1.0 || ^2.2.0 + '@opentelemetry/core': ^1.30.1 || ^2.1.0 || ^2.2.0 + '@opentelemetry/sdk-trace-base': ^1.30.1 || ^2.1.0 || ^2.2.0 '@opentelemetry/semantic-conventions': ^1.37.0 - '@sentry/react@10.15.0': - resolution: {integrity: sha512-dyJTv0rJtHunGE0rZ3amQAgBaKR9YnbIJcg9Y1uZt+vPK/B19sqM9S8D7DUvlBfDk9iWfhBCK6gHLEUOckFrKA==} + '@sentry/react@10.27.0': + resolution: {integrity: sha512-xoIRBlO1IhLX/O9aQgVYW1F3Qhw8TdkOiZjh6mrPsnCpBLufsQ4aS1nDQi9miZuWeslW0s2zNy0ACBpICZR/sw==} engines: {node: '>=18'} peerDependencies: react: ^16.14.0 || 17.x || 18.x || 19.x - '@sentry/vercel-edge@10.15.0': - resolution: {integrity: sha512-QNruocfQy2P3rrgCKHCWNq7bsy+cFVNY25Y5PDaYsFKSiIge482g4Tjvfi7VMohy5jozcC1y82efFhicp3UqYg==} + '@sentry/vercel-edge@10.27.0': + resolution: {integrity: sha512-uBfpOnzSNSd2ITMTMeX5bV9Jlci9iMyI+iOPuW8c3oc+0dITTN0OpKLyNd6nfm50bM5h/1qFVQrph+oFTrtuGQ==} engines: {node: '>=18'} '@sentry/webpack-plugin@4.3.0': @@ -2621,17 +2824,17 @@ packages: peerDependencies: webpack: '>=4.40.0' - '@shikijs/engine-oniguruma@3.9.2': - resolution: {integrity: sha512-Vn/w5oyQ6TUgTVDIC/BrpXwIlfK6V6kGWDVVz2eRkF2v13YoENUvaNwxMsQU/t6oCuZKzqp9vqtEtEzKl9VegA==} + '@shikijs/engine-oniguruma@3.14.0': + resolution: {integrity: sha512-TNcYTYMbJyy+ZjzWtt0bG5y4YyMIWC2nyePz+CFMWqm+HnZZyy9SWMgo8Z6KBJVIZnx8XUXS8U2afO6Y0g1Oug==} - '@shikijs/langs@3.9.2': - resolution: {integrity: sha512-X1Q6wRRQXY7HqAuX3I8WjMscjeGjqXCg/Sve7J2GWFORXkSrXud23UECqTBIdCSNKJioFtmUGJQNKtlMMZMn0w==} + '@shikijs/langs@3.14.0': + resolution: {integrity: sha512-DIB2EQY7yPX1/ZH7lMcwrK5pl+ZkP/xoSpUzg9YC8R+evRCCiSQ7yyrvEyBsMnfZq4eBzLzBlugMyTAf13+pzg==} - '@shikijs/themes@3.9.2': - resolution: {integrity: sha512-6z5lBPBMRfLyyEsgf6uJDHPa6NAGVzFJqH4EAZ+03+7sedYir2yJBRu2uPZOKmj43GyhVHWHvyduLDAwJQfDjA==} + '@shikijs/themes@3.14.0': + resolution: {integrity: sha512-fAo/OnfWckNmv4uBoUu6dSlkcBc+SA1xzj5oUSaz5z3KqHtEbUypg/9xxgJARtM6+7RVm0Q6Xnty41xA1ma1IA==} - '@shikijs/types@3.9.2': - resolution: {integrity: sha512-/M5L0Uc2ljyn2jKvj4Yiah7ow/W+DJSglVafvWAJ/b8AZDeeRAdMu3c2riDzB7N42VD+jSnWxeP9AKtd4TfYVw==} + '@shikijs/types@3.14.0': + resolution: {integrity: sha512-bQGgC6vrY8U/9ObG1Z/vTro+uclbjjD/uG58RvfxKZVD5p9Yc1ka3tVyEFy7BNJLzxuWyHH5NWynP9zZZS59eQ==} '@shikijs/vscode-textmate@10.0.2': resolution: {integrity: sha512-83yeghZ2xxin3Nj8z1NMd/NCuca+gsYXswywDy5bHvwlWL8tpTQmzGeUuHd9FC3E/SBEMvzJRwWEOz5gGes9Qg==} @@ -2821,55 +3024,55 @@ packages: typescript: optional: true - '@supabase/auth-js@2.71.1': - resolution: {integrity: sha512-mMIQHBRc+SKpZFRB2qtupuzulaUhFYupNyxqDj5Jp/LyPvcWvjaJzZzObv6URtL/O6lPxkanASnotGtNpS3H2Q==} + '@supabase/auth-js@2.78.0': + resolution: {integrity: sha512-cXDtu1U0LeZj/xfnFoV7yCze37TcbNo8FCxy1FpqhMbB9u9QxxDSW6pA5gm/07Ei7m260Lof4CZx67Cu6DPeig==} - '@supabase/functions-js@2.4.5': - resolution: {integrity: sha512-v5GSqb9zbosquTo6gBwIiq7W9eQ7rE5QazsK/ezNiQXdCbY+bH8D9qEaBIkhVvX4ZRW5rP03gEfw5yw9tiq4EQ==} + '@supabase/functions-js@2.78.0': + 
resolution: {integrity: sha512-t1jOvArBsOINyqaRee1xJ3gryXLvkBzqnKfi6q3YRzzhJbGS6eXz0pXR5fqmJeB01fLC+1njpf3YhMszdPEF7g==} '@supabase/node-fetch@2.6.15': resolution: {integrity: sha512-1ibVeYUacxWYi9i0cf5efil6adJ9WRyZBLivgjs+AUpewx1F3xPi7gLgaASI2SmIQxPoCEjAsLAzKPgMJVgOUQ==} engines: {node: 4.x || >=6.0.0} - '@supabase/postgrest-js@1.19.4': - resolution: {integrity: sha512-O4soKqKtZIW3olqmbXXbKugUtByD2jPa8kL2m2c1oozAO11uCcGrRhkZL0kVxjBLrXHE0mdSkFsMj7jDSfyNpw==} + '@supabase/postgrest-js@2.78.0': + resolution: {integrity: sha512-AwhpYlSvJ+PSnPmIK8sHj7NGDyDENYfQGKrMtpVIEzQA2ApUjgpUGxzXWN4Z0wEtLQsvv7g4y9HVad9Hzo1TNA==} - '@supabase/realtime-js@2.15.1': - resolution: {integrity: sha512-edRFa2IrQw50kNntvUyS38hsL7t2d/psah6om6aNTLLcWem0R6bOUq7sk7DsGeSlNfuwEwWn57FdYSva6VddYw==} + '@supabase/realtime-js@2.78.0': + resolution: {integrity: sha512-rCs1zmLe7of7hj4s7G9z8rTqzWuNVtmwDr3FiCRCJFawEoa+RQO1xpZGbdeuVvVmKDyVN6b542Okci+117y/LQ==} - '@supabase/ssr@0.6.1': - resolution: {integrity: sha512-QtQgEMvaDzr77Mk3vZ3jWg2/y+D8tExYF7vcJT+wQ8ysuvOeGGjYbZlvj5bHYsj/SpC0bihcisnwPrM4Gp5G4g==} + '@supabase/ssr@0.7.0': + resolution: {integrity: sha512-G65t5EhLSJ5c8hTCcXifSL9Q/ZRXvqgXeNo+d3P56f4U1IxwTqjB64UfmfixvmMcjuxnq2yGqEWVJqUcO+AzAg==} peerDependencies: '@supabase/supabase-js': ^2.43.4 - '@supabase/storage-js@2.11.0': - resolution: {integrity: sha512-Y+kx/wDgd4oasAgoAq0bsbQojwQ+ejIif8uczZ9qufRHWFLMU5cODT+ApHsSrDufqUcVKt+eyxtOXSkeh2v9ww==} + '@supabase/storage-js@2.78.0': + resolution: {integrity: sha512-n17P0JbjHOlxqJpkaGFOn97i3EusEKPEbWOpuk1r4t00Wg06B8Z4GUiq0O0n1vUpjiMgJUkLIMuBVp+bEgunzQ==} - '@supabase/supabase-js@2.55.0': - resolution: {integrity: sha512-Y1uV4nEMjQV1x83DGn7+Z9LOisVVRlY1geSARrUHbXWgbyKLZ6/08dvc0Us1r6AJ4tcKpwpCZWG9yDQYo1JgHg==} + '@supabase/supabase-js@2.78.0': + resolution: {integrity: sha512-xYMRNBFmKp2m1gMuwcp/gr/HlfZKqjye1Ib8kJe29XJNsgwsfO/f8skxnWiscFKTlkOKLuBexNgl5L8dzGt6vA==} '@swc/helpers@0.5.15': resolution: {integrity: sha512-JQ5TuMi45Owi4/BIMAJBoSQoOJu12oOk/gADqlcUL9JEdHB8vyjUSsxqeNXnmXHjYKMi2WcYtezGEEhqUI/E2g==} - '@tanstack/eslint-plugin-query@5.86.0': - resolution: {integrity: sha512-tmXdnx/fF3yY5G5jpzrJQbASY3PNzsKF0gq9IsZVqz3LJ4sExgdUFGQ305nao0wTMBOclyrSC13v/VQ3yOXu/Q==} + '@tanstack/eslint-plugin-query@5.91.2': + resolution: {integrity: sha512-UPeWKl/Acu1IuuHJlsN+eITUHqAaa9/04geHHPedY8siVarSaWprY0SVMKrkpKfk5ehRT7+/MZ5QwWuEtkWrFw==} peerDependencies: eslint: ^8.57.0 || ^9.0.0 - '@tanstack/query-core@5.85.3': - resolution: {integrity: sha512-9Ne4USX83nHmRuEYs78LW+3lFEEO2hBDHu7mrdIgAFx5Zcrs7ker3n/i8p4kf6OgKExmaDN5oR0efRD7i2J0DQ==} + '@tanstack/query-core@5.90.6': + resolution: {integrity: sha512-AnZSLF26R8uX+tqb/ivdrwbVdGemdEDm1Q19qM6pry6eOZ6bEYiY7mWhzXT1YDIPTNEVcZ5kYP9nWjoxDLiIVw==} - '@tanstack/query-devtools@5.87.3': - resolution: {integrity: sha512-LkzxzSr2HS1ALHTgDmJH5eGAVsSQiuwz//VhFW5OqNk0OQ+Fsqba0Tsf+NzWRtXYvpgUqwQr4b2zdFZwxHcGvg==} + '@tanstack/query-devtools@5.90.1': + resolution: {integrity: sha512-GtINOPjPUH0OegJExZ70UahT9ykmAhmtNVcmtdnOZbxLwT7R5OmRztR5Ahe3/Cu7LArEmR6/588tAycuaWb1xQ==} - '@tanstack/react-query-devtools@5.87.3': - resolution: {integrity: sha512-uV7m4/m58jU4OaLEyiPLRoXnL5H5E598lhFLSXIcK83on+ZXW7aIfiu5kwRwe1qFa4X4thH8wKaxz1lt6jNmAA==} + '@tanstack/react-query-devtools@5.90.2': + resolution: {integrity: sha512-vAXJzZuBXtCQtrY3F/yUNJCV4obT/A/n81kb3+YqLbro5Z2+phdAbceO+deU3ywPw8B42oyJlp4FhO0SoivDFQ==} peerDependencies: - '@tanstack/react-query': ^5.87.1 + '@tanstack/react-query': ^5.90.2 react: ^18 || ^19 - '@tanstack/react-query@5.85.3': - resolution: 
{integrity: sha512-AqU8TvNh5GVIE8I+TUU0noryBRy7gOY0XhSayVXmOPll4UkZeLWKDwi0rtWOZbwLRCbyxorfJ5DIjDqE7GXpcQ==} + '@tanstack/react-query@5.90.6': + resolution: {integrity: sha512-gB1sljYjcobZKxjPbKSa31FUTyr+ROaBdoH+wSSs9Dk+yDCmMs+TkTV3PybRRVLC7ax7q0erJ9LvRWnMktnRAw==} peerDependencies: react: ^18 || ^19 @@ -2898,8 +3101,8 @@ packages: peerDependencies: '@testing-library/dom': '>=7.21.4' - '@tybys/wasm-util@0.10.0': - resolution: {integrity: sha512-VyyPYFlOMNylG45GoAe0xDoLwWuowvf92F9kySqzYh8vmYm7D2u4iUJKa1tOUpS70Ku13ASrOkS4ScXFsTaCNQ==} + '@tybys/wasm-util@0.10.1': + resolution: {integrity: sha512-9tTaPJLSiejZKx+Bmog4uSubteqTvFrVrURwkmHixBo0G4seD0zUxp98E1DzUBJxLQ3NPwXrGKDiVjwx/DpPsg==} '@types/aria-query@5.0.4': resolution: {integrity: sha512-rfT93uj5s0PRL7EzccGMs3brplhcrghnDoV26NqKhCAS1hVo+WdNsPvE/yb6ilfr5hi2MEk6d5EWJTKdxg8jVw==} @@ -2925,9 +3128,6 @@ packages: '@types/connect@3.4.38': resolution: {integrity: sha512-K6uROf1LD88uDQqJCktA4yzL1YYAK6NgfsI0v/mTgyPKWsX1CnJ0XPSDhViejru1GcRkLWb8RlzFYJRqGUbaug==} - '@types/cookie@0.6.0': - resolution: {integrity: sha512-4Kh9a6B2bQciAhf7FSuMRRkUWecJgJu9nPnx3yzpsfXX/c50REIqpHY4C82bXP90qrLtXtkDxTZosYO3UpOwlA==} - '@types/d3-array@3.2.1': resolution: {integrity: sha512-Y2Jn2idRrLzUfAKV2LyRImR+y4oa2AntrgID95SHJxuMUrkNXmanDSed71sRNZysveJVt1hLLemQZIady0FpEg==} @@ -3030,8 +3230,8 @@ packages: '@types/negotiator@0.6.4': resolution: {integrity: sha512-elf6BsTq+AkyNsb2h5cGNst2Mc7dPliVoAPm1fXglC/BM3f2pFA40BaSSv3E5lyHteEawVKLP+8TwiY1DMNb3A==} - '@types/node@24.3.1': - resolution: {integrity: sha512-3vXmQDXy+woz+gnrTvuvNrPzekOi+Ds0ReMxw0LzBiK3a+1k0kQn9f2NWk+lgD4rJehFUmYy2gMhJ2ZI+7YP9g==} + '@types/node@24.10.0': + resolution: {integrity: sha512-qzQZRBqkFsYyaSWXuEHc2WR9c0a0CXwiE5FWUvn7ZM+vdy1uZLfCunD38UzhuB7YN/J11ndbDBcTmOdxJo9Q7A==} '@types/parse-json@4.0.2': resolution: {integrity: sha512-dISoDXWWQwUquiKsyZ4Ng+HX2KsPL7LyHKHQwgGFEA3IaKac4Obd+h2a/a6waisAoepJlBcx9paWqjA8/HVjCw==} @@ -3039,15 +3239,12 @@ packages: '@types/pg-pool@2.0.6': resolution: {integrity: sha512-TaAUE5rq2VQYxab5Ts7WZhKNmuN78Q6PiFonTDdpbx8a1H0M1vhy3rhiMjl+e2iHmogyMw7jZF4FrE6eJUy5HQ==} - '@types/pg@8.15.5': - resolution: {integrity: sha512-LF7lF6zWEKxuT3/OR8wAZGzkg4ENGXFNyiV/JeOt9z5B+0ZVwbql9McqX5c/WStFq1GaGso7H1AzP/qSzmlCKQ==} + '@types/pg@8.15.6': + resolution: {integrity: sha512-NoaMtzhxOrubeL/7UZuNTrejB4MPAJ0RpxZqXQf2qXuVlTPuG6Y8p4u9dKRaue4yjmC7ZhzVO2/Yyyn25znrPQ==} '@types/phoenix@1.6.6': resolution: {integrity: sha512-PIzZZlEppgrpoT2QgbnDU+MMzuR6BbCjllj0bM70lWoejMeNJAxCchxnv7J3XFkI8MpygtRpzXrIlmWUBclP5A==} - '@types/prismjs@1.26.5': - resolution: {integrity: sha512-AUZTa7hQ2KY5L7AmtSiqxlhWxb4ina0yd8hNbl4TWuqnv/pFP0nDMb3YrfSBf4hJVGLh2YEIBfKaBW/9UEl6IQ==} - '@types/prop-types@15.7.15': resolution: {integrity: sha512-F6bEyamV9jKGAFBEmlQnesRPGOQqS2+Uwi0Em15xenOxHaf2hv6L8YCVn3rPdPJOiJfPiCnLIRyvwVaqMY3MIw==} @@ -3071,9 +3268,6 @@ packages: '@types/semver@7.7.1': resolution: {integrity: sha512-FmgJfu+MOcQ370SD0ev7EI8TlCAfKYU+B4m5T3yXc1CiRN94g/SZPtsCkk506aUDtlMnFZvasDwHHUcZUEaYuA==} - '@types/shimmer@1.2.0': - resolution: {integrity: sha512-UE7oxhQLLd9gub6JKIAhDq06T0F6FnztwMNRvYgjeQSBeMc1ZG/tA47EwfduvkuQS8apbkM/lpLpWsaCeYsXVg==} - '@types/statuses@2.0.6': resolution: {integrity: sha512-xMAgYwceFhRA2zY+XbEA7mxYbA093wdiW8Vu6gZPGWy9cmOyU9XesH1tNcEWsKFd5Vzrqx5T3D38PWx1FIIXkA==} @@ -3089,8 +3283,8 @@ packages: '@types/unist@3.0.3': resolution: {integrity: sha512-ko/gIFJRv177XgZsZcBwnqJN5x/Gien8qNOn0D5bQU/zAzVf9Zt3BlcUiLqhV9y4ARk0GbT3tnUiPNgnTXzc/Q==} - '@types/urijs@1.19.25': - resolution: {integrity: 
sha512-XOfUup9r3Y06nFAZh3WvO0rBU4OtlfPB/vgxpjg+NRdGU6CN6djdc6OEiH+PcqHCY6eFLo9Ista73uarf4gnBg==} + '@types/urijs@1.19.26': + resolution: {integrity: sha512-wkXrVzX5yoqLnndOwFsieJA7oKM8cNkOKJtf/3vVGSUFkWDKZvFHpIl9Pvqb/T9UsawBBFMTTD8xu7sK5MWuvg==} '@types/use-sync-external-store@0.0.6': resolution: {integrity: sha512-zFDAD+tlpf2r4asuHEj0XH6pY6i0g5NeAHPn+15wk3BV6JA69eERFXC1gyGThDkVa1zCyKr5jox1+2LbV/AMLg==} @@ -3098,16 +3292,16 @@ packages: '@types/ws@8.18.1': resolution: {integrity: sha512-ThVF6DCVhA8kUGy+aazFQ4kXQ7E1Ty7A3ypFOe0IcJV8O/M511G99AW24irKrW56Wt44yG9+ij8FaqoBGkuBXg==} - '@typescript-eslint/eslint-plugin@8.43.0': - resolution: {integrity: sha512-8tg+gt7ENL7KewsKMKDHXR1vm8tt9eMxjJBYINf6swonlWgkYn5NwyIgXpbbDxTNU5DgpDFfj95prcTq2clIQQ==} + '@typescript-eslint/eslint-plugin@8.48.1': + resolution: {integrity: sha512-X63hI1bxl5ohelzr0LY5coufyl0LJNthld+abwxpCoo6Gq+hSqhKwci7MUWkXo67mzgUK6YFByhmaHmUcuBJmA==} engines: {node: ^18.18.0 || ^20.9.0 || >=21.1.0} peerDependencies: - '@typescript-eslint/parser': ^8.43.0 + '@typescript-eslint/parser': ^8.48.1 eslint: ^8.57.0 || ^9.0.0 typescript: '>=4.8.4 <6.0.0' - '@typescript-eslint/parser@8.43.0': - resolution: {integrity: sha512-B7RIQiTsCBBmY+yW4+ILd6mF5h1FUwJsVvpqkrgpszYifetQ2Ke+Z4u6aZh0CblkUGIdR59iYVyXqqZGkZ3aBw==} + '@typescript-eslint/parser@8.48.1': + resolution: {integrity: sha512-PC0PDZfJg8sP7cmKe6L3QIL8GZwU5aRvUFedqSIpw3B+QjRSUZeeITC2M5XKeMXEzL6wccN196iy3JLwKNvDVA==} engines: {node: ^18.18.0 || ^20.9.0 || >=21.1.0} peerDependencies: eslint: ^8.57.0 || ^9.0.0 @@ -3119,18 +3313,50 @@ packages: peerDependencies: typescript: '>=4.8.4 <6.0.0' + '@typescript-eslint/project-service@8.46.2': + resolution: {integrity: sha512-PULOLZ9iqwI7hXcmL4fVfIsBi6AN9YxRc0frbvmg8f+4hQAjQ5GYNKK0DIArNo+rOKmR/iBYwkpBmnIwin4wBg==} + engines: {node: ^18.18.0 || ^20.9.0 || >=21.1.0} + peerDependencies: + typescript: '>=4.8.4 <6.0.0' + + '@typescript-eslint/project-service@8.48.1': + resolution: {integrity: sha512-HQWSicah4s9z2/HifRPQ6b6R7G+SBx64JlFQpgSSHWPKdvCZX57XCbszg/bapbRsOEv42q5tayTYcEFpACcX1w==} + engines: {node: ^18.18.0 || ^20.9.0 || >=21.1.0} + peerDependencies: + typescript: '>=4.8.4 <6.0.0' + '@typescript-eslint/scope-manager@8.43.0': resolution: {integrity: sha512-daSWlQ87ZhsjrbMLvpuuMAt3y4ba57AuvadcR7f3nl8eS3BjRc8L9VLxFLk92RL5xdXOg6IQ+qKjjqNEimGuAg==} engines: {node: ^18.18.0 || ^20.9.0 || >=21.1.0} + '@typescript-eslint/scope-manager@8.46.2': + resolution: {integrity: sha512-LF4b/NmGvdWEHD2H4MsHD8ny6JpiVNDzrSZr3CsckEgCbAGZbYM4Cqxvi9L+WqDMT+51Ozy7lt2M+d0JLEuBqA==} + engines: {node: ^18.18.0 || ^20.9.0 || >=21.1.0} + + '@typescript-eslint/scope-manager@8.48.1': + resolution: {integrity: sha512-rj4vWQsytQbLxC5Bf4XwZ0/CKd362DkWMUkviT7DCS057SK64D5lH74sSGzhI6PDD2HCEq02xAP9cX68dYyg1w==} + engines: {node: ^18.18.0 || ^20.9.0 || >=21.1.0} + '@typescript-eslint/tsconfig-utils@8.43.0': resolution: {integrity: sha512-ALC2prjZcj2YqqL5X/bwWQmHA2em6/94GcbB/KKu5SX3EBDOsqztmmX1kMkvAJHzxk7TazKzJfFiEIagNV3qEA==} engines: {node: ^18.18.0 || ^20.9.0 || >=21.1.0} peerDependencies: typescript: '>=4.8.4 <6.0.0' - '@typescript-eslint/type-utils@8.43.0': - resolution: {integrity: sha512-qaH1uLBpBuBBuRf8c1mLJ6swOfzCXryhKND04Igr4pckzSEW9JX5Aw9AgW00kwfjWJF0kk0ps9ExKTfvXfw4Qg==} + '@typescript-eslint/tsconfig-utils@8.46.2': + resolution: {integrity: sha512-a7QH6fw4S57+F5y2FIxxSDyi5M4UfGF+Jl1bCGd7+L4KsaUY80GsiF/t0UoRFDHAguKlBaACWJRmdrc6Xfkkag==} + engines: {node: ^18.18.0 || ^20.9.0 || >=21.1.0} + peerDependencies: + typescript: '>=4.8.4 <6.0.0' + + 
'@typescript-eslint/tsconfig-utils@8.48.1': + resolution: {integrity: sha512-k0Jhs4CpEffIBm6wPaCXBAD7jxBtrHjrSgtfCjUvPp9AZ78lXKdTR8fxyZO5y4vWNlOvYXRtngSZNSn+H53Jkw==} + engines: {node: ^18.18.0 || ^20.9.0 || >=21.1.0} + peerDependencies: + typescript: '>=4.8.4 <6.0.0' + + '@typescript-eslint/type-utils@8.48.1': + resolution: {integrity: sha512-1jEop81a3LrJQLTf/1VfPQdhIY4PlGDBc/i67EVWObrtvcziysbLN3oReexHOM6N3jyXgCrkBsZpqwH0hiDOQg==} engines: {node: ^18.18.0 || ^20.9.0 || >=21.1.0} peerDependencies: eslint: ^8.57.0 || ^9.0.0 @@ -3140,12 +3366,32 @@ packages: resolution: {integrity: sha512-vQ2FZaxJpydjSZJKiSW/LJsabFFvV7KgLC5DiLhkBcykhQj8iK9BOaDmQt74nnKdLvceM5xmhaTF+pLekrxEkw==} engines: {node: ^18.18.0 || ^20.9.0 || >=21.1.0} + '@typescript-eslint/types@8.46.2': + resolution: {integrity: sha512-lNCWCbq7rpg7qDsQrd3D6NyWYu+gkTENkG5IKYhUIcxSb59SQC/hEQ+MrG4sTgBVghTonNWq42bA/d4yYumldQ==} + engines: {node: ^18.18.0 || ^20.9.0 || >=21.1.0} + + '@typescript-eslint/types@8.48.1': + resolution: {integrity: sha512-+fZ3LZNeiELGmimrujsDCT4CRIbq5oXdHe7chLiW8qzqyPMnn1puNstCrMNVAqwcl2FdIxkuJ4tOs/RFDBVc/Q==} + engines: {node: ^18.18.0 || ^20.9.0 || >=21.1.0} + '@typescript-eslint/typescript-estree@8.43.0': resolution: {integrity: sha512-7Vv6zlAhPb+cvEpP06WXXy/ZByph9iL6BQRBDj4kmBsW98AqEeQHlj/13X+sZOrKSo9/rNKH4Ul4f6EICREFdw==} engines: {node: ^18.18.0 || ^20.9.0 || >=21.1.0} peerDependencies: typescript: '>=4.8.4 <6.0.0' + '@typescript-eslint/typescript-estree@8.46.2': + resolution: {integrity: sha512-f7rW7LJ2b7Uh2EiQ+7sza6RDZnajbNbemn54Ob6fRwQbgcIn+GWfyuHDHRYgRoZu1P4AayVScrRW+YfbTvPQoQ==} + engines: {node: ^18.18.0 || ^20.9.0 || >=21.1.0} + peerDependencies: + typescript: '>=4.8.4 <6.0.0' + + '@typescript-eslint/typescript-estree@8.48.1': + resolution: {integrity: sha512-/9wQ4PqaefTK6POVTjJaYS0bynCgzh6ClJHGSBj06XEHjkfylzB+A3qvyaXnErEZSaxhIo4YdyBgq6j4RysxDg==} + engines: {node: ^18.18.0 || ^20.9.0 || >=21.1.0} + peerDependencies: + typescript: '>=4.8.4 <6.0.0' + '@typescript-eslint/utils@8.43.0': resolution: {integrity: sha512-S1/tEmkUeeswxd0GGcnwuVQPFWo8NzZTOMxCvw8BX7OMxnNae+i8Tm7REQen/SwUIPoPqfKn7EaZ+YLpiB3k9g==} engines: {node: ^18.18.0 || ^20.9.0 || >=21.1.0} @@ -3153,10 +3399,32 @@ packages: eslint: ^8.57.0 || ^9.0.0 typescript: '>=4.8.4 <6.0.0' + '@typescript-eslint/utils@8.46.2': + resolution: {integrity: sha512-sExxzucx0Tud5tE0XqR0lT0psBQvEpnpiul9XbGUB1QwpWJJAps1O/Z7hJxLGiZLBKMCutjTzDgmd1muEhBnVg==} + engines: {node: ^18.18.0 || ^20.9.0 || >=21.1.0} + peerDependencies: + eslint: ^8.57.0 || ^9.0.0 + typescript: '>=4.8.4 <6.0.0' + + '@typescript-eslint/utils@8.48.1': + resolution: {integrity: sha512-fAnhLrDjiVfey5wwFRwrweyRlCmdz5ZxXz2G/4cLn0YDLjTapmN4gcCsTBR1N2rWnZSDeWpYtgLDsJt+FpmcwA==} + engines: {node: ^18.18.0 || ^20.9.0 || >=21.1.0} + peerDependencies: + eslint: ^8.57.0 || ^9.0.0 + typescript: '>=4.8.4 <6.0.0' + '@typescript-eslint/visitor-keys@8.43.0': resolution: {integrity: sha512-T+S1KqRD4sg/bHfLwrpF/K3gQLBM1n7Rp7OjjikjTEssI2YJzQpi5WXoynOaQ93ERIuq3O8RBTOUYDKszUCEHw==} engines: {node: ^18.18.0 || ^20.9.0 || >=21.1.0} + '@typescript-eslint/visitor-keys@8.46.2': + resolution: {integrity: sha512-tUFMXI4gxzzMXt4xpGJEsBsTox0XbNQ1y94EwlD/CuZwFcQP79xfQqMhau9HsRc/J0cAPA/HZt1dZPtGn9V/7w==} + engines: {node: ^18.18.0 || ^20.9.0 || >=21.1.0} + + '@typescript-eslint/visitor-keys@8.48.1': + resolution: {integrity: sha512-BmxxndzEWhE4TIEEMBs8lP3MBWN3jFPs/p6gPm/wkv02o41hI6cq9AuSmGAaTTHPtA1FTi2jBre4A9rm5ZmX+Q==} + engines: {node: ^18.18.0 || ^20.9.0 || >=21.1.0} + '@ungap/structured-clone@1.3.0': resolution: 
{integrity: sha512-WmoN8qaIAo7WTYWbAZuG8PYEhn5fkz7dZrqTBZ7dtt//lL2Gwms1IcnQ5yHqjDfX8Ft5j4YzDM23f87zBfDe9g==} @@ -3378,14 +3646,14 @@ packages: '@xtuc/long@4.2.2': resolution: {integrity: sha512-NuHqBY1PB/D8xU6s/thBgOAiAP7HOYDQ32+BFZILJ8ivkUkAHQnWfn6WhL79Owj1qmUnoN/YPhktdIoucipkAQ==} - '@xyflow/react@12.8.3': - resolution: {integrity: sha512-8sdRZPMCzfhauF96krlUMPCKmi9cX64HsYG8qoVAAvTKDAqxXg7RSp/IhoXlzbI/lsRD1vAxeDBxvI/XqACa6g==} + '@xyflow/react@12.9.2': + resolution: {integrity: sha512-Xr+LFcysHCCoc5KRHaw+FwbqbWYxp9tWtk1mshNcqy25OAPuaKzXSdqIMNOA82TIXF/gFKo0Wgpa6PU7wUUVqw==} peerDependencies: react: '>=17' react-dom: '>=17' - '@xyflow/system@0.0.67': - resolution: {integrity: sha512-hYsmbj+8JDei0jmupBmxNLaeJEcf9kKmMl6IziGe02i0TOCsHwjIdP+qz+f4rI1/FR2CQiCZJrw4dkHOLC6tEQ==} + '@xyflow/system@0.0.72': + resolution: {integrity: sha512-WBI5Aau0fXTXwxHPzceLNS6QdXggSWnGjDtj/gG669crApN8+SCmEtkBth1m7r6pStNo/5fI9McEi7Dk0ymCLA==} abort-controller@3.0.0: resolution: {integrity: sha512-h8lQ8tacZYnR3vNQTgibj+tODHI5/+l06Au2Pcriv/Gmet0eaj4TwWH41sO9wnHDiQsEj19q0drzdWdeAHtweg==} @@ -3461,10 +3729,6 @@ packages: resolution: {integrity: sha512-/6w/C21Pm1A7aZitlI5Ni/2J6FFQN8i1Cvz3kHABAAbw93v/NlvKdVOqz7CCWz/3iv/JplRSEEZ83XION15ovw==} engines: {node: '>=6'} - ansi-escapes@4.3.2: - resolution: {integrity: sha512-gKXj5ALrKWQLsYG9jlTRmR/xKluxHV+Z9QEwNIgCfM1/uwPMCuzVVnh5mwTd+OuBZcwSIMbqssNWRm1lE51QaQ==} - engines: {node: '>=8'} - ansi-html-community@0.0.8: resolution: {integrity: sha512-1APHAyr3+PCamwNw3bXCPp4HFLONZt/yIH0sZp0/469KWNTEy+qN5jQ3GVX6DMZ1UXAi34yVwtTeaG/HpBuuzw==} engines: {'0': node >= 0.8.0} @@ -3588,14 +3852,18 @@ packages: resolution: {integrity: sha512-Xm7bpRXnDSX2YE2YFfBk2FnF0ep6tmG7xPh8iHee8MIcrgq762Nkce856dYtJYLkuIoYZvGfTs/PbZhideTcEg==} engines: {node: '>=4'} + axe-core@4.11.0: + resolution: {integrity: sha512-ilYanEU8vxxBexpJd8cWM4ElSQq4QctCLKih0TSfjIfCQTeyH/6zVrmIJfLPrKTKJRbiG+cfnZbQIjAlJmF1jQ==} + engines: {node: '>=4'} + axe-html-reporter@2.2.11: resolution: {integrity: sha512-WlF+xlNVgNVWiM6IdVrsh+N0Cw7qupe5HT9N6Uyi+aN7f6SSi92RDomiP1noW8OWIV85V6x404m5oKMeqRV3tQ==} engines: {node: '>=8.9.0'} peerDependencies: axe-core: '>=3' - axe-playwright@2.1.0: - resolution: {integrity: sha512-tY48SX56XaAp16oHPyD4DXpybz8Jxdz9P7exTjF/4AV70EGUavk+1fUPWirM0OYBR+YyDx6hUeDvuHVA6fB9YA==} + axe-playwright@2.2.2: + resolution: {integrity: sha512-h350/grzDCPgpuWV7eEOqr/f61Xn07Gi9f9B3Ew4rW6/nFtpdEJYW6jgRATorgAGXjEAYFTnaY3sEys39wDw4A==} peerDependencies: playwright: '>1.0.0' @@ -3707,10 +3975,6 @@ packages: builtin-status-codes@3.0.0: resolution: {integrity: sha512-HpGFw18DgFWlncDfjTa2rcQ4W88O1mC8e8yZ2AvQY5KDaktSTwo+KRf6nHK6FRI5FyRyb/5T6+TSxfP7QyGsmQ==} - cac@6.7.14: - resolution: {integrity: sha512-b6Ilus+c3RrdDk+JhLKUAQfzzgLEPy6wcXqS7f/xe1EETvsDP6GORG7SFuOs6cID5YkqchW/LXZbX5bc8j7ZcQ==} - engines: {node: '>=8'} - call-bind-apply-helpers@1.0.2: resolution: {integrity: sha512-Sp1ablJ0ivDkSzjcaJdxEunN5/XvksFJ2sMBFfq6x0ryhQV/2b/KwFe21cMpmHtPOSij8K99/wSfoEuTObmuMQ==} engines: {node: '>= 0.4'} @@ -3740,9 +4004,6 @@ packages: camelize@1.0.1: resolution: {integrity: sha512-dU+Tx2fsypxTgtLoE36npi3UqcjSSMNYfkqgmoEhtZrraP5VWq0K7FkWVTYa8eMPtnU/G2txVsfdCJTn9uzpuQ==} - caniuse-lite@1.0.30001735: - resolution: {integrity: sha512-EV/laoX7Wq2J9TQlyIXRxTJqIw4sxfXS4OYgudGxBYRuTv0q7AM6yMEpU/Vo1I94thg9U6EZ2NfZx9GJq83u7w==} - caniuse-lite@1.0.30001741: resolution: {integrity: sha512-QGUGitqsc8ARjLdgAfxETDhRbJ0REsP6O3I96TAth/mVjh2cYzN2u+3AzPP3aVSm2FehEItaJw1xd+IGBXWeSw==} @@ -3757,10 +4018,6 @@ packages: resolution: {integrity: 
sha512-4zNhdJD/iOjSH0A05ea+Ke6MU5mmpQcbQsSOkgdaUMJ9zTlDTD/GYlwohmIE2u0gaxHYiVHEn1Fw9mZ/ktJWgw==} engines: {node: '>=18'} - chalk@3.0.0: - resolution: {integrity: sha512-4D3B6Wf41KOYRFdszmDqMCGq5VV/uMAB273JILmO+3jAlh8X4qDtdtgCR3fxtbLEMzSx22QdhnDcJvu2u1fVwg==} - engines: {node: '>=8'} - chalk@4.1.2: resolution: {integrity: sha512-oKnbhFyRIXpUuez8iBMmyEa4nbj4IOQyuhc/wy9kY7/WVPcwIO9VA668Pu8RkO7+0G76SLROeyw9CpQ061i4mA==} engines: {node: '>=10'} @@ -3801,8 +4058,8 @@ packages: '@chromatic-com/playwright': optional: true - chromatic@13.1.4: - resolution: {integrity: sha512-6Voxdy2OvSyoA7mJjyiFiWii7d8ng0jBcW97TqL+ptlAWrJhIf10jrJ78KLPDUNOBIPxvx9Vcpe/bUwoLFIG5g==} + chromatic@13.3.3: + resolution: {integrity: sha512-89w0hiFzIRqLbwGSkqSQzhbpuqaWpXYZuevSIF+570Wb+T/apeAkp3px8nMJcFw+zEdqw/i6soofkJtfirET1Q==} hasBin: true peerDependencies: '@chromatic-com/cypress': ^0.*.* || ^1.0.0 @@ -3817,8 +4074,8 @@ packages: resolution: {integrity: sha512-rNjApaLzuwaOTjCiT8lSDdGN1APCiqkChLMJxJPWLunPAt5fy8xgU9/jNOchV84wfIxrA0lRQB7oCT8jrn/wrQ==} engines: {node: '>=6.0'} - cipher-base@1.0.6: - resolution: {integrity: sha512-3Ek9H3X6pj5TgenXYtNWdaBon1tgYCaebd+XPg0keyjEbEfkD4KkmAxkQ/i1vYvxdcT5nscLBfq9VJRmCBcFSw==} + cipher-base@1.0.7: + resolution: {integrity: sha512-Mz9QMT5fJe7bKI7MH31UilT5cEK5EHHRCccw/YRFsRY47AuNgaV6HY3rscp0/I4Q+tTW/5zoqpSeRRI54TkDWA==} engines: {node: '>= 0.10'} cjs-module-lexer@1.4.3: @@ -3875,6 +4132,10 @@ packages: comma-separated-tokens@2.0.3: resolution: {integrity: sha512-Fu4hJdvzeylCfQPp9SGWidpzrMs7tTrlu6Vb8XGaRGck8QSNZJJp538Wrb60Lax4fPwR64ViY468OIUTbRlGZg==} + commander@14.0.2: + resolution: {integrity: sha512-TywoWNNRbhoD0BXs1P3ZEScW8W5iKrnbithIl0YH+uCmBd0QpPOA8yc82DS3BIE5Ma6FnBVUsJ7wVUDz4dvOWQ==} + engines: {node: '>=20'} + commander@2.20.3: resolution: {integrity: sha512-GpVkmM8vF2vQUkj2LvZmD35JxeJOLCwJ9cUkugyk2nuhbv3+mJvpLYYt+0+USMxE+oj+ey/lJEnhZw75x/OMcQ==} @@ -3921,10 +4182,6 @@ packages: convert-source-map@2.0.0: resolution: {integrity: sha512-Kvp459HrV2FEJ1CAsi1Ku+MY3kasH19TFykTz2xWmMeq6bk2NU3XXvfJ+Q61m0xktWwt+1HSYf3JZsTms3aRJg==} - cookie@0.7.2: - resolution: {integrity: sha512-yki5XnKuf750l50uGTllt6kKILY4nQ1eNIQatoXEByZ5dWgnKqbnqmTrBE5B4N7lrMJKQ2ytWMiTO2o0v6Ew/w==} - engines: {node: '>= 0.6'} - cookie@1.0.2: resolution: {integrity: sha512-9Kr/j4O16ISv8zBBhJoi4bXOYNTkFLOqSL3UDB0njXxCXNezjeyVrJyGOWtgfs/q2km1gwBcfH8q1yEGoMYunA==} engines: {node: '>=18'} @@ -3954,18 +4211,15 @@ packages: create-ecdh@4.0.4: resolution: {integrity: sha512-mf+TCx8wWc9VpuxfP2ht0iSISLZnt0JgWlrOKZiNqyUZWnjIaCIVNQArMHnCZKfEYRg6IM7A+NeJoN8gf/Ws0A==} - create-hash@1.1.3: - resolution: {integrity: sha512-snRpch/kwQhcdlnZKYanNF1m0RDlrCdSKQaH87w1FCFPVPNCQ/Il9QJKAX2jVBZddRdaHBMC+zXa9Gw9tmkNUA==} - create-hash@1.2.0: resolution: {integrity: sha512-z00bCGNHDG8mHAkP7CtT1qVu+bFQUPjYq/4Iv3C3kWjTFV10zIjfSoeqXo9Asws8gwSHDGj/hl2u4OGIjapeCg==} create-hmac@1.1.7: resolution: {integrity: sha512-MJG9liiZ+ogc4TzUwuvbER1JRdgvUFSB5+VR/g5h82fGaIRWMWddtKBHi7/sVhfjQZ6SehlyhvQYrcYkaUIpLg==} - cross-env@7.0.3: - resolution: {integrity: sha512-+/HKd6EgcQCJGh2PSjZuUitQBQynKor4wrFbRg4DtAgS1aWO+gU52xpH7M9ScGgXSYmAVS9bIJ8EzuaGw0oNAw==} - engines: {node: '>=10.14', npm: '>=6', yarn: '>=1'} + cross-env@10.1.0: + resolution: {integrity: sha512-GsYosgnACZTADcmEyJctkJIoqAhHjttw7RsFrVoJNXbsWWqaq6Ym+7kZjq6mS45O0jij6vtiReppKQEtqWy6Dw==} + engines: {node: '>=20'} hasBin: true cross-spawn@7.0.6: @@ -4108,15 +4362,6 @@ packages: supports-color: optional: true - debug@4.4.1: - resolution: {integrity: 
sha512-KcKCqiftBJcZr++7ykoDIEwSa3XWowTfNPo92BYxjXiyYEVrUQh2aLyhxBCwww+heortUFxEJYcRzosstTEBYQ==} - engines: {node: '>=6.0'} - peerDependencies: - supports-color: '*' - peerDependenciesMeta: - supports-color: - optional: true - debug@4.4.3: resolution: {integrity: sha512-RGwwWnwQvkVfavKVt22FGLw+xYSdzARwm0ru6DhTVA3umU5hZc28V3kO4stgYryrTlLpuvgI9GiijltAjNbcqA==} engines: {node: '>=6.0'} @@ -4237,8 +4482,8 @@ packages: resolution: {integrity: sha512-uBq4egWHTcTt33a72vpSG0z3HnPuIl6NqYcTrKEg2azoEyl2hpW0zqlxysq2pK9HlDIHyHyakeYaYnSAwd8bow==} engines: {node: '>=12'} - dotenv@17.2.1: - resolution: {integrity: sha512-kQhDYKZecqnM0fCnzI5eIv5L4cAe/iRI+HqMbO/hbRdTAeXDG+M9FjipUxNfbARuEg4iHIbhnhs78BCHNbSxEQ==} + dotenv@17.2.3: + resolution: {integrity: sha512-JVUnt+DUIzu87TABbhPmNfVdBDt18BLOWjMUFJMSi/Qqg7NTYtabbvSNJGOJ7afbRuv9D/lngizHtP7QyLQ+9w==} engines: {node: '>=12'} dunder-proto@1.0.1: @@ -4359,6 +4604,11 @@ packages: peerDependencies: esbuild: '>=0.12 <1' + esbuild@0.25.11: + resolution: {integrity: sha512-KohQwyzrKTQmhXDW1PjCv3Tyspn9n5GcY2RTDqeORIdIJY8yKIF7sTSopFmn/wpMPW4rdPXI0UE5LJLuq3bx0Q==} + engines: {node: '>=18'} + hasBin: true + esbuild@0.25.9: resolution: {integrity: sha512-CRbODhYyQx3qp7ZEwzxOk4JBqmD/seJrzPa/cGjY1VtIn5E09Oi9/dB4JwctnfZ8Q8iT7rioVv5k/FNT/uf54g==} engines: {node: '>=18'} @@ -4376,8 +4626,8 @@ packages: resolution: {integrity: sha512-/veY75JbMK4j1yjvuUxuVsiS/hr/4iHs9FTT6cgTexxdE0Ly/glccBAkloH/DofkjRbZU3bnoj38mOmhkZ0lHw==} engines: {node: '>=12'} - eslint-config-next@15.5.2: - resolution: {integrity: sha512-3hPZghsLupMxxZ2ggjIIrat/bPniM2yRpsVPVM40rp8ZMzKWOJp2CGWn7+EzoV2ddkUr5fxNfHpF+wU1hGt/3g==} + eslint-config-next@15.5.7: + resolution: {integrity: sha512-nU/TRGHHeG81NeLW5DeQT5t6BDUqbpsNQTvef1ld/tqHT+/zTx60/TIhKnmPISTTe++DVo+DLxDmk4rnwHaZVw==} peerDependencies: eslint: ^7.23.0 || ^8.0.0 || ^9.0.0 typescript: '>=3.3.1' @@ -4629,6 +4879,12 @@ packages: resolution: {integrity: sha512-CYcENa+FtcUKLmhhqyctpclsq7QF38pKjZHsGNiSQF5r4FtoKDWabFDl3hzaEQMvT1LHEysw5twgLvpYYb4vbw==} engines: {node: ^10.12.0 || >=12.0.0} + flatbush@4.5.0: + resolution: {integrity: sha512-K7JSilGr4lySRLdJqKY45fu0m/dIs6YAAu/ESqdMsnW3pI0m3gpa6oRc6NDXW161Ov9+rIQjsuyOt5ObdIfgwg==} + + flatqueue@3.0.0: + resolution: {integrity: sha512-y1deYaVt+lIc/d2uIcWDNd0CrdQTO5xoCjeFdhX0kSXvm2Acm0o+3bAOiYklTEoRyzwio3sv3/IiBZdusbAe2Q==} + flatted@3.3.3: resolution: {integrity: sha512-GX+ysw4PBCz0PzosHDepZGANEuFCMLrnRTiEy9McGjmkCQYwRq4A/X786G/fjM/+OjsWSU1ZrY5qyARZmO/uwg==} @@ -4650,8 +4906,8 @@ packages: forwarded-parse@2.1.2: resolution: {integrity: sha512-alTFZZQDKMporBH77856pXgzhEzaUVmLCDk+egLgIgHst3Tpndzz8MnKe+GzRJRfvVdn69HhpW7cmXzvtLvJAw==} - framer-motion@12.23.12: - resolution: {integrity: sha512-6e78rdVtnBvlEVgu6eFEAgG9v3wLnYEboM8I5O5EXvfKC8gxGQB8wXJdhkMy10iVcn05jl6CNw7/HTsTCfwcWg==} + framer-motion@12.23.24: + resolution: {integrity: sha512-HMi5HRoRCTou+3fb3h9oTLyJGBxHfW+HnNE25tAXOvVx/IvwMHK0cx7IR4a2ZU6sh3IX1Z+4ts32PcYBOqka8w==} peerDependencies: '@emotion/is-prop-valid': '*' react: ^18.0.0 || ^19.0.0 @@ -4668,8 +4924,8 @@ packages: resolution: {integrity: sha512-oRXApq54ETRj4eMiFzGnHWGy+zo5raudjuxN0b8H7s/RU2oW0Wvsx9O0ACRN/kRq9E8Vu/ReskGB5o3ji+FzHQ==} engines: {node: '>=12'} - fs-extra@11.3.1: - resolution: {integrity: sha512-eXvGGwZ5CL17ZSwHWd3bbgk7UUpF6IFHtP57NYYakPvHOs8GDgDe5KJI36jIJzDkJ6eJjuzRA8eBQb6SkKue0g==} + fs-extra@11.3.2: + resolution: {integrity: sha512-Xr9F6z6up6Ws+NjzMCZc6WXg2YFRlrLP9NQDO3VQrWrfiojdhS56TzueT88ze0uBdCTwEIhQ3ptnmKeWGFAe0A==} engines: {node: '>=14.14'} fs-monkey@1.1.0: @@ -4698,11 +4954,15 @@ 
packages: functions-have-names@1.2.3: resolution: {integrity: sha512-xckBUXyTIqT97tq2x2AMb+g163b5JFysYk0x4qxNFwbfQkmNZoiRHb6sPzI9/QV33WeuvVYBUIiD4NzNIyqaRQ==} - geist@1.4.2: - resolution: {integrity: sha512-OQUga/KUc8ueijck6EbtT07L4tZ5+TZgjw8PyWfxo16sL5FWk7gNViPNU8hgCFjy6bJi9yuTP+CRpywzaGN8zw==} + geist@1.5.1: + resolution: {integrity: sha512-mAHZxIsL2o3ZITFaBVFBnwyDOw+zNLYum6A6nIjpzCGIO8QtC3V76XF2RnZTyLx1wlDTmMDy8jg3Ib52MIjGvQ==} peerDependencies: next: '>=13.2.0' + generator-function@2.0.1: + resolution: {integrity: sha512-SFdFmIJi+ybC0vjlHN0ZGVGHc3lgE0DxPAT0djjVg+kjOnSqclqmj0KQ7ykTOLP6YxoqOvuAODGdcHJn+43q3g==} + engines: {node: '>= 0.4'} + gensync@1.0.0-beta.2: resolution: {integrity: sha512-3hN7NaskYvMDLQY55gnW3NQ+mesEAepTqlg+VEbj7zzqEMBVNhzcGYYeqFo/TlYz6eQiFcp1HcsCZO+nGgS8zg==} engines: {node: '>=6.9.0'} @@ -4731,8 +4991,8 @@ packages: resolution: {integrity: sha512-w9UMqWwJxHNOvoNzSJ2oPF5wvYcvP7jUvYzhp67yEhTi17ZDBBC1z9pTdGuzjD+EFIqLSYRweZjqfiPzQ06Ebg==} engines: {node: '>= 0.4'} - get-tsconfig@4.10.1: - resolution: {integrity: sha512-auHyJ4AgMz7vgS8Hp3N6HXSmlMdUyhSUrfBF16w153rxtLIEOE+HGqaBppczZvnHLqQJfiHotCYpNhl0lUROFQ==} + get-tsconfig@4.13.0: + resolution: {integrity: sha512-1VKTZJCwBrvbd+Wn3AOgQP/2Av+TfTCOlE4AcRJE72W1ksZXbAx8PPBR9RzgTeSPzlPMHrbANMH3LbltH73wxQ==} github-slugger@2.0.0: resolution: {integrity: sha512-IaOQ9puYtjrkq7Y0Ygl9KDZnrf/aiUJYUpVf89y8kyaxbRG7Y1SrX/jaumrv81vc61+kiMempujsM3Yw7w5qcw==} @@ -4752,6 +5012,10 @@ packages: resolution: {integrity: sha512-7Bv8RF0k6xjo7d4A/PxYLbUCfb6c+Vpd2/mB2yRDlew7Jb5hEXiCD9ibfO7wpk8i4sevK6DFny9h7EYbM3/sHg==} hasBin: true + glob@10.5.0: + resolution: {integrity: sha512-DfXN8DfhJ7NH3Oe7cFmu3NCu1wKbkReJ8TorzSAFbSKrlNaQSKfIzqYqVY8zlbs2NLBbWpRiU52GX2PbaBVNkg==} + hasBin: true + glob@7.2.3: resolution: {integrity: sha512-nFR0zLpU2YCaRxwoCJvL6UvCH2JFyFVIvwTLsIf21AuHlMskA1hhTdk+LlYJtOlYt9v6dvszD2BGRqBL+iQK9Q==} deprecated: Glob versions prior to v9 are no longer supported @@ -4809,13 +5073,14 @@ packages: resolution: {integrity: sha512-NqADB8VjPFLM2V0VvHUewwwsw0ZWBaIdgo+ieHtK3hasLz4qeCRjYcqfB6AQrBggRKppKF8L52/VqdVsO47Dlw==} engines: {node: '>= 0.4'} - hash-base@2.0.2: - resolution: {integrity: sha512-0TROgQ1/SxE6KmxWSvXHvRj90/Xo1JvZShofnYF+f6ZsGtR4eES7WfrQzPalmyagfKZCXpVnitiRebZulWsbiw==} - hash-base@3.0.5: resolution: {integrity: sha512-vXm0l45VbcHEVlTCzs8M+s0VeYsB2lnlAaThoLKGXr3bE/VWDOelNUnycUPEhKEaXARL2TEFjBOyUiM6+55KBg==} engines: {node: '>= 0.10'} + hash-base@3.1.2: + resolution: {integrity: sha512-Bb33KbowVTIj5s7Ked1OsqHUeCpz//tPwR+E2zJgJKo9Z5XolZ9b6bdUgjmYlwnWhoOQKoTd1TYToZGn5mAYOg==} + engines: {node: '>= 0.8'} + hash.js@1.1.7: resolution: {integrity: sha512-taOaskGt4z4SOANNseOviYDvjEJinIkRgmp7LbKP2YTTmVxWBl87s/uzK9r+44BclBSp2X7K1hqeNfz9JbBeXA==} @@ -4948,8 +5213,8 @@ packages: resolution: {integrity: sha512-TR3KfrTZTYLPB6jUjfx6MF9WcWrHL9su5TObK4ZkYgBdWKPOFoSoQIdEuTuR82pmtxH2spWG9h6etwfr1pLBqQ==} engines: {node: '>=6'} - import-in-the-middle@1.14.2: - resolution: {integrity: sha512-5tCuY9BV8ujfOpwtAGgsTx9CGUapcFMEEyByLv1B+v2+6DhAcw+Zr0nhQT7uwaZ7DiourxFEscghOR8e1aPLQw==} + import-in-the-middle@2.0.0: + resolution: {integrity: sha512-yNZhyQYqXpkT0AKq3F3KLasUSK4fHvebNH5hOsKQw2dhGSALvQ4U0BqUc5suziKvydO5u5hgN2hy1RJaho8U5A==} imurmurhash@0.1.4: resolution: {integrity: sha512-JmXMZ6wuvDmLiHEml9ykzqO6lwFbof0GG4IkcGaENdCRDDmMVnny7s5HsIgHCbaq0w2MyPhDqkhTUgS2LU2PHA==} @@ -4959,6 +5224,9 @@ packages: resolution: {integrity: sha512-EdDDZu4A2OyIK7Lr/2zG+w5jmbuk1DVBnEwREQvBzspBJkCEbRa8GxU1lghYcaGJCnRWibjDXlq779X1/y5xwg==} engines: {node: '>=8'} + 
inflected@2.1.0: + resolution: {integrity: sha512-hAEKNxvHf2Iq3H60oMBHkB4wl5jn3TPF3+fXek/sRwAB5gP9xWs4r7aweSF95f99HFoz69pnZTcu8f0SIHV18w==} + inflight@1.0.6: resolution: {integrity: sha512-k92I/b08q4wvFscXCLvqfsHCrjrF7yiXsQuIVvVE7N82W3+aqpzuUdBbfhWcy/FZR3/4IgflMgKLOsvPDrGCJA==} deprecated: This module is not supported, and leaks memory. Do not use it. Check out lru-cache if you want a good and tested way to coalesce async requests by a key value, which is much more comprehensive and powerful. @@ -5056,6 +5324,10 @@ packages: resolution: {integrity: sha512-nPUB5km40q9e8UfN/Zc24eLlzdSf9OfKByBw9CIdw4H1giPMeA0OIJvbchsCu4npfI2QcMVBsGEBHKZ7wLTWmQ==} engines: {node: '>= 0.4'} + is-generator-function@1.1.2: + resolution: {integrity: sha512-upqt1SkGkODW9tsGNG5mtXTXtECizwtS2kA161M+gJPc1xdb/Ax629af6YrTwcOeQHbewrPNlE5Dx7kzvXTizA==} + engines: {node: '>= 0.4'} + is-glob@4.0.3: resolution: {integrity: sha512-xelSayHH36ZgE7ZWhli7pW34hNbNl8Ojv5KVmkJD4hBdD3th8Tfk9vYasLM+mXWOZhFkgZfxhLSnrwRr4elSSg==} engines: {node: '>=0.10.0'} @@ -5250,8 +5522,8 @@ packages: resolution: {integrity: sha512-ZNOIIGMzqCGcHQEA2Q4rIQQ3Df6gSIfne+X9Rly9Bc2y55KxAZu8iGv+n2pP0bLf0XAOctJZgeloC54hWzCahQ==} engines: {node: '>=16'} - katex@0.16.22: - resolution: {integrity: sha512-XCHRdUw4lf3SKBaJe4EvgqIuWwkPSo9XoeO8GjQW94Bp7TWv9hNhzZjZ+OH9yf1UmLygb7DIT5GSFQiyt16zYg==} + katex@0.16.25: + resolution: {integrity: sha512-woHRUZ/iF23GBP1dkDQMh1QBad9dmr8/PAwNA54VrSOVYgI12MAcE14TqnDdQOdzyEonGzMepYnqBMYdsoAr8Q==} hasBin: true keyv@4.5.4: @@ -5264,14 +5536,14 @@ packages: resolution: {integrity: sha512-MbjN408fEndfiQXbFQ1vnd+1NoLDsnQW41410oQBXiyXDMYH5z505juWa4KUE1LqxRC7DgOgZDbKLxHIwm27hA==} engines: {node: '>=0.10'} - launchdarkly-js-client-sdk@3.8.1: - resolution: {integrity: sha512-Y05FXM8FAXAMbbJqeI+ffr6a4m2M/TBUccgI9ejWPSxQS+/b2t+FBWZzfmc7wXuOOYzgGkpHHfQ6bFDU9NKPWQ==} + launchdarkly-js-client-sdk@3.9.0: + resolution: {integrity: sha512-uPL9il6dOZrVQqEcpjDYc2c7HtBTlKpLJb1Q0187i4UokBVZwBXWKjTnNk9hkwaDD5PGD4puoe7POikrR8ACwQ==} - launchdarkly-js-sdk-common@5.7.1: - resolution: {integrity: sha512-RFFeoYVL764zarFpU16lDt1yHzUCt0rnYYKlX5LLtZ5Nhq+2fzE33xRolP/sjxAYVInD0o5z6jKTlDe8gtcDYg==} + launchdarkly-js-sdk-common@5.8.0: + resolution: {integrity: sha512-9X70K3kN1fuR6ZnRudkH7etMgFhi3sEU0mnJ+y2nhID+DpfkNDVnYUGnUs8/s4tsSDs7Q7Gpm4qnr3oqOqT9+A==} - launchdarkly-react-client-sdk@3.8.1: - resolution: {integrity: sha512-lQleTycQwAuNysNsV3VBC31N+wtCEF1FFfyffXlNV1g89jRG5dmEoChCqiuJVnlpL3l4O8+8HIbHuPALcWxCTQ==} + launchdarkly-react-client-sdk@3.9.0: + resolution: {integrity: sha512-Ayw6v5nfT0YoshUI89YH7lkOu+qAEtp9t743+dS6xnFq1IVOnckFlz4AoHa6smVjC3nPzj5n0dCLsRVfVIVEOg==} peerDependencies: react: ^16.6.3 || ^17.0.0 || ^18.0.0 || ^19.0.0 react-dom: ^16.8.4 || ^17.0.0 || ^18.0.0 || ^19.0.0 @@ -5380,8 +5652,8 @@ packages: lru-cache@5.1.1: resolution: {integrity: sha512-KpNARQA3Iwv+jTA0utUVVbrh+Jlrr1Fv0e56GGzAFOXN7dk/FviaDW8LHmK52DlcH4WP2n6gI8vN1aesBFgo9w==} - lucide-react@0.539.0: - resolution: {integrity: sha512-VVISr+VF2krO91FeuCrm1rSOLACQUYVy7NQkzrOty52Y8TlTPcXcMdQFj9bYzBgXbWCiywlwSZ3Z8u6a+6bMlg==} + lucide-react@0.552.0: + resolution: {integrity: sha512-g9WCjmfwqbexSnZE+2cl21PCfXOcqnGeWeMTNAOGEfpPbm/ZF4YIq77Z8qWrxbu660EKuLB4nSLggoKnCb+isw==} peerDependencies: react: ^16.5.1 || ^17.0.0 || ^18.0.0 || ^19.0.0 @@ -5631,17 +5903,14 @@ packages: resolution: {integrity: sha512-qOOzS1cBTWYF4BH8fVePDBOO9iptMnGUEZwNc/cMWnTV2nVLZ7VoNWEPHkYczZA0pdoA7dl6e7FL659nX9S2aw==} engines: {node: '>=16 || 14 >=14.17'} - mitt@3.0.1: - resolution: {integrity: 
sha512-vKivATfr97l2/QBCYAkXYDbrIWPM2IIKEl7YPhjCvKlG3kE2gm+uBo6nEXK3M5/Ffh/FLpKExzOQ3JJoJGFKBw==} - module-details-from-path@1.0.4: resolution: {integrity: sha512-EGWKgxALGMgzvxYF1UyGTy0HXX/2vHLkw6+NvDKW2jypWbHpjQuj4UMcqQWXHERJhVGKikolT06G3bcKe4fi7w==} moment@2.30.1: resolution: {integrity: sha512-uEmtNhbDOrWPFS+hdjFCBfy9f2YoyzRpwcl+DqpC6taX21FzsTLQVbMV/W7PzNSX6x/bhC1zA3c2UQ5NzH6how==} - motion-dom@12.23.12: - resolution: {integrity: sha512-RcR4fvMCTESQBD/uKQe49D5RUeDOokkGRmz4ceaJKDBgHYtZtntC/s2vLvY38gqGaytinij/yi3hMcWVcEF5Kw==} + motion-dom@12.23.23: + resolution: {integrity: sha512-n5yolOs0TQQBRUFImrRfs/+6X4p3Q4n1dUEqt/H58Vx7OW6RF+foWEgmTVDhIWJIMXOuNNL0apKH2S16en9eiA==} motion-utils@12.23.6: resolution: {integrity: sha512-eAWoPgr4eFEOFfg2WjIsMoqJTW6Z8MTUCgn/GZ3VRpClWBdnbjryiA3ZSNLyxCTmCQx4RmYX6jX1iWHbenUPNQ==} @@ -5649,13 +5918,13 @@ packages: ms@2.1.3: resolution: {integrity: sha512-6FlzubTLZG3J2a/NVCAleEhjzq5oxgHyaCU9yYXvcLsvoVaHJq/s5xXI6/XXP6tz7R9xAOtHnSO/tXtF3WRTlA==} - msw-storybook-addon@2.0.5: - resolution: {integrity: sha512-uum2gtprDBoUb8GV/rPMwPytHmB8+AUr25BQUY0MpjYey5/ujaew2Edt+4oHiXpLTd0ThyMqmEvGy/sRpDV4lg==} + msw-storybook-addon@2.0.6: + resolution: {integrity: sha512-ExCwDbcJoM2V3iQU+fZNp+axVfNc7DWMRh4lyTXebDO8IbpUNYKGFUrA8UqaeWiRGKVuS7+fU+KXEa9b0OP6uA==} peerDependencies: msw: ^2.0.0 - msw@2.11.1: - resolution: {integrity: sha512-dGSRx0AJmQVQfpGXTsAAq4JFdwdhOBdJ6sJS/jnN0ac3s0NZB6daacHF1z5Pefx+IejmvuiLWw260RlyQOf3sQ==} + msw@2.11.6: + resolution: {integrity: sha512-MCYMykvmiYScyUm7I6y0VCxpNq1rgd5v7kG8ks5dKtvmxRUUPjribX6mUoUNBbM5/3PhUyoelEWiKXGOz84c+w==} engines: {node: '>=18'} hasBin: true peerDependencies: @@ -5680,8 +5949,8 @@ packages: engines: {node: ^10 || ^12 || ^13.7 || ^14 || >=15.0.1} hasBin: true - napi-postinstall@0.3.3: - resolution: {integrity: sha512-uTp172LLXSxuSYHv/kou+f6KW3SMppU9ivthaVTXian9sOt3XM/zHYHpRZiLgQoxeWfYUnslNWQHF1+G71xcow==} + napi-postinstall@0.3.4: + resolution: {integrity: sha512-PHI5f1O0EP5xJ9gQmFGMS6IZcrVvTjpXjz7Na41gTE7eE2hK11lg04CECCYEEjdc17EV4DO+fkGEtt7TpTaTiQ==} engines: {node: ^12.20.0 || ^14.18.0 || >=16.0.0} hasBin: true @@ -5697,8 +5966,8 @@ packages: react: ^16.8 || ^17 || ^18 || ^19 || ^19.0.0-rc react-dom: ^16.8 || ^17 || ^18 || ^19 || ^19.0.0-rc - next@15.4.7: - resolution: {integrity: sha512-OcqRugwF7n7mC8OSYjvsZhhG1AYSvulor1EIUsIkbbEbf1qoE5EbH36Swj8WhF4cHqmDgkiam3z1c1W0J1Wifg==} + next@15.4.10: + resolution: {integrity: sha512-itVlc79QjpKMFMRhP+kbGKaSG/gZM6RCvwhEbwmCNF06CdDiNaoHcbeg0PqkEa2GOcn8KJ0nnc7+yL7EjoYLHQ==} engines: {node: ^18.18.0 || ^19.8.0 || >= 20.0.0} hasBin: true peerDependencies: @@ -5764,10 +6033,11 @@ packages: nth-check@2.1.1: resolution: {integrity: sha512-lqjrjmaOoAnWfMmBPL+XNnynZh2+swxiX3WUE0s4yEHI6m+AwrK2UZOimIRl3X/4QctVqS8AiZjFqyOGrMXb/w==} - nuqs@2.4.3: - resolution: {integrity: sha512-BgtlYpvRwLYiJuWzxt34q2bXu/AIS66sLU1QePIMr2LWkb+XH0vKXdbLSgn9t6p7QKzwI7f38rX3Wl9llTXQ8Q==} + nuqs@2.7.2: + resolution: {integrity: sha512-wOPJoz5om7jMJQick9zU1S/Q+joL+B2DZTZxfCleHEcUzjUnPoujGod4+nAmUWb+G9TwZnyv+mfNqlyfEi8Zag==} peerDependencies: '@remix-run/react': '>=2' + '@tanstack/react-router': ^1 next: '>=14.2.0' react: '>=18.2.0 || ^19.0.0-0' react-router: ^6 || ^7 @@ -5775,6 +6045,8 @@ packages: peerDependenciesMeta: '@remix-run/react': optional: true + '@tanstack/react-router': + optional: true next: optional: true react-router: @@ -5855,18 +6127,15 @@ packages: openapi-types@12.1.3: resolution: {integrity: sha512-N4YtSYJqghVu4iek2ZUvcN/0aqH1kRDuNqzcycDxhOUpg7GdvLa2F3DgS6yBNhInhv2r/6I0Flkn7CqL8+nIcw==} - 
openapi3-ts@4.2.2: - resolution: {integrity: sha512-+9g4actZKeb3czfi9gVQ4Br2Ju3KwhCAQJBNaKgye5KggqcBLIhFHH+nIkcm0BUX00TrAJl6dH4JWgM4G4JWrw==} - - openapi3-ts@4.4.0: - resolution: {integrity: sha512-9asTNB9IkKEzWMcHmVZE7Ts3kC9G7AFHfs8i7caD8HbI76gEjdkId4z/AkP83xdZsH7PLAnnbl47qZkXuxpArw==} + openapi3-ts@4.5.0: + resolution: {integrity: sha512-jaL+HgTq2Gj5jRcfdutgRGLosCy/hT8sQf6VOy+P+g36cZOjI1iukdPnijC+4CmeRzg/jEllJUboEic2FhxhtQ==} optionator@0.9.4: resolution: {integrity: sha512-6IpQ7mKUxRcZNLIObR0hz7lxsapSSIYNZJwXPGeF0mTVqGKFIXj1DQcMoT22S3ROcLyY/rz0PWaWZ9ayWmad9g==} engines: {node: '>= 0.8.0'} - orval@7.11.2: - resolution: {integrity: sha512-Cjc/dgnQwAOkvymzvPpFqFc2nQwZ29E+ZFWUI8yKejleHaoFKIdwvkM/b1njtLEjePDcF0hyqXXCTz2wWaXLig==} + orval@7.13.0: + resolution: {integrity: sha512-8Q8BviorGpY2c252CxeeE8eFs7iBJX4KaTGxKmaardvRXjO0oWnEnaeAx9H5cB0FRYZPPKC0n5YHyo1GLs//CQ==} hasBin: true os-browserify@0.3.0: @@ -5977,9 +6246,9 @@ packages: resolution: {integrity: sha512-//nshmD55c46FuFw26xV/xFAaB5HF9Xdap7HJBBnrKdAd6/GxDBaNA1870O79+9ueg61cZLSVc+OaFlfmObYVQ==} engines: {node: '>= 14.16'} - pbkdf2@3.1.3: - resolution: {integrity: sha512-wfRLBZ0feWRhCIkoMB6ete7czJcnNnqRpcoWQBLqatqXXmelSRqfdDK4F3u9T2s2cXas/hQJcryI/4lAL+XTlA==} - engines: {node: '>=0.12'} + pbkdf2@3.1.5: + resolution: {integrity: sha512-Q3CG/cYvCO1ye4QKkuH7EXxs3VC/rI1/trd+qX2+PolbaKG0H+bgcZzrTt96mMyRtejk+JMCiLUn3y29W8qmFQ==} + engines: {node: '>= 0.10'} pg-int8@1.0.1: resolution: {integrity: sha512-WCtabS6t3c8SkpDBUlb1kjOs7l66xsGdKpIPZsg4wR+B3+u9UAum2odSsF9tnvxg80h4ZxLWMy4pRjOsFIqQpw==} @@ -6019,13 +6288,13 @@ packages: resolution: {integrity: sha512-Ie9z/WINcxxLp27BKOCHGde4ITq9UklYKDzVo1nhk5sqGEXU3FpkwP5GM2voTGJkGd9B3Otl+Q4uwSOeSUtOBA==} engines: {node: '>=14.16'} - playwright-core@1.55.0: - resolution: {integrity: sha512-GvZs4vU3U5ro2nZpeiwyb0zuFaqb9sUiAJuyrWpcGouD8y9/HLgGbNRjIph7zU9D3hnPaisMl9zG9CgFi/biIg==} + playwright-core@1.56.1: + resolution: {integrity: sha512-hutraynyn31F+Bifme+Ps9Vq59hKuUCz7H1kDOcBs+2oGguKkWTU50bBWrtz34OUWmIwpBTWDxaRPXrIXkgvmQ==} engines: {node: '>=18'} hasBin: true - playwright@1.55.0: - resolution: {integrity: sha512-sdCWStblvV1YU909Xqx0DhOjPZE4/5lJsIS84IfN9dAZfcl/CIZ5O8l3o0j7hPMjDvqoTF8ZUcc+i/GL5erstA==} + playwright@1.56.1: + resolution: {integrity: sha512-aFi5B0WovBHTEvpM3DzXTUaeN6eN0qWnTkKx4NQaH4Wvcmc153PdaY2UBdSYKaGYw+UyWXSVyxDUg5DoPEttjw==} engines: {node: '>=18'} hasBin: true @@ -6147,9 +6416,9 @@ packages: resolution: {integrity: sha512-vkcDPrRZo1QZLbn5RLGPpg/WmIQ65qoWWhcGKf/b5eplkkarX0m9z8ppCat4mlOqUsWpyNuYgO3VRyrYHSzX5g==} engines: {node: '>= 0.8.0'} - prettier-plugin-tailwindcss@0.6.14: - resolution: {integrity: sha512-pi2e/+ZygeIqntN+vC573BcW5Cve8zUB0SSAGxqpB4f96boZF4M3phPVoOFCeypwkpRYdi7+jQ5YJJUwrkGUAg==} - engines: {node: '>=14.21.3'} + prettier-plugin-tailwindcss@0.7.1: + resolution: {integrity: sha512-Bzv1LZcuiR1Sk02iJTS1QzlFNp/o5l2p3xkopwOrbPmtMeh3fK9rVW5M3neBQzHq+kGKj/4LGQMTNcTH4NGPtQ==} + engines: {node: '>=20.19'} peerDependencies: '@ianvs/prettier-plugin-sort-imports': '*' '@prettier/plugin-hermes': '*' @@ -6161,14 +6430,12 @@ packages: prettier: ^3.0 prettier-plugin-astro: '*' prettier-plugin-css-order: '*' - prettier-plugin-import-sort: '*' prettier-plugin-jsdoc: '*' prettier-plugin-marko: '*' prettier-plugin-multiline-arrays: '*' prettier-plugin-organize-attributes: '*' prettier-plugin-organize-imports: '*' prettier-plugin-sort-imports: '*' - prettier-plugin-style-order: '*' prettier-plugin-svelte: '*' peerDependenciesMeta: '@ianvs/prettier-plugin-sort-imports': @@ -6189,8 +6456,6 @@ 
packages: optional: true prettier-plugin-css-order: optional: true - prettier-plugin-import-sort: - optional: true prettier-plugin-jsdoc: optional: true prettier-plugin-marko: @@ -6203,8 +6468,6 @@ packages: optional: true prettier-plugin-sort-imports: optional: true - prettier-plugin-style-order: - optional: true prettier-plugin-svelte: optional: true @@ -6220,11 +6483,6 @@ packages: resolution: {integrity: sha512-Qb1gy5OrP5+zDf2Bvnzdl3jsTf1qXVMazbvCoKhtKqVs4/YK4ozX4gKQJJVyNe+cajNPn0KoC0MC3FUmaHWEmQ==} engines: {node: ^10.13.0 || ^12.13.0 || ^14.15.0 || >=15.0.0} - prism-react-renderer@2.4.1: - resolution: {integrity: sha512-ey8Ls/+Di31eqzUxC46h8MksNuGx/n0AAC8uKpwFau4RPDYLuE3EXTp8N8G2vX2N7UC/+IXeNUnlWBGGcAG+Ig==} - peerDependencies: - react: '>=16.0.0' - process-nextick-args@2.0.1: resolution: {integrity: sha512-3ouUOpQhtgrbOa17J7+uxOTpITYWaGP7/AhoR3+A+/1e9skrzelGi/dXzEYyvbxubEF6Wn2ypscTKiKJFFn1ag==} @@ -6280,8 +6538,13 @@ packages: resolution: {integrity: sha512-Hrgsx+orqoygnmhFbKaHE6c296J+HTAQXoxEF6gNupROmmGJRoyzfG3ccAveqCBrwr/2yxQ5BVd/GTl5agOwSg==} engines: {node: '>= 0.6'} - react-day-picker@9.8.1: - resolution: {integrity: sha512-kMcLrp3PfN/asVJayVv82IjF3iLOOxuH5TNFWezX6lS/T8iVRFPTETpHl3TUSTH99IDMZLubdNPJr++rQctkEw==} + react-currency-input-field@4.0.3: + resolution: {integrity: sha512-alimHDX5tplPsNB3jEAW7qjlJ76RfBAc/p8yru3cTiAYslj3oJ+KNnk788IZYe6ja3cAuH26v047lMzadh47ow==} + peerDependencies: + react: ^16.9.0 || ^17.0.0 || ^18.0.0 || ^19.0.0 + + react-day-picker@9.11.1: + resolution: {integrity: sha512-l3ub6o8NlchqIjPKrRFUCkTUEq6KwemQlfv3XZzzwpUeGwmDJ+0u0Upmt38hJyd7D/vn2dQoOoLV/qAp0o3uUw==} engines: {node: '>=18'} peerDependencies: react: '>=16.8.0' @@ -6306,8 +6569,8 @@ packages: react: ^18.0.0 react-dom: ^18.0.0 - react-hook-form@7.62.0: - resolution: {integrity: sha512-7KWFejc98xqG/F4bAxpL41NB3o1nnvQO1RWZT3TqRZYL8RryQETGfEdVnJN2fy1crCiBLLjkRBVK05j24FxJGA==} + react-hook-form@7.66.0: + resolution: {integrity: sha512-xXBqsWGKrY46ZqaHDo+ZUYiMUgi8suYu5kdrS20EG8KiL7VRQitEbNjm+UcrDYrNi1YLyfpmAeGjCZYXLT9YBw==} engines: {node: '>=18.0.0'} peerDependencies: react: ^16.8.0 || ^17 || ^18 || ^19 @@ -6431,8 +6694,8 @@ packages: resolution: {integrity: sha512-YTUo+Flmw4ZXiWfQKGcwwc11KnoRAYgzAE2E7mXKCjSviTKShtxBsN6YUUBB2gtaBzKzeKunxhUwNHQuRryhWA==} engines: {node: '>= 4'} - recharts@3.1.2: - resolution: {integrity: sha512-vhNbYwaxNbk/IATK0Ki29k3qvTkGqwvCgyQAQ9MavvvBwjvKnMTswdbklJpcOAoMPN/qxF3Lyqob0zO+ZXkZ4g==} + recharts@3.3.0: + resolution: {integrity: sha512-Vi0qmTB0iz1+/Cz9o5B7irVyUjX2ynvEgImbgMt/3sKRREcUM07QiYjS1QpAVrkmVlXqy5gykq4nGWMz9AS4Rg==} engines: {node: '>=18'} peerDependencies: react: ^16.8.0 || ^17.0.0 || ^18.0.0 || ^19.0.0 @@ -6529,6 +6792,10 @@ packages: resolution: {integrity: sha512-gAZ+kLqBdHarXB64XpAe2VCjB7rIRv+mU8tfRWziHRJ5umKsIHN2tLLv6EtMw7WCdP19S0ERVMldNvxYCHnhSQ==} engines: {node: '>=8.6.0'} + require-in-the-middle@8.0.1: + resolution: {integrity: sha512-QT7FVMXfWOYFbeRBF6nu+I6tr2Tf3u0q8RIEjNob/heKY/nh7drD/k7eeMFmSQgnTtCzLDcCu/XEnpW2wk4xCQ==} + engines: {node: '>=9.3.0 || >=8.10.0 <9.0.0'} + reselect@5.1.1: resolution: {integrity: sha512-K/BG6eIky/SBpzfHZv/dd+9JBFiS4SWV7FIujVyJRux6e45+73RaUHXLmIR1f7WOMaQ0U1km6qwklRQxpJJY0w==} @@ -6548,6 +6815,11 @@ packages: engines: {node: '>= 0.4'} hasBin: true + resolve@1.22.11: + resolution: {integrity: sha512-RfqAvLnMl313r7c9oclB1HhUEAezcpLjz95wFH4LVuhk9JF/r22qmVP9AMmOU4vMX7Q8pN8jwNg/CSpdFnMjTQ==} + engines: {node: '>= 0.4'} + hasBin: true + resolve@1.22.8: resolution: {integrity: 
sha512-oKWePCxqpd6FlLvGV1VU0x7bkPmmCNolxzjMf4NczoDnQcIWrAF+cPtZn5i6n+RfD2d9i0tzpKnG6Yk168yIyw==} hasBin: true @@ -6556,6 +6828,9 @@ packages: resolution: {integrity: sha512-U7WjGVG9sH8tvjW5SmGbQuui75FiyjAX72HX15DwBBwF9dNiQZRQAg9nnPhYy+TUnE0+VcrttuvNI8oSxZcocA==} hasBin: true + rettime@0.7.0: + resolution: {integrity: sha512-LPRKoHnLKd/r3dVxcwO7vhCW+orkOGj9ViueosEBK6ie89CijnfRlhaDhHq/3Hxu4CkWQtxwlBG0mzTQY6uQjw==} + reusify@1.1.0: resolution: {integrity: sha512-g6QUff04oZpHs0eG5p83rFLhHeV00ug/Yf9nZM6fLeUrPguBTkTQOdpAWWspMh55TZfVQDPaN3NQJfbVRAxdIw==} engines: {iojs: '>=1.0.0', node: '>=0.10.0'} @@ -6565,11 +6840,9 @@ packages: deprecated: Rimraf versions prior to v4 are no longer supported hasBin: true - ripemd160@2.0.1: - resolution: {integrity: sha512-J7f4wutN8mdbV08MJnXibYpCOPHR+yzy+iQ/AsjMv2j8cLavQ8VGagDFUwwTAdF8FmRKVeNpbTTEwNHCW1g94w==} - - ripemd160@2.0.2: - resolution: {integrity: sha512-ii4iagi25WusVoiC4B4lq7pbXfAp3D9v5CwfkY33vffw2+pkDjY1D8GaN7spsxvCSx8dkPqOZCEZyfxcmJG2IA==} + ripemd160@2.0.3: + resolution: {integrity: sha512-5Di9UC0+8h1L6ZD2d7awM7E/T4uA1fJRlx6zk/NvdCCVEoAnFqvHmCuNeIKoCeIixBX/q8uM+6ycDvF8woqosA==} + engines: {node: '>= 0.8'} rollup@4.52.2: resolution: {integrity: sha512-I25/2QgoROE1vYV+NQ1En9T9UFB9Cmfm2CJ83zZOlaDpvz29wGQSZXWKw7MiNXau7wYgB/T9fVIdIuEQ+KbiiA==} @@ -6644,6 +6917,11 @@ packages: engines: {node: '>=10'} hasBin: true + semver@7.7.3: + resolution: {integrity: sha512-SdsKMrI9TdgjdweUSR9MweHA4EJ8YxHn8DFaDisvhVlUOe4BF1tLD7GAj0lIqWVl+dPb/rExr0Btby5loQm20Q==} + engines: {node: '>=10'} + hasBin: true + serialize-javascript@6.0.2: resolution: {integrity: sha512-Saa1xPByTTq2gdeFZYLLo+RFE35NHZkAbqZeWNd3BpzppeVisAqpDjcp8dyf6uIvEqJRd46jemmyA4iFIeVk8g==} @@ -6690,9 +6968,6 @@ packages: resolution: {integrity: sha512-VuvPvLG1QjNOLP7AIm2HGyfmxEIz8QdskvWOHwUcxLDibYWjLRBmCWd8LSL5FlwhBW7D/GU+3gNVC/ASxAWdxg==} engines: {node: 18.* || >= 20} - shimmer@1.2.1: - resolution: {integrity: sha512-sQTKC1Re/rM6XyFM6fIAGHRPVGvyXfgzIDvzoq608vM+jeyVD0Tu1E6Np0Kc2zAIFWIj963V2800iF/9LPieQw==} - should-equal@2.0.0: resolution: {integrity: sha512-ZP36TMrK9euEuWQYBig9W55WPC7uo37qzAEmbjHz4gfyuXrEUgF8cUvQVO+w+d3OMfPvSRQJ22lSm8MQJ43LTA==} @@ -6949,11 +7224,11 @@ packages: tailwind-merge@2.6.0: resolution: {integrity: sha512-P+Vu1qXfzediirmHOC3xKGAYeZtPcV9g76X+xg2FD4tYgR71ewMA35Y3sCz3zhiN/dwefRpJX0yBcgwi1fXNQA==} - tailwind-scrollbar@4.0.2: - resolution: {integrity: sha512-wAQiIxAPqk0MNTPptVe/xoyWi27y+NRGnTwvn4PQnbvB9kp8QUBiGl/wsfoVBHnQxTmhXJSNt9NHTmcz9EivFA==} + tailwind-scrollbar@3.1.0: + resolution: {integrity: sha512-pmrtDIZeHyu2idTejfV59SbaJyvp1VRjYxAjZBH0jnyrPRo6HL1kD5Glz8VPagasqr6oAx6M05+Tuw429Z8jxg==} engines: {node: '>=12.13.0'} peerDependencies: - tailwindcss: 4.x + tailwindcss: 3.x tailwindcss-animate@1.0.7: resolution: {integrity: sha512-bl6mpH3T7I3UFxuvDEXLxy/VuFxBk5bbzplh7tXI68mwMokNYd1t9qPBHlnyTwfa4JGC4zP516I1hYYtQ/vspA==} @@ -7022,15 +7297,15 @@ packages: resolution: {integrity: sha512-t2T/WLB2WRgZ9EpE4jgPJ9w+i66UZfDc8wHh0xrwiRNN+UwH98GIJkTeZqX9rg0i0ptwzqW+uYeIF0T4F8LR7A==} engines: {node: '>=14.0.0'} - tldts-core@7.0.13: - resolution: {integrity: sha512-Td0LeWLgXJGsikI4mO82fRexgPCEyTcwWiXJERF/GBHX3Dm+HQq/wx4HnYowCbiwQ8d+ENLZc+ktbZw8H+0oEA==} + tldts-core@7.0.17: + resolution: {integrity: sha512-DieYoGrP78PWKsrXr8MZwtQ7GLCUeLxihtjC1jZsW1DnvSMdKPitJSe8OSYDM2u5H6g3kWJZpePqkp43TfLh0g==} - tldts@7.0.13: - resolution: {integrity: sha512-z/SgnxiICGb7Gli0z7ci9BZdjy1tQORUbdmzEUA7NbIJKWhdONn78Ji8gV0PAGfHPyEd+I+W2rMzhLjWkv2Olg==} + tldts@7.0.17: + resolution: {integrity: 
sha512-Y1KQBgDd/NUc+LfOtKS6mNsC9CCaH+m2P1RoIZy7RAPo3C3/t8X45+zgut31cRZtZ3xKPjfn3TkGTrctC2TQIQ==} hasBin: true - to-buffer@1.2.1: - resolution: {integrity: sha512-tB82LpAIWjhLYbqjx3X4zEeHN6M8CiuOEy2JY8SEQVdYRe3CCHOFaqrBW1doLDrfpWhplcW7BL+bO3/6S3pcDQ==} + to-buffer@1.2.2: + resolution: {integrity: sha512-db0E3UJjcFhpDhAF4tLo03oli3pwl3dbnzXOUIlRKrp+ldk/VUxzpWYZENsw2SZiuBjHAk7DfB0VU7NKdpb6sw==} engines: {node: '>= 0.4'} to-regex-range@5.0.1: @@ -7108,10 +7383,6 @@ packages: resolution: {integrity: sha512-Ne+eE4r0/iWnpAxD852z3A+N0Bt5RN//NjJwRd2VFHEmrywxf5vsZlh4R6lixl6B+wz/8d+maTSAkN1FIkI3LQ==} engines: {node: '>=10'} - type-fest@0.21.3: - resolution: {integrity: sha512-t0rzBq87m3fVcduHDUFhKmyyX+9eo6WQjZvf51Ea/M0Q7+T374Jp1aUiyUl0GKxp8M/OETVHSDvmkyPgvX+X2w==} - engines: {node: '>=10'} - type-fest@0.7.1: resolution: {integrity: sha512-Ne2YiiGN8bmrmJJEuTWTLJR32nh/JdL1+PSicowtNb0WFpn59GK8/lfD61bVtzguz7b3PBt74nxpv/Pw5po5Rg==} engines: {node: '>=8'} @@ -7140,21 +7411,27 @@ packages: resolution: {integrity: sha512-3KS2b+kL7fsuk/eJZ7EQdnEmQoaho/r6KUef7hxvltNA5DR8NAUM+8wJMbJyZ4G9/7i3v5zPBIMN5aybAh2/Jg==} engines: {node: '>= 0.4'} - typedoc-plugin-markdown@4.8.1: - resolution: {integrity: sha512-ug7fc4j0SiJxSwBGLncpSo8tLvrT9VONvPUQqQDTKPxCoFQBADLli832RGPtj6sfSVJebNSrHZQRUdEryYH/7g==} + typedoc-plugin-coverage@4.0.2: + resolution: {integrity: sha512-mfn0e7NCqB8x2PfvhXrtmd7KWlsNf1+B2N9y8gR/jexXBLrXl/0e+b2HdG5HaTXGi7i0t2pyQY2VRmq7gtdEHQ==} engines: {node: '>= 18'} peerDependencies: typedoc: 0.28.x - typedoc@0.28.10: - resolution: {integrity: sha512-zYvpjS2bNJ30SoNYfHSRaFpBMZAsL7uwKbWwqoCNFWjcPnI3e/mPLh2SneH9mX7SJxtDpvDgvd9/iZxGbo7daw==} + typedoc-plugin-markdown@4.9.0: + resolution: {integrity: sha512-9Uu4WR9L7ZBgAl60N/h+jqmPxxvnC9nQAlnnO/OujtG2ubjnKTVUFY1XDhcMY+pCqlX3N2HsQM2QTYZIU9tJuw==} + engines: {node: '>= 18'} + peerDependencies: + typedoc: 0.28.x + + typedoc@0.28.14: + resolution: {integrity: sha512-ftJYPvpVfQvFzpkoSfHLkJybdA/geDJ8BGQt/ZnkkhnBYoYW6lBgPQXu6vqLxO4X75dA55hX8Af847H5KXlEFA==} engines: {node: '>= 18', pnpm: '>= 10'} hasBin: true peerDependencies: typescript: 5.0.x || 5.1.x || 5.2.x || 5.3.x || 5.4.x || 5.5.x || 5.6.x || 5.7.x || 5.8.x || 5.9.x - typescript@5.9.2: - resolution: {integrity: sha512-CWBzXQrc/qOkhidw1OzBTQuYRbfyxDXJMVJ1XNwUHGROVmuaeiEm3OslpZ1RV96d7SKKjZKrSJu3+t/xlw3R9A==} + typescript@5.9.3: + resolution: {integrity: sha512-jl1vZzPDinLr9eUt3J/t7V6FgNEw9QjvBPdysz9KfQDD41fQrC2Y4vKQdiaUpFT4bXlb1RHhLpp8wtm6M5TgSw==} engines: {node: '>=14.17'} hasBin: true @@ -7165,8 +7442,8 @@ packages: resolution: {integrity: sha512-nWJ91DjeOkej/TA8pXQ3myruKpKEYgqvpw9lz4OPHj/NWFNluYrjbz9j01CJ8yKQd2g4jFoOkINCTW2I5LEEyw==} engines: {node: '>= 0.4'} - undici-types@7.10.0: - resolution: {integrity: sha512-t5Fy/nfn+14LuOc2KNYg75vZqClpAiqscVvMygNnlsHBFpSXdJaYtXMcdNLpl/Qvc3P2cB3s6lOV51nqsFq4ag==} + undici-types@7.16.0: + resolution: {integrity: sha512-Zz+aZWSj8LE6zoxD+xrjh4VfkIG8Ya6LvYkZqtUQGJPZjYl53ypCaUwWqo7eI0x66KBGeRo+mlBEkMSeSZ38Nw==} unicode-canonical-property-names-ecmascript@2.0.1: resolution: {integrity: sha512-dA8WbNeb2a6oQzAQ55YlT5vQAWGV9WXOsi3SskE3bcCdM0P4SDd+24zS/OCacdRq5BkdsRj9q3Pg6YyQoxIGqg==} @@ -7226,6 +7503,9 @@ packages: unrs-resolver@1.11.1: resolution: {integrity: sha512-bSjt9pjaEBnNiGgc9rUiHGKv5l4/TGzDmYw3RhnkJGtLhbnnA/5qJj7x3dNDCRx/PJxu774LlH8lCOlB4hEfKg==} + until-async@3.0.2: + resolution: {integrity: sha512-IiSk4HlzAMqTUseHHe3VhIGyuFmN90zMTpD3Z3y8jeQbzLIq500MVM7Jq2vUAnTKAFPJrqwkzr6PoTcPhGcOiw==} + update-browserslist-db@1.1.3: resolution: {integrity: 
sha512-UxhIZQ+QInVdunkDAaiazvvT/+fXL5Osr0JZlJulepYu6Jd7qJtDZjlur0emRlT71EN3ScPoE7gvsuIKKNavKw==} hasBin: true @@ -7307,8 +7587,8 @@ packages: validate.io-number@1.0.3: resolution: {integrity: sha512-kRAyotcbNaSYoDnXvb4MHg/0a1egJdLwS6oJ38TJY7aw9n93Fl/3blIXdyYvPOp55CNxywooG/3BcrwNrBpcSg==} - validator@13.15.15: - resolution: {integrity: sha512-BgWVbCI72aIQy937xbawcs+hrVaN/CZ2UwutgaJ36hGqRrLNM+f5LUT/YPRbo8IV/ASeFzXszezV+y2+rq3l8A==} + validator@13.15.20: + resolution: {integrity: sha512-KxPOq3V2LmfQPP4eqf3Mq/zrT0Dqp2Vmx2Bn285LwVahLc+CsxOM0crBHczm8ijlcjZ0Q5Xd6LW3z3odTPnlrw==} engines: {node: '>= 0.10'} vaul@1.1.2: @@ -7521,9 +7801,8 @@ snapshots: '@alloc/quick-lru@5.2.0': {} - '@apidevtools/json-schema-ref-parser@11.7.2': + '@apidevtools/json-schema-ref-parser@14.0.1': dependencies: - '@jsdevtools/ono': 7.1.3 '@types/json-schema': 7.0.15 js-yaml: 4.1.0 @@ -7531,18 +7810,27 @@ snapshots: '@apidevtools/swagger-methods@3.0.2': {} - '@apidevtools/swagger-parser@10.1.1(openapi-types@12.1.3)': + '@apidevtools/swagger-parser@12.1.0(openapi-types@12.1.3)': dependencies: - '@apidevtools/json-schema-ref-parser': 11.7.2 + '@apidevtools/json-schema-ref-parser': 14.0.1 '@apidevtools/openapi-schemas': 2.1.0 '@apidevtools/swagger-methods': 3.0.2 - '@jsdevtools/ono': 7.1.3 ajv: 8.17.1 ajv-draft-04: 1.0.0(ajv@8.17.1) call-me-maybe: 1.0.2 openapi-types: 12.1.3 - '@asyncapi/specs@6.9.0': + '@apm-js-collab/code-transformer@0.8.2': {} + + '@apm-js-collab/tracing-hooks@0.3.1': + dependencies: + '@apm-js-collab/code-transformer': 0.8.2 + debug: 4.4.3 + module-details-from-path: 1.0.4 + transitivePeerDependencies: + - supports-color + + '@asyncapi/specs@6.10.0': dependencies: '@types/json-schema': 7.0.15 @@ -7567,7 +7855,7 @@ snapshots: '@babel/types': 7.28.4 '@jridgewell/remapping': 2.3.5 convert-source-map: 2.0.0 - debug: 4.4.1 + debug: 4.4.3 gensync: 1.0.0-beta.2 json5: 2.2.3 semver: 6.3.1 @@ -7619,9 +7907,9 @@ snapshots: '@babel/core': 7.28.4 '@babel/helper-compilation-targets': 7.27.2 '@babel/helper-plugin-utils': 7.27.1 - debug: 4.4.1 + debug: 4.4.3 lodash.debounce: 4.0.8 - resolve: 1.22.10 + resolve: 1.22.11 transitivePeerDependencies: - supports-color @@ -8270,8 +8558,6 @@ snapshots: transitivePeerDependencies: - supports-color - '@babel/runtime@7.28.3': {} - '@babel/runtime@7.28.4': {} '@babel/template@7.27.2': @@ -8288,7 +8574,7 @@ snapshots: '@babel/parser': 7.28.4 '@babel/template': 7.27.2 '@babel/types': 7.28.4 - debug: 4.4.1 + debug: 4.4.3 transitivePeerDependencies: - supports-color @@ -8297,40 +8583,36 @@ snapshots: '@babel/helper-string-parser': 7.27.1 '@babel/helper-validator-identifier': 7.27.1 - '@bundled-es-modules/cookie@2.0.1': - dependencies: - cookie: 0.7.2 - - '@bundled-es-modules/statuses@1.0.1': - dependencies: - statuses: 2.0.2 - - '@chromatic-com/storybook@4.1.1(storybook@9.1.5(@testing-library/dom@10.4.1)(msw@2.11.1(@types/node@24.3.1)(typescript@5.9.2))(prettier@3.6.2))': + '@chromatic-com/storybook@4.1.2(storybook@9.1.5(@testing-library/dom@10.4.1)(msw@2.11.6(@types/node@24.10.0)(typescript@5.9.3))(prettier@3.6.2))': dependencies: '@neoconfetti/react': 1.0.0 chromatic: 12.2.0 filesize: 10.1.6 jsonfile: 6.2.0 - storybook: 9.1.5(@testing-library/dom@10.4.1)(msw@2.11.1(@types/node@24.3.1)(typescript@5.9.2))(prettier@3.6.2) + storybook: 9.1.5(@testing-library/dom@10.4.1)(msw@2.11.6(@types/node@24.10.0)(typescript@5.9.3))(prettier@3.6.2) strip-ansi: 7.1.2 transitivePeerDependencies: - '@chromatic-com/cypress' - '@chromatic-com/playwright' - '@date-fns/tz@1.2.0': {} + 
'@commander-js/extra-typings@14.0.0(commander@14.0.2)': + dependencies: + commander: 14.0.2 - '@emnapi/core@1.5.0': + '@date-fns/tz@1.4.1': {} + + '@emnapi/core@1.7.1': dependencies: '@emnapi/wasi-threads': 1.1.0 tslib: 2.8.1 optional: true - '@emnapi/runtime@1.4.5': + '@emnapi/runtime@1.5.0': dependencies: tslib: 2.8.1 optional: true - '@emnapi/runtime@1.5.0': + '@emnapi/runtime@1.7.1': dependencies: tslib: 2.8.1 optional: true @@ -8348,89 +8630,164 @@ snapshots: '@emotion/unitless@0.8.1': {} + '@epic-web/invariant@1.0.0': {} + + '@esbuild/aix-ppc64@0.25.11': + optional: true + '@esbuild/aix-ppc64@0.25.9': optional: true + '@esbuild/android-arm64@0.25.11': + optional: true + '@esbuild/android-arm64@0.25.9': optional: true + '@esbuild/android-arm@0.25.11': + optional: true + '@esbuild/android-arm@0.25.9': optional: true + '@esbuild/android-x64@0.25.11': + optional: true + '@esbuild/android-x64@0.25.9': optional: true + '@esbuild/darwin-arm64@0.25.11': + optional: true + '@esbuild/darwin-arm64@0.25.9': optional: true + '@esbuild/darwin-x64@0.25.11': + optional: true + '@esbuild/darwin-x64@0.25.9': optional: true + '@esbuild/freebsd-arm64@0.25.11': + optional: true + '@esbuild/freebsd-arm64@0.25.9': optional: true + '@esbuild/freebsd-x64@0.25.11': + optional: true + '@esbuild/freebsd-x64@0.25.9': optional: true + '@esbuild/linux-arm64@0.25.11': + optional: true + '@esbuild/linux-arm64@0.25.9': optional: true + '@esbuild/linux-arm@0.25.11': + optional: true + '@esbuild/linux-arm@0.25.9': optional: true + '@esbuild/linux-ia32@0.25.11': + optional: true + '@esbuild/linux-ia32@0.25.9': optional: true + '@esbuild/linux-loong64@0.25.11': + optional: true + '@esbuild/linux-loong64@0.25.9': optional: true + '@esbuild/linux-mips64el@0.25.11': + optional: true + '@esbuild/linux-mips64el@0.25.9': optional: true + '@esbuild/linux-ppc64@0.25.11': + optional: true + '@esbuild/linux-ppc64@0.25.9': optional: true + '@esbuild/linux-riscv64@0.25.11': + optional: true + '@esbuild/linux-riscv64@0.25.9': optional: true + '@esbuild/linux-s390x@0.25.11': + optional: true + '@esbuild/linux-s390x@0.25.9': optional: true + '@esbuild/linux-x64@0.25.11': + optional: true + '@esbuild/linux-x64@0.25.9': optional: true + '@esbuild/netbsd-arm64@0.25.11': + optional: true + '@esbuild/netbsd-arm64@0.25.9': optional: true + '@esbuild/netbsd-x64@0.25.11': + optional: true + '@esbuild/netbsd-x64@0.25.9': optional: true + '@esbuild/openbsd-arm64@0.25.11': + optional: true + '@esbuild/openbsd-arm64@0.25.9': optional: true + '@esbuild/openbsd-x64@0.25.11': + optional: true + '@esbuild/openbsd-x64@0.25.9': optional: true + '@esbuild/openharmony-arm64@0.25.11': + optional: true + '@esbuild/openharmony-arm64@0.25.9': optional: true + '@esbuild/sunos-x64@0.25.11': + optional: true + '@esbuild/sunos-x64@0.25.9': optional: true + '@esbuild/win32-arm64@0.25.11': + optional: true + '@esbuild/win32-arm64@0.25.9': optional: true + '@esbuild/win32-ia32@0.25.11': + optional: true + '@esbuild/win32-ia32@0.25.9': optional: true + '@esbuild/win32-x64@0.25.11': + optional: true + '@esbuild/win32-x64@0.25.9': optional: true - '@eslint-community/eslint-utils@4.7.0(eslint@8.57.1)': - dependencies: - eslint: 8.57.1 - eslint-visitor-keys: 3.4.3 - '@eslint-community/eslint-utils@4.9.0(eslint@8.57.1)': dependencies: eslint: 8.57.1 @@ -8438,10 +8795,12 @@ snapshots: '@eslint-community/regexpp@4.12.1': {} + '@eslint-community/regexpp@4.12.2': {} + '@eslint/eslintrc@2.1.4': dependencies: ajv: 6.12.6 - debug: 4.4.1 + debug: 4.4.3 espree: 9.6.1 globals: 
13.24.0 ignore: 5.3.2 @@ -8475,23 +8834,23 @@ snapshots: '@floating-ui/utils@0.2.10': {} - '@gerrit0/mini-shiki@3.9.2': + '@gerrit0/mini-shiki@3.14.0': dependencies: - '@shikijs/engine-oniguruma': 3.9.2 - '@shikijs/langs': 3.9.2 - '@shikijs/themes': 3.9.2 - '@shikijs/types': 3.9.2 + '@shikijs/engine-oniguruma': 3.14.0 + '@shikijs/langs': 3.14.0 + '@shikijs/themes': 3.14.0 + '@shikijs/types': 3.14.0 '@shikijs/vscode-textmate': 10.0.2 - '@hookform/resolvers@5.2.1(react-hook-form@7.62.0(react@18.3.1))': + '@hookform/resolvers@5.2.2(react-hook-form@7.66.0(react@18.3.1))': dependencies: '@standard-schema/utils': 0.3.0 - react-hook-form: 7.62.0(react@18.3.1) + react-hook-form: 7.66.0(react@18.3.1) '@humanwhocodes/config-array@0.13.0': dependencies: '@humanwhocodes/object-schema': 2.0.3 - debug: 4.4.1 + debug: 4.4.3 minimatch: 3.1.2 transitivePeerDependencies: - supports-color @@ -8502,19 +8861,20 @@ snapshots: '@ibm-cloud/openapi-ruleset-utilities@1.9.0': {} - '@ibm-cloud/openapi-ruleset@1.31.2': + '@ibm-cloud/openapi-ruleset@1.33.3': dependencies: '@ibm-cloud/openapi-ruleset-utilities': 1.9.0 '@stoplight/spectral-formats': 1.8.2 '@stoplight/spectral-functions': 1.10.1 '@stoplight/spectral-rulesets': 1.22.0 chalk: 4.1.2 + inflected: 2.1.0 jsonschema: 1.5.0 lodash: 4.17.21 loglevel: 1.9.2 loglevel-plugin-prefix: 0.8.4 minimatch: 6.2.0 - validator: 13.15.15 + validator: 13.15.20 transitivePeerDependencies: - encoding @@ -8592,7 +8952,7 @@ snapshots: '@img/sharp-wasm32@0.34.3': dependencies: - '@emnapi/runtime': 1.4.5 + '@emnapi/runtime': 1.5.0 optional: true '@img/sharp-win32-arm64@0.34.3': @@ -8604,31 +8964,33 @@ snapshots: '@img/sharp-win32-x64@0.34.3': optional: true - '@inquirer/confirm@5.1.16(@types/node@24.3.1)': - dependencies: - '@inquirer/core': 10.2.0(@types/node@24.3.1) - '@inquirer/type': 3.0.8(@types/node@24.3.1) - optionalDependencies: - '@types/node': 24.3.1 + '@inquirer/ansi@1.0.1': {} - '@inquirer/core@10.2.0(@types/node@24.3.1)': + '@inquirer/confirm@5.1.19(@types/node@24.10.0)': dependencies: - '@inquirer/figures': 1.0.13 - '@inquirer/type': 3.0.8(@types/node@24.3.1) - ansi-escapes: 4.3.2 + '@inquirer/core': 10.3.0(@types/node@24.10.0) + '@inquirer/type': 3.0.9(@types/node@24.10.0) + optionalDependencies: + '@types/node': 24.10.0 + + '@inquirer/core@10.3.0(@types/node@24.10.0)': + dependencies: + '@inquirer/ansi': 1.0.1 + '@inquirer/figures': 1.0.14 + '@inquirer/type': 3.0.9(@types/node@24.10.0) cli-width: 4.1.0 mute-stream: 2.0.0 signal-exit: 4.1.0 wrap-ansi: 6.2.0 yoctocolors-cjs: 2.1.3 optionalDependencies: - '@types/node': 24.3.1 + '@types/node': 24.10.0 - '@inquirer/figures@1.0.13': {} + '@inquirer/figures@1.0.14': {} - '@inquirer/type@3.0.8(@types/node@24.3.1)': + '@inquirer/type@3.0.9(@types/node@24.10.0)': optionalDependencies: - '@types/node': 24.3.1 + '@types/node': 24.10.0 '@isaacs/cliui@8.0.2': dependencies: @@ -8663,8 +9025,6 @@ snapshots: '@jridgewell/resolve-uri': 3.1.2 '@jridgewell/sourcemap-codec': 1.5.5 - '@jsdevtools/ono@7.1.3': {} - '@jsep-plugin/assignment@1.3.0(jsep@1.4.0)': dependencies: jsep: 1.4.0 @@ -8677,18 +9037,13 @@ snapshots: dependencies: jsep: 1.4.0 - '@marsidev/react-turnstile@1.3.1(react-dom@18.3.1(react@18.3.1))(react@18.3.1)': - dependencies: - react: 18.3.1 - react-dom: 18.3.1(react@18.3.1) - '@mdx-js/react@3.1.1(@types/react@18.3.17)(react@18.3.1)': dependencies: '@types/mdx': 2.0.13 '@types/react': 18.3.17 react: 18.3.1 - '@mswjs/interceptors@0.39.6': + '@mswjs/interceptors@0.40.0': dependencies: '@open-draft/deferred-promise': 2.2.0 
'@open-draft/logger': 0.3.0 @@ -8699,46 +9054,46 @@ snapshots: '@napi-rs/wasm-runtime@0.2.12': dependencies: - '@emnapi/core': 1.5.0 - '@emnapi/runtime': 1.5.0 - '@tybys/wasm-util': 0.10.0 + '@emnapi/core': 1.7.1 + '@emnapi/runtime': 1.7.1 + '@tybys/wasm-util': 0.10.1 optional: true '@neoconfetti/react@1.0.0': {} - '@next/env@15.4.7': {} + '@next/env@15.4.10': {} - '@next/eslint-plugin-next@15.5.2': + '@next/eslint-plugin-next@15.5.7': dependencies: fast-glob: 3.3.1 - '@next/swc-darwin-arm64@15.4.7': + '@next/swc-darwin-arm64@15.4.8': optional: true - '@next/swc-darwin-x64@15.4.7': + '@next/swc-darwin-x64@15.4.8': optional: true - '@next/swc-linux-arm64-gnu@15.4.7': + '@next/swc-linux-arm64-gnu@15.4.8': optional: true - '@next/swc-linux-arm64-musl@15.4.7': + '@next/swc-linux-arm64-musl@15.4.8': optional: true - '@next/swc-linux-x64-gnu@15.4.7': + '@next/swc-linux-x64-gnu@15.4.8': optional: true - '@next/swc-linux-x64-musl@15.4.7': + '@next/swc-linux-x64-musl@15.4.8': optional: true - '@next/swc-win32-arm64-msvc@15.4.7': + '@next/swc-win32-arm64-msvc@15.4.8': optional: true - '@next/swc-win32-x64-msvc@15.4.7': + '@next/swc-win32-x64-msvc@15.4.8': optional: true - '@next/third-parties@15.4.6(next@15.4.7(@babel/core@7.28.4)(@opentelemetry/api@1.9.0)(@playwright/test@1.55.0)(react-dom@18.3.1(react@18.3.1))(react@18.3.1))(react@18.3.1)': + '@next/third-parties@15.4.6(next@15.4.10(@babel/core@7.28.4)(@opentelemetry/api@1.9.0)(@playwright/test@1.56.1)(react-dom@18.3.1(react@18.3.1))(react@18.3.1))(react@18.3.1)': dependencies: - next: 15.4.7(@babel/core@7.28.4)(@opentelemetry/api@1.9.0)(@playwright/test@1.55.0)(react-dom@18.3.1(react@18.3.1))(react@18.3.1) + next: 15.4.10(@babel/core@7.28.4)(@opentelemetry/api@1.9.0)(@playwright/test@1.56.1)(react-dom@18.3.1(react@18.3.1))(react@18.3.1) react: 18.3.1 third-party-capital: 1.0.20 @@ -8765,363 +9120,359 @@ snapshots: '@open-draft/until@2.1.0': {} - '@opentelemetry/api-logs@0.204.0': - dependencies: - '@opentelemetry/api': 1.9.0 - - '@opentelemetry/api-logs@0.57.2': + '@opentelemetry/api-logs@0.208.0': dependencies: '@opentelemetry/api': 1.9.0 '@opentelemetry/api@1.9.0': {} - '@opentelemetry/context-async-hooks@2.1.0(@opentelemetry/api@1.9.0)': + '@opentelemetry/context-async-hooks@2.2.0(@opentelemetry/api@1.9.0)': dependencies: '@opentelemetry/api': 1.9.0 - '@opentelemetry/core@2.1.0(@opentelemetry/api@1.9.0)': + '@opentelemetry/core@2.2.0(@opentelemetry/api@1.9.0)': dependencies: '@opentelemetry/api': 1.9.0 '@opentelemetry/semantic-conventions': 1.37.0 - '@opentelemetry/instrumentation-amqplib@0.51.0(@opentelemetry/api@1.9.0)': + '@opentelemetry/instrumentation-amqplib@0.55.0(@opentelemetry/api@1.9.0)': dependencies: '@opentelemetry/api': 1.9.0 - '@opentelemetry/core': 2.1.0(@opentelemetry/api@1.9.0) - '@opentelemetry/instrumentation': 0.204.0(@opentelemetry/api@1.9.0) - '@opentelemetry/semantic-conventions': 1.37.0 + '@opentelemetry/core': 2.2.0(@opentelemetry/api@1.9.0) + '@opentelemetry/instrumentation': 0.208.0(@opentelemetry/api@1.9.0) transitivePeerDependencies: - supports-color - '@opentelemetry/instrumentation-connect@0.48.0(@opentelemetry/api@1.9.0)': + '@opentelemetry/instrumentation-connect@0.52.0(@opentelemetry/api@1.9.0)': dependencies: '@opentelemetry/api': 1.9.0 - '@opentelemetry/core': 2.1.0(@opentelemetry/api@1.9.0) - '@opentelemetry/instrumentation': 0.204.0(@opentelemetry/api@1.9.0) + '@opentelemetry/core': 2.2.0(@opentelemetry/api@1.9.0) + '@opentelemetry/instrumentation': 0.208.0(@opentelemetry/api@1.9.0) 
'@opentelemetry/semantic-conventions': 1.37.0 '@types/connect': 3.4.38 transitivePeerDependencies: - supports-color - '@opentelemetry/instrumentation-dataloader@0.22.0(@opentelemetry/api@1.9.0)': + '@opentelemetry/instrumentation-dataloader@0.26.0(@opentelemetry/api@1.9.0)': dependencies: '@opentelemetry/api': 1.9.0 - '@opentelemetry/instrumentation': 0.204.0(@opentelemetry/api@1.9.0) + '@opentelemetry/instrumentation': 0.208.0(@opentelemetry/api@1.9.0) transitivePeerDependencies: - supports-color - '@opentelemetry/instrumentation-express@0.53.0(@opentelemetry/api@1.9.0)': + '@opentelemetry/instrumentation-express@0.57.0(@opentelemetry/api@1.9.0)': dependencies: '@opentelemetry/api': 1.9.0 - '@opentelemetry/core': 2.1.0(@opentelemetry/api@1.9.0) - '@opentelemetry/instrumentation': 0.204.0(@opentelemetry/api@1.9.0) + '@opentelemetry/core': 2.2.0(@opentelemetry/api@1.9.0) + '@opentelemetry/instrumentation': 0.208.0(@opentelemetry/api@1.9.0) '@opentelemetry/semantic-conventions': 1.37.0 transitivePeerDependencies: - supports-color - '@opentelemetry/instrumentation-fs@0.24.0(@opentelemetry/api@1.9.0)': + '@opentelemetry/instrumentation-fs@0.28.0(@opentelemetry/api@1.9.0)': dependencies: '@opentelemetry/api': 1.9.0 - '@opentelemetry/core': 2.1.0(@opentelemetry/api@1.9.0) - '@opentelemetry/instrumentation': 0.204.0(@opentelemetry/api@1.9.0) + '@opentelemetry/core': 2.2.0(@opentelemetry/api@1.9.0) + '@opentelemetry/instrumentation': 0.208.0(@opentelemetry/api@1.9.0) transitivePeerDependencies: - supports-color - '@opentelemetry/instrumentation-generic-pool@0.48.0(@opentelemetry/api@1.9.0)': + '@opentelemetry/instrumentation-generic-pool@0.52.0(@opentelemetry/api@1.9.0)': dependencies: '@opentelemetry/api': 1.9.0 - '@opentelemetry/instrumentation': 0.204.0(@opentelemetry/api@1.9.0) + '@opentelemetry/instrumentation': 0.208.0(@opentelemetry/api@1.9.0) transitivePeerDependencies: - supports-color - '@opentelemetry/instrumentation-graphql@0.52.0(@opentelemetry/api@1.9.0)': + '@opentelemetry/instrumentation-graphql@0.56.0(@opentelemetry/api@1.9.0)': dependencies: '@opentelemetry/api': 1.9.0 - '@opentelemetry/instrumentation': 0.204.0(@opentelemetry/api@1.9.0) + '@opentelemetry/instrumentation': 0.208.0(@opentelemetry/api@1.9.0) transitivePeerDependencies: - supports-color - '@opentelemetry/instrumentation-hapi@0.51.0(@opentelemetry/api@1.9.0)': + '@opentelemetry/instrumentation-hapi@0.55.0(@opentelemetry/api@1.9.0)': dependencies: '@opentelemetry/api': 1.9.0 - '@opentelemetry/core': 2.1.0(@opentelemetry/api@1.9.0) - '@opentelemetry/instrumentation': 0.204.0(@opentelemetry/api@1.9.0) + '@opentelemetry/core': 2.2.0(@opentelemetry/api@1.9.0) + '@opentelemetry/instrumentation': 0.208.0(@opentelemetry/api@1.9.0) '@opentelemetry/semantic-conventions': 1.37.0 transitivePeerDependencies: - supports-color - '@opentelemetry/instrumentation-http@0.204.0(@opentelemetry/api@1.9.0)': + '@opentelemetry/instrumentation-http@0.208.0(@opentelemetry/api@1.9.0)': dependencies: '@opentelemetry/api': 1.9.0 - '@opentelemetry/core': 2.1.0(@opentelemetry/api@1.9.0) - '@opentelemetry/instrumentation': 0.204.0(@opentelemetry/api@1.9.0) + '@opentelemetry/core': 2.2.0(@opentelemetry/api@1.9.0) + '@opentelemetry/instrumentation': 0.208.0(@opentelemetry/api@1.9.0) '@opentelemetry/semantic-conventions': 1.37.0 forwarded-parse: 2.1.2 transitivePeerDependencies: - supports-color - '@opentelemetry/instrumentation-ioredis@0.52.0(@opentelemetry/api@1.9.0)': + '@opentelemetry/instrumentation-ioredis@0.56.0(@opentelemetry/api@1.9.0)': 
dependencies: '@opentelemetry/api': 1.9.0 - '@opentelemetry/instrumentation': 0.204.0(@opentelemetry/api@1.9.0) - '@opentelemetry/redis-common': 0.38.0 + '@opentelemetry/instrumentation': 0.208.0(@opentelemetry/api@1.9.0) + '@opentelemetry/redis-common': 0.38.2 + transitivePeerDependencies: + - supports-color + + '@opentelemetry/instrumentation-kafkajs@0.18.0(@opentelemetry/api@1.9.0)': + dependencies: + '@opentelemetry/api': 1.9.0 + '@opentelemetry/instrumentation': 0.208.0(@opentelemetry/api@1.9.0) '@opentelemetry/semantic-conventions': 1.37.0 transitivePeerDependencies: - supports-color - '@opentelemetry/instrumentation-kafkajs@0.14.0(@opentelemetry/api@1.9.0)': + '@opentelemetry/instrumentation-knex@0.53.0(@opentelemetry/api@1.9.0)': dependencies: '@opentelemetry/api': 1.9.0 - '@opentelemetry/instrumentation': 0.204.0(@opentelemetry/api@1.9.0) + '@opentelemetry/instrumentation': 0.208.0(@opentelemetry/api@1.9.0) '@opentelemetry/semantic-conventions': 1.37.0 transitivePeerDependencies: - supports-color - '@opentelemetry/instrumentation-knex@0.49.0(@opentelemetry/api@1.9.0)': + '@opentelemetry/instrumentation-koa@0.57.0(@opentelemetry/api@1.9.0)': dependencies: '@opentelemetry/api': 1.9.0 - '@opentelemetry/instrumentation': 0.204.0(@opentelemetry/api@1.9.0) + '@opentelemetry/core': 2.2.0(@opentelemetry/api@1.9.0) + '@opentelemetry/instrumentation': 0.208.0(@opentelemetry/api@1.9.0) '@opentelemetry/semantic-conventions': 1.37.0 transitivePeerDependencies: - supports-color - '@opentelemetry/instrumentation-koa@0.52.0(@opentelemetry/api@1.9.0)': + '@opentelemetry/instrumentation-lru-memoizer@0.53.0(@opentelemetry/api@1.9.0)': dependencies: '@opentelemetry/api': 1.9.0 - '@opentelemetry/core': 2.1.0(@opentelemetry/api@1.9.0) - '@opentelemetry/instrumentation': 0.204.0(@opentelemetry/api@1.9.0) - '@opentelemetry/semantic-conventions': 1.37.0 + '@opentelemetry/instrumentation': 0.208.0(@opentelemetry/api@1.9.0) transitivePeerDependencies: - supports-color - '@opentelemetry/instrumentation-lru-memoizer@0.49.0(@opentelemetry/api@1.9.0)': + '@opentelemetry/instrumentation-mongodb@0.61.0(@opentelemetry/api@1.9.0)': dependencies: '@opentelemetry/api': 1.9.0 - '@opentelemetry/instrumentation': 0.204.0(@opentelemetry/api@1.9.0) + '@opentelemetry/instrumentation': 0.208.0(@opentelemetry/api@1.9.0) transitivePeerDependencies: - supports-color - '@opentelemetry/instrumentation-mongodb@0.57.0(@opentelemetry/api@1.9.0)': + '@opentelemetry/instrumentation-mongoose@0.55.0(@opentelemetry/api@1.9.0)': dependencies: '@opentelemetry/api': 1.9.0 - '@opentelemetry/instrumentation': 0.204.0(@opentelemetry/api@1.9.0) - '@opentelemetry/semantic-conventions': 1.37.0 + '@opentelemetry/core': 2.2.0(@opentelemetry/api@1.9.0) + '@opentelemetry/instrumentation': 0.208.0(@opentelemetry/api@1.9.0) transitivePeerDependencies: - supports-color - '@opentelemetry/instrumentation-mongoose@0.51.0(@opentelemetry/api@1.9.0)': + '@opentelemetry/instrumentation-mysql2@0.55.0(@opentelemetry/api@1.9.0)': dependencies: '@opentelemetry/api': 1.9.0 - '@opentelemetry/core': 2.1.0(@opentelemetry/api@1.9.0) - '@opentelemetry/instrumentation': 0.204.0(@opentelemetry/api@1.9.0) + '@opentelemetry/instrumentation': 0.208.0(@opentelemetry/api@1.9.0) '@opentelemetry/semantic-conventions': 1.37.0 + '@opentelemetry/sql-common': 0.41.2(@opentelemetry/api@1.9.0) transitivePeerDependencies: - supports-color - '@opentelemetry/instrumentation-mysql2@0.51.0(@opentelemetry/api@1.9.0)': + 
'@opentelemetry/instrumentation-mysql@0.54.0(@opentelemetry/api@1.9.0)': dependencies: '@opentelemetry/api': 1.9.0 - '@opentelemetry/instrumentation': 0.204.0(@opentelemetry/api@1.9.0) - '@opentelemetry/semantic-conventions': 1.37.0 - '@opentelemetry/sql-common': 0.41.0(@opentelemetry/api@1.9.0) - transitivePeerDependencies: - - supports-color - - '@opentelemetry/instrumentation-mysql@0.50.0(@opentelemetry/api@1.9.0)': - dependencies: - '@opentelemetry/api': 1.9.0 - '@opentelemetry/instrumentation': 0.204.0(@opentelemetry/api@1.9.0) - '@opentelemetry/semantic-conventions': 1.37.0 + '@opentelemetry/instrumentation': 0.208.0(@opentelemetry/api@1.9.0) '@types/mysql': 2.15.27 transitivePeerDependencies: - supports-color - '@opentelemetry/instrumentation-pg@0.57.0(@opentelemetry/api@1.9.0)': + '@opentelemetry/instrumentation-pg@0.61.0(@opentelemetry/api@1.9.0)': dependencies: '@opentelemetry/api': 1.9.0 - '@opentelemetry/core': 2.1.0(@opentelemetry/api@1.9.0) - '@opentelemetry/instrumentation': 0.204.0(@opentelemetry/api@1.9.0) + '@opentelemetry/core': 2.2.0(@opentelemetry/api@1.9.0) + '@opentelemetry/instrumentation': 0.208.0(@opentelemetry/api@1.9.0) '@opentelemetry/semantic-conventions': 1.37.0 - '@opentelemetry/sql-common': 0.41.0(@opentelemetry/api@1.9.0) - '@types/pg': 8.15.5 + '@opentelemetry/sql-common': 0.41.2(@opentelemetry/api@1.9.0) + '@types/pg': 8.15.6 '@types/pg-pool': 2.0.6 transitivePeerDependencies: - supports-color - '@opentelemetry/instrumentation-redis@0.53.0(@opentelemetry/api@1.9.0)': + '@opentelemetry/instrumentation-redis@0.57.0(@opentelemetry/api@1.9.0)': dependencies: '@opentelemetry/api': 1.9.0 - '@opentelemetry/instrumentation': 0.204.0(@opentelemetry/api@1.9.0) - '@opentelemetry/redis-common': 0.38.0 + '@opentelemetry/instrumentation': 0.208.0(@opentelemetry/api@1.9.0) + '@opentelemetry/redis-common': 0.38.2 '@opentelemetry/semantic-conventions': 1.37.0 transitivePeerDependencies: - supports-color - '@opentelemetry/instrumentation-tedious@0.23.0(@opentelemetry/api@1.9.0)': + '@opentelemetry/instrumentation-tedious@0.27.0(@opentelemetry/api@1.9.0)': dependencies: '@opentelemetry/api': 1.9.0 - '@opentelemetry/instrumentation': 0.204.0(@opentelemetry/api@1.9.0) - '@opentelemetry/semantic-conventions': 1.37.0 + '@opentelemetry/instrumentation': 0.208.0(@opentelemetry/api@1.9.0) '@types/tedious': 4.0.14 transitivePeerDependencies: - supports-color - '@opentelemetry/instrumentation-undici@0.15.0(@opentelemetry/api@1.9.0)': + '@opentelemetry/instrumentation-undici@0.19.0(@opentelemetry/api@1.9.0)': dependencies: '@opentelemetry/api': 1.9.0 - '@opentelemetry/core': 2.1.0(@opentelemetry/api@1.9.0) - '@opentelemetry/instrumentation': 0.204.0(@opentelemetry/api@1.9.0) + '@opentelemetry/core': 2.2.0(@opentelemetry/api@1.9.0) + '@opentelemetry/instrumentation': 0.208.0(@opentelemetry/api@1.9.0) + '@opentelemetry/semantic-conventions': 1.37.0 transitivePeerDependencies: - supports-color - '@opentelemetry/instrumentation@0.204.0(@opentelemetry/api@1.9.0)': + '@opentelemetry/instrumentation@0.208.0(@opentelemetry/api@1.9.0)': dependencies: '@opentelemetry/api': 1.9.0 - '@opentelemetry/api-logs': 0.204.0 - import-in-the-middle: 1.14.2 - require-in-the-middle: 7.5.2 + '@opentelemetry/api-logs': 0.208.0 + import-in-the-middle: 2.0.0 + require-in-the-middle: 8.0.1 transitivePeerDependencies: - supports-color - '@opentelemetry/instrumentation@0.57.2(@opentelemetry/api@1.9.0)': + '@opentelemetry/redis-common@0.38.2': {} + + '@opentelemetry/resources@2.2.0(@opentelemetry/api@1.9.0)': 
dependencies: '@opentelemetry/api': 1.9.0 - '@opentelemetry/api-logs': 0.57.2 - '@types/shimmer': 1.2.0 - import-in-the-middle: 1.14.2 - require-in-the-middle: 7.5.2 - semver: 7.7.2 - shimmer: 1.2.1 - transitivePeerDependencies: - - supports-color - - '@opentelemetry/redis-common@0.38.0': {} - - '@opentelemetry/resources@2.1.0(@opentelemetry/api@1.9.0)': - dependencies: - '@opentelemetry/api': 1.9.0 - '@opentelemetry/core': 2.1.0(@opentelemetry/api@1.9.0) + '@opentelemetry/core': 2.2.0(@opentelemetry/api@1.9.0) '@opentelemetry/semantic-conventions': 1.37.0 - '@opentelemetry/sdk-trace-base@2.1.0(@opentelemetry/api@1.9.0)': + '@opentelemetry/sdk-trace-base@2.2.0(@opentelemetry/api@1.9.0)': dependencies: '@opentelemetry/api': 1.9.0 - '@opentelemetry/core': 2.1.0(@opentelemetry/api@1.9.0) - '@opentelemetry/resources': 2.1.0(@opentelemetry/api@1.9.0) + '@opentelemetry/core': 2.2.0(@opentelemetry/api@1.9.0) + '@opentelemetry/resources': 2.2.0(@opentelemetry/api@1.9.0) '@opentelemetry/semantic-conventions': 1.37.0 '@opentelemetry/semantic-conventions@1.37.0': {} - '@opentelemetry/sql-common@0.41.0(@opentelemetry/api@1.9.0)': + '@opentelemetry/sql-common@0.41.2(@opentelemetry/api@1.9.0)': dependencies: '@opentelemetry/api': 1.9.0 - '@opentelemetry/core': 2.1.0(@opentelemetry/api@1.9.0) + '@opentelemetry/core': 2.2.0(@opentelemetry/api@1.9.0) - '@orval/angular@7.11.2(openapi-types@12.1.3)': + '@orval/angular@7.13.0(openapi-types@12.1.3)(typescript@5.9.3)': dependencies: - '@orval/core': 7.11.2(openapi-types@12.1.3) + '@orval/core': 7.13.0(openapi-types@12.1.3)(typescript@5.9.3) transitivePeerDependencies: - encoding - openapi-types - supports-color + - typescript - '@orval/axios@7.11.2(openapi-types@12.1.3)': + '@orval/axios@7.13.0(openapi-types@12.1.3)(typescript@5.9.3)': dependencies: - '@orval/core': 7.11.2(openapi-types@12.1.3) + '@orval/core': 7.13.0(openapi-types@12.1.3)(typescript@5.9.3) transitivePeerDependencies: - encoding - openapi-types - supports-color + - typescript - '@orval/core@7.11.2(openapi-types@12.1.3)': + '@orval/core@7.13.0(openapi-types@12.1.3)(typescript@5.9.3)': dependencies: - '@apidevtools/swagger-parser': 10.1.1(openapi-types@12.1.3) - '@ibm-cloud/openapi-ruleset': 1.31.2 + '@apidevtools/swagger-parser': 12.1.0(openapi-types@12.1.3) + '@ibm-cloud/openapi-ruleset': 1.33.3 + '@stoplight/spectral-core': 1.20.0 acorn: 8.15.0 - ajv: 8.17.1 chalk: 4.1.2 compare-versions: 6.1.1 - debug: 4.4.1 - esbuild: 0.25.9 + debug: 4.4.3 + esbuild: 0.25.11 esutils: 2.0.3 - fs-extra: 11.3.1 + fs-extra: 11.3.2 globby: 11.1.0 lodash.isempty: 4.4.0 lodash.uniq: 4.5.0 lodash.uniqby: 4.7.0 lodash.uniqwith: 4.5.0 micromatch: 4.0.8 - openapi3-ts: 4.4.0 + openapi3-ts: 4.5.0 swagger2openapi: 7.0.8 + typedoc: 0.28.14(typescript@5.9.3) transitivePeerDependencies: - encoding - openapi-types - supports-color + - typescript - '@orval/fetch@7.11.2(openapi-types@12.1.3)': + '@orval/fetch@7.13.0(openapi-types@12.1.3)(typescript@5.9.3)': dependencies: - '@orval/core': 7.11.2(openapi-types@12.1.3) + '@orval/core': 7.13.0(openapi-types@12.1.3)(typescript@5.9.3) + openapi3-ts: 4.5.0 transitivePeerDependencies: - encoding - openapi-types - supports-color + - typescript - '@orval/hono@7.11.2(openapi-types@12.1.3)': + '@orval/hono@7.13.0(openapi-types@12.1.3)(typescript@5.9.3)': dependencies: - '@orval/core': 7.11.2(openapi-types@12.1.3) - '@orval/zod': 7.11.2(openapi-types@12.1.3) + '@orval/core': 7.13.0(openapi-types@12.1.3)(typescript@5.9.3) + '@orval/zod': 7.13.0(openapi-types@12.1.3)(typescript@5.9.3) + 
fs-extra: 11.3.2 lodash.uniq: 4.5.0 + openapi3-ts: 4.5.0 transitivePeerDependencies: - encoding - openapi-types - supports-color + - typescript - '@orval/mcp@7.11.2(openapi-types@12.1.3)': + '@orval/mcp@7.13.0(openapi-types@12.1.3)(typescript@5.9.3)': dependencies: - '@orval/core': 7.11.2(openapi-types@12.1.3) - '@orval/fetch': 7.11.2(openapi-types@12.1.3) - '@orval/zod': 7.11.2(openapi-types@12.1.3) + '@orval/core': 7.13.0(openapi-types@12.1.3)(typescript@5.9.3) + '@orval/fetch': 7.13.0(openapi-types@12.1.3)(typescript@5.9.3) + '@orval/zod': 7.13.0(openapi-types@12.1.3)(typescript@5.9.3) + openapi3-ts: 4.5.0 transitivePeerDependencies: - encoding - openapi-types - supports-color + - typescript - '@orval/mock@7.11.2(openapi-types@12.1.3)': + '@orval/mock@7.13.0(openapi-types@12.1.3)(typescript@5.9.3)': dependencies: - '@orval/core': 7.11.2(openapi-types@12.1.3) - openapi3-ts: 4.4.0 + '@orval/core': 7.13.0(openapi-types@12.1.3)(typescript@5.9.3) + openapi3-ts: 4.5.0 transitivePeerDependencies: - encoding - openapi-types - supports-color + - typescript - '@orval/query@7.11.2(openapi-types@12.1.3)': + '@orval/query@7.13.0(openapi-types@12.1.3)(typescript@5.9.3)': dependencies: - '@orval/core': 7.11.2(openapi-types@12.1.3) - '@orval/fetch': 7.11.2(openapi-types@12.1.3) + '@orval/core': 7.13.0(openapi-types@12.1.3)(typescript@5.9.3) + '@orval/fetch': 7.13.0(openapi-types@12.1.3)(typescript@5.9.3) + chalk: 4.1.2 lodash.omitby: 4.6.0 transitivePeerDependencies: - encoding - openapi-types - supports-color + - typescript - '@orval/swr@7.11.2(openapi-types@12.1.3)': + '@orval/swr@7.13.0(openapi-types@12.1.3)(typescript@5.9.3)': dependencies: - '@orval/core': 7.11.2(openapi-types@12.1.3) - '@orval/fetch': 7.11.2(openapi-types@12.1.3) + '@orval/core': 7.13.0(openapi-types@12.1.3)(typescript@5.9.3) + '@orval/fetch': 7.13.0(openapi-types@12.1.3)(typescript@5.9.3) transitivePeerDependencies: - encoding - openapi-types - supports-color + - typescript - '@orval/zod@7.11.2(openapi-types@12.1.3)': + '@orval/zod@7.13.0(openapi-types@12.1.3)(typescript@5.9.3)': dependencies: - '@orval/core': 7.11.2(openapi-types@12.1.3) + '@orval/core': 7.13.0(openapi-types@12.1.3)(typescript@5.9.3) lodash.uniq: 4.5.0 + openapi3-ts: 4.5.0 transitivePeerDependencies: - encoding - openapi-types - supports-color + - typescript '@phosphor-icons/react@2.1.10(react-dom@18.3.1(react@18.3.1))(react@18.3.1)': dependencies: @@ -9131,9 +9482,9 @@ snapshots: '@pkgjs/parseargs@0.11.0': optional: true - '@playwright/test@1.55.0': + '@playwright/test@1.56.1': dependencies: - playwright: 1.55.0 + playwright: 1.56.1 '@pmmmwh/react-refresh-webpack-plugin@0.5.17(react-refresh@0.14.2)(type-fest@4.41.0)(webpack-hot-middleware@2.26.1)(webpack@5.101.3(esbuild@0.25.9))': dependencies: @@ -9150,10 +9501,10 @@ snapshots: type-fest: 4.41.0 webpack-hot-middleware: 2.26.1 - '@prisma/instrumentation@6.15.0(@opentelemetry/api@1.9.0)': + '@prisma/instrumentation@6.19.0(@opentelemetry/api@1.9.0)': dependencies: '@opentelemetry/api': 1.9.0 - '@opentelemetry/instrumentation': 0.57.2(@opentelemetry/api@1.9.0) + '@opentelemetry/instrumentation': 0.208.0(@opentelemetry/api@1.9.0) transitivePeerDependencies: - supports-color @@ -9822,37 +10173,39 @@ snapshots: '@rtsao/scc@1.1.0': {} - '@rushstack/eslint-patch@1.12.0': {} + '@rushstack/eslint-patch@1.15.0': {} '@scarf/scarf@1.4.0': {} - '@sentry-internal/browser-utils@10.15.0': + '@sentry-internal/browser-utils@10.27.0': dependencies: - '@sentry/core': 10.15.0 + '@sentry/core': 10.27.0 - 
'@sentry-internal/feedback@10.15.0': + '@sentry-internal/feedback@10.27.0': dependencies: - '@sentry/core': 10.15.0 + '@sentry/core': 10.27.0 - '@sentry-internal/replay-canvas@10.15.0': + '@sentry-internal/replay-canvas@10.27.0': dependencies: - '@sentry-internal/replay': 10.15.0 - '@sentry/core': 10.15.0 + '@sentry-internal/replay': 10.27.0 + '@sentry/core': 10.27.0 - '@sentry-internal/replay@10.15.0': + '@sentry-internal/replay@10.27.0': dependencies: - '@sentry-internal/browser-utils': 10.15.0 - '@sentry/core': 10.15.0 + '@sentry-internal/browser-utils': 10.27.0 + '@sentry/core': 10.27.0 '@sentry/babel-plugin-component-annotate@4.3.0': {} - '@sentry/browser@10.15.0': + '@sentry/babel-plugin-component-annotate@4.6.1': {} + + '@sentry/browser@10.27.0': dependencies: - '@sentry-internal/browser-utils': 10.15.0 - '@sentry-internal/feedback': 10.15.0 - '@sentry-internal/replay': 10.15.0 - '@sentry-internal/replay-canvas': 10.15.0 - '@sentry/core': 10.15.0 + '@sentry-internal/browser-utils': 10.27.0 + '@sentry-internal/feedback': 10.27.0 + '@sentry-internal/replay': 10.27.0 + '@sentry-internal/replay-canvas': 10.27.0 + '@sentry/core': 10.27.0 '@sentry/bundler-plugin-core@4.3.0': dependencies: @@ -9868,30 +10221,68 @@ snapshots: - encoding - supports-color + '@sentry/bundler-plugin-core@4.6.1': + dependencies: + '@babel/core': 7.28.4 + '@sentry/babel-plugin-component-annotate': 4.6.1 + '@sentry/cli': 2.58.2 + dotenv: 16.6.1 + find-up: 5.0.0 + glob: 10.5.0 + magic-string: 0.30.8 + unplugin: 1.0.1 + transitivePeerDependencies: + - encoding + - supports-color + '@sentry/cli-darwin@2.55.0': optional: true + '@sentry/cli-darwin@2.58.2': + optional: true + '@sentry/cli-linux-arm64@2.55.0': optional: true + '@sentry/cli-linux-arm64@2.58.2': + optional: true + '@sentry/cli-linux-arm@2.55.0': optional: true + '@sentry/cli-linux-arm@2.58.2': + optional: true + '@sentry/cli-linux-i686@2.55.0': optional: true + '@sentry/cli-linux-i686@2.58.2': + optional: true + '@sentry/cli-linux-x64@2.55.0': optional: true + '@sentry/cli-linux-x64@2.58.2': + optional: true + '@sentry/cli-win32-arm64@2.55.0': optional: true + '@sentry/cli-win32-arm64@2.58.2': + optional: true + '@sentry/cli-win32-i686@2.55.0': optional: true + '@sentry/cli-win32-i686@2.58.2': + optional: true + '@sentry/cli-win32-x64@2.55.0': optional: true + '@sentry/cli-win32-x64@2.58.2': + optional: true + '@sentry/cli@2.55.0': dependencies: https-proxy-agent: 5.0.1 @@ -9912,23 +10303,42 @@ snapshots: - encoding - supports-color - '@sentry/core@10.15.0': {} + '@sentry/cli@2.58.2': + dependencies: + https-proxy-agent: 5.0.1 + node-fetch: 2.7.0 + progress: 2.0.3 + proxy-from-env: 1.1.0 + which: 2.0.2 + optionalDependencies: + '@sentry/cli-darwin': 2.58.2 + '@sentry/cli-linux-arm': 2.58.2 + '@sentry/cli-linux-arm64': 2.58.2 + '@sentry/cli-linux-i686': 2.58.2 + '@sentry/cli-linux-x64': 2.58.2 + '@sentry/cli-win32-arm64': 2.58.2 + '@sentry/cli-win32-i686': 2.58.2 + '@sentry/cli-win32-x64': 2.58.2 + transitivePeerDependencies: + - encoding + - supports-color - '@sentry/nextjs@10.15.0(@opentelemetry/context-async-hooks@2.1.0(@opentelemetry/api@1.9.0))(@opentelemetry/core@2.1.0(@opentelemetry/api@1.9.0))(@opentelemetry/sdk-trace-base@2.1.0(@opentelemetry/api@1.9.0))(next@15.4.7(@babel/core@7.28.4)(@opentelemetry/api@1.9.0)(@playwright/test@1.55.0)(react-dom@18.3.1(react@18.3.1))(react@18.3.1))(react@18.3.1)(webpack@5.101.3(esbuild@0.25.9))': + '@sentry/core@10.27.0': {} + + 
'@sentry/nextjs@10.27.0(@opentelemetry/context-async-hooks@2.2.0(@opentelemetry/api@1.9.0))(@opentelemetry/core@2.2.0(@opentelemetry/api@1.9.0))(@opentelemetry/sdk-trace-base@2.2.0(@opentelemetry/api@1.9.0))(next@15.4.10(@babel/core@7.28.4)(@opentelemetry/api@1.9.0)(@playwright/test@1.56.1)(react-dom@18.3.1(react@18.3.1))(react@18.3.1))(react@18.3.1)(webpack@5.101.3(esbuild@0.25.9))': dependencies: '@opentelemetry/api': 1.9.0 '@opentelemetry/semantic-conventions': 1.37.0 '@rollup/plugin-commonjs': 28.0.1(rollup@4.52.2) - '@sentry-internal/browser-utils': 10.15.0 - '@sentry/bundler-plugin-core': 4.3.0 - '@sentry/core': 10.15.0 - '@sentry/node': 10.15.0 - '@sentry/opentelemetry': 10.15.0(@opentelemetry/api@1.9.0)(@opentelemetry/context-async-hooks@2.1.0(@opentelemetry/api@1.9.0))(@opentelemetry/core@2.1.0(@opentelemetry/api@1.9.0))(@opentelemetry/sdk-trace-base@2.1.0(@opentelemetry/api@1.9.0))(@opentelemetry/semantic-conventions@1.37.0) - '@sentry/react': 10.15.0(react@18.3.1) - '@sentry/vercel-edge': 10.15.0 + '@sentry-internal/browser-utils': 10.27.0 + '@sentry/bundler-plugin-core': 4.6.1 + '@sentry/core': 10.27.0 + '@sentry/node': 10.27.0 + '@sentry/opentelemetry': 10.27.0(@opentelemetry/api@1.9.0)(@opentelemetry/context-async-hooks@2.2.0(@opentelemetry/api@1.9.0))(@opentelemetry/core@2.2.0(@opentelemetry/api@1.9.0))(@opentelemetry/sdk-trace-base@2.2.0(@opentelemetry/api@1.9.0))(@opentelemetry/semantic-conventions@1.37.0) + '@sentry/react': 10.27.0(react@18.3.1) + '@sentry/vercel-edge': 10.27.0 '@sentry/webpack-plugin': 4.3.0(webpack@5.101.3(esbuild@0.25.9)) - chalk: 3.0.0 - next: 15.4.7(@babel/core@7.28.4)(@opentelemetry/api@1.9.0)(@playwright/test@1.55.0)(react-dom@18.3.1(react@18.3.1))(react@18.3.1) + next: 15.4.10(@babel/core@7.28.4)(@opentelemetry/api@1.9.0)(@playwright/test@1.56.1)(react-dom@18.3.1(react@18.3.1))(react@18.3.1) resolve: 1.22.8 rollup: 4.52.2 stacktrace-parser: 0.1.11 @@ -9941,80 +10351,83 @@ snapshots: - supports-color - webpack - '@sentry/node-core@10.15.0(@opentelemetry/api@1.9.0)(@opentelemetry/context-async-hooks@2.1.0(@opentelemetry/api@1.9.0))(@opentelemetry/core@2.1.0(@opentelemetry/api@1.9.0))(@opentelemetry/instrumentation@0.204.0(@opentelemetry/api@1.9.0))(@opentelemetry/resources@2.1.0(@opentelemetry/api@1.9.0))(@opentelemetry/sdk-trace-base@2.1.0(@opentelemetry/api@1.9.0))(@opentelemetry/semantic-conventions@1.37.0)': + '@sentry/node-core@10.27.0(@opentelemetry/api@1.9.0)(@opentelemetry/context-async-hooks@2.2.0(@opentelemetry/api@1.9.0))(@opentelemetry/core@2.2.0(@opentelemetry/api@1.9.0))(@opentelemetry/instrumentation@0.208.0(@opentelemetry/api@1.9.0))(@opentelemetry/resources@2.2.0(@opentelemetry/api@1.9.0))(@opentelemetry/sdk-trace-base@2.2.0(@opentelemetry/api@1.9.0))(@opentelemetry/semantic-conventions@1.37.0)': dependencies: + '@apm-js-collab/tracing-hooks': 0.3.1 '@opentelemetry/api': 1.9.0 - '@opentelemetry/context-async-hooks': 2.1.0(@opentelemetry/api@1.9.0) - '@opentelemetry/core': 2.1.0(@opentelemetry/api@1.9.0) - '@opentelemetry/instrumentation': 0.204.0(@opentelemetry/api@1.9.0) - '@opentelemetry/resources': 2.1.0(@opentelemetry/api@1.9.0) - '@opentelemetry/sdk-trace-base': 2.1.0(@opentelemetry/api@1.9.0) + '@opentelemetry/context-async-hooks': 2.2.0(@opentelemetry/api@1.9.0) + '@opentelemetry/core': 2.2.0(@opentelemetry/api@1.9.0) + '@opentelemetry/instrumentation': 0.208.0(@opentelemetry/api@1.9.0) + '@opentelemetry/resources': 2.2.0(@opentelemetry/api@1.9.0) + '@opentelemetry/sdk-trace-base': 2.2.0(@opentelemetry/api@1.9.0) 
'@opentelemetry/semantic-conventions': 1.37.0 - '@sentry/core': 10.15.0 - '@sentry/opentelemetry': 10.15.0(@opentelemetry/api@1.9.0)(@opentelemetry/context-async-hooks@2.1.0(@opentelemetry/api@1.9.0))(@opentelemetry/core@2.1.0(@opentelemetry/api@1.9.0))(@opentelemetry/sdk-trace-base@2.1.0(@opentelemetry/api@1.9.0))(@opentelemetry/semantic-conventions@1.37.0) - import-in-the-middle: 1.14.2 + '@sentry/core': 10.27.0 + '@sentry/opentelemetry': 10.27.0(@opentelemetry/api@1.9.0)(@opentelemetry/context-async-hooks@2.2.0(@opentelemetry/api@1.9.0))(@opentelemetry/core@2.2.0(@opentelemetry/api@1.9.0))(@opentelemetry/sdk-trace-base@2.2.0(@opentelemetry/api@1.9.0))(@opentelemetry/semantic-conventions@1.37.0) + import-in-the-middle: 2.0.0 + transitivePeerDependencies: + - supports-color - '@sentry/node@10.15.0': + '@sentry/node@10.27.0': dependencies: '@opentelemetry/api': 1.9.0 - '@opentelemetry/context-async-hooks': 2.1.0(@opentelemetry/api@1.9.0) - '@opentelemetry/core': 2.1.0(@opentelemetry/api@1.9.0) - '@opentelemetry/instrumentation': 0.204.0(@opentelemetry/api@1.9.0) - '@opentelemetry/instrumentation-amqplib': 0.51.0(@opentelemetry/api@1.9.0) - '@opentelemetry/instrumentation-connect': 0.48.0(@opentelemetry/api@1.9.0) - '@opentelemetry/instrumentation-dataloader': 0.22.0(@opentelemetry/api@1.9.0) - '@opentelemetry/instrumentation-express': 0.53.0(@opentelemetry/api@1.9.0) - '@opentelemetry/instrumentation-fs': 0.24.0(@opentelemetry/api@1.9.0) - '@opentelemetry/instrumentation-generic-pool': 0.48.0(@opentelemetry/api@1.9.0) - '@opentelemetry/instrumentation-graphql': 0.52.0(@opentelemetry/api@1.9.0) - '@opentelemetry/instrumentation-hapi': 0.51.0(@opentelemetry/api@1.9.0) - '@opentelemetry/instrumentation-http': 0.204.0(@opentelemetry/api@1.9.0) - '@opentelemetry/instrumentation-ioredis': 0.52.0(@opentelemetry/api@1.9.0) - '@opentelemetry/instrumentation-kafkajs': 0.14.0(@opentelemetry/api@1.9.0) - '@opentelemetry/instrumentation-knex': 0.49.0(@opentelemetry/api@1.9.0) - '@opentelemetry/instrumentation-koa': 0.52.0(@opentelemetry/api@1.9.0) - '@opentelemetry/instrumentation-lru-memoizer': 0.49.0(@opentelemetry/api@1.9.0) - '@opentelemetry/instrumentation-mongodb': 0.57.0(@opentelemetry/api@1.9.0) - '@opentelemetry/instrumentation-mongoose': 0.51.0(@opentelemetry/api@1.9.0) - '@opentelemetry/instrumentation-mysql': 0.50.0(@opentelemetry/api@1.9.0) - '@opentelemetry/instrumentation-mysql2': 0.51.0(@opentelemetry/api@1.9.0) - '@opentelemetry/instrumentation-pg': 0.57.0(@opentelemetry/api@1.9.0) - '@opentelemetry/instrumentation-redis': 0.53.0(@opentelemetry/api@1.9.0) - '@opentelemetry/instrumentation-tedious': 0.23.0(@opentelemetry/api@1.9.0) - '@opentelemetry/instrumentation-undici': 0.15.0(@opentelemetry/api@1.9.0) - '@opentelemetry/resources': 2.1.0(@opentelemetry/api@1.9.0) - '@opentelemetry/sdk-trace-base': 2.1.0(@opentelemetry/api@1.9.0) + '@opentelemetry/context-async-hooks': 2.2.0(@opentelemetry/api@1.9.0) + '@opentelemetry/core': 2.2.0(@opentelemetry/api@1.9.0) + '@opentelemetry/instrumentation': 0.208.0(@opentelemetry/api@1.9.0) + '@opentelemetry/instrumentation-amqplib': 0.55.0(@opentelemetry/api@1.9.0) + '@opentelemetry/instrumentation-connect': 0.52.0(@opentelemetry/api@1.9.0) + '@opentelemetry/instrumentation-dataloader': 0.26.0(@opentelemetry/api@1.9.0) + '@opentelemetry/instrumentation-express': 0.57.0(@opentelemetry/api@1.9.0) + '@opentelemetry/instrumentation-fs': 0.28.0(@opentelemetry/api@1.9.0) + '@opentelemetry/instrumentation-generic-pool': 0.52.0(@opentelemetry/api@1.9.0) + 
'@opentelemetry/instrumentation-graphql': 0.56.0(@opentelemetry/api@1.9.0) + '@opentelemetry/instrumentation-hapi': 0.55.0(@opentelemetry/api@1.9.0) + '@opentelemetry/instrumentation-http': 0.208.0(@opentelemetry/api@1.9.0) + '@opentelemetry/instrumentation-ioredis': 0.56.0(@opentelemetry/api@1.9.0) + '@opentelemetry/instrumentation-kafkajs': 0.18.0(@opentelemetry/api@1.9.0) + '@opentelemetry/instrumentation-knex': 0.53.0(@opentelemetry/api@1.9.0) + '@opentelemetry/instrumentation-koa': 0.57.0(@opentelemetry/api@1.9.0) + '@opentelemetry/instrumentation-lru-memoizer': 0.53.0(@opentelemetry/api@1.9.0) + '@opentelemetry/instrumentation-mongodb': 0.61.0(@opentelemetry/api@1.9.0) + '@opentelemetry/instrumentation-mongoose': 0.55.0(@opentelemetry/api@1.9.0) + '@opentelemetry/instrumentation-mysql': 0.54.0(@opentelemetry/api@1.9.0) + '@opentelemetry/instrumentation-mysql2': 0.55.0(@opentelemetry/api@1.9.0) + '@opentelemetry/instrumentation-pg': 0.61.0(@opentelemetry/api@1.9.0) + '@opentelemetry/instrumentation-redis': 0.57.0(@opentelemetry/api@1.9.0) + '@opentelemetry/instrumentation-tedious': 0.27.0(@opentelemetry/api@1.9.0) + '@opentelemetry/instrumentation-undici': 0.19.0(@opentelemetry/api@1.9.0) + '@opentelemetry/resources': 2.2.0(@opentelemetry/api@1.9.0) + '@opentelemetry/sdk-trace-base': 2.2.0(@opentelemetry/api@1.9.0) '@opentelemetry/semantic-conventions': 1.37.0 - '@prisma/instrumentation': 6.15.0(@opentelemetry/api@1.9.0) - '@sentry/core': 10.15.0 - '@sentry/node-core': 10.15.0(@opentelemetry/api@1.9.0)(@opentelemetry/context-async-hooks@2.1.0(@opentelemetry/api@1.9.0))(@opentelemetry/core@2.1.0(@opentelemetry/api@1.9.0))(@opentelemetry/instrumentation@0.204.0(@opentelemetry/api@1.9.0))(@opentelemetry/resources@2.1.0(@opentelemetry/api@1.9.0))(@opentelemetry/sdk-trace-base@2.1.0(@opentelemetry/api@1.9.0))(@opentelemetry/semantic-conventions@1.37.0) - '@sentry/opentelemetry': 10.15.0(@opentelemetry/api@1.9.0)(@opentelemetry/context-async-hooks@2.1.0(@opentelemetry/api@1.9.0))(@opentelemetry/core@2.1.0(@opentelemetry/api@1.9.0))(@opentelemetry/sdk-trace-base@2.1.0(@opentelemetry/api@1.9.0))(@opentelemetry/semantic-conventions@1.37.0) - import-in-the-middle: 1.14.2 + '@prisma/instrumentation': 6.19.0(@opentelemetry/api@1.9.0) + '@sentry/core': 10.27.0 + '@sentry/node-core': 10.27.0(@opentelemetry/api@1.9.0)(@opentelemetry/context-async-hooks@2.2.0(@opentelemetry/api@1.9.0))(@opentelemetry/core@2.2.0(@opentelemetry/api@1.9.0))(@opentelemetry/instrumentation@0.208.0(@opentelemetry/api@1.9.0))(@opentelemetry/resources@2.2.0(@opentelemetry/api@1.9.0))(@opentelemetry/sdk-trace-base@2.2.0(@opentelemetry/api@1.9.0))(@opentelemetry/semantic-conventions@1.37.0) + '@sentry/opentelemetry': 10.27.0(@opentelemetry/api@1.9.0)(@opentelemetry/context-async-hooks@2.2.0(@opentelemetry/api@1.9.0))(@opentelemetry/core@2.2.0(@opentelemetry/api@1.9.0))(@opentelemetry/sdk-trace-base@2.2.0(@opentelemetry/api@1.9.0))(@opentelemetry/semantic-conventions@1.37.0) + import-in-the-middle: 2.0.0 minimatch: 9.0.5 transitivePeerDependencies: - supports-color - '@sentry/opentelemetry@10.15.0(@opentelemetry/api@1.9.0)(@opentelemetry/context-async-hooks@2.1.0(@opentelemetry/api@1.9.0))(@opentelemetry/core@2.1.0(@opentelemetry/api@1.9.0))(@opentelemetry/sdk-trace-base@2.1.0(@opentelemetry/api@1.9.0))(@opentelemetry/semantic-conventions@1.37.0)': + 
'@sentry/opentelemetry@10.27.0(@opentelemetry/api@1.9.0)(@opentelemetry/context-async-hooks@2.2.0(@opentelemetry/api@1.9.0))(@opentelemetry/core@2.2.0(@opentelemetry/api@1.9.0))(@opentelemetry/sdk-trace-base@2.2.0(@opentelemetry/api@1.9.0))(@opentelemetry/semantic-conventions@1.37.0)': dependencies: '@opentelemetry/api': 1.9.0 - '@opentelemetry/context-async-hooks': 2.1.0(@opentelemetry/api@1.9.0) - '@opentelemetry/core': 2.1.0(@opentelemetry/api@1.9.0) - '@opentelemetry/sdk-trace-base': 2.1.0(@opentelemetry/api@1.9.0) + '@opentelemetry/context-async-hooks': 2.2.0(@opentelemetry/api@1.9.0) + '@opentelemetry/core': 2.2.0(@opentelemetry/api@1.9.0) + '@opentelemetry/sdk-trace-base': 2.2.0(@opentelemetry/api@1.9.0) '@opentelemetry/semantic-conventions': 1.37.0 - '@sentry/core': 10.15.0 + '@sentry/core': 10.27.0 - '@sentry/react@10.15.0(react@18.3.1)': + '@sentry/react@10.27.0(react@18.3.1)': dependencies: - '@sentry/browser': 10.15.0 - '@sentry/core': 10.15.0 + '@sentry/browser': 10.27.0 + '@sentry/core': 10.27.0 hoist-non-react-statics: 3.3.2 react: 18.3.1 - '@sentry/vercel-edge@10.15.0': + '@sentry/vercel-edge@10.27.0': dependencies: '@opentelemetry/api': 1.9.0 - '@opentelemetry/resources': 2.1.0(@opentelemetry/api@1.9.0) - '@sentry/core': 10.15.0 + '@opentelemetry/resources': 2.2.0(@opentelemetry/api@1.9.0) + '@sentry/core': 10.27.0 '@sentry/webpack-plugin@4.3.0(webpack@5.101.3(esbuild@0.25.9))': dependencies: @@ -10026,20 +10439,20 @@ snapshots: - encoding - supports-color - '@shikijs/engine-oniguruma@3.9.2': + '@shikijs/engine-oniguruma@3.14.0': dependencies: - '@shikijs/types': 3.9.2 + '@shikijs/types': 3.14.0 '@shikijs/vscode-textmate': 10.0.2 - '@shikijs/langs@3.9.2': + '@shikijs/langs@3.14.0': dependencies: - '@shikijs/types': 3.9.2 + '@shikijs/types': 3.14.0 - '@shikijs/themes@3.9.2': + '@shikijs/themes@3.14.0': dependencies: - '@shikijs/types': 3.9.2 + '@shikijs/types': 3.14.0 - '@shikijs/types@3.9.2': + '@shikijs/types@3.14.0': dependencies: '@shikijs/vscode-textmate': 10.0.2 '@types/hast': 3.0.4 @@ -10068,7 +10481,7 @@ snapshots: '@stoplight/json': 3.21.7 '@stoplight/path': 1.3.2 '@stoplight/types': 13.20.0 - '@types/urijs': 1.19.25 + '@types/urijs': 1.19.26 dependency-graph: 0.11.0 fast-memoize: 2.5.2 immer: 9.0.21 @@ -10159,7 +10572,7 @@ snapshots: '@stoplight/spectral-rulesets@1.22.0': dependencies: - '@asyncapi/specs': 6.9.0 + '@asyncapi/specs': 6.10.0 '@stoplight/better-ajv-errors': 1.0.3(ajv@8.17.1) '@stoplight/json': 3.21.7 '@stoplight/spectral-core': 1.20.0 @@ -10213,47 +10626,47 @@ snapshots: '@stoplight/yaml-ast-parser': 0.0.50 tslib: 2.8.1 - '@storybook/addon-a11y@9.1.5(storybook@9.1.5(@testing-library/dom@10.4.1)(msw@2.11.1(@types/node@24.3.1)(typescript@5.9.2))(prettier@3.6.2))': + '@storybook/addon-a11y@9.1.5(storybook@9.1.5(@testing-library/dom@10.4.1)(msw@2.11.6(@types/node@24.10.0)(typescript@5.9.3))(prettier@3.6.2))': dependencies: '@storybook/global': 5.0.0 axe-core: 4.10.3 - storybook: 9.1.5(@testing-library/dom@10.4.1)(msw@2.11.1(@types/node@24.3.1)(typescript@5.9.2))(prettier@3.6.2) + storybook: 9.1.5(@testing-library/dom@10.4.1)(msw@2.11.6(@types/node@24.10.0)(typescript@5.9.3))(prettier@3.6.2) - '@storybook/addon-docs@9.1.5(@types/react@18.3.17)(storybook@9.1.5(@testing-library/dom@10.4.1)(msw@2.11.1(@types/node@24.3.1)(typescript@5.9.2))(prettier@3.6.2))': + '@storybook/addon-docs@9.1.5(@types/react@18.3.17)(storybook@9.1.5(@testing-library/dom@10.4.1)(msw@2.11.6(@types/node@24.10.0)(typescript@5.9.3))(prettier@3.6.2))': dependencies: '@mdx-js/react': 
3.1.1(@types/react@18.3.17)(react@18.3.1) - '@storybook/csf-plugin': 9.1.5(storybook@9.1.5(@testing-library/dom@10.4.1)(msw@2.11.1(@types/node@24.3.1)(typescript@5.9.2))(prettier@3.6.2)) + '@storybook/csf-plugin': 9.1.5(storybook@9.1.5(@testing-library/dom@10.4.1)(msw@2.11.6(@types/node@24.10.0)(typescript@5.9.3))(prettier@3.6.2)) '@storybook/icons': 1.4.0(react-dom@18.3.1(react@18.3.1))(react@18.3.1) - '@storybook/react-dom-shim': 9.1.5(react-dom@18.3.1(react@18.3.1))(react@18.3.1)(storybook@9.1.5(@testing-library/dom@10.4.1)(msw@2.11.1(@types/node@24.3.1)(typescript@5.9.2))(prettier@3.6.2)) + '@storybook/react-dom-shim': 9.1.5(react-dom@18.3.1(react@18.3.1))(react@18.3.1)(storybook@9.1.5(@testing-library/dom@10.4.1)(msw@2.11.6(@types/node@24.10.0)(typescript@5.9.3))(prettier@3.6.2)) react: 18.3.1 react-dom: 18.3.1(react@18.3.1) - storybook: 9.1.5(@testing-library/dom@10.4.1)(msw@2.11.1(@types/node@24.3.1)(typescript@5.9.2))(prettier@3.6.2) + storybook: 9.1.5(@testing-library/dom@10.4.1)(msw@2.11.6(@types/node@24.10.0)(typescript@5.9.3))(prettier@3.6.2) ts-dedent: 2.2.0 transitivePeerDependencies: - '@types/react' - '@storybook/addon-links@9.1.5(react@18.3.1)(storybook@9.1.5(@testing-library/dom@10.4.1)(msw@2.11.1(@types/node@24.3.1)(typescript@5.9.2))(prettier@3.6.2))': + '@storybook/addon-links@9.1.5(react@18.3.1)(storybook@9.1.5(@testing-library/dom@10.4.1)(msw@2.11.6(@types/node@24.10.0)(typescript@5.9.3))(prettier@3.6.2))': dependencies: '@storybook/global': 5.0.0 - storybook: 9.1.5(@testing-library/dom@10.4.1)(msw@2.11.1(@types/node@24.3.1)(typescript@5.9.2))(prettier@3.6.2) + storybook: 9.1.5(@testing-library/dom@10.4.1)(msw@2.11.6(@types/node@24.10.0)(typescript@5.9.3))(prettier@3.6.2) optionalDependencies: react: 18.3.1 - '@storybook/addon-onboarding@9.1.5(storybook@9.1.5(@testing-library/dom@10.4.1)(msw@2.11.1(@types/node@24.3.1)(typescript@5.9.2))(prettier@3.6.2))': + '@storybook/addon-onboarding@9.1.5(storybook@9.1.5(@testing-library/dom@10.4.1)(msw@2.11.6(@types/node@24.10.0)(typescript@5.9.3))(prettier@3.6.2))': dependencies: - storybook: 9.1.5(@testing-library/dom@10.4.1)(msw@2.11.1(@types/node@24.3.1)(typescript@5.9.2))(prettier@3.6.2) + storybook: 9.1.5(@testing-library/dom@10.4.1)(msw@2.11.6(@types/node@24.10.0)(typescript@5.9.3))(prettier@3.6.2) - '@storybook/builder-webpack5@9.1.5(esbuild@0.25.9)(storybook@9.1.5(@testing-library/dom@10.4.1)(msw@2.11.1(@types/node@24.3.1)(typescript@5.9.2))(prettier@3.6.2))(typescript@5.9.2)': + '@storybook/builder-webpack5@9.1.5(esbuild@0.25.9)(storybook@9.1.5(@testing-library/dom@10.4.1)(msw@2.11.6(@types/node@24.10.0)(typescript@5.9.3))(prettier@3.6.2))(typescript@5.9.3)': dependencies: - '@storybook/core-webpack': 9.1.5(storybook@9.1.5(@testing-library/dom@10.4.1)(msw@2.11.1(@types/node@24.3.1)(typescript@5.9.2))(prettier@3.6.2)) + '@storybook/core-webpack': 9.1.5(storybook@9.1.5(@testing-library/dom@10.4.1)(msw@2.11.6(@types/node@24.10.0)(typescript@5.9.3))(prettier@3.6.2)) case-sensitive-paths-webpack-plugin: 2.4.0 cjs-module-lexer: 1.4.3 css-loader: 6.11.0(webpack@5.101.3(esbuild@0.25.9)) es-module-lexer: 1.7.0 - fork-ts-checker-webpack-plugin: 8.0.0(typescript@5.9.2)(webpack@5.101.3(esbuild@0.25.9)) + fork-ts-checker-webpack-plugin: 8.0.0(typescript@5.9.3)(webpack@5.101.3(esbuild@0.25.9)) html-webpack-plugin: 5.6.4(webpack@5.101.3(esbuild@0.25.9)) magic-string: 0.30.19 - storybook: 9.1.5(@testing-library/dom@10.4.1)(msw@2.11.1(@types/node@24.3.1)(typescript@5.9.2))(prettier@3.6.2) + storybook: 
9.1.5(@testing-library/dom@10.4.1)(msw@2.11.6(@types/node@24.10.0)(typescript@5.9.3))(prettier@3.6.2) style-loader: 3.3.4(webpack@5.101.3(esbuild@0.25.9)) terser-webpack-plugin: 5.3.14(esbuild@0.25.9)(webpack@5.101.3(esbuild@0.25.9)) ts-dedent: 2.2.0 @@ -10262,7 +10675,7 @@ snapshots: webpack-hot-middleware: 2.26.1 webpack-virtual-modules: 0.6.2 optionalDependencies: - typescript: 5.9.2 + typescript: 5.9.3 transitivePeerDependencies: - '@rspack/core' - '@swc/core' @@ -10270,14 +10683,14 @@ snapshots: - uglify-js - webpack-cli - '@storybook/core-webpack@9.1.5(storybook@9.1.5(@testing-library/dom@10.4.1)(msw@2.11.1(@types/node@24.3.1)(typescript@5.9.2))(prettier@3.6.2))': + '@storybook/core-webpack@9.1.5(storybook@9.1.5(@testing-library/dom@10.4.1)(msw@2.11.6(@types/node@24.10.0)(typescript@5.9.3))(prettier@3.6.2))': dependencies: - storybook: 9.1.5(@testing-library/dom@10.4.1)(msw@2.11.1(@types/node@24.3.1)(typescript@5.9.2))(prettier@3.6.2) + storybook: 9.1.5(@testing-library/dom@10.4.1)(msw@2.11.6(@types/node@24.10.0)(typescript@5.9.3))(prettier@3.6.2) ts-dedent: 2.2.0 - '@storybook/csf-plugin@9.1.5(storybook@9.1.5(@testing-library/dom@10.4.1)(msw@2.11.1(@types/node@24.3.1)(typescript@5.9.2))(prettier@3.6.2))': + '@storybook/csf-plugin@9.1.5(storybook@9.1.5(@testing-library/dom@10.4.1)(msw@2.11.6(@types/node@24.10.0)(typescript@5.9.3))(prettier@3.6.2))': dependencies: - storybook: 9.1.5(@testing-library/dom@10.4.1)(msw@2.11.1(@types/node@24.3.1)(typescript@5.9.2))(prettier@3.6.2) + storybook: 9.1.5(@testing-library/dom@10.4.1)(msw@2.11.6(@types/node@24.10.0)(typescript@5.9.3))(prettier@3.6.2) unplugin: 1.16.1 '@storybook/global@5.0.0': {} @@ -10287,7 +10700,7 @@ snapshots: react: 18.3.1 react-dom: 18.3.1(react@18.3.1) - '@storybook/nextjs@9.1.5(esbuild@0.25.9)(next@15.4.7(@babel/core@7.28.4)(@opentelemetry/api@1.9.0)(@playwright/test@1.55.0)(react-dom@18.3.1(react@18.3.1))(react@18.3.1))(react-dom@18.3.1(react@18.3.1))(react@18.3.1)(storybook@9.1.5(@testing-library/dom@10.4.1)(msw@2.11.1(@types/node@24.3.1)(typescript@5.9.2))(prettier@3.6.2))(type-fest@4.41.0)(typescript@5.9.2)(webpack-hot-middleware@2.26.1)(webpack@5.101.3(esbuild@0.25.9))': + '@storybook/nextjs@9.1.5(esbuild@0.25.9)(next@15.4.10(@babel/core@7.28.4)(@opentelemetry/api@1.9.0)(@playwright/test@1.56.1)(react-dom@18.3.1(react@18.3.1))(react@18.3.1))(react-dom@18.3.1(react@18.3.1))(react@18.3.1)(storybook@9.1.5(@testing-library/dom@10.4.1)(msw@2.11.6(@types/node@24.10.0)(typescript@5.9.3))(prettier@3.6.2))(type-fest@4.41.0)(typescript@5.9.3)(webpack-hot-middleware@2.26.1)(webpack@5.101.3(esbuild@0.25.9))': dependencies: '@babel/core': 7.28.4 '@babel/plugin-syntax-bigint': 7.8.3(@babel/core@7.28.4) @@ -10303,31 +10716,31 @@ snapshots: '@babel/preset-typescript': 7.27.1(@babel/core@7.28.4) '@babel/runtime': 7.28.4 '@pmmmwh/react-refresh-webpack-plugin': 0.5.17(react-refresh@0.14.2)(type-fest@4.41.0)(webpack-hot-middleware@2.26.1)(webpack@5.101.3(esbuild@0.25.9)) - '@storybook/builder-webpack5': 9.1.5(esbuild@0.25.9)(storybook@9.1.5(@testing-library/dom@10.4.1)(msw@2.11.1(@types/node@24.3.1)(typescript@5.9.2))(prettier@3.6.2))(typescript@5.9.2) - '@storybook/preset-react-webpack': 9.1.5(esbuild@0.25.9)(react-dom@18.3.1(react@18.3.1))(react@18.3.1)(storybook@9.1.5(@testing-library/dom@10.4.1)(msw@2.11.1(@types/node@24.3.1)(typescript@5.9.2))(prettier@3.6.2))(typescript@5.9.2) - '@storybook/react': 
9.1.5(react-dom@18.3.1(react@18.3.1))(react@18.3.1)(storybook@9.1.5(@testing-library/dom@10.4.1)(msw@2.11.1(@types/node@24.3.1)(typescript@5.9.2))(prettier@3.6.2))(typescript@5.9.2) + '@storybook/builder-webpack5': 9.1.5(esbuild@0.25.9)(storybook@9.1.5(@testing-library/dom@10.4.1)(msw@2.11.6(@types/node@24.10.0)(typescript@5.9.3))(prettier@3.6.2))(typescript@5.9.3) + '@storybook/preset-react-webpack': 9.1.5(esbuild@0.25.9)(react-dom@18.3.1(react@18.3.1))(react@18.3.1)(storybook@9.1.5(@testing-library/dom@10.4.1)(msw@2.11.6(@types/node@24.10.0)(typescript@5.9.3))(prettier@3.6.2))(typescript@5.9.3) + '@storybook/react': 9.1.5(react-dom@18.3.1(react@18.3.1))(react@18.3.1)(storybook@9.1.5(@testing-library/dom@10.4.1)(msw@2.11.6(@types/node@24.10.0)(typescript@5.9.3))(prettier@3.6.2))(typescript@5.9.3) '@types/semver': 7.7.1 babel-loader: 9.2.1(@babel/core@7.28.4)(webpack@5.101.3(esbuild@0.25.9)) css-loader: 6.11.0(webpack@5.101.3(esbuild@0.25.9)) image-size: 2.0.2 loader-utils: 3.3.1 - next: 15.4.7(@babel/core@7.28.4)(@opentelemetry/api@1.9.0)(@playwright/test@1.55.0)(react-dom@18.3.1(react@18.3.1))(react@18.3.1) + next: 15.4.10(@babel/core@7.28.4)(@opentelemetry/api@1.9.0)(@playwright/test@1.56.1)(react-dom@18.3.1(react@18.3.1))(react@18.3.1) node-polyfill-webpack-plugin: 2.0.1(webpack@5.101.3(esbuild@0.25.9)) postcss: 8.5.6 - postcss-loader: 8.2.0(postcss@8.5.6)(typescript@5.9.2)(webpack@5.101.3(esbuild@0.25.9)) + postcss-loader: 8.2.0(postcss@8.5.6)(typescript@5.9.3)(webpack@5.101.3(esbuild@0.25.9)) react: 18.3.1 react-dom: 18.3.1(react@18.3.1) react-refresh: 0.14.2 resolve-url-loader: 5.0.0 sass-loader: 16.0.5(webpack@5.101.3(esbuild@0.25.9)) semver: 7.7.2 - storybook: 9.1.5(@testing-library/dom@10.4.1)(msw@2.11.1(@types/node@24.3.1)(typescript@5.9.2))(prettier@3.6.2) + storybook: 9.1.5(@testing-library/dom@10.4.1)(msw@2.11.6(@types/node@24.10.0)(typescript@5.9.3))(prettier@3.6.2) style-loader: 3.3.4(webpack@5.101.3(esbuild@0.25.9)) styled-jsx: 5.1.7(@babel/core@7.28.4)(react@18.3.1) tsconfig-paths: 4.2.0 tsconfig-paths-webpack-plugin: 4.2.0 optionalDependencies: - typescript: 5.9.2 + typescript: 5.9.3 webpack: 5.101.3(esbuild@0.25.9) transitivePeerDependencies: - '@rspack/core' @@ -10347,10 +10760,10 @@ snapshots: - webpack-hot-middleware - webpack-plugin-serve - '@storybook/preset-react-webpack@9.1.5(esbuild@0.25.9)(react-dom@18.3.1(react@18.3.1))(react@18.3.1)(storybook@9.1.5(@testing-library/dom@10.4.1)(msw@2.11.1(@types/node@24.3.1)(typescript@5.9.2))(prettier@3.6.2))(typescript@5.9.2)': + '@storybook/preset-react-webpack@9.1.5(esbuild@0.25.9)(react-dom@18.3.1(react@18.3.1))(react@18.3.1)(storybook@9.1.5(@testing-library/dom@10.4.1)(msw@2.11.6(@types/node@24.10.0)(typescript@5.9.3))(prettier@3.6.2))(typescript@5.9.3)': dependencies: - '@storybook/core-webpack': 9.1.5(storybook@9.1.5(@testing-library/dom@10.4.1)(msw@2.11.1(@types/node@24.3.1)(typescript@5.9.2))(prettier@3.6.2)) - '@storybook/react-docgen-typescript-plugin': 1.0.6--canary.9.0c3f3b7.0(typescript@5.9.2)(webpack@5.101.3(esbuild@0.25.9)) + '@storybook/core-webpack': 9.1.5(storybook@9.1.5(@testing-library/dom@10.4.1)(msw@2.11.6(@types/node@24.10.0)(typescript@5.9.3))(prettier@3.6.2)) + '@storybook/react-docgen-typescript-plugin': 1.0.6--canary.9.0c3f3b7.0(typescript@5.9.3)(webpack@5.101.3(esbuild@0.25.9)) '@types/semver': 7.7.1 find-up: 7.0.0 magic-string: 0.30.19 @@ -10359,11 +10772,11 @@ snapshots: react-dom: 18.3.1(react@18.3.1) resolve: 1.22.10 semver: 7.7.2 - storybook: 
9.1.5(@testing-library/dom@10.4.1)(msw@2.11.1(@types/node@24.3.1)(typescript@5.9.2))(prettier@3.6.2) + storybook: 9.1.5(@testing-library/dom@10.4.1)(msw@2.11.6(@types/node@24.10.0)(typescript@5.9.3))(prettier@3.6.2) tsconfig-paths: 4.2.0 webpack: 5.101.3(esbuild@0.25.9) optionalDependencies: - typescript: 5.9.2 + typescript: 5.9.3 transitivePeerDependencies: - '@swc/core' - esbuild @@ -10371,79 +10784,84 @@ snapshots: - uglify-js - webpack-cli - '@storybook/react-docgen-typescript-plugin@1.0.6--canary.9.0c3f3b7.0(typescript@5.9.2)(webpack@5.101.3(esbuild@0.25.9))': + '@storybook/react-docgen-typescript-plugin@1.0.6--canary.9.0c3f3b7.0(typescript@5.9.3)(webpack@5.101.3(esbuild@0.25.9))': dependencies: - debug: 4.4.1 + debug: 4.4.3 endent: 2.1.0 find-cache-dir: 3.3.2 flat-cache: 3.2.0 micromatch: 4.0.8 - react-docgen-typescript: 2.4.0(typescript@5.9.2) + react-docgen-typescript: 2.4.0(typescript@5.9.3) tslib: 2.8.1 - typescript: 5.9.2 + typescript: 5.9.3 webpack: 5.101.3(esbuild@0.25.9) transitivePeerDependencies: - supports-color - '@storybook/react-dom-shim@9.1.5(react-dom@18.3.1(react@18.3.1))(react@18.3.1)(storybook@9.1.5(@testing-library/dom@10.4.1)(msw@2.11.1(@types/node@24.3.1)(typescript@5.9.2))(prettier@3.6.2))': + '@storybook/react-dom-shim@9.1.5(react-dom@18.3.1(react@18.3.1))(react@18.3.1)(storybook@9.1.5(@testing-library/dom@10.4.1)(msw@2.11.6(@types/node@24.10.0)(typescript@5.9.3))(prettier@3.6.2))': dependencies: react: 18.3.1 react-dom: 18.3.1(react@18.3.1) - storybook: 9.1.5(@testing-library/dom@10.4.1)(msw@2.11.1(@types/node@24.3.1)(typescript@5.9.2))(prettier@3.6.2) + storybook: 9.1.5(@testing-library/dom@10.4.1)(msw@2.11.6(@types/node@24.10.0)(typescript@5.9.3))(prettier@3.6.2) - '@storybook/react@9.1.5(react-dom@18.3.1(react@18.3.1))(react@18.3.1)(storybook@9.1.5(@testing-library/dom@10.4.1)(msw@2.11.1(@types/node@24.3.1)(typescript@5.9.2))(prettier@3.6.2))(typescript@5.9.2)': + '@storybook/react@9.1.5(react-dom@18.3.1(react@18.3.1))(react@18.3.1)(storybook@9.1.5(@testing-library/dom@10.4.1)(msw@2.11.6(@types/node@24.10.0)(typescript@5.9.3))(prettier@3.6.2))(typescript@5.9.3)': dependencies: '@storybook/global': 5.0.0 - '@storybook/react-dom-shim': 9.1.5(react-dom@18.3.1(react@18.3.1))(react@18.3.1)(storybook@9.1.5(@testing-library/dom@10.4.1)(msw@2.11.1(@types/node@24.3.1)(typescript@5.9.2))(prettier@3.6.2)) + '@storybook/react-dom-shim': 9.1.5(react-dom@18.3.1(react@18.3.1))(react@18.3.1)(storybook@9.1.5(@testing-library/dom@10.4.1)(msw@2.11.6(@types/node@24.10.0)(typescript@5.9.3))(prettier@3.6.2)) react: 18.3.1 react-dom: 18.3.1(react@18.3.1) - storybook: 9.1.5(@testing-library/dom@10.4.1)(msw@2.11.1(@types/node@24.3.1)(typescript@5.9.2))(prettier@3.6.2) + storybook: 9.1.5(@testing-library/dom@10.4.1)(msw@2.11.6(@types/node@24.10.0)(typescript@5.9.3))(prettier@3.6.2) optionalDependencies: - typescript: 5.9.2 + typescript: 5.9.3 - '@supabase/auth-js@2.71.1': + '@supabase/auth-js@2.78.0': dependencies: '@supabase/node-fetch': 2.6.15 + tslib: 2.8.1 - '@supabase/functions-js@2.4.5': + '@supabase/functions-js@2.78.0': dependencies: '@supabase/node-fetch': 2.6.15 + tslib: 2.8.1 '@supabase/node-fetch@2.6.15': dependencies: whatwg-url: 5.0.0 - '@supabase/postgrest-js@1.19.4': + '@supabase/postgrest-js@2.78.0': dependencies: '@supabase/node-fetch': 2.6.15 + tslib: 2.8.1 - '@supabase/realtime-js@2.15.1': + '@supabase/realtime-js@2.78.0': dependencies: '@supabase/node-fetch': 2.6.15 '@types/phoenix': 1.6.6 '@types/ws': 8.18.1 + tslib: 2.8.1 ws: 8.18.3 
transitivePeerDependencies: - bufferutil - utf-8-validate - '@supabase/ssr@0.6.1(@supabase/supabase-js@2.55.0)': + '@supabase/ssr@0.7.0(@supabase/supabase-js@2.78.0)': dependencies: - '@supabase/supabase-js': 2.55.0 + '@supabase/supabase-js': 2.78.0 cookie: 1.0.2 - '@supabase/storage-js@2.11.0': + '@supabase/storage-js@2.78.0': dependencies: '@supabase/node-fetch': 2.6.15 + tslib: 2.8.1 - '@supabase/supabase-js@2.55.0': + '@supabase/supabase-js@2.78.0': dependencies: - '@supabase/auth-js': 2.71.1 - '@supabase/functions-js': 2.4.5 + '@supabase/auth-js': 2.78.0 + '@supabase/functions-js': 2.78.0 '@supabase/node-fetch': 2.6.15 - '@supabase/postgrest-js': 1.19.4 - '@supabase/realtime-js': 2.15.1 - '@supabase/storage-js': 2.11.0 + '@supabase/postgrest-js': 2.78.0 + '@supabase/realtime-js': 2.78.0 + '@supabase/storage-js': 2.78.0 transitivePeerDependencies: - bufferutil - utf-8-validate @@ -10452,27 +10870,27 @@ snapshots: dependencies: tslib: 2.8.1 - '@tanstack/eslint-plugin-query@5.86.0(eslint@8.57.1)(typescript@5.9.2)': + '@tanstack/eslint-plugin-query@5.91.2(eslint@8.57.1)(typescript@5.9.3)': dependencies: - '@typescript-eslint/utils': 8.43.0(eslint@8.57.1)(typescript@5.9.2) + '@typescript-eslint/utils': 8.46.2(eslint@8.57.1)(typescript@5.9.3) eslint: 8.57.1 transitivePeerDependencies: - supports-color - typescript - '@tanstack/query-core@5.85.3': {} + '@tanstack/query-core@5.90.6': {} - '@tanstack/query-devtools@5.87.3': {} + '@tanstack/query-devtools@5.90.1': {} - '@tanstack/react-query-devtools@5.87.3(@tanstack/react-query@5.85.3(react@18.3.1))(react@18.3.1)': + '@tanstack/react-query-devtools@5.90.2(@tanstack/react-query@5.90.6(react@18.3.1))(react@18.3.1)': dependencies: - '@tanstack/query-devtools': 5.87.3 - '@tanstack/react-query': 5.85.3(react@18.3.1) + '@tanstack/query-devtools': 5.90.1 + '@tanstack/react-query': 5.90.6(react@18.3.1) react: 18.3.1 - '@tanstack/react-query@5.85.3(react@18.3.1)': + '@tanstack/react-query@5.90.6(react@18.3.1)': dependencies: - '@tanstack/query-core': 5.85.3 + '@tanstack/query-core': 5.90.6 react: 18.3.1 '@tanstack/react-table@8.21.3(react-dom@18.3.1(react@18.3.1))(react@18.3.1)': @@ -10507,7 +10925,7 @@ snapshots: dependencies: '@testing-library/dom': 10.4.1 - '@tybys/wasm-util@0.10.0': + '@tybys/wasm-util@0.10.1': dependencies: tslib: 2.8.1 optional: true @@ -10543,9 +10961,7 @@ snapshots: '@types/connect@3.4.38': dependencies: - '@types/node': 24.3.1 - - '@types/cookie@0.6.0': {} + '@types/node': 24.10.0 '@types/d3-array@3.2.1': {} @@ -10596,7 +11012,7 @@ snapshots: '@types/es-aggregate-error@1.0.6': dependencies: - '@types/node': 24.3.1 + '@types/node': 24.10.0 '@types/eslint-scope@3.7.7': dependencies: @@ -10642,30 +11058,28 @@ snapshots: '@types/mysql@2.15.27': dependencies: - '@types/node': 24.3.1 + '@types/node': 24.10.0 '@types/negotiator@0.6.4': {} - '@types/node@24.3.1': + '@types/node@24.10.0': dependencies: - undici-types: 7.10.0 + undici-types: 7.16.0 '@types/parse-json@4.0.2': {} '@types/pg-pool@2.0.6': dependencies: - '@types/pg': 8.15.5 + '@types/pg': 8.15.6 - '@types/pg@8.15.5': + '@types/pg@8.15.6': dependencies: - '@types/node': 24.3.1 + '@types/node': 24.10.0 pg-protocol: 1.10.3 pg-types: 2.2.0 '@types/phoenix@1.6.6': {} - '@types/prismjs@1.26.5': {} - '@types/prop-types@15.7.15': {} '@types/react-dom@18.3.5(@types/react@18.3.17)': @@ -10689,63 +11103,79 @@ snapshots: '@types/semver@7.7.1': {} - '@types/shimmer@1.2.0': {} - '@types/statuses@2.0.6': {} '@types/stylis@4.2.5': {} '@types/tedious@4.0.14': dependencies: - '@types/node': 
24.3.1 + '@types/node': 24.10.0 '@types/unist@2.0.11': {} '@types/unist@3.0.3': {} - '@types/urijs@1.19.25': {} + '@types/urijs@1.19.26': {} '@types/use-sync-external-store@0.0.6': {} '@types/ws@8.18.1': dependencies: - '@types/node': 24.3.1 + '@types/node': 24.10.0 - '@typescript-eslint/eslint-plugin@8.43.0(@typescript-eslint/parser@8.43.0(eslint@8.57.1)(typescript@5.9.2))(eslint@8.57.1)(typescript@5.9.2)': + '@typescript-eslint/eslint-plugin@8.48.1(@typescript-eslint/parser@8.48.1(eslint@8.57.1)(typescript@5.9.3))(eslint@8.57.1)(typescript@5.9.3)': dependencies: - '@eslint-community/regexpp': 4.12.1 - '@typescript-eslint/parser': 8.43.0(eslint@8.57.1)(typescript@5.9.2) - '@typescript-eslint/scope-manager': 8.43.0 - '@typescript-eslint/type-utils': 8.43.0(eslint@8.57.1)(typescript@5.9.2) - '@typescript-eslint/utils': 8.43.0(eslint@8.57.1)(typescript@5.9.2) - '@typescript-eslint/visitor-keys': 8.43.0 + '@eslint-community/regexpp': 4.12.2 + '@typescript-eslint/parser': 8.48.1(eslint@8.57.1)(typescript@5.9.3) + '@typescript-eslint/scope-manager': 8.48.1 + '@typescript-eslint/type-utils': 8.48.1(eslint@8.57.1)(typescript@5.9.3) + '@typescript-eslint/utils': 8.48.1(eslint@8.57.1)(typescript@5.9.3) + '@typescript-eslint/visitor-keys': 8.48.1 eslint: 8.57.1 graphemer: 1.4.0 ignore: 7.0.5 natural-compare: 1.4.0 - ts-api-utils: 2.1.0(typescript@5.9.2) - typescript: 5.9.2 + ts-api-utils: 2.1.0(typescript@5.9.3) + typescript: 5.9.3 transitivePeerDependencies: - supports-color - '@typescript-eslint/parser@8.43.0(eslint@8.57.1)(typescript@5.9.2)': + '@typescript-eslint/parser@8.48.1(eslint@8.57.1)(typescript@5.9.3)': dependencies: - '@typescript-eslint/scope-manager': 8.43.0 - '@typescript-eslint/types': 8.43.0 - '@typescript-eslint/typescript-estree': 8.43.0(typescript@5.9.2) - '@typescript-eslint/visitor-keys': 8.43.0 - debug: 4.4.1 + '@typescript-eslint/scope-manager': 8.48.1 + '@typescript-eslint/types': 8.48.1 + '@typescript-eslint/typescript-estree': 8.48.1(typescript@5.9.3) + '@typescript-eslint/visitor-keys': 8.48.1 + debug: 4.4.3 eslint: 8.57.1 - typescript: 5.9.2 + typescript: 5.9.3 transitivePeerDependencies: - supports-color - '@typescript-eslint/project-service@8.43.0(typescript@5.9.2)': + '@typescript-eslint/project-service@8.43.0(typescript@5.9.3)': dependencies: - '@typescript-eslint/tsconfig-utils': 8.43.0(typescript@5.9.2) - '@typescript-eslint/types': 8.43.0 - debug: 4.4.1 - typescript: 5.9.2 + '@typescript-eslint/tsconfig-utils': 8.43.0(typescript@5.9.3) + '@typescript-eslint/types': 8.48.1 + debug: 4.4.3 + typescript: 5.9.3 + transitivePeerDependencies: + - supports-color + + '@typescript-eslint/project-service@8.46.2(typescript@5.9.3)': + dependencies: + '@typescript-eslint/tsconfig-utils': 8.46.2(typescript@5.9.3) + '@typescript-eslint/types': 8.48.1 + debug: 4.4.3 + typescript: 5.9.3 + transitivePeerDependencies: + - supports-color + + '@typescript-eslint/project-service@8.48.1(typescript@5.9.3)': + dependencies: + '@typescript-eslint/tsconfig-utils': 8.48.1(typescript@5.9.3) + '@typescript-eslint/types': 8.48.1 + debug: 4.4.3 + typescript: 5.9.3 transitivePeerDependencies: - supports-color @@ -10754,48 +11184,123 @@ snapshots: '@typescript-eslint/types': 8.43.0 '@typescript-eslint/visitor-keys': 8.43.0 - '@typescript-eslint/tsconfig-utils@8.43.0(typescript@5.9.2)': + '@typescript-eslint/scope-manager@8.46.2': dependencies: - typescript: 5.9.2 + '@typescript-eslint/types': 8.46.2 + '@typescript-eslint/visitor-keys': 8.46.2 - 
'@typescript-eslint/type-utils@8.43.0(eslint@8.57.1)(typescript@5.9.2)': + '@typescript-eslint/scope-manager@8.48.1': dependencies: - '@typescript-eslint/types': 8.43.0 - '@typescript-eslint/typescript-estree': 8.43.0(typescript@5.9.2) - '@typescript-eslint/utils': 8.43.0(eslint@8.57.1)(typescript@5.9.2) - debug: 4.4.1 + '@typescript-eslint/types': 8.48.1 + '@typescript-eslint/visitor-keys': 8.48.1 + + '@typescript-eslint/tsconfig-utils@8.43.0(typescript@5.9.3)': + dependencies: + typescript: 5.9.3 + + '@typescript-eslint/tsconfig-utils@8.46.2(typescript@5.9.3)': + dependencies: + typescript: 5.9.3 + + '@typescript-eslint/tsconfig-utils@8.48.1(typescript@5.9.3)': + dependencies: + typescript: 5.9.3 + + '@typescript-eslint/type-utils@8.48.1(eslint@8.57.1)(typescript@5.9.3)': + dependencies: + '@typescript-eslint/types': 8.48.1 + '@typescript-eslint/typescript-estree': 8.48.1(typescript@5.9.3) + '@typescript-eslint/utils': 8.48.1(eslint@8.57.1)(typescript@5.9.3) + debug: 4.4.3 eslint: 8.57.1 - ts-api-utils: 2.1.0(typescript@5.9.2) - typescript: 5.9.2 + ts-api-utils: 2.1.0(typescript@5.9.3) + typescript: 5.9.3 transitivePeerDependencies: - supports-color '@typescript-eslint/types@8.43.0': {} - '@typescript-eslint/typescript-estree@8.43.0(typescript@5.9.2)': + '@typescript-eslint/types@8.46.2': {} + + '@typescript-eslint/types@8.48.1': {} + + '@typescript-eslint/typescript-estree@8.43.0(typescript@5.9.3)': dependencies: - '@typescript-eslint/project-service': 8.43.0(typescript@5.9.2) - '@typescript-eslint/tsconfig-utils': 8.43.0(typescript@5.9.2) + '@typescript-eslint/project-service': 8.43.0(typescript@5.9.3) + '@typescript-eslint/tsconfig-utils': 8.43.0(typescript@5.9.3) '@typescript-eslint/types': 8.43.0 '@typescript-eslint/visitor-keys': 8.43.0 - debug: 4.4.1 + debug: 4.4.3 fast-glob: 3.3.3 is-glob: 4.0.3 minimatch: 9.0.5 - semver: 7.7.2 - ts-api-utils: 2.1.0(typescript@5.9.2) - typescript: 5.9.2 + semver: 7.7.3 + ts-api-utils: 2.1.0(typescript@5.9.3) + typescript: 5.9.3 transitivePeerDependencies: - supports-color - '@typescript-eslint/utils@8.43.0(eslint@8.57.1)(typescript@5.9.2)': + '@typescript-eslint/typescript-estree@8.46.2(typescript@5.9.3)': + dependencies: + '@typescript-eslint/project-service': 8.46.2(typescript@5.9.3) + '@typescript-eslint/tsconfig-utils': 8.46.2(typescript@5.9.3) + '@typescript-eslint/types': 8.46.2 + '@typescript-eslint/visitor-keys': 8.46.2 + debug: 4.4.3 + fast-glob: 3.3.3 + is-glob: 4.0.3 + minimatch: 9.0.5 + semver: 7.7.3 + ts-api-utils: 2.1.0(typescript@5.9.3) + typescript: 5.9.3 + transitivePeerDependencies: + - supports-color + + '@typescript-eslint/typescript-estree@8.48.1(typescript@5.9.3)': + dependencies: + '@typescript-eslint/project-service': 8.48.1(typescript@5.9.3) + '@typescript-eslint/tsconfig-utils': 8.48.1(typescript@5.9.3) + '@typescript-eslint/types': 8.48.1 + '@typescript-eslint/visitor-keys': 8.48.1 + debug: 4.4.3 + minimatch: 9.0.5 + semver: 7.7.3 + tinyglobby: 0.2.15 + ts-api-utils: 2.1.0(typescript@5.9.3) + typescript: 5.9.3 + transitivePeerDependencies: + - supports-color + + '@typescript-eslint/utils@8.43.0(eslint@8.57.1)(typescript@5.9.3)': dependencies: '@eslint-community/eslint-utils': 4.9.0(eslint@8.57.1) '@typescript-eslint/scope-manager': 8.43.0 '@typescript-eslint/types': 8.43.0 - '@typescript-eslint/typescript-estree': 8.43.0(typescript@5.9.2) + '@typescript-eslint/typescript-estree': 8.43.0(typescript@5.9.3) eslint: 8.57.1 - typescript: 5.9.2 + typescript: 5.9.3 + transitivePeerDependencies: + - supports-color + + 
'@typescript-eslint/utils@8.46.2(eslint@8.57.1)(typescript@5.9.3)': + dependencies: + '@eslint-community/eslint-utils': 4.9.0(eslint@8.57.1) + '@typescript-eslint/scope-manager': 8.46.2 + '@typescript-eslint/types': 8.46.2 + '@typescript-eslint/typescript-estree': 8.46.2(typescript@5.9.3) + eslint: 8.57.1 + typescript: 5.9.3 + transitivePeerDependencies: + - supports-color + + '@typescript-eslint/utils@8.48.1(eslint@8.57.1)(typescript@5.9.3)': + dependencies: + '@eslint-community/eslint-utils': 4.9.0(eslint@8.57.1) + '@typescript-eslint/scope-manager': 8.48.1 + '@typescript-eslint/types': 8.48.1 + '@typescript-eslint/typescript-estree': 8.48.1(typescript@5.9.3) + eslint: 8.57.1 + typescript: 5.9.3 transitivePeerDependencies: - supports-color @@ -10804,6 +11309,16 @@ snapshots: '@typescript-eslint/types': 8.43.0 eslint-visitor-keys: 4.2.1 + '@typescript-eslint/visitor-keys@8.46.2': + dependencies: + '@typescript-eslint/types': 8.46.2 + eslint-visitor-keys: 4.2.1 + + '@typescript-eslint/visitor-keys@8.48.1': + dependencies: + '@typescript-eslint/types': 8.48.1 + eslint-visitor-keys: 4.2.1 + '@ungap/structured-clone@1.3.0': {} '@unrs/resolver-binding-android-arm-eabi@1.11.1': @@ -10865,14 +11380,14 @@ snapshots: '@unrs/resolver-binding-win32-x64-msvc@1.11.1': optional: true - '@vercel/analytics@1.5.0(next@15.4.7(@babel/core@7.28.4)(@opentelemetry/api@1.9.0)(@playwright/test@1.55.0)(react-dom@18.3.1(react@18.3.1))(react@18.3.1))(react@18.3.1)': + '@vercel/analytics@1.5.0(next@15.4.10(@babel/core@7.28.4)(@opentelemetry/api@1.9.0)(@playwright/test@1.56.1)(react-dom@18.3.1(react@18.3.1))(react@18.3.1))(react@18.3.1)': optionalDependencies: - next: 15.4.7(@babel/core@7.28.4)(@opentelemetry/api@1.9.0)(@playwright/test@1.55.0)(react-dom@18.3.1(react@18.3.1))(react@18.3.1) + next: 15.4.10(@babel/core@7.28.4)(@opentelemetry/api@1.9.0)(@playwright/test@1.56.1)(react-dom@18.3.1(react@18.3.1))(react@18.3.1) react: 18.3.1 - '@vercel/speed-insights@1.2.0(next@15.4.7(@babel/core@7.28.4)(@opentelemetry/api@1.9.0)(@playwright/test@1.55.0)(react-dom@18.3.1(react@18.3.1))(react@18.3.1))(react@18.3.1)': + '@vercel/speed-insights@1.2.0(next@15.4.10(@babel/core@7.28.4)(@opentelemetry/api@1.9.0)(@playwright/test@1.56.1)(react-dom@18.3.1(react@18.3.1))(react@18.3.1))(react@18.3.1)': optionalDependencies: - next: 15.4.7(@babel/core@7.28.4)(@opentelemetry/api@1.9.0)(@playwright/test@1.55.0)(react-dom@18.3.1(react@18.3.1))(react@18.3.1) + next: 15.4.10(@babel/core@7.28.4)(@opentelemetry/api@1.9.0)(@playwright/test@1.56.1)(react-dom@18.3.1(react@18.3.1))(react@18.3.1) react: 18.3.1 '@vitest/expect@3.2.4': @@ -10883,13 +11398,13 @@ snapshots: chai: 5.3.3 tinyrainbow: 2.0.0 - '@vitest/mocker@3.2.4(msw@2.11.1(@types/node@24.3.1)(typescript@5.9.2))': + '@vitest/mocker@3.2.4(msw@2.11.6(@types/node@24.10.0)(typescript@5.9.3))': dependencies: '@vitest/spy': 3.2.4 estree-walker: 3.0.3 magic-string: 0.30.19 optionalDependencies: - msw: 2.11.1(@types/node@24.3.1)(typescript@5.9.2) + msw: 2.11.6(@types/node@24.10.0)(typescript@5.9.3) '@vitest/pretty-format@3.2.4': dependencies: @@ -10985,9 +11500,9 @@ snapshots: '@xtuc/long@4.2.2': {} - '@xyflow/react@12.8.3(@types/react@18.3.17)(immer@10.1.3)(react-dom@18.3.1(react@18.3.1))(react@18.3.1)': + '@xyflow/react@12.9.2(@types/react@18.3.17)(immer@10.1.3)(react-dom@18.3.1(react@18.3.1))(react@18.3.1)': dependencies: - '@xyflow/system': 0.0.67 + '@xyflow/system': 0.0.72 classcat: 5.0.5 react: 18.3.1 react-dom: 18.3.1(react@18.3.1) @@ -10996,7 +11511,7 @@ snapshots: - '@types/react' - 
immer - '@xyflow/system@0.0.67': + '@xyflow/system@0.0.72': dependencies: '@types/d3-drag': 3.0.7 '@types/d3-interpolate': 3.0.4 @@ -11074,10 +11589,6 @@ snapshots: ansi-colors@4.1.3: {} - ansi-escapes@4.3.2: - dependencies: - type-fest: 0.21.3 - ansi-html-community@0.0.8: {} ansi-html@0.0.9: {} @@ -11216,19 +11727,21 @@ snapshots: axe-core@4.10.3: {} - axe-html-reporter@2.2.11(axe-core@4.10.3): + axe-core@4.11.0: {} + + axe-html-reporter@2.2.11(axe-core@4.11.0): dependencies: - axe-core: 4.10.3 + axe-core: 4.11.0 mustache: 4.2.0 - axe-playwright@2.1.0(playwright@1.55.0): + axe-playwright@2.2.2(playwright@1.56.1): dependencies: '@types/junit-report-builder': 3.0.2 - axe-core: 4.10.3 - axe-html-reporter: 2.2.11(axe-core@4.10.3) + axe-core: 4.11.0 + axe-html-reporter: 2.2.11(axe-core@4.11.0) junit-report-builder: 5.1.1 picocolors: 1.1.1 - playwright: 1.55.0 + playwright: 1.56.1 axobject-query@4.1.0: {} @@ -11303,7 +11816,7 @@ snapshots: browserify-aes@1.2.0: dependencies: buffer-xor: 1.0.3 - cipher-base: 1.0.6 + cipher-base: 1.0.7 create-hash: 1.2.0 evp_bytestokey: 1.0.3 inherits: 2.0.4 @@ -11317,7 +11830,7 @@ snapshots: browserify-des@1.0.2: dependencies: - cipher-base: 1.0.6 + cipher-base: 1.0.7 des.js: 1.1.0 inherits: 2.0.4 safe-buffer: 5.2.1 @@ -11363,8 +11876,6 @@ snapshots: builtin-status-codes@3.0.0: {} - cac@6.7.14: {} - call-bind-apply-helpers@1.0.2: dependencies: es-errors: 1.3.0 @@ -11395,8 +11906,6 @@ snapshots: camelize@1.0.1: {} - caniuse-lite@1.0.30001735: {} - caniuse-lite@1.0.30001741: {} case-sensitive-paths-webpack-plugin@2.4.0: {} @@ -11411,11 +11920,6 @@ snapshots: loupe: 3.2.1 pathval: 2.0.1 - chalk@3.0.0: - dependencies: - ansi-styles: 4.3.0 - supports-color: 7.2.0 - chalk@4.1.2: dependencies: ansi-styles: 4.3.0 @@ -11449,14 +11953,15 @@ snapshots: chromatic@12.2.0: {} - chromatic@13.1.4: {} + chromatic@13.3.3: {} chrome-trace-event@1.0.4: {} - cipher-base@1.0.6: + cipher-base@1.0.7: dependencies: inherits: 2.0.4 safe-buffer: 5.2.1 + to-buffer: 1.2.2 cjs-module-lexer@1.4.3: {} @@ -11516,6 +12021,8 @@ snapshots: comma-separated-tokens@2.0.3: {} + commander@14.0.2: {} + commander@2.20.3: {} commander@4.1.1: {} @@ -11560,8 +12067,6 @@ snapshots: convert-source-map@2.0.0: {} - cookie@0.7.2: {} - cookie@1.0.2: {} core-js-compat@3.45.1: @@ -11580,46 +12085,40 @@ snapshots: path-type: 4.0.0 yaml: 1.10.2 - cosmiconfig@9.0.0(typescript@5.9.2): + cosmiconfig@9.0.0(typescript@5.9.3): dependencies: env-paths: 2.2.1 import-fresh: 3.3.1 js-yaml: 4.1.0 parse-json: 5.2.0 optionalDependencies: - typescript: 5.9.2 + typescript: 5.9.3 create-ecdh@4.0.4: dependencies: bn.js: 4.12.2 elliptic: 6.6.1 - create-hash@1.1.3: - dependencies: - cipher-base: 1.0.6 - inherits: 2.0.4 - ripemd160: 2.0.1 - sha.js: 2.4.12 - create-hash@1.2.0: dependencies: - cipher-base: 1.0.6 + cipher-base: 1.0.7 inherits: 2.0.4 md5.js: 1.3.5 - ripemd160: 2.0.2 + ripemd160: 2.0.3 sha.js: 2.4.12 create-hmac@1.1.7: dependencies: - cipher-base: 1.0.6 - create-hash: 1.1.3 + cipher-base: 1.0.7 + create-hash: 1.2.0 inherits: 2.0.4 - ripemd160: 2.0.1 + ripemd160: 2.0.3 safe-buffer: 5.2.1 sha.js: 2.4.12 - cross-env@7.0.3: + cross-env@10.1.0: dependencies: + '@epic-web/invariant': 1.0.0 cross-spawn: 7.0.6 cross-spawn@7.0.6: @@ -11638,7 +12137,7 @@ snapshots: diffie-hellman: 5.0.3 hash-base: 3.0.5 inherits: 2.0.4 - pbkdf2: 3.1.3 + pbkdf2: 3.1.5 public-encrypt: 4.0.3 randombytes: 2.1.0 randomfill: 1.0.4 @@ -11772,10 +12271,6 @@ snapshots: dependencies: ms: 2.1.3 - debug@4.4.1: - dependencies: - ms: 2.1.3 - debug@4.4.3: 
dependencies: ms: 2.1.3 @@ -11885,7 +12380,7 @@ snapshots: dotenv@16.6.1: {} - dotenv@17.2.1: {} + dotenv@17.2.3: {} dunder-proto@1.0.1: dependencies: @@ -12077,11 +12572,40 @@ snapshots: esbuild-register@3.6.0(esbuild@0.25.9): dependencies: - debug: 4.4.1 + debug: 4.4.3 esbuild: 0.25.9 transitivePeerDependencies: - supports-color + esbuild@0.25.11: + optionalDependencies: + '@esbuild/aix-ppc64': 0.25.11 + '@esbuild/android-arm': 0.25.11 + '@esbuild/android-arm64': 0.25.11 + '@esbuild/android-x64': 0.25.11 + '@esbuild/darwin-arm64': 0.25.11 + '@esbuild/darwin-x64': 0.25.11 + '@esbuild/freebsd-arm64': 0.25.11 + '@esbuild/freebsd-x64': 0.25.11 + '@esbuild/linux-arm': 0.25.11 + '@esbuild/linux-arm64': 0.25.11 + '@esbuild/linux-ia32': 0.25.11 + '@esbuild/linux-loong64': 0.25.11 + '@esbuild/linux-mips64el': 0.25.11 + '@esbuild/linux-ppc64': 0.25.11 + '@esbuild/linux-riscv64': 0.25.11 + '@esbuild/linux-s390x': 0.25.11 + '@esbuild/linux-x64': 0.25.11 + '@esbuild/netbsd-arm64': 0.25.11 + '@esbuild/netbsd-x64': 0.25.11 + '@esbuild/openbsd-arm64': 0.25.11 + '@esbuild/openbsd-x64': 0.25.11 + '@esbuild/openharmony-arm64': 0.25.11 + '@esbuild/sunos-x64': 0.25.11 + '@esbuild/win32-arm64': 0.25.11 + '@esbuild/win32-ia32': 0.25.11 + '@esbuild/win32-x64': 0.25.11 + esbuild@0.25.9: optionalDependencies: '@esbuild/aix-ppc64': 0.25.9 @@ -12117,21 +12641,21 @@ snapshots: escape-string-regexp@5.0.0: {} - eslint-config-next@15.5.2(eslint@8.57.1)(typescript@5.9.2): + eslint-config-next@15.5.7(eslint@8.57.1)(typescript@5.9.3): dependencies: - '@next/eslint-plugin-next': 15.5.2 - '@rushstack/eslint-patch': 1.12.0 - '@typescript-eslint/eslint-plugin': 8.43.0(@typescript-eslint/parser@8.43.0(eslint@8.57.1)(typescript@5.9.2))(eslint@8.57.1)(typescript@5.9.2) - '@typescript-eslint/parser': 8.43.0(eslint@8.57.1)(typescript@5.9.2) + '@next/eslint-plugin-next': 15.5.7 + '@rushstack/eslint-patch': 1.15.0 + '@typescript-eslint/eslint-plugin': 8.48.1(@typescript-eslint/parser@8.48.1(eslint@8.57.1)(typescript@5.9.3))(eslint@8.57.1)(typescript@5.9.3) + '@typescript-eslint/parser': 8.48.1(eslint@8.57.1)(typescript@5.9.3) eslint: 8.57.1 eslint-import-resolver-node: 0.3.9 eslint-import-resolver-typescript: 3.10.1(eslint-plugin-import@2.32.0)(eslint@8.57.1) - eslint-plugin-import: 2.32.0(@typescript-eslint/parser@8.43.0(eslint@8.57.1)(typescript@5.9.2))(eslint-import-resolver-typescript@3.10.1)(eslint@8.57.1) + eslint-plugin-import: 2.32.0(@typescript-eslint/parser@8.48.1(eslint@8.57.1)(typescript@5.9.3))(eslint-import-resolver-typescript@3.10.1)(eslint@8.57.1) eslint-plugin-jsx-a11y: 6.10.2(eslint@8.57.1) eslint-plugin-react: 7.37.5(eslint@8.57.1) eslint-plugin-react-hooks: 5.2.0(eslint@8.57.1) optionalDependencies: - typescript: 5.9.2 + typescript: 5.9.3 transitivePeerDependencies: - eslint-import-resolver-webpack - eslint-plugin-import-x @@ -12141,37 +12665,37 @@ snapshots: dependencies: debug: 3.2.7 is-core-module: 2.16.1 - resolve: 1.22.10 + resolve: 1.22.11 transitivePeerDependencies: - supports-color eslint-import-resolver-typescript@3.10.1(eslint-plugin-import@2.32.0)(eslint@8.57.1): dependencies: '@nolyfill/is-core-module': 1.0.39 - debug: 4.4.1 + debug: 4.4.3 eslint: 8.57.1 - get-tsconfig: 4.10.1 + get-tsconfig: 4.13.0 is-bun-module: 2.0.0 stable-hash: 0.0.5 tinyglobby: 0.2.15 unrs-resolver: 1.11.1 optionalDependencies: - eslint-plugin-import: 2.32.0(@typescript-eslint/parser@8.43.0(eslint@8.57.1)(typescript@5.9.2))(eslint-import-resolver-typescript@3.10.1)(eslint@8.57.1) + eslint-plugin-import: 
2.32.0(@typescript-eslint/parser@8.48.1(eslint@8.57.1)(typescript@5.9.3))(eslint-import-resolver-typescript@3.10.1)(eslint@8.57.1) transitivePeerDependencies: - supports-color - eslint-module-utils@2.12.1(@typescript-eslint/parser@8.43.0(eslint@8.57.1)(typescript@5.9.2))(eslint-import-resolver-node@0.3.9)(eslint-import-resolver-typescript@3.10.1)(eslint@8.57.1): + eslint-module-utils@2.12.1(@typescript-eslint/parser@8.48.1(eslint@8.57.1)(typescript@5.9.3))(eslint-import-resolver-node@0.3.9)(eslint-import-resolver-typescript@3.10.1)(eslint@8.57.1): dependencies: debug: 3.2.7 optionalDependencies: - '@typescript-eslint/parser': 8.43.0(eslint@8.57.1)(typescript@5.9.2) + '@typescript-eslint/parser': 8.48.1(eslint@8.57.1)(typescript@5.9.3) eslint: 8.57.1 eslint-import-resolver-node: 0.3.9 eslint-import-resolver-typescript: 3.10.1(eslint-plugin-import@2.32.0)(eslint@8.57.1) transitivePeerDependencies: - supports-color - eslint-plugin-import@2.32.0(@typescript-eslint/parser@8.43.0(eslint@8.57.1)(typescript@5.9.2))(eslint-import-resolver-typescript@3.10.1)(eslint@8.57.1): + eslint-plugin-import@2.32.0(@typescript-eslint/parser@8.48.1(eslint@8.57.1)(typescript@5.9.3))(eslint-import-resolver-typescript@3.10.1)(eslint@8.57.1): dependencies: '@rtsao/scc': 1.1.0 array-includes: 3.1.9 @@ -12182,7 +12706,7 @@ snapshots: doctrine: 2.1.0 eslint: 8.57.1 eslint-import-resolver-node: 0.3.9 - eslint-module-utils: 2.12.1(@typescript-eslint/parser@8.43.0(eslint@8.57.1)(typescript@5.9.2))(eslint-import-resolver-node@0.3.9)(eslint-import-resolver-typescript@3.10.1)(eslint@8.57.1) + eslint-module-utils: 2.12.1(@typescript-eslint/parser@8.48.1(eslint@8.57.1)(typescript@5.9.3))(eslint-import-resolver-node@0.3.9)(eslint-import-resolver-typescript@3.10.1)(eslint@8.57.1) hasown: 2.0.2 is-core-module: 2.16.1 is-glob: 4.0.3 @@ -12194,7 +12718,7 @@ snapshots: string.prototype.trimend: 1.0.9 tsconfig-paths: 3.15.0 optionalDependencies: - '@typescript-eslint/parser': 8.43.0(eslint@8.57.1)(typescript@5.9.2) + '@typescript-eslint/parser': 8.48.1(eslint@8.57.1)(typescript@5.9.3) transitivePeerDependencies: - eslint-import-resolver-typescript - eslint-import-resolver-webpack @@ -12206,7 +12730,7 @@ snapshots: array-includes: 3.1.9 array.prototype.flatmap: 1.3.3 ast-types-flow: 0.0.8 - axe-core: 4.10.3 + axe-core: 4.11.0 axobject-query: 4.1.0 damerau-levenshtein: 1.0.8 emoji-regex: 9.2.2 @@ -12245,11 +12769,11 @@ snapshots: string.prototype.matchall: 4.0.12 string.prototype.repeat: 1.0.0 - eslint-plugin-storybook@9.1.5(eslint@8.57.1)(storybook@9.1.5(@testing-library/dom@10.4.1)(msw@2.11.1(@types/node@24.3.1)(typescript@5.9.2))(prettier@3.6.2))(typescript@5.9.2): + eslint-plugin-storybook@9.1.5(eslint@8.57.1)(storybook@9.1.5(@testing-library/dom@10.4.1)(msw@2.11.6(@types/node@24.10.0)(typescript@5.9.3))(prettier@3.6.2))(typescript@5.9.3): dependencies: - '@typescript-eslint/utils': 8.43.0(eslint@8.57.1)(typescript@5.9.2) + '@typescript-eslint/utils': 8.43.0(eslint@8.57.1)(typescript@5.9.3) eslint: 8.57.1 - storybook: 9.1.5(@testing-library/dom@10.4.1)(msw@2.11.1(@types/node@24.3.1)(typescript@5.9.2))(prettier@3.6.2) + storybook: 9.1.5(@testing-library/dom@10.4.1)(msw@2.11.6(@types/node@24.10.0)(typescript@5.9.3))(prettier@3.6.2) transitivePeerDependencies: - supports-color - typescript @@ -12270,7 +12794,7 @@ snapshots: eslint@8.57.1: dependencies: - '@eslint-community/eslint-utils': 4.7.0(eslint@8.57.1) + '@eslint-community/eslint-utils': 4.9.0(eslint@8.57.1) '@eslint-community/regexpp': 4.12.1 '@eslint/eslintrc': 2.1.4 
'@eslint/js': 8.57.1 @@ -12281,7 +12805,7 @@ snapshots: ajv: 6.12.6 chalk: 4.1.2 cross-spawn: 7.0.6 - debug: 4.4.1 + debug: 4.4.3 doctrine: 3.0.0 escape-string-regexp: 4.0.0 eslint-scope: 7.2.2 @@ -12458,6 +12982,12 @@ snapshots: keyv: 4.5.4 rimraf: 3.0.2 + flatbush@4.5.0: + dependencies: + flatqueue: 3.0.0 + + flatqueue@3.0.0: {} + flatted@3.3.3: {} for-each@0.3.5: @@ -12469,7 +12999,7 @@ snapshots: cross-spawn: 7.0.6 signal-exit: 4.1.0 - fork-ts-checker-webpack-plugin@8.0.0(typescript@5.9.2)(webpack@5.101.3(esbuild@0.25.9)): + fork-ts-checker-webpack-plugin@8.0.0(typescript@5.9.3)(webpack@5.101.3(esbuild@0.25.9)): dependencies: '@babel/code-frame': 7.27.1 chalk: 4.1.2 @@ -12483,14 +13013,14 @@ snapshots: schema-utils: 3.3.0 semver: 7.7.2 tapable: 2.2.3 - typescript: 5.9.2 + typescript: 5.9.3 webpack: 5.101.3(esbuild@0.25.9) forwarded-parse@2.1.2: {} - framer-motion@12.23.12(@emotion/is-prop-valid@1.2.2)(react-dom@18.3.1(react@18.3.1))(react@18.3.1): + framer-motion@12.23.24(@emotion/is-prop-valid@1.2.2)(react-dom@18.3.1(react@18.3.1))(react@18.3.1): dependencies: - motion-dom: 12.23.12 + motion-dom: 12.23.23 motion-utils: 12.23.6 tslib: 2.8.1 optionalDependencies: @@ -12504,7 +13034,7 @@ snapshots: jsonfile: 6.2.0 universalify: 2.0.1 - fs-extra@11.3.1: + fs-extra@11.3.2: dependencies: graceful-fs: 4.2.11 jsonfile: 6.2.0 @@ -12533,9 +13063,11 @@ snapshots: functions-have-names@1.2.3: {} - geist@1.4.2(next@15.4.7(@babel/core@7.28.4)(@opentelemetry/api@1.9.0)(@playwright/test@1.55.0)(react-dom@18.3.1(react@18.3.1))(react@18.3.1)): + geist@1.5.1(next@15.4.10(@babel/core@7.28.4)(@opentelemetry/api@1.9.0)(@playwright/test@1.56.1)(react-dom@18.3.1(react@18.3.1))(react@18.3.1)): dependencies: - next: 15.4.7(@babel/core@7.28.4)(@opentelemetry/api@1.9.0)(@playwright/test@1.55.0)(react-dom@18.3.1(react@18.3.1))(react@18.3.1) + next: 15.4.10(@babel/core@7.28.4)(@opentelemetry/api@1.9.0)(@playwright/test@1.56.1)(react-dom@18.3.1(react@18.3.1))(react@18.3.1) + + generator-function@2.0.1: {} gensync@1.0.0-beta.2: {} @@ -12569,7 +13101,7 @@ snapshots: es-errors: 1.3.0 get-intrinsic: 1.3.0 - get-tsconfig@4.10.1: + get-tsconfig@4.13.0: dependencies: resolve-pkg-maps: 1.0.0 @@ -12594,6 +13126,15 @@ snapshots: package-json-from-dist: 1.0.1 path-scurry: 1.11.1 + glob@10.5.0: + dependencies: + foreground-child: 3.3.1 + jackspeak: 3.4.3 + minimatch: 9.0.5 + minipass: 7.1.2 + package-json-from-dist: 1.0.1 + path-scurry: 1.11.1 + glob@7.2.3: dependencies: fs.realpath: 1.0.0 @@ -12654,15 +13195,18 @@ snapshots: dependencies: has-symbols: 1.1.0 - hash-base@2.0.2: - dependencies: - inherits: 2.0.4 - hash-base@3.0.5: dependencies: inherits: 2.0.4 safe-buffer: 5.2.1 + hash-base@3.1.2: + dependencies: + inherits: 2.0.4 + readable-stream: 2.3.8 + safe-buffer: 5.2.1 + to-buffer: 1.2.2 + hash.js@1.1.7: dependencies: inherits: 2.0.4 @@ -12841,7 +13385,7 @@ snapshots: parent-module: 1.0.1 resolve-from: 4.0.0 - import-in-the-middle@1.14.2: + import-in-the-middle@2.0.0: dependencies: acorn: 8.15.0 acorn-import-attributes: 1.9.5(acorn@8.15.0) @@ -12852,6 +13396,8 @@ snapshots: indent-string@4.0.0: {} + inflected@2.1.0: {} + inflight@1.0.6: dependencies: once: 1.4.0 @@ -12915,7 +13461,7 @@ snapshots: is-bun-module@2.0.0: dependencies: - semver: 7.7.2 + semver: 7.7.3 is-callable@1.2.7: {} @@ -12953,6 +13499,14 @@ snapshots: has-tostringtag: 1.0.2 safe-regex-test: 1.1.0 + is-generator-function@1.1.2: + dependencies: + call-bound: 1.0.4 + generator-function: 2.0.1 + get-proto: 1.0.1 + has-tostringtag: 1.0.2 + safe-regex-test: 
1.1.0 + is-glob@4.0.3: dependencies: is-extglob: 2.1.1 @@ -13055,7 +13609,7 @@ snapshots: jest-worker@27.5.1: dependencies: - '@types/node': 24.3.1 + '@types/node': 24.10.0 merge-stream: 2.0.0 supports-color: 8.1.1 @@ -13132,7 +13686,7 @@ snapshots: make-dir: 3.1.0 xmlbuilder: 15.1.1 - katex@0.16.22: + katex@0.16.25: dependencies: commander: 8.3.0 @@ -13146,21 +13700,21 @@ snapshots: dependencies: language-subtag-registry: 0.3.23 - launchdarkly-js-client-sdk@3.8.1: + launchdarkly-js-client-sdk@3.9.0: dependencies: escape-string-regexp: 4.0.0 - launchdarkly-js-sdk-common: 5.7.1 + launchdarkly-js-sdk-common: 5.8.0 - launchdarkly-js-sdk-common@5.7.1: + launchdarkly-js-sdk-common@5.8.0: dependencies: base64-js: 1.5.1 fast-deep-equal: 2.0.1 uuid: 8.3.2 - launchdarkly-react-client-sdk@3.8.1(react-dom@18.3.1(react@18.3.1))(react@18.3.1): + launchdarkly-react-client-sdk@3.9.0(react-dom@18.3.1(react@18.3.1))(react@18.3.1): dependencies: hoist-non-react-statics: 3.3.2 - launchdarkly-js-client-sdk: 3.8.1 + launchdarkly-js-client-sdk: 3.9.0 lodash.camelcase: 4.3.0 react: 18.3.1 react-dom: 18.3.1(react@18.3.1) @@ -13252,7 +13806,7 @@ snapshots: dependencies: yallist: 3.1.1 - lucide-react@0.539.0(react@18.3.1): + lucide-react@0.552.0(react@18.3.1): dependencies: react: 18.3.1 @@ -13291,7 +13845,7 @@ snapshots: md5.js@1.3.5: dependencies: - hash-base: 3.0.5 + hash-base: 3.1.2 inherits: 2.0.4 safe-buffer: 5.2.1 @@ -13553,7 +14107,7 @@ snapshots: dependencies: '@types/katex': 0.16.7 devlop: 1.1.0 - katex: 0.16.22 + katex: 0.16.25 micromark-factory-space: 2.0.1 micromark-util-character: 2.1.1 micromark-util-symbol: 2.0.1 @@ -13654,7 +14208,7 @@ snapshots: micromark@4.0.2: dependencies: '@types/debug': 4.1.12 - debug: 4.4.1 + debug: 4.4.3 decode-named-character-reference: 1.2.0 devlop: 1.1.0 micromark-core-commonmark: 2.0.3 @@ -13719,13 +14273,11 @@ snapshots: minipass@7.1.2: {} - mitt@3.0.1: {} - module-details-from-path@1.0.4: {} moment@2.30.1: {} - motion-dom@12.23.12: + motion-dom@12.23.23: dependencies: motion-utils: 12.23.6 @@ -13733,33 +14285,33 @@ snapshots: ms@2.1.3: {} - msw-storybook-addon@2.0.5(msw@2.11.1(@types/node@24.3.1)(typescript@5.9.2)): + msw-storybook-addon@2.0.6(msw@2.11.6(@types/node@24.10.0)(typescript@5.9.3)): dependencies: is-node-process: 1.2.0 - msw: 2.11.1(@types/node@24.3.1)(typescript@5.9.2) + msw: 2.11.6(@types/node@24.10.0)(typescript@5.9.3) - msw@2.11.1(@types/node@24.3.1)(typescript@5.9.2): + msw@2.11.6(@types/node@24.10.0)(typescript@5.9.3): dependencies: - '@bundled-es-modules/cookie': 2.0.1 - '@bundled-es-modules/statuses': 1.0.1 - '@inquirer/confirm': 5.1.16(@types/node@24.3.1) - '@mswjs/interceptors': 0.39.6 + '@inquirer/confirm': 5.1.19(@types/node@24.10.0) + '@mswjs/interceptors': 0.40.0 '@open-draft/deferred-promise': 2.2.0 - '@open-draft/until': 2.1.0 - '@types/cookie': 0.6.0 '@types/statuses': 2.0.6 + cookie: 1.0.2 graphql: 16.11.0 headers-polyfill: 4.0.3 is-node-process: 1.2.0 outvariant: 1.4.3 path-to-regexp: 6.3.0 picocolors: 1.1.1 + rettime: 0.7.0 + statuses: 2.0.2 strict-event-emitter: 0.5.1 tough-cookie: 6.0.0 type-fest: 4.41.0 + until-async: 3.0.2 yargs: 17.7.2 optionalDependencies: - typescript: 5.9.2 + typescript: 5.9.3 transitivePeerDependencies: - '@types/node' @@ -13775,7 +14327,7 @@ snapshots: nanoid@3.3.11: {} - napi-postinstall@0.3.3: {} + napi-postinstall@0.3.4: {} natural-compare@1.4.0: {} @@ -13786,26 +14338,26 @@ snapshots: react: 18.3.1 react-dom: 18.3.1(react@18.3.1) - 
next@15.4.7(@babel/core@7.28.4)(@opentelemetry/api@1.9.0)(@playwright/test@1.55.0)(react-dom@18.3.1(react@18.3.1))(react@18.3.1): + next@15.4.10(@babel/core@7.28.4)(@opentelemetry/api@1.9.0)(@playwright/test@1.56.1)(react-dom@18.3.1(react@18.3.1))(react@18.3.1): dependencies: - '@next/env': 15.4.7 + '@next/env': 15.4.10 '@swc/helpers': 0.5.15 - caniuse-lite: 1.0.30001735 + caniuse-lite: 1.0.30001741 postcss: 8.4.31 react: 18.3.1 react-dom: 18.3.1(react@18.3.1) styled-jsx: 5.1.6(@babel/core@7.28.4)(react@18.3.1) optionalDependencies: - '@next/swc-darwin-arm64': 15.4.7 - '@next/swc-darwin-x64': 15.4.7 - '@next/swc-linux-arm64-gnu': 15.4.7 - '@next/swc-linux-arm64-musl': 15.4.7 - '@next/swc-linux-x64-gnu': 15.4.7 - '@next/swc-linux-x64-musl': 15.4.7 - '@next/swc-win32-arm64-msvc': 15.4.7 - '@next/swc-win32-x64-msvc': 15.4.7 + '@next/swc-darwin-arm64': 15.4.8 + '@next/swc-darwin-x64': 15.4.8 + '@next/swc-linux-arm64-gnu': 15.4.8 + '@next/swc-linux-arm64-musl': 15.4.8 + '@next/swc-linux-x64-gnu': 15.4.8 + '@next/swc-linux-x64-musl': 15.4.8 + '@next/swc-win32-arm64-msvc': 15.4.8 + '@next/swc-win32-x64-msvc': 15.4.8 '@opentelemetry/api': 1.9.0 - '@playwright/test': 1.55.0 + '@playwright/test': 1.56.1 sharp: 0.34.3 transitivePeerDependencies: - '@babel/core' @@ -13881,12 +14433,12 @@ snapshots: dependencies: boolbase: 1.0.0 - nuqs@2.4.3(next@15.4.7(@babel/core@7.28.4)(@opentelemetry/api@1.9.0)(@playwright/test@1.55.0)(react-dom@18.3.1(react@18.3.1))(react@18.3.1))(react@18.3.1): + nuqs@2.7.2(next@15.4.10(@babel/core@7.28.4)(@opentelemetry/api@1.9.0)(@playwright/test@1.56.1)(react-dom@18.3.1(react@18.3.1))(react@18.3.1))(react@18.3.1): dependencies: - mitt: 3.0.1 + '@standard-schema/spec': 1.0.0 react: 18.3.1 optionalDependencies: - next: 15.4.7(@babel/core@7.28.4)(@opentelemetry/api@1.9.0)(@playwright/test@1.55.0)(react-dom@18.3.1(react@18.3.1))(react@18.3.1) + next: 15.4.10(@babel/core@7.28.4)(@opentelemetry/api@1.9.0)(@playwright/test@1.56.1)(react-dom@18.3.1(react@18.3.1))(react@18.3.1) oas-kit-common@1.0.8: dependencies: @@ -13986,11 +14538,7 @@ snapshots: openapi-types@12.1.3: {} - openapi3-ts@4.2.2: - dependencies: - yaml: 2.8.1 - - openapi3-ts@4.4.0: + openapi3-ts@4.5.0: dependencies: yaml: 2.8.1 @@ -14003,38 +14551,40 @@ snapshots: type-check: 0.4.0 word-wrap: 1.2.5 - orval@7.11.2(openapi-types@12.1.3): + orval@7.13.0(openapi-types@12.1.3)(typescript@5.9.3): dependencies: - '@apidevtools/swagger-parser': 10.1.1(openapi-types@12.1.3) - '@orval/angular': 7.11.2(openapi-types@12.1.3) - '@orval/axios': 7.11.2(openapi-types@12.1.3) - '@orval/core': 7.11.2(openapi-types@12.1.3) - '@orval/fetch': 7.11.2(openapi-types@12.1.3) - '@orval/hono': 7.11.2(openapi-types@12.1.3) - '@orval/mcp': 7.11.2(openapi-types@12.1.3) - '@orval/mock': 7.11.2(openapi-types@12.1.3) - '@orval/query': 7.11.2(openapi-types@12.1.3) - '@orval/swr': 7.11.2(openapi-types@12.1.3) - '@orval/zod': 7.11.2(openapi-types@12.1.3) - ajv: 8.17.1 - cac: 6.7.14 + '@apidevtools/swagger-parser': 12.1.0(openapi-types@12.1.3) + '@commander-js/extra-typings': 14.0.0(commander@14.0.2) + '@orval/angular': 7.13.0(openapi-types@12.1.3)(typescript@5.9.3) + '@orval/axios': 7.13.0(openapi-types@12.1.3)(typescript@5.9.3) + '@orval/core': 7.13.0(openapi-types@12.1.3)(typescript@5.9.3) + '@orval/fetch': 7.13.0(openapi-types@12.1.3)(typescript@5.9.3) + '@orval/hono': 7.13.0(openapi-types@12.1.3)(typescript@5.9.3) + '@orval/mcp': 7.13.0(openapi-types@12.1.3)(typescript@5.9.3) + '@orval/mock': 7.13.0(openapi-types@12.1.3)(typescript@5.9.3) + 
'@orval/query': 7.13.0(openapi-types@12.1.3)(typescript@5.9.3) + '@orval/swr': 7.13.0(openapi-types@12.1.3)(typescript@5.9.3) + '@orval/zod': 7.13.0(openapi-types@12.1.3)(typescript@5.9.3) chalk: 4.1.2 chokidar: 4.0.3 + commander: 14.0.2 enquirer: 2.4.1 execa: 5.1.1 find-up: 5.0.0 - fs-extra: 11.3.1 + fs-extra: 11.3.2 + js-yaml: 4.1.0 lodash.uniq: 4.5.0 - openapi3-ts: 4.2.2 + openapi3-ts: 4.5.0 string-argv: 0.3.2 - tsconfck: 2.1.2(typescript@5.9.2) - typedoc: 0.28.10(typescript@5.9.2) - typedoc-plugin-markdown: 4.8.1(typedoc@0.28.10(typescript@5.9.2)) - typescript: 5.9.2 + tsconfck: 2.1.2(typescript@5.9.3) + typedoc: 0.28.14(typescript@5.9.3) + typedoc-plugin-coverage: 4.0.2(typedoc@0.28.14(typescript@5.9.3)) + typedoc-plugin-markdown: 4.9.0(typedoc@0.28.14(typescript@5.9.3)) transitivePeerDependencies: - encoding - openapi-types - supports-color + - typescript os-browserify@0.3.0: {} @@ -14091,7 +14641,7 @@ snapshots: browserify-aes: 1.2.0 evp_bytestokey: 1.0.3 hash-base: 3.0.5 - pbkdf2: 3.1.3 + pbkdf2: 3.1.5 safe-buffer: 5.2.1 parse-entities@4.0.2: @@ -14145,14 +14695,14 @@ snapshots: pathval@2.0.1: {} - pbkdf2@3.1.3: + pbkdf2@3.1.5: dependencies: - create-hash: 1.1.3 + create-hash: 1.2.0 create-hmac: 1.1.7 - ripemd160: 2.0.1 + ripemd160: 2.0.3 safe-buffer: 5.2.1 sha.js: 2.4.12 - to-buffer: 1.2.1 + to-buffer: 1.2.2 pg-int8@1.0.1: {} @@ -14184,11 +14734,11 @@ snapshots: dependencies: find-up: 6.3.0 - playwright-core@1.55.0: {} + playwright-core@1.56.1: {} - playwright@1.55.0: + playwright@1.56.1: dependencies: - playwright-core: 1.55.0 + playwright-core: 1.56.1 optionalDependencies: fsevents: 2.3.2 @@ -14215,9 +14765,9 @@ snapshots: optionalDependencies: postcss: 8.5.6 - postcss-loader@8.2.0(postcss@8.5.6)(typescript@5.9.2)(webpack@5.101.3(esbuild@0.25.9)): + postcss-loader@8.2.0(postcss@8.5.6)(typescript@5.9.3)(webpack@5.101.3(esbuild@0.25.9)): dependencies: - cosmiconfig: 9.0.0(typescript@5.9.2) + cosmiconfig: 9.0.0(typescript@5.9.3) jiti: 2.5.1 postcss: 8.5.6 semver: 7.7.2 @@ -14294,7 +14844,7 @@ snapshots: prelude-ls@1.2.1: {} - prettier-plugin-tailwindcss@0.6.14(prettier@3.6.2): + prettier-plugin-tailwindcss@0.7.1(prettier@3.6.2): dependencies: prettier: 3.6.2 @@ -14311,12 +14861,6 @@ snapshots: ansi-styles: 5.2.0 react-is: 17.0.2 - prism-react-renderer@2.4.1(react@18.3.1): - dependencies: - '@types/prismjs': 1.26.5 - clsx: 2.1.1 - react: 18.3.1 - process-nextick-args@2.0.1: {} process@0.11.10: {} @@ -14367,16 +14911,20 @@ snapshots: range-parser@1.2.1: {} - react-day-picker@9.8.1(react@18.3.1): + react-currency-input-field@4.0.3(react@18.3.1): dependencies: - '@date-fns/tz': 1.2.0 + react: 18.3.1 + + react-day-picker@9.11.1(react@18.3.1): + dependencies: + '@date-fns/tz': 1.4.1 date-fns: 4.1.0 date-fns-jalali: 4.1.0-0 react: 18.3.1 - react-docgen-typescript@2.4.0(typescript@5.9.2): + react-docgen-typescript@2.4.0(typescript@5.9.3): dependencies: - typescript: 5.9.2 + typescript: 5.9.3 react-docgen@7.1.1: dependencies: @@ -14406,7 +14954,7 @@ snapshots: react-dom: 18.3.1(react@18.3.1) styled-components: 6.1.19(react-dom@18.3.1(react@18.3.1))(react@18.3.1) - react-hook-form@7.62.0(react@18.3.1): + react-hook-form@7.66.0(react@18.3.1): dependencies: react: 18.3.1 @@ -14478,12 +15026,12 @@ snapshots: optionalDependencies: '@types/react': 18.3.17 - react-shepherd@6.1.9(react-dom@18.3.1(react@18.3.1))(react@18.3.1)(typescript@5.9.2): + react-shepherd@6.1.9(react-dom@18.3.1(react@18.3.1))(react@18.3.1)(typescript@5.9.3): dependencies: react: 18.3.1 react-dom: 18.3.1(react@18.3.1) 
shepherd.js: 14.5.1 - typescript: 5.9.2 + typescript: 5.9.3 react-style-singleton@2.2.3(@types/react@18.3.17)(react@18.3.1): dependencies: @@ -14495,7 +15043,7 @@ snapshots: react-window@1.8.11(react-dom@18.3.1(react@18.3.1))(react@18.3.1): dependencies: - '@babel/runtime': 7.28.3 + '@babel/runtime': 7.28.4 memoize-one: 5.2.1 react: 18.3.1 react-dom: 18.3.1(react@18.3.1) @@ -14546,7 +15094,7 @@ snapshots: tiny-invariant: 1.3.3 tslib: 2.8.1 - recharts@3.1.2(@types/react@18.3.17)(react-dom@18.3.1(react@18.3.1))(react-is@18.3.1)(react@18.3.1)(redux@5.0.1): + recharts@3.3.0(@types/react@18.3.17)(react-dom@18.3.1(react@18.3.1))(react-is@18.3.1)(react@18.3.1)(redux@5.0.1): dependencies: '@reduxjs/toolkit': 2.9.0(react-redux@9.2.0(@types/react@18.3.17)(react@18.3.1)(redux@5.0.1))(react@18.3.1) clsx: 2.1.1 @@ -14645,7 +15193,7 @@ snapshots: '@types/katex': 0.16.7 hast-util-from-html-isomorphic: 2.0.0 hast-util-to-text: 4.0.2 - katex: 0.16.22 + katex: 0.16.25 unist-util-visit-parents: 6.0.1 vfile: 6.0.3 @@ -14716,12 +15264,19 @@ snapshots: require-in-the-middle@7.5.2: dependencies: - debug: 4.4.1 + debug: 4.4.3 module-details-from-path: 1.0.4 resolve: 1.22.10 transitivePeerDependencies: - supports-color + require-in-the-middle@8.0.1: + dependencies: + debug: 4.4.3 + module-details-from-path: 1.0.4 + transitivePeerDependencies: + - supports-color + reselect@5.1.1: {} resolve-from@4.0.0: {} @@ -14742,6 +15297,12 @@ snapshots: path-parse: 1.0.7 supports-preserve-symlinks-flag: 1.0.0 + resolve@1.22.11: + dependencies: + is-core-module: 2.16.1 + path-parse: 1.0.7 + supports-preserve-symlinks-flag: 1.0.0 + resolve@1.22.8: dependencies: is-core-module: 2.16.1 @@ -14754,20 +15315,17 @@ snapshots: path-parse: 1.0.7 supports-preserve-symlinks-flag: 1.0.0 + rettime@0.7.0: {} + reusify@1.1.0: {} rimraf@3.0.2: dependencies: glob: 7.2.3 - ripemd160@2.0.1: + ripemd160@2.0.3: dependencies: - hash-base: 2.0.2 - inherits: 2.0.4 - - ripemd160@2.0.2: - dependencies: - hash-base: 3.0.5 + hash-base: 3.1.2 inherits: 2.0.4 rollup@4.52.2: @@ -14858,6 +15416,8 @@ snapshots: semver@7.7.2: {} + semver@7.7.3: {} + serialize-javascript@6.0.2: dependencies: randombytes: 2.1.0 @@ -14890,7 +15450,7 @@ snapshots: dependencies: inherits: 2.0.4 safe-buffer: 5.2.1 - to-buffer: 1.2.1 + to-buffer: 1.2.2 shallowequal@1.1.0: {} @@ -14898,7 +15458,7 @@ snapshots: dependencies: color: 4.2.3 detect-libc: 2.0.4 - semver: 7.7.2 + semver: 7.7.3 optionalDependencies: '@img/sharp-darwin-arm64': 0.34.3 '@img/sharp-darwin-x64': 0.34.3 @@ -14938,8 +15498,6 @@ snapshots: '@scarf/scarf': 1.4.0 deepmerge-ts: 7.1.5 - shimmer@1.2.1: {} - should-equal@2.0.0: dependencies: should-type: 1.4.0 @@ -15042,13 +15600,13 @@ snapshots: es-errors: 1.3.0 internal-slot: 1.1.0 - storybook@9.1.5(@testing-library/dom@10.4.1)(msw@2.11.1(@types/node@24.3.1)(typescript@5.9.2))(prettier@3.6.2): + storybook@9.1.5(@testing-library/dom@10.4.1)(msw@2.11.6(@types/node@24.10.0)(typescript@5.9.3))(prettier@3.6.2): dependencies: '@storybook/global': 5.0.0 '@testing-library/jest-dom': 6.8.0 '@testing-library/user-event': 14.6.1(@testing-library/dom@10.4.1) '@vitest/expect': 3.2.4 - '@vitest/mocker': 3.2.4(msw@2.11.1(@types/node@24.3.1)(typescript@5.9.2)) + '@vitest/mocker': 3.2.4(msw@2.11.6(@types/node@24.10.0)(typescript@5.9.3)) '@vitest/spy': 3.2.4 better-opn: 3.0.2 esbuild: 0.25.9 @@ -15259,12 +15817,9 @@ snapshots: tailwind-merge@2.6.0: {} - tailwind-scrollbar@4.0.2(react@18.3.1)(tailwindcss@3.4.17): + tailwind-scrollbar@3.1.0(tailwindcss@3.4.17): dependencies: - 
prism-react-renderer: 2.4.1(react@18.3.1) tailwindcss: 3.4.17 - transitivePeerDependencies: - - react tailwindcss-animate@1.0.7(tailwindcss@3.4.17): dependencies: @@ -15344,13 +15899,13 @@ snapshots: tinyspy@4.0.3: {} - tldts-core@7.0.13: {} + tldts-core@7.0.17: {} - tldts@7.0.13: + tldts@7.0.17: dependencies: - tldts-core: 7.0.13 + tldts-core: 7.0.17 - to-buffer@1.2.1: + to-buffer@1.2.2: dependencies: isarray: 2.0.5 safe-buffer: 5.2.1 @@ -15362,7 +15917,7 @@ snapshots: tough-cookie@6.0.0: dependencies: - tldts: 7.0.13 + tldts: 7.0.17 tr46@0.0.3: {} @@ -15372,17 +15927,17 @@ snapshots: trough@2.2.0: {} - ts-api-utils@2.1.0(typescript@5.9.2): + ts-api-utils@2.1.0(typescript@5.9.3): dependencies: - typescript: 5.9.2 + typescript: 5.9.3 ts-dedent@2.2.0: {} ts-interface-checker@0.1.13: {} - tsconfck@2.1.2(typescript@5.9.2): + tsconfck@2.1.2(typescript@5.9.3): optionalDependencies: - typescript: 5.9.2 + typescript: 5.9.3 tsconfig-paths-webpack-plugin@4.2.0: dependencies: @@ -15418,8 +15973,6 @@ snapshots: type-fest@0.20.2: {} - type-fest@0.21.3: {} - type-fest@0.7.1: {} type-fest@2.19.0: {} @@ -15459,20 +16012,24 @@ snapshots: possible-typed-array-names: 1.1.0 reflect.getprototypeof: 1.0.10 - typedoc-plugin-markdown@4.8.1(typedoc@0.28.10(typescript@5.9.2)): + typedoc-plugin-coverage@4.0.2(typedoc@0.28.14(typescript@5.9.3)): dependencies: - typedoc: 0.28.10(typescript@5.9.2) + typedoc: 0.28.14(typescript@5.9.3) - typedoc@0.28.10(typescript@5.9.2): + typedoc-plugin-markdown@4.9.0(typedoc@0.28.14(typescript@5.9.3)): dependencies: - '@gerrit0/mini-shiki': 3.9.2 + typedoc: 0.28.14(typescript@5.9.3) + + typedoc@0.28.14(typescript@5.9.3): + dependencies: + '@gerrit0/mini-shiki': 3.14.0 lunr: 2.3.9 markdown-it: 14.1.0 minimatch: 9.0.5 - typescript: 5.9.2 + typescript: 5.9.3 yaml: 2.8.1 - typescript@5.9.2: {} + typescript@5.9.3: {} uc.micro@2.1.0: {} @@ -15483,7 +16040,7 @@ snapshots: has-symbols: 1.1.0 which-boxed-primitive: 1.1.1 - undici-types@7.10.0: {} + undici-types@7.16.0: {} unicode-canonical-property-names-ecmascript@2.0.1: {} @@ -15557,7 +16114,7 @@ snapshots: unrs-resolver@1.11.1: dependencies: - napi-postinstall: 0.3.3 + napi-postinstall: 0.3.4 optionalDependencies: '@unrs/resolver-binding-android-arm-eabi': 1.11.1 '@unrs/resolver-binding-android-arm64': 1.11.1 @@ -15579,6 +16136,8 @@ snapshots: '@unrs/resolver-binding-win32-ia32-msvc': 1.11.1 '@unrs/resolver-binding-win32-x64-msvc': 1.11.1 + until-async@3.0.2: {} + update-browserslist-db@1.1.3(browserslist@4.25.4): dependencies: browserslist: 4.25.4 @@ -15650,7 +16209,7 @@ snapshots: validate.io-number@1.0.3: {} - validator@13.15.15: {} + validator@13.15.20: {} vaul@1.1.2(@types/react-dom@18.3.5(@types/react@18.3.17))(@types/react@18.3.17)(react-dom@18.3.1(react@18.3.1))(react@18.3.1): dependencies: @@ -15783,7 +16342,7 @@ snapshots: is-async-function: 2.1.1 is-date-object: 1.1.0 is-finalizationregistry: 1.1.1 - is-generator-function: 1.1.0 + is-generator-function: 1.1.2 is-regex: 1.2.1 is-weakref: 1.1.1 isarray: 2.0.5 diff --git a/autogpt_platform/frontend/pnpm-workspace.yaml b/autogpt_platform/frontend/pnpm-workspace.yaml new file mode 100644 index 0000000000..70f9d1bf52 --- /dev/null +++ b/autogpt_platform/frontend/pnpm-workspace.yaml @@ -0,0 +1,4 @@ +onlyBuiltDependencies: + - "@vercel/speed-insights" + - esbuild + - msw diff --git a/autogpt_platform/frontend/public/favicon-dev.ico b/autogpt_platform/frontend/public/favicon-dev.ico new file mode 100644 index 0000000000..e6a1e70099 Binary files /dev/null and 
b/autogpt_platform/frontend/public/favicon-dev.ico differ diff --git a/autogpt_platform/frontend/public/favicon-local.ico b/autogpt_platform/frontend/public/favicon-local.ico new file mode 100644 index 0000000000..a0031e3267 Binary files /dev/null and b/autogpt_platform/frontend/public/favicon-local.ico differ diff --git a/autogpt_platform/frontend/public/mockServiceWorker.js b/autogpt_platform/frontend/public/mockServiceWorker.js index ec47a9a50a..2f658e919a 100644 --- a/autogpt_platform/frontend/public/mockServiceWorker.js +++ b/autogpt_platform/frontend/public/mockServiceWorker.js @@ -5,24 +5,23 @@ * Mock Service Worker. * @see https://github.com/mswjs/msw * - Please do NOT modify this file. - * - Please do NOT serve this file on production. */ -const PACKAGE_VERSION = '2.7.0' -const INTEGRITY_CHECKSUM = '00729d72e3b82faf54ca8b9621dbb96f' +const PACKAGE_VERSION = '2.11.6' +const INTEGRITY_CHECKSUM = '4db4a41e972cec1b64cc569c66952d82' const IS_MOCKED_RESPONSE = Symbol('isMockedResponse') const activeClientIds = new Set() -self.addEventListener('install', function () { +addEventListener('install', function () { self.skipWaiting() }) -self.addEventListener('activate', function (event) { +addEventListener('activate', function (event) { event.waitUntil(self.clients.claim()) }) -self.addEventListener('message', async function (event) { - const clientId = event.source.id +addEventListener('message', async function (event) { + const clientId = Reflect.get(event.source || {}, 'id') if (!clientId || !self.clients) { return @@ -72,11 +71,6 @@ self.addEventListener('message', async function (event) { break } - case 'MOCK_DEACTIVATE': { - activeClientIds.delete(clientId) - break - } - case 'CLIENT_CLOSED': { activeClientIds.delete(clientId) @@ -94,69 +88,92 @@ self.addEventListener('message', async function (event) { } }) -self.addEventListener('fetch', function (event) { - const { request } = event +addEventListener('fetch', function (event) { + const requestInterceptedAt = Date.now() // Bypass navigation requests. - if (request.mode === 'navigate') { + if (event.request.mode === 'navigate') { return } // Opening the DevTools triggers the "only-if-cached" request // that cannot be handled by the worker. Bypass such requests. - if (request.cache === 'only-if-cached' && request.mode !== 'same-origin') { + if ( + event.request.cache === 'only-if-cached' && + event.request.mode !== 'same-origin' + ) { return } // Bypass all requests when there are no active clients. // Prevents the self-unregistered worked from handling requests - // after it's been deleted (still remains active until the next reload). + // after it's been terminated (still remains active until the next reload). if (activeClientIds.size === 0) { return } - // Generate unique request ID. const requestId = crypto.randomUUID() - event.respondWith(handleRequest(event, requestId)) + event.respondWith(handleRequest(event, requestId, requestInterceptedAt)) }) -async function handleRequest(event, requestId) { +/** + * @param {FetchEvent} event + * @param {string} requestId + * @param {number} requestInterceptedAt + */ +async function handleRequest(event, requestId, requestInterceptedAt) { const client = await resolveMainClient(event) - const response = await getResponse(event, client, requestId) + const requestCloneForEvents = event.request.clone() + const response = await getResponse( + event, + client, + requestId, + requestInterceptedAt, + ) // Send back the response clone for the "response:*" life-cycle events. 
// Ensure MSW is active and ready to handle the message, otherwise // this message will pend indefinitely. if (client && activeClientIds.has(client.id)) { - ;(async function () { - const responseClone = response.clone() + const serializedRequest = await serializeRequest(requestCloneForEvents) - sendToClient( - client, - { - type: 'RESPONSE', - payload: { - requestId, - isMockedResponse: IS_MOCKED_RESPONSE in response, + // Clone the response so both the client and the library could consume it. + const responseClone = response.clone() + + sendToClient( + client, + { + type: 'RESPONSE', + payload: { + isMockedResponse: IS_MOCKED_RESPONSE in response, + request: { + id: requestId, + ...serializedRequest, + }, + response: { type: responseClone.type, status: responseClone.status, statusText: responseClone.statusText, - body: responseClone.body, headers: Object.fromEntries(responseClone.headers.entries()), + body: responseClone.body, }, }, - [responseClone.body], - ) - })() + }, + responseClone.body ? [serializedRequest.body, responseClone.body] : [], + ) } return response } -// Resolve the main client for the given event. -// Client that issues a request doesn't necessarily equal the client -// that registered the worker. It's with the latter the worker should -// communicate with during the response resolving phase. +/** + * Resolve the main client for the given event. + * Client that issues a request doesn't necessarily equal the client + * that registered the worker. It's with the latter the worker should + * communicate with during the response resolving phase. + * @param {FetchEvent} event + * @returns {Promise} + */ async function resolveMainClient(event) { const client = await self.clients.get(event.clientId) @@ -184,12 +201,17 @@ async function resolveMainClient(event) { }) } -async function getResponse(event, client, requestId) { - const { request } = event - +/** + * @param {FetchEvent} event + * @param {Client | undefined} client + * @param {string} requestId + * @param {number} requestInterceptedAt + * @returns {Promise} + */ +async function getResponse(event, client, requestId, requestInterceptedAt) { // Clone the request because it might've been already used // (i.e. its body has been read and sent to the client). - const requestClone = request.clone() + const requestClone = event.request.clone() function passthrough() { // Cast the request headers to a new Headers instance @@ -230,29 +252,18 @@ async function getResponse(event, client, requestId) { } // Notify the client that a request has been intercepted. 
- const requestBuffer = await request.arrayBuffer() + const serializedRequest = await serializeRequest(event.request) const clientMessage = await sendToClient( client, { type: 'REQUEST', payload: { id: requestId, - url: request.url, - mode: request.mode, - method: request.method, - headers: Object.fromEntries(request.headers.entries()), - cache: request.cache, - credentials: request.credentials, - destination: request.destination, - integrity: request.integrity, - redirect: request.redirect, - referrer: request.referrer, - referrerPolicy: request.referrerPolicy, - body: requestBuffer, - keepalive: request.keepalive, + interceptedAt: requestInterceptedAt, + ...serializedRequest, }, }, - [requestBuffer], + [serializedRequest.body], ) switch (clientMessage.type) { @@ -268,6 +279,12 @@ async function getResponse(event, client, requestId) { return passthrough() } +/** + * @param {Client} client + * @param {any} message + * @param {Array} transferrables + * @returns {Promise} + */ function sendToClient(client, message, transferrables = []) { return new Promise((resolve, reject) => { const channel = new MessageChannel() @@ -280,14 +297,18 @@ function sendToClient(client, message, transferrables = []) { resolve(event.data) } - client.postMessage( - message, - [channel.port2].concat(transferrables.filter(Boolean)), - ) + client.postMessage(message, [ + channel.port2, + ...transferrables.filter(Boolean), + ]) }) } -async function respondWithMock(response) { +/** + * @param {Response} response + * @returns {Response} + */ +function respondWithMock(response) { // Setting response status code to 0 is a no-op. // However, when responding with a "Response.error()", the produced Response // instance will have status code set to 0. Since it's not possible to create @@ -305,3 +326,24 @@ async function respondWithMock(response) { return mockedResponse } + +/** + * @param {Request} request + */ +async function serializeRequest(request) { + return { + url: request.url, + mode: request.mode, + method: request.method, + headers: Object.fromEntries(request.headers.entries()), + cache: request.cache, + credentials: request.credentials, + destination: request.destination, + integrity: request.integrity, + redirect: request.redirect, + referrer: request.referrer, + referrerPolicy: request.referrerPolicy, + body: await request.arrayBuffer(), + keepalive: request.keepalive, + } +} diff --git a/autogpt_platform/frontend/scripts/generate-api-queries.ts b/autogpt_platform/frontend/scripts/generate-api-queries.ts index a11e7329c3..0c6fa717a3 100644 --- a/autogpt_platform/frontend/scripts/generate-api-queries.ts +++ b/autogpt_platform/frontend/scripts/generate-api-queries.ts @@ -1,16 +1,16 @@ #!/usr/bin/env node -import { getAgptServerBaseUrl } from "@/lib/env-config"; import { execSync } from "child_process"; import * as path from "path"; import * as fs from "fs"; import * as os from "os"; +import { environment } from "@/services/environment"; function fetchOpenApiSpec(): void { const args = process.argv.slice(2); const forceFlag = args.includes("--force"); - const baseUrl = getAgptServerBaseUrl(); + const baseUrl = environment.getAGPTServerBaseUrl(); const openApiUrl = `${baseUrl}/openapi.json`; const outputPath = path.join( __dirname, diff --git a/autogpt_platform/frontend/sentry.edge.config.ts b/autogpt_platform/frontend/sentry.edge.config.ts index 614be377ad..dbf642f521 100644 --- a/autogpt_platform/frontend/sentry.edge.config.ts +++ b/autogpt_platform/frontend/sentry.edge.config.ts @@ -3,18 +3,11 @@ // Note that this 
config is unrelated to the Vercel Edge Runtime and is also required when running locally. // https://docs.sentry.io/platforms/javascript/guides/nextjs/ +import { environment } from "@/services/environment"; import * as Sentry from "@sentry/nextjs"; -import { - AppEnv, - BehaveAs, - getAppEnv, - getBehaveAs, - getEnvironmentStr, -} from "./src/lib/utils"; -const isProdOrDev = [AppEnv.PROD, AppEnv.DEV].includes(getAppEnv()); - -const isCloud = getBehaveAs() === BehaveAs.CLOUD; +const isProdOrDev = environment.isProd() || environment.isDev(); +const isCloud = environment.isCloud(); const isDisabled = process.env.DISABLE_SENTRY === "true"; const shouldEnable = !isDisabled && isProdOrDev && isCloud; @@ -22,7 +15,7 @@ const shouldEnable = !isDisabled && isProdOrDev && isCloud; Sentry.init({ dsn: "https://fe4e4aa4a283391808a5da396da20159@o4505260022104064.ingest.us.sentry.io/4507946746380288", - environment: getEnvironmentStr(), + environment: environment.getEnvironmentStr(), enabled: shouldEnable, @@ -40,7 +33,7 @@ Sentry.init({ enableLogs: true, integrations: [ - Sentry.captureConsoleIntegration(), + Sentry.captureConsoleIntegration({ levels: ["fatal", "error", "warn"] }), Sentry.extraErrorDataIntegration(), ], }); diff --git a/autogpt_platform/frontend/sentry.server.config.ts b/autogpt_platform/frontend/sentry.server.config.ts index aea80ee8e4..97c159737a 100644 --- a/autogpt_platform/frontend/sentry.server.config.ts +++ b/autogpt_platform/frontend/sentry.server.config.ts @@ -2,19 +2,12 @@ // The config you add here will be used whenever the server handles a request. // https://docs.sentry.io/platforms/javascript/guides/nextjs/ -import { - AppEnv, - BehaveAs, - getAppEnv, - getBehaveAs, - getEnvironmentStr, -} from "@/lib/utils"; +import { environment } from "@/services/environment"; import * as Sentry from "@sentry/nextjs"; // import { NodeProfilingIntegration } from "@sentry/profiling-node"; -const isProdOrDev = [AppEnv.PROD, AppEnv.DEV].includes(getAppEnv()); - -const isCloud = getBehaveAs() === BehaveAs.CLOUD; +const isProdOrDev = environment.isProd() || environment.isDev(); +const isCloud = environment.isCloud(); const isDisabled = process.env.DISABLE_SENTRY === "true"; const shouldEnable = !isDisabled && isProdOrDev && isCloud; @@ -22,7 +15,7 @@ const shouldEnable = !isDisabled && isProdOrDev && isCloud; Sentry.init({ dsn: "https://fe4e4aa4a283391808a5da396da20159@o4505260022104064.ingest.us.sentry.io/4507946746380288", - environment: getEnvironmentStr(), + environment: environment.getEnvironmentStr(), enabled: shouldEnable, diff --git a/autogpt_platform/frontend/src/app/(no-navbar)/onboarding/4-agent/page.tsx b/autogpt_platform/frontend/src/app/(no-navbar)/onboarding/4-agent/page.tsx index 2635c48867..e75b2fc28e 100644 --- a/autogpt_platform/frontend/src/app/(no-navbar)/onboarding/4-agent/page.tsx +++ b/autogpt_platform/frontend/src/app/(no-navbar)/onboarding/4-agent/page.tsx @@ -1,4 +1,9 @@ "use client"; +import { isEmptyOrWhitespace } from "@/lib/utils"; +import { useRouter } from "next/navigation"; +import { useEffect, useState } from "react"; +import { useOnboarding } from "../../../../providers/onboarding/onboarding-provider"; +import OnboardingAgentCard from "../components/OnboardingAgentCard"; import OnboardingButton from "../components/OnboardingButton"; import { OnboardingFooter, @@ -6,27 +11,24 @@ import { OnboardingStep, } from "../components/OnboardingStep"; import { OnboardingText } from "../components/OnboardingText"; -import OnboardingAgentCard from 
"../components/OnboardingAgentCard"; -import { useEffect, useState } from "react"; -import { useBackendAPI } from "@/lib/autogpt-server-api/context"; -import { StoreAgentDetails } from "@/lib/autogpt-server-api"; -import { finishOnboarding } from "../6-congrats/actions"; -import { isEmptyOrWhitespace } from "@/lib/utils"; -import { useOnboarding } from "../../../../providers/onboarding/onboarding-provider"; +import { getV1RecommendedOnboardingAgents } from "@/app/api/__generated__/endpoints/onboarding/onboarding"; +import { resolveResponse } from "@/app/api/helpers"; +import { StoreAgentDetails } from "@/app/api/__generated__/models/storeAgentDetails"; export default function Page() { - const { state, updateState } = useOnboarding(4, "INTEGRATIONS"); + const { state, updateState, completeStep } = useOnboarding(4, "INTEGRATIONS"); const [agents, setAgents] = useState([]); - const api = useBackendAPI(); + const router = useRouter(); useEffect(() => { - api.getOnboardingAgents().then((agents) => { + resolveResponse(getV1RecommendedOnboardingAgents()).then((agents) => { if (agents.length < 2) { - finishOnboarding(); + completeStep("CONGRATS"); + router.replace("/"); } setAgents(agents); }); - }, [api, setAgents]); + }, []); useEffect(() => { // Deselect agent if it's not in the list of agents diff --git a/autogpt_platform/frontend/src/app/(no-navbar)/onboarding/5-run/components/AgentOnboardingCredentials/AgentOnboardingCredentials.tsx b/autogpt_platform/frontend/src/app/(no-navbar)/onboarding/5-run/components/AgentOnboardingCredentials/AgentOnboardingCredentials.tsx new file mode 100644 index 0000000000..3176ec7f70 --- /dev/null +++ b/autogpt_platform/frontend/src/app/(no-navbar)/onboarding/5-run/components/AgentOnboardingCredentials/AgentOnboardingCredentials.tsx @@ -0,0 +1,62 @@ +import { CredentialsInput } from "@/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/modals/CredentialsInputs/CredentialsInputs"; +import { CredentialsMetaInput } from "@/app/api/__generated__/models/credentialsMetaInput"; +import { GraphMeta } from "@/app/api/__generated__/models/graphMeta"; +import { useState } from "react"; +import { getSchemaDefaultCredentials } from "../../helpers"; +import { areAllCredentialsSet, getCredentialFields } from "./helpers"; + +type Credential = CredentialsMetaInput | undefined; +type Credentials = Record; + +type Props = { + agent: GraphMeta | null; + siblingInputs?: Record; + onCredentialsChange: ( + credentials: Record, + ) => void; + onValidationChange: (isValid: boolean) => void; + onLoadingChange: (isLoading: boolean) => void; +}; + +export function AgentOnboardingCredentials(props: Props) { + const [inputCredentials, setInputCredentials] = useState({}); + + const fields = getCredentialFields(props.agent); + const required = Object.keys(fields || {}).length > 0; + + if (!required) return null; + + function handleSelectCredentials(key: string, value: Credential) { + const updated = { ...inputCredentials, [key]: value }; + setInputCredentials(updated); + + const sanitized: Record = {}; + for (const [k, v] of Object.entries(updated)) { + if (v) sanitized[k] = v; + } + + props.onCredentialsChange(sanitized); + + const isValid = !required || areAllCredentialsSet(fields, updated); + props.onValidationChange(isValid); + } + + return ( + <> + {Object.entries(fields).map(([key, inputSubSchema]) => ( +
      + handleSelectCredentials(key, value)} + siblingInputs={props.siblingInputs} + onLoaded={(loaded) => props.onLoadingChange(!loaded)} + /> +
      + ))} + + ); +} diff --git a/autogpt_platform/frontend/src/app/(no-navbar)/onboarding/5-run/components/AgentOnboardingCredentials/helpers.ts b/autogpt_platform/frontend/src/app/(no-navbar)/onboarding/5-run/components/AgentOnboardingCredentials/helpers.ts new file mode 100644 index 0000000000..7a456d63e4 --- /dev/null +++ b/autogpt_platform/frontend/src/app/(no-navbar)/onboarding/5-run/components/AgentOnboardingCredentials/helpers.ts @@ -0,0 +1,32 @@ +import { CredentialsMetaInput } from "@/app/api/__generated__/models/credentialsMetaInput"; +import { GraphMeta } from "@/app/api/__generated__/models/graphMeta"; +import { BlockIOCredentialsSubSchema } from "@/lib/autogpt-server-api/types"; + +export function getCredentialFields( + agent: GraphMeta | null, +): AgentCredentialsFields { + if (!agent) return {}; + + const hasNoInputs = + !agent.credentials_input_schema || + typeof agent.credentials_input_schema !== "object" || + !("properties" in agent.credentials_input_schema) || + !agent.credentials_input_schema.properties; + + if (hasNoInputs) return {}; + + return agent.credentials_input_schema.properties as AgentCredentialsFields; +} + +export type AgentCredentialsFields = Record< + string, + BlockIOCredentialsSubSchema +>; + +export function areAllCredentialsSet( + fields: AgentCredentialsFields, + inputs: Record, +) { + const required = Object.keys(fields || {}); + return required.every((k) => Boolean(inputs[k])); +} diff --git a/autogpt_platform/frontend/src/app/(no-navbar)/onboarding/5-run/components/RunAgentHint.tsx b/autogpt_platform/frontend/src/app/(no-navbar)/onboarding/5-run/components/RunAgentHint.tsx new file mode 100644 index 0000000000..7b2b7ff429 --- /dev/null +++ b/autogpt_platform/frontend/src/app/(no-navbar)/onboarding/5-run/components/RunAgentHint.tsx @@ -0,0 +1,45 @@ +import { cn } from "@/lib/utils"; +import { OnboardingText } from "../../components/OnboardingText"; + +type RunAgentHintProps = { + handleNewRun: () => void; +}; + +export function RunAgentHint(props: RunAgentHintProps) { + return ( +
      +
      + Run your first agent + + A 'run' is when your agent starts working on a task + + + Click on New Run below to try it out + + +
      + + + + + + + + New run + +
      +
      +
      + ); +} diff --git a/autogpt_platform/frontend/src/app/(no-navbar)/onboarding/5-run/components/SelectedAgentCard.tsx b/autogpt_platform/frontend/src/app/(no-navbar)/onboarding/5-run/components/SelectedAgentCard.tsx new file mode 100644 index 0000000000..ec5a3a6d01 --- /dev/null +++ b/autogpt_platform/frontend/src/app/(no-navbar)/onboarding/5-run/components/SelectedAgentCard.tsx @@ -0,0 +1,52 @@ +import { StoreAgentDetails } from "@/app/api/__generated__/models/storeAgentDetails"; +import StarRating from "../../components/StarRating"; +import SmartImage from "@/components/__legacy__/SmartImage"; + +type Props = { + storeAgent: StoreAgentDetails | null; +}; + +export function SelectedAgentCard(props: Props) { + return ( +
      +
      + + SELECTED AGENT + + {props.storeAgent ? ( +
      + {/* Left image */} + + {/* Right content */} +
      +
      + + {props.storeAgent.agent_name} + + + by {props.storeAgent.creator} + +
      +
      + + {props.storeAgent.runs.toLocaleString("en-US")} runs + + +
      +
      +
      + ) : ( +
      + )} +
      +
      + ); +} diff --git a/autogpt_platform/frontend/src/app/(no-navbar)/onboarding/5-run/helpers.ts b/autogpt_platform/frontend/src/app/(no-navbar)/onboarding/5-run/helpers.ts index edaf49e522..62f5c564ff 100644 --- a/autogpt_platform/frontend/src/app/(no-navbar)/onboarding/5-run/helpers.ts +++ b/autogpt_platform/frontend/src/app/(no-navbar)/onboarding/5-run/helpers.ts @@ -1,9 +1,9 @@ -import type { GraphMeta } from "@/lib/autogpt-server-api"; import type { BlockIOCredentialsSubSchema, CredentialsMetaInput, } from "@/lib/autogpt-server-api/types"; import type { InputValues } from "./types"; +import { GraphMeta } from "@/app/api/__generated__/models/graphMeta"; export function computeInitialAgentInputs( agent: GraphMeta | null, @@ -21,7 +21,6 @@ export function computeInitialAgentInputs( result[key] = existingInputs[key]; return; } - // GraphIOSubSchema.default is typed as string, but server may return other primitives const def = (subSchema as unknown as { default?: string | number }).default; result[key] = def ?? ""; }); @@ -29,40 +28,16 @@ export function computeInitialAgentInputs( return result; } -export function getAgentCredentialsInputFields(agent: GraphMeta | null) { - const hasNoInputs = - !agent?.credentials_input_schema || - typeof agent.credentials_input_schema !== "object" || - !("properties" in agent.credentials_input_schema) || - !agent.credentials_input_schema.properties; - - if (hasNoInputs) return {}; - - return agent.credentials_input_schema.properties; -} - -export function areAllCredentialsSet( - fields: Record, - inputs: Record, -) { - const required = Object.keys(fields || {}); - return required.every((k) => Boolean(inputs[k])); -} - type IsRunDisabledParams = { agent: GraphMeta | null; isRunning: boolean; agentInputs: InputValues | null | undefined; - credentialsRequired: boolean; - credentialsSatisfied: boolean; }; export function isRunDisabled({ agent, isRunning, agentInputs, - credentialsRequired, - credentialsSatisfied, }: IsRunDisabledParams) { const hasEmptyInput = Object.values(agentInputs || {}).some( (value) => String(value).trim() === "", @@ -71,7 +46,6 @@ export function isRunDisabled({ if (hasEmptyInput) return true; if (!agent) return true; if (isRunning) return true; - if (credentialsRequired && !credentialsSatisfied) return true; return false; } @@ -81,13 +55,3 @@ export function getSchemaDefaultCredentials( ): CredentialsMetaInput | undefined { return schema.default as CredentialsMetaInput | undefined; } - -export function sanitizeCredentials( - map: Record, -): Record { - const sanitized: Record = {}; - for (const [key, value] of Object.entries(map)) { - if (value) sanitized[key] = value; - } - return sanitized; -} diff --git a/autogpt_platform/frontend/src/app/(no-navbar)/onboarding/5-run/page.tsx b/autogpt_platform/frontend/src/app/(no-navbar)/onboarding/5-run/page.tsx index 9e80231680..30e1b67090 100644 --- a/autogpt_platform/frontend/src/app/(no-navbar)/onboarding/5-run/page.tsx +++ b/autogpt_platform/frontend/src/app/(no-navbar)/onboarding/5-run/page.tsx @@ -1,224 +1,62 @@ "use client"; -import SmartImage from "@/components/__legacy__/SmartImage"; -import { useOnboarding } from "../../../../providers/onboarding/onboarding-provider"; -import OnboardingButton from "../components/OnboardingButton"; -import { OnboardingHeader, OnboardingStep } from "../components/OnboardingStep"; -import { OnboardingText } from "../components/OnboardingText"; -import StarRating from "../components/StarRating"; + +import { RunAgentInputs } from 
"@/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/modals/RunAgentInputs/RunAgentInputs"; import { Card, CardContent, CardHeader, CardTitle, } from "@/components/__legacy__/ui/card"; -import { useToast } from "@/components/molecules/Toast/use-toast"; -import { GraphMeta, StoreAgentDetails } from "@/lib/autogpt-server-api"; -import type { InputValues } from "./types"; -import { useBackendAPI } from "@/lib/autogpt-server-api/context"; -import { cn } from "@/lib/utils"; +import { ErrorCard } from "@/components/molecules/ErrorCard/ErrorCard"; +import { CircleNotchIcon } from "@phosphor-icons/react/dist/ssr"; import { Play } from "lucide-react"; -import { useRouter } from "next/navigation"; -import { useEffect, useState } from "react"; -import { RunAgentInputs } from "@/app/(platform)/library/agents/[id]/components/AgentRunsView/components/RunAgentInputs/RunAgentInputs"; -import { InformationTooltip } from "@/components/molecules/InformationTooltip/InformationTooltip"; -import { CredentialsInput } from "@/app/(platform)/library/agents/[id]/components/AgentRunsView/components/CredentialsInputs/CredentialsInputs"; -import type { CredentialsMetaInput } from "@/lib/autogpt-server-api/types"; -import { - areAllCredentialsSet, - computeInitialAgentInputs, - getAgentCredentialsInputFields, - isRunDisabled, - getSchemaDefaultCredentials, - sanitizeCredentials, -} from "./helpers"; +import OnboardingButton from "../components/OnboardingButton"; +import { OnboardingHeader, OnboardingStep } from "../components/OnboardingStep"; +import { OnboardingText } from "../components/OnboardingText"; +import { AgentOnboardingCredentials } from "./components/AgentOnboardingCredentials/AgentOnboardingCredentials"; +import { RunAgentHint } from "./components/RunAgentHint"; +import { SelectedAgentCard } from "./components/SelectedAgentCard"; +import { isRunDisabled } from "./helpers"; +import type { InputValues } from "./types"; +import { useOnboardingRunStep } from "./useOnboardingRunStep"; export default function Page() { - const { state, updateState, setStep } = useOnboarding( - undefined, - "AGENT_CHOICE", - ); - const [showInput, setShowInput] = useState(false); - const [agent, setAgent] = useState(null); - const [storeAgent, setStoreAgent] = useState(null); - const [runningAgent, setRunningAgent] = useState(false); - const [inputCredentials, setInputCredentials] = useState< - Record - >({}); - const { toast } = useToast(); - const router = useRouter(); - const api = useBackendAPI(); + const { + ready, + error, + showInput, + agentGraph, + onboarding, + storeAgent, + runningAgent, + handleSetAgentInput, + handleRunAgent, + handleNewRun, + handleCredentialsChange, + handleCredentialsValidationChange, + handleCredentialsLoadingChange, + } = useOnboardingRunStep(); - useEffect(() => { - setStep(5); - }, []); - - useEffect(() => { - if (!state?.selectedStoreListingVersionId) { - return; - } - api - .getStoreAgentByVersionId(state?.selectedStoreListingVersionId) - .then((storeAgent) => { - setStoreAgent(storeAgent); - }); - api - .getGraphMetaByStoreListingVersionID(state.selectedStoreListingVersionId) - .then((meta) => { - setAgent(meta); - const update = computeInitialAgentInputs( - meta, - (state.agentInput as unknown as InputValues) || null, - ); - updateState({ agentInput: update }); - }); - }, [api, setAgent, updateState, state?.selectedStoreListingVersionId]); - - const agentCredentialsInputFields = getAgentCredentialsInputFields(agent); - - const credentialsRequired = - 
Object.keys(agentCredentialsInputFields || {}).length > 0; - - const allCredentialsAreSet = areAllCredentialsSet( - agentCredentialsInputFields, - inputCredentials, - ); - - function setAgentInput(key: string, value: string) { - updateState({ - agentInput: { - ...state?.agentInput, - [key]: value, - }, - }); + if (error) { + return ; } - async function runAgent() { - if (!agent) { - return; - } - setRunningAgent(true); - try { - const libraryAgent = await api.addMarketplaceAgentToLibrary( - storeAgent?.store_listing_version_id || "", - ); - const { id: runID } = await api.executeGraph( - libraryAgent.graph_id, - libraryAgent.graph_version, - state?.agentInput || {}, - sanitizeCredentials(inputCredentials), - ); - updateState({ - onboardingAgentExecutionId: runID, - agentRuns: (state?.agentRuns || 0) + 1, - }); - router.push("/onboarding/6-congrats"); - } catch (error) { - console.error("Error running agent:", error); - toast({ - title: "Error running agent", - description: - "There was an error running your agent. Please try again or try choosing a different agent if it still fails.", - variant: "destructive", - }); - setRunningAgent(false); - } - } - - const runYourAgent = ( -
      -
      - Run your first agent - - A 'run' is when your agent starts working on a task - - - Click on New Run below to try it out - - -
      { - setShowInput(true); - setStep(6); - updateState({ - completedSteps: [ - ...(state?.completedSteps || []), - "AGENT_NEW_RUN", - ], - }); - }} - className={cn( - "mt-16 flex h-[68px] w-[330px] items-center justify-center rounded-xl border-2 border-violet-700 bg-neutral-50", - "cursor-pointer transition-all duration-200 ease-in-out hover:bg-violet-50", - )} - > - - - - - - - - New run - -
      + if (!ready) { + return ( +
      +
      -
      - ); + ); + } return ( - {/* Agent card */} -
      -
      - - SELECTED AGENT - - {storeAgent ? ( -
      - {/* Left image */} - - {/* Right content */} -
      - - {storeAgent?.agent_name} - - - by {storeAgent?.creator} - -
      - - {storeAgent?.runs.toLocaleString("en-US")} runs - - -
      -
      -
      - ) : ( -
      - )} -
      -
      - {/* Left side */} +
      - {/* Right side */} {!showInput ? ( - runYourAgent + ) : (
      @@ -232,53 +70,33 @@ export default function Page() { When you're done, click Run Agent. - {Object.entries(agentCredentialsInputFields || {}).map( - ([key, inputSubSchema]) => ( -
      - - setInputCredentials((prev) => ({ - ...prev, - [key]: value, - })) - } - siblingInputs={ - (state?.agentInput || undefined) as - | Record - | undefined - } - /> -
      - ), - )} + Input - {Object.entries(agent?.input_schema.properties || {}).map( - ([key, inputSubSchema]) => ( -
      - - setAgentInput(key, value)} - /> -
      - ), - )} + {Object.entries( + agentGraph?.input_schema.properties || {}, + ).map(([key, inputSubSchema]) => ( + handleSetAgentInput(key, value)} + /> + ))} + ) || + undefined + } + onCredentialsChange={handleCredentialsChange} + onValidationChange={handleCredentialsValidationChange} + onLoadingChange={handleCredentialsLoadingChange} + />
      } > Run agent diff --git a/autogpt_platform/frontend/src/app/(no-navbar)/onboarding/5-run/useOnboardingRunStep.tsx b/autogpt_platform/frontend/src/app/(no-navbar)/onboarding/5-run/useOnboardingRunStep.tsx new file mode 100644 index 0000000000..f143c89d44 --- /dev/null +++ b/autogpt_platform/frontend/src/app/(no-navbar)/onboarding/5-run/useOnboardingRunStep.tsx @@ -0,0 +1,157 @@ +import { useToast } from "@/components/molecules/Toast/use-toast"; +import { useBackendAPI } from "@/lib/autogpt-server-api/context"; +import { useOnboarding } from "@/providers/onboarding/onboarding-provider"; +import { useRouter } from "next/navigation"; +import { useEffect, useState } from "react"; +import { computeInitialAgentInputs } from "./helpers"; +import { InputValues } from "./types"; +import { okData, resolveResponse } from "@/app/api/helpers"; +import { postV2AddMarketplaceAgent } from "@/app/api/__generated__/endpoints/library/library"; +import { + useGetV2GetAgentByVersion, + useGetV2GetAgentGraph, +} from "@/app/api/__generated__/endpoints/store/store"; +import { CredentialsMetaInput } from "@/app/api/__generated__/models/credentialsMetaInput"; +import { GraphID } from "@/lib/autogpt-server-api"; + +export function useOnboardingRunStep() { + const onboarding = useOnboarding(undefined, "AGENT_CHOICE"); + + const [showInput, setShowInput] = useState(false); + const [runningAgent, setRunningAgent] = useState(false); + + const [inputCredentials, setInputCredentials] = useState< + Record + >({}); + + const [credentialsValid, setCredentialsValid] = useState(true); + const [credentialsLoaded, setCredentialsLoaded] = useState(false); + + const { toast } = useToast(); + const router = useRouter(); + const api = useBackendAPI(); + + const currentAgentVersion = + onboarding.state?.selectedStoreListingVersionId ?? 
""; + + const { + data: storeAgent, + error: storeAgentQueryError, + isSuccess: storeAgentQueryIsSuccess, + } = useGetV2GetAgentByVersion(currentAgentVersion, { + query: { + enabled: !!currentAgentVersion, + select: okData, + }, + }); + + const { + data: agentGraphMeta, + error: agentGraphQueryError, + isSuccess: agentGraphQueryIsSuccess, + } = useGetV2GetAgentGraph(currentAgentVersion, { + query: { + enabled: !!currentAgentVersion, + select: okData, + }, + }); + + useEffect(() => { + onboarding.setStep(5); + }, []); + + useEffect(() => { + if (agentGraphMeta && onboarding.state) { + const initialAgentInputs = computeInitialAgentInputs( + agentGraphMeta, + (onboarding.state.agentInput as unknown as InputValues) || null, + ); + + onboarding.updateState({ agentInput: initialAgentInputs }); + } + }, [agentGraphMeta]); + + function handleNewRun() { + if (!onboarding.state) return; + + setShowInput(true); + onboarding.setStep(6); + onboarding.completeStep("AGENT_NEW_RUN"); + } + + function handleSetAgentInput(key: string, value: string) { + if (!onboarding.state) return; + + onboarding.updateState({ + agentInput: { + ...onboarding.state.agentInput, + [key]: value, + }, + }); + } + + async function handleRunAgent() { + if (!agentGraphMeta || !storeAgent || !onboarding.state) { + toast({ + title: "Error getting agent", + description: + "Either the agent is not available or there was an error getting it.", + variant: "destructive", + }); + + return; + } + + setRunningAgent(true); + + try { + const libraryAgent = await resolveResponse( + postV2AddMarketplaceAgent({ + store_listing_version_id: storeAgent?.store_listing_version_id || "", + source: "onboarding", + }), + ); + + const { id: runID } = await api.executeGraph( + libraryAgent.graph_id as GraphID, + libraryAgent.graph_version, + onboarding.state.agentInput || {}, + inputCredentials, + "onboarding", + ); + + onboarding.updateState({ onboardingAgentExecutionId: runID }); + + router.push("/onboarding/6-congrats"); + } catch (error) { + console.error("Error running agent:", error); + + toast({ + title: "Error running agent", + description: + "There was an error running your agent. 
Please try again or try choosing a different agent if it still fails.", + variant: "destructive", + }); + + setRunningAgent(false); + } + } + + return { + ready: agentGraphQueryIsSuccess && storeAgentQueryIsSuccess, + error: agentGraphQueryError || storeAgentQueryError, + agentGraph: agentGraphMeta || null, + onboarding, + showInput, + storeAgent: storeAgent || null, + runningAgent, + credentialsValid, + credentialsLoaded, + handleSetAgentInput, + handleRunAgent, + handleNewRun, + handleCredentialsChange: setInputCredentials, + handleCredentialsValidationChange: setCredentialsValid, + handleCredentialsLoadingChange: (v: boolean) => setCredentialsLoaded(!v), + }; +} diff --git a/autogpt_platform/frontend/src/app/(no-navbar)/onboarding/6-congrats/actions.ts b/autogpt_platform/frontend/src/app/(no-navbar)/onboarding/6-congrats/actions.ts deleted file mode 100644 index 202bad57bd..0000000000 --- a/autogpt_platform/frontend/src/app/(no-navbar)/onboarding/6-congrats/actions.ts +++ /dev/null @@ -1,18 +0,0 @@ -"use server"; -import BackendAPI from "@/lib/autogpt-server-api"; -import { revalidatePath } from "next/cache"; -import { redirect } from "next/navigation"; - -export async function finishOnboarding() { - const api = new BackendAPI(); - const onboarding = await api.getUserOnboarding(); - const listingId = onboarding?.selectedStoreListingVersionId; - if (listingId) { - const libraryAgent = await api.addMarketplaceAgentToLibrary(listingId); - revalidatePath(`/library/agents/${libraryAgent.id}`, "layout"); - redirect(`/library/agents/${libraryAgent.id}`); - } else { - revalidatePath("/library", "layout"); - redirect("/library"); - } -} diff --git a/autogpt_platform/frontend/src/app/(no-navbar)/onboarding/6-congrats/page.tsx b/autogpt_platform/frontend/src/app/(no-navbar)/onboarding/6-congrats/page.tsx index 0cac60c6f0..b3b4e4f458 100644 --- a/autogpt_platform/frontend/src/app/(no-navbar)/onboarding/6-congrats/page.tsx +++ b/autogpt_platform/frontend/src/app/(no-navbar)/onboarding/6-congrats/page.tsx @@ -1,12 +1,18 @@ "use client"; -import { useEffect, useRef, useState } from "react"; +import { useBackendAPI } from "@/lib/autogpt-server-api/context"; import { cn } from "@/lib/utils"; -import { finishOnboarding } from "./actions"; -import { useOnboarding } from "../../../../providers/onboarding/onboarding-provider"; +import { useRouter } from "next/navigation"; import * as party from "party-js"; +import { useEffect, useRef, useState } from "react"; +import { useOnboarding } from "../../../../providers/onboarding/onboarding-provider"; +import { resolveResponse } from "@/app/api/helpers"; +import { getV1OnboardingState } from "@/app/api/__generated__/endpoints/onboarding/onboarding"; +import { postV2AddMarketplaceAgent } from "@/app/api/__generated__/endpoints/library/library"; export default function Page() { const { completeStep } = useOnboarding(7, "AGENT_INPUT"); + const router = useRouter(); + const api = useBackendAPI(); const [showText, setShowText] = useState(false); const [showSubtext, setShowSubtext] = useState(false); const divRef = useRef(null); @@ -30,9 +36,32 @@ export default function Page() { setShowSubtext(true); }, 500); - const timer2 = setTimeout(() => { + const timer2 = setTimeout(async () => { completeStep("CONGRATS"); - finishOnboarding(); + + try { + const onboarding = await resolveResponse(getV1OnboardingState()); + if (onboarding?.selectedStoreListingVersionId) { + try { + const libraryAgent = await resolveResponse( + postV2AddMarketplaceAgent({ + store_listing_version_id: 
+ onboarding.selectedStoreListingVersionId, + source: "onboarding", + }), + ); + router.replace(`/library/agents/${libraryAgent.id}`); + } catch (error) { + console.error("Failed to add agent to library:", error); + router.replace("/library"); + } + } else { + router.replace("/library"); + } + } catch (error) { + console.error("Failed to get onboarding data:", error); + router.replace("/library"); + } }, 3000); return () => { @@ -40,7 +69,7 @@ export default function Page() { clearTimeout(timer1); clearTimeout(timer2); }; - }, []); + }, [completeStep, router, api]); return (
      diff --git a/autogpt_platform/frontend/src/app/(no-navbar)/onboarding/components/OnboardingAgentCard.tsx b/autogpt_platform/frontend/src/app/(no-navbar)/onboarding/components/OnboardingAgentCard.tsx index 8d8bf6b7ce..841b0bb50a 100644 --- a/autogpt_platform/frontend/src/app/(no-navbar)/onboarding/components/OnboardingAgentCard.tsx +++ b/autogpt_platform/frontend/src/app/(no-navbar)/onboarding/components/OnboardingAgentCard.tsx @@ -1,7 +1,7 @@ import { cn } from "@/lib/utils"; import StarRating from "./StarRating"; -import { StoreAgentDetails } from "@/lib/autogpt-server-api"; import SmartImage from "@/components/__legacy__/SmartImage"; +import { StoreAgentDetails } from "@/app/api/__generated__/models/storeAgentDetails"; type OnboardingAgentCardProps = { agent?: StoreAgentDetails; @@ -21,7 +21,6 @@ export default function OnboardingAgentCard({ "relative animate-pulse", "h-[394px] w-[368px] rounded-[20px] border border-transparent bg-zinc-200", )} - onClick={onClick} /> ); } @@ -67,12 +66,12 @@ export default function OnboardingAgentCard({ {/* Text content wrapper */}
        {/* Text content wrapper */}
          {/* Title - 2 lines max */}
-
+
            {agent_name}
          {/* Author - single line with truncate */}
-
+
            by {creator}
      diff --git a/autogpt_platform/frontend/src/app/(no-navbar)/onboarding/components/StarRating.tsx b/autogpt_platform/frontend/src/app/(no-navbar)/onboarding/components/StarRating.tsx index e325c35a4a..5610566268 100644 --- a/autogpt_platform/frontend/src/app/(no-navbar)/onboarding/components/StarRating.tsx +++ b/autogpt_platform/frontend/src/app/(no-navbar)/onboarding/components/StarRating.tsx @@ -46,7 +46,7 @@ export default function StarRating({ )} > {/* Display numerical rating */} - {roundedRating} + {roundedRating} {/* Display stars */} {stars.map((starType, index) => { diff --git a/autogpt_platform/frontend/src/app/(no-navbar)/onboarding/page.tsx b/autogpt_platform/frontend/src/app/(no-navbar)/onboarding/page.tsx index c1e3bf0540..1ebfe6b87b 100644 --- a/autogpt_platform/frontend/src/app/(no-navbar)/onboarding/page.tsx +++ b/autogpt_platform/frontend/src/app/(no-navbar)/onboarding/page.tsx @@ -1,37 +1,72 @@ -import BackendAPI from "@/lib/autogpt-server-api"; -import { redirect } from "next/navigation"; -import { finishOnboarding } from "./6-congrats/actions"; -import { shouldShowOnboarding } from "@/app/api/helpers"; +"use client"; +import { LoadingSpinner } from "@/components/atoms/LoadingSpinner/LoadingSpinner"; +import { useRouter } from "next/navigation"; +import { useEffect } from "react"; +import { resolveResponse, shouldShowOnboarding } from "@/app/api/helpers"; +import { getV1OnboardingState } from "@/app/api/__generated__/endpoints/onboarding/onboarding"; -// Force dynamic rendering to avoid static generation issues with cookies -export const dynamic = "force-dynamic"; +export default function OnboardingPage() { + const router = useRouter(); -export default async function OnboardingPage() { - const api = new BackendAPI(); - const isOnboardingEnabled = await shouldShowOnboarding(); + useEffect(() => { + async function redirectToStep() { + try { + // Check if onboarding is enabled + const isEnabled = await shouldShowOnboarding(); + if (!isEnabled) { + router.replace("/"); + return; + } - if (!isOnboardingEnabled) { - redirect("/marketplace"); - } + const onboarding = await resolveResponse(getV1OnboardingState()); - const onboarding = await api.getUserOnboarding(); + // Handle completed onboarding + if (onboarding.completedSteps.includes("GET_RESULTS")) { + router.replace("/"); + return; + } - // CONGRATS is the last step in intro onboarding - if (onboarding.completedSteps.includes("GET_RESULTS")) - redirect("/marketplace"); - else if (onboarding.completedSteps.includes("CONGRATS")) finishOnboarding(); - else if (onboarding.completedSteps.includes("AGENT_INPUT")) - redirect("/onboarding/5-run"); - else if (onboarding.completedSteps.includes("AGENT_NEW_RUN")) - redirect("/onboarding/5-run"); - else if (onboarding.completedSteps.includes("AGENT_CHOICE")) - redirect("/onboarding/5-run"); - else if (onboarding.completedSteps.includes("INTEGRATIONS")) - redirect("/onboarding/4-agent"); - else if (onboarding.completedSteps.includes("USAGE_REASON")) - redirect("/onboarding/3-services"); - else if (onboarding.completedSteps.includes("WELCOME")) - redirect("/onboarding/2-reason"); + // Redirect to appropriate step based on completed steps + if (onboarding.completedSteps.includes("AGENT_INPUT")) { + router.push("/onboarding/5-run"); + return; + } - redirect("/onboarding/1-welcome"); + if (onboarding.completedSteps.includes("AGENT_NEW_RUN")) { + router.push("/onboarding/5-run"); + return; + } + + if (onboarding.completedSteps.includes("AGENT_CHOICE")) { + 
router.push("/onboarding/5-run"); + return; + } + + if (onboarding.completedSteps.includes("INTEGRATIONS")) { + router.push("/onboarding/4-agent"); + return; + } + + if (onboarding.completedSteps.includes("USAGE_REASON")) { + router.push("/onboarding/3-services"); + return; + } + + if (onboarding.completedSteps.includes("WELCOME")) { + router.push("/onboarding/2-reason"); + return; + } + + // Default: redirect to first step + router.push("/onboarding/1-welcome"); + } catch (error) { + console.error("Failed to determine onboarding step:", error); + router.replace("/"); + } + } + + redirectToStep(); + }, [router]); + + return ; } diff --git a/autogpt_platform/frontend/src/app/(no-navbar)/onboarding/reset/page.ts b/autogpt_platform/frontend/src/app/(no-navbar)/onboarding/reset/page.ts deleted file mode 100644 index a35d9aec86..0000000000 --- a/autogpt_platform/frontend/src/app/(no-navbar)/onboarding/reset/page.ts +++ /dev/null @@ -1,18 +0,0 @@ -import BackendAPI from "@/lib/autogpt-server-api"; -import { redirect } from "next/navigation"; - -export default async function OnboardingResetPage() { - const api = new BackendAPI(); - await api.updateUserOnboarding({ - completedSteps: [], - walletShown: false, - notified: [], - usageReason: null, - integrations: [], - otherIntegrations: "", - selectedStoreListingVersionId: null, - agentInput: {}, - onboardingAgentExecutionId: null, - }); - redirect("/onboarding/1-welcome"); -} diff --git a/autogpt_platform/frontend/src/app/(no-navbar)/onboarding/reset/page.tsx b/autogpt_platform/frontend/src/app/(no-navbar)/onboarding/reset/page.tsx new file mode 100644 index 0000000000..0113e67c17 --- /dev/null +++ b/autogpt_platform/frontend/src/app/(no-navbar)/onboarding/reset/page.tsx @@ -0,0 +1,33 @@ +"use client"; +import { postV1ResetOnboardingProgress } from "@/app/api/__generated__/endpoints/onboarding/onboarding"; +import { LoadingSpinner } from "@/components/atoms/LoadingSpinner/LoadingSpinner"; +import { useToast } from "@/components/molecules/Toast/use-toast"; +import { useRouter } from "next/navigation"; +import { useEffect } from "react"; + +export default function OnboardingResetPage() { + const { toast } = useToast(); + const router = useRouter(); + + useEffect(() => { + postV1ResetOnboardingProgress() + .then(() => { + toast({ + title: "Onboarding reset successfully", + description: "You can now start the onboarding process again", + variant: "success", + }); + + router.push("/onboarding"); + }) + .catch(() => { + toast({ + title: "Failed to reset onboarding", + description: "Please try again later", + variant: "destructive", + }); + }); + }, [toast, router]); + + return ; +} diff --git a/autogpt_platform/frontend/src/app/(no-navbar)/share/[token]/page.tsx b/autogpt_platform/frontend/src/app/(no-navbar)/share/[token]/page.tsx index a8fd85eeb0..1c37c6c72f 100644 --- a/autogpt_platform/frontend/src/app/(no-navbar)/share/[token]/page.tsx +++ b/autogpt_platform/frontend/src/app/(no-navbar)/share/[token]/page.tsx @@ -1,8 +1,8 @@ "use client"; -import React from "react"; -import { useParams } from "next/navigation"; -import { RunOutputs } from "@/app/(platform)/library/agents/[id]/components/AgentRunsView/components/SelectedRunView/components/RunOutputs"; +import { RunOutputs } from "@/app/(platform)/library/agents/[id]/components/NewAgentLibraryView/components/selected-views/SelectedRunView/components/RunOutputs"; +import { okData } from "@/app/api/helpers"; +import { useGetV1GetSharedExecution } from "@/app/api/__generated__/endpoints/default/default"; 
import { Card, CardContent, @@ -11,19 +11,18 @@ import { } from "@/components/__legacy__/ui/card"; import { Alert, AlertDescription } from "@/components/molecules/Alert/Alert"; import { InfoIcon } from "lucide-react"; -import { useGetV1GetSharedExecution } from "@/app/api/__generated__/endpoints/default/default"; +import { useParams } from "next/navigation"; export default function SharePage() { const params = useParams(); const token = params.token as string; const { - data: response, + data: executionData, isLoading: loading, error, - } = useGetV1GetSharedExecution(token); + } = useGetV1GetSharedExecution(token, { query: { select: okData } }); - const executionData = response?.status === 200 ? response.data : undefined; const is404 = !loading && !executionData; if (loading) { diff --git a/autogpt_platform/frontend/src/app/(platform)/admin/components/AdminImpersonationBanner.tsx b/autogpt_platform/frontend/src/app/(platform)/admin/components/AdminImpersonationBanner.tsx new file mode 100644 index 0000000000..9bcb5d8b9c --- /dev/null +++ b/autogpt_platform/frontend/src/app/(platform)/admin/components/AdminImpersonationBanner.tsx @@ -0,0 +1,36 @@ +"use client"; + +import { useAdminImpersonation } from "./useAdminImpersonation"; + +export function AdminImpersonationBanner() { + const { isImpersonating, impersonatedUserId, stopImpersonating } = + useAdminImpersonation(); + + if (!isImpersonating) { + return null; + } + + return ( +
      +
      +
      + + ⚠️ ADMIN IMPERSONATION ACTIVE + + + You are currently acting as user:{" "} + + {impersonatedUserId} + + +
      + +
      +
      + ); +} diff --git a/autogpt_platform/frontend/src/app/(platform)/admin/components/AdminImpersonationPanel.tsx b/autogpt_platform/frontend/src/app/(platform)/admin/components/AdminImpersonationPanel.tsx new file mode 100644 index 0000000000..8acfe55fbf --- /dev/null +++ b/autogpt_platform/frontend/src/app/(platform)/admin/components/AdminImpersonationPanel.tsx @@ -0,0 +1,196 @@ +"use client"; + +import { useState } from "react"; +import { UserMinus, UserCheck, CreditCard } from "@phosphor-icons/react"; +import { Card } from "@/components/atoms/Card/Card"; +import { Input } from "@/components/atoms/Input/Input"; +import { Button } from "@/components/atoms/Button/Button"; +import { Alert, AlertDescription } from "@/components/molecules/Alert/Alert"; +import { useAdminImpersonation } from "./useAdminImpersonation"; +import { useGetV1GetUserCredits } from "@/app/api/__generated__/endpoints/credits/credits"; + +export function AdminImpersonationPanel() { + const [userIdInput, setUserIdInput] = useState(""); + const [error, setError] = useState(""); + const { + isImpersonating, + impersonatedUserId, + startImpersonating, + stopImpersonating, + } = useAdminImpersonation(); + + // Demo: Use existing credits API - it will automatically use impersonation if active + const { + data: creditsResponse, + isLoading: creditsLoading, + error: creditsError, + } = useGetV1GetUserCredits(); + + function handleStartImpersonation() { + setError(""); + + if (!userIdInput.trim()) { + setError("Please enter a valid user ID"); + return; + } + + // Basic UUID validation + const uuidRegex = + /^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/i; + if (!uuidRegex.test(userIdInput.trim())) { + setError("Please enter a valid UUID format user ID"); + return; + } + + try { + startImpersonating(userIdInput.trim()); + setUserIdInput(""); + } catch (err) { + setError( + err instanceof Error ? err.message : "Failed to start impersonation", + ); + } + } + + function handleStopImpersonation() { + stopImpersonating(); + setError(""); + } + + return ( + +
      +
      +
+          Admin User Impersonation
+          Act on behalf of another user for debugging and support purposes
      + + {/* Security Warning */} + + + Security Notice: This feature is for admin + debugging and support only. All impersonation actions are logged for + audit purposes. + + + + {/* Current Status */} + {isImpersonating && ( + + + Currently impersonating:{" "} + + {impersonatedUserId} + + + + )} + + {/* Impersonation Controls */} +
      + setUserIdInput(e.target.value)} + disabled={isImpersonating} + error={error} + /> + +
      + + + {isImpersonating && ( + + )} +
      +
      + + {/* Demo: Live Credits Display */} + +
      +
+
+          Live Demo: User Credits
+
+        {creditsLoading ? (
+          Loading credits...
      + ) : creditsError ? ( + + + Error loading credits:{" "} + {creditsError && + typeof creditsError === "object" && + "message" in creditsError + ? String(creditsError.message) + : "Unknown error"} + + + ) : creditsResponse?.data ? ( +
+
+          {creditsResponse.data &&
+          typeof creditsResponse.data === "object" &&
+          "credits" in creditsResponse.data
+            ? String(creditsResponse.data.credits)
+            : "N/A"}
+          {" "}
+          credits available
+          {isImpersonating && (
+            (via impersonation)
+          )}
+        {isImpersonating
+          ? `Showing credits for user ${impersonatedUserId}`
+          : "Showing your own credits"}
+      ) : (
+        No credits data available
+      )}
      +
      + + {/* Instructions */} +
+          Instructions:
+          • Enter the UUID of the user you want to impersonate
+          • All existing API endpoints automatically work with impersonation
+          • A warning banner will appear while impersonation is active
+          • Impersonation persists across page refreshes in this session
      +
      +
      +
      + ); +} diff --git a/autogpt_platform/frontend/src/app/(platform)/admin/components/useAdminImpersonation.ts b/autogpt_platform/frontend/src/app/(platform)/admin/components/useAdminImpersonation.ts new file mode 100644 index 0000000000..330510be49 --- /dev/null +++ b/autogpt_platform/frontend/src/app/(platform)/admin/components/useAdminImpersonation.ts @@ -0,0 +1,75 @@ +"use client"; + +import { useState, useCallback } from "react"; +import { ImpersonationState } from "@/lib/impersonation"; +import { useToast } from "@/components/molecules/Toast/use-toast"; + +interface AdminImpersonationState { + isImpersonating: boolean; + impersonatedUserId: string | null; +} + +interface AdminImpersonationActions { + startImpersonating: (userId: string) => void; + stopImpersonating: () => void; +} + +type AdminImpersonationHook = AdminImpersonationState & + AdminImpersonationActions; + +export function useAdminImpersonation(): AdminImpersonationHook { + const [impersonatedUserId, setImpersonatedUserId] = useState( + ImpersonationState.get, + ); + const { toast } = useToast(); + + const isImpersonating = Boolean(impersonatedUserId); + + const startImpersonating = useCallback( + (userId: string) => { + if (!userId.trim()) { + toast({ + title: "User ID is required for impersonation", + variant: "destructive", + }); + return; + } + + try { + ImpersonationState.set(userId); + setImpersonatedUserId(userId); + window.location.reload(); + } catch (error) { + console.error("Failed to start impersonation:", error); + toast({ + title: "Failed to start impersonation", + description: error instanceof Error ? error.message : "Unknown error", + variant: "destructive", + }); + } + }, + [toast], + ); + + const stopImpersonating = useCallback(() => { + try { + ImpersonationState.clear(); + setImpersonatedUserId(null); + window.location.reload(); + } catch (error) { + console.error("Failed to stop impersonation:", error); + toast({ + title: "Failed to stop impersonation", + description: error instanceof Error ? error.message : "Unknown error", + variant: "destructive", + }); + } + }, [toast]); + + return { + isImpersonating, + impersonatedUserId, + startImpersonating, + stopImpersonating, + }; +} diff --git a/autogpt_platform/frontend/src/app/(platform)/admin/execution-analytics/components/AnalyticsResultsTable.tsx b/autogpt_platform/frontend/src/app/(platform)/admin/execution-analytics/components/AnalyticsResultsTable.tsx new file mode 100644 index 0000000000..56c52e2ceb --- /dev/null +++ b/autogpt_platform/frontend/src/app/(platform)/admin/execution-analytics/components/AnalyticsResultsTable.tsx @@ -0,0 +1,319 @@ +"use client"; + +import React, { useState } from "react"; +import { Button } from "@/components/atoms/Button/Button"; +import { Text } from "@/components/atoms/Text/Text"; +import { Badge } from "@/components/atoms/Badge/Badge"; +import { DownloadIcon, EyeIcon, CopyIcon } from "@phosphor-icons/react"; +import { useToast } from "@/components/molecules/Toast/use-toast"; +import type { ExecutionAnalyticsResponse } from "@/app/api/__generated__/models/executionAnalyticsResponse"; + +interface Props { + results: ExecutionAnalyticsResponse; +} + +export function AnalyticsResultsTable({ results }: Props) { + const [expandedRows, setExpandedRows] = useState>(new Set()); + const { toast } = useToast(); + + const createCopyableId = (value: string, label: string) => ( +
      { + navigator.clipboard.writeText(value); + toast({ + title: "Copied", + description: `${label} copied to clipboard`, + }); + }} + title={`Click to copy ${label.toLowerCase()}`} + > + {value.substring(0, 8)}... + +
      + ); + + const toggleRowExpansion = (execId: string) => { + const newExpanded = new Set(expandedRows); + if (newExpanded.has(execId)) { + newExpanded.delete(execId); + } else { + newExpanded.add(execId); + } + setExpandedRows(newExpanded); + }; + + const exportToCSV = () => { + const headers = [ + "Agent ID", + "Version", + "User ID", + "Execution ID", + "Status", + "Score", + "Summary Text", + "Error Message", + ]; + + const csvData = results.results.map((result) => [ + result.agent_id, + result.version_id.toString(), + result.user_id, + result.exec_id, + result.status, + result.score?.toString() || "", + `"${(result.summary_text || "").replace(/"/g, '""')}"`, // Escape quotes in summary + `"${(result.error_message || "").replace(/"/g, '""')}"`, // Escape quotes in error + ]); + + const csvContent = [ + headers.join(","), + ...csvData.map((row) => row.join(",")), + ].join("\n"); + + const blob = new Blob([csvContent], { type: "text/csv;charset=utf-8;" }); + const link = document.createElement("a"); + const url = URL.createObjectURL(blob); + + link.setAttribute("href", url); + link.setAttribute( + "download", + `execution-analytics-results-${new Date().toISOString().split("T")[0]}.csv`, + ); + link.style.visibility = "hidden"; + + document.body.appendChild(link); + link.click(); + document.body.removeChild(link); + }; + + const getStatusBadge = (status: string) => { + switch (status) { + case "success": + return Success; + case "failed": + return Failed; + case "skipped": + return Skipped; + default: + return {status}; + } + }; + + const getScoreDisplay = (score?: number) => { + if (score === undefined || score === null) return "—"; + + const percentage = Math.round(score * 100); + let colorClass = ""; + + if (score >= 0.8) colorClass = "text-green-600"; + else if (score >= 0.6) colorClass = "text-yellow-600"; + else if (score >= 0.4) colorClass = "text-orange-600"; + else colorClass = "text-red-600"; + + return {percentage}%; + }; + + return ( +
      + {/* Summary Stats */} +
      + + Analytics Summary + +
      +
      + + Total Executions: + + + {results.total_executions} + +
      +
      + + Processed: + + + {results.processed_executions} + +
      +
      + + Successful: + + + {results.successful_analytics} + +
      +
      + + Failed: + + + {results.failed_analytics} + +
      +
      + + Skipped: + + + {results.skipped_executions} + +
      +
      +
      + + {/* Export Button */} +
      + +
      + + {/* Results Table */} + {results.results.length > 0 ? ( +
      +
      + + + + + + + + + + + + + + {results.results.map((result) => ( + + + + + + + + + + + + {expandedRows.has(result.exec_id) && ( + + + + )} + + ))} + +
      + + Agent ID + + + + Version + + + + User ID + + + + Execution ID + + + + Status + + + + Score + + + + Actions + +
      + {createCopyableId(result.agent_id, "Agent ID")} + + {result.version_id} + + {createCopyableId(result.user_id, "User ID")} + + {createCopyableId(result.exec_id, "Execution ID")} + + {getStatusBadge(result.status)} + + {getScoreDisplay( + typeof result.score === "number" + ? result.score + : undefined, + )} + + {(result.summary_text || result.error_message) && ( + + )} +
      +
      + {result.summary_text && ( +
      + + Summary: + + + {result.summary_text} + +
      + )} + + {result.error_message && ( +
      + + Error: + + + {result.error_message} + +
      + )} +
      +
      +
      +
      + ) : ( +
      + + No executions were processed. + +
      + )} +
      + ); +} diff --git a/autogpt_platform/frontend/src/app/(platform)/admin/execution-analytics/components/ExecutionAnalyticsForm.tsx b/autogpt_platform/frontend/src/app/(platform)/admin/execution-analytics/components/ExecutionAnalyticsForm.tsx new file mode 100644 index 0000000000..5aced56090 --- /dev/null +++ b/autogpt_platform/frontend/src/app/(platform)/admin/execution-analytics/components/ExecutionAnalyticsForm.tsx @@ -0,0 +1,636 @@ +"use client"; + +import { useState, useEffect } from "react"; +import { + LineChart, + Line, + XAxis, + YAxis, + CartesianGrid, + Tooltip, + Legend, + ResponsiveContainer, +} from "recharts"; +import { Button } from "@/components/atoms/Button/Button"; +import { Input } from "@/components/__legacy__/ui/input"; +import { Label } from "@/components/__legacy__/ui/label"; +import { + Select, + SelectContent, + SelectItem, + SelectTrigger, + SelectValue, +} from "@/components/__legacy__/ui/select"; +import { Textarea } from "@/components/__legacy__/ui/textarea"; +import { Checkbox } from "@/components/__legacy__/ui/checkbox"; +import { Collapsible } from "@/components/molecules/Collapsible/Collapsible"; +import { useToast } from "@/components/molecules/Toast/use-toast"; +import { + usePostV2GenerateExecutionAnalytics, + useGetV2GetExecutionAnalyticsConfiguration, + useGetV2GetExecutionAccuracyTrendsAndAlerts, +} from "@/app/api/__generated__/endpoints/admin/admin"; +import type { ExecutionAnalyticsRequest } from "@/app/api/__generated__/models/executionAnalyticsRequest"; +import type { ExecutionAnalyticsResponse } from "@/app/api/__generated__/models/executionAnalyticsResponse"; +import type { AccuracyTrendsResponse } from "@/app/api/__generated__/models/accuracyTrendsResponse"; +import type { AccuracyLatestData } from "@/app/api/__generated__/models/accuracyLatestData"; + +// Use the generated type with minimal adjustment for form handling +interface FormData extends Omit { + created_after?: string; // Keep as string for datetime-local input + // All other fields use the generated types as-is +} +import { AnalyticsResultsTable } from "./AnalyticsResultsTable"; +import { okData } from "@/app/api/helpers"; + +export function ExecutionAnalyticsForm() { + const [results, setResults] = useState( + null, + ); + const [trendsData, setTrendsData] = useState( + null, + ); + const { toast } = useToast(); + + // State for accuracy trends query parameters + const [accuracyParams, setAccuracyParams] = useState<{ + graph_id: string; + user_id?: string; + days_back: number; + drop_threshold: number; + include_historical?: boolean; + } | null>(null); + + // Use the generated API client for accuracy trends (GET) + const { data: accuracyApiResponse, error: accuracyError } = + useGetV2GetExecutionAccuracyTrendsAndAlerts( + accuracyParams || { + graph_id: "", + days_back: 30, + drop_threshold: 10.0, + include_historical: false, + }, + { + query: { + enabled: !!accuracyParams?.graph_id, + }, + }, + ); + + // Update local state when data changes and handle success/error + useEffect(() => { + if (accuracyError) { + console.error("Failed to fetch trends:", accuracyError); + toast({ + title: "Trends Error", + description: + (accuracyError as any)?.message || "Failed to fetch accuracy trends", + variant: "destructive", + }); + return; + } + + const data = accuracyApiResponse?.data; + if (data && "latest_data" in data) { + setTrendsData(data); + + // Check for alerts + if (data.alert) { + toast({ + title: "🚨 Accuracy Alert Detected", + description: 
`${data.alert.drop_percent.toFixed(1)}% accuracy drop detected for this agent`, + variant: "destructive", + }); + } + } + }, [accuracyApiResponse, accuracyError, toast]); + + // Chart component for accuracy trends + function AccuracyChart({ data }: { data: AccuracyLatestData[] }) { + const chartData = data.map((item) => ({ + date: new Date(item.date).toLocaleDateString(), + "Daily Score": item.daily_score, + "3-Day Avg": item.three_day_avg, + "7-Day Avg": item.seven_day_avg, + "14-Day Avg": item.fourteen_day_avg, + })); + + return ( + + + + + + [`${Number(value).toFixed(2)}%`, ""]} + /> + + + + + + + + ); + } + + // Function to fetch accuracy trends using generated API client + const fetchAccuracyTrends = (graphId: string, userId?: string) => { + if (!graphId.trim()) return; + + setAccuracyParams({ + graph_id: graphId.trim(), + user_id: userId?.trim() || undefined, + days_back: 30, + drop_threshold: 10.0, + include_historical: showAccuracyChart, // Include historical data when chart is enabled + }); + }; + + // Fetch configuration from API + const { + data: config, + isLoading: configLoading, + error: configError, + } = useGetV2GetExecutionAnalyticsConfiguration({ query: { select: okData } }); + + const generateAnalytics = usePostV2GenerateExecutionAnalytics({ + mutation: { + onSuccess: (res) => { + if (res.status !== 200) { + throw new Error("Something went wrong!"); + } + const result = res.data; + setResults(result); + + toast({ + title: "Analytics Generated", + description: `Processed ${result.processed_executions} executions. ${result.successful_analytics} successful, ${result.failed_analytics} failed, ${result.skipped_executions} skipped.`, + variant: "default", + }); + }, + onError: (error: any) => { + console.error("Analytics generation error:", error); + + const errorMessage = + error?.message || error?.detail || "An unexpected error occurred"; + const isOpenAIError = errorMessage.includes( + "OpenAI API key not configured", + ); + + toast({ + title: isOpenAIError + ? "Analytics Generation Skipped" + : "Analytics Generation Failed", + description: isOpenAIError + ? "Analytics generation requires OpenAI configuration, but accuracy trends are still available above." + : errorMessage, + variant: isOpenAIError ? 
"default" : "destructive", + }); + }, + }, + }); + + const [formData, setFormData] = useState({ + graph_id: "", + model_name: "", // Will be set from config + batch_size: 10, // Fixed internal value + skip_existing: true, // Default to skip existing + system_prompt: "", // Will use config default when empty + user_prompt: "", // Will use config default when empty + }); + + // State for accuracy trends chart toggle + const [showAccuracyChart, setShowAccuracyChart] = useState(true); + + // Update form defaults when config loads + useEffect(() => { + if (config && !formData.model_name) { + setFormData((prev) => ({ + ...prev, + model_name: config.recommended_model, + })); + } + }, [config, formData.model_name]); + + const handleSubmit = async (e: React.FormEvent) => { + e.preventDefault(); + + if (!formData.graph_id.trim()) { + toast({ + title: "Validation Error", + description: "Graph ID is required", + variant: "destructive", + }); + return; + } + + setResults(null); + + // Fetch accuracy trends if chart is enabled + if (showAccuracyChart) { + fetchAccuracyTrends(formData.graph_id, formData.user_id || undefined); + } + + // Prepare the request payload + const payload: ExecutionAnalyticsRequest = { + graph_id: formData.graph_id.trim(), + model_name: formData.model_name, + batch_size: formData.batch_size, + skip_existing: formData.skip_existing, + }; + + if (formData.graph_version) { + payload.graph_version = formData.graph_version; + } + + if (formData.user_id?.trim()) { + payload.user_id = formData.user_id.trim(); + } + + if ( + formData.created_after && + typeof formData.created_after === "string" && + formData.created_after.trim() + ) { + payload.created_after = new Date(formData.created_after.trim()); + } + + if (formData.system_prompt?.trim()) { + payload.system_prompt = formData.system_prompt.trim(); + } + + if (formData.user_prompt?.trim()) { + payload.user_prompt = formData.user_prompt.trim(); + } + + generateAnalytics.mutate({ data: payload }); + }; + + const handleInputChange = (field: keyof FormData, value: any) => { + setFormData((prev: FormData) => ({ ...prev, [field]: value })); + }; + + // Show loading state while config loads + if (configLoading) { + return ( +
      +
      Loading configuration...
      +
      + ); + } + + // Show error state if config fails to load + if (configError || !config) { + return ( +
      +
      Failed to load configuration
      +
      + ); + } + + return ( +
      +
      +
      +
      + + handleInputChange("graph_id", e.target.value)} + placeholder="Enter graph/agent ID" + required + /> +
      + +
      + + + handleInputChange( + "graph_version", + e.target.value ? parseInt(e.target.value) : undefined, + ) + } + placeholder="Optional - leave empty for all versions" + /> +
      + +
      + + handleInputChange("user_id", e.target.value)} + placeholder="Optional - leave empty for all users" + /> +
      + +
      + + + handleInputChange("created_after", e.target.value) + } + /> +
      + +
      + + +
      +
      + + {/* Advanced Options Section - Collapsible */} +
+            Advanced Options
+        }
+        defaultOpen={false}
+        className="space-y-4"
+      >
    + {/* Skip Existing Checkbox */} +
    + + handleInputChange("skip_existing", checked) + } + /> + +
    + + {/* Show Accuracy Chart Checkbox */} +
    + setShowAccuracyChart(!!checked)} + /> + +
    + + {/* Custom System Prompt */} +
    + +