Compare commits


117 Commits

Author SHA1 Message Date
Swifty
87e3d7eaad updates 2025-12-20 19:16:22 +01:00
Swifty
974c14a7b9 fix(frontend): Use server-side URL for auth API in Docker
Auth API and other server-side routes were using NEXT_PUBLIC_AGPT_SERVER_URL
directly, which resolves to localhost:8006. When running in Docker, the
frontend container needs to reach the backend via the container name
(rest_server:8006) instead of localhost.

Updated all server-side auth routes to use environment.getAGPTServerApiUrl()
or environment.getAGPTServerBaseUrl() which correctly handle the Docker
environment by using AGPT_SERVER_URL when running server-side.

Files updated:
- src/lib/auth/api.ts
- src/app/api/auth/callback/reset-password/route.ts
- src/app/api/auth/user/route.ts
- src/app/(platform)/auth/callback/route.ts
- src/app/(platform)/auth/confirm/route.ts
- src/app/(platform)/profile/(user)/settings/.../actions.ts
- src/app/(platform)/reset-password/actions.ts
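
A minimal sketch of how such a helper can pick the right base URL (the function
names come from the commit message above; the exact environment-variable
fallback logic shown here is an assumption):

```ts
// Sketch of a server/client-aware base-URL helper. The function names come from
// this commit; the env-var selection logic is an assumption for illustration.
export function getAGPTServerBaseUrl(): string {
  const isServerSide = typeof window === "undefined";
  if (isServerSide && process.env.AGPT_SERVER_URL) {
    // Inside the Docker network the backend is reachable via its service name,
    // e.g. http://rest_server:8006, not via localhost.
    return process.env.AGPT_SERVER_URL;
  }
  // In the browser only NEXT_PUBLIC_* variables are available.
  return process.env.NEXT_PUBLIC_AGPT_SERVER_URL ?? "http://localhost:8006";
}

export function getAGPTServerApiUrl(): string {
  // Assumed convention: the REST API is served under /api on the base URL.
  return `${getAGPTServerBaseUrl()}/api`;
}
```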

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-20 01:01:47 +01:00
Swifty
af014ea19d refactor(ci): Simplify fullstack CI by removing backend dependencies
Since openapi.json is committed, we don't need to:
- Run Python/Poetry
- Start services (postgres, redis, rabbitmq)
- Run Prisma migrations
- Generate OpenAPI schema

The workflow now just uses the committed openapi.json to generate
TypeScript queries and run type checks.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-20 00:29:57 +01:00
Swifty
9ecf8bcb08 fmt 2025-12-20 00:25:43 +01:00
Swifty
a7a521cedd update openapi.json 2025-12-20 00:21:49 +01:00
Swifty
84244c0b56 fix(frontend): Handle 401 errors gracefully in onboarding provider
Silently handle 401 errors during onboarding initialization and step
completion. These errors are expected during login transitions when
auth cookies haven't propagated to the server-side proxy yet.
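
A minimal sketch of the pattern, with placeholder names (`fetchOnboardingState`,
`ApiError`) standing in for the provider's real API client:

```ts
// Sketch: tolerate 401s during onboarding initialization, rethrow everything else.
// All names below are placeholders for illustration.
type OnboardingState = { completedSteps: string[] };

class ApiError extends Error {
  constructor(public status: number, message: string) {
    super(message);
  }
}

declare function fetchOnboardingState(): Promise<OnboardingState>;

async function initOnboarding(): Promise<OnboardingState | null> {
  try {
    return await fetchOnboardingState();
  } catch (err) {
    if (err instanceof ApiError && err.status === 401) {
      // Expected during login transitions, before the auth cookies have
      // propagated to the server-side proxy; a later refetch will succeed.
      return null;
    }
    throw err;
  }
}
```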

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-20 00:19:37 +01:00
Swifty
9e83985b5b fix(ci): Add boolean argument to --pretty flag 2025-12-20 00:19:19 +01:00
Swifty
4ef3eab89d fix(ci): Use export-api-schema instead of running server
Generate OpenAPI schema directly using the CLI tool instead of
starting the REST server. This is simpler and avoids server
startup issues in CI.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-20 00:15:59 +01:00
Swifty
c68b53b6c1 fix(frontend): Fix Google OAuth callback URL and error handling
- Remove duplicate /api prefix in auth API client and callback route
- Add try-catch around onboarding check in OAuth callback to handle
  401 errors gracefully when cookies aren't available yet
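
For the second bullet, a rough sketch assuming a Next.js route handler
(`checkOnboarding` and the redirect targets are illustrative, not the real code):

```ts
// Sketch: don't let a transient 401 from the onboarding check break the callback.
import { NextResponse } from "next/server";

declare function checkOnboarding(): Promise<{ completed: boolean }>;

export async function GET(request: Request) {
  // ...exchange the OAuth code for a session and set auth cookies here...

  let onboardingCompleted = true;
  try {
    onboardingCompleted = (await checkOnboarding()).completed;
  } catch {
    // A 401 here usually just means the freshly set cookies haven't propagated
    // yet; fall through and let the app re-check onboarding after the redirect.
  }

  return NextResponse.redirect(
    new URL(onboardingCompleted ? "/" : "/onboarding", request.url),
  );
}
```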

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-20 00:13:47 +01:00
Swifty
23fb3ad8a4 fix(ci): Use correct poetry command 'rest' instead of 'serve'
The backend pyproject.toml defines 'rest' as the script name, not 'serve'.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-19 23:49:14 +01:00
Swifty
175ba13ebe added oauth login 2025-12-19 23:09:26 +01:00
Swifty
a415f471c6 add rust migration tool 2025-12-19 23:02:19 +01:00
Swifty
3dd6e5cb04 update openapi.json 2025-12-19 22:32:31 +01:00
Swifty
3f1e66b317 Merge branch 'native-auth' of github.com:Significant-Gravitas/AutoGPT into native-auth 2025-12-19 22:14:23 +01:00
Swifty
8f722bd9cd fix(backend): Resolve pyright type errors for Prisma TypedDict inputs
- Add `cast()` wrappers for Prisma create/upsert dict literals across 24 files
- Add bcrypt dependency (>=4.1.0,<5.0.0) for native auth password hashing
- Add type ignore for PostmarkClient.emails attribute (missing type stubs)
- Refactor execution.py update_node_execution_status to avoid invalid cast

Files affected:
- Auth: oauth_tool.py, api_key.py, oauth.py, service.py, email.py
- Credit tests: credit_*.py (7 files)
- Data layer: execution.py, human_review.py, onboarding.py
- Server: oauth_test.py, library/db.py, store/db.py, _test_data.py
- Tests: e2e_test_data.py, test_data_creator.py, test_data_updater.py

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-19 22:10:43 +01:00
Swifty
65026fc9d3 feat(backend): Add script to migrate large execution tables
Creates migrate_big_tables.sh to stream large tables that were
excluded from the initial migration:
- NotificationEvent (94 MB)
- AgentNodeExecutionKeyValueData (792 KB)
- AgentGraphExecution (1.3 GB)
- AgentNodeExecution (6 GB)
- AgentNodeExecutionInputOutput (30 GB)

Features:
- Streams directly from source to destination (no disk write)
- Migrates tables in size order (smallest first)
- Shows progress with row counts and timing
- Supports --table flag to migrate single table
- Supports --dry-run to preview

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-19 21:36:03 +01:00
Swifty
af98bc1081 Merge branch 'native-auth' of github.com:Significant-Gravitas/AutoGPT into native-auth 2025-12-19 21:31:09 +01:00
Swifty
e92459fc5f fix(backend): Improve migration script with nuke step and table exclusions
- Add Step 0 to nuke destination database with confirmation (type 'NUKE')
- Exclude large execution tables to speed up migration:
  - AgentGraphExecution (1.3 GB)
  - AgentNodeExecution (6 GB)
  - AgentNodeExecutionInputOutput (30 GB)
  - AgentNodeExecutionKeyValueData
  - NotificationEvent (94 MB)
- Fix set -e issue with parse_args validation
- Clean up script structure and documentation

Reduces migration from ~37 GB to ~544 MB for initial cutover.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-19 21:29:20 +01:00
Swifty
1775286f59 Merge dev into native-auth
Resolved conflicts:
- rest_api.py: Keep both native auth and oauth router imports
- e2e_test_data.py: Keep AuthService import for native auth
- auth/callback/route.ts: Keep native auth implementation
- login/page.tsx: Add useSearchParams import
- useLoginPage.ts: Combine broadcastLogin/validateSession with nextUrl support
- signup/page.tsx: Add useSearchParams import
- useSignupPage.ts: Combine broadcastLogin/validateSession with nextUrl support
- openapi.json: Keep native auth TokenResponse, add OAuth types from dev

Kept the Supabase files deleted (native-auth replaces Supabase).

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-19 21:25:22 +01:00
Swifty
f6af700f1a fix(backend): format migrate_supabase_users.py line 148
🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-19 21:14:22 +01:00
Reinier van der Leer
3dbc03e488 feat(platform): OAuth API & Single Sign-On (#11617)
We want to provide Single Sign-On for multiple AutoGPT apps that use the
Platform as their backend.

### Changes 🏗️

Backend:
- DB + logic + API for OAuth flow (w/ tests)
  - DB schema additions for OAuth apps, codes, and tokens
  - Token creation/validation/management logic
  - OAuth flow endpoints (app info, authorize, token exchange, introspect, revoke), sketched below
  - E2E OAuth API integration tests
  - Other OAuth-related endpoints (upload app logo, list owned apps, external `/me` endpoint)
    - App logo asset management
  - Adjust external API middleware to support auth with access token
  - Expired token clean-up job
    - Add `OAUTH_TOKEN_CLEANUP_INTERVAL_HOURS` setting (optional)
- `poetry run oauth-tool`: dev tool to test the OAuth flows and register new OAuth apps
- `poetry run export-api-schema`: dev tool to quickly export the OpenAPI schema (much quicker than spinning up the backend)

Frontend:
- Frontend UI for app authorization (`/auth/authorize`)
  - Re-redirect after login/signup
- Frontend flow to batch-auth integrations on request of the client app (`/auth/integrations/setup-wizard`)
  - Debug `CredentialInputs` component
- Add `/profile/oauth-apps` management page
- Add `isOurProblem` flag to `ErrorCard` to hide action buttons when the error isn't our fault
- Add `showTitle` flag to `CredentialsInput` to hide built-in title for layout reasons

DX:
- Add [API
guide](https://github.com/Significant-Gravitas/AutoGPT/blob/pwuts/sso/docs/content/platform/integrating/api-guide.md)
and [OAuth
guide](https://github.com/Significant-Gravitas/AutoGPT/blob/pwuts/sso/docs/content/platform/integrating/oauth-guide.md)
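
To make the flow above concrete, here is a rough TypeScript sketch of the
authorize → token-exchange → introspect round trip from a client app's
perspective. The endpoint paths, parameter names, and response shapes are
assumptions for illustration; the OAuth guide linked above is the
authoritative reference.

```ts
// Hypothetical client of the platform's OAuth API; paths and field names are
// assumptions made for this sketch, not the documented API surface.
const PLATFORM = "https://platform.example.com";
const CLIENT_ID = "my-client-id";
const CLIENT_SECRET = "my-client-secret";
const REDIRECT_URI = "https://client-app.example.com/callback";

// 1. Send the user to the authorization UI (the `/auth/authorize` page above).
export const authorizeUrl =
  `${PLATFORM}/auth/authorize?response_type=code` +
  `&client_id=${encodeURIComponent(CLIENT_ID)}` +
  `&redirect_uri=${encodeURIComponent(REDIRECT_URI)}`;

// 2. Back on REDIRECT_URI, exchange the returned code for an access token.
export async function exchangeCode(code: string) {
  const res = await fetch(`${PLATFORM}/api/oauth/token`, {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({
      grant_type: "authorization_code",
      code,
      client_id: CLIENT_ID,
      client_secret: CLIENT_SECRET,
      redirect_uri: REDIRECT_URI,
    }),
  });
  if (!res.ok) throw new Error(`Token exchange failed: ${res.status}`);
  return res.json();
}

// 3. Introspect a token to check whether it is still valid.
export async function introspectToken(token: string) {
  const res = await fetch(`${PLATFORM}/api/oauth/introspect`, {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({ token, client_id: CLIENT_ID, client_secret: CLIENT_SECRET }),
  });
  return res.json();
}
```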

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Manually verify test coverage of OAuth API tests
  - Test `/auth/authorize` using `poetry run oauth-tool test-server`
    - [x] Works
    - [x] Looks okay
- Test `/auth/integrations/setup-wizard` using `poetry run oauth-tool
test-server`
    - [x] Works
    - [x] Looks okay
  - Test `/profile/oauth-apps` page
    - [x] All owned OAuth apps show up
    - [x] Enabling/disabling apps works
- [ ] ~~Uploading logos works~~ can only test this once deployed to dev

#### For configuration changes:

- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
2025-12-19 21:05:16 +01:00
Swifty
a80b06d459 fix(backend): rename password-related log variables to avoid security scan false positives
Rename variables and log messages from 'password' to 'credentials' terminology
to prevent GitHub Advanced Security from flagging the logged counts as sensitive
data exposure. No actual passwords are logged - only user count statistics.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-19 20:59:30 +01:00
Swifty
17c9e7c8b4 fix(backend): format migrate_supabase_users.py for black compliance
🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-19 18:32:32 +01:00
Swifty
f83c9391c8 ci(platform): enable CI workflows for native-auth branch
Add native-auth branch to the trigger conditions for platform CI workflows
so that the CI runs on this feature branch.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-19 18:15:06 +01:00
Swifty
7a0a90e421 switch from supabase to native auth 2025-12-19 18:04:52 +01:00
Zamil Majdy
b76b5a37c5 fix(backend): Convert generic exceptions to appropriate typed exceptions (#11641)
## Summary
- Fix TimeoutError in AIShortformVideoCreatorBlock → BlockExecutionError
- Fix generic exceptions in SearchTheWebBlock → BlockExecutionError with
proper HTTP error handling
- Fix FirecrawlError 504 timeouts → BlockExecutionError with
service-specific messages
- Fix ReplicateBlock validation errors → BlockInputError for 422 status,
BlockExecutionError for others
- Add comprehensive HTTP error handling with
HTTPClientError/HTTPServerError classes
- Implement filename sanitization for "File name too long" errors
- Add proper User-Agent handling for Wikipedia API compliance
- Fix type conversion for string subclasses like ShortTextType
- Add support for moderation errors with proper context propagation

## Test plan
- [x] All modified blocks now properly categorize errors instead of
raising BlockUnknownError
- [x] Type conversion tests pass for ShortTextType and other string
subclasses
- [x] Formatting and linting pass
- [x] Exception constructors include required block_name and block_id
parameters

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-12-19 13:19:58 +01:00
Bently
eed07b173a fix(frontend/builder): automatically frame agent when opening in builder (#11640)
## Summary
- Fixed auto-frame timing in new builder - now calls `fitView` after
nodes are rendered instead of on mount
- Replaced manual viewport calculation in legacy builder with React
Flow's `fitView` for consistency
- Both builders now properly center and frame all blocks when opening an
agent
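
A minimal sketch of the timing fix, assuming React Flow's `useNodesInitialized`
and `useReactFlow` hooks from `@xyflow/react` (the actual builder code may wire
this differently):

```tsx
// Sketch: frame the graph once nodes have been measured, not on mount.
import { useEffect } from "react";
import { useNodesInitialized, useReactFlow } from "@xyflow/react";

export function useAutoFrame() {
  const nodesInitialized = useNodesInitialized();
  const { fitView } = useReactFlow();

  useEffect(() => {
    if (nodesInitialized) {
      // Nodes now have real dimensions, so fitView can compute a correct viewport.
      fitView({ padding: 0.2 });
    }
  }, [nodesInitialized, fitView]);
}
```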

  ## Test plan
- [x] Open an existing agent with multiple blocks in the new builder -
verify all blocks are visible and centered
- [x] Open an existing agent in the legacy builder - verify all blocks
are visible and centered
  - [x] Verify the manual "Frame" button still works correctly
2025-12-18 18:07:40 +00:00
Ubbe
c5e8b0b08f fix(frontend): modal hidden overflow (#11642)
## Changes 🏗️

### Before

<img width="300" height="528" alt="Screenshot 2025-12-18 at 19 07 37"
src="https://github.com/user-attachments/assets/6bedf02d-e1fd-4f87-956e-96fcd9996392"
/>


### After

<img width="300" height="576" alt="Screenshot 2025-12-18 at 19 02 12"
src="https://github.com/user-attachments/assets/f3175b94-447c-41b0-83cf-4334c02d1378"
/>


## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run locally and check crop
2025-12-18 19:51:24 +01:00
Ubbe
cd3e35df9e fix(frontend): small library/mobile improvements (#11626)
## Changes 🏗️

Adds the following improvements:

### Prevent credential row overflowing on mobile 📱 

**Before**

<img width="300" height="469" alt="Screenshot 2025-12-15 at 16 42 05"
src="https://github.com/user-attachments/assets/0d27394c-cec9-45a4-be82-804827343212"
/>

**After**

<img width="300" height="446" alt="Screenshot 2025-12-15 at 16 44 22"
src="https://github.com/user-attachments/assets/0f19e220-500d-4488-955e-612d38704727"
/>

_Just hide the ****** on mobile..._

### Make touch targets bigger on the mobile menu 📱

**Before**

<img width="300" height="607" alt="Screenshot 2025-12-15 at 16 58 28"
src="https://github.com/user-attachments/assets/762b7d4e-5269-41a4-88d2-ea745c50324e"
/>

Touch targets were quite small on mobile, especially for people with big
fingers...

**After**

<img width="300" height="589" alt="Screenshot 2025-12-15 at 16 54 02"
src="https://github.com/user-attachments/assets/beede0c4-5439-47a9-8bec-143b44306c6b"
/>

### New `<OverflowText />` component

<img width="600" height="551" alt="Screenshot 2025-12-15 at 16 48 20"
src="https://github.com/user-attachments/assets/a969223f-dd6a-497a-857e-18483aea28d7"
/>

A component that will render text like `<Text />`, but automatically
displays `...` and the full text content on a tooltip if it detects
there is no space for the full text length. Pretty useful for the type
of dashboard we are building, where sometimes titles or user-generated
content can be quite long, making the UI look whack.
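
A rough sketch of how such a component can detect that it ran out of space (the
real `<OverflowText />` presumably uses the design-system tooltip rather than
the native `title` attribute):

```tsx
// Sketch: show an ellipsis, and surface the full text, only when it actually
// overflows its container.
import { useLayoutEffect, useRef, useState } from "react";

export function OverflowTextSketch({ children }: { children: string }) {
  const ref = useRef<HTMLSpanElement>(null);
  const [truncated, setTruncated] = useState(false);

  useLayoutEffect(() => {
    const el = ref.current;
    // scrollWidth > clientWidth means the browser clipped the text.
    if (el) setTruncated(el.scrollWidth > el.clientWidth);
  }, [children]);

  // "truncate" = overflow-hidden + ellipsis + nowrap (Tailwind). The native
  // `title` attribute stands in for the design-system <Tooltip /> here.
  return (
    <span ref={ref} className="block truncate" title={truncated ? children : undefined}>
      {children}
    </span>
  );
}
```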

### Google Drive Picker

Only allow the removal of files when the picker is not in read-only mode.


## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Checkout branch locally
  - [x] Test the above
2025-12-18 19:33:30 +01:00
dependabot[bot]
4c474417bc chore(frontend/deps-dev): bump import-in-the-middle from 1.14.2 to 2.0.0 in /autogpt_platform/frontend (#11357)
Bumps
[import-in-the-middle](https://github.com/nodejs/import-in-the-middle)
from 1.14.2 to 2.0.0.

Release notes / changelog (condensed from the upstream release notes):
- 2.0.0 (2025-10-14): all modules running in the loader thread were converted
  to ESM (#210). The major bump was made out of an abundance of caution; there
  should be no breaking changes when using `import-in-the-middle/hook.mjs` or
  the exported `Hook` API, but referencing internal CJS files like `hook.js` no
  longer works.
- 1.15.0 (2025-10-09): compatibility with specifier imports (#211).
- 1.14.4 (2025-09-25): revert "use `createRequire` to load `hook.js`" (#205, #208).
- 1.14.3 (2025-09-24): use `createRequire` to load `hook.js` (#205).

Maintainer change: this version was pushed to npm by GitHub Actions, a new
releaser for import-in-the-middle.

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Lluis Agusti <hi@llu.lu>
Co-authored-by: Ubbe <hi@ubbe.dev>
2025-12-18 18:58:27 +01:00
dependabot[bot]
99e2261254 chore(frontend/deps-dev): bump eslint-config-next from 15.5.2 to 15.5.6 in /autogpt_platform/frontend (#11355)
Bumps
[eslint-config-next](https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next)
from 15.5.2 to 15.5.6.

Release notes (condensed; 15.5.4-15.5.6 are backport releases of bug fixes and
do not include all pending canary changes):
- 15.5.6: Turbopack no longer defines `process.cwd()` in node_modules (#83452).
- 15.5.5: code-frame split into a separate compiled package (#84238),
  deprecation warning for Runtime config (#84650), blocking `unstable_cache`
  revalidation during ISR revalidation (#84716), new
  `experimental.middlewareClientMaxBodySize` body-cloning limit (#84722), fix
  for missing `next/link` types with typedRoutes (#84779), docs improvements
  (#84334).
- 15.5.4: `onRequestError` is invoked when otel is enabled (#83343), devtools
  and error-overlay fixes (#83571, #83721, #83981), assorted Turbopack fixes
  (#83176, #82911, #83357, #82939, #84077), CNA linter preference (#83194), CI
  test timing via KV (#83745).

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Lluis Agusti <hi@llu.lu>
Co-authored-by: Ubbe <hi@ubbe.dev>
2025-12-18 18:55:57 +01:00
dependabot[bot]
cab498fa8c chore(deps): Bump actions/stale from 9 to 10 (#10871)
Bumps [actions/stale](https://github.com/actions/stale) from 9 to 10.

Release notes / changelog (condensed):
- v10.0.0 (breaking): the action now runs on Node 24 (actions/stale#1279);
  runners must be v2.327.1 or later. Adds a sort-by option (actions/stale#1254),
  plus dependency upgrades (actions/publish-immutable-action, undici,
  `@actions/cache`, form-data) and documentation updates (changelog, README
  permissions section).
- v9.1.0: documentation updates, a workflow for publishing releases to the
  immutable action package, and dependency updates (undici, actions/checkout,
  actions/publish-action, ts-jest, `@actions/core`, `@types/jest`,
  `@actions/cache`).
- For context, v9.0.0 made the action stateful (when operations-per-run is hit,
  the next run resumes from the first unprocessed issue) and moved the runtime
  to Node.js 20.

---

> [!NOTE]
> Update the stale-issues workflow to use `actions/stale@v10` instead of `v9`.

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
2025-12-18 17:34:04 +00:00
Bently
22078671df feat(frontend): increase file upload size limit to 256MB (#11634)
- Updated Next.js configuration to set body size limits for server
actions and API routes.
- Enhanced error handling in the API client to provide user-friendly
messages for file size errors.
- Added user-friendly error messages for 413 Payload Too Large responses
in API error parsing.

These changes ensure that file uploads are consistent with backend
limits and improve user experience during uploads.
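
A sketch of the error-mapping half (the helper name and message wording are
illustrative; the 256MB figure comes from this PR's Next.js config change):

```ts
// Sketch: turn a 413 Payload Too Large response into a user-friendly message
// instead of a generic "request failed" error.
const MAX_UPLOAD_MB = 256;

export async function parseApiError(res: Response): Promise<string> {
  if (res.status === 413) {
    return `File is too large. The maximum upload size is ${MAX_UPLOAD_MB}MB.`;
  }
  try {
    // Prefer a structured error body when the backend provides one.
    const body = await res.json();
    return body.detail ?? body.message ?? `Request failed with status ${res.status}`;
  } catch {
    return `Request failed with status ${res.status}`;
  }
}
```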

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Upload a file bigger than 10MB and it works
  - [x] Upload a file bigger than 256MB and you see an official error stating the max file size is 256MB
2025-12-18 17:29:20 +00:00
dependabot[bot]
0082a72657 chore(deps): Bump actions/labeler from 5 to 6 (#10868)
Bumps [actions/labeler](https://github.com/actions/labeler) from 5 to 6.

Release notes (condensed):
- v6.0.0 (breaking): the action now runs on Node 24 (actions/labeler#891);
  runners must be v2.327.1 or later. Also adds a workflow for publishing
  releases to the immutable action package (actions/labeler#802), a long list
  of dev-dependency upgrades (eslint, prettier, typescript, undici, braces,
  minimatch, `@actions/core`, `@octokit/request-error`, ts-jest, `@vercel/ncc`,
  and related ESLint/Jest tooling), and documentation improvements around
  `pull_request_target` usage and permissions.
- Notable commits: publish-action upgraded from 0.2.2 to 0.4.0
  (actions/labeler#901) and the Node 24 update (actions/labeler#891).
<li><a
href="3629d5568b"><code>3629d55</code></a>
Document update - permission section (<a
href="https://redirect.github.com/actions/labeler/issues/840">#840</a>)</li>
<li><a
href="d24f7f3731"><code>d24f7f3</code></a>
Bump ts-jest from 29.1.2 to 29.2.5 (<a
href="https://redirect.github.com/actions/labeler/issues/831">#831</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/actions/labeler/compare/v5...v6">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=actions/labeler&package-manager=github_actions&previous-version=5&new-version=6)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

You can trigger a rebase of this PR by commenting `@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

> **Note**
> Automatic rebases have been disabled on this pull request as it has
been open for over 30 days.

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-18 17:16:22 +00:00
Ubbe
9a1d940677 fix(frontend): onboarding run card (#11636)
## Changes 🏗️

### Before

<img width="300" height="730" alt="Screenshot 2025-12-18 at 17 16 57"
src="https://github.com/user-attachments/assets/f57efc83-aa55-4371-96af-e294cab9b03a"
/>

- extra label
- overflow

### After

<img width="300" height="642" alt="Screenshot 2025-12-18 at 17 41 53"
src="https://github.com/user-attachments/assets/50721293-c67c-438a-9c9d-ff7ffae11d14"
/>

## Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run locally
  - [x] Test the above
2025-12-18 18:08:19 +01:00
Abhimanyu Yadav
e640d36265 feat(frontend): Add special handling for AGENT block type in form and output handlers (#11595)
<!-- Clearly explain the need for these changes: -->

Agent blocks require different handling compared to standard blocks,
particularly for:
- Handle ID generation (using direct keys instead of generated IDs)
- Form data storage structure (nested under `inputs` key)
- Field ID parsing (filtering out schema path prefixes)

This PR implements special handling for `BlockUIType.AGENT` throughout
the form rendering and output handling components to ensure agents work
correctly in the flow editor.
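
A minimal sketch of that branching, assuming the `generateHandleId` helper and the `hardcodedValues` shape referenced in the changes below; the names and types here are illustrative, not the actual component code:

```ts
// Hedged sketch only; the real FormCreator/OutputHandler logic is more involved.
type BlockUIType = "AGENT" | string;

// Stand-in for the existing helper used by non-agent blocks (signature assumed).
function generateHandleId(key: string): string {
  return `handle-${key}`;
}

// Agent blocks use the raw key as the handle id; other blocks keep the generated one.
export function getHandleId(uiType: BlockUIType, key: string): string {
  return uiType === "AGENT" ? key : generateHandleId(key);
}

// Agent form data is nested under `inputs` instead of living directly on hardcodedValues.
export function buildHardcodedValues(
  uiType: BlockUIType,
  formData: Record<string, unknown>,
): Record<string, unknown> {
  return uiType === "AGENT" ? { inputs: formData } : formData;
}
```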

### Changes 🏗️

<!-- Concisely describe all of the changes made in this pull request:
-->

- **CustomNode.tsx**: Pass `uiType` prop to `OutputHandler` component
- **FormCreator.tsx**: 
- Store agent form data in `hardcodedValues.inputs` instead of directly
in `hardcodedValues`
- Extract initial values from `hardcodedValues.inputs` for agent blocks
- **OutputHandler.tsx**: 
  - Accept `uiType` prop
- Use direct key as handle ID for agents instead of
`generateHandleId(key)`
- **useMarketplaceAgentsContent.ts**: 
- Fetch full agent details using `getV2GetLibraryAgent` before adding to
builder
- Ensures agent schemas are properly populated (fixes issue where
marketplace endpoint returns empty schemas)
- **AnyOfField.tsx**: Generate handle IDs for agents by filtering out
"root" and "properties" from schema path
- **FieldTemplate.tsx**: Apply same handle ID generation logic for agent
fields

### Checklist 📋

#### For code changes:

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Add an agent block from marketplace and verify it renders
correctly
- [x] Connect inputs/outputs to/from an agent block and verify
connections work
- [x] Fill in form fields for an agent block and verify data persists
correctly
  - [x] Verify agent blocks work in both new and existing flows
- [x] Test that non-agent blocks still work as before (regression test)
2025-12-18 03:42:55 +00:00
Zamil Majdy
cc9179178f feat(block): Human in The Loop Block restructure (#11627)
## Summary

This PR refactors the Human-In-The-Loop (HITL) review system backend to
improve data handling and API consistency.

## Changes

### Backend Refactoring

#### 1. **Block Output Schema Update** (`human_in_the_loop.py`)
- Replaced single `reviewed_data` and `status` fields with separate
`approved_data` and `rejected_data` outputs
- This allows downstream blocks to handle approved vs rejected data
differently without checking status
- Simplified test outputs to match new schema

#### 2. **Review Data Handling** (`human_review.py`)
- Modified `get_or_create_human_review` to always return
`review.payload` regardless of approval status
- Previously returned `None` for rejected reviews, which could cause
data loss
- Now preserves reviewer-modified data for both approved and rejected
cases

#### 3. **API Route Simplification** (`review/routes.py`)
- Streamlined review decision processing logic using ternary operator
- Unified data handling for both approved and rejected reviews
- Maintains backward compatibility while improving code clarity

## Why These Changes?

- **Better Data Flow**: Separate output pins for approved/rejected data
make workflow design more intuitive
- **Data Preservation**: Rejected reviews can still pass modified data
downstream for logging or alternative processing
- **Cleaner API**: Simplified decision processing reduces code
complexity and potential bugs

## Testing

- All existing tests pass with updated schema
- Backward compatibility maintained for existing workflows
- Human review functionality verified in both approved and rejected
scenarios

## Related

This is the backend portion of changes from #11529, applied separately
to the `feat/hitl` branch.
2025-12-16 12:14:14 +00:00
Ubbe
e8d37ab116 feat(frontend): add nice scrollable tabs on Selected Run view (#11596)
## Changes 🏗️


https://github.com/user-attachments/assets/7e49ed5b-c818-4aa3-b5d6-4fa86fada7ee

When the content of Summary + Outputs + Inputs is long enough, it will
show in this new `<ScrollableTabs />` component, which auto-scrolls the
content as you click on a tab.

## Checklist 📋

### For code changes

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run the app locally
  - [x] Check the new page with scrollable tabs
2025-12-15 16:57:36 +07:00
Ubbe
7f7ef6a271 feat(frontend): improve agent inputs read-only (#11621)
## Changes 🏗️

The main goal of this PR is to improve how we display inputs used for a
given task.

Agent inputs can be of many types (text, long text, date, select, file,
etc.). Until now, we have tried to display them as text, which has not
always worked. Since we already have `<RunAgentInputs />`, which uses
form elements to display the inputs (_prefilled with data_), reusing it
looks better and is less buggy than plain text most of the time.

### Before

<img width="800" height="614" alt="Screenshot 2025-12-14 at 17 45 44"
src="https://github.com/user-attachments/assets/3d851adf-9638-46c1-adfa-b5e68dc78bb0"
/>

### After

<img width="800" height="708" alt="Screenshot 2025-12-14 at 17 45 21"
src="https://github.com/user-attachments/assets/367f32b4-2c30-4368-8d63-4cad06e32437"
/>

### Other improvements

- 🗑️  Removed `<EditInputsModal />`
- it is not used given the API does not support editing inputs for a
schedule yet
- Made `<InformationTooltip />` icon size customisable    

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run the app locally
- [x] Check the new view tasks use the form elements instead of text to
display inputs
2025-12-15 00:11:27 +07:00
Ubbe
aefac541d9 fix(frontend): force light mode for now (#11619)
## Changes 🏗️

We have the setup for light/dark mode support (Tailwind + `next-themes`),
but not yet the contributor capacity to make the app dark-mode ready.
First, we need to make it look good in light mode 😆

This disables the `dark:` classes in the code, to prevent the app from
looking broken when the user views it in a browser with a dark mode
preference:

### Before these changes

<img width="800" height="739" alt="Screenshot 2025-12-14 at 17 09 25"
src="https://github.com/user-attachments/assets/76333e03-930a-40b6-b91e-47ee01bf2c00"
/>

### After

<img width="800" height="722" alt="Screenshot 2025-12-14 at 16 55 46"
src="https://github.com/user-attachments/assets/34d85359-c68f-474c-8c66-2bebf28f923e"
/>

## Checklist 📋

### For code changes

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run the app on a browser with dark mode preference
  - [x] It still renders in light mode without broken styles
2025-12-15 00:10:36 +07:00
Krzysztof Czerwinski
ff5c8f324b Merge branch 'master' into dev 2025-12-12 22:26:39 +09:00
Krzysztof Czerwinski
f121a22544 hotfix: update next (#11612)
Update next to `15.4.10`
2025-12-12 13:42:36 +01:00
Zamil Majdy
71157bddd7 feat(backend): add agent mode support to SmartDecisionMakerBlock with autonomous tool execution loops (#11547)
## Summary

<img width="2072" height="1836" alt="image"
src="https://github.com/user-attachments/assets/9d231a77-6309-46b9-bc11-befb5d8e9fcc"
/>

**🚀 Major Feature: Agent Mode Support**

Adds autonomous agent mode to SmartDecisionMakerBlock, enabling it to
execute tools directly in loops until tasks are completed, rather than
just yielding tool calls for external execution.

##  **Key New Features**

### 🤖 **Agent Mode with Tool Execution Loops**
- **New `agent_mode_max_iterations` parameter** controls execution
behavior:
  - `0` = Traditional mode (single LLM call, yield tool calls)
  - `1+` = Agent mode with iteration limit
  - `-1` = Infinite agent mode (loop until finished)

### 🔄 **Autonomous Tool Execution**  
- **Direct tool execution** instead of yielding for external handling
- **Multi-iteration loops** with conversation state management
- **Automatic completion detection** when LLM stops making tool calls
- **Iteration limit handling** with graceful completion messages

### 🏗️ **Proper Database Operations**
- **Replace manual execution ID generation** with proper
`upsert_execution_input`/`upsert_execution_output`
- **Real NodeExecutionEntry objects** from database results
- **Proper execution status management**: QUEUED → RUNNING →
COMPLETED/FAILED

### 🔧 **Enhanced Type Safety**
- **Pydantic models** replace TypedDict: `ToolInfo` and
`ExecutionParams`
- **Runtime validation** with better error messages
- **Improved developer experience** with IDE support

## 🔧 **Technical Implementation**

### Agent Mode Flow:
```python
# Agent mode enabled with iterations
if input_data.agent_mode_max_iterations != 0:
    async for result in self._execute_tools_agent_mode(...):
        yield result  # "conversations", "finished"
    return

# Traditional mode (existing behavior)  
# Single LLM call + yield tool calls for external execution
```

### Tool Execution with Database Operations:
```python
# Before: Manual execution IDs
tool_exec_id = f"{node_exec_id}_tool_{sink_node_id}_{len(input_data)}"

# After: Proper database operations
node_exec_result, final_input_data = await db_client.upsert_execution_input(
    node_id=sink_node_id,
    graph_exec_id=execution_params.graph_exec_id,
    input_name=input_name, 
    input_data=input_value,
)
```

### Type Safety with Pydantic:
```python
# Before: Dict access prone to errors
execution_params["user_id"]  

# After: Validated model access
execution_params.user_id  # Runtime validation + IDE support
```

## 🧪 **Comprehensive Test Coverage**

- **Agent mode execution tests** with multi-iteration scenarios
- **Database operation verification** 
- **Type safety validation**
- **Backward compatibility** for traditional mode
- **Enhanced dynamic fields tests**

## 📊 **Usage Examples**

### Traditional Mode (Existing Behavior):
```python
SmartDecisionMakerBlock.Input(
    prompt="Search for keywords",
    agent_mode_max_iterations=0  # Default
)
# → Yields tool calls for external execution
```

### Agent Mode (New Feature):
```python  
SmartDecisionMakerBlock.Input(
    prompt="Complete this task using available tools",
    agent_mode_max_iterations=5  # Max 5 iterations
)
# → Executes tools directly until task completion or iteration limit
```

### Infinite Agent Mode:
```python
SmartDecisionMakerBlock.Input(
    prompt="Analyze and process this data thoroughly", 
    agent_mode_max_iterations=-1  # No limit, run until finished
)
# → Executes tools autonomously until LLM indicates completion
```

##  **Backward Compatibility**

- **Zero breaking changes** to existing functionality
- **Traditional mode remains default** (`agent_mode_max_iterations=0`)
- **All existing tests pass**
- **Same API for tool definitions and execution**

This transforms the SmartDecisionMakerBlock from a simple tool call
generator into a powerful autonomous agent capable of complex multi-step
task execution! 🎯

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-12-12 09:58:06 +00:00
Bentlybro
152e747ea6 Merge branch 'master' into dev 2025-12-11 14:40:01 +00:00
Reinier van der Leer
4d4741d558 fix(frontend/library): Transition from empty tasks view on task init (#11600)
- Resolves #11599

### Changes 🏗️

- Manually update item counts when initiating a task from `EmptyTasks`
view
- Other improvements made while debugging

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] `NewAgentLibraryView` transitions to full layout when a first task
is created
- [x] `NewAgentLibraryView` transitions to full layout when a first
trigger is set up
2025-12-11 11:13:53 +00:00
Krzysztof Czerwinski
bd37fe946d feat(platform): Builder search history (#11457)
Preserve user searches in the new builder and cache search results for
more efficiency.
Search is saved, so the user can see their previous searches.

### Changes 🏗️

- Add `BuilderSearch` column&migration to save user search (with all
filters)
- Builder `db.py` now caches all search results using `@cached` and
returns paginated results, so subsequent pages are returned much more quickly
- Score and sort results
- Update models&routes
- Update frontend, so it works properly with modified endpoints
- Frontend: store `searchId` and use it for subsequent searches, so we
don't save partial searches (e.g. "b", "bl", ..., "block"). The search id
is reset when the user clears the search field.
- Add clickable chips to the Suggestions builder tab
- Add `HorizontalScroll` component (chips use it)

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Search works and is cached
  - [x] Search sorts results
  - [x] Searches are preserved properly

---------

Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
2025-12-10 17:32:17 +00:00
Nicholas Tindle
7ff282c908 fix(frontend): Disable single dollar sign LaTeX mode in markdown rend… (#11598)
Single dollar signs ($10, $variable) are commonly used in content and
were being incorrectly interpreted as inline LaTeX math delimiters. This
change disables that behavior while keeping double dollar sign ($$...$$)
math blocks working.
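
A sketch of the relevant configuration, assuming a `react-markdown`-based renderer; the actual `MarkdownRenderer.tsx` wiring may differ:

```tsx
import ReactMarkdown from "react-markdown";
import remarkMath from "remark-math";

export function MarkdownRenderer({ content }: { content: string }) {
  return (
    <ReactMarkdown
      // singleDollarTextMath: false -> "$100" stays plain text,
      // while $$...$$ display math is still parsed (and can be rendered
      // by a rehype plugin such as rehype-katex).
      remarkPlugins={[[remarkMath, { singleDollarTextMath: false }]]}
    >
      {content}
    </ReactMarkdown>
  );
}
```
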
## Changes 🏗️
• Configure remarkMath plugin with singleDollarTextMath: false in
MarkdownRenderer.tsx
• Double dollar sign display math ($$...$$) continues to work as
expected
• Single dollar signs are no longer interpreted as inline math
delimiters
## Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Verify content with dollar amounts (e.g., "$100") renders as plain text
  - [x] Verify double dollar sign math blocks ($$x^2$$) still render as LaTeX
  - [x] Verify other markdown features (code blocks, tables, links) still work correctly

Co-authored-by: Claude <noreply@anthropic.com>
2025-12-10 17:10:52 +00:00
Reinier van der Leer
117bb05438 fix(frontend/library): Fix trigger UX flows in v3 library (#11589)
- Resolves #11586
- Follow-up to #11580

### Changes 🏗️

- Fix logic to include manual triggers as a possibility
- Fix input render logic to use trigger setup schema if applicable
- Fix rendering payload input for externally triggered runs
- Amend `RunAgentModal` to load preset inputs+credentials if selected
- Amend `SelectedTemplateView` to use modified input for run (if
applicable)
- Hide non-applicable buttons in `SelectedRunView` for externally
triggered runs
- Implement auto-navigation to `SelectedTriggerView` on trigger setup

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Can set up manual triggers
    - [x] Navigates to trigger view after setup
  - [x] Can set up automatic triggers
  - [x] Can create templates from runs
  - [x] Can run templates
  - [x] Can run templates with modified input
2025-12-10 15:52:02 +00:00
Nicholas Tindle
979d7c3b74 feat(blocks): Add 4 new GitHub webhook trigger blocks (#11588)
I want to be able to automate some actions on social media or our
Discord server in response to GitHub events.


<!-- Clearly explain the need for these changes: -->

### Changes 🏗️
Add trigger blocks for common GitHub events to enable OSS automation:
- GithubReleaseTriggerBlock: Trigger on release events (published, etc.)
- GithubStarTriggerBlock: Trigger on star events for milestone
celebrations
- GithubIssuesTriggerBlock: Trigger on issue events for
triage/notifications
- GithubDiscussionTriggerBlock: Trigger on discussion events for Q&A
sync
<!-- Concisely describe all of the changes made in this pull request:
-->

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Test Stars
  - [x] Test Discussions
  - [x] Test Issues
  - [x] Test Release

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-12-09 21:25:43 +00:00
Nicholas Tindle
95200b67f8 feat(blocks): add many new spreadsheet blocks (#11574)
<!-- Clearly explain the need for these changes: -->
We have lots we want to do with Google Sheets, and we don't want a lack
of blocks to be a limiter, so I pre-built a lot of blocks!

### Changes 🏗️
Adds 24 new blocks for google sheets (tested and working)
```
|  #  | Block                                     | Description                            |
|-----|-------------------------------------------|----------------------------------------|
|  1  | GoogleSheetsFilterRowsBlock               | Filter rows based on column conditions |
|  2  | GoogleSheetsLookupRowBlock                | VLOOKUP-style row lookup               |
|  3  | GoogleSheetsDeleteRowsBlock               | Delete rows from a sheet               |
|  4  | GoogleSheetsGetColumnBlock                | Get data from a specific column        |
|  5  | GoogleSheetsSortBlock                     | Sort sheet data                        |
|  6  | GoogleSheetsGetUniqueValuesBlock          | Get unique values from a column        |
|  7  | GoogleSheetsInsertRowBlock                | Insert rows into a sheet               |
|  8  | GoogleSheetsAddColumnBlock                | Add a new column                       |
|  9  | GoogleSheetsGetRowCountBlock              | Get the number of rows                 |
| 10  | GoogleSheetsRemoveDuplicatesBlock         | Remove duplicate rows                  |
| 11  | GoogleSheetsUpdateRowBlock                | Update an existing row                 |
| 12  | GoogleSheetsGetRowBlock                   | Get a specific row by index            |
| 13  | GoogleSheetsDeleteColumnBlock             | Delete a column                        |
| 14  | GoogleSheetsCreateNamedRangeBlock         | Create a named range                   |
| 15  | GoogleSheetsListNamedRangesBlock          | List all named ranges                  |
| 16  | GoogleSheetsAddDropdownBlock              | Add dropdown validation to cells       |
| 17  | GoogleSheetsCopyToSpreadsheetBlock        | Copy sheet to another spreadsheet      |
| 18  | GoogleSheetsProtectRangeBlock             | Protect a range from editing           |
| 19  | GoogleSheetsExportCsvBlock                | Export sheet as CSV                    |
| 20  | GoogleSheetsImportCsvBlock                | Import CSV data                        |
| 21  | GoogleSheetsAddNoteBlock                  | Add notes to cells                     |
| 22  | GoogleSheetsGetNotesBlock                 | Get notes from cells                   |
| 23  | GoogleSheetsShareSpreadsheetBlock         | Share spreadsheet with users           |
| 24  | GoogleSheetsSetPublicAccessBlock          | Set public access permissions          |
```


<!-- Concisely describe all of the changes made in this pull request:
-->

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Tested using the attached agent 
[super test for
spreadsheets_v9.json](https://github.com/user-attachments/files/24041582/super.test.for.spreadsheets_v9.json)


<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> Introduces a large suite of Google Sheets blocks for row/column ops,
filtering/sorting/lookup, CSV import/export, notes, named ranges,
protections, sheet copy, and sharing/public access, plus refactors
append to a simpler single-row append.
> 
> - **Google Sheets blocks (new)**:
> - **Data ops**: `GoogleSheetsFilterRowsBlock`,
`GoogleSheetsLookupRowBlock`, `GoogleSheetsDeleteRowsBlock`,
`GoogleSheetsGetColumnBlock`, `GoogleSheetsSortBlock`,
`GoogleSheetsGetUniqueValuesBlock`, `GoogleSheetsInsertRowBlock`,
`GoogleSheetsAddColumnBlock`, `GoogleSheetsGetRowCountBlock`,
`GoogleSheetsRemoveDuplicatesBlock`, `GoogleSheetsUpdateRowBlock`,
`GoogleSheetsGetRowBlock`, `GoogleSheetsDeleteColumnBlock`.
> - **Named ranges & validation**: `GoogleSheetsCreateNamedRangeBlock`,
`GoogleSheetsListNamedRangesBlock`, `GoogleSheetsAddDropdownBlock`.
> - **Sheet/admin**: `GoogleSheetsCopyToSpreadsheetBlock`,
`GoogleSheetsProtectRangeBlock`.
> - **CSV & notes**: `GoogleSheetsExportCsvBlock`,
`GoogleSheetsImportCsvBlock`, `GoogleSheetsAddNoteBlock`,
`GoogleSheetsGetNotesBlock`.
> - **Sharing**: `GoogleSheetsShareSpreadsheetBlock`,
`GoogleSheetsSetPublicAccessBlock`.
> - **Refactor**:
> - Rename and simplify append: `GoogleSheetsAppendRowBlock` (replaces
multi-row/dict input with single `row`), fixed insert option to
`INSERT_ROWS` and streamlined response.
> - **Utilities/Enums**:
> - Add helpers (`_column_letter_to_index`, `_index_to_column_letter`,
`_apply_filter`) and enums (`FilterOperator`, `SortOrder`, `ShareRole`,
`PublicAccessRole`).
> - Drive/Sheets service builders and file validation reused across new
blocks.
> 
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
6e9e2f4024. This will update automatically
on new commits. Configure
[here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->

---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Cursor Agent <cursoragent@cursor.com>
2025-12-09 17:28:22 +00:00
Abhimanyu Yadav
f8afc6044e fix(frontend): prevent file upload buttons from triggering form submission (#11576)
<!-- Clearly explain the need for these changes: -->

In the File Widget, the upload button was incorrectly behaving like a
submit button. When users clicked it, the rjsf library immediately
triggered form validation and displayed validation errors, even though
the user was only trying to upload a file.

This happened because HTML buttons inside a form default to
`type="submit"`, which triggers form submission on click. By explicitly
setting `type="button"` on all file-related buttons, we prevent them
from submitting the form while still allowing them to trigger the file
input dialog.
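
A minimal sketch of the fix, not the actual File Widget markup; the component and ref names here are placeholders:

```tsx
import { useRef } from "react";

export function FilePickerButtons({ onClear }: { onClear: () => void }) {
  const fileInputRef = useRef<HTMLInputElement>(null);

  return (
    <>
      <input ref={fileInputRef} type="file" hidden />
      {/* type="button" keeps these clicks from submitting the enclosing <form> */}
      <button type="button" onClick={() => fileInputRef.current?.click()}>
        Browse File
      </button>
      <button type="button" onClick={onClear}>
        Clear
      </button>
    </>
  );
}
```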

### Changes 🏗️

<!-- Concisely describe all of the changes made in this pull request:
-->

- Added `type="button"` attribute to the clear button in the compact
variant
- Added `type="button"` attribute to the upload button in the compact
variant
- Added `type="button"` attribute to the "Browse File" button in the
default variant

This ensures that clicking any of these buttons only triggers the
intended file selection/upload action without causing unwanted form
validation or submission.

### Checklist 📋

#### For code changes:

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Tested clicking the upload button in a form with File Widget -
form should not submit or show validation errors
- [x] Tested clicking the clear button - should clear the file without
triggering form validation
- [x] Tested clicking the "Browse File" button - should open file dialog
without triggering form validation
- [x] Verified file upload functionality still works correctly after
selecting a file
2025-12-09 15:54:17 +00:00
Abhimanyu Yadav
7edf01777e fix(frontend): sync flowVersion to URL when loading graph from Library (#11585)
<!-- Clearly explain the need for these changes: -->

When opening a graph from the Library, the `flowVersion` query parameter
was not being set in the URL. This caused issues when the graph data
didn't contain an internal `graphVersion`, resulting in the builder
failing and graphs breaking when running.

The `useGetV1GetSpecificGraph` hook relies on the `flowVersion` query
parameter to fetch the correct graph version. Without it being properly
set in the URL, the graph loading logic fails when the version
information is missing from the graph data itself.

### Changes 🏗️

- Added `setQueryStates` to the `useQueryStates` hook return value in
`useFlow.ts`
- Added logic to sync `flowVersion` to the URL query parameters when a
graph is loaded
- When `graph.version` is available, it now updates the `flowVersion`
query parameter in the URL (defaults to `1` if version is undefined)

This ensures the URL stays in sync with the loaded graph's version,
preventing builder failures and execution issues.
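
A hedged sketch of that sync, assuming the `useQueryStates` hook comes from `nuqs` with an integer parser; the real `useFlow.ts` logic differs:

```ts
import { useEffect } from "react";
import { parseAsInteger, useQueryStates } from "nuqs";

export function useSyncFlowVersion(graph?: { version?: number }) {
  const [, setQueryStates] = useQueryStates({ flowVersion: parseAsInteger });

  useEffect(() => {
    if (!graph) return;
    // Keep the URL in sync with the loaded graph, defaulting to version 1.
    setQueryStates({ flowVersion: graph.version ?? 1 });
  }, [graph, setQueryStates]);
}
```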

### Checklist 📋

#### For code changes:

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Open a graph from the Library that doesn't have a version in the
URL
- [x] Verify that `flowVersion` is automatically added to the URL query
parameters
  - [x] Verify that the graph loads correctly and can be executed
- [x] Verify that graphs with existing `flowVersion` in URL continue to
work correctly
- [x] Verify that graphs opened from Library with version information
sync correctly
2025-12-09 15:26:45 +00:00
Ubbe
c9681f5d44 fix(frontend): library page adjustments (#11587)
## Changes 🏗️

### Adjust layout and styles on mobile 📱 

<img width="448" height="843" alt="Screenshot 2025-12-09 at 22 53 14"
src="https://github.com/user-attachments/assets/159bdf4f-e6b2-42f5-8fdf-25f8a62c62d1"
/>

### Make the sidebar cards have contextual actions

<img width="486" height="243" alt="Screenshot 2025-12-09 at 22 53 27"
src="https://github.com/user-attachments/assets/2f530168-3217-47c4-b08d-feccbb9e9152"
/>

Depending on the card type, different types of actions are shown...

### Make buttons in "About agent" card do something

<img width="344" height="346" alt="Screenshot 2025-12-09 at 22 54 01"
src="https://github.com/user-attachments/assets/47181f80-1f68-4ef1-aecc-bbadc7cc9c44"
/>

### Other

- Hide `Schedule` button for agents with trigger run type
- Adjust secondary button background colour...
- Make drawer content scrollable on mobile 

## Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run locally and test the above

Co-authored-by: Abhimanyu Yadav <122007096+Abhi1992002@users.noreply.github.com>
2025-12-09 23:17:44 +07:00
Abhimanyu Yadav
1305325813 fix(frontend): preserve button shape in credential select when content is long (#11577)
<!-- Clearly explain the need for these changes: -->

When the content inside the credential select dropdown becomes too long,
the adjacent link buttons lose their rounded shape and appear squarish.
This happens when the text stretches the container or affects the layout
of the buttons.

The issue occurs because the button's width can shrink below its
intended size when the flex container is stretched by long credential
names. By adding an explicit minimum width constraint with `!min-w-8`,
we ensure the button maintains its proper dimensions and rounded
appearance regardless of the select dropdown's content length.
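
A minimal sketch of the layout, not the actual `SelectCredential` markup, showing why the explicit minimum width keeps the icon button round:

```tsx
import type { ReactNode } from "react";

export function CredentialRow({ select }: { select: ReactNode }) {
  return (
    <div className="flex items-center gap-2">
      {/* Long credential names can stretch this flex child... */}
      <div className="min-w-0 flex-1 truncate">{select}</div>
      {/* ...so the icon button gets an explicit minimum width to avoid being squashed. */}
      <button type="button" className="!min-w-8 h-8 w-8 rounded-full border">
        ↗
      </button>
    </div>
  );
}
```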

### Changes 🏗️

<!-- Concisely describe all of the changes made in this pull request:
-->

- Added `!min-w-8` to the external link button's className in
`SelectCredential` component to enforce a minimum width of 2rem (8 *
0.25rem)
- This ensures the button maintains its rounded shape even when the
adjacent select dropdown contains long credential names

### Checklist 📋

#### For code changes:

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Tested credential select with short credential names - button
should maintain rounded shape
- [x] Tested credential select with very long credential names (e.g.,
long provider names, usernames, and hosts) - button should maintain
rounded shape
2025-12-09 15:26:22 +00:00
Abhimanyu Yadav
4f349281bd feat(frontend): switch copied graph storage from local storage to clipboard (#11578)
### Changes 🏗️

This PR migrates the copy/paste functionality for graph nodes and edges
from local storage to the Clipboard API. This change addresses storage
limitations and enables cross-tab copying.


https://github.com/user-attachments/assets/6ef55713-ca5b-4562-bb54-4c12db241d30


**Key changes:**
- Replaced `localStorage` with `navigator.clipboard` API for copy/paste
operations
- Added `CLIPBOARD_PREFIX` constant (`"autogpt-flow-data:"`) to identify
our clipboard data and prevent conflicts with other clipboard content
- Added toast notifications to provide user feedback when copying nodes
- Added error handling for clipboard read/write operations with console
error logging
- Removed dependency on `@/services/storage/local-storage` for copied
flow data
- Updated `useCopyPaste` hook to use async clipboard operations with
proper promise handling
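
A hedged sketch of the copy/read path described above; `CLIPBOARD_PREFIX` comes from the PR, while the data shape and error handling here are illustrative:

```ts
const CLIPBOARD_PREFIX = "autogpt-flow-data:";

type FlowClipboardData = { nodes: unknown[]; edges: unknown[] };

export async function copyFlowSelection(data: FlowClipboardData): Promise<void> {
  try {
    await navigator.clipboard.writeText(CLIPBOARD_PREFIX + JSON.stringify(data));
  } catch (error) {
    console.error("Failed to copy flow data to clipboard", error);
  }
}

export async function readFlowSelection(): Promise<FlowClipboardData | null> {
  try {
    const text = await navigator.clipboard.readText();
    // Ignore clipboard content that wasn't produced by the flow editor.
    if (!text.startsWith(CLIPBOARD_PREFIX)) return null;
    return JSON.parse(text.slice(CLIPBOARD_PREFIX.length)) as FlowClipboardData;
  } catch (error) {
    console.error("Failed to read flow data from clipboard", error);
    return null;
  }
}
```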

**Benefits:**
-  Removes local storage size limitations (5-10MB)
-  Enables copying nodes between browser tabs/windows
-  Provides better user feedback through toast notifications
-  More standard approach using native browser Clipboard API

### Checklist 📋

#### For code changes:

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Select one or more nodes in the flow editor
  - [x] Press `Ctrl+C` (or `Cmd+C` on Mac) to copy nodes
- [x] Verify toast notification appears showing "Copied successfully"
with node count
  - [x] Press `Ctrl+V` (or `Cmd+V` on Mac) to paste nodes
  - [x] Verify nodes are pasted at viewport center with new unique IDs
  - [x] Verify edges between copied nodes are also pasted correctly
- [x] Test copying nodes in one browser tab and pasting in another tab
(should work)
- [x] Test copying non-flow data (e.g., text) and verify paste doesn't
interfere with flow editor
2025-12-09 15:26:12 +00:00
Ubbe
6c43b34dee feat(frontend): add templates/triggers to new Library page view (#11580)
## Changes 🏗️

Add Templates to the new Agent Library page:

<img width="800" height="889" alt="Screenshot 2025-12-09 at 14 10 01"
src="https://github.com/user-attachments/assets/85f0d478-f3f9-4ccf-81df-b9a7f4ae8849"
/>

- You can create a template from a run ( new action button )
- Templates are listed and can be selected on the sidebar
- When viewing a template, you can edit it, create a task or delete it

Add Triggers to the new Agent Library page:

<img width="800" height="836" alt="Screenshot 2025-12-09 at 14 10 43"
src="https://github.com/user-attachments/assets/c722f807-c72f-4a4d-8778-e36bea203f6e"
/>

- When an agent contains a trigger block, on the modal it will create a
trigger
- When there are triggers, they are listed on the sidebar
- A trigger can be viewed and edited

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run the new page locally
2025-12-09 18:30:13 +07:00
Zamil Majdy
c1e21d07e6 feat(platform): add execution accuracy alert system (#11562)
## Summary

<img width="1263" height="883" alt="image"
src="https://github.com/user-attachments/assets/98d4f449-1897-4019-a599-846c27df4191"
/>
<img width="398" height="190" alt="image"
src="https://github.com/user-attachments/assets/0138ac02-420d-4f96-b980-74eb41e3c968"
/>

- Add execution accuracy monitoring with moving averages and Discord
alerts
- Dashboard visualization for accuracy trends and alert detection  
- Hourly monitoring for marketplace agents (≥10 executions in 30 days)
- Generated API client integration with type-safe models

## Features
- **Moving Average Analysis**: 3-day vs 7-day comparison with
configurable thresholds
- **Discord Notifications**: Hourly alerts for accuracy drops ≥10%
- **Dashboard UI**: Real-time trends visualization with alert status
- **Type Safety**: Generated API hooks and models throughout
- **Error Handling**: Graceful OpenAI configuration handling
- **PostgreSQL Optimization**: Window functions for efficient trend
queries
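
For illustration, a language-agnostic version of the 3-day vs 7-day comparison, written here in TypeScript; the real implementation lives in the backend, and the exact threshold semantics (relative vs absolute drop) are an assumption:

```ts
function movingAverage(values: number[], window: number): number {
  const slice = values.slice(-window);
  return slice.reduce((sum, v) => sum + v, 0) / slice.length;
}

// dailyAccuracy holds one accuracy value per day in [0, 1], most recent last.
export function shouldAlert(dailyAccuracy: number[], threshold = 0.1): boolean {
  if (dailyAccuracy.length < 7) return false; // not enough history to compare
  const shortTerm = movingAverage(dailyAccuracy, 3);
  const longTerm = movingAverage(dailyAccuracy, 7);
  // Alert when the recent average has dropped by the threshold or more.
  return longTerm - shortTerm >= threshold;
}
```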

## Test plan
- [x] Backend accuracy monitoring logic tested with sample data
- [x] Frontend components using generated API hooks (no manual fetch)
- [x] Discord notification integration working
- [x] Admin authentication and authorization working
- [x] All formatting and linting checks passing
- [x] Error handling for missing OpenAI configuration
- [x] Test data available with `test-accuracy-agent-001`

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-12-08 19:28:57 +00:00
Ubbe
aaa8dcc5a8 feat(frontend): design updates run agent modal (#11570)
## Changes 🏗️

<img width="500" height="927" alt="Screenshot 2025-12-08 at 16 30 04"
src="https://github.com/user-attachments/assets/97170302-50ae-4750-99e1-275861ee9e08"
/>

<img width="500" height="897" alt="Screenshot 2025-12-08 at 16 30 10"
src="https://github.com/user-attachments/assets/0a7ac828-ebea-40de-a19e-fbf77b0c2eb7"
/>

Update the new **Run Agent Modal** as a new view. From now on, sections
( inputs, credentials, etc... ) are enclosed. Refactor all the existing
code to be more readable and [adhere to code
conventions](https://github.com/Significant-Gravitas/AutoGPT/blob/master/autogpt_platform/frontend/CONTRIBUTING.md).

### Other improvements

- Refactor `<CredentialInputs />`:
  - to adhere to code conventions
  - to show all existing credentials for a given provider
  - to allow deletion of a given credential
  - to allow to select/switch credentials for a given task
- Run Graph Errors:
- when the run fails, show the errors nicely on the toast and link to
the builder for further debugging
- Design System:
  - secondary button colour has been updated 
  - labels size for form fields has been updated 

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run the app locally
  - [x] Test the above updates running agents from the new library page
2025-12-08 21:47:32 +07:00
Ubbe
a46976decd fix(frontend): allow to close toasts (#11550)
## Changes 🏗️

<img width="795" height="275" alt="Screenshot 2025-12-04 at 21 05 55"
src="https://github.com/user-attachments/assets/e7572635-3976-4822-89ea-3e5e26196ab8"
/>

Allow closing toasts 🍞

## Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run Storybook locally
  - [x] Toasts have a close button top-right

---------

Co-authored-by: Abhimanyu Yadav <122007096+Abhi1992002@users.noreply.github.com>
2025-12-08 17:29:14 +07:00
Nicholas Tindle
c4eb7edb65 feat(platform): Improve Google Sheets/Drive integration with unified credentials (#11520)
Simplifies and improves the Google Sheets/Drive integration by merging
credentials with the file picker and using narrower OAuth scopes.

### Changes 🏗️

- Merge Google credentials and file picker into a single unified input
field for better UX
- Create spreadsheets using Drive API instead of Sheets API for proper
scope support
- Simplify Google Drive OAuth scope to only use `drive.file` (narrowest
permission needed)
- Clean up unused imports (NormalizedPickedFile)

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Test creating a new Google Spreadsheet with
GoogleSheetsCreateSpreadsheetBlock
- [x] Test reading from existing spreadsheets with GoogleSheetsReadBlock
  - [x] Test writing to spreadsheets with GoogleSheetsWriteBlock
  - [x] Verify OAuth flow works with simplified scopes
  - [x] Verify file picker works with merged credentials field

#### For configuration changes:
- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> Unifies Google Drive picker and credentials with auto-credentials
across backend and frontend, updates all Sheets blocks and execution to
use it, and adds Drive-based spreadsheet creation plus supporting tests
and UI fixes.
> 
> - **Backend**:
> - **Google Drive model/field**: Introduce `GoogleDriveFile` (with
`_credentials_id`) and `GoogleDriveFileField()` for unified auth+picker
(`backend/blocks/google/_drive.py`).
> - **Sheets blocks**: Replace `GoogleDrivePickerField` and explicit
credentials with `GoogleDriveFileField` across all Sheets blocks;
preserve and emit credentials for chaining; add Drive service; create
spreadsheets via Drive API then manage via Sheets API.
> - **IO block**: Add `AgentGoogleDriveFileInputBlock` providing a Drive
picker input.
> - **Execution**: Support auto-generated credentials via
`BlockSchema.get_auto_credentials_fields()`; acquire/release multiple
credential locks; pass creds by `credentials_kwarg`
(`executor/manager.py`, `data/block.py`, `util/test.py`).
> - **Tests**: Add validation tests for duplicate/unique
`auto_credentials.kwarg_name` and defaults.
> - **Frontend**:
> - **Picker**: Enhance Google Drive picker to require/use saved
platform credentials, pass `_credentials_id`, validate scopes, and
manage dialog z-index/interaction; expose `requirePlatformCredentials`.
> - **UI**: Update dialogs/CSS to keep Google picker on top and prevent
overlay interactions.
> - **Types**: Extend `GoogleDrivePickerConfig` with `auto_credentials`
and related typings.
> 
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
7d25534def. This will update automatically
on new commits. Configure
[here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->

---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Copilot <198982749+Copilot@users.noreply.github.com>
2025-12-05 09:44:38 -06:00
Nicholas Tindle
3f690ea7b8 fix(platform/frontend): security upgrade next from 15.4.7 to 15.4.8 (#11536)
![snyk-top-banner](https://res.cloudinary.com/snyk/image/upload/r-d/scm-platform/snyk-pull-requests/pr-banner-default.svg)

### Snyk has created this PR to fix 1 vulnerabilities in the yarn
dependencies of this project.

#### Snyk changed the following file(s):

- `autogpt_platform/frontend/package.json`


#### Note for
[zero-installs](https://yarnpkg.com/features/zero-installs) users

If you are using the Yarn feature
[zero-installs](https://yarnpkg.com/features/zero-installs) that was
introduced in Yarn V2, note that this PR does not update the
`.yarn/cache/` directory meaning this code cannot be pulled and
immediately developed on as one would expect for a zero-install project
- you will need to run `yarn` to update the contents of the
`.yarn/cache` directory.
If you are not using zero-install you can ignore this as your flow
should likely be unchanged.



<details>
<summary>⚠️ <b>Warning</b></summary>

```
Failed to update the yarn.lock, please update manually before merging.
```

</details>



#### Vulnerabilities that will be fixed with an upgrade:

| Severity | Issue |
|:-------------------------:|:-------------------------|
| ![critical severity](https://res.cloudinary.com/snyk/image/upload/w_20,h_20/v1561977819/icon/c.png 'critical severity') | Arbitrary Code Injection<br/>[SNYK-JS-NEXT-14173355](https://snyk.io/vuln/SNYK-JS-NEXT-14173355) |




---

> [!IMPORTANT]
>
> - Check the changes in this PR to ensure they won't cause issues with
your project.
> - Max score is 1000. Note that the real score may have changed since
the PR was raised.
> - This PR was automatically created by Snyk using the credentials of a
real user.

---

**Note:** _You are seeing this because you or someone else with access
to this repository has authorized Snyk to open fix PRs._

For more information:
🧐 [View latest project
report](https://app.snyk.io/org/significant-gravitas/project/3d924968-0cf3-4767-9609-501fa4962856?utm_source&#x3D;github&amp;utm_medium&#x3D;referral&amp;page&#x3D;fix-pr)
📜 [Customise PR
templates](https://docs.snyk.io/scan-using-snyk/pull-requests/snyk-fix-pull-or-merge-requests/customize-pr-templates?utm_source=github&utm_content=fix-pr-template)
🛠 [Adjust project
settings](https://app.snyk.io/org/significant-gravitas/project/3d924968-0cf3-4767-9609-501fa4962856?utm_source&#x3D;github&amp;utm_medium&#x3D;referral&amp;page&#x3D;fix-pr/settings)
📚 [Read about Snyk's upgrade
logic](https://docs.snyk.io/scan-with-snyk/snyk-open-source/manage-vulnerabilities/upgrade-package-versions-to-fix-vulnerabilities?utm_source=github&utm_content=fix-pr-template)

---

**Learn how to fix vulnerabilities with free interactive lessons:**

🦉 [Arbitrary Code
Injection](https://learn.snyk.io/lesson/insecure-deserialization/?loc&#x3D;fix-pr)

[//]: #
'snyk:metadata:{"breakingChangeRiskLevel":null,"FF_showPullRequestBreakingChanges":false,"FF_showPullRequestBreakingChangesWebSearch":false,"customTemplate":{"variablesUsed":[],"fieldsUsed":[]},"dependencies":[{"name":"next","from":"15.4.7","to":"15.4.8"}],"env":"prod","issuesToFix":["SNYK-JS-NEXT-14173355"],"prId":"a4437bed-0261-4afc-bd19-a150cda017d7","prPublicId":"a4437bed-0261-4afc-bd19-a150cda017d7","packageManager":"yarn","priorityScoreList":[null],"projectPublicId":"3d924968-0cf3-4767-9609-501fa4962856","projectUrl":"https://app.snyk.io/org/significant-gravitas/project/3d924968-0cf3-4767-9609-501fa4962856?utm_source=github&utm_medium=referral&page=fix-pr","prType":"fix","templateFieldSources":{"branchName":"default","commitMessage":"default","description":"default","title":"default"},"templateVariants":["updated-fix-title","pr-warning-shown"],"type":"auto","upgrade":["SNYK-JS-NEXT-14173355"],"vulns":["SNYK-JS-NEXT-14173355"],"patch":[],"isBreakingChange":false,"remediationStrategy":"vuln"}'

<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> Upgrades Next.js from 15.4.7 to 15.4.8 in the frontend and updates
lockfile/transitive references accordingly.
> 
> - **Dependencies**:
> - Bump `next` to `15.4.8` in `autogpt_platform/frontend/package.json`.
> - Update lockfile to align, including `@next/*` SWC binaries and
packages that peer-depend on `next` (e.g., `@sentry/nextjs`,
`@storybook/nextjs`, `@vercel/*`, `geist`, `nuqs`,
`@next/third-parties`).
> - Minor transitive tweak: `sharp` dependency `semver` updated to
`7.7.3`.
> 
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
e7741cbfb5. This will update automatically
on new commits. Configure
[here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->

---------

Co-authored-by: snyk-bot <snyk-bot@snyk.io>
Co-authored-by: Bentlybro <Github@bentlybro.com>
Co-authored-by: Abhimanyu Yadav <122007096+Abhi1992002@users.noreply.github.com>
2025-12-05 09:44:12 -06:00
Swifty
8be3c88711 feat(backend): add default store agents for seeding test databases (#11552)
This PR adds a collection of pre-built store agents that can be loaded
into test databases for development and testing purposes.

### Changes 🏗️

- Add 17 exported agent JSON files in `backend/agents/` directory
- Add `StoreAgent_rows.csv` containing store listing metadata (titles,
descriptions, categories, images)
- Add `load_store_agents.py` script to load agents into the test
database
- Add `load-store-agents` Makefile target for easy execution

**Included Agents:**
- Flux AI Image Generator
- YouTube Transcription Scraper  
- Decision Maker Lead Finder
- Smart Meeting Prep
- Automated Support Agent
- Unspirational Poster Maker
- AI Video Generator
- Automated SEO Blog Writer
- Lead Finder (Local Businesses)
- LinkedIn Post Generator
- YouTube to LinkedIn Post Converter
- Personal Newsletter
- Email Scout - Contact Finder Assistant
- YouTube Video to SEO Blog Writer
- AI Webpage Copy Improver
- Domain Name Finder
- AI Function

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Run `make load-store-agents` and verify agents are loaded into the
database
  - [x] Verify store listings appear correctly with metadata from CSV
- [x] Confirm no sensitive information (API keys, secrets) is included
in the exported agents

#### For configuration changes:
- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)

No configuration changes required - this only adds test data and a
loading script.
2025-12-05 16:08:37 +01:00
Zamil Majdy
e4d0dbc283 feat(platform): add Agent Output Demo field to marketplace submission form (#11538)
## Summary
- Add Agent Output Demo field to marketplace agent submission form,
positioned below the Description field
- Store agent output demo URLs in database for future CoPilot
integration
- Implement proper video/image ordering on marketplace pages
- Add shared YouTube URL validation utility to eliminate code
duplication

## Changes Made

### Frontend
- **Agent submission form**: Added Agent Output Demo field with YouTube
URL validation
- **Edit agent form**: Added Agent Output Demo field for existing
submissions
- **Marketplace display**: Implemented proper video/image ordering:
  1. YouTube/Overview video (if exists)
  2. First image (hero)
  3. Agent Output Demo (if exists) 
  4. Additional images
- **Shared utilities**: Created `validateYouTubeUrl` function in
`src/lib/utils.ts`
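
A hypothetical sketch of such a validator; the actual implementation in `src/lib/utils.ts` is not shown in this PR and may differ:

```ts
export function validateYouTubeUrl(url: string): boolean {
  try {
    const { hostname, pathname, searchParams } = new URL(url);
    if (hostname === "youtu.be") return pathname.length > 1;
    if (hostname === "youtube.com" || hostname.endsWith(".youtube.com")) {
      // Covers watch URLs and embeds; intentionally simplified.
      return searchParams.has("v") || pathname.startsWith("/embed/");
    }
    return false;
  } catch {
    return false;
  }
}
```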

### Backend
- **Database schema**: Added `agentOutputDemoUrl` field to
`StoreListingVersion` model
- **Database views**: Updated `StoreAgent` view to include
`agent_output_demo` field
- **API models**: Added `agent_output_demo_url` to submission requests
and `agent_output_demo` to responses
- **Database migration**: Added migration to create new column and
update view
- **Test files**: Updated all test files to include the new required
field

## Test Plan
- [x] Frontend form validation works correctly for YouTube URLs
- [x] Database migration applies successfully 
- [x] Backend API accepts and returns the new field
- [x] Marketplace displays videos in correct order
- [x] Both frontend and backend formatting/linting pass
- [x] All test files include required field to prevent failures

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-12-05 11:40:12 +00:00
Swifty
8e476c3f8d fix(backend): pass credential type from SDK registry to integrations API (#11544)
### Changes 🏗️

This PR improves the `/integrations/providers` endpoint to dynamically
determine supported authentication types from the SDK registry instead
of using hardcoded values.

**What changed:**
- The `list_providers` function now looks up each provider in the
`AutoRegistry` to get its `supported_auth_types`
- If a provider has defined auth types in the SDK registry, those are
used to set `supports_api_key`, `supports_user_password`, and
`supports_host_scoped` flags
- Falls back to legacy hardcoded behavior for providers not registered
in the SDK (maintains backwards compatibility)

**Why:**
- Providers can now correctly declare their supported authentication
methods via the SDK
- Removes brittle hardcoded checks like `name in ("smtp",)` for specific
providers
- Makes the credential type system more extensible and maintainable

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Verified providers with SDK-defined auth types return correct
flags
  - [x] Verified legacy providers still work with fallback behavior
- [x] Tested the `/integrations/providers` endpoint returns expected
data

#### For configuration changes:

- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)

No configuration changes required for this PR.
2025-12-05 12:42:49 +01:00
Zamil Majdy
2f63defb53 fix(backend): Mark ValueError as known block errors (#11537)
### Changes 🏗️

Mark ValueError as known block errors

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
2025-12-05 11:12:18 +00:00
Sukhtumur Narantuya
2934e9ea69 fix(backend): replace print() statements with proper logging (#11499)
- Replace print() with logger.info() in reddit.py for login message
- Replace print() with logger.debug() in airtable/_api.py for API params
- Replace print() with logger.debug() in _manual_base.py for webhook URL
- Add logging imports and logger initialization where missing
- Update FIXME to TODO with GitHub issue reference #8537

<!-- Clearly explain the need for these changes: -->

### Changes 🏗️

<!-- Concisely describe all of the changes made in this pull request:
-->

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] test it still works


<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> Switch `print()` to `logger.info/debug()` across Airtable, Reddit, and
manual webhook modules; add logger initialization and clarify TODO with
issue reference.
> 
> - **Backend**:
>   - **Airtable (`backend/blocks/airtable/_api.py`)**:
>     - Replace `print(params)` with `logger.debug` in `create_base`.
>   - **Reddit (`backend/blocks/reddit.py`)**:
>     - Add `logging` import and `logger` initialization.
>     - Replace login `print` with `logger.info` in `get_praw`.
>   - **Webhooks (`backend/integrations/webhooks/_manual_base.py`)**:
> - Replace `print` with `logger.debug` in `_register_webhook` and add
`logger`.
>     - Update `FIXME` to `TODO` with GitHub issue reference `#8537`.
> 
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
3add9b0fa9. This will update automatically
on new commits. Configure
[here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->

Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
2025-12-05 07:49:20 +00:00
Krzysztof Czerwinski
c880db439d feat(platform): Backend completion of Onboarding tasks (#11375)
Make onboarding task completion backend-authoritative, which prevents
cheating (previously users could instantly mark all tasks as completed
and get rewards) and makes task completion more reliable. Completion of
tasks is moved to the backend, with the exception of introductory
onboarding tasks and visit-page-type tasks.

### Changes 🏗️

- Move run-counter incrementing to the backend and make webhook-triggered
and scheduled executions count as well
- Use user timezone for calculating run streak
- Frontend task completion is moved from update onboarding state to
separate endpoint and guarded so only frontend tasks can be completed
- Graph creation, execution and add marketplace agent to library accept
`source`, so appropriate tasks can be completed
- Replace `client.ts` API calls with orval-generated ones and remove
no-longer-used functions from `client.ts`
- Add `resolveResponse` helper function that unwraps an orval-generated
call result to its 2xx response
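
A minimal sketch of such an unwrapping helper, assuming the orval-generated call resolves to a `{ status, data }` object (the exact result shape is an assumption):

```
type OrvalResult<T> = { status: number; data: T };

// Throw on non-2xx results so callers only ever see successful payloads.
export function resolveResponse<T>(result: OrvalResult<T>): T {
  if (result.status < 200 || result.status >= 300) {
    throw new Error(`Request failed with status ${result.status}`);
  }
  return result.data;
}
```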

Small changes & bug fixes:
- Make Redis notification bus serialize all payload fields
- Fix confetti when group is finished
- Collapse finished group when opening Wallet
- Play confetti only for tasks that are listed in the Wallet UI

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Onboarding can be finished
  - [x] All tasks can be finished and work properly
  - [x] Confetti works properly
2025-12-05 02:32:28 +00:00
Nicholas Tindle
486099140d fix(platform/frontend): security upgrade next from 15.4.7 to 15.4.8 (#11536)

### Snyk has created this PR to fix 1 vulnerability in the yarn
dependencies of this project.

#### Snyk changed the following file(s):

- `autogpt_platform/frontend/package.json`


#### Note for
[zero-installs](https://yarnpkg.com/features/zero-installs) users

If you are using the Yarn feature
[zero-installs](https://yarnpkg.com/features/zero-installs) that was
introduced in Yarn V2, note that this PR does not update the
`.yarn/cache/` directory meaning this code cannot be pulled and
immediately developed on as one would expect for a zero-install project
- you will need to run `yarn` to update the contents of the
`./yarn/cache` directory.
If you are not using zero-install you can ignore this as your flow
should likely be unchanged.



<details>
<summary>⚠️ <b>Warning</b></summary>

```
Failed to update the yarn.lock, please update manually before merging.
```

</details>



#### Vulnerabilities that will be fixed with an upgrade:

| | Issue |
|:---:|:---|
| ![critical severity](https://res.cloudinary.com/snyk/image/upload/w_20,h_20/v1561977819/icon/c.png 'critical severity') | Arbitrary Code Injection<br/>[SNYK-JS-NEXT-14173355](https://snyk.io/vuln/SNYK-JS-NEXT-14173355) |




---

> [!IMPORTANT]
>
> - Check the changes in this PR to ensure they won't cause issues with
your project.
> - Max score is 1000. Note that the real score may have changed since
the PR was raised.
> - This PR was automatically created by Snyk using the credentials of a
real user.

---

**Note:** _You are seeing this because you or someone else with access
to this repository has authorized Snyk to open fix PRs._

For more information:
🧐 [View latest project
report](https://app.snyk.io/org/significant-gravitas/project/3d924968-0cf3-4767-9609-501fa4962856?utm_source&#x3D;github&amp;utm_medium&#x3D;referral&amp;page&#x3D;fix-pr)
📜 [Customise PR
templates](https://docs.snyk.io/scan-using-snyk/pull-requests/snyk-fix-pull-or-merge-requests/customize-pr-templates?utm_source=github&utm_content=fix-pr-template)
🛠 [Adjust project
settings](https://app.snyk.io/org/significant-gravitas/project/3d924968-0cf3-4767-9609-501fa4962856?utm_source&#x3D;github&amp;utm_medium&#x3D;referral&amp;page&#x3D;fix-pr/settings)
📚 [Read about Snyk's upgrade
logic](https://docs.snyk.io/scan-with-snyk/snyk-open-source/manage-vulnerabilities/upgrade-package-versions-to-fix-vulnerabilities?utm_source=github&utm_content=fix-pr-template)

---

**Learn how to fix vulnerabilities with free interactive lessons:**

🦉 [Arbitrary Code
Injection](https://learn.snyk.io/lesson/insecure-deserialization/?loc&#x3D;fix-pr)


<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> Upgrades Next.js from 15.4.7 to 15.4.8 in the frontend and updates
lockfile/transitive references accordingly.
> 
> - **Dependencies**:
> - Bump `next` to `15.4.8` in `autogpt_platform/frontend/package.json`.
> - Update lockfile to align, including `@next/*` SWC binaries and
packages that peer-depend on `next` (e.g., `@sentry/nextjs`,
`@storybook/nextjs`, `@vercel/*`, `geist`, `nuqs`,
`@next/third-parties`).
> - Minor transitive tweak: `sharp` dependency `semver` updated to
`7.7.3`.
> 
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
e7741cbfb5. This will update automatically
on new commits. Configure
[here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->

---------

Co-authored-by: snyk-bot <snyk-bot@snyk.io>
Co-authored-by: Bentlybro <Github@bentlybro.com>
Co-authored-by: Abhimanyu Yadav <122007096+Abhi1992002@users.noreply.github.com>
2025-12-04 20:09:49 +00:00
Ubbe
6d8906ced7 fix(frontend): summary card layout (#11556)
## Changes 🏗️

- Make it less verbose and adapt the summary card to the new design

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run locally and see summary displaying well
2025-12-04 23:33:57 +07:00
Abhimanyu Yadav
bf32a76f49 fix(frontend): prevent NodeHandle error when array schema items is undefined (#11549)
- Added conditional rendering in `NodeArrayInput` component to only
render `NodeHandle` when `schema.items` is defined
- This prevents the error when array schemas don't have an `items`
property defined
- The input field rendering logic already had proper null checks, so
only the handle rendering needed to be fixed

### Checklist 📋

#### For code changes:

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Open the legacy builder and create a new agent
  - [x] Add a Smart Decision block to the agent
- [x] Verify that the `conversation_history` field renders without
errors
- [x] Verify that array fields with defined `items` still render
correctly (e.g., test with other blocks that use arrays)
- [x] Verify that connecting/disconnecting handles to the
conversation_history field works correctly
  - [x] Verify that the field can accept input values when not connected
2025-12-04 15:13:31 +00:00
Ubbe
f7a8e372dd feat(frontend): implement new actions sidebar + summary (#11545)
## Changes 🏗️

<img width="800" height="767" alt="Screenshot 2025-12-04 at 17 40 10"
src="https://github.com/user-attachments/assets/37036246-bcdb-46eb-832c-f91fddfd9014"
/>

<img width="800" height="492" alt="Screenshot 2025-12-04 at 17 40 16"
src="https://github.com/user-attachments/assets/ba547e54-016a-403c-9ab6-99465d01af6b"
/>

On the new Agent Library page:

- Implement the new actions sidebar ( main change... ) 
- Refactor the layout/components to accommodate that
- Implement the missing "Summary" functionality 
- Update icon buttons in Design system with new designs

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run the app locally and test it
2025-12-04 22:46:42 +07:00
Abhimanyu Yadav
3ccc712463 feat(frontend): add host-scoped credentials support to CredentialField (#11546)
### Changes 🏗️

This PR adds support for `host_scoped` credential type in the new
builder's `CredentialField` component. This enables blocks that require
sensitive headers for custom API endpoints to configure host-scoped
credentials directly from the credential field.
<img width="745" height="843" alt="Screenshot 2025-12-04 at 4 31 09 PM"
src="https://github.com/user-attachments/assets/d076b797-64c4-4c31-9c88-47a064814055"
/>

<img width="418" height="180" alt="Screenshot 2025-12-04 at 4 36 02 PM"
src="https://github.com/user-attachments/assets/b4fa6d8d-d8f4-41ff-ab11-7c708017f8fd"
/>

**Key changes:**

- **Added `HostScopedCredentialsModal` component**
(`models/HostScopedCredentialsModal/`)
- Modal dialog for creating host-scoped credentials with host pattern,
optional title, and dynamic header pairs (key-value)
- Auto-populates host from discriminator value (URL field) when
available
  - Supports adding/removing multiple header pairs with validation

- **Enhanced credential filtering logic** (`helpers.ts`)
- Updated `filterCredentialsByProvider` to accept `schema` and
`discriminatorValue` parameters
  - Added intelligent filtering for:
    - Credential types supported by the block
    - OAuth credentials with sufficient scopes
    - Host-scoped credentials matched by host from discriminator value
  - Extracted `getDiscriminatorValue` helper function for reusability

- **Updated `CredentialField` component**
  - Added `supportsHostScoped` check in `useCredentialField` hook
- Conditionally renders `HostScopedCredentialsModal` when
`supportsHostScoped && discriminatorValue` is true
  - Exports `discriminatorValue` for use in child components

- **Updated `useCredentialField` hook**
- Calculates `discriminatorValue` using new `getDiscriminatorValue`
helper
- Passes `schema` and `discriminatorValue` to enhanced
`filterCredentialsByProvider` function
- Returns `supportsHostScoped` and `discriminatorValue` for component
consumption

**Technical details:**
- Host extraction uses `getHostFromUrl` utility to parse host from
discriminator value (URL)
- Header pairs are managed as state with add/remove functionality
- Form validation uses `react-hook-form` with `zod` schema
- Credential creation integrates with existing API endpoints and query
invalidation

### Checklist 📋

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Verify `HostScopedCredentialsModal` appears when block supports
`host_scoped` credentials and discriminator value is present
  - [x] Test host auto-population from discriminator value (URL field)
  - [x] Test manual host entry when discriminator value is not available
  - [x] Test adding/removing multiple header pairs
- [x] Test form validation (host required, empty header pairs filtered
out)
  - [x] Test credential creation and successful toast notification
  - [x] Verify credentials list refreshes after creation
- [x] Test host-scoped credential filtering matches credentials by host
from URL
- [x] Verify existing credential types (api_key, oauth2, user_password)
still work correctly
  - [x] Test OAuth scope filtering still works as expected
- [x] Verify modal only shows when `supportsHostScoped &&
discriminatorValue` conditions are met
2025-12-04 15:13:23 +00:00
Abhimanyu Yadav
2b9816cfa5 fix(frontend): ensure node selection state is set before copying in context menu (#11535)
<!-- Clearly explain the need for these changes: -->

Fixes an issue where copying a node via the context menu dialog fails on
the first attempt in a new session. The problem occurs because the node
selection state update and the copy operation happen in quick
succession, causing a race condition where `copySelectedNodes()` reads
the store before the selection state is properly updated.
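
A minimal sketch of the fix pattern, assuming a zustand-style node store (names and shapes are illustrative, not the shipped code): read the store again after the selection update instead of relying on a stale snapshot.

```
import { create } from "zustand";

type NodeStore = {
  selectedNodeIds: string[];
  setSelectedNodes: (ids: string[]) => void;
  copySelectedNodes: () => string[];
};

export const useNodeStore = create<NodeStore>()((set, get) => ({
  selectedNodeIds: [],
  setSelectedNodes: (ids) => set({ selectedNodeIds: ids }),
  // Reads whatever is selected at call time, not a captured snapshot.
  copySelectedNodes: () => get().selectedNodeIds,
}));

export function copyNodeFromContextMenu(nodeId: string) {
  // Update the selection synchronously in the store...
  useNodeStore.getState().setSelectedNodes([nodeId]);
  // ...then copy from the *current* store state, avoiding the race.
  return useNodeStore.getState().copySelectedNodes();
}
```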


### Checklist 📋

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Start a new browser session (or clear storage)
  - [x] Open the flow editor
- [x] Right-click on a node and select "Copy Node" from the context menu
  - [x] Verify the node is successfully copied on the first attempt
2025-12-04 15:13:13 +00:00
Abhimanyu Yadav
4e87f668e3 feat(frontend): add file input widget with variants and base64 support (#11533)
<!-- Clearly explain the need for these changes: -->

This PR enhances the FileInput component to support multiple variants
and modes, and integrates it into the form renderer as a file widget.
The changes enable a more flexible file input experience with both
server upload and local base64 conversion capabilities.

### Compact one
<img width="354" height="91" alt="Screenshot 2025-12-03 at 8 05 51 PM"
src="https://github.com/user-attachments/assets/1295a34f-4d9f-4a65-89f2-4b6516f176ff"
/>
<img width="386" height="96" alt="Screenshot 2025-12-03 at 8 06 11 PM"
src="https://github.com/user-attachments/assets/3c10e350-8ddc-43ff-bf0a-68b23f8db394"
/>

## Default one
<img width="671" height="165" alt="Screenshot 2025-12-03 at 8 05 08 PM"
src="https://github.com/user-attachments/assets/bd17c5f1-8cdb-4818-9850-a9e61252d8c1"
/>
<img width="656" height="141" alt="Screenshot 2025-12-03 at 8 05 21 PM"
src="https://github.com/user-attachments/assets/1edf260f-6245-44b9-bd3e-dd962f497413"
/>

### Changes 🏗️

#### FileInput Component Enhancements
- **Added variant support**: Introduced `default` and `compact` variants
- `default`: Full-featured UI with drag & drop, progress bar, and
storage note
- `compact`: Minimal inline design suitable for tight spaces like node
inputs
- **Added mode support**: Introduced `upload` and `base64` modes
- `upload`: Uploads files to server (requires `onUploadFile` and
`uploadProgress`)
  - `base64`: Converts files to base64 locally without server upload
- **Improved type safety**: Refactored props using discriminated unions
(`UploadModeProps | Base64ModeProps`) to ensure type-safe usage
- **Enhanced file handling**: Added `getFileLabelFromValue` helper to
extract file labels from base64 data URIs or file paths
- **Better UX**: Added `showStorageNote` prop to control visibility of
storage disclaimer
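
The `base64` mode described above boils down to converting the picked file locally; a hedged sketch (helper name assumed) using the standard `FileReader` API:

```
// Convert a File to a "data:<mime>;base64,..." string without any server upload.
export function fileToDataUri(file: File): Promise<string> {
  return new Promise((resolve, reject) => {
    const reader = new FileReader();
    reader.onload = () => resolve(reader.result as string);
    reader.onerror = () => reject(reader.error);
    reader.readAsDataURL(file);
  });
}
```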

#### FileWidget Integration
- **Replaced legacy Input**: Migrated from legacy `Input` component to
new `FileInput` component
- **Smart variant selection**: Automatically selects `default` or
`compact` variant based on form context size
- **Base64 mode**: Uses base64 mode for form inputs, eliminating need
for server uploads in builder context
- **Improved accessibility**: Better disabled/readonly state handling
with visual feedback

#### Form Renderer Updates
- **Disabled validation**: Added `noValidate={true}` and
`liveValidate={false}` to prevent premature form validation

#### Storybook Updates
- **Expanded stories**: Added comprehensive stories for all variant/mode
combinations:
  - `Default`: Default variant with upload mode
  - `Compact`: Compact variant with base64 mode
  - `CompactWithUpload`: Compact variant with upload mode
  - `DefaultWithBase64`: Default variant with base64 mode
- **Improved documentation**: Updated component descriptions to clearly
explain variants and modes

### Checklist 📋

#### For code changes:

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Test FileInput component in Storybook with all variant/mode
combinations
  - [x] Test file upload flow in default variant with upload mode
  - [x] Test base64 conversion in compact variant with base64 mode
  - [x] Test file widget in form renderer (node inputs)
  - [x] Test file type validation (accept prop)
  - [x] Test file size validation (maxFileSize prop)
  - [x] Test error handling for invalid files
  - [x] Test disabled and readonly states
  - [x] Test file clearing/removal functionality
  - [x] Verify compact variant renders correctly in tight spaces
  - [x] Verify default variant shows storage note only in upload mode
  - [x] Test drag & drop functionality in default variant
2025-12-04 15:13:01 +00:00
Abhimanyu Yadav
729400dbe1 feat(frontend): display graph validation errors inline on node fields (#11524)
When running a graph in the new builder, validation errors were only
displayed in toast notifications, making it difficult for users to
identify which specific fields had errors. Users needed to see
validation errors directly next to the problematic fields within each
node for better UX and faster debugging.

<img width="1319" height="953" alt="Screenshot 2025-12-03 at 12 48
15 PM"
src="https://github.com/user-attachments/assets/d444bc71-9bee-4fa7-8b7f-33339bd0cb24"
/>

### Changes 🏗️

- **Error handling in graph execution** (`useRunGraph.ts`):
- Added detection for graph validation errors using
`ApiError.isGraphValidationError()`
  - Parse and store node-level errors from backend validation response
  - Clear all node errors on successful graph execution
- Enhanced toast messages to guide users to fix validation errors on
highlighted nodes

- **Node store error management** (`nodeStore.ts`):
  - Added `errors` field to node data structure
  - Implemented `updateNodeErrors()` to set errors for a specific node
- Implemented `clearNodeErrors()` to remove errors from a specific node
  - Implemented `getNodeErrors()` to retrieve errors for a specific node
- Implemented `setNodeErrorsForBackendId()` to set errors by backend ID
(supports matching by `metadata.backend_id` or node `id`)
- Implemented `clearAllNodeErrors()` to clear all node errors across the
graph

- **Visual error indication** (`CustomNode.tsx`, `NodeContainer.tsx`):
- Added error detection logic to identify both configuration errors and
output errors
- Applied error styling to nodes with validation errors (using `FAILED`
status styling)
- Nodes with errors now display with red border/ring to visually
indicate issues

- **Field-level error display** (`FieldTemplate.tsx`):
  - Fetch node errors from store for the current node
- Match field IDs with error keys (handles both underscore and dot
notation)
  - Display field-specific error messages below each field in red text
- Added helper function `getFieldErrorKey()` to normalize field IDs for
error matching

- **Utility helpers** (`helpers.ts`):
- Created `getFieldErrorKey()` function to extract field key from field
ID (removes `root_` prefix)
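
A minimal sketch of the error-key matching described above, assuming RJSF-style field ids such as `root_conversation_history` (the exact matching rules are assumptions):

```
// Strip the RJSF "root_" prefix so field ids line up with backend error keys.
export function getFieldErrorKey(fieldId: string): string {
  return fieldId.replace(/^root_/, "");
}

// Look up an error for a field, tolerating underscore vs dot notation.
export function findFieldError(
  fieldId: string,
  errors: Record<string, string>,
): string | undefined {
  const key = getFieldErrorKey(fieldId);
  return errors[key] ?? errors[key.replace(/_/g, ".")];
}
```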

### Checklist 📋

#### For code changes:

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Create a graph with multiple nodes and intentionally leave
required fields empty
- [x] Run the graph and verify that validation errors appear in toast
notification
- [x] Verify that nodes with errors are highlighted with red border/ring
styling
- [x] Verify that field-specific error messages appear below each
problematic field in red text
- [x] Verify that error messages handle both underscore and dot notation
in field keys
- [x] Fix validation errors and run graph again - verify errors are
cleared
  - [x] Verify that successful graph execution clears all node errors
- [x] Test with nodes that have `backend_id` in metadata vs nodes
without
  - [x] Verify that nodes without errors don't show error styling
- [x] Test with nested fields and array fields to ensure error matching
works correctly
2025-12-04 15:12:51 +00:00
Abhimanyu Yadav
f6608e99c8 feat(frontend): add expandable text input modal for better editing experience (#11510)
Text inputs in the form builder can be difficult to edit when dealing
with longer content. Users need a way to expand text inputs into a
larger, more comfortable editing interface, especially for multi-line
text, passwords, and longer string values.



https://github.com/user-attachments/assets/443bf4eb-c77c-4bf6-b34c-77091e005c6d


### Changes 🏗️

- **Added `InputExpanderModal` component**: A new modal component that
provides a larger textarea (300px min-height) for editing text inputs
with the following features:
- Copy-to-clipboard functionality with visual feedback (checkmark icon)
  - Toast notification on successful copy
  - Auto-focus on open for better UX
  - Proper state management to reset values when modal opens/closes
  
- **Enhanced `TextInputWidget`**:
- Added expand button (ArrowsOutIcon) with tooltip for text, password,
and textarea input types
  - Button appears inline next to the input field
  - Integrated the new `InputExpanderModal` component
  - Improved layout with flexbox to accommodate the expand button
- Added padding-right to input when expand button is visible to prevent
text overlap

- **Refactored file structure**:
  - Moved `TextInputWidget.tsx` into `TextInputWidget/` directory
  - Updated import path in `widgets/index.ts`

- **UX improvements**:
- Expand button only shows for applicable input types (text, password,
textarea)
  - Number and integer inputs don't show expand button (not needed)
- Modal preserves schema title, description, and placeholder for context

### Checklist 📋

#### For code changes:

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Test expand button appears for text input fields
  - [x] Test expand button appears for password input fields  
  - [x] Test expand button appears for textarea fields
  - [x] Test expand button does NOT appear for number/integer inputs
  - [x] Test clicking expand button opens modal with current value
  - [x] Test editing text in modal and saving updates the input field
  - [x] Test cancel button closes modal without saving changes
- [x] Test copy-to-clipboard button copies text and shows success state
  - [x] Test toast notification appears on successful copy
  - [x] Test modal resets to original value when reopened
  - [x] Test modal auto-focuses textarea on open
  - [x] Test expand button tooltip displays correctly
  - [x] Test input field layout with expand button (no text overlap)
2025-12-04 15:12:32 +00:00
Abhimanyu Yadav
78c2245269 feat(frontend): add automatic collision resolution for flow editor nodes (#11506)
When users drag and drop nodes in the new flow editor, nodes can overlap
with each other, making the graph difficult to read and interact with.
This PR adds an automatic collision resolution algorithm that runs when
a node is dropped, ensuring nodes are automatically separated to prevent
overlaps and maintain a clean, readable graph layout.

### Changes 🏗️

- **Added collision resolution algorithm** (`resolve-collision.ts`):
- Implements an iterative collision detection and resolution system
using Flatbush for efficient spatial indexing
- Automatically resolves overlaps by moving nodes apart along the axis
with the smallest overlap
- Configurable options: `maxIterations`, `overlapThreshold`, and
`margin`
- Uses actual node dimensions (`width`, `height`, or `measured` values)
when available

- **Integrated collision resolution into Flow component**:
- Added `onNodeDragStop` callback that triggers collision resolution
after a node is dropped
- Configured with `maxIterations: Infinity`, `overlapThreshold: 0.5`,
and `margin: 15px`

- **Enhanced node dimension handling**:
- Updated `nodeStore.ts` to prioritize actual node dimensions
(`node.width`, `node.measured.width`) over hardcoded defaults when
calculating positions
  - Ensures collision detection uses accurate node sizes

- **Added dependency**:
- Added `flatbush@4.5.0` for efficient spatial indexing and collision
detection
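
A condensed sketch of the approach, assuming axis-aligned node boxes and using `flatbush` for the spatial queries (parameter names and exact behaviour are assumptions, not the shipped algorithm):

```
import Flatbush from "flatbush";

type Box = { id: string; x: number; y: number; width: number; height: number };

export function resolveCollisions(boxes: Box[], margin = 15, maxIterations = 50): Box[] {
  const nodes = boxes.map((b) => ({ ...b }));
  for (let iter = 0; iter < maxIterations; iter++) {
    // Rebuild the spatial index each pass so queries reflect the latest positions.
    const index = new Flatbush(nodes.length);
    for (const n of nodes) index.add(n.x, n.y, n.x + n.width, n.y + n.height);
    index.finish();

    let moved = false;
    for (let i = 0; i < nodes.length; i++) {
      const a = nodes[i];
      const hits = index.search(a.x - margin, a.y - margin, a.x + a.width + margin, a.y + a.height + margin);
      for (const j of hits) {
        if (j <= i) continue; // handle each pair once
        const b = nodes[j];
        const overlapX = Math.min(a.x + a.width, b.x + b.width) - Math.max(a.x, b.x) + margin;
        const overlapY = Math.min(a.y + a.height, b.y + b.height) - Math.max(a.y, b.y) + margin;
        if (overlapX <= 0 || overlapY <= 0) continue;
        moved = true;
        // Push the pair apart along the axis with the smallest overlap.
        if (overlapX < overlapY) {
          const shift = overlapX / 2;
          if (a.x < b.x) { a.x -= shift; b.x += shift; } else { a.x += shift; b.x -= shift; }
        } else {
          const shift = overlapY / 2;
          if (a.y < b.y) { a.y -= shift; b.y += shift; } else { a.y += shift; b.y -= shift; }
        }
      }
    }
    if (!moved) break; // stop early once no overlaps remain
  }
  return nodes;
}
```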

### Checklist 📋

#### For code changes:

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Drag a node and drop it on top of another node - verify nodes
automatically separate
- [x] Drag multiple nodes to create overlapping clusters - verify all
overlaps are resolved
- [x] Drag nodes with different sizes (NOTE blocks vs regular blocks) -
verify collision detection uses correct dimensions
- [x] Drag nodes near the edge of the canvas - verify nodes don't get
pushed off-screen
- [x] Test with a graph containing many nodes (20+) - verify performance
is acceptable
  - [x] Verify nodes maintain their positions when no collisions occur
- [x] Test with nodes that have custom measured dimensions - verify
accurate collision detection
2025-12-04 14:46:43 +00:00
Nicholas Tindle
113df689dc feat(platform): Improve Google Sheets/Drive integration with unified credentials (#11520)
Simplifies and improves the Google Sheets/Drive integration by merging
credentials with the file picker and using narrower OAuth scopes.

### Changes 🏗️

- Merge Google credentials and file picker into a single unified input
field for better UX
- Create spreadsheets using Drive API instead of Sheets API for proper
scope support
- Simplify Google Drive OAuth scope to only use `drive.file` (narrowest
permission needed)
- Clean up unused imports (NormalizedPickedFile)

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Test creating a new Google Spreadsheet with
GoogleSheetsCreateSpreadsheetBlock
- [x] Test reading from existing spreadsheets with GoogleSheetsReadBlock
  - [x] Test writing to spreadsheets with GoogleSheetsWriteBlock
  - [x] Verify OAuth flow works with simplified scopes
  - [x] Verify file picker works with merged credentials field

#### For configuration changes:
- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> Unifies Google Drive picker and credentials with auto-credentials
across backend and frontend, updates all Sheets blocks and execution to
use it, and adds Drive-based spreadsheet creation plus supporting tests
and UI fixes.
> 
> - **Backend**:
> - **Google Drive model/field**: Introduce `GoogleDriveFile` (with
`_credentials_id`) and `GoogleDriveFileField()` for unified auth+picker
(`backend/blocks/google/_drive.py`).
> - **Sheets blocks**: Replace `GoogleDrivePickerField` and explicit
credentials with `GoogleDriveFileField` across all Sheets blocks;
preserve and emit credentials for chaining; add Drive service; create
spreadsheets via Drive API then manage via Sheets API.
> - **IO block**: Add `AgentGoogleDriveFileInputBlock` providing a Drive
picker input.
> - **Execution**: Support auto-generated credentials via
`BlockSchema.get_auto_credentials_fields()`; acquire/release multiple
credential locks; pass creds by `credentials_kwarg`
(`executor/manager.py`, `data/block.py`, `util/test.py`).
> - **Tests**: Add validation tests for duplicate/unique
`auto_credentials.kwarg_name` and defaults.
> - **Frontend**:
> - **Picker**: Enhance Google Drive picker to require/use saved
platform credentials, pass `_credentials_id`, validate scopes, and
manage dialog z-index/interaction; expose `requirePlatformCredentials`.
> - **UI**: Update dialogs/CSS to keep Google picker on top and prevent
overlay interactions.
> - **Types**: Extend `GoogleDrivePickerConfig` with `auto_credentials`
and related typings.
> 
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
7d25534def. This will update automatically
on new commits. Configure
[here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->

---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Copilot <198982749+Copilot@users.noreply.github.com>
2025-12-04 14:40:30 +00:00
Ubbe
f1c6c94636 ci(frontend): fix concurrency groups (#11551)
## Changes 🏗️

Fix concurrency grouping on Front-end workflows.

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] We will see once merged
2025-12-04 21:44:06 +07:00
Swifty
6588110bf2 fix(frontend): forward X-API-Key header through proxy (#11530)
The Next.js API proxy was stripping the X-API-Key header when forwarding
requests to the backend, causing API key authentication to fail in
environments where requests go through the proxy (e.g., dev
environment).

### Changes 🏗️

- Updated `createRequestHeaders()` in
`frontend/src/lib/autogpt-server-api/helpers.ts` to forward the
`X-API-Key` header from the original request to the backend
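
A minimal sketch of the forwarding logic, assuming the helper receives the incoming request's `Headers` (names are illustrative; the real `createRequestHeaders()` may take different arguments):

```
export function buildProxyHeaders(incoming: Headers, accessToken?: string): Headers {
  const headers = new Headers({ "Content-Type": "application/json" });
  if (accessToken) headers.set("Authorization", `Bearer ${accessToken}`);
  const apiKey = incoming.get("X-API-Key");
  if (apiKey) headers.set("X-API-Key", apiKey); // previously dropped by the proxy
  return headers;
}
```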

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Verify API key authentication works when requests go through the
Next.js proxy
- [x] Verify existing authentication (Authorization header) still works
  - [x] Verify admin impersonation header forwarding still works

#### For configuration changes:

- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)

No configuration changes required.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-12-03 13:39:17 +01:00
Ubbe
bfbd4eee53 ci(frontend): concurrency optimizations (#11525)
## Changes 🏗️

Added the same concurrency optimisation to the Front-end and Fullstack
CI workflows. It will:

- Cancel in-progress runs when a new workflow starts for the same
branch/PR
- Reduce CI costs by avoiding redundant runs
- Ensure only the latest workflow runs
- Both workflows now use the same concurrency strategy to optimise CI
billing.

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] We will see as we push commits...
2025-12-03 18:05:22 +07:00
Swifty
7b93600973 fix duplicate prometheus metrics 2025-12-03 11:04:38 +01:00
Abhimanyu Yadav
02d9ff8db2 fix(frontend): improve error message extraction in agent execution error handler (#11527)
When agent execution fails, the error toast was only showing
`error.message`, which often lacks detail. The API returns more specific
error messages in `error.response.detail.message`, but these weren't
being displayed to users, making debugging harder.

<img width="1018" height="796" alt="Screenshot 2025-12-03 at 2 14 17 PM"
src="https://github.com/user-attachments/assets/6a93aed9-9e18-450a-9995-8760d7d5ca35"
/>

### Changes 🏗️

- Updated error message extraction in `useAgentRunModal` to check
`error.response.detail.message` first, then fall back to
`error.message`, then to the default message
- This ensures users see the most specific error message available from
the API response
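
A minimal sketch of that fallback chain (the error shape is an assumption based on the description above):

```
type ApiLikeError = {
  message?: string;
  response?: { detail?: { message?: string } };
};

export function getExecutionErrorMessage(
  error: ApiLikeError,
  fallback = "Failed to execute agent",
): string {
  // Prefer the most specific message the API returned, then fall back.
  return error.response?.detail?.message ?? error.message ?? fallback;
}
```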

### Checklist 📋

#### For code changes:

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Triggered an agent execution error (e.g., invalid inputs) and
verified the toast shows the detailed error message from
`error.response.detail.message`
- [x] Verified fallback to `error.message` when
`error.response.detail.message` is not available
  - [x] Verified fallback to default message when neither is available
2025-12-03 09:19:29 +00:00
Ubbe
b4a69c49a1 feat(frontend): use websockets on new library page + fixes (#11526)
## Changes 🏗️

<img width="800" height="614" alt="Screenshot 2025-12-03 at 14 52 46"
src="https://github.com/user-attachments/assets/c7012d7a-96d4-4268-a53b-27f2f7322a39"
/>

- Create a new `useExecutionEvents()` hook that can be used to subscribe
to agent executions
- now both the **Activity Dropdown** and the new **Library Agent Page**
use that
  - so subscribing to executions is centralised on a single place
- Apply a couple of design fixes
- Fix not being able to select the new templates tab 

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run the app locally and verify the above
2025-12-03 16:24:58 +07:00
Ubbe
e62a8fb572 feat(frontend): design updates on new library page... (#11522)
## Changes 🏗️

### Design updates

Design updates on the new Library page. New empty views with
illustration and overall changes on the sidebar and selected run
sections...

<img width="800" height="871" alt="Screenshot 2025-12-03 at 14 03 45"
src="https://github.com/user-attachments/assets/6970af99-dda8-4cd8-a9b5-97fe5ee2032e"
/>

<img width="800" height="844" alt="Screenshot 2025-12-03 at 14 03 52"
src="https://github.com/user-attachments/assets/92e6af79-db9f-4098-961f-9ae3b3300ffa"
/>

<img width="800" height="816" alt="Screenshot 2025-12-03 at 14 03 57"
src="https://github.com/user-attachments/assets/7e23e612-b446-4c53-8ea2-f0e2b1574ec3"
/>

<img width="800" height="862" alt="Screenshot 2025-12-03 at 14 04 07"
src="https://github.com/user-attachments/assets/e3398232-e74b-4a06-8702-30a96862dc00"
/>

### Architecture

- Make selected tabs/items synced with the URL via `?activeTab=` and
`?activeItem=`, so it is easy and predictable to change their state...
- Some minor updates on the design system I missed on previous PRs ... 

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run the app locally and check the new page ( still wip )
2025-12-03 14:33:18 +07:00
Reinier van der Leer
233ff40a24 fix(frontend/marketplace): Fix rendering creator links without schema (#11516)
- [OPEN-2871: TypeError: URL constructor: www.agpt.co is not a valid
URL.](https://linear.app/autogpt/issue/OPEN-2871/typeerror-url-constructor-wwwagptco-is-not-a-valid-url)
- [Sentry Issue BUILDER-56D: TypeError: URL constructor: www.agpt.co is
not a valid
URL.](https://significant-gravitas.sentry.io/issues/7081476631/)

### Changes 🏗️

- Amend URL handling in `CreatorLinks` to correctly handle URLs with
implicit schema
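
A hedged sketch of the idea: prepend a scheme when the stored link omits one so the `URL` constructor no longer throws (helper name is illustrative, not the exact `CreatorLinks` change):

```
export function toAbsoluteUrl(link: string): URL {
  const hasScheme = /^[a-z][a-z0-9+.-]*:\/\//i.test(link);
  return new URL(hasScheme ? link : `https://${link}`);
}

// toAbsoluteUrl("www.agpt.co").hostname === "www.agpt.co"
```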

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - Trivial change, CI is sufficient
2025-12-03 06:57:55 +00:00
seer-by-sentry[bot]
fa567991b3 fix(backend): Handle HTTP errors in HTTP block by returning response objects (#11515)
### Changes 🏗️

- Modify the HTTP block to handle HTTP errors (4xx, 5xx) by returning
response objects instead of raising exceptions.
- This allows proper handling of client_error and server_error outputs.

Fixes
[AUTOGPT-SERVER-6VP](https://sentry.io/organizations/significant-gravitas/issues/7023985892/).
The issue was that: HTTP errors are raised as exceptions by `Requests`
default behavior, bypassing the block's intended error output handling,
resulting in `BlockUnknownError`.

This fix was generated by Seer in Sentry, triggered by Nicholas Tindle.
👁️ Run ID: 4902617

Not quite right? [Click here to continue debugging with
Seer.](https://sentry.io/organizations/significant-gravitas/issues/7023985892/?seerDrawer=true)

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
- [x] Tested with a service that will return 4XX and 5XX errors to make
sure the correct paths are followed



<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> HTTP block now returns 4xx/5xx responses instead of raising, and
Requests gains retry_max_attempts with last-result handling.
> 
> - **Backend**
>   - **HTTP block (`backend/blocks/http.py`)**:
> - Use `Requests(raise_for_status=False, retry_max_attempts=1)` so
4xx/5xx return response objects and route to
`client_error`/`server_error` outputs.
>   - **HTTP client util (`backend/util/request.py`)**:
> - Add `retry_max_attempts` option with `stop_after_attempt` and
`_return_last_result` to return the final response when retries stop.
> - Build `tenacity` retry config dynamically in `Requests.request()`;
validate `retry_max_attempts >= 1` when provided.
> 
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
fccae61c26. This will update automatically
on new commits. Configure
[here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->

---------

Co-authored-by: seer-by-sentry[bot] <157164994+seer-by-sentry[bot]@users.noreply.github.com>
Co-authored-by: Cursor Agent <cursoragent@cursor.com>
Co-authored-by: nicholas.tindle <nicholas.tindle@agpt.co>
2025-12-02 19:00:43 +00:00
Swifty
2cb6fd581c feat(platform): Integration management from external api (#11472)
Allow the external API to manage credentials

### Changes 🏗️

- add the ability for the external API to manage credentials

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] tested it works

<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> Introduces external API endpoints to manage integrations (OAuth
initiation/completion and credential CRUD), adds external OAuth state
fields, and new API key permissions/config.
> 
> - **External API – Integrations**:
> - Add router `backend/server/external/routes/integrations.py` with
endpoints to:
> - `GET /v1/integrations/providers` list providers (incl. default
scopes)
> - `POST /v1/integrations/{provider}/oauth/initiate` and `POST
/oauth/complete` for external OAuth (custom callback, state)
> - `GET /v1/integrations/credentials` and `GET /{provider}/credentials`
to list credentials
> - `POST /{provider}/credentials` to create `api_key`, `user_password`,
`host_scoped` creds; `DELETE /{provider}/credentials/{cred_id}` to
delete
>   - Wire router in `backend/server/external/api.py`.
> - **Auth/Permissions**:
> - Add `APIKeyPermission` values: `MANAGE_INTEGRATIONS`,
`READ_INTEGRATIONS`, `DELETE_INTEGRATIONS` (schema + migration +
OpenAPI).
> - **Data model / Store**:
> - Extend `OAuthState` with external-flow fields: `callback_url`,
`state_metadata`, `api_key_id`, `is_external`.
> - Update `IntegrationCredentialsStore.store_state_token(...)` to
accept/store external OAuth metadata.
> - **OAuth providers**:
> - Set GitHub handler `DEFAULT_SCOPES = ["repo"]` in
`integrations/oauth/github.py`.
> - **Config**:
> - Add `config.external_oauth_callback_origins` in
`backend/util/settings.py` to validate allowed OAuth callback origins.
> 
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
249bba9e59. This will update automatically
on new commits. Configure
[here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->

---------

Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
2025-12-02 17:42:53 +01:00
Nicholas Tindle
55af799083 fix(blocks): clamp Twitter search start_time to 10 seconds before now (#11461)
## Summary
- Clamp `start_time` to at least 10 seconds before request time (Twitter
API requirement)
- Update input description to document this automatic adjustment
- Fix `serialize_list` to handle `None` data gracefully (exposed by the
fix)

## Background
Twitter API returns `400 Bad Request` when `start_time` is less than 10
seconds before the request time. Users providing current/future times
would hit this error.
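
The affected blocks are Python, but the clamping rule itself is language-agnostic; a minimal TypeScript illustration (not the shipped code):

```
// Ensure start_time is at least 10 seconds before the request time.
export function clampStartTime(startTime: Date, now: Date = new Date()): Date {
  const latestAllowed = new Date(now.getTime() - 10_000);
  return startTime > latestAllowed ? latestAllowed : startTime;
}
```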

**Sentry Issue:**
[BUILDER-3PG](https://significant-gravitas.sentry.io/issues/6919685270/)

## Affected Blocks
- `TwitterSearchRecentTweetsBlock`
- `TwitterGetUserMentionsBlock`
- `TwitterGetHomeTimelineBlock`
- `TwitterGetUserTweetsBlock`

## Test plan
- [x] Tested `TwitterSearchRecentTweetsBlock` with current time as
`start_time`
- [x] Verified clamping works and API call succeeds
- [x] Verified "No tweets found" is returned correctly when search
window has no results

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Abhimanyu Yadav <122007096+Abhi1992002@users.noreply.github.com>
2025-12-02 15:45:31 +00:00
Zamil Majdy
6590fcb76f fix(backend): fix broken update_agent_version_in_library and reduce the method code duplication (#11514)
## Summary
Fix broken `update_agent_version_in_library` functionality by eagerly
loading `AgentGraph` while loading the library, and consolidate the
duplicate code that updates the agent version in the library and
configures HITL safe-mode settings.

## Problem
The `update_agent_version_in_library` call currently fails with this
error:
```
  File "/Users/abhi/Documents/AutoGPT/autogpt_platform/backend/backend/server/v2/library/model.py", line 110, in from_db
    raise ValueError("Associated Agent record is required.")
ValueError: Associated Agent record is required.
```

Also, the logic was duplicated across two router endpoints with
identical implementations, creating a maintenance burden and potential
for inconsistencies.

## Changes Made

### Created Helper Method
- Add `_update_library_agent_version_and_settings()` helper function  
- Fixes broken `update_agent_version_in_library` by centralizing the
logic
- Uses proper error handling and settings merging with `model_copy()`

### Replaced Duplicate Code  
- **In `update_graph` function** (v1.py:863) - replaced 13 lines with
single helper call
- **In `set_graph_active_version` function** (v1.py:920) - replaced 13
lines with single helper call

### Benefits
- **Fixes broken functionality**: Centralizes
`update_agent_version_in_library` logic
- **DRY Principle**: Eliminates code duplication across two router
endpoints
- **Maintainability**: Single place to modify the library agent update
logic
- **Consistency**: Ensures both endpoints use identical logic for HITL
safe mode configuration
- **Readability**: Cleaner, more focused endpoint implementations

## Technical Details
The helper method fixes broken `update_agent_version_in_library` by
handling:
1. Updating agent version in library via
`update_agent_version_in_library()`
2. Conditionally setting `human_in_the_loop_safe_mode: true` if graph
has HITL blocks and setting is not already configured
3. Proper settings merging to preserve existing configuration

## Testing
- [x] Code compiles and passes type checking
- [x] Pre-commit hooks pass (linting, formatting, type checking)
- [x] Both affected endpoints maintain same functionality with cleaner
implementation

Fixes broken duplicate code identified in v1.py router endpoints for
`update_agent_version_in_library`.

Co-authored-by: Claude <noreply@anthropic.com>
2025-12-02 14:31:36 +00:00
Abhimanyu Yadav
f987937046 fix(frontend): prune empty values from node inputs and fix object editor connection detection (#11507)
This PR addresses two issues:
1. **Empty values being sent to backend**: Node inputs were including
empty strings, null, and undefined values when converting to backend
format, causing unnecessary data to be sent.
2. **Incorrect connection detection in ObjectEditor**: The component was
checking if the parent field was connected instead of checking
individual key-value pairs, causing incorrect UI state for dynamic
properties.

### Changes 🏗️

<!-- Concisely describe all of the changes made in this pull request:
-->

- **nodeStore.ts**: Added `pruneEmptyValues` utility to remove
empty/null/undefined values from `hardcodedValues` before converting
nodes to backend format. This ensures only meaningful input values are
sent to the backend API.
- **ObjectEditorWidget.tsx**: 
  - Removed unused `fieldKey` prop from component interface
- Fixed connection detection logic to check individual key-value handle
IDs instead of the parent field key, ensuring correct connection state
for each dynamic property in object editors
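
A minimal sketch of the `pruneEmptyValues` idea mentioned above (recursion depth and the exact emptiness rules are assumptions):

```
export function pruneEmptyValues(value: unknown): unknown {
  if (value === "" || value === null || value === undefined) return undefined;
  if (Array.isArray(value)) {
    return value.map(pruneEmptyValues).filter((v) => v !== undefined);
  }
  if (typeof value === "object") {
    const entries = Object.entries(value as Record<string, unknown>)
      .map(([k, v]) => [k, pruneEmptyValues(v)] as const)
      .filter(([, v]) => v !== undefined);
    return Object.fromEntries(entries);
  }
  return value; // numbers, booleans, non-empty strings pass through unchanged
}
```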

### Checklist 📋

#### For code changes:

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Create a node with object input fields and verify empty values are
not sent to backend
  - [x] Add multiple key-value pairs to an object editor widget
- [x] Connect individual key-value pairs via handles and verify
connection state is correctly displayed
- [x] Disconnect a key-value pair and verify the input field becomes
editable
- [x] Save and reload an agent with object inputs to verify empty values
are pruned correctly
- [x] Verify that non-empty values are still correctly preserved and
sent to backend
2025-12-02 12:47:12 +00:00
Abhimanyu Yadav
b8d67fc2a9 fix(frontend): Update isGraphRunning state when manually executing graphs (#11489)
This PR fixes an issue where the `isGraphRunning` state wasn't being
updated when users manually executed graphs. This caused the UI to not
show proper feedback that a graph was running, such as the running
background animation. The fix adds a `setIsGraphRunning` method to the
graph store and ensures it's called both when starting execution (for
immediate UI feedback) and when errors occur (to reset the state).

### Changes 🏗️

- **Added `setIsGraphRunning` method to graphStore**: Provides a way to
manually control the graph running state
- **Set running state on manual execution**: Updates `isGraphRunning` to
`true` immediately when users click the Run button, providing instant UI
feedback
- **Reset state on execution errors**: Ensures `isGraphRunning` is set
back to `false` if the graph execution fails, preventing the UI from
showing incorrect running state
- **Improved UI responsiveness**: Users now see immediate visual
feedback (like the running background) when they start a graph execution
- **Cleaned up unused import**: Removed unnecessary `flowID` from
useQueryStates in Flow component
- I’ve also changed the colour of the running background to a lighter
shade of purple.
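
A minimal sketch of the `setIsGraphRunning` addition described above, assuming a zustand-style graph store (shapes are illustrative):

```
import { create } from "zustand";

type GraphStore = {
  isGraphRunning: boolean;
  setIsGraphRunning: (running: boolean) => void;
};

export const useGraphStore = create<GraphStore>()((set) => ({
  isGraphRunning: false,
  setIsGraphRunning: (running) => set({ isGraphRunning: running }),
}));

// Run handler (illustrative): flip the flag on for instant feedback,
// and flip it back off if execution fails to start.
export async function runGraph(execute: () => Promise<void>) {
  const { setIsGraphRunning } = useGraphStore.getState();
  setIsGraphRunning(true);
  try {
    await execute();
  } catch (error) {
    setIsGraphRunning(false);
    throw error;
  }
}
```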

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Click Run button and verify the running background appears
immediately
  - [x] Test with graphs that require input values via the input dialog
  - [x] Verify running state is cleared when execution completes
- [x] Test error scenarios and confirm running state is reset on failure
- [x] Confirm all UI elements that depend on `isGraphRunning` update
correctly
- [x] Test multiple consecutive runs to ensure state management is
consistent
2025-12-02 12:42:47 +00:00
Zamil Majdy
e4102bf0fb fix(backend): resolve HITL execution context validation and re-enable tests (#11509)
## Summary
Fix critical validation errors in GraphExecutionEntry and
NodeExecutionEntry models that were preventing HITL block execution, and
re-enable HITL test suite.

## Root Cause
After introducing ExecutionContext as a mandatory field, existing
messages in the execution queue lacked this field, causing validation
failures:
```
[GraphExecutor] [ExecutionManager] Could not parse run message: 1 validation error for GraphExecutionEntry
execution_context
  Field required [type=missing, input_value={'user_id': '26db15cb-a29...
```

## Changes Made

### 🔧 Execution Model Fixes (`backend/data/execution.py`)
- **Add backward compatibility**: `execution_context: ExecutionContext =
Field(default_factory=ExecutionContext)`
- **Prevent shared mutable defaults**: Use `default_factory` instead of
direct instantiation to avoid mutation issues
- **Ignore unknown fields**: Add `model_config = {"extra": "ignore"}`
for future compatibility
- **Applied to both models**: GraphExecutionEntry and NodeExecutionEntry

### 🧪 Re-enable HITL Test Suite
- **Remove test skips**: Remove `pytestmark = pytest.mark.skip()` from
`human_review_test.py`
- **Remove test skips**: Remove `pytestmark = pytest.mark.skip()` from
`review_routes_test.py`
- **Restore test coverage**: HITL functionality now properly tested in
CI

## Default ExecutionContext Behavior
When execution_context is missing from old messages, provides safe
defaults:
- `safe_mode: bool = True` (HITL blocks require approval by default)
- `user_timezone: str = "UTC"` (safe timezone default)  
- `root_execution_id: Optional[str] = None`
- `parent_execution_id: Optional[str] = None`

## Impact
- **Fixes deployment validation errors** in dev environment
- **Maintains backward compatibility** with existing queue messages
- **Restores proper HITL test coverage** in CI
- **Ensures isolation**: Each execution gets its own ExecutionContext
instance
- **Future-proofs**: Protects against message format changes

## Testing
- [x] HITL test suite re-enabled and should pass in CI
- [x] Existing executions continue to work with sensible defaults
- [x] New executions receive proper ExecutionContext from caller
- [x] Verified `default_factory` prevents shared mutable instances

## Files Changed
- `backend/data/execution.py` - Add backward-compatible ExecutionContext
defaults
- `backend/data/human_review_test.py` - Re-enable test suite
- `backend/server/v2/executions/review/review_routes_test.py` -
Re-enable test suite

Resolves the "Field required" validation error preventing HITL block
execution in dev environment.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

Co-authored-by: Claude <noreply@anthropic.com>
2025-12-02 11:02:21 +00:00
Zamil Majdy
7b951c977e feat(platform): implement graph-level Safe Mode toggle for HITL blocks (#11455)
## Summary

This PR implements a graph-level Safe Mode toggle system for
Human-in-the-Loop (HITL) blocks. When Safe Mode is ON (default), HITL
blocks require manual review before proceeding. When OFF, they execute
automatically.

## 🔧 Backend Changes

- **Database**: Added `metadata` JSON column to `AgentGraph` table with
migration
- **API**: Updated `execute_graph` endpoint to accept `safe_mode`
parameter
- **Execution**: Enhanced execution context to use graph metadata as
default with API override capability
- **Auto-detection**: Automatically populate `has_human_in_the_loop` for
graphs containing HITL blocks
- **Block Detection**: HITL block ID:
`8b2a7b3c-6e9d-4a5f-8c1b-2e3f4a5b6c7d`

## 🎨 Frontend Changes

- **Component**: New `FloatingSafeModeToggle` with dual variants:
  - **White variant**: For library pages, integrates with action buttons
  - **Black variant**: For builders, floating positioned  
- **Integration**: Added toggles to both new/legacy builders and library
pages
- **API Integration**: Direct graph metadata updates via
`usePutV1UpdateGraphVersion`
- **Query Management**: React Query cache invalidation for consistent UI
updates
- **Conditional Display**: Toggle only appears when graph contains HITL
blocks

## 🛠 Technical Implementation

- **Safe Mode ON** (default): HITL blocks require manual review before
proceeding
- **Safe Mode OFF**: HITL blocks execute automatically without
intervention
- **Priority**: Backend API `safe_mode` parameter takes precedence over
graph metadata
- **Detection**: Auto-populates `has_human_in_the_loop` metadata field
- **Positioning**: Proper z-index and responsive positioning for
floating elements

## 🚧 Known Issues (Work in Progress)

### High Priority
- [ ] **Toggle state persistence**: Always shows "ON" regardless of
actual state - query invalidation issue
- [ ] **LibraryAgent metadata**: Missing metadata field causing
TypeScript errors
- [ ] **Tooltip z-index**: Still covered by some UI elements despite
high z-index

### Medium Priority  
- [ ] **HITL detection**: Logic needs improvement for reliable block
detection
- [ ] **Error handling**: Removing HITL blocks from graph causes save
errors
- [ ] **TypeScript**: Fix type mismatches between GraphModel and
LibraryAgent

### Low Priority
- [ ] **Frontend API**: Add `safe_mode` parameter to execution calls
once OpenAPI is regenerated
- [ ] **Performance**: Consider debouncing rapid toggle clicks

## 🧪 Test Plan

- [ ] Verify toggle appears only when graph has HITL blocks
- [ ] Test toggle persistence across page refreshes  
- [ ] Confirm API calls update graph metadata correctly
- [ ] Validate execution behavior respects safe mode setting
- [ ] Check styling consistency across builder and library contexts

## 🔗 Related

- Addresses requirements for graph-level HITL configuration
- Builds on existing FloatingReviewsPanel infrastructure
- Integrates with existing graph metadata system

🤖 Generated with [Claude Code](https://claude.ai/code)
2025-12-02 09:55:55 +00:00
Nicholas Tindle
3b2a67a711 ci(claude): add free disk space step to prevent runner out of space (#11503)
## Summary
- Add `jlumbroso/free-disk-space@v1.3.1` action to claude.yml workflow
- Frees up disk space on Ubuntu runner before heavy Docker and
dependency setup
- Skips `large-packages` (slow) and `docker-images` (needed for
workflow)

## Test plan
- [x] Verify claude.yml workflow runs successfully without disk space
errors

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude <noreply@anthropic.com>
2025-12-01 20:06:56 +00:00
Bently
a58ac2150f feat(backend/blocks): add Discord create thread block (#11131)
Adds a new Discord block that allows users to create threads in Discord
channels. This addresses issue OPEN-2666 which requested the ability to
create Discord threads from workflows.

## Solution

Implemented `CreateDiscordThreadBlock` in
`autogpt_platform/backend/backend/blocks/discord/bot_blocks.py` with the
following features:

- Create public or private threads in Discord channels via bot token
- Support for both channel ID and channel name lookup (with optional
server name)
- Configurable thread type (public/private toggle)
- Configurable auto-archive duration (60, 1440, 4320, or 10080 minutes)
- Optional initial message to send in the newly created thread
- Outputs: status, thread_id, and thread_name for workflow chaining

The block follows the existing Discord block patterns and includes
proper error handling for permissions, channel not found, and login
failures.
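
A hedged sketch of the core thread-creation call using discord.py; this is not the block's actual implementation (which also resolves channels by name and maps permission/login errors), just the underlying API usage:

```python
# Sketch only: assumes an already-resolved discord.TextChannel from a
# logged-in bot client; error handling is omitted for brevity.
import discord


async def create_thread(
    channel: discord.TextChannel,
    name: str,
    private: bool = False,
    auto_archive_minutes: int = 1440,  # 60, 1440, 4320, or 10080
    initial_message: str | None = None,
) -> tuple[int, str]:
    thread = await channel.create_thread(
        name=name,
        type=discord.ChannelType.private_thread
        if private
        else discord.ChannelType.public_thread,
        auto_archive_duration=auto_archive_minutes,
    )
    if initial_message:
        await thread.send(initial_message)  # optional first message in the thread
    return thread.id, thread.name  # outputs that can be chained to later blocks
```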

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Verify block appears in the workflow builder UI
- [x] Test creating a public thread with valid bot token and channel ID
- [x] Test creating a private thread with valid bot token and channel
name
  - [x] Test with invalid channel ID/name to verify error handling
  - [x] Test with bot lacking thread creation permissions
  - [x] Verify thread_id output can be chained to subsequent blocks
- [x] Test auto-archive duration options (60, 1440, 4320, 10080 minutes)
  - [x] Test sending initial message in newly created thread

Video to show the blocks working!



https://github.com/user-attachments/assets/f248f315-05b3-47e2-bd6b-4c64d53c35fc
2025-12-01 20:01:27 +00:00
Swifty
9f37342bc6 feat(platform): Simplify the chat tool system to use only 2 tools (#11464)
Simplifying the chat tool system to use only 2 tools

### Changes 🏗️

- remove old tools
- expand run_agent tool to include all stages

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] tested adding credentials work
  - [x] tested running an agent works
  - [x] tested scheduling an agent works
2025-12-01 20:56:16 +01:00
Swifty
7dc3b201b7 feat(platform): Explain None Message in BlockError Messages (#11490)
Sometimes block errors are raised with the message set to None; we now handle
this case

### Changes 🏗️

- handle case of None message
- Add tests

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] write unit tests

<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> Ensure `BlockExecutionError` and `BlockUnknownError` provide default
messages when given None/empty input, with new unit tests covering
formatting and inheritance.
> 
> - **Backend**:
>   - `backend/util/exceptions.py`:
> - `BlockExecutionError`: default `None` message to `"Output error was
None"`.
> - `BlockUnknownError`: default empty/`None` message to `"Unknown error
occurred"`.
> - **Tests**:
> - `backend/util/exceptions_test.py`: add tests for message formatting,
`None`/empty handling, and exception inheritance.
> 
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
6e9b31ae47. This will update automatically
on new commits. Configure
[here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->
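
A minimal stand-alone sketch of the defaulting behaviour described in the note above (illustrative class shapes, not the platform's actual exception hierarchy in `backend/util/exceptions.py`):

```python
# Illustrative sketch of defaulting None/empty messages; the real classes
# may differ in structure and base types.
class BlockError(Exception):
    """Base class for block errors."""


class BlockExecutionError(BlockError):
    def __init__(self, message: str | None = None):
        super().__init__(message if message is not None else "Output error was None")


class BlockUnknownError(BlockError):
    def __init__(self, message: str | None = None):
        super().__init__(message or "Unknown error occurred")


assert str(BlockExecutionError(None)) == "Output error was None"
assert str(BlockUnknownError("")) == "Unknown error occurred"
```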

Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
2025-12-01 20:55:02 +01:00
Swifty
7d53c0de27 fix(backend): Fix Youtube blocking our cloud ips (#11456)
YouTube can block cloud IPs, causing the YouTube transcribe blocks to stop
working. This PR adds a Webshare proxy to work around this issue

### Changes 🏗️

- add webshare proxy to youtube transcribe block 

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] I have tested this works locally using the proxy

<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> Routes YouTube transcript fetching through Webshare proxy using
user/password credentials, wiring in provider enum, settings, default
credentials, and updated tests.
> 
> - **Blocks** (`backend/blocks/youtube.py`):
> - Use `WebshareProxyConfig` with `YouTubeTranscriptApi` to fetch
transcripts via proxy.
> - Add `credentials` input (`user_password` for `webshare_proxy`);
include test credentials and mocks.
> - Update method signatures: `get_transcript(video_id, credentials)`
and `run(..., *, credentials, ...)`.
>   - Change description to indicate proxy usage; add logging.
> - **Integrations**:
> - Providers (`backend/integrations/providers.py`): add
`ProviderName.WEBSHARE_PROXY`.
> - Credentials store (`backend/integrations/credentials_store.py`): add
`webshare_proxy` `UserPasswordCredentials`; include in
`DEFAULT_CREDENTIALS` and conditionally in `get_all_creds`.
> - **Settings** (`backend/util/settings.py`): add secrets
`webshare_proxy_username` and `webshare_proxy_password`.
> - **Tests** (`test/blocks/test_youtube.py`): update to pass
credentials and assert proxy config; add custom-credentials test; adjust
fallback/priority tests.
> 
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
d060898488. This will update automatically
on new commits. Configure
[here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->
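
A hedged usage sketch, assuming a recent `youtube-transcript-api` release that exposes `WebshareProxyConfig`; in the real block the username and password come from the platform's credentials store rather than literals:

```python
from youtube_transcript_api import YouTubeTranscriptApi
from youtube_transcript_api.proxies import WebshareProxyConfig


def fetch_transcript(video_id: str, proxy_username: str, proxy_password: str):
    api = YouTubeTranscriptApi(
        proxy_config=WebshareProxyConfig(
            proxy_username=proxy_username,
            proxy_password=proxy_password,
        )
    )
    # Requests are routed through Webshare's rotating proxies, sidestepping
    # YouTube's blocking of well-known cloud IP ranges.
    return api.fetch(video_id)
```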

Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
2025-12-01 20:54:52 +01:00
Nicholas Tindle
0728f3bd49 fix(backend): Remove Google Sheets API scopes from block inputs (#11484)
Eliminates explicit Google Sheets API scopes from credentials fields in
all Google Sheets-related blocks. This change may be intended to
centralize or dynamically manage API scopes elsewhere, simplifying block
configuration.

<!-- Clearly explain the need for these changes: -->

### Changes 🏗️
- removes the scopes we aren't approved to use
<!-- Concisely describe all of the changes made in this pull request:
-->

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Bently tested it on his fresh account and it worked!
2025-12-01 18:07:31 +00:00
Lluis Agusti
eb7e919450 Revert "chore: experiment"
This reverts commit 686412e7df.
2025-12-01 21:20:51 +07:00
Lluis Agusti
686412e7df chore: experiment 2025-12-01 21:19:21 +07:00
Ubbe
b4e95dba14 feat(frontend): update empty task view designs (#11498)
## Changes 🏗️

Update the new library agent page's empty view to look like:

<img width="900" height="1060" alt="Screenshot 2025-12-01 at 14 12 10"
src="https://github.com/user-attachments/assets/e6a22a4f-35f4-434e-bbb1-593390034b9a"
/>

Now we display an **About this agent** card on the left when the agent
is published on the marketplace. I expanded the:
```
/api/library/agents/{id}
```
endpoint to also return the following:
```js
{
  // ...
  created_at: "timestamp",
  marketplace_listing: {
  creator: { name: "string", slug: "string", id: "string" },
    name: "string",
    slug: "string",
    id: "string"
  }
}
```
This lets us display the extra information on the card and link to the
creator and marketplace pages.

Also:
- design system updates regarding navbar and colors

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run locally and see the new page for an agent with no runs
2025-12-01 20:28:44 +07:00
Swifty
00148f4e3d feat(platform): add external api routes for store search and tool usage (#11463)
We want to allow external tools to explore the marketplace and use the
chat agent tools


### Changes 🏗️

- add store api routes
- add tool api routes

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] tested all endpoints work

---------

Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
2025-12-01 12:04:03 +00:00
Ubbe
6db18b8445 feat(frontend): design system tokens update (#11501)
## Changes 🏗️

Update tokens of the design system with new values 🖌️ 🎨 

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Storybook build passes, no type errors
2025-12-01 18:14:10 +07:00
Abhimanyu Yadav
627a33468b feat(frontend): Default node output accordion to expanded state (#11483)
This PR improves the user experience by defaulting the node output
accordion to an expanded state. Previously, users had to manually expand
the accordion to view execution results, which added an unnecessary
click to the workflow. With this change, output data is immediately
visible when available, allowing users to quickly see the results of
their node executions. The ability to collapse the accordion is
preserved for users who prefer a more compact interface.

### Changes 🏗️

- **Changed default state of node output accordion**: The node output
section now defaults to expanded (`isExpanded = true`) instead of
collapsed
- **Improved user experience**: Users can now immediately see node
execution results without needing to manually expand the accordion
- **Maintains collapse functionality**: Users can still manually
collapse the accordion if they prefer a more compact view

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Execute a node and verify the output accordion is expanded by
default
  - [x] Verify output data is immediately visible after node execution
  - [x] Test that the accordion can still be manually collapsed
- [x] Confirm the accordion state resets to expanded when switching
between nodes
- [x] Test with different types of output data (simple values, objects,
arrays)
2025-12-01 05:57:08 +00:00
Abhimanyu Yadav
ea4b55f967 feat(frontend): Add minimum movement threshold for node position history tracking (#11481)
This PR implements a minimum movement threshold of 50 pixels for node
position changes before they are logged to the history system. This
prevents the undo/redo history from being cluttered with minor,
unintentional movements that occur when users interact with block inputs
or accidentally nudge nodes.

### Changes 🏗️

- **Added movement threshold for history tracking**: Implemented a 50px
minimum movement requirement before logging node position changes to
history
- **Prevents history spam**: Stops small, unintentional movements (like
clicking on inputs inside blocks) from cluttering the undo/redo history
- **Tracks drag start positions**: Maintains initial positions when
dragging begins to accurately calculate total movement distance (see the
sketch after this list)
- **Improved history management**: Only significant node movements are
now recorded, matching the behavior of the old builder
- **Memory efficient**: Cleans up tracked positions after drag
operations complete
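
The thresholding itself is simple; here is a language-agnostic sketch in Python (the actual implementation is TypeScript in the frontend node store, operating on ReactFlow node positions):

```python
# Sketch of the 50px minimum-movement rule for recording history entries.
import math

MOVE_THRESHOLD_PX = 50.0
drag_start_positions: dict[str, tuple[float, float]] = {}


def on_drag_start(node_id: str, x: float, y: float) -> None:
    drag_start_positions[node_id] = (x, y)


def should_record_move(node_id: str, x: float, y: float) -> bool:
    """Return True only if the node moved far enough to log an undo entry."""
    start_x, start_y = drag_start_positions.pop(node_id, (x, y))
    return math.hypot(x - start_x, y - start_y) >= MOVE_THRESHOLD_PX


on_drag_start("node-1", 0.0, 0.0)
print(should_record_move("node-1", 30.0, 10.0))  # False: ~31.6px, below threshold
```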

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Drag a node less than 50px and verify no history entry is created
  - [x] Drag a node more than 50px and verify history entry is created
- [x] Click on inputs inside blocks and verify no history entries are
created
  - [x] Test undo/redo functionality works correctly with the threshold
  - [x] Verify adding/removing nodes still creates history entries
  - [x] Test multiple nodes being dragged simultaneously
2025-12-01 05:43:50 +00:00
Abhimanyu Yadav
321ab8a48a feat(frontend): Add trigger agent banner for webhook-based flows (#11480)
This PR addresses the need for better user awareness when building
trigger-based agents. When a user adds webhook/trigger nodes to their
flow, a prominent banner now appears at the bottom of the builder
informing them they're creating a "Trigger Agent" and providing a direct
link to monitor its activity in the Agent Library.

### Changes 🏗️

- **Added TriggerAgentBanner component**: New banner that displays at
the bottom of the builder when the flow contains webhook/trigger nodes
- **Implemented webhook node detection**: Added `hasWebhookNodes` method
to nodeStore that checks if any nodes are of type WEBHOOK or
WEBHOOK_MANUAL (see the sketch after this list)
- **Conditional banner display**: Builder now shows TriggerAgentBanner
instead of BuilderActions when webhook nodes are present
- **Dynamic library link**: Banner includes a link to the specific agent
in the library (if found) or defaults to the general library page
- **Integrated with existing flow context**: Uses the current flowID to
fetch the corresponding library agent for proper linking
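
A short Python sketch of the detection predicate (the real `hasWebhookNodes` is a TypeScript method on the node store; the type names mirror the PR description):

```python
# Sketch: the banner is shown whenever any node is a webhook/trigger node.
WEBHOOK_NODE_TYPES = {"WEBHOOK", "WEBHOOK_MANUAL"}


def has_webhook_nodes(node_types: list[str]) -> bool:
    return any(node_type in WEBHOOK_NODE_TYPES for node_type in node_types)


print(has_webhook_nodes(["STANDARD", "WEBHOOK"]))  # True -> show TriggerAgentBanner
```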

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Add a webhook node to a flow and verify the banner appears
  - [x] Remove all webhook nodes and verify the banner disappears
  - [x] Test with both WEBHOOK and WEBHOOK_MANUAL node types
- [x] Click the library link and verify it navigates to the correct
agent page
  - [x] Test library link fallback when agent is not found in library
- [x] Verify banner styling and positioning at bottom center of builder
2025-12-01 05:43:04 +00:00
Abhimanyu Yadav
cd6a8cbd47 fix(frontend): Include edges when copying multiple nodes with Shift+Select (#11478)
This PR fixes an issue where edges were not being copied when selecting
multiple nodes with Shift+Select.

### Changes 🏗️

- **Fixed edge copying logic**: Removed the requirement for edges to be
explicitly selected; copying now automatically includes all edges between
selected nodes (see the sketch after this list)
- **Migrated to custom stores**: Refactored copy-paste functionality to
use `useNodeStore` and `useEdgeStore` instead of ReactFlow's built-in
state management
- **Improved type safety**: Replaced generic `Node` and `Edge` types
with `CustomNode` and `CustomEdge` for better type checking
- **Enhanced paste behavior**: 
- Deselects existing nodes before pasting to ensure only pasted nodes
are selected
- Uses store methods for adding nodes and edges, ensuring proper state
management
- **Simplified node data handling**: Removed manual clearing of
backend_id, status, and nodeExecutionResult - now handled by the store's
addNode method
- **Added debugging support**: Added console logging for copied data to
aid in troubleshooting
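
A Python sketch of the revised selection rule: copy every edge whose endpoints are both in the copied node set, regardless of whether the edge itself was selected (the actual logic lives in the frontend edge store):

```python
# Sketch: edges no longer need to be selected themselves; membership of both
# endpoints in the selected-node set is enough.
def edges_to_copy(selected_node_ids: set[str], edges: list[dict]) -> list[dict]:
    return [
        edge
        for edge in edges
        if edge["source"] in selected_node_ids and edge["target"] in selected_node_ids
    ]


edges = [
    {"id": "e1", "source": "a", "target": "b"},
    {"id": "e2", "source": "b", "target": "c"},
]
print(edges_to_copy({"a", "b"}, edges))  # only e1 is copied
```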

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Select multiple nodes using Shift+Select and copy with Ctrl/Cmd+C
  - [x] Verify edges between selected nodes are included in the copy
  - [x] Paste with Ctrl/Cmd+V and confirm both nodes and edges appear
  - [x] Verify pasted elements maintain correct connections
  - [x] Confirm only pasted nodes are selected after paste
  - [x] Test that pasted nodes have unique IDs
  - [x] Verify pasted nodes appear centered in the current viewport
2025-12-01 05:41:25 +00:00
Abhimanyu Yadav
7fabbb25c4 fix(frontend): Preserve customized_name only when explicitly set in node metadata (#11477)
This PR fixes an issue where undefined `customized_name` values were
being included in node metadata, which could cause issues with data
persistence and agent exports. The fix ensures that the
`customized_name` property is only included when it has been explicitly
set by the user.

### Changes 🏗️

- **Fixed metadata serialization**: Modified `getNodeAsGraphNode` to
only include `customized_name` in the metadata when it has been
explicitly set (not undefined)
- **Improved data structure**: Uses object spread syntax to
conditionally include the `customized_name` property, preventing
undefined values from being stored (a Python analogue is sketched after this list)
- **Cleaned up code**: Removed unnecessary TODO comment and improved
code readability
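
The fix uses TypeScript object spread; a rough Python analogue of the same rule, including the key only when a value was explicitly set:

```python
# Python analogue of the conditional-spread fix: omit customized_name from
# the serialized metadata unless the user actually set it.
from typing import Any, Optional


def build_metadata(base: dict[str, Any], customized_name: Optional[str]) -> dict[str, Any]:
    return {
        **base,
        **({"customized_name": customized_name} if customized_name is not None else {}),
    }


print(build_metadata({"position": {"x": 0, "y": 0}}, None))        # key omitted
print(build_metadata({"position": {"x": 0, "y": 0}}, "My Block"))  # key included
```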

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Create a new agent and verify metadata doesn't include undefined
customized_name
- [x] Customize a block name and verify it's properly saved in metadata
- [x] Export an agent and verify the JSON doesn't contain undefined
customized_name fields
- [x] Import an agent with customized names and verify they are
preserved
- [x] Save and reload an agent to ensure metadata persistence works
correctly
2025-12-01 05:40:52 +00:00
Abhimanyu Yadav
35eb563241 feat(platform): enhance BlockMenuSearch with agent addition (#11474)
This PR enables users to add agents directly to the builder from search
results and marketplace views. Previously, users had to navigate to
different sections to add agents - now they can do it with a single
click from wherever they find the agent. The change includes proper
loading states, error handling, and success notifications to provide a
smooth user experience.

### Changes 🏗️

- **Added direct agent-to-builder functionality**: Users can now add
agents directly to the builder from search results and marketplace views
- **Created reusable hook `useAddAgentToBuilder`**: Centralized logic
for adding both library and marketplace agents to the builder
- **Enhanced search results interaction**: Added click handlers and
loading states to agent cards in search results
- **Improved marketplace agent addition**: Marketplace agents are now
added to both library and builder with proper feedback
- **Added loading states**: Visual feedback when agents are being added
(loading spinners on cards)
- **Improved error handling**: Added toast notifications for success and
failure cases with descriptive error messages
- **Added Sentry error tracking**: Captures exceptions for better
debugging in production

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Search for agents and add them to builder from search results
- [x] Add marketplace agents which should appear in both library and
builder
  - [x] Verify loading states appear during agent addition
  - [x] Test error scenarios (network failure, invalid agent)
- [x] Confirm toast notifications appear for both success and error
cases
  - [x] Verify builder viewport centers on newly added agent
2025-12-01 05:26:38 +00:00
Ubbe
a37b527744 refactor(frontend): agent runs view folder structure (#11475)
## Changes 🏗️

Re-arrange the folder structure of the new Library page sub-components
so they are grouped by location:

### Before

<img width="238" height="506" alt="Screenshot 2025-11-27 at 23 45 27"
src="https://github.com/user-attachments/assets/429fda6e-bf74-4d80-9306-028365789ca1"
/>

All components were on a single level, which works fine for simpler
pages without many sub-components, but this page has so much
functionality that it ends up messier...

### After 

<img width="226" height="517" alt="Screenshot 2025-11-27 at 23 45 46"
src="https://github.com/user-attachments/assets/99c098ea-ff11-4779-bad8-7d524bf91605"
/>


### Imports order

I edited some files, and the linter/formatter automatically sorted
import order as per the lint plugin.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run the new library agent page locally and click around
2025-11-28 19:00:41 +07:00
Zamil Majdy
3d08c22dd5 feat(platform): add Human In The Loop block with review workflow (#11380)
## Summary
This PR implements a comprehensive Human In The Loop (HITL) block that
allows agents to pause execution and wait for human
approval/modification of data before continuing.



https://github.com/user-attachments/assets/c027d731-17d3-494c-85ca-97c3bf33329c


## Key Features
- Added WAITING_FOR_REVIEW status to AgentExecutionStatus enum
- Created PendingHumanReview database table for storing review requests
- Implemented HumanInTheLoopBlock that extracts input data and creates
review entries (see the sketch after this list)
- Added API endpoints at /api/executions/review for fetching and
reviewing pending data
- Updated execution manager to properly handle waiting status and resume
after approval
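
A highly simplified, hypothetical sketch of the pause/resume control flow; the real implementation persists `PendingHumanReview` rows and is driven by the execution manager and the `/api/executions/review` endpoints:

```python
# Hypothetical sketch only: names and structure are illustrative.
from dataclasses import dataclass
from enum import Enum, auto
from typing import Any, Optional


class ExecutionStatus(Enum):
    RUNNING = auto()
    WAITING_FOR_REVIEW = auto()
    COMPLETED = auto()


@dataclass
class Execution:
    status: ExecutionStatus = ExecutionStatus.RUNNING
    pending_review: Optional[dict[str, Any]] = None

    def request_review(self, data: dict[str, Any]) -> None:
        self.pending_review = data  # stored for the human reviewer
        self.status = ExecutionStatus.WAITING_FOR_REVIEW

    def approve(self, edited: Optional[dict[str, Any]] = None) -> dict[str, Any]:
        approved = edited if edited is not None else (self.pending_review or {})
        self.pending_review = None
        self.status = ExecutionStatus.RUNNING  # execution resumes with approved data
        return approved


execution = Execution()
execution.request_review({"draft": "email body"})
print(execution.status)                                   # WAITING_FOR_REVIEW
print(execution.approve({"draft": "edited email body"}))  # resumes with edits
```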

## Frontend Components
- PendingReviewCard for individual review handling
- PendingReviewsList for multiple reviews
- FloatingReviewsPanel for graph builder integration
- Integrated review UI into 3 locations: legacy library, new library,
and graph builder

## Technical Implementation
- Added proper type safety throughout with SafeJson handling
- Optimized database queries using count functions instead of full data
fetching
- Fixed imports to be top-level instead of local
- All formatters and linters pass

## Test plan
- [ ] Test Human In The Loop block creation in graph builder
- [ ] Test block execution pauses and creates pending review
- [ ] Test review UI appears in all 3 locations
- [ ] Test data modification and approval workflow
- [ ] Test rejection workflow
- [ ] Test execution resumes after approval

🤖 Generated with [Claude Code](https://claude.ai/code)

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

* **New Features**
* Added Human-In-The-Loop review workflows to pause executions for human
validation.
* Users can approve or reject pending tasks, optionally editing
submitted data and adding a message.
* New "Waiting for Review" execution status with UI indicators across
run lists, badges, and activity views.
* Review management UI: pending review cards, list view, and a floating
reviews panel for quick access.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-11-27 12:07:46 +07:00
Zamil Majdy
ff5dd7a5b4 fix(backend): migrate all query_raw calls to query_raw_with_schema for proper schema handling (#11462)
## Summary

Complete migration of all non-test `query_raw` calls to use
`query_raw_with_schema` for proper PostgreSQL schema context handling.
This resolves the marketplace API failures where queries were looking
for unqualified table names.

## Root Cause

Prisma's `query_raw()` doesn't respect the `schema` parameter in
`DATABASE_URL` (`?schema=platform`) for raw SQL queries, causing queries
to fail when looking for unqualified table names in multi-schema
environments.

## Changes Made

### Files Updated
-  **backend/server/v2/store/db.py**: Already updated in previous
commit
-  **backend/server/v2/builder/db.py**: Updated `get_suggested_blocks`
query at line 343
-  **backend/check_store_data.py**: Updated all 4 `query_raw` calls to
use schema-aware queries
-  **backend/check_db.py**: Updated all `query_raw` calls (import
already existed)

### Technical Implementation
- Add import: `from backend.data.db import query_raw_with_schema`
- Replace `prisma.get_client().query_raw()` with
`query_raw_with_schema()`
- Add `{schema_prefix}` placeholder to table references in SQL queries
- Fix f-string template conflicts by using double braces
`{{schema_prefix}}`

### Query Examples

**Before:**
```sql
FROM "StoreAgent"
FROM "AgentNodeExecution" execution
```

**After:**
```sql  
FROM {schema_prefix}"StoreAgent"
FROM {schema_prefix}"AgentNodeExecution" execution
```
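
A hedged usage sketch based on the description above; the exact signature and return shape of `query_raw_with_schema` live in `backend/data/db.py` and may differ from what is assumed here:

```python
from backend.data.db import query_raw_with_schema


async def count_store_agents() -> int:
    # {schema_prefix} is expanded by the helper (e.g. to '"platform".'),
    # so the same query works in multi-schema and public-schema deployments.
    rows = await query_raw_with_schema(
        'SELECT COUNT(*) AS total FROM {schema_prefix}"StoreAgent"'
    )
    return rows[0]["total"]
```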

## Impact

-  All raw SQL queries now properly respect platform schema context
-  Fixes "relation does not exist" errors in multi-schema environments
-  Maintains backward compatibility with public schema deployments
-  Code formatting passes with `poetry run format`

## Testing

- All `query_raw` usages in non-test code successfully migrated
- `query_raw_with_schema` automatically handles schema prefix injection
- Existing query logic unchanged, only schema awareness added

## Before/After

**Before:** GET /api/store/agents → "relation 'StoreAgent' does not
exist"
**After:** GET /api/store/agents →  Returns store agents correctly

Resolves the marketplace API failures and ensures consistent schema
handling across all raw SQL operations.

Co-authored-by: Claude <noreply@anthropic.com>
2025-11-27 04:04:20 +00:00
Nicholas Tindle
02f8a69c6a feat(platform): add Google Drive Picker field type for enhanced file selection (#11311)
### 🏗️ Changes 

This PR adds a Google Drive Picker field type to enhance the user
experience of existing Google blocks, replacing manual file ID entry
with a visual file picker.

#### Backend Changes
- **Added  and  types** in :
  - Configurable picker field with OAuth scope management
  - Support for multiselect, folder selection, and MIME type filtering
  - Proper access token handling for file downloads
- **Enhanced Gmail blocks**: Updated attachment fields to use Google
Drive Picker for better UX
- **Enhanced Google Sheets blocks**: Updated spreadsheet selection to
use picker instead of manual ID entry
- **Added utility**: Async file download with virus scanning and 100MB
size limit

#### Frontend Changes  
- **Enhanced GoogleDrivePicker component**: Improved UI with folder icon
and multiselect messaging
- **Integrated picker in form renderers**: Auto-renders for fields with
format
- **Added shared GoogleDrivePickerInput component**: Eliminates code
duplication between NodeInputs and RunAgentInputs
- **Added type definitions**: Complete TypeScript support for picker
schemas and responses

#### Key Features
- 🎯 **Visual file selection**: Replace manual Google Drive file ID entry
with intuitive picker
- 📁 **Flexible configuration**: Support for documents, spreadsheets,
folders, and custom MIME types
- 🔒 **Minimal OAuth scopes**: Uses scope for security (only access to
user-selected files)
-  **Enhanced UX**: Seamless integration in both block configuration
and agent run modals
- 🛡️ **Security**: Virus scanning and file size limits for downloaded
attachments

#### Migration Impact
- **Backward compatible**: Existing blocks continue to work with manual
ID entry
- **Progressive enhancement**: New picker fields provide better UX for
the same functionality
- **No breaking changes**: all existing blocks should be unaffected

This enhancement improves the user experience of Google blocks without
introducing new systems or breaking existing functionality.


### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Test multiple of the new blocks (note: the Create Spreadsheet block
should not be used for now, as it uses the API rather than the Drive Picker)
  - [x] chain the blocks together and pass values between them

---------

Co-authored-by: Lluis Agusti <hi@llu.lu>
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
Co-authored-by: Claude <noreply@anthropic.com>
2025-11-27 03:01:29 +00:00
Toran Bruce Richards
e983d5c49a fix(backend): Implement passed uploaded media support for AI image customizer block (#11441)
- Added `store_media_file` utility to convert local file paths to Data
URIs for image processing.
- Updated `AIImageCustomizerBlock` to utilize processed images in model
execution, improving compatibility with Replicate API.
- Added optional Aspect ratio input to AIImageCustomizerBlock

This change enhances the image handling capabilities of the AI image
customizer, ensuring that images are properly formatted for external
processing.

<!-- Clearly explain the need for these changes: -->

### Changes 🏗️

<!-- Concisely describe all of the changes made in this pull request:
-->

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
- [x] Created agent using AI Image Customizer block attached to agent
file input
  - [x] Run agent, confirmed block is working
- [x] Confirm block is still working in original direct file upload
setup.


### Testing Results

#### Before (dev cloud):
<img width="836" height="592" alt="image"
src="https://github.com/user-attachments/assets/88c75668-c5c9-44bb-bec5-6554088a0cb7"
/>


#### After (local):
<img width="827" height="587" alt="image"
src="https://github.com/user-attachments/assets/04fea431-70a5-4173-bc84-d354c03d7174"
/>

<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> Preprocesses input images to data URIs and adds an `aspect_ratio`
option, wiring both through to Replicate in `AIImageCustomizerBlock`.
> 
> - **Backend**
>   - **`backend/blocks/ai_image_customizer.py`**:
> - Preprocesses input images via `store_media_file(...,
return_content=True)` to Data URIs before invoking Replicate.
> - Adds `AspectRatio` enum and `aspect_ratio` input; passed through
`run_model` and included in Replicate input.
>     - Updates block test input accordingly.
> 
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
4116cf80d7. This will update automatically
on new commits. Configure
[here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->

---------

Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
2025-11-27 00:41:45 +00:00
Lluis Agusti
bdb94a3cf9 hotfix(frontend): clear cache account on logout 2025-11-26 23:33:17 +07:00
612 changed files with 83713 additions and 11895 deletions

View File

@@ -142,7 +142,7 @@ pnpm storybook # Start component development server
### Security & Middleware
**Cache Protection**: Backend includes middleware preventing sensitive data caching in browsers/proxies
**Authentication**: JWT-based with Supabase integration
**Authentication**: JWT-based with native authentication
**User ID Validation**: All data access requires user ID checks - verify this for any `data/*.py` changes
### Development Workflow
@@ -168,9 +168,9 @@ pnpm storybook # Start component development server
- `frontend/src/app/layout.tsx` - Root application layout
- `frontend/src/app/page.tsx` - Home page
- `frontend/src/lib/supabase/` - Authentication and database client
- `frontend/src/lib/auth/` - Authentication client
**Protected Routes**: Update `frontend/lib/supabase/middleware.ts` when adding protected routes
**Protected Routes**: Update `frontend/middleware.ts` when adding protected routes
### Agent Block System
@@ -194,7 +194,7 @@ Agents are built using a visual block-based system where each block performs a s
1. **Backend**: `/backend/.env.default` → `/backend/.env` (user overrides)
2. **Frontend**: `/frontend/.env.default` → `/frontend/.env` (user overrides)
3. **Platform**: `/.env.default` (Supabase/shared) → `/.env` (user overrides)
3. **Platform**: `/.env.default` (shared) → `/.env` (user overrides)
4. Docker Compose `environment:` sections override file-based config
5. Shell environment variables have highest precedence

View File

@@ -144,11 +144,7 @@ jobs:
"rabbitmq:management"
"clamav/clamav-debian:latest"
"busybox:latest"
"kong:2.8.1"
"supabase/gotrue:v2.170.0"
"supabase/postgres:15.8.1.049"
"supabase/postgres-meta:v0.86.1"
"supabase/studio:20250224-d10db0f"
"pgvector/pgvector:pg18"
)
# Check if any cached tar files exist (more reliable than cache-hit)

View File

@@ -44,6 +44,12 @@ jobs:
with:
fetch-depth: 1
- name: Free Disk Space (Ubuntu)
uses: jlumbroso/free-disk-space@v1.3.1
with:
large-packages: false # slow
docker-images: false # limited benefit
# Backend Python/Poetry setup (mirrors platform-backend-ci.yml)
- name: Set up Python
uses: actions/setup-python@v5
@@ -154,11 +160,7 @@ jobs:
"rabbitmq:management"
"clamav/clamav-debian:latest"
"busybox:latest"
"kong:2.8.1"
"supabase/gotrue:v2.170.0"
"supabase/postgres:15.8.1.049"
"supabase/postgres-meta:v0.86.1"
"supabase/studio:20250224-d10db0f"
"pgvector/pgvector:pg18"
)
# Check if any cached tar files exist (more reliable than cache-hit)

View File

@@ -142,11 +142,7 @@ jobs:
"rabbitmq:management"
"clamav/clamav-debian:latest"
"busybox:latest"
"kong:2.8.1"
"supabase/gotrue:v2.170.0"
"supabase/postgres:15.8.1.049"
"supabase/postgres-meta:v0.86.1"
"supabase/studio:20250224-d10db0f"
"pgvector/pgvector:pg18"
)
# Check if any cached tar files exist (more reliable than cache-hit)

View File

@@ -2,13 +2,13 @@ name: AutoGPT Platform - Backend CI
on:
push:
branches: [master, dev, ci-test*]
branches: [master, dev, ci-test*, native-auth]
paths:
- ".github/workflows/platform-backend-ci.yml"
- "autogpt_platform/backend/**"
- "autogpt_platform/autogpt_libs/**"
pull_request:
branches: [master, dev, release-*]
branches: [master, dev, release-*, native-auth]
paths:
- ".github/workflows/platform-backend-ci.yml"
- "autogpt_platform/backend/**"
@@ -36,6 +36,19 @@ jobs:
runs-on: ubuntu-latest
services:
postgres:
image: pgvector/pgvector:pg18
ports:
- 5432:5432
env:
POSTGRES_USER: postgres
POSTGRES_PASSWORD: your-super-secret-and-long-postgres-password
POSTGRES_DB: postgres
options: >-
--health-cmd "pg_isready -U postgres"
--health-interval 5s
--health-timeout 5s
--health-retries 10
redis:
image: redis:latest
ports:
@@ -78,11 +91,6 @@ jobs:
with:
python-version: ${{ matrix.python-version }}
- name: Setup Supabase
uses: supabase/setup-cli@v1
with:
version: 1.178.1
- id: get_date
name: Get date
run: echo "date=$(date +'%Y-%m-%d')" >> $GITHUB_OUTPUT
@@ -136,16 +144,6 @@ jobs:
- name: Generate Prisma Client
run: poetry run prisma generate
- id: supabase
name: Start Supabase
working-directory: .
run: |
supabase init
supabase start --exclude postgres-meta,realtime,storage-api,imgproxy,inbucket,studio,edge-runtime,logflare,vector,supavisor
supabase status -o env | sed 's/="/=/; s/"$//' >> $GITHUB_OUTPUT
# outputs:
# DB_URL, API_URL, GRAPHQL_URL, ANON_KEY, SERVICE_ROLE_KEY, JWT_SECRET
- name: Wait for ClamAV to be ready
run: |
echo "Waiting for ClamAV daemon to start..."
@@ -178,8 +176,8 @@ jobs:
- name: Run Database Migrations
run: poetry run prisma migrate dev --name updates
env:
DATABASE_URL: ${{ steps.supabase.outputs.DB_URL }}
DIRECT_URL: ${{ steps.supabase.outputs.DB_URL }}
DATABASE_URL: postgresql://postgres:your-super-secret-and-long-postgres-password@localhost:5432/postgres
DIRECT_URL: postgresql://postgres:your-super-secret-and-long-postgres-password@localhost:5432/postgres
- id: lint
name: Run Linter
@@ -195,11 +193,9 @@ jobs:
if: success() || (failure() && steps.lint.outcome == 'failure')
env:
LOG_LEVEL: ${{ runner.debug && 'DEBUG' || 'INFO' }}
DATABASE_URL: ${{ steps.supabase.outputs.DB_URL }}
DIRECT_URL: ${{ steps.supabase.outputs.DB_URL }}
SUPABASE_URL: ${{ steps.supabase.outputs.API_URL }}
SUPABASE_SERVICE_ROLE_KEY: ${{ steps.supabase.outputs.SERVICE_ROLE_KEY }}
JWT_VERIFY_KEY: ${{ steps.supabase.outputs.JWT_SECRET }}
DATABASE_URL: postgresql://postgres:your-super-secret-and-long-postgres-password@localhost:5432/postgres
DIRECT_URL: postgresql://postgres:your-super-secret-and-long-postgres-password@localhost:5432/postgres
JWT_SECRET: your-super-secret-jwt-token-with-at-least-32-characters-long
REDIS_HOST: "localhost"
REDIS_PORT: "6379"
ENCRYPTION_KEY: "dvziYgz0KSK8FENhju0ZYi8-fRTfAdlz6YLhdB_jhNw=" # DO NOT USE IN PRODUCTION!!

View File

@@ -2,16 +2,21 @@ name: AutoGPT Platform - Frontend CI
on:
push:
branches: [master, dev]
branches: [master, dev, native-auth]
paths:
- ".github/workflows/platform-frontend-ci.yml"
- "autogpt_platform/frontend/**"
pull_request:
branches: [master, dev, native-auth]
paths:
- ".github/workflows/platform-frontend-ci.yml"
- "autogpt_platform/frontend/**"
merge_group:
concurrency:
group: ${{ github.workflow }}-${{ github.event_name == 'merge_group' && format('merge-queue-{0}', github.ref) || format('{0}-{1}', github.ref, github.event.pull_request.number || github.sha) }}
cancel-in-progress: ${{ github.event_name == 'pull_request' }}
defaults:
run:
shell: bash
@@ -143,7 +148,7 @@ jobs:
- name: Enable corepack
run: corepack enable
- name: Copy default supabase .env
- name: Copy default platform .env
run: |
cp ../.env.default ../.env

View File

@@ -1,17 +1,22 @@
name: AutoGPT Platform - Frontend CI
name: AutoGPT Platform - Fullstack CI
on:
push:
branches: [master, dev]
branches: [master, dev, native-auth]
paths:
- ".github/workflows/platform-fullstack-ci.yml"
- "autogpt_platform/**"
pull_request:
branches: [master, dev, native-auth]
paths:
- ".github/workflows/platform-fullstack-ci.yml"
- "autogpt_platform/**"
merge_group:
concurrency:
group: ${{ github.workflow }}-${{ github.event_name == 'merge_group' && format('merge-queue-{0}', github.ref) || github.head_ref && format('pr-{0}', github.event.pull_request.number) || github.sha }}
cancel-in-progress: ${{ github.event_name == 'pull_request' }}
defaults:
run:
shell: bash
@@ -54,14 +59,11 @@ jobs:
types:
runs-on: ubuntu-latest
needs: setup
strategy:
fail-fast: false
timeout-minutes: 10
steps:
- name: Checkout repository
uses: actions/checkout@v4
with:
submodules: recursive
- name: Set up Node.js
uses: actions/setup-node@v4
@@ -71,18 +73,6 @@ jobs:
- name: Enable corepack
run: corepack enable
- name: Copy default supabase .env
run: |
cp ../.env.default ../.env
- name: Copy backend .env
run: |
cp ../backend/.env.default ../backend/.env
- name: Run docker compose
run: |
docker compose -f ../docker-compose.yml --profile local --profile deps_backend up -d
- name: Restore dependencies cache
uses: actions/cache@v4
with:
@@ -97,36 +87,12 @@ jobs:
- name: Setup .env
run: cp .env.default .env
- name: Wait for services to be ready
run: |
echo "Waiting for rest_server to be ready..."
timeout 60 sh -c 'until curl -f http://localhost:8006/health 2>/dev/null; do sleep 2; done' || echo "Rest server health check timeout, continuing..."
echo "Waiting for database to be ready..."
timeout 60 sh -c 'until docker compose -f ../docker-compose.yml exec -T db pg_isready -U postgres 2>/dev/null; do sleep 2; done' || echo "Database ready check timeout, continuing..."
- name: Generate API queries
run: pnpm generate:api:force
- name: Check for API schema changes
run: |
if ! git diff --exit-code src/app/api/openapi.json; then
echo "❌ API schema changes detected in src/app/api/openapi.json"
echo ""
echo "The openapi.json file has been modified after running 'pnpm generate:api-all'."
echo "This usually means changes have been made in the BE endpoints without updating the Frontend."
echo "The API schema is now out of sync with the Front-end queries."
echo ""
echo "To fix this:"
echo "1. Pull the backend 'docker compose pull && docker compose up -d --build --force-recreate'"
echo "2. Run 'pnpm generate:api' locally"
echo "3. Run 'pnpm types' locally"
echo "4. Fix any TypeScript errors that may have been introduced"
echo "5. Commit and push your changes"
echo ""
exit 1
else
echo "✅ No API schema changes detected"
fi
run: pnpm generate:api
- name: Run Typescript checks
run: pnpm types
env:
CI: true
PLAIN_OUTPUT: True

View File

@@ -11,7 +11,7 @@ jobs:
stale:
runs-on: ubuntu-latest
steps:
- uses: actions/stale@v9
- uses: actions/stale@v10
with:
# operations-per-run: 5000
stale-issue-message: >

View File

@@ -61,6 +61,6 @@ jobs:
pull-requests: write
runs-on: ubuntu-latest
steps:
- uses: actions/labeler@v5
- uses: actions/labeler@v6
with:
sync-labels: true

View File

@@ -49,5 +49,5 @@ Use conventional commit messages for all commits (e.g. `feat(backend): add API`)
- Keep out-of-scope changes under 20% of the PR.
- Ensure PR descriptions are complete.
- For changes touching `data/*.py`, validate user ID checks or explain why not needed.
- If adding protected frontend routes, update `frontend/lib/supabase/middleware.ts`.
- If adding protected frontend routes, update `frontend/lib/auth/helpers.ts`.
- Use the linear ticket branch structure if given codex/open-1668-resume-dropped-runs

View File

@@ -5,12 +5,6 @@
POSTGRES_PASSWORD=your-super-secret-and-long-postgres-password
JWT_SECRET=your-super-secret-jwt-token-with-at-least-32-characters-long
ANON_KEY=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyAgCiAgICAicm9sZSI6ICJhbm9uIiwKICAgICJpc3MiOiAic3VwYWJhc2UtZGVtbyIsCiAgICAiaWF0IjogMTY0MTc2OTIwMCwKICAgICJleHAiOiAxNzk5NTM1NjAwCn0.dc_X5iR_VP_qT0zsiyj_I_OZ2T9FtRU2BBNWN8Bu4GE
SERVICE_ROLE_KEY=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyAgCiAgICAicm9sZSI6ICJzZXJ2aWNlX3JvbGUiLAogICAgImlzcyI6ICJzdXBhYmFzZS1kZW1vIiwKICAgICJpYXQiOiAxNjQxNzY5MjAwLAogICAgImV4cCI6IDE3OTk1MzU2MDAKfQ.DaYlNEoUrrEn2Ig7tqibS-PHK5vgusbcbo7X36XVt4Q
DASHBOARD_USERNAME=supabase
DASHBOARD_PASSWORD=this_password_is_insecure_and_should_be_updated
SECRET_KEY_BASE=UpNVntn3cDxHJpq99YMc1T1AQgQpc8kfYTuRgBiYa15BLrx8etQoXz3gZv1/u2oq
VAULT_ENC_KEY=your-encryption-key-32-chars-min
############
@@ -24,100 +18,31 @@ POSTGRES_PORT=5432
############
# Supavisor -- Database pooler
############
POOLER_PROXY_PORT_TRANSACTION=6543
POOLER_DEFAULT_POOL_SIZE=20
POOLER_MAX_CLIENT_CONN=100
POOLER_TENANT_ID=your-tenant-id
############
# API Proxy - Configuration for the Kong Reverse proxy.
# Auth - Native authentication configuration
############
KONG_HTTP_PORT=8000
KONG_HTTPS_PORT=8443
############
# API - Configuration for PostgREST.
############
PGRST_DB_SCHEMAS=public,storage,graphql_public
############
# Auth - Configuration for the GoTrue authentication server.
############
## General
SITE_URL=http://localhost:3000
ADDITIONAL_REDIRECT_URLS=
JWT_EXPIRY=3600
DISABLE_SIGNUP=false
API_EXTERNAL_URL=http://localhost:8000
## Mailer Config
MAILER_URLPATHS_CONFIRMATION="/auth/v1/verify"
MAILER_URLPATHS_INVITE="/auth/v1/verify"
MAILER_URLPATHS_RECOVERY="/auth/v1/verify"
MAILER_URLPATHS_EMAIL_CHANGE="/auth/v1/verify"
# JWT token configuration
ACCESS_TOKEN_EXPIRE_MINUTES=15
REFRESH_TOKEN_EXPIRE_DAYS=7
JWT_ISSUER=autogpt-platform
## Email auth
ENABLE_EMAIL_SIGNUP=true
ENABLE_EMAIL_AUTOCONFIRM=false
SMTP_ADMIN_EMAIL=admin@example.com
SMTP_HOST=supabase-mail
SMTP_PORT=2500
SMTP_USER=fake_mail_user
SMTP_PASS=fake_mail_password
SMTP_SENDER_NAME=fake_sender
ENABLE_ANONYMOUS_USERS=false
## Phone auth
ENABLE_PHONE_SIGNUP=true
ENABLE_PHONE_AUTOCONFIRM=true
# Google OAuth (optional)
GOOGLE_CLIENT_ID=
GOOGLE_CLIENT_SECRET=
############
# Studio - Configuration for the Dashboard
# Email configuration (optional)
############
STUDIO_DEFAULT_ORGANIZATION=Default Organization
STUDIO_DEFAULT_PROJECT=Default Project
SMTP_HOST=
SMTP_PORT=587
SMTP_USER=
SMTP_PASS=
SMTP_FROM_EMAIL=noreply@example.com
STUDIO_PORT=3000
# replace if you intend to use Studio outside of localhost
SUPABASE_PUBLIC_URL=http://localhost:8000
# Enable webp support
IMGPROXY_ENABLE_WEBP_DETECTION=true
# Add your OpenAI API key to enable SQL Editor Assistant
OPENAI_API_KEY=
############
# Functions - Configuration for Functions
############
# NOTE: VERIFY_JWT applies to all functions. Per-function VERIFY_JWT is not supported yet.
FUNCTIONS_VERIFY_JWT=false
############
# Logs - Configuration for Logflare
# Please refer to https://supabase.com/docs/reference/self-hosting-analytics/introduction
############
LOGFLARE_LOGGER_BACKEND_API_KEY=your-super-secret-and-long-logflare-key
# Change vector.toml sinks to reflect this change
LOGFLARE_API_KEY=your-super-secret-and-long-logflare-key
# Docker socket location - this value will differ depending on your OS
DOCKER_SOCKET_LOCATION=/var/run/docker.sock
# Google Cloud Project details
GOOGLE_PROJECT_ID=GOOGLE_PROJECT_ID
GOOGLE_PROJECT_NUMBER=GOOGLE_PROJECT_NUMBER

View File

@@ -1,6 +1,6 @@
.PHONY: start-core stop-core logs-core format lint migrate run-backend run-frontend
.PHONY: start-core stop-core logs-core format lint migrate run-backend run-frontend load-store-agents
# Run just Supabase + Redis + RabbitMQ
# Run just PostgreSQL + Redis + RabbitMQ + ClamAV
start-core:
docker compose up -d deps
@@ -42,11 +42,14 @@ run-frontend:
test-data:
cd backend && poetry run python test/test_data_creator.py
load-store-agents:
cd backend && poetry run load-store-agents
help:
@echo "Usage: make <target>"
@echo "Targets:"
@echo " start-core - Start just the core services (Supabase, Redis, RabbitMQ) in background"
@echo " start-core - Start just the core services (PostgreSQL, Redis, RabbitMQ, ClamAV) in background"
@echo " stop-core - Stop the core services"
@echo " reset-db - Reset the database by deleting the volume"
@echo " logs-core - Tail the logs for core services"
@@ -54,4 +57,5 @@ help:
@echo " migrate - Run backend database migrations"
@echo " run-backend - Run the backend FastAPI server"
@echo " run-frontend - Run the frontend Next.js development server"
@echo " test-data - Run the test data creator"
@echo " test-data - Run the test data creator"
@echo " load-store-agents - Load store agents from agents/ folder into test database"

View File

@@ -57,6 +57,9 @@ class APIKeySmith:
def hash_key(self, raw_key: str) -> tuple[str, str]:
"""Migrate a legacy hash to secure hash format."""
if not raw_key.startswith(self.PREFIX):
raise ValueError("Key without 'agpt_' prefix would fail validation")
salt = self._generate_salt()
hash = self._hash_key_with_salt(raw_key, salt)
return hash, salt.hex()

View File

@@ -16,17 +16,37 @@ ALGO_RECOMMENDATION = (
"We highly recommend using an asymmetric algorithm such as ES256, "
"because when leaked, a shared secret would allow anyone to "
"forge valid tokens and impersonate users. "
"More info: https://supabase.com/docs/guides/auth/signing-keys#choosing-the-right-signing-algorithm" # noqa
"More info: https://pyjwt.readthedocs.io/en/stable/algorithms.html"
)
class Settings:
def __init__(self):
# JWT verification key (public key for asymmetric, shared secret for symmetric)
self.JWT_VERIFY_KEY: str = os.getenv(
"JWT_VERIFY_KEY", os.getenv("SUPABASE_JWT_SECRET", "")
).strip()
# JWT signing key (private key for asymmetric, shared secret for symmetric)
# Falls back to JWT_VERIFY_KEY for symmetric algorithms like HS256
self.JWT_SIGN_KEY: str = os.getenv("JWT_SIGN_KEY", self.JWT_VERIFY_KEY).strip()
self.JWT_ALGORITHM: str = os.getenv("JWT_SIGN_ALGORITHM", "HS256").strip()
# Token expiration settings
self.ACCESS_TOKEN_EXPIRE_MINUTES: int = int(
os.getenv("ACCESS_TOKEN_EXPIRE_MINUTES", "15")
)
self.REFRESH_TOKEN_EXPIRE_DAYS: int = int(
os.getenv("REFRESH_TOKEN_EXPIRE_DAYS", "7")
)
# JWT issuer claim
self.JWT_ISSUER: str = os.getenv("JWT_ISSUER", "autogpt-platform").strip()
# JWT audience claim
self.JWT_AUDIENCE: str = os.getenv("JWT_AUDIENCE", "authenticated").strip()
self.validate()
def validate(self):

View File

@@ -1,4 +1,8 @@
import hashlib
import logging
import secrets
import uuid
from datetime import datetime, timedelta, timezone
from typing import Any
import jwt
@@ -16,6 +20,57 @@ bearer_jwt_auth = HTTPBearer(
)
def create_access_token(
user_id: str,
email: str,
role: str = "authenticated",
email_verified: bool = False,
) -> str:
"""
Generate a new JWT access token.
:param user_id: The user's unique identifier
:param email: The user's email address
:param role: The user's role (default: "authenticated")
:param email_verified: Whether the user's email is verified
:return: Encoded JWT token
"""
settings = get_settings()
now = datetime.now(timezone.utc)
payload = {
"sub": user_id,
"email": email,
"role": role,
"email_verified": email_verified,
"aud": settings.JWT_AUDIENCE,
"iss": settings.JWT_ISSUER,
"iat": now,
"exp": now + timedelta(minutes=settings.ACCESS_TOKEN_EXPIRE_MINUTES),
"jti": str(uuid.uuid4()), # Unique token ID
}
return jwt.encode(payload, settings.JWT_SIGN_KEY, algorithm=settings.JWT_ALGORITHM)
def create_refresh_token() -> tuple[str, str]:
"""
Generate a new refresh token.
Returns a tuple of (raw_token, hashed_token).
The raw token should be sent to the client.
The hashed token should be stored in the database.
"""
raw_token = secrets.token_urlsafe(64)
hashed_token = hashlib.sha256(raw_token.encode()).hexdigest()
return raw_token, hashed_token
def hash_token(token: str) -> str:
"""Hash a token using SHA-256."""
return hashlib.sha256(token.encode()).hexdigest()
async def get_jwt_payload(
credentials: HTTPAuthorizationCredentials | None = Security(bearer_jwt_auth),
) -> dict[str, Any]:
@@ -52,11 +107,19 @@ def parse_jwt_token(token: str) -> dict[str, Any]:
"""
settings = get_settings()
try:
# Build decode options
options = {
"verify_aud": True,
"verify_iss": bool(settings.JWT_ISSUER),
}
payload = jwt.decode(
token,
settings.JWT_VERIFY_KEY,
algorithms=[settings.JWT_ALGORITHM],
audience="authenticated",
audience=settings.JWT_AUDIENCE,
issuer=settings.JWT_ISSUER if settings.JWT_ISSUER else None,
options=options,
)
return payload
except jwt.ExpiredSignatureError:

View File

@@ -11,6 +11,7 @@ class User:
email: str
phone_number: str
role: str
email_verified: bool = False
@classmethod
def from_payload(cls, payload):
@@ -18,5 +19,6 @@ class User:
user_id=payload["sub"],
email=payload.get("email", ""),
phone_number=payload.get("phone", ""),
role=payload["role"],
role=payload.get("role", "authenticated"),
email_verified=payload.get("email_verified", False),
)

View File

@@ -48,6 +48,21 @@ files = [
{file = "async_timeout-5.0.1.tar.gz", hash = "sha256:d9321a7a3d5a6a5e187e824d2fa0793ce379a202935782d555d6e9d2735677d3"},
]
[[package]]
name = "authlib"
version = "1.6.6"
description = "The ultimate Python library in building OAuth and OpenID Connect servers and clients."
optional = false
python-versions = ">=3.9"
groups = ["main"]
files = [
{file = "authlib-1.6.6-py2.py3-none-any.whl", hash = "sha256:7d9e9bc535c13974313a87f53e8430eb6ea3d1cf6ae4f6efcd793f2e949143fd"},
{file = "authlib-1.6.6.tar.gz", hash = "sha256:45770e8e056d0f283451d9996fbb59b70d45722b45d854d58f32878d0a40c38e"},
]
[package.dependencies]
cryptography = "*"
[[package]]
name = "backports-asyncio-runner"
version = "1.2.0"
@@ -61,6 +76,71 @@ files = [
{file = "backports_asyncio_runner-1.2.0.tar.gz", hash = "sha256:a5aa7b2b7d8f8bfcaa2b57313f70792df84e32a2a746f585213373f900b42162"},
]
[[package]]
name = "bcrypt"
version = "4.3.0"
description = "Modern password hashing for your software and your servers"
optional = false
python-versions = ">=3.8"
groups = ["main"]
files = [
{file = "bcrypt-4.3.0-cp313-cp313t-macosx_10_12_universal2.whl", hash = "sha256:f01e060f14b6b57bbb72fc5b4a83ac21c443c9a2ee708e04a10e9192f90a6281"},
{file = "bcrypt-4.3.0-cp313-cp313t-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c5eeac541cefd0bb887a371ef73c62c3cd78535e4887b310626036a7c0a817bb"},
{file = "bcrypt-4.3.0-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:59e1aa0e2cd871b08ca146ed08445038f42ff75968c7ae50d2fdd7860ade2180"},
{file = "bcrypt-4.3.0-cp313-cp313t-manylinux_2_28_aarch64.whl", hash = "sha256:0042b2e342e9ae3d2ed22727c1262f76cc4f345683b5c1715f0250cf4277294f"},
{file = "bcrypt-4.3.0-cp313-cp313t-manylinux_2_28_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:74a8d21a09f5e025a9a23e7c0fd2c7fe8e7503e4d356c0a2c1486ba010619f09"},
{file = "bcrypt-4.3.0-cp313-cp313t-manylinux_2_28_x86_64.whl", hash = "sha256:0142b2cb84a009f8452c8c5a33ace5e3dfec4159e7735f5afe9a4d50a8ea722d"},
{file = "bcrypt-4.3.0-cp313-cp313t-manylinux_2_34_aarch64.whl", hash = "sha256:12fa6ce40cde3f0b899729dbd7d5e8811cb892d31b6f7d0334a1f37748b789fd"},
{file = "bcrypt-4.3.0-cp313-cp313t-manylinux_2_34_x86_64.whl", hash = "sha256:5bd3cca1f2aa5dbcf39e2aa13dd094ea181f48959e1071265de49cc2b82525af"},
{file = "bcrypt-4.3.0-cp313-cp313t-musllinux_1_1_aarch64.whl", hash = "sha256:335a420cfd63fc5bc27308e929bee231c15c85cc4c496610ffb17923abf7f231"},
{file = "bcrypt-4.3.0-cp313-cp313t-musllinux_1_1_x86_64.whl", hash = "sha256:0e30e5e67aed0187a1764911af023043b4542e70a7461ad20e837e94d23e1d6c"},
{file = "bcrypt-4.3.0-cp313-cp313t-musllinux_1_2_aarch64.whl", hash = "sha256:3b8d62290ebefd49ee0b3ce7500f5dbdcf13b81402c05f6dafab9a1e1b27212f"},
{file = "bcrypt-4.3.0-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:2ef6630e0ec01376f59a006dc72918b1bf436c3b571b80fa1968d775fa02fe7d"},
{file = "bcrypt-4.3.0-cp313-cp313t-win32.whl", hash = "sha256:7a4be4cbf241afee43f1c3969b9103a41b40bcb3a3f467ab19f891d9bc4642e4"},
{file = "bcrypt-4.3.0-cp313-cp313t-win_amd64.whl", hash = "sha256:5c1949bf259a388863ced887c7861da1df681cb2388645766c89fdfd9004c669"},
{file = "bcrypt-4.3.0-cp38-abi3-macosx_10_12_universal2.whl", hash = "sha256:f81b0ed2639568bf14749112298f9e4e2b28853dab50a8b357e31798686a036d"},
{file = "bcrypt-4.3.0-cp38-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:864f8f19adbe13b7de11ba15d85d4a428c7e2f344bac110f667676a0ff84924b"},
{file = "bcrypt-4.3.0-cp38-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3e36506d001e93bffe59754397572f21bb5dc7c83f54454c990c74a468cd589e"},
{file = "bcrypt-4.3.0-cp38-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:842d08d75d9fe9fb94b18b071090220697f9f184d4547179b60734846461ed59"},
{file = "bcrypt-4.3.0-cp38-abi3-manylinux_2_28_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:7c03296b85cb87db865d91da79bf63d5609284fc0cab9472fdd8367bbd830753"},
{file = "bcrypt-4.3.0-cp38-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:62f26585e8b219cdc909b6a0069efc5e4267e25d4a3770a364ac58024f62a761"},
{file = "bcrypt-4.3.0-cp38-abi3-manylinux_2_34_aarch64.whl", hash = "sha256:beeefe437218a65322fbd0069eb437e7c98137e08f22c4660ac2dc795c31f8bb"},
{file = "bcrypt-4.3.0-cp38-abi3-manylinux_2_34_x86_64.whl", hash = "sha256:97eea7408db3a5bcce4a55d13245ab3fa566e23b4c67cd227062bb49e26c585d"},
{file = "bcrypt-4.3.0-cp38-abi3-musllinux_1_1_aarch64.whl", hash = "sha256:191354ebfe305e84f344c5964c7cd5f924a3bfc5d405c75ad07f232b6dffb49f"},
{file = "bcrypt-4.3.0-cp38-abi3-musllinux_1_1_x86_64.whl", hash = "sha256:41261d64150858eeb5ff43c753c4b216991e0ae16614a308a15d909503617732"},
{file = "bcrypt-4.3.0-cp38-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:33752b1ba962ee793fa2b6321404bf20011fe45b9afd2a842139de3011898fef"},
{file = "bcrypt-4.3.0-cp38-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:50e6e80a4bfd23a25f5c05b90167c19030cf9f87930f7cb2eacb99f45d1c3304"},
{file = "bcrypt-4.3.0-cp38-abi3-win32.whl", hash = "sha256:67a561c4d9fb9465ec866177e7aebcad08fe23aaf6fbd692a6fab69088abfc51"},
{file = "bcrypt-4.3.0-cp38-abi3-win_amd64.whl", hash = "sha256:584027857bc2843772114717a7490a37f68da563b3620f78a849bcb54dc11e62"},
{file = "bcrypt-4.3.0-cp39-abi3-macosx_10_12_universal2.whl", hash = "sha256:0d3efb1157edebfd9128e4e46e2ac1a64e0c1fe46fb023158a407c7892b0f8c3"},
{file = "bcrypt-4.3.0-cp39-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:08bacc884fd302b611226c01014eca277d48f0a05187666bca23aac0dad6fe24"},
{file = "bcrypt-4.3.0-cp39-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f6746e6fec103fcd509b96bacdfdaa2fbde9a553245dbada284435173a6f1aef"},
{file = "bcrypt-4.3.0-cp39-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:afe327968aaf13fc143a56a3360cb27d4ad0345e34da12c7290f1b00b8fe9a8b"},
{file = "bcrypt-4.3.0-cp39-abi3-manylinux_2_28_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:d9af79d322e735b1fc33404b5765108ae0ff232d4b54666d46730f8ac1a43676"},
{file = "bcrypt-4.3.0-cp39-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:f1e3ffa1365e8702dc48c8b360fef8d7afeca482809c5e45e653af82ccd088c1"},
{file = "bcrypt-4.3.0-cp39-abi3-manylinux_2_34_aarch64.whl", hash = "sha256:3004df1b323d10021fda07a813fd33e0fd57bef0e9a480bb143877f6cba996fe"},
{file = "bcrypt-4.3.0-cp39-abi3-manylinux_2_34_x86_64.whl", hash = "sha256:531457e5c839d8caea9b589a1bcfe3756b0547d7814e9ce3d437f17da75c32b0"},
{file = "bcrypt-4.3.0-cp39-abi3-musllinux_1_1_aarch64.whl", hash = "sha256:17a854d9a7a476a89dcef6c8bd119ad23e0f82557afbd2c442777a16408e614f"},
{file = "bcrypt-4.3.0-cp39-abi3-musllinux_1_1_x86_64.whl", hash = "sha256:6fb1fd3ab08c0cbc6826a2e0447610c6f09e983a281b919ed721ad32236b8b23"},
{file = "bcrypt-4.3.0-cp39-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:e965a9c1e9a393b8005031ff52583cedc15b7884fce7deb8b0346388837d6cfe"},
{file = "bcrypt-4.3.0-cp39-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:79e70b8342a33b52b55d93b3a59223a844962bef479f6a0ea318ebbcadf71505"},
{file = "bcrypt-4.3.0-cp39-abi3-win32.whl", hash = "sha256:b4d4e57f0a63fd0b358eb765063ff661328f69a04494427265950c71b992a39a"},
{file = "bcrypt-4.3.0-cp39-abi3-win_amd64.whl", hash = "sha256:e53e074b120f2877a35cc6c736b8eb161377caae8925c17688bd46ba56daaa5b"},
{file = "bcrypt-4.3.0-pp310-pypy310_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:c950d682f0952bafcceaf709761da0a32a942272fad381081b51096ffa46cea1"},
{file = "bcrypt-4.3.0-pp310-pypy310_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:107d53b5c67e0bbc3f03ebf5b030e0403d24dda980f8e244795335ba7b4a027d"},
{file = "bcrypt-4.3.0-pp310-pypy310_pp73-manylinux_2_34_aarch64.whl", hash = "sha256:b693dbb82b3c27a1604a3dff5bfc5418a7e6a781bb795288141e5f80cf3a3492"},
{file = "bcrypt-4.3.0-pp310-pypy310_pp73-manylinux_2_34_x86_64.whl", hash = "sha256:b6354d3760fcd31994a14c89659dee887f1351a06e5dac3c1142307172a79f90"},
{file = "bcrypt-4.3.0-pp311-pypy311_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:a839320bf27d474e52ef8cb16449bb2ce0ba03ca9f44daba6d93fa1d8828e48a"},
{file = "bcrypt-4.3.0-pp311-pypy311_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:bdc6a24e754a555d7316fa4774e64c6c3997d27ed2d1964d55920c7c227bc4ce"},
{file = "bcrypt-4.3.0-pp311-pypy311_pp73-manylinux_2_34_aarch64.whl", hash = "sha256:55a935b8e9a1d2def0626c4269db3fcd26728cbff1e84f0341465c31c4ee56d8"},
{file = "bcrypt-4.3.0-pp311-pypy311_pp73-manylinux_2_34_x86_64.whl", hash = "sha256:57967b7a28d855313a963aaea51bf6df89f833db4320da458e5b3c5ab6d4c938"},
{file = "bcrypt-4.3.0.tar.gz", hash = "sha256:3a3fd2204178b6d2adcf09cb4f6426ffef54762577a7c9b54c159008cb288c18"},
]
[package.extras]
tests = ["pytest (>=3.2.1,!=3.3.0)"]
typecheck = ["mypy"]
[[package]]
name = "cachetools"
version = "5.5.2"
@@ -459,21 +539,6 @@ ssh = ["bcrypt (>=3.1.5)"]
test = ["certifi (>=2024)", "cryptography-vectors (==45.0.6)", "pretend (>=0.7)", "pytest (>=7.4.0)", "pytest-benchmark (>=4.0)", "pytest-cov (>=2.10.1)", "pytest-xdist (>=3.5.0)"]
test-randomorder = ["pytest-randomly"]
[[package]]
name = "deprecation"
version = "2.1.0"
description = "A library to handle automated deprecations"
optional = false
python-versions = "*"
groups = ["main"]
files = [
{file = "deprecation-2.1.0-py2.py3-none-any.whl", hash = "sha256:a10811591210e1fb0e768a8c25517cabeabcba6f0bf96564f8ff45189f90b14a"},
{file = "deprecation-2.1.0.tar.gz", hash = "sha256:72b3bde64e5d778694b0cf68178aed03d15e15477116add3fb773e581f9518ff"},
]
[package.dependencies]
packaging = "*"
[[package]]
name = "exceptiongroup"
version = "1.3.0"
@@ -695,23 +760,6 @@ protobuf = ">=3.20.2,<4.21.1 || >4.21.1,<4.21.2 || >4.21.2,<4.21.3 || >4.21.3,<4
[package.extras]
grpc = ["grpcio (>=1.44.0,<2.0.0)"]
[[package]]
name = "gotrue"
version = "2.12.3"
description = "Python Client Library for Supabase Auth"
optional = false
python-versions = "<4.0,>=3.9"
groups = ["main"]
files = [
{file = "gotrue-2.12.3-py3-none-any.whl", hash = "sha256:b1a3c6a5fe3f92e854a026c4c19de58706a96fd5fbdcc3d620b2802f6a46a26b"},
{file = "gotrue-2.12.3.tar.gz", hash = "sha256:f874cf9d0b2f0335bfbd0d6e29e3f7aff79998cd1c14d2ad814db8c06cee3852"},
]
[package.dependencies]
httpx = {version = ">=0.26,<0.29", extras = ["http2"]}
pydantic = ">=1.10,<3"
pyjwt = ">=2.10.1,<3.0.0"
[[package]]
name = "grpc-google-iam-v1"
version = "0.14.2"
@@ -822,94 +870,6 @@ files = [
{file = "h11-0.16.0.tar.gz", hash = "sha256:4e35b956cf45792e4caa5885e69fba00bdbc6ffafbfa020300e549b208ee5ff1"},
]
[[package]]
name = "h2"
version = "4.2.0"
description = "Pure-Python HTTP/2 protocol implementation"
optional = false
python-versions = ">=3.9"
groups = ["main"]
files = [
{file = "h2-4.2.0-py3-none-any.whl", hash = "sha256:479a53ad425bb29af087f3458a61d30780bc818e4ebcf01f0b536ba916462ed0"},
{file = "h2-4.2.0.tar.gz", hash = "sha256:c8a52129695e88b1a0578d8d2cc6842bbd79128ac685463b887ee278126ad01f"},
]
[package.dependencies]
hpack = ">=4.1,<5"
hyperframe = ">=6.1,<7"
[[package]]
name = "hpack"
version = "4.1.0"
description = "Pure-Python HPACK header encoding"
optional = false
python-versions = ">=3.9"
groups = ["main"]
files = [
{file = "hpack-4.1.0-py3-none-any.whl", hash = "sha256:157ac792668d995c657d93111f46b4535ed114f0c9c8d672271bbec7eae1b496"},
{file = "hpack-4.1.0.tar.gz", hash = "sha256:ec5eca154f7056aa06f196a557655c5b009b382873ac8d1e66e79e87535f1dca"},
]
[[package]]
name = "httpcore"
version = "1.0.9"
description = "A minimal low-level HTTP client."
optional = false
python-versions = ">=3.8"
groups = ["main"]
files = [
{file = "httpcore-1.0.9-py3-none-any.whl", hash = "sha256:2d400746a40668fc9dec9810239072b40b4484b640a8c38fd654a024c7a1bf55"},
{file = "httpcore-1.0.9.tar.gz", hash = "sha256:6e34463af53fd2ab5d807f399a9b45ea31c3dfa2276f15a2c3f00afff6e176e8"},
]
[package.dependencies]
certifi = "*"
h11 = ">=0.16"
[package.extras]
asyncio = ["anyio (>=4.0,<5.0)"]
http2 = ["h2 (>=3,<5)"]
socks = ["socksio (==1.*)"]
trio = ["trio (>=0.22.0,<1.0)"]
[[package]]
name = "httpx"
version = "0.28.1"
description = "The next generation HTTP client."
optional = false
python-versions = ">=3.8"
groups = ["main"]
files = [
{file = "httpx-0.28.1-py3-none-any.whl", hash = "sha256:d909fcccc110f8c7faf814ca82a9a4d816bc5a6dbfea25d6591d6985b8ba59ad"},
{file = "httpx-0.28.1.tar.gz", hash = "sha256:75e98c5f16b0f35b567856f597f06ff2270a374470a5c2392242528e3e3e42fc"},
]
[package.dependencies]
anyio = "*"
certifi = "*"
h2 = {version = ">=3,<5", optional = true, markers = "extra == \"http2\""}
httpcore = "==1.*"
idna = "*"
[package.extras]
brotli = ["brotli ; platform_python_implementation == \"CPython\"", "brotlicffi ; platform_python_implementation != \"CPython\""]
cli = ["click (==8.*)", "pygments (==2.*)", "rich (>=10,<14)"]
http2 = ["h2 (>=3,<5)"]
socks = ["socksio (==1.*)"]
zstd = ["zstandard (>=0.18.0)"]
[[package]]
name = "hyperframe"
version = "6.1.0"
description = "Pure-Python HTTP/2 framing"
optional = false
python-versions = ">=3.9"
groups = ["main"]
files = [
{file = "hyperframe-6.1.0-py3-none-any.whl", hash = "sha256:b03380493a519fce58ea5af42e4a42317bf9bd425596f7a0835ffce80f1a42e5"},
{file = "hyperframe-6.1.0.tar.gz", hash = "sha256:f630908a00854a7adeabd6382b43923a4c4cd4b821fcb527e6ab9e15382a3b08"},
]
[[package]]
name = "idna"
version = "3.10"
@@ -1036,7 +996,7 @@ version = "25.0"
description = "Core utilities for Python packages"
optional = false
python-versions = ">=3.8"
groups = ["main", "dev"]
groups = ["dev"]
files = [
{file = "packaging-25.0-py3-none-any.whl", hash = "sha256:29572ef2b1f17581046b3a2227d5c611fb25ec70ca1ba8554b24b0e69331a484"},
{file = "packaging-25.0.tar.gz", hash = "sha256:d443872c98d677bf60f6a1f2f8c1cb748e8fe762d2bf9d3148b5599295b0fc4f"},
@@ -1058,24 +1018,6 @@ files = [
dev = ["pre-commit", "tox"]
testing = ["coverage", "pytest", "pytest-benchmark"]
[[package]]
name = "postgrest"
version = "1.1.1"
description = "PostgREST client for Python. This library provides an ORM interface to PostgREST."
optional = false
python-versions = "<4.0,>=3.9"
groups = ["main"]
files = [
{file = "postgrest-1.1.1-py3-none-any.whl", hash = "sha256:98a6035ee1d14288484bfe36235942c5fb2d26af6d8120dfe3efbe007859251a"},
{file = "postgrest-1.1.1.tar.gz", hash = "sha256:f3bb3e8c4602775c75c844a31f565f5f3dd584df4d36d683f0b67d01a86be322"},
]
[package.dependencies]
deprecation = ">=2.1.0,<3.0.0"
httpx = {version = ">=0.26,<0.29", extras = ["http2"]}
pydantic = ">=1.9,<3.0"
strenum = {version = ">=0.4.9,<0.5.0", markers = "python_version < \"3.11\""}
[[package]]
name = "proto-plus"
version = "1.26.1"
@@ -1462,21 +1404,6 @@ pytest = ">=6.2.5"
[package.extras]
dev = ["pre-commit", "pytest-asyncio", "tox"]
[[package]]
name = "python-dateutil"
version = "2.9.0.post0"
description = "Extensions to the standard Python datetime module"
optional = false
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,>=2.7"
groups = ["main"]
files = [
{file = "python-dateutil-2.9.0.post0.tar.gz", hash = "sha256:37dd54208da7e1cd875388217d5e00ebd4179249f90fb72437e91a35459a0ad3"},
{file = "python_dateutil-2.9.0.post0-py2.py3-none-any.whl", hash = "sha256:a8b2bc7bffae282281c8140a97d3aa9c14da0b136dfe83f850eea9a5f7470427"},
]
[package.dependencies]
six = ">=1.5"
[[package]]
name = "python-dotenv"
version = "1.1.1"
@@ -1492,22 +1419,6 @@ files = [
[package.extras]
cli = ["click (>=5.0)"]
[[package]]
name = "realtime"
version = "2.5.3"
description = ""
optional = false
python-versions = "<4.0,>=3.9"
groups = ["main"]
files = [
{file = "realtime-2.5.3-py3-none-any.whl", hash = "sha256:eb0994636946eff04c4c7f044f980c8c633c7eb632994f549f61053a474ac970"},
{file = "realtime-2.5.3.tar.gz", hash = "sha256:0587594f3bc1c84bf007ff625075b86db6528843e03250dc84f4f2808be3d99a"},
]
[package.dependencies]
typing-extensions = ">=4.14.0,<5.0.0"
websockets = ">=11,<16"
[[package]]
name = "redis"
version = "6.2.0"
@@ -1606,18 +1517,6 @@ files = [
{file = "semver-3.0.4.tar.gz", hash = "sha256:afc7d8c584a5ed0a11033af086e8af226a9c0b206f313e0301f8dd7b6b589602"},
]
[[package]]
name = "six"
version = "1.17.0"
description = "Python 2 and 3 compatibility utilities"
optional = false
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,>=2.7"
groups = ["main"]
files = [
{file = "six-1.17.0-py2.py3-none-any.whl", hash = "sha256:4721f391ed90541fddacab5acf947aa0d3dc7d27b2e1e8eda2be8970586c3274"},
{file = "six-1.17.0.tar.gz", hash = "sha256:ff70335d468e7eb6ec65b95b99d3a2836546063f63acc5171de367e834932a81"},
]
[[package]]
name = "sniffio"
version = "1.3.1"
@@ -1649,76 +1548,6 @@ typing-extensions = {version = ">=4.10.0", markers = "python_version < \"3.13\""
[package.extras]
full = ["httpx (>=0.27.0,<0.29.0)", "itsdangerous", "jinja2", "python-multipart (>=0.0.18)", "pyyaml"]
[[package]]
name = "storage3"
version = "0.12.0"
description = "Supabase Storage client for Python."
optional = false
python-versions = "<4.0,>=3.9"
groups = ["main"]
files = [
{file = "storage3-0.12.0-py3-none-any.whl", hash = "sha256:1c4585693ca42243ded1512b58e54c697111e91a20916cd14783eebc37e7c87d"},
{file = "storage3-0.12.0.tar.gz", hash = "sha256:94243f20922d57738bf42e96b9f5582b4d166e8bf209eccf20b146909f3f71b0"},
]
[package.dependencies]
deprecation = ">=2.1.0,<3.0.0"
httpx = {version = ">=0.26,<0.29", extras = ["http2"]}
python-dateutil = ">=2.8.2,<3.0.0"
[[package]]
name = "strenum"
version = "0.4.15"
description = "An Enum that inherits from str."
optional = false
python-versions = "*"
groups = ["main"]
files = [
{file = "StrEnum-0.4.15-py3-none-any.whl", hash = "sha256:a30cda4af7cc6b5bf52c8055bc4bf4b2b6b14a93b574626da33df53cf7740659"},
{file = "StrEnum-0.4.15.tar.gz", hash = "sha256:878fb5ab705442070e4dd1929bb5e2249511c0bcf2b0eeacf3bcd80875c82eff"},
]
[package.extras]
docs = ["myst-parser[linkify]", "sphinx", "sphinx-rtd-theme"]
release = ["twine"]
test = ["pylint", "pytest", "pytest-black", "pytest-cov", "pytest-pylint"]
[[package]]
name = "supabase"
version = "2.16.0"
description = "Supabase client for Python."
optional = false
python-versions = "<4.0,>=3.9"
groups = ["main"]
files = [
{file = "supabase-2.16.0-py3-none-any.whl", hash = "sha256:99065caab3d90a56650bf39fbd0e49740995da3738ab28706c61bd7f2401db55"},
{file = "supabase-2.16.0.tar.gz", hash = "sha256:98f3810158012d4ec0e3083f2e5515f5e10b32bd71e7d458662140e963c1d164"},
]
[package.dependencies]
gotrue = ">=2.11.0,<3.0.0"
httpx = ">=0.26,<0.29"
postgrest = ">0.19,<1.2"
realtime = ">=2.4.0,<2.6.0"
storage3 = ">=0.10,<0.13"
supafunc = ">=0.9,<0.11"
[[package]]
name = "supafunc"
version = "0.10.1"
description = "Library for Supabase Functions"
optional = false
python-versions = "<4.0,>=3.9"
groups = ["main"]
files = [
{file = "supafunc-0.10.1-py3-none-any.whl", hash = "sha256:26df9bd25ff2ef56cb5bfb8962de98f43331f7f8ff69572bac3ed9c3a9672040"},
{file = "supafunc-0.10.1.tar.gz", hash = "sha256:a5b33c8baecb6b5297d25da29a2503e2ec67ee6986f3d44c137e651b8a59a17d"},
]
[package.dependencies]
httpx = {version = ">=0.26,<0.29", extras = ["http2"]}
strenum = ">=0.4.15,<0.5.0"
[[package]]
name = "tomli"
version = "2.2.1"
@@ -1827,85 +1656,6 @@ typing-extensions = {version = ">=4.0", markers = "python_version < \"3.11\""}
[package.extras]
standard = ["colorama (>=0.4) ; sys_platform == \"win32\"", "httptools (>=0.6.3)", "python-dotenv (>=0.13)", "pyyaml (>=5.1)", "uvloop (>=0.15.1) ; sys_platform != \"win32\" and sys_platform != \"cygwin\" and platform_python_implementation != \"PyPy\"", "watchfiles (>=0.13)", "websockets (>=10.4)"]
[[package]]
name = "websockets"
version = "15.0.1"
description = "An implementation of the WebSocket Protocol (RFC 6455 & 7692)"
optional = false
python-versions = ">=3.9"
groups = ["main"]
files = [
{file = "websockets-15.0.1-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:d63efaa0cd96cf0c5fe4d581521d9fa87744540d4bc999ae6e08595a1014b45b"},
{file = "websockets-15.0.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:ac60e3b188ec7574cb761b08d50fcedf9d77f1530352db4eef1707fe9dee7205"},
{file = "websockets-15.0.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:5756779642579d902eed757b21b0164cd6fe338506a8083eb58af5c372e39d9a"},
{file = "websockets-15.0.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0fdfe3e2a29e4db3659dbd5bbf04560cea53dd9610273917799f1cde46aa725e"},
{file = "websockets-15.0.1-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:4c2529b320eb9e35af0fa3016c187dffb84a3ecc572bcee7c3ce302bfeba52bf"},
{file = "websockets-15.0.1-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ac1e5c9054fe23226fb11e05a6e630837f074174c4c2f0fe442996112a6de4fb"},
{file = "websockets-15.0.1-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:5df592cd503496351d6dc14f7cdad49f268d8e618f80dce0cd5a36b93c3fc08d"},
{file = "websockets-15.0.1-cp310-cp310-musllinux_1_2_i686.whl", hash = "sha256:0a34631031a8f05657e8e90903e656959234f3a04552259458aac0b0f9ae6fd9"},
{file = "websockets-15.0.1-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:3d00075aa65772e7ce9e990cab3ff1de702aa09be3940d1dc88d5abf1ab8a09c"},
{file = "websockets-15.0.1-cp310-cp310-win32.whl", hash = "sha256:1234d4ef35db82f5446dca8e35a7da7964d02c127b095e172e54397fb6a6c256"},
{file = "websockets-15.0.1-cp310-cp310-win_amd64.whl", hash = "sha256:39c1fec2c11dc8d89bba6b2bf1556af381611a173ac2b511cf7231622058af41"},
{file = "websockets-15.0.1-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:823c248b690b2fd9303ba00c4f66cd5e2d8c3ba4aa968b2779be9532a4dad431"},
{file = "websockets-15.0.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:678999709e68425ae2593acf2e3ebcbcf2e69885a5ee78f9eb80e6e371f1bf57"},
{file = "websockets-15.0.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:d50fd1ee42388dcfb2b3676132c78116490976f1300da28eb629272d5d93e905"},
{file = "websockets-15.0.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d99e5546bf73dbad5bf3547174cd6cb8ba7273062a23808ffea025ecb1cf8562"},
{file = "websockets-15.0.1-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:66dd88c918e3287efc22409d426c8f729688d89a0c587c88971a0faa2c2f3792"},
{file = "websockets-15.0.1-cp311-cp311-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8dd8327c795b3e3f219760fa603dcae1dcc148172290a8ab15158cf85a953413"},
{file = "websockets-15.0.1-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:8fdc51055e6ff4adeb88d58a11042ec9a5eae317a0a53d12c062c8a8865909e8"},
{file = "websockets-15.0.1-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:693f0192126df6c2327cce3baa7c06f2a117575e32ab2308f7f8216c29d9e2e3"},
{file = "websockets-15.0.1-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:54479983bd5fb469c38f2f5c7e3a24f9a4e70594cd68cd1fa6b9340dadaff7cf"},
{file = "websockets-15.0.1-cp311-cp311-win32.whl", hash = "sha256:16b6c1b3e57799b9d38427dda63edcbe4926352c47cf88588c0be4ace18dac85"},
{file = "websockets-15.0.1-cp311-cp311-win_amd64.whl", hash = "sha256:27ccee0071a0e75d22cb35849b1db43f2ecd3e161041ac1ee9d2352ddf72f065"},
{file = "websockets-15.0.1-cp312-cp312-macosx_10_13_universal2.whl", hash = "sha256:3e90baa811a5d73f3ca0bcbf32064d663ed81318ab225ee4f427ad4e26e5aff3"},
{file = "websockets-15.0.1-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:592f1a9fe869c778694f0aa806ba0374e97648ab57936f092fd9d87f8bc03665"},
{file = "websockets-15.0.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:0701bc3cfcb9164d04a14b149fd74be7347a530ad3bbf15ab2c678a2cd3dd9a2"},
{file = "websockets-15.0.1-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e8b56bdcdb4505c8078cb6c7157d9811a85790f2f2b3632c7d1462ab5783d215"},
{file = "websockets-15.0.1-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:0af68c55afbd5f07986df82831c7bff04846928ea8d1fd7f30052638788bc9b5"},
{file = "websockets-15.0.1-cp312-cp312-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:64dee438fed052b52e4f98f76c5790513235efaa1ef7f3f2192c392cd7c91b65"},
{file = "websockets-15.0.1-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:d5f6b181bb38171a8ad1d6aa58a67a6aa9d4b38d0f8c5f496b9e42561dfc62fe"},
{file = "websockets-15.0.1-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:5d54b09eba2bada6011aea5375542a157637b91029687eb4fdb2dab11059c1b4"},
{file = "websockets-15.0.1-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:3be571a8b5afed347da347bfcf27ba12b069d9d7f42cb8c7028b5e98bbb12597"},
{file = "websockets-15.0.1-cp312-cp312-win32.whl", hash = "sha256:c338ffa0520bdb12fbc527265235639fb76e7bc7faafbb93f6ba80d9c06578a9"},
{file = "websockets-15.0.1-cp312-cp312-win_amd64.whl", hash = "sha256:fcd5cf9e305d7b8338754470cf69cf81f420459dbae8a3b40cee57417f4614a7"},
{file = "websockets-15.0.1-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:ee443ef070bb3b6ed74514f5efaa37a252af57c90eb33b956d35c8e9c10a1931"},
{file = "websockets-15.0.1-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:5a939de6b7b4e18ca683218320fc67ea886038265fd1ed30173f5ce3f8e85675"},
{file = "websockets-15.0.1-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:746ee8dba912cd6fc889a8147168991d50ed70447bf18bcda7039f7d2e3d9151"},
{file = "websockets-15.0.1-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:595b6c3969023ecf9041b2936ac3827e4623bfa3ccf007575f04c5a6aa318c22"},
{file = "websockets-15.0.1-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:3c714d2fc58b5ca3e285461a4cc0c9a66bd0e24c5da9911e30158286c9b5be7f"},
{file = "websockets-15.0.1-cp313-cp313-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0f3c1e2ab208db911594ae5b4f79addeb3501604a165019dd221c0bdcabe4db8"},
{file = "websockets-15.0.1-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:229cf1d3ca6c1804400b0a9790dc66528e08a6a1feec0d5040e8b9eb14422375"},
{file = "websockets-15.0.1-cp313-cp313-musllinux_1_2_i686.whl", hash = "sha256:756c56e867a90fb00177d530dca4b097dd753cde348448a1012ed6c5131f8b7d"},
{file = "websockets-15.0.1-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:558d023b3df0bffe50a04e710bc87742de35060580a293c2a984299ed83bc4e4"},
{file = "websockets-15.0.1-cp313-cp313-win32.whl", hash = "sha256:ba9e56e8ceeeedb2e080147ba85ffcd5cd0711b89576b83784d8605a7df455fa"},
{file = "websockets-15.0.1-cp313-cp313-win_amd64.whl", hash = "sha256:e09473f095a819042ecb2ab9465aee615bd9c2028e4ef7d933600a8401c79561"},
{file = "websockets-15.0.1-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:5f4c04ead5aed67c8a1a20491d54cdfba5884507a48dd798ecaf13c74c4489f5"},
{file = "websockets-15.0.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:abdc0c6c8c648b4805c5eacd131910d2a7f6455dfd3becab248ef108e89ab16a"},
{file = "websockets-15.0.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:a625e06551975f4b7ea7102bc43895b90742746797e2e14b70ed61c43a90f09b"},
{file = "websockets-15.0.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d591f8de75824cbb7acad4e05d2d710484f15f29d4a915092675ad3456f11770"},
{file = "websockets-15.0.1-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:47819cea040f31d670cc8d324bb6435c6f133b8c7a19ec3d61634e62f8d8f9eb"},
{file = "websockets-15.0.1-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ac017dd64572e5c3bd01939121e4d16cf30e5d7e110a119399cf3133b63ad054"},
{file = "websockets-15.0.1-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:4a9fac8e469d04ce6c25bb2610dc535235bd4aa14996b4e6dbebf5e007eba5ee"},
{file = "websockets-15.0.1-cp39-cp39-musllinux_1_2_i686.whl", hash = "sha256:363c6f671b761efcb30608d24925a382497c12c506b51661883c3e22337265ed"},
{file = "websockets-15.0.1-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:2034693ad3097d5355bfdacfffcbd3ef5694f9718ab7f29c29689a9eae841880"},
{file = "websockets-15.0.1-cp39-cp39-win32.whl", hash = "sha256:3b1ac0d3e594bf121308112697cf4b32be538fb1444468fb0a6ae4feebc83411"},
{file = "websockets-15.0.1-cp39-cp39-win_amd64.whl", hash = "sha256:b7643a03db5c95c799b89b31c036d5f27eeb4d259c798e878d6937d71832b1e4"},
{file = "websockets-15.0.1-pp310-pypy310_pp73-macosx_10_15_x86_64.whl", hash = "sha256:0c9e74d766f2818bb95f84c25be4dea09841ac0f734d1966f415e4edfc4ef1c3"},
{file = "websockets-15.0.1-pp310-pypy310_pp73-macosx_11_0_arm64.whl", hash = "sha256:1009ee0c7739c08a0cd59de430d6de452a55e42d6b522de7aa15e6f67db0b8e1"},
{file = "websockets-15.0.1-pp310-pypy310_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:76d1f20b1c7a2fa82367e04982e708723ba0e7b8d43aa643d3dcd404d74f1475"},
{file = "websockets-15.0.1-pp310-pypy310_pp73-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:f29d80eb9a9263b8d109135351caf568cc3f80b9928bccde535c235de55c22d9"},
{file = "websockets-15.0.1-pp310-pypy310_pp73-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b359ed09954d7c18bbc1680f380c7301f92c60bf924171629c5db97febb12f04"},
{file = "websockets-15.0.1-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:cad21560da69f4ce7658ca2cb83138fb4cf695a2ba3e475e0559e05991aa8122"},
{file = "websockets-15.0.1-pp39-pypy39_pp73-macosx_10_15_x86_64.whl", hash = "sha256:7f493881579c90fc262d9cdbaa05a6b54b3811c2f300766748db79f098db9940"},
{file = "websockets-15.0.1-pp39-pypy39_pp73-macosx_11_0_arm64.whl", hash = "sha256:47b099e1f4fbc95b701b6e85768e1fcdaf1630f3cbe4765fa216596f12310e2e"},
{file = "websockets-15.0.1-pp39-pypy39_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:67f2b6de947f8c757db2db9c71527933ad0019737ec374a8a6be9a956786aaf9"},
{file = "websockets-15.0.1-pp39-pypy39_pp73-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d08eb4c2b7d6c41da6ca0600c077e93f5adcfd979cd777d747e9ee624556da4b"},
{file = "websockets-15.0.1-pp39-pypy39_pp73-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4b826973a4a2ae47ba357e4e82fa44a463b8f168e1ca775ac64521442b19e87f"},
{file = "websockets-15.0.1-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:21c1fa28a6a7e3cbdc171c694398b6df4744613ce9b36b1a498e816787e28123"},
{file = "websockets-15.0.1-py3-none-any.whl", hash = "sha256:f7a866fbc1e97b5c617ee4116daaa09b722101d4a3c170c787450ba409f9736f"},
{file = "websockets-15.0.1.tar.gz", hash = "sha256:82544de02076bafba038ce055ee6412d68da13ab47f0c60cab827346de828dee"},
]
[[package]]
name = "zipp"
version = "3.23.0"
@@ -1929,4 +1679,4 @@ type = ["pytest-mypy"]
[metadata]
lock-version = "2.1"
python-versions = ">=3.10,<4.0"
content-hash = "0c40b63c3c921846cf05ccfb4e685d4959854b29c2c302245f9832e20aac6954"
content-hash = "de209c97aa0feb29d669a20e4422d51bdf3a0872ec37e85ce9b88ce726fcee7a"

View File

@@ -18,7 +18,8 @@ pydantic = "^2.11.7"
pydantic-settings = "^2.10.1"
pyjwt = { version = "^2.10.1", extras = ["crypto"] }
redis = "^6.2.0"
supabase = "^2.16.0"
bcrypt = "^4.1.0"
authlib = "^1.3.0"
uvicorn = "^0.35.0"
[tool.poetry.group.dev.dependencies]
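With the Supabase client dropped and bcrypt added to the backend dependencies, password hashing is presumably handled directly. A minimal sketch of typical bcrypt usage for hashing and verifying passwords (the helper names are illustrative, not taken from this repository):

    import bcrypt

    def hash_password(plain: str) -> str:
        # gensalt() embeds a random salt and work factor directly in the resulting hash
        return bcrypt.hashpw(plain.encode("utf-8"), bcrypt.gensalt()).decode("utf-8")

    def verify_password(plain: str, hashed: str) -> bool:
        # checkpw re-hashes the candidate using the salt stored in `hashed` and compares the results
        return bcrypt.checkpw(plain.encode("utf-8"), hashed.encode("utf-8"))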

View File

@@ -27,10 +27,15 @@ REDIS_PORT=6379
RABBITMQ_DEFAULT_USER=rabbitmq_user_default
RABBITMQ_DEFAULT_PASS=k0VMxyIJF9S35f3x2uaw5IWAl6Y536O7
# Supabase Authentication
SUPABASE_URL=http://localhost:8000
SUPABASE_SERVICE_ROLE_KEY=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyAgCiAgICAicm9sZSI6ICJzZXJ2aWNlX3JvbGUiLAogICAgImlzcyI6ICJzdXBhYmFzZS1kZW1vIiwKICAgICJpYXQiOiAxNjQxNzY5MjAwLAogICAgImV4cCI6IDE3OTk1MzU2MDAKfQ.DaYlNEoUrrEn2Ig7tqibS-PHK5vgusbcbo7X36XVt4Q
# JWT Authentication
# Generate a secure random key: python -c "import secrets; print(secrets.token_urlsafe(32))"
JWT_SIGN_KEY=your-super-secret-jwt-token-with-at-least-32-characters-long
JWT_VERIFY_KEY=your-super-secret-jwt-token-with-at-least-32-characters-long
JWT_SIGN_ALGORITHM=HS256
ACCESS_TOKEN_EXPIRE_MINUTES=15
REFRESH_TOKEN_EXPIRE_DAYS=7
JWT_ISSUER=autogpt-platform
JWT_AUDIENCE=authenticated
## ===== REQUIRED SECURITY KEYS ===== ##
# Generate using: from cryptography.fernet import Fernet;Fernet.generate_key().decode()
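A minimal sketch of how these JWT_* settings would typically be consumed with PyJWT under the symmetric HS256 setup shown above, where JWT_SIGN_KEY and JWT_VERIFY_KEY hold the same secret; the helper functions below are illustrative assumptions, not the platform's actual implementation:

    import os
    from datetime import datetime, timedelta, timezone

    import jwt  # PyJWT

    def issue_access_token(user_id: str) -> str:
        # Sign a short-lived access token with the configured key, issuer, and audience.
        now = datetime.now(timezone.utc)
        claims = {
            "sub": user_id,
            "iss": os.environ["JWT_ISSUER"],
            "aud": os.environ["JWT_AUDIENCE"],
            "iat": now,
            "exp": now + timedelta(minutes=int(os.environ.get("ACCESS_TOKEN_EXPIRE_MINUTES", "15"))),
        }
        return jwt.encode(claims, os.environ["JWT_SIGN_KEY"],
                          algorithm=os.environ.get("JWT_SIGN_ALGORITHM", "HS256"))

    def verify_access_token(token: str) -> dict:
        # Raises a jwt.InvalidTokenError subclass on expiry, bad signature, or issuer/audience mismatch.
        return jwt.decode(
            token,
            os.environ["JWT_VERIFY_KEY"],
            algorithms=[os.environ.get("JWT_SIGN_ALGORITHM", "HS256")],
            issuer=os.environ["JWT_ISSUER"],
            audience=os.environ["JWT_AUDIENCE"],
        )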

View File

@@ -18,3 +18,6 @@ load-tests/results/
load-tests/*.json
load-tests/*.log
load-tests/node_modules/*
# Migration backups (contain user data)
migration_backups/

View File

@@ -0,0 +1,242 @@
listing_id,storeListingVersionId,slug,agent_name,agent_video,agent_image,featured,sub_heading,description,categories,useForOnboarding,is_available
6e60a900-9d7d-490e-9af2-a194827ed632,d85882b8-633f-44ce-a315-c20a8c123d19,flux-ai-image-generator,Flux AI Image Generator,,"[""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/ca154dd1-140e-454c-91bd-2d8a00de3f08.jpg"",""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/577d995d-bc38-40a9-a23f-1f30f5774bdb.jpg"",""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/415db1b7-115c-43ab-bd6c-4e9f7ef95be1.jpg""]",false,Transform ideas into breathtaking images,"Transform ideas into breathtaking images with this AI-powered Image Generator. Using cutting-edge Flux AI technology, the tool crafts highly detailed, photorealistic visuals from simple text prompts. Perfect for artists, marketers, and content creators, this generator produces unique images tailored to user specifications. From fantastical scenes to lifelike portraits, users can unleash creativity with professional-quality results in seconds. Easy to use and endlessly versatile, bring imagination to life with the AI Image Generator today!","[""creative""]",false,true
f11fc6e9-6166-4676-ac5d-f07127b270c1,c775f60d-b99f-418b-8fe0-53172258c3ce,youtube-transcription-scraper,YouTube Transcription Scraper,https://youtu.be/H8S3pU68lGE,"[""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/65bce54b-0124-4b0d-9e3e-f9b89d0dc99e.jpg""]",false,Fetch the transcriptions from the most popular YouTube videos in your chosen topic,"Effortlessly gather transcriptions from multiple YouTube videos with this agent. It scrapes and compiles video transcripts into a clean, organized list, making it easy to extract insights, quotes, or content from various sources in one go. Ideal for researchers, content creators, and marketers looking to quickly analyze or repurpose video content.","[""writing""]",false,true
17908889-b599-4010-8e4f-bed19b8f3446,6e16e65a-ad34-4108-b4fd-4a23fced5ea2,business-ownerceo-finder,Decision Maker Lead Finder,,"[""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/1020d94e-b6a2-4fa7-bbdf-2c218b0de563.jpg""]",false,Contact CEOs today,"Find the key decision-makers you need, fast.
This agent identifies business owners or CEOs of local companies in any area you choose. Simply enter what kind of businesses you're looking for and where, and it will:
* Search the area and gather public information
* Return names, roles, and contact details when available
* Provide smart Google search suggestions if details aren't found
Perfect for:
* B2B sales teams seeking verified leads
* Recruiters sourcing local talent
* Researchers looking to connect with business leaders
Save hours of manual searching and get straight to the people who matter most.","[""business""]",true,true
72beca1d-45ea-4403-a7ce-e2af168ee428,415b7352-0dc6-4214-9d87-0ad3751b711d,smart-meeting-brief,Smart Meeting Prep,https://youtu.be/9ydZR2hkxaY,"[""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/2f116ce1-63ae-4d39-a5cd-f514defc2b97.png"",""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/0a71a60a-2263-4f12-9836-9c76ab49f155.png"",""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/95327695-9184-403c-907a-a9d3bdafa6a5.png"",""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/2bc77788-790b-47d4-8a61-ce97b695e9f5.png""]",true,Business meeting briefings delivered daily,"Never walk into a meeting unprepared again. Every day at 4 pm, the Smart Meeting Prep Agent scans your calendar for tomorrow's external meetings. It reviews your past email exchanges, researches each participant's background and role, and compiles the insights into a concise briefing, so you can close your workday ready for tomorrow's calls.
How It Works
1. At 4 pm, the agent scans your calendar and identifies external meetings scheduled for the next day.
2. It reviews recent email threads with each participant to surface key relationship history and communication context.
3. It conducts online research to gather publicly available information on roles, company backgrounds, and relevant professional data.
4. It produces a unified briefing for each participant, including past exchange highlights, profile notes, and strategic conversation points.","[""personal""]",true,true
9fa5697a-617b-4fae-aea0-7dbbed279976,b8ceb480-a7a2-4c90-8513-181a49f7071f,automated-support-ai,Automated Support Agent,https://youtu.be/nBMfu_5sgDA,"[""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/ed56febc-2205-4179-9e7e-505d8500b66c.png""]",true,Automate up to 80 percent of inbound support emails,"Overview:
Support teams spend countless hours on basic tickets. This agent automates repetitive customer support tasks. It reads incoming requests, researches your knowledge base, and responds automatically when confident. When unsure, it escalates to a human for final resolution.
How it Works:
New support emails are routed to the agent.
The agent checks internal documentation for answers.
It measures confidence in the answer found and either replies directly or escalates to a human.
Business Value:
Automating the easy 80 percent of support tickets allows your team to focus on high-value, complex customer issues, improving efficiency and response times.","[""business""]",false,true
2bdac92b-a12c-4131-bb46-0e3b89f61413,31daf49d-31d3-476b-aa4c-099abc59b458,unspirational-poster-maker,Unspirational Poster Maker,,"[""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/6a490dac-27e5-405f-a4c4-8d1c55b85060.jpg"",""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/d343fbb5-478c-4e38-94df-4337293b61f1.jpg""]",false,Because adulting is hard,"This witty AI agent generates hilariously relatable ""motivational"" posters that tackle the everyday struggles of procrastination, overthinking, and workplace chaos with a blend of absurdity and sarcasm. From goldfish facing impossible tasks to cats in existential crises, The Unspirational Poster Maker designs tongue-in-cheek graphics and captions that mock productivity clichés and embrace our collective struggles to ""get it together."" Perfect for adding a touch of humour to the workday, these posters remind us that sometimes, all we can do is laugh at the chaos.","[""creative""]",false,true
9adf005e-2854-4cc7-98cf-f7103b92a7b7,a03b0d8c-4751-43d6-a54e-c3b7856ba4e3,ai-shortform-video-generator-create-viral-ready-content,AI Video Generator,,"[""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/8d2670b9-fea5-4966-a597-0a4511bffdc3.png"",""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/aabe8aec-0110-4ce7-a259-4f86fe8fe07d.png""]",false,Create Viral-Ready Shorts Content in Seconds,"OVERVIEW
Transform any trending headline or broad topic into a polished, vertical short-form video in a single run.
The agent automates research, scriptwriting, metadata creation, and Revid.ai rendering, returning one ready-to-publish MP4 plus its title, script and hashtags.
HOW IT WORKS
1. Input a topic or an exact news headline.
2. The agent fetches live search results and selects the most engaging related story.
3. Key facts are summarised into concise research notes.
4. Claude writes a 30–35 second script with visual cues, a three-second hook, tension loops, and a call-to-action.
5. GPT-4o generates an eye-catching title and one or two discoverability hashtags.
6. The script is sent to a state-of-the-art AI video generator to render a single 9:16 MP4 (default: 720 p, 30 fps, voice “Brian”, style “movingImage”, music “Bladerunner 2049”).
All voice, style and resolution settings can be adjusted in the Builder before you press ""Run"".
7. Output delivered: Title, Script, Hashtags, Video URL.
KEY USE CASES
- Broad-topic explainers (e.g. “Artificial Intelligence” or “Climate Tech”).
- Real-time newsjacking with a specific breaking headline.
- Product-launch spotlights and quick event recaps while interest is high.
BUSINESS VALUE
- One-click speed: from idea to finished video in minutes.
- Consistent brand look: Revid presets keep voice, style and aspect ratio on spec.
- No-code workflow: marketers create social video without design or development queues.
- Cloud convenience: Auto-GPT Cloud users are pre-configured with all required keys.
Self-hosted users simply add OpenAI, Anthropic, Perplexity (OpenRouter/Jina) and Revid keys once.
IMPORTANT NOTES
- The agent outputs exactly one video per execution. Run it again for additional shorts.
- Video rendering time varies; AI-generated footage may take several minutes.","[""writing""]",false,true
864e48ef-fee5-42c1-b6a4-2ae139db9fc1,55d40473-0f31-4ada-9e40-d3a7139fcbd4,automated-blog-writer,Automated SEO Blog Writer,https://youtu.be/nKcDCbDVobs,"[""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/2dd5f95b-5b30-4bf8-a11b-bac776c5141a.jpg""]",true,"Automate research, writing, and publishing for high-ranking blog posts","Scale your blog with a fully automated content engine. The Automated SEO Blog Writer learns your brand voice, finds high-demand keywords, and creates SEO-optimized articles that attract organic traffic and boost visibility.
How it works:
1. Share your pitch, website, and values.
2. The agent studies your site and uncovers proven SEO opportunities.
3. It spends two hours researching and drafting each post.
4. You set the cadence—publishing runs on autopilot.
Business value: Consistently publish research-backed, optimized posts that build domain authority, rankings, and thought leadership while you focus on what matters most.
Use cases:
• Founders: Keep your blog active with no time drain.
• Agencies: Deliver scalable SEO content for clients.
• Strategists: Automate execution, focus on strategy.
• Marketers: Drive steady organic growth.
• Local businesses: Capture nearby search traffic.","[""writing""]",false,true
6046f42e-eb84-406f-bae0-8e052064a4fa,a548e507-09a7-4b30-909c-f63fcda10fff,lead-finder-local-businesses,Lead Finder,,"[""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/abd6605f-d5f8-426b-af36-052e8ba5044f.webp""]",false,Auto-Prospect Like a Pro,"Turbo-charge your local lead generation with the AutoGPT Marketplace's top Google Maps prospecting agent. “Lead Finder: Local Businesses” delivers verified, ready-to-contact prospects in any niche and city—so you can focus on closing, not searching.
**WHAT IT DOES**
• Searches Google Maps via the official API (no scraping)
• Prompts like “dentists in Chicago” or “coffee shops near me”
• Returns: Name, Website, Rating, Reviews, **Phone & Address**
• Exports instantly to your CRM, sheet, or outreach workflow
**WHY YOU'LL LOVE IT**
✓ Hyper-targeted leads in minutes
✓ Unlimited searches & locations
✓ Zero CAPTCHAs or IP blocks
✓ Works on AutoGPT Cloud or self-hosted (with your API key)
✓ Cut prospecting time by 90%
**PERFECT FOR**
— Marketers & PPC agencies
— SEO consultants & designers
— SaaS founders & sales teams
Stop scrolling directories—start filling your pipeline. Start now and let AI prospect while you profit.
→ Click *Add to Library* and own your market today.","[""business""]",true,true
f623c862-24e9-44fc-8ce8-d8282bb51ad2,eafa21d3-bf14-4f63-a97f-a5ee41df83b3,linkedin-post-generator,LinkedIn Post Generator,,"[""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/297f6a8e-81a8-43e2-b106-c7ad4a5662df.png"",""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/fceebdc1-aef6-4000-97fc-4ef587f56bda.png""]",false,Auto-craft LinkedIn gold,"Create research-driven, high-impact LinkedIn posts in minutes. This agent searches YouTube for the best videos on your chosen topic, pulls their transcripts, and distils the most valuable insights into a polished post ready for your company page or personal feed.
FEATURES
• Automated YouTube research – discovers and analyses top-ranked videos so you don't have to
• AI-curated synthesis – combines multiple transcripts into one authoritative narrative
• Full creative control – adjust style, tone, objective, opinion, clarity, target word count and number of videos
• LinkedIn-optimised output – hook, 2-3 key points, CTA, strategic line breaks, 3-5 hashtags, no markdown
• One-click publish – returns a ready-to-post text block (≤1 300 characters)
HOW IT WORKS
1. Enter a topic and your preferred writing parameters.
2. The agent builds a YouTube search, fetches the page, and extracts the top N video URLs.
3. It pulls each transcript, then feeds them—plus your settings—into Claude 3.5 Sonnet.
4. The model writes a concise, engaging post designed for maximum LinkedIn engagement.
USE CASES
• Thought-leadership updates backed by fresh video research
• Rapid industry summaries after major events, webinars, or conferences
• Consistent LinkedIn content for busy founders, marketers, and creators
WHY YOU'LL LOVE IT
Save hours of manual research, avoid surface-level hot-takes, and publish posts that showcase real expertise—without the heavy lift.","[""writing""]",true,true
7d4120ad-b6b3-4419-8bdb-7dd7d350ef32,e7bb29a1-23c7-4fee-aa3b-5426174b8c52,youtube-to-linkedin-post-converter,YouTube to LinkedIn Post Converter,,"[""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/f084b326-a708-4396-be51-7ba59ad2ef32.png""]",false,Transform Your YouTube Videos into Engaging LinkedIn Posts with AI,"WHAT IT DOES:
This agent converts YouTube video content into a LinkedIn post by analyzing the video's transcript. It provides you with a tailored post that reflects the core ideas, key takeaways, and tone of the original video, optimizing it for engagement on LinkedIn.
HOW IT WORKS:
- You provide the URL to the YouTube video (required)
- You can choose the structure for the LinkedIn post (e.g., Personal Achievement Story, Lesson Learned, Thought Leadership, etc.)
- You can also select the tone (e.g., Inspirational, Analytical, Conversational, etc.)
- The transcript of the video is analyzed by the GPT-4 model and the Claude 3.5 Sonnet model
- The models extract key insights, memorable quotes, and the main points from the video
- You'll receive a LinkedIn post, formatted according to your chosen structure and tone, optimized for professional engagement
INPUTS:
- Source YouTube Video – Provide the URL to the YouTube video
- Structure – Choose the post format (e.g., Personal Achievement Story, Thought Leadership, etc.)
- Content – Specify the main message or idea of the post (e.g., Hot Take, Key Takeaways, etc.)
- Tone – Select the tone for the post (e.g., Conversational, Inspirational, etc.)
OUTPUT:
- LinkedIn Post – A well-crafted, AI-generated LinkedIn post with a professional tone, based on the video content and your specified preferences
Perfect for content creators, marketers, and professionals who want to repurpose YouTube videos for LinkedIn and boost their professional branding.","[""writing""]",false,true
c61d6a83-ea48-4df8-b447-3da2d9fe5814,00fdd42c-a14c-4d19-a567-65374ea0e87f,personalized-morning-coffee-newsletter,Personal Newsletter,,"[""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/f4b38e4c-8166-4caf-9411-96c9c4c82d4c.png""]",false,Start your day with personalized AI newsletters that deliver credibility and context for every interest or mood.,"This Personal Newsletter Agent provides a bespoke daily digest on your favorite topics and tone. Whether you prefer industry insights, lighthearted reads, or breaking news, this agent crafts your own unique newsletter to keep you informed and entertained.
How It Works
1. Enter your favorite topics, industries, or areas of interest.
2. Choose your tone—professional, casual, or humorous.
3. Set your preferred delivery cadence: daily or weekly.
4. The agent scans top sources and compiles 3–5 engaging stories, insights, and fun facts into a conversational newsletter.
Skip the morning scroll and enjoy a thoughtfully curated newsletter designed just for you. Stay ahead of trends, spark creative ideas, and enjoy an effortless, informed start to your day.
Use Cases
• Executives: Get a daily digest of market updates and leadership insights.
• Marketers: Receive curated creative trends and campaign inspiration.
• Entrepreneurs: Stay updated on your industry without information overload.","[""research""]",true,true
e2e49cfc-4a39-4d62-a6b3-c095f6d025ff,fc2c9976-0962-4625-a27b-d316573a9e7f,email-address-finder,Email Scout - Contact Finder Assistant,,"[""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/da8a690a-7a8b-4c1d-b6f8-e2f840c0205d.jpg"",""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/6a2ac25c-1609-4881-8140-e6da2421afb3.jpg"",""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/26179263-fe06-45bd-b6a0-0754660a0a46.jpg""]",false,Find contact details from name and location using AI search,"Finding someone's professional email address can be time-consuming and frustrating. Manual searching across multiple websites, social profiles, and business directories often leads to dead ends or outdated information.
Email Scout automates this process by intelligently searching across publicly available sources when you provide a person's name and location. Simply input basic information like ""Tim Cook, USA"" or ""Sarah Smith, London"" and let the AI assistant do the work of finding potential contact details.
Key Features:
- Quick search from just name and location
- Scans multiple public sources
- Automated AI-powered search process
- Easy to use with simple inputs
Perfect for recruiters, business development professionals, researchers, and anyone needing to establish professional contact.
Note: This tool searches only publicly available information. Search results depend on what contact information people have made public. Some searches may not yield results if the information isn't publicly accessible.","[""""]",false,true
81bcc372-0922-4a36-bc35-f7b1e51d6939,e437cc95-e671-489d-b915-76561fba8c7f,ai-youtube-to-blog-converter,YouTube Video to SEO Blog Writer,,"[""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/239e5a41-2515-4e1c-96ef-31d0d37ecbeb.webp"",""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/c7d96966-786f-4be6-ad7d-3a51c84efc0e.png"",""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/0275a74c-e2c2-4e29-a6e4-3a616c3c35dd.png""]",false,One link. One click. One powerful blog post.,"Effortlessly transform your YouTube videos into high-quality, SEO-optimized blog posts.
Your videos deserve a second life—in writing.
Make your content work twice as hard by repurposing it into engaging, searchable articles.
Perfect for content creators, marketers, and bloggers, this tool analyzes video content and generates well-structured blog posts tailored to your tone, audience, and word count. Just paste a YouTube URL and let the AI handle the rest.
FEATURES
• CONTENT ANALYSIS
Extracts key points from the video while preserving your message and intent.
• CUSTOMIZABLE OUTPUT
Select a tone that fits your audience: casual, professional, educational, or formal.
• SEO OPTIMIZATION
Automatically creates engaging titles and structured subheadings for better search visibility.
• USER-FRIENDLY
Repurpose your videos into written content to expand your reach and improve accessibility.
Whether you're looking to grow your blog, boost SEO, or simply get more out of your content, the AI YouTube-to-Blog Converter makes it effortless.
","[""writing""]",true,true
5c3510d2-fc8b-4053-8e19-67f53c86eb1a,f2cc74bb-f43f-4395-9c35-ecb30b5b4fc9,ai-webpage-copy-improver,AI Webpage Copy Improver,,"[""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/d562d26f-5891-4b09-8859-fbb205972313.jpg""]",false,Boost Your Website's Search Engine Performance,"Elevate your web content with this powerful AI Webpage Copy Improver. Designed for marketers, SEO specialists, and web developers, this tool analyses and enhances website copy for maximum impact. Using advanced language models, it optimizes text for better clarity, SEO performance, and increased conversion rates. The AI examines your existing content, identifies areas for improvement, and generates refined copy that maintains your brand voice while boosting engagement. From homepage headlines to product descriptions, transform your web presence with AI-driven insights. Improve readability, incorporate targeted keywords, and craft compelling calls-to-action - all with the click of a button. Take your digital marketing to the next level with the AI Webpage Copy Improver.","[""marketing""]",true,true
94d03bd3-7d44-4d47-b60c-edb2f89508d6,b6f6f0d3-49f4-4e3b-8155-ffe9141b32c0,domain-name-finder,Domain Name Finder,,"[""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/28545e09-b2b8-4916-b4c6-67f982510a78.jpeg""]",false,Instantly generate brand-ready domain names that are actually available,"Overview:
Finding a domain name that fits your brand shouldn't take hours of searching and failed checks. The Domain Name Finder Agent turns your pitch into hundreds of creative, brand-ready domain ideas—filtered by live availability so every result is actionable.
How It Works
1. Input your product pitch, company name, or core keywords.
2. The agent analyzes brand tone, audience, and industry context.
3. It generates a list of unique, memorable domains that match your criteria.
4. All names are pre-filtered for real-time availability, so you can register immediately.
Business Value
Save hours of guesswork and eliminate dead ends. Accelerate brand launches, startup naming, and campaign creation with ready-to-claim domains.
Key Use Cases
• Startup Founders: Quickly find brand-ready domains for MVP launches or rebrands.
• Marketers: Test name options across campaigns with instant availability data.
• Entrepreneurs: Validate ideas faster with instant domain options.","[""business""]",false,true
7a831906-daab-426f-9d66-bcf98d869426,516d813b-d1bc-470f-add7-c63a4b2c2bad,ai-function,AI Function,,"[""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/620e8117-2ee1-4384-89e6-c2ef4ec3d9c9.webp"",""https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/476259e2-5a79-4a7b-8e70-deeebfca70d7.png""]",false,Never Code Again,"AI FUNCTION MAGIC
Your AI-powered assistant for turning plain-English descriptions into working Python functions.
HOW IT WORKS
1. Describe what the function should do.
2. Specify the inputs it needs.
3. Receive the generated Python code.
FEATURES
- Effortless Function Generation: convert natural-language specs into complete functions.
- Customizable Inputs: define the parameters that matter to you.
- Versatile Use Cases: simulate data, automate tasks, prototype ideas.
- Seamless Integration: add the generated function directly to your codebase.
EXAMPLE
Request: “Create a function that generates 20 examples of fake people, each with a name, date of birth, job title, and age.”
Input parameter: number_of_people (default 20)
Result: a list of dictionaries such as
[
{ ""name"": ""Emma Martinez"", ""date_of_birth"": ""19921103"", ""job_title"": ""Data Analyst"", ""age"": 32 },
{ ""name"": ""Liam OConnor"", ""date_of_birth"": ""19850719"", ""job_title"": ""Marketing Manager"", ""age"": 39 },
…18 more entries…
]","[""development""]",false,true
12 7d4120ad-b6b3-4419-8bdb-7dd7d350ef32 e7bb29a1-23c7-4fee-aa3b-5426174b8c52 youtube-to-linkedin-post-converter YouTube to LinkedIn Post Converter ["https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/f084b326-a708-4396-be51-7ba59ad2ef32.png"] false Transform Your YouTube Videos into Engaging LinkedIn Posts with AI WHAT IT DOES: This agent converts YouTube video content into a LinkedIn post by analyzing the video's transcript. It provides you with a tailored post that reflects the core ideas, key takeaways, and tone of the original video, optimizing it for engagement on LinkedIn. HOW IT WORKS: - You provide the URL to the YouTube video (required) - You can choose the structure for the LinkedIn post (e.g., Personal Achievement Story, Lesson Learned, Thought Leadership, etc.) - You can also select the tone (e.g., Inspirational, Analytical, Conversational, etc.) - The transcript of the video is analyzed by the GPT-4 model and the Claude 3.5 Sonnet model - The models extract key insights, memorable quotes, and the main points from the video - You’ll receive a LinkedIn post, formatted according to your chosen structure and tone, optimized for professional engagement INPUTS: - Source YouTube Video – Provide the URL to the YouTube video - Structure – Choose the post format (e.g., Personal Achievement Story, Thought Leadership, etc.) - Content – Specify the main message or idea of the post (e.g., Hot Take, Key Takeaways, etc.) - Tone – Select the tone for the post (e.g., Conversational, Inspirational, etc.) OUTPUT: - LinkedIn Post – A well-crafted, AI-generated LinkedIn post with a professional tone, based on the video content and your specified preferences Perfect for content creators, marketers, and professionals who want to repurpose YouTube videos for LinkedIn and boost their professional branding. ["writing"] false true
13 c61d6a83-ea48-4df8-b447-3da2d9fe5814 00fdd42c-a14c-4d19-a567-65374ea0e87f personalized-morning-coffee-newsletter Personal Newsletter ["https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/f4b38e4c-8166-4caf-9411-96c9c4c82d4c.png"] false Start your day with personalized AI newsletters that deliver credibility and context for every interest or mood. This Personal Newsletter Agent provides a bespoke daily digest on your favorite topics and tone. Whether you prefer industry insights, lighthearted reads, or breaking news, this agent crafts your own unique newsletter to keep you informed and entertained. How It Works 1. Enter your favorite topics, industries, or areas of interest. 2. Choose your tone—professional, casual, or humorous. 3. Set your preferred delivery cadence: daily or weekly. 4. The agent scans top sources and compiles 3–5 engaging stories, insights, and fun facts into a conversational newsletter. Skip the morning scroll and enjoy a thoughtfully curated newsletter designed just for you. Stay ahead of trends, spark creative ideas, and enjoy an effortless, informed start to your day. Use Cases • Executives: Get a daily digest of market updates and leadership insights. • Marketers: Receive curated creative trends and campaign inspiration. • Entrepreneurs: Stay updated on your industry without information overload. ["research"] true true
14 e2e49cfc-4a39-4d62-a6b3-c095f6d025ff fc2c9976-0962-4625-a27b-d316573a9e7f email-address-finder Email Scout - Contact Finder Assistant ["https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/da8a690a-7a8b-4c1d-b6f8-e2f840c0205d.jpg","https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/6a2ac25c-1609-4881-8140-e6da2421afb3.jpg","https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/26179263-fe06-45bd-b6a0-0754660a0a46.jpg"] false Find contact details from name and location using AI search Finding someone's professional email address can be time-consuming and frustrating. Manual searching across multiple websites, social profiles, and business directories often leads to dead ends or outdated information. Email Scout automates this process by intelligently searching across publicly available sources when you provide a person's name and location. Simply input basic information like "Tim Cook, USA" or "Sarah Smith, London" and let the AI assistant do the work of finding potential contact details. Key Features: - Quick search from just name and location - Scans multiple public sources - Automated AI-powered search process - Easy to use with simple inputs Perfect for recruiters, business development professionals, researchers, and anyone needing to establish professional contact. Note: This tool searches only publicly available information. Search results depend on what contact information people have made public. Some searches may not yield results if the information isn't publicly accessible. [""] false true
15 81bcc372-0922-4a36-bc35-f7b1e51d6939 e437cc95-e671-489d-b915-76561fba8c7f ai-youtube-to-blog-converter YouTube Video to SEO Blog Writer ["https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/239e5a41-2515-4e1c-96ef-31d0d37ecbeb.webp","https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/c7d96966-786f-4be6-ad7d-3a51c84efc0e.png","https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/0275a74c-e2c2-4e29-a6e4-3a616c3c35dd.png"] false One link. One click. One powerful blog post. Effortlessly transform your YouTube videos into high-quality, SEO-optimized blog posts. Your videos deserve a second life—in writing. Make your content work twice as hard by repurposing it into engaging, searchable articles. Perfect for content creators, marketers, and bloggers, this tool analyzes video content and generates well-structured blog posts tailored to your tone, audience, and word count. Just paste a YouTube URL and let the AI handle the rest. FEATURES • CONTENT ANALYSIS Extracts key points from the video while preserving your message and intent. • CUSTOMIZABLE OUTPUT Select a tone that fits your audience: casual, professional, educational, or formal. • SEO OPTIMIZATION Automatically creates engaging titles and structured subheadings for better search visibility. • USER-FRIENDLY Repurpose your videos into written content to expand your reach and improve accessibility. Whether you're looking to grow your blog, boost SEO, or simply get more out of your content, the AI YouTube-to-Blog Converter makes it effortless. ["writing"] true true
16 5c3510d2-fc8b-4053-8e19-67f53c86eb1a f2cc74bb-f43f-4395-9c35-ecb30b5b4fc9 ai-webpage-copy-improver AI Webpage Copy Improver ["https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/d562d26f-5891-4b09-8859-fbb205972313.jpg"] false Boost Your Website's Search Engine Performance Elevate your web content with this powerful AI Webpage Copy Improver. Designed for marketers, SEO specialists, and web developers, this tool analyses and enhances website copy for maximum impact. Using advanced language models, it optimizes text for better clarity, SEO performance, and increased conversion rates. The AI examines your existing content, identifies areas for improvement, and generates refined copy that maintains your brand voice while boosting engagement. From homepage headlines to product descriptions, transform your web presence with AI-driven insights. Improve readability, incorporate targeted keywords, and craft compelling calls-to-action - all with the click of a button. Take your digital marketing to the next level with the AI Webpage Copy Improver. ["marketing"] true true
17 94d03bd3-7d44-4d47-b60c-edb2f89508d6 b6f6f0d3-49f4-4e3b-8155-ffe9141b32c0 domain-name-finder Domain Name Finder ["https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/28545e09-b2b8-4916-b4c6-67f982510a78.jpeg"] false Instantly generate brand-ready domain names that are actually available Overview: Finding a domain name that fits your brand shouldn’t take hours of searching and failed checks. The Domain Name Finder Agent turns your pitch into hundreds of creative, brand-ready domain ideas—filtered by live availability so every result is actionable. How It Works 1. Input your product pitch, company name, or core keywords. 2. The agent analyzes brand tone, audience, and industry context. 3. It generates a list of unique, memorable domains that match your criteria. 4. All names are pre-filtered for real-time availability, so you can register immediately. Business Value Save hours of guesswork and eliminate dead ends. Accelerate brand launches, startup naming, and campaign creation with ready-to-claim domains. Key Use Cases • Startup Founders: Quickly find brand-ready domains for MVP launches or rebrands. • Marketers: Test name options across campaigns with instant availability data. • Entrepreneurs: Validate ideas faster with instant domain options. ["business"] false true
18 7a831906-daab-426f-9d66-bcf98d869426 516d813b-d1bc-470f-add7-c63a4b2c2bad ai-function AI Function ["https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/620e8117-2ee1-4384-89e6-c2ef4ec3d9c9.webp","https://storage.googleapis.com/agpt-prod-website-artifacts/users/b3e41ea4-2f4c-4964-927c-fe682d857bad/images/476259e2-5a79-4a7b-8e70-deeebfca70d7.png"] false Never Code Again AI FUNCTION MAGIC Your AI‑powered assistant for turning plain‑English descriptions into working Python functions. HOW IT WORKS 1. Describe what the function should do. 2. Specify the inputs it needs. 3. Receive the generated Python code. FEATURES - Effortless Function Generation: convert natural‑language specs into complete functions. - Customizable Inputs: define the parameters that matter to you. - Versatile Use Cases: simulate data, automate tasks, prototype ideas. - Seamless Integration: add the generated function directly to your codebase. EXAMPLE Request: “Create a function that generates 20 examples of fake people, each with a name, date of birth, job title, and age.” Input parameter: number_of_people (default 20) Result: a list of dictionaries such as [ { "name": "Emma Martinez", "date_of_birth": "1992‑11‑03", "job_title": "Data Analyst", "age": 32 }, { "name": "Liam O’Connor", "date_of_birth": "1985‑07‑19", "job_title": "Marketing Manager", "age": 39 }, …18 more entries… ] ["development"] false true
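The "AI Function" listing above describes turning a plain-English description plus inputs into a working Python function, and uses fake-people generation as its example. Below is a rough, hand-written illustration of what a function of that shape could look like; it is an assumption added for illustration, not code produced by the agent, and only the field names and the `fake_people(n)` signature come from the listing and the graph export further down.

```python
# Illustrative sketch only: a hand-written version of the listing's example function.
# The name pools and the age logic are invented for the demo; they are not agent output.
import random
from datetime import date

FIRST_NAMES = ["Emma", "Liam", "Olivia", "Noah", "Ava"]
LAST_NAMES = ["Martinez", "O'Connor", "Chen", "Patel", "Kowalski"]
JOB_TITLES = ["Data Analyst", "Marketing Manager", "Software Engineer", "Teacher"]


def fake_people(n: int = 20) -> list[dict]:
    """Generate n fake people, each with a name, date of birth, job title, and age."""
    today = date.today()
    people = []
    for _ in range(n):
        birth_year = random.randint(1950, 2005)
        dob = date(birth_year, random.randint(1, 12), random.randint(1, 28))
        people.append({
            "name": f"{random.choice(FIRST_NAMES)} {random.choice(LAST_NAMES)}",
            "date_of_birth": dob.isoformat(),
            "job_title": random.choice(JOB_TITLES),
            "age": today.year - birth_year,  # rough age; ignores whether the birthday has passed yet
        })
    return people


if __name__ == "__main__":
    print(fake_people(3))
```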

File diff suppressed because it is too large


@@ -0,0 +1,590 @@
{
"id": "7b2e2095-782a-4f8d-adda-e62b661bccf5",
"version": 29,
"is_active": false,
"name": "Unspirational Poster Maker",
"description": "This witty AI agent generates hilariously relatable \"motivational\" posters that tackle the everyday struggles of procrastination, overthinking, and workplace chaos with a blend of absurdity and sarcasm. From goldfish facing impossible tasks to cats in existential crises, The Unspirational Poster Maker designs tongue-in-cheek graphics and captions that mock productivity clich\u00e9s and embrace our collective struggles to \"get it together.\" Perfect for adding a touch of humour to the workday, these posters remind us that sometimes, all we can do is laugh at the chaos.",
"instructions": null,
"recommended_schedule_cron": null,
"nodes": [
{
"id": "5ac3727a-1ea7-436b-a902-ef1bfd883a30",
"block_id": "363ae599-353e-4804-937e-b2ee3cef3da4",
"input_default": {
"name": "Generated Image",
"description": "The resulting generated image ready for you to review and post."
},
"metadata": {
"position": {
"x": 2329.937006807125,
"y": 80.49068076698347
}
},
"input_links": [
{
"id": "c6c511e8-e6a4-4969-9bc8-f67d60c1e229",
"source_id": "86665e90-ffbf-48fb-ad3f-e5d31fd50c51",
"sink_id": "5ac3727a-1ea7-436b-a902-ef1bfd883a30",
"source_name": "result",
"sink_name": "value",
"is_static": false
},
{
"id": "20845dda-91de-4508-8077-0504b1a5ae03",
"source_id": "28bda769-b88b-44c9-be5c-52c2667f137e",
"sink_id": "5ac3727a-1ea7-436b-a902-ef1bfd883a30",
"source_name": "result",
"sink_name": "value",
"is_static": false
},
{
"id": "6524c611-774b-45e9-899d-9a6aa80c549c",
"source_id": "e7cdc1a2-4427-4a8a-a31b-63c8e74842f8",
"sink_id": "5ac3727a-1ea7-436b-a902-ef1bfd883a30",
"source_name": "result",
"sink_name": "value",
"is_static": false
},
{
"id": "714a0821-e5ba-4af7-9432-50491adda7b1",
"source_id": "576c5677-9050-4d1c-aad4-36b820c04fef",
"sink_id": "5ac3727a-1ea7-436b-a902-ef1bfd883a30",
"source_name": "result",
"sink_name": "value",
"is_static": false
}
],
"output_links": [],
"graph_id": "7b2e2095-782a-4f8d-adda-e62b661bccf5",
"graph_version": 29,
"webhook_id": null,
"webhook": null
},
{
"id": "7e026d19-f9a6-412f-8082-610f9ba0c410",
"block_id": "c0a8e994-ebf1-4a9c-a4d8-89d09c86741b",
"input_default": {
"name": "Theme",
"value": "Cooking"
},
"metadata": {
"position": {
"x": -1219.5966324967521,
"y": 80.50339731789956
}
},
"input_links": [],
"output_links": [
{
"id": "8c2bd1f7-b17b-4835-81b6-bb336097aa7a",
"source_id": "7e026d19-f9a6-412f-8082-610f9ba0c410",
"sink_id": "7543b9b0-0409-4cf8-bc4e-e0336273e2c4",
"source_name": "result",
"sink_name": "prompt_values_#_THEME",
"is_static": true
}
],
"graph_id": "7b2e2095-782a-4f8d-adda-e62b661bccf5",
"graph_version": 29,
"webhook_id": null,
"webhook": null
},
{
"id": "28bda769-b88b-44c9-be5c-52c2667f137e",
"block_id": "6ab085e2-20b3-4055-bc3e-08036e01eca6",
"input_default": {
"upscale": "No Upscale"
},
"metadata": {
"position": {
"x": 1132.373897280427,
"y": 88.44610377514573
}
},
"input_links": [
{
"id": "54588c74-e090-4e49-89e4-844b9952a585",
"source_id": "7543b9b0-0409-4cf8-bc4e-e0336273e2c4",
"sink_id": "28bda769-b88b-44c9-be5c-52c2667f137e",
"source_name": "response",
"sink_name": "prompt",
"is_static": false
}
],
"output_links": [
{
"id": "20845dda-91de-4508-8077-0504b1a5ae03",
"source_id": "28bda769-b88b-44c9-be5c-52c2667f137e",
"sink_id": "5ac3727a-1ea7-436b-a902-ef1bfd883a30",
"source_name": "result",
"sink_name": "value",
"is_static": false
}
],
"graph_id": "7b2e2095-782a-4f8d-adda-e62b661bccf5",
"graph_version": 29,
"webhook_id": null,
"webhook": null
},
{
"id": "e7cdc1a2-4427-4a8a-a31b-63c8e74842f8",
"block_id": "6ab085e2-20b3-4055-bc3e-08036e01eca6",
"input_default": {
"upscale": "No Upscale"
},
"metadata": {
"position": {
"x": 590.7543882245375,
"y": 85.69546832466654
}
},
"input_links": [
{
"id": "66646786-3006-4417-a6b7-0158f2603d1d",
"source_id": "7543b9b0-0409-4cf8-bc4e-e0336273e2c4",
"sink_id": "e7cdc1a2-4427-4a8a-a31b-63c8e74842f8",
"source_name": "response",
"sink_name": "prompt",
"is_static": false
}
],
"output_links": [
{
"id": "6524c611-774b-45e9-899d-9a6aa80c549c",
"source_id": "e7cdc1a2-4427-4a8a-a31b-63c8e74842f8",
"sink_id": "5ac3727a-1ea7-436b-a902-ef1bfd883a30",
"source_name": "result",
"sink_name": "value",
"is_static": false
}
],
"graph_id": "7b2e2095-782a-4f8d-adda-e62b661bccf5",
"graph_version": 29,
"webhook_id": null,
"webhook": null
},
{
"id": "576c5677-9050-4d1c-aad4-36b820c04fef",
"block_id": "6ab085e2-20b3-4055-bc3e-08036e01eca6",
"input_default": {
"upscale": "No Upscale"
},
"metadata": {
"position": {
"x": 60.48904654237981,
"y": 86.06183359510214
}
},
"input_links": [
{
"id": "201d3e03-bc06-4cee-846d-4c3c804d8857",
"source_id": "7543b9b0-0409-4cf8-bc4e-e0336273e2c4",
"sink_id": "576c5677-9050-4d1c-aad4-36b820c04fef",
"source_name": "response",
"sink_name": "prompt",
"is_static": false
}
],
"output_links": [
{
"id": "714a0821-e5ba-4af7-9432-50491adda7b1",
"source_id": "576c5677-9050-4d1c-aad4-36b820c04fef",
"sink_id": "5ac3727a-1ea7-436b-a902-ef1bfd883a30",
"source_name": "result",
"sink_name": "value",
"is_static": false
}
],
"graph_id": "7b2e2095-782a-4f8d-adda-e62b661bccf5",
"graph_version": 29,
"webhook_id": null,
"webhook": null
},
{
"id": "86665e90-ffbf-48fb-ad3f-e5d31fd50c51",
"block_id": "6ab085e2-20b3-4055-bc3e-08036e01eca6",
"input_default": {
"prompt": "A cat sprawled dramatically across an important-looking document during a work-from-home meeting, making direct eye contact with the camera while knocking over a coffee mug in slow motion. Text Overlay: \"Chaos is a career path. Be the obstacle everyone has to work around.\"",
"upscale": "No Upscale"
},
"metadata": {
"position": {
"x": 1668.3572666956795,
"y": 89.69665262457966
}
},
"input_links": [
{
"id": "509b7587-1940-4a06-808d-edde9a74f400",
"source_id": "7543b9b0-0409-4cf8-bc4e-e0336273e2c4",
"sink_id": "86665e90-ffbf-48fb-ad3f-e5d31fd50c51",
"source_name": "response",
"sink_name": "prompt",
"is_static": false
}
],
"output_links": [
{
"id": "c6c511e8-e6a4-4969-9bc8-f67d60c1e229",
"source_id": "86665e90-ffbf-48fb-ad3f-e5d31fd50c51",
"sink_id": "5ac3727a-1ea7-436b-a902-ef1bfd883a30",
"source_name": "result",
"sink_name": "value",
"is_static": false
}
],
"graph_id": "7b2e2095-782a-4f8d-adda-e62b661bccf5",
"graph_version": 29,
"webhook_id": null,
"webhook": null
},
{
"id": "7543b9b0-0409-4cf8-bc4e-e0336273e2c4",
"block_id": "1f292d4a-41a4-4977-9684-7c8d560b9f91",
"input_default": {
"model": "gpt-4o",
"prompt": "<example_output>\nA photo of a sloth lounging on a desk, with its head resting on a keyboard. The keyboard is on top of a laptop with a blank spreadsheet open. A to-do list is placed beside the laptop, with the top item written as \"Do literally anything\". There is a text overlay that says \"If you can't outwork them, outnap them.\".\n</example_output>\n\nCreate a relatable satirical, snarky, user-deprecating motivational style image based on the theme: \"{{THEME}}\".\n\nOutput only the image description and caption, without any additional commentary or formatting.",
"prompt_values": {}
},
"metadata": {
"position": {
"x": -561.1139207164056,
"y": 78.60434452403524
}
},
"input_links": [
{
"id": "8c2bd1f7-b17b-4835-81b6-bb336097aa7a",
"source_id": "7e026d19-f9a6-412f-8082-610f9ba0c410",
"sink_id": "7543b9b0-0409-4cf8-bc4e-e0336273e2c4",
"source_name": "result",
"sink_name": "prompt_values_#_THEME",
"is_static": true
}
],
"output_links": [
{
"id": "54588c74-e090-4e49-89e4-844b9952a585",
"source_id": "7543b9b0-0409-4cf8-bc4e-e0336273e2c4",
"sink_id": "28bda769-b88b-44c9-be5c-52c2667f137e",
"source_name": "response",
"sink_name": "prompt",
"is_static": false
},
{
"id": "201d3e03-bc06-4cee-846d-4c3c804d8857",
"source_id": "7543b9b0-0409-4cf8-bc4e-e0336273e2c4",
"sink_id": "576c5677-9050-4d1c-aad4-36b820c04fef",
"source_name": "response",
"sink_name": "prompt",
"is_static": false
},
{
"id": "509b7587-1940-4a06-808d-edde9a74f400",
"source_id": "7543b9b0-0409-4cf8-bc4e-e0336273e2c4",
"sink_id": "86665e90-ffbf-48fb-ad3f-e5d31fd50c51",
"source_name": "response",
"sink_name": "prompt",
"is_static": false
},
{
"id": "66646786-3006-4417-a6b7-0158f2603d1d",
"source_id": "7543b9b0-0409-4cf8-bc4e-e0336273e2c4",
"sink_id": "e7cdc1a2-4427-4a8a-a31b-63c8e74842f8",
"source_name": "response",
"sink_name": "prompt",
"is_static": false
}
],
"graph_id": "7b2e2095-782a-4f8d-adda-e62b661bccf5",
"graph_version": 29,
"webhook_id": null,
"webhook": null
}
],
"links": [
{
"id": "66646786-3006-4417-a6b7-0158f2603d1d",
"source_id": "7543b9b0-0409-4cf8-bc4e-e0336273e2c4",
"sink_id": "e7cdc1a2-4427-4a8a-a31b-63c8e74842f8",
"source_name": "response",
"sink_name": "prompt",
"is_static": false
},
{
"id": "c6c511e8-e6a4-4969-9bc8-f67d60c1e229",
"source_id": "86665e90-ffbf-48fb-ad3f-e5d31fd50c51",
"sink_id": "5ac3727a-1ea7-436b-a902-ef1bfd883a30",
"source_name": "result",
"sink_name": "value",
"is_static": false
},
{
"id": "6524c611-774b-45e9-899d-9a6aa80c549c",
"source_id": "e7cdc1a2-4427-4a8a-a31b-63c8e74842f8",
"sink_id": "5ac3727a-1ea7-436b-a902-ef1bfd883a30",
"source_name": "result",
"sink_name": "value",
"is_static": false
},
{
"id": "20845dda-91de-4508-8077-0504b1a5ae03",
"source_id": "28bda769-b88b-44c9-be5c-52c2667f137e",
"sink_id": "5ac3727a-1ea7-436b-a902-ef1bfd883a30",
"source_name": "result",
"sink_name": "value",
"is_static": false
},
{
"id": "8c2bd1f7-b17b-4835-81b6-bb336097aa7a",
"source_id": "7e026d19-f9a6-412f-8082-610f9ba0c410",
"sink_id": "7543b9b0-0409-4cf8-bc4e-e0336273e2c4",
"source_name": "result",
"sink_name": "prompt_values_#_THEME",
"is_static": true
},
{
"id": "201d3e03-bc06-4cee-846d-4c3c804d8857",
"source_id": "7543b9b0-0409-4cf8-bc4e-e0336273e2c4",
"sink_id": "576c5677-9050-4d1c-aad4-36b820c04fef",
"source_name": "response",
"sink_name": "prompt",
"is_static": false
},
{
"id": "714a0821-e5ba-4af7-9432-50491adda7b1",
"source_id": "576c5677-9050-4d1c-aad4-36b820c04fef",
"sink_id": "5ac3727a-1ea7-436b-a902-ef1bfd883a30",
"source_name": "result",
"sink_name": "value",
"is_static": false
},
{
"id": "54588c74-e090-4e49-89e4-844b9952a585",
"source_id": "7543b9b0-0409-4cf8-bc4e-e0336273e2c4",
"sink_id": "28bda769-b88b-44c9-be5c-52c2667f137e",
"source_name": "response",
"sink_name": "prompt",
"is_static": false
},
{
"id": "509b7587-1940-4a06-808d-edde9a74f400",
"source_id": "7543b9b0-0409-4cf8-bc4e-e0336273e2c4",
"sink_id": "86665e90-ffbf-48fb-ad3f-e5d31fd50c51",
"source_name": "response",
"sink_name": "prompt",
"is_static": false
}
],
"forked_from_id": null,
"forked_from_version": null,
"sub_graphs": [],
"user_id": "",
"created_at": "2024-12-20T19:58:34.390Z",
"input_schema": {
"type": "object",
"properties": {
"Theme": {
"advanced": false,
"secret": false,
"title": "Theme",
"default": "Cooking"
}
},
"required": []
},
"output_schema": {
"type": "object",
"properties": {
"Generated Image": {
"advanced": false,
"secret": false,
"title": "Generated Image",
"description": "The resulting generated image ready for you to review and post."
}
},
"required": [
"Generated Image"
]
},
"has_external_trigger": false,
"has_human_in_the_loop": false,
"trigger_setup_info": null,
"credentials_input_schema": {
"properties": {
"ideogram_api_key_credentials": {
"credentials_provider": [
"ideogram"
],
"credentials_types": [
"api_key"
],
"properties": {
"id": {
"title": "Id",
"type": "string"
},
"title": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Title"
},
"provider": {
"const": "ideogram",
"title": "Provider",
"type": "string"
},
"type": {
"const": "api_key",
"title": "Type",
"type": "string"
}
},
"required": [
"id",
"provider",
"type"
],
"title": "CredentialsMetaInput[Literal[<ProviderName.IDEOGRAM: 'ideogram'>], Literal['api_key']]",
"type": "object",
"discriminator_values": []
},
"openai_api_key_credentials": {
"credentials_provider": [
"openai"
],
"credentials_types": [
"api_key"
],
"properties": {
"id": {
"title": "Id",
"type": "string"
},
"title": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Title"
},
"provider": {
"const": "openai",
"title": "Provider",
"type": "string"
},
"type": {
"const": "api_key",
"title": "Type",
"type": "string"
}
},
"required": [
"id",
"provider",
"type"
],
"title": "CredentialsMetaInput[Literal[<ProviderName.OPENAI: 'openai'>], Literal['api_key']]",
"type": "object",
"discriminator": "model",
"discriminator_mapping": {
"Llama-3.3-70B-Instruct": "llama_api",
"Llama-3.3-8B-Instruct": "llama_api",
"Llama-4-Maverick-17B-128E-Instruct-FP8": "llama_api",
"Llama-4-Scout-17B-16E-Instruct-FP8": "llama_api",
"Qwen/Qwen2.5-72B-Instruct-Turbo": "aiml_api",
"amazon/nova-lite-v1": "open_router",
"amazon/nova-micro-v1": "open_router",
"amazon/nova-pro-v1": "open_router",
"claude-3-7-sonnet-20250219": "anthropic",
"claude-3-haiku-20240307": "anthropic",
"claude-haiku-4-5-20251001": "anthropic",
"claude-opus-4-1-20250805": "anthropic",
"claude-opus-4-20250514": "anthropic",
"claude-opus-4-5-20251101": "anthropic",
"claude-sonnet-4-20250514": "anthropic",
"claude-sonnet-4-5-20250929": "anthropic",
"cohere/command-r-08-2024": "open_router",
"cohere/command-r-plus-08-2024": "open_router",
"deepseek/deepseek-chat": "open_router",
"deepseek/deepseek-r1-0528": "open_router",
"dolphin-mistral:latest": "ollama",
"google/gemini-2.0-flash-001": "open_router",
"google/gemini-2.0-flash-lite-001": "open_router",
"google/gemini-2.5-flash": "open_router",
"google/gemini-2.5-flash-lite-preview-06-17": "open_router",
"google/gemini-2.5-pro-preview-03-25": "open_router",
"google/gemini-3-pro-preview": "open_router",
"gpt-3.5-turbo": "openai",
"gpt-4-turbo": "openai",
"gpt-4.1-2025-04-14": "openai",
"gpt-4.1-mini-2025-04-14": "openai",
"gpt-4o": "openai",
"gpt-4o-mini": "openai",
"gpt-5-2025-08-07": "openai",
"gpt-5-chat-latest": "openai",
"gpt-5-mini-2025-08-07": "openai",
"gpt-5-nano-2025-08-07": "openai",
"gpt-5.1-2025-11-13": "openai",
"gryphe/mythomax-l2-13b": "open_router",
"llama-3.1-8b-instant": "groq",
"llama-3.3-70b-versatile": "groq",
"llama3": "ollama",
"llama3.1:405b": "ollama",
"llama3.2": "ollama",
"llama3.3": "ollama",
"meta-llama/Llama-3.2-3B-Instruct-Turbo": "aiml_api",
"meta-llama/Llama-3.3-70B-Instruct-Turbo": "aiml_api",
"meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo": "aiml_api",
"meta-llama/llama-4-maverick": "open_router",
"meta-llama/llama-4-scout": "open_router",
"microsoft/wizardlm-2-8x22b": "open_router",
"mistralai/mistral-nemo": "open_router",
"moonshotai/kimi-k2": "open_router",
"nousresearch/hermes-3-llama-3.1-405b": "open_router",
"nousresearch/hermes-3-llama-3.1-70b": "open_router",
"nvidia/llama-3.1-nemotron-70b-instruct": "aiml_api",
"o1": "openai",
"o1-mini": "openai",
"o3-2025-04-16": "openai",
"o3-mini": "openai",
"openai/gpt-oss-120b": "open_router",
"openai/gpt-oss-20b": "open_router",
"perplexity/sonar": "open_router",
"perplexity/sonar-deep-research": "open_router",
"perplexity/sonar-pro": "open_router",
"qwen/qwen3-235b-a22b-thinking-2507": "open_router",
"qwen/qwen3-coder": "open_router",
"v0-1.0-md": "v0",
"v0-1.5-lg": "v0",
"v0-1.5-md": "v0",
"x-ai/grok-4": "open_router",
"x-ai/grok-4-fast": "open_router",
"x-ai/grok-4.1-fast": "open_router",
"x-ai/grok-code-fast-1": "open_router"
},
"discriminator_values": [
"gpt-4o"
]
}
},
"required": [
"ideogram_api_key_credentials",
"openai_api_key_credentials"
],
"title": "UnspirationalPosterMakerCredentialsInputSchema",
"type": "object"
}
}
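In the Unspirational Poster Maker graph above, the "Theme" input node feeds the AI block through a link whose sink is `prompt_values_#_THEME`, and the block's prompt contains a matching `{{THEME}}` placeholder. A minimal sketch of that kind of `{{NAME}}` substitution is shown below; it is an assumption made for illustration, not the platform's actual templating code.

```python
import re


def fill_prompt(template: str, prompt_values: dict[str, str]) -> str:
    """Replace every {{NAME}} placeholder with prompt_values["NAME"].

    Unknown placeholders are left as-is so missing values stay visible.
    """
    def substitute(match: re.Match) -> str:
        return prompt_values.get(match.group(1), match.group(0))

    return re.sub(r"\{\{(\w+)\}\}", substitute, template)


template = 'Create a satirical, snarky motivational image based on the theme: "{{THEME}}".'
print(fill_prompt(template, {"THEME": "Cooking"}))
# Create a satirical, snarky motivational image based on the theme: "Cooking".
```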

File diff suppressed because it is too large


@@ -0,0 +1,447 @@
{
"id": "622849a7-5848-4838-894d-01f8f07e3fad",
"version": 18,
"is_active": true,
"name": "AI Function",
"description": "## AI-Powered Function Magic: Never code again!\nProvide a description of a python function and your inputs and AI will provide the results.",
"instructions": null,
"recommended_schedule_cron": null,
"nodes": [
{
"id": "26ff2973-3f9a-451d-b902-d45e5da0a7fe",
"block_id": "363ae599-353e-4804-937e-b2ee3cef3da4",
"input_default": {
"name": "return",
"title": null,
"value": null,
"format": "",
"secret": false,
"advanced": false,
"description": "The value returned by the function"
},
"metadata": {
"position": {
"x": 1598.8622921127233,
"y": 291.59140862204725
}
},
"input_links": [
{
"id": "caecc1de-fdbc-4fd9-9570-074057bb15f9",
"source_id": "c5d16ee4-de9e-4d93-bf32-ac2d15760d5b",
"sink_id": "26ff2973-3f9a-451d-b902-d45e5da0a7fe",
"source_name": "response",
"sink_name": "value",
"is_static": false
}
],
"output_links": [],
"graph_id": "622849a7-5848-4838-894d-01f8f07e3fad",
"graph_version": 18,
"webhook_id": null,
"webhook": null
},
{
"id": "c5d16ee4-de9e-4d93-bf32-ac2d15760d5b",
"block_id": "1f292d4a-41a4-4977-9684-7c8d560b9f91",
"input_default": {
"model": "o3-mini",
"retry": 3,
"prompt": "{{ARGS}}",
"sys_prompt": "You are now the following python function:\n\n```\n# {{DESCRIPTION}}\n{{FUNCTION}}\n```\n\nThe user will provide your input arguments.\nOnly respond with your `return` value.\nDo not include any commentary or additional text in your response. \nDo not include ``` backticks or any other decorators.",
"ollama_host": "localhost:11434",
"prompt_values": {}
},
"metadata": {
"position": {
"x": 995,
"y": 290.50000000000006
}
},
"input_links": [
{
"id": "dc7cb15f-76cc-4533-b96c-dd9e3f7f75ed",
"source_id": "4eab3a55-20f2-4c1d-804c-7377ba8202d2",
"sink_id": "c5d16ee4-de9e-4d93-bf32-ac2d15760d5b",
"source_name": "result",
"sink_name": "prompt_values_#_FUNCTION",
"is_static": true
},
{
"id": "093bdca5-9f44-42f9-8e1c-276dd2971675",
"source_id": "844530de-2354-46d8-b748-67306b7bbca1",
"sink_id": "c5d16ee4-de9e-4d93-bf32-ac2d15760d5b",
"source_name": "result",
"sink_name": "prompt_values_#_ARGS",
"is_static": true
},
{
"id": "6c63d8ee-b63d-4ff6-bae0-7db8f99bb7af",
"source_id": "0fd6ef54-c1cd-478d-b764-17e40f882b99",
"sink_id": "c5d16ee4-de9e-4d93-bf32-ac2d15760d5b",
"source_name": "result",
"sink_name": "prompt_values_#_DESCRIPTION",
"is_static": true
}
],
"output_links": [
{
"id": "caecc1de-fdbc-4fd9-9570-074057bb15f9",
"source_id": "c5d16ee4-de9e-4d93-bf32-ac2d15760d5b",
"sink_id": "26ff2973-3f9a-451d-b902-d45e5da0a7fe",
"source_name": "response",
"sink_name": "value",
"is_static": false
}
],
"graph_id": "622849a7-5848-4838-894d-01f8f07e3fad",
"graph_version": 18,
"webhook_id": null,
"webhook": null
},
{
"id": "4eab3a55-20f2-4c1d-804c-7377ba8202d2",
"block_id": "7fcd3bcb-8e1b-4e69-903d-32d3d4a92158",
"input_default": {
"name": "Function Definition",
"title": null,
"value": "def fake_people(n: int) -> list[dict]:",
"secret": false,
"advanced": false,
"description": "The function definition (text). This is what you would type on the first line of the function when programming.\n\ne.g \"def fake_people(n: int) -> list[dict]:\"",
"placeholder_values": []
},
"metadata": {
"position": {
"x": -672.6908629664215,
"y": 302.42044359789116
}
},
"input_links": [],
"output_links": [
{
"id": "dc7cb15f-76cc-4533-b96c-dd9e3f7f75ed",
"source_id": "4eab3a55-20f2-4c1d-804c-7377ba8202d2",
"sink_id": "c5d16ee4-de9e-4d93-bf32-ac2d15760d5b",
"source_name": "result",
"sink_name": "prompt_values_#_FUNCTION",
"is_static": true
}
],
"graph_id": "622849a7-5848-4838-894d-01f8f07e3fad",
"graph_version": 18,
"webhook_id": null,
"webhook": null
},
{
"id": "844530de-2354-46d8-b748-67306b7bbca1",
"block_id": "7fcd3bcb-8e1b-4e69-903d-32d3d4a92158",
"input_default": {
"name": "Arguments",
"title": null,
"value": "20",
"secret": false,
"advanced": false,
"description": "The function's inputs\n\ne.g \"20\"",
"placeholder_values": []
},
"metadata": {
"position": {
"x": -158.1623599617334,
"y": 295.410856928333
}
},
"input_links": [],
"output_links": [
{
"id": "093bdca5-9f44-42f9-8e1c-276dd2971675",
"source_id": "844530de-2354-46d8-b748-67306b7bbca1",
"sink_id": "c5d16ee4-de9e-4d93-bf32-ac2d15760d5b",
"source_name": "result",
"sink_name": "prompt_values_#_ARGS",
"is_static": true
}
],
"graph_id": "622849a7-5848-4838-894d-01f8f07e3fad",
"graph_version": 18,
"webhook_id": null,
"webhook": null
},
{
"id": "0fd6ef54-c1cd-478d-b764-17e40f882b99",
"block_id": "90a56ffb-7024-4b2b-ab50-e26c5e5ab8ba",
"input_default": {
"name": "Description",
"title": null,
"value": "Generates n examples of fake data representing people, each with a name, DoB, Job title, and an age.",
"secret": false,
"advanced": false,
"description": "Describe what the function does.\n\ne.g \"Generates n examples of fake data representing people, each with a name, DoB, Job title, and an age.\"",
"placeholder_values": []
},
"metadata": {
"position": {
"x": 374.4548658057796,
"y": 290.3779121974126
}
},
"input_links": [],
"output_links": [
{
"id": "6c63d8ee-b63d-4ff6-bae0-7db8f99bb7af",
"source_id": "0fd6ef54-c1cd-478d-b764-17e40f882b99",
"sink_id": "c5d16ee4-de9e-4d93-bf32-ac2d15760d5b",
"source_name": "result",
"sink_name": "prompt_values_#_DESCRIPTION",
"is_static": true
}
],
"graph_id": "622849a7-5848-4838-894d-01f8f07e3fad",
"graph_version": 18,
"webhook_id": null,
"webhook": null
}
],
"links": [
{
"id": "caecc1de-fdbc-4fd9-9570-074057bb15f9",
"source_id": "c5d16ee4-de9e-4d93-bf32-ac2d15760d5b",
"sink_id": "26ff2973-3f9a-451d-b902-d45e5da0a7fe",
"source_name": "response",
"sink_name": "value",
"is_static": false
},
{
"id": "6c63d8ee-b63d-4ff6-bae0-7db8f99bb7af",
"source_id": "0fd6ef54-c1cd-478d-b764-17e40f882b99",
"sink_id": "c5d16ee4-de9e-4d93-bf32-ac2d15760d5b",
"source_name": "result",
"sink_name": "prompt_values_#_DESCRIPTION",
"is_static": true
},
{
"id": "093bdca5-9f44-42f9-8e1c-276dd2971675",
"source_id": "844530de-2354-46d8-b748-67306b7bbca1",
"sink_id": "c5d16ee4-de9e-4d93-bf32-ac2d15760d5b",
"source_name": "result",
"sink_name": "prompt_values_#_ARGS",
"is_static": true
},
{
"id": "dc7cb15f-76cc-4533-b96c-dd9e3f7f75ed",
"source_id": "4eab3a55-20f2-4c1d-804c-7377ba8202d2",
"sink_id": "c5d16ee4-de9e-4d93-bf32-ac2d15760d5b",
"source_name": "result",
"sink_name": "prompt_values_#_FUNCTION",
"is_static": true
}
],
"forked_from_id": null,
"forked_from_version": null,
"sub_graphs": [],
"user_id": "",
"created_at": "2025-04-19T17:10:48.857Z",
"input_schema": {
"type": "object",
"properties": {
"Function Definition": {
"advanced": false,
"anyOf": [
{
"format": "short-text",
"type": "string"
},
{
"type": "null"
}
],
"secret": false,
"title": "Function Definition",
"description": "The function definition (text). This is what you would type on the first line of the function when programming.\n\ne.g \"def fake_people(n: int) -> list[dict]:\"",
"default": "def fake_people(n: int) -> list[dict]:"
},
"Arguments": {
"advanced": false,
"anyOf": [
{
"format": "short-text",
"type": "string"
},
{
"type": "null"
}
],
"secret": false,
"title": "Arguments",
"description": "The function's inputs\n\ne.g \"20\"",
"default": "20"
},
"Description": {
"advanced": false,
"anyOf": [
{
"format": "long-text",
"type": "string"
},
{
"type": "null"
}
],
"secret": false,
"title": "Description",
"description": "Describe what the function does.\n\ne.g \"Generates n examples of fake data representing people, each with a name, DoB, Job title, and an age.\"",
"default": "Generates n examples of fake data representing people, each with a name, DoB, Job title, and an age."
}
},
"required": []
},
"output_schema": {
"type": "object",
"properties": {
"return": {
"advanced": false,
"secret": false,
"title": "return",
"description": "The value returned by the function"
}
},
"required": [
"return"
]
},
"has_external_trigger": false,
"has_human_in_the_loop": false,
"trigger_setup_info": null,
"credentials_input_schema": {
"properties": {
"openai_api_key_credentials": {
"credentials_provider": [
"openai"
],
"credentials_types": [
"api_key"
],
"properties": {
"id": {
"title": "Id",
"type": "string"
},
"title": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Title"
},
"provider": {
"const": "openai",
"title": "Provider",
"type": "string"
},
"type": {
"const": "api_key",
"title": "Type",
"type": "string"
}
},
"required": [
"id",
"provider",
"type"
],
"title": "CredentialsMetaInput[Literal[<ProviderName.OPENAI: 'openai'>], Literal['api_key']]",
"type": "object",
"discriminator": "model",
"discriminator_mapping": {
"Llama-3.3-70B-Instruct": "llama_api",
"Llama-3.3-8B-Instruct": "llama_api",
"Llama-4-Maverick-17B-128E-Instruct-FP8": "llama_api",
"Llama-4-Scout-17B-16E-Instruct-FP8": "llama_api",
"Qwen/Qwen2.5-72B-Instruct-Turbo": "aiml_api",
"amazon/nova-lite-v1": "open_router",
"amazon/nova-micro-v1": "open_router",
"amazon/nova-pro-v1": "open_router",
"claude-3-7-sonnet-20250219": "anthropic",
"claude-3-haiku-20240307": "anthropic",
"claude-haiku-4-5-20251001": "anthropic",
"claude-opus-4-1-20250805": "anthropic",
"claude-opus-4-20250514": "anthropic",
"claude-opus-4-5-20251101": "anthropic",
"claude-sonnet-4-20250514": "anthropic",
"claude-sonnet-4-5-20250929": "anthropic",
"cohere/command-r-08-2024": "open_router",
"cohere/command-r-plus-08-2024": "open_router",
"deepseek/deepseek-chat": "open_router",
"deepseek/deepseek-r1-0528": "open_router",
"dolphin-mistral:latest": "ollama",
"google/gemini-2.0-flash-001": "open_router",
"google/gemini-2.0-flash-lite-001": "open_router",
"google/gemini-2.5-flash": "open_router",
"google/gemini-2.5-flash-lite-preview-06-17": "open_router",
"google/gemini-2.5-pro-preview-03-25": "open_router",
"google/gemini-3-pro-preview": "open_router",
"gpt-3.5-turbo": "openai",
"gpt-4-turbo": "openai",
"gpt-4.1-2025-04-14": "openai",
"gpt-4.1-mini-2025-04-14": "openai",
"gpt-4o": "openai",
"gpt-4o-mini": "openai",
"gpt-5-2025-08-07": "openai",
"gpt-5-chat-latest": "openai",
"gpt-5-mini-2025-08-07": "openai",
"gpt-5-nano-2025-08-07": "openai",
"gpt-5.1-2025-11-13": "openai",
"gryphe/mythomax-l2-13b": "open_router",
"llama-3.1-8b-instant": "groq",
"llama-3.3-70b-versatile": "groq",
"llama3": "ollama",
"llama3.1:405b": "ollama",
"llama3.2": "ollama",
"llama3.3": "ollama",
"meta-llama/Llama-3.2-3B-Instruct-Turbo": "aiml_api",
"meta-llama/Llama-3.3-70B-Instruct-Turbo": "aiml_api",
"meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo": "aiml_api",
"meta-llama/llama-4-maverick": "open_router",
"meta-llama/llama-4-scout": "open_router",
"microsoft/wizardlm-2-8x22b": "open_router",
"mistralai/mistral-nemo": "open_router",
"moonshotai/kimi-k2": "open_router",
"nousresearch/hermes-3-llama-3.1-405b": "open_router",
"nousresearch/hermes-3-llama-3.1-70b": "open_router",
"nvidia/llama-3.1-nemotron-70b-instruct": "aiml_api",
"o1": "openai",
"o1-mini": "openai",
"o3-2025-04-16": "openai",
"o3-mini": "openai",
"openai/gpt-oss-120b": "open_router",
"openai/gpt-oss-20b": "open_router",
"perplexity/sonar": "open_router",
"perplexity/sonar-deep-research": "open_router",
"perplexity/sonar-pro": "open_router",
"qwen/qwen3-235b-a22b-thinking-2507": "open_router",
"qwen/qwen3-coder": "open_router",
"v0-1.0-md": "v0",
"v0-1.5-lg": "v0",
"v0-1.5-md": "v0",
"x-ai/grok-4": "open_router",
"x-ai/grok-4-fast": "open_router",
"x-ai/grok-4.1-fast": "open_router",
"x-ai/grok-code-fast-1": "open_router"
},
"discriminator_values": [
"o3-mini"
]
}
},
"required": [
"openai_api_key_credentials"
],
"title": "AIFunctionCredentialsInputSchema",
"type": "object"
}
}
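The AI Function graph above boils down to a single LLM call: the three input nodes fill the `{{DESCRIPTION}}`, `{{FUNCTION}}` and `{{ARGS}}` placeholders, the system prompt tells the model to behave as that Python function, and the model is asked to reply with only the return value. A minimal sketch of assembling those two messages from the graph's default inputs follows; it only rebuilds the strings, calls no model, and is an illustration rather than the block's real implementation.

```python
FENCE = "`" * 3  # literal triple backticks, as used in the block's sys_prompt template


def build_messages(description: str, function_def: str, args: str) -> list[dict]:
    """Assemble the system/user messages implied by the graph's sys_prompt template."""
    sys_prompt = (
        "You are now the following python function:\n\n"
        f"{FENCE}\n# {description}\n{function_def}\n{FENCE}\n\n"
        "The user will provide your input arguments.\n"
        "Only respond with your `return` value.\n"
        "Do not include any commentary or additional text in your response.\n"
        f"Do not include {FENCE} backticks or any other decorators."
    )
    return [
        {"role": "system", "content": sys_prompt},
        {"role": "user", "content": args},  # corresponds to the {{ARGS}} placeholder
    ]


messages = build_messages(
    description=(
        "Generates n examples of fake data representing people, "
        "each with a name, DoB, Job title, and an age."
    ),
    function_def="def fake_people(n: int) -> list[dict]:",
    args="20",
)
print(messages[0]["content"])
```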

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because one or more lines are too long

File diff suppressed because it is too large

File diff suppressed because one or more lines are too long

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -0,0 +1,403 @@
{
"id": "ed2091cf-5b27-45a9-b3ea-42396f95b256",
"version": 12,
"is_active": true,
"name": "Flux AI Image Generator",
"description": "Transform ideas into breathtaking images with this AI-powered Image Generator. Using cutting-edge Flux AI technology, the tool crafts highly detailed, photorealistic visuals from simple text prompts. Perfect for artists, marketers, and content creators, this generator produces unique images tailored to user specifications. From fantastical scenes to lifelike portraits, users can unleash creativity with professional-quality results in seconds. Easy to use and endlessly versatile, bring imagination to life with the AI Image Generator today!",
"instructions": null,
"recommended_schedule_cron": null,
"nodes": [
{
"id": "7482c59d-725f-4686-82b9-0dfdc4e92316",
"block_id": "cc10ff7b-7753-4ff2-9af6-9399b1a7eddc",
"input_default": {
"text": "Press the \"Advanced\" toggle and input your replicate API key.\n\nYou can get one here:\nhttps://replicate.com/account/api-tokens\n"
},
"metadata": {
"position": {
"x": 872.8268131538296,
"y": 614.9436919065381
}
},
"input_links": [],
"output_links": [],
"graph_id": "ed2091cf-5b27-45a9-b3ea-42396f95b256",
"graph_version": 12,
"webhook_id": null,
"webhook": null
},
{
"id": "0d1dec1a-e4ee-4349-9673-449a01bbf14e",
"block_id": "363ae599-353e-4804-937e-b2ee3cef3da4",
"input_default": {
"name": "Generated Image"
},
"metadata": {
"position": {
"x": 1453.6844137728922,
"y": 963.2466395125115
}
},
"input_links": [
{
"id": "06665d23-2f3d-4445-8f22-573446fcff5b",
"source_id": "50bc23e9-f2b7-4959-8710-99679ed9eeea",
"sink_id": "0d1dec1a-e4ee-4349-9673-449a01bbf14e",
"source_name": "result",
"sink_name": "value",
"is_static": false
}
],
"output_links": [],
"graph_id": "ed2091cf-5b27-45a9-b3ea-42396f95b256",
"graph_version": 12,
"webhook_id": null,
"webhook": null
},
{
"id": "6f24c45f-1548-4eda-9784-da06ce0abef8",
"block_id": "c0a8e994-ebf1-4a9c-a4d8-89d09c86741b",
"input_default": {
"name": "Image Subject",
"value": "Otto the friendly, purple \"Chief Automation Octopus\" helping people automate their tedious tasks.",
"description": "The subject of the image"
},
"metadata": {
"position": {
"x": -314.43009631839783,
"y": 962.935949165938
}
},
"input_links": [],
"output_links": [
{
"id": "1077c61a-a32a-4ed7-becf-11bcf835b914",
"source_id": "6f24c45f-1548-4eda-9784-da06ce0abef8",
"sink_id": "0d1bca9a-d9b8-4bfd-a19c-fe50b54f4b12",
"source_name": "result",
"sink_name": "prompt_values_#_TOPIC",
"is_static": true
}
],
"graph_id": "ed2091cf-5b27-45a9-b3ea-42396f95b256",
"graph_version": 12,
"webhook_id": null,
"webhook": null
},
{
"id": "50bc23e9-f2b7-4959-8710-99679ed9eeea",
"block_id": "90f8c45e-e983-4644-aa0b-b4ebe2f531bc",
"input_default": {
"prompt": "dog",
"output_format": "png",
"replicate_model_name": "Flux Pro 1.1"
},
"metadata": {
"position": {
"x": 873.0119949791526,
"y": 966.1604399052493
}
},
"input_links": [
{
"id": "a17ec505-9377-4700-8fe0-124ca81d43a9",
"source_id": "0d1bca9a-d9b8-4bfd-a19c-fe50b54f4b12",
"sink_id": "50bc23e9-f2b7-4959-8710-99679ed9eeea",
"source_name": "response",
"sink_name": "prompt",
"is_static": false
}
],
"output_links": [
{
"id": "06665d23-2f3d-4445-8f22-573446fcff5b",
"source_id": "50bc23e9-f2b7-4959-8710-99679ed9eeea",
"sink_id": "0d1dec1a-e4ee-4349-9673-449a01bbf14e",
"source_name": "result",
"sink_name": "value",
"is_static": false
}
],
"graph_id": "ed2091cf-5b27-45a9-b3ea-42396f95b256",
"graph_version": 12,
"webhook_id": null,
"webhook": null
},
{
"id": "0d1bca9a-d9b8-4bfd-a19c-fe50b54f4b12",
"block_id": "1f292d4a-41a4-4977-9684-7c8d560b9f91",
"input_default": {
"model": "gpt-4o-mini",
"prompt": "Generate an incredibly detailed, photorealistic image prompt about {{TOPIC}}, describing the camera it's taken with and prompting the diffusion model to use all the best quality techniques.\n\nOutput only the prompt with no additional commentary.",
"prompt_values": {}
},
"metadata": {
"position": {
"x": 277.3057034159709,
"y": 962.8382498113764
}
},
"input_links": [
{
"id": "1077c61a-a32a-4ed7-becf-11bcf835b914",
"source_id": "6f24c45f-1548-4eda-9784-da06ce0abef8",
"sink_id": "0d1bca9a-d9b8-4bfd-a19c-fe50b54f4b12",
"source_name": "result",
"sink_name": "prompt_values_#_TOPIC",
"is_static": true
}
],
"output_links": [
{
"id": "a17ec505-9377-4700-8fe0-124ca81d43a9",
"source_id": "0d1bca9a-d9b8-4bfd-a19c-fe50b54f4b12",
"sink_id": "50bc23e9-f2b7-4959-8710-99679ed9eeea",
"source_name": "response",
"sink_name": "prompt",
"is_static": false
}
],
"graph_id": "ed2091cf-5b27-45a9-b3ea-42396f95b256",
"graph_version": 12,
"webhook_id": null,
"webhook": null
}
],
"links": [
{
"id": "1077c61a-a32a-4ed7-becf-11bcf835b914",
"source_id": "6f24c45f-1548-4eda-9784-da06ce0abef8",
"sink_id": "0d1bca9a-d9b8-4bfd-a19c-fe50b54f4b12",
"source_name": "result",
"sink_name": "prompt_values_#_TOPIC",
"is_static": true
},
{
"id": "06665d23-2f3d-4445-8f22-573446fcff5b",
"source_id": "50bc23e9-f2b7-4959-8710-99679ed9eeea",
"sink_id": "0d1dec1a-e4ee-4349-9673-449a01bbf14e",
"source_name": "result",
"sink_name": "value",
"is_static": false
},
{
"id": "a17ec505-9377-4700-8fe0-124ca81d43a9",
"source_id": "0d1bca9a-d9b8-4bfd-a19c-fe50b54f4b12",
"sink_id": "50bc23e9-f2b7-4959-8710-99679ed9eeea",
"source_name": "response",
"sink_name": "prompt",
"is_static": false
}
],
"forked_from_id": null,
"forked_from_version": null,
"sub_graphs": [],
"user_id": "",
"created_at": "2024-12-20T18:46:11.492Z",
"input_schema": {
"type": "object",
"properties": {
"Image Subject": {
"advanced": false,
"secret": false,
"title": "Image Subject",
"description": "The subject of the image",
"default": "Otto the friendly, purple \"Chief Automation Octopus\" helping people automate their tedious tasks."
}
},
"required": []
},
"output_schema": {
"type": "object",
"properties": {
"Generated Image": {
"advanced": false,
"secret": false,
"title": "Generated Image"
}
},
"required": [
"Generated Image"
]
},
"has_external_trigger": false,
"has_human_in_the_loop": false,
"trigger_setup_info": null,
"credentials_input_schema": {
"properties": {
"replicate_api_key_credentials": {
"credentials_provider": [
"replicate"
],
"credentials_types": [
"api_key"
],
"properties": {
"id": {
"title": "Id",
"type": "string"
},
"title": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Title"
},
"provider": {
"const": "replicate",
"title": "Provider",
"type": "string"
},
"type": {
"const": "api_key",
"title": "Type",
"type": "string"
}
},
"required": [
"id",
"provider",
"type"
],
"title": "CredentialsMetaInput[Literal[<ProviderName.REPLICATE: 'replicate'>], Literal['api_key']]",
"type": "object",
"discriminator_values": []
},
"openai_api_key_credentials": {
"credentials_provider": [
"openai"
],
"credentials_types": [
"api_key"
],
"properties": {
"id": {
"title": "Id",
"type": "string"
},
"title": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Title"
},
"provider": {
"const": "openai",
"title": "Provider",
"type": "string"
},
"type": {
"const": "api_key",
"title": "Type",
"type": "string"
}
},
"required": [
"id",
"provider",
"type"
],
"title": "CredentialsMetaInput[Literal[<ProviderName.OPENAI: 'openai'>], Literal['api_key']]",
"type": "object",
"discriminator": "model",
"discriminator_mapping": {
"Llama-3.3-70B-Instruct": "llama_api",
"Llama-3.3-8B-Instruct": "llama_api",
"Llama-4-Maverick-17B-128E-Instruct-FP8": "llama_api",
"Llama-4-Scout-17B-16E-Instruct-FP8": "llama_api",
"Qwen/Qwen2.5-72B-Instruct-Turbo": "aiml_api",
"amazon/nova-lite-v1": "open_router",
"amazon/nova-micro-v1": "open_router",
"amazon/nova-pro-v1": "open_router",
"claude-3-7-sonnet-20250219": "anthropic",
"claude-3-haiku-20240307": "anthropic",
"claude-haiku-4-5-20251001": "anthropic",
"claude-opus-4-1-20250805": "anthropic",
"claude-opus-4-20250514": "anthropic",
"claude-opus-4-5-20251101": "anthropic",
"claude-sonnet-4-20250514": "anthropic",
"claude-sonnet-4-5-20250929": "anthropic",
"cohere/command-r-08-2024": "open_router",
"cohere/command-r-plus-08-2024": "open_router",
"deepseek/deepseek-chat": "open_router",
"deepseek/deepseek-r1-0528": "open_router",
"dolphin-mistral:latest": "ollama",
"google/gemini-2.0-flash-001": "open_router",
"google/gemini-2.0-flash-lite-001": "open_router",
"google/gemini-2.5-flash": "open_router",
"google/gemini-2.5-flash-lite-preview-06-17": "open_router",
"google/gemini-2.5-pro-preview-03-25": "open_router",
"google/gemini-3-pro-preview": "open_router",
"gpt-3.5-turbo": "openai",
"gpt-4-turbo": "openai",
"gpt-4.1-2025-04-14": "openai",
"gpt-4.1-mini-2025-04-14": "openai",
"gpt-4o": "openai",
"gpt-4o-mini": "openai",
"gpt-5-2025-08-07": "openai",
"gpt-5-chat-latest": "openai",
"gpt-5-mini-2025-08-07": "openai",
"gpt-5-nano-2025-08-07": "openai",
"gpt-5.1-2025-11-13": "openai",
"gryphe/mythomax-l2-13b": "open_router",
"llama-3.1-8b-instant": "groq",
"llama-3.3-70b-versatile": "groq",
"llama3": "ollama",
"llama3.1:405b": "ollama",
"llama3.2": "ollama",
"llama3.3": "ollama",
"meta-llama/Llama-3.2-3B-Instruct-Turbo": "aiml_api",
"meta-llama/Llama-3.3-70B-Instruct-Turbo": "aiml_api",
"meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo": "aiml_api",
"meta-llama/llama-4-maverick": "open_router",
"meta-llama/llama-4-scout": "open_router",
"microsoft/wizardlm-2-8x22b": "open_router",
"mistralai/mistral-nemo": "open_router",
"moonshotai/kimi-k2": "open_router",
"nousresearch/hermes-3-llama-3.1-405b": "open_router",
"nousresearch/hermes-3-llama-3.1-70b": "open_router",
"nvidia/llama-3.1-nemotron-70b-instruct": "aiml_api",
"o1": "openai",
"o1-mini": "openai",
"o3-2025-04-16": "openai",
"o3-mini": "openai",
"openai/gpt-oss-120b": "open_router",
"openai/gpt-oss-20b": "open_router",
"perplexity/sonar": "open_router",
"perplexity/sonar-deep-research": "open_router",
"perplexity/sonar-pro": "open_router",
"qwen/qwen3-235b-a22b-thinking-2507": "open_router",
"qwen/qwen3-coder": "open_router",
"v0-1.0-md": "v0",
"v0-1.5-lg": "v0",
"v0-1.5-md": "v0",
"x-ai/grok-4": "open_router",
"x-ai/grok-4-fast": "open_router",
"x-ai/grok-4.1-fast": "open_router",
"x-ai/grok-code-fast-1": "open_router"
},
"discriminator_values": [
"gpt-4o-mini"
]
}
},
"required": [
"replicate_api_key_credentials",
"openai_api_key_credentials"
],
"title": "FluxAIImageGeneratorCredentialsInputSchema",
"type": "object"
}
}
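The Flux AI Image Generator graph above is a straight pipeline: the "Image Subject" input feeds a gpt-4o-mini block that writes a detailed photorealistic prompt, that prompt feeds the Flux block on Replicate, and the result lands in the "Generated Image" output. Reading that order out of such an export amounts to a topological sort over `nodes` and `links`. The sketch below does that under the assumption that each link means the sink node consumes an output of the source node; it illustrates the exported structure rather than AutoGPT's actual scheduler, and the short node ids in the demo stand in for the UUIDs in the JSON.

```python
from collections import defaultdict, deque


def execution_order(graph: dict) -> list[str]:
    """Return node ids of a graph export in dependency order (Kahn's algorithm)."""
    node_ids = [node["id"] for node in graph["nodes"]]
    indegree = {nid: 0 for nid in node_ids}
    downstream = defaultdict(list)
    for link in graph["links"]:
        downstream[link["source_id"]].append(link["sink_id"])
        indegree[link["sink_id"]] += 1

    queue = deque(nid for nid in node_ids if indegree[nid] == 0)
    order = []
    while queue:
        nid = queue.popleft()
        order.append(nid)
        for sink in downstream[nid]:
            indegree[sink] -= 1
            if indegree[sink] == 0:
                queue.append(sink)
    return order


# Short stand-in ids mirroring the roles of the Flux graph's nodes.
flux_like = {
    "nodes": [{"id": n} for n in ["image_subject", "prompt_writer", "flux_block", "output"]],
    "links": [
        {"source_id": "image_subject", "sink_id": "prompt_writer"},
        {"source_id": "prompt_writer", "sink_id": "flux_block"},
        {"source_id": "flux_block", "sink_id": "output"},
    ],
}
print(execution_order(flux_like))
# ['image_subject', 'prompt_writer', 'flux_block', 'output']
```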

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -0,0 +1,505 @@
{
"id": "0d440799-44ba-4d6c-85b3-b3739f1e1287",
"version": 12,
"is_active": true,
"name": "AI Webpage Copy Improver",
"description": "Elevate your web content with this powerful AI Webpage Copy Improver. Designed for marketers, SEO specialists, and web developers, this tool analyses and enhances website copy for maximum impact. Using advanced language models, it optimizes text for better clarity, SEO performance, and increased conversion rates. The AI examines your existing content, identifies areas for improvement, and generates refined copy that maintains your brand voice while boosting engagement. From homepage headlines to product descriptions, transform your web presence with AI-driven insights. Improve readability, incorporate targeted keywords, and craft compelling calls-to-action - all with the click of a button. Take your digital marketing to the next level with the AI Webpage Copy Improver.",
"instructions": null,
"recommended_schedule_cron": null,
"nodes": [
{
"id": "130ec496-f75d-4fe2-9cd6-8c00d08ea4a7",
"block_id": "363ae599-353e-4804-937e-b2ee3cef3da4",
"input_default": {
"name": "Improved Webpage Copy"
},
"metadata": {
"position": {
"x": 1039.5884372540172,
"y": -0.8359099621230968
}
},
"input_links": [
{
"id": "d4334477-3616-454f-a430-614ca27f5b36",
"source_id": "c9924577-70d8-4ccb-9106-6f796df09ef9",
"sink_id": "130ec496-f75d-4fe2-9cd6-8c00d08ea4a7",
"source_name": "response",
"sink_name": "value",
"is_static": false
}
],
"output_links": [],
"graph_id": "0d440799-44ba-4d6c-85b3-b3739f1e1287",
"graph_version": 12,
"webhook_id": null,
"webhook": null
},
{
"id": "cefccd07-fe70-4feb-bf76-46b20aaa5d35",
"block_id": "363ae599-353e-4804-937e-b2ee3cef3da4",
"input_default": {
"name": "Original Page Analysis",
"description": "Analysis of the webpage as it currently stands."
},
"metadata": {
"position": {
"x": 1037.7724103954706,
"y": -606.5934325506903
}
},
"input_links": [
{
"id": "f979ab78-0903-4f19-a7c2-a419d5d81aef",
"source_id": "08612ce2-625b-4c17-accd-3acace7b6477",
"sink_id": "cefccd07-fe70-4feb-bf76-46b20aaa5d35",
"source_name": "response",
"sink_name": "value",
"is_static": false
}
],
"output_links": [],
"graph_id": "0d440799-44ba-4d6c-85b3-b3739f1e1287",
"graph_version": 12,
"webhook_id": null,
"webhook": null
},
{
"id": "375f8bc3-afd9-4025-ad8e-9aeb329af7ce",
"block_id": "c0a8e994-ebf1-4a9c-a4d8-89d09c86741b",
"input_default": {
"name": "Homepage URL",
"value": "https://agpt.co",
"description": "Enter the URL of the homepage you want to improve"
},
"metadata": {
"position": {
"x": -1195.1455674454749,
"y": 0
}
},
"input_links": [],
"output_links": [
{
"id": "cbb12335-fefd-4560-9fff-98675130fbad",
"source_id": "375f8bc3-afd9-4025-ad8e-9aeb329af7ce",
"sink_id": "b40595c6-dba3-4779-a129-cd4f01fff103",
"source_name": "result",
"sink_name": "url",
"is_static": true
}
],
"graph_id": "0d440799-44ba-4d6c-85b3-b3739f1e1287",
"graph_version": 12,
"webhook_id": null,
"webhook": null
},
{
"id": "b40595c6-dba3-4779-a129-cd4f01fff103",
"block_id": "436c3984-57fd-4b85-8e9a-459b356883bd",
"input_default": {
"raw_content": false
},
"metadata": {
"position": {
"x": -631.7330786555249,
"y": 1.9638396496230826
}
},
"input_links": [
{
"id": "cbb12335-fefd-4560-9fff-98675130fbad",
"source_id": "375f8bc3-afd9-4025-ad8e-9aeb329af7ce",
"sink_id": "b40595c6-dba3-4779-a129-cd4f01fff103",
"source_name": "result",
"sink_name": "url",
"is_static": true
}
],
"output_links": [
{
"id": "adfa6113-77b3-4e32-b136-3e694b87553e",
"source_id": "b40595c6-dba3-4779-a129-cd4f01fff103",
"sink_id": "c9924577-70d8-4ccb-9106-6f796df09ef9",
"source_name": "content",
"sink_name": "prompt_values_#_CONTENT",
"is_static": false
},
{
"id": "5d5656fd-4208-4296-bc70-e39cc31caada",
"source_id": "b40595c6-dba3-4779-a129-cd4f01fff103",
"sink_id": "08612ce2-625b-4c17-accd-3acace7b6477",
"source_name": "content",
"sink_name": "prompt_values_#_CONTENT",
"is_static": false
}
],
"graph_id": "0d440799-44ba-4d6c-85b3-b3739f1e1287",
"graph_version": 12,
"webhook_id": null,
"webhook": null
},
{
"id": "c9924577-70d8-4ccb-9106-6f796df09ef9",
"block_id": "1f292d4a-41a4-4977-9684-7c8d560b9f91",
"input_default": {
"model": "gpt-4o",
"prompt": "Current Webpage Content:\n```\n{{CONTENT}}\n```\n\nBased on the following analysis of the webpage content:\n\n```\n{{ANALYSIS}}\n```\n\nRewrite and improve the content to address the identified issues. Focus on:\n1. Enhancing clarity and readability\n2. Optimizing for SEO (suggest and incorporate relevant keywords)\n3. Improving calls-to-action for better conversion rates\n4. Refining the structure and organization\n5. Maintaining brand consistency while improving the overall tone\n\nProvide the improved content in HTML format inside a code-block with \"```\" backticks, preserving the original structure where appropriate. Also, include a brief summary of the changes made and their potential impact.",
"prompt_values": {}
},
"metadata": {
"position": {
"x": 488.37278423303917,
"y": 0
}
},
"input_links": [
{
"id": "adfa6113-77b3-4e32-b136-3e694b87553e",
"source_id": "b40595c6-dba3-4779-a129-cd4f01fff103",
"sink_id": "c9924577-70d8-4ccb-9106-6f796df09ef9",
"source_name": "content",
"sink_name": "prompt_values_#_CONTENT",
"is_static": false
},
{
"id": "6bcca45d-c9d5-439e-ac43-e4a1264d8f57",
"source_id": "08612ce2-625b-4c17-accd-3acace7b6477",
"sink_id": "c9924577-70d8-4ccb-9106-6f796df09ef9",
"source_name": "response",
"sink_name": "prompt_values_#_ANALYSIS",
"is_static": false
}
],
"output_links": [
{
"id": "d4334477-3616-454f-a430-614ca27f5b36",
"source_id": "c9924577-70d8-4ccb-9106-6f796df09ef9",
"sink_id": "130ec496-f75d-4fe2-9cd6-8c00d08ea4a7",
"source_name": "response",
"sink_name": "value",
"is_static": false
}
],
"graph_id": "0d440799-44ba-4d6c-85b3-b3739f1e1287",
"graph_version": 12,
"webhook_id": null,
"webhook": null
},
{
"id": "08612ce2-625b-4c17-accd-3acace7b6477",
"block_id": "1f292d4a-41a4-4977-9684-7c8d560b9f91",
"input_default": {
"model": "gpt-4o",
"prompt": "Analyze the following webpage content and provide a detailed report on its current state, including strengths and weaknesses in terms of clarity, SEO optimization, and potential for conversion:\n\n{{CONTENT}}\n\nInclude observations on:\n1. Overall readability and clarity\n2. Use of keywords and SEO-friendly language\n3. Effectiveness of calls-to-action\n4. Structure and organization of content\n5. Tone and brand consistency",
"prompt_values": {}
},
"metadata": {
"position": {
"x": -72.66206703605442,
"y": -0.58403945075381
}
},
"input_links": [
{
"id": "5d5656fd-4208-4296-bc70-e39cc31caada",
"source_id": "b40595c6-dba3-4779-a129-cd4f01fff103",
"sink_id": "08612ce2-625b-4c17-accd-3acace7b6477",
"source_name": "content",
"sink_name": "prompt_values_#_CONTENT",
"is_static": false
}
],
"output_links": [
{
"id": "f979ab78-0903-4f19-a7c2-a419d5d81aef",
"source_id": "08612ce2-625b-4c17-accd-3acace7b6477",
"sink_id": "cefccd07-fe70-4feb-bf76-46b20aaa5d35",
"source_name": "response",
"sink_name": "value",
"is_static": false
},
{
"id": "6bcca45d-c9d5-439e-ac43-e4a1264d8f57",
"source_id": "08612ce2-625b-4c17-accd-3acace7b6477",
"sink_id": "c9924577-70d8-4ccb-9106-6f796df09ef9",
"source_name": "response",
"sink_name": "prompt_values_#_ANALYSIS",
"is_static": false
}
],
"graph_id": "0d440799-44ba-4d6c-85b3-b3739f1e1287",
"graph_version": 12,
"webhook_id": null,
"webhook": null
}
],
"links": [
{
"id": "adfa6113-77b3-4e32-b136-3e694b87553e",
"source_id": "b40595c6-dba3-4779-a129-cd4f01fff103",
"sink_id": "c9924577-70d8-4ccb-9106-6f796df09ef9",
"source_name": "content",
"sink_name": "prompt_values_#_CONTENT",
"is_static": false
},
{
"id": "d4334477-3616-454f-a430-614ca27f5b36",
"source_id": "c9924577-70d8-4ccb-9106-6f796df09ef9",
"sink_id": "130ec496-f75d-4fe2-9cd6-8c00d08ea4a7",
"source_name": "response",
"sink_name": "value",
"is_static": false
},
{
"id": "5d5656fd-4208-4296-bc70-e39cc31caada",
"source_id": "b40595c6-dba3-4779-a129-cd4f01fff103",
"sink_id": "08612ce2-625b-4c17-accd-3acace7b6477",
"source_name": "content",
"sink_name": "prompt_values_#_CONTENT",
"is_static": false
},
{
"id": "f979ab78-0903-4f19-a7c2-a419d5d81aef",
"source_id": "08612ce2-625b-4c17-accd-3acace7b6477",
"sink_id": "cefccd07-fe70-4feb-bf76-46b20aaa5d35",
"source_name": "response",
"sink_name": "value",
"is_static": false
},
{
"id": "6bcca45d-c9d5-439e-ac43-e4a1264d8f57",
"source_id": "08612ce2-625b-4c17-accd-3acace7b6477",
"sink_id": "c9924577-70d8-4ccb-9106-6f796df09ef9",
"source_name": "response",
"sink_name": "prompt_values_#_ANALYSIS",
"is_static": false
},
{
"id": "cbb12335-fefd-4560-9fff-98675130fbad",
"source_id": "375f8bc3-afd9-4025-ad8e-9aeb329af7ce",
"sink_id": "b40595c6-dba3-4779-a129-cd4f01fff103",
"source_name": "result",
"sink_name": "url",
"is_static": true
}
],
"forked_from_id": null,
"forked_from_version": null,
"sub_graphs": [],
"user_id": "",
"created_at": "2024-12-20T19:47:22.036Z",
"input_schema": {
"type": "object",
"properties": {
"Homepage URL": {
"advanced": false,
"secret": false,
"title": "Homepage URL",
"description": "Enter the URL of the homepage you want to improve",
"default": "https://agpt.co"
}
},
"required": []
},
"output_schema": {
"type": "object",
"properties": {
"Improved Webpage Copy": {
"advanced": false,
"secret": false,
"title": "Improved Webpage Copy"
},
"Original Page Analysis": {
"advanced": false,
"secret": false,
"title": "Original Page Analysis",
"description": "Analysis of the webpage as it currently stands."
}
},
"required": [
"Improved Webpage Copy",
"Original Page Analysis"
]
},
"has_external_trigger": false,
"has_human_in_the_loop": false,
"trigger_setup_info": null,
"credentials_input_schema": {
"properties": {
"jina_api_key_credentials": {
"credentials_provider": [
"jina"
],
"credentials_types": [
"api_key"
],
"properties": {
"id": {
"title": "Id",
"type": "string"
},
"title": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Title"
},
"provider": {
"const": "jina",
"title": "Provider",
"type": "string"
},
"type": {
"const": "api_key",
"title": "Type",
"type": "string"
}
},
"required": [
"id",
"provider",
"type"
],
"title": "CredentialsMetaInput[Literal[<ProviderName.JINA: 'jina'>], Literal['api_key']]",
"type": "object",
"discriminator_values": []
},
"openai_api_key_credentials": {
"credentials_provider": [
"openai"
],
"credentials_types": [
"api_key"
],
"properties": {
"id": {
"title": "Id",
"type": "string"
},
"title": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Title"
},
"provider": {
"const": "openai",
"title": "Provider",
"type": "string"
},
"type": {
"const": "api_key",
"title": "Type",
"type": "string"
}
},
"required": [
"id",
"provider",
"type"
],
"title": "CredentialsMetaInput[Literal[<ProviderName.OPENAI: 'openai'>], Literal['api_key']]",
"type": "object",
"discriminator": "model",
"discriminator_mapping": {
"Llama-3.3-70B-Instruct": "llama_api",
"Llama-3.3-8B-Instruct": "llama_api",
"Llama-4-Maverick-17B-128E-Instruct-FP8": "llama_api",
"Llama-4-Scout-17B-16E-Instruct-FP8": "llama_api",
"Qwen/Qwen2.5-72B-Instruct-Turbo": "aiml_api",
"amazon/nova-lite-v1": "open_router",
"amazon/nova-micro-v1": "open_router",
"amazon/nova-pro-v1": "open_router",
"claude-3-7-sonnet-20250219": "anthropic",
"claude-3-haiku-20240307": "anthropic",
"claude-haiku-4-5-20251001": "anthropic",
"claude-opus-4-1-20250805": "anthropic",
"claude-opus-4-20250514": "anthropic",
"claude-opus-4-5-20251101": "anthropic",
"claude-sonnet-4-20250514": "anthropic",
"claude-sonnet-4-5-20250929": "anthropic",
"cohere/command-r-08-2024": "open_router",
"cohere/command-r-plus-08-2024": "open_router",
"deepseek/deepseek-chat": "open_router",
"deepseek/deepseek-r1-0528": "open_router",
"dolphin-mistral:latest": "ollama",
"google/gemini-2.0-flash-001": "open_router",
"google/gemini-2.0-flash-lite-001": "open_router",
"google/gemini-2.5-flash": "open_router",
"google/gemini-2.5-flash-lite-preview-06-17": "open_router",
"google/gemini-2.5-pro-preview-03-25": "open_router",
"google/gemini-3-pro-preview": "open_router",
"gpt-3.5-turbo": "openai",
"gpt-4-turbo": "openai",
"gpt-4.1-2025-04-14": "openai",
"gpt-4.1-mini-2025-04-14": "openai",
"gpt-4o": "openai",
"gpt-4o-mini": "openai",
"gpt-5-2025-08-07": "openai",
"gpt-5-chat-latest": "openai",
"gpt-5-mini-2025-08-07": "openai",
"gpt-5-nano-2025-08-07": "openai",
"gpt-5.1-2025-11-13": "openai",
"gryphe/mythomax-l2-13b": "open_router",
"llama-3.1-8b-instant": "groq",
"llama-3.3-70b-versatile": "groq",
"llama3": "ollama",
"llama3.1:405b": "ollama",
"llama3.2": "ollama",
"llama3.3": "ollama",
"meta-llama/Llama-3.2-3B-Instruct-Turbo": "aiml_api",
"meta-llama/Llama-3.3-70B-Instruct-Turbo": "aiml_api",
"meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo": "aiml_api",
"meta-llama/llama-4-maverick": "open_router",
"meta-llama/llama-4-scout": "open_router",
"microsoft/wizardlm-2-8x22b": "open_router",
"mistralai/mistral-nemo": "open_router",
"moonshotai/kimi-k2": "open_router",
"nousresearch/hermes-3-llama-3.1-405b": "open_router",
"nousresearch/hermes-3-llama-3.1-70b": "open_router",
"nvidia/llama-3.1-nemotron-70b-instruct": "aiml_api",
"o1": "openai",
"o1-mini": "openai",
"o3-2025-04-16": "openai",
"o3-mini": "openai",
"openai/gpt-oss-120b": "open_router",
"openai/gpt-oss-20b": "open_router",
"perplexity/sonar": "open_router",
"perplexity/sonar-deep-research": "open_router",
"perplexity/sonar-pro": "open_router",
"qwen/qwen3-235b-a22b-thinking-2507": "open_router",
"qwen/qwen3-coder": "open_router",
"v0-1.0-md": "v0",
"v0-1.5-lg": "v0",
"v0-1.5-md": "v0",
"x-ai/grok-4": "open_router",
"x-ai/grok-4-fast": "open_router",
"x-ai/grok-4.1-fast": "open_router",
"x-ai/grok-code-fast-1": "open_router"
},
"discriminator_values": [
"gpt-4o"
]
}
},
"required": [
"jina_api_key_credentials",
"openai_api_key_credentials"
],
"title": "AIWebpageCopyImproverCredentialsInputSchema",
"type": "object"
}
}
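
A minimal sketch (not part of this diff) of how the exported agent graph JSON above hangs together: every entry in "links" should reference node ids that exist in "nodes". The key names come from the JSON itself; the file name and helper are hypothetical.

import json
from pathlib import Path

def check_graph_links(path: Path) -> list[str]:
    graph = json.loads(path.read_text(encoding="utf-8"))
    node_ids = {node["id"] for node in graph["nodes"]}
    problems: list[str] = []
    for link in graph["links"]:
        for end in ("source_id", "sink_id"):
            if link[end] not in node_ids:
                problems.append(f"link {link['id']} has a dangling {end}: {link[end]}")
    return problems

# Example (hypothetical file name):
# print(check_graph_links(Path("ai_webpage_copy_improver.json")) or "all links resolve")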


@@ -0,0 +1,615 @@
{
"id": "4c6b68cb-bb75-4044-b1cb-2cee3fd39b26",
"version": 29,
"is_active": true,
"name": "Email Address Finder",
"description": "Input information of a business and find their email address",
"instructions": null,
"recommended_schedule_cron": null,
"nodes": [
{
"id": "04cad535-9f1a-4876-8b07-af5897d8c282",
"block_id": "c0a8e994-ebf1-4a9c-a4d8-89d09c86741b",
"input_default": {
"name": "Address",
"value": "USA"
},
"metadata": {
"position": {
"x": 1047.9357219838776,
"y": 1067.9123910370954
}
},
"input_links": [],
"output_links": [
{
"id": "aac29f7b-3cd1-4c91-9a2a-72a8301c0957",
"source_id": "04cad535-9f1a-4876-8b07-af5897d8c282",
"sink_id": "28b5ddcc-dc20-41cc-ad21-c54ff459f694",
"source_name": "result",
"sink_name": "values_#_ADDRESS",
"is_static": true
}
],
"graph_id": "4c6b68cb-bb75-4044-b1cb-2cee3fd39b26",
"graph_version": 29,
"webhook_id": null,
"webhook": null
},
{
"id": "a6e7355e-5bf8-4b09-b11c-a5e140389981",
"block_id": "3146e4fe-2cdd-4f29-bd12-0c9d5bb4deb0",
"input_default": {
"group": 1,
"pattern": "<email>(.*?)<\\/email>"
},
"metadata": {
"position": {
"x": 3381.2821481740634,
"y": 246.091098184158
}
},
"input_links": [
{
"id": "9f8188ce-1f3d-46fb-acda-b2a57c0e5da6",
"source_id": "510937b3-0134-4e45-b2ba-05a447bbaf50",
"sink_id": "a6e7355e-5bf8-4b09-b11c-a5e140389981",
"source_name": "response",
"sink_name": "text",
"is_static": false
}
],
"output_links": [
{
"id": "b15b5143-27b7-486e-a166-4095e72e5235",
"source_id": "a6e7355e-5bf8-4b09-b11c-a5e140389981",
"sink_id": "266b7255-11c4-4b88-99e2-85db31a2e865",
"source_name": "negative",
"sink_name": "values_#_Result",
"is_static": false
},
{
"id": "23591872-3c6b-4562-87d3-5b6ade698e48",
"source_id": "a6e7355e-5bf8-4b09-b11c-a5e140389981",
"sink_id": "310c8fab-2ae6-4158-bd48-01dbdc434130",
"source_name": "positive",
"sink_name": "value",
"is_static": false
}
],
"graph_id": "4c6b68cb-bb75-4044-b1cb-2cee3fd39b26",
"graph_version": 29,
"webhook_id": null,
"webhook": null
},
{
"id": "310c8fab-2ae6-4158-bd48-01dbdc434130",
"block_id": "363ae599-353e-4804-937e-b2ee3cef3da4",
"input_default": {
"name": "Email"
},
"metadata": {
"position": {
"x": 4525.4246310882,
"y": 246.36913665010354
}
},
"input_links": [
{
"id": "d87b07ea-dcec-4d38-a644-2c1d741ea3cb",
"source_id": "266b7255-11c4-4b88-99e2-85db31a2e865",
"sink_id": "310c8fab-2ae6-4158-bd48-01dbdc434130",
"source_name": "output",
"sink_name": "value",
"is_static": false
},
{
"id": "23591872-3c6b-4562-87d3-5b6ade698e48",
"source_id": "a6e7355e-5bf8-4b09-b11c-a5e140389981",
"sink_id": "310c8fab-2ae6-4158-bd48-01dbdc434130",
"source_name": "positive",
"sink_name": "value",
"is_static": false
}
],
"output_links": [],
"graph_id": "4c6b68cb-bb75-4044-b1cb-2cee3fd39b26",
"graph_version": 29,
"webhook_id": null,
"webhook": null
},
{
"id": "4a41df99-ffe2-4c12-b528-632979c9c030",
"block_id": "87840993-2053-44b7-8da4-187ad4ee518c",
"input_default": {},
"metadata": {
"position": {
"x": 2182.7499999999995,
"y": 242.00001144409185
}
},
"input_links": [
{
"id": "2e411d3d-79ba-4958-9c1c-b76a45a2e649",
"source_id": "28b5ddcc-dc20-41cc-ad21-c54ff459f694",
"sink_id": "4a41df99-ffe2-4c12-b528-632979c9c030",
"source_name": "output",
"sink_name": "query",
"is_static": false
}
],
"output_links": [
{
"id": "899cc7d8-a96b-4107-b3c6-4c78edcf0c6b",
"source_id": "4a41df99-ffe2-4c12-b528-632979c9c030",
"sink_id": "510937b3-0134-4e45-b2ba-05a447bbaf50",
"source_name": "results",
"sink_name": "prompt_values_#_WEBSITE_CONTENT",
"is_static": false
}
],
"graph_id": "4c6b68cb-bb75-4044-b1cb-2cee3fd39b26",
"graph_version": 29,
"webhook_id": null,
"webhook": null
},
{
"id": "9708a10a-8be0-4c44-abb3-bd0f7c594794",
"block_id": "c0a8e994-ebf1-4a9c-a4d8-89d09c86741b",
"input_default": {
"name": "Business Name",
"value": "Tim Cook"
},
"metadata": {
"position": {
"x": 1049.9704155272595,
"y": 244.49931152418344
}
},
"input_links": [],
"output_links": [
{
"id": "946b522c-365f-4ee0-96f9-28863d9882ea",
"source_id": "9708a10a-8be0-4c44-abb3-bd0f7c594794",
"sink_id": "28b5ddcc-dc20-41cc-ad21-c54ff459f694",
"source_name": "result",
"sink_name": "values_#_NAME",
"is_static": true
},
{
"id": "43e920a7-0bb4-4fae-9a22-91df95c7342a",
"source_id": "9708a10a-8be0-4c44-abb3-bd0f7c594794",
"sink_id": "510937b3-0134-4e45-b2ba-05a447bbaf50",
"source_name": "result",
"sink_name": "prompt_values_#_BUSINESS_NAME",
"is_static": true
}
],
"graph_id": "4c6b68cb-bb75-4044-b1cb-2cee3fd39b26",
"graph_version": 29,
"webhook_id": null,
"webhook": null
},
{
"id": "28b5ddcc-dc20-41cc-ad21-c54ff459f694",
"block_id": "db7d8f02-2f44-4c55-ab7a-eae0941f0c30",
"input_default": {
"format": "Email Address of {{NAME}}, {{ADDRESS}}",
"values": {}
},
"metadata": {
"position": {
"x": 1625.25,
"y": 243.25001144409185
}
},
"input_links": [
{
"id": "946b522c-365f-4ee0-96f9-28863d9882ea",
"source_id": "9708a10a-8be0-4c44-abb3-bd0f7c594794",
"sink_id": "28b5ddcc-dc20-41cc-ad21-c54ff459f694",
"source_name": "result",
"sink_name": "values_#_NAME",
"is_static": true
},
{
"id": "aac29f7b-3cd1-4c91-9a2a-72a8301c0957",
"source_id": "04cad535-9f1a-4876-8b07-af5897d8c282",
"sink_id": "28b5ddcc-dc20-41cc-ad21-c54ff459f694",
"source_name": "result",
"sink_name": "values_#_ADDRESS",
"is_static": true
}
],
"output_links": [
{
"id": "2e411d3d-79ba-4958-9c1c-b76a45a2e649",
"source_id": "28b5ddcc-dc20-41cc-ad21-c54ff459f694",
"sink_id": "4a41df99-ffe2-4c12-b528-632979c9c030",
"source_name": "output",
"sink_name": "query",
"is_static": false
}
],
"graph_id": "4c6b68cb-bb75-4044-b1cb-2cee3fd39b26",
"graph_version": 29,
"webhook_id": null,
"webhook": null
},
{
"id": "266b7255-11c4-4b88-99e2-85db31a2e865",
"block_id": "db7d8f02-2f44-4c55-ab7a-eae0941f0c30",
"input_default": {
"format": "Failed to find email. \nResult:\n{{RESULT}}",
"values": {}
},
"metadata": {
"position": {
"x": 3949.7493830805934,
"y": 705.209819698647
}
},
"input_links": [
{
"id": "b15b5143-27b7-486e-a166-4095e72e5235",
"source_id": "a6e7355e-5bf8-4b09-b11c-a5e140389981",
"sink_id": "266b7255-11c4-4b88-99e2-85db31a2e865",
"source_name": "negative",
"sink_name": "values_#_Result",
"is_static": false
}
],
"output_links": [
{
"id": "d87b07ea-dcec-4d38-a644-2c1d741ea3cb",
"source_id": "266b7255-11c4-4b88-99e2-85db31a2e865",
"sink_id": "310c8fab-2ae6-4158-bd48-01dbdc434130",
"source_name": "output",
"sink_name": "value",
"is_static": false
}
],
"graph_id": "4c6b68cb-bb75-4044-b1cb-2cee3fd39b26",
"graph_version": 29,
"webhook_id": null,
"webhook": null
},
{
"id": "510937b3-0134-4e45-b2ba-05a447bbaf50",
"block_id": "1f292d4a-41a4-4977-9684-7c8d560b9f91",
"input_default": {
"model": "claude-sonnet-4-5-20250929",
"prompt": "<business_website>\n{{WEBSITE_CONTENT}}\n</business_website>\n\nExtract the Contact Email of {{BUSINESS_NAME}}.\n\nIf no email that can be used to contact {{BUSINESS_NAME}} is present, output `N/A`.\nDo not share any emails other than the email for this specific entity.\n\nIf multiple present pick the likely best one.\n\nRespond with the email (or N/A) inside <email></email> tags.\n\nExample Response:\n\n<thoughts_or_comments>\nThere were many emails present, but luckily one was for {{BUSINESS_NAME}} which I have included below.\n</thoughts_or_comments>\n<email>\nexample@email.com\n</email>",
"prompt_values": {}
},
"metadata": {
"position": {
"x": 2774.879259081777,
"y": 243.3102035752969
}
},
"input_links": [
{
"id": "43e920a7-0bb4-4fae-9a22-91df95c7342a",
"source_id": "9708a10a-8be0-4c44-abb3-bd0f7c594794",
"sink_id": "510937b3-0134-4e45-b2ba-05a447bbaf50",
"source_name": "result",
"sink_name": "prompt_values_#_BUSINESS_NAME",
"is_static": true
},
{
"id": "899cc7d8-a96b-4107-b3c6-4c78edcf0c6b",
"source_id": "4a41df99-ffe2-4c12-b528-632979c9c030",
"sink_id": "510937b3-0134-4e45-b2ba-05a447bbaf50",
"source_name": "results",
"sink_name": "prompt_values_#_WEBSITE_CONTENT",
"is_static": false
}
],
"output_links": [
{
"id": "9f8188ce-1f3d-46fb-acda-b2a57c0e5da6",
"source_id": "510937b3-0134-4e45-b2ba-05a447bbaf50",
"sink_id": "a6e7355e-5bf8-4b09-b11c-a5e140389981",
"source_name": "response",
"sink_name": "text",
"is_static": false
}
],
"graph_id": "4c6b68cb-bb75-4044-b1cb-2cee3fd39b26",
"graph_version": 29,
"webhook_id": null,
"webhook": null
}
],
"links": [
{
"id": "9f8188ce-1f3d-46fb-acda-b2a57c0e5da6",
"source_id": "510937b3-0134-4e45-b2ba-05a447bbaf50",
"sink_id": "a6e7355e-5bf8-4b09-b11c-a5e140389981",
"source_name": "response",
"sink_name": "text",
"is_static": false
},
{
"id": "b15b5143-27b7-486e-a166-4095e72e5235",
"source_id": "a6e7355e-5bf8-4b09-b11c-a5e140389981",
"sink_id": "266b7255-11c4-4b88-99e2-85db31a2e865",
"source_name": "negative",
"sink_name": "values_#_Result",
"is_static": false
},
{
"id": "d87b07ea-dcec-4d38-a644-2c1d741ea3cb",
"source_id": "266b7255-11c4-4b88-99e2-85db31a2e865",
"sink_id": "310c8fab-2ae6-4158-bd48-01dbdc434130",
"source_name": "output",
"sink_name": "value",
"is_static": false
},
{
"id": "946b522c-365f-4ee0-96f9-28863d9882ea",
"source_id": "9708a10a-8be0-4c44-abb3-bd0f7c594794",
"sink_id": "28b5ddcc-dc20-41cc-ad21-c54ff459f694",
"source_name": "result",
"sink_name": "values_#_NAME",
"is_static": true
},
{
"id": "23591872-3c6b-4562-87d3-5b6ade698e48",
"source_id": "a6e7355e-5bf8-4b09-b11c-a5e140389981",
"sink_id": "310c8fab-2ae6-4158-bd48-01dbdc434130",
"source_name": "positive",
"sink_name": "value",
"is_static": false
},
{
"id": "43e920a7-0bb4-4fae-9a22-91df95c7342a",
"source_id": "9708a10a-8be0-4c44-abb3-bd0f7c594794",
"sink_id": "510937b3-0134-4e45-b2ba-05a447bbaf50",
"source_name": "result",
"sink_name": "prompt_values_#_BUSINESS_NAME",
"is_static": true
},
{
"id": "2e411d3d-79ba-4958-9c1c-b76a45a2e649",
"source_id": "28b5ddcc-dc20-41cc-ad21-c54ff459f694",
"sink_id": "4a41df99-ffe2-4c12-b528-632979c9c030",
"source_name": "output",
"sink_name": "query",
"is_static": false
},
{
"id": "aac29f7b-3cd1-4c91-9a2a-72a8301c0957",
"source_id": "04cad535-9f1a-4876-8b07-af5897d8c282",
"sink_id": "28b5ddcc-dc20-41cc-ad21-c54ff459f694",
"source_name": "result",
"sink_name": "values_#_ADDRESS",
"is_static": true
},
{
"id": "899cc7d8-a96b-4107-b3c6-4c78edcf0c6b",
"source_id": "4a41df99-ffe2-4c12-b528-632979c9c030",
"sink_id": "510937b3-0134-4e45-b2ba-05a447bbaf50",
"source_name": "results",
"sink_name": "prompt_values_#_WEBSITE_CONTENT",
"is_static": false
}
],
"forked_from_id": null,
"forked_from_version": null,
"sub_graphs": [],
"user_id": "",
"created_at": "2025-01-03T00:46:30.244Z",
"input_schema": {
"type": "object",
"properties": {
"Address": {
"advanced": false,
"secret": false,
"title": "Address",
"default": "USA"
},
"Business Name": {
"advanced": false,
"secret": false,
"title": "Business Name",
"default": "Tim Cook"
}
},
"required": []
},
"output_schema": {
"type": "object",
"properties": {
"Email": {
"advanced": false,
"secret": false,
"title": "Email"
}
},
"required": [
"Email"
]
},
"has_external_trigger": false,
"has_human_in_the_loop": false,
"trigger_setup_info": null,
"credentials_input_schema": {
"properties": {
"jina_api_key_credentials": {
"credentials_provider": [
"jina"
],
"credentials_types": [
"api_key"
],
"properties": {
"id": {
"title": "Id",
"type": "string"
},
"title": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Title"
},
"provider": {
"const": "jina",
"title": "Provider",
"type": "string"
},
"type": {
"const": "api_key",
"title": "Type",
"type": "string"
}
},
"required": [
"id",
"provider",
"type"
],
"title": "CredentialsMetaInput[Literal[<ProviderName.JINA: 'jina'>], Literal['api_key']]",
"type": "object",
"discriminator_values": []
},
"anthropic_api_key_credentials": {
"credentials_provider": [
"anthropic"
],
"credentials_types": [
"api_key"
],
"properties": {
"id": {
"title": "Id",
"type": "string"
},
"title": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"title": "Title"
},
"provider": {
"const": "anthropic",
"title": "Provider",
"type": "string"
},
"type": {
"const": "api_key",
"title": "Type",
"type": "string"
}
},
"required": [
"id",
"provider",
"type"
],
"title": "CredentialsMetaInput[Literal[<ProviderName.ANTHROPIC: 'anthropic'>], Literal['api_key']]",
"type": "object",
"discriminator": "model",
"discriminator_mapping": {
"Llama-3.3-70B-Instruct": "llama_api",
"Llama-3.3-8B-Instruct": "llama_api",
"Llama-4-Maverick-17B-128E-Instruct-FP8": "llama_api",
"Llama-4-Scout-17B-16E-Instruct-FP8": "llama_api",
"Qwen/Qwen2.5-72B-Instruct-Turbo": "aiml_api",
"amazon/nova-lite-v1": "open_router",
"amazon/nova-micro-v1": "open_router",
"amazon/nova-pro-v1": "open_router",
"claude-3-7-sonnet-20250219": "anthropic",
"claude-3-haiku-20240307": "anthropic",
"claude-haiku-4-5-20251001": "anthropic",
"claude-opus-4-1-20250805": "anthropic",
"claude-opus-4-20250514": "anthropic",
"claude-opus-4-5-20251101": "anthropic",
"claude-sonnet-4-20250514": "anthropic",
"claude-sonnet-4-5-20250929": "anthropic",
"cohere/command-r-08-2024": "open_router",
"cohere/command-r-plus-08-2024": "open_router",
"deepseek/deepseek-chat": "open_router",
"deepseek/deepseek-r1-0528": "open_router",
"dolphin-mistral:latest": "ollama",
"google/gemini-2.0-flash-001": "open_router",
"google/gemini-2.0-flash-lite-001": "open_router",
"google/gemini-2.5-flash": "open_router",
"google/gemini-2.5-flash-lite-preview-06-17": "open_router",
"google/gemini-2.5-pro-preview-03-25": "open_router",
"google/gemini-3-pro-preview": "open_router",
"gpt-3.5-turbo": "openai",
"gpt-4-turbo": "openai",
"gpt-4.1-2025-04-14": "openai",
"gpt-4.1-mini-2025-04-14": "openai",
"gpt-4o": "openai",
"gpt-4o-mini": "openai",
"gpt-5-2025-08-07": "openai",
"gpt-5-chat-latest": "openai",
"gpt-5-mini-2025-08-07": "openai",
"gpt-5-nano-2025-08-07": "openai",
"gpt-5.1-2025-11-13": "openai",
"gryphe/mythomax-l2-13b": "open_router",
"llama-3.1-8b-instant": "groq",
"llama-3.3-70b-versatile": "groq",
"llama3": "ollama",
"llama3.1:405b": "ollama",
"llama3.2": "ollama",
"llama3.3": "ollama",
"meta-llama/Llama-3.2-3B-Instruct-Turbo": "aiml_api",
"meta-llama/Llama-3.3-70B-Instruct-Turbo": "aiml_api",
"meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo": "aiml_api",
"meta-llama/llama-4-maverick": "open_router",
"meta-llama/llama-4-scout": "open_router",
"microsoft/wizardlm-2-8x22b": "open_router",
"mistralai/mistral-nemo": "open_router",
"moonshotai/kimi-k2": "open_router",
"nousresearch/hermes-3-llama-3.1-405b": "open_router",
"nousresearch/hermes-3-llama-3.1-70b": "open_router",
"nvidia/llama-3.1-nemotron-70b-instruct": "aiml_api",
"o1": "openai",
"o1-mini": "openai",
"o3-2025-04-16": "openai",
"o3-mini": "openai",
"openai/gpt-oss-120b": "open_router",
"openai/gpt-oss-20b": "open_router",
"perplexity/sonar": "open_router",
"perplexity/sonar-deep-research": "open_router",
"perplexity/sonar-pro": "open_router",
"qwen/qwen3-235b-a22b-thinking-2507": "open_router",
"qwen/qwen3-coder": "open_router",
"v0-1.0-md": "v0",
"v0-1.5-lg": "v0",
"v0-1.5-md": "v0",
"x-ai/grok-4": "open_router",
"x-ai/grok-4-fast": "open_router",
"x-ai/grok-4.1-fast": "open_router",
"x-ai/grok-code-fast-1": "open_router"
},
"discriminator_values": [
"claude-sonnet-4-5-20250929"
]
}
},
"required": [
"jina_api_key_credentials",
"anthropic_api_key_credentials"
],
"title": "EmailAddressFinderCredentialsInputSchema",
"type": "object"
}
}


@@ -11,7 +11,7 @@ from backend.data.block import (
BlockType,
get_block,
)
from backend.data.execution import ExecutionStatus, NodesInputMasks
from backend.data.execution import ExecutionContext, ExecutionStatus, NodesInputMasks
from backend.data.model import NodeExecutionStats, SchemaField
from backend.util.json import validate_with_jsonschema
from backend.util.retry import func_retry
@@ -72,9 +72,9 @@ class AgentExecutorBlock(Block):
input_data: Input,
*,
graph_exec_id: str,
execution_context: ExecutionContext,
**kwargs,
) -> BlockOutput:
from backend.executor import utils as execution_utils
graph_exec = await execution_utils.add_graph_execution(
@@ -83,8 +83,9 @@ class AgentExecutorBlock(Block):
user_id=input_data.user_id,
inputs=input_data.inputs,
nodes_input_masks=input_data.nodes_input_masks,
parent_graph_exec_id=graph_exec_id,
is_sub_graph=True, # AgentExecutorBlock executions are always sub-graphs
execution_context=execution_context.model_copy(
update={"parent_execution_id": graph_exec_id},
),
)
logger = execution_utils.LogMetadata(
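
A small illustration of the ExecutionContext propagation this hunk introduces, assuming a simplified stand-in model (the real ExecutionContext lives in backend.data.execution): model_copy(update=...) clones the parent context and overrides only parent_execution_id.

from pydantic import BaseModel

class ExecutionContext(BaseModel):
    user_id: str = ""                        # assumed field, for illustration only
    parent_execution_id: str | None = None

parent_ctx = ExecutionContext(user_id="user-123")
# The sub-graph execution keeps everything from the parent context but records
# which graph execution spawned it.
child_ctx = parent_ctx.model_copy(update={"parent_execution_id": "graph-exec-abc"})
assert child_ctx.user_id == "user-123"
assert child_ctx.parent_execution_id == "graph-exec-abc"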


@@ -1,3 +1,4 @@
import asyncio
from enum import Enum
from typing import Literal
@@ -19,7 +20,7 @@ from backend.data.model import (
SchemaField,
)
from backend.integrations.providers import ProviderName
from backend.util.file import MediaFileType
from backend.util.file import MediaFileType, store_media_file
class GeminiImageModel(str, Enum):
@@ -27,6 +28,20 @@ class GeminiImageModel(str, Enum):
NANO_BANANA_PRO = "google/nano-banana-pro"
class AspectRatio(str, Enum):
MATCH_INPUT_IMAGE = "match_input_image"
ASPECT_1_1 = "1:1"
ASPECT_2_3 = "2:3"
ASPECT_3_2 = "3:2"
ASPECT_3_4 = "3:4"
ASPECT_4_3 = "4:3"
ASPECT_4_5 = "4:5"
ASPECT_5_4 = "5:4"
ASPECT_9_16 = "9:16"
ASPECT_16_9 = "16:9"
ASPECT_21_9 = "21:9"
class OutputFormat(str, Enum):
JPG = "jpg"
PNG = "png"
@@ -69,6 +84,11 @@ class AIImageCustomizerBlock(Block):
default=[],
title="Input Images",
)
aspect_ratio: AspectRatio = SchemaField(
description="Aspect ratio of the generated image",
default=AspectRatio.MATCH_INPUT_IMAGE,
title="Aspect Ratio",
)
output_format: OutputFormat = SchemaField(
description="Format of the output image",
default=OutputFormat.PNG,
@@ -92,6 +112,7 @@ class AIImageCustomizerBlock(Block):
"prompt": "Make the scene more vibrant and colorful",
"model": GeminiImageModel.NANO_BANANA,
"images": [],
"aspect_ratio": AspectRatio.MATCH_INPUT_IMAGE,
"output_format": OutputFormat.JPG,
"credentials": TEST_CREDENTIALS_INPUT,
},
@@ -116,11 +137,25 @@ class AIImageCustomizerBlock(Block):
**kwargs,
) -> BlockOutput:
try:
# Convert local file paths to Data URIs (base64) so Replicate can access them
processed_images = await asyncio.gather(
*(
store_media_file(
graph_exec_id=graph_exec_id,
file=img,
user_id=user_id,
return_content=True,
)
for img in input_data.images
)
)
result = await self.run_model(
api_key=credentials.api_key,
model_name=input_data.model.value,
prompt=input_data.prompt,
images=input_data.images,
images=processed_images,
aspect_ratio=input_data.aspect_ratio.value,
output_format=input_data.output_format.value,
)
yield "image_url", result
@@ -133,12 +168,14 @@ class AIImageCustomizerBlock(Block):
model_name: str,
prompt: str,
images: list[MediaFileType],
aspect_ratio: str,
output_format: str,
) -> MediaFileType:
client = ReplicateClient(api_token=api_key.get_secret_value())
input_params: dict = {
"prompt": prompt,
"aspect_ratio": aspect_ratio,
"output_format": output_format,
}
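
A sketch of the fan-out conversion added above, assuming store_media_file keeps the signature shown in the hunk: each local image reference is converted to inline content (a base64 data URI) concurrently, in input order, before the list is handed to Replicate.

import asyncio
from backend.util.file import MediaFileType, store_media_file

async def inline_images(
    images: list[MediaFileType], *, graph_exec_id: str, user_id: str
) -> list[MediaFileType]:
    # asyncio.gather preserves order, so the result lines up with the input list.
    return list(
        await asyncio.gather(
            *(
                store_media_file(
                    graph_exec_id=graph_exec_id,
                    file=img,
                    user_id=user_id,
                    return_content=True,  # return inline content instead of a stored path
                )
                for img in images
            )
        )
    )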


@@ -20,6 +20,7 @@ from backend.data.model import (
SchemaField,
)
from backend.integrations.providers import ProviderName
from backend.util.exceptions import BlockExecutionError
from backend.util.request import Requests
TEST_CREDENTIALS = APIKeyCredentials(
@@ -246,7 +247,11 @@ class AIShortformVideoCreatorBlock(Block):
await asyncio.sleep(10)
logger.error("Video creation timed out")
raise TimeoutError("Video creation timed out")
raise BlockExecutionError(
message="Video creation timed out",
block_name=self.name,
block_id=self.id,
)
def __init__(self):
super().__init__(
@@ -422,7 +427,11 @@ class AIAdMakerVideoCreatorBlock(Block):
await asyncio.sleep(10)
logger.error("Video creation timed out")
raise TimeoutError("Video creation timed out")
raise BlockExecutionError(
message="Video creation timed out",
block_name=self.name,
block_id=self.id,
)
def __init__(self):
super().__init__(
@@ -599,7 +608,11 @@ class AIScreenshotToVideoAdBlock(Block):
await asyncio.sleep(10)
logger.error("Video creation timed out")
raise TimeoutError("Video creation timed out")
raise BlockExecutionError(
message="Video creation timed out",
block_name=self.name,
block_id=self.id,
)
def __init__(self):
super().__init__(
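
The hunks above swap a bare TimeoutError for BlockExecutionError so the failure carries block metadata. A hedged sketch of the polling pattern, with the status check and attempt count as placeholders rather than the blocks' real values:

import asyncio
from backend.util.exceptions import BlockExecutionError

async def wait_for_video(block, check_status, max_attempts: int = 30) -> str:
    for _ in range(max_attempts):
        status = await check_status()      # placeholder: poll the rendering API
        if status == "done":
            return status
        await asyncio.sleep(10)
    raise BlockExecutionError(
        message="Video creation timed out",
        block_name=block.name,
        block_id=block.id,
    )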


@@ -1371,7 +1371,7 @@ async def create_base(
if tables:
params["tables"] = tables
print(params)
logger.debug(f"Creating Airtable base with params: {params}")
response = await Requests().post(
"https://api.airtable.com/v0/meta/bases",


@@ -106,7 +106,10 @@ class ConditionBlock(Block):
ComparisonOperator.LESS_THAN_OR_EQUAL: lambda a, b: a <= b,
}
result = comparison_funcs[operator](value1, value2)
try:
result = comparison_funcs[operator](value1, value2)
except Exception as e:
raise ValueError(f"Comparison failed: {e}") from e
yield "result", result


@@ -1,8 +1,9 @@
import base64
import io
import mimetypes
from enum import Enum
from pathlib import Path
from typing import Any
from typing import Any, Literal, cast
import discord
from pydantic import SecretStr
@@ -33,6 +34,19 @@ TEST_CREDENTIALS = TEST_BOT_CREDENTIALS
TEST_CREDENTIALS_INPUT = TEST_BOT_CREDENTIALS_INPUT
class ThreadArchiveDuration(str, Enum):
"""Discord thread auto-archive duration options"""
ONE_HOUR = "60"
ONE_DAY = "1440"
THREE_DAYS = "4320"
ONE_WEEK = "10080"
def to_minutes(self) -> int:
"""Convert the duration string to minutes for Discord API"""
return int(self.value)
class ReadDiscordMessagesBlock(Block):
class Input(BlockSchemaInput):
credentials: DiscordCredentials = DiscordCredentialsField()
@@ -1166,3 +1180,211 @@ class DiscordChannelInfoBlock(Block):
raise ValueError(f"Login error occurred: {login_err}")
except Exception as e:
raise ValueError(f"An error occurred: {e}")
class CreateDiscordThreadBlock(Block):
class Input(BlockSchemaInput):
credentials: DiscordCredentials = DiscordCredentialsField()
channel_name: str = SchemaField(
description="Channel ID or channel name to create the thread in"
)
server_name: str = SchemaField(
description="Server name (only needed if using channel name)",
advanced=True,
default="",
)
thread_name: str = SchemaField(description="The name of the thread to create")
is_private: bool = SchemaField(
description="Whether to create a private thread (requires Boost Level 2+) or public thread",
default=False,
)
auto_archive_duration: ThreadArchiveDuration = SchemaField(
description="Duration before the thread is automatically archived",
advanced=True,
default=ThreadArchiveDuration.ONE_WEEK,
)
message_content: str = SchemaField(
description="Optional initial message to send in the thread",
advanced=True,
default="",
)
class Output(BlockSchemaOutput):
status: str = SchemaField(description="Operation status")
thread_id: str = SchemaField(description="ID of the created thread")
thread_name: str = SchemaField(description="Name of the created thread")
def __init__(self):
super().__init__(
id="e8f3c9a2-7b5d-4f1e-9c6a-3d8e2b4f7a1c",
input_schema=CreateDiscordThreadBlock.Input,
output_schema=CreateDiscordThreadBlock.Output,
description="Creates a new thread in a Discord channel.",
categories={BlockCategory.SOCIAL},
test_input={
"channel_name": "general",
"thread_name": "Test Thread",
"is_private": False,
"auto_archive_duration": ThreadArchiveDuration.ONE_HOUR,
"credentials": TEST_CREDENTIALS_INPUT,
},
test_output=[
("status", "Thread created successfully"),
("thread_id", "123456789012345678"),
("thread_name", "Test Thread"),
],
test_mock={
"create_thread": lambda *args, **kwargs: {
"status": "Thread created successfully",
"thread_id": "123456789012345678",
"thread_name": "Test Thread",
}
},
test_credentials=TEST_CREDENTIALS,
)
async def create_thread(
self,
token: str,
channel_name: str,
server_name: str | None,
thread_name: str,
is_private: bool,
auto_archive_duration: ThreadArchiveDuration,
message_content: str,
) -> dict:
intents = discord.Intents.default()
intents.guilds = True
intents.message_content = True # Required for sending messages in threads
client = discord.Client(intents=intents)
result = {}
@client.event
async def on_ready():
channel = None
# Try to parse as channel ID first
try:
channel_id = int(channel_name)
try:
channel = await client.fetch_channel(channel_id)
except discord.errors.NotFound:
result["status"] = f"Channel with ID {channel_id} not found"
await client.close()
return
except discord.errors.Forbidden:
result["status"] = (
f"Bot does not have permission to view channel {channel_id}"
)
await client.close()
return
except ValueError:
# Not an ID, treat as channel name
# Collect all matching channels to detect duplicates
matching_channels = []
for guild in client.guilds:
# Skip guilds if server_name is provided and doesn't match
if (
server_name
and server_name.strip()
and guild.name != server_name
):
continue
for ch in guild.text_channels:
if ch.name == channel_name:
matching_channels.append(ch)
if not matching_channels:
result["status"] = f"Channel not found: {channel_name}"
await client.close()
return
elif len(matching_channels) > 1:
result["status"] = (
f"Multiple channels named '{channel_name}' found. "
"Please specify server_name to disambiguate."
)
await client.close()
return
else:
channel = matching_channels[0]
if not channel:
result["status"] = "Failed to resolve channel"
await client.close()
return
# Type check - ensure it's a text channel that can create threads
if not hasattr(channel, "create_thread"):
result["status"] = (
f"Channel {channel_name} cannot create threads (not a text channel)"
)
await client.close()
return
# After the hasattr check, we know channel is a TextChannel
channel = cast(discord.TextChannel, channel)
try:
# Create the thread using discord.py 2.0+ API
thread_type = (
discord.ChannelType.private_thread
if is_private
else discord.ChannelType.public_thread
)
# Cast to the specific Literal type that discord.py expects
duration_minutes = cast(
Literal[60, 1440, 4320, 10080], auto_archive_duration.to_minutes()
)
# The 'type' parameter exists in discord.py 2.0+ but isn't in type stubs yet
# pyright: ignore[reportCallIssue]
thread = await channel.create_thread(
name=thread_name,
type=thread_type,
auto_archive_duration=duration_minutes,
)
# Send initial message if provided
if message_content:
await thread.send(message_content)
result["status"] = "Thread created successfully"
result["thread_id"] = str(thread.id)
result["thread_name"] = thread.name
except discord.errors.Forbidden as e:
result["status"] = (
f"Bot does not have permission to create threads in this channel. {str(e)}"
)
except Exception as e:
result["status"] = f"Error creating thread: {str(e)}"
finally:
await client.close()
await client.start(token)
return result
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
try:
result = await self.create_thread(
token=credentials.api_key.get_secret_value(),
channel_name=input_data.channel_name,
server_name=input_data.server_name or None,
thread_name=input_data.thread_name,
is_private=input_data.is_private,
auto_archive_duration=input_data.auto_archive_duration,
message_content=input_data.message_content,
)
yield "status", result.get("status", "Unknown error")
if "thread_id" in result:
yield "thread_id", result["thread_id"]
if "thread_name" in result:
yield "thread_name", result["thread_name"]
except discord.errors.LoginFailure as login_err:
raise ValueError(f"Login error occurred: {login_err}")


@@ -319,7 +319,7 @@ class CostDollars(BaseModel):
# Helper functions for payload processing
def process_text_field(
text: Union[bool, TextEnabled, TextDisabled, TextAdvanced, None]
text: Union[bool, TextEnabled, TextDisabled, TextAdvanced, None],
) -> Optional[Union[bool, Dict[str, Any]]]:
"""Process text field for API payload."""
if text is None:
@@ -400,7 +400,7 @@ def process_contents_settings(contents: Optional[ContentSettings]) -> Dict[str,
def process_context_field(
context: Union[bool, dict, ContextEnabled, ContextDisabled, ContextAdvanced, None]
context: Union[bool, dict, ContextEnabled, ContextDisabled, ContextAdvanced, None],
) -> Optional[Union[bool, Dict[str, int]]]:
"""Process context field for API payload."""
if context is None:


@@ -15,6 +15,7 @@ from backend.sdk import (
SchemaField,
cost,
)
from backend.util.exceptions import BlockExecutionError
from ._config import firecrawl
@@ -59,11 +60,18 @@ class FirecrawlExtractBlock(Block):
) -> BlockOutput:
app = FirecrawlApp(api_key=credentials.api_key.get_secret_value())
extract_result = app.extract(
urls=input_data.urls,
prompt=input_data.prompt,
schema=input_data.output_schema,
enable_web_search=input_data.enable_web_search,
)
try:
extract_result = app.extract(
urls=input_data.urls,
prompt=input_data.prompt,
schema=input_data.output_schema,
enable_web_search=input_data.enable_web_search,
)
except Exception as e:
raise BlockExecutionError(
message=f"Extract failed: {e}",
block_name=self.name,
block_id=self.id,
) from e
yield "data", extract_result.data


@@ -19,6 +19,7 @@ from backend.data.model import (
SchemaField,
)
from backend.integrations.providers import ProviderName
from backend.util.exceptions import ModerationError
from backend.util.file import MediaFileType, store_media_file
TEST_CREDENTIALS = APIKeyCredentials(
@@ -153,6 +154,8 @@ class AIImageEditorBlock(Block):
),
aspect_ratio=input_data.aspect_ratio.value,
seed=input_data.seed,
user_id=user_id,
graph_exec_id=graph_exec_id,
)
yield "output_image", result
@@ -164,6 +167,8 @@ class AIImageEditorBlock(Block):
input_image_b64: Optional[str],
aspect_ratio: str,
seed: Optional[int],
user_id: str,
graph_exec_id: str,
) -> MediaFileType:
client = ReplicateClient(api_token=api_key.get_secret_value())
input_params = {
@@ -173,11 +178,21 @@ class AIImageEditorBlock(Block):
**({"seed": seed} if seed is not None else {}),
}
output: FileOutput | list[FileOutput] = await client.async_run( # type: ignore
model_name,
input=input_params,
wait=False,
)
try:
output: FileOutput | list[FileOutput] = await client.async_run( # type: ignore
model_name,
input=input_params,
wait=False,
)
except Exception as e:
if "flagged as sensitive" in str(e).lower():
raise ModerationError(
message="Content was flagged as sensitive by the model provider",
user_id=user_id,
graph_exec_id=graph_exec_id,
moderation_type="model_provider",
)
raise ValueError(f"Model execution failed: {e}") from e
if isinstance(output, list) and output:
output = output[0]
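
A sketch distilling the moderation-aware handling added above; the keyword arguments mirror the hunk, while run_replicate is a placeholder callable standing in for the actual Replicate invocation.

from backend.util.exceptions import ModerationError

async def run_with_moderation_check(run_replicate, *, user_id: str, graph_exec_id: str):
    try:
        return await run_replicate()               # placeholder for the model call
    except Exception as e:
        if "flagged as sensitive" in str(e).lower():
            raise ModerationError(
                message="Content was flagged as sensitive by the model provider",
                user_id=user_id,
                graph_exec_id=graph_exec_id,
                moderation_type="model_provider",
            )
        raise ValueError(f"Model execution failed: {e}") from e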


@@ -0,0 +1,108 @@
{
"action": "created",
"discussion": {
"repository_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT",
"category": {
"id": 12345678,
"node_id": "DIC_kwDOJKSTjM4CXXXX",
"repository_id": 614765452,
"emoji": ":pray:",
"name": "Q&A",
"description": "Ask the community for help",
"created_at": "2023-03-16T09:21:07Z",
"updated_at": "2023-03-16T09:21:07Z",
"slug": "q-a",
"is_answerable": true
},
"answer_html_url": null,
"answer_chosen_at": null,
"answer_chosen_by": null,
"html_url": "https://github.com/Significant-Gravitas/AutoGPT/discussions/9999",
"id": 5000000001,
"node_id": "D_kwDOJKSTjM4AYYYY",
"number": 9999,
"title": "How do I configure custom blocks?",
"user": {
"login": "curious-user",
"id": 22222222,
"node_id": "MDQ6VXNlcjIyMjIyMjIy",
"avatar_url": "https://avatars.githubusercontent.com/u/22222222?v=4",
"url": "https://api.github.com/users/curious-user",
"html_url": "https://github.com/curious-user",
"type": "User",
"site_admin": false
},
"state": "open",
"state_reason": null,
"locked": false,
"comments": 0,
"created_at": "2024-12-01T17:00:00Z",
"updated_at": "2024-12-01T17:00:00Z",
"author_association": "NONE",
"active_lock_reason": null,
"body": "## Question\n\nI'm trying to create a custom block for my specific use case. I've read the documentation but I'm not sure how to:\n\n1. Define the input/output schema\n2. Handle authentication\n3. Test my block locally\n\nCan someone point me to examples or provide guidance?\n\n## Environment\n\n- AutoGPT Platform version: latest\n- Python: 3.11",
"reactions": {
"url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/discussions/9999/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
},
"timeline_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/discussions/9999/timeline"
},
"repository": {
"id": 614765452,
"node_id": "R_kgDOJKSTjA",
"name": "AutoGPT",
"full_name": "Significant-Gravitas/AutoGPT",
"private": false,
"owner": {
"login": "Significant-Gravitas",
"id": 130738209,
"node_id": "O_kgDOB8roIQ",
"avatar_url": "https://avatars.githubusercontent.com/u/130738209?v=4",
"url": "https://api.github.com/users/Significant-Gravitas",
"html_url": "https://github.com/Significant-Gravitas",
"type": "Organization",
"site_admin": false
},
"html_url": "https://github.com/Significant-Gravitas/AutoGPT",
"description": "AutoGPT is the vision of accessible AI for everyone, to use and to build on.",
"fork": false,
"url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT",
"created_at": "2023-03-16T09:21:07Z",
"updated_at": "2024-12-01T17:00:00Z",
"pushed_at": "2024-12-01T12:00:00Z",
"stargazers_count": 170000,
"watchers_count": 170000,
"language": "Python",
"has_discussions": true,
"forks_count": 45000,
"visibility": "public",
"default_branch": "master"
},
"organization": {
"login": "Significant-Gravitas",
"id": 130738209,
"node_id": "O_kgDOB8roIQ",
"url": "https://api.github.com/orgs/Significant-Gravitas",
"avatar_url": "https://avatars.githubusercontent.com/u/130738209?v=4",
"description": ""
},
"sender": {
"login": "curious-user",
"id": 22222222,
"node_id": "MDQ6VXNlcjIyMjIyMjIy",
"avatar_url": "https://avatars.githubusercontent.com/u/22222222?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/curious-user",
"html_url": "https://github.com/curious-user",
"type": "User",
"site_admin": false
}
}


@@ -0,0 +1,112 @@
{
"action": "opened",
"issue": {
"url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/issues/12345",
"repository_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT",
"labels_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/issues/12345/labels{/name}",
"comments_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/issues/12345/comments",
"events_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/issues/12345/events",
"html_url": "https://github.com/Significant-Gravitas/AutoGPT/issues/12345",
"id": 2000000001,
"node_id": "I_kwDOJKSTjM5wXXXX",
"number": 12345,
"title": "Bug: Application crashes when processing large files",
"user": {
"login": "bug-reporter",
"id": 11111111,
"node_id": "MDQ6VXNlcjExMTExMTEx",
"avatar_url": "https://avatars.githubusercontent.com/u/11111111?v=4",
"url": "https://api.github.com/users/bug-reporter",
"html_url": "https://github.com/bug-reporter",
"type": "User",
"site_admin": false
},
"labels": [
{
"id": 5272676214,
"node_id": "LA_kwDOJKSTjM8AAAABOkandg",
"url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
],
"state": "open",
"locked": false,
"assignee": null,
"assignees": [],
"milestone": null,
"comments": 0,
"created_at": "2024-12-01T16:00:00Z",
"updated_at": "2024-12-01T16:00:00Z",
"closed_at": null,
"author_association": "NONE",
"active_lock_reason": null,
"body": "## Description\n\nWhen I try to process a file larger than 100MB, the application crashes with an out of memory error.\n\n## Steps to Reproduce\n\n1. Open the application\n2. Select a file larger than 100MB\n3. Click 'Process'\n4. Application crashes\n\n## Expected Behavior\n\nThe application should handle large files gracefully.\n\n## Environment\n\n- OS: Ubuntu 22.04\n- Python: 3.11\n- AutoGPT Version: 1.0.0",
"reactions": {
"url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/issues/12345/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
},
"timeline_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/issues/12345/timeline",
"state_reason": null
},
"repository": {
"id": 614765452,
"node_id": "R_kgDOJKSTjA",
"name": "AutoGPT",
"full_name": "Significant-Gravitas/AutoGPT",
"private": false,
"owner": {
"login": "Significant-Gravitas",
"id": 130738209,
"node_id": "O_kgDOB8roIQ",
"avatar_url": "https://avatars.githubusercontent.com/u/130738209?v=4",
"url": "https://api.github.com/users/Significant-Gravitas",
"html_url": "https://github.com/Significant-Gravitas",
"type": "Organization",
"site_admin": false
},
"html_url": "https://github.com/Significant-Gravitas/AutoGPT",
"description": "AutoGPT is the vision of accessible AI for everyone, to use and to build on.",
"fork": false,
"url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT",
"created_at": "2023-03-16T09:21:07Z",
"updated_at": "2024-12-01T16:00:00Z",
"pushed_at": "2024-12-01T12:00:00Z",
"stargazers_count": 170000,
"watchers_count": 170000,
"language": "Python",
"forks_count": 45000,
"open_issues_count": 190,
"visibility": "public",
"default_branch": "master"
},
"organization": {
"login": "Significant-Gravitas",
"id": 130738209,
"node_id": "O_kgDOB8roIQ",
"url": "https://api.github.com/orgs/Significant-Gravitas",
"avatar_url": "https://avatars.githubusercontent.com/u/130738209?v=4",
"description": ""
},
"sender": {
"login": "bug-reporter",
"id": 11111111,
"node_id": "MDQ6VXNlcjExMTExMTEx",
"avatar_url": "https://avatars.githubusercontent.com/u/11111111?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bug-reporter",
"html_url": "https://github.com/bug-reporter",
"type": "User",
"site_admin": false
}
}


@@ -0,0 +1,97 @@
{
"action": "published",
"release": {
"url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/releases/123456789",
"assets_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/releases/123456789/assets",
"upload_url": "https://uploads.github.com/repos/Significant-Gravitas/AutoGPT/releases/123456789/assets{?name,label}",
"html_url": "https://github.com/Significant-Gravitas/AutoGPT/releases/tag/v1.0.0",
"id": 123456789,
"author": {
"login": "ntindle",
"id": 12345678,
"node_id": "MDQ6VXNlcjEyMzQ1Njc4",
"avatar_url": "https://avatars.githubusercontent.com/u/12345678?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ntindle",
"html_url": "https://github.com/ntindle",
"type": "User",
"site_admin": false
},
"node_id": "RE_kwDOJKSTjM4HWwAA",
"tag_name": "v1.0.0",
"target_commitish": "master",
"name": "AutoGPT Platform v1.0.0",
"draft": false,
"prerelease": false,
"created_at": "2024-12-01T10:00:00Z",
"published_at": "2024-12-01T12:00:00Z",
"assets": [
{
"url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/releases/assets/987654321",
"id": 987654321,
"node_id": "RA_kwDOJKSTjM4HWwBB",
"name": "autogpt-v1.0.0.zip",
"label": "Release Package",
"content_type": "application/zip",
"state": "uploaded",
"size": 52428800,
"download_count": 0,
"created_at": "2024-12-01T11:30:00Z",
"updated_at": "2024-12-01T11:35:00Z",
"browser_download_url": "https://github.com/Significant-Gravitas/AutoGPT/releases/download/v1.0.0/autogpt-v1.0.0.zip"
}
],
"tarball_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/tarball/v1.0.0",
"zipball_url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT/zipball/v1.0.0",
"body": "## What's New\n\n- Feature 1: Amazing new capability\n- Feature 2: Performance improvements\n- Bug fixes and stability improvements\n\n## Breaking Changes\n\nNone\n\n## Contributors\n\nThanks to all our contributors!"
},
"repository": {
"id": 614765452,
"node_id": "R_kgDOJKSTjA",
"name": "AutoGPT",
"full_name": "Significant-Gravitas/AutoGPT",
"private": false,
"owner": {
"login": "Significant-Gravitas",
"id": 130738209,
"node_id": "O_kgDOB8roIQ",
"avatar_url": "https://avatars.githubusercontent.com/u/130738209?v=4",
"url": "https://api.github.com/users/Significant-Gravitas",
"html_url": "https://github.com/Significant-Gravitas",
"type": "Organization",
"site_admin": false
},
"html_url": "https://github.com/Significant-Gravitas/AutoGPT",
"description": "AutoGPT is the vision of accessible AI for everyone, to use and to build on.",
"fork": false,
"url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT",
"created_at": "2023-03-16T09:21:07Z",
"updated_at": "2024-12-01T12:00:00Z",
"pushed_at": "2024-12-01T12:00:00Z",
"stargazers_count": 170000,
"watchers_count": 170000,
"language": "Python",
"forks_count": 45000,
"visibility": "public",
"default_branch": "master"
},
"organization": {
"login": "Significant-Gravitas",
"id": 130738209,
"node_id": "O_kgDOB8roIQ",
"url": "https://api.github.com/orgs/Significant-Gravitas",
"avatar_url": "https://avatars.githubusercontent.com/u/130738209?v=4",
"description": ""
},
"sender": {
"login": "ntindle",
"id": 12345678,
"node_id": "MDQ6VXNlcjEyMzQ1Njc4",
"avatar_url": "https://avatars.githubusercontent.com/u/12345678?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ntindle",
"html_url": "https://github.com/ntindle",
"type": "User",
"site_admin": false
}
}


@@ -0,0 +1,53 @@
{
"action": "created",
"starred_at": "2024-12-01T15:30:00Z",
"repository": {
"id": 614765452,
"node_id": "R_kgDOJKSTjA",
"name": "AutoGPT",
"full_name": "Significant-Gravitas/AutoGPT",
"private": false,
"owner": {
"login": "Significant-Gravitas",
"id": 130738209,
"node_id": "O_kgDOB8roIQ",
"avatar_url": "https://avatars.githubusercontent.com/u/130738209?v=4",
"url": "https://api.github.com/users/Significant-Gravitas",
"html_url": "https://github.com/Significant-Gravitas",
"type": "Organization",
"site_admin": false
},
"html_url": "https://github.com/Significant-Gravitas/AutoGPT",
"description": "AutoGPT is the vision of accessible AI for everyone, to use and to build on.",
"fork": false,
"url": "https://api.github.com/repos/Significant-Gravitas/AutoGPT",
"created_at": "2023-03-16T09:21:07Z",
"updated_at": "2024-12-01T15:30:00Z",
"pushed_at": "2024-12-01T12:00:00Z",
"stargazers_count": 170001,
"watchers_count": 170001,
"language": "Python",
"forks_count": 45000,
"visibility": "public",
"default_branch": "master"
},
"organization": {
"login": "Significant-Gravitas",
"id": 130738209,
"node_id": "O_kgDOB8roIQ",
"url": "https://api.github.com/orgs/Significant-Gravitas",
"avatar_url": "https://avatars.githubusercontent.com/u/130738209?v=4",
"description": ""
},
"sender": {
"login": "awesome-contributor",
"id": 98765432,
"node_id": "MDQ6VXNlcjk4NzY1NDMy",
"avatar_url": "https://avatars.githubusercontent.com/u/98765432?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/awesome-contributor",
"html_url": "https://github.com/awesome-contributor",
"type": "User",
"site_admin": false
}
}


@@ -159,3 +159,391 @@ class GithubPullRequestTriggerBlock(GitHubTriggerBase, Block):
# --8<-- [end:GithubTriggerExample]
class GithubStarTriggerBlock(GitHubTriggerBase, Block):
"""Trigger block for GitHub star events - useful for milestone celebrations."""
EXAMPLE_PAYLOAD_FILE = (
Path(__file__).parent / "example_payloads" / "star.created.json"
)
class Input(GitHubTriggerBase.Input):
class EventsFilter(BaseModel):
"""
https://docs.github.com/en/webhooks/webhook-events-and-payloads#star
"""
created: bool = False
deleted: bool = False
events: EventsFilter = SchemaField(
title="Events", description="The star events to subscribe to"
)
class Output(GitHubTriggerBase.Output):
event: str = SchemaField(
description="The star event that triggered the webhook ('created' or 'deleted')"
)
starred_at: str = SchemaField(
description="ISO timestamp when the repo was starred (empty if deleted)"
)
stargazers_count: int = SchemaField(
description="Current number of stars on the repository"
)
repository_name: str = SchemaField(
description="Full name of the repository (owner/repo)"
)
repository_url: str = SchemaField(description="URL to the repository")
def __init__(self):
from backend.integrations.webhooks.github import GithubWebhookType
example_payload = json.loads(
self.EXAMPLE_PAYLOAD_FILE.read_text(encoding="utf-8")
)
super().__init__(
id="551e0a35-100b-49b7-89b8-3031322239b6",
description="This block triggers on GitHub star events. "
"Useful for celebrating milestones (e.g., 1k, 10k stars) or tracking engagement.",
categories={BlockCategory.DEVELOPER_TOOLS, BlockCategory.INPUT},
input_schema=GithubStarTriggerBlock.Input,
output_schema=GithubStarTriggerBlock.Output,
webhook_config=BlockWebhookConfig(
provider=ProviderName.GITHUB,
webhook_type=GithubWebhookType.REPO,
resource_format="{repo}",
event_filter_input="events",
event_format="star.{event}",
),
test_input={
"repo": "Significant-Gravitas/AutoGPT",
"events": {"created": True},
"credentials": TEST_CREDENTIALS_INPUT,
"payload": example_payload,
},
test_credentials=TEST_CREDENTIALS,
test_output=[
("payload", example_payload),
("triggered_by_user", example_payload["sender"]),
("event", example_payload["action"]),
("starred_at", example_payload.get("starred_at", "")),
("stargazers_count", example_payload["repository"]["stargazers_count"]),
("repository_name", example_payload["repository"]["full_name"]),
("repository_url", example_payload["repository"]["html_url"]),
],
)
async def run(self, input_data: Input, **kwargs) -> BlockOutput: # type: ignore
async for name, value in super().run(input_data, **kwargs):
yield name, value
yield "event", input_data.payload["action"]
yield "starred_at", input_data.payload.get("starred_at", "")
yield "stargazers_count", input_data.payload["repository"]["stargazers_count"]
yield "repository_name", input_data.payload["repository"]["full_name"]
yield "repository_url", input_data.payload["repository"]["html_url"]
class GithubReleaseTriggerBlock(GitHubTriggerBase, Block):
"""Trigger block for GitHub release events - ideal for announcing new versions."""
EXAMPLE_PAYLOAD_FILE = (
Path(__file__).parent / "example_payloads" / "release.published.json"
)
class Input(GitHubTriggerBase.Input):
class EventsFilter(BaseModel):
"""
https://docs.github.com/en/webhooks/webhook-events-and-payloads#release
"""
published: bool = False
unpublished: bool = False
created: bool = False
edited: bool = False
deleted: bool = False
prereleased: bool = False
released: bool = False
events: EventsFilter = SchemaField(
title="Events", description="The release events to subscribe to"
)
class Output(GitHubTriggerBase.Output):
event: str = SchemaField(
description="The release event that triggered the webhook (e.g., 'published')"
)
release: dict = SchemaField(description="The full release object")
release_url: str = SchemaField(description="URL to the release page")
tag_name: str = SchemaField(description="The release tag name (e.g., 'v1.0.0')")
release_name: str = SchemaField(description="Human-readable release name")
body: str = SchemaField(description="Release notes/description")
prerelease: bool = SchemaField(description="Whether this is a prerelease")
draft: bool = SchemaField(description="Whether this is a draft release")
assets: list = SchemaField(description="List of release assets/files")
def __init__(self):
from backend.integrations.webhooks.github import GithubWebhookType
example_payload = json.loads(
self.EXAMPLE_PAYLOAD_FILE.read_text(encoding="utf-8")
)
super().__init__(
id="2052dd1b-74e1-46ac-9c87-c7a0e057b60b",
description="This block triggers on GitHub release events. "
"Perfect for automating announcements to Discord, Twitter, or other platforms.",
categories={BlockCategory.DEVELOPER_TOOLS, BlockCategory.INPUT},
input_schema=GithubReleaseTriggerBlock.Input,
output_schema=GithubReleaseTriggerBlock.Output,
webhook_config=BlockWebhookConfig(
provider=ProviderName.GITHUB,
webhook_type=GithubWebhookType.REPO,
resource_format="{repo}",
event_filter_input="events",
event_format="release.{event}",
),
test_input={
"repo": "Significant-Gravitas/AutoGPT",
"events": {"published": True},
"credentials": TEST_CREDENTIALS_INPUT,
"payload": example_payload,
},
test_credentials=TEST_CREDENTIALS,
test_output=[
("payload", example_payload),
("triggered_by_user", example_payload["sender"]),
("event", example_payload["action"]),
("release", example_payload["release"]),
("release_url", example_payload["release"]["html_url"]),
("tag_name", example_payload["release"]["tag_name"]),
("release_name", example_payload["release"]["name"]),
("body", example_payload["release"]["body"]),
("prerelease", example_payload["release"]["prerelease"]),
("draft", example_payload["release"]["draft"]),
("assets", example_payload["release"]["assets"]),
],
)
async def run(self, input_data: Input, **kwargs) -> BlockOutput: # type: ignore
async for name, value in super().run(input_data, **kwargs):
yield name, value
release = input_data.payload["release"]
yield "event", input_data.payload["action"]
yield "release", release
yield "release_url", release["html_url"]
yield "tag_name", release["tag_name"]
yield "release_name", release.get("name", "")
yield "body", release.get("body", "")
yield "prerelease", release["prerelease"]
yield "draft", release["draft"]
yield "assets", release["assets"]
class GithubIssuesTriggerBlock(GitHubTriggerBase, Block):
"""Trigger block for GitHub issues events - great for triage and notifications."""
EXAMPLE_PAYLOAD_FILE = (
Path(__file__).parent / "example_payloads" / "issues.opened.json"
)
class Input(GitHubTriggerBase.Input):
class EventsFilter(BaseModel):
"""
https://docs.github.com/en/webhooks/webhook-events-and-payloads#issues
"""
opened: bool = False
edited: bool = False
deleted: bool = False
closed: bool = False
reopened: bool = False
assigned: bool = False
unassigned: bool = False
labeled: bool = False
unlabeled: bool = False
locked: bool = False
unlocked: bool = False
transferred: bool = False
milestoned: bool = False
demilestoned: bool = False
pinned: bool = False
unpinned: bool = False
events: EventsFilter = SchemaField(
title="Events", description="The issue events to subscribe to"
)
class Output(GitHubTriggerBase.Output):
event: str = SchemaField(
description="The issue event that triggered the webhook (e.g., 'opened')"
)
number: int = SchemaField(description="The issue number")
issue: dict = SchemaField(description="The full issue object")
issue_url: str = SchemaField(description="URL to the issue")
issue_title: str = SchemaField(description="The issue title")
issue_body: str = SchemaField(description="The issue body/description")
labels: list = SchemaField(description="List of labels on the issue")
assignees: list = SchemaField(description="List of assignees")
state: str = SchemaField(description="Issue state ('open' or 'closed')")
def __init__(self):
from backend.integrations.webhooks.github import GithubWebhookType
example_payload = json.loads(
self.EXAMPLE_PAYLOAD_FILE.read_text(encoding="utf-8")
)
super().__init__(
id="b2605464-e486-4bf4-aad3-d8a213c8a48a",
description="This block triggers on GitHub issues events. "
"Useful for automated triage, notifications, and welcoming first-time contributors.",
categories={BlockCategory.DEVELOPER_TOOLS, BlockCategory.INPUT},
input_schema=GithubIssuesTriggerBlock.Input,
output_schema=GithubIssuesTriggerBlock.Output,
webhook_config=BlockWebhookConfig(
provider=ProviderName.GITHUB,
webhook_type=GithubWebhookType.REPO,
resource_format="{repo}",
event_filter_input="events",
event_format="issues.{event}",
),
test_input={
"repo": "Significant-Gravitas/AutoGPT",
"events": {"opened": True},
"credentials": TEST_CREDENTIALS_INPUT,
"payload": example_payload,
},
test_credentials=TEST_CREDENTIALS,
test_output=[
("payload", example_payload),
("triggered_by_user", example_payload["sender"]),
("event", example_payload["action"]),
("number", example_payload["issue"]["number"]),
("issue", example_payload["issue"]),
("issue_url", example_payload["issue"]["html_url"]),
("issue_title", example_payload["issue"]["title"]),
("issue_body", example_payload["issue"]["body"]),
("labels", example_payload["issue"]["labels"]),
("assignees", example_payload["issue"]["assignees"]),
("state", example_payload["issue"]["state"]),
],
)
async def run(self, input_data: Input, **kwargs) -> BlockOutput: # type: ignore
async for name, value in super().run(input_data, **kwargs):
yield name, value
issue = input_data.payload["issue"]
yield "event", input_data.payload["action"]
yield "number", issue["number"]
yield "issue", issue
yield "issue_url", issue["html_url"]
yield "issue_title", issue["title"]
yield "issue_body", issue.get("body") or ""
yield "labels", issue["labels"]
yield "assignees", issue["assignees"]
yield "state", issue["state"]
class GithubDiscussionTriggerBlock(GitHubTriggerBase, Block):
"""Trigger block for GitHub discussion events - perfect for community Q&A sync."""
EXAMPLE_PAYLOAD_FILE = (
Path(__file__).parent / "example_payloads" / "discussion.created.json"
)
class Input(GitHubTriggerBase.Input):
class EventsFilter(BaseModel):
"""
https://docs.github.com/en/webhooks/webhook-events-and-payloads#discussion
"""
created: bool = False
edited: bool = False
deleted: bool = False
answered: bool = False
unanswered: bool = False
labeled: bool = False
unlabeled: bool = False
locked: bool = False
unlocked: bool = False
category_changed: bool = False
transferred: bool = False
pinned: bool = False
unpinned: bool = False
events: EventsFilter = SchemaField(
title="Events", description="The discussion events to subscribe to"
)
class Output(GitHubTriggerBase.Output):
event: str = SchemaField(
description="The discussion event that triggered the webhook"
)
number: int = SchemaField(description="The discussion number")
discussion: dict = SchemaField(description="The full discussion object")
discussion_url: str = SchemaField(description="URL to the discussion")
title: str = SchemaField(description="The discussion title")
body: str = SchemaField(description="The discussion body")
category: dict = SchemaField(description="The discussion category object")
category_name: str = SchemaField(description="Name of the category")
state: str = SchemaField(description="Discussion state")
def __init__(self):
from backend.integrations.webhooks.github import GithubWebhookType
example_payload = json.loads(
self.EXAMPLE_PAYLOAD_FILE.read_text(encoding="utf-8")
)
super().__init__(
id="87f847b3-d81a-424e-8e89-acadb5c9d52b",
description="This block triggers on GitHub Discussions events. "
"Great for syncing Q&A to Discord or auto-responding to common questions. "
"Note: Discussions must be enabled on the repository.",
categories={BlockCategory.DEVELOPER_TOOLS, BlockCategory.INPUT},
input_schema=GithubDiscussionTriggerBlock.Input,
output_schema=GithubDiscussionTriggerBlock.Output,
webhook_config=BlockWebhookConfig(
provider=ProviderName.GITHUB,
webhook_type=GithubWebhookType.REPO,
resource_format="{repo}",
event_filter_input="events",
event_format="discussion.{event}",
),
test_input={
"repo": "Significant-Gravitas/AutoGPT",
"events": {"created": True},
"credentials": TEST_CREDENTIALS_INPUT,
"payload": example_payload,
},
test_credentials=TEST_CREDENTIALS,
test_output=[
("payload", example_payload),
("triggered_by_user", example_payload["sender"]),
("event", example_payload["action"]),
("number", example_payload["discussion"]["number"]),
("discussion", example_payload["discussion"]),
("discussion_url", example_payload["discussion"]["html_url"]),
("title", example_payload["discussion"]["title"]),
("body", example_payload["discussion"]["body"]),
("category", example_payload["discussion"]["category"]),
("category_name", example_payload["discussion"]["category"]["name"]),
("state", example_payload["discussion"]["state"]),
],
)
async def run(self, input_data: Input, **kwargs) -> BlockOutput: # type: ignore
async for name, value in super().run(input_data, **kwargs):
yield name, value
discussion = input_data.payload["discussion"]
yield "event", input_data.payload["action"]
yield "number", discussion["number"]
yield "discussion", discussion
yield "discussion_url", discussion["html_url"]
yield "title", discussion["title"]
yield "body", discussion.get("body") or ""
yield "category", discussion["category"]
yield "category_name", discussion["category"]["name"]
yield "state", discussion["state"]
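Note on the four trigger blocks above: they share one convention, an EventsFilter model of boolean flags plus an event_format string such as "star.{event}". As a minimal, self-contained sketch (the function name is illustrative; the real resolution happens in backend.integrations.webhooks.github, which is not shown in this diff), the enabled flags could be expanded into concrete webhook event names like this:

# Illustrative only: expand an EventsFilter-style mapping of flags into the
# event names implied by a block's event_format (e.g. "star.{event}").
def expand_webhook_events(event_flags: dict[str, bool], event_format: str) -> list[str]:
    return [event_format.format(event=name) for name, enabled in event_flags.items() if enabled]

# {"created": True} with "star.{event}" -> ["star.created"], matching the test_input above.
assert expand_webhook_events({"created": True, "deleted": False}, "star.{event}") == ["star.created"]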

View File

@@ -0,0 +1,155 @@
from typing import Any, Literal, Optional
from pydantic import BaseModel, ConfigDict, Field
from backend.data.model import SchemaField
AttachmentView = Literal[
"DOCS",
"DOCUMENTS",
"SPREADSHEETS",
"PRESENTATIONS",
"DOCS_IMAGES",
"FOLDERS",
]
ATTACHMENT_VIEWS: tuple[AttachmentView, ...] = (
"DOCS",
"DOCUMENTS",
"SPREADSHEETS",
"PRESENTATIONS",
"DOCS_IMAGES",
"FOLDERS",
)
class _GoogleDriveFileBase(BaseModel):
"""Internal base class for Google Drive file representation."""
model_config = ConfigDict(populate_by_name=True)
id: str = Field(description="Google Drive file/folder ID")
name: Optional[str] = Field(None, description="File/folder name")
mime_type: Optional[str] = Field(
None,
alias="mimeType",
description="MIME type (e.g., application/vnd.google-apps.document)",
)
url: Optional[str] = Field(None, description="URL to open the file")
icon_url: Optional[str] = Field(None, alias="iconUrl", description="Icon URL")
is_folder: Optional[bool] = Field(
None, alias="isFolder", description="Whether this is a folder"
)
class GoogleDriveFile(_GoogleDriveFileBase):
"""
Represents a Google Drive file/folder with optional credentials for chaining.
Used for both inputs and outputs in Google Drive blocks. The `_credentials_id`
field enables chaining between blocks - when one block outputs a file, the
next block can use the same credentials to access it.
When used with GoogleDriveFileField(), the frontend renders a combined
auth + file picker UI that automatically populates `_credentials_id`.
"""
# Hidden field for credential ID - populated by frontend, preserved in outputs
credentials_id: Optional[str] = Field(
None,
alias="_credentials_id",
description="Internal: credential ID for authentication",
)
def GoogleDriveFileField(
*,
title: str,
description: str | None = None,
credentials_kwarg: str = "credentials",
credentials_scopes: list[str] | None = None,
allowed_views: list[AttachmentView] | None = None,
allowed_mime_types: list[str] | None = None,
placeholder: str | None = None,
**kwargs: Any,
) -> Any:
"""
Creates a Google Drive file input field with auto-generated credentials.
This field type produces a single UI element that handles both:
1. Google OAuth authentication
2. File selection via Google Drive Picker
The system automatically generates a credentials field, and the credentials
are passed to the run() method using the specified kwarg name.
Args:
title: Field title shown in UI
description: Field description/help text
credentials_kwarg: Name of the kwarg that will receive GoogleCredentials
in the run() method (default: "credentials")
credentials_scopes: OAuth scopes required (default: drive.file)
allowed_views: List of view types to show in picker (default: ["DOCS"])
allowed_mime_types: Filter by MIME types
placeholder: Placeholder text for the button
**kwargs: Additional SchemaField arguments
Returns:
Field definition that produces GoogleDriveFile
Example:
>>> class MyBlock(Block):
... class Input(BlockSchemaInput):
... spreadsheet: GoogleDriveFile = GoogleDriveFileField(
... title="Select Spreadsheet",
... credentials_kwarg="creds",
... allowed_views=["SPREADSHEETS"],
... )
...
... async def run(
... self, input_data: Input, *, creds: GoogleCredentials, **kwargs
... ):
... # creds is automatically populated
... file = input_data.spreadsheet
"""
# Determine scopes - drive.file is sufficient for picker-selected files
scopes = credentials_scopes or ["https://www.googleapis.com/auth/drive.file"]
# Build picker configuration with auto_credentials embedded
picker_config = {
"multiselect": False,
"allow_folder_selection": False,
"allowed_views": list(allowed_views) if allowed_views else ["DOCS"],
"scopes": scopes,
# Auto-credentials config tells frontend to include _credentials_id in output
"auto_credentials": {
"provider": "google",
"type": "oauth2",
"scopes": scopes,
"kwarg_name": credentials_kwarg,
},
}
if allowed_mime_types:
picker_config["allowed_mime_types"] = list(allowed_mime_types)
return SchemaField(
default=None,
title=title,
description=description,
placeholder=placeholder or "Select from Google Drive",
# Use google-drive-picker format so frontend renders existing component
format="google-drive-picker",
advanced=False,
json_schema_extra={
"google_drive_picker_config": picker_config,
# Also keep auto_credentials at top level for backend detection
"auto_credentials": {
"provider": "google",
"type": "oauth2",
"scopes": scopes,
"kwarg_name": credentials_kwarg,
},
**kwargs,
},
)
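A quick, hedged illustration of the chaining described in the GoogleDriveFile docstring above: because of populate_by_name and the field aliases, the model accepts the camelCase keys and the hidden _credentials_id that the frontend picker supplies, and a downstream block can read credentials_id to reuse the same credentials. The values below are made up; the import path is the one this PR uses in io.py.

from backend.blocks.google._drive import GoogleDriveFile

picked = GoogleDriveFile.model_validate(
    {
        "id": "1abc",  # illustrative file ID
        "name": "Budget",
        "mimeType": "application/vnd.google-apps.spreadsheet",
        "_credentials_id": "cred-123",  # hypothetical; normally filled in by the frontend picker
    }
)
assert picked.mime_type == "application/vnd.google-apps.spreadsheet"
# A downstream block can reuse the same credentials via this ID.
assert picked.credentials_id == "cred-123"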

File diff suppressed because it is too large

View File

@@ -184,7 +184,13 @@ class SendWebRequestBlock(Block):
)
# ─── Execute request ─────────────────────────────────────────
response = await Requests().request(
# Use raise_for_status=False so HTTP errors (4xx, 5xx) are returned
# as response objects instead of raising exceptions, allowing proper
# handling via client_error and server_error outputs
response = await Requests(
raise_for_status=False,
retry_max_attempts=1,  # no retries: let callers handle HTTP errors immediately
).request(
input_data.method.value,
input_data.url,
headers=input_data.headers,
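The comment in this hunk states the intent: with raise_for_status=False, a 4xx/5xx comes back as a normal response so the block can route it to its client_error or server_error output instead of failing the node. A tiny, self-contained sketch of that routing follows; the helper is hypothetical, and only the output names come from the comment above.

# Hypothetical helper mirroring the routing described in the comment above.
def classify_http_status(status: int) -> str:
    if 400 <= status < 500:
        return "client_error"
    if status >= 500:
        return "server_error"
    return "response"

assert classify_http_status(204) == "response"
assert classify_http_status(404) == "client_error"
assert classify_http_status(503) == "server_error"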

View File

@@ -0,0 +1,166 @@
import logging
from typing import Any
from prisma.enums import ReviewStatus
from backend.data.block import (
Block,
BlockCategory,
BlockOutput,
BlockSchemaInput,
BlockSchemaOutput,
BlockType,
)
from backend.data.execution import ExecutionContext, ExecutionStatus
from backend.data.human_review import ReviewResult
from backend.data.model import SchemaField
from backend.executor.manager import async_update_node_execution_status
from backend.util.clients import get_database_manager_async_client
logger = logging.getLogger(__name__)
class HumanInTheLoopBlock(Block):
"""
This block pauses execution and waits for human approval or modification of the data.
When executed, it creates a pending review entry and sets the node execution status
to REVIEW. The execution will remain paused until a human user either:
- Approves the data (with or without modifications)
- Rejects the data
This is useful for workflows that require human validation or intervention before
proceeding to the next steps.
"""
class Input(BlockSchemaInput):
data: Any = SchemaField(description="The data to be reviewed by a human user")
name: str = SchemaField(
description="A descriptive name for what this data represents",
)
editable: bool = SchemaField(
description="Whether the human reviewer can edit the data",
default=True,
advanced=True,
)
class Output(BlockSchemaOutput):
approved_data: Any = SchemaField(
description="The data when approved (may be modified by reviewer)"
)
rejected_data: Any = SchemaField(
description="The data when rejected (may be modified by reviewer)"
)
review_message: str = SchemaField(
description="Any message provided by the reviewer", default=""
)
def __init__(self):
super().__init__(
id="8b2a7b3c-6e9d-4a5f-8c1b-2e3f4a5b6c7d",
description="Pause execution and wait for human approval or modification of data",
categories={BlockCategory.BASIC},
input_schema=HumanInTheLoopBlock.Input,
output_schema=HumanInTheLoopBlock.Output,
block_type=BlockType.HUMAN_IN_THE_LOOP,
test_input={
"data": {"name": "John Doe", "age": 30},
"name": "User profile data",
"editable": True,
},
test_output=[
("approved_data", {"name": "John Doe", "age": 30}),
],
test_mock={
"get_or_create_human_review": lambda *_args, **_kwargs: ReviewResult(
data={"name": "John Doe", "age": 30},
status=ReviewStatus.APPROVED,
message="",
processed=False,
node_exec_id="test-node-exec-id",
),
"update_node_execution_status": lambda *_args, **_kwargs: None,
"update_review_processed_status": lambda *_args, **_kwargs: None,
},
)
async def get_or_create_human_review(self, **kwargs):
return await get_database_manager_async_client().get_or_create_human_review(
**kwargs
)
async def update_node_execution_status(self, **kwargs):
return await async_update_node_execution_status(
db_client=get_database_manager_async_client(), **kwargs
)
async def update_review_processed_status(self, node_exec_id: str, processed: bool):
return await get_database_manager_async_client().update_review_processed_status(
node_exec_id, processed
)
async def run(
self,
input_data: Input,
*,
user_id: str,
node_exec_id: str,
graph_exec_id: str,
graph_id: str,
graph_version: int,
execution_context: ExecutionContext,
**kwargs,
) -> BlockOutput:
if not execution_context.safe_mode:
logger.info(
f"HITL block skipping review for node {node_exec_id} - safe mode disabled"
)
yield "approved_data", input_data.data
yield "review_message", "Auto-approved (safe mode disabled)"
return
try:
result = await self.get_or_create_human_review(
user_id=user_id,
node_exec_id=node_exec_id,
graph_exec_id=graph_exec_id,
graph_id=graph_id,
graph_version=graph_version,
input_data=input_data.data,
message=input_data.name,
editable=input_data.editable,
)
except Exception as e:
logger.error(f"Error in HITL block for node {node_exec_id}: {str(e)}")
raise
if result is None:
logger.info(
f"HITL block pausing execution for node {node_exec_id} - awaiting human review"
)
try:
await self.update_node_execution_status(
exec_id=node_exec_id,
status=ExecutionStatus.REVIEW,
)
return
except Exception as e:
logger.error(
f"Failed to update node status for HITL block {node_exec_id}: {str(e)}"
)
raise
if not result.processed:
await self.update_review_processed_status(
node_exec_id=node_exec_id, processed=True
)
if result.status == ReviewStatus.APPROVED:
yield "approved_data", result.data
if result.message:
yield "review_message", result.message
elif result.status == ReviewStatus.REJECTED:
yield "rejected_data", result.data
if result.message:
yield "review_message", result.message
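A minimal, self-contained sketch of the branching the HumanInTheLoopBlock above applies. It omits the database calls, the REVIEW status update, and the processed flag; the enum here is a stand-in for prisma's ReviewStatus.

from enum import Enum
from typing import Optional

class Review(Enum):  # stand-in for prisma.enums.ReviewStatus
    APPROVED = "APPROVED"
    REJECTED = "REJECTED"

def hitl_outcome(safe_mode: bool, review: Optional[Review]) -> str:
    if not safe_mode:
        return "auto-approved"       # safe mode off: data passes straight through
    if review is None:
        return "paused for review"   # no review yet: node is parked in REVIEW status
    return "approved" if review is Review.APPROVED else "rejected"

assert hitl_outcome(False, None) == "auto-approved"
assert hitl_outcome(True, None) == "paused for review"
assert hitl_outcome(True, Review.REJECTED) == "rejected"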

View File

@@ -2,7 +2,6 @@ from enum import Enum
from typing import Any, Dict, Literal, Optional
from pydantic import SecretStr
from requests.exceptions import RequestException
from backend.data.block import (
Block,
@@ -332,8 +331,8 @@ class IdeogramModelBlock(Block):
try:
response = await Requests().post(url, headers=headers, json=data)
return response.json()["data"][0]["url"]
except RequestException as e:
raise Exception(f"Failed to fetch image with V3 endpoint: {str(e)}")
except Exception as e:
raise ValueError(f"Failed to fetch image with V3 endpoint: {e}") from e
async def _run_model_legacy(
self,
@@ -385,8 +384,8 @@ class IdeogramModelBlock(Block):
try:
response = await Requests().post(url, headers=headers, json=data)
return response.json()["data"][0]["url"]
except RequestException as e:
raise Exception(f"Failed to fetch image with legacy endpoint: {str(e)}")
except Exception as e:
raise ValueError(f"Failed to fetch image with legacy endpoint: {e}") from e
async def upscale_image(self, api_key: SecretStr, image_url: str):
url = "https://api.ideogram.ai/upscale"
@@ -413,5 +412,5 @@ class IdeogramModelBlock(Block):
return (response.json())["data"][0]["url"]
except RequestException as e:
raise Exception(f"Failed to upscale image: {str(e)}")
except Exception as e:
raise ValueError(f"Failed to upscale image: {e}") from e
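The hunks above broaden the caught exception from RequestException to Exception and switch to `raise ValueError(...) from e`, which keeps the original error attached as __cause__. A short, self-contained illustration of that pattern; the TimeoutError is a stand-in for a failed HTTP call.

def fetch_or_wrap() -> None:
    try:
        raise TimeoutError("upstream timed out")  # stand-in for the Ideogram request failing
    except Exception as e:
        raise ValueError(f"Failed to fetch image with V3 endpoint: {e}") from e

try:
    fetch_or_wrap()
except ValueError as wrapped:
    # The root cause stays attached for logging and debugging.
    assert isinstance(wrapped.__cause__, TimeoutError)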

View File

@@ -2,6 +2,8 @@ import copy
from datetime import date, time
from typing import Any, Optional
# Import for Google Drive file input block
from backend.blocks.google._drive import AttachmentView, GoogleDriveFile
from backend.data.block import (
Block,
BlockCategory,
@@ -646,6 +648,119 @@ class AgentTableInputBlock(AgentInputBlock):
yield "result", input_data.value if input_data.value is not None else []
class AgentGoogleDriveFileInputBlock(AgentInputBlock):
"""
This block allows users to select a file from Google Drive.
It provides a Google Drive file picker UI that handles both authentication
and file selection. The selected file information (ID, name, URL, etc.)
is output for use by other blocks like Google Sheets Read.
"""
class Input(AgentInputBlock.Input):
value: Optional[GoogleDriveFile] = SchemaField(
description="The selected Google Drive file.",
default=None,
advanced=False,
title="Selected File",
)
allowed_views: list[AttachmentView] = SchemaField(
description="Which views to show in the file picker (DOCS, SPREADSHEETS, PRESENTATIONS, etc.).",
default_factory=lambda: ["DOCS", "SPREADSHEETS", "PRESENTATIONS"],
advanced=False,
title="Allowed Views",
)
allow_folder_selection: bool = SchemaField(
description="Whether to allow selecting folders.",
default=False,
advanced=True,
title="Allow Folder Selection",
)
def generate_schema(self):
"""Generate schema for the value field with Google Drive picker format."""
schema = super().generate_schema()
# Default scopes for drive.file access
scopes = ["https://www.googleapis.com/auth/drive.file"]
# Build picker configuration
picker_config = {
"multiselect": False, # Single file selection only for now
"allow_folder_selection": self.allow_folder_selection,
"allowed_views": (
list(self.allowed_views) if self.allowed_views else ["DOCS"]
),
"scopes": scopes,
# Auto-credentials config tells frontend to include _credentials_id in output
"auto_credentials": {
"provider": "google",
"type": "oauth2",
"scopes": scopes,
"kwarg_name": "credentials",
},
}
# Set format and config for frontend to render Google Drive picker
schema["format"] = "google-drive-picker"
schema["google_drive_picker_config"] = picker_config
# Also keep auto_credentials at top level for backend detection
schema["auto_credentials"] = {
"provider": "google",
"type": "oauth2",
"scopes": scopes,
"kwarg_name": "credentials",
}
if self.value is not None:
schema["default"] = self.value.model_dump()
return schema
class Output(AgentInputBlock.Output):
result: GoogleDriveFile = SchemaField(
description="The selected Google Drive file with ID, name, URL, and other metadata."
)
def __init__(self):
test_file = GoogleDriveFile.model_validate(
{
"id": "test-file-id",
"name": "Test Spreadsheet",
"mimeType": "application/vnd.google-apps.spreadsheet",
"url": "https://docs.google.com/spreadsheets/d/test-file-id",
}
)
super().__init__(
id="d3b32f15-6fd7-40e3-be52-e083f51b19a2",
description="Block for selecting a file from Google Drive.",
disabled=not config.enable_agent_input_subtype_blocks,
input_schema=AgentGoogleDriveFileInputBlock.Input,
output_schema=AgentGoogleDriveFileInputBlock.Output,
test_input=[
{
"name": "spreadsheet_input",
"description": "Select a spreadsheet from Google Drive",
"allowed_views": ["SPREADSHEETS"],
"value": {
"id": "test-file-id",
"name": "Test Spreadsheet",
"mimeType": "application/vnd.google-apps.spreadsheet",
"url": "https://docs.google.com/spreadsheets/d/test-file-id",
},
}
],
test_output=[("result", test_file)],
)
async def run(self, input_data: Input, *args, **kwargs) -> BlockOutput:
"""
Yields the selected Google Drive file.
"""
if input_data.value is not None:
yield "result", input_data.value
IO_BLOCK_IDs = [
AgentInputBlock().id,
AgentOutputBlock().id,
@@ -658,4 +773,5 @@ IO_BLOCK_IDs = [
AgentDropdownInputBlock().id,
AgentToggleInputBlock().id,
AgentTableInputBlock().id,
AgentGoogleDriveFileInputBlock().id,
]
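For orientation, with the block's default inputs the generate_schema() override above would produce roughly the following fragment for the value field (base keys inherited from AgentInputBlock are omitted; the authoritative shape is the code above, this just writes the expected result out):

drive_picker_value_schema = {
    "format": "google-drive-picker",
    "google_drive_picker_config": {
        "multiselect": False,
        "allow_folder_selection": False,
        "allowed_views": ["DOCS", "SPREADSHEETS", "PRESENTATIONS"],
        "scopes": ["https://www.googleapis.com/auth/drive.file"],
        "auto_credentials": {
            "provider": "google",
            "type": "oauth2",
            "scopes": ["https://www.googleapis.com/auth/drive.file"],
            "kwarg_name": "credentials",
        },
    },
    "auto_credentials": {
        "provider": "google",
        "type": "oauth2",
        "scopes": ["https://www.googleapis.com/auth/drive.file"],
        "kwarg_name": "credentials",
    },
}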

View File

@@ -16,6 +16,7 @@ from backend.data.block import (
BlockSchemaOutput,
)
from backend.data.model import SchemaField
from backend.util.exceptions import BlockExecutionError
class SearchTheWebBlock(Block, GetRequest):
@@ -56,7 +57,17 @@ class SearchTheWebBlock(Block, GetRequest):
# Prepend the Jina Search URL to the encoded query
jina_search_url = f"https://s.jina.ai/{encoded_query}"
results = await self.get_request(jina_search_url, headers=headers, json=False)
try:
results = await self.get_request(
jina_search_url, headers=headers, json=False
)
except Exception as e:
raise BlockExecutionError(
message=f"Search failed: {e}",
block_name=self.name,
block_id=self.id,
) from e
# Output the search results
yield "results", results

View File

@@ -1,3 +1,4 @@
import logging
from datetime import datetime, timezone
from typing import Iterator, Literal
@@ -64,6 +65,7 @@ class RedditComment(BaseModel):
settings = Settings()
logger = logging.getLogger(__name__)
def get_praw(creds: RedditCredentials) -> praw.Reddit:
@@ -77,7 +79,7 @@ def get_praw(creds: RedditCredentials) -> praw.Reddit:
me = client.user.me()
if not me:
raise ValueError("Invalid Reddit credentials.")
print(f"Logged in as Reddit user: {me.name}")
logger.info(f"Logged in as Reddit user: {me.name}")
return client

View File

@@ -18,6 +18,7 @@ from backend.data.block import (
BlockSchemaOutput,
)
from backend.data.model import APIKeyCredentials, CredentialsField, SchemaField
from backend.util.exceptions import BlockExecutionError, BlockInputError
logger = logging.getLogger(__name__)
@@ -111,9 +112,27 @@ class ReplicateModelBlock(Block):
yield "status", "succeeded"
yield "model_name", input_data.model_name
except Exception as e:
error_msg = f"Unexpected error running Replicate model: {str(e)}"
logger.error(error_msg)
raise RuntimeError(error_msg)
error_msg = str(e)
logger.error(f"Error running Replicate model: {error_msg}")
# Input validation errors (422, 400) → BlockInputError
if (
"422" in error_msg
or "Input validation failed" in error_msg
or "400" in error_msg
):
raise BlockInputError(
message=f"Invalid model inputs: {error_msg}",
block_name=self.name,
block_id=self.id,
) from e
# Everything else → BlockExecutionError
else:
raise BlockExecutionError(
message=f"Replicate model error: {error_msg}",
block_name=self.name,
block_id=self.id,
) from e
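The branching above triages Replicate errors by message content. A self-contained sketch of just that classification follows; the real code raises BlockInputError or BlockExecutionError from backend.util.exceptions, whereas this helper only returns the class name.

def classify_replicate_error(error_msg: str) -> str:
    # Input validation failures (HTTP 422/400) are the caller's fault.
    if "422" in error_msg or "Input validation failed" in error_msg or "400" in error_msg:
        return "BlockInputError"
    # Everything else is treated as an execution failure.
    return "BlockExecutionError"

assert classify_replicate_error("422: Input validation failed for 'width'") == "BlockInputError"
assert classify_replicate_error("connection reset by peer") == "BlockExecutionError"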
async def run_model(self, model_ref: str, model_inputs: dict, api_key: SecretStr):
"""

View File

@@ -45,10 +45,16 @@ class GetWikipediaSummaryBlock(Block, GetRequest):
async def run(self, input_data: Input, **kwargs) -> BlockOutput:
topic = input_data.topic
url = f"https://en.wikipedia.org/api/rest_v1/page/summary/{topic}"
response = await self.get_request(url, json=True)
if "extract" not in response:
raise RuntimeError(f"Unable to parse Wikipedia response: {response}")
yield "summary", response["extract"]
# Note: User-Agent is now automatically set by the request library
# to comply with Wikimedia's robot policy (https://w.wiki/4wJS)
try:
response = await self.get_request(url, json=True)
if "extract" not in response:
raise ValueError(f"Unable to parse Wikipedia response: {response}")
yield "summary", response["extract"]
except Exception as e:
raise ValueError(f"Failed to fetch Wikipedia summary: {e}") from e
TEST_CREDENTIALS = APIKeyCredentials(

View File

@@ -1,8 +1,11 @@
import logging
import re
from collections import Counter
from concurrent.futures import Future
from typing import TYPE_CHECKING, Any
from pydantic import BaseModel
import backend.blocks.llm as llm
from backend.blocks.agent import AgentExecutorBlock
from backend.data.block import (
@@ -20,16 +23,41 @@ from backend.data.dynamic_fields import (
is_dynamic_field,
is_tool_pin,
)
from backend.data.execution import ExecutionContext
from backend.data.model import NodeExecutionStats, SchemaField
from backend.util import json
from backend.util.clients import get_database_manager_async_client
from backend.util.prompt import MAIN_OBJECTIVE_PREFIX
if TYPE_CHECKING:
from backend.data.graph import Link, Node
from backend.executor.manager import ExecutionProcessor
logger = logging.getLogger(__name__)
class ToolInfo(BaseModel):
"""Processed tool call information."""
tool_call: Any # The original tool call object from LLM response
tool_name: str # The function name
tool_def: dict[str, Any] # The tool definition from tool_functions
input_data: dict[str, Any] # Processed input data ready for tool execution
field_mapping: dict[str, str] # Field name mapping for the tool
class ExecutionParams(BaseModel):
"""Tool execution parameters."""
user_id: str
graph_id: str
node_id: str
graph_version: int
graph_exec_id: str
node_exec_id: str
execution_context: "ExecutionContext"
def _get_tool_requests(entry: dict[str, Any]) -> list[str]:
"""
Return a list of tool_call_ids if the entry is a tool request.
@@ -105,6 +133,50 @@ def _create_tool_response(call_id: str, output: Any) -> dict[str, Any]:
return {"role": "tool", "tool_call_id": call_id, "content": content}
def _combine_tool_responses(tool_outputs: list[dict[str, Any]]) -> list[dict[str, Any]]:
"""
Combine multiple Anthropic tool responses into a single user message.
For non-Anthropic formats, returns the original list unchanged.
"""
if len(tool_outputs) <= 1:
return tool_outputs
# Anthropic responses have role="user", type="message", and content is a list with tool_result items
anthropic_responses = [
output
for output in tool_outputs
if (
output.get("role") == "user"
and output.get("type") == "message"
and isinstance(output.get("content"), list)
and any(
item.get("type") == "tool_result"
for item in output.get("content", [])
if isinstance(item, dict)
)
)
]
if len(anthropic_responses) > 1:
combined_content = [
item for response in anthropic_responses for item in response["content"]
]
combined_response = {
"role": "user",
"type": "message",
"content": combined_content,
}
non_anthropic_responses = [
output for output in tool_outputs if output not in anthropic_responses
]
return [combined_response] + non_anthropic_responses
return tool_outputs
def _convert_raw_response_to_dict(raw_response: Any) -> dict[str, Any]:
"""
Safely convert raw_response to dictionary format for conversation history.
@@ -204,6 +276,17 @@ class SmartDecisionMakerBlock(Block):
default="localhost:11434",
description="Ollama host for local models",
)
agent_mode_max_iterations: int = SchemaField(
title="Agent Mode Max Iterations",
description="Maximum iterations for agent mode. 0 = traditional mode (single LLM call, yield tool calls for external execution), -1 = infinite agent mode (loop until finished), 1+ = agent mode with max iterations limit.",
advanced=True,
default=0,
)
conversation_compaction: bool = SchemaField(
default=True,
title="Context window auto-compaction",
description="Automatically compact the context window once it hits the limit",
)
@classmethod
def get_missing_links(cls, data: BlockInput, links: list["Link"]) -> set[str]:
@@ -506,6 +589,7 @@ class SmartDecisionMakerBlock(Block):
Returns the response if successful, raises ValueError if validation fails.
"""
resp = await llm.llm_call(
compress_prompt_to_fit=input_data.conversation_compaction,
credentials=credentials,
llm_model=input_data.model,
prompt=current_prompt,
@@ -593,6 +677,291 @@ class SmartDecisionMakerBlock(Block):
return resp
def _process_tool_calls(
self, response, tool_functions: list[dict[str, Any]]
) -> list[ToolInfo]:
"""Process tool calls and extract tool definitions, arguments, and input data.
Returns a list of ToolInfo objects, each carrying:
- tool_call: The original tool call object
- tool_name: The function name
- tool_def: The tool definition from tool_functions
- input_data: Processed input data dict (includes None values)
- field_mapping: Field name mapping for the tool
"""
if not response.tool_calls:
return []
processed_tools = []
for tool_call in response.tool_calls:
tool_name = tool_call.function.name
tool_args = json.loads(tool_call.function.arguments)
tool_def = next(
(
tool
for tool in tool_functions
if tool["function"]["name"] == tool_name
),
None,
)
if not tool_def:
if len(tool_functions) == 1:
tool_def = tool_functions[0]
else:
continue
# Build input data for the tool
input_data = {}
field_mapping = tool_def["function"].get("_field_mapping", {})
if "function" in tool_def and "parameters" in tool_def["function"]:
expected_args = tool_def["function"]["parameters"].get("properties", {})
for clean_arg_name in expected_args:
original_field_name = field_mapping.get(
clean_arg_name, clean_arg_name
)
arg_value = tool_args.get(clean_arg_name)
# Include all expected parameters, even if None (for backward compatibility with tests)
input_data[original_field_name] = arg_value
processed_tools.append(
ToolInfo(
tool_call=tool_call,
tool_name=tool_name,
tool_def=tool_def,
input_data=input_data,
field_mapping=field_mapping,
)
)
return processed_tools
def _update_conversation(
self, prompt: list[dict], response, tool_outputs: list | None = None
):
"""Update conversation history with response and tool outputs."""
# Don't add separate reasoning message with tool calls (breaks Anthropic's tool_use->tool_result pairing)
assistant_message = _convert_raw_response_to_dict(response.raw_response)
has_tool_calls = isinstance(assistant_message.get("content"), list) and any(
item.get("type") == "tool_use"
for item in assistant_message.get("content", [])
)
if response.reasoning and not has_tool_calls:
prompt.append(
{"role": "assistant", "content": f"[Reasoning]: {response.reasoning}"}
)
prompt.append(assistant_message)
if tool_outputs:
prompt.extend(tool_outputs)
async def _execute_single_tool_with_manager(
self,
tool_info: ToolInfo,
execution_params: ExecutionParams,
execution_processor: "ExecutionProcessor",
) -> dict:
"""Execute a single tool using the execution manager for proper integration."""
# Lazy imports to avoid circular dependencies
from backend.data.execution import NodeExecutionEntry
tool_call = tool_info.tool_call
tool_def = tool_info.tool_def
raw_input_data = tool_info.input_data
# Get sink node and field mapping
sink_node_id = tool_def["function"]["_sink_node_id"]
# Use proper database operations for tool execution
db_client = get_database_manager_async_client()
# Get target node
target_node = await db_client.get_node(sink_node_id)
if not target_node:
raise ValueError(f"Target node {sink_node_id} not found")
# Create proper node execution using upsert_execution_input
node_exec_result = None
final_input_data = None
# Add all inputs to the execution
if not raw_input_data:
raise ValueError(f"Tool call has no input data: {tool_call}")
for input_name, input_value in raw_input_data.items():
node_exec_result, final_input_data = await db_client.upsert_execution_input(
node_id=sink_node_id,
graph_exec_id=execution_params.graph_exec_id,
input_name=input_name,
input_data=input_value,
)
assert node_exec_result is not None, "node_exec_result should not be None"
# Create NodeExecutionEntry for execution manager
node_exec_entry = NodeExecutionEntry(
user_id=execution_params.user_id,
graph_exec_id=execution_params.graph_exec_id,
graph_id=execution_params.graph_id,
graph_version=execution_params.graph_version,
node_exec_id=node_exec_result.node_exec_id,
node_id=sink_node_id,
block_id=target_node.block_id,
inputs=final_input_data or {},
execution_context=execution_params.execution_context,
)
# Use the execution manager to execute the tool node
try:
# Get NodeExecutionProgress from the execution manager's running nodes
node_exec_progress = execution_processor.running_node_execution[
sink_node_id
]
# Use the execution manager's own graph stats
graph_stats_pair = (
execution_processor.execution_stats,
execution_processor.execution_stats_lock,
)
# Create a completed future for the task tracking system
node_exec_future = Future()
node_exec_progress.add_task(
node_exec_id=node_exec_result.node_exec_id,
task=node_exec_future,
)
# Execute the node directly since we're in the SmartDecisionMaker context
node_exec_future.set_result(
await execution_processor.on_node_execution(
node_exec=node_exec_entry,
node_exec_progress=node_exec_progress,
nodes_input_masks=None,
graph_stats_pair=graph_stats_pair,
)
)
# Get outputs from database after execution completes using database manager client
node_outputs = await db_client.get_execution_outputs_by_node_exec_id(
node_exec_result.node_exec_id
)
# Create tool response
tool_response_content = (
json.dumps(node_outputs)
if node_outputs
else "Tool executed successfully"
)
return _create_tool_response(tool_call.id, tool_response_content)
except Exception as e:
logger.error(f"Tool execution with manager failed: {e}")
# Return error response
return _create_tool_response(
tool_call.id, f"Tool execution failed: {str(e)}"
)
async def _execute_tools_agent_mode(
self,
input_data,
credentials,
tool_functions: list[dict[str, Any]],
prompt: list[dict],
graph_exec_id: str,
node_id: str,
node_exec_id: str,
user_id: str,
graph_id: str,
graph_version: int,
execution_context: ExecutionContext,
execution_processor: "ExecutionProcessor",
):
"""Execute tools in agent mode with a loop until finished."""
max_iterations = input_data.agent_mode_max_iterations
iteration = 0
# Execution parameters for tool execution
execution_params = ExecutionParams(
user_id=user_id,
graph_id=graph_id,
node_id=node_id,
graph_version=graph_version,
graph_exec_id=graph_exec_id,
node_exec_id=node_exec_id,
execution_context=execution_context,
)
current_prompt = list(prompt)
while max_iterations < 0 or iteration < max_iterations:
iteration += 1
logger.debug(f"Agent mode iteration {iteration}")
# Prepare prompt for this iteration
iteration_prompt = list(current_prompt)
# On the last iteration, add a special system message to encourage completion
if max_iterations > 0 and iteration == max_iterations:
last_iteration_message = {
"role": "system",
"content": f"{MAIN_OBJECTIVE_PREFIX}This is your last iteration ({iteration}/{max_iterations}). "
"Try to complete the task with the information you have. If you cannot fully complete it, "
"provide a summary of what you've accomplished and what remains to be done. "
"Prefer finishing with a clear response rather than making additional tool calls.",
}
iteration_prompt.append(last_iteration_message)
# Get LLM response
try:
response = await self._attempt_llm_call_with_validation(
credentials, input_data, iteration_prompt, tool_functions
)
except Exception as e:
yield "error", f"LLM call failed in agent mode iteration {iteration}: {str(e)}"
return
# Process tool calls
processed_tools = self._process_tool_calls(response, tool_functions)
# If no tool calls, we're done
if not processed_tools:
yield "finished", response.response
self._update_conversation(current_prompt, response)
yield "conversations", current_prompt
return
# Execute tools and collect responses
tool_outputs = []
for tool_info in processed_tools:
try:
tool_response = await self._execute_single_tool_with_manager(
tool_info, execution_params, execution_processor
)
tool_outputs.append(tool_response)
except Exception as e:
logger.error(f"Tool execution failed: {e}")
# Create error response for the tool
error_response = _create_tool_response(
tool_info.tool_call.id, f"Error: {str(e)}"
)
tool_outputs.append(error_response)
tool_outputs = _combine_tool_responses(tool_outputs)
self._update_conversation(current_prompt, response, tool_outputs)
# Yield intermediate conversation state
yield "conversations", current_prompt
# Loop exited without the model finishing; report the final state
if max_iterations < 0:
yield "finished", f"Agent mode completed after {iteration} iterations"
else:
yield "finished", f"Agent mode completed after {max_iterations} iterations (limit reached)"
yield "conversations", current_prompt
async def run(
self,
input_data: Input,
@@ -603,8 +972,12 @@ class SmartDecisionMakerBlock(Block):
graph_exec_id: str,
node_exec_id: str,
user_id: str,
graph_version: int,
execution_context: ExecutionContext,
execution_processor: "ExecutionProcessor",
**kwargs,
) -> BlockOutput:
tool_functions = await self._create_tool_node_signatures(node_id)
yield "tool_functions", json.dumps(tool_functions)
@@ -648,24 +1021,52 @@ class SmartDecisionMakerBlock(Block):
input_data.prompt = llm.fmt.format_string(input_data.prompt, values)
input_data.sys_prompt = llm.fmt.format_string(input_data.sys_prompt, values)
prefix = "[Main Objective Prompt]: "
if input_data.sys_prompt and not any(
p["role"] == "system" and p["content"].startswith(prefix) for p in prompt
p["role"] == "system" and p["content"].startswith(MAIN_OBJECTIVE_PREFIX)
for p in prompt
):
prompt.append({"role": "system", "content": prefix + input_data.sys_prompt})
prompt.append(
{
"role": "system",
"content": MAIN_OBJECTIVE_PREFIX + input_data.sys_prompt,
}
)
if input_data.prompt and not any(
p["role"] == "user" and p["content"].startswith(prefix) for p in prompt
p["role"] == "user" and p["content"].startswith(MAIN_OBJECTIVE_PREFIX)
for p in prompt
):
prompt.append({"role": "user", "content": prefix + input_data.prompt})
prompt.append(
{"role": "user", "content": MAIN_OBJECTIVE_PREFIX + input_data.prompt}
)
# Execute tools based on the selected mode
if input_data.agent_mode_max_iterations != 0:
# In agent mode, execute tools directly in a loop until finished
async for result in self._execute_tools_agent_mode(
input_data=input_data,
credentials=credentials,
tool_functions=tool_functions,
prompt=prompt,
graph_exec_id=graph_exec_id,
node_id=node_id,
node_exec_id=node_exec_id,
user_id=user_id,
graph_id=graph_id,
graph_version=graph_version,
execution_context=execution_context,
execution_processor=execution_processor,
):
yield result
return
# One-off mode: single LLM call and yield tool calls for external execution
current_prompt = list(prompt)
max_attempts = max(1, int(input_data.retry))
response = None
last_error = None
for attempt in range(max_attempts):
for _ in range(max_attempts):
try:
response = await self._attempt_llm_call_with_validation(
credentials, input_data, current_prompt, tool_functions
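The agent_mode_max_iterations field added above defines the loop contract: 0 keeps the traditional single-call behaviour, -1 loops until the model stops calling tools, and N greater than 0 caps the loop at N iterations. A self-contained sketch of that contract with a stubbed model; nothing here calls llm_call or the execution manager.

from typing import Callable

def run_agent_loop(max_iterations: int, model: Callable[[int], list[str]]) -> int:
    """Return how many iterations the loop runs for a stubbed model."""
    if max_iterations == 0:
        return 0  # traditional mode: no agent loop at all
    iteration = 0
    while max_iterations < 0 or iteration < max_iterations:
        iteration += 1
        tool_calls = model(iteration)
        if not tool_calls:  # the model finished without requesting tools
            break
    return iteration

# Finishes on its own after 3 iterations, well under the limit of 10.
assert run_agent_loop(10, lambda i: [] if i >= 3 else ["tool"]) == 3
# Keeps requesting tools, so the cap of 2 iterations is what stops it.
assert run_agent_loop(2, lambda i: ["tool"]) == 2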

View File

@@ -1,14 +1,24 @@
from typing import Type
from typing import Any, Type
import pytest
from backend.data.block import Block, get_blocks
from backend.data.block import Block, BlockSchemaInput, get_blocks
from backend.data.model import SchemaField
from backend.util.test import execute_block_test
SKIP_BLOCK_TESTS = {
"HumanInTheLoopBlock",
}
@pytest.mark.parametrize("block", get_blocks().values(), ids=lambda b: b().name)
async def test_available_blocks(block: Type[Block]):
await execute_block_test(block())
block_instance = block()
if block_instance.__class__.__name__ in SKIP_BLOCK_TESTS:
pytest.skip(
f"Skipping {block_instance.__class__.__name__} - requires external service"
)
await execute_block_test(block_instance)
@pytest.mark.parametrize("block", get_blocks().values(), ids=lambda b: b().name)
@@ -123,3 +133,148 @@ async def test_block_ids_valid(block: Type[Block]):
), f"Block {block.name} ID is UUID version {parsed_uuid.version}, expected version 4"
except ValueError:
pytest.fail(f"Block {block.name} has invalid UUID format: {block_instance.id}")
class TestAutoCredentialsFieldsValidation:
"""Tests for auto_credentials field validation in BlockSchema."""
def test_duplicate_auto_credentials_kwarg_name_raises_error(self):
"""Test that duplicate kwarg_name in auto_credentials raises ValueError."""
class DuplicateKwargSchema(BlockSchemaInput):
"""Schema with duplicate auto_credentials kwarg_name."""
# Both fields explicitly use the same kwarg_name "credentials"
file1: dict[str, Any] | None = SchemaField(
description="First file input",
default=None,
json_schema_extra={
"auto_credentials": {
"provider": "google",
"type": "oauth2",
"scopes": ["https://www.googleapis.com/auth/drive.file"],
"kwarg_name": "credentials",
}
},
)
file2: dict[str, Any] | None = SchemaField(
description="Second file input",
default=None,
json_schema_extra={
"auto_credentials": {
"provider": "google",
"type": "oauth2",
"scopes": ["https://www.googleapis.com/auth/drive.file"],
"kwarg_name": "credentials", # Duplicate kwarg_name!
}
},
)
with pytest.raises(ValueError) as exc_info:
DuplicateKwargSchema.get_auto_credentials_fields()
error_message = str(exc_info.value)
assert "Duplicate auto_credentials kwarg_name 'credentials'" in error_message
assert "file1" in error_message
assert "file2" in error_message
def test_unique_auto_credentials_kwarg_names_succeed(self):
"""Test that unique kwarg_name values work correctly."""
class UniqueKwargSchema(BlockSchemaInput):
"""Schema with unique auto_credentials kwarg_name values."""
file1: dict[str, Any] | None = SchemaField(
description="First file input",
default=None,
json_schema_extra={
"auto_credentials": {
"provider": "google",
"type": "oauth2",
"scopes": ["https://www.googleapis.com/auth/drive.file"],
"kwarg_name": "file1_credentials",
}
},
)
file2: dict[str, Any] | None = SchemaField(
description="Second file input",
default=None,
json_schema_extra={
"auto_credentials": {
"provider": "google",
"type": "oauth2",
"scopes": ["https://www.googleapis.com/auth/drive.file"],
"kwarg_name": "file2_credentials", # Different kwarg_name
}
},
)
# Should not raise
result = UniqueKwargSchema.get_auto_credentials_fields()
assert "file1_credentials" in result
assert "file2_credentials" in result
assert result["file1_credentials"]["field_name"] == "file1"
assert result["file2_credentials"]["field_name"] == "file2"
def test_default_kwarg_name_is_credentials(self):
"""Test that missing kwarg_name defaults to 'credentials'."""
class DefaultKwargSchema(BlockSchemaInput):
"""Schema with auto_credentials missing kwarg_name."""
file: dict[str, Any] | None = SchemaField(
description="File input",
default=None,
json_schema_extra={
"auto_credentials": {
"provider": "google",
"type": "oauth2",
"scopes": ["https://www.googleapis.com/auth/drive.file"],
# No kwarg_name specified - should default to "credentials"
}
},
)
result = DefaultKwargSchema.get_auto_credentials_fields()
assert "credentials" in result
assert result["credentials"]["field_name"] == "file"
def test_duplicate_default_kwarg_name_raises_error(self):
"""Test that two fields with default kwarg_name raises ValueError."""
class DefaultDuplicateSchema(BlockSchemaInput):
"""Schema where both fields omit kwarg_name, defaulting to 'credentials'."""
file1: dict[str, Any] | None = SchemaField(
description="First file input",
default=None,
json_schema_extra={
"auto_credentials": {
"provider": "google",
"type": "oauth2",
"scopes": ["https://www.googleapis.com/auth/drive.file"],
# No kwarg_name - defaults to "credentials"
}
},
)
file2: dict[str, Any] | None = SchemaField(
description="Second file input",
default=None,
json_schema_extra={
"auto_credentials": {
"provider": "google",
"type": "oauth2",
"scopes": ["https://www.googleapis.com/auth/drive.file"],
# No kwarg_name - also defaults to "credentials"
}
},
)
with pytest.raises(ValueError) as exc_info:
DefaultDuplicateSchema.get_auto_credentials_fields()
assert "Duplicate auto_credentials kwarg_name 'credentials'" in str(
exc_info.value
)
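The tests above pin down the contract of get_auto_credentials_fields (its implementation is not shown in this diff): each auto_credentials entry claims a kwarg name, a missing name defaults to "credentials", and claiming the same name twice is an error. A hypothetical, self-contained sketch of that check over plain dicts:

from typing import Any

def collect_auto_credentials(fields: dict[str, dict[str, Any]]) -> dict[str, dict[str, Any]]:
    result: dict[str, dict[str, Any]] = {}
    for field_name, extra in fields.items():
        auto = extra.get("auto_credentials")
        if not auto:
            continue
        kwarg = auto.get("kwarg_name", "credentials")  # default kwarg name
        if kwarg in result:
            raise ValueError(
                f"Duplicate auto_credentials kwarg_name '{kwarg}' used by "
                f"'{result[kwarg]['field_name']}' and '{field_name}'"
            )
        result[kwarg] = {"field_name": field_name, **auto}
    return result

ok = collect_auto_credentials({
    "file1": {"auto_credentials": {"kwarg_name": "file1_credentials"}},
    "file2": {"auto_credentials": {"kwarg_name": "file2_credentials"}},
})
assert ok["file1_credentials"]["field_name"] == "file1"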

View File

@@ -1,7 +1,11 @@
import logging
import threading
from collections import defaultdict
from unittest.mock import AsyncMock, MagicMock, patch
import pytest
from backend.data.execution import ExecutionContext
from backend.data.model import ProviderName, User
from backend.server.model import CreateGraph
from backend.server.rest_api import AgentServer
@@ -17,10 +21,10 @@ async def create_graph(s: SpinTestServer, g, u: User):
async def create_credentials(s: SpinTestServer, u: User):
import backend.blocks.llm as llm
import backend.blocks.llm as llm_module
provider = ProviderName.OPENAI
credentials = llm.TEST_CREDENTIALS
credentials = llm_module.TEST_CREDENTIALS
return await s.agent_server.test_create_credentials(u.id, provider, credentials)
@@ -196,8 +200,6 @@ async def test_smart_decision_maker_function_signature(server: SpinTestServer):
@pytest.mark.asyncio
async def test_smart_decision_maker_tracks_llm_stats():
"""Test that SmartDecisionMakerBlock correctly tracks LLM usage stats."""
from unittest.mock import MagicMock, patch
import backend.blocks.llm as llm_module
from backend.blocks.smart_decision_maker import SmartDecisionMakerBlock
@@ -216,7 +218,6 @@ async def test_smart_decision_maker_tracks_llm_stats():
}
# Mock the _create_tool_node_signatures method to avoid database calls
from unittest.mock import AsyncMock
with patch(
"backend.blocks.llm.llm_call",
@@ -234,10 +235,19 @@ async def test_smart_decision_maker_tracks_llm_stats():
prompt="Should I continue with this task?",
model=llm_module.LlmModel.GPT4O,
credentials=llm_module.TEST_CREDENTIALS_INPUT, # type: ignore
agent_mode_max_iterations=0,
)
# Execute the block
outputs = {}
# Create execution context
mock_execution_context = ExecutionContext(safe_mode=False)
# Create a mock execution processor for tests
mock_execution_processor = MagicMock()
async for output_name, output_data in block.run(
input_data,
credentials=llm_module.TEST_CREDENTIALS,
@@ -246,6 +256,9 @@ async def test_smart_decision_maker_tracks_llm_stats():
graph_exec_id="test-exec-id",
node_exec_id="test-node-exec-id",
user_id="test-user-id",
graph_version=1,
execution_context=mock_execution_context,
execution_processor=mock_execution_processor,
):
outputs[output_name] = output_data
@@ -263,8 +276,6 @@ async def test_smart_decision_maker_tracks_llm_stats():
@pytest.mark.asyncio
async def test_smart_decision_maker_parameter_validation():
"""Test that SmartDecisionMakerBlock correctly validates tool call parameters."""
from unittest.mock import MagicMock, patch
import backend.blocks.llm as llm_module
from backend.blocks.smart_decision_maker import SmartDecisionMakerBlock
@@ -311,8 +322,6 @@ async def test_smart_decision_maker_parameter_validation():
mock_response_with_typo.reasoning = None
mock_response_with_typo.raw_response = {"role": "assistant", "content": None}
from unittest.mock import AsyncMock
with patch(
"backend.blocks.llm.llm_call",
new_callable=AsyncMock,
@@ -329,8 +338,17 @@ async def test_smart_decision_maker_parameter_validation():
model=llm_module.LlmModel.GPT4O,
credentials=llm_module.TEST_CREDENTIALS_INPUT, # type: ignore
retry=2, # Set retry to 2 for testing
agent_mode_max_iterations=0,
)
# Create execution context
mock_execution_context = ExecutionContext(safe_mode=False)
# Create a mock execution processor for tests
mock_execution_processor = MagicMock()
# Should raise ValueError after retries due to typo'd parameter name
with pytest.raises(ValueError) as exc_info:
outputs = {}
@@ -342,6 +360,9 @@ async def test_smart_decision_maker_parameter_validation():
graph_exec_id="test-exec-id",
node_exec_id="test-node-exec-id",
user_id="test-user-id",
graph_version=1,
execution_context=mock_execution_context,
execution_processor=mock_execution_processor,
):
outputs[output_name] = output_data
@@ -368,8 +389,6 @@ async def test_smart_decision_maker_parameter_validation():
mock_response_missing_required.reasoning = None
mock_response_missing_required.raw_response = {"role": "assistant", "content": None}
from unittest.mock import AsyncMock
with patch(
"backend.blocks.llm.llm_call",
new_callable=AsyncMock,
@@ -385,8 +404,17 @@ async def test_smart_decision_maker_parameter_validation():
prompt="Search for keywords",
model=llm_module.LlmModel.GPT4O,
credentials=llm_module.TEST_CREDENTIALS_INPUT, # type: ignore
agent_mode_max_iterations=0,
)
# Create execution context
mock_execution_context = ExecutionContext(safe_mode=False)
# Create a mock execution processor for tests
mock_execution_processor = MagicMock()
# Should raise ValueError due to missing required parameter
with pytest.raises(ValueError) as exc_info:
outputs = {}
@@ -398,6 +426,9 @@ async def test_smart_decision_maker_parameter_validation():
graph_exec_id="test-exec-id",
node_exec_id="test-node-exec-id",
user_id="test-user-id",
graph_version=1,
execution_context=mock_execution_context,
execution_processor=mock_execution_processor,
):
outputs[output_name] = output_data
@@ -418,8 +449,6 @@ async def test_smart_decision_maker_parameter_validation():
mock_response_valid.reasoning = None
mock_response_valid.raw_response = {"role": "assistant", "content": None}
from unittest.mock import AsyncMock
with patch(
"backend.blocks.llm.llm_call",
new_callable=AsyncMock,
@@ -435,10 +464,19 @@ async def test_smart_decision_maker_parameter_validation():
prompt="Search for keywords",
model=llm_module.LlmModel.GPT4O,
credentials=llm_module.TEST_CREDENTIALS_INPUT, # type: ignore
agent_mode_max_iterations=0,
)
# Should succeed - optional parameter missing is OK
outputs = {}
# Create execution context
mock_execution_context = ExecutionContext(safe_mode=False)
# Create a mock execution processor for tests
mock_execution_processor = MagicMock()
async for output_name, output_data in block.run(
input_data,
credentials=llm_module.TEST_CREDENTIALS,
@@ -447,6 +485,9 @@ async def test_smart_decision_maker_parameter_validation():
graph_exec_id="test-exec-id",
node_exec_id="test-node-exec-id",
user_id="test-user-id",
graph_version=1,
execution_context=mock_execution_context,
execution_processor=mock_execution_processor,
):
outputs[output_name] = output_data
@@ -472,8 +513,6 @@ async def test_smart_decision_maker_parameter_validation():
mock_response_all_params.reasoning = None
mock_response_all_params.raw_response = {"role": "assistant", "content": None}
from unittest.mock import AsyncMock
with patch(
"backend.blocks.llm.llm_call",
new_callable=AsyncMock,
@@ -489,10 +528,19 @@ async def test_smart_decision_maker_parameter_validation():
prompt="Search for keywords",
model=llm_module.LlmModel.GPT4O,
credentials=llm_module.TEST_CREDENTIALS_INPUT, # type: ignore
agent_mode_max_iterations=0,
)
# Should succeed with all parameters
outputs = {}
# Create execution context
mock_execution_context = ExecutionContext(safe_mode=False)
# Create a mock execution processor for tests
mock_execution_processor = MagicMock()
async for output_name, output_data in block.run(
input_data,
credentials=llm_module.TEST_CREDENTIALS,
@@ -501,6 +549,9 @@ async def test_smart_decision_maker_parameter_validation():
graph_exec_id="test-exec-id",
node_exec_id="test-node-exec-id",
user_id="test-user-id",
graph_version=1,
execution_context=mock_execution_context,
execution_processor=mock_execution_processor,
):
outputs[output_name] = output_data
@@ -513,8 +564,6 @@ async def test_smart_decision_maker_parameter_validation():
@pytest.mark.asyncio
async def test_smart_decision_maker_raw_response_conversion():
"""Test that SmartDecisionMaker correctly handles different raw_response types with retry mechanism."""
from unittest.mock import MagicMock, patch
import backend.blocks.llm as llm_module
from backend.blocks.smart_decision_maker import SmartDecisionMakerBlock
@@ -584,7 +633,6 @@ async def test_smart_decision_maker_raw_response_conversion():
)
# Mock llm_call to return different responses on different calls
from unittest.mock import AsyncMock
with patch(
"backend.blocks.llm.llm_call", new_callable=AsyncMock
@@ -603,10 +651,19 @@ async def test_smart_decision_maker_raw_response_conversion():
model=llm_module.LlmModel.GPT4O,
credentials=llm_module.TEST_CREDENTIALS_INPUT, # type: ignore
retry=2,
agent_mode_max_iterations=0,
)
# Should succeed after retry, demonstrating our helper function works
outputs = {}
# Create execution context
mock_execution_context = ExecutionContext(safe_mode=False)
# Create a mock execution processor for tests
mock_execution_processor = MagicMock()
async for output_name, output_data in block.run(
input_data,
credentials=llm_module.TEST_CREDENTIALS,
@@ -615,6 +672,9 @@ async def test_smart_decision_maker_raw_response_conversion():
graph_exec_id="test-exec-id",
node_exec_id="test-node-exec-id",
user_id="test-user-id",
graph_version=1,
execution_context=mock_execution_context,
execution_processor=mock_execution_processor,
):
outputs[output_name] = output_data
@@ -650,8 +710,6 @@ async def test_smart_decision_maker_raw_response_conversion():
"I'll help you with that." # Ollama returns string
)
from unittest.mock import AsyncMock
with patch(
"backend.blocks.llm.llm_call",
new_callable=AsyncMock,
@@ -666,9 +724,18 @@ async def test_smart_decision_maker_raw_response_conversion():
prompt="Simple prompt",
model=llm_module.LlmModel.GPT4O,
credentials=llm_module.TEST_CREDENTIALS_INPUT, # type: ignore
agent_mode_max_iterations=0,
)
outputs = {}
# Create execution context
mock_execution_context = ExecutionContext(safe_mode=False)
# Create a mock execution processor for tests
mock_execution_processor = MagicMock()
async for output_name, output_data in block.run(
input_data,
credentials=llm_module.TEST_CREDENTIALS,
@@ -677,6 +744,9 @@ async def test_smart_decision_maker_raw_response_conversion():
graph_exec_id="test-exec-id",
node_exec_id="test-node-exec-id",
user_id="test-user-id",
graph_version=1,
execution_context=mock_execution_context,
execution_processor=mock_execution_processor,
):
outputs[output_name] = output_data
@@ -696,8 +766,6 @@ async def test_smart_decision_maker_raw_response_conversion():
"content": "Test response",
} # Dict format
from unittest.mock import AsyncMock
with patch(
"backend.blocks.llm.llm_call",
new_callable=AsyncMock,
@@ -712,6 +780,160 @@ async def test_smart_decision_maker_raw_response_conversion():
prompt="Another test",
model=llm_module.LlmModel.GPT4O,
credentials=llm_module.TEST_CREDENTIALS_INPUT, # type: ignore
agent_mode_max_iterations=0,
)
outputs = {}
# Create execution context
mock_execution_context = ExecutionContext(safe_mode=False)
# Create a mock execution processor for tests
mock_execution_processor = MagicMock()
async for output_name, output_data in block.run(
input_data,
credentials=llm_module.TEST_CREDENTIALS,
graph_id="test-graph-id",
node_id="test-node-id",
graph_exec_id="test-exec-id",
node_exec_id="test-node-exec-id",
user_id="test-user-id",
graph_version=1,
execution_context=mock_execution_context,
execution_processor=mock_execution_processor,
):
outputs[output_name] = output_data
assert "finished" in outputs
assert outputs["finished"] == "Test response"
@pytest.mark.asyncio
async def test_smart_decision_maker_agent_mode():
"""Test that agent mode executes tools directly and loops until finished."""
import backend.blocks.llm as llm_module
from backend.blocks.smart_decision_maker import SmartDecisionMakerBlock
block = SmartDecisionMakerBlock()
# Mock tool call that requires multiple iterations
mock_tool_call_1 = MagicMock()
mock_tool_call_1.id = "call_1"
mock_tool_call_1.function.name = "search_keywords"
mock_tool_call_1.function.arguments = (
'{"query": "test", "max_keyword_difficulty": 50}'
)
mock_response_1 = MagicMock()
mock_response_1.response = None
mock_response_1.tool_calls = [mock_tool_call_1]
mock_response_1.prompt_tokens = 50
mock_response_1.completion_tokens = 25
mock_response_1.reasoning = "Using search tool"
mock_response_1.raw_response = {
"role": "assistant",
"content": None,
"tool_calls": [{"id": "call_1", "type": "function"}],
}
# Final response with no tool calls (finished)
mock_response_2 = MagicMock()
mock_response_2.response = "Task completed successfully"
mock_response_2.tool_calls = []
mock_response_2.prompt_tokens = 30
mock_response_2.completion_tokens = 15
mock_response_2.reasoning = None
mock_response_2.raw_response = {
"role": "assistant",
"content": "Task completed successfully",
}
# Mock the LLM call to return different responses on each iteration
llm_call_mock = AsyncMock()
llm_call_mock.side_effect = [mock_response_1, mock_response_2]
# Mock tool node signatures
mock_tool_signatures = [
{
"type": "function",
"function": {
"name": "search_keywords",
"_sink_node_id": "test-sink-node-id",
"_field_mapping": {},
"parameters": {
"properties": {
"query": {"type": "string"},
"max_keyword_difficulty": {"type": "integer"},
},
"required": ["query", "max_keyword_difficulty"],
},
},
}
]
# Mock database and execution components
mock_db_client = AsyncMock()
mock_node = MagicMock()
mock_node.block_id = "test-block-id"
mock_db_client.get_node.return_value = mock_node
# Mock upsert_execution_input to return proper NodeExecutionResult and input data
mock_node_exec_result = MagicMock()
mock_node_exec_result.node_exec_id = "test-tool-exec-id"
mock_input_data = {"query": "test", "max_keyword_difficulty": 50}
mock_db_client.upsert_execution_input.return_value = (
mock_node_exec_result,
mock_input_data,
)
# No longer need mock_execute_node since we use execution_processor.on_node_execution
with patch("backend.blocks.llm.llm_call", llm_call_mock), patch.object(
block, "_create_tool_node_signatures", return_value=mock_tool_signatures
), patch(
"backend.blocks.smart_decision_maker.get_database_manager_async_client",
return_value=mock_db_client,
), patch(
"backend.executor.manager.async_update_node_execution_status",
new_callable=AsyncMock,
), patch(
"backend.integrations.creds_manager.IntegrationCredentialsManager"
):
# Create a mock execution context
mock_execution_context = ExecutionContext(
safe_mode=False,
)
# Create a mock execution processor for agent mode tests
mock_execution_processor = AsyncMock()
# Configure the execution processor mock with required attributes
mock_execution_processor.running_node_execution = defaultdict(MagicMock)
mock_execution_processor.execution_stats = MagicMock()
mock_execution_processor.execution_stats_lock = threading.Lock()
# Mock the on_node_execution method to return successful stats
mock_node_stats = MagicMock()
mock_node_stats.error = None # No error
mock_execution_processor.on_node_execution = AsyncMock(
return_value=mock_node_stats
)
# Mock the get_execution_outputs_by_node_exec_id method
mock_db_client.get_execution_outputs_by_node_exec_id.return_value = {
"result": {"status": "success", "data": "search completed"}
}
# Test agent mode with max_iterations = 3
input_data = SmartDecisionMakerBlock.Input(
prompt="Complete this task using tools",
model=llm_module.LlmModel.GPT4O,
credentials=llm_module.TEST_CREDENTIALS_INPUT, # type: ignore
agent_mode_max_iterations=3, # Enable agent mode with 3 max iterations
)
outputs = {}
@@ -723,8 +945,115 @@ async def test_smart_decision_maker_raw_response_conversion():
graph_exec_id="test-exec-id",
node_exec_id="test-node-exec-id",
user_id="test-user-id",
graph_version=1,
execution_context=mock_execution_context,
execution_processor=mock_execution_processor,
):
outputs[output_name] = output_data
# Verify agent mode behavior
assert "tool_functions" in outputs # tool_functions is yielded in both modes
assert "finished" in outputs
assert outputs["finished"] == "Test response"
assert outputs["finished"] == "Task completed successfully"
assert "conversations" in outputs
# Verify the conversation includes tool responses
conversations = outputs["conversations"]
assert len(conversations) > 2 # Should have multiple conversation entries
# Verify LLM was called twice (once for tool call, once for finish)
assert llm_call_mock.call_count == 2
# Verify tool was executed via execution processor
assert mock_execution_processor.on_node_execution.call_count == 1
@pytest.mark.asyncio
async def test_smart_decision_maker_traditional_mode_default():
"""Test that default behavior (agent_mode_max_iterations=0) works as traditional mode."""
import backend.blocks.llm as llm_module
from backend.blocks.smart_decision_maker import SmartDecisionMakerBlock
block = SmartDecisionMakerBlock()
# Mock tool call
mock_tool_call = MagicMock()
mock_tool_call.function.name = "search_keywords"
mock_tool_call.function.arguments = (
'{"query": "test", "max_keyword_difficulty": 50}'
)
mock_response = MagicMock()
mock_response.response = None
mock_response.tool_calls = [mock_tool_call]
mock_response.prompt_tokens = 50
mock_response.completion_tokens = 25
mock_response.reasoning = None
mock_response.raw_response = {"role": "assistant", "content": None}
mock_tool_signatures = [
{
"type": "function",
"function": {
"name": "search_keywords",
"_sink_node_id": "test-sink-node-id",
"_field_mapping": {},
"parameters": {
"properties": {
"query": {"type": "string"},
"max_keyword_difficulty": {"type": "integer"},
},
"required": ["query", "max_keyword_difficulty"],
},
},
}
]
with patch(
"backend.blocks.llm.llm_call",
new_callable=AsyncMock,
return_value=mock_response,
), patch.object(
block, "_create_tool_node_signatures", return_value=mock_tool_signatures
):
# Test default behavior (traditional mode)
input_data = SmartDecisionMakerBlock.Input(
prompt="Test prompt",
model=llm_module.LlmModel.GPT4O,
credentials=llm_module.TEST_CREDENTIALS_INPUT, # type: ignore
agent_mode_max_iterations=0, # Traditional mode
)
# Create execution context
mock_execution_context = ExecutionContext(safe_mode=False)
# Create a mock execution processor for tests
mock_execution_processor = MagicMock()
outputs = {}
async for output_name, output_data in block.run(
input_data,
credentials=llm_module.TEST_CREDENTIALS,
graph_id="test-graph-id",
node_id="test-node-id",
graph_exec_id="test-exec-id",
node_exec_id="test-node-exec-id",
user_id="test-user-id",
graph_version=1,
execution_context=mock_execution_context,
execution_processor=mock_execution_processor,
):
outputs[output_name] = output_data
# Verify traditional mode behavior
assert (
"tool_functions" in outputs
) # Should yield tool_functions in traditional mode
assert (
"tools_^_test-sink-node-id_~_query" in outputs
) # Should yield individual tool parameters
assert "tools_^_test-sink-node-id_~_max_keyword_difficulty" in outputs
assert "conversations" in outputs

View File

@@ -1,7 +1,7 @@
"""Comprehensive tests for SmartDecisionMakerBlock dynamic field handling."""
import json
from unittest.mock import AsyncMock, Mock, patch
from unittest.mock import AsyncMock, MagicMock, Mock, patch
import pytest
@@ -308,10 +308,47 @@ async def test_output_yielding_with_dynamic_fields():
) as mock_llm:
mock_llm.return_value = mock_response
# Mock the function signature creation
with patch.object(
# Mock the database manager to avoid HTTP calls during tool execution
with patch(
"backend.blocks.smart_decision_maker.get_database_manager_async_client"
) as mock_db_manager, patch.object(
block, "_create_tool_node_signatures", new_callable=AsyncMock
) as mock_sig:
# Set up the mock database manager
mock_db_client = AsyncMock()
mock_db_manager.return_value = mock_db_client
# Mock the node retrieval
mock_target_node = Mock()
mock_target_node.id = "test-sink-node-id"
mock_target_node.block_id = "CreateDictionaryBlock"
mock_target_node.block = Mock()
mock_target_node.block.name = "Create Dictionary"
mock_db_client.get_node.return_value = mock_target_node
# Mock the execution result creation
mock_node_exec_result = Mock()
mock_node_exec_result.node_exec_id = "mock-node-exec-id"
mock_final_input_data = {
"values_#_name": "Alice",
"values_#_age": 30,
"values_#_email": "alice@example.com",
}
mock_db_client.upsert_execution_input.return_value = (
mock_node_exec_result,
mock_final_input_data,
)
# Mock the output retrieval
mock_outputs = {
"values_#_name": "Alice",
"values_#_age": 30,
"values_#_email": "alice@example.com",
}
mock_db_client.get_execution_outputs_by_node_exec_id.return_value = (
mock_outputs
)
mock_sig.return_value = [
{
"type": "function",
@@ -337,10 +374,16 @@ async def test_output_yielding_with_dynamic_fields():
prompt="Create a user dictionary",
credentials=llm.TEST_CREDENTIALS_INPUT,
model=llm.LlmModel.GPT4O,
agent_mode_max_iterations=0, # Use traditional mode to test output yielding
)
# Run the block
outputs = {}
from backend.data.execution import ExecutionContext
mock_execution_context = ExecutionContext(safe_mode=False)
mock_execution_processor = MagicMock()
async for output_name, output_value in block.run(
input_data,
credentials=llm.TEST_CREDENTIALS,
@@ -349,6 +392,9 @@ async def test_output_yielding_with_dynamic_fields():
graph_exec_id="test_exec",
node_exec_id="test_node_exec",
user_id="test_user",
graph_version=1,
execution_context=mock_execution_context,
execution_processor=mock_execution_processor,
):
outputs[output_name] = output_value
@@ -511,45 +557,108 @@ async def test_validation_errors_dont_pollute_conversation():
}
]
# Create input data
from backend.blocks import llm
# Mock the database manager to avoid HTTP calls during tool execution
with patch(
"backend.blocks.smart_decision_maker.get_database_manager_async_client"
) as mock_db_manager:
# Set up the mock database manager for agent mode
mock_db_client = AsyncMock()
mock_db_manager.return_value = mock_db_client
input_data = block.input_schema(
prompt="Test prompt",
credentials=llm.TEST_CREDENTIALS_INPUT,
model=llm.LlmModel.GPT4O,
retry=3, # Allow retries
)
# Mock the node retrieval
mock_target_node = Mock()
mock_target_node.id = "test-sink-node-id"
mock_target_node.block_id = "TestBlock"
mock_target_node.block = Mock()
mock_target_node.block.name = "Test Block"
mock_db_client.get_node.return_value = mock_target_node
# Run the block
outputs = {}
async for output_name, output_value in block.run(
input_data,
credentials=llm.TEST_CREDENTIALS,
graph_id="test_graph",
node_id="test_node",
graph_exec_id="test_exec",
node_exec_id="test_node_exec",
user_id="test_user",
):
outputs[output_name] = output_value
# Mock the execution result creation
mock_node_exec_result = Mock()
mock_node_exec_result.node_exec_id = "mock-node-exec-id"
mock_final_input_data = {"correct_param": "value"}
mock_db_client.upsert_execution_input.return_value = (
mock_node_exec_result,
mock_final_input_data,
)
# Verify we had 2 LLM calls (initial + retry)
assert call_count == 2
# Mock the output retrieval
mock_outputs = {"correct_param": "value"}
mock_db_client.get_execution_outputs_by_node_exec_id.return_value = (
mock_outputs
)
# Check the final conversation output
final_conversation = outputs.get("conversations", [])
# Create input data
from backend.blocks import llm
# The final conversation should NOT contain the validation error message
error_messages = [
msg
for msg in final_conversation
if msg.get("role") == "user"
and "parameter errors" in msg.get("content", "")
]
assert (
len(error_messages) == 0
), "Validation error leaked into final conversation"
input_data = block.input_schema(
prompt="Test prompt",
credentials=llm.TEST_CREDENTIALS_INPUT,
model=llm.LlmModel.GPT4O,
retry=3, # Allow retries
agent_mode_max_iterations=1,
)
# The final conversation should only have the successful response
assert final_conversation[-1]["content"] == "valid"
# Run the block
outputs = {}
from backend.data.execution import ExecutionContext
mock_execution_context = ExecutionContext(safe_mode=False)
# Create a proper mock execution processor for agent mode
from collections import defaultdict
mock_execution_processor = AsyncMock()
mock_execution_processor.execution_stats = MagicMock()
mock_execution_processor.execution_stats_lock = MagicMock()
# Create a mock NodeExecutionProgress for the sink node
mock_node_exec_progress = MagicMock()
mock_node_exec_progress.add_task = MagicMock()
mock_node_exec_progress.pop_output = MagicMock(
return_value=None
) # No outputs to process
# Set up running_node_execution as a defaultdict that returns our mock for any key
mock_execution_processor.running_node_execution = defaultdict(
lambda: mock_node_exec_progress
)
# Mock the on_node_execution method that gets called during tool execution
mock_node_stats = MagicMock()
mock_node_stats.error = None
mock_execution_processor.on_node_execution.return_value = (
mock_node_stats
)
async for output_name, output_value in block.run(
input_data,
credentials=llm.TEST_CREDENTIALS,
graph_id="test_graph",
node_id="test_node",
graph_exec_id="test_exec",
node_exec_id="test_node_exec",
user_id="test_user",
graph_version=1,
execution_context=mock_execution_context,
execution_processor=mock_execution_processor,
):
outputs[output_name] = output_value
# Verify we had at least 1 LLM call
assert call_count >= 1
# Check the final conversation output
final_conversation = outputs.get("conversations", [])
# The final conversation should NOT contain validation error messages
# Even if retries don't happen in agent mode, we should not leak errors
error_messages = [
msg
for msg in final_conversation
if msg.get("role") == "user"
and "parameter errors" in msg.get("content", "")
]
assert (
len(error_messages) == 0
), "Validation error leaked into final conversation"

View File

@@ -14,7 +14,7 @@ from backend.data.block import (
BlockSchemaInput,
BlockSchemaOutput,
)
from backend.data.execution import UserContext
from backend.data.execution import ExecutionContext
from backend.data.model import SchemaField
# Shared timezone literal type for all time/date blocks
@@ -188,10 +188,9 @@ class GetCurrentTimeBlock(Block):
)
async def run(
self, input_data: Input, *, user_context: UserContext, **kwargs
self, input_data: Input, *, execution_context: ExecutionContext, **kwargs
) -> BlockOutput:
# Extract timezone from user_context (always present)
effective_timezone = user_context.timezone
effective_timezone = execution_context.user_timezone
# Get the appropriate timezone
tz = _get_timezone(input_data.format_type, effective_timezone)
@@ -298,10 +297,10 @@ class GetCurrentDateBlock(Block):
],
)
async def run(self, input_data: Input, **kwargs) -> BlockOutput:
# Extract timezone from user_context (required keyword argument)
user_context: UserContext = kwargs["user_context"]
effective_timezone = user_context.timezone
async def run(
self, input_data: Input, *, execution_context: ExecutionContext, **kwargs
) -> BlockOutput:
effective_timezone = execution_context.user_timezone
try:
offset = int(input_data.offset)
@@ -404,10 +403,10 @@ class GetCurrentDateAndTimeBlock(Block):
],
)
async def run(self, input_data: Input, **kwargs) -> BlockOutput:
# Extract timezone from user_context (required keyword argument)
user_context: UserContext = kwargs["user_context"]
effective_timezone = user_context.timezone
async def run(
self, input_data: Input, *, execution_context: ExecutionContext, **kwargs
) -> BlockOutput:
effective_timezone = execution_context.user_timezone
# Get the appropriate timezone
tz = _get_timezone(input_data.format_type, effective_timezone)
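A minimal sketch of the updated contract these blocks now share; the block name and body are hypothetical, and it assumes execution_context.user_timezone holds an IANA timezone name as used by _get_timezone above:

from zoneinfo import ZoneInfo

from backend.data.block import BlockOutput
from backend.data.execution import ExecutionContext


class ExampleTimezoneAwareBlock:  # hypothetical block, for illustration only
    async def run(
        self, input_data, *, execution_context: ExecutionContext, **kwargs
    ) -> BlockOutput:
        # The user's timezone now travels on ExecutionContext instead of UserContext
        tz = ZoneInfo(execution_context.user_timezone)
        yield "timezone_name", str(tz)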

View File

@@ -1,4 +1,4 @@
from datetime import datetime
from datetime import datetime, timedelta, timezone
from typing import Any, Dict
from backend.blocks.twitter._mappers import (
@@ -237,6 +237,12 @@ class TweetDurationBuilder:
def add_start_time(self, start_time: datetime | None):
if start_time:
# Twitter API requires start_time to be at least 10 seconds before now
max_start_time = datetime.now(timezone.utc) - timedelta(seconds=10)
if start_time.tzinfo is None:
start_time = start_time.replace(tzinfo=timezone.utc)
if start_time > max_start_time:
start_time = max_start_time
self.params["start_time"] = start_time
return self
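A small self-contained sketch (illustration only, not part of the diff) of the clamping rule added above, showing how a naive timestamp set to "now" gets pushed back to 10 seconds ago:

from datetime import datetime, timedelta, timezone

start_time = datetime.utcnow()  # naive datetime, effectively "right now"
max_start_time = datetime.now(timezone.utc) - timedelta(seconds=10)

if start_time.tzinfo is None:
    start_time = start_time.replace(tzinfo=timezone.utc)  # assume UTC, as above
if start_time > max_start_time:
    start_time = max_start_time  # clamped to 10 seconds before now

assert start_time == max_start_time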

View File

@@ -51,8 +51,10 @@ class ResponseDataSerializer(BaseSerializer):
return serialized_item
@classmethod
def serialize_list(cls, data: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
def serialize_list(cls, data: List[Dict[str, Any]] | None) -> List[Dict[str, Any]]:
"""Serializes a list of dictionary items"""
if not data:
return []
return [cls.serialize_dict(item) for item in data]

View File

@@ -408,7 +408,7 @@ class ListExpansionInputs(BlockSchemaInput):
class TweetTimeWindowInputs(BlockSchemaInput):
start_time: datetime | None = SchemaField(
description="Start time in YYYY-MM-DDTHH:mm:ssZ format",
description="Start time in YYYY-MM-DDTHH:mm:ssZ format. If set to a time less than 10 seconds ago, it will be automatically adjusted to 10 seconds ago (Twitter API requirement).",
placeholder="Enter start time",
default=None,
advanced=False,

View File

@@ -1,9 +1,13 @@
import logging
from typing import Literal
from urllib.parse import parse_qs, urlparse
from pydantic import SecretStr
from youtube_transcript_api._api import YouTubeTranscriptApi
from youtube_transcript_api._errors import NoTranscriptFound
from youtube_transcript_api._transcripts import FetchedTranscript
from youtube_transcript_api.formatters import TextFormatter
from youtube_transcript_api.proxies import WebshareProxyConfig
from backend.data.block import (
Block,
@@ -12,7 +16,42 @@ from backend.data.block import (
BlockSchemaInput,
BlockSchemaOutput,
)
from backend.data.model import SchemaField
from backend.data.model import (
CredentialsField,
CredentialsMetaInput,
SchemaField,
UserPasswordCredentials,
)
from backend.integrations.providers import ProviderName
logger = logging.getLogger(__name__)
TEST_CREDENTIALS = UserPasswordCredentials(
id="01234567-89ab-cdef-0123-456789abcdef",
provider="webshare_proxy",
username=SecretStr("mock-webshare-username"),
password=SecretStr("mock-webshare-password"),
title="Mock Webshare Proxy credentials",
)
TEST_CREDENTIALS_INPUT = {
"provider": TEST_CREDENTIALS.provider,
"id": TEST_CREDENTIALS.id,
"type": TEST_CREDENTIALS.type,
"title": TEST_CREDENTIALS.title,
}
WebshareProxyCredentials = UserPasswordCredentials
WebshareProxyCredentialsInput = CredentialsMetaInput[
Literal[ProviderName.WEBSHARE_PROXY],
Literal["user_password"],
]
def WebshareProxyCredentialsField() -> WebshareProxyCredentialsInput:
return CredentialsField(
description="Webshare proxy credentials for fetching YouTube transcripts",
)
class TranscribeYoutubeVideoBlock(Block):
@@ -22,6 +61,7 @@ class TranscribeYoutubeVideoBlock(Block):
description="The URL of the YouTube video to transcribe",
placeholder="https://www.youtube.com/watch?v=dQw4w9WgXcQ",
)
credentials: WebshareProxyCredentialsInput = WebshareProxyCredentialsField()
class Output(BlockSchemaOutput):
video_id: str = SchemaField(description="The extracted YouTube video ID")
@@ -35,9 +75,12 @@ class TranscribeYoutubeVideoBlock(Block):
id="f3a8f7e1-4b1d-4e5f-9f2a-7c3d5a2e6b4c",
input_schema=TranscribeYoutubeVideoBlock.Input,
output_schema=TranscribeYoutubeVideoBlock.Output,
description="Transcribes a YouTube video.",
description="Transcribes a YouTube video using a proxy.",
categories={BlockCategory.SOCIAL},
test_input={"youtube_url": "https://www.youtube.com/watch?v=dQw4w9WgXcQ"},
test_input={
"youtube_url": "https://www.youtube.com/watch?v=dQw4w9WgXcQ",
"credentials": TEST_CREDENTIALS_INPUT,
},
test_output=[
("video_id", "dQw4w9WgXcQ"),
(
@@ -45,8 +88,9 @@ class TranscribeYoutubeVideoBlock(Block):
"Never gonna give you up\nNever gonna let you down",
),
],
test_credentials=TEST_CREDENTIALS,
test_mock={
"get_transcript": lambda video_id: [
"get_transcript": lambda video_id, credentials: [
{"text": "Never gonna give you up"},
{"text": "Never gonna let you down"},
],
@@ -69,16 +113,27 @@ class TranscribeYoutubeVideoBlock(Block):
return parsed_url.path.split("/")[2]
raise ValueError(f"Invalid YouTube URL: {url}")
@staticmethod
def get_transcript(video_id: str) -> FetchedTranscript:
def get_transcript(
self, video_id: str, credentials: WebshareProxyCredentials
) -> FetchedTranscript:
"""
Get transcript for a video, preferring English but falling back to any available language.
:param video_id: The YouTube video ID
:param credentials: The Webshare proxy credentials
:return: The fetched transcript
:raises: Any exception except NoTranscriptFound for requested languages
"""
api = YouTubeTranscriptApi()
logger.warning(
"Using Webshare proxy for YouTube transcript fetch (video_id=%s)",
video_id,
)
proxy_config = WebshareProxyConfig(
proxy_username=credentials.username.get_secret_value(),
proxy_password=credentials.password.get_secret_value(),
)
api = YouTubeTranscriptApi(proxy_config=proxy_config)
try:
# Try to get English transcript first (default behavior)
return api.fetch(video_id=video_id)
@@ -101,11 +156,17 @@ class TranscribeYoutubeVideoBlock(Block):
transcript_text = formatter.format_transcript(transcript)
return transcript_text
async def run(self, input_data: Input, **kwargs) -> BlockOutput:
async def run(
self,
input_data: Input,
*,
credentials: WebshareProxyCredentials,
**kwargs,
) -> BlockOutput:
video_id = self.extract_video_id(input_data.youtube_url)
yield "video_id", video_id
transcript = self.get_transcript(video_id)
transcript = self.get_transcript(video_id, credentials)
transcript_text = self.format_transcript(transcript=transcript)
yield "transcript", transcript_text

View File

@@ -5,6 +5,8 @@ from datetime import datetime
from faker import Faker
from prisma import Prisma
from backend.data.db import query_raw_with_schema
faker = Faker()
@@ -15,9 +17,9 @@ async def check_cron_job(db):
try:
# Check if pg_cron extension exists
extension_check = await db.query_raw("CREATE EXTENSION pg_cron;")
extension_check = await query_raw_with_schema("CREATE EXTENSION pg_cron;")
print(extension_check)
extension_check = await db.query_raw(
extension_check = await query_raw_with_schema(
"SELECT COUNT(*) as count FROM pg_extension WHERE extname = 'pg_cron'"
)
if extension_check[0]["count"] == 0:
@@ -25,7 +27,7 @@ async def check_cron_job(db):
return False
# Check if the refresh job exists
job_check = await db.query_raw(
job_check = await query_raw_with_schema(
"""
SELECT jobname, schedule, command
FROM cron.job
@@ -55,33 +57,33 @@ async def get_materialized_view_counts(db):
print("-" * 40)
# Get counts from mv_agent_run_counts
agent_runs = await db.query_raw(
agent_runs = await query_raw_with_schema(
"""
SELECT COUNT(*) as total_agents,
SUM(run_count) as total_runs,
MAX(run_count) as max_runs,
MIN(run_count) as min_runs
FROM mv_agent_run_counts
FROM {schema_prefix}mv_agent_run_counts
"""
)
# Get counts from mv_review_stats
review_stats = await db.query_raw(
review_stats = await query_raw_with_schema(
"""
SELECT COUNT(*) as total_listings,
SUM(review_count) as total_reviews,
AVG(avg_rating) as overall_avg_rating
FROM mv_review_stats
FROM {schema_prefix}mv_review_stats
"""
)
# Get sample data from StoreAgent view
store_agents = await db.query_raw(
store_agents = await query_raw_with_schema(
"""
SELECT COUNT(*) as total_store_agents,
AVG(runs) as avg_runs,
AVG(rating) as avg_rating
FROM "StoreAgent"
FROM {schema_prefix}"StoreAgent"
"""
)

View File

@@ -5,6 +5,8 @@ import asyncio
from prisma import Prisma
from backend.data.db import query_raw_with_schema
async def check_store_data(db):
"""Check what store data exists in the database."""
@@ -89,11 +91,11 @@ async def check_store_data(db):
sa.creator_username,
sa.categories,
sa.updated_at
FROM "StoreAgent" sa
FROM {schema_prefix}"StoreAgent" sa
LIMIT 10;
"""
store_agents = await db.query_raw(query)
store_agents = await query_raw_with_schema(query)
print(f"Total store agents in view: {len(store_agents)}")
if store_agents:
@@ -111,22 +113,22 @@ async def check_store_data(db):
# Check for any APPROVED store listing versions
query = """
SELECT COUNT(*) as count
FROM "StoreListingVersion"
FROM {schema_prefix}"StoreListingVersion"
WHERE "submissionStatus" = 'APPROVED'
"""
result = await db.query_raw(query)
result = await query_raw_with_schema(query)
approved_count = result[0]["count"] if result else 0
print(f"Approved store listing versions: {approved_count}")
# Check for store listings with hasApprovedVersion = true
query = """
SELECT COUNT(*) as count
FROM "StoreListing"
FROM {schema_prefix}"StoreListing"
WHERE "hasApprovedVersion" = true AND "isDeleted" = false
"""
result = await db.query_raw(query)
result = await query_raw_with_schema(query)
has_approved_count = result[0]["count"] if result else 0
print(f"Store listings with approved versions: {has_approved_count}")
@@ -134,10 +136,10 @@ async def check_store_data(db):
query = """
SELECT COUNT(DISTINCT "agentGraphId") as unique_agents,
COUNT(*) as total_executions
FROM "AgentGraphExecution"
FROM {schema_prefix}"AgentGraphExecution"
"""
result = await db.query_raw(query)
result = await query_raw_with_schema(query)
if result:
print("\nAgent Graph Executions:")
print(f" Unique agents with executions: {result[0]['unique_agents']}")

View File

@@ -0,0 +1 @@
"""CLI utilities for backend development & administration"""

View File

@@ -0,0 +1,57 @@
#!/usr/bin/env python3
"""
Script to generate OpenAPI JSON specification for the FastAPI app.
This script imports the FastAPI app from backend.server.rest_api and outputs
the OpenAPI specification as JSON to stdout or a specified file.
Usage:
`poetry run python generate_openapi_json.py`
`poetry run python generate_openapi_json.py --output openapi.json`
`poetry run python generate_openapi_json.py --indent 4 --output openapi.json`
"""
import json
import os
from pathlib import Path
import click
@click.command()
@click.option(
"--output",
type=click.Path(dir_okay=False, path_type=Path),
help="Output file path (default: stdout)",
)
@click.option(
"--pretty",
type=click.BOOL,
default=False,
help="Pretty-print JSON output (indented 2 spaces)",
)
def main(output: Path, pretty: bool):
"""Generate and output the OpenAPI JSON specification."""
openapi_schema = get_openapi_schema()
json_output = json.dumps(openapi_schema, indent=2 if pretty else None)
if output:
output.write_text(json_output)
click.echo(f"✅ OpenAPI specification written to {output}\n\nPreview:")
click.echo(f"\n{json_output[:500]} ...")
else:
print(json_output)
def get_openapi_schema():
"""Get the OpenAPI schema from the FastAPI app"""
from backend.server.rest_api import app
return app.openapi()
if __name__ == "__main__":
os.environ["LOG_LEVEL"] = "ERROR" # disable stdout log output
main()

File diff suppressed because it is too large

View File

@@ -1,12 +1,45 @@
import logging
from datetime import datetime, timedelta, timezone
from typing import Optional
import prisma.types
from pydantic import BaseModel
from backend.data.db import query_raw_with_schema
from backend.util.json import SafeJson
logger = logging.getLogger(__name__)
class AccuracyAlertData(BaseModel):
"""Alert data when accuracy drops significantly."""
graph_id: str
user_id: Optional[str]
drop_percent: float
three_day_avg: float
seven_day_avg: float
detected_at: datetime
class AccuracyLatestData(BaseModel):
"""Latest execution accuracy data point."""
date: datetime
daily_score: Optional[float]
three_day_avg: Optional[float]
seven_day_avg: Optional[float]
fourteen_day_avg: Optional[float]
class AccuracyTrendsResponse(BaseModel):
"""Response model for accuracy trends and alerts."""
latest_data: AccuracyLatestData
alert: Optional[AccuracyAlertData]
historical_data: Optional[list[AccuracyLatestData]] = None
async def log_raw_analytics(
user_id: str,
type: str,
@@ -43,3 +76,217 @@ async def log_raw_metric(
)
return result
async def get_accuracy_trends_and_alerts(
graph_id: str,
days_back: int = 30,
user_id: Optional[str] = None,
drop_threshold: float = 10.0,
include_historical: bool = False,
) -> AccuracyTrendsResponse:
"""Get accuracy trends and detect alerts for a specific graph."""
query_template = """
WITH daily_scores AS (
SELECT
DATE(e."createdAt") as execution_date,
AVG(CASE
WHEN e.stats IS NOT NULL
AND e.stats::json->>'correctness_score' IS NOT NULL
AND e.stats::json->>'correctness_score' != 'null'
THEN (e.stats::json->>'correctness_score')::float * 100
ELSE NULL
END) as daily_score
FROM {schema_prefix}"AgentGraphExecution" e
WHERE e."agentGraphId" = $1::text
AND e."isDeleted" = false
AND e."createdAt" >= $2::timestamp
AND e."executionStatus" IN ('COMPLETED', 'FAILED', 'TERMINATED')
{user_filter}
GROUP BY DATE(e."createdAt")
HAVING COUNT(*) >= 3 -- Need at least 3 executions per day
),
trends AS (
SELECT
execution_date,
daily_score,
AVG(daily_score) OVER (
ORDER BY execution_date
ROWS BETWEEN 2 PRECEDING AND CURRENT ROW
) as three_day_avg,
AVG(daily_score) OVER (
ORDER BY execution_date
ROWS BETWEEN 6 PRECEDING AND CURRENT ROW
) as seven_day_avg,
AVG(daily_score) OVER (
ORDER BY execution_date
ROWS BETWEEN 13 PRECEDING AND CURRENT ROW
) as fourteen_day_avg
FROM daily_scores
)
SELECT *,
CASE
WHEN three_day_avg IS NOT NULL AND seven_day_avg IS NOT NULL AND seven_day_avg > 0
THEN ((seven_day_avg - three_day_avg) / seven_day_avg * 100)
ELSE NULL
END as drop_percent
FROM trends
ORDER BY execution_date DESC
{limit_clause}
"""
start_date = datetime.now(timezone.utc) - timedelta(days=days_back)
params = [graph_id, start_date]
user_filter = ""
if user_id:
user_filter = 'AND e."userId" = $3::text'
params.append(user_id)
# Determine limit clause
limit_clause = "" if include_historical else "LIMIT 1"
final_query = query_template.format(
schema_prefix="{schema_prefix}",
user_filter=user_filter,
limit_clause=limit_clause,
)
result = await query_raw_with_schema(final_query, *params)
if not result:
return AccuracyTrendsResponse(
latest_data=AccuracyLatestData(
date=datetime.now(timezone.utc),
daily_score=None,
three_day_avg=None,
seven_day_avg=None,
fourteen_day_avg=None,
),
alert=None,
)
latest = result[0]
alert = None
if (
latest["drop_percent"] is not None
and latest["drop_percent"] >= drop_threshold
and latest["three_day_avg"] is not None
and latest["seven_day_avg"] is not None
):
alert = AccuracyAlertData(
graph_id=graph_id,
user_id=user_id,
drop_percent=float(latest["drop_percent"]),
three_day_avg=float(latest["three_day_avg"]),
seven_day_avg=float(latest["seven_day_avg"]),
detected_at=datetime.now(timezone.utc),
)
# Prepare historical data if requested
historical_data = None
if include_historical:
historical_data = []
for row in result:
historical_data.append(
AccuracyLatestData(
date=row["execution_date"],
daily_score=(
float(row["daily_score"])
if row["daily_score"] is not None
else None
),
three_day_avg=(
float(row["three_day_avg"])
if row["three_day_avg"] is not None
else None
),
seven_day_avg=(
float(row["seven_day_avg"])
if row["seven_day_avg"] is not None
else None
),
fourteen_day_avg=(
float(row["fourteen_day_avg"])
if row["fourteen_day_avg"] is not None
else None
),
)
)
return AccuracyTrendsResponse(
latest_data=AccuracyLatestData(
date=latest["execution_date"],
daily_score=(
float(latest["daily_score"])
if latest["daily_score"] is not None
else None
),
three_day_avg=(
float(latest["three_day_avg"])
if latest["three_day_avg"] is not None
else None
),
seven_day_avg=(
float(latest["seven_day_avg"])
if latest["seven_day_avg"] is not None
else None
),
fourteen_day_avg=(
float(latest["fourteen_day_avg"])
if latest["fourteen_day_avg"] is not None
else None
),
),
alert=alert,
historical_data=historical_data,
)
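# Worked example (illustrative only, values assumed): with seven_day_avg = 80.0 and
# three_day_avg = 68.0, drop_percent = (80.0 - 68.0) / 80.0 * 100 = 15.0, which
# exceeds the default drop_threshold of 10.0, so an AccuracyAlertData is returned.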
class MarketplaceGraphData(BaseModel):
"""Data structure for marketplace graph monitoring."""
graph_id: str
user_id: Optional[str]
execution_count: int
async def get_marketplace_graphs_for_monitoring(
days_back: int = 30,
min_executions: int = 10,
) -> list[MarketplaceGraphData]:
"""Get published marketplace graphs with recent executions for monitoring."""
query_template = """
WITH marketplace_graphs AS (
SELECT DISTINCT
slv."agentGraphId" as graph_id,
slv."agentGraphVersion" as graph_version
FROM {schema_prefix}"StoreListing" sl
JOIN {schema_prefix}"StoreListingVersion" slv ON sl."activeVersionId" = slv."id"
WHERE sl."hasApprovedVersion" = true
AND sl."isDeleted" = false
)
SELECT DISTINCT
mg.graph_id,
NULL as user_id, -- Marketplace graphs don't have a specific user_id for monitoring
COUNT(*) as execution_count
FROM marketplace_graphs mg
JOIN {schema_prefix}"AgentGraphExecution" e ON e."agentGraphId" = mg.graph_id
WHERE e."createdAt" >= $1::timestamp
AND e."isDeleted" = false
AND e."executionStatus" IN ('COMPLETED', 'FAILED', 'TERMINATED')
GROUP BY mg.graph_id
HAVING COUNT(*) >= $2
ORDER BY execution_count DESC
"""
start_date = datetime.now(timezone.utc) - timedelta(days=days_back)
result = await query_raw_with_schema(query_template, start_date, min_executions)
return [
MarketplaceGraphData(
graph_id=row["graph_id"],
user_id=row["user_id"],
execution_count=int(row["execution_count"]),
)
for row in result
]

View File

@@ -1,22 +1,24 @@
import logging
import uuid
from datetime import datetime, timezone
from typing import Optional
from typing import Literal, Optional, cast
from autogpt_libs.api_key.keysmith import APIKeySmith
from prisma.enums import APIKeyPermission, APIKeyStatus
from prisma.models import APIKey as PrismaAPIKey
from prisma.types import APIKeyWhereUniqueInput
from pydantic import BaseModel, Field
from prisma.types import APIKeyCreateInput, APIKeyWhereUniqueInput
from pydantic import Field
from backend.data.includes import MAX_USER_API_KEYS_FETCH
from backend.util.exceptions import NotAuthorizedError, NotFoundError
from .base import APIAuthorizationInfo
logger = logging.getLogger(__name__)
keysmith = APIKeySmith()
class APIKeyInfo(BaseModel):
class APIKeyInfo(APIAuthorizationInfo):
id: str
name: str
head: str = Field(
@@ -26,12 +28,9 @@ class APIKeyInfo(BaseModel):
description=f"The last {APIKeySmith.TAIL_LENGTH} characters of the key"
)
status: APIKeyStatus
permissions: list[APIKeyPermission]
created_at: datetime
last_used_at: Optional[datetime] = None
revoked_at: Optional[datetime] = None
description: Optional[str] = None
user_id: str
type: Literal["api_key"] = "api_key" # type: ignore
@staticmethod
def from_db(api_key: PrismaAPIKey):
@@ -41,7 +40,7 @@ class APIKeyInfo(BaseModel):
head=api_key.head,
tail=api_key.tail,
status=APIKeyStatus(api_key.status),
permissions=[APIKeyPermission(p) for p in api_key.permissions],
scopes=[APIKeyPermission(p) for p in api_key.permissions],
created_at=api_key.createdAt,
last_used_at=api_key.lastUsedAt,
revoked_at=api_key.revokedAt,
@@ -83,17 +82,20 @@ async def create_api_key(
generated_key = keysmith.generate_key()
saved_key_obj = await PrismaAPIKey.prisma().create(
data={
"id": str(uuid.uuid4()),
"name": name,
"head": generated_key.head,
"tail": generated_key.tail,
"hash": generated_key.hash,
"salt": generated_key.salt,
"permissions": [p for p in permissions],
"description": description,
"userId": user_id,
}
data=cast(
APIKeyCreateInput,
{
"id": str(uuid.uuid4()),
"name": name,
"head": generated_key.head,
"tail": generated_key.tail,
"hash": generated_key.hash,
"salt": generated_key.salt,
"permissions": [p for p in permissions],
"description": description,
"userId": user_id,
},
)
)
return APIKeyInfo.from_db(saved_key_obj), generated_key.key
@@ -211,7 +213,7 @@ async def suspend_api_key(key_id: str, user_id: str) -> APIKeyInfo:
def has_permission(api_key: APIKeyInfo, required_permission: APIKeyPermission) -> bool:
return required_permission in api_key.permissions
return required_permission in api_key.scopes
async def get_api_key_by_id(key_id: str, user_id: str) -> Optional[APIKeyInfo]:

View File

@@ -0,0 +1,15 @@
from datetime import datetime
from typing import Literal, Optional
from prisma.enums import APIKeyPermission
from pydantic import BaseModel
class APIAuthorizationInfo(BaseModel):
user_id: str
scopes: list[APIKeyPermission]
type: Literal["oauth", "api_key"]
created_at: datetime
expires_at: Optional[datetime] = None
last_used_at: Optional[datetime] = None
revoked_at: Optional[datetime] = None
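A minimal usage sketch (hypothetical helper; the import path is assumed) of how the shared base lets API-key and OAuth credentials be handled uniformly via the type discriminator:

from backend.data.auth.base import APIAuthorizationInfo  # import path assumed


def describe_authorization(auth: APIAuthorizationInfo) -> str:
    # auth.type is "api_key" for APIKeyInfo and "oauth" for OAuthAccessTokenInfo
    scopes = [scope.value for scope in auth.scopes]
    return f"{auth.type} credentials for user {auth.user_id}, scopes={scopes}"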

View File

@@ -0,0 +1,886 @@
"""
OAuth 2.0 Provider Data Layer
Handles management of OAuth applications, authorization codes,
access tokens, and refresh tokens.
Hashing strategy:
- Access tokens & Refresh tokens: SHA256 (deterministic, allows direct lookup by hash)
- Client secrets: Scrypt with salt (lookup by client_id, then verify with salt)
"""
import hashlib
import logging
import secrets
import uuid
from datetime import datetime, timedelta, timezone
from typing import Literal, Optional, cast
from autogpt_libs.api_key.keysmith import APIKeySmith
from prisma.enums import APIKeyPermission as APIPermission
from prisma.models import OAuthAccessToken as PrismaOAuthAccessToken
from prisma.models import OAuthApplication as PrismaOAuthApplication
from prisma.models import OAuthAuthorizationCode as PrismaOAuthAuthorizationCode
from prisma.models import OAuthRefreshToken as PrismaOAuthRefreshToken
from prisma.types import (
OAuthAccessTokenCreateInput,
OAuthApplicationUpdateInput,
OAuthAuthorizationCodeCreateInput,
OAuthRefreshTokenCreateInput,
)
from pydantic import BaseModel, Field, SecretStr
from .base import APIAuthorizationInfo
logger = logging.getLogger(__name__)
keysmith = APIKeySmith() # Only used for client secret hashing (Scrypt)
def _generate_token() -> str:
"""Generate a cryptographically secure random token."""
return secrets.token_urlsafe(32)
def _hash_token(token: str) -> str:
"""Hash a token using SHA256 (deterministic, for direct lookup)."""
return hashlib.sha256(token.encode()).hexdigest()
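# Illustrative sketch (assumption, not part of this module) of the two lookup paths
# described in the module docstring:
#
#   # access / refresh tokens: deterministic SHA256 -> direct lookup by hash
#   presented = "agpt_xt_example"  # hypothetical plaintext token
#   row = await PrismaOAuthAccessToken.prisma().find_unique(
#       where={"token": _hash_token(presented)}
#   )
#
#   # client secrets: salted hash -> find the app by client_id, then verify
#   app = await get_oauth_application_with_secret("client-id-example")
#   ok = app is not None and app.verify_secret("client-secret-example")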
# Token TTLs
AUTHORIZATION_CODE_TTL = timedelta(minutes=10)
ACCESS_TOKEN_TTL = timedelta(hours=1)
REFRESH_TOKEN_TTL = timedelta(days=30)
ACCESS_TOKEN_PREFIX = "agpt_xt_"
REFRESH_TOKEN_PREFIX = "agpt_rt_"
# ============================================================================
# Exception Classes
# ============================================================================
class OAuthError(Exception):
"""Base OAuth error"""
pass
class InvalidClientError(OAuthError):
"""Invalid client_id or client_secret"""
pass
class InvalidGrantError(OAuthError):
"""Invalid or expired authorization code/refresh token"""
def __init__(self, reason: str):
self.reason = reason
super().__init__(f"Invalid grant: {reason}")
class InvalidTokenError(OAuthError):
"""Invalid, expired, or revoked token"""
def __init__(self, reason: str):
self.reason = reason
super().__init__(f"Invalid token: {reason}")
# ============================================================================
# Data Models
# ============================================================================
class OAuthApplicationInfo(BaseModel):
"""OAuth application information (without client secret hash)"""
id: str
name: str
description: Optional[str] = None
logo_url: Optional[str] = None
client_id: str
redirect_uris: list[str]
grant_types: list[str]
scopes: list[APIPermission]
owner_id: str
is_active: bool
created_at: datetime
updated_at: datetime
@staticmethod
def from_db(app: PrismaOAuthApplication):
return OAuthApplicationInfo(
id=app.id,
name=app.name,
description=app.description,
logo_url=app.logoUrl,
client_id=app.clientId,
redirect_uris=app.redirectUris,
grant_types=app.grantTypes,
scopes=[APIPermission(s) for s in app.scopes],
owner_id=app.ownerId,
is_active=app.isActive,
created_at=app.createdAt,
updated_at=app.updatedAt,
)
class OAuthApplicationInfoWithSecret(OAuthApplicationInfo):
"""OAuth application with client secret hash (for validation)"""
client_secret_hash: str
client_secret_salt: str
@staticmethod
def from_db(app: PrismaOAuthApplication):
return OAuthApplicationInfoWithSecret(
**OAuthApplicationInfo.from_db(app).model_dump(),
client_secret_hash=app.clientSecret,
client_secret_salt=app.clientSecretSalt,
)
def verify_secret(self, plaintext_secret: str) -> bool:
"""Verify a plaintext client secret against the stored hash"""
# Use keysmith.verify_key() with stored salt
return keysmith.verify_key(
plaintext_secret, self.client_secret_hash, self.client_secret_salt
)
class OAuthAuthorizationCodeInfo(BaseModel):
"""Authorization code information"""
id: str
code: str
created_at: datetime
expires_at: datetime
application_id: str
user_id: str
scopes: list[APIPermission]
redirect_uri: str
code_challenge: Optional[str] = None
code_challenge_method: Optional[str] = None
used_at: Optional[datetime] = None
@property
def is_used(self) -> bool:
return self.used_at is not None
@staticmethod
def from_db(code: PrismaOAuthAuthorizationCode):
return OAuthAuthorizationCodeInfo(
id=code.id,
code=code.code,
created_at=code.createdAt,
expires_at=code.expiresAt,
application_id=code.applicationId,
user_id=code.userId,
scopes=[APIPermission(s) for s in code.scopes],
redirect_uri=code.redirectUri,
code_challenge=code.codeChallenge,
code_challenge_method=code.codeChallengeMethod,
used_at=code.usedAt,
)
class OAuthAccessTokenInfo(APIAuthorizationInfo):
"""Access token information"""
id: str
expires_at: datetime # type: ignore
application_id: str
type: Literal["oauth"] = "oauth" # type: ignore
@staticmethod
def from_db(token: PrismaOAuthAccessToken):
return OAuthAccessTokenInfo(
id=token.id,
user_id=token.userId,
scopes=[APIPermission(s) for s in token.scopes],
created_at=token.createdAt,
expires_at=token.expiresAt,
last_used_at=None,
revoked_at=token.revokedAt,
application_id=token.applicationId,
)
class OAuthAccessToken(OAuthAccessTokenInfo):
"""Access token with plaintext token included (sensitive)"""
token: SecretStr = Field(description="Plaintext token (sensitive)")
@staticmethod
def from_db(token: PrismaOAuthAccessToken, plaintext_token: str): # type: ignore
return OAuthAccessToken(
**OAuthAccessTokenInfo.from_db(token).model_dump(),
token=SecretStr(plaintext_token),
)
class OAuthRefreshTokenInfo(BaseModel):
"""Refresh token information"""
id: str
user_id: str
scopes: list[APIPermission]
created_at: datetime
expires_at: datetime
application_id: str
revoked_at: Optional[datetime] = None
@property
def is_revoked(self) -> bool:
return self.revoked_at is not None
@staticmethod
def from_db(token: PrismaOAuthRefreshToken):
return OAuthRefreshTokenInfo(
id=token.id,
user_id=token.userId,
scopes=[APIPermission(s) for s in token.scopes],
created_at=token.createdAt,
expires_at=token.expiresAt,
application_id=token.applicationId,
revoked_at=token.revokedAt,
)
class OAuthRefreshToken(OAuthRefreshTokenInfo):
"""Refresh token with plaintext token included (sensitive)"""
token: SecretStr = Field(description="Plaintext token (sensitive)")
@staticmethod
def from_db(token: PrismaOAuthRefreshToken, plaintext_token: str): # type: ignore
return OAuthRefreshToken(
**OAuthRefreshTokenInfo.from_db(token).model_dump(),
token=SecretStr(plaintext_token),
)
class TokenIntrospectionResult(BaseModel):
"""Result of token introspection (RFC 7662)"""
active: bool
scopes: Optional[list[str]] = None
client_id: Optional[str] = None
user_id: Optional[str] = None
exp: Optional[int] = None # Unix timestamp
token_type: Optional[Literal["access_token", "refresh_token"]] = None
# ============================================================================
# OAuth Application Management
# ============================================================================
async def get_oauth_application(client_id: str) -> Optional[OAuthApplicationInfo]:
"""Get OAuth application by client ID (without secret)"""
app = await PrismaOAuthApplication.prisma().find_unique(
where={"clientId": client_id}
)
if not app:
return None
return OAuthApplicationInfo.from_db(app)
async def get_oauth_application_with_secret(
client_id: str,
) -> Optional[OAuthApplicationInfoWithSecret]:
"""Get OAuth application by client ID (with secret hash for validation)"""
app = await PrismaOAuthApplication.prisma().find_unique(
where={"clientId": client_id}
)
if not app:
return None
return OAuthApplicationInfoWithSecret.from_db(app)
async def validate_client_credentials(
client_id: str, client_secret: str
) -> OAuthApplicationInfo:
"""
Validate client credentials and return application info.
Raises:
InvalidClientError: If client_id or client_secret is invalid, or app is inactive
"""
app = await get_oauth_application_with_secret(client_id)
if not app:
raise InvalidClientError("Invalid client_id")
if not app.is_active:
raise InvalidClientError("Application is not active")
# Verify client secret
if not app.verify_secret(client_secret):
raise InvalidClientError("Invalid client_secret")
# Return without secret hash
return OAuthApplicationInfo(**app.model_dump(exclude={"client_secret_hash", "client_secret_salt"}))
def validate_redirect_uri(app: OAuthApplicationInfo, redirect_uri: str) -> bool:
"""Validate that redirect URI is registered for the application"""
return redirect_uri in app.redirect_uris
def validate_scopes(
app: OAuthApplicationInfo, requested_scopes: list[APIPermission]
) -> bool:
"""Validate that all requested scopes are allowed for the application"""
return all(scope in app.scopes for scope in requested_scopes)
# ============================================================================
# Authorization Code Flow
# ============================================================================
def _generate_authorization_code() -> str:
"""Generate a cryptographically secure authorization code"""
# 32 bytes = 256 bits of entropy
return secrets.token_urlsafe(32)
async def create_authorization_code(
application_id: str,
user_id: str,
scopes: list[APIPermission],
redirect_uri: str,
code_challenge: Optional[str] = None,
code_challenge_method: Optional[Literal["S256", "plain"]] = None,
) -> OAuthAuthorizationCodeInfo:
"""
Create a new authorization code.
Expires in 10 minutes and can only be used once.
"""
code = _generate_authorization_code()
now = datetime.now(timezone.utc)
expires_at = now + AUTHORIZATION_CODE_TTL
saved_code = await PrismaOAuthAuthorizationCode.prisma().create(
data=cast(
OAuthAuthorizationCodeCreateInput,
{
"id": str(uuid.uuid4()),
"code": code,
"expiresAt": expires_at,
"applicationId": application_id,
"userId": user_id,
"scopes": [s for s in scopes],
"redirectUri": redirect_uri,
"codeChallenge": code_challenge,
"codeChallengeMethod": code_challenge_method,
},
)
)
return OAuthAuthorizationCodeInfo.from_db(saved_code)
async def consume_authorization_code(
code: str,
application_id: str,
redirect_uri: str,
code_verifier: Optional[str] = None,
) -> tuple[str, list[APIPermission]]:
"""
Consume an authorization code and return (user_id, scopes).
This marks the code as used and validates:
- Code exists and matches application
- Code is not expired
- Code has not been used
- Redirect URI matches
- PKCE code verifier matches (if code challenge was provided)
Raises:
InvalidGrantError: If code is invalid, expired, used, or PKCE fails
"""
auth_code = await PrismaOAuthAuthorizationCode.prisma().find_unique(
where={"code": code}
)
if not auth_code:
raise InvalidGrantError("authorization code not found")
# Validate application
if auth_code.applicationId != application_id:
raise InvalidGrantError(
"authorization code does not belong to this application"
)
# Check if already used
if auth_code.usedAt is not None:
raise InvalidGrantError(
f"authorization code already used at {auth_code.usedAt}"
)
# Check expiration
now = datetime.now(timezone.utc)
if auth_code.expiresAt < now:
raise InvalidGrantError("authorization code expired")
# Validate redirect URI
if auth_code.redirectUri != redirect_uri:
raise InvalidGrantError("redirect_uri mismatch")
# Validate PKCE if code challenge was provided
if auth_code.codeChallenge:
if not code_verifier:
raise InvalidGrantError("code_verifier required but not provided")
if not _verify_pkce(
code_verifier, auth_code.codeChallenge, auth_code.codeChallengeMethod
):
raise InvalidGrantError("PKCE verification failed")
# Mark code as used
await PrismaOAuthAuthorizationCode.prisma().update(
where={"code": code},
data={"usedAt": now},
)
return auth_code.userId, [APIPermission(s) for s in auth_code.scopes]
def _verify_pkce(
code_verifier: str, code_challenge: str, code_challenge_method: Optional[str]
) -> bool:
"""
Verify PKCE code verifier against code challenge.
Supports:
- S256: SHA256(code_verifier) == code_challenge
- plain: code_verifier == code_challenge
"""
if code_challenge_method == "S256":
# Hash the verifier with SHA256 and base64url encode
hashed = hashlib.sha256(code_verifier.encode("ascii")).digest()
import base64
computed_challenge = (
base64.urlsafe_b64encode(hashed).decode("ascii").rstrip("=")
)
return secrets.compare_digest(computed_challenge, code_challenge)
elif code_challenge_method == "plain" or code_challenge_method is None:
# Plain comparison
return secrets.compare_digest(code_verifier, code_challenge)
else:
logger.warning(f"Unsupported code challenge method: {code_challenge_method}")
return False
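# Illustrative sketch (not part of this module): how a client would generate a
# verifier/challenge pair that satisfies the S256 branch above.
#
#   import base64, hashlib, secrets
#
#   code_verifier = secrets.token_urlsafe(64)
#   code_challenge = (
#       base64.urlsafe_b64encode(hashlib.sha256(code_verifier.encode("ascii")).digest())
#       .decode("ascii")
#       .rstrip("=")
#   )
#   # _verify_pkce(code_verifier, code_challenge, "S256") -> True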
# ============================================================================
# Access Token Management
# ============================================================================
async def create_access_token(
application_id: str, user_id: str, scopes: list[APIPermission]
) -> OAuthAccessToken:
"""
Create a new access token.
Returns OAuthAccessToken (with plaintext token).
"""
plaintext_token = ACCESS_TOKEN_PREFIX + _generate_token()
token_hash = _hash_token(plaintext_token)
now = datetime.now(timezone.utc)
expires_at = now + ACCESS_TOKEN_TTL
saved_token = await PrismaOAuthAccessToken.prisma().create(
data=cast(
OAuthAccessTokenCreateInput,
{
"id": str(uuid.uuid4()),
"token": token_hash, # SHA256 hash for direct lookup
"expiresAt": expires_at,
"applicationId": application_id,
"userId": user_id,
"scopes": [s for s in scopes],
},
)
)
return OAuthAccessToken.from_db(saved_token, plaintext_token=plaintext_token)
async def validate_access_token(
token: str,
) -> tuple[OAuthAccessTokenInfo, OAuthApplicationInfo]:
"""
Validate an access token and return token info.
Raises:
InvalidTokenError: If token is invalid, expired, or revoked
InvalidClientError: If the client application is not marked as active
"""
token_hash = _hash_token(token)
# Direct lookup by hash
access_token = await PrismaOAuthAccessToken.prisma().find_unique(
where={"token": token_hash}, include={"Application": True}
)
if not access_token:
raise InvalidTokenError("access token not found")
if not access_token.Application: # should be impossible
raise InvalidClientError("Client application not found")
if not access_token.Application.isActive:
raise InvalidClientError("Client application is disabled")
if access_token.revokedAt is not None:
raise InvalidTokenError("access token has been revoked")
# Check expiration
now = datetime.now(timezone.utc)
if access_token.expiresAt < now:
raise InvalidTokenError("access token expired")
return (
OAuthAccessTokenInfo.from_db(access_token),
OAuthApplicationInfo.from_db(access_token.Application),
)
async def revoke_access_token(
token: str, application_id: str
) -> OAuthAccessTokenInfo | None:
"""
Revoke an access token.
Args:
token: The plaintext access token to revoke
application_id: The application ID making the revocation request.
Only tokens belonging to this application will be revoked.
Returns:
OAuthAccessTokenInfo if token was found and revoked, None otherwise.
Note:
Always performs exactly 2 DB queries regardless of outcome to prevent
timing side-channel attacks that could reveal token existence.
"""
try:
token_hash = _hash_token(token)
# Use update_many to filter by both token and applicationId
updated_count = await PrismaOAuthAccessToken.prisma().update_many(
where={
"token": token_hash,
"applicationId": application_id,
"revokedAt": None,
},
data={"revokedAt": datetime.now(timezone.utc)},
)
# Always perform second query to ensure constant time
result = await PrismaOAuthAccessToken.prisma().find_unique(
where={"token": token_hash}
)
# Only return result if we actually revoked something
if updated_count == 0:
return None
return OAuthAccessTokenInfo.from_db(result) if result else None
except Exception as e:
logger.exception(f"Error revoking access token: {e}")
return None
# ============================================================================
# Refresh Token Management
# ============================================================================
async def create_refresh_token(
application_id: str, user_id: str, scopes: list[APIPermission]
) -> OAuthRefreshToken:
"""
Create a new refresh token.
Returns OAuthRefreshToken (with plaintext token).
"""
plaintext_token = REFRESH_TOKEN_PREFIX + _generate_token()
token_hash = _hash_token(plaintext_token)
now = datetime.now(timezone.utc)
expires_at = now + REFRESH_TOKEN_TTL
saved_token = await PrismaOAuthRefreshToken.prisma().create(
data=cast(
OAuthRefreshTokenCreateInput,
{
"id": str(uuid.uuid4()),
"token": token_hash, # SHA256 hash for direct lookup
"expiresAt": expires_at,
"applicationId": application_id,
"userId": user_id,
"scopes": [s for s in scopes],
},
)
)
return OAuthRefreshToken.from_db(saved_token, plaintext_token=plaintext_token)
async def refresh_tokens(
refresh_token: str, application_id: str
) -> tuple[OAuthAccessToken, OAuthRefreshToken]:
"""
Use a refresh token to create new access and refresh tokens.
Returns (new_access_token, new_refresh_token) both with plaintext tokens included.
Raises:
InvalidGrantError: If refresh token is invalid, expired, or revoked
"""
token_hash = _hash_token(refresh_token)
# Direct lookup by hash
rt = await PrismaOAuthRefreshToken.prisma().find_unique(where={"token": token_hash})
if not rt:
raise InvalidGrantError("refresh token not found")
# NOTE: no need to check Application.isActive, this is checked by the token endpoint
if rt.revokedAt is not None:
raise InvalidGrantError("refresh token has been revoked")
# Validate application
if rt.applicationId != application_id:
raise InvalidGrantError("refresh token does not belong to this application")
# Check expiration
now = datetime.now(timezone.utc)
if rt.expiresAt < now:
raise InvalidGrantError("refresh token expired")
# Revoke old refresh token
await PrismaOAuthRefreshToken.prisma().update(
where={"token": token_hash},
data={"revokedAt": now},
)
# Create new access and refresh tokens with same scopes
scopes = [APIPermission(s) for s in rt.scopes]
new_access_token = await create_access_token(
rt.applicationId,
rt.userId,
scopes,
)
new_refresh_token = await create_refresh_token(
rt.applicationId,
rt.userId,
scopes,
)
return new_access_token, new_refresh_token
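# Usage sketch (assumption, not part of this module): a token endpoint handling
# grant_type=refresh_token would validate the client first, then rotate:
#
#   app = await validate_client_credentials(client_id, client_secret)
#   access, refresh = await refresh_tokens(presented_refresh_token, app.id)
#   # respond with access.token.get_secret_value() and refresh.token.get_secret_value();
#   # the presented refresh token is revoked as part of refresh_tokens().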
async def revoke_refresh_token(
token: str, application_id: str
) -> OAuthRefreshTokenInfo | None:
"""
Revoke a refresh token.
Args:
token: The plaintext refresh token to revoke
application_id: The application ID making the revocation request.
Only tokens belonging to this application will be revoked.
Returns:
OAuthRefreshTokenInfo if token was found and revoked, None otherwise.
Note:
Always performs exactly 2 DB queries regardless of outcome to prevent
timing side-channel attacks that could reveal token existence.
"""
try:
token_hash = _hash_token(token)
# Use update_many to filter by both token and applicationId
updated_count = await PrismaOAuthRefreshToken.prisma().update_many(
where={
"token": token_hash,
"applicationId": application_id,
"revokedAt": None,
},
data={"revokedAt": datetime.now(timezone.utc)},
)
# Always perform second query to ensure constant time
result = await PrismaOAuthRefreshToken.prisma().find_unique(
where={"token": token_hash}
)
# Only return result if we actually revoked something
if updated_count == 0:
return None
return OAuthRefreshTokenInfo.from_db(result) if result else None
except Exception as e:
logger.exception(f"Error revoking refresh token: {e}")
return None
# ============================================================================
# Token Introspection
# ============================================================================
async def introspect_token(
token: str,
token_type_hint: Optional[Literal["access_token", "refresh_token"]] = None,
) -> TokenIntrospectionResult:
"""
Introspect a token and return its metadata (RFC 7662).
Returns TokenIntrospectionResult with active=True and metadata if valid,
or active=False if the token is invalid/expired/revoked.
"""
# Try as access token first (skipped only when the hint says "refresh_token")
if token_type_hint != "refresh_token":
try:
token_info, app = await validate_access_token(token)
return TokenIntrospectionResult(
active=True,
scopes=list(s.value for s in token_info.scopes),
client_id=app.client_id if app else None,
user_id=token_info.user_id,
exp=int(token_info.expires_at.timestamp()),
token_type="access_token",
)
except InvalidTokenError:
pass # Try as refresh token
# Try as refresh token
token_hash = _hash_token(token)
refresh_token = await PrismaOAuthRefreshToken.prisma().find_unique(
where={"token": token_hash}
)
if refresh_token and refresh_token.revokedAt is None:
# Check if valid (not expired)
now = datetime.now(timezone.utc)
if refresh_token.expiresAt > now:
app = await get_oauth_application_by_id(refresh_token.applicationId)
return TokenIntrospectionResult(
active=True,
scopes=list(s for s in refresh_token.scopes),
client_id=app.client_id if app else None,
user_id=refresh_token.userId,
exp=int(refresh_token.expiresAt.timestamp()),
token_type="refresh_token",
)
# Token not found or inactive
return TokenIntrospectionResult(active=False)
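A short usage sketch for the introspection helper above, assuming it is awaited inside an async endpoint; the presented token string is a placeholder.

    # Illustrative only
    result = await introspect_token("<presented token>", token_type_hint="refresh_token")
    if result.active:
        print(result.token_type, result.client_id, result.user_id, result.scopes, result.exp)
    else:
        print("token is invalid, expired, or revoked")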
async def get_oauth_application_by_id(app_id: str) -> Optional[OAuthApplicationInfo]:
"""Get OAuth application by ID"""
app = await PrismaOAuthApplication.prisma().find_unique(where={"id": app_id})
if not app:
return None
return OAuthApplicationInfo.from_db(app)
async def list_user_oauth_applications(user_id: str) -> list[OAuthApplicationInfo]:
"""Get all OAuth applications owned by a user"""
apps = await PrismaOAuthApplication.prisma().find_many(
where={"ownerId": user_id},
order={"createdAt": "desc"},
)
return [OAuthApplicationInfo.from_db(app) for app in apps]
async def update_oauth_application(
app_id: str,
*,
owner_id: str,
is_active: Optional[bool] = None,
logo_url: Optional[str] = None,
) -> Optional[OAuthApplicationInfo]:
"""
Update an OAuth application's active status and/or logo URL.
Only the owner can update their app.
Returns the updated app info, or None if the app is not found or not owned by the user.
"""
# First verify ownership
app = await PrismaOAuthApplication.prisma().find_first(
where={"id": app_id, "ownerId": owner_id}
)
if not app:
return None
patch: OAuthApplicationUpdateInput = {}
if is_active is not None:
patch["isActive"] = is_active
if logo_url:
patch["logoUrl"] = logo_url
if not patch:
return OAuthApplicationInfo.from_db(app) # return unchanged
updated_app = await PrismaOAuthApplication.prisma().update(
where={"id": app_id},
data=patch,
)
return OAuthApplicationInfo.from_db(updated_app) if updated_app else None
# ============================================================================
# Token Cleanup
# ============================================================================
async def cleanup_expired_oauth_tokens() -> dict[str, int]:
"""
Delete expired OAuth tokens from the database.
This removes:
- Expired authorization codes (10 min TTL)
- Expired access tokens (1 hour TTL)
- Expired refresh tokens (30 day TTL)
Returns a dict with counts of deleted tokens by type.
"""
now = datetime.now(timezone.utc)
# Delete expired authorization codes
codes_result = await PrismaOAuthAuthorizationCode.prisma().delete_many(
where={"expiresAt": {"lt": now}}
)
# Delete expired access tokens
access_result = await PrismaOAuthAccessToken.prisma().delete_many(
where={"expiresAt": {"lt": now}}
)
# Delete expired refresh tokens
refresh_result = await PrismaOAuthRefreshToken.prisma().delete_many(
where={"expiresAt": {"lt": now}}
)
deleted = {
"authorization_codes": codes_result,
"access_tokens": access_result,
"refresh_tokens": refresh_result,
}
total = sum(deleted.values())
if total > 0:
logger.info(f"Cleaned up {total} expired OAuth tokens: {deleted}")
return deleted
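cleanup_expired_oauth_tokens() is a one-shot sweep; a minimal sketch of running it on an interval, assuming a plain asyncio loop rather than whatever scheduler the service actually uses.

    import asyncio

    async def oauth_token_cleanup_loop(interval_seconds: int = 3600) -> None:
        # Hypothetical periodic driver; the real scheduling mechanism is not shown in this diff.
        while True:
            deleted = await cleanup_expired_oauth_tokens()
            # `deleted` maps token type -> count, e.g. {"authorization_codes": 3, ...}
            await asyncio.sleep(interval_seconds)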

View File

@@ -71,6 +71,7 @@ class BlockType(Enum):
AGENT = "Agent"
AI = "AI"
AYRSHARE = "Ayrshare"
HUMAN_IN_THE_LOOP = "Human In The Loop"
class BlockCategory(Enum):
@@ -265,14 +266,61 @@ class BlockSchema(BaseModel):
)
}
@classmethod
def get_auto_credentials_fields(cls) -> dict[str, dict[str, Any]]:
"""
Get fields that have auto_credentials metadata (e.g., GoogleDriveFileInput).
Returns a dict mapping kwarg_name -> {field_name, auto_credentials_config}
Raises:
ValueError: If multiple fields have the same kwarg_name, as this would
cause silent overwriting and only the last field would be processed.
"""
result: dict[str, dict[str, Any]] = {}
schema = cls.jsonschema()
properties = schema.get("properties", {})
for field_name, field_schema in properties.items():
auto_creds = field_schema.get("auto_credentials")
if auto_creds:
kwarg_name = auto_creds.get("kwarg_name", "credentials")
if kwarg_name in result:
raise ValueError(
f"Duplicate auto_credentials kwarg_name '{kwarg_name}' "
f"in fields '{result[kwarg_name]['field_name']}' and "
f"'{field_name}' on {cls.__qualname__}"
)
result[kwarg_name] = {
"field_name": field_name,
"config": auto_creds,
}
return result
@classmethod
def get_credentials_fields_info(cls) -> dict[str, CredentialsFieldInfo]:
        return {
            field_name: CredentialsFieldInfo.model_validate(
                cls.get_field_schema(field_name), by_alias=True
            )
            for field_name in cls.get_credentials_fields().keys()
        }
        result = {}
        # Regular credentials fields
        for field_name in cls.get_credentials_fields().keys():
            result[field_name] = CredentialsFieldInfo.model_validate(
                cls.get_field_schema(field_name), by_alias=True
            )
# Auto-generated credentials fields (from GoogleDriveFileInput etc.)
for kwarg_name, info in cls.get_auto_credentials_fields().items():
config = info["config"]
# Build a schema-like dict that CredentialsFieldInfo can parse
auto_schema = {
"credentials_provider": [config.get("provider", "google")],
"credentials_types": [config.get("type", "oauth2")],
"credentials_scopes": config.get("scopes"),
}
result[kwarg_name] = CredentialsFieldInfo.model_validate(
auto_schema, by_alias=True
)
return result
@classmethod
def get_input_defaults(cls, data: BlockInput) -> BlockInput:
@@ -553,14 +601,18 @@ class Block(ABC, Generic[BlockSchemaInputType, BlockSchemaOutputType]):
async for output_name, output_data in self._execute(input_data, **kwargs):
yield output_name, output_data
except Exception as ex:
            if not isinstance(ex, BlockError):
                raise BlockUnknownError(
                    message=str(ex),
                    block_name=self.name,
                    block_id=self.id,
                ) from ex
            else:
                raise ex
            if isinstance(ex, BlockError):
                raise ex
            else:
                raise (
                    BlockExecutionError
                    if isinstance(ex, ValueError)
                    else BlockUnknownError
                )(
                    message=str(ex),
                    block_name=self.name,
                    block_id=self.id,
                ) from ex
async def _execute(self, input_data: BlockInput, **kwargs) -> BlockOutput:
if error := self.input_schema.validate_data(input_data):
@@ -796,3 +848,12 @@ def get_io_block_ids() -> Sequence[str]:
for id, B in get_blocks().items()
if B().block_type in (BlockType.INPUT, BlockType.OUTPUT)
]
@cached(ttl_seconds=3600)
def get_human_in_the_loop_block_ids() -> Sequence[str]:
return [
id
for id, B in get_blocks().items()
if B().block_type == BlockType.HUMAN_IN_THE_LOOP
]
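For reference, a hedged illustration of the shape get_auto_credentials_fields() derives; the field name, kwarg name, and scope below are invented, only the metadata keys (kwarg_name, provider, type, scopes) come from the code above.

    # Hypothetical jsonschema "properties" entry carrying auto_credentials metadata:
    properties = {
        "spreadsheet_file": {
            "auto_credentials": {
                "kwarg_name": "google_credentials",
                "provider": "google",
                "type": "oauth2",
                "scopes": ["https://www.googleapis.com/auth/drive.readonly"],
            }
        }
    }
    # get_auto_credentials_fields() would map this to:
    #   {"google_credentials": {"field_name": "spreadsheet_file", "config": {...}}}
    # and get_credentials_fields_info() would surface it as a credentials field with
    # provider ["google"], type ["oauth2"], and the listed scopes.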

View File

@@ -5,12 +5,14 @@ This test was added to cover a previously untested code path that could lead to
incorrect balance capping behavior.
"""
from typing import cast
from uuid import uuid4
import pytest
from prisma.enums import CreditTransactionType
from prisma.errors import UniqueViolationError
from prisma.models import CreditTransaction, User, UserBalance
from prisma.types import UserBalanceUpsertInput, UserCreateInput
from backend.data.credit import UserCredit
from backend.util.json import SafeJson
@@ -21,11 +23,14 @@ async def create_test_user(user_id: str) -> None:
"""Create a test user for ceiling tests."""
try:
await User.prisma().create(
data={
"id": user_id,
"email": f"test-{user_id}@example.com",
"name": f"Test User {user_id[:8]}",
}
data=cast(
UserCreateInput,
{
"id": user_id,
"email": f"test-{user_id}@example.com",
"name": f"Test User {user_id[:8]}",
},
)
)
except UniqueViolationError:
# User already exists, continue
@@ -33,7 +38,10 @@ async def create_test_user(user_id: str) -> None:
await UserBalance.prisma().upsert(
where={"userId": user_id},
data={"create": {"userId": user_id, "balance": 0}, "update": {"balance": 0}},
data=cast(
UserBalanceUpsertInput,
{"create": {"userId": user_id, "balance": 0}, "update": {"balance": 0}},
),
)

View File

@@ -7,6 +7,7 @@ without race conditions, deadlocks, or inconsistent state.
import asyncio
import random
from typing import cast
from uuid import uuid4
import prisma.enums
@@ -14,6 +15,7 @@ import pytest
from prisma.enums import CreditTransactionType
from prisma.errors import UniqueViolationError
from prisma.models import CreditTransaction, User, UserBalance
from prisma.types import UserBalanceUpsertInput, UserCreateInput
from backend.data.credit import POSTGRES_INT_MAX, UsageTransactionMetadata, UserCredit
from backend.util.exceptions import InsufficientBalanceError
@@ -28,11 +30,14 @@ async def create_test_user(user_id: str) -> None:
"""Create a test user with initial balance."""
try:
await User.prisma().create(
data={
"id": user_id,
"email": f"test-{user_id}@example.com",
"name": f"Test User {user_id[:8]}",
}
data=cast(
UserCreateInput,
{
"id": user_id,
"email": f"test-{user_id}@example.com",
"name": f"Test User {user_id[:8]}",
},
)
)
except UniqueViolationError:
# User already exists, continue
@@ -41,7 +46,10 @@ async def create_test_user(user_id: str) -> None:
# Ensure UserBalance record exists
await UserBalance.prisma().upsert(
where={"userId": user_id},
data={"create": {"userId": user_id, "balance": 0}, "update": {"balance": 0}},
data=cast(
UserBalanceUpsertInput,
{"create": {"userId": user_id, "balance": 0}, "update": {"balance": 0}},
),
)
@@ -342,10 +350,13 @@ async def test_integer_overflow_protection(server: SpinTestServer):
# First, set balance near max
await UserBalance.prisma().upsert(
where={"userId": user_id},
data={
"create": {"userId": user_id, "balance": max_int - 100},
"update": {"balance": max_int - 100},
},
data=cast(
UserBalanceUpsertInput,
{
"create": {"userId": user_id, "balance": max_int - 100},
"update": {"balance": max_int - 100},
},
),
)
# Try to add more than possible - should clamp to POSTGRES_INT_MAX

View File

@@ -5,9 +5,12 @@ These tests run actual database operations to ensure SQL queries work correctly,
which would have caught the CreditTransactionType enum casting bug.
"""
from typing import cast
import pytest
from prisma.enums import CreditTransactionType
from prisma.models import CreditTransaction, User, UserBalance
from prisma.types import UserCreateInput
from backend.data.credit import (
AutoTopUpConfig,
@@ -29,12 +32,15 @@ async def cleanup_test_user():
# Create the user first
try:
await User.prisma().create(
data={
"id": user_id,
"email": f"test-{user_id}@example.com",
"topUpConfig": SafeJson({}),
"timezone": "UTC",
}
data=cast(
UserCreateInput,
{
"id": user_id,
"email": f"test-{user_id}@example.com",
"topUpConfig": SafeJson({}),
"timezone": "UTC",
},
)
)
except Exception:
# User might already exist, that's fine

View File

@@ -6,12 +6,19 @@ are atomic and maintain data consistency.
"""
from datetime import datetime, timezone
from typing import cast
from unittest.mock import MagicMock, patch
import pytest
import stripe
from prisma.enums import CreditTransactionType
from prisma.models import CreditRefundRequest, CreditTransaction, User, UserBalance
from prisma.types import (
CreditRefundRequestCreateInput,
CreditTransactionCreateInput,
UserBalanceCreateInput,
UserCreateInput,
)
from backend.data.credit import UserCredit
from backend.util.json import SafeJson
@@ -35,32 +42,41 @@ async def setup_test_user_with_topup():
# Create user
await User.prisma().create(
data={
"id": REFUND_TEST_USER_ID,
"email": f"{REFUND_TEST_USER_ID}@example.com",
"name": "Refund Test User",
}
data=cast(
UserCreateInput,
{
"id": REFUND_TEST_USER_ID,
"email": f"{REFUND_TEST_USER_ID}@example.com",
"name": "Refund Test User",
},
)
)
# Create user balance
await UserBalance.prisma().create(
data={
"userId": REFUND_TEST_USER_ID,
"balance": 1000, # $10
}
data=cast(
UserBalanceCreateInput,
{
"userId": REFUND_TEST_USER_ID,
"balance": 1000, # $10
},
)
)
# Create a top-up transaction that can be refunded
topup_tx = await CreditTransaction.prisma().create(
data={
"userId": REFUND_TEST_USER_ID,
"amount": 1000,
"type": CreditTransactionType.TOP_UP,
"transactionKey": "pi_test_12345",
"runningBalance": 1000,
"isActive": True,
"metadata": SafeJson({"stripe_payment_intent": "pi_test_12345"}),
}
data=cast(
CreditTransactionCreateInput,
{
"userId": REFUND_TEST_USER_ID,
"amount": 1000,
"type": CreditTransactionType.TOP_UP,
"transactionKey": "pi_test_12345",
"runningBalance": 1000,
"isActive": True,
"metadata": SafeJson({"stripe_payment_intent": "pi_test_12345"}),
},
)
)
return topup_tx
@@ -93,12 +109,15 @@ async def test_deduct_credits_atomic(server: SpinTestServer):
# Create refund request record (simulating webhook flow)
await CreditRefundRequest.prisma().create(
data={
"userId": REFUND_TEST_USER_ID,
"amount": 500,
"transactionKey": topup_tx.transactionKey, # Should match the original transaction
"reason": "Test refund",
}
data=cast(
CreditRefundRequestCreateInput,
{
"userId": REFUND_TEST_USER_ID,
"amount": 500,
"transactionKey": topup_tx.transactionKey, # Should match the original transaction
"reason": "Test refund",
},
)
)
# Call deduct_credits
@@ -286,12 +305,15 @@ async def test_concurrent_refunds(server: SpinTestServer):
refund_requests = []
for i in range(5):
req = await CreditRefundRequest.prisma().create(
data={
"userId": REFUND_TEST_USER_ID,
"amount": 100, # $1 each
"transactionKey": topup_tx.transactionKey,
"reason": f"Test refund {i}",
}
data=cast(
CreditRefundRequestCreateInput,
{
"userId": REFUND_TEST_USER_ID,
"amount": 100, # $1 each
"transactionKey": topup_tx.transactionKey,
"reason": f"Test refund {i}",
},
)
)
refund_requests.append(req)

View File

@@ -1,13 +1,15 @@
from datetime import datetime, timedelta, timezone
from typing import cast
import pytest
from prisma.enums import CreditTransactionType
from prisma.models import CreditTransaction, UserBalance
from prisma.types import CreditTransactionCreateInput, UserBalanceUpsertInput
from backend.blocks.llm import AITextGeneratorBlock
from backend.data.block import get_block
from backend.data.credit import BetaUserCredit, UsageTransactionMetadata
from backend.data.execution import NodeExecutionEntry, UserContext
from backend.data.execution import ExecutionContext, NodeExecutionEntry
from backend.data.user import DEFAULT_USER_ID
from backend.executor.utils import block_usage_cost
from backend.integrations.credentials_store import openai_credentials
@@ -23,10 +25,13 @@ async def disable_test_user_transactions():
old_date = datetime.now(timezone.utc) - timedelta(days=35) # More than a month ago
await UserBalance.prisma().upsert(
where={"userId": DEFAULT_USER_ID},
data={
"create": {"userId": DEFAULT_USER_ID, "balance": 0},
"update": {"balance": 0, "updatedAt": old_date},
},
data=cast(
UserBalanceUpsertInput,
{
"create": {"userId": DEFAULT_USER_ID, "balance": 0},
"update": {"balance": 0, "updatedAt": old_date},
},
),
)
@@ -73,6 +78,7 @@ async def test_block_credit_usage(server: SpinTestServer):
NodeExecutionEntry(
user_id=DEFAULT_USER_ID,
graph_id="test_graph",
graph_version=1,
node_id="test_node",
graph_exec_id="test_graph_exec",
node_exec_id="test_node_exec",
@@ -85,7 +91,7 @@ async def test_block_credit_usage(server: SpinTestServer):
"type": openai_credentials.type,
},
},
user_context=UserContext(timezone="UTC"),
execution_context=ExecutionContext(user_timezone="UTC"),
),
)
assert spending_amount_1 > 0
@@ -94,12 +100,13 @@ async def test_block_credit_usage(server: SpinTestServer):
NodeExecutionEntry(
user_id=DEFAULT_USER_ID,
graph_id="test_graph",
graph_version=1,
node_id="test_node",
graph_exec_id="test_graph_exec",
node_exec_id="test_node_exec",
block_id=AITextGeneratorBlock().id,
inputs={"model": "gpt-4-turbo", "api_key": "owned_api_key"},
user_context=UserContext(timezone="UTC"),
execution_context=ExecutionContext(user_timezone="UTC"),
),
)
assert spending_amount_2 == 0
@@ -138,23 +145,29 @@ async def test_block_credit_reset(server: SpinTestServer):
# Manually create a transaction with month 1 timestamp to establish history
await CreditTransaction.prisma().create(
data={
"userId": DEFAULT_USER_ID,
"amount": 100,
"type": CreditTransactionType.TOP_UP,
"runningBalance": 1100,
"isActive": True,
"createdAt": month1, # Set specific timestamp
}
data=cast(
CreditTransactionCreateInput,
{
"userId": DEFAULT_USER_ID,
"amount": 100,
"type": CreditTransactionType.TOP_UP,
"runningBalance": 1100,
"isActive": True,
"createdAt": month1, # Set specific timestamp
},
)
)
# Update user balance to match
await UserBalance.prisma().upsert(
where={"userId": DEFAULT_USER_ID},
data={
"create": {"userId": DEFAULT_USER_ID, "balance": 1100},
"update": {"balance": 1100},
},
data=cast(
UserBalanceUpsertInput,
{
"create": {"userId": DEFAULT_USER_ID, "balance": 1100},
"update": {"balance": 1100},
},
),
)
# Now test month 2 behavior
@@ -173,14 +186,17 @@ async def test_block_credit_reset(server: SpinTestServer):
# Create a month 2 transaction to update the last transaction time
await CreditTransaction.prisma().create(
data={
"userId": DEFAULT_USER_ID,
"amount": -700, # Spent 700 to get to 400
"type": CreditTransactionType.USAGE,
"runningBalance": 400,
"isActive": True,
"createdAt": month2,
}
data=cast(
CreditTransactionCreateInput,
{
"userId": DEFAULT_USER_ID,
"amount": -700, # Spent 700 to get to 400
"type": CreditTransactionType.USAGE,
"runningBalance": 400,
"isActive": True,
"createdAt": month2,
},
)
)
# Move to month 3

View File

@@ -6,12 +6,14 @@ doesn't underflow below POSTGRES_INT_MIN, which could cause integer wraparound i
"""
import asyncio
from typing import cast
from uuid import uuid4
import pytest
from prisma.enums import CreditTransactionType
from prisma.errors import UniqueViolationError
from prisma.models import CreditTransaction, User, UserBalance
from prisma.types import UserBalanceUpsertInput, UserCreateInput
from backend.data.credit import POSTGRES_INT_MIN, UserCredit
from backend.util.test import SpinTestServer
@@ -21,11 +23,14 @@ async def create_test_user(user_id: str) -> None:
"""Create a test user for underflow tests."""
try:
await User.prisma().create(
data={
"id": user_id,
"email": f"test-{user_id}@example.com",
"name": f"Test User {user_id[:8]}",
}
data=cast(
UserCreateInput,
{
"id": user_id,
"email": f"test-{user_id}@example.com",
"name": f"Test User {user_id[:8]}",
},
)
)
except UniqueViolationError:
# User already exists, continue
@@ -33,7 +38,10 @@ async def create_test_user(user_id: str) -> None:
await UserBalance.prisma().upsert(
where={"userId": user_id},
data={"create": {"userId": user_id, "balance": 0}, "update": {"balance": 0}},
data=cast(
UserBalanceUpsertInput,
{"create": {"userId": user_id, "balance": 0}, "update": {"balance": 0}},
),
)
@@ -70,10 +78,13 @@ async def test_debug_underflow_step_by_step(server: SpinTestServer):
await UserBalance.prisma().upsert(
where={"userId": user_id},
data={
"create": {"userId": user_id, "balance": initial_balance_target},
"update": {"balance": initial_balance_target},
},
data=cast(
UserBalanceUpsertInput,
{
"create": {"userId": user_id, "balance": initial_balance_target},
"update": {"balance": initial_balance_target},
},
),
)
current_balance = await credit_system.get_credits(user_id)
@@ -110,10 +121,13 @@ async def test_debug_underflow_step_by_step(server: SpinTestServer):
# Set balance to exactly POSTGRES_INT_MIN
await UserBalance.prisma().upsert(
where={"userId": user_id},
data={
"create": {"userId": user_id, "balance": POSTGRES_INT_MIN},
"update": {"balance": POSTGRES_INT_MIN},
},
data=cast(
UserBalanceUpsertInput,
{
"create": {"userId": user_id, "balance": POSTGRES_INT_MIN},
"update": {"balance": POSTGRES_INT_MIN},
},
),
)
edge_balance = await credit_system.get_credits(user_id)
@@ -152,10 +166,13 @@ async def test_underflow_protection_large_refunds(server: SpinTestServer):
test_balance = POSTGRES_INT_MIN + 1000
await UserBalance.prisma().upsert(
where={"userId": user_id},
data={
"create": {"userId": user_id, "balance": test_balance},
"update": {"balance": test_balance},
},
data=cast(
UserBalanceUpsertInput,
{
"create": {"userId": user_id, "balance": test_balance},
"update": {"balance": test_balance},
},
),
)
current_balance = await credit_system.get_credits(user_id)
@@ -217,10 +234,13 @@ async def test_multiple_large_refunds_cumulative_underflow(server: SpinTestServe
initial_balance = POSTGRES_INT_MIN + 500 # Close to minimum but with some room
await UserBalance.prisma().upsert(
where={"userId": user_id},
data={
"create": {"userId": user_id, "balance": initial_balance},
"update": {"balance": initial_balance},
},
data=cast(
UserBalanceUpsertInput,
{
"create": {"userId": user_id, "balance": initial_balance},
"update": {"balance": initial_balance},
},
),
)
# Apply multiple refunds that would cumulatively underflow
@@ -295,10 +315,13 @@ async def test_concurrent_large_refunds_no_underflow(server: SpinTestServer):
initial_balance = POSTGRES_INT_MIN + 1000 # Close to minimum
await UserBalance.prisma().upsert(
where={"userId": user_id},
data={
"create": {"userId": user_id, "balance": initial_balance},
"update": {"balance": initial_balance},
},
data=cast(
UserBalanceUpsertInput,
{
"create": {"userId": user_id, "balance": initial_balance},
"update": {"balance": initial_balance},
},
),
)
async def large_refund(amount: int, label: str):

View File

@@ -9,11 +9,13 @@ This test ensures that:
import asyncio
from datetime import datetime
from typing import cast
import pytest
from prisma.enums import CreditTransactionType
from prisma.errors import UniqueViolationError
from prisma.models import CreditTransaction, User, UserBalance
from prisma.types import UserBalanceCreateInput, UserCreateInput
from backend.data.credit import UsageTransactionMetadata, UserCredit
from backend.util.json import SafeJson
@@ -24,11 +26,14 @@ async def create_test_user(user_id: str) -> None:
"""Create a test user for migration tests."""
try:
await User.prisma().create(
data={
"id": user_id,
"email": f"test-{user_id}@example.com",
"name": f"Test User {user_id[:8]}",
}
data=cast(
UserCreateInput,
{
"id": user_id,
"email": f"test-{user_id}@example.com",
"name": f"Test User {user_id[:8]}",
},
)
)
except UniqueViolationError:
# User already exists, continue
@@ -121,7 +126,9 @@ async def test_detect_stale_user_balance_queries(server: SpinTestServer):
try:
# Create UserBalance with specific value
await UserBalance.prisma().create(
data={"userId": user_id, "balance": 5000} # $50
data=cast(
UserBalanceCreateInput, {"userId": user_id, "balance": 5000}
) # $50
)
# Verify that get_credits returns UserBalance value (5000), not any stale User.balance value
@@ -160,7 +167,9 @@ async def test_concurrent_operations_use_userbalance_only(server: SpinTestServer
try:
# Set initial balance in UserBalance
await UserBalance.prisma().create(data={"userId": user_id, "balance": 1000})
await UserBalance.prisma().create(
data=cast(UserBalanceCreateInput, {"userId": user_id, "balance": 1000})
)
# Run concurrent operations to ensure they all use UserBalance atomic operations
async def concurrent_spend(amount: int, label: str):

View File

@@ -5,6 +5,7 @@ from enum import Enum
from multiprocessing import Manager
from queue import Empty
from typing import (
TYPE_CHECKING,
Annotated,
Any,
AsyncGenerator,
@@ -27,6 +28,7 @@ from prisma.models import (
AgentNodeExecutionKeyValueData,
)
from prisma.types import (
AgentGraphExecutionCreateInput,
AgentGraphExecutionUpdateManyMutationInput,
AgentGraphExecutionWhereInput,
AgentNodeExecutionCreateInput,
@@ -64,12 +66,27 @@ from .includes import (
)
from .model import CredentialsMetaInput, GraphExecutionStats, NodeExecutionStats
if TYPE_CHECKING:
pass
T = TypeVar("T")
logger = logging.getLogger(__name__)
config = Config()
class ExecutionContext(BaseModel):
"""
Unified context that carries execution-level data throughout the entire execution flow.
This includes information needed by blocks, sub-graphs, and execution management.
"""
safe_mode: bool = True
user_timezone: str = "UTC"
root_execution_id: Optional[str] = None
parent_execution_id: Optional[str] = None
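A minimal sketch of constructing the new ExecutionContext for a nested run; the IDs and timezone are placeholders, the field names come from the model above.

    # Illustrative only: context for a sub-graph run, propagating the root execution id.
    ctx = ExecutionContext(
        safe_mode=True,
        user_timezone="Europe/Berlin",
        root_execution_id="<root graph exec id>",
        parent_execution_id="<parent graph exec id>",
    )
    # Passed further down as execution_context= on GraphExecutionEntry / NodeExecutionEntry.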
# -------------------------- Models -------------------------- #
@@ -96,11 +113,14 @@ NodesInputMasks = Mapping[str, NodeInputMask]
VALID_STATUS_TRANSITIONS = {
ExecutionStatus.QUEUED: [
ExecutionStatus.INCOMPLETE,
ExecutionStatus.TERMINATED, # For resuming halted execution
ExecutionStatus.REVIEW, # For resuming after review
],
ExecutionStatus.RUNNING: [
ExecutionStatus.INCOMPLETE,
ExecutionStatus.QUEUED,
ExecutionStatus.TERMINATED, # For resuming halted execution
ExecutionStatus.REVIEW, # For resuming after review
],
ExecutionStatus.COMPLETED: [
ExecutionStatus.RUNNING,
@@ -109,11 +129,16 @@ VALID_STATUS_TRANSITIONS = {
ExecutionStatus.INCOMPLETE,
ExecutionStatus.QUEUED,
ExecutionStatus.RUNNING,
ExecutionStatus.REVIEW,
],
ExecutionStatus.TERMINATED: [
ExecutionStatus.INCOMPLETE,
ExecutionStatus.QUEUED,
ExecutionStatus.RUNNING,
ExecutionStatus.REVIEW,
],
ExecutionStatus.REVIEW: [
ExecutionStatus.RUNNING,
],
}
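Note that the mapping reads target status -> allowed source statuses; a one-line helper makes the direction explicit (sketch mirroring how update_node_execution_status uses it below).

    def is_valid_transition(current: ExecutionStatus, target: ExecutionStatus) -> bool:
        # True if an execution currently in `current` may be moved to `target`.
        return current in VALID_STATUS_TRANSITIONS.get(target, [])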
@@ -356,9 +381,8 @@ class GraphExecutionWithNodes(GraphExecution):
def to_graph_execution_entry(
self,
user_context: "UserContext",
execution_context: ExecutionContext,
compiled_nodes_input_masks: Optional[NodesInputMasks] = None,
parent_graph_exec_id: Optional[str] = None,
):
return GraphExecutionEntry(
user_id=self.user_id,
@@ -366,8 +390,7 @@ class GraphExecutionWithNodes(GraphExecution):
graph_version=self.graph_version or 0,
graph_exec_id=self.id,
nodes_input_masks=compiled_nodes_input_masks,
user_context=user_context,
parent_graph_exec_id=parent_graph_exec_id,
execution_context=execution_context,
)
@@ -440,17 +463,18 @@ class NodeExecutionResult(BaseModel):
)
def to_node_execution_entry(
self, user_context: "UserContext"
self, execution_context: ExecutionContext
) -> "NodeExecutionEntry":
return NodeExecutionEntry(
user_id=self.user_id,
graph_exec_id=self.graph_exec_id,
graph_id=self.graph_id,
graph_version=self.graph_version,
node_exec_id=self.node_exec_id,
node_id=self.node_id,
block_id=self.block_id,
inputs=self.input_data,
user_context=user_context,
execution_context=execution_context,
)
@@ -685,37 +709,40 @@ async def create_graph_execution(
The id of the AgentGraphExecution and the list of ExecutionResult for each node.
"""
result = await AgentGraphExecution.prisma().create(
data={
"agentGraphId": graph_id,
"agentGraphVersion": graph_version,
"executionStatus": ExecutionStatus.INCOMPLETE,
"inputs": SafeJson(inputs),
"credentialInputs": (
SafeJson(credential_inputs) if credential_inputs else Json({})
),
"nodesInputMasks": (
SafeJson(nodes_input_masks) if nodes_input_masks else Json({})
),
"NodeExecutions": {
"create": [
AgentNodeExecutionCreateInput(
agentNodeId=node_id,
executionStatus=ExecutionStatus.QUEUED,
queuedTime=datetime.now(tz=timezone.utc),
Input={
"create": [
{"name": name, "data": SafeJson(data)}
for name, data in node_input.items()
]
},
)
for node_id, node_input in starting_nodes_input
]
data=cast(
AgentGraphExecutionCreateInput,
{
"agentGraphId": graph_id,
"agentGraphVersion": graph_version,
"executionStatus": ExecutionStatus.INCOMPLETE,
"inputs": SafeJson(inputs),
"credentialInputs": (
SafeJson(credential_inputs) if credential_inputs else Json({})
),
"nodesInputMasks": (
SafeJson(nodes_input_masks) if nodes_input_masks else Json({})
),
"NodeExecutions": {
"create": [
AgentNodeExecutionCreateInput(
agentNodeId=node_id,
executionStatus=ExecutionStatus.QUEUED,
queuedTime=datetime.now(tz=timezone.utc),
Input={
"create": [
{"name": name, "data": SafeJson(data)}
for name, data in node_input.items()
]
},
)
for node_id, node_input in starting_nodes_input
]
},
"userId": user_id,
"agentPresetId": preset_id,
"parentGraphExecutionId": parent_graph_exec_id,
},
"userId": user_id,
"agentPresetId": preset_id,
"parentGraphExecutionId": parent_graph_exec_id,
},
),
include=GRAPH_EXECUTION_INCLUDE_WITH_NODES,
)
@@ -728,7 +755,7 @@ async def upsert_execution_input(
input_name: str,
input_data: JsonValue,
node_exec_id: str | None = None,
) -> tuple[str, BlockInput]:
) -> tuple[NodeExecutionResult, BlockInput]:
"""
Insert an AgentNodeExecutionInputOutput record as one of an AgentNodeExecution's inputs.
If no existing AgentNodeExecution is still missing `input_name` as an input, create a new one.
@@ -761,7 +788,7 @@ async def upsert_execution_input(
existing_execution = await AgentNodeExecution.prisma().find_first(
where=existing_exec_query_filter,
order={"addedTime": "asc"},
include={"Input": True},
include={"Input": True, "GraphExecution": True},
)
json_input_data = SafeJson(input_data)
@@ -773,7 +800,7 @@ async def upsert_execution_input(
referencedByInputExecId=existing_execution.id,
)
)
return existing_execution.id, {
return NodeExecutionResult.from_db(existing_execution), {
**{
input_data.name: type_utils.convert(input_data.data, JsonValue)
for input_data in existing_execution.Input or []
@@ -788,9 +815,10 @@ async def upsert_execution_input(
agentGraphExecutionId=graph_exec_id,
executionStatus=ExecutionStatus.INCOMPLETE,
Input={"create": {"name": input_name, "data": json_input_data}},
)
),
include={"GraphExecution": True},
)
return result.id, {input_name: input_data}
return NodeExecutionResult.from_db(result), {input_name: input_data}
else:
raise ValueError(
@@ -806,15 +834,42 @@ async def upsert_execution_output(
"""
Insert an AgentNodeExecutionInputOutput record as one of an AgentNodeExecution's outputs.
"""
data: AgentNodeExecutionInputOutputCreateInput = {
"name": output_name,
"referencedByOutputExecId": node_exec_id,
}
data: AgentNodeExecutionInputOutputCreateInput = cast(
AgentNodeExecutionInputOutputCreateInput,
{
"name": output_name,
"referencedByOutputExecId": node_exec_id,
},
)
if output_data is not None:
data["data"] = SafeJson(output_data)
await AgentNodeExecutionInputOutput.prisma().create(data=data)
async def get_execution_outputs_by_node_exec_id(
node_exec_id: str,
) -> dict[str, Any]:
"""
Get all execution outputs for a specific node execution ID.
Args:
node_exec_id: The node execution ID to get outputs for
Returns:
Dictionary mapping output names to their data values
"""
outputs = await AgentNodeExecutionInputOutput.prisma().find_many(
where={"referencedByOutputExecId": node_exec_id}
)
result = {}
for output in outputs:
if output.data is not None:
result[output.name] = type_utils.convert(output.data, JsonValue)
return result
async def update_graph_execution_start_time(
graph_exec_id: str,
) -> GraphExecution | None:
@@ -886,9 +941,25 @@ async def update_node_execution_status_batch(
node_exec_ids: list[str],
status: ExecutionStatus,
stats: dict[str, Any] | None = None,
):
await AgentNodeExecution.prisma().update_many(
where={"id": {"in": node_exec_ids}},
) -> int:
# Validate status transitions - allowed_from should never be empty for valid statuses
allowed_from = VALID_STATUS_TRANSITIONS.get(status, [])
if not allowed_from:
raise ValueError(
f"Invalid status transition: {status} has no valid source statuses"
)
# For batch updates, we filter to only update nodes with valid current statuses
where_clause = cast(
AgentNodeExecutionWhereInput,
{
"id": {"in": node_exec_ids},
"executionStatus": {"in": [s.value for s in allowed_from]},
},
)
return await AgentNodeExecution.prisma().update_many(
where=where_clause,
data=_get_update_status_data(status, None, stats),
)
@@ -902,15 +973,37 @@ async def update_node_execution_status(
if status == ExecutionStatus.QUEUED and execution_data is None:
raise ValueError("Execution data must be provided when queuing an execution.")
res = await AgentNodeExecution.prisma().update(
# Validate status transitions - allowed_from should never be empty for valid statuses
allowed_from = VALID_STATUS_TRANSITIONS.get(status, [])
if not allowed_from:
raise ValueError(
f"Invalid status transition: {status} has no valid source statuses"
)
# First verify the current status allows this transition
current_exec = await AgentNodeExecution.prisma().find_unique(
where={"id": node_exec_id}, include=EXECUTION_RESULT_INCLUDE
)
if not current_exec:
raise ValueError(f"Execution {node_exec_id} not found.")
# Check if current status allows the requested transition
if current_exec.executionStatus not in allowed_from:
# Status transition not allowed, return current state without updating
return NodeExecutionResult.from_db(current_exec)
# Status transition is valid, perform the update
updated_exec = await AgentNodeExecution.prisma().update(
where={"id": node_exec_id},
data=_get_update_status_data(status, execution_data, stats),
include=EXECUTION_RESULT_INCLUDE,
)
if not res:
raise ValueError(f"Execution {node_exec_id} not found.")
return NodeExecutionResult.from_db(res)
if not updated_exec:
raise ValueError(f"Failed to update execution {node_exec_id}.")
return NodeExecutionResult.from_db(updated_exec)
def _get_update_status_data(
@@ -964,17 +1057,17 @@ async def get_node_execution(node_exec_id: str) -> NodeExecutionResult | None:
return NodeExecutionResult.from_db(execution)
async def get_node_executions(
def _build_node_execution_where_clause(
graph_exec_id: str | None = None,
node_id: str | None = None,
block_ids: list[str] | None = None,
statuses: list[ExecutionStatus] | None = None,
limit: int | None = None,
created_time_gte: datetime | None = None,
created_time_lte: datetime | None = None,
include_exec_data: bool = True,
) -> list[NodeExecutionResult]:
"""⚠️ No `user_id` check: DO NOT USE without check in user-facing endpoints."""
) -> AgentNodeExecutionWhereInput:
"""
Build where clause for node execution queries.
"""
where_clause: AgentNodeExecutionWhereInput = {}
if graph_exec_id:
where_clause["agentGraphExecutionId"] = graph_exec_id
@@ -991,6 +1084,29 @@ async def get_node_executions(
"lte": created_time_lte or datetime.max.replace(tzinfo=timezone.utc),
}
return where_clause
async def get_node_executions(
graph_exec_id: str | None = None,
node_id: str | None = None,
block_ids: list[str] | None = None,
statuses: list[ExecutionStatus] | None = None,
limit: int | None = None,
created_time_gte: datetime | None = None,
created_time_lte: datetime | None = None,
include_exec_data: bool = True,
) -> list[NodeExecutionResult]:
"""⚠️ No `user_id` check: DO NOT USE without check in user-facing endpoints."""
where_clause = _build_node_execution_where_clause(
graph_exec_id=graph_exec_id,
node_id=node_id,
block_ids=block_ids,
statuses=statuses,
created_time_gte=created_time_gte,
created_time_lte=created_time_lte,
)
executions = await AgentNodeExecution.prisma().find_many(
where=where_clause,
include=(
@@ -1032,31 +1148,29 @@ async def get_latest_node_execution(
# ----------------- Execution Infrastructure ----------------- #
class UserContext(BaseModel):
"""Generic user context for graph execution containing user-specific settings."""
timezone: str
class GraphExecutionEntry(BaseModel):
model_config = {"extra": "ignore"}
user_id: str
graph_exec_id: str
graph_id: str
graph_version: int
nodes_input_masks: Optional[NodesInputMasks] = None
user_context: UserContext
parent_graph_exec_id: Optional[str] = None
execution_context: ExecutionContext = Field(default_factory=ExecutionContext)
class NodeExecutionEntry(BaseModel):
model_config = {"extra": "ignore"}
user_id: str
graph_exec_id: str
graph_id: str
graph_version: int
node_exec_id: str
node_id: str
block_id: str
inputs: BlockInput
user_context: UserContext
execution_context: ExecutionContext = Field(default_factory=ExecutionContext)
class ExecutionQueue(Generic[T]):
@@ -1390,3 +1504,35 @@ async def get_graph_execution_by_share_token(
created_at=execution.createdAt,
outputs=outputs,
)
async def get_frequently_executed_graphs(
days_back: int = 30,
min_executions: int = 10,
) -> list[dict]:
"""Get graphs that have been frequently executed for monitoring."""
query_template = """
SELECT DISTINCT
e."agentGraphId" as graph_id,
e."userId" as user_id,
COUNT(*) as execution_count
FROM {schema_prefix}"AgentGraphExecution" e
WHERE e."createdAt" >= $1::timestamp
AND e."isDeleted" = false
AND e."executionStatus" IN ('COMPLETED', 'FAILED', 'TERMINATED')
GROUP BY e."agentGraphId", e."userId"
HAVING COUNT(*) >= $2
ORDER BY execution_count DESC
"""
start_date = datetime.now(timezone.utc) - timedelta(days=days_back)
result = await query_raw_with_schema(query_template, start_date, min_executions)
return [
{
"graph_id": row["graph_id"],
"user_id": row["user_id"],
"execution_count": int(row["execution_count"]),
}
for row in result
]
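A short usage sketch, assuming the call happens in monitoring code running inside an event loop; the thresholds are arbitrary.

    # Illustrative only: graphs with at least 50 finished runs in the last 7 days.
    hot_graphs = await get_frequently_executed_graphs(days_back=7, min_executions=50)
    for row in hot_graphs:
        print(row["graph_id"], row["user_id"], row["execution_count"])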

View File

@@ -61,6 +61,10 @@ if TYPE_CHECKING:
logger = logging.getLogger(__name__)
class GraphSettings(BaseModel):
human_in_the_loop_safe_mode: bool | None = None
class Link(BaseDbModel):
source_id: str
sink_id: str
@@ -225,6 +229,15 @@ class BaseGraph(BaseDbModel):
def has_external_trigger(self) -> bool:
return self.webhook_input_node is not None
@computed_field
@property
def has_human_in_the_loop(self) -> bool:
return any(
node.block_id
for node in self.nodes
if node.block.block_type == BlockType.HUMAN_IN_THE_LOOP
)
@property
def webhook_input_node(self) -> Node | None:
return next(
@@ -1105,6 +1118,28 @@ async def delete_graph(graph_id: str, user_id: str) -> int:
return entries_count
async def get_graph_settings(user_id: str, graph_id: str) -> GraphSettings:
lib = await LibraryAgent.prisma().find_first(
where={
"userId": user_id,
"agentGraphId": graph_id,
"isDeleted": False,
"isArchived": False,
},
order={"agentGraphVersion": "desc"},
)
if not lib or not lib.settings:
return GraphSettings()
try:
return GraphSettings.model_validate(lib.settings)
except Exception:
logger.warning(
f"Malformed settings for LibraryAgent user={user_id} graph={graph_id}"
)
return GraphSettings()
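A sketch of how a caller might resolve the per-agent safe-mode flag; falling back to True mirrors ExecutionContext.safe_mode's default, but the actual fallback policy is not shown in this diff.

    settings = await get_graph_settings(user_id="<user id>", graph_id="<graph id>")
    safe_mode = (
        settings.human_in_the_loop_safe_mode
        if settings.human_in_the_loop_safe_mode is not None
        else True  # assumed default, matching ExecutionContext.safe_mode
    )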
async def validate_graph_execution_permissions(
user_id: str, graph_id: str, graph_version: int, is_sub_graph: bool = False
) -> None:

View File

@@ -0,0 +1,261 @@
"""
Data layer for Human In The Loop (HITL) review operations.
Handles all database operations for pending human reviews.
"""
import asyncio
import logging
from datetime import datetime, timezone
from typing import Optional, cast
from prisma.enums import ReviewStatus
from prisma.models import PendingHumanReview
from prisma.types import PendingHumanReviewUpdateInput, PendingHumanReviewUpsertInput
from pydantic import BaseModel
from backend.server.v2.executions.review.model import (
PendingHumanReviewModel,
SafeJsonData,
)
from backend.util.json import SafeJson
logger = logging.getLogger(__name__)
class ReviewResult(BaseModel):
"""Result of a review operation."""
data: Optional[SafeJsonData] = None
status: ReviewStatus
message: str = ""
processed: bool
node_exec_id: str
async def get_or_create_human_review(
user_id: str,
node_exec_id: str,
graph_exec_id: str,
graph_id: str,
graph_version: int,
input_data: SafeJsonData,
message: str,
editable: bool,
) -> Optional[ReviewResult]:
"""
Get existing review or create a new pending review entry.
Uses upsert with empty update to get existing or create new review in a single operation.
Args:
user_id: ID of the user who owns this review
node_exec_id: ID of the node execution
graph_exec_id: ID of the graph execution
graph_id: ID of the graph template
graph_version: Version of the graph template
input_data: The data to be reviewed
message: Instructions for the reviewer
editable: Whether the data can be edited
Returns:
ReviewResult if the review is complete, None if waiting for human input
"""
try:
logger.debug(f"Getting or creating review for node {node_exec_id}")
# Upsert - get existing or create new review
review = await PendingHumanReview.prisma().upsert(
where={"nodeExecId": node_exec_id},
data=cast(
PendingHumanReviewUpsertInput,
{
"create": {
"userId": user_id,
"nodeExecId": node_exec_id,
"graphExecId": graph_exec_id,
"graphId": graph_id,
"graphVersion": graph_version,
"payload": SafeJson(input_data),
"instructions": message,
"editable": editable,
"status": ReviewStatus.WAITING,
},
"update": {}, # Do nothing on update - keep existing review as is
},
),
)
logger.info(
f"Review {'created' if review.createdAt == review.updatedAt else 'retrieved'} for node {node_exec_id} with status {review.status}"
)
except Exception as e:
logger.error(
f"Database error in get_or_create_human_review for node {node_exec_id}: {str(e)}"
)
raise
# Early return if already processed
if review.processed:
return None
# If pending, return None to continue waiting, otherwise return the review result
if review.status == ReviewStatus.WAITING:
return None
else:
return ReviewResult(
data=review.payload,
status=review.status,
message=review.reviewMessage or "",
processed=review.processed,
node_exec_id=review.nodeExecId,
)
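The None-versus-ReviewResult contract implies a caller pattern roughly like the one below; the surrounding variables and block behavior are assumptions, only get_or_create_human_review and ReviewStatus come from this file.

    # Illustrative only — inside some async HITL handling code with placeholder variables.
    review = await get_or_create_human_review(
        user_id=user_id,
        node_exec_id=node_exec_id,
        graph_exec_id=graph_exec_id,
        graph_id=graph_id,
        graph_version=graph_version,
        input_data=payload,
        message="Please approve or edit this payload",
        editable=True,
    )
    if review is None:
        pass  # still WAITING (or already processed): keep the node parked for a human
    elif review.status == ReviewStatus.APPROVED:
        approved_payload = review.data  # possibly edited by the reviewer
    elif review.status == ReviewStatus.REJECTED:
        pass  # surface review.message as the rejection reason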
async def has_pending_reviews_for_graph_exec(graph_exec_id: str) -> bool:
"""
Check if a graph execution has any pending reviews.
Args:
graph_exec_id: The graph execution ID to check
Returns:
True if there are reviews waiting for human input, False otherwise
"""
# Check if there are any reviews waiting for human input
count = await PendingHumanReview.prisma().count(
where={"graphExecId": graph_exec_id, "status": ReviewStatus.WAITING}
)
return count > 0
async def get_pending_reviews_for_user(
user_id: str, page: int = 1, page_size: int = 25
) -> list["PendingHumanReviewModel"]:
"""
Get all pending reviews for a user with pagination.
Args:
user_id: User ID to get reviews for
page: Page number (1-indexed)
page_size: Number of reviews per page
Returns:
List of pending review models
"""
# Calculate offset for pagination
offset = (page - 1) * page_size
reviews = await PendingHumanReview.prisma().find_many(
where={"userId": user_id, "status": ReviewStatus.WAITING},
order={"createdAt": "desc"},
skip=offset,
take=page_size,
)
return [PendingHumanReviewModel.from_db(review) for review in reviews]
async def get_pending_reviews_for_execution(
graph_exec_id: str, user_id: str
) -> list["PendingHumanReviewModel"]:
"""
Get all pending reviews for a specific graph execution.
Args:
graph_exec_id: Graph execution ID
user_id: User ID for security validation
Returns:
List of pending review models
"""
reviews = await PendingHumanReview.prisma().find_many(
where={
"userId": user_id,
"graphExecId": graph_exec_id,
"status": ReviewStatus.WAITING,
},
order={"createdAt": "asc"},
)
return [PendingHumanReviewModel.from_db(review) for review in reviews]
async def process_all_reviews_for_execution(
user_id: str,
review_decisions: dict[str, tuple[ReviewStatus, SafeJsonData | None, str | None]],
) -> dict[str, PendingHumanReviewModel]:
"""Process all pending reviews for an execution with approve/reject decisions.
Args:
user_id: User ID for ownership validation
review_decisions: Map of node_exec_id -> (status, reviewed_data, message)
Returns:
Dict of node_exec_id -> updated review model
"""
if not review_decisions:
return {}
node_exec_ids = list(review_decisions.keys())
# Get all reviews for validation
reviews = await PendingHumanReview.prisma().find_many(
where={
"nodeExecId": {"in": node_exec_ids},
"userId": user_id,
"status": ReviewStatus.WAITING,
},
)
# Validate all reviews can be processed
if len(reviews) != len(node_exec_ids):
missing_ids = set(node_exec_ids) - {review.nodeExecId for review in reviews}
raise ValueError(
f"Reviews not found, access denied, or not in WAITING status: {', '.join(missing_ids)}"
)
# Create parallel update tasks
update_tasks = []
for review in reviews:
new_status, reviewed_data, message = review_decisions[review.nodeExecId]
has_data_changes = reviewed_data is not None and reviewed_data != review.payload
# Check edit permissions for actual data modifications
if has_data_changes and not review.editable:
raise ValueError(f"Review {review.nodeExecId} is not editable")
update_data: PendingHumanReviewUpdateInput = {
"status": new_status,
"reviewMessage": message,
"wasEdited": has_data_changes,
"reviewedAt": datetime.now(timezone.utc),
}
if has_data_changes:
update_data["payload"] = SafeJson(reviewed_data)
task = PendingHumanReview.prisma().update(
where={"nodeExecId": review.nodeExecId},
data=update_data,
)
update_tasks.append(task)
# Execute all updates in parallel and get updated reviews
updated_reviews = await asyncio.gather(*update_tasks)
# Note: Execution resumption is now handled at the API layer after ALL reviews
# for an execution are processed (both approved and rejected)
# Return as dict for easy access
return {
review.nodeExecId: PendingHumanReviewModel.from_db(review)
for review in updated_reviews
}
async def update_review_processed_status(node_exec_id: str, processed: bool) -> None:
"""Update the processed status of a review."""
await PendingHumanReview.prisma().update(
where={"nodeExecId": node_exec_id}, data={"processed": processed}
)
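A sketch of the API-layer call shape for batch review decisions; the node-exec ids, payload, and messages are placeholders, the (status, data, message) tuple format comes from the signature above.

    decisions = {
        "<node exec id 1>": (ReviewStatus.APPROVED, {"text": "edited value"}, "Looks good"),
        "<node exec id 2>": (ReviewStatus.REJECTED, None, "Wrong recipient"),
    }
    updated = await process_all_reviews_for_execution(
        user_id="<user id>", review_decisions=decisions
    )
    # Execution resumption is then handled at the API layer once all reviews are decided.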

View File

@@ -0,0 +1,342 @@
import datetime
from unittest.mock import AsyncMock, Mock
import pytest
import pytest_mock
from prisma.enums import ReviewStatus
from backend.data.human_review import (
get_or_create_human_review,
get_pending_reviews_for_execution,
get_pending_reviews_for_user,
has_pending_reviews_for_graph_exec,
process_all_reviews_for_execution,
)
@pytest.fixture
def sample_db_review():
"""Create a sample database review object"""
mock_review = Mock()
mock_review.nodeExecId = "test_node_123"
mock_review.userId = "test-user-123"
mock_review.graphExecId = "test_graph_exec_456"
mock_review.graphId = "test_graph_789"
mock_review.graphVersion = 1
mock_review.payload = {"data": "test payload"}
mock_review.instructions = "Please review"
mock_review.editable = True
mock_review.status = ReviewStatus.WAITING
mock_review.reviewMessage = None
mock_review.wasEdited = False
mock_review.processed = False
mock_review.createdAt = datetime.datetime.now(datetime.timezone.utc)
mock_review.updatedAt = None
mock_review.reviewedAt = None
return mock_review
@pytest.mark.asyncio
async def test_get_or_create_human_review_new(
mocker: pytest_mock.MockFixture,
sample_db_review,
):
"""Test creating a new human review"""
# Mock the upsert to return a new review (created_at == updated_at)
sample_db_review.status = ReviewStatus.WAITING
sample_db_review.processed = False
mock_upsert = mocker.patch("backend.data.human_review.PendingHumanReview.prisma")
mock_upsert.return_value.upsert = AsyncMock(return_value=sample_db_review)
result = await get_or_create_human_review(
user_id="test-user-123",
node_exec_id="test_node_123",
graph_exec_id="test_graph_exec_456",
graph_id="test_graph_789",
graph_version=1,
input_data={"data": "test payload"},
message="Please review",
editable=True,
)
# Should return None for pending reviews (waiting for human input)
assert result is None
@pytest.mark.asyncio
async def test_get_or_create_human_review_approved(
mocker: pytest_mock.MockFixture,
sample_db_review,
):
"""Test retrieving an already approved review"""
# Set up review as already approved
sample_db_review.status = ReviewStatus.APPROVED
sample_db_review.processed = False
sample_db_review.reviewMessage = "Looks good"
mock_upsert = mocker.patch("backend.data.human_review.PendingHumanReview.prisma")
mock_upsert.return_value.upsert = AsyncMock(return_value=sample_db_review)
result = await get_or_create_human_review(
user_id="test-user-123",
node_exec_id="test_node_123",
graph_exec_id="test_graph_exec_456",
graph_id="test_graph_789",
graph_version=1,
input_data={"data": "test payload"},
message="Please review",
editable=True,
)
# Should return the approved result
assert result is not None
assert result.status == ReviewStatus.APPROVED
assert result.data == {"data": "test payload"}
assert result.message == "Looks good"
@pytest.mark.asyncio
async def test_has_pending_reviews_for_graph_exec_true(
mocker: pytest_mock.MockFixture,
):
"""Test when there are pending reviews"""
mock_count = mocker.patch("backend.data.human_review.PendingHumanReview.prisma")
mock_count.return_value.count = AsyncMock(return_value=2)
result = await has_pending_reviews_for_graph_exec("test_graph_exec")
assert result is True
@pytest.mark.asyncio
async def test_has_pending_reviews_for_graph_exec_false(
mocker: pytest_mock.MockFixture,
):
"""Test when there are no pending reviews"""
mock_count = mocker.patch("backend.data.human_review.PendingHumanReview.prisma")
mock_count.return_value.count = AsyncMock(return_value=0)
result = await has_pending_reviews_for_graph_exec("test_graph_exec")
assert result is False
@pytest.mark.asyncio
async def test_get_pending_reviews_for_user(
mocker: pytest_mock.MockFixture,
sample_db_review,
):
"""Test getting pending reviews for a user with pagination"""
mock_find_many = mocker.patch("backend.data.human_review.PendingHumanReview.prisma")
mock_find_many.return_value.find_many = AsyncMock(return_value=[sample_db_review])
result = await get_pending_reviews_for_user("test_user", page=2, page_size=10)
assert len(result) == 1
assert result[0].node_exec_id == "test_node_123"
# Verify pagination parameters
call_args = mock_find_many.return_value.find_many.call_args
assert call_args.kwargs["skip"] == 10 # (page-1) * page_size = (2-1) * 10
assert call_args.kwargs["take"] == 10
@pytest.mark.asyncio
async def test_get_pending_reviews_for_execution(
mocker: pytest_mock.MockFixture,
sample_db_review,
):
"""Test getting pending reviews for specific execution"""
mock_find_many = mocker.patch("backend.data.human_review.PendingHumanReview.prisma")
mock_find_many.return_value.find_many = AsyncMock(return_value=[sample_db_review])
result = await get_pending_reviews_for_execution(
"test_graph_exec_456", "test-user-123"
)
assert len(result) == 1
assert result[0].graph_exec_id == "test_graph_exec_456"
# Verify it filters by execution and user
call_args = mock_find_many.return_value.find_many.call_args
where_clause = call_args.kwargs["where"]
assert where_clause["userId"] == "test-user-123"
assert where_clause["graphExecId"] == "test_graph_exec_456"
assert where_clause["status"] == ReviewStatus.WAITING
@pytest.mark.asyncio
async def test_process_all_reviews_for_execution_success(
mocker: pytest_mock.MockFixture,
sample_db_review,
):
"""Test successful processing of reviews for an execution"""
# Mock finding reviews
mock_prisma = mocker.patch("backend.data.human_review.PendingHumanReview.prisma")
mock_prisma.return_value.find_many = AsyncMock(return_value=[sample_db_review])
# Mock updating reviews
updated_review = Mock()
updated_review.nodeExecId = "test_node_123"
updated_review.userId = "test-user-123"
updated_review.graphExecId = "test_graph_exec_456"
updated_review.graphId = "test_graph_789"
updated_review.graphVersion = 1
updated_review.payload = {"data": "modified"}
updated_review.instructions = "Please review"
updated_review.editable = True
updated_review.status = ReviewStatus.APPROVED
updated_review.reviewMessage = "Approved"
updated_review.wasEdited = True
updated_review.processed = False
updated_review.createdAt = datetime.datetime.now(datetime.timezone.utc)
updated_review.updatedAt = datetime.datetime.now(datetime.timezone.utc)
updated_review.reviewedAt = datetime.datetime.now(datetime.timezone.utc)
mock_prisma.return_value.update = AsyncMock(return_value=updated_review)
# Mock gather to simulate parallel updates
mocker.patch(
"backend.data.human_review.asyncio.gather",
new=AsyncMock(return_value=[updated_review]),
)
result = await process_all_reviews_for_execution(
user_id="test-user-123",
review_decisions={
"test_node_123": (ReviewStatus.APPROVED, {"data": "modified"}, "Approved")
},
)
assert len(result) == 1
assert "test_node_123" in result
assert result["test_node_123"].status == ReviewStatus.APPROVED
@pytest.mark.asyncio
async def test_process_all_reviews_for_execution_validation_errors(
mocker: pytest_mock.MockFixture,
):
"""Test validation errors in process_all_reviews_for_execution"""
# Mock finding fewer reviews than requested (some not found)
mock_find_many = mocker.patch("backend.data.human_review.PendingHumanReview.prisma")
mock_find_many.return_value.find_many = AsyncMock(
return_value=[]
) # No reviews found
with pytest.raises(ValueError, match="Reviews not found"):
await process_all_reviews_for_execution(
user_id="test-user-123",
review_decisions={
"nonexistent_node": (ReviewStatus.APPROVED, {"data": "test"}, "message")
},
)
@pytest.mark.asyncio
async def test_process_all_reviews_edit_permission_error(
mocker: pytest_mock.MockFixture,
sample_db_review,
):
"""Test editing non-editable review"""
# Set review as non-editable
sample_db_review.editable = False
# Mock finding reviews
mock_find_many = mocker.patch("backend.data.human_review.PendingHumanReview.prisma")
mock_find_many.return_value.find_many = AsyncMock(return_value=[sample_db_review])
with pytest.raises(ValueError, match="not editable"):
await process_all_reviews_for_execution(
user_id="test-user-123",
review_decisions={
"test_node_123": (
ReviewStatus.APPROVED,
{"data": "modified"},
"message",
)
},
)
@pytest.mark.asyncio
async def test_process_all_reviews_mixed_approval_rejection(
mocker: pytest_mock.MockFixture,
sample_db_review,
):
"""Test processing mixed approval and rejection decisions"""
# Create second review for rejection
second_review = Mock()
second_review.nodeExecId = "test_node_456"
second_review.userId = "test-user-123"
second_review.graphExecId = "test_graph_exec_456"
second_review.graphId = "test_graph_789"
second_review.graphVersion = 1
second_review.payload = {"data": "original"}
second_review.instructions = "Second review"
second_review.editable = True
second_review.status = ReviewStatus.WAITING
second_review.reviewMessage = None
second_review.wasEdited = False
second_review.processed = False
second_review.createdAt = datetime.datetime.now(datetime.timezone.utc)
second_review.updatedAt = None
second_review.reviewedAt = None
# Mock finding reviews
mock_find_many = mocker.patch("backend.data.human_review.PendingHumanReview.prisma")
mock_find_many.return_value.find_many = AsyncMock(
return_value=[sample_db_review, second_review]
)
# Mock updating reviews
approved_review = Mock()
approved_review.nodeExecId = "test_node_123"
approved_review.userId = "test-user-123"
approved_review.graphExecId = "test_graph_exec_456"
approved_review.graphId = "test_graph_789"
approved_review.graphVersion = 1
approved_review.payload = {"data": "modified"}
approved_review.instructions = "Please review"
approved_review.editable = True
approved_review.status = ReviewStatus.APPROVED
approved_review.reviewMessage = "Approved"
approved_review.wasEdited = True
approved_review.processed = False
approved_review.createdAt = datetime.datetime.now(datetime.timezone.utc)
approved_review.updatedAt = datetime.datetime.now(datetime.timezone.utc)
approved_review.reviewedAt = datetime.datetime.now(datetime.timezone.utc)
rejected_review = Mock()
rejected_review.nodeExecId = "test_node_456"
rejected_review.userId = "test-user-123"
rejected_review.graphExecId = "test_graph_exec_456"
rejected_review.graphId = "test_graph_789"
rejected_review.graphVersion = 1
rejected_review.payload = {"data": "original"}
rejected_review.instructions = "Please review"
rejected_review.editable = True
rejected_review.status = ReviewStatus.REJECTED
rejected_review.reviewMessage = "Rejected"
rejected_review.wasEdited = False
rejected_review.processed = False
rejected_review.createdAt = datetime.datetime.now(datetime.timezone.utc)
rejected_review.updatedAt = datetime.datetime.now(datetime.timezone.utc)
rejected_review.reviewedAt = datetime.datetime.now(datetime.timezone.utc)
mocker.patch(
"backend.data.human_review.asyncio.gather",
new=AsyncMock(return_value=[approved_review, rejected_review]),
)
result = await process_all_reviews_for_execution(
user_id="test-user-123",
review_decisions={
"test_node_123": (ReviewStatus.APPROVED, {"data": "modified"}, "Approved"),
"test_node_456": (ReviewStatus.REJECTED, None, "Rejected"),
},
)
assert len(result) == 2
assert "test_node_123" in result
assert "test_node_456" in result

View File

@@ -1,5 +1,5 @@
import logging
from typing import AsyncGenerator, Literal, Optional, overload
from typing import TYPE_CHECKING, AsyncGenerator, Literal, Optional, overload
from prisma.models import AgentNode, AgentPreset, IntegrationWebhook
from prisma.types import (
@@ -19,10 +19,12 @@ from backend.integrations.creds_manager import IntegrationCredentialsManager
from backend.integrations.providers import ProviderName
from backend.integrations.webhooks import get_webhook_manager
from backend.integrations.webhooks.utils import webhook_ingress_url
from backend.server.v2.library.model import LibraryAgentPreset
from backend.util.exceptions import NotFoundError
from backend.util.json import SafeJson
if TYPE_CHECKING:
from backend.server.v2.library.model import LibraryAgentPreset
from .db import BaseDbModel
from .graph import NodeModel
@@ -64,7 +66,7 @@ class Webhook(BaseDbModel):
class WebhookWithRelations(Webhook):
triggered_nodes: list[NodeModel]
triggered_presets: list[LibraryAgentPreset]
triggered_presets: list["LibraryAgentPreset"]
@staticmethod
def from_db(webhook: IntegrationWebhook):
@@ -73,6 +75,12 @@ class WebhookWithRelations(Webhook):
"AgentNodes and AgentPresets must be included in "
"IntegrationWebhook query with relations"
)
# LibraryAgentPreset import is moved to TYPE_CHECKING to avoid a circular import:
# integrations.py → library/model.py → integrations.py (for Webhook).
# Import it at runtime here, in WebhookWithRelations.from_db(), instead.
from backend.server.v2.library.model import LibraryAgentPreset
return WebhookWithRelations(
**Webhook.from_db(webhook).model_dump(),
triggered_nodes=[NodeModel.from_db(node) for node in webhook.AgentNodes],
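
The hunk above moves the LibraryAgentPreset import behind TYPE_CHECKING and re-imports it inside from_db(). Below is a compact, runnable sketch of that pattern; decimal.Decimal stands in for the class that would otherwise complete the import cycle, purely so the snippet executes on its own.

```python
from __future__ import annotations

from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # Seen only by type checkers; never executed, so it cannot create an import cycle.
    from decimal import Decimal


def to_decimals(raw: list[str]) -> list["Decimal"]:
    # Deferred runtime import: runs on first call, after every module involved in a
    # hypothetical cycle has finished initialising.
    from decimal import Decimal

    return [Decimal(value) for value in raw]


print(to_decimals(["1.5", "2.25"]))  # [Decimal('1.5'), Decimal('2.25')]
```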

View File

@@ -22,7 +22,7 @@ from typing import (
from urllib.parse import urlparse
from uuid import uuid4
from prisma.enums import CreditTransactionType
from prisma.enums import CreditTransactionType, OnboardingStep
from pydantic import (
BaseModel,
ConfigDict,
@@ -46,6 +46,7 @@ from backend.util.settings import Secrets
# Type alias for any provider name (including custom ones)
AnyProviderName = str # Will be validated as ProviderName at runtime
USER_TIMEZONE_NOT_SET = "not-set"
class User(BaseModel):
@@ -98,7 +99,7 @@ class User(BaseModel):
# User timezone for scheduling and time display
timezone: str = Field(
default="not-set",
default=USER_TIMEZONE_NOT_SET,
description="User timezone (IANA timezone identifier or 'not-set')",
)
@@ -155,7 +156,7 @@ class User(BaseModel):
notify_on_daily_summary=prisma_user.notifyOnDailySummary or True,
notify_on_weekly_summary=prisma_user.notifyOnWeeklySummary or True,
notify_on_monthly_summary=prisma_user.notifyOnMonthlySummary or True,
timezone=prisma_user.timezone or "not-set",
timezone=prisma_user.timezone or USER_TIMEZONE_NOT_SET,
)
@@ -433,6 +434,18 @@ class OAuthState(BaseModel):
code_verifier: Optional[str] = None
"""Unix timestamp (seconds) indicating when this OAuth state expires"""
scopes: list[str]
# Fields for external API OAuth flows
callback_url: Optional[str] = None
"""External app's callback URL for OAuth redirect"""
state_metadata: dict[str, Any] = Field(default_factory=dict)
"""Metadata to echo back to external app on completion"""
initiated_by_api_key_id: Optional[str] = None
"""ID of the API key that initiated this OAuth flow"""
@property
def is_external(self) -> bool:
"""Whether this OAuth flow was initiated via external API."""
return self.callback_url is not None
class UserMetadata(BaseModel):
@@ -855,3 +868,20 @@ class UserExecutionSummaryStats(BaseModel):
total_execution_time: float = Field(default=0)
average_execution_time: float = Field(default=0)
cost_breakdown: dict[str, float] = Field(default_factory=dict)
class UserOnboarding(BaseModel):
userId: str
completedSteps: list[OnboardingStep]
walletShown: bool
notified: list[OnboardingStep]
rewardedFor: list[OnboardingStep]
usageReason: Optional[str]
integrations: list[str]
otherIntegrations: Optional[str]
selectedStoreListingVersionId: Optional[str]
agentInput: Optional[dict[str, Any]]
onboardingAgentExecutionId: Optional[str]
agentRuns: int
lastRunAt: Optional[datetime]
consecutiveRunDays: int
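
A self-contained sketch (pydantic assumed) of the external-OAuth additions to OAuthState above: three optional fields plus a derived is_external property. The field names mirror the diff; the class itself is a stand-in, not the project model.

```python
from typing import Any, Optional

from pydantic import BaseModel, Field


class OAuthStateSketch(BaseModel):
    scopes: list[str] = Field(default_factory=list)
    # Populated only when the flow was started through the external API.
    callback_url: Optional[str] = None
    state_metadata: dict[str, Any] = Field(default_factory=dict)
    initiated_by_api_key_id: Optional[str] = None

    @property
    def is_external(self) -> bool:
        # An external flow is identified purely by the presence of a callback URL.
        return self.callback_url is not None


internal = OAuthStateSketch()
external = OAuthStateSketch(callback_url="https://example.com/oauth/done")
print(internal.is_external, external.is_external)  # False True
```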

View File

@@ -2,7 +2,7 @@ from __future__ import annotations
from typing import AsyncGenerator
from pydantic import BaseModel
from pydantic import BaseModel, field_serializer
from backend.data.event_bus import AsyncRedisEventBus
from backend.server.model import NotificationPayload
@@ -15,6 +15,11 @@ class NotificationEvent(BaseModel):
user_id: str
payload: NotificationPayload
@field_serializer("payload")
def serialize_payload(self, payload: NotificationPayload):
"""Ensure extra fields survive Redis serialization."""
return payload.model_dump()
class AsyncRedisNotificationEventBus(AsyncRedisEventBus[NotificationEvent]):
Model = NotificationEvent # type: ignore
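
One plausible reading of what the field_serializer above guards against, assuming pydantic v2: when the declared field type is the base payload class but the runtime value is a richer subclass (as with OnboardingNotificationPayload), default serialization follows the declared type and drops the extra fields, while dumping via the instance's own model_dump() keeps them. The class names below are illustrative.

```python
from typing import Optional

from pydantic import BaseModel, field_serializer


class Payload(BaseModel):
    type: str


class OnboardingPayload(Payload):
    step: Optional[str] = None


class EventDefault(BaseModel):
    user_id: str
    payload: Payload


class EventWithSerializer(BaseModel):
    user_id: str
    payload: Payload

    @field_serializer("payload")
    def serialize_payload(self, payload: Payload):
        return payload.model_dump()


payload = OnboardingPayload(type="onboarding", step="WELCOME")
# Default: serialized per the declared Payload type, so 'step' is dropped
# (pydantic also emits a serializer warning here).
print(EventDefault(user_id="u1", payload=payload).model_dump())
# {'user_id': 'u1', 'payload': {'type': 'onboarding'}}
print(EventWithSerializer(user_id="u1", payload=payload).model_dump())
# {'user_id': 'u1', 'payload': {'type': 'onboarding', 'step': 'WELCOME'}}
```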

View File

@@ -1,24 +1,30 @@
import re
from datetime import datetime
from typing import Any, Optional
from datetime import datetime, timedelta, timezone
from typing import Any, Literal, Optional, cast
from zoneinfo import ZoneInfo
import prisma
import pydantic
from prisma.enums import OnboardingStep
from prisma.models import UserOnboarding
from prisma.types import UserOnboardingCreateInput, UserOnboardingUpdateInput
from prisma.types import (
UserOnboardingCreateInput,
UserOnboardingUpdateInput,
UserOnboardingUpsertInput,
)
from backend.data.block import get_blocks
from backend.data import execution as execution_db
from backend.data.credit import get_user_credit_model
from backend.data.model import CredentialsMetaInput
from backend.data.notification_bus import (
AsyncRedisNotificationEventBus,
NotificationEvent,
)
from backend.data.user import get_user_by_id
from backend.server.model import OnboardingNotificationPayload
from backend.server.v2.store.model import StoreAgentDetails
from backend.util.cache import cached
from backend.util.json import SafeJson
from backend.util.timezone_utils import get_user_timezone_or_utc
# Mapping from user reason id to categories to search for when choosing agent to show
REASON_MAPPING: dict[str, list[str]] = {
@@ -31,9 +37,20 @@ REASON_MAPPING: dict[str, list[str]] = {
POINTS_AGENT_COUNT = 50 # Number of agents to calculate points for
MIN_AGENT_COUNT = 2 # Minimum number of marketplace agents to enable onboarding
FrontendOnboardingStep = Literal[
OnboardingStep.WELCOME,
OnboardingStep.USAGE_REASON,
OnboardingStep.INTEGRATIONS,
OnboardingStep.AGENT_CHOICE,
OnboardingStep.AGENT_NEW_RUN,
OnboardingStep.AGENT_INPUT,
OnboardingStep.CONGRATS,
OnboardingStep.MARKETPLACE_VISIT,
OnboardingStep.BUILDER_OPEN,
]
class UserOnboardingUpdate(pydantic.BaseModel):
completedSteps: Optional[list[OnboardingStep]] = None
walletShown: Optional[bool] = None
notified: Optional[list[OnboardingStep]] = None
usageReason: Optional[str] = None
@@ -42,9 +59,6 @@ class UserOnboardingUpdate(pydantic.BaseModel):
selectedStoreListingVersionId: Optional[str] = None
agentInput: Optional[dict[str, Any]] = None
onboardingAgentExecutionId: Optional[str] = None
agentRuns: Optional[int] = None
lastRunAt: Optional[datetime] = None
consecutiveRunDays: Optional[int] = None
async def get_user_onboarding(user_id: str):
@@ -83,14 +97,6 @@ async def reset_user_onboarding(user_id: str):
async def update_user_onboarding(user_id: str, data: UserOnboardingUpdate):
update: UserOnboardingUpdateInput = {}
onboarding = await get_user_onboarding(user_id)
if data.completedSteps is not None:
update["completedSteps"] = list(
set(data.completedSteps + onboarding.completedSteps)
)
for step in data.completedSteps:
if step not in onboarding.completedSteps:
await _reward_user(user_id, onboarding, step)
await _send_onboarding_notification(user_id, step)
if data.walletShown:
update["walletShown"] = data.walletShown
if data.notified is not None:
@@ -107,19 +113,16 @@ async def update_user_onboarding(user_id: str, data: UserOnboardingUpdate):
update["agentInput"] = SafeJson(data.agentInput)
if data.onboardingAgentExecutionId is not None:
update["onboardingAgentExecutionId"] = data.onboardingAgentExecutionId
if data.agentRuns is not None and data.agentRuns > onboarding.agentRuns:
update["agentRuns"] = data.agentRuns
if data.lastRunAt is not None:
update["lastRunAt"] = data.lastRunAt
if data.consecutiveRunDays is not None:
update["consecutiveRunDays"] = data.consecutiveRunDays
return await UserOnboarding.prisma().upsert(
where={"userId": user_id},
data={
"create": {"userId": user_id, **update},
"update": update,
},
data=cast(
UserOnboardingUpsertInput,
{
"create": {"userId": user_id, **update},
"update": update,
},
),
)
@@ -161,14 +164,12 @@ async def _reward_user(user_id: str, onboarding: UserOnboarding, step: Onboardin
if step in onboarding.rewardedFor:
return
onboarding.rewardedFor.append(step)
user_credit_model = await get_user_credit_model(user_id)
await user_credit_model.onboarding_reward(user_id, reward, step)
await UserOnboarding.prisma().update(
where={"userId": user_id},
data={
"completedSteps": list(set(onboarding.completedSteps + [step])),
"rewardedFor": onboarding.rewardedFor,
"rewardedFor": list(set(onboarding.rewardedFor + [step])),
},
)
@@ -177,31 +178,52 @@ async def complete_onboarding_step(user_id: str, step: OnboardingStep):
"""
Completes the specified onboarding step for the user if not already completed.
"""
onboarding = await get_user_onboarding(user_id)
if step not in onboarding.completedSteps:
await update_user_onboarding(
user_id,
UserOnboardingUpdate(completedSteps=onboarding.completedSteps + [step]),
await UserOnboarding.prisma().update(
where={"userId": user_id},
data={
"completedSteps": list(set(onboarding.completedSteps + [step])),
},
)
await _reward_user(user_id, onboarding, step)
await _send_onboarding_notification(user_id, step)
async def _send_onboarding_notification(user_id: str, step: OnboardingStep):
async def _send_onboarding_notification(
user_id: str, step: OnboardingStep | None, event: str = "step_completed"
):
"""
Sends an onboarding notification to the user for the specified step.
Sends an onboarding notification to the user.
"""
payload = OnboardingNotificationPayload(
type="onboarding",
event="step_completed",
step=step.value,
event=event,
step=step,
)
await AsyncRedisNotificationEventBus().publish(
NotificationEvent(user_id=user_id, payload=payload)
)
def clean_and_split(text: str) -> list[str]:
async def complete_re_run_agent(user_id: str, graph_id: str) -> None:
"""
Complete RE_RUN_AGENT step when a user runs a graph they've run before.
Keeps overhead low by only counting executions if the step is still pending.
"""
onboarding = await get_user_onboarding(user_id)
if OnboardingStep.RE_RUN_AGENT in onboarding.completedSteps:
return
# Includes current execution, so count > 1 means there was at least one prior run.
previous_exec_count = await execution_db.get_graph_executions_count(
user_id=user_id, graph_id=graph_id
)
if previous_exec_count > 1:
await complete_onboarding_step(user_id, OnboardingStep.RE_RUN_AGENT)
def _clean_and_split(text: str) -> list[str]:
"""
Removes all special characters from a string, truncates it to 100 characters,
and splits it by whitespace and commas.
@@ -224,7 +246,7 @@ def clean_and_split(text: str) -> list[str]:
return words
def calculate_points(
def _calculate_points(
agent, categories: list[str], custom: list[str], integrations: list[str]
) -> int:
"""
@@ -268,18 +290,85 @@ def calculate_points(
return int(points)
def get_credentials_blocks() -> dict[str, str]:
# Returns a dictionary of block id to credentials field name
creds: dict[str, str] = {}
blocks = get_blocks()
for id, block in blocks.items():
for field_name, field_info in block().input_schema.model_fields.items():
if field_info.annotation == CredentialsMetaInput:
creds[id] = field_name
return creds
def _normalize_datetime(value: datetime | None) -> datetime | None:
if value is None:
return None
if value.tzinfo is None:
return value.replace(tzinfo=timezone.utc)
return value.astimezone(timezone.utc)
CREDENTIALS_FIELDS: dict[str, str] = get_credentials_blocks()
def _calculate_consecutive_run_days(
last_run_at: datetime | None, current_consecutive_days: int, user_timezone: str
) -> tuple[datetime, int]:
tz = ZoneInfo(user_timezone)
local_now = datetime.now(tz)
normalized_last_run = _normalize_datetime(last_run_at)
if normalized_last_run is None:
return local_now.astimezone(timezone.utc), 1
last_run_local = normalized_last_run.astimezone(tz)
last_run_date = last_run_local.date()
today = local_now.date()
if last_run_date == today:
return local_now.astimezone(timezone.utc), current_consecutive_days
if last_run_date == today - timedelta(days=1):
return local_now.astimezone(timezone.utc), current_consecutive_days + 1
return local_now.astimezone(timezone.utc), 1
def _get_run_milestone_steps(
new_run_count: int, consecutive_days: int
) -> list[OnboardingStep]:
milestones: list[OnboardingStep] = []
if new_run_count >= 10:
milestones.append(OnboardingStep.RUN_AGENTS)
if new_run_count >= 100:
milestones.append(OnboardingStep.RUN_AGENTS_100)
if consecutive_days >= 3:
milestones.append(OnboardingStep.RUN_3_DAYS)
if consecutive_days >= 14:
milestones.append(OnboardingStep.RUN_14_DAYS)
return milestones
async def _get_user_timezone(user_id: str) -> str:
user = await get_user_by_id(user_id)
return get_user_timezone_or_utc(user.timezone if user else None)
async def increment_runs(user_id: str):
"""
Increment a user's run counters and trigger any onboarding milestones.
"""
user_timezone = await _get_user_timezone(user_id)
onboarding = await get_user_onboarding(user_id)
new_run_count = onboarding.agentRuns + 1
last_run_at, consecutive_run_days = _calculate_consecutive_run_days(
onboarding.lastRunAt, onboarding.consecutiveRunDays, user_timezone
)
await UserOnboarding.prisma().update(
where={"userId": user_id},
data={
"agentRuns": {"increment": 1},
"lastRunAt": last_run_at,
"consecutiveRunDays": consecutive_run_days,
},
)
milestones = _get_run_milestone_steps(new_run_count, consecutive_run_days)
new_steps = [step for step in milestones if step not in onboarding.completedSteps]
for step in new_steps:
await complete_onboarding_step(user_id, step)
# Send progress notification if no steps were completed, so client refetches onboarding state
if not new_steps:
await _send_onboarding_notification(user_id, None, event="increment_runs")
async def get_recommended_agents(user_id: str) -> list[StoreAgentDetails]:
@@ -288,7 +377,7 @@ async def get_recommended_agents(user_id: str) -> list[StoreAgentDetails]:
where_clause: dict[str, Any] = {}
custom = clean_and_split((user_onboarding.usageReason or "").lower())
custom = _clean_and_split((user_onboarding.usageReason or "").lower())
if categories:
where_clause["OR"] = [
@@ -336,7 +425,7 @@ async def get_recommended_agents(user_id: str) -> list[StoreAgentDetails]:
# Calculate points for the first X agents and choose the top 2
agent_points = []
for agent in storeAgents[:POINTS_AGENT_COUNT]:
points = calculate_points(
points = _calculate_points(
agent, categories, custom, user_onboarding.integrations
)
agent_points.append((agent, points))
@@ -350,6 +439,7 @@ async def get_recommended_agents(user_id: str) -> list[StoreAgentDetails]:
slug=agent.slug,
agent_name=agent.agent_name,
agent_video=agent.agent_video or "",
agent_output_demo=agent.agent_output_demo or "",
agent_image=agent.agent_image,
creator=agent.creator_username,
creator_avatar=agent.creator_avatar,
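
A standalone, simplified sketch of the streak rule implemented in _calculate_consecutive_run_days above (the real helper also normalises naive datetimes and returns the new lastRunAt). Dates are compared in the user's timezone: running again on the same day keeps the streak, running the next day extends it, and any longer gap resets it. UTC is used below only to keep the sample output deterministic.

```python
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo


def next_streak(last_run_at: datetime | None, current_streak: int, tz_name: str) -> int:
    tz = ZoneInfo(tz_name)
    today = datetime.now(tz).date()
    if last_run_at is None:
        return 1                      # first run ever
    last_run_date = last_run_at.astimezone(tz).date()
    if last_run_date == today:
        return current_streak         # already ran today: streak unchanged
    if last_run_date == today - timedelta(days=1):
        return current_streak + 1     # ran yesterday: streak extends
    return 1                          # gap of 2+ days: streak resets


now = datetime.now(timezone.utc)
print(next_streak(None, 0, "UTC"))                     # 1
print(next_streak(now, 4, "UTC"))                      # 4
print(next_streak(now - timedelta(days=1), 4, "UTC"))  # 5
print(next_streak(now - timedelta(days=7), 4, "UTC"))  # 1
```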

View File

@@ -3,12 +3,18 @@ from contextlib import asynccontextmanager
from typing import TYPE_CHECKING, Callable, Concatenate, ParamSpec, TypeVar, cast
from backend.data import db
from backend.data.analytics import (
get_accuracy_trends_and_alerts,
get_marketplace_graphs_for_monitoring,
)
from backend.data.credit import UsageTransactionMetadata, get_user_credit_model
from backend.data.execution import (
create_graph_execution,
get_block_error_stats,
get_child_graph_executions,
get_execution_kv_data,
get_execution_outputs_by_node_exec_id,
get_frequently_executed_graphs,
get_graph_execution_meta,
get_graph_executions,
get_graph_executions_count,
@@ -28,9 +34,15 @@ from backend.data.graph import (
get_connected_output_nodes,
get_graph,
get_graph_metadata,
get_graph_settings,
get_node,
validate_graph_execution_permissions,
)
from backend.data.human_review import (
get_or_create_human_review,
has_pending_reviews_for_graph_exec,
update_review_processed_status,
)
from backend.data.notifications import (
clear_all_user_notification_batches,
create_or_add_to_user_notification_batch,
@@ -136,15 +148,20 @@ class DatabaseManager(AppService):
update_graph_execution_stats = _(update_graph_execution_stats)
upsert_execution_input = _(upsert_execution_input)
upsert_execution_output = _(upsert_execution_output)
get_execution_outputs_by_node_exec_id = _(get_execution_outputs_by_node_exec_id)
get_execution_kv_data = _(get_execution_kv_data)
set_execution_kv_data = _(set_execution_kv_data)
get_block_error_stats = _(get_block_error_stats)
get_accuracy_trends_and_alerts = _(get_accuracy_trends_and_alerts)
get_frequently_executed_graphs = _(get_frequently_executed_graphs)
get_marketplace_graphs_for_monitoring = _(get_marketplace_graphs_for_monitoring)
# Graphs
get_node = _(get_node)
get_graph = _(get_graph)
get_connected_output_nodes = _(get_connected_output_nodes)
get_graph_metadata = _(get_graph_metadata)
get_graph_settings = _(get_graph_settings)
# Credits
spend_credits = _(_spend_credits, name="spend_credits")
@@ -161,6 +178,11 @@ class DatabaseManager(AppService):
get_user_email_verification = _(get_user_email_verification)
get_user_notification_preference = _(get_user_notification_preference)
# Human In The Loop
get_or_create_human_review = _(get_or_create_human_review)
has_pending_reviews_for_graph_exec = _(has_pending_reviews_for_graph_exec)
update_review_processed_status = _(update_review_processed_status)
# Notifications - async
clear_all_user_notification_batches = _(clear_all_user_notification_batches)
create_or_add_to_user_notification_batch = _(
@@ -214,6 +236,13 @@ class DatabaseManagerClient(AppServiceClient):
# Block error monitoring
get_block_error_stats = _(d.get_block_error_stats)
# Execution accuracy monitoring
get_accuracy_trends_and_alerts = _(d.get_accuracy_trends_and_alerts)
get_frequently_executed_graphs = _(d.get_frequently_executed_graphs)
get_marketplace_graphs_for_monitoring = _(d.get_marketplace_graphs_for_monitoring)
# Human In The Loop
has_pending_reviews_for_graph_exec = _(d.has_pending_reviews_for_graph_exec)
# User Emails
get_user_email_by_id = _(d.get_user_email_by_id)
@@ -241,6 +270,7 @@ class DatabaseManagerAsyncClient(AppServiceClient):
get_latest_node_execution = d.get_latest_node_execution
get_graph = d.get_graph
get_graph_metadata = d.get_graph_metadata
get_graph_settings = d.get_graph_settings
get_graph_execution_meta = d.get_graph_execution_meta
get_node = d.get_node
get_node_execution = d.get_node_execution
@@ -249,6 +279,7 @@ class DatabaseManagerAsyncClient(AppServiceClient):
get_user_integrations = d.get_user_integrations
upsert_execution_input = d.upsert_execution_input
upsert_execution_output = d.upsert_execution_output
get_execution_outputs_by_node_exec_id = d.get_execution_outputs_by_node_exec_id
update_graph_execution_stats = d.update_graph_execution_stats
update_node_execution_status = d.update_node_execution_status
update_node_execution_status_batch = d.update_node_execution_status_batch
@@ -256,6 +287,10 @@ class DatabaseManagerAsyncClient(AppServiceClient):
get_execution_kv_data = d.get_execution_kv_data
set_execution_kv_data = d.set_execution_kv_data
# Human In The Loop
get_or_create_human_review = d.get_or_create_human_review
update_review_processed_status = d.update_review_processed_status
# User Comms
get_active_user_ids_in_timerange = d.get_active_user_ids_in_timerange
get_user_email_by_id = d.get_user_email_by_id

View File

@@ -29,6 +29,7 @@ from backend.data.block import (
from backend.data.credit import UsageTransactionMetadata
from backend.data.dynamic_fields import parse_execution_output
from backend.data.execution import (
ExecutionContext,
ExecutionQueue,
ExecutionStatus,
GraphExecution,
@@ -36,7 +37,6 @@ from backend.data.execution import (
NodeExecutionEntry,
NodeExecutionResult,
NodesInputMasks,
UserContext,
)
from backend.data.graph import Link, Node
from backend.data.model import GraphExecutionStats, NodeExecutionStats
@@ -133,9 +133,8 @@ def execute_graph(
cluster_lock: ClusterLock,
):
"""Execute graph using thread-local ExecutionProcessor instance"""
return _tls.processor.on_graph_execution(
graph_exec_entry, cancel_event, cluster_lock
)
processor: ExecutionProcessor = _tls.processor
return processor.on_graph_execution(graph_exec_entry, cancel_event, cluster_lock)
T = TypeVar("T")
@@ -143,8 +142,8 @@ T = TypeVar("T")
async def execute_node(
node: Node,
creds_manager: IntegrationCredentialsManager,
data: NodeExecutionEntry,
execution_processor: "ExecutionProcessor",
execution_stats: NodeExecutionStats | None = None,
nodes_input_masks: Optional[NodesInputMasks] = None,
) -> BlockOutput:
@@ -164,9 +163,12 @@ async def execute_node(
user_id = data.user_id
graph_exec_id = data.graph_exec_id
graph_id = data.graph_id
graph_version = data.graph_version
node_exec_id = data.node_exec_id
node_id = data.node_id
node_block = node.block
execution_context = data.execution_context
creds_manager = execution_processor.creds_manager
log_metadata = LogMetadata(
logger=_logger,
@@ -204,28 +206,66 @@ async def execute_node(
# Inject extra execution arguments for the blocks via kwargs
extra_exec_kwargs: dict = {
"graph_id": graph_id,
"graph_version": graph_version,
"node_id": node_id,
"graph_exec_id": graph_exec_id,
"node_exec_id": node_exec_id,
"user_id": user_id,
"execution_context": execution_context,
"execution_processor": execution_processor,
}
# Add user context from NodeExecutionEntry
extra_exec_kwargs["user_context"] = data.user_context
# Last-minute fetch credentials + acquire a system-wide read-write lock to prevent
# changes during execution. ⚠️ This means a set of credentials can only be used by
# one (running) block at a time; simultaneous execution of blocks using same
# credentials is not supported.
creds_lock = None
creds_locks: list[AsyncRedisLock] = []
input_model = cast(type[BlockSchema], node_block.input_schema)
# Handle regular credentials fields
for field_name, input_type in input_model.get_credentials_fields().items():
credentials_meta = input_type(**input_data[field_name])
credentials, creds_lock = await creds_manager.acquire(
user_id, credentials_meta.id
)
credentials, lock = await creds_manager.acquire(user_id, credentials_meta.id)
creds_locks.append(lock)
extra_exec_kwargs[field_name] = credentials
# Handle auto-generated credentials (e.g., from GoogleDriveFileInput)
for kwarg_name, info in input_model.get_auto_credentials_fields().items():
field_name = info["field_name"]
field_data = input_data.get(field_name)
if field_data and isinstance(field_data, dict):
# Check if _credentials_id key exists in the field data
if "_credentials_id" in field_data:
cred_id = field_data["_credentials_id"]
if cred_id:
# Credential ID provided - acquire credentials
provider = info.get("config", {}).get(
"provider", "external service"
)
file_name = field_data.get("name", "selected file")
try:
credentials, lock = await creds_manager.acquire(
user_id, cred_id
)
creds_locks.append(lock)
extra_exec_kwargs[kwarg_name] = credentials
except ValueError:
# Credential was deleted or doesn't exist
raise ValueError(
f"Authentication expired for '{file_name}' in field '{field_name}'. "
f"The saved {provider.capitalize()} credentials no longer exist. "
f"Please re-select the file to re-authenticate."
)
# else: _credentials_id is explicitly None, skip credentials (for chained data)
else:
# _credentials_id key missing entirely - this is an error
provider = info.get("config", {}).get("provider", "external service")
file_name = field_data.get("name", "selected file")
raise ValueError(
f"Authentication missing for '{file_name}' in field '{field_name}'. "
f"Please re-select the file to authenticate with {provider.capitalize()}."
)
output_size = 0
# sentry tracking nonsense to get user counts for blocks because isolation scopes don't work :(
@@ -241,8 +281,8 @@ async def execute_node(
scope.set_tag("node_id", node_id)
scope.set_tag("block_name", node_block.name)
scope.set_tag("block_id", node_block.id)
for k, v in (data.user_context or UserContext(timezone="UTC")).model_dump().items():
scope.set_tag(f"user_context.{k}", v)
for k, v in execution_context.model_dump().items():
scope.set_tag(f"execution_context.{k}", v)
try:
async for output_name, output_data in node_block.execute(
@@ -259,12 +299,17 @@ async def execute_node(
# Re-raise to maintain normal error flow
raise
finally:
# Ensure credentials are released even if execution fails
if creds_lock and (await creds_lock.locked()) and (await creds_lock.owned()):
try:
await creds_lock.release()
except Exception as e:
log_metadata.error(f"Failed to release credentials lock: {e}")
# Ensure all credentials are released even if execution fails
for creds_lock in creds_locks:
if (
creds_lock
and (await creds_lock.locked())
and (await creds_lock.owned())
):
try:
await creds_lock.release()
except Exception as e:
log_metadata.error(f"Failed to release credentials lock: {e}")
# Update execution stats
if execution_stats is not None:
@@ -284,9 +329,10 @@ async def _enqueue_next_nodes(
user_id: str,
graph_exec_id: str,
graph_id: str,
graph_version: int,
log_metadata: LogMetadata,
nodes_input_masks: Optional[NodesInputMasks],
user_context: UserContext,
execution_context: ExecutionContext,
) -> list[NodeExecutionEntry]:
async def add_enqueued_execution(
node_exec_id: str, node_id: str, block_id: str, data: BlockInput
@@ -301,11 +347,12 @@ async def _enqueue_next_nodes(
user_id=user_id,
graph_exec_id=graph_exec_id,
graph_id=graph_id,
graph_version=graph_version,
node_exec_id=node_exec_id,
node_id=node_id,
block_id=block_id,
inputs=data,
user_context=user_context,
execution_context=execution_context,
)
async def register_next_executions(node_link: Link) -> list[NodeExecutionEntry]:
@@ -334,17 +381,14 @@ async def _enqueue_next_nodes(
# Or the same input to be consumed multiple times.
async with synchronized(f"upsert_input-{next_node_id}-{graph_exec_id}"):
# Add output data to the earliest incomplete execution, or create a new one.
next_node_exec_id, next_node_input = await db_client.upsert_execution_input(
next_node_exec, next_node_input = await db_client.upsert_execution_input(
node_id=next_node_id,
graph_exec_id=graph_exec_id,
input_name=next_input_name,
input_data=next_data,
)
await async_update_node_execution_status(
db_client=db_client,
exec_id=next_node_exec_id,
status=ExecutionStatus.INCOMPLETE,
)
next_node_exec_id = next_node_exec.node_exec_id
await send_async_execution_update(next_node_exec)
# Complete missing static input pins data using the last execution input.
static_link_names = {
@@ -565,8 +609,8 @@ class ExecutionProcessor:
async for output_name, output_data in execute_node(
node=node,
creds_manager=self.creds_manager,
data=node_exec,
execution_processor=self,
execution_stats=stats,
nodes_input_masks=nodes_input_masks,
):
@@ -660,6 +704,16 @@ class ExecutionProcessor:
log_metadata.info(
f"⚙️ Graph execution #{graph_exec.graph_exec_id} is already running, continuing where it left off."
)
elif exec_meta.status == ExecutionStatus.REVIEW:
exec_meta.status = ExecutionStatus.RUNNING
log_metadata.info(
f"⚙️ Graph execution #{graph_exec.graph_exec_id} was waiting for review, resuming execution."
)
update_graph_execution_state(
db_client=db_client,
graph_exec_id=graph_exec.graph_exec_id,
status=ExecutionStatus.RUNNING,
)
elif exec_meta.status == ExecutionStatus.FAILED:
exec_meta.status = ExecutionStatus.RUNNING
log_metadata.info(
@@ -697,19 +751,21 @@ class ExecutionProcessor:
raise status
exec_meta.status = status
# Activity status handling
activity_response = asyncio.run_coroutine_threadsafe(
generate_activity_status_for_execution(
graph_exec_id=graph_exec.graph_exec_id,
graph_id=graph_exec.graph_id,
graph_version=graph_exec.graph_version,
execution_stats=exec_stats,
db_client=get_db_async_client(),
user_id=graph_exec.user_id,
execution_status=status,
),
self.node_execution_loop,
).result(timeout=60.0)
if status in [ExecutionStatus.COMPLETED, ExecutionStatus.FAILED]:
activity_response = asyncio.run_coroutine_threadsafe(
generate_activity_status_for_execution(
graph_exec_id=graph_exec.graph_exec_id,
graph_id=graph_exec.graph_id,
graph_version=graph_exec.graph_version,
execution_stats=exec_stats,
db_client=get_db_async_client(),
user_id=graph_exec.user_id,
execution_status=status,
),
self.node_execution_loop,
).result(timeout=60.0)
else:
activity_response = None
if activity_response is not None:
exec_stats.activity_status = activity_response["activity_status"]
exec_stats.correctness_score = activity_response["correctness_score"]
@@ -805,12 +861,17 @@ class ExecutionProcessor:
execution_stats_lock = threading.Lock()
# State holders ----------------------------------------------------
running_node_execution: dict[str, NodeExecutionProgress] = defaultdict(
self.running_node_execution: dict[str, NodeExecutionProgress] = defaultdict(
NodeExecutionProgress
)
running_node_evaluation: dict[str, Future] = {}
self.running_node_evaluation: dict[str, Future] = {}
self.execution_stats = execution_stats
self.execution_stats_lock = execution_stats_lock
execution_queue = ExecutionQueue[NodeExecutionEntry]()
running_node_execution = self.running_node_execution
running_node_evaluation = self.running_node_evaluation
try:
if db_client.get_credits(graph_exec.user_id) <= 0:
raise InsufficientBalanceError(
@@ -845,14 +906,18 @@ class ExecutionProcessor:
ExecutionStatus.RUNNING,
ExecutionStatus.QUEUED,
ExecutionStatus.TERMINATED,
ExecutionStatus.REVIEW,
],
):
node_entry = node_exec.to_node_execution_entry(graph_exec.user_context)
node_entry = node_exec.to_node_execution_entry(
graph_exec.execution_context
)
execution_queue.add(node_entry)
# ------------------------------------------------------------
# Main dispatch / polling loop -----------------------------
# ------------------------------------------------------------
while not execution_queue.empty():
if cancel.is_set():
break
@@ -1006,7 +1071,12 @@ class ExecutionProcessor:
elif error is not None:
execution_status = ExecutionStatus.FAILED
else:
execution_status = ExecutionStatus.COMPLETED
if db_client.has_pending_reviews_for_graph_exec(
graph_exec.graph_exec_id
):
execution_status = ExecutionStatus.REVIEW
else:
execution_status = ExecutionStatus.COMPLETED
if error:
execution_stats.error = str(error) or type(error).__name__
@@ -1142,9 +1212,10 @@ class ExecutionProcessor:
user_id=graph_exec.user_id,
graph_exec_id=graph_exec.graph_exec_id,
graph_id=graph_exec.graph_id,
graph_version=graph_exec.graph_version,
log_metadata=log_metadata,
nodes_input_masks=nodes_input_masks,
user_context=graph_exec.user_context,
execution_context=graph_exec.execution_context,
):
execution_queue.add(next_execution)
@@ -1534,36 +1605,32 @@ class ExecutionManager(AppProcess):
graph_exec_id = graph_exec_entry.graph_exec_id
user_id = graph_exec_entry.user_id
graph_id = graph_exec_entry.graph_id
parent_graph_exec_id = graph_exec_entry.parent_graph_exec_id
root_exec_id = graph_exec_entry.execution_context.root_execution_id
parent_exec_id = graph_exec_entry.execution_context.parent_execution_id
logger.info(
f"[{self.service_name}] Received RUN for graph_exec_id={graph_exec_id}, user_id={user_id}, executor_id={self.executor_id}"
+ (f", parent={parent_graph_exec_id}" if parent_graph_exec_id else "")
+ (f", root={root_exec_id}" if root_exec_id else "")
+ (f", parent={parent_exec_id}" if parent_exec_id else "")
)
# Check if parent execution is already terminated (prevents orphaned child executions)
if parent_graph_exec_id:
try:
parent_exec = get_db_client().get_graph_execution_meta(
execution_id=parent_graph_exec_id,
user_id=user_id,
# Check if root execution is already terminated (prevents orphaned child executions)
if root_exec_id and root_exec_id != graph_exec_id:
parent_exec = get_db_client().get_graph_execution_meta(
execution_id=root_exec_id,
user_id=user_id,
)
if parent_exec and parent_exec.status == ExecutionStatus.TERMINATED:
logger.info(
f"[{self.service_name}] Skipping execution {graph_exec_id} - parent {root_exec_id} is TERMINATED"
)
if parent_exec and parent_exec.status == ExecutionStatus.TERMINATED:
logger.info(
f"[{self.service_name}] Skipping execution {graph_exec_id} - parent {parent_graph_exec_id} is TERMINATED"
)
# Mark this child as terminated since parent was stopped
get_db_client().update_graph_execution_stats(
graph_exec_id=graph_exec_id,
status=ExecutionStatus.TERMINATED,
)
_ack_message(reject=False, requeue=False)
return
except Exception as e:
logger.warning(
f"[{self.service_name}] Could not check parent status for {graph_exec_id}: {e}"
# Mark this child as terminated since parent was stopped
get_db_client().update_graph_execution_stats(
graph_exec_id=graph_exec_id,
status=ExecutionStatus.TERMINATED,
)
# Continue execution if parent check fails (don't block on errors)
_ack_message(reject=False, requeue=False)
return
# Check user rate limit before processing
try:
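
A sketch of the acquire-many / release-all-in-finally shape used in the executor changes above, written with plain asyncio.Lock so it runs without Redis. The production code holds one redis-py async lock per credentials field, which is why its locked()/owned() checks are awaited there.

```python
import asyncio


async def run_with_credentials_locks(locks: list[asyncio.Lock]) -> None:
    acquired: list[asyncio.Lock] = []
    try:
        for lock in locks:
            await lock.acquire()      # one lock per set of credentials
            acquired.append(lock)
        print("all credentials locked; executing block...")
    finally:
        # Release whatever was acquired, even if execution failed part-way through.
        for lock in acquired:
            if lock.locked():
                try:
                    lock.release()
                except RuntimeError as exc:
                    print(f"Failed to release credentials lock: {exc}")


asyncio.run(run_with_credentials_locks([asyncio.Lock(), asyncio.Lock()]))
```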

View File

@@ -23,15 +23,18 @@ from dotenv import load_dotenv
from pydantic import BaseModel, Field, ValidationError
from sqlalchemy import MetaData, create_engine
from backend.data.auth.oauth import cleanup_expired_oauth_tokens
from backend.data.block import BlockInput
from backend.data.execution import GraphExecutionWithNodes
from backend.data.model import CredentialsMetaInput
from backend.data.onboarding import increment_runs
from backend.executor import utils as execution_utils
from backend.monitoring import (
NotificationJobArgs,
process_existing_batches,
process_weekly_summary,
report_block_error_rates,
report_execution_accuracy_alerts,
report_late_executions,
)
from backend.util.clients import get_scheduler_client
@@ -153,6 +156,7 @@ async def _execute_graph(**kwargs):
inputs=args.input_data,
graph_credentials_inputs=args.input_credentials,
)
await increment_runs(args.user_id)
elapsed = asyncio.get_event_loop().time() - start_time
logger.info(
f"Graph execution started with ID {graph_exec.id} for graph {args.graph_id} "
@@ -239,6 +243,17 @@ def cleanup_expired_files():
run_async(cleanup_expired_files_async())
def cleanup_oauth_tokens():
"""Clean up expired OAuth tokens from the database."""
# Wait for completion
run_async(cleanup_expired_oauth_tokens())
def execution_accuracy_alerts():
"""Check execution accuracy and send alerts if drops are detected."""
return report_execution_accuracy_alerts()
# Monitoring functions are now imported from monitoring module
@@ -438,6 +453,28 @@ class Scheduler(AppService):
jobstore=Jobstores.EXECUTION.value,
)
# OAuth Token Cleanup - configurable interval
self.scheduler.add_job(
cleanup_oauth_tokens,
id="cleanup_oauth_tokens",
trigger="interval",
replace_existing=True,
seconds=config.oauth_token_cleanup_interval_hours
* 3600, # Convert hours to seconds
jobstore=Jobstores.EXECUTION.value,
)
# Execution Accuracy Monitoring - configurable interval
self.scheduler.add_job(
execution_accuracy_alerts,
id="report_execution_accuracy_alerts",
trigger="interval",
replace_existing=True,
seconds=config.execution_accuracy_check_interval_hours
* 3600, # Convert hours to seconds
jobstore=Jobstores.EXECUTION.value,
)
self.scheduler.add_listener(job_listener, EVENT_JOB_EXECUTED | EVENT_JOB_ERROR)
self.scheduler.add_listener(job_missed_listener, EVENT_JOB_MISSED)
self.scheduler.add_listener(job_max_instances_listener, EVENT_JOB_MAX_INSTANCES)
@@ -585,6 +622,16 @@ class Scheduler(AppService):
"""Manually trigger cleanup of expired cloud storage files."""
return cleanup_expired_files()
@expose
def execute_cleanup_oauth_tokens(self):
"""Manually trigger cleanup of expired OAuth tokens."""
return cleanup_oauth_tokens()
@expose
def execute_report_execution_accuracy_alerts(self):
"""Manually trigger execution accuracy alert checking."""
return execution_accuracy_alerts()
class SchedulerClient(AppServiceClient):
@classmethod
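
A minimal sketch (APScheduler assumed) of the interval-job registration added above; the configured value is expressed in hours and converted to seconds for the trigger. The 6-hour constant and job body are illustrative, not the project's configuration.

```python
from apscheduler.schedulers.background import BackgroundScheduler

OAUTH_TOKEN_CLEANUP_INTERVAL_HOURS = 6  # hypothetical config value


def cleanup_oauth_tokens() -> None:
    print("cleaning up expired OAuth tokens...")


scheduler = BackgroundScheduler()
scheduler.add_job(
    cleanup_oauth_tokens,
    id="cleanup_oauth_tokens",
    trigger="interval",
    replace_existing=True,
    seconds=OAUTH_TOKEN_CLEANUP_INTERVAL_HOURS * 3600,  # hours -> seconds
)
scheduler.start()  # runs jobs on a daemon thread; exits with the process here
```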

View File

@@ -10,6 +10,7 @@ from pydantic import BaseModel, JsonValue, ValidationError
from backend.data import execution as execution_db
from backend.data import graph as graph_db
from backend.data import user as user_db
from backend.data.block import (
Block,
BlockCostType,
@@ -24,18 +25,17 @@ from backend.data.db import prisma
# Import dynamic field utilities from centralized location
from backend.data.dynamic_fields import merge_execution_input
from backend.data.execution import (
ExecutionContext,
ExecutionStatus,
GraphExecutionMeta,
GraphExecutionStats,
GraphExecutionWithNodes,
NodesInputMasks,
UserContext,
get_graph_execution,
)
from backend.data.graph import GraphModel, Node
from backend.data.model import CredentialsMetaInput
from backend.data.model import USER_TIMEZONE_NOT_SET, CredentialsMetaInput
from backend.data.rabbitmq import Exchange, ExchangeType, Queue, RabbitMQConfig
from backend.data.user import get_user_by_id
from backend.util.cache import cached
from backend.util.clients import (
get_async_execution_event_bus,
get_async_execution_queue,
@@ -51,32 +51,6 @@ from backend.util.logging import TruncatedLogger, is_structured_logging_enabled
from backend.util.settings import Config
from backend.util.type import convert
@cached(maxsize=1000, ttl_seconds=3600)
async def get_user_context(user_id: str) -> UserContext:
"""
Get UserContext for a user, always returns a valid context with timezone.
Defaults to UTC if user has no timezone set.
"""
user_context = UserContext(timezone="UTC") # Default to UTC
try:
if prisma.is_connected():
user = await get_user_by_id(user_id)
else:
user = await get_database_manager_async_client().get_user_by_id(user_id)
if user and user.timezone and user.timezone != "not-set":
user_context.timezone = user.timezone
logger.debug(f"Retrieved user context: timezone={user.timezone}")
else:
logger.debug("User has no timezone set, using UTC")
except Exception as e:
logger.warning(f"Could not fetch user timezone: {e}")
# Continue with UTC as default
return user_context
config = Config()
logger = TruncatedLogger(logging.getLogger(__name__), prefix="[GraphExecutorUtil]")
@@ -494,7 +468,6 @@ async def validate_and_construct_node_execution_input(
graph_version: The version of the graph to use.
graph_credentials_inputs: Credentials inputs to use.
nodes_input_masks: Node inputs to use.
is_sub_graph: Whether this is a sub-graph execution.
Returns:
GraphModel: Full graph object for the given `graph_id`.
@@ -762,8 +735,8 @@ async def add_graph_execution(
graph_version: Optional[int] = None,
graph_credentials_inputs: Optional[Mapping[str, CredentialsMetaInput]] = None,
nodes_input_masks: Optional[NodesInputMasks] = None,
parent_graph_exec_id: Optional[str] = None,
is_sub_graph: bool = False,
execution_context: Optional[ExecutionContext] = None,
graph_exec_id: Optional[str] = None,
) -> GraphExecutionWithNodes:
"""
Adds a graph execution to the queue and returns the execution entry.
@@ -778,33 +751,54 @@ async def add_graph_execution(
Keys should map to the keys generated by `GraphModel.aggregate_credentials_inputs`.
nodes_input_masks: Node inputs to use in the execution.
parent_graph_exec_id: The ID of the parent graph execution (for nested executions).
is_sub_graph: Whether this is a sub-graph execution.
graph_exec_id: If provided, resume this existing execution instead of creating a new one.
Returns:
GraphExecutionEntry: The entry for the graph execution.
Raises:
ValueError: If the graph is not found or if there are validation errors.
NotFoundError: If graph_exec_id is provided but execution is not found.
"""
if prisma.is_connected():
edb = execution_db
udb = user_db
gdb = graph_db
else:
edb = get_database_manager_async_client()
edb = udb = gdb = get_database_manager_async_client()
graph, starting_nodes_input, compiled_nodes_input_masks = (
await validate_and_construct_node_execution_input(
graph_id=graph_id,
# Get or create the graph execution
if graph_exec_id:
# Resume existing execution
graph_exec = await get_graph_execution(
user_id=user_id,
graph_inputs=inputs or {},
graph_version=graph_version,
graph_credentials_inputs=graph_credentials_inputs,
nodes_input_masks=nodes_input_masks,
is_sub_graph=is_sub_graph,
execution_id=graph_exec_id,
include_node_executions=True,
)
if not graph_exec:
raise NotFoundError(f"Graph execution #{graph_exec_id} not found.")
# Use existing execution's compiled input masks
compiled_nodes_input_masks = graph_exec.nodes_input_masks or {}
logger.info(f"Resuming graph execution #{graph_exec.id} for graph #{graph_id}")
else:
parent_exec_id = (
execution_context.parent_execution_id if execution_context else None
)
# Create new execution
graph, starting_nodes_input, compiled_nodes_input_masks = (
await validate_and_construct_node_execution_input(
graph_id=graph_id,
user_id=user_id,
graph_inputs=inputs or {},
graph_version=graph_version,
graph_credentials_inputs=graph_credentials_inputs,
nodes_input_masks=nodes_input_masks,
is_sub_graph=parent_exec_id is not None,
)
)
)
graph_exec = None
try:
# Sanity check: running add_graph_execution with the properties of
# the graph_exec created here should create the same execution again.
graph_exec = await edb.create_graph_execution(
user_id=user_id,
graph_id=graph_id,
@@ -814,20 +808,38 @@ async def add_graph_execution(
nodes_input_masks=nodes_input_masks,
starting_nodes_input=starting_nodes_input,
preset_id=preset_id,
parent_graph_exec_id=parent_graph_exec_id,
parent_graph_exec_id=parent_exec_id,
)
graph_exec_entry = graph_exec.to_graph_execution_entry(
user_context=await get_user_context(user_id),
compiled_nodes_input_masks=compiled_nodes_input_masks,
parent_graph_exec_id=parent_graph_exec_id,
)
logger.info(
f"Created graph execution #{graph_exec.id} for graph "
f"#{graph_id} with {len(starting_nodes_input)} starting nodes. "
f"Now publishing to execution queue."
f"#{graph_id} with {len(starting_nodes_input)} starting nodes"
)
# Generate execution context if it's not provided
if execution_context is None:
user = await udb.get_user_by_id(user_id)
settings = await gdb.get_graph_settings(user_id=user_id, graph_id=graph_id)
execution_context = ExecutionContext(
safe_mode=(
settings.human_in_the_loop_safe_mode
if settings.human_in_the_loop_safe_mode is not None
else True
),
user_timezone=(
user.timezone if user.timezone != USER_TIMEZONE_NOT_SET else "UTC"
),
root_execution_id=graph_exec.id,
)
try:
graph_exec_entry = graph_exec.to_graph_execution_entry(
compiled_nodes_input_masks=compiled_nodes_input_masks,
execution_context=execution_context,
)
logger.info(f"Publishing execution {graph_exec.id} to execution queue")
exec_queue = await get_async_execution_queue()
await exec_queue.publish_message(
routing_key=GRAPH_EXECUTION_ROUTING_KEY,
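
A condensed sketch of the default ExecutionContext derivation added above: safe mode defaults to True when the graph has no explicit human-in-the-loop setting, the "not-set" timezone sentinel falls back to UTC, and a fresh top-level run becomes its own root execution. The dataclass and helper below are stand-ins for the real backend.data.execution.ExecutionContext, not the project API.

```python
from dataclasses import dataclass
from typing import Optional

USER_TIMEZONE_NOT_SET = "not-set"


@dataclass
class ExecutionContextSketch:
    safe_mode: bool = True
    user_timezone: str = "UTC"
    root_execution_id: Optional[str] = None
    parent_execution_id: Optional[str] = None


def default_execution_context(
    user_timezone: str,
    human_in_the_loop_safe_mode: Optional[bool],
    graph_exec_id: str,
) -> ExecutionContextSketch:
    return ExecutionContextSketch(
        safe_mode=(
            human_in_the_loop_safe_mode
            if human_in_the_loop_safe_mode is not None
            else True
        ),
        user_timezone=(
            user_timezone if user_timezone != USER_TIMEZONE_NOT_SET else "UTC"
        ),
        root_execution_id=graph_exec_id,  # a new top-level run is its own root
    )


print(default_execution_context("not-set", None, "exec-123"))
```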

Some files were not shown because too many files have changed in this diff.