987 Commits

Author SHA1 Message Date
Nicholas Tindle
818de26d24 fix(platform/blocks): XMLParserBlock list object error (#11517)
<!-- Clearly explain the need for these changes: -->

### Need for these changes 💡

The `XMLParserBlock` was susceptible to crashing with an
`AttributeError: 'List' object has no attribute 'add_text'` when
processing malformed XML inputs, such as documents with multiple root
elements or stray text outside the root. This PR introduces robust
validation to prevent these crashes and provide clear, actionable error
messages to users.
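
For illustration, a minimal sketch of the kind of pre-parse check described here; it assumes a simplified `(kind, text)` token shape rather than the real `gravitasml` `Token` API:

```python
# Minimal sketch only; the real _validate_tokens works on gravitasml Token objects,
# whose attributes may differ from the simplified (kind, text) tuples assumed here.
def validate_tokens(tokens: list[tuple[str, str]]) -> None:
    depth = 0
    roots = 0
    for kind, text in tokens:
        if kind == "start":
            if depth == 0:
                roots += 1
                if roots > 1:
                    raise ValueError("XML input must have exactly one root element")
            depth += 1
        elif kind == "end":
            depth -= 1
            if depth < 0:
                raise ValueError("Unbalanced XML tags: closing tag without an opener")
        elif kind == "text" and depth == 0 and text.strip():
            raise ValueError("Text content outside the root element is not allowed")
    if depth != 0:
        raise ValueError("Unbalanced XML tags: unclosed element")
```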

### Changes 🏗️

<!-- Concisely describe all of the changes made in this pull request:
-->

- Added a `_validate_tokens` static method to `XMLParserBlock` to
perform pre-parsing validation on the token stream. This method ensures
the XML input has a single root element and no text content outside of
it.
- Modified the `XMLParserBlock.run` method to call `_validate_tokens`
immediately after tokenization and before passing the tokens to
`gravitasml.Parser`.
- Introduced a new test case, `test_rejects_text_outside_root`, in
`test_blocks_dos_vulnerability.py` to verify that the `XMLParserBlock`
correctly raises a `ValueError` when encountering XML with text outside
the root element.
- Imported `Token` for type hinting in `xml_parser.py`.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
- [x] Confirm that the `test_rejects_text_outside_root` test passes,
asserting that `ValueError` is raised for invalid XML.
  - [x] Confirm that other relevant XML parsing tests continue to pass.


---
Linear Issue:
[OPEN-2835](https://linear.app/autogpt/issue/OPEN-2835/blockunknownerror-raised-by-xmlparserblock-with-message-list-object)

<a
href="https://cursor.com/background-agent?bcId=bc-4495ea93-6836-412c-b2e3-0adb31113169"><picture><source
media="(prefers-color-scheme: dark)"
srcset="https://cursor.com/open-in-cursor-dark.svg"><source
media="(prefers-color-scheme: light)"
srcset="https://cursor.com/open-in-cursor-light.svg"><img alt="Open in
Cursor"
src="https://cursor.com/open-in-cursor.svg"></picture></a>&nbsp;<a
href="https://cursor.com/agents?id=bc-4495ea93-6836-412c-b2e3-0adb31113169"><picture><source
media="(prefers-color-scheme: dark)"
srcset="https://cursor.com/open-in-web-dark.svg"><source
media="(prefers-color-scheme: light)"
srcset="https://cursor.com/open-in-web-light.svg"><img alt="Open in Web"
src="https://cursor.com/open-in-web.svg"></picture></a>


<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> Strengthens XML parsing robustness and error clarity.
> 
> - Adds `_validate_tokens` in `XMLParserBlock` to ensure a single root
element, balanced tags, and no text outside the root before parsing
> - Updates `run` to `list(tokenize(...))` and validate tokens prior to
`Parser.parse()`; maintains 10MB input size guard
> - Introduces `test_rejects_text_outside_root` asserting a readable
`ValueError` for trailing text
> - Bumps `gravitasml` to `0.1.4` in `pyproject.toml` and lockfile
> 
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
22cc5149c5. This will update automatically
on new commits. Configure
[here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

* **Bug Fixes**
* Improved XML parsing validation with stricter enforcement of
single-root elements and prevention of trailing text, providing clearer
error messages for invalid XML input.

* **Tests**
* Added test coverage for XML parser validation of invalid root text
scenarios.

* **Chores**
  * Updated GravitasML dependency to latest compatible version.

<sub>✏️ Tip: You can customize this high-level summary in your review
settings.</sub>

<!-- end of auto-generated comment: release notes by coderabbit.ai -->

---------

Co-authored-by: Cursor Agent <cursoragent@cursor.com>
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <ntindle@users.noreply.github.com>
2026-01-06 20:02:53 +00:00
Nicholas Tindle
cb08def96c feat(blocks): Add Google Docs integration blocks (#11608)
Introduces a new module with blocks for Google Docs operations,
including reading, creating, appending, inserting, formatting,
exporting, sharing, and managing public access for Google Docs. Updates
dependencies in pyproject.toml and poetry.lock to support these
features.
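
As a rough illustration of what a read-style block does under the hood, here is a hedged sketch of pulling a document's plain text with the Google Docs API; credential handling and the actual block structure are omitted, and error handling is simplified:

```python
from googleapiclient.discovery import build

# Hedged sketch: fetch a document and join the text runs in its body.
# The real GoogleDocsReadBlock also returns the title and revision ID.
def read_google_doc_text(credentials, document_id: str) -> str:
    service = build("docs", "v1", credentials=credentials)
    doc = service.documents().get(documentId=document_id).execute()
    chunks = []
    for element in doc.get("body", {}).get("content", []):
        for part in element.get("paragraph", {}).get("elements", []):
            chunks.append(part.get("textRun", {}).get("content", ""))
    return "".join(chunks)
```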



https://github.com/user-attachments/assets/3597366b-a9eb-4f8e-8a0a-5a0bc8ebc09b



<!-- Clearly explain the need for these changes: -->

### Changes 🏗️
Adds a lot of basic Docs tools, plus a dependency for using them with Markdown

Block | Description | Key Features
-- | -- | --
Read & Create |   |  
GoogleDocsReadBlock | Read content from a Google Doc | Returns text
content, title, revision ID
GoogleDocsCreateBlock | Create a new Google Doc | Title, optional
initial content
GoogleDocsGetMetadataBlock | Get document metadata | Title, revision ID,
locale, suggested modes
GoogleDocsGetStructureBlock | Get document structure with indexes | Flat
segments or detailed hierarchy; shows start/end indexes
Plain Text Operations |   |  
GoogleDocsAppendPlainTextBlock | Append plain text to end | No
formatting applied
GoogleDocsInsertPlainTextBlock | Insert plain text at position |
Requires index; no formatting
GoogleDocsFindReplacePlainTextBlock | Find and replace plain text |
Case-sensitive option; no formatting on replacement
Markdown Operations | (ideal for LLM/AI output) |  
GoogleDocsAppendMarkdownBlock | Append Markdown to end | Full formatting
via gravitas-md2gdocs
GoogleDocsInsertMarkdownAtBlock | Insert Markdown at position | Requires
index
GoogleDocsReplaceAllWithMarkdownBlock | Replace entire doc with Markdown
| Clears and rewrites
GoogleDocsReplaceRangeWithMarkdownBlock | Replace index range with
Markdown | Requires start/end index
GoogleDocsReplaceContentWithMarkdownBlock | Find text and replace with
Markdown | Text-based search; great for templates
Structural Operations |   |  
GoogleDocsInsertTableBlock | Insert a table | Rows/columns OR content
array; optional Markdown in cells
GoogleDocsInsertPageBreakBlock | Insert a page break | Position index (0
= end)
GoogleDocsDeleteContentBlock | Delete content range | Requires start/end
index
GoogleDocsFormatTextBlock | Apply formatting to text range | Bold,
italic, underline, font size/color, etc.
Export & Sharing |   |  
GoogleDocsExportBlock | Export to different formats | PDF, DOCX, TXT,
HTML, RTF, ODT, EPUB
GoogleDocsShareBlock | Share with specific users | Reader, commenter,
writer, owner roles
GoogleDocsSetPublicAccessBlock | Set public access level | Private,
anyone with link (view/comment/edit)


<!-- Concisely describe all of the changes made in this pull request:
-->

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Build, run, verify, and upload a block super test
- [x] [Google Docs Super
Agent_v8.json](https://github.com/user-attachments/files/24134215/Google.Docs.Super.Agent_v8.json)
works


<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->

## Summary by CodeRabbit

* **Chores**
  * Updated backend dependencies.

<sub>✏️ Tip: You can customize this high-level summary in your review
settings.</sub>

<!-- end of auto-generated comment: release notes by coderabbit.ai -->

<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> Adds end-to-end Google Docs capabilities under
`backend/blocks/google/docs.py`, including rich Markdown support.
> 
> - New blocks: read/create docs; plain-text
`append`/`insert`/`find_replace`/`delete`; text `format`;
`insert_table`; `insert_page_break`; `get_metadata`; `get_structure`
> - Markdown-powered blocks (via `gravitas_md2gdocs.to_requests`):
`append_markdown`, `insert_markdown_at`, `replace_all_with_markdown`,
`replace_range_with_markdown`, `replace_content_with_markdown`
> - Export and sharing: `export` (PDF/DOCX/TXT/HTML/RTF/ODT/EPUB),
`share` (user roles), `set_public_access`
> - Dependency updates: add `gravitas-md2gdocs` to `pyproject.toml` and
update `poetry.lock`
> 
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
73512a95b2. This will update automatically
on new commits. Configure
[here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->

---------

Co-authored-by: Cursor Agent <cursoragent@cursor.com>
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <ntindle@users.noreply.github.com>
2026-01-05 18:36:56 +00:00
Krzysztof Czerwinski
ac2daee5f8 feat(backend): Add GPT-5.2 and update default models (#11652)
### Changes 🏗️

- Add OpenAI `GPT-5.2` with metadata and cost
- Add a `DEFAULT_LLM_MODEL` constant (set to GPT-5.2) and use it instead of
the hardcoded model across LLM blocks and tests

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] GPT-5.2 is set as default and works on llm blocks
2026-01-05 16:13:35 +00:00
lif
266e0d79d4 fix(blocks): add YouTube Shorts URL support (#11659)
## Summary
Added support for parsing YouTube Shorts URLs (`youtube.com/shorts/...`)
in the TranscribeYoutubeVideoBlock to extract video IDs correctly.
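
A minimal, illustrative sketch of Shorts-aware ID extraction (not the actual `_extract_video_id` implementation):

```python
import re

# Matches watch, embed, youtu.be, and shorts URL shapes; video IDs are 11 characters.
_YOUTUBE_ID_RE = re.compile(
    r"(?:youtu\.be/|youtube\.com/(?:watch\?v=|embed/|shorts/))([A-Za-z0-9_-]{11})"
)

def extract_video_id(url: str) -> str | None:
    match = _YOUTUBE_ID_RE.search(url)
    return match.group(1) if match else None

# extract_video_id("https://www.youtube.com/shorts/dQw4w9WgXcQ") -> "dQw4w9WgXcQ"
```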

## Changes
- Modified `_extract_video_id` method in `youtube.py` to handle Shorts
URL format
- Added test cases for YouTube Shorts URL extraction

## Related Issue
Fixes #11500

## Test Plan
- [x] Added unit tests for YouTube Shorts URL extraction
- [x] Verified existing YouTube URL formats still work
- [x] CI should pass all existing tests

---------

Co-authored-by: Ubbe <hi@ubbe.dev>
2026-01-05 16:11:45 +00:00
Nicholas Tindle
79d45a15d0 feat(platform): Deduplicate insufficient funds Discord + email notifications (#11672)
Add Redis-based deduplication for insufficient funds notifications (both
Discord alerts and user emails) when users run out of credits. This
prevents spamming users and the PRODUCT Discord channel with repeated
alerts for the same user+agent combination.

### Changes 🏗️

- **Redis-based deduplication** (`backend/executor/manager.py`):
- Add `INSUFFICIENT_FUNDS_NOTIFIED_PREFIX` constant for Redis key prefix
- Add `INSUFFICIENT_FUNDS_NOTIFIED_TTL_SECONDS` (30 days) as fallback
cleanup
- Implement deduplication in `_handle_insufficient_funds_notif` using
Redis `SET NX`
- Skip both email (`ZERO_BALANCE`) and Discord notifications for
duplicate alerts per user+agent
- Add `clear_insufficient_funds_notifications(user_id)` function to
remove all notification flags for a user

- **Clear flags on credit top-up** (`backend/data/credit.py`):
- Call `clear_insufficient_funds_notifications` in `_top_up_credits`
after successful auto-charge
- Call `clear_insufficient_funds_notifications` in `fulfill_checkout`
after successful manual top-up
- This allows users to receive notifications again if they run out of
funds in the future

- **Comprehensive test coverage**
(`backend/executor/manager_insufficient_funds_test.py`):
  - Test first-time notification sends both email and Discord alert
  - Test duplicate notifications are skipped for same user+agent
  - Test different agents for same user get separate alerts
  - Test clearing notifications removes all keys for a user
  - Test handling when no notification keys exist
- Test notifications still sent when Redis fails (graceful degradation)

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] First insufficient funds alert sends both email and Discord
notification
  - [x] Duplicate alerts for same user+agent are skipped
  - [x] Different agents for same user each get their own notification
  - [x] Topping up credits clears notification flags
  - [x] Redis failure gracefully falls back to sending notifications
  - [x] 30-day TTL provides automatic cleanup as fallback
  - [x] Manually test this works with scheduled agents
 

<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> Introduces Redis-backed deduplication for insufficient-funds alerts
and resets flags on successful credit additions.
> 
> - **Dedup insufficient-funds alerts** in `executor/manager.py` using
Redis `SET NX` with `INSUFFICIENT_FUNDS_NOTIFIED_PREFIX` and 30‑day TTL;
skips duplicate ZERO_BALANCE email + Discord alerts per
`user_id`+`graph_id`, with graceful fallback if Redis fails.
> - **Reset notification flags on credit increases** by adding
`clear_insufficient_funds_notifications(user_id)` and invoking it when
enabling/adding positive `GRANT`/`TOP_UP` transactions in
`data/credit.py`.
> - **Tests** (`executor/manager_insufficient_funds_test.py`):
first-time vs duplicate behavior, per-agent separation, clearing keys
(including no-key and Redis-error cases), and clearing on
`_add_transaction`/`_enable_transaction`.
> 
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
1a4413b3a1. This will update automatically
on new commits. Configure
[here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->

---------

Co-authored-by: Ubbe <hi@ubbe.dev>
Co-authored-by: Claude <noreply@anthropic.com>
2025-12-30 18:10:30 +00:00
seer-by-sentry[bot]
e26822998f fix: Handle missing or null 'items' key in DataForSEO Related Keywords block (#10989)
### Changes 🏗️

- Modified the DataForSEO Related Keywords block to handle cases where
the 'items' key is missing or has a null value in the API response.
- Ensures that the code gracefully handles these scenarios by defaulting
to an empty list, preventing potential errors. Fixes
[AUTOGPT-SERVER-66D](https://sentry.io/organizations/significant-gravitas/issues/6902944636/).

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
- [x] The DataForSEO API now returns an empty list when there are no
results, preventing the code from attempting to iterate on a null value.

<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> Strengthens parsing of DataForSEO Labs response to avoid errors when
`items` is missing or null.
> 
> - In `backend/blocks/dataforseo/related_keywords.py` `run()`, sets
`items = first_result.get("items") or []` when `first_result` is a
`dict`, otherwise `[]`, ensuring safe iteration
> - Prevents exceptions and yields empty results when no items are
returned
> 
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
cc465ddbf2. This will update automatically
on new commits. Configure
[here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->

Co-authored-by: seer-by-sentry[bot] <157164994+seer-by-sentry[bot]@users.noreply.github.com>
Co-authored-by: Toran Bruce Richards <toran.richards@gmail.com>
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <ntindle@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
2025-12-26 16:17:24 +00:00
Zamil Majdy
88731b1f76 feat(platform): marketplace update notifications with enhanced publishing workflow (#11630)
## Summary
This PR implements a comprehensive marketplace update notification
system that allows users to discover and update to newer agent versions,
along with enhanced publishing workflows and UI improvements.

<img width="1500" height="533" alt="image"
src="https://github.com/user-attachments/assets/ee331838-d712-4718-b231-1f9ec21bcd8e"
/>

<img width="600" height="610" alt="image"
src="https://github.com/user-attachments/assets/b881a7b8-91a5-460d-a159-f64765b339f1"
/>

<img width="1500" height="416" alt="image"
src="https://github.com/user-attachments/assets/a2d61904-2673-4e44-bcc5-c47d36af7a38"
/>

<img width="1500" height="1015" alt="image"
src="https://github.com/user-attachments/assets/2dd978c7-20cc-4230-977e-9c62157b9f23"
/>


## Core Features

### 🔔 Marketplace Update Notifications
- **Update detection**: Automatically detects when marketplace has newer
agent versions than user's local copy
- **Creator notifications**: Shows banners for creators with unpublished
changes ready to publish
- **Non-creator support**: Enables regular users to discover and update
to newer marketplace versions
- **Version comparison**: Intelligent logic comparing `graph_version`
against marketplace listing versions (see the sketch after this list)
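
A hedged sketch of the comparison, with field names taken from this description rather than the actual hook or API models:

```python
# Illustrative only: the frontend hook performs the equivalent check on the
# library agent's graph_version vs the marketplace listing's agentGraphVersions.
def has_marketplace_update(local_graph_version: int, marketplace_versions: list[int]) -> bool:
    return bool(marketplace_versions) and max(marketplace_versions) > local_graph_version
```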

### 📋 Enhanced Publishing Workflow  
- **Builder integration**: Added "Publish to Marketplace" button
directly in the builder actions
- **Unified banner system**: Consistent `MarketplaceBanners` component
across library and marketplace pages
- **Streamlined UX**: Fixed layout issues, improved button placement and
styling
- **Modal improvements**: Fixed thumbnail loading race conditions and
infinite loop bugs

### 📚 Version History & Changelog
- **Inline version history**: Added version changelog directly to
marketplace agent pages
- **Version comparison**: Clear display of available versions with
current version highlighting
- **Update mechanism**: Direct updates using `graph_version` parameter
for accuracy

## Technical Implementation

### Backend Changes
- **Database schema**: Added `agentGraphVersions` and `agentGraphId`
fields to `StoreAgent` model
- **API enhancement**: Updated store endpoints to expose graph version
data for version comparison
- **Data migration**: Fixed agent version field naming from `version` to
`agentGraphVersions`
- **Model updates**: Enhanced `LibraryAgentUpdateRequest` with
`graph_version` field

### Frontend Architecture
- **`useMarketplaceUpdate` hook**: Centralized marketplace update
detection and creator identification
- **`MarketplaceBanners` component**: Unified banner system with proper
vertical layout and styling
- **`AgentVersionChangelog` component**: Version history display for
marketplace pages
- **`PublishToMarketplace` component**: Builder integration with modal
workflow

### Key Bug Fixes
- **Thumbnail loading**: Fixed race condition where images wouldn't load
on first modal open
- **Infinite loops**: Used refs to prevent circular dependencies in
`useThumbnailImages` hook
- **Layout issues**: Fixed banner placement, removed duplicate
breadcrumbs, corrected vertical layout
- **Field naming**: Fixed `agent_version` vs `version` field
inconsistencies across APIs

## Files Changed

### Backend
- `autogpt_platform/backend/backend/server/v2/store/` - Enhanced store
API with graph version data
- `autogpt_platform/backend/backend/server/v2/library/` - Updated
library API models
- `autogpt_platform/backend/migrations/` - Database migrations for
version fields
- `autogpt_platform/backend/schema.prisma` - Schema updates for graph
versions

### Frontend
- `src/app/(platform)/components/MarketplaceBanners/` - New unified
banner component
- `src/app/(platform)/library/agents/[id]/components/` - Enhanced
library views with banners
- `src/app/(platform)/build/components/BuilderActions/` - Added
marketplace publish button
- `src/app/(platform)/marketplace/components/AgentInfo/` - Added inline
version history
- `src/components/contextual/PublishAgentModal/` - Fixed thumbnail
loading and modal workflow

## User Experience Impact
- **Better discovery**: Users automatically notified of newer agent
versions
- **Streamlined publishing**: Direct publish access from builder
interface
- **Reduced friction**: Fixed UI bugs, improved loading states,
consistent design
- **Enhanced transparency**: Inline version history on marketplace pages
- **Creator workflow**: Better notifications for creators with
unpublished changes

## Testing
- Update banners appear correctly when marketplace has newer versions
- Creator banners show for users with unpublished changes
- Version comparison logic works with graph_version vs marketplace versions
- Publish button in builder opens modal correctly with pre-populated data
- Thumbnail images load properly on first modal open without infinite loops
- Database migrations completed successfully with version field fixes
- All existing tests updated and passing with new schema changes

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Lluis Agusti <hi@llu.lu>
Co-authored-by: Ubbe <hi@ubbe.dev>
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
2025-12-22 11:13:06 +00:00
Reinier van der Leer
08a60dcb9b refactor(frontend): Clean up React Query-related code (#11604)
- #11603

### Changes 🏗️

Frontend:
- Make `okData` infer the response data type instead of casting
- Generalize infinite query utilities from `SidebarRunsList/helpers.ts`
  - Move to `@/app/api/helpers` and use wherever possible
- Simplify/replace boilerplate checks and conditions with `okData` in
many places
- Add `useUserTimezone` hook to replace all the boilerplate timezone
queries

Backend:
- Fix response type annotation of `GET
/api/store/graph/{store_listing_version_id}` endpoint
- Fix documentation and error behavior of `GET
/api/review/execution/{graph_exec_id}` endpoint

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - CI passes
  - [x] Clicking around the app manually -> no obvious issues
  - [x] Test Onboarding step 5 (run)
  - [x] Library runs list loads normally
2025-12-20 22:46:24 +01:00
Reinier van der Leer
de78d062a9 refactor(backend/api): Clean up API file structure (#11629)
We'll soon be needing a more feature-complete external API. To make way
for this, I'm moving some files around so:
- We can more easily create new versions of our external API
- The file structure of our internal API is more homogeneous

These changes are quite opinionated, but IMO in any case they're better
than the chaotic structure we have now.

### Changes 🏗️

- Move `backend/server` -> `backend/api`
- Move `backend/server/routers` + `backend/server/v2` ->
`backend/api/features`
  - Change absolute sibling imports to relative imports
- Move `backend/server/v2/AutoMod` -> `backend/executor/automod`
- Combine `backend/server/routers/analytics_*test.py` ->
`backend/api/features/analytics_test.py`
- Sort OpenAPI spec file

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - CI tests
  - [x] Clicking around in the app -> no obvious breakage
2025-12-20 20:33:10 +00:00
Reinier van der Leer
3dbc03e488 feat(platform): OAuth API & Single Sign-On (#11617)
We want to provide Single Sign-On for multiple AutoGPT apps that use the
Platform as their backend.

### Changes 🏗️

Backend:
- DB + logic + API for OAuth flow (w/ tests)
  - DB schema additions for OAuth apps, codes, and tokens
  - Token creation/validation/management logic
- OAuth flow endpoints (app info, authorize, token exchange, introspect,
revoke); the token exchange is sketched after this list
  - E2E OAuth API integration tests
- Other OAuth-related endpoints (upload app logo, list owned apps,
external `/me` endpoint)
    - App logo asset management
  - Adjust external API middleware to support auth with access token
  - Expired token clean-up job
    - Add `OAUTH_TOKEN_CLEANUP_INTERVAL_HOURS` setting (optional)
- `poetry run oauth-tool`: dev tool to test the OAuth flows and register
new OAuth apps
- `poetry run export-api-schema`: dev tool to quickly export the OpenAPI
schema (much quicker than spinning up the backend)

Frontend:
- Frontend UI for app authorization (`/auth/authorize`)
  - Re-redirect after login/signup
- Frontend flow to batch-auth integrations on request of the client app
(`/auth/integrations/setup-wizard`)
  - Debug `CredentialInputs` component
- Add `/profile/oauth-apps` management page
- Add `isOurProblem` flag to `ErrorCard` to hide action buttons when the
error isn't our fault
- Add `showTitle` flag to `CredentialsInput` to hide built-in title for
layout reasons

DX:
- Add [API
guide](https://github.com/Significant-Gravitas/AutoGPT/blob/pwuts/sso/docs/content/platform/integrating/api-guide.md)
and [OAuth
guide](https://github.com/Significant-Gravitas/AutoGPT/blob/pwuts/sso/docs/content/platform/integrating/oauth-guide.md)
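
For orientation, a hedged client-side sketch of the authorization-code token exchange this flow supports; the endpoint path, base URL, and parameter names are assumptions, so defer to the OAuth guide above for the real contract:

```python
import requests

def exchange_code_for_token(
    base_url: str, client_id: str, client_secret: str, code: str, redirect_uri: str
) -> dict:
    # Standard OAuth 2.0 authorization_code grant; "/oauth/token" is an assumed path.
    response = requests.post(
        f"{base_url}/oauth/token",
        data={
            "grant_type": "authorization_code",
            "code": code,
            "redirect_uri": redirect_uri,
            "client_id": client_id,
            "client_secret": client_secret,
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()  # typically access_token, token_type, expires_in, ...
```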

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Manually verify test coverage of OAuth API tests
  - Test `/auth/authorize` using `poetry run oauth-tool test-server`
    - [x] Works
    - [x] Looks okay
- Test `/auth/integrations/setup-wizard` using `poetry run oauth-tool
test-server`
    - [x] Works
    - [x] Looks okay
  - Test `/profile/oauth-apps` page
    - [x] All owned OAuth apps show up
    - [x] Enabling/disabling apps works
- [ ] ~~Uploading logos works~~ can only test this once deployed to dev

#### For configuration changes:

- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
2025-12-19 21:05:16 +01:00
Zamil Majdy
b76b5a37c5 fix(backend): Convert generic exceptions to appropriate typed exceptions (#11641)
## Summary
- Fix TimeoutError in AIShortformVideoCreatorBlock → BlockExecutionError
- Fix generic exceptions in SearchTheWebBlock → BlockExecutionError with
proper HTTP error handling
- Fix FirecrawlError 504 timeouts → BlockExecutionError with
service-specific messages
- Fix ReplicateBlock validation errors → BlockInputError for 422 status,
BlockExecutionError for others
- Add comprehensive HTTP error handling with
HTTPClientError/HTTPServerError classes
- Implement filename sanitization for "File name too long" errors
- Add proper User-Agent handling for Wikipedia API compliance
- Fix type conversion for string subclasses like ShortTextType
- Add support for moderation errors with proper context propagation

## Test plan
- [x] All modified blocks now properly categorize errors instead of
raising BlockUnknownError
- [x] Type conversion tests pass for ShortTextType and other string
subclasses
- [x] Formatting and linting pass
- [x] Exception constructors include required block_name and block_id
parameters

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-12-19 13:19:58 +01:00
Zamil Majdy
cc9179178f feat(block): Human in The Loop Block restructure (#11627)
## Summary

This PR refactors the Human-In-The-Loop (HITL) review system backend to
improve data handling and API consistency.

## Changes

### Backend Refactoring

#### 1. **Block Output Schema Update** (`human_in_the_loop.py`)
- Replaced single `reviewed_data` and `status` fields with separate
`approved_data` and `rejected_data` outputs
- This allows downstream blocks to handle approved vs rejected data
differently without checking status
- Simplified test outputs to match new schema

#### 2. **Review Data Handling** (`human_review.py`)
- Modified `get_or_create_human_review` to always return
`review.payload` regardless of approval status
- Previously returned `None` for rejected reviews, which could cause
data loss
- Now preserves reviewer-modified data for both approved and rejected
cases

#### 3. **API Route Simplification** (`review/routes.py`)
- Streamlined review decision processing logic using ternary operator
- Unified data handling for both approved and rejected reviews
- Maintains backward compatibility while improving code clarity

## Why These Changes?

- **Better Data Flow**: Separate output pins for approved/rejected data
make workflow design more intuitive
- **Data Preservation**: Rejected reviews can still pass modified data
downstream for logging or alternative processing
- **Cleaner API**: Simplified decision processing reduces code
complexity and potential bugs

## Testing

- All existing tests pass with updated schema
- Backward compatibility maintained for existing workflows
- Human review functionality verified in both approved and rejected
scenarios

## Related

This is the backend portion of changes from #11529, applied separately
to the `feat/hitl` branch.
2025-12-16 12:14:14 +00:00
Zamil Majdy
71157bddd7 feat(backend): add agent mode support to SmartDecisionMakerBlock with autonomous tool execution loops (#11547)
## Summary

<img width="2072" height="1836" alt="image"
src="https://github.com/user-attachments/assets/9d231a77-6309-46b9-bc11-befb5d8e9fcc"
/>

**🚀 Major Feature: Agent Mode Support**

Adds autonomous agent mode to SmartDecisionMakerBlock, enabling it to
execute tools directly in loops until tasks are completed, rather than
just yielding tool calls for external execution.

##  **Key New Features**

### 🤖 **Agent Mode with Tool Execution Loops**
- **New `agent_mode_max_iterations` parameter** controls execution
behavior:
  - `0` = Traditional mode (single LLM call, yield tool calls)
  - `1+` = Agent mode with iteration limit
  - `-1` = Infinite agent mode (loop until finished)

### 🔄 **Autonomous Tool Execution**  
- **Direct tool execution** instead of yielding for external handling
- **Multi-iteration loops** with conversation state management
- **Automatic completion detection** when LLM stops making tool calls
- **Iteration limit handling** with graceful completion messages

### 🏗️ **Proper Database Operations**
- **Replace manual execution ID generation** with proper
`upsert_execution_input`/`upsert_execution_output`
- **Real NodeExecutionEntry objects** from database results
- **Proper execution status management**: QUEUED → RUNNING →
COMPLETED/FAILED

### 🔧 **Enhanced Type Safety**
- **Pydantic models** replace TypedDict: `ToolInfo` and
`ExecutionParams`
- **Runtime validation** with better error messages
- **Improved developer experience** with IDE support

## 🔧 **Technical Implementation**

### Agent Mode Flow:
```python
# Agent mode enabled with iterations
if input_data.agent_mode_max_iterations != 0:
    async for result in self._execute_tools_agent_mode(...):
        yield result  # "conversations", "finished"
    return

# Traditional mode (existing behavior)  
# Single LLM call + yield tool calls for external execution
```

### Tool Execution with Database Operations:
```python
# Before: Manual execution IDs
tool_exec_id = f"{node_exec_id}_tool_{sink_node_id}_{len(input_data)}"

# After: Proper database operations
node_exec_result, final_input_data = await db_client.upsert_execution_input(
    node_id=sink_node_id,
    graph_exec_id=execution_params.graph_exec_id,
    input_name=input_name, 
    input_data=input_value,
)
```

### Type Safety with Pydantic:
```python
# Before: Dict access prone to errors
execution_params["user_id"]  

# After: Validated model access
execution_params.user_id  # Runtime validation + IDE support
```

## 🧪 **Comprehensive Test Coverage**

- **Agent mode execution tests** with multi-iteration scenarios
- **Database operation verification** 
- **Type safety validation**
- **Backward compatibility** for traditional mode
- **Enhanced dynamic fields tests**

## 📊 **Usage Examples**

### Traditional Mode (Existing Behavior):
```python
SmartDecisionMakerBlock.Input(
    prompt="Search for keywords",
    agent_mode_max_iterations=0  # Default
)
# → Yields tool calls for external execution
```

### Agent Mode (New Feature):
```python  
SmartDecisionMakerBlock.Input(
    prompt="Complete this task using available tools",
    agent_mode_max_iterations=5  # Max 5 iterations
)
# → Executes tools directly until task completion or iteration limit
```

### Infinite Agent Mode:
```python
SmartDecisionMakerBlock.Input(
    prompt="Analyze and process this data thoroughly", 
    agent_mode_max_iterations=-1  # No limit, run until finished
)
# → Executes tools autonomously until LLM indicates completion
```

##  **Backward Compatibility**

- **Zero breaking changes** to existing functionality
- **Traditional mode remains default** (`agent_mode_max_iterations=0`)
- **All existing tests pass**
- **Same API for tool definitions and execution**

This transforms the SmartDecisionMakerBlock from a simple tool call
generator into a powerful autonomous agent capable of complex multi-step
task execution! 🎯

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-12-12 09:58:06 +00:00
Krzysztof Czerwinski
bd37fe946d feat(platform): Builder search history (#11457)
Preserve user searches in the new builder and cache search results for
efficiency. Searches are saved so users can see their previous searches.

### Changes 🏗️

- Add a `BuilderSearch` column and migration to save user searches (with
all filters)
- Builder `db.py` now caches all search results using `@cached` and
returns paginated results, so subsequent pages are returned much more
quickly (see the sketch after this list)
- Score and sort results
- Update models and routes
- Update the frontend so it works properly with the modified endpoints
- Frontend: store `searchId` and use it for subsequent searches, so we
don't save partial searches (e.g. "b", "bl", ..., "block"). The search ID
is reset when the user clears the search field.
- Add clickable chips to the Suggestions builder tab
- Add `HorizontalScroll` component (chips use it)
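
A hedged sketch of the cache-then-paginate idea; the real `db.py` uses the codebase's `@cached` decorator plus scoring, which are replaced here with `functools.lru_cache` and a placeholder search:

```python
from functools import lru_cache

def _run_search(query: str) -> list[str]:
    # Placeholder for the expensive block search and scoring.
    catalog = ["Add to List", "Block Installer", "Branch", "Broadcast"]
    return [name for name in catalog if query.lower() in name.lower()]

@lru_cache(maxsize=128)
def _search_all(query: str) -> tuple[str, ...]:
    # The full result set is computed once per query and cached.
    return tuple(_run_search(query))

def search_page(query: str, page: int, page_size: int = 20) -> list[str]:
    # Subsequent pages slice the cached results instead of re-running the search.
    start = page * page_size
    return list(_search_all(query)[start : start + page_size])
```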

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Search works and is cached
  - [x] Search sorts results
  - [x] Searches are preserved properly

---------

Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
2025-12-10 17:32:17 +00:00
Nicholas Tindle
979d7c3b74 feat(blocks): Add 4 new GitHub webhook trigger blocks (#11588)
I want to be able to automate some actions on social media or our
server in response to actions from Discord


<!-- Clearly explain the need for these changes: -->

### Changes 🏗️
Add trigger blocks for common GitHub events to enable OSS automation:
- GithubReleaseTriggerBlock: Trigger on release events (published, etc.)
- GithubStarTriggerBlock: Trigger on star events for milestone
celebrations
- GithubIssuesTriggerBlock: Trigger on issue events for
triage/notifications
- GithubDiscussionTriggerBlock: Trigger on discussion events for Q&A
sync
<!-- Concisely describe all of the changes made in this pull request:
-->

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Test Stars
  - [x] Test Discussions
  - [x] Test Issues
  - [x] Test Release

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-12-09 21:25:43 +00:00
Nicholas Tindle
95200b67f8 feat(blocks): add many new spreadsheet blocks (#11574)
<!-- Clearly explain the need for these changes: -->
We have a lot we want to do with Google Sheets, and we don't want a lack
of blocks to be a limiter, so I pre-built a lot of blocks!

### Changes 🏗️
Adds 24 new blocks for Google Sheets (tested and working)
```
|  #  | Block                                     | Description                            |
|-----|-------------------------------------------|----------------------------------------|
|  1  | GoogleSheetsFilterRowsBlock               | Filter rows based on column conditions |
|  2  | GoogleSheetsLookupRowBlock                | VLOOKUP-style row lookup               |
|  3  | GoogleSheetsDeleteRowsBlock               | Delete rows from a sheet               |
|  4  | GoogleSheetsGetColumnBlock                | Get data from a specific column        |
|  5  | GoogleSheetsSortBlock                     | Sort sheet data                        |
|  6  | GoogleSheetsGetUniqueValuesBlock          | Get unique values from a column        |
|  7  | GoogleSheetsInsertRowBlock                | Insert rows into a sheet               |
|  8  | GoogleSheetsAddColumnBlock                | Add a new column                       |
|  9  | GoogleSheetsGetRowCountBlock              | Get the number of rows                 |
| 10  | GoogleSheetsRemoveDuplicatesBlock         | Remove duplicate rows                  |
| 11  | GoogleSheetsUpdateRowBlock                | Update an existing row                 |
| 12  | GoogleSheetsGetRowBlock                   | Get a specific row by index            |
| 13  | GoogleSheetsDeleteColumnBlock             | Delete a column                        |
| 14  | GoogleSheetsCreateNamedRangeBlock         | Create a named range                   |
| 15  | GoogleSheetsListNamedRangesBlock          | List all named ranges                  |
| 16  | GoogleSheetsAddDropdownBlock              | Add dropdown validation to cells       |
| 17  | GoogleSheetsCopyToSpreadsheetBlock        | Copy sheet to another spreadsheet      |
| 18  | GoogleSheetsProtectRangeBlock             | Protect a range from editing           |
| 19  | GoogleSheetsExportCsvBlock                | Export sheet as CSV                    |
| 20  | GoogleSheetsImportCsvBlock                | Import CSV data                        |
| 21  | GoogleSheetsAddNoteBlock                  | Add notes to cells                     |
| 22  | GoogleSheetsGetNotesBlock                 | Get notes from cells                   |
| 23  | GoogleSheetsShareSpreadsheetBlock         | Share spreadsheet with users           |
| 24  | GoogleSheetsSetPublicAccessBlock          | Set public access permissions          |
```
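
As a small illustration of the utilities mentioned in the summary below, here is a hedged sketch of the A1 column-letter helpers (`_column_letter_to_index` / `_index_to_column_letter`); the real implementations may differ:

```python
# Hedged sketch of A1-style column conversions; zero-based indexes assumed.
def column_letter_to_index(letter: str) -> int:
    index = 0
    for ch in letter.upper():
        index = index * 26 + (ord(ch) - ord("A") + 1)
    return index - 1  # "A" -> 0, "Z" -> 25, "AA" -> 26

def index_to_column_letter(index: int) -> str:
    index += 1
    letters = ""
    while index:
        index, rem = divmod(index - 1, 26)
        letters = chr(ord("A") + rem) + letters
    return letters  # 0 -> "A", 26 -> "AA"
```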


<!-- Concisely describe all of the changes made in this pull request:
-->

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Tested using the attached agent 
[super test for
spreadsheets_v9.json](https://github.com/user-attachments/files/24041582/super.test.for.spreadsheets_v9.json)


<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> Introduces a large suite of Google Sheets blocks for row/column ops,
filtering/sorting/lookup, CSV import/export, notes, named ranges,
protections, sheet copy, and sharing/public access, plus refactors
append to a simpler single-row append.
> 
> - **Google Sheets blocks (new)**:
> - **Data ops**: `GoogleSheetsFilterRowsBlock`,
`GoogleSheetsLookupRowBlock`, `GoogleSheetsDeleteRowsBlock`,
`GoogleSheetsGetColumnBlock`, `GoogleSheetsSortBlock`,
`GoogleSheetsGetUniqueValuesBlock`, `GoogleSheetsInsertRowBlock`,
`GoogleSheetsAddColumnBlock`, `GoogleSheetsGetRowCountBlock`,
`GoogleSheetsRemoveDuplicatesBlock`, `GoogleSheetsUpdateRowBlock`,
`GoogleSheetsGetRowBlock`, `GoogleSheetsDeleteColumnBlock`.
> - **Named ranges & validation**: `GoogleSheetsCreateNamedRangeBlock`,
`GoogleSheetsListNamedRangesBlock`, `GoogleSheetsAddDropdownBlock`.
> - **Sheet/admin**: `GoogleSheetsCopyToSpreadsheetBlock`,
`GoogleSheetsProtectRangeBlock`.
> - **CSV & notes**: `GoogleSheetsExportCsvBlock`,
`GoogleSheetsImportCsvBlock`, `GoogleSheetsAddNoteBlock`,
`GoogleSheetsGetNotesBlock`.
> - **Sharing**: `GoogleSheetsShareSpreadsheetBlock`,
`GoogleSheetsSetPublicAccessBlock`.
> - **Refactor**:
> - Rename and simplify append: `GoogleSheetsAppendRowBlock` (replaces
multi-row/dict input with single `row`), fixed insert option to
`INSERT_ROWS` and streamlined response.
> - **Utilities/Enums**:
> - Add helpers (`_column_letter_to_index`, `_index_to_column_letter`,
`_apply_filter`) and enums (`FilterOperator`, `SortOrder`, `ShareRole`,
`PublicAccessRole`).
> - Drive/Sheets service builders and file validation reused across new
blocks.
> 
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
6e9e2f4024. This will update automatically
on new commits. Configure
[here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->

---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Cursor Agent <cursoragent@cursor.com>
2025-12-09 17:28:22 +00:00
Zamil Majdy
c1e21d07e6 feat(platform): add execution accuracy alert system (#11562)
## Summary

<img width="1263" height="883" alt="image"
src="https://github.com/user-attachments/assets/98d4f449-1897-4019-a599-846c27df4191"
/>
<img width="398" height="190" alt="image"
src="https://github.com/user-attachments/assets/0138ac02-420d-4f96-b980-74eb41e3c968"
/>

- Add execution accuracy monitoring with moving averages and Discord
alerts
- Dashboard visualization for accuracy trends and alert detection  
- Hourly monitoring for marketplace agents (≥10 executions in 30 days)
- Generated API client integration with type-safe models

## Features
- **Moving Average Analysis**: 3-day vs 7-day comparison with
configurable thresholds (see the sketch after this list)
- **Discord Notifications**: Hourly alerts for accuracy drops ≥10%
- **Dashboard UI**: Real-time trends visualization with alert status
- **Type Safety**: Generated API hooks and models throughout
- **Error Handling**: Graceful OpenAI configuration handling
- **PostgreSQL Optimization**: Window functions for efficient trend
queries
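
A hedged sketch of the alert condition; the ≥10% threshold comes from this description, while the averaging windows and SQL window functions are simplified to plain Python:

```python
def accuracy_drop_alert(daily_accuracy: list[float], threshold: float = 0.10) -> bool:
    # daily_accuracy: one value per day in [0, 1], most recent day last.
    if len(daily_accuracy) < 7:
        return False
    avg_3d = sum(daily_accuracy[-3:]) / 3
    avg_7d = sum(daily_accuracy[-7:]) / 7
    # Alert when the 3-day average has fallen at least `threshold` below the 7-day average.
    return avg_7d > 0 and (avg_7d - avg_3d) >= threshold * avg_7d
```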

## Test plan
- [x] Backend accuracy monitoring logic tested with sample data
- [x] Frontend components using generated API hooks (no manual fetch)
- [x] Discord notification integration working
- [x] Admin authentication and authorization working
- [x] All formatting and linting checks passing
- [x] Error handling for missing OpenAI configuration
- [x] Test data available with `test-accuracy-agent-001`

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-12-08 19:28:57 +00:00
Swifty
8be3c88711 feat(backend): add default store agents for seeding test databases (#11552)
This PR adds a collection of pre-built store agents that can be loaded
into test databases for development and testing purposes.

### Changes 🏗️

- Add 17 exported agent JSON files in `backend/agents/` directory
- Add `StoreAgent_rows.csv` containing store listing metadata (titles,
descriptions, categories, images)
- Add `load_store_agents.py` script to load agents into the test
database
- Add `load-store-agents` Makefile target for easy execution

**Included Agents:**
- Flux AI Image Generator
- YouTube Transcription Scraper  
- Decision Maker Lead Finder
- Smart Meeting Prep
- Automated Support Agent
- Unspirational Poster Maker
- AI Video Generator
- Automated SEO Blog Writer
- Lead Finder (Local Businesses)
- LinkedIn Post Generator
- YouTube to LinkedIn Post Converter
- Personal Newsletter
- Email Scout - Contact Finder Assistant
- YouTube Video to SEO Blog Writer
- AI Webpage Copy Improver
- Domain Name Finder
- AI Function

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Run `make load-store-agents` and verify agents are loaded into the
database
  - [x] Verify store listings appear correctly with metadata from CSV
- [x] Confirm no sensitive information (API keys, secrets) is included
in the exported agents

#### For configuration changes:
- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)

No configuration changes required - this only adds test data and a
loading script.
2025-12-05 16:08:37 +01:00
Zamil Majdy
e4d0dbc283 feat(platform): add Agent Output Demo field to marketplace submission form (#11538)
## Summary
- Add Agent Output Demo field to marketplace agent submission form,
positioned below the Description field
- Store agent output demo URLs in database for future CoPilot
integration
- Implement proper video/image ordering on marketplace pages
- Add shared YouTube URL validation utility to eliminate code
duplication

## Changes Made

### Frontend
- **Agent submission form**: Added Agent Output Demo field with YouTube
URL validation
- **Edit agent form**: Added Agent Output Demo field for existing
submissions
- **Marketplace display**: Implemented proper video/image ordering:
  1. YouTube/Overview video (if exists)
  2. First image (hero)
  3. Agent Output Demo (if exists) 
  4. Additional images
- **Shared utilities**: Created `validateYouTubeUrl` function in
`src/lib/utils.ts`

### Backend
- **Database schema**: Added `agentOutputDemoUrl` field to
`StoreListingVersion` model
- **Database views**: Updated `StoreAgent` view to include
`agent_output_demo` field
- **API models**: Added `agent_output_demo_url` to submission requests
and `agent_output_demo` to responses
- **Database migration**: Added migration to create new column and
update view
- **Test files**: Updated all test files to include the new required
field

## Test Plan
- [x] Frontend form validation works correctly for YouTube URLs
- [x] Database migration applies successfully 
- [x] Backend API accepts and returns the new field
- [x] Marketplace displays videos in correct order
- [x] Both frontend and backend formatting/linting pass
- [x] All test files include required field to prevent failures

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-12-05 11:40:12 +00:00
Swifty
8e476c3f8d fix(backend): pass credential type from SDK registry to integrations API (#11544)
### Changes 🏗️

This PR improves the `/integrations/providers` endpoint to dynamically
determine supported authentication types from the SDK registry instead
of using hardcoded values.

**What changed:**
- The `list_providers` function now looks up each provider in the
`AutoRegistry` to get its `supported_auth_types` (see the sketch after
this list)
- If a provider has defined auth types in the SDK registry, those are
used to set `supports_api_key`, `supports_user_password`, and
`supports_host_scoped` flags
- Falls back to legacy hardcoded behavior for providers not registered
in the SDK (maintains backwards compatibility)

**Why:**
- Providers can now correctly declare their supported authentication
methods via the SDK
- Removes brittle hardcoded checks like `name in ("smtp",)` for specific
providers
- Makes the credential type system more extensible and maintainable

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Verified providers with SDK-defined auth types return correct
flags
  - [x] Verified legacy providers still work with fallback behavior
- [x] Tested the `/integrations/providers` endpoint returns expected
data

#### For configuration changes:

- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)

No configuration changes required for this PR.
2025-12-05 12:42:49 +01:00
Zamil Majdy
2f63defb53 fix(backend): Mark ValueError as known block errors (#11537)
### Changes 🏗️

Mark ValueError as known block errors

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
2025-12-05 11:12:18 +00:00
Sukhtumur Narantuya
2934e9ea69 fix(backend): replace print() statements with proper logging (#11499)
- Replace print() with logger.info() in reddit.py for login message
- Replace print() with logger.debug() in airtable/_api.py for API params
- Replace print() with logger.debug() in _manual_base.py for webhook URL
- Add logging imports and logger initialization where missing
- Update FIXME to TODO with GitHub issue reference #8537

<!-- Clearly explain the need for these changes: -->

### Changes 🏗️

<!-- Concisely describe all of the changes made in this pull request:
-->

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] test it still works


<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> Switch `print()` to `logger.info/debug()` across Airtable, Reddit, and
manual webhook modules; add logger initialization and clarify TODO with
issue reference.
> 
> - **Backend**:
>   - **Airtable (`backend/blocks/airtable/_api.py`)**:
>     - Replace `print(params)` with `logger.debug` in `create_base`.
>   - **Reddit (`backend/blocks/reddit.py`)**:
>     - Add `logging` import and `logger` initialization.
>     - Replace login `print` with `logger.info` in `get_praw`.
>   - **Webhooks (`backend/integrations/webhooks/_manual_base.py`)**:
> - Replace `print` with `logger.debug` in `_register_webhook` and add
`logger`.
>     - Update `FIXME` to `TODO` with GitHub issue reference `#8537`.
> 
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
3add9b0fa9. This will update automatically
on new commits. Configure
[here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->

Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
2025-12-05 07:49:20 +00:00
Krzysztof Czerwinski
c880db439d feat(platform): Backend completion of Onboarding tasks (#11375)
Make onboarding task completion backend-authoritative, which prevents
cheating (previously users could mark all tasks as completed instantly
and get rewards) and makes task completion more reliable. Task completion
is moved to the backend, with the exception of introductory onboarding
tasks and visit-page tasks.

### Changes 🏗️

- Move run-counter incrementing to the backend and make webhook-triggered
and scheduled executions count as well
- Use the user's timezone for calculating the run streak
- Frontend task completion is moved from the update-onboarding-state call
to a separate endpoint and guarded so only frontend tasks can be completed
- Graph creation, graph execution, and adding a marketplace agent to the
library accept a `source`, so the appropriate tasks can be completed
- Replace `client.ts` API calls with Orval-generated ones and remove
no-longer-used functions from `client.ts`
- Add a `resolveResponse` helper function that unwraps an Orval-generated
call result to its 2xx response

Small changes & bug fixes:
- Make Redis notification bus serialize all payload fields
- Fix confetti when group is finished
- Collapse finished group when opening Wallet
- Play confetti only for tasks that are listed in the Wallet UI

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Onboarding can be finished
  - [x] All tasks can be finished and work properly
  - [x] Confetti works properly
2025-12-05 02:32:28 +00:00
Nicholas Tindle
113df689dc feat(platform): Improve Google Sheets/Drive integration with unified credentials (#11520)
Simplifies and improves the Google Sheets/Drive integration by merging
credentials with the file picker and using narrower OAuth scopes.

### Changes 🏗️

- Merge Google credentials and file picker into a single unified input
field for better UX
- Create spreadsheets using Drive API instead of Sheets API for proper
scope support
- Simplify Google Drive OAuth scope to only use `drive.file` (narrowest
permission needed)
- Clean up unused imports (NormalizedPickedFile)

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Test creating a new Google Spreadsheet with
GoogleSheetsCreateSpreadsheetBlock
- [x] Test reading from existing spreadsheets with GoogleSheetsReadBlock
  - [x] Test writing to spreadsheets with GoogleSheetsWriteBlock
  - [x] Verify OAuth flow works with simplified scopes
  - [x] Verify file picker works with merged credentials field

#### For configuration changes:
- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> Unifies Google Drive picker and credentials with auto-credentials
across backend and frontend, updates all Sheets blocks and execution to
use it, and adds Drive-based spreadsheet creation plus supporting tests
and UI fixes.
> 
> - **Backend**:
> - **Google Drive model/field**: Introduce `GoogleDriveFile` (with
`_credentials_id`) and `GoogleDriveFileField()` for unified auth+picker
(`backend/blocks/google/_drive.py`).
> - **Sheets blocks**: Replace `GoogleDrivePickerField` and explicit
credentials with `GoogleDriveFileField` across all Sheets blocks;
preserve and emit credentials for chaining; add Drive service; create
spreadsheets via Drive API then manage via Sheets API.
> - **IO block**: Add `AgentGoogleDriveFileInputBlock` providing a Drive
picker input.
> - **Execution**: Support auto-generated credentials via
`BlockSchema.get_auto_credentials_fields()`; acquire/release multiple
credential locks; pass creds by `credentials_kwarg`
(`executor/manager.py`, `data/block.py`, `util/test.py`).
> - **Tests**: Add validation tests for duplicate/unique
`auto_credentials.kwarg_name` and defaults.
> - **Frontend**:
> - **Picker**: Enhance Google Drive picker to require/use saved
platform credentials, pass `_credentials_id`, validate scopes, and
manage dialog z-index/interaction; expose `requirePlatformCredentials`.
> - **UI**: Update dialogs/CSS to keep Google picker on top and prevent
overlay interactions.
> - **Types**: Extend `GoogleDrivePickerConfig` with `auto_credentials`
and related typings.
> 
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
7d25534def. This will update automatically
on new commits. Configure
[here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->

---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Copilot <198982749+Copilot@users.noreply.github.com>
2025-12-04 14:40:30 +00:00
Swifty
7b93600973 fix duplicate Prometheus metrics 2025-12-03 11:04:38 +01:00
seer-by-sentry[bot]
fa567991b3 fix(backend): Handle HTTP errors in HTTP block by returning response objects (#11515)
### Changes 🏗️

- Modify the HTTP block to handle HTTP errors (4xx, 5xx) by returning
response objects instead of raising exceptions.
- This allows proper handling of client_error and server_error outputs.

Fixes
[AUTOGPT-SERVER-6VP](https://sentry.io/organizations/significant-gravitas/issues/7023985892/).
The issue: HTTP errors were raised as exceptions by the `Requests`
default behavior, bypassing the block's intended error output handling
and resulting in `BlockUnknownError`.
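
In effect, the block can now route on the status code instead of catching exceptions; a hedged sketch (not the actual `backend/blocks/http.py` code):

```python
# With raise_for_status disabled, 4xx/5xx responses come back as normal objects
# and can be emitted on dedicated error outputs instead of surfacing as errors.
def route_http_response(response):
    if 400 <= response.status_code < 500:
        yield "client_error", {"status": response.status_code, "body": response.text}
    elif response.status_code >= 500:
        yield "server_error", {"status": response.status_code, "body": response.text}
    else:
        yield "response", {"status": response.status_code, "body": response.text}
```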

This fix was generated by Seer in Sentry, triggered by Nicholas Tindle.
👁️ Run ID: 4902617

Not quite right? [Click here to continue debugging with
Seer.](https://sentry.io/organizations/significant-gravitas/issues/7023985892/?seerDrawer=true)

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
- [x] Tested with a service that will return 4XX and 5XX errors to make
sure the correct paths are followed



<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> HTTP block now returns 4xx/5xx responses instead of raising, and
Requests gains retry_max_attempts with last-result handling.
> 
> - **Backend**
>   - **HTTP block (`backend/blocks/http.py`)**:
> - Use `Requests(raise_for_status=False, retry_max_attempts=1)` so
4xx/5xx return response objects and route to
`client_error`/`server_error` outputs.
>   - **HTTP client util (`backend/util/request.py`)**:
> - Add `retry_max_attempts` option with `stop_after_attempt` and
`_return_last_result` to return the final response when retries stop.
> - Build `tenacity` retry config dynamically in `Requests.request()`;
validate `retry_max_attempts >= 1` when provided.
> 
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
fccae61c26. This will update automatically
on new commits. Configure
[here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->

---------

Co-authored-by: seer-by-sentry[bot] <157164994+seer-by-sentry[bot]@users.noreply.github.com>
Co-authored-by: Cursor Agent <cursoragent@cursor.com>
Co-authored-by: nicholas.tindle <nicholas.tindle@agpt.co>
2025-12-02 19:00:43 +00:00
Swifty
2cb6fd581c feat(platform): Integration management from external api (#11472)
Allow the external API to manage credentials

### Changes 🏗️

- add the ability for the external API to manage credentials

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] tested it works

<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> Introduces external API endpoints to manage integrations (OAuth
initiation/completion and credential CRUD), adds external OAuth state
fields, and new API key permissions/config.
> 
> - **External API – Integrations**:
> - Add router `backend/server/external/routes/integrations.py` with
endpoints to:
> - `GET /v1/integrations/providers` list providers (incl. default
scopes)
> - `POST /v1/integrations/{provider}/oauth/initiate` and `POST
/oauth/complete` for external OAuth (custom callback, state)
> - `GET /v1/integrations/credentials` and `GET /{provider}/credentials`
to list credentials
> - `POST /{provider}/credentials` to create `api_key`, `user_password`,
`host_scoped` creds; `DELETE /{provider}/credentials/{cred_id}` to
delete
>   - Wire router in `backend/server/external/api.py`.
> - **Auth/Permissions**:
> - Add `APIKeyPermission` values: `MANAGE_INTEGRATIONS`,
`READ_INTEGRATIONS`, `DELETE_INTEGRATIONS` (schema + migration +
OpenAPI).
> - **Data model / Store**:
> - Extend `OAuthState` with external-flow fields: `callback_url`,
`state_metadata`, `api_key_id`, `is_external`.
> - Update `IntegrationCredentialsStore.store_state_token(...)` to
accept/store external OAuth metadata.
> - **OAuth providers**:
> - Set GitHub handler `DEFAULT_SCOPES = ["repo"]` in
`integrations/oauth/github.py`.
> - **Config**:
> - Add `config.external_oauth_callback_origins` in
`backend/util/settings.py` to validate allowed OAuth callback origins.
> 
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
249bba9e59. This will update automatically
on new commits. Configure
[here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->

---------

Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
2025-12-02 17:42:53 +01:00
Nicholas Tindle
55af799083 fix(blocks): clamp Twitter search start_time to 10 seconds before now (#11461)
## Summary
- Clamp `start_time` to at least 10 seconds before request time (Twitter
API requirement)
- Update input description to document this automatic adjustment
- Fix `serialize_list` to handle `None` data gracefully (exposed by the
fix)

## Background
Twitter API returns `400 Bad Request` when `start_time` is less than 10
seconds before the request time. Users providing current/future times
would hit this error.
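
A minimal sketch of the clamping described above (standalone, not the actual block code):

```python
# Hedged sketch: ensure start_time is at least 10 seconds before the request
# time, since Twitter rejects anything newer with 400 Bad Request.
# Assumes start_time is timezone-aware (UTC).
from datetime import datetime, timedelta, timezone

def clamp_start_time(start_time: datetime) -> datetime:
    latest_allowed = datetime.now(timezone.utc) - timedelta(seconds=10)
    return min(start_time, latest_allowed)
```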

**Sentry Issue:**
[BUILDER-3PG](https://significant-gravitas.sentry.io/issues/6919685270/)

## Affected Blocks
- `TwitterSearchRecentTweetsBlock`
- `TwitterGetUserMentionsBlock`
- `TwitterGetHomeTimelineBlock`
- `TwitterGetUserTweetsBlock`

## Test plan
- [x] Tested `TwitterSearchRecentTweetsBlock` with current time as
`start_time`
- [x] Verified clamping works and API call succeeds
- [x] Verified "No tweets found" is returned correctly when search
window has no results

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Abhimanyu Yadav <122007096+Abhi1992002@users.noreply.github.com>
2025-12-02 15:45:31 +00:00
Zamil Majdy
6590fcb76f fix(backend): fix broken update_agent_version_in_library and reduce the method code duplication (#11514)
## Summary
Fix broken `update_agent_version_in_library` functionality by eagerly
loading `AgentGraph` while loading the library, also consolidating
duplicate code that updates agent version in library and configures HITL
safe mode settings.

## Problem
The `update_agent_version_in_library` flow currently fails with this
error:
```
  File "/Users/abhi/Documents/AutoGPT/autogpt_platform/backend/backend/server/v2/library/model.py", line 110, in from_db
    raise ValueError("Associated Agent record is required.")
ValueError: Associated Agent record is required.
```

The logic was also duplicated across two router endpoints with identical
implementations, creating a maintenance burden and potential for
inconsistencies.

## Changes Made

### Created Helper Method
- Add `_update_library_agent_version_and_settings()` helper function  
- Fixes broken `update_agent_version_in_library` by centralizing the
logic
- Uses proper error handling and settings merging with `model_copy()`

### Replaced Duplicate Code  
- **In `update_graph` function** (v1.py:863) - replaced 13 lines with
single helper call
- **In `set_graph_active_version` function** (v1.py:920) - replaced 13
lines with single helper call

### Benefits
- **Fixes broken functionality**: Centralizes
`update_agent_version_in_library` logic
- **DRY Principle**: Eliminates code duplication across two router
endpoints
- **Maintainability**: Single place to modify the library agent update
logic
- **Consistency**: Ensures both endpoints use identical logic for HITL
safe mode configuration
- **Readability**: Cleaner, more focused endpoint implementations

## Technical Details
The helper method fixes the broken `update_agent_version_in_library` flow by
handling the following (a rough sketch appears after this list):
1. Updating agent version in library via
`update_agent_version_in_library()`
2. Conditionally setting `human_in_the_loop_safe_mode: true` if graph
has HITL blocks and setting is not already configured
3. Proper settings merging to preserve existing configuration
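
A rough sketch of such a helper; apart from `update_agent_version_in_library()` and `model_copy()`, the surrounding names are assumptions for illustration, not the actual `v1.py` code:

```python
# Hedged sketch only: graph_has_hitl_blocks and update_library_agent_settings
# are assumed helper names.
async def _update_library_agent_version_and_settings(user_id: str, graph) -> None:
    library_agent = await update_agent_version_in_library(user_id, graph.id, graph.version)

    settings = library_agent.settings
    if graph_has_hitl_blocks(graph) and settings.human_in_the_loop_safe_mode is None:
        # model_copy(update=...) merges the new flag without discarding existing settings
        settings = settings.model_copy(update={"human_in_the_loop_safe_mode": True})
        await update_library_agent_settings(user_id, library_agent.id, settings)
```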

## Testing
- [x] Code compiles and passes type checking
- [x] Pre-commit hooks pass (linting, formatting, type checking)
- [x] Both affected endpoints maintain same functionality with cleaner
implementation

Fixes broken duplicate code identified in v1.py router endpoints for
`update_agent_version_in_library`.

Co-authored-by: Claude <noreply@anthropic.com>
2025-12-02 14:31:36 +00:00
Zamil Majdy
e4102bf0fb fix(backend): resolve HITL execution context validation and re-enable tests (#11509)
## Summary
Fix critical validation errors in GraphExecutionEntry and
NodeExecutionEntry models that were preventing HITL block execution, and
re-enable HITL test suite.

## Root Cause
After introducing ExecutionContext as a mandatory field, existing
messages in the execution queue lacked this field, causing validation
failures:
```
[GraphExecutor] [ExecutionManager] Could not parse run message: 1 validation error for GraphExecutionEntry
execution_context
  Field required [type=missing, input_value={'user_id': '26db15cb-a29...
```

## Changes Made

### 🔧 Execution Model Fixes (`backend/data/execution.py`)
- **Add backward compatibility**: `execution_context: ExecutionContext =
Field(default_factory=ExecutionContext)`
- **Prevent shared mutable defaults**: Use `default_factory` instead of
direct instantiation to avoid mutation issues
- **Ignore unknown fields**: Add `model_config = {"extra": "ignore"}`
for future compatibility
- **Applied to both models**: GraphExecutionEntry and NodeExecutionEntry

### 🧪 Re-enable HITL Test Suite
- **Remove test skips**: Remove `pytestmark = pytest.mark.skip()` from
`human_review_test.py`
- **Remove test skips**: Remove `pytestmark = pytest.mark.skip()` from
`review_routes_test.py`
- **Restore test coverage**: HITL functionality now properly tested in
CI

## Default ExecutionContext Behavior
When `execution_context` is missing from old messages, the model provides safe
defaults (a minimal sketch follows this list):
- `safe_mode: bool = True` (HITL blocks require approval by default)
- `user_timezone: str = "UTC"` (safe timezone default)  
- `root_execution_id: Optional[str] = None`
- `parent_execution_id: Optional[str] = None`
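
A minimal sketch of the backward-compatible models, following the field list above (not the full `GraphExecutionEntry` definition):

```python
from typing import Optional

from pydantic import BaseModel, Field

class ExecutionContext(BaseModel):
    safe_mode: bool = True
    user_timezone: str = "UTC"
    root_execution_id: Optional[str] = None
    parent_execution_id: Optional[str] = None

class GraphExecutionEntry(BaseModel):
    # extra="ignore" tolerates unknown fields from older/newer queue messages
    model_config = {"extra": "ignore"}
    # default_factory gives every entry its own ExecutionContext instance,
    # so old messages missing the field still validate safely
    execution_context: ExecutionContext = Field(default_factory=ExecutionContext)
```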

## Impact
-  **Fixes deployment validation errors** in dev environment
-  **Maintains backward compatibility** with existing queue messages  
-  **Restores proper HITL test coverage** in CI
-  **Ensures isolation**: Each execution gets its own ExecutionContext
instance
-  **Future-proofs**: Protects against message format changes

## Testing
- [x] HITL test suite re-enabled and should pass in CI
- [x] Existing executions continue to work with sensible defaults
- [x] New executions receive proper ExecutionContext from caller
- [x] Verified `default_factory` prevents shared mutable instances

## Files Changed
- `backend/data/execution.py` - Add backward-compatible ExecutionContext
defaults
- `backend/data/human_review_test.py` - Re-enable test suite
- `backend/server/v2/executions/review/review_routes_test.py` -
Re-enable test suite

Resolves the "Field required" validation error preventing HITL block
execution in dev environment.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

Co-authored-by: Claude <noreply@anthropic.com>
2025-12-02 11:02:21 +00:00
Zamil Majdy
7b951c977e feat(platform): implement graph-level Safe Mode toggle for HITL blocks (#11455)
## Summary

This PR implements a graph-level Safe Mode toggle system for
Human-in-the-Loop (HITL) blocks. When Safe Mode is ON (default), HITL
blocks require manual review before proceeding. When OFF, they execute
automatically.
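
A hedged sketch of the intended precedence (illustrative only): an explicit `safe_mode` value from the API wins, otherwise the graph's metadata is used, defaulting to ON.

```python
from typing import Optional

def resolve_safe_mode(api_safe_mode: Optional[bool], graph_metadata: dict) -> bool:
    # API parameter takes precedence over graph metadata; default is safe (ON).
    # The metadata key name below is an assumption for illustration.
    if api_safe_mode is not None:
        return api_safe_mode
    return bool(graph_metadata.get("human_in_the_loop_safe_mode", True))
```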

## 🔧 Backend Changes

- **Database**: Added `metadata` JSON column to `AgentGraph` table with
migration
- **API**: Updated `execute_graph` endpoint to accept `safe_mode`
parameter
- **Execution**: Enhanced execution context to use graph metadata as
default with API override capability
- **Auto-detection**: Automatically populate `has_human_in_the_loop` for
graphs containing HITL blocks
- **Block Detection**: HITL block ID:
`8b2a7b3c-6e9d-4a5f-8c1b-2e3f4a5b6c7d`

## 🎨 Frontend Changes

- **Component**: New `FloatingSafeModeToggle` with dual variants:
  - **White variant**: For library pages, integrates with action buttons
  - **Black variant**: For builders, floating positioned  
- **Integration**: Added toggles to both new/legacy builders and library
pages
- **API Integration**: Direct graph metadata updates via
`usePutV1UpdateGraphVersion`
- **Query Management**: React Query cache invalidation for consistent UI
updates
- **Conditional Display**: Toggle only appears when graph contains HITL
blocks

## 🛠 Technical Implementation

- **Safe Mode ON** (default): HITL blocks require manual review before
proceeding
- **Safe Mode OFF**: HITL blocks execute automatically without
intervention
- **Priority**: Backend API `safe_mode` parameter takes precedence over
graph metadata
- **Detection**: Auto-populates `has_human_in_the_loop` metadata field
- **Positioning**: Proper z-index and responsive positioning for
floating elements

## 🚧 Known Issues (Work in Progress)

### High Priority
- [ ] **Toggle state persistence**: Always shows "ON" regardless of
actual state - query invalidation issue
- [ ] **LibraryAgent metadata**: Missing metadata field causing
TypeScript errors
- [ ] **Tooltip z-index**: Still covered by some UI elements despite
high z-index

### Medium Priority  
- [ ] **HITL detection**: Logic needs improvement for reliable block
detection
- [ ] **Error handling**: Removing HITL blocks from graph causes save
errors
- [ ] **TypeScript**: Fix type mismatches between GraphModel and
LibraryAgent

### Low Priority
- [ ] **Frontend API**: Add `safe_mode` parameter to execution calls
once OpenAPI is regenerated
- [ ] **Performance**: Consider debouncing rapid toggle clicks

## 🧪 Test Plan

- [ ] Verify toggle appears only when graph has HITL blocks
- [ ] Test toggle persistence across page refreshes  
- [ ] Confirm API calls update graph metadata correctly
- [ ] Validate execution behavior respects safe mode setting
- [ ] Check styling consistency across builder and library contexts

## 🔗 Related

- Addresses requirements for graph-level HITL configuration
- Builds on existing FloatingReviewsPanel infrastructure
- Integrates with existing graph metadata system

🤖 Generated with [Claude Code](https://claude.ai/code)
2025-12-02 09:55:55 +00:00
Bently
a58ac2150f feat(backend/blocks): add Discord create thread block (#11131)
Adds a new Discord block that allows users to create threads in Discord
channels. This addresses issue OPEN-2666 which requested the ability to
create Discord threads from workflows.

## Solution

Implemented `CreateDiscordThreadBlock` in
`autogpt_platform/backend/backend/blocks/discord/bot_blocks.py` with the
following features:

- Create public or private threads in Discord channels via bot token
- Support for both channel ID and channel name lookup (with optional
server name)
- Configurable thread type (public/private toggle)
- Configurable auto-archive duration (60, 1440, 4320, or 10080 minutes)
- Optional initial message to send in the newly created thread
- Outputs: status, thread_id, and thread_name for workflow chaining

The block follows the existing Discord block patterns and includes
proper error handling for permissions, channel not found, and login
failures.
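
A hedged sketch of the underlying discord.py call flow (the block's actual naming, channel lookup, and error handling differ):

```python
from typing import Optional

import discord

async def create_discord_thread(
    channel: discord.TextChannel,
    name: str,
    private: bool = False,
    auto_archive_minutes: int = 1440,  # one of 60, 1440, 4320, 10080
    initial_message: Optional[str] = None,
) -> discord.Thread:
    thread = await channel.create_thread(
        name=name,
        type=discord.ChannelType.private_thread if private else discord.ChannelType.public_thread,
        auto_archive_duration=auto_archive_minutes,
    )
    if initial_message:
        await thread.send(initial_message)
    return thread
```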

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Verify block appears in the workflow builder UI
- [x] Test creating a public thread with valid bot token and channel ID
- [x] Test creating a private thread with valid bot token and channel
name
  - [x] Test with invalid channel ID/name to verify error handling
  - [x] Test with bot lacking thread creation permissions
  - [x] Verify thread_id output can be chained to subsequent blocks
- [x] Test auto-archive duration options (60, 1440, 4320, 10080 minutes)
  - [x] Test sending initial message in newly created thread

Video to show the blocks working!



https://github.com/user-attachments/assets/f248f315-05b3-47e2-bd6b-4c64d53c35fc
2025-12-01 20:01:27 +00:00
Swifty
9f37342bc6 feat(platform): Simplify the chat tool system to use only 2 tools (#11464)
Simplifying the chat tool system to use only 2 tools

### Changes 🏗️

- remove old tools
- expand run_agent tool to include all stages

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] tested adding credentials work
  - [x] tested running an agent works
  - [x] tested scheduling an agent works
2025-12-01 20:56:16 +01:00
Swifty
7dc3b201b7 feat(platform): Explain None Message in BlockError Messages (#11490)
Sometimes block errors are raised with the message set to None; we now handle
this case.
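
A minimal sketch of the defaulting behavior (assumed class shapes; the real exceptions live in `backend/util/exceptions.py`, and the default strings follow the summary further down):

```python
class BlockExecutionError(Exception):
    def __init__(self, message: str | None = None):
        super().__init__(message or "Output error was None")

class BlockUnknownError(Exception):
    def __init__(self, message: str | None = None):
        super().__init__(message or "Unknown error occurred")
```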

### Changes 🏗️

- handle case of None message
- Add tests

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] write unit tests

<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> Ensure `BlockExecutionError` and `BlockUnknownError` provide default
messages when given None/empty input, with new unit tests covering
formatting and inheritance.
> 
> - **Backend**:
>   - `backend/util/exceptions.py`:
> - `BlockExecutionError`: default `None` message to `"Output error was
None"`.
> - `BlockUnknownError`: default empty/`None` message to `"Unknown error
occurred"`.
> - **Tests**:
> - `backend/util/exceptions_test.py`: add tests for message formatting,
`None`/empty handling, and exception inheritance.
> 
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
6e9b31ae47. This will update automatically
on new commits. Configure
[here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->

Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
2025-12-01 20:55:02 +01:00
Swifty
7d53c0de27 fix(backend): Fix Youtube blocking our cloud ips (#11456)
YouTube can block cloud IPs, causing the YouTube transcribe blocks to stop
working. This PR adds a Webshare proxy to get around this issue.
### Changes 🏗️

- add webshare proxy to youtube transcribe block 

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] I have tested this works locally using the proxy

<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> Routes YouTube transcript fetching through Webshare proxy using
user/password credentials, wiring in provider enum, settings, default
credentials, and updated tests.
> 
> - **Blocks** (`backend/blocks/youtube.py`):
> - Use `WebshareProxyConfig` with `YouTubeTranscriptApi` to fetch
transcripts via proxy.
> - Add `credentials` input (`user_password` for `webshare_proxy`);
include test credentials and mocks.
> - Update method signatures: `get_transcript(video_id, credentials)`
and `run(..., *, credentials, ...)`.
>   - Change description to indicate proxy usage; add logging.
> - **Integrations**:
> - Providers (`backend/integrations/providers.py`): add
`ProviderName.WEBSHARE_PROXY`.
> - Credentials store (`backend/integrations/credentials_store.py`): add
`webshare_proxy` `UserPasswordCredentials`; include in
`DEFAULT_CREDENTIALS` and conditionally in `get_all_creds`.
> - **Settings** (`backend/util/settings.py`): add secrets
`webshare_proxy_username` and `webshare_proxy_password`.
> - **Tests** (`test/blocks/test_youtube.py`): update to pass
credentials and assert proxy config; add custom-credentials test; adjust
fallback/priority tests.
> 
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
d060898488. This will update automatically
on new commits. Configure
[here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->

Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
2025-12-01 20:54:52 +01:00
Nicholas Tindle
0728f3bd49 fix(backend): Remove Google Sheets API scopes from block inputs (#11484)
Eliminates explicit Google Sheets API scopes from credentials fields in
all Google Sheets-related blocks. This change may be intended to
centralize or dynamically manage API scopes elsewhere, simplifying block
configuration.

<!-- Clearly explain the need for these changes: -->

### Changes 🏗️
- removes the scopes we aren't approved to use
<!-- Concisely describe all of the changes made in this pull request:
-->

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Bently tested it on his fresh account and it worked!
2025-12-01 18:07:31 +00:00
Ubbe
b4e95dba14 feat(frontend): update empty task view designs (#11498)
## Changes 🏗️

Update the new library agent page's empty view to look like:

<img width="900" height="1060" alt="Screenshot 2025-12-01 at 14 12 10"
src="https://github.com/user-attachments/assets/e6a22a4f-35f4-434e-bbb1-593390034b9a"
/>

Now we display an **About this agent** card on the left when the agent
is published on the marketplace. I expanded the:
```
/api/library/agents/{id}
```
endpoint to also return the following:
```js
{
  // ...
  created_at: "timestamp",
  marketplace_listing: {
    creator: { name: "string", slug: "string", id: "string" },
    name: "string",
    slug: "string",
    id: "string"
  }
}
```
To be able to display this extra information on the card and link to the
creator and marketplace pages.

Also:
- design system updates regarding navbar and colors

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run locally and see the new page for an agent with no runs
2025-12-01 20:28:44 +07:00
Swifty
00148f4e3d feat(platform): add external api routes for store search and tool usage (#11463)
We want to allow external tools to explore the marketplace and use the
chat agent tools


### Changes 🏗️

- add store api routes
- add tool api routes

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] tested all endpoints work

---------

Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
2025-12-01 12:04:03 +00:00
Abhimanyu Yadav
35eb563241 feat(platform): enhance BlockMenuSearch with agent addition (#11474)
This PR enables users to add agents directly to the builder from search
results and marketplace views. Previously, users had to navigate to
different sections to add agents - now they can do it with a single
click from wherever they find the agent. The change includes proper
loading states, error handling, and success notifications to provide a
smooth user experience.

### Changes 🏗️

- **Added direct agent-to-builder functionality**: Users can now add
agents directly to the builder from search results and marketplace views
- **Created reusable hook `useAddAgentToBuilder`**: Centralized logic
for adding both library and marketplace agents to the builder
- **Enhanced search results interaction**: Added click handlers and
loading states to agent cards in search results
- **Improved marketplace agent addition**: Marketplace agents are now
added to both library and builder with proper feedback
- **Added loading states**: Visual feedback when agents are being added
(loading spinners on cards)
- **Improved error handling**: Added toast notifications for success and
failure cases with descriptive error messages
- **Added Sentry error tracking**: Captures exceptions for better
debugging in production

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Search for agents and add them to builder from search results
- [x] Add marketplace agents which should appear in both library and
builder
  - [x] Verify loading states appear during agent addition
  - [x] Test error scenarios (network failure, invalid agent)
- [x] Confirm toast notifications appear for both success and error
cases
  - [x] Verify builder viewport centers on newly added agent
2025-12-01 05:26:38 +00:00
Zamil Majdy
3d08c22dd5 feat(platform): add Human In The Loop block with review workflow (#11380)
## Summary
This PR implements a comprehensive Human In The Loop (HITL) block that
allows agents to pause execution and wait for human
approval/modification of data before continuing.



https://github.com/user-attachments/assets/c027d731-17d3-494c-85ca-97c3bf33329c


## Key Features
- Added WAITING_FOR_REVIEW status to AgentExecutionStatus enum
- Created PendingHumanReview database table for storing review requests
- Implemented HumanInTheLoopBlock that extracts input data and creates
review entries
- Added API endpoints at /api/executions/review for fetching and
reviewing pending data
- Updated execution manager to properly handle waiting status and resume
after approval

## Frontend Components
- PendingReviewCard for individual review handling
- PendingReviewsList for multiple reviews
- FloatingReviewsPanel for graph builder integration
- Integrated review UI into 3 locations: legacy library, new library,
and graph builder

## Technical Implementation
- Added proper type safety throughout with SafeJson handling
- Optimized database queries using count functions instead of full data
fetching
- Fixed imports to be top-level instead of local
- All formatters and linters pass

## Test plan
- [ ] Test Human In The Loop block creation in graph builder
- [ ] Test block execution pauses and creates pending review
- [ ] Test review UI appears in all 3 locations
- [ ] Test data modification and approval workflow
- [ ] Test rejection workflow
- [ ] Test execution resumes after approval

🤖 Generated with [Claude Code](https://claude.ai/code)

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

* **New Features**
* Added Human-In-The-Loop review workflows to pause executions for human
validation.
* Users can approve or reject pending tasks, optionally editing
submitted data and adding a message.
* New "Waiting for Review" execution status with UI indicators across
run lists, badges, and activity views.
* Review management UI: pending review cards, list view, and a floating
reviews panel for quick access.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-11-27 12:07:46 +07:00
Zamil Majdy
ff5dd7a5b4 fix(backend): migrate all query_raw calls to query_raw_with_schema for proper schema handling (#11462)
## Summary

Complete migration of all non-test `query_raw` calls to use
`query_raw_with_schema` for proper PostgreSQL schema context handling.
This resolves the marketplace API failures where queries were looking
for unqualified table names.

## Root Cause

Prisma's `query_raw()` doesn't respect the `schema` parameter in
`DATABASE_URL` (`?schema=platform`) for raw SQL queries, causing queries
to fail when looking for unqualified table names in multi-schema
environments.

## Changes Made

### Files Updated
-  **backend/server/v2/store/db.py**: Already updated in previous
commit
-  **backend/server/v2/builder/db.py**: Updated `get_suggested_blocks`
query at line 343
-  **backend/check_store_data.py**: Updated all 4 `query_raw` calls to
use schema-aware queries
-  **backend/check_db.py**: Updated all `query_raw` calls (import
already existed)

### Technical Implementation
- Add import: `from backend.data.db import query_raw_with_schema`
- Replace `prisma.get_client().query_raw()` with
`query_raw_with_schema()`
- Add `{schema_prefix}` placeholder to table references in SQL queries
- Fix f-string template conflicts by using double braces
`{{schema_prefix}}`

### Query Examples

**Before:**
```sql
FROM "StoreAgent"
FROM "AgentNodeExecution" execution
```

**After:**
```sql  
FROM {schema_prefix}"StoreAgent"
FROM {schema_prefix}"AgentNodeExecution" execution
```
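
A hedged usage sketch (the exact signature of `query_raw_with_schema` is assumed from the description above; it substitutes `{schema_prefix}` before executing the query):

```python
from backend.data.db import query_raw_with_schema

async def list_store_agents():
    # {schema_prefix} is filled in by the helper (e.g. the platform schema vs.
    # an empty prefix for public-schema deployments), so the same SQL works
    # in single- and multi-schema environments
    return await query_raw_with_schema('SELECT * FROM {schema_prefix}"StoreAgent"')
```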

## Impact

-  All raw SQL queries now properly respect platform schema context
-  Fixes "relation does not exist" errors in multi-schema environments
-  Maintains backward compatibility with public schema deployments
-  Code formatting passes with `poetry run format`

## Testing

- All `query_raw` usages in non-test code successfully migrated
- `query_raw_with_schema` automatically handles schema prefix injection
- Existing query logic unchanged, only schema awareness added

## Before/After

**Before:** GET /api/store/agents → "relation 'StoreAgent' does not
exist"
**After:** GET /api/store/agents →  Returns store agents correctly

Resolves the marketplace API failures and ensures consistent schema
handling across all raw SQL operations.

Co-authored-by: Claude <noreply@anthropic.com>
2025-11-27 04:04:20 +00:00
Nicholas Tindle
02f8a69c6a feat(platform): add Google Drive Picker field type for enhanced file selection (#11311)
### 🏗️ Changes 

This PR adds a Google Drive Picker field type to enhance the user
experience of existing Google blocks, replacing manual file ID entry
with a visual file picker.
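
A rough sketch of how a block input might declare a picker-backed field; the import paths and keyword arguments here are assumptions for illustration (reflecting the MIME-type filtering and multiselect configurability described below), not this PR's exact API:

```python
from backend.data.block import BlockSchema  # assumed import path
from backend.blocks.google._drive import GoogleDrivePickerField  # assumed import path

class ReadSpreadsheetInput(BlockSchema):
    # Hypothetical kwargs; the real field factory may differ
    spreadsheet = GoogleDrivePickerField(
        description="Spreadsheet to read",
        mime_types=["application/vnd.google-apps.spreadsheet"],
        multiselect=False,
    )
```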

#### Backend Changes
- **Added  and  types** in :
  - Configurable picker field with OAuth scope management
  - Support for multiselect, folder selection, and MIME type filtering
  - Proper access token handling for file downloads
- **Enhanced Gmail blocks**: Updated attachment fields to use Google
Drive Picker for better UX
- **Enhanced Google Sheets blocks**: Updated spreadsheet selection to
use picker instead of manual ID entry
- **Added utility**: Async file download with virus scanning and 100MB
size limit

#### Frontend Changes  
- **Enhanced GoogleDrivePicker component**: Improved UI with folder icon
and multiselect messaging
- **Integrated picker in form renderers**: Auto-renders for fields with
format
- **Added shared GoogleDrivePickerInput component**: Eliminates code
duplication between NodeInputs and RunAgentInputs
- **Added type definitions**: Complete TypeScript support for picker
schemas and responses

#### Key Features
- 🎯 **Visual file selection**: Replace manual Google Drive file ID entry
with intuitive picker
- 📁 **Flexible configuration**: Support for documents, spreadsheets,
folders, and custom MIME types
- 🔒 **Minimal OAuth scopes**: Uses scope for security (only access to
user-selected files)
-  **Enhanced UX**: Seamless integration in both block configuration
and agent run modals
- 🛡️ **Security**: Virus scanning and file size limits for downloaded
attachments

#### Migration Impact
- **Backward compatible**: Existing blocks continue to work with manual
ID entry
- **Progressive enhancement**: New picker fields provide better UX for
the same functionality
- **No breaking changes**: all existing blocks should be unaffected

This enhancement improves the user experience of Google blocks without
introducing new systems or breaking existing functionality.


### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
- [x] Test multiple of the new blocks [of note: the create spreadsheet block
should not be used for now, as it uses the API rather than the Drive picker]
  - [x] chain the blocks together and pass values between them

---------

Co-authored-by: Lluis Agusti <hi@llu.lu>
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
Co-authored-by: Claude <noreply@anthropic.com>
2025-11-27 03:01:29 +00:00
Toran Bruce Richards
e983d5c49a fix(backend): Implement passed uploaded media support for AI image customizer block (#11441)
- Added `store_media_file` utility to convert local file paths to Data
URIs for image processing.
- Updated `AIImageCustomizerBlock` to utilize processed images in model
execution, improving compatibility with Replicate API.
- Added optional Aspect ratio input to AIImageCustomizerBlock

This change enhances the image handling capabilities of the AI image
customizer, ensuring that images are properly formatted for external
processing.
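
A hedged, standalone sketch of the path-to-Data-URI conversion (the block itself uses the `store_media_file` utility with `return_content=True` rather than this helper):

```python
import base64
import mimetypes

def to_data_uri(path: str) -> str:
    # Inline the file as a data URI so external APIs like Replicate receive
    # image content directly instead of a local file path
    mime_type, _ = mimetypes.guess_type(path)
    with open(path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode()
    return f"data:{mime_type or 'application/octet-stream'};base64,{encoded}"
```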

<!-- Clearly explain the need for these changes: -->

### Changes 🏗️

<!-- Concisely describe all of the changes made in this pull request:
-->

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
- [x] Created agent using AI Image Customizer block attached to agent
file input
  - [x] Run agent, confirmed block is working
- [x] Confirm block is still working in original direct file upload
setup.


### Testing Results

#### Before (dev cloud):
<img width="836" height="592" alt="image"
src="https://github.com/user-attachments/assets/88c75668-c5c9-44bb-bec5-6554088a0cb7"
/>


#### After (local):
<img width="827" height="587" alt="image"
src="https://github.com/user-attachments/assets/04fea431-70a5-4173-bc84-d354c03d7174"
/>

<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> Preprocesses input images to data URIs and adds an `aspect_ratio`
option, wiring both through to Replicate in `AIImageCustomizerBlock`.
> 
> - **Backend**
>   - **`backend/blocks/ai_image_customizer.py`**:
> - Preprocesses input images via `store_media_file(...,
return_content=True)` to Data URIs before invoking Replicate.
> - Adds `AspectRatio` enum and `aspect_ratio` input; passed through
`run_model` and included in Replicate input.
>     - Updates block test input accordingly.
> 
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
4116cf80d7. This will update automatically
on new commits. Configure
[here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->

---------

Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
2025-11-27 00:41:45 +00:00
Bently
64a775dfa7 feat(backend/blocks): Add GPT-5.1 and GPT-5.1-codex (#11406)
This PR adds the latest gpt-5.1 and gpt-5.1-codex LLMs from OpenAI, as
well as updating the price of the gpt-5-chat model.

https://platform.openai.com/docs/models/gpt-5.1
https://platform.openai.com/docs/models/gpt-5.1-codex

I also had to add a new Codex block, as it uses a different OpenAI
API and has other options the main LLMs don't use.

<img width="231" height="755" alt="image"
src="https://github.com/user-attachments/assets/a4056633-7b0f-446f-ae86-d7755c5b88ec"
/>


#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Test the latest gpt-5.1 llm
  - [x] Test the latest gpt-5.1-codex block

---------

Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
Co-authored-by: Claude <noreply@anthropic.com>
2025-11-25 09:33:11 +00:00
Bently
5d97706bb8 feat(backend/blocks): Add claude opus 4.5 (#11446)
This PR adds the latest [claude opus
4.5](https://www.anthropic.com/news/claude-opus-4-5) model to the
platform

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Test and use the llm to make sure it works
2025-11-25 09:11:02 +00:00
dependabot[bot]
244f3c7c71 chore(backend/deps-dev): bump faker from 37.8.0 to 38.2.0 in /autogpt_platform/backend (#11435)
Bumps [faker](https://github.com/joke2k/faker) from 37.8.0 to 38.2.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/joke2k/faker/releases">faker's
releases</a>.</em></p>
<blockquote>
<h2>Release v38.2.0</h2>
<p>See <a
href="https://github.com/joke2k/faker/blob/refs/tags/v38.2.0/CHANGELOG.md">CHANGELOG.md</a>.</p>
<h2>Release v38.1.0</h2>
<p>See <a
href="https://github.com/joke2k/faker/blob/refs/tags/v38.1.0/CHANGELOG.md">CHANGELOG.md</a>.</p>
<h2>Release v38.0.0</h2>
<p>See <a
href="https://github.com/joke2k/faker/blob/refs/tags/v38.0.0/CHANGELOG.md">CHANGELOG.md</a>.</p>
<h2>Release v37.12.0</h2>
<p>See <a
href="https://github.com/joke2k/faker/blob/refs/tags/v37.12.0/CHANGELOG.md">CHANGELOG.md</a>.</p>
<h2>Release v37.11.0</h2>
<p>See <a
href="https://github.com/joke2k/faker/blob/refs/tags/v37.11.0/CHANGELOG.md">CHANGELOG.md</a>.</p>
<h2>Release v37.10.0</h2>
<p>See <a
href="https://github.com/joke2k/faker/blob/refs/tags/v37.10.0/CHANGELOG.md">CHANGELOG.md</a>.</p>
<h2>Release v37.9.0</h2>
<p>See <a
href="https://github.com/joke2k/faker/blob/refs/tags/v37.9.0/CHANGELOG.md">CHANGELOG.md</a>.</p>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/joke2k/faker/blob/master/CHANGELOG.md">faker's
changelog</a>.</em></p>
<blockquote>
<h3><a
href="https://github.com/joke2k/faker/compare/v38.1.0...v38.2.0">v38.2.0
- 2025-11-19</a></h3>
<ul>
<li>Add localized UniqueProxy. Thanks <a
href="https://github.com/azmeuk"><code>@​azmeuk</code></a></li>
</ul>
<h3><a
href="https://github.com/joke2k/faker/compare/v38.0.0...v38.1.0">v38.1.0
- 2025-11-19</a></h3>
<ul>
<li>Add <code>person</code> provider for <code>ar_DZ</code> locale.
Thanks <a
href="https://github.com/othmane099"><code>@​othmane099</code></a>.</li>
<li>Add <code>person</code>, <code>phone_number</code>,
<code>date_time</code> for <code>fr_DZ</code> locale. Thanks <a
href="https://github.com/othmane099"><code>@​othmane099</code></a>.</li>
</ul>
<h3><a
href="https://github.com/joke2k/faker/compare/v37.12.0...v38.0.0">v38.0.0
- 2025-11-11</a></h3>
<ul>
<li>Drop support for Python 3.9</li>
<li>Add support for Python 3.14</li>
</ul>
<h3><a
href="https://github.com/joke2k/faker/compare/v37.11.0...v37.12.0">v37.12.0
- 2025-10-07</a></h3>
<ul>
<li>Add french VAT number. Thanks <a
href="https://github.com/fabien-michel"><code>@​fabien-michel</code></a>.</li>
</ul>
<h3><a
href="https://github.com/joke2k/faker/compare/v37.9.0...v37.11.0">v37.11.0
- 2025-10-07</a></h3>
<ul>
<li>Add French company APE code. Thanks <a
href="https://github.com/fabien-michel"><code>@​fabien-michel</code></a>.</li>
</ul>
<h3><a
href="https://github.com/joke2k/faker/compare/v37.8.0...v37.9.0">v37.9.0
- 2025-10-07</a></h3>
<ul>
<li>Add names generation to <code>en_KE</code> locale. Thanks <a
href="https://github.com/titustum"><code>@​titustum</code></a>.</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="337f8faea2"><code>337f8fa</code></a>
Bump version: 38.1.0 → 38.2.0</li>
<li><a
href="d8fb7f20fa"><code>d8fb7f2</code></a>
📝 Update CHANGELOG.md</li>
<li><a
href="243e3174c0"><code>243e317</code></a>
lint docs</li>
<li><a
href="e398287902"><code>e398287</code></a>
📝 Update docs</li>
<li><a
href="3cc7f7750f"><code>3cc7f77</code></a>
feat: localized UniqueProxy (<a
href="https://redirect.github.com/joke2k/faker/issues/2279">#2279</a>)</li>
<li><a
href="8ba30da5f7"><code>8ba30da</code></a>
Bump version: 38.0.0 → 38.1.0</li>
<li><a
href="921bde120f"><code>921bde1</code></a>
📝 Update CHANGELOG.md</li>
<li><a
href="702e23b8e3"><code>702e23b</code></a>
fix newline</li>
<li><a
href="d5051a98db"><code>d5051a9</code></a>
add_faker_pk_pypi_link (<a
href="https://redirect.github.com/joke2k/faker/issues/2281">#2281</a>)</li>
<li><a
href="050de370cc"><code>050de37</code></a>
Add <code>person</code> provider for <code>ar_DZ</code> locale (<a
href="https://redirect.github.com/joke2k/faker/issues/2271">#2271</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/joke2k/faker/compare/v37.8.0...v38.2.0">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=faker&package-manager=pip&previous-version=37.8.0&new-version=38.2.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
2025-11-25 09:05:48 +00:00
Bently
126d5838a0 feat(backend/blocks): add latest grok models (#11422)
This PR adds some of the latest Grok models to the platform:
``x-ai/grok-4-fast``, ``x-ai/grok-4.1-fast`` and ``ai/grok-code-fast-1``

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
- [x] Test all of the latest grok models to make sure they work and they
do!

<img width="1089" height="714" alt="image"
src="https://github.com/user-attachments/assets/0d1e3984-69e8-432b-982a-b04c16bc4f41"
/>
2025-11-24 13:25:48 +00:00
Bently
643aea849b feat(backend/blocks): Add google banana pro (#11425)
This PR adds the latest Google Banana Pro image generator and editor to
the platform and fixes up some of the prices for the image generation
models.

I asked for ``Generate a image of a dog on a skateboard`` and this is
what I got:
<img width="2048" height="2048" alt="image"
src="https://github.com/user-attachments/assets/9b6c16d8-df8f-4fb6-a009-d6d342f9beb7"
/>

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
- [x] Test the image generator and image editor block using the latest
google banana pro model and it works

---------

Co-authored-by: Abhimanyu Yadav <122007096+Abhi1992002@users.noreply.github.com>
2025-11-24 13:23:54 +00:00
Swifty
3b092f34d8 feat(platform): Add Get Linear Issues Block (#11415)
Added the ability to get all issues for a given project.

### Changes 🏗️

- added api query
- added new models
- added new block that gets all issues for a given project

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] I have ensured the new block works in dev
  - [x] I have ensured the other linear blocks still work
2025-11-24 11:43:10 +00:00
Swifty
0921d23628 fix(block): Improve error handling of SendEmailBlock (#11420)
Currently, if the SMTP server is not configured, it results in a
platform error. This PR simplifies the error handling.
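
A hedged sketch of the approach (not the block's actual code): catch common SMTP failures and surface them on the block's error output instead of letting them become a platform error.

```python
import smtplib
from email.message import EmailMessage

def send_email(host: str, port: int, username: str, password: str,
               message: EmailMessage) -> tuple[str, str]:
    try:
        with smtplib.SMTP(host, port) as server:
            server.starttls()
            server.login(username, password)
            server.send_message(message)
        return "status", "sent"
    except (smtplib.SMTPException, OSError) as e:
        # Returned as the block's `error` output rather than raised
        return "error", f"Failed to send email: {e}"
```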

### Changes 🏗️
 
- removed the default value for the SMTP server host
- capture common errors and yield them as the block's error output

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Checked all tests still pass
2025-11-24 11:42:38 +00:00