### Changes 🏗️
- Added validation to ensure that the `summary` and `final_summary`
returned by the LLM are strings.
- Raises a `ValueError` if the LLM returns a list or other non-string
type, providing a descriptive error message to aid debugging.
Fixes
[AUTOGPT-SERVER-6M4](https://sentry.io/organizations/significant-gravitas/issues/6978480131/).
The issue was that the LLM returned a list of strings instead of a single
string summary, causing `_combine_summaries` to fail on `join`.
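A minimal sketch of the added validation (the helper name and call site are illustrative, not the actual code):
```python
def _ensure_str(value: object, field_name: str) -> str:
    # Hypothetical helper: reject non-string LLM output with a descriptive error.
    if not isinstance(value, str):
        raise ValueError(
            f"Expected LLM to return a string for {field_name!r}, "
            f"got {type(value).__name__}: {value!r}"
        )
    return value

# Example: a list now fails fast with a clear message instead of
# blowing up later in `_combine_summaries` on `join`.
_ensure_str(["part one", "part two"], "summary")  # raises ValueError
```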
This fix was generated by Seer in Sentry, triggered by Craig Swift. 👁️
Run ID: 2230933
Not quite right? [Click here to continue debugging with
Seer.](https://sentry.io/organizations/significant-gravitas/issues/6978480131/?seerDrawer=true)
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Added a unit test to verify that a ValueError is raised when the
LLM returns a list instead of a string for summary or final_summary.
---------
Co-authored-by: seer-by-sentry[bot] <157164994+seer-by-sentry[bot]@users.noreply.github.com>
Co-authored-by: Swifty <craigswift13@gmail.com>
Marketplace sort-by functionality was not working on the frontend. This
PR fixes it.
### Changes 🏗️
- Add type hints for sort by
- Fix marketplace sort by drop downs
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] tested locally
### Changes 🏗️
- Ensures `handleFetchError` can handle non-JSON error responses (e.g.,
HTML error pages).
- Attempts to parse the response body as JSON, but falls back to text if
JSON parsing fails.
- Logs a warning to the console if JSON parsing fails.
- Sets `responseData` to null if parsing fails.
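The same try-JSON, fall-back-to-text pattern, sketched in Python for illustration (the actual fix is in the frontend's `handleFetchError`, in TypeScript):
```python
import json
import logging

logger = logging.getLogger(__name__)

def parse_error_body(body: str) -> tuple[object | None, str]:
    # Try to parse the error body as JSON; keep the raw text as fallback.
    try:
        return json.loads(body), body
    except json.JSONDecodeError:
        logger.warning("Error response is not valid JSON; falling back to text")
        return None, body  # structured data is None when parsing fails

data, text = parse_error_body("<html>A server error occurred</html>")
assert data is None and text.startswith("<html>")
```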
Fixes
[BUILDER-482](https://sentry.io/organizations/significant-gravitas/issues/6958135748/).
The issue was that the frontend error handler unconditionally called
`response.json()` on a non-JSON HTML error page starting with 'A'.
This fix was generated by Seer in Sentry, triggered by Craig Swift. 👁️
Run ID: 2206951
Not quite right? [Click here to continue debugging with
Seer.](https://sentry.io/organizations/significant-gravitas/issues/6958135748/?seerDrawer=true)
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Test Plan:
- [x] Created unit tests for the issue that caused the error
- [x] Created unit tests to ensure responses are parsed gracefully
### Changes 🏗️
Enhanced SQL query security in the store search functionality by
implementing proper parameterization to prevent SQL injection
vulnerabilities.
**Security Improvements:**
- Replaced string interpolation with PostgreSQL positional parameters
(`$1`, `$2`, etc.) for all user inputs
- Added ORDER BY whitelist validation to prevent injection via
`sorted_by` parameter
- Parameterized search term, creators array, category, and pagination
values
- Fixed variable naming conflict (`sql_where_clause` vs `where_clause`)
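A sketch of the pattern (table and column names and the `prisma.query_raw` wiring are illustrative; the real query lives in the store search code):
```python
ALLOWED_SORT_COLUMNS = {"rating", "runs", "updated_at"}  # ORDER BY whitelist

async def search_store(prisma, search: str, category: str, sorted_by: str,
                       page: int, page_size: int):
    # The sort column is validated against a whitelist because ORDER BY
    # cannot be parameterized; all user inputs go through positional
    # parameters instead of string interpolation.
    if sorted_by not in ALLOWED_SORT_COLUMNS:
        raise ValueError(f"Invalid sort column: {sorted_by!r}")
    return await prisma.query_raw(
        f"""
        SELECT * FROM "StoreAgent"
        WHERE search_vector @@ plainto_tsquery($1) AND $2 = ANY(categories)
        ORDER BY {sorted_by} DESC
        LIMIT $3 OFFSET $4
        """,
        search, category, page_size, (page - 1) * page_size,
    )
```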
**Testing:**
- Added 4 comprehensive tests validating SQL injection prevention across
different attack vectors
- Tests verify that malicious input in search queries, filters, sorting,
and categories is safely handled
- All 10 tests in db_test.py pass successfully
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] All existing tests pass (10/10 tests passing)
- [x] New security tests validate SQL injection prevention
- [x] Verified parameterized queries handle malicious input safely
- [x] Code formatting passes (`poetry run format`)
#### For configuration changes:
- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
*Note: No configuration changes required for this security fix*
## Summary
Fix critical issue where pre-execution permission validation broke
execution of graphs that reference older versions of sub-graphs.
## Problem
The `validate_graph_execution_permissions` function was checking for the
specific version of a graph in the user's library. This caused failures
when:
1. A parent graph references an older version of a sub-graph
2. The user updates the sub-graph to a newer version
3. The older version is no longer in their library
4. Execution of the parent graph fails with `GraphNotInLibraryError`
## Root Cause
In `backend/executor/utils.py` line 523, the function was checking for
the exact version, but sub-graphs legitimately reference older versions
that may no longer be in the library.
## Solution
### 1. Remove Version-Specific Check (backend/executor/utils.py)
- Remove `graph_version=graph.version` parameter from validation call
- Add explanatory comment about version-agnostic behavior
- Now only checks that the graph ID exists in user's library (any
version)
### 2. Enhance Documentation (backend/data/graph.py)
- Update function docstring to explain version-agnostic behavior
- Document that `None` (now default) allows execution of any version
- Clarify this is important for sub-graph version compatibility
## Technical Details
The `validate_graph_execution_permissions` function was already designed
to handle version-agnostic checks when `graph_version=None`. By omitting
the version parameter, we skip the version check and only verify:
- Graph exists in user's library
- Graph is not deleted/archived
- User has execution permissions
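Sketched, the call-site change looks roughly like this (argument names follow the description; the exact signature may differ):
```python
# Before: version-specific check that broke on stale sub-graph references
await validate_graph_execution_permissions(
    user_id=user_id,
    graph_id=graph.id,
    graph_version=graph.version,  # raised GraphNotInLibraryError for old versions
)

# After: omit graph_version (defaults to None) -> version-agnostic; only
# library membership, deleted/archived status, and permissions are checked.
await validate_graph_execution_permissions(
    user_id=user_id,
    graph_id=graph.id,
)
```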
## Impact
- ✅ Parent graphs can execute even when they reference older sub-graph
versions
- ✅ Sub-graph updates don't break existing parent graphs
- ✅ Maintains security: still checks library membership and permissions
- ✅ No breaking changes: version-specific validation still available
when needed
## Example Scenario Fixed
1. User creates parent graph that uses sub-graph v1
2. User updates sub-graph to v2 (v1 removed from library)
3. Parent graph still references sub-graph v1
4. **Before**: Execution fails with `GraphNotInLibraryError`
5. **After**: Execution succeeds (version-agnostic permission check)
## Testing
- [x] Code formatting and linting passes
- [x] Type checking passes
- [x] No breaking changes to existing functionality
- [x] Security still maintained through library membership checks
## Files Changed
- `backend/executor/utils.py`: Remove version-specific permission check
- `backend/data/graph.py`: Enhanced documentation for version-agnostic
behavior
Closes #[issue-number-if-applicable]
Co-authored-by: Claude <noreply@anthropic.com>
## Changes 🏗️
The `<Wallet />` was being rendered twice ( one instance hidden with CSS
`hidden` ) because of the Navbar layout, which caused logic issues within
the wallet. I changed it to render conditionally via a JavaScript
instance, which is better practice than using `hidden`, especially for
components with actual logic.
I also moved the component files closer to where they are used ( in the
navbar ).
I have a Cursor plugin that removes unused imports but annoyingly
re-organizes them, hence the changes around that...
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Login
- [x] There is only 1 Wallet in the DOM
📨 Fix: Handle Oversized Notification Emails
## Summary
This PR adds logic to detect and handle oversized notification emails
exceeding Postmark's 5 MB limit. Instead of retrying indefinitely, the
system now sends a lightweight summary email with key stats and a
dashboard link.
## Changes 🏗️
- Added size check in `EmailSender.send_templated()`
- Sends summary email when payload > ~4.5 MB
- Prevents infinite retries and queue clogging
- Added logs for oversized detection
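A rough sketch of the size guard (rendering/sending helper names are hypothetical):
```python
import logging

logger = logging.getLogger(__name__)
MAX_EMAIL_BYTES = int(4.5 * 1024 * 1024)  # safety margin under Postmark's 5 MB cap

def send_templated(self, user, notification) -> None:
    body = self._render_template(notification)  # hypothetical helper
    if len(body.encode("utf-8")) > MAX_EMAIL_BYTES:
        logger.warning("Oversized notification email for user %s; sending summary", user.id)
        # Lightweight summary with key stats and a dashboard link,
        # instead of retrying the oversized payload forever.
        self._send_summary_email(user, notification)
        return
    self._send(user, body)
```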
Fixes #11119
---------
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
## Changes 🏗️
A couple of improvements on **Onboarding Step 5**:
- Show a spinner while the page is loading ( better contrast / context
than a skeleton in this case )
- Don't disable the run button if credentials fail to load
- while disabling is good/expected behavior, keeping the button enabled
will help us debug the production issue where credentials fail to load
silently, since running the agent will throw an error we can see
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Create a new account/signup
- [x] On Onboarding Step 5 test the above
- Resolves #11251
This fixes all the warnings mentioned in #11251, reducing noise and
making our logs and error alerts more useful :)
### Changes 🏗️
- Remove "Block {block_name} has multiple credential inputs" warning
(not actually an issue)
- Rename `json` attribute of `MainCodeExecutionResult` to `json_data`;
retain serialized name through a field alias (see the sketch after this list)
- Replace `Path(regex=...)` with `Path(pattern=...)` in
`get_shared_execution` endpoint parameter config
- Change Uvicorn's WebSocket module to new Sans-I/O implementation for
WS server
- Disable Uvicorn's WebSocket module for REST server
- Remove deprecated `enable_cleanup_closed=True` argument in
`CloudStorageHandler` implementation
- Replace Prisma transaction timeout `int` argument with a `timedelta`
value
- Update Sentry SDK to latest version (v2.42.1)
- Broaden filter for cleanup warnings from indirect dependency `litellm`
- Fix handling of `MissingConfigError` in REST server endpoints
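For the `json_data` rename, the alias approach looks roughly like this (a Pydantic v2 sketch; the field type and defaults are illustrative):
```python
from pydantic import BaseModel, Field

class MainCodeExecutionResult(BaseModel):
    # Renamed from `json` (which clashes with BaseModel internals);
    # the alias keeps "json" as the serialized field name.
    json_data: dict | None = Field(default=None, alias="json")

    model_config = {"populate_by_name": True}

result = MainCodeExecutionResult.model_validate({"json": {"ok": True}})
assert result.model_dump(by_alias=True)["json"] == {"ok": True}
```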
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- Check that the warnings are actually gone
- [x] Deploy to dev environment and run a graph; check for any warnings
- Test WebSocket server
- [x] Run an agent in the Builder; make sure real-time execution updates
still work
### Changes 🏗️
### Before
<img width="800" height="649" alt="Screenshot_2025-10-23_at_00 44 59"
src="https://github.com/user-attachments/assets/fd717d39-772a-4331-bc54-4db15a9a3107"
/>
### After
<img width="800" height="555" alt="Screenshot 2025-10-27 at 23 19 10"
src="https://github.com/user-attachments/assets/64878bd0-3a96-4b3a-8344-1a88c89de52e"
/>
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Try to signup with a non-approved email
- [x] You see the modal with an updated copy
## Changes 🏗️
The mobile 📱 experience is still a mess but this helps a little.
### Before
<img width="350" height="395" alt="Screenshot 2025-10-24 at 18 26 18"
src="https://github.com/user-attachments/assets/75eab232-8c37-41e7-a51d-dbe07db336a0"
/>
### After
<img width="350" height="406" alt="Screenshot 2025-10-24 at 18 25 54"
src="https://github.com/user-attachments/assets/ecbd8bbd-8a94-4775-b990-c8b51de48cf9"
/>
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Load the app
- [x] Check the Tally popup button copy
- [x] The button still works
### Changes 🏗️
- Modifies the LLM block to extract the actual response from the
dictionary returned by the LLM, instead of yielding the entire
dictionary. This addresses
[AUTOGPT-SERVER-6EY](https://sentry.io/organizations/significant-gravitas/issues/6950850822/).
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] After applying the fix, I ran the agent that triggered the Sentry
error and confirmed that it now completes successfully without errors.
---------
Co-authored-by: seer-by-sentry[bot] <157164994+seer-by-sentry[bot]@users.noreply.github.com>
Co-authored-by: Swifty <craigswift13@gmail.com>
Fixes
[AUTOGPT-SERVER-6KZ](https://sentry.io/organizations/significant-gravitas/issues/6976926125/).
The issue was that redirect handling stripped the URL scheme, causing
subsequent requests to fail validation and hit a 404.
- Ensures the hostname in the URL is properly IDNA-encoded after
validation.
- Reconstructs the netloc with the encoded hostname and preserves the
port if it exists.
This fix was generated by Seer in Sentry, triggered by Craig Swift. 👁️
Run ID: 2204774
Not quite right? [Click here to continue debugging with
Seer.](https://sentry.io/organizations/significant-gravitas/issues/6976926125/?seerDrawer=true)
### Changes 🏗️
**backend/util/request.py:**
- Fixed URL validation to properly preserve port numbers when
reconstructing netloc
- Ensures IDNA-encoded hostname is combined with port (if present)
before URL reconstruction
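A sketch of the netloc reconstruction (the function name is illustrative; the real code is in `backend/util/request.py`):
```python
from urllib.parse import urlparse, urlunparse

def reencode_netloc(url: str) -> str:
    # IDNA-encode the hostname after validation, and re-attach the port
    # (if present) so it isn't dropped when the URL is rebuilt.
    parsed = urlparse(url)
    hostname = (parsed.hostname or "").encode("idna").decode()
    netloc = f"{hostname}:{parsed.port}" if parsed.port else hostname
    return urlunparse(parsed._replace(netloc=netloc))

assert reencode_netloc("https://www.target.com:8443/a") == "https://www.target.com:8443/a"
```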
**Test Results:**
- ✅ Tested request to https://www.target.com/ (original failing URL from
Sentry issue)
- ✅ Status: 200, Content retrieved successfully (339,846 bytes)
- ✅ Port preservation verified for URLs with explicit ports
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Tested request to https://www.target.com/ (original failing URL)
- [x] Verified status code 200 and successful content retrieval
- [x] Verified port preservation in URL validation
#### For configuration changes:
- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
Co-authored-by: seer-by-sentry[bot] <157164994+seer-by-sentry[bot]@users.noreply.github.com>
Co-authored-by: Swifty <craigswift13@gmail.com>
- Resolves #10980
- 2nd attempt after #11075 broke some things
Fixes unnecessary graph re-saving when no changes were made after
initial save. More specifically, this PR fixes two causes of this issue:
- Frontend node IDs were being compared to backend IDs, which won't
match if the graph has been modified and saved since loading.
- `fillDefaults` was being applied to all nodes (including existing
ones) on element creation, and empty values were being stripped
*post-save* with `removeEmptyStringsAndNulls`. This invisible
auto-modification of node input data meant that in some common cases the
graph would never be in sync with the backend.
### Changes 🏗️
- Fix node ID handling
- Use `node.data.backend_id ?? node.id` instead of `node.id` in
`prepareSaveableGraph`
- Also map link source/sink IDs to their corresponding backend IDs
- Add note about `node.data.backend_id` to `_saveAgent`
- Use `node.data.backend_id || node.id` as display ID in `CustomNode`
- Prevent auto-modification of node input data on existing nodes
- Prune empty values (`undefined`, `null`, `""`) from node input data
*pre-save* instead of post-save
- Related: improve typing and functionality of
`fillObjectDefaultsFromSchema` (moved and renamed from `fillDefaults`)
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Node display ID updates on save
- [x] Clicking save a second time (without making more changes) doesn't
cause re-save
- [x] Updating nodes with dynamic input links (e.g. Create Dictionary
Block) doesn't make the links disappear
Removes CLAUDE_3_5_SONNET and CLAUDE_3_5_HAIKU from LlmModel enum, model
metadata, and cost configuration since they are deprecated
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Verify the models are gone from the llm blocks
## Changes 🏗️
We weren't fetching all library agents (just the first 15) to compute
the agent map on the Agent Activity dropdown. We suspect that is causing
some agent executions to show up as `Unknown agent`.
In these changes, I'm fetching all the library agents upfront ( _without
blocking page load_ ) and caching them in the browser, so we have all
the details to render the agent runs. This is re-used in the library as
well for a fast initial load on the agents list page.
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] First request populates cache; subsequent identical requests hit
cache
- [x] Editing an agent invalidates relevant cache keys and serves fresh
data
- [x] Different query params generate distinct cache entries
- [x] Cache layer gracefully falls back to live data on errors
- [x] 404 behavior for unknown agents unchanged
### For configuration changes:
None
## Changes 🏗️
The onboarding `Run` button is sometimes disabled when an agent
requiring credentials is selected. We think this is because the
credentials are loaded _async_ by a sub-component ( `<CredentialsInputs />` ),
and there wasn't a way for the parent component to know whether they had
loaded or not.
- Refactored **Step 5** of onboarding to adhere to our code conventions
- split concerns and colocated state
- used generated API hooks
- the UI will only render once API calls succeed
- Created a system where `<CredentialsInputs />` notifies the parent
component when it loads
- Made minor adjustments here and there
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] I will know once I find an agent with credentials that I can
run....
Categories and creators were not sanitized in the full-text search.
### Changes 🏗️
- Apply sanitization to categories and creators
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] run tests to check it still works
We're currently seeing errors in the `DatabaseManager` while it's
shutting down, like:
```
WARNING [DatabaseManager] Termination request: SystemExit; 0 executing cleanup.
INFO [DatabaseManager] ⏳ Disconnecting Database...
INFO [PID-1|THREAD-29|DatabaseManager|Prisma-82fb1994-4b87-40c1-8869-fbd97bd33fc8] Releasing connection started...
INFO [PID-1|THREAD-29|DatabaseManager|Prisma-82fb1994-4b87-40c1-8869-fbd97bd33fc8] Releasing connection completed successfully.
INFO [DatabaseManager] Terminated.
ERROR POST /create_or_add_to_user_notification_batch failed: Failed to create or add to notification batch for user {user_id} and type AGENT_RUN: NoneType: None
```
This indicates two issues:
- The service doesn't wait for pending RPC calls to finish before
terminating
- We're using `logger.exception` outside an error-handling context,
causing the confusing and not very useful `NoneType: None` to be printed
instead of error info
### Changes 🏗️
- Implement graceful shutdown in `AppService` so in-flight RPC calls can
finish
- Add tests for graceful shutdown
- Prevent `AppService` from accepting new requests during shutdown
- Rework `AppService` lifecycle management; add support for async
`lifespan`
- Fix `AppService` endpoint error logging
- Improve logging in `AppProcess` and `AppService`
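A condensed sketch of the graceful-shutdown pattern (the real `AppService` lifecycle code is more involved):
```python
import asyncio

class AppServiceSketch:
    def __init__(self) -> None:
        self._shutting_down = False
        self._in_flight: set[asyncio.Task] = set()

    async def handle_rpc(self, handler, *args):
        if self._shutting_down:
            # Refuse new work so shutdown can converge.
            raise RuntimeError("Service is shutting down")
        task = asyncio.ensure_future(handler(*args))
        self._in_flight.add(task)
        task.add_done_callback(self._in_flight.discard)
        return await task

    async def shutdown(self) -> None:
        self._shutting_down = True
        # Let in-flight RPC calls finish before disconnecting resources.
        if self._in_flight:
            await asyncio.gather(*self._in_flight, return_exceptions=True)
```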
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- Deploy to Dev cluster, then `kubectl rollout restart` the different
services a few times
- [x] -> `DatabaseManager` doesn't break on re-deployment
- [x] -> `Scheduler` doesn't break on re-deployment
- [x] -> `NotificationManager` doesn't break on re-deployment
This PR implements real-time agent execution functionality in the new
flow editor, enabling users to run, monitor, and view results of their
agent workflows directly within the builder interface.
https://github.com/user-attachments/assets/8a730e08-f88d-49d4-be31-980e2c7a2f83
#### Key Features Added:
##### 1. **Agent Execution Controls**
- Added "Run Agent" / "Stop Agent" button with gradient styling in the
builder interface
- Implemented execution state management through a new `graphStore` for
tracking running status
- Save graph automatically before execution to ensure latest changes are
persisted
##### 2. **Real-time Execution Monitoring**
- Implemented WebSocket-based real-time updates for node execution
status via `useFlowRealtime` hook
- Subscribe to graph execution events and node execution events for live
status tracking
- Visual execution status badges on nodes showing states: `QUEUED`,
`RUNNING`, `COMPLETED`, `FAILED`, etc.
- Animated gradient border effect when agent is actively running
##### 3. **Node Execution Results Display**
- New `NodeDataRenderer` component to display input/output data for each
executed node
- Collapsible result sections with formatted JSON display
- Prepared UI for future functionality: copy, info, and expand actions
for node data
#### Technical Implementation:
- **State Management**: Extended `nodeStore` with execution status and
result tracking methods
- **WebSocket Integration**: Real-time communication for execution
updates without polling
- **Component Architecture**: Modular components for execution controls,
status display, and result rendering
- **Visual Feedback**: Color-coded status badges and animated borders
for clear execution state indication
#### TODO Items for Future PRs:
- Complete implementation of node result action buttons (copy, info,
expand)
- Add agent output display component
- Implement schedule run functionality
- Handle credential and input parameters for graph execution
- Add tooltips for better UX
### Checklist
- [x] Create a new agent with at least 3 blocks and verify execution
starts correctly
- [x] Verify real-time status updates appear on nodes during execution
- [x] Confirm execution results display in the node output sections
- [x] Verify the animated border appears when agent is running
- [x] Check that node status badges show correct states (QUEUED,
RUNNING, COMPLETED, etc.)
- [x] Test WebSocket reconnection after connection loss
- [x] Verify graph is saved before execution begins
Updated block costs in `backend/backend/data/block_cost_config.py`:
- **AIShortformVideoCreatorBlock**: Updated from 50 credits to 307
- **AIAdMakerVideoCreatorBlock**: Added cost of 714 credits
- **AIScreenshotToVideoAdBlock**: Added cost of 612 credits
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Verify AIShortformVideoCreatorBlock costs 307 credits when
executed
- [x] Verify AIAdMakerVideoCreatorBlock costs 714 credits when executed
- [x] Verify AIScreenshotToVideoAdBlock costs 612 credits when executed
Improves the "not on waitlist" error display based on feedback.
- Follow-up to #11198
- Follow-up to #11196
### Changes 🏗️
- Use standard `ErrorCard`
- Improve text strings
- Merge `isWaitlistError` and `isWaitlistErrorFromParams`
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] We need to test in dev because we don't have a waitlist locally,
and will revert if it doesn't work
- deploy to dev environment and sign up with a non-approved account and
see if the error appears
Part of our effort to eliminate preventable warnings and errors.
- Resolves #11237
### Changes 🏗️
- Exclude `undefined` query params in API requests
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- Open the Builder without a `flowVersion` URL parameter
- [x] -> `GET /api/library/agents/by-graph/{graph_id}` succeeds
- Open the builder with a `flowVersion` URL parameter
- [x] -> version is correctly included in request URL parameters
## Changes 🏗️
- Add [custom events](https://datafa.st/docs/custom-goals) in
**Datafa.st** to track the user journey around core actions
- track `add_to_library`
- track `download_agent`
- track `run_agent`
- track `schedule_agent`
- Refactor the analytics service to encapsulate both **GA** and
**Datafa.st**
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Analytics load correctly locally
- [x] Events fire in production
### For configuration changes:
Once deployed to production we need to verify we are receiving analytics
and custom events in [Datafa.st](https://datafa.st/)
Potential fix for
[https://github.com/Significant-Gravitas/AutoGPT/security/code-scanning/145](https://github.com/Significant-Gravitas/AutoGPT/security/code-scanning/145)
To fix the issue, rather than using substring matching on the raw URL
string, we need to properly parse the URL and inspect its hostname. We
should confirm that the `hostname` property of the parsed URL is equal
to either `vimeo.com` or explicitly permitted subdomains like
`www.vimeo.com`. We can use the native JavaScript `URL` class for robust
parsing.
**File/Location:**
- Only change line(s) in
`autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/AgentRunsView/components/OutputRenderers/renderers/MarkdownRenderer.tsx`
- Specifically, update the logic in function `isVideoUrl()` on line 45.
**Methods/Imports/Definitions:**
- Use the standard `URL` class (no need to add a new import, as this is
available in browsers and in Node.js).
- Provide fallback in case the URL passed in is malformed (wrap in a
try-catch, treat as non-video in this case).
- Check the parsed hostname for equality with `vimeo.com` or,
optionally, specific allowed subdomains (`www.vimeo.com`).
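For illustration, the same logic in Python (the actual change is TypeScript in `MarkdownRenderer.tsx`, using the native `URL` class as described above):
```python
from urllib.parse import urlparse

ALLOWED_VIMEO_HOSTS = {"vimeo.com", "www.vimeo.com"}

def is_vimeo_url(url: str) -> bool:
    # Parse and compare the hostname exactly, instead of substring-matching
    # the raw URL (which "https://evil.com/vimeo.com" would defeat).
    try:
        hostname = urlparse(url).hostname
    except ValueError:
        return False  # malformed URL: treat as non-video
    return hostname in ALLOWED_VIMEO_HOSTS

assert is_vimeo_url("https://vimeo.com/12345")
assert not is_vimeo_url("https://evil.com/vimeo.com")
```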
---
_Suggested fixes powered by Copilot Autofix. Review carefully before
merging._
---------
Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
Debug and info level messages are currently ending up in Sentry,
polluting our issue feed.
### Changes 🏗️
- Limit Sentry console capture to warnings and worse
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- Trivial change, no test needed
<!-- Clearly explain the need for these changes: -->
This PR converts Jinja2 TemplateError exceptions to ValueError in the
TextFormatter class to ensure proper error handling and HTTP status code
responses (400 instead of 500).
### Changes 🏗️
<!-- Concisely describe all of the changes made in this pull request:
-->
- Added import for `jinja2.exceptions.TemplateError` in
`backend/util/text.py:6`
- Wrapped template rendering in try-catch block in `format_string`
method (`backend/util/text.py:105-109`)
- Convert `TemplateError` to `ValueError` to ensure proper 400 HTTP
status code for client errors
- Added warning logging for template rendering errors before re-raising
as ValueError
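A sketch of the wrapper (the real `TextFormatter` configures its own Jinja environment):
```python
import logging
from jinja2 import Environment
from jinja2.exceptions import TemplateError

logger = logging.getLogger(__name__)

def format_string(template: str, **values) -> str:
    try:
        return Environment().from_string(template).render(**values)
    except TemplateError as e:
        # Surface template problems as client errors (HTTP 400, not 500),
        # preserving the original exception chain.
        logger.warning(f"Template rendering failed: {e}")
        raise ValueError(f"Template rendering failed: {e}") from e
```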
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan: -->
- [x] Verified that invalid Jinja2 templates now raise ValueError
instead of TemplateError
- [x] Confirmed that valid templates continue to work correctly
- [x] Checked that warning logs are generated for template errors
- [x] Validated that the exception chain is preserved with `from e`
#### For configuration changes:
- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
- Resolves #11226
### Changes 🏗️
- Drop use of `CloudLoggingHandler` which docs state isn't for use in
GKE
- For cloud logging, output only structured log entries to `stdout`
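Conceptually (a sketch, not the actual implementation): each log record is emitted as one JSON line on `stdout`, which GKE's logging agent ingests as a structured entry:
```python
import json
import logging
import sys

class StructuredFormatter(logging.Formatter):
    def format(self, record: logging.LogRecord) -> str:
        # "severity" and "message" are fields Cloud Logging recognizes.
        return json.dumps({
            "severity": record.levelname,
            "message": record.getMessage(),
            "logger": record.name,
        })

handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(StructuredFormatter())
logging.getLogger().addHandler(handler)
```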
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Test deploy to dev and check logs
Reverts the change that made provider blocks store in the DB.
### Changes 🏗️
- Revert change
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] I have reverted the merge
## Summary
- Fixes database connection warnings in executor logs: "Client is not
connected to the query engine, you must call `connect()` before
attempting to query data"
- Implements resilient database client pattern already used elsewhere in
the codebase
- Adds caching to reduce database load for user context lookups
## Changes
- Updated `get_user_context()` to check `prisma.is_connected()` and fall
back to database manager client
- Added `@cached(maxsize=1000, ttl_seconds=3600)` decorator for
performance optimization
- Updated database manager to expose `get_user_by_id` method
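Roughly (helper and type names are illustrative, not the actual code):
```python
@cached(maxsize=1000, ttl_seconds=3600)  # decorator per the description
async def get_user_context(user_id: str):
    # Resilient-client pattern: use the local Prisma client when connected,
    # otherwise fall back to the DatabaseManager's client over RPC.
    if prisma.is_connected():
        user = await prisma.user.find_unique(where={"id": user_id})
    else:
        user = await db_manager.get_user_by_id(user_id)  # newly exposed method
    return UserContext(timezone=user.timezone)
```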
## Test plan
- [x] Verify executor pods no longer show Prisma connection warnings
- [x] Confirm user timezone is still correctly retrieved
- [x] Test fallback behavior when Prisma is disconnected
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-authored-by: Claude <noreply@anthropic.com>
## Changes 🏗️
Standardize all the runtime environment checks on the Front-end and
associated conditions to run against a single environment service where
all the environment config is centralized and hence easier to manage.
This helps prevent typos and bugs when manually asserting against
environment variables ( which are typed as `string` ); the helper
functions are easier to read and re-use across the codebase.
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Run the app and click around
- [x] Everything is smooth
- [x] Test on the CI and types are green
### For configuration changes:
None 🙏🏽
## Changes 🏗️
Document how to contribute on the Front-end so it is easier for
non-regular contributors.
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Contribution guidelines make sense and look good considering the
AutoGPT stack
### For configuration changes:
None
We currently try to re-init the LaunchDarkly client every time a feature flag is checked.
This adds 5 seconds of extra latency to each flag check when LD is down, as it is now.
Since flag checks are performed on every block execution, this currently cripples the platform's executors.
- Follow-up to #11221
### Changes 🏗️
- Only try to init LaunchDarkly once
- Improve surrounding log statements in the `feature_flag` module
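The guard is roughly this (module layout and setting names are illustrative):
```python
import logging
import ldclient
from ldclient.config import Config

logger = logging.getLogger(__name__)
_init_attempted = False

def get_ld_client():
    global _init_attempted
    if not _init_attempted:
        # Flip the flag first so a failed init is never retried on every
        # flag check (which added ~5s of latency while LD was down).
        _init_attempted = True
        try:
            ldclient.set_config(Config(sdk_key=LAUNCH_DARKLY_SDK_KEY))  # assumed setting
        except Exception:
            logger.exception("LaunchDarkly client initialization failed")
    return ldclient.get()
```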
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- This is a critical hotfix; we'll see its effect once deployed
LaunchDarkly is currently down and it's keeping our executor pods from
spinning up.
### Changes 🏗️
- Wrap `LaunchDarklyIntegration` init in a try/except
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- We'll see if it works once it deploys
## Problem
The YouTube transcription block would fail when attempting to transcribe
videos that only had transcripts available in non-English languages.
Even when usable transcripts existed in other languages, the block would
raise a `NoTranscriptFound` error because it only requested English
transcripts.
**Example video that would fail:**
https://www.youtube.com/watch?v=3AMl5d2NKpQ (only has Hungarian
transcripts)
**Error message:**
```
Could not retrieve a transcript for the video https://www.youtube.com/watch?v=3AMl5d2NKpQ!
No transcripts were found for any of the requested language codes: ('en',)
For this video (3AMl5d2NKpQ) transcripts are available in the following languages:
(GENERATED) - hu ("Hungarian (auto-generated)")
```
## Solution
Implemented intelligent language fallback in the
`TranscribeYoutubeVideoBlock.get_transcript()` method:
1. **First**, tries to fetch English transcript (maintains backward
compatibility)
2. **If English unavailable**, lists all available transcripts and
selects the first one using this priority:
- Manually created transcripts (any language)
- Auto-generated transcripts (any language)
3. **Only fails** if no transcripts exist at all
**Example behavior:**
```python
# Before: Video with only Hungarian transcript
get_transcript("3AMl5d2NKpQ") # ❌ Raises NoTranscriptFound
# After: Video with only Hungarian transcript
get_transcript("3AMl5d2NKpQ") # ✅ Returns Hungarian transcript
```
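A sketch of the fallback itself (using the `youtube-transcript-api` list interface; the block's actual code may differ slightly):
```python
from youtube_transcript_api import NoTranscriptFound, YouTubeTranscriptApi

def get_transcript(video_id: str):
    transcript_list = YouTubeTranscriptApi.list_transcripts(video_id)
    try:
        # 1. Prefer English for backward compatibility.
        return transcript_list.find_transcript(["en"]).fetch()
    except NoTranscriptFound:
        # 2. Fall back to any language: manually created first, then generated.
        transcripts = sorted(transcript_list, key=lambda t: t.is_generated)
        if not transcripts:
            raise  # 3. no transcripts exist at all
        return transcripts[0].fetch()
```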
## Changes
- **Modified** `backend/blocks/youtube.py`: Added try-catch logic to
fallback to any available language when English is not found
- **Added** `test/blocks/test_youtube.py`: Comprehensive test suite
covering URL extraction, language fallback, transcript preferences, and
error handling (7 tests)
- **Updated** `docs/content/platform/blocks/youtube.md`: Documented the
language fallback behavior and transcript priority order
## Testing
- ✅ All 7 new unit tests pass
- ✅ Block integration test passes
- ✅ Full test suite: 621 passed, 0 failed (no regressions)
- ✅ Code formatting and linting pass
## Impact
This fix enables the YouTube transcription block to work with
international content while maintaining full backward compatibility:
- ✅ Videos in any language can now be transcribed
- ✅ English is still preferred when available
- ✅ No breaking changes to existing functionality
- ✅ Graceful degradation to available languages
Fixes #10637
Fixes https://linear.app/autogpt/issue/OPEN-2626
---------
Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: ntindle <8845353+ntindle@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
<!-- Clearly explain the need for these changes: -->
### Need 💡
This PR addresses Linear issue SECRT-1665, which mandates an update to
Linear's OAuth2 implementation. Linear is transitioning from long-lived
access tokens to short-lived access tokens with refresh tokens, with a
deadline of April 1, 2026. This change is crucial to ensure continued
integration with Linear and to support their new token management
system, including a migration path for existing long-lived tokens.
### Changes 🏗️
- **`autogpt_platform/backend/backend/blocks/linear/_oauth.py`**:
- Implemented full support for refresh tokens, including HTTP Basic
Authentication for token refresh requests (sketched after this list).
- Added `migrate_old_token()` method to exchange old long-lived access
tokens for new short-lived tokens with refresh tokens using Linear's
`/oauth/migrate_old_token` endpoint.
- Enhanced `get_access_token()` to automatically detect and attempt
migration for old tokens, and to refresh short-lived tokens when they
expire.
- Improved error handling and token expiration management.
- Updated `_request_tokens` to handle both authorization code and
refresh token flows, supporting Linear's recommended authentication
methods.
- **`autogpt_platform/backend/backend/blocks/linear/_config.py`**:
- Updated `TEST_CREDENTIALS_OAUTH` mock data to include realistic
`access_token_expires_at` and `refresh_token` for testing the new token
lifecycle.
- **`LINEAR_OAUTH_IMPLEMENTATION.md`**:
- Added documentation detailing the new Linear OAuth refresh token
implementation, including technical details, migration strategy, and
testing notes.
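The refresh request described above uses HTTP Basic Authentication, roughly like this (the endpoint URL, `requests` usage, and function shape are assumptions for illustration, not the actual code):
```python
import base64

import requests

def refresh_linear_tokens(client_id: str, client_secret: str, refresh_token: str) -> dict:
    # HTTP Basic auth: base64("client_id:client_secret") in the Authorization header.
    basic = base64.b64encode(f"{client_id}:{client_secret}".encode()).decode()
    response = requests.post(
        "https://api.linear.app/oauth/token",  # assumed endpoint
        headers={"Authorization": f"Basic {basic}"},
        data={"grant_type": "refresh_token", "refresh_token": refresh_token},
    )
    response.raise_for_status()
    return response.json()  # access_token, expires_in, refresh_token, ...
```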
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Verified OAuth URL generation and parameter encoding.
- [x] Confirmed HTTP Basic Authentication header creation for refresh
requests.
- [x] Tested token expiration logic with a 5-minute buffer.
- [x] Validated migration detection for old vs. new token types.
- [x] Checked code syntax and import compatibility.
#### For configuration changes:
- [ ] `.env.default` is updated or already compatible with my changes
- [ ] `docker-compose.yml` is updated or already compatible with my
changes
- [ ] I have included a list of my configuration changes in the PR
description (under **Changes**)
---
Linear Issue: [SECRT-1665](https://linear.app/autogpt/issue/SECRT-1665)
---------
Co-authored-by: Cursor Agent <cursoragent@cursor.com>
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <ntindle@users.noreply.github.com>
Co-authored-by: Bentlybro <Github@bentlybro.com>
## Changes 🏗️
Following https://datafa.st/docs/nextjs-app-router
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] We will see once we make a production deployment and get data into
the platform
### For configuration changes:
None