Commit Graph

7640 Commits

Author SHA1 Message Date
Bently
a6a2f71458 Merge commit from fork
* Replace urllib with Requests in RSS block to prevent SSRF

* Format
2025-10-22 14:18:34 +01:00
Bently
788b861bb7 Merge commit from fork 2025-10-22 14:17:26 +01:00
Ubbe
e203e65dc4 feat(frontend): setup datafast custom events (#11231)
## Changes 🏗️

- Add [custom events](https://datafa.st/docs/custom-goals) in
**Datafa.st** to track the user journey around core actions
  - track `add_to_library`
  - track `download_agent`
  - track `run_agent`
  - track `schedule_agent` 
- Refactor the analytics service to encapsulate both **GA** and
**Datafa.st**

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Analytics load correctly locally
  - [x] Events fire in production
 
### For configuration changes:

Once deployed to production we need to verify we are receiving analytics
and custom events in [Datafa.st](https://datafa.st/)
2025-10-22 16:56:30 +04:00
Ubbe
bd03697ff2 fix(frontend): URL substring sanitization issue (#11232)
Potential fix for
[https://github.com/Significant-Gravitas/AutoGPT/security/code-scanning/145](https://github.com/Significant-Gravitas/AutoGPT/security/code-scanning/145)

To fix the issue, rather than using substring matching on the raw URL
string, we need to properly parse the URL and inspect its hostname. We
should confirm that the `hostname` property of the parsed URL is equal
to either `vimeo.com` or explicitly permitted subdomains like
`www.vimeo.com`. We can use the native JavaScript `URL` class for robust
parsing.
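
The fix itself is TypeScript, but the technique is language-agnostic; a minimal sketch of the same hostname check (shown here in Python, with the allowed hosts taken from the description above) looks like:

```python
# Illustrative sketch only — the real change is in MarkdownRenderer.tsx (isVideoUrl).
from urllib.parse import urlparse

ALLOWED_VIMEO_HOSTS = {"vimeo.com", "www.vimeo.com"}


def is_vimeo_url(url: str) -> bool:
    try:
        hostname = (urlparse(url).hostname or "").lower()
    except ValueError:
        return False  # malformed URL -> treat as non-video
    return hostname in ALLOWED_VIMEO_HOSTS  # exact hostname match, never substring
```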

**File/Location:**  
- Only change line(s) in
`autogpt_platform/frontend/src/app/(platform)/library/agents/[id]/components/AgentRunsView/components/OutputRenderers/renderers/MarkdownRenderer.tsx`
- Specifically, update the logic in function `isVideoUrl()` on line 45.

**Methods/Imports/Definitions:**  
- Use the standard `URL` class (no need to add a new import, as this is
available in browsers and in Node.js).
- Provide fallback in case the URL passed in is malformed (wrap in a
try-catch, treat as non-video in this case).
- Check the parsed hostname for equality with `vimeo.com` or,
optionally, specific allowed subdomains (`www.vimeo.com`).

---


_Suggested fixes powered by Copilot Autofix. Review carefully before
merging._

---------

Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
2025-10-22 16:56:12 +04:00
Reinier van der Leer
efd37b7a36 fix(frontend): Limit Sentry console capture to warnings and errors (#11223)
Debug and info level messages are currently ending up in Sentry,
polluting our issue feed.

### Changes 🏗️

- Limit Sentry console capture to warnings and worse

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - Trivial change, no test needed
2025-10-22 09:49:25 +00:00
Zamil Majdy
bb0b45d7f7 fix(backend): Make Jinja Error on TextFormatter as value error (#11236)

This PR converts Jinja2 TemplateError exceptions to ValueError in the
TextFormatter class to ensure proper error handling and HTTP status code
responses (400 instead of 500).
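
A minimal sketch of the conversion (the real code lives in `backend/util/text.py` and may use a sandboxed environment; only the `TemplateError` → `ValueError` handling is the point):

```python
# Hedged sketch, not the exact TextFormatter implementation.
import logging

from jinja2 import Environment
from jinja2.exceptions import TemplateError

logger = logging.getLogger(__name__)
_env = Environment()


def format_string(template: str, **values) -> str:
    try:
        return _env.from_string(template).render(**values)
    except TemplateError as e:
        logger.warning("Template rendering failed: %s", e)
        # ValueError is treated as a client error (400) instead of a server error (500)
        raise ValueError(f"Invalid template: {e}") from e
```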

### Changes 🏗️


- Added import for `jinja2.exceptions.TemplateError` in
`backend/util/text.py:6`
- Wrapped template rendering in try-catch block in `format_string`
method (`backend/util/text.py:105-109`)
- Convert `TemplateError` to `ValueError` to ensure proper 400 HTTP
status code for client errors
- Added warning logging for template rendering errors before re-raising
as ValueError

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Verified that invalid Jinja2 templates now raise ValueError
instead of TemplateError
  - [x] Confirmed that valid templates continue to work correctly
  - [x] Checked that warning logs are generated for template errors
  - [x] Validated that the exception chain is preserved with `from e`

#### For configuration changes:

- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
2025-10-22 09:38:02 +00:00
Reinier van der Leer
04df981115 fix(backend): Fix structured logging for cloud environments (#11227)
- Resolves #11226

### Changes 🏗️

- Drop use of `CloudLoggingHandler`, which the docs state isn't meant for use in GKE
- For cloud logging, output only structured log entries to `stdout` (sketched below)
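
A hedged sketch of what that can look like (the real configuration in the codebase may differ; Cloud Logging ingests JSON lines from `stdout` and reads the `severity` field):

```python
# Minimal structured-logging-to-stdout sketch — not the exact implementation.
import json
import logging
import sys


class JsonFormatter(logging.Formatter):
    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "severity": record.levelname,   # picked up by Cloud Logging as the level
            "message": record.getMessage(),
            "logger": record.name,
        })


handler = logging.StreamHandler(sys.stdout)  # plain stdout, no CloudLoggingHandler
handler.setFormatter(JsonFormatter())
logging.getLogger().addHandler(handler)
```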

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Test deploy to dev and check logs
2025-10-21 12:48:41 +00:00
Swifty
d25997b4f2 Revert "Merge branch 'swiftyos/secrt-1709-store-provider-names-and-en… (#11225)
Reverts the changes that made provider blocks store their names in the DB.

### Changes 🏗️

- revert change

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] I have reverted the merge
2025-10-21 09:12:00 +00:00
Zamil Majdy
11d55f6055 fix(backend/executor): Avoid running direct query in executor (#11224)
## Summary
- Fixes database connection warnings in executor logs: "Client is not
connected to the query engine, you must call `connect()` before
attempting to query data"
- Implements resilient database client pattern already used elsewhere in
the codebase
- Adds caching to reduce database load for user context lookups

## Changes
- Updated `get_user_context()` to check `prisma.is_connected()` and fall back to the database manager client when disconnected (sketched below)
- Added `@cached(maxsize=1000, ttl_seconds=3600)` decorator for
performance optimization
- Updated database manager to expose `get_user_by_id` method
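
A hedged sketch of that fallback pattern (the import paths and the database-manager client accessor below are assumptions, not the exact code):

```python
# Hedged sketch — helper names are assumptions; only the is_connected() check,
# the fallback, and the @cached decorator follow the description above.
from backend.data.db import prisma                                   # assumed
from backend.util.cache import cached                                # assumed
from backend.util.clients import get_database_manager_async_client   # assumed


@cached(maxsize=1000, ttl_seconds=3600)
async def get_user_context(user_id: str) -> dict:
    if prisma.is_connected():
        user = await prisma.user.find_unique(where={"id": user_id})
    else:
        # Executor pods may not hold a live query-engine connection; route the
        # lookup through the database manager service instead of querying directly.
        user = await get_database_manager_async_client().get_user_by_id(user_id)
    return {"timezone": getattr(user, "timezone", None) or "UTC"}
```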

## Test plan
- [x] Verify executor pods no longer show Prisma connection warnings
- [x] Confirm user timezone is still correctly retrieved
- [x] Test fallback behavior when Prisma is disconnected

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-authored-by: Claude <noreply@anthropic.com>
2025-10-21 08:46:40 +00:00
Ubbe
063dc5cf65 refactor(frontend): standardise with environment service (#11209)
## Changes 🏗️

Standardize all runtime environment checks on the Front-end, and the conditions that depend on them, to run against a single environment service where all environment config is centralized and therefore easier to manage.

This helps prevent the typos and bugs that come with manually asserting against environment variables (which are typed as `string`), and the helper functions are easier to read and reuse across the codebase.

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run the app and click around
  - [x] Everything is smooth
  - [x] Test on the CI and types are green  

### For configuration changes:

None 🙏🏽
2025-10-21 08:44:34 +00:00
Ubbe
b7646f3e58 docs(frontend): contributing guidelines (#11210)
## Changes 🏗️

Document how to contribute to the Front-end so it is easier for non-regular contributors.

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Contribution guidelines make sense and look good considering the
AutoGPT stack

### For configuration changes:

None
2025-10-21 08:26:51 +00:00
Ubbe
0befaf0a47 feat(frontend): update tooltip and alert styles (#11212)
## Changes 🏗️

Matching updated changes in AutoGPT design system:

<img width="283" height="156" alt="Screenshot 2025-10-20 at 23 55 15"
src="https://github.com/user-attachments/assets/3a2e0ee7-cd53-4552-b72b-42f4631f1503"
/>
<img width="427" height="92" alt="Screenshot 2025-10-20 at 23 55 25"
src="https://github.com/user-attachments/assets/95344765-2155-4861-abdd-f5ec1497ace2"
/>
<img width="472" height="85" alt="Screenshot 2025-10-20 at 23 55 30"
src="https://github.com/user-attachments/assets/31084b40-0eea-4feb-a627-c5014790c40d"
/>
<img width="370" height="87" alt="Screenshot 2025-10-20 at 23 55 35"
src="https://github.com/user-attachments/assets/a81dba12-a792-4d41-b269-0bc32fc81271"
/>


## Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Check the stories for Tooltip and Alerts, they look good


#### For configuration changes:
None
2025-10-21 08:14:28 +00:00
Reinier van der Leer
93f58dec5e Merge branch 'master' into dev 2025-10-21 08:49:12 +02:00
Reinier van der Leer
3da595f599 fix(backend): Only try to initialize LaunchDarkly once (#11222)
We currently try to re-init the LaunchDarkly client every time a feature flag is checked.
This adds 5 seconds of extra latency to each flag check when LD is down, as it is right now.
Since flag checks are performed on every block execution, this currently cripples the platform's executors.

- Follow-up to #11221

### Changes 🏗️

- Only try to init LaunchDarkly once (sketched below)
- Improve surrounding log statements in the `feature_flag` module
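
A minimal sketch of the init-once guard (module layout and names are assumptions; the point is that a failed init is never retried on later flag checks):

```python
# Hedged sketch — not the actual feature_flag module.
import logging

import ldclient
from ldclient.config import Config

logger = logging.getLogger(__name__)
_client = None
_init_attempted = False


def get_client(sdk_key: str):
    global _client, _init_attempted
    if not _init_attempted:
        _init_attempted = True  # one attempt only, even if it fails
        try:
            ldclient.set_config(Config(sdk_key))
            _client = ldclient.get()  # connects once, with the SDK's own timeout
        except Exception as e:
            logger.warning("LaunchDarkly init failed; using flag defaults: %s", e)
    return _client if _client and _client.is_initialized() else None
```

Callers fall back to each flag's default value whenever `get_client()` returns `None`.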

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - This is a critical hotfix; we'll see its effect once deployed
2025-10-21 08:46:07 +02:00
Reinier van der Leer
e5e60921a3 fix(backend): Handle LaunchDarkly init failure (#11221)
LaunchDarkly is currently down and it's keeping our executor pods from
spinning up.

### Changes 🏗️

- Wrap `LaunchDarklyIntegration` init in a try/except

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - We'll see if it works once it deploys
2025-10-21 07:53:40 +02:00
Copilot
90af8f8e1a feat(backend): Add language fallback for YouTube transcription block (#11057)
## Problem

The YouTube transcription block would fail when attempting to transcribe
videos that only had transcripts available in non-English languages.
Even when usable transcripts existed in other languages, the block would
raise a `NoTranscriptFound` error because it only requested English
transcripts.

**Example video that would fail:**
https://www.youtube.com/watch?v=3AMl5d2NKpQ (only has Hungarian
transcripts)

**Error message:**
```
Could not retrieve a transcript for the video https://www.youtube.com/watch?v=3AMl5d2NKpQ! 
No transcripts were found for any of the requested language codes: ('en',)

For this video (3AMl5d2NKpQ) transcripts are available in the following languages:
(GENERATED) - hu ("Hungarian (auto-generated)")
```

## Solution

Implemented intelligent language fallback in the
`TranscribeYoutubeVideoBlock.get_transcript()` method:

1. **First**, tries to fetch English transcript (maintains backward
compatibility)
2. **If English unavailable**, lists all available transcripts and
selects the first one using this priority:
   - Manually created transcripts (any language)
   - Auto-generated transcripts (any language)
3. **Only fails** if no transcripts exist at all

**Example behavior:**
```python
# Before: Video with only Hungarian transcript
get_transcript("3AMl5d2NKpQ")  #  Raises NoTranscriptFound

# After: Video with only Hungarian transcript  
get_transcript("3AMl5d2NKpQ")  #  Returns Hungarian transcript
```
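
A hedged sketch of that fallback order (the calls below follow `youtube-transcript-api`'s `TranscriptList` interface as commonly used; the real logic lives in `backend/blocks/youtube.py`):

```python
# Hedged sketch — the block's actual API usage may differ.
from youtube_transcript_api import NoTranscriptFound, YouTubeTranscriptApi


def get_transcript(video_id: str):
    transcript_list = YouTubeTranscriptApi.list_transcripts(video_id)
    try:
        transcript = transcript_list.find_transcript(["en"])  # prefer English
    except NoTranscriptFound:
        # Fall back to any language: manually created transcripts first,
        # then auto-generated ones; fail only if nothing is available at all.
        candidates = sorted(transcript_list, key=lambda t: t.is_generated)
        if not candidates:
            raise
        transcript = candidates[0]
    return transcript.fetch()
```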

## Changes

- **Modified** `backend/blocks/youtube.py`: Added try-catch logic to
fallback to any available language when English is not found
- **Added** `test/blocks/test_youtube.py`: Comprehensive test suite
covering URL extraction, language fallback, transcript preferences, and
error handling (7 tests)
- **Updated** `docs/content/platform/blocks/youtube.md`: Documented the
language fallback behavior and transcript priority order

## Testing

-  All 7 new unit tests pass
-  Block integration test passes
-  Full test suite: 621 passed, 0 failed (no regressions)
-  Code formatting and linting pass

## Impact

This fix enables the YouTube transcription block to work with
international content while maintaining full backward compatibility:

-  Videos in any language can now be transcribed
-  English is still preferred when available
-  No breaking changes to existing functionality
-  Graceful degradation to available languages

Fixes #10637
Fixes https://linear.app/autogpt/issue/OPEN-2626


<details>

<summary>Original prompt</summary>

> Issue Title: if theres only one lanague available for transcribe
youtube return that langage not an error
> Issue Description: `Could not retrieve a transcript for the video
https://www.youtube.com/watch?v=3AMl5d2NKpQ! This is most likely caused
by: No transcripts were found for any of the requested language codes:
('en',) For this video (3AMl5d2NKpQ) transcripts are available in the
following languages: (MANUALLY CREATED) None (GENERATED) - hu
("Hungarian (auto-generated)") (TRANSLATION LANGUAGES) None If you are
sure that the described cause is not responsible for this error and that
a transcript should be retrievable, please create an issue at
https://github.com/jdepoix/youtube-transcript-api/issues. Please add
which version of youtube_transcript_api you are using and provide the
information needed to replicate the error. Also make sure that there are
no open issues which already describe your problem!` you can use this
video to test:
[https://www.youtube.com/watch?v=3AMl5d2NKpQ\`](https://www.youtube.com/watch?v=3AMl5d2NKpQ%60)
> Fixes
https://linear.app/autogpt/issue/OPEN-2626/if-theres-only-one-lanague-available-for-transcribe-youtube-return


</details>



---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: ntindle <8845353+ntindle@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
2025-10-21 02:31:33 +00:00
Nicholas Tindle
eba67e0a4b fix(platform/blocks): update linear oauth to use refresh tokens (#10998)

### Need 💡

This PR addresses Linear issue SECRT-1665, which mandates an update to
Linear's OAuth2 implementation. Linear is transitioning from long-lived
access tokens to short-lived access tokens with refresh tokens, with a
deadline of April 1, 2026. This change is crucial to ensure continued
integration with Linear and to support their new token management
system, including a migration path for existing long-lived tokens.
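
As a rough illustration of the refresh flow described in the changes below, a refresh request using HTTP Basic Authentication might look like this (the endpoint URL and response fields are assumptions, not taken from the PR):

```python
# Hedged sketch — illustrates the Basic-auth refresh request only.
import requests


def refresh_access_token(client_id: str, client_secret: str, refresh_token: str) -> dict:
    resp = requests.post(
        "https://api.linear.app/oauth/token",  # assumed token endpoint
        auth=(client_id, client_secret),       # HTTP Basic Authentication
        data={"grant_type": "refresh_token", "refresh_token": refresh_token},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()  # expected keys: access_token, refresh_token, expires_in
```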

### Changes 🏗️

-   **`autogpt_platform/backend/backend/blocks/linear/_oauth.py`**:
- Implemented full support for refresh tokens, including HTTP Basic
Authentication for token refresh requests.
- Added `migrate_old_token()` method to exchange old long-lived access
tokens for new short-lived tokens with refresh tokens using Linear's
`/oauth/migrate_old_token` endpoint.
- Enhanced `get_access_token()` to automatically detect and attempt
migration for old tokens, and to refresh short-lived tokens when they
expire.
    -   Improved error handling and token expiration management.
- Updated `_request_tokens` to handle both authorization code and
refresh token flows, supporting Linear's recommended authentication
methods.
-   **`autogpt_platform/backend/backend/blocks/linear/_config.py`**:
- Updated `TEST_CREDENTIALS_OAUTH` mock data to include realistic
`access_token_expires_at` and `refresh_token` for testing the new token
lifecycle.
-   **`LINEAR_OAUTH_IMPLEMENTATION.md`**:
- Added documentation detailing the new Linear OAuth refresh token
implementation, including technical details, migration strategy, and
testing notes.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Verified OAuth URL generation and parameter encoding.
- [x] Confirmed HTTP Basic Authentication header creation for refresh
requests.
  - [x] Tested token expiration logic with a 5-minute buffer.
  - [x] Validated migration detection for old vs. new token types.
  - [x] Checked code syntax and import compatibility.

#### For configuration changes:

- [ ] `.env.default` is updated or already compatible with my changes
- [ ] `docker-compose.yml` is updated or already compatible with my
changes
- [ ] I have included a list of my configuration changes in the PR
description (under **Changes**)

---
Linear Issue: [SECRT-1665](https://linear.app/autogpt/issue/SECRT-1665)


---------

Co-authored-by: Cursor Agent <cursoragent@cursor.com>
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <ntindle@users.noreply.github.com>
Co-authored-by: Bentlybro <Github@bentlybro.com>
2025-10-20 20:44:58 +00:00
Nicholas Tindle
47bb89caeb fix(backend): Disable LaunchDarkly integration in metrics.py (#11217) 2025-10-20 14:07:21 -05:00
Ubbe
271a520afa feat(frontend): setup DataFast analytics (#11182)
## Changes 🏗️

Following https://datafa.st/docs/nextjs-app-router

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] We will see once we make a production deployment and get data into
the platform

### For configuration changes:

None
2025-10-20 16:18:04 +04:00
Swifty
3988057032 Merge branch 'swiftyos/secrt-1712-remove-error-handling-form-store-routes' into dev 2025-10-18 12:28:25 +02:00
Swifty
a6c6e48f00 Merge branch 'swiftyos/open-2791-featplatform-add-easy-test-data-creation' into dev 2025-10-18 12:28:17 +02:00
Swifty
e72ce2f9e7 Merge branch 'swiftyos/secrt-1709-store-provider-names-and-env-vars-in-db' into dev 2025-10-18 12:27:58 +02:00
Swifty
bd7a79a920 Merge branch 'swiftyos/secrt-1706-improve-store-search' into dev 2025-10-18 12:27:31 +02:00
Nicholas Tindle
3f546ae845 fix(frontend): improve waitlist error display for users not on allowlist (#11198)
fix issue with identifying errors :(
### Changes 🏗️


### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] we have to test in dev due to waitlist integration, so we are
merging. will revert if fails

---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
2025-10-18 05:14:05 +00:00
Nicholas Tindle
097a19141d fix(frontend): improve waitlist error display for users not on allowlist (#11196)
## Summary

This PR improves the user experience for users who are not on the
waitlist during sign-up. When a user attempts to sign up or log in with
an email that's not on the allowlist, they now see a clear, helpful
modal with a direct call-to-action to join the waitlist.

Fixes
[OPEN-2794](https://linear.app/autogpt/issue/OPEN-2794/display-waitlist-error-for-users-not-on-waitlist-during-sign-up)

## Changes

-  Updated `EmailNotAllowedModal` with improved messaging and a "Join
Waitlist" button
- 🔧 Fixed OAuth provider signup/login to properly display the waitlist
modal
- 📝 Enhanced auth-code-error page to detect and display
waitlist-specific errors
- 💬 Added helpful guidance about checking email address and Discord
support link
- 🎯 Consistent waitlist error handling across all auth flows (regular
signup, OAuth, error pages)

## Test Plan

Tested locally by:
1. Attempting signup with non-allowlisted email - modal appears 
2. Attempting Google SSO with non-allowlisted account - modal appears 
3. Modal shows "Join Waitlist" button that opens
https://agpt.co/waitlist 
4. Help text about checking email and Discord support is visible 

## Screenshots

The new waitlist modal includes:
- Clear "Join the Waitlist" title
- Explanation that platform is in closed beta
- "Join Waitlist" button (opens in new tab)
- Help text about checking email address
- Discord support link for users who need help

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
2025-10-18 03:37:31 +00:00
Swifty
c958c95d6b fix incorrect type import 2025-10-17 20:36:49 +02:00
Swifty
3e50cbd2cb fix import 2025-10-17 19:19:17 +02:00
Swifty
1b69f1644d revert frontend type change 2025-10-17 17:26:08 +02:00
Swifty
d9035a233c Merge branch 'swiftyos/secrt-1709-store-provider-names-and-env-vars-in-db' of github.com:Significant-Gravitas/AutoGPT into swiftyos/secrt-1709-store-provider-names-and-env-vars-in-db 2025-10-17 17:20:27 +02:00
Swifty
972cbfc3de fix tests 2025-10-17 17:20:05 +02:00
Swifty
8f861b1bb2 removed error handling from routes 2025-10-17 17:08:17 +02:00
Swifty
fa2731bb8b Merge branch 'dev' into swiftyos/secrt-1709-store-provider-names-and-env-vars-in-db 2025-10-17 17:06:09 +02:00
Swifty
2dc0c97a52 Add block registry and updated 2025-10-17 16:49:04 +02:00
Zamil Majdy
0bb2b87c32 fix(backend): resolve UserBalance migration issues and credit spending bug (#11192)
## Summary
Fix critical UserBalance migration and spending issues affecting users
with credits from transaction history but no UserBalance records.

## Root Issues Fixed

### Issue 1: UserBalance Migration Complexity
- **Problem**: Complex data migration with timestamp logic issues and
potential race conditions
- **Solution**: Simplified to idempotent table creation only,
application handles auto-population

### Issue 2: Credit Spending Bug  
- **Problem**: Users with $10.0 from transaction history couldn't spend
$0.16
- **Root Cause**: `_add_transaction` and `_enable_transaction` only
checked UserBalance table, returning 0 balance for users without records
- **Solution**: Enhanced both methods with transaction history fallback
logic

### Issue 3: Exception Handling Inconsistency
- **Problem**: Raw SQL unique violations raised different exception
types than Prisma ORM
- **Solution**: Convert raw SQL unique violations to
`UniqueViolationError` at source

## Changes Made

### Migration Cleanup
- **Idempotent operations**: Use `CREATE TABLE IF NOT EXISTS`, `CREATE
INDEX IF NOT EXISTS`
- **Inline foreign key**: Define constraint within `CREATE TABLE`
instead of separate `ALTER TABLE`
- **Removed data migration**: Application creates UserBalance records
on-demand
- **Safe to re-run**: No errors if table/index/constraint already exists

### Credit Logic Fixes
- **Enhanced `_add_transaction`**: Added transaction history fallback in
`user_balance_lock` CTE
- **Enhanced `_enable_transaction`**: Added same fallback logic for
payment fulfillment
- **Exception normalization**: Convert raw SQL unique violations to
`UniqueViolationError`
- **Simplified `onboarding_reward`**: Use standardized
`UniqueViolationError` catching

### SQL Fallback Pattern
```sql
COALESCE(
    (SELECT balance FROM UserBalance WHERE userId = ? FOR UPDATE),
    -- Fallback: compute from transaction history if UserBalance doesn't exist
    (SELECT COALESCE(ct.runningBalance, 0) 
     FROM CreditTransaction ct 
     WHERE ct.userId = ? AND ct.isActive = true AND ct.runningBalance IS NOT NULL 
     ORDER BY ct.createdAt DESC LIMIT 1),
    0
) as balance
```

## Impact

### Before
-  Users with transaction history but no UserBalance couldn't spend
credits
-  Migration had complex timestamp logic with potential bugs  
-  Raw SQL and Prisma exceptions handled differently
-  Error: "Insufficient balance of $10.0, where this will cost $0.16"

### After  
-  Seamless spending for all users regardless of UserBalance record
existence
-  Simple, idempotent migration that's safe to re-run
-  Consistent exception handling across all credit operations
-  Automatic UserBalance record creation during first transaction
-  Backward compatible - existing users unaffected

## Business Value
- **Eliminates user frustration**: Users can spend their credits
immediately
- **Smooth migration path**: From old User.balance to new UserBalance
table
- **Better reliability**: Atomic operations with proper error handling
- **Maintainable code**: Consistent patterns across credit operations

## Test Plan
- [ ] Manual testing with users who have transaction history but no
UserBalance records
- [ ] Verify migration can be run multiple times safely
- [ ] Test spending credits works for all user scenarios
- [ ] Verify payment fulfillment (`_enable_transaction`) works correctly
- [ ] Add comprehensive test coverage for this scenario

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-10-17 19:46:13 +07:00
Swifty
a1d9b45238 updated openapi spec 2025-10-17 14:01:37 +02:00
Swifty
29895c290f store providers in db 2025-10-17 13:34:35 +02:00
Zamil Majdy
73c0b6899a fix(backend): Remove advisory locks for atomic credit operations (#11143)
## Problem
High QPS failures on `spend_credits` operations due to lock contention
from `pg_advisory_xact_lock` causing serialization and seconds of wait
time.

## Solution 
Replace PostgreSQL advisory locks with atomic database operations using
CTEs (Common Table Expressions).
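
A hedged sketch of what such an atomic spend can look like (table and column names are guesses based on the PR text, and the raw-query helper is an assumption):

```python
# Hedged sketch — a single statement checks the balance, decrements it, and
# records the transaction atomically; no advisory lock is required.
SPEND_SQL = """
WITH updated AS (
    UPDATE "User"
    SET "balance" = "balance" - $2
    WHERE "id" = $1 AND "balance" >= $2      -- refuses to go negative
    RETURNING "balance"
)
INSERT INTO "CreditTransaction" ("userId", "amount", "runningBalance")
SELECT $1, -$2, "balance" FROM updated
RETURNING "runningBalance";
"""


async def spend_credits(db, user_id: str, amount: int) -> int:
    rows = await db.query_raw(SPEND_SQL, user_id, amount)  # assumed raw-query API
    if not rows:  # the UPDATE matched nothing, so the balance was insufficient
        raise ValueError("Insufficient balance")
    return rows[0]["runningBalance"]
```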

### Key Changes
- **Add persistent balance column** to User table for O(1) balance
lookups
- **Atomic CTE-based operations** for all credit transactions using
UPDATE...RETURNING pattern
- **Comprehensive concurrency tests** with 7 test scenarios including
stress testing
- **Remove all advisory lock usage** from the credit system

### Implementation Details
1. **Migration**: Adds balance column with backfill from transaction
history
2. **Atomic Operations**: All credit operations now use single atomic
CTEs that update balance and create transaction in one query
3. **Race Condition Prevention**: WHERE clauses in UPDATE statements
ensure balance never goes negative
4. **BetaUserCredit Compatibility**: Preserved monthly refill logic with
updated `_add_transaction` signature

### Performance Impact
-  Eliminated lock contention bottlenecks
-  O(1) balance lookups instead of O(n) transaction aggregation  
-  Atomic operations prevent race conditions without locks
-  Supports high QPS without serialization delays

### Testing
- All existing tests pass
- New concurrency test suite (`credit_concurrency_test.py`) with:
  - Concurrent spends from same user
  - Insufficient balance handling
  - Mixed operations (spends, top-ups, balance checks)
  - Race condition prevention
  - Integer overflow protection
  - Stress testing with 100 concurrent operations

### Breaking Changes
None - all existing APIs maintain compatibility

🤖 Generated with [Claude Code](https://claude.ai/code)

<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit

* **New Features**
* Enhanced top‑up flows with top‑up types, clearer credit→dollar
formatting, and idempotent onboarding rewards.

* **Bug Fixes**
* Fixed race conditions for concurrent spends/top‑ups, added
integer‑overflow and underflow protection, stronger input validation,
and improved refund/dispute handling.

* **Refactor**
* Persisted per‑user balance with atomic updates for reliable balances;
admin history now prefetches balances.

* **Tests**
* Added extensive concurrency, refund, ceiling/underflow and migration
test suites.

* **Chores**
* Database migration to add persisted user balance; APIKey status
extended (SUSPENDED).
<!-- end of auto-generated comment: release notes by coderabbit.ai -->

---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Swifty <craigswift13@gmail.com>
2025-10-17 17:05:05 +07:00
Zamil Majdy
4c853a54d7 Merge commit 'e4bc728d40332e7c2b1edec5f1b200f1917950e2' into HEAD 2025-10-17 16:43:23 +07:00
Zamil Majdy
dfdd632161 fix(backend/util): handle nested Pydantic models in SafeJson (#11188)
## Summary

Fixes a critical serialization bug introduced in PR #11187 where
`SafeJson` failed to serialize dictionaries containing Pydantic models,
causing 500 Internal Server Errors in the executor service.

## Problem

The error manifested as:
```
CRITICAL: Operation Approaching Failure Threshold: Service communication: '_call_method_async'
Current attempt: 50/50
Error: HTTPServerError: HTTP 500: Server error '500 Internal Server Error' 
for url 'http://autogpt-database-manager.prod-agpt.svc.cluster.local:8005/create_graph_execution'
```

Root cause in `create_graph_execution`
(backend/data/execution.py:656-657):
```python
"credentialInputs": SafeJson(credential_inputs) if credential_inputs else Json({})
```

Where `credential_inputs: Mapping[str, CredentialsMetaInput]` is a dict
containing Pydantic models.

After PR #11187's refactor, `_sanitize_value()` only converted top-level
BaseModel instances to dicts, but didn't handle BaseModel instances
nested inside dicts/lists/tuples. This caused Prisma's JSON serializer
to fail with:
```
TypeError: Type <class 'backend.data.model.CredentialsMetaInput'> not serializable
```

## Solution

Added BaseModel handling to `_sanitize_value()` to recursively convert
Pydantic models to dicts before sanitizing:

```python
elif isinstance(value, BaseModel):
    # Convert Pydantic models to dict and recursively sanitize
    return _sanitize_value(value.model_dump(exclude_none=True))
```

This ensures all nested Pydantic models are properly serialized
regardless of nesting depth.
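
Putting it together, a simplified sketch of the whole recursive sanitizer (an approximation, not the exact `backend/util/json.py` code):

```python
# Simplified sketch of the recursive sanitizer.
from pydantic import BaseModel


def _sanitize_value(value):
    if isinstance(value, BaseModel):
        # Convert Pydantic models to dicts, then sanitize the result recursively
        return _sanitize_value(value.model_dump(exclude_none=True))
    if isinstance(value, dict):
        return {key: _sanitize_value(item) for key, item in value.items()}
    if isinstance(value, (list, tuple)):
        return [_sanitize_value(item) for item in value]
    if isinstance(value, str):
        # Drop control characters (e.g. an actual \u0000) that Postgres JSON rejects
        return "".join(ch for ch in value if ch >= " " or ch in "\t\n\r")
    return value
```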

## Changes

- **backend/util/json.py**: Added BaseModel check to `_sanitize_value()`
function
- **backend/util/test_json.py**: Added 6 comprehensive tests covering:
  - Dict containing Pydantic models
  - Deeply nested Pydantic models  
  - Lists of Pydantic models in dicts
  - The exact CredentialsMetaInput scenario
  - Complex mixed structures
  - Models with control characters

## Testing

- All new tests pass
- Verified fix resolves the production 500 error
- Code formatted with `poetry run format`

## Related

- Fixes issues introduced in PR #11187
- Related to executor service 500 errors in production

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Bentlybro <Github@bentlybro.com>
Co-authored-by: Claude <noreply@anthropic.com>
2025-10-17 09:27:09 +00:00
Swifty
1ed224d481 simplify test and add reset-db make command 2025-10-17 11:12:00 +02:00
Swifty
3b5d919399 fix formatting 2025-10-17 10:56:45 +02:00
Swifty
3c16de22ef add test data creation to makefile and test it 2025-10-17 10:51:58 +02:00
Zamil Majdy
e4bc728d40 Revert "Revert "fix(backend/util): rewrite SafeJson to prevent Invalid \escape errors (#11187)""
This reverts commit 8258338caf.
2025-10-17 15:25:30 +07:00
Swifty
2c6d85d15e feat(platform): Shared cache (#11150)
### Problem
When running multiple backend pods in production, requests can be routed
to different pods causing inconsistent cache states. Additionally, the
current cache implementation in `autogpt_libs` doesn't support shared
caching across processes, leading to data inconsistency and redundant
cache misses.
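
A minimal usage sketch of the shared cache described in the changes below (the import path, function name, and invalidation call are illustrative assumptions; the decorator parameters follow that description):

```python
# Hedged usage sketch — names below are assumptions.
from backend.util.cache import cached


@cached(ttl_seconds=300, shared_cache=True)  # stored in Redis, visible to all pods
async def get_store_agent_details(username: str, agent_name: str) -> dict:
    # ...expensive DB/aggregation work would go here...
    return {"username": username, "agent": agent_name}


# After a write, invalidate explicitly, e.g. via clear_all_caches() or a
# targeted get_store_agent_details.cache_delete(...) call.
```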

### Changes 🏗️

- **Moved cache implementation from autogpt_libs to backend**
(`/backend/backend/util/cache.py`)
  - Removed `/autogpt_libs/autogpt_libs/utils/cache.py`
  - Centralized cache utilities within the backend module
  - Updated all import statements across the codebase

- **Implemented Redis-based shared caching**
- Added `shared_cache` parameter to `@cached` decorator for
cross-process caching
  - Implemented Redis connection pooling for efficient cache operations
  - Added support for cache key pattern matching and bulk deletion
  - Added TTL refresh on cache access with `refresh_ttl_on_get` option

- **Enhanced cache functionality**
  - Added thundering herd protection with double-checked locking
  - Implemented thread-local caching with `@thread_cached` decorator
- Added cache management methods: `cache_clear()`, `cache_info()`,
`cache_delete()`
  - Added support for both sync and async functions

- **Updated store caching** (`/backend/server/v2/store/cache.py`)
  - Enabled shared caching for all store-related cache functions
  - Set appropriate TTL values (5-15 minutes) for different cache types
  - Added `clear_all_caches()` function for cache invalidation

- **Added Redis configuration**
  - Added Redis connection settings to backend settings
  - Configured dedicated connection pool for cache operations
  - Set up binary mode for pickle serialization

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Verify Redis connection and cache operations work correctly
  - [x] Test shared cache across multiple backend instances
  - [x] Verify cache invalidation with `clear_all_caches()`
- [x] Run cache tests: `poetry run pytest
backend/backend/util/cache_test.py`
  - [x] Test thundering herd protection under concurrent load
  - [x] Verify TTL refresh functionality with `refresh_ttl_on_get=True`
  - [x] Test thread-local caching for request-scoped data
  - [x] Ensure no performance regression vs in-memory cache

#### For configuration changes:

- [x] `.env.default` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes (Redis already configured)
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
- Redis cache configuration uses existing Redis service settings
(REDIS_HOST, REDIS_PORT, REDIS_PASSWORD)
  - No new environment variables required
2025-10-17 07:56:01 +00:00
Bentlybro
8258338caf Revert "fix(backend/util): rewrite SafeJson to prevent Invalid \escape errors (#11187)"
This reverts commit e62a56e8ba.
2025-10-17 08:31:23 +01:00
Zamil Majdy
374f35874c feat(platform): Add LaunchDarkly flag for platform payment system (#11181)
## Summary

Implement selective rollout of payment functionality using LaunchDarkly
feature flags to enable gradual deployment to pilot users.

- Add `ENABLE_PLATFORM_PAYMENT` flag to control credit system behavior
- Update `get_user_credit_model` to use user-specific flag evaluation  
- Replace hardcoded `NEXT_PUBLIC_SHOW_BILLING_PAGE` with LaunchDarkly
flag
- Enable payment UI components only for flagged users
- Maintain backward compatibility with existing beta credit system
- Default to beta monthly credits when flag is disabled
- Fix tests to work with new async credit model function

## Key Changes

### Backend
- **Credit Model Selection**: The `get_user_credit_model()` function now takes a `user_id` parameter and uses LaunchDarkly to determine which credit model to return (sketched below):
- Flag enabled → `UserCredit` (payment system enabled, no monthly
refills)
- Flag disabled → `BetaUserCredit` (current behavior with monthly
refills)
  
- **Flag Integration**: Added `ENABLE_PLATFORM_PAYMENT` flag and
integrated LaunchDarkly evaluation throughout the credit system

- **API Updates**: All credit-related endpoints now use the
user-specific credit model instead of a global instance
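
A hedged sketch of that selection logic (flag helper names, import paths, and constructor arguments are assumptions; the flag name and behavior follow the description above):

```python
# Hedged sketch — not the exact implementation.
from backend.data.credit import BetaUserCredit, UserCredit       # assumed imports
from backend.util.feature_flag import Flag, is_feature_enabled   # assumed helpers


async def get_user_credit_model(user_id: str):
    if await is_feature_enabled(Flag.ENABLE_PLATFORM_PAYMENT, user_id):
        return UserCredit()      # payment system enabled, no monthly refills
    return BetaUserCredit()      # default: current beta behavior with monthly refills
```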

### Frontend
- **Dynamic UI**: Payment-related components (billing page, wallet
refill) now show/hide based on the LaunchDarkly flag
- **Removed Environment Variable**: Replaced
`NEXT_PUBLIC_SHOW_BILLING_PAGE` with runtime flag evaluation

### Testing
- **Test Fixes**: Updated all tests that referenced the removed global
`_user_credit_model` to use proper mocking of the new async function

## Deployment Strategy

This implementation enables a controlled rollout:
1. Deploy with flag disabled (default) - no behavior change for existing
users
2. Enable flag for pilot/beta users via LaunchDarkly dashboard
3. Monitor usage and feedback from pilot users
4. Gradually expand to more users
5. Eventually enable for all users once validated

## Test Plan

- [x] Unit tests pass for credit system components
- [x] Payment UI components show/hide correctly based on flag
- [x] Default behavior (flag disabled) maintains current functionality
- [x] Flag enabled users get payment system without monthly refills
- [x] Admin credit operations work correctly
- [x] Backward compatibility maintained

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-10-17 06:11:39 +00:00
Zamil Majdy
e62a56e8ba fix(backend/util): rewrite SafeJson to prevent Invalid \escape errors (#11187)
## Summary

Fixes the `Invalid \escape` error occurring in
`/upsert_execution_output` endpoint by completely rewriting the SafeJson
implementation.

## Problem

- Error: `POST /upsert_execution_output failed: Invalid \escape: line 1
column 36404 (char 36403)`
- Caused by data containing literal backslash-u sequences (e.g.,
`\u0000` as text, not actual null characters)
- Previous implementation tried to remove problematic escape sequences
from JSON strings
- This created invalid JSON when it removed `\\u0000` and left invalid
sequences like `\w`

## Solution

Completely rewrote SafeJson to work on Python data structures instead of
JSON strings:

1. **Direct data sanitization**: Recursively walks through dicts, lists,
and tuples to remove control characters directly from strings
2. **No JSON string manipulation**: Avoids all escape sequence parsing
issues
3. **More efficient**: Eliminates the serialize → sanitize → deserialize
cycle
4. **Preserves valid content**: Backslashes, paths, and literal text are
correctly preserved

## Changes

- Removed `POSTGRES_JSON_ESCAPES` regex (no longer needed)
- Added `_sanitize_value()` helper function for recursive sanitization
- Simplified `SafeJson()` to convert Pydantic models and sanitize data
structures
- Added `import json  # noqa: F401` for backwards compatibility

## Testing

-  Verified fix resolves the `Invalid \escape` error
-  All existing SafeJson unit tests pass
-  Problematic data with literal escape sequences no longer causes
errors
-  Code formatted with `poetry run format`

## Technical Details

**Before (JSON string approach):**
```python
# Serialize to JSON string
json_string = dumps(data)
# Remove escape sequences from string (BREAKS!)
sanitized = regex.sub("", json_string)
# Parse back (FAILS with Invalid \escape)
return Json(json.loads(sanitized))
```

**After (data structure approach):**
```python
# Convert Pydantic to dict
data = model.model_dump() if isinstance(data, BaseModel) else data
# Recursively sanitize strings in data structure
sanitized = _sanitize_value(data)
# Return as Json (no parsing needed)
return Json(sanitized)
```

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-10-17 05:56:08 +00:00
Abhimanyu Yadav
f3f9a60157 feat(frontend): add extra info in custom node in new builder (#11172)
Currently, we don’t add category and cost information to custom nodes in the new builder. This PR adds it, so nodes render the correct information and costs are displayed accurately based on the selected discriminator value.

<img width="441" height="781" alt="Screenshot 2025-10-15 at 2 43 33 PM"
src="https://github.com/user-attachments/assets/8199cfa7-4353-4de2-8c15-b68aa86e458c"
/>


### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] All information is displayed correctly.
- [x] I’ve tried changing the discrimination value and we’re getting the
correct cost for the selected value.
2025-10-17 04:35:22 +00:00
Swifty
3ed1c93ec0 Merge branch 'dev' into swiftyos/secrt-1706-improve-store-search 2025-10-16 15:10:01 +02:00
Swifty
773f545cfd update existing rows when migration is run 2025-10-16 13:38:01 +02:00