Need: The Gmail integration had several parsing issues that were causing
data loss and
workflow incompatibilities:
1. Email recipient parsing only captured the first recipient, losing
CC/BCC and multiple TO
recipients
2. Email body parsing was inconsistent between blocks, sometimes showing
"This email does
not contain a readable body" for valid emails
3. Type mismatches between blocks caused serialization issues when
connecting them in
workflows (lists being converted to string representations like
"[\"email@example.com\"]")
# Changes 🏗️
1. Enhanced Email Model:
- Added cc and bcc fields to capture all recipients
- Changed the `to` field from a string to a list for consistency
- Now captures all recipients instead of just the first one
2. Improved Email Parsing (see the sketch after this list):
- Updated GmailReadBlock and GmailGetThreadBlock to parse all recipients using `getaddresses()`
- Unified email body parsing logic across blocks with robust multipart handling
- Added support for HTML-to-plain-text conversion
- Fixed handling of emails with attachments as body content
3. Fixed Block Compatibility:
- Updated GmailSendBlock and GmailCreateDraftBlock to accept lists for
recipient fields
- Added validation to ensure at least one recipient is provided
- All blocks now consistently use lists for recipient fields, preventing
serialization
issues
4. Updated Test Data:
- Modified all test inputs/outputs to use the new list format for
recipients
- Ensures tests reflect the new data structure
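A minimal sketch of the recipient parsing using the stdlib's `getaddresses()` (the header-dict shape and function name here are illustrative, not the exact code in the Gmail blocks):

```python
from email.utils import getaddresses

def parse_recipients(headers: dict[str, str]) -> dict[str, list[str]]:
    """Collect every address from the To/Cc/Bcc headers of a message."""
    recipients: dict[str, list[str]] = {}
    for field in ("To", "Cc", "Bcc"):
        raw = headers.get(field, "")
        # getaddresses() handles display names, quoting, and comma-separated
        # address lists, so no recipient gets dropped.
        recipients[field.lower()] = [addr for _, addr in getaddresses([raw]) if addr]
    return recipients

# parse_recipients({"To": "Alice <a@x.com>, b@x.com", "Cc": "c@x.com"})
# -> {"to": ["a@x.com", "b@x.com"], "cc": ["c@x.com"], "bcc": []}
```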
# Checklist 📋
For code changes:
- I have clearly listed my changes in the PR description
- I have made a test plan
- I have tested my changes according to the test plan:
- Run existing Gmail block unit tests with `poetry run test`
- Create a workflow that reads emails with multiple recipients and
verify all TO, CC, BCC
recipients are captured
- Test email body parsing with plain text, HTML, and multipart emails
- Connect GmailReadBlock → GmailSendBlock in a workflow and verify
recipient data flows
correctly
- Connect GmailReplyBlock → GmailSendBlock and verify no serialization
errors occur
- Test sending emails with multiple recipients via GmailSendBlock
- Test creating drafts with multiple recipients via
GmailCreateDraftBlock
- Verify backwards compatibility by testing with single recipient
strings (should now
require lists)
- Create from scratch and execute an agent with at least 3 blocks
- Import an agent from file upload, and confirm it executes correctly
- Upload agent to marketplace
- Import an agent from marketplace and confirm it executes correctly
- Edit an agent from monitor, and confirm it executes correctly
# Breaking Change Note
The `to` field in GmailSendBlock and GmailCreateDraftBlock now requires a list instead of accepting both string and list. Existing workflows using strings will need to be updated to use lists (e.g., `["email@example.com"]` instead of `"email@example.com"`).
---------
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
## Changes 🏗️
Fix the issue where the agent activity would sometimes show a link to agent runs that are not available in the library: now only runs that can be verified in the library are shown. Also improve the display of the agent name 🤔
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Run agents
- [x] There are no empty links on the activity dropdown
- [x] Agent names look good 💅🏽
### For configuration changes:
None
### Changes 🏗️
This PR fixes an issue where LLM blocks (particularly
AITextSummarizerBlock) were not properly tracking `llm_call_count` in
their execution statistics, despite correctly tracking token counts.
**Root Cause**: The `finally` block in
`AIStructuredResponseGeneratorBlock.run()` that sets `llm_call_count`
was executing after the generator returned, meaning the stats weren't
available when `merge_llm_stats()` was called by dependent blocks.
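The timing issue can be reproduced with a tiny stand-in generator (hypothetical names; the real generator is `AIStructuredResponseGeneratorBlock.run()`):

```python
def run(stats: dict):
    """Stand-in for the block's generator with the old finally-based tracking."""
    try:
        yield "response"  # the caller resumes here, output in hand
    finally:
        # Only runs once the generator is advanced to completion or closed --
        # which can be after the caller has already merged the stats.
        stats["llm_call_count"] = 1

stats: dict = {}
gen = run(stats)
output = next(gen)  # consume the yielded output
print(stats)        # {} -- a merge_llm_stats() call here sees no call count
# The fix: set llm_call_count before yielding/returning, not in finally.
```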
**Changes made**:
- **Fixed stats tracking timing**: Moved `llm_call_count` and
`llm_retry_count` tracking to execute before successful return
statements in `AIStructuredResponseGeneratorBlock.run()`
- **Removed problematic finally block**: Eliminated the finally block
that was setting stats after function return
- **Added comprehensive tests**: Created extensive test suite for LLM
stats tracking across all AI blocks
- **Added SmartDecisionMaker stats tracking**: Fixed missing LLM stats
tracking in SmartDecisionMakerBlock
- **Fixed type errors**: Added appropriate type ignore comments for test
mock objects
**Files affected**:
- `backend/blocks/llm.py`: Fixed stats tracking timing in
AIStructuredResponseGeneratorBlock
- `backend/blocks/smart_decision_maker.py`: Added missing LLM stats
tracking
- `backend/blocks/test/test_llm.py`: Added comprehensive LLM stats
tracking tests
- `backend/blocks/test/test_smart_decision_maker.py`: Added LLM stats
tracking test and fixed circular imports
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Created comprehensive unit tests for all LLM blocks stats tracking
- [x] Verified AITextSummarizerBlock now correctly tracks llm_call_count
(was 0, now shows actual call count)
- [x] Verified AIStructuredResponseGeneratorBlock properly tracks stats
with retries
- [x] Verified SmartDecisionMakerBlock now tracks LLM usage stats
- [x] Verified all existing tests still pass
- [x] Ran `poetry run format` to ensure code formatting
- [x] All 11 LLM and SmartDecisionMaker tests pass
#### For configuration changes:
- [x] `.env.example` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
**Note**: No configuration changes were needed for this fix.
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
Co-authored-by: Claude <noreply@anthropic.com>
HTTP requests can fail when DNS resolution is broken. Sometimes this kind of issue requires a client reset.
### Changes 🏗️
Introduce HTTP client refresh on repeated error
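A sketch of the mechanism, assuming an `httpx`-style client (the error threshold and library choice are assumptions, not the exact implementation):

```python
import httpx

class RefreshingClient:
    """Recreate the underlying HTTP client after repeated transport errors,
    e.g. when a stale DNS resolution keeps every request failing."""

    def __init__(self, max_consecutive_errors: int = 3):
        self._max_errors = max_consecutive_errors
        self._errors = 0
        self._client = httpx.Client()

    def get(self, url: str) -> httpx.Response:
        try:
            response = self._client.get(url)
            self._errors = 0  # any success resets the counter
            return response
        except httpx.TransportError:
            self._errors += 1
            if self._errors >= self._max_errors:
                self._client.close()
                self._client = httpx.Client()  # fresh pool, fresh DNS lookups
                self._errors = 0
            raise
```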
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Manual run, added tests
To allow for a simpler dev experience, the new SDK now auto-discovers providers and registers them. However, the OAuth system still required these credentials to be hardcoded in the settings object. This PR changes that: it verifies the env vars are present during registration and then allows the OAuth system to load them from the environment.
### Changes 🏗️
- **OAuth Registration**: Modified `ProviderBuilder.with_oauth(..)` to
check OAuth env vars exist during registration
- **OAuth Loading**: Updated OAuth system to load credentials from env
vars if not using secrets
- **Block Filtering**: Added `is_block_auth_configured()` function to check if a block has valid authorization options configured at runtime (sketched below)
- **Test Updates**: Fixed failing SDK registry tests to properly mock
environment variables for OAuth registration
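A sketch of the registration-time check (the env var naming convention here is an assumption; the real helper is `is_block_auth_configured()`):

```python
import os

def oauth_env_configured(provider_name: str) -> bool:
    """True when both OAuth env vars for a provider are present, e.g.
    GITHUB_CLIENT_ID / GITHUB_CLIENT_SECRET (naming convention assumed)."""
    prefix = provider_name.upper()
    return bool(
        os.getenv(f"{prefix}_CLIENT_ID") and os.getenv(f"{prefix}_CLIENT_SECRET")
    )

# During ProviderBuilder.with_oauth(..), registration is validated against this
# check; at runtime, blocks whose providers are unconfigured get filtered out.
```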
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Verified OAuth system checks that env vars exist during provider
registration
- [x] Confirmed OAuth system can use env vars directly without requiring
hardcoded secrets
- [x] Tested that blocks with unconfigured OAuth providers are filtered
out
- [x] All SDK registry tests pass with proper env var mocking
#### For configuration changes:
- [x] `.env.example` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
- OAuth providers now require their client ID and secret env vars to be
set for registration
- No changes required to `.env.example` or `docker-compose.yml`
---------
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
## Changes 🏗️
There is a bug where the agent activity dropdown bubble only shows up to `6` even if there are `50` running agents. We only display the last 6 runs in the dropdown, but the bubble badge count should show the correct total of all running agents.
On top of that, we added the option to search runs by agent name when you have more than 6 recent runs:
https://github.com/user-attachments/assets/931e3db7-5715-48d1-b4df-22490fae9de0
- Also make the dropdown items a link ( `a` ) so that you can command-click them to open runs in new tabs.
- Keep up to `400` executions in the state ( worst-case load test )
  - Each execution object is relatively small (ID, status, timestamps, agent info)
  - 400 objects × ~`1KB` each ≈ `400KB`, a negligible memory footprint
- Always display running agents at the top
- Only display runs from the last week in the dropdown
  - The agent library page contains the historical runs; this is just to show the recent ones
### Code changes
- **Added count tracking**
- the `NotificationState` interface now includes separate count fields
(`activeCount`, `recentCompletionsCount`, `recentFailuresCount`) to
track the actual numbers independent of display limits.
- **Dual array system:**
- the `categorizeExecutions` function now creates:
- unlimited arrays for counting all executions
- limited arrays (sliced to 6 items) for dropdown display
- Updated all helper functions to properly maintain both the display
arrays and the count fields.
- Component uses actual counts
- `<AgentActivityDropdown />` component now uses `activeCount` for the
badge and hover hint instead of `activeExecutions.length`
## Checklist 📋
### For code changes
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Login and navigate to library or build
- [x] Start running agents like there is no tomorrow
- [x] The badge shows the correct agent execution count ( i.e. 10 )
- [x] The dropdown only displays the 6 most recent
- [x] You can command click on the runs and they open in new tabs
### For configuration changes
None
The heartbeat mechanism doesn't seem to work at the moment.
### Changes 🏗️
Revert the RabbitMQ consumer heartbeat mechanism
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Run agents
This PR adds WordPress integration to the AutoGPT platform, enabling users to create posts on WordPress.com and Jetpack-enabled sites.
### Changes 🏗️
**OAuth Implementation:**
- Added WordPress OAuth2 handler (`_oauth.py`) supporting both single
blog and global access tokens
- Implemented OAuth flow without PKCE (as WordPress doesn't require it)
- Added token validation endpoint support
- Server-side tokens don't expire, eliminating the need for refresh in
most cases
**API Integration:**
- Created WordPress API client (`_api.py`) with Pydantic models for type
safety
- Implemented `create_post` function with full support for WordPress post features (see the sketch below)
- Added helper functions for token validation and generic API requests
- Fixed response models to handle WordPress API's mixed data types
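A hedged sketch of the `create_post` helper against the WordPress.com REST API v1.1 (field names and the exact models in `_api.py` may differ):

```python
import requests

def create_post(token: str, site: str, title: str, content: str) -> dict:
    """Create and immediately publish a post on a WordPress.com/Jetpack site."""
    response = requests.post(
        f"https://public-api.wordpress.com/rest/v1.1/sites/{site}/posts/new",
        headers={"Authorization": f"Bearer {token}"},
        json={"title": title, "content": content, "status": "publish"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()
```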
**WordPress Block:**
- Created `WordPressCreatePostBlock` in `blog.py` with minimal
user-facing options
- Exposed fields: site, title, content, excerpt, slug, author,
categories, tags, featured_image, media_urls
- Posts are published immediately by default
- Integrated with platform's OAuth credential system
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] OAuth URL generation works correctly for single blog and global
access
- [x] Token exchange and validation functions handle WordPress API
responses
- [x] Create post block properly transforms input data to API format
- [x] Response models handle mixed data types from WordPress API
The WordPress OAuth provider needs to be configured with client ID and
secret from WordPress.com application settings.
<!-- Clearly explain the need for these changes: -->
I'm working with these blocks and found some much-needed improvements
### Changes 🏗️
- Outputs labels from emails
- Types the outputs
- Reorders the yielding of the SmartDecisionMaker block
<!-- Concisely describe all of the changes made in this pull request:
-->
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Build a test agent with labels, sending, reading emails + smart
decision maker
When we run multiple instances of the executor, some executors can oversubscribe to messages and end up queuing the agent execution request instead of letting another executor handle the job. This change solves that problem.
### Changes 🏗️
* Reject execution requests when the executor is full (see the sketch below).
* Improve `active_graph_runs` tracking for better horizontal-scaling heuristics.
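A sketch of the rejection logic (class and method names are illustrative):

```python
import threading

class ExecutorSlots:
    """Track active graph runs; reject new work when full so the broker can
    redeliver the execution request to another executor instead of queuing it."""

    def __init__(self, capacity: int):
        self._capacity = capacity
        self._active: set[str] = set()
        self._lock = threading.Lock()

    def try_acquire(self, graph_run_id: str) -> bool:
        with self._lock:
            if len(self._active) >= self._capacity:
                return False  # reject: let another executor pick the message up
            self._active.add(graph_run_id)
            return True

    def release(self, graph_run_id: str) -> None:
        with self._lock:
            self._active.discard(graph_run_id)
```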
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Manual graph execution & CI
### Changes 🏗️
1. JSON columns have to contain JSON-serializable data, but sometimes the data is not, so `SafeJson` is introduced to make sure that data can be serialized to a string and back before being persisted to the database (see the sketch below).
2. Locks & transactions were being used in cases where they are not needed, which reduces database & Redis performance.
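A minimal sketch of the `SafeJson` idea (the real implementation may differ):

```python
import json
from typing import Any

def SafeJson(data: Any) -> Any:
    """Force a JSON round-trip, coercing non-serializable values to strings,
    so whatever we persist is guaranteed to survive serialization."""
    return json.loads(json.dumps(data, default=str))

# SafeJson({"when": datetime.now(), "n": 1}) -> {"when": "2025-...", "n": 1}
```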
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] CI tests
We want scrolling for the agent dialog list.
- Based on #9833
### Changes 🏗️
- adds backend support for paginating this content
- adds frontend support for scrolling pagination
<!-- Concisely describe all of the changes made in this pull request:
-->
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] test UI for this
---------
Co-authored-by: Venkat Sai Kedari Nath Gandham <154089422+Kedarinath1502@users.noreply.github.com>
Co-authored-by: Claude <claude@users.noreply.github.com>
Co-authored-by: Claude <noreply@anthropic.com>
- Resolves #10458
### Changes 🏗️
Improve logic in `useAgentGraph`:
- Correctly handle unset `flowVersion` in checks in hooks
- Prevent unnecessary WebSocket re-connects
- Remove redundant WebSocket connection management logic
- Untangle hooks for initial load and set-up
- Simplify block filtering logic
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- Edit an agent in the builder
- [x] WebSocket doesn't re-connect unnecessarily
- [x] Graph doesn't reset on WebSocket re-connect
- [x] Graph doesn't reset on LaunchDarkly re-connect
Bumps [redis](https://github.com/redis/redis-py) from 5.2.1 to 6.2.0,
for both `autogpt_libs` and `backend`.
Also, additional fixes in `autogpt_libs/pyproject.toml`:
- Move `redis` from dev dependencies to prod dependencies
- Fix author info
- Sort dependencies
> [!NOTE]
> Of course dependabot wouldn't do this on its own; this PR has been
taken over and augmented by @Pwuts
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/redis/redis-py/releases">redis's
releases</a>.</em></p>
<blockquote>
<h2>6.2.0</h2>
<h1>Changes</h1>
<h2>🚀 New Features</h2>
<ul>
<li>Add <code>dynamic_startup_nodes</code> parameter to async
RedisCluster (<a
href="https://redirect.github.com/redis/redis-py/issues/3646">#3646</a>)</li>
<li>Support RESP3 with <code>hiredis-py</code> parser (<a
href="https://redirect.github.com/redis/redis-py/issues/3648">#3648</a>)</li>
<li>[Async] Support for transactions in async <code>RedisCluster</code>
client (<a
href="https://redirect.github.com/redis/redis-py/issues/3649">#3649</a>)</li>
</ul>
<h2>🐛 Bug Fixes</h2>
<ul>
<li>Revert wrongly changed default value for <code>check_hostname</code>
when instantiating <code>RedisSSLContext</code> (<a
href="https://redirect.github.com/redis/redis-py/issues/3655">#3655</a>)</li>
<li>Fixed potential deadlock from unexpected <code>__del__</code> call
(<a
href="https://redirect.github.com/redis/redis-py/issues/3654">#3654</a>)</li>
</ul>
<h2>🧰 Maintenance</h2>
<ul>
<li>Update <code>search_json_examples.ipynb</code>: Fix the old import
<code>indexDefinition</code> -> <code>index_definition</code> (<a
href="https://redirect.github.com/redis/redis-py/issues/3652">#3652</a>)</li>
<li>Remove mandatory update of the CHANGES file for new PRs. Changes
file will be kept for history for versions < 4.0.0 (<a
href="https://redirect.github.com/redis/redis-py/issues/3645">#3645</a>)</li>
<li>Dropping <code>Python 3.8</code> support as it has reached end of
life (<a
href="https://redirect.github.com/redis/redis-py/issues/3657">#3657</a>)</li>
<li>fix(doc): update Python print output in json doctests (<a
href="https://redirect.github.com/redis/redis-py/issues/3658">#3658</a>)</li>
<li>Update redis-entraid dependency (<a
href="https://redirect.github.com/redis/redis-py/issues/3661">#3661</a>)</li>
</ul>
<h2></h2>
<p>We'd like to thank all the contributors who worked on this release!
<a href="https://github.com/JCornat"><code>@JCornat</code></a> <a
href="https://github.com/ShubhamKaudewar"><code>@ShubhamKaudewar</code></a>
<a href="https://github.com/uglide"><code>@uglide</code></a> <a
href="https://github.com/petyaslavova"><code>@petyaslavova</code></a>
<a
href="https://github.com/vladvildanov"><code>@vladvildanov</code></a></p>
<h2>v6.1.1</h2>
<h1>Changes</h1>
<h2>🐛 Bug Fixes</h2>
<ul>
<li>Revert wrongly changed default value for <code>check_hostname</code>
when instantiating <code>RedisSSLContext</code> (<a
href="https://redirect.github.com/redis/redis-py/issues/3655">#3655</a>)</li>
<li>Fixed potential deadlock from unexpected <code>__del__</code> call
(<a
href="https://redirect.github.com/redis/redis-py/issues/3654">#3654</a>)</li>
</ul>
<h2></h2>
<p>We'd like to thank all the contributors who worked on this release!
<a
href="https://github.com/vladvildanov"><code>@vladvildanov</code></a>
<a
href="https://github.com/petyaslavova"><code>@petyaslavova</code></a></p>
<h2>6.1.0</h2>
<h1>Changes</h1>
<h2>🚀 New Features</h2>
<ul>
<li>Support for transactions in <code>RedisCluster</code> client (<a
href="https://redirect.github.com/redis/redis-py/issues/3611">#3611</a>)</li>
<li>Add equality and hashability to <code>Retry</code> and backoff
classes (<a
href="https://redirect.github.com/redis/redis-py/issues/3628">#3628</a>)</li>
</ul>
<h2>🐛 Bug Fixes</h2>
<ul>
<li>Fix RedisCluster <code>ssl_check_hostname</code> not set to
connections. For SSL verification with
<code>ssl_cert_reqs="none"</code>, check_hostname is set to
<code>False</code> (<a
href="https://redirect.github.com/redis/redis-py/issues/3637">#3637</a>)
<strong>Important</strong>: The default value for the
<code>check_hostname</code> field of <code>RedisSSLContext</code> has
been changed as part of this PR - this is a breaking change and should
not be introduced in minor versions - unfortunately, it is part of the
current release.
The breaking change is reverted in the next release to fix the behavior
--> 6.2.0</li>
<li>Prevent RuntimeError while reinitializing clusters - sync and async
(<a
href="https://redirect.github.com/redis/redis-py/issues/3633">#3633</a>)</li>
<li>Add equality and hashability to <code>Retry</code> and backoff
classes (<a
href="https://redirect.github.com/redis/redis-py/issues/3628">#3628</a>)
- fixes integration with Django RQ</li>
<li>Fix <code>AttributeError</code> on <code>ClusterPipeline</code> (<a
href="https://redirect.github.com/redis/redis-py/issues/3634">#3634</a>)</li>
</ul>
<h2>🧰 Maintenance</h2>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="1a59471870"><code>1a59471</code></a>
Adding small change in code to trigger pipeline for the branch.</li>
<li><a
href="83cf781be6"><code>83cf781</code></a>
Adding small change in README to trigger pipeline for the branch.</li>
<li><a
href="f5cd264c40"><code>f5cd264</code></a>
maintenance: Preparation for release 6.2.0 - updating lib version. (<a
href="https://redirect.github.com/redis/redis-py/issues/3662">#3662</a>)</li>
<li><a
href="793cdc63ac"><code>793cdc6</code></a>
maintenance: Update redis-entraid dependency (<a
href="https://redirect.github.com/redis/redis-py/issues/3661">#3661</a>)</li>
<li><a
href="34c40ff82d"><code>34c40ff</code></a>
fix(doc) : update Python print output in json doctests (<a
href="https://redirect.github.com/redis/redis-py/issues/3658">#3658</a>)</li>
<li><a
href="e5756daafa"><code>e5756da</code></a>
Dropping Python 3.8 support as it has reached end of life (<a
href="https://redirect.github.com/redis/redis-py/issues/3657">#3657</a>)</li>
<li><a
href="bc7de608b8"><code>bc7de60</code></a>
[Async] Support for transactions in async RedisCluster client (<a
href="https://redirect.github.com/redis/redis-py/issues/3649">#3649</a>)</li>
<li><a
href="e226ad2b4c"><code>e226ad2</code></a>
Removing connection_pool field from the CommandProtocol definition (<a
href="https://redirect.github.com/redis/redis-py/issues/3656">#3656</a>)</li>
<li><a
href="14a6fc39bd"><code>14a6fc3</code></a>
fix: Fixed potential deadlock from unexpected <strong>del</strong> call
(<a
href="https://redirect.github.com/redis/redis-py/issues/3654">#3654</a>)</li>
<li><a
href="3ebfd5b569"><code>3ebfd5b</code></a>
fix: Revert wrongly changed default value for check_hostname when
instantiati...</li>
<li>Additional commits viewable in <a
href="https://github.com/redis/redis-py/compare/v5.2.1...v6.2.0">compare
view</a></li>
</ul>
</details>
---------
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
## Changes 🏗️
- Close websocket connections gracefully during logout ( _whether from
another tab or not_ )
- Also fixed an error on `HeroSection` that shows when the onboarding is
disabled locally ( `null` )
- Uncomment legit tests about connecting/saving agents
- Centralise local storage usage through a single service with typed
keys
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Login in 3 tabs ( 1 builder, 1 marketplace, 1 agent run view )
- [x] Logout from the marketplace tab
- [x] The other tabs show logout state gracefully without toasts or
errors
- [x] Websocket connections are closed ( _devtools console shows that_ )
### For configuration changes:
None
This PR implements a comprehensive Ayrshare social media integration for
AutoGPT Platform, enabling users to post content across multiple social
media platforms through a unified interface. Ayrshare provides a single
API to manage posts across Facebook, Twitter/X, LinkedIn, Instagram,
YouTube, TikTok, Pinterest, Reddit, Telegram, Google My Business,
Bluesky, Snapchat, and Threads.
The integration addresses the need for social media automation and
content distribution workflows within AutoGPT agents, allowing users to:
- Connect their social media accounts via SSO
- Post content with platform-specific options and constraints
- Schedule posts across multiple platforms simultaneously
- Handle platform-specific media requirements and validation
⚠️ To simplify the review process, all blocks except the Twitter post block have been commented out; future PRs will uncomment the other platforms so we can test them in isolation.
### Changes 🏗️
#### Backend Integration (`backend/integrations/ayrshare.py`)
- **AyrshareClient**: Complete API client implementation with post creation, profile management, and JWT generation (post creation sketched below)
- **SocialPlatform enum**: Comprehensive platform definitions for all
supported social networks
- **Response models**: PostResponse, ProfileResponse, JWTResponse for
type-safe API interactions
- **Error handling**: Custom AyrshareAPIException with proper HTTP
status code handling
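A hedged sketch of the client's post call, following Ayrshare's public HTTP API (the `Profile-Key` header targets a linked user profile; the exact request shape in the real client may differ):

```python
import requests

AYRSHARE_API = "https://app.ayrshare.com/api"

def create_post(api_key: str, text: str, platforms: list[str],
                profile_key: str | None = None) -> dict:
    """Publish a post to one or more linked social platforms."""
    headers = {"Authorization": f"Bearer {api_key}"}
    if profile_key:
        headers["Profile-Key"] = profile_key  # per-user profile targeting
    response = requests.post(
        f"{AYRSHARE_API}/post",
        headers=headers,
        json={"post": text, "platforms": platforms},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

# create_post(API_KEY, "Hello from AutoGPT!", ["twitter"], profile_key=PROFILE_KEY)
```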
#### Social Media Posting Blocks (`backend/blocks/ayrshare/post.py`)
- **BaseAyrshareInput**: Shared input schema with common fields (post
text, media URLs, scheduling, etc.)
- **Platform-specific blocks**: 13 dedicated posting blocks, each with
platform-specific validation and options:
- PostToFacebookBlock: Carousel, Reels, Stories, targeting, alt text
- PostToXBlock: Threads, polls, long posts, premium features, subtitles
- PostToLinkedInBlock: Document support, visibility controls, audience
targeting
- PostToInstagramBlock: Stories, Reels, user tags, collaborators
- PostToYouTubeBlock: Video uploads, playlists, visibility, country
targeting
- PostToPinterestBlock: Pins, carousels, board management
- PostToTikTokBlock: Video/image posts, AI labeling, brand content
- PostToRedditBlock: Basic posting functionality
- PostToTelegramBlock: GIF handling, mentions
- PostToGMBBlock: Event/offer posts, call-to-action buttons
- PostToBlueskyBlock: Character limit validation, alt text
- PostToSnapchatBlock: Story types, video thumbnails
- PostToThreadsBlock: Hashtag restrictions, carousel support
#### Helper Models
- **CarouselItem**: Facebook carousel configuration
- **CallToAction, EventDetails, OfferDetails**: Google My Business post
types
- **InstagramUserTag**: Instagram user tagging with coordinates
- **LinkedInTargeting**: LinkedIn audience targeting options
- **PinterestCarouselOption**: Pinterest carousel image options
- **YouTubeTargeting**: YouTube country blocking/allowing
#### Authentication & SSO (`backend/server/integrations/router.py`)
- **SSO endpoint**: `/integrations/ayrshare/sso_url` for account linking
- **Profile management**: Automatic profile creation and key management
- **JWT generation**: Secure token generation for social media account
linking
- **Platform allowlist**: Configured access to all supported social
platforms
#### Frontend Integration (`frontend/src/components/CustomNode.tsx`)
- **AYRSHARE block type**: New BlockUIType.AYRSHARE for
Ayrshare-specific nodes
- **SSO button**: "Connect Social Media Accounts" with loading states
- **Handle generation**: Special handling for Ayrshare blocks with SSO
integration
#### Configuration
- **Environment variables**: Added AYRSHARE_API_KEY and AYRSHARE_JWT_KEY
to .env.example
- **Block registration**: All Ayrshare blocks registered in
AYRSHARE_NODE_IDS array
#### Type Safety & Error Handling
- **Modern typing**: Updated to use `list`, `dict`, `Any` instead of
legacy typing
- **Comprehensive validation**: Platform-specific constraints (character
limits, media counts, file types)
- **User-friendly errors**: Clear error messages for validation failures
and API errors
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
**Test Plan:**
**Backend API Testing:**
- [x] Verify AyrshareClient initializes correctly with API key
- [x] Test JWT generation for SSO authentication
- [x] Test profile creation and management
- [x] Verify all 13 posting blocks are properly registered
- [x] Test platform-specific validation rules for each block
- [x] Verify error handling for missing credentials and API failures
**Frontend Integration Testing:**
- [x] Verify AYRSHARE block type renders correctly in flow editor
- [x] Test SSO button functionality and popup window behavior
- [x] Confirm loading states work properly during authentication
- [x] Verify input handles generate correctly for Ayrshare blocks
- [x] Test platform-specific input fields and validation
**End-to-End Workflow Testing:**
- [x] Create agent with Ayrshare posting blocks
- [x] Test SSO flow: click "Connect Social Media Accounts" button
- [x] Verify popup opens with Ayrshare authentication page
- [x] Test social media account linking process
- [x] Create posts with various platform-specific options:
  - [x] X (Twitter): tested basic posting with an image
- [ ] Test scheduling functionality across platforms
- [x] Verify media upload constraints and validation
- [ ] Test error handling for invalid inputs and failed posts
**Error Case Testing:**
- [ ] Test behavior with missing AYRSHARE_API_KEY configuration
- [ ] Test invalid social media credentials handling
- [ ] Test network failure scenarios
- [ ] Verify platform-specific validation error messages
- [ ] Test character limit enforcement per platform
- [ ] Test media file type and size restrictions
**Security Testing:**
- [x] Verify JWT tokens are properly generated and validated
- [x] Test profile key isolation between users
- [x] Confirm sensitive credentials are not logged
- [x] Verify SSO popup prevents XSS attacks
#### For configuration changes:
- [x] `.env.example` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
**Configuration Changes:**
- Added `AYRSHARE_API_KEY` environment variable for Ayrshare API
authentication
- Added `AYRSHARE_JWT_KEY` environment variable for SSO token generation
- No docker-compose.yml changes required (uses existing backend
services)
---------
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
## Overview
This PR adds comprehensive Airtable integration to the AutoGPT platform,
enabling users to seamlessly connect their Airtable bases with AutoGPT
workflows for powerful no-code automation capabilities.
## Why Airtable Integration?
Airtable is one of the most popular no-code databases used by teams for
project management, CRMs, inventory tracking, and countless other use
cases. This integration brings significant value:
- **Data Automation**: Automate data entry, updates, and synchronization
between Airtable and other services
- **Workflow Triggers**: React to changes in Airtable bases with
webhook-based triggers
- **Schema Management**: Programmatically create and manage Airtable
table structures
- **Bulk Operations**: Efficiently process large amounts of data with
batch create/update/delete operations
## Key Features
### 🔌 Webhook Trigger
- **AirtableWebhookTriggerBlock**: Listens for changes in Airtable bases
and triggers workflows
- Supports filtering by table, view, and specific fields
- Includes webhook signature validation for security
### 📊 Record Operations
- **AirtableCreateRecordsBlock**: Create single or multiple records (up
to 10 at once)
- **AirtableUpdateRecordsBlock**: Update existing records with upsert
support
- **AirtableDeleteRecordsBlock**: Delete single or multiple records
- **AirtableGetRecordBlock**: Retrieve specific record details
- **AirtableListRecordsBlock**: Query records with filtering, sorting,
and pagination
### 🏗️ Schema Management
- **AirtableCreateTableBlock**: Create new tables with custom field
definitions
- **AirtableUpdateTableBlock**: Modify table properties
- **AirtableAddFieldBlock**: Add new fields to existing tables
- **AirtableUpdateFieldBlock**: Update field properties
## Technical Implementation Details
### Authentication
- Supports both API Key and OAuth authentication methods
- OAuth implementation includes proper token refresh handling
- Credentials are securely managed through the platform's credential
system
### Webhook Security
- Added `credentials` parameter to WebhooksManager interface for proper signature validation
- HMAC-SHA256 signature verification ensures webhook authenticity (see the sketch below)
- Webhook cursor tracking prevents duplicate event processing
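A sketch of the signature check (Airtable sends the MAC in a request header; the header name and encoding are assumptions here):

```python
import hashlib
import hmac

def verify_webhook_signature(secret: bytes, body: bytes, received_mac: str) -> bool:
    """Compare our HMAC-SHA256 of the raw request body against the MAC
    Airtable sent, using a constant-time comparison."""
    expected = hmac.new(secret, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, received_mac)
```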
### API Integration
- Comprehensive API client (`_api.py`) with full type safety
- Proper error handling and response validation
- Support for all Airtable field types and operations
## Changes 🏗️
### Added Blocks:
- AirtableWebhookTriggerBlock
- AirtableCreateRecordsBlock
- AirtableDeleteRecordsBlock
- AirtableGetRecordBlock
- AirtableListRecordsBlock
- AirtableUpdateRecordsBlock
- AirtableAddFieldBlock
- AirtableCreateTableBlock
- AirtableUpdateFieldBlock
- AirtableUpdateTableBlock
### Modified Files:
- Updated WebhooksManager interface to support credential-based
validation
- Modified all webhook handlers to support the new interface
## Test Plan 📋
### Manual Testing Performed:
1. **Authentication Testing**
- ✅ Verified API key authentication works correctly
- ✅ Tested OAuth flow including token refresh
- ✅ Confirmed credentials are properly encrypted and stored
2. **Webhook Testing**
- ✅ Created webhook subscriptions for different table events
- ✅ Verified signature validation prevents unauthorized requests
- ✅ Tested cursor tracking to ensure no duplicate events
- ✅ Confirmed webhook cleanup on block deletion
3. **Record Operations Testing**
- ✅ Created single and batch records with various field types
- ✅ Updated records with and without upsert functionality
- ✅ Listed records with filtering, sorting, and pagination
- ✅ Deleted single and multiple records
- ✅ Retrieved individual record details
4. **Schema Management Testing**
- ✅ Created tables with multiple field types
- ✅ Added fields to existing tables
- ✅ Updated table and field properties
- ✅ Verified proper error handling for invalid field types
5. **Error Handling Testing**
- ✅ Tested with invalid credentials
- ✅ Verified proper error messages for API limits
- ✅ Confirmed graceful handling of network errors
### Security Considerations 🔒
1. **API Key Management**
- API keys are stored encrypted in the credential system
- Keys are never logged or exposed in error messages
- Credentials are passed securely through the execution context
2. **Webhook Security**
- HMAC-SHA256 signature validation on all incoming webhooks
- Webhook URLs use secure ingress endpoints
- Proper cleanup of webhooks when blocks are deleted
3. **OAuth Security**
- OAuth tokens are securely stored and refreshed
- Scopes are limited to necessary permissions
- Token refresh happens automatically before expiration
## Configuration Requirements
No additional environment variables or configuration changes are
required. The integration uses the existing credential management
system.
## Checklist 📋
#### For code changes:
- [x] I have read the [contributing
instructions](https://github.com/Significant-Gravitas/AutoGPT/blob/master/.github/CONTRIBUTING.md)
- [x] Confirmed that `make lint` passes
- [x] Confirmed that `make test` passes
- [x] Updated documentation where needed
- [x] Added/updated tests for new functionality
- [x] Manually tested all blocks with real Airtable bases
- [x] Verified backwards compatibility of webhook interface changes
#### Security:
- [x] No hard-coded secrets or sensitive information
- [x] Proper input validation on all user inputs
- [x] Secure credential handling throughout
## Changes 🏗️
- Make docker + deps cache actually work on the FE CI
- Run the E2E test data script before Playwright
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] CI is faster in repeated runs ( _uses cache_ )
- [x] Test data script runs successfully
### For configuration changes:
None
<!-- Clearly explain the need for these changes: -->
### Changes 🏗️
<!-- Concisely describe all of the changes made in this pull request:
-->
This PR contains code changes that resolve the cursor jump issue in the **Note** block (#9252). The changes affect the NodeTextBoxInput component so that it keeps the text in local state instead of mutating the `value` prop directly.
Also includes a change to use the store admin, which came from a rebase issue but was approved by maintainer @ntindle
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Tested that the Note block allows editing text normally
https://github.com/user-attachments/assets/f2800bf1-9867-4627-ac9d-44718627b263
---------
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Co-authored-by: Abhimanyu Yadav <122007096+Abhi1992002@users.noreply.github.com>
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
Co-authored-by: Ubbe <hi@ubbe.dev>
https://github.com/Significant-Gravitas/AutoGPT/pull/10437 migrated
DatabaseManager away from RestApi, but this change is not yet reflected
in Docker Compose, which causes the self-hosted Docker mode to break.
### Changes 🏗️
Add the DatabaseManager process in Docker Compose.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] `docker compose up`
## Changes 🏗️
Fixed WebSocket connection errors during multi-tab logout 💆🏽
<img width="1193" height="273" alt="Screenshot 2025-07-23 at 22 23 35"
src="https://github.com/user-attachments/assets/bf6f964d-bcb0-4a2a-adff-1194defe1e61"
/>
Previously, when users logged out in one browser tab, WebSocket
connections in other open tabs would continue trying to reconnect with
invalid authentication tokens, causing console errors and potential
runtime errors.
### What was fixed
- WebSocket connections are now properly disconnected when logout occurs in any tab
- Cross-tab logout mechanism now includes WebSocket cleanup to prevent reconnection errors
- Added logic to prevent automatic reconnection after intentional disconnection
- Added E2E tests to cover this case
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Manual testing: Login in multiple tabs, navigate to builder
(establishes WebSocket), logout from one tab, verify no console errors
- [x] E2E test: Added automated test covering multi-tab logout with
WebSocket cleanup verification
- [x] Verified cross-tab logout still works correctly
- [x] Confirmed WebSocket connections reconnect properly after fresh
login
### For configuration changes:
None
We've been reporting agents that are stuck in the `QUEUED` status. This change also covers agents in `RUNNING` status that have been stuck for more than 24 hours.
### Changes 🏗️
Report agents stuck in `RUNNING` status for more than 24 hours.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Manual test
It's hard to debug two processes running in the same pod. We need to
decouple the two processes into two services.
### Changes 🏗️
Move DatabaseManager away from RestAPI as a standalone service
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Manual test
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Ubbe <hi@ubbe.dev>
## Changes 🏗️
https://github.com/user-attachments/assets/42e1c896-5f3b-447c-aee9-4f5963c217d9
There is now a 🔔 icon on the Navigation bar that shows previous agent
runs and displays real-time agent running status.
If you run an agent, the bell will show a badge with how many agents are running. If you hover over it, a hint appears. If you click on it, it opens a dropdown and displays the executions with their status ( _which should match what we have in the library functionality, not design-wise_ ).
I leveraged the existing APIs for this purpose. Most of the run logic is
[encapsulated on this
hook](https://github.com/Significant-Gravitas/AutoGPT/compare/dev...feat/agent-notifications?expand=1#diff-a9e7f2904d6283b094aca19b64c7168e8c66be1d5e0bb454be8978cb98526617)
and is also an independent `<AgentActivityDropdown />` component.
Clicking on an agent run opens that run in the library page.
This new functionality is covered by E2E tests 💆🏽✔️
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] The navigation bar layout looks good when logged out
- [x] The navigation bar layout looks good when logged in
- [x] Open an agent in the library and click `Run`
- [x] See the real-time activity of the agent running on the navigation
bar bell icon
### For configuration changes:
_No configuration changes needed._
Some AgentInput blocks can store an empty string as the default value, which causes errors for non-string inputs.
Error:
```
2025-07-22 23:14:10,424 WARNING Invalid <class 'backend.blocks.io.AgentToggleInputBlock.Input'>: {'name': 'Enriched info (including email), will double the search cost', 'value': '', 'advanced': False, 'placeholder_values': []}, 1 validation error for Input
value
Input should be a valid boolean, unable to interpret input [type=bool_parsing, input_value='', input_type=str]
For further information visit https://errors.pydantic.dev/2.11/v/bool_parsing
2025-07-22 23:14:10,424 WARNING Invalid <class 'backend.blocks.io.AgentNumberInputBlock.Input'>: {'name': 'Expected New Leads Count', 'value': '', 'advanced': False, 'placeholder_values': []}, 1 validation error for Input
value
Input should be a valid integer, unable to parse string as an integer [type=int_parsing, input_value='', input_type=str]
For further information visit https://errors.pydantic.dev/2.11/v/int_parsing
```
### Changes 🏗️
Ignore invalid fields when constructing the agent input schema.
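A sketch of the fix (names are illustrative; the real code builds the agent input schema from stored node inputs):

```python
from pydantic import BaseModel, ValidationError

def build_input_fields(model_cls: type[BaseModel], stored: list[dict]) -> list[BaseModel]:
    """Validate each stored input; skip (and log) the ones that no longer
    validate, e.g. an empty-string default on a boolean or integer input."""
    fields = []
    for raw in stored:
        try:
            fields.append(model_cls(**raw))
        except ValidationError as exc:
            # Mirrors the warnings in the log above: ignore, don't fail.
            print(f"Ignoring invalid {model_cls}: {raw}, {exc}")
    return fields
```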
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Use AgentNumberInput and AgentToggleInput with an empty string value.
<!-- Clearly explain the need for these changes: -->
We set up LaunchDarkly, so now we can dynamically control which blocks are available on the UI.
### Changes 🏗️
Enables the Google blocks and fixes the LaunchDarkly `.env.example` for the local env
<!-- Concisely describe all of the changes made in this pull request:
-->
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Deploy to test environment and verify the blocks show correctly vs. are hidden when toggling LaunchDarkly rules
### Changes 🏗️
The previous implementation of the `LaunchDarklyProvider` had a race
condition where it would only initialize after the user's authentication
state was fully resolved. This caused two primary issues:
1. A delay in evaluating any feature flags, leading to a "flash of
un-styled/un-flagged content" until the user session was loaded.
2. An unreliable transition from an un-flagged state to a flagged state,
which could cause UI flicker or incorrect flag evaluations upon login.
This pull request refactors the provider to follow a more robust,
industry-standard pattern. It now initializes immediately with an
`anonymous` context, ensuring flags are available from the very start of
the application lifecycle. When the user logs in and their session
becomes available, the provider seamlessly transitions to an
authenticated context, guaranteeing that the correct flags are evaluated
consistently.
### Checklist
#### For code changes:
- I have clearly listed my changes in the PR description
- I have made a test plan
- I have tested my changes according to the test plan:
**Test Plan:**
- [x] **Anonymous User:** Load the application in an incognito window
without logging in. Verify that feature flags are evaluated correctly
for an anonymous user. Check the browser console for the
`[LaunchDarklyProvider] Using anonymous context` message.
- [x] **Login Flow:** While on the site, log in. Verify that the UI
updates with the correct feature flags for the authenticated user. Check
the console for the `[LaunchDarklyProvider] Using authenticated context`
message and confirm the LaunchDarkly client re-initializes.
- [x] **Authenticated User (Page Refresh):** As a logged-in user,
refresh the page. Verify that the application loads directly with the
authenticated user's flags, leveraging the cached session and
bootstrapped flags from `localStorage`.
- [x] **Logout Flow:** While logged in, log out. Verify that the UI
reverts to the anonymous user's state and flags. The provider `key`
should change back to "anonymous", triggering another re-mount.
<details><summary>Summary of Code Changes</summary>
- Refactored `LaunchDarklyProvider` to handle user authentication state
changes gracefully.
- The provider now initializes immediately with an `anonymous` user
context while the Supabase user session is loading.
- Once the user is authenticated, the provider's context is updated to
reflect the logged-in user's details.
- Added a `key` prop to the `<LDProvider>` component, using the user's
ID (or "anonymous"). This forces React to re-mount the provider when the
user's identity changes, ensuring a clean re-initialization of the
LaunchDarkly SDK.
- Enabled `localStorage` bootstrapping (`options={{ bootstrap:
"localStorage" }}`) to cache flags and improve performance on subsequent
page loads.
- Added `console.debug` statements for improved observability into the
provider's state (anonymous vs. authenticated).
</details>
#### For configuration changes:
- `.env.example` is updated or already compatible with my changes
- `docker-compose.yml` is updated or already compatible with my changes
- I have included a list of my configuration changes in the PR
description (under **Changes**)
<details>
<summary>Configuration Changes</summary>
- No configuration changes were made. This PR relies on existing
environment variables (`NEXT_PUBLIC_LAUNCHDARKLY_CLIENT_ID` and
`NEXT_PUBLIC_LAUNCHDARKLY_ENABLED`).
</details>
---------
Co-authored-by: Lluis Agusti <hi@llu.lu>
## Summary
- Add thread safety to NodeExecutionProgress class to prevent race
conditions between graph executor and node executor threads
- Fixes potential data corruption and lost outputs during concurrent
access to shared output lists
- Uses a single lock per node for minimal performance impact
- Instead of blocking on one node's evaluation before adding another, we move on to the next node, in case another node completes it
## Changes
- Added `threading.Lock` to NodeExecutionProgress class
- Protected `add_output()` calls from node executor thread with lock
- Protected `pop_output()` calls from graph executor thread with lock
- Protected `_pop_done_task()` output checks with lock
## Problem Solved
The `NodeExecutionProgress.output` dictionary was being accessed concurrently (see the sketch after this list):
- `add_output()` called from node executor thread (asyncio thread)
- `pop_output()` called from graph executor thread (main thread)
- Python lists are not thread-safe for concurrent append/pop operations
- This could cause data corruption, index errors, and lost outputs
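A simplified sketch of the locking (the real class carries more state):

```python
import threading
from collections import defaultdict
from typing import Any

class NodeExecutionProgress:
    def __init__(self) -> None:
        self._lock = threading.Lock()
        self.output: dict[str, list[Any]] = defaultdict(list)

    def add_output(self, node_exec_id: str, value: Any) -> None:
        # Called from the node executor (asyncio) thread.
        with self._lock:
            self.output[node_exec_id].append(value)

    def pop_output(self, node_exec_id: str) -> Any | None:
        # Called from the graph executor (main) thread.
        with self._lock:
            outputs = self.output[node_exec_id]
            return outputs.pop(0) if outputs else None
```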
## Test Plan
- [x] Existing executor tests pass
- [x] No performance regression (operations are microsecond-level)
- [x] Thread safety verified through code analysis
## Technical Details
- Single `threading.Lock()` per NodeExecutionProgress instance (~64
bytes)
- Lock acquisition time (~100-200ns) is minimal compared to list
operations
- Maintains order guarantees for same node_execution_id processing
- No GIL contention issues as operations are very brief
🤖 Generated with [Claude Code](https://claude.ai/code)
---------
Co-authored-by: Claude <noreply@anthropic.com>
The Rest-Api service is vital, so we should minimize the amount of external resources impacting it.
The NotificationManager service is a singleton by nature (similar to the scheduler service), so it makes sense to run it in a single pod.
### Changes 🏗️
Move the NotificationManager service from rest-api pod to the scheduler
pod
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Manual test
## Changes 🏗️
LaunchDarkly is not being initialised in production, despite all env variables on paper being set correctly 🧐
I tried doing production builds locally, and I noticed this provider needs to be initialised on the client, because LaunchDarkly flags are designed to work on the client side, where they can access user context and the browser environment.
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Did production build locally and run it
- [x] See LD initialised on the console after the `use client` directive
was added
### For configuration changes:
None
Currently, we only create a library entry for the top-most graph when importing a graph from an exported file.
This can cause some complications, as there is no way to remove its library entry.
### Changes 🏗️
Create the library entry for all the subgraphs during the import
process.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Export an agent with subgraphs and import it back.
### Changes 🏗️
This PR adds a new utility block to the basic blocks collection. This
block provides a simple way to reverse the order of elements in any
list.
**New Features:**
- Added `ReverseListOrderBlock` class (sketched below)
- Block accepts any list as input and returns the same list with elements in reversed order
- Preserves the original list (creates a copy before reversing)
- Works with lists containing any type of elements
**Technical Details:**
- Block ID:
- Category:
- Input: the list to reverse
- Output: the list with elements in reversed order
- Includes test input/output for validation
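A minimal sketch of the block's logic (the real class subclasses the platform's Block base and declares Input/Output schemas):

```python
class ReverseListOrderBlock:
    def run(self, input_list: list):
        # Copy before reversing so the caller's list is left untouched.
        yield "reversed_list", list(reversed(input_list))

# list(ReverseListOrderBlock().run([1, "a", 3.0]))
# -> [("reversed_list", [3.0, "a", 1])]
```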
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Created an agent with the ReverseListOrderBlock
- [x] Tested with various list types (numbers, strings, mixed types)
- [x] Verified the block preserves the original list
- [x] Confirmed the block correctly reverses the order of elements
- [x] Tested with empty lists and single-element lists
- [x] Verified the block integrates properly with other blocks in a
workflow
#### For configuration changes:
- [x] `.env.example` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
**Note:** No configuration changes required - this is a pure code
addition that uses the existing block framework.
Co-authored-by: Abhimanyu Yadav <122007096+Abhi1992002@users.noreply.github.com>
## Summary
- Introduced correct health check API for AppService
- Fixed service health check mechanism to properly handle health status
monitoring
## Changes Made
- Updated health check implementation in AppService.
- Made the REST service health check also check the health of the DatabaseManager (see the sketch below).
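A sketch of the aggregated check, assuming a FastAPI-style endpoint (the probe itself is hypothetical):

```python
from fastapi import FastAPI, Response

app = FastAPI()

async def database_manager_healthy() -> bool:
    # Hypothetical probe; the real check pings the DatabaseManager service.
    return True

@app.get("/health")
async def health(response: Response) -> dict:
    """REST health now also reflects DatabaseManager health."""
    ok = await database_manager_healthy()
    response.status_code = 200 if ok else 503
    return {"rest": "ok", "database_manager": "ok" if ok else "unhealthy"}
```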
## Test plan
- [x] Verify health check endpoint responds correctly
- [x] Test health check mechanism under various service states
- [x] Validate monitoring and alerting integration
🤖 Generated with [Claude Code](https://claude.ai/code)
### Changes 🏗️
This PR optimizes database performance by adding missing foreign key
indexes, removing unused/redundant indexes, cleaning up all legacy
untracked indexes, and adding performance indexes for materialized views
to achieve 100% optimized database indexing.
**Foreign Key Indexes Added:**
- `AgentGraph`: `[forkedFromId, forkedFromVersion]` - For fork
relationship queries
- `AgentGraphExecution`: `[agentPresetId]` - For preset-based execution
filtering
- `AgentNodeExecution`: `[agentNodeId]` - For node execution lookups
- `AgentNodeExecutionInputOutput`: `[agentPresetId]` - For preset
input/output queries
- `AgentPreset`: `[agentGraphId, agentGraphVersion]` & `[webhookId]` -
For graph and webhook lookups
- `LibraryAgent`: `[agentGraphId, agentGraphVersion]` & `[creatorId]` -
For agent and creator queries
- `StoreListing`: `[agentGraphId, agentGraphVersion]` - For marketplace
agent lookups
- `StoreListingReview`: `[reviewByUserId]` - For user review queries
**Unused/Redundant Indexes Removed:**
- `User.email` - Unused index identified by linter
- `AnalyticsMetrics.userId` - Unused index causing write overhead
- `AnalyticsDetails.type` - Redundant (covered by composite `[userId,
type]`)
- `APIKey.key`, `APIKey.status` - Unused indexes
- Named index `"analyticsDetails"` - Converted to standard composite
index
**All Legacy Untracked Indexes Removed:**
- `idx_store_listing_version_status` - Redundant with Prisma composite
index
- `idx_slv_agent` - Redundant with Prisma-managed `[agentGraphId,
agentGraphVersion]`
- `idx_store_listing_version_approved_listing` - Redundant with unique
constraint
- `StoreListing_agentId_owningUserId_idx` - Legacy index superseded by
current strategy
- `StoreListing_isDeleted_isApproved_idx` - Replaced by optimized
composite index
- `StoreListing_isDeleted_idx` - Redundant with composite index
- `StoreListingVersion_agentId_agentVersion_isDeleted_idx` - Legacy
index replaced
- `idx_store_listing_approved` - Redundant with existing `[owningUserId,
slug]` unique constraint and `[isDeleted, hasApprovedVersion]` index
- `idx_slv_categories_gin` - Specialized array search index removed (can
be re-added if category filtering is implemented)
- `idx_profile_user` - Duplicate of Prisma-managed `Profile_userId_idx`
**Materialized View Performance Indexes Added:**
- `idx_mv_review_stats_rating` on `mv_review_stats(avg_rating DESC)` -
Optimizes sorting agents by rating
- `idx_mv_review_stats_count` on `mv_review_stats(review_count DESC)` -
Optimizes sorting agents by review count
**Result: 100% Optimized Database Indexing**
- All database indexes are now defined and managed through Prisma schema
- No more untracked indexes requiring manual SQL maintenance
- Added performance indexes for materialized views used by marketplace
views
- Improved query performance for agent sorting and filtering
- Enhanced maintainability and consistency across environments
**Schema Comments Updated:**
- Removed all references to dropped untracked indexes
- Simplified documentation to reflect Prisma-only approach for regular
tables
- Added comprehensive documentation for materialized view indexes and
their purposes
- Maintained documentation for materialized view refresh strategy
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Schema changes compile successfully with Prisma
- [x] Migration adds required FK indexes and materialized view
performance indexes
- [x] Migration drops all legacy indexes and redundant untracked indexes
- [x] All pre-commit hooks pass (linting, formatting, type checking)
- [x] No breaking changes to existing foreign key relationships
- [x] Verified existing Prisma indexes cover all query patterns
- [x] Schema comments comprehensively document all indexing strategy
- [x] Materialized view performance indexes optimize marketplace sorting
🤖 Generated with [Claude Code](https://claude.ai/code)
---------
Co-authored-by: Swifty <craigswift13@gmail.com>
Co-authored-by: Claude <noreply@anthropic.com>
This PR adds a database index to improve query performance based on
Supabase query performance insights and index recommendations. These
indexes target frequently queried columns and relationships to reduce
query execution time.
### Changes 🏗️
- Added index on `AgentNodeExecutionInputOutput.agentPresetId`
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Verified schema changes are valid Prisma syntax
- [x] Confirmed indexes target frequently queried columns based on
Supabase recommendations
- [x] Ensured no duplicate indexes are created
#### For configuration changes:
- [x] `.env.example` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
## Changes 🏗️
Only initialise LaunchDarkly if we have a user and the config is set.
### Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Once merged, log into the app and see if LaunchDarkly initialises
### For configuration changes:
No