The block library was missing two key capabilities that keep coming up
in real-world agent flows:
1. **Granular Mem0 filtering.** Agents often work side-by-side for the
same user, so memories must be scoped to a specific run or agent to
avoid crosstalk.
2. **First-class Google Sheets support.** Many community projects (e.g.,
data-collection, lightweight dashboards, no-code workflows) rely on
Sheets, but we only had a brittle REST call block.
This PR adds fine-grained filters to every Mem0 retrieval block and
introduces a complete, OAuth-ready Google Sheets suite so agents can
create, read, write, format, and manage spreadsheets safely.
---
### Changes 🏗️
#### 📚 Mem0 block enhancements
* Added `categories_filter`, `metadata_filter`, `limit_memory_to_run`,
and `limit_memory_to_agent` inputs to **SearchMemoryBlock**,
**GetAllMemoriesBlock**, and **GetLatestMemoryBlock**.
* Added identical scoping logic to **AddMemoryBlock** so newly created
memories can be tied to run/agent IDs (see the sketch below).
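
For illustration, here is a minimal sketch of how these inputs could be
composed into one scoped filter; the helper name and the exact Mem0 v2
filter grammar are assumptions, not the block's actual code:

```python
from typing import Any, Optional

# Hypothetical helper: folds the new block inputs into a single AND-filter
# so a retrieval only sees memories within its scope. Field names mirror
# the PR's input names; Mem0's real filter syntax may differ.
def build_filters(
    user_id: str,
    categories_filter: Optional[list[str]] = None,
    metadata_filter: Optional[dict[str, Any]] = None,
    run_id: Optional[str] = None,    # set when limit_memory_to_run is enabled
    agent_id: Optional[str] = None,  # set when limit_memory_to_agent is enabled
) -> dict[str, Any]:
    conditions: list[dict[str, Any]] = [{"user_id": user_id}]
    if run_id:
        conditions.append({"run_id": run_id})
    if agent_id:
        conditions.append({"agent_id": agent_id})
    if categories_filter:
        conditions.append({"categories": {"in": categories_filter}})
    if metadata_filter:
        conditions.append({"metadata": metadata_filter})
    return {"AND": conditions}
```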
#### 📊 New Google Sheets blocks (`backend/blocks/google/sheets.py`)
| Block | Purpose |
|-------|---------|
| `GoogleSheetsReadBlock` | Read a range |
| `GoogleSheetsWriteBlock` | Overwrite a range |
| `GoogleSheetsAppendBlock` | Append rows |
| `GoogleSheetsClearBlock` | Clear a range |
| `GoogleSheetsMetadataBlock` | Fetch spreadsheet + sheet info |
| `GoogleSheetsManageSheetBlock` | Create / delete / copy a sheet |
| `GoogleSheetsBatchOperationsBlock` | Batch update / clear |
| `GoogleSheetsFindReplaceBlock` | Find & replace text |
| `GoogleSheetsFormatBlock` | Cell formatting (bg/text colour, bold,
italic, font size) |
| `GoogleSheetsCreateSpreadsheetBlock` | Spin up a new spreadsheet |
* Each block has typed input/output schemas (sketched below), built-in
test mocks, and is disabled in prod unless Google OAuth is configured.
* Added helper enums (`SheetOperation`, `BatchOperationType`) and
updated **CLAUDE.md** contributor guide with a UUID-generation step.
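
As a rough sketch of the shape of one of these blocks (base classes and
import paths follow the platform's block conventions, but the details
here are illustrative rather than the merged code):

```python
from backend.data.block import Block, BlockOutput, BlockSchema  # assumed paths
from backend.data.model import SchemaField

class GoogleSheetsReadBlock(Block):
    class Input(BlockSchema):
        spreadsheet_id: str = SchemaField(description="ID of the spreadsheet")
        range: str = SchemaField(
            description="A1-notation range, e.g. 'Sheet1!A1:C10'"
        )

    class Output(BlockSchema):
        values: list[list[str]] = SchemaField(description="Cell values in range")
        error: str = SchemaField(description="Error message if the read fails")

    def run(self, input_data: Input, **kwargs) -> BlockOutput:
        # The real block calls the Sheets API with the user's OAuth
        # credentials; in tests, a mock returns canned values instead.
        yield "values", [["example", "row"]]
```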
---
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Manual E2E run: agent writes chat summary to new spreadsheet,
reads it back, searches memory with run-scoped filter
- [x] Live Google API smoke-tests (read/write/append/clear/format) using
a disposable spreadsheet
## Changes 🏗️
We noticed that on some pages (mainly `/build`), where a lot of API
calls are fired in parallel using the old `BackendAPI` (e.g. when
running many agent executions), performance got worse. That is because
the `BackendAPI` was proxied via [server
actions](https://nextjs.org/docs/14/app/building-your-application/data-fetching/server-actions-and-mutations)
( _to make calls to our Backend on the Next.js server_ ).
It turns out server actions don't run in parallel, and their performance
is also subpar given that we are not hosted on Vercel (so we don't
benefit from the edge middleware).
These changes cause all `BackendAPI` calls to be proxied via the Next.js
`/api/` route when executed in the browser; when executed on the server,
they bypass the proxy and access the API directly. Hopefully we gain:
- 🚀 Better Performance - API routes are faster than server actions for
this use case
- 🔧 Less Magic - Direct fetch calls instead of hidden server action
complexity
- ♻️ Code Reuse - Leveraging the existing proxy infrastructure used by
react-query
- 🎯 Cleaner Architecture - Single proxy pattern for all API calls
- 🔒 Same Security - Still uses server-side authentication with httpOnly
cookies
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] E2E tests pass
- [x] Click through the app; there are no issues
- [x] Agent executions are fast again in the builder
- [x] Test file uploads
This PR introduces key-value storage blocks.
### Changes 🏗️
- **Database Schema**: Add AgentNodeExecutionKeyValueData table with
composite primary key (userId, key)
- **Persistence Blocks**: Create PersistInformationBlock and
RetrieveInformationBlock in persistence.py
- **Scope-based Storage**: Support for within_agent (per agent) vs
across_agents (global user) persistence
- **Key Structure**: Use a formal `#` delimiter for storage keys:
`agent#{graph_id}#{key}` and `global#{key}` (see the sketch below)
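
A tiny sketch of the key scheme described above (the function name is
illustrative; only the key format comes from this PR):

```python
def storage_key(scope: str, key: str, graph_id: str | None = None) -> str:
    # within_agent: scoped to one agent graph; across_agents: global per user.
    if scope == "within_agent":
        return f"agent#{graph_id}#{key}"
    return f"global#{key}"

assert storage_key("within_agent", "counter", graph_id="g1") == "agent#g1#counter"
assert storage_key("across_agents", "counter") == "global#counter"
```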
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Run all 244 block tests - all passing ✅
- [x] Test PersistInformationBlock with mock data storage
- [x] Test RetrieveInformationBlock with mock data retrieval
- [x] Verify scope-based key generation (within_agent vs across_agents)
- [x] Verify database function integration through all manager classes
- [x] Run lint and type checking - all passing ✅
- [x] Verify database migration is included and valid
#### For configuration changes:
- [x] `.env.example` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
Note: This change adds database schema and new blocks but doesn't
require environment or docker-compose changes as it uses existing
database infrastructure.
---------
Co-authored-by: Claude <noreply@anthropic.com>
- Resolves #10298
- Follow-up to #10270
### Changes 🏗️
Amend two changes from #10270:
- Add fallback for `NEXT_PUBLIC_FRONTEND_BASE_URL` in custom-mutator.ts
- Revert rename of `FRONTEND_BASE_URL` to
`NEXT_PUBLIC_FRONTEND_BASE_URL` in `backend/.env.example`
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- Don't set `NEXT_PUBLIC_FRONTEND_BASE_URL`
- Run the platform locally
- [x] -> `/library` loads normally
#### For configuration changes:
- [x] `.env.example` is updated or already compatible with my changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
## Summary
- Enhanced graph execution cancellation and cleanup mechanisms
- Improved error handling and logging for graph execution lifecycle
- Added timeout handling for graph termination with proper status
updates
- Exposed a new API for stopping graph based on only graph_id or user_id
- Refactored logging metadata structure for better error tracking
## Key Changes
### Backend
- **Graph Execution Management**: Enhanced `stop_graph_execution` with
timeout-based waiting and proper status transitions (see the sketch
after this list)
- **Execution Cleanup**: Added proper cancellation waiting with timeout
handling in executor manager
- **Logging Improvements**: Centralized `LogMetadata` class and improved
error logging consistency
- **API Enhancements**: Added bulk graph execution stopping
functionality
- **Error Handling**: Better exception handling and status management
for failed/cancelled executions
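
A rough sketch of the timeout-based stop flow, assuming hypothetical
helpers (`request_cancellation`, `wait_for_termination`,
`set_execution_status`) and an illustrative timeout value:

```python
import asyncio

GRAPH_STOP_TIMEOUT = 30  # seconds; illustrative value

async def stop_graph_execution(graph_exec_id: str) -> None:
    request_cancellation(graph_exec_id)  # assumed helper: signal the executor
    try:
        # Wait for the running node executions to wind down...
        await asyncio.wait_for(
            wait_for_termination(graph_exec_id),  # assumed helper: polls status
            timeout=GRAPH_STOP_TIMEOUT,
        )
    except asyncio.TimeoutError:
        # ...and force a terminal status if the execution is stuck.
        set_execution_status(graph_exec_id, "TERMINATED")  # assumed helper
```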
### Frontend
- **Status Safety**: Added null safety checks for status chips to
prevent runtime errors
- **Execution Control**: Simplified stop execution request handling
## Test Plan
- [x] Verify graph execution can be properly stopped and reaches
terminal state
- [x] Test timeout scenarios for stuck executions
- [x] Validate proper cleanup of running node executions when graph is
cancelled
- [x] Check frontend status chips handle undefined statuses gracefully
- [x] Test bulk execution stopping functionality
🤖 Generated with [Claude Code](https://claude.ai/code)
---------
Co-authored-by: Claude <noreply@anthropic.com>
## Changes 🏗️
Requests to the Backend now happen on the server, given we moved to
server-side cookies 🍪 ... however, the client proxy is not exposing
the API errors to the client correctly. This aims to fix that.
## Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Login to the platform
- [x] Run agents until you encounter an error
- [x] The error is shown on the toast
## Changes 🏗️
<img width="800" alt="Screenshot 2025-07-02 at 16 43 08"
src="https://github.com/user-attachments/assets/d7cd0dd7-e671-4c5d-8ed9-6d8f56371ff5"
/>
During logout, the user state gets cleared but the onboarding provider
continues to run and tries to access onboarding.completedSteps on a null
object, causing a runtime error 😬
This mostly happens because onboarding is broken on local and dev ( the
onboarding agents don't work ), so I manually skip it after creating an
account by navigating to `/marketplace`. That makes me think the
onboarding provider still believes I need to onboard, hence this error.
## Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Create a new user
- [x] Instead of completing onboarding, navigate to `/marketplace` via
browser URL
- [x] Log out, then log in/out a few times; no runtime errors appear
# Change details
- **Before**: each job separately installs dependencies (~4 redundant
installations)
- **After**: dependencies are installed once in a setup job, then cached
and reused
### Massive Redundancy in Setup Steps
Each job repeats the same 4 steps:
- Checkout repository
- Set up Node.js (version 21)
- Enable corepack
- Install dependencies (`pnpm install --frozen-lockfile`)
This happens 4+ times across different jobs:
- lint job
- type-check job
- chromatic job
- test job (runs 2x due to matrix strategy)
### No Dependency Caching
There was no caching strategy, so packages are downloaded fresh every
time:
- Every workflow run downloads all packages from scratch
- No benefit from previous runs, even with an identical pnpm-lock.yaml
This optimization maintains all existing CI functionality while
significantly improving pipeline efficiency.
An example workflow run:
https://github.com/souhailaS/AutoGPT/actions/workflows/platform-frontend-ci.yml
## Additional Context
We are a team of researchers from University of Zurich
(https://www.ifi.uzh.ch/en/zest.html) currently **working on automating
energy optimizations in GitHub Actions workflows**. This optimization
maintains full functionality while significantly reducing computational
overhead and energy consumption.
souhaila.serbout@uzh.ch
Fixes https://github.com/Significant-Gravitas/AutoGPT/issues/10284
### Changes 🏗️
- Allows passing an `aiohttp.BasicAuth` object directly to the `auth`
parameter of the `make_request` function.
- Converts tuple-based auth credentials to `aiohttp.BasicAuth` objects
before making the request.
Fixes
[AUTOGPT-SERVER-4AX](https://sentry.io/organizations/significant-gravitas/issues/6709824432/).
The issue: aiohttp's `ClientSession.request` received a plain tuple for
`auth` instead of an `aiohttp.BasicAuth` object, causing the OAuth2
token exchange to fail.
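
In essence, the fix normalizes tuple credentials before the request; a
minimal sketch (the real `make_request` signature differs):

```python
import aiohttp

async def make_request(url: str, auth=None, **kwargs) -> str:
    # Convert tuple-based credentials to the type aiohttp actually expects.
    if isinstance(auth, tuple):
        auth = aiohttp.BasicAuth(*auth)  # (login, password) -> BasicAuth
    async with aiohttp.ClientSession() as session:
        async with session.request("GET", url, auth=auth, **kwargs) as resp:
            return await resp.text()
```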
This fix was generated by Seer in Sentry, triggered by Bently. 👁️ Run
ID: 185767
Not quite right? [Click here to continue debugging with
Seer.](https://sentry.io/organizations/significant-gravitas/issues/6709824432/?seerDrawer=true)
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan
#### For configuration changes:
- [x] `.env.example` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
<details>
<summary>Examples of configuration changes</summary>
- Changing ports
- Adding new services that need to communicate with each other
- Secrets or environment variable changes
- New or infrastructure changes such as databases
</details>
Co-authored-by: seer-by-sentry[bot] <157164994+seer-by-sentry[bot]@users.noreply.github.com>
This PR adds two setup scripts that fully set up AutoGPT: a Windows
`.bat` script and a Linux/Mac `.sh` script. For now they are placed in a
new folder called "Installer".
### Note: the installers are supposed to be run outside of the AutoGPT
repo folder, e.g. on the Desktop or in a new empty folder, because they
clone the repo into the current directory.
I had to add ``cross-env`` via ``pnpm add cross-env``, as on Windows the
env is set differently in the ``package.json`` build section:
``"build": "cross-env pnpm run generate:api-client &&
SKIP_STORYBOOK_TESTS=true next build"``
Once fully set up, I plan to make these scripts runnable with the
following commands:
Linux/Mac
```bash
curl -fsSL https://setup.agpt.co/install.sh -o install.sh && bash install.sh
```
Windows cmd/powershell
```bash
powershell -c "iwr https://setup.agpt.co/install.bat -o install.bat; ./install.bat"
```
Currently the commands above don't work, but they will later on!
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] I have tested the Linux ``install.sh`` on an Ubuntu system and it
set up the platform fully.
- [x] I have tested the Windows ``install.bat`` on my Windows system and
it set up the platform fully.
- [x] I have tested on both OSes with missing prerequisites to check
that the errors show, and they do.
Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore <dependency name> major version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's major version (unless you unignore this specific
dependency's major version or upgrade to it yourself)
- `@dependabot ignore <dependency name> minor version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's minor version (unless you unignore this specific
dependency's minor version or upgrade to it yourself)
- `@dependabot ignore <dependency name>` will close this group update PR
and stop Dependabot creating any more for the specific dependency
(unless you unignore this specific dependency or upgrade to it yourself)
- `@dependabot unignore <dependency name>` will remove all of the ignore
conditions of the specified dependency
- `@dependabot unignore <dependency name> <ignore condition>` will
remove the ignore condition of the specified dependency and ignore
conditions
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Ubbe <hi@ubbe.dev>
## Changes 🏗️
We want to make running the AutoGPT Front-end as easy as possible. For
that, you should be able to run it with the least amount of commands.
We recently added generated queries and types on the Front-end from the
Back-end OpenAPI schema, to make development easier and catch bugs
earlier. However, with the current setup, developers are forced to run
`pnpm generate:api-all` with the Back-end running, which is annoying.
After this PR, the Front-end can be run with just `pnpm i && pnpm dev`.
## Checklist 📋
### For code changes
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Run the Front-end with just `pnpm dev` and it works
---------
Co-authored-by: Abhimanyu Yadav <122007096+Abhi1992002@users.noreply.github.com>
### Changes 🏗️
A new undocumented env var, `NEXT_PUBLIC_AGPT_SERVER_BASE_URL`, was
added to the proxy route for it to work with the new `react-query`
mutator.
I removed it and used the existing `NEXT_PUBLIC_AGPT_SERVER_URL`, so we
have fewer environment variables to manage ( _and this one is already
added to all environments_ ).
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Run the server locally
- [x] All pages ( library, marketplace, builder, settings ) work
A graph execution that fails due to an interruption or unknown error
should be enqueued back to the queue. However, swallowing the error ends
up not marking the execution as a failure.
### Changes 🏗️
* Avoid repeatedly updating the graph execution status on each node
execution step.
* Added a guard rail to avoid marking a graph execution as completed
while its execution status is not completed.
* Avoid acknowledging messages from the queue if the graph execution is
not yet completed.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Run graph execution, kill the process, re-run the process
---------
Co-authored-by: Swifty <craigswift13@gmail.com>
Currently, we use PyClamd to run an anti-virus scan on all files
uploaded to the platform. We split each file into small chunks and
check the chunks serially. The socket is not thread-safe, so we would
need to create multiple sockets across many threads to leverage
concurrency. To make this step concurrent and keep it fully async, we
migrate from PyClamd to aioclamd.
### Changes 🏗️
Convert pyclamd to aioclamd and scan chunks in parallel, with a
semaphore limiting the concurrency (see the sketch below).
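
A sketch of what the concurrent scan could look like with aioclamd
(chunk size, concurrency limit, and result handling are illustrative):

```python
import asyncio
from io import BytesIO

from aioclamd import ClamdAsyncClient

MAX_CONCURRENT_SCANS = 5  # illustrative semaphore limit

async def scan_file(data: bytes, chunk_size: int = 1024 * 1024) -> bool:
    """Scan chunks concurrently; return True if every chunk is clean."""
    clamd = ClamdAsyncClient("localhost", 3310)
    sem = asyncio.Semaphore(MAX_CONCURRENT_SCANS)

    async def scan_chunk(chunk: bytes):
        async with sem:  # cap the number of in-flight INSTREAM commands
            return await clamd.instream(BytesIO(chunk))

    chunks = [data[i : i + chunk_size] for i in range(0, len(data), chunk_size)]
    results = await asyncio.gather(*(scan_chunk(c) for c in chunks))
    return all("FOUND" not in str(r) for r in results)
```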
#### Side Note
Shout-out to @tedyu for raising this improvement idea.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Execute file upload into the platform
## Changes 🏗️
We need to rename `FRONTEND_BASE_URL` to
`NEXT_PUBLIC_FRONTEND_BASE_URL`, given it is needed by the new API
client on the Front-end to make requests. The `NEXT_PUBLIC` prefix is
important so that the variable is available on the client.
## Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Run the app locally
- [x] The library and other pages work
This pull request includes updates to the environment configuration and
API mutator logic in the `autogpt_platform/frontend` directory. The
changes aim to improve flexibility by introducing dynamic base URLs
through environment variables.
Environment configuration updates:
* [`autogpt_platform/frontend/.env.example`](diffhunk://#diff-72012a00359825421736dc064be74187011cb5b0462bea1ed3a3c5ca80bb3117R2):
Added `NEXT_PUBLIC_FRONTEND_BASE_URL` to define the base URL for the
frontend dynamically.
API mutator logic updates:
* [`autogpt_platform/frontend/src/app/api/mutators/custom-mutator.ts`](diffhunk://#diff-28c5af33c7bd0ecddc1793aa6a27bfd5b4f979b62c29990538aceea3320d8be9L1-R1):
Updated `BASE_URL` to use the `NEXT_PUBLIC_FRONTEND_BASE_URL`
environment variable, enabling dynamic configuration of the API proxy
URL.
### Checklist
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Tested manually and everything is working perfectly
This PR demonstrates how the new data fetching strategy works, so other
developers don’t get confused.
### Changes
- Updated `README.md` to explain the new data fetching strategy.
### Checklist 📋
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Nothing is breaking, everything works great
### Changes
- Restructure library components.
- Divide the component into two parts: one for rendering and one for
hooks.
- Add a `useInfiniteParams` inside the `orval` config to use `page` as
the pagination parameter.
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Manually tested everything and everything works fine
- Resolves #10217

https://github.com/user-attachments/assets/26a402f5-6f43-453b-8c83-481380bde853
### Changes 🏗️
Frontend:
- Show message instead of action buttons ("Run" etc) when graph has
webhook node(s)
- Fix check for webhook nodes used in `BlocksControl` and `FlowEditor`
- Clean up `PrimaryActionBar` implementation
- Add `accent` variant to `ui/button:Button`
API:
- Add `GET /library/agents/by-graph/{graph_id}` endpoint
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- Go to Builder
- Add a trigger block
- [x] -> action buttons disappear; message shows in their place
- Save the graph
- Click the "Agent Library" link in the message
- [x] -> app navigates to `/library/agents/[id]` for the newly created
agent
This PR routes all the React Query requests through a Next.js server
proxy. It works something like this: when a user sends a request, our
custom mutator sends it to the proxy server, where we add the auth token
to the headers and forward it to the backend. 🌐
Users can't send client-side requests directly to the backend server
because their browser doesn't have access to auth tokens, so requests
need to go via the Next.js server. 🚀
### Changes 🏗️
- Change the position of the generated client, mutator, and transfer
inside `/src/app/api`
- Update the mutator to send the request to the proxy server
- Add a proxy server at `/api/proxy`, which handles the request using
`makeAuthenticatedRequest` and `makeAuthenticatedFileUpload` helpers and
sends the request to the backend
- Remove `getSupabaseClient`, because we do not have access to the auth
token on the client side, so it is no longer needed 🔑
- Update Orval configs to generate the client at the new position
- Added new backend updates to the auto-generated client.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] The setting page is using React Query and is working fine.
- [x] The mutator is sending requests to the proxy server correctly.
- [x] The proxy server is handling requests correctly.
- [x] The response handling is correct in both the proxy server and the
custom mutator.
## Changes 🏗️
### Overview
Introduces a new responsive `<Dialog />` component that automatically
adapts to screen size, providing optimal UX across devices.
<img width="800" alt="Screenshot 2025-06-27 at 16 00 01"
src="https://github.com/user-attachments/assets/d0c53b30-488f-4102-8100-c9318168d65b"
/>
<img width="300" alt="Screenshot 2025-06-27 at 16 00 12"
src="https://github.com/user-attachments/assets/f2105708-97d9-4a94-8e26-3c2d582ea8cd"
/>
### Key Features
#### 📱 **Responsive Behavior**
- **Desktop**: Modal dialog with overlay
- **Mobile**: Bottom drawer [Vaul](https://vaul.emilkowal.ski/) with
**swipe-to-dismiss** functionality
#### 🎯 **Multiple Interaction Methods**
- `ESC` key to close (both desktop & mobile)
- Click outside to dismiss
- Swipe down to dismiss (mobile drawer)
- Close button (X)
#### ❓ Why I did not use `shadcn/dialog` in this case as a base
While we already use the raw `shadcn/dialog` on the platform, it's
designed as a desktop-only solution and is not really
responsive-friendly. It lacks 📱 mobile-optimisation patterns like
_bottom drawers_, _swipe-to-dismiss gestures_ ( the new implementation
has it via [Vaul](https://vaul.emilkowal.ski/) ), and automatic
breakpoint adaptation according to screen size.
#### 🧩 **Compound Component Pattern**
```tsx
<Dialog title="Example">
<Dialog.Trigger>
<Button>Open Dialog</Button>
</Dialog.Trigger>
<Dialog.Content>
Content goes here
</Dialog.Content>
</Dialog>
```
#### ⚙️ **Flexible Control**
- **Uncontrolled**: Self-managed state via triggers
- **Controlled**: External state management
- **Force open**: rare but might be needed
- **Custom styling**: if needed
## Checklist 📋
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] **Desktop Modal**: Opens/closes via trigger, ESC key, click
outside, close button
- [x] **Mobile Drawer**: Automatically switches at `lg` breakpoint,
swipe-to-dismiss works
- [x] **Controlled Mode**: External state management functions correctly
- [x] **Force Open**: Dialog stays open for preview purposes
- [x] **Custom Styling**: CSS-in-JS overrides work as expected
- [x] **Footer Component**: Action buttons render and function properly
- [x] **No Title Mode**: Dialog works without title prop
- [x] **Accessibility**: Tab navigation, screen reader announcements,
ARIA compliance
- [x] **Responsive Breakpoints**: Component switches modes at correct
screen sizes
- [x] **Storybook**: All stories render and function correctly
---------
Co-authored-by: Abhimanyu Yadav <122007096+Abhi1992002@users.noreply.github.com>
CreateListBlock can only batch lists based on the size limit, but
sometimes we need the size to be dynamically adjusted based on the token
count.
### Changes 🏗️
Improve CreateListBlock to support batching based on token count (see
the sketch below).
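
The idea, sketched with a crude stand-in tokenizer (a real
implementation would use the model's tokenizer):

```python
def count_tokens(text: str) -> int:
    return max(1, len(text) // 4)  # crude approximation, illustration only

def batch_by_tokens(items: list[str], max_tokens: int) -> list[list[str]]:
    """Close a batch just before it would exceed the token budget."""
    batches: list[list[str]] = []
    current: list[str] = []
    used = 0
    for item in items:
        cost = count_tokens(item)
        if current and used + cost > max_tokens:
            batches.append(current)
            current, used = [], 0
        current.append(item)
        used += cost
    if current:
        batches.append(current)
    return batches
```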
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Test CreateListBlock
Complete the implementation of the Agent Run Scheduling UX in the
Library.
Demo:
https://github.com/user-attachments/assets/701adc63-452c-4d37-aeea-51788b2774f2
### Changes 🏗️
Frontend:
- Add "Schedule" button + dialog + logic to `AgentRunDraftView`
- Update corresponding logic on `AgentRunsPage`
- Add schedule name field to `CronSchedulerDialog`
- Amend Builder components `useAgentGraph`, `FlowEditor`,
`RunnerUIWrapper` to also handle schedule name input
- Split `CronScheduler` into `CronScheduler`+`CronSchedulerDialog`
- Make `AgentScheduleDetailsView` more fully functional
- Add schedule description to info box
- Add "Delete schedule" button
- Update schedule create/select/delete logic in `AgentRunsPage`
- Improve schedule UX in `AgentRunsSelectorList`
- Switch tabs automatically when a run or schedule is selected
- Remove now-redundant schedule filters
- Refactor `@/lib/monitor/cronExpressionManager` into
`@/lib/cron-expression-utils`
Backend + API:
- Add name and credentials to graph execution schedule job params
- Update schedule API
- `POST /schedules` -> `POST /graphs/{graph_id}/schedules`
- Add `GET /graphs/{graph_id}/schedules`
- Add not found error handling to `DELETE /schedules/{schedule_id}`
- Minor refactoring
Backend:
- Fix "`GraphModel`->`NodeModel` is not fully defined" error in
scheduler
- Add support for all exceptions defined in `backend.util.exceptions` to
RPC logic in `backend.util.service`
- Fix inconsistent log prefixing in `backend.executor.scheduler`
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- Create a simple agent with inputs and blocks that require credentials;
go to this agent in the Library
- Fill out the inputs and click "Schedule"; make it run every minute
(for testing purposes)
- [x] -> newly created schedule appears in the list
- [x] -> scheduled runs are successful
- Click "Delete schedule"
- [x] -> schedule no longer in list
- [x] -> on deleting the last schedule, view switches back to the Runs
list
- [x] -> no new runs occur from the deleted schedule
Calling an LLM using the current block can sometimes break due to the
prompt exceeding the context window.
A prompt compaction algorithm is applied (enabled by default) to make
sure the sent prompt stays within the context window limit.
### Changes 🏗️
**Heuristics**
* Prefer shrinking the content rather than truncating the conversation.
* If the conversation content is compacted and it's still not enough,
then reduce the conversation list.
* The rest of the implementation is adjusted to minimize the LLM call
breaking.

**Strategy**
1. **Token-aware truncation** – progressively halve a per-message cap
(`start_cap`, `start_cap/2`, … `floor_cap`) and apply it to the
*content* of every message except the first and last. Tool shells
are included: we keep the envelope but shorten huge payloads.
2. **Middle-out deletion** – if still over the limit, delete whole
messages working outward from the centre, **skipping** any message
that contains `tool_calls` or has `role == "tool"`.
3. **Last-chance trim** – if still too big, truncate the *first* and
*last* message bodies down to `floor_cap` tokens.
4. If the prompt is *still* too large:
   - raise `ValueError` when `lossy_ok == False` (default)
   - return the partially-trimmed prompt when `lossy_ok == True`
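
As a sketch of step 1 (token-aware truncation) under the stated cap
schedule; the tokenizer and clipping helpers here are crude stand-ins,
not the merged implementation:

```python
def count_tokens(text: str) -> int:
    return max(1, len(text) // 4)  # stand-in for a real tokenizer

def clip_to_tokens(text: str, cap: int) -> str:
    return text[: cap * 4]  # character clip matching count_tokens above

def compact_prompt(msgs: list[dict], limit: int,
                   start_cap: int = 4096, floor_cap: int = 128) -> list[dict]:
    cap = start_cap
    while cap >= floor_cap:
        total = sum(count_tokens(m.get("content") or "") for m in msgs)
        if total <= limit:
            break
        # Shrink the content of every message except the first and last,
        # keeping the message envelope (e.g. tool shells) intact.
        for m in msgs[1:-1]:
            if m.get("content"):
                m["content"] = clip_to_tokens(m["content"], cap)
        cap //= 2  # start_cap, start_cap/2, ... floor_cap
    return msgs
```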
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Run an SDM block in a loop until it hits 200,000 tokens using the
OpenAI o3 model.
- Follow-up fix to #10138
AI erased a bit of functionality from the `GithubReadPullRequestBlock`
in #10138. This PR puts it back and improves on the old format.
### Changes 🏗️
- Include full diff in `changes` output of `GithubReadPullRequestBlock`
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [ ] I have tested my changes according to the test plan:
- Use the `GithubReadPullRequestBlock` with `include_pr_changes` enabled
- [ ] -> block runs successfully
- [ ] -> full diff included in `changes` output
### Why these changes are needed 🧐
Revid.ai offers several specialised, undocumented rendering flows
beyond the basic "text-to-video" endpoint our platform already
supported, enabling users to:
1. **Generate ads** from copy plus product images (30-second vertical
spots).
2. **Turn a single creative prompt** into a fully AI-generated video (no
multi-line script).
3. **Transform a screenshot into a narrated, avatar-driven clip**, ideal
for product-led demos.
Without first-class blocks for these flows, users were forced to drop
to raw HTTP nodes, losing schema validation, test mocks and credential
management.
### Changes 🏗️
Added a new ``MARKETING = "Block that helps with marketing"`` category
to ``BlockCategory`` in ``block.py``.

| Area | Change | Notes |
|------|--------|-------|
| `ai_shortform_video_block.py` | Refactored out a shared `_RevidMixin` (webhook + polling helpers). | Keeps DRY across new blocks. |
| | Added `AudioTrack.DONT_STOP_ME_ABSTRACT_FUTURE_BASS` and `Voice.EVA` enum members. | Required by Revid sample payloads. |
| `AIAdMakerVideoCreatorBlock` | Implements ai-ad-generator flow; supports optional `input_media_urls`, `target_duration`, `use_only_provided_media`. | |
| `AIPromptToVideoCreatorBlock` | Implements prompt-to-video flow with `prompt_target_duration`. | |
| `AIScreenshotToVideoAdBlock` | Implements screenshot-to-video-ad flow (avatar narration, BG removal). | |
| | Added full pydantic schemas, test stubs & mock hooks for each new block. | Ensures unit tests pass and blocks appear in UI. |

No existing functionality was removed; the current
`AIShortformVideoCreatorBlock` is untouched apart from enum imports.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Use the ``AI ShortForm Video Creator`` block to generate a video;
it works
- [x] Test the ``AI Ad Maker Video Creator`` block the same way; it
works
- [x] Test the ``AI Screenshot to Video Ad`` block; it works
---------
Co-authored-by: Bently <Github@bentlybro.com>
<!-- Clearly explain the need for these changes: -->
### Changes 🏗️
* Add an enriching email feature toggle for SearchPeopleBlock
* Introduce GetPersonDetailBlock
* Adjust the cost of both blocks
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Execute SearchPeopleBlock & GetPersonDetailBlock
Currently, we don't have a secure way to pass Authorization headers when
calling the `SendWebRequestBlock`.
This hinders the integration of third-party applications that do not yet
have native block support.
### Changes 🏗️
Add Host-scoped credentials support for the newly introduced
SendAuthenticatedWebRequestBlock.
<img width="1000" alt="image"
src="https://github.com/user-attachments/assets/0d3d577a-2b9b-4f0f-9377-0e00a069ba37"
/>
<img width="1000" alt="image"
src="https://github.com/user-attachments/assets/a59b9f16-c89c-453d-a628-1df0dfd60fb5"
/>
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Use `https://api.openai.com/v1/images/edits` through
SendWebRequestBlock, passing the API key through host-scoped
credentials.
### Why are these changes needed?
<!-- Clearly explain the need for these changes: -->
These changes document the OAuth integration flow for CASA lvl 2
compliance, specifically addressing the requirement to "Verify
documentation and justification of all the application's trust
boundaries, components, and significant data flows." The documentation
clarifies the two distinct OAuth implementations in AutoGPT: user
authentication via Supabase SSO and API integration credentials for
third-party services.
### Changes 🏗️
<!-- Concisely describe all of the changes made in this pull request:
-->
- Created comprehensive OAuth integration flow documentation at
`/docs/content/platform/contributing/oauth-integration-flow.md`
- Documented trust boundaries between frontend (untrusted), backend API
(trusted), and external providers (semi-trusted)
- Added detailed component architecture for both frontend and backend
OAuth implementations
- Included mermaid diagrams illustrating:
- OAuth flow sequences (initiation, authorization, token refresh)
- System architecture showing SSO vs API integration OAuth
- Data flow diagram
- Security architecture layers
- Credential lifecycle state diagram
- Documented security measures including CSRF protection, PKCE
implementation, and token management
- Clarified the distinction between Supabase SSO for user login and
custom OAuth for API integrations
- Added references to source files for up-to-date provider lists rather
than hard-coding all providers
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Created documentation file with proper markdown formatting
- [x] Verified all file paths referenced in documentation exist
- [x] Confirmed mermaid diagrams render correctly
- [x] Validated that the documentation accurately reflects the codebase
implementation
---------
Co-authored-by: Claude <noreply@anthropic.com>
### Changes 🏗️
- We have implemented some backend changes, so I have added a new,
updated OpenAPI specification.
- We have updated the settings and API keys page to enable us to use
React Query for fetching data.
### Checklist 📋
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Settings and API keys pages are working correctly
### Changes 🏗️
Implemented `httpOnly` cookies 🍪 for secure session management 💆🏽
- 🙏🏽 **Moved all API requests to server-side execution** for maximum XSS
protection
- All authentication now happens server-side with `httpOnly` cookies (no
JWT tokens exposed to client)
- Created `proxyApiRequest()` and `proxyFileUpload()` server actions to
handle all communication with API
- Updated `BackendAPI._request()` to always use proxy approach for
consistent security
- 🚧 **Exception: WebSocket authentication** requires client-side token
exposure
- Added `getWebSocketToken()` server action to securely provide tokens
only for WebSocket connections
- Maintains secure architecture while we keep the real-time features
- 🧹 **Abstracted implementation details** into reusable helper functions
- Reduced proxy actions from 157 lines to 48 lines (70% reduction)
- Added flexible content-type support ( _JSON, form-urlencoded, custom_
)
- Enhanced error handling for graceful logout scenarios
- 📙 **Renamed `/reset_password` page to `/reset-password`**
- couldn't resist sorry... snake case URLs get me
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Verify all API requests work through server-side proxy
- [x] Confirm httpOnly cookies prevent client-side JWT access
- [x] Test WebSocket connections work with server-provided tokens
- [x] Verify logout scenarios don't throw authentication errors
- [x] Check file uploads work securely through proxy
- [x] Validate zero breaking changes for existing BackendAPI calls
---------
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Co-authored-by: Nicholas Tindle <nicktindle@outlook.com>
Co-authored-by: Swifty <craigswift13@gmail.com>
## Changes 🏗️
<img width="800" alt="Screenshot 2025-06-25 at 20 34 38"
src="https://github.com/user-attachments/assets/bfc90504-85b6-4178-9ace-2aa4d14f16b0"
/>
<br /><br />
- Updated to match the AutoGPT design system
- Unit tests are commented out because they depend on:
https://github.com/Significant-Gravitas/AutoGPT/pull/10243
## Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Run Storybook locally, Badge stories look good
An anti-virus scan step is added to each file upload on the platform
before the file is sent to cloud storage or local machine storage.
### Changes 🏗️
* Added ClamAV service
* Added AV file scan on each upload step
* Added tests & documentation
* Made the step mandatory even in local development
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Tried using FileUploadBlock & AgentFileInputBlock
## Changes 🏗️
<img width="1580" alt="Screenshot 2025-06-25 at 18 11 36"
src="https://github.com/user-attachments/assets/c8b136b6-5897-41fa-a03b-010582c4b879"
/>
<br /><br />
Add a new `<Link />` component that will be the standard when rendering
links on the platform.
It is a wrapper around `next/link` with an `isExternal` prop; when
supplied, `target="_blank"` and `rel="noopener noreferrer"` are added to
it. It comes with the styles agreed on in the AutoGPT design system.
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Run Storybook locally
- [x] Tests pass and the component looks good
## Changes 🏗️
<img width="800" alt="Screenshot 2025-06-25 at 17 52 38"
src="https://github.com/user-attachments/assets/18f859cf-5008-4915-925c-1912ab9cf176"
/>
- Depends on #10235 so that we can test the new Chromatic workflow with
this
- Documents our Skeleton atom which is directly
[shadcn/skeleton](https://ui.shadcn.com/docs/components/skeleton)
## Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Run storybook locally
- [x] The Skeleton stories look good
## Changes 🏗️
<img width="800" alt="Screenshot 2025-06-25 at 13 43 06"
src="https://github.com/user-attachments/assets/13ffd32e-ffa1-482e-91a6-8363ad6b67df"
/>
<br /><br />
- Set up Chromatic ( install + `package.json` command )
- Make it run on the CI
- Remove a lot of old components from Storybook which were broken or
need design review
- for now we only keep in Storybook what has been ✅ by design
- Remove `test-storybook:ci` commands
- I plan to run tests via Chromatic, but I want to look at that setup in
a separate PR and in a clean state
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] The `chromatic` job succeeds on the CI and the changes appear on
Chromatic's dashboard
## Changes 🏗️
The test data script was not working locally; this should fix it 🤞🏽
- Fixed `agentId` → `agentGraphId` field references in preset matching
logic
- Fixed `agentId` → `agentGraphId` field references in store listing
graph lookup
- Added graph uniqueness logic to prevent duplicate library agents per
user
- Improved data consistency by ensuring proper foreign key relationships
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Verified script runs without database schema errors
- [x] Confirmed foreign key relationships are properly maintained
- [x] Tested that library agents use unique graphs per user
- [x] Validated preset matching uses correct field references
This PR makes several improvements to the `update_library_agent`
endpoint.
- Resolves #10216
### Changes 🏗️
- Add `DELETE /library/agents/{id}` endpoint
- Fix `PUT /library/agents/{id}` endpoint
- Return updated library agent
- Remove `is_deleted` parameter
- Change method from `PUT` to `PATCH`
Also, a small DX improvement:
- Expose `BackendAPI` globally through `window.api` for local dev
purposes
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Deleting library agents works
- Follow-up fix to #10167
- Resolves #10228
### Changes 🏗️
- Don't assume `block.input_schema.jsonschema()["required"]` exists
- Unbreak handling of `webhook_type` in
`BaseWebhooksManager.get_manual_webhook(..)`
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- Create an agent with a Generic Webhook Trigger block; go to it in the
Library
- [x] -> `/library/agents/[id]` loads normally
- Follow-up fix to #9862
- Resolves #10097
In #9862, the `AgentExecutorBlock`'s nested input field was renamed from
`data` to `input`, but the frontend still referenced the old field and
broke.
### Changes 🏗️
- Update `getInputPropKey` in `CustomNode` to use `inputs.{key}` instead
of `data.{key}`
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- Create an agent with at least one input
- Use the agent with at least one input inside another agent
- Set a default value on the input on the agent block
- Save the graph
- [x] -> default input value is saved
AIImageEditorBlock was not able to accept an image from AgentFileInput
or FileStore block.
### Changes 🏗️
* Add support for image loading for the image editor block:
<img width="1081" alt="Screenshot 2025-06-23 at 10 28 45 AM"
src="https://github.com/user-attachments/assets/ac3fea91-9503-4894-bbe3-2dc3c5649a39"
/>
* Avoid rendering a relative path image file.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Run the AIImageEditorBlock using the AgentFileInput or FileStore
block.
This pull request adds support for setting up (webhook-)triggered agents
in the Library. It contains changes throughout the entire stack to make
everything work in the various phases of a triggered agent's lifecycle:
setup, execution, updates, deletion.
Setting up agents with webhook triggers was previously only possible in
the Builder, limiting their use to the agent's creator. To make it work
in the Library, this change stores the trigger information on the
previously introduced `AgentPreset`, instead of on the graph's nodes, to
which only the graph's creator has access.
- Initial ticket: #10111
- Builds on #9786


### Changes 🏗️
Frontend:
- Amend the Library's `AgentRunDraftView` to handle creating and editing
Presets
- Add `hideIfSingleCredentialAvailable` parameter to `CredentialsInput`
- Add multi-select support to `TypeBasedInput`
- Add Presets section to `AgentRunsSelectorList`
- Amend `AgentRunSummaryCard` for use for Presets
- Add `AgentStatusChip` to display general agent status (for now: Active
/ Inactive / Error)
- Add Preset loading logic and create/update/delete handlers logic to
`AgentRunsPage`
- Rename `IconClose` to `IconCross`
API:
- Add `LibraryAgent` properties `has_external_trigger`,
`trigger_setup_info`, `credentials_input_schema`
- Add `POST /library/agents/{library_agent_id}/setup_trigger` endpoint
- Remove redundant parameters from `POST
/library/presets/{preset_id}/execute` endpoint
Backend:
- Add `POST /library/agents/{library_agent_id}/setup_trigger` endpoint
- Extract non-node-related logic from `on_node_activate` into
`setup_webhook_for_block`
- Add webhook-related logic to `update_preset` and `delete_preset`
endpoints
- Amend webhook infrastructure to work with AgentPresets
- Add preset trigger support to webhook ingress endpoint
- Amend executor stack to work with passed-in node input
(`nodes_input_masks`, generalized from `node_credentials_input_map`)
- Amend graph validation to work with passed-in node input
- Add `AgentPreset`->`IntegrationWebhook` relation
- Add `WebhookWithRelations` model
- Change behavior of `BaseWebhooksManager.get_manual_webhook(..)` to
avoid unnecessary changes of the webhook URL: ignore `events` to find
matching webhook, and update `events` if necessary.
- Fix & improve `AgentPreset` API, models, and DB logic
- Add `isDeleted` filter to get/list queries
- Add `user_id` attribute to `LibraryAgentPreset` model
- Add separate `credentials` property to `LibraryAgentPreset` model
- Fix `library_db.update_preset(..)` replacement of existing
`InputPresets`
- Make `library_db.update_preset(..)` more usage-friendly with separate
parameters for updateable properties
- Add `user_id` checks to various DB functions
- Fix error handling in various endpoints
- Fix cache race condition on `load_webhook_managers()`
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- Test existing functionality
- [x] Auto-setup and -teardown of webhooks on save in the builder still
works
- [x] Running an agent normally from the Library still works
- Test new functionality
- [x] Setting up a trigger in the Library
- [x] Updating a trigger in the Library
- [x] Disabling and re-enabling a trigger in the Library
- [x] Deleting a trigger in the Library
- [x] Triggers set up in the Library result in a new run when the
webhook receives a payload
This pull request sets up and configures Orval for API client
generation. It automates the process of creating TypeScript clients from
the backend's OpenAPI specification, improving development efficiency
and reducing manual code maintenance.
### Changes 🏗️
- Configures Orval with a new configuration file (`orval.config.ts`).
- Adds scripts to `package.json` for fetching the OpenAPI spec and
generating the API client.
- Implements a custom mutator for handling authentication.
- Adds API client generation as a step in the CI workflow.
- Adds `.gitignore` entry for generated API client files.
- Adds a security middleware to prevent caching of sensitive data.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Verified that the API client is generated correctly.
- [x] Confirmed that the custom mutator is functioning as expected for
authentication.
- [x] Ensured that the new CI workflow step for API client generation is
successful.
- [x] Tested generated API calls
#### For configuration changes:
- [x] `.env.example` is updated or already compatible with my changes
- [ ] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
Since auto-conversion is applied before nested input is merged in the
block, the auto-conversion breaks for nested input.
### Changes 🏗️
* Enabling auto-type conversion on block input schema mismatch for
nested input
* Add batching feature for `CreateListBlock`
* Increase default max_token size for LLM call
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Run `AIStructuredResponseGeneratorBlock` with non-string prompt
value (should be auto-converted).
### Changes 🏗️
Add cost calculation for Apollo integration.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Run Apollo block Search People & Organizations Block.
### Why? 🤔
<!-- Clearly explain the need for these changes: -->
We need to prevent sensitive data (authentication tokens, API
keys, user credentials, personal information) from being cached by
browsers and proxies. Following the principle of "secure by
default", we're switching from a deny list to an allow list
approach for cache control.
### Changes 🛠️
<!-- Concisely describe all of the changes made in this pull
request: -->
- **Refactored cache control middleware from deny list to allow
list approach**
- By default, ALL endpoints now have `Cache-Control: no-store,
no-cache, must-revalidate, private` headers
- Only explicitly allowed paths (static assets, health checks,
public store pages) can be cached
- This ensures new endpoints are automatically protected without
developers having to remember to add them to a list (see the sketch
after this list)
- **Updated `SecurityHeadersMiddleware` in
`/backend/backend/server/middleware/security.py`**
- Renamed `SENSITIVE_PATHS` to `CACHEABLE_PATHS`
- Inverted the logic in `is_cacheable_path()` method
- Cache control headers are now applied to all paths NOT in the
allow list
- **Updated test suite to match new behavior**
- Tests now verify that most endpoints have cache control
headers by default
- Tests verify that only allowed paths (static assets, health
endpoints, etc.) can be cached
- **Updated documentation in `CLAUDE.md`**
- Documented the new allow list approach
- Added instructions for developers on how to allow caching for
new endpoints
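
A condensed sketch of the allow-list pattern (the path entries here are
illustrative; the real list lives in `security.py`):

```python
from starlette.middleware.base import BaseHTTPMiddleware

CACHEABLE_PATHS = ("/static/", "/health", "/store")  # illustrative entries

class SecurityHeadersMiddleware(BaseHTTPMiddleware):
    def is_cacheable_path(self, path: str) -> bool:
        return path.startswith(CACHEABLE_PATHS)

    async def dispatch(self, request, call_next):
        response = await call_next(request)
        if not self.is_cacheable_path(request.url.path):
            # Secure by default: anything not explicitly allowed is uncacheable.
            response.headers["Cache-Control"] = (
                "no-store, no-cache, must-revalidate, private"
            )
        return response
```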
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Test modified endpoints work still
- [x] Test modified endpoints correctly have no cache rules
---------
Co-authored-by: Swifty <craigswift13@gmail.com>
Main issues:
* `AIStructuredResponseGeneratorBlock` is not able to produce a list of
objects.
* `SmartDecisionBlock` is not able to call tools with some optional
inputs.
### Changes 🏗️
* Allow persisting `null` / `None` value as execution output.
* Provide `multiple_tool_calls` option for `SmartDecisionBlock`.
* Provide `list_result` option for `AIStructuredResponseGeneratorBlock`
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Run `SmartDecisionBlock` & `AIStructuredResponseGeneratorBlock`
This PR introduces a custom function for generating unique operation IDs
for OpenAPI specifications to improve auto-generated client code
quality.
## Why This Change?
**Better Auto-Generated Clients**: Default FastAPI operation IDs create
unclear method names in generated clients. Our custom generator produces
clean, readable operation IDs that translate to intuitive method names.
- **Before**: `get_items_api_v1_items_get` → unclear generated methods
- **After**: `get_users_list` → clean, descriptive method names
## Changes
- ✨ **Added**: `custom_generate_unique_id` utility function
- Generates IDs using pattern: `{method}_{tag}_{summary}`
- Ensures uniqueness and readability
- 🔧 **Updated**: FastAPI app configuration to use custom generator
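As a rough illustration of the naming scheme (the real generator is a Python
function wired into the FastAPI app; the slugging rules here are assumptions):
```ts
// Hypothetical sketch of the `{method}_{tag}_{summary}` pattern.
function customGenerateUniqueId(method: string, tag: string, summary: string): string {
  const slug = (s: string) => s.trim().toLowerCase().replace(/[^a-z0-9]+/g, "_");
  return [method, tag, summary].map(slug).join("_");
}

customGenerateUniqueId("GET", "users", "List"); // => "get_users_list"
```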
## Checklist
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] OpenAPI docs reflect new operation ID format
- [x] Tested various HTTP methods, tags, and summaries
- [x] Verified app startup functionality
- [x] Validated improved client generation output
Current Apollo blocks only work with keywords; the many list-filter
fields don't work because they're sent as the wrong GET parameters
(missing the `[]` suffix).
### Changes 🏗️
Change the GET request to a POST request for Apollo.
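To illustrate why the GET variant dropped list filters (a sketch; the endpoint
URL, field names, and auth header here are illustrative, not confirmed):
```ts
// Broken: array filters in a GET query string need the `key[]=value` form;
// without the `[]` suffix, repeated values collapse into a single filter.
const params = new URLSearchParams();
["CEO", "CTO"].forEach((t) => params.append("person_titles[]", t));

// The fix: POST the filters as a JSON body, where arrays are unambiguous.
async function searchPeople() {
  return fetch("https://api.apollo.io/v1/mixed_people/search", {
    method: "POST",
    headers: { "Content-Type": "application/json", "X-Api-Key": "<key>" },
    body: JSON.stringify({ person_titles: ["CEO", "CTO"] }),
  });
}
```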
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Run SearchPeopleBlock with title filter
This PR integrates React Query DevTools and ESLint rules to improve the
development workflow and enforce best practices for data fetching.
### Changes:
- **React Query DevTools:**
- Added the `@tanstack/react-query-devtools` package.
- DevTools are enabled by default in the development environment.
- They can be disabled by setting
`NEXT_PUBLIC_REACT_QUERY_DEVTOOL=false` in your environment variables.
- **ESLint Rules:**
- Integrated `@tanstack/eslint-plugin-query` to enforce best practices
and catch common errors in React Query usage.
- **Configuration:**
- Added the `NEXT_PUBLIC_REACT_QUERY_DEVTOOL` variable to the
`.env.example` file so other developers are aware of this option.
- **Documentation:**
- Updated the `README.md` with instructions on how to toggle the
DevTools using the environment variable.
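A minimal sketch of the toggle described above (component and file names are
hypothetical; the gist is that the devtools only render in development unless
explicitly disabled):
```tsx
import type { ReactNode } from "react";
import { ReactQueryDevtools } from "@tanstack/react-query-devtools";

export function Providers({ children }: { children: ReactNode }) {
  const devtoolsEnabled =
    process.env.NODE_ENV === "development" &&
    process.env.NEXT_PUBLIC_REACT_QUERY_DEVTOOL !== "false";

  return (
    <>
      {children}
      {/* Rendered only in development, and only if not explicitly disabled */}
      {devtoolsEnabled && <ReactQueryDevtools initialIsOpen={false} />}
    </>
  );
}
```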
For configuration changes:
- [x] `.env.example` has been updated with the new environment variable.
### Checklist
For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Run the app in development with `pnpm dev`.
- [x] Verified DevTools toggle with environment variables
- [x] Run `pnpm lint` in the frontend directory.
- [x] Confirm that linting passes on the current codebase.
### Screenshot
<img width="1512" alt="Screenshot 2025-06-19 at 6 32 22 PM"
src="https://github.com/user-attachments/assets/a3defd23-2c3d-4d20-b152-037d85e04503"
/>
---------
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Issue -
https://linear.app/autogpt/issue/OPEN-2534/set-up-react-query-for-both-client-side-and-server-side-operations
This update adds react-query to the frontend, enabling efficient data
fetching and caching. It uses a singleton QueryClient on the client for
a shared cache, creates a new QueryClient for each server request to
prevent data leaks, and supports server-side prefetching for faster
initial loads.
### Changes
- Add @tanstack/react-query dependency
- Set up QueryClient with default config (except 1m staleTime)
- Wrap app with QueryClientProvider for global access
- Ensure safe client/server QueryClient instantiation
> I only changed the staleTime in the default config because the other
settings work well for general use. For specific cases—like when you
want data to stay fresh unless manually invalidated—you can set
staleTime: Infinity in that query.
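A minimal sketch of the client/server instantiation pattern (roughly the shape
recommended by the TanStack SSR docs; function names are assumptions):
```ts
import { QueryClient } from "@tanstack/react-query";

function makeQueryClient() {
  return new QueryClient({
    defaultOptions: { queries: { staleTime: 60 * 1000 } }, // the 1m staleTime mentioned above
  });
}

let browserQueryClient: QueryClient | undefined;

export function getQueryClient(): QueryClient {
  if (typeof window === "undefined") {
    // Server: a fresh client per request, so users never share cached data
    return makeQueryClient();
  }
  // Browser: a singleton, so the whole app shares one cache
  return (browserQueryClient ??= makeQueryClient());
}
```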
### Checklist 📋
For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Ran frontend locally – it’s working fine
### Changes 🏗️

- Adds a new `<Button>` component that mirrors 1:1 what we have in the
design system
- Documented the new component via stories
- Re-arranged the stories in the Storybook sidebar to show the legacy
ones at the end
Once this is merged, we can start updating buttons on the app to only
use this one, so we have a consistent UX 💆🏽
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Run Storybook locally
- [x] Button stories look good ( _in all variants_ )
### Changes 🏗️
Fixes: [Make the default scheduler frequency to daily instead of every
minute
#9985](https://github.com/Significant-Gravitas/AutoGPT/issues/9985)
This simply updates the Schedule Task's default from every minute to
daily at 09:00.

### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Open the Schedule Task UI and see the default is now daily at
09:00
### Changes 🏗️
<img width="800" alt="Screenshot 2025-06-18 at 19 55 24"
src="https://github.com/user-attachments/assets/f3bd662e-cc64-4a32-a030-973b7cf89d8b"
/>
Document the new colour tokens agreed with the design team, and update
the Tailwind theme with them.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Run Storybook locally
- [x] Verify the colors story renders well and makes sense
## Description
Added the `graph_id` parameter to the stop execution endpoint path
(`/graphs/{graph_id}/executions/{graph_exec_id}/stop`) to fix a client
generation error from the OpenAPI spec.
## Problem
The client generation was failing due to missing path parameter
definition for `graph_id` in the stop execution endpoint.
<img width="1412" alt="Screenshot 2025-06-19 at 9 20 17 AM"
src="https://github.com/user-attachments/assets/aa1667d3-05be-48c6-975b-84473830ac03"
/>
## Solution
Added `graph_id` as a path parameter while maintaining the existing
functionality.
## Testing
- [x] Verified OpenAPI client generation works without errors
- [x] Confirmed endpoint functionality remains unchanged
- [x] Tested API calls maintain backward compatibility
## Changes 🏗️
Migrate to [Storybook 9](https://storybook.js.org/docs/migration-guide),
changes are mostly from the migration tool:
```bash
pnpm storybook@latest upgrade
```
On top of that:
- removed stories for [shadcn](https://ui.shadcn.com/) components
- to avoid confusion: shadcn is our base for the component library and
is already documented on their website
- removed example stories
- regrouped existing `agpt-ui` stories under `Legacy`
- I need to review them and see if they still fit the expected designs
of the platform or not
<img width="600" alt="Screenshot 2025-06-17 at 13 43 57"
src="https://github.com/user-attachments/assets/ca3d9c1b-9dc4-4684-ac77-6259beeb3e1d"
/>
## Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Run `pn storybook` locally
- [x] It works well, and the stories look good
---------
Co-authored-by: Abhimanyu Yadav <122007096+Abhi1992002@users.noreply.github.com>
Requests during block execution can be throttled, and requests between
services can sometimes break. The scope of this PR is to add
appropriate retries for both.
### Changes 🏗️
* Block request retry: Retry only on throttled status codes (504, 429,
etc.).
* RPC request retry: Retry on connection issues (ConnectError, Timeout,
etc.).
* Truncate logging on executor/utils.
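The actual retry logic lives in the Python backend; as a language-agnostic
sketch of the idea (status codes and backoff values are illustrative):
```ts
// Retry only throttling-style status codes; connection errors (rejections)
// are retried the same way, with exponential backoff between attempts.
const RETRYABLE_STATUS = new Set([429, 502, 503, 504]);

async function fetchWithRetry(url: string, init?: RequestInit, attempts = 3): Promise<Response> {
  for (let i = 0; ; i++) {
    try {
      const res = await fetch(url, init);
      if (!RETRYABLE_STATUS.has(res.status) || i === attempts - 1) return res;
    } catch (err) {
      if (i === attempts - 1) throw err; // connection error on the last attempt: give up
    }
    await new Promise((r) => setTimeout(r, 2 ** i * 500)); // 0.5s, 1s, 2s, ...
  }
}
```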
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Manual graph execution
## Changes 🏗️
### ESLint Config
1. **Disabled `react-hooks/exhaustive-deps`:**
- to prevent unnecessary dependency proliferation and rely on code
review instead
2. **Added
[`next/typescript`](https://nextjs.org/docs/app/api-reference/config/eslint#with-typescript):**
- to the ESLint config to make sure we also have TS linting rules
3. **Added custom rule for `@typescript-eslint/no-unused-vars`:**
- to allow underscore-prefixed variables (the convention for
intentionally unused ones), helpful in some cases; see the config
fragment below
From now on, whenever we have unused variables or imports, the `lint` CI
will fail 🔴 , thanks to `next/typescript`, which adds
`@typescript-eslint/no-unused-vars` 💆🏽
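The relevant rule fragment probably looks roughly like this (severity and the
exact options are assumptions):
```ts
// Underscore-prefixed names are exempt from the unused-vars rule.
const rules = {
  "@typescript-eslint/no-unused-vars": [
    "error",
    { argsIgnorePattern: "^_", varsIgnorePattern: "^_" },
  ],
};
```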
### Minor Fixes
- Replaced empty interfaces with type aliases to resolve
`@typescript-eslint/no-empty-object-type` warnings
- Fixed unsafe non-null assertions with proper null checks
- Removed `@ts-ignore` comments in favour of proper type casting ( _when
possible_ 🙏🏽 )
### Google Analytics Component
- Changed Next.js Script strategy from `beforeInteractive` to
`afterInteractive` to resolve Next.js warnings
- this makes sure loading analytics does not block page render 🙏🏽 (
_better page load time_ )
### Are these changes safe?
As long as the TypeScript compiler does not complain ( check the
`type-check` job ) we should be safe. Most changes remove unused code;
if that code were used somewhere else, the compiler should catch it and
tell us 🫶
I also typed some code when possible, or bypassed the linter when I
thought it was fair for now. I disabled a couple of ESLint rules. Most
importantly the `no-explicit-any` one, as we have loads of stuff untyped
yet ( _this should be improved once API types are generated for us_ ).
### DX
Added some settings on `.vscode` folder 📁 so that files will be
formatted on save and also ESLint will fix errors on save when able 💯
### 📈 **Result:**
- ✅ All linting errors resolved
- ✅ Improved TypeScript strict mode compliance
- ✅ Better developer experience with cleaner code
## Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Lint CI job passes
- [x] There are no type errors ( _TS will catch issues related to these
changes_ )
- Follow-up fix for #9786
A change to a DB statement introduced in #9786 turns out to be breaking.
Apparently `connect` can't just be used for *some* relations: if it is
used, it must be used for *all* relations created by the statement.
### Changes 🏗️
- Fix broken DB statement in `add_store_agent_to_library(..)`
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Add store agent to library
Co-authored-by: Swifty <craigswift13@gmail.com>
## Changes 🏗️
<img width="800" alt="Screenshot 2025-06-17 at 14 11 55"
src="https://github.com/user-attachments/assets/61d5a6b9-57f7-4117-bbc6-e78c2cdc5778"
/>
Document the icons for the new design system. With the design team, it
was agreed we will settle on [phosphor
icons](https://phosphoricons.com/), so we will need to migrate
progressively out of `lucide-react`.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Run Storybook locally
- [x] Check the icons story and verify it displays well
This change introduces async execution for blocks and the execution
engine. Parallelism is achieved through single-process asynchronous
execution instead of process concurrency.
### Changes 🏗️
* Support async execution for the graph executor
* Removed process creation for node execution
* Update all blocks to support async executions
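Conceptually (the real executor is Python asyncio; this TypeScript sketch with
hypothetical names only shows the shift from process spawning to event-loop
concurrency):
```ts
type NodeExecution = { id: string };

// Hypothetical: awaits the block's async run() instead of forking a process.
async function executeNode(node: NodeExecution): Promise<void> {
  /* ... */
}

async function runReadyNodes(nodes: NodeExecution[]): Promise<void> {
  // All ready nodes progress concurrently within a single process.
  await Promise.all(nodes.map(executeNode));
}
```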
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Manual graph executions, tested many of the impacted blocks.
<!-- Clearly explain the need for these changes: -->
Doing the CASA Audit and this is something to check
### Changes 🏗️
- Limit APIs to use their specific endpoints
- Use expected trusted sources for each block and requests call
- Use cryptographically valid (constant-time) string comparisons; see
the sketch below
- Don't log secrets
<!-- Concisely describe all of the changes made in this pull request:
-->
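On the constant-time comparison point above: the backend change is in Python,
but the idea is the same everywhere; a sketch using Node's `crypto` module:
```ts
import { timingSafeEqual } from "node:crypto";

// A plain `a === b` returns early on the first mismatching byte, which can
// leak information through timing. timingSafeEqual compares every byte.
function secretsMatch(a: string, b: string): boolean {
  const bufA = Buffer.from(a);
  const bufB = Buffer.from(b);
  if (bufA.length !== bufB.length) return false; // timingSafeEqual requires equal lengths
  return timingSafeEqual(bufA, bufB);
}
```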
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Testing in dev branch once merged
---------
Co-authored-by: Swifty <craigswift13@gmail.com>
<!-- Clearly explain the need for these changes: -->
## Background & Summary of Changes
If a user has a single invalid Agent in their Library (i.e. one with a
Block which doesn't exist), the Blocks menu currently does not return
any Agent results.
Valid agents should still load even when some stored graphs are
malformed. Malformed graphs should just be skipped rather than breaking
the entire process; this PR implements that fix, unblocking users with a
malformed Agent in their Library (me!).
## Testing
I have tested this PR in the dev deployment (where I have this issue on
my account) and have confirmed that Agents now show up in the list:
| Before this Change | After this Change |
| ------------------ | ----------------- |
| _(screenshot omitted)_ | _(screenshot omitted)_ |
## Changes 🏗️
- Validate each graph's serialization in `get_graphs` and skip any that
raise an exception ( see the sketch below )
- Added error logging for invalid graphs
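A sketch of the skip-on-error shape (the actual fix is in the Python backend's
`get_graphs`; `parseGraph` and the types here are hypothetical):
```ts
type Graph = { id: string };

// Hypothetical: throws when the graph references a block that doesn't exist.
declare function parseGraph(raw: unknown): Graph;

function serializeGraphs(rawGraphs: unknown[]): Graph[] {
  const graphs: Graph[] = [];
  for (const raw of rawGraphs) {
    try {
      graphs.push(parseGraph(raw));
    } catch (err) {
      console.error("Skipping invalid graph:", err); // log and keep going
    }
  }
  return graphs; // one malformed graph no longer aborts the whole listing
}
```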
## Checklist 📋
For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] `poetry run format`
- [ ] `poetry run test`
For configuration changes:
- [x] .env.example is updated or already compatible with my changes
- [x] docker-compose.yml is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under Changes)
Fixes [OPEN-2461: Loading a Library Agent with an invalid block causes
all Library Agent Loading to fail in Builder Blocks
Menu](https://linear.app/autogpt/issue/OPEN-2461/loading-a-library-agent-with-an-invalid-block-causes-all-library-agent)
Fixes #9868
This pull request updates the `StoreCard` component in
`autogpt_platform/frontend/src/components/agptui/StoreCard.tsx` to
replace the hardcoded Tailwind CSS class `bg-white` with the more
flexible `bg-background` utility class. This change ensures better
consistency with the application's theming and makes it easier to adapt
to different color schemes, such as light and dark modes.
#### Changes:
- **Before:**
`className="... bg-white ... dark:bg-transparent ..."`

- **After:**
`className="... bg-background ... dark:bg-transparent ..."`

#### Motivation:
- Removes the white background on the cards, which wasn't part of the
designs.
No functional or visual changes are expected except for improved support
for custom themes.
---
This PR was entirely generated by an AI Agent.
**Please review and let me know if additional changes are needed!**
Co-authored-by: itsababseh <36419647+itsababseh@users.noreply.github.com>
## 🏗️ Changes
### 🧢 Authentication improvements
- Updates for [CASA compliance](https://appdefensealliance.dev/casa)
- implemented cross-tab login/logout
- logout now triggers cross-device logout
- forgot password triggers cross-device logout
- we are already able to revoke sessions given Supabase stores sessions
🙌🏽
### 📙 Cross-tab login/logout implementation
I implemented some session validation debouncing ( _2-second cooldown_ )
to prevent excessive API calls when switching tabs quickly ( _more of an
edge case, but it could happen_ ). The cross-tab implementation is done
via `localStorage` and page-visibility events.
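A minimal sketch of that mechanism (key names and the validation callback are
hypothetical):
```ts
const LOGOUT_KEY = "auth:logout"; // hypothetical localStorage key
const COOLDOWN_MS = 2_000; // the 2-second debounce mentioned above
let lastValidation = 0;

export function broadcastLogout() {
  // Writing to localStorage fires a `storage` event in every other tab.
  localStorage.setItem(LOGOUT_KEY, String(Date.now()));
}

export function listenForAuthChanges(validateSession: () => void) {
  window.addEventListener("storage", (e) => {
    if (e.key === LOGOUT_KEY) validateSession(); // another tab logged out
  });
  document.addEventListener("visibilitychange", () => {
    const now = Date.now();
    if (document.visibilityState === "visible" && now - lastValidation > COOLDOWN_MS) {
      lastValidation = now; // debounced: at most one validation per cooldown window
      validateSession();
    }
  });
}
```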
### Refactor and cleanup
Smol things to improve our auth logic on the Frontend:
- created `helpers.ts` with utilities for protected page detection,
admin page routing, and cross-tab communication
- added `STORAGE_KEYS`, `PROTECTED_PAGES`, and `ADMIN_PAGES` constants
for better organization
- refactored server-side Supabase utilities and middleware
- updated import paths to use named exports
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Cross-tab logout synchronization works correctly
- [x] Session validation debouncing prevents excessive API calls
- [x] Protected page redirects function properly
- [x] Authentication state persists correctly across tabs
- [x] Role-based access controls work as expected
- [x] Cross-device logout is performed after forgot password change
### Cross-tab login/logout
https://github.com/user-attachments/assets/5dbdd204-faa2-419f-b989-e31f69ddabd6
### Cross-device logout
https://github.com/user-attachments/assets/aac9c97a-beec-4519-a391-f94f988dc7c8
### Changes 🏗️
<img width="800" alt="Screenshot 2025-06-13 at 18 29 27"
src="https://github.com/user-attachments/assets/6a2f9c23-860f-4f92-8a7a-eeb7839940fd"
/>
- Add a nice overview page for the 👶🏽 baby AutoGPT design system
- Customise the logo on Storybook to show AutoGPT one
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Run Storybook
- [x] You see the Overview page which looks good
### Changes 🏗️
<img width="1761" alt="Screenshot 2025-06-13 at 18 40 50"
src="https://github.com/user-attachments/assets/d24a0350-a371-4067-9666-c3206aacce13"
/>
Document border radius tokens, which follow Tailwind default theme
radius scale ✅
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Run storybook
- [x] Open the Tokens / Border Radius story
- [x] Verify it makes sense
### Changes 🏗️
<img width="800" alt="Screenshot 2025-06-13 at 18 42 54"
src="https://github.com/user-attachments/assets/c1ddffb4-6898-4e2e-8961-49857c0ce65a"
/>
<img width="800" alt="Screenshot 2025-06-13 at 18 01 27"
src="https://github.com/user-attachments/assets/22c5e305-a5ed-469f-916b-38e93aba7f98"
/>
Document spacing tokens, which follow Tailwind default theme spacing
scale ✅
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Run storybook
- [x] Open the Tokens / Spacing story
- [x] Verify it makes sense
Bumps the production-dependencies group in /autogpt_platform/frontend
with 3 updates:
[@sentry/nextjs](https://github.com/getsentry/sentry-javascript),
[@supabase/supabase-js](https://github.com/supabase/supabase-js) and
[zod](https://github.com/colinhacks/zod).
Updates `@sentry/nextjs` from 9.26.0 to 9.27.0
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/getsentry/sentry-javascript/releases"><code>@sentry/nextjs</code>'s
releases</a>.</em></p>
<blockquote>
<h2>9.27.0</h2>
<ul>
<li>feat(node): Expand how vercel ai input/outputs can be set (<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/16455">#16455</a>)</li>
<li>feat(node): Switch to new semantic conventions for Vercel AI (<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/16476">#16476</a>)</li>
<li>feat(react-router): Add component annotation plugin (<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/16472">#16472</a>)</li>
<li>feat(react-router): Export wrappers for server loaders and actions
(<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/16481">#16481</a>)</li>
<li>fix(browser): Ignore unrealistically long INP values (<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/16484">#16484</a>)</li>
<li>fix(react-router): Conditionally add <code>ReactRouterServer</code>
integration (<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/16470">#16470</a>)</li>
</ul>
<h2>Bundle size 📦</h2>
<table>
<thead>
<tr>
<th>Path</th>
<th>Size</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>@sentry/browser</code></td>
<td>23.43 KB</td>
</tr>
<tr>
<td><code>@sentry/browser</code> - with treeshaking flags</td>
<td>23.2 KB</td>
</tr>
<tr>
<td><code>@sentry/browser</code> (incl. Tracing)</td>
<td>37.46 KB</td>
</tr>
<tr>
<td><code>@sentry/browser</code> (incl. Tracing, Replay)</td>
<td>74.68 KB</td>
</tr>
<tr>
<td><code>@sentry/browser</code> (incl. Tracing, Replay) - with
treeshaking flags</td>
<td>67.94 KB</td>
</tr>
<tr>
<td><code>@sentry/browser</code> (incl. Tracing, Replay with
Canvas)</td>
<td>79.33 KB</td>
</tr>
<tr>
<td><code>@sentry/browser</code> (incl. Tracing, Replay, Feedback)</td>
<td>91.13 KB</td>
</tr>
<tr>
<td><code>@sentry/browser</code> (incl. Feedback)</td>
<td>39.77 KB</td>
</tr>
<tr>
<td><code>@sentry/browser</code> (incl. sendFeedback)</td>
<td>28.03 KB</td>
</tr>
<tr>
<td><code>@sentry/browser</code> (incl. FeedbackAsync)</td>
<td>32.8 KB</td>
</tr>
<tr>
<td><code>@sentry/react</code></td>
<td>25.15 KB</td>
</tr>
<tr>
<td><code>@sentry/react</code> (incl. Tracing)</td>
<td>39.41 KB</td>
</tr>
<tr>
<td><code>@sentry/vue</code></td>
<td>27.69 KB</td>
</tr>
<tr>
<td><code>@sentry/vue</code> (incl. Tracing)</td>
<td>39.27 KB</td>
</tr>
<tr>
<td><code>@sentry/svelte</code></td>
<td>23.45 KB</td>
</tr>
<tr>
<td>CDN Bundle</td>
<td>24.88 KB</td>
</tr>
<tr>
<td>CDN Bundle (incl. Tracing)</td>
<td>37.63 KB</td>
</tr>
<tr>
<td>CDN Bundle (incl. Tracing, Replay)</td>
<td>72.66 KB</td>
</tr>
<tr>
<td>CDN Bundle (incl. Tracing, Replay, Feedback)</td>
<td>77.99 KB</td>
</tr>
<tr>
<td>CDN Bundle - uncompressed</td>
<td>72.67 KB</td>
</tr>
<tr>
<td>CDN Bundle (incl. Tracing) - uncompressed</td>
<td>111.42 KB</td>
</tr>
<tr>
<td>CDN Bundle (incl. Tracing, Replay) - uncompressed</td>
<td>222.72 KB</td>
</tr>
<tr>
<td>CDN Bundle (incl. Tracing, Replay, Feedback) - uncompressed</td>
<td>235.25 KB</td>
</tr>
<tr>
<td><code>@sentry/nextjs</code> (client)</td>
<td>41.03 KB</td>
</tr>
<tr>
<td><code>@sentry/sveltekit</code> (client)</td>
<td>37.93 KB</td>
</tr>
<tr>
<td><code>@sentry/node</code></td>
<td>146.75 KB</td>
</tr>
<tr>
<td><code>@sentry/node</code> - without tracing</td>
<td>96.03 KB</td>
</tr>
<tr>
<td><code>@sentry/aws-serverless</code></td>
<td>121.19 KB</td>
</tr>
</tbody>
</table>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/getsentry/sentry-javascript/blob/develop/CHANGELOG.md"><code>@sentry/nextjs</code>'s
changelog</a>.</em></p>
<blockquote>
<h2>9.27.0</h2>
<ul>
<li>feat(node): Expand how vercel ai input/outputs can be set (<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/16455">#16455</a>)</li>
<li>feat(node): Switch to new semantic conventions for Vercel AI (<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/16476">#16476</a>)</li>
<li>feat(react-router): Add component annotation plugin (<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/16472">#16472</a>)</li>
<li>feat(react-router): Export wrappers for server loaders and actions
(<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/16481">#16481</a>)</li>
<li>fix(browser): Ignore unrealistically long INP values (<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/16484">#16484</a>)</li>
<li>fix(react-router): Conditionally add <code>ReactRouterServer</code>
integration (<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/16470">#16470</a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="650abf323b"><code>650abf3</code></a>
release: 9.27.0</li>
<li><a
href="5a672c90ea"><code>5a672c9</code></a>
Merge pull request <a
href="https://redirect.github.com/getsentry/sentry-javascript/issues/16489">#16489</a>
from getsentry/prepare-release/9.27.0</li>
<li><a
href="9d1c05ecf3"><code>9d1c05e</code></a>
meta(changelog): Update changelog for 9.27.0</li>
<li><a
href="6d61be0337"><code>6d61be0</code></a>
fix(browser): Ignore unrealistically long INP values (<a
href="https://redirect.github.com/getsentry/sentry-javascript/issues/16484">#16484</a>)</li>
<li><a
href="0a7b915fd7"><code>0a7b915</code></a>
ref(vue): Clarify Vue tracing (<a
href="https://redirect.github.com/getsentry/sentry-javascript/issues/16487">#16487</a>)</li>
<li><a
href="b1fd4a1d47"><code>b1fd4a1</code></a>
test(vue): Add tests for Vue tracing mixins (<a
href="https://redirect.github.com/getsentry/sentry-javascript/issues/16486">#16486</a>)</li>
<li><a
href="497b76e23a"><code>497b76e</code></a>
feat(react-router): Add component annotation plugin (<a
href="https://redirect.github.com/getsentry/sentry-javascript/issues/16472">#16472</a>)</li>
<li><a
href="cfa8d41aa2"><code>cfa8d41</code></a>
feat(react-router): Export wrappers for server loaders and actions (<a
href="https://redirect.github.com/getsentry/sentry-javascript/issues/16481">#16481</a>)</li>
<li><a
href="bfe5e888b1"><code>bfe5e88</code></a>
feat(node): Expand how vercel ai input/outputs can be set (<a
href="https://redirect.github.com/getsentry/sentry-javascript/issues/16455">#16455</a>)</li>
<li><a
href="45088a2ab7"><code>45088a2</code></a>
feat(node): Switch to new semantic conventions for Vercel AI (<a
href="https://redirect.github.com/getsentry/sentry-javascript/issues/16476">#16476</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/getsentry/sentry-javascript/compare/9.26.0...9.27.0">compare
view</a></li>
</ul>
</details>
<br />
Updates `@supabase/supabase-js` from 2.49.10 to 2.50.0
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/supabase/supabase-js/releases"><code>@supabase/supabase-js</code>'s
releases</a>.</em></p>
<blockquote>
<h2>v2.50.0</h2>
<h1><a
href="https://github.com/supabase/supabase-js/compare/v2.49.10...v2.50.0">2.50.0</a>
(2025-06-06)</h1>
<h3>Bug Fixes</h3>
<ul>
<li>add <code>@solana/wallet-standard-features</code> as dev dependency
(<a
href="https://redirect.github.com/supabase/supabase-js/issues/1454">#1454</a>)
(<a
href="146c822768">146c822</a>)</li>
</ul>
<h3>Features</h3>
<ul>
<li>bump auth-js to v2.70.0 (<a
href="https://redirect.github.com/supabase/supabase-js/issues/1449">#1449</a>)
(<a
href="217473f6b4">217473f</a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="146c822768"><code>146c822</code></a>
fix: add <code>@solana/wallet-standard-features</code> as dev
dependency (<a
href="https://redirect.github.com/supabase/supabase-js/issues/1454">#1454</a>)</li>
<li><a
href="217473f6b4"><code>217473f</code></a>
feat: bump auth-js to v2.70.0 (<a
href="https://redirect.github.com/supabase/supabase-js/issues/1449">#1449</a>)</li>
<li>See full diff in <a
href="https://github.com/supabase/supabase-js/compare/v2.49.10...v2.50.0">compare
view</a></li>
</ul>
</details>
<br />
Updates `zod` from 3.25.51 to 3.25.56
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/colinhacks/zod/releases">zod's
releases</a>.</em></p>
<blockquote>
<h2>v3.25.56</h2>
<h2>Commits:</h2>
<ul>
<li>64bfb7001cf6f2575bf38b5e6130bc73b4b0e371 3.25.56</li>
</ul>
<h2>v3.25.55</h2>
<h2>Commits:</h2>
<ul>
<li>44141ea1dbd48403f14704386119884aeda5cb27 3.25.55</li>
</ul>
<h2>v3.25.54</h2>
<h2>Commits:</h2>
<ul>
<li>8ab237423cd8fdca58dc9e18f45d48d56ca2a24d fix(util): cross realm
IsPlainObject check (<a
href="https://redirect.github.com/colinhacks/zod/issues/4627">#4627</a>)</li>
<li>2be1c6ad909a9d0598d9f45fedc9038213130529 Fix generic assignability
issue. 3.25.54</li>
</ul>
<h2>v3.25.53</h2>
<h2>Commits:</h2>
<ul>
<li>a6adb148012f59d734245c637a577ed413a484e7 zod mini internals (<a
href="https://redirect.github.com/colinhacks/zod/issues/4631">#4631</a>)</li>
<li>da4f92170ac838029178c4622015dbdae4a1de7c 3.25.53</li>
</ul>
<h2>v3.25.52</h2>
<h2>Commits:</h2>
<ul>
<li>2954f40a4e41f61e835ba211ff084467dca1f41e Fix json (<a
href="https://redirect.github.com/colinhacks/zod/issues/4630">#4630</a>)</li>
<li>51dc6f9361851e64a925c3f4ee9364ce4da4c4e7 3.25.52</li>
<li>e479ea76ae1571064c3dade621b3af0ea2dff942 Add test cast for deferred
self-recursion</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="64bfb7001c"><code>64bfb70</code></a>
3.25.56</li>
<li><a
href="44141ea1db"><code>44141ea</code></a>
3.25.55</li>
<li><a
href="2be1c6ad90"><code>2be1c6a</code></a>
Fix generic assignability issue. 3.25.54</li>
<li><a
href="8ab237423c"><code>8ab2374</code></a>
fix(util): cross realm IsPlainObject check (<a
href="https://redirect.github.com/colinhacks/zod/issues/4627">#4627</a>)</li>
<li><a
href="da4f92170a"><code>da4f921</code></a>
3.25.53</li>
<li><a
href="a6adb14801"><code>a6adb14</code></a>
zod mini internals (<a
href="https://redirect.github.com/colinhacks/zod/issues/4631">#4631</a>)</li>
<li><a
href="e479ea76ae"><code>e479ea7</code></a>
Add test cast for deferred self-recursion</li>
<li><a
href="51dc6f9361"><code>51dc6f9</code></a>
3.25.52</li>
<li><a
href="2954f40a4e"><code>2954f40</code></a>
Fix json (<a
href="https://redirect.github.com/colinhacks/zod/issues/4630">#4630</a>)</li>
<li>See full diff in <a
href="https://github.com/colinhacks/zod/compare/v3.25.51...v3.25.56">compare
view</a></li>
</ul>
</details>
<br />
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Ubbe <hi@ubbe.dev>
Co-authored-by: Swifty <craigswift13@gmail.com>
<img width="1410" alt="image"
src="https://github.com/user-attachments/assets/bce407a2-96a1-42e9-9772-b49b8f20886c"
/>
### Changes 🏗️
Add the missing `send execution update` command on completed/updated
status changes for node executions.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Screenshot attached
### Changes 🏗️
<img width="800" alt="Screenshot 2025-06-10 at 14 21 48"
src="https://github.com/user-attachments/assets/d0dba02d-049d-446c-9a25-0f7cec9108bc"
/>
When logging out, I'm redirected to the `/login` page but the account
menu is still visible (even though I'm no longer logged in). Refreshing
the page fixes it.
The problem was that `supabase.logOut()` is a client-side action, and
`<Navbar />` is an RSC that fetches the session on the server to display
the state, so it doesn't know the session was invalidated on the client.
`router.refresh()` solves the issue by forcing RSCs on the page to
refetch their state and update server-side. Further reading
[here](https://nextjs.org/docs/app/deep-dive/caching#invalidation-1).
I also improved the UX by awaiting the promise and displaying a spinner
while the logout action is happening. If logout fails ( _should be very
rare_ ) I display a toast so the user isn't left hanging, wondering
what happened.
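Roughly the shape of the handler (a sketch; `router`, `toast`, and the spinner
state are assumed to come from the surrounding component):
```ts
declare const supabase: { auth: { signOut(): Promise<{ error: Error | null }> } };
declare const router: { push(path: string): void; refresh(): void };
declare const toast: (opts: { title: string; description?: string }) => void;
declare const setIsLoggingOut: (v: boolean) => void;

async function handleLogout() {
  setIsLoggingOut(true); // drives the spinner
  const { error } = await supabase.auth.signOut();
  setIsLoggingOut(false);
  if (error) {
    toast({ title: "Could not log out", description: error.message });
    return;
  }
  router.push("/login");
  router.refresh(); // forces RSCs (e.g. <Navbar />) to refetch the session
}
```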
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Login
- [x] Open account menu
- [x] Click `Logout`
- [x] I see a spinner while the action is happening
- [x] I'm redirected to `/login` and I no longer see the account menu
### Changes 🏗️
This simply adds "fal-ai/veo3" to ``FalModel`` in the
``ai_video_generator.py`` file. It also sets veo3 to always generate
videos with audio: ``generate_audio=True`` is forced whenever veo3 is
selected.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Test the veo3 model via Fal.ai and it should work.
### Changes 🏗️
Today OpenAI dropped the price of the o3 model, so this simply drops the
price from 7 to 4.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Run the platform with the new price, check the o3 model in the AI
text generator block, and see it's cheaper to use
Bumps the development-dependencies group in /autogpt_platform/frontend
with 5 updates:
| Package | From | To |
| --- | --- | --- |
| [@storybook/test-runner](https://github.com/storybookjs/test-runner) |
`0.22.0` | `0.22.1` |
|
[@types/negotiator](https://github.com/DefinitelyTyped/DefinitelyTyped/tree/HEAD/types/negotiator)
| `0.6.3` | `0.6.4` |
|
[@types/node](https://github.com/DefinitelyTyped/DefinitelyTyped/tree/HEAD/types/node)
| `22.15.29` | `22.15.30` |
| [msw](https://github.com/mswjs/msw) | `2.9.0` | `2.10.2` |
|
[msw-storybook-addon](https://github.com/mswjs/msw-storybook-addon/tree/HEAD/packages/msw-addon)
| `2.0.4` | `2.0.5` |
Updates `@storybook/test-runner` from 0.22.0 to 0.22.1
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/storybookjs/test-runner/releases"><code>@storybook/test-runner</code>'s
releases</a>.</em></p>
<blockquote>
<h2>v0.22.1</h2>
<h4>🐛 Bug Fix</h4>
<ul>
<li>Patch: Add telemetry to test run <a
href="https://redirect.github.com/storybookjs/test-runner/pull/565">#565</a>
(<a href="https://github.com/yannbf"><code>@yannbf</code></a>)</li>
</ul>
<h4>Authors: 1</h4>
<ul>
<li>Yann Braga (<a
href="https://github.com/yannbf"><code>@yannbf</code></a>)</li>
</ul>
<h2>v0.22.1-next.0</h2>
<h4>🐛 Bug Fix</h4>
<ul>
<li>Replace <code>@storybook/csf</code> with storybook's internal csf
implementation <a
href="https://redirect.github.com/storybookjs/test-runner/pull/556">#556</a>
(<a href="https://github.com/yannbf"><code>@yannbf</code></a>)</li>
</ul>
<h4>Authors: 1</h4>
<ul>
<li>Yann Braga (<a
href="https://github.com/yannbf"><code>@yannbf</code></a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/storybookjs/test-runner/blob/v0.22.1/CHANGELOG.md"><code>@storybook/test-runner</code>'s
changelog</a>.</em></p>
<blockquote>
<h1>v0.22.1 (Sat Jun 07 2025)</h1>
<h4>🐛 Bug Fix</h4>
<ul>
<li>Patch: Add telemetry to test run <a
href="https://redirect.github.com/storybookjs/test-runner/pull/565">#565</a>
(<a href="https://github.com/yannbf"><code>@yannbf</code></a>)</li>
</ul>
<h4>Authors: 1</h4>
<ul>
<li>Yann Braga (<a
href="https://github.com/yannbf"><code>@yannbf</code></a>)</li>
</ul>
<hr />
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="ed37196132"><code>ed37196</code></a>
Bump version to: 0.22.1 [skip ci]</li>
<li><a
href="df34390ef7"><code>df34390</code></a>
Update CHANGELOG.md [skip ci]</li>
<li><a
href="0b60c084fe"><code>0b60c08</code></a>
Merge pull request <a
href="https://redirect.github.com/storybookjs/test-runner/issues/565">#565</a>
from storybookjs/telemetry-main</li>
<li><a
href="4baeaf3d3d"><code>4baeaf3</code></a>
Add telemetry to test run</li>
<li>See full diff in <a
href="https://github.com/storybookjs/test-runner/compare/v0.22.0...v0.22.1">compare
view</a></li>
</ul>
</details>
<br />
Updates `@types/negotiator` from 0.6.3 to 0.6.4
<details>
<summary>Commits</summary>
<ul>
<li>See full diff in <a
href="https://github.com/DefinitelyTyped/DefinitelyTyped/commits/HEAD/types/negotiator">compare
view</a></li>
</ul>
</details>
<br />
Updates `@types/node` from 22.15.29 to 22.15.30
<details>
<summary>Commits</summary>
<ul>
<li>See full diff in <a
href="https://github.com/DefinitelyTyped/DefinitelyTyped/commits/HEAD/types/node">compare
view</a></li>
</ul>
</details>
<br />
Updates `msw` from 2.9.0 to 2.10.2
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/mswjs/msw/releases">msw's
releases</a>.</em></p>
<blockquote>
<h2>v2.10.2 (2025-06-09)</h2>
<h3>Bug Fixes</h3>
<ul>
<li><strong>TypeScript:</strong> support <code>Response.error()</code>
and <code>HttpResponse.error()</code> as mocked responses (<a
href="https://redirect.github.com/mswjs/msw/issues/2132">#2132</a>)
(72cc8ddac8f030f747b674148b03e5a025e412d2) <a
href="https://github.com/jacquesg"><code>@jacquesg</code></a> <a
href="https://github.com/kettanaito"><code>@kettanaito</code></a></li>
</ul>
<h2>v2.10.1 (2025-06-07)</h2>
<h3>Bug Fixes</h3>
<ul>
<li>update <code>@mswjs/interceptors</code> to support WebSocket server
protocol (<a
href="https://redirect.github.com/mswjs/msw/issues/2528">#2528</a>)
(6704fa042a3eaa71b68eb7b9028a7464b2b30cef) <a
href="https://github.com/kettanaito"><code>@kettanaito</code></a></li>
</ul>
<h2>v2.10.0 (2025-06-07)</h2>
<h3>Features</h3>
<ul>
<li><strong>WebSocketHandler:</strong> add <code>run</code> method (<a
href="https://redirect.github.com/mswjs/msw/issues/2527">#2527</a>)
(94fc78ea50bd8c3334945d3047650c8b82c2f754) <a
href="https://github.com/kettanaito"><code>@kettanaito</code></a></li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="a30cdf591e"><code>a30cdf5</code></a>
chore(release): v2.10.2</li>
<li><a
href="72cc8ddac8"><code>72cc8dd</code></a>
fix(TypeScript): support <code>Response.error()</code> and
<code>HttpResponse.error()</code> as moc...</li>
<li><a
href="d38097f162"><code>d38097f</code></a>
chore(release): v2.10.1</li>
<li><a
href="6704fa042a"><code>6704fa0</code></a>
fix: update <code>@mswjs/interceptors</code> to support WebSocket server
protocol (<a
href="https://redirect.github.com/mswjs/msw/issues/2528">#2528</a>)</li>
<li><a
href="dce459e32c"><code>dce459e</code></a>
chore(release): v2.10.0</li>
<li><a
href="94fc78ea50"><code>94fc78e</code></a>
feat(WebSocketHandler): add <code>run</code> method (<a
href="https://redirect.github.com/mswjs/msw/issues/2527">#2527</a>)</li>
<li><a
href="ca9d87768e"><code>ca9d877</code></a>
chore: run preview release on all pull requests (<a
href="https://redirect.github.com/mswjs/msw/issues/2526">#2526</a>)</li>
<li>See full diff in <a
href="https://github.com/mswjs/msw/compare/v2.9.0...v2.10.2">compare
view</a></li>
</ul>
</details>
<br />
Updates `msw-storybook-addon` from 2.0.4 to 2.0.5
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/mswjs/msw-storybook-addon/releases">msw-storybook-addon's
releases</a>.</em></p>
<blockquote>
<h2>v2.0.5</h2>
<h4>🐛 Bug Fix</h4>
<ul>
<li>fix: updates types to increase compatibility with storybook types <a
href="https://redirect.github.com/mswjs/msw-storybook-addon/pull/169">#169</a>
(<a
href="https://github.com/rhuanbarreto"><code>@rhuanbarreto</code></a>)</li>
</ul>
<h4>Authors: 1</h4>
<ul>
<li>Rhuan Barreto (<a
href="https://github.com/rhuanbarreto"><code>@rhuanbarreto</code></a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/mswjs/msw-storybook-addon/blob/main/packages/msw-addon/CHANGELOG.md">msw-storybook-addon's
changelog</a>.</em></p>
<blockquote>
<h1>v2.0.5 (Thu Jun 05 2025)</h1>
<h4>🐛 Bug Fix</h4>
<ul>
<li>fix: updates types to increase compatibility with storybook types <a
href="https://redirect.github.com/mswjs/msw-storybook-addon/pull/169">#169</a>
(<a
href="https://github.com/rhuanbarreto"><code>@rhuanbarreto</code></a>)</li>
</ul>
<h4>Authors: 1</h4>
<ul>
<li>Rhuan Barreto (<a
href="https://github.com/rhuanbarreto"><code>@rhuanbarreto</code></a>)</li>
</ul>
<hr />
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="ec35e9371f"><code>ec35e93</code></a>
Update CHANGELOG.md [skip ci]</li>
<li><a
href="d67ffc6b26"><code>d67ffc6</code></a>
fix: updates types to increase compatibility with storybook types (<a
href="https://github.com/mswjs/msw-storybook-addon/tree/HEAD/packages/msw-addon/issues/169">#169</a>)</li>
<li>See full diff in <a
href="https://github.com/mswjs/msw-storybook-addon/commits/v2.0.5/packages/msw-addon">compare
view</a></li>
</ul>
</details>
<br />
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Swifty <craigswift13@gmail.com>
Co-authored-by: Lluis Agusti <hi@llu.lu>
Co-authored-by: Ubbe <hi@ubbe.dev>
### Changes 🏗️
<img width="700" alt="Screenshot 2025-06-09 at 17 01 59"
src="https://github.com/user-attachments/assets/f2b0a3a6-fdf1-4e3e-9caa-d2bf03543dab"
/>
<img width="700" alt="Screenshot 2025-06-09 at 17 02 06"
src="https://github.com/user-attachments/assets/36e27a0b-07f2-4074-8628-cb236d75e4c4"
/>
This PR introduces a comprehensive Typography System for our design
system with improved documentation and developer experience [matching
what we have on
Figma](https://www.figma.com/design/nO9NFynNuicLtkiwvOxrbz/AutoGPT-Design-System?m=dev).
#### **Typography System**
- Created `<Text />` component
- Enforce its usage to ensure consistent typographic styles across the
app
```tsx
<Text variant="h1">Heading 1</Text>
<Text variant="h2">Heading 2</Text>
<Text variant="body">hello world</Text>
<Text variant="small">smol text</Text>
```
- Created `Typography.stories.tsx` on Storybook
- Complete typography overview with font showcases and usage guidelines
#### **Storybook Improvements**
- **Updated TypeScript docgen** configuration for better prop extraction
- **Cleaned up story rendering** to prevent MDX styling pollution
- **Split large story files** into focused, maintainable components
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
**Test Plan:**
- [x] Typography stories render correctly in Storybook
- [x] All Text component variants display properly
- [x] Interactive playground controls function correctly
- [x] No TypeScript or linting errors
---------
Co-authored-by: Swifty <craigswift13@gmail.com>
Allowing depth-first execution will unlock lower processing latency and
a better sense of progress.
<img width="950" alt="image"
src="https://github.com/user-attachments/assets/e2a0e11a-8bc5-4a65-a10d-b5b6c6383354"
/>
### Changes 🏗️
* Prioritize adding a new execution over processing execution output.
* Make sure to enqueue each node once when processing output, instead of
draining a single node and moving on.
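One way to picture the change (a sketch with hypothetical names; treating the
ready set as a stack makes traversal depth-first):
```ts
type NodeId = string;

const ready: NodeId[] = []; // used as a stack (LIFO) rather than a queue

function enqueueReady(nodes: NodeId[]) {
  // Each node produced by an output batch is pushed once, instead of
  // draining a single node's outputs completely before moving on.
  ready.push(...nodes);
}

function nextToRun(): NodeId | undefined {
  // LIFO: the newest execution runs first, so a branch completes before
  // its siblings (depth-first order).
  return ready.pop();
}
```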
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Run company follower count finder agent.
---------
Co-authored-by: Swifty <craigswift13@gmail.com>
## Changes 🏗️
We were getting the following warnings in the console when running the
local server:
```
⚠ ./node_modules/.pnpm/@sentry+node@9.26.0/node_modules/@sentry/node/build/cjs/sdk
Package import-in-the-middle can't be external
The request import-in-the-middle matches serverExternalPackages (or the default list).
The request could not be resolved by Node.js from the project directory.
Packages that should be external need to be installed in the project directory, so they can be resolved from the output files.
Try to install it into the project directory by running npm install import-in-the-middle from the project directory.
```
### Why were the warnings happening?
The Sentry SDK for Next.js tries to hook into Node.js internals using
packages like `import-in-the-middle` and `require-in-the-middle`. When
Sentry is imported at the top level of a file (even if not enabled), it
tries to load these dependencies; if they are not installed, we get
these warnings.
### Why does installing the packages fix it?
By installing `import-in-the-middle` and `require-in-the-middle` as dev
dependencies, Sentry finds them and the warnings disappear. This is a
safe workaround for local/dev, and does not affect production.
### Loading Sentry conditionally
One way to avoid these warnings ⚠️ is by loading Sentry conditionally.
That is the approach I took in an earlier PR. However I realised that it
would have to apply to any file importing Sentry:
```ts
import * as Sentry from "@sentry/nextjs";
```
which would end up quite messy, affecting a lot of files. I realised
installing the packages is just simpler ( _they won't be in the bundle
or affect page load_ ) and using `enabled` in the Sentry initialisation
is also cleaner.
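A sketch of the `enabled` approach (the exact gating condition is an
assumption):
```ts
import * as Sentry from "@sentry/nextjs";

Sentry.init({
  dsn: process.env.NEXT_PUBLIC_SENTRY_DSN,
  // Reporting is gated here instead of conditionally importing the SDK.
  enabled: process.env.NODE_ENV === "production",
});
```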
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] There are no Sentry warnings when running the local dev server (
with or without `--turbo` )
- [x] Sentry still reports issues in production or staging ( _but not
locally_ )
## Changes
- log helpful hints when metrics fail to record
- clarify API key errors in v1 router
- improve Postmark unsubscribe and webhook logs
- surface actionable feedback across integrations and store APIs
- handle Otto proxy failures with guidance
## Checklist
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan
There are a few UI bugs on the builder that this PR addresses.
<img width="554" alt="image"
src="https://github.com/user-attachments/assets/1be70197-de7e-40fe-ab11-405c145e763d"
/>
### Changes 🏗️
Fix these UI issues:
* (screenshot attached above) Key-value input width was unintentionally
maxed out due to a stale CSS rule.
* When multiple executions within the same node are running, we pick the
latest status, making one running and one completed execution display
as completed.
* "No balance" errors were only displayed once at least one node
execution was triggered; they can instead be raised directly when the
execution request is triggered.
* Run & Stop button glitch: the button still shows as stopped while the
graph is running, because the UI code tracks execution at the node
level instead of the graph level.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Manual tests on the described behaviours.
## Changes 🏗️
<img width="500" alt="Screenshot 2025-06-05 at 16 24 35"
src="https://github.com/user-attachments/assets/ccf51917-68fb-4538-bfd9-7ab8bc8ce33a"
/>
<br /><br />
- Allow users to sign in or sign up with Google (SSO), via Supabase
- Prevent password login or signup with `@agpt.co` emails
- Refactor/simplify the login/signup page logic by splitting rendering
and hook logic (
[explanation](https://github.com/Significant-Gravitas/AutoGPT/pull/10117#discussion_r2128793394)
)
### Moved the `createUser` logic to the callback
`api.createUser()` was being called **before** the OAuth flow completes.
I moved `api.createUser()` from `providerLogin` to the **callback
handler**, where the session is established, to make sure it only
happens once we get the OK from Google session-wise.
## Checklist 📋
### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [ ] I have tested my changes according to the test plan:
- [ ] Run this PR on a preview
- [ ] The "Login with Google" button is visible
- [ ] Login/signup with Google works
- [ ] You can't login or signup with password using an `@agpt.co` email
### Changes 🏗️
Change ``NEXT_PUBLIC_DISABLE_TURNSTILE`` to ``NEXT_PUBLIC_TURNSTILE``
and set the Turnstile captcha to be hidden by default:
- if ``NEXT_PUBLIC_TURNSTILE=enabled`` is set, the captcha will work and
show on the frontend login/signup/password-reset pages
- if ``NEXT_PUBLIC_TURNSTILE=disabled`` is set, the captcha will be
hidden from the frontend and is not needed to log in/sign up/reset a
password
- if ``NEXT_PUBLIC_TURNSTILE`` is not set, the captcha will likewise be
hidden and not required
This means users who set up AutoGPT locally will not need to change the
env to hide the captcha.
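The gating logic boils down to a single check (a sketch; the helper name is
hypothetical):
```ts
// Only an explicit "enabled" turns the captcha on; unset and "disabled"
// both hide it, so local setups need no extra configuration.
export function isTurnstileEnabled(): boolean {
  return process.env.NEXT_PUBLIC_TURNSTILE === "enabled";
}
```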
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] start the platform with ``NEXT_PUBLIC_TURNSTILE=enabled`` and the
captcha shows
- [x] start with ``NEXT_PUBLIC_TURNSTILE=disabled`` and the captcha does
not show and is not needed to log in, etc.
- [x] start with ``NEXT_PUBLIC_TURNSTILE`` not set and the captcha does
not show and is not needed to log in, etc.
This pull request introduces a comprehensive backend testing guide and
adds new tests for analytics logging and various API endpoints, focusing
on snapshot testing. It also includes corresponding snapshot files for
these tests. Below are the most significant changes:
### Documentation Updates:
* Added a detailed `TESTING.md` file to the backend, providing a guide
for running tests, snapshot testing, writing API route tests, and best
practices. It includes examples for mocking, fixtures, and CI/CD
integration.
### Analytics Logging Tests:
* Implemented tests for logging raw metrics and analytics in
`analytics_test.py`, covering success scenarios, various input values,
invalid requests, and complex nested data. These tests utilize snapshot
testing for response validation.
* Added snapshot files for analytics logging tests, including responses
for success cases, various metric values, and complex analytics data.
### Snapshot Files for API Endpoints:
* Added snapshot files for various API endpoint tests, such as:
- Graph-related operations (`graphs_get_single_response`,
`graphs_get_all_response`, `blocks_get_all_response`).
- User-related operations (`auth_get_or_create_user_response`,
`auth_update_email_response`).
[[1]](diffhunk://#diff-49e65ab1eb6af4d0163a6c54ed10be621ce7336b2ab5d47d47679bfaefdb7059R1-R5)
[[2]](diffhunk://#diff-ac1216f96878bd4356454c317473654d5d5c7c180125663b80b0b45aa5ab52cbR1-R3)
- Credit-related operations (`credits_get_balance_response`,
`credits_get_auto_top_up_response`, `credits_top_up_request_response`).
[[1]](diffhunk://#diff-189488f8da5be74d80ac3fb7f84f1039a408573184293e9ba2e321d535c57cddR1-R3)
[[2]](diffhunk://#diff-ba3c4a6853793cbed24030cdccedf966d71913451ef8eb4b2c4f426ef18ed87aR1-R4)
[[3]](diffhunk://#diff-43d7daa0c82070a9b6aee88a774add8e87533e630bbccbac5a838b7a7ae56a75R1-R3)
- Graph execution and deletion (`blocks_execute_response`,
`graphs_delete_response`).
[[1]](diffhunk://#diff-a2ade7d646ad85a2801e7ff39799a925a612548a1cdd0ed99b44dd870d1465b5R1-R12)
[[2]](diffhunk://#diff-c0d1cd0a8499ee175ce3007c3a87ba5f3235ce02d38ce837560b36a44fdc4a22R1-R3)
## Summary
- add pytest-snapshot to backend dev requirements
- snapshot server route response JSONs
- mention how to update stored snapshots
## Testing
- `poetry run format`
- `poetry run test`
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Run `poetry run test`
---------
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
### Changes 🏗️
<img width="600" alt="Screenshot_2025-06-06_at_3 00 40_PM"
src="https://github.com/user-attachments/assets/2793793e-356f-47b0-8624-9d73af414ff3"
/>
☝🏽 Fix the following warning that gets logged to the console when
running the dev server in the frontend. It shouldn't cause an actual
auth issue, as Next.js made sure `cookies` can still be called
synchronously; however, it is safer if we just migrate our calls to
`cookies` to be async 🙏🏽
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] There are no cookie warnings when running the FE dev server locally
- [x] Authentication works as expected
This is to fix turnstile not resetting properly on failed login/register
### Changes 🏗️
Calling `turnstile.reset()` directly sometimes fails, so I have made a
`resetCaptcha` function that forces a full reset of the Turnstile
widget. This should prevent getting stuck after failing to log in the
first time.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Test logging in with a wrong email/password: it should fail with
"Invalid login credentials" and you should see the Turnstile token
refresh; try logging in again with correct info and it should work
without getting stuck
Update the Ollama docs to add info on how to set up Ollama environment
variables for proper access.
This includes setting the `OLLAMA_HOST` env var to the IP and port
`0.0.0.0:11434`, which makes Ollama accessible to the AutoGPT instance
running inside Docker.
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Follow the latest setup to test Ollama to make sure it works
## 🤖 Installing Claude Code GitHub App
This PR adds a GitHub Actions workflow that enables Claude Code
integration in our repository.
### What is Claude Code?
[Claude Code](https://claude.ai/code) is an AI coding agent that can
help with:
- Bug fixes and improvements
- Documentation updates
- Implementing new features
- Code reviews and suggestions
- Writing tests
- And more!
### How it works
Once this PR is merged, we'll be able to interact with Claude by
mentioning @claude in a pull request or issue comment.
Once the workflow is triggered, Claude will analyze the comment and
surrounding context, and execute on the request in a GitHub action.
### Important Notes
- **This workflow won't take effect until this PR is merged**
- **@claude mentions won't work until after the merge is complete**
- The workflow runs automatically whenever Claude is mentioned in PR or
issue comments
- Claude gets access to the entire PR or issue context including files,
diffs, and previous comments
### Security
- Our Anthropic API key is securely stored as a GitHub Actions secret
- Only users with write access to the repository can trigger the
workflow
- All Claude runs are stored in the GitHub Actions run history
- Claude's default tools are limited to reading/writing files and
interacting with our repo by creating comments, branches, and commits.
- We can add more allowed tools by adding them to the workflow file
like:
```
allowed_tools: Bash(npm install),Bash(npm run build),Bash(npm run lint),Bash(npm run test)
```
There's more information in the [Claude Code
documentation](http://docs.anthropic.com/s/claude-code-github-actions).
After merging this PR, let's try mentioning @claude in a comment on any
PR to get started!
We're encountering a hydration warning on the frontend due to a mismatch
in CSS module hashes. This happens because auto-generated classnames
from `next/font` and `geist` differ between server-side and client-side
rendering. The inconsistency triggers a warning when the client
rehydrates the server-rendered HTML.

### Changes 🏗️
Since the mismatch only affects the `<html>` tag and has no visible
impact on the UI, the most straightforward workaround is to suppress the
warning and still take advantage of `next/font` optimisations.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Run the frontend locally
- [x] Page loads without hydration warnings
---------
Co-authored-by: Abhimanyu Yadav <122007096+Abhi1992002@users.noreply.github.com>
- Part of #9307
- ❗ Blocks #9541
### Changes 🏗️
Backend:
- Fix+improve `GET /library/presets` (`list_presets`) endpoint
- Fix pagination
- Add `graph_id` filter parameter
- Allow partial preset updates: `PUT /presets/{preset_id}` -> `PATCH
/presets/{preset_id}`
- Allow creating preset from graph execution through `POST /presets`
- Clean up models & DB functions
- Split `upsert_preset` into `create_preset` + `update_preset`
- Add `LibraryAgentPresetUpdatable`
- Replace `CreateLibraryAgentPresetRequest` with
`LibraryAgentPresetCreatable`
- Use `LibraryAgentPresetCreatable` as base class for
`LibraryAgentPreset`
- Remove redundant `set_is_deleted_for_library_agent(..)`
- Improve log statements
- Improve DB statements (e.g. by using unique keys where possible)
Frontend:
- Add timestamp parsing logic to library agent preset endpoints
- Brand `LibraryAgentPreset.id` + references
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] CI green
- Since these changes don't affect existing front-end functionality, no
additional testing is needed.
**Goal**: Allow parallel runs within a single node. Currently, we
prevent this to avoid unexpected ordering of the execution.
### Changes 🏗️
#### Executor changes
We decoupled the node-execution output processing, which is responsible
for deciding the next executions, from the node executor code.
Currently, `execute_node` does two big things:
* Runs the block's `execute(...)` (which yields outputs).
* Immediately enqueues the next nodes based on those outputs.
This PR makes (as sketched below):
* `execute_node(node_exec)` -> stream of `(output_name, data)`: it
purely runs the block and yields each output as soon as it's available.
* `_enqueue_next_nodes` move into the graph executor, so the next
execution is handled serially by the graph executor to avoid concurrency
issues.
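A hedged sketch of the split, with simplified names (the real code paths
and attribute layout differ):

```python
from typing import Any, Generator, Tuple

def execute_node(node_exec) -> Generator[Tuple[str, Any], None, None]:
    """Purely run the block; yield each output as soon as it's available."""
    block = node_exec.node.block  # assumed attribute layout
    yield from block.execute(node_exec.inputs)

def run_node(graph_executor, node_exec) -> None:
    """Graph-executor side: consume outputs and enqueue follow-ups."""
    for output_name, data in execute_node(node_exec):
        graph_executor.report_output(node_exec, output_name, data)
        # Next-node resolution now lives here, executed serially by the
        # graph executor to avoid concurrency issues between node workers.
        graph_executor.enqueue_next_nodes(node_exec, output_name, data)
```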
#### UI changes
The executor change also fixes the behavior of execution updates to the
UI: we now report each execution output to the UI as soon as it is
available, not when the node execution is fully completed.
This, however, broke the bead calculation logic, which assumed that
execution updates never overlap. So the change in this PR makes the
bead calculation take overlapping/duplicated execution updates into
account, and simplifies the overall calculation logic.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Execute this agent and observe its concurrency ordering
<img width="1424" alt="image"
src="https://github.com/user-attachments/assets/0fe8259f-9091-4ecc-b824-ce8e8819c2d2"
/>
- Fixes #9215
- [x] ⚠️ Merge first:
https://github.com/Significant-Gravitas/AutoGPT_cloud_infrastructure/pull/93
### Changes 🏗️
- Use `FRONTEND_BASE_URL` instead of Host header to make password reset
redirect link
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Resetting password gives an e-mail with a link that points to the
correct URL
#### For configuration changes:
- [x] `.env.example` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
<!-- Clearly explain the need for these changes: -->
### Changes 🏗️
This PR updates the .npmrc file to improve dependency consistency across
local and CI environments when using pnpm.
Specifically:
- `save-exact=true` ensures all dependencies are pinned to exact
versions, preventing version drift ( _especially important for tools
like `prettier`, where even minor changes can lead to inconsistent
formatting in commits_ ).
This change aims to reduce formatting discrepancies and improve
reproducibility across machines and contributors.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] formatting & lint passes on the CI
This simply adds an env var to hide Turnstile for dev deploys.
If `NEXT_PUBLIC_DISABLE_TURNSTILE` is set to `false` (which it is by
default), Turnstile is shown; if it is set to `true`, Turnstile is
hidden.
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Log in/register with `NEXT_PUBLIC_DISABLE_TURNSTILE=false`: you
see the Turnstile and it is needed to log in/sign up
- [x] Log in/register with `NEXT_PUBLIC_DISABLE_TURNSTILE=true`: the
Turnstile is gone and you can log in/sign up without it
<!-- Clearly explain the need for these changes: -->
## Changes 🏗️
### Before
<img width="200" alt="image"
src="https://github.com/user-attachments/assets/8a8e1818-6b8c-4d86-a2b1-a474ba27a6de"
/>
### After
<img width="200" alt="Screenshot 2025-06-05 at 15 26 45"
src="https://github.com/user-attachments/assets/cc28eaeb-626b-46a8-a726-c157b2471ca9"
/>
### Adjustments
- Adjusted the padding around the account menu
- Made the username display nicely on top of the account name
- Both username and account name are trimmed with `...` if they are too
long
## Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Login
- [x] Open account menu ( top-right )
- [x] The alignment of items is right
- [x] Long usernames are handled nicely
- [x] Username and display name don't overlap
- Fixes #10085
### Changes 🏗️
- Remove redirect from `sendResetEmail` server action
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Reset password form exits loading state after request completes
Currently, the `get_graph` function with no graph version specifier
will try to fetch the latest version; when the graph is not owned by
the user and the latest version is not available for listing, it
returns `None` instead of picking the latest graph version available on
the store.
### Changes 🏗️
Instead of using the latest `graph.version` to fetch the store listing,
don't provide any version filter at all and pick whatever version is
available on the store.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] CI, existing tests
- Resolves #10008
### Changes 🏗️
- Update `useSupabase` hook to propagate auth state changes
- Refresh `CredentialsProvider` whenever the user login state changes
- Add `logOut` callback to `useSupabase` hook that handles (client-side)
logout
- Remove server-side `logout` action: the Supabase reference
implementation does it client-side, and doing both causes a race
condition
Refactorings to aid implementation of the above:
- Move `@/hooks/useSupabase` -> `@/lib/supabase/useSupabase`
Other improvements:
- Clean up `login` server action based on reference implementation
- Make `BackendAPI.isAuthenticated()` more efficient and faster
- Remove unused `ProfileDropdown` component
- Improve logic and debug logging in `tests/pages/login.page.ts`
- Improve playwright test output logging
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- Log out from account (A)
- Log in to other account (B)
- Open builder, add a block for which account B has (multiple)
credentials
- [x] Credentials for account B are shown
- [x] Credentials for account A are *not* shown
**Note: do not reload the page** while going through these steps
<!-- Clearly explain the need for these changes: -->
Fixes #10098
### Changes 🏗️
tells dependabot to ignore poetry
<!-- Concisely describe all of the changes made in this pull request:
-->
### Checklist 📋
N/A
Suppose we have a pin with `list[list[int]]` type, and we want to
directly insert a new value at the first index of the first list (e.g.
`list[0][0] = X`) through a dynamic pin. This is translated into
`list_$_0_$_0`, which the system does not currently support.
### Changes 🏗️
Add support for nested dynamic pins for list, object, and dict.
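As a rough illustration of the pin-name decoding (the `_$_` separator
comes from the example above; the helper itself is hypothetical, not
the actual implementation):

```python
def parse_dynamic_pin(pin_name: str):
    """Split e.g. 'list_$_0_$_0' into a base pin name and a key path."""
    base, *segments = pin_name.split("_$_")
    # Numeric segments index into lists; others key into dicts/objects.
    keys = [int(seg) if seg.isdigit() else seg for seg in segments]
    return base, keys

assert parse_dynamic_pin("list_$_0_$_0") == ("list", [0, 0])
assert parse_dynamic_pin("dict_$_values_$_2") == ("dict", ["values", 2])
```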
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] lots of unit tests
- [x] Tried inserting the value directly on the `value` nested field on
Google Sheets Write block.
<img width="371" alt="image"
src="https://github.com/user-attachments/assets/0a5e7213-b0e0-4fce-9e89-b39f7a583582"
/>
- Resolves #10041
Upgrading to Next v15 isn't trivial, but still a good idea if not simply
necessary.
### Changes 🏗️
- Upgrade Next.js from `v14.2.26` to `v15.3.2`
- Fix usage of now-async APIs `params`, `searchParams`, `headers`
([docs](https://nextjs.org/docs/app/guides/upgrading/version-15#async-request-apis-breaking-change))
- Update Next+TypeScript configs
- Set build target to ES2022 for better performance
- Unignore TypeScript build errors
- Set `hideSourceMaps: false` because OSS FTW :)
- Remove obsolete `webpack.config.js`
- Fix existing warnings/errors
- Fix Sentry missing navigation hook warning
- Fix `DYNAMIC_SERVER_USAGE` build error on `/profile` and
`/marketplace`
- Moved `getStoreData` to a proper server action `getMarketplaceData`
- Fix breaking CSS syntax error in `customnode.css`
- Use Turbopack for faster dev+test build times
- Add separate build step to frontend CI workflow: this also fixes the
test timing out if the build takes too long
Other technical improvements:
- Wrap `handleSortChange` in `MarketplaceSearchPage` in `useCallback`
- Fix typing in `ProfileInfoForm`
- Improve output of frontend tests
> [!NOTE]
> Next prints this error (in dev mode):
> ```
> Error: Route "/marketplace" used `cookies().getAll()`. `cookies()`
should be awaited before using its value. Learn more:
https://nextjs.org/docs/messages/sync-dynamic-apis
> at Object.getAll (src/lib/supabase/getServerSupabase.ts:16:31)
> at getAll (../../src/cookies.ts:115:41)
> ...
> ```
> As far as I can see, this isn't breaking, and will become a warning in
prod. See also [the Next docs about this
issue](https://nextjs.org/docs/messages/sync-dynamic-apis)
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Follow the full release QA test script
#### For configuration changes:
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
---------
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
### Changes 🏗️
#### Before
<img width="800" alt="Screenshot 2025-06-03 at 16 54 36"
src="https://github.com/user-attachments/assets/2a69b69d-2b01-436e-aab3-8206485a001c"
/>
#### After
<img width="800" alt="Screenshot 2025-06-03 at 16 58 38"
src="https://github.com/user-attachments/assets/4daf41d4-42ce-4119-8e9f-b2b10b524cba"
/>
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [ ] checkout this branch ( _we should have PR previews for the app and
Storybook_ )
- [ ] `cd autogpt_platform/frontend`
- [ ] `yarn storybook`
- [ ] the stories load with the right font ( Poppins ) not a serif one 😄
#### For configuration changes:
- [ ] ~~`.env.example` is updated or already compatible with my
changes~~
- [ ] ~~`docker-compose.yml` is updated or already compatible with my
changes~~
- [ ] ~~I have included a list of my configuration changes in the PR
description (under **Changes**)~~
## 🧢 Overview
This PR migrates the AutoGPT Platform frontend from [yarn
1](https://classic.yarnpkg.com/lang/en/) to [pnpm](https://pnpm.io/)
using **corepack** for automatic package manager management.
**yarn1** is no longer maintained and a bit old; by moving to **pnpm**
we get:
- ⚡ Significantly faster install times
- 💾 Better disk space efficiency
- 🛠️ Better community support and maintenance
- 💆🏽♂️ A very easy config swap
## 🏗️ Changes
### Package Management Migration
- updated [corepack](https://github.com/nodejs/corepack) to use
[pnpm](https://pnpm.io/)
- Deleted `yarn.lock` and generated new `pnpm-lock.yaml`
- Updated `.gitignore`
### Documentation Updates
- `frontend/README.md`:
- added comprehensive tech stack overview with links
- updated all commands to use pnpm
- added corepack setup instructions
- and included migration disclaimer for yarn users
- `backend/README.md`:
- Updated installation instructions to use pnpm with corepack
- `AGENTS.md`:
- Updated testing commands from yarn to pnpm
### CI/CD & Infrastructure
- **GitHub Workflows** :
- updated all jobs to use pnpm with corepack enable
- cleaned FE Playwright test workflow to avoid Sentry noise
- **Dockerfile**:
- updated to use pnpm with corepack, changed lock file reference, and
updated cache mount path
### 📋 Checklist
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
**Test Plan:**
> assuming you are on the `frontend` folder
- [x] Clean installation works: `rm -rf node_modules && corepack enable
&& pnpm install`
- [x] Development server starts correctly: `pnpm dev`
- [x] Build process works: `pnpm build`
- [x] Linting and formatting work: `pnpm lint` and `pnpm format`
- [x] Type checking works: `pnpm type-check`
- [x] Tests run successfully: `pnpm test`
- [x] Storybook starts correctly: `pnpm storybook`
- [x] Docker build succeeds with new pnpm configuration
- [x] GitHub Actions workflow passes with pnpm commands
#### For configuration changes:
- [x] `.env.example` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
This hotfix adds a GitHub Actions workflow to automate deployment and
undeployment of platform PRs to the development environment through
repository dispatch events.
### Changes 🏗️
- **Added new GitHub Actions workflow**:
`.github/workflows/platform-dev-deploy-event-dispatcher.yml`
- Handles `!deploy` and `!undeploy` comment commands on PRs
- Implements permission checks (only owners, members, and collaborators
can deploy)
- Automatically undeploys when PRs are closed with active deployments
- Dispatches events to the cloud infrastructure repository for actual
deployment/undeployment
- Provides user feedback through GitHub comments
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Verify workflow triggers on PR comment creation
- [x] Test permission validation for unauthorized users
- [x] Confirm `!deploy` command dispatches correct event payload
- [x] Confirm `!undeploy` command dispatches correct event payload
- [x] Test auto-undeploy on PR closure with active deployment
- [x] Verify appropriate GitHub comments are posted for each action
#### For configuration changes:
- [x] `.env.example` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
**Configuration changes:**
- Requires `DISPATCH_TOKEN` secret to be configured for repository
dispatch to cloud infrastructure repo
- Workflow uses `issues: write` and `pull-requests: write` permissions
Now, SendWebRequestBlock can upload files. To make this work, we also
need to improve the UI rendering on the key-value pair input so that it
can also render media/file upload.
### Changes 🏗️
* Add file multipart upload support to SendWebRequestBlock (illustrated
below)
* Improve key-value input UI rendering to allow rendering any type as a
normal input block (it was limited to string & number).
<img width="381" alt="image"
src="https://github.com/user-attachments/assets/b41d778d-8f9d-4aec-95b6-0b32bef50e89"
/>
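For reference, this is roughly what the multipart request looks like at
the HTTP level, shown here with plain `requests` (the block itself goes
through the platform's `Requests` wrapper; the URL and field names are
examples only):

```python
import requests

with open("report.pdf", "rb") as f:
    response = requests.post(
        "https://httpbin.org/post",                        # example target
        data={"note": "weekly report"},                    # regular form fields
        files={"file": ("report.pdf", f, "application/pdf")},  # file part
    )
print(response.status_code)
```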
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Test running the HTTP request block and the key-value pair input
block
<!-- Clearly explain the need for these changes: -->
This PR adds a new internal block, **AI Image Editor**, which enables
**text-based image editing** via Black Forest Labs’ Flux Kontext models
on Replicate. This block allows users to input a prompt and optionally a
reference image, and returns a transformed image URL. It supports two
model variants (Pro and Max), with different cost tiers. This
functionality will enhance multimedia capabilities across internal agent
workflows and support richer AI-powered image manipulation.
---
### Changes 🏗️
* Added `FluxKontextBlock` in `backend/blocks/flux_kontext.py`
* Uses `ReplicateClient` to call Flux Kontext Pro or Max models
* Supports inputs for `prompt`, `input_image`, `aspect_ratio`, `seed`,
and `model`
* Outputs transformed image URL or error
* Added credit pricing logic for Flux Kontext models to
`block_cost_config.py`:
* Pro: 10 credits
* Max: 20 credits
* Added documentation for the new block at
`docs/content/platform/blocks/flux_kontext.md`
* Updated block index at `docs/content/platform/blocks/blocks.md` to
include Flux Kontext
---

### Checklist 📋
#### For code changes:
* [x] I have clearly listed my changes in the PR description
* [x] I have made a test plan
* [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
* [x] Prompt-only input generates an image
* [x] Prompt with image applies edit correctly
* [x] Image respects specified aspect ratio
* [x] Invalid image URL returns helpful error
* [x] Using the same seed gives consistent output
* [x] Output chaining works: result URI can be used in downstream blocks
* [x] Output from Max model shows higher fidelity than Pro
<details>
<summary>Example test plan</summary>
* [x] Create from scratch and execute an agent using Flux Kontext with
at least 3 blocks
* [x] Import agent with Flux Kontext from file upload, and confirm
execution
* [x] Upload agent (with Flux Kontext block) to marketplace (internal
test)
* [x] Import agent from marketplace and confirm correct execution
* [x] Edit agent with Flux Kontext block from monitor and confirm output
</details>
#### For configuration changes:
* [x] `.env.example` is updated or already compatible with my changes
* [x] `docker-compose.yml` is updated or already compatible with my
changes
* [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
* No new environment variables or services introduced
<details>
<summary>Examples of configuration changes</summary>
* N/A
</details>
---------
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
This PR adds a comprehensive system requirements section to the
README.md file. Currently, users don't have clear guidance on the
hardware, software, and network requirements needed to run AutoGPT. This
addition will help users determine if their system is capable of running
AutoGPT before attempting installation.
- Resolves #10050
### Changes 🏗️
- Added new "System Requirements" section under "How to Setup for
Self-Hosting" with:
- Hardware Requirements
- CPU specifications (4+ cores)
- RAM requirements (8GB min, 16GB recommended)
- Storage requirements (10GB minimum)
- Software Requirements
- Supported Operating Systems
- Required software with minimum versions
- Development tools requirements
- Network Requirements
- Internet connectivity requirements
- Port access information
- HTTPS connection requirements
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Verified README.md renders correctly on GitHub
- [x] Confirmed all formatting is consistent
- [x] Validated all requirements are accurate
- [x] Checked section placement is logical
- [x] Ensured no other files are modified
#### For configuration changes:
- [x] Not applicable - This PR only contains documentation changes to
README.md
- [x] No configuration files are modified in this update
Co-authored-by: Bently <Github@bentlybro.com>
<!-- Clearly explain the need for these changes: -->
The linked video is out of date, so we're unlinking it.
### Changes 🏗️
<!-- Concisely describe all of the changes made in this pull request:
-->
Comment out the video segment until a new one is posted
### Checklist 📋
#### For code changes:
- [x] no code changes made
### Description 📝
This PR introduces a GitHub Actions workflow that enables
cross-repository event dispatching for development environment
deployments. The workflow listens for specific PR events and dispatches
corresponding deployment/undeployment actions to our cloud
infrastructure repository.
**How it works:**
- The workflow triggers on PR events (opened, synchronized, closed) and
PR target events (labeled, unlabeled)
- When a PR comment containing `!deploy` is detected from authorized
users (PR author, repo owners, members, or collaborators), it dispatches
a deployment event
- When a PR with existing deployments is closed, it automatically
dispatches an undeployment event to clean up resources
**Interaction with target repository:**
The workflow dispatches events to
`Significant-Gravitas/AutoGPT_cloud_infrastructure` with a payload
containing:
- `action`: Either "deploy" or "undeploy"
- `pr_number`: The PR number for tracking
- `pr_title`: Human-readable identifier
- `pr_state`: Current PR state
- `repo`: Source repository name
This enables the infrastructure repository to spin up isolated
development environments for each PR on demand.
### Changes 🏗️
- Added `.github/workflows/dev-deploy-pr-dispatcher.yml` workflow file
- Implements secure cross-repository communication using repository
dispatch events
- Includes authorization checks to ensure only authorized users can
trigger deployments
### Checklist 📋
#### For code changes:
- [x] No code changes - this is a workflow addition only
#### For configuration changes:
- [x] New workflow file added that requires testing
- [x] Requires `DISPATCH_TOKEN` secret to be configured with appropriate
permissions for cross-repository dispatch
- [x] No environment variable changes needed
- [ ] Workflow will be tested after initial merge to verify proper event
dispatching
<!-- Clearly explain the need for these changes: -->
CASA requires a minimum password length of 12, which we did update.
When testing in dev, I realized I had missed a few prompts.
### Changes 🏗️
<!-- Concisely describe all of the changes made in this pull request:
-->
updates a missed prompt
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Test manually, and read all the prompts carefully
<!-- Clearly explain the need for these changes: -->
We're doing CASA and this is a requirement
### Changes 🏗️
- Requires new passwords to be min length 12
<!-- Concisely describe all of the changes made in this pull request:
-->
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] test
Users were unable to retry login attempts after a failed authentication
because the Turnstile CAPTCHA widget was not properly resetting. This
forced users to refresh the entire page to attempt login again, creating
a terrible user experience.
Root Cause: The useTurnstile hook had several critical issues:
- The reset() function only cleared state when shouldRender was true and
widget existed
- Widget ID tracking was unreliable due to intercepting
window.turnstile.render
- Token wasn't being cleared on verification failures
- State wasn't being reset consistently across error scenarios
### Changes 🏗️
<!-- Concisely describe all of the changes made in this pull request:
-->
- Fixed useTurnstile hook reset logic: Modified the reset() function to
always clear all state (token, verified, verifying, error) regardless of
shouldRender condition
- Improved widget ID synchronization: Added setWidgetId prop to the
Turnstile component interface and hook for reliable widget tracking
between component and hook
- Enhanced error handling: Updated handleVerify, handleExpire, and
handleError to properly reset tokens on failures
- Updated all auth components: Added setWidgetId prop to all Turnstile
component usages in login, signup, and password reset pages
- Removed unreliable widget tracking: Eliminated the
window.turnstile.render interception approach in favor of explicit
prop-based communication
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- <!-- Put your test plan here: -->
- [x] Test failed login attempt - CAPTCHA resets properly without page
refresh
- [x] Test failed signup attempt - CAPTCHA resets properly without page
refresh
- [x] Test successful login flow - CAPTCHA works normally
- [x] Test CAPTCHA expiration - State resets correctly
- [x] Test CAPTCHA error scenarios - Error handling works properly
### Changes 🏗️
This PR adds a `Run 10 agents` step to wallet tasks, which can be
completed by running any agents 10 times from either the Library or the
Builder (the onboarding agent run also counts).
- Merge `Finish onboarding` and `See results` steps into one in the
wallet
- User is redirected directly to onboarding agent runs in Library after
congrats screen
- Add `RUN_AGENTS` step and `agentRuns` integer to schema and related
migration
- Running agent from Library and Builder increments `agentRuns`
- Open NPS survey popup when 10 agents are run
- Fix resuming onboarding on login when unfinished
- Remove no longer needed `get-results.mp4` tutorial video
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Onboarding can be completed and proper reward is awarded
- [x] `Run 10 agents` can be completed and reward is awarded
- [x] When running different agents and the same agent
- [x] Running from library and builder counts
- [x] Onboarding is resumed to last finished step on login
This makes the button on the marketplace listing page show `See runs`
if the user has the agent in their library.
### Changes 🏗️
- Remove `/` from the related endpoint
- Use `active_version_id` instead of `store_listing_version_id` to check
for the library agent
- Fix `get_store_agent_details`
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- Log in and pick an agent that has never been in user library
- [x] Button says `Add to library`
- Add the agent and return to the listing page
- [x] Button says `See runs`
- Remove agent from library
- [x] Button says `Add to library`
- Add agent again
- [x] Button says `See runs`
---------
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
- Resolves #9752
- Follow-up fix to #9940
### Changes 🏗️
- `GRAPH_EXECUTION_INCLUDE` ->
`graph_execution_include(include_block_ids)`
- Add `get_io_block_ids()` and `get_webhook_block_ids()` to
`backend.data.blocks`
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [ ] I have tested my changes according to the test plan:
- [ ] Payload for webhook-triggered runs is shown on
`/library/agents/[id]`
## Summary
- require categories to be selected in PublishAgent popout
### Changes 🏗️
Requires the categories to be set before allowing an agent to be
uploaded.
Added a popup notification to say it's missing categories.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Try to upload an agent without setting categories: it will error
and show the message "Missing Required Fields, Please fill in:
Categories"
- [x] Now try to upload an agent with the categories set and it will
work
Co-authored-by: Bently <Github@bentlybro.com>
# Query Optimization for AgentNodeExecution Tables
## Overview
This PR describes the database index optimizations applied to improve
the performance of slow queries in the AutoGPT platform backend.
## Problem Analysis
The following queries were identified as consuming significant database
resources:
### 1. Complex Filtering Query (19.3% of total time)
```sql
SELECT ... FROM "AgentNodeExecution"
WHERE "agentNodeId" = $1
AND "agentGraphExecutionId" = $2
AND "executionStatus" = $3
AND "id" NOT IN (
SELECT "referencedByInputExecId"
FROM "AgentNodeExecutionInputOutput"
WHERE "name" = $4 AND "referencedByInputExecId" IS NOT NULL
)
ORDER BY "addedTime" ASC
```
### 2. Multi-table JOIN Query (8.9% of total time)
```sql
SELECT ... FROM "AgentNodeExecution"
LEFT JOIN "AgentNode" ON ...
LEFT JOIN "AgentBlock" ON ...
WHERE "AgentBlock"."id" IN (...)
AND "executionStatus" != $11
AND "agentGraphExecutionId" IN (...)
ORDER BY "queuedTime" DESC, "addedTime" DESC
```
### 3. Bulk Graph Execution Queries (multiple variations, ~10% combined)
Multiple queries filtering by `agentGraphExecutionId` with various
ordering requirements.
## Optimization Strategy
### 1. Composite Indexes for AgentNodeExecution
Added the following composite indexes to the `AgentNodeExecution` model:
```prisma
@@index([agentGraphExecutionId, agentNodeId, executionStatus])
@@index([addedTime, queuedTime])
```
#### Benefits:
- **Index 1**: Covers the exact WHERE clause of the complex filtering
query (graph execution, node, and status), allowing index-only scans
- **Index 2**: Optimizes ORDER BY operations on the time fields
### 2. Composite Index for AgentNodeExecutionInputOutput
Added the following composite index:
```prisma
// Input and Output pin names are unique for each AgentNodeExecution.
@@unique([referencedByInputExecId, referencedByOutputExecId, name])
@@index([referencedByOutputExecId])
// Composite index for `upsert_execution_input`.
@@index([name, time])
```
#### Benefits:
- Dramatically improves the NOT IN subquery performance in Query 1
- Allows the database to use an index scan instead of a full table scan
- Reduces the subquery execution time from O(n) to O(log n)
## Expected Performance Improvements
1. **Query 1 (19.3% of total time)**:
- Expected improvement: 80-90% reduction in execution time
- The composite index on `[agentGraphExecutionId, agentNodeId,
executionStatus]` will allow index-only scans
- The subquery will benefit from the new index on
`AgentNodeExecutionInputOutput`
2. **Query 2 (8.9% of total time)**:
- Expected improvement: 50-70% reduction in execution time
- The `[agentGraphExecutionId, executionStatus]` index will reduce the
initial filtering cost
3. **Bulk Queries (10% combined)**:
- Expected improvement: 60-80% reduction in execution time
- Composite indexes including time fields will optimize sorting
operations
## Migration Considerations
1. **Index Creation Time**: Creating these indexes on existing large
tables may take time
2. **Storage Impact**: Each index requires additional storage space
3. **Write Performance**: Slight decrease in INSERT/UPDATE performance
due to index maintenance
## Additional Optimizations
### NotificationEvent Table Index
Added index for notification batch queries:
```prisma
@@index([userNotificationBatchId])
```
This optimizes the query:
```sql
SELECT ... FROM "NotificationEvent"
WHERE "userNotificationBatchId" IN (...)
```
#### Benefits:
- Eliminates full table scans when filtering by batch ID
- Improves query performance from O(n) to O(log n)
- Particularly beneficial for users with many notification events
## Future Optimizations
Consider these additional optimizations if needed:
1. Partitioning `AgentNodeExecution` table by `createdAt` or
`agentGraphExecutionId`
2. Implementing materialized views for frequently accessed aggregate
data
3. Adding covering indexes for specific query patterns
4. Implementing query result caching at the application level
---------
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
- Resolves #10024
Caching the repeated DB calls by the graph lifecycle hooks significantly
speeds up graph update/create calls with many authenticated blocks
(~300ms saved per authenticated block)
### Changes 🏗️
- Add and use `IntegrationCredentialsManager.cached_getter(user_id)` in
lifecycle hooks (see the sketch after this list)
- Split `refresh_if_needed(..)` method out of
`IntegrationCredentialsManager.get(..)`
- Simplify interface of lifecycle hooks: change `get_credentials`
parameter to `user_id`
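A rough sketch of the cached-getter idea; method bodies and signatures
here are simplified assumptions, not the actual implementation:

```python
from functools import lru_cache

class IntegrationCredentialsManager:
    def get(self, user_id: str, credentials_id: str):
        ...  # DB lookup (+ token refresh if needed) -- the expensive part

    def cached_getter(self, user_id: str):
        """Return a getter that memoizes lookups for one save/update call."""
        @lru_cache(maxsize=None)
        def _get(credentials_id: str):
            return self.get(user_id, credentials_id)
        return _get
```

Each lifecycle hook then reuses one getter per call, so repeated
credential lookups within a single graph save hit the cache instead of
the DB.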
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Save a graph with nodes with credentials
Running the tests locally takes a lot of time and leaves test data
behind in the DB, making them impractical to actually run.
I'm disabling the `pytest` hooks in the pre-commit config so the
pre-commit checks can reasonably be used without significant negative
impact to DX.
This doesn't impact UX and there is nothing to test.
- Resolves #8656
Instead of "NextGen AutoGPT", make page titles like "My Test Agent -
Library - AutoGPT Platform", "Settings - AutoGPT Platform", "Builder -
AutoGPT Platform".
### Changes 🏗️
- Add specific page titles to `/library`, `/library/agents/[id]`,
`/build`, `/profile`, `/profile/api_keys`
- Fix page titles on `/marketplace`, `/profile/settings`
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Go to `/marketplace` and check the page title
- [x] Go to `/library` and check the page title
- [x] Go to `/library/agents/[id]` and check the page title
- [x] Go to `/build` and check the page title
- [x] Go to `/profile` and check the page title
- [x] Go to `/profile/settings` and check the page title
- [x] Go to `/profile/api_keys` and check the page title
- [ ] ~~Go to `/profile/dashboard` and check the page title~~
- [ ] ~~Go to `/profile/integrations` and check the page title~~
- [ ] ~~Go to `/profile/credits` and check the page title~~
Base styling currently being fragmented between `layout.tsx` and
`globals.css` is causing some styling (e.g. application background
color) to be incorrectly overridden.
### Changes 🏗️
- Remove background color override from `<body>`
- Move `<body>` classes from `layout.tsx` to `globals.css`
- Remove background color from elements that shouldn't have their own
background color
- Remove `font-neue`, `font-inter`; replace by Geist (`font-sans`) where
necessary
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Effective background color of application is `#FAFAFA` like before
- [x] Default font is Geist
- [x] Everything looks okay
Changed the section header for "Top Agents" to include a 24px margin.
I have not tested this; an eng needs to test / look at this.
## Summary
- set `margin` default to 24px in `AgentsSection`
- apply the bottom margin via an inline style
## Testing
- `npm test` *(fails: playwright not found)*
- `npm run lint` *(fails: next not found)*
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Test via deployment to the dev branch and verify by designer
---------
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
- Resolves #9992
### Changes 🏗️
- Use `<LoadingBox>` instead of "Loading..." on `/library/agents/[id]`

### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Designer approves based on screencapture
This pull request refines the handling of `input_data.content` and
improves error message formatting in the `run` method of `mem0.py`. The
changes enhance robustness and clarity in the code.
### Handling `input_data.content`:
* Updated the `run` method to handle `Content` objects explicitly,
ensuring proper formatting of messages when `input_data.content` is of
type `Content`. Additionally, non-standard types are now converted to
strings for consistent handling.
(`[autogpt_platform/backend/backend/blocks/mem0.pyR127-R130](diffhunk://#diff-d7abf8c3299388129480b6a9be78438fe7e0fbe239da630ebb486ad99c80dd24R127-R130)`)
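A hedged sketch of the normalization; the real `Content` model is
stubbed out here for illustration, and its string accessor is an
assumption:

```python
from dataclasses import dataclass

@dataclass
class Content:  # stand-in for the real model
    text: str

def _as_message_text(value) -> str:
    """Coerce block input to a plain string for the Mem0 messages payload."""
    if isinstance(value, Content):
        return value.text  # use the underlying string
    if isinstance(value, str):
        return value
    return str(value)  # non-standard types fall back to str()

assert _as_message_text(Content(text="hi")) == "hi"
assert _as_message_text(42) == "42"
```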
### Error message formatting:
* Simplified the error message formatting by removing the unnecessary
`object=` keyword in the `str()` conversion of exceptions.
(`[autogpt_platform/backend/backend/blocks/mem0.pyL155-R157](diffhunk://#diff-d7abf8c3299388129480b6a9be78438fe7e0fbe239da630ebb486ad99c80dd24L155-R157)`)
## Summary
- fix AddMemoryBlock so `Content` input uses the underlying string
- improve error handling in Mem0 AddMemoryBlock
## Testing
- `ruff check autogpt_platform/backend/backend/blocks/mem0.py`
- `pre-commit run --files
autogpt_platform/backend/backend/blocks/mem0.py` *(fails: unable to
fetch remote hooks)*
- `poetry run pytest -k AddMemoryBlock -q` *(fails: Error 111 connecting
to localhost:6379)*
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Payload for webhook-triggered runs is shown on `/library/agents/[id]`
## Summary
- refine contribution instructions in `autogpt_platform/AGENTS.md`
## Testing
- `pre-commit` *(fails to fetch hooks due to no network access)*
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Docs-only change
---------
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
### Changes 🏗️
Keep the original URL when an HTTP error occurs in
`SendWebRequestBlock`.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Test sending a POST request to a site that doesn't support POST
requests, using `SendWebRequestBlock`.
The executor can sometimes become dangling: it stops consuming
messages, but the process is not fully killed. This PR avoids such a
scenario by simply retrying continuously.
### Changes 🏗️
Introduced a `continuous_retry` decorator and applied it to executor
message consumption (sketched below).
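A minimal sketch of such a decorator; naming and backoff behavior are
assumptions, not the actual implementation:

```python
import logging
import time
from functools import wraps

logger = logging.getLogger(__name__)

def continuous_retry(retry_delay: float = 1.0):
    """Keep re-running the wrapped function whenever it raises."""
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            while True:
                try:
                    return func(*args, **kwargs)
                except Exception:
                    logger.exception(
                        "%s failed; retrying in %.1fs", func.__name__, retry_delay
                    )
                    time.sleep(retry_delay)
        return wrapper
    return decorator

@continuous_retry(retry_delay=5.0)
def consume_messages():
    ...  # blocking message-consumption loop
```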
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] Run executor service and execute some agents.
Makes all optional fields on `Credentials` models actually optional, and
sets `exclude_none=True` on the corresponding `model_dump`.
This is a hotfix: after running the `aryshare-revid` branch on the dev
deployment, there is some data in the DB that isn't valid for the
`UserIntegrations` model on the `dev` branch (see
[here](https://github.com/Significant-Gravitas/AutoGPT/pull/9946#discussion_r2098428575)).
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] This fix worked on the `aryshare-revid` branch:
52b6d9696b
- Resolves #9987
### Changes 🏗️
- Split `pin_url(..)` out of `validate_url(..)` and call
`extra_url_validator` in between
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] GitHub Read Pull Request Block works with "Include PR Changes"
enabled
### Changes 🏗️
Moves the route path for spending.
Drops the min.
<!-- Concisely describe all of the changes made in this pull request:
-->
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
<!-- Put your test plan here: -->
- [x] test locally
---------
Co-authored-by: Bently <Github@bentlybro.com>
- Resolves #9950
### Changes 🏗️
- Move `<OttoChatWidget>` from root layout into `FlowEditor`
- Pass graph info directly into `OttoChatWidget` instead of using
`useAgentGraph`
- Rearrange z-indices of elements in the builder
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- Go to `/build`
- [x] -> chat widget should show up in the bottom right corner
- Open the widget and ask Otto something
- [x] -> should work normally
- Add a few blocks and save the graph
- [x] -> "Include graph data" should show up
- Click "Include graph data" and ask Otto something about your graph
- [x] -> Otto should be aware of the graph structure and metadata
- Resolves #9941
- Follow-up to #9935
### Changes 🏗️
- Show toast when WS connection (dis|re)connects (on `/library/agents/[id]`)
- Implement `BackendAPI.onWebSocketDisconnect`
Related improvements:
- Clean up WebSocket state management & logging in `BackendAPI`
- Clean up & split loading spinner implementation: `Spinner` -> `LoadingBox` + `LoadingSpinner`
Also, unrelated:
- fix(frontend/library): Add 2 second debounce to page refresh logic
This eliminates the triple-firing of 3 API calls (so 9 -> 3 calls total) on page load: `GET /library/agents/{agent_id}`, `GET /graphs/{graph_id}/executions`, and `GET /graphs/{graph_id}/executions/{exec_id}`
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- Start the frontend and backend applications (locally)
- Navigate to `/library/agents/[id]`
- Kill the backend
- [x] -> a toast should appear "Connection to server was lost"
- [x] -> this toast should be shown as long as the server is down
- Re-start the backend
- [x] -> toast should change to show "Connection re-established"
- [x] -> toast should now disappear after 2 seconds
---
Co-authored-by: Krzysztof Czerwinski <kpczerwinski@gmail.com>
Resolves #9947
### Changes 🏗️
Backend:
- Send a graph execution update after terminating a run
- Don't wipe the graph execution stats when not passed in to `update_graph_execution_stats`
Frontend:
- Don't hide the output of stopped runs
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- Go to `/library/agents/[id]`
- Run an agent that takes a while (long enough to click stop and see the effect)
- Hit stop after it has executed a few nodes
- [x] -> run status should change to "Stopped"
- [x] -> run stats (steps, duration, cost) should stay the same or increase only once more
- [x] -> output so far should be visible
- [x] -> shown information should stay the same after refreshing the page
---
Co-authored-by: Krzysztof Czerwinski <34861343+kcze@users.noreply.github.com>
I am disabling all the Twitter and Todoist blocks whose OAuth is not
configured.
> I have already checked it locally. When OAuth is not set, the blocks
do not appear in the block menu
Bumps the development-dependencies group in
/autogpt_platform/autogpt_libs with 1 update:
[ruff](https://github.com/astral-sh/ruff).
Updates `ruff` from 0.11.2 to 0.11.10
You can trigger a rebase of this PR by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore <dependency name> major version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's major version (unless you unignore this specific
dependency's major version or upgrade to it yourself)
- `@dependabot ignore <dependency name> minor version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's minor version (unless you unignore this specific
dependency's minor version or upgrade to it yourself)
- `@dependabot ignore <dependency name>` will close this group update PR
and stop Dependabot creating any more for the specific dependency
(unless you unignore this specific dependency or upgrade to it yourself)
- `@dependabot unignore <dependency name>` will remove all of the ignore
conditions of the specified dependency
- `@dependabot unignore <dependency name> <ignore condition>` will
remove the specified ignore condition for that dependency
</details>
> **Note**
> Automatic rebases have been disabled on this pull request as it has
been open for over 30 days.
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
This small PR resolves the deprecation warnings of the `logger` library:
```
DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead
```
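The fix is mechanical, since `logging.Logger.warn` has been a deprecated
alias of `warning` since Python 3.3:

```python
import logging

logger = logging.getLogger(__name__)

# Deprecated alias; this is what emits the DeprecationWarning:
# logger.warn("something looks off")

# Preferred spelling:
logger.warning("something looks off")
```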
GitHub blocks use a URL transformer passed to `Requests` to convert web
URLs to API URLs. This doesn't always work with the anti-SSRF URL
pinning mechanism that was implemented in #8531.
### Changes 🏗️
In `Requests.request(..)`:
- Apply `validate_url` *after* `extra_url_validator`, to prevent a
mismatch between `pinned_url` and `original_hostname` (see the sketch below)
- Simplify logic & add clarifying comments
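A minimal sketch of the reordered flow; the helper name and the stand-in
validators are illustrative, not the actual `Requests` implementation:

```python
from typing import Callable, Optional

def safe_request_url(
    url: str,
    validate_url: Callable[[str], str],
    extra_url_validator: Optional[Callable[[str], str]] = None,
) -> str:
    # Apply the caller-supplied transformer first (e.g. the GitHub
    # blocks' web-URL -> API-URL mapping)...
    if extra_url_validator is not None:
        url = extra_url_validator(url)
    # ...and only then run anti-SSRF validation/pinning, so the hostname
    # that gets pinned is the hostname that is actually requested.
    return validate_url(url)

# Example: transform a web URL into an API URL, then validate the result.
print(safe_request_url(
    "https://github.com/owner/repo",
    validate_url=lambda u: u,  # stand-in for the real anti-SSRF validator
    extra_url_validator=lambda u: u.replace(
        "https://github.com/", "https://api.github.com/repos/"
    ),
))  # -> https://api.github.com/repos/owner/repo
```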
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Tested the github blocks that had the issue
---------
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
The changes in this PR are to add Llama API support.
### Changes 🏗️
We add both backend and frontend support.
**Backend**:
- Add llama_api provider
- Include models supported by Llama API along with configs
- `llm_call` support
- Credential store and `llama_api_key` field in Settings
**Frontend**:
- Llama API as a type
- Credentials input and provider for Llama API
### Checklist 📋
#### For code changes:
- [X] I have clearly listed my changes in the PR description
- [X] I have tested my changes according to the test plan:
**Test Plan**:
<details>
<summary>AI Text Generator</summary>
- [X] Start-up backend and frontend:
- Start backend with Docker services: `docker compose up -d --build`
- Start frontend: `npm install && npm run dev`
- By visiting http://localhost:3000/, test inference and structured
outputs
- [X] Create from scratch
- [X] Request for Llama API Credentials
<img width="2015" alt="image"
src="https://github.com/user-attachments/assets/3dede402-3718-4441-9327-ecab25c63ebf"
/>
- [X] Execute an agent with at least 3 blocks
<img width="2026" alt="image"
src="https://github.com/user-attachments/assets/59d6d56b-2ccc-4af5-b511-4af312c3f7f8"
/>
- [X] Confirm it executes correctly
</details>
<details>
<summary>Structured Response Generator</summary>
- [X] Start-up backend and frontend:
- Start backend with Docker services: `docker compose up -d --build`
- Start frontend: `npm install && npm run dev`
- By visiting http://localhost:3000/, test inference and structured
outputs
- [X] Create from scratch
- [X] Execute an agent
<img width="2023" alt="image"
src="https://github.com/user-attachments/assets/d1107638-bf1b-45b1-a296-1e0fac29525b"
/>
- [X] Confirm it executes correctly
</details>
---------
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
> Log entry with size 865.8K exceeds maximum size of 256.0K
Some logs are just too large.
### Changes 🏗️
Truncate oversized log entries in NotificationManager and the LLM blocks.
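A sketch of what the truncation can look like; the helper name and the
1,000-character limit are assumptions, not the PR's actual values:

```python
def truncate_for_log(value: object, max_len: int = 1000) -> str:
    """Render a value for logging, truncating oversized payloads."""
    text = str(value)
    if len(text) <= max_len:
        return text
    return f"{text[:max_len]}... ({len(text) - max_len} chars truncated)"

# e.g. logger.info("LLM response: %s", truncate_for_log(response))
```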
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Existing CI
As part of a small security review, we found that Google won't let you
load their analytics scripts with subresource integrity checks. That
seems insane, but the general workaround is to serve a static copy of
your own.
### Changes 🏗️
Moves all the Google Analytics scripts in-house instead of loading them
via CDN.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] tested it in a new isolated environment to confirm events still
work as expected everywhere they are used
---------
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
A collection of UX updates and bug fixes for onboarding and wallet.
### Changes 🏗️
- Show spinner loading indicator when onboarding button is clicked
- Use `getLibraryAgentByStoreListingVersionID` instead of
`addMarketplaceAgentToLibrary` on congrats screen
- Fix `Not enough segments` issue: don't fetch onboarding when user is
logged out
- Minor updates
- Fill in some missing deps in dependency arrays
- `Spinner` component, styles updates
- Use `useMemo`/`useCallback`
- Show error toast when onboarding agent fails to run:
<img width="405" alt="Screenshot 2025-05-06 at 5 09 01 PM"
src="https://github.com/user-attachments/assets/dd1272da-326a-448d-995d-98ac773b3ee4"
/>
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Onboarding can be completed
- [x] Failing agent shows toast
- [x] Wallet can be opened and works properly (tasks, confetti)
- [x] Dependency arrays don't cause infinite loops
- Resolves#9929
### Changes 🏗️
- Implement `BackendAPI.onWebSocketConnect(..)`
- Improve reliability and reactivity of `/library/agents/[id]`:
- Refresh page data and (re)subscribe on WebSocket (re)connect
- Break up multi-action hooks into smaller parts to reduce unnecessary
re-renders and requests
- Reduce duplicate requests
- Use `onWebSocketConnect` in `useAgentGraph` as well
- Tidy up `autogpt-server-api/client.ts` a bit
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- Go to `/library/agents/[id]`
- Run the agent
- [x] -> UI should update normally with execution updates
- Suspend your computer, or restart the backend (or WS server) to break
the connection
- DO NOT REFRESH THE TAB ITSELF
- [x] -> On reconnect, page data should be refreshed (check in network
tab of dev tools)
- Run the agent again
- [x] -> UI should update normally with execution updates
Currently both the login and signup pages show the same feedback on
error on cloud (join the waitlist).
### Changes 🏗️
Update login & signup feedback for cloud.
- Use cards for login and signup feedback instead of html list
- Make login page show prompt to redirect user to signup
<img width="474" alt="Screenshot 2025-05-07 at 4 01 07 PM"
src="https://github.com/user-attachments/assets/45f189ea-5fea-45bb-89f9-7323418d69ea"
/>
<img width="476" alt="Screenshot 2025-05-07 at 4 03 29 PM"
src="https://github.com/user-attachments/assets/96f4cd7f-f3e6-44b2-b647-96ee98063572"
/>
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Signup & login work with correct credentials
- [x] Signup & login show the correct error message on wrong credentials
- [x] Links work correctly
### Changes 🏗️
This pull request includes a small change to the `get_my_agents`
function in `autogpt_platform/backend/backend/server/v2/store/db.py`.
The change updates the sorting order for retrieving library agents to
use the `updatedAt` field instead of `agentGraphVersion`.
This aligns with the later sorting that is applied to the fetched page
in the frontend.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [ ] ...
<details>
<summary>Example test plan</summary>
- [ ] Create from scratch and execute an agent with at least 3 blocks
- [ ] Import an agent from file upload, and confirm it executes
correctly
- [ ] Upload agent to marketplace
- [ ] Import an agent from marketplace and confirm it executes correctly
- [ ] Edit an agent from monitor, and confirm it executes correctly
</details>
#### For configuration changes:
- [x] `.env.example` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
<details>
<summary>Examples of configuration changes</summary>
- Changing ports
- Adding new services that need to communicate with each other
- Secrets or environment variable changes
- New or infrastructure changes such as databases
</details>
John thinks an hourly email is a bit much; I'm inclined to agree.
### Changes 🏗️
Swaps the batching interval for the agent-run emails from one hour to
one day.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] No testing done
We want to add support for reading + writing Google Calendar events so
agents can do cool stuff with that!
### Changes 🏗️
* Add support for Google Calendar blocks (a sketch of the underlying API call is below)
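For reference, reading events through the official
`google-api-python-client` looks roughly like this; the wrapper function
is illustrative, not the blocks' actual implementation:

```python
from googleapiclient.discovery import build

def list_upcoming_events(credentials, calendar_id: str = "primary", limit: int = 10):
    """Roughly what a 'read calendar events' block wraps."""
    service = build("calendar", "v3", credentials=credentials)
    result = (
        service.events()
        .list(
            calendarId=calendar_id,
            maxResults=limit,
            singleEvents=True,  # expand recurring events into instances
            orderBy="startTime",
        )
        .execute()
    )
    return result.get("items", [])
```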
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Run manual tests of each block
- [x] Build and run automatic tests
- Follow-up fix for #9862
### Changes 🏗️
- Rename the stored `data` input field to `inputs` on all Agent Executor
nodes in the DB
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Dry-run the query in Supabase to verify that the data operation
works
- [x] CI
```
2025-05-09 11:54:15,171 ERROR Error executing graph 954e6fc8-9c90-46fa-be5b-4063eb519ec7: Block ID AIMusicGeneratorBlock error: 44f6c8ad-d75c-4ae1-8209-aad1c0326928 is already in use
```
### Changes 🏗️
This PR avoids global variables messing with the way `load_all_blocks`
is cached.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] CI
Currently, there is no guarantee that an error will be reported right
away, and late execution is a serious issue that needs to be addressed
quickly.
### Changes 🏗️
Provided a direct alert when late execution occurs.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Manual run on the late executions job on artificially created late
executions.
This is a follow-up to
https://github.com/Significant-Gravitas/AutoGPT/pull/9903
The continued graph execution restarted all the execution stats from
zero, making the execution stats misleading.
### Changes 🏗️
Continue the execution stats when continuing the graph execution.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Existing tests, manual graph run with the graph execution aborted
midway.
Some code paths in the notification & scheduler service were synchronous
HTTP calls executing long-running, blocking jobs. This kept the service
threads busy waiting.
### Changes 🏗️
* Remove queue_notification API
* Remove DTO
* Move heavy tasks into the executor
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Manually executing notification service jobs through the scheduler
API
The listing page throws an exception on deployment because of a
Supabase auth issue.
### Changes 🏗️
Catch the exception when getting the library agent. This reverts the
listing page behavior: it will always show "Add to Library" when the
user is logged in.
### Checklist 📋
#### For code changes:
- [ ] I have clearly listed my changes in the PR description
- [ ] I have made a test plan
- [ ] I have tested my changes according to the test plan:
- [ ] ...
The goal of this change is a quick and temporary tweak to improve the
display of output text in the Agent Runs screen. It anticipates that
these outputs will be properly improved in the near future; for now, it
just displays text in a human-readable format.
### Changes 🏗️
There is one change in this PR:
- The class of the Agent Output textbox is changed to properly display
text without impacting the design.
Before and after screenshots of this change are attached to the PR.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [ ] I have tested my changes according to the test plan:
- [ ] ...
---------
Co-authored-by: Bentlybro <Github@bentlybro.com>
- Resolves #9918
- Follow-up fix for #9914
### Changes 🏗️
- In `get_graph_execution_schedules`, skip jobs when their kwargs can't
be parsed as `GraphExecutionJobArgs`
- Rename methods of `Scheduler` to clarify their scope (scheduled
*graph* executions)
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- Go to `/library/agents/[id]` (which calls `GET /api/schedules`)
- [x] -> `GET /api/schedules` request returns HTTP 200
If a node has a multi-credentials input (e.g. AI Text Generator block)
but the discriminator value (e.g. model choice) is missing, the input
can't be discriminated into a single-provider input. Discrimination into
a single-provider input is necessary to make a graph-level credentials
input for use in the Library.
### Changes 🏗️
- feat(backend): Require discriminator fields to always have a value
- dx(frontend): Improve typing of discriminator stuff
- dx(frontend): Fix typing in `NodeOneOfDiscriminatorField` component
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Saving & running graphs with and without credentials works
normally
- Note: We don't have any blocks with a discriminator that doesn't have
a default value, so currently I don't think it's possible to produce a
case where this mechanism would be triggered.
The Library Agent credentials UX (#9789) currently doesn't work for
sub-graphs.
### Changes 🏗️
- Include sub-graphs in generating `Graph.credentials_input_schema`
- Propagate `node_credentials_input_map` into `AgentExecutionBlock`
executions
- Fix: also apply `node_credentials_input_map` in `_enqueue_next_nodes`
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- Import a graph with sub-graphs that need credentials
- Run this agent from the Library
- [x] -> Should work
Introduce a late execution check scheduled job. The late threshold
duration is configurable.
This initial version only reports the error to Sentry.
### Changes 🏗️
* Added late execution check scheduled job
* Move the weekly notification processing job registration out of the
API call and call it directly from the scheduler service.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Manual firing of scheduled job through an exposed API
We don't want the scheduler to scale with the REST API lol
### Changes 🏗️
pulls out the scheduler into its own service
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] test it
---------
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
Currently, agent listings on the Marketplace have bad UX.
### Changes 🏗️
- Add function and endpoint to check if user has `LibraryAgent` by given
`storeListingVersionId`
- Redesign listing buttons
- `Add to library` shown when user is logged in and doesn't have an
agent in library
- `See runs` shown when user is logged in and has the agent in the library
- `Download agent` always shown
- Disabled buttons during processing (adding/downloading)
- Stop raising when owner is trying to add own agent. Now it'll simply
redirect to Library.
- Remove button appearing/flickering after a delay on listing page -
logged in status is now checked in server component.
- Show error toast on adding/redirecting to library and downloading
error
- Update breadcrumbs and page title to say `Marketplace` instead of
`Store`
- `font-geist` -> `font-sans` (`font-geist` var doesn't exist)
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Button on a listing is `Add to library` (no library agent)
- [x] Agent can be added and user is redirected
- [x] Button on the listing is `See runs` and clicking it redirects to
the library agent
- [x] Remove agent from library
- [x] Buttons shows `Add to library` again
- [x] Agent can be re-added
- [x] Agent can be downloaded
- [x] `Add to library` Button is hidden when user is logged out
---------
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
The process initializer on the process pool should never fail, but we
do network-related stuff there. This can leave the pool in a broken
state.
### Changes 🏗️
Remove the health check step on process initializer.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Existing CI test
Our OAuth review wants us to drop this in favor of a different scope
that will require additional work.
### Changes 🏗️
Disables the oauth sheets scopes in prod
### Checklist 📋
#### For code changes:
- [ ] I have clearly listed my changes in the PR description
- [ ] I have made a test plan
- [ ] I have tested my changes according to the test plan:
- [ ] set env locally
A collection of updates regarding onboarding and wallet.
### Changes 🏗️
- `try-except` instead of `if` when rewarding (skip unnecessary db call)
- Make external services question onboarding step optional
- Add `SmartImage` component to lazy load images with pulse animation
and use it throughout onboarding
- Use store agent name instead of graph name (run page)
- Fix some images breaking layout on the agent card (run page)
- Center agent card vertically and horizontally (center on the left half
of page) (run page)
- Delay and tweak confetti when opening wallet and when task finished
(wallet)
- Flash wallet when credits change value
- Make tutorial video grayscale on completed steps (wallet)
- Fix confetti triggering on page refresh (wallet)
- Redirect to agent run page instead of Library after onboarding
- Expand task groups by default (wallet) - this means tutorial videos
are visible by default
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Services step is optional and skipping it doesn't break onboarding
- [x] `SmartImage` works properly
- [x] Agent card is aligned properly, including on page scroll
- [x] Wallet flash when credits value change
- [x] User is redirected to the agent runs page after onboarding
Currently, the agent/graph execution engine consumes the execution
queue and acknowledges the message only after fully completing or
failing the execution.
However, if the agent executor fails due to a hardware/resource issue,
or doesn't manage to acknowledge the execution message, another agent
executor will pick it up and start the execution again from the
beginning.
The scope of this PR is to make the next executor continue the
pre-existing execution instead of starting it all over from the
beginning.
### Changes 🏗️
* Removed `start_node_execs` from `GraphExecutionEntry`
* Populate the starting graph node from the DB query instead (fetching
Running & Queued node executions).
* Removed `get_incomplete_node_executions` from DB manager.
* Use `get_node_executions` with a status filter instead.
* Allow graph execution to end in a non-FAILED/COMPLETED status; e.g.
when the executor is interrupted, the execution stays in RUNNING status
and other executors can continue the task (sketched below).
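A sketch of the resume logic under simplified assumptions (plain dicts
instead of the real node-execution models):

```python
RESUMABLE_STATUSES = {"RUNNING", "QUEUED"}

def get_resume_points(node_executions: list[dict]) -> list[dict]:
    # Instead of replaying the graph from its static start nodes, resume
    # from whatever was in flight when the previous executor died.
    return [ne for ne in node_executions if ne["status"] in RESUMABLE_STATUSES]

print(get_resume_points([
    {"id": "n1", "status": "COMPLETED"},
    {"id": "n2", "status": "RUNNING"},
    {"id": "n3", "status": "QUEUED"},
]))  # -> n2 and n3 are picked up; n1 is not re-run
```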
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Run an agent, stop the executor midway, re-run the executor; the
execution should be continued instead of restarted.
### Changes 🏗️
* Avoid executing any agent with a zero balance.
* Make node execution count global across agents for a single user.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Run agents by tweaking the `execution_cost_count_threshold` &
`execution_cost_per_threshold` values.
Using sync code in an async route often blocks the event loop and
impacts stability.
The current RPC system only provides a synchronous client for calling
the service endpoints.
The scope of this PR is to fully decouple the client and server
signatures, allowing the client to mix & match async & sync calls
while not changing the async/sync nature of the server (sketched below).
### Changes 🏗️
* Add support for flexible async/sync RPC client.
* Migrate scheduler client to all-async client.
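A sketch of the decoupled-client idea, using `httpx` as a stand-in
transport; the real RPC system's classes and signatures are not shown
here:

```python
from typing import Any

import httpx  # assumption: an HTTP-based transport, for illustration only

class ServiceClient:
    """One client exposing both sync and async call paths, regardless
    of whether the server-side handler is implemented sync or async."""

    def __init__(self, base_url: str):
        self._base_url = base_url

    def call(self, method: str, **params: Any) -> Any:
        # Safe to use from synchronous code.
        resp = httpx.post(f"{self._base_url}/{method}", json=params)
        resp.raise_for_status()
        return resp.json()

    async def acall(self, method: str, **params: Any) -> Any:
        # Safe to use from async routes; never blocks the event loop.
        async with httpx.AsyncClient() as client:
            resp = await client.post(f"{self._base_url}/{method}", json=params)
            resp.raise_for_status()
            return resp.json()
```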
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Scheduler route test.
- [x] Modified service_test.py
- [x] Run normal agent executions
Fixes the admin "add dollars" feature: in the ``add-money-button.tsx``
file, the `handleApproveSubmit` action was trying to use
`formatCredits` for the value, which is wrong. This fix changes it:
```diff
<form action={handleApproveSubmit}>
<input type="hidden" name="id" value={userId} />
<input
type="hidden"
name="amount"
- value={formatCredits(Number(dollarAmount))}
+ value={Math.round(parseFloat(dollarAmount) * 100)}
/>
```
I was able to add $1, $0.10, and $0.01.

```
FAILED test/model_test.py::test_agent_preset_from_db - pydantic_core._pydantic_core.ValidationError: 1 validation error for AgentNodeExecutionInputOutput
E pydantic_core._pydantic_core.ValidationError: 1 validation error for AgentNodeExecutionInputOutput
E data
E JSON input should be string, bytes or bytearray [type=json_type, input_value=Json, input_type=Json]
E For further information visit https://errors.pydantic.dev/2.11/v/json_type
```
### Changes 🏗️
Manually creating a Prisma model often breaks, and we have such an
instance in the test.
This PR fixes the test to make the new Pydantic happy.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] CI
We need a way to refund people who spend money on agents without manual
db actions.
### Changes 🏗️
- Adds a bunch of functionality for refunding users
- Adds reasons and admin id for actions
- Add admin to db manager
- Add UI for this for the admin panel
- Clean up pagination controls
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Test by importing dev db as baseline
- [x] Add transactions on top for "refund", and make sure all existing
transactions work
---------
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
When an executor dies, an ongoing execution will not be retried and
will just get stuck in the RUNNING status.
This change avoids such a scenario by allowing re-execution of an entry
that is not in QUEUED status, with a low-probability risk of double
execution.
### Changes 🏗️
* Allow non-QUEUED status to be re-executed.
* Improve cleanup of node & graph executor.
* Make cancellation-request consumption a separate thread to avoid it
being blocked by other messages.
* Remove unused retry loop on the execution manager.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Run agent, kill the server, re-run it, agent restarted.
This PR fixes [Issue
#9883](https://github.com/Significant-Gravitas/AutoGPT/issues/9883),
where the SendWebRequestBlock crashes when receiving a 204 No Content
response, such as when posting to a Discord webhook. The fix ensures
that empty responses are handled gracefully, and the block does not
crash.
### Changes 🏗️
- Added a check to handle empty HTTP responses (like 204 status) in
SendWebRequestBlock
- Fallback to empty string or None if there is no response content
- Prevents server errors when parsing non-existent response bodies (see the sketch below)
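A minimal sketch of the guard, assuming a `requests`-style response
object (not the block's actual parsing code):

```python
import requests

def parse_response_body(response: requests.Response):
    # 204 No Content (and any other empty body) has nothing to parse;
    # returning None instead of calling .json() avoids the crash.
    if response.status_code == 204 or not response.content:
        return None
    try:
        return response.json()
    except ValueError:
        return response.text  # non-JSON bodies fall back to raw text
```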
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Send a POST request to an endpoint that returns 204 No Content
- [x] Confirm that SendWebRequestBlock handles it without crashing
- [x] Confirm that regular 200 OK JSON responses still work
---------
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
Co-authored-by: Lohith-11 <lohithr011@gamil.com>
Co-authored-by: Toran Bruce Richards <toran.richards@gmail.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Add a note to the "Getting Started" page about a Raspberry Pi 5
page-size issue with `supabase-vector` that prevents `docker compose
up` from running successfully.
### Changes 🏗️
- Added a Note to the "Getting Started" page that explains a change in
Raspberry Pi OS for Raspberry Pi 5s, and how to revert the change to
avoid an issue running the backend on Docker.
### Checklist 📋
#### For code changes:
- [x] No code changes
#### For configuration changes:
- [x] No configuration changes
---------
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
We oopsed and used the wrong attribute for the short description.
### Changes 🏗️
Uses sub heading instead now
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] check the expected text shows
For admins to approve agents for the marketplace, we need to be able to
run them. This is a quick workaround for downloading them so you can
put them in your own marketplace to check.
### Changes 🏗️
- Clones various download-related endpoints into an admin side, with
logging and admin checks
- Adds a download button and removes the open-in-builder action
### Checklist 📋
#### For code changes:
- [ ] I have clearly listed my changes in the PR description
- [ ] I have made a test plan
- [ ] I have tested my changes according to the test plan:
- [ ] Test downloading agents from local marketplace
- Fixes #9882
We're currently using optional multi-select, and it's working great;
we're able to correctly determine the data type for it. However,
there's a small issue: we're not using the correct subSchema inside
`anyOf` on the multi-select input. This is why we're getting the
problem on the Twitter block. It's the only block using this type of
input, so it's the only one affected.

---------
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
### Changes 🏗️
Bring back PrintConsoleBlock
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Print console block
---------
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
A transaction with a zero payment amount will not generate a payment
ID, so checkout failed in this scenario.
### Changes 🏗️
Don't use payment id as transaction key on top-up with zero payment
amount.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Top-up with stripe coupon
There are instances of node executions that failed and ended up stuck
in the RUNNING status because the execution failed to release its lock:
```
2025-04-24 20:53:31,573 INFO [ExecutionManager|uid:25eba2d1-e9c1-44bc-88c7-43e0f4fbad5a|gid:01f8c315-c163-4dd1-a8a0-d396477c5a9f|nid:f8bf84ae-b1f0-4434-8f04-80f43852bc30]|geid:2e1b35c6-0d2f-4e97-adea-f6fe0d9965d0|neid:590b29ea-63ee-4e24-a429-de5a3e191e72|-] Failed node execution 590b29ea-63ee-4e24-a429-de5a3e191e72: Cannot release a lock that's no longer owned
```
### Changes 🏗️
Check the ownership of the lock before releasing.
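The error message matches redis-py's `LockNotOwnedError`, so the guard
presumably looks something like this sketch (assuming a redis-py lock;
not the actual executor code):

```python
from redis import Redis
from redis.exceptions import LockNotOwnedError

lock = Redis().lock("node-exec:590b29ea", timeout=60)
if lock.acquire(blocking=True):
    try:
        ...  # run the node execution
    finally:
        # The lock may have timed out and been re-acquired elsewhere;
        # only release it if this process still owns it.
        if lock.owned():
            try:
                lock.release()
            except LockNotOwnedError:
                pass  # ownership was lost between the check and the release
```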
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Existing CI tests.
(cherry picked from commit ef022720d5)
👋 Hi there! This PR was automatically generated by Autofix 🤖
This fix was triggered by Toran Bruce Richards.
Fixes
[AUTOGPT-SERVER-1ZY](https://sentry.io/organizations/significant-gravitas/issues/6386687527/).
The issue was that `llm_call` calculates `max_tokens` without
considering `input_tokens`, causing OpenRouter API errors when the
context window is exceeded.
- Implements a function `estimate_token_count` to estimate the number of
tokens in a list of messages.
- Calculates available tokens based on the context window, estimated
input tokens, and user-defined max tokens.
- Adjusts `max_tokens` for LLM calls to prevent exceeding context window
limits.
- Reduces `max_tokens` by 15% and retries if a token limit error is
encountered during LLM calls (sketched below).
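A sketch of the token-budget arithmetic described above; the function
and parameter names are illustrative:

```python
def adjusted_max_tokens(
    context_window: int,
    estimated_input_tokens: int,
    user_max_tokens: int | None,
) -> int:
    # Tokens actually available for the completion after the prompt.
    available = max(context_window - estimated_input_tokens, 1)
    if user_max_tokens is None:
        return available
    return min(user_max_tokens, available)

# On a token-limit error, retry with a 15% smaller budget:
# max_tokens = int(max_tokens * 0.85)
print(adjusted_max_tokens(8192, 6000, 4000))  # -> 2192
```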
If you have any questions or feedback for the Sentry team about this
fix, please email [autofix@sentry.io](mailto:autofix@sentry.io) with the
Run ID: 32838.
---------
Co-authored-by: sentry-autofix[bot] <157164994+sentry-autofix[bot]@users.noreply.github.com>
Co-authored-by: Krzysztof Czerwinski <kpczerwinski@gmail.com>
### Changes 🏗️
Provide a system toggle for disabling the billing page:
`NEXT_PUBLIC_SHOW_BILLING_PAGE`.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Toggle `NEXT_PUBLIC_SHOW_BILLING_PAGE` value.
Smart Decision Block was not able to work with sub-agents that have
custom name inputs, & the beads were not properly propagated in the
execution UI. The scope of this PR is fixing that.
### Changes 🏗️
* Introduce an easy-to-parse tool edge format:
`{tool}_^_{func}_~_{arg}` (see the parsing sketch below). Graphs using
SmartDecisionBlock need to be re-saved before execution to work.
* Reduce clutter in the smart decision block logic.
* Fix beads not being shown for smart decision block tool calls.
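A sketch of parsing such an edge name back into its parts (illustrative
helper, not the actual implementation):

```python
def parse_tool_edge(name: str) -> tuple[str, str, str]:
    """Split "{tool}_^_{func}_~_{arg}" into (tool, func, arg)."""
    tool, rest = name.split("_^_", 1)
    func, arg = rest.split("_~_", 1)
    return tool, func, arg

assert parse_tool_edge("my_tool_^_run_~_query") == ("my_tool", "run", "query")
```

Because the `_^_` and `_~_` separators are unlikely to occur in
user-provided names, splitting on the first occurrence of each recovers
the parts even when the tool name itself contains underscores or
special characters.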
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Execute an SDM with some special character input as a tool
<img width="672" alt="image"
src="https://github.com/user-attachments/assets/873556b3-c16a-4dd1-ad84-bc86c636c406"
/>
Update "Edit a copy" modal text when copying marketplace agent in
Library. Update agent action buttons to reflect the design accurately.
### Changes 🏗️
- Update modal text
- Disable copying owned agents (only marketplace allowed)
- `Open in Builder` -> `Customize agent`
- Disabled `Customize agent` instead of hiding
- Change `Delete agent` to non-destructive design
### Checklist 📋
#### For code changes:
- [ ] I have clearly listed my changes in the PR description
- [ ] I have made a test plan
- [ ] I have tested my changes according to the test plan:
- [ ] ...
Strip secrets and credentials when forking an agent.
### Changes 🏗️
### Checklist 📋
#### For code changes:
- [ ] I have clearly listed my changes in the PR description
- [ ] I have made a test plan
- [ ] I have tested my changes according to the test plan:
- [ ] ...
Currently, we have no visibility into the state of the execution
manager; the scope of this PR is to open up its observability by
exposing Prometheus metrics.
### Changes 🏗️
Re-use the execution manager port to expose the Prometheus metrics.
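A minimal sketch using `prometheus_client`; the metric name is
hypothetical, and the real change re-uses the service's existing port
rather than starting a separate server:

```python
from prometheus_client import Gauge, start_http_server

active_graph_runs = Gauge(
    "execution_manager_active_graph_runs",  # hypothetical metric name
    "Number of graph executions currently running",
)

start_http_server(8002)  # serves GET /metrics on port 8002
active_graph_runs.set(3)
```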
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Hit `/metrics` on port 8002
### Changes 🏗️
Set process starting mode to forkserver instead of spawn, if possible,
for performance benefits.
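A sketch of the "if possible" part, falling back where `forkserver`
isn't available on the platform (e.g. Windows); not the actual executor
code:

```python
import multiprocessing

def get_mp_context() -> multiprocessing.context.BaseContext:
    # forkserver avoids re-importing the world per worker (unlike spawn)
    # and is safer than fork in multi-threaded parents.
    try:
        return multiprocessing.get_context("forkserver")
    except ValueError:  # start method not supported on this platform
        return multiprocessing.get_context("spawn")
```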
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Existing tests
Executor process initialization can fail and cause this error:
```
concurrent.futures.process.BrokenProcessPool: A child process terminated abruptly, the process pool is not usable anymore
```
### Changes 🏗️
Add a retry to reduce the chance of the initialization error happening.
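A sketch of such a retry using `tenacity`; the helper and the attempt
count are illustrative:

```python
from concurrent.futures import ProcessPoolExecutor
from concurrent.futures.process import BrokenProcessPool

from tenacity import retry, retry_if_exception_type, stop_after_attempt

@retry(
    retry=retry_if_exception_type(BrokenProcessPool),
    stop=stop_after_attempt(3),
)
def run_in_fresh_pool(fn, *args):
    # If a child dies during initialization, the whole pool is unusable;
    # rebuild the pool and try again instead of failing the job outright.
    with ProcessPoolExecutor() as pool:
        return pool.submit(fn, *args).result()
```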
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Existing tests
This PR introduces an agent-copying feature in the Library. Users can
copy and download their library agents, but they can edit only the ones
they own (including copied ones).
### Changes 🏗️
- DB migration: add relation in `AgentGraph`: `forked_from_id` and
`forked_from_version`
- Add `fork_graph` function that makes a hardcopy of agent graph and its
nodes (all with new ids)
- Add `fork_library_agent` that copies library agent and its graph for a
user
- Add endpoint `/library/agents/{libraryAgentId}/fork`
- Add UI to `library/agents/[id]/page.tsx`: `Edit a copy` button with
dialog confirmation
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Agent can be copied, edited and runs
### Changes 🏗️

Hide Google Maps system id key on the frontend UI.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan
Navbar sometimes disappears outside `/onboarding`.
### Changes 🏗️
This PR solves the problem of disappearing Navbar outside `/onboarding`
by introducing `app/(platform)` route group.
- Move all routes requiring Navbar to `app/(platform)`
- Move `<Navbar>` to `app/(platform)/layout.tsx`
- Move `/onboarding` to `app/(no-navbar)/`
- Remove pathname injection to header from middleware and stop relying
on it to hide the navbar
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Common routes work properly
Setting the Google Maps API key through the API has never worked on the
platform.
### Changes 🏗️
Set the default API key from the environment variable.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Test GoogleMapsBlock
https://github.com/Significant-Gravitas/AutoGPT/pull/9452 was throwing
`operator does not exist: text ? unknown` on deployed dev, so the
function call was commented out as a hotfix.
This PR fixes and re-enables the llm model migration function.
### Changes 🏗️
- Uncomment and fix `migrate_llm_models` function
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Migrate nodes with non-existing models
- [x] Don't migrate nodes without any model or with correct models
---------
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
There are cases where publishing the agent execution fails, making the
agent execution appear to be stuck in a queue even though it was never
in a queue in the first place.
### Changes 🏗️
On publishing failure, we set the graph & starting node execution status
to FAILED and let the UI bubble up the error so the user can try again.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Normal add execution flow
For unknown reasons, publishing a message can sometimes fail due to the
connection being broken: the MessageQueue suddenly becoming
unavailable, the connection simply breaking, the connection being
reset, etc.
### Changes 🏗️
Add a tenacity retry on AMQP errors or `ConnectionError`, which
hopefully alleviates the issue.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Simple add execution
- Resolves #9771
- ... in a non-persistent way, so it won't work for webhook-triggered
agents
For webhooks: #9541
### Changes 🏗️
Frontend:
- Add credentials inputs in Library "New run" screen (based on
`graph.credentials_input_schema`)
- Refactor `CredentialsInput` and `useCredentials` to not rely on XYFlow
context
- Unsplit lists of saved credentials in `CredentialsProvider` state
- Move logic that was being executed at component render to `useEffect`
hooks in `CredentialsInput`
Backend:
- Implement logic to aggregate credentials input requirements to one per
provider per graph
- Add `BaseGraph.credentials_input_schema` (JSON schema) computed field
Underlying added logic:
- `BaseGraph._credentials_input_schema` - makes a `BlockSchema` from a
graph's aggregated credentials inputs
- `BaseGraph.aggregate_credentials_inputs()` - aggregates a graph's
nodes' credentials inputs using `CredentialsFieldInfo.combine(..)`
- `BlockSchema.get_credentials_fields_info() -> dict[str,
CredentialsFieldInfo]`
- `CredentialsFieldInfo` model (created from
`_CredentialsFieldSchemaExtra`)
- Implement logic to inject explicitly passed credentials into graph
execution
- Add `credentials_inputs` parameter to `execute_graph` endpoint
- Add `graph_credentials_input` parameter to
`.executor.utils.add_graph_execution(..)`
- Implement `.executor.utils.make_node_credentials_input_map(..)`
- Amend `.executor.utils.construct_node_execution_input`
- Add `GraphExecutionEntry.node_credentials_input_map` attribute
- Amend validation to allow injecting credentials
- Amend `GraphModel._validate_graph(..)`
- Amend `.executor.utils._validate_node_input_credentials`
- Add `node_credentials_map` parameter to
`ExecutionManager.add_execution(..)`
- Amend execution validation to handle side-loaded credentials
- Add `GraphExecutionEntry.node_execution_map` attribute
- Add mechanism to inject passed credentials into node execution data
- Add credentials injection mechanism to node execution queueing logic
in `Executor._on_graph_execution(..)`
- Replace boilerplate logic in `v1.execute_graph` endpoint with call to
existing `.executor.utils.add_graph_execution(..)`
- Replace calls to `.server.routers.v1.execute_graph` with
`add_graph_execution`
Also:
- Address tech debt in `GraphModel._validate_graph(..)`
- Fix type checking in `BaseGraph._generate_schema(..)`
#### TODO
- [ ] ~~Make "Run again" work with credentials in
`AgentRunDetailsView`~~
- [ ] Prohibit saving a graph if it has nodes with missing discriminator
value for discriminated credentials inputs
### Checklist 📋
#### For code changes:
- [ ] I have clearly listed my changes in the PR description
- [ ] I have made a test plan
- [ ] I have tested my changes according to the test plan:
- [ ] ...
This redesigns how the task group is displayed when finished, for both
the expanded and folded states.
### Changes 🏗️
- Folded state now displays `Done` badge and hides tasks
- Expanded state shows only task names and hides details and video
Screenshot:
1. Expanded unfinished group
2. Expanded finished group
3. Folded finished group
<img width="463" alt="Screenshot 2025-04-15 at 2 05 31 PM"
src="https://github.com/user-attachments/assets/40152073-fc0e-47c2-9fd4-a6b0161280e6"
/>
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Finished group displays correctly
- [x] Unfinished group displays correctly
### Changes 🏗️
- Add top-up and auto-refill tabs in the Wallet
- Add shadcn `tabs` component
- Disable increase/decrease spinner buttons on number inputs across the
Platform (moved CSS from `customnode.css` to `globals.css`)
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Incorrect values are detected properly
- [x] Top-up works
- [x] Setting auto-refill works
Onboarding executes the original agent graph directly without waiting
for the marketplace agent to be added to the user's library.
### Changes 🏗️
- Execute library agent after it's already added to library
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Onboarding agent executes properly
Array fields in `schema.prisma` are non-nullable, but generated
migrations don’t add `NOT NULL` constraints. This causes existing rows
to get `NULL` values when new array columns are added, breaking schema
expectations and leading to bugs.
### Changes 🏗️
- Backfill all `NULL` rows on non-nullable array columns to empty arrays
- Set `NOT NULL` constraint on all array columns
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Existing `NULL` rows are properly backfilled
- [x] Existing arrays are not set to default empty arrays
- [x] Affected columns became non-nullable in the db
### Changes 🏗️
Updates to the setup docs to remove the old unneeded ``git submodule
update --init --recursive --progress`` command + some other small tweaks
around it
If the websocket doesn't disconnect when the user switches to viewing a
different agent, they aren't unsubscribed. If execution updates *from a
different agent* are adopted into the page state, that can cause
crashes.
### Changes 🏗️
- Filter incoming execution updates by `graph_id`
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [ ] I have tested my changes according to the test plan:
- Go to an agent and initiate a run that will take a while (long enough
to navigate to a different agent)
- Navigate: Library -> [another agent]
- [ ] Runs from the first agent don't show up in the runs list of the
other agent
SmartDecisionBlock sometimes tried to be smart by making multiple tool
calls at once, which our platform does not support yet.
### Changes 🏗️
Disable parallel tool calls for OpenAI & OpenRouter LLM provider LLM
blocks.
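With the OpenAI Python SDK this is a single request parameter; a
generic illustration, not the blocks' actual call:

```python
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Pick exactly one tool."}],
    tools=[{
        "type": "function",
        "function": {
            "name": "example_tool",
            "parameters": {"type": "object", "properties": {}},
        },
    }],
    parallel_tool_calls=False,  # at most one tool call per response
)
```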
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Tested SmartDecisionBlock & AITextGeneratorBlock
### Changes 🏗️
Avoid other threads accessing the channel within the same process.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Manual agent runs
### Changes 🏗️
This PR simply changes the logging level of node outputs in the
`agent.py` file from info to debug.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [ ] I have tested my changes according to the test plan:
- [x] ...
<details>
<summary>Example test plan</summary>
- [ ] Create from scratch and execute an agent with at least 3 blocks
- [ ] Import an agent from file upload, and confirm it executes
correctly
- [ ] Upload agent to marketplace
- [ ] Import an agent from marketplace and confirm it executes correctly
- [ ] Edit an agent from monitor, and confirm it executes correctly
</details>
#### For configuration changes:
- [x] `.env.example` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
<details>
<summary>Examples of configuration changes</summary>
- Changing ports
- Adding new services that need to communicate with each other
- Secrets or environment variable changes
- New or infrastructure changes such as databases
</details>
---------
Co-authored-by: Bentlybro <Github@bentlybro.com>
### Changes 🏗️
This change simply lowers the logging level of node inputs and outputs
to debug. It is needed because logging all node data currently produces
log entries too large for the logger, which prevents nodes from
running.
### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [ ] I have made a test plan
- [ ] I have tested my changes according to the test plan:
- [x] ...
<details>
<summary>Example test plan</summary>
- [ ] Create from scratch and execute an agent with at least 3 blocks
- [ ] Import an agent from file upload, and confirm it executes
correctly
- [ ] Upload agent to marketplace
- [ ] Import an agent from marketplace and confirm it executes correctly
- [ ] Edit an agent from monitor, and confirm it executes correctly
</details>
#### For configuration changes:
- [x] `.env.example` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
<details>
<summary>Examples of configuration changes</summary>
- Changing ports
- Adding new services that need to communicate with each other
- Secrets or environment variable changes
- New or infrastructure changes such as databases
</details>
> Setting up and hosting the AutoGPT Platform yourself is a technical process.
> If you'd rather something that just works, we recommend [joining the waitlist](https://bit.ly/3ZDijAI) for the cloud-hosted beta.
### System Requirements
Before proceeding with the installation, ensure your system meets the following requirements:
#### Hardware Requirements
- CPU: 4+ cores recommended
- RAM: Minimum 8GB, 16GB recommended
- Storage: At least 10GB of free space
#### Software Requirements
- Operating Systems:
- Linux (Ubuntu 20.04 or newer recommended)
- macOS (10.15 or newer)
- Windows 10/11 with WSL2
- Required Software (with minimum versions):
- Docker Engine (20.10.0 or newer)
- Docker Compose (2.0.0 or newer)
- Git (2.30 or newer)
- Node.js (16.x or newer)
- npm (8.x or newer)
- VSCode (1.60 or newer) or any modern code editor
#### Network Requirements
- Stable internet connection
- Access to required ports (will be configured in Docker)
- Ability to make outbound HTTPS connections
### Updated Setup Instructions:
We've moved to a fully maintained and regularly updated documentation site.
👉 [Follow the official self-hosting guide here](https://docs.agpt.co/platform/getting-started/)
[](https://discord.gg/autogpt)
To report a bug or request a feature, create a [GitHub Issue](https://github.com/Significant-Gravitas/AutoGPT/issues/new/choose). Please ensure someone else hasn't created an issue for the same topic.
This command will copy the `.env.example` file to `.env`. You can modify the `.env` file to add your own environment variables.
3. Run the following command:
```
docker compose up -d
```
This command will start all the necessary backend services defined in the `docker-compose.yml` file in detached mode.
4. Navigate to `frontend` within the `autogpt_platform` directory:
```
cd frontend
```
You will need to run your frontend application separately on your local machine.
5. Run the following command:
```
cp .env.example .env.local
```
This command will copy the `.env.example` file to `.env.local` in the `frontend` directory. You can modify the `.env.local` within this folder to add your own environment variables for the frontend application.
6. Enable corepack and install dependencies by running:
```
corepack enable
pnpm i
```
Generate the API client (this step is required before running the frontend):
```
pnpm generate:api-client
```
Then start the frontend application in development mode:
```
pnpm dev
```
7. Open your browser and navigate to `http://localhost:3000` to access the AutoGPT Platform frontend.
Here are some useful Docker Compose commands for managing your AutoGPT Platform:
- `docker compose down`: Stop and remove containers, networks, and volumes.
- `docker compose watch`: Watch for changes in your services and automatically update them.
### Sample Scenarios
Here are some common scenarios where you might use multiple Docker Compose commands:
1. Updating and restarting a specific service:
```
docker compose build api_srv
docker compose up -d --no-deps api_srv
```
This rebuilds the `api_srv` service and restarts it without affecting other services.
2. Viewing logs for troubleshooting:
```
docker compose logs -f api_srv ws_srv
```
This shows and follows the logs for both `api_srv` and `ws_srv` services.
3. Scaling a service for increased load:
```
docker compose up -d --scale executor=3
```
This scales the `executor` service to 3 instances to handle increased load.
4. Stopping the entire system for maintenance:
```
docker compose stop
docker compose rm -f
docker compose pull
docker compose up -d
```
This stops all services, removes containers, pulls the latest images, and restarts the system.
5. Developing with live updates:
```
docker compose watch
```
This watches for changes in your code and automatically updates the relevant services.
6. Checking the status of services:
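A typical command for this (standard Docker Compose, not specific to this repo):
```
docker compose ps
```
This lists each service with its current state and exposed ports.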
These scenarios demonstrate how to use Docker Compose commands in combination to manage your AutoGPT Platform effectively.
### Persisting Data
To persist data for PostgreSQL and Redis, you can modify the `docker-compose.yml` file to add volumes. Here's how:
3. Save the file and run `docker compose up -d` to apply the changes.
This configuration will create named volumes for PostgreSQL and Redis, ensuring that your data persists across container restarts.
### API Client Generation
The platform includes scripts for generating and managing the API client:
- `pnpm fetch:openapi`: Fetches the OpenAPI specification from the backend service (requires backend to be running on port 8006)
- `pnpm generate:api-client`: Generates the TypeScript API client from the OpenAPI specification using Orval
- `pnpm generate:api-all`: Runs both fetch and generate commands in sequence
#### Manual API Client Updates
If you need to update the API client after making changes to the backend API:
1. Ensure the backend services are running:
```
docker compose up -d
```
2. Generate the updated API client:
```
pnpm generate:api-all
```
This will fetch the latest OpenAPI specification and regenerate the TypeScript client code.
This guide covers testing practices for the AutoGPT Platform backend, with a focus on snapshot testing for API endpoints.
## Table of Contents
- [Overview](#overview)
- [Running Tests](#running-tests)
- [Snapshot Testing](#snapshot-testing)
- [Writing Tests for API Routes](#writing-tests-for-api-routes)
- [Best Practices](#best-practices)
## Overview
The backend uses pytest for testing with the following key libraries:
- `pytest` - Test framework
- `pytest-asyncio` - Async test support
- `pytest-mock` - Mocking support
- `pytest-snapshot` - Snapshot testing for API responses
## Running Tests
### Run all tests
```bash
poetry run test
```
### Run specific test file
```bash
poetry run pytest path/to/test_file.py
```
### Run with verbose output
```bash
poetry run pytest -v
```
### Run with coverage
```bash
poetry run pytest --cov=backend
```
## Snapshot Testing
Snapshot testing captures the output of your code and compares it against previously saved snapshots. This is particularly useful for testing API responses.
### How Snapshot Testing Works
1. First run: Creates snapshot files in `snapshots/` directories
2. Subsequent runs: Compares output against saved snapshots
3. Changes detected: Test fails if output differs from snapshot
### Creating/Updating Snapshots
When you first write a test or when the expected output changes:
```bash
poetry run pytest path/to/test.py --snapshot-update
```
⚠️ **Important**: Always review snapshot changes before committing! Use `git diff` to verify the changes are expected.
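For illustration, here is a minimal, self-contained snapshot test using the `pytest-snapshot` fixture. The app, endpoint, and snapshot file name are invented for the sketch; real tests would exercise the platform's actual FastAPI app:
```python
import json

from fastapi import FastAPI
from fastapi.testclient import TestClient

# Stand-in app; a real test would import the platform's FastAPI app instead.
app = FastAPI()


@app.get("/ping")
def ping():
    return {"status": "ok"}


client = TestClient(app)


def test_ping_snapshot(snapshot):
    response = client.get("/ping")
    # Serialize deterministically so the snapshot is stable across runs.
    payload = json.dumps(response.json(), indent=2, sort_keys=True)
    snapshot.snapshot_dir = "snapshots"
    # First run (with --snapshot-update) writes snapshots/ping.json;
    # later runs fail if the response drifts from the saved file.
    snapshot.assert_match(payload, "ping.json")
```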
## Best Practices
### 1. Test Organization
- Place tests next to the code: `routes.py` → `routes_test.py`
- Use descriptive test names: `test_create_user_with_invalid_email`
- Group related tests in classes when appropriate
### 2. Test Coverage
- Test happy path and error cases
- Test edge cases (empty data, invalid formats)
- Test authentication and authorization
### 3. Snapshot Testing Guidelines
- Review all snapshot changes carefully
- Don't snapshot sensitive data
- Keep snapshots focused and minimal
- Update snapshots intentionally, not accidentally
### 4. Async Testing
- Use regular `def` for FastAPI TestClient tests
- Use `async def` with `@pytest.mark.asyncio` for testing async functions directly (see the sketch below)
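A minimal sketch of the second pattern, assuming `pytest-asyncio` is installed (the mocked dependency and names are invented for illustration):
```python
import pytest
from unittest.mock import AsyncMock


@pytest.mark.asyncio
async def test_async_dependency_is_awaited():
    # AsyncMock stands in for an async dependency (e.g. a DB or API client).
    fetch_user = AsyncMock(return_value={"id": "123", "name": "Test User"})

    result = await fetch_user("123")

    assert result["name"] == "Test User"
    fetch_user.assert_awaited_once_with("123")
```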
### 5. Fixtures
Create reusable fixtures for common test data:
```python
@pytest.fixture
def sample_user():
    return {
        "email": "test@example.com",
        "name": "Test User",
    }


def test_create_user(sample_user, snapshot):
    response = client.post("/users", json=sample_user)
    # ... test implementation
```
## CI/CD Integration
The GitHub Actions workflow automatically runs tests on:
- Pull requests
- Pushes to main branch
Snapshot tests work in CI by:
1. Committing snapshot files to the repository
2. CI compares against committed snapshots
3. Fails if snapshots don't match
## Troubleshooting
### Snapshot Mismatches
- Review the diff carefully
- If changes are expected: `poetry run pytest --snapshot-update`
- If changes are unexpected: Fix the code causing the difference
### Async Test Issues
- Ensure async functions use `@pytest.mark.asyncio`
- Use `AsyncMock` for mocking async functions
- FastAPI TestClient handles async automatically
### Import Errors
- Check that all dependencies are in `pyproject.toml`
- Run `poetry install` to ensure dependencies are installed
- Verify import paths are correct
## Summary
Snapshot testing provides a powerful way to ensure API responses remain consistent. Combined with traditional assertions, it creates a robust test suite that catches regressions while remaining maintainable.
Remember: Good tests are as important as good code!
description="""The number range of employees working for the company. This enables you to find companies based on headcount. You can add multiple ranges to expand your search results.
Each range you add needs to be a string, with the upper and lower numbers of the range separated only by a comma.""",
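For example, a hedged sketch of what such a value might look like in Python (the field name `organization_num_employees_ranges` is an assumption based on Apollo's API naming; the ranges are illustrative):
```python
# Each range is a single "min,max" string (illustrative values).
organization_num_employees_ranges = ["1,10", "11,50", "201,500"]
```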
description="""The location of the company headquarters. You can search across cities, US states, and countries.
If a company has several office locations, results are still based on the headquarters location. For example, if you search chicago but a company's HQ location is in boston, any Boston-based companies will not appearch in your search results, even if they match other parameters.
@@ -375,28 +387,30 @@ To exclude companies based on location, use the organization_not_locations param
description="""Exclude companies from search results based on the location of the company headquarters. You can use cities, US states, and countries as locations to exclude.
This parameter is useful for ensuring you do not prospect in an undesirable territory. For example, if you use ireland as a value, no Ireland-based companies will appear in your search results.
description="""Filter search results based on keywords associated with companies. For example, you can enter mining as a value to return only companies that have an association with the mining industry."""
description="""Filter search results based on keywords associated with companies. For example, you can enter mining as a value to return only companies that have an association with the mining industry.""",
default_factory=list,
)
q_organization_name: Optional[str] = SchemaField(
description="""Filter search results to include a specific company name.
If the value you enter for this parameter does not match with a company's name, the company will not appear in search results, even if it matches other parameters. Partial matches are accepted. For example, if you filter by the value marketing, a company called NY Marketing Unlimited would still be eligible as a search result, but NY Market Analysis would not be eligible.""",
default="",
)
organization_ids: Optional[list[str]] = SchemaField(
description="""The Apollo IDs for the companies you want to include in your search results. Each company in the Apollo database is assigned a unique ID.
To find IDs, identify the values for organization_id when you call this endpoint.""",
default_factory=list,
)
max_results: Optional[int] = SchemaField(
description="""The maximum number of results to return. If you don't specify this parameter, the default is 100.""",
default=100,
ge=1,
class SearchOrganizationsResponse(BaseModel):
"""Response from Apollo's search organizations API"""
breadcrumbs: Optional[list[Breadcrumb]] = []
partial_results_only: Optional[bool] = True
has_join: Optional[bool] = True
disable_eu_prospecting: Optional[bool] = True
partial_results_limit: Optional[int] = 0
pagination: Pagination = Pagination(
page=0, per_page=0, total_entries=0, total_pages=0
)
accounts: list[Any] = []
organizations: list[Organization] = []
models_ids: list[str] = []
num_fetch_result: Optional[str] = ""
derived_params: Optional[str] = ""
class SearchPeopleRequest(BaseModel):
"""Request for Apollo's search people API"""
person_titles: Optional[list[str]] = SchemaField(
description="""Job titles held by the people you want to find. For a person to be included in search results, they only need to match 1 of the job titles you add. Adding more job titles expands your search results.
Results also include job titles with the same terms, even if they are not exact matches. For example, searching for marketing manager might return people with the job title content marketing manager.
Use this parameter in combination with the person_seniorities[] parameter to find people based on specific job functions and seniority levels.""",
default_factory=list,
placeholder="marketing manager",
)
person_locations: Optional[list[str]] = SchemaField(
description="""The location where people live. You can search across cities, US states, and countries.
To find people based on the headquarters locations of their current employer, use the organization_locations parameter.""",
description="""The job seniority that people hold within their current employer. This enables you to find people that currently hold positions at certain reporting levels, such as Director level or senior IC level.
For a person to be included in search results, they only need to match 1 of the seniorities you add. Adding more seniorities expands your search results.
Use this parameter in combination with the person_titles[] parameter to find people based on specific job functions and seniority levels.""",
description="""The location of the company headquarters for a person's current employer. You can search across cities, US states, and countries.
If a company has several office locations, results are still based on the headquarters location. For example, if you search chicago but a company's HQ location is in boston, people that work for the Boston-based company will not appear in your results, even if they match other parameters.
To find people based on their personal location, use the person_locations parameter.""",
description="""The domain name for the person's employer. This can be the current employer or a previous employer. Do not include www., the @ symbol, or similar.
You can add multiple domains to search across companies.
description="""The email statuses for the people you want to find. You can add multiple statuses to expand your search.""",
default_factory=list,
)
organization_ids: Optional[list[str]] = SchemaField(
description="""The Apollo IDs for the companies (employers) you want to include in your search results. Each company in the Apollo database is assigned a unique ID.
To find IDs, call the Organization Search endpoint and identify the values for organization_id.""",
description="""The number range of employees working for the company. This enables you to find companies based on headcount. You can add multiple ranges to expand your search results.
Each range you add needs to be a string, with the upper and lower numbers of the range separated only by a comma.""",
default_factory=list,
)
q_keywords: Optional[str] = SchemaField(
description="""A string of words over which we want to filter the results""",
default="",
)
Use the page parameter to search the different pages of data.""",
default=100,
)
max_results: Optional[int] = SchemaField(
description="""The maximum number of results to return. If you don't specify this parameter, the default is 100.""",
default=100,
ge=1,
class SearchPeopleResponse(BaseModel):
populate_by_name=True,
)
breadcrumbs: Optional[list[Breadcrumb]] = []
partial_results_only: Optional[bool] = True
has_join: Optional[bool] = True
disable_eu_prospecting: Optional[bool] = True
partial_results_limit: Optional[int] = 0
pagination: Pagination = Pagination(
page=0, per_page=0, total_entries=0, total_pages=0
)
contacts: list[Contact] = []
people: list[Contact] = []
model_ids: list[str] = []
num_fetch_result: Optional[str] = ""
derived_params: Optional[str] = ""
class EnrichPersonRequest(BaseModel):
"""Request for Apollo's person enrichment API"""
person_id: Optional[str] = SchemaField(
description="Apollo person ID to enrich (most accurate method)",
default="",
)
first_name: Optional[str] = SchemaField(
description="First name of the person to enrich",
default="",
)
last_name: Optional[str] = SchemaField(
description="Last name of the person to enrich",
default="",
)
name: Optional[str] = SchemaField(
description="Full name of the person to enrich",
default="",
)
email: Optional[str] = SchemaField(
description="Email address of the person to enrich",
default="",
)
domain: Optional[str] = SchemaField(
description="Company domain of the person to enrich",
default="",
)
company: Optional[str] = SchemaField(
description="Company name of the person to enrich",
default="",
)
linkedin_url: Optional[str] = SchemaField(
description="LinkedIn URL of the person to enrich",
default="",
)
organization_id: Optional[str] = SchemaField(
description="Apollo organization ID of the person's company",
default="",
)
description="""The number range of employees working for the company. This enables you to find companies based on headcount. You can add multiple ranges to expand your search results.
Each range you add needs to be a string, with the upper and lower numbers of the range separated only by a comma.""",
@@ -65,7 +65,7 @@ To find IDs, identify the values for organization_id when you call this endpoint
description="""The number range of employees working for the company. This enables you to find companies based on headcount. You can add multiple ranges to expand your search results.
Each range you add needs to be a string, with the upper and lower numbers of the range separated only by a comma.""",
class SearchPeopleBlock(Block):
advanced=False,
)
max_results: int = SchemaField(
description="""The maximum number of results to return. If you don't specify this parameter, the default is 25. Limited to 500 to prevent overspending.""",
default=25,
ge=1,
le=500,
advanced=True,
)
enrich_info: bool = SchemaField(
description="""Whether to enrich contacts with detailed information including real email addresses. This will double the search cost.""",
description="Number of days to look ahead for events",default=30
)
search_term:str|None=SchemaField(
description="Optional search term to filter events by",default=None
)
page_token:str|None=SchemaField(
description="Page token from previous request to get the next batch of events. You can use this if you have lots of events you want to process in a loop",
prompt=f"Summarize the following text in a {input_data.style} form. Focus your summary on the topic of `{input_data.focus}` if present, otherwise just provide a general summary:\n\n```{chunk}```"
llm_response=self.llm_call(
llm_response=awaitself.llm_call(
AIStructuredResponseGeneratorBlock.Input(
prompt=prompt,
credentials=input_data.credentials,
prompt=f"Provide a final summary of the following section summaries in a {input_data.style} form, focus your summary on the topic of `{input_data.focus}` if present:\n\n ```{combined_text}```\n\n Just respond with the final_summary in the format specified."
llm_response=self.llm_call(
llm_response=awaitself.llm_call(
AIStructuredResponseGeneratorBlock.Input(
prompt=prompt,
credentials=input_data.credentials,
returnllm_response["final_summary"]
else:
# If combined summaries are still too long, recursively summarize
returnself._run(
block=AITextSummarizerBlock()
returnawaitblock.run_once(
AITextSummarizerBlock.Input(
text=combined_text,
credentials=input_data.credentials,
max_tokens=input_data.max_tokens,
chunk_overlap=input_data.chunk_overlap,
),
"summary",
credentials=credentials,
)
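Stepping back, the fragment above implements a map-reduce style recursion: chunk summaries are combined and, if the combination is still too long, fed back through the summarizer. A standalone sketch of that control flow, with a generic `llm` callable and character-based chunking standing in for the block's real helpers (both are assumptions, not the block's actual API):
```python
from typing import Awaitable, Callable


async def summarize(
    text: str,
    llm: Callable[[str], Awaitable[str]],
    max_chunk_chars: int = 4000,
) -> str:
    # Base case: the text is short enough to summarize in a single call.
    if len(text) <= max_chunk_chars:
        return await llm(f"Summarize:\n\n{text}")

    # Map: split into chunks and summarize each one independently.
    chunks = [
        text[i : i + max_chunk_chars]
        for i in range(0, len(text), max_chunk_chars)
    ]
    summaries = [await llm(f"Summarize:\n\n{chunk}") for chunk in chunks]

    # Reduce: recurse on the combined summaries; terminates as long as
    # each summary is shorter than the chunk it came from.
    return await summarize("\n\n".join(summaries), llm, max_chunk_chars)
```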
class AIConversationBlock(AIBlockBase):
output: FileOutput | list[FileOutput] = await client.async_run(  # type: ignore This is because they changed the return type, and didn't update the type hint! It should be overloaded depending on the value of `use_file_output` to `FileOutput | list[FileOutput]` but it's `Any | Iterator[Any]`