Commit Graph

7702 Commits

Author SHA1 Message Date
Ubbe
01950ccc42 fix(frontend): password reset via server callback (#10303)
## Changes 🏗️

### Root Cause

With httpOnly cookies, the Supabase client can't automatically exchange
password reset codes for sessions client-side because it can't access
the secure cookies 🍪 ( _which is a good thing_ ).

Previously, when users clicked email reset links, the Supabase client on
the browser would automatically handle the code exchange, but
with `httpOnly`, this is not possible because the Supabase browser client
does not have access to session info, so it fails silently 🥵

### Solution

Moved password reset code exchange to server-side middleware that can
access `httpOnly` cookies and properly create authenticated sessions.

### Code Changes

**`middleware.ts`**
- intercepts `/reset-password` URLs containing `code` parameter
- uses helper function to exchange code for session server-side
- redirects with error parameters if exchange fails
- moved `getUser()` call to avoid middleware timing issues

**`reset-password/page.tsx`**
- added toast notifications for password reset errors
- checks URL parameters for error messages on page load

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Password reset emails send successfully
  - [x] Valid reset codes exchange for sessions server-side
  - [x] Invalid/expired codes show error messages via toast
  - [x] Successfully authenticated users can change passwords
  - [x] URL parameters are cleaned up after error display
  - [x] Middleware doesn't break normal authentication flows
  
### For configuration changes:

For this to work we need to configure Supabase with the new
password-reset redirect URL.
```
/api/auth/callback/reset-password
```
- [x] Already added in Supabase dev
- [ ] We need to add it on Supabase prod
2025-07-04 12:09:13 +00:00
Reinier van der Leer
358ce1d258 fix(backend/library): Include subgraphs in get_library_agent (#10301)
- Resolves #10300
- Follow-up fix to #10167

### Changes 🏗️

- Include sub-graphs in `get_library_agent` endpoint

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Executing agent with sub-graphs that require credentials works
2025-07-04 10:29:53 +00:00
Zamil Majdy
a5691c0e89 feat(block): Add dict append capability for GoogleSheetsAppendBlock 2025-07-03 20:54:59 -07:00
Zamil Majdy
0b35dff1e6 fix(block): Fix failing GoogleSheetsAppendBlock on undefined append range 2025-07-03 17:13:41 -07:00
Zamil Majdy
6cf9136cdd feat(block): Support URL format input instead of ID for Google Sheet blocks 2025-07-03 16:47:33 -07:00
Zamil Majdy
5d91a9c9b9 feat(block): Make RetrieveInformationBlock output static 2025-07-03 14:15:22 -07:00
Zamil Majdy
e3d84d87f8 fix(blocks): restore batching logic in CreateListBlock
During data manipulation refactoring, the CreateListBlock lost its
important batching functionality with max_size and max_tokens parameters.
This restores the original implementation that can yield lists in chunks
based on size or token limits.
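
A rough sketch of the kind of batching this restores is below (illustrative only; the real block lives in the platform's block framework and uses its own tokenizer — `len(str(item)) // 4` here is just a stand-in token estimate):

```python
# Illustrative sketch: chunk a list by item count and an approximate token budget.
def yield_in_batches(items, max_size=None, max_tokens=None):
    batch, batch_tokens = [], 0
    for item in items:
        item_tokens = len(str(item)) // 4  # crude token estimate (assumption)
        over_size = max_size is not None and len(batch) >= max_size
        over_tokens = max_tokens is not None and batch_tokens + item_tokens > max_tokens
        if batch and (over_size or over_tokens):
            yield batch
            batch, batch_tokens = [], 0
        batch.append(item)
        batch_tokens += item_tokens
    if batch:
        yield batch


if __name__ == "__main__":
    for chunk in yield_in_batches(range(10), max_size=4):
        print(chunk)  # [0, 1, 2, 3], [4, 5, 6, 7], [8, 9]
```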

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-07-03 14:01:22 -07:00
Toran Bruce Richards
9fecbe2a31 feat(blocks): add plural outputs where blocks yield singular values in loops (#10304)
## Summary

This PR adds missing plural output versions to blocks that yield
individual items in loops but don't provide the complete collection,
enabling both individual item access (for iteration) and complete
collection access (for aggregate operations).

## Changes

### GitHub Blocks (existing)
- **GithubListPullRequestsBlock**: Added `pull_requests` output
alongside existing `pull_request`
- **GithubListPRReviewersBlock**: Added `reviewers` output alongside
existing `reviewer`

### Additional Blocks (added in this PR)
- **GetRedditPostsBlock**: Added `posts` output for complete list of
Reddit posts
- **ReadRSSFeedBlock**: Added `entries` output for complete list of RSS
entries
- **AddMemoryBlock**: Added `results` output for complete list of memory
operation results

## Pattern Applied

The pattern ensures blocks provide both:
```python
# Complete collection first
yield "plural_output", all_items

# Then individual items for iteration
for item in all_items:
    yield "singular_output", item
```

## Testing
- Updated test outputs to include plural versions
- All blocks maintain backward compatibility with existing singular
outputs
- `poetry run format` -  Passed
- `poetry run test` -  Blocks validated

## Benefits
- **Iteration**: Users can still iterate over individual items as before
- **Aggregation**: Users can now access complete collections for
operations like counting, filtering, or batch processing
- **Compatibility**: Existing workflows continue to work unchanged

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Claude <noreply@anthropic.com>
2025-07-03 20:06:31 +00:00
Toran Bruce Richards
4744e0f6b1 feat(blocks): add data manipulation blocks and refactor basic.py (#10261)
### Changes 🏗️

#### New List Operation Blocks
- Implement `GetListItemBlock` for retrieving an element at a specific
index, with negative index support
- Introduce `RemoveFromListBlock` to remove or pop items and optionally
return the removed value
- Add `ReplaceListItemBlock` to overwrite an item at a given index and
return the old value
- Provide `ListIsEmptyBlock` for quickly checking if a list has no
elements

#### New Dictionary Operation Blocks (for consistency with list
operations)
- Add `RemoveFromDictionaryBlock` to remove key-value pairs and
optionally return the removed value
- Implement `ReplaceDictionaryValueBlock` to replace values for a
specified key and return the old value
- Provide `DictionaryIsEmptyBlock` for checking if a dictionary has no
elements
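
As a plain-Python illustration of the semantics these list and dictionary blocks expose (ignoring the Block schema boilerplate; the function names below are not the real block APIs):

```python
# Plain-Python sketch of the behaviours the new blocks wrap (illustrative only;
# real blocks define Input/Output schemas and yield named outputs).

def get_list_item(items: list, index: int):
    return items[index]            # negative indices work: -1 is the last element

def remove_from_list(items: list, index: int):
    removed = items.pop(index)     # pop returns the removed value
    return items, removed

def replace_list_item(items: list, index: int, value):
    old = items[index]
    items[index] = value
    return items, old              # the old value is also returned

def remove_from_dictionary(d: dict, key):
    removed = d.pop(key)
    return d, removed

def replace_dictionary_value(d: dict, key, value):
    old = d[key]
    d[key] = value
    return d, old

def is_empty(collection) -> bool:
    return len(collection) == 0

print(get_list_item(["a", "b", "c"], -1))          # c
print(replace_dictionary_value({"x": 1}, "x", 2))  # ({'x': 2}, 1)
```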

#### Code Organization & Refactoring
- **Created `data_manipulation.py`**: Moved all dictionary and list
manipulation blocks to a dedicated file to prevent `basic.py` from
becoming too large
- **Refactored `basic.py`**: Now contains only core utility blocks
(FileStore, StoreValue, PrintToConsole, Note, UniversalTypeConverter)
- **Ensured consistency**: Dictionary and list blocks now have
equivalent functionality and follow the same patterns
- **Removed redundancy**: Eliminated duplicate `GetDictionaryValueBlock`
since `FindInDictionaryBlock` already provides comprehensive lookup
functionality
- **Preserved UUIDs**: All existing block UUIDs maintained to ensure no
breaking changes

#### Block Organization Summary
**`basic.py` (core utilities):**
- `FileStoreBlock`, `StoreValueBlock`, `PrintToConsoleBlock`,
`NoteBlock`, `UniversalTypeConverterBlock`

**`data_manipulation.py` (dictionary & list operations):**
- **Dictionary blocks:** Create, Add, Find, Remove, Replace, IsEmpty
- **List blocks:** Create, Add, Find, Get, Remove, Replace, IsEmpty

### Checklist 📋
#### For code changes:
- [x] I have clearly listed my changes in the PR description  
- [x] I have made a test plan  
- [x] I have tested my changes according to the test plan:
  - [x] `poetry run format`
  - [x] `poetry run test`
  - [x] `pnpm format`

<details>
  <summary>Example test plan</summary>

  - [ ] Create from scratch and execute an agent with at least 3 blocks
- [ ] Import an agent from file upload, and confirm it executes
correctly
  - [ ] Upload agent to marketplace
- [ ] Import an agent from marketplace and confirm it executes correctly
  - [ ] Edit an agent from monitor, and confirm it executes correctly
</details>

#### For configuration changes:
- [ ] `.env.example` is updated or already compatible with my changes
- [ ] `docker-compose.yml` is updated or already compatible with my
changes
- [ ] I have included a list of my configuration changes in the PR
description (under **Changes**)

<details>
  <summary>Examples of configuration changes</summary>

  - Changing ports
  - Adding new services that need to communicate with each other
  - Secrets or environment variable changes
  - New or infrastructure changes such as databases
</details>

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
2025-07-03 20:04:21 +00:00
Zamil Majdy
24b4ab9864 feat(block): Enhance Mem0 blocks filtering & add more GoogleSheets blocks (#10287)
The block library was missing two key capabilities that keep coming up
in real-world agent flows:

1. **Granular Mem0 filtering.** Agents often work side-by-side for the
same user, so memories must be scoped to a specific run or agent to
avoid crosstalk.
2. **First-class Google Sheets support.** Many community projects (e.g.,
data-collection, lightweight dashboards, no-code workflows) rely on
Sheets, but we only had a brittle REST call block.

This PR adds fine-grained filters to every Mem0 retrieval block and
introduces a complete, OAuth-ready Google Sheets suite so agents can
create, read, write, format, and manage spreadsheets safely.

---

### Changes 🏗️
#### 📚 Mem0 block enhancements  
* Added `categories_filter`, `metadata_filter`, `limit_memory_to_run`,
and `limit_memory_to_agent` inputs to **SearchMemoryBlock**,
**GetAllMemoriesBlock**, and **GetLatestMemoryBlock**.
* Added identical scoping logic to **AddMemoryBlock** so newly-created
memories can be tied to run/agent IDs.
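
A rough sketch of how run/agent scoping could translate into a Mem0-style filter payload (the field names and operators here are assumptions based on the description, not the exact implementation):

```python
# Illustrative only: build a Mem0-style filter dict that scopes memories to a
# specific run and/or agent.
def build_memory_filters(
    user_id: str,
    categories_filter: list[str] | None = None,
    metadata_filter: dict | None = None,
    run_id: str | None = None,      # set when limit_memory_to_run is enabled
    agent_id: str | None = None,    # set when limit_memory_to_agent is enabled
) -> dict:
    conditions: list[dict] = [{"user_id": user_id}]
    if run_id:
        conditions.append({"run_id": run_id})
    if agent_id:
        conditions.append({"agent_id": agent_id})
    if categories_filter:
        conditions.append({"categories": {"in": categories_filter}})
    if metadata_filter:
        conditions.append({"metadata": metadata_filter})
    return {"AND": conditions}


print(build_memory_filters("user-1", run_id="run-42", agent_id="agent-7"))
```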

#### 📊 New Google Sheets blocks (`backend/blocks/google/sheets.py`)  
| Block | Purpose |
|-------|---------|
| `GoogleSheetsReadBlock` | Read a range |
| `GoogleSheetsWriteBlock` | Overwrite a range |
| `GoogleSheetsAppendBlock` | Append rows |
| `GoogleSheetsClearBlock` | Clear a range |
| `GoogleSheetsMetadataBlock` | Fetch spreadsheet + sheet info |
| `GoogleSheetsManageSheetBlock` | Create / delete / copy a sheet |
| `GoogleSheetsBatchOperationsBlock` | Batch update / clear |
| `GoogleSheetsFindReplaceBlock` | Find & replace text |
| `GoogleSheetsFormatBlock` | Cell formatting (bg/text colour, bold, italic, font size) |
| `GoogleSheetsCreateSpreadsheetBlock` | Spin up a new spreadsheet |

* Each block has typed input/output schemas, built-in test mocks, and is
disabled in prod unless Google OAuth is configured.
* Added helper enums (`SheetOperation`, `BatchOperationType`) and
updated **CLAUDE.md** contributor guide with a UUID-generation step.

---

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
- [x] Manual E2E run: agent writes chat summary to new spreadsheet,
reads it back, searches memory with run-scoped filter
- [x] Live Google API smoke-tests (read/write/append/clear/format) using
a disposable spreadsheet
2025-07-03 18:01:30 +00:00
Ubbe
04e90da031 fix(frontend): proxy via API route no actions (#10296)
## Changes 🏗️

We noticed that on some pages ( `/build` _mainly_ ), where a lot of API
calls are fired in parallel using the old `BackendAPI` ( _running many
agent executions_ ), the performance became worse. That is because the
`BackendAPI` was proxied via [server
actions](https://nextjs.org/docs/14/app/building-your-application/data-fetching/server-actions-and-mutations)
( _to make calls to our Backend on the Next.js server_ ).

Looks like server actions don't run in parallel, and their performance
is also subpar, given that we are not hosted on Vercel (they don't
utilise the edge middleware).

These changes cause all `BackendAPI` calls to be proxied via the Next.js
`/api/` route when executed on the browser; when executed on the server,
they bypass the proxy and directly access the API. Hopefully we gain:

- 🚀 Better Performance - API routes are faster than server actions for
this use case
- 🔧 Less Magic - Direct fetch calls instead of hidden server action
complexity
- ♻️ Code Reuse - Leveraging the existing proxy infrastructure used by
react-query
- 🎯 Cleaner Architecture - Single proxy pattern for all API calls
- 🔒 Same Security - Still uses server-side authentication with httpOnly
cookies

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] E2E tests pass
  - [x] Click through the app, there are no issues
  - [x] Agent executions are fast again in the builder
  - [x] Test file uploads
2025-07-03 15:18:47 +00:00
Zamil Majdy
d4646c249d feat(backend): implement KV data storage blocks 2025-07-03 07:54:50 -07:00
Zamil Majdy
095199bfa6 feat(backend): implement KV data storage blocks (#10294)
This PR introduces key-value storage blocks.

### Changes 🏗️

- **Database Schema**: Add AgentNodeExecutionKeyValueData table with
composite primary key (userId, key)
- **Persistence Blocks**: Create PersistInformationBlock and
RetrieveInformationBlock in persistence.py
- **Scope-based Storage**: Support for within_agent (per agent) vs
across_agents (global user) persistence
- **Key Structure**: Use a `#` delimiter for storage keys:
`agent#{graph_id}#{key}` and `global#{key}` (see the sketch below)
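
A minimal sketch of how such scope-based keys might be composed (illustrative only; the real blocks persist these keys through the platform's database layer and the AgentNodeExecutionKeyValueData table):

```python
# Illustrative sketch of the scope-based key scheme described above.
def storage_key(key: str, scope: str, graph_id: str | None = None) -> str:
    if scope == "within_agent":
        if graph_id is None:
            raise ValueError("within_agent scope requires a graph_id")
        return f"agent#{graph_id}#{key}"
    if scope == "across_agents":
        return f"global#{key}"
    raise ValueError(f"unknown scope: {scope}")


print(storage_key("favorite_color", "within_agent", graph_id="g-123"))  # agent#g-123#favorite_color
print(storage_key("favorite_color", "across_agents"))                   # global#favorite_color
```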

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run all 244 block tests - all passing 
  - [x] Test PersistInformationBlock with mock data storage
  - [x] Test RetrieveInformationBlock with mock data retrieval
- [x] Verify scope-based key generation (within_agent vs across_agents)
  - [x] Verify database function integration through all manager classes
  - [x] Run lint and type checking - all passing 
  - [x] Verify database migration is included and valid

#### For configuration changes:
- [x] `.env.example` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)

Note: This change adds database schema and new blocks but doesn't
require environment or docker-compose changes as it uses existing
database infrastructure.

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-07-03 14:24:51 +00:00
Zamil Majdy
90fb223114 fix(frontend): Fix status chip not showing on graph with INCOMPLETE status 2025-07-03 07:25:59 -07:00
Reinier van der Leer
b1f3122243 fix(frontend): Add fallback for NEXT_PUBLIC_FRONTEND_BASE_URL to API proxy (#10299)
- Resolves #10298
- Follow-up to #10270

### Changes 🏗️

Amend two changes from #10270:

- Add fallback for `NEXT_PUBLIC_FRONTEND_BASE_URL` in custom-mutator.ts
- Revert rename of `FRONTEND_BASE_URL` to
`NEXT_PUBLIC_FRONTEND_BASE_URL` in `backend/.env.example`

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - Don't set `NEXT_PUBLIC_FRONTEND_BASE_URL`
  - Run the platform locally
  - [x] -> `/library` loads normally

#### For configuration changes:
- [x] `.env.example` is updated or already compatible with my changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
2025-07-03 12:26:50 +00:00
Zamil Majdy
f1cc2afbda feat(backend): improve stop graph execution reliability (#10293)
## Summary
- Enhanced graph execution cancellation and cleanup mechanisms
- Improved error handling and logging for graph execution lifecycle
- Added timeout handling for graph termination with proper status
updates
- Exposed a new API for stopping graph based on only graph_id or user_id
- Refactored logging metadata structure for better error tracking

## Key Changes
### Backend
- **Graph Execution Management**: Enhanced `stop_graph_execution` with
timeout-based waiting and proper status transitions
- **Execution Cleanup**: Added proper cancellation waiting with timeout
handling in executor manager
- **Logging Improvements**: Centralized `LogMetadata` class and improved
error logging consistency
- **API Enhancements**: Added bulk graph execution stopping
functionality
- **Error Handling**: Better exception handling and status management
for failed/cancelled executions
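
A hedged sketch of what timeout-based waiting for a cancelled graph execution can look like (the statuses, the `Execution` stub, and `request_stop` are illustrative assumptions, not the actual executor code):

```python
import asyncio
from dataclasses import dataclass

TERMINAL_STATES = {"COMPLETED", "FAILED", "TERMINATED"}

@dataclass
class Execution:
    status: str = "RUNNING"

    def request_stop(self) -> None:
        # In the real system this would signal running node executions to cancel.
        self.status = "TERMINATED"

async def _wait_until_terminal(execution: Execution, poll: float = 0.1) -> None:
    while execution.status not in TERMINAL_STATES:
        await asyncio.sleep(poll)

async def stop_graph_execution(execution: Execution, timeout: float = 30.0) -> str:
    execution.request_stop()
    try:
        await asyncio.wait_for(_wait_until_terminal(execution), timeout=timeout)
    except asyncio.TimeoutError:
        execution.status = "FAILED"  # stuck execution: force a terminal status
    return execution.status

print(asyncio.run(stop_graph_execution(Execution())))  # TERMINATED
```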

### Frontend
- **Status Safety**: Added null safety checks for status chips to
prevent runtime errors
- **Execution Control**: Simplified stop execution request handling

## Test Plan
- [x] Verify graph execution can be properly stopped and reaches
terminal state
- [x] Test timeout scenarios for stuck executions
- [x] Validate proper cleanup of running node executions when graph is
cancelled
- [x] Check frontend status chips handle undefined statuses gracefully
- [x] Test bulk execution stopping functionality

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-07-02 21:21:26 +00:00
Ubbe
f394a0eabb fix(frontend): do not swallow errors on the proxy (#10289)
## Changes 🏗️

Requests to the Backend now happen on the server, given we moved to
server-side cookies 🍪 ... however, the client proxy is not exposing the
API errors to the client correctly. This aims to fix that.

## Checklist 📋

#### For code changes:

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Login to the platform
  - [x] Run agents until you encounter an error
  - [x] The error is shown on the toast
2025-07-02 14:18:17 +00:00
Ubbe
311bcc7751 fix(frontend): onboarding runtime error (#10288)
## Changes 🏗️

<img width="800" alt="Screenshot 2025-07-02 at 16 43 08"
src="https://github.com/user-attachments/assets/d7cd0dd7-e671-4c5d-8ed9-6d8f56371ff5"
/>

During logout, the user state gets cleared but the onboarding provider
continues to run and tries to access onboarding.completedSteps on a null
object, causing a runtime error 😬

This mostly happens because onboarding is broken on local and dev ( the
onboarding agents don't work ), so I manually skip it after creating an
account by navigating to `/marketplace`. That makes me think the
onboarding provider still believes I need to onboard, which is likely why
this error occurs.

## Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Create a new user
- [x] Instead of completing onboarding, navigate to `/marketplace` via
browser URL
- [x] Logout, then login/logout a few times and confirm you don't see
runtime errors
2025-07-02 14:13:22 +00:00
SOUHAILA SERBOUT
e2bd727798 feat: optimize frontend CI with shared setup job (#10286)
# Change details
- **Before**: Each job separately installs dependencies (~4 redundant
installations)
  ### Massive Redundancy in Setup Steps
  Each job repeats these SAME 4 steps:
  - Checkout repository
  - Set up Node.js (version 21) 
  - Enable corepack
  - Install dependencies (pnpm install --frozen-lockfile)
  
  This happens 4+ times across different jobs:

  - lint job
  - type-check job
  - chromatic job
  - test job (runs 2x due to matrix strategy)

  ### No Dependency Caching
  No caching strategy - packages are downloaded fresh every time
  - Every workflow run downloads all packages from scratch
  - No benefit from previous runs, even with identical pnpm-lock.yaml


- **After**: Dependencies installed once in setup job, cached and reused

This optimization maintains all existing CI functionality while
significantly improving pipeline efficiency.
An example workflow run has been dispatched here:
https://github.com/souhailaS/AutoGPT/actions/workflows/platform-frontend-ci.yml


## Additional Context 
We are a team of researchers from University of Zurich
(https://www.ifi.uzh.ch/en/zest.html) currently **working on automating
energy optimizations in GitHub Actions workflows**. This optimization
maintains full functionality while significantly reducing computational
overhead and energy consumption.

souhaila.serbout@uzh.ch
2025-07-02 10:02:19 +00:00
seer-by-sentry[bot]
47f503f223 feat(backend): Support aiohttp.BasicAuth in make_request (#10283)
Fixes https://github.com/Significant-Gravitas/AutoGPT/issues/10284

### Changes 🏗️

- Allows passing an `aiohttp.BasicAuth` object directly to the `auth`
parameter of the `make_request` function.
- Converts tuple-based auth credentials to `aiohttp.BasicAuth` objects
before making the request.

Fixes
[AUTOGPT-SERVER-4AX](https://sentry.io/organizations/significant-gravitas/issues/6709824432/).
The issue was that aiohttp's ClientSession.request received a plain
tuple for `auth` instead of an `aiohttp.BasicAuth` object, causing
OAuth2 token exchange failure.
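
The shape of the conversion is roughly this (a sketch, not the exact `make_request` code):

```python
import aiohttp

# Sketch of normalising the auth argument before calling ClientSession.request.
# aiohttp expects an aiohttp.BasicAuth instance; a bare (user, password) tuple
# is mishandled, which is what broke the OAuth2 token exchange.
def normalize_auth(auth):
    if isinstance(auth, tuple):
        return aiohttp.BasicAuth(*auth)  # ("user", "pass") -> BasicAuth("user", "pass")
    return auth  # already BasicAuth or None

async def make_request(method: str, url: str, auth=None, **kwargs):
    async with aiohttp.ClientSession() as session:
        async with session.request(method, url, auth=normalize_auth(auth), **kwargs) as resp:
            return resp.status, await resp.text()
```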

This fix was generated by Seer in Sentry, triggered by Bently. 👁️ Run
ID: 185767

Not quite right? [Click here to continue debugging with
Seer.](https://sentry.io/organizations/significant-gravitas/issues/6709824432/?seerDrawer=true)

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan

#### For configuration changes:
- [x] `.env.example` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)

<details>
  <summary>Examples of configuration changes</summary>

  - Changing ports
  - Adding new services that need to communicate with each other
  - Secrets or environment variable changes
  - New or infrastructure changes such as databases
</details>

Co-authored-by: seer-by-sentry[bot] <157164994+seer-by-sentry[bot]@users.noreply.github.com>
2025-07-01 13:09:54 +00:00
Bently
22d58367ec dx(platform): Add initial setup scripts for linux and windows (#9912)
This PR adds two setup scripts that set up AutoGPT fully: a Windows .bat
script and a Linux/Mac .sh script. For now they are placed in a new
folder called "Installer".

### Note: the installers are supposed to be run outside of the AutoGPT
repo folder (e.g. Desktop or a new empty folder), because they clone the
repo into the current directory.

I had to add ``cross-env`` via ``pnpm add cross-env``, as on Windows the
env is set differently in the ``package.json`` build section:
``"build": "cross-env pnpm run generate:api-client &&
SKIP_STORYBOOK_TESTS=true next build"``

Once fully set up, I plan to make it so the platform can be installed
with the following commands:

Linux/Mac
```bash
curl -fsSL https://setup.agpt.co/install.sh -o install.sh && bash install.sh
```

Windows cmd/powershell
```bash
powershell -c "iwr https://setup.agpt.co/install.bat -o install.bat; ./install.bat"
```

Currently the commands above don't work, but they will later on!

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] I have tested the linux ``install.sh`` on a ubuntu system and it
setup the platform fully.
- [x] I have tested the windows ``install.bat`` on my windows system and
it setup the platform fully.
- [x] I have tested on both OS's and checked with missing prerequisites
to see if it shows the errors and it does
2025-07-01 13:09:38 +00:00
dependabot[bot]
d076d0175f chore(frontend/deps-dev): Bump the development-dependencies group in /autogpt_platform/frontend with 9 updates (#10277)
Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore <dependency name> major version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's major version (unless you unignore this specific
dependency's major version or upgrade to it yourself)
- `@dependabot ignore <dependency name> minor version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's minor version (unless you unignore this specific
dependency's minor version or upgrade to it yourself)
- `@dependabot ignore <dependency name>` will close this group update PR
and stop Dependabot creating any more for the specific dependency
(unless you unignore this specific dependency or upgrade to it yourself)
- `@dependabot unignore <dependency name>` will remove all of the ignore
conditions of the specified dependency
- `@dependabot unignore <dependency name> <ignore condition>` will
remove the ignore condition of the specified dependency and ignore
conditions


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Ubbe <hi@ubbe.dev>
2025-07-01 06:30:05 +00:00
Ubbe
a33d58dd33 chore(frontend): add generated files/queries to Git (#10281)
## Changes 🏗️

We want to make running the AutoGPT Front-end as easy as possible. For
that, you should be able to run it with the least amount of commands.

We recently added generated queries and types on the Front-end from the
Back-end OpenAPI schema, to make development easier and catch bugs
earlier. However, with the current setup, developers are forced to run
`pnpm generate:api-all` with the Back-end running, which is annoying.

After this PR, the Front-end can be rerun with just `pnpm i & pnpm dev`.

## Checklist 📋

### For code changes

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run the Front-end with just `pnpm dev` and it works

---------

Co-authored-by: Abhimanyu Yadav <122007096+Abhi1992002@users.noreply.github.com>
2025-07-01 06:01:05 +00:00
Ubbe
254bb6236f fix(frontend): use NEXT_PUBLIC_AGPT_SERVER_URL on proxy route (#10280)
### Changes 🏗️

A new undocumented env var, `NEXT_PUBLIC_AGPT_SERVER_BASE_URL`, was
added to the proxy route for it to work with the new `react-query`
mutator.

I removed it and used the existing `NEXT_PUBLIC_AGPT_SERVER_URL`, so we
have fewer environment variables to manage ( _and this one is already
added to all environments_ ).

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run the server locally
  - [x] All pages ( library, marketplace, builder, settings ) work
2025-07-01 05:14:18 +00:00
Zamil Majdy
198b3d9f45 fix(backend): Avoid swallowing exception on graph execution failure (#10260)
Graph execution that fails due to interruption or unknown error should
be enqueued back to the queue. However, swallowing the error ends up not
marking the execution as a failure.

### Changes 🏗️

* Avoid keep updating the graph execution status on each node execution
step.
* Added a guard rail to avoid completing graph execution on
non-completed execution status.
* Avoid acknowledging messages from the queue if the graph execution is
not yet completed.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Run graph execution, kill the process, re-run the process

---------

Co-authored-by: Swifty <craigswift13@gmail.com>
2025-06-30 21:09:53 +00:00
Zamil Majdy
9a6ae90d12 fix(backend): Convert pyclamd to aioclamd for anti-virus scan concurrency improvement (#10258)
Currently, we are using PyClamd to run a file anti-virus scan for all
the files uploaded into the platform. We split the file into small
chunks and serially check the chunks for the virus scan. The socket is
not thread-safe, and we need to create multiple sockets across many
threads to leverage concurrency. To make this step concurrent and keep
it fully async, we need to migrate PyClamd to aioclamd.

### Changes 🏗️

Convert pyclamd to aioclamd, leverage chunk parallelism scan with a
semaphore limiting the concurrency limit.
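
Roughly, semaphore-limited concurrent chunk scanning with `aioclamd` looks like the sketch below (chunk size, concurrency limit, and result handling are assumptions for illustration, not the exact platform code):

```python
import asyncio
from io import BytesIO

from aioclamd import ClamdAsyncClient

CHUNK_SIZE = 1024 * 1024   # illustrative chunk size
MAX_CONCURRENCY = 5        # illustrative concurrency limit

async def scan_file(data: bytes, host: str = "localhost", port: int = 3310) -> bool:
    """Return True if any chunk is flagged by ClamAV. Sketch only."""
    clamd = ClamdAsyncClient(host, port)
    semaphore = asyncio.Semaphore(MAX_CONCURRENCY)

    async def scan_chunk(chunk: bytes):
        async with semaphore:  # cap the number of concurrent INSTREAM scans
            return await clamd.instream(BytesIO(chunk))

    chunks = [data[i : i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]
    results = await asyncio.gather(*(scan_chunk(c) for c in chunks))
    # aioclamd reports a match as {'stream': ('FOUND', '<signature>')}
    return any(r and r.get("stream", (None,))[0] == "FOUND" for r in results)
```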

#### Side Note
Shout-out to @tedyu for raising this improvement idea.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Execute file upload into the platform
2025-06-30 21:09:30 +00:00
Zamil Majdy
89a5ba69e5 fix(blocks): Fix boolean/toggle block input with false/disabled value 2025-06-30 14:06:29 -07:00
Ubbe
b32ac898db fix(frontend): migrate to NEXT_PUBLIC_FRONTEND_BASE_URL (#10270)
## Changes 🏗️

We need to rename `FRONTEND_BASE_URL` to `NEXT_PUBLIC_FRONTEND_BASE_URL`,
given it is needed by the new API client on the Front-end to make requests.
The `NEXT_PUBLIC` prefix is important so that it is available on the
client.

## Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run the app locally
  - [x] The library and other pages work
2025-06-30 16:42:31 +00:00
Abhimanyu Yadav
4f6e66447f fix(frontend): fix custom mutator of orval (#10269)
This pull request includes updates to the environment configuration and
API mutator logic in the `autogpt_platform/frontend` directory. The
changes aim to improve flexibility by introducing dynamic base URLs
through environment variables.

Environment configuration updates:

*
[`autogpt_platform/frontend/.env.example`](diffhunk://#diff-72012a00359825421736dc064be74187011cb5b0462bea1ed3a3c5ca80bb3117R2):
Added `NEXT_PUBLIC_FRONTEND_BASE_URL` to define the base URL for the
frontend dynamically.

API mutator logic updates:

*
[`autogpt_platform/frontend/src/app/api/mutators/custom-mutator.ts`](diffhunk://#diff-28c5af33c7bd0ecddc1793aa6a27bfd5b4f979b62c29990538aceea3320d8be9L1-R1):
Updated `BASE_URL` to use the `NEXT_PUBLIC_FRONTEND_BASE_URL`
environment variable, enabling dynamic configuration of the API proxy
URL.

### Checklist
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
   - [x] Tested manually and everything is working perfectly
2025-06-30 14:19:36 +00:00
Abhimanyu Yadav
fae927e2a7 feat(docs): update README.md to show how new data fetching works (#10268)
This PR demonstrates how the new data fetching strategy works, so other
developers don’t get confused.

### Changes
- Updated `README.md` to explain the new data fetching strategy.

### Checklist 📋

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Nothing is breaking, everything works great
2025-06-30 12:50:10 +00:00
Abhimanyu Yadav
6ef8119708 feat(frontend): use the new set up on library data fetching client (#10266)
### Changes

- Restructure library components.
- Divide the component into two parts: one for rendering and one for
hooks.
- Add a `useInfiniteParams` inside the `orval` config to use `page` as
the pagination parameter.

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
    - [x] Manually tested everything and everything works fine
2025-06-30 12:07:21 +00:00
Ubbe
a0a7129081 fix(frontend): design system feedback (#10253)
## Changes 🏗️

I had a catch-up yesterday with Olivia, we agreed to implement these
fixes on our 👶🏽 component library

### 1. Update button loading states

<img width="600" alt="Screenshot 2025-06-27 at 15 13 12"
src="https://github.com/user-attachments/assets/a9ec8d0b-5f2c-4675-ae38-41ce81a3d699"
/>

When `loading`, all buttons will have a grey background and white text,
except if it is the `ghost` variant.

### 2. Update new border radius tokens

<img width="300" alt="Screenshot 2025-06-27 at 15 15 46"
src="https://github.com/user-attachments/assets/9cc7ea52-420c-4d61-b682-0cffe1843ad8"
/>

Updated the `border-radius` scale to the one in Figma.
[Reference](https://www.figma.com/design/nO9NFynNuicLtkiwvOxrbz/AutoGPT-Design-System?node-id=634-8255&t=hGgDUzLoLdSqpJIe-1).

### 3. Add `secondary` link variant

<img width="319" alt="Screenshot 2025-06-27 at 15 13 02"
src="https://github.com/user-attachments/assets/dc307d32-2f35-4110-bc7e-0ef6dd3d78e3"
/>

We have two types of links: `primary` ( _without underline, but it shows
on hover_ ) and `secondary` ( _with underline_ ).

## Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run Storybook locally, stories look good

---------

Co-authored-by: Abhimanyu Yadav <122007096+Abhi1992002@users.noreply.github.com>
2025-06-30 09:56:55 +00:00
Reinier van der Leer
f3202fa776 feat(platform/builder): Hide action buttons on triggered graphs (#10218)
- Resolves #10217


https://github.com/user-attachments/assets/26a402f5-6f43-453b-8c83-481380bde853

### Changes 🏗️

Frontend:
- Show message instead of action buttons ("Run" etc) when graph has
webhook node(s)
- Fix check for webhook nodes used in `BlocksControl` and `FlowEditor`
- Clean up `PrimaryActionBar` implementation
  - Add `accent` variant to `ui/button:Button`

API:
- Add `GET /library/agents/by-graph/{graph_id}` endpoint

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - Go to Builder
  - Add a trigger block
  - [x] -> action buttons disappear; message shows in their place
  - Save the graph
  - Click the "Agent Library" link in the message
- [x] -> app navigates to `/library/agents/[id]` for the newly created
agent
2025-06-30 08:33:33 +00:00
Abhimanyu Yadav
b5c7f381c1 feat(platform): centralize api calls in nextjs for token handling (#10222)
This PR helps to send all the React query requests through a Next.js
server proxy. It works something like this: when a user sends a request,
our custom mutator sends a request to the proxy server, where we add the
auth token to the header and send it to the backend again. 🌐

Users can't send a client-side request directly to the backend server
because their browser doesn't have access to auth tokens, so they need to
go via the Next.js server. 🚀

### Changes 🏗️

- Change the position of the generated client, mutator, and transfer
inside `/src/app/api`
- Update the mutator to send the request to the proxy server 
- Add a proxy server at `/api/proxy`, which handles the request using
`makeAuthenticatedRequest` and `makeAuthenticatedFileUpload` helpers and
sends the request to the backend
- Remove `getSupabaseClient`, because we do not have access to the auth
token on client side, hence no need 🔑
- Update Orval configs to generate the client at the new position 
- Added new backend updates to the auto-generated client.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] The setting page is using React Query and is working fine.
  - [x] The mutator is sending requests to the proxy server correctly.
  - [x] The proxy server is handling requests correctly.
- [x] The response handling is correct in both the proxy server and the
custom mutator.
2025-06-30 08:31:08 +00:00
Ubbe
2dd366172e feat(frontend): add dialog component (#10254)
## Changes 🏗️

### Overview

Introduces a new responsive `<Dialog />` component that automatically
adapts to screen size, providing optimal UX across devices.

<img width="800" alt="Screenshot 2025-06-27 at 16 00 01"
src="https://github.com/user-attachments/assets/d0c53b30-488f-4102-8100-c9318168d65b"
/>

<img width="300" alt="Screenshot 2025-06-27 at 16 00 12"
src="https://github.com/user-attachments/assets/f2105708-97d9-4a94-8e26-3c2d582ea8cd"
/>

### Key Features

#### 📱 **Responsive Behavior**
- **Desktop**: Modal dialog with overlay
- **Mobile**: Bottom drawer [Vaul](https://vaul.emilkowal.ski/) with
**swipe-to-dismiss** functionality

#### 🎯 **Multiple Interaction Methods**
- `ESC` key to close (both desktop & mobile)
- Click outside to dismiss
- Swipe down to dismiss (mobile drawer)
- Close button (X)

####  Why I did not use `shadcn/dialog` in this case as a base

While we already use the raw `shadcn/dialog` on the platform, it's
designed as a desktop-only solution and is not really
responsive-friendly. It lacks 📱 mobile-optimisation patterns like
_bottom drawers_, _swipe-to-dismiss gestures_ ( the new implementation
has it via [Vaul](https://vaul.emilkowal.ski/) ), and automatic
breakpoint adaptation according to screen size.

#### 🧩 **Compound Component Pattern**
```tsx
<Dialog title="Example">
  <Dialog.Trigger>
    <Button>Open Dialog</Button>
  </Dialog.Trigger>
  <Dialog.Content>
    Content goes here
  </Dialog.Content>
</Dialog>
```

#### ⚙️ **Flexible Control**
- **Uncontrolled**: Self-managed state via triggers
- **Controlled**: External state management
- **Force open**: rare but might be needed
- **Custom styling**: if needed

## Checklist 📋

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] **Desktop Modal**: Opens/closes via trigger, ESC key, click
outside, close button
- [x] **Mobile Drawer**: Automatically switches at `lg` breakpoint,
swipe-to-dismiss works
- [x] **Controlled Mode**: External state management functions correctly
  - [x] **Force Open**: Dialog stays open for preview purposes
  - [x] **Custom Styling**: CSS-in-JS overrides work as expected
- [x] **Footer Component**: Action buttons render and function properly
  - [x] **No Title Mode**: Dialog works without title prop
- [x] **Accessibility**: Tab navigation, screen reader announcements,
ARIA compliance
- [x] **Responsive Breakpoints**: Component switches modes at correct
screen sizes
  - [x] **Storybook**: All stories render and function correctly

---------

Co-authored-by: Abhimanyu Yadav <122007096+Abhi1992002@users.noreply.github.com>
2025-06-27 17:08:03 +00:00
Zamil Majdy
4d0db27d5e feat(block): Improve CreateListBlock to support batching based on token count (#10257)
CreateListBlock can only batch lists based on the size limit, but
sometimes we need the size to be dynamically adjusted based on the token
count.

### Changes 🏗️

Improve CreateListBlock to support batching based on token count

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Test CreateListBlock
2025-06-27 16:56:17 +00:00
Reinier van der Leer
5421ccf86a feat(platform/library): Scheduling UX (#10246)
Complete the implementation of the Agent Run Scheduling UX in the
Library.

Demo:


https://github.com/user-attachments/assets/701adc63-452c-4d37-aeea-51788b2774f2

### Changes 🏗️

Frontend:
- Add "Schedule" button + dialog + logic to `AgentRunDraftView`
  - Update corresponding logic on `AgentRunsPage`
  - Add schedule name field to `CronSchedulerDialog`
- Amend Builder components `useAgentGraph`, `FlowEditor`,
`RunnerUIWrapper` to also handle schedule name input
    - Split `CronScheduler` into `CronScheduler`+`CronSchedulerDialog`
- Make `AgentScheduleDetailsView` more fully functional
  - Add schedule description to info box
  - Add "Delete schedule" button
- Update schedule create/select/delete logic in `AgentRunsPage`
- Improve schedule UX in `AgentRunsSelectorList`
  - Switch tabs automatically when a run or schedule is selected
  - Remove now-redundant schedule filters
- Refactor `@/lib/monitor/cronExpressionManager` into
`@/lib/cron-expression-utils`

Backend + API:
- Add name and credentials to graph execution schedule job params
- Update schedule API
  - `POST /schedules` -> `POST /graphs/{graph_id}/schedules`
  - Add `GET /graphs/{graph_id}/schedules`
  - Add not found error handling to `DELETE /schedules/{schedule_id}`
  - Minor refactoring

Backend:
- Fix "`GraphModel`->`NodeModel` is not fully defined" error in
scheduler
- Add support for all exceptions defined in `backend.util.exceptions` to
RPC logic in `backend.util.service`
- Fix inconsistent log prefixing in `backend.executor.scheduler`

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- Create a simple agent with inputs and blocks that require credentials;
go to this agent in the Library
- Fill out the inputs and click "Schedule"; make it run every minute
(for testing purposes)
  - [x] -> newly created schedule appears in the list
  - [x] -> scheduled runs are successful
  - Click "Delete schedule"
  - [x] -> schedule no longer in list
- [x] -> on deleting the last schedule, view switches back to the Runs
list
  - [x] -> no new runs occur from the deleted schedule
2025-06-27 15:31:44 +00:00
Zamil Majdy
c4056cbae9 feat(block): Introduce context-window aware prompt compaction for LLM & SmartDecision blocks (#10252)
Calling the LLM using the current block can sometimes break when the
prompt exceeds the context window.
A prompt compaction algorithm is applied (enabled by default) to make
sure the sent prompt stays within the context-window limit.


### Changes 🏗️

````
Heuristics
--------
* Prefer shrinking the content rather than truncating the conversation.
* If the conversation content is compacted and it's still not enough, then reduce the conversation list.
* The rest of the implementation is adjusted to minimize the LLM call breaking.

Strategy
--------
1. **Token-aware truncation** – progressively halve a per-message cap
   (`start_cap`, `start_cap/2`, … `floor_cap`) and apply it to the
   *content* of every message except the first and last.  Tool shells
   are included: we keep the envelope but shorten huge payloads.
2. **Middle-out deletion** – if still over the limit, delete the whole
   messages working outward from the centre, **skipping** any message
   that contains ``tool_calls`` or has ``role == "tool"``.
3. **Last-chance trim** – if still too big, truncate the *first* and
   *last* message bodies down to `floor_cap` tokens.
4. If the prompt is *still* too large:
     • raise ``ValueError``      when ``lossy_ok == False`` (default)
     • return the partially-trimmed prompt when ``lossy_ok == True``
````
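
As a toy illustration of the token-aware truncation step (step 1), progressively halving a per-message cap could look like the sketch below; the real block counts model tokens and preserves tool-call envelopes, whereas here word counts stand in for tokens:

```python
# Toy sketch of step 1 (token-aware truncation): progressively halve a per-message
# cap until the whole conversation fits the budget.
def count_tokens(text: str) -> int:
    return len(text.split())  # words stand in for tokens (assumption)

def truncate(text: str, cap: int) -> str:
    words = text.split()
    return " ".join(words[:cap]) if len(words) > cap else text

def compact_prompt(messages: list[dict], budget: int, start_cap: int = 512, floor_cap: int = 32):
    cap = start_cap
    while cap >= floor_cap:
        trimmed = [messages[0]] + [
            {**m, "content": truncate(m["content"], cap)} for m in messages[1:-1]
        ] + [messages[-1]]
        if sum(count_tokens(m["content"]) for m in trimmed) <= budget:
            return trimmed
        cap //= 2  # halve the per-message cap and try again
    return trimmed  # still over budget; later steps would drop middle messages


msgs = [{"role": "system", "content": "be brief"},
        {"role": "user", "content": "word " * 1000},
        {"role": "assistant", "content": "ok"}]
print(sum(count_tokens(m["content"]) for m in compact_prompt(msgs, budget=200)))
```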

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
- [x] Run an SDM block in a loop until it hits 200000 tokens using the
open-ai O3 model.
2025-06-27 15:07:50 +00:00
Reinier van der Leer
c01beaf003 fix(blocks): Restore GithubReadPullRequestBlock diff output (#10256)
- Follow-up fix to #10138

AI erased a bit of functionality from the `GithubReadPullRequestBlock`
in #10138. This PR puts it back and improves on the old format.

### Changes 🏗️

- Include full diff in `changes` output of `GithubReadPullRequestBlock`

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [ ] I have tested my changes according to the test plan:
- Use the `GithubReadPullRequestBlock` with `include_pr_changes` enabled
  - [ ] -> block runs successfully
  - [ ] -> full diff included in `changes` output
2025-06-27 15:05:31 +00:00
dependabot[bot]
cf560c5d65 chore(frontend/deps-dev): Bump the development-dependencies group across 1 directory with 14 updates (#10249)
Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore <dependency name> major version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's major version (unless you unignore this specific
dependency's major version or upgrade to it yourself)
- `@dependabot ignore <dependency name> minor version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's minor version (unless you unignore this specific
dependency's minor version or upgrade to it yourself)
- `@dependabot ignore <dependency name>` will close this group update PR
and stop Dependabot creating any more for the specific dependency
(unless you unignore this specific dependency or upgrade to it yourself)
- `@dependabot unignore <dependency name>` will remove all of the ignore
conditions of the specified dependency
- `@dependabot unignore <dependency name> <ignore condition>` will
remove the ignore condition of the specified dependency and ignore
conditions


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-27 10:21:49 +00:00
Toran Bruce Richards
77e99e9739 feat(blocks): Add more Revid.ai media generation blocks (#9931)
### Why these changes are needed 🧐

Revid.ai offers several specialised, undocumented rendering flows beyond
the basic “text-to-video” endpoint our platform already supported,
allowing users to:

1. **Generate ads** from copy plus product images (30-second vertical
spots).
2. **Turn a single creative prompt** into a fully AI-generated video (no
multi-line script).
3. **Transform a screenshot into a narrated, avatar-driven clip**, ideal
for product-led demos.

Without first-class blocks for these flows, users were forced to drop
to raw HTTP nodes, losing schema validation, test mocks and credential
management.

### Changes 🏗️

Added new category to ``BlockCategory`` in ``block.py`` for ``MARKETING
= "Block that helps with marketing"``

Area | Change | Notes
-- | -- | --
ai_shortform_video_block.py | Refactored out a shared _RevidMixin (webhook + polling helpers). | Keeps DRY across new blocks.
  | Added AudioTrack.DONT_STOP_ME_ABSTRACT_FUTURE_BASS and Voice.EVA enum members. | Required by Revid sample payloads.
  | AIAdMakerVideoCreatorBlock | Implements ai-ad-generator flow; supports optional input_media_urls, target_duration, use_only_provided_media.
  | AIPromptToVideoCreatorBlock | Implements prompt-to-video flow with prompt_target_duration.
  | AIScreenshotToVideoAdBlock | Implements screenshot-to-video-ad flow (avatar narration, BG removal).
  | Added full pydantic schemas, test stubs & mock hooks for each new block. | Ensures unit tests pass and blocks appear in UI.


No existing functionality was removed; the current
`AIShortformVideoCreatorBlock` is untouched apart from enum imports.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
- [x] use the ``AI ShortForm Video Creator`` block to generate a video
and it will work
- [x] same with `` ai ad maker video creator`` block test it and it
should work
  - [x] and test ``ai screenshot to video ad`` block it should work

---------

Co-authored-by: Bently <Github@bentlybro.com>
2025-06-27 06:39:24 +00:00
Zamil Majdy
7f7c387156 fix(block): Fix broken SearchPeople block 2025-06-26 19:13:34 -07:00
Zamil Majdy
21cf263eea fix(block): Fix typo in Apollo block 2025-06-26 14:41:15 -07:00
Zamil Majdy
500952a15f fix(block): Fix typo in Apollo block 2025-06-26 14:40:03 -07:00
Zamil Majdy
b3c81fa9e2 fix(block): Fix typo in SearchPeople block 2025-06-26 14:31:57 -07:00
Zamil Majdy
59e96d4759 fix(block): Avoid Apollo enrich_info feature overriding already existing search info 2025-06-26 13:34:49 -07:00
Zamil Majdy
e01dd94e36 feat(block): Add enriching email feature for SearchPeopleBlock & introduce GetPersonDetailBlock (#10251)
<!-- Clearly explain the need for these changes: -->

### Changes 🏗️

* Add an enriching email feature toggle for SearchPeopleBlock
* Introduce GetPersonDetailBlock
* Adjust the cost of both blocks

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Execute SearchPeopleBlock & GetPersonDetailBlock
2025-06-26 20:11:43 +00:00
Zamil Majdy
2f12e8d731 feat(platform): Add Host-scoped credentials support for blocks HTTP requests (#10215)
Currently, we don't have a secure way to pass Authorization headers when
calling the `SendWebRequestBlock`.
This hinders the integration of third-party applications that do not yet
have native block support.

### Changes 🏗️

Add Host-scoped credentials support for the newly introduced
SendAuthenticatedWebRequestBlock.

<img width="1000" alt="image"
src="https://github.com/user-attachments/assets/0d3d577a-2b9b-4f0f-9377-0e00a069ba37"
/>
<img width="1000" alt="image"
src="https://github.com/user-attachments/assets/a59b9f16-c89c-453d-a628-1df0dfd60fb5"
/>

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
- [x] Uses `https://api.openai.com/v1/images/edits` through
SendWebRequestBlock by passing the api-key through host-scoped
credentials.
2025-06-26 19:09:27 +00:00
Nicholas Tindle
83dbcd11e4 docs(frontend, backend): add OAuth security boundary docs (#10202)
### Why are these changes needed?

<!-- Clearly explain the need for these changes: -->
These changes document the OAuth integration flow for CASA lvl 2
compliance, specifically addressing the requirement to "Verify
documentation and justification of all the application's trust
boundaries, components, and significant data flows." The documentation
clarifies the two distinct OAuth implementations in AutoGPT: user
authentication via Supabase SSO and API integration credentials for
third-party services.

### Changes 🏗️

<!-- Concisely describe all of the changes made in this pull request:
-->
- Created comprehensive OAuth integration flow documentation at
`/docs/content/platform/contributing/oauth-integration-flow.md`
- Documented trust boundaries between frontend (untrusted), backend API
(trusted), and external providers (semi-trusted)
- Added detailed component architecture for both frontend and backend
OAuth implementations
- Included mermaid diagrams illustrating:
  - OAuth flow sequences (initiation, authorization, token refresh)
  - System architecture showing SSO vs API integration OAuth
  - Data flow diagram
  - Security architecture layers
  - Credential lifecycle state diagram
- Documented security measures including CSRF protection, PKCE
implementation, and token management
- Clarified the distinction between Supabase SSO for user login and
custom OAuth for API integrations
- Added references to source files for up-to-date provider lists rather
than hard-coding all providers

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Created documentation file with proper markdown formatting
  - [x] Verified all file paths referenced in documentation exist
  - [x] Confirmed mermaid diagrams render correctly
- [x] Validated that the documentation accurately reflects the codebase
implementation

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-06-26 18:08:55 +00:00
Abhimanyu Yadav
f66b8f9c74 refactor(frontend): update settings pages fetching using react query (#10248)
### Changes 🏗️
- We have implemented some backend changes, so I have added a new,
updated OpenAPI specification.
- We have updated the settings and API keys page to enable us to use
React Query for fetching data.

### Checklist 📋

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
    - [x] Settings and API keys pages are working correctly
2025-06-26 15:13:20 +00:00