Compare commits


281 Commits

Author SHA1 Message Date
Zamil Majdy
3eda604608 refactor data manipulation blocks 2025-07-03 12:25:45 -07:00
Toran Bruce Richards
47ba2701c2 Merge branch 'dev' into q53n42-codex/add-list-operation-blocks 2025-07-03 20:08:57 +01:00
Zamil Majdy
24b4ab9864 feat(block): Enhance Mem0 block filtering & add more GoogleSheets blocks (#10287)
The block library was missing two key capabilities that keep coming up
in real-world agent flows:

1. **Granular Mem0 filtering.** Agents often work side-by-side for the
same user, so memories must be scoped to a specific run or agent to
avoid crosstalk.
2. **First-class Google Sheets support.** Many community projects (e.g.,
data-collection, lightweight dashboards, no-code workflows) rely on
Sheets, but we only had a brittle REST call block.

This PR adds fine-grained filters to every Mem0 retrieval block and
introduces a complete, OAuth-ready Google Sheets suite so agents can
create, read, write, format, and manage spreadsheets safely.

---

### Changes 🏗️
#### 📚 Mem0 block enhancements  
* Added `categories_filter`, `metadata_filter`, `limit_memory_to_run`,
and `limit_memory_to_agent` inputs to **SearchMemoryBlock**,
**GetAllMemoriesBlock**, and **GetLatestMemoryBlock**.
* Added identical scoping logic to **AddMemoryBlock** so newly-created
memories can be tied to run/agent IDs (see the sketch below).
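
A minimal sketch of how the scoping could be composed, assuming
hypothetical helper names; the exact Mem0 filter schema shown here is an
assumption, not the blocks' actual implementation:

```python
from typing import Any, Optional


def build_memory_filters(
    user_id: str,
    categories_filter: Optional[list[str]] = None,
    metadata_filter: Optional[dict[str, Any]] = None,
    limit_memory_to_run: bool = False,
    limit_memory_to_agent: bool = False,
    run_id: Optional[str] = None,
    agent_id: Optional[str] = None,
) -> dict[str, Any]:
    """Combine user, category, metadata, and scope filters into one query."""
    clauses: list[dict[str, Any]] = [{"user_id": user_id}]
    if categories_filter:
        clauses.append({"categories": {"in": categories_filter}})
    if metadata_filter:
        clauses.append({"metadata": metadata_filter})
    # Scoping prevents crosstalk between agents working for the same user.
    if limit_memory_to_run and run_id:
        clauses.append({"run_id": run_id})
    if limit_memory_to_agent and agent_id:
        clauses.append({"agent_id": agent_id})
    return {"AND": clauses} if len(clauses) > 1 else clauses[0]
```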

#### 📊 New Google Sheets blocks (`backend/blocks/google/sheets.py`)  
| Block | Purpose |
|-------|---------|
| `GoogleSheetsReadBlock` | Read a range |
| `GoogleSheetsWriteBlock` | Overwrite a range |
| `GoogleSheetsAppendBlock` | Append rows |
| `GoogleSheetsClearBlock` | Clear a range |
| `GoogleSheetsMetadataBlock` | Fetch spreadsheet + sheet info |
| `GoogleSheetsManageSheetBlock` | Create / delete / copy a sheet |
| `GoogleSheetsBatchOperationsBlock` | Batch update / clear |
| `GoogleSheetsFindReplaceBlock` | Find & replace text |
| `GoogleSheetsFormatBlock` | Cell formatting (bg/text colour, bold, italic, font size) |
| `GoogleSheetsCreateSpreadsheetBlock` | Spin up a new spreadsheet |

* Each block has typed input/output schemas, built-in test mocks, and is
disabled in prod unless Google OAuth is configured.
* Added helper enums (`SheetOperation`, `BatchOperationType`) and
updated **CLAUDE.md** contributor guide with a UUID-generation step.

---

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
- [x] Manual E2E run: agent writes chat summary to new spreadsheet,
reads it back, searches memory with run-scoped filter
- [x] Live Google API smoke-tests (read/write/append/clear/format) using
a disposable spreadsheet
2025-07-03 18:01:30 +00:00
Ubbe
04e90da031 fix(frontend): proxy via API route instead of server actions (#10296)
## Changes 🏗️

We noticed that on some pages (`/build` mainly) where a lot of API
calls are fired in parallel using the old `BackendAPI` (running many
agent executions), performance got worse. That is because the
`BackendAPI` was proxied via [server
actions](https://nextjs.org/docs/14/app/building-your-application/data-fetching/server-actions-and-mutations)
(to make calls to our Backend on the Next.js server).

Looks like server actions don't run in parallel, and their performance
is also subpar, given that we are not hosted on Vercel (they don't
utilise the edge middleware).

These changes cause all `BackendAPI` calls to be proxied via the Next.js
`/api/` route when executed on the browser; when executed on the server,
they bypass the proxy and directly access the API. Hopefully we gain:

- 🚀 Better Performance - API routes are faster than server actions for
this use case
- 🔧 Less Magic - Direct fetch calls instead of hidden server action
complexity
- ♻️ Code Reuse - Leveraging the existing proxy infrastructure used by
react-query
- 🎯 Cleaner Architecture - Single proxy pattern for all API calls
- 🔒 Same Security - Still uses server-side authentication with httpOnly
cookies

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] E2E tests pass
  - [x] Click through the app; there are no issues
  - [x] Agent executions are fast again in the builder
  - [x] Test file uploads
2025-07-03 15:18:47 +00:00
Zamil Majdy
d4646c249d feat(backend): implement KV data storage blocks 2025-07-03 07:54:50 -07:00
Zamil Majdy
095199bfa6 feat(backend): implement KV data storage blocks (#10294)
This PR introduces key-value storage blocks.

### Changes 🏗️

- **Database Schema**: Add AgentNodeExecutionKeyValueData table with
composite primary key (userId, key)
- **Persistence Blocks**: Create PersistInformationBlock and
RetrieveInformationBlock in persistence.py
- **Scope-based Storage**: Support for within_agent (per agent) vs
across_agents (global user) persistence
- **Key Structure**: Use a `#` delimiter for storage keys:
`agent#{graph_id}#{key}` and `global#{key}` (sketched below)
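
A minimal sketch of the key generation; the function and enum names are
illustrative, only the key format comes from this PR:

```python
from enum import Enum


class PersistenceScope(Enum):
    WITHIN_AGENT = "within_agent"    # per-agent persistence
    ACROSS_AGENTS = "across_agents"  # global per-user persistence


def storage_key(scope: PersistenceScope, key: str, graph_id: str | None = None) -> str:
    """Build the composite storage key used for (userId, key) lookups."""
    if scope is PersistenceScope.WITHIN_AGENT:
        if graph_id is None:
            raise ValueError("within_agent scope requires a graph_id")
        return f"agent#{graph_id}#{key}"
    return f"global#{key}"


# storage_key(PersistenceScope.WITHIN_AGENT, "counter", "g-123") -> "agent#g-123#counter"
# storage_key(PersistenceScope.ACROSS_AGENTS, "counter")         -> "global#counter"
```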

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run all 244 block tests - all passing 
  - [x] Test PersistInformationBlock with mock data storage
  - [x] Test RetrieveInformationBlock with mock data retrieval
  - [x] Verify scope-based key generation (within_agent vs across_agents)
  - [x] Verify database function integration through all manager classes
  - [x] Run lint and type checking - all passing 
  - [x] Verify database migration is included and valid

#### For configuration changes:
- [x] `.env.example` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)

Note: This change adds database schema and new blocks but doesn't
require environment or docker-compose changes as it uses existing
database infrastructure.

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-07-03 14:24:51 +00:00
Zamil Majdy
90fb223114 fix(frontend): Fix status chip not showing on graph with INCOMPLETE status 2025-07-03 07:25:59 -07:00
Reinier van der Leer
b1f3122243 fix(frontend): Add fallback for NEXT_PUBLIC_FRONTEND_BASE_URL to API proxy (#10299)
- Resolves #10298
- Follow-up to #10270

### Changes 🏗️

Amend two changes from #10270:

- Add fallback for `NEXT_PUBLIC_FRONTEND_BASE_URL` in custom-mutator.ts
- Revert rename of `FRONTEND_BASE_URL` to
`NEXT_PUBLIC_FRONTEND_BASE_URL` in `backend/.env.example`

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - Don't set `NEXT_PUBLIC_FRONTEND_BASE_URL`
  - Run the platform locally
  - [x] -> `/library` loads normally

#### For configuration changes:
- [x] `.env.example` is updated or already compatible with my changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
2025-07-03 12:26:50 +00:00
Zamil Majdy
f1cc2afbda feat(backend): improve stop graph execution reliability (#10293)
## Summary
- Enhanced graph execution cancellation and cleanup mechanisms
- Improved error handling and logging for graph execution lifecycle
- Added timeout handling for graph termination with proper status
updates
- Exposed a new API for stopping graph based on only graph_id or user_id
- Refactored logging metadata structure for better error tracking

## Key Changes
### Backend
- **Graph Execution Management**: Enhanced `stop_graph_execution` with
timeout-based waiting and proper status transitions (see the sketch after
this list)
- **Execution Cleanup**: Added proper cancellation waiting with timeout
handling in executor manager
- **Logging Improvements**: Centralized `LogMetadata` class and improved
error logging consistency
- **API Enhancements**: Added bulk graph execution stopping
functionality
- **Error Handling**: Better exception handling and status management
for failed/cancelled executions
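
A sketch of the timeout-based waiting, assuming an async status poller;
the names here are illustrative, not the actual executor-manager API:

```python
import asyncio

TERMINAL = {"COMPLETED", "FAILED", "TERMINATED"}


async def wait_for_termination(get_status, timeout: float = 30.0) -> str:
    """Poll until the execution reaches a terminal state, or time out."""

    async def _poll() -> str:
        while True:
            status = await get_status()
            if status in TERMINAL:
                return status
            await asyncio.sleep(0.5)

    try:
        return await asyncio.wait_for(_poll(), timeout=timeout)
    except asyncio.TimeoutError:
        # Surface the timeout so the caller can force a FAILED/TERMINATED
        # status update instead of leaving the execution stuck.
        raise TimeoutError(f"execution did not stop within {timeout}s")
```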

### Frontend
- **Status Safety**: Added null safety checks for status chips to
prevent runtime errors
- **Execution Control**: Simplified stop execution request handling

## Test Plan
- [x] Verify graph execution can be properly stopped and reaches
terminal state
- [x] Test timeout scenarios for stuck executions
- [x] Validate proper cleanup of running node executions when graph is
cancelled
- [x] Check frontend status chips handle undefined statuses gracefully
- [x] Test bulk execution stopping functionality

🤖 Generated with [Claude Code](https://claude.ai/code)

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-07-02 21:21:26 +00:00
Ubbe
f394a0eabb fix(frontend): do not swallow errors on the proxy (#10289)
## Changes 🏗️

Requests to the Backend now happen on the server, given we moved to
server-side cookies 🍪 ... however the client proxy was not exposing the
API errors to the client correctly. This aims to fix that.

## Checklist 📋

#### For code changes:

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Login to the platform
  - [x] Run agents until you encounter an error
  - [x] The error is shown on the toast
2025-07-02 14:18:17 +00:00
Ubbe
311bcc7751 fix(frontend): onboarding runtime error (#10288)
## Changes 🏗️

<img width="800" alt="Screenshot 2025-07-02 at 16 43 08"
src="https://github.com/user-attachments/assets/d7cd0dd7-e671-4c5d-8ed9-6d8f56371ff5"
/>

During logout, the user state gets cleared but the onboarding provider
continues to run and tries to access onboarding.completedSteps on a null
object, causing a runtime error 😬

This mostly happens because onboarding is broken on local and dev (the
onboarding agents don't work), so I manually skip it after creating an
account by navigating to `/marketplace`. That makes me think the
onboarding provider still thinks I need to onboard, which is presumably
why this error occurs.

## Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Create a new user
- [x] Instead of completing onboarding, navigate to `/marketplace` via
browser URL
- [x] Log out, then log in/out a few times; no runtime errors appear
2025-07-02 14:13:22 +00:00
SOUHAILA SERBOUT
e2bd727798 feat: optimize frontend CI with shared setup job (#10286)
# Change details
- **Before**: Each job separately installs dependencies (~4 redundant
installations)
  ### Massive Redundancy in Setup Steps
  Each job repeats the same 4 steps:
  - Checkout repository
  - Set up Node.js (version 21)
  - Enable corepack
  - Install dependencies (`pnpm install --frozen-lockfile`)

  This happens 4+ times across different jobs:
  - lint job
  - type-check job
  - chromatic job
  - test job (runs 2x due to matrix strategy)

  ### No Dependency Caching
  No caching strategy: packages are downloaded fresh every time.
  - Every workflow run downloads all packages from scratch
  - No benefit from previous runs, even with identical `pnpm-lock.yaml`


- **After**: Dependencies installed once in setup job, cached and reused

This optimization maintains all existing CI functionality while
significantly improving pipeline efficiency.
An example workflow run was dispatched here:
https://github.com/souhailaS/AutoGPT/actions/workflows/platform-frontend-ci.yml


## Additional Context 
We are a team of researchers from University of Zurich
(https://www.ifi.uzh.ch/en/zest.html) currently **working on automating
energy optimizations in GitHub Actions workflows**. This optimization
maintains full functionality while significantly reducing computational
overhead and energy consumption.

souhaila.serbout@uzh.ch
2025-07-02 10:02:19 +00:00
seer-by-sentry[bot]
47f503f223 feat(backend): Support aiohttp.BasicAuth in make_request (#10283)
Fixes https://github.com/Significant-Gravitas/AutoGPT/issues/10284

### Changes 🏗️

- Allows passing an `aiohttp.BasicAuth` object directly to the `auth`
parameter of the `make_request` function.
- Converts tuple-based auth credentials to `aiohttp.BasicAuth` objects
before making the request.

Fixes
[AUTOGPT-SERVER-4AX](https://sentry.io/organizations/significant-gravitas/issues/6709824432/).
The issue was that aiohttp's ClientSession.request received a plain
tuple for `auth` instead of an `aiohttp.BasicAuth` object, causing an
OAuth2 token exchange failure.
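
A minimal sketch of the described conversion (the real `make_request`
signature likely differs; this only illustrates the normalization step):

```python
import aiohttp


async def make_request(method: str, url: str, auth=None, **kwargs):
    # Normalize tuple credentials into aiohttp.BasicAuth, since
    # ClientSession.request does not accept a plain (login, password) tuple.
    if isinstance(auth, tuple):
        auth = aiohttp.BasicAuth(*auth)
    async with aiohttp.ClientSession() as session:
        async with session.request(method, url, auth=auth, **kwargs) as resp:
            return resp.status, await resp.text()
```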

This fix was generated by Seer in Sentry, triggered by Bently. 👁️ Run
ID: 185767

Not quite right? [Click here to continue debugging with
Seer.](https://sentry.io/organizations/significant-gravitas/issues/6709824432/?seerDrawer=true)

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan

#### For configuration changes:
- [x] `.env.example` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)

<details>
  <summary>Examples of configuration changes</summary>

  - Changing ports
  - Adding new services that need to communicate with each other
  - Secrets or environment variable changes
  - New or infrastructure changes such as databases
</details>

Co-authored-by: seer-by-sentry[bot] <157164994+seer-by-sentry[bot]@users.noreply.github.com>
2025-07-01 13:09:54 +00:00
Bently
22d58367ec dx(platform): Add initial setup scripts for linux and windows (#9912)
This PR adds two setup scripts that set up AutoGPT fully: a Windows .bat
and a Linux/Mac .sh script. For now they are placed in a new folder
called "Installer".

### Note: the installers are supposed to be run outside of the autogpt
repo folder (e.g. Desktop or a new empty folder), because they clone the
repo into the current directory.

I have had to add ``cross-env`` via ``pnpm add cross-env``, as on
Windows the env is set differently in the ``package.json`` build section:
``"build": "cross-env pnpm run generate:api-client &&
SKIP_STORYBOOK_TESTS=true next build"``

Once fully set up, I plan to make the installers runnable with the
following commands:

Linux/Mac
```bash
curl -fsSL https://setup.agpt.co/install.sh -o install.sh && bash install.sh
```

Windows cmd/powershell
```bash
powershell -c "iwr https://setup.agpt.co/install.bat -o install.bat; ./install.bat"
```

Currently the commands above don't work, but they will later on!

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] I have tested the linux ``install.sh`` on an Ubuntu system and it
set up the platform fully.
- [x] I have tested the windows ``install.bat`` on my Windows system and
it set up the platform fully.
- [x] I have tested on both OSes with missing prerequisites to check
that the errors show, and they do
2025-07-01 13:09:38 +00:00
dependabot[bot]
d076d0175f chore(frontend/deps-dev): Bump the development-dependencies group in /autogpt_platform/frontend with 9 updates (#10277)
Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore <dependency name> major version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's major version (unless you unignore this specific
dependency's major version or upgrade to it yourself)
- `@dependabot ignore <dependency name> minor version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's minor version (unless you unignore this specific
dependency's minor version or upgrade to it yourself)
- `@dependabot ignore <dependency name>` will close this group update PR
and stop Dependabot creating any more for the specific dependency
(unless you unignore this specific dependency or upgrade to it yourself)
- `@dependabot unignore <dependency name>` will remove all of the ignore
conditions of the specified dependency
- `@dependabot unignore <dependency name> <ignore condition>` will
remove the ignore condition of the specified dependency and ignore
conditions


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Ubbe <hi@ubbe.dev>
2025-07-01 06:30:05 +00:00
Ubbe
a33d58dd33 chore(frontend): add generated files/queries to Git (#10281)
## Changes 🏗️

We want to make running the AutoGPT Front-end as easy as possible. For
that, you should be able to run it with as few commands as possible.

We recently added generated queries and types on the Front-end from the
Back-end OpenAPI schema, to make development easier and catch bugs
earlier. However, with the current setup, developers are forced to run
`pnpm generate:api-all` with the Back-end running, which is annoying.

After this PR, the Front-end can be run with just `pnpm i && pnpm dev`.

## Checklist 📋

### For code changes

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run the Front-end with just `pnpm dev` and it works

---------

Co-authored-by: Abhimanyu Yadav <122007096+Abhi1992002@users.noreply.github.com>
2025-07-01 06:01:05 +00:00
Ubbe
254bb6236f fix(frontend): use NEXT_PUBLIC_AGPT_SERVER_URL on proxy route (#10280)
### Changes 🏗️

A new undocumented env var, `NEXT_PUBLIC_AGPT_SERVER_BASE_URL`, was
added to the proxy route for it to work with the new `react-query`
mutator.

I removed it and used the existing `NEXT_PUBLIC_AGPT_SERVER_URL`, so we
have fewer environment variables to manage ( _and this one is already
added to all environments_ ).

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run the server locally
  - [x] All pages ( library, marketplace, builder, settings ) work
2025-07-01 05:14:18 +00:00
Zamil Majdy
198b3d9f45 fix(backend): Avoid swallowing exception on graph execution failure (#10260)
A graph execution that fails due to interruption or an unknown error
should be enqueued back onto the queue. However, swallowing the error
meant the execution was never marked as a failure.

### Changes 🏗️

* Stop updating the graph execution status on each node execution
step.
* Added a guard rail to avoid completing graph execution on
non-completed execution status.
* Avoid acknowledging messages from the queue if the graph execution is
not yet completed (sketched below).
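
A sketch of that guard under an assumed message-queue interface
(`message.ack()` / `message.requeue()` are illustrative, not the
platform's actual queue API):

```python
TERMINAL_STATUSES = {"COMPLETED", "FAILED", "TERMINATED"}


async def handle_execution_message(message, run_graph, get_status):
    graph_exec_id = message.body["graph_exec_id"]
    try:
        await run_graph(graph_exec_id)
    finally:
        status = await get_status(graph_exec_id)
        if status in TERMINAL_STATUSES:
            message.ack()      # safe to drop: execution reached a terminal state
        else:
            message.requeue()  # interrupted or unknown: let another worker retry
```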

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Run graph execution, kill the process, re-run the process

---------

Co-authored-by: Swifty <craigswift13@gmail.com>
2025-06-30 21:09:53 +00:00
Zamil Majdy
9a6ae90d12 fix(backend): Convert pyclamd to aioclamd for anti-virus scan concurrency improvement (#10258)
Currently, we are using PyClamd to run a file anti-virus scan for all
the files uploaded into the platform. We split the file into small
chunks and serially check the chunks for the virus scan. The socket is
not thread-safe, and we need to create multiple sockets across many
threads to leverage concurrency. To make this step concurrent and keep
it fully async, we need to migrate PyClamd to aioclamd.

### Changes 🏗️

Convert pyclamd to aioclamd, and scan chunks in parallel with a
semaphore limiting the concurrency.
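
A sketch of the semaphore-limited chunk scanning; the chunk size,
concurrency limit, and result parsing are assumptions for illustration:

```python
import asyncio
from io import BytesIO

from aioclamd import ClamdAsyncClient

CHUNK_SIZE = 25 * 1024 * 1024  # stay under clamd's StreamMaxLength
MAX_CONCURRENT_SCANS = 5


async def scan_file(data: bytes, host: str = "localhost", port: int = 3310) -> bool:
    """Return True if every chunk scans clean, False if a signature is found."""
    clamd = ClamdAsyncClient(host, port)
    semaphore = asyncio.Semaphore(MAX_CONCURRENT_SCANS)

    async def scan_chunk(chunk: bytes) -> bool:
        async with semaphore:  # cap simultaneous clamd connections
            result = await clamd.instream(BytesIO(chunk))
            status, _signature = result["stream"]
            return status == "OK"

    chunks = [data[i : i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]
    return all(await asyncio.gather(*(scan_chunk(c) for c in chunks)))
```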

#### Side Note
Shout-out to @tedyu for raising this improvement idea.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Execute file upload into the platform
2025-06-30 21:09:30 +00:00
Zamil Majdy
89a5ba69e5 fix(blocks): Fix boolean/toggle block input with false/disabled value 2025-06-30 14:06:29 -07:00
Ubbe
b32ac898db fix(frontend): migrate to NEXT_PUBLIC_FRONTEND_BASE_URL (#10270)
## Changes 🏗️

We need to rename `FRONTEND_BASE_URL` → `NEXT_PUBLIC_FRONTEND_BASE_URL`,
since it is needed by the new API client on the Front-end to make
requests. The `NEXT_PUBLIC` prefix is important so that the variable is
available on the client.

## Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run the app locally
  - [x] The library and other pages work
2025-06-30 16:42:31 +00:00
Abhimanyu Yadav
4f6e66447f fix(frontend): fix custom mutator of orval (#10269)
This pull request includes updates to the environment configuration and
API mutator logic in the `autogpt_platform/frontend` directory. The
changes aim to improve flexibility by introducing dynamic base URLs
through environment variables.

Environment configuration updates:

*
[`autogpt_platform/frontend/.env.example`](diffhunk://#diff-72012a00359825421736dc064be74187011cb5b0462bea1ed3a3c5ca80bb3117R2):
Added `NEXT_PUBLIC_FRONTEND_BASE_URL` to define the base URL for the
frontend dynamically.

API mutator logic updates:

*
[`autogpt_platform/frontend/src/app/api/mutators/custom-mutator.ts`](diffhunk://#diff-28c5af33c7bd0ecddc1793aa6a27bfd5b4f979b62c29990538aceea3320d8be9L1-R1):
Updated `BASE_URL` to use the `NEXT_PUBLIC_FRONTEND_BASE_URL`
environment variable, enabling dynamic configuration of the API proxy
URL.

### Checklist
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
   - [x] Tested manually and everything is working perfectly
2025-06-30 14:19:36 +00:00
Abhimanyu Yadav
fae927e2a7 feat(docs): update README.md to show how new data fetching works (#10268)
This PR demonstrates how the new data fetching strategy works, so other
developers don’t get confused.

### Changes
- Updated `README.md` to explain the new data fetching strategy.

### Checklist 📋

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Nothing is breaking, everything works great
2025-06-30 12:50:10 +00:00
Abhimanyu Yadav
6ef8119708 feat(frontend): use the new set up on library data fetching client (#10266)
### Changes

- Restructure library components.
- Divide the component into two parts: one for rendering and one for
hooks.
- Add a `useInfiniteParams` inside the `orval` config to use `page` as
the pagination parameter.

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
    - [x] Manually tested everything and everything works fine
2025-06-30 12:07:21 +00:00
Ubbe
a0a7129081 fix(frontend): design system feedback (#10253)
## Changes 🏗️

I had a catch-up yesterday with Olivia; we agreed to implement these
fixes on our 👶🏽 component library

### 1. Update button loading states

<img width="600" alt="Screenshot 2025-06-27 at 15 13 12"
src="https://github.com/user-attachments/assets/a9ec8d0b-5f2c-4675-ae38-41ce81a3d699"
/>

When `loading`, all buttons will have a grey background and white text,
except if it is the `ghost` variant.

### 2. Update new border radius tokens

<img width="300" alt="Screenshot 2025-06-27 at 15 15 46"
src="https://github.com/user-attachments/assets/9cc7ea52-420c-4d61-b682-0cffe1843ad8"
/>

Updated the `border-radius` scale to the one in Figma.
[Reference](https://www.figma.com/design/nO9NFynNuicLtkiwvOxrbz/AutoGPT-Design-System?node-id=634-8255&t=hGgDUzLoLdSqpJIe-1).

### 3. Add `secondary` link variant

<img width="319" alt="Screenshot 2025-06-27 at 15 13 02"
src="https://github.com/user-attachments/assets/dc307d32-2f35-4110-bc7e-0ef6dd3d78e3"
/>

We have 2 types of links, `primary` ( _without underline but shows on
hover_ ) and `secondary` ( _with underline_ )

## Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run Storybook locally, stories look good

---------

Co-authored-by: Abhimanyu Yadav <122007096+Abhi1992002@users.noreply.github.com>
2025-06-30 09:56:55 +00:00
Reinier van der Leer
f3202fa776 feat(platform/builder): Hide action buttons on triggered graphs (#10218)
- Resolves #10217


https://github.com/user-attachments/assets/26a402f5-6f43-453b-8c83-481380bde853

### Changes 🏗️

Frontend:
- Show message instead of action buttons ("Run" etc) when graph has
webhook node(s)
- Fix check for webhook nodes used in `BlocksControl` and `FlowEditor`
- Clean up `PrimaryActionBar` implementation
  - Add `accent` variant to `ui/button:Button`

API:
- Add `GET /library/agents/by-graph/{graph_id}` endpoint

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - Go to Builder
  - Add a trigger block
  - [x] -> action buttons disappear; message shows in their place
  - Save the graph
  - Click the "Agent Library" link in the message
- [x] -> app navigates to `/library/agents/[id]` for the newly created
agent
2025-06-30 08:33:33 +00:00
Abhimanyu Yadav
b5c7f381c1 feat(platform): centralize api calls in nextjs for token handling (#10222)
This PR helps to send all the React query requests through a Next.js
server proxy. It works something like this: when a user sends a request,
our custom mutator sends a request to the proxy server, where we add the
auth token to the header and send it to the backend again. 🌐

Users can't send a client-side request directly to the backend server
because their browser doesn't have access to auth tokens, so requests
need to go via the Next.js server. 🚀

### Changes 🏗️

- Change the position of the generated client, mutator, and transfer
inside `/src/app/api`
- Update the mutator to send the request to the proxy server 
- Add a proxy server at `/api/proxy`, which handles the request using
`makeAuthenticatedRequest` and `makeAuthenticatedFileUpload` helpers and
sends the request to the backend
- Remove `getSupabaseClient`, because we do not have access to the auth
token on client side, hence no need 🔑
- Update Orval configs to generate the client at the new position 
- Added new backend updates to the auto-generated client.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] The setting page is using React Query and is working fine.
  - [x] The mutator is sending requests to the proxy server correctly.
  - [x] The proxy server is handling requests correctly.
- [x] The response handling is correct in both the proxy server and the
custom mutator.
2025-06-30 08:31:08 +00:00
Toran Bruce Richards
26e3afa37d Merge branch 'dev' into q53n42-codex/add-list-operation-blocks 2025-06-27 21:33:09 +01:00
Toran Bruce Richards
d12fdbda79 Update autogpt_platform/backend/backend/blocks/basic.py
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-06-27 21:31:11 +01:00
Toran Bruce Richards
a235e49dda fix(blocks): remove failing test scenarios 2025-06-27 21:26:39 +01:00
Ubbe
2dd366172e feat(frontend): add dialog component (#10254)
## Changes 🏗️

### Overview

Introduces a new responsive `<Dialog />` component that automatically
adapts to screen size, providing optimal UX across devices.

<img width="800" alt="Screenshot 2025-06-27 at 16 00 01"
src="https://github.com/user-attachments/assets/d0c53b30-488f-4102-8100-c9318168d65b"
/>

<img width="300" alt="Screenshot 2025-06-27 at 16 00 12"
src="https://github.com/user-attachments/assets/f2105708-97d9-4a94-8e26-3c2d582ea8cd"
/>

### Key Features

#### 📱 **Responsive Behavior**
- **Desktop**: Modal dialog with overlay
- **Mobile**: Bottom drawer (via [Vaul](https://vaul.emilkowal.ski/)) with
**swipe-to-dismiss** functionality

#### 🎯 **Multiple Interaction Methods**
- `ESC` key to close (both desktop & mobile)
- Click outside to dismiss
- Swipe down to dismiss (mobile drawer)
- Close button (X)

####  Why I did not use `shadcn/dialog` in this case as a base

While we already use the raw `shadcn/dialog` on the platform, it's
designed as a desktop-only solution and is not really
responsive-friendly. It lacks 📱 mobile-optimisation patterns like
_bottom drawers_, _swipe-to-dismiss gestures_ ( the new implementation
has it via [Vaul](https://vaul.emilkowal.ski/) ), and automatic
breakpoint adaptation according to screen size.

#### 🧩 **Compound Component Pattern**
```tsx
<Dialog title="Example">
  <Dialog.Trigger>
    <Button>Open Dialog</Button>
  </Dialog.Trigger>
  <Dialog.Content>
    Content goes here
  </Dialog.Content>
</Dialog>
```

#### ⚙️ **Flexible Control**
- **Uncontrolled**: Self-managed state via triggers
- **Controlled**: External state management
- **Force open**: rare but might be needed
- **Custom styling**: if needed

## Checklist 📋

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] **Desktop Modal**: Opens/closes via trigger, ESC key, click
outside, close button
- [x] **Mobile Drawer**: Automatically switches at `lg` breakpoint,
swipe-to-dismiss works
- [x] **Controlled Mode**: External state management functions correctly
  - [x] **Force Open**: Dialog stays open for preview purposes
  - [x] **Custom Styling**: CSS-in-JS overrides work as expected
- [x] **Footer Component**: Action buttons render and function properly
  - [x] **No Title Mode**: Dialog works without title prop
- [x] **Accessibility**: Tab navigation, screen reader announcements,
ARIA compliance
- [x] **Responsive Breakpoints**: Component switches modes at correct
screen sizes
  - [x] **Storybook**: All stories render and function correctly

---------

Co-authored-by: Abhimanyu Yadav <122007096+Abhi1992002@users.noreply.github.com>
2025-06-27 17:08:03 +00:00
Zamil Majdy
4d0db27d5e feat(block): Improve CreateListBlock to support batching based on token count (#10257)
CreateListBlock can only batch lists based on the size limit, but
sometimes we need the size to be dynamically adjusted based on the token
count.

### Changes 🏗️

Improve CreateListBlock to support batching based on token count
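
A minimal sketch of token-aware batching; `count_tokens` is a rough
stand-in for whatever tokenizer the block actually uses:

```python
def count_tokens(item: str) -> int:
    return max(1, len(item) // 4)  # crude ~4 chars/token heuristic


def batch_by_tokens(items: list[str], max_tokens: int) -> list[list[str]]:
    """Group items so each batch stays within the token budget."""
    batches: list[list[str]] = []
    current: list[str] = []
    current_tokens = 0
    for item in items:
        tokens = count_tokens(item)
        # Close the current batch when adding this item would bust the budget.
        if current and current_tokens + tokens > max_tokens:
            batches.append(current)
            current, current_tokens = [], 0
        current.append(item)
        current_tokens += tokens
    if current:
        batches.append(current)
    return batches
```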

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Test CreateListBlock
2025-06-27 16:56:17 +00:00
Reinier van der Leer
5421ccf86a feat(platform/library): Scheduling UX (#10246)
Complete the implementation of the Agent Run Scheduling UX in the
Library.

Demo:


https://github.com/user-attachments/assets/701adc63-452c-4d37-aeea-51788b2774f2

### Changes 🏗️

Frontend:
- Add "Schedule" button + dialog + logic to `AgentRunDraftView`
  - Update corresponding logic on `AgentRunsPage`
  - Add schedule name field to `CronSchedulerDialog`
- Amend Builder components `useAgentGraph`, `FlowEditor`,
`RunnerUIWrapper` to also handle schedule name input
    - Split `CronScheduler` into `CronScheduler`+`CronSchedulerDialog`
- Make `AgentScheduleDetailsView` more fully functional
  - Add schedule description to info box
  - Add "Delete schedule" button
- Update schedule create/select/delete logic in `AgentRunsPage`
- Improve schedule UX in `AgentRunsSelectorList`
  - Switch tabs automatically when a run or schedule is selected
  - Remove now-redundant schedule filters
- Refactor `@/lib/monitor/cronExpressionManager` into
`@/lib/cron-expression-utils`

Backend + API:
- Add name and credentials to graph execution schedule job params
- Update schedule API
  - `POST /schedules` -> `POST /graphs/{graph_id}/schedules`
  - Add `GET /graphs/{graph_id}/schedules`
  - Add not found error handling to `DELETE /schedules/{schedule_id}`
  - Minor refactoring

Backend:
- Fix "`GraphModel`->`NodeModel` is not fully defined" error in
scheduler
- Add support for all exceptions defined in `backend.util.exceptions` to
RPC logic in `backend.util.service`
- Fix inconsistent log prefixing in `backend.executor.scheduler`

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- Create a simple agent with inputs and blocks that require credentials;
go to this agent in the Library
- Fill out the inputs and click "Schedule"; make it run every minute
(for testing purposes)
  - [x] -> newly created schedule appears in the list
  - [x] -> scheduled runs are successful
  - Click "Delete schedule"
  - [x] -> schedule no longer in list
- [x] -> on deleting the last schedule, view switches back to the Runs
list
  - [x] -> no new runs occur from the deleted schedule
2025-06-27 15:31:44 +00:00
Zamil Majdy
c4056cbae9 feat(block): Introduce context-window aware prompt compaction for LLM & SmartDecision blocks (#10252)
Calling the LLM through the current block can sometimes break when the
prompt exceeds the model's context window.
A prompt compaction algorithm is now applied (enabled by default) to
make sure the sent prompt stays within the context-window limit.


### Changes 🏗️

````
Heuristics
--------
* Prefer shrinking the content rather than truncating the conversation.
* If the conversation content is compacted and it's still not enough, then reduce the conversation list.
* The rest of the implementation is adjusted to minimize the LLM call breaking.

Strategy
--------
1. **Token-aware truncation** – progressively halve a per-message cap
   (`start_cap`, `start_cap/2`, … `floor_cap`) and apply it to the
   *content* of every message except the first and last.  Tool shells
   are included: we keep the envelope but shorten huge payloads.
2. **Middle-out deletion** – if still over the limit, delete the whole
   messages working outward from the centre, **skipping** any message
   that contains ``tool_calls`` or has ``role == "tool"``.
3. **Last-chance trim** – if still too big, truncate the *first* and
   *last* message bodies down to `floor_cap` tokens.
4. If the prompt is *still* too large:
     • raise ``ValueError``      when ``lossy_ok == False`` (default)
     • return the partially-trimmed prompt when ``lossy_ok == True``
````
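
A condensed sketch of that strategy; `count_tokens`, the cap values, and
the message shape are placeholders, not the block's actual tokenizer or
defaults:

```python
def count_tokens(text: str) -> int:
    return max(1, len(text) // 4)  # placeholder tokenizer


def total_tokens(msgs: list[dict]) -> int:
    return sum(count_tokens(str(m.get("content", ""))) for m in msgs)


def compact_prompt(msgs: list[dict], limit: int, start_cap: int = 2048,
                   floor_cap: int = 128, lossy_ok: bool = False) -> list[dict]:
    assert len(msgs) >= 2, "sketch assumes at least two messages"

    def truncate(m: dict, cap: int) -> dict:
        content = str(m.get("content", ""))
        if count_tokens(content) <= cap:
            return m
        return {**m, "content": content[: cap * 4]}  # keep envelope, shrink payload

    # 1. Token-aware truncation: halve the per-message cap each pass,
    #    sparing the first and last messages.
    cap = start_cap
    while cap >= floor_cap and total_tokens(msgs) > limit:
        msgs = [msgs[0]] + [truncate(m, cap) for m in msgs[1:-1]] + [msgs[-1]]
        cap //= 2

    # 2. Middle-out deletion: drop whole messages from the centre outward,
    #    skipping tool calls and tool results.
    while total_tokens(msgs) > limit and len(msgs) > 2:
        deletable = [i for i in range(1, len(msgs) - 1)
                     if not msgs[i].get("tool_calls") and msgs[i].get("role") != "tool"]
        if not deletable:
            break
        centre = len(msgs) / 2
        msgs.pop(min(deletable, key=lambda i: abs(i - centre)))

    # 3. Last-chance trim of the first and last message bodies.
    if total_tokens(msgs) > limit:
        msgs = [truncate(msgs[0], floor_cap)] + msgs[1:-1] + [truncate(msgs[-1], floor_cap)]

    # 4. Fail loudly unless the caller accepts a lossy result.
    if total_tokens(msgs) > limit and not lossy_ok:
        raise ValueError("prompt still exceeds the context window after compaction")
    return msgs
```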

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
- [x] Run an SDM block in a loop until it hits 200,000 tokens using the
OpenAI o3 model.
2025-06-27 15:07:50 +00:00
Reinier van der Leer
c01beaf003 fix(blocks): Restore GithubReadPullRequestBlock diff output (#10256)
- Follow-up fix to #10138

AI erased a bit of functionality from the `GithubReadPullRequestBlock`
in #10138. This PR puts it back and improves on the old format.

### Changes 🏗️

- Include full diff in `changes` output of `GithubReadPullRequestBlock`

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [ ] I have tested my changes according to the test plan:
- Use the `GithubReadPullRequestBlock` with `include_pr_changes` enabled
  - [ ] -> block runs successfully
  - [ ] -> full diff included in `changes` output
2025-06-27 15:05:31 +00:00
dependabot[bot]
cf560c5d65 chore(frontend/deps-dev): Bump the development-dependencies group across 1 directory with 14 updates (#10249)
Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore <dependency name> major version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's major version (unless you unignore this specific
dependency's major version or upgrade to it yourself)
- `@dependabot ignore <dependency name> minor version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's minor version (unless you unignore this specific
dependency's minor version or upgrade to it yourself)
- `@dependabot ignore <dependency name>` will close this group update PR
and stop Dependabot creating any more for the specific dependency
(unless you unignore this specific dependency or upgrade to it yourself)
- `@dependabot unignore <dependency name>` will remove all of the ignore
conditions of the specified dependency
- `@dependabot unignore <dependency name> <ignore condition>` will
remove the ignore condition of the specified dependency and ignore
conditions


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-27 10:21:49 +00:00
Toran Bruce Richards
77e99e9739 feat(blocks): Add more Revid.ai media generation blocks (#9931)
### Why these changes are needed 🧐

Revid.ai offers several specialised, undocumented rendering flows beyond
the basic "text-to-video" endpoint our platform already supported,
allowing users to:

1. **Generate ads** from copy plus product images (30-second vertical
spots).
2. **Turn a single creative prompt** into a fully AI-generated video (no
multi-line script).
3. **Transform a screenshot into a narrated, avatar-driven clip**, ideal
for product-led demos.

Without first-class blocks for these flows, users were forced to drop
to raw HTTP nodes, losing schema validation, test mocks and credential
management.

### Changes 🏗️

Added a new category to ``BlockCategory`` in ``block.py`` for ``MARKETING
= "Block that helps with marketing"``

| Area | Change | Notes |
|------|--------|-------|
| `ai_shortform_video_block.py` | Refactored out a shared `_RevidMixin` (webhook + polling helpers). | Keeps DRY across new blocks. |
| | Added `AudioTrack.DONT_STOP_ME_ABSTRACT_FUTURE_BASS` and `Voice.EVA` enum members. | Required by Revid sample payloads. |
| | `AIAdMakerVideoCreatorBlock` | Implements ai-ad-generator flow; supports optional `input_media_urls`, `target_duration`, `use_only_provided_media`. |
| | `AIPromptToVideoCreatorBlock` | Implements prompt-to-video flow with `prompt_target_duration`. |
| | `AIScreenshotToVideoAdBlock` | Implements screenshot-to-video-ad flow (avatar narration, BG removal). |
| | Added full pydantic schemas, test stubs & mock hooks for each new block. | Ensures unit tests pass and blocks appear in UI. |

No existing functionality was removed; the current
`AIShortformVideoCreatorBlock` is untouched apart from enum imports.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
- [x] Use the ``AI ShortForm Video Creator`` block to generate a video;
it works
- [x] Same with the ``AI Ad Maker Video Creator`` block; test it and it
works
- [x] And test the ``AI Screenshot to Video Ad`` block; it works

---------

Co-authored-by: Bently <Github@bentlybro.com>
2025-06-27 06:39:24 +00:00
Zamil Majdy
7f7c387156 fix(block): Fix broken in SearchPeople block 2025-06-26 19:13:34 -07:00
Zamil Majdy
21cf263eea fix(block): Fix typo in Apollo block 2025-06-26 14:41:15 -07:00
Zamil Majdy
500952a15f fix(block): Fix typo in Apollo block 2025-06-26 14:40:03 -07:00
Zamil Majdy
b3c81fa9e2 fix(block): Fix typo in SearchPeople block 2025-06-26 14:31:57 -07:00
Zamil Majdy
59e96d4759 fix(block): Avoid Apollo enrich_info feature overriding already existing search info 2025-06-26 13:34:49 -07:00
Zamil Majdy
e01dd94e36 feat(block): Add enriching email feature for SearchPeopleBlock & introduce GetPersonDetailBlock (#10251)
<!-- Clearly explain the need for these changes: -->

### Changes 🏗️

* Add an enriching email feature toggle for SearchPeopleBlock
* Introduce GetPersonDetailBlock
* Adjust the cost of both blocks

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Execute SearchPeopleBlock & GetPersonDetailBlock
2025-06-26 20:11:43 +00:00
Zamil Majdy
2f12e8d731 feat(platform): Add Host-scoped credentials support for blocks HTTP requests (#10215)
Currently, we don't have a secure way to pass Authorization headers when
calling the `SendWebRequestBlock`.
This hinders the integration of third-party applications that do not yet
have native block support.

### Changes 🏗️

Add Host-scoped credentials support for the newly introduced
SendAuthenticatedWebRequestBlock.
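
A sketch of how host-scoped matching could work; the storage shape and
names are assumptions for illustration only:

```python
from urllib.parse import urlparse


def find_host_credentials(url: str, credentials: dict[str, dict]) -> dict | None:
    """Return the credential entry scoped to the request's host, if any."""
    return credentials.get(urlparse(url).hostname or "")


creds_store = {
    "api.openai.com": {"header": "Authorization", "value": "Bearer sk-..."},
}

match = find_host_credentials("https://api.openai.com/v1/images/edits", creds_store)
if match:
    headers = {match["header"]: match["value"]}  # only sent to the matching host
```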

<img width="1000" alt="image"
src="https://github.com/user-attachments/assets/0d3d577a-2b9b-4f0f-9377-0e00a069ba37"
/>
<img width="1000" alt="image"
src="https://github.com/user-attachments/assets/a59b9f16-c89c-453d-a628-1df0dfd60fb5"
/>

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
- [x] Uses `https://api.openai.com/v1/images/edits` through
SendWebRequestBlock by passing the api-key through host-scoped
credentials.
2025-06-26 19:09:27 +00:00
Nicholas Tindle
83dbcd11e4 docs(frontend, backend): add OAuth security boundary docs (#10202)
### Why are these changes needed?

<!-- Clearly explain the need for these changes: -->
These changes document the OAuth integration flow for CASA lvl 2
compliance, specifically addressing the requirement to "Verify
documentation and justification of all the application's trust
boundaries, components, and significant data flows." The documentation
clarifies the two distinct OAuth implementations in AutoGPT: user
authentication via Supabase SSO and API integration credentials for
third-party services.

### Changes 🏗️

<!-- Concisely describe all of the changes made in this pull request:
-->
- Created comprehensive OAuth integration flow documentation at
`/docs/content/platform/contributing/oauth-integration-flow.md`
- Documented trust boundaries between frontend (untrusted), backend API
(trusted), and external providers (semi-trusted)
- Added detailed component architecture for both frontend and backend
OAuth implementations
- Included mermaid diagrams illustrating:
  - OAuth flow sequences (initiation, authorization, token refresh)
  - System architecture showing SSO vs API integration OAuth
  - Data flow diagram
  - Security architecture layers
  - Credential lifecycle state diagram
- Documented security measures including CSRF protection, PKCE
implementation, and token management
- Clarified the distinction between Supabase SSO for user login and
custom OAuth for API integrations
- Added references to source files for up-to-date provider lists rather
than hard-coding all providers

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Created documentation file with proper markdown formatting
  - [x] Verified all file paths referenced in documentation exist
  - [x] Confirmed mermaid diagrams render correctly
- [x] Validated that the documentation accurately reflects the codebase
implementation

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-06-26 18:08:55 +00:00
Abhimanyu Yadav
f66b8f9c74 refactor(frontend): update settings pages fetching using react query (#10248)
### Changes 🏗️
- We have implemented some backend changes, so I have added a new,
updated OpenAPI specification.
- We have updated the settings and API keys page to enable us to use
React Query for fetching data.

### Checklist 📋

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
    - [x] Settings and api keys page is working correctly
2025-06-26 15:13:20 +00:00
Ubbe
2af9d75dec feat(frontend): enforce auth through httpOnly cookies (#10201)
### Changes 🏗️

Implemented `httpOnly` cookies 🍪 for secure session management 💆🏽 

- 🙏🏽 **Moved all API requests to server-side execution** for maximum XSS
protection
- All authentication now happens server-side with `httpOnly` cookies (no
JWT tokens exposed to client)
- Created `proxyApiRequest()` and `proxyFileUpload()` server actions to
handle all communication with API
- Updated `BackendAPI._request()` to always use proxy approach for
consistent security

- 🚧 **Exception: WebSocket authentication** requires client-side token
exposure
- Added `getWebSocketToken()` server action to securely provide tokens
only for WebSocket connections
  - Maintains secure architecture while we keep the real-time features

- 🧹 **Abstracted implementation details** into reusable helper functions
  - Reduced proxy actions from 157 lines to 48 lines (70% reduction)
- Added flexible content-type support ( _JSON, form-urlencoded, custom_
)
  - Enhanced error handling for graceful logout scenarios
  
- 📙 **Renamed `/reset_password` page to `/reset-password`**
  - couldn't resist sorry... snake case URLs get me 

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Verify all API requests work through server-side proxy
  - [x] Confirm httpOnly cookies prevent client-side JWT access
  - [x] Test WebSocket connections work with server-provided tokens
  - [x] Verify logout scenarios don't throw authentication errors
  - [x] Check file uploads work securely through proxy
  - [x] Validate zero breaking changes for existing BackendAPI calls

---------

Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Co-authored-by: Nicholas Tindle <nicktindle@outlook.com>
Co-authored-by: Swifty <craigswift13@gmail.com>
2025-06-26 13:27:23 +00:00
Ubbe
0b37263092 feat(frontend): add Badge component (#10244)
## Changes 🏗️

<img width="800" alt="Screenshot 2025-06-25 at 20 34 38"
src="https://github.com/user-attachments/assets/bfc90504-85b6-4178-9ace-2aa4d14f16b0"
/>
<br /><br />

- To match what is on the AutoGPT design system
- Unit tests commented because they depend on:
https://github.com/Significant-Gravitas/AutoGPT/pull/10243

## Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run Storybook locally, Badge stories look good
2025-06-26 05:21:20 +00:00
Zamil Majdy
443995d79a fix(backend): Fix broken executor due to missing nodes_input_masks on GraphExecutionEntry 2025-06-25 11:20:56 -07:00
Zamil Majdy
68749a28d4 feat(backend): Add ClamAV support for anti-virus scan on file upload to the platform (#10232)
An anti-virus file scan step is added to each file upload step on the
platform before the file is sent to cloud storage or local machine
storage.

### Changes 🏗️

* Added ClamAV service
* Added AV file scan on each upload step
* Added tests & documentation
* Make the step mandatory even on local development

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Tried using FileUploadBlock & AgentFileInputBlock
2025-06-25 15:29:04 +00:00
Ubbe
203cb1c88c feat(frontend): add Link component (#10241)
## Changes 🏗️

<img width="1580" alt="Screenshot 2025-06-25 at 18 11 36"
src="https://github.com/user-attachments/assets/c8b136b6-5897-41fa-a03b-010582c4b879"
/>
<br /><br />

Add a new `<Link />` component that will be the standard when rendering
links on the platform.

It is a wrapper of `next/link` and has an `isExternal` prop; when
supplied `target="_blank"` and `rel="noopener noreferrer"` will be added
to it. It comes with the styles agreed on AutoGPT design system.

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run Storybook locally
  - [x] Tests pass and the component looks good
2025-06-25 14:37:22 +00:00
Ubbe
3e3db658c6 docs(frontend): document skeleton (#10240)
## Changes 🏗️

<img width="800" alt="Screenshot 2025-06-25 at 17 52 38"
src="https://github.com/user-attachments/assets/18f859cf-5008-4915-925c-1912ab9cf176"
/>

- Depends on #10235 so that we can test the new Chromatic workflow with
this
- Documents our Skeleton atom which is directly
[shadcn/skeleton](https://ui.shadcn.com/docs/components/skeleton)

## Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run storybook locally
  - [x] The Skeleton stories look good
2025-06-25 14:34:40 +00:00
Ubbe
1dcf0312f2 chore(frontend): setup chromatic (#10235)
## Changes 🏗️

<img width="800" alt="Screenshot 2025-06-25 at 13 43 06"
src="https://github.com/user-attachments/assets/13ffd32e-ffa1-482e-91a6-8363ad6b67df"
/>

<br /><br />

- Setup Chromatic ( install + `package.json` command )
- Make it run on the CI
- Remove a lot of old components in Storybook which were broken or needed
design review
  - for now we only keep in Storybook what has been approved by design
- Remove `test-storybook:ci` commands
- I plan to run tests via Chromatic, but I want to look at that setup in
a separate PR and in a clean state


## 📋 Checklist

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] The `chromatic` job succeeds on the CI and the changes appear on
Chromatic's dashboard
2025-06-25 13:55:18 +00:00
Ubbe
8442fb0605 fix(backend): test data script (#10206)
## Changes 🏗️

The test data script was not working locally; this should fix it 🤞🏽

- Fixed `agentId` → `agentGraphId` field references in preset matching
logic
- Fixed `agentId` → `agentGraphId` field references in store listing
graph lookup
- Added graph uniqueness logic to prevent duplicate library agents per
user
- Improved data consistency by ensuring proper foreign key relationships

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Verified script runs without database schema errors
  - [x] Confirmed foreign key relationships are properly maintained
  - [x] Tested that library agents use unique graphs per user
  - [x] Validated preset matching uses correct field references
2025-06-25 08:59:09 +00:00
Reinier van der Leer
1d29a64e35 fix(backend/library): Split & fix update_library_agent endpoint (#10220)
This PR makes several improvements to the `update_library_agent`
endpoint.

- Resolves #10216

### Changes 🏗️

- Add `DELETE /library/agents/{id}` endpoint
- Fix `PUT /library/agents/{id}` endpoint
  - Return updated library agent
  - Remove `is_deleted` parameter
  - Change method from `PUT` to `PATCH`

Also, a small DX improvement:
- Expose `BackendAPI` globally through `window.api` for local dev
purposes

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Deleting library agents works
2025-06-25 08:53:27 +00:00
Reinier van der Leer
aedbcbf2d8 fix(backend/library): Fix library breakage for manual-setup triggered agents (#10230)
- Follow-up fix to #10167
- Resolves #10228

### Changes 🏗️

- Don't assume `block.input_schema.jsonschema()["required"]` exists
- Unbreak handling of `webhook_type` in
`BaseWebhooksManager.get_manual_webhook(..)`

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- Create an agent with a Generic Webhook Trigger block; go to it in the
Library
  - [x] -> `/library/agents/[id]` loads normally
2025-06-24 22:37:36 +00:00
Reinier van der Leer
c3ad260415 fix(frontend/builder): Fix agent block input value key (#10229)
- Follow-up fix to #9862
- Resolves #10097

In #9862, the `AgentExecutorBlock`'s nested input field was renamed from
`data` to `input`, but apparently the frontend also had a reference to
this field and was now broken.

### Changes 🏗️

- Update `getInputPropKey` in `CustomNode` to use `inputs.{key}` instead
of `data.{key}`

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - Create an agent with at least one input
  - Use the agent with at least one input inside another agent
  - Set a default value on the input on the agent block
  - Save the graph
  - [x] -> default input value is saved
2025-06-24 21:31:48 +00:00
Zamil Majdy
1c6b829925 fix(blocks): Fix Image input on AIImageEditorBlock to accept relative path file media (#10210)
AIImageEditorBlock was not able to accept an image from AgentFileInput
or FileStore block.

### Changes 🏗️

* Add support for image loading for the image editor block:
<img width="1081" alt="Screenshot 2025-06-23 at 10 28 45 AM"
src="https://github.com/user-attachments/assets/ac3fea91-9503-4894-bbe3-2dc3c5649a39"
/>

* Avoid rendering a relative path image file.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Run AIImageEditorBlock using AgentFileInput or FileStore block.
2025-06-24 20:36:02 +00:00
Reinier van der Leer
efa4b6d2a0 feat(platform/library): Triggered-agent support (#10167)
This pull request adds support for setting up (webhook-)triggered agents
in the Library. It contains changes throughout the entire stack to make
everything work in the various phases of a triggered agent's lifecycle:
setup, execution, updates, deletion.

Setting up agents with webhook triggers was previously only possible in
the Builder, limiting their use to the agent's creator only. To make it
work in the Library, this change uses the previously introduced
`AgentPreset` to store trigger information, instead of storing it on the
graph's nodes, to which only the graph's creator has access.

- Initial ticket: #10111
- Builds on #9786

![screenshot of trigger setup screen in the
library](https://github.com/user-attachments/assets/525b4e94-d799-4328-b5fa-f05d6a3a206a)
![screenshot of trigger edit screen in the
library](https://github.com/user-attachments/assets/e67eb0bc-df70-4a75-bf95-1c31263ef0c9)

### Changes 🏗️

Frontend:
- Amend the Library's `AgentRunDraftView` to handle creating and editing
Presets
- Add `hideIfSingleCredentialAvailable` parameter to `CredentialsInput`
  - Add multi-select support to `TypeBasedInput`
- Add Presets section to `AgentRunsSelectorList`
  - Amend `AgentRunSummaryCard` for use for Presets
- Add `AgentStatusChip` to display general agent status (for now: Active
/ Inactive / Error)
- Add Preset loading logic and create/update/delete handlers logic to
`AgentRunsPage`
- Rename `IconClose` to `IconCross`

API:
- Add `LibraryAgent` properties `has_external_trigger`,
`trigger_setup_info`, `credentials_input_schema`
- Add `POST /library/agents/{library_agent_id}/setup_trigger` endpoint
- Remove redundant parameters from `POST
/library/presets/{preset_id}/execute` endpoint

Backend:
- Add `POST /library/agents/{library_agent_id}/setup_trigger` endpoint
- Extract non-node-related logic from `on_node_activate` into
`setup_webhook_for_block`
- Add webhook-related logic to `update_preset` and `delete_preset`
endpoints
- Amend webhook infrastructure to work with AgentPresets
  - Add preset trigger support to webhook ingress endpoint
- Amend executor stack to work with passed-in node input
(`nodes_input_masks`, generalized from `node_credentials_input_map`)
    - Amend graph validation to work with passed-in node input
  - Add `AgentPreset`->`IntegrationWebhook` relation
    - Add `WebhookWithRelations` model
- Change behavior of `BaseWebhooksManager.get_manual_webhook(..)` to
avoid unnecessary changes of the webhook URL: ignore `events` to find
matching webhook, and update `events` if necessary.
- Fix & improve `AgentPreset` API, models, and DB logic
  - Add `isDeleted` filter to get/list queries
  - Add `user_id` attribute to `LibraryAgentPreset` model
  - Add separate `credentials` property to `LibraryAgentPreset` model
- Fix `library_db.update_preset(..)` replacement of existing
`InputPresets`
- Make `library_db.update_preset(..)` more usage-friendly with separate
parameters for updateable properties
- Add `user_id` checks to various DB functions
- Fix error handling in various endpoints
- Fix cache race condition on `load_webhook_managers()`

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - Test existing functionality
- [x] Auto-setup and -teardown of webhooks on save in the builder still
works
    - [x] Running an agent normally from the Library still works
  - Test new functionality
    - [x] Setting up a trigger in the Library
    - [x] Updating a trigger in the Library
    - [x] Disabling and re-enabling a trigger in the Library
    - [x] Deleting a trigger in the Library
- [x] Triggers set up in the Library result in a new run when the
webhook receives a payload
2025-06-24 20:28:20 +00:00
Abhimanyu Yadav
94aed94113 feat(platform): setup and configure orval (#10209)
This pull request sets up and configures Orval for API client
generation. It automates the process of creating TypeScript clients from
the backend's OpenAPI specification, improving development efficiency
and reducing manual code maintenance.

### Changes 🏗️

- Configures Orval with a new configuration file (`orval.config.ts`).
- Adds scripts to `package.json` for fetching the OpenAPI spec and
generating the API client.
- Implements a custom mutator for handling authentication.
- Adds API client generation as a step in the CI workflow.
- Adds `.gitignore` entry for generated API client files.
- Adds a security middleware to prevent caching of sensitive data.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Verified that the API client is generated correctly.
- [x] Confirmed that the custom mutator is functioning as expected for
authentication.
- [x] Ensured that the new CI workflow step for API client generation is
successful.
  - [x] Tested generated API calls

#### For configuration changes:
- [x] `.env.example` is updated or already compatible with my changes
- [ ] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
2025-06-24 14:00:19 +00:00
Zamil Majdy
e701f41e66 feat(blocks): Enabling auto type conversion on block input schema mismatch for nested input (#10203)
Since auto conversion is applied before nested input is merged in the
block, the conversion breaks for nested inputs (a toy illustration of the
conversion follows the list below).

### Changes 🏗️

* Enabling auto-type conversion on block input schema mismatch for
nested input
* Add batching feature for `CreateListBlock`
* Increase default max_token size for LLM call
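
A toy illustration of the auto conversion, assuming a hypothetical `coerce` helper (the real logic lives in the block input handling and covers many more cases):

```python
def coerce(value, expected_type: str):
    # Coerce a mismatched input to the type declared in the block's input schema.
    converters = {"string": str, "integer": int, "number": float}
    convert = converters.get(expected_type)
    return convert(value) if convert else value


# e.g. a non-string prompt value of 42 becomes "42" before the block runs
assert coerce(42, "string") == "42"
```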

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
- [x] Run `AIStructuredResponseGeneratorBlock` with non-string prompt
value (should be auto-converted).
2025-06-21 03:56:53 +07:00
Zamil Majdy
a2d54c5fb4 feat(block): Enable cloud Apollo integration cost (#10199)
### Changes 🏗️

Add cost calculation for Apollo integration.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Run Apollo block Search People & Organizations Block.
2025-06-20 15:17:58 +00:00
Nicholas Tindle
568f5a449e feat(backend): cache control headers (#10160)
### Why? 🤔
  <!-- Clearly explain the need for these changes: -->
  We need to prevent sensitive data (authentication tokens, API
  keys, user credentials, personal information) from being cached by
   browsers and proxies. Following the principle of "secure by
  default", we're switching from a deny list to an allow list
  approach for cache control.

  ### Changes 🛠️
  <!-- Concisely describe all of the changes made in this pull
  request: -->
  - **Refactored cache control middleware from deny list to allow 
  list approach**
    - By default, ALL endpoints now have `Cache-Control: no-store,
  no-cache, must-revalidate, private` headers
    - Only explicitly allowed paths (static assets, health checks,
  public store pages) can be cached
    - This ensures new endpoints are automatically protected without
   developers having to remember to add them to a list

  - **Updated `SecurityHeadersMiddleware` in 
  `/backend/backend/server/middleware/security.py`**
    - Renamed `SENSITIVE_PATHS` to `CACHEABLE_PATHS`
    - Inverted the logic in the `is_cacheable_path()` method
    - Cache control headers are now applied to all paths NOT in the
  allow list (see the sketch below)

  - **Updated test suite to match new behavior**
    - Tests now verify that most endpoints have cache control
  headers by default
    - Tests verify that only allowed paths (static assets, health
  endpoints, etc.) can be cached

  - **Updated documentation in `CLAUDE.md`**
    - Documented the new allow list approach
    - Added instructions for developers on how to allow caching for
  new endpoints
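
A minimal sketch of the allow-list approach, with hypothetical path prefixes (the real `CACHEABLE_PATHS` and middleware live in `/backend/backend/server/middleware/security.py` and differ in detail):

```python
from starlette.middleware.base import BaseHTTPMiddleware
from starlette.requests import Request

# Hypothetical allow list for illustration only.
CACHEABLE_PATHS = ("/static/", "/health", "/store/")


class SecurityHeadersMiddleware(BaseHTTPMiddleware):
    async def dispatch(self, request: Request, call_next):
        response = await call_next(request)
        # Secure by default: forbid caching unless the path is explicitly allowed.
        if not any(request.url.path.startswith(p) for p in CACHEABLE_PATHS):
            response.headers["Cache-Control"] = (
                "no-store, no-cache, must-revalidate, private"
            )
        return response


# Registered on the app with: app.add_middleware(SecurityHeadersMiddleware)
```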


### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Test modified endpoints work still
  - [x] Test modified endpoints correctly have no cache rules

---------

Co-authored-by: Swifty <craigswift13@gmail.com>
2025-06-20 14:19:52 +00:00
Zamil Majdy
3df6dcd26b feat(blocks): Improve SmartDecisionBlock & AIStructuredResponseGeneratorBlock (#10198)
Main issues:
* `AIStructuredResponseGeneratorBlock` is not able to produce a list of
objects.
* `SmartDecisionBlock` is not able to call tools with some optional
inputs.

### Changes 🏗️

* Allow persisting `null` / `None` value as execution output.
* Provide `multiple_tool_calls` option for `SmartDecisionBlock`.
* Provide `list_result` option for `AIStructuredResponseGeneratorBlock`

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Run `SmartDecisionBlock` & `AIStructuredResponseGeneratorBlock`
2025-06-20 14:14:02 +00:00
Abhimanyu Yadav
aab40fe225 feat(backend): add custom openapi generator (#10200)
This PR introduces a custom function for generating unique operation IDs
for OpenAPI specifications to improve auto-generated client code
quality.

## Why This Change?
**Better Auto-Generated Clients**: Default FastAPI operation IDs create
unclear method names in generated clients. Our custom generator produces
clean, readable operation IDs that translate to intuitive method names.

- **Before**: `get_items_api_v1_items_get` → unclear generated methods
- **After**: `get_users_list` → clean, descriptive method names

## Changes
-  **Added**: `custom_generate_unique_id` utility function
  - Generates IDs using pattern: `{method}_{tag}_{summary}`
  - Ensures uniqueness and readability
- 🔧 **Updated**: FastAPI app configuration to use the custom generator (see the sketch below)
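
A minimal sketch of such a generator, assuming the `{method}_{tag}_{summary}` pattern above (`generate_unique_id_function` is FastAPI's standard hook; the exact helper in this PR may differ):

```python
from fastapi import FastAPI
from fastapi.routing import APIRoute


def custom_generate_unique_id(route: APIRoute) -> str:
    # Build a readable operation ID such as "get_users_list" instead of
    # the default "get_items_api_v1_items_get".
    method = next(iter(route.methods), "get").lower()
    tag = str(route.tags[0]).lower().replace(" ", "_") if route.tags else "default"
    summary = (route.summary or route.name).lower().replace(" ", "_")
    return f"{method}_{tag}_{summary}"


app = FastAPI(generate_unique_id_function=custom_generate_unique_id)
```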

## Checklist
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
   - [x] OpenAPI docs reflect new operation ID format
   - [x] Tested various HTTP methods, tags, and summaries
   - [x] Verified app startup functionality 
   - [x] Validated improved client generation output
2025-06-20 13:32:39 +00:00
Zamil Majdy
91ea322887 fix(blocks): Fix broken Apollo Blocks (#10197)
Current Apollo blocks only work with keywords; the many list-filter
fields don't work because the request passes the wrong GET parameter name
(missing the `[]` suffix).

### Changes 🏗️

Change the GET request to a POST request for Apollo.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Run SearchPeopleBlock with title filter
2025-06-20 05:15:46 +00:00
Ubbe
e183be08bd feat(frontend): Add cool Input component (#10196)
## Changes 🏗️

<img width="400" alt="Screenshot 2025-06-19 at 18 17 53"
src="https://github.com/user-attachments/assets/ad690d75-4ce0-4f50-9774-1f9d07cd5934"
/>

<img width="400" alt="Screenshot 2025-06-19 at 18 17 56"
src="https://github.com/user-attachments/assets/97d59e18-76c8-46d1-9b8f-87058bc1ab86"
/>

### 📙 Overview

- New Input component (`components/atoms/Input/`)
- Multiple input types with smart formatting:
  - `type="text"` → Basic text input with no filtering
  - `type="email"` → Email input (no character filtering)  
  - `type="password"` → Password input with masking
- `type="number"` → Number input with character filtering (digits,
decimal, minus)
- `type="amount"` → Formatted number input with comma separators and
decimal limiting
  - `type="tel"` → Phone input allowing only `+()[] ` and digits
  - `type="url"` → URL input (no character filtering)

### 📙 Smart formatting features

- Amount type: `1000` → `1,000`, `1234.567` → `1,234.56` (with
`decimalCount={2}`)
- Number type: `abc123def` → `123`, `12.34.56` → `12.3456`
- Phone type: `abc+1(555)def` → `+1(555)`

### 📙 Other

- Error state with `error` prop - shows red border and error message
below input
- Label control with `hideLabel` prop for accessibility
- Decimal precision control via `decimalCount` prop (amount type only,
default: 4)

### 📙 Architecture

- **Clean separation**: `Input.tsx` (render), `useInput.ts` (logic),
`helpers.ts` (utilities)
- **Comprehensive Storybook stories** with usage examples and
documentation

### 📙 Examples
```tsx
// Basic inputs
<Input type="text" label="Full Name" />
<Input type="email" label="Email" error="Invalid email" />

// Formatted inputs  
<Input type="amount" label="Price" decimalCount={2} />
<Input type="tel" label="Phone" placeholder="+1 (555) 123-4567" />

// Number input (unlimited decimals)
<Input type="number" label="Score" />
```

## Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:

**Test Plan:**
- [x] **Basic functionality**: Text input, label visibility, disabled
state, error display
- [x] **Number validation**: Character filtering (`abc123` → `123`),
decimal handling, negative numbers
- [x] **Amount formatting**: Comma insertion (`1000` → `1,000`), decimal
limiting with `decimalCount`
- [x] **Phone filtering**: Only allows `+()[] ` and digits (`abc555def`
→ `555`)
- [x] **Email/Password/URL**: No character filtering, proper input types
- [x] **Edge cases**: Starting with `.` or `-`, multiple decimals,
accessibility with hidden labels
- [x] **Storybook stories**: All variants documented with interactive
examples
2025-06-19 16:06:23 +00:00
Abhimanyu Yadav
a541a3edd7 feat(frontend): Add React Query DevTools and ESLint Rules (#10194)
This PR integrates React Query DevTools and ESLint rules to improve the
development workflow and enforce best practices for data fetching.

### Changes:
- **React Query DevTools:**
  - Added the `@tanstack/react-query-devtools` package.
  - DevTools are enabled by default in the development environment.
- They can be disabled by setting
`NEXT_PUBLIC_REACT_QUERY_DEVTOOL=false` in your environment variables.

- **ESLint Rules:**
- Integrated `@tanstack/eslint-plugin-query` to enforce best practices
and catch common errors in React Query usage.

- **Configuration:**
- Added the `NEXT_PUBLIC_REACT_QUERY_DEVTOOL` variable to the
`.env.example` file so other developers are aware of this option.

- **Documentation:**
- Updated the `README.md` with instructions on how to toggle the
DevTools using the environment variable.

Configuration Changes Checklist
 - `.env.example` has been updated with the new environment variable.

### Checklist
For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
    - [x] Run the app in development with pnpm dev.
    - [x]  Verified DevTools toggle with environment variables
    - [x] Run pnpm lint in the frontend directory.
    - [x] Confirm that linting passes on the current codebase.

### Screenshot

<img width="1512" alt="Screenshot 2025-06-19 at 6 32 22 PM"
src="https://github.com/user-attachments/assets/a3defd23-2c3d-4d20-b152-037d85e04503"
/>

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-19 15:52:31 +00:00
dependabot[bot]
f3731afaf2 chore(frontend/deps-dev): Bump the development-dependencies group across 1 directory with 12 updates (#10193)
Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore <dependency name> major version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's major version (unless you unignore this specific
dependency's major version or upgrade to it yourself)
- `@dependabot ignore <dependency name> minor version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's minor version (unless you unignore this specific
dependency's minor version or upgrade to it yourself)
- `@dependabot ignore <dependency name>` will close this group update PR
and stop Dependabot creating any more for the specific dependency
(unless you unignore this specific dependency or upgrade to it yourself)
- `@dependabot unignore <dependency name>` will remove all of the ignore
conditions of the specified dependency
- `@dependabot unignore <dependency name> <ignore condition>` will
remove the ignore condition of the specified dependency and ignore
conditions


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-19 12:51:21 +00:00
Abhimanyu Yadav
29395665c3 feat(frontend): Add React Query for Data Fetching (#10190)
Issue -
https://linear.app/autogpt/issue/OPEN-2534/set-up-react-query-for-both-client-side-and-server-side-operations

This update adds react-query to the frontend, enabling efficient data
fetching and caching. It uses a singleton QueryClient on the client for
shared cache, creates a new QueryClient for each server request to
prevent data leaks, and supports server-side prefetching for faster
data loading.

### Changes
- Add @tanstack/react-query dependency  
- Set up QueryClient with default config (except 1m staleTime)  
- Wrap app with QueryClientProvider for global access  
- Ensure safe client/server QueryClient instantiation

> I only changed the staleTime in the default config because the other
settings work well for general use. For specific cases—like when you
want data to stay fresh unless manually invalidated—you can set
staleTime: Infinity in that query.

### Checklist 📋
For code changes:
 - [x] I have clearly listed my changes in the PR description
 - [x] I have made a test plan
 - [x] I have tested my changes according to the test plan:
    - [x] Ran frontend locally – it’s working fine
2025-06-19 12:09:51 +00:00
Ubbe
fc4d0d4bb8 docs(frontend): new Button component + stories (#10192)
### Changes 🏗️

![Screenshot 2025-06-19 at 13 38
21](https://github.com/user-attachments/assets/8c3f8d27-6f4d-4d95-8d78-0b160ce51091)

- Adds a new `<Button>` component that mirrors 1:1 what we have in the
design system
- Documented the new component via stories
- Re-arranged the stories in the Storybook sidebar to show the legacy
ones at the end

Once this is merged, we can start updating buttons on the app to only
use this one, so we have a consistent UX 💆🏽

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run Storybook locally
  - [x] Button stories look good ( _in all variants_ )
2025-06-19 10:43:23 +00:00
Bently
d0beebcbff fix(frontend): Update Schedule Task's default value to "daily" (#10191)
### Changes 🏗️

Fixes: [Make the default scheduler frequency to daily instead of every
minute
#9985](https://github.com/Significant-Gravitas/AutoGPT/issues/9985)

This simply updates the Schedule Task's default frequency from every
minute to daily, with 09:00 as the default time.


![image](https://github.com/user-attachments/assets/673f5e75-a970-41d9-a7b6-531bce4a6d7b)


### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
- [x] Open the Schedule Task UI and see the default is now daily at
09:00
2025-06-19 07:15:12 +00:00
Ubbe
93e611d609 docs(frontend): document new color tokens (#10186)
### Changes 🏗️

<img width="800" alt="Screenshot 2025-06-18 at 19 55 24"
src="https://github.com/user-attachments/assets/f3bd662e-cc64-4a32-a030-973b7cf89d8b"
/>

Document the new colour tokens agreed with the design team, and update
the Tailwind theme with them.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run Storybook locally
  - [x] Verify the colors story renders well and make sense
2025-06-19 05:36:59 +00:00
Abhimanyu Yadav
c29b5e3f0f fix(backend): Add graph_id path parameter to fix automatic client generation from OpenAPI spec (#10189)
## Description
Added the `graph_id` parameter to the stop execution endpoint path
(`/graphs/{graph_id}/executions/{graph_exec_id}/stop`) to fix a client
generation error from the OpenAPI spec.

## Problem
The client generation was failing due to missing path parameter
definition for `graph_id` in the stop execution endpoint.

<img width="1412" alt="Screenshot 2025-06-19 at 9 20 17 AM"
src="https://github.com/user-attachments/assets/aa1667d3-05be-48c6-975b-84473830ac03"
/>


## Solution
Added `graph_id` as a path parameter while maintaining the existing
functionality.
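
For illustration, a minimal FastAPI sketch of the fix (handler name and body are hypothetical): every parameter in the path template must also be declared in the handler signature, otherwise it is missing from the generated OpenAPI parameters and client generators fail on the path template.

```python
from fastapi import APIRouter

router = APIRouter()


@router.post("/graphs/{graph_id}/executions/{graph_exec_id}/stop")
async def stop_graph_execution(graph_id: str, graph_exec_id: str):
    # `graph_id` is now declared, so it appears in the generated OpenAPI spec.
    ...
```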

## Testing
- [x] Verified OpenAPI client generation works without errors
- [x] Confirmed endpoint functionality remains unchanged
- [x] Tested API calls maintain backward compatibility
2025-06-19 05:35:28 +00:00
Bentlybro
753a2bf200 Merge branch 'master' into dev 2025-06-18 16:53:01 +01:00
Ubbe
375777fe3c refactor(frontend): upgrade to Storybook 9 (#10179)
## Changes 🏗️

Migrate to [Storybook 9](https://storybook.js.org/docs/migration-guide),
changes are mostly from the migration tool:
```bash
pnpm storybook@latest upgrade
```

On top of that:
- removed stories for [shadcn](https://ui.shadcn.com/) components
- to avoid confusion: shadcn is the base of our component library, and
is already documented on their website
- removed example stories
- regrouped existing `agpt-ui` stories under `Legacy`
- I need to review them and see if they still fit the expected designs
of the platform or not

<img width="600" alt="Screenshot 2025-06-17 at 13 43 57"
src="https://github.com/user-attachments/assets/ca3d9c1b-9dc4-4684-ac77-6259beeb3e1d"
/>


## Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run `pn storybook` locally
  - [x] It works well, and the stories look good

---------

Co-authored-by: Abhimanyu Yadav <122007096+Abhi1992002@users.noreply.github.com>
2025-06-18 10:36:00 +00:00
Zamil Majdy
1e0a3d3c1b feat(backend): Add request retry on block execution and RPC (#10183)
Requests made during block execution can be throttled, and requests
between services can sometimes break. The scope of this PR is to add
appropriate retries for both (see the sketch below).

### Changes 🏗️

* Block request retry: Retry on throttled status codes only (504, 429, etc.).
* RPC request retry: Retry connection issues (ConnectError, Timeout,
etc).
* Truncate logging on executor/utils.
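
A minimal sketch of the retry pattern, assuming `httpx` and a hypothetical set of throttled status codes (the real implementation and backoff policy live in the executor utilities and may differ):

```python
import asyncio

import httpx

THROTTLED = {429, 502, 503, 504}  # hypothetical; the PR's actual set may differ


async def request_with_retry(
    url: str, retries: int = 3, backoff: float = 1.0
) -> httpx.Response:
    last_exc: Exception | None = None
    async with httpx.AsyncClient() as client:
        for attempt in range(retries + 1):
            try:
                response = await client.get(url)
                if response.status_code not in THROTTLED:
                    return response  # success or a non-retryable error
                last_exc = None
            except (httpx.ConnectError, httpx.TimeoutException) as exc:
                last_exc = exc  # connection issues are retried, like RPC calls
            if attempt < retries:
                await asyncio.sleep(backoff * 2**attempt)  # exponential backoff
    if last_exc is not None:
        raise last_exc
    return response
```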

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Manual graph execution
2025-06-17 21:03:46 +00:00
Zamil Majdy
c4797a5f84 fix(backend): Fix scheduler broken late execution message 2025-06-17 10:25:25 -07:00
Zamil Majdy
4923318cfe fix(backend): Fix scheduler asyncio loop issue & update late execution message report 2025-06-17 10:20:17 -07:00
Ubbe
86361fc1ae fix(frontend): fix all lint errors and add next/typescript (#10182)
## Changes 🏗️

### ESLint Config
1. **Disabled `react-hooks/exhaustive-deps`:** 
- to prevent unnecessary dependency proliferation and rely on code
review instead
2. **Added
[`next/typescript`](https://nextjs.org/docs/app/api-reference/config/eslint#with-typescript):**
- to the ESLint config to make sure we also have TS linting rules
3. **Added custom rule for `@typescript-eslint/no-unused-vars`:** 
- to allow underscore-prefixed variables (convention for intentionally
unused), in some cases helpful

From now on, whenever we have unused variables or imports, the `lint` CI
will fail 🔴 , thanks to `next/typescript` that adds
`typescript-eslint/no-unused-vars` 💆🏽

### Minor Fixes
- Replaced empty interfaces with type aliases to resolve
`@typescript-eslint/no-empty-object-type` warnings
- Fixed unsafe non-null assertions with proper null checks
- Removed `@ts-ignore` comments in favour of proper type casting ( _when
possible_ 🙏🏽 )

### Google Analytics Component
- Changed Next.js Script strategy from `beforeInteractive` to
`afterInteractive` to resolve Next.js warnings
- this makes sure loading analytics does not block page render 🙏🏽 (
_better page load time_ )

### Are these changes safe?

As long as the TypeScript compiler does not complain ( check the
`type-check` job ) we should be safe. Most changes remove unused
code; if that code were used somewhere else, the compiler should
catch it and tell us 🫶

I also typed some code when possible, or bypassed the linter when I
thought it was fair for now. I disabled a couple of ESLint rules, most
importantly `no-explicit-any`, as we still have loads of untyped code
( _this should improve once API types are generated for us_ ).

### DX

Added some settings in the `.vscode` folder 📁 so that files will be
formatted on save and also ESLint will fix errors on save when able 💯

### 📈 **Result:**

-  All linting errors resolved
-  Improved TypeScript strict mode compliance  
-  Better developer experience with cleaner code

## Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x]  I have tested my changes according to the test plan:
  - [x] Lint CI job passes
- [x] There are no type errors ( _TS will catch issues related to these
changes_ )
2025-06-17 14:29:21 +00:00
Reinier van der Leer
b477d31641 fix(backend): Unbreak add_store_agent_to_library (#10166)
- Follow-up fix for #9786

A change to a DB statement introduced in #9786 turns out to be breaking.
Apparently `connect` can't just be used for *some* relations: if it is
used, it must be used for *all* relations created by the statement.

### Changes 🏗️

- Fix broken DB statement in `add_store_agent_to_library(..)`

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Add store agent to library

Co-authored-by: Swifty <craigswift13@gmail.com>
2025-06-17 13:11:40 +02:00
Ubbe
2269e3593a chore(frontend): document icons on storybook (#10181)
## Changes 🏗️

### Checklist 📋

<img width="800" alt="Screenshot 2025-06-17 at 14 11 55"
src="https://github.com/user-attachments/assets/61d5a6b9-57f7-4117-bbc6-e78c2cdc5778"
/>

Document the icons for the new design system. With the design team, it
was agreed we will settle on [phosphor
icons](https://phosphoricons.com/), so we will need to migrate
progressively out of `lucide-react`.

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run Storybook locally
  - [x] Check the icons story and displays well
2025-06-17 10:39:28 +00:00
Zamil Majdy
97e72cb485 feat(backend): Make execution engine async-first (#10138)
This change introduces async execution for blocks and the execution
engine. Parallelism is achieved through single-process asynchronous
execution instead of process concurrency (see the sketch below).

### Changes 🏗️

* Support async execution for the graph executor
* Removed process creation for node execution
* Update all blocks to support async executions
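
A minimal sketch of the single-process concurrency model, with hypothetical `execute_node` / `execute_graph` names (the real executor schedules nodes with dependencies, credentials, and retries):

```python
import asyncio


async def execute_node(node_id: str) -> str:
    await asyncio.sleep(0.1)  # stands in for an async block execution
    return f"{node_id} done"


async def execute_graph(node_ids: list[str]) -> list[str]:
    # All ready nodes run concurrently on one event loop -- no per-node processes.
    return await asyncio.gather(*(execute_node(n) for n in node_ids))


if __name__ == "__main__":
    print(asyncio.run(execute_graph(["a", "b", "c"])))
```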

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Manual graph executions, tested many of the impacted blocks.
2025-06-17 09:38:24 +00:00
Nicholas Tindle
81d3eb7c34 feat(backend, frontend): make changes to use our security modules more effectively (#10123)
<!-- Clearly explain the need for these changes: -->
Doing the CASA Audit and this is something to check
### Changes 🏗️
- limits APIs to use their specific endpoints
- use expected trusted sources for each block and requests call
- Use cryptographically valid string comparisons
- Don't log secrets

<!-- Concisely describe all of the changes made in this pull request:
-->

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Testing in dev branch once merged

---------

Co-authored-by: Swifty <craigswift13@gmail.com>
2025-06-16 15:22:08 +00:00
Nicholas Tindle
f950f35af8 [Snyk] Security upgrade requests from 2.31.0 to 2.32.4 (#10148)
![snyk-top-banner](https://res.cloudinary.com/snyk/image/upload/r-d/scm-platform/snyk-pull-requests/pr-banner-default.svg)

### Snyk has created this PR to fix 1 vulnerability in the pip
dependencies of this project.

#### Snyk changed the following file(s):

- `docs/requirements.txt`



<details>
<summary>⚠️ <b>Warning</b></summary>

```
mkdocs-material 9.2.8 requires requests, which is not installed.
mkdocs-material 9.2.8 has requirement pymdown-extensions~=10.3, but you have pymdown-extensions 10.2.1.
```

</details>





---

> [!IMPORTANT]
>
> - Check the changes in this PR to ensure they won't cause issues with
your project.
> - Max score is 1000. Note that the real score may have changed since
the PR was raised.
> - This PR was automatically created by Snyk using the credentials of a
real user.
> - Some vulnerabilities couldn't be fully fixed and so Snyk will still
find them when the project is tested again. This may be because the
vulnerability existed within more than one direct dependency, but not
all of the affected dependencies could be upgraded.

---

**Note:** _You are seeing this because you or someone else with access
to this repository has authorized Snyk to open fix PRs._

For more information:
🧐 [View latest project
report](https://app.snyk.io/org/significant-gravitas/project/7c1b6d4c-2625-44c8-8403-42505b3997f8?utm_source&#x3D;github&amp;utm_medium&#x3D;referral&amp;page&#x3D;fix-pr)
📜 [Customise PR
templates](https://docs.snyk.io/scan-using-snyk/pull-requests/snyk-fix-pull-or-merge-requests/customize-pr-templates?utm_source=github&utm_content=fix-pr-template)
🛠 [Adjust project
settings](https://app.snyk.io/org/significant-gravitas/project/7c1b6d4c-2625-44c8-8403-42505b3997f8?utm_source&#x3D;github&amp;utm_medium&#x3D;referral&amp;page&#x3D;fix-pr/settings)
📚 [Read about Snyk's upgrade
logic](https://docs.snyk.io/scan-with-snyk/snyk-open-source/manage-vulnerabilities/upgrade-package-versions-to-fix-vulnerabilities?utm_source=github&utm_content=fix-pr-template)

---

**Learn how to fix vulnerabilities with free interactive lessons:**

🦉 [Learn about vulnerability in an interactive lesson of Snyk
Learn.](https://learn.snyk.io/?loc&#x3D;fix-pr)


---------

Co-authored-by: Swifty <craigswift13@gmail.com>
Co-authored-by: snyk-bot <snyk-bot@snyk.io>
2025-06-16 15:02:12 +00:00
Toran Bruce Richards
e05c34e76a fix(platform/backend): skip invalid graphs when listing in block menu (#10159)
<!-- Clearly explain the need for these changes: -->
## Background & Summary of Changes
If a user has a single invalid Agent in their Library (i.e. one with a
Block which doesn't exist), the Blocks menu currently does not return any
Agent results.

Valid agents should still load even when some stored graphs are
malformed. Malformed graphs should just be skipped rather than breaking
the entire process; this PR implements that fix, unblocking users with a
malformed Agent in their Library (me!).

## Testing
I have tested this PR in the dev deployment (where I have this issue on
my account) and have confirmed that Agents now show up in the list:

| Before this Change | After this Change |
| ------------------ | ----------------- |
| ![Before change
screenshot](https://github.com/user-attachments/assets/9263da25-ff4a-4dfa-bd96-19dfd689ddac)
| ![After change
screenshot](https://github.com/user-attachments/assets/86219055-b97b-456c-a270-80d729c909da)
|


## Changes 🏗️
- Validate each graph’s serialization in `get_graphs` and skip any that
raise an exception (see the sketch below)
- Added error logging for invalid graphs
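
A minimal sketch of the skip-on-error pattern, with a hypothetical pydantic `Graph` stand-in (the real `get_graphs` queries the database and validates the platform's graph model):

```python
import logging

from pydantic import BaseModel, ValidationError

logger = logging.getLogger(__name__)


class Graph(BaseModel):  # stand-in for the real graph model
    id: str
    name: str


def list_valid_graphs(raw_graphs: list[dict]) -> list[Graph]:
    graphs: list[Graph] = []
    for raw in raw_graphs:
        try:
            graphs.append(Graph.model_validate(raw))
        except ValidationError:
            # Skip the malformed graph instead of failing the whole listing.
            logger.exception("Skipping invalid graph %s", raw.get("id"))
    return graphs
```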

## Checklist 📋
For code changes:

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
    - [x] poetry run format
    - [ ] poetry run test
For configuration changes:
- [x] .env.example is updated or already compatible with my changes
- [x] docker-compose.yml is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under Changes)

Fixes [OPEN-2461: Loading a Library Agent with an invalid block causes
all Library Agent Loading to fail in Builder Blocks
Menu](https://linear.app/autogpt/issue/OPEN-2461/loading-a-library-agent-with-an-invalid-block-causes-all-library-agent)
2025-06-16 08:01:10 +00:00
Toran Bruce Richards
1ff924e260 Fix(frontend): Update StoreCard component to use bg-background instead of hardcoded bg-white (#9963)
Fixes #9868

This pull request updates the `StoreCard` component in
`autogpt_platform/frontend/src/components/agptui/StoreCard.tsx` to
replace the hardcoded Tailwind CSS class `bg-white` with the more
flexible `bg-background` utility class. This change ensures better
consistency with the application's theming and makes it easier to adapt
to different color schemes, such as light and dark modes.

#### Changes:
- **Before:**  
  `className="... bg-white ... dark:bg-transparent ..."`
 

![image](https://github.com/user-attachments/assets/9eb2b595-8712-405b-ba7d-babd2361e344)

- **After:**  
  `className="... bg-background ... dark:bg-transparent ..."`
  

![image](https://github.com/user-attachments/assets/58affa1b-7160-4961-b9f2-5fdc15c2439e)


#### Motivation:
- Removes the white background on the cards, which weren't part of the
designs.

No functional or visual changes are expected except for improved support
for custom themes.

---
This PR was entirely generated by an AI Agent.
**Please review and let me know if additional changes are needed!**

Co-authored-by: itsababseh <36419647+itsababseh@users.noreply.github.com>
2025-06-16 08:00:16 +00:00
Ubbe
fb18ddf95d feat(frontend): handle cross-tab login/logout + auth architecture refactor (#10150)
## 🏗️ Changes 

### 🧢 Authentication improvements
- Updates for [CASA compliance](https://appdefensealliance.dev/casa)
  - implemented cross-tab login/logout
  - logout now triggers cross-device logout 
  - forgot password triggers cross-device logout
- we are already able to revoke sessions given Supabase stores sessions
🙌🏽

### 📙 Cross-tab login/logout implementation

I implemented some session validation debouncing ( _2-second cooldown_ )
to prevent excessive API calls when switching tabs fast ( _more of an
edge-case but could happen_ ). The cross-tab implementation is done via
`localStorage` and window visibility events.

### Refactor and cleanup

Smol things to improve our auth logic on the Frontend:
- created `helpers.ts` with utilities for protected page detection,
admin page routing, and cross-tab communication
- added `STORAGE_KEYS`, `PROTECTED_PAGES`, and `ADMIN_PAGES` constants
for better organization
- refactored server-side Supabase utilities and middleware
- updated import paths to use named exports

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Cross-tab logout synchronization works correctly
  - [x] Session validation debouncing prevents excessive API calls
  - [x] Protected page redirects function properly
  - [x] Authentication state persists correctly across tabs
  - [x] Role-based access controls work as expected
  - [x] Cross-device logout is performed after forgot password change

### Cross-tab login/logout 


https://github.com/user-attachments/assets/5dbdd204-faa2-419f-b989-e31f69ddabd6

### Cross-device logout


https://github.com/user-attachments/assets/aac9c97a-beec-4519-a391-f94f988dc7c8
2025-06-13 15:28:03 +00:00
Ubbe
6e253ecade docs(frontend): add design system overview page (#10157)
### Changes 🏗️

<img width="800" alt="Screenshot 2025-06-13 at 18 29 27"
src="https://github.com/user-attachments/assets/6a2f9c23-860f-4f92-8a7a-eeb7839940fd"
/>

- Add a nice overview page for the 👶🏽 baby AutoGPT design system
- Customise the logo on Storybook to show AutoGPT one

### Checklist 📋

#### For code changes:

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run Storybook
  - [x] You see the Overview page which looks good
2025-06-13 15:10:32 +00:00
Ubbe
36d304f03f docs(frontend): document border radius tokens (#10158)
### Changes 🏗️

<img width="1761" alt="Screenshot 2025-06-13 at 18 40 50"
src="https://github.com/user-attachments/assets/d24a0350-a371-4067-9666-c3206aacce13"
/>

Document border radius tokens, which follow Tailwind default theme
radius scale 

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run storybook
  - [x] Open the Tokens / Border Radius story
  - [x] Verify makes sense
2025-06-13 15:08:54 +00:00
Ubbe
5dafc086fb docs(frontend): document spacing tokens (#10155)
### Changes 🏗️

<img width="800" alt="Screenshot 2025-06-13 at 18 42 54"
src="https://github.com/user-attachments/assets/c1ddffb4-6898-4e2e-8961-49857c0ce65a"
/>

<img width="800" alt="Screenshot 2025-06-13 at 18 01 27"
src="https://github.com/user-attachments/assets/22c5e305-a5ed-469f-916b-38e93aba7f98"
/>

Document spacing tokens, which follow Tailwind default theme spacing
scale 

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run storybook
  - [x] Open the Tokens / Spacing story
  - [x] Verify makes sense
2025-06-13 15:08:28 +00:00
Zamil Majdy
c109b676b8 fix(block): Invalid block input error on falsy non-null value 2025-06-13 00:39:55 -07:00
Dmitry
a259eac9ff feat(blocks): Add AI/ML API support to LLM blocks (#9996)
Hi! Taking over this feature from the previous author in
[#9163](https://github.com/Significant-Gravitas/AutoGPT/pull/9163).
This PR continues the work to integrate AI/ML API support into LLM
blocks, addressing pending review comments and ensuring compatibility
with the current codebase.

I’ve reviewed and fixed the outstanding issues from the previous PR.
Kindly recheck the previous concerns — let me know if anything still
needs improvement.

Previous description:

> Changes 🏗️
> 
> - Added basic functionality to enable users to send requests to our
models.
> - Added instructions for users on how to use the AI/ML API in AutoGPT.
> 
> Checklist 📋
> For code changes:
> 
> - [x] I have clearly listed my changes in the PR description
> - [x] I have made a test plan
> - [x] I have tested my changes according to the test plan:
> - [x] The API key has been successfully added and saved to the user's
profile.
> - [x] Sending requests to each model provided by us, enabling users to
test them in a project with various max_tokens parameter values and
other configurations.

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: Ivan <waterstark97@yandex.ru>
Co-authored-by: waterstark <84220220+waterstark@users.noreply.github.com>
Co-authored-by: Aarushi <50577581+aarushik93@users.noreply.github.com>
Co-authored-by: Reinier van der Leer <github@pwuts.nl>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
Co-authored-by: snyk-bot <snyk-bot@snyk.io>
Co-authored-by: Krzysztof Czerwinski <34861343+kcze@users.noreply.github.com>
Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
Co-authored-by: Krzysztof Czerwinski <kpczerwinski@gmail.com>
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
Co-authored-by: Bently <tomnoon9@gmail.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Ayush Mittal <130590402+Ayush-Mittal10@users.noreply.github.com>
Co-authored-by: Azraf Nahian <69325302+turboslapper@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <nicktindle@outlook.com>
Co-authored-by: Swifty <craigswift13@gmail.com>
Co-authored-by: Mario Sacaj <mariosacaj@gmail.com>
Co-authored-by: Toran Bruce Richards <toran.richards@gmail.com>
Co-authored-by: Abhimanyu Yadav <122007096+Abhi1992002@users.noreply.github.com>
Co-authored-by: Ritik Dutta <ritikduttagd@gmail.com>
Co-authored-by: Pratim Sadhu <pratimsadhu@icloud.com>
2025-06-11 17:46:44 +00:00
dependabot[bot]
2ab9cfdf79 chore(frontend/deps): Bump the production-dependencies group in /autogpt_platform/frontend with 3 updates (#10133)
Bumps the production-dependencies group in /autogpt_platform/frontend
with 3 updates:
[@sentry/nextjs](https://github.com/getsentry/sentry-javascript),
[@supabase/supabase-js](https://github.com/supabase/supabase-js) and
[zod](https://github.com/colinhacks/zod).

Updates `@sentry/nextjs` from 9.26.0 to 9.27.0
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/getsentry/sentry-javascript/releases"><code>@​sentry/nextjs</code>'s
releases</a>.</em></p>
<blockquote>
<h2>9.27.0</h2>
<ul>
<li>feat(node): Expand how vercel ai input/outputs can be set (<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/16455">#16455</a>)</li>
<li>feat(node): Switch to new semantic conventions for Vercel AI (<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/16476">#16476</a>)</li>
<li>feat(react-router): Add component annotation plugin (<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/16472">#16472</a>)</li>
<li>feat(react-router): Export wrappers for server loaders and actions
(<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/16481">#16481</a>)</li>
<li>fix(browser): Ignore unrealistically long INP values (<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/16484">#16484</a>)</li>
<li>fix(react-router): Conditionally add <code>ReactRouterServer</code>
integration (<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/16470">#16470</a>)</li>
</ul>
<h2>Bundle size 📦</h2>
<table>
<thead>
<tr>
<th>Path</th>
<th>Size</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>@​sentry/browser</code></td>
<td>23.43 KB</td>
</tr>
<tr>
<td><code>@​sentry/browser</code> - with treeshaking flags</td>
<td>23.2 KB</td>
</tr>
<tr>
<td><code>@​sentry/browser</code> (incl. Tracing)</td>
<td>37.46 KB</td>
</tr>
<tr>
<td><code>@​sentry/browser</code> (incl. Tracing, Replay)</td>
<td>74.68 KB</td>
</tr>
<tr>
<td><code>@​sentry/browser</code> (incl. Tracing, Replay) - with
treeshaking flags</td>
<td>67.94 KB</td>
</tr>
<tr>
<td><code>@​sentry/browser</code> (incl. Tracing, Replay with
Canvas)</td>
<td>79.33 KB</td>
</tr>
<tr>
<td><code>@​sentry/browser</code> (incl. Tracing, Replay, Feedback)</td>
<td>91.13 KB</td>
</tr>
<tr>
<td><code>@​sentry/browser</code> (incl. Feedback)</td>
<td>39.77 KB</td>
</tr>
<tr>
<td><code>@​sentry/browser</code> (incl. sendFeedback)</td>
<td>28.03 KB</td>
</tr>
<tr>
<td><code>@​sentry/browser</code> (incl. FeedbackAsync)</td>
<td>32.8 KB</td>
</tr>
<tr>
<td><code>@​sentry/react</code></td>
<td>25.15 KB</td>
</tr>
<tr>
<td><code>@​sentry/react</code> (incl. Tracing)</td>
<td>39.41 KB</td>
</tr>
<tr>
<td><code>@​sentry/vue</code></td>
<td>27.69 KB</td>
</tr>
<tr>
<td><code>@​sentry/vue</code> (incl. Tracing)</td>
<td>39.27 KB</td>
</tr>
<tr>
<td><code>@​sentry/svelte</code></td>
<td>23.45 KB</td>
</tr>
<tr>
<td>CDN Bundle</td>
<td>24.88 KB</td>
</tr>
<tr>
<td>CDN Bundle (incl. Tracing)</td>
<td>37.63 KB</td>
</tr>
<tr>
<td>CDN Bundle (incl. Tracing, Replay)</td>
<td>72.66 KB</td>
</tr>
<tr>
<td>CDN Bundle (incl. Tracing, Replay, Feedback)</td>
<td>77.99 KB</td>
</tr>
<tr>
<td>CDN Bundle - uncompressed</td>
<td>72.67 KB</td>
</tr>
<tr>
<td>CDN Bundle (incl. Tracing) - uncompressed</td>
<td>111.42 KB</td>
</tr>
<tr>
<td>CDN Bundle (incl. Tracing, Replay) - uncompressed</td>
<td>222.72 KB</td>
</tr>
<tr>
<td>CDN Bundle (incl. Tracing, Replay, Feedback) - uncompressed</td>
<td>235.25 KB</td>
</tr>
<tr>
<td><code>@​sentry/nextjs</code> (client)</td>
<td>41.03 KB</td>
</tr>
<tr>
<td><code>@​sentry/sveltekit</code> (client)</td>
<td>37.93 KB</td>
</tr>
<tr>
<td><code>@​sentry/node</code></td>
<td>146.75 KB</td>
</tr>
<tr>
<td><code>@​sentry/node</code> - without tracing</td>
<td>96.03 KB</td>
</tr>
<tr>
<td><code>@​sentry/aws-serverless</code></td>
<td>121.19 KB</td>
</tr>
</tbody>
</table>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/getsentry/sentry-javascript/blob/develop/CHANGELOG.md"><code>@​sentry/nextjs</code>'s
changelog</a>.</em></p>
<blockquote>
<h2>9.27.0</h2>
<ul>
<li>feat(node): Expand how vercel ai input/outputs can be set (<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/16455">#16455</a>)</li>
<li>feat(node): Switch to new semantic conventions for Vercel AI (<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/16476">#16476</a>)</li>
<li>feat(react-router): Add component annotation plugin (<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/16472">#16472</a>)</li>
<li>feat(react-router): Export wrappers for server loaders and actions
(<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/16481">#16481</a>)</li>
<li>fix(browser): Ignore unrealistically long INP values (<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/16484">#16484</a>)</li>
<li>fix(react-router): Conditionally add <code>ReactRouterServer</code>
integration (<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/16470">#16470</a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="650abf323b"><code>650abf3</code></a>
release: 9.27.0</li>
<li><a
href="5a672c90ea"><code>5a672c9</code></a>
Merge pull request <a
href="https://redirect.github.com/getsentry/sentry-javascript/issues/16489">#16489</a>
from getsentry/prepare-release/9.27.0</li>
<li><a
href="9d1c05ecf3"><code>9d1c05e</code></a>
meta(changelog): Update changelog for 9.27.0</li>
<li><a
href="6d61be0337"><code>6d61be0</code></a>
fix(browser): Ignore unrealistically long INP values (<a
href="https://redirect.github.com/getsentry/sentry-javascript/issues/16484">#16484</a>)</li>
<li><a
href="0a7b915fd7"><code>0a7b915</code></a>
ref(vue): Clarify Vue tracing (<a
href="https://redirect.github.com/getsentry/sentry-javascript/issues/16487">#16487</a>)</li>
<li><a
href="b1fd4a1d47"><code>b1fd4a1</code></a>
test(vue): Add tests for Vue tracing mixins (<a
href="https://redirect.github.com/getsentry/sentry-javascript/issues/16486">#16486</a>)</li>
<li><a
href="497b76e23a"><code>497b76e</code></a>
feat(react-router): Add component annotation plugin (<a
href="https://redirect.github.com/getsentry/sentry-javascript/issues/16472">#16472</a>)</li>
<li><a
href="cfa8d41aa2"><code>cfa8d41</code></a>
feat(react-router): Export wrappers for server loaders and actions (<a
href="https://redirect.github.com/getsentry/sentry-javascript/issues/16481">#16481</a>)</li>
<li><a
href="bfe5e888b1"><code>bfe5e88</code></a>
feat(node): Expand how vercel ai input/outputs can be set (<a
href="https://redirect.github.com/getsentry/sentry-javascript/issues/16455">#16455</a>)</li>
<li><a
href="45088a2ab7"><code>45088a2</code></a>
feat(node): Switch to new semantic conventions for Vercel AI (<a
href="https://redirect.github.com/getsentry/sentry-javascript/issues/16476">#16476</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/getsentry/sentry-javascript/compare/9.26.0...9.27.0">compare
view</a></li>
</ul>
</details>
<br />

Updates `@supabase/supabase-js` from 2.49.10 to 2.50.0
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/supabase/supabase-js/releases"><code>@​supabase/supabase-js</code>'s
releases</a>.</em></p>
<blockquote>
<h2>v2.50.0</h2>
<h1><a
href="https://github.com/supabase/supabase-js/compare/v2.49.10...v2.50.0">2.50.0</a>
(2025-06-06)</h1>
<h3>Bug Fixes</h3>
<ul>
<li>add <code>@​solana/wallet-standard-features</code> as dev dependency
(<a
href="https://redirect.github.com/supabase/supabase-js/issues/1454">#1454</a>)
(<a
href="146c822768">146c822</a>)</li>
</ul>
<h3>Features</h3>
<ul>
<li>bump auth-js to v2.70.0 (<a
href="https://redirect.github.com/supabase/supabase-js/issues/1449">#1449</a>)
(<a
href="217473f6b4">217473f</a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="146c822768"><code>146c822</code></a>
fix: add <code>@​solana/wallet-standard-features</code> as dev
dependency (<a
href="https://redirect.github.com/supabase/supabase-js/issues/1454">#1454</a>)</li>
<li><a
href="217473f6b4"><code>217473f</code></a>
feat: bump auth-js to v2.70.0 (<a
href="https://redirect.github.com/supabase/supabase-js/issues/1449">#1449</a>)</li>
<li>See full diff in <a
href="https://github.com/supabase/supabase-js/compare/v2.49.10...v2.50.0">compare
view</a></li>
</ul>
</details>
<br />

Updates `zod` from 3.25.51 to 3.25.56
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/colinhacks/zod/releases">zod's
releases</a>.</em></p>
<blockquote>
<h2>v3.25.56</h2>
<h2>Commits:</h2>
<ul>
<li>64bfb7001cf6f2575bf38b5e6130bc73b4b0e371 3.25.56</li>
</ul>
<h2>v3.25.55</h2>
<h2>Commits:</h2>
<ul>
<li>44141ea1dbd48403f14704386119884aeda5cb27 3.25.55</li>
</ul>
<h2>v3.25.54</h2>
<h2>Commits:</h2>
<ul>
<li>8ab237423cd8fdca58dc9e18f45d48d56ca2a24d fix(util): cross realm
IsPlainObject check (<a
href="https://redirect.github.com/colinhacks/zod/issues/4627">#4627</a>)</li>
<li>2be1c6ad909a9d0598d9f45fedc9038213130529 Fix generic assignability
issue. 3.25.54</li>
</ul>
<h2>v3.25.53</h2>
<h2>Commits:</h2>
<ul>
<li>a6adb148012f59d734245c637a577ed413a484e7 zod mini internals (<a
href="https://redirect.github.com/colinhacks/zod/issues/4631">#4631</a>)</li>
<li>da4f92170ac838029178c4622015dbdae4a1de7c 3.25.53</li>
</ul>
<h2>v3.25.52</h2>
<h2>Commits:</h2>
<ul>
<li>2954f40a4e41f61e835ba211ff084467dca1f41e Fix json (<a
href="https://redirect.github.com/colinhacks/zod/issues/4630">#4630</a>)</li>
<li>51dc6f9361851e64a925c3f4ee9364ce4da4c4e7 3.25.52</li>
<li>e479ea76ae1571064c3dade621b3af0ea2dff942 Add test cast for deferred
self-recursion</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="64bfb7001c"><code>64bfb70</code></a>
3.25.56</li>
<li><a
href="44141ea1db"><code>44141ea</code></a>
3.25.55</li>
<li><a
href="2be1c6ad90"><code>2be1c6a</code></a>
Fix generic assignability issue. 3.25.54</li>
<li><a
href="8ab237423c"><code>8ab2374</code></a>
fix(util): cross realm IsPlainObject check (<a
href="https://redirect.github.com/colinhacks/zod/issues/4627">#4627</a>)</li>
<li><a
href="da4f92170a"><code>da4f921</code></a>
3.25.53</li>
<li><a
href="a6adb14801"><code>a6adb14</code></a>
zod mini internals (<a
href="https://redirect.github.com/colinhacks/zod/issues/4631">#4631</a>)</li>
<li><a
href="e479ea76ae"><code>e479ea7</code></a>
Add test cast for deferred self-recursion</li>
<li><a
href="51dc6f9361"><code>51dc6f9</code></a>
3.25.52</li>
<li><a
href="2954f40a4e"><code>2954f40</code></a>
Fix json (<a
href="https://redirect.github.com/colinhacks/zod/issues/4630">#4630</a>)</li>
<li>See full diff in <a
href="https://github.com/colinhacks/zod/compare/v3.25.51...v3.25.56">compare
view</a></li>
</ul>
</details>
<br />


Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---


Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Ubbe <hi@ubbe.dev>
Co-authored-by: Swifty <craigswift13@gmail.com>
2025-06-11 11:10:06 +00:00
Zamil Majdy
796f896042 fix(backend): execution UI did not receive completed / failed execution update (#10149)
<img width="1410" alt="image"
src="https://github.com/user-attachments/assets/bce407a2-96a1-42e9-9772-b49b8f20886c"
/>


### Changes 🏗️

Add the missing `send execution update` call on completed/failed status
changes for node executions.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Screenshot attached
2025-06-11 07:35:51 +00:00
Ubbe
8028a766b1 fix(frontend): account menu still showing after logout (#10143)
### Changes 🏗️

<img width="800" alt="Screenshot 2025-06-10 at 14 21 48"
src="https://github.com/user-attachments/assets/d0dba02d-049d-446c-9a25-0f7cec9108bc"
/>

When logging out, I'm redirected to the `/login` page but the account
menu is still visible (even though I'm no longer logged in). Refreshing
the page fixes it.

The problem was that `supabase.logOut()` is a client-side action, while
`<Navbar />` is an RSC that fetches the session on the server to render
its state, so it doesn't know the session was invalidated on the client.
`router.refresh()` solves the issue by forcing the RSCs on the page to
refetch their state and re-render server-side. Further reading
[here](https://nextjs.org/docs/app/deep-dive/caching#invalidation-1).

I also improved the UX by awaiting the promise and displaying a spinner
while the logout action is in progress. If logout fails ( _should be very
rare_ ) a toast is displayed so the user isn't left hanging, wondering
what happened.
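
A minimal sketch of the resulting flow, assuming a Supabase-like client
(the hook and handler names here are illustrative, not this PR's exact code):

```tsx
// Illustrative sketch — names are hypothetical, not the PR's exact code.
import { useState } from "react";
import { useRouter } from "next/navigation";

type SupabaseLike = {
  auth: { signOut: () => Promise<{ error: Error | null }> };
};

export function useLogout(supabase: SupabaseLike) {
  const router = useRouter();
  const [isLoggingOut, setIsLoggingOut] = useState(false);

  async function logout() {
    setIsLoggingOut(true);
    const { error } = await supabase.auth.signOut(); // client-side action
    setIsLoggingOut(false);
    if (error) {
      // Show a toast here so the user isn't left wondering what happened.
      return;
    }
    router.push("/login");
    router.refresh(); // force RSCs (e.g. <Navbar />) to refetch the invalidated session
  }

  return { logout, isLoggingOut };
}
```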

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Login
  - [x] Open account menu
  - [x] Click `Logout`
  - [x] I see a spinner while the action is happening
- [x] I'm redirected to `/login` and I no longer see the account menu
2025-06-11 04:49:56 +00:00
Bently
1e89bf5c37 feat(blocks): add veo3 to ai video generator (#10144)
### Changes 🏗️

This adds "fal-ai/veo3" to the ``FalModel`` enum in the
``ai_video_generator.py`` file. Veo3 always generates videos with audio,
so ``generate_audio=True`` is set automatically whenever veo3 is selected.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Test the veo3 model via Fal.ai and it should work.
2025-06-10 23:31:50 +00:00
Bently
2e96da36c2 feat(blocks): Drop the price of chatgpt o3 model (#10145)
### Changes 🏗️

Today OpenAI dropped the prices of the o3 model, so this simply drops the
price from 7 to 4.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
- [x] Run the platform with the new price, check the o3 model in the AI
text generator block, and see it's cheaper to use
2025-06-10 23:31:01 +00:00
dependabot[bot]
de83c35c5f chore(frontend/deps-dev): Bump the development-dependencies group in /autogpt_platform/frontend with 5 updates (#10135)
Bumps the development-dependencies group in /autogpt_platform/frontend
with 5 updates:

| Package | From | To |
| --- | --- | --- |
| [@storybook/test-runner](https://github.com/storybookjs/test-runner) |
`0.22.0` | `0.22.1` |
|
[@types/negotiator](https://github.com/DefinitelyTyped/DefinitelyTyped/tree/HEAD/types/negotiator)
| `0.6.3` | `0.6.4` |
|
[@types/node](https://github.com/DefinitelyTyped/DefinitelyTyped/tree/HEAD/types/node)
| `22.15.29` | `22.15.30` |
| [msw](https://github.com/mswjs/msw) | `2.9.0` | `2.10.2` |
|
[msw-storybook-addon](https://github.com/mswjs/msw-storybook-addon/tree/HEAD/packages/msw-addon)
| `2.0.4` | `2.0.5` |

Updates `@storybook/test-runner` from 0.22.0 to 0.22.1
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/storybookjs/test-runner/releases"><code>@​storybook/test-runner</code>'s
releases</a>.</em></p>
<blockquote>
<h2>v0.22.1</h2>
<h4>🐛 Bug Fix</h4>
<ul>
<li>Patch: Add telemetry to test run <a
href="https://redirect.github.com/storybookjs/test-runner/pull/565">#565</a>
(<a href="https://github.com/yannbf"><code>@​yannbf</code></a>)</li>
</ul>
<h4>Authors: 1</h4>
<ul>
<li>Yann Braga (<a
href="https://github.com/yannbf"><code>@​yannbf</code></a>)</li>
</ul>
<h2>v0.22.1-next.0</h2>
<h4>🐛 Bug Fix</h4>
<ul>
<li>Replace <code>@​storybook/csf</code> with storybook's internal csf
implementation <a
href="https://redirect.github.com/storybookjs/test-runner/pull/556">#556</a>
(<a href="https://github.com/yannbf"><code>@​yannbf</code></a>)</li>
</ul>
<h4>Authors: 1</h4>
<ul>
<li>Yann Braga (<a
href="https://github.com/yannbf"><code>@​yannbf</code></a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/storybookjs/test-runner/blob/v0.22.1/CHANGELOG.md"><code>@​storybook/test-runner</code>'s
changelog</a>.</em></p>
<blockquote>
<h1>v0.22.1 (Sat Jun 07 2025)</h1>
<h4>🐛 Bug Fix</h4>
<ul>
<li>Patch: Add telemetry to test run <a
href="https://redirect.github.com/storybookjs/test-runner/pull/565">#565</a>
(<a href="https://github.com/yannbf"><code>@​yannbf</code></a>)</li>
</ul>
<h4>Authors: 1</h4>
<ul>
<li>Yann Braga (<a
href="https://github.com/yannbf"><code>@​yannbf</code></a>)</li>
</ul>
<hr />
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="ed37196132"><code>ed37196</code></a>
Bump version to: 0.22.1 [skip ci]</li>
<li><a
href="df34390ef7"><code>df34390</code></a>
Update CHANGELOG.md [skip ci]</li>
<li><a
href="0b60c084fe"><code>0b60c08</code></a>
Merge pull request <a
href="https://redirect.github.com/storybookjs/test-runner/issues/565">#565</a>
from storybookjs/telemetry-main</li>
<li><a
href="4baeaf3d3d"><code>4baeaf3</code></a>
Add telemetry to test run</li>
<li>See full diff in <a
href="https://github.com/storybookjs/test-runner/compare/v0.22.0...v0.22.1">compare
view</a></li>
</ul>
</details>
<br />

Updates `@types/negotiator` from 0.6.3 to 0.6.4
<details>
<summary>Commits</summary>
<ul>
<li>See full diff in <a
href="https://github.com/DefinitelyTyped/DefinitelyTyped/commits/HEAD/types/negotiator">compare
view</a></li>
</ul>
</details>
<br />

Updates `@types/node` from 22.15.29 to 22.15.30
<details>
<summary>Commits</summary>
<ul>
<li>See full diff in <a
href="https://github.com/DefinitelyTyped/DefinitelyTyped/commits/HEAD/types/node">compare
view</a></li>
</ul>
</details>
<br />

Updates `msw` from 2.9.0 to 2.10.2
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/mswjs/msw/releases">msw's
releases</a>.</em></p>
<blockquote>
<h2>v2.10.2 (2025-06-09)</h2>
<h3>Bug Fixes</h3>
<ul>
<li><strong>TypeScript:</strong> support <code>Response.error()</code>
and <code>HttpResponse.error()</code> as mocked responses (<a
href="https://redirect.github.com/mswjs/msw/issues/2132">#2132</a>)
(72cc8ddac8f030f747b674148b03e5a025e412d2) <a
href="https://github.com/jacquesg"><code>@​jacquesg</code></a> <a
href="https://github.com/kettanaito"><code>@​kettanaito</code></a></li>
</ul>
<h2>v2.10.1 (2025-06-07)</h2>
<h3>Bug Fixes</h3>
<ul>
<li>update <code>@mswjs/interceptors</code> to support WebSocket server
protocol (<a
href="https://redirect.github.com/mswjs/msw/issues/2528">#2528</a>)
(6704fa042a3eaa71b68eb7b9028a7464b2b30cef) <a
href="https://github.com/kettanaito"><code>@​kettanaito</code></a></li>
</ul>
<h2>v2.10.0 (2025-06-07)</h2>
<h3>Features</h3>
<ul>
<li><strong>WebSocketHandler:</strong> add <code>run</code> method (<a
href="https://redirect.github.com/mswjs/msw/issues/2527">#2527</a>)
(94fc78ea50bd8c3334945d3047650c8b82c2f754) <a
href="https://github.com/kettanaito"><code>@​kettanaito</code></a></li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="a30cdf591e"><code>a30cdf5</code></a>
chore(release): v2.10.2</li>
<li><a
href="72cc8ddac8"><code>72cc8dd</code></a>
fix(TypeScript): support <code>Response.error()</code> and
<code>HttpResponse.error()</code> as moc...</li>
<li><a
href="d38097f162"><code>d38097f</code></a>
chore(release): v2.10.1</li>
<li><a
href="6704fa042a"><code>6704fa0</code></a>
fix: update <code>@mswjs/interceptors</code> to support WebSocket server
protocol (<a
href="https://redirect.github.com/mswjs/msw/issues/2528">#2528</a>)</li>
<li><a
href="dce459e32c"><code>dce459e</code></a>
chore(release): v2.10.0</li>
<li><a
href="94fc78ea50"><code>94fc78e</code></a>
feat(WebSocketHandler): add <code>run</code> method (<a
href="https://redirect.github.com/mswjs/msw/issues/2527">#2527</a>)</li>
<li><a
href="ca9d87768e"><code>ca9d877</code></a>
chore: run preview release on all pull requests (<a
href="https://redirect.github.com/mswjs/msw/issues/2526">#2526</a>)</li>
<li>See full diff in <a
href="https://github.com/mswjs/msw/compare/v2.9.0...v2.10.2">compare
view</a></li>
</ul>
</details>
<br />

Updates `msw-storybook-addon` from 2.0.4 to 2.0.5
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/mswjs/msw-storybook-addon/releases">msw-storybook-addon's
releases</a>.</em></p>
<blockquote>
<h2>v2.0.5</h2>
<h4>🐛 Bug Fix</h4>
<ul>
<li>fix: updates types to increase compatibility with storybook types <a
href="https://redirect.github.com/mswjs/msw-storybook-addon/pull/169">#169</a>
(<a
href="https://github.com/rhuanbarreto"><code>@​rhuanbarreto</code></a>)</li>
</ul>
<h4>Authors: 1</h4>
<ul>
<li>Rhuan Barreto (<a
href="https://github.com/rhuanbarreto"><code>@​rhuanbarreto</code></a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/mswjs/msw-storybook-addon/blob/main/packages/msw-addon/CHANGELOG.md">msw-storybook-addon's
changelog</a>.</em></p>
<blockquote>
<h1>v2.0.5 (Thu Jun 05 2025)</h1>
<h4>🐛 Bug Fix</h4>
<ul>
<li>fix: updates types to increase compatibility with storybook types <a
href="https://redirect.github.com/mswjs/msw-storybook-addon/pull/169">#169</a>
(<a
href="https://github.com/rhuanbarreto"><code>@​rhuanbarreto</code></a>)</li>
</ul>
<h4>Authors: 1</h4>
<ul>
<li>Rhuan Barreto (<a
href="https://github.com/rhuanbarreto"><code>@​rhuanbarreto</code></a>)</li>
</ul>
<hr />
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="ec35e9371f"><code>ec35e93</code></a>
Update CHANGELOG.md [skip ci]</li>
<li><a
href="d67ffc6b26"><code>d67ffc6</code></a>
fix: updates types to increase compatibility with storybook types (<a
href="https://github.com/mswjs/msw-storybook-addon/tree/HEAD/packages/msw-addon/issues/169">#169</a>)</li>
<li>See full diff in <a
href="https://github.com/mswjs/msw-storybook-addon/commits/v2.0.5/packages/msw-addon">compare
view</a></li>
</ul>
</details>
<br />


Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)


Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Swifty <craigswift13@gmail.com>
Co-authored-by: Lluis Agusti <hi@llu.lu>
Co-authored-by: Ubbe <hi@ubbe.dev>
2025-06-10 15:52:49 +00:00
dependabot[bot]
450c1ee668 chore(frontend/deps): Bump @hookform/resolvers from 3.10.0 to 5.1.1 in /autogpt_platform/frontend (#10134)
Bumps
[@hookform/resolvers](https://github.com/react-hook-form/resolvers) from
3.10.0 to 5.1.1.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/react-hook-form/resolvers/releases"><code>@​hookform/resolvers</code>'s
releases</a>.</em></p>
<blockquote>
<h2>v5.1.1</h2>
<h2><a
href="https://github.com/react-hook-form/resolvers/compare/v5.1.0...v5.1.1">5.1.1</a>
(2025-06-09)</h2>
<h3>Bug Fixes</h3>
<ul>
<li>zod peer dep issue (<a
href="https://redirect.github.com/react-hook-form/resolvers/issues/780">#780</a>)
(<a
href="79cd8b284d">79cd8b2</a>)</li>
</ul>
<h2>v5.1.0</h2>
<h1><a
href="https://github.com/react-hook-form/resolvers/compare/v5.0.1...v5.1.0">5.1.0</a>
(2025-06-07)</h1>
<h3>Features</h3>
<ul>
<li>support Zod 4, Zod v4 mini, and retains compatibility with Zod v3.
(<a
href="https://redirect.github.com/react-hook-form/resolvers/issues/777">#777</a>)
(<a
href="8d083bd5f5">8d083bd</a>)</li>
</ul>
<h2>v5.0.1</h2>
<h2><a
href="https://github.com/react-hook-form/resolvers/compare/v5.0.0...v5.0.1">5.0.1</a>
(2025-04-02)</h2>
<h3>Bug Fixes</h3>
<ul>
<li>relax version constraint for react-hook-form 7.55.0 → ^7.55.0 (<a
href="https://redirect.github.com/react-hook-form/resolvers/issues/758">#758</a>)
(<a
href="6e8839343d">6e88393</a>)</li>
</ul>
<h2>v5.0.0</h2>
<h1><a
href="https://github.com/react-hook-form/resolvers/compare/v4.1.3...v5.0.0">5.0.0</a>
(2025-04-01)</h1>
<h3>Features</h3>
<ul>
<li>infer input/output types from schema (<a
href="https://redirect.github.com/react-hook-form/resolvers/issues/753">#753</a>)
(<a
href="6124c59a99">6124c59</a>)</li>
</ul>
<h3>BREAKING CHANGES</h3>
<ul>
<li>Requires react-hook-form@7.55.0 or higher</li>
</ul>
<p><strong>Before</strong>
Prior to V5, some projects used manual types like</p>
<pre lang="tsx"><code>useForm&lt;FormValues&gt;();
</code></pre>
<p><strong>After</strong>
With V5, the correct approach is:</p>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="79cd8b284d"><code>79cd8b2</code></a>
fix: zod peer dep issue (<a
href="https://redirect.github.com/react-hook-form/resolvers/issues/780">#780</a>)</li>
<li><a
href="8d083bd5f5"><code>8d083bd</code></a>
feat: support Zod 4 (<a
href="https://redirect.github.com/react-hook-form/resolvers/issues/777">#777</a>)</li>
<li><a
href="3bc2ad50a6"><code>3bc2ad5</code></a>
docs: fix table formatting (<a
href="https://redirect.github.com/react-hook-form/resolvers/issues/774">#774</a>)</li>
<li><a
href="6e8839343d"><code>6e88393</code></a>
fix: relax version constraint for react-hook-form 7.55.0 → ^7.55.0 (<a
href="https://redirect.github.com/react-hook-form/resolvers/issues/758">#758</a>)</li>
<li><a
href="a54d05a9a2"><code>a54d05a</code></a>
Merge branch 'dev'</li>
<li><a
href="6124c59a99"><code>6124c59</code></a>
feat: infer input/output types from schema (<a
href="https://redirect.github.com/react-hook-form/resolvers/issues/753">#753</a>)</li>
<li><a
href="50dd4add92"><code>50dd4ad</code></a>
fix: escape square brackets in field name regex pattern (<a
href="https://redirect.github.com/react-hook-form/resolvers/issues/752">#752</a>)</li>
<li><a
href="ded1746ee8"><code>ded1746</code></a>
fix(standard-schema): move <code>@​standard-schema/utils</code> to
dependencies (<a
href="https://redirect.github.com/react-hook-form/resolvers/issues/748">#748</a>)</li>
<li><a
href="8ffada0c7a"><code>8ffada0</code></a>
fix(standard-schema): Propertly handle object path segments (<a
href="https://redirect.github.com/react-hook-form/resolvers/issues/746">#746</a>)</li>
<li><a
href="8ea953c84d"><code>8ea953c</code></a>
update README.md</li>
<li>Additional commits viewable in <a
href="https://github.com/react-hook-form/resolvers/compare/v3.10.0...v5.1.1">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=@hookform/resolvers&package-manager=npm_and_yarn&previous-version=3.10.0&new-version=5.1.1)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-10 15:51:19 +00:00
Ubbe
5385520c53 feat(frontend): document typography tokens + Text component (#10132)
### Changes 🏗️

<img width="700" alt="Screenshot 2025-06-09 at 17 01 59"
src="https://github.com/user-attachments/assets/f2b0a3a6-fdf1-4e3e-9caa-d2bf03543dab"
/>

<img width="700" alt="Screenshot 2025-06-09 at 17 02 06"
src="https://github.com/user-attachments/assets/36e27a0b-07f2-4074-8628-cb236d75e4c4"
/>

This PR introduces a comprehensive Typography System for our design
system with improved documentation and developer experience [matching
what we have on
Figma](https://www.figma.com/design/nO9NFynNuicLtkiwvOxrbz/AutoGPT-Design-System?m=dev).

#### **Typography System**

- Created `<Text />` component
- Enforce its usage to ensure consistent typographic styles across the
app
```tsx
<Text variant="h1">Heading 1</Text>
<Text variant="h2">Heading 2</Text>
<Text variant="body">hello world</Text>
<Text variant="small">smol text</Text>
```
- Created `Typography.stories.tsx` on Storybook
- Complete typography overview with font showcases and usage guidelines

#### **Storybook Improvements**
- **Updated TypeScript docgen** configuration for better prop extraction
- **Cleaned up story rendering** to prevent MDX styling pollution
- **Split large story files** into focused, maintainable components

### Checklist 📋

#### For code changes:

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  
**Test Plan:**
- [x] Typography stories render correctly in Storybook
- [x] All Text component variants display properly
- [x] Interactive playground controls function correctly
- [x] No TypeScript or linting errors

---------

Co-authored-by: Swifty <craigswift13@gmail.com>
2025-06-10 14:57:13 +00:00
Zamil Majdy
210d457ecd feat(executor): Improve execution ordering to allow depth-first execution (#10142)
Allowing depth-first execution unlocks lower processing latency and a
better sense of progress.

<img width="950" alt="image"
src="https://github.com/user-attachments/assets/e2a0e11a-8bc5-4a65-a10d-b5b6c6383354"
/>


### Changes 🏗️

* Prioritize adding a new execution over processing execution output.
* Make sure to enqueue each node once when processing output, instead of
draining a single node and moving on (see the sketch below).
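
A tiny TypeScript sketch of the ordering idea (the actual executor is
Python; the types here are made up for illustration): LIFO ordering means
a node's downstream work is picked up before sibling branches, while each
downstream node is still enqueued once per output pass.

```ts
// Hypothetical shapes — the real executor's data model differs.
type NodeExec = { nodeId: string; downstream: NodeExec[] };

function runDepthFirst(start: NodeExec, execute: (n: NodeExec) => void) {
  const stack: NodeExec[] = [start]; // LIFO: newly added executions run first
  while (stack.length > 0) {
    const current = stack.pop()!;
    execute(current);
    // Enqueue each downstream node once per pass, rather than fully
    // draining a single node's outputs before moving on.
    for (const next of current.downstream) {
      stack.push(next);
    }
  }
}
```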

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Run company follower count finder agent.

---------

Co-authored-by: Swifty <craigswift13@gmail.com>
2025-06-10 12:41:31 +00:00
dependabot[bot]
f9b37d2693 chore(frontend/deps-dev): Bump @chromatic-com/storybook from 3.2.4 to 3.2.6 in /autogpt_platform/frontend (#10137)
Bumps
[@chromatic-com/storybook](https://github.com/chromaui/addon-visual-tests)
from 3.2.4 to 3.2.6.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/chromaui/addon-visual-tests/releases"><code>@​chromatic-com/storybook</code>'s
releases</a>.</em></p>
<blockquote>
<h2>v3.2.6</h2>
<h4>🐛 Bug Fix</h4>
<ul>
<li>Fix SSO url <a
href="https://redirect.github.com/chromaui/addon-visual-tests/pull/363">#363</a>
(<a
href="https://github.com/kasperpeulen"><code>@​kasperpeulen</code></a>)</li>
</ul>
<h4>Authors: 1</h4>
<ul>
<li>Kasper Peulen (<a
href="https://github.com/kasperpeulen"><code>@​kasperpeulen</code></a>)</li>
</ul>
<h2>v3.2.6-next.0</h2>
<h4>⚠️ Pushed to <code>next</code></h4>
<ul>
<li>cleanup (<a
href="https://github.com/ndelangen"><code>@​ndelangen</code></a>)</li>
<li>&quot;fix&quot; typeissues (<a
href="https://github.com/ndelangen"><code>@​ndelangen</code></a>)</li>
<li>ignore .js files (<a
href="https://github.com/ndelangen"><code>@​ndelangen</code></a>)</li>
<li>ignore (<a
href="https://github.com/ndelangen"><code>@​ndelangen</code></a>)</li>
<li>fix (<a
href="https://github.com/ndelangen"><code>@​ndelangen</code></a>)</li>
<li>fix typing issue where the component now supports an array of arrays
(<a
href="https://github.com/ndelangen"><code>@​ndelangen</code></a>)</li>
<li>single quotes (<a
href="https://github.com/ndelangen"><code>@​ndelangen</code></a>)</li>
<li>fix linting &amp; upgrade (<a
href="https://github.com/ndelangen"><code>@​ndelangen</code></a>)</li>
<li>upgrade to sb9 alpha (<a
href="https://github.com/ndelangen"><code>@​ndelangen</code></a>)</li>
<li>add canary support (<a
href="https://github.com/ndelangen"><code>@​ndelangen</code></a>)</li>
<li>restrict version range to 9.0.0 ONLY (<a
href="https://github.com/ndelangen"><code>@​ndelangen</code></a>)</li>
<li>add 9.0.0-alpha compatibility (<a
href="https://github.com/ndelangen"><code>@​ndelangen</code></a>)</li>
</ul>
<h4>Authors: 1</h4>
<ul>
<li>Norbert de Langen (<a
href="https://github.com/ndelangen"><code>@​ndelangen</code></a>)</li>
</ul>
<h2>v3.2.5</h2>
<h4>🐛 Bug Fix</h4>
<ul>
<li>Debug release <a
href="https://redirect.github.com/chromaui/addon-visual-tests/pull/357">#357</a>
(<a
href="https://github.com/kasperpeulen"><code>@​kasperpeulen</code></a>)</li>
<li>Remove the connection timeout notification <a
href="https://redirect.github.com/chromaui/addon-visual-tests/pull/351">#351</a>
(<a
href="https://github.com/valentinpalkovic"><code>@​valentinpalkovic</code></a>)</li>
<li>Set up Codecov <a
href="https://redirect.github.com/chromaui/addon-visual-tests/pull/350">#350</a>
(<a
href="https://github.com/paulelliott"><code>@​paulelliott</code></a>)</li>
</ul>
<h4>Authors: 3</h4>
<ul>
<li>Kasper Peulen (<a
href="https://github.com/kasperpeulen"><code>@​kasperpeulen</code></a>)</li>
<li>Paul Elliott (<a
href="https://github.com/paulelliott"><code>@​paulelliott</code></a>)</li>
<li>Valentin Palkovic (<a
href="https://github.com/valentinpalkovic"><code>@​valentinpalkovic</code></a>)</li>
</ul>
<h2>v3.2.5-next.0</h2>
<h4>🐛 Bug Fix</h4>
<ul>
<li>Remove the connection timeout notification <a
href="https://redirect.github.com/chromaui/addon-visual-tests/pull/351">#351</a>
(<a
href="https://github.com/valentinpalkovic"><code>@​valentinpalkovic</code></a>)</li>
<li>Set up Codecov <a
href="https://redirect.github.com/chromaui/addon-visual-tests/pull/350">#350</a>
(<a
href="https://github.com/paulelliott"><code>@​paulelliott</code></a>)</li>
</ul>
<h4>Authors: 2</h4>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/chromaui/addon-visual-tests/blob/next/CHANGELOG.md"><code>@​chromatic-com/storybook</code>'s
changelog</a>.</em></p>
<blockquote>
<h1>v3.2.6 (Fri Mar 14 2025)</h1>
<h4>🐛 Bug Fix</h4>
<ul>
<li>Fix SSO url <a
href="https://redirect.github.com/chromaui/addon-visual-tests/pull/363">#363</a>
(<a
href="https://github.com/kasperpeulen"><code>@​kasperpeulen</code></a>)</li>
</ul>
<h4>Authors: 1</h4>
<ul>
<li>Kasper Peulen (<a
href="https://github.com/kasperpeulen"><code>@​kasperpeulen</code></a>)</li>
</ul>
<hr />
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="296009e652"><code>296009e</code></a>
Bump version to: 3.2.6 [skip ci]</li>
<li><a
href="0feb10807f"><code>0feb108</code></a>
Update CHANGELOG.md [skip ci]</li>
<li><a
href="ffb7b7dbe7"><code>ffb7b7d</code></a>
Merge pull request <a
href="https://redirect.github.com/chromaui/addon-visual-tests/issues/363">#363</a>
from chromaui/kasper/fix-url</li>
<li><a
href="76f5a62b58"><code>76f5a62</code></a>
Fix eslint</li>
<li><a
href="ad737b588d"><code>ad737b5</code></a>
Fix storybook</li>
<li><a
href="4719d54d43"><code>4719d54</code></a>
Fix subdomain for SSO</li>
<li><a
href="a1c2d2249b"><code>a1c2d22</code></a>
Bump version to: 3.2.5 [skip ci]</li>
<li><a
href="f2a5ec59f0"><code>f2a5ec5</code></a>
Update CHANGELOG.md [skip ci]</li>
<li><a
href="bf4292c8da"><code>bf4292c</code></a>
Merge pull request <a
href="https://redirect.github.com/chromaui/addon-visual-tests/issues/357">#357</a>
from chromaui/kasper/fix-release</li>
<li><a
href="abdd57a2b3"><code>abdd57a</code></a>
Debug release</li>
<li>See full diff in <a
href="https://github.com/chromaui/addon-visual-tests/compare/v3.2.4...v3.2.6">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=@chromatic-com/storybook&package-manager=npm_and_yarn&previous-version=3.2.4&new-version=3.2.6)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)


Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-10 11:48:17 +00:00
Ubbe
2f16511f24 fix(frontend): avoid Sentry initialisation and warnings on local dev (#10125)
## Changes 🏗️

We were getting the following warnings in the console when running the
local server:

```
 ⚠ ./node_modules/.pnpm/@sentry+node@9.26.0/node_modules/@sentry/node/build/cjs/sdk
Package import-in-the-middle can't be external
The request import-in-the-middle matches serverExternalPackages (or the default list).
The request could not be resolved by Node.js from the project directory.
Packages that should be external need to be installed in the project directory, so they can be resolved from the output files.
Try to install it into the project directory by running npm install import-in-the-middle from the project directory.
```

### Why were the warnings happening?

The Sentry SDK for Next.js tries to hook into Node.js internals using
packages like `import-in-the-middle` and `require-in-the-middle`. When
Sentry is imported at the top level of a file (even if not enabled), it
tries to load these dependencies; if they are not installed, we get these
warnings.

### Why does installing the packages fix it?

By installing `import-in-the-middle` and `require-in-the-middle` as dev
dependencies, Sentry finds them and the warnings disappear. This is a
safe workaround for local/dev, and does not affect production.

### Loading Sentry conditionally

One way to avoid these warnings ⚠️ is by loading Sentry conditionally.
That is the approach I took in an earlier PR. However I realised that it
would have to apply to any file importing Sentry:
```ts
import * as Sentry from "@sentry/nextjs";
```
which would end up quite messy and affect a lot of files. I realised
installing the packages is just simpler ( _they won't be in the bundle or
affect page load_ ), and using `enabled` in the Sentry initialisation is
also cleaner.
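
A minimal sketch of the `enabled`-based initialisation (the env var names
here are assumptions):

```ts
// sentry.client.config.ts — sketch; env var names are illustrative.
import * as Sentry from "@sentry/nextjs";

Sentry.init({
  dsn: process.env.NEXT_PUBLIC_SENTRY_DSN,
  // The SDK still loads, but stays inert locally; prod/staging keep
  // reporting as before.
  enabled: process.env.NODE_ENV === "production",
});
```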

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] There are no Sentry warnings when running the local dev server (
with or without `--turbo` )
- [x] Sentry still reports issues in production or staging ( _but not
locally_ )
2025-06-10 10:20:32 +00:00
Swifty
4a03e5cbaf fix(backend): improve server error handling (#10030)
## Changes
- log helpful hints when metrics fail to record
- clarify API key errors in v1 router
- improve Postmark unsubscribe and webhook logs
- surface actionable feedback across integrations and store APIs
- handle Otto proxy failures with guidance

## Checklist
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan
2025-06-10 10:09:57 +00:00
Zamil Majdy
7165958feb fix(frontend): Fix builder UI glitch (#10139)
There are a few UI bugs on the builder that this PR addresses.

<img width="554" alt="image"
src="https://github.com/user-attachments/assets/1be70197-de7e-40fe-ab11-405c145e763d"
/>

### Changes 🏗️

Fix these UI issues:
* (screenshot attached above) Key-value input width was unintentionally
maxed out due to a stale CSS rule.
* When multiple executions within the same node are running, we picked
the latest status, so one running and one completed execution were
displayed as completed.
* "No balance" errors were only displayed once at least one node
execution had been triggered, even though they can be shown directly when
the execution request is made.
* Run & Stop button glitch: the button still showed "stopped" while the
graph was running, because the UI code tracked execution at the node
level instead of the graph level (see the sketch after this list).
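
The graph-level idea, sketched (types are hypothetical, not the builder's
actual store):

```ts
// Hypothetical execution shape for illustration.
type NodeExecution = {
  nodeId: string;
  status: "QUEUED" | "RUNNING" | "COMPLETED" | "FAILED";
};

// The graph is running if ANY execution is still active — rather than
// deriving the state from the most recent execution's status per node.
function isGraphRunning(executions: NodeExecution[]): boolean {
  return executions.some(
    (e) => e.status === "QUEUED" || e.status === "RUNNING",
  );
}
```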

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Manual tests on the described behaviours.
2025-06-10 05:46:18 +00:00
Ubbe
014b276552 feat(platform): allow to signin with Google (#10117)
## Changes 🏗️

<img width="500" alt="Screenshot 2025-06-05 at 16 24 35"
src="https://github.com/user-attachments/assets/ccf51917-68fb-4538-bfd9-7ab8bc8ce33a"
/>
<br /><br />

- Allow users to sign in or sign up with Google (SSO), via Supabase
- Prevent password login or signup with `@agpt.co` emails
- Refactor/simplify the login/signup page logic by splitting rendering
and hook logic (
[explanation](https://github.com/Significant-Gravitas/AutoGPT/pull/10117#discussion_r2128793394)
)

### Moved the `createUser` logic to the callback 

`api.createUser()` was being called **before** the OAuth flow completed.
I moved it from `providerLogin` to the **callback handler**, where the
session is established, so it only runs once we get the OK from Google
session-wise.
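
A sketch of the callback placement (the route path, helper, and API names
are assumptions, not this PR's exact code):

```ts
// app/auth/callback/route.ts — illustrative only.
import { NextResponse } from "next/server";

// Assumed helpers, stubbed for the sketch:
declare function exchangeCodeForSession(code: string): Promise<boolean>;
declare const api: { createUser: () => Promise<void> };

export async function GET(request: Request) {
  const { searchParams, origin } = new URL(request.url);
  const code = searchParams.get("code");

  if (code && (await exchangeCodeForSession(code))) {
    // Only create the user once Google has confirmed the session.
    await api.createUser();
    return NextResponse.redirect(origin);
  }
  return NextResponse.redirect(`${origin}/login?error=oauth`);
}
```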

## Checklist 📋

### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [ ] I have tested my changes according to the test plan:
  - [ ] Run this PR on a preview
  - [ ] The "Login with Google" button is visible
  - [ ] Login/signup with Google works
- [ ] You can't login or signup with password using an `@agpt.co` email
2025-06-09 14:23:12 +00:00
Bently
6771476d01 fix(turnstile): Disable turnstile by default and rename its env (#10129)
### Changes 🏗️
Change ``NEXT_PUBLIC_DISABLE_TURNSTILE`` to ``NEXT_PUBLIC_TURNSTILE``
and hide the Turnstile captcha by default:

- if ``NEXT_PUBLIC_TURNSTILE=enabled`` is set, the captcha works and shows
on the frontend login/signup/password-reset pages
- if ``NEXT_PUBLIC_TURNSTILE=disabled`` is set, the captcha is hidden from
the frontend and is not needed to login/signup/reset a password
- if ``NEXT_PUBLIC_TURNSTILE`` is not set, the captcha is likewise hidden
and not required

This means users who set up AutoGPT locally will not need to change the
env to hide the captcha.
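
Roughly how the gating reads (a sketch; the widget component name is
illustrative):

```tsx
// Sketch of the opt-in gating — component names are hypothetical.
declare function TurnstileWidget(): JSX.Element; // assumed captcha widget

const turnstileEnabled = process.env.NEXT_PUBLIC_TURNSTILE === "enabled";

export function LoginCaptcha() {
  // Hidden by default: the widget only renders (and is only required)
  // when the env var is explicitly set to "enabled".
  if (!turnstileEnabled) return null;
  return <TurnstileWidget />;
}
```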

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
- [x] start the platform with ``NEXT_PUBLIC_TURNSTILE=enabled`` and the
captcha shows
- [x] start with ``NEXT_PUBLIC_TURNSTILE=disabled`` and the captcha does
not show and is not needed to login etc.
- [x] start with ``NEXT_PUBLIC_TURNSTILE`` not set and the captcha does
not show and is not needed to login etc.
2025-06-09 10:45:02 +00:00
Swifty
a3d082a5fa feat(backend): snapshot test responses (#10039)
This pull request introduces a comprehensive backend testing guide and
adds new tests for analytics logging and various API endpoints, focusing
on snapshot testing. It also includes corresponding snapshot files for
these tests. Below are the most significant changes:

### Documentation Updates:
* Added a detailed `TESTING.md` file to the backend, providing a guide
for running tests, snapshot testing, writing API route tests, and best
practices. It includes examples for mocking, fixtures, and CI/CD
integration.

### Analytics Logging Tests:
* Implemented tests for logging raw metrics and analytics in
`analytics_test.py`, covering success scenarios, various input values,
invalid requests, and complex nested data. These tests utilize snapshot
testing for response validation.
* Added snapshot files for analytics logging tests, including responses
for success cases, various metric values, and complex analytics data.
[[1]](diffhunk://#diff-654bc5aa1951008ec5c110a702279ef58709ee455ba049b9fa825fa60f7e3869R1-R3)
[[2]](diffhunk://#diff-e0a434b107abc71aeffb7d7989dbfd8f466b5e53f8dea25a87937ec1b885b122R1-R3)
[[3]](diffhunk://#diff-dd0bc0b72264de1a0c0d3bd0c54ad656061317f425e4de461018ca51a19171a0R1-R3)
[[4]](diffhunk://#diff-63af007073db553d04988544af46930458a768544cabd08412265e0818320d11R1-R30)

### Snapshot Files for API Endpoints:
* Added snapshot files for various API endpoint tests, such as:
- Graph-related operations (`graphs_get_single_response`,
`graphs_get_all_response`, `blocks_get_all_response`).
[[1]](diffhunk://#diff-b25dba271606530cfa428c00073d7e016184a7bb22166148ab1726b3e113dda8R1-R29)
[[2]](diffhunk://#diff-1054e58ec3094715660f55bfba1676d65b6833a81a91a08e90ad57922444d056R1-R31)
[[3]](diffhunk://#diff-cfd403ab6f3efc89188acaf993d85e6f792108d1740c7e7149eb05efb73d918dR1-R14)
- User-related operations (`auth_get_or_create_user_response`,
`auth_update_email_response`).
[[1]](diffhunk://#diff-49e65ab1eb6af4d0163a6c54ed10be621ce7336b2ab5d47d47679bfaefdb7059R1-R5)
[[2]](diffhunk://#diff-ac1216f96878bd4356454c317473654d5d5c7c180125663b80b0b45aa5ab52cbR1-R3)
- Credit-related operations (`credits_get_balance_response`,
`credits_get_auto_top_up_response`, `credits_top_up_request_response`).
[[1]](diffhunk://#diff-189488f8da5be74d80ac3fb7f84f1039a408573184293e9ba2e321d535c57cddR1-R3)
[[2]](diffhunk://#diff-ba3c4a6853793cbed24030cdccedf966d71913451ef8eb4b2c4f426ef18ed87aR1-R4)
[[3]](diffhunk://#diff-43d7daa0c82070a9b6aee88a774add8e87533e630bbccbac5a838b7a7ae56a75R1-R3)
- Graph execution and deletion (`blocks_execute_response`,
`graphs_delete_response`).
[[1]](diffhunk://#diff-a2ade7d646ad85a2801e7ff39799a925a612548a1cdd0ed99b44dd870d1465b5R1-R12)
[[2]](diffhunk://#diff-c0d1cd0a8499ee175ce3007c3a87ba5f3235ce02d38ce837560b36a44fdc4a22R1-R3)

## Summary
- add pytest-snapshot to backend dev requirements
- snapshot server route response JSONs
- mention how to update stored snapshots
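
The backend tests themselves use `pytest-snapshot`; purely for
illustration, the same snapshot idea expressed as a Jest/TypeScript test
would look like this (the endpoint and port are made up):

```ts
// Jest illustration of the snapshot-testing idea — NOT the project's
// actual tests, which are Python + pytest-snapshot.
test("credits endpoint response is stable", async () => {
  const res = await fetch("http://localhost:3000/api/credits");
  const body = await res.json();
  // First run records the snapshot; later runs fail on any drift until
  // the snapshot is deliberately updated (e.g. `jest -u`).
  expect(body).toMatchSnapshot();
});
```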

## Testing
- `poetry run format`
- `poetry run test` 

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] run poetry run test

---------

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
2025-06-06 20:36:00 +00:00
Swifty
a5ff8e8f69 restore dev deploy (#10122)
2025-06-06 14:36:11 +02:00
Ubbe
f881570325 fix(frontend): cookies console warnings (#10124)
### Changes 🏗️

<img width="600" alt="Screenshot_2025-06-06_at_3 00 40_PM"
src="https://github.com/user-attachments/assets/2793793e-356f-47b0-8624-9d73af414ff3"
/>

☝🏽 Fix the following warning that gets logged to the console when
running the dev server on the frontend. It shouldn't cause an actual
auth issue, as Next.js made sure `cookies` can still be called
synchronously; however, it is safer to just migrate our `cookies` calls
to be async 🙏🏽
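
The migration is mechanical; a sketch (the cookie name is illustrative):

```ts
import { cookies } from "next/headers";

// Before (sync access — triggers the warning):
//   const token = cookies().get("session")?.value;

// After (async — what this PR migrates to):
export async function getSessionToken() {
  const cookieStore = await cookies();
  return cookieStore.get("session")?.value;
}
```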

### Checklist 📋

#### For code changes:

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] There are no cookie warnings when running the FE dev server locally
  - [x] Authentication works as expected
2025-06-06 11:24:55 +00:00
dependabot[bot]
12972fde77 chore(deps): Bump peter-evans/repository-dispatch from 2 to 3 (#10076)
Bumps
[peter-evans/repository-dispatch](https://github.com/peter-evans/repository-dispatch)
from 2 to 3.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/peter-evans/repository-dispatch/releases">peter-evans/repository-dispatch's
releases</a>.</em></p>
<blockquote>
<h2>Repository Dispatch v3.0.0</h2>
<p>⚙️  Updated runtime to Node.js 20</p>
<ul>
<li>The action now requires a minimum version of <a
href="https://github.com/actions/runner/releases/tag/v2.308.0">v2.308.0</a>
for the Actions runner. Update self-hosted runners to v2.308.0 or later
to ensure compatibility.</li>
</ul>
<h2>What's Changed</h2>
<ul>
<li>Bump prettier to fix deps by <a
href="https://github.com/peter-evans"><code>@​peter-evans</code></a> in
<a
href="https://redirect.github.com/peter-evans/repository-dispatch/pull/255">peter-evans/repository-dispatch#255</a></li>
<li>build(deps-dev): bump <code>@​types/node</code> from 18.17.12 to
18.17.14 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/peter-evans/repository-dispatch/pull/257">peter-evans/repository-dispatch#257</a></li>
<li>build(deps-dev): bump <code>@​vercel/ncc</code> from 0.36.1 to
0.38.0 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/peter-evans/repository-dispatch/pull/258">peter-evans/repository-dispatch#258</a></li>
<li>build(deps): bump actions/checkout from 3 to 4 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/peter-evans/repository-dispatch/pull/259">peter-evans/repository-dispatch#259</a></li>
<li>build(deps-dev): bump <code>@​types/node</code> from 18.17.14 to
18.17.16 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/peter-evans/repository-dispatch/pull/261">peter-evans/repository-dispatch#261</a></li>
<li>build(deps): bump <code>@​actions/core</code> from 1.10.0 to 1.10.1
by <a href="https://github.com/dependabot"><code>@​dependabot</code></a>
in <a
href="https://redirect.github.com/peter-evans/repository-dispatch/pull/262">peter-evans/repository-dispatch#262</a></li>
<li>build(deps-dev): bump jest-circus from 29.6.4 to 29.7.0 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/peter-evans/repository-dispatch/pull/263">peter-evans/repository-dispatch#263</a></li>
<li>build(deps-dev): bump eslint from 8.48.0 to 8.49.0 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/peter-evans/repository-dispatch/pull/264">peter-evans/repository-dispatch#264</a></li>
<li>Update distribution by <a
href="https://github.com/actions-bot"><code>@​actions-bot</code></a> in
<a
href="https://redirect.github.com/peter-evans/repository-dispatch/pull/265">peter-evans/repository-dispatch#265</a></li>
<li>build(deps-dev): bump <code>@​types/node</code> from 18.17.16 to
18.17.18 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/peter-evans/repository-dispatch/pull/266">peter-evans/repository-dispatch#266</a></li>
<li>build(deps-dev): bump eslint-plugin-github from 4.10.0 to 4.10.1 by
<a href="https://github.com/dependabot"><code>@​dependabot</code></a> in
<a
href="https://redirect.github.com/peter-evans/repository-dispatch/pull/267">peter-evans/repository-dispatch#267</a></li>
<li>build(deps-dev): bump <code>@​types/node</code> from 18.17.18 to
18.18.0 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/peter-evans/repository-dispatch/pull/268">peter-evans/repository-dispatch#268</a></li>
<li>build(deps-dev): bump eslint from 8.49.0 to 8.50.0 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/peter-evans/repository-dispatch/pull/269">peter-evans/repository-dispatch#269</a></li>
<li>build(deps-dev): bump <code>@​types/node</code> from 18.18.0 to
18.18.3 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/peter-evans/repository-dispatch/pull/271">peter-evans/repository-dispatch#271</a></li>
<li>build(deps-dev): bump eslint-plugin-prettier from 5.0.0 to 5.0.1 by
<a href="https://github.com/dependabot"><code>@​dependabot</code></a> in
<a
href="https://redirect.github.com/peter-evans/repository-dispatch/pull/275">peter-evans/repository-dispatch#275</a></li>
<li>build(deps-dev): bump <code>@​types/node</code> from 18.18.3 to
18.18.5 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/peter-evans/repository-dispatch/pull/274">peter-evans/repository-dispatch#274</a></li>
<li>build(deps-dev): bump eslint from 8.50.0 to 8.51.0 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/peter-evans/repository-dispatch/pull/276">peter-evans/repository-dispatch#276</a></li>
<li>build(deps-dev): bump <code>@​babel/traverse</code> from 7.16.3 to
7.23.2 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/peter-evans/repository-dispatch/pull/278">peter-evans/repository-dispatch#278</a></li>
<li>build(deps-dev): bump <code>@​types/node</code> from 18.18.5 to
18.18.6 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/peter-evans/repository-dispatch/pull/279">peter-evans/repository-dispatch#279</a></li>
<li>build(deps-dev): bump <code>@​vercel/ncc</code> from 0.38.0 to
0.38.1 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/peter-evans/repository-dispatch/pull/280">peter-evans/repository-dispatch#280</a></li>
<li>build(deps-dev): bump eslint from 8.51.0 to 8.52.0 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/peter-evans/repository-dispatch/pull/281">peter-evans/repository-dispatch#281</a></li>
<li>build(deps-dev): bump <code>@​types/node</code> from 18.18.6 to
18.18.7 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/peter-evans/repository-dispatch/pull/282">peter-evans/repository-dispatch#282</a></li>
<li>build(deps): bump actions/setup-node from 3 to 4 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/peter-evans/repository-dispatch/pull/283">peter-evans/repository-dispatch#283</a></li>
<li>build(deps-dev): bump <code>@​types/node</code> from 18.18.7 to
18.18.8 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/peter-evans/repository-dispatch/pull/284">peter-evans/repository-dispatch#284</a></li>
<li>build(deps-dev): bump <code>@​types/node</code> from 18.18.8 to
18.18.9 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/peter-evans/repository-dispatch/pull/285">peter-evans/repository-dispatch#285</a></li>
<li>build(deps-dev): bump eslint from 8.52.0 to 8.53.0 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/peter-evans/repository-dispatch/pull/286">peter-evans/repository-dispatch#286</a></li>
<li>build(deps-dev): bump prettier from 3.0.3 to 3.1.0 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/peter-evans/repository-dispatch/pull/287">peter-evans/repository-dispatch#287</a></li>
<li>build(deps-dev): bump eslint from 8.53.0 to 8.54.0 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/peter-evans/repository-dispatch/pull/289">peter-evans/repository-dispatch#289</a></li>
<li>build(deps-dev): bump <code>@​types/node</code> from 18.18.9 to
18.18.13 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/peter-evans/repository-dispatch/pull/290">peter-evans/repository-dispatch#290</a></li>
<li>build(deps-dev): bump <code>@​types/node</code> from 18.18.13 to
18.19.0 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/peter-evans/repository-dispatch/pull/291">peter-evans/repository-dispatch#291</a></li>
<li>build(deps-dev): bump <code>@​types/node</code> from 18.19.0 to
18.19.3 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/peter-evans/repository-dispatch/pull/292">peter-evans/repository-dispatch#292</a></li>
<li>build(deps-dev): bump eslint from 8.54.0 to 8.55.0 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/peter-evans/repository-dispatch/pull/293">peter-evans/repository-dispatch#293</a></li>
<li>build(deps-dev): bump prettier from 3.1.0 to 3.1.1 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/peter-evans/repository-dispatch/pull/296">peter-evans/repository-dispatch#296</a></li>
<li>build(deps): bump actions/upload-artifact from 3 to 4 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/peter-evans/repository-dispatch/pull/295">peter-evans/repository-dispatch#295</a></li>
<li>build(deps-dev): bump eslint from 8.55.0 to 8.56.0 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/peter-evans/repository-dispatch/pull/297">peter-evans/repository-dispatch#297</a></li>
<li>build(deps-dev): bump eslint-plugin-prettier from 5.0.1 to 5.1.1 by
<a href="https://github.com/dependabot"><code>@​dependabot</code></a> in
<a
href="https://redirect.github.com/peter-evans/repository-dispatch/pull/298">peter-evans/repository-dispatch#298</a></li>
<li>build(deps-dev): bump eslint-plugin-prettier from 5.1.1 to 5.1.2 by
<a href="https://github.com/dependabot"><code>@​dependabot</code></a> in
<a
href="https://redirect.github.com/peter-evans/repository-dispatch/pull/299">peter-evans/repository-dispatch#299</a></li>
<li>build(deps-dev): bump <code>@​types/node</code> from 18.19.3 to
18.19.4 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/peter-evans/repository-dispatch/pull/300">peter-evans/repository-dispatch#300</a></li>
<li>build(deps-dev): bump eslint-plugin-prettier from 5.1.2 to 5.1.3 by
<a href="https://github.com/dependabot"><code>@​dependabot</code></a> in
<a
href="https://redirect.github.com/peter-evans/repository-dispatch/pull/301">peter-evans/repository-dispatch#301</a></li>
<li>build(deps-dev): bump <code>@​types/node</code> from 18.19.4 to
18.19.6 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/peter-evans/repository-dispatch/pull/302">peter-evans/repository-dispatch#302</a></li>
<li>build(deps-dev): bump prettier from 3.1.1 to 3.2.4 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/peter-evans/repository-dispatch/pull/303">peter-evans/repository-dispatch#303</a></li>
<li>build(deps-dev): bump <code>@​types/node</code> from 18.19.6 to
18.19.8 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/peter-evans/repository-dispatch/pull/304">peter-evans/repository-dispatch#304</a></li>
<li>feat: update runtime to node 20 by <a
href="https://github.com/peter-evans"><code>@​peter-evans</code></a> in
<a
href="https://redirect.github.com/peter-evans/repository-dispatch/pull/305">peter-evans/repository-dispatch#305</a></li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="ff45666b94"><code>ff45666</code></a>
feat: update runtime to node 20 (<a
href="https://redirect.github.com/peter-evans/repository-dispatch/issues/305">#305</a>)</li>
<li><a
href="a4a90276d0"><code>a4a9027</code></a>
build(deps-dev): bump <code>@​types/node</code> from 18.19.6 to 18.19.8
(<a
href="https://redirect.github.com/peter-evans/repository-dispatch/issues/304">#304</a>)</li>
<li><a
href="2605253283"><code>2605253</code></a>
build(deps-dev): bump prettier from 3.1.1 to 3.2.4 (<a
href="https://redirect.github.com/peter-evans/repository-dispatch/issues/303">#303</a>)</li>
<li><a
href="ab3258eeef"><code>ab3258e</code></a>
build(deps-dev): bump <code>@​types/node</code> from 18.19.4 to 18.19.6
(<a
href="https://redirect.github.com/peter-evans/repository-dispatch/issues/302">#302</a>)</li>
<li><a
href="240bc73193"><code>240bc73</code></a>
build(deps-dev): bump eslint-plugin-prettier from 5.1.2 to 5.1.3 (<a
href="https://redirect.github.com/peter-evans/repository-dispatch/issues/301">#301</a>)</li>
<li><a
href="8aa15c54a0"><code>8aa15c5</code></a>
build(deps-dev): bump <code>@​types/node</code> from 18.19.3 to 18.19.4
(<a
href="https://redirect.github.com/peter-evans/repository-dispatch/issues/300">#300</a>)</li>
<li><a
href="22aa07cf23"><code>22aa07c</code></a>
build(deps-dev): bump eslint-plugin-prettier from 5.1.1 to 5.1.2 (<a
href="https://redirect.github.com/peter-evans/repository-dispatch/issues/299">#299</a>)</li>
<li><a
href="ba0298574b"><code>ba02985</code></a>
build(deps-dev): bump eslint-plugin-prettier from 5.0.1 to 5.1.1 (<a
href="https://redirect.github.com/peter-evans/repository-dispatch/issues/298">#298</a>)</li>
<li><a
href="accfd7b5bf"><code>accfd7b</code></a>
build(deps-dev): bump eslint from 8.55.0 to 8.56.0 (<a
href="https://redirect.github.com/peter-evans/repository-dispatch/issues/297">#297</a>)</li>
<li><a
href="3c7d964ae9"><code>3c7d964</code></a>
build(deps): bump actions/upload-artifact from 3 to 4 (<a
href="https://redirect.github.com/peter-evans/repository-dispatch/issues/295">#295</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/peter-evans/repository-dispatch/compare/v2...v3">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=peter-evans/repository-dispatch&package-manager=github_actions&previous-version=2&new-version=3)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.


---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-06 11:20:05 +00:00
Bently
6df4dd3739 fix(frontend/turnstile): Reset on failed login/register (#9948)
This fixes Turnstile not resetting properly on failed login/register.

### Changes 🏗️

Calling ``turnstile.reset()`` directly seems to fail sometimes, so I
have added a ``resetCaptcha`` function that forces a full reset of the
Turnstile widget, which should prevent getting stuck after a failed
first login attempt.
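
A minimal sketch of the idea, assuming a React wrapper component for the
widget (the `Turnstile` component and its props here are hypothetical):
instead of relying on ``turnstile.reset()``, remount the widget by
changing its React key, which forces a fresh challenge.

```tsx
import { useCallback, useState } from "react";

// Hypothetical Turnstile wrapper component; the real widget API may differ.
declare function Turnstile(props: {
  siteKey: string;
  onError?: () => void;
}): JSX.Element;

export function LoginCaptcha({ siteKey }: { siteKey: string }) {
  // Bumping the key unmounts and remounts the widget: a full reset, which
  // is more reliable than calling turnstile.reset() on the instance.
  const [captchaKey, setCaptchaKey] = useState(0);
  const resetCaptcha = useCallback(() => setCaptchaKey((k) => k + 1), []);

  return <Turnstile key={captchaKey} siteKey={siteKey} onError={resetCaptcha} />;
}
```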

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
- [x] Test logging in with a wrong email/password: it should fail with
"Invalid login credentials" and you should see the Turnstile token
refresh; logging in again with correct info should then work without
getting stuck
2025-06-06 10:56:15 +00:00
Bently
79b38343c2 docs(ollama): Update to add info on how to properly setup ollama environment variables (#10089)
Update the Ollama docs to add info on how to set up Ollama environment
variables for proper access.

This includes setting the "OLLAMA_HOST" env var to the IP and port
"0.0.0.0:11434", which makes Ollama accessible to AutoGPT running inside
Docker.
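
For reference, the variable from this docs update, set in the
environment where the Ollama server runs:

```
# Listen on all interfaces so the dockerized AutoGPT backend can reach Ollama
OLLAMA_HOST=0.0.0.0:11434
```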



#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Follow the latest setup to test Ollama to make sure it works
2025-06-06 10:45:31 +00:00
Nicholas Tindle
705be3ec86 dx: Add Claude Code GitHub Workflow (#10099)
## 🤖 Installing Claude Code GitHub App

This PR adds a GitHub Actions workflow that enables Claude Code
integration in our repository.

### What is Claude Code?

[Claude Code](https://claude.ai/code) is an AI coding agent that can
help with:
- Bug fixes and improvements  
- Documentation updates
- Implementing new features
- Code reviews and suggestions
- Writing tests
- And more!

### How it works

Once this PR is merged, we'll be able to interact with Claude by
mentioning @claude in a pull request or issue comment.
When the workflow is triggered, Claude will analyze the comment and
surrounding context, then act on the request in a GitHub Action.

### Important Notes

- **This workflow won't take effect until this PR is merged**
- **@claude mentions won't work until after the merge is complete**
- The workflow runs automatically whenever Claude is mentioned in PR or
issue comments
- Claude gets access to the entire PR or issue context including files,
diffs, and previous comments

### Security

- Our Anthropic API key is securely stored as a GitHub Actions secret
- Only users with write access to the repository can trigger the
workflow
- All Claude runs are stored in the GitHub Actions run history
- Claude's default tools are limited to reading/writing files and
interacting with our repo by creating comments, branches, and commits.
- We can add more allowed tools by adding them to the workflow file
like:

```
allowed_tools: Bash(npm install),Bash(npm run build),Bash(npm run lint),Bash(npm run test)
```

There's more information in the [Claude Code
documentation](http://docs.anthropic.com/s/claude-code-github-actions).

After merging this PR, let's try mentioning @claude in a comment on any
PR to get started!
2025-06-05 21:40:39 +00:00
Swifty
3a20c5a4bb restore dev deploy (#10122)
2025-06-05 19:28:48 +02:00
Ubbe
36634b7ba2 fix(frontend): hydration warning (#10120)
We're encountering a hydration warning on the frontend due to a mismatch
in CSS module hashes. This happens because auto-generated classnames
from `next/font` and `geist` differ between server-side and client-side
rendering. The inconsistency triggers a warning when the client
rehydrates the server-rendered HTML.

![Screenshot 2025-06-05 at 17 20
06](https://github.com/user-attachments/assets/4270f94c-faab-4cad-9c8e-6fd0c27bd4d0)

### Changes 🏗️

Since the mismatch only affects the `<html>` tag and has no visible
impact on the UI, the most straightforward workaround is to suppress the
warning and still take advantage of `next/font` optimisations.
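
A minimal sketch of that workaround in a Next.js root layout (assuming
the mismatch sits on the `<html>` element, as described above):

```tsx
import type { ReactNode } from "react";

// app/layout.tsx: suppressHydrationWarning only silences attribute
// mismatches on this one element; children are still checked.
export default function RootLayout({ children }: { children: ReactNode }) {
  return (
    <html lang="en" suppressHydrationWarning>
      <body>{children}</body>
    </html>
  );
}
```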

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run the frontend locally
  - [x] Page loads without hydration warnings

---------

Co-authored-by: Abhimanyu Yadav <122007096+Abhi1992002@users.noreply.github.com>
2025-06-05 17:13:00 +00:00
Reinier van der Leer
781f138c09 feat(backend): Agent Presets backend improvements (#9786)
- Part of #9307

-  Blocks #9541

### Changes 🏗️

Backend:
- Fix+improve `GET /library/presets` (`list_presets`) endpoint
  - Fix pagination
  - Add `graph_id` filter parameter

- Allow partial preset updates: `PUT /presets/{preset_id}` -> `PATCH
/presets/{preset_id}` (example after this list)

- Allow creating preset from graph execution through `POST /presets`

- Clean up models & DB functions
  - Split `upsert_preset` into `create_preset` + `update_preset`
  - Add `LibraryAgentPresetUpdatable`
- Replace `CreateLibraryAgentPresetRequest` with
`LibraryAgentPresetCreatable`
- Use `LibraryAgentPresetCreatable` as base class for
`LibraryAgentPreset`
  - Remove redundant `set_is_deleted_for_library_agent(..)`
  - Improve log statements
  - Improve DB statements (e.g. by using unique keys where possible)
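
A hedged example of the new partial-update call; the exact base path and
body fields are illustrative:

```ts
// Hedged sketch: the base path and body fields are illustrative.
async function renamePreset(presetId: string, name: string): Promise<void> {
  const res = await fetch(`/presets/${presetId}`, {
    method: "PATCH", // was PUT; PATCH now accepts partial updates
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ name }), // send only the fields being changed
  });
  if (!res.ok) throw new Error(`Preset update failed: ${res.status}`);
}
```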

Frontend:
- Add timestamp parsing logic to library agent preset endpoints
- Brand `LibraryAgentPreset.id` + references

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] CI green
- Since these changes don't affect existing front-end functionality, no
additional testing is needed.
2025-06-05 16:11:37 +00:00
Zamil Majdy
2647417e9f feat(executor;frontend): Move output processing step from node executor to graph executor & simplify input beads calculation (#10066)
**Goal**: Allow parallel runs within a single node. Currently, we
prevent this to avoid unexpected ordering of the execution.

### Changes 🏗️

#### Executor changes

We decoupled the node-execution output processing (the part responsible
for deciding the next executions) from the node executor code.

Currently, `execute_node` does two big things:
* Runs the block’s `execute(...)` (which yields outputs).
* Immediately enqueues the next nodes based on those outputs.

This PR changes that:
* `execute_node(node_exec)` -> stream of `(output_name, data)`. It purely
runs the block and yields each output as soon as it’s available.
* `_enqueue_next_nodes` moves into the graph executor, so the next
execution is handled serially by the graph executor to avoid concurrency
issues. A rough sketch follows.
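
A rough TypeScript sketch of the new shape (the backend is Python; the
names below mirror the description, not the actual code):

```ts
type Output = { name: string; data: unknown };

interface NodeExecution {
  // Hypothetical shape: a block exposes an async iterable of (name, data) pairs.
  block: { execute(inputs: unknown): AsyncIterable<[string, unknown]> };
  inputs: unknown;
}

// execute_node: purely runs the block, yielding outputs as they appear.
async function* executeNode(nodeExec: NodeExecution): AsyncGenerator<Output> {
  for await (const [name, data] of nodeExec.block.execute(nodeExec.inputs)) {
    yield { name, data };
  }
}

// The graph executor consumes the stream and enqueues next nodes serially,
// avoiding the concurrency issues of enqueueing inside the node executor.
async function runNode(
  nodeExec: NodeExecution,
  enqueueNextNodes: (output: Output) => void,
): Promise<void> {
  for await (const output of executeNode(nodeExec)) {
    enqueueNextNodes(output);
  }
}
```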

#### UI changes

The executor change also fixes the behavior of execution updates to the
UI: we now report each execution output to the UI as soon as it is
available, not when the node execution is fully completed.
This, however, broke the bead calculation logic, which assumed that
execution updates never overlap. So this PR also makes the bead
calculation take overlapping / duplicated execution updates into
account, and simplifies the overall calculation logic.


### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Execute this agent and observe its concurrency ordering
  
<img width="1424" alt="image"
src="https://github.com/user-attachments/assets/0fe8259f-9091-4ecc-b824-ce8e8819c2d2"
/>
2025-06-05 16:10:50 +00:00
Reinier van der Leer
f2a04f9845 dx: Fix file filter for platform/backend auto-label (#10115)
- Fixes #10114

### Changes 🏗️

- Change file matching patterns OR to AND in auto-labeler config
2025-06-05 15:49:14 +00:00
Reinier van der Leer
96df40f7b6 fix(frontend): Use FRONTEND_BASE_URL to make password reset link (#10102)
- Fixes #9215

- [x] ⚠️ Merge first:
https://github.com/Significant-Gravitas/AutoGPT_cloud_infrastructure/pull/93

### Changes 🏗️

- Use `FRONTEND_BASE_URL` instead of the Host header to build the
password reset redirect link
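
A minimal sketch of the idea: build the link from configuration rather
than the spoofable Host header. The route below is a placeholder:

```ts
// FRONTEND_BASE_URL comes from configuration; "/reset_password" is a
// placeholder path, not necessarily the real route.
function passwordResetRedirect(): string {
  const base = process.env.FRONTEND_BASE_URL ?? "http://localhost:3000";
  return new URL("/reset_password", base).toString();
}
```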

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Resetting password gives an e-mail with a link that points to the
correct URL

#### For configuration changes:
- [x] `.env.example` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
2025-06-05 15:48:35 +00:00
Ubbe
7d10dc4e7b fix(frontend): pin deps (#10121)

### Changes 🏗️

This PR updates the .npmrc file to improve dependency consistency across
local and CI environments when using pnpm.

Specifically:

- `save-exact=true` ensures all dependencies are pinned to exact
versions, preventing version drift ( _especially important for tools
like `prettier`, where even minor changes can lead to inconsistent
formatting in commits_ ).

This change aims to reduce formatting discrepancies and improve
reproducibility across machines and contributors.
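
Concretely, the relevant `.npmrc` line:

```
save-exact=true
```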


### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] formatting & lint passes on the CI
2025-06-05 15:02:27 +00:00
Swifty
5d0faab4b1 Delete .github/workflows/platform-autogpt-deploy-dev.yaml (#10118)
2025-06-05 15:13:04 +02:00
Bently
5b324abc7c update(turnstile): Add env to hide turnstile for dev deploy (#10116)
This simply adds an env var to hide Turnstile for dev deploys.

If the env var ``NEXT_PUBLIC_DISABLE_TURNSTILE`` is set to "false" (the
default), Turnstile is shown; if it is set to "true", Turnstile is
hidden.
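
A minimal sketch of the flag check (how the component consumes it is up
to the caller):

```ts
// Only the exact string "true" hides the widget; anything else shows it.
// NEXT_PUBLIC_* variables are inlined into the client bundle at build time.
export function isTurnstileDisabled(): boolean {
  return process.env.NEXT_PUBLIC_DISABLE_TURNSTILE === "true";
}
```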

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
- [x] Login/register with ``NEXT_PUBLIC_DISABLE_TURNSTILE=false``: you
see the Turnstile and it is required to log in/sign up
- [x] Login/register with ``NEXT_PUBLIC_DISABLE_TURNSTILE=true``: the
Turnstile is gone and you can log in/sign up without it
2025-06-05 15:06:29 +02:00
Ubbe
b900e86c49 fix(profile): account menu layout and alignment (#10113)

## Changes 🏗️

### Before

<img width="200" alt="image"
src="https://github.com/user-attachments/assets/8a8e1818-6b8c-4d86-a2b1-a474ba27a6de"
/>

### After

<img width="200" alt="Screenshot 2025-06-05 at 15 26 45"
src="https://github.com/user-attachments/assets/cc28eaeb-626b-46a8-a726-c157b2471ca9"
/>

### Adjustments

- Adjusted the padding of the account menu
- Made the username display nicely on top of the account name
- Both username and account name are truncated with `...` if they are
too long

## Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Login
  - [x] Open account menu ( top-right )
  - [x] The alignment of items is right
  - [x] Long usernames are handled nicely
  - [x] Username and display name don't overlap
2025-06-05 12:24:06 +00:00
Reinier van der Leer
ef6ba3e84a fix(frontend): Resolve perpetual loading state on password reset (#10103)
- Fixes #10085

### Changes 🏗️

- Remove redirect from `sendResetEmail` server action

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Reset password form exits loading state after request completes
2025-06-05 09:11:50 +00:00
dependabot[bot]
95137323f7 chore(frontend/deps-dev): Bump eslint-plugin-storybook from 0.11.6 to 0.12.0 in /autogpt_platform/frontend (#10105)
Bumps
[eslint-plugin-storybook](https://github.com/storybookjs/storybook/tree/HEAD/code/lib/eslint-plugin)
from 0.11.6 to 0.12.0.
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/storybookjs/storybook/blob/next/CHANGELOG.v1-5.md">eslint-plugin-storybook's
changelog</a>.</em></p>
<blockquote>
<h2>5.3.7 (January 20, 2020)</h2>
<h3>Bug Fixes</h3>
<ul>
<li>Node-logger: Move <code>@types/npmlog</code> to dependencies (<a
href="https://redirect.github.com/storybookjs/storybook/pull/9538">#9538</a>)</li>
<li>Core: Fix legacy story URLs (<a
href="https://redirect.github.com/storybookjs/storybook/pull/9545">#9545</a>)</li>
<li>Addon-docs: Convert default prop value to string (<a
href="https://redirect.github.com/storybookjs/storybook/pull/9525">#9525</a>)</li>
<li>Addon-docs: Preserve Source indentation by default (<a
href="https://redirect.github.com/storybookjs/storybook/pull/9513">#9513</a>)</li>
</ul>
<h2>5.3.6 (January 17, 2020)</h2>
<h3>Bug Fixes</h3>
<ul>
<li>Source-loader: Bypass if file has no exports (<a
href="https://redirect.github.com/storybookjs/storybook/pull/9505">#9505</a>)</li>
<li>Core: Fix default sorting of docs-only stories (<a
href="https://redirect.github.com/storybookjs/storybook/pull/9504">#9504</a>)</li>
</ul>
<h2>5.3.5 (January 17, 2020)</h2>
<h3>Bug Fixes</h3>
<ul>
<li>Core: Fix typo for loading addon-notes/register-panel (<a
href="https://redirect.github.com/storybookjs/storybook/pull/9497">#9497</a>)</li>
<li>Source-loader: Add imports to top of file (<a
href="https://redirect.github.com/storybookjs/storybook/pull/9492">#9492</a>)</li>
</ul>
<h2>5.3.4 (January 16, 2020)</h2>
<h3>Bug Fixes</h3>
<ul>
<li>Core: Fix presets register panel (<a
href="https://redirect.github.com/storybookjs/storybook/pull/9486">#9486</a>)</li>
<li>Core: Fix addon/preset detection for local addons (<a
href="https://redirect.github.com/storybookjs/storybook/pull/9485">#9485</a>)</li>
<li>Core: Fix default story sort (<a
href="https://redirect.github.com/storybookjs/storybook/pull/9482">#9482</a>)</li>
</ul>
<h2>5.3.3 (January 14, 2020)</h2>
<h3>Bug Fixes</h3>
<ul>
<li>UI: Fix edge case where only one legacy separator is defined (<a
href="https://redirect.github.com/storybookjs/storybook/pull/9425">#9425</a>)</li>
<li>Core: Preserve kind load order on HMR when no sortFn is provided (<a
href="https://redirect.github.com/storybookjs/storybook/pull/9424">#9424</a>)</li>
<li>Angular: Fix missing architect properties (<a
href="https://redirect.github.com/storybookjs/storybook/pull/9390">#9390</a>)</li>
<li>Addon-knobs: Fix null knob values in select (<a
href="https://redirect.github.com/storybookjs/storybook/pull/9416">#9416</a>)</li>
<li>Source-loader: Disable linting altogether (<a
href="https://redirect.github.com/storybookjs/storybook/pull/9417">#9417</a>)</li>
</ul>
<h2>5.3.2 (January 13, 2020)</h2>
<h3>Bug Fixes</h3>
<ul>
<li>Source-loader: Disable eslint entirely for generated code (<a
href="https://redirect.github.com/storybookjs/storybook/pull/9410">#9410</a>)</li>
</ul>
<h2>5.3.1 (January 12, 2020)</h2>
<h3>Bug Fixes</h3>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li>See full diff in <a
href="https://github.com/storybookjs/storybook/commits/v0.12.0/code/lib/eslint-plugin">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=eslint-plugin-storybook&package-manager=npm_and_yarn&previous-version=0.11.6&new-version=0.12.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)


Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-05 07:53:38 +00:00
Zamil Majdy
512ce6d473 fix(frontend): Hide native file input on agent node file input 2025-06-05 13:58:36 +07:00
Zamil Majdy
da0482b54e fix(backend): Fix graph fetching of non-owned marketplace agent (#10110)
Currently, when no graph version is specified, the `get_graph` function
tries to fetch the latest version; when the graph is not owned and that
latest version is not available for listing, it returns `None` instead
of picking the latest graph version available on the store.

### Changes 🏗️

Instead of using the latest `graph.version` to fetch the store listing,
don't provide any version filter at all and pick up whichever version is
available on the store.
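
An illustrative sketch of the fallback; the real logic lives in the
Python backend and the helper names here are hypothetical:

```ts
type Graph = { id: string; version: number };

// Hypothetical helpers standing in for the real DB queries.
declare function findOwnedGraph(
  graphId: string,
  userId: string,
  version?: number,
): Promise<Graph | null>;
declare function findStoreListedGraph(graphId: string): Promise<Graph | null>;

async function getGraphForUser(
  graphId: string,
  userId: string,
  version?: number,
): Promise<Graph | null> {
  const owned = await findOwnedGraph(graphId, userId, version);
  if (owned) return owned;
  // Not owned: don't pin to the latest version; take whichever version
  // the store listing actually has available.
  return findStoreListedGraph(graphId); // no version filter
}
```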

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] CI, existing tests
2025-06-05 06:05:31 +00:00
Zamil Majdy
d710d14339 Merge branch 'master' of github.com:Significant-Gravitas/AutoGPT into dev 2025-06-05 11:46:49 +07:00
Reinier van der Leer
47adab575b fix(frontend): Fix login/logout actions; update credentials cache on state change (#10017)
- Resolves #10008

### Changes 🏗️

- Update `useSupabase` hook to propagate auth state changes
- Refresh `CredentialsProvider` whenever the user login state changes
- Add `logOut` callback to `useSupabase` hook that handles (client-side)
logout (sketched below)
- Remove server-side `logout` action: the Supabase reference
implementation does it client-side, and doing both causes a race
condition

Refactorings to aid implementation of the above:
- Move `@/hooks/useSupabase` -> `@/lib/supabase/useSupabase`

Other improvements:
- Clean up `login` server action based on reference implementation
- Make `BackendAPI.isAuthenticated()` more efficient and faster
- Remove unused `ProfileDropdown` component
- Improve logic and debug logging in `tests/pages/login.page.ts`
- Improve playwright test output logging
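
A minimal sketch of the client-side `logOut` callback mentioned above,
assuming the standard supabase-js client:

```ts
import { createClient } from "@supabase/supabase-js";

const supabase = createClient(
  process.env.NEXT_PUBLIC_SUPABASE_URL!,
  process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!,
);

// Client-side logout: clears the local session, and the auth state
// listeners fire so CredentialsProvider can refresh. No server action.
export async function logOut(): Promise<void> {
  const { error } = await supabase.auth.signOut();
  if (error) throw error;
}
```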

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - Log out from account (A)
  - Log in to other account (B)
- Open builder, add a block for which account B has (multiple)
credentials
  - [x] Credentials for account B are shown
  - [x] Credentials for account A are *not* shown

  **Note: do not reload the page** while going through these steps
2025-06-04 22:42:31 +00:00
Nicholas Tindle
fa7fcb3dd4 chore(backend): Tell Dependabot not to update Poetry (#10101)
Fixes #10098

### Changes 🏗️

Tells Dependabot to ignore Poetry.

### Checklist 📋
N/A
2025-06-04 22:32:17 +00:00
dependabot[bot]
6629052a6b chore(frontend/deps-dev): Bump the development-dependencies group across 1 directory with 3 updates (#10086)
Bumps the development-dependencies group with 3 updates in the
/autogpt_platform/frontend directory:
[@storybook/test-runner](https://github.com/storybookjs/test-runner),
[eslint-config-next](https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next)
and [msw](https://github.com/mswjs/msw).

Updates `@storybook/test-runner` from 0.21.3 to 0.22.0
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/storybookjs/test-runner/releases"><code>@​storybook/test-runner</code>'s
releases</a>.</em></p>
<blockquote>
<h2>v0.22.0</h2>
<h4>🚀 Enhancement</h4>
<ul>
<li>Release v0.22.0 <a
href="https://redirect.github.com/storybookjs/test-runner/pull/553">#553</a>
(<a href="https://github.com/ronakj"><code>@​ronakj</code></a> <a
href="https://github.com/ndelangen"><code>@​ndelangen</code></a> <a
href="https://github.com/kasperpeulen"><code>@​kasperpeulen</code></a>
<a href="https://github.com/yannbf"><code>@​yannbf</code></a>)</li>
<li>Dependencies: Add sb9 alpha compatibility <a
href="https://redirect.github.com/storybookjs/test-runner/pull/551">#551</a>
(<a
href="https://github.com/ndelangen"><code>@​ndelangen</code></a>)</li>
</ul>
<h4>🐛 Bug Fix</h4>
<ul>
<li>Exclude new server component error <a
href="https://redirect.github.com/storybookjs/test-runner/pull/552">#552</a>
(<a
href="https://github.com/kasperpeulen"><code>@​kasperpeulen</code></a>)</li>
</ul>
<h4>Authors: 4</h4>
<ul>
<li>Kasper Peulen (<a
href="https://github.com/kasperpeulen"><code>@​kasperpeulen</code></a>)</li>
<li>Norbert de Langen (<a
href="https://github.com/ndelangen"><code>@​ndelangen</code></a>)</li>
<li>Ronak Jain (<a
href="https://github.com/ronakj"><code>@​ronakj</code></a>)</li>
<li>Yann Braga (<a
href="https://github.com/yannbf"><code>@​yannbf</code></a>)</li>
</ul>
<h2>v0.22.0-next.2</h2>
<h4>🐛 Bug Fix</h4>
<h4>Authors: 1</h4>
<ul>
<li>Ronak Jain (<a
href="https://github.com/ronakj"><code>@​ronakj</code></a>)</li>
</ul>
<h2>v0.22.0-next.0</h2>
<h4>🚀 Enhancement</h4>
<ul>
<li>Dependencies: Add sb9 alpha compatibility <a
href="https://redirect.github.com/storybookjs/test-runner/pull/551">#551</a>
(<a
href="https://github.com/ndelangen"><code>@​ndelangen</code></a>)</li>
</ul>
<h4>Authors: 1</h4>
<ul>
<li>Norbert de Langen (<a
href="https://github.com/ndelangen"><code>@​ndelangen</code></a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/storybookjs/test-runner/blob/v0.22.0/CHANGELOG.md"><code>@​storybook/test-runner</code>'s
changelog</a>.</em></p>
<blockquote>
<h1>v0.22.0 (Fri Feb 28 2025)</h1>
<h4>🚀 Enhancement</h4>
<ul>
<li>Release v0.22.0 <a
href="https://redirect.github.com/storybookjs/test-runner/pull/553">#553</a>
(<a href="https://github.com/ronakj"><code>@​ronakj</code></a> <a
href="https://github.com/ndelangen"><code>@​ndelangen</code></a> <a
href="https://github.com/kasperpeulen"><code>@​kasperpeulen</code></a>
<a href="https://github.com/yannbf"><code>@​yannbf</code></a>)</li>
<li>Dependencies: Add sb9 alpha compatibility <a
href="https://redirect.github.com/storybookjs/test-runner/pull/551">#551</a>
(<a
href="https://github.com/ndelangen"><code>@​ndelangen</code></a>)</li>
</ul>
<h4>🐛 Bug Fix</h4>
<ul>
<li>Exclude new server component error <a
href="https://redirect.github.com/storybookjs/test-runner/pull/552">#552</a>
(<a
href="https://github.com/kasperpeulen"><code>@​kasperpeulen</code></a>)</li>
</ul>
<h4>Authors: 4</h4>
<ul>
<li>Kasper Peulen (<a
href="https://github.com/kasperpeulen"><code>@​kasperpeulen</code></a>)</li>
<li>Norbert de Langen (<a
href="https://github.com/ndelangen"><code>@​ndelangen</code></a>)</li>
<li>Ronak Jain (<a
href="https://github.com/ronakj"><code>@​ronakj</code></a>)</li>
<li>Yann Braga (<a
href="https://github.com/yannbf"><code>@​yannbf</code></a>)</li>
</ul>
<hr />
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="d56dba2c40"><code>d56dba2</code></a>
Bump version to: 0.22.0 [skip ci]</li>
<li><a
href="3b1a12284f"><code>3b1a122</code></a>
Update CHANGELOG.md [skip ci]</li>
<li><a
href="a595bb8cf5"><code>a595bb8</code></a>
Merge pull request <a
href="https://redirect.github.com/storybookjs/test-runner/issues/553">#553</a>
from storybookjs/release/v0.22.0</li>
<li><a
href="16a7aa8f02"><code>16a7aa8</code></a>
Merge branch 'main' into release/v0.22.0</li>
<li><a
href="b24a54faa4"><code>b24a54f</code></a>
Merge pull request <a
href="https://redirect.github.com/storybookjs/test-runner/issues/539">#539</a>
from ronakj/next</li>
<li><a
href="7b78185656"><code>7b78185</code></a>
Merge pull request <a
href="https://redirect.github.com/storybookjs/test-runner/issues/552">#552</a>
from storybookjs/kasper/fix-next-error</li>
<li><a
href="91cf68d17c"><code>91cf68d</code></a>
Exclude new server component error</li>
<li><a
href="2d485ce6bf"><code>2d485ce</code></a>
Merge pull request <a
href="https://redirect.github.com/storybookjs/test-runner/issues/551">#551</a>
from storybookjs/norbert/sb-9-compatibility</li>
<li><a
href="dfe563067c"><code>dfe5630</code></a>
dedupe</li>
<li><a
href="859c52b4df"><code>859c52b</code></a>
upgrade playwright</li>
<li>Additional commits viewable in <a
href="https://github.com/storybookjs/test-runner/compare/v0.21.3...v0.22.0">compare
view</a></li>
</ul>
</details>
<br />

Updates `eslint-config-next` from 15.1.6 to 15.3.3
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/vercel/next.js/releases">eslint-config-next's
releases</a>.</em></p>
<blockquote>
<h2>v15.3.3</h2>
<blockquote>
<p>[!NOTE]<br />
This release is backporting bug fixes. It does <strong>not</strong>
include all pending features/changes on canary.</p>
</blockquote>
<h3>Core Changes</h3>
<ul>
<li>Reinstate <code>vary</code> (<a
href="https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next/issues/79939">#79939</a>)</li>
<li>fix(next-swc): Fix interestingness detection for React Compiler (<a
href="https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next/issues/79558">#79558</a>)</li>
<li>fix(next-swc): Fix react compiler usefulness detector (<a
href="https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next/issues/79480">#79480</a>)</li>
<li>fix(dev-overlay): Better handle edge-case file paths in launchEditor
(<a
href="https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next/issues/79526">#79526</a>)</li>
<li>Client router should discard stale prefetch entries for static pages
(<a
href="https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next/issues/79362">#79362</a>)</li>
</ul>
<h3>Credits</h3>
<p>Huge thanks to <a
href="https://github.com/gaojude"><code>@​gaojude</code></a>, <a
href="https://github.com/kdy1"><code>@​kdy1</code></a>, <a
href="https://github.com/bgw"><code>@​bgw</code></a>, and <a
href="https://github.com/unstubbable"><code>@​unstubbable</code></a> for
helping!</p>
<h2>v15.3.2</h2>
<blockquote>
<p>[!NOTE]<br />
This release is backporting bug fixes. It does <strong>not</strong>
include all pending features/changes on canary.</p>
</blockquote>
<h3>Core Changes</h3>
<ul>
<li>backport: fix(turbopack): Store persistence of wrapped task on
RawVc::LocalOutput (<a
href="https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next/issues/78488">#78488</a>)
(<a
href="https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next/issues/78883">#78883</a>)</li>
<li><code>@​next/mdx</code>: Use stable turbopack config options (<a
href="https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next/issues/78880">#78880</a>)</li>
<li>Fix react-compiler: Fix detection of interest (<a
href="https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next/issues/78879">#78879</a>)</li>
<li>Fix turbopack: Backport sourcemap bugfix (<a
href="https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next/issues/78881">#78881</a>)</li>
<li>[next-server] preserve rsc query for rsc redirects (<a
href="https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next/issues/78876">#78876</a>)</li>
<li>Update middleware public/static matching (<a
href="https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next/issues/78875">#78875</a>)</li>
</ul>
<h3>Credits</h3>
<p>Huge thanks to <a
href="https://github.com/ijjk"><code>@​ijjk</code></a>, <a
href="https://github.com/huozhi"><code>@​huozhi</code></a>, <a
href="https://github.com/kdy1"><code>@​kdy1</code></a>, <a
href="https://github.com/wbinnssmith"><code>@​wbinnssmith</code></a>,
and <a href="https://github.com/bgw"><code>@​bgw</code></a> for
helping!</p>
<h2>v15.3.1</h2>
<blockquote>
<p>[!NOTE]<br />
This release is backporting bug fixes. It does <strong>not</strong>
include all pending features/changes on canary.</p>
</blockquote>
<h3>Core Changes</h3>
<ul>
<li>chore: Backport SWC-based RC optimization (<a
href="https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next/issues/78260">#78260</a>)</li>
<li>fix: bump image-size@1.2.1 (<a
href="https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next/issues/78164">#78164</a>)</li>
</ul>
<h3>Credits</h3>
<p>Huge thanks to <a
href="https://github.com/kdy1"><code>@​kdy1</code></a> and <a
href="https://github.com/styfle"><code>@​styfle</code></a> for
helping!</p>
<h2>v15.3.1-canary.15</h2>
<h3>Core Changes</h3>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="3ab8db7383"><code>3ab8db7</code></a>
v15.3.3</li>
<li><a
href="d9ec4a4b57"><code>d9ec4a4</code></a>
v15.3.2</li>
<li><a
href="fa536cf2c9"><code>fa536cf</code></a>
v15.3.1</li>
<li><a
href="b2ff04995b"><code>b2ff049</code></a>
v15.3.0</li>
<li><a
href="60bfe64295"><code>60bfe64</code></a>
v15.3.0-canary.46</li>
<li><a
href="f71c4a1582"><code>f71c4a1</code></a>
v15.3.0-canary.45</li>
<li><a
href="4451bae75d"><code>4451bae</code></a>
v15.3.0-canary.44</li>
<li><a
href="87d7d8eb7a"><code>87d7d8e</code></a>
v15.3.0-canary.43</li>
<li><a
href="82ab39f801"><code>82ab39f</code></a>
v15.3.0-canary.42</li>
<li><a
href="8f1409d6ce"><code>8f1409d</code></a>
v15.3.0-canary.41</li>
<li>Additional commits viewable in <a
href="https://github.com/vercel/next.js/commits/v15.3.3/packages/eslint-config-next">compare
view</a></li>
</ul>
</details>
<br />

Updates `msw` from 2.8.7 to 2.9.0
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/mswjs/msw/releases">msw's
releases</a>.</em></p>
<blockquote>
<h2>v2.9.0 (2025-06-03)</h2>
<h3>Features</h3>
<ul>
<li>send <code>request</code> reference within the <code>RESPONSE</code>
event (<a
href="https://redirect.github.com/mswjs/msw/issues/2510">#2510</a>)
(425635161dddb3457eea37b996b41b7c731fc69f) <a
href="https://github.com/kettanaito"><code>@​kettanaito</code></a></li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="4fbc90a5ef"><code>4fbc90a</code></a>
chore(release): v2.9.0</li>
<li><a
href="425635161d"><code>4256351</code></a>
feat: send <code>request</code> reference within the
<code>RESPONSE</code> event (<a
href="https://redirect.github.com/mswjs/msw/issues/2510">#2510</a>)</li>
<li>See full diff in <a
href="https://github.com/mswjs/msw/compare/v2.8.7...v2.9.0">compare
view</a></li>
</ul>
</details>
<br />



Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-04 18:10:56 +00:00
Ubbe
d8cf62c8be fix(github): dependabot config (#10088)
### Changes 🏗️

The changes in my recent PR,
https://github.com/Significant-Gravitas/AutoGPT/pull/10072, moving from
`yarn1` to `pnpm` in the FE, [made dependabot
break](https://github.com/Significant-Gravitas/AutoGPT/runs/43465590584) 😭

I forgot: Dependabot supports pnpm implicitly,
https://github.com/Significant-Gravitas/AutoGPT/pull/10087/files (_you
can see it updated the `pnpm-lock.yaml` file correctly_), but you need
to set `npm` as the package ecosystem in the `dependabot.yml` config.
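
For reference, a sketch of the resulting config; the schedule shown is
an assumption:

```yaml
# dependabot.yml: pnpm is covered by the "npm" package ecosystem
version: 2
updates:
  - package-ecosystem: "npm" # also updates pnpm-lock.yaml
    directory: "/autogpt_platform/frontend"
    schedule:
      interval: "weekly" # assumed; use whatever cadence the repo prefers
```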

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
2025-06-04 17:21:25 +00:00
dependabot[bot]
7abe6eb328 chore(frontend/deps): Bump the production-dependencies group across 1 directory with 5 updates (#10087)
Bumps the production-dependencies group with 5 updates in the
/autogpt_platform/frontend directory:

| Package | From | To |
| --- | --- | --- |
| [@sentry/nextjs](https://github.com/getsentry/sentry-javascript) |
`9.24.0` | `9.26.0` |
| [@supabase/supabase-js](https://github.com/supabase/supabase-js) |
`2.49.9` | `2.49.10` |
| [framer-motion](https://github.com/motiondivision/motion) | `12.15.0`
| `12.16.0` |
|
[lucide-react](https://github.com/lucide-icons/lucide/tree/HEAD/packages/lucide-react)
| `0.510.0` | `0.513.0` |
| [zod](https://github.com/colinhacks/zod) | `3.25.48` | `3.25.51` |


Updates `@sentry/nextjs` from 9.24.0 to 9.26.0
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/getsentry/sentry-javascript/releases"><code>@​sentry/nextjs</code>'s
releases</a>.</em></p>
<blockquote>
<h2>9.26.0</h2>
<ul>
<li>feat(react-router): Re-export functions from
<code>@sentry/react</code> (<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/16465">#16465</a>)</li>
<li>fix(nextjs): Skip re instrumentating on generate phase of
experimental build mode (<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/16410">#16410</a>)</li>
<li>fix(node): Ensure adding sentry-trace and baggage headers via
SentryHttpInstrumentation doesn't crash (<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/16473">#16473</a>)</li>
</ul>
<h2>Bundle size 📦</h2>
<table>
<thead>
<tr>
<th>Path</th>
<th>Size</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>@​sentry/browser</code></td>
<td>23.43 KB</td>
</tr>
<tr>
<td><code>@​sentry/browser</code> - with treeshaking flags</td>
<td>23.2 KB</td>
</tr>
<tr>
<td><code>@​sentry/browser</code> (incl. Tracing)</td>
<td>37.44 KB</td>
</tr>
<tr>
<td><code>@​sentry/browser</code> (incl. Tracing, Replay)</td>
<td>74.69 KB</td>
</tr>
<tr>
<td><code>@​sentry/browser</code> (incl. Tracing, Replay) - with
treeshaking flags</td>
<td>67.96 KB</td>
</tr>
<tr>
<td><code>@​sentry/browser</code> (incl. Tracing, Replay with
Canvas)</td>
<td>79.33 KB</td>
</tr>
<tr>
<td><code>@​sentry/browser</code> (incl. Tracing, Replay, Feedback)</td>
<td>91.13 KB</td>
</tr>
<tr>
<td><code>@​sentry/browser</code> (incl. Feedback)</td>
<td>39.78 KB</td>
</tr>
<tr>
<td><code>@​sentry/browser</code> (incl. sendFeedback)</td>
<td>28.03 KB</td>
</tr>
<tr>
<td><code>@​sentry/browser</code> (incl. FeedbackAsync)</td>
<td>32.8 KB</td>
</tr>
<tr>
<td><code>@​sentry/react</code></td>
<td>25.15 KB</td>
</tr>
<tr>
<td><code>@​sentry/react</code> (incl. Tracing)</td>
<td>39.39 KB</td>
</tr>
<tr>
<td><code>@​sentry/vue</code></td>
<td>27.67 KB</td>
</tr>
<tr>
<td><code>@​sentry/vue</code> (incl. Tracing)</td>
<td>39.24 KB</td>
</tr>
<tr>
<td><code>@​sentry/svelte</code></td>
<td>23.45 KB</td>
</tr>
<tr>
<td>CDN Bundle</td>
<td>24.88 KB</td>
</tr>
<tr>
<td>CDN Bundle (incl. Tracing)</td>
<td>37.62 KB</td>
</tr>
<tr>
<td>CDN Bundle (incl. Tracing, Replay)</td>
<td>72.64 KB</td>
</tr>
<tr>
<td>CDN Bundle (incl. Tracing, Replay, Feedback)</td>
<td>77.93 KB</td>
</tr>
<tr>
<td>CDN Bundle - uncompressed</td>
<td>72.67 KB</td>
</tr>
<tr>
<td>CDN Bundle (incl. Tracing) - uncompressed</td>
<td>111.4 KB</td>
</tr>
<tr>
<td>CDN Bundle (incl. Tracing, Replay) - uncompressed</td>
<td>222.7 KB</td>
</tr>
<tr>
<td>CDN Bundle (incl. Tracing, Replay, Feedback) - uncompressed</td>
<td>235.22 KB</td>
</tr>
<tr>
<td><code>@​sentry/nextjs</code> (client)</td>
<td>41.02 KB</td>
</tr>
<tr>
<td><code>@​sentry/sveltekit</code> (client)</td>
<td>37.93 KB</td>
</tr>
<tr>
<td><code>@​sentry/node</code></td>
<td>146.56 KB</td>
</tr>
<tr>
<td><code>@​sentry/node</code> - without tracing</td>
<td>96.03 KB</td>
</tr>
<tr>
<td><code>@​sentry/aws-serverless</code></td>
<td>121.19 KB</td>
</tr>
</tbody>
</table>
<h2>9.25.1</h2>
<ul>
<li>fix(otel): Don't ignore child spans after the root is sent (<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/16416">#16416</a>)</li>
</ul>
<h2>Bundle size 📦</h2>
<table>
<thead>
<tr>
<th>Path</th>
<th>Size</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>@​sentry/browser</code></td>
<td>23.43 KB</td>
</tr>
<tr>
<td><code>@​sentry/browser</code> - with treeshaking flags</td>
<td>23.2 KB</td>
</tr>
<tr>
<td><code>@​sentry/browser</code> (incl. Tracing)</td>
<td>37.44 KB</td>
</tr>
<tr>
<td><code>@​sentry/browser</code> (incl. Tracing, Replay)</td>
<td>74.69 KB</td>
</tr>
<tr>
<td><code>@​sentry/browser</code> (incl. Tracing, Replay) - with
treeshaking flags</td>
<td>67.96 KB</td>
</tr>
</tbody>
</table>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/getsentry/sentry-javascript/blob/develop/CHANGELOG.md"><code>@​sentry/nextjs</code>'s
changelog</a>.</em></p>
<blockquote>
<h2>9.26.0</h2>
<ul>
<li>feat(react-router): Re-export functions from
<code>@sentry/react</code> (<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/16465">#16465</a>)</li>
<li>fix(nextjs): Skip re instrumentating on generate phase of
experimental build mode (<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/16410">#16410</a>)</li>
<li>fix(node): Ensure adding sentry-trace and baggage headers via
SentryHttpInstrumentation doesn't crash (<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/16473">#16473</a>)</li>
</ul>
<h2>9.25.1</h2>
<ul>
<li>fix(otel): Don't ignore child spans after the root is sent (<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/16416">#16416</a>)</li>
</ul>
<h2>9.25.0</h2>
<h3>Important Changes</h3>
<ul>
<li><strong>feat(browser): Add option to ignore <code>mark</code> and
<code>measure</code> spans (<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/16443">#16443</a>)</strong></li>
</ul>
<p>This release adds an option to <code>browserTracingIntegration</code>
that lets you ignore
<code>mark</code> and <code>measure</code> spans created from the
<code>performance.mark(...)</code> and
<code>performance.measure(...)</code> browser APIs:</p>
<pre lang="js"><code>Sentry.init({
  integrations: [
    Sentry.browserTracingIntegration({
ignorePerformanceApiSpans: ['measure-to-ignore', /mark-to-ignore/],
    }),
  ],
});
</code></pre>
<h3>Other Changes</h3>
<ul>
<li>feat(browser): Export getTraceData from the browser sdks (<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/16433">#16433</a>)</li>
<li>feat(node): Add <code>includeServerName</code> option (<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/16442">#16442</a>)</li>
<li>fix(nuxt): Remove setting <code>@sentry/nuxt</code> external (<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/16444">#16444</a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="94398daaae"><code>94398da</code></a>
release: 9.26.0</li>
<li><a
href="89b00f650d"><code>89b00f6</code></a>
Merge pull request <a
href="https://redirect.github.com/getsentry/sentry-javascript/issues/16480">#16480</a>
from getsentry/prepare-release/9.26.0</li>
<li><a
href="f2e28a3a40"><code>f2e28a3</code></a>
meta(changelog): Update changelog for 9.26.0</li>
<li><a
href="57268c14c9"><code>57268c1</code></a>
deps(react-router): Bump react-router version (<a
href="https://redirect.github.com/getsentry/sentry-javascript/issues/16478">#16478</a>)</li>
<li><a
href="a08cae29e4"><code>a08cae2</code></a>
fix(node): Ensure adding sentry-trace and baggage headers via
SentryHttpInstr...</li>
<li><a
href="ac22be2f5c"><code>ac22be2</code></a>
feat(react-router): Re-export functions from <code>@sentry/react</code>
(<a
href="https://redirect.github.com/getsentry/sentry-javascript/issues/16465">#16465</a>)</li>
<li><a
href="4806b8ab7f"><code>4806b8a</code></a>
fix(nextjs): Skip re instrumentating on generate phase of experimental
build ...</li>
<li><a
href="dcdf07485e"><code>dcdf074</code></a>
Merge pull request <a
href="https://redirect.github.com/getsentry/sentry-javascript/issues/16468">#16468</a>
from getsentry/master</li>
<li><a
href="f43737fcf6"><code>f43737f</code></a>
Merge branch 'release/9.25.1'</li>
<li><a
href="8047770d5b"><code>8047770</code></a>
release: 9.25.1</li>
<li>Additional commits viewable in <a
href="https://github.com/getsentry/sentry-javascript/compare/9.24.0...9.26.0">compare
view</a></li>
</ul>
</details>
<br />

Updates `@supabase/supabase-js` from 2.49.9 to 2.49.10
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/supabase/supabase-js/releases"><code>@​supabase/supabase-js</code>'s
releases</a>.</em></p>
<blockquote>
<h2>v2.49.10</h2>
<h2><a
href="https://github.com/supabase/supabase-js/compare/v2.49.9...v2.49.10">2.49.10</a>
(2025-06-04)</h2>
<h3>Bug Fixes</h3>
<ul>
<li>bump up realtime (<a
href="https://redirect.github.com/supabase/supabase-js/issues/1446">#1446</a>)
(<a
href="51a42c25a0">51a42c2</a>)</li>
</ul>
<h2>v2.49.10-next.2</h2>
<h2><a
href="https://github.com/supabase/supabase-js/compare/v2.49.10-next.1...v2.49.10-next.2">2.49.10-next.2</a>
(2025-06-04)</h2>
<h3>Bug Fixes</h3>
<ul>
<li>bump up realtime (<a
href="https://redirect.github.com/supabase/supabase-js/issues/1443">#1443</a>)
(<a
href="ed21283ee4">ed21283</a>)</li>
</ul>
<h2>v2.49.10-next.1</h2>
<h2><a
href="https://github.com/supabase/supabase-js/compare/v2.49.9...v2.49.10-next.1">2.49.10-next.1</a>
(2025-06-03)</h2>
<h3>Bug Fixes</h3>
<ul>
<li>bump realtime-js (<a
href="3083f9772c">3083f97</a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="51a42c25a0"><code>51a42c2</code></a>
fix: bump up realtime (<a
href="https://redirect.github.com/supabase/supabase-js/issues/1446">#1446</a>)</li>
<li><a
href="79b33ea18e"><code>79b33ea</code></a>
Next (<a
href="https://redirect.github.com/supabase/supabase-js/issues/1444">#1444</a>)</li>
<li><a
href="d4b854e7e8"><code>d4b854e</code></a>
merge next into master (<a
href="https://redirect.github.com/supabase/supabase-js/issues/1438">#1438</a>)</li>
<li>See full diff in <a
href="https://github.com/supabase/supabase-js/compare/v2.49.9...v2.49.10">compare
view</a></li>
</ul>
</details>
<br />

Updates `framer-motion` from 12.15.0 to 12.16.0
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/motiondivision/motion/blob/main/CHANGELOG.md">framer-motion's
changelog</a>.</em></p>
<blockquote>
<h2>[12.16.0] 2025-06-03</h2>
<h3>Added</h3>
<ul>
<li><code>resize()</code>.</li>
</ul>
<h2>[12.15.1] 2025-05-30</h2>
<h3>Fixed</h3>
<ul>
<li>Explicitly set layout animation velocity to zero to prevent
persistent <code>MotionValue</code> carrying through velocity.</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="99ab6a15b8"><code>99ab6a1</code></a>
v12.16.0</li>
<li><a
href="318d693b95"><code>318d693</code></a>
Updating window syntax</li>
<li><a
href="bd7dbd5335"><code>bd7dbd5</code></a>
Debouncing listeners</li>
<li><a
href="892d7462ee"><code>892d746</code></a>
Removing window resize event listener</li>
<li><a
href="2f2d8dec54"><code>2f2d8de</code></a>
Updating changelog</li>
<li><a
href="156cc56ca9"><code>156cc56</code></a>
Merge pull request <a
href="https://redirect.github.com/motiondivision/motion/issues/3242">#3242</a>
from motiondivision/feature/resize</li>
<li><a
href="c57f859e6c"><code>c57f859</code></a>
resize()</li>
<li><a
href="81417b20d1"><code>81417b2</code></a>
v12.15.1</li>
<li><a
href="41daff1691"><code>41daff1</code></a>
Updating changelog</li>
<li><a
href="92a1634105"><code>92a1634</code></a>
Merge pull request <a
href="https://redirect.github.com/motiondivision/motion/issues/3235">#3235</a>
from motiondivision/feature/reset-layout-velocity</li>
<li>Additional commits viewable in <a
href="https://github.com/motiondivision/motion/compare/v12.15.0...v12.16.0">compare
view</a></li>
</ul>
</details>
<br />

Updates `lucide-react` from 0.510.0 to 0.513.0
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/lucide-icons/lucide/releases">lucide-react's
releases</a>.</em></p>
<blockquote>
<h2>Version 0.513.0</h2>
<h2>What's Changed</h2>
<ul>
<li>feat(icons): Add sim card icon from lab by <a
href="https://github.com/ericfennis"><code>@​ericfennis</code></a> in <a
href="https://redirect.github.com/lucide-icons/lucide/pull/3275">lucide-icons/lucide#3275</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/lucide-icons/lucide/compare/0.512.0...0.513.0">https://github.com/lucide-icons/lucide/compare/0.512.0...0.513.0</a></p>
<h2>Version 0.512.0</h2>
<h2>What's Changed</h2>
<ul>
<li>feat(icons): added <code>circle-pound-sterling</code> icon by <a
href="https://github.com/lieonlion"><code>@​lieonlion</code></a> in <a
href="https://redirect.github.com/lucide-icons/lucide/pull/2822">lucide-icons/lucide#2822</a></li>
<li>build(deps-dev): bump vite from 6.3.2 to 6.3.4 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/lucide-icons/lucide/pull/3181">lucide-icons/lucide#3181</a></li>
<li>docs(docs): added testing website locally instructions by <a
href="https://github.com/briz123"><code>@​briz123</code></a> in <a
href="https://redirect.github.com/lucide-icons/lucide/pull/3124">lucide-icons/lucide#3124</a></li>
<li>build(deps-dev): bump vite from 6.0.7 to 6.1.6 by <a
href="https://github.com/dependabot"><code>@​dependabot</code></a> in <a
href="https://redirect.github.com/lucide-icons/lucide/pull/3236">lucide-icons/lucide#3236</a></li>
<li>fix(icons): changed <code>square-check-big</code> icon by <a
href="https://github.com/jguddas"><code>@​jguddas</code></a> in <a
href="https://redirect.github.com/lucide-icons/lucide/pull/3156">lucide-icons/lucide#3156</a></li>
<li>fix(icons): changed <code>list-collapse</code> icon by <a
href="https://github.com/jguddas"><code>@​jguddas</code></a> in <a
href="https://redirect.github.com/lucide-icons/lucide/pull/3081">lucide-icons/lucide#3081</a></li>
<li>fix(icons): changed <code>battery-*</code> icons by <a
href="https://github.com/jguddas"><code>@​jguddas</code></a> in <a
href="https://redirect.github.com/lucide-icons/lucide/pull/3083">lucide-icons/lucide#3083</a></li>
<li>fix(icons): changed <code>paperclip</code> icon by <a
href="https://github.com/jguddas"><code>@​jguddas</code></a> in <a
href="https://redirect.github.com/lucide-icons/lucide/pull/2956">lucide-icons/lucide#2956</a></li>
<li>fix(icons): changed <code>eraser</code> icon by <a
href="https://github.com/jguddas"><code>@​jguddas</code></a> in <a
href="https://redirect.github.com/lucide-icons/lucide/pull/3076">lucide-icons/lucide#3076</a></li>
<li>feat(icons): Add <code>cloud-check</code> icon by <a
href="https://github.com/lscheibel"><code>@​lscheibel</code></a> in <a
href="https://redirect.github.com/lucide-icons/lucide/pull/2612">lucide-icons/lucide#2612</a></li>
<li>feat(icon): add <code>id-card-lanyard</code> icon by <a
href="https://github.com/python2911"><code>@​python2911</code></a> in <a
href="https://redirect.github.com/lucide-icons/lucide/pull/2898">lucide-icons/lucide#2898</a></li>
<li>feat(angular): update peer dependencies for Angular to support
version 20.x by <a
href="https://github.com/JeevanMahesha"><code>@​JeevanMahesha</code></a>
in <a
href="https://redirect.github.com/lucide-icons/lucide/pull/3273">lucide-icons/lucide#3273</a></li>
<li>fix(icons): changed <code>file-badge</code> icon by <a
href="https://github.com/jguddas"><code>@​jguddas</code></a> in <a
href="https://redirect.github.com/lucide-icons/lucide/pull/2934">lucide-icons/lucide#2934</a></li>
<li>feat(icons): added <code>grid-3x2</code> icon by <a
href="https://github.com/qubrat"><code>@​qubrat</code></a> in <a
href="https://redirect.github.com/lucide-icons/lucide/pull/3216">lucide-icons/lucide#3216</a></li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a href="https://github.com/lieonlion"><code>@​lieonlion</code></a>
made their first contribution in <a
href="https://redirect.github.com/lucide-icons/lucide/pull/2822">lucide-icons/lucide#2822</a></li>
<li><a
href="https://github.com/python2911"><code>@​python2911</code></a> made
their first contribution in <a
href="https://redirect.github.com/lucide-icons/lucide/pull/2898">lucide-icons/lucide#2898</a></li>
<li><a
href="https://github.com/JeevanMahesha"><code>@​JeevanMahesha</code></a>
made their first contribution in <a
href="https://redirect.github.com/lucide-icons/lucide/pull/3273">lucide-icons/lucide#3273</a></li>
<li><a href="https://github.com/qubrat"><code>@​qubrat</code></a> made
their first contribution in <a
href="https://redirect.github.com/lucide-icons/lucide/pull/3216">lucide-icons/lucide#3216</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/lucide-icons/lucide/compare/0.511.0...0.512.0">https://github.com/lucide-icons/lucide/compare/0.511.0...0.512.0</a></p>
<h2>Version 0.511.0</h2>
<h2>What's Changed</h2>
<ul>
<li>fix(icons): Optimise a number of icons using
<code>&lt;line&gt;</code> and <code>&lt;polyline&gt;</code> by <a
href="https://github.com/jamiemlaw"><code>@​jamiemlaw</code></a> in <a
href="https://redirect.github.com/lucide-icons/lucide/pull/3168">lucide-icons/lucide#3168</a></li>
<li>fix(icons): changed <code>clock-6</code> icon by <a
href="https://github.com/jguddas"><code>@​jguddas</code></a> in <a
href="https://redirect.github.com/lucide-icons/lucide/pull/3209">lucide-icons/lucide#3209</a></li>
<li>fix(icons): changed <code>axis-3d</code> icon by <a
href="https://github.com/jguddas"><code>@​jguddas</code></a> in <a
href="https://redirect.github.com/lucide-icons/lucide/pull/3199">lucide-icons/lucide#3199</a></li>
<li>fix(icons): changed <code>chevrons-left-right-ellipsis</code> icon
by <a href="https://github.com/jguddas"><code>@​jguddas</code></a> in <a
href="https://redirect.github.com/lucide-icons/lucide/pull/3189">lucide-icons/lucide#3189</a></li>
<li>fix(icons): changed <code>square-code</code> icon by <a
href="https://github.com/jguddas"><code>@​jguddas</code></a> in <a
href="https://redirect.github.com/lucide-icons/lucide/pull/3173">lucide-icons/lucide#3173</a></li>
<li>fix(icons): changed <code>satellite</code> icon by <a
href="https://github.com/karsa-mistmere"><code>@​karsa-mistmere</code></a>
in <a
href="https://redirect.github.com/lucide-icons/lucide/pull/3180">lucide-icons/lucide#3180</a></li>
<li>fix(lucide-react-native): support react 19 (<a
href="https://github.com/lucide-icons/lucide/tree/HEAD/packages/lucide-react/issues/2951">#2951</a>)
by <a href="https://github.com/jvliwanag"><code>@​jvliwanag</code></a>
in <a
href="https://redirect.github.com/lucide-icons/lucide/pull/3126">lucide-icons/lucide#3126</a></li>
<li>fix(icons): changed <code>factory</code> icon by <a
href="https://github.com/karsa-mistmere"><code>@​karsa-mistmere</code></a>
in <a
href="https://redirect.github.com/lucide-icons/lucide/pull/2970">lucide-icons/lucide#2970</a></li>
<li>fix(icons): changed <code>university</code> icon by <a
href="https://github.com/karsa-mistmere"><code>@​karsa-mistmere</code></a>
in <a
href="https://redirect.github.com/lucide-icons/lucide/pull/2965">lucide-icons/lucide#2965</a></li>
<li>fix(icons): changed <code>warehouse</code> icon by <a
href="https://github.com/karsa-mistmere"><code>@​karsa-mistmere</code></a>
in <a
href="https://redirect.github.com/lucide-icons/lucide/pull/2966">lucide-icons/lucide#2966</a></li>
<li>fix(icons): changed <code>landmark</code> icon by <a
href="https://github.com/karsa-mistmere"><code>@​karsa-mistmere</code></a>
in <a
href="https://redirect.github.com/lucide-icons/lucide/pull/2967">lucide-icons/lucide#2967</a></li>
<li>chore(cspell): remove duplicate 'pilcrow' from
<code>custom-words.txt</code> by <a
href="https://github.com/Abdalrhman-Almarakeby"><code>@​Abdalrhman-Almarakeby</code></a>
in <a
href="https://redirect.github.com/lucide-icons/lucide/pull/3193">lucide-icons/lucide#3193</a></li>
<li>feat(icons): added <code>square-dashed-top-solid</code> icon by <a
href="https://github.com/juanpablofernandez"><code>@​juanpablofernandez</code></a>
in <a
href="https://redirect.github.com/lucide-icons/lucide/pull/3204">lucide-icons/lucide#3204</a></li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a href="https://github.com/jvliwanag"><code>@​jvliwanag</code></a>
made their first contribution in <a
href="https://redirect.github.com/lucide-icons/lucide/pull/3126">lucide-icons/lucide#3126</a></li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="19fa01b5fc"><code>19fa01b</code></a>
build(deps-dev): bump vite from 6.3.2 to 6.3.4 (<a
href="https://github.com/lucide-icons/lucide/tree/HEAD/packages/lucide-react/issues/3181">#3181</a>)</li>
<li>See full diff in <a
href="https://github.com/lucide-icons/lucide/commits/0.513.0/packages/lucide-react">compare
view</a></li>
</ul>
</details>
<br />

Updates `zod` from 3.25.48 to 3.25.51
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/colinhacks/zod/releases">zod's
releases</a>.</em></p>
<blockquote>
<h2>v3.25.51</h2>
<h2>Commits:</h2>
<ul>
<li>d7ffdfa73a800ea810218431d1dd751f15d0fba4 Remove _</li>
<li>50ef910565a14c127942442b7e09596afcfdca5f Add output type generic
test</li>
<li>eb14475c3ca14562c4bf11c2111a1fbfa3d114b6 Improve docs</li>
<li>32104c2801f01edac3fb3168017b09b6c43f3cef Improve extend docs</li>
<li>f67332f9fbcae13ce59dbb1eeb67f9c4c60bdcd9 Docs</li>
<li>8230237b3453b02bf34b81d0bc11b40d9868cd09 Standardize string format
continuability</li>
<li>c58bd9b0125881caee03a408eae88ec1dc5eb18b 3.25.51</li>
</ul>
<h2>v3.25.50</h2>
<h2>Commits:</h2>
<ul>
<li>5fdece94bf1ada76e5742f2755d45d3711a8e962 fix(v4): reflect inclusive
boundaries in minLength/maxLength issue objects (<a
href="https://redirect.github.com/colinhacks/zod/issues/4591">#4591</a>)</li>
<li>4897269451f0b0afeb2313389d20ffa1f22ce8b1 Restructure: mitigate
excessively deep errors (<a
href="https://redirect.github.com/colinhacks/zod/issues/4599">#4599</a>)</li>
<li>a88a080f0b9782a87b5732cb9cd96fc2b1a71794 Improve prototype
tests</li>
<li>c7833356a3211d32ae322739a7a6f66dce52ed5f 3.25.50</li>
</ul>
<h2>v3.25.49</h2>
<h2>Commits:</h2>
<ul>
<li>74458e9ccec6d0d4a7af02f66e463a07ee5cad91 docs: updated name and link
of <code>regle</code> library (<a
href="https://redirect.github.com/colinhacks/zod/issues/4601">#4601</a>)</li>
<li>5cc04f3685903c0e66a65b91f758115cfcead83d docs: fix some typos on the
&quot;Defining schemas&quot; page (<a
href="https://redirect.github.com/colinhacks/zod/issues/4595">#4595</a>)</li>
<li>9a81173ba28d70d856d8db2e5fe6daedc9241fa4 Add includes position regex
(<a
href="https://redirect.github.com/colinhacks/zod/issues/4602">#4602</a>)</li>
<li>fa7bee41ae5330aeb90b70a2b5aab228b2faa8ae fix(docs): remove
z.literal(Symbol) (<a
href="https://redirect.github.com/colinhacks/zod/issues/4587">#4587</a>)</li>
<li>df73cb08bc3362cd298be0873cec7c42310a89a1 Make z.custom issues
fatal</li>
<li>78e0eae30181cd75c987bcee98cceaf51dc62979 fix: add type to literal
enum (<a
href="https://redirect.github.com/colinhacks/zod/issues/4590">#4590</a>)</li>
<li>a73a3b3009735c6f82531393e65a82cad6b62e05 3.25.49</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="c58bd9b012"><code>c58bd9b</code></a>
3.25.51</li>
<li><a
href="8230237b34"><code>8230237</code></a>
Standardize string format continuability</li>
<li><a
href="f67332f9fb"><code>f67332f</code></a>
Docs</li>
<li><a
href="32104c2801"><code>32104c2</code></a>
Improve extend docs</li>
<li><a
href="eb14475c3c"><code>eb14475</code></a>
Improve docs</li>
<li><a
href="50ef910565"><code>50ef910</code></a>
Add output type generic test</li>
<li><a
href="d7ffdfa73a"><code>d7ffdfa</code></a>
Remove _</li>
<li><a
href="c7833356a3"><code>c783335</code></a>
3.25.50</li>
<li><a
href="a88a080f0b"><code>a88a080</code></a>
Improve prototype tests</li>
<li><a
href="4897269451"><code>4897269</code></a>
Restructure: mitigate excessively deep errors (<a
href="https://redirect.github.com/colinhacks/zod/issues/4599">#4599</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/colinhacks/zod/compare/v3.25.48...v3.25.51">compare
view</a></li>
</ul>
</details>
<br />


Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.


---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore <dependency name> major version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's major version (unless you unignore this specific
dependency's major version or upgrade to it yourself)
- `@dependabot ignore <dependency name> minor version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's minor version (unless you unignore this specific
dependency's minor version or upgrade to it yourself)
- `@dependabot ignore <dependency name>` will close this group update PR
and stop Dependabot creating any more for the specific dependency
(unless you unignore this specific dependency or upgrade to it yourself)
- `@dependabot unignore <dependency name>` will remove all of the ignore
conditions of the specified dependency
- `@dependabot unignore <dependency name> <ignore condition>` will
remove the ignore condition of the specified dependency and ignore
conditions


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-04 17:07:19 +00:00
Zamil Majdy
4b70e778d2 feat(backend): Add nested dynamic pin-name support (#10082)
Suppose we have a pin with type `list[list[int]]`, and we want to directly
insert a new value at the first index of the first list, e.g.
`list[0][0] = X`, through a dynamic pin. This is translated into
`list_$_0_$_0`, which the system did not previously support.

### Changes 🏗️

Add support for nested dynamic pins for list, object, and dict.
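
Below is a minimal sketch of how such a pin name could be parsed and applied. `parse_pin_name` and `set_nested` are hypothetical helper names; the only detail taken from the PR is the `_$_` separator:

```python
from typing import Any

def parse_pin_name(pin: str) -> tuple[str, list[str]]:
    """Split 'list_$_0_$_0' into ('list', ['0', '0'])."""
    base, *path = pin.split("_$_")
    return base, path

def set_nested(container: Any, path: list[str], value: Any) -> None:
    """Walk the path and assign the value at the leaf, e.g. container[0][0] = value."""
    for key in path[:-1]:
        container = container[int(key) if isinstance(container, list) else key]
    leaf = path[-1]
    container[int(leaf) if isinstance(container, list) else leaf] = value

data = {"list": [[1, 2], [3]]}
base, path = parse_pin_name("list_$_0_$_0")
set_nested(data[base], path, "X")  # data["list"][0][0] is now "X"
```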

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] lots of unit tests
- [x] Tried inserting the value directly into the nested `value` field of the
Google Sheets Write block.
<img width="371" alt="image"
src="https://github.com/user-attachments/assets/0a5e7213-b0e0-4fce-9e89-b39f7a583582"
/>
2025-06-04 16:32:32 +00:00
Reinier van der Leer
34009bc749 feat(frontend): Upgrade to Next.js v15 + update config (#10042)
- Resolves #10041

Upgrading to Next v15 isn't trivial, but it's a good idea, if not simply
necessary.

### Changes 🏗️

- Upgrade Next.js from `v14.2.26` to `v15.3.2`
- Fix usage of now-async APIs `params`, `searchParams`, `headers`
([docs](https://nextjs.org/docs/app/guides/upgrading/version-15#async-request-apis-breaking-change))

- Update Next+TypeScript configs
  - Set build target to ES2022 for better performance
  - Unignore TypeScript build errors
  - Set `hideSourceMaps: false` because OSS FTW :)
  - Remove obsolete `webpack.config.js`

- Fix existing warnings/errors
  - Fix Sentry missing navigation hook warning
- Fix `DYNAMIC_SERVER_USAGE` build error on `/profile` and
`/marketplace`
- Moved `getStoreData` to a proper server action `getMarketplaceData`
  - Fix breaking CSS syntax error in `customnode.css`

- Use Turbopack for faster dev+test build times
- Add separate build step to frontend CI workflow: this also fixes the
test timing out if the build takes too long

Other technical improvements:
- Wrap `handleSortChange` in `MarketplaceSearchPage` in `useCallback`
- Fix typing in `ProfileInfoForm`
- Improve output of frontend tests

> [!NOTE]
> Next prints this error (in dev mode):
> ```
> Error: Route "/marketplace" used `cookies().getAll()`. `cookies()`
should be awaited before using its value. Learn more:
https://nextjs.org/docs/messages/sync-dynamic-apis
>     at Object.getAll (src/lib/supabase/getServerSupabase.ts:16:31)
>     at getAll (../../src/cookies.ts:115:41)
> ...
> ```
> As far as I can see, this isn't breaking, and will become a warning in
prod. See also [the Next docs about this
issue](https://nextjs.org/docs/messages/sync-dynamic-apis)

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Follow the full release QA test script

#### For configuration changes:
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-04 15:38:08 +00:00
Ubbe
722c6bcc18 fix(storybook): make font load in Stories (#10081)
### Changes 🏗️

#### Before

<img width="800" alt="Screenshot 2025-06-03 at 16 54 36"
src="https://github.com/user-attachments/assets/2a69b69d-2b01-436e-aab3-8206485a001c"
/>

#### After

<img width="800" alt="Screenshot 2025-06-03 at 16 58 38"
src="https://github.com/user-attachments/assets/4daf41d4-42ce-4119-8e9f-b2b10b524cba"
/>



### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [ ] checkout this branch ( _we should have PR previews for the app and
Storybook_ )
  - [ ] `cd autogpt_platform/frontend`
  - [ ] `yarn storybook`
- [ ] the stories load with the right font (Poppins), not a serif one 😄

#### For configuration changes:
- [ ] ~~`.env.example` is updated or already compatible with my
changes~~
- [ ] ~~`docker-compose.yml` is updated or already compatible with my
changes~~
- [ ] ~~I have included a list of my configuration changes in the PR
description (under **Changes**)~~
2025-06-04 15:26:09 +00:00
Reinier van der Leer
eaf6da02d1 fix poetry.lock issue 2025-06-04 18:17:59 +02:00
Bently
d5d613e014 chore(backend): Downgrade poetry to 2.1.1 for dependabot (#10079)
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
2025-06-04 17:37:21 +02:00
Ubbe
73a3d980ca chore(frontend): move from yarn1 to pnpm (#10072)
## 🧢 Overview
This PR migrates the AutoGPT Platform frontend from [yarn
1](https://classic.yarnpkg.com/lang/en/) to [pnpm](https://pnpm.io/)
using **corepack** for automatic package manager management.

**yarn1** is no longer maintained and showing its age; by moving to **pnpm** we
get:
- Significantly faster install times,
- 💾 Better disk space efficiency,
- 🛠️ Better community support and maintenance,
- 💆🏽‍♂️ A very easy config swap

##  🏗️ Changes

### Package Management Migration

- updated [corepack](https://github.com/nodejs/corepack) to use
[pnpm](https://pnpm.io/)
- Deleted `yarn.lock` and generated new `pnpm-lock.yaml`
- Updated `.gitignore`

### Documentation Updates

- `frontend/README.md`: 
  - added comprehensive tech stack overview with links
  - updated all commands to use pnpm
  - added corepack setup instructions
  - and included migration disclaimer for yarn users
- `backend/README.md`: 
  - Updated installation instructions to use pnpm with corepack
- `AGENTS.md`: 
  - Updated testing commands from yarn to pnpm

### CI/CD & Infrastructure

- **GitHub Workflows** : 
  - updated all jobs to use pnpm with corepack enable
  - cleaned FE Playwright test workflow to avoid Sentry noise
- **Dockerfile**:
- updated to use pnpm with corepack, changed lock file reference, and
updated cache mount path

###  📋 Checklist

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  
  **Test Plan:**
  > assuming you are in the `frontend` folder
- [x] Clean installation works: `rm -rf node_modules && corepack enable
&& pnpm install`
  - [x] Development server starts correctly: `pnpm dev`
  - [x] Build process works: `pnpm build`
  - [x] Linting and formatting work: `pnpm lint` and `pnpm format`
  - [x] Type checking works: `pnpm type-check`
  - [x] Tests run successfully: `pnpm test`
  - [x] Storybook starts correctly: `pnpm storybook`
  - [x] Docker build succeeds with new pnpm configuration
  - [x] GitHub Actions workflow passes with pnpm commands

#### For configuration changes:
- [x] `.env.example` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)
2025-06-04 17:07:29 +04:00
Swifty
bac07b79e9 ci: Adds a GitHub Actions workflow to handle PR deployments based on comments and PR state (#10083)
This hotfix adds a GitHub Actions workflow to automate deployment and
undeployment of platform PRs to the development environment through
repository dispatch events.

### Changes 🏗️

- **Added new GitHub Actions workflow**:
`.github/workflows/platform-dev-deploy-event-dispatcher.yml`
  - Handles `!deploy` and `!undeploy` comment commands on PRs
- Implements permission checks (only owners, members, and collaborators
can deploy)
  - Automatically undeploys when PRs are closed with active deployments
- Dispatches events to the cloud infrastructure repository for actual
deployment/undeployment
  - Provides user feedback through GitHub comments

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Verify workflow triggers on PR comment creation
  - [x] Test permission validation for unauthorized users
  - [x] Confirm `!deploy` command dispatches correct event payload
  - [x] Confirm `!undeploy` command dispatches correct event payload
  - [x] Test auto-undeploy on PR closure with active deployment
  - [x] Verify appropriate GitHub comments are posted for each action

#### For configuration changes:
- [x] `.env.example` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)

**Configuration changes:**
- Requires `DISPATCH_TOKEN` secret to be configured for repository
dispatch to cloud infrastructure repo
- Workflow uses `issues: write` and `pull-requests: write` permissions
2025-06-04 12:49:15 +02:00
SwiftyOS
c8f2c7bc88 update event dispatcher 2025-06-04 12:40:29 +02:00
Zamil Majdy
0f558876e2 feat(blocks;frontend): Add file multipart upload support for SendWebRequestBlock & Improve key-value input UI rendering (#10058)
Now, SendWebRequestBlock can upload files. To make this work, we also
needed to improve the UI rendering of the key-value pair input so that it
can also render media/file uploads.

### Changes 🏗️

* Add file multipart upload support for SendWebRequestBlock 
* Improve key-value input UI rendering to allow rendering any type as a
normal input block (previously only string & number).

<img width="381" alt="image"
src="https://github.com/user-attachments/assets/b41d778d-8f9d-4aec-95b6-0b32bef50e89"
/>
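
As a rough sketch of what a multipart upload looks like at the HTTP level (illustrative only; the block's actual implementation, field names, and URL are assumptions):

```python
import requests

# Regular fields go in `data`, file parts in `files`; requests builds the
# multipart/form-data body and boundary automatically.
files = {"file": ("report.csv", open("report.csv", "rb"), "text/csv")}
data = {"description": "daily report"}  # hypothetical key-value field
response = requests.post("https://example.com/upload", files=files, data=data)
response.raise_for_status()
```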


### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Test running the HTTP request block and the key-value pair input block
2025-06-03 07:55:12 +00:00
Toran Bruce Richards
3f6585f763 feat(platform/blocks): add AI Image Editor Block powered by flux kontext (#10063)
<!-- Clearly explain the need for these changes: -->

This PR adds a new internal block, **AI Image Editor**, which enables
**text-based image editing** via BlackForest Labs’ Flux Kontext models
on Replicate. This block allows users to input a prompt and optionally a
reference image, and returns a transformed image URL. It supports two
model variants (Pro and Max), with different cost tiers. This
functionality will enhance multimedia capabilities across internal agent
workflows and support richer AI-powered image manipulation.

---

### Changes 🏗️

* Added `FluxKontextBlock` in `backend/blocks/flux_kontext.py`

  * Uses `ReplicateClient` to call Flux Kontext Pro or Max models
* Supports inputs for `prompt`, `input_image`, `aspect_ratio`, `seed`,
and `model`
  * Outputs transformed image URL or error
* Added credit pricing logic for Flux Kontext models to
`block_cost_config.py`:

  * Pro: 10 credits
  * Max: 20 credits
* Added documentation for the new block at
`docs/content/platform/blocks/flux_kontext.md`
* Updated block index at `docs/content/platform/blocks/blocks.md` to
include Flux Kontext
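
For a sense of the call shape, here is a hedged sketch using the public `replicate` Python client; the model slug, input fields, and values are assumptions, not the block's exact code:

```python
import replicate

# Assumed model slug and inputs; the block routes between Pro and Max variants.
output = replicate.run(
    "black-forest-labs/flux-kontext-pro",
    input={
        "prompt": "make the sky stormy",
        "input_image": "https://example.com/source.png",  # optional reference image
        "aspect_ratio": "1:1",
    },
)
print(output)  # URL of the transformed image
```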

---


![image](https://github.com/user-attachments/assets/0edb2b30-4c37-4184-bcc8-9d733658f620)


### Checklist 📋

#### For code changes:

* [x] I have clearly listed my changes in the PR description
* [x] I have made a test plan
* [x] I have tested my changes according to the test plan:

  <!-- Put your test plan here: -->

  * [x] Prompt-only input generates an image
  * [x] Prompt with image applies edit correctly
  * [x] Image respects specified aspect ratio
  * [x] Invalid image URL returns helpful error
  * [x] Using the same seed gives consistent output
* [x] Output chaining works: result URI can be used in downstream blocks
  * [x] Output from Max model shows higher fidelity than Pro

<details>
  <summary>Example test plan</summary>

* [x] Create from scratch and execute an agent using Flux Kontext with
at least 3 blocks
* [x] Import agent with Flux Kontext from file upload, and confirm
execution
* [x] Upload agent (with Flux Kontext block) to marketplace (internal
test)
* [x] Import agent from marketplace and confirm correct execution
* [x] Edit agent with Flux Kontext block from monitor and confirm output

</details>

#### For configuration changes:

* [x] `.env.example` is updated or already compatible with my changes
* [x] `docker-compose.yml` is updated or already compatible with my
changes
* [x] I have included a list of my configuration changes in the PR
description (under **Changes**)

  * No new environment variables or services introduced

<details>
  <summary>Examples of configuration changes</summary>

* N/A

</details>

---------

Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
2025-06-02 20:46:48 +00:00
JU-NINE NGU CHO
0ec557b942 docs: add system requirements section to README (#10054)
This PR adds a comprehensive system requirements section to the
README.md file. Currently, users don't have clear guidance on the
hardware, software, and network requirements needed to run AutoGPT. This
addition will help users determine if their system is capable of running
AutoGPT before attempting installation.

- Resolves #10050

### Changes 🏗️

- Added new "System Requirements" section under "How to Setup for
Self-Hosting" with:
  - Hardware Requirements
    - CPU specifications (4+ cores)
    - RAM requirements (8GB min, 16GB recommended)
    - Storage requirements (10GB minimum)
  - Software Requirements
    - Supported Operating Systems
    - Required software with minimum versions
    - Development tools requirements
  - Network Requirements
    - Internet connectivity requirements
    - Port access information
    - HTTPS connection requirements

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Verified README.md renders correctly on GitHub
  - [x] Confirmed all formatting is consistent
  - [x] Validated all requirements are accurate
  - [x] Checked section placement is logical
  - [x] Ensured no other files are modified

#### For configuration changes:
- [x] Not applicable - This PR only contains documentation changes to
README.md
- [x] No configuration files are modified in this update

Co-authored-by: Bently <Github@bentlybro.com>
2025-06-02 14:46:37 +00:00
Nicholas Tindle
453834f475 fix(docs): comment out segment about video because we removed video (#10073)
<!-- Clearly explain the need for these changes: -->
We removed the linked video because it was out of date, so we unlink it here


### Changes 🏗️

<!-- Concisely describe all of the changes made in this pull request:
-->
Comment out the video segment until a new one is posted

### Checklist 📋

#### For code changes:
- [x] no code changes made
2025-06-02 14:33:15 +00:00
SwiftyOS
768c6b1c97 fixed auto deploy script 2025-05-31 11:24:56 +02:00
SwiftyOS
eeb1764779 remove back tick on workflow 2025-05-31 10:51:22 +02:00
SwiftyOS
7c65e53d51 rename workflow 2025-05-31 10:43:47 +02:00
Swifty
56ddffeaa0 feat(ci): Add cross-repository dev deployment workflow (#10059)
### Description 📝

This PR introduces a GitHub Actions workflow that enables
cross-repository event dispatching for development environment
deployments. The workflow listens for specific PR events and dispatches
corresponding deployment/undeployment actions to our cloud
infrastructure repository.

**How it works:**
- The workflow triggers on PR events (opened, synchronized, closed) and
PR target events (labeled, unlabeled)
- When a PR comment containing `!deploy` is detected from authorized
users (PR author, repo owners, members, or collaborators), it dispatches
a deployment event
- When a PR with existing deployments is closed, it automatically
dispatches an undeployment event to clean up resources

**Interaction with target repository:**
The workflow dispatches events to
`Significant-Gravitas/AutoGPT_cloud_infrastructure` with a payload
containing:
- `action`: Either "deploy" or "undeploy"
- `pr_number`: The PR number for tracking
- `pr_title`: Human-readable identifier
- `pr_state`: Current PR state
- `repo`: Source repository name

This enables the infrastructure repository to spin up isolated
development environments for each PR on demand.

### Changes 🏗️
- Added `.github/workflows/dev-deploy-pr-dispatcher.yml` workflow file
- Implements secure cross-repository communication using repository
dispatch events
- Includes authorization checks to ensure only authorized users can
trigger deployments

### Checklist 📋

#### For code changes:
- [x] No code changes - this is a workflow addition only

#### For configuration changes:
- [x] New workflow file added that requires testing
- [x] Requires `DISPATCH_TOKEN` secret to be configured with appropriate
permissions for cross-repository dispatch
- [x] No environment variable changes needed
- [ ] Workflow will be tested after initial merge to verify proper event
dispatching
2025-05-31 10:39:57 +02:00
Nicholas Tindle
16d6f5377c fix(frontend): missed a password prompt (#10065)
<!-- Clearly explain the need for these changes: -->
CASA requires passwords to be at least 12 characters, which we did update.
When testing in dev, I realized I missed a few prompts.

### Changes 🏗️

<!-- Concisely describe all of the changes made in this pull request:
-->
updates a missed prompt

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Test manually, and read all the prompts carefully
2025-05-30 16:31:00 +00:00
Nicholas Tindle
85e108a37a feat(frontend): require passwords to be min length 12 (#10061)
<!-- Clearly explain the need for these changes: -->
We're doing CASA and this is a requirement

### Changes 🏗️
- Requires new passwords to be min length 12
<!-- Concisely describe all of the changes made in this pull request:
-->

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] test
2025-05-30 15:23:32 +00:00
Bently
692f32a350 fix(platform): Turnstile CAPTCHA reset after failed login attempts (#10056)
Users were unable to retry login attempts after a failed authentication
because the Turnstile CAPTCHA widget was not properly resetting. This
forced users to refresh the entire page to attempt login again, creating
a terrible user experience.

Root Cause: The useTurnstile hook had several critical issues:

- The reset() function only cleared state when shouldRender was true and
widget existed
- Widget ID tracking was unreliable due to intercepting
window.turnstile.render
- Token wasn't being cleared on verification failures
- State wasn't being reset consistently across error scenarios

### Changes 🏗️

<!-- Concisely describe all of the changes made in this pull request:
-->

- Fixed useTurnstile hook reset logic: Modified the reset() function to
always clear all state (token, verified, verifying, error) regardless of
shouldRender condition
- Improved widget ID synchronization: Added setWidgetId prop to the
Turnstile component interface and hook for reliable widget tracking
between component and hook
- Enhanced error handling: Updated handleVerify, handleExpire, and
handleError to properly reset tokens on failures
- Updated all auth components: Added setWidgetId prop to all Turnstile
component usages in login, signup, and password reset pages
- Removed unreliable widget tracking: Eliminated the
window.turnstile.render interception approach in favor of explicit
prop-based communication

### Checklist 📋

#### For code changes:

- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Test failed login attempt - CAPTCHA resets properly without page refresh
  - [x] Test failed signup attempt - CAPTCHA resets properly without page refresh
  - [x] Test successful login flow - CAPTCHA works normally
  - [x] Test CAPTCHA expiration - State resets correctly
  - [x] Test CAPTCHA error scenarios - Error handling works properly
2025-05-28 15:21:27 +00:00
Krzysztof Czerwinski
9f2b9d08c9 feat(platform): Add Run 10 agents wallet task (#9937)
### Changes 🏗️

This PR adds `Run 10 agents` step to wallet tasks that can be done by
running any agents 10 times either from Library or Builder (onboarding
agent run also counts).

- Merge `Finish onboarding` and `See results` steps into one in the
wallet
- User is redirected directly to onboarding agent runs in Library after
congrats screen
- Add `RUN_AGENTS` step and `agentRuns` integer to schema and related
migration
- Running agent from Library and Builder increments `agentRuns`
- Open NPS survey popup when 10 agents are run
- Fix resuming onboarding on login when unfinished
- Remove no longer needed `get-results.mp4` tutorial video

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Onboarding can be completed and proper reward is awarded
  - [x] `Run 10 agents` can be completed and reward is awarded
    - [x] When running different agents and the same agent
    - [x] Running from library and builder counts
  - [x] Onboarding is resumed to last finished step on login
2025-05-28 07:40:52 +00:00
Krzysztof Czerwinski
b91b026164 fix(platform): Restore See runs button on marketplace listing page (#9970)
This makes the button on the marketplace listing page show `See runs` if the
user has the agent in their library.

### Changes 🏗️

- Remove `/` from the related endpoint
- Use `active_version_id` instead of `store_listing_version_id` to check
for the library agent
- Fix `get_store_agent_details`

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - Log in and pick an agent that has never been in user library
  - [x] Button says `Add to library`
  - Add the agent and return to the listing page
  - [x] Button says `See runs`
  - Remove agent from library
  - [x] Button says `Add to library`
  - Add agent again
  - [x] Button says `See runs`

---------

Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
2025-05-28 07:15:42 +00:00
Zamil Majdy
35a5755958 Merge branch 'master' of github.com:Significant-Gravitas/AutoGPT into dev 2025-05-28 14:06:25 +07:00
Reinier van der Leer
b244726b20 fix(backend): Include webhook block executions in GraphExecution queries (#9984)
- Resolves #9752
- Follow-up fix to #9940

### Changes 🏗️

- `GRAPH_EXECUTION_INCLUDE` ->
`graph_execution_include(include_block_ids)`
- Add `get_io_block_ids()` and `get_webhook_block_ids()` to
`backend.data.blocks`

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [ ] I have tested my changes according to the test plan:
- [ ] Payload for webhook-triggered runs is shown on
`/library/agents/[id]`
2025-05-27 17:43:56 +00:00
itsababseh
3471781b98 fix(frontend/marketplace): require category selection (#10031)
## Summary
- require categories to be selected in PublishAgent popout

### Changes 🏗️
Makes category selection required before allowing an agent to be uploaded,
and adds a popup notification saying categories are missing.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
- [x] Try to upload an agent without setting categories: it will error and
show a message saying "Missing Required Fields, Please fill in: Categories"
- [x] Now try to upload an agent with the categories set, and it will work

Co-authored-by: Bently <Github@bentlybro.com>
2025-05-27 14:57:56 +00:00
Swifty
17e973a8cb fix(platform): Optimise Query Indexes (#10038)
# Query Optimization for AgentNodeExecution Tables

## Overview
This PR describes the database index optimizations applied to improve
the performance of slow queries in the AutoGPT platform backend.

## Problem Analysis
The following queries were identified as consuming significant database
resources:

### 1. Complex Filtering Query (19.3% of total time)
```sql
SELECT ... FROM "AgentNodeExecution" 
WHERE "agentNodeId" = $1 
  AND "agentGraphExecutionId" = $2 
  AND "executionStatus" = $3 
  AND "id" NOT IN (
    SELECT "referencedByInputExecId" 
    FROM "AgentNodeExecutionInputOutput" 
    WHERE "name" = $4 AND "referencedByInputExecId" IS NOT NULL
  )
ORDER BY "addedTime" ASC
```

### 2. Multi-table JOIN Query (8.9% of total time)
```sql
SELECT ... FROM "AgentNodeExecution" 
LEFT JOIN "AgentNode" ON ...
LEFT JOIN "AgentBlock" ON ...
WHERE "AgentBlock"."id" IN (...) 
  AND "executionStatus" != $11
  AND "agentGraphExecutionId" IN (...)
ORDER BY "queuedTime" DESC, "addedTime" DESC
```

### 3. Bulk Graph Execution Queries (multiple variations, ~10% combined)
Multiple queries filtering by `agentGraphExecutionId` with various
ordering requirements.

## Optimization Strategy

### 1. Composite Indexes for AgentNodeExecution

Added the following composite indexes to the `AgentNodeExecution` model:

```prisma
@@index([agentGraphExecutionId, agentNodeId, executionStatus])
@@index([addedTime, queuedTime])
```

#### Benefits:
- **Index 1**: Covers the exact WHERE clause of the complex filtering
query, allowing index-only scans
- **Index 2**: Optimizes queries filtering by graph execution and status
- **Index 3**: Supports efficient sorting when filtering by graph
execution
- **Index 4**: Optimizes ORDER BY operations on time fields

### 2. Composite Index for AgentNodeExecutionInputOutput

Added the following composite index:

```prisma
  // Input and Output pin names are unique for each AgentNodeExecution.
  @@unique([referencedByInputExecId, referencedByOutputExecId, name])
  @@index([referencedByOutputExecId])
  // Composite index for `upsert_execution_input`.
  @@index([name, time])
```

#### Benefits:
- Dramatically improves the NOT IN subquery performance in Query 1
- Allows the database to use an index scan instead of a full table scan
- Reduces the subquery execution time from O(n) to O(log n)

## Expected Performance Improvements

1. **Query 1 (19.3% of total time)**: 
   - Expected improvement: 80-90% reduction in execution time
- The composite index on `[agentNodeId, agentGraphExecutionId,
executionStatus]` will allow index-only scans
- The subquery will benefit from the new index on
`AgentNodeExecutionInputOutput`

2. **Query 2 (8.9% of total time)**:
   - Expected improvement: 50-70% reduction in execution time
- The `[agentGraphExecutionId, executionStatus]` index will reduce the
initial filtering cost

3. **Bulk Queries (10% combined)**:
   - Expected improvement: 60-80% reduction in execution time
- Composite indexes including time fields will optimize sorting
operations

## Migration Considerations

1. **Index Creation Time**: Creating these indexes on existing large
tables may take time
2. **Storage Impact**: Each index requires additional storage space
3. **Write Performance**: Slight decrease in INSERT/UPDATE performance
due to index maintenance


## Additional Optimizations

### NotificationEvent Table Index

Added index for notification batch queries:

```prisma
@@index([userNotificationBatchId])
```

This optimizes the query:
```sql
SELECT ... FROM "NotificationEvent" 
WHERE "userNotificationBatchId" IN (...)
```

#### Benefits:
- Eliminates full table scans when filtering by batch ID
- Improves query performance from O(n) to O(log n)
- Particularly beneficial for users with many notification events

## Future Optimizations

Consider these additional optimizations if needed:
1. Partitioning `AgentNodeExecution` table by `createdAt` or
`agentGraphExecutionId`
2. Implementing materialized views for frequently accessed aggregate
data
3. Adding covering indexes for specific query patterns
4. Implementing query result caching at the application level

---------

Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
2025-05-27 09:20:36 +00:00
Reinier van der Leer
8e2fb2daa4 feat(backend): Speed up graph create/update (#10025)
- Resolves #10024

Caching the repeated DB calls by the graph lifecycle hooks significantly
speeds up graph update/create calls with many authenticated blocks
(~300ms saved per authenticated block)

### Changes 🏗️

- Add and use `IntegrationCredentialsManager.cached_getter(user_id)` in
lifecycle hooks
- Split `refresh_if_needed(..)` method out of
`IntegrationCredentialsManager.get(..)`
- Simplify interface of lifecycle hooks: change `get_credentials`
parameter to `user_id`
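
A hedged sketch of the caching pattern follows; the method names mirror the PR, but the signatures and internals here are guesses:

```python
from functools import lru_cache

class IntegrationCredentialsManager:
    def get(self, user_id: str, credentials_id: str):
        creds = self._fetch_from_db(user_id, credentials_id)  # assumed helper
        return self.refresh_if_needed(creds)

    def refresh_if_needed(self, creds):
        # Refresh OAuth tokens only when close to expiry (details omitted).
        return creds

    def cached_getter(self, user_id: str):
        """Return a getter that memoizes DB hits for repeated credential lookups."""
        @lru_cache(maxsize=None)
        def getter(credentials_id: str):
            return self.get(user_id, credentials_id)
        return getter

    def _fetch_from_db(self, user_id, credentials_id):
        ...  # stand-in for the real DB call
```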

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Save a graph with nodes with credentials
2025-05-26 09:59:27 +00:00
Reinier van der Leer
767d2f2c1e dx(backend): Disable pre-commit pytest hooks (#10003)
Running the tests via pre-commit takes a lot of time and leaves test data
behind in the DB, making them impractical to actually run locally.
I'm disabling the `pytest` hooks in the pre-commit config so the
pre-commit checks can reasonably be used without significant negative
impact to DX.

This doesn't impact UX and there is nothing to test.
2025-05-25 12:40:59 +00:00
Reinier van der Leer
45578136e3 feat(frontend): Page-specific titles (#9995)
- Resolves #8656

Instead of "NextGen AutoGPT", make page titles like "My Test Agent -
Library - AutoGPT Platform", "Settings - AutoGPT Platform", "Builder -
AutoGPT Platform".

### Changes 🏗️

- Add specific page titles to `/library`, `/library/agents/[id]`,
`/build`, `/profile`, `/profile/api_keys`
- Fix page titles on `/marketplace`, `/profile/settings`

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Go to `/marketplace` and check the page title
  - [x] Go to `/library` and check the page title
  - [x] Go to `/library/agents/[id]` and check the page title
  - [x] Go to `/build` and check the page title
  - [x] Go to `/profile` and check the page title
  - [x] Go to `/profile/settings` and check the page title
  - [x] Go to `/profile/api_keys` and check the page title
  - [ ] ~~Go to `/profile/dashboard` and check the page title~~
  - [ ] ~~Go to `/profile/integrations` and check the page title~~
  - [ ] ~~Go to `/profile/credits` and check the page title~~
2025-05-25 05:52:51 +00:00
Reinier van der Leer
a51af36296 feat(blocks/exa): Fix Exa blocks error reporting (#10020)
Exa blocks currently just return an empty list when they fail.

## Changes
- Add `error` output field where missing on Exa blocks
- Don't yield empty results when a request fails
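
The resulting pattern, sketched under the assumption that a block's `run` yields `(output_name, value)` pairs (the client and function names here are illustrative):

```python
def run_search(client, query: str):
    """Yield ('results', ...) on success, or ('error', ...) on failure."""
    try:
        results = client.search(query)
    except Exception as exc:
        yield "error", str(exc)  # surface the failure instead of an empty list
        return
    yield "results", results
```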

## Testing
- `ruff check autogpt_platform/backend/backend/blocks/exa/search.py
autogpt_platform/backend/backend/blocks/exa/contents.py
autogpt_platform/backend/backend/blocks/exa/similar.py --fix`
- `black autogpt_platform/backend/backend/blocks/exa/search.py
autogpt_platform/backend/backend/blocks/exa/contents.py
autogpt_platform/backend/backend/blocks/exa/similar.py`
- `isort autogpt_platform/backend/backend/blocks/exa/search.py
autogpt_platform/backend/backend/blocks/exa/contents.py
autogpt_platform/backend/backend/blocks/exa/similar.py`
- `pre-commit run --files
autogpt_platform/backend/backend/blocks/exa/search.py
autogpt_platform/backend/backend/blocks/exa/contents.py
autogpt_platform/backend/backend/blocks/exa/similar.py` *(fails: redis
connection errors)*
2025-05-24 16:17:03 +00:00
Reinier van der Leer
5518c2e9a2 fix(frontend): Fix global <body> styling and base fonts (#9574)
Base styling currently being fragmented between `layout.tsx` and
`globals.css` is causing some styling (e.g. application background
color) to be incorrectly overridden.

### Changes 🏗️

- Remove background color override from `<body>`
- Move `<body>` classes from `layout.tsx` to `globals.css`
- Remove background color from elements that shouldn't have their own
background color
- Remove `font-neue`, `font-inter`; replace by Geist (`font-sans`) where
necessary

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Effective background color of application is `#FAFAFA` like before
  - [x] Default font is Geist
  - [x] Everything looks okay
2025-05-24 15:25:26 +00:00
Bently
dc981b52a3 feat(Platform): add claude 4 sonnet and opus models to platform (#10018)
This adds the latest claude 4 opus and sonnet to the platform

https://www.anthropic.com/news/claude-4
2025-05-22 19:16:25 +00:00
ograce1421
61643e6a47 fix(frontend): Top Agents header spacing (#10002)
Changed the section header for "Top Agents" to include a 24px margin.
I have not tested this; an engineer needs to test / look at this.

## Summary
- set `margin` default to 24px in `AgentsSection`
- apply the bottom margin via an inline style

## Testing
- `npm test` *(fails: playwright not found)*
- `npm run lint` *(fails: next not found)*


### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Test via deployment to the dev branch and verify by designer

---------

Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
2025-05-22 16:12:08 +00:00
Reinier van der Leer
21b4d272ce feat(frontend/library): Replace "Loading..." by loading spinners (#9993)
- Resolves #9992

### Changes 🏗️

- Use `<LoadingBox>` instead of "Loading..." on `/library/agents/[id]`

![2025-05-20 23 26
vivaldi](https://github.com/user-attachments/assets/6fe8ce60-c249-4e4c-b3f1-eea925b003d3)


### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Designer approves based on screencapture
2025-05-22 15:58:32 +00:00
Toran Bruce Richards
b8ba572629 Fix AddMemoryBlock JSON serialization error (#10013)
This pull request refines the handling of `input_data.content` and
improves error message formatting in the `run` method of `mem0.py`. The
changes enhance robustness and clarity in the code.

### Handling `input_data.content`:

* Updated the `run` method to handle `Content` objects explicitly, ensuring
proper formatting of messages when `input_data.content` is of type
`Content`. Additionally, non-standard types are now converted to strings for
consistent handling (`backend/blocks/mem0.py`).
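
A minimal sketch of that normalization, assuming a `Content` type that wraps a text payload (the real type in `mem0.py` may differ):

```python
from dataclasses import dataclass

@dataclass
class Content:  # stand-in for the block's Content type (assumed shape)
    text: str

def normalize_content(content) -> str:
    """Coerce message content to a plain string before JSON serialization."""
    if isinstance(content, Content):
        return content.text
    if isinstance(content, str):
        return content
    return str(content)  # non-standard types are stringified for consistency
```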

### Error message formatting:

* Simplified the error message formatting by removing the unnecessary
`object=` keyword in the `str()` conversion of exceptions
(`backend/blocks/mem0.py`).

## Summary
- fix AddMemoryBlock so `Content` input uses the underlying string
- improve error handling in Mem0 AddMemoryBlock

## Testing
- `ruff check autogpt_platform/backend/backend/blocks/mem0.py`
- `pre-commit run --files
autogpt_platform/backend/backend/blocks/mem0.py` *(fails: unable to
fetch remote hooks)*
- `poetry run pytest -k AddMemoryBlock -q` *(fails: Error 111 connecting
to localhost:6379)*

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Payload for webhook-triggered runs is shown on `/library/agents/[id]`
2025-05-22 15:54:18 +00:00
Nicholas Tindle
47deeb53c3 docs(platform): update AGENTS instructions (#10016)
## Summary
- refine contribution instructions in `autogpt_platform/AGENTS.md`

## Testing
- `pre-commit` *(fails to fetch hooks due to no network access)*

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Docs-only change

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-05-22 15:54:02 +00:00
Zamil Majdy
1b81a7c755 fix(blocks): Error messages from SendWebRequestBlock use the requested translated IP instead of the original URL (#10009)
### Changes 🏗️

Keep the original URL when an HTTP error occurs in
`SendWebRequestBlock`.
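
Illustrative sketch of the idea (the URLs, IP-pinned form, and variable names are assumptions): errors should mention the URL the user supplied, not the IP-substituted form that was actually requested:

```python
import requests

original_url = "https://example.com/hook"  # what the user entered
pinned_url = "http://93.184.216.34/hook"   # hypothetical IP-pinned form

try:
    response = requests.post(pinned_url, timeout=10)
    response.raise_for_status()
except requests.RequestException as exc:
    # Report against the original URL so the message is meaningful to the user.
    raise RuntimeError(f"Request to {original_url} failed: {exc}") from exc
```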

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Test sending POST request on a web that doesn't support POST
request using `SendWebRequestBlock`.
2025-05-22 15:46:01 +00:00
Zamil Majdy
793d056d81 fix(backend/executor): Make executor continuously running and retrying message consumption (#9999)
The executor can sometimes become dangling when it stops consuming messages
but its process is not fully killed. This PR avoids such a scenario by simply
retrying the consumption continuously.

### Changes 🏗️

Introduced a `continuous_retry` decorator and applied it to executor message
consumption, as sketched below.
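
A minimal sketch of such a decorator; the name follows the PR, but the backoff policy and logging are assumptions:

```python
import functools
import logging
import time

logger = logging.getLogger(__name__)

def continuous_retry(retry_delay: float = 1.0):
    """Keep re-running the wrapped function forever, logging each failure."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            while True:
                try:
                    return func(*args, **kwargs)
                except Exception as exc:
                    logger.exception("%s failed, retrying: %s", func.__name__, exc)
                    time.sleep(retry_delay)
        return wrapper
    return decorator

@continuous_retry(retry_delay=5.0)
def consume_messages():
    ...  # pull and execute messages from the queue
```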

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Run executor service and execute some agents.
2025-05-22 13:40:41 +01:00
Zamil Majdy
8f1b3eb8ba fix(backend/executor): Make executor continuously running and retrying message consumption (#9999)
The executor can sometimes become dangling when it stops consuming messages
but its process is not fully killed. This PR avoids such a scenario by simply
retrying the consumption continuously.

### Changes 🏗️

Introduced a `continuous_retry` decorator and applied it to executor message
consumption.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Run executor service and execute some agents.
2025-05-22 12:11:08 +00:00
Reinier van der Leer
73ee6e272a fix(backend): Unbreak UserIntegrations parsing for missing None values (#9994)
Makes all optional fields on `Credentials` models actually optional, and
sets `exclude_none=True` on the corresponding `model_dump`.

This is a hotfix: after running the `aryshare-revid` branch on the dev
deployment, there is some data in the DB that isn't valid for the
`UserIntegrations` model on the `dev` branch (see
[here](https://github.com/Significant-Gravitas/AutoGPT/pull/9946#discussion_r2098428575)).

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] This fix worked on the `aryshare-revid` branch:
52b6d9696b
2025-05-20 23:50:19 +00:00
Reinier van der Leer
f466b010e4 fix(backend): Unbreak URL handling for GitHub blocks (#9989)
- Resolves #9987

### Changes 🏗️

- Split `pin_url(..)` out of `validate_url(..)` and call
`extra_url_validator` in between
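
A hedged sketch of the resulting flow; the signatures and checks are assumptions, and the real validators in the backend are more thorough:

```python
import ipaddress
import socket
from urllib.parse import urlparse

def validate_url(url: str) -> str:
    """Basic shape/scheme checks (the real validator does much more)."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https"):
        raise ValueError(f"Unsupported scheme: {parsed.scheme}")
    return url

def pin_url(url: str) -> str:
    """Swap the hostname for its resolved IP so the request can't be rebound."""
    host = urlparse(url).hostname
    ip = socket.gethostbyname(host)
    if ipaddress.ip_address(ip).is_private:
        raise ValueError("Refusing to request a private address")
    return url.replace(host, ip, 1)

url = validate_url("https://api.github.com/repos")
# extra_url_validator(url) would run here, between validation and pinning
pinned = pin_url(url)
```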

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] GitHub Read Pull Request Block works with "Include PR Changes"
enabled
2025-05-20 22:54:39 +00:00
Nicholas Tindle
f8965e530f ref(frontend/admin): fix location of spending page (#9991)
### Changes 🏗️
Moves the route path for spending
drops min

<!-- Concisely describe all of the changes made in this pull request:
-->

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] test locally

---------

Co-authored-by: Bently <Github@bentlybro.com>
2025-05-20 21:43:38 +00:00
Zamil Majdy
5e7b66da90 fix(backend): Disable health check for scheduler service from the api server 2025-05-20 18:04:33 +01:00
Zamil Majdy
701d283f69 fix(backend): Disable health check for scheduler service from the api server 2025-05-20 18:04:10 +01:00
Zamil Majdy
1bc4a48d53 fix(backend): Remove cleaner on graph executor exit 2025-05-20 17:45:37 +01:00
Zamil Majdy
47c1a64cc2 fix(backend): Remove cleaner on graph executor exit 2025-05-20 17:45:19 +01:00
Reinier van der Leer
cf9cf4e7dd refactor(frontend): Move OttoChatWidget out of root layout (#9951)
- Resolves #9950

### Changes 🏗️

- Move `<OttoChatWidget>` from root layout into `FlowEditor`
- Pass graph info directly into `OttoChatWidget` instead of using
`useAgentGraph`
- Rearrange z-indices of elements in the builder

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - Go to `/build`
  - [x] -> chat widget should show up in the bottom right corner
  - Open the widget and ask Otto something
  - [x] -> should work normally
  - Add a few blocks and save the graph
  - [x] -> "Include graph data" should show up
  - Click "Include graph data" and ask Otto something about your graph
  - [x] -> Otto should be aware of the graph structure and metadata
2025-05-20 13:38:09 +00:00
Reinier van der Leer
0a79e1c5fd feat(frontend/library): Show toast on WebSocket (dis|re)connect (#9949)
- Resolves #9941
- Follow-up to #9935

### Changes 🏗️

- Show toast when WS connection (dis|re)connects (on `/library/agents/[id]`)
  - Implement `BackendAPI.onWebSocketDisconnect`

Related improvements:
- Clean up WebSocket state management & logging in `BackendAPI`
- Clean up & split loading spinner implementation: `Spinner` -> `LoadingBox` + `LoadingSpinner`

Also, unrelated:
- fix(frontend/library): Add 2 second debounce to page refresh logic
  This eliminates 3 triple API calls (so 9 -> 3 total) on page load: `GET /library/agents/{agent_id}`, `GET /graphs/{graph_id}/executions`, and `GET /graphs/{graph_id}/executions/{exec_id}`

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - Start the frontend and backend applications (locally)
  - Navigate to `/library/agents/[id]`
  - Kill the backend
  - [x] -> a toast should appear "Connection to server was lost"
  - [x] -> this toast should be shown as long as the server is down
  - Re-start the backend
  - [x] -> toast should change to show "Connection re-established"
  - [x] -> toast should now disappear after 2 seconds

---

Co-authored-by: Krzysztof Czerwinski <kpczerwinski@gmail.com>
2025-05-19 23:17:06 +02:00
Reinier van der Leer
ac532ca4b9 fix(backend): Graph execution update on terminate (#9952)
Resolves #9947

### Changes 🏗️

Backend:
- Send a graph execution update after terminating a run
- Don't wipe the graph execution stats when not passed in to `update_graph_execution_stats`

Frontend:
- Don't hide the output of stopped runs

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - Go to `/library/agents/[id]`
  - Run an agent that takes a while (long enough to click stop and see the effect)
  - Hit stop after it has executed a few nodes
  - [x] -> run status should change to "Stopped"
  - [x] -> run stats (steps, duration, cost) should stay the same or increase only one last time
  - [x] -> output so far should be visible
  - [x] -> shown information should stay the same after refreshing the page

---

Co-authored-by: Krzysztof Czerwinski <34861343+kcze@users.noreply.github.com>
2025-05-19 23:15:39 +02:00
Zamil Majdy
694f701194 fix(backend): Force process exit on execution manager cleanup 2025-05-19 16:48:34 +01:00
Zamil Majdy
aa2c2c1ad2 fix(backend): Force process exit on execution manager cleanup 2025-05-19 16:47:56 +01:00
dependabot[bot]
bd425331f1 chore(frontend/deps): Update 35 dependencies to latest minor versions (#9953)
Bumps the production-dependencies group with 35 updates in the
/autogpt_platform/frontend directory:

| Package | From | To |
| --- | --- | --- |
| [@faker-js/faker](https://github.com/faker-js/faker) | `9.6.0` | `9.8.0` |
| [@next/third-parties](https://github.com/vercel/next.js/tree/HEAD/packages/third-parties) | `15.2.1` | `15.3.2` |
| [@radix-ui/react-alert-dialog](https://github.com/radix-ui/primitives) | `1.1.6` | `1.1.13` |
| [@radix-ui/react-avatar](https://github.com/radix-ui/primitives) | `1.1.3` | `1.1.9` |
| [@radix-ui/react-checkbox](https://github.com/radix-ui/primitives) | `1.1.4` | `1.3.1` |
| [@radix-ui/react-collapsible](https://github.com/radix-ui/primitives) | `1.1.3` | `1.1.10` |
| [@radix-ui/react-context-menu](https://github.com/radix-ui/primitives) | `2.2.6` | `2.2.14` |
| [@radix-ui/react-dropdown-menu](https://github.com/radix-ui/primitives) | `2.1.6` | `2.1.14` |
| [@radix-ui/react-label](https://github.com/radix-ui/primitives) | `2.1.2` | `2.1.6` |
| [@radix-ui/react-popover](https://github.com/radix-ui/primitives) | `1.1.6` | `1.1.13` |
| [@radix-ui/react-radio-group](https://github.com/radix-ui/primitives) | `1.2.3` | `1.3.6` |
| [@radix-ui/react-scroll-area](https://github.com/radix-ui/primitives) | `1.2.3` | `1.2.8` |
| [@radix-ui/react-select](https://github.com/radix-ui/primitives) | `2.1.6` | `2.2.4` |
| [@radix-ui/react-separator](https://github.com/radix-ui/primitives) | `1.1.2` | `1.1.6` |
| [@radix-ui/react-switch](https://github.com/radix-ui/primitives) | `1.1.3` | `1.2.4` |
| [@radix-ui/react-tabs](https://github.com/radix-ui/primitives) | `1.1.4` | `1.1.11` |
| [@radix-ui/react-toast](https://github.com/radix-ui/primitives) | `1.2.6` | `1.2.13` |
| [@radix-ui/react-tooltip](https://github.com/radix-ui/primitives) | `1.1.8` | `1.2.6` |
| [@sentry/nextjs](https://github.com/getsentry/sentry-javascript) | `9.10.1` | `9.19.0` |
| [@supabase/ssr](https://github.com/supabase/ssr) | `0.5.2` | `0.6.1` |
| [@supabase/supabase-js](https://github.com/supabase/supabase-js) | `2.49.1` | `2.49.4` |
| [@tanstack/react-table](https://github.com/TanStack/table/tree/HEAD/packages/react-table) | `8.21.2` | `8.21.3` |
| [@xyflow/react](https://github.com/xyflow/xyflow/tree/HEAD/packages/react) | `12.4.2` | `12.6.4` |
| [cmdk](https://github.com/pacocoursey/cmdk/tree/HEAD/cmdk) | `1.0.4` | `1.1.1` |
| [dotenv](https://github.com/motdotla/dotenv) | `16.4.7` | `16.5.0` |
| [embla-carousel-react](https://github.com/davidjerleke/embla-carousel) | `8.5.2` | `8.6.0` |
| [framer-motion](https://github.com/motiondivision/motion) | `12.4.11` | `12.12.1` |
| [geist](https://github.com/vercel/geist-font/tree/HEAD/packages/next) | `1.3.1` | `1.4.2` |
| [launchdarkly-react-client-sdk](https://github.com/launchdarkly/react-client-sdk) | `3.6.1` | `3.7.0` |
| [lucide-react](https://github.com/lucide-icons/lucide/tree/HEAD/packages/lucide-react) | `0.479.0` | `0.510.0` |
| [next-themes](https://github.com/pacocoursey/next-themes) | `0.4.5` | `0.4.6` |
| [react-day-picker](https://github.com/gpbl/react-day-picker) | `9.6.1` | `9.7.0` |
| [react-hook-form](https://github.com/react-hook-form/react-hook-form) | `7.54.2` | `7.56.3` |
| [recharts](https://github.com/recharts/recharts) | `2.15.1` | `2.15.3` |
| [zod](https://github.com/colinhacks/zod) | `3.24.2` | `3.24.4` |


Updates `@faker-js/faker` from 9.6.0 to 9.8.0
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/faker-js/faker/releases"><code>@​faker-js/faker</code>'s
releases</a>.</em></p>
<blockquote>
<h2>v9.8.0</h2>
<h2>What's Changed</h2>
<ul>
<li>feat(locale): add country code for en_CA &amp; fr_CA by <a
href="https://github.com/alixlahuec"><code>@​alixlahuec</code></a> in <a
href="https://redirect.github.com/faker-js/faker/pull/3476">faker-js/faker#3476</a></li>
<li>test: use validator@13.15.0 with isULID, isISO31661Numeric,
isISO15924 by <a
href="https://github.com/matthewmayer"><code>@​matthewmayer</code></a>
in <a
href="https://redirect.github.com/faker-js/faker/pull/3482">faker-js/faker#3482</a></li>
<li>feat(locale): add zh_CN food by <a
href="https://github.com/yyz945947732"><code>@​yyz945947732</code></a>
in <a
href="https://redirect.github.com/faker-js/faker/pull/3479">faker-js/faker#3479</a></li>
<li>docs: more than 70 locales by <a
href="https://github.com/matthewmayer"><code>@​matthewmayer</code></a>
in <a
href="https://redirect.github.com/faker-js/faker/pull/3483">faker-js/faker#3483</a></li>
<li>feat(locale): update zh_CN location by <a
href="https://github.com/yyz945947732"><code>@​yyz945947732</code></a>
in <a
href="https://redirect.github.com/faker-js/faker/pull/3481">faker-js/faker#3481</a></li>
<li>feat(locale): update zh_CN animal by <a
href="https://github.com/yyz945947732"><code>@​yyz945947732</code></a>
in <a
href="https://redirect.github.com/faker-js/faker/pull/3480">faker-js/faker#3480</a></li>
<li>refactor(locale): ko state data update by <a
href="https://github.com/seoahan"><code>@​seoahan</code></a> in <a
href="https://redirect.github.com/faker-js/faker/pull/3487">faker-js/faker#3487</a></li>
<li>feat(locale): add zh_CN book by <a
href="https://github.com/yyz945947732"><code>@​yyz945947732</code></a>
in <a
href="https://redirect.github.com/faker-js/faker/pull/3477">faker-js/faker#3477</a></li>
<li>feat(locale): add Japanese date and month definitions by <a
href="https://github.com/matsueushi"><code>@​matsueushi</code></a> in <a
href="https://redirect.github.com/faker-js/faker/pull/3492">faker-js/faker#3492</a></li>
<li>feat(locale): add vehicle locale data for Japanese by <a
href="https://github.com/noritaka1166"><code>@​noritaka1166</code></a>
in <a
href="https://redirect.github.com/faker-js/faker/pull/3490">faker-js/faker#3490</a></li>
<li>feat(locale): update Japanese company categories by <a
href="https://github.com/noritaka1166"><code>@​noritaka1166</code></a>
in <a
href="https://redirect.github.com/faker-js/faker/pull/3489">faker-js/faker#3489</a></li>
<li>feat(locale): add Japanese science locale data including elements
and units by <a
href="https://github.com/noritaka1166"><code>@​noritaka1166</code></a>
in <a
href="https://redirect.github.com/faker-js/faker/pull/3491">faker-js/faker#3491</a></li>
<li>feat(locale): add Japanese sex definitions for person locale by <a
href="https://github.com/noritaka1166"><code>@​noritaka1166</code></a>
in <a
href="https://redirect.github.com/faker-js/faker/pull/3495">faker-js/faker#3495</a></li>
<li>refactor(locale): rename pt-BR streetSuffix to streetPrefix by <a
href="https://github.com/glmchalita"><code>@​glmchalita</code></a> in <a
href="https://redirect.github.com/faker-js/faker/pull/3493">faker-js/faker#3493</a></li>
<li>feat(locale): update zh_CN word by <a
href="https://github.com/yyz945947732"><code>@​yyz945947732</code></a>
in <a
href="https://redirect.github.com/faker-js/faker/pull/3478">faker-js/faker#3478</a></li>
<li>refactor(locale): normalize internet data by <a
href="https://github.com/xDivisionByZerox"><code>@​xDivisionByZerox</code></a>
in <a
href="https://redirect.github.com/faker-js/faker/pull/3502">faker-js/faker#3502</a></li>
<li>chore(deps): update eslint by <a
href="https://github.com/renovate"><code>@​renovate</code></a> in <a
href="https://redirect.github.com/faker-js/faker/pull/3500">faker-js/faker#3500</a></li>
<li>chore(deps): update dependency eslint-plugin-unicorn to v59 by <a
href="https://github.com/renovate"><code>@​renovate</code></a> in <a
href="https://redirect.github.com/faker-js/faker/pull/3501">faker-js/faker#3501</a></li>
<li>chore(deps): update vitest by <a
href="https://github.com/renovate"><code>@​renovate</code></a> in <a
href="https://redirect.github.com/faker-js/faker/pull/3497">faker-js/faker#3497</a></li>
<li>chore(deps): update cypress/browsers docker tag to v24 by <a
href="https://github.com/renovate"><code>@​renovate</code></a> in <a
href="https://redirect.github.com/faker-js/faker/pull/3507">faker-js/faker#3507</a></li>
<li>chore(deps): update all non-major dependencies by <a
href="https://github.com/renovate"><code>@​renovate</code></a> in <a
href="https://redirect.github.com/faker-js/faker/pull/3498">faker-js/faker#3498</a></li>
<li>chore(deps): update
mcr.microsoft.com/devcontainers/typescript-node:22 docker digest to
fb211a0 by <a
href="https://github.com/renovate"><code>@​renovate</code></a> in <a
href="https://redirect.github.com/faker-js/faker/pull/3508">faker-js/faker#3508</a></li>
<li>feat(locale): Add additional Japanese last names to the locale data
by <a
href="https://github.com/noritaka1166"><code>@​noritaka1166</code></a>
in <a
href="https://redirect.github.com/faker-js/faker/pull/3484">faker-js/faker#3484</a></li>
<li>chore(deps): update eslint by <a
href="https://github.com/renovate"><code>@​renovate</code></a> in <a
href="https://redirect.github.com/faker-js/faker/pull/3511">faker-js/faker#3511</a></li>
<li>chore(deps): lock file maintenance by <a
href="https://github.com/renovate"><code>@​renovate</code></a> in <a
href="https://redirect.github.com/faker-js/faker/pull/3486">faker-js/faker#3486</a></li>
<li>fix(locale): ko modified street_name to street_name_part by <a
href="https://github.com/seoahan"><code>@​seoahan</code></a> in <a
href="https://redirect.github.com/faker-js/faker/pull/3485">faker-js/faker#3485</a></li>
<li>fix(locale): correct Japanese country names by <a
href="https://github.com/matsueushi"><code>@​matsueushi</code></a> in <a
href="https://redirect.github.com/faker-js/faker/pull/3510">faker-js/faker#3510</a></li>
<li>chore(release): 9.8.0 by <a
href="https://github.com/fakerjs-bot"><code>@​fakerjs-bot</code></a> in
<a
href="https://redirect.github.com/faker-js/faker/pull/3515">faker-js/faker#3515</a></li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a
href="https://github.com/alixlahuec"><code>@​alixlahuec</code></a> made
their first contribution in <a
href="https://redirect.github.com/faker-js/faker/pull/3476">faker-js/faker#3476</a></li>
<li><a
href="https://github.com/yyz945947732"><code>@​yyz945947732</code></a>
made their first contribution in <a
href="https://redirect.github.com/faker-js/faker/pull/3479">faker-js/faker#3479</a></li>
<li><a href="https://github.com/seoahan"><code>@​seoahan</code></a> made
their first contribution in <a
href="https://redirect.github.com/faker-js/faker/pull/3487">faker-js/faker#3487</a></li>
<li><a
href="https://github.com/noritaka1166"><code>@​noritaka1166</code></a>
made their first contribution in <a
href="https://redirect.github.com/faker-js/faker/pull/3490">faker-js/faker#3490</a></li>
<li><a
href="https://github.com/glmchalita"><code>@​glmchalita</code></a> made
their first contribution in <a
href="https://redirect.github.com/faker-js/faker/pull/3493">faker-js/faker#3493</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/faker-js/faker/compare/v9.7.0...v9.8.0">https://github.com/faker-js/faker/compare/v9.7.0...v9.8.0</a></p>
<h2>v9.7.0</h2>
<h2>What's Changed</h2>
<ul>
<li>feat(locale): Add bn_BD locale by <a
href="https://github.com/AbrarShahriar"><code>@​AbrarShahriar</code></a>
in <a
href="https://redirect.github.com/faker-js/faker/pull/3439">faker-js/faker#3439</a></li>
<li>fix(airline): Air France and KLM Royal Dutch Airlines by <a
href="https://github.com/chimurai"><code>@​chimurai</code></a> in <a
href="https://redirect.github.com/faker-js/faker/pull/3440">faker-js/faker#3440</a></li>
<li>infra(comment-issue): fix display of thumbs up emoji by <a
href="https://github.com/xDivisionByZerox"><code>@​xDivisionByZerox</code></a>
in <a
href="https://redirect.github.com/faker-js/faker/pull/3444">faker-js/faker#3444</a></li>
<li>feat(locale): add localize sex support for zh_CN &amp; zh_TW by <a
href="https://github.com/sd44"><code>@​sd44</code></a> in <a
href="https://redirect.github.com/faker-js/faker/pull/3450">faker-js/faker#3450</a></li>
<li>fix(iban): more strict pattern for IE and PS by <a
href="https://github.com/xDivisionByZerox"><code>@​xDivisionByZerox</code></a>
in <a
href="https://redirect.github.com/faker-js/faker/pull/3464">faker-js/faker#3464</a></li>
<li>chore(deps): update devdependencies by <a
href="https://github.com/renovate"><code>@​renovate</code></a> in <a
href="https://redirect.github.com/faker-js/faker/pull/3456">faker-js/faker#3456</a></li>
<li>chore(deps): update all non-major dependencies by <a
href="https://github.com/renovate"><code>@​renovate</code></a> in <a
href="https://redirect.github.com/faker-js/faker/pull/3457">faker-js/faker#3457</a></li>
<li>chore(deps): update dependency prettier to v3.5.3 by <a
href="https://github.com/renovate"><code>@​renovate</code></a> in <a
href="https://redirect.github.com/faker-js/faker/pull/3455">faker-js/faker#3455</a></li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/faker-js/faker/blob/next/CHANGELOG.md"><code>@​faker-js/faker</code>'s
changelog</a>.</em></p>
<blockquote>
<h2><a
href="https://github.com/faker-js/faker/compare/v9.7.0...v9.8.0">9.8.0</a>
(2025-05-13)</h2>
<h3>New Locales</h3>
<ul>
<li><strong>locale:</strong> Add additional Japanese last names to the
locale data (<a
href="https://redirect.github.com/faker-js/faker/issues/3484">#3484</a>)
(<a
href="72e66c3a3a">72e66c3</a>)</li>
<li><strong>locale:</strong> add Japanese date and month definitions (<a
href="https://redirect.github.com/faker-js/faker/issues/3492">#3492</a>)
(<a
href="b70e7934b7">b70e793</a>)</li>
<li><strong>locale:</strong> add Japanese science locale data including
elements and units (<a
href="https://redirect.github.com/faker-js/faker/issues/3491">#3491</a>)
(<a
href="54fd5519e9">54fd551</a>)</li>
<li><strong>locale:</strong> add Japanese sex definitions for person
locale (<a
href="https://redirect.github.com/faker-js/faker/issues/3495">#3495</a>)
(<a
href="1dbd8fa511">1dbd8fa</a>)</li>
<li><strong>locale:</strong> add vehicle locale data for Japanese (<a
href="https://redirect.github.com/faker-js/faker/issues/3490">#3490</a>)
(<a
href="dfadb1da74">dfadb1d</a>)</li>
<li><strong>locale:</strong> add zh_CN book (<a
href="https://redirect.github.com/faker-js/faker/issues/3477">#3477</a>)
(<a
href="786a3d0bd8">786a3d0</a>)</li>
<li><strong>locale:</strong> add zh_CN food (<a
href="https://redirect.github.com/faker-js/faker/issues/3479">#3479</a>)
(<a
href="6c883e74b8">6c883e7</a>)</li>
<li><strong>locale:</strong> update Japanese company categories (<a
href="https://redirect.github.com/faker-js/faker/issues/3489">#3489</a>)
(<a
href="8c0953a261">8c0953a</a>)</li>
<li><strong>locale:</strong> update zh_CN animal (<a
href="https://redirect.github.com/faker-js/faker/issues/3480">#3480</a>)
(<a
href="38ee7b81a8">38ee7b8</a>)</li>
<li><strong>locale:</strong> update zh_CN location (<a
href="https://redirect.github.com/faker-js/faker/issues/3481">#3481</a>)
(<a
href="456f10276b">456f102</a>)</li>
<li><strong>locale:</strong> update zh_CN word (<a
href="https://redirect.github.com/faker-js/faker/issues/3478">#3478</a>)
(<a
href="aa98867765">aa98867</a>)</li>
</ul>
<h3>Changed Locales</h3>
<ul>
<li><strong>locale:</strong> ko state data update (<a
href="https://redirect.github.com/faker-js/faker/issues/3487">#3487</a>)
(<a
href="b611ec2e51">b611ec2</a>)</li>
<li><strong>locale:</strong> normalize internet data (<a
href="https://redirect.github.com/faker-js/faker/issues/3502">#3502</a>)
(<a
href="e6151e4efd">e6151e4</a>)</li>
<li><strong>locale:</strong> rename pt-BR streetSuffix to streetPrefix
(<a
href="https://redirect.github.com/faker-js/faker/issues/3493">#3493</a>)
(<a
href="7c23db316e">7c23db3</a>)</li>
</ul>
<h3>Bug Fixes</h3>
<ul>
<li><strong>locale:</strong> correct Japanese country names (<a
href="https://redirect.github.com/faker-js/faker/issues/3510">#3510</a>)
(<a
href="046bb81558">046bb81</a>)</li>
<li><strong>locale:</strong> correct the name of element Lv in Japanese
(<a
href="https://redirect.github.com/faker-js/faker/issues/3509">#3509</a>)
(<a
href="6a7ef4c2ad">6a7ef4c</a>)</li>
<li><strong>locale:</strong> ko modified street_name to street_name_part
(<a
href="https://redirect.github.com/faker-js/faker/issues/3485">#3485</a>)
(<a
href="c15da8efec">c15da8e</a>)</li>
</ul>
<h2><a
href="https://github.com/faker-js/faker/compare/v9.6.0...v9.7.0">9.7.0</a>
(2025-04-13)</h2>
<h3>New Locales</h3>
<ul>
<li><strong>locale:</strong> Add bn_BD locale (<a
href="https://redirect.github.com/faker-js/faker/issues/3439">#3439</a>)
(<a
href="fef0ad7859">fef0ad7</a>)</li>
<li><strong>locale:</strong> add cy locale, start with date (<a
href="https://redirect.github.com/faker-js/faker/issues/3462">#3462</a>)
(<a
href="f70a6f7a65">f70a6f7</a>)</li>
<li><strong>locale:</strong> add finance support for ja locale (<a
href="https://redirect.github.com/faker-js/faker/issues/3449">#3449</a>)
(<a
href="b2c5298c94">b2c5298</a>)</li>
<li><strong>locale:</strong> add localize sex support for zh_CN &amp;
zh_TW (<a
href="https://redirect.github.com/faker-js/faker/issues/3450">#3450</a>)
(<a
href="048c32581b">048c325</a>)</li>
<li><strong>locale:</strong> add Tamil language support (<a
href="https://redirect.github.com/faker-js/faker/issues/3468">#3468</a>)
(<a
href="cdf6dc4a97">cdf6dc4</a>)</li>
</ul>
<h3>Bug Fixes</h3>
<ul>
<li><strong>airline:</strong> Air France and KLM Royal Dutch Airlines
(<a
href="https://redirect.github.com/faker-js/faker/issues/3440">#3440</a>)
(<a
href="8a2d168f62">8a2d168</a>)</li>
<li><strong>iban:</strong> more strict pattern for IE and PS (<a
href="https://redirect.github.com/faker-js/faker/issues/3464">#3464</a>)
(<a
href="7b12056713">7b12056</a>)</li>
<li><strong>locale:</strong> rename ja and zh_CN company affix files (<a
href="https://redirect.github.com/faker-js/faker/issues/3448">#3448</a>)
(<a
href="1e551c5f47">1e551c5</a>)</li>
<li><strong>number:</strong> don't ignore multipleOf in float when
min=max (<a
href="https://redirect.github.com/faker-js/faker/issues/3417">#3417</a>)
(<a
href="e4cc4e50d1">e4cc4e5</a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="549d71cf33"><code>549d71c</code></a>
chore(release): 9.8.0 (<a
href="https://redirect.github.com/faker-js/faker/issues/3515">#3515</a>)</li>
<li><a
href="046bb81558"><code>046bb81</code></a>
fix(locale): correct Japanese country names (<a
href="https://redirect.github.com/faker-js/faker/issues/3510">#3510</a>)</li>
<li><a
href="c15da8efec"><code>c15da8e</code></a>
fix(locale): ko modified street_name to street_name_part (<a
href="https://redirect.github.com/faker-js/faker/issues/3485">#3485</a>)</li>
<li><a
href="d6ba4cc1b1"><code>d6ba4cc</code></a>
chore(deps): lock file maintenance (<a
href="https://redirect.github.com/faker-js/faker/issues/3486">#3486</a>)</li>
<li><a
href="2faf57b01c"><code>2faf57b</code></a>
chore(deps): update eslint (<a
href="https://redirect.github.com/faker-js/faker/issues/3511">#3511</a>)</li>
<li><a
href="6a7ef4c2ad"><code>6a7ef4c</code></a>
fix(locale): correct the name of element Lv in Japanese (<a
href="https://redirect.github.com/faker-js/faker/issues/3509">#3509</a>)</li>
<li><a
href="72e66c3a3a"><code>72e66c3</code></a>
feat(locale): Add additional Japanese last names to the locale data (<a
href="https://redirect.github.com/faker-js/faker/issues/3484">#3484</a>)</li>
<li><a
href="c0d5217e10"><code>c0d5217</code></a>
chore(deps): update mcr.microsoft.com/devcontainers/typescript-node:22
docker...</li>
<li><a
href="049875d283"><code>049875d</code></a>
chore(deps): update all non-major dependencies (<a
href="https://redirect.github.com/faker-js/faker/issues/3498">#3498</a>)</li>
<li><a
href="07087ff6ef"><code>07087ff</code></a>
chore(deps): update cypress/browsers docker tag to v24 (<a
href="https://redirect.github.com/faker-js/faker/issues/3507">#3507</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/faker-js/faker/compare/v9.6.0...v9.8.0">compare
view</a></li>
</ul>
</details>
<details>
<summary>Maintainer changes</summary>
<p>This version was pushed to npm by <a
href="https://www.npmjs.com/~faker-bot">faker-bot</a>, a new releaser
for <code>@​faker-js/faker</code> since your current version.</p>
</details>
<br />

Updates `@next/third-parties` from 15.2.1 to 15.3.2
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/vercel/next.js/releases"><code>@​next/third-parties</code>'s
releases</a>.</em></p>
<blockquote>
<h2>v15.3.2</h2>
<blockquote>
<p>[!NOTE]<br />
This release is backporting bug fixes. It does <strong>not</strong>
include all pending features/changes on canary.</p>
</blockquote>
<h3>Core Changes</h3>
<ul>
<li>backport: fix(turbopack): Store persistence of wrapped task on
RawVc::LocalOutput (<a
href="https://github.com/vercel/next.js/tree/HEAD/packages/third-parties/issues/78488">#78488</a>)
(<a
href="https://github.com/vercel/next.js/tree/HEAD/packages/third-parties/issues/78883">#78883</a>)</li>
<li><code>@​next/mdx</code>: Use stable turbopack config options (<a
href="https://github.com/vercel/next.js/tree/HEAD/packages/third-parties/issues/78880">#78880</a>)</li>
<li>Fix react-compiler: Fix detection of interest (<a
href="https://github.com/vercel/next.js/tree/HEAD/packages/third-parties/issues/78879">#78879</a>)</li>
<li>Fix turbopack: Backport sourcemap bugfix (<a
href="https://github.com/vercel/next.js/tree/HEAD/packages/third-parties/issues/78881">#78881</a>)</li>
<li>[next-server] preserve rsc query for rsc redirects (<a
href="https://github.com/vercel/next.js/tree/HEAD/packages/third-parties/issues/78876">#78876</a>)</li>
<li>Update middleware public/static matching (<a
href="https://github.com/vercel/next.js/tree/HEAD/packages/third-parties/issues/78875">#78875</a>)</li>
</ul>
<h3>Credits</h3>
<p>Huge thanks to <a
href="https://github.com/ijjk"><code>@​ijjk</code></a>, <a
href="https://github.com/huozhi"><code>@​huozhi</code></a>, <a
href="https://github.com/kdy1"><code>@​kdy1</code></a>, <a
href="https://github.com/wbinnssmith"><code>@​wbinnssmith</code></a>,
and <a href="https://github.com/bgw"><code>@​bgw</code></a> for
helping!</p>
<h2>v15.3.1</h2>
<blockquote>
<p>[!NOTE]<br />
This release is backporting bug fixes. It does <strong>not</strong>
include all pending features/changes on canary.</p>
</blockquote>
<h3>Core Changes</h3>
<ul>
<li>chore: Backport SWC-based RC optimization (<a
href="https://github.com/vercel/next.js/tree/HEAD/packages/third-parties/issues/78260">#78260</a>)</li>
<li>fix: bump image-size@1.2.1 (<a
href="https://github.com/vercel/next.js/tree/HEAD/packages/third-parties/issues/78164">#78164</a>)</li>
</ul>
<h3>Credits</h3>
<p>Huge thanks to <a
href="https://github.com/kdy1"><code>@​kdy1</code></a> and <a
href="https://github.com/styfle"><code>@​styfle</code></a> for
helping!</p>
<h2>v15.3.1-canary.15</h2>
<h3>Core Changes</h3>
<ul>
<li>[Turbopack] refactor persistent caching from log based to cow
approach: <a
href="https://github.com/vercel/next.js/tree/HEAD/packages/third-parties/issues/76234">#76234</a></li>
</ul>
<h3>Misc Changes</h3>
<ul>
<li>fix(turbo-tasks-fs): Handle filesystem watcher rescan events: <a
href="https://github.com/vercel/next.js/tree/HEAD/packages/third-parties/issues/78045">#78045</a></li>
</ul>
<h3>Credits</h3>
<p>Huge thanks to <a
href="https://github.com/bgw"><code>@​bgw</code></a> and <a
href="https://github.com/sokra"><code>@​sokra</code></a> for
helping!</p>
<h2>v15.3.1-canary.14</h2>
<h3>Core Changes</h3>
<ul>
<li>Add graceful error boundary for bots requests: <a
href="https://github.com/vercel/next.js/tree/HEAD/packages/third-parties/issues/78298">#78298</a></li>
<li>make sure eslint-plugin-next is built when running 'pnpm dev': <a
href="https://github.com/vercel/next.js/tree/HEAD/packages/third-parties/issues/78305">#78305</a></li>
<li>Migrate pages API routes to handler interface: <a
href="https://github.com/vercel/next.js/tree/HEAD/packages/third-parties/issues/78166">#78166</a></li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="d9ec4a4b57"><code>d9ec4a4</code></a>
v15.3.2</li>
<li><a
href="fa536cf2c9"><code>fa536cf</code></a>
v15.3.1</li>
<li><a
href="b2ff04995b"><code>b2ff049</code></a>
v15.3.0</li>
<li><a
href="60bfe64295"><code>60bfe64</code></a>
v15.3.0-canary.46</li>
<li><a
href="f71c4a1582"><code>f71c4a1</code></a>
v15.3.0-canary.45</li>
<li><a
href="4451bae75d"><code>4451bae</code></a>
v15.3.0-canary.44</li>
<li><a
href="87d7d8eb7a"><code>87d7d8e</code></a>
v15.3.0-canary.43</li>
<li><a
href="82ab39f801"><code>82ab39f</code></a>
v15.3.0-canary.42</li>
<li><a
href="8f1409d6ce"><code>8f1409d</code></a>
v15.3.0-canary.41</li>
<li><a
href="2139369821"><code>2139369</code></a>
v15.3.0-canary.40</li>
<li>Additional commits viewable in <a
href="https://github.com/vercel/next.js/commits/v15.3.2/packages/third-parties">compare
view</a></li>
</ul>
</details>
<br />

Updates `@radix-ui/react-alert-dialog` from 1.1.6 to 1.1.13
<details>
<summary>Commits</summary>
<ul>
<li>See full diff in <a
href="https://github.com/radix-ui/primitives/commits">compare
view</a></li>
</ul>
</details>
<br />

Updates `@radix-ui/react-avatar` from 1.1.3 to 1.1.9
<details>
<summary>Commits</summary>
<ul>
<li>See full diff in <a
href="https://github.com/radix-ui/primitives/commits">compare
view</a></li>
</ul>
</details>
<br />

Updates `@radix-ui/react-checkbox` from 1.1.4 to 1.3.1
<details>
<summary>Commits</summary>
<ul>
<li>See full diff in <a
href="https://github.com/radix-ui/primitives/commits">compare
view</a></li>
</ul>
</details>
<br />

Updates `@radix-ui/react-collapsible` from 1.1.3 to 1.1.10
<details>
<summary>Commits</summary>
<ul>
<li>See full diff in <a
href="https://github.com/radix-ui/primitives/commits">compare
view</a></li>
</ul>
</details>
<br />

Updates `@radix-ui/react-context-menu` from 2.2.6 to 2.2.14
<details>
<summary>Commits</summary>
<ul>
<li>See full diff in <a
href="https://github.com/radix-ui/primitives/commits">compare
view</a></li>
</ul>
</details>
<br />

Updates `@radix-ui/react-dialog` from 1.1.6 to 1.1.13
<details>
<summary>Commits</summary>
<ul>
<li>See full diff in <a
href="https://github.com/radix-ui/primitives/commits">compare
view</a></li>
</ul>
</details>
<br />

Updates `@radix-ui/react-dropdown-menu` from 2.1.6 to 2.1.14
<details>
<summary>Commits</summary>
<ul>
<li>See full diff in <a
href="https://github.com/radix-ui/primitives/commits">compare
view</a></li>
</ul>
</details>
<br />

Updates `@radix-ui/react-label` from 2.1.2 to 2.1.6
<details>
<summary>Commits</summary>
<ul>
<li>See full diff in <a
href="https://github.com/radix-ui/primitives/commits">compare
view</a></li>
</ul>
</details>
<br />

Updates `@radix-ui/react-popover` from 1.1.6 to 1.1.13
<details>
<summary>Commits</summary>
<ul>
<li>See full diff in <a
href="https://github.com/radix-ui/primitives/commits">compare
view</a></li>
</ul>
</details>
<br />

Updates `@radix-ui/react-radio-group` from 1.2.3 to 1.3.6
<details>
<summary>Commits</summary>
<ul>
<li>See full diff in <a
href="https://github.com/radix-ui/primitives/commits">compare
view</a></li>
</ul>
</details>
<br />

Updates `@radix-ui/react-scroll-area` from 1.2.3 to 1.2.8
<details>
<summary>Commits</summary>
<ul>
<li>See full diff in <a
href="https://github.com/radix-ui/primitives/commits">compare
view</a></li>
</ul>
</details>
<br />

Updates `@radix-ui/react-select` from 2.1.6 to 2.2.4
<details>
<summary>Commits</summary>
<ul>
<li>See full diff in <a
href="https://github.com/radix-ui/primitives/commits">compare
view</a></li>
</ul>
</details>
<br />

Updates `@radix-ui/react-separator` from 1.1.2 to 1.1.6
<details>
<summary>Commits</summary>
<ul>
<li>See full diff in <a
href="https://github.com/radix-ui/primitives/commits">compare
view</a></li>
</ul>
</details>
<br />

Updates `@radix-ui/react-slot` from 1.1.2 to 1.2.2
<details>
<summary>Commits</summary>
<ul>
<li>See full diff in <a
href="https://github.com/radix-ui/primitives/commits">compare
view</a></li>
</ul>
</details>
<br />

Updates `@radix-ui/react-switch` from 1.1.3 to 1.2.4
<details>
<summary>Commits</summary>
<ul>
<li>See full diff in <a
href="https://github.com/radix-ui/primitives/commits">compare
view</a></li>
</ul>
</details>
<br />

Updates `@radix-ui/react-tabs` from 1.1.4 to 1.1.11
<details>
<summary>Commits</summary>
<ul>
<li>See full diff in <a
href="https://github.com/radix-ui/primitives/commits">compare
view</a></li>
</ul>
</details>
<br />

Updates `@radix-ui/react-toast` from 1.2.6 to 1.2.13
<details>
<summary>Commits</summary>
<ul>
<li>See full diff in <a
href="https://github.com/radix-ui/primitives/commits">compare
view</a></li>
</ul>
</details>
<br />

Updates `@radix-ui/react-tooltip` from 1.1.8 to 1.2.6
<details>
<summary>Commits</summary>
<ul>
<li>See full diff in <a
href="https://github.com/radix-ui/primitives/commits">compare
view</a></li>
</ul>
</details>
<br />

Updates `@sentry/nextjs` from 9.10.1 to 9.19.0
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/getsentry/sentry-javascript/releases"><code>@​sentry/nextjs</code>'s
releases</a>.</em></p>
<blockquote>
<h2>9.19.0</h2>
<ul>
<li>feat(react-router): Add otel instrumentation for server requests (<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/16147">#16147</a>)</li>
<li>feat(remix): Vendor in
<code>opentelemetry-instrumentation-remix</code> (<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/16145">#16145</a>)</li>
<li>fix(browser): Ensure spans auto-ended for navigations have
<code>cancelled</code> reason (<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/16277">#16277</a>)</li>
<li>fix(node): Pin <code>@fastify/otel</code> fork to direct url to
allow installing without git (<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/16287">#16287</a>)</li>
<li>fix(react): Handle nested parameterized routes in reactrouterv3
transaction normalization (<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/16274">#16274</a>)</li>
</ul>
<p>Work in this release was contributed by <a
href="https://github.com/sidx1024"><code>@​sidx1024</code></a>. Thank
you for your contribution!</p>
<h2>Bundle size 📦</h2>
<table>
<thead>
<tr>
<th>Path</th>
<th>Size</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>@​sentry/browser</code></td>
<td>23.4 KB</td>
</tr>
<tr>
<td><code>@​sentry/browser</code> - with treeshaking flags</td>
<td>23.06 KB</td>
</tr>
<tr>
<td><code>@​sentry/browser</code> (incl. Tracing)</td>
<td>37.31 KB</td>
</tr>
<tr>
<td><code>@​sentry/browser</code> (incl. Tracing, Replay)</td>
<td>74.53 KB</td>
</tr>
<tr>
<td><code>@​sentry/browser</code> (incl. Tracing, Replay) - with
treeshaking flags</td>
<td>67.72 KB</td>
</tr>
<tr>
<td><code>@​sentry/browser</code> (incl. Tracing, Replay with
Canvas)</td>
<td>79.19 KB</td>
</tr>
<tr>
<td><code>@​sentry/browser</code> (incl. Tracing, Replay, Feedback)</td>
<td>91 KB</td>
</tr>
<tr>
<td><code>@​sentry/browser</code> (incl. Feedback)</td>
<td>39.8 KB</td>
</tr>
<tr>
<td><code>@​sentry/browser</code> (incl. sendFeedback)</td>
<td>28.03 KB</td>
</tr>
<tr>
<td><code>@​sentry/browser</code> (incl. FeedbackAsync)</td>
<td>32.79 KB</td>
</tr>
<tr>
<td><code>@​sentry/react</code></td>
<td>25.17 KB</td>
</tr>
<tr>
<td><code>@​sentry/react</code> (incl. Tracing)</td>
<td>39.27 KB</td>
</tr>
<tr>
<td><code>@​sentry/vue</code></td>
<td>27.67 KB</td>
</tr>
<tr>
<td><code>@​sentry/vue</code> (incl. Tracing)</td>
<td>39.07 KB</td>
</tr>
<tr>
<td><code>@​sentry/svelte</code></td>
<td>23.43 KB</td>
</tr>
<tr>
<td>CDN Bundle</td>
<td>24.58 KB</td>
</tr>
<tr>
<td>CDN Bundle (incl. Tracing)</td>
<td>37.33 KB</td>
</tr>
<tr>
<td>CDN Bundle (incl. Tracing, Replay)</td>
<td>72.37 KB</td>
</tr>
<tr>
<td>CDN Bundle (incl. Tracing, Replay, Feedback)</td>
<td>77.68 KB</td>
</tr>
<tr>
<td>CDN Bundle - uncompressed</td>
<td>71.72 KB</td>
</tr>
<tr>
<td>CDN Bundle (incl. Tracing) - uncompressed</td>
<td>110.5 KB</td>
</tr>
<tr>
<td>CDN Bundle (incl. Tracing, Replay) - uncompressed</td>
<td>221.78 KB</td>
</tr>
<tr>
<td>CDN Bundle (incl. Tracing, Replay, Feedback) - uncompressed</td>
<td>234.31 KB</td>
</tr>
<tr>
<td><code>@​sentry/nextjs</code> (client)</td>
<td>40.88 KB</td>
</tr>
<tr>
<td><code>@​sentry/sveltekit</code> (client)</td>
<td>37.77 KB</td>
</tr>
<tr>
<td><code>@​sentry/node</code></td>
<td>154.29 KB</td>
</tr>
<tr>
<td><code>@​sentry/node</code> - without tracing</td>
<td>95.6 KB</td>
</tr>
<tr>
<td><code>@​sentry/aws-serverless</code></td>
<td>120.36 KB</td>
</tr>
</tbody>
</table>
<h2>9.18.0</h2>
<h3>Important changes</h3>
<ul>
<li><strong>feat: Support Node 24 (<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/16236">#16236</a>)</strong></li>
</ul>
<p>We now also publish profiling binaries for Node 24.</p>
<h3>Other changes</h3>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/getsentry/sentry-javascript/blob/develop/CHANGELOG.md"><code>@​sentry/nextjs</code>'s
changelog</a>.</em></p>
<blockquote>
<h2>9.18.0</h2>
<h3>Important changes</h3>
<ul>
<li><strong>feat: Support Node 24 (<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/16236">#16236</a>)</strong></li>
</ul>
<p>We now also publish profiling binaries for Node 24.</p>
<h3>Other changes</h3>
<ul>
<li>deps(node): Bump <code>import-in-the-middle</code> to
<code>1.13.1</code> (<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/16260">#16260</a>)</li>
<li>feat: Export <code>consoleLoggingIntegration</code> from vercel edge
sdk (<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/16228">#16228</a>)</li>
<li>feat(cloudflare): Add support for email, queue, and tail handler (<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/16233">#16233</a>)</li>
<li>feat(cloudflare): Improve http span data (<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/16232">#16232</a>)</li>
<li>feat(nextjs): Add more attributes for generation functions (<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/16214">#16214</a>)</li>
<li>feat(opentelemetry): Widen peer dependencies to support Otel v2 (<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/16246">#16246</a>)</li>
<li>fix(core): Gracefully handle invalid baggage entries (<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/16257">#16257</a>)</li>
<li>fix(node): Ensure traces are propagated without spans in Node 22+
(<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/16221">#16221</a>)</li>
<li>fix(node): Use sentry forked <code>@fastify/otel</code> dependency
with pinned Otel v1 deps (<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/16256">#16256</a>)</li>
<li>fix(remix): Remove vendored types (<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/16218">#16218</a>)</li>
</ul>
<h2>9.17.0</h2>
<ul>
<li>feat(node): Migrate to <code>@fastify/otel</code> (<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/15542">#15542</a>)</li>
</ul>
<h2>9.16.1</h2>
<ul>
<li>fix(core): Make sure logs get flushed in server-runtime-client (<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/16222">#16222</a>)</li>
<li>ref(node): Remove vercel flushing code that does nothing (<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/16217">#16217</a>)</li>
</ul>
<h2>9.16.0</h2>
<h3>Important changes</h3>
<ul>
<li><strong>feat: Create a Vite plugin that injects sentryConfig into
the global config (<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/16197">#16197</a>)</strong></li>
</ul>
<p>Add a new plugin <code>makeConfigInjectorPlugin</code> within our
existing vite plugin that updates the global vite config with sentry
options</p>
<ul>
<li><strong>feat(browser): Add option to sample linked traces
consistently (<a
href="https://redirect.github.com/getsentry/sentry-javascript/pull/16037">#16037</a>)</strong></li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="d69ea01d02"><code>d69ea01</code></a>
release: 9.19.0</li>
<li><a
href="9acaae28c1"><code>9acaae2</code></a>
Merge pull request <a
href="https://redirect.github.com/getsentry/sentry-javascript/issues/16288">#16288</a>
from getsentry/prepare-release/9.19.0</li>
<li><a
href="ed276633ce"><code>ed27663</code></a>
meta(changelog): Update changelog for 9.19.0</li>
<li><a
href="4b039f8a6e"><code>4b039f8</code></a>
fix(node): Pin <code>@fastify/otel</code> fork to direct url to allow
installing without...</li>
<li><a
href="26731edc99"><code>26731ed</code></a>
feat(react-router): Add otel instrumentation for server requests (<a
href="https://redirect.github.com/getsentry/sentry-javascript/issues/16147">#16147</a>)</li>
<li><a
href="efb86de64b"><code>efb86de</code></a>
chore: Add external contributor to CHANGELOG.md (<a
href="https://redirect.github.com/getsentry/sentry-javascript/issues/16282">#16282</a>)</li>
<li><a
href="e83658f23a"><code>e83658f</code></a>
fix(browser): Ensure spans auto-ended for navigations have
<code>cancelled</code> reason...</li>
<li><a
href="b9984338be"><code>b998433</code></a>
feat(remix): Vendor in <code>opentelemetry-instrumentation-remix</code>
(<a
href="https://redirect.github.com/getsentry/sentry-javascript/issues/16145">#16145</a>)</li>
<li><a
href="f55018fbf6"><code>f55018f</code></a>
fix(react): Handle nested parameterized routes in reactrouterv3
transaction n...</li>
<li><a
href="6d2b971980"><code>6d2b971</code></a>
Merge pull request <a
href="https://redirect.github.com/getsentry/sentry-javascript/issues/16276">#16276</a>
from getsentry/master</li>
<li>Additional commits viewable in <a
href="https://github.com/getsentry/sentry-javascript/compare/9.10.1...9.19.0">compare
view</a></li>
</ul>
</details>
<br />

Updates `@supabase/ssr` from 0.5.2 to 0.6.1
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/supabase/ssr/releases"><code>@​supabase/ssr</code>'s
releases</a>.</em></p>
<blockquote>
<h2>v0.6.1</h2>
<h2><a
href="https://github.com/supabase/ssr/compare/v0.6.0...v0.6.1">0.6.1</a>
(2025-03-16)</h2>
<h3>Bug Fixes</h3>
<ul>
<li>force release (<a
href="https://redirect.github.com/supabase/ssr/issues/98">#98</a>) (<a
href="66710e82aa">66710e8</a>)</li>
<li><strong>revert:</strong> &quot;feat: improve cookie chunk handling
via base64url+length encoding (<a
href="https://redirect.github.com/supabase/ssr/issues/90">#90</a>)&quot;
(<a href="https://redirect.github.com/supabase/ssr/issues/100">#100</a>)
(<a
href="2ea8e23525">2ea8e23</a>)</li>
</ul>
<h2>v0.6.1-rc.3</h2>
<p>This is a release candidate. See release-please PR <a
href="https://redirect.github.com/supabase/ssr/issues/99">#99</a> for
context.</p>
<h2>v0.6.1-rc.2</h2>
<p>This is a release candidate. See release-please PR <a
href="https://redirect.github.com/supabase/ssr/issues/99">#99</a> for
context.</p>
<h2>v0.6.0</h2>
<h2><a
href="https://github.com/supabase/ssr/compare/v0.5.2...v0.6.0">0.6.0</a>
(2025-02-27)</h2>
<h3>Features</h3>
<ul>
<li>improve cookie chunk handling via base64url+length encoding (<a
href="https://redirect.github.com/supabase/ssr/issues/90">#90</a>) (<a
href="6deb6871ca">6deb687</a>)</li>
<li>upgrade cookie dependency and cleanup imports (<a
href="https://redirect.github.com/supabase/ssr/issues/77">#77</a>) (<a
href="95245282e6">9524528</a>)</li>
</ul>
<h3>Bug Fixes</h3>
<ul>
<li>add <code>create*Client</code> string in <code>x-client-info</code>
(<a href="https://redirect.github.com/supabase/ssr/issues/85">#85</a>)
(<a
href="f271accfea">f271acc</a>)</li>
</ul>
<h2>v0.6.0-rc.5</h2>
<p>This is a release candidate. See release-please PR <a
href="https://redirect.github.com/supabase/ssr/issues/80">#80</a> for
context.</p>
<h2>v0.6.0-rc.3</h2>
<p>This is a release candidate. See release-please PR <a
href="https://redirect.github.com/supabase/ssr/issues/80">#80</a> for
context.</p>
<h2>v0.6.0-rc.2</h2>
<p>This is a release candidate. See release-please PR <a
href="https://redirect.github.com/supabase/ssr/issues/80">#80</a> for
context.</p>
<h2>v0.6.0-rc.1</h2>
<p>This is a release candidate. See release-please PR <a
href="https://redirect.github.com/supabase/ssr/issues/80">#80</a> for
context.</p>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="a1b60ba826"><code>a1b60ba</code></a>
chore(main): release 0.6.1 (<a
href="https://redirect.github.com/supabase/ssr/issues/99">#99</a>)</li>
<li><a
href="2ea8e23525"><code>2ea8e23</code></a>
fix(revert): &quot;feat: improve cookie chunk handling via
base64url+length encodi...</li>
<li><a
href="66710e82aa"><code>66710e8</code></a>
fix: force release (<a
href="https://redirect.github.com/supabase/ssr/issues/98">#98</a>)</li>
<li><a
href="37909b21f1"><code>37909b2</code></a>
Revert &quot;chore(main): release 0.6.0&quot; (<a
href="https://redirect.github.com/supabase/ssr/issues/95">#95</a>)</li>
<li><a
href="be9cd6c70e"><code>be9cd6c</code></a>
chore(main): release 0.6.0 (<a
href="https://redirect.github.com/supabase/ssr/issues/80">#80</a>)</li>
<li><a
href="6deb6871ca"><code>6deb687</code></a>
feat: improve cookie chunk handling via base64url+length encoding (<a
href="https://redirect.github.com/supabase/ssr/issues/90">#90</a>)</li>
<li><a
href="ef429dfbd1"><code>ef429df</code></a>
build(deps): bump vite from 5.4.10 to 5.4.14 (<a
href="https://redirect.github.com/supabase/ssr/issues/88">#88</a>)</li>
<li><a
href="ca469593e3"><code>ca46959</code></a>
ci: publish with provenance (<a
href="https://redirect.github.com/supabase/ssr/issues/86">#86</a>)</li>
<li><a
href="f271accfea"><code>f271acc</code></a>
fix: add <code>create*Client</code> string in <code>x-client-info</code>
(<a
href="https://redirect.github.com/supabase/ssr/issues/85">#85</a>)</li>
<li><a
href="95245282e6"><code>9524528</code></a>
feat: upgrade cookie dependency and cleanup imports (<a
href="https://redirect.github.com/supabase/ssr/issues/77">#77</a>)</li>
<li>See full diff in <a
href="https://github.com/supabase/ssr/compare/v0.5.2...v0.6.1">compare
view</a></li>
</ul>
</details>
<br />

Updates `@supabase/supabase-js` from 2.49.1 to 2.49.4
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/supabase/supabase-js/releases"><code>@​supabase/supabase-js</code>'s
releases</a>.</em></p>
<blockquote>
<h2>v2.49.4</h2>
<h2><a
href="https://github.com/supabase/supabase-js/compare/v2.49.3...v2.49.4">2.49.4</a>
(2025-03-29)</h2>
<h3>Bug Fixes</h3>
<ul>
<li><strong>deps:</strong> upgrade postgrest-js to 1.19.4 (<a
href="692e8e846b">692e8e8</a>)</li>
</ul>
<h2>v2.49.3</h2>
<h2><a
href="https://github.com/supabase/supabase-js/compare/v2.49.2...v2.49.3">2.49.3</a>
(2025-03-24)</h2>
<h3>Bug Fixes</h3>
<ul>
<li>bump auth-js to 2.69.1 (<a
href="https://redirect.github.com/supabase/supabase-js/issues/1383">#1383</a>)
(<a
href="f08bfd9c96">f08bfd9</a>)</li>
</ul>
<h2>v2.49.2</h2>
<h2><a
href="https://github.com/supabase/supabase-js/compare/v2.49.1...v2.49.2">2.49.2</a>
(2025-03-24)</h2>
<h3>Bug Fixes</h3>
<ul>
<li>bump auth-js to v2.69.0 (<a
href="https://redirect.github.com/supabase/supabase-js/issues/1380">#1380</a>)
(<a
href="73ab30dd61">73ab30d</a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="692e8e846b"><code>692e8e8</code></a>
fix(deps): upgrade postgrest-js to 1.19.4</li>
<li><a
href="f08bfd9c96"><code>f08bfd9</code></a>
fix: bump auth-js to 2.69.1 (<a
href="https://redirect.github.com/supabase/supabase-js/issues/1383">#1383</a>)</li>
<li><a
href="73ab30dd61"><code>73ab30d</code></a>
fix: bump auth-js to v2.69.0 (<a
href="https://redirect.github.com/supabase/supabase-js/issues/1380">#1380</a>)</li>
<li><a
href="a5219a81b3"><code>a5219a8</code></a>
chore: add open role to README.md (<a
href="https://redirect.github.com/supabase/supabase-js/issues/1378">#1378</a>)</li>
<li>See full diff in <a
href="https://github.com/supabase/supabase-js/compare/v2.49.1...v2.49.4">compare
view</a></li>
</ul>
</details>
<br />

Updates `@tanstack/react-table` from 8.21.2 to 8.21.3
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/TanStack/table/releases"><code>@​tanstack/react-table</code>'s
releases</a>.</em></p>
<blockquote>
<h2>v8.21.3</h2>
<p>Version 8.21.3 - 4/14/25, 8:19 PM</p>
<h2>Changes</h2>
<h3>Fix</h3>
<ul>
<li>table-core: use right Document instance on getResizeHandler
(column-sizing feature) (<a
href="https://github.com/TanStack/table/tree/HEAD/packages/react-table/issues/5989">#5989</a>)
(54ce673) by <a
href="https://github.com/riccardoperra"><code>@​riccardoperra</code></a></li>
</ul>
<h3>Docs</h3>
<ul>
<li>fix all 158 broken links (<a
href="https://github.com/TanStack/table/tree/HEAD/packages/react-table/issues/5972">#5972</a>)
(f7bf6f1) by <a
href="https://github.com/kisaragi-hiu"><code>@​kisaragi-hiu</code></a></li>
<li>add vue example for grouping (<a
href="https://github.com/TanStack/table/tree/HEAD/packages/react-table/issues/5941">#5941</a>)
(3efa59c) by Harshil Patel</li>
</ul>
<h2>Packages</h2>
<ul>
<li><code>@tanstack/table-core@8.21.3</code></li>
<li><code>@tanstack/angular-table@8.21.3</code></li>
<li><code>@tanstack/lit-table@8.21.3</code></li>
<li><code>@tanstack/qwik-table@8.21.3</code></li>
<li><code>@tanstack/react-table@8.21.3</code></li>
<li><code>@tanstack/solid-table@8.21.3</code></li>
<li><code>@tanstack/svelte-table@8.21.3</code></li>
<li><code>@tanstack/vue-table@8.21.3</code></li>
<li><code>@tanstack/react-table-devtools@8.21.3</code></li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="f4dc742b7b"><code>f4dc742</code></a>
release: v8.21.3</li>
<li>See full diff in <a
href="https://github.com/TanStack/table/commits/v8.21.3/packages/react-table">compare
view</a></li>
</ul>
</details>
<br />

Updates `@xyflow/react` from 12.4.2 to 12.6.4
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/xyflow/xyflow/releases"><code>@​xyflow/react</code>'s
releases</a>.</em></p>
<blockquote>
<h2><code>@xyflow/react@12.6.4</code></h2>
<h3>Patch Changes</h3>
<ul>
<li>
<p><a
href="https://redirect.github.com/xyflow/xyflow/pull/5263">#5263</a> <a
href="e4a8d4b43f"><code>e4a8d4b4</code></a>
Thanks <a href="https://github.com/moklick"><code>@​moklick</code></a>!
- Multi select key works when input is focused</p>
</li>
<li>
<p><a
href="https://redirect.github.com/xyflow/xyflow/pull/5266">#5266</a> <a
href="77107453fa"><code>77107453</code></a>
Thanks <a
href="https://github.com/peterkogo"><code>@​peterkogo</code></a>! - Fix
connection snapping for handles larger than connectionRadius</p>
</li>
<li>
<p>Updated dependencies [<a
href="77107453fa"><code>77107453</code></a>]:</p>
<ul>
<li><code>@xyflow/system@0.0.61</code></li>
</ul>
</li>
</ul>
<h2><code>@xyflow/react@12.6.3</code></h2>
<h3>Patch Changes</h3>
<ul>
<li>
<p><a
href="https://redirect.github.com/xyflow/xyflow/pull/5259">#5259</a> <a
href="77bf79c40e"><code>77bf79c4</code></a>
Thanks <a
href="https://github.com/peterkogo"><code>@​peterkogo</code></a>! - Fix
background-color css variable fallback.</p>
</li>
<li>
<p>Updated dependencies [<a
href="77bf79c40e"><code>77bf79c4</code></a>]:</p>
<ul>
<li><code>@xyflow/system@0.0.60</code></li>
</ul>
</li>
</ul>
<h2><code>@xyflow/react@12.6.2</code></h2>
<h3>Patch Changes</h3>
<ul>
<li>
<p><a
href="https://redirect.github.com/xyflow/xyflow/pull/5257">#5257</a> <a
href="b1314be04d"><code>b1314be0</code></a>
Thanks <a href="https://github.com/moklick"><code>@​moklick</code></a>!
- Use OnReconnect from system</p>
</li>
<li>
<p>Updated dependencies [<a
href="a95f0e2fbf"><code>a95f0e2f</code></a>]:</p>
<ul>
<li><code>@xyflow/system@0.0.59</code></li>
</ul>
</li>
</ul>
<h2><code>@xyflow/react@12.6.1</code></h2>
<h3>Patch Changes</h3>
<ul>
<li>
<p><a
href="https://redirect.github.com/xyflow/xyflow/pull/5249">#5249</a> <a
href="895b5d81c8"><code>895b5d81</code></a>
Thanks <a href="https://github.com/moklick"><code>@​moklick</code></a>!
- Call <code>onNodesChange</code> for uncontrolled flows that use
<code>updateNode</code></p>
</li>
<li>
<p><a
href="https://redirect.github.com/xyflow/xyflow/pull/5247">#5247</a> <a
href="67e1cb6891"><code>67e1cb68</code></a>
Thanks <a href="https://github.com/moklick"><code>@​moklick</code></a>!
- Cleanup TSDoc annotations for ReactFlow</p>
</li>
<li>
<p>Updated dependencies [<a
href="2a03213b06"><code>2a03213b</code></a>]:</p>
<ul>
<li><code>@xyflow/system@0.0.58</code></li>
</ul>
</li>
</ul>
<h2><code>@xyflow/react@12.6.0</code></h2>
<h3>Minor Changes</h3>
<ul>
<li>
<p><a
href="https://redirect.github.com/xyflow/xyflow/pull/5219">#5219</a> <a
href="4236adbc46"><code>4236adbc</code></a>
Thanks <a href="https://github.com/moklick"><code>@​moklick</code></a>!
- Add initialMinZoom, initialMaxZoom and initialFitViewOptions to
ReactFlowProvider</p>
</li>
<li>
<p><a
href="https://redirect.github.com/xyflow/xyflow/pull/5227">#5227</a> <a
href="a7d10ffce5"><code>a7d10ffc</code></a>
Thanks <a href="https://github.com/moklick"><code>@​moklick</code></a>!
- Add <code>resizeDirection</code> prop for the
<code>NodeResizeControl</code> component</p>
</li>
</ul>
<h3>Patch Changes</h3>
<ul>
<li>
<p><a
href="https://redirect.github.com/xyflow/xyflow/pull/5217">#5217</a> <a
href="bce74e8811"><code>bce74e88</code></a>
Thanks <a href="https://github.com/moklick"><code>@​moklick</code></a>!
- Keep node selection on pane click if elementsSelectable=false</p>
</li>
<li>
<p>Updated dependencies [<a
href="a7d10ffce5"><code>a7d10ffc</code></a>,
<a
href="4e681f9c52"><code>4e681f9c</code></a>]:</p>
<ul>
<li><code>@xyflow/system@0.0.57</code></li>
</ul>
</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="0076d82222"><code>0076d82</code></a>
chore(packages): bump</li>
<li><a
href="8ce85fd8f0"><code>8ce85fd</code></a>
fix(react): multi select works when input is focused</li>
<li><a
href="cee50e1dd0"><code>cee50e1</code></a>
chore(packages): bump</li>
<li><a
href="673a3a8718"><code>673a3a8</code></a>
merge main</li>
<li><a
href="9893b6321b"><code>9893b63</code></a>
chore(packages): bump</li>
<li><a
href="69021e8bbe"><code>69021e8</code></a>
chore(react): use OnReconnet from system</li>
<li><a
href="b30a496bf4"><code>b30a496</code></a>
merg main</li>
<li><a
href="2df553e611"><code>2df553e</code></a>
chore(packages): bump</li>
<li><a href="https://g...

_Description has been truncated_

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
2025-05-16 13:13:07 +00:00
Abhimanyu Yadav
0e53c540d4 fix(blocks): Disable Twitter and Todoist blocks if their OAuth is not configured (#9954)
I am disabling all Twitter and Todoist blocks whose OAuth credentials
are not configured (a sketch of the gating logic is included below).

> I have already verified this locally: when OAuth is not set, these
blocks do not appear in the block menu.
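
A rough sketch of the OAuth gating described above — the block and variable names are hypothetical, and the real blocks read provider credentials from the platform's settings rather than raw environment variables:

```python
import os


def oauth_is_configured(provider: str) -> bool:
    """True only if both OAuth client credentials are set for a provider."""
    return bool(
        os.getenv(f"{provider.upper()}_CLIENT_ID")
        and os.getenv(f"{provider.upper()}_CLIENT_SECRET")
    )


class TwitterPostTweetBlock:  # illustrative stand-in for a real block
    def __init__(self):
        # Disabled blocks are filtered out of the block menu entirely.
        self.disabled = not oauth_is_configured("twitter")
```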
2025-05-16 12:12:48 +00:00
dependabot[bot]
e48aec921e chore(backend/deps-dev): Bump 3 dev dependencies to latest minor versions (#9852)
Bumps [poethepoet](https://github.com/nat-n/poethepoet),
[pyright](https://github.com/RobertCraigie/pyright-python) and
[ruff](https://github.com/astral-sh/ruff) to their latest versions.

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
2025-05-16 11:30:42 +00:00
dependabot[bot]
d754c2349c chore(libs/deps-dev): Update ruff from 0.11.2 to 0.11.10 (#9775)
Bumps the development-dependencies group in
/autogpt_platform/autogpt_libs with 1 update:
[ruff](https://github.com/astral-sh/ruff).

Updates `ruff` from 0.11.2 to 0.11.10

[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=ruff&package-manager=pip&previous-version=0.11.2&new-version=0.11.4)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)


> **Note**
> Automatic rebases have been disabled on this pull request as it has
been open for over 30 days.

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-05-16 10:42:45 +00:00
dependabot[bot]
870f8265b3 chore(backend/deps): Bump 18 dependencies to latest minor versions (#9930)
Bumps the production-dependencies group with 18 updates in the
/autogpt_platform/backend directory:

| Package | From | To |
| --- | --- | --- |
| [anthropic](https://github.com/anthropics/anthropic-sdk-python) | `0.49.0` | `0.51.0` |
| [click](https://github.com/pallets/click) | `8.1.8` | `8.2.0` |
| [e2b-code-interpreter](https://github.com/e2b-dev/code-interpreter) | `1.1.1` | `1.5.0` |
| [google-api-python-client](https://github.com/googleapis/google-api-python-client) | `2.166.0` | `2.169.0` |
| [google-auth-oauthlib](https://github.com/GoogleCloudPlatform/google-auth-library-python-oauthlib) | `1.2.1` | `1.2.2` |
| [groq](https://github.com/groq/groq-python) | `0.20.0` | `0.24.0` |
| [launchdarkly-server-sdk](https://github.com/launchdarkly/python-server-sdk) | `9.10.0` | `9.11.0` |
| [mem0ai](https://github.com/mem0ai/mem0) | `0.1.80` | `0.1.98` |
| [ollama](https://github.com/ollama/ollama-python) | `0.4.7` | `0.4.8` |
| [openai](https://github.com/openai/openai-python) | `1.70.0` | `1.78.1` |
| [poetry](https://github.com/python-poetry/poetry) | `2.1.2` | `2.1.3` |
| [pydantic](https://github.com/pydantic/pydantic) | `2.11.1` | `2.11.4` |
| [pydantic-settings](https://github.com/pydantic/pydantic-settings) | `2.8.1` | `2.9.1` |
| [replicate](https://github.com/replicate/replicate-python) | `1.0.4` | `1.0.6` |
| [sentry-sdk](https://github.com/getsentry/sentry-python) | `2.25.1` | `2.28.0` |
| [supabase](https://github.com/supabase/supabase-py) | `2.15.0` | `2.15.1` |
| [tenacity](https://github.com/jd/tenacity) | `9.0.0` | `9.1.2` |
| [uvicorn](https://github.com/encode/uvicorn) | `0.34.0` | `0.34.2` |


Updates `anthropic` from 0.49.0 to 0.51.0
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/anthropics/anthropic-sdk-python/releases">anthropic's
releases</a>.</em></p>
<blockquote>
<h2>v0.51.0</h2>
<h2>0.51.0 (2025-05-07)</h2>
<p>Full Changelog: <a
href="https://github.com/anthropics/anthropic-sdk-python/compare/v0.50.0...v0.51.0">v0.50.0...v0.51.0</a></p>
<h3>Features</h3>
<ul>
<li><strong>api:</strong> adds web search capabilities to the Claude API
(<a
href="bec0cf93c2">bec0cf9</a>)</li>
</ul>
<h3>Bug Fixes</h3>
<ul>
<li><strong>pydantic v1:</strong> more robust ModelField.annotation
check (<a
href="c50f406767">c50f406</a>)</li>
<li><strong>sockets:</strong> handle non-portable socket flags (<a
href="https://redirect.github.com/anthropics/anthropic-sdk-python/issues/935">#935</a>)
(<a
href="205c8dda37">205c8dd</a>)</li>
</ul>
<h3>Chores</h3>
<ul>
<li>broadly detect json family of content-type headers (<a
href="66bbb3a668">66bbb3a</a>)</li>
<li><strong>ci:</strong> only use depot for staging repos (<a
href="c867a11af3">c867a11</a>)</li>
<li><strong>ci:</strong> run on more branches and use depot runners (<a
href="95f5f17be0">95f5f17</a>)</li>
<li><strong>internal:</strong> add back missing custom modifications for
Web Search (<a
href="f43ba69d53">f43ba69</a>)</li>
<li><strong>internal:</strong> minor formatting changes (<a
href="8afef086af">8afef08</a>)</li>
<li>use lazy imports for resources (<a
href="704be817f4">704be81</a>)</li>
</ul>
<h2>v0.50.0</h2>
<h2>0.50.0 (2025-04-22)</h2>
<p>Full Changelog: <a
href="https://github.com/anthropics/anthropic-sdk-python/compare/v0.49.0...v0.50.0">v0.49.0...v0.50.0</a></p>
<h3>Features</h3>
<ul>
<li><strong>api:</strong> extract ContentBlockDelta events into their
own schemas (<a
href="https://redirect.github.com/anthropics/anthropic-sdk-python/issues/920">#920</a>)
(<a
href="ae773d673a">ae773d6</a>)</li>
<li><strong>api:</strong> manual updates (<a
href="46ac1f8d1c">46ac1f8</a>)</li>
<li><strong>api:</strong> manual updates (<a
href="48d9739ad7">48d9739</a>)</li>
<li><strong>api:</strong> manual updates (<a
href="66e8cc3fb2">66e8cc3</a>)</li>
<li><strong>api:</strong> manual updates (<a
href="a74746e0df">a74746e</a>)</li>
</ul>
<h3>Bug Fixes</h3>
<ul>
<li><strong>ci:</strong> ensure pip is always available (<a
href="https://redirect.github.com/anthropics/anthropic-sdk-python/issues/907">#907</a>)
(<a
href="36326871c1">3632687</a>)</li>
<li><strong>ci:</strong> remove publishing patch (<a
href="https://redirect.github.com/anthropics/anthropic-sdk-python/issues/908">#908</a>)
(<a
href="cae032381b">cae0323</a>)</li>
<li><strong>client:</strong> deduplicate stop reason type (<a
href="https://redirect.github.com/anthropics/anthropic-sdk-python/issues/913">#913</a>)
(<a
href="3ab0194550">3ab0194</a>)</li>
<li><strong>client:</strong> send all configured auth headers (<a
href="https://redirect.github.com/anthropics/anthropic-sdk-python/issues/929">#929</a>)
(<a
href="9d2581e79f">9d2581e</a>)</li>
<li><strong>perf:</strong> optimize some hot paths (<a
href="cff76cb00b">cff76cb</a>)</li>
<li><strong>perf:</strong> skip traversing types for NotGiven values (<a
href="dadac7fa72">dadac7f</a>)</li>
<li><strong>project:</strong> bump httpx minimum version to 0.25.0 (<a
href="b554138c2f">b554138</a>),
closes <a
href="https://redirect.github.com/anthropics/anthropic-sdk-python/issues/902">#902</a></li>
<li><strong>types:</strong> handle more discriminated union shapes (<a
href="https://redirect.github.com/anthropics/anthropic-sdk-python/issues/906">#906</a>)
(<a
href="2fc179a4d2">2fc179a</a>)</li>
<li><strong>vertex:</strong> explicitly include requests extra (<a
href="2b1221b76b">2b1221b</a>)</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/anthropics/anthropic-sdk-python/blob/main/CHANGELOG.md">anthropic's
changelog</a>.</em></p>
<blockquote>
<h2>0.51.0 (2025-05-07)</h2>
<p>Full Changelog: <a
href="https://github.com/anthropics/anthropic-sdk-python/compare/v0.50.0...v0.51.0">v0.50.0...v0.51.0</a></p>
<h3>Features</h3>
<ul>
<li><strong>api:</strong> adds web search capabilities to the Claude API
(<a
href="bec0cf93c2">bec0cf9</a>)</li>
</ul>
<h3>Bug Fixes</h3>
<ul>
<li><strong>pydantic v1:</strong> more robust ModelField.annotation
check (<a
href="c50f406767">c50f406</a>)</li>
<li><strong>sockets:</strong> handle non-portable socket flags (<a
href="https://redirect.github.com/anthropics/anthropic-sdk-python/issues/935">#935</a>)
(<a
href="205c8dda37">205c8dd</a>)</li>
</ul>
<h3>Chores</h3>
<ul>
<li>broadly detect json family of content-type headers (<a
href="66bbb3a668">66bbb3a</a>)</li>
<li><strong>ci:</strong> only use depot for staging repos (<a
href="c867a11af3">c867a11</a>)</li>
<li><strong>ci:</strong> run on more branches and use depot runners (<a
href="95f5f17be0">95f5f17</a>)</li>
<li><strong>internal:</strong> add back missing custom modifications for
Web Search (<a
href="f43ba69d53">f43ba69</a>)</li>
<li><strong>internal:</strong> minor formatting changes (<a
href="8afef086af">8afef08</a>)</li>
<li>use lazy imports for resources (<a
href="704be817f4">704be81</a>)</li>
</ul>
<h2>0.50.0 (2025-04-22)</h2>
<p>Full Changelog: <a
href="https://github.com/anthropics/anthropic-sdk-python/compare/v0.49.0...v0.50.0">v0.49.0...v0.50.0</a></p>
<h3>Features</h3>
<ul>
<li><strong>api:</strong> extract ContentBlockDelta events into their
own schemas (<a
href="https://redirect.github.com/anthropics/anthropic-sdk-python/issues/920">#920</a>)
(<a
href="ae773d673a">ae773d6</a>)</li>
<li><strong>api:</strong> manual updates (<a
href="46ac1f8d1c">46ac1f8</a>)</li>
<li><strong>api:</strong> manual updates (<a
href="48d9739ad7">48d9739</a>)</li>
<li><strong>api:</strong> manual updates (<a
href="66e8cc3fb2">66e8cc3</a>)</li>
<li><strong>api:</strong> manual updates (<a
href="a74746e0df">a74746e</a>)</li>
</ul>
<h3>Bug Fixes</h3>
<ul>
<li><strong>ci:</strong> ensure pip is always available (<a
href="https://redirect.github.com/anthropics/anthropic-sdk-python/issues/907">#907</a>)
(<a
href="36326871c1">3632687</a>)</li>
<li><strong>ci:</strong> remove publishing patch (<a
href="https://redirect.github.com/anthropics/anthropic-sdk-python/issues/908">#908</a>)
(<a
href="cae032381b">cae0323</a>)</li>
<li><strong>client:</strong> deduplicate stop reason type (<a
href="https://redirect.github.com/anthropics/anthropic-sdk-python/issues/913">#913</a>)
(<a
href="3ab0194550">3ab0194</a>)</li>
<li><strong>client:</strong> send all configured auth headers (<a
href="https://redirect.github.com/anthropics/anthropic-sdk-python/issues/929">#929</a>)
(<a
href="9d2581e79f">9d2581e</a>)</li>
<li><strong>perf:</strong> optimize some hot paths (<a
href="cff76cb00b">cff76cb</a>)</li>
<li><strong>perf:</strong> skip traversing types for NotGiven values (<a
href="dadac7fa72">dadac7f</a>)</li>
<li><strong>project:</strong> bump httpx minimum version to 0.25.0 (<a
href="b554138c2f">b554138</a>),
closes <a
href="https://redirect.github.com/anthropics/anthropic-sdk-python/issues/902">#902</a></li>
<li><strong>types:</strong> handle more discriminated union shapes (<a
href="https://redirect.github.com/anthropics/anthropic-sdk-python/issues/906">#906</a>)
(<a
href="2fc179a4d2">2fc179a</a>)</li>
<li><strong>vertex:</strong> explicitly include requests extra (<a
href="2b1221b76b">2b1221b</a>)</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="e42451ab3f"><code>e42451a</code></a>
release: 0.51.0</li>
<li><a
href="4c7f97f3ea"><code>4c7f97f</code></a>
chore(internal): add back missing custom modifications for Web
Search</li>
<li><a
href="2da00f26c5"><code>2da00f2</code></a>
feat(api): adds web search capabilities to the Claude API</li>
<li><a
href="51fd796456"><code>51fd796</code></a>
fix(sockets): handle non-portable socket flags (<a
href="https://redirect.github.com/anthropics/anthropic-sdk-python/issues/935">#935</a>)</li>
<li><a
href="ac6cfee090"><code>ac6cfee</code></a>
chore: use lazy imports for resources</li>
<li><a
href="215f5bbe56"><code>215f5bb</code></a>
chore: broadly detect json family of content-type headers</li>
<li><a
href="bcaa8a582b"><code>bcaa8a5</code></a>
chore(ci): only use depot for staging repos</li>
<li><a
href="a41e9c346a"><code>a41e9c3</code></a>
chore(ci): run on more branches and use depot runners</li>
<li><a
href="bfebcd91c6"><code>bfebcd9</code></a>
chore(internal): minor formatting changes</li>
<li><a
href="e3548ac1c5"><code>e3548ac</code></a>
fix(pydantic v1): more robust ModelField.annotation check</li>
<li>Additional commits viewable in <a
href="https://github.com/anthropics/anthropic-sdk-python/compare/v0.49.0...v0.51.0">compare
view</a></li>
</ul>
</details>
<br />
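
As a rough sketch of the headline 0.51.0 feature above (web search): after upgrading, usage looks roughly like the following. The `web_search_20250305` tool type and `max_uses` option are taken from Anthropic's server-tool documentation and should be double-checked against the current SDK:

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# The web search server tool is declared like any other tool entry.
response = client.messages.create(
    model="claude-3-7-sonnet-latest",
    max_tokens=1024,
    tools=[{"type": "web_search_20250305", "name": "web_search", "max_uses": 3}],
    messages=[{"role": "user", "content": "Summarize today's Python news."}],
)
print(response.content)
```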

Updates `click` from 8.1.8 to 8.2.0
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/pallets/click/releases">click's
releases</a>.</em></p>
<blockquote>
<h2>8.2.0</h2>
<p>This is the Click 8.2.0 feature release. A feature release may
include new features, remove previously deprecated code, add new
deprecations, or introduce potentially breaking changes.</p>
<p>We encourage everyone to upgrade. You can read more about our <a
href="https://palletsprojects.com/versions">Version Support Policy</a>
on our website.</p>
<p>PyPI: <a
href="https://pypi.org/project/click/8.2.0/">https://pypi.org/project/click/8.2.0/</a>
Changes: <a
href="https://click.palletsprojects.com/en/stable/changes/">https://click.palletsprojects.com/en/stable/changes/</a>
Milestone <a
href="https://github.com/pallets/click/milestone/15">https://github.com/pallets/click/milestone/15</a></p>
<ul>
<li>Drop support for Python 3.7, 3.8, and 3.9. <a
href="https://redirect.github.com/pallets/click/issues/2588">#2588</a>,
<a
href="https://redirect.github.com/pallets/click/issues/2893">#2893</a></li>
<li>Use modern packaging metadata with <code>pyproject.toml</code>
instead of <code>setup.cfg</code>. <a
href="https://redirect.github.com/pallets/click/issues/2438">#2438</a></li>
<li>Use <code>flit_core</code> instead of <code>setuptools</code> as
build backend. <a
href="https://redirect.github.com/pallets/click/issues/2543">#2543</a></li>
<li>Deprecate the <code>__version__</code> attribute. Use feature
detection, or
<code>importlib.metadata.version(&quot;click&quot;)</code>, instead. <a
href="https://redirect.github.com/pallets/click/issues/2598">#2598</a></li>
<li><code>BaseCommand</code> is deprecated. <code>Command</code> is the
base class for all commands. <a
href="https://redirect.github.com/pallets/click/issues/2589">#2589</a></li>
<li><code>MultiCommand</code> is deprecated. <code>Group</code> is the
base class for all group commands. <a
href="https://redirect.github.com/pallets/click/issues/2590">#2590</a></li>
<li>The current parser and related classes and methods are deprecated.
<a
href="https://redirect.github.com/pallets/click/issues/2205">#2205</a>
<ul>
<li><code>OptionParser</code> and the <code>parser</code> module, which
is a modified copy of <code>optparse</code> in the standard
library.</li>
<li><code>Context.protected_args</code> is unneeded.
<code>Context.args</code> contains any remaining arguments while
parsing.</li>
<li><code>Parameter.add_to_parser</code> (on both <code>Argument</code>
and <code>Option</code>) is unneeded. Parsing works directly without
building a separate parser.</li>
<li><code>split_arg_string</code> is moved from <code>parser</code> to
<code>shell_completion</code>.</li>
</ul>
</li>
<li>Enable deferred evaluation of annotations with <code>from __future__
import annotations</code>. <a
href="https://redirect.github.com/pallets/click/issues/2270">#2270</a></li>
<li>When generating a command's name from a decorated function's name,
the suffixes <code>_command</code>, <code>_cmd</code>,
<code>_group</code>, and <code>_grp</code> are removed. <a
href="https://redirect.github.com/pallets/click/issues/2322">#2322</a></li>
<li>Show the <code>types.ParamType.name</code> for
<code>types.Choice</code> options within <code>--help</code> message if
<code>show_choices=False</code> is specified. <a
href="https://redirect.github.com/pallets/click/issues/2356">#2356</a></li>
<li>Do not display default values in prompts when
<code>Option.show_default</code> is <code>False</code>. <a
href="https://redirect.github.com/pallets/click/issues/2509">#2509</a></li>
<li>Add <code>get_help_extra</code> method on <code>Option</code> to
fetch the generated extra items used in <code>get_help_record</code> to
render help text. <a
href="https://redirect.github.com/pallets/click/issues/2516">#2516</a>
<a
href="https://redirect.github.com/pallets/click/issues/2517">#2517</a></li>
<li>Keep stdout and stderr streams independent in
<code>CliRunner</code>. Always collect stderr output and never raise an
exception. Add a new output stream to simulate what the user sees in its
terminal. Removes the <code>mix_stderr</code> parameter in
<code>CliRunner</code>. <a
href="https://redirect.github.com/pallets/click/issues/2522">#2522</a>
<a
href="https://redirect.github.com/pallets/click/issues/2523">#2523</a></li>
<li><code>Option.show_envvar</code> now also shows environment variable
in error messages. <a
href="https://redirect.github.com/pallets/click/issues/2695">#2695</a>
<a
href="https://redirect.github.com/pallets/click/issues/2696">#2696</a></li>
<li><code>Context.close</code> will be called on exit. This results in
all <code>Context.call_on_close</code> callbacks and context managers
added via <code>Context.with_resource</code> to be closed on exit as
well. <a
href="https://redirect.github.com/pallets/click/issues/2680">#2680</a></li>
<li>Add <code>ProgressBar(hidden: bool)</code> to allow hiding the
progressbar. <a
href="https://redirect.github.com/pallets/click/issues/2609">#2609</a></li>
<li>A <code>UserWarning</code> will be shown when multiple parameters
attempt to use the same name. <a
href="https://redirect.github.com/pallets/click/issues/2396">#2396</a></li>
<li>When using <code>Option.envvar</code> with
<code>Option.flag_value</code>, the <code>flag_value</code> will always
be used instead of the value of the environment variable. <a
href="https://redirect.github.com/pallets/click/issues/2746">#2746</a>
<a
href="https://redirect.github.com/pallets/click/issues/2788">#2788</a></li>
<li>Add <code>Choice.get_invalid_choice_message</code> method for
customizing the invalid choice message. <a
href="https://redirect.github.com/pallets/click/issues/2621">#2621</a>
<a
href="https://redirect.github.com/pallets/click/issues/2622">#2622</a></li>
<li>If help is shown because <code>no_args_is_help</code> is enabled
(defaults to <code>True</code> for groups, <code>False</code> for
commands), the exit code is 2 instead of 0. <a
href="https://redirect.github.com/pallets/click/issues/1489">#1489</a>
<a
href="https://redirect.github.com/pallets/click/issues/1489">#1489</a></li>
<li>Contexts created during shell completion are closed properly, fixing
a <code>ResourceWarning</code> when using <code>click.File</code>. <a
href="https://redirect.github.com/pallets/click/issues/2644">#2644</a>
<a
href="https://redirect.github.com/pallets/click/issues/2800">#2800</a>
<a
href="https://redirect.github.com/pallets/click/issues/2767">#2767</a></li>
<li><code>click.edit(filename)</code> now supports passing an iterable
of filenames in case the editor supports editing multiple files at once.
Its return type is now also typed: <code>AnyStr</code> if
<code>text</code> is passed, otherwise <code>None</code>. <a
href="https://redirect.github.com/pallets/click/issues/2067">#2067</a>
<a
href="https://redirect.github.com/pallets/click/issues/2068">#2068</a></li>
<li>Specialized typing of <code>progressbar(length=...)</code> as
<code>ProgressBar[int]</code>. <a
href="https://redirect.github.com/pallets/click/issues/2630">#2630</a></li>
<li>Improve <code>echo_via_pager</code> behaviour in face of errors. <a
href="https://redirect.github.com/pallets/click/issues/2674">#2674</a>
<ul>
<li>Terminate the pager in case a generator passed to
<code>echo_via_pager</code> raises an exception.</li>
<li>Ensure to always close the pipe to the pager process and wait for it
to terminate.</li>
<li><code>echo_via_pager</code> will not ignore
<code>KeyboardInterrupt</code> anymore. This allows the user to search
for future output of the generator when using less and then aborting the
program using ctrl-c.</li>
</ul>
</li>
<li><code>deprecated: bool | str</code> can now be used on options and
arguments. This previously was only available for <code>Command</code>.
The message can now also be customised by using a <code>str</code>
instead of a <code>bool</code>. <a
href="https://redirect.github.com/pallets/click/issues/2263">#2263</a>
<a
href="https://redirect.github.com/pallets/click/issues/2271">#2271</a>
<ul>
<li><code>Command.deprecated</code> formatting in <code>--help</code>
changed from <code>(Deprecated) help</code> to <code>help
(DEPRECATED)</code>.</li>
<li>Parameters cannot be required nor prompted or an error is
raised.</li>
<li>A warning will be printed when something deprecated is used.</li>
</ul>
</li>
<li>Add a <code>catch_exceptions</code> parameter to
<code>CliRunner</code>. If <code>catch_exceptions</code> is not passed
to <code>CliRunner.invoke</code>, the value from <code>CliRunner</code>
is used. <a
href="https://redirect.github.com/pallets/click/issues/2817">#2817</a>
<a
href="https://redirect.github.com/pallets/click/issues/2818">#2818</a></li>
<li><code>Option.flag_value</code> will no longer have a default value
set based on <code>Option.default</code> if <code>Option.is_flag</code>
is <code>False</code>. This results in <code>Option.default</code> not
needing to implement <code>__bool__</code>. <a
href="https://redirect.github.com/pallets/click/issues/2829">#2829</a></li>
<li>Incorrect <code>click.edit</code> typing has been corrected. <a
href="https://redirect.github.com/pallets/click/issues/2804">#2804</a></li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/pallets/click/blob/main/CHANGES.rst">click's
changelog</a>.</em></p>
<blockquote>
<h2>Version 8.2.0</h2>
<p>Released 2025-05-10</p>
<ul>
<li>
<p>Drop support for Python 3.7, 3.8, and 3.9. :pr:<code>2588</code>
:pr:<code>2893</code></p>
</li>
<li>
<p>Use modern packaging metadata with <code>pyproject.toml</code>
instead of <code>setup.cfg</code>.
:pr:<code>2438</code></p>
</li>
<li>
<p>Use <code>flit_core</code> instead of <code>setuptools</code> as
build backend. :pr:<code>2543</code></p>
</li>
<li>
<p>Deprecate the <code>__version__</code> attribute. Use feature
detection, or
<code>importlib.metadata.version(&quot;click&quot;)</code>, instead.
:issue:<code>2598</code></p>
</li>
<li>
<p><code>BaseCommand</code> is deprecated. <code>Command</code> is the
base class for all
commands. :issue:<code>2589</code></p>
</li>
<li>
<p><code>MultiCommand</code> is deprecated. <code>Group</code> is the
base class for all group
commands. :issue:<code>2590</code></p>
</li>
<li>
<p>The current parser and related classes and methods are deprecated.
:issue:<code>2205</code></p>
<ul>
<li><code>OptionParser</code> and the <code>parser</code> module, which
is a modified copy of
<code>optparse</code> in the standard library.</li>
<li><code>Context.protected_args</code> is unneeded.
<code>Context.args</code> contains any
remaining arguments while parsing.</li>
<li><code>Parameter.add_to_parser</code> (on both <code>Argument</code>
and <code>Option</code>) is
unneeded. Parsing works directly without building a separate
parser.</li>
<li><code>split_arg_string</code> is moved from <code>parser</code> to
<code>shell_completion</code>.</li>
</ul>
</li>
<li>
<p>Enable deferred evaluation of annotations with
<code>from __future__ import annotations</code>.
:pr:<code>2270</code></p>
</li>
<li>
<p>When generating a command's name from a decorated function's name,
the
suffixes <code>_command</code>, <code>_cmd</code>, <code>_group</code>,
and <code>_grp</code> are removed.
:issue:<code>2322</code></p>
</li>
<li>
<p>Show the <code>types.ParamType.name</code> for
<code>types.Choice</code> options within
<code>--help</code> message if <code>show_choices=False</code> is
specified.
:issue:<code>2356</code></p>
</li>
<li>
<p>Do not display default values in prompts when
<code>Option.show_default</code> is
<code>False</code>. :pr:<code>2509</code></p>
</li>
<li>
<p>Add <code>get_help_extra</code> method on <code>Option</code> to
fetch the generated extra
items used in <code>get_help_record</code> to render help text.
:issue:<code>2516</code>
:pr:<code>2517</code></p>
</li>
<li>
<p>Keep stdout and stderr streams independent in <code>CliRunner</code>.
Always
collect stderr output and never raise an exception. Add a new
output stream to simulate what the user sees in their terminal. Removes
the <code>mix_stderr</code> parameter in <code>CliRunner</code>.
:issue:<code>2522</code> :pr:<code>2523</code></p>
</li>
<li>
<p><code>Option.show_envvar</code> now also shows environment variable
in error messages.
:issue:<code>2695</code> :pr:<code>2696</code></p>
</li>
<li>
<p><code>Context.close</code> will be called on exit. This results in
all
<code>Context.call_on_close</code> callbacks and context managers added
via
<code>Context.with_resource</code> to be closed on exit as well.
:pr:<code>2680</code></p>
</li>
<li>
<p>Add <code>ProgressBar(hidden: bool)</code> to allow hiding the
progressbar. :issue:<code>2609</code></p>
</li>
<li>
<p>A <code>UserWarning</code> will be shown when multiple parameters
attempt to use the</p>
</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="219206a186"><code>219206a</code></a>
release version 8.2.0</li>
<li><a
href="498f882604"><code>498f882</code></a>
drop end of life python versions (<a
href="https://redirect.github.com/pallets/click/issues/2893">#2893</a>)</li>
<li><a
href="ba770cbc96"><code>ba770cb</code></a>
drop end of life python versions</li>
<li><a
href="f14b75063f"><code>f14b750</code></a>
update dev dependencies</li>
<li><a
href="9982faee85"><code>9982fae</code></a>
Update CHANGES.rst</li>
<li><a
href="7318f5f11b"><code>7318f5f</code></a>
Update CHANGES.rst</li>
<li><a
href="b7c0ab471c"><code>b7c0ab4</code></a>
Merge <code>stable</code> into <code>main</code>; Release 8.2.0 (<a
href="https://redirect.github.com/pallets/click/issues/2873">#2873</a>)</li>
<li><a
href="c9b96fe08d"><code>c9b96fe</code></a>
Merge branch 'main' into main</li>
<li><a
href="ab21233fc8"><code>ab21233</code></a>
Rewrite second half of options docs (<a
href="https://redirect.github.com/pallets/click/issues/2848">#2848</a>)</li>
<li><a
href="8c89a14362"><code>8c89a14</code></a>
Merge branch 'main' into options_docs_2</li>
<li>Additional commits viewable in <a
href="https://github.com/pallets/click/compare/8.1.8...8.2.0">compare
view</a></li>
</ul>
</details>
<br />
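
Two of the 8.2.0 changes quoted above affect callers directly: the deprecated `__version__` attribute and the removed `mix_stderr` parameter on `CliRunner`. A small sketch of code written against the new behavior (assuming click >= 8.2.0):

```python
import importlib.metadata

import click
from click.testing import CliRunner

# Feature-detect the version instead of the deprecated click.__version__.
print(importlib.metadata.version("click"))


@click.command()
def hello() -> None:
    click.echo("hello")
    click.echo("warning", err=True)


runner = CliRunner()  # mix_stderr no longer exists; streams stay separate
result = runner.invoke(hello)
assert result.stdout == "hello\n"     # stdout only
assert result.stderr == "warning\n"   # stderr is now always collected
```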

Updates `e2b-code-interpreter` from 1.1.1 to 1.5.0
<details>
<summary>Commits</summary>
<ul>
<li><a
href="c590c09188"><code>c590c09</code></a>
Bump e2b core to 1.4.0 (<a
href="https://redirect.github.com/e2b-dev/code-interpreter/issues/101">#101</a>)</li>
<li><a
href="0ba58b4a97"><code>0ba58b4</code></a>
vitest: continue after failing test (<a
href="https://redirect.github.com/e2b-dev/code-interpreter/issues/100">#100</a>)</li>
<li><a
href="e80eb185dd"><code>e80eb18</code></a>
added ts-kernel tests (<a
href="https://redirect.github.com/e2b-dev/code-interpreter/issues/99">#99</a>)</li>
<li><a
href="8251c2fb3c"><code>8251c2f</code></a>
[skip ci] Release new versions</li>
<li><a
href="95163ce294"><code>95163ce</code></a>
Added TypeScript support to the code interpreter (<a
href="https://redirect.github.com/e2b-dev/code-interpreter/issues/91">#91</a>)</li>
<li><a
href="120d097e24"><code>120d097</code></a>
[skip ci] Release new versions</li>
<li><a
href="9f9a423e45"><code>9f9a423</code></a>
Updated template with SWC compiler for TypeScript (<a
href="https://redirect.github.com/e2b-dev/code-interpreter/issues/97">#97</a>)</li>
<li><a
href="3be5d224f8"><code>3be5d22</code></a>
[skip ci] Release new versions</li>
<li><a
href="2ee6ffb9aa"><code>2ee6ffb</code></a>
Update JS SDK target to es2017 (<a
href="https://redirect.github.com/e2b-dev/code-interpreter/issues/96">#96</a>)</li>
<li><a
href="2bfcfbb615"><code>2bfcfbb</code></a>
Security patches (<a
href="https://redirect.github.com/e2b-dev/code-interpreter/issues/95">#95</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/e2b-dev/code-interpreter/compare/@e2b/code-interpreter@1.1.1...@e2b/code-interpreter@1.5.0">compare
view</a></li>
</ul>
</details>
<br />

Updates `google-api-python-client` from 2.166.0 to 2.169.0
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/googleapis/google-api-python-client/releases">google-api-python-client's
releases</a>.</em></p>
<blockquote>
<h2>v2.169.0</h2>
<h2><a
href="https://github.com/googleapis/google-api-python-client/compare/v2.168.0...v2.169.0">2.169.0</a>
(2025-04-29)</h2>
<h3>Features</h3>
<ul>
<li><strong>aiplatform:</strong> Update the api <a href="6ccd7f5371">6ccd7f5</a> (<a href="14bd22993e">14bd229</a>)</li>
<li><strong>artifactregistry:</strong> Update the api <a href="ac6477013f">ac64770</a> (<a href="14bd22993e">14bd229</a>)</li>
<li><strong>blockchainnodeengine:</strong> Update the api <a href="4b1c7d2466">4b1c7d2</a> (<a href="14bd22993e">14bd229</a>)</li>
<li><strong>cloudchannel:</strong> Update the api <a href="5e15d858dd">5e15d85</a> (<a href="14bd22993e">14bd229</a>)</li>
<li><strong>cloudresourcemanager:</strong> Update the api <a href="94d98b6e6a">94d98b6</a> (<a href="14bd22993e">14bd229</a>)</li>
<li><strong>connectors:</strong> Update the api <a href="a456e2ceaf">a456e2c</a> (<a href="14bd22993e">14bd229</a>)</li>
<li><strong>dataflow:</strong> Update the api <a href="ea28e0289e">ea28e02</a> (<a href="14bd22993e">14bd229</a>)</li>
<li><strong>dataform:</strong> Update the api <a href="35990c2ffd">35990c2</a> (<a href="14bd22993e">14bd229</a>)</li>
<li><strong>datamigration:</strong> Update the api <a href="d9218bd460">d9218bd</a> (<a href="14bd22993e">14bd229</a>)</li>
<li><strong>dataplex:</strong> Update the api <a href="6df52e86e8">6df52e8</a> (<a href="14bd22993e">14bd229</a>)</li>
<li><strong>dialogflow:</strong> Update the api <a href="bb0b59eedc">bb0b59e</a> (<a href="14bd22993e">14bd229</a>)</li>
<li><strong>discoveryengine:</strong> Update the api <a href="5af62dfdcf">5af62df</a> (<a href="14bd22993e">14bd229</a>)</li>
<li><strong>displayvideo:</strong> Update the api <a href="67effcf14b">67effcf</a> (<a href="14bd22993e">14bd229</a>)</li>
<li><strong>documentai:</strong> Update the api <a href="fadb34b1a6">fadb34b</a> (<a href="14bd22993e">14bd229</a>)</li>
<li><strong>drive:</strong> Update the api <a href="7d2063f757">7d2063f</a> (<a href="14bd22993e">14bd229</a>)</li>
<li><strong>file:</strong> Update the api <a href="a820c78341">a820c78</a> (<a href="14bd22993e">14bd229</a>)</li>
<li><strong>firebaseml:</strong> Update the api <a href="e45e9b0fc2">e45e9b0</a> (<a href="14bd22993e">14bd229</a>)</li>
<li><strong>gkehub:</strong> Update the api <a href="3c38499a90">3c38499</a> (<a href="14bd22993e">14bd229</a>)</li>
<li><strong>iamcredentials:</strong> Update the api <a href="c59ac8442a">c59ac84</a> (<a href="14bd22993e">14bd229</a>)</li>
<li><strong>integrations:</strong> Update the api <a href="fa57493279">fa57493</a> (<a href="14bd22993e">14bd229</a>)</li>
<li><strong>merchantapi:</strong> Update the api <a href="8e00992fb2">8e00992</a> (<a href="14bd22993e">14bd229</a>)</li>
<li><strong>networkconnectivity:</strong> Update the api <a href="b3c8df3478">b3c8df3</a> (<a href="14bd22993e">14bd229</a>)</li>
<li><strong>networkmanagement:</strong> Update the api <a href="62a2c6a476">62a2c6a</a> (<a href="14bd22993e">14bd229</a>)</li>
<li><strong>networksecurity:</strong> Update the api <a href="39706e9fd8">39706e9</a> (<a href="14bd22993e">14bd229</a>)</li>
<li><strong>oracledatabase:</strong> Update the api <a href="6678c3beeb">6678c3b</a> (<a href="14bd22993e">14bd229</a>)</li>
<li><strong>orgpolicy:</strong> Update the api <a href="d1244740ed">d124474</a> (<a href="14bd22993e">14bd229</a>)</li>
<li><strong>sheets:</strong> Update the api <a href="8b09804195">8b09804</a> (<a href="14bd22993e">14bd229</a>)</li>
<li><strong>sqladmin:</strong> Update the api <a href="900e43c86d">900e43c</a> (<a href="14bd22993e">14bd229</a>)</li>
<li><strong>tpu:</strong> Update the api <a href="183d48eb24">183d48e</a> (<a href="14bd22993e">14bd229</a>)</li>
<li><strong>translate:</strong> Update the api <a href="c230f82427">c230f82</a> (<a href="14bd22993e">14bd229</a>)</li>
<li><strong>vpcaccess:</strong> Update the api <a href="cae086c59a">cae086c</a> (<a href="14bd22993e">14bd229</a>)</li>
<li><strong>youtube:</strong> Update the api <a href="d0b32ba5ba">d0b32ba</a> (<a href="14bd22993e">14bd229</a>)</li>
</ul>
<h3>Bug Fixes</h3>
<ul>
<li><strong>admin:</strong> Update the api <a href="82e6e7d206">82e6e7d</a> (<a href="14bd22993e">14bd229</a>)</li>
<li><strong>cloudbuild:</strong> Update the api <a href="009d14e0bd">009d14e</a> (<a href="14bd22993e">14bd229</a>)</li>
<li><strong>storage:</strong> Update the api <a href="e70b2a8745">e70b2a8</a> (<a href="14bd22993e">14bd229</a>)</li>
</ul>
<h2>v2.168.0</h2>
<h2><a
href="https://github.com/googleapis/google-api-python-client/compare/v2.167.0...v2.168.0">2.168.0</a>
(2025-04-22)</h2>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="bece3b3b3a"><code>bece3b3</code></a>
chore(main): release 2.169.0 (<a
href="https://redirect.github.com/googleapis/google-api-python-client/issues/2600">#2600</a>)</li>
<li><a
href="14bd22993e"><code>14bd229</code></a>
chore: Update discovery artifacts (<a
href="https://redirect.github.com/googleapis/google-api-python-client/issues/2599">#2599</a>)</li>
<li><a
href="ac22e7ce69"><code>ac22e7c</code></a>
chore(main): release 2.168.0 (<a
href="https://redirect.github.com/googleapis/google-api-python-client/issues/2595">#2595</a>)</li>
<li><a
href="27e0dff489"><code>27e0dff</code></a>
chore: Update discovery artifacts (<a
href="https://redirect.github.com/googleapis/google-api-python-client/issues/2597">#2597</a>)</li>
<li><a
href="e2aaf41f7e"><code>e2aaf41</code></a>
chore: Update discovery artifacts (<a
href="https://redirect.github.com/googleapis/google-api-python-client/issues/2596">#2596</a>)</li>
<li><a
href="607bd3d516"><code>607bd3d</code></a>
fix: remove setup.cfg configuration for creating universal wheels (<a
href="https://redirect.github.com/googleapis/google-api-python-client/issues/2580">#2580</a>)</li>
<li><a
href="236c82dc14"><code>236c82d</code></a>
chore(main): release 2.167.0 (<a
href="https://redirect.github.com/googleapis/google-api-python-client/issues/2591">#2591</a>)</li>
<li><a
href="139f58fb30"><code>139f58f</code></a>
fix: explicitly declare support for Python 3.13 (<a
href="https://redirect.github.com/googleapis/google-api-python-client/issues/2593">#2593</a>)</li>
<li><a
href="f84e041b4b"><code>f84e041</code></a>
chore: Update discovery artifacts (<a
href="https://redirect.github.com/googleapis/google-api-python-client/issues/2594">#2594</a>)</li>
<li><a
href="a9db82a60a"><code>a9db82a</code></a>
chore: Update discovery artifacts (<a
href="https://redirect.github.com/googleapis/google-api-python-client/issues/2590">#2590</a>)</li>
<li>See full diff in <a
href="https://github.com/googleapis/google-api-python-client/compare/v2.166.0...v2.169.0">compare
view</a></li>
</ul>
</details>
<br />

Updates `google-auth-oauthlib` from 1.2.1 to 1.2.2
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/GoogleCloudPlatform/google-auth-library-python-oauthlib/releases">google-auth-oauthlib's
releases</a>.</em></p>
<blockquote>
<h2>v1.2.2</h2>
<h2><a
href="https://github.com/googleapis/google-auth-library-python-oauthlib/compare/v1.2.1...v1.2.2">1.2.2</a>
(2025-04-01)</h2>
<h3>Bug Fixes</h3>
<ul>
<li>Do not include docs/conf.py &amp; scripts in wheel (<a
href="https://redirect.github.com/googleapis/google-auth-library-python-oauthlib/issues/328">#328</a>)
(<a
href="78940dfce4">78940df</a>)</li>
<li>Let OS select an available port when running TestInstalledAppFlow
(<a
href="https://redirect.github.com/googleapis/google-auth-library-python-oauthlib/issues/407">#407</a>)
(<a
href="6060d65626">6060d65</a>),
closes <a
href="https://redirect.github.com/googleapis/google-auth-library-python-oauthlib/issues/381">#381</a></li>
<li>Remove setup.cfg configuration for creating universal wheels (<a
href="https://redirect.github.com/googleapis/google-auth-library-python-oauthlib/issues/405">#405</a>)
(<a
href="0b962ed5aa">0b962ed</a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/googleapis/google-auth-library-python-oauthlib/blob/main/CHANGELOG.md">google-auth-oauthlib's
changelog</a>.</em></p>
<blockquote>
<h2><a
href="https://github.com/googleapis/google-auth-library-python-oauthlib/compare/v1.2.1...v1.2.2">1.2.2</a>
(2025-04-01)</h2>
<h3>Bug Fixes</h3>
<ul>
<li>Do not include docs/conf.py &amp; scripts in wheel (<a
href="https://redirect.github.com/googleapis/google-auth-library-python-oauthlib/issues/328">#328</a>)
(<a
href="78940dfce4">78940df</a>)</li>
<li>Let OS select an available port when running TestInstalledAppFlow
(<a
href="https://redirect.github.com/googleapis/google-auth-library-python-oauthlib/issues/407">#407</a>)
(<a
href="6060d65626">6060d65</a>),
closes <a
href="https://redirect.github.com/googleapis/google-auth-library-python-oauthlib/issues/381">#381</a></li>
<li>Remove setup.cfg configuration for creating universal wheels (<a
href="https://redirect.github.com/googleapis/google-auth-library-python-oauthlib/issues/405">#405</a>)
(<a
href="0b962ed5aa">0b962ed</a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="cc29cc3c37"><code>cc29cc3</code></a>
chore(main): release 1.2.2 (<a
href="https://redirect.github.com/GoogleCloudPlatform/google-auth-library-python-oauthlib/issues/368">#368</a>)</li>
<li><a
href="6060d65626"><code>6060d65</code></a>
fix: Let OS select an available port when running TestInstalledAppFlow
(<a
href="https://redirect.github.com/GoogleCloudPlatform/google-auth-library-python-oauthlib/issues/407">#407</a>)</li>
<li><a
href="0b962ed5aa"><code>0b962ed</code></a>
fix: remove setup.cfg configuration for creating universal wheels (<a
href="https://redirect.github.com/GoogleCloudPlatform/google-auth-library-python-oauthlib/issues/405">#405</a>)</li>
<li><a
href="dedc58a521"><code>dedc58a</code></a>
chore: remove unused files (<a
href="https://redirect.github.com/GoogleCloudPlatform/google-auth-library-python-oauthlib/issues/402">#402</a>)</li>
<li><a
href="63442e94fd"><code>63442e9</code></a>
chore(python): conditionally load credentials in .kokoro/build.sh (<a
href="https://redirect.github.com/GoogleCloudPlatform/google-auth-library-python-oauthlib/issues/398">#398</a>)</li>
<li><a
href="9a1dfab8e9"><code>9a1dfab</code></a>
chore: check if port is in use before returning the port to start a new
serve...</li>
<li><a
href="9c3861009e"><code>9c38610</code></a>
chore: Reduce priority of flaky tests (<a
href="https://redirect.github.com/GoogleCloudPlatform/google-auth-library-python-oauthlib/issues/390">#390</a>)</li>
<li><a
href="780f6a6cae"><code>780f6a6</code></a>
chore(python): Update the python version in docs presubmit to use 3.10
(<a
href="https://redirect.github.com/GoogleCloudPlatform/google-auth-library-python-oauthlib/issues/387">#387</a>)</li>
<li><a
href="2a561a6975"><code>2a561a6</code></a>
chore(deps): update all dependencies (<a
href="https://redirect.github.com/GoogleCloudPlatform/google-auth-library-python-oauthlib/issues/382">#382</a>)</li>
<li><a
href="c220b451d1"><code>c220b45</code></a>
chore(python): update dependencies in .kokoro/docker/docs (<a
href="https://redirect.github.com/GoogleCloudPlatform/google-auth-library-python-oauthlib/issues/380">#380</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/GoogleCloudPlatform/google-auth-library-python-oauthlib/compare/v1.2.1...v1.2.2">compare
view</a></li>
</ul>
</details>
<br />
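
The `TestInstalledAppFlow` fix above uses the standard OS-assigned-port trick, which is also available through the public API; a sketch (the secrets file path and scope are placeholders):

```python
from google_auth_oauthlib.flow import InstalledAppFlow

flow = InstalledAppFlow.from_client_secrets_file(
    "client_secrets.json",
    scopes=["https://www.googleapis.com/auth/spreadsheets.readonly"],
)
# port=0 lets the OS pick any free port for the local redirect server,
# avoiding "address already in use" failures on a fixed default port.
credentials = flow.run_local_server(port=0)
print(credentials.valid)
```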

Updates `groq` from 0.20.0 to 0.24.0
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/groq/groq-python/releases">groq's
releases</a>.</em></p>
<blockquote>
<h2>v0.24.0</h2>
<h2>0.24.0 (2025-05-02)</h2>
<p>Full Changelog: <a
href="https://github.com/groq/groq-python/compare/v0.23.1...v0.24.0">v0.23.1...v0.24.0</a></p>
<h3>Features</h3>
<ul>
<li><strong>api:</strong> api update (<a
href="e65ff4d299">e65ff4d</a>)</li>
</ul>
<h3>Bug Fixes</h3>
<ul>
<li>add include/exclude_domains to all chat completions overloads (<a
href="7616f4b2e9">7616f4b</a>)</li>
</ul>
<h2>v0.23.1</h2>
<h2>0.23.1 (2025-04-24)</h2>
<p>Full Changelog: <a
href="https://github.com/groq/groq-python/compare/v0.23.0...v0.23.1">v0.23.0...v0.23.1</a></p>
<h3>Bug Fixes</h3>
<ul>
<li>add executed_tools to streaming choicedelta (<a
href="fb26fbcd0b">fb26fbc</a>)</li>
<li><strong>pydantic v1:</strong> more robust ModelField.annotation
check (<a
href="40aaee2cd7">40aaee2</a>)</li>
</ul>
<h3>Chores</h3>
<ul>
<li>broadly detect json family of content-type headers (<a
href="2411533949">2411533</a>)</li>
<li><strong>ci:</strong> add timeout thresholds for CI jobs (<a
href="aae461436e">aae4614</a>)</li>
<li><strong>ci:</strong> only use depot for staging repos (<a
href="b6d1b47c1c">b6d1b47</a>)</li>
<li><strong>internal:</strong> codegen related update (<a
href="1da64f5c78">1da64f5</a>)</li>
<li><strong>internal:</strong> fix list file params (<a
href="a9b18debf8">a9b18de</a>)</li>
<li><strong>internal:</strong> import reformatting (<a
href="5068736832">5068736</a>)</li>
<li><strong>internal:</strong> minor formatting changes (<a
href="bc26d603a5">bc26d60</a>)</li>
<li><strong>internal:</strong> refactor retries to not use recursion (<a
href="488b9fe0a8">488b9fe</a>)</li>
</ul>
<h2>v0.23.0</h2>
<h2>0.23.0 (2025-04-22)</h2>
<p>Full Changelog: <a
href="https://github.com/groq/groq-python/compare/v0.22.0...v0.23.0">v0.22.0...v0.23.0</a></p>
<h3>Features</h3>
<ul>
<li><strong>api:</strong> api update (<a
href="f5cbd0148e">f5cbd01</a>)</li>
<li><strong>api:</strong> api update (<a
href="e7c5514b3e">e7c5514</a>)</li>
<li><strong>api:</strong> api update (<a
href="9d5b7c8ba4">9d5b7c8</a>)</li>
<li><strong>api:</strong> api update (<a
href="73357e15c4">73357e1</a>)</li>
<li><strong>api:</strong> api update (<a
href="b1d6697301">b1d6697</a>)</li>
<li><strong>api:</strong> api update (<a
href="98ef30efd2">98ef30e</a>)</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/groq/groq-python/blob/main/CHANGELOG.md">groq's
changelog</a>.</em></p>
<blockquote>
<h2>0.24.0 (2025-05-02)</h2>
<p>Full Changelog: <a
href="https://github.com/groq/groq-python/compare/v0.23.1...v0.24.0">v0.23.1...v0.24.0</a></p>
<h3>Features</h3>
<ul>
<li><strong>api:</strong> api update (<a
href="e65ff4d299">e65ff4d</a>)</li>
</ul>
<h3>Bug Fixes</h3>
<ul>
<li>add include/exclude_domains to all chat completions overloads (<a
href="7616f4b2e9">7616f4b</a>)</li>
</ul>
<h2>0.23.1 (2025-04-24)</h2>
<p>Full Changelog: <a
href="https://github.com/groq/groq-python/compare/v0.23.0...v0.23.1">v0.23.0...v0.23.1</a></p>
<h3>Bug Fixes</h3>
<ul>
<li>add executed_tools to streaming choicedelta (<a
href="fb26fbcd0b">fb26fbc</a>)</li>
<li><strong>pydantic v1:</strong> more robust ModelField.annotation
check (<a
href="40aaee2cd7">40aaee2</a>)</li>
</ul>
<h3>Chores</h3>
<ul>
<li>broadly detect json family of content-type headers (<a
href="2411533949">2411533</a>)</li>
<li><strong>ci:</strong> add timeout thresholds for CI jobs (<a
href="aae461436e">aae4614</a>)</li>
<li><strong>ci:</strong> only use depot for staging repos (<a
href="b6d1b47c1c">b6d1b47</a>)</li>
<li><strong>internal:</strong> codegen related update (<a
href="1da64f5c78">1da64f5</a>)</li>
<li><strong>internal:</strong> fix list file params (<a
href="a9b18debf8">a9b18de</a>)</li>
<li><strong>internal:</strong> import reformatting (<a
href="5068736832">5068736</a>)</li>
<li><strong>internal:</strong> minor formatting changes (<a
href="bc26d603a5">bc26d60</a>)</li>
<li><strong>internal:</strong> refactor retries to not use recursion (<a
href="488b9fe0a8">488b9fe</a>)</li>
</ul>
<h2>0.23.0 (2025-04-22)</h2>
<p>Full Changelog: <a
href="https://github.com/groq/groq-python/compare/v0.22.0...v0.23.0">v0.22.0...v0.23.0</a></p>
<h3>Features</h3>
<ul>
<li><strong>api:</strong> api update (<a
href="f5cbd0148e">f5cbd01</a>)</li>
<li><strong>api:</strong> api update (<a
href="e7c5514b3e">e7c5514</a>)</li>
<li><strong>api:</strong> api update (<a
href="9d5b7c8ba4">9d5b7c8</a>)</li>
<li><strong>api:</strong> api update (<a
href="73357e15c4">73357e1</a>)</li>
<li><strong>api:</strong> api update (<a
href="b1d6697301">b1d6697</a>)</li>
<li><strong>api:</strong> api update (<a
href="98ef30efd2">98ef30e</a>)</li>
</ul>
<h3>Bug Fixes</h3>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="e1280b2c3e"><code>e1280b2</code></a>
release: 0.24.0 (<a
href="https://redirect.github.com/groq/groq-python/issues/240">#240</a>)</li>
<li><a
href="ab7fe20db7"><code>ab7fe20</code></a>
release: 0.23.1 (<a
href="https://redirect.github.com/groq/groq-python/issues/239">#239</a>)</li>
<li><a href="https://github.com/groq/groq-python/commit/f3d1ea0...

_Description has been truncated_

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
2025-05-16 10:26:39 +00:00
dependabot[bot]
ba91c9f736 chore(libs/deps): Update 4 dependencies (#9908)
Bumps the production-dependencies group with 4 updates in the
/autogpt_platform/autogpt_libs directory:
[google-cloud-logging](https://github.com/googleapis/python-logging),
[pydantic](https://github.com/pydantic/pydantic),
[pydantic-settings](https://github.com/pydantic/pydantic-settings) and
[supabase](https://github.com/supabase/supabase-py).

Updates `google-cloud-logging` from 3.11.4 to 3.12.1
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/googleapis/python-logging/releases">google-cloud-logging's
releases</a>.</em></p>
<blockquote>
<h2>v3.12.1</h2>
<h2><a
href="https://github.com/googleapis/python-logging/compare/v3.12.0...v3.12.1">3.12.1</a>
(2025-04-21)</h2>
<h3>Bug Fixes</h3>
<ul>
<li>Make logging handler close conditional to having the transport
opened (<a
href="https://redirect.github.com/googleapis/python-logging/issues/990">#990</a>)
(<a
href="66c6b91725">66c6b91</a>)</li>
</ul>
<h2>v3.12.0</h2>
<h2><a
href="https://github.com/googleapis/python-logging/compare/v3.11.4...v3.12.0">3.12.0</a>
(2025-04-10)</h2>
<h3>Features</h3>
<ul>
<li>Add REST Interceptors which support reading metadata (<a
href="681bcc5c1f">681bcc5</a>)</li>
<li>Add support for opt-in debug logging (<a
href="681bcc5c1f">681bcc5</a>)</li>
<li>Added flushes/close functionality to logging handlers (<a
href="https://redirect.github.com/googleapis/python-logging/issues/917">#917</a>)
(<a
href="d179304b34">d179304</a>)</li>
</ul>
<h3>Bug Fixes</h3>
<ul>
<li>Allow protobuf 6.x (<a
href="https://redirect.github.com/googleapis/python-logging/issues/977">#977</a>)
(<a
href="6757890013">6757890</a>)</li>
<li><strong>deps:</strong> Require google-cloud-audit-log &gt;= 0.3.1
(<a
href="https://redirect.github.com/googleapis/python-logging/issues/979">#979</a>)
(<a
href="1cc00ecf64">1cc00ec</a>)</li>
<li>Fix typing issue with gRPC metadata when key ends in -bin (<a
href="681bcc5c1f">681bcc5</a>)</li>
</ul>
<h3>Documentation</h3>
<ul>
<li>Added documentation on log_level and excluded_loggers params in
setup_logging (<a
href="https://redirect.github.com/googleapis/python-logging/issues/971">#971</a>)
(<a
href="70d9d25bf8">70d9d25</a>)</li>
<li>Update README to break infinite redirect loop (<a
href="https://redirect.github.com/googleapis/python-logging/issues/972">#972</a>)
(<a
href="52cd907bb3">52cd907</a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/googleapis/python-logging/blob/main/CHANGELOG.md">google-cloud-logging's
changelog</a>.</em></p>
<blockquote>
<h2><a
href="https://github.com/googleapis/python-logging/compare/v3.12.0...v3.12.1">3.12.1</a>
(2025-04-21)</h2>
<h3>Bug Fixes</h3>
<ul>
<li>Make logging handler close conditional to having the transport
opened (<a
href="https://redirect.github.com/googleapis/python-logging/issues/990">#990</a>)
(<a
href="66c6b91725">66c6b91</a>)</li>
</ul>
<h2><a
href="https://github.com/googleapis/python-logging/compare/v3.11.4...v3.12.0">3.12.0</a>
(2025-04-10)</h2>
<h3>Features</h3>
<ul>
<li>Add REST Interceptors which support reading metadata (<a
href="681bcc5c1f">681bcc5</a>)</li>
<li>Add support for opt-in debug logging (<a
href="681bcc5c1f">681bcc5</a>)</li>
<li>Added flushes/close functionality to logging handlers (<a
href="https://redirect.github.com/googleapis/python-logging/issues/917">#917</a>)
(<a
href="d179304b34">d179304</a>)</li>
</ul>
<h3>Bug Fixes</h3>
<ul>
<li>Allow protobuf 6.x (<a
href="https://redirect.github.com/googleapis/python-logging/issues/977">#977</a>)
(<a
href="6757890013">6757890</a>)</li>
<li><strong>deps:</strong> Require google-cloud-audit-log &gt;= 0.3.1
(<a
href="https://redirect.github.com/googleapis/python-logging/issues/979">#979</a>)
(<a
href="1cc00ecf64">1cc00ec</a>)</li>
<li>Fix typing issue with gRPC metadata when key ends in -bin (<a
href="681bcc5c1f">681bcc5</a>)</li>
</ul>
<h3>Documentation</h3>
<ul>
<li>Added documentation on log_level and excluded_loggers params in
setup_logging (<a
href="https://redirect.github.com/googleapis/python-logging/issues/971">#971</a>)
(<a
href="70d9d25bf8">70d9d25</a>)</li>
<li>Update README to break infinite redirect loop (<a
href="https://redirect.github.com/googleapis/python-logging/issues/972">#972</a>)
(<a
href="52cd907bb3">52cd907</a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="f4fb25ab6f"><code>f4fb25a</code></a>
chore(main): release 3.12.1 (<a
href="https://redirect.github.com/googleapis/python-logging/issues/992">#992</a>)</li>
<li><a
href="66c6b91725"><code>66c6b91</code></a>
fix: make logging handler close conditional to having the transport
opened (#...</li>
<li><a
href="5f89b5f77d"><code>5f89b5f</code></a>
chore(main): release 3.12.0 (<a
href="https://redirect.github.com/googleapis/python-logging/issues/973">#973</a>)</li>
<li><a
href="5db27c2ac0"><code>5db27c2</code></a>
chore(python): remove .flake8 configuration file in templates (<a
href="https://redirect.github.com/googleapis/python-logging/issues/983">#983</a>)</li>
<li><a
href="d179304b34"><code>d179304</code></a>
feat: Added flushes/close functionality to logging handlers (<a
href="https://redirect.github.com/googleapis/python-logging/issues/917">#917</a>)</li>
<li><a
href="1cc00ecf64"><code>1cc00ec</code></a>
fix(deps): require google-cloud-audit-log &gt;= 0.3.1 (<a
href="https://redirect.github.com/googleapis/python-logging/issues/979">#979</a>)</li>
<li><a
href="42387bf63f"><code>42387bf</code></a>
chore: Update gapic-generator-python to 1.23.6 (<a
href="https://redirect.github.com/googleapis/python-logging/issues/982">#982</a>)</li>
<li><a
href="52cd907bb3"><code>52cd907</code></a>
docs: update README to break infinite redirect loop (<a
href="https://redirect.github.com/googleapis/python-logging/issues/972">#972</a>)</li>
<li><a
href="6757890013"><code>6757890</code></a>
fix: Allow protobuf 6.x (<a
href="https://redirect.github.com/googleapis/python-logging/issues/977">#977</a>)</li>
<li><a
href="40b1a528df"><code>40b1a52</code></a>
chore: remove unused files (<a
href="https://redirect.github.com/googleapis/python-logging/issues/976">#976</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/googleapis/python-logging/compare/v3.11.4...v3.12.1">compare
view</a></li>
</ul>
</details>
<br />
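
For reference, a sketch of the `setup_logging` parameters whose documentation the 3.12.0 release above calls out (`log_level`, `excluded_loggers`); the excluded logger names are illustrative:

```python
import logging

import google.cloud.logging

client = google.cloud.logging.Client()  # uses ambient GCP credentials
client.setup_logging(
    log_level=logging.INFO,                    # minimum severity forwarded
    excluded_loggers=("werkzeug", "urllib3"),  # keep noisy loggers local
)

logging.getLogger(__name__).info("shipped to Cloud Logging")
```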

Updates `pydantic` from 2.11.1 to 2.11.4
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/pydantic/pydantic/releases">pydantic's
releases</a>.</em></p>
<blockquote>
<h2>v2.11.4 2025-04-29</h2>
<h3>What's Changed</h3>
<h4>Packaging</h4>
<ul>
<li>Bump <code>mkdocs-llmstxt</code> to v0.2.0 by <a
href="https://github.com/Viicos"><code>@​Viicos</code></a> in <a
href="https://redirect.github.com/pydantic/pydantic/pull/11725">#11725</a></li>
</ul>
<h4>Changes</h4>
<ul>
<li>Allow config and bases to be specified together in
<code>create_model()</code> by <a
href="https://github.com/Viicos"><code>@​Viicos</code></a> in <a
href="https://redirect.github.com/pydantic/pydantic/pull/11714">#11714</a>.
This change was backported as it was previously possible (although not
meant to be supported)
to provide <code>model_config</code> as a field, which would make it
possible to provide both configuration
and bases.</li>
</ul>
<h4>Fixes</h4>
<ul>
<li>Remove generics cache workaround by <a
href="https://github.com/Viicos"><code>@​Viicos</code></a> in <a
href="https://redirect.github.com/pydantic/pydantic/pull/11755">#11755</a></li>
<li>Remove coercion of decimal constraints by <a
href="https://github.com/Viicos"><code>@​Viicos</code></a> in <a
href="https://redirect.github.com/pydantic/pydantic/pull/11772">#11772</a></li>
<li>Fix crash when expanding root type in the mypy plugin by <a
href="https://github.com/Viicos"><code>@​Viicos</code></a> in <a
href="https://redirect.github.com/pydantic/pydantic/pull/11735">#11735</a></li>
<li>Fix issue with recursive generic models by <a
href="https://github.com/Viicos"><code>@​Viicos</code></a> in <a
href="https://redirect.github.com/pydantic/pydantic/pull/11775">#11775</a></li>
<li>Traverse <code>function-before</code> schemas during schema
gathering by <a
href="https://github.com/Viicos"><code>@​Viicos</code></a> in <a
href="https://redirect.github.com/pydantic/pydantic/pull/11801">#11801</a></li>
</ul>
<h2>v2.11.3 2025-04-08</h2>
<!-- raw HTML omitted -->
<h2>What's Changed</h2>
<h3>Packaging</h3>
<ul>
<li>Update V1 copy to v1.10.21 by <a
href="https://github.com/Viicos"><code>@​Viicos</code></a> in <a
href="https://redirect.github.com/pydantic/pydantic/pull/11706">pydantic/pydantic#11706</a></li>
</ul>
<h3>Fixes</h3>
<ul>
<li>Preserve field description when rebuilding model fields by <a
href="https://github.com/Viicos"><code>@​Viicos</code></a> in <a
href="https://redirect.github.com/pydantic/pydantic/pull/11698">pydantic/pydantic#11698</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/pydantic/pydantic/compare/v2.11.2...v2.11.3">https://github.com/pydantic/pydantic/compare/v2.11.2...v2.11.3</a></p>
<h2>v2.11.2 2025-04-03</h2>
<!-- raw HTML omitted -->
<h2>What's Changed</h2>
<h3>Fixes</h3>
<ul>
<li>Bump <code>pydantic-core</code> to v2.33.1 by <a
href="https://github.com/Viicos"><code>@​Viicos</code></a> in <a
href="https://redirect.github.com/pydantic/pydantic/pull/11678">pydantic/pydantic#11678</a></li>
<li>Make sure <code>__pydantic_private__</code> exists before setting
private attributes by <a
href="https://github.com/Viicos"><code>@​Viicos</code></a> in <a
href="https://redirect.github.com/pydantic/pydantic/pull/11666">pydantic/pydantic#11666</a></li>
<li>Do not override <code>FieldInfo._complete</code> when using field
from parent class by <a
href="https://github.com/Viicos"><code>@​Viicos</code></a> in <a
href="https://redirect.github.com/pydantic/pydantic/pull/11668">pydantic/pydantic#11668</a></li>
<li>Provide the available definitions when applying discriminated unions
by <a href="https://github.com/Viicos"><code>@​Viicos</code></a> in <a
href="https://redirect.github.com/pydantic/pydantic/pull/11670">pydantic/pydantic#11670</a></li>
<li>Do not expand root type in the mypy plugin for variables by <a
href="https://github.com/Viicos"><code>@​Viicos</code></a> in <a
href="https://redirect.github.com/pydantic/pydantic/pull/11676">pydantic/pydantic#11676</a></li>
<li>Mention the attribute name in model fields deprecation message by <a
href="https://github.com/Viicos"><code>@​Viicos</code></a> in <a
href="https://redirect.github.com/pydantic/pydantic/pull/11674">pydantic/pydantic#11674</a></li>
<li>Properly validate parameterized mappings by <a
href="https://github.com/Viicos"><code>@​Viicos</code></a> in <a
href="https://redirect.github.com/pydantic/pydantic/pull/11658">pydantic/pydantic#11658</a></li>
<li>Prepare release v2.11.2 by <a
href="https://github.com/Viicos"><code>@​Viicos</code></a> in <a
href="https://redirect.github.com/pydantic/pydantic/pull/11684">pydantic/pydantic#11684</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/pydantic/pydantic/compare/v2.11.1...v2.11.2">https://github.com/pydantic/pydantic/compare/v2.11.1...v2.11.2</a></p>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/pydantic/pydantic/blob/main/HISTORY.md">pydantic's
changelog</a>.</em></p>
<blockquote>
<h2>v2.11.4 (2025-04-29)</h2>
<p><a
href="https://github.com/pydantic/pydantic/releases/tag/v2.11.4">GitHub
release</a></p>
<h3>What's Changed</h3>
<h4>Packaging</h4>
<ul>
<li>Bump <code>mkdocs-llmstxt</code> to v0.2.0 by <a
href="https://github.com/Viicos"><code>@​Viicos</code></a> in <a
href="https://redirect.github.com/pydantic/pydantic/pull/11725">#11725</a></li>
</ul>
<h4>Changes</h4>
<ul>
<li>Allow config and bases to be specified together in
<code>create_model()</code> by <a
href="https://github.com/Viicos"><code>@​Viicos</code></a> in <a
href="https://redirect.github.com/pydantic/pydantic/pull/11714">#11714</a>.
This change was backported as it was previously possible (although not
meant to be supported)
to provide <code>model_config</code> as a field, which would make it
possible to provide both configuration
and bases.</li>
</ul>
<h4>Fixes</h4>
<ul>
<li>Remove generics cache workaround by <a
href="https://github.com/Viicos"><code>@​Viicos</code></a> in <a
href="https://redirect.github.com/pydantic/pydantic/pull/11755">#11755</a></li>
<li>Remove coercion of decimal constraints by <a
href="https://github.com/Viicos"><code>@​Viicos</code></a> in <a
href="https://redirect.github.com/pydantic/pydantic/pull/11772">#11772</a></li>
<li>Fix crash when expanding root type in the mypy plugin by <a
href="https://github.com/Viicos"><code>@​Viicos</code></a> in <a
href="https://redirect.github.com/pydantic/pydantic/pull/11735">#11735</a></li>
<li>Fix issue with recursive generic models by <a
href="https://github.com/Viicos"><code>@​Viicos</code></a> in <a
href="https://redirect.github.com/pydantic/pydantic/pull/11775">#11775</a></li>
<li>Traverse <code>function-before</code> schemas during schema
gathering by <a
href="https://github.com/Viicos"><code>@​Viicos</code></a> in <a
href="https://redirect.github.com/pydantic/pydantic/pull/11801">#11801</a></li>
</ul>
<h2>v2.11.3 (2025-04-08)</h2>
<p><a
href="https://github.com/pydantic/pydantic/releases/tag/v2.11.3">GitHub
release</a></p>
<h3>What's Changed</h3>
<h4>Packaging</h4>
<ul>
<li>Update V1 copy to v1.10.21 by <a
href="https://github.com/Viicos"><code>@​Viicos</code></a> in <a
href="https://redirect.github.com/pydantic/pydantic/pull/11706">#11706</a></li>
</ul>
<h4>Fixes</h4>
<ul>
<li>Preserve field description when rebuilding model fields by <a
href="https://github.com/Viicos"><code>@​Viicos</code></a> in <a
href="https://redirect.github.com/pydantic/pydantic/pull/11698">#11698</a></li>
</ul>
<h2>v2.11.2 (2025-04-03)</h2>
<p><a
href="https://github.com/pydantic/pydantic/releases/tag/v2.11.2">GitHub
release</a></p>
<h3>What's Changed</h3>
<h4>Fixes</h4>
<ul>
<li>Bump <code>pydantic-core</code> to v2.33.1 by <a
href="https://github.com/Viicos"><code>@​Viicos</code></a> in <a
href="https://redirect.github.com/pydantic/pydantic/pull/11678">#11678</a></li>
<li>Make sure <code>__pydantic_private__</code> exists before setting
private attributes by <a
href="https://github.com/Viicos"><code>@​Viicos</code></a> in <a
href="https://redirect.github.com/pydantic/pydantic/pull/11666">#11666</a></li>
<li>Do not override <code>FieldInfo._complete</code> when using field
from parent class by <a
href="https://github.com/Viicos"><code>@​Viicos</code></a> in <a
href="https://redirect.github.com/pydantic/pydantic/pull/11668">#11668</a></li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="d444cd1cf6"><code>d444cd1</code></a>
Prepare release v2.11.4</li>
<li><a
href="828fc48d55"><code>828fc48</code></a>
Add documentation note about common pitfall with the annotated
pattern</li>
<li><a
href="42bf1fd784"><code>42bf1fd</code></a>
Bump <code>pydantic-core</code> to v2.33.2 (<a
href="https://redirect.github.com/pydantic/pydantic/issues/11804">#11804</a>)</li>
<li><a
href="7b3f513215"><code>7b3f513</code></a>
Allow config and bases to be specified together in
<code>create_model()</code></li>
<li><a
href="fc521388f2"><code>fc52138</code></a>
Traverse <code>function-before</code> schemas during schema
gathering</li>
<li><a
href="25af78934a"><code>25af789</code></a>
Fix issue with recursive generic models</li>
<li><a
href="91ef6bb39e"><code>91ef6bb</code></a>
Update monthly download count in documentation</li>
<li><a
href="a830775328"><code>a830775</code></a>
Bump <code>mkdocs-llmstxt</code> to v0.2.0</li>
<li><a
href="f5d1c87128"><code>f5d1c87</code></a>
Fix crash when expanding root type in the mypy plugin</li>
<li><a
href="c80bb355d7"><code>c80bb35</code></a>
Remove coercion of decimal constraints</li>
<li>Additional commits viewable in <a
href="https://github.com/pydantic/pydantic/compare/v2.11.1...v2.11.4">compare
view</a></li>
</ul>
</details>
<br />

Updates `pydantic-settings` from 2.8.1 to 2.9.1
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/pydantic/pydantic-settings/releases">pydantic-settings's
releases</a>.</em></p>
<blockquote>
<h2>v2.9.0</h2>
<h2>What's Changed</h2>
<ul>
<li>Drop support for Python 3.8 by <a
href="https://github.com/Viicos"><code>@​Viicos</code></a> in <a
href="https://redirect.github.com/pydantic/pydantic-settings/pull/560">pydantic/pydantic-settings#560</a></li>
<li>Switch to <code>typing-inspection</code> by <a
href="https://github.com/Viicos"><code>@​Viicos</code></a> in <a
href="https://redirect.github.com/pydantic/pydantic-settings/pull/556">pydantic/pydantic-settings#556</a></li>
<li>Introduce <code>uv</code> for Project Management by <a
href="https://github.com/KanchiShimono"><code>@​KanchiShimono</code></a>
in <a
href="https://redirect.github.com/pydantic/pydantic-settings/pull/547">pydantic/pydantic-settings#547</a></li>
<li>Refactor sources.py into a subpackage (<a
href="https://redirect.github.com/pydantic/pydantic-settings/issues/546">#546</a>)
by <a href="https://github.com/ezwiefel"><code>@​ezwiefel</code></a> in
<a
href="https://redirect.github.com/pydantic/pydantic-settings/pull/548">pydantic/pydantic-settings#548</a></li>
<li>chore: cleanup by <a
href="https://github.com/CodeWithEmad"><code>@​CodeWithEmad</code></a>
in <a
href="https://redirect.github.com/pydantic/pydantic-settings/pull/563">pydantic/pydantic-settings#563</a></li>
<li>Fix typo in documentation by <a
href="https://github.com/CodeWithEmad"><code>@​CodeWithEmad</code></a>
in <a
href="https://redirect.github.com/pydantic/pydantic-settings/pull/564">pydantic/pydantic-settings#564</a></li>
<li>Add support for AWS Secrets Manager by <a
href="https://github.com/mavwolverine"><code>@​mavwolverine</code></a>
in <a
href="https://redirect.github.com/pydantic/pydantic-settings/pull/532">pydantic/pydantic-settings#532</a></li>
<li>Fix minor typo: conotations =&gt; connotations by <a
href="https://github.com/svenevs"><code>@​svenevs</code></a> in <a
href="https://redirect.github.com/pydantic/pydantic-settings/pull/577">pydantic/pydantic-settings#577</a></li>
<li>Azure Key Vault: Don't load disabled secret by <a
href="https://github.com/AndreuCodina"><code>@​AndreuCodina</code></a>
in <a
href="https://redirect.github.com/pydantic/pydantic-settings/pull/578">pydantic/pydantic-settings#578</a></li>
<li>Add support for GCP Secret Manager by <a
href="https://github.com/ezwiefel"><code>@​ezwiefel</code></a> in <a
href="https://redirect.github.com/pydantic/pydantic-settings/pull/567">pydantic/pydantic-settings#567</a></li>
<li>CLI JSON Optional Default by <a
href="https://github.com/kschwab"><code>@​kschwab</code></a> in <a
href="https://redirect.github.com/pydantic/pydantic-settings/pull/581">pydantic/pydantic-settings#581</a></li>
<li>Fix for env nested enum. by <a
href="https://github.com/kschwab"><code>@​kschwab</code></a> in <a
href="https://redirect.github.com/pydantic/pydantic-settings/pull/589">pydantic/pydantic-settings#589</a></li>
<li>CLI submodel suppress. by <a
href="https://github.com/kschwab"><code>@​kschwab</code></a> in <a
href="https://redirect.github.com/pydantic/pydantic-settings/pull/587">pydantic/pydantic-settings#587</a></li>
<li>Cli retrieve unknown args by <a
href="https://github.com/kschwab"><code>@​kschwab</code></a> in <a
href="https://redirect.github.com/pydantic/pydantic-settings/pull/588">pydantic/pydantic-settings#588</a></li>
<li>Update pydantic by <a
href="https://github.com/hramezani"><code>@​hramezani</code></a> in <a
href="https://redirect.github.com/pydantic/pydantic-settings/pull/593">pydantic/pydantic-settings#593</a></li>
<li>Fix check in CI by <a
href="https://github.com/hramezani"><code>@​hramezani</code></a> in <a
href="https://redirect.github.com/pydantic/pydantic-settings/pull/595">pydantic/pydantic-settings#595</a></li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a href="https://github.com/ezwiefel"><code>@​ezwiefel</code></a>
made their first contribution in <a
href="https://redirect.github.com/pydantic/pydantic-settings/pull/548">pydantic/pydantic-settings#548</a></li>
<li><a
href="https://github.com/CodeWithEmad"><code>@​CodeWithEmad</code></a>
made their first contribution in <a
href="https://redirect.github.com/pydantic/pydantic-settings/pull/563">pydantic/pydantic-settings#563</a></li>
<li><a
href="https://github.com/mavwolverine"><code>@​mavwolverine</code></a>
made their first contribution in <a
href="https://redirect.github.com/pydantic/pydantic-settings/pull/532">pydantic/pydantic-settings#532</a></li>
<li><a href="https://github.com/svenevs"><code>@​svenevs</code></a> made
their first contribution in <a
href="https://redirect.github.com/pydantic/pydantic-settings/pull/577">pydantic/pydantic-settings#577</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/pydantic/pydantic-settings/compare/v2.8.1...v2.9.0">https://github.com/pydantic/pydantic-settings/compare/v2.8.1...v2.9.0</a></p>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="1874740923"><code>1874740</code></a>
Prepare release 2.9.1 (<a
href="https://redirect.github.com/pydantic/pydantic-settings/issues/600">#600</a>)</li>
<li><a
href="88e77bc8aa"><code>88e77bc</code></a>
Fix typo in gcp secret manager error message (<a
href="https://redirect.github.com/pydantic/pydantic-settings/issues/598">#598</a>)</li>
<li><a
href="e973d9afc8"><code>e973d9a</code></a>
fix: Expose ConfigFileSourceMixing on top level
sources/__init__.py (<a
href="https://redirect.github.com/pydantic/pydantic-settings/issues/597">#597</a>)</li>
<li><a
href="8c0f5f18b0"><code>8c0f5f1</code></a>
Fix check in CI (<a
href="https://redirect.github.com/pydantic/pydantic-settings/issues/595">#595</a>)</li>
<li><a
href="0ac2312042"><code>0ac2312</code></a>
Prepare release 2.9.0 (<a
href="https://redirect.github.com/pydantic/pydantic-settings/issues/594">#594</a>)</li>
<li><a
href="f3e5ac382c"><code>f3e5ac3</code></a>
Update pydantic (<a
href="https://redirect.github.com/pydantic/pydantic-settings/issues/593">#593</a>)</li>
<li><a
href="20640b0efe"><code>20640b0</code></a>
Cli retrieve unknown args (<a
href="https://redirect.github.com/pydantic/pydantic-settings/issues/588">#588</a>)</li>
<li><a
href="ed7fd42bfb"><code>ed7fd42</code></a>
CLI submodel suppress. (<a
href="https://redirect.github.com/pydantic/pydantic-settings/issues/587">#587</a>)</li>
<li><a
href="e9fb3164eb"><code>e9fb316</code></a>
Fix for env nested enum. (<a
href="https://redirect.github.com/pydantic/pydantic-settings/issues/589">#589</a>)</li>
<li><a
href="0e9b329c74"><code>0e9b329</code></a>
CLI JSON Optional Default (<a
href="https://redirect.github.com/pydantic/pydantic-settings/issues/581">#581</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/pydantic/pydantic-settings/compare/v2.8.1...v2.9.1">compare
view</a></li>
</ul>
</details>
<br />

Updates `supabase` from 2.15.0 to 2.15.1
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/supabase/supabase-py/releases">supabase's
releases</a>.</em></p>
<blockquote>
<h2>v2.15.1</h2>
<h2><a
href="https://github.com/supabase/supabase-py/compare/v2.15.0...v2.15.1">2.15.1</a>
(2025-04-28)</h2>
<h3>Bug Fixes</h3>
<ul>
<li><strong>postgrest:</strong> add missing count, head, and get params
(<a
href="https://redirect.github.com/supabase/supabase-py/issues/1098">#1098</a>)
(<a
href="e9c219ebda">e9c219e</a>)</li>
<li><strong>realtime:</strong> bump realtime from 2.4.2 to 2.4.3 (<a
href="https://redirect.github.com/supabase/supabase-py/issues/1112">#1112</a>)
(<a
href="1d429c6555">1d429c6</a>)</li>
<li>remove return type from postgrest methods (<a
href="https://redirect.github.com/supabase/supabase-py/issues/1110">#1110</a>)
(<a
href="6664f42157">6664f42</a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/supabase/supabase-py/blob/main/CHANGELOG.md">supabase's
changelog</a>.</em></p>
<blockquote>
<h2><a
href="https://github.com/supabase/supabase-py/compare/v2.15.0...v2.15.1">2.15.1</a>
(2025-04-28)</h2>
<h3>Bug Fixes</h3>
<ul>
<li><strong>postgrest:</strong> add missing count, head, and get params
(<a
href="https://redirect.github.com/supabase/supabase-py/issues/1098">#1098</a>)
(<a
href="e9c219ebda">e9c219e</a>)</li>
<li><strong>realtime:</strong> bump realtime from 2.4.2 to 2.4.3 (<a
href="https://redirect.github.com/supabase/supabase-py/issues/1112">#1112</a>)
(<a
href="1d429c6555">1d429c6</a>)</li>
<li>remove return type from postgrest methods (<a
href="https://redirect.github.com/supabase/supabase-py/issues/1110">#1110</a>)
(<a
href="6664f42157">6664f42</a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="9fdf32f36c"><code>9fdf32f</code></a>
chore(main): release 2.15.1 (<a
href="https://redirect.github.com/supabase/supabase-py/issues/1099">#1099</a>)</li>
<li><a
href="1d429c6555"><code>1d429c6</code></a>
fix(realtime): bump realtime from 2.4.2 to 2.4.3 (<a
href="https://redirect.github.com/supabase/supabase-py/issues/1112">#1112</a>)</li>
<li><a
href="b12940429d"><code>b129404</code></a>
chore(deps-dev): bump commitizen from 4.4.1 to 4.6.0 (<a
href="https://redirect.github.com/supabase/supabase-py/issues/1104">#1104</a>)</li>
<li><a
href="ad8f99e4a0"><code>ad8f99e</code></a>
chore(deps-dev): bump pytest-cov from 6.0.0 to 6.1.1 (<a
href="https://redirect.github.com/supabase/supabase-py/issues/1100">#1100</a>)</li>
<li><a
href="6664f42157"><code>6664f42</code></a>
fix: remove return type from postgrest methods (<a
href="https://redirect.github.com/supabase/supabase-py/issues/1110">#1110</a>)</li>
<li><a
href="c0ca1758ba"><code>c0ca175</code></a>
ci: explicit permissions and remove _target (<a
href="https://redirect.github.com/supabase/supabase-py/issues/1102">#1102</a>)</li>
<li><a
href="e9c219ebda"><code>e9c219e</code></a>
fix(postgrest): add missing count, head, and get params (<a
href="https://redirect.github.com/supabase/supabase-py/issues/1098">#1098</a>)</li>
<li><a
href="d5aa9ce4c7"><code>d5aa9ce</code></a>
chore(deps-dev): bump python-dotenv from 1.0.1 to 1.1.0 (<a
href="https://redirect.github.com/supabase/supabase-py/issues/1085">#1085</a>)</li>
<li><a
href="8ff9ba8e23"><code>8ff9ba8</code></a>
chore(deps-dev): bump pytest-asyncio from 0.25.3 to 0.26.0 (<a
href="https://redirect.github.com/supabase/supabase-py/issues/1084">#1084</a>)</li>
<li>See full diff in <a
href="https://github.com/supabase/supabase-py/compare/v2.15.0...v2.15.1">compare
view</a></li>
</ul>
</details>
<br />


Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore <dependency name> major version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's major version (unless you unignore this specific
dependency's major version or upgrade to it yourself)
- `@dependabot ignore <dependency name> minor version` will close this
group update PR and stop Dependabot creating any more for the specific
dependency's minor version (unless you unignore this specific
dependency's minor version or upgrade to it yourself)
- `@dependabot ignore <dependency name>` will close this group update PR
and stop Dependabot creating any more for the specific dependency
(unless you unignore this specific dependency or upgrade to it yourself)
- `@dependabot unignore <dependency name>` will remove all of the ignore
conditions of the specified dependency
- `@dependabot unignore <dependency name> <ignore condition>` will
remove the ignore condition of the specified dependency and ignore
conditions


</details>

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
2025-05-16 09:44:23 +00:00
Emmanuel Ferdman
e5368f3857 fix: Resolve logger.warn(..) deprecation warnings (#9938)
This small PR resolves the deprecation warnings from Python's `logging` library:
```
DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead
```
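
For reference, the fix is a one-token rename (minimal illustrative example, not the actual diff):

```python
import logging

logger = logging.getLogger(__name__)

logger.warn("about to retry")     # deprecated alias; emits DeprecationWarning
logger.warning("about to retry")  # preferred spelling, same behavior
```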
2025-05-16 10:56:03 +02:00
Reinier van der Leer
c73c6fe5c3 Merge branch 'master' into dev 2025-05-15 19:28:40 +02:00
Nicholas Tindle
9bef383df2 fix(backend): Make URL pinning work with extra_url_validator (#9940)
GitHub blocks use a URL transformer passed to `Requests` to convert web
URLs to API URLs. This doesn't always work with the anti-SSRF URL
pinning mechanism that was implemented in #8531.

### Changes 🏗️
In `Requests.request(..)`:
- Apply `validate_url` *after* `extra_url_validator`, to prevent
mismatch between `pinned_url` and `original_hostname`
- Simplify logic & add clarifying comments
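
A rough sketch of the corrected ordering (names approximated from the PR description, not the exact implementation):

```python
def request(self, method: str, url: str, **kwargs):
    # Apply the caller-supplied transformer first
    # (e.g. rewriting a github.com web URL to an api.github.com URL)...
    if self.extra_url_validator is not None:
        url = self.extra_url_validator(url)
    # ...and only then pin/validate, so pinned_url and original_hostname
    # describe the URL that will actually be requested.
    pinned_url, original_hostname = validate_url(url, self.trusted_origins)
    return self._session.request(method, pinned_url, **kwargs)
```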

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Tested the github blocks that had the issue

---------

Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
2025-05-15 13:12:39 +00:00
Chirag Modi
2dc038b6c0 Add Llama API Support (#9899)
The changes in this PR are to add Llama API support.

### Changes 🏗️
We add both backend and frontend support.

**Backend**:
- Add llama_api provider
- Include models supported by Llama API along with configs
- llm_call
- credential store and llama_api_key field in Settings

**Frontend**:
- Llama API as a type
- Credentials input and provider for Llama API
 

### Checklist 📋

#### For code changes:
- [X] I have clearly listed my changes in the PR description
- [X] I have tested my changes according to the test plan:

**Test Plan**:

<details>
  <summary>AI Text Generator</summary>
  
  - [X] Start-up backend and frontend:
- Start backend with Docker services: `docker compose up -d --build`
     - Start frontend: `npm install && npm run dev`
- By visiting http://localhost:3000/, test inference and structured
outputs
  - [X] Create from scratch 
  - [X] Request for Llama API Credentials
  
<img width="2015" alt="image"
src="https://github.com/user-attachments/assets/3dede402-3718-4441-9327-ecab25c63ebf"
/>

  - [X] Execute an agent with at least 3 blocks
 
<img width="2026" alt="image"
src="https://github.com/user-attachments/assets/59d6d56b-2ccc-4af5-b511-4af312c3f7f8"
/>

  - [X] Confirm it executes correctly
</details>

<details>
  <summary>Structured Response Generator</summary>
  
  - [X] Start-up backend and frontend:
- Start backend with Docker services: `docker compose up -d --build`
     - Start frontend: `npm install && npm run dev`
- By visiting http://localhost:3000/, test inference and structured
outputs
  - [X] Create from scratch 
  - [X] Execute an agent 
<img width="2023" alt="image"
src="https://github.com/user-attachments/assets/d1107638-bf1b-45b1-a296-1e0fac29525b"
/>

  - [X] Confirm it executes correctly
</details>

---------

Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
2025-05-14 19:45:40 +00:00
Krzysztof Czerwinski
cd6deb87c3 fix(frontend): Revert congrats onboarding screen action (#9943)
Revert congrats onboarding screen action of #9916
2025-05-14 19:19:27 +00:00
Zamil Majdy
1999ba38d9 feat(backend): Truncate logging on NotificationManager and LLMBlocks (#9939)
> Log entry with size 865.8K exceeds maximum size of 256.0K

Some logs are just too large.

### Changes 🏗️

Truncate logging on NotificationManager and LLMBlocks
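
A minimal sketch of the idea (helper name and limit are hypothetical):

```python
MAX_LOG_CHARS = 1_000  # hypothetical cap, well under the 256K sink limit

def truncate_for_log(text: str, limit: int = MAX_LOG_CHARS) -> str:
    # Keep the head of oversized payloads and note how much was dropped.
    if len(text) <= limit:
        return text
    return f"{text[:limit]}... ({len(text) - limit} chars truncated)"
```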

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Existing CI
2025-05-14 19:18:59 +00:00
Nicholas Tindle
e8fa996c2f refactor(frontend): Move from remote loaded GTM to local (#9933)
<!-- Clearly explain the need for these changes: -->

As part of a small security review, we found that google won't let you
load with integrity. That seems insane but the general workaround is to
load a static copy of your own

### Changes 🏗️
Moves all the google analytics scripts in house instead of loading via
cdn
<!-- Concisely describe all of the changes made in this pull request:
-->

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
- [x] tested it in a new isolated environment to confirm events still
work as expected everywhere they are used

---------

Co-authored-by: Reinier van der Leer <pwuts@agpt.co>
2025-05-14 16:07:40 +00:00
Krzysztof Czerwinski
e22d2c848a feat(frontend): Onboarding Updates 3 (#9916)
A collection of UX updates and bug fixes for onboarding and wallet.

### Changes 🏗️

- Show spinner loading indicator when onboarding button is clicked
- Use `getLibraryAgentByStoreListingVersionID` instead of
`addMarketplaceAgentToLibrary` on congrats screen
- Fix `Not enough segments` issue: don't fetch onboarding when user is
logged out
- Minor updates
  - Fill some missing deps in deps arrays
  - `Spinner` component, styles updates
  - Use `useMemo`/`useCallback`
- Show error toast when onboarding agent fails to run:

<img width="405" alt="Screenshot 2025-05-06 at 5 09 01 PM"
src="https://github.com/user-attachments/assets/dd1272da-326a-448d-995d-98ac773b3ee4"
/>

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Onboarding can be completed
  - [x] Failing agent shows toast
  - [x] Wallet can be opened and works properly (tasks, confetti)
  - [x] Dependency arrays don't cause infinite loops
2025-05-14 14:29:29 +00:00
Reinier van der Leer
9471fd6b58 fix(frontend): Re-subscribe on WebSocket re-connect (#9935)
- Resolves #9929

### Changes 🏗️

- Implement `BackendAPI.onWebSocketConnect(..)`

- Improve reliability and reactivity of `/library/agents/[id]`:
  - Refresh page data and (re)subscribe on WebSocket (re)connect
- Break up multi-action hooks into smaller parts to reduce unnecessary
re-renders and requests
  - Reduce duplicate requests

- Use `onWebSocketConnect` in `useAgentGraph` as well

- Tidy up `autogpt-server-api/client.ts` a bit

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - Go to `/library/agents/[id]`
  - Run the agent
    - [x] -> UI should update normally with execution updates
- Suspend your computer, or restart the backend (or WS server) to break
the connection
    - DO NOT REFRESH THE TAB ITSELF
- [x] -> On reconnect, page data should be refreshed (check in network
tab of dev tools)
  - Run the agent again
    - [x] -> UI should update normally with execution updates
2025-05-14 00:39:14 +00:00
Krzysztof Czerwinski
c4bbfd5050 feat(frontend): Update login and signup feedback (#9917)
Currently both login and signup page show the same feedback on error on
cloud (join the waitlist).

### Changes 🏗️
Update login&signup feedback for cloud.

- Use cards for login and signup feedback instead of html list
- Make login page show prompt to redirect user to signup

<img width="474" alt="Screenshot 2025-05-07 at 4 01 07 PM"
src="https://github.com/user-attachments/assets/45f189ea-5fea-45bb-89f9-7323418d69ea"
/>
<img width="476" alt="Screenshot 2025-05-07 at 4 03 29 PM"
src="https://github.com/user-attachments/assets/96f4cd7f-f3e6-44b2-b647-96ee98063572"
/>

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Signup&login works with correct credentials
  - [x] Signup&login shows correct error message on wrong credentials
  - [x] Links work correctly
2025-05-13 15:57:55 +00:00
Toran Bruce Richards
08639bb1f0 fix(backend): Change library agents fetch sort from version to updatedAt (#9932)
<!-- Clearly explain the need for these changes: -->

### Changes 🏗️
This pull request includes a small change to the `get_my_agents`
function in `autogpt_platform/backend/backend/server/v2/store/db.py`.
The change updates the sorting order for retrieving library agents to
use the `updatedAt` field instead of `agentGraphVersion`.

This aligns with the later sorting that is applied to the fetched page
in the frontend.
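
Conceptually, the change is just the sort key (hypothetical Prisma Client Python query, not the exact code):

```python
agents = await prisma.libraryagent.find_many(
    where={"userId": user_id},
    order={"updatedAt": "desc"},  # previously ordered by agentGraphVersion
)
```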

<!-- Concisely describe all of the changes made in this pull request:
-->

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [ ] ...

<details>
  <summary>Example test plan</summary>
  
  - [ ] Create from scratch and execute an agent with at least 3 blocks
- [ ] Import an agent from file upload, and confirm it executes
correctly
  - [ ] Upload agent to marketplace
- [ ] Import an agent from marketplace and confirm it executes correctly
  - [ ] Edit an agent from monitor, and confirm it executes correctly
</details>

#### For configuration changes:
- [x] `.env.example` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)

<details>
  <summary>Examples of configuration changes</summary>

  - Changing ports
  - Adding new services that need to communicate with each other
  - Secrets or environment variable changes
  - New or infrastructure changes such as databases
</details>
2025-05-13 07:49:55 +00:00
Zamil Majdy
4d99ae27c9 fix(backend): Avoid executor stuck on cleanup waiting execution completion 2025-05-12 22:30:58 +01:00
Zamil Majdy
64ff161323 fix(backend): Avoid executor stuck on cleanup waiting execution completion 2025-05-12 22:29:41 +01:00
Nicholas Tindle
2b5b93a0f7 feat(backend): send emails daily instead of hourly (#9927)
<!-- Clearly explain the need for these changes: -->

John thinks an hourly email is a bit much; I'm inclined to agree.

### Changes 🏗️

<!-- Concisely describe all of the changes made in this pull request:
-->
swaps the batching interval for the agent-run emails from one hour to
one day

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] No testing done
2025-05-12 18:24:14 +00:00
Nicholas Tindle
79cc08787b feat(blocks): initial google calendar (#9920)
We want to add support for reading + writing events for google calendar
so agents can do cool stuff with that!

### Changes 🏗️

* Add support for google calendar blocks

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Run manual tests of each block
  - [x] Build and run automatic tests
2025-05-12 18:21:05 +00:00
Reinier van der Leer
b740a6edc0 fix(backend): Unbreak existing Agent Executor nodes (#9928)
- Follow-up fix for #9862

### Changes 🏗️

- Rename the stored `data` input field to `inputs` on all Agent Executor
nodes in the DB

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Dry-run the query in Supabase to verify that the data operation
works
  - [x] CI
2025-05-12 11:41:07 +00:00
Zamil Majdy
c5946927ea Merge branch 'master' of github.com:Significant-Gravitas/AutoGPT into dev 2025-05-10 05:21:04 +01:00
Zamil Majdy
30086357bc Merge branch 'dev' of github.com:Significant-Gravitas/AutoGPT into dev 2025-05-10 05:19:38 +01:00
Zamil Majdy
e090195e57 feat(backend): User notification service to send a system alert 2025-05-10 05:19:22 +01:00
Zamil Majdy
d2bf0af3cd feat(backend): User notification service to send a system alert 2025-05-10 05:13:38 +01:00
Zamil Majdy
4413366ea7 fix(blocks): Fix load_all_blocks in concurrent calls (#9926)
```
2025-05-09 11:54:15,171 ERROR  Error executing graph 954e6fc8-9c90-46fa-be5b-4063eb519ec7: Block ID AIMusicGeneratorBlock error: 44f6c8ad-d75c-4ae1-8209-aad1c0326928 is already in use
```

### Changes 🏗️

This PR avoids the use of global variables that mess with the way
`load_all_blocks` is cached.
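
The gist, as a sketch (hypothetical shape, not the actual code): make the loader a pure, cached function instead of one that reads and mutates module-level state.

```python
from functools import cache

@cache  # concurrent callers share one computed result; no globals to race on
def load_all_blocks() -> dict[str, type]:
    blocks: dict[str, type] = {}
    for block_cls in _discover_block_classes():  # hypothetical discovery helper
        if block_cls.id in blocks:
            raise ValueError(f"Block ID {block_cls.id} is already in use")
        blocks[block_cls.id] = block_cls
    return blocks
```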

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] CI
2025-05-09 13:48:15 +00:00
Zamil Majdy
3d05c26f26 fix(backend): Fix walltime & cputime stats on interrupted execution 2025-05-09 14:09:56 +01:00
Zamil Majdy
c736d401a6 fix(backend): Fix walltime & cputime stats on interrupted execution 2025-05-09 14:08:37 +01:00
Zamil Majdy
e8bc83445a feat(backend): Immediate alert on late execution check job failure (#9925)
Currently, there is no guarantee that an error will be reported right
away, and late execution is a serious issue that needs to be addressed
quickly.

### Changes 🏗️

Provided a direct alert when late execution occurs.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Manual run on the late executions job on artificially created late
executions.
2025-05-09 07:31:23 +00:00
Zamil Majdy
8de88395f1 fix(backend): Continue stats accounting on aborted or broken executions (#9921)
This is a follow-up to
https://github.com/Significant-Gravitas/AutoGPT/pull/9903

The continued graph execution restarted all the execution stats from
zero, making the execution stats misleading.

### Changes 🏗️

Continue the execution stats when continuing the graph execution.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
- [x] Existing tests, manual graph run with the graph execution aborted
midway.
2025-05-09 06:46:30 +00:00
Zamil Majdy
82cf0bcde7 refactor(backend): Clear out Notification Service code blockage (#9915)
Some code paths in the notification & scheduler services were
synchronous HTTP calls executing long-running jobs that block. This
left the service threads busy-waiting.

### Changes 🏗️

* Remove queue_notification API
* Remove DTO
* Move heavy tasks into the executor

<!-- Concisely describe all of the changes made in this pull request:
-->

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
- [x] Manually executing notification service jobs through the scheduler
API
2025-05-09 06:16:43 +00:00
Krzysztof Czerwinski
089e7aae88 fix(frontend): Catch exception on agent listing page (#9923)
The listing page throws an exception on deployment because of a
Supabase auth issue.

### Changes 🏗️

Catch the exception when getting the library agent. This reverts the
behavior of the listing page: it'll always show "Add to Library" when
the user is logged in.

### Checklist 📋

#### For code changes:
- [ ] I have clearly listed my changes in the PR description
- [ ] I have made a test plan
- [ ] I have tested my changes according to the test plan:
  - [ ] ...
2025-05-08 22:29:40 +00:00
Toran Bruce Richards
74e6a6a43a fix(frontend/library): Quick Patch for Rendering Agent Outputs (#9922)
<!-- Clearly explain the need for these changes: -->
This is a quick, temporary tweak to improve the display of output text
in the Agent Runs screen.

It is made in anticipation of these outputs being properly improved in
the near future; for now, it simply renders the text in a human-readable
format.

### Changes 🏗️
There is one change in this PR:
- The class of the Agent Output textbox is changed to properly display
text without impacting the design.
Below is a before and after of this change:

**Before**

![image](https://github.com/user-attachments/assets/5ecc16c8-8ffd-4cc6-a627-00502848450e)

**After**

![image](https://github.com/user-attachments/assets/5601c2a0-f32f-4c75-afec-d86dc0c5081d)


### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [ ] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [ ] ...

---------

Co-authored-by: Bentlybro <Github@bentlybro.com>
2025-05-08 11:41:27 +00:00
Reinier van der Leer
433b76b539 fix(backend/scheduler): Unbreak Scheduler.get_execution_schedules (#9919)
- Resolves #9918
- Follow-up fix for #9914

### Changes 🏗️

- In `get_graph_execution_schedules`, skip jobs when their kwargs can't
be parsed as `GraphExecutionJobArgs`
- Rename methods of `Scheduler` to clarify their scope (scheduled
*graph* executions)
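
The first change roughly amounts to this (hypothetical sketch, assuming pydantic validation of the job kwargs):

```python
from pydantic import ValidationError

schedules = []
for job in scheduler.get_jobs():
    try:
        args = GraphExecutionJobArgs(**job.kwargs)
    except ValidationError:
        continue  # not a scheduled graph execution (e.g. a maintenance job); skip it
    schedules.append((job.id, args))
```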

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - Go to `/library/agents/[id]` (which calls `GET /api/schedules`)
    - [x] -> `GET /api/schedules` request returns HTTP 200
2025-05-08 11:26:51 +00:00
Reinier van der Leer
1ad6c76f9c feat(backend): Require discriminator value on graph save (#9858)
If a node has a multi-credentials input (e.g. AI Text Generator block)
but the discriminator value (e.g. model choice) is missing, the input
can't be discriminated into a single-provider input. Discrimination into
a single-provider input is necessary to make a graph-level credentials
input for use in the Library.

### Changes 🏗️

- feat(backend): Require discriminator fields to always have a value

- dx(frontend): Improve typing of discriminator stuff
- dx(frontend): Fix typing in `NodeOneOfDiscriminatorField` component

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Saving & running graphs with and without credentials works
normally
- Note: We don't have any blocks with a discriminator that doesn't have
a default value, so currently I don't think it's possible to produce a
case where this mechanism would be triggered.
2025-05-08 09:45:22 +00:00
Bently
104928c614 feat(platform): Add captcha to login, signup and password reset pages (#9847)
This PR adds Cloudflare's Turnstile CAPTCHA to the login, signup, and
password reset pages. It is set up to only show and work when `BEHAVE_AS`
is set to `CLOUD`, so it will not show for locally hosted users.

### Changes 🏗️

#### Backend Changes
-
**[backend/server/v2/turnstile/routes.py](https://github.com/Significant-Gravitas/AutoGPT/compare/dev...bently/secrt-1169-implement-captcha-on-sign-up?expand=1#diff-2c5c2cb13346370fc48bdde8691a0d3bbfc030f7718288101b67b641c7948c10)**:
Created API endpoint at `/api/turnstile/verify` to proxy verification
requests to Cloudflare
-
**[backend/server/v2/turnstile/service.py](https://github.com/Significant-Gravitas/AutoGPT/compare/dev...bently/secrt-1169-implement-captcha-on-sign-up?expand=1#diff-296991fdc3ea821ae5a568ca96bb89789f2fc7dda7b62f59ef6bcadfaea16e56)**:
Implements service to verify CAPTCHA tokens with Cloudflare using
server-side secret key

#### Frontend Changes
-
**[frontend/src/lib/turnstile.ts](https://github.com/Significant-Gravitas/AutoGPT/compare/dev...bently/secrt-1169-implement-captcha-on-sign-up?expand=1#diff-a698e2718e0f6b0afe1d0c7fda571a7bfcbec6aeacc963c2b3620cc683dc4448)**:
Client-side function to call the backend verification endpoint
-
**[frontend/src/components/auth/Turnstile.tsx](https://github.com/Significant-Gravitas/AutoGPT/compare/dev...bently/secrt-1169-implement-captcha-on-sign-up?expand=1#diff-71a73d58d0ba5e46e5702f2f2599284e72a8fcf6c5d0b5c72e7358570d631aa7)**:
Reusable Turnstile component that renders and manages the CAPTCHA widget
-
**[frontend/src/hooks/useTurnstile.ts](https://github.com/Significant-Gravitas/AutoGPT/compare/dev...bently/secrt-1169-implement-captcha-on-sign-up?expand=1#diff-4a6a9363243ab2a88dbfb498917f464896ada059617bd8b0fb51df532c73827d)**:
Custom hook that manages Turnstile state and conditionally activates
based on environment

#### Auth Flow Integration
- Modified server actions in `login`, `signup`, and `reset_password` to
accept and verify Turnstile tokens
- Updated auth page components to integrate the CAPTCHA widget with form
submissions

### Configuration Changes
- Added two new environment variables:
- `NEXT_PUBLIC_CLOUDFLARE_TURNSTILE_SITE_KEY`: Public site key for
frontend
- `CLOUDFLARE_TURNSTILE_SECRET_KEY`: Secret key for backend verification
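
The server-side check boils down to one call to Cloudflare's documented siteverify endpoint (minimal sketch, assuming `httpx`; not the exact service code):

```python
import httpx

TURNSTILE_VERIFY_URL = "https://challenges.cloudflare.com/turnstile/v0/siteverify"

async def verify_turnstile_token(token: str, secret_key: str) -> bool:
    # Forward the widget token to Cloudflare; only valid, unexpired tokens pass.
    async with httpx.AsyncClient() as client:
        response = await client.post(
            TURNSTILE_VERIFY_URL,
            data={"secret": secret_key, "response": token},
        )
    return bool(response.json().get("success", False))
```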

### Test Plan 📋
- Ask Bently for the keys to test locally!
- [x] Test login, signup and password reset with Turnstile enabled
(BEHAVE_AS=CLOUD)
- [x] Verify CAPTCHA appears and must be completed before form
submission
  - [x] Verify error message appears if CAPTCHA is not completed
  - [x] Verify form submission works after completing CAPTCHA
- [x] Test login, signup and password reset with Turnstile disabled
(BEHAVE_AS=LOCAL)
  - [x] Verify CAPTCHA does not appear
  - [x] Verify form submission works without CAPTCHA
- [x] Test with invalid site key to ensure proper error handling

---------

Co-authored-by: Krzysztof Czerwinski <34861343+kcze@users.noreply.github.com>
2025-05-07 21:08:12 +00:00
Reinier van der Leer
0726a00fb7 fix(backend): Include sub-graphs in graph-level credentials support (#9862)
The Library Agent credentials UX (#9789) currently doesn't work for
sub-graphs.

### Changes 🏗️

- Include sub-graphs in generating `Graph.credentials_input_schema`
- Propagate `node_credentials_input_map` into `AgentExecutionBlock`
executions
- Fix: also apply `node_credentials_input_map` in `_enqueue_next_nodes`

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - Import a graph with sub-graphs that need credentials
  - Run this agent from the Library
  - [x] -> Should work
2025-05-07 17:28:39 +00:00
Zamil Majdy
ac8ef9bdb2 feat(backend): Introduce late execution check scheduled job (#9914)
Introduce a late execution check scheduled job. The late threshold
duration is configurable.
This initial version only reports the error to Sentry.

### Changes 🏗️

* Added late execution check scheduled job
* Move the weekly notification processing job registration out of the
API call and call it directly from the scheduler service.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Manual firing of scheduled job through an exposed API
2025-05-07 05:00:37 +00:00
Zamil Majdy
519ad94ec9 Merge branch 'master' of github.com:Significant-Gravitas/AutoGPT into dev 2025-05-06 20:30:36 +07:00
Nicholas Tindle
505320fcd3 feat(backend): Move Scheduler (#9904)
<!-- Clearly explain the need for these changes: -->
We don't want the scheduler to scale with the REST API, lol

### Changes 🏗️
pulls out the scheduler into its own service
<!-- Concisely describe all of the changes made in this pull request:
-->

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] test it

---------

Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
2025-05-05 18:59:28 +00:00
Krzysztof Czerwinski
6f1578239a feat(platform): Update Marketplace Agent listing buttons (#9843)
Currently, agent listings on the Marketplace have bad UX.

### Changes 🏗️

- Add function and endpoint to check if user has `LibraryAgent` by given
`storeListingVersionId`
- Redesign listing buttons
- `Add to library` shown when user is logged in and doesn't have an
agent in library
  - `See runs` shown when user is logged in and has the agent in the library
  - `Download agent` always shown
  - Disabled buttons during processing (adding/downloading)
- Stop raising when owner is trying to add own agent. Now it'll simply
redirect to Library.
- Remove button appearing/flickering after a delay on listing page -
logged in status is now checked in server component.
- Show error toast on adding/redirecting to library and downloading
error
- Update breadcrumbs and page title to say `Marketplace` instead of
`Store`
- `font-geist` -> `font-sans` (`font-geist` var doesn't exist)

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Button on a listing is `Add to library` (no library agent)
  - [x] Agent can be added and user is redirected
- [x] Button on the listing is `See runs` and clicking it redirects to
the library agent
  - [x] Remove agent from library
  - [x] Buttons shows `Add to library` again
  - [x] Agent can be re-added
  - [x] Agent can be downloaded
  - [x] `Add to library` Button is hidden when user is logged out

---------

Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
2025-05-05 16:47:58 +00:00
Zamil Majdy
79319ad1a7 fix(backend): Avoid broken process pool by not failing process initializer (#9907)
The process initializer on the process pool should never fail, but we do
network-related stuff there.
This causes the pool to end up in a broken state.

### Changes 🏗️

Remove the health check step on process initializer.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Existing CI test
2025-05-05 13:27:04 +00:00
Nicholas Tindle
afb66f75ec fix: disable google sheets in prod based on oauth review (#9906)
<!-- Clearly explain the need for these changes: -->

Our OAuth review wants us to drop this in favor of a different scope that
will require additional work.

### Changes 🏗️
Disables the oauth sheets scopes in prod

<!-- Concisely describe all of the changes made in this pull request:
-->

### Checklist 📋

#### For code changes:
- [ ] I have clearly listed my changes in the PR description
- [ ] I have made a test plan
- [ ] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [ ] set env locally
2025-05-02 19:40:51 +00:00
Krzysztof Czerwinski
59ec61ef98 feat(platform): Onboarding design&UX update (#9905)
A collection of updates regarding onboarding and wallet.

### Changes 🏗️

- `try-except` instead of `if` when rewarding (skip unnecessary db call)
- Make external services question onboarding step optional
- Add `SmartImage` component to lazy load images with pulse animation
and use it throughout onboarding
- Use store agent name instead of graph name (run page)
- Fix some images breaking layout on the agent card (run page)
- Center agent card vertically and horizontally (center on the left half
of page) (run page)
- Delay and tweak confetti when opening wallet and when task finished
(wallet)
- Flash wallet when credits change value
- Make tutorial video grayscale on completed steps (wallet)
- Fix confetti triggering on page refresh (wallet)
- Redirect to agent run page instead of Library after onboarding
- Expand task groups by default (wallet) - this means tutorial videos
are visible by default

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
- [x] Services step is optional and skipping it doesn't break onboarding
  - [x] `SmartImage` works properly
  - [x] Agent card is aligned properly, including on page scroll
  - [x] Wallet flash when credits value change
  - [x] User is redirected to the agent runs page after onboarding
2025-05-02 14:42:01 +00:00
Zamil Majdy
d7077b5161 feat(backend): Continue instead of retrying aborted/broken agent execution (#9903)
Currently, the agent/graph execution engine is consuming the execution
queue and acknowledges the message after fully completing its execution
or failing it.

However, if the agent executor fails due to a hardware/resource issue,
or doesn't manage to acknowledge the execution message, another agent
executor will pick it up and start the execution again from the
beginning.

The scope of this PR is to make the next executor pick up the next work
to continue the pre-existing execution instead of starting it all over
from the beginning.

### Changes 🏗️

* Removed `start_node_execs` from `GraphExecutionEntry`
* Populate the starting graph node from the DB query instead (fetching
Running & Queued node executions).
* Removed `get_incomplete_node_executions` from DB manager.
* Use get_node_executions with a status filter instead.
* Allow graph execution to end in a non-FAILED/COMPLETED status; e.g.,
when the executor is interrupted, the execution stays in the running
status and other executors can continue the task.
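
In sketch form (hypothetical names), picking up in-flight work instead of restarting looks like:

```python
# Seed the run from whatever was already in flight rather than from the entry nodes.
in_flight = await db.get_node_executions(
    graph_exec_id=graph_exec.id,
    statuses=[ExecutionStatus.RUNNING, ExecutionStatus.QUEUED],
)
start_nodes = in_flight or entry_nodes(graph)  # fresh run only if nothing is pending
```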

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
- [x] Run an agent, stop the executor midway, re-run the executor; the
execution should be continued instead of restarted.
2025-05-01 16:02:03 +00:00
Zamil Majdy
475c5a5cc3 fix(backend): Avoid executing any agent with zero balance (#9901)
### Changes 🏗️

* Avoid executing any agent with a zero balance.
* Make node execution count global across agents for a single user.
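
Roughly, the zero-balance guard could look like this (hypothetical names, not the actual implementation):

```python
balance = await credit_model.get_credits(user_id)  # hypothetical credit lookup
if balance <= 0:
    raise InsufficientBalanceError(
        f"User {user_id} has no credits left; refusing to start the execution"
    )
```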

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
- [x] Run agents by tweaking the `execution_cost_count_threshold` &
`execution_cost_per_threshold` values.
2025-05-01 15:11:38 +00:00
Zamil Majdy
f5a07f1a35 hotfix(backend): Avoid executing any agent with zero balance (#9902)
### Changes 🏗️

* Avoid executing any agent with a zero balance.
* Make node execution count global across agents for a single user.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
- [x] Run agents by tweaking the `execution_cost_count_threshold` &
`execution_cost_per_threshold` values.
2025-05-01 10:11:09 -05:00
Zamil Majdy
86d5cfe60b feat(backend): Support flexible RPC client (#9842)
Using sync code in an async route often introduces blocking
event-loop code that impacts stability.

The current RPC system only provides a synchronous client to call the
service endpoints.
The scope of this PR is to provide an entirely decoupled signature
between client and server, allowing the client to mix & match async &
sync options in the client code without changing the async/sync nature
of the server.

### Changes 🏗️

* Add support for flexible async/sync RPC client.
* Migrate scheduler client to all-async client.
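
As a sketch of the idea (hypothetical API), the same server endpoint can be exposed through both calling styles without the server changing:

```python
class SchedulerClient:
    """Hypothetical decoupled client: callers pick sync or async per call site."""

    def __init__(self, transport):
        self._transport = transport

    def add_schedule(self, request):
        # Sync stub for threaded callers (tests, background workers).
        return self._transport.call("add_execution_schedule", request)

    async def add_schedule_async(self, request):
        # Async stub for event-loop callers (e.g. FastAPI routes); never blocks the loop.
        return await self._transport.acall("add_execution_schedule", request)
```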

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Scheduler route test.
  - [x] Modified service_test.py
  - [x] Run normal agent executions
2025-05-01 04:38:06 +00:00
Bently
602f887623 feat(frontend): fix admin add dollars (#9898)
Fixes the admin add-dollars feature. In the ``add-money-button.tsx`` file,
the handleApproveSubmit action was trying to use formatCredits for
the value, which is wrong; this fix changes it:

```diff
 <form action={handleApproveSubmit}>
   <input type="hidden" name="id" value={userId} />
   <input
     type="hidden"
     name="amount"
-    value={formatCredits(Number(dollarAmount))}
+    value={Math.round(parseFloat(dollarAmount) * 100)}
   />
```
I was able to add $1, $0.10 and $0.01

![image](https://github.com/user-attachments/assets/3a3126c2-5f17-4c9b-8657-4372332a0ea3)
2025-04-30 17:24:26 +00:00
Bentlybro
1edde778c5 Merge branch 'master' into dev 2025-04-30 16:46:50 +01:00
Zamil Majdy
3526986f98 fix(backend): Failing test on a new Pydantic version (#9897)
```
FAILED test/model_test.py::test_agent_preset_from_db - pydantic_core._pydantic_core.ValidationError: 1 validation error for AgentNodeExecutionInputOutput

E       pydantic_core._pydantic_core.ValidationError: 1 validation error for AgentNodeExecutionInputOutput
E       data
E         JSON input should be string, bytes or bytearray [type=json_type, input_value=Json, input_type=Json]
E           For further information visit https://errors.pydantic.dev/2.11/v/json_type
```

### Changes 🏗️

Manually creating a Prisma model often breaks, and we have such an
instance in the test.
This PR fixes the test to make the new Pydantic happy.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] CI
2025-04-30 13:59:17 +00:00
Nicholas Tindle
04c4340ee3 feat(frontend,backend): user spending admin dashboard (#9751)
<!-- Clearly explain the need for these changes: -->
We need a way to refund people who spend money on agents without making
manual DB changes

### Changes 🏗️
- Adds a bunch of functionality for refunding users
- Adds reasons and admin id for actions
- Add admin to db manager
- Add UI for this for the admin panel
- Clean up pagination controls
<!-- Concisely describe all of the changes made in this pull request:
-->

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Test by importing dev db as baseline
- [x] Add transactions on top for "refund", and make sure all existing
transactions work

---------

Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
2025-04-29 17:39:25 +00:00
Zamil Majdy
9fa62c03f6 feat(backend): Improve cancel execution reliability (#9889)
When an executor dies, an ongoing execution will not be retried and will
just get stuck in the running status.
This change avoids such a scenario by allowing re-execution of an entry
that is not in QUEUED status, with a low-probability risk of double
execution.

### Changes 🏗️

* Allow non-QUEUED status to be re-executed.
* Improve cleanup of node & graph executor.
* Consume cancellation requests on a separate thread to avoid being
blocked by other messages.
* Remove unused retry loop on the execution manager.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Run agent, kill the server, re-run it, agent restarted.
2025-04-29 17:06:03 +00:00
Mareddy Lohith Reddy
d5dc687484 fix: handle empty 204 responses in SendWebRequestBlock (#9887)
<!-- Clearly explain the need for these changes: -->
This PR fixes [Issue
#9883](https://github.com/Significant-Gravitas/AutoGPT/issues/9883),
where the SendWebRequestBlock crashes when receiving a 204 No Content
response, such as when posting to a Discord webhook. The fix ensures
that empty responses are handled gracefully, and the block does not
crash.

### Changes 🏗️
- Added a check to handle empty HTTP responses (like 204 status) in
SendWebRequestBlock
- Fall back to empty string or None if there is no response content
- Prevents server errors when parsing non-existent response bodies
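
The guard amounts to something like this (illustrative sketch):

```python
def parse_body(response):
    # 204 No Content (or any empty body) has nothing to parse; return None.
    if response.status_code == 204 or not response.content:
        return None
    try:
        return response.json()
    except ValueError:  # body present but not JSON
        return response.text
```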

<!-- Concisely describe all of the changes made in this pull request:
-->

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Send a POST request to an endpoint that returns 204 No Content
  - [x]  Confirm that SendWebRequestBlock handles it without crashing
  - [x] Confirm that regular 200 OK JSON responses still work

---------

Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
Co-authored-by: Lohith-11 <lohithr011@gamil.com>
Co-authored-by: Toran Bruce Richards <toran.richards@gmail.com>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
2025-04-28 19:16:04 +00:00
Japh
fb5ce0a16d Add Note to "Getting Started" page for Raspberry Pi 5 page size issue (#9888)
Add Note to "Getting Started" page for Raspberry Pi 5 page size issue
with `supabase-vector` that prevents `docker compose up` from running
successfully.

<!-- Clearly explain the need for these changes: -->

### Changes 🏗️

- Added a Note to the "Getting Started" page that explains a change in
Raspberry Pi OS for Raspberry Pi 5s, and how to revert the change to
avoid an issue running the backend on Docker.

<!-- Concisely describe all of the changes made in this pull request:
-->

### Checklist 📋

#### For code changes:
- [x] No code changes

#### For configuration changes:
- [x] No configuration changes

---------

Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
Co-authored-by: Nicholas Tindle <nicholas.tindle@agpt.co>
2025-04-28 19:07:44 +00:00
Nicholas Tindle
a1f17ca797 fix: use subheading for agent info not description (#9891)
<!-- Clearly explain the need for these changes: -->
we oopsed and used the wrong attribute for short desc
### Changes 🏗️
Uses sub heading instead now
<!-- Concisely describe all of the changes made in this pull request:
-->

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] check the expected text shows
2025-04-28 18:38:43 +00:00
Nicholas Tindle
8fdfd75cc4 feat: allow admins to download agents for review (#9881)
<!-- Clearly explain the need for these changes: -->
For admins to approve agents for the marketplace, we need to be able to
run them. This is a quick workaround for downloading them so you can put
them in your own marketplace to check.

### Changes 🏗️
- clones various endpoints related to downloading into an admin side
with logging, and admin checks
- adds download button and removes open in builder action
<!-- Concisely describe all of the changes made in this pull request:
-->

### Checklist 📋

#### For code changes:
- [ ] I have clearly listed my changes in the PR description
- [ ] I have made a test plan
- [ ] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [ ] Test downloading agents from local marketplace
2025-04-28 17:58:23 +00:00
Abhimanyu Yadav
5b5b2043e8 fix(frontend): Add support to optional multiselect (#9885)
- fix #9882 

We’re currently using the optional multi-select, and it’s working great:
we’re able to correctly determine the data type for it. However, there’s
a small issue: we’re not using the correct subSchema inside `anyOf` on
the multi-select input. This is why we’re seeing the problem on the
Twitter block; it’s the only block using this type of input, so it’s the
only one affected.

![Screenshot 2025-04-26 at 5 39
51 PM](https://github.com/user-attachments/assets/834d64d8-84dc-4dbd-a03a-df03172ecee5)

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-04-28 15:18:29 +00:00
Nicholas Tindle
7d83f1db05 feat(block): bring back PrintConsoleBlock (#9850)
### Changes 🏗️

Bring back PrintConsoleBlock

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Print console block

---------

Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
2025-04-28 09:55:57 +00:00
Zamil Majdy
f07696e3c1 fix(backend): Fix top-up with zero transaction flow (#9886)
A transaction with a zero payment amount does not generate a payment
ID, so checkout failed in this scenario.

### Changes 🏗️

Don't use the payment ID as the transaction key on top-ups with a zero
payment amount.
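
A minimal sketch of the key selection (helper name and key format are assumptions for illustration):

```python
import uuid

def transaction_key(payment_id: str | None, amount: int) -> str:
    # A zero-amount top-up (e.g. fully covered by a Stripe coupon)
    # produces no payment ID, so fall back to a generated key.
    if amount > 0 and payment_id:
        return payment_id
    return f"top-up-{uuid.uuid4()}"
```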

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Top-up with stripe coupon
2025-04-27 14:22:08 +00:00
Zamil Majdy
96a173a85f fix(backend): Avoid releasing lock that is no longer owned by the current thread (#9878)
There are instances of node executions that failed and ended up stuck
in the RUNNING status because the execution failed to release the lock:
```
2025-04-24 20:53:31,573 INFO  [ExecutionManager|uid:25eba2d1-e9c1-44bc-88c7-43e0f4fbad5a|gid:01f8c315-c163-4dd1-a8a0-d396477c5a9f|nid:f8bf84ae-b1f0-4434-8f04-80f43852bc30]|geid:2e1b35c6-0d2f-4e97-adea-f6fe0d9965d0|neid:590b29ea-63ee-4e24-a429-de5a3e191e72|-] Failed node execution 590b29ea-63ee-4e24-a429-de5a3e191e72: Cannot release a lock that's no longer owned
```

### Changes 🏗️

Check the ownership of the lock before releasing.
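
A minimal sketch of the guard, assuming redis-py's `Lock` API:

```python
from redis.lock import Lock

def safe_release(lock: Lock) -> None:
    # A lock whose timeout expired may already be owned by another
    # holder; releasing it would raise "Cannot release a lock that's
    # no longer owned", so check ownership first.
    if lock.owned():
        lock.release()
```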

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Existing CI tests.

(cherry picked from commit ef022720d5)
2025-04-25 23:55:18 +07:00
sentry-autofix[bot]
9715ea5313 fix: handle token limits and estimate token count for llm calls (#9880)
👋 Hi there! This PR was automatically generated by Autofix 🤖

This fix was triggered by Toran Bruce Richards.

Fixes
[AUTOGPT-SERVER-1ZY](https://sentry.io/organizations/significant-gravitas/issues/6386687527/).
The issue was that: `llm_call` calculates `max_tokens` without
considering `input_tokens`, causing OpenRouter API errors when the
context window is exceeded.

- Implements a function `estimate_token_count` to estimate the number of
tokens in a list of messages.
- Calculates available tokens based on the context window, estimated
input tokens, and user-defined max tokens.
- Adjusts `max_tokens` for LLM calls to prevent exceeding context window
limits.
- Reduces `max_tokens` by 15% and retries if a token-limit error is
encountered during LLM calls (see the sketch below).
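
A minimal sketch of the token-budget logic; the four-characters-per-token heuristic and function names are assumptions for illustration:

```python
def estimate_token_count(messages: list[dict]) -> int:
    # Crude heuristic: roughly four characters per token.
    return sum(len(str(m.get("content", ""))) for m in messages) // 4

def fit_max_tokens(context_window: int, messages: list[dict],
                   user_max_tokens: int) -> int:
    # Leave room for the estimated input so input + output never
    # exceed the model's context window.
    available = context_window - estimate_token_count(messages)
    return max(1, min(user_max_tokens, available))

# On a token-limit error from the provider, shrink and retry:
#     max_tokens = int(max_tokens * 0.85)
```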

If you have any questions or feedback for the Sentry team about this
fix, please email [autofix@sentry.io](mailto:autofix@sentry.io) with the
Run ID: 32838.

---------

Co-authored-by: sentry-autofix[bot] <157164994+sentry-autofix[bot]@users.noreply.github.com>
Co-authored-by: Krzysztof Czerwinski <kpczerwinski@gmail.com>
2025-04-25 13:45:47 +00:00
Zamil Majdy
ef022720d5 fix(backend): Avoid releasing lock that is no longer owned by the current thread (#9878)
There are instances of node executions that failed and ended up stuck
in the RUNNING status because the execution failed to release the lock:
```
2025-04-24 20:53:31,573 INFO  [ExecutionManager|uid:25eba2d1-e9c1-44bc-88c7-43e0f4fbad5a|gid:01f8c315-c163-4dd1-a8a0-d396477c5a9f|nid:f8bf84ae-b1f0-4434-8f04-80f43852bc30]|geid:2e1b35c6-0d2f-4e97-adea-f6fe0d9965d0|neid:590b29ea-63ee-4e24-a429-de5a3e191e72|-] Failed node execution 590b29ea-63ee-4e24-a429-de5a3e191e72: Cannot release a lock that's no longer owned
```

### Changes 🏗️

Check the ownership of the lock before releasing.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Existing CI tests.
2025-04-25 07:39:10 +00:00
Zamil Majdy
4ddb206f86 feat(frontend): Add billing page toggle (#9877)
### Changes 🏗️

Provide a system toggle for disabling the billing page:
NEXT_PUBLIC_SHOW_BILLING_PAGE

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Toggle `NEXT_PUBLIC_SHOW_BILLING_PAGE` value.
2025-04-24 19:33:20 +00:00
Zamil Majdy
91f34966c8 fix(block): Fix Smart Decision Block missing input beads & incompatibility with inputs containing special characters (#9875)
Smart Decision Block could not work with a sub-agent whose input had a
custom name, and the beads were not properly propagated in the execution
UI. The scope of this PR is to fix that.

### Changes 🏗️

* Introduce an easy-to-parse format for tool edges:
`{tool}_^_{func}_~_{arg}` (parsed as sketched below). Graphs using
SmartDecisionBlock need to be re-saved before execution to work.
* Reduce clutter in the Smart Decision Block logic.
* Fix beads not being shown for Smart Decision Block tool calling.
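
A minimal sketch of parsing that edge format, assuming well-formed names:

```python
def parse_tool_edge(edge_name: str) -> tuple[str, str, str]:
    # Split a "{tool}_^_{func}_~_{arg}" edge name into its parts; the
    # delimiters are chosen so argument names with special characters
    # cannot collide with them.
    tool, rest = edge_name.split("_^_", 1)
    func, arg = rest.split("_~_", 1)
    return tool, func, arg

assert parse_tool_edge("agent_^_run_~_query") == ("agent", "run", "query")
```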

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Execute an SDM with some special character input as a tool

<img width="672" alt="image"
src="https://github.com/user-attachments/assets/873556b3-c16a-4dd1-ad84-bc86c636c406"
/>
2025-04-24 19:24:41 +00:00
Krzysztof Czerwinski
11a69170b5 feat(frontend): Update "Edit a copy" modal and buttons (#9876)
Update "Edit a copy" modal text when copying marketplace agent in
Library. Update agent action buttons to reflect the design accurately.

### Changes 🏗️

- Update modal text
- Disable copying owned agents (only marketplace allowed)
- `Open in Builder` -> `Customize agent`
- Disabled `Customize agent` instead of hiding
- Change `Delete agent` to non-destructive design

### Checklist 📋

#### For code changes:
- [ ] I have clearly listed my changes in the PR description
- [ ] I have made a test plan
- [ ] I have tested my changes according to the test plan:
  - [ ] ...
2025-04-24 16:30:43 +00:00
Krzysztof Czerwinski
0675a41e42 fix(backend): Strip secrets, credentials when forking agent (#9874)
Strip secrets, credentials when forking agent

### Changes 🏗️

<!-- Concisely describe all of the changes made in this pull request:
-->

### Checklist 📋

#### For code changes:
- [ ] I have clearly listed my changes in the PR description
- [ ] I have made a test plan
- [ ] I have tested my changes according to the test plan:
  - [ ] ...
2025-04-24 15:09:41 +00:00
Bentlybro
56ce1a0c1c Merge branch 'master' into dev 2025-04-24 14:21:34 +01:00
Zamil Majdy
7fbe135ec8 feat(backend): Expose execution prometheus metrics (#9866)
Currently, we have no visibility on the state of the execution manager,
the scope of this PR is to open up the observability of it by exposing
Prometheus metrics.

### Changes 🏗️

Re-use the execution manager port to expose the Prometheus metrics.
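
A minimal sketch using `prometheus_client`; metric names are illustrative, and the sketch simply binds port 8002 directly rather than reusing an existing server:

```python
from prometheus_client import Counter, Gauge, start_http_server

# Illustrative metric names.
active_runs = Gauge("executor_active_graph_runs", "Graph runs in progress")
finished_runs = Counter("executor_finished_graph_runs_total",
                        "Total finished graph runs")

start_http_server(8002)  # serve /metrics on port 8002
```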

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Hit /metrics on 8002 port
2025-04-24 07:48:38 +00:00
Zamil Majdy
eb6a0b34e1 feat(backend): Use forkserver on process creation if possible (#9864)
### Changes 🏗️

Set process starting mode to forkserver instead of spawn, if possible,
for performance benefits.
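
A minimal sketch of the start-method selection:

```python
import multiprocessing

def get_start_context():
    # forkserver avoids re-importing the world on every process start
    # (unlike spawn) while staying safer than plain fork.
    try:
        return multiprocessing.get_context("forkserver")
    except ValueError:
        # Not available on this platform (e.g. Windows); fall back.
        return multiprocessing.get_context("spawn")
```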

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Existing tests
2025-04-24 07:36:36 +00:00
Zamil Majdy
1e3236a041 feat(backend): Add retry on executor process initialization (#9865)
Executor process initialization can fail and cause this error:
```
concurrent.futures.process.BrokenProcessPool: A child process terminated abruptly, the process pool is not usable anymore
```

### Changes 🏗️

Add a retry to reduce the chance of the initialization error
happening.
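
A minimal sketch of the retry; the real executor keeps a long-lived pool, so this per-call variant is for illustration only:

```python
import time
from concurrent.futures import ProcessPoolExecutor
from concurrent.futures.process import BrokenProcessPool

def submit_with_retry(fn, *args, attempts: int = 3):
    # Recreate the pool and retry when a child process dies during
    # startup instead of failing the whole execution.
    for attempt in range(attempts):
        try:
            with ProcessPoolExecutor() as pool:
                return pool.submit(fn, *args).result()
        except BrokenProcessPool:
            if attempt == attempts - 1:
                raise
            time.sleep(2 ** attempt)  # brief backoff before retrying
```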

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Existing tests
2025-04-23 21:24:17 +00:00
Krzysztof Czerwinski
160a622ba4 feat(platform): Forking agent in Library (#9870)
This PR introduces an agent-copying feature in the Library. Users can
copy and download their library agents, but they can edit only the ones
they own (including copied ones).

### Changes 🏗️

- DB migration: add relation in `AgentGraph`: `forked_from_id` and
`forked_from_version`
- Add `fork_graph` function that makes a hard copy of an agent graph and
its nodes, all with new IDs (see the sketch below)
- Add `fork_library_agent` that copies library agent and its graph for a
user
- Add endpoint `/library/agents/{libraryAgentId}/fork`
- Add UI to `library/agents/[id]/page.tsx`: `Edit a copy` button with
dialog confirmation
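
A minimal sketch of the hard copy with fresh IDs, using a simplified dict shape for illustration:

```python
import uuid
from copy import deepcopy

def fork_graph(graph: dict) -> dict:
    # Hard-copy the graph and its nodes with fresh IDs, remembering
    # where the fork came from.
    fork = deepcopy(graph)
    fork["forked_from_id"] = graph["id"]
    fork["forked_from_version"] = graph["version"]
    fork["id"] = str(uuid.uuid4())
    id_map: dict[str, str] = {}
    for node in fork.get("nodes", []):
        new_id = str(uuid.uuid4())
        id_map[node["id"]] = new_id
        node["id"] = new_id
    for link in fork.get("links", []):
        # Re-point edges at the newly assigned node IDs.
        link["source_id"] = id_map.get(link["source_id"], link["source_id"])
        link["sink_id"] = id_map.get(link["sink_id"], link["sink_id"])
    return fork
```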

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Agent can be copied, edited and runs
2025-04-23 16:28:42 +00:00
Toran Bruce Richards
e2a226dc49 Update repo-close-stale-issues.yml 2025-04-23 14:51:18 +01:00
Zamil Majdy
5047e99fd1 fix(frontend): Hide Google Maps Key ID filter (#9861)
### Changes 🏗️


![image](https://github.com/user-attachments/assets/d6b9f971-d914-4ff1-9319-a903707a2c72)

Hide the Google Maps system key ID in the frontend UI.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan
2025-04-22 16:50:05 +00:00
Krzysztof Czerwinski
c80d357149 feat(frontend): Use route groups (#9855)
Navbar sometimes disappears outside `/onboarding`.

### Changes 🏗️

This PR solves the problem of disappearing Navbar outside `/onboarding`
by introducing `app/(platform)` route group.

- Move all routes requiring Navbar to `app/(platform)`
- Move `<Navbar>` to `app/(platform)/layout.tsx`
- Move `/onboarding` to `app/(no-navbar)/`
- Remove pathname injection to header from middleware and stop relying
on it to hide the navbar

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Common routes work properly
2025-04-22 09:10:12 +00:00
Zamil Majdy
20d39f6d44 fix(platform): Fix Google Maps API Key setting through env (#9848)
Setting the Google Maps API key through the environment variable has
never worked on the platform.

### Changes 🏗️

Set the default API key from the environment variable.
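
A minimal sketch of the fallback; the variable name is an assumption:

```python
import os

# Fall back to the platform-wide key from the environment when the user
# hasn't supplied their own credential (variable name assumed).
DEFAULT_GOOGLE_MAPS_API_KEY = os.getenv("GOOGLE_MAPS_API_KEY", "")
```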

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Test GoogleMapsBlock
2025-04-22 03:00:47 +07:00
Bently
d5b82c01e0 feat(backend): Adds latest llm models (#9856)
This PR adds the following models:
OpenAI's o3: https://platform.openai.com/docs/models/o3
OpenAI's GPT-4.1: https://platform.openai.com/docs/models/gpt-4.1
Anthropic's Claude 3.7: https://www.anthropic.com/news/claude-3-7-sonnet
Google's Gemini 2.5 Pro:
https://openrouter.ai/google/gemini-2.5-pro-preview-03-25
2025-04-21 19:26:21 +00:00
Abhimanyu Yadav
69b8d96516 fix(library/run): Replace credits to cents (#9845)
Replacing credits with cents (100 credits = $1).

I haven’t touched anything internally, just changed the UI.

Everything is working great.

On the frontend, there’s no other place where we use credits instead of
dollars.

![Screenshot 2025-04-19 at 11 36
00 AM](https://github.com/user-attachments/assets/de799b5c-094e-4c96-a7da-273ce60b2125)
<img width="1503" alt="Screenshot 2025-04-19 at 11 33 24 AM"
src="https://github.com/user-attachments/assets/87d7e218-f8f5-4e2e-92ef-70c81735db6b"
/>
2025-04-21 12:31:48 +00:00
Krzysztof Czerwinski
67af77e179 fix(backend): Fix migrate llm models in existing agents (#9810)
https://github.com/Significant-Gravitas/AutoGPT/pull/9452 was throwing
`operator does not exist: text ? unknown` on deployed dev, so the
function call was commented out as a hotfix.
This PR fixes and re-enables the llm model migration function.

### Changes 🏗️

- Uncomment and fix `migrate_llm_models` function

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Migrate nodes with non-existing models
  - [x] Don't migrate nodes without any model or with correct models

---------

Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
2025-04-19 12:52:36 +00:00
Abhimanyu Yadav
2a92970a5f fix(marketplace/library): Removing white borders from Avatar (#9818)
There are some white borders around the avatar in the store card, but
they are not present in the design, so I'm removing them.

![Screenshot 2025-04-15 at 3 58
05 PM](https://github.com/user-attachments/assets/f8c98076-9cc3-46f1-b4f3-41d4e48f6127)
2025-04-19 05:36:36 +00:00
Zamil Majdy
9052ee7b95 fix(backend): Clear RabbitMQ connection cache on execution-manager retry 2025-04-19 07:50:04 +02:00
Zamil Majdy
c783f64b33 fix(backend): Handle add execution API request failure (#9838)
There are cases where publishing an agent execution fails, making the
execution appear stuck in a queue even though it was never queued in the
first place.

### Changes 🏗️

On publishing failure, we set the graph & starting node execution status
to FAILED and let the UI bubble up the error so the user can try again.
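
A minimal sketch of the failure path; `publisher` and `set_execution_status` are hypothetical stand-ins:

```python
def set_execution_status(execution_id: str, status: str) -> None:
    ...  # hypothetical stand-in for the real status update

def add_graph_execution(publisher, graph_exec) -> None:
    try:
        publisher.publish(graph_exec)  # hand off to the message queue
    except Exception:
        # Never reached the queue: mark FAILED so it can't appear stuck
        # in QUEUED, then surface the error to the caller/UI.
        set_execution_status(graph_exec.id, "FAILED")
        raise
```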

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Normal add execution flow
2025-04-18 18:35:43 +00:00
Zamil Majdy
055a231aed feat(backend): Add retry mechanism for pika publish_message (#9839)
For unknown reasons, publishing a message can sometimes fail because
the connection breaks: the message queue suddenly becomes unavailable,
the connection drops or is reset, etc.

### Changes 🏗️

Add a tenacity retry on AMQP errors or `ConnectionError`, which should
hopefully alleviate the issue.
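
A minimal sketch of the tenacity wiring; retry parameters are illustrative, and a fully broken connection would also need a fresh channel:

```python
import pika
from pika.exceptions import AMQPError
from tenacity import (retry, retry_if_exception_type, stop_after_attempt,
                      wait_exponential)

@retry(
    retry=retry_if_exception_type((AMQPError, ConnectionError)),
    stop=stop_after_attempt(5),
    wait=wait_exponential(multiplier=0.5, max=10),
)
def publish_message(channel: pika.channel.Channel, exchange: str,
                    routing_key: str, body: bytes) -> None:
    # Transient broker hiccups are retried with exponential backoff;
    # persistent failures still raise after the final attempt.
    channel.basic_publish(exchange=exchange, routing_key=routing_key,
                          body=body)
```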

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Simple add execution
2025-04-18 17:56:27 +00:00
Reinier van der Leer
417d7732af feat(platform/library): Add credentials UX on /library/agents/[id] (#9789)
- Resolves #9771
- ... in a non-persistent way, so it won't work for webhook-triggered
agents
    For webhooks: #9541

### Changes 🏗️

Frontend:
- Add credentials inputs in Library "New run" screen (based on
`graph.credentials_input_schema`)
- Refactor `CredentialsInput` and `useCredentials` to not rely on XYFlow
context

- Unsplit lists of saved credentials in `CredentialsProvider` state

- Move logic that was being executed at component render to `useEffect`
hooks in `CredentialsInput`

Backend:
- Implement logic to aggregate credentials input requirements to one per
provider per graph (see the sketch after this list)
- Add `BaseGraph.credentials_input_schema` (JSON schema) computed field
    Underlying added logic:
- `BaseGraph._credentials_input_schema` - makes a `BlockSchema` from a
graph's aggregated credentials inputs
- `BaseGraph.aggregate_credentials_inputs()` - aggregates a graph's
nodes' credentials inputs using `CredentialsFieldInfo.combine(..)`
- `BlockSchema.get_credentials_fields_info() -> dict[str,
CredentialsFieldInfo]`
- `CredentialsFieldInfo` model (created from
`_CredentialsFieldSchemaExtra`)

- Implement logic to inject explicitly passed credentials into graph
execution
  - Add `credentials_inputs` parameter to `execute_graph` endpoint
- Add `graph_credentials_input` parameter to
`.executor.utils.add_graph_execution(..)`
  - Implement `.executor.utils.make_node_credentials_input_map(..)`
  - Amend `.executor.utils.construct_node_execution_input`
  - Add `GraphExecutionEntry.node_credentials_input_map` attribute
  - Amend validation to allow injecting credentials
    - Amend `GraphModel._validate_graph(..)`
    - Amend `.executor.utils._validate_node_input_credentials`
- Add `node_credentials_map` parameter to
`ExecutionManager.add_execution(..)`
    - Amend execution validation to handle side-loaded credentials
    - Add `GraphExecutionEntry.node_execution_map` attribute
- Add mechanism to inject passed credentials into node execution data
- Add credentials injection mechanism to node execution queueing logic
in `Executor._on_graph_execution(..)`

- Replace boilerplate logic in `v1.execute_graph` endpoint with call to
existing `.executor.utils.add_graph_execution(..)`
- Replace calls to `.server.routers.v1.execute_graph` with
`add_graph_execution`

Also:
- Address tech debt in `GraphModel._validate_graph(..)`
- Fix type checking in `BaseGraph._generate_schema(..)`
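
A minimal sketch of the per-provider aggregation; the `provider` and `scopes` attribute names are assumptions for illustration:

```python
from collections import defaultdict

def aggregate_credentials_inputs(nodes) -> dict[str, list[str]]:
    # Collapse every node's credentials requirements into one input per
    # provider for the whole graph.
    by_provider: dict[str, set[str]] = defaultdict(set)
    for node in nodes:
        for field in node.credentials_fields:
            by_provider[field.provider].update(field.scopes)
    return {provider: sorted(scopes)
            for provider, scopes in by_provider.items()}
```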

#### TODO
- [ ] ~~Make "Run again" work with credentials in
`AgentRunDetailsView`~~
- [ ] Prohibit saving a graph if it has nodes with missing discriminator
value for discriminated credentials inputs

### Checklist 📋

#### For code changes:
- [ ] I have clearly listed my changes in the PR description
- [ ] I have made a test plan
- [ ] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [ ] ...
2025-04-18 14:27:13 +00:00
Krzysztof Czerwinski
f16a398a8e feat(frontend): Update completed task group design in Wallet (#9820)
This redesigns how the task group is displayed when finished for both
expanded and folded state.

### Changes 🏗️

- Folded state now displays `Done` badge and hides tasks
- Expanded state shows only task names and hides details and video

Screenshot:
1. Expanded unfinished group
2. Expanded finished group
3. Folded finished group

<img width="463" alt="Screenshot 2025-04-15 at 2 05 31 PM"
src="https://github.com/user-attachments/assets/40152073-fc0e-47c2-9fd4-a6b0161280e6"
/>

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Finished group displays correctly
  - [x] Unfinished group displays correctly
2025-04-18 09:45:35 +00:00
Krzysztof Czerwinski
e8bbd945f2 feat(frontend): Wallet top-up and auto-refill (#9819)
### Changes 🏗️

- Add top-up and auto-refill tabs in the Wallet
- Add shadcn `tabs` component
- Disable increase/decrease spinner buttons on number inputs across the
Platform (moved CSS from `customnode.css` to `globals.css`)

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Incorrect values are detected properly
  - [x] Top-up works
  - [x] Setting auto-refill works
2025-04-18 09:44:54 +00:00
Krzysztof Czerwinski
d1730d7b1d fix(frontend): Fix onboarding agent execution (#9822)
Onboarding executes the original agent graph directly without waiting
for the marketplace agent to be added to the user's library.

### Changes 🏗️

- Execute library agent after it's already added to library

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Onboarding agent executes properly
2025-04-18 09:36:40 +00:00
Krzysztof Czerwinski
8ea64327a1 fix(backend): Fix array types in database (#9828)
Array fields in `schema.prisma` are non-nullable, but generated
migrations don’t add `NOT NULL` constraints. This causes existing rows
to get `NULL` values when new array columns are added, breaking schema
expectations and leading to bugs.

### Changes 🏗️

- Backfill all `NULL` rows on non-nullable array columns to empty arrays
- Set `NOT NULL` constraint on all array columns

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Existing `NULL` rows are properly backfilled
  - [x] Existing arrays are not set to default empty arrays
  - [x] Affected columns became non-nullable in the db
2025-04-18 07:43:54 +00:00
Bently
3cf30c22fb update(docs): Remove outdated submodule command from docs (#9836)
### Changes 🏗️

Updates to the setup docs to remove the old unneeded ``git submodule
update --init --recursive --progress`` command + some other small tweaks
around it
2025-04-17 16:45:07 +00:00
Reinier van der Leer
05c670eef9 fix(frontend/library): Prevent execution updates mixing between library agents (#9835)
If the websocket doesn't disconnect when the user switches to viewing a
different agent, they aren't unsubscribed, and execution updates *from a
different agent* can be adopted into the page state, causing crashes.

### Changes 🏗️

- Filter incoming execution updates by `graph_id`

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [ ] I have tested my changes according to the test plan:
- Go to an agent and initiate a run that will take a while (long enough
to navigate to a different agent)
  - Navigate: Library -> [another agent]
- [ ] Runs from the first agent don't show up in the runs list of the
other agent
2025-04-17 14:11:09 +00:00
Zamil Majdy
f6a4b036c7 fix(block): Disable LLM blocks parallel tool calls (#9834)
SmartDecisionBlock sometimes tried to be smart by making multiple tool
calls at once, which our platform does not support yet.

### Changes 🏗️

Disable parallel tool calls for the OpenAI & OpenRouter LLM-provider
blocks (see the sketch below).
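
A minimal sketch of the flag using the OpenAI Python client; the model and tool definition are illustrative:

```python
from openai import OpenAI

client = OpenAI()
tools = [{  # illustrative single-tool definition
    "type": "function",
    "function": {"name": "decide",
                 "parameters": {"type": "object", "properties": {}}},
}]
response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[{"role": "user", "content": "Pick exactly one tool."}],
    tools=tools,
    parallel_tool_calls=False,  # at most one tool call per response
)
```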

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] Tested SmartDecisionBlock & AITextGeneratorBlock
2025-04-17 12:58:05 +00:00
Zamil Majdy
c43924cd4e feat(backend): Add RabbitMQ connection cleanup on executor shutdown hook 2025-04-17 01:28:15 +02:00
Zamil Majdy
e3846c22bd fix(backend): Avoid multithreaded pika access (#9832)
### Changes 🏗️

Avoid other threads accessing the channel within the same process.

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [x] I have tested my changes according to the test plan:
  - [x] Manual agent runs
2025-04-16 22:06:07 +00:00
Toran Bruce Richards
9a7a838418 fix(backend): Change node output logging type from info to debug (#9831)
<!-- Clearly explain the need for these changes: -->

### Changes 🏗️
This PR simply changes the logging level of node outputs in agent.py
from info to debug.
<!-- Concisely describe all of the changes made in this pull request:
-->

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [x] I have made a test plan
- [ ] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] ...

<details>
  <summary>Example test plan</summary>
  
  - [ ] Create from scratch and execute an agent with at least 3 blocks
- [ ] Import an agent from file upload, and confirm it executes
correctly
  - [ ] Upload agent to marketplace
- [ ] Import an agent from marketplace and confirm it executes correctly
  - [ ] Edit an agent from monitor, and confirm it executes correctly
</details>

#### For configuration changes:
- [x] `.env.example` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)

<details>
  <summary>Examples of configuration changes</summary>

  - Changing ports
  - Adding new services that need to communicate with each other
  - Secrets or environment variable changes
  - New or infrastructure changes such as databases
</details>

---------

Co-authored-by: Bentlybro <Github@bentlybro.com>
2025-04-16 20:45:51 +00:00
Toran Bruce Richards
d61d815208 fix(logging): Change node data logging to debug level from info (#9830)
<!-- Clearly explain the need for these changes: -->

### Changes 🏗️
This change simply lowers the logging level of node inputs and outputs
to debug. It is needed because logging all node data currently produces
log entries so large that the logger prevents nodes from running.

<!-- Concisely describe all of the changes made in this pull request:
-->

### Checklist 📋

#### For code changes:
- [x] I have clearly listed my changes in the PR description
- [ ] I have made a test plan
- [ ] I have tested my changes according to the test plan:
  <!-- Put your test plan here: -->
  - [x] ...

<details>
  <summary>Example test plan</summary>
  
  - [ ] Create from scratch and execute an agent with at least 3 blocks
- [ ] Import an agent from file upload, and confirm it executes
correctly
  - [ ] Upload agent to marketplace
- [ ] Import an agent from marketplace and confirm it executes correctly
  - [ ] Edit an agent from monitor, and confirm it executes correctly
</details>

#### For configuration changes:
- [x] `.env.example` is updated or already compatible with my changes
- [x] `docker-compose.yml` is updated or already compatible with my
changes
- [x] I have included a list of my configuration changes in the PR
description (under **Changes**)

<details>
  <summary>Examples of configuration changes</summary>

  - Changing ports
  - Adding new services that need to communicate with each other
  - Secrets or environment variable changes
  - New or infrastructure changes such as databases
</details>
2025-04-16 19:22:52 +00:00
1046 changed files with 79021 additions and 31308 deletions


@@ -27,7 +27,7 @@
!autogpt_platform/frontend/src/
!autogpt_platform/frontend/public/
!autogpt_platform/frontend/package.json
!autogpt_platform/frontend/yarn.lock
!autogpt_platform/frontend/pnpm-lock.yaml
!autogpt_platform/frontend/tsconfig.json
!autogpt_platform/frontend/README.md
## config


@@ -10,17 +10,19 @@ updates:
commit-message:
prefix: "chore(libs/deps)"
prefix-development: "chore(libs/deps-dev)"
ignore:
- dependency-name: "poetry"
groups:
production-dependencies:
dependency-type: "production"
update-types:
- "minor"
- "patch"
- "minor"
- "patch"
development-dependencies:
dependency-type: "development"
update-types:
- "minor"
- "patch"
- "minor"
- "patch"
# backend (Poetry project)
- package-ecosystem: "pip"
@@ -32,17 +34,19 @@ updates:
commit-message:
prefix: "chore(backend/deps)"
prefix-development: "chore(backend/deps-dev)"
ignore:
- dependency-name: "poetry"
groups:
production-dependencies:
dependency-type: "production"
update-types:
- "minor"
- "patch"
- "minor"
- "patch"
development-dependencies:
dependency-type: "development"
update-types:
- "minor"
- "patch"
- "minor"
- "patch"
# frontend (Next.js project)
- package-ecosystem: "npm"
@@ -58,13 +62,13 @@ updates:
production-dependencies:
dependency-type: "production"
update-types:
- "minor"
- "patch"
- "minor"
- "patch"
development-dependencies:
dependency-type: "development"
update-types:
- "minor"
- "patch"
- "minor"
- "patch"
# infra (Terraform)
- package-ecosystem: "terraform"
@@ -81,14 +85,13 @@ updates:
production-dependencies:
dependency-type: "production"
update-types:
- "minor"
- "patch"
- "minor"
- "patch"
development-dependencies:
dependency-type: "development"
update-types:
- "minor"
- "patch"
- "minor"
- "patch"
# GitHub Actions
- package-ecosystem: "github-actions"
@@ -101,14 +104,13 @@ updates:
production-dependencies:
dependency-type: "production"
update-types:
- "minor"
- "patch"
- "minor"
- "patch"
development-dependencies:
dependency-type: "development"
update-types:
- "minor"
- "patch"
- "minor"
- "patch"
# Docker
- package-ecosystem: "docker"
@@ -121,16 +123,16 @@ updates:
production-dependencies:
dependency-type: "production"
update-types:
- "minor"
- "patch"
- "minor"
- "patch"
development-dependencies:
dependency-type: "development"
update-types:
- "minor"
- "patch"
- "minor"
- "patch"
# Docs
- package-ecosystem: 'pip'
- package-ecosystem: "pip"
directory: "docs/"
schedule:
interval: "weekly"
@@ -142,10 +144,10 @@ updates:
production-dependencies:
dependency-type: "production"
update-types:
- "minor"
- "patch"
- "minor"
- "patch"
development-dependencies:
dependency-type: "development"
update-types:
- "minor"
- "patch"
- "minor"
- "patch"

.github/labeler.yml

@@ -24,8 +24,9 @@ platform/frontend:
platform/backend:
- changed-files:
- any-glob-to-any-file: autogpt_platform/backend/**
- all-globs-to-all-files: '!autogpt_platform/backend/backend/blocks/**'
- all-globs-to-any-file:
- autogpt_platform/backend/**
- '!autogpt_platform/backend/backend/blocks/**'
platform/blocks:
- changed-files:

.github/workflows/claude.yml (new file)

@@ -0,0 +1,47 @@
name: Claude Code
on:
issue_comment:
types: [created]
pull_request_review_comment:
types: [created]
issues:
types: [opened, assigned]
pull_request_review:
types: [submitted]
jobs:
claude:
if: |
(
(github.event_name == 'issue_comment' && contains(github.event.comment.body, '@claude')) ||
(github.event_name == 'pull_request_review_comment' && contains(github.event.comment.body, '@claude')) ||
(github.event_name == 'pull_request_review' && contains(github.event.review.body, '@claude')) ||
(github.event_name == 'issues' && (contains(github.event.issue.body, '@claude') || contains(github.event.issue.title, '@claude')))
) && (
github.event.comment.author_association == 'OWNER' ||
github.event.comment.author_association == 'MEMBER' ||
github.event.comment.author_association == 'COLLABORATOR' ||
github.event.review.author_association == 'OWNER' ||
github.event.review.author_association == 'MEMBER' ||
github.event.review.author_association == 'COLLABORATOR' ||
github.event.issue.author_association == 'OWNER' ||
github.event.issue.author_association == 'MEMBER' ||
github.event.issue.author_association == 'COLLABORATOR'
)
runs-on: ubuntu-latest
permissions:
contents: read
pull-requests: read
issues: read
id-token: write
steps:
- name: Checkout repository
uses: actions/checkout@v4
with:
fetch-depth: 1
- name: Run Claude Code
id: claude
uses: anthropics/claude-code-action@beta
with:
anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}


@@ -32,7 +32,7 @@ jobs:
strategy:
fail-fast: false
matrix:
python-version: ["3.10"]
python-version: ["3.11"]
runs-on: ubuntu-latest
services:
@@ -50,6 +50,23 @@ jobs:
env:
RABBITMQ_DEFAULT_USER: ${{ env.RABBITMQ_DEFAULT_USER }}
RABBITMQ_DEFAULT_PASS: ${{ env.RABBITMQ_DEFAULT_PASS }}
clamav:
image: clamav/clamav-debian:latest
ports:
- 3310:3310
env:
CLAMAV_NO_FRESHCLAMD: false
CLAMD_CONF_StreamMaxLength: 50M
CLAMD_CONF_MaxFileSize: 100M
CLAMD_CONF_MaxScanSize: 100M
CLAMD_CONF_MaxThreads: 4
CLAMD_CONF_ReadTimeout: 300
options: >-
--health-cmd "clamdscan --version || exit 1"
--health-interval 30s
--health-timeout 10s
--health-retries 5
--health-start-period 180s
steps:
- name: Checkout repository
@@ -81,12 +98,12 @@ jobs:
- name: Install Poetry (Unix)
run: |
# Extract Poetry version from backend/poetry.lock
HEAD_POETRY_VERSION=$(head -n 1 poetry.lock | grep -oP '(?<=Poetry )[0-9]+\.[0-9]+\.[0-9]+')
HEAD_POETRY_VERSION=$(python ../../.github/workflows/scripts/get_package_version_from_lockfile.py poetry)
echo "Found Poetry version ${HEAD_POETRY_VERSION} in backend/poetry.lock"
if [ -n "$BASE_REF" ]; then
BASE_BRANCH=${BASE_REF/refs\/heads\//}
BASE_POETRY_VERSION=$((git show "origin/$BASE_BRANCH":./poetry.lock; true) | head -n 1 | grep -oP '(?<=Poetry )[0-9]+\.[0-9]+\.[0-9]+')
BASE_POETRY_VERSION=$((git show "origin/$BASE_BRANCH":./poetry.lock; true) | python ../../.github/workflows/scripts/get_package_version_from_lockfile.py poetry -)
echo "Found Poetry version ${BASE_POETRY_VERSION} in backend/poetry.lock on ${BASE_REF}"
POETRY_VERSION=$(printf '%s\n' "$HEAD_POETRY_VERSION" "$BASE_POETRY_VERSION" | sort -V | tail -n1)
else
@@ -131,6 +148,35 @@ jobs:
# outputs:
# DB_URL, API_URL, GRAPHQL_URL, ANON_KEY, SERVICE_ROLE_KEY, JWT_SECRET
- name: Wait for ClamAV to be ready
run: |
echo "Waiting for ClamAV daemon to start..."
max_attempts=60
attempt=0
until nc -z localhost 3310 || [ $attempt -eq $max_attempts ]; do
echo "ClamAV is unavailable - sleeping (attempt $((attempt+1))/$max_attempts)"
sleep 5
attempt=$((attempt+1))
done
if [ $attempt -eq $max_attempts ]; then
echo "ClamAV failed to start after $((max_attempts*5)) seconds"
echo "Checking ClamAV service logs..."
docker logs $(docker ps -q --filter "ancestor=clamav/clamav-debian:latest") 2>&1 | tail -50 || echo "No ClamAV container found"
exit 1
fi
echo "ClamAV is ready!"
# Verify ClamAV is responsive
echo "Testing ClamAV connection..."
timeout 10 bash -c 'echo "PING" | nc localhost 3310' || {
echo "ClamAV is not responding to PING"
docker logs $(docker ps -q --filter "ancestor=clamav/clamav-debian:latest") 2>&1 | tail -50 || echo "No ClamAV container found"
exit 1
}
- name: Run Database Migrations
run: poetry run prisma migrate dev --name updates
env:
@@ -144,9 +190,9 @@ jobs:
- name: Run pytest with coverage
run: |
if [[ "${{ runner.debug }}" == "1" ]]; then
poetry run pytest -s -vv -o log_cli=true -o log_cli_level=DEBUG test
poetry run pytest -s -vv -o log_cli=true -o log_cli_level=DEBUG
else
poetry run pytest -s -vv test
poetry run pytest -s -vv
fi
if: success() || (failure() && steps.lint.outcome == 'failure')
env:
@@ -159,6 +205,7 @@ jobs:
REDIS_HOST: "localhost"
REDIS_PORT: "6379"
REDIS_PASSWORD: "testpassword"
ENCRYPTION_KEY: "dvziYgz0KSK8FENhju0ZYi8-fRTfAdlz6YLhdB_jhNw=" # DO NOT USE IN PRODUCTION!!
env:
CI: true


@@ -0,0 +1,198 @@
name: AutoGPT Platform - Dev Deploy PR Event Dispatcher
on:
pull_request:
types: [closed]
issue_comment:
types: [created]
permissions:
issues: write
pull-requests: write
jobs:
dispatch:
runs-on: ubuntu-latest
steps:
- name: Check comment permissions and deployment status
id: check_status
if: github.event_name == 'issue_comment' && github.event.issue.pull_request
uses: actions/github-script@v7
with:
script: |
const commentBody = context.payload.comment.body.trim();
const commentUser = context.payload.comment.user.login;
const prAuthor = context.payload.issue.user.login;
const authorAssociation = context.payload.comment.author_association;
// Check permissions
const hasPermission = (
authorAssociation === 'OWNER' ||
authorAssociation === 'MEMBER' ||
authorAssociation === 'COLLABORATOR'
);
core.setOutput('comment_body', commentBody);
core.setOutput('has_permission', hasPermission);
if (!hasPermission && (commentBody === '!deploy' || commentBody === '!undeploy')) {
core.setOutput('permission_denied', 'true');
return;
}
if (commentBody !== '!deploy' && commentBody !== '!undeploy') {
return;
}
// Process deploy command
if (commentBody === '!deploy') {
core.setOutput('should_deploy', 'true');
}
// Process undeploy command
else if (commentBody === '!undeploy') {
core.setOutput('should_undeploy', 'true');
}
- name: Post permission denied comment
if: steps.check_status.outputs.permission_denied == 'true'
uses: actions/github-script@v7
with:
script: |
await github.rest.issues.createComment({
owner: context.repo.owner,
repo: context.repo.repo,
issue_number: context.issue.number,
body: `❌ **Permission denied**: Only the repository owners, members, or collaborators can use deployment commands.`
});
- name: Get PR details for deployment
id: pr_details
if: steps.check_status.outputs.should_deploy == 'true' || steps.check_status.outputs.should_undeploy == 'true'
uses: actions/github-script@v7
with:
script: |
const pr = await github.rest.pulls.get({
owner: context.repo.owner,
repo: context.repo.repo,
pull_number: context.issue.number
});
core.setOutput('pr_number', pr.data.number);
core.setOutput('pr_title', pr.data.title);
core.setOutput('pr_state', pr.data.state);
- name: Dispatch Deploy Event
if: steps.check_status.outputs.should_deploy == 'true'
uses: peter-evans/repository-dispatch@v3
with:
token: ${{ secrets.DISPATCH_TOKEN }}
repository: Significant-Gravitas/AutoGPT_cloud_infrastructure
event-type: pr-event
client-payload: |
{
"action": "deploy",
"pr_number": "${{ steps.pr_details.outputs.pr_number }}",
"pr_title": "${{ steps.pr_details.outputs.pr_title }}",
"pr_state": "${{ steps.pr_details.outputs.pr_state }}",
"repo": "${{ github.repository }}"
}
- name: Post deploy success comment
if: steps.check_status.outputs.should_deploy == 'true'
uses: actions/github-script@v7
with:
script: |
await github.rest.issues.createComment({
owner: context.repo.owner,
repo: context.repo.repo,
issue_number: context.issue.number,
body: `🚀 **Deploying PR #${{ steps.pr_details.outputs.pr_number }}** to development environment...`
});
- name: Dispatch Undeploy Event (from comment)
if: steps.check_status.outputs.should_undeploy == 'true'
uses: peter-evans/repository-dispatch@v3
with:
token: ${{ secrets.DISPATCH_TOKEN }}
repository: Significant-Gravitas/AutoGPT_cloud_infrastructure
event-type: pr-event
client-payload: |
{
"action": "undeploy",
"pr_number": "${{ steps.pr_details.outputs.pr_number }}",
"pr_title": "${{ steps.pr_details.outputs.pr_title }}",
"pr_state": "${{ steps.pr_details.outputs.pr_state }}",
"repo": "${{ github.repository }}"
}
- name: Post undeploy success comment
if: steps.check_status.outputs.should_undeploy == 'true'
uses: actions/github-script@v7
with:
script: |
await github.rest.issues.createComment({
owner: context.repo.owner,
repo: context.repo.repo,
issue_number: context.issue.number,
body: `🗑️ **Undeploying PR #${{ steps.pr_details.outputs.pr_number }}** from development environment...`
});
- name: Check deployment status on PR close
id: check_pr_close
if: github.event_name == 'pull_request' && github.event.action == 'closed'
uses: actions/github-script@v7
with:
script: |
const comments = await github.rest.issues.listComments({
owner: context.repo.owner,
repo: context.repo.repo,
issue_number: context.issue.number
});
let lastDeployIndex = -1;
let lastUndeployIndex = -1;
comments.data.forEach((comment, index) => {
if (comment.body.trim() === '!deploy') {
lastDeployIndex = index;
} else if (comment.body.trim() === '!undeploy') {
lastUndeployIndex = index;
}
});
// Should undeploy if there's a !deploy without a subsequent !undeploy
const shouldUndeploy = lastDeployIndex !== -1 && lastDeployIndex > lastUndeployIndex;
core.setOutput('should_undeploy', shouldUndeploy);
- name: Dispatch Undeploy Event (PR closed with active deployment)
if: >-
github.event_name == 'pull_request' &&
github.event.action == 'closed' &&
steps.check_pr_close.outputs.should_undeploy == 'true'
uses: peter-evans/repository-dispatch@v3
with:
token: ${{ secrets.DISPATCH_TOKEN }}
repository: Significant-Gravitas/AutoGPT_cloud_infrastructure
event-type: pr-event
client-payload: |
{
"action": "undeploy",
"pr_number": "${{ github.event.pull_request.number }}",
"pr_title": "${{ github.event.pull_request.title }}",
"pr_state": "${{ github.event.pull_request.state }}",
"repo": "${{ github.repository }}"
}
- name: Post PR close undeploy comment
if: >-
github.event_name == 'pull_request' &&
github.event.action == 'closed' &&
steps.check_pr_close.outputs.should_undeploy == 'true'
uses: actions/github-script@v7
with:
script: |
await github.rest.issues.createComment({
owner: context.repo.owner,
repo: context.repo.repo,
issue_number: context.issue.number,
body: `🧹 **Auto-undeploying**: PR closed with active deployment. Cleaning up development environment for PR #${{ github.event.pull_request.number }}.`
});


@@ -18,46 +18,140 @@ defaults:
working-directory: autogpt_platform/frontend
jobs:
lint:
setup:
runs-on: ubuntu-latest
outputs:
cache-key: ${{ steps.cache-key.outputs.key }}
steps:
- uses: actions/checkout@v4
- name: Checkout repository
uses: actions/checkout@v4
- name: Set up Node.js
uses: actions/setup-node@v4
with:
node-version: "21"
- name: Enable corepack
run: corepack enable
- name: Generate cache key
id: cache-key
run: echo "key=${{ runner.os }}-pnpm-${{ hashFiles('**/pnpm-lock.yaml') }}" >> $GITHUB_OUTPUT
- name: Cache dependencies
uses: actions/cache@v4
with:
path: ~/.pnpm-store
key: ${{ steps.cache-key.outputs.key }}
restore-keys: |
${{ runner.os }}-pnpm-
- name: Install dependencies
run: |
yarn install --frozen-lockfile
run: pnpm install --frozen-lockfile
lint:
runs-on: ubuntu-latest
needs: setup
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Set up Node.js
uses: actions/setup-node@v4
with:
node-version: "21"
- name: Enable corepack
run: corepack enable
- name: Restore dependencies cache
uses: actions/cache@v4
with:
path: ~/.pnpm-store
key: ${{ needs.setup.outputs.cache-key }}
restore-keys: |
${{ runner.os }}-pnpm-
- name: Install dependencies
run: pnpm install --frozen-lockfile
- name: Run lint
run: |
yarn lint
run: pnpm lint
type-check:
runs-on: ubuntu-latest
needs: setup
steps:
- uses: actions/checkout@v4
- name: Checkout repository
uses: actions/checkout@v4
- name: Set up Node.js
uses: actions/setup-node@v4
with:
node-version: "21"
- name: Enable corepack
run: corepack enable
- name: Restore dependencies cache
uses: actions/cache@v4
with:
path: ~/.pnpm-store
key: ${{ needs.setup.outputs.cache-key }}
restore-keys: |
${{ runner.os }}-pnpm-
- name: Install dependencies
run: |
yarn install --frozen-lockfile
run: pnpm install --frozen-lockfile
- name: Run tsc check
run: |
yarn type-check
run: pnpm type-check
chromatic:
runs-on: ubuntu-latest
needs: setup
# Only run on dev branch pushes or PRs targeting dev
if: github.ref == 'refs/heads/dev' || github.base_ref == 'dev'
steps:
- name: Checkout repository
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Set up Node.js
uses: actions/setup-node@v4
with:
node-version: "21"
- name: Enable corepack
run: corepack enable
- name: Restore dependencies cache
uses: actions/cache@v4
with:
path: ~/.pnpm-store
key: ${{ needs.setup.outputs.cache-key }}
restore-keys: |
${{ runner.os }}-pnpm-
- name: Install dependencies
run: pnpm install --frozen-lockfile
- name: Run Chromatic
uses: chromaui/action@latest
with:
projectToken: chpt_9e7c1a76478c9c8
onlyChanged: true
workingDir: autogpt_platform/frontend
token: ${{ secrets.GITHUB_TOKEN }}
test:
runs-on: ubuntu-latest
needs: setup
strategy:
fail-fast: false
matrix:
@@ -74,6 +168,9 @@ jobs:
with:
node-version: "21"
- name: Enable corepack
run: corepack enable
- name: Free Disk Space (Ubuntu)
uses: jlumbroso/free-disk-space@main
with:
@@ -92,26 +189,33 @@ jobs:
run: |
docker compose -f ../docker-compose.yml up -d
- name: Install dependencies
run: |
yarn install --frozen-lockfile
- name: Restore dependencies cache
uses: actions/cache@v4
with:
path: ~/.pnpm-store
key: ${{ needs.setup.outputs.cache-key }}
restore-keys: |
${{ runner.os }}-pnpm-
- name: Setup Builder .env
run: |
cp .env.example .env
- name: Install dependencies
run: pnpm install --frozen-lockfile
- name: Setup .env
run: cp .env.example .env
- name: Build frontend
run: pnpm build --turbo
# uses Turbopack, much faster and safe enough for a test pipeline
- name: Install Browser '${{ matrix.browser }}'
run: yarn playwright install --with-deps ${{ matrix.browser }}
run: pnpm playwright install --with-deps ${{ matrix.browser }}
- name: Run tests
timeout-minutes: 20
run: |
yarn test --project=${{ matrix.browser }}
- name: Run Playwright tests
run: pnpm test:no-build --project=${{ matrix.browser }}
- name: Print Final Docker Compose logs
if: always()
run: |
docker compose -f ../docker-compose.yml logs
run: docker compose -f ../docker-compose.yml logs
- uses: actions/upload-artifact@v4
if: ${{ !cancelled() }}


@@ -16,7 +16,7 @@ jobs:
# operations-per-run: 5000
stale-issue-message: >
This issue has automatically been marked as _stale_ because it has not had
any activity in the last 50 days. You can _unstale_ it by commenting or
any activity in the last 170 days. You can _unstale_ it by commenting or
removing the label. Otherwise, this issue will be closed in 10 days.
stale-pr-message: >
This pull request has automatically been marked as _stale_ because it has
@@ -25,7 +25,7 @@ jobs:
close-issue-message: >
This issue was closed automatically because it has been stale for 10 days
with no activity.
days-before-stale: 100
days-before-stale: 170
days-before-close: 10
# Do not touch meta issues:
exempt-issue-labels: meta,fridge,project management


@@ -0,0 +1,60 @@
#!/usr/bin/env python3
import sys
if sys.version_info < (3, 11):
print("Python version 3.11 or higher required")
sys.exit(1)
import tomllib
def get_package_version(package_name: str, lockfile_path: str) -> str | None:
"""Extract package version from poetry.lock file."""
try:
if lockfile_path == "-":
data = tomllib.load(sys.stdin.buffer)
else:
with open(lockfile_path, "rb") as f:
data = tomllib.load(f)
except FileNotFoundError:
print(f"Error: File '{lockfile_path}' not found", file=sys.stderr)
sys.exit(1)
except tomllib.TOMLDecodeError as e:
print(f"Error parsing TOML file: {e}", file=sys.stderr)
sys.exit(1)
except Exception as e:
print(f"Error reading file: {e}", file=sys.stderr)
sys.exit(1)
# Look for the package in the packages list
packages = data.get("package", [])
for package in packages:
if package.get("name", "").lower() == package_name.lower():
return package.get("version")
return None
def main():
if len(sys.argv) not in (2, 3):
print(
"Usages: python get_package_version_from_lockfile.py <package name> [poetry.lock path]\n"
" cat poetry.lock | python get_package_version_from_lockfile.py <package name> -",
file=sys.stderr,
)
sys.exit(1)
package_name = sys.argv[1]
lockfile_path = sys.argv[2] if len(sys.argv) == 3 else "poetry.lock"
version = get_package_version(package_name, lockfile_path)
if version:
print(version)
else:
print(f"Package '{package_name}' not found in {lockfile_path}", file=sys.stderr)
sys.exit(1)
if __name__ == "__main__":
main()

.gitignore

@@ -165,7 +165,7 @@ package-lock.json
# Allow for locally private items
# private
pri*
pri*
# ignore
ig*
.github_access_token
@@ -176,3 +176,4 @@ autogpt_platform/backend/settings.py
*.ign.*
.test-contents
.claude/settings.local.json


@@ -17,7 +17,7 @@ repos:
name: Detect secrets
description: Detects high entropy strings that are likely to be passwords.
files: ^autogpt_platform/
stages: [push]
stages: [pre-push]
- repo: local
# For proper type checking, all dependencies need to be up-to-date.
@@ -235,44 +235,44 @@ repos:
hooks:
- id: tsc
name: Typecheck - AutoGPT Platform - Frontend
entry: bash -c 'cd autogpt_platform/frontend && npm run type-check'
entry: bash -c 'cd autogpt_platform/frontend && pnpm type-check'
files: ^autogpt_platform/frontend/
types: [file]
language: system
pass_filenames: false
- repo: local
hooks:
- id: pytest
name: Run tests - AutoGPT Platform - Backend
alias: pytest-platform-backend
entry: bash -c 'cd autogpt_platform/backend && poetry run pytest'
# include autogpt_libs source (since it's a path dependency) but exclude *_test.py files:
files: ^autogpt_platform/(backend/((backend|test)/|poetry\.lock$)|autogpt_libs/(autogpt_libs/.*(?<!_test)\.py|poetry\.lock)$)
language: system
pass_filenames: false
# - repo: local
# hooks:
# - id: pytest
# name: Run tests - AutoGPT Platform - Backend
# alias: pytest-platform-backend
# entry: bash -c 'cd autogpt_platform/backend && poetry run pytest'
# # include autogpt_libs source (since it's a path dependency) but exclude *_test.py files:
# files: ^autogpt_platform/(backend/((backend|test)/|poetry\.lock$)|autogpt_libs/(autogpt_libs/.*(?<!_test)\.py|poetry\.lock)$)
# language: system
# pass_filenames: false
- id: pytest
name: Run tests - Classic - AutoGPT (excl. slow tests)
alias: pytest-classic-autogpt
entry: bash -c 'cd classic/original_autogpt && poetry run pytest --cov=autogpt -m "not slow" tests/unit tests/integration'
# include forge source (since it's a path dependency) but exclude *_test.py files:
files: ^(classic/original_autogpt/((autogpt|tests)/|poetry\.lock$)|classic/forge/(forge/.*(?<!_test)\.py|poetry\.lock)$)
language: system
pass_filenames: false
# - id: pytest
# name: Run tests - Classic - AutoGPT (excl. slow tests)
# alias: pytest-classic-autogpt
# entry: bash -c 'cd classic/original_autogpt && poetry run pytest --cov=autogpt -m "not slow" tests/unit tests/integration'
# # include forge source (since it's a path dependency) but exclude *_test.py files:
# files: ^(classic/original_autogpt/((autogpt|tests)/|poetry\.lock$)|classic/forge/(forge/.*(?<!_test)\.py|poetry\.lock)$)
# language: system
# pass_filenames: false
- id: pytest
name: Run tests - Classic - Forge (excl. slow tests)
alias: pytest-classic-forge
entry: bash -c 'cd classic/forge && poetry run pytest --cov=forge -m "not slow"'
files: ^classic/forge/(forge/|tests/|poetry\.lock$)
language: system
pass_filenames: false
# - id: pytest
# name: Run tests - Classic - Forge (excl. slow tests)
# alias: pytest-classic-forge
# entry: bash -c 'cd classic/forge && poetry run pytest --cov=forge -m "not slow"'
# files: ^classic/forge/(forge/|tests/|poetry\.lock$)
# language: system
# pass_filenames: false
- id: pytest
name: Run tests - Classic - Benchmark
alias: pytest-classic-benchmark
entry: bash -c 'cd classic/benchmark && poetry run pytest --cov=benchmark'
files: ^classic/benchmark/(agbenchmark/|tests/|poetry\.lock$)
language: system
pass_filenames: false
# - id: pytest
# name: Run tests - Classic - Benchmark
# alias: pytest-classic-benchmark
# entry: bash -c 'cd classic/benchmark && poetry run pytest --cov=benchmark'
# files: ^classic/benchmark/(agbenchmark/|tests/|poetry\.lock$)
# language: system
# pass_filenames: false

.vscode/launch.json

@@ -32,9 +32,9 @@
"type": "debugpy",
"request": "launch",
"module": "backend.app",
// "env": {
// "ENV": "dev"
// },
"env": {
"OBJC_DISABLE_INITIALIZE_FORK_SAFETY": "YES"
},
"envFile": "${workspaceFolder}/backend/.env",
"justMyCode": false,
"cwd": "${workspaceFolder}/autogpt_platform/backend"

AGENTS.md (new file)

@@ -0,0 +1,53 @@
# AutoGPT Platform Contribution Guide
This guide provides context for Codex when updating the **autogpt_platform** folder.
## Directory overview
- `autogpt_platform/backend` FastAPI based backend service.
- `autogpt_platform/autogpt_libs` Shared Python libraries.
- `autogpt_platform/frontend` Next.js + Typescript frontend.
- `autogpt_platform/docker-compose.yml` development stack.
See `docs/content/platform/getting-started.md` for setup instructions.
## Code style
- Format Python code with `poetry run format`.
- Format frontend code using `pnpm format`.
## Testing
- Backend: `poetry run test` (runs pytest with a docker based postgres + prisma).
- Frontend: `pnpm test` or `pnpm test-ui` for Playwright tests. See `docs/content/platform/contributing/tests.md` for tips.
Always run the relevant linters and tests before committing.
Use conventional commit messages for all commits (e.g. `feat(backend): add API`).
Types:
- feat
- fix
- refactor
- ci
- dx (developer experience)
Scopes:
- platform
- platform/library
- platform/marketplace
- backend
- backend/executor
- frontend
- frontend/library
- frontend/marketplace
- blocks
## Pull requests
- Use the template in `.github/PULL_REQUEST_TEMPLATE.md`.
- Rely on the pre-commit checks for linting and formatting
- Fill out the **Changes** section and the checklist.
- Use conventional commit titles with a scope (e.g. `feat(frontend): add feature`).
- Keep out-of-scope changes under 20% of the PR.
- Ensure PR descriptions are complete.
- For changes touching `data/*.py`, validate user ID checks or explain why not needed.
- If adding protected frontend routes, update `frontend/lib/supabase/middleware.ts`.
- Use the Linear ticket branch structure if given (e.g. `codex/open-1668-resume-dropped-runs`)


@@ -15,8 +15,35 @@
> Setting up and hosting the AutoGPT Platform yourself is a technical process.
> If you'd rather something that just works, we recommend [joining the waitlist](https://bit.ly/3ZDijAI) for the cloud-hosted beta.
### System Requirements
Before proceeding with the installation, ensure your system meets the following requirements:
#### Hardware Requirements
- CPU: 4+ cores recommended
- RAM: Minimum 8GB, 16GB recommended
- Storage: At least 10GB of free space
#### Software Requirements
- Operating Systems:
- Linux (Ubuntu 20.04 or newer recommended)
- macOS (10.15 or newer)
- Windows 10/11 with WSL2
- Required Software (with minimum versions):
- Docker Engine (20.10.0 or newer)
- Docker Compose (2.0.0 or newer)
- Git (2.30 or newer)
- Node.js (16.x or newer)
- npm (8.x or newer)
- VSCode (1.60 or newer) or any modern code editor
#### Network Requirements
- Stable internet connection
- Access to required ports (will be configured in Docker)
- Ability to make outbound HTTPS connections
### Updated Setup Instructions:
Weve moved to a fully maintained and regularly updated documentation site.
We've moved to a fully maintained and regularly updated documentation site.
👉 [Follow the official self-hosting guide here](https://docs.agpt.co/platform/getting-started/)
@@ -152,7 +179,7 @@ Just clone the repo, install dependencies with `./run setup`, and you should be
[![Join us on Discord](https://invidget.switchblade.xyz/autogpt)](https://discord.gg/autogpt)
To report a bug or request a feature, create a [GitHub Issue](https://github.com/Significant-Gravitas/AutoGPT/issues/new/choose). Please ensure someone else hasnt created an issue for the same topic.
To report a bug or request a feature, create a [GitHub Issue](https://github.com/Significant-Gravitas/AutoGPT/issues/new/choose). Please ensure someone else hasn't created an issue for the same topic.
## 🤝 Sister projects

autogpt_platform/CLAUDE.md (new file)

@@ -0,0 +1,147 @@
# CLAUDE.md
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
## Repository Overview
AutoGPT Platform is a monorepo containing:
- **Backend** (`/backend`): Python FastAPI server with async support
- **Frontend** (`/frontend`): Next.js React application
- **Shared Libraries** (`/autogpt_libs`): Common Python utilities
## Essential Commands
### Backend Development
```bash
# Install dependencies
cd backend && poetry install
# Run database migrations
poetry run prisma migrate dev
# Start all services (database, redis, rabbitmq, clamav)
docker compose up -d
# Run the backend server
poetry run serve
# Run tests
poetry run test
# Run specific test
poetry run pytest path/to/test_file.py::test_function_name
# Lint and format
# Prefer `format` when you just want fixes applied; it reports only the errors that can't be autofixed
poetry run format # Black + isort
poetry run lint # ruff
```
More details can be found in `TESTING.md`.
#### Creating/Updating Snapshots
When you first write a test or when the expected output changes:
```bash
poetry run pytest path/to/test.py --snapshot-update
```
⚠️ **Important**: Always review snapshot changes before committing! Use `git diff` to verify the changes are expected.
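For orientation, a snapshot test might look like the following sketch; it assumes a syrupy-style `snapshot` fixture, and the test name and payload are hypothetical:
```python
# Hypothetical snapshot test, assuming a syrupy-style `snapshot` fixture.
def test_graph_summary_payload(snapshot):
    payload = {"id": "graph-1", "name": "Example", "version": 2}
    # The first run with --snapshot-update records this value; later runs
    # fail if the payload ever drifts from the recorded snapshot.
    assert payload == snapshot
```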
### Frontend Development
```bash
# Install dependencies
cd frontend && npm install
# Start development server
npm run dev
# Run E2E tests
npm run test
# Run Storybook for component development
npm run storybook
# Build production
npm run build
# Type checking
npm run type-check
```
## Architecture Overview
### Backend Architecture
- **API Layer**: FastAPI with REST and WebSocket endpoints
- **Database**: PostgreSQL with Prisma ORM, includes pgvector for embeddings
- **Queue System**: RabbitMQ for async task processing
- **Execution Engine**: Separate executor service processes agent workflows
- **Authentication**: JWT-based with Supabase integration
- **Security**: Cache protection middleware prevents sensitive data caching in browsers/proxies
### Frontend Architecture
- **Framework**: Next.js App Router with React Server Components
- **State Management**: React hooks + Supabase client for real-time updates
- **Workflow Builder**: Visual graph editor using @xyflow/react
- **UI Components**: Radix UI primitives with Tailwind CSS styling
- **Feature Flags**: LaunchDarkly integration
### Key Concepts
1. **Agent Graphs**: Workflow definitions stored as JSON, executed by the backend
2. **Blocks**: Reusable components in `/backend/blocks/` that perform specific tasks
3. **Integrations**: OAuth and API connections stored per user
4. **Store**: Marketplace for sharing agent templates
5. **Virus Scanning**: ClamAV integration for file upload security
### Testing Approach
- Backend uses pytest with snapshot testing for API responses
- Test files are colocated with source files (`*_test.py`)
- Frontend uses Playwright for E2E tests
- Component testing via Storybook
### Database Schema
Key models (defined in `/backend/schema.prisma`):
- `User`: Authentication and profile data
- `AgentGraph`: Workflow definitions with version control
- `AgentGraphExecution`: Execution history and results
- `AgentNode`: Individual nodes in a workflow
- `StoreListing`: Marketplace listings for sharing agents
### Environment Configuration
- Backend: `.env` file in `/backend`
- Frontend: `.env.local` file in `/frontend`
- Both require Supabase credentials and API keys for various services
### Common Development Tasks
**Adding a new block:**
1. Create new file in `/backend/backend/blocks/`
2. Inherit from `Block` base class
3. Define input/output schemas
4. Implement `run` method
5. Register in block registry
6. Generate the block uuid using `uuid.uuid4()`
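As a rough illustration of these steps, here is a minimal, hypothetical block sketch; the import path and constructor arguments are assumptions inferred from the description above, not a canonical template:
```python
# Hypothetical minimal block; the Block/BlockSchema/BlockOutput names and
# import path are assumed from the description above.
from backend.data.block import Block, BlockOutput, BlockSchema


class EchoBlock(Block):
    class Input(BlockSchema):
        text: str

    class Output(BlockSchema):
        echoed: str

    def __init__(self):
        super().__init__(
            id="00000000-0000-4000-8000-000000000000",  # paste a real uuid.uuid4() value
            input_schema=EchoBlock.Input,
            output_schema=EchoBlock.Output,
        )

    def run(self, input_data: Input, **kwargs) -> BlockOutput:
        # Each yielded (name, value) pair becomes one output of the node.
        yield "echoed", input_data.text
```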
**Modifying the API:**
1. Update route in `/backend/backend/server/routers/`
2. Add/update Pydantic models in same directory
3. Write tests alongside the route file
4. Run `poetry run test` to verify
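For example, a new route plus its Pydantic model might look roughly like this sketch (the router path, model, and handler body are hypothetical):
```python
# Hypothetical route sketch for /backend/backend/server/routers/.
from fastapi import APIRouter
from pydantic import BaseModel

router = APIRouter()


class GraphSummary(BaseModel):
    id: str
    name: str


@router.get("/graphs/{graph_id}", response_model=GraphSummary)
async def get_graph(graph_id: str) -> GraphSummary:
    # Placeholder body; a real handler would call into the data layer.
    return GraphSummary(id=graph_id, name="example")
```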
**Frontend feature development:**
1. Components go in `/frontend/src/components/`
2. Use existing UI components from `/frontend/src/components/ui/`
3. Add Storybook stories for new components
4. Test with Playwright if user-facing
### Security Implementation
**Cache Protection Middleware:**
- Located in `/backend/backend/server/middleware/security.py`
- Default behavior: Disables caching for ALL endpoints with `Cache-Control: no-store, no-cache, must-revalidate, private`
- Uses an allow list approach - only explicitly permitted paths can be cached
- Cacheable paths include: static assets (`/static/*`, `/_next/static/*`), health checks, public store pages, documentation
- Prevents sensitive data (auth tokens, API keys, user data) from being cached by browsers/proxies
- To allow caching for a new endpoint, add it to `CACHEABLE_PATHS` in the middleware
- Applied to both main API server and external API applications
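A minimal sketch of this allow-list pattern, assuming Starlette's `BaseHTTPMiddleware` (the path list here is abbreviated, not the real `CACHEABLE_PATHS`):
```python
# Illustrative allow-list cache middleware: default-deny everything else.
from starlette.middleware.base import BaseHTTPMiddleware

CACHEABLE_PATHS = ("/static/", "/_next/static/", "/health")  # abbreviated


class CacheControlMiddleware(BaseHTTPMiddleware):
    async def dispatch(self, request, call_next):
        response = await call_next(request)
        if not any(request.url.path.startswith(p) for p in CACHEABLE_PATHS):
            # Sensitive responses must never be stored by browsers or proxies.
            response.headers["Cache-Control"] = (
                "no-store, no-cache, must-revalidate, private"
            )
        return response
```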

View File

@@ -15,44 +15,63 @@ Welcome to the AutoGPT Platform - a powerful system for creating and running AI
To run the AutoGPT Platform, follow these steps:
1. Clone this repository to your local machine and navigate to the `autogpt_platform` directory within the repository:
```
git clone <https://github.com/Significant-Gravitas/AutoGPT.git | git@github.com:Significant-Gravitas/AutoGPT.git>
cd AutoGPT/autogpt_platform
```
2. Run the following command:
```
cp .env.example .env
```
This command will copy the `.env.example` file to `.env`. You can modify the `.env` file to add your own environment variables.
3. Run the following command:
```
docker compose up -d
```
This command will start all the necessary backend services defined in the `docker-compose.yml` file in detached mode.
4. Navigate to `frontend` within the `autogpt_platform` directory:
```
cd frontend
```
You will need to run your frontend application separately on your local machine.
5. Run the following command:
```
cp .env.example .env.local
```
This command will copy the `.env.example` file to `.env.local` in the `frontend` directory. You can modify the `.env.local` within this folder to add your own environment variables for the frontend application.
6. Enable corepack and install dependencies by running:
```
corepack enable
pnpm i
```
Generate the API client (this step is required before running the frontend):
```
pnpm generate:api-client
```
Then start the frontend application in development mode:
```
pnpm dev
```
7. Open your browser and navigate to `http://localhost:3000` to access the AutoGPT Platform frontend.
@@ -68,43 +87,52 @@ Here are some useful Docker Compose commands for managing your AutoGPT Platform:
- `docker compose down`: Stop and remove containers, networks, and volumes.
- `docker compose watch`: Watch for changes in your services and automatically update them.
### Sample Scenarios
Here are some common scenarios where you might use multiple Docker Compose commands:
1. Updating and restarting a specific service:
```
docker compose build api_srv
docker compose up -d --no-deps api_srv
```
This rebuilds the `api_srv` service and restarts it without affecting other services.
2. Viewing logs for troubleshooting:
```
docker compose logs -f api_srv ws_srv
```
This shows and follows the logs for both `api_srv` and `ws_srv` services.
3. Scaling a service for increased load:
```
docker compose up -d --scale executor=3
```
This scales the `executor` service to 3 instances to handle increased load.
4. Stopping the entire system for maintenance:
```
docker compose stop
docker compose rm -f
docker compose pull
docker compose up -d
```
This stops all services, removes containers, pulls the latest images, and restarts the system.
5. Developing with live updates:
```
docker compose watch
```
This watches for changes in your code and automatically updates the relevant services.
6. Checking the status of services:
@@ -115,7 +143,6 @@ Here are some common scenarios where you might use multiple Docker Compose comma
These scenarios demonstrate how to use Docker Compose commands in combination to manage your AutoGPT Platform effectively.
### Persisting Data
To persist data for PostgreSQL and Redis, you can modify the `docker-compose.yml` file to add volumes. Here's how:
@@ -143,3 +170,27 @@ To persist data for PostgreSQL and Redis, you can modify the `docker-compose.yml
3. Save the file and run `docker compose up -d` to apply the changes.
This configuration will create named volumes for PostgreSQL and Redis, ensuring that your data persists across container restarts.
### API Client Generation
The platform includes scripts for generating and managing the API client:
- `pnpm fetch:openapi`: Fetches the OpenAPI specification from the backend service (requires backend to be running on port 8006)
- `pnpm generate:api-client`: Generates the TypeScript API client from the OpenAPI specification using Orval
- `pnpm generate:api-all`: Runs both fetch and generate commands in sequence
#### Manual API Client Updates
If you need to update the API client after making changes to the backend API:
1. Ensure the backend services are running:
```
docker compose up -d
```
2. Generate the updated API client:
```
pnpm generate:api-all
```
This will fetch the latest OpenAPI specification and regenerate the TypeScript client code.

View File

@@ -1,3 +1,3 @@
# AutoGPT Libs
This is a new project to store shared functionality across different services in the AutoGPT Platform (e.g. authentication)

View File

@@ -31,4 +31,5 @@ class APIKeyManager:
"""Verify if a provided API key matches the stored hash."""
if not provided_key.startswith(self.PREFIX):
return False
return hashlib.sha256(provided_key.encode()).hexdigest() == stored_hash
provided_hash = hashlib.sha256(provided_key.encode()).hexdigest()
return secrets.compare_digest(provided_hash, stored_hash)
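The switch from `==` to `secrets.compare_digest` makes the comparison constant-time, so an attacker cannot infer a hash prefix from response latency. A self-contained sketch of the same pattern (the key value below is made up):
```python
# Standalone illustration of timing-safe hash comparison (stdlib only).
import hashlib
import secrets

stored_hash = hashlib.sha256(b"agpt-example-key").hexdigest()  # hypothetical value


def verify(provided_key: bytes) -> bool:
    provided_hash = hashlib.sha256(provided_key).hexdigest()
    # compare_digest's runtime does not depend on where the strings first
    # differ, unlike ==, which short-circuits at the first mismatched byte.
    return secrets.compare_digest(provided_hash, stored_hash)


assert verify(b"agpt-example-key")
assert not verify(b"wrong-key")
```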

View File

@@ -1,5 +1,6 @@
import inspect
import logging
import secrets
from typing import Any, Callable, Optional
from fastapi import HTTPException, Request, Security
@@ -16,7 +17,7 @@ logger = logging.getLogger(__name__)
async def auth_middleware(request: Request):
    if not settings.ENABLE_AUTH:
        # If authentication is disabled, allow the request to proceed
        logger.warning("Auth disabled")
        return {}

    security = HTTPBearer()
@@ -93,7 +94,11 @@ class APIKeyValidator:
        self.error_message = error_message

    async def default_validator(self, api_key: str) -> bool:
        if not self.expected_token:
            raise ValueError(
                "An expected token must be set when using the API key validator's default validation"
            )
        return secrets.compare_digest(api_key, self.expected_token)

    async def __call__(
        self, request: Request, api_key: str = Security(APIKeyHeader)

View File

@@ -1,6 +1,6 @@
import inspect
import threading
from typing import Awaitable, Callable, ParamSpec, TypeVar, cast, overload
P = ParamSpec("P")
R = TypeVar("R")
@@ -19,41 +19,41 @@ def thread_cached(
) -> Callable[P, R] | Callable[P, Awaitable[R]]:
    thread_local = threading.local()

    def _clear():
        if hasattr(thread_local, "cache"):
            del thread_local.cache

    if inspect.iscoroutinefunction(func):

        async def async_wrapper(*args: P.args, **kwargs: P.kwargs) -> R:
            cache = getattr(thread_local, "cache", None)
            if cache is None:
                cache = thread_local.cache = {}
            key = (args, tuple(sorted(kwargs.items())))
            if key not in cache:
                cache[key] = await cast(Callable[P, Awaitable[R]], func)(
                    *args, **kwargs
                )
            return cache[key]

        setattr(async_wrapper, "clear_cache", _clear)
        return async_wrapper
    else:

        def sync_wrapper(*args: P.args, **kwargs: P.kwargs) -> R:
            cache = getattr(thread_local, "cache", None)
            if cache is None:
                cache = thread_local.cache = {}
            key = (args, tuple(sorted(kwargs.items())))
            if key not in cache:
                cache[key] = func(*args, **kwargs)
            return cache[key]

        setattr(sync_wrapper, "clear_cache", _clear)
        return sync_wrapper


def clear_thread_cache(func: Callable) -> None:
    """Clear this thread's cache for a thread-cached function."""
    if clear := getattr(func, "clear_cache", None):
        clear()
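Note that each decorated function gets its own `thread_local`, so dropping `func` from the cache key cannot cause cross-function collisions; the key only has to distinguish argument tuples. A hypothetical usage sketch:
```python
# Hypothetical usage of thread_cached; the names below are illustrative.
@thread_cached
def get_connection(url: str) -> str:
    print("connecting...")  # runs once per thread for a given URL
    return f"conn:{url}"


get_connection("redis://localhost")  # computes and caches
get_connection("redis://localhost")  # served from the thread-local cache
clear_thread_cache(get_connection)   # evicts this thread's entries
```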

View File

@@ -1,15 +1,15 @@
import asyncio
from contextlib import asynccontextmanager
from typing import TYPE_CHECKING, Any

from expiringdict import ExpiringDict

if TYPE_CHECKING:
    from redis.asyncio import Redis as AsyncRedis
    from redis.asyncio.lock import Lock as AsyncRedisLock


class AsyncRedisKeyedMutex:
    """
    This class provides a mutex that can be locked and unlocked by a specific key,
    using Redis as a distributed locking provider.
@@ -17,41 +17,45 @@ class RedisKeyedMutex:
    in case the key is not unlocked for a specified duration, to prevent memory leaks.
    """

    def __init__(self, redis: "AsyncRedis", timeout: int | None = 60):
        self.redis = redis
        self.timeout = timeout
        self.locks: dict[Any, "AsyncRedisLock"] = ExpiringDict(
            max_len=6000, max_age_seconds=self.timeout
        )
        self.locks_lock = asyncio.Lock()

    @asynccontextmanager
    async def locked(self, key: Any):
        lock = await self.acquire(key)
        try:
            yield
        finally:
            if (await lock.locked()) and (await lock.owned()):
                await lock.release()

    async def acquire(self, key: Any) -> "AsyncRedisLock":
        """Acquires and returns a lock with the given key"""
        async with self.locks_lock:
            if key not in self.locks:
                self.locks[key] = self.redis.lock(
                    str(key), self.timeout, thread_local=False
                )
            lock = self.locks[key]
        await lock.acquire()
        return lock

    async def release(self, key: Any):
        if (
            (lock := self.locks.get(key))
            and (await lock.locked())
            and (await lock.owned())
        ):
            await lock.release()

    async def release_all_locks(self):
        """Call this on process termination to ensure all locks are released"""
        async with self.locks_lock:
            for lock in self.locks.values():
                if (await lock.locked()) and (await lock.owned()):
                    await lock.release()
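A hedged usage sketch, assuming `redis.asyncio` is available and the class above is importable:
```python
# Hypothetical usage of AsyncRedisKeyedMutex.
import asyncio

from redis.asyncio import Redis


async def main():
    mutex = AsyncRedisKeyedMutex(Redis(), timeout=60)
    async with mutex.locked(("user", 42)):
        ...  # critical section: one holder per key across all processes
    await mutex.release_all_locks()


asyncio.run(main())
```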

View File

@@ -323,6 +323,21 @@ files = [
{file = "charset_normalizer-3.3.2-py3-none-any.whl", hash = "sha256:3e4d1f6587322d2788836a99c69062fbb091331ec940e02d12d179c1d53e25fc"},
]
[[package]]
name = "click"
version = "8.2.1"
description = "Composable command line interface toolkit"
optional = false
python-versions = ">=3.10"
groups = ["main"]
files = [
{file = "click-8.2.1-py3-none-any.whl", hash = "sha256:61a3265b914e850b85317d0b3109c7f8cd35a670f963866005d6ef1d5175a12b"},
{file = "click-8.2.1.tar.gz", hash = "sha256:27c491cc05d968d271d5a1db13e3b5a184636d9d930f148c50b038f0d0646202"},
]
[package.dependencies]
colorama = {version = "*", markers = "platform_system == \"Windows\""}
[[package]]
name = "colorama"
version = "0.4.6"
@@ -399,6 +414,27 @@ files = [
[package.extras]
tests = ["coverage", "coveralls", "dill", "mock", "nose"]
[[package]]
name = "fastapi"
version = "0.115.12"
description = "FastAPI framework, high performance, easy to learn, fast to code, ready for production"
optional = false
python-versions = ">=3.8"
groups = ["main"]
files = [
{file = "fastapi-0.115.12-py3-none-any.whl", hash = "sha256:e94613d6c05e27be7ffebdd6ea5f388112e5e430c8f7d6494a9d1d88d43e814d"},
{file = "fastapi-0.115.12.tar.gz", hash = "sha256:1e2c2a2646905f9e83d32f04a3f86aff4a286669c6c950ca95b5fd68c2602681"},
]
[package.dependencies]
pydantic = ">=1.7.4,<1.8 || >1.8,<1.8.1 || >1.8.1,<2.0.0 || >2.0.0,<2.0.1 || >2.0.1,<2.1.0 || >2.1.0,<3.0.0"
starlette = ">=0.40.0,<0.47.0"
typing-extensions = ">=4.8.0"
[package.extras]
all = ["email-validator (>=2.0.0)", "fastapi-cli[standard] (>=0.0.5)", "httpx (>=0.23.0)", "itsdangerous (>=1.1.0)", "jinja2 (>=3.1.5)", "orjson (>=3.2.1)", "pydantic-extra-types (>=2.0.0)", "pydantic-settings (>=2.0.0)", "python-multipart (>=0.0.18)", "pyyaml (>=5.3.1)", "ujson (>=4.0.1,!=4.0.2,!=4.1.0,!=4.2.0,!=4.3.0,!=5.0.0,!=5.1.0)", "uvicorn[standard] (>=0.12.0)"]
standard = ["email-validator (>=2.0.0)", "fastapi-cli[standard] (>=0.0.5)", "httpx (>=0.23.0)", "jinja2 (>=3.1.5)", "python-multipart (>=0.0.18)", "uvicorn[standard] (>=0.12.0)"]
[[package]]
name = "frozenlist"
version = "1.4.1"
@@ -562,19 +598,19 @@ protobuf = ">=3.20.2,<4.21.0 || >4.21.0,<4.21.1 || >4.21.1,<4.21.2 || >4.21.2,<4
[[package]]
name = "google-cloud-audit-log"
version = "0.3.0"
version = "0.3.2"
description = "Google Cloud Audit Protos"
optional = false
python-versions = ">=3.7"
groups = ["main"]
files = [
{file = "google_cloud_audit_log-0.3.0-py2.py3-none-any.whl", hash = "sha256:8340793120a1d5aa143605def8704ecdcead15106f754ef1381ae3bab533722f"},
{file = "google_cloud_audit_log-0.3.0.tar.gz", hash = "sha256:901428b257020d8c1d1133e0fa004164a555e5a395c7ca3cdbb8486513df3a65"},
{file = "google_cloud_audit_log-0.3.2-py3-none-any.whl", hash = "sha256:daaedfb947a0d77f524e1bd2b560242ab4836fe1afd6b06b92f152b9658554ed"},
{file = "google_cloud_audit_log-0.3.2.tar.gz", hash = "sha256:2598f1533a7d7cdd6c7bf448c12e5519c1d53162d78784e10bcdd1df67791bc3"},
]
[package.dependencies]
googleapis-common-protos = ">=1.56.2,<2.0dev"
protobuf = ">=3.20.2,<4.21.1 || >4.21.1,<4.21.2 || >4.21.2,<4.21.3 || >4.21.3,<4.21.4 || >4.21.4,<4.21.5 || >4.21.5,<6.0.0dev"
googleapis-common-protos = ">=1.56.2,<2.0.0"
protobuf = ">=3.20.2,<4.21.1 || >4.21.1,<4.21.2 || >4.21.2,<4.21.3 || >4.21.3,<4.21.4 || >4.21.4,<4.21.5 || >4.21.5,<7.0.0"
[[package]]
name = "google-cloud-core"
@@ -597,30 +633,30 @@ grpc = ["grpcio (>=1.38.0,<2.0dev)", "grpcio-status (>=1.38.0,<2.0.dev0)"]
[[package]]
name = "google-cloud-logging"
version = "3.11.4"
version = "3.12.1"
description = "Stackdriver Logging API client library"
optional = false
python-versions = ">=3.7"
groups = ["main"]
files = [
{file = "google_cloud_logging-3.11.4-py2.py3-none-any.whl", hash = "sha256:1d465ac62df29fb94bba4d6b4891035e57d573d84541dd8a40eebbc74422b2f0"},
{file = "google_cloud_logging-3.11.4.tar.gz", hash = "sha256:32305d989323f3c58603044e2ac5d9cf23e9465ede511bbe90b4309270d3195c"},
{file = "google_cloud_logging-3.12.1-py2.py3-none-any.whl", hash = "sha256:6817878af76ec4e7568976772839ab2c43ddfd18fbbf2ce32b13ef549cd5a862"},
{file = "google_cloud_logging-3.12.1.tar.gz", hash = "sha256:36efc823985055b203904e83e1c8f9f999b3c64270bcda39d57386ca4effd678"},
]
[package.dependencies]
google-api-core = {version = ">=1.34.1,<2.0.dev0 || >=2.11.dev0,<3.0.0dev", extras = ["grpc"]}
google-auth = ">=2.14.1,<2.24.0 || >2.24.0,<2.25.0 || >2.25.0,<3.0.0dev"
google-cloud-appengine-logging = ">=0.1.3,<2.0.0dev"
google-cloud-audit-log = ">=0.2.4,<1.0.0dev"
google-cloud-core = ">=2.0.0,<3.0.0dev"
grpc-google-iam-v1 = ">=0.12.4,<1.0.0dev"
google-api-core = {version = ">=1.34.1,<2.0.dev0 || >=2.11.dev0,<3.0.0", extras = ["grpc"]}
google-auth = ">=2.14.1,<2.24.0 || >2.24.0,<2.25.0 || >2.25.0,<3.0.0"
google-cloud-appengine-logging = ">=0.1.3,<2.0.0"
google-cloud-audit-log = ">=0.3.1,<1.0.0"
google-cloud-core = ">=2.0.0,<3.0.0"
grpc-google-iam-v1 = ">=0.12.4,<1.0.0"
opentelemetry-api = ">=1.9.0"
proto-plus = [
{version = ">=1.25.0,<2.0.0dev", markers = "python_version >= \"3.13\""},
{version = ">=1.22.2,<2.0.0dev", markers = "python_version >= \"3.11\" and python_version < \"3.13\""},
{version = ">=1.22.0,<2.0.0dev", markers = "python_version < \"3.11\""},
{version = ">=1.25.0,<2.0.0", markers = "python_version >= \"3.13\""},
{version = ">=1.22.2,<2.0.0", markers = "python_version >= \"3.11\" and python_version < \"3.13\""},
{version = ">=1.22.0,<2.0.0", markers = "python_version < \"3.11\""},
]
protobuf = ">=3.20.2,<4.21.0 || >4.21.0,<4.21.1 || >4.21.1,<4.21.2 || >4.21.2,<4.21.3 || >4.21.3,<4.21.4 || >4.21.4,<4.21.5 || >4.21.5,<6.0.0dev"
protobuf = ">=3.20.2,<4.21.0 || >4.21.0,<4.21.1 || >4.21.1,<4.21.2 || >4.21.2,<4.21.3 || >4.21.3,<4.21.4 || >4.21.4,<4.21.5 || >4.21.5,<7.0.0"
[[package]]
name = "googleapis-common-protos"
@@ -895,6 +931,47 @@ files = [
{file = "iniconfig-2.0.0.tar.gz", hash = "sha256:2d91e135bf72d31a410b17c16da610a82cb55f6b0477d1a902134b24a455b8b3"},
]
[[package]]
name = "launchdarkly-eventsource"
version = "1.2.4"
description = "LaunchDarkly SSE Client"
optional = false
python-versions = ">=3.8"
groups = ["main"]
files = [
{file = "launchdarkly_eventsource-1.2.4-py3-none-any.whl", hash = "sha256:048ef8c4440d0d8219778661ee4d4b5e12aa6ed2c29a3004417ede44c2386e8c"},
{file = "launchdarkly_eventsource-1.2.4.tar.gz", hash = "sha256:b8b9342681f55e1d35c56243431cbbaca4eb9812d6785f8de204af322104e066"},
]
[package.dependencies]
urllib3 = ">=1.26.0,<3"
[[package]]
name = "launchdarkly-server-sdk"
version = "9.11.1"
description = "LaunchDarkly SDK for Python"
optional = false
python-versions = ">=3.8"
groups = ["main"]
files = [
{file = "launchdarkly_server_sdk-9.11.1-py3-none-any.whl", hash = "sha256:128569cebf666dd115cc0ba03c48ff75f6acc9788301a7e2c3a54d06107e445a"},
{file = "launchdarkly_server_sdk-9.11.1.tar.gz", hash = "sha256:150e29656cb8c506d1967f3c59e62b69310d345ec27217640a6146dd1db5d250"},
]
[package.dependencies]
certifi = ">=2018.4.16"
expiringdict = ">=1.1.4"
launchdarkly-eventsource = ">=1.2.4,<2.0.0"
pyRFC3339 = ">=1.0"
semver = ">=2.10.2"
urllib3 = ">=1.26.0,<3"
[package.extras]
consul = ["python-consul (>=1.0.1)"]
dynamodb = ["boto3 (>=1.9.71)"]
redis = ["redis (>=2.10.5)"]
test-filesource = ["pyyaml (>=5.3.1)", "watchdog (>=3.0.0)"]
[[package]]
name = "multidict"
version = "6.1.0"
@@ -1238,19 +1315,19 @@ pyasn1 = ">=0.4.6,<0.7.0"
[[package]]
name = "pydantic"
version = "2.11.1"
version = "2.11.4"
description = "Data validation using Python type hints"
optional = false
python-versions = ">=3.9"
groups = ["main"]
files = [
{file = "pydantic-2.11.1-py3-none-any.whl", hash = "sha256:5b6c415eee9f8123a14d859be0c84363fec6b1feb6b688d6435801230b56e0b8"},
{file = "pydantic-2.11.1.tar.gz", hash = "sha256:442557d2910e75c991c39f4b4ab18963d57b9b55122c8b2a9cd176d8c29ce968"},
{file = "pydantic-2.11.4-py3-none-any.whl", hash = "sha256:d9615eaa9ac5a063471da949c8fc16376a84afb5024688b3ff885693506764eb"},
{file = "pydantic-2.11.4.tar.gz", hash = "sha256:32738d19d63a226a52eed76645a98ee07c1f410ee41d93b4afbfa85ed8111c2d"},
]
[package.dependencies]
annotated-types = ">=0.6.0"
pydantic-core = "2.33.0"
pydantic-core = "2.33.2"
typing-extensions = ">=4.12.2"
typing-inspection = ">=0.4.0"
@@ -1260,111 +1337,111 @@ timezone = ["tzdata ; python_version >= \"3.9\" and platform_system == \"Windows
[[package]]
name = "pydantic-core"
version = "2.33.0"
version = "2.33.2"
description = "Core functionality for Pydantic validation and serialization"
optional = false
python-versions = ">=3.9"
groups = ["main"]
files = [
{file = "pydantic_core-2.33.0-cp310-cp310-macosx_10_12_x86_64.whl", hash = "sha256:71dffba8fe9ddff628c68f3abd845e91b028361d43c5f8e7b3f8b91d7d85413e"},
{file = "pydantic_core-2.33.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:abaeec1be6ed535a5d7ffc2e6c390083c425832b20efd621562fbb5bff6dc518"},
{file = "pydantic_core-2.33.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:759871f00e26ad3709efc773ac37b4d571de065f9dfb1778012908bcc36b3a73"},
{file = "pydantic_core-2.33.0-cp310-cp310-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:dcfebee69cd5e1c0b76a17e17e347c84b00acebb8dd8edb22d4a03e88e82a207"},
{file = "pydantic_core-2.33.0-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:1b1262b912435a501fa04cd213720609e2cefa723a07c92017d18693e69bf00b"},
{file = "pydantic_core-2.33.0-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:4726f1f3f42d6a25678c67da3f0b10f148f5655813c5aca54b0d1742ba821b8f"},
{file = "pydantic_core-2.33.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e790954b5093dff1e3a9a2523fddc4e79722d6f07993b4cd5547825c3cbf97b5"},
{file = "pydantic_core-2.33.0-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:34e7fb3abe375b5c4e64fab75733d605dda0f59827752debc99c17cb2d5f3276"},
{file = "pydantic_core-2.33.0-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:ecb158fb9b9091b515213bed3061eb7deb1d3b4e02327c27a0ea714ff46b0760"},
{file = "pydantic_core-2.33.0-cp310-cp310-musllinux_1_1_armv7l.whl", hash = "sha256:4d9149e7528af8bbd76cc055967e6e04617dcb2a2afdaa3dea899406c5521faa"},
{file = "pydantic_core-2.33.0-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:e81a295adccf73477220e15ff79235ca9dcbcee4be459eb9d4ce9a2763b8386c"},
{file = "pydantic_core-2.33.0-cp310-cp310-win32.whl", hash = "sha256:f22dab23cdbce2005f26a8f0c71698457861f97fc6318c75814a50c75e87d025"},
{file = "pydantic_core-2.33.0-cp310-cp310-win_amd64.whl", hash = "sha256:9cb2390355ba084c1ad49485d18449b4242da344dea3e0fe10babd1f0db7dcfc"},
{file = "pydantic_core-2.33.0-cp311-cp311-macosx_10_12_x86_64.whl", hash = "sha256:a608a75846804271cf9c83e40bbb4dab2ac614d33c6fd5b0c6187f53f5c593ef"},
{file = "pydantic_core-2.33.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:e1c69aa459f5609dec2fa0652d495353accf3eda5bdb18782bc5a2ae45c9273a"},
{file = "pydantic_core-2.33.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b9ec80eb5a5f45a2211793f1c4aeddff0c3761d1c70d684965c1807e923a588b"},
{file = "pydantic_core-2.33.0-cp311-cp311-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:e925819a98318d17251776bd3d6aa9f3ff77b965762155bdad15d1a9265c4cfd"},
{file = "pydantic_core-2.33.0-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:5bf68bb859799e9cec3d9dd8323c40c00a254aabb56fe08f907e437005932f2b"},
{file = "pydantic_core-2.33.0-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:1b2ea72dea0825949a045fa4071f6d5b3d7620d2a208335207793cf29c5a182d"},
{file = "pydantic_core-2.33.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1583539533160186ac546b49f5cde9ffc928062c96920f58bd95de32ffd7bffd"},
{file = "pydantic_core-2.33.0-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:23c3e77bf8a7317612e5c26a3b084c7edeb9552d645742a54a5867635b4f2453"},
{file = "pydantic_core-2.33.0-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:a7a7f2a3f628d2f7ef11cb6188bcf0b9e1558151d511b974dfea10a49afe192b"},
{file = "pydantic_core-2.33.0-cp311-cp311-musllinux_1_1_armv7l.whl", hash = "sha256:f1fb026c575e16f673c61c7b86144517705865173f3d0907040ac30c4f9f5915"},
{file = "pydantic_core-2.33.0-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:635702b2fed997e0ac256b2cfbdb4dd0bf7c56b5d8fba8ef03489c03b3eb40e2"},
{file = "pydantic_core-2.33.0-cp311-cp311-win32.whl", hash = "sha256:07b4ced28fccae3f00626eaa0c4001aa9ec140a29501770a88dbbb0966019a86"},
{file = "pydantic_core-2.33.0-cp311-cp311-win_amd64.whl", hash = "sha256:4927564be53239a87770a5f86bdc272b8d1fbb87ab7783ad70255b4ab01aa25b"},
{file = "pydantic_core-2.33.0-cp311-cp311-win_arm64.whl", hash = "sha256:69297418ad644d521ea3e1aa2e14a2a422726167e9ad22b89e8f1130d68e1e9a"},
{file = "pydantic_core-2.33.0-cp312-cp312-macosx_10_12_x86_64.whl", hash = "sha256:6c32a40712e3662bebe524abe8abb757f2fa2000028d64cc5a1006016c06af43"},
{file = "pydantic_core-2.33.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:8ec86b5baa36f0a0bfb37db86c7d52652f8e8aa076ab745ef7725784183c3fdd"},
{file = "pydantic_core-2.33.0-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4deac83a8cc1d09e40683be0bc6d1fa4cde8df0a9bf0cda5693f9b0569ac01b6"},
{file = "pydantic_core-2.33.0-cp312-cp312-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:175ab598fb457a9aee63206a1993874badf3ed9a456e0654273e56f00747bbd6"},
{file = "pydantic_core-2.33.0-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:5f36afd0d56a6c42cf4e8465b6441cf546ed69d3a4ec92724cc9c8c61bd6ecf4"},
{file = "pydantic_core-2.33.0-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:0a98257451164666afafc7cbf5fb00d613e33f7e7ebb322fbcd99345695a9a61"},
{file = "pydantic_core-2.33.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ecc6d02d69b54a2eb83ebcc6f29df04957f734bcf309d346b4f83354d8376862"},
{file = "pydantic_core-2.33.0-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:1a69b7596c6603afd049ce7f3835bcf57dd3892fc7279f0ddf987bebed8caa5a"},
{file = "pydantic_core-2.33.0-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:ea30239c148b6ef41364c6f51d103c2988965b643d62e10b233b5efdca8c0099"},
{file = "pydantic_core-2.33.0-cp312-cp312-musllinux_1_1_armv7l.whl", hash = "sha256:abfa44cf2f7f7d7a199be6c6ec141c9024063205545aa09304349781b9a125e6"},
{file = "pydantic_core-2.33.0-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:20d4275f3c4659d92048c70797e5fdc396c6e4446caf517ba5cad2db60cd39d3"},
{file = "pydantic_core-2.33.0-cp312-cp312-win32.whl", hash = "sha256:918f2013d7eadea1d88d1a35fd4a1e16aaf90343eb446f91cb091ce7f9b431a2"},
{file = "pydantic_core-2.33.0-cp312-cp312-win_amd64.whl", hash = "sha256:aec79acc183865bad120b0190afac467c20b15289050648b876b07777e67ea48"},
{file = "pydantic_core-2.33.0-cp312-cp312-win_arm64.whl", hash = "sha256:5461934e895968655225dfa8b3be79e7e927e95d4bd6c2d40edd2fa7052e71b6"},
{file = "pydantic_core-2.33.0-cp313-cp313-macosx_10_12_x86_64.whl", hash = "sha256:f00e8b59e1fc8f09d05594aa7d2b726f1b277ca6155fc84c0396db1b373c4555"},
{file = "pydantic_core-2.33.0-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:1a73be93ecef45786d7d95b0c5e9b294faf35629d03d5b145b09b81258c7cd6d"},
{file = "pydantic_core-2.33.0-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ff48a55be9da6930254565ff5238d71d5e9cd8c5487a191cb85df3bdb8c77365"},
{file = "pydantic_core-2.33.0-cp313-cp313-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:26a4ea04195638dcd8c53dadb545d70badba51735b1594810e9768c2c0b4a5da"},
{file = "pydantic_core-2.33.0-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:41d698dcbe12b60661f0632b543dbb119e6ba088103b364ff65e951610cb7ce0"},
{file = "pydantic_core-2.33.0-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:ae62032ef513fe6281ef0009e30838a01057b832dc265da32c10469622613885"},
{file = "pydantic_core-2.33.0-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f225f3a3995dbbc26affc191d0443c6c4aa71b83358fd4c2b7d63e2f6f0336f9"},
{file = "pydantic_core-2.33.0-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:5bdd36b362f419c78d09630cbaebc64913f66f62bda6d42d5fbb08da8cc4f181"},
{file = "pydantic_core-2.33.0-cp313-cp313-musllinux_1_1_aarch64.whl", hash = "sha256:2a0147c0bef783fd9abc9f016d66edb6cac466dc54a17ec5f5ada08ff65caf5d"},
{file = "pydantic_core-2.33.0-cp313-cp313-musllinux_1_1_armv7l.whl", hash = "sha256:c860773a0f205926172c6644c394e02c25421dc9a456deff16f64c0e299487d3"},
{file = "pydantic_core-2.33.0-cp313-cp313-musllinux_1_1_x86_64.whl", hash = "sha256:138d31e3f90087f42aa6286fb640f3c7a8eb7bdae829418265e7e7474bd2574b"},
{file = "pydantic_core-2.33.0-cp313-cp313-win32.whl", hash = "sha256:d20cbb9d3e95114325780f3cfe990f3ecae24de7a2d75f978783878cce2ad585"},
{file = "pydantic_core-2.33.0-cp313-cp313-win_amd64.whl", hash = "sha256:ca1103d70306489e3d006b0f79db8ca5dd3c977f6f13b2c59ff745249431a606"},
{file = "pydantic_core-2.33.0-cp313-cp313-win_arm64.whl", hash = "sha256:6291797cad239285275558e0a27872da735b05c75d5237bbade8736f80e4c225"},
{file = "pydantic_core-2.33.0-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:7b79af799630af263eca9ec87db519426d8c9b3be35016eddad1832bac812d87"},
{file = "pydantic_core-2.33.0-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:eabf946a4739b5237f4f56d77fa6668263bc466d06a8036c055587c130a46f7b"},
{file = "pydantic_core-2.33.0-cp313-cp313t-win_amd64.whl", hash = "sha256:8a1d581e8cdbb857b0e0e81df98603376c1a5c34dc5e54039dcc00f043df81e7"},
{file = "pydantic_core-2.33.0-cp39-cp39-macosx_10_12_x86_64.whl", hash = "sha256:7c9c84749f5787781c1c45bb99f433402e484e515b40675a5d121ea14711cf61"},
{file = "pydantic_core-2.33.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:64672fa888595a959cfeff957a654e947e65bbe1d7d82f550417cbd6898a1d6b"},
{file = "pydantic_core-2.33.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:26bc7367c0961dec292244ef2549afa396e72e28cc24706210bd44d947582c59"},
{file = "pydantic_core-2.33.0-cp39-cp39-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:ce72d46eb201ca43994303025bd54d8a35a3fc2a3495fac653d6eb7205ce04f4"},
{file = "pydantic_core-2.33.0-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:14229c1504287533dbf6b1fc56f752ce2b4e9694022ae7509631ce346158de11"},
{file = "pydantic_core-2.33.0-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:085d8985b1c1e48ef271e98a658f562f29d89bda98bf120502283efbc87313eb"},
{file = "pydantic_core-2.33.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:31860fbda80d8f6828e84b4a4d129fd9c4535996b8249cfb8c720dc2a1a00bb8"},
{file = "pydantic_core-2.33.0-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:f200b2f20856b5a6c3a35f0d4e344019f805e363416e609e9b47c552d35fd5ea"},
{file = "pydantic_core-2.33.0-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:5f72914cfd1d0176e58ddc05c7a47674ef4222c8253bf70322923e73e14a4ac3"},
{file = "pydantic_core-2.33.0-cp39-cp39-musllinux_1_1_armv7l.whl", hash = "sha256:91301a0980a1d4530d4ba7e6a739ca1a6b31341252cb709948e0aca0860ce0ae"},
{file = "pydantic_core-2.33.0-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:7419241e17c7fbe5074ba79143d5523270e04f86f1b3a0dff8df490f84c8273a"},
{file = "pydantic_core-2.33.0-cp39-cp39-win32.whl", hash = "sha256:7a25493320203005d2a4dac76d1b7d953cb49bce6d459d9ae38e30dd9f29bc9c"},
{file = "pydantic_core-2.33.0-cp39-cp39-win_amd64.whl", hash = "sha256:82a4eba92b7ca8af1b7d5ef5f3d9647eee94d1f74d21ca7c21e3a2b92e008358"},
{file = "pydantic_core-2.33.0-pp310-pypy310_pp73-macosx_10_12_x86_64.whl", hash = "sha256:e2762c568596332fdab56b07060c8ab8362c56cf2a339ee54e491cd503612c50"},
{file = "pydantic_core-2.33.0-pp310-pypy310_pp73-macosx_11_0_arm64.whl", hash = "sha256:5bf637300ff35d4f59c006fff201c510b2b5e745b07125458a5389af3c0dff8c"},
{file = "pydantic_core-2.33.0-pp310-pypy310_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:62c151ce3d59ed56ebd7ce9ce5986a409a85db697d25fc232f8e81f195aa39a1"},
{file = "pydantic_core-2.33.0-pp310-pypy310_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9ee65f0cc652261744fd07f2c6e6901c914aa6c5ff4dcfaf1136bc394d0dd26b"},
{file = "pydantic_core-2.33.0-pp310-pypy310_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:024d136ae44d233e6322027bbf356712b3940bee816e6c948ce4b90f18471b3d"},
{file = "pydantic_core-2.33.0-pp310-pypy310_pp73-musllinux_1_1_aarch64.whl", hash = "sha256:e37f10f6d4bc67c58fbd727108ae1d8b92b397355e68519f1e4a7babb1473442"},
{file = "pydantic_core-2.33.0-pp310-pypy310_pp73-musllinux_1_1_armv7l.whl", hash = "sha256:502ed542e0d958bd12e7c3e9a015bce57deaf50eaa8c2e1c439b512cb9db1e3a"},
{file = "pydantic_core-2.33.0-pp310-pypy310_pp73-musllinux_1_1_x86_64.whl", hash = "sha256:715c62af74c236bf386825c0fdfa08d092ab0f191eb5b4580d11c3189af9d330"},
{file = "pydantic_core-2.33.0-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:bccc06fa0372151f37f6b69834181aa9eb57cf8665ed36405fb45fbf6cac3bae"},
{file = "pydantic_core-2.33.0-pp311-pypy311_pp73-macosx_10_12_x86_64.whl", hash = "sha256:5d8dc9f63a26f7259b57f46a7aab5af86b2ad6fbe48487500bb1f4b27e051e4c"},
{file = "pydantic_core-2.33.0-pp311-pypy311_pp73-macosx_11_0_arm64.whl", hash = "sha256:30369e54d6d0113d2aa5aee7a90d17f225c13d87902ace8fcd7bbf99b19124db"},
{file = "pydantic_core-2.33.0-pp311-pypy311_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f3eb479354c62067afa62f53bb387827bee2f75c9c79ef25eef6ab84d4b1ae3b"},
{file = "pydantic_core-2.33.0-pp311-pypy311_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0310524c833d91403c960b8a3cf9f46c282eadd6afd276c8c5edc617bd705dc9"},
{file = "pydantic_core-2.33.0-pp311-pypy311_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:eddb18a00bbb855325db27b4c2a89a4ba491cd6a0bd6d852b225172a1f54b36c"},
{file = "pydantic_core-2.33.0-pp311-pypy311_pp73-musllinux_1_1_aarch64.whl", hash = "sha256:ade5dbcf8d9ef8f4b28e682d0b29f3008df9842bb5ac48ac2c17bc55771cc976"},
{file = "pydantic_core-2.33.0-pp311-pypy311_pp73-musllinux_1_1_armv7l.whl", hash = "sha256:2c0afd34f928383e3fd25740f2050dbac9d077e7ba5adbaa2227f4d4f3c8da5c"},
{file = "pydantic_core-2.33.0-pp311-pypy311_pp73-musllinux_1_1_x86_64.whl", hash = "sha256:7da333f21cd9df51d5731513a6d39319892947604924ddf2e24a4612975fb936"},
{file = "pydantic_core-2.33.0-pp311-pypy311_pp73-win_amd64.whl", hash = "sha256:4b6d77c75a57f041c5ee915ff0b0bb58eabb78728b69ed967bc5b780e8f701b8"},
{file = "pydantic_core-2.33.0-pp39-pypy39_pp73-macosx_10_12_x86_64.whl", hash = "sha256:ba95691cf25f63df53c1d342413b41bd7762d9acb425df8858d7efa616c0870e"},
{file = "pydantic_core-2.33.0-pp39-pypy39_pp73-macosx_11_0_arm64.whl", hash = "sha256:4f1ab031feb8676f6bd7c85abec86e2935850bf19b84432c64e3e239bffeb1ec"},
{file = "pydantic_core-2.33.0-pp39-pypy39_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:58c1151827eef98b83d49b6ca6065575876a02d2211f259fb1a6b7757bd24dd8"},
{file = "pydantic_core-2.33.0-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a66d931ea2c1464b738ace44b7334ab32a2fd50be023d863935eb00f42be1778"},
{file = "pydantic_core-2.33.0-pp39-pypy39_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:0bcf0bab28995d483f6c8d7db25e0d05c3efa5cebfd7f56474359e7137f39856"},
{file = "pydantic_core-2.33.0-pp39-pypy39_pp73-musllinux_1_1_aarch64.whl", hash = "sha256:89670d7a0045acb52be0566df5bc8b114ac967c662c06cf5e0c606e4aadc964b"},
{file = "pydantic_core-2.33.0-pp39-pypy39_pp73-musllinux_1_1_armv7l.whl", hash = "sha256:b716294e721d8060908dbebe32639b01bfe61b15f9f57bcc18ca9a0e00d9520b"},
{file = "pydantic_core-2.33.0-pp39-pypy39_pp73-musllinux_1_1_x86_64.whl", hash = "sha256:fc53e05c16697ff0c1c7c2b98e45e131d4bfb78068fffff92a82d169cbb4c7b7"},
{file = "pydantic_core-2.33.0-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:68504959253303d3ae9406b634997a2123a0b0c1da86459abbd0ffc921695eac"},
{file = "pydantic_core-2.33.0.tar.gz", hash = "sha256:40eb8af662ba409c3cbf4a8150ad32ae73514cd7cb1f1a2113af39763dd616b3"},
{file = "pydantic_core-2.33.2-cp310-cp310-macosx_10_12_x86_64.whl", hash = "sha256:2b3d326aaef0c0399d9afffeb6367d5e26ddc24d351dbc9c636840ac355dc5d8"},
{file = "pydantic_core-2.33.2-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:0e5b2671f05ba48b94cb90ce55d8bdcaaedb8ba00cc5359f6810fc918713983d"},
{file = "pydantic_core-2.33.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0069c9acc3f3981b9ff4cdfaf088e98d83440a4c7ea1bc07460af3d4dc22e72d"},
{file = "pydantic_core-2.33.2-cp310-cp310-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:d53b22f2032c42eaaf025f7c40c2e3b94568ae077a606f006d206a463bc69572"},
{file = "pydantic_core-2.33.2-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:0405262705a123b7ce9f0b92f123334d67b70fd1f20a9372b907ce1080c7ba02"},
{file = "pydantic_core-2.33.2-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:4b25d91e288e2c4e0662b8038a28c6a07eaac3e196cfc4ff69de4ea3db992a1b"},
{file = "pydantic_core-2.33.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6bdfe4b3789761f3bcb4b1ddf33355a71079858958e3a552f16d5af19768fef2"},
{file = "pydantic_core-2.33.2-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:efec8db3266b76ef9607c2c4c419bdb06bf335ae433b80816089ea7585816f6a"},
{file = "pydantic_core-2.33.2-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:031c57d67ca86902726e0fae2214ce6770bbe2f710dc33063187a68744a5ecac"},
{file = "pydantic_core-2.33.2-cp310-cp310-musllinux_1_1_armv7l.whl", hash = "sha256:f8de619080e944347f5f20de29a975c2d815d9ddd8be9b9b7268e2e3ef68605a"},
{file = "pydantic_core-2.33.2-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:73662edf539e72a9440129f231ed3757faab89630d291b784ca99237fb94db2b"},
{file = "pydantic_core-2.33.2-cp310-cp310-win32.whl", hash = "sha256:0a39979dcbb70998b0e505fb1556a1d550a0781463ce84ebf915ba293ccb7e22"},
{file = "pydantic_core-2.33.2-cp310-cp310-win_amd64.whl", hash = "sha256:b0379a2b24882fef529ec3b4987cb5d003b9cda32256024e6fe1586ac45fc640"},
{file = "pydantic_core-2.33.2-cp311-cp311-macosx_10_12_x86_64.whl", hash = "sha256:4c5b0a576fb381edd6d27f0a85915c6daf2f8138dc5c267a57c08a62900758c7"},
{file = "pydantic_core-2.33.2-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:e799c050df38a639db758c617ec771fd8fb7a5f8eaaa4b27b101f266b216a246"},
{file = "pydantic_core-2.33.2-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:dc46a01bf8d62f227d5ecee74178ffc448ff4e5197c756331f71efcc66dc980f"},
{file = "pydantic_core-2.33.2-cp311-cp311-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:a144d4f717285c6d9234a66778059f33a89096dfb9b39117663fd8413d582dcc"},
{file = "pydantic_core-2.33.2-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:73cf6373c21bc80b2e0dc88444f41ae60b2f070ed02095754eb5a01df12256de"},
{file = "pydantic_core-2.33.2-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:3dc625f4aa79713512d1976fe9f0bc99f706a9dee21dfd1810b4bbbf228d0e8a"},
{file = "pydantic_core-2.33.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:881b21b5549499972441da4758d662aeea93f1923f953e9cbaff14b8b9565aef"},
{file = "pydantic_core-2.33.2-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:bdc25f3681f7b78572699569514036afe3c243bc3059d3942624e936ec93450e"},
{file = "pydantic_core-2.33.2-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:fe5b32187cbc0c862ee201ad66c30cf218e5ed468ec8dc1cf49dec66e160cc4d"},
{file = "pydantic_core-2.33.2-cp311-cp311-musllinux_1_1_armv7l.whl", hash = "sha256:bc7aee6f634a6f4a95676fcb5d6559a2c2a390330098dba5e5a5f28a2e4ada30"},
{file = "pydantic_core-2.33.2-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:235f45e5dbcccf6bd99f9f472858849f73d11120d76ea8707115415f8e5ebebf"},
{file = "pydantic_core-2.33.2-cp311-cp311-win32.whl", hash = "sha256:6368900c2d3ef09b69cb0b913f9f8263b03786e5b2a387706c5afb66800efd51"},
{file = "pydantic_core-2.33.2-cp311-cp311-win_amd64.whl", hash = "sha256:1e063337ef9e9820c77acc768546325ebe04ee38b08703244c1309cccc4f1bab"},
{file = "pydantic_core-2.33.2-cp311-cp311-win_arm64.whl", hash = "sha256:6b99022f1d19bc32a4c2a0d544fc9a76e3be90f0b3f4af413f87d38749300e65"},
{file = "pydantic_core-2.33.2-cp312-cp312-macosx_10_12_x86_64.whl", hash = "sha256:a7ec89dc587667f22b6a0b6579c249fca9026ce7c333fc142ba42411fa243cdc"},
{file = "pydantic_core-2.33.2-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:3c6db6e52c6d70aa0d00d45cdb9b40f0433b96380071ea80b09277dba021ddf7"},
{file = "pydantic_core-2.33.2-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4e61206137cbc65e6d5256e1166f88331d3b6238e082d9f74613b9b765fb9025"},
{file = "pydantic_core-2.33.2-cp312-cp312-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:eb8c529b2819c37140eb51b914153063d27ed88e3bdc31b71198a198e921e011"},
{file = "pydantic_core-2.33.2-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:c52b02ad8b4e2cf14ca7b3d918f3eb0ee91e63b3167c32591e57c4317e134f8f"},
{file = "pydantic_core-2.33.2-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:96081f1605125ba0855dfda83f6f3df5ec90c61195421ba72223de35ccfb2f88"},
{file = "pydantic_core-2.33.2-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8f57a69461af2a5fa6e6bbd7a5f60d3b7e6cebb687f55106933188e79ad155c1"},
{file = "pydantic_core-2.33.2-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:572c7e6c8bb4774d2ac88929e3d1f12bc45714ae5ee6d9a788a9fb35e60bb04b"},
{file = "pydantic_core-2.33.2-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:db4b41f9bd95fbe5acd76d89920336ba96f03e149097365afe1cb092fceb89a1"},
{file = "pydantic_core-2.33.2-cp312-cp312-musllinux_1_1_armv7l.whl", hash = "sha256:fa854f5cf7e33842a892e5c73f45327760bc7bc516339fda888c75ae60edaeb6"},
{file = "pydantic_core-2.33.2-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:5f483cfb75ff703095c59e365360cb73e00185e01aaea067cd19acffd2ab20ea"},
{file = "pydantic_core-2.33.2-cp312-cp312-win32.whl", hash = "sha256:9cb1da0f5a471435a7bc7e439b8a728e8b61e59784b2af70d7c169f8dd8ae290"},
{file = "pydantic_core-2.33.2-cp312-cp312-win_amd64.whl", hash = "sha256:f941635f2a3d96b2973e867144fde513665c87f13fe0e193c158ac51bfaaa7b2"},
{file = "pydantic_core-2.33.2-cp312-cp312-win_arm64.whl", hash = "sha256:cca3868ddfaccfbc4bfb1d608e2ccaaebe0ae628e1416aeb9c4d88c001bb45ab"},
{file = "pydantic_core-2.33.2-cp313-cp313-macosx_10_12_x86_64.whl", hash = "sha256:1082dd3e2d7109ad8b7da48e1d4710c8d06c253cbc4a27c1cff4fbcaa97a9e3f"},
{file = "pydantic_core-2.33.2-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:f517ca031dfc037a9c07e748cefd8d96235088b83b4f4ba8939105d20fa1dcd6"},
{file = "pydantic_core-2.33.2-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0a9f2c9dd19656823cb8250b0724ee9c60a82f3cdf68a080979d13092a3b0fef"},
{file = "pydantic_core-2.33.2-cp313-cp313-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:2b0a451c263b01acebe51895bfb0e1cc842a5c666efe06cdf13846c7418caa9a"},
{file = "pydantic_core-2.33.2-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:1ea40a64d23faa25e62a70ad163571c0b342b8bf66d5fa612ac0dec4f069d916"},
{file = "pydantic_core-2.33.2-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:0fb2d542b4d66f9470e8065c5469ec676978d625a8b7a363f07d9a501a9cb36a"},
{file = "pydantic_core-2.33.2-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9fdac5d6ffa1b5a83bca06ffe7583f5576555e6c8b3a91fbd25ea7780f825f7d"},
{file = "pydantic_core-2.33.2-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:04a1a413977ab517154eebb2d326da71638271477d6ad87a769102f7c2488c56"},
{file = "pydantic_core-2.33.2-cp313-cp313-musllinux_1_1_aarch64.whl", hash = "sha256:c8e7af2f4e0194c22b5b37205bfb293d166a7344a5b0d0eaccebc376546d77d5"},
{file = "pydantic_core-2.33.2-cp313-cp313-musllinux_1_1_armv7l.whl", hash = "sha256:5c92edd15cd58b3c2d34873597a1e20f13094f59cf88068adb18947df5455b4e"},
{file = "pydantic_core-2.33.2-cp313-cp313-musllinux_1_1_x86_64.whl", hash = "sha256:65132b7b4a1c0beded5e057324b7e16e10910c106d43675d9bd87d4f38dde162"},
{file = "pydantic_core-2.33.2-cp313-cp313-win32.whl", hash = "sha256:52fb90784e0a242bb96ec53f42196a17278855b0f31ac7c3cc6f5c1ec4811849"},
{file = "pydantic_core-2.33.2-cp313-cp313-win_amd64.whl", hash = "sha256:c083a3bdd5a93dfe480f1125926afcdbf2917ae714bdb80b36d34318b2bec5d9"},
{file = "pydantic_core-2.33.2-cp313-cp313-win_arm64.whl", hash = "sha256:e80b087132752f6b3d714f041ccf74403799d3b23a72722ea2e6ba2e892555b9"},
{file = "pydantic_core-2.33.2-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:61c18fba8e5e9db3ab908620af374db0ac1baa69f0f32df4f61ae23f15e586ac"},
{file = "pydantic_core-2.33.2-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:95237e53bb015f67b63c91af7518a62a8660376a6a0db19b89acc77a4d6199f5"},
{file = "pydantic_core-2.33.2-cp313-cp313t-win_amd64.whl", hash = "sha256:c2fc0a768ef76c15ab9238afa6da7f69895bb5d1ee83aeea2e3509af4472d0b9"},
{file = "pydantic_core-2.33.2-cp39-cp39-macosx_10_12_x86_64.whl", hash = "sha256:a2b911a5b90e0374d03813674bf0a5fbbb7741570dcd4b4e85a2e48d17def29d"},
{file = "pydantic_core-2.33.2-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:6fa6dfc3e4d1f734a34710f391ae822e0a8eb8559a85c6979e14e65ee6ba2954"},
{file = "pydantic_core-2.33.2-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c54c939ee22dc8e2d545da79fc5381f1c020d6d3141d3bd747eab59164dc89fb"},
{file = "pydantic_core-2.33.2-cp39-cp39-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:53a57d2ed685940a504248187d5685e49eb5eef0f696853647bf37c418c538f7"},
{file = "pydantic_core-2.33.2-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:09fb9dd6571aacd023fe6aaca316bd01cf60ab27240d7eb39ebd66a3a15293b4"},
{file = "pydantic_core-2.33.2-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:0e6116757f7959a712db11f3e9c0a99ade00a5bbedae83cb801985aa154f071b"},
{file = "pydantic_core-2.33.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8d55ab81c57b8ff8548c3e4947f119551253f4e3787a7bbc0b6b3ca47498a9d3"},
{file = "pydantic_core-2.33.2-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:c20c462aa4434b33a2661701b861604913f912254e441ab8d78d30485736115a"},
{file = "pydantic_core-2.33.2-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:44857c3227d3fb5e753d5fe4a3420d6376fa594b07b621e220cd93703fe21782"},
{file = "pydantic_core-2.33.2-cp39-cp39-musllinux_1_1_armv7l.whl", hash = "sha256:eb9b459ca4df0e5c87deb59d37377461a538852765293f9e6ee834f0435a93b9"},
{file = "pydantic_core-2.33.2-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:9fcd347d2cc5c23b06de6d3b7b8275be558a0c90549495c699e379a80bf8379e"},
{file = "pydantic_core-2.33.2-cp39-cp39-win32.whl", hash = "sha256:83aa99b1285bc8f038941ddf598501a86f1536789740991d7d8756e34f1e74d9"},
{file = "pydantic_core-2.33.2-cp39-cp39-win_amd64.whl", hash = "sha256:f481959862f57f29601ccced557cc2e817bce7533ab8e01a797a48b49c9692b3"},
{file = "pydantic_core-2.33.2-pp310-pypy310_pp73-macosx_10_12_x86_64.whl", hash = "sha256:5c4aa4e82353f65e548c476b37e64189783aa5384903bfea4f41580f255fddfa"},
{file = "pydantic_core-2.33.2-pp310-pypy310_pp73-macosx_11_0_arm64.whl", hash = "sha256:d946c8bf0d5c24bf4fe333af284c59a19358aa3ec18cb3dc4370080da1e8ad29"},
{file = "pydantic_core-2.33.2-pp310-pypy310_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:87b31b6846e361ef83fedb187bb5b4372d0da3f7e28d85415efa92d6125d6e6d"},
{file = "pydantic_core-2.33.2-pp310-pypy310_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:aa9d91b338f2df0508606f7009fde642391425189bba6d8c653afd80fd6bb64e"},
{file = "pydantic_core-2.33.2-pp310-pypy310_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:2058a32994f1fde4ca0480ab9d1e75a0e8c87c22b53a3ae66554f9af78f2fe8c"},
{file = "pydantic_core-2.33.2-pp310-pypy310_pp73-musllinux_1_1_aarch64.whl", hash = "sha256:0e03262ab796d986f978f79c943fc5f620381be7287148b8010b4097f79a39ec"},
{file = "pydantic_core-2.33.2-pp310-pypy310_pp73-musllinux_1_1_armv7l.whl", hash = "sha256:1a8695a8d00c73e50bff9dfda4d540b7dee29ff9b8053e38380426a85ef10052"},
{file = "pydantic_core-2.33.2-pp310-pypy310_pp73-musllinux_1_1_x86_64.whl", hash = "sha256:fa754d1850735a0b0e03bcffd9d4b4343eb417e47196e4485d9cca326073a42c"},
{file = "pydantic_core-2.33.2-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:a11c8d26a50bfab49002947d3d237abe4d9e4b5bdc8846a63537b6488e197808"},
{file = "pydantic_core-2.33.2-pp311-pypy311_pp73-macosx_10_12_x86_64.whl", hash = "sha256:dd14041875d09cc0f9308e37a6f8b65f5585cf2598a53aa0123df8b129d481f8"},
{file = "pydantic_core-2.33.2-pp311-pypy311_pp73-macosx_11_0_arm64.whl", hash = "sha256:d87c561733f66531dced0da6e864f44ebf89a8fba55f31407b00c2f7f9449593"},
{file = "pydantic_core-2.33.2-pp311-pypy311_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2f82865531efd18d6e07a04a17331af02cb7a651583c418df8266f17a63c6612"},
{file = "pydantic_core-2.33.2-pp311-pypy311_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2bfb5112df54209d820d7bf9317c7a6c9025ea52e49f46b6a2060104bba37de7"},
{file = "pydantic_core-2.33.2-pp311-pypy311_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:64632ff9d614e5eecfb495796ad51b0ed98c453e447a76bcbeeb69615079fc7e"},
{file = "pydantic_core-2.33.2-pp311-pypy311_pp73-musllinux_1_1_aarch64.whl", hash = "sha256:f889f7a40498cc077332c7ab6b4608d296d852182211787d4f3ee377aaae66e8"},
{file = "pydantic_core-2.33.2-pp311-pypy311_pp73-musllinux_1_1_armv7l.whl", hash = "sha256:de4b83bb311557e439b9e186f733f6c645b9417c84e2eb8203f3f820a4b988bf"},
{file = "pydantic_core-2.33.2-pp311-pypy311_pp73-musllinux_1_1_x86_64.whl", hash = "sha256:82f68293f055f51b51ea42fafc74b6aad03e70e191799430b90c13d643059ebb"},
{file = "pydantic_core-2.33.2-pp311-pypy311_pp73-win_amd64.whl", hash = "sha256:329467cecfb529c925cf2bbd4d60d2c509bc2fb52a20c1045bf09bb70971a9c1"},
{file = "pydantic_core-2.33.2-pp39-pypy39_pp73-macosx_10_12_x86_64.whl", hash = "sha256:87acbfcf8e90ca885206e98359d7dca4bcbb35abdc0ff66672a293e1d7a19101"},
{file = "pydantic_core-2.33.2-pp39-pypy39_pp73-macosx_11_0_arm64.whl", hash = "sha256:7f92c15cd1e97d4b12acd1cc9004fa092578acfa57b67ad5e43a197175d01a64"},
{file = "pydantic_core-2.33.2-pp39-pypy39_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d3f26877a748dc4251cfcfda9dfb5f13fcb034f5308388066bcfe9031b63ae7d"},
{file = "pydantic_core-2.33.2-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:dac89aea9af8cd672fa7b510e7b8c33b0bba9a43186680550ccf23020f32d535"},
{file = "pydantic_core-2.33.2-pp39-pypy39_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:970919794d126ba8645f3837ab6046fb4e72bbc057b3709144066204c19a455d"},
{file = "pydantic_core-2.33.2-pp39-pypy39_pp73-musllinux_1_1_aarch64.whl", hash = "sha256:3eb3fe62804e8f859c49ed20a8451342de53ed764150cb14ca71357c765dc2a6"},
{file = "pydantic_core-2.33.2-pp39-pypy39_pp73-musllinux_1_1_armv7l.whl", hash = "sha256:3abcd9392a36025e3bd55f9bd38d908bd17962cc49bc6da8e7e96285336e2bca"},
{file = "pydantic_core-2.33.2-pp39-pypy39_pp73-musllinux_1_1_x86_64.whl", hash = "sha256:3a1c81334778f9e3af2f8aeb7a960736e5cab1dfebfb26aabca09afd2906c039"},
{file = "pydantic_core-2.33.2-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:2807668ba86cb38c6817ad9bc66215ab8584d1d304030ce4f0887336f28a5e27"},
{file = "pydantic_core-2.33.2.tar.gz", hash = "sha256:7cb8bc3605c29176e1b105350d2e6474142d7c1bd1d9327c4a9bdb46bf827acc"},
]
[package.dependencies]
@@ -1372,22 +1449,25 @@ typing-extensions = ">=4.6.0,<4.7.0 || >4.7.0"
[[package]]
name = "pydantic-settings"
version = "2.8.1"
version = "2.9.1"
description = "Settings management using Pydantic"
optional = false
python-versions = ">=3.8"
python-versions = ">=3.9"
groups = ["main"]
files = [
{file = "pydantic_settings-2.8.1-py3-none-any.whl", hash = "sha256:81942d5ac3d905f7f3ee1a70df5dfb62d5569c12f51a5a647defc1c3d9ee2e9c"},
{file = "pydantic_settings-2.8.1.tar.gz", hash = "sha256:d5c663dfbe9db9d5e1c646b2e161da12f0d734d422ee56f567d0ea2cee4e8585"},
{file = "pydantic_settings-2.9.1-py3-none-any.whl", hash = "sha256:59b4f431b1defb26fe620c71a7d3968a710d719f5f4cdbbdb7926edeb770f6ef"},
{file = "pydantic_settings-2.9.1.tar.gz", hash = "sha256:c509bf79d27563add44e8446233359004ed85066cd096d8b510f715e6ef5d268"},
]
[package.dependencies]
pydantic = ">=2.7.0"
python-dotenv = ">=0.21.0"
typing-inspection = ">=0.4.0"
[package.extras]
aws-secrets-manager = ["boto3 (>=1.35.0)", "boto3-stubs[secretsmanager]"]
azure-key-vault = ["azure-identity (>=1.16.0)", "azure-keyvault-secrets (>=4.8.0)"]
gcp-secret-manager = ["google-cloud-secret-manager (>=2.23.1)"]
toml = ["tomli (>=2.0.1)"]
yaml = ["pyyaml (>=6.0.1)"]
@@ -1409,6 +1489,18 @@ dev = ["coverage[toml] (==5.0.4)", "cryptography (>=3.4.0)", "pre-commit", "pyte
docs = ["sphinx", "sphinx-rtd-theme", "zope.interface"]
tests = ["coverage[toml] (==5.0.4)", "pytest (>=6.0.0,<7.0.0)"]
[[package]]
name = "pyrfc3339"
version = "2.0.1"
description = "Generate and parse RFC 3339 timestamps"
optional = false
python-versions = "*"
groups = ["main"]
files = [
{file = "pyRFC3339-2.0.1-py3-none-any.whl", hash = "sha256:30b70a366acac3df7386b558c21af871522560ed7f3f73cf344b8c2cbb8b0c9d"},
{file = "pyrfc3339-2.0.1.tar.gz", hash = "sha256:e47843379ea35c1296c3b6c67a948a1a490ae0584edfcbdea0eaffb5dd29960b"},
]
[[package]]
name = "pytest"
version = "8.3.3"
@@ -1575,30 +1667,42 @@ pyasn1 = ">=0.1.3"
[[package]]
name = "ruff"
version = "0.11.2"
version = "0.11.10"
description = "An extremely fast Python linter and code formatter, written in Rust."
optional = false
python-versions = ">=3.7"
groups = ["dev"]
files = [
{file = "ruff-0.11.2-py3-none-linux_armv6l.whl", hash = "sha256:c69e20ea49e973f3afec2c06376eb56045709f0212615c1adb0eda35e8a4e477"},
{file = "ruff-0.11.2-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:2c5424cc1c4eb1d8ecabe6d4f1b70470b4f24a0c0171356290b1953ad8f0e272"},
{file = "ruff-0.11.2-py3-none-macosx_11_0_arm64.whl", hash = "sha256:ecf20854cc73f42171eedb66f006a43d0a21bfb98a2523a809931cda569552d9"},
{file = "ruff-0.11.2-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0c543bf65d5d27240321604cee0633a70c6c25c9a2f2492efa9f6d4b8e4199bb"},
{file = "ruff-0.11.2-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:20967168cc21195db5830b9224be0e964cc9c8ecf3b5a9e3ce19876e8d3a96e3"},
{file = "ruff-0.11.2-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:955a9ce63483999d9f0b8f0b4a3ad669e53484232853054cc8b9d51ab4c5de74"},
{file = "ruff-0.11.2-py3-none-manylinux_2_17_ppc64.manylinux2014_ppc64.whl", hash = "sha256:86b3a27c38b8fce73bcd262b0de32e9a6801b76d52cdb3ae4c914515f0cef608"},
{file = "ruff-0.11.2-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:a3b66a03b248c9fcd9d64d445bafdf1589326bee6fc5c8e92d7562e58883e30f"},
{file = "ruff-0.11.2-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:0397c2672db015be5aa3d4dac54c69aa012429097ff219392c018e21f5085147"},
{file = "ruff-0.11.2-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:869bcf3f9abf6457fbe39b5a37333aa4eecc52a3b99c98827ccc371a8e5b6f1b"},
{file = "ruff-0.11.2-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:2a2b50ca35457ba785cd8c93ebbe529467594087b527a08d487cf0ee7b3087e9"},
{file = "ruff-0.11.2-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:7c69c74bf53ddcfbc22e6eb2f31211df7f65054bfc1f72288fc71e5f82db3eab"},
{file = "ruff-0.11.2-py3-none-musllinux_1_2_i686.whl", hash = "sha256:6e8fb75e14560f7cf53b15bbc55baf5ecbe373dd5f3aab96ff7aa7777edd7630"},
{file = "ruff-0.11.2-py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:842a472d7b4d6f5924e9297aa38149e5dcb1e628773b70e6387ae2c97a63c58f"},
{file = "ruff-0.11.2-py3-none-win32.whl", hash = "sha256:aca01ccd0eb5eb7156b324cfaa088586f06a86d9e5314b0eb330cb48415097cc"},
{file = "ruff-0.11.2-py3-none-win_amd64.whl", hash = "sha256:3170150172a8f994136c0c66f494edf199a0bbea7a409f649e4bc8f4d7084080"},
{file = "ruff-0.11.2-py3-none-win_arm64.whl", hash = "sha256:52933095158ff328f4c77af3d74f0379e34fd52f175144cefc1b192e7ccd32b4"},
{file = "ruff-0.11.2.tar.gz", hash = "sha256:ec47591497d5a1050175bdf4e1a4e6272cddff7da88a2ad595e1e326041d8d94"},
{file = "ruff-0.11.10-py3-none-linux_armv6l.whl", hash = "sha256:859a7bfa7bc8888abbea31ef8a2b411714e6a80f0d173c2a82f9041ed6b50f58"},
{file = "ruff-0.11.10-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:968220a57e09ea5e4fd48ed1c646419961a0570727c7e069842edd018ee8afed"},
{file = "ruff-0.11.10-py3-none-macosx_11_0_arm64.whl", hash = "sha256:1067245bad978e7aa7b22f67113ecc6eb241dca0d9b696144256c3a879663bca"},
{file = "ruff-0.11.10-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f4854fd09c7aed5b1590e996a81aeff0c9ff51378b084eb5a0b9cd9518e6cff2"},
{file = "ruff-0.11.10-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:8b4564e9f99168c0f9195a0fd5fa5928004b33b377137f978055e40008a082c5"},
{file = "ruff-0.11.10-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:5b6a9cc5b62c03cc1fea0044ed8576379dbaf751d5503d718c973d5418483641"},
{file = "ruff-0.11.10-py3-none-manylinux_2_17_ppc64.manylinux2014_ppc64.whl", hash = "sha256:607ecbb6f03e44c9e0a93aedacb17b4eb4f3563d00e8b474298a201622677947"},
{file = "ruff-0.11.10-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:7b3a522fa389402cd2137df9ddefe848f727250535c70dafa840badffb56b7a4"},
{file = "ruff-0.11.10-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:2f071b0deed7e9245d5820dac235cbdd4ef99d7b12ff04c330a241ad3534319f"},
{file = "ruff-0.11.10-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4a60e3a0a617eafba1f2e4186d827759d65348fa53708ca547e384db28406a0b"},
{file = "ruff-0.11.10-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:da8ec977eaa4b7bf75470fb575bea2cb41a0e07c7ea9d5a0a97d13dbca697bf2"},
{file = "ruff-0.11.10-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:ddf8967e08227d1bd95cc0851ef80d2ad9c7c0c5aab1eba31db49cf0a7b99523"},
{file = "ruff-0.11.10-py3-none-musllinux_1_2_i686.whl", hash = "sha256:5a94acf798a82db188f6f36575d80609072b032105d114b0f98661e1679c9125"},
{file = "ruff-0.11.10-py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:3afead355f1d16d95630df28d4ba17fb2cb9c8dfac8d21ced14984121f639bad"},
{file = "ruff-0.11.10-py3-none-win32.whl", hash = "sha256:dc061a98d32a97211af7e7f3fa1d4ca2fcf919fb96c28f39551f35fc55bdbc19"},
{file = "ruff-0.11.10-py3-none-win_amd64.whl", hash = "sha256:5cc725fbb4d25b0f185cb42df07ab6b76c4489b4bfb740a175f3a59c70e8a224"},
{file = "ruff-0.11.10-py3-none-win_arm64.whl", hash = "sha256:ef69637b35fb8b210743926778d0e45e1bffa850a7c61e428c6b971549b5f5d1"},
{file = "ruff-0.11.10.tar.gz", hash = "sha256:d522fb204b4959909ecac47da02830daec102eeb100fb50ea9554818d47a5fa6"},
]
[[package]]
name = "semver"
version = "3.0.4"
description = "Python helper for Semantic Versioning (https://semver.org)"
optional = false
python-versions = ">=3.7"
groups = ["main"]
files = [
{file = "semver-3.0.4-py3-none-any.whl", hash = "sha256:9c824d87ba7f7ab4a1890799cec8596f15c1241cb473404ea1cb0c55e4b04746"},
{file = "semver-3.0.4.tar.gz", hash = "sha256:afc7d8c584a5ed0a11033af086e8af226a9c0b206f313e0301f8dd7b6b589602"},
]
[[package]]
@@ -1625,6 +1729,24 @@ files = [
{file = "sniffio-1.3.1.tar.gz", hash = "sha256:f4324edc670a0f49750a81b895f35c3adb843cca46f0530f79fc1babb23789dc"},
]
[[package]]
name = "starlette"
version = "0.46.2"
description = "The little ASGI library that shines."
optional = false
python-versions = ">=3.9"
groups = ["main"]
files = [
{file = "starlette-0.46.2-py3-none-any.whl", hash = "sha256:595633ce89f8ffa71a015caed34a5b2dc1c0cdb3f0f1fbd1e69339cf2abeec35"},
{file = "starlette-0.46.2.tar.gz", hash = "sha256:7f7361f34eed179294600af672f565727419830b54b7b084efe44bb82d2fccd5"},
]
[package.dependencies]
anyio = ">=3.6.2,<5"
[package.extras]
full = ["httpx (>=0.27.0,<0.29.0)", "itsdangerous", "jinja2", "python-multipart (>=0.0.18)", "pyyaml"]
[[package]]
name = "storage3"
version = "0.11.0"
@@ -1660,14 +1782,14 @@ test = ["pylint", "pytest", "pytest-black", "pytest-cov", "pytest-pylint"]
[[package]]
name = "supabase"
version = "2.15.0"
version = "2.15.1"
description = "Supabase client for Python."
optional = false
python-versions = "<4.0,>=3.9"
groups = ["main"]
files = [
{file = "supabase-2.15.0-py3-none-any.whl", hash = "sha256:a665c7ab6c8ad1d80609ab62ad657f66fdaf38070ec9e0db5c7887fd72b109c0"},
{file = "supabase-2.15.0.tar.gz", hash = "sha256:2e66289ad74ae9c4cb04a69f9de00cd2ce880cd890de23269a40ac5b69151d26"},
{file = "supabase-2.15.1-py3-none-any.whl", hash = "sha256:749299cdd74ecf528f52045c1e60d9dba81cc2054656f754c0ca7fba0dd34827"},
{file = "supabase-2.15.1.tar.gz", hash = "sha256:66e847dab9346062aa6a25b4e81ac786b972c5d4299827c57d1d5bd6a0346070"},
]
[package.dependencies]
@@ -1752,6 +1874,26 @@ h2 = ["h2 (>=4,<5)"]
socks = ["pysocks (>=1.5.6,!=1.5.7,<2.0)"]
zstd = ["zstandard (>=0.18.0)"]
[[package]]
name = "uvicorn"
version = "0.34.3"
description = "The lightning-fast ASGI server."
optional = false
python-versions = ">=3.9"
groups = ["main"]
files = [
{file = "uvicorn-0.34.3-py3-none-any.whl", hash = "sha256:16246631db62bdfbf069b0645177d6e8a77ba950cfedbfd093acef9444e4d885"},
{file = "uvicorn-0.34.3.tar.gz", hash = "sha256:35919a9a979d7a59334b6b10e05d77c1d0d574c50e0fc98b8b1a0f165708b55a"},
]
[package.dependencies]
click = ">=7.0"
h11 = ">=0.8"
typing-extensions = {version = ">=4.0", markers = "python_version < \"3.11\""}
[package.extras]
standard = ["colorama (>=0.4) ; sys_platform == \"win32\"", "httptools (>=0.6.3)", "python-dotenv (>=0.13)", "pyyaml (>=5.1)", "uvloop (>=0.15.1) ; sys_platform != \"win32\" and sys_platform != \"cygwin\" and platform_python_implementation != \"PyPy\"", "watchfiles (>=0.13)", "websockets (>=10.4)"]
[[package]]
name = "websockets"
version = "12.0"
@@ -2034,4 +2176,4 @@ type = ["pytest-mypy"]
[metadata]
lock-version = "2.1"
python-versions = ">=3.10,<4.0"
content-hash = "c8e23c0609cae0717447f575849b658bee9203b784ec7270b62629cddbbbd9ca"
content-hash = "d92143928a88ca3a56ac200c335910eafac938940022fed8bd0d17c95040b54f"


@@ -7,20 +7,23 @@ readme = "README.md"
packages = [{ include = "autogpt_libs" }]
[tool.poetry.dependencies]
python = ">=3.10,<4.0"
colorama = "^0.4.6"
expiringdict = "^1.2.2"
google-cloud-logging = "^3.11.4"
pydantic = "^2.11.1"
pydantic-settings = "^2.8.1"
google-cloud-logging = "^3.12.1"
pydantic = "^2.11.4"
pydantic-settings = "^2.9.1"
pyjwt = "^2.10.1"
pytest-asyncio = "^0.26.0"
pytest-mock = "^3.14.0"
python = ">=3.10,<4.0"
supabase = "^2.15.0"
supabase = "^2.15.1"
launchdarkly-server-sdk = "^9.11.1"
fastapi = "^0.115.12"
uvicorn = "^0.34.3"
[tool.poetry.group.dev.dependencies]
redis = "^5.2.1"
ruff = "^0.11.0"
ruff = "^0.11.10"
[build-system]
requires = ["poetry-core"]


@@ -13,7 +13,6 @@ PRISMA_SCHEMA="postgres/schema.prisma"
# EXECUTOR
NUM_GRAPH_WORKERS=10
NUM_NODE_WORKERS=3
BACKEND_CORS_ALLOW_ORIGINS=["http://localhost:3000"]
@@ -66,6 +65,13 @@ MEDIA_GCS_BUCKET_NAME=
## and tunnel it to your locally running backend.
PLATFORM_BASE_URL=http://localhost:3000
## Cloudflare Turnstile (CAPTCHA) Configuration
## Get these from the Cloudflare Turnstile dashboard: https://dash.cloudflare.com/?to=/:account/turnstile
## This is the backend secret key
TURNSTILE_SECRET_KEY=
## This is the verify URL
TURNSTILE_VERIFY_URL=https://challenges.cloudflare.com/turnstile/v0/siteverify
## == INTEGRATION CREDENTIALS == ##
# Each set of server side credentials is required for the corresponding 3rd party
# integration to work.
@@ -120,8 +126,10 @@ TODOIST_CLIENT_SECRET=
# LLM
OPENAI_API_KEY=
ANTHROPIC_API_KEY=
AIML_API_KEY=
GROQ_API_KEY=
OPEN_ROUTER_API_KEY=
LLAMA_API_KEY=
# Reddit
# Go to https://www.reddit.com/prefs/apps and create a new app


@@ -0,0 +1,237 @@
# Backend Testing Guide
This guide covers testing practices for the AutoGPT Platform backend, with a focus on snapshot testing for API endpoints.
## Table of Contents
- [Overview](#overview)
- [Running Tests](#running-tests)
- [Snapshot Testing](#snapshot-testing)
- [Writing Tests for API Routes](#writing-tests-for-api-routes)
- [Best Practices](#best-practices)
## Overview
The backend uses pytest for testing with the following key libraries:
- `pytest` - Test framework
- `pytest-asyncio` - Async test support
- `pytest-mock` - Mocking support
- `pytest-snapshot` - Snapshot testing for API responses
## Running Tests
### Run all tests
```bash
poetry run test
```
### Run specific test file
```bash
poetry run pytest path/to/test_file.py
```
### Run with verbose output
```bash
poetry run pytest -v
```
### Run with coverage
```bash
poetry run pytest --cov=backend
```
## Snapshot Testing
Snapshot testing captures the output of your code and compares it against previously saved snapshots. This is particularly useful for testing API responses.
### How Snapshot Testing Works
1. First run: Creates snapshot files in `snapshots/` directories
2. Subsequent runs: Compares output against saved snapshots
3. Changes detected: Test fails if output differs from snapshot
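Concretely, with `snapshot.snapshot_dir = "snapshots"` and a snapshot name of `"endpoint_response"` (as in the example below), `pytest-snapshot` stores the expected output as a plain file that is committed alongside the tests. The layout shown here is illustrative:
```
snapshots/
└── endpoint_response   # written by --snapshot-update, compared on normal runs
```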
### Creating/Updating Snapshots
When you first write a test or when the expected output changes:
```bash
poetry run pytest path/to/test.py --snapshot-update
```
⚠️ **Important**: Always review snapshot changes before committing! Use `git diff` to verify the changes are expected.
### Snapshot Test Example
```python
import json
from pytest_snapshot.plugin import Snapshot
def test_api_endpoint(snapshot: Snapshot):
response = client.get("/api/endpoint")
# Snapshot the response
snapshot.snapshot_dir = "snapshots"
snapshot.assert_match(
json.dumps(response.json(), indent=2, sort_keys=True),
"endpoint_response"
)
```
### Best Practices for Snapshots
1. **Use descriptive names**: `"user_list_response"` not `"response1"`
2. **Sort JSON keys**: Ensures consistent snapshots
3. **Format JSON**: Use `indent=2` for readable diffs
4. **Exclude dynamic data**: Remove timestamps, IDs, etc. that change between runs
Example of excluding dynamic data:
```python
response_data = response.json()
# Remove dynamic fields for snapshot
response_data.pop("created_at", None)
response_data.pop("id", None)
snapshot.snapshot_dir = "snapshots"
snapshot.assert_match(
json.dumps(response_data, indent=2, sort_keys=True),
"static_response_data"
)
```
## Writing Tests for API Routes
### Basic Structure
```python
import json
import fastapi
import fastapi.testclient
import pytest
from pytest_snapshot.plugin import Snapshot
from backend.server.v2.myroute import router
app = fastapi.FastAPI()
app.include_router(router)
client = fastapi.testclient.TestClient(app)
def test_endpoint_success(snapshot: Snapshot):
response = client.get("/endpoint")
assert response.status_code == 200
# Test specific fields
data = response.json()
assert data["status"] == "success"
# Snapshot the full response
snapshot.snapshot_dir = "snapshots"
snapshot.assert_match(
json.dumps(data, indent=2, sort_keys=True),
"endpoint_success_response"
)
```
### Testing with Authentication
```python
def override_auth_middleware():
return {"sub": "test-user-id"}
def override_get_user_id():
return "test-user-id"
app.dependency_overrides[auth_middleware] = override_auth_middleware
app.dependency_overrides[get_user_id] = override_get_user_id
```
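FastAPI's `app.dependency_overrides` swaps the real dependencies for these stubs, so routes that require authentication run as the fake `test-user-id` user without needing real tokens.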
### Mocking External Services
```python
def test_external_api_call(mocker, snapshot):
# Mock external service
mock_response = {"external": "data"}
mocker.patch(
"backend.services.external_api.call",
return_value=mock_response
)
response = client.post("/api/process")
assert response.status_code == 200
snapshot.snapshot_dir = "snapshots"
snapshot.assert_match(
json.dumps(response.json(), indent=2, sort_keys=True),
"process_with_external_response"
)
```
## Best Practices
### 1. Test Organization
- Place tests next to the code: `routes.py` → `routes_test.py`
- Use descriptive test names: `test_create_user_with_invalid_email`
- Group related tests in classes when appropriate
### 2. Test Coverage
- Test happy path and error cases (see the sketch below)
- Test edge cases (empty data, invalid formats)
- Test authentication and authorization
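A minimal sketch of an error-case test; the endpoint and payload are hypothetical, and `client` is the `TestClient` from the basic structure above:
```python
def test_create_user_with_invalid_email():
    # FastAPI returns 422 when request body validation fails
    response = client.post("/users", json={"email": "not-an-email"})
    assert response.status_code == 422
```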
### 3. Snapshot Testing Guidelines
- Review all snapshot changes carefully
- Don't snapshot sensitive data
- Keep snapshots focused and minimal
- Update snapshots intentionally, not accidentally
### 4. Async Testing
- Use regular `def` for FastAPI TestClient tests
- Use `async def` with `@pytest.mark.asyncio` for testing async functions directly
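A minimal sketch of the async style (the helper name is illustrative):
```python
import pytest

@pytest.mark.asyncio
async def test_async_helper_directly():
    # Await the coroutine directly instead of going through the HTTP layer
    result = await some_async_helper()
    assert result is not None
```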
### 5. Fixtures
Create reusable fixtures for common test data:
```python
@pytest.fixture
def sample_user():
return {
"email": "test@example.com",
"name": "Test User"
}
def test_create_user(sample_user, snapshot):
response = client.post("/users", json=sample_user)
# ... test implementation
```
## CI/CD Integration
The GitHub Actions workflow automatically runs tests on:
- Pull requests
- Pushes to main branch
Snapshot tests work in CI by:
1. Committing snapshot files to the repository
2. Comparing test output against the committed snapshots
3. Failing the run if the snapshots don't match
## Troubleshooting
### Snapshot Mismatches
- Review the diff carefully
- If changes are expected: `poetry run pytest --snapshot-update`
- If changes are unexpected: Fix the code causing the difference
### Async Test Issues
- Ensure async functions use `@pytest.mark.asyncio`
- Use `AsyncMock` for mocking async functions (see the sketch after this list)
- FastAPI TestClient handles async automatically
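A minimal `AsyncMock` sketch, reusing the patch target from the mocking example above:
```python
from unittest.mock import AsyncMock

def test_with_async_dependency(mocker):
    # AsyncMock produces an awaitable, so code that awaits the call keeps working
    mocker.patch(
        "backend.services.external_api.call",
        new=AsyncMock(return_value={"external": "data"}),
    )
```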
### Import Errors
- Check that all dependencies are in `pyproject.toml`
- Run `poetry install` to ensure dependencies are installed
- Verify import paths are correct
## Summary
Snapshot testing provides a powerful way to ensure API responses remain consistent. Combined with traditional assertions, it creates a robust test suite that catches regressions while remaining maintainable.
Remember: Good tests are as important as good code!


@@ -1,3 +1,4 @@
import functools
import importlib
import os
import re
@@ -10,22 +11,16 @@ if TYPE_CHECKING:
T = TypeVar("T")
_AVAILABLE_BLOCKS: dict[str, type["Block"]] = {}
@functools.cache
def load_all_blocks() -> dict[str, type["Block"]]:
from backend.data.block import Block
if _AVAILABLE_BLOCKS:
return _AVAILABLE_BLOCKS
# Dynamically load all modules under backend.blocks
AVAILABLE_MODULES = []
current_dir = Path(__file__).parent
modules = [
str(f.relative_to(current_dir))[:-3].replace(os.path.sep, ".")
for f in current_dir.rglob("*.py")
if f.is_file() and f.name != "__init__.py"
if f.is_file() and f.name != "__init__.py" and not f.name.startswith("test_")
]
for module in modules:
if not re.match("^[a-z0-9_.]+$", module):
@@ -35,9 +30,9 @@ def load_all_blocks() -> dict[str, type["Block"]]:
)
importlib.import_module(f".{module}", package=__name__)
AVAILABLE_MODULES.append(module)
# Load all Block instances from the available modules
available_blocks: dict[str, type["Block"]] = {}
for block_cls in all_subclasses(Block):
class_name = block_cls.__name__
@@ -58,7 +53,7 @@ def load_all_blocks() -> dict[str, type["Block"]]:
f"Block ID {block.name} error: {block.id} is not a valid UUID"
)
if block.id in _AVAILABLE_BLOCKS:
if block.id in available_blocks:
raise ValueError(
f"Block ID {block.name} error: {block.id} is already in use"
)
@@ -89,9 +84,9 @@ def load_all_blocks() -> dict[str, type["Block"]]:
f"{block.name} has a boolean field with no default value"
)
_AVAILABLE_BLOCKS[block.id] = block_cls
available_blocks[block.id] = block_cls
return _AVAILABLE_BLOCKS
return available_blocks
__all__ = ["load_all_blocks"]


@@ -1,5 +1,8 @@
import asyncio
import logging
from typing import Any
from typing import Any, Optional
from pydantic import JsonValue
from backend.data.block import (
Block,
@@ -12,9 +15,9 @@ from backend.data.block import (
)
from backend.data.execution import ExecutionStatus
from backend.data.model import SchemaField
from backend.util import json
from backend.util import json, retry
logger = logging.getLogger(__name__)
_logger = logging.getLogger(__name__)
class AgentExecutorBlock(Block):
@@ -23,17 +26,21 @@ class AgentExecutorBlock(Block):
graph_id: str = SchemaField(description="Graph ID")
graph_version: int = SchemaField(description="Graph Version")
data: BlockInput = SchemaField(description="Input data for the graph")
inputs: BlockInput = SchemaField(description="Input data for the graph")
input_schema: dict = SchemaField(description="Input schema for the graph")
output_schema: dict = SchemaField(description="Output schema for the graph")
nodes_input_masks: Optional[dict[str, dict[str, JsonValue]]] = SchemaField(
default=None, hidden=True
)
@classmethod
def get_input_schema(cls, data: BlockInput) -> dict[str, Any]:
return data.get("input_schema", {})
@classmethod
def get_input_defaults(cls, data: BlockInput) -> BlockInput:
return data.get("data", {})
return data.get("inputs", {})
@classmethod
def get_missing_input(cls, data: BlockInput) -> set[str]:
@@ -57,38 +64,96 @@ class AgentExecutorBlock(Block):
categories={BlockCategory.AGENT},
)
def run(self, input_data: Input, **kwargs) -> BlockOutput:
from backend.data.execution import ExecutionEventType
async def run(self, input_data: Input, **kwargs) -> BlockOutput:
from backend.executor import utils as execution_utils
event_bus = execution_utils.get_execution_event_bus()
graph_exec = execution_utils.add_graph_execution(
graph_exec = await execution_utils.add_graph_execution(
graph_id=input_data.graph_id,
graph_version=input_data.graph_version,
user_id=input_data.user_id,
data=input_data.data,
inputs=input_data.inputs,
nodes_input_masks=input_data.nodes_input_masks,
use_db_query=False,
)
log_id = f"Graph #{input_data.graph_id}-V{input_data.graph_version}, exec-id: {graph_exec.graph_exec_id}"
logger = execution_utils.LogMetadata(
logger=_logger,
user_id=input_data.user_id,
graph_eid=graph_exec.id,
graph_id=input_data.graph_id,
node_eid="*",
node_id="*",
block_name=self.name,
)
try:
async for name, data in self._run(
graph_id=input_data.graph_id,
graph_version=input_data.graph_version,
graph_exec_id=graph_exec.id,
user_id=input_data.user_id,
logger=logger,
):
yield name, data
except asyncio.CancelledError:
await self._stop(
graph_exec_id=graph_exec.id,
user_id=input_data.user_id,
logger=logger,
)
logger.warning(
f"Execution of graph {input_data.graph_id}v{input_data.graph_version} was cancelled."
)
except Exception as e:
await self._stop(
graph_exec_id=graph_exec.id,
user_id=input_data.user_id,
logger=logger,
)
logger.error(
f"Execution of graph {input_data.graph_id}v{input_data.graph_version} failed: {e}, execution is stopped."
)
raise
async def _run(
self,
graph_id: str,
graph_version: int,
graph_exec_id: str,
user_id: str,
logger,
) -> BlockOutput:
from backend.data.execution import ExecutionEventType
from backend.executor import utils as execution_utils
event_bus = execution_utils.get_async_execution_event_bus()
log_id = f"Graph #{graph_id}-V{graph_version}, exec-id: {graph_exec_id}"
logger.info(f"Starting execution of {log_id}")
for event in event_bus.listen(
user_id=graph_exec.user_id,
graph_id=graph_exec.graph_id,
graph_exec_id=graph_exec.graph_exec_id,
async for event in event_bus.listen(
user_id=user_id,
graph_id=graph_id,
graph_exec_id=graph_exec_id,
):
if event.event_type == ExecutionEventType.GRAPH_EXEC_UPDATE:
if event.status in [
ExecutionStatus.COMPLETED,
ExecutionStatus.TERMINATED,
ExecutionStatus.FAILED,
]:
logger.info(f"Execution {log_id} ended with status {event.status}")
break
else:
continue
if event.status not in [
ExecutionStatus.COMPLETED,
ExecutionStatus.TERMINATED,
ExecutionStatus.FAILED,
]:
logger.debug(
f"Execution {log_id} received event {event.event_type} with status {event.status}"
)
continue
logger.info(
if event.event_type == ExecutionEventType.GRAPH_EXEC_UPDATE:
# If the graph execution is COMPLETED, TERMINATED, or FAILED,
# we can stop listening for further events.
break
logger.debug(
f"Execution {log_id} produced input {event.input_data} output {event.output_data}"
)
@@ -106,5 +171,29 @@ class AgentExecutorBlock(Block):
continue
for output_data in event.output_data.get("output", []):
logger.info(f"Execution {log_id} produced {output_name}: {output_data}")
logger.debug(
f"Execution {log_id} produced {output_name}: {output_data}"
)
yield output_name, output_data
@retry.func_retry
async def _stop(
self,
graph_exec_id: str,
user_id: str,
logger,
) -> None:
from backend.executor import utils as execution_utils
log_id = f"Graph exec-id: {graph_exec_id}"
logger.info(f"Stopping execution of {log_id}")
try:
await execution_utils.stop_graph_execution(
graph_exec_id=graph_exec_id,
user_id=user_id,
use_db_query=False,
)
logger.info(f"Execution {log_id} stopped successfully.")
except Exception as e:
logger.error(f"Failed to stop execution {log_id}: {e}")


@@ -1,8 +1,8 @@
from enum import Enum
from typing import Literal
import replicate
from pydantic import SecretStr
from replicate.client import Client as ReplicateClient
from replicate.helpers import FileOutput
from backend.data.block import Block, BlockCategory, BlockSchema
@@ -165,15 +165,15 @@ class AIImageGeneratorBlock(Block):
},
)
def _run_client(
async def _run_client(
self, credentials: APIKeyCredentials, model_name: str, input_params: dict
):
try:
# Initialize Replicate client
client = replicate.Client(api_token=credentials.api_key.get_secret_value())
client = ReplicateClient(api_token=credentials.api_key.get_secret_value())
# Run the model with input parameters
output = client.run(model_name, input=input_params, wait=False)
output = await client.async_run(model_name, input=input_params, wait=False)
# Process output
if isinstance(output, list) and len(output) > 0:
@@ -195,7 +195,7 @@ class AIImageGeneratorBlock(Block):
except Exception as e:
raise RuntimeError(f"Unexpected error during model execution: {e}")
def generate_image(self, input_data: Input, credentials: APIKeyCredentials):
async def generate_image(self, input_data: Input, credentials: APIKeyCredentials):
try:
# Handle style-based prompt modification for models without native style support
modified_prompt = input_data.prompt
@@ -213,7 +213,7 @@ class AIImageGeneratorBlock(Block):
"steps": 40,
"cfg_scale": 7.0,
}
output = self._run_client(
output = await self._run_client(
credentials,
"stability-ai/stable-diffusion-3.5-medium",
input_params,
@@ -231,7 +231,7 @@ class AIImageGeneratorBlock(Block):
"output_format": "jpg", # Set to jpg for Flux models
"output_quality": 90,
}
output = self._run_client(
output = await self._run_client(
credentials, "black-forest-labs/flux-1.1-pro", input_params
)
return output
@@ -246,7 +246,7 @@ class AIImageGeneratorBlock(Block):
"output_format": "jpg",
"output_quality": 90,
}
output = self._run_client(
output = await self._run_client(
credentials, "black-forest-labs/flux-1.1-pro-ultra", input_params
)
return output
@@ -257,7 +257,7 @@ class AIImageGeneratorBlock(Block):
"size": SIZE_TO_RECRAFT_DIMENSIONS[input_data.size],
"style": input_data.style.value,
}
output = self._run_client(
output = await self._run_client(
credentials, "recraft-ai/recraft-v3", input_params
)
return output
@@ -296,9 +296,9 @@ class AIImageGeneratorBlock(Block):
style_text = style_map.get(style, "")
return f"{style_text} of" if style_text else ""
def run(self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs):
async def run(self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs):
try:
url = self.generate_image(input_data, credentials)
url = await self.generate_image(input_data, credentials)
if url:
yield "image_url", url
else:


@@ -1,10 +1,10 @@
import asyncio
import logging
import time
from enum import Enum
from typing import Literal
import replicate
from pydantic import SecretStr
from replicate.client import Client as ReplicateClient
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.model import (
@@ -142,7 +142,7 @@ class AIMusicGeneratorBlock(Block):
test_credentials=TEST_CREDENTIALS,
)
def run(
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
max_retries = 3
@@ -154,7 +154,7 @@ class AIMusicGeneratorBlock(Block):
logger.debug(
f"[AIMusicGeneratorBlock] - Running model (attempt {attempt + 1})"
)
result = self.run_model(
result = await self.run_model(
api_key=credentials.api_key,
music_gen_model_version=input_data.music_gen_model_version,
prompt=input_data.prompt,
@@ -176,13 +176,13 @@ class AIMusicGeneratorBlock(Block):
last_error = f"Unexpected error: {str(e)}"
logger.error(f"[AIMusicGeneratorBlock] - Error: {last_error}")
if attempt < max_retries - 1:
time.sleep(retry_delay)
await asyncio.sleep(retry_delay)
continue
# If we've exhausted all retries, yield the error
yield "error", f"Failed after {max_retries} attempts. Last error: {last_error}"
def run_model(
async def run_model(
self,
api_key: SecretStr,
music_gen_model_version: MusicGenModelVersion,
@@ -196,10 +196,10 @@ class AIMusicGeneratorBlock(Block):
normalization_strategy: NormalizationStrategy,
):
# Initialize Replicate client with the API key
client = replicate.Client(api_token=api_key.get_secret_value())
client = ReplicateClient(api_token=api_key.get_secret_value())
# Run the model with parameters
output = client.run(
output = await client.async_run(
"meta/musicgen:671ac645ce5e552cc63a54a2bbff63fcf798043055d2dac5fc9e36a837eedcfb",
input={
"prompt": prompt,


@@ -1,3 +1,4 @@
import asyncio
import logging
import time
from enum import Enum
@@ -13,7 +14,7 @@ from backend.data.model import (
SchemaField,
)
from backend.integrations.providers import ProviderName
from backend.util.request import requests
from backend.util.request import Requests
TEST_CREDENTIALS = APIKeyCredentials(
id="01234567-89ab-cdef-0123-456789abcdef",
@@ -52,6 +53,7 @@ class AudioTrack(str, Enum):
REFRESHER = ("Refresher",)
TOURIST = ("Tourist",)
TWIN_TYCHES = ("Twin Tyches",)
DONT_STOP_ME_ABSTRACT_FUTURE_BASS = ("Dont Stop Me Abstract Future Bass",)
@property
def audio_url(self):
@@ -77,6 +79,7 @@ class AudioTrack(str, Enum):
AudioTrack.REFRESHER: "https://cdn.tfrv.xyz/audio/refresher.mp3",
AudioTrack.TOURIST: "https://cdn.tfrv.xyz/audio/tourist.mp3",
AudioTrack.TWIN_TYCHES: "https://cdn.tfrv.xyz/audio/twin-tynches.mp3",
AudioTrack.DONT_STOP_ME_ABSTRACT_FUTURE_BASS: "https://cdn.revid.ai/audio/_dont-stop-me-abstract-future-bass.mp3",
}
return audio_urls[self]
@@ -104,6 +107,7 @@ class GenerationPreset(str, Enum):
MOVIE = ("Movie",)
STYLIZED_ILLUSTRATION = ("Stylized Illustration",)
MANGA = ("Manga",)
DEFAULT = ("DEFAULT",)
class Voice(str, Enum):
@@ -113,6 +117,7 @@ class Voice(str, Enum):
JESSICA = "Jessica"
CHARLOTTE = "Charlotte"
CALLUM = "Callum"
EVA = "Eva"
@property
def voice_id(self):
@@ -123,6 +128,7 @@ class Voice(str, Enum):
Voice.JESSICA: "cgSgspJ2msm6clMCkdW9",
Voice.CHARLOTTE: "XB0fDUnXU5powFXDhCwa",
Voice.CALLUM: "N2lVS1w4EtoT3dr4eOWO",
Voice.EVA: "FGY2WhTYpPnrIDTdsKH5",
}
return voice_id_map[self]
@@ -140,6 +146,8 @@ logger = logging.getLogger(__name__)
class AIShortformVideoCreatorBlock(Block):
"""Creates a shortform texttovideo clip using stock or AI imagery."""
class Input(BlockSchema):
credentials: CredentialsMetaInput[
Literal[ProviderName.REVID], Literal["api_key"]
@@ -183,6 +191,58 @@ class AIShortformVideoCreatorBlock(Block):
video_url: str = SchemaField(description="The URL of the created video")
error: str = SchemaField(description="Error message if the request failed")
async def create_webhook(self) -> tuple[str, str]:
"""Create a new webhook URL for receiving notifications."""
url = "https://webhook.site/token"
headers = {"Accept": "application/json", "Content-Type": "application/json"}
response = await Requests().post(url, headers=headers)
webhook_data = response.json()
return webhook_data["uuid"], f"https://webhook.site/{webhook_data['uuid']}"
async def create_video(self, api_key: SecretStr, payload: dict) -> dict:
"""Create a video using the Revid API."""
url = "https://www.revid.ai/api/public/v2/render"
headers = {"key": api_key.get_secret_value()}
response = await Requests().post(url, json=payload, headers=headers)
logger.debug(
f"API Response Status Code: {response.status}, Content: {response.text}"
)
return response.json()
async def check_video_status(self, api_key: SecretStr, pid: str) -> dict:
"""Check the status of a video creation job."""
url = f"https://www.revid.ai/api/public/v2/status?pid={pid}"
headers = {"key": api_key.get_secret_value()}
response = await Requests().get(url, headers=headers)
return response.json()
async def wait_for_video(
self,
api_key: SecretStr,
pid: str,
max_wait_time: int = 1000,
) -> str:
"""Wait for video creation to complete and return the video URL."""
start_time = time.time()
while time.time() - start_time < max_wait_time:
status = await self.check_video_status(api_key, pid)
logger.debug(f"Video status: {status}")
if status.get("status") == "ready" and "videoUrl" in status:
return status["videoUrl"]
elif status.get("status") == "error":
error_message = status.get("error", "Unknown error occurred")
logger.error(f"Video creation failed: {error_message}")
raise ValueError(f"Video creation failed: {error_message}")
elif status.get("status") in ["FAILED", "CANCELED"]:
logger.error(f"Video creation failed: {status.get('message')}")
raise ValueError(f"Video creation failed: {status.get('message')}")
await asyncio.sleep(10)
logger.error("Video creation timed out")
raise TimeoutError("Video creation timed out")
def __init__(self):
super().__init__(
id="361697fb-0c4f-4feb-aed3-8320c88c771b",
@@ -201,91 +261,41 @@ class AIShortformVideoCreatorBlock(Block):
"voice": Voice.LILY,
"video_style": VisualMediaType.STOCK_VIDEOS,
},
test_output=(
"video_url",
"https://example.com/video.mp4",
),
test_output=("video_url", "https://example.com/video.mp4"),
test_mock={
"create_webhook": lambda: (
"create_webhook": lambda *args, **kwargs: (
"test_uuid",
"https://webhook.site/test_uuid",
),
"create_video": lambda api_key, payload: {"pid": "test_pid"},
"wait_for_video": lambda api_key, pid, webhook_token, max_wait_time=1000: "https://example.com/video.mp4",
"create_video": lambda *args, **kwargs: {"pid": "test_pid"},
"check_video_status": lambda *args, **kwargs: {
"status": "ready",
"videoUrl": "https://example.com/video.mp4",
},
"wait_for_video": lambda *args, **kwargs: "https://example.com/video.mp4",
},
test_credentials=TEST_CREDENTIALS,
)
def create_webhook(self):
url = "https://webhook.site/token"
headers = {"Accept": "application/json", "Content-Type": "application/json"}
response = requests.post(url, headers=headers)
webhook_data = response.json()
return webhook_data["uuid"], f"https://webhook.site/{webhook_data['uuid']}"
def create_video(self, api_key: SecretStr, payload: dict) -> dict:
url = "https://www.revid.ai/api/public/v2/render"
headers = {"key": api_key.get_secret_value()}
response = requests.post(url, json=payload, headers=headers)
logger.debug(
f"API Response Status Code: {response.status_code}, Content: {response.text}"
)
return response.json()
def check_video_status(self, api_key: SecretStr, pid: str) -> dict:
url = f"https://www.revid.ai/api/public/v2/status?pid={pid}"
headers = {"key": api_key.get_secret_value()}
response = requests.get(url, headers=headers)
return response.json()
def wait_for_video(
self,
api_key: SecretStr,
pid: str,
webhook_token: str,
max_wait_time: int = 1000,
) -> str:
start_time = time.time()
while time.time() - start_time < max_wait_time:
status = self.check_video_status(api_key, pid)
logger.debug(f"Video status: {status}")
if status.get("status") == "ready" and "videoUrl" in status:
return status["videoUrl"]
elif status.get("status") == "error":
error_message = status.get("error", "Unknown error occurred")
logger.error(f"Video creation failed: {error_message}")
raise ValueError(f"Video creation failed: {error_message}")
elif status.get("status") in ["FAILED", "CANCELED"]:
logger.error(f"Video creation failed: {status.get('message')}")
raise ValueError(f"Video creation failed: {status.get('message')}")
time.sleep(10)
logger.error("Video creation timed out")
raise TimeoutError("Video creation timed out")
def run(
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
# Create a new Webhook.site URL
webhook_token, webhook_url = self.create_webhook()
webhook_token, webhook_url = await self.create_webhook()
logger.debug(f"Webhook URL: {webhook_url}")
audio_url = input_data.background_music.audio_url
payload = {
"frameRate": input_data.frame_rate,
"resolution": input_data.resolution,
"frameDurationMultiplier": 18,
"webhook": webhook_url,
"webhook": None,
"creationParams": {
"mediaType": input_data.video_style,
"captionPresetName": "Wrap 1",
"selectedVoice": input_data.voice.voice_id,
"hasEnhancedGeneration": True,
"generationPreset": input_data.generation_preset.name,
"selectedAudio": input_data.background_music,
"selectedAudio": input_data.background_music.value,
"origin": "/create",
"inputText": input_data.script,
"flowType": "text-to-video",
@@ -301,12 +311,12 @@ class AIShortformVideoCreatorBlock(Block):
"selectedStoryStyle": {"value": "custom", "label": "Custom"},
"hasToGenerateVideos": input_data.video_style
!= VisualMediaType.STOCK_VIDEOS,
"audioUrl": audio_url,
"audioUrl": input_data.background_music.audio_url,
},
}
logger.debug("Creating video...")
response = self.create_video(credentials.api_key, payload)
response = await self.create_video(credentials.api_key, payload)
pid = response.get("pid")
if not pid:
@@ -318,6 +328,370 @@ class AIShortformVideoCreatorBlock(Block):
logger.debug(
f"Video created with project ID: {pid}. Waiting for completion..."
)
video_url = self.wait_for_video(credentials.api_key, pid, webhook_token)
video_url = await self.wait_for_video(credentials.api_key, pid)
logger.debug(f"Video ready: {video_url}")
yield "video_url", video_url
class AIAdMakerVideoCreatorBlock(Block):
"""Generates a 30second vertical AI advert using optional usersupplied imagery."""
class Input(BlockSchema):
credentials: CredentialsMetaInput[
Literal[ProviderName.REVID], Literal["api_key"]
] = CredentialsField(
description="Credentials for Revid.ai API access.",
)
script: str = SchemaField(
description="Short advertising copy. Line breaks create new scenes.",
placeholder="Introducing Foobar [show product photo] the gadget that does it all.",
)
ratio: str = SchemaField(description="Aspect ratio", default="9 / 16")
target_duration: int = SchemaField(
description="Desired length of the ad in seconds.", default=30
)
voice: Voice = SchemaField(
description="Narration voice", default=Voice.EVA, placeholder=Voice.EVA
)
background_music: AudioTrack = SchemaField(
description="Background track",
default=AudioTrack.DONT_STOP_ME_ABSTRACT_FUTURE_BASS,
)
input_media_urls: list[str] = SchemaField(
description="List of image URLs to feature in the advert.", default=[]
)
use_only_provided_media: bool = SchemaField(
description="Restrict visuals to supplied images only.", default=True
)
class Output(BlockSchema):
video_url: str = SchemaField(description="URL of the finished advert")
error: str = SchemaField(description="Error message on failure")
async def create_webhook(self) -> tuple[str, str]:
"""Create a new webhook URL for receiving notifications."""
url = "https://webhook.site/token"
headers = {"Accept": "application/json", "Content-Type": "application/json"}
response = await Requests().post(url, headers=headers)
webhook_data = response.json()
return webhook_data["uuid"], f"https://webhook.site/{webhook_data['uuid']}"
async def create_video(self, api_key: SecretStr, payload: dict) -> dict:
"""Create a video using the Revid API."""
url = "https://www.revid.ai/api/public/v2/render"
headers = {"key": api_key.get_secret_value()}
response = await Requests().post(url, json=payload, headers=headers)
logger.debug(
f"API Response Status Code: {response.status}, Content: {response.text}"
)
return response.json()
async def check_video_status(self, api_key: SecretStr, pid: str) -> dict:
"""Check the status of a video creation job."""
url = f"https://www.revid.ai/api/public/v2/status?pid={pid}"
headers = {"key": api_key.get_secret_value()}
response = await Requests().get(url, headers=headers)
return response.json()
async def wait_for_video(
self,
api_key: SecretStr,
pid: str,
max_wait_time: int = 1000,
) -> str:
"""Wait for video creation to complete and return the video URL."""
start_time = time.time()
while time.time() - start_time < max_wait_time:
status = await self.check_video_status(api_key, pid)
logger.debug(f"Video status: {status}")
if status.get("status") == "ready" and "videoUrl" in status:
return status["videoUrl"]
elif status.get("status") == "error":
error_message = status.get("error", "Unknown error occurred")
logger.error(f"Video creation failed: {error_message}")
raise ValueError(f"Video creation failed: {error_message}")
elif status.get("status") in ["FAILED", "CANCELED"]:
logger.error(f"Video creation failed: {status.get('message')}")
raise ValueError(f"Video creation failed: {status.get('message')}")
await asyncio.sleep(10)
logger.error("Video creation timed out")
raise TimeoutError("Video creation timed out")
def __init__(self):
super().__init__(
id="58bd2a19-115d-4fd1-8ca4-13b9e37fa6a0",
description="Creates an AIgenerated 30second advert (text + images)",
categories={BlockCategory.MARKETING, BlockCategory.AI},
input_schema=AIAdMakerVideoCreatorBlock.Input,
output_schema=AIAdMakerVideoCreatorBlock.Output,
test_input={
"credentials": TEST_CREDENTIALS_INPUT,
"script": "Test product launch!",
"input_media_urls": [
"https://cdn.revid.ai/uploads/1747076315114-image.png",
],
},
test_output=("video_url", "https://example.com/ad.mp4"),
test_mock={
"create_webhook": lambda *args, **kwargs: (
"test_uuid",
"https://webhook.site/test_uuid",
),
"create_video": lambda *args, **kwargs: {"pid": "test_pid"},
"check_video_status": lambda *args, **kwargs: {
"status": "ready",
"videoUrl": "https://example.com/ad.mp4",
},
"wait_for_video": lambda *args, **kwargs: "https://example.com/ad.mp4",
},
test_credentials=TEST_CREDENTIALS,
)
async def run(self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs):
webhook_token, webhook_url = await self.create_webhook()
payload = {
"webhook": webhook_url,
"creationParams": {
"targetDuration": input_data.target_duration,
"ratio": input_data.ratio,
"mediaType": "aiVideo",
"inputText": input_data.script,
"flowType": "text-to-video",
"slug": "ai-ad-generator",
"slugNew": "",
"isCopiedFrom": False,
"hasToGenerateVoice": True,
"hasToTranscript": False,
"hasToSearchMedia": True,
"hasAvatar": False,
"hasWebsiteRecorder": False,
"hasTextSmallAtBottom": False,
"selectedAudio": input_data.background_music.value,
"selectedVoice": input_data.voice.voice_id,
"selectedAvatar": "https://cdn.revid.ai/avatars/young-woman.mp4",
"selectedAvatarType": "video/mp4",
"websiteToRecord": "",
"hasToGenerateCover": True,
"nbGenerations": 1,
"disableCaptions": False,
"mediaMultiplier": "medium",
"characters": [],
"captionPresetName": "Revid",
"sourceType": "contentScraping",
"selectedStoryStyle": {"value": "custom", "label": "General"},
"generationPreset": "DEFAULT",
"hasToGenerateMusic": False,
"isOptimizedForChinese": False,
"generationUserPrompt": "",
"enableNsfwFilter": False,
"addStickers": False,
"typeMovingImageAnim": "dynamic",
"hasToGenerateSoundEffects": False,
"forceModelType": "gpt-image-1",
"selectedCharacters": [],
"lang": "",
"voiceSpeed": 1,
"disableAudio": False,
"disableVoice": False,
"useOnlyProvidedMedia": input_data.use_only_provided_media,
"imageGenerationModel": "ultra",
"videoGenerationModel": "pro",
"hasEnhancedGeneration": True,
"hasEnhancedGenerationPro": True,
"inputMedias": [
{"url": url, "title": "", "type": "image"}
for url in input_data.input_media_urls
],
"hasToGenerateVideos": True,
"audioUrl": input_data.background_music.audio_url,
"watermark": None,
},
}
response = await self.create_video(credentials.api_key, payload)
pid = response.get("pid")
if not pid:
raise RuntimeError("Failed to create video: No project ID returned")
video_url = await self.wait_for_video(credentials.api_key, pid)
yield "video_url", video_url
class AIScreenshotToVideoAdBlock(Block):
"""Creates an advert where the supplied screenshot is narrated by an AI avatar."""
class Input(BlockSchema):
credentials: CredentialsMetaInput[
Literal[ProviderName.REVID], Literal["api_key"]
] = CredentialsField(description="Revid.ai API key")
script: str = SchemaField(
description="Narration that will accompany the screenshot.",
placeholder="Check out these amazing stats!",
)
screenshot_url: str = SchemaField(
description="Screenshot or image URL to showcase."
)
ratio: str = SchemaField(default="9 / 16")
target_duration: int = SchemaField(default=30)
voice: Voice = SchemaField(default=Voice.EVA)
background_music: AudioTrack = SchemaField(
default=AudioTrack.DONT_STOP_ME_ABSTRACT_FUTURE_BASS
)
class Output(BlockSchema):
video_url: str = SchemaField(description="Rendered video URL")
error: str = SchemaField(description="Error, if encountered")
async def create_webhook(self) -> tuple[str, str]:
"""Create a new webhook URL for receiving notifications."""
url = "https://webhook.site/token"
headers = {"Accept": "application/json", "Content-Type": "application/json"}
response = await Requests().post(url, headers=headers)
webhook_data = response.json()
return webhook_data["uuid"], f"https://webhook.site/{webhook_data['uuid']}"
async def create_video(self, api_key: SecretStr, payload: dict) -> dict:
"""Create a video using the Revid API."""
url = "https://www.revid.ai/api/public/v2/render"
headers = {"key": api_key.get_secret_value()}
response = await Requests().post(url, json=payload, headers=headers)
logger.debug(
f"API Response Status Code: {response.status}, Content: {response.text}"
)
return response.json()
async def check_video_status(self, api_key: SecretStr, pid: str) -> dict:
"""Check the status of a video creation job."""
url = f"https://www.revid.ai/api/public/v2/status?pid={pid}"
headers = {"key": api_key.get_secret_value()}
response = await Requests().get(url, headers=headers)
return response.json()
async def wait_for_video(
self,
api_key: SecretStr,
pid: str,
max_wait_time: int = 1000,
) -> str:
"""Wait for video creation to complete and return the video URL."""
start_time = time.time()
while time.time() - start_time < max_wait_time:
status = await self.check_video_status(api_key, pid)
logger.debug(f"Video status: {status}")
if status.get("status") == "ready" and "videoUrl" in status:
return status["videoUrl"]
elif status.get("status") == "error":
error_message = status.get("error", "Unknown error occurred")
logger.error(f"Video creation failed: {error_message}")
raise ValueError(f"Video creation failed: {error_message}")
elif status.get("status") in ["FAILED", "CANCELED"]:
logger.error(f"Video creation failed: {status.get('message')}")
raise ValueError(f"Video creation failed: {status.get('message')}")
await asyncio.sleep(10)
logger.error("Video creation timed out")
raise TimeoutError("Video creation timed out")
def __init__(self):
super().__init__(
id="0f3e4635-e810-43d9-9e81-49e6f4e83b7c",
description="Turns a screenshot into an engaging, avatarnarrated video advert.",
categories={BlockCategory.AI, BlockCategory.MARKETING},
input_schema=AIScreenshotToVideoAdBlock.Input,
output_schema=AIScreenshotToVideoAdBlock.Output,
test_input={
"credentials": TEST_CREDENTIALS_INPUT,
"script": "Amazing numbers!",
"screenshot_url": "https://cdn.revid.ai/uploads/1747080376028-image.png",
},
test_output=("video_url", "https://example.com/screenshot.mp4"),
test_mock={
"create_webhook": lambda *args, **kwargs: (
"test_uuid",
"https://webhook.site/test_uuid",
),
"create_video": lambda *args, **kwargs: {"pid": "test_pid"},
"check_video_status": lambda *args, **kwargs: {
"status": "ready",
"videoUrl": "https://example.com/screenshot.mp4",
},
"wait_for_video": lambda *args, **kwargs: "https://example.com/screenshot.mp4",
},
test_credentials=TEST_CREDENTIALS,
)
async def run(self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs):
webhook_token, webhook_url = await self.create_webhook()
payload = {
"webhook": webhook_url,
"creationParams": {
"targetDuration": input_data.target_duration,
"ratio": input_data.ratio,
"mediaType": "aiVideo",
"hasAvatar": True,
"removeAvatarBackground": True,
"inputText": input_data.script,
"flowType": "text-to-video",
"slug": "ai-ad-generator",
"slugNew": "screenshot-to-video-ad",
"isCopiedFrom": "ai-ad-generator",
"hasToGenerateVoice": True,
"hasToTranscript": False,
"hasToSearchMedia": True,
"hasWebsiteRecorder": False,
"hasTextSmallAtBottom": False,
"selectedAudio": input_data.background_music.value,
"selectedVoice": input_data.voice.voice_id,
"selectedAvatar": "https://cdn.revid.ai/avatars/young-woman.mp4",
"selectedAvatarType": "video/mp4",
"websiteToRecord": "",
"hasToGenerateCover": True,
"nbGenerations": 1,
"disableCaptions": False,
"mediaMultiplier": "medium",
"characters": [],
"captionPresetName": "Revid",
"sourceType": "contentScraping",
"selectedStoryStyle": {"value": "custom", "label": "General"},
"generationPreset": "DEFAULT",
"hasToGenerateMusic": False,
"isOptimizedForChinese": False,
"generationUserPrompt": "",
"enableNsfwFilter": False,
"addStickers": False,
"typeMovingImageAnim": "dynamic",
"hasToGenerateSoundEffects": False,
"forceModelType": "gpt-image-1",
"selectedCharacters": [],
"lang": "",
"voiceSpeed": 1,
"disableAudio": False,
"disableVoice": False,
"useOnlyProvidedMedia": True,
"imageGenerationModel": "ultra",
"videoGenerationModel": "ultra",
"hasEnhancedGeneration": True,
"hasEnhancedGenerationPro": True,
"inputMedias": [
{"url": input_data.screenshot_url, "title": "", "type": "image"}
],
"hasToGenerateVideos": True,
"audioUrl": input_data.background_music.audio_url,
"watermark": None,
},
}
response = await self.create_video(credentials.api_key, payload)
pid = response.get("pid")
if not pid:
raise RuntimeError("Failed to create video: No project ID returned")
video_url = await self.wait_for_video(credentials.api_key, pid)
yield "video_url", video_url


@@ -4,6 +4,7 @@ from typing import List
from backend.blocks.apollo._auth import ApolloCredentials
from backend.blocks.apollo.models import (
Contact,
EnrichPersonRequest,
Organization,
SearchOrganizationsRequest,
SearchOrganizationsResponse,
@@ -27,14 +28,15 @@ class ApolloClient:
def _get_headers(self) -> dict[str, str]:
return {"x-api-key": self.credentials.api_key.get_secret_value()}
def search_people(self, query: SearchPeopleRequest) -> List[Contact]:
async def search_people(self, query: SearchPeopleRequest) -> List[Contact]:
"""Search for people in Apollo"""
response = self.requests.get(
response = await self.requests.post(
f"{self.API_URL}/mixed_people/search",
headers=self._get_headers(),
params=query.model_dump(exclude={"credentials", "max_results"}),
json=query.model_dump(exclude={"max_results"}),
)
parsed_response = SearchPeopleResponse(**response.json())
data = response.json()
parsed_response = SearchPeopleResponse(**data)
if parsed_response.pagination.total_entries == 0:
return []
@@ -52,27 +54,29 @@ class ApolloClient:
and len(parsed_response.people) > 0
):
query.page += 1
response = self.requests.get(
response = await self.requests.post(
f"{self.API_URL}/mixed_people/search",
headers=self._get_headers(),
params=query.model_dump(exclude={"credentials", "max_results"}),
json=query.model_dump(exclude={"max_results"}),
)
parsed_response = SearchPeopleResponse(**response.json())
data = response.json()
parsed_response = SearchPeopleResponse(**data)
people.extend(parsed_response.people[: query.max_results - len(people)])
logger.info(f"Found {len(people)} people")
return people[: query.max_results] if query.max_results else people
def search_organizations(
async def search_organizations(
self, query: SearchOrganizationsRequest
) -> List[Organization]:
"""Search for organizations in Apollo"""
response = self.requests.get(
response = await self.requests.post(
f"{self.API_URL}/mixed_companies/search",
headers=self._get_headers(),
params=query.model_dump(exclude={"credentials", "max_results"}),
json=query.model_dump(exclude={"max_results"}),
)
parsed_response = SearchOrganizationsResponse(**response.json())
data = response.json()
parsed_response = SearchOrganizationsResponse(**data)
if parsed_response.pagination.total_entries == 0:
return []
@@ -90,12 +94,13 @@ class ApolloClient:
and len(parsed_response.organizations) > 0
):
query.page += 1
response = self.requests.get(
response = await self.requests.post(
f"{self.API_URL}/mixed_companies/search",
headers=self._get_headers(),
params=query.model_dump(exclude={"credentials", "max_results"}),
json=query.model_dump(exclude={"max_results"}),
)
parsed_response = SearchOrganizationsResponse(**response.json())
data = response.json()
parsed_response = SearchOrganizationsResponse(**data)
organizations.extend(
parsed_response.organizations[
: query.max_results - len(organizations)
@@ -106,3 +111,21 @@ class ApolloClient:
return (
organizations[: query.max_results] if query.max_results else organizations
)
async def enrich_person(self, query: EnrichPersonRequest) -> Contact:
"""Enrich a person's data including email & phone reveal"""
response = await self.requests.post(
f"{self.API_URL}/people/match",
headers=self._get_headers(),
json=query.model_dump(),
params={
"reveal_personal_emails": "true",
},
)
data = response.json()
if "person" not in data:
raise ValueError(f"Person not found or enrichment failed: {data}")
contact = Contact(**data["person"])
contact.email = contact.email or "-"
return contact


@@ -1,17 +1,31 @@
from enum import Enum
from typing import Any, Optional
from pydantic import BaseModel, ConfigDict
from pydantic import BaseModel as OriginalBaseModel
from pydantic import ConfigDict
from backend.data.model import SchemaField
class BaseModel(OriginalBaseModel):
def model_dump(self, *args, exclude: set[str] | None = None, **kwargs):
if exclude is None:
exclude = set("credentials")
else:
exclude.add("credentials")
kwargs.setdefault("exclude_none", True)
kwargs.setdefault("exclude_unset", True)
kwargs.setdefault("exclude_defaults", True)
return super().model_dump(*args, exclude=exclude, **kwargs)
class PrimaryPhone(BaseModel):
"""A primary phone in Apollo"""
number: str
source: str
sanitized_number: str
number: Optional[str] = ""
source: Optional[str] = ""
sanitized_number: Optional[str] = ""
class SenorityLevels(str, Enum):
@@ -42,102 +56,102 @@ class ContactEmailStatuses(str, Enum):
class RuleConfigStatus(BaseModel):
"""A rule config status in Apollo"""
_id: str
created_at: str
rule_action_config_id: str
rule_config_id: str
status_cd: str
updated_at: str
id: str
key: str
_id: Optional[str] = ""
created_at: Optional[str] = ""
rule_action_config_id: Optional[str] = ""
rule_config_id: Optional[str] = ""
status_cd: Optional[str] = ""
updated_at: Optional[str] = ""
id: Optional[str] = ""
key: Optional[str] = ""
class ContactCampaignStatus(BaseModel):
"""A contact campaign status in Apollo"""
id: str
emailer_campaign_id: str
send_email_from_user_id: str
inactive_reason: str
status: str
added_at: str
added_by_user_id: str
finished_at: str
paused_at: str
auto_unpause_at: str
send_email_from_email_address: str
send_email_from_email_account_id: str
manually_set_unpause: str
failure_reason: str
current_step_id: str
in_response_to_emailer_message_id: str
cc_emails: str
bcc_emails: str
to_emails: str
id: Optional[str] = ""
emailer_campaign_id: Optional[str] = ""
send_email_from_user_id: Optional[str] = ""
inactive_reason: Optional[str] = ""
status: Optional[str] = ""
added_at: Optional[str] = ""
added_by_user_id: Optional[str] = ""
finished_at: Optional[str] = ""
paused_at: Optional[str] = ""
auto_unpause_at: Optional[str] = ""
send_email_from_email_address: Optional[str] = ""
send_email_from_email_account_id: Optional[str] = ""
manually_set_unpause: Optional[str] = ""
failure_reason: Optional[str] = ""
current_step_id: Optional[str] = ""
in_response_to_emailer_message_id: Optional[str] = ""
cc_emails: Optional[str] = ""
bcc_emails: Optional[str] = ""
to_emails: Optional[str] = ""
class Account(BaseModel):
"""An account in Apollo"""
id: str
name: str
website_url: str
blog_url: str
angellist_url: str
linkedin_url: str
twitter_url: str
facebook_url: str
primary_phone: PrimaryPhone
languages: list[str]
alexa_ranking: int
phone: str
linkedin_uid: str
founded_year: int
publicly_traded_symbol: str
publicly_traded_exchange: str
logo_url: str
chrunchbase_url: str
primary_domain: str
domain: str
team_id: str
organization_id: str
account_stage_id: str
source: str
original_source: str
creator_id: str
owner_id: str
created_at: str
phone_status: str
hubspot_id: str
salesforce_id: str
crm_owner_id: str
parent_account_id: str
sanitized_phone: str
id: Optional[str] = ""
name: Optional[str] = ""
website_url: Optional[str] = ""
blog_url: Optional[str] = ""
angellist_url: Optional[str] = ""
linkedin_url: Optional[str] = ""
twitter_url: Optional[str] = ""
facebook_url: Optional[str] = ""
primary_phone: Optional[PrimaryPhone] = PrimaryPhone()
languages: Optional[list[str]] = []
alexa_ranking: Optional[int] = 0
phone: Optional[str] = ""
linkedin_uid: Optional[str] = ""
founded_year: Optional[int] = 0
publicly_traded_symbol: Optional[str] = ""
publicly_traded_exchange: Optional[str] = ""
logo_url: Optional[str] = ""
chrunchbase_url: Optional[str] = ""
primary_domain: Optional[str] = ""
domain: Optional[str] = ""
team_id: Optional[str] = ""
organization_id: Optional[str] = ""
account_stage_id: Optional[str] = ""
source: Optional[str] = ""
original_source: Optional[str] = ""
creator_id: Optional[str] = ""
owner_id: Optional[str] = ""
created_at: Optional[str] = ""
phone_status: Optional[str] = ""
hubspot_id: Optional[str] = ""
salesforce_id: Optional[str] = ""
crm_owner_id: Optional[str] = ""
parent_account_id: Optional[str] = ""
sanitized_phone: Optional[str] = ""
# no listed type on the API docs
account_playbook_statues: list[Any]
account_rule_config_statuses: list[RuleConfigStatus]
existence_level: str
label_ids: list[str]
typed_custom_fields: Any
custom_field_errors: Any
modality: str
source_display_name: str
salesforce_record_id: str
crm_record_url: str
account_playbook_statues: Optional[list[Any]] = []
account_rule_config_statuses: Optional[list[RuleConfigStatus]] = []
existence_level: Optional[str] = ""
label_ids: Optional[list[str]] = []
typed_custom_fields: Optional[Any] = {}
custom_field_errors: Optional[Any] = {}
modality: Optional[str] = ""
source_display_name: Optional[str] = ""
salesforce_record_id: Optional[str] = ""
crm_record_url: Optional[str] = ""
class ContactEmail(BaseModel):
"""A contact email in Apollo"""
email: str = ""
email_md5: str = ""
email_sha256: str = ""
email_status: str = ""
email_source: str = ""
extrapolated_email_confidence: str = ""
position: int = 0
email_from_customer: str = ""
free_domain: bool = True
email: Optional[str] = ""
email_md5: Optional[str] = ""
email_sha256: Optional[str] = ""
email_status: Optional[str] = ""
email_source: Optional[str] = ""
extrapolated_email_confidence: Optional[str] = ""
position: Optional[int] = 0
email_from_customer: Optional[str] = ""
free_domain: Optional[bool] = True
class EmploymentHistory(BaseModel):
@@ -150,40 +164,40 @@ class EmploymentHistory(BaseModel):
populate_by_name=True,
)
_id: Optional[str] = None
created_at: Optional[str] = None
current: Optional[bool] = None
degree: Optional[str] = None
description: Optional[str] = None
emails: Optional[str] = None
end_date: Optional[str] = None
grade_level: Optional[str] = None
kind: Optional[str] = None
major: Optional[str] = None
organization_id: Optional[str] = None
organization_name: Optional[str] = None
raw_address: Optional[str] = None
start_date: Optional[str] = None
title: Optional[str] = None
updated_at: Optional[str] = None
id: Optional[str] = None
key: Optional[str] = None
_id: Optional[str] = ""
created_at: Optional[str] = ""
current: Optional[bool] = False
degree: Optional[str] = ""
description: Optional[str] = ""
emails: Optional[str] = ""
end_date: Optional[str] = ""
grade_level: Optional[str] = ""
kind: Optional[str] = ""
major: Optional[str] = ""
organization_id: Optional[str] = ""
organization_name: Optional[str] = ""
raw_address: Optional[str] = ""
start_date: Optional[str] = ""
title: Optional[str] = ""
updated_at: Optional[str] = ""
id: Optional[str] = ""
key: Optional[str] = ""
class Breadcrumb(BaseModel):
"""A breadcrumb in Apollo"""
label: Optional[str] = "N/A"
signal_field_name: Optional[str] = "N/A"
value: str | list | None = "N/A"
display_name: Optional[str] = "N/A"
label: Optional[str] = ""
signal_field_name: Optional[str] = ""
value: str | list | None = ""
display_name: Optional[str] = ""
class TypedCustomField(BaseModel):
"""A typed custom field in Apollo"""
id: Optional[str] = "N/A"
value: Optional[str] = "N/A"
id: Optional[str] = ""
value: Optional[str] = ""
class Pagination(BaseModel):
@@ -205,23 +219,23 @@ class Pagination(BaseModel):
class DialerFlags(BaseModel):
"""A dialer flags in Apollo"""
country_name: str
country_enabled: bool
high_risk_calling_enabled: bool
potential_high_risk_number: bool
country_name: Optional[str] = ""
country_enabled: Optional[bool] = True
high_risk_calling_enabled: Optional[bool] = True
potential_high_risk_number: Optional[bool] = True
class PhoneNumber(BaseModel):
"""A phone number in Apollo"""
raw_number: str = ""
sanitized_number: str = ""
type: str = ""
position: int = 0
status: str = ""
dnc_status: str = ""
dnc_other_info: str = ""
dailer_flags: DialerFlags = DialerFlags(
raw_number: Optional[str] = ""
sanitized_number: Optional[str] = ""
type: Optional[str] = ""
position: Optional[int] = 0
status: Optional[str] = ""
dnc_status: Optional[str] = ""
dnc_other_info: Optional[str] = ""
dailer_flags: Optional[DialerFlags] = DialerFlags(
country_name="",
country_enabled=True,
high_risk_calling_enabled=True,
@@ -239,33 +253,31 @@ class Organization(BaseModel):
populate_by_name=True,
)
id: Optional[str] = "N/A"
name: Optional[str] = "N/A"
website_url: Optional[str] = "N/A"
blog_url: Optional[str] = "N/A"
angellist_url: Optional[str] = "N/A"
linkedin_url: Optional[str] = "N/A"
twitter_url: Optional[str] = "N/A"
facebook_url: Optional[str] = "N/A"
primary_phone: Optional[PrimaryPhone] = PrimaryPhone(
number="N/A", source="N/A", sanitized_number="N/A"
)
languages: list[str] = []
id: Optional[str] = ""
name: Optional[str] = ""
website_url: Optional[str] = ""
blog_url: Optional[str] = ""
angellist_url: Optional[str] = ""
linkedin_url: Optional[str] = ""
twitter_url: Optional[str] = ""
facebook_url: Optional[str] = ""
primary_phone: Optional[PrimaryPhone] = PrimaryPhone()
languages: Optional[list[str]] = []
alexa_ranking: Optional[int] = 0
phone: Optional[str] = "N/A"
linkedin_uid: Optional[str] = "N/A"
phone: Optional[str] = ""
linkedin_uid: Optional[str] = ""
founded_year: Optional[int] = 0
publicly_traded_symbol: Optional[str] = "N/A"
publicly_traded_exchange: Optional[str] = "N/A"
logo_url: Optional[str] = "N/A"
chrunchbase_url: Optional[str] = "N/A"
primary_domain: Optional[str] = "N/A"
sanitized_phone: Optional[str] = "N/A"
owned_by_organization_id: Optional[str] = "N/A"
intent_strength: Optional[str] = "N/A"
show_intent: bool = True
publicly_traded_symbol: Optional[str] = ""
publicly_traded_exchange: Optional[str] = ""
logo_url: Optional[str] = ""
chrunchbase_url: Optional[str] = ""
primary_domain: Optional[str] = ""
sanitized_phone: Optional[str] = ""
owned_by_organization_id: Optional[str] = ""
intent_strength: Optional[str] = ""
show_intent: Optional[bool] = True
has_intent_signal_account: Optional[bool] = True
intent_signal_account: Optional[str] = "N/A"
intent_signal_account: Optional[str] = ""
class Contact(BaseModel):
@@ -278,95 +290,95 @@ class Contact(BaseModel):
populate_by_name=True,
)
contact_roles: list[Any] = []
id: Optional[str] = None
first_name: Optional[str] = None
last_name: Optional[str] = None
name: Optional[str] = None
linkedin_url: Optional[str] = None
title: Optional[str] = None
contact_stage_id: Optional[str] = None
owner_id: Optional[str] = None
creator_id: Optional[str] = None
person_id: Optional[str] = None
email_needs_tickling: bool = True
organization_name: Optional[str] = None
source: Optional[str] = None
original_source: Optional[str] = None
organization_id: Optional[str] = None
headline: Optional[str] = None
photo_url: Optional[str] = None
present_raw_address: Optional[str] = None
linkededin_uid: Optional[str] = None
extrapolated_email_confidence: Optional[float] = None
salesforce_id: Optional[str] = None
salesforce_lead_id: Optional[str] = None
salesforce_contact_id: Optional[str] = None
saleforce_account_id: Optional[str] = None
crm_owner_id: Optional[str] = None
created_at: Optional[str] = None
emailer_campaign_ids: list[str] = []
direct_dial_status: Optional[str] = None
direct_dial_enrichment_failed_at: Optional[str] = None
email_status: Optional[str] = None
email_source: Optional[str] = None
account_id: Optional[str] = None
last_activity_date: Optional[str] = None
hubspot_vid: Optional[str] = None
hubspot_company_id: Optional[str] = None
crm_id: Optional[str] = None
sanitized_phone: Optional[str] = None
merged_crm_ids: Optional[str] = None
updated_at: Optional[str] = None
queued_for_crm_push: bool = True
suggested_from_rule_engine_config_id: Optional[str] = None
email_unsubscribed: Optional[str] = None
label_ids: list[Any] = []
has_pending_email_arcgate_request: bool = True
has_email_arcgate_request: bool = True
existence_level: Optional[str] = None
email: Optional[str] = None
email_from_customer: Optional[str] = None
typed_custom_fields: list[TypedCustomField] = []
custom_field_errors: Any = None
salesforce_record_id: Optional[str] = None
crm_record_url: Optional[str] = None
email_status_unavailable_reason: Optional[str] = None
email_true_status: Optional[str] = None
updated_email_true_status: bool = True
contact_rule_config_statuses: list[RuleConfigStatus] = []
source_display_name: Optional[str] = None
twitter_url: Optional[str] = None
contact_campaign_statuses: list[ContactCampaignStatus] = []
state: Optional[str] = None
city: Optional[str] = None
country: Optional[str] = None
account: Optional[Account] = None
contact_emails: list[ContactEmail] = []
organization: Optional[Organization] = None
employment_history: list[EmploymentHistory] = []
time_zone: Optional[str] = None
intent_strength: Optional[str] = None
show_intent: bool = True
phone_numbers: list[PhoneNumber] = []
account_phone_note: Optional[str] = None
free_domain: bool = True
is_likely_to_engage: bool = True
email_domain_catchall: bool = True
contact_job_change_event: Optional[str] = None
contact_roles: Optional[list[Any]] = []
id: Optional[str] = ""
first_name: Optional[str] = ""
last_name: Optional[str] = ""
name: Optional[str] = ""
linkedin_url: Optional[str] = ""
title: Optional[str] = ""
contact_stage_id: Optional[str] = ""
owner_id: Optional[str] = ""
creator_id: Optional[str] = ""
person_id: Optional[str] = ""
email_needs_tickling: Optional[bool] = True
organization_name: Optional[str] = ""
source: Optional[str] = ""
original_source: Optional[str] = ""
organization_id: Optional[str] = ""
headline: Optional[str] = ""
photo_url: Optional[str] = ""
present_raw_address: Optional[str] = ""
linkededin_uid: Optional[str] = ""
extrapolated_email_confidence: Optional[float] = 0.0
salesforce_id: Optional[str] = ""
salesforce_lead_id: Optional[str] = ""
salesforce_contact_id: Optional[str] = ""
saleforce_account_id: Optional[str] = ""
crm_owner_id: Optional[str] = ""
created_at: Optional[str] = ""
emailer_campaign_ids: Optional[list[str]] = []
direct_dial_status: Optional[str] = ""
direct_dial_enrichment_failed_at: Optional[str] = ""
email_status: Optional[str] = ""
email_source: Optional[str] = ""
account_id: Optional[str] = ""
last_activity_date: Optional[str] = ""
hubspot_vid: Optional[str] = ""
hubspot_company_id: Optional[str] = ""
crm_id: Optional[str] = ""
sanitized_phone: Optional[str] = ""
merged_crm_ids: Optional[str] = ""
updated_at: Optional[str] = ""
queued_for_crm_push: Optional[bool] = True
suggested_from_rule_engine_config_id: Optional[str] = ""
email_unsubscribed: Optional[str] = ""
label_ids: Optional[list[Any]] = []
has_pending_email_arcgate_request: Optional[bool] = True
has_email_arcgate_request: Optional[bool] = True
existence_level: Optional[str] = ""
email: Optional[str] = ""
email_from_customer: Optional[str] = ""
typed_custom_fields: Optional[list[TypedCustomField]] = []
custom_field_errors: Optional[Any] = {}
salesforce_record_id: Optional[str] = ""
crm_record_url: Optional[str] = ""
email_status_unavailable_reason: Optional[str] = ""
email_true_status: Optional[str] = ""
updated_email_true_status: Optional[bool] = True
contact_rule_config_statuses: Optional[list[RuleConfigStatus]] = []
source_display_name: Optional[str] = ""
twitter_url: Optional[str] = ""
contact_campaign_statuses: Optional[list[ContactCampaignStatus]] = []
state: Optional[str] = ""
city: Optional[str] = ""
country: Optional[str] = ""
account: Optional[Account] = Account()
contact_emails: Optional[list[ContactEmail]] = []
organization: Optional[Organization] = Organization()
employment_history: Optional[list[EmploymentHistory]] = []
time_zone: Optional[str] = ""
intent_strength: Optional[str] = ""
show_intent: Optional[bool] = True
phone_numbers: Optional[list[PhoneNumber]] = []
account_phone_note: Optional[str] = ""
free_domain: Optional[bool] = True
is_likely_to_engage: Optional[bool] = True
email_domain_catchall: Optional[bool] = True
contact_job_change_event: Optional[str] = ""
class SearchOrganizationsRequest(BaseModel):
"""Request for Apollo's search organizations API"""
organization_num_empoloyees_range: list[int] = SchemaField(
organization_num_employees_range: Optional[list[int]] = SchemaField(
description="""The number range of employees working for the company. This enables you to find companies based on headcount. You can add multiple ranges to expand your search results.
Each range you add needs to be a string, with the upper and lower numbers of the range separated only by a comma.""",
default=[0, 1000000],
)
organization_locations: list[str] = SchemaField(
organization_locations: Optional[list[str]] = SchemaField(
description="""The location of the company headquarters. You can search across cities, US states, and countries.
If a company has several office locations, results are still based on the headquarters location. For example, if you search chicago but a company's HQ location is in boston, any Boston-based companies will not appear in your search results, even if they match other parameters.
@@ -375,28 +387,30 @@ To exclude companies based on location, use the organization_not_locations param
""",
default_factory=list,
)
organizations_not_locations: list[str] = SchemaField(
organizations_not_locations: Optional[list[str]] = SchemaField(
description="""Exclude companies from search results based on the location of the company headquarters. You can use cities, US states, and countries as locations to exclude.
This parameter is useful for ensuring you do not prospect in an undesirable territory. For example, if you use ireland as a value, no Ireland-based companies will appear in your search results.
""",
default_factory=list,
)
q_organization_keyword_tags: list[str] = SchemaField(
description="""Filter search results based on keywords associated with companies. For example, you can enter mining as a value to return only companies that have an association with the mining industry."""
q_organization_keyword_tags: Optional[list[str]] = SchemaField(
description="""Filter search results based on keywords associated with companies. For example, you can enter mining as a value to return only companies that have an association with the mining industry.""",
default_factory=list,
)
q_organization_name: str = SchemaField(
q_organization_name: Optional[str] = SchemaField(
description="""Filter search results to include a specific company name.
If the value you enter for this parameter does not match with a company's name, the company will not appear in search results, even if it matches other parameters. Partial matches are accepted. For example, if you filter by the value marketing, a company called NY Marketing Unlimited would still be eligible as a search result, but NY Market Analysis would not be eligible."""
If the value you enter for this parameter does not match with a company's name, the company will not appear in search results, even if it matches other parameters. Partial matches are accepted. For example, if you filter by the value marketing, a company called NY Marketing Unlimited would still be eligible as a search result, but NY Market Analysis would not be eligible.""",
default="",
)
organization_ids: list[str] = SchemaField(
organization_ids: Optional[list[str]] = SchemaField(
description="""The Apollo IDs for the companies you want to include in your search results. Each company in the Apollo database is assigned a unique ID.
To find IDs, identify the values for organization_id when you call this endpoint.""",
default_factory=list,
)
max_results: int = SchemaField(
max_results: Optional[int] = SchemaField(
description="""The maximum number of results to return. If you don't specify this parameter, the default is 100.""",
default=100,
ge=1,
@@ -421,11 +435,11 @@ Use the page parameter to search the different pages of data.""",
class SearchOrganizationsResponse(BaseModel):
"""Response from Apollo's search organizations API"""
breadcrumbs: list[Breadcrumb] = []
partial_results_only: bool = True
has_join: bool = True
disable_eu_prospecting: bool = True
partial_results_limit: int = 0
breadcrumbs: Optional[list[Breadcrumb]] = []
partial_results_only: Optional[bool] = True
has_join: Optional[bool] = True
disable_eu_prospecting: Optional[bool] = True
partial_results_limit: Optional[int] = 0
pagination: Pagination = Pagination(
page=0, per_page=0, total_entries=0, total_pages=0
)
@@ -433,14 +447,14 @@ class SearchOrganizationsResponse(BaseModel):
accounts: list[Any] = []
organizations: list[Organization] = []
models_ids: list[str] = []
num_fetch_result: Optional[str] = "N/A"
derived_params: Optional[str] = "N/A"
num_fetch_result: Optional[str] = ""
derived_params: Optional[str] = ""
class SearchPeopleRequest(BaseModel):
"""Request for Apollo's search people API"""
person_titles: list[str] = SchemaField(
person_titles: Optional[list[str]] = SchemaField(
description="""Job titles held by the people you want to find. For a person to be included in search results, they only need to match 1 of the job titles you add. Adding more job titles expands your search results.
Results also include job titles with the same terms, even if they are not exact matches. For example, searching for marketing manager might return people with the job title content marketing manager.
@@ -450,13 +464,13 @@ Use this parameter in combination with the person_seniorities[] parameter to fin
default_factory=list,
placeholder="marketing manager",
)
person_locations: list[str] = SchemaField(
person_locations: Optional[list[str]] = SchemaField(
description="""The location where people live. You can search across cities, US states, and countries.
To find people based on the headquarters locations of their current employer, use the organization_locations parameter.""",
default_factory=list,
)
person_seniorities: list[SenorityLevels] = SchemaField(
person_seniorities: Optional[list[SenorityLevels]] = SchemaField(
description="""The job seniority that people hold within their current employer. This enables you to find people that currently hold positions at certain reporting levels, such as Director level or senior IC level.
For a person to be included in search results, they only need to match 1 of the seniorities you add. Adding more seniorities expands your search results.
@@ -466,7 +480,7 @@ Searches only return results based on their current job title, so searching for
Use this parameter in combination with the person_titles[] parameter to find people based on specific job functions and seniority levels.""",
default_factory=list,
)
organization_locations: list[str] = SchemaField(
organization_locations: Optional[list[str]] = SchemaField(
description="""The location of the company headquarters for a person's current employer. You can search across cities, US states, and countries.
If a company has several office locations, results are still based on the headquarters location. For example, if you search chicago but a company's HQ location is in boston, people that work for the Boston-based company will not appear in your results, even if they match other parameters.
@@ -474,7 +488,7 @@ If a company has several office locations, results are still based on the headqu
To find people based on their personal location, use the person_locations parameter.""",
default_factory=list,
)
q_organization_domains: list[str] = SchemaField(
q_organization_domains: Optional[list[str]] = SchemaField(
description="""The domain name for the person's employer. This can be the current employer or a previous employer. Do not include www., the @ symbol, or similar.
You can add multiple domains to search across companies.
@@ -482,23 +496,23 @@ You can add multiple domains to search across companies.
Examples: apollo.io and microsoft.com""",
default_factory=list,
)
contact_email_statuses: list[ContactEmailStatuses] = SchemaField(
contact_email_statuses: Optional[list[ContactEmailStatuses]] = SchemaField(
description="""The email statuses for the people you want to find. You can add multiple statuses to expand your search.""",
default_factory=list,
)
organization_ids: list[str] = SchemaField(
organization_ids: Optional[list[str]] = SchemaField(
description="""The Apollo IDs for the companies (employers) you want to include in your search results. Each company in the Apollo database is assigned a unique ID.
To find IDs, call the Organization Search endpoint and identify the values for organization_id.""",
default_factory=list,
)
organization_num_empoloyees_range: list[int] = SchemaField(
organization_num_employees_range: Optional[list[int]] = SchemaField(
description="""The number range of employees working for the company. This enables you to find companies based on headcount. You can add multiple ranges to expand your search results.
Each range you add needs to be a string, with the upper and lower numbers of the range separated only by a comma.""",
default_factory=list,
)
q_keywords: str = SchemaField(
q_keywords: Optional[str] = SchemaField(
description="""A string of words over which we want to filter the results""",
default="",
)
@@ -514,7 +528,7 @@ Use this parameter in combination with the per_page parameter to make search res
Use the page parameter to search the different pages of data.""",
default=100,
)
max_results: int = SchemaField(
max_results: Optional[int] = SchemaField(
description="""The maximum number of results to return. If you don't specify this parameter, the default is 100.""",
default=100,
ge=1,
@@ -533,16 +547,61 @@ class SearchPeopleResponse(BaseModel):
populate_by_name=True,
)
breadcrumbs: list[Breadcrumb] = []
partial_results_only: bool = True
has_join: bool = True
disable_eu_prospecting: bool = True
partial_results_limit: int = 0
breadcrumbs: Optional[list[Breadcrumb]] = []
partial_results_only: Optional[bool] = True
has_join: Optional[bool] = True
disable_eu_prospecting: Optional[bool] = True
partial_results_limit: Optional[int] = 0
pagination: Pagination = Pagination(
page=0, per_page=0, total_entries=0, total_pages=0
)
contacts: list[Contact] = []
people: list[Contact] = []
model_ids: list[str] = []
num_fetch_result: Optional[str] = "N/A"
derived_params: Optional[str] = "N/A"
num_fetch_result: Optional[str] = ""
derived_params: Optional[str] = ""
class EnrichPersonRequest(BaseModel):
"""Request for Apollo's person enrichment API"""
person_id: Optional[str] = SchemaField(
description="Apollo person ID to enrich (most accurate method)",
default="",
)
first_name: Optional[str] = SchemaField(
description="First name of the person to enrich",
default="",
)
last_name: Optional[str] = SchemaField(
description="Last name of the person to enrich",
default="",
)
name: Optional[str] = SchemaField(
description="Full name of the person to enrich",
default="",
)
email: Optional[str] = SchemaField(
description="Email address of the person to enrich",
default="",
)
domain: Optional[str] = SchemaField(
description="Company domain of the person to enrich",
default="",
)
company: Optional[str] = SchemaField(
description="Company name of the person to enrich",
default="",
)
linkedin_url: Optional[str] = SchemaField(
description="LinkedIn URL of the person to enrich",
default="",
)
organization_id: Optional[str] = SchemaField(
description="Apollo organization ID of the person's company",
default="",
)
title: Optional[str] = SchemaField(
description="Job title of the person to enrich",
default="",
)
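As a usage sketch, enrichment can be keyed off an Apollo person_id alone, or fall back to name-and-employer matching; all values below are illustrative:

# Most accurate: enrich by Apollo person ID (illustrative ID)
by_id = EnrichPersonRequest(person_id="5f4e3d2c1b0a9f8e7d6c5b4a")

# Fallback: match on name plus employer details (illustrative values)
by_name = EnrichPersonRequest(
    first_name="Jane",
    last_name="Smith",
    company="Acme Corp",
    domain="acme.com",
)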

View File

@@ -11,14 +11,14 @@ from backend.blocks.apollo.models import (
SearchOrganizationsRequest,
)
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.model import SchemaField
from backend.data.model import CredentialsField, SchemaField
class SearchOrganizationsBlock(Block):
"""Search for organizations in Apollo"""
class Input(BlockSchema):
organization_num_empoloyees_range: list[int] = SchemaField(
organization_num_employees_range: list[int] = SchemaField(
description="""The number range of employees working for the company. This enables you to find companies based on headcount. You can add multiple ranges to expand your search results.
Each range you add needs to be a string, with the upper and lower numbers of the range separated only by a comma.""",
@@ -65,7 +65,7 @@ To find IDs, identify the values for organization_id when you call this endpoint
le=50000,
advanced=True,
)
credentials: ApolloCredentialsInput = SchemaField(
credentials: ApolloCredentialsInput = CredentialsField(
description="Apollo credentials",
)
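Note the swap from SchemaField to CredentialsField for the credentials input; presumably CredentialsField is what marks the input as a credentials slot in the block schema rather than a plain data field. The pattern, as used throughout this diff:

from backend.data.model import CredentialsField, SchemaField

class Input(BlockSchema):
    # Plain data inputs keep using SchemaField
    q_organization_name: str = SchemaField(
        description="Filter by company name", default=""
    )
    # Credentials use the dedicated field constructor
    credentials: ApolloCredentialsInput = CredentialsField(
        description="Apollo credentials",
    )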
@@ -201,19 +201,17 @@ To find IDs, identify the values for organization_id when you call this endpoint
)
@staticmethod
def search_organizations(
async def search_organizations(
query: SearchOrganizationsRequest, credentials: ApolloCredentials
) -> list[Organization]:
client = ApolloClient(credentials)
return client.search_organizations(query)
return await client.search_organizations(query)
def run(
async def run(
self, input_data: Input, *, credentials: ApolloCredentials, **kwargs
) -> BlockOutput:
query = SearchOrganizationsRequest(
**input_data.model_dump(exclude={"credentials"})
)
organizations = self.search_organizations(query, credentials)
query = SearchOrganizationsRequest(**input_data.model_dump())
organizations = await self.search_organizations(query, credentials)
for organization in organizations:
yield "organization", organization
yield "organizations", organizations

View File

@@ -1,3 +1,5 @@
import asyncio
from backend.blocks.apollo._api import ApolloClient
from backend.blocks.apollo._auth import (
TEST_CREDENTIALS,
@@ -8,11 +10,12 @@ from backend.blocks.apollo._auth import (
from backend.blocks.apollo.models import (
Contact,
ContactEmailStatuses,
EnrichPersonRequest,
SearchPeopleRequest,
SenorityLevels,
)
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.model import SchemaField
from backend.data.model import CredentialsField, SchemaField
class SearchPeopleBlock(Block):
@@ -77,7 +80,7 @@ class SearchPeopleBlock(Block):
default_factory=list,
advanced=False,
)
organization_num_empoloyees_range: list[int] = SchemaField(
organization_num_employees_range: list[int] = SchemaField(
description="""The number range of employees working for the company. This enables you to find companies based on headcount. You can add multiple ranges to expand your search results.
Each range you add needs to be a string, with the upper and lower numbers of the range separated only by a comma.""",
@@ -90,14 +93,19 @@ class SearchPeopleBlock(Block):
advanced=False,
)
max_results: int = SchemaField(
description="""The maximum number of results to return. If you don't specify this parameter, the default is 100.""",
default=100,
description="""The maximum number of results to return. If you don't specify this parameter, the default is 25. Limited to 500 to prevent overspending.""",
default=25,
ge=1,
le=50000,
le=500,
advanced=True,
)
enrich_info: bool = SchemaField(
description="""Whether to enrich contacts with detailed information including real email addresses. This will double the search cost.""",
default=False,
advanced=True,
)
credentials: ApolloCredentialsInput = SchemaField(
credentials: ApolloCredentialsInput = CredentialsField(
description="Apollo credentials",
)
@@ -106,9 +114,6 @@ class SearchPeopleBlock(Block):
description="List of people found",
default_factory=list,
)
person: Contact = SchemaField(
description="Each found person, one at a time",
)
error: str = SchemaField(
description="Error message if the search failed",
default="",
@@ -124,87 +129,6 @@ class SearchPeopleBlock(Block):
test_credentials=TEST_CREDENTIALS,
test_input={"credentials": TEST_CREDENTIALS_INPUT},
test_output=[
(
"person",
Contact(
contact_roles=[],
id="1",
name="John Doe",
first_name="John",
last_name="Doe",
linkedin_url="https://www.linkedin.com/in/johndoe",
title="Software Engineer",
organization_name="Google",
organization_id="123456",
contact_stage_id="1",
owner_id="1",
creator_id="1",
person_id="1",
email_needs_tickling=True,
source="apollo",
original_source="apollo",
headline="Software Engineer",
photo_url="https://www.linkedin.com/in/johndoe",
present_raw_address="123 Main St, Anytown, USA",
linkededin_uid="123456",
extrapolated_email_confidence=0.8,
salesforce_id="123456",
salesforce_lead_id="123456",
salesforce_contact_id="123456",
saleforce_account_id="123456",
crm_owner_id="123456",
created_at="2021-01-01",
emailer_campaign_ids=[],
direct_dial_status="active",
direct_dial_enrichment_failed_at="2021-01-01",
email_status="active",
email_source="apollo",
account_id="123456",
last_activity_date="2021-01-01",
hubspot_vid="123456",
hubspot_company_id="123456",
crm_id="123456",
sanitized_phone="123456",
merged_crm_ids="123456",
updated_at="2021-01-01",
queued_for_crm_push=True,
suggested_from_rule_engine_config_id="123456",
email_unsubscribed=None,
label_ids=[],
has_pending_email_arcgate_request=True,
has_email_arcgate_request=True,
existence_level=None,
email=None,
email_from_customer=None,
typed_custom_fields=[],
custom_field_errors=None,
salesforce_record_id=None,
crm_record_url=None,
email_status_unavailable_reason=None,
email_true_status=None,
updated_email_true_status=True,
contact_rule_config_statuses=[],
source_display_name=None,
twitter_url=None,
contact_campaign_statuses=[],
state=None,
city=None,
country=None,
account=None,
contact_emails=[],
organization=None,
employment_history=[],
time_zone=None,
intent_strength=None,
show_intent=True,
phone_numbers=[],
account_phone_note=None,
free_domain=True,
is_likely_to_engage=True,
email_domain_catchall=True,
contact_job_change_event=None,
),
),
(
"people",
[
@@ -373,13 +297,41 @@ class SearchPeopleBlock(Block):
)
@staticmethod
def search_people(
async def search_people(
query: SearchPeopleRequest, credentials: ApolloCredentials
) -> list[Contact]:
client = ApolloClient(credentials)
return client.search_people(query)
return await client.search_people(query)
def run(
@staticmethod
async def enrich_person(
query: EnrichPersonRequest, credentials: ApolloCredentials
) -> Contact:
client = ApolloClient(credentials)
return await client.enrich_person(query)
@staticmethod
def merge_contact_data(original: Contact, enriched: Contact) -> Contact:
"""
Merge contact data from original search with enriched data.
Enriched data complements original data, only filling in missing values.
"""
merged_data = original.model_dump()
enriched_data = enriched.model_dump()
# Fill in only fields that are missing or empty in the original
for key, enriched_value in enriched_data.items():
# Skip if the enriched value is None, empty string, or empty list
if enriched_value is None or enriched_value == "" or enriched_value == []:
continue
# Update only if the original value is falsy (None, "", [], or 0)
if not merged_data.get(key):
merged_data[key] = enriched_value
return Contact(**merged_data)
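A quick illustration of the merge semantics with made-up values: fields already populated in the search result are kept, while empty ones are filled from the enriched record.

original = Contact(id="1", name="John Doe", email="", title="")
enriched = Contact(id="1", name="J. Doe", email="john.doe@gmail.com", title="Engineer")

merged = SearchPeopleBlock.merge_contact_data(original, enriched)
assert merged.email == "john.doe@gmail.com"  # empty field filled from enrichment
assert merged.title == "Engineer"            # empty field filled from enrichment
assert merged.name == "John Doe"             # populated field kept from the original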
async def run(
self,
input_data: Input,
*,
@@ -387,8 +339,25 @@ class SearchPeopleBlock(Block):
**kwargs,
) -> BlockOutput:
query = SearchPeopleRequest(**input_data.model_dump(exclude={"credentials"}))
people = self.search_people(query, credentials)
for person in people:
yield "person", person
query = SearchPeopleRequest(**input_data.model_dump())
people = await self.search_people(query, credentials)
# Enrich with detailed info if requested
if input_data.enrich_info:
async def enrich_or_fallback(person: Contact):
try:
enrich_query = EnrichPersonRequest(person_id=person.id)
enriched_person = await self.enrich_person(
enrich_query, credentials
)
# Merge enriched data with original data, complementing instead of replacing
return self.merge_contact_data(person, enriched_person)
except Exception:
return person # If enrichment fails, use original person data
people = await asyncio.gather(
*(enrich_or_fallback(person) for person in people)
)
yield "people", people

View File

@@ -0,0 +1,138 @@
from backend.blocks.apollo._api import ApolloClient
from backend.blocks.apollo._auth import (
TEST_CREDENTIALS,
TEST_CREDENTIALS_INPUT,
ApolloCredentials,
ApolloCredentialsInput,
)
from backend.blocks.apollo.models import Contact, EnrichPersonRequest
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.model import CredentialsField, SchemaField
class GetPersonDetailBlock(Block):
"""Get detailed person data with Apollo API, including email reveal"""
class Input(BlockSchema):
person_id: str = SchemaField(
description="Apollo person ID to enrich (most accurate method)",
default="",
advanced=False,
)
first_name: str = SchemaField(
description="First name of the person to enrich",
default="",
advanced=False,
)
last_name: str = SchemaField(
description="Last name of the person to enrich",
default="",
advanced=False,
)
name: str = SchemaField(
description="Full name of the person to enrich (alternative to first_name + last_name)",
default="",
advanced=False,
)
email: str = SchemaField(
description="Known email address of the person (helps with matching)",
default="",
advanced=False,
)
domain: str = SchemaField(
description="Company domain of the person (e.g., 'google.com')",
default="",
advanced=False,
)
company: str = SchemaField(
description="Company name of the person",
default="",
advanced=False,
)
linkedin_url: str = SchemaField(
description="LinkedIn URL of the person",
default="",
advanced=False,
)
organization_id: str = SchemaField(
description="Apollo organization ID of the person's company",
default="",
advanced=True,
)
title: str = SchemaField(
description="Job title of the person to enrich",
default="",
advanced=True,
)
credentials: ApolloCredentialsInput = CredentialsField(
description="Apollo credentials",
)
class Output(BlockSchema):
contact: Contact = SchemaField(
description="Enriched contact information",
)
error: str = SchemaField(
description="Error message if enrichment failed",
default="",
)
def __init__(self):
super().__init__(
id="3b18d46c-3db6-42ae-a228-0ba441bdd176",
description="Get detailed person data with Apollo API, including email reveal",
categories={BlockCategory.SEARCH},
input_schema=GetPersonDetailBlock.Input,
output_schema=GetPersonDetailBlock.Output,
test_credentials=TEST_CREDENTIALS,
test_input={
"credentials": TEST_CREDENTIALS_INPUT,
"first_name": "John",
"last_name": "Doe",
"company": "Google",
},
test_output=[
(
"contact",
Contact(
id="1",
name="John Doe",
first_name="John",
last_name="Doe",
email="john.doe@gmail.com",
title="Software Engineer",
organization_name="Google",
linkedin_url="https://www.linkedin.com/in/johndoe",
),
),
],
test_mock={
"enrich_person": lambda query, credentials: Contact(
id="1",
name="John Doe",
first_name="John",
last_name="Doe",
email="john.doe@gmail.com",
title="Software Engineer",
organization_name="Google",
linkedin_url="https://www.linkedin.com/in/johndoe",
)
},
)
@staticmethod
async def enrich_person(
query: EnrichPersonRequest, credentials: ApolloCredentials
) -> Contact:
client = ApolloClient(credentials)
return await client.enrich_person(query)
async def run(
self,
input_data: Input,
*,
credentials: ApolloCredentials,
**kwargs,
) -> BlockOutput:
query = EnrichPersonRequest(**input_data.model_dump())
yield "contact", await self.enrich_person(query, credentials)

View File

@@ -1,11 +1,9 @@
import enum
from typing import Any, List
from typing import Any
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema, BlockType
from backend.data.model import SchemaField
from backend.util import json
from backend.util.file import store_media_file
from backend.util.mock import MockObject
from backend.util.type import MediaFileType, convert
@@ -14,6 +12,12 @@ class FileStoreBlock(Block):
file_in: MediaFileType = SchemaField(
description="The file to store in the temporary directory, it can be a URL, data URI, or local path."
)
base_64: bool = SchemaField(
description="Whether produce an output in base64 format (not recommended, you can pass the string path just fine accross blocks).",
default=False,
advanced=True,
title="Produce Base64 Output",
)
class Output(BlockSchema):
file_out: MediaFileType = SchemaField(
@@ -30,19 +34,18 @@ class FileStoreBlock(Block):
static_output=True,
)
def run(
async def run(
self,
input_data: Input,
*,
graph_exec_id: str,
**kwargs,
) -> BlockOutput:
file_path = store_media_file(
yield "file_out", await store_media_file(
graph_exec_id=graph_exec_id,
file=input_data.file_in,
return_content=False,
return_content=input_data.base_64,
)
yield "file_out", file_path
class StoreValueBlock(Block):
@@ -84,268 +87,35 @@ class StoreValueBlock(Block):
static_output=True,
)
def run(self, input_data: Input, **kwargs) -> BlockOutput:
async def run(self, input_data: Input, **kwargs) -> BlockOutput:
yield "output", input_data.data or input_data.input
class FindInDictionaryBlock(Block):
class PrintToConsoleBlock(Block):
class Input(BlockSchema):
input: Any = SchemaField(description="Dictionary to lookup from")
key: str | int = SchemaField(description="Key to lookup in the dictionary")
text: Any = SchemaField(description="The data to print to the console.")
class Output(BlockSchema):
output: Any = SchemaField(description="Value found for the given key")
missing: Any = SchemaField(
description="Value of the input that missing the key"
)
output: Any = SchemaField(description="The data printed to the console.")
status: str = SchemaField(description="The status of the print operation.")
def __init__(self):
super().__init__(
id="0e50422c-6dee-4145-83d6-3a5a392f65de",
description="Lookup the given key in the input dictionary/object/list and return the value.",
input_schema=FindInDictionaryBlock.Input,
output_schema=FindInDictionaryBlock.Output,
test_input=[
{"input": {"apple": 1, "banana": 2, "cherry": 3}, "key": "banana"},
{"input": {"x": 10, "y": 20, "z": 30}, "key": "w"},
{"input": [1, 2, 3], "key": 1},
{"input": [1, 2, 3], "key": 3},
{"input": MockObject(value="!!", key="key"), "key": "key"},
{"input": [{"k1": "v1"}, {"k2": "v2"}, {"k1": "v3"}], "key": "k1"},
],
test_output=[
("output", 2),
("missing", {"x": 10, "y": 20, "z": 30}),
("output", 2),
("missing", [1, 2, 3]),
("output", "key"),
("output", ["v1", "v3"]),
],
id="f3b1c1b2-4c4f-4f0d-8d2f-4c4f0d8d2f4c",
description="Print the given text to the console, this is used for a debugging purpose.",
categories={BlockCategory.BASIC},
)
def run(self, input_data: Input, **kwargs) -> BlockOutput:
obj = input_data.input
key = input_data.key
if isinstance(obj, str):
obj = json.loads(obj)
if isinstance(obj, dict) and key in obj:
yield "output", obj[key]
elif isinstance(obj, list) and isinstance(key, int) and 0 <= key < len(obj):
yield "output", obj[key]
elif isinstance(obj, list) and isinstance(key, str):
if len(obj) == 0:
yield "output", []
elif isinstance(obj[0], dict) and key in obj[0]:
yield "output", [item[key] for item in obj if key in item]
else:
yield "output", [getattr(val, key) for val in obj if hasattr(val, key)]
elif isinstance(obj, object) and isinstance(key, str) and hasattr(obj, key):
yield "output", getattr(obj, key)
else:
yield "missing", input_data.input
class AddToDictionaryBlock(Block):
class Input(BlockSchema):
dictionary: dict[Any, Any] = SchemaField(
default_factory=dict,
description="The dictionary to add the entry to. If not provided, a new dictionary will be created.",
)
key: str = SchemaField(
default="",
description="The key for the new entry.",
placeholder="new_key",
advanced=False,
)
value: Any = SchemaField(
default=None,
description="The value for the new entry.",
placeholder="new_value",
advanced=False,
)
entries: dict[Any, Any] = SchemaField(
default_factory=dict,
description="The entries to add to the dictionary. This is the batch version of the `key` and `value` fields.",
advanced=True,
)
class Output(BlockSchema):
updated_dictionary: dict = SchemaField(
description="The dictionary with the new entry added."
)
error: str = SchemaField(description="Error message if the operation failed.")
def __init__(self):
super().__init__(
id="31d1064e-7446-4693-a7d4-65e5ca1180d1",
description="Adds a new key-value pair to a dictionary. If no dictionary is provided, a new one is created.",
categories={BlockCategory.BASIC},
input_schema=AddToDictionaryBlock.Input,
output_schema=AddToDictionaryBlock.Output,
test_input=[
{
"dictionary": {"existing_key": "existing_value"},
"key": "new_key",
"value": "new_value",
},
{"key": "first_key", "value": "first_value"},
{
"dictionary": {"existing_key": "existing_value"},
"entries": {"new_key": "new_value", "first_key": "first_value"},
},
],
input_schema=PrintToConsoleBlock.Input,
output_schema=PrintToConsoleBlock.Output,
test_input={"text": "Hello, World!"},
test_output=[
(
"updated_dictionary",
{"existing_key": "existing_value", "new_key": "new_value"},
),
("updated_dictionary", {"first_key": "first_value"}),
(
"updated_dictionary",
{
"existing_key": "existing_value",
"new_key": "new_value",
"first_key": "first_value",
},
),
("output", "Hello, World!"),
("status", "printed"),
],
)
def run(self, input_data: Input, **kwargs) -> BlockOutput:
updated_dict = input_data.dictionary.copy()
if input_data.value is not None and input_data.key:
updated_dict[input_data.key] = input_data.value
for key, value in input_data.entries.items():
updated_dict[key] = value
yield "updated_dictionary", updated_dict
class AddToListBlock(Block):
class Input(BlockSchema):
list: List[Any] = SchemaField(
default_factory=list,
advanced=False,
description="The list to add the entry to. If not provided, a new list will be created.",
)
entry: Any = SchemaField(
description="The entry to add to the list. Can be of any type (string, int, dict, etc.).",
advanced=False,
default=None,
)
entries: List[Any] = SchemaField(
default_factory=lambda: list(),
description="The entries to add to the list. This is the batch version of the `entry` field.",
advanced=True,
)
position: int | None = SchemaField(
default=None,
description="The position to insert the new entry. If not provided, the entry will be appended to the end of the list.",
)
class Output(BlockSchema):
updated_list: List[Any] = SchemaField(
description="The list with the new entry added."
)
error: str = SchemaField(description="Error message if the operation failed.")
def __init__(self):
super().__init__(
id="aeb08fc1-2fc1-4141-bc8e-f758f183a822",
description="Adds a new entry to a list. The entry can be of any type. If no list is provided, a new one is created.",
categories={BlockCategory.BASIC},
input_schema=AddToListBlock.Input,
output_schema=AddToListBlock.Output,
test_input=[
{
"list": [1, "string", {"existing_key": "existing_value"}],
"entry": {"new_key": "new_value"},
"position": 1,
},
{"entry": "first_entry"},
{"list": ["a", "b", "c"], "entry": "d"},
{
"entry": "e",
"entries": ["f", "g"],
"list": ["a", "b"],
"position": 1,
},
],
test_output=[
(
"updated_list",
[
1,
{"new_key": "new_value"},
"string",
{"existing_key": "existing_value"},
],
),
("updated_list", ["first_entry"]),
("updated_list", ["a", "b", "c", "d"]),
("updated_list", ["a", "f", "g", "e", "b"]),
],
)
def run(self, input_data: Input, **kwargs) -> BlockOutput:
entries_added = input_data.entries.copy()
if input_data.entry:
entries_added.append(input_data.entry)
updated_list = input_data.list.copy()
if (pos := input_data.position) is not None:
updated_list = updated_list[:pos] + entries_added + updated_list[pos:]
else:
updated_list += entries_added
yield "updated_list", updated_list
class FindInListBlock(Block):
class Input(BlockSchema):
list: List[Any] = SchemaField(description="The list to search in.")
value: Any = SchemaField(description="The value to search for.")
class Output(BlockSchema):
index: int = SchemaField(description="The index of the value in the list.")
found: bool = SchemaField(
description="Whether the value was found in the list."
)
not_found_value: Any = SchemaField(
description="The value that was not found in the list."
)
def __init__(self):
super().__init__(
id="5e2c6d0a-1e37-489f-b1d0-8e1812b23333",
description="Finds the index of the value in the list.",
categories={BlockCategory.BASIC},
input_schema=FindInListBlock.Input,
output_schema=FindInListBlock.Output,
test_input=[
{"list": [1, 2, 3, 4, 5], "value": 3},
{"list": [1, 2, 3, 4, 5], "value": 6},
],
test_output=[
("index", 2),
("found", True),
("found", False),
("not_found_value", 6),
],
)
def run(self, input_data: Input, **kwargs) -> BlockOutput:
try:
yield "index", input_data.list.index(input_data.value)
yield "found", True
except ValueError:
yield "found", False
yield "not_found_value", input_data.value
async def run(self, input_data: Input, **kwargs) -> BlockOutput:
yield "output", input_data.text
yield "status", "printed"
class NoteBlock(Block):
@@ -369,108 +139,10 @@ class NoteBlock(Block):
block_type=BlockType.NOTE,
)
def run(self, input_data: Input, **kwargs) -> BlockOutput:
async def run(self, input_data: Input, **kwargs) -> BlockOutput:
yield "output", input_data.text
class CreateDictionaryBlock(Block):
class Input(BlockSchema):
values: dict[str, Any] = SchemaField(
description="Key-value pairs to create the dictionary with",
placeholder="e.g., {'name': 'Alice', 'age': 25}",
)
class Output(BlockSchema):
dictionary: dict[str, Any] = SchemaField(
description="The created dictionary containing the specified key-value pairs"
)
error: str = SchemaField(
description="Error message if dictionary creation failed"
)
def __init__(self):
super().__init__(
id="b924ddf4-de4f-4b56-9a85-358930dcbc91",
description="Creates a dictionary with the specified key-value pairs. Use this when you know all the values you want to add upfront.",
categories={BlockCategory.DATA},
input_schema=CreateDictionaryBlock.Input,
output_schema=CreateDictionaryBlock.Output,
test_input=[
{
"values": {"name": "Alice", "age": 25, "city": "New York"},
},
{
"values": {"numbers": [1, 2, 3], "active": True, "score": 95.5},
},
],
test_output=[
(
"dictionary",
{"name": "Alice", "age": 25, "city": "New York"},
),
(
"dictionary",
{"numbers": [1, 2, 3], "active": True, "score": 95.5},
),
],
)
def run(self, input_data: Input, **kwargs) -> BlockOutput:
try:
# The values are already validated by Pydantic schema
yield "dictionary", input_data.values
except Exception as e:
yield "error", f"Failed to create dictionary: {str(e)}"
class CreateListBlock(Block):
class Input(BlockSchema):
values: List[Any] = SchemaField(
description="A list of values to be combined into a new list.",
placeholder="e.g., ['Alice', 25, True]",
)
class Output(BlockSchema):
list: List[Any] = SchemaField(
description="The created list containing the specified values."
)
error: str = SchemaField(description="Error message if list creation failed.")
def __init__(self):
super().__init__(
id="a912d5c7-6e00-4542-b2a9-8034136930e4",
description="Creates a list with the specified values. Use this when you know all the values you want to add upfront.",
categories={BlockCategory.DATA},
input_schema=CreateListBlock.Input,
output_schema=CreateListBlock.Output,
test_input=[
{
"values": ["Alice", 25, True],
},
{
"values": [1, 2, 3, "four", {"key": "value"}],
},
],
test_output=[
(
"list",
["Alice", 25, True],
),
(
"list",
[1, 2, 3, "four", {"key": "value"}],
),
],
)
def run(self, input_data: Input, **kwargs) -> BlockOutput:
try:
# The values are already validated by Pydantic schema
yield "list", input_data.values
except Exception as e:
yield "error", f"Failed to create list: {str(e)}"
class TypeOptions(enum.Enum):
STRING = "string"
NUMBER = "number"
@@ -488,6 +160,7 @@ class UniversalTypeConverterBlock(Block):
class Output(BlockSchema):
value: Any = SchemaField(description="The converted value.")
error: str = SchemaField(description="Error message if conversion failed.")
def __init__(self):
super().__init__(
@@ -498,7 +171,7 @@ class UniversalTypeConverterBlock(Block):
output_schema=UniversalTypeConverterBlock.Output,
)
def run(self, input_data: Input, **kwargs) -> BlockOutput:
async def run(self, input_data: Input, **kwargs) -> BlockOutput:
try:
converted_value = convert(
input_data.value,

View File

@@ -38,7 +38,7 @@ class BlockInstallationBlock(Block):
disabled=True,
)
def run(self, input_data: Input, **kwargs) -> BlockOutput:
async def run(self, input_data: Input, **kwargs) -> BlockOutput:
code = input_data.code
if search := re.search(r"class (\w+)\(Block\):", code):
@@ -64,7 +64,7 @@ class BlockInstallationBlock(Block):
from backend.util.test import execute_block_test
execute_block_test(block)
await execute_block_test(block)
yield "success", "Block installed successfully."
except Exception as e:
os.remove(file_path)

View File

@@ -3,6 +3,7 @@ from typing import Any
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.model import SchemaField
from backend.util.type import convert
class ComparisonOperator(Enum):
@@ -70,7 +71,7 @@ class ConditionBlock(Block):
],
)
def run(self, input_data: Input, **kwargs) -> BlockOutput:
async def run(self, input_data: Input, **kwargs) -> BlockOutput:
operator = input_data.operator
value1 = input_data.value1
@@ -163,7 +164,7 @@ class IfInputMatchesBlock(Block):
},
{
"input": 10,
"value": None,
"value": "None",
"yes_value": "Yes",
"no_value": "No",
},
@@ -180,8 +181,24 @@ class IfInputMatchesBlock(Block):
],
)
def run(self, input_data: Input, **kwargs) -> BlockOutput:
if input_data.input == input_data.value or input_data.input is input_data.value:
async def run(self, input_data: Input, **kwargs) -> BlockOutput:
# If input_data.value does not match input_data.input, convert it to the type of input
if (
input_data.input != input_data.value
and input_data.input is not input_data.value
):
try:
# Only attempt conversion if input is not None and value is not None
if input_data.input is not None and input_data.value is not None:
input_type = type(input_data.input)
# Avoid converting if input_type is Any or object
if input_type not in (Any, object):
input_data.value = convert(input_data.value, input_type)
except Exception:
pass # If conversion fails, just leave value as is
if input_data.input == input_data.value:
yield "result", True
yield "yes_output", input_data.yes_value
else:

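The practical effect of the coercion step: inputs that previously failed on type mismatch alone now match. A small sketch, assuming convert from backend.util.type casts between primitive types:

input_value = 10
match_value = "10"  # before this change, 10 == "10" is False -> "no" branch

if input_value != match_value and input_value is not match_value:
    try:
        match_value = convert(match_value, type(input_value))  # "10" -> 10
    except Exception:
        pass  # leave the value as-is if conversion fails

assert input_value == match_value  # now takes the "yes" branch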
View File

@@ -1,7 +1,7 @@
from enum import Enum
from typing import Literal
from e2b_code_interpreter import Sandbox
from e2b_code_interpreter import AsyncSandbox
from pydantic import SecretStr
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
@@ -123,7 +123,7 @@ class CodeExecutionBlock(Block):
},
)
def execute_code(
async def execute_code(
self,
code: str,
language: ProgrammingLanguage,
@@ -135,21 +135,21 @@ class CodeExecutionBlock(Block):
try:
sandbox = None
if template_id:
sandbox = Sandbox(
sandbox = await AsyncSandbox.create(
template=template_id, api_key=api_key, timeout=timeout
)
else:
sandbox = Sandbox(api_key=api_key, timeout=timeout)
sandbox = await AsyncSandbox.create(api_key=api_key, timeout=timeout)
if not sandbox:
raise Exception("Sandbox not created")
# Running setup commands
for cmd in setup_commands:
sandbox.commands.run(cmd)
await sandbox.commands.run(cmd)
# Executing the code
execution = sandbox.run_code(
execution = await sandbox.run_code(
code,
language=language.value,
on_error=lambda e: sandbox.kill(), # Kill the sandbox if there is an error
@@ -167,11 +167,11 @@ class CodeExecutionBlock(Block):
except Exception as e:
raise e
def run(
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
try:
response, stdout_logs, stderr_logs = self.execute_code(
response, stdout_logs, stderr_logs = await self.execute_code(
input_data.code,
input_data.language,
input_data.setup_commands,
@@ -278,11 +278,11 @@ class InstantiationBlock(Block):
},
)
def run(
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
try:
sandbox_id, response, stdout_logs, stderr_logs = self.execute_code(
sandbox_id, response, stdout_logs, stderr_logs = await self.execute_code(
input_data.setup_code,
input_data.language,
input_data.setup_commands,
@@ -303,7 +303,7 @@ class InstantiationBlock(Block):
except Exception as e:
yield "error", str(e)
def execute_code(
async def execute_code(
self,
code: str,
language: ProgrammingLanguage,
@@ -315,21 +315,21 @@ class InstantiationBlock(Block):
try:
sandbox = None
if template_id:
sandbox = Sandbox(
sandbox = await AsyncSandbox.create(
template=template_id, api_key=api_key, timeout=timeout
)
else:
sandbox = Sandbox(api_key=api_key, timeout=timeout)
sandbox = await AsyncSandbox.create(api_key=api_key, timeout=timeout)
if not sandbox:
raise Exception("Sandbox not created")
# Running setup commands
for cmd in setup_commands:
sandbox.commands.run(cmd)
await sandbox.commands.run(cmd)
# Executing the code
execution = sandbox.run_code(
execution = await sandbox.run_code(
code,
language=language.value,
on_error=lambda e: sandbox.kill(), # Kill the sandbox if there is an error
@@ -409,7 +409,7 @@ class StepExecutionBlock(Block):
},
)
def execute_step_code(
async def execute_step_code(
self,
sandbox_id: str,
code: str,
@@ -417,12 +417,12 @@ class StepExecutionBlock(Block):
api_key: str,
):
try:
sandbox = Sandbox.connect(sandbox_id=sandbox_id, api_key=api_key)
sandbox = await AsyncSandbox.connect(sandbox_id=sandbox_id, api_key=api_key)
if not sandbox:
raise Exception("Sandbox not found")
# Executing the code
execution = sandbox.run_code(code, language=language.value)
execution = await sandbox.run_code(code, language=language.value)
if execution.error:
raise Exception(execution.error)
@@ -436,11 +436,11 @@ class StepExecutionBlock(Block):
except Exception as e:
raise e
def run(
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
try:
response, stdout_logs, stderr_logs = self.execute_step_code(
response, stdout_logs, stderr_logs = await self.execute_step_code(
input_data.sandbox_id,
input_data.step_code,
input_data.language,

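The migration to e2b's AsyncSandbox is mechanical: AsyncSandbox.create and AsyncSandbox.connect replace the synchronous constructors, and setup commands and code execution are awaited. A condensed sketch of the new flow, using only the API calls that appear in this diff:

from e2b_code_interpreter import AsyncSandbox

async def run_snippet(api_key: str, code: str, setup_commands: list[str]):
    sandbox = await AsyncSandbox.create(api_key=api_key, timeout=300)
    # Run setup commands before executing the code
    for cmd in setup_commands:
        await sandbox.commands.run(cmd)
    execution = await sandbox.run_code(code, language="python")
    if execution.error:
        raise Exception(execution.error)
    return execution  # caller reads logs/results off the execution object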
View File

@@ -49,7 +49,7 @@ class CodeExtractionBlock(Block):
],
)
def run(self, input_data: Input, **kwargs) -> BlockOutput:
async def run(self, input_data: Input, **kwargs) -> BlockOutput:
# List of supported programming languages with mapped aliases
language_aliases = {
"html": ["html", "htm"],

View File

@@ -56,5 +56,5 @@ class CompassAITriggerBlock(Block):
# ],
)
def run(self, input_data: Input, **kwargs) -> BlockOutput:
async def run(self, input_data: Input, **kwargs) -> BlockOutput:
yield "transcription", input_data.payload.transcription

View File

@@ -30,7 +30,7 @@ class WordCharacterCountBlock(Block):
test_output=[("word_count", 4), ("character_count", 19)],
)
def run(self, input_data: Input, **kwargs) -> BlockOutput:
async def run(self, input_data: Input, **kwargs) -> BlockOutput:
try:
text = input_data.text
word_count = len(text.split())

View File

@@ -69,7 +69,7 @@ class ReadCsvBlock(Block):
],
)
def run(self, input_data: Input, **kwargs) -> BlockOutput:
async def run(self, input_data: Input, **kwargs) -> BlockOutput:
import csv
from io import StringIO

View File

@@ -0,0 +1,652 @@
from typing import Any, List
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.model import SchemaField
from backend.util.json import loads
from backend.util.mock import MockObject
# =============================================================================
# Dictionary Manipulation Blocks
# =============================================================================
class CreateDictionaryBlock(Block):
class Input(BlockSchema):
values: dict[str, Any] = SchemaField(
description="Key-value pairs to create the dictionary with",
placeholder="e.g., {'name': 'Alice', 'age': 25}",
)
class Output(BlockSchema):
dictionary: dict[str, Any] = SchemaField(
description="The created dictionary containing the specified key-value pairs"
)
error: str = SchemaField(
description="Error message if dictionary creation failed"
)
def __init__(self):
super().__init__(
id="b924ddf4-de4f-4b56-9a85-358930dcbc91",
description="Creates a dictionary with the specified key-value pairs. Use this when you know all the values you want to add upfront.",
categories={BlockCategory.DATA},
input_schema=CreateDictionaryBlock.Input,
output_schema=CreateDictionaryBlock.Output,
test_input=[
{
"values": {"name": "Alice", "age": 25, "city": "New York"},
},
{
"values": {"numbers": [1, 2, 3], "active": True, "score": 95.5},
},
],
test_output=[
(
"dictionary",
{"name": "Alice", "age": 25, "city": "New York"},
),
(
"dictionary",
{"numbers": [1, 2, 3], "active": True, "score": 95.5},
),
],
)
async def run(self, input_data: Input, **kwargs) -> BlockOutput:
try:
# The values are already validated by Pydantic schema
yield "dictionary", input_data.values
except Exception as e:
yield "error", f"Failed to create dictionary: {str(e)}"
class AddToDictionaryBlock(Block):
class Input(BlockSchema):
dictionary: dict[Any, Any] = SchemaField(
default_factory=dict,
description="The dictionary to add the entry to. If not provided, a new dictionary will be created.",
)
key: str = SchemaField(
default="",
description="The key for the new entry.",
placeholder="new_key",
advanced=False,
)
value: Any = SchemaField(
default=None,
description="The value for the new entry.",
placeholder="new_value",
advanced=False,
)
entries: dict[Any, Any] = SchemaField(
default_factory=dict,
description="The entries to add to the dictionary. This is the batch version of the `key` and `value` fields.",
advanced=True,
)
class Output(BlockSchema):
updated_dictionary: dict = SchemaField(
description="The dictionary with the new entry added."
)
error: str = SchemaField(description="Error message if the operation failed.")
def __init__(self):
super().__init__(
id="31d1064e-7446-4693-a7d4-65e5ca1180d1",
description="Adds a new key-value pair to a dictionary. If no dictionary is provided, a new one is created.",
categories={BlockCategory.BASIC},
input_schema=AddToDictionaryBlock.Input,
output_schema=AddToDictionaryBlock.Output,
test_input=[
{
"dictionary": {"existing_key": "existing_value"},
"key": "new_key",
"value": "new_value",
},
{"key": "first_key", "value": "first_value"},
{
"dictionary": {"existing_key": "existing_value"},
"entries": {"new_key": "new_value", "first_key": "first_value"},
},
],
test_output=[
(
"updated_dictionary",
{"existing_key": "existing_value", "new_key": "new_value"},
),
("updated_dictionary", {"first_key": "first_value"}),
(
"updated_dictionary",
{
"existing_key": "existing_value",
"new_key": "new_value",
"first_key": "first_value",
},
),
],
)
async def run(self, input_data: Input, **kwargs) -> BlockOutput:
updated_dict = input_data.dictionary.copy()
if input_data.value is not None and input_data.key:
updated_dict[input_data.key] = input_data.value
for key, value in input_data.entries.items():
updated_dict[key] = value
yield "updated_dictionary", updated_dict
class FindInDictionaryBlock(Block):
class Input(BlockSchema):
input: Any = SchemaField(description="Dictionary to lookup from")
key: str | int = SchemaField(description="Key to lookup in the dictionary")
class Output(BlockSchema):
output: Any = SchemaField(description="Value found for the given key")
missing: Any = SchemaField(
description="Value of the input that missing the key"
)
def __init__(self):
super().__init__(
id="0e50422c-6dee-4145-83d6-3a5a392f65de",
description="Lookup the given key in the input dictionary/object/list and return the value.",
input_schema=FindInDictionaryBlock.Input,
output_schema=FindInDictionaryBlock.Output,
test_input=[
{"input": {"apple": 1, "banana": 2, "cherry": 3}, "key": "banana"},
{"input": {"x": 10, "y": 20, "z": 30}, "key": "w"},
{"input": [1, 2, 3], "key": 1},
{"input": [1, 2, 3], "key": 3},
{"input": MockObject(value="!!", key="key"), "key": "key"},
{"input": [{"k1": "v1"}, {"k2": "v2"}, {"k1": "v3"}], "key": "k1"},
],
test_output=[
("output", 2),
("missing", {"x": 10, "y": 20, "z": 30}),
("output", 2),
("missing", [1, 2, 3]),
("output", "key"),
("output", ["v1", "v3"]),
],
categories={BlockCategory.BASIC},
)
async def run(self, input_data: Input, **kwargs) -> BlockOutput:
obj = input_data.input
key = input_data.key
if isinstance(obj, str):
obj = loads(obj)
if isinstance(obj, dict) and key in obj:
yield "output", obj[key]
elif isinstance(obj, list) and isinstance(key, int) and 0 <= key < len(obj):
yield "output", obj[key]
elif isinstance(obj, list) and isinstance(key, str):
if len(obj) == 0:
yield "output", []
elif isinstance(obj[0], dict) and key in obj[0]:
yield "output", [item[key] for item in obj if key in item]
else:
yield "output", [getattr(val, key) for val in obj if hasattr(val, key)]
elif isinstance(obj, object) and isinstance(key, str) and hasattr(obj, key):
yield "output", getattr(obj, key)
else:
yield "missing", input_data.input
class RemoveFromDictionaryBlock(Block):
class Input(BlockSchema):
dictionary: dict[Any, Any] = SchemaField(
description="The dictionary to modify."
)
key: str | int = SchemaField(description="Key to remove from the dictionary.")
return_value: bool = SchemaField(
default=False, description="Whether to return the removed value."
)
class Output(BlockSchema):
updated_dictionary: dict[Any, Any] = SchemaField(
description="The dictionary after removal."
)
removed_value: Any = SchemaField(description="The removed value if requested.")
error: str = SchemaField(description="Error message if the operation failed.")
def __init__(self):
super().__init__(
id="46afe2ea-c613-43f8-95ff-6692c3ef6876",
description="Removes a key-value pair from a dictionary.",
categories={BlockCategory.BASIC},
input_schema=RemoveFromDictionaryBlock.Input,
output_schema=RemoveFromDictionaryBlock.Output,
test_input=[
{
"dictionary": {"a": 1, "b": 2, "c": 3},
"key": "b",
"return_value": True,
},
{"dictionary": {"x": "hello", "y": "world"}, "key": "x"},
],
test_output=[
("updated_dictionary", {"a": 1, "c": 3}),
("removed_value", 2),
("updated_dictionary", {"y": "world"}),
],
)
async def run(self, input_data: Input, **kwargs) -> BlockOutput:
updated_dict = input_data.dictionary.copy()
try:
removed_value = updated_dict.pop(input_data.key)
yield "updated_dictionary", updated_dict
if input_data.return_value:
yield "removed_value", removed_value
except KeyError:
yield "error", f"Key '{input_data.key}' not found in dictionary"
class ReplaceDictionaryValueBlock(Block):
class Input(BlockSchema):
dictionary: dict[Any, Any] = SchemaField(
description="The dictionary to modify."
)
key: str | int = SchemaField(description="Key to replace the value for.")
value: Any = SchemaField(description="The new value for the given key.")
class Output(BlockSchema):
updated_dictionary: dict[Any, Any] = SchemaField(
description="The dictionary after replacement."
)
old_value: Any = SchemaField(description="The value that was replaced.")
error: str = SchemaField(description="Error message if the operation failed.")
def __init__(self):
super().__init__(
id="27e31876-18b6-44f3-ab97-f6226d8b3889",
description="Replaces the value for a specified key in a dictionary.",
categories={BlockCategory.BASIC},
input_schema=ReplaceDictionaryValueBlock.Input,
output_schema=ReplaceDictionaryValueBlock.Output,
test_input=[
{"dictionary": {"a": 1, "b": 2, "c": 3}, "key": "b", "value": 99},
{
"dictionary": {"x": "hello", "y": "world"},
"key": "y",
"value": "universe",
},
],
test_output=[
("updated_dictionary", {"a": 1, "b": 99, "c": 3}),
("old_value", 2),
("updated_dictionary", {"x": "hello", "y": "universe"}),
("old_value", "world"),
],
)
async def run(self, input_data: Input, **kwargs) -> BlockOutput:
updated_dict = input_data.dictionary.copy()
try:
old_value = updated_dict[input_data.key]
updated_dict[input_data.key] = input_data.value
yield "updated_dictionary", updated_dict
yield "old_value", old_value
except KeyError:
yield "error", f"Key '{input_data.key}' not found in dictionary"
class DictionaryIsEmptyBlock(Block):
class Input(BlockSchema):
dictionary: dict[Any, Any] = SchemaField(description="The dictionary to check.")
class Output(BlockSchema):
is_empty: bool = SchemaField(description="True if the dictionary is empty.")
def __init__(self):
super().__init__(
id="a3cf3f64-6bb9-4cc6-9900-608a0b3359b0",
description="Checks if a dictionary is empty.",
categories={BlockCategory.BASIC},
input_schema=DictionaryIsEmptyBlock.Input,
output_schema=DictionaryIsEmptyBlock.Output,
test_input=[{"dictionary": {}}, {"dictionary": {"a": 1}}],
test_output=[("is_empty", True), ("is_empty", False)],
)
async def run(self, input_data: Input, **kwargs) -> BlockOutput:
yield "is_empty", len(input_data.dictionary) == 0
# =============================================================================
# List Manipulation Blocks
# =============================================================================
class CreateListBlock(Block):
class Input(BlockSchema):
values: List[Any] = SchemaField(
description="A list of values to be combined into a new list.",
placeholder="e.g., ['Alice', 25, True]",
)
class Output(BlockSchema):
list: List[Any] = SchemaField(
description="The created list containing the specified values."
)
error: str = SchemaField(description="Error message if list creation failed.")
def __init__(self):
super().__init__(
id="a912d5c7-6e00-4542-b2a9-8034136930e4",
description="Creates a list with the specified values. Use this when you know all the values you want to add upfront.",
categories={BlockCategory.DATA},
input_schema=CreateListBlock.Input,
output_schema=CreateListBlock.Output,
test_input=[
{
"values": ["Alice", 25, True],
},
{
"values": [1, 2, 3, "four", {"key": "value"}],
},
],
test_output=[
(
"list",
["Alice", 25, True],
),
(
"list",
[1, 2, 3, "four", {"key": "value"}],
),
],
)
async def run(self, input_data: Input, **kwargs) -> BlockOutput:
try:
yield "list", input_data.values
except Exception as e:
yield "error", f"Failed to create list: {str(e)}"
class AddToListBlock(Block):
class Input(BlockSchema):
list: List[Any] = SchemaField(
default_factory=list,
advanced=False,
description="The list to add the entry to. If not provided, a new list will be created.",
)
entry: Any = SchemaField(
description="The entry to add to the list. Can be of any type (string, int, dict, etc.).",
advanced=False,
default=None,
)
entries: List[Any] = SchemaField(
default_factory=lambda: list(),
description="The entries to add to the list. This is the batch version of the `entry` field.",
advanced=True,
)
position: int | None = SchemaField(
default=None,
description="The position to insert the new entry. If not provided, the entry will be appended to the end of the list.",
)
class Output(BlockSchema):
updated_list: List[Any] = SchemaField(
description="The list with the new entry added."
)
error: str = SchemaField(description="Error message if the operation failed.")
def __init__(self):
super().__init__(
id="aeb08fc1-2fc1-4141-bc8e-f758f183a822",
description="Adds a new entry to a list. The entry can be of any type. If no list is provided, a new one is created.",
categories={BlockCategory.BASIC},
input_schema=AddToListBlock.Input,
output_schema=AddToListBlock.Output,
test_input=[
{
"list": [1, "string", {"existing_key": "existing_value"}],
"entry": {"new_key": "new_value"},
"position": 1,
},
{"entry": "first_entry"},
{"list": ["a", "b", "c"], "entry": "d"},
{
"entry": "e",
"entries": ["f", "g"],
"list": ["a", "b"],
"position": 1,
},
],
test_output=[
(
"updated_list",
[
1,
{"new_key": "new_value"},
"string",
{"existing_key": "existing_value"},
],
),
("updated_list", ["first_entry"]),
("updated_list", ["a", "b", "c", "d"]),
("updated_list", ["a", "f", "g", "e", "b"]),
],
)
async def run(self, input_data: Input, **kwargs) -> BlockOutput:
entries_added = input_data.entries.copy()
if input_data.entry:
entries_added.append(input_data.entry)
updated_list = input_data.list.copy()
if (pos := input_data.position) is not None:
updated_list = updated_list[:pos] + entries_added + updated_list[pos:]
else:
updated_list += entries_added
yield "updated_list", updated_list
class FindInListBlock(Block):
class Input(BlockSchema):
list: List[Any] = SchemaField(description="The list to search in.")
value: Any = SchemaField(description="The value to search for.")
class Output(BlockSchema):
index: int = SchemaField(description="The index of the value in the list.")
found: bool = SchemaField(
description="Whether the value was found in the list."
)
not_found_value: Any = SchemaField(
description="The value that was not found in the list."
)
def __init__(self):
super().__init__(
id="5e2c6d0a-1e37-489f-b1d0-8e1812b23333",
description="Finds the index of the value in the list.",
categories={BlockCategory.BASIC},
input_schema=FindInListBlock.Input,
output_schema=FindInListBlock.Output,
test_input=[
{"list": [1, 2, 3, 4, 5], "value": 3},
{"list": [1, 2, 3, 4, 5], "value": 6},
],
test_output=[
("index", 2),
("found", True),
("found", False),
("not_found_value", 6),
],
)
async def run(self, input_data: Input, **kwargs) -> BlockOutput:
try:
yield "index", input_data.list.index(input_data.value)
yield "found", True
except ValueError:
yield "found", False
yield "not_found_value", input_data.value
class GetListItemBlock(Block):
class Input(BlockSchema):
list: List[Any] = SchemaField(description="The list to get the item from.")
index: int = SchemaField(
description="The 0-based index of the item (supports negative indices)."
)
class Output(BlockSchema):
item: Any = SchemaField(description="The item at the specified index.")
error: str = SchemaField(description="Error message if the operation failed.")
def __init__(self):
super().__init__(
id="262ca24c-1025-43cf-a578-534e23234e97",
description="Returns the element at the given index.",
categories={BlockCategory.BASIC},
input_schema=GetListItemBlock.Input,
output_schema=GetListItemBlock.Output,
test_input=[
{"list": [1, 2, 3], "index": 1},
{"list": [1, 2, 3], "index": -1},
],
test_output=[
("item", 2),
("item", 3),
],
)
async def run(self, input_data: Input, **kwargs) -> BlockOutput:
try:
yield "item", input_data.list[input_data.index]
except IndexError:
yield "error", "Index out of range"
class RemoveFromListBlock(Block):
class Input(BlockSchema):
list: List[Any] = SchemaField(description="The list to modify.")
value: Any = SchemaField(
default=None, description="Value to remove from the list."
)
index: int | None = SchemaField(
default=None,
description="Index of the item to pop (supports negative indices).",
)
return_item: bool = SchemaField(
default=False, description="Whether to return the removed item."
)
class Output(BlockSchema):
updated_list: List[Any] = SchemaField(description="The list after removal.")
removed_item: Any = SchemaField(description="The removed item if requested.")
error: str = SchemaField(description="Error message if the operation failed.")
def __init__(self):
super().__init__(
id="d93c5a93-ac7e-41c1-ae5c-ef67e6e9b826",
description="Removes an item from a list by value or index.",
categories={BlockCategory.BASIC},
input_schema=RemoveFromListBlock.Input,
output_schema=RemoveFromListBlock.Output,
test_input=[
{"list": [1, 2, 3], "index": 1, "return_item": True},
{"list": ["a", "b", "c"], "value": "b"},
],
test_output=[
("updated_list", [1, 3]),
("removed_item", 2),
("updated_list", ["a", "c"]),
],
)
async def run(self, input_data: Input, **kwargs) -> BlockOutput:
lst = input_data.list.copy()
removed = None
try:
if input_data.index is not None:
removed = lst.pop(input_data.index)
elif input_data.value is not None:
lst.remove(input_data.value)
removed = input_data.value
else:
raise ValueError("No index or value provided for removal")
except (IndexError, ValueError):
yield "error", "Index or value not found"
return
yield "updated_list", lst
if input_data.return_item:
yield "removed_item", removed
class ReplaceListItemBlock(Block):
class Input(BlockSchema):
list: List[Any] = SchemaField(description="The list to modify.")
index: int = SchemaField(
description="Index of the item to replace (supports negative indices)."
)
value: Any = SchemaField(description="The new value for the given index.")
class Output(BlockSchema):
updated_list: List[Any] = SchemaField(description="The list after replacement.")
old_item: Any = SchemaField(description="The item that was replaced.")
error: str = SchemaField(description="Error message if the operation failed.")
def __init__(self):
super().__init__(
id="fbf62922-bea1-4a3d-8bac-23587f810b38",
description="Replaces an item at the specified index.",
categories={BlockCategory.BASIC},
input_schema=ReplaceListItemBlock.Input,
output_schema=ReplaceListItemBlock.Output,
test_input=[
{"list": [1, 2, 3], "index": 1, "value": 99},
{"list": ["a", "b"], "index": -1, "value": "c"},
],
test_output=[
("updated_list", [1, 99, 3]),
("old_item", 2),
("updated_list", ["a", "c"]),
("old_item", "b"),
],
)
async def run(self, input_data: Input, **kwargs) -> BlockOutput:
lst = input_data.list.copy()
try:
old = lst[input_data.index]
lst[input_data.index] = input_data.value
except IndexError:
yield "error", "Index out of range"
return
yield "updated_list", lst
yield "old_item", old
class ListIsEmptyBlock(Block):
class Input(BlockSchema):
list: List[Any] = SchemaField(description="The list to check.")
class Output(BlockSchema):
is_empty: bool = SchemaField(description="True if the list is empty.")
def __init__(self):
super().__init__(
id="896ed73b-27d0-41be-813c-c1c1dc856c03",
description="Checks if a list is empty.",
categories={BlockCategory.BASIC},
input_schema=ListIsEmptyBlock.Input,
output_schema=ListIsEmptyBlock.Output,
test_input=[{"list": []}, {"list": [1]}],
test_output=[("is_empty", True), ("is_empty", False)],
)
async def run(self, input_data: Input, **kwargs) -> BlockOutput:
yield "is_empty", len(input_data.list) == 0

View File

@@ -34,6 +34,6 @@ This is a "quoted" string.""",
],
)
-    def run(self, input_data: Input, **kwargs) -> BlockOutput:
+    async def run(self, input_data: Input, **kwargs) -> BlockOutput:
decoded_text = codecs.decode(input_data.text, "unicode_escape")
yield "decoded_text", decoded_text

View File

@@ -1,4 +1,3 @@
-import asyncio
from typing import Literal
import aiohttp
@@ -74,7 +73,11 @@ class ReadDiscordMessagesBlock(Block):
("username", "test_user"),
],
test_mock={
"run_bot": lambda token: asyncio.Future() # Create a Future object for mocking
"run_bot": lambda token: {
"output_data": "Hello!\n\nFile from user: example.txt\nContent: This is the content of the file.",
"channel_name": "general",
"username": "test_user",
}
},
)
@@ -106,37 +109,24 @@ class ReadDiscordMessagesBlock(Block):
if attachment.filename.endswith((".txt", ".py")):
async with aiohttp.ClientSession() as session:
async with session.get(attachment.url) as response:
-                            file_content = await response.text()
+                            file_content = response.text()
self.output_data += f"\n\nFile from user: {attachment.filename}\nContent: {file_content}"
await client.close()
await client.start(token.get_secret_value())
-    def run(
+    async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
-        while True:
-            for output_name, output_value in self.__run(input_data, credentials):
-                yield output_name, output_value
-            break
+        async for output_name, output_value in self.__run(input_data, credentials):
+            yield output_name, output_value
-    def __run(self, input_data: Input, credentials: APIKeyCredentials) -> BlockOutput:
+    async def __run(
+        self, input_data: Input, credentials: APIKeyCredentials
+    ) -> BlockOutput:
try:
-            loop = asyncio.get_event_loop()
-            future = self.run_bot(credentials.api_key)
-            # If it's a Future (mock), set the result
-            if isinstance(future, asyncio.Future):
-                future.set_result(
-                    {
-                        "output_data": "Hello!\n\nFile from user: example.txt\nContent: This is the content of the file.",
-                        "channel_name": "general",
-                        "username": "test_user",
-                    }
-                )
-            result = loop.run_until_complete(future)
+            result = await self.run_bot(credentials.api_key)
# For testing purposes, use the mocked result
if isinstance(result, dict):
@@ -190,7 +180,7 @@ class SendDiscordMessageBlock(Block):
},
test_output=[("status", "Message sent")],
test_mock={
"send_message": lambda token, channel_name, message_content: asyncio.Future()
"send_message": lambda token, channel_name, message_content: "Message sent"
},
test_credentials=TEST_CREDENTIALS,
)
@@ -222,23 +212,16 @@ class SendDiscordMessageBlock(Block):
"""Splits a message into chunks not exceeding the Discord limit."""
return [message[i : i + limit] for i in range(0, len(message), limit)]
-    def run(
+    async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
try:
-            loop = asyncio.get_event_loop()
-            future = self.send_message(
+            result = await self.send_message(
credentials.api_key.get_secret_value(),
input_data.channel_name,
input_data.message_content,
)
-            # If it's a Future (mock), set the result
-            if isinstance(future, asyncio.Future):
-                future.set_result("Message sent")
-            result = loop.run_until_complete(future)
# For testing purposes, use the mocked result
if isinstance(result, str):
self.output_data = result

View File

@@ -121,7 +121,7 @@ class SendEmailBlock(Block):
return "Email sent successfully"
-    def run(
+    async def run(
self, input_data: Input, *, credentials: SMTPCredentials, **kwargs
) -> BlockOutput:
yield "status", self.send_email(

View File

@@ -9,7 +9,7 @@ from backend.blocks.exa._auth import (
)
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.model import SchemaField
-from backend.util.request import requests
+from backend.util.request import Requests
class ContentRetrievalSettings(BaseModel):
@@ -62,7 +62,7 @@ class ExaContentsBlock(Block):
output_schema=ExaContentsBlock.Output,
)
-    def run(
+    async def run(
self, input_data: Input, *, credentials: ExaCredentials, **kwargs
) -> BlockOutput:
url = "https://api.exa.ai/contents"
@@ -79,10 +79,8 @@ class ExaContentsBlock(Block):
}
try:
-            response = requests.post(url, headers=headers, json=payload)
-            response.raise_for_status()
+            response = await Requests().post(url, headers=headers, json=payload)
data = response.json()
yield "results", data.get("results", [])
except Exception as e:
yield "error", str(e)
yield "results", []

View File

@@ -9,7 +9,7 @@ from backend.blocks.exa._auth import (
from backend.blocks.exa.helpers import ContentSettings
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.model import SchemaField
-from backend.util.request import requests
+from backend.util.request import Requests
class ExaSearchBlock(Block):
@@ -78,6 +78,9 @@ class ExaSearchBlock(Block):
description="List of search results",
default_factory=list,
)
+        error: str = SchemaField(
+            description="Error message if the request failed",
+        )
def __init__(self):
super().__init__(
@@ -88,7 +91,7 @@ class ExaSearchBlock(Block):
output_schema=ExaSearchBlock.Output,
)
-    def run(
+    async def run(
self, input_data: Input, *, credentials: ExaCredentials, **kwargs
) -> BlockOutput:
url = "https://api.exa.ai/search"
@@ -133,11 +136,9 @@ class ExaSearchBlock(Block):
payload[api_field] = value
try:
-            response = requests.post(url, headers=headers, json=payload)
-            response.raise_for_status()
+            response = await Requests().post(url, headers=headers, json=payload)
data = response.json()
# Extract just the results array from the response
yield "results", data.get("results", [])
except Exception as e:
yield "error", str(e)
yield "results", []

View File

@@ -8,7 +8,7 @@ from backend.blocks.exa._auth import (
)
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.model import SchemaField
-from backend.util.request import requests
+from backend.util.request import Requests
from .helpers import ContentSettings
@@ -67,6 +67,7 @@ class ExaFindSimilarBlock(Block):
description="List of similar documents with title, URL, published date, author, and score",
default_factory=list,
)
+        error: str = SchemaField(description="Error message if the request failed")
def __init__(self):
super().__init__(
@@ -77,7 +78,7 @@ class ExaFindSimilarBlock(Block):
output_schema=ExaFindSimilarBlock.Output,
)
-    def run(
+    async def run(
self, input_data: Input, *, credentials: ExaCredentials, **kwargs
) -> BlockOutput:
url = "https://api.exa.ai/findSimilar"
@@ -119,10 +120,8 @@ class ExaFindSimilarBlock(Block):
payload[api_field] = value.strftime("%Y-%m-%dT%H:%M:%S.000Z")
try:
-            response = requests.post(url, headers=headers, json=payload)
-            response.raise_for_status()
+            response = await Requests().post(url, headers=headers, json=payload)
data = response.json()
yield "results", data.get("results", [])
except Exception as e:
yield "error", str(e)
yield "results", []

View File

@@ -1,10 +1,8 @@
+import asyncio
import logging
-import time
from enum import Enum
from typing import Any
-import httpx
from backend.blocks.fal._auth import (
TEST_CREDENTIALS,
TEST_CREDENTIALS_INPUT,
@@ -14,6 +12,7 @@ from backend.blocks.fal._auth import (
)
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.model import SchemaField
+from backend.util.request import ClientResponseError, Requests
logger = logging.getLogger(__name__)
@@ -21,6 +20,7 @@ logger = logging.getLogger(__name__)
class FalModel(str, Enum):
MOCHI = "fal-ai/mochi-v1"
LUMA = "fal-ai/luma-dream-machine"
VEO3 = "fal-ai/veo3"
class AIVideoGeneratorBlock(Block):
@@ -65,35 +65,37 @@ class AIVideoGeneratorBlock(Block):
)
def _get_headers(self, api_key: str) -> dict[str, str]:
"""Get headers for FAL API requests."""
"""Get headers for FAL API Requests."""
return {
"Authorization": f"Key {api_key}",
"Content-Type": "application/json",
}
-    def _submit_request(
+    async def _submit_request(
self, url: str, headers: dict[str, str], data: dict[str, Any]
) -> dict[str, Any]:
"""Submit a request to the FAL API."""
try:
-            response = httpx.post(url, headers=headers, json=data)
-            response.raise_for_status()
+            response = await Requests().post(url, headers=headers, json=data)
return response.json()
-        except httpx.HTTPError as e:
+        except ClientResponseError as e:
logger.error(f"FAL API request failed: {str(e)}")
raise RuntimeError(f"Failed to submit request: {str(e)}")
-    def _poll_status(self, status_url: str, headers: dict[str, str]) -> dict[str, Any]:
+    async def _poll_status(
+        self, status_url: str, headers: dict[str, str]
+    ) -> dict[str, Any]:
"""Poll the status endpoint until completion or failure."""
try:
-            response = httpx.get(status_url, headers=headers)
-            response.raise_for_status()
+            response = await Requests().get(status_url, headers=headers)
return response.json()
-        except httpx.HTTPError as e:
+        except ClientResponseError as e:
logger.error(f"Failed to get status: {str(e)}")
raise RuntimeError(f"Failed to get status: {str(e)}")
-    def generate_video(self, input_data: Input, credentials: FalCredentials) -> str:
+    async def generate_video(
+        self, input_data: Input, credentials: FalCredentials
+    ) -> str:
"""Generate video using the specified FAL model."""
base_url = "https://queue.fal.run"
api_key = credentials.api_key.get_secret_value()
@@ -102,13 +104,16 @@ class AIVideoGeneratorBlock(Block):
# Submit generation request
submit_url = f"{base_url}/{input_data.model.value}"
submit_data = {"prompt": input_data.prompt}
+        if input_data.model == FalModel.VEO3:
+            submit_data["generate_audio"] = True  # type: ignore
seen_logs = set()
try:
# Submit request to queue
-            submit_response = httpx.post(submit_url, headers=headers, json=submit_data)
-            submit_response.raise_for_status()
+            submit_response = await Requests().post(
+                submit_url, headers=headers, json=submit_data
+            )
request_data = submit_response.json()
# Get request_id and urls from initial response
@@ -119,14 +124,23 @@ class AIVideoGeneratorBlock(Block):
if not all([request_id, status_url, result_url]):
raise ValueError("Missing required data in submission response")
+            # Ensure status_url is a string
+            if not isinstance(status_url, str):
+                raise ValueError("Invalid status URL format")
+            # Ensure result_url is a string
+            if not isinstance(result_url, str):
+                raise ValueError("Invalid result URL format")
# Poll for status with exponential backoff
max_attempts = 30
attempt = 0
base_wait_time = 5
while attempt < max_attempts:
-                status_response = httpx.get(f"{status_url}?logs=1", headers=headers)
-                status_response.raise_for_status()
+                status_response = await Requests().get(
+                    f"{status_url}?logs=1", headers=headers
+                )
status_data = status_response.json()
# Process new logs only
@@ -149,8 +163,7 @@ class AIVideoGeneratorBlock(Block):
status = status_data.get("status")
if status == "COMPLETED":
# Get the final result
-                    result_response = httpx.get(result_url, headers=headers)
-                    result_response.raise_for_status()
+                    result_response = await Requests().get(result_url, headers=headers)
result_data = result_response.json()
if "video" not in result_data or not isinstance(
@@ -159,8 +172,8 @@ class AIVideoGeneratorBlock(Block):
raise ValueError("Invalid response format - missing video data")
video_url = result_data["video"].get("url")
-                    if not video_url:
-                        raise ValueError("No video URL in response")
+                    if not video_url or not isinstance(video_url, str):
+                        raise ValueError("No valid video URL in response")
return video_url
@@ -180,19 +193,19 @@ class AIVideoGeneratorBlock(Block):
logger.info(f"[FAL Generation] Status: Unknown status: {status}")
wait_time = min(base_wait_time * (2**attempt), 60) # Cap at 60 seconds
-                time.sleep(wait_time)
+                await asyncio.sleep(wait_time)
attempt += 1
raise RuntimeError("Maximum polling attempts reached")
-        except httpx.HTTPError as e:
+        except ClientResponseError as e:
raise RuntimeError(f"API request failed: {str(e)}")
-    def run(
+    async def run(
self, input_data: Input, *, credentials: FalCredentials, **kwargs
) -> BlockOutput:
try:
-            video_url = self.generate_video(input_data, credentials)
+            video_url = await self.generate_video(input_data, credentials)
yield "video_url", video_url
except Exception as e:
error_message = str(e)
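The polling loop above reduces to a reusable shape: exponential backoff capped at 60 seconds, bounded by a maximum attempt count. A distilled sketch of just that loop:

```python
import asyncio

async def poll_with_backoff(check, max_attempts=30, base_wait_time=5, cap=60):
    # `check` is any coroutine function returning a result, or None to keep waiting.
    for attempt in range(max_attempts):
        if (result := await check()) is not None:
            return result
        # Waits 5, 10, 20, 40, 60, 60, ... seconds, as in the block above.
        await asyncio.sleep(min(base_wait_time * (2 ** attempt), cap))
    raise RuntimeError("Maximum polling attempts reached")
```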

View File

@@ -0,0 +1,183 @@
from enum import Enum
from typing import Literal, Optional
from pydantic import SecretStr
from replicate.client import Client as ReplicateClient
from replicate.helpers import FileOutput
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.model import (
APIKeyCredentials,
CredentialsField,
CredentialsMetaInput,
SchemaField,
)
from backend.integrations.providers import ProviderName
from backend.util.file import MediaFileType, store_media_file
TEST_CREDENTIALS = APIKeyCredentials(
id="01234567-89ab-cdef-0123-456789abcdef",
provider="replicate",
api_key=SecretStr("mock-replicate-api-key"),
title="Mock Replicate API key",
expires_at=None,
)
TEST_CREDENTIALS_INPUT = {
"provider": TEST_CREDENTIALS.provider,
"id": TEST_CREDENTIALS.id,
"type": TEST_CREDENTIALS.type,
"title": TEST_CREDENTIALS.type,
}
class FluxKontextModelName(str, Enum):
PRO = "Flux Kontext Pro"
MAX = "Flux Kontext Max"
@property
def api_name(self) -> str:
return f"black-forest-labs/flux-kontext-{self.name.lower()}"
class AspectRatio(str, Enum):
MATCH_INPUT_IMAGE = "match_input_image"
ASPECT_1_1 = "1:1"
ASPECT_16_9 = "16:9"
ASPECT_9_16 = "9:16"
ASPECT_4_3 = "4:3"
ASPECT_3_4 = "3:4"
ASPECT_3_2 = "3:2"
ASPECT_2_3 = "2:3"
ASPECT_4_5 = "4:5"
ASPECT_5_4 = "5:4"
ASPECT_21_9 = "21:9"
ASPECT_9_21 = "9:21"
ASPECT_2_1 = "2:1"
ASPECT_1_2 = "1:2"
class AIImageEditorBlock(Block):
class Input(BlockSchema):
credentials: CredentialsMetaInput[
Literal[ProviderName.REPLICATE], Literal["api_key"]
] = CredentialsField(
description="Replicate API key with permissions for Flux Kontext models",
)
prompt: str = SchemaField(
description="Text instruction describing the desired edit",
title="Prompt",
)
input_image: Optional[MediaFileType] = SchemaField(
description="Reference image URI (jpeg, png, gif, webp)",
default=None,
title="Input Image",
)
aspect_ratio: AspectRatio = SchemaField(
description="Aspect ratio of the generated image",
default=AspectRatio.MATCH_INPUT_IMAGE,
title="Aspect Ratio",
advanced=False,
)
seed: Optional[int] = SchemaField(
description="Random seed. Set for reproducible generation",
default=None,
title="Seed",
advanced=True,
)
model: FluxKontextModelName = SchemaField(
description="Model variant to use",
default=FluxKontextModelName.PRO,
title="Model",
)
class Output(BlockSchema):
output_image: MediaFileType = SchemaField(
description="URL of the transformed image"
)
error: str = SchemaField(description="Error message if generation failed")
def __init__(self):
super().__init__(
id="3fd9c73d-4370-4925-a1ff-1b86b99fabfa",
description=(
"Edit images using BlackForest Labs' Flux Kontext models. Provide a prompt "
"and optional reference image to generate a modified image."
),
categories={BlockCategory.AI, BlockCategory.MULTIMEDIA},
input_schema=AIImageEditorBlock.Input,
output_schema=AIImageEditorBlock.Output,
test_input={
"prompt": "Add a hat to the cat",
"input_image": "data:image/png;base64,MQ==",
"aspect_ratio": AspectRatio.MATCH_INPUT_IMAGE,
"seed": None,
"model": FluxKontextModelName.PRO,
"credentials": TEST_CREDENTIALS_INPUT,
},
test_output=[
("output_image", "https://replicate.com/output/edited-image.png"),
],
test_mock={
"run_model": lambda *args, **kwargs: "https://replicate.com/output/edited-image.png",
},
test_credentials=TEST_CREDENTIALS,
)
async def run(
self,
input_data: Input,
*,
credentials: APIKeyCredentials,
graph_exec_id: str,
**kwargs,
) -> BlockOutput:
result = await self.run_model(
api_key=credentials.api_key,
model_name=input_data.model.api_name,
prompt=input_data.prompt,
input_image_b64=(
await store_media_file(
graph_exec_id=graph_exec_id,
file=input_data.input_image,
return_content=True,
)
if input_data.input_image
else None
),
aspect_ratio=input_data.aspect_ratio.value,
seed=input_data.seed,
)
yield "output_image", result
async def run_model(
self,
api_key: SecretStr,
model_name: str,
prompt: str,
input_image_b64: Optional[str],
aspect_ratio: str,
seed: Optional[int],
) -> MediaFileType:
client = ReplicateClient(api_token=api_key.get_secret_value())
input_params = {
"prompt": prompt,
"input_image": input_image_b64,
"aspect_ratio": aspect_ratio,
**({"seed": seed} if seed is not None else {}),
}
output: FileOutput | list[FileOutput] = await client.async_run( # type: ignore
model_name,
input=input_params,
wait=False,
)
if isinstance(output, list) and output:
output = output[0]
if isinstance(output, FileOutput):
return MediaFileType(output.url)
if isinstance(output, str):
return MediaFileType(output)
raise ValueError("No output received")

View File

@@ -46,6 +46,6 @@ class GenericWebhookTriggerBlock(Block):
],
)
-    def run(self, input_data: Input, **kwargs) -> BlockOutput:
+    async def run(self, input_data: Input, **kwargs) -> BlockOutput:
yield "constants", input_data.constants
yield "payload", input_data.payload

View File

@@ -1,19 +1,30 @@
+from typing import overload
from urllib.parse import urlparse
from backend.blocks.github._auth import (
GithubCredentials,
GithubFineGrainedAPICredentials,
)
-from backend.util.request import Requests
+from backend.util.request import URL, Requests
-def _convert_to_api_url(url: str) -> str:
+@overload
+def _convert_to_api_url(url: str) -> str: ...
+@overload
+def _convert_to_api_url(url: URL) -> URL: ...
+def _convert_to_api_url(url: str | URL) -> str | URL:
"""
Converts a standard GitHub URL to the corresponding GitHub API URL.
Handles repository URLs, issue URLs, pull request URLs, and more.
"""
-    parsed_url = urlparse(url)
-    path_parts = parsed_url.path.strip("/").split("/")
+    if url_as_str := isinstance(url, str):
+        url = urlparse(url)
+    path_parts = url.path.strip("/").split("/")
if len(path_parts) >= 2:
owner, repo = path_parts[0], path_parts[1]
@@ -28,7 +39,7 @@ def _convert_to_api_url(url: str) -> str:
else:
raise ValueError("Invalid GitHub URL format.")
-    return api_url
+    return api_url if url_as_str else urlparse(api_url)
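The `@overload` pair preserves the caller's type: pass a `str`, get a `str`; pass a parsed `URL`, get a parsed `URL`, with one shared implementation keyed on the `url_as_str` flag. A self-contained illustration of the same shape, using `urllib.parse.ParseResult` in place of the repo's `URL` type:

```python
from typing import overload
from urllib.parse import ParseResult, urlparse, urlunparse

@overload
def strip_trailing_slash(url: str) -> str: ...
@overload
def strip_trailing_slash(url: ParseResult) -> ParseResult: ...

def strip_trailing_slash(url: str | ParseResult) -> str | ParseResult:
    # Remember what the caller passed, work on the parsed form, and hand
    # back the caller's original type, exactly as _convert_to_api_url does.
    if url_as_str := isinstance(url, str):
        url = urlparse(url)
    rebuilt = url._replace(path=url.path.rstrip("/"))
    return urlunparse(rebuilt) if url_as_str else rebuilt

assert strip_trailing_slash("https://github.com/owner/repo/") == "https://github.com/owner/repo"
```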
def _get_headers(credentials: GithubCredentials) -> dict[str, str]:

View File

@@ -129,7 +129,7 @@ class GithubCreateCheckRunBlock(Block):
)
@staticmethod
-    def create_check_run(
+    async def create_check_run(
credentials: GithubCredentials,
repo_url: str,
name: str,
@@ -172,7 +172,7 @@ class GithubCreateCheckRunBlock(Block):
data.output = output_data
check_runs_url = f"{repo_url}/check-runs"
-        response = api.post(
+        response = await api.post(
check_runs_url, data=data.model_dump_json(exclude_none=True)
)
result = response.json()
@@ -183,7 +183,7 @@ class GithubCreateCheckRunBlock(Block):
"status": result["status"],
}
-    def run(
+    async def run(
self,
input_data: Input,
*,
@@ -191,7 +191,7 @@ class GithubCreateCheckRunBlock(Block):
**kwargs,
) -> BlockOutput:
try:
-            result = self.create_check_run(
+            result = await self.create_check_run(
credentials=credentials,
repo_url=input_data.repo_url,
name=input_data.name,
@@ -292,7 +292,7 @@ class GithubUpdateCheckRunBlock(Block):
)
@staticmethod
-    def update_check_run(
+    async def update_check_run(
credentials: GithubCredentials,
repo_url: str,
check_run_id: int,
@@ -325,7 +325,7 @@ class GithubUpdateCheckRunBlock(Block):
data.output = output_data
check_run_url = f"{repo_url}/check-runs/{check_run_id}"
-        response = api.patch(
+        response = await api.patch(
check_run_url, data=data.model_dump_json(exclude_none=True)
)
result = response.json()
@@ -337,7 +337,7 @@ class GithubUpdateCheckRunBlock(Block):
"conclusion": result.get("conclusion"),
}
-    def run(
+    async def run(
self,
input_data: Input,
*,
@@ -345,7 +345,7 @@ class GithubUpdateCheckRunBlock(Block):
**kwargs,
) -> BlockOutput:
try:
-            result = self.update_check_run(
+            result = await self.update_check_run(
credentials=credentials,
repo_url=input_data.repo_url,
check_run_id=input_data.check_run_id,

View File

@@ -80,7 +80,7 @@ class GithubCommentBlock(Block):
)
@staticmethod
-    def post_comment(
+    async def post_comment(
credentials: GithubCredentials, issue_url: str, body_text: str
) -> tuple[int, str]:
api = get_api(credentials)
@@ -88,18 +88,18 @@ class GithubCommentBlock(Block):
if "pull" in issue_url:
issue_url = issue_url.replace("pull", "issues")
comments_url = issue_url + "/comments"
-        response = api.post(comments_url, json=data)
+        response = await api.post(comments_url, json=data)
comment = response.json()
return comment["id"], comment["html_url"]
-    def run(
+    async def run(
self,
input_data: Input,
*,
credentials: GithubCredentials,
**kwargs,
) -> BlockOutput:
-        id, url = self.post_comment(
+        id, url = await self.post_comment(
credentials,
input_data.issue_url,
input_data.comment,
@@ -171,7 +171,7 @@ class GithubUpdateCommentBlock(Block):
)
@staticmethod
-    def update_comment(
+    async def update_comment(
credentials: GithubCredentials, comment_url: str, body_text: str
) -> tuple[int, str]:
api = get_api(credentials, convert_urls=False)
@@ -179,11 +179,11 @@ class GithubUpdateCommentBlock(Block):
url = convert_comment_url_to_api_endpoint(comment_url)
logger.info(url)
-        response = api.patch(url, json=data)
+        response = await api.patch(url, json=data)
comment = response.json()
return comment["id"], comment["html_url"]
-    def run(
+    async def run(
self,
input_data: Input,
*,
@@ -209,7 +209,7 @@ class GithubUpdateCommentBlock(Block):
raise ValueError(
"Must provide either comment_url or comment_id and issue_url"
)
-        id, url = self.update_comment(
+        id, url = await self.update_comment(
credentials,
input_data.comment_url,
input_data.comment,
@@ -288,7 +288,7 @@ class GithubListCommentsBlock(Block):
)
@staticmethod
-    def list_comments(
+    async def list_comments(
credentials: GithubCredentials, issue_url: str
) -> list[Output.CommentItem]:
parsed_url = urlparse(issue_url)
@@ -305,7 +305,7 @@ class GithubListCommentsBlock(Block):
# Set convert_urls=False since we're already providing an API URL
api = get_api(credentials, convert_urls=False)
-        response = api.get(api_url)
+        response = await api.get(api_url)
comments = response.json()
parsed_comments: list[GithubListCommentsBlock.Output.CommentItem] = [
{
@@ -318,18 +318,19 @@ class GithubListCommentsBlock(Block):
]
return parsed_comments
-    def run(
+    async def run(
self,
input_data: Input,
*,
credentials: GithubCredentials,
**kwargs,
) -> BlockOutput:
-        comments = self.list_comments(
+        comments = await self.list_comments(
credentials,
input_data.issue_url,
)
yield from (("comment", comment) for comment in comments)
for comment in comments:
yield "comment", comment
yield "comments", comments
@@ -381,24 +382,24 @@ class GithubMakeIssueBlock(Block):
)
@staticmethod
-    def create_issue(
+    async def create_issue(
credentials: GithubCredentials, repo_url: str, title: str, body: str
) -> tuple[int, str]:
api = get_api(credentials)
data = {"title": title, "body": body}
issues_url = repo_url + "/issues"
-        response = api.post(issues_url, json=data)
+        response = await api.post(issues_url, json=data)
issue = response.json()
return issue["number"], issue["html_url"]
-    def run(
+    async def run(
self,
input_data: Input,
*,
credentials: GithubCredentials,
**kwargs,
) -> BlockOutput:
-        number, url = self.create_issue(
+        number, url = await self.create_issue(
credentials,
input_data.repo_url,
input_data.title,
@@ -451,25 +452,25 @@ class GithubReadIssueBlock(Block):
)
@staticmethod
-    def read_issue(
+    async def read_issue(
credentials: GithubCredentials, issue_url: str
) -> tuple[str, str, str]:
api = get_api(credentials)
-        response = api.get(issue_url)
+        response = await api.get(issue_url)
data = response.json()
title = data.get("title", "No title found")
body = data.get("body", "No body content found")
user = data.get("user", {}).get("login", "No user found")
return title, body, user
-    def run(
+    async def run(
self,
input_data: Input,
*,
credentials: GithubCredentials,
**kwargs,
) -> BlockOutput:
-        title, body, user = self.read_issue(
+        title, body, user = await self.read_issue(
credentials,
input_data.issue_url,
)
@@ -531,30 +532,30 @@ class GithubListIssuesBlock(Block):
)
@staticmethod
-    def list_issues(
+    async def list_issues(
credentials: GithubCredentials, repo_url: str
) -> list[Output.IssueItem]:
api = get_api(credentials)
issues_url = repo_url + "/issues"
-        response = api.get(issues_url)
+        response = await api.get(issues_url)
data = response.json()
issues: list[GithubListIssuesBlock.Output.IssueItem] = [
{"title": issue["title"], "url": issue["html_url"]} for issue in data
]
return issues
-    def run(
+    async def run(
self,
input_data: Input,
*,
credentials: GithubCredentials,
**kwargs,
) -> BlockOutput:
-        issues = self.list_issues(
+        for issue in await self.list_issues(
            credentials,
            input_data.repo_url,
-        )
-        yield from (("issue", issue) for issue in issues)
+        ):
+            yield "issue", issue
class GithubAddLabelBlock(Block):
@@ -593,21 +594,23 @@ class GithubAddLabelBlock(Block):
)
@staticmethod
-    def add_label(credentials: GithubCredentials, issue_url: str, label: str) -> str:
+    async def add_label(
+        credentials: GithubCredentials, issue_url: str, label: str
+    ) -> str:
api = get_api(credentials)
data = {"labels": [label]}
labels_url = issue_url + "/labels"
-        api.post(labels_url, json=data)
+        await api.post(labels_url, json=data)
return "Label added successfully"
-    def run(
+    async def run(
self,
input_data: Input,
*,
credentials: GithubCredentials,
**kwargs,
) -> BlockOutput:
-        status = self.add_label(
+        status = await self.add_label(
credentials,
input_data.issue_url,
input_data.label,
@@ -653,20 +656,22 @@ class GithubRemoveLabelBlock(Block):
)
@staticmethod
-    def remove_label(credentials: GithubCredentials, issue_url: str, label: str) -> str:
+    async def remove_label(
+        credentials: GithubCredentials, issue_url: str, label: str
+    ) -> str:
api = get_api(credentials)
label_url = issue_url + f"/labels/{label}"
-        api.delete(label_url)
+        await api.delete(label_url)
return "Label removed successfully"
-    def run(
+    async def run(
self,
input_data: Input,
*,
credentials: GithubCredentials,
**kwargs,
) -> BlockOutput:
-        status = self.remove_label(
+        status = await self.remove_label(
credentials,
input_data.issue_url,
input_data.label,
@@ -714,7 +719,7 @@ class GithubAssignIssueBlock(Block):
)
@staticmethod
-    def assign_issue(
+    async def assign_issue(
credentials: GithubCredentials,
issue_url: str,
assignee: str,
@@ -722,17 +727,17 @@ class GithubAssignIssueBlock(Block):
api = get_api(credentials)
assignees_url = issue_url + "/assignees"
data = {"assignees": [assignee]}
-        api.post(assignees_url, json=data)
+        await api.post(assignees_url, json=data)
return "Issue assigned successfully"
-    def run(
+    async def run(
self,
input_data: Input,
*,
credentials: GithubCredentials,
**kwargs,
) -> BlockOutput:
-        status = self.assign_issue(
+        status = await self.assign_issue(
credentials,
input_data.issue_url,
input_data.assignee,
@@ -780,7 +785,7 @@ class GithubUnassignIssueBlock(Block):
)
@staticmethod
-    def unassign_issue(
+    async def unassign_issue(
credentials: GithubCredentials,
issue_url: str,
assignee: str,
@@ -788,17 +793,17 @@ class GithubUnassignIssueBlock(Block):
api = get_api(credentials)
assignees_url = issue_url + "/assignees"
data = {"assignees": [assignee]}
-        api.delete(assignees_url, json=data)
+        await api.delete(assignees_url, json=data)
return "Issue unassigned successfully"
-    def run(
+    async def run(
self,
input_data: Input,
*,
credentials: GithubCredentials,
**kwargs,
) -> BlockOutput:
-        status = self.unassign_issue(
+        status = await self.unassign_issue(
credentials,
input_data.issue_url,
input_data.assignee,

View File

@@ -65,28 +65,31 @@ class GithubListPullRequestsBlock(Block):
)
@staticmethod
-    def list_prs(credentials: GithubCredentials, repo_url: str) -> list[Output.PRItem]:
+    async def list_prs(
+        credentials: GithubCredentials, repo_url: str
+    ) -> list[Output.PRItem]:
api = get_api(credentials)
pulls_url = repo_url + "/pulls"
-        response = api.get(pulls_url)
+        response = await api.get(pulls_url)
data = response.json()
pull_requests: list[GithubListPullRequestsBlock.Output.PRItem] = [
{"title": pr["title"], "url": pr["html_url"]} for pr in data
]
return pull_requests
-    def run(
+    async def run(
self,
input_data: Input,
*,
credentials: GithubCredentials,
**kwargs,
) -> BlockOutput:
-        pull_requests = self.list_prs(
+        pull_requests = await self.list_prs(
credentials,
input_data.repo_url,
)
yield from (("pull_request", pr) for pr in pull_requests)
for pr in pull_requests:
yield "pull_request", pr
class GithubMakePullRequestBlock(Block):
@@ -153,7 +156,7 @@ class GithubMakePullRequestBlock(Block):
)
@staticmethod
-    def create_pr(
+    async def create_pr(
credentials: GithubCredentials,
repo_url: str,
title: str,
@@ -164,11 +167,11 @@ class GithubMakePullRequestBlock(Block):
api = get_api(credentials)
pulls_url = repo_url + "/pulls"
data = {"title": title, "body": body, "head": head, "base": base}
-        response = api.post(pulls_url, json=data)
+        response = await api.post(pulls_url, json=data)
pr_data = response.json()
return pr_data["number"], pr_data["html_url"]
-    def run(
+    async def run(
self,
input_data: Input,
*,
@@ -176,7 +179,7 @@ class GithubMakePullRequestBlock(Block):
**kwargs,
) -> BlockOutput:
try:
-            number, url = self.create_pr(
+            number, url = await self.create_pr(
credentials,
input_data.repo_url,
input_data.title,
@@ -242,39 +245,55 @@ class GithubReadPullRequestBlock(Block):
)
@staticmethod
-    def read_pr(credentials: GithubCredentials, pr_url: str) -> tuple[str, str, str]:
+    async def read_pr(
+        credentials: GithubCredentials, pr_url: str
+    ) -> tuple[str, str, str]:
api = get_api(credentials)
# Adjust the URL to access the issue endpoint for PR metadata
issue_url = pr_url.replace("/pull/", "/issues/")
-        response = api.get(issue_url)
+        response = await api.get(issue_url)
data = response.json()
title = data.get("title", "No title found")
body = data.get("body", "No body content found")
author = data.get("user", {}).get("login", "No user found")
author = data.get("user", {}).get("login", "Unknown author")
return title, body, author
@staticmethod
-    def read_pr_changes(credentials: GithubCredentials, pr_url: str) -> str:
+    async def read_pr_changes(credentials: GithubCredentials, pr_url: str) -> str:
api = get_api(credentials)
files_url = prepare_pr_api_url(pr_url=pr_url, path="files")
-        response = api.get(files_url)
+        response = await api.get(files_url)
files = response.json()
changes = []
for file in files:
filename = file.get("filename")
patch = file.get("patch")
if filename and patch:
changes.append(f"File: {filename}\n{patch}")
status: str = file.get("status", "")
diff: str = file.get("patch", "")
if status != "removed":
is_filename: str = file.get("filename", "")
was_filename: str = (
file.get("previous_filename", is_filename)
if status != "added"
else ""
)
else:
is_filename = ""
was_filename: str = file.get("filename", "")
patch_header = ""
if was_filename:
patch_header += f"--- {was_filename}\n"
if is_filename:
patch_header += f"+++ {is_filename}\n"
changes.append(patch_header + diff)
return "\n\n".join(changes)
-    def run(
+    async def run(
self,
input_data: Input,
*,
credentials: GithubCredentials,
**kwargs,
) -> BlockOutput:
-        title, body, author = self.read_pr(
+        title, body, author = await self.read_pr(
credentials,
input_data.pr_url,
)
@@ -283,7 +302,7 @@ class GithubReadPullRequestBlock(Block):
yield "author", author
if input_data.include_pr_changes:
-            changes = self.read_pr_changes(
+            changes = await self.read_pr_changes(
credentials,
input_data.pr_url,
)
@@ -330,16 +349,16 @@ class GithubAssignPRReviewerBlock(Block):
)
@staticmethod
-    def assign_reviewer(
+    async def assign_reviewer(
credentials: GithubCredentials, pr_url: str, reviewer: str
) -> str:
api = get_api(credentials)
reviewers_url = prepare_pr_api_url(pr_url=pr_url, path="requested_reviewers")
data = {"reviewers": [reviewer]}
-        api.post(reviewers_url, json=data)
+        await api.post(reviewers_url, json=data)
return "Reviewer assigned successfully"
-    def run(
+    async def run(
self,
input_data: Input,
*,
@@ -347,7 +366,7 @@ class GithubAssignPRReviewerBlock(Block):
**kwargs,
) -> BlockOutput:
try:
-            status = self.assign_reviewer(
+            status = await self.assign_reviewer(
credentials,
input_data.pr_url,
input_data.reviewer,
@@ -397,16 +416,16 @@ class GithubUnassignPRReviewerBlock(Block):
)
@staticmethod
-    def unassign_reviewer(
+    async def unassign_reviewer(
credentials: GithubCredentials, pr_url: str, reviewer: str
) -> str:
api = get_api(credentials)
reviewers_url = prepare_pr_api_url(pr_url=pr_url, path="requested_reviewers")
data = {"reviewers": [reviewer]}
-        api.delete(reviewers_url, json=data)
+        await api.delete(reviewers_url, json=data)
return "Reviewer unassigned successfully"
-    def run(
+    async def run(
self,
input_data: Input,
*,
@@ -414,7 +433,7 @@ class GithubUnassignPRReviewerBlock(Block):
**kwargs,
) -> BlockOutput:
try:
-            status = self.unassign_reviewer(
+            status = await self.unassign_reviewer(
credentials,
input_data.pr_url,
input_data.reviewer,
@@ -477,12 +496,12 @@ class GithubListPRReviewersBlock(Block):
)
@staticmethod
-    def list_reviewers(
+    async def list_reviewers(
credentials: GithubCredentials, pr_url: str
) -> list[Output.ReviewerItem]:
api = get_api(credentials)
reviewers_url = prepare_pr_api_url(pr_url=pr_url, path="requested_reviewers")
-        response = api.get(reviewers_url)
+        response = await api.get(reviewers_url)
data = response.json()
reviewers: list[GithubListPRReviewersBlock.Output.ReviewerItem] = [
{"username": reviewer["login"], "url": reviewer["html_url"]}
@@ -490,18 +509,18 @@ class GithubListPRReviewersBlock(Block):
]
return reviewers
-    def run(
+    async def run(
self,
input_data: Input,
*,
credentials: GithubCredentials,
**kwargs,
) -> BlockOutput:
-        reviewers = self.list_reviewers(
+        for reviewer in await self.list_reviewers(
            credentials,
            input_data.pr_url,
-        )
-        yield from (("reviewer", reviewer) for reviewer in reviewers)
+        ):
+            yield "reviewer", reviewer
def prepare_pr_api_url(pr_url: str, path: str) -> str:

View File

@@ -65,12 +65,12 @@ class GithubListTagsBlock(Block):
)
@staticmethod
-    def list_tags(
+    async def list_tags(
credentials: GithubCredentials, repo_url: str
) -> list[Output.TagItem]:
api = get_api(credentials)
tags_url = repo_url + "/tags"
-        response = api.get(tags_url)
+        response = await api.get(tags_url)
data = response.json()
repo_path = repo_url.replace("https://github.com/", "")
tags: list[GithubListTagsBlock.Output.TagItem] = [
@@ -82,18 +82,19 @@ class GithubListTagsBlock(Block):
]
return tags
-    def run(
+    async def run(
self,
input_data: Input,
*,
credentials: GithubCredentials,
**kwargs,
) -> BlockOutput:
-        tags = self.list_tags(
+        tags = await self.list_tags(
credentials,
input_data.repo_url,
)
yield from (("tag", tag) for tag in tags)
for tag in tags:
yield "tag", tag
class GithubListBranchesBlock(Block):
@@ -147,12 +148,12 @@ class GithubListBranchesBlock(Block):
)
@staticmethod
-    def list_branches(
+    async def list_branches(
credentials: GithubCredentials, repo_url: str
) -> list[Output.BranchItem]:
api = get_api(credentials)
branches_url = repo_url + "/branches"
-        response = api.get(branches_url)
+        response = await api.get(branches_url)
data = response.json()
repo_path = repo_url.replace("https://github.com/", "")
branches: list[GithubListBranchesBlock.Output.BranchItem] = [
@@ -164,18 +165,19 @@ class GithubListBranchesBlock(Block):
]
return branches
-    def run(
+    async def run(
self,
input_data: Input,
*,
credentials: GithubCredentials,
**kwargs,
) -> BlockOutput:
-        branches = self.list_branches(
+        branches = await self.list_branches(
credentials,
input_data.repo_url,
)
yield from (("branch", branch) for branch in branches)
for branch in branches:
yield "branch", branch
class GithubListDiscussionsBlock(Block):
@@ -234,7 +236,7 @@ class GithubListDiscussionsBlock(Block):
)
@staticmethod
-    def list_discussions(
+    async def list_discussions(
credentials: GithubCredentials, repo_url: str, num_discussions: int
) -> list[Output.DiscussionItem]:
api = get_api(credentials)
@@ -254,7 +256,7 @@ class GithubListDiscussionsBlock(Block):
}
"""
variables = {"owner": owner, "repo": repo, "num": num_discussions}
-        response = api.post(
+        response = await api.post(
"https://api.github.com/graphql",
json={"query": query, "variables": variables},
)
@@ -265,17 +267,20 @@ class GithubListDiscussionsBlock(Block):
]
return discussions
-    def run(
+    async def run(
self,
input_data: Input,
*,
credentials: GithubCredentials,
**kwargs,
) -> BlockOutput:
-        discussions = self.list_discussions(
-            credentials, input_data.repo_url, input_data.num_discussions
+        discussions = await self.list_discussions(
+            credentials,
+            input_data.repo_url,
+            input_data.num_discussions,
)
yield from (("discussion", discussion) for discussion in discussions)
for discussion in discussions:
yield "discussion", discussion
class GithubListReleasesBlock(Block):
@@ -329,30 +334,31 @@ class GithubListReleasesBlock(Block):
)
@staticmethod
-    def list_releases(
+    async def list_releases(
credentials: GithubCredentials, repo_url: str
) -> list[Output.ReleaseItem]:
api = get_api(credentials)
releases_url = repo_url + "/releases"
-        response = api.get(releases_url)
+        response = await api.get(releases_url)
data = response.json()
releases: list[GithubListReleasesBlock.Output.ReleaseItem] = [
{"name": release["name"], "url": release["html_url"]} for release in data
]
return releases
-    def run(
+    async def run(
self,
input_data: Input,
*,
credentials: GithubCredentials,
**kwargs,
) -> BlockOutput:
-        releases = self.list_releases(
+        releases = await self.list_releases(
credentials,
input_data.repo_url,
)
yield from (("release", release) for release in releases)
for release in releases:
yield "release", release
class GithubReadFileBlock(Block):
@@ -405,40 +411,40 @@ class GithubReadFileBlock(Block):
)
@staticmethod
-    def read_file(
+    async def read_file(
credentials: GithubCredentials, repo_url: str, file_path: str, branch: str
) -> tuple[str, int]:
api = get_api(credentials)
content_url = repo_url + f"/contents/{file_path}?ref={branch}"
-        response = api.get(content_url)
-        content = response.json()
+        response = await api.get(content_url)
+        data = response.json()
-        if isinstance(content, list):
+        if isinstance(data, list):
# Multiple entries of different types exist at this path
-            if not (file := next((f for f in content if f["type"] == "file"), None)):
+            if not (file := next((f for f in data if f["type"] == "file"), None)):
raise TypeError("Not a file")
-            content = file
+            data = file
if content["type"] != "file":
if data["type"] != "file":
raise TypeError("Not a file")
return content["content"], content["size"]
return data["content"], data["size"]
-    def run(
+    async def run(
self,
input_data: Input,
*,
credentials: GithubCredentials,
**kwargs,
) -> BlockOutput:
-        raw_content, size = self.read_file(
+        content, size = await self.read_file(
credentials,
input_data.repo_url,
input_data.file_path.lstrip("/"),
input_data.file_path,
input_data.branch,
)
yield "raw_content", raw_content
yield "text_content", base64.b64decode(raw_content).decode("utf-8")
yield "raw_content", content
yield "text_content", base64.b64decode(content).decode("utf-8")
yield "size", size
@@ -515,52 +521,55 @@ class GithubReadFolderBlock(Block):
)
@staticmethod
-    def read_folder(
+    async def read_folder(
credentials: GithubCredentials, repo_url: str, folder_path: str, branch: str
) -> tuple[list[Output.FileEntry], list[Output.DirEntry]]:
api = get_api(credentials)
contents_url = repo_url + f"/contents/{folder_path}?ref={branch}"
-        response = api.get(contents_url)
-        content = response.json()
+        response = await api.get(contents_url)
+        data = response.json()
-        if not isinstance(content, list):
+        if not isinstance(data, list):
raise TypeError("Not a folder")
-        files = [
+        files: list[GithubReadFolderBlock.Output.FileEntry] = [
GithubReadFolderBlock.Output.FileEntry(
name=entry["name"],
path=entry["path"],
size=entry["size"],
)
-            for entry in content
+            for entry in data
if entry["type"] == "file"
]
-        dirs = [
+        dirs: list[GithubReadFolderBlock.Output.DirEntry] = [
GithubReadFolderBlock.Output.DirEntry(
name=entry["name"],
path=entry["path"],
)
-            for entry in content
+            for entry in data
if entry["type"] == "dir"
]
return files, dirs
-    def run(
+    async def run(
self,
input_data: Input,
*,
credentials: GithubCredentials,
**kwargs,
) -> BlockOutput:
-        files, dirs = self.read_folder(
+        files, dirs = await self.read_folder(
credentials,
input_data.repo_url,
input_data.folder_path.lstrip("/"),
input_data.branch,
)
yield from (("file", file) for file in files)
yield from (("dir", dir) for dir in dirs)
for file in files:
yield "file", file
for dir in dirs:
yield "dir", dir
class GithubMakeBranchBlock(Block):
@@ -606,32 +615,35 @@ class GithubMakeBranchBlock(Block):
)
@staticmethod
-    def create_branch(
+    async def create_branch(
credentials: GithubCredentials,
repo_url: str,
new_branch: str,
source_branch: str,
) -> str:
api = get_api(credentials)
# Get the SHA of the source branch
ref_url = repo_url + f"/git/refs/heads/{source_branch}"
-        response = api.get(ref_url)
-        sha = response.json()["object"]["sha"]
+        response = await api.get(ref_url)
+        data = response.json()
+        sha = data["object"]["sha"]
# Create the new branch
-        create_ref_url = repo_url + "/git/refs"
-        data = {"ref": f"refs/heads/{new_branch}", "sha": sha}
-        response = api.post(create_ref_url, json=data)
+        new_ref_url = repo_url + "/git/refs"
+        data = {
+            "ref": f"refs/heads/{new_branch}",
+            "sha": sha,
+        }
+        response = await api.post(new_ref_url, json=data)
return "Branch created successfully"
    async def run(
        self,
        input_data: Input,
        *,
        credentials: GithubCredentials,
        **kwargs,
    ) -> BlockOutput:
        status = await self.create_branch(
credentials,
input_data.repo_url,
input_data.new_branch,
@@ -678,22 +690,22 @@ class GithubDeleteBranchBlock(Block):
)
@staticmethod
    async def delete_branch(
        credentials: GithubCredentials, repo_url: str, branch: str
    ) -> str:
        api = get_api(credentials)
        ref_url = repo_url + f"/git/refs/heads/{branch}"
        await api.delete(ref_url)
        return "Branch deleted successfully"
    async def run(
        self,
        input_data: Input,
        *,
        credentials: GithubCredentials,
        **kwargs,
    ) -> BlockOutput:
        status = await self.delete_branch(
credentials,
input_data.repo_url,
input_data.branch,
@@ -761,7 +773,7 @@ class GithubCreateFileBlock(Block):
)
@staticmethod
    async def create_file(
credentials: GithubCredentials,
repo_url: str,
file_path: str,
@@ -770,23 +782,18 @@ class GithubCreateFileBlock(Block):
commit_message: str,
) -> tuple[str, str]:
        api = get_api(credentials)
        contents_url = repo_url + f"/contents/{file_path}"
        content_base64 = base64.b64encode(content.encode()).decode()
        data = {
            "message": commit_message,
            "content": content_base64,
            "branch": branch,
        }
        response = await api.put(contents_url, json=data)
        data = response.json()
        return data["content"]["html_url"], data["commit"]["sha"]
    async def run(
self,
input_data: Input,
*,
@@ -794,7 +801,7 @@ class GithubCreateFileBlock(Block):
**kwargs,
) -> BlockOutput:
try:
            url, sha = await self.create_file(
credentials,
input_data.repo_url,
input_data.file_path,
@@ -866,7 +873,7 @@ class GithubUpdateFileBlock(Block):
)
@staticmethod
    async def update_file(
credentials: GithubCredentials,
repo_url: str,
file_path: str,
@@ -875,30 +882,24 @@ class GithubUpdateFileBlock(Block):
commit_message: str,
) -> tuple[str, str]:
        api = get_api(credentials)

        # First get the current file to get its SHA
        contents_url = repo_url + f"/contents/{file_path}"
        params = {"ref": branch}
        response = await api.get(contents_url, params=params)
        data = response.json()

        # Update the file using the SHA we just read
        content_base64 = base64.b64encode(content.encode()).decode()
        data = {
            "message": commit_message,
            "content": content_base64,
            "sha": data["sha"],
            "branch": branch,
        }
        response = await api.put(contents_url, json=data)
        data = response.json()
        return data["content"]["html_url"], data["commit"]["sha"]
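The `sha` carried in the PUT body is what makes `update_file` safe under concurrency: GitHub rejects the update with `409 Conflict` if the file changed after the GET, rather than silently clobbering it. A standalone sketch of a retry loop around that behaviour, using the plain `requests` library rather than this repo's `get_api` wrapper (token, repo, and message are placeholders):

import base64
import requests

def update_with_retry(token: str, repo: str, path: str, new_text: str, branch: str, retries: int = 3):
    url = f"https://api.github.com/repos/{repo}/contents/{path}"
    headers = {"Authorization": f"Bearer {token}"}
    for _ in range(retries):
        # Read the current blob SHA, then attempt the conditional update
        sha = requests.get(url, headers=headers, params={"ref": branch}).json()["sha"]
        resp = requests.put(url, headers=headers, json={
            "message": "update file",
            "content": base64.b64encode(new_text.encode()).decode(),
            "sha": sha,
            "branch": branch,
        })
        if resp.status_code != 409:  # 409 means our sha went stale: re-read and retry
            resp.raise_for_status()
            return resp.json()
    raise RuntimeError("file kept changing; giving up")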
    async def run(
self,
input_data: Input,
*,
@@ -906,7 +907,7 @@ class GithubUpdateFileBlock(Block):
**kwargs,
) -> BlockOutput:
try:
            url, sha = await self.update_file(
credentials,
input_data.repo_url,
input_data.file_path,
@@ -981,7 +982,7 @@ class GithubCreateRepositoryBlock(Block):
)
@staticmethod
    async def create_repository(
credentials: GithubCredentials,
name: str,
description: str,
@@ -989,24 +990,19 @@ class GithubCreateRepositoryBlock(Block):
auto_init: bool,
gitignore_template: str,
) -> tuple[str, str]:
        api = get_api(credentials)
        data = {
            "name": name,
            "description": description,
            "private": private,
            "auto_init": auto_init,
            "gitignore_template": gitignore_template,
        }
        response = await api.post("https://api.github.com/user/repos", json=data)
        data = response.json()
        return data["html_url"], data["clone_url"]
    async def run(
self,
input_data: Input,
*,
@@ -1014,7 +1010,7 @@ class GithubCreateRepositoryBlock(Block):
**kwargs,
) -> BlockOutput:
try:
            url, clone_url = await self.create_repository(
credentials,
input_data.name,
input_data.description,
@@ -1081,17 +1077,13 @@ class GithubListStargazersBlock(Block):
)
@staticmethod
    async def list_stargazers(
        credentials: GithubCredentials, repo_url: str
    ) -> list[Output.StargazerItem]:
        api = get_api(credentials)
        stargazers_url = repo_url + "/stargazers"
        response = await api.get(stargazers_url)
        data = response.json()
        stargazers: list[GithubListStargazersBlock.Output.StargazerItem] = [
            {
                "username": stargazer["login"],
@@ -1101,18 +1093,16 @@ class GithubListStargazersBlock(Block):
]
return stargazers
    async def run(
        self,
        input_data: Input,
        *,
        credentials: GithubCredentials,
        **kwargs,
    ) -> BlockOutput:
        stargazers = await self.list_stargazers(
            credentials,
            input_data.repo_url,
        )
        for stargazer in stargazers:
            yield "stargazer", stargazer

View File

@@ -115,7 +115,7 @@ class GithubCreateStatusBlock(Block):
)
@staticmethod
    async def create_status(
credentials: GithubFineGrainedAPICredentials,
repo_url: str,
sha: str,
@@ -144,7 +144,9 @@ class GithubCreateStatusBlock(Block):
data.description = description
status_url = f"{repo_url}/statuses/{sha}"
        response = await api.post(
            status_url, data=data.model_dump_json(exclude_none=True)
        )
result = response.json()
return {
@@ -158,7 +160,7 @@ class GithubCreateStatusBlock(Block):
"updated_at": result["updated_at"],
}
    async def run(
self,
input_data: Input,
*,
@@ -166,7 +168,7 @@ class GithubCreateStatusBlock(Block):
**kwargs,
) -> BlockOutput:
try:
            result = await self.create_status(
credentials=credentials,
repo_url=input_data.repo_url,
sha=input_data.sha,

View File

@@ -53,7 +53,7 @@ class GitHubTriggerBase:
description="Error message if the payload could not be processed"
)
    async def run(self, input_data: Input, **kwargs) -> BlockOutput:
yield "payload", input_data.payload
yield "triggered_by_user", input_data.payload["sender"]
@@ -148,8 +148,9 @@ class GithubPullRequestTriggerBlock(GitHubTriggerBase, Block):
],
)
    async def run(self, input_data: Input, **kwargs) -> BlockOutput:  # type: ignore
        async for name, value in super().run(input_data, **kwargs):
            yield name, value
yield "event", input_data.payload["action"]
yield "number", input_data.payload["number"]
yield "pull_request", input_data.payload["pull_request"]

View File

@@ -0,0 +1,603 @@
import asyncio
import enum
import uuid
from datetime import datetime, timedelta, timezone
from typing import Literal
from google.oauth2.credentials import Credentials
from googleapiclient.discovery import build
from pydantic import BaseModel
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.model import SchemaField
from backend.util.settings import AppEnvironment, Settings
from ._auth import (
GOOGLE_OAUTH_IS_CONFIGURED,
TEST_CREDENTIALS,
TEST_CREDENTIALS_INPUT,
GoogleCredentials,
GoogleCredentialsField,
GoogleCredentialsInput,
)
class CalendarEvent(BaseModel):
"""Structured representation of a Google Calendar event."""
id: str
title: str
start_time: str
end_time: str
is_all_day: bool
location: str | None
description: str | None
organizer: str | None
attendees: list[str]
has_video_call: bool
video_link: str | None
calendar_link: str
is_recurring: bool
class GoogleCalendarReadEventsBlock(Block):
class Input(BlockSchema):
credentials: GoogleCredentialsInput = GoogleCredentialsField(
["https://www.googleapis.com/auth/calendar.readonly"]
)
calendar_id: str = SchemaField(
description="Calendar ID (use 'primary' for your main calendar)",
default="primary",
)
max_events: int = SchemaField(
description="Maximum number of events to retrieve", default=10
)
start_time: datetime = SchemaField(
description="Retrieve events starting from this time",
default_factory=lambda: datetime.now(tz=timezone.utc),
)
time_range_days: int = SchemaField(
description="Number of days to look ahead for events", default=30
)
search_term: str | None = SchemaField(
description="Optional search term to filter events by", default=None
)
page_token: str | None = SchemaField(
description="Page token from previous request to get the next batch of events. You can use this if you have lots of events you want to process in a loop",
default=None,
)
include_declined_events: bool = SchemaField(
description="Include events you've declined", default=False
)
class Output(BlockSchema):
events: list[CalendarEvent] = SchemaField(
description="List of calendar events in the requested time range",
default_factory=list,
)
event: CalendarEvent = SchemaField(
description="One of the calendar events in the requested time range"
)
next_page_token: str | None = SchemaField(
description="Token for retrieving the next page of events if more exist",
default=None,
)
error: str = SchemaField(
description="Error message if the request failed",
)
def __init__(self):
settings = Settings()
# Create realistic test data for events
test_now = datetime.now(tz=timezone.utc)
test_tomorrow = test_now + timedelta(days=1)
test_event_dict = {
"id": "event1id",
"title": "Team Meeting",
"start_time": test_tomorrow.strftime("%Y-%m-%d %H:%M"),
"end_time": (test_tomorrow + timedelta(hours=1)).strftime("%Y-%m-%d %H:%M"),
"is_all_day": False,
"location": "Conference Room A",
"description": "Weekly team sync",
"organizer": "manager@example.com",
"attendees": ["colleague1@example.com", "colleague2@example.com"],
"has_video_call": True,
"video_link": "https://meet.google.com/abc-defg-hij",
"calendar_link": "https://calendar.google.com/calendar/event?eid=event1id",
"is_recurring": True,
}
super().__init__(
id="80bc3ed1-e9a4-449e-8163-a8fc86f74f6a",
description="Retrieves upcoming events from a Google Calendar with filtering options",
categories={BlockCategory.PRODUCTIVITY, BlockCategory.DATA},
input_schema=GoogleCalendarReadEventsBlock.Input,
output_schema=GoogleCalendarReadEventsBlock.Output,
disabled=not GOOGLE_OAUTH_IS_CONFIGURED
or settings.config.app_env == AppEnvironment.PRODUCTION,
test_input={
"credentials": TEST_CREDENTIALS_INPUT,
"calendar_id": "primary",
"max_events": 5,
"start_time": test_now.isoformat(),
"time_range_days": 7,
"search_term": None,
"include_declined_events": False,
"page_token": None,
},
test_credentials=TEST_CREDENTIALS,
test_output=[
("event", test_event_dict),
("events", [test_event_dict]),
],
test_mock={
"_read_calendar": lambda *args, **kwargs: {
"items": [
{
"id": "event1id",
"summary": "Team Meeting",
"start": {
"dateTime": test_tomorrow.isoformat(),
"timeZone": "UTC",
},
"end": {
"dateTime": (
test_tomorrow + timedelta(hours=1)
).isoformat(),
"timeZone": "UTC",
},
"location": "Conference Room A",
"description": "Weekly team sync",
"organizer": {"email": "manager@example.com"},
"attendees": [
{"email": "colleague1@example.com"},
{"email": "colleague2@example.com"},
],
"conferenceData": {
"conferenceUrl": "https://meet.google.com/abc-defg-hij"
},
"htmlLink": "https://calendar.google.com/calendar/event?eid=event1id",
"recurrence": ["RRULE:FREQ=WEEKLY;COUNT=10"],
}
],
"nextPageToken": None,
},
"_format_events": lambda *args, **kwargs: [test_event_dict],
},
)
async def run(
self, input_data: Input, *, credentials: GoogleCredentials, **kwargs
) -> BlockOutput:
try:
service = self._build_service(credentials, **kwargs)
# Calculate end time based on start time and time range
end_time = input_data.start_time + timedelta(
days=input_data.time_range_days
)
# Call Google Calendar API
result = await asyncio.to_thread(
self._read_calendar,
service=service,
calendarId=input_data.calendar_id,
time_min=input_data.start_time.isoformat(),
time_max=end_time.isoformat(),
max_results=input_data.max_events,
single_events=True,
search_term=input_data.search_term,
show_deleted=False,
show_hidden=input_data.include_declined_events,
page_token=input_data.page_token,
)
# Format events into a user-friendly structure
formatted_events = self._format_events(result.get("items", []))
# Include next page token if available
if next_page_token := result.get("nextPageToken"):
yield "next_page_token", next_page_token
for event in formatted_events:
yield "event", event
yield "events", formatted_events
except Exception as e:
yield "error", str(e)
@staticmethod
def _build_service(credentials: GoogleCredentials, **kwargs):
creds = Credentials(
token=(
credentials.access_token.get_secret_value()
if credentials.access_token
else None
),
refresh_token=(
credentials.refresh_token.get_secret_value()
if credentials.refresh_token
else None
),
token_uri="https://oauth2.googleapis.com/token",
client_id=Settings().secrets.google_client_id,
client_secret=Settings().secrets.google_client_secret,
scopes=credentials.scopes,
)
return build("calendar", "v3", credentials=creds)
def _read_calendar(
self,
service,
calendarId: str,
time_min: str,
time_max: str,
max_results: int,
single_events: bool,
search_term: str | None = None,
show_deleted: bool = False,
show_hidden: bool = False,
page_token: str | None = None,
) -> dict:
"""Read calendar events with optional filtering."""
calendar = service.events()
# Build query parameters
params = {
"calendarId": calendarId,
"timeMin": time_min,
"timeMax": time_max,
"maxResults": max_results,
"singleEvents": single_events,
"orderBy": "startTime",
"showDeleted": show_deleted,
"showHiddenInvitations": show_hidden,
**({"pageToken": page_token} if page_token else {}),
}
# Add search term if provided
if search_term:
params["q"] = search_term
result = calendar.list(**params).execute()
return result
def _format_events(self, events: list[dict]) -> list[CalendarEvent]:
"""Format Google Calendar API events into user-friendly structure."""
formatted_events = []
for event in events:
# Determine if all-day event
is_all_day = "date" in event.get("start", {})
# Format start and end times
if is_all_day:
start_time = event.get("start", {}).get("date", "")
end_time = event.get("end", {}).get("date", "")
else:
# Convert ISO format to more readable format
start_datetime = datetime.fromisoformat(
event.get("start", {}).get("dateTime", "").replace("Z", "+00:00")
)
end_datetime = datetime.fromisoformat(
event.get("end", {}).get("dateTime", "").replace("Z", "+00:00")
)
start_time = start_datetime.strftime("%Y-%m-%d %H:%M")
end_time = end_datetime.strftime("%Y-%m-%d %H:%M")
# Extract attendees
attendees = []
for attendee in event.get("attendees", []):
if email := attendee.get("email"):
attendees.append(email)
# Check for video call link
has_video_call = False
video_link = None
if conf_data := event.get("conferenceData"):
if conf_url := conf_data.get("conferenceUrl"):
has_video_call = True
video_link = conf_url
elif entry_points := conf_data.get("entryPoints", []):
for entry in entry_points:
if entry.get("entryPointType") == "video":
has_video_call = True
video_link = entry.get("uri")
break
# Create formatted event
formatted_event = CalendarEvent(
id=event.get("id", ""),
title=event.get("summary", "Untitled Event"),
start_time=start_time,
end_time=end_time,
is_all_day=is_all_day,
location=event.get("location"),
description=event.get("description"),
organizer=event.get("organizer", {}).get("email"),
attendees=attendees,
has_video_call=has_video_call,
video_link=video_link,
calendar_link=event.get("htmlLink", ""),
is_recurring=bool(event.get("recurrence")),
)
formatted_events.append(formatted_event)
return formatted_events
class ReminderPreset(enum.Enum):
"""Common reminder times before an event."""
TEN_MINUTES = 10
THIRTY_MINUTES = 30
ONE_HOUR = 60
ONE_DAY = 1440 # 24 hours in minutes
class RecurrenceFrequency(enum.Enum):
"""Frequency options for recurring events."""
DAILY = "DAILY"
WEEKLY = "WEEKLY"
MONTHLY = "MONTHLY"
YEARLY = "YEARLY"
class ExactTiming(BaseModel):
"""Model for specifying start and end times."""
discriminator: Literal["exact_timing"]
start_datetime: datetime
end_datetime: datetime
class DurationTiming(BaseModel):
"""Model for specifying start time and duration."""
discriminator: Literal["duration_timing"]
start_datetime: datetime
duration_minutes: int
class OneTimeEvent(BaseModel):
"""Model for a one-time event."""
discriminator: Literal["one_time"]
class RecurringEvent(BaseModel):
"""Model for a recurring event."""
discriminator: Literal["recurring"]
frequency: RecurrenceFrequency
count: int
class GoogleCalendarCreateEventBlock(Block):
class Input(BlockSchema):
credentials: GoogleCredentialsInput = GoogleCredentialsField(
["https://www.googleapis.com/auth/calendar"]
)
# Event Details
event_title: str = SchemaField(description="Title of the event")
location: str | None = SchemaField(
description="Location of the event", default=None
)
description: str | None = SchemaField(
description="Description of the event", default=None
)
# Timing
timing: ExactTiming | DurationTiming = SchemaField(
discriminator="discriminator",
advanced=False,
description="Specify when the event starts and ends",
default_factory=lambda: DurationTiming(
discriminator="duration_timing",
start_datetime=datetime.now().replace(microsecond=0, second=0, minute=0)
+ timedelta(hours=1),
duration_minutes=60,
),
)
# Calendar selection
calendar_id: str = SchemaField(
description="Calendar ID (use 'primary' for your main calendar)",
default="primary",
)
# Guests
guest_emails: list[str] = SchemaField(
description="Email addresses of guests to invite", default_factory=list
)
send_notifications: bool = SchemaField(
description="Send email notifications to guests", default=True
)
# Extras
add_google_meet: bool = SchemaField(
description="Include a Google Meet video conference link", default=False
)
recurrence: OneTimeEvent | RecurringEvent = SchemaField(
discriminator="discriminator",
description="Whether the event repeats",
default_factory=lambda: OneTimeEvent(discriminator="one_time"),
)
reminder_minutes: list[ReminderPreset] = SchemaField(
description="When to send reminders before the event",
default_factory=lambda: [ReminderPreset.TEN_MINUTES],
)
class Output(BlockSchema):
event_id: str = SchemaField(description="ID of the created event")
event_link: str = SchemaField(
description="Link to view the event in Google Calendar"
)
error: str = SchemaField(description="Error message if event creation failed")
def __init__(self):
settings = Settings()
super().__init__(
id="ed2ec950-fbff-4204-94c0-023fb1d625e0",
description="This block creates a new event in Google Calendar with customizable parameters.",
categories={BlockCategory.PRODUCTIVITY},
input_schema=GoogleCalendarCreateEventBlock.Input,
output_schema=GoogleCalendarCreateEventBlock.Output,
disabled=not GOOGLE_OAUTH_IS_CONFIGURED
or settings.config.app_env == AppEnvironment.PRODUCTION,
test_input={
"credentials": TEST_CREDENTIALS_INPUT,
"event_title": "Team Meeting",
"location": "Conference Room A",
"description": "Weekly team sync-up",
"calendar_id": "primary",
"guest_emails": ["colleague1@example.com", "colleague2@example.com"],
"add_google_meet": True,
"send_notifications": True,
"reminder_minutes": [
ReminderPreset.TEN_MINUTES.value,
ReminderPreset.ONE_HOUR.value,
],
},
test_credentials=TEST_CREDENTIALS,
test_output=[
("event_id", "abc123event_id"),
("event_link", "https://calendar.google.com/calendar/event?eid=abc123"),
],
test_mock={
"_create_event": lambda *args, **kwargs: {
"id": "abc123event_id",
"htmlLink": "https://calendar.google.com/calendar/event?eid=abc123",
}
},
)
async def run(
self, input_data: Input, *, credentials: GoogleCredentials, **kwargs
) -> BlockOutput:
try:
service = self._build_service(credentials, **kwargs)
# Create event body
# Get start and end times based on the timing option
if input_data.timing.discriminator == "exact_timing":
start_datetime = input_data.timing.start_datetime
end_datetime = input_data.timing.end_datetime
else: # duration_timing
start_datetime = input_data.timing.start_datetime
end_datetime = start_datetime + timedelta(
minutes=input_data.timing.duration_minutes
)
# Format datetimes for Google Calendar API
start_time_str = start_datetime.isoformat()
end_time_str = end_datetime.isoformat()
# Build the event body
event_body = {
"summary": input_data.event_title,
"start": {"dateTime": start_time_str},
"end": {"dateTime": end_time_str},
}
# Add optional fields
if input_data.location:
event_body["location"] = input_data.location
if input_data.description:
event_body["description"] = input_data.description
# Add guests
if input_data.guest_emails:
event_body["attendees"] = [
{"email": email} for email in input_data.guest_emails
]
# Add reminders
if input_data.reminder_minutes:
event_body["reminders"] = {
"useDefault": False,
"overrides": [
{"method": "popup", "minutes": reminder.value}
for reminder in input_data.reminder_minutes
],
}
# Add Google Meet
if input_data.add_google_meet:
event_body["conferenceData"] = {
"createRequest": {
"requestId": f"meet-{uuid.uuid4()}",
"conferenceSolutionKey": {"type": "hangoutsMeet"},
}
}
# Add recurrence
if input_data.recurrence.discriminator == "recurring":
rule = f"RRULE:FREQ={input_data.recurrence.frequency.value}"
rule += f";COUNT={input_data.recurrence.count}"
event_body["recurrence"] = [rule]
# Create the event
result = await asyncio.to_thread(
self._create_event,
service=service,
calendar_id=input_data.calendar_id,
event_body=event_body,
send_notifications=input_data.send_notifications,
conference_data_version=1 if input_data.add_google_meet else 0,
)
yield "event_id", result["id"]
yield "event_link", result["htmlLink"]
except Exception as e:
yield "error", str(e)
@staticmethod
def _build_service(credentials: GoogleCredentials, **kwargs):
creds = Credentials(
token=(
credentials.access_token.get_secret_value()
if credentials.access_token
else None
),
refresh_token=(
credentials.refresh_token.get_secret_value()
if credentials.refresh_token
else None
),
token_uri="https://oauth2.googleapis.com/token",
client_id=Settings().secrets.google_client_id,
client_secret=Settings().secrets.google_client_secret,
scopes=credentials.scopes,
)
return build("calendar", "v3", credentials=creds)
def _create_event(
self,
service,
calendar_id: str,
event_body: dict,
send_notifications: bool = False,
conference_data_version: int = 0,
) -> dict:
"""Create a new event in Google Calendar."""
calendar = service.events()
# Make the API call
result = calendar.insert(
calendarId=calendar_id,
body=event_body,
sendNotifications=send_notifications,
conferenceDataVersion=conference_data_version,
).execute()
return result
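For orientation, this is roughly the `event_body` the block assembles for a weekly recurring event with a Meet link (all values illustrative); note that `conferenceDataVersion=1` must accompany the insert for the `createRequest` to be honoured, which is why `run` passes it whenever `add_google_meet` is set:

event_body = {
    "summary": "Team Meeting",
    "start": {"dateTime": "2025-07-10T10:00:00+00:00"},
    "end": {"dateTime": "2025-07-10T11:00:00+00:00"},
    "attendees": [{"email": "colleague1@example.com"}],
    "reminders": {"useDefault": False, "overrides": [{"method": "popup", "minutes": 10}]},
    "conferenceData": {
        "createRequest": {
            "requestId": "meet-123e4567-e89b-12d3-a456-426614174000",
            "conferenceSolutionKey": {"type": "hangoutsMeet"},
        }
    },
    "recurrence": ["RRULE:FREQ=WEEKLY;COUNT=10"],
}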

View File

@@ -1,3 +1,4 @@
import asyncio
import base64
from email.utils import parseaddr
from typing import List
@@ -128,11 +129,13 @@ class GmailReadBlock(Block):
},
)
    async def run(
        self, input_data: Input, *, credentials: GoogleCredentials, **kwargs
    ) -> BlockOutput:
        service = GmailReadBlock._build_service(credentials, **kwargs)
        messages = await asyncio.to_thread(
            self._read_emails, service, input_data.query, input_data.max_results
        )
        for email in messages:
            yield "email", email
        yield "emails", messages
@@ -286,14 +289,18 @@ class GmailSendBlock(Block):
},
)
    async def run(
        self, input_data: Input, *, credentials: GoogleCredentials, **kwargs
    ) -> BlockOutput:
        service = GmailReadBlock._build_service(credentials, **kwargs)
        result = await asyncio.to_thread(
            self._send_email,
            service,
            input_data.to,
            input_data.subject,
            input_data.body,
        )
        yield "result", result
def _send_email(self, service, to: str, subject: str, body: str) -> dict:
if not to or not subject or not body:
@@ -358,12 +365,12 @@ class GmailListLabelsBlock(Block):
},
)
    async def run(
        self, input_data: Input, *, credentials: GoogleCredentials, **kwargs
    ) -> BlockOutput:
        service = GmailReadBlock._build_service(credentials, **kwargs)
        result = await asyncio.to_thread(self._list_labels, service)
        yield "result", result
def _list_labels(self, service) -> list[dict]:
results = service.users().labels().list(userId="me").execute()
@@ -419,11 +426,13 @@ class GmailAddLabelBlock(Block):
},
)
    async def run(
        self, input_data: Input, *, credentials: GoogleCredentials, **kwargs
    ) -> BlockOutput:
        service = GmailReadBlock._build_service(credentials, **kwargs)
        result = await asyncio.to_thread(
            self._add_label, service, input_data.message_id, input_data.label_name
        )
yield "result", result
def _add_label(self, service, message_id: str, label_name: str) -> dict:
@@ -502,12 +511,12 @@ class GmailRemoveLabelBlock(Block):
},
)
    async def run(
        self, input_data: Input, *, credentials: GoogleCredentials, **kwargs
    ) -> BlockOutput:
        service = GmailReadBlock._build_service(credentials, **kwargs)
        result = await asyncio.to_thread(
            self._remove_label, service, input_data.message_id, input_data.label_name
        )
yield "result", result

View File

@@ -1,9 +1,12 @@
import asyncio
from enum import Enum
from google.oauth2.credentials import Credentials
from googleapiclient.discovery import build
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.model import SchemaField
from backend.util.settings import AppEnvironment, Settings
from ._auth import (
GOOGLE_OAUTH_IS_CONFIGURED,
@@ -14,6 +17,102 @@ from ._auth import (
GoogleCredentialsInput,
)
settings = Settings()
GOOGLE_SHEETS_DISABLED = (
not GOOGLE_OAUTH_IS_CONFIGURED
or settings.config.app_env == AppEnvironment.PRODUCTION
)
def parse_a1_notation(a1: str) -> tuple[str | None, str]:
"""Split an A1notation string into *(sheet_name, cell_range)*.
Examples
--------
>>> parse_a1_notation("Sheet1!A1:B2")
("Sheet1", "A1:B2")
>>> parse_a1_notation("A1:B2")
(None, "A1:B2")
"""
if "!" in a1:
sheet, cell_range = a1.split("!", 1)
return sheet, cell_range
return None, a1
def _first_sheet_meta(service, spreadsheet_id: str) -> tuple[str, int]:
"""Return *(title, sheetId)* for the first sheet in *spreadsheet_id*."""
meta = (
service.spreadsheets()
.get(spreadsheetId=spreadsheet_id, includeGridData=False)
.execute()
)
first = meta["sheets"][0]["properties"]
return first["title"], first["sheetId"]
def resolve_sheet_name(service, spreadsheet_id: str, sheet_name: str | None) -> str:
"""Resolve *sheet_name*, falling back to the workbook's first sheet if empty."""
if sheet_name:
return sheet_name
title, _ = _first_sheet_meta(service, spreadsheet_id)
return title
def sheet_id_by_name(service, spreadsheet_id: str, sheet_name: str) -> int | None:
"""Return the *sheetId* for *sheet_name* (or `None` if not found)."""
meta = service.spreadsheets().get(spreadsheetId=spreadsheet_id).execute()
for sh in meta.get("sheets", []):
if sh.get("properties", {}).get("title") == sheet_name:
return sh["properties"]["sheetId"]
return None
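Taken together, these helpers give the blocks below a uniform path from a user-supplied A1 string to the numeric sheetId that batchUpdate requests need. A minimal sketch of the flow, assuming `service` is a live Sheets client and `spreadsheet_id` names a real workbook:

sheet, cells = parse_a1_notation("Data!A1:B2")              # -> ("Data", "A1:B2")
sheet = resolve_sheet_name(service, spreadsheet_id, sheet)  # falls back to the first sheet
sid = sheet_id_by_name(service, spreadsheet_id, sheet)      # numeric id, or None if missing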
def _build_sheets_service(credentials: GoogleCredentials):
settings = Settings()
creds = Credentials(
token=(
credentials.access_token.get_secret_value()
if credentials.access_token
else None
),
refresh_token=(
credentials.refresh_token.get_secret_value()
if credentials.refresh_token
else None
),
token_uri="https://oauth2.googleapis.com/token",
client_id=settings.secrets.google_client_id,
client_secret=settings.secrets.google_client_secret,
scopes=credentials.scopes,
)
return build("sheets", "v4", credentials=creds)
class SheetOperation(str, Enum):
CREATE = "create"
DELETE = "delete"
COPY = "copy"
class BatchOperationType(str, Enum):
UPDATE = "update"
CLEAR = "clear"
class BatchOperation(BlockSchema):
type: BatchOperationType = SchemaField(
description="The type of operation to perform"
)
range: str = SchemaField(description="The A1 notation range for the operation")
values: list[list[str]] = SchemaField(
description="Values to update (only for UPDATE)", default=[]
)
class GoogleSheetsReadBlock(Block):
class Input(BlockSchema):
@@ -42,7 +141,7 @@ class GoogleSheetsReadBlock(Block):
categories={BlockCategory.DATA},
input_schema=GoogleSheetsReadBlock.Input,
output_schema=GoogleSheetsReadBlock.Output,
            disabled=GOOGLE_SHEETS_DISABLED,
test_input={
"spreadsheet_id": "1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms",
"range": "Sheet1!A1:B2",
@@ -66,32 +165,14 @@ class GoogleSheetsReadBlock(Block):
},
)
    async def run(
        self, input_data: Input, *, credentials: GoogleCredentials, **kwargs
    ) -> BlockOutput:
        service = _build_sheets_service(credentials)
        data = await asyncio.to_thread(
            self._read_sheet, service, input_data.spreadsheet_id, input_data.range
        )
        yield "result", data
def _read_sheet(self, service, spreadsheet_id: str, range: str) -> list[list[str]]:
sheet = service.spreadsheets()
@@ -129,7 +210,7 @@ class GoogleSheetsWriteBlock(Block):
categories={BlockCategory.DATA},
input_schema=GoogleSheetsWriteBlock.Input,
output_schema=GoogleSheetsWriteBlock.Output,
            disabled=GOOGLE_SHEETS_DISABLED,
test_input={
"spreadsheet_id": "1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms",
"range": "Sheet1!A1:B2",
@@ -155,11 +236,12 @@ class GoogleSheetsWriteBlock(Block):
},
)
    async def run(
        self, input_data: Input, *, credentials: GoogleCredentials, **kwargs
    ) -> BlockOutput:
        service = _build_sheets_service(credentials)
        result = await asyncio.to_thread(
            self._write_sheet,
service,
input_data.spreadsheet_id,
input_data.range,
@@ -183,3 +265,790 @@ class GoogleSheetsWriteBlock(Block):
.execute()
)
return result
class GoogleSheetsAppendBlock(Block):
class Input(BlockSchema):
credentials: GoogleCredentialsInput = GoogleCredentialsField(
["https://www.googleapis.com/auth/spreadsheets"]
)
spreadsheet_id: str = SchemaField(description="Spreadsheet ID")
sheet_name: str = SchemaField(
description="Optional sheet to append to (defaults to first sheet)",
default="",
)
values: list[list[str]] = SchemaField(description="Rows to append")
class Output(BlockSchema):
result: dict = SchemaField(description="Append API response")
error: str = SchemaField(description="Error message, if any")
def __init__(self):
super().__init__(
id="531d50c0-d6b9-4cf9-a013-7bf783d313c7",
description="Append data to a Google Sheet (sheet optional)",
categories={BlockCategory.DATA},
input_schema=GoogleSheetsAppendBlock.Input,
output_schema=GoogleSheetsAppendBlock.Output,
disabled=GOOGLE_SHEETS_DISABLED,
test_input={
"spreadsheet_id": "1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms",
"values": [["Charlie", "95"]],
"credentials": TEST_CREDENTIALS_INPUT,
},
test_credentials=TEST_CREDENTIALS,
test_output=[
("result", {"updatedCells": 2, "updatedColumns": 2, "updatedRows": 1}),
],
test_mock={
"_append_sheet": lambda *args, **kwargs: {
"updatedCells": 2,
"updatedColumns": 2,
"updatedRows": 1,
},
},
)
async def run(
self, input_data: Input, *, credentials: GoogleCredentials, **kwargs
) -> BlockOutput:
service = _build_sheets_service(credentials)
result = await asyncio.to_thread(
self._append_sheet,
service,
input_data.spreadsheet_id,
input_data.sheet_name,
input_data.values,
)
yield "result", result
def _append_sheet(
self,
service,
spreadsheet_id: str,
sheet_name: str,
values: list[list[str]],
) -> dict:
target_sheet = resolve_sheet_name(service, spreadsheet_id, sheet_name)
body = {"values": values}
return (
service.spreadsheets()
.values()
.append(
spreadsheetId=spreadsheet_id,
range=f"{target_sheet}!A:A",
valueInputOption="USER_ENTERED",
insertDataOption="INSERT_ROWS",
body=body,
)
.execute()
)
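For reference, the `range` argument in the append call above is only a table locator: the Sheets API scans `{sheet}!A:A` for the last row of data and inserts below it, while `INSERT_ROWS` shifts existing cells down instead of overwriting. A sketch of the equivalent raw call (spreadsheet ID made up):

# service.spreadsheets().values().append(
#     spreadsheetId="1BxiMVs0...",        # hypothetical ID
#     range="Sheet1!A:A",                 # locator only: append lands after the last data row
#     valueInputOption="USER_ENTERED",    # values are parsed as if typed by a user
#     insertDataOption="INSERT_ROWS",
#     body={"values": [["Charlie", "95"]]},
# ).execute()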
class GoogleSheetsClearBlock(Block):
class Input(BlockSchema):
credentials: GoogleCredentialsInput = GoogleCredentialsField(
["https://www.googleapis.com/auth/spreadsheets"]
)
spreadsheet_id: str = SchemaField(
description="The ID of the spreadsheet to clear",
)
range: str = SchemaField(
description="The A1 notation of the range to clear",
)
class Output(BlockSchema):
result: dict = SchemaField(
description="The result of the clear operation",
)
error: str = SchemaField(
description="Error message if any",
)
def __init__(self):
super().__init__(
id="84938266-0fc7-46e5-9369-adb0f6ae8015",
description="This block clears data from a specified range in a Google Sheets spreadsheet.",
categories={BlockCategory.DATA},
input_schema=GoogleSheetsClearBlock.Input,
output_schema=GoogleSheetsClearBlock.Output,
disabled=GOOGLE_SHEETS_DISABLED,
test_input={
"spreadsheet_id": "1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms",
"range": "Sheet1!A1:B2",
"credentials": TEST_CREDENTIALS_INPUT,
},
test_credentials=TEST_CREDENTIALS,
test_output=[
("result", {"clearedRange": "Sheet1!A1:B2"}),
],
test_mock={
"_clear_range": lambda *args, **kwargs: {
"clearedRange": "Sheet1!A1:B2"
},
},
)
async def run(
self, input_data: Input, *, credentials: GoogleCredentials, **kwargs
) -> BlockOutput:
service = _build_sheets_service(credentials)
result = await asyncio.to_thread(
self._clear_range,
service,
input_data.spreadsheet_id,
input_data.range,
)
yield "result", result
def _clear_range(self, service, spreadsheet_id: str, range: str) -> dict:
result = (
service.spreadsheets()
.values()
.clear(spreadsheetId=spreadsheet_id, range=range)
.execute()
)
return result
class GoogleSheetsMetadataBlock(Block):
class Input(BlockSchema):
credentials: GoogleCredentialsInput = GoogleCredentialsField(
["https://www.googleapis.com/auth/spreadsheets.readonly"]
)
spreadsheet_id: str = SchemaField(
description="The ID of the spreadsheet to get metadata for",
)
class Output(BlockSchema):
result: dict = SchemaField(
description="The metadata of the spreadsheet including sheets info",
)
error: str = SchemaField(
description="Error message if any",
)
def __init__(self):
super().__init__(
id="6a0be6ee-7a0d-4c92-819b-500630846ad0",
description="This block retrieves metadata about a Google Sheets spreadsheet including sheet names and properties.",
categories={BlockCategory.DATA},
input_schema=GoogleSheetsMetadataBlock.Input,
output_schema=GoogleSheetsMetadataBlock.Output,
disabled=GOOGLE_SHEETS_DISABLED,
test_input={
"spreadsheet_id": "1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms",
"credentials": TEST_CREDENTIALS_INPUT,
},
test_credentials=TEST_CREDENTIALS,
test_output=[
(
"result",
{
"title": "Test Spreadsheet",
"sheets": [{"title": "Sheet1", "sheetId": 0}],
},
),
],
test_mock={
"_get_metadata": lambda *args, **kwargs: {
"title": "Test Spreadsheet",
"sheets": [{"title": "Sheet1", "sheetId": 0}],
},
},
)
async def run(
self, input_data: Input, *, credentials: GoogleCredentials, **kwargs
) -> BlockOutput:
service = _build_sheets_service(credentials)
result = await asyncio.to_thread(
self._get_metadata,
service,
input_data.spreadsheet_id,
)
yield "result", result
def _get_metadata(self, service, spreadsheet_id: str) -> dict:
result = (
service.spreadsheets()
.get(spreadsheetId=spreadsheet_id, includeGridData=False)
.execute()
)
return {
"title": result.get("properties", {}).get("title"),
"sheets": [
{
"title": sheet.get("properties", {}).get("title"),
"sheetId": sheet.get("properties", {}).get("sheetId"),
"gridProperties": sheet.get("properties", {}).get(
"gridProperties", {}
),
}
for sheet in result.get("sheets", [])
],
}
class GoogleSheetsManageSheetBlock(Block):
class Input(BlockSchema):
credentials: GoogleCredentialsInput = GoogleCredentialsField(
["https://www.googleapis.com/auth/spreadsheets"]
)
spreadsheet_id: str = SchemaField(description="Spreadsheet ID")
operation: SheetOperation = SchemaField(description="Operation to perform")
sheet_name: str = SchemaField(
description="Target sheet name (defaults to first sheet for delete)",
default="",
)
source_sheet_id: int = SchemaField(
description="Source sheet ID for copy", default=0
)
destination_sheet_name: str = SchemaField(
description="New sheet name for copy", default=""
)
class Output(BlockSchema):
result: dict = SchemaField(description="Operation result")
error: str = SchemaField(description="Error message, if any")
def __init__(self):
super().__init__(
id="7940189d-b137-4ef1-aa18-3dd9a5bde9f3",
description="Create, delete, or copy sheets (sheet optional)",
categories={BlockCategory.DATA},
input_schema=GoogleSheetsManageSheetBlock.Input,
output_schema=GoogleSheetsManageSheetBlock.Output,
disabled=GOOGLE_SHEETS_DISABLED,
test_input={
"spreadsheet_id": "1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms",
"operation": SheetOperation.CREATE,
"sheet_name": "NewSheet",
"credentials": TEST_CREDENTIALS_INPUT,
},
test_credentials=TEST_CREDENTIALS,
test_output=[("result", {"success": True, "sheetId": 123})],
test_mock={
"_manage_sheet": lambda *args, **kwargs: {
"success": True,
"sheetId": 123,
}
},
)
async def run(
self, input_data: Input, *, credentials: GoogleCredentials, **kwargs
) -> BlockOutput:
service = _build_sheets_service(credentials)
result = await asyncio.to_thread(
self._manage_sheet,
service,
input_data.spreadsheet_id,
input_data.operation,
input_data.sheet_name,
input_data.source_sheet_id,
input_data.destination_sheet_name,
)
yield "result", result
def _manage_sheet(
self,
service,
spreadsheet_id: str,
operation: SheetOperation,
sheet_name: str,
source_sheet_id: int,
destination_sheet_name: str,
) -> dict:
requests = []
# Ensure a target sheet name when needed
target_name = resolve_sheet_name(service, spreadsheet_id, sheet_name)
if operation == SheetOperation.CREATE:
requests.append({"addSheet": {"properties": {"title": target_name}}})
elif operation == SheetOperation.DELETE:
sid = sheet_id_by_name(service, spreadsheet_id, target_name)
if sid is None:
return {"error": f"Sheet '{target_name}' not found"}
requests.append({"deleteSheet": {"sheetId": sid}})
elif operation == SheetOperation.COPY:
requests.append(
{
"duplicateSheet": {
"sourceSheetId": source_sheet_id,
"newSheetName": destination_sheet_name
or f"Copy of {source_sheet_id}",
}
}
)
else:
return {"error": f"Unknown operation: {operation}"}
body = {"requests": requests}
result = (
service.spreadsheets()
.batchUpdate(spreadsheetId=spreadsheet_id, body=body)
.execute()
)
return {"success": True, "result": result}
class GoogleSheetsBatchOperationsBlock(Block):
class Input(BlockSchema):
credentials: GoogleCredentialsInput = GoogleCredentialsField(
["https://www.googleapis.com/auth/spreadsheets"]
)
spreadsheet_id: str = SchemaField(
description="The ID of the spreadsheet to perform batch operations on",
)
operations: list[BatchOperation] = SchemaField(
description="List of operations to perform",
)
class Output(BlockSchema):
result: dict = SchemaField(
description="The result of the batch operations",
)
error: str = SchemaField(
description="Error message if any",
)
def __init__(self):
super().__init__(
id="a4078584-6fe5-46e0-997e-d5126cdd112a",
description="This block performs multiple operations on a Google Sheets spreadsheet in a single batch request.",
categories={BlockCategory.DATA},
input_schema=GoogleSheetsBatchOperationsBlock.Input,
output_schema=GoogleSheetsBatchOperationsBlock.Output,
disabled=GOOGLE_SHEETS_DISABLED,
test_input={
"spreadsheet_id": "1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms",
"operations": [
{
"type": BatchOperationType.UPDATE,
"range": "A1:B1",
"values": [["Header1", "Header2"]],
},
{
"type": BatchOperationType.UPDATE,
"range": "A2:B2",
"values": [["Data1", "Data2"]],
},
],
"credentials": TEST_CREDENTIALS_INPUT,
},
test_credentials=TEST_CREDENTIALS,
test_output=[
("result", {"totalUpdatedCells": 4, "replies": []}),
],
test_mock={
"_batch_operations": lambda *args, **kwargs: {
"totalUpdatedCells": 4,
"replies": [],
},
},
)
async def run(
self, input_data: Input, *, credentials: GoogleCredentials, **kwargs
) -> BlockOutput:
service = _build_sheets_service(credentials)
result = await asyncio.to_thread(
self._batch_operations,
service,
input_data.spreadsheet_id,
input_data.operations,
)
yield "result", result
def _batch_operations(
self, service, spreadsheet_id: str, operations: list[BatchOperation]
) -> dict:
update_data = []
clear_ranges = []
for op in operations:
if op.type == BatchOperationType.UPDATE:
update_data.append(
{
"range": op.range,
"values": op.values,
}
)
elif op.type == BatchOperationType.CLEAR:
clear_ranges.append(op.range)
results = {}
# Perform updates if any
if update_data:
update_body = {
"valueInputOption": "USER_ENTERED",
"data": update_data,
}
update_result = (
service.spreadsheets()
.values()
.batchUpdate(spreadsheetId=spreadsheet_id, body=update_body)
.execute()
)
results["updateResult"] = update_result
# Perform clears if any
if clear_ranges:
clear_body = {"ranges": clear_ranges}
clear_result = (
service.spreadsheets()
.values()
.batchClear(spreadsheetId=spreadsheet_id, body=clear_body)
.execute()
)
results["clearResult"] = clear_result
return results
class GoogleSheetsFindReplaceBlock(Block):
class Input(BlockSchema):
credentials: GoogleCredentialsInput = GoogleCredentialsField(
["https://www.googleapis.com/auth/spreadsheets"]
)
spreadsheet_id: str = SchemaField(
description="The ID of the spreadsheet to perform find/replace on",
)
find_text: str = SchemaField(
description="The text to find",
)
replace_text: str = SchemaField(
description="The text to replace with",
)
sheet_id: int = SchemaField(
description="The ID of the specific sheet to search (optional, searches all sheets if not provided)",
default=-1,
)
match_case: bool = SchemaField(
description="Whether to match case",
default=False,
)
match_entire_cell: bool = SchemaField(
description="Whether to match entire cell",
default=False,
)
class Output(BlockSchema):
result: dict = SchemaField(
description="The result of the find/replace operation including number of replacements",
)
error: str = SchemaField(
description="Error message if any",
)
def __init__(self):
super().__init__(
id="accca760-8174-4656-b55e-5f0e82fee986",
description="This block finds and replaces text in a Google Sheets spreadsheet.",
categories={BlockCategory.DATA},
input_schema=GoogleSheetsFindReplaceBlock.Input,
output_schema=GoogleSheetsFindReplaceBlock.Output,
disabled=GOOGLE_SHEETS_DISABLED,
test_input={
"spreadsheet_id": "1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms",
"find_text": "old_value",
"replace_text": "new_value",
"match_case": False,
"match_entire_cell": False,
"credentials": TEST_CREDENTIALS_INPUT,
},
test_credentials=TEST_CREDENTIALS,
test_output=[
("result", {"occurrencesChanged": 5}),
],
test_mock={
"_find_replace": lambda *args, **kwargs: {"occurrencesChanged": 5},
},
)
async def run(
self, input_data: Input, *, credentials: GoogleCredentials, **kwargs
) -> BlockOutput:
service = _build_sheets_service(credentials)
result = await asyncio.to_thread(
self._find_replace,
service,
input_data.spreadsheet_id,
input_data.find_text,
input_data.replace_text,
input_data.sheet_id,
input_data.match_case,
input_data.match_entire_cell,
)
yield "result", result
def _find_replace(
self,
service,
spreadsheet_id: str,
find_text: str,
replace_text: str,
sheet_id: int,
match_case: bool,
match_entire_cell: bool,
) -> dict:
find_replace_request = {
"find": find_text,
"replacement": replace_text,
"matchCase": match_case,
"matchEntireCell": match_entire_cell,
}
if sheet_id >= 0:
find_replace_request["sheetId"] = sheet_id
requests = [{"findReplace": find_replace_request}]
body = {"requests": requests}
result = (
service.spreadsheets()
.batchUpdate(spreadsheetId=spreadsheet_id, body=body)
.execute()
)
return result
class GoogleSheetsFormatBlock(Block):
class Input(BlockSchema):
credentials: GoogleCredentialsInput = GoogleCredentialsField(
["https://www.googleapis.com/auth/spreadsheets"]
)
spreadsheet_id: str = SchemaField(description="Spreadsheet ID")
range: str = SchemaField(description="A1 notation sheet optional")
background_color: dict = SchemaField(default={})
text_color: dict = SchemaField(default={})
bold: bool = SchemaField(default=False)
italic: bool = SchemaField(default=False)
font_size: int = SchemaField(default=10)
class Output(BlockSchema):
result: dict = SchemaField(description="API response or success flag")
error: str = SchemaField(description="Error message, if any")
def __init__(self):
super().__init__(
id="270f2384-8089-4b5b-b2e3-fe2ea3d87c02",
description="Format a range in a Google Sheet (sheet optional)",
categories={BlockCategory.DATA},
input_schema=GoogleSheetsFormatBlock.Input,
output_schema=GoogleSheetsFormatBlock.Output,
disabled=GOOGLE_SHEETS_DISABLED,
test_input={
"spreadsheet_id": "1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms",
"range": "A1:B2",
"background_color": {"red": 1.0, "green": 0.9, "blue": 0.9},
"bold": True,
"credentials": TEST_CREDENTIALS_INPUT,
},
test_credentials=TEST_CREDENTIALS,
test_output=[("result", {"success": True})],
test_mock={"_format_cells": lambda *args, **kwargs: {"success": True}},
)
async def run(
self, input_data: Input, *, credentials: GoogleCredentials, **kwargs
) -> BlockOutput:
service = _build_sheets_service(credentials)
result = await asyncio.to_thread(
self._format_cells,
service,
input_data.spreadsheet_id,
input_data.range,
input_data.background_color,
input_data.text_color,
input_data.bold,
input_data.italic,
input_data.font_size,
)
if "error" in result:
yield "error", result["error"]
else:
yield "result", result
def _format_cells(
self,
service,
spreadsheet_id: str,
a1_range: str,
background_color: dict,
text_color: dict,
bold: bool,
italic: bool,
font_size: int,
) -> dict:
sheet_name, cell_range = parse_a1_notation(a1_range)
sheet_name = resolve_sheet_name(service, spreadsheet_id, sheet_name)
sheet_id = sheet_id_by_name(service, spreadsheet_id, sheet_name)
if sheet_id is None:
return {"error": f"Sheet '{sheet_name}' not found"}
try:
start_cell, end_cell = cell_range.split(":")
start_col = ord(start_cell[0].upper()) - ord("A")
start_row = int(start_cell[1:]) - 1
end_col = ord(end_cell[0].upper()) - ord("A") + 1
end_row = int(end_cell[1:])
except (ValueError, IndexError):
return {"error": f"Invalid range format: {a1_range}"}
cell_format: dict = {"userEnteredFormat": {}}
if background_color:
cell_format["userEnteredFormat"]["backgroundColor"] = background_color
text_format: dict = {}
if text_color:
text_format["foregroundColor"] = text_color
if bold:
text_format["bold"] = True
if italic:
text_format["italic"] = True
if font_size != 10:
text_format["fontSize"] = font_size
if text_format:
cell_format["userEnteredFormat"]["textFormat"] = text_format
body = {
"requests": [
{
"repeatCell": {
"range": {
"sheetId": sheet_id,
"startRowIndex": start_row,
"endRowIndex": end_row,
"startColumnIndex": start_col,
"endColumnIndex": end_col,
},
"cell": cell_format,
"fields": "userEnteredFormat(backgroundColor,textFormat)",
}
}
]
}
service.spreadsheets().batchUpdate(
spreadsheetId=spreadsheet_id, body=body
).execute()
return {"success": True}
class GoogleSheetsCreateSpreadsheetBlock(Block):
class Input(BlockSchema):
credentials: GoogleCredentialsInput = GoogleCredentialsField(
["https://www.googleapis.com/auth/spreadsheets"]
)
title: str = SchemaField(
description="The title of the new spreadsheet",
)
sheet_names: list[str] = SchemaField(
description="List of sheet names to create (optional, defaults to single 'Sheet1')",
default=["Sheet1"],
)
class Output(BlockSchema):
result: dict = SchemaField(
description="The result containing spreadsheet ID and URL",
)
spreadsheet_id: str = SchemaField(
description="The ID of the created spreadsheet",
)
spreadsheet_url: str = SchemaField(
description="The URL of the created spreadsheet",
)
error: str = SchemaField(
description="Error message if any",
)
def __init__(self):
super().__init__(
id="c8d4c0d3-c76e-4c2a-8c66-4119817ea3d1",
description="This block creates a new Google Sheets spreadsheet with specified sheets.",
categories={BlockCategory.DATA},
input_schema=GoogleSheetsCreateSpreadsheetBlock.Input,
output_schema=GoogleSheetsCreateSpreadsheetBlock.Output,
disabled=GOOGLE_SHEETS_DISABLED,
test_input={
"title": "Test Spreadsheet",
"sheet_names": ["Sheet1", "Data", "Summary"],
"credentials": TEST_CREDENTIALS_INPUT,
},
test_credentials=TEST_CREDENTIALS,
test_output=[
("spreadsheet_id", "1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms"),
(
"spreadsheet_url",
"https://docs.google.com/spreadsheets/d/1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms/edit",
),
("result", {"success": True}),
],
test_mock={
"_create_spreadsheet": lambda *args, **kwargs: {
"spreadsheetId": "1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms",
"spreadsheetUrl": "https://docs.google.com/spreadsheets/d/1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs74OgvE2upms/edit",
},
},
)
async def run(
self, input_data: Input, *, credentials: GoogleCredentials, **kwargs
) -> BlockOutput:
service = _build_sheets_service(credentials)
result = await asyncio.to_thread(
self._create_spreadsheet,
service,
input_data.title,
input_data.sheet_names,
)
if "error" in result:
yield "error", result["error"]
else:
yield "spreadsheet_id", result["spreadsheetId"]
yield "spreadsheet_url", result["spreadsheetUrl"]
yield "result", {"success": True}
def _create_spreadsheet(self, service, title: str, sheet_names: list[str]) -> dict:
try:
# Create the initial spreadsheet
spreadsheet_body = {
"properties": {"title": title},
"sheets": [
{
"properties": {
"title": sheet_names[0] if sheet_names else "Sheet1"
}
}
],
}
result = service.spreadsheets().create(body=spreadsheet_body).execute()
spreadsheet_id = result["spreadsheetId"]
spreadsheet_url = result["spreadsheetUrl"]
# Add additional sheets if requested
if len(sheet_names) > 1:
requests = []
for sheet_name in sheet_names[1:]:
requests.append({"addSheet": {"properties": {"title": sheet_name}}})
if requests:
batch_body = {"requests": requests}
service.spreadsheets().batchUpdate(
spreadsheetId=spreadsheet_id, body=batch_body
).execute()
return {
"spreadsheetId": spreadsheet_id,
"spreadsheetUrl": spreadsheet_url,
}
except Exception as e:
return {"error": str(e)}

View File

@@ -103,7 +103,7 @@ class GoogleMapsSearchBlock(Block):
test_credentials=TEST_CREDENTIALS,
)
    async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
places = self.search_places(

View File

@@ -1,14 +1,17 @@
from typing import Any, Optional
from backend.util.request import Requests
class GetRequest:
@classmethod
    async def get_request(
cls, url: str, headers: Optional[dict] = None, json: bool = False
) -> Any:
if headers is None:
headers = {}
        response = await Requests().get(url, headers=headers)
        if json:
            return response.json()
        else:
            return response.text()
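A usage sketch — the helper must now be awaited from inside an event loop (the import path is assumed, not taken from this diff):

# import asyncio
#
# async def main():
#     # GetRequest as defined above; module path will vary by checkout
#     data = await GetRequest.get_request("https://api.example.com/item.json", json=True)
#     print(data)
#
# asyncio.run(main())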

View File

@@ -1,17 +1,56 @@
import json
import logging
from enum import Enum
from io import BytesIO
from pathlib import Path
from typing import Literal

import aiofiles
from pydantic import SecretStr

from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.model import (
    CredentialsField,
    CredentialsMetaInput,
    HostScopedCredentials,
    SchemaField,
)
from backend.integrations.providers import ProviderName
from backend.util.file import (
    MediaFileType,
    get_exec_file_path,
    get_mime_type,
    store_media_file,
)
from backend.util.request import Requests
logger = logging.getLogger(name=__name__)
# Host-scoped credentials for HTTP requests
HttpCredentials = CredentialsMetaInput[
Literal[ProviderName.HTTP], Literal["host_scoped"]
]
TEST_CREDENTIALS = HostScopedCredentials(
id="01234567-89ab-cdef-0123-456789abcdef",
provider="http",
host="api.example.com",
headers={
"Authorization": SecretStr("Bearer test-token"),
"X-API-Key": SecretStr("test-api-key"),
},
title="Mock HTTP Host-Scoped Credentials",
)
TEST_CREDENTIALS_INPUT = {
"provider": TEST_CREDENTIALS.provider,
"id": TEST_CREDENTIALS.id,
"type": TEST_CREDENTIALS.type,
"title": TEST_CREDENTIALS.title,
}
class HttpMethod(Enum):
GET = "GET"
POST = "POST"
@@ -38,13 +77,21 @@ class SendWebRequestBlock(Block):
)
json_format: bool = SchemaField(
title="JSON format",
description="Whether to send and receive body as JSON",
description="If true, send the body as JSON (unless files are also present).",
default=True,
)
        body: dict | None = SchemaField(
            description="Form/JSON body payload. If files are supplied, this must be a mapping of form fields.",
            default=None,
        )
files_name: str = SchemaField(
description="The name of the file field in the form data.",
default="file",
)
        files: list[MediaFileType] = SchemaField(
            description="Files to upload: each entry is an image URL, local path, or base64 data URI; the form field name comes from `files_name`.",
            default_factory=list,
        )
class Output(BlockSchema):
response: object = SchemaField(description="The response from the server")
@@ -55,59 +102,161 @@ class SendWebRequestBlock(Block):
def __init__(self):
super().__init__(
id="6595ae1f-b924-42cb-9a41-551a0611c4b4",
description="This block makes an HTTP request to the given URL.",
description="Make an HTTP request (JSON / form / multipart).",
categories={BlockCategory.OUTPUT},
input_schema=SendWebRequestBlock.Input,
output_schema=SendWebRequestBlock.Output,
)
def run(self, input_data: Input, **kwargs) -> BlockOutput:
body = input_data.body
@staticmethod
async def _prepare_files(
graph_exec_id: str,
files_name: str,
files: list[MediaFileType],
) -> list[tuple[str, tuple[str, BytesIO, str]]]:
"""
Prepare files for the request by storing them and reading their content.
Returns a list of tuples in the format:
(files_name, (filename, BytesIO, mime_type))
"""
files_payload: list[tuple[str, tuple[str, BytesIO, str]]] = []
if input_data.json_format:
if isinstance(body, str):
try:
# Try to parse as JSON first
body = json.loads(body)
except json.JSONDecodeError:
# If it's not valid JSON and just plain text,
# we should send it as plain text instead
for media in files:
# Normalise to a list so we can repeat the same key
rel_path = await store_media_file(
graph_exec_id, media, return_content=False
)
abs_path = get_exec_file_path(graph_exec_id, rel_path)
async with aiofiles.open(abs_path, "rb") as f:
content = await f.read()
handle = BytesIO(content)
mime = get_mime_type(abs_path)
files_payload.append((files_name, (Path(abs_path).name, handle, mime)))
return files_payload
async def run(
self, input_data: Input, *, graph_exec_id: str, **kwargs
) -> BlockOutput:
# ─── Parse/normalise body ────────────────────────────────────
body = input_data.body
if isinstance(body, str):
try:
# Validate JSON string length to prevent DoS attacks
if len(body) > 10_000_000: # 10MB limit
raise ValueError("JSON body too large")
parsed_body = json.loads(body)
# Validate that parsed JSON is safe (basic object/array/primitive types)
if (
isinstance(parsed_body, (dict, list, str, int, float, bool))
or parsed_body is None
):
body = parsed_body
else:
# Unexpected type, treat as plain text
input_data.json_format = False
try:
response = requests.request(
input_data.method.value,
input_data.url,
headers=input_data.headers,
json=body if input_data.json_format else None,
data=body if not input_data.json_format else None,
except (json.JSONDecodeError, ValueError):
# Invalid JSON or too large; treat as a form-field value instead
input_data.json_format = False
# ─── Prepare files (if any) ──────────────────────────────────
use_files = bool(input_data.files)
files_payload: list[tuple[str, tuple[str, BytesIO, str]]] = []
if use_files:
files_payload = await self._prepare_files(
graph_exec_id, input_data.files_name, input_data.files
)
result = response.json() if input_data.json_format else response.text
# Enforce body format rules
if use_files and input_data.json_format:
raise ValueError(
"json_format=True cannot be combined with file uploads; set json_format=False and put form fields in `body`."
)
# ─── Execute request ─────────────────────────────────────────
response = await Requests().request(
input_data.method.value,
input_data.url,
headers=input_data.headers,
files=files_payload if use_files else None,
# * If files → multipart ⇒ pass form fields via data=
data=body if not input_data.json_format else None,
# * Else, choose JSON vs urlencoded based on flag
json=body if (input_data.json_format and not use_files) else None,
)
# Decide how to parse the response
if response.headers.get("content-type", "").startswith("application/json"):
result = None if response.status == 204 else response.json()
else:
result = response.text()
# Yield according to status code bucket
if 200 <= response.status < 300:
yield "response", result
elif 400 <= response.status < 500:
yield "client_error", result
else:
yield "server_error", result
except HTTPError as e:
# Handle error responses
try:
result = e.response.json() if input_data.json_format else str(e)
except json.JSONDecodeError:
result = str(e)
if 400 <= e.response.status_code < 500:
yield "client_error", result
elif 500 <= e.response.status_code < 600:
yield "server_error", result
else:
error_msg = (
"Unexpected status code "
f"{e.response.status_code} '{e.response.reason}'"
)
logger.warning(error_msg)
yield "error", error_msg
class SendAuthenticatedWebRequestBlock(SendWebRequestBlock):
class Input(SendWebRequestBlock.Input):
credentials: HttpCredentials = CredentialsField(
description="HTTP host-scoped credentials for automatic header injection",
discriminator="url",
)
except RequestException as e:
# Handle other request-related exceptions
yield "error", str(e)
def __init__(self):
Block.__init__(
self,
id="fff86bcd-e001-4bad-a7f6-2eae4720c8dc",
description="Make an authenticated HTTP request with host-scoped credentials (JSON / form / multipart).",
categories={BlockCategory.OUTPUT},
input_schema=SendAuthenticatedWebRequestBlock.Input,
output_schema=SendWebRequestBlock.Output,
test_credentials=TEST_CREDENTIALS,
)
except Exception as e:
# Catch any other unexpected exceptions
yield "error", str(e)
async def run( # type: ignore[override]
self,
input_data: Input,
*,
graph_exec_id: str,
credentials: HostScopedCredentials,
**kwargs,
) -> BlockOutput:
# Create SendWebRequestBlock.Input from our input (removing credentials field)
base_input = SendWebRequestBlock.Input(
url=input_data.url,
method=input_data.method,
headers=input_data.headers,
json_format=input_data.json_format,
body=input_data.body,
files_name=input_data.files_name,
files=input_data.files,
)
# Apply host-scoped credentials to headers
extra_headers = {}
if credentials.matches_url(input_data.url):
logger.debug(
f"Applying host-scoped credentials {credentials.id} for URL {input_data.url}"
)
extra_headers.update(credentials.get_headers_dict())
else:
logger.warning(
f"Host-scoped credentials {credentials.id} do not match URL {input_data.url}"
)
# Merge with user-provided headers (user headers take precedence)
base_input.headers = {**extra_headers, **input_data.headers}
# Use parent class run method
async for output_name, output_data in super().run(
base_input, graph_exec_id=graph_exec_id, **kwargs
):
yield output_name, output_data
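
Note the merge order in the authenticated variant: credential headers are applied first, and user-supplied headers win on key conflicts. A tiny illustration with plain dicts standing in for `HostScopedCredentials`:

```python
# Illustration of the header-merge precedence above: credential headers
# first, user-supplied headers override on conflict.
credential_headers = {"Authorization": "Bearer injected", "X-API-Key": "abc"}
user_headers = {"Authorization": "Bearer user-override"}

merged = {**credential_headers, **user_headers}
assert merged["Authorization"] == "Bearer user-override"  # user wins
assert merged["X-API-Key"] == "abc"                       # credential kept
```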

View File

@@ -5,7 +5,7 @@ from backend.blocks.hubspot._auth import (
)
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.model import SchemaField
from backend.util.request import requests
from backend.util.request import Requests
class HubSpotCompanyBlock(Block):
@@ -35,7 +35,7 @@ class HubSpotCompanyBlock(Block):
output_schema=HubSpotCompanyBlock.Output,
)
def run(
async def run(
self, input_data: Input, *, credentials: HubSpotCredentials, **kwargs
) -> BlockOutput:
base_url = "https://api.hubapi.com/crm/v3/objects/companies"
@@ -45,7 +45,7 @@ class HubSpotCompanyBlock(Block):
}
if input_data.operation == "create":
response = requests.post(
response = await Requests().post(
base_url, headers=headers, json={"properties": input_data.company_data}
)
result = response.json()
@@ -67,14 +67,16 @@ class HubSpotCompanyBlock(Block):
}
]
}
response = requests.post(search_url, headers=headers, json=search_data)
result = response.json()
yield "company", result.get("results", [{}])[0]
search_response = await Requests().post(
search_url, headers=headers, json=search_data
)
search_result = search_response.json()
yield "search_company", search_result.get("results", [{}])[0]
yield "status", "retrieved"
elif input_data.operation == "update":
# First get company ID by domain
search_response = requests.post(
search_response = await Requests().post(
f"{base_url}/search",
headers=headers,
json={
@@ -91,10 +93,11 @@ class HubSpotCompanyBlock(Block):
]
},
)
company_id = search_response.json().get("results", [{}])[0].get("id")
search_result = search_response.json()
company_id = search_result.get("results", [{}])[0].get("id")
if company_id:
response = requests.patch(
response = await Requests().patch(
f"{base_url}/{company_id}",
headers=headers,
json={"properties": input_data.company_data},

View File

@@ -5,7 +5,7 @@ from backend.blocks.hubspot._auth import (
)
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.model import SchemaField
from backend.util.request import requests
from backend.util.request import Requests
class HubSpotContactBlock(Block):
@@ -35,7 +35,7 @@ class HubSpotContactBlock(Block):
output_schema=HubSpotContactBlock.Output,
)
def run(
async def run(
self, input_data: Input, *, credentials: HubSpotCredentials, **kwargs
) -> BlockOutput:
base_url = "https://api.hubapi.com/crm/v3/objects/contacts"
@@ -45,7 +45,7 @@ class HubSpotContactBlock(Block):
}
if input_data.operation == "create":
response = requests.post(
response = await Requests().post(
base_url, headers=headers, json={"properties": input_data.contact_data}
)
result = response.json()
@@ -53,7 +53,6 @@ class HubSpotContactBlock(Block):
yield "status", "created"
elif input_data.operation == "get":
# Search for contact by email
search_url = f"{base_url}/search"
search_data = {
"filterGroups": [
@@ -68,13 +67,15 @@ class HubSpotContactBlock(Block):
}
]
}
response = requests.post(search_url, headers=headers, json=search_data)
response = await Requests().post(
search_url, headers=headers, json=search_data
)
result = response.json()
yield "contact", result.get("results", [{}])[0]
yield "status", "retrieved"
elif input_data.operation == "update":
search_response = requests.post(
search_response = await Requests().post(
f"{base_url}/search",
headers=headers,
json={
@@ -91,10 +92,11 @@ class HubSpotContactBlock(Block):
]
},
)
contact_id = search_response.json().get("results", [{}])[0].get("id")
search_result = search_response.json()
contact_id = search_result.get("results", [{}])[0].get("id")
if contact_id:
response = requests.patch(
response = await Requests().patch(
f"{base_url}/{contact_id}",
headers=headers,
json={"properties": input_data.contact_data},

View File

@@ -7,7 +7,7 @@ from backend.blocks.hubspot._auth import (
)
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.model import SchemaField
from backend.util.request import requests
from backend.util.request import Requests
class HubSpotEngagementBlock(Block):
@@ -42,7 +42,7 @@ class HubSpotEngagementBlock(Block):
output_schema=HubSpotEngagementBlock.Output,
)
def run(
async def run(
self, input_data: Input, *, credentials: HubSpotCredentials, **kwargs
) -> BlockOutput:
base_url = "https://api.hubapi.com"
@@ -66,7 +66,9 @@ class HubSpotEngagementBlock(Block):
}
}
response = requests.post(email_url, headers=headers, json=email_data)
response = await Requests().post(
email_url, headers=headers, json=email_data
)
result = response.json()
yield "result", result
yield "status", "email_sent"
@@ -80,7 +82,9 @@ class HubSpotEngagementBlock(Block):
params = {"limit": 100, "after": from_date.isoformat()}
response = requests.get(engagement_url, headers=headers, params=params)
response = await Requests().get(
engagement_url, headers=headers, params=params
)
engagements = response.json()
# Process engagement metrics

View File

@@ -12,7 +12,7 @@ from backend.data.model import (
SchemaField,
)
from backend.integrations.providers import ProviderName
from backend.util.request import requests
from backend.util.request import Requests
TEST_CREDENTIALS = APIKeyCredentials(
id="01234567-89ab-cdef-0123-456789abcdef",
@@ -196,13 +196,13 @@ class IdeogramModelBlock(Block):
test_credentials=TEST_CREDENTIALS,
)
def run(
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
seed = input_data.seed
# Step 1: Generate the image
result = self.run_model(
result = await self.run_model(
api_key=credentials.api_key,
model_name=input_data.ideogram_model_name.value,
prompt=input_data.prompt,
@@ -217,14 +217,14 @@ class IdeogramModelBlock(Block):
# Step 2: Upscale the image if requested
if input_data.upscale == UpscaleOption.AI_UPSCALE:
result = self.upscale_image(
result = await self.upscale_image(
api_key=credentials.api_key,
image_url=result,
)
yield "result", result
def run_model(
async def run_model(
self,
api_key: SecretStr,
model_name: str,
@@ -267,12 +267,12 @@ class IdeogramModelBlock(Block):
}
try:
response = requests.post(url, json=data, headers=headers)
response = await Requests().post(url, headers=headers, json=data)
return response.json()["data"][0]["url"]
except RequestException as e:
raise Exception(f"Failed to fetch image: {str(e)}")
def upscale_image(self, api_key: SecretStr, image_url: str):
async def upscale_image(self, api_key: SecretStr, image_url: str):
url = "https://api.ideogram.ai/upscale"
headers = {
"Api-Key": api_key.get_secret_value(),
@@ -280,21 +280,22 @@ class IdeogramModelBlock(Block):
try:
# Step 1: Download the image from the provided URL
image_response = requests.get(image_url)
response = await Requests().get(image_url)
image_content = response.content
# Step 2: Send the downloaded image to the upscale API
files = {
"image_file": ("image.png", image_response.content, "image/png"),
"image_file": ("image.png", image_content, "image/png"),
}
response = requests.post(
response = await Requests().post(
url,
headers=headers,
data={"image_request": "{}"},
files=files,
)
return response.json()["data"][0]["url"]
return (response.json())["data"][0]["url"]
except RequestException as e:
raise Exception(f"Failed to upscale image: {str(e)}")

View File

@@ -95,7 +95,7 @@ class AgentInputBlock(Block):
}
)
def run(self, input_data: Input, *args, **kwargs) -> BlockOutput:
async def run(self, input_data: Input, *args, **kwargs) -> BlockOutput:
if input_data.value is not None:
yield "result", input_data.value
@@ -186,7 +186,7 @@ class AgentOutputBlock(Block):
static_output=True,
)
def run(self, input_data: Input, *args, **kwargs) -> BlockOutput:
async def run(self, input_data: Input, *args, **kwargs) -> BlockOutput:
"""
Attempts to format the recorded_value using the fmt_string if provided.
If formatting fails or no fmt_string is given, returns the original recorded_value.
@@ -413,6 +413,12 @@ class AgentFileInputBlock(AgentInputBlock):
advanced=False,
title="Default Value",
)
base_64: bool = SchemaField(
description="Whether produce an output in base64 format (not recommended, you can pass the string path just fine accross blocks).",
default=False,
advanced=True,
title="Produce Base64 Output",
)
class Output(AgentInputBlock.Output):
result: str = SchemaField(description="File reference/path result.")
@@ -436,7 +442,7 @@ class AgentFileInputBlock(AgentInputBlock):
],
)
def run(
async def run(
self,
input_data: Input,
*,
@@ -446,12 +452,11 @@ class AgentFileInputBlock(AgentInputBlock):
if not input_data.value:
return
file_path = store_media_file(
yield "result", await store_media_file(
graph_exec_id=graph_exec_id,
file=input_data.value,
return_content=False,
return_content=input_data.base_64,
)
yield "result", file_path
class AgentDropdownInputBlock(AgentInputBlock):
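
The new `base_64` toggle simply forwards to `store_media_file`'s `return_content` switch; assuming that switch behaves as the diff suggests (False returns a relative path, True inlines the file as a base64 data URI), the behaviour reduces to:

```python
# Sketch of what the base_64 toggle changes; store_media_file's contract
# here is inferred from the diff, not a definitive API description.
from backend.util.file import store_media_file


async def resolve_file(graph_exec_id: str, file, as_base64: bool) -> str:
    return await store_media_file(
        graph_exec_id=graph_exec_id,
        file=file,
        return_content=as_base64,  # path by default; data URI when True
    )
```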

View File

@@ -53,7 +53,7 @@ class StepThroughItemsBlock(Block):
test_mock={},
)
def run(self, input_data: Input, **kwargs) -> BlockOutput:
async def run(self, input_data: Input, **kwargs) -> BlockOutput:
for data in [input_data.items, input_data.items_object, input_data.items_str]:
if not data:
continue

View File

@@ -5,7 +5,7 @@ from backend.blocks.jina._auth import (
)
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.model import SchemaField
from backend.util.request import requests
from backend.util.request import Requests
class JinaChunkingBlock(Block):
@@ -35,7 +35,7 @@ class JinaChunkingBlock(Block):
output_schema=JinaChunkingBlock.Output,
)
def run(
async def run(
self, input_data: Input, *, credentials: JinaCredentials, **kwargs
) -> BlockOutput:
url = "https://segment.jina.ai/"
@@ -55,7 +55,7 @@ class JinaChunkingBlock(Block):
"max_chunk_length": str(input_data.max_chunk_length),
}
response = requests.post(url, headers=headers, json=data)
response = await Requests().post(url, headers=headers, json=data)
result = response.json()
all_chunks.extend(result.get("chunks", []))

View File

@@ -5,7 +5,7 @@ from backend.blocks.jina._auth import (
)
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.model import SchemaField
from backend.util.request import requests
from backend.util.request import Requests
class JinaEmbeddingBlock(Block):
@@ -29,7 +29,7 @@ class JinaEmbeddingBlock(Block):
output_schema=JinaEmbeddingBlock.Output,
)
def run(
async def run(
self, input_data: Input, *, credentials: JinaCredentials, **kwargs
) -> BlockOutput:
url = "https://api.jina.ai/v1/embeddings"
@@ -38,6 +38,6 @@ class JinaEmbeddingBlock(Block):
"Authorization": f"Bearer {credentials.api_key.get_secret_value()}",
}
data = {"input": input_data.texts, "model": input_data.model}
response = requests.post(url, headers=headers, json=data)
response = await Requests().post(url, headers=headers, json=data)
embeddings = [e["embedding"] for e in response.json()["data"]]
yield "embeddings", embeddings

View File

@@ -1,7 +1,5 @@
from urllib.parse import quote
import requests
from backend.blocks.jina._auth import (
JinaCredentials,
JinaCredentialsField,
@@ -9,6 +7,7 @@ from backend.blocks.jina._auth import (
)
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.model import SchemaField
from backend.util.request import Requests
class FactCheckerBlock(Block):
@@ -35,7 +34,7 @@ class FactCheckerBlock(Block):
output_schema=FactCheckerBlock.Output,
)
def run(
async def run(
self, input_data: Input, *, credentials: JinaCredentials, **kwargs
) -> BlockOutput:
encoded_statement = quote(input_data.statement)
@@ -46,8 +45,7 @@ class FactCheckerBlock(Block):
"Authorization": f"Bearer {credentials.api_key.get_secret_value()}",
}
response = requests.get(url, headers=headers)
response.raise_for_status()
response = await Requests().get(url, headers=headers)
data = response.json()
if "data" in data:

View File

@@ -39,7 +39,7 @@ class SearchTheWebBlock(Block, GetRequest):
test_mock={"get_request": lambda *args, **kwargs: "search content"},
)
def run(
async def run(
self, input_data: Input, *, credentials: JinaCredentials, **kwargs
) -> BlockOutput:
# Encode the search query
@@ -51,7 +51,7 @@ class SearchTheWebBlock(Block, GetRequest):
# Prepend the Jina Search URL to the encoded query
jina_search_url = f"https://s.jina.ai/{encoded_query}"
results = self.get_request(jina_search_url, headers=headers, json=False)
results = await self.get_request(jina_search_url, headers=headers, json=False)
# Output the search results
yield "results", results
@@ -90,7 +90,7 @@ class ExtractWebsiteContentBlock(Block, GetRequest):
test_mock={"get_request": lambda *args, **kwargs: "scraped content"},
)
def run(
async def run(
self, input_data: Input, *, credentials: JinaCredentials, **kwargs
) -> BlockOutput:
if input_data.raw_content:
@@ -103,5 +103,5 @@ class ExtractWebsiteContentBlock(Block, GetRequest):
"Authorization": f"Bearer {credentials.api_key.get_secret_value()}",
}
content = self.get_request(url, json=False, headers=headers)
content = await self.get_request(url, json=False, headers=headers)
yield "content", content

View File

@@ -48,7 +48,7 @@ class LinearClient:
raise_for_status=False,
)
def _execute_graphql_request(
async def _execute_graphql_request(
self, query: str, variables: dict | None = None
) -> Any:
"""
@@ -65,19 +65,18 @@ class LinearClient:
if variables:
payload["variables"] = variables
response = self._requests.post(self.API_URL, json=payload)
response = await self._requests.post(self.API_URL, json=payload)
if not response.ok:
try:
error_data = response.json()
error_message = error_data.get("errors", [{}])[0].get("message", "")
except json.JSONDecodeError:
error_message = response.text
error_message = response.text()
raise LinearAPIException(
f"Linear API request failed ({response.status_code}): {error_message}",
response.status_code,
f"Linear API request failed ({response.status}): {error_message}",
response.status,
)
response_data = response.json()
@@ -88,12 +87,12 @@ class LinearClient:
]
raise LinearAPIException(
f"Linear API returned errors: {', '.join(error_messages)}",
response.status_code,
response.status,
)
return response_data["data"]
def query(self, query: str, variables: Optional[dict] = None) -> dict:
async def query(self, query: str, variables: Optional[dict] = None) -> dict:
"""Executes a GraphQL query.
Args:
@@ -103,9 +102,9 @@ class LinearClient:
Returns:
The response data.
"""
return self._execute_graphql_request(query, variables)
return await self._execute_graphql_request(query, variables)
def mutate(self, mutation: str, variables: Optional[dict] = None) -> dict:
async def mutate(self, mutation: str, variables: Optional[dict] = None) -> dict:
"""Executes a GraphQL mutation.
Args:
@@ -115,9 +114,11 @@ class LinearClient:
Returns:
The response data.
"""
return self._execute_graphql_request(mutation, variables)
return await self._execute_graphql_request(mutation, variables)
def try_create_comment(self, issue_id: str, comment: str) -> CreateCommentResponse:
async def try_create_comment(
self, issue_id: str, comment: str
) -> CreateCommentResponse:
try:
mutation = """
mutation CommentCreate($input: CommentCreateInput!) {
@@ -138,13 +139,13 @@ class LinearClient:
}
}
added_comment = self.mutate(mutation, variables)
added_comment = await self.mutate(mutation, variables)
# Select the commentCreate field from the mutation response
return CreateCommentResponse(**added_comment["commentCreate"])
except LinearAPIException as e:
raise e
def try_get_team_by_name(self, team_name: str) -> str:
async def try_get_team_by_name(self, team_name: str) -> str:
try:
query = """
query GetTeamId($searchTerm: String!) {
@@ -167,12 +168,12 @@ class LinearClient:
"searchTerm": team_name,
}
team_id = self.query(query, variables)
team_id = await self.query(query, variables)
return team_id["teams"]["nodes"][0]["id"]
except LinearAPIException as e:
raise e
def try_create_issue(
async def try_create_issue(
self,
team_id: str,
title: str,
@@ -211,12 +212,12 @@ class LinearClient:
if priority:
variables["input"]["priority"] = priority
added_issue = self.mutate(mutation, variables)
added_issue = await self.mutate(mutation, variables)
return CreateIssueResponse(**added_issue["issueCreate"])
except LinearAPIException as e:
raise e
def try_search_projects(self, term: str) -> list[Project]:
async def try_search_projects(self, term: str) -> list[Project]:
try:
query = """
query SearchProjects($term: String!, $includeComments: Boolean!) {
@@ -238,14 +239,14 @@ class LinearClient:
"includeComments": True,
}
projects = self.query(query, variables)
projects = await self.query(query, variables)
return [
Project(**project) for project in projects["searchProjects"]["nodes"]
]
except LinearAPIException as e:
raise e
def try_search_issues(self, term: str) -> list[Issue]:
async def try_search_issues(self, term: str) -> list[Issue]:
try:
query = """
query SearchIssues($term: String!, $includeComments: Boolean!) {
@@ -266,7 +267,7 @@ class LinearClient:
"includeComments": True,
}
issues = self.query(query, variables)
issues = await self.query(query, variables)
return [Issue(**issue) for issue in issues["searchIssues"]["nodes"]]
except LinearAPIException as e:
raise e
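
The Linear client now awaits its transport and reads aiohttp-style response fields: `response.status` is an attribute and `response.text()` is a method. A reduced sketch of the GraphQL round trip, with a generic exception standing in for `LinearAPIException`:

```python
# Reduced sketch of the async GraphQL round trip above. The error-shape
# handling mirrors the diff; RuntimeError stands in for the project's
# LinearAPIException.
import json


async def execute_graphql(requests, api_url: str, query: str, variables: dict | None = None):
    payload: dict = {"query": query}
    if variables:
        payload["variables"] = variables
    response = await requests.post(api_url, json=payload)
    if not response.ok:
        try:
            error_message = response.json().get("errors", [{}])[0].get("message", "")
        except json.JSONDecodeError:
            error_message = response.text()  # note: text() is a method here
        raise RuntimeError(f"GraphQL request failed ({response.status}): {error_message}")
    return response.json()["data"]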

View File

@@ -54,21 +54,21 @@ class LinearCreateCommentBlock(Block):
)
@staticmethod
def create_comment(
async def create_comment(
credentials: LinearCredentials, issue_id: str, comment: str
) -> tuple[str, str]:
client = LinearClient(credentials=credentials)
response: CreateCommentResponse = client.try_create_comment(
response: CreateCommentResponse = await client.try_create_comment(
issue_id=issue_id, comment=comment
)
return response.comment.id, response.comment.body
def run(
async def run(
self, input_data: Input, *, credentials: LinearCredentials, **kwargs
) -> BlockOutput:
"""Execute the comment creation"""
try:
comment_id, comment_body = self.create_comment(
comment_id, comment_body = await self.create_comment(
credentials=credentials,
issue_id=input_data.issue_id,
comment=input_data.comment,

View File

@@ -67,7 +67,7 @@ class LinearCreateIssueBlock(Block):
)
@staticmethod
def create_issue(
async def create_issue(
credentials: LinearCredentials,
team_name: str,
title: str,
@@ -76,15 +76,15 @@ class LinearCreateIssueBlock(Block):
project_name: str | None = None,
) -> tuple[str, str]:
client = LinearClient(credentials=credentials)
team_id = client.try_get_team_by_name(team_name=team_name)
team_id = await client.try_get_team_by_name(team_name=team_name)
project_id: str | None = None
if project_name:
projects = client.try_search_projects(term=project_name)
projects = await client.try_search_projects(term=project_name)
if projects:
project_id = projects[0].id
else:
raise LinearAPIException("Project not found", status_code=404)
response: CreateIssueResponse = client.try_create_issue(
response: CreateIssueResponse = await client.try_create_issue(
team_id=team_id,
title=title,
description=description,
@@ -93,12 +93,12 @@ class LinearCreateIssueBlock(Block):
)
return response.issue.identifier, response.issue.title
def run(
async def run(
self, input_data: Input, *, credentials: LinearCredentials, **kwargs
) -> BlockOutput:
"""Execute the issue creation"""
try:
issue_id, issue_title = self.create_issue(
issue_id, issue_title = await self.create_issue(
credentials=credentials,
team_name=input_data.team_name,
title=input_data.title,
@@ -168,20 +168,22 @@ class LinearSearchIssuesBlock(Block):
)
@staticmethod
def search_issues(
async def search_issues(
credentials: LinearCredentials,
term: str,
) -> list[Issue]:
client = LinearClient(credentials=credentials)
response: list[Issue] = client.try_search_issues(term=term)
response: list[Issue] = await client.try_search_issues(term=term)
return response
def run(
async def run(
self, input_data: Input, *, credentials: LinearCredentials, **kwargs
) -> BlockOutput:
"""Execute the issue search"""
try:
issues = self.search_issues(credentials=credentials, term=input_data.term)
issues = await self.search_issues(
credentials=credentials, term=input_data.term
)
yield "issues", issues
except LinearAPIException as e:
yield "error", str(e)

View File

@@ -69,20 +69,20 @@ class LinearSearchProjectsBlock(Block):
)
@staticmethod
def search_projects(
async def search_projects(
credentials: LinearCredentials,
term: str,
) -> list[Project]:
client = LinearClient(credentials=credentials)
response: list[Project] = client.try_search_projects(term=term)
response: list[Project] = await client.try_search_projects(term=term)
return response
def run(
async def run(
self, input_data: Input, *, credentials: LinearCredentials, **kwargs
) -> BlockOutput:
"""Execute the project search"""
try:
projects = self.search_projects(
projects = await self.search_projects(
credentials=credentials,
term=input_data.term,
)

View File

@@ -3,15 +3,13 @@ import logging
from abc import ABC
from enum import Enum, EnumMeta
from json import JSONDecodeError
from types import MappingProxyType
from typing import Any, Iterable, List, Literal, NamedTuple, Optional
import anthropic
import ollama
import openai
from anthropic import NotGiven
from anthropic.types import ToolParam
from groq import Groq
from groq import AsyncGroq
from pydantic import BaseModel, SecretStr
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
@@ -24,23 +22,26 @@ from backend.data.model import (
)
from backend.integrations.providers import ProviderName
from backend.util import json
from backend.util.settings import BehaveAs, Settings
from backend.util.logging import TruncatedLogger
from backend.util.prompt import compress_prompt, estimate_token_count
from backend.util.text import TextFormatter
logger = logging.getLogger(__name__)
logger = TruncatedLogger(logging.getLogger(__name__), "[LLM-Block]")
fmt = TextFormatter()
LLMProviderName = Literal[
ProviderName.AIML_API,
ProviderName.ANTHROPIC,
ProviderName.GROQ,
ProviderName.OLLAMA,
ProviderName.OPENAI,
ProviderName.OPEN_ROUTER,
ProviderName.LLAMA_API,
]
AICredentials = CredentialsMetaInput[LLMProviderName, Literal["api_key"]]
TEST_CREDENTIALS = APIKeyCredentials(
id="ed55ac19-356e-4243-a6cb-bc599e9b716f",
id="769f6af7-820b-4d5d-9b7a-ab82bbc165f",
provider="openai",
api_key=SecretStr("mock-openai-api-key"),
title="Mock OpenAI API key",
@@ -71,36 +72,34 @@ class ModelMetadata(NamedTuple):
class LlmModelMeta(EnumMeta):
@property
def __members__(self) -> MappingProxyType:
if Settings().config.behave_as == BehaveAs.LOCAL:
members = super().__members__
return MappingProxyType(members)
else:
removed_providers = ["ollama"]
existing_members = super().__members__
members = {
name: member
for name, member in existing_members.items()
if LlmModel[name].provider not in removed_providers
}
return MappingProxyType(members)
pass
class LlmModel(str, Enum, metaclass=LlmModelMeta):
# OpenAI models
O3_MINI = "o3-mini"
O3 = "o3-2025-04-16"
O1 = "o1"
O1_PREVIEW = "o1-preview"
O1_MINI = "o1-mini"
GPT41 = "gpt-4.1-2025-04-14"
GPT4O_MINI = "gpt-4o-mini"
GPT4O = "gpt-4o"
GPT4_TURBO = "gpt-4-turbo"
GPT3_5_TURBO = "gpt-3.5-turbo"
# Anthropic models
CLAUDE_4_OPUS = "claude-opus-4-20250514"
CLAUDE_4_SONNET = "claude-sonnet-4-20250514"
CLAUDE_3_7_SONNET = "claude-3-7-sonnet-20250219"
CLAUDE_3_5_SONNET = "claude-3-5-sonnet-latest"
CLAUDE_3_5_HAIKU = "claude-3-5-haiku-latest"
CLAUDE_3_HAIKU = "claude-3-haiku-20240307"
# AI/ML API models
AIML_API_QWEN2_5_72B = "Qwen/Qwen2.5-72B-Instruct-Turbo"
AIML_API_LLAMA3_1_70B = "nvidia/llama-3.1-nemotron-70b-instruct"
AIML_API_LLAMA3_3_70B = "meta-llama/Llama-3.3-70B-Instruct-Turbo"
AIML_API_META_LLAMA_3_1_70B = "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo"
AIML_API_LLAMA_3_2_3B = "meta-llama/Llama-3.2-3B-Instruct-Turbo"
# Groq models
GEMMA2_9B = "gemma2-9b-it"
LLAMA3_3_70B = "llama-3.3-70b-versatile"
@@ -118,6 +117,7 @@ class LlmModel(str, Enum, metaclass=LlmModelMeta):
OLLAMA_DOLPHIN = "dolphin-mistral:latest"
# OpenRouter models
GEMINI_FLASH_1_5 = "google/gemini-flash-1.5"
GEMINI_2_5_PRO = "google/gemini-2.5-pro-preview-03-25"
GROK_BETA = "x-ai/grok-beta"
MISTRAL_NEMO = "mistralai/mistral-nemo"
COHERE_COMMAND_R_08_2024 = "cohere/command-r-08-2024"
@@ -137,6 +137,11 @@ class LlmModel(str, Enum, metaclass=LlmModelMeta):
GRYPHE_MYTHOMAX_L2_13B = "gryphe/mythomax-l2-13b"
META_LLAMA_4_SCOUT = "meta-llama/llama-4-scout"
META_LLAMA_4_MAVERICK = "meta-llama/llama-4-maverick"
# Llama API models
LLAMA_API_LLAMA_4_SCOUT = "Llama-4-Scout-17B-16E-Instruct-FP8"
LLAMA_API_LLAMA4_MAVERICK = "Llama-4-Maverick-17B-128E-Instruct-FP8"
LLAMA_API_LLAMA3_3_8B = "Llama-3.3-8B-Instruct"
LLAMA_API_LLAMA3_3_70B = "Llama-3.3-70B-Instruct"
@property
def metadata(self) -> ModelMetadata:
@@ -157,12 +162,14 @@ class LlmModel(str, Enum, metaclass=LlmModelMeta):
MODEL_METADATA = {
# https://platform.openai.com/docs/models
LlmModel.O3: ModelMetadata("openai", 200000, 100000),
LlmModel.O3_MINI: ModelMetadata("openai", 200000, 100000), # o3-mini-2025-01-31
LlmModel.O1: ModelMetadata("openai", 200000, 100000), # o1-2024-12-17
LlmModel.O1_PREVIEW: ModelMetadata(
"openai", 128000, 32768
), # o1-preview-2024-09-12
LlmModel.O1_MINI: ModelMetadata("openai", 128000, 65536), # o1-mini-2024-09-12
LlmModel.GPT41: ModelMetadata("openai", 1047576, 32768),
LlmModel.GPT4O_MINI: ModelMetadata(
"openai", 128000, 16384
), # gpt-4o-mini-2024-07-18
@@ -172,6 +179,15 @@ MODEL_METADATA = {
), # gpt-4-turbo-2024-04-09
LlmModel.GPT3_5_TURBO: ModelMetadata("openai", 16385, 4096), # gpt-3.5-turbo-0125
# https://docs.anthropic.com/en/docs/about-claude/models
LlmModel.CLAUDE_4_OPUS: ModelMetadata(
"anthropic", 200000, 8192
), # claude-4-opus-20250514
LlmModel.CLAUDE_4_SONNET: ModelMetadata(
"anthropic", 200000, 8192
), # claude-4-sonnet-20250514
LlmModel.CLAUDE_3_7_SONNET: ModelMetadata(
"anthropic", 200000, 8192
), # claude-3-7-sonnet-20250219
LlmModel.CLAUDE_3_5_SONNET: ModelMetadata(
"anthropic", 200000, 8192
), # claude-3-5-sonnet-20241022
@@ -181,6 +197,12 @@ MODEL_METADATA = {
LlmModel.CLAUDE_3_HAIKU: ModelMetadata(
"anthropic", 200000, 4096
), # claude-3-haiku-20240307
# https://docs.aimlapi.com/api-overview/model-database/text-models
LlmModel.AIML_API_QWEN2_5_72B: ModelMetadata("aiml_api", 32000, 8000),
LlmModel.AIML_API_LLAMA3_1_70B: ModelMetadata("aiml_api", 128000, 40000),
LlmModel.AIML_API_LLAMA3_3_70B: ModelMetadata("aiml_api", 128000, None),
LlmModel.AIML_API_META_LLAMA_3_1_70B: ModelMetadata("aiml_api", 131000, 2000),
LlmModel.AIML_API_LLAMA_3_2_3B: ModelMetadata("aiml_api", 128000, None),
# https://console.groq.com/docs/models
LlmModel.GEMMA2_9B: ModelMetadata("groq", 8192, None),
LlmModel.LLAMA3_3_70B: ModelMetadata("groq", 128000, 32768),
@@ -197,6 +219,7 @@ MODEL_METADATA = {
LlmModel.OLLAMA_DOLPHIN: ModelMetadata("ollama", 32768, None),
# https://openrouter.ai/models
LlmModel.GEMINI_FLASH_1_5: ModelMetadata("open_router", 1000000, 8192),
LlmModel.GEMINI_2_5_PRO: ModelMetadata("open_router", 1050000, 8192),
LlmModel.GROK_BETA: ModelMetadata("open_router", 131072, 131072),
LlmModel.MISTRAL_NEMO: ModelMetadata("open_router", 128000, 4096),
LlmModel.COHERE_COMMAND_R_08_2024: ModelMetadata("open_router", 128000, 4096),
@@ -220,6 +243,11 @@ MODEL_METADATA = {
LlmModel.GRYPHE_MYTHOMAX_L2_13B: ModelMetadata("open_router", 4096, 4096),
LlmModel.META_LLAMA_4_SCOUT: ModelMetadata("open_router", 131072, 131072),
LlmModel.META_LLAMA_4_MAVERICK: ModelMetadata("open_router", 1048576, 1000000),
# Llama API models
LlmModel.LLAMA_API_LLAMA_4_SCOUT: ModelMetadata("llama_api", 128000, 4028),
LlmModel.LLAMA_API_LLAMA4_MAVERICK: ModelMetadata("llama_api", 128000, 4028),
LlmModel.LLAMA_API_LLAMA3_3_8B: ModelMetadata("llama_api", 128000, 4028),
LlmModel.LLAMA_API_LLAMA3_3_70B: ModelMetadata("llama_api", 128000, 4028),
}
for model in LlmModel:
@@ -249,7 +277,7 @@ class LLMResponse(BaseModel):
def convert_openai_tool_fmt_to_anthropic(
openai_tools: list[dict] | None = None,
) -> Iterable[ToolParam] | NotGiven:
) -> Iterable[ToolParam] | anthropic.NotGiven:
"""
Convert OpenAI tool format to Anthropic tool format.
"""
@@ -279,7 +307,7 @@ def convert_openai_tool_fmt_to_anthropic(
return anthropic_tools
def llm_call(
async def llm_call(
credentials: APIKeyCredentials,
llm_model: LlmModel,
prompt: list[dict],
@@ -287,6 +315,8 @@ def llm_call(
max_tokens: int | None,
tools: list[dict] | None = None,
ollama_host: str = "localhost:11434",
parallel_tool_calls=None,
compress_prompt_to_fit: bool = True,
) -> LLMResponse:
"""
Make a call to a language model.
@@ -309,29 +339,40 @@ def llm_call(
- completion_tokens: The number of tokens used in the completion.
"""
provider = llm_model.metadata.provider
max_tokens = max_tokens or llm_model.max_output_tokens or 4096
context_window = llm_model.context_window
if compress_prompt_to_fit:
prompt = compress_prompt(
messages=prompt,
target_tokens=llm_model.context_window // 2,
lossy_ok=True,
)
# Calculate available tokens based on context window and input length
estimated_input_tokens = estimate_token_count(prompt)
model_max_output = llm_model.max_output_tokens or int(2**15)
user_max = max_tokens or model_max_output
available_tokens = max(context_window - estimated_input_tokens, 0)
max_tokens = max(min(available_tokens, model_max_output, user_max), 1)
if provider == "openai":
tools_param = tools if tools else openai.NOT_GIVEN
oai_client = openai.OpenAI(api_key=credentials.api_key.get_secret_value())
oai_client = openai.AsyncOpenAI(api_key=credentials.api_key.get_secret_value())
response_format = None
if llm_model in [LlmModel.O1_MINI, LlmModel.O1_PREVIEW]:
sys_messages = [p["content"] for p in prompt if p["role"] == "system"]
usr_messages = [p["content"] for p in prompt if p["role"] != "system"]
prompt = [
{"role": "user", "content": "\n".join(sys_messages)},
{"role": "user", "content": "\n".join(usr_messages)},
]
elif json_format:
if llm_model.startswith("o") or parallel_tool_calls is None:
parallel_tool_calls = openai.NOT_GIVEN
if json_format:
response_format = {"type": "json_object"}
response = oai_client.chat.completions.create(
response = await oai_client.chat.completions.create(
model=llm_model.value,
messages=prompt, # type: ignore
response_format=response_format, # type: ignore
max_completion_tokens=max_tokens,
tools=tools_param, # type: ignore
parallel_tool_calls=parallel_tool_calls,
)
if response.choices[0].message.tool_calls:
@@ -379,9 +420,11 @@ def llm_call(
messages.append({"role": p["role"], "content": p["content"]})
last_role = p["role"]
client = anthropic.Anthropic(api_key=credentials.api_key.get_secret_value())
client = anthropic.AsyncAnthropic(
api_key=credentials.api_key.get_secret_value()
)
try:
resp = client.messages.create(
resp = await client.messages.create(
model=llm_model.value,
system=sysprompt,
messages=messages,
@@ -412,7 +455,7 @@ def llm_call(
if not tool_calls and resp.stop_reason == "tool_use":
logger.warning(
"Tool use stop reason but no tool calls found in content. %s", resp
f"Tool use stop reason but no tool calls found in content. {resp}"
)
return LLMResponse(
@@ -435,9 +478,9 @@ def llm_call(
if tools:
raise ValueError("Groq does not support tools.")
client = Groq(api_key=credentials.api_key.get_secret_value())
client = AsyncGroq(api_key=credentials.api_key.get_secret_value())
response_format = {"type": "json_object"} if json_format else None
response = client.chat.completions.create(
response = await client.chat.completions.create(
model=llm_model.value,
messages=prompt, # type: ignore
response_format=response_format, # type: ignore
@@ -455,13 +498,14 @@ def llm_call(
if tools:
raise ValueError("Ollama does not support tools.")
client = ollama.Client(host=ollama_host)
client = ollama.AsyncClient(host=ollama_host)
sys_messages = [p["content"] for p in prompt if p["role"] == "system"]
usr_messages = [p["content"] for p in prompt if p["role"] != "system"]
response = client.generate(
response = await client.generate(
model=llm_model.value,
prompt=f"{sys_messages}\n\n{usr_messages}",
stream=False,
options={"num_ctx": max_tokens},
)
return LLMResponse(
raw_response=response.get("response") or "",
@@ -473,12 +517,12 @@ def llm_call(
)
elif provider == "open_router":
tools_param = tools if tools else openai.NOT_GIVEN
client = openai.OpenAI(
client = openai.AsyncOpenAI(
base_url="https://openrouter.ai/api/v1",
api_key=credentials.api_key.get_secret_value(),
)
response = client.chat.completions.create(
response = await client.chat.completions.create(
extra_headers={
"HTTP-Referer": "https://agpt.co",
"X-Title": "AutoGPT",
@@ -518,6 +562,79 @@ def llm_call(
prompt_tokens=response.usage.prompt_tokens if response.usage else 0,
completion_tokens=response.usage.completion_tokens if response.usage else 0,
)
elif provider == "llama_api":
tools_param = tools if tools else openai.NOT_GIVEN
client = openai.AsyncOpenAI(
base_url="https://api.llama.com/compat/v1/",
api_key=credentials.api_key.get_secret_value(),
)
response = await client.chat.completions.create(
extra_headers={
"HTTP-Referer": "https://agpt.co",
"X-Title": "AutoGPT",
},
model=llm_model.value,
messages=prompt, # type: ignore
max_tokens=max_tokens,
tools=tools_param, # type: ignore
parallel_tool_calls=(
openai.NOT_GIVEN if parallel_tool_calls is None else parallel_tool_calls
),
)
# If there's no response, raise an error
if not response.choices:
if response:
raise ValueError(f"Llama API error: {response}")
else:
raise ValueError("No response from Llama API.")
if response.choices[0].message.tool_calls:
tool_calls = [
ToolContentBlock(
id=tool.id,
type=tool.type,
function=ToolCall(
name=tool.function.name, arguments=tool.function.arguments
),
)
for tool in response.choices[0].message.tool_calls
]
else:
tool_calls = None
return LLMResponse(
raw_response=response.choices[0].message,
prompt=prompt,
response=response.choices[0].message.content or "",
tool_calls=tool_calls,
prompt_tokens=response.usage.prompt_tokens if response.usage else 0,
completion_tokens=response.usage.completion_tokens if response.usage else 0,
)
elif provider == "aiml_api":
client = openai.OpenAI(
base_url="https://api.aimlapi.com/v2",
api_key=credentials.api_key.get_secret_value(),
default_headers={"X-Project": "AutoGPT"},
)
completion = client.chat.completions.create(
model=llm_model.value,
messages=prompt, # type: ignore
max_tokens=max_tokens,
)
return LLMResponse(
raw_response=completion.choices[0].message,
prompt=prompt,
response=completion.choices[0].message.content or "",
tool_calls=None,
prompt_tokens=completion.usage.prompt_tokens if completion.usage else 0,
completion_tokens=(
completion.usage.completion_tokens if completion.usage else 0
),
)
else:
raise ValueError(f"Unsupported LLM provider: {provider}")
@@ -542,6 +659,11 @@ class AIStructuredResponseGeneratorBlock(AIBlockBase):
description="Expected format of the response. If provided, the response will be validated against this format. "
"The keys should be the expected fields in the response, and the values should be the description of the field.",
)
list_result: bool = SchemaField(
title="List Result",
default=False,
description="Whether the response should be a list of objects in the expected format.",
)
model: LlmModel = SchemaField(
title="LLM Model",
default=LlmModel.GPT4O,
@@ -573,7 +695,11 @@ class AIStructuredResponseGeneratorBlock(AIBlockBase):
default=None,
description="The maximum number of tokens to generate in the chat completion.",
)
compress_prompt_to_fit: bool = SchemaField(
advanced=True,
default=True,
description="Whether to compress the prompt to fit within the model's context window.",
)
ollama_host: str = SchemaField(
advanced=True,
default="localhost:11434",
@@ -581,7 +707,7 @@ class AIStructuredResponseGeneratorBlock(AIBlockBase):
)
class Output(BlockSchema):
response: dict[str, Any] = SchemaField(
response: dict[str, Any] | list[dict[str, Any]] = SchemaField(
description="The response object generated by the language model."
)
prompt: list = SchemaField(description="The prompt sent to the language model.")
@@ -625,12 +751,13 @@ class AIStructuredResponseGeneratorBlock(AIBlockBase):
},
)
def llm_call(
async def llm_call(
self,
credentials: APIKeyCredentials,
llm_model: LlmModel,
prompt: list[dict],
json_format: bool,
compress_prompt_to_fit: bool,
max_tokens: int | None,
tools: list[dict] | None = None,
ollama_host: str = "localhost:11434",
@@ -640,7 +767,7 @@ class AIStructuredResponseGeneratorBlock(AIBlockBase):
so that it can be mocked within the block testing framework.
"""
self.prompt = prompt
return llm_call(
return await llm_call(
credentials=credentials,
llm_model=llm_model,
prompt=prompt,
@@ -648,9 +775,10 @@ class AIStructuredResponseGeneratorBlock(AIBlockBase):
max_tokens=max_tokens,
tools=tools,
ollama_host=ollama_host,
compress_prompt_to_fit=compress_prompt_to_fit,
)
def run(
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
logger.debug(f"Calling LLM with input data: {input_data}")
@@ -672,13 +800,22 @@ class AIStructuredResponseGeneratorBlock(AIBlockBase):
expected_format = [
f'"{k}": "{v}"' for k, v in input_data.expected_format.items()
]
format_prompt = ",\n ".join(expected_format)
if input_data.list_result:
format_prompt = (
f'"results": [\n {{\n {", ".join(expected_format)}\n }}\n]'
)
else:
format_prompt = "\n ".join(expected_format)
sys_prompt = trim_prompt(
f"""
|Reply strictly only in the following JSON format:
|{{
| {format_prompt}
|}}
|
|Ensure the response is valid JSON. Do not include any additional text outside of the JSON.
|If you cannot provide all the keys, provide an empty string for the values you cannot answer.
"""
)
prompt.append({"role": "system", "content": sys_prompt})
@@ -686,28 +823,28 @@ class AIStructuredResponseGeneratorBlock(AIBlockBase):
if input_data.prompt:
prompt.append({"role": "user", "content": input_data.prompt})
def parse_response(resp: str) -> tuple[dict[str, Any], str | None]:
def validate_response(parsed: object) -> str | None:
try:
parsed = json.loads(resp)
if not isinstance(parsed, dict):
return {}, f"Expected a dictionary, but got {type(parsed)}"
return f"Expected a dictionary, but got {type(parsed)}"
miss_keys = set(input_data.expected_format.keys()) - set(parsed.keys())
if miss_keys:
return parsed, f"Missing keys: {miss_keys}"
return parsed, None
return f"Missing keys: {miss_keys}"
return None
except JSONDecodeError as e:
return {}, f"JSON decode error: {e}"
return f"JSON decode error: {e}"
logger.info(f"LLM request: {prompt}")
logger.debug(f"LLM request: {prompt}")
retry_prompt = ""
llm_model = input_data.model
for retry_count in range(input_data.retry):
try:
llm_response = self.llm_call(
llm_response = await self.llm_call(
credentials=credentials,
llm_model=llm_model,
prompt=prompt,
compress_prompt_to_fit=input_data.compress_prompt_to_fit,
json_format=bool(input_data.expected_format),
ollama_host=input_data.ollama_host,
max_tokens=input_data.max_tokens,
@@ -719,21 +856,32 @@ class AIStructuredResponseGeneratorBlock(AIBlockBase):
output_token_count=llm_response.completion_tokens,
)
)
logger.info(f"LLM attempt-{retry_count} response: {response_text}")
logger.debug(f"LLM attempt-{retry_count} response: {response_text}")
if input_data.expected_format:
parsed_dict, parsed_error = parse_response(response_text)
if not parsed_error:
yield "response", {
k: (
json.loads(v)
if isinstance(v, str)
and v.startswith("[")
and v.endswith("]")
else (", ".join(v) if isinstance(v, list) else v)
response_obj = json.loads(response_text)
if input_data.list_result and isinstance(response_obj, dict):
if "results" in response_obj:
response_obj = response_obj.get("results", [])
elif len(response_obj) == 1:
response_obj = list(response_obj.values())
response_error = "\n".join(
[
validation_error
for response_item in (
response_obj
if isinstance(response_obj, list)
else [response_obj]
)
for k, v in parsed_dict.items()
}
if (validation_error := validate_response(response_item))
]
)
if not response_error:
yield "response", response_obj
yield "prompt", self.prompt
return
else:
@@ -750,13 +898,23 @@ class AIStructuredResponseGeneratorBlock(AIBlockBase):
|
|And this is the error:
|--
|{parsed_error}
|{response_error}
|--
"""
)
prompt.append({"role": "user", "content": retry_prompt})
except Exception as e:
logger.exception(f"Error calling LLM: {e}")
if (
"maximum context length" in str(e).lower()
or "token limit" in str(e).lower()
):
if input_data.max_tokens is None:
input_data.max_tokens = llm_model.max_output_tokens or 4096
input_data.max_tokens = int(input_data.max_tokens * 0.85)
logger.debug(
f"Reducing max_tokens to {input_data.max_tokens} for next attempt"
)
retry_prompt = f"Error calling LLM: {e}"
finally:
self.merge_stats(
@@ -834,17 +992,17 @@ class AITextGeneratorBlock(AIBlockBase):
test_mock={"llm_call": lambda *args, **kwargs: "Response text"},
)
def llm_call(
async def llm_call(
self,
input_data: AIStructuredResponseGeneratorBlock.Input,
credentials: APIKeyCredentials,
) -> str:
) -> dict:
block = AIStructuredResponseGeneratorBlock()
response = block.run_once(input_data, "response", credentials=credentials)
response = await block.run_once(input_data, "response", credentials=credentials)
self.merge_llm_stats(block)
return response["response"]
def run(
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
object_input_data = AIStructuredResponseGeneratorBlock.Input(
@@ -854,7 +1012,8 @@ class AITextGeneratorBlock(AIBlockBase):
},
expected_format={},
)
yield "response", self.llm_call(object_input_data, credentials)
response = await self.llm_call(object_input_data, credentials)
yield "response", response
yield "prompt", self.prompt
@@ -936,23 +1095,27 @@ class AITextSummarizerBlock(AIBlockBase):
},
)
def run(
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
for output in self._run(input_data, credentials):
yield output
async for output_name, output_data in self._run(input_data, credentials):
yield output_name, output_data
def _run(self, input_data: Input, credentials: APIKeyCredentials) -> BlockOutput:
async def _run(
self, input_data: Input, credentials: APIKeyCredentials
) -> BlockOutput:
chunks = self._split_text(
input_data.text, input_data.max_tokens, input_data.chunk_overlap
)
summaries = []
for chunk in chunks:
chunk_summary = self._summarize_chunk(chunk, input_data, credentials)
chunk_summary = await self._summarize_chunk(chunk, input_data, credentials)
summaries.append(chunk_summary)
final_summary = self._combine_summaries(summaries, input_data, credentials)
final_summary = await self._combine_summaries(
summaries, input_data, credentials
)
yield "summary", final_summary
yield "prompt", self.prompt
@@ -968,22 +1131,22 @@ class AITextSummarizerBlock(AIBlockBase):
return chunks
def llm_call(
async def llm_call(
self,
input_data: AIStructuredResponseGeneratorBlock.Input,
credentials: APIKeyCredentials,
) -> dict:
block = AIStructuredResponseGeneratorBlock()
response = block.run_once(input_data, "response", credentials=credentials)
response = await block.run_once(input_data, "response", credentials=credentials)
self.merge_llm_stats(block)
return response
def _summarize_chunk(
async def _summarize_chunk(
self, chunk: str, input_data: Input, credentials: APIKeyCredentials
) -> str:
prompt = f"Summarize the following text in a {input_data.style} form. Focus your summary on the topic of `{input_data.focus}` if present, otherwise just provide a general summary:\n\n```{chunk}```"
llm_response = self.llm_call(
llm_response = await self.llm_call(
AIStructuredResponseGeneratorBlock.Input(
prompt=prompt,
credentials=input_data.credentials,
@@ -995,7 +1158,7 @@ class AITextSummarizerBlock(AIBlockBase):
return llm_response["summary"]
def _combine_summaries(
async def _combine_summaries(
self, summaries: list[str], input_data: Input, credentials: APIKeyCredentials
) -> str:
combined_text = "\n\n".join(summaries)
@@ -1003,7 +1166,7 @@ class AITextSummarizerBlock(AIBlockBase):
if len(combined_text.split()) <= input_data.max_tokens:
prompt = f"Provide a final summary of the following section summaries in a {input_data.style} form, focus your summary on the topic of `{input_data.focus}` if present:\n\n ```{combined_text}```\n\n Just respond with the final_summary in the format specified."
llm_response = self.llm_call(
llm_response = await self.llm_call(
AIStructuredResponseGeneratorBlock.Input(
prompt=prompt,
credentials=input_data.credentials,
@@ -1018,7 +1181,8 @@ class AITextSummarizerBlock(AIBlockBase):
return llm_response["final_summary"]
else:
# If combined summaries are still too long, recursively summarize
return self._run(
block = AITextSummarizerBlock()
return await block.run_once(
AITextSummarizerBlock.Input(
text=combined_text,
credentials=input_data.credentials,
@@ -1026,10 +1190,9 @@ class AITextSummarizerBlock(AIBlockBase):
max_tokens=input_data.max_tokens,
chunk_overlap=input_data.chunk_overlap,
),
"summary",
credentials=credentials,
).send(None)[
1
] # Get the first yielded value
)
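
Recursive re-summarisation now instantiates a fresh block and awaits `run_once` instead of manually priming a sync generator with `.send(None)`. A sketch of `run_once` semantics as this change assumes them (the real helper's signature may differ):

```python
# Sketch of run_once semantics as assumed here: drive the block's async
# generator and return the first value yielded under the requested output
# name. Illustrative only.
async def run_once(block, input_data, output_name: str, **kwargs):
    async for name, value in block.run(input_data, **kwargs):
        if name == output_name:
            return value
    raise ValueError(f"block produced no {output_name!r} output")
```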
class AIConversationBlock(AIBlockBase):
@@ -1100,20 +1263,20 @@ class AIConversationBlock(AIBlockBase):
},
)
def llm_call(
async def llm_call(
self,
input_data: AIStructuredResponseGeneratorBlock.Input,
credentials: APIKeyCredentials,
) -> str:
) -> dict:
block = AIStructuredResponseGeneratorBlock()
response = block.run_once(input_data, "response", credentials=credentials)
response = await block.run_once(input_data, "response", credentials=credentials)
self.merge_llm_stats(block)
return response["response"]
return response
def run(
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
response = self.llm_call(
response = await self.llm_call(
AIStructuredResponseGeneratorBlock.Input(
prompt=input_data.prompt,
credentials=input_data.credentials,
@@ -1125,7 +1288,6 @@ class AIConversationBlock(AIBlockBase):
),
credentials=credentials,
)
yield "response", response
yield "prompt", self.prompt
@@ -1219,13 +1381,15 @@ class AIListGeneratorBlock(AIBlockBase):
},
)
def llm_call(
async def llm_call(
self,
input_data: AIStructuredResponseGeneratorBlock.Input,
credentials: APIKeyCredentials,
) -> dict[str, str]:
llm_block = AIStructuredResponseGeneratorBlock()
response = llm_block.run_once(input_data, "response", credentials=credentials)
response = await llm_block.run_once(
input_data, "response", credentials=credentials
)
self.merge_llm_stats(llm_block)
return response
@@ -1248,7 +1412,7 @@ class AIListGeneratorBlock(AIBlockBase):
logger.error(f"Failed to convert string to list: {e}")
raise ValueError("Invalid list format. Could not convert to list.")
def run(
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
logger.debug(f"Starting AIListGeneratorBlock.run with input data: {input_data}")
@@ -1314,7 +1478,7 @@ class AIListGeneratorBlock(AIBlockBase):
for attempt in range(input_data.max_retries):
try:
logger.debug("Calling LLM")
llm_response = self.llm_call(
llm_response = await self.llm_call(
AIStructuredResponseGeneratorBlock.Input(
sys_prompt=sys_prompt,
prompt=prompt,

View File

@@ -52,7 +52,7 @@ class CalculatorBlock(Block):
],
)
def run(self, input_data: Input, **kwargs) -> BlockOutput:
async def run(self, input_data: Input, **kwargs) -> BlockOutput:
operation = input_data.operation
a = input_data.a
b = input_data.b
@@ -107,7 +107,7 @@ class CountItemsBlock(Block):
],
)
def run(self, input_data: Input, **kwargs) -> BlockOutput:
async def run(self, input_data: Input, **kwargs) -> BlockOutput:
collection = input_data.collection
try:

View File

@@ -39,7 +39,7 @@ class MediaDurationBlock(Block):
output_schema=MediaDurationBlock.Output,
)
def run(
async def run(
self,
input_data: Input,
*,
@@ -47,7 +47,7 @@ class MediaDurationBlock(Block):
**kwargs,
) -> BlockOutput:
# 1) Store the input media locally
local_media_path = store_media_file(
local_media_path = await store_media_file(
graph_exec_id=graph_exec_id,
file=input_data.media_in,
return_content=False,
@@ -105,7 +105,7 @@ class LoopVideoBlock(Block):
output_schema=LoopVideoBlock.Output,
)
def run(
async def run(
self,
input_data: Input,
*,
@@ -114,7 +114,7 @@ class LoopVideoBlock(Block):
**kwargs,
) -> BlockOutput:
# 1) Store the input video locally
local_video_path = store_media_file(
local_video_path = await store_media_file(
graph_exec_id=graph_exec_id,
file=input_data.video_in,
return_content=False,
@@ -146,7 +146,7 @@ class LoopVideoBlock(Block):
looped_clip.write_videofile(output_abspath, codec="libx264", audio_codec="aac")
# Return as data URI
video_out = store_media_file(
video_out = await store_media_file(
graph_exec_id=graph_exec_id,
file=output_filename,
return_content=input_data.output_return_type == "data_uri",
@@ -194,7 +194,7 @@ class AddAudioToVideoBlock(Block):
output_schema=AddAudioToVideoBlock.Output,
)
def run(
async def run(
self,
input_data: Input,
*,
@@ -203,12 +203,12 @@ class AddAudioToVideoBlock(Block):
**kwargs,
) -> BlockOutput:
# 1) Store the inputs locally
local_video_path = store_media_file(
local_video_path = await store_media_file(
graph_exec_id=graph_exec_id,
file=input_data.video_in,
return_content=False,
)
local_audio_path = store_media_file(
local_audio_path = await store_media_file(
graph_exec_id=graph_exec_id,
file=input_data.audio_in,
return_content=False,
@@ -236,7 +236,7 @@ class AddAudioToVideoBlock(Block):
final_clip.write_videofile(output_abspath, codec="libx264", audio_codec="aac")
# 5) Return either path or data URI
video_out = store_media_file(
video_out = await store_media_file(
graph_exec_id=graph_exec_id,
file=output_filename,
return_content=input_data.output_return_type == "data_uri",

View File

@@ -13,7 +13,7 @@ from backend.data.model import (
SecretField,
)
from backend.integrations.providers import ProviderName
from backend.util.request import requests
from backend.util.request import Requests
TEST_CREDENTIALS = APIKeyCredentials(
id="01234567-89ab-cdef-0123-456789abcdef",
@@ -130,7 +130,7 @@ class PublishToMediumBlock(Block):
test_credentials=TEST_CREDENTIALS,
)
def create_post(
async def create_post(
self,
api_key: SecretStr,
author_id,
@@ -160,18 +160,17 @@ class PublishToMediumBlock(Block):
"notifyFollowers": notify_followers,
}
response = requests.post(
response = await Requests().post(
f"https://api.medium.com/v1/users/{author_id}/posts",
headers=headers,
json=data,
)
return response.json()
def run(
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
response = self.create_post(
response = await self.create_post(
credentials.api_key,
input_data.author_id.get_secret_value(),
input_data.title,

View File

@@ -13,7 +13,7 @@ from backend.data.model import (
from backend.integrations.providers import ProviderName
TEST_CREDENTIALS = APIKeyCredentials(
id="ed55ac19-356e-4243-a6cb-bc599e9b716f",
id="8cc8b2c5-d3e4-4b1c-84ad-e1e9fe2a0122",
provider="mem0",
api_key=SecretStr("mock-mem0-api-key"),
title="Mock Mem0 API key",
@@ -67,12 +67,11 @@ class AddMemoryBlock(Block, Mem0Base):
metadata: dict[str, Any] = SchemaField(
description="Optional metadata for the memory", default_factory=dict
)
limit_memory_to_run: bool = SchemaField(
description="Limit the memory to the run", default=False
)
limit_memory_to_agent: bool = SchemaField(
description="Limit the memory to the agent", default=False
description="Limit the memory to the agent", default=True
)
class Output(BlockSchema):
@@ -104,12 +103,17 @@ class AddMemoryBlock(Block, Mem0Base):
"credentials": TEST_CREDENTIALS_INPUT,
},
],
test_output=[("action", "NO_CHANGE"), ("action", "NO_CHANGE")],
test_output=[
("action", "CREATED"),
("memory", "test memory"),
("action", "CREATED"),
("memory", "test memory"),
],
test_credentials=TEST_CREDENTIALS,
test_mock={"_get_client": lambda credentials: MockMemoryClient()},
)
def run(
async def run(
self,
input_data: Input,
*,
@@ -117,15 +121,17 @@ class AddMemoryBlock(Block, Mem0Base):
user_id: str,
graph_id: str,
graph_exec_id: str,
**kwargs
**kwargs,
) -> BlockOutput:
try:
client = self._get_client(credentials)
if isinstance(input_data.content, Conversation):
messages = input_data.content.messages
elif isinstance(input_data.content, Content):
messages = [{"role": "user", "content": input_data.content.content}]
else:
messages = [{"role": "user", "content": input_data.content}]
messages = [{"role": "user", "content": str(input_data.content)}]
params = {
"user_id": user_id,
@@ -152,7 +158,7 @@ class AddMemoryBlock(Block, Mem0Base):
yield "action", "NO_CHANGE"
except Exception as e:
yield "error", str(object=e)
yield "error", str(e)
class SearchMemoryBlock(Block, Mem0Base):
@@ -176,6 +182,10 @@ class SearchMemoryBlock(Block, Mem0Base):
default_factory=list,
advanced=True,
)
metadata_filter: Optional[dict[str, Any]] = SchemaField(
description="Optional metadata filters to apply",
default=None,
)
limit_memory_to_run: bool = SchemaField(
description="Limit the memory to the run", default=False
)
@@ -206,7 +216,7 @@ class SearchMemoryBlock(Block, Mem0Base):
test_mock={"_get_client": lambda credentials: MockMemoryClient()},
)
def run(
async def run(
self,
input_data: Input,
*,
@@ -214,7 +224,7 @@ class SearchMemoryBlock(Block, Mem0Base):
user_id: str,
graph_id: str,
graph_exec_id: str,
**kwargs
**kwargs,
) -> BlockOutput:
try:
client = self._get_client(credentials)
@@ -233,6 +243,8 @@ class SearchMemoryBlock(Block, Mem0Base):
filters["AND"].append({"run_id": graph_exec_id})
if input_data.limit_memory_to_agent:
filters["AND"].append({"agent_id": graph_id})
if input_data.metadata_filter:
filters["AND"].append({"metadata": input_data.metadata_filter})
result: list[dict[str, Any]] = client.search(
input_data.query, version="v2", filters=filters
@@ -258,11 +270,15 @@ class GetAllMemoriesBlock(Block, Mem0Base):
categories: Optional[list[str]] = SchemaField(
description="Filter by categories", default=None
)
metadata_filter: Optional[dict[str, Any]] = SchemaField(
description="Optional metadata filters to apply",
default=None,
)
limit_memory_to_run: bool = SchemaField(
description="Limit the memory to the run", default=False
)
limit_memory_to_agent: bool = SchemaField(
description="Limit the memory to the agent", default=False
description="Limit the memory to the agent", default=True
)
class Output(BlockSchema):
@@ -272,11 +288,11 @@ class GetAllMemoriesBlock(Block, Mem0Base):
def __init__(self):
super().__init__(
id="45aee5bf-4767-45d1-a28b-e01c5aae9fc1",
description="Retrieve all memories from Mem0 with pagination",
description="Retrieve all memories from Mem0 with optional conversation filtering",
input_schema=GetAllMemoriesBlock.Input,
output_schema=GetAllMemoriesBlock.Output,
test_input={
"user_id": "test_user",
"metadata_filter": {"type": "test"},
"credentials": TEST_CREDENTIALS_INPUT,
},
test_output=[
@@ -286,7 +302,7 @@ class GetAllMemoriesBlock(Block, Mem0Base):
test_mock={"_get_client": lambda credentials: MockMemoryClient()},
)
def run(
async def run(
self,
input_data: Input,
*,
@@ -294,7 +310,7 @@ class GetAllMemoriesBlock(Block, Mem0Base):
user_id: str,
graph_id: str,
graph_exec_id: str,
**kwargs
**kwargs,
) -> BlockOutput:
try:
client = self._get_client(credentials)
@@ -312,6 +328,8 @@ class GetAllMemoriesBlock(Block, Mem0Base):
filters["AND"].append(
{"categories": {"contains": input_data.categories}}
)
if input_data.metadata_filter:
filters["AND"].append({"metadata": input_data.metadata_filter})
memories: list[dict[str, Any]] = client.get_all(
filters=filters,
@@ -324,14 +342,116 @@ class GetAllMemoriesBlock(Block, Mem0Base):
yield "error", str(e)
class GetLatestMemoryBlock(Block, Mem0Base):
"""Block for retrieving the latest memory from Mem0"""
class Input(BlockSchema):
credentials: CredentialsMetaInput[
Literal[ProviderName.MEM0], Literal["api_key"]
] = CredentialsField(description="Mem0 API key credentials")
trigger: bool = SchemaField(
description="An unused field that is used to trigger the block when you have no other inputs",
default=False,
advanced=False,
)
categories: Optional[list[str]] = SchemaField(
description="Filter by categories", default=None
)
conversation_id: Optional[str] = SchemaField(
description="Optional conversation ID to retrieve the latest memory from (uses run_id)",
default=None,
)
metadata_filter: Optional[dict[str, Any]] = SchemaField(
description="Optional metadata filters to apply",
default=None,
)
limit_memory_to_run: bool = SchemaField(
description="Limit the memory to the run", default=False
)
limit_memory_to_agent: bool = SchemaField(
description="Limit the memory to the agent", default=True
)
class Output(BlockSchema):
memory: Optional[dict[str, Any]] = SchemaField(
description="Latest memory if found"
)
found: bool = SchemaField(description="Whether a memory was found")
error: str = SchemaField(description="Error message if operation fails")
def __init__(self):
super().__init__(
id="0f9d81b5-a145-4c23-b87f-01d6bf37b677",
description="Retrieve the latest memory from Mem0 with optional key filtering",
input_schema=GetLatestMemoryBlock.Input,
output_schema=GetLatestMemoryBlock.Output,
test_input={
"metadata_filter": {"type": "test"},
"credentials": TEST_CREDENTIALS_INPUT,
},
test_output=[
("memory", {"id": "test-memory", "content": "test content"}),
("found", True),
],
test_credentials=TEST_CREDENTIALS,
test_mock={"_get_client": lambda credentials: MockMemoryClient()},
)
async def run(
self,
input_data: Input,
*,
credentials: APIKeyCredentials,
user_id: str,
graph_id: str,
graph_exec_id: str,
**kwargs,
) -> BlockOutput:
try:
client = self._get_client(credentials)
filters: Filter = {
"AND": [
{"user_id": user_id},
]
}
if input_data.limit_memory_to_run:
filters["AND"].append({"run_id": graph_exec_id})
if input_data.limit_memory_to_agent:
filters["AND"].append({"agent_id": graph_id})
if input_data.categories:
filters["AND"].append(
{"categories": {"contains": input_data.categories}}
)
if input_data.metadata_filter:
filters["AND"].append({"metadata": input_data.metadata_filter})
memories: list[dict[str, Any]] = client.get_all(
filters=filters,
version="v2",
)
if memories:
# Return the latest memory (first in the list as they're sorted by recency)
latest_memory = memories[0]
yield "memory", latest_memory
yield "found", True
else:
yield "memory", None
yield "found", False
except Exception as e:
yield "error", str(e)
# Mock client for testing
class MockMemoryClient:
"""Mock Mem0 client for testing"""
def add(self, *args, **kwargs):
return {"memory_id": "test-memory-id", "status": "success"}
return {"results": [{"event": "CREATED", "memory": "test memory"}]}
def search(self, *args, **kwargs) -> list[dict[str, str]]:
def search(self, *args, **kwargs) -> list[dict[str, Any]]:
return [{"id": "test-memory", "content": "test content"}]
def get_all(self, *args, **kwargs) -> list[dict[str, str]]:
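All three retrieval blocks now assemble the same Mem0 v2 filter document. Extracted into a standalone function for illustration, mirroring the branches shown above:

```python
from typing import Any

def build_filters(
    user_id: str,
    graph_id: str,
    graph_exec_id: str,
    *,
    limit_to_run: bool = False,
    limit_to_agent: bool = False,  # per-block defaults vary in the diff above
    categories: list[str] | None = None,
    metadata: dict[str, Any] | None = None,
) -> dict[str, Any]:
    """Standalone rendition of the AND-filter assembly used by the Mem0 blocks."""
    filters: dict[str, Any] = {"AND": [{"user_id": user_id}]}
    if limit_to_run:
        filters["AND"].append({"run_id": graph_exec_id})
    if limit_to_agent:
        filters["AND"].append({"agent_id": graph_id})
    if categories:
        filters["AND"].append({"categories": {"contains": categories}})
    if metadata:
        filters["AND"].append({"metadata": metadata})
    return filters

print(build_filters("u1", "g1", "e1", limit_to_agent=True, metadata={"type": "test"}))
# {'AND': [{'user_id': 'u1'}, {'agent_id': 'g1'}, {'metadata': {'type': 'test'}}]}
```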

View File

@@ -5,7 +5,7 @@ from backend.blocks.nvidia._auth import (
)
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.model import SchemaField
from backend.util.request import requests
from backend.util.request import Requests
from backend.util.type import MediaFileType
@@ -40,7 +40,7 @@ class NvidiaDeepfakeDetectBlock(Block):
output_schema=NvidiaDeepfakeDetectBlock.Output,
)
def run(
async def run(
self, input_data: Input, *, credentials: NvidiaCredentials, **kwargs
) -> BlockOutput:
url = "https://ai.api.nvidia.com/v1/cv/hive/deepfake-image-detection"
@@ -59,8 +59,7 @@ class NvidiaDeepfakeDetectBlock(Block):
}
try:
response = requests.post(url, headers=headers, json=payload)
response.raise_for_status()
response = await Requests().post(url, headers=headers, json=payload)
data = response.json()
result = data.get("data", [{}])[0]
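Note the dropped `raise_for_status()`. The working assumption, consistent with the Slant3D changes later in this diff that consult `response.ok` instead, is that the shared `Requests` wrapper surfaces HTTP failures itself. A stubbed sketch of that assumed contract:

```python
import asyncio

class _Response:
    def __init__(self, status: int, payload: dict):
        self.status, self._payload = status, payload

    @property
    def ok(self) -> bool:
        return 200 <= self.status < 300

    def json(self) -> dict:
        return self._payload

class Requests:
    """Assumed contract: raises on non-2xx, so callers need no
    explicit raise_for_status()."""

    async def post(self, url: str, **kwargs) -> _Response:
        response = _Response(200, {"data": [{"isDeepfake": 0.02}]})  # stubbed
        if not response.ok:
            raise RuntimeError(f"HTTP {response.status} for {url}")
        return response

async def main() -> None:
    response = await Requests().post("https://ai.api.nvidia.com/v1/cv/hive/deepfake-image-detection")
    print(response.json()["data"][0])

asyncio.run(main())
```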

View File

@@ -0,0 +1,154 @@
import logging
from typing import Any, Literal
from autogpt_libs.utils.cache import thread_cached
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
from backend.data.model import SchemaField
logger = logging.getLogger(__name__)
@thread_cached
def get_database_manager_client():
from backend.executor import DatabaseManagerAsyncClient
from backend.util.service import get_service_client
return get_service_client(DatabaseManagerAsyncClient, health_check=False)
StorageScope = Literal["within_agent", "across_agents"]
def get_storage_key(key: str, scope: StorageScope, graph_id: str) -> str:
"""Generate the storage key based on scope"""
if scope == "across_agents":
return f"global#{key}"
else:
return f"agent#{graph_id}#{key}"
class PersistInformationBlock(Block):
"""Block for persisting key-value data for the current user with configurable scope"""
class Input(BlockSchema):
key: str = SchemaField(description="Key to store the information under")
value: Any = SchemaField(description="Value to store")
scope: StorageScope = SchemaField(
description="Scope of persistence: within_agent (shared across all runs of this agent) or across_agents (shared across all agents for this user)",
default="within_agent",
)
class Output(BlockSchema):
value: Any = SchemaField(description="Value that was stored")
def __init__(self):
super().__init__(
id="1d055e55-a2b9-4547-8311-907d05b0304d",
description="Persist key-value information for the current user",
categories={BlockCategory.DATA},
input_schema=PersistInformationBlock.Input,
output_schema=PersistInformationBlock.Output,
test_input={
"key": "user_preference",
"value": {"theme": "dark", "language": "en"},
"scope": "within_agent",
},
test_output=[
("value", {"theme": "dark", "language": "en"}),
],
test_mock={
"_store_data": lambda *args, **kwargs: {
"theme": "dark",
"language": "en",
}
},
)
async def run(
self,
input_data: Input,
*,
user_id: str,
graph_id: str,
node_exec_id: str,
**kwargs,
) -> BlockOutput:
# Determine the storage key based on scope
storage_key = get_storage_key(input_data.key, input_data.scope, graph_id)
# Store the data
yield "value", await self._store_data(
user_id=user_id,
node_exec_id=node_exec_id,
key=storage_key,
data=input_data.value,
)
async def _store_data(
self, user_id: str, node_exec_id: str, key: str, data: Any
) -> Any | None:
return await get_database_manager_client().set_execution_kv_data(
user_id=user_id,
node_exec_id=node_exec_id,
key=key,
data=data,
)
class RetrieveInformationBlock(Block):
"""Block for retrieving key-value data for the current user with configurable scope"""
class Input(BlockSchema):
key: str = SchemaField(description="Key to retrieve the information for")
scope: StorageScope = SchemaField(
description="Scope of persistence: within_agent (shared across all runs of this agent) or across_agents (shared across all agents for this user)",
default="within_agent",
)
default_value: Any = SchemaField(
description="Default value to return if key is not found", default=None
)
class Output(BlockSchema):
value: Any = SchemaField(description="Retrieved value or default value")
def __init__(self):
super().__init__(
id="d8710fc9-6e29-481e-a7d5-165eb16f8471",
description="Retrieve key-value information for the current user",
categories={BlockCategory.DATA},
input_schema=RetrieveInformationBlock.Input,
output_schema=RetrieveInformationBlock.Output,
test_input={
"key": "user_preference",
"scope": "within_agent",
"default_value": {"theme": "light", "language": "en"},
},
test_output=[
("value", {"theme": "light", "language": "en"}),
],
test_mock={"_retrieve_data": lambda *args, **kwargs: None},
)
async def run(
self, input_data: Input, *, user_id: str, graph_id: str, **kwargs
) -> BlockOutput:
# Determine the storage key based on scope
storage_key = get_storage_key(input_data.key, input_data.scope, graph_id)
# Retrieve the data
stored_value = await self._retrieve_data(
user_id=user_id,
key=storage_key,
)
if stored_value is not None:
yield "value", stored_value
else:
yield "value", input_data.default_value
async def _retrieve_data(self, user_id: str, key: str) -> Any | None:
return await get_database_manager_client().get_execution_kv_data(
user_id=user_id,
key=key,
)
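The scope setting only changes how the key is namespaced. Since `get_storage_key` appears above in full, a quick demonstration of the resulting key layout:

```python
from typing import Literal

StorageScope = Literal["within_agent", "across_agents"]

def get_storage_key(key: str, scope: StorageScope, graph_id: str) -> str:
    # Same logic as the block file above.
    if scope == "across_agents":
        return f"global#{key}"
    return f"agent#{graph_id}#{key}"

print(get_storage_key("user_preference", "within_agent", "g-42"))
# agent#g-42#user_preference -> isolated per agent graph
print(get_storage_key("user_preference", "across_agents", "g-42"))
# global#user_preference     -> shared by all of the user's agents
```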

View File

@@ -56,7 +56,7 @@ class PineconeInitBlock(Block):
output_schema=PineconeInitBlock.Output,
)
def run(
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
pc = Pinecone(api_key=credentials.api_key.get_secret_value())
@@ -117,7 +117,7 @@ class PineconeQueryBlock(Block):
output_schema=PineconeQueryBlock.Output,
)
def run(
async def run(
self,
input_data: Input,
*,
@@ -195,7 +195,7 @@ class PineconeInsertBlock(Block):
output_schema=PineconeInsertBlock.Output,
)
def run(
async def run(
self,
input_data: Input,
*,

View File

@@ -146,7 +146,7 @@ class GetRedditPostsBlock(Block):
subreddit = client.subreddit(input_data.subreddit)
return subreddit.new(limit=input_data.post_limit or 10)
def run(
async def run(
self, input_data: Input, *, credentials: RedditCredentials, **kwargs
) -> BlockOutput:
current_time = datetime.now(tz=timezone.utc)
@@ -207,7 +207,7 @@ class PostRedditCommentBlock(Block):
raise ValueError("Failed to post comment.")
return new_comment.id
def run(
async def run(
self, input_data: Input, *, credentials: RedditCredentials, **kwargs
) -> BlockOutput:
yield "comment_id", self.reply_post(credentials, input_data.data)

View File

@@ -2,8 +2,8 @@ import os
from enum import Enum
from typing import Literal
import replicate
from pydantic import SecretStr
from replicate.client import Client as ReplicateClient
from replicate.helpers import FileOutput
from backend.data.block import Block, BlockCategory, BlockOutput, BlockSchema
@@ -159,7 +159,7 @@ class ReplicateFluxAdvancedModelBlock(Block):
test_credentials=TEST_CREDENTIALS,
)
def run(
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
# If the seed is not provided, generate a random seed
@@ -168,7 +168,7 @@ class ReplicateFluxAdvancedModelBlock(Block):
seed = int.from_bytes(os.urandom(4), "big")
# Run the model using the provided inputs
result = self.run_model(
result = await self.run_model(
api_key=credentials.api_key,
model_name=input_data.replicate_model_name.api_name,
prompt=input_data.prompt,
@@ -183,7 +183,7 @@ class ReplicateFluxAdvancedModelBlock(Block):
)
yield "result", result
def run_model(
async def run_model(
self,
api_key: SecretStr,
model_name,
@@ -198,10 +198,10 @@ class ReplicateFluxAdvancedModelBlock(Block):
safety_tolerance,
):
# Initialize Replicate client with the API key
client = replicate.Client(api_token=api_key.get_secret_value())
client = ReplicateClient(api_token=api_key.get_secret_value())
# Run the model with additional parameters
output: FileOutput | list[FileOutput] = client.run( # type: ignore This is because they changed the return type, and didn't update the type hint! It should be overloaded depending on the value of `use_file_output` to `FileOutput | list[FileOutput]` but it's `Any | Iterator[Any]`
output: FileOutput | list[FileOutput] = await client.async_run( # type: ignore This is because they changed the return type, and didn't update the type hint! It should be overloaded depending on the value of `use_file_output` to `FileOutput | list[FileOutput]` but it's `Any | Iterator[Any]`
f"{model_name}",
input={
"prompt": prompt,

View File

@@ -1,4 +1,4 @@
import time
import asyncio
from datetime import datetime, timedelta, timezone
from typing import Any
@@ -87,7 +87,7 @@ class ReadRSSFeedBlock(Block):
def parse_feed(url: str) -> dict[str, Any]:
return feedparser.parse(url) # type: ignore
def run(self, input_data: Input, **kwargs) -> BlockOutput:
async def run(self, input_data: Input, **kwargs) -> BlockOutput:
keep_going = True
start_time = datetime.now(timezone.utc) - timedelta(
minutes=input_data.time_period
@@ -113,4 +113,4 @@ class ReadRSSFeedBlock(Block):
),
)
time.sleep(input_data.polling_rate)
await asyncio.sleep(input_data.polling_rate)
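This one-line change is the crux of the async migration: `time.sleep` would stall every coroutine sharing the executor's event loop, while `await asyncio.sleep` suspends only this block. A small demonstration:

```python
import asyncio

async def polling_loop(polling_rate: float, ticks: int) -> None:
    for i in range(ticks):
        print(f"poll {i}")
        # time.sleep(polling_rate) here would freeze ALL coroutines;
        # asyncio.sleep suspends only this one.
        await asyncio.sleep(polling_rate)

async def heartbeat(ticks: int) -> None:
    for _ in range(ticks):
        print("heartbeat")
        await asyncio.sleep(0.05)

async def main() -> None:
    # The two tasks interleave; with time.sleep the heartbeat would starve.
    await asyncio.gather(polling_loop(0.1, 3), heartbeat(6))

asyncio.run(main())
```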

View File

@@ -93,7 +93,7 @@ class DataSamplingBlock(Block):
)
self.accumulated_data = []
def run(self, input_data: Input, **kwargs) -> BlockOutput:
async def run(self, input_data: Input, **kwargs) -> BlockOutput:
if input_data.accumulate:
if isinstance(input_data.data, dict):
self.accumulated_data.append(input_data.data)

View File

@@ -105,7 +105,7 @@ class ScreenshotWebPageBlock(Block):
)
@staticmethod
def take_screenshot(
async def take_screenshot(
credentials: APIKeyCredentials,
graph_exec_id: str,
url: str,
@@ -121,11 +121,10 @@ class ScreenshotWebPageBlock(Block):
"""
Takes a screenshot using the ScreenshotOne API
"""
api = Requests(trusted_origins=["https://api.screenshotone.com"])
api = Requests()
# Build API URL with parameters
# Build API parameters
params = {
"access_key": credentials.api_key.get_secret_value(),
"url": url,
"viewport_width": viewport_width,
"viewport_height": viewport_height,
@@ -137,19 +136,28 @@ class ScreenshotWebPageBlock(Block):
"cache": str(cache).lower(),
}
response = api.get("https://api.screenshotone.com/take", params=params)
# Make the API request
# Use header-based authentication instead of query parameter
headers = {
"X-Access-Key": credentials.api_key.get_secret_value(),
}
response = await api.get(
"https://api.screenshotone.com/take", params=params, headers=headers
)
content = response.content
return {
"image": store_media_file(
"image": await store_media_file(
graph_exec_id=graph_exec_id,
file=MediaFileType(
f"data:image/{format.value};base64,{b64encode(response.content).decode('utf-8')}"
f"data:image/{format.value};base64,{b64encode(content).decode('utf-8')}"
),
return_content=True,
)
}
def run(
async def run(
self,
input_data: Input,
*,
@@ -158,7 +166,7 @@ class ScreenshotWebPageBlock(Block):
**kwargs,
) -> BlockOutput:
try:
screenshot_data = self.take_screenshot(
screenshot_data = await self.take_screenshot(
credentials=credentials,
graph_exec_id=graph_exec_id,
url=input_data.url,
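Beyond going async, the block stops sending the API key as a query parameter and moves it into an `X-Access-Key` header, keeping the secret out of URLs, proxies, and access logs. A sketch of the header-based form; the header name comes from the diff, while the wrapper internals below are stubs:

```python
import asyncio
from base64 import b64encode

class _Response:
    content = b"\x89PNG..."  # stubbed image bytes

class Requests:
    """Stub for the backend's async HTTP wrapper (assumed internals)."""

    async def get(self, url: str, *, params: dict, headers: dict) -> _Response:
        await asyncio.sleep(0)  # placeholder for the real request
        return _Response()

async def take_screenshot(api_key: str, url: str) -> str:
    params = {"url": url, "format": "png"}  # no secret in the query string
    headers = {"X-Access-Key": api_key}     # the secret travels in a header
    response = await Requests().get(
        "https://api.screenshotone.com/take", params=params, headers=headers
    )
    return f"data:image/png;base64,{b64encode(response.content).decode('utf-8')}"

print(asyncio.run(take_screenshot("sk-test", "https://example.com"))[:48])
```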

View File

@@ -36,10 +36,10 @@ class GetWikipediaSummaryBlock(Block, GetRequest):
test_mock={"get_request": lambda url, json: {"extract": "summary content"}},
)
def run(self, input_data: Input, **kwargs) -> BlockOutput:
async def run(self, input_data: Input, **kwargs) -> BlockOutput:
topic = input_data.topic
url = f"https://en.wikipedia.org/api/rest_v1/page/summary/{topic}"
response = self.get_request(url, json=True)
response = await self.get_request(url, json=True)
if "extract" not in response:
raise RuntimeError(f"Unable to parse Wikipedia response: {response}")
yield "summary", response["extract"]
@@ -113,14 +113,14 @@ class GetWeatherInformationBlock(Block, GetRequest):
test_credentials=TEST_CREDENTIALS,
)
def run(
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
units = "metric" if input_data.use_celsius else "imperial"
api_key = credentials.api_key
location = input_data.location
url = f"http://api.openweathermap.org/data/2.5/weather?q={quote(location)}&appid={api_key}&units={units}"
weather_data = self.get_request(url, json=True)
weather_data = await self.get_request(url, json=True)
if "main" in weather_data and "weather" in weather_data:
yield "temperature", str(weather_data["main"]["temp"])

View File

@@ -1,7 +1,7 @@
from typing import Any, Dict
from backend.data.block import Block
from backend.util.request import requests
from backend.util.request import Requests
from ._api import Color, CustomerDetails, OrderItem, Profile
@@ -14,20 +14,25 @@ class Slant3DBlockBase(Block):
def _get_headers(self, api_key: str) -> Dict[str, str]:
return {"api-key": api_key, "Content-Type": "application/json"}
def _make_request(self, method: str, endpoint: str, api_key: str, **kwargs) -> Dict:
async def _make_request(
self, method: str, endpoint: str, api_key: str, **kwargs
) -> Dict:
url = f"{self.BASE_URL}/{endpoint}"
response = requests.request(
response = await Requests().request(
method=method, url=url, headers=self._get_headers(api_key), **kwargs
)
resp = response.json()
if not response.ok:
error_msg = response.json().get("error", "Unknown error")
error_msg = resp.get("error", "Unknown error")
raise RuntimeError(f"API request failed: {error_msg}")
return response.json()
return resp
def _check_valid_color(self, profile: Profile, color: Color, api_key: str) -> str:
response = self._make_request(
async def _check_valid_color(
self, profile: Profile, color: Color, api_key: str
) -> str:
response = await self._make_request(
"GET",
"filament",
api_key,
@@ -48,10 +53,12 @@ Valid colors for {profile.value} are:
)
return color_tag
def _convert_to_color(self, profile: Profile, color: Color, api_key: str) -> str:
return self._check_valid_color(profile, color, api_key)
async def _convert_to_color(
self, profile: Profile, color: Color, api_key: str
) -> str:
return await self._check_valid_color(profile, color, api_key)
def _format_order_data(
async def _format_order_data(
self,
customer: CustomerDetails,
order_number: str,
@@ -61,6 +68,7 @@ Valid colors for {profile.value} are:
"""Helper function to format order data for API requests"""
orders = []
for item in items:
color_tag = await self._convert_to_color(item.profile, item.color, api_key)
order_data = {
"email": customer.email,
"phone": customer.phone,
@@ -85,9 +93,7 @@ Valid colors for {profile.value} are:
"order_quantity": item.quantity,
"order_image_url": "",
"order_sku": "NOT_USED",
"order_item_color": self._convert_to_color(
item.profile, item.color, api_key
),
"order_item_color": color_tag,
"profile": item.profile.value,
}
orders.append(order_data)
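Two small cleanups ride along with the async change here: the response body is parsed once into `resp` instead of twice, and the `await self._convert_to_color(...)` is hoisted out of the order dict into `color_tag`. A stubbed sketch of the resulting request helper (the URL below is illustrative, not the block's real `BASE_URL`):

```python
import asyncio
from typing import Any, Dict

class _Response:
    def __init__(self, ok: bool, payload: Dict[str, Any]):
        self.ok, self._payload = ok, payload

    def json(self) -> Dict[str, Any]:
        return self._payload

async def _fake_http(method: str, url: str, **kwargs) -> _Response:
    await asyncio.sleep(0)  # placeholder for Requests().request(...)
    return _Response(True, {"filaments": ["PLA BLACK"]})

async def make_request(method: str, endpoint: str, api_key: str, **kwargs) -> Dict:
    response = await _fake_http(method, f"https://api.example.com/{endpoint}", **kwargs)
    resp = response.json()  # parse once, reuse on both paths
    if not response.ok:
        raise RuntimeError(f"API request failed: {resp.get('error', 'Unknown error')}")
    return resp

print(asyncio.run(make_request("GET", "filament", "test-key")))
```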

View File

@@ -72,11 +72,11 @@ class Slant3DFilamentBlock(Slant3DBlockBase):
},
)
def run(
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
try:
result = self._make_request(
result = await self._make_request(
"GET", "filament", credentials.api_key.get_secret_value()
)
yield "filaments", result["filaments"]

View File

@@ -1,8 +1,6 @@
import uuid
from typing import List
import requests as baserequests
from backend.data.block import BlockOutput, BlockSchema
from backend.data.model import APIKeyCredentials, SchemaField
from backend.util import settings
@@ -76,17 +74,17 @@ class Slant3DCreateOrderBlock(Slant3DBlockBase):
},
)
def run(
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
try:
order_data = self._format_order_data(
order_data = await self._format_order_data(
input_data.customer,
input_data.order_number,
input_data.items,
credentials.api_key.get_secret_value(),
)
result = self._make_request(
result = await self._make_request(
"POST", "order", credentials.api_key.get_secret_value(), json=order_data
)
yield "order_id", result["orderId"]
@@ -162,28 +160,24 @@ class Slant3DEstimateOrderBlock(Slant3DBlockBase):
},
)
def run(
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
order_data = self._format_order_data(
order_data = await self._format_order_data(
input_data.customer,
input_data.order_number,
input_data.items,
credentials.api_key.get_secret_value(),
)
try:
result = self._make_request(
"POST",
"order/estimate",
credentials.api_key.get_secret_value(),
json=order_data,
)
yield "total_price", result["totalPrice"]
yield "shipping_cost", result["shippingCost"]
yield "printing_cost", result["printingCost"]
except baserequests.HTTPError as e:
yield "error", str(f"Error estimating order: {e} {e.response.text}")
raise
result = await self._make_request(
"POST",
"order/estimate",
credentials.api_key.get_secret_value(),
json=order_data,
)
yield "total_price", result["totalPrice"]
yield "shipping_cost", result["shippingCost"]
yield "printing_cost", result["printingCost"]
class Slant3DEstimateShippingBlock(Slant3DBlockBase):
@@ -246,17 +240,17 @@ class Slant3DEstimateShippingBlock(Slant3DBlockBase):
},
)
def run(
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
try:
order_data = self._format_order_data(
order_data = await self._format_order_data(
input_data.customer,
input_data.order_number,
input_data.items,
credentials.api_key.get_secret_value(),
)
result = self._make_request(
result = await self._make_request(
"POST",
"order/estimateShipping",
credentials.api_key.get_secret_value(),
@@ -312,11 +306,11 @@ class Slant3DGetOrdersBlock(Slant3DBlockBase):
},
)
def run(
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
try:
result = self._make_request(
result = await self._make_request(
"GET", "order", credentials.api_key.get_secret_value()
)
yield "orders", [str(order["orderId"]) for order in result["ordersData"]]
@@ -359,11 +353,11 @@ class Slant3DTrackingBlock(Slant3DBlockBase):
},
)
def run(
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
try:
result = self._make_request(
result = await self._make_request(
"GET",
f"order/{input_data.order_id}/get-tracking",
credentials.api_key.get_secret_value(),
@@ -403,11 +397,11 @@ class Slant3DCancelOrderBlock(Slant3DBlockBase):
},
)
def run(
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
try:
result = self._make_request(
result = await self._make_request(
"DELETE",
f"order/{input_data.order_id}",
credentials.api_key.get_secret_value(),

View File

@@ -44,11 +44,11 @@ class Slant3DSlicerBlock(Slant3DBlockBase):
},
)
def run(
async def run(
self, input_data: Input, *, credentials: APIKeyCredentials, **kwargs
) -> BlockOutput:
try:
result = self._make_request(
result = await self._make_request(
"POST",
"slicer",
credentials.api_key.get_secret_value(),

View File

@@ -37,7 +37,7 @@ class Slant3DTriggerBase:
description="Error message if payload processing failed"
)
def run(self, input_data: Input, **kwargs) -> BlockOutput:
async def run(self, input_data: Input, **kwargs) -> BlockOutput:
yield "payload", input_data.payload
yield "order_id", input_data.payload["orderId"]
@@ -117,8 +117,9 @@ class Slant3DOrderWebhookBlock(Slant3DTriggerBase, Block):
],
)
def run(self, input_data: Input, **kwargs) -> BlockOutput: # type: ignore
yield from super().run(input_data, **kwargs)
async def run(self, input_data: Input, **kwargs) -> BlockOutput: # type: ignore
async for name, value in super().run(input_data, **kwargs):
yield name, value
# Extract and normalize values from the payload
yield "status", input_data.payload["status"]

View File

@@ -26,10 +26,10 @@ logger = logging.getLogger(__name__)
@thread_cached
def get_database_manager_client():
from backend.executor import DatabaseManager
from backend.executor import DatabaseManagerAsyncClient
from backend.util.service import get_service_client
return get_service_client(DatabaseManager)
return get_service_client(DatabaseManagerAsyncClient, health_check=False)
def _get_tool_requests(entry: dict[str, Any]) -> list[str]:
@@ -85,7 +85,7 @@ def _get_tool_responses(entry: dict[str, Any]) -> list[str]:
return tool_call_ids
def _create_tool_response(call_id: str, output: dict[str, Any]) -> dict[str, Any]:
def _create_tool_response(call_id: str, output: Any) -> dict[str, Any]:
"""
Create a tool response message for either OpenAI or Anthropics,
based on the tool_id format.
@@ -142,6 +142,12 @@ class SmartDecisionMakerBlock(Block):
advanced=False,
)
credentials: llm.AICredentials = llm.AICredentialsField()
multiple_tool_calls: bool = SchemaField(
title="Multiple Tool Calls",
default=False,
description="Whether to allow multiple tool calls in a single response.",
advanced=True,
)
sys_prompt: str = SchemaField(
title="System Prompt",
default="Thinking carefully step by step decide which function to call. "
@@ -150,7 +156,7 @@ class SmartDecisionMakerBlock(Block):
"matching the required jsonschema signature, no missing argument is allowed. "
"If you have already completed the task objective, you can end the task "
"by providing the end result of your work as a finish message. "
"Only provide EXACTLY one function call, multiple tool calls is strictly prohibited.",
"Function parameters that has no default value and not optional typed has to be provided. ",
description="The system prompt to provide additional context to the model.",
)
conversation_history: list[dict] = SchemaField(
@@ -206,6 +212,15 @@ class SmartDecisionMakerBlock(Block):
"link like the output of `StoreValue` or `AgentInput` block"
)
# Check that both conversation_history and last_tool_output are connected together
if any(link.sink_name == "conversation_history" for link in links) != any(
link.sink_name == "last_tool_output" for link in links
):
raise ValueError(
"Last Tool Output is needed when Conversation History is used, "
"and vice versa. Please connect both inputs together."
)
return missing_links
@classmethod
@@ -216,8 +231,15 @@ class SmartDecisionMakerBlock(Block):
conversation_history = data.get("conversation_history", [])
pending_tool_calls = get_pending_tool_calls(conversation_history)
last_tool_output = data.get("last_tool_output")
if not last_tool_output and pending_tool_calls:
# Tool call is pending, wait for the tool output to be provided.
if last_tool_output is None and pending_tool_calls:
return {"last_tool_output"}
# No tool call is pending, wait for the conversation history to be updated.
if last_tool_output is not None and not pending_tool_calls:
return {"conversation_history"}
return set()
class Output(BlockSchema):
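The move from a truthiness test to `is None` fixes a real bug class: a tool that legitimately returns `0`, `""`, `False`, or `[]` used to look like a missing output, leaving the block waiting forever. A standalone rendition of the new gating logic:

```python
def awaiting_inputs(last_tool_output, pending_tool_calls) -> set[str]:
    # Old check (`if not last_tool_output`) conflated falsy-but-valid
    # results with "the tool has not answered yet".
    if last_tool_output is None and pending_tool_calls:
        return {"last_tool_output"}      # tool call pending, wait for output
    if last_tool_output is not None and not pending_tool_calls:
        return {"conversation_history"}  # wait for history to catch up
    return set()

print(awaiting_inputs(None, ["call_1"]))  # {'last_tool_output'} -> still waiting
print(awaiting_inputs([], ["call_1"]))    # set() -> empty list counts as a result
```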
@@ -247,7 +269,11 @@ class SmartDecisionMakerBlock(Block):
)
@staticmethod
def _create_block_function_signature(
def cleanup(s: str):
return re.sub(r"[^a-zA-Z0-9_-]", "_", s).lower()
@staticmethod
async def _create_block_function_signature(
sink_node: "Node", links: list["Link"]
) -> dict[str, Any]:
"""
@@ -266,38 +292,27 @@ class SmartDecisionMakerBlock(Block):
block = sink_node.block
tool_function: dict[str, Any] = {
"name": re.sub(r"[^a-zA-Z0-9_-]", "_", block.name).lower(),
"name": SmartDecisionMakerBlock.cleanup(block.name),
"description": block.description,
}
sink_block_input_schema = block.input_schema
properties = {}
required = []
for link in links:
sink_block_input_schema = block.input_schema
description = (
sink_block_input_schema.model_fields[link.sink_name].description
if link.sink_name in sink_block_input_schema.model_fields
and sink_block_input_schema.model_fields[link.sink_name].description
else f"The {link.sink_name} of the tool"
sink_name = SmartDecisionMakerBlock.cleanup(link.sink_name)
properties[sink_name] = sink_block_input_schema.get_field_schema(
link.sink_name
)
properties[link.sink_name.lower()] = {
"type": "string",
"description": description,
}
tool_function["parameters"] = {
"type": "object",
**block.input_schema.jsonschema(),
"properties": properties,
"required": required,
"additionalProperties": False,
"strict": True,
}
return {"type": "function", "function": tool_function}
@staticmethod
def _create_agent_function_signature(
async def _create_agent_function_signature(
sink_node: "Node", links: list["Link"]
) -> dict[str, Any]:
"""
@@ -319,37 +334,39 @@ class SmartDecisionMakerBlock(Block):
raise ValueError("Graph ID or Graph Version not found in sink node.")
db_client = get_database_manager_client()
sink_graph_meta = db_client.get_graph_metadata(graph_id, graph_version)
sink_graph_meta = await db_client.get_graph_metadata(graph_id, graph_version)
if not sink_graph_meta:
raise ValueError(
f"Sink graph metadata not found: {graph_id} {graph_version}"
)
tool_function: dict[str, Any] = {
"name": re.sub(r"[^a-zA-Z0-9_-]", "_", sink_graph_meta.name).lower(),
"name": SmartDecisionMakerBlock.cleanup(sink_graph_meta.name),
"description": sink_graph_meta.description,
}
properties = {}
required = []
for link in links:
sink_block_input_schema = sink_node.input_default["input_schema"]
sink_block_properties = sink_block_input_schema.get("properties", {}).get(
link.sink_name, {}
)
sink_name = SmartDecisionMakerBlock.cleanup(link.sink_name)
description = (
sink_block_input_schema["properties"][link.sink_name]["description"]
if "description"
in sink_block_input_schema["properties"][link.sink_name]
sink_block_properties["description"]
if "description" in sink_block_properties
else f"The {link.sink_name} of the tool"
)
properties[link.sink_name.lower()] = {
properties[sink_name] = {
"type": "string",
"description": description,
"default": json.dumps(sink_block_properties.get("default", None)),
}
tool_function["parameters"] = {
"type": "object",
"properties": properties,
"required": required,
"additionalProperties": False,
"strict": True,
}
@@ -357,7 +374,7 @@ class SmartDecisionMakerBlock(Block):
return {"type": "function", "function": tool_function}
@staticmethod
def _create_function_signature(node_id: str) -> list[dict[str, Any]]:
async def _create_function_signature(node_id: str) -> list[dict[str, Any]]:
"""
Creates function signatures for tools linked to a specified node within a graph.
@@ -379,13 +396,13 @@ class SmartDecisionMakerBlock(Block):
db_client = get_database_manager_client()
tools = [
(link, node)
for link, node in db_client.get_connected_output_nodes(node_id)
for link, node in await db_client.get_connected_output_nodes(node_id)
if link.source_name.startswith("tools_^_") and link.source_id == node_id
]
if not tools:
raise ValueError("There is no next node to execute.")
return_tool_functions = []
return_tool_functions: list[dict[str, Any]] = []
grouped_tool_links: dict[str, tuple["Node", list["Link"]]] = {}
for link, node in tools:
@@ -400,20 +417,20 @@ class SmartDecisionMakerBlock(Block):
if sink_node.block_id == AgentExecutorBlock().id:
return_tool_functions.append(
SmartDecisionMakerBlock._create_agent_function_signature(
await SmartDecisionMakerBlock._create_agent_function_signature(
sink_node, links
)
)
else:
return_tool_functions.append(
SmartDecisionMakerBlock._create_block_function_signature(
await SmartDecisionMakerBlock._create_block_function_signature(
sink_node, links
)
)
return return_tool_functions
def run(
async def run(
self,
input_data: Input,
*,
@@ -425,13 +442,14 @@ class SmartDecisionMakerBlock(Block):
user_id: str,
**kwargs,
) -> BlockOutput:
tool_functions = self._create_function_signature(node_id)
tool_functions = await self._create_function_signature(node_id)
yield "tool_functions", json.dumps(tool_functions)
input_data.conversation_history = input_data.conversation_history or []
prompt = [json.to_dict(p) for p in input_data.conversation_history if p]
pending_tool_calls = get_pending_tool_calls(input_data.conversation_history)
if pending_tool_calls and not input_data.last_tool_output:
if pending_tool_calls and input_data.last_tool_output is None:
raise ValueError(f"Tool call requires an output for {pending_tool_calls}")
# Prefill all missing tool calls with the last tool output/
@@ -465,6 +483,10 @@ class SmartDecisionMakerBlock(Block):
)
prompt.extend(tool_output)
if input_data.multiple_tool_calls:
input_data.sys_prompt += "\nYou may call tools, including the same tool, multiple times in a single response."
else:
input_data.sys_prompt += "\nProvide EXACTLY one function call; multiple tool calls are strictly prohibited."
values = input_data.prompt_values
if values:
@@ -483,7 +505,7 @@ class SmartDecisionMakerBlock(Block):
):
prompt.append({"role": "user", "content": prefix + input_data.prompt})
response = llm.llm_call(
response = await llm.llm_call(
credentials=credentials,
llm_model=input_data.model,
prompt=prompt,
@@ -491,6 +513,7 @@ class SmartDecisionMakerBlock(Block):
max_tokens=input_data.max_tokens,
tools=tool_functions,
ollama_host=input_data.ollama_host,
parallel_tool_calls=input_data.multiple_tool_calls,
)
if not response.tool_calls:
@@ -501,8 +524,31 @@ class SmartDecisionMakerBlock(Block):
tool_name = tool_call.function.name
tool_args = json.loads(tool_call.function.arguments)
for arg_name, arg_value in tool_args.items():
yield f"tools_^_{tool_name}_{arg_name}".lower(), arg_value
# Find the tool definition to get the expected arguments
tool_def = next(
(
tool
for tool in tool_functions
if tool["function"]["name"] == tool_name
),
None,
)
if (
tool_def
and "function" in tool_def
and "parameters" in tool_def["function"]
):
expected_args = tool_def["function"]["parameters"].get("properties", {})
else:
expected_args = tool_args.keys()
# Yield provided arguments and None for missing ones
for arg_name in expected_args:
if arg_name in tool_args:
yield f"tools_^_{tool_name}_~_{arg_name}", tool_args[arg_name]
else:
yield f"tools_^_{tool_name}_~_{arg_name}", None
response.prompt.append(response.raw_response)
yield "conversations", response.prompt

Some files were not shown because too many files have changed in this diff.